url: string, length 14 to 1.76k
text: string, length 100 to 1.02M
metadata: string, length 1.06k to 1.1k
https://www.transtutors.com/questions/the-management-of-cantell-corporation-is-considering-a-project-that-would-require-an-2870888.htm
# The management of Cantell Corporation is considering a project that would require an initial investment...

The management of Cantell Corporation is considering a project that would require an initial investment of $47,000. No other cash outflows would be required. The present value of the cash inflows would be $55,930. The profitability index of the project is closest to:

- 1.19
- 0.81
- 0.19
- 0.16
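Using the standard definition of the profitability index (present value of the cash inflows divided by the initial investment), the figures given in the question work out as follows; this check is not part of the original post.

$$\text{PI} = \frac{\text{PV of cash inflows}}{\text{initial investment}} = \frac{\$55{,}930}{\$47{,}000} \approx 1.19$$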
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27434414625167847, "perplexity": 2122.1461536372517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203529.38/warc/CC-MAIN-20190324230359-20190325012359-00516.warc.gz"}
http://math.stackexchange.com/questions/114646/looking-for-a-calculus-textbook
Looking for a Calculus Textbook

I want to start signal processing and I need a book that satisfies my mathematical requirements: I am in the third grade of high school and I don't know any useful thing about limits, differentials, ...

- Pun on Grad? – user21436 Feb 29 '12 at 1:16
- Are you looking for rigour or a more intuitive understanding of mathematical analysis? If you are looking for a more theoretical approach to analysis, then Apostol's Calculus Vols. 1 and 2 are probably the best place to start, assuming that as a third grader you have the right background for it. – Rankeya Feb 29 '12 at 1:27
- I prefer the intuitive-understanding one. Is it Apostol's Calculus? – reza Feb 29 '12 at 1:44
- The author is in the "third grade of high school", i.e. the OP is probably in the 8th or 9th grade of school. I am currently studying calculus from Apostol and I really like his gentle and yet rigorous style. – Eisen Feb 29 '12 at 6:12
- @SabyasachiMukherjee When I hear "third grade of high school" I can't think of anything other than 11th grade. – Alex Becker Feb 29 '12 at 7:35

As I said, for a rigorous and theoretical approach to calculus, Apostol's Calculus Vols. 1 and 2 are very good. Depending on your background, for multivariable calculus, Spivak's Calculus on Manifolds is also good. Spivak's Calculus (which does single-variable calculus) is also one of my favorites. I think I first learned calculus from Richard Courant's Introduction to Calculus and Analysis. I think Courant's and Robbins's What is Mathematics? also has good intuitive explanations of differentiation and integration. For a more intuitive book, and perhaps something that a third grader would have the background for, try Silvanus Thompson's and Martin Gardner's Calculus Made Easy. Of course, a book that worked for me might not work for you. I would suggest that you go to a library and browse through a number of different calculus books (there are a lot of them out there), till you find the one that appeals to you the most. If you really are in the third grade, then I would assume there is no real hurry for you to master calculus, and if there is, then the books above are a good place to start.

- That being said, I have no idea about signal processing, so my answer is more based on what good calculus books there are out there. – Rankeya Feb 29 '12 at 6:01
- I want to work on computer signal processing and image processing. Which one is better? – reza Feb 29 '12 at 14:22

Differential and Integral Calculus, Vol. I [Paperback], Piskunov (Author). Try this cover to cover, and if you finish it you will know more one-variable calculus than you will need.

I think one's first exposure to calculus, no matter how gifted or ambitious the student is, should be a physically and geometrically motivated approach that illustrates most of the important applications of calculus. Sadly, many people think that means a "pencil-pushing" or "cookbook" approach where things are done sloppily and with no careful explanation of the underlying theory. That's simply not true. You can certainly do calculus non-rigorously while still doing it carefully enough to give students the broad picture of the underlying theory. The best example of this kind of book, to me, is Gilbert Strang's Calculus.
Strang's emphasis is clearly on applications, and it has more applications than just about any other calculus text, including many kinds of differential equations in physics (mechanics), chemistry (first- and second-order kinetics), biology (modeling heart rhythm), and economics, plus a basic introduction to probability. But Strang doesn't avoid a proof when it's called for, and the book has many pictures to soften the blows of these careful proofs. This would be my first choice for a high school student just starting out with calculus.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45687758922576904, "perplexity": 678.2674941338063}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257828010.15/warc/CC-MAIN-20160723071028-00321-ip-10-185-27-174.ec2.internal.warc.gz"}
https://sciendo.com/pl/article/10.2478/amns.2022.2.0031
Journal information: Format: Journal; eISSN: 2444-8656; First published: 01 Jan 2016; Publication frequency: twice a year; Language: English; Open access

# Real Estate Economic Development Based on Logarithmic Growth Function Model

###### Accepted: 18 Mar 2022

This article uses a logarithmic growth model to analyze the correlation between the national economy and real estate. It reveals the Granger causality between the national economy and the real estate economy. The results show a long-term equilibrium relationship and a two-way Granger causality between real estate prices and economic growth. Excessive growth in real estate prices will create bubbles and will also drag economic growth backward.

Introduction

Economic change is one of the important driving factors of urbanization, and urbanization is a social consequence of economic development. The relationship between the level of urbanization and economic development is a common research topic in several disciplines, such as geography and economics [1]. For Western countries, most of these studies date from around the 1950s; in the 1980s, the focus of attention shifted to the urbanization and economic development of developing countries. China is currently in a period of rapid urbanization: industrialization is proceeding rapidly and the industrial structure is gradually being adjusted, so it is timely to study the relationship between urbanization and economic development. As early as the 1980s, Chinese scholars began to explore the relationship between urbanization and the level of economic development, and in recent years research results from various angles have been reported continuously.

The main functions of science are to explain and to predict, and mathematical modeling is an important way to realize these two basic functions. The main purpose of explanation is to reveal causality, while the purpose of prediction (including forecasting) is to judge unknown results. A simple and convenient approach among the many research methods is to find a suitable pair of measures, reveal the numerical relationship between them, and express it with a mathematical equation. This results in a concise and easy-to-understand mathematical model; the model is just the right simplification of the real world [2]. Around the 1980s, many scholars tried to establish relationship models between urbanization and economic development. The growth of the urbanization level has a fixed upper limit, while the level of economic development has no clear upper limit [3]. Therefore, the growth rate of urbanization corresponding to economic development will eventually become smaller and smaller. In this way, if per capita economic output is taken as the independent variable and the urbanization level as the dependent variable, establishing the functional relationship yields a curve that is convex in the second half of its arc. Many functions can give this kind of curve, including the logarithmic function, power-exponential function, hyperbolic function, two-type exponential function, logistic function, etc. This article intends to systematically summarize three mathematical models describing the relationship between urbanization and the level of economic development, reveal the hidden and previously unknown dynamic mechanisms behind them,
and at the same time clarify the application methods, scope of application, and interpretation and prediction effects of the different models.

Two basic measures

Measurement of urbanization level

Measurement is the basic connection between mathematical modeling and empirical analysis; a measure that is easy to understand makes it easier to establish a mathematical model. To this end, it is first necessary to explain a few basic measures of urbanization and economic development [4]. The commonly used measure of urbanization in a region is the urbanization level:

$$L = \frac{u}{P} = \frac{u}{u + r} \tag{1}$$

In formula (1), L represents the level of urbanization, u is the urban population, r is the rural population, and P = u + r is the total population. The customary way of expressing it is to multiply formula (1) by 100% and report a percentage (%).

An alternative measure equivalent to the level of urbanization is the urban-rural population ratio (URR). This is a dimensionless measure, defined as the ratio of the urban population u to the rural population r:

$$O = u/r \tag{2}$$

In formula (2), O represents the urban-rural ratio. It is easy to prove that the urban-rural ratio is a measure equivalent to the level of urbanization; the relationship between the two is a hyperbolic function [5]. Dividing the numerator and denominator of formula (1) by the rural population r gives:

$$L = \frac{u/r}{1 + u/r} = \frac{O}{1 + O} = \frac{1}{1 + 1/O} \tag{3}$$

$$\frac{1}{L} = 1 + \frac{1}{O} \tag{4}$$

The last formula represents a hyperbola with special parameters. In addition, the rate of change of the urbanization level can be used to reflect the speed of urbanization. The absolute speed is:

$$S = \frac{\Delta L}{\Delta t} = L_t - L_{t-1} \tag{5}$$

Correspondingly, the relative speed can be expressed as:

$$s = \frac{\Delta L}{L_{t-1}\,\Delta t} = \frac{L_t - L_{t-1}}{L_{t-1}} \tag{6}$$

The speed of urbanization can also be expressed in differential form.

Measurement of economic development level

There are two simple measures of the economic development level of a region. The first is the per capita gross regional product, that is, per capita GDP. The second is per capita national income, referred to as per capita income. GDP and national income are related but different: national income is obtained only after depreciation, indirect taxes, transfer payments, and government subsidies in GDP are accounted for [6]. An average statistical analysis based on the per capita GDP and per capita income of all countries globally shows that there is an allometric relationship between the two:

$$\text{per capita income} = \text{constant coefficient} \times (\text{per capita GDP})^{\text{scale index}} \tag{7}$$

The scale index is slightly greater than 1. Since the scale index in formula (7) is very close to 1, the relationship between per capita income and per capita GDP is approximately proportional. Therefore, if a certain mathematical equation is satisfied between the urbanization level and per capita income, per capita GDP can be used in place of per capita income [7]; the functional form of the equation remains unchanged.
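As a quick illustration of the measures defined in this section, the small Python sketch below (not from the paper; the population figures are invented) computes the urbanization level L, the urban-rural ratio O, and the absolute and relative speeds from formulas (1) through (6):

```python
def urbanization_level(u, r):
    """Urbanization level L = u / (u + r), formula (1)."""
    return u / (u + r)

def urban_rural_ratio(u, r):
    """Urban-rural ratio O = u / r, formula (2)."""
    return u / r

# Invented populations (millions) for two consecutive years
u_prev, r_prev = 800.0, 600.0
u_curr, r_curr = 830.0, 580.0

L_prev = urbanization_level(u_prev, r_prev)
L_curr = urbanization_level(u_curr, r_curr)

S = L_curr - L_prev             # absolute speed, formula (5), with a unit time step
s = (L_curr - L_prev) / L_prev  # relative speed, formula (6)

print(f"L = {L_curr:.3f}, O = {urban_rural_ratio(u_curr, r_curr):.3f}, S = {S:.4f}, s = {s:.4f}")
```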
Three mathematical models

Single logarithmic relationship: the logarithmic model

The first mathematical model of the relationship between the level of urbanization and the level of economic development is the logarithmic model, usually expressed as:

$$L = a\,\ln(x) - b \tag{8}$$

In formula (8), x represents per capita output value or per capita income, L represents the level of urbanization, and a and b are parameters greater than 0. There are many studies of this model in China and abroad, mainly based on cross-sectional data from countries worldwide, describing the relationship between the proportion of urban population and per capita output value.

Double logarithmic relationship: the power-exponent model

The second mathematical model of the relationship between the level of urbanization and the level of economic development is the power-exponent model, usually expressed as:

$$L = c x^d \tag{9}$$

In formula (9), c and d are parameters greater than zero; the other symbols are the same as in formula (8). Taking the logarithm of equation (9) gives the double logarithmic, linear relationship:

$$\ln(L) = \ln(c) + d\,\ln(x) = c' + d\,\ln(x) \tag{10}$$

The parameter c' = ln(c) in equation (10) is the intercept of the linear model. The power-exponent model is also commonly used to study the relationship between urbanization and economic development.

Logarithmic relationship: the Logistic model

This paper proposes the Logistic model to examine the relationship between China's per capita output value x and the urban-rural ratio O, starting from an exponential function:

$$O = C e^{kx} \tag{11}$$

In formula (11), C and k are parameters. Substituting equation (11) into equation (3) immediately gives the Logistic function:

$$L = \frac{1}{1 + (1/C)\,e^{-kx}} = \frac{1}{1 + A e^{-kx}} \tag{12}$$

In formula (12), A = 1/C is the transformed parameter. Equation (11) is theoretically equivalent to equation (12). In fact, O = L/(1 − L) can be obtained from formula (4). Substituting it into equation (11) and taking the logarithm of both sides gives:

$$\ln O = \ln\!\left(\frac{L}{1 - L}\right) = B + kx \tag{13}$$

In formula (13), B = ln(C) is the transformed parameter. This means that formula (11) is not a general exponential model; it is a special logarithmic model, because the urban-rural ratio is essentially a probability ratio: it reflects the ratio of the probability of a person being urban to the probability of being rural. It can be seen that the Logistic model of the relationship between urbanization level and per capita output value is equivalent to a logarithmic model [8].

Fitting formula (11) to the urban-rural ratio and per capita GDP data of 31 regions (provinces, autonomous regions, and municipalities) of China in 2020 gives:

$$z = 0.319\,e^{0.424x} \tag{14}$$

In formula (14), z is the estimated value of the urban-rural ratio O. The goodness of fit is R² = 0.878 (Figure 1). The above formula can be equivalently expressed as:

$$y = \frac{1}{1 + 1/z} = \frac{1}{1 + 3.13\,e^{-0.424x}} \tag{15}$$

Furthermore, the model parameters for 2012 and 2017~2019 can be calculated; the model parameters from 2013 to 2016 were obtained by interpolation (Table 1). The results show that the estimated value of the model scale coefficient C increases year by year, while the estimated value of the parameter k, which reflects the rate of change, decreases year by year [9].
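The sketch below (not from the paper) codes the three relationships as Python functions and evaluates the fitted 2020 logistic relation from formulas (14) and (15); the x values fed to it are made up for illustration, and the paper does not state the GDP units used in the fit.

```python
import numpy as np

def log_model(x, a, b):
    """Single-logarithmic model, formula (8): L = a ln(x) - b (a, b must be fitted to data)."""
    return a * np.log(x) - b

def power_model(x, c, d):
    """Power-exponent (allometric) model, formula (9): L = c x^d (c, d must be fitted to data)."""
    return c * x**d

def logistic_model(x, C=0.319, k=0.424):
    """Logistic model, formulas (11)-(12), with the fitted 2020 parameters:
    O = C exp(k x), hence L = O / (1 + O) = 1 / (1 + (1/C) exp(-k x))."""
    O = C * np.exp(k * x)
    return O / (1.0 + O)

# Illustrative per capita GDP values (units unspecified by the paper)
for x in (1.0, 3.0, 6.0):
    print(f"x = {x}: predicted urbanization level {logistic_model(x):.3f}")
```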
Testing the single logarithmic and double logarithmic models on the same data gives a relatively low goodness of fit (see Table 1). Generally speaking, the cross-sectional data of the various provinces of China are best described by the Logistic function, followed by the logarithmic function, and then the power-exponential function.

Table 1. Model parameters of the relationship between urban-rural ratio and per capita GDP in various regions of China (2012~2020).

| Year | C | k | R² (logarithmic/Logistic) | R² (single logarithm) | R² (double logarithm) |
|------|-------|-------|-------|-------|-------|
| 2012 | 0.255 | 1.078 | 0.851 | 0.818 | 0.776 |
| 2013 | 0.296 | 0.837 | - | - | - |
| 2014 | 0.307 | 0.738 | - | - | - |
| 2015 | 0.304 | 0.681 | - | - | - |
| 2016 | 0.301 | 0.630 | - | - | - |
| 2017 | 0.311 | 0.618 | 0.898 | 0.849 | 0.840 |
| 2018 | 0.321 | 0.542 | 0.899 | 0.847 | 0.844 |
| 2019 | 0.329 | 0.467 | 0.899 | 0.847 | 0.846 |
| 2020 | 0.319 | 0.424 | 0.878 | 0.833 | 0.801 |

Comparison of dynamic mechanisms

The mathematical model of a human geography system is not unique. Different system evolution conditions lead to different models, and different models reflect different dynamic mechanisms. The key lies in the purpose of these models. The purpose of modeling urbanization and economic development is to explain theoretically and to predict in practice (Table 2). However, to use these models effectively for interpretation and prediction, it is necessary to reveal the meaning of the model parameters and the dynamic mechanisms behind the models.

Table 2. Main uses of the urbanization-economic development level relationship model.

| Modeling purpose | Details |
|------------------|---------|
| Explain | (1) Reveal the causal relationship between urbanization and economic development (who determines whom); (2) determine whether urbanization and industrial development are in harmony (whether urbanization lags behind industrialization); (3) understand the dynamic mechanism of urbanization and economic development (what the control variables are) |
| Predict | (1) Given the per capita income of a region, estimate the level of urbanization of the region; (2) given the urbanization level of a region, estimate the per capita income of the region |

First, consider the single logarithmic model. Taking the derivative of equation (8) gives the rate of change of the urbanization level with respect to per capita output value:

$$dL/dx = a/x \tag{16}$$

This shows that for every unit increase in per capita output value, urbanization increases by a/x units, where a is a constant and x is the per capita output value. This has two implications. First, the variable controlling the rate of change dL/dx is the per capita output value, which represents the level of economic development [10]. Second, as the level of economic development rises, the impact of output value on the speed of urbanization becomes smaller and smaller. Longitudinal analysis of early time series of per capita output value finds a significant impact on urbanization, and horizontal analysis of spatial sequences finds that when the level of economic development is low, per capita output value significantly impacts the level of urbanization. This is growth that is fast at first and then slow: economic variables control the impact of the economic development level on urbanization. A dynamic inversion of equation (8) gives the dynamic model:

$$\begin{cases} dL(t)/dt = \alpha L_0 \\ dx(t)/dt = \beta x(t) \end{cases} \tag{17}$$
In formula (17), α and β are constant coefficients and L_0 is a constant related to the level of urbanization. The logarithmic model can hold only when the level of urbanization changes linearly while per capita output value grows exponentially.

What the power-exponent model reflects is an allometric growth relationship. Taking the derivative of equation (9) gives the rate of change of the urban population proportion with respect to the level of economic development:

$$dL/dx = \gamma L/x \tag{18}$$

From this, the allometric coefficient is obtained:

$$\gamma = \frac{dL/L}{dx/x} \tag{19}$$

The allometric growth coefficient of geography and biology is also the elasticity coefficient of economics. 1) The rate of change of the urbanization level is directly proportional to the urbanization level; the level of urbanization and the level of economic development jointly control the rate of change. 2) The ratio of the relative growth rate of the urbanization level to the relative growth rate of per capita output value is a constant. This is the basic meaning of allometric growth, and this constant is the allometric coefficient γ. 3) Because the level of urbanization has a definite upper limit while per capita output value has no fixed upper limit, the level of urbanization changes more slowly than per capita output value, so the rate of change of the urbanization level with respect to per capita output value is again fast first and then slow. A dynamic reduction of equation (9) gives the differential equation system:

$$\begin{cases} dL(t)/dt = \varphi L(t) \\ dx(t)/dt = \psi x(t) \end{cases} \tag{20}$$

In formula (20), φ and ψ are constant coefficients. It can be seen that the power model can hold only when the level of urbanization and per capita output value both grow exponentially.

Finally, the Logistic model is examined. Taking the derivative of equation (12) gives a Bernoulli differential equation of degree two:

$$dL/dx = kL(1 - L) \tag{21}$$

Since the parameter k > 0, the rate of change is a parabola that opens downwards. 1) The level of urbanization controls the impact of economic development on the level of urbanization. 2) The growth rate is slow at both ends and fast in the middle: the rate of change at the beginning and end of the time series is low, and the rate of change in the middle is large. When L = 1 − L, that is, when L = 1/2, the rate of change reaches its maximum value [11]. In other words, the level of urbanization is most sensitive to the level of economic development when it reaches half of the saturation value; the regions with the lowest and the highest levels of urbanization and economic development are not sensitive, and only in the middle range does urbanization respond most strongly to per capita output value. A dynamic inversion of equation (12) gives a system of nonlinear differential equations:

$$\begin{cases} dL(t)/dt = \eta L(t)\,[1 - L(t)] \\ dx(t)/dt = \mu x_0 \end{cases} \tag{22}$$

In formula (22), η and μ are constant coefficients and x_0 is a constant related to per capita output value. The first equation implies that the level of urbanization follows a logistic curve in the time direction; the second shows that per capita output value grows linearly. This means that logistic growth of the urbanization level in the time direction is the premise for the logistic relationship between urbanization and the corresponding economic level.
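As a one-line check (not in the paper, but it follows directly from the definitions above), differentiating the logistic form (12) does reproduce equation (21):

$$\frac{dL}{dx} = \frac{kA e^{-kx}}{\left(1 + A e^{-kx}\right)^{2}} = k \cdot \frac{1}{1 + A e^{-kx}} \cdot \frac{A e^{-kx}}{1 + A e^{-kx}} = kL(1 - L).$$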
In addition to empirical statistical criteria, model selection also has a theoretical symmetry criterion. If a mathematical model is invariant under a certain transformation, it has a kind of symmetry, and the more symmetric the model, the more widely applicable it is. The inverse function of the logarithmic model has translation invariance, the power-exponential model has scale invariance, and the Logistic model has translation invariance; they are all symmetric models [12]. The main points of the comparative analysis of the models are therefore summarized in Table 3.

Table 3. Comparison of the characteristics of the three urbanization-economic development level relationship models.

| | Logarithmic model | Power-exponent model | Logistic model |
|---|---|---|---|
| Relation | Single logarithm | Double logarithm | Logarithm |
| Variation characteristics | From fast to slow (fast first, then slow) | From fast to slow (fast first, then slow) | Fast in the middle, slow at both ends |
| Law of change | Characteristic scale | No characteristic scale | Characteristic scale |
| Control variable of the rate of change | Per capita output value (economic variable) | Per capita output value and urbanization level (two variables) | Urbanization level (urban variable) |
| Dynamic characteristics | Urbanization level changes linearly; per capita output value grows exponentially | Both urbanization level and per capita output value grow exponentially | Logistic growth of urbanization level; linear growth of per capita output value |
| Symmetry | Translational symmetry | Scale symmetry | Translational symmetry |

Discussion

1) The logarithmic model is suitable for areas dominated by economic variables; it reveals a single logarithmic, linear relationship. If this model represents the situation of a region, the rate of change of the urbanization level with respect to the level of economic development goes from fast to slow, and with the development of the social economy the impact of per capita output value on urbanization becomes weaker.

2) The power-exponent model is suitable for areas where economic variables and urban variables are balanced. What it reveals is a double logarithmic, linear relationship. If this model represents the situation of a region, the rate of change of the urbanization level with respect to the level of economic development also goes from fast to slow; the rate of change is slower than in the logarithmic model, and the change curve has no characteristic scale. With the development of the social economy, the impact of per capita output value on urbanization becomes weaker.

3) The Logistic model is suitable for areas dominated by urban variables. What it reveals is a logarithmic, linear relationship. If this model represents the situation of a region, the rate of change of the urbanization level with respect to the level of economic development is slow at both ends and fast in the middle, and the change curve has a characteristic scale. The impact of per capita output value on urbanization is relatively weak in the initial stage and gradually strengthens.

Conclusion

The three models share the same purpose: modeling the relationship between urbanization and the level of economic development. When the objects described by these models have different background conditions, the model structures reflect different dynamic characteristics.
The logarithmic model and the power-exponent model are the results of predecessors, while the Logistic model is the innovation of this article; the analysis and comparison of the dynamic mechanisms of the different models is also a main innovation of the article.

References

Qiang, Q. Analysis of debt-paying ability of real estate enterprises based on fuzzy mathematics and K-means algorithm. Journal of Intelligent & Fuzzy Systems, 2019; 37(5):6403–6414. doi:10.3233/JIFS-179219

Berawi, M. A., Miraj, P., Saroji, G., & Sari, M. Impact of rail transit station proximity to commercial property prices: utilizing big data in urban real estate. Journal of Big Data, 2020; 7(1):1–17. doi:10.1186/s40537-020-00348-z

Vaishampayan, S., Dev, M., & Patel, U. Hedonic pricing model for impact of infrastructure facilities on land rates: case study of Kudi Bhagtasni, Jodhpur, India. International Journal of Sustainable Real Estate and Construction Economics, 2021; 2(1):103–115. doi:10.1504/IJSRECE.2021.118126
Tsay, J. T., Yang, J. T., & Lin, C. C. Pricing model for real estate in Taiwan: adjusted matching approach. International Journal of Agriculture Innovation, Technology and Globalisation, 2020; 1(3):207–225. doi:10.1504/IJAITG.2020.106015

Mokhtar, M., Yusoff, S., & Samsuddin, M. D. Modelling an Economic Relationship for Macroeconomic Determinants of Malaysian Housing Prices. Advances in Business Research International Journal, 2021; 7(1):242–251.

Ulbl, M., Verbič, M., Lisec, A., & Pahor, M. Proposal of real estate mass valuation in Slovenia based on generalised additive modelling approach. Geodetski vestnik, 2021; 65(1):46–64. doi:10.15292/geodetski-vestnik.2021.01.46-81

Khan, R. A., & Mitra, M. Estimation issues in the Exponential–Logarithmic model under hybrid censoring. Statistical Papers, 2021; 62(1):419–450. doi:10.1007/s00362-019-01100-3

Shen, S. Empirical Research on the Impact of Real Estate on Economic Development. Journal of Mathematical Finance, 2021; 11(2):246–254. doi:10.4236/jmf.2021.112014

Mostofi, F., Toğan, V., & Başağa, H. B. House price prediction: A data-centric aspect approach on performance of combined principal component analysis with deep neural network model. Journal of Construction Engineering, 2021; 4(2):106–116. doi:10.31462/jcemi.2021.02106116

Chen, Y. J., & Hsu, C. K. Comparison of Housing Price Elasticities Resulting from Different Types of Multimodal Rail Stations in Kaohsiung, Taiwan. International Real Estate Review, 2020; 23(3):1043–1058. doi:10.53383/100308

Li, T. & Yang, W. Solution to Chance Constrained Programming Problem in Swap Trailer Transport Organisation based on Improved Simulated Annealing Algorithm. Applied Mathematics and Nonlinear Sciences, 2020; 5(1):47–54. doi:10.2478/amns.2020.1.00005
Wang, Y. & Chen, Y. Evaluation Method of Traffic Safety Maintenance of High-Grade Highway. Applied Mathematics and Nonlinear Sciences, 2021; 6(1):65–80. doi:10.2478/amns.2021.1.00007
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6687972545623779, "perplexity": 1840.9330146103498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00567.warc.gz"}
https://www.physicsforums.com/threads/gaussean-beam.133829/
# Gaussian beam

1. Sep 27, 2006

### Quasi Particle

Hello. I've got a Gaussian beam, which is collimated with a diameter of 2 mm, and a wavelength of 1112 nm. I need to focus it to a beam waist of 25 µm, but the lens with the smallest focal length I have is still f = 300 mm, so I need to build a telescope:

---collimated beam (w0)-----|lens f)-----beam with w'0-----|lens f')-----beam with w"0-----

I found the following formulae:

Focusing a collimated beam with a lens:
$$w'_{0}=\frac{\lambda}{\pi w_{0}} f$$
where $$w'_0$$ is the new and $$w_0$$ is the old beam waist.

Focusing a beam with the lens placed in the waist of the original beam:
$$w''_0=\frac{w'_0}{\sqrt{1+{\left(\frac{\pi {w'_0}^2}{\lambda f'}\right)}^2}}$$
where w"0 is now the second beam waist.

From the second formula I wanted to calculate $$w'_0$$ so I can determine f (when f' = 300), but ended up with
$$w'_0=\sqrt{\sqrt{\frac{{w'_0}^2}{a^2}+\frac{1}{4a^4}}-\frac{1}{2a^2}}$$
with $$a=\frac{\pi}{\lambda f}$$.

Is that right? It looks quite strange and I get something on the order of 10e-10, which seems very small for a beam waist. Is it the right ansatz in the first place? The aim is to calculate how to make the telescope, i.e. what focal length and distance the first lens must have.

This is a real problem btw., but it's so much like textbook problems that I posted it here. Any help is appreciated; I'd be happy to be pointed in the right direction or given some material on this. I searched the forum and the web without much success, but maybe I've been looking in the wrong places. Thanks in advance for taking the time =D

____
EDIT: I don't seem to be able to make latex display some of the formulae. Sorry for the inconvenience.

Last edited: Sep 28, 2006
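For a rough numerical sanity check (not from the thread), plugging the poster's numbers into the collimated-beam focusing formula quoted above shows why a single f = 300 mm lens is not enough, and what focal length that same formula would call for to reach 25 µm directly:

```python
import math

wavelength = 1112e-9   # m, from the post
w0 = 1e-3              # m, input waist (2 mm collimated beam diameter -> 1 mm radius)
f = 300e-3             # m, shortest focal length available

# Waist obtained by focusing the collimated beam: w0' = lambda * f / (pi * w0)
w0_prime = wavelength * f / (math.pi * w0)
print(f"waist with the f = 300 mm lens: {w0_prime * 1e6:.0f} um")      # about 106 um

# Focal length that the same formula would require for a 25 um waist
target = 25e-6
f_needed = math.pi * w0 * target / wavelength
print(f"focal length needed for a 25 um waist: {f_needed * 1e3:.0f} mm")  # about 71 mm
```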
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475789666175842, "perplexity": 782.4733581799364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824471.6/warc/CC-MAIN-20171020230225-20171021010225-00584.warc.gz"}
https://tex.stackexchange.com/questions/180196/curly-brackets-in-table-cell-for-language-manual
# Curly brackets in table cell for language manual

I am trying to make a table for a language manual. Could anybody help me to manage curly brackets in the cells of a table? Like this:

I tried to do it like here, but I couldn't manage it.

A solution with blkarray allows the braces to be vertically aligned:

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{tabularx, blkarray}
\newcommand\colhead[1]{\multicolumn{1}{>{$}c<{$}}{#1}}
\usepackage{eqparbox}
\begin{document}
\begin{tabular}{|lp{\eqboxwidth{C}}@{}|}
\hline
 & \\
\begin{blockarray}{l}
\begin{block}{l\}} % block preamble assumed: one left-aligned column closed by a right brace
I\\
You\\
We\\
They\\
\end{block}\\[-2ex]
\begin{block}{l\}}
She\\
He\\
It\\
\end{block}
\end{blockarray} & \\
\hline
\end{tabular}
\end{document}

An example with \right\} for the curly brace and a table inside a table:

\documentclass{article}
\begin{document}
\renewcommand*{\arraystretch}{1.2}
\begin{tabular}{|l|}
\hline
$\kern-\nulldelimiterspace\left.
\begin{tabular}{@{}l@{}}
I\\
You\\
We\\
They
\end{tabular}\right\}$ read \\
$\kern-\nulldelimiterspace\left.
\begin{tabular}{@{}l@{}}
He\\
She\\
It
\end{tabular}\right\}$ reads \\
\hline
\end{tabular}
\end{document}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6341267824172974, "perplexity": 3256.0592848400847}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363515.28/warc/CC-MAIN-20211208144647-20211208174647-00401.warc.gz"}
http://www.nag.com/numeric/CL/nagdoc_cl23/html/F11/f11mlc.html
# NAG Library Function Document: nag_superlu_matrix_norm (f11mlc)

## 1  Purpose

nag_superlu_matrix_norm (f11mlc) computes the $1$-norm, the $\infty$-norm or the maximum absolute value of the elements of a real, square, sparse matrix which is held in compressed column (Harwell–Boeing) format.

## 2  Specification

#include <nag.h>
#include <nagf11.h>

void nag_superlu_matrix_norm (Nag_NormType norm, double *anorm, Integer n, const Integer icolzp[], const Integer irowix[], const double a[], NagError *fail)

## 3  Description

nag_superlu_matrix_norm (f11mlc) computes various quantities relating to norms of a real, sparse $n$ by $n$ matrix $A$ presented in compressed column (Harwell–Boeing) format.

## 4  References

None.

## 5  Arguments

1: norm – Nag_NormType (Input)

On entry: specifies the value to be returned in anorm.

norm = Nag_RealOneNorm: the $1$-norm $\|A\|_1$ of the matrix is computed, that is $\max_{1\le j\le n} \sum_{i=1}^{n} |A_{ij}|$.

norm = Nag_RealInfNorm: the $\infty$-norm $\|A\|_{\infty}$ of the matrix is computed, that is $\max_{1\le i\le n} \sum_{j=1}^{n} |A_{ij}|$.

norm = Nag_RealMaxNorm: the value $\max_{1\le i,j\le n} |A_{ij}|$ (not a norm).

Constraint: norm = Nag_RealOneNorm, Nag_RealInfNorm or Nag_RealMaxNorm.

2: anorm – double * (Output)

On exit: the computed quantity relating to the matrix.

3: n – Integer (Input)

On entry: $n$, the order of the matrix $A$. Constraint: n ≥ 0.

4: icolzp[dim] – const Integer (Input)

Note: the dimension, dim, of the array icolzp must be at least n + 1.

On entry: icolzp[i−1] contains the index in $A$ of the start of a new column. See Section 2.1.3 in the f11 Chapter Introduction.

5: irowix[dim] – const Integer (Input)

Note: the dimension, dim, of the array irowix must be at least icolzp[n] − 1, the number of nonzeros of the sparse matrix $A$.

On entry: the row index array of the sparse matrix $A$.

6: a[dim] – const double (Input)

Note: the dimension, dim, of the array a must be at least icolzp[n] − 1, the number of nonzeros of the sparse matrix $A$.

On entry: the array of nonzero values in the sparse matrix $A$.

7: fail – NagError * (Input/Output)

The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_ALLOC_FAIL

Dynamic memory allocation failed.

NE_BAD_PARAM

On entry, argument number ⟨value⟩ had an illegal value.

On entry, argument ⟨value⟩ had an illegal value.

NE_INT

On entry, n = ⟨value⟩. Constraint: n ≥ 0.

NE_INTERNAL_ERROR

An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

## 7  Accuracy

Not applicable.

## 8  Further Comments

None.

## 9  Example

This example computes norms and the maximum absolute value of the matrix $A$, where

$$A = \begin{pmatrix} 2.00 & 1.00 & 0 & 0 & 0 \\ 0 & 0 & 1.00 & -1.00 & 0 \\ 4.00 & 0 & 1.00 & 0 & 1.00 \\ 0 & 0 & 0 & 1.00 & 2.00 \\ 0 & -2.00 & 0 & 0 & 3.00 \end{pmatrix}.$$

### 9.1  Program Text

Program Text (f11mlce.c)

### 9.2  Program Data

Program Data (f11mlce.d)

### 9.3  Program Results

Program Results (f11mlce.r)
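As a worked illustration (not part of the NAG document), the three quantities for the example matrix above follow directly from the definitions in Section 5: the largest absolute column sum, the largest absolute row sum, and the largest absolute entry.

$$\|A\|_1 = \max_j \sum_i |A_{ij}| = \max(6,\,3,\,2,\,2,\,6) = 6, \qquad \|A\|_\infty = \max_i \sum_j |A_{ij}| = \max(3,\,2,\,6,\,3,\,5) = 6, \qquad \max_{i,j}|A_{ij}| = 4.$$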
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 39, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971848726272583, "perplexity": 4285.576197115626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678703030/warc/CC-MAIN-20140313024503-00052-ip-10-183-142-35.ec2.internal.warc.gz"}
https://slave2.omega.jstor.org/stable/j.ctt7rkjw
# A Primer on Mapping Class Groups (PMS-49)

Benson Farb and Dan Margalit

Pages: 488

https://www.jstor.org/stable/j.ctt7rkjw

1. Front Matter (pp. i-vi)

2. Table of Contents (pp. vii-x)

3. Preface (pp. xi-xii), Benson Farb and Dan Margalit

4. Acknowledgments (pp. xiii-xvi)

5. Overview (pp. 1-14)

6. ### PART 1. MAPPING CLASS GROUPS

• Chapter One. Curves, Surfaces, and Hyperbolic Geometry (pp. 17-43)

A linear transformation of a vector space is determined by, and is best understood by, its action on vectors. In analogy with this, we shall see that an element of the mapping class group of a surface S is determined by, and is best understood by, its action on homotopy classes of simple closed curves in S. We therefore begin our study of the mapping class group by obtaining a good understanding of simple closed curves on surfaces. Simple closed curves can most easily be studied via their geodesic representatives, and so we begin with the fact that every surface...

• Chapter Two. Mapping Class Group Basics (pp. 44-63)

In this chapter we begin our study of the mapping class group of a surface. After giving the definition, we compute the mapping class group in essentially all of the cases where it can be computed directly. This includes the case of the disk, the annulus, the torus, and the pair of pants. An important method, which we call the Alexander method, emerges as a tool for such computations. It answers the fundamental question: how can one prove that a homeomorphism is or is not homotopically trivial? Equivalently, how can one decide when two homeomorphisms are homotopic or not? Let...

• Chapter Three. Dehn Twists (pp. 64-88)

In this chapter we study a particular type of mapping class called a Dehn twist. Dehn twists are the simplest infinite-order mapping classes in the sense that they have representatives with the smallest possible supports. Dehn twists play the role for mapping class groups that elementary matrices play for linear groups. We begin by defining Dehn twists in S and proving that they have infinite order in Mod(S). We determine many of the basic properties of Dehn twists by studying their action on simple closed curves. As one consequence, we compute the center of Mod(S). At the end of the...

• Chapter Four. Generating the Mapping Class Group (pp. 89-115)

Is there a way to generate all (homotopy classes of) homeomorphisms of a surface by compositions of simple-to-understand homeomorphisms? We have already seen that Mod(T2) is generated by the Dehn twists about the latitude and longitude curves. Our next main goal will be to prove the following result. THEOREM 4.1 (Dehn–Lickorish theorem). For g ≥ 0, the mapping class group Mod(Sg) is generated by finitely many Dehn twists about nonseparating simple closed curves. Theorem 4.1 can be likened to the theorem that for each n ≥ 2 the group SL(n, $\mathbb{Z}$) can be generated by finitely many elementary matrices...

• Chapter Five. Presentations and Low-dimensional Homology (pp. 116-161)

Having found a finite set of generators for the mapping class group, we now begin to focus on relations. Indeed, one of our main goals in this chapter is to give a finite presentation for Mod(S). In doing so, we will see some beautiful topological ideas, as well as some useful techniques from geometric group theory. The relations in a group G are intimately related to the first and second homology groups of G. Recall that the homology groups of G are defined to be the homology groups of any K(G, 1)-space. The first and second homology groups have direct,...
• Chapter Six. The Symplectic Representation and the Torelli Group (pp. 162-199)

One of the fundamental aspects of Mod(Sg) is its action on H1(Sg; $\mathbb{Z}$). The representation Ψ : Mod(Sg) → Aut(H1(Sg; $\mathbb{Z}$)) is like a first linear approximation to Mod(Sg), and we can try to transfer our knowledge of the linear group Aut(H1(Sg; $\mathbb{Z}$)) to the group Mod(Sg). As we show in Section 6.1, the algebraic intersection number on H1(Sg; $\mathbb{R}$) gives this vector space a symplectic structure. This symplectic structure is preserved by the image of Ψ, and so Ψ can be thought of as a representation $\Psi :{\text{Mod}}(S_g ) \to {\text{Sp}}(2g,\mathbb{Z})$ into the integral symplectic group. The homomorphism Ψ is called the symplectic...

• Chapter Seven. Torsion (pp. 200-218)

In this chapter we investigate finite subgroups of the mapping class group. After explaining the distinction between finite-order mapping classes and finite-order homeomorphisms, we then turn to the problem of determining what is the maximal order of a finite subgroup of Mod(Sg). We will show that, for g ≥ 2, finite subgroups have order at most 84(g − 1) and cyclic subgroups have order at most 4g + 2. We will also see that there are finitely many conjugacy classes of finite subgroups in Mod(S). At the end of the chapter, we prove that Mod(Sg) is generated by finitely many...

• Chapter Eight. The Dehn–Nielsen–Baer Theorem (pp. 219-238)

The Dehn–Nielsen–Baer theorem states that Mod(Sg) is isomorphic to an index 2 subgroup of the group Out(π1(Sg)) of outer automorphisms of π1(Sg). This is a beautiful example of the interplay between topology and algebra in the mapping class group. It relates a purely topological object, Mod(Sg), to a purely algebraic one, Out(π1(Sg)). Further, these are related via hyperbolic geometry! We begin by defining the objects in the statement of the theorem. Extended mapping class group. Let S be a surface without boundary. The extended mapping class group, denoted Mod±(S), is the group of isotopy classes of all...

• Chapter Nine. Braid Groups (pp. 239-260)

In this chapter we give a brief introduction to Artin's classical braid groups Bn. While Bn is just a special kind of mapping class group, namely, that of a multipunctured disk, the study of Bn has its own special flavor. One reason for this is that multipunctured disks can be embedded in the plane, so that elements of Bn lend themselves to specialized kinds of pictorial representations. The notion of a mathematical braid is quite natural and classical. For instance, this concept appeared in Gauss's study of knots in the early nineteenth century (see [182]) and in Hurwitz's 1891 paper...

7. ### PART 2. TEICHMÜLLER SPACE AND MODULI SPACE

• Chapter Ten. Teichmüller Space (pp. 263-293)

This chapter introduces another main player in our story: the Teichmüller space Teich(S) of a surface S. For g ≥ 2, the space Teich(Sg) parameterizes all hyperbolic structures on Sg up to isotopy. After defining a topology on Teich(S), we give a few heuristic arguments for computing its dimension. The length and twist parameters of Fenchel and Nielsen are then introduced in order to prove that Teich(Sg) is homeomorphic to $\mathbb{R}^{6g - 6}$. At the end of the chapter, we prove the 9g − 9 theorem, which tells us that a hyperbolic structure on Sg is completely determined by the lengths...

• Chapter Eleven. Teichmüller Geometry (pp. 294-341)

Teichmüller space Teich(S) was defined in Chapter 10 as the space of hyperbolic structures on the surface S modulo isotopy.
But Teich(S) parameterizes other important structures as well, for example, complex structures on S modulo isotopy and conformal classes of metrics on S up to isotopy. We would like to have a way to compare different complex or conformal structures on S to each other. A natural way to do this is to search for a quasiconformal homeomorphism f : S → S that is homotopic to the identity map and that has the smallest possible quasiconformal dilatation with respect...

• Chapter Twelve. Moduli Space (pp. 342-364)

The moduli space of Riemann surfaces is one of the fundamental objects of mathematics. It is ubiquitous, appearing as a basic object in fields from low-dimensional topology to algebraic geometry to mathematical physics. The moduli space $\mathcal{M}(S)$ parameterizes, among other things: isometry classes of hyperbolic structures on S, conformal classes of Riemannian metrics on S, biholomorphism classes of complex structures on S, and isomorphism classes of smooth algebraic curves homeomorphic to S. We will access $\mathcal{M}(S)$ as the quotient of Teich(S) by an action of Mod(S). A key result of this chapter is the theorem (due to Fricke)...

8. ### PART 3. THE CLASSIFICATION AND PSEUDO-ANOSOV THEORY

• Chapter Thirteen. The Nielsen–Thurston Classification (pp. 367-389)

In this chapter we explain and prove one of the central theorems in the study of mapping class groups: the Nielsen–Thurston classification of elements of Mod(S). This theorem is the analogue of the Jordan canonical form for matrices. It states that every $f \in {\text{Mod}}(S)$ is one of three special types: periodic, reducible, or pseudo-Anosov. The knowledge of individual mapping classes is essential to our understanding of the algebraic structure of Mod(S). As we will soon explain, it is also essential for our understanding of the geometry and topology of many 3-dimensional manifolds. We begin this chapter with a classification of...

• Chapter Fourteen. Pseudo-Anosov Theory (pp. 390-423)

The power of the Nielsen–Thurston classification is that it gives a simple criterion for an element $f \in {\text{Mod}}(S)$ to be pseudo-Anosov: f is neither finite order nor reducible. This fact, however, is only as useful as the depth of our knowledge of pseudo-Anosov homeomorphisms. The purpose of this chapter is to study pseudo-Anosov homeomorphisms: their construction, their algebraic properties, and their dynamical properties. Anosov maps of the torus. An Anosov homeomorphism of the torus T2 is a linear representative of an Anosov mapping class. As discussed in Section 13.1, an Anosov homeomorphism φ : T2 → T2 has an associated Anosov...

• Chapter Fifteen. Thurston's Proof (pp. 424-446)

In this chapter we give some indication of how Thurston originally discovered the Nielsen–Thurston classification theorem. We begin with a concrete, accessible example that illustrates much of the general theory. We then provide a sketch of how that general theory works. Our goal is not to give a formal treatment as per the rest of the text. Rather, we hope to convey to the reader part of the beautiful circle of ideas surrounding the Nielsen–Thurston classification, including Teichmüller's theorems, Markov partitions, train tracks, foliations, laminations, and more. We start by giving an in-depth analysis of a fundamental and...

9. Bibliography (pp. 447-464)

10. Index (pp. 465-472)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7942729592323303, "perplexity": 775.0193499881262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517048.78/warc/CC-MAIN-20210622093910-20210622123910-00631.warc.gz"}
https://xianblog.wordpress.com/2011/09/
Archive for September, 2011

Posted in Running, Travel on September 30, 2011 by xi'an

It seems that every Sunday I run in Central Park, I am doomed to hit a race! This time it was not the NYC half-marathon (and I did not see Paula Radcliffe as she was in Berlin) but an 18-mile race in preparation for the NYC marathon. I had completed my fartlek training of 6x4mn and was recovering from an anaerobic last round when I saw some runners coming, so went with them as a recuperation jog for a mile or so. They had done the first 4 miles in 27'28", which corresponds to a 4'16" pace per kilometer, so I must have missed the top runners. Actually, I think the first runners were at least 4 minutes faster, as they were coming when I left for the last 4mn. (But it was good for recovery!) Checking on the webpage of the race, the winner finished in 1:37'45", which gives a marathon time of 2:21'40" unless I am confused.

chance meeting

Posted in Running on September 29, 2011 by xi'an

As I was having my last training session at 6:45 this morning around the local park, I crossed a runner and said my customary "Bonjour !" (which is rarely answered, by the way), getting a surprising "Salut Christian !" answer as the other runner was Aurélien Garivier, also running his lap in the dark but with better eyesight than mine! So we had a nice chat while doing a half-lap together (a wee bit faster than I planned!). Next run is Argentan on Saturday!

the biggest change

Posted in Statistics, University life on September 29, 2011 by xi'an

The current question for the ISBA Bulletin is "What is the biggest and most surprising change in the field of Statistics that you have witnessed, and what do you think will be the next one?" The answer to the second part is easy: I do not know, and even if I knew I would be writing papers about it rather than spilling the beans… The answer to the first part is anything but easy. At the most literal level, taking "witnessed" at face value, I have witnessed the "birth" of Markov chain Monte Carlo methods at the conference organised in Sherbrooke by Jean-Francois Angers in June 1989… (This was already reported in our Short History of MCMC with George Casella.) I clearly remember Adrian showing the audience a slide with about ten lines of Fortran code that corresponded to the Gibbs sampler for a Bayesian analysis of a mixed effect linear model (later to be analysed in JASA). This was so shockingly simple… It certainly was the talk that had the most impact on my whole career, even though (a) I would have certainly learned about MCMC quickly enough had I missed the Sherbrooke conference and (b) there were other talks in my academic life that also induced that "wow" moment, for sure.

At a less literal level, the biggest change, if not the most surprising, is that the field has become huge, multifaceted, and ubiquitous. When I started studying statistics, it was certainly far from being the sexiest possible field, at least in the eyes of the general public! And the job offers were not as numerous and diverse as they are today. (The same is true for Bayesian statistics, of course, even though it has sounded sexy from the start!)
Bessel integral

Posted in R, Statistics, University life on September 28, 2011 by xi'an

Pierre Pudlo and I worked this morning on a distribution related to phylogenetic trees and got stuck on the following Bessel integral

$\int_a^\infty e^{-bt}\,I_n(t)\,\text{d}t\qquad a,b>0$

where $I_n$ is the modified Bessel function of the first kind. We could not find better than formula 6.611(4) in Gradshteyn and Ryzhik, which is for a=0… Anyone in for a closed-form formula, even involving special functions? (A numerical sanity check is sketched at the end of this page.)

Error and Inference [#5]

Posted in Books, Statistics, University life on September 28, 2011 by xi'an

(This is the fifth post on Error and Inference, as previously a raw and naïve reaction following a linear and slow reading of the book, rather than a deeper and more informed criticism.)

"Frequentist methods achieve an objective connection to hypotheses about the data-generating process by being constrained and calibrated by the method's error probabilities in relation to these models." — D. Cox and D. Mayo, p. 277, Error and Inference, 2010

The second part of the seventh chapter of Error and Inference is David Cox's and Deborah Mayo's "Objectivity and conditionality in frequentist inference". (Part of the section is available on Google Books.) The purpose is clear and the chapter quite readable from a statistician's perspective. I however find it difficult to quantify objectivity by first conditioning on "a statistical model postulated to have generated data", as again this assumes the existence of a "true" probability model where "probabilities (…) are equal or close to the actual relative frequencies". As earlier stressed by Andrew: "I don't think it's helpful to speak of 'objective priors.' As a scientist, I try to be objective as much as possible, but I think the objectivity comes in the principle, not the prior itself. A prior distribution–any statistical model–reflects information, and the appropriate objective procedure will depend on what information you have."

The paper opposes the likelihood, Bayesian, and frequentist methods, reproducing what Gigerenzer called the "superego, the ego, and the id" in his paper on statistical significance. Cox and Mayo stress from the start that the frequentist approach is (more) objective because it is based on the sampling distribution of the test. My primary problem with this thesis is that the "hypothetical long run" (p. 282) does not hold in realistic settings. Even in the event of a reproduction of similar or identical tests, a sequential procedure exploiting everything that has been observed so far is more efficient than the mere replication of the same procedure solely based on the current observation.

"Virtually all (…) models are to some extent provisional, which is precisely what is expected in the building up of knowledge." — D. Cox and D. Mayo, p. 283, Error and Inference, 2010

The above quote is something I completely agree with, being another phrasing of George Box's "all models are wrong", but this transience of working models is a good reason in my opinion to account for the possibility of alternative working models from the start of the statistical analysis, hence for an inclusion of those models in the statistical analysis equally from the start. Which leads almost inevitably to a Bayesian formulation of the testing problem.
"Perhaps the confusion [over the role of sufficient statistics] stems in part because the various inference schools accept the broad, but not the detailed, implications of sufficiency." — D. Cox and D. Mayo, p. 286, Error and Inference, 2010

The discussion over the sufficiency principle is interesting, as always. The authors propose to solve the confusion between the sufficiency principle and the frequentist approach by assuming that inference "is relative to the particular experiment, the type of inference, and the overall statistical approach" (p. 287). This creates a barrier between sampling distributions that avoids the binomial versus negative binomial paradox always stressed in the Bayesian literature. But the solution is somewhat tautological: by conditioning on the sampling distribution, it avoids the difficulties linked with several sampling distributions all producing the same likelihood. After my recent work on ABC model choice, I am however less excited about the sufficiency principle, as the existence of [non-trivial] sufficient statistics is quite the rare event, especially across models.

The section (pp. 288-289) is also revealing about the above "objectivity" of the frequentist approach in that the derivation of a test taking large values away from the null with a well-known distribution under the null is not an automated process, especially when nuisance parameters cannot be escaped from (pp. 291-294). Achieving separation from nuisance parameters, i.e. finding statistics that can be conditioned upon to eliminate those nuisance parameters, does not seem feasible outside well-formalised models related with exponential families. Even in such formalised models, a (clear?) element of arbitrariness is involved in the construction of the separations, which implies that the objectivity is under clear threat. The chapter recognises this limitation in Section 9.2 (pp. 293-294); however, it argues that separation is much more common in the asymptotic sense and opposes the approach to the Bayesian averaging over the nuisance parameters, which "may be vitiated by faulty priors" (p. 294). I am not convinced by the argument, given that the (approximate) conditioning approach amounts to replacing the unknown nuisance parameter with an estimator, without accounting for the variability of this estimator. Averaging brings the right (in a consistency sense) penalty.

A compelling section is the one about the weak conditionality principle (pp. 294-298), as it objects to the usual statement that a frequentist approach breaks this principle. In a mixture experiment about the same parameter θ, inferences made conditional on the experiment "are appropriately drawn in terms of the sampling behavior in the experiment known to have been performed" (p. 296). This seems hardly objectionable, as stated. And I must confess the sin of stating the opposite, as The Bayesian Choice has this remark (Example 1.3.7, p. 18) that the classical confidence interval averages over the experiments… Mea culpa! The term experiment validates the above conditioning in that several experiments could be used to measure θ, each with a different p-value. I will not argue with this. I could however argue about "conditioning is warranted to achieve objective frequentist goals" (p. 298) in that the choice of the conditioning, among other things, weakens the objectivity of the analysis.
In a sense the above pirouette out of the conditioning principle paradox suffers from the same weakness, namely that when two distributions characterise the same data (the mixture and the conditional distributions), there is a choice to be made between "good" and "bad". Nonetheless, an approach based on the mixture remains frequentist if non-optimal… (The chapter later attacks the derivation of the likelihood principle; I will come back to it in a later post.)

"Many seem to regard reference Bayesian theory to be a resting point until satisfactory subjective or informative priors are available. It is hard to see how this gives strong support to the reference prior research program." — D. Cox and D. Mayo, p. 302, Error and Inference, 2010

A section also worth commenting on is (unsurprisingly!) the one addressing the limitations of the Bayesian alternatives (pp. 298–302). It however dismisses right away the personalistic approach to priors by (predictably if hastily) considering that it fails the objectivity canons. This seems a wee bit quick to me, as the choice of a prior is (a) the choice of a reference probability measure against which to assess the information brought by the data, not clearly less objective than picking one frequentist estimator or another, and (b) a personal construction of the prior can also be defended on objective grounds, based on the past experience of the modeler. That it varies from one modeler to the next is not an indication of subjectivity per se, simply of different past experiences.

Cox and Mayo then focus on reference priors, à la Bernardo-Berger, once again pointing out the lack of uniqueness of those priors as a major flaw. While the sub-chapter agrees on the understanding of those priors as convention or reference priors, aiming at maximising the input from the data, it gets stuck on the impropriety of such priors: "if priors are not probabilities, what then is the interpretation of a posterior?" (p. 299). This seems like a strange comment to me: the interpretation of a posterior is that it is a probability distribution, and this is the only mathematical constraint one has to impose on a prior. (Which may be a problem in the derivation of reference priors.) As detailed in The Bayesian Choice among other books, there are many compelling reasons to invite improper priors into the game. (And one not to, namely the difficulty with point null hypotheses.) While I agree that the fact that some reference priors (like matching priors, whose discussion on p. 302 escapes me) have good frequentist properties is not compelling within a Bayesian framework, it seems a good enough answer to the more general criticism about the lack of objectivity: in that sense, frequency-validated reference priors are part of the huge package of frequentist procedures and cannot be dismissed on the basis of being Bayesian. That reference priors are possibly at odds with the likelihood principle does not matter very much: the shape of the sampling distribution is part of the prior information, not of the likelihood per se. The final argument (Section 12) that Bayesian model choice requires the preliminary derivation of "the possible departures that might arise" (p. 302) has been made at several points in Error and Inference. Besides being in my opinion a valid working principle, i.e. selecting the most appropriate albeit false model, this definition of well-defined alternatives is mimicked by the assumption of "statistics whose distribution does not depend on the model assumption" (p.
302) found in the same last paragraph. In conclusion, this (sub-)chapter by David Cox and Deborah Mayo is (as could be expected!) a deep and thorough treatment of the frequentist approach to the sufficiency and (weak) conditionality principles. It however fails to convince me that there exists a "unique and unambiguous" frequentist approach to all but the simplest problems. At least, from reading this chapter, I cannot find a working principle that would lead me to this single unambiguous frequentist procedure.

no school today

Posted in Kids, pictures on September 27, 2011 by xi'an

My daughter's school was blocked this morning, as the result of a day of actions against the persistent and long-term policy of position cuts and hour reductions in the Education budget by the current government of France. The impact on higher education of those cuts is already perceptible in the math background of our first-year students… Not that a few garbage bins piled in front of the high school entrance could make a dent in our governmental policy. Nor in the absurd statistics of the Education minister, Luc Chatel, who goes back to 1980 as a reference year to show "increases and improvements"!

workshop in Columbia [day 3]

Posted in pictures, R, Running, Statistics, Travel, University life on September 27, 2011 by xi'an

Although this was only a half-day of talks, the third day of the workshop was equally thought-challenging and diverse. (I managed to miss the first ten minutes by taking a Line 3 train to 125th Street, having overlooked the earlier split from Line 1… Crossing south Harlem on a Sunday morning is a fairly mild experience though.) Jean-Marc Azaïs gave a personal recollection on the work of Mario Wschebor, who passed away a week ago and should have attended the workshop. Nan Chen talked about the Monte Carlo approximation of quantities of the form $\mathbb{E}[f(\mathbb{E}[Y|X])]$, which is a problem when f is nonlinear. This reminded me (and others) of the Bernoulli factory and of the similar trick we use in the vanilla Rao-Blackwellisation paper with Randal Douc. However, the approach was different in that the authors relied on a nested simulation algorithm that did not adapt against f, and did not account for the shape of f. Peter Glynn, while also involved in the above, delivered a talk on the initial transient that showed possibilities for MCMC convergence assessment (even though this is a much less active area than earlier). And, as a fitting conclusion, the conference organiser, Jingchen Liu, gave a talk on the non-compatible conditionals he and Andrew are using to handle massively-missing datasets. It reminded me of Hobert and Casella (1996, JASA) of course, and also of discussions we had in Paris with Andrew and Nicolas. Looking forward to the paper (as I have missed some points about the difference between true and operational models)!

Overall, this was thus a terrific workshop (I just wish I could have been able to sleep one hour more each night to be more alert during all talks!) and a fantastic if intense schedule fitting the start of the semester and of teaching (even though Robin had to teach my first R class in English on Friday). I also discovered that several of the participants were attending the Winter Simulation Conference later this year, hence another opportunity to discuss simulation strategies together.
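Closing the loop on the Bessel integral post above: a minimal numerical sketch (in Python with SciPy; the values of a, b and n below are illustrative assumptions, not taken from the post) that a candidate closed-form expression could be checked against. The integral only converges for b > 1, since $I_n(t) \sim e^t/\sqrt{2\pi t}$ for large t, and the exponentially scaled Bessel function avoids overflow for large arguments.

```python
import numpy as np
from scipy import integrate, special

def bessel_tail(a, b, n):
    """Numerically evaluate int_a^inf exp(-b*t) I_n(t) dt for b > 1."""
    assert b > 1, "the integral diverges for b <= 1"
    # special.ive(n, t) = exp(-t) * I_n(t), so only exp(-(b-1)*t) remains outside
    integrand = lambda t: np.exp(-(b - 1.0) * t) * special.ive(n, t)
    value, _abs_err = integrate.quad(integrand, a, np.inf)
    return value

print(bessel_tail(a=1.0, b=2.0, n=3))  # e.g. compare against any proposed closed form
```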
https://brilliant.org/problems/simple-isnt-enough/
# Simple isn't enough

Calculus Level 5

$\int_{0}^{16} { \arctan{(\sqrt{\sqrt{z} -1})} \ dz}$

If the complex integral above can be expressed as $$a + ib$$, find the value of $$\left \lfloor{a + 10b}\right \rfloor$$.

Clarification: $$i = \sqrt{-1}$$.
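A hedged numerical sketch (not part of the original problem statement) of how one might check a closed-form answer: for $0 \le z < 1$ the inner square root is imaginary, so the integrand is complex there, while for $z \ge 1$ it is real; mpmath handles the complex arithmetic and the split at $z = 1$ directly.

```python
from mpmath import mp, sqrt, atan, quad, floor

mp.dps = 30                      # working precision

# complex-valued for z < 1 (square root of a negative number), real for z >= 1
f = lambda z: atan(sqrt(sqrt(z) - 1))

val = quad(f, [0, 1, 16])        # split the interval at the branch point z = 1
a, b = val.real, val.imag
print(val)
print(floor(a + 10 * b))         # the quantity the problem asks for
```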
https://colekillian.com/posts/quick-presentations/
Contents

# Overview

There are three aspects to a slideshow presentation:

- the content
- the design
- the delivery

This post focuses on the content and design. Imagine yourself, looking to quickly prepare a slideshow, and opening up Google Slides or PowerPoint. What you really bring to the table is the content, but along the way you have to deal with manually composing slides: moving the textbox a little to the right, changing the font, resizing images. Would it be possible to write a program that lets you forget about design, focus on the content, and then have a polished presentation generated for you? You would need a way to communicate the content to the program: an outliner tool. Hmm.

It turns out that there is a great package that accomplishes exactly this! See org-reveal, a package that uses reveal.js to turn plain-text org files into HTML slideshow presentations. You can include all the typical presentation things: images, code, tables, etc. See an example presentation here.

# Installation

In Spacemacs, org-reveal is easy to set up. You enable it using org layer variables as follows and set org-re-reveal-root to where you installed reveal.js on your computer:

```elisp
;; in the Spacemacs layer list: enable reveal.js export for the org layer
(org :variables
     org-enable-reveal-js-support t)

;; point org-re-reveal at the local reveal.js installation
(setq org-re-reveal-root
      "file:///home/gautierk/.npm-global/lib/node_modules/reveal.js/")
```

# Usage

To use org-reveal, outline your presentation in an org file. Use org metadata to set things like the title and author. When you're ready to export, run org-export-dispatch (, e e on Spacemacs) and then type v b to export using reveal. You did it! A minimal example outline is sketched below. For more options see the org-reveal README. Here is an example presentation I made.
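For concreteness, here is a minimal sketch of what such an org outline might look like (the metadata and headings are illustrative assumptions, not taken from the post; with org-re-reveal, each top-level heading typically becomes its own slide):

```org
# minimal illustrative outline; adjust the metadata to your talk
#+TITLE: Example Talk
#+AUTHOR: Your Name
#+OPTIONS: toc:nil num:nil

* First slide
- a point
- another point

* Second slide
Some text, an image, or a source block goes here.
```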
https://crazyproject.wordpress.com/2011/06/22/decide-whether-a-given-linear-congruence-over-an-algebraic-integer-ring-has-a-solution/
## Decide whether a given linear congruence over an algebraic integer ring has a solution Consider $A = (3\sqrt{-5}, 10+\sqrt{-5})$ as an ideal in $\mathcal{O} = \mathbb{Z}[\sqrt{-5}]$. Does $3\xi \equiv 5$ mod $A$ have a solution in $\mathcal{O}$? Let $D = ((3),A) = (3,1+\sqrt{-5})$. By Theorem 9.13, our congruence has a solution if and only if $5 \equiv 0$ mod $D$. But as we saw in a previous exercise, $5 \equiv 2 \not\equiv 0$ mod $D$; so this congruence does not have a solution in $\mathcal{O}$.
https://physics.stackexchange.com/questions/314638/variational-derivative-of-function-with-respect-to-its-derivative/314640
# Variational derivative of a function with respect to its derivative [closed]

What is $$\frac{\delta f(t)}{\delta \dot{f}(t)}~?$$ where $\dot{f}(t) = df/dt$.

• could you provide some context please? – ZeroTheHero Feb 25 '17 at 3:32
• It is purely a math question, so better to ask in mathematics stack exchange – Lapmid Feb 25 '17 at 4:00
• @SherlockHolmes yeah and all possible answers are purely math too. – ZeroTheHero Feb 25 '17 at 6:05
• Related question by OP: physics.stackexchange.com/q/263261/2451 – Qmechanic Feb 25 '17 at 7:36

The definition of the functional derivative of a functional $I[g]$ is the distribution $\frac{\delta I}{\delta g}(\tau)$ such that $$\left\langle \frac{\delta I}{\delta g}, h\right\rangle := \frac{d}{d\alpha}\bigg\rvert_{\alpha=0} I[g+ \alpha h]$$ for every test function $h$. In our case, assuming we deal with functions which suitably vanish before reaching $\pm \infty$, $$I[g] = \int_{-\infty}^t g(\tau)\,d\tau$$ so that $$I[\dot{f}]= f(t)$$ as requested. Going on with the procedure, $$\left\langle \frac{\delta I}{\delta g}, h\right\rangle = \frac{d}{d\alpha}\bigg\rvert_{\alpha=0} \int_{-\infty}^t(g(\tau)+ \alpha h(\tau)) d\tau = \int_{-\infty}^t h(\tau) d\tau = \int_{-\infty}^{+\infty} \theta(t-\tau)h(\tau) d\tau$$ where $\theta(\tau)=1$ for $\tau\geq 0$ and $\theta(\tau)=0$ for $\tau<0$, and so $$\frac{\delta f(t)}{\delta \dot{f}}(\tau) = \frac{\delta I}{\delta g}(\tau)= \theta(t-\tau)$$

• Just a LaTeX tip: you can use \big, \bigg and so forth to have a larger vertical line, \rvert. (See edit.) – JamalS Feb 25 '17 at 9:54
• Thanks. I usually use $\left.$ $\right|$ but I did not exploit them here. – Valter Moretti Feb 25 '17 at 9:56
• Great explanation, thanks! I guess it makes physical sense too - varying at $\tau$ you only expect any effect at $t\geq\tau$ – smörkex Feb 26 '17 at 3:43

The important thing to keep in mind is that a functional derivative is more like a gradient than an ordinary derivative. The reason that this is an important consideration is because, practically, we always specify functions with (possibly infinite) lists of numbers, be they: Taylor series coefficients, continued fraction constants, a list of constant values (approximating with boxcars), a list of points (connect the dots), Fourier series coefficients, etc. The important part of this consideration is that the function's derivative doesn't carry any information about a constant vertical offset. Thus, because any function of the form $f(t) + c$ has the same derivative, $\dot{f}(t)$, the functional derivative in the question will not be defined in the "direction" that corresponds to the degree of freedom defined by $c$.

In equations, let \begin{align} g(t) &\equiv \dot{f}(t) \Rightarrow \\ f(t) - f(t_0) & = \int_{t_0}^t g(t') \operatorname{d} t'\end{align} From there: \begin{align} \frac{\delta f(t)}{ \delta \dot{f}(\tau)} - \frac{\delta f(t_0)}{ \delta \dot{f}(\tau)} & = \frac{\delta \int_{t_0}^t g(t') \operatorname{d}t'}{ \delta g(\tau)} \\ & = \int_{t_0}^t \delta(t' - \tau) \operatorname{d}t' \\ & = \Theta(t-\tau) \, \Theta(\tau - t_0) - \Theta(t_0 - \tau)\, \Theta(\tau - t). \end{align} This now satisfies: $$\frac{\partial}{\partial t} \left(\frac{\delta f(t)}{\delta \dot{f}(\tau)}\right) = \delta(t - \tau),$$ as expected. Because $\dot{f}(t)$ doesn't carry any information about the vertical offset of $f(t)$, only differences of the functional derivative, like above, are well defined.
If the space of functions is limited to those that satisfy $\lim_{t\rightarrow -\infty} f(t) = 0$, then we can take $t_0\rightarrow -\infty$ to get the expression from Valter Moretti's answer.

• Since the choice of $t_0$ is arbitrary, your calculation seems to suggest that this variation $\frac{\delta f(t)}{\delta \dot{f}(t)}$ is not well defined. – taper Feb 25 '17 at 4:15
• @taper I have now addressed that, and you're right, only differences in that functional derivative are well defined. – Sean E. Lake Feb 25 '17 at 19:35
• Thanks for this insight. So combining with Valter Moretti's answer, the full solution is then $\delta f(t) / \delta \dot{f}(\tau) = \theta(t-\tau) + c(\tau)$. But what kinds of initial conditions would determine $c(\tau)$ - it seems like for this problem these should be context-independent of what $f$ actually is. One natural condition seems to be $\delta f(a) / \delta \dot{f}(b) = 0$ where $b>a$. Then $0 = \delta f(a) / \delta \dot{f}(b) = \theta(a-b) + c(b) = c(b)$, so the full solution is just $\delta f(t) / \delta \dot{f}(\tau) = \theta(t-\tau)$ - is this true? – smörkex Feb 26 '17 at 3:53
• I would disagree with that. Looking at your other questions, related to Euler-Lagrange equations, this isn't relevant anyway. There are two main ways to do that problem. First, the variational derivative of the action w.r.t. $x(t)$, which uses the chain rule and $\frac{\delta x(t)}{\delta x(t')} = \delta(t-t')$. The second is to use partial derivatives in which $x$, $\dot{x}$, $\ddot{x}$, etc. are all treated as independent variables. – Sean E. Lake Feb 26 '17 at 5:59
• @Kurt In either case, the result is: \begin{align} \frac{\delta S[x]}{\delta x(t)} &= \int \left(\frac{\partial L}{\partial x} \delta(t'-t) + \frac{\partial L}{\partial \dot{x}} \delta'(t'-t) + \frac{\partial L}{\partial \ddot{x}} \delta''(t'-t) + \ldots \right) \operatorname{d}t' \\ & = \sum_{n=0}^\infty (-1)^n \frac{\operatorname{d}^n}{\operatorname{d}t^n} \left(\frac{\partial L}{\partial \frac{\mathrm{d}^n\, x}{\mathrm{d}\, t^n}}\right) \end{align} – Sean E. Lake Feb 26 '17 at 6:03
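As a hedged numerical illustration of the $\theta(t-\tau)$ result above (a discretised sketch of my own, not taken from the thread): represent $\dot{f}$ by its values on a grid, build $f$ as a cumulative sum, and check how $f(t_i)$ responds to a small perturbation of $\dot{f}(\tau_j)$.

```python
import numpy as np

dt = 0.01
tau = np.arange(0.0, 1.0, dt)            # grid standing in for the time axis
fdot = np.exp(-tau)                       # arbitrary sample values of df/dt

def f_of(fdot_vec):
    """Discrete analogue of f(t) = integral of fdot up to t (left Riemann sum)."""
    return np.cumsum(fdot_vec) * dt

i = 70                                    # index of t_i = 0.70
eps = 1e-6
for j in (30, 90):                        # tau_j = 0.30 (< t_i) and 0.90 (> t_i)
    perturbed = fdot.copy()
    perturbed[j] += eps
    sensitivity = (f_of(perturbed)[i] - f_of(fdot)[i]) / eps
    print(j, sensitivity)
# prints ~dt for tau_j < t_i and ~0 for tau_j > t_i, i.e. dt * theta(t_i - tau_j),
# the discrete counterpart of  delta f(t) / delta fdot(tau) = theta(t - tau).
```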
https://www.physicsforums.com/threads/convergence-of-an-alternating-series.875083/
# Convergence of an alternating series

Consider a sequence with the $n^{th}$ term $u_n$. Let $S_{2m}$ be the sum of the $2m$ terms starting from $u_N$ for some $N\geq1$. If $\lim_{N\rightarrow\infty}S_{2m}=0$ for all $m$, then the series converges. Why? This is not explained in the following proof:

mfb (Mentor) replied:

This is not true as stated; consider $u_n=(-1)^n$. You also have to add that the $u_n$ are alternating and decreasing in magnitude. The proof looks a bit sloppy, but you can use a similar approach to show that the partial sums are always between two numbers that approach the same limit (the sandwich theorem).
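A small numerical illustration of the sandwich remark (my own sketch, not from the thread), using the alternating harmonic series, whose terms alternate and decrease in magnitude:

```python
import numpy as np

n = np.arange(1, 2001)
terms = (-1.0) ** (n + 1) / n          # alternating harmonic series, sum = ln 2
partial = np.cumsum(terms)

print(partial[1::2][-3:])              # even-indexed partial sums: increase toward ln 2
print(partial[0::2][-3:])              # odd-indexed partial sums: decrease toward ln 2
print(np.log(2))                       # the common limit that sandwiches them
```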
https://hcpssgt.digication.com/aarnette1/Bibliography_7
http://forums.somd.com/news-current-events/188919-once-convict-now-millionaire.html

Mike. "Once a Convict, Now a Millionaire." Forum. N.p., 2009. Web. 8 Oct. 2009. <forums.somd.com... 188919-once-convict-now-millionaire.html>.

Anthony Arnette 10-8-09 Pd: 2

Another man, found in the state of Maryland, was accused of a crime that he had not committed and then compensated for it. The full quote was:

Thomas McGowan's journey from prison to prosperity is about to culminate in $1.8 million, and he knows just how to spend it: on a house with three bedrooms, stainless steel kitchen appliances and a washer and dryer. "I'll let my girlfriend pick out the rest," said McGowan, who was exonerated last year based on DNA evidence after spending nearly 23 years in prison for rape and robbery. He and other exonerees in Texas, which leads the nation in freeing the wrongly convicted, soon will become instant millionaires under a new state law that took effect this week. Exonerees will get $80,000 for each year they spent behind bars. The compensation also includes lifetime annuity payments that for most of the wrongly convicted are worth between $40,000 and $50,000 a year — making it by far the nation's most generous package. "I'm nervous and excited," said McGowan, 50. "It's something I never had, this amount of money. I didn't have any money — period."

In this case the man was very happy to receive this money; however, many citizens would look at the twenty-three years in jail and immediately decline the offer.
https://poetry.openlibhums.org/article/id/766/
Attention is the rarest and purest form of generosity (Simone Weil) Thomas A. Clark is not an overtly political poet. His intense focus on natural objects and spaces, on plants and animals, on the value of aesthetic and contemplative states and on the possibility of turning away from routine preoccupations to quiet immersion in rural environments, together with his work’s formal qualities of simplicity and materiality, all might suggest a certain degree of distance or withdrawal from politics understood as debate, polemic, or the representation of power relations. At times there is an impulse to reject certain elements of the ‘everyday’ (and, in one sense, of the ‘political’), as in the opening lines of ‘In Praise of Walking’: Early one morning, any morning, we can set out, with the least possible baggage, and discover the world. It is quite possible to refuse all the coercion, violence, property, triviality, to simply walk away.1 Such a view of his work would accord with what was, until recently, a common view of ‘nature’ or pastoral poetry, as conservative or escapist.2 Such views, however, have been transformed by growing ecological awareness and the rise of ecopoetics, involving the realisation that the ‘natural’, far from being a place of escape from the destructive power-relations of post-industrial (post)modernity, is the site of those power-relations’ most intense (and potentially terminal) destructive exercise. Seen in such a context, Clark’s work may be read as offering a sharp ethical-political critique of certain potentially destructive aspects of modernity. I will address in this article three dimensions of this critique: a privileging of forms of attention, a re-envisioning of the nature and experience of time, and a celebration of the gift, these three constituting forms of resistance to excessive complexity, instrumental efficiency and commodification as aspects of modernity. Summarising elements of Adorno’s ‘materialist thinking of the imbrication of art in the modern’ (in his Aesthetic Theory), Simon Malpas writes that Art stands out against the rationalising and industrialising drives of the modern, fragmenting them by recapturing the techniques and experiences disavowed in the continual striving for progress and development. This, for the critics of modernity, is art’s fundamental critical potential.3 The political implications of Clark’s work are to be found primarily in terms of what Adorno saw as ‘artwork’s necessary and illusory autonomy’, which defines its ‘social character’, as the ‘social antithesis of society’.4 It must be acknowledged that both the association of poetry with close attention, and the hope that this might serve as a bulwark against a technological (post)modernity seen as threat to the value of attention, have been so widely expressed as to be at risk of becoming clichés. Andrew Epstein deals extensively with the former idea in his book Attention Equals Life, but with reference to American poetry. 
Epstein’s central argument links the idea of poetry as a ‘form of attention’ to the ‘poetics of everyday life’.5 Epstein rightly reminds us that the ‘recurring idea’ that ‘“poetry is a form of attention” … is not a timeless or “natural” definition of poetry’ but ‘historically conditioned … a reflexive response to widespread fears that “our ability to pay attention isn’t what it once was”’.6 But he is perhaps too uncritical of some of the associated rhetoric, in that he refers to ‘the crisis of attention roiling contemporary culture’, presenting as a starting premise of his book that: poetry is an important, and perhaps unlikely cultural form that has mounted a response, and even method of resistance, to a culture gradually losing its capacity to pay attention.7 I would agree that many concerns about the effects of media technology are well-founded, but the grounds of such concern need to be refined beyond a ‘crisis of attention’. Someone who spends ten hours playing an online game is, after all, attending very intensely, even if one may worry about the effects on health or offline social relations. Social media is powerful because it is effective at capturing attention: this is often described as fragmented attention, but such a formulation only takes us so far. Attention while driving a car is fragmented, and necessarily so. Even, say, teaching a class, may require attention on several levels (content, group interaction, timing, environment), but this would tend to be seen as a creative distribution rather than a fragmentation of attention. Different sorts of concern need to be distinguished: attending to the wrong sort of thing; attending in the wrong sort of way; forms of attention to one thing which make it difficult to attend to something else; and so on. Epstein sees certain types of poetry since the 1950s as championing the ethical value of attending closely to ‘daily life’, and associates this with poetry’s formal and generic qualities:8 Freed from the exigencies of narrative … poetry steps forward as perhaps the quintessential genre for the rendering of concrete, everyday experiences and objects, for an investigation of the workings of attention, and for a method of responding to the moment-by-moment unfolding of daily time.9 Again, the general tenor of this observation seems reasonable, but the terms seem to invite further scrutiny. What is a ‘concrete’ experience and what not? The ‘everyday’ experiences of, say, a hill farmer, a school teacher and a soldier are very different, but what sort of experience is not ‘everyday’? The ‘everyday’ can also figure in very different forms in poetry; in say, Wordsworth, or Larkin, or Frank O’Hara. As alternatives to the everyday, the heroic and the mythical are the most likely candidates, bearing in mind their literary significance. Certainly one would not describe Paradise Lost or Sir Gawain and the Green Knight as centrally concerned with ‘everyday experience’, even if at the time of their writing they may have seemed somewhat closer to such experience. In the case of Clark’s work, the focus is not best defined as ‘everyday experiences and objects’. As I have suggested, he advocates a turn away from ‘the everyday’ in certain guises. 
Rather his focus is upon natural objects, the experience of natural environments (but not excluding the effects of culture and human presence); also, to a lesser extent, ordinary (that is to say likely to be found in a home) objects considered as possessing aesthetic qualities and viewed in that light; also the processes and objects of sensory perception itself (such as colour and visual or aural features). Epstein is critical of what he terms the ‘transformation trope’: the tendency of reviewers to fall back on the cliché that poems ‘discover the extraordinary within the ordinary’ or ‘transform the everyday’.10 This is surely a degraded version of the modernist (or particularly Joycean) epiphany. Epstein acknowledges the possible argument that the quotidian in contemporary American poetry is ‘merely a deepening of the already extant modernist interest in the everyday’ but counters that ‘the modernists retained a greater emphasis on epic ambitions, on the mythic dimensions of the daily, on epiphany and the special moment’.11 Epstein’s stress on the novelty of post-war poetry in this respect underestimates the strength of the allegiance to the ‘everyday’ in both Romantic writing (such as Wordsworth, who does not always seek to transcend the mundane) and in ecological writing more generally conceived.12 Epstein’s suggestion that more recent poetry in fact resists transcendence or transformation is somewhat reminiscent of Alan Wilde’s characterisation of postmodernist ‘disjunctive’ irony in contradistinction to modernist ‘suspensive’ irony. In Wilde’s account, ‘suspensive’ irony, characteristic of high Anglo-American modernism, ‘represents the desire simultaneously to be true to [the] incoherence [of fragmentary experience] and to transcend it’.13 In postmodernist ‘disjunctive’ irony ‘an indecision about the meanings or relations of things is matched by a willingness to live with uncertainty … to welcome a world seen as random and multiple’ (a refusal of transcendence).14 Mythic elements are not absent from Clark’s work: for example in ‘Tao Te Ching [a small country sparsely populated]’ (2011) there is an element of mythic allegory. On the whole, though, one would want to say that his work remains within the texture of experiences of perception and reflection available during the course of unexceptional contemporary life, but seeks to renew and perhaps change the reader’s positioning and awareness of these; without transfiguration or transcendence, but with some sense of the numinous. In relation to attention, it is worth noting that, if Clark’s work seems to ask for and encourage careful attention, it does so in a very different way from many other literary forms. Anxiety about attention in relation to literature tends to focus on attention spans: whether there is a loss of ability to concentrate over long periods; on reading a ‘big’ novel or closely interpreting a complex poem. These are not the challenges that Clark’s work offers. The elements of concrete poetry and minimalism in his works mean that some immediate apprehension of them is generally possible, even if a rich understanding or full interpretation would require sustained thought. In a sense they are immediately available, and this is part of their aesthetic appeal and their mode of address to their reader.
Rather than asking for a lot of attention themselves, they tend to incite attention to particular aspects of the world, or ways of experiencing the world.15 In this light, and as regards attention as an issue in contemporary culture, I want to address the nature of the intervention of Clark’s work, not in terms of a very general assertion of a ‘crisis of attention’, but a more specific analysis of its resistance to attention being treated as quantifiable and commodifiable. In a forthcoming article, ‘Paying Attention: Philosophy as Strong Therapy for the Information Age’, Dominic Smith argues that:

we are the inheritors of a deeply engrained and crudely economic grammar that frames attention as a ‘resource’ or form of ‘capital’ that can be ‘paid.’16

Jonathan Crary associates this idea with ‘Western modernity since the nineteenth century’.17 In recent years, this economic model of attention has been actualised by the migration of advertising into so-called ‘interactive’ digital media.18 Notoriously, Facebook and other social media are ‘always free’ because users pay with their attention, which can be quantified and commodified by ‘likes’, clicks or links followed. Certain scientific or pseudo-scientific discourses around technology also share in the rhetoric of quantification and measurement of human faculties. Smith quotes two strikingly incommensurable measures. On the one hand, Mihalyi Csikszentmihalyi claims that the human organism can discriminate a maximum of 126 bits of information per second, meaning in a lifetime of 70 years one can process about 185 billion bits. This, he comments, ‘most people find … tragically insufficient’.19 In contrast, a 2013 experiment to simulate human brain processing on the Fujitsu ‘K’ supercomputer found that:

It took 40 minutes with the combined muscle of 82,944 processors in K computer to [simulate] 1 second of biological brain processing time. While running, the system ate up about 1 PB of memory (Whitham 2013).20

On one level, these figures merely illustrate that we have no real consensus as to what it means for a human to ‘process’ something, nor what constitutes attention, and that the language of ‘information processing’ is deeply inappropriate to human experience. My present intention, though, is to consider how these rhetorics of quantification and commodification offer a challenge to the valuation of aesthetic experience. This is in order to suggest that Clark’s distinctive poetics meets this challenge, resisting the discourse of an ‘economy’ of attention by means of particular forms of engagement with time and the temporal (also through the idea of the gift). The challenge is there to be met because one familiar way of describing and valuing aesthetic experience is as a sustained act of attention to a work of art or literature.21 We might praise a masterpiece of visual art or literature by saying that it demands sustained and deep attention. Smith suggests a list of possible alternatives to the metaphor of attention as something to be ‘paid’, including ‘nourished’, ‘drawn’, ‘dedicated’, ‘demanded’, ‘defended’, ‘devoted’, ‘attracted’.22 But the boundaries are not clear: if an artwork may ‘demand’ our highest mode of attention, one can also receive a ‘final demand’ for a gas bill to be paid. The rhetoric of an economy of attention slips into aesthetic contexts.
The trope of ‘100 books you must read before you die’ incites anxiety about the 185 billion bits lifetime allowance, and the gallery visitors who photograph paintings instead of looking at them are acting out the conception of human attention as ‘processing’ by replacing perceptual experience and the formation of memories with a process of recording and storing. My proposal is that time is crucial here. Bergsonian durée as opposed to clock time would clearly be one way to destabilise the measurement of life in terms of processing capacity, though I shall suggest that Deleuze’s philosophy of time, which in certain ways develops Bergson’s ideas, may be particularly useful. But my initial focus is the deployment of time in Clark’s poetry. Of Woods & Water (2008) announces its resistance to the economic model of attention with its subtitle ‘Forty Eight Delays’.23 This title converts what would be a negative term within the discourse of efficient processing into an element of aesthetic form, suggestive of a musical genre.24 That resistance is also embedded in its diction. The following words are selected from the forty-eight quatrains which make up Of Woods & Water: leisurely relax slow hesitation detains briefly slow-moving slows defer immediacy hidden passing stay stillness for a moment continually leisurely The word ‘leisurely’ occurs in the partly-mirrored first and last quatrains, in the line ‘there is a leisurely turning of the water’. The cumulative effect here is, I think, something more than just a mood. It works in collaboration with the formal and design features of the book. We note the absence of page numbers, which is common to a number of Clark’s books. The work both invites and resists quantification or measure: each quatrain is a unique event placed on a white, unnumbered page, and yet the subtitle tells us how many there are, and the free-verse quatrains use their own form of poetic ‘measure’, but on a scale incommensurable with the sort of quantification of attention represented by measuring processing capacity. The poetry is not ‘demanding’ in the way that The Waste Land is; there is nothing here difficult to understand on the immediate level, nor requiring specialist knowledge. It invites, I think, a highly distinctive form of attention, conditioned by a sense of temporal suspension. It is neither a ‘quick read’ nor a ‘slow read’, requiring rather a leisurely process of return and reflection. The first use of the word ‘delay’ in Clark’s oeuvre seems to be in a 1982 book, Twenty Four Sentences About the Forest.25 Twenty-six years earlier than Of Woods & Water: Forty-Eight Delays, this work shows by comparison both the continuity and the development in Clark’s work. Notable, for example, is the continuity of focus on the value of the natural environment and the formal poise suggested by the title or subtitle announcing a number of poetic units. Development is apparent in the presence of evidently symbolic and mythic-sounding lines in the earlier work (which are absent from the later work): In the beginning was the forest and the forest stretched everywhere, unbroken and single. When men let light into the forest, darkness hid in their hearts. 
(‘Twenty Four Sentences About the Forest’) The word ‘delay’ occurs in a seemingly more conventional sense than the formal noun use of ‘A Delay of Eight Syllables’ (inscribed in the window of Scottish Poetry Library in Edinburgh) or ‘Forty-Eight Delays’ (Of Woods & Water): The function of trees in a forest is to delay our passage from one part of the forest to another. (‘Twenty Four Sentences About the Forest’) The implications of the line – that natural forms have the potential to reshape our temporal relations – anticipate that later, formal sense of ‘delay’. Earlier in the work we read: ‘All the verbs of the forest are intransitive’. While ‘delay’ is not grammatically intransitive in the phrase ‘to delay our passage’, the uses of delay to which Clark’s poetic tends are indeed intransitive: ‘A Delay of Eight Syllables’ and ‘Forty-Eight Delays’ are not delays to some specific process or thing. A delay in this sense is not focused on that which is delayed but on the value of the delay itself. Underlying this evocation of a subjective (human) experience of slowed time is the other-than-human experience of the trees and the forest ecosystem. What Peter Wohlleben refers to as ‘the slow rhythms of life in ancient forests’ are not merely a matter of human response, but integral to entities such as trees and lichen, which live and develop over time scales of (in some cases) hundreds of years.26 In tracing Clark’s use of the word ‘delay’ from Twenty-Four Sentences to Of Woods & Water, we might observe a process by which a more conventionally post-Romantic use of natural symbol in early Clark (the human interaction with nature as a source of experiential and ethical illumination) mutates, via the poet’s engagement with concrete poetry, minimalism and a materiality of the word, into his mature style (‘delay’ as formal device, with the ethical/phenomenological ‘lesson’ embodied in linguistic form rather than represented). Ten years after Twenty Four Sentences About the Forest, a small pamphlet entitled Waiting (1992) treats the word ‘waiting’ to a sustained process of meditative variation. It includes the phrase ‘an intransitive waiting’.27 ‘Delay’ and ‘waiting’ are both terms with primarily negative connotations in normal use, at least in the context of the functionalist assumptions of late capitalism underlying a measurement of life in bytes (there is a slow food movement and a slow TV movement; we have yet to see a ‘slow computer’ movement, for obvious reasons, although the fashion for ‘digital detox’ is perhaps its equivalent). Clark’s poetry subjects the negative connotations of ‘delay’ and ‘waiting’ in a functionalist context to radical revaluation. The pamphlet, Waiting, assimilates waiting to verbs of action and perception. It is made up of a series of stanzas, with lengths varying between two and six lines, the lines being very short, frequently consisting of only one word. The fourth stanza, for example, reads: standing walking running waiting while the tenth stanza has a similar form: looking listening touching waiting A central structural principle of the poem is that the word ‘waiting’ occurs at least once in every stanza. The word is also paired with adverbs and adjectives, including ‘indifferently’, ‘patiently’, ‘idly’, ‘provisional’, ‘generous’; and with places: ‘where the shade is deepest’; ‘under/a pine tree’. 
Sometimes consequences are suggested: waiting and forgetting and falling asleep On two occasions the word is doubled almost at once: ‘waiting/upon waiting’; ‘waiting and/waiting’. Near the end of the work, we find a sudden (and unusual) emotiveness of language: singing weeping hurting waiting The work ends with a complex series of ambiguities: in morning light still waiting How can one characterise the temporal mode associated with delay and waiting, lifted out of its negative associations as impediments to ‘efficient’ activity, and revalued as the occasions or conditions of forms of attention which are not commodified or quantified? I don’t think this temporal mode can be fully defined, at least not by me, and perhaps shouldn’t be. However, to return to Dominic Smith’s article: his provisional gesture towards an alternative metaphorical language for attention is drawn from music: What consequences follow for how we relate to attention today, … by framing it, not as something to be ‘paid’, but as something to be ‘played’, in the sense of music? … this shift … introduces a different grammar and conceptual toolbox for framing attention, and different metaphysical, epistemological, ethical and aesthetic considerations thereby. Instead of framing attention as a ‘resource’, a ‘supply’, or as a form of ‘capital’ to be mined, exploited or captured, for example, it allows us to frame it as something potentially ‘resonant’, ‘dissonant’, ‘tonic’, ‘in concert’, ‘harmonic’, ‘creative’, ‘processual’, ‘rhythmic’ or ‘polyrhythmic’; further, and crucially, it provides an alternative standard against which to assess the successes and failures of the crudely economic model of attention …. Are ‘acts’ of attention properly speaking ‘acts’ at all, or do they involve a passive capacity for synthesis and receptivity on the model of attending to music ….29 This suggestion feels appropriate to the poetics of delay and waiting which I have been sketching out. In part this may be because titles or sub-titles such as A Delay of Eight Syllables and Forty Eight Delays sound to my ear akin to musical titles; and this in turn may be because units of 4 and 8 are so crucial both to poetic measure in English (the quatrain and the octet – the former much used in Clark’s work) and many elements of Western music, from 4/4 ‘common time’ to twelve-bar blues and (perhaps most relevant to Clark) Scottish folk music: the 6/8 of the jig and the 4/4 of the reel.30 The term ‘composing’ attention would seem particularly appropriate: these poetic works do not ‘demand’ that we ‘pay attention’; they ‘compose’ attention, both in the sense of being composed themselves, and in inciting composure as an existential state. The musical analogy raises again those questions about activity and passivity, but also implicitly deconstructs that antithesis. One wouldn’t think of listening to music as a passive activity, but nor is it in any straightforward sense an act of will. The ‘music’ of Clark’s poetry, elicited from the precise patterning, echoes and placing of words, sounds and spaces, incites analogous forms of attention. 
A striking feature of these poems, and of Clark’s work in general, is the use of repetition as a key ordering principle: both the structural repetition of the word ‘waiting’ throughout Waiting, and the repetition-with-variation of the first and last sections of Of Woods & Water: on a wide bend of the river there is a leisurely turning of water that flows so profoundly it can relax into every gesture on a wide bend of the river there is a leisurely turning of water away from the light and into These lines could be scanned in various ways, but the crucial effect seems to me to be a shift from rising rhythm at the start of lines (the iambs of ‘that flows’, ‘relax’, ‘away’ and ‘a broad’) to a falling rhythm (trochees such as ‘river’, ‘water’, ‘gesture’, ‘into’). There is a relatively high number of unstressed syllables (‘on a’, ‘of the’, ‘there is a’, and the final two syllables of ‘leisurely’) which, along with the polysyllables ‘leisurely’ and ‘profoundly’, creates a sense of relaxation, brought up short by the stress of the final line (scanned as iamb, trochee, spondee; or two bacchii). Roughly-silvered leaves that are the snow On Ararat seen through those leaves. The sun lays down a foliage of shade.31 In both cases, the pattern-making quality of light and shade serves as a mise-en-abyme of the poem as representation embodied in pattern. The reader is drawn to reflect upon, and question, the phenomenology of light and darkness, and its expression in language. We commonly say that ‘night falls’, but not usually that ‘day falls’: presumably because the sun appears to come ‘up’ at dawn, whereas the night appears to come ‘down’ at dusk. ‘Darkness falls’ is loaded with symbolic connotations, whereas light may simply fall upon an object or scene; however, a shaft of light may figure spiritual transformation. What is the difference between shadow and shade, and how do they inflect the loaded cultural associations of light (enlightenment, reason, clarity, insight, realisation, falling light as divine presence) and darkness (danger, death, the unconscious, the hidden)? In Clark as in Hill there is a sense of the reciprocity of light and dark, or light and shade; their mutual dependence, which resists a binary sense of their meaning. Here I would like to invoke elements of Gilles Deleuze’s philosophy as it bears on time, repetition and the gift. At the opening of Difference and Repetition, Deleuze distinguishes between ‘generality’, which he associates with exchange and substitution, and ‘repetition’, which he associates with ‘non-exchangeable and non-substitutable singularities’.32 The first of these (generality), which ‘expresses a point of view according to which one term may be exchanged or substituted for another’ might, at first sight, seem applicable to the repetition of the same word (such as ‘waiting’). Considered as a signifier, independent of its material support, the printed word is, in principle, substitutable. Two factors, however, work against the assumption in the present case. First, poetry in general has order, structure and pattern as highly meaningful elements. So each instance of the word ‘waiting’ is different precisely because it follows, and precedes, in a certain specific relation or pattern, a previous instance.
In relation to discourse more generally, this would be an instance of what James Williams terms the marginal case which is in fact the most indicative (or, as he puts it ‘For [Deleuze], all repetitions are of the marginal kind’).33 Second, Clark’s work is heavily committed to the materiality and objecthood of the word.34 His use of pattern, concrete poetry, poetry objects, cards or installed texts all bring to the fore the non-substitutable status of words as material presence in the world, over against their exchangeability as disembodied signifiers. In this he typifies, though in highly distinctive form, the role of repetition in innovative poetry. As Lyn Hejinian observes in her classic 1983 essay ‘The Rejection of Closure’: Repetition, conventionally used to unify a text or harmonize its parts, as if returning melody to the tonic, instead, in these works … challenges our inclination to isolate, identify, and limit the burden of meaning given to an event (the sentence or line). Here, where certain phrases recur in the work, recontextualized and with new emphasis, repetition disrupts the initial apparent meaning scheme. The initial reading is adjusted; meaning is set in motion, emended and extended, and the rewriting that repetition becomes postpones completion of the thought indefinitely.35 It is interesting, then, that Deleuze concludes the second paragraph of his Introduction to Repetition and Difference with the following observation: If exchange is the criterion of generality, theft and gift are those of repetition. There is, therefore, an economic difference between the two.36 Clark’s use of repetition could be seen as removing his language from the (potentially debased) economy of general linguistic exchange and offering an economy of the gift, in which each instantiation of the word is conceived as a unique offering to the reader. This point could be further illustrated in relation to Clark’s card poems Generosity (2010) and Gaelic Flowers (2016).37 Commissioned for the Poetry Beyond Text project exhibitions (Visual Research Centre, Dundee and the Royal Scottish Academy), and for a 2016 symposium on Clark’s work at the Scottish Poetry Library respectively, these two multiples were offered as a free gift for visitors or delegates to take away.38 The text of Generosity evokes objects in the natural world, human affective/cognitive responses, and even time itself as potential gifts: if the waves were silver and the leaves were gold if the miles were accomplishments and the hours were joys you would give them all away if cares were goods and moods were faculties if an impulse brought you to banks of wild strawberries you would give them all away all that your hands can reach In an article on Deleuze’s concept of repetition, Adrian Parr comments that repetition is connected to the power of difference in terms of a productive process that produces variation in and through every repetition. In this way, repetition is best understood in terms of discovery and experimentation; it allows new experiences, affects and expressions to emerge.39 In the case of Generosity, the repetition is not primarily at the level of the word (although ‘you would give them all away’ is repeated), but at the level of the poem-object, and the experiences which it evokes and generates. 
As a multiple, the card work is repeated, but each instance has a different trajectory, leaving the event (exhibition or symposium) with a different person, and potentially taking up some place in their lives, material environment and experience. The principle of repetition as instantiating a productive power of difference applies also to the reading process. All of the interpretations created or revealed in a process of ‘close reading’ are actuated only by the event of reading (and, previously, by the event of writing). The experience of reading many of Clark’s works is defined precisely by the tension between a more conventional idea of repetition as equivalence or substitutability, and a Deleuzian sense of repetition as founded in difference. His poems come to effect a subversion of the former idea by the latter. The repetition-with-difference of the first and last stanzas of Waiting performs this Deleuzian conception of repetition, but the conception is most powerfully present precisely when the repetition seems to minimize difference: as in the lines: ‘waiting/and longing/waiting and/waiting’ (stanza 22). These lines act out the way in which, when one is in fact waiting (say for a bus, or for a ferry), each moment of waiting is subtly different because of the moments which have preceded it (otherwise one would only be waiting for one moment). It is, after all, the succession of different moments which defines waiting as such. Or the way in which moments or acts of waiting are inflected by different emotions: longing, boredom, pleasure. This opens up an alternative vision of time. In the light of Deleuze’s philosophy, we may see Clark’s focus on delay, attention and waiting, not simply as a plea to set aside specific periods of time for quiet attention or reflection, but as a more radical invocation of incommensurable modes of time; a setting-aside of quantifiable time as a series of exchangeable units (the instrumentalised time of modernity) in favour of a different mode of being. Deleuze’s vision of time also helps us to make sense of the contradictory ‘calculations’ concerning the time available to humans cited by Smith, and why the idea of the ‘tragic inadequacy’ of ‘a lifetime of 70 years’ in which ‘one can process about 185 billion bits’ may strike us as absurd. Humans are not clocks, nor are they computers, and though we do make strenuous efforts to adapt ourselves to instrumentalised time (and no doubt change ourselves in the process), our experience of our own lives is not one of homogeneous, quantifiable, exchangeable units being progressively used up, but one of multiple, overlapping or tessellated, interacting or incommensurable, forms of temporal experience and process. In further interpreting Waiting and Of Woods & Water I want to refer primarily to Deleuze’s three syntheses of time. To briefly establish the context, I will summarize, drawing on James William’s book Gilles Deleuze’s Philosophy of Time. According to Deleuze’s account in Difference and Repetition and The Logic of Sense, ‘[t]imes are made in multiple synthetic processes … [so that] time is the result of the syntheses’; the ‘multiple times’ produced by the syntheses are incommensurable, forming ‘a network of asymmetrical formal and singular processes’. 
The first synthesis of time ‘implies a process in the present determining the past and future as dimensions of the process’; the second synthesis takes the past as ‘the primary process’, so that ‘the present becomes a dimension of the past, as its most contracted leading tip when we picture the past as an expanding cone’.40 The third synthesis of time is ‘the pure and empty form of time’, is the ‘condition for novelty’ and reinterprets ‘Nietzsche’s doctrine of eternal return’ via the principle that ‘only difference returns and never sameness’.41 The first synthesis of time further illuminates the role of repetition in Clark’s work, and in human experience, while demonstrating one radical difference between human perception and memory on the one hand, and computational processes and ‘memory’ on the other. One reason that human experience is neither homogeneous nor quantifiable is that each moment of a human life is (at least potentially) a revision of all the preceding moments (the present operating on the past so as to transform it) and of those to follow (the present operating on the future so as to condition it). This transformation process is acted out on a micro level within individual sections of Of Woods & Water: what you thought might take place is what thought will displace the trace of a presence a thrill through the grass Reading these lines, each word as it is read is, momentarily, the ‘present’ of that act of reading. Each word, carrying forward the ‘thought’, ‘take[s] place’ in, and takes the place of, the present moment, only to be ‘displaced’ by the next word, thereby becoming a ‘trace’. Each word as read revises, enhances, shifts the meaning of preceding words. The semantic space of ‘thought’ in the first line (a verb in the past tense meaning imagined, or expected) is defined by contradistinction to ‘thought’ (a noun meaning a mental process). ‘Take place’ as a commonplace phrase for ‘happen’ is subtly revised by the skewed symmetry of ‘displace’ (‘take – place’/‘dis – place’) and both are further shifted by the internal rhyme of ‘trace’, setting up a sense of connotations running through, across and back: a trace which is found in a place; a trace which takes the place (in memory?) of a place (displaces it; it becomes only a trace). The second synthesis of time (the present as a dimension of the past) is embodied in literature in familiar concepts such as ‘tradition’ and ‘intertextuality’. All literary works are in some sense conditioned by what has preceded them. Both first and second syntheses are registered in Eliot’s famous formulations in ‘Tradition and the Individual Talent’: the historical sense involves a perception, not only of the pastness of the past, but of its presence … The existing monuments form an ideal order among themselves, which is modified by the introduction of the new (the really new) work of art among them … for order to persist after the supervention of novelty, the whole existing order must be, if ever so slightly, altered … the past [is] altered by the present as much as the present is directed by the past. Literary language, and poetic language in particular, and Clark’s poetic of accumulative series in particular, enact the terms of the second synthesis.
For example, the word ‘green’ is one which recurs in Of Woods & Water; the following are individual lines from various points in the sequence: ‘drifts grey over green’; ‘pale green over green’; ‘are a deeper green’; ‘green filling every distance’; ‘green dipped in the stream’; ‘in the green each incident’; ‘the moss is green’. A single section reads: green above you below and behind you green with you green around you The first occurrence of the word ‘green’ initiates what will become a series of instances, each of which is different by virtue of being a repetition, so that each is conditioned by what has preceded it: From the outside there must be a difference between the repeated things for repetition to be registered, for without such a difference, there is only one and the same thing and not a repetition.42 Multiple dimensions of repetition with difference play out here. Intertextuality: the echo of Marvell’s ‘The Garden’ – ‘a green thought in a green shade’; a line which Clark has reimagined in material forms on a number of occasions, always playing on the indefinite number of multiple shades of green.43 Another form of repetition with difference is thus evoked (that of shades of colour), while Of Woods & Water is conditioned by Clark’s preceding works (After Marvell, 1980), and in turn conditions those which follow, in this sequence of intertextual relations within and between poets. The third synthesis of time in Deleuze’s philosophy of time illuminates the absent presence of the human subject in Waiting; what might, in another form of poetry, constitute the lyrical ‘I’. From the opening lines – ‘sitting/on a stone/in the dark/waiting’ – there is an implicit (human?) presence who is the grammatical subject of the verbs, but no name or pronoun ever appears. The relevance of the third synthesis of time requires explanation by a somewhat extended quotation. As explained by Williams: when you actively conceive of a proposition such as ‘I am breathing’, you can chose to vary the attribute, from ‘breathing’ to ‘walking’, for instance. You cannot control or deny the way in which both those conceptions take place in time; the denial itself even presupposes time. This is the passivity Deleuze is concerned with. It is a double passivity, though. This is because it is not the subject of the active conception that is directly passive, but rather, it is passive through a self positioned in time. The ‘I’ that conceives of the proposition is different from the ‘me’ positioned in time by being a living and sleeping thing. So now being is divided between an active subject and passive self where any action by the subject presupposes that self because the subject is only passively determinable in time through the self. A passive self is the condition for any active subject. The ‘I’ is therefore fractured or traversed by a fault line, because of the way the self is determinable in time.44 Clark’s Waiting would have been a very different poem had it started ‘I am sitting/on a stone/in the dark/waiting’; it would have been in a more conventional lyrical mode, which would have implied a relatively straightforward sense of reflective agency. What the concept of the third synthesis of time helps to bring into sharp focus is a question concerning waiting: is it active (something one does) or passive (something that happens to one)? It seems to partake of both. 
One can actively decide to wait or not (‘I am going to wait for the right moment to tell him’; ‘I am not going to wait for the bus, I would rather walk’), or one can have relatively little choice (‘we will have to wait for the tide to go out’; ‘all we can do now is wait’). Even if one decides to wait, it is not clear that waiting itself could be construed as an action; rather it seems something which occurs, to which one is given over, more or less willingly. Deleuze’s synthesis identifies the role of time in this uncertainty, and potentially offers an analysis of the active and passive components. Waiting (both the activity and the poem) foregrounds the ‘self positioned in time’, serving, therefore, as a figure or allegory of the condition of living with a fractured ‘I’ (or subject); one ‘traversed by a fault line’, dependent upon a ‘passive self’ (or ‘me’) as the ‘condition for any active subject’. This fracture splits open the dehumanized instrumentality and linearity of a lifetime imagined in terms of processing speed. The active subject might be tempted by such a rationalised self-understanding, but its dependence on the passive self undermines that vision. The present participles without pronouns of the poem embody, in poetic and syntactic form, that divided condition. The third synthesis of time involves ‘the future’ which ‘has its prior processes and includes the past and present as dimensions’.45 This seems a productive point from which to interpret waiting; a process in which the (imagined or expected) future determines the nature of the present. Waiting is a future-oriented process, amenable to understanding via the third synthesis’ prioritising of ‘the future as a novel event’, and ‘the new as pure difference determined through singularities’.46 The link between formal poetics and the ethics of delay is suggested by Clark’s comments in a recent interview: I do think that the vertical pull of lineation, together with a certain avant-garde practice of parataxis … is too much in collaboration with consumerism, with the continual inducement to move on. If something is worthy of mention at all, give it some space and time, before moving on to the next thing.47 Here Clark reverses, or shifts, the claims sometimes made for parataxis as a form of resistance to commodified subjectivity embodied in syntax.48 This seems to mark a point of dissent from the poetics of the ‘British Poetry Revival’ with which Clark was in part associated, though he continues to embody another element of that practice: the role of small presses as a resistance to ‘accelerated participation in consumerism’.49 Waiting has some paratactic elements, in sequences such as ‘standing/walking/running/waiting’, but its combination of such list-like parataxis with hypotactic syntax such as ‘a waiting/integral/to every/activity’ does not resemble the typical paratactic disruptions of ‘linguistically-innovative poetry’ which often (notably, for example, in the work of Tom Raworth) has a rapid forward movement; there is a sense of poise rather than such momentum in Clark’s work. The idea of careful attention as a source of value is a relatively familiar one in the context of the reception and criticism of poetry. The practice of ‘close reading’, privileged since the time of New Criticism and Modernism, implies the idea of ‘close attention’ to the poem (though not necessarily to the world).
For Clark, attention to the poem is, in a sense, attention to the world, or at least an incitement or inducement to intensive modes of attention. Close attention to the perceptual world is often celebrated by critics and reviewers as a quality of the poet, which then results in a poem which repays the reader’s close attention. But Clark’s work, while resolutely material and linguistically self-aware, has a strong quality of pointing the reader to the potential of their own perceptual and interactive acts, rather than inviting them to share the poet’s own; what Alice Tarbuck has termed his propaedeutic technique: Clark’s poetry, whilst never overtly didactic, is nevertheless propaedeutic, preparing the reader for encounters in the natural world which will require their attention, honed and developed through close engagement with Clark’s poetry.50 This is the particular way in which Clark has followed through on the resistance to the lyric subject, a resistance so central to the innovative poetry movement of the 1960s onwards, with which he was associated, but from which he remains distinct.51 The practice and valuation of close reading and close attention have often been phrased in terms which link them to clarity, precision and enlightenment. But Clark’s work is alert to what one might term the attentional unconscious: that attention is not simply a matter of will, training and focus, but a complex set of states and practices. This is conveyed in section 32 of Of Woods & Water: something made of shadow at the edge of attention with an undulating motion Attention has an edge, and a dimension of attention is to be responsive to its own edges: to the shadow as well as the light. There is always the pattern of light and shade: any foreground of attention requires a background; any act of will is conditioned by the unwilled. Attention, understood as something we ‘give’ (rather than pay) to the world, is matched by the gifts which the world gives us and the gifts which are passed amongst the objects of the world; a possibility affirmed through its negation in Of Woods & Water: the grace of the birch belongs to it it is not the gift of a passing breeze is there a shape you recover again when what moves you leaves you (Section 30) At stake here are questions of time (process) and essence (belonging), as well as the being of both humans and non-human objects such as trees. Of Woods & Water contains few pronouns, and none in the first person, but ‘you’ occurs a number of times. At first sight the reader is likely to take this as the generalised form of ‘you’, in English an informal equivalent to ‘one’; perhaps also an address to the reader. But the lines above ask us to consider a non-human ‘you’ in the shape of the birch, so that here Clark’s writing partakes of the recent shift, in many forms of philosophy, poetry and poetics towards a rethinking of the boundaries and shaping of the ‘human’, seen for example in the evocation of the ‘more-than-human’ by John Burnside and Kathleen Jamie; in Jane Bennett’s vital materialism; in Timothy Morton’s ‘strange stranger’; in Dona Haraway’s cyborgs and Val Plumwood’s critique of dualism.52 In Clark’s lines, the breeze, a temporal process, shows the grace of the birch, but that grace ‘belongs to’ the birch by virtue of its biological and perceptual being: its botanical morphology and growth patterns. 
The location and belonging of properties and agency is further destabilized by the uncertain status of ‘is there a shape you recover again’: a question lacking a question mark (‘Is there a shape [that] you recover again?’), or a statement about the role and destination of ‘the grace’ (‘the grace … is there a shape you recover again’)? One might interpret the role of the breeze in terms of Deleuze’s first synthesis of time: process in the present determining the past as dimension, in that the breeze as present process (movement of air) gives perceptual shape and aesthetic consequence (‘grace’) to the shape of the birch which is a product of its past development. The being of the birch would then be embodied in the second synthesis of time: the past as ‘the primary process’, wherein ‘the present becomes a dimension of the past, as its most contracting leading tip when we picture the past as an expanding cone’: the past of the birch defines and gives birth to its present shape and movement.53 The touch of theological language (‘grace’) is not irrelevant here: the dialectic of will and gift has echoes of the Christian theology of salvation by works and by grace, in an oblique manner somewhat typical of Clark’s relation to religious modes of thought, as well as of their broader presence in (primarily) secular twenty-first-century poetry. The meditation on time, essence and being in relation to the birch is transferred onto the human in lines 3 and 4 of the section (linked by the homophone of ‘leaves’ – as verb and plural noun). As the wind moves and then leaves the birch, so process or experience moves the individual human subject (changes them, elicits emotion, displaces them), and then departs, leaving them to recover the shape of self-definition which will nevertheless be changed, as humans are changed by all interaction: the first and second syntheses of time operating in relation to human selfhood and being.54 Attention and time are closely related because attention can only be given in the present, but can only acquire force by being sustained over time (momentary attention is close to inattention). Here the temporal and spatial dimensions of both word and image, and their reception come into play. The art historian T.J. Clark’s account of looking at two Poussin paintings in his book The Sight of Death is an exemplary act of attention to the aesthetic object which reimports the awareness of time into the image and into the process of its reception and interpretation. He tries to represent for the reader his own temporal process of viewing and interpreting the paintings over a period of days, and makes the process or act of repetition central to aesthetic experience, rather than a contingent circumstance: Many of us, maybe all of us, look at some images repeatedly, but it seems we do not write that repetition … Maybe we fear that the work we depend on images to do for us – the work of immobilizing, and therefore making tolerable – will be undone if we throw the image back into the flow of time.55 Underlying the effacement of the temporal which T.J. Clark detects in art criticism is some trace of Gotthold Lessing’s view of visual art as a spatial, rather than temporal form. If applied to the Poussin works, this would involve the sense of a landscape painting as both representing a moment, and as being an object/image the aesthetic completion of which abstracts it from the flow of time.56 With text works, and particularly poetry, of course, the assumptions are rather different. 
Poetry is habitually read as temporal and dynamic, and internal repetition (in diction, metre, pattern etc.) are formally crucial. However, Heather H. Yeung points out ‘the suspended temporality and non-narrative nature, or “space” of poetic voice’.57 These effects are equally pronounced in the forms of poetry which tend to resist or bracket ‘voice’, notably the traditions of concrete and intermedial poetry which inform Thomas A. Clark’s work. Such forms of poetry imbue text with image-like qualities, including a potential sense of stasis, and the possibility (for example with single-word poems) of more-or-less instant (initial) ‘reading’. None of Thomas A. Clark’s works discussed here fall into that category: they have a concern with process and movement, as signalled by the present participles which are so frequent in Waiting and play a significant role in Of Woods & Water. Nevertheless, Waiting focuses on a process which is not a process, or a process which embodies a degree of stasis (precisely, something not happening, or having not yet happened). Comparably, Of Woods & Water, as its title suggests, is concerned with the experience of natural objects which engage in, or are defined, by, natural processes, but with a degree of the ‘timeless’ because of their steady persistence: the water flowing and turning, the light falling, the leaves trembling. The present participles and gerunds embody by their grammatical status (extending to the roles of noun, verb and adjective) the obscure relations of object and process. Of Woods & Water is, in a sense, and with the necessary caveats concerning ‘landscape’ as a concept, a ‘landscape poem’ as the Poussin works are ‘landscape paintings’.58 It raises some of the same questions about temporality, process, stasis: Is what we are looking at in Calm a transitory state of affairs, or enduring? Is it Nature or Art here that has brought the world to a standstill?59 Such questions are the result of T. J. Clark’s sustained act of close attention, not only in the sense that they arise from long reflection, but also in the sense that the temporality of his extended viewing, one assumes, intensifies his perception of temporality within the work. When one looks repeatedly at what might be a ‘moment’ – the cows, smoke, water, horse, and so on in Poussin’s painting, all in a precise place and representing a condition which will never be precisely repeated – one is led to reflect very intensely upon the nature of that momentariness. What would be comparable in the process of reception of Thomas A. Clark’s poetry would be a form of attention requiring, not so much concentration, as patience. This would involve a waiting for the appearance or experience of what Andrew Bowie terms the ‘hermeneutic’ or ‘aesthetic conception … of truth’ as ‘revelation or “disclosure”’, distinct from ‘conceptions of truth as warrantable assertibility, though sharing with these an ultimate dependence on a process of ‘seeing as’.60 T.J. Clark describes his notes (which led to or became The Sight of Death) as ‘a record of looking taking place and changing through time’.61 To look in a changing way at what may be a moment is one of the defining experiences of responding to art. What emerges is an understanding of the paintings in some of the same terms as those I have been using to read Thomas A. Clark’s poems: not only temporality, the momentary and process, but also dark, light and shadow: light and darkness have to be part of my story. 
I need to hold onto the pathos of these paintings’ materiality.62 Aren’t there plenty of moments in life that, whether they last or not, have enough of permanence about them to stand for things as they are, ‘things as the mind conceives them’.63 T.J. Clark’s reflections bring him to the presence or sight of death. In relation to Thomas A. Clark’s work, this leads one to reflect on the implicit presence of death within the pastoral mode with which his work is a complex engagement. Reminding oneself of the literary-historical connections between the pastoral and the elegiac, one returns to the concluding section in Of Woods & Water, and its difference from the first section which it echoes: on a wide bend of the river there is a leisurely turning of water that flows so profoundly it can relax into every gesture on a wide bend of the river there is a leisurely turning of water away from the light and into The relaxed ‘leisurely turning’ of the opening is transformed by the lines which follow its later appearance, into a ‘leisurely turning’ with more sombre overtones: the turn into darkness, with its suggestion of death, even perhaps of the Styx. And this is consistent with the poem’s meditation on time and process, since it is death that, above all, defines the human experience of temporality, and pervades the iconography and symbolism of light, dark and shadow. T.J. Clark interprets the pastoral aspect of Poussin’s Landscape with a Calm and Landscape with a Snake via Panofsky’s essay on the painter’s Et in Arcadia Ego, where Panofsky sees Virgil’s version of pastoral as turning on a dissonance between ordinary human sorrows and the unruffled calm of Arcady. The dissonance is resolved, but by means of mood more than story or form; and the mood – the ‘mixture of sadness and tranquillity which is perhaps Virgil’s most personal contribution to poetry’ – is that of evening coming on.64 Clark’s point, however, is that Poussin resists such a resolution: Landscape with a Calm works with much of this material, obviously. But I’d say that its closeness to the Virgil stock phrases only makes its refusal quite to repeat them the more clear. Hesperus has not risen, and shadows have not gathered for good.65 There are elements of the neo-classical in Thomas A. Clark’s work: a strong sense of decorum and proportion; an impulse towards the general (woods and water rather than a particular wood or a particular river) and the ideal; a restraint or even at times severity of style (though balanced by humour and lightness of touch). Such elements form part of its resistance to aspects of modernity. Returning to those first and last section of Of Woods & Water, one could find echoes of Virgilian pastoral, although stripped of myth and character: No, never again shall I find solace among the wood-nymphs, Or in poetry, even; words and woods mean nothing to me now …. My goats You have pastured well, the twilight deepens – home, then, home!66 But, as T.J. Clark comments of Poussin, in Of Woods & Water ‘shadows have not gathered for good’. If there is a touch of melancholy and even foreboding in the turn to darkness (as also in ‘singing/weeping/hurting/waiting’, in Waiting), it is outweighed by the pleasure and affirmation of what a resistance to accelerated, commodified time can offer: the leisurely, profound and relaxed gesture of natural processes, forms of temporality which, being non-linear, can contemplate death as other than simply the bringing to an end of a series. 
As a model of such a resistant understanding of time and attention, we might turn to one of T.J. Clark’s comments on the experience of repeated aesthetic attention which led to his book: But astonishing things happen if one gives oneself over to the process of seeing again and again: aspect after aspect of the picture surfaces, what is salient and what incidental alter bewilderingly from day to day, the larger order of the depiction breaks up, recrystallizes, fragments again, persists like an afterimage. And slowly the question arises: What is it, fundamentally, I am returning to in this particular case?’67 In contrast to the idea of a ‘tragically insufficient’ lifetime allowance of ‘185 billion bits’ of human processing, this passage envisages time and attention as complex sets of layered and accumulative processes; as inflected by mood, context and occasion. It invokes an aesthetic but also ontological mode of attention which, like waiting, is neither wholly passive nor wholly active, and is not captured by the subject-object binary. It also suggests the particular accumulative and repetitive form of attention which Clark’s work composes, offers as a gift, and rewards. ## Notes 1. Thomas A. Clark, ‘In Praise of Walking’, in Distance and Proximity (Edinburgh: Canongate, 2000), pp. 13-22 (p. 19). [^] 2. See Terry Gifford, ‘Towards a Post-Pastoral View of British Poetry’ in The Environmental Tradition in English Literature, ed. John Parham (London: Routledge, 2016), pp. 51–63. [^] 3. Simon Malpas, ‘Touching Art: Aesthetics, Fragmentation and Community’, in The New Aestheticism, ed. John J. Joughin and Simon Malpas (Manchester: Manchester University Press, 1993), p. 89. [^] 4. Lambert Zuidervaart, ‘Theodor W. Adorno’, The Stanford Encyclopedia of Philosophy (Winter 2015 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2015/entries/adorno/>. The phrase ‘social antithesis of society’ is a direct quotation from Adorno’s Aesthetic Theory. [^] 5. Andrew Epstein, Attention Equals Life: The Pursuit of the Everyday in Contemporary Poetry and Culture (Oxford: Oxford University Press, 2016), p. 2. [^] 6. Epstein, p. 13. [^] 7. Epstein, p. 11. [^] 8. Epstein, p. 14. [^] 9. Epstein, p. 12. [^] 10. Epstein, p. 21. [^] 11. Epstein, p. 7. [^] 12. ‘For the discerning intellect of Man/When wedded to this goodly universe/In love and holy passion, shall find these/A simple produce of the common day’. Wordsworth, Preface to The Excursion, Wordsworth Poetical Works, ed. Thomas Hutchinson, rev. Ernest de Selincourt (1936; London: Oxford University Press, 1969), p. 590. [^] 13. Steven Connor, Postmodernist Culture: An Introduction to Theories of the Contemporary (2nd edn; Oxford: Blackwell, 1989), p. 121. [^] 14. Wilde, quoted Connor, p. 122. [^] 15. ‘Attention’ has been a significant concept in the reception of Clark’s work, though sometimes in relation to language rather than the world. For example, Tony Lopez writes that an early minimal poem ‘calls up a particular kind of attention to arbitrary linguistic features’ and another aims to ‘entice from us an attentiveness to language’. Tony Lopez, ‘Thomas A. Clark: Nationality, Modernism’, in Meaning Performance: Essays on Poetry (Cambridge: Salt, 2006), pp. 177–189 (pp. 179, 181). [^] 16. Dominic Smith, ‘Paying Attention: Philosophy as Strong Therapy for the Information Age’, World Literature and Dissent (Routledge, in press), TS, p. 3. [^] 17. 
Jonathan Crary, Suspensions of Perception: Attention, Spectacle, and Modern Culture (Cambridge, MA: MIT Press, 2001), p. 1; quoted Smith, p. 4. [^] 18. Lev Manovich suggests that the breadth of the term ‘interactivity’ limits its usefulness since human-computer interaction is ‘by definition interactive’. The Language of New Media (Cambridge, Mass: MIT Press, 2001), p. 55. [^] 19. Quoted Smith, p. 8. [^] 20. Quoted Smith, p. 15. [^] 21. For example Robert Sheppard writes that ‘The insistence that “attentive reading” is necessary reminds us that “to understand another person’s utterance means to orient oneself with respect to it”.’ Robert Sheppard, The Poetry of Saying: British Poetry and its Discontents, 1950–2000 (Liverpool: Liverpool University Press, 2005) p. 9. [^] 22. Smith, p. 3. [^] 23. Thomas A. Clark, Of Woods & Water: Forty Eight Delays (Moschatel Press, 2008). [^] 24. The technical meaning of delay in musical technology, relating to a delay produced in an audio signal, seems probably irrelevant here. [^] 25. Thomas A. Clark, Twenty Four Sentence About the Forest (Moschatel Press, 1982). This and a number of other points are indebted to Alice Tarbuck’s very detailed knowledge of the Clark archive at the Scottish Poetry Library. [^] 26. Peter Wohlleben, The Hidden Life of Trees: What They Feel, How They Communicate: Discoveries from a Secret World, trans Jane Billinghurst (2016; London: William Collins, 2017), p. 168. [^] 27. Thomas A. Clark, Waiting (Moschatel Press, 1992). [^] 28. Samuel Beckett, Waiting for Godot (1950; London: Faber and Faber, 1965), p. 48. [^] 29. Smith, p. 21. [^] 30. One of Clark’s works portrays the music for a traditional Irish folk song, in common or 4:4 time. Thomas A. Clark, The Dawning of the Day, (Pittenweem, Moschatel, 2015). [^] 31. ‘A Song From Armenia’, from ‘The Songbook of Sebastian Arrurruz’, Broken Hierarchies: Poems 1952–2012 (Oxford: Oxford University Press, 2013), p. 77. [^] 32. Gilles Deleuze, Difference and Repetition, trans. Paul Patton (1994; London: Bloomsbury, 2014), p. 1. [^] 33. James Williams, Gilles Deleuze’s Difference and Repetition: A Critical Introduction and Guide (Edinburgh: Edinburgh University Press, 2003), p. 33. [^] 34. ‘Clark’s interest is primarily in the materiality of text and its visual potential’. Alice Tarbuck, ‘Some Particulars’: The Poetry and Practice of Thomas A. Clark (PhD thesis, University of Dundee, 2018), p. 59. [^] 35. ‘The Rejection of Closure’ was originally delivered as a talk in 1983. It is available at: https://www.poetryfoundation.org/articles/69401/the-rejection-of-closure [accessed 19 June 2018]. [^] 36. Deleuze, Difference and Repetition, p. 1. [^] 37. Generosity (Dundee: Poetry Beyond Text, 2010); Gaelic Flowers (Dundee: University of Dundee, 2016) [^] 38. Poetry Beyond Text was a research project based at the Universities of Dundee and Kent, funded by the Arts and Humanities Research Council’s Beyond Text programme. Generosity can be seen online at http://www.poetrybeyondtext.org/clark-thomas.html. [^] 39. Adrian Parr, ‘Repetition’, in The Deleuze Dictionary, ed. Adrian Parr, p. 223. [^] 40. Williams, Gilles Deleuze’s Philosophy of Time, p. 3. [^] 41. Williams, Gilles Deleuze’s Philosophy of Time, p. 86, 87, 79, 87. [^] 42. Williams, Deleuze’s Philosophy of Time, p. 23. [^] 43. 
After Marvell (1980) is ‘a book of green pages without text’; after Andrew Marvell (2015) is ‘five hand-coloured squares of green in different shades, arranged horizontally along a small card; green shade: homage to Andrew Marvell (2016) is ‘twenty-one framed pictures, each a slightly different shade of green’; green shades (2016) ‘consists only of a pair of green-tinted spectacles’. Tarbuck. pp. 187–88. In this context, and in relation to Epstein’s theory of the refusal of transcendence in contemporary poetry of the ‘everyday’, it is worth recalling that Marvell’s line concludes a stanza which specifically describes a process of transcendence: ‘Yet it [the mind] creates, transcending these/Far other worlds, and other seas’. Andrew Marvell, ‘The Garden’, in The Complete Poems, ed. Elizabeth Story Donno (Harmondsworth: Penguin, 1972), p. 101. [^] 44. Williams, Gilles Deleuze’s Philosophy of Time, p. 81. [^] 45. Williams, Gilles Deleuze’s Philosophy of Time, p. 14. [^] 46. Williams, Gilles Deleuze’s Philosophy of Time, pp. 14, 15. [^] 47. Alice Tarbuck, ‘In, among, with and from: In Conversation with Thomas A. Clark’. PN Review, 42.5 (2016), pp. 37–41 (p. 40). [^] 48. See, for example, Robert Sheppard’s strictures on Movement poetry: ‘a poetry of closure, narrative coherence and grammatical and syntactic cohesion [which] … posited the existence of a stable ego’ (The Poetry of Saying, p. 27), with which he contrasts the resistance to such constructs of a (frequently paratactic) innovative poetry. [^] 49. Sheppard, p. 48. [^] 50. Alice Tarbuck, ‘Some Particulars’, p. 11. [^] 51. Alice Tarbuck argues that ‘Clark has always removed himself from the mainstream, and even from the mainstreams of the avant-garde. Clark is interested in formally innovative poetry precisely because it offers a move away from the mainstream, and thus a move away from the “anguished poetry of self” … this thesis has revealed Clark’s complex relationship with the influence of movements, individuals and forms more broadly’. ‘Some Particulars’, p. 318. [^] 52. See: Attila Dósa, ‘Poets and Other Animals: An Interview with John Burnside’, in Attila Dósa (ed.), Beyond Identity: New Horizons in Modern Scottish Poetry (Amsterdam and New York: Rodopi, 2009), pp. 113–134 (p. 121); Attila Dósa, ‘Kathleen Jamie: More Than Human’, in Beyond Identity; Jane Bennett, Vibrant Matter: A Political Ecology of Things (Durham, NC: Duke University Press, 2010); Timothy Morton, The Ecological Thought (Cambridge, MA: Harvard University Press, 2010); Donna J. Haraway, When Species Meet (Minneapolis: University of Minnesota Press, 2008); Val Plumwood, Environmental Culture: The Ecological Crisis of Reason (London: Routledge, 2002). [^] 53. James Williams, Gilles Deleuze’s Philosophy of Time, p. 3. [^] 54. Mark J.P. Wolf points out that, while interactivity is a quality frequently attributed to computer systems, interaction with a computer often has little or no permanent effect on the computer, whereas social, physical or chemical interactions produce ‘long-lasting and irreversible effects’ on humans. Mark J.P. Wolf, Abstracting Reality Art, Communication, and Cognition in the Digital Age (University Press of America: 2000), p. 161. [^] 55. T.J. Clark, The Sight of Death (New Haven and London: Yale University Press, 2006), p. 8. [^] 56. Gotthold Ephraim Lessing, Laocoon. An Essay upon the Limits of Painting and Poetry. With remarks illustrative of various points in the history of ancient art (1766). [^] 57. Heather H. 
Yeung, Spatial Engagement with Poetry (New York: Palgrave Macmillan, 2015), p. 65. [^] 58. For a critique of the ideology implicit in ‘landscape’, see Jonathan Smith, ‘The Lie That Blinds: Destabilizing the Text of Landscape’, in Place/Culture/Representation, eds. James Duncan and David Ley (London: Routledge, 1993), pp. 78–92, (pp. 78–79). [^] 59. T.J. Clark, p. 12. [^] 60. Andrew Bowie, From Romanticism to Critical Theory: The Philosophy of German Literary Theory (London and New York: Routledge, 1997), p. 18. [^] 61. T.J. Clark, p. 5. [^] 62. T.J. Clark, p. 12. [^] 63. T.J. Clark, p. 16. [^] 64. T.J. Clark, p. 93. [^] 65. T.J. Clark, pp. 93–4. [^] 66. The Eclogues, Georgics and Aeneid of Virgil, trans. C. Day Lewis (London: Oxford University Press, 1966), p. 44. [^] 67. T.J. Clark, p. 5. [^] ## Acknowledgements I would like to thank James Williams for reading a draft of this article and offering invaluable advice in relation to the philosophy of Gilles Deleuze. Thanks also to Alice Tarbuck, for advice on the archive and many fascinating discussions of Clark’s work over recent years. ## Competing Interests The author has no competing interests to declare.
https://pdglive.lbl.gov/Particle.action?init=0&node=M059&home=MXXX025
$c\overline{c}$ MESONS (including possibly non-$q\overline{q}$ states)

#### $\eta_c(2S)$

$I^G(J^{PC}) = 0^+(0^{-+})$. Quantum numbers are quark model predictions.

$\eta_c(2S)$ MASS: $3637.5 \pm 1.1$ MeV (S = 1.2)

$\eta_c(2S)$ WIDTH: $11.3^{+3.2}_{-2.9}$ MeV

| | Mode | Fraction ($\Gamma_i/\Gamma$) | Confidence level | $p$ (MeV/$c$) |
|---|---|---|---|---|
| $\Gamma_{1}$ | hadrons | not seen | | |
| $\Gamma_{2}$ | $K\overline{K}\pi$ | $(1.9\pm1.2)\%$ | | 1729 |
| $\Gamma_{3}$ | $K\overline{K}\eta$ | $(5\pm4)\times10^{-3}$ | | 1637 |
| $\Gamma_{4}$ | $2\pi^{+}2\pi^{-}$ | not seen | | 1792 |
| $\Gamma_{5}$ | $\rho^{0}\rho^{0}$ | not seen | | 1645 |
| $\Gamma_{6}$ | $3\pi^{+}3\pi^{-}$ | not seen | | 1749 |
| $\Gamma_{7}$ | $K^{+}K^{-}\pi^{+}\pi^{-}$ | not seen | | 1700 |
| $\Gamma_{8}$ | $K^{*0}\overline{K}^{*0}$ | not seen | | 1585 |
| $\Gamma_{9}$ | $K^{+}K^{-}\pi^{+}\pi^{-}\pi^{0}$ | $(1.4\pm1.0)\%$ | | 1667 |
| $\Gamma_{10}$ | $K^{+}K^{-}2\pi^{+}2\pi^{-}$ | not seen | | 1627 |
| $\Gamma_{11}$ | $K_S^0 K^{-}2\pi^{+}\pi^{-}$ + c.c. | seen | | 1666 |
| $\Gamma_{12}$ | $2K^{+}2K^{-}$ | not seen | | 1470 |
| $\Gamma_{13}$ | $\phi\phi$ | not seen | | 1506 |
| $\Gamma_{14}$ | $p\overline{p}$ | seen | | 1558 |
| $\Gamma_{15}$ | $p\overline{p}\pi^{+}\pi^{-}$ | seen | | 1461 |
| $\Gamma_{16}$ | $\gamma\gamma$ | $(1.9\pm1.3)\times10^{-4}$ | | 1819 |
| $\Gamma_{17}$ | $\gamma J/\psi(1S)$ | $<1.4\%$ | CL=90% | 500 |
| $\Gamma_{18}$ | $\pi^{+}\pi^{-}\eta$ | not seen | | 1766 |
| $\Gamma_{19}$ | $\pi^{+}\pi^{-}\eta^{\prime}$ | not seen | | 1680 |
| $\Gamma_{20}$ | $\pi^{+}\pi^{-}\eta_c(1S)$ | $<25\%$ | CL=90% | 537 |
http://www.thefullwiki.org/Polyhedron
# Polyhedron

A polyhedron (plural polyhedra or polyhedrons) is often defined as a geometric solid with flat faces and straight edges (the word polyhedron comes from the Classical Greek πολύεδρον, from poly-, stem of πολύς, "many," + -edron, form of έδρα, "base", "seat", or "face"). This definition of a polyhedron is not very precise, and to a modern mathematician is quite unsatisfactory. Grünbaum (1994, p. 43) observed, "The Original Sin in the theory of polyhedra goes back to Euclid, and through Kepler, Poinsot, Cauchy and many others ... [in that] at each stage ... the writers failed to define what are the 'polyhedra' ...." Mathematicians still do not agree as to exactly what makes something a polyhedron.

## Basis for definition

Any polyhedron can be built up from different kinds of element or entity, each associated with a different number of dimensions:

• 3 dimensions: The body is bounded by the faces, and is usually the volume enclosed by them.
• 2 dimensions: A face is a polygon bounded by a circuit of edges, and usually including the flat (plane) region inside the boundary. These polygonal faces together make up the polyhedral surface.
• 1 dimension: An edge joins one vertex to another and one face to another, and is usually a line segment. The edges together make up the polyhedral skeleton.
• 0 dimensions: A vertex (plural vertices) is a corner point.
• -1 dimension: The nullity is a kind of non-entity required by abstract theories.

More generally in mathematics and other disciplines, "polyhedron" is used to refer to a variety of related constructs, some geometric and others purely algebraic or abstract. A defining characteristic of almost all kinds of polyhedra is that just two faces join along any common edge. This ensures that the polyhedral surface is continuously connected and does not end abruptly or split off in different directions. A polyhedron is a 3-dimensional example of the more general polytope in any number of dimensions.

## Characteristics

### Names of polyhedra

Polyhedra are often named according to the number of faces. The naming system is again based on Classical Greek, for example tetrahedron (4), pentahedron (5), hexahedron (6), heptahedron (7), triacontahedron (30), and so on. Often this is qualified by a description of the kinds of faces present, for example the Rhombic dodecahedron vs. the Pentagonal dodecahedron. Other common names indicate that some operation has been performed on a simpler polyhedron, for example the truncated cube looks like a cube with its corners cut off, and has 14 faces (so it is also an example of a tetrakaidecahedron). Some special polyhedra have grown their own names over the years, such as Miller's monster or the Szilassi polyhedron.

### Edges

Edges have two important characteristics (unless the polyhedron is complex):

• An edge joins just two vertices.
• An edge joins just two faces.

These two characteristics are dual to each other.

### Euler characteristic

The Euler characteristic χ relates the number of vertices V, edges E, and faces F of a polyhedron:

$\chi = V - E + F.$

For a simply connected polyhedron, χ = 2. For a detailed discussion, see Proofs and Refutations by Imre Lakatos.
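As a quick illustration of this relation (not part of the original article), here is a minimal Python sketch that checks $\chi = V - E + F$ for the five Platonic solids; the vertex, edge and face counts used are the standard ones.

```python
# Check the Euler characteristic chi = V - E + F for a few
# simply connected polyhedra (counts are the standard values).
solids = {
    "tetrahedron":  (4, 6, 4),    # (V, E, F)
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in solids.items():
    chi = v - e + f
    print(f"{name:12s}  V={v:2d} E={e:2d} F={f:2d}  chi={chi}")
    assert chi == 2, "a simply connected polyhedron should give chi = 2"
```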
### Orientability

Some polyhedra, such as all convex polyhedra, have two distinct sides to their surface, for example one side can consistently be coloured black and the other white. We say that the figure is orientable. But for some polyhedra this is not possible, and the figure is said to be non-orientable. All polyhedra with odd-numbered Euler characteristic are non-orientable. A given figure with even χ < 2 may or may not be orientable.

### Vertex figure

For every vertex one can define a vertex figure, which describes the local structure of the figure around the vertex. If the vertex figure is a regular polygon, then the vertex itself is said to be regular.

### Duality

For every polyhedron we can construct a dual polyhedron having:

• faces in place of the original's vertices and vice versa,
• the same number of edges,
• the same Euler characteristic and orientability.

For a convex polyhedron the dual can be obtained by the process of polar reciprocation.

### Volume

The volume of an orientable polyhedron having an identifiable centroid can be calculated using the divergence theorem (the three-dimensional analogue of Green's theorem):

$\int_\Omega \operatorname{div}(\vec F)\,d\Omega = \oint_S \vec F \cdot d\vec S,$

by choosing the function

$\vec F = \frac{1}{3}\left( x\hat i + y\hat j + z\hat k \right),$

where $(x, y, z)$ is the position vector of a point on the surface enclosing the volume under consideration. Since $\operatorname{div}(\vec F) = 1$, the volume can be calculated as

$V = \oint_S \vec F \cdot \hat n \, dS,$

where the outward-pointing unit normal of the surface is $\hat n = n_x \hat i + n_y \hat j + n_z \hat k$. For a polyhedron with flat faces the integral reduces to a sum over the faces, and the final expression can be written as

$V = \frac{1}{3}\sum_{\text{faces}} \left( x\,n_x + y\,n_y + z\,n_z \right) A,$

where $(x, y, z)$ is now the centroid of a face, $\hat n$ its outward unit normal and $A$ its area.
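The face-by-face sum above maps directly onto code. The following is a minimal sketch (not from the source article) for a closed, triangulated surface whose triangles are wound so their normals point outwards; NumPy, the function name and the unit-cube test case are illustrative assumptions.

```python
import numpy as np

def polyhedron_volume(vertices, faces):
    """Volume via V = (1/3) * sum over faces of (centroid . n_hat) * area.

    vertices: (N, 3) array of points.
    faces: index triples, each wound counter-clockwise when seen from
           outside, so the face normals point outwards.
    """
    vertices = np.asarray(vertices, dtype=float)
    total = 0.0
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        centroid = (a + b + c) / 3.0
        normal = np.cross(b - a, c - a)   # length = 2 * area, outward direction
        area = np.linalg.norm(normal) / 2.0
        if area == 0.0:
            continue                      # skip degenerate triangles
        n_hat = normal / (2.0 * area)
        total += (centroid @ n_hat) * area
    return total / 3.0

# Unit cube split into 12 outward-facing triangles (assumed test case).
V = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
F = [(0,2,1),(0,3,2),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
     (1,2,6),(1,6,5),(2,3,7),(2,7,6),(3,0,4),(3,4,7)]
print(polyhedron_volume(V, F))  # expected: 1.0
```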
[Figure: a dodecahedron]

In geometry, a polyhedron is traditionally a three-dimensional shape that is made up of a finite number of polygonal faces which are parts of planes; the faces meet in pairs along edges which are straight-line segments, and the edges meet in points called vertices. Cubes, prisms and pyramids are examples of polyhedra. The polyhedron surrounds a bounded volume in three-dimensional space; sometimes this interior volume is considered to be part of the polyhedron, sometimes only the surface is considered, and occasionally only the skeleton of edges. A polyhedron is said to be convex if its surface (comprising its faces, edges and vertices) does not intersect itself and the line segment joining any two points of the polyhedron is contained in the interior or surface.

### Symmetrical polyhedra

Many of the most studied polyhedra are highly symmetrical. Of course it is easy to distort such polyhedra so they are no longer symmetrical. But where a polyhedral name is given, such as icosidodecahedron, the most symmetrical geometry is almost always implied, unless otherwise stated. Some of the most common names in particular are often used with "regular" in front or implied because for each there are different types which have little in common except for having the same number of faces. These are the triangular pyramid or tetrahedron, cube or hexahedron, octahedron, dodecahedron and icosahedron.

Polyhedra of the highest symmetries have all of some kind of element - faces, edges and/or vertices, within a single symmetry orbit. There are various classes of such polyhedra:

• Isogonal or Vertex-transitive if all vertices are the same, in the sense that for any two vertices there exists a symmetry of the polyhedron mapping the first isometrically onto the second.
• Isotoxal or Edge-transitive if all edges are the same, in the sense that for any two edges there exists a symmetry of the polyhedron mapping the first isometrically onto the second.
• Isohedral or Face-transitive if all faces are the same, in the sense that for any two faces there exists a symmetry of the polyhedron mapping the first isometrically onto the second.
• Regular if it is vertex-transitive, edge-transitive and face-transitive (this implies that every face is the same regular polygon; it also implies that every vertex is regular).
• Quasi-regular if it is vertex-transitive and edge-transitive (and hence has regular faces) but not face-transitive. A quasi-regular dual is face-transitive and edge-transitive (and hence every vertex is regular) but not vertex-transitive.
• Semi-regular if it is vertex-transitive but not edge-transitive, and every face is a regular polygon. (This is one of several definitions of the term, depending on author. Some definitions overlap with the quasi-regular class.) A semi-regular dual is face-transitive but not vertex-transitive, and every vertex is regular.
• Uniform if it is vertex-transitive and every face is a regular polygon, i.e. it is regular, quasi-regular or semi-regular. A uniform dual is face-transitive and has regular vertices, but is not necessarily vertex-transitive.
• Noble if it is face-transitive and vertex-transitive (but not necessarily edge-transitive). The regular polyhedra are also noble; they are the only noble uniform polyhedra.

A polyhedron can belong to the same overall symmetry group as one of higher symmetry, but will have several groups of elements (for example faces) in different symmetry orbits.

#### Uniform polyhedra and their duals

Uniform polyhedra are vertex-transitive and every face is a regular polygon. They may be regular, quasi-regular, or semi-regular, and may be convex or starry. The uniform duals are face-transitive and every vertex figure is a regular polygon. Face-transitivity of a polyhedron corresponds to vertex-transitivity of the dual and conversely, and edge-transitivity of a polyhedron corresponds to edge-transitivity of the dual. In most duals of uniform polyhedra, faces are irregular polygons. The regular polyhedra are an exception, because they are dual to each other. Each uniform polyhedron shares the same symmetry as its dual, with the symmetries of faces and vertices simply swapped over. Because of this some authorities regard the duals as uniform too. But this idea is not held widely: a polyhedron and its symmetries are not the same thing. The uniform polyhedra and their duals are traditionally classified according to their degree of symmetry, and whether they are convex or not.

| | Convex uniform | Convex uniform dual | Star uniform | Star uniform dual |
|---|---|---|---|---|
| Regular | Platonic solids | | Kepler-Poinsot polyhedra | |
| Quasiregular | Archimedean solids | Catalan solids | (no special name) | (no special name) |
| Semiregular | (no special name) | (no special name) | | |
| | Prisms | Dipyramids | Star Prisms | Star Dipyramids |
| | Antiprisms | Trapezohedra | Star Antiprisms | Star Trapezohedra |

#### Noble polyhedra

A noble polyhedron is both isohedral (equal-faced) and isogonal (equal-cornered). Besides the regular polyhedra, there are many other examples. The dual of a noble polyhedron is also noble.
#### Symmetry groups

The polyhedral symmetry groups (using Schoenflies notation) are all point groups and include:

Those with chiral symmetry do not have reflection symmetry and hence have two enantiomorphous forms which are reflections of each other. The snub Archimedean polyhedra have this property.

### Other polyhedra with regular faces

#### Equal regular faces

A few families of polyhedra, where every face is the same kind of polygon:

• Deltahedra have equilateral triangles for faces.
• With regard to polyhedra whose faces are all squares: if coplanar faces are not allowed, even if they are disconnected, there is only the cube. Otherwise there is also the result of pasting six cubes to the sides of one, all seven of the same size; it has 30 square faces (counting disconnected faces in the same plane as separate). This can be extended in one, two, or three directions: we can consider the union of arbitrarily many copies of these structures, obtained by translations of (expressed in cube sizes) (2,0,0), (0,2,0), and/or (0,0,2), hence with each adjacent pair having one common cube. The result can be any connected set of cubes with positions (a,b,c), with integers a,b,c of which at most one is even.
• There is no special name for polyhedra whose faces are all equilateral pentagons or pentagrams. There are infinitely many of these, but only one is convex: the dodecahedron. The rest are assembled by pasting together combinations of the regular polyhedra described earlier: the dodecahedron, the small stellated dodecahedron, the great stellated dodecahedron and the great icosahedron.

There exists no polyhedron whose faces are all identical and are regular polygons with six or more sides, because the vertex of three regular hexagons defines a plane. (See infinite skew polyhedron for exceptions with zig-zagging vertex figures.)

##### Deltahedra

A deltahedron (plural deltahedra) is a polyhedron whose faces are all equilateral triangles. There are infinitely many deltahedra, but only eight of these are convex:

• 3 regular convex polyhedra (3 of the Platonic solids)
• 5 non-uniform convex polyhedra (5 of the Johnson solids)

#### Johnson solids

Norman Johnson sought which non-uniform polyhedra had regular faces. In 1966, he published a list of 92 convex solids, now known as the Johnson solids, and gave them their names and numbers. He did not prove there were only 92, but he did conjecture that there were no others. Victor Zalgaller in 1969 proved that Johnson's list was complete.

### Other important families of polyhedra

#### Pyramids

Pyramids include some of the most time-honoured and famous of all polyhedra.

#### Stellations and facettings

Stellation of a polyhedron is the process of extending the faces (within their planes) so that they meet to form a new polyhedron. It is the exact reciprocal of the process of facetting, which is the process of removing parts of a polyhedron without creating any new vertices.

#### Zonohedra

A zonohedron is a convex polyhedron where every face is a polygon with inversion symmetry or, equivalently, symmetry under rotations through 180°.

#### Toroidal polyhedra

A toroidal polyhedron is a polyhedron with an Euler characteristic of 0 or smaller, representing a torus surface.

#### Compounds

Polyhedral compounds are formed from two or more polyhedra. These compounds often share the same vertices as other polyhedra and are often formed by stellation. Some are listed in the list of Wenninger polyhedron models.
#### Orthogonal polyhedra

An orthogonal polyhedron is one all of whose faces meet at right angles, and all of whose edges are parallel to axes of a Cartesian coordinate system. Aside from a rectangular box, orthogonal polyhedra are nonconvex. They are the 3D analogs of 2D orthogonal polygons, also known as rectilinear polygons. Orthogonal polyhedra are used in computational geometry, where their constrained structure has enabled advances on problems unsolved for arbitrary polyhedra, for example, unfolding the surface of a polyhedron to a polygonal net.

## Generalisations of polyhedra

The name 'polyhedron' has come to be used for a variety of objects having similar structural properties to traditional polyhedra.

### Apeirohedra

A classical polyhedral surface comprises finite, bounded plane regions, joined in pairs along edges. If such a surface extends indefinitely it is called an apeirohedron. Examples include:

### Complex polyhedra

A complex polyhedron is one which is constructed in complex Hilbert 3-space. This space has six dimensions: three real ones corresponding to ordinary space, with each accompanied by an imaginary dimension. See for example Coxeter (1974).

### Curved polyhedra

Some fields of study allow polyhedra to have curved faces and edges.

#### Spherical polyhedra

The surface of a sphere may be divided by line segments into bounded regions, to form a spherical polyhedron. Much of the theory of symmetrical polyhedra is most conveniently derived in this way.

Spherical polyhedra have a long and respectable history:

• The first known man-made polyhedra are spherical polyhedra carved in stone.
• Poinsot used spherical polyhedra to discover the four regular star polyhedra.
• Coxeter used them to enumerate all but one of the uniform polyhedra.

Some polyhedra, such as hosohedra and dihedra, exist only as spherical polyhedra and have no flat-faced analogue.

#### Curved spacefilling polyhedra

Two important types are:

• Bubbles in froths and foams, such as Weaire-Phelan bubbles.
• Spacefilling forms used in architecture. See for example Pearce (1978).

### General polyhedra

More recently mathematics has defined a polyhedron as a set in real affine (or Euclidean) space of any dimension n that has flat sides. It could be defined as the union of a finite number of convex polyhedra, where a convex polyhedron is any set that is the intersection of a finite number of half-spaces. It may be bounded or unbounded. In this meaning, a polytope is a bounded polyhedron.

All traditional polyhedra are general polyhedra, and in addition there are examples like:

• A quadrant in the plane. For instance, the region of the cartesian plane consisting of all points above the horizontal axis and to the right of the vertical axis: { ( x, y ) : x ≥ 0, y ≥ 0 }. Its sides are the two positive axes.
• An octant in Euclidean 3-space, { ( x, y, z ) : x ≥ 0, y ≥ 0, z ≥ 0 }.
• A prism of infinite extent. For instance a doubly-infinite square prism in 3-space, consisting of a square in the xy-plane swept along the z-axis: { ( x, y, z ) : 0 ≤ x ≤ 1, 0 ≤ y ≤ 1 }.
• Each cell in a Voronoi tessellation is a convex polyhedron. In the Voronoi tessellation of a set S, the cell A corresponding to a point c ∈ S is bounded (hence a traditional polyhedron) when c lies in the interior of the convex hull of S, and otherwise (when c lies on the boundary of the convex hull of S) A is unbounded.

### Hollow faced or skeletal polyhedra

It is not necessary to fill in the faces of a figure before we can call it a polyhedron.
For example, Leonardo da Vinci devised frame models of the regular solids, which he drew for Pacioli's book Divina Proportione. In modern times, Branko Grünbaum (1994) made a special study of this class of polyhedra, in which he developed an early idea of abstract polyhedra. He defined a face as a cyclically ordered set of vertices, and allowed faces to be skew as well as planar.

## Non-geometric polyhedra

Various mathematical constructs have been found to have properties also present in traditional polyhedra.

### Topological polyhedra

A topological polytope is a topological space given along with a specific decomposition into shapes that are topologically equivalent to convex polytopes and that are attached to each other in a regular way. Such a figure is called simplicial if each of its regions is a simplex, i.e. in an n-dimensional space each region has n+1 vertices. The dual of a simplicial polytope is called simple. Similarly, a widely studied class of polytopes (polyhedra) is that of cubical polyhedra, when the basic building block is an n-dimensional cube.

### Abstract polyhedra

An abstract polyhedron is a partially ordered set (poset) of elements whose partial ordering obeys certain rules. Theories differ in detail, but essentially the elements of the set correspond to the body, faces, edges and vertices of the polyhedron. The empty set corresponds to the null polytope, or nullitope, which has a dimensionality of −1. These posets belong to the larger family of abstract polytopes in any number of dimensions.

### Polyhedra as graphs

Any polyhedron gives rise to a graph, or skeleton, with corresponding vertices and edges. Thus graph terminology and properties can be applied to polyhedra. For example:

## History

### Prehistory

Stones carved in shapes showing the symmetries of various polyhedra have been found in Scotland and may be as much as 4,000 years old. These stones show not only the form of various symmetrical polyhedra, but also the relations of duality amongst some of them (that is, that the centres of the faces of the cube give the vertices of an octahedron, and so on). Examples of these stones are on display in the John Evans room of the Ashmolean Museum at Oxford University. It is impossible to know why these objects were made, or how the sculptor gained the inspiration for them.

Other polyhedra have of course made their mark in architecture - cubes and cuboids being obvious examples, with the earliest four-sided pyramids of ancient Egypt also dating from the Stone Age.

The Etruscans preceded the Greeks in their awareness of at least some of the regular polyhedra, as evidenced by the discovery near Padua (in Northern Italy) in the late 1800s of a dodecahedron made of soapstone, and dating back more than 2,500 years (Lindemann, 1987). Pyritohedral crystals are found in northern Italy[citation needed].

### Greeks

The earliest known written records of these shapes come from Classical Greek authors, who also gave the first known mathematical description of them. The earlier Greeks were interested primarily in the convex regular polyhedra, which came to be known as the Platonic solids. Pythagoras knew at least three of them, and Theaetetus (circa 417 B.C.) described all five. Eventually, Euclid described their construction in his Elements. Later, Archimedes expanded his study to the convex uniform polyhedra which now bear his name. His original work is lost and his solids come down to us through Pappus.
### Muslims and Chinese

After the end of the Classical era, Islamic scholars continued to make advances; for example, in the tenth century Abu'l Wafa described the convex regular and quasiregular spherical polyhedra. Meanwhile in China, dissection of the cube into its characteristic tetrahedron (orthoscheme) and related solids was used as the basis for calculating volumes of earth to be moved during engineering excavations.

### Renaissance

As with other areas of Greek thought maintained and enhanced by Islamic scholars, Western interest in polyhedra revived during the Renaissance. There is much to be said here: Piero della Francesca, Pacioli, Leonardo da Vinci, Wenzel Jamnitzer, Dürer and others, leading up to Kepler.

### Star polyhedra

For almost 2,000 years, the concept of a polyhedron had remained as developed by the ancient Greek mathematicians.

Johannes Kepler realised that star polygons could be used to build star polyhedra, which have non-convex regular polygons, typically pentagrams, as faces. Some of these star polyhedra may have been discovered before Kepler's time, but he was the first to recognise that they could be considered "regular" if one removed the restriction that regular polytopes be convex. Later, Louis Poinsot realised that star vertex figures (circuits around each corner) can also be used, and discovered the remaining two regular star polyhedra. Cauchy proved Poinsot's list complete, and Cayley gave them their accepted English names: (Kepler's) the small stellated dodecahedron and great stellated dodecahedron, and (Poinsot's) the great icosahedron and great dodecahedron. Collectively they are called the Kepler-Poinsot polyhedra.

The Kepler-Poinsot polyhedra may be constructed from the Platonic solids by a process called stellation. Most stellations are not regular. The study of stellations of the Platonic solids was given a big push by H. S. M. Coxeter and others in 1938, with the now famous paper The 59 icosahedra. This work has recently been re-published (Coxeter, 1999).

The reciprocal process to stellation is called facetting (or faceting). Every stellation of one polytope is dual, or reciprocal, to some facetting of the dual polytope. The regular star polyhedra can also be obtained by facetting the Platonic solids. Bridge (1974) listed the simpler facettings of the dodecahedron, and reciprocated them to discover a stellation of the icosahedron that was missing from the famous "59". More have been discovered since, and the story is not yet ended.

## Polyhedra in nature

For natural occurrences of regular polyhedra, see Regular polyhedron: Regular polyhedra in nature. Irregular polyhedra appear in nature as crystals.

## References

• Coxeter, H.S.M.; Regular Complex Polytopes, CUP (1974).
• Cromwell, P.; Polyhedra, CUP hbk (1997), pbk. (1999).
• Grünbaum, B.; Polyhedra with Hollow Faces, Proc of NATO-ASI Conference on Polytopes ... etc. (Toronto 1993), ed T. Bisztriczky et al., Kluwer Academic (1994) pp. 43–70.
• Grünbaum, B.; Are your polyhedra the same as my polyhedra? Discrete and comput. geom: the Goodman-Pollack festschrift, ed. Aronov et al. Springer (2003) pp. 461–488. (pdf)
• Pearce, P.; Structure in nature is a strategy for design, MIT (1978)

## Books on polyhedra

### Introductory books, also suitable for school use

• Cromwell, P.; Polyhedra, CUP hbk (1997), pbk. (1999).
• Cundy, H.M. & Rollett, A.P.; Mathematical models, 1st Edn. hbk OUP (1951), 2nd Edn. hbk OUP (1961), 3rd Edn. pbk Tarquin (1981).
• Holden; Shapes, space and symmetry, (1971), Dover pbk (1991).
• Pearce, P and Pearce, S: Polyhedra primer, Van Nost. Reinhold (May 1979), ISBN 0442264968, ISBN 978-0442264963.
• Richeson, David S. (2008) Euler's Gem: The Polyhedron Formula and the Birth of Topology. Princeton University Press.
• Senechal, M. & Fleck, G.; Shaping Space: a Polyhedral Approach, Birkhäuser (1988), ISBN 0817633510
• Tarquin publications: books of cut-out and make card models.
• Wenninger, Magnus; Polyhedron models for the classroom, pbk (1974)
• Wenninger, M.; Polyhedron models, CUP hbk (1971), pbk (1974).
• Wenninger, M.; Spherical models, CUP.
• Wenninger, M.; Dual models, CUP.
• Coxeter, H.S.M., DuVal, Flather & Petrie; The fifty-nine icosahedra, 3rd Edn. Tarquin.
• Coxeter, H.S.M.; Twelve geometric essays. Republished as The beauty of geometry, Dover.
• Thompson, Sir D'A. W.; On growth and form, (1943). (not sure if this is the right category for this one, I haven't read it).

### Design and architecture bias

• Critchlow, K.; Order in space.
• Pearce, P.; Structure in nature is a strategy for design, MIT (1978)
• Williams, R.; The Geometry of Natural Structure (40th Anniversary Edition), San Francisco: Eudaemon Press (2009). ISBN 978-0-9823465-1-8

### Historic books

WorldCat English: Polygons and Polyhedra: Theory and History.

• Fejes Toth, L.;
• Kepler, J.; De harmonices Mundi (Latin. Available in English translation).
• Pacioli, L.;

# Simple English

Image gallery (captions only): dodecahedron (regular polyhedron); small stellated dodecahedron (regular star); icosidodecahedron (uniform); great cubicuboctahedron (uniform star); rhombic triacontahedron (uniform dual); elongated pentagonal cupola (convex regular-faced); octagonal prism (uniform prism); square antiprism (uniform antiprism). Most dice are polyhedra.

A Polyhedron (one polyhedron, many polyhedra or polyhedrons) is a geometrical shape. It has flat faces and straight edges. Usually it is defined by the number of faces, or edges. Mathematicians do not agree what makes a polyhedron.

## Naming

Usually, polyhedra are named by the number of faces they have. The first polyhedra are the tetrahedron, which is made of 4 triangles, the pentahedron (5 faces, can look like a 4-sided pyramid), the hexahedron (6 faces, usually looks like a cube if it is regular), and the heptahedron (7 faces, can look like a prism based on a pentagon, or a pyramid based on a hexagon, amongst others).
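As promised in the Volume section, here is a small computational sketch of the face sum. It is an illustration added to this article, not part of the original text: the triangulated tetrahedron data and the helper name triple() are choices made for the example. Each planar (triangular) face, listed counter-clockwise as seen from outside, contributes one sixth of a scalar triple product, which is the same quantity as (1/3) × (face centroid · outward normal) × face area.

```c
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

/* Scalar triple product  p . (q x r). */
static double triple(Vec3 p, Vec3 q, Vec3 r)
{
    return p.x * (q.y * r.z - q.z * r.y)
         + p.y * (q.z * r.x - q.x * r.z)
         + p.z * (q.x * r.y - q.y * r.x);
}

int main(void)
{
    /* Unit right tetrahedron; every face is listed counter-clockwise
       as seen from outside, so its normal points outwards. */
    Vec3 v[4]       = { {0,0,0}, {1,0,0}, {0,1,0}, {0,0,1} };
    int  face[4][3] = { {0,2,1}, {0,3,2}, {0,1,3}, {1,2,3} };

    double volume = 0.0;
    for (int f = 0; f < 4; f++)
        volume += triple(v[face[f][0]], v[face[f][1]], v[face[f][2]]) / 6.0;

    printf("volume = %f (expected 1/6 = %f)\n", volume, 1.0 / 6.0);
    return 0;
}
```

For a general polyhedron one would loop over all faces, triangulating each one fan-wise from its first vertex; the consistent outward orientation is what makes contributions from faces on opposite sides of the origin come in with the correct sign.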
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 7, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5797051191329956, "perplexity": 1786.7123891022256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823839.40/warc/CC-MAIN-20171020063725-20171020083725-00299.warc.gz"}
https://www.zbmath.org/?q=an%3A0958.30029
zbMATH — the first resource for mathematics

Metrics of constant curvature 1 with three conical singularities on the 2-sphere. (English) Zbl 0958.30029

Summary: Let $$\text{Met}_1(\Sigma)$$ be the set of positive semi-definite conformal metrics of constant curvature 1 with conical singularities on a compact Riemann surface $$\Sigma$$. Suppose that $$d\sigma^2 \in\text{Met}_1 (\Sigma)$$ has conical singularities at points $$p_j\in\Sigma$$ $$(j=1,\dots,n)$$ with order $$\beta_j(>-1)$$, that is, it admits a tangent cone of angle $$2\pi(\beta_j +1)$$ at each $$p_j$$. A formal sum $$D=\sum^n_{j=1} \beta_jp_j$$ is called the divisor of $$d\sigma^2$$. Then the Gauss-Bonnet formula implies that $$\chi (\Sigma, D):= \chi(\Sigma) +\sum^n_{j=1} \beta_j>0$$. The divisor $$D$$ is called subcritical, critical, or supercritical when $$\delta(\Sigma,D):= \chi (\Sigma,D) -2\,\text{Min}_{j=1, \dots,n} \{1,\beta_j +1\}$$ is negative, zero, or positive, respectively. M. Troyanov [Trans. Am. Math. Soc. 324, No. 2, 793-821 (1991; Zbl 0724.53023)] showed that if $$\chi(\Sigma,D)>0$$, there exists a pseudometric in $$\text{Met}_1(\Sigma)$$ with divisor $$D$$ whenever it is subcritical. On the other hand, for the supercritical case several obstructions are known and the existence problem for the metrics is difficult: M. Troyanov [Lect. Notes Math. 1410, 296-306 (1989; Zbl 0697.53037)] gave a classification of metrics of constant curvature 1 with at most two conical singularities on the 2-sphere. In the paper, the authors gave a necessary and sufficient condition for the existence and uniqueness of a metric with three conical singularities of given order on the 2-sphere. As shown by the authors, there is a one-to-one correspondence between the set $$\text{Met}_1(\Sigma)$$ and the set of branched CMC-1 (constant mean curvature one) immersions of $$\Sigma$$, with finitely many points excluded, into the hyperbolic 3-space with given hyperbolic Gauss map. To show the theorem, this correspondence plays an important role. It should be remarked that classical work of F. Klein [Vorlesungen über die hypergeometrische Funktion (1933; Zbl 0461.33001)] is related to the paper.

MSC:
30F10 Compact Riemann surfaces and uniformization
53C21 Methods of global Riemannian geometry, including PDE methods; curvature restrictions
53A10 Minimal surfaces in differential geometry, surfaces with prescribed mean curvature
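A quick numerical illustration of these definitions (added here for concreteness; it is not part of the review above): take $$\Sigma=S^2$$, so $$\chi(\Sigma)=2$$, and the divisor $$D=-\tfrac{1}{2}p_1-\tfrac{1}{2}p_2-\tfrac{1}{2}p_3$$, i.e. three cone points of angle $$2\pi(\beta_j+1)=\pi$$. Then $$\chi(\Sigma,D)=2-\tfrac{3}{2}=\tfrac{1}{2}>0$$ and $$\delta(\Sigma,D)=\tfrac{1}{2}-2\min\{1,\tfrac{1}{2}\}=-\tfrac{1}{2}<0$$, so this divisor is subcritical and existence already follows from Troyanov's result; the necessary and sufficient condition of the paper is what settles the remaining critical and supercritical triples of orders.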
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9365729689598083, "perplexity": 252.8889897527901}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057329.74/warc/CC-MAIN-20210922041825-20210922071825-00645.warc.gz"}
https://naijavarcity.com.ng/past-questions/mathematics/mathematics2/
Mathematics 2017 Past Questions | WAEC

Study the following Mathematics past questions and answers for JAMB, WAEC, NECO and Post-JAMB. Get prepared with official past questions and answers for upcoming examinations.

11. Solve: $$-\frac{1}{4}$$ < $$\frac{3}{4}$$ (3x – 2) < $$\frac{1}{2}$$
• A. $$-\frac{5}{9}$$ < x < $$\frac{8}{9}$$
• B. $$-\frac{8}{9}$$ < x < $$\frac{7}{9}$$
• C. $$-\frac{8}{9}$$ < x < $$\frac{5}{9}$$
• D. $$-\frac{7}{9}$$ < x < $$\frac{8}{9}$$

12. Simplify: 3x – (p – x) – (r – p)
• A. 2x – r
• B. 2x + r
• C. 4x – r
• D. 2x – 2p – r

13. An arc of a circle of radius 7.5 cm is 7.5 cm long. Find, correct to the nearest degree, the angle which the arc subtends at the centre of the circle. [Take π = 22/7]
• A. 29°
• B. 57°
• C. 65°
• D. 115°

14. Water flows out of a pipe at a rate of 40π cm³ per second into an empty cylindrical container of base radius 4 cm. Find the height of water in the container after 4 seconds.
• A. 10 cm
• B. 14 cm
• C. 16 cm
• D. 20 cm

15. The dimensions of a water tank are 13 cm, 10 cm and 70 cm. If it is half-filled with water, calculate the volume of water in litres.
• A. 4.55 litres
• B. 7.50 litres
• C. 8.10 litres
• D. 9.55 litres

16. If the total surface area of a solid hemisphere is equal to its volume, find the radius.
• A. 3.0 cm
• B. 4.5 cm
• C. 5.0 cm
• D. 9.0 cm

17. Which of the following is true about a parallelogram?
• A. opposite angles are supplementary
• B. opposite angles are complementary
• C. opposite angles are equal
• D. opposite angles are reflex angles

18. Calculate the gradient (slope) of the line joining the points (-1, 1) and (2, -2).
• A. -1
• B. 1/2
• C. 12
• D. 1

19. If P(2, 3) and Q(2, 5) are points on a graph, calculate the length PQ.
• A. 6 units
• B. 5 units
• C. 4 units
• D. 2 units

20. A bearing of 320° expressed as a compass bearing is
• A. N 50° W
• B. N 40° W
• C. N 50° E
• D. N 40° E
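Worked solutions to two of the items above, added here as an illustration of the expected method (they are not part of the original question paper):

Question 13: arc length $$L=\frac{\theta}{360}\times 2\pi r$$, so $$\theta=\frac{360\times 7.5}{2\times\frac{22}{7}\times 7.5}=\frac{360\times 7}{44}\approx 57^{o}$$, which is option B.

Question 14: the volume delivered in 4 seconds is $$40\pi\times 4=160\pi\ cm^{3}$$, so the height of water is $$h=\frac{160\pi}{\pi\times 4^{2}}=10\ cm$$, which is option A.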
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5551609992980957, "perplexity": 3316.734084910941}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00621.warc.gz"}
https://gmatclub.com/forum/if-p-and-n-are-positive-integers-and-p-n-what-is-the-rema-93971.html
# If p and n are positive integers and p > n, what is the rema : GMAT Data Sufficiency (DS)

If p and n are positive integers and p > n, what is the remainder when p^2 - n^2 is divided by 15?

(1) The remainder when p + n is divided by 5 is 1.
(2) The remainder when p - n is divided by 3 is 1.

[Reveal] Spoiler: OA
Attachment: DS8.PNG

Math Expert (10 May 2010):

If p and n are positive integers and p > n, what is the remainder when p^2 - n^2 is divided by 15?

First of all $$p^2 - n^2=(p+n)(p-n)$$.

(1) The remainder when p + n is divided by 5 is 1. No info about p - n. Not sufficient.

(2) The remainder when p - n is divided by 3 is 1. No info about p + n. Not sufficient.

(1)+(2) "The remainder when p + n is divided by 5 is 1" can be expressed as $$p+n=5t+1$$ and "The remainder when p - n is divided by 3 is 1" can be expressed as $$p-n=3k+1$$. Multiplying these two gives $$(p+n)(p-n)=(5t+1)(3k+1)=15kt+5t+3k+1$$; the first term (15kt) is clearly divisible by 15 (r = 0), but we don't know about 5t + 3k + 1. For example, t = 1 and k = 1 gives r = 9, BUT t = 7 and k = 3 gives r = 0. Not sufficient.

OR by number plugging: if $$p+n=11$$ (11 divided by 5 yields remainder of 1) and $$p-n=1$$ (1 divided by 3 yields remainder of 1) then $$(p+n)(p-n)=11$$ and the remainder upon dividing 11 by 15 is 11; BUT if $$p+n=21$$ (21 divided by 5 yields remainder of 1) and $$p-n=1$$ (1 divided by 3 yields remainder of 1) then $$(p+n)(p-n)=21$$ and the remainder upon dividing 21 by 15 is 6. Not sufficient.

Answer: E.

Manager (04 Oct 2010):

Hi Bunuel, according to my understanding the answer should be C: given (p+n)/5 = rem(1) and (p-n)/3 = rem(1), so (p^2 - n^2)/15 = (p+n)/5 * (p-n)/3, so the remainder will be equal to 1*1 = 1. Please correct me where I am wrong.

Math Expert (04 Oct 2010):

The highlighted part (the claim that the remainder must be 1*1 = 1) is not correct. There are both algebraic and number plugging approaches in my previous post showing that the answer is E. You can check it yourself: if $$p=6$$ and $$n=5$$ then $$p+n=11$$ (11 divided by 5 yields remainder of 1) and $$p-n=1$$ (1 divided by 3 yields remainder of 1), so $$(p+n)(p-n)=11$$ and the remainder upon dividing 11 by 15 is 11; if $$p=11$$ and $$n=10$$ then $$p+n=21$$ (21 divided by 5 yields remainder of 1) and $$p-n=1$$ (1 divided by 3 yields remainder of 1), so $$(p+n)(p-n)=21$$ and the remainder upon dividing 21 by 15 is 6.

Director (09 Aug 2013):

Hi Bunuel, I understood your approach for this problem. However, I would like your opinion on why the solution given in the older post - (p^2 - n^2)/15 = (p+n)/5 * (p-n)/3, so the remainder equals 1*1 = 1 - is wrong. Please advise in detail. Rgds, TGC!

Intern (01 Oct 2016):

Take p + n = 21, which means (p + n)/5 leaves remainder 1, and p - n = 7, which means (p - n)/3 leaves remainder 1. Then p^2 - n^2 = 196 - 49 = 147, which divided by 15 leaves remainder 12. So the remainder cannot be pinned down using (1) and (2) together. Hence the answer is E.
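To make the "not sufficient" verdict concrete, here is a small brute-force check, written in C as an added illustration (the search bound of 11 is an arbitrary choice). It prints every pair (p, n) with p > n that satisfies both statements, together with the remainder of p^2 - n^2 on division by 15; the remainders differ from pair to pair, which is exactly why the answer is E.

```c
#include <stdio.h>

/* List the pairs (p, n), p > n, with p up to 11 that satisfy both
   statements, (p + n) mod 5 == 1 and (p - n) mod 3 == 1, together
   with the remainder of (p^2 - n^2) divided by 15. */
int main(void)
{
    for (int p = 2; p <= 11; p++) {
        for (int n = 1; n < p; n++) {
            if ((p + n) % 5 == 1 && (p - n) % 3 == 1) {
                printf("p=%2d n=%2d -> (p^2 - n^2) mod 15 = %2d\n",
                       p, n, (p * p - n * n) % 15);
            }
        }
    }
    return 0;
}
```

Among the pairs it prints are p = 6, n = 5 with remainder 11 and p = 11, n = 10 with remainder 6, matching the examples quoted in the discussion above.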
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42074987292289734, "perplexity": 3152.3224454119786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00482-ip-10-171-10-108.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/209654/proof-of-continuity-for-a-real-function/210006
# Proof of continuity for a real function!

Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a real function, and satisfy that: for all $x\in\mathbb{R}$ $$\lim_{r\to x,r\in\mathbb{Q}}f(r)=f(x).$$ Show that $f$ is continuous on $\mathbb{R}.$

- You have a continuous function $\tilde{f}$ on $\mathbb Q$. Now the question is: what is the extension of this function to $\mathbb R$? Try this physicsforums.com/showthread.php?t=430083 –  Nikita Evseev Oct 9 '12 at 5:58

For a fixed $x$, and a given $\epsilon>0$, first find a $\delta$ so that, if $r$ is rational and $|r-x|<\delta$ then $|f(r)-f(x)|<\epsilon$. Then if $t$ is any real with say $|t-x|<\delta/2$ we can pick a rational $r$ very close to $t$ such that $r$ is also within $\delta$ of $x$. Now apply the triangle inequality to $(f(t)-f(x)) = (f(t)-f(r)) + (f(r)-f(x))$ and obtain $|f(t)-f(x)|\le|f(t)-f(r)|+|f(r)-f(x)|.$ Since $t$ is close to $r$ the first term is small, and since $r$ is close to $x$ the second term is small. The idea is that in each separate term one rational and one real occurs, so that your assumption about limits through rational values converging to any real applies. I didn't fill in all the details, but with some manipulation one can get the thing less than $2\epsilon$.

EDIT: This needs to be thought through more. I agree with the OP that it's not clear how the details should go. But at least it seems to me it will go through... Another try, more details:

Given the fixed real $x$ and some $\epsilon>0$. First we can pick $\delta_1>0$ so that for rational $r$ we have $|r-x|<\delta_1$ implies $|f(r)-f(x)|<\epsilon/2$. Define $\delta=\delta_1/2$. Suppose $t$ is real with $|t-x|<\delta = \delta_1/2$. We can now pick $\delta'>0$ so that for rational $r$ we have $|r-t|<\delta'$ implies $|f(r)-f(t)|<\epsilon/2$. Now put $\delta_2=\min(\delta,\delta')$ and pick a rational $r$ with $|r-t|<\delta_2$. Then we have $|r-x|\le|r-t|+|t-x|<\delta_1/2+\delta_1/2=\delta_1$, so that $|f(r)-f(x)|<\epsilon/2$; and from $|r-t|<\delta_2$ we also have $|r-t|<\delta'$, so that $|f(r)-f(t)|<\epsilon/2$. We finally arrive at $|f(t)-f(x)|<\epsilon$ on applying the triangle inequality.

- you can have a try. There will be something wrong when you take $\delta$ –  Riemann Oct 9 '12 at 8:02
- Riemann: I think you'll find the writeup is now in standard "epsilon delta" format for continuity at x. Thanks for the note, as before it wasn't completely clear. –  coffeemath Oct 9 '12 at 19:53

Fix a $\xi\in{\mathbb R}$. If $f$ were not continuous at $\xi$ we could find an $\epsilon_0>0$ and for each $\delta>0$ a point $x_\delta\in U_\delta(\xi)$ with $$|f(x_\delta)- f(\xi)|\geq\epsilon_0\ .$$ Consider such an $x_\delta$. As $$\lim_{q\to x_\delta, \ q\in{\mathbb Q}} f(q)=f(x_\delta)$$ we can find a $q_\delta\in{\mathbb Q}$ with $|q_\delta-x_\delta|<\delta$ such that $$|f(q_\delta)-f(x_\delta)|<{\epsilon_0\over2}\ .$$ It follows that there is for each $\delta>0$ a point $q_\delta\in U_{2\delta}(\xi)\cap{\mathbb Q}$ such that $$|f(q_\delta)-f(\xi)|\geq{\epsilon_0\over2}\ .$$ This contradicts $\lim_{q\to \xi, \ q\in{\mathbb Q}} f(q)=f(\xi)$.

-

It looks like Rudin's theorem 4.6 (in the Third edition) is almost exactly this. To paraphrase it to make it match the situation: If $x$ is a limit point of $\mathbb Q$, then $f$ is continuous at $x$ if and only if $\lim_{r\rightarrow x,r\in \mathbb Q}f(r)=f(x)$. Clearly, all $x\in \mathbb R$ are limit points of $\mathbb Q$. Sadly, Rudin's proof is, and I quote exactly, "This is clear if we compare Definitions 4.1 and 4.5".
The former being the definition of the limit and the latter the definition of continuity. Maybe what he's getting at is that being a limit point guarantees that a $\delta$ neighborhood of $x$ will always exist that contains an appropriate $r$, and we can use the exact same $\delta$ for any corresponding $\epsilon$ used to establish the limit in the hypothesis.

-

Here is my own answer (right or not).

Proof: We will use the Heine theorem (the sequential criterion for continuity): suppose $x_n\to x_0$ as $n\to \infty$; we want to prove that $$\lim_{n\to \infty}f(x_n)=f(x_0).$$ Due to $$\lim_{r\to x,r\in\mathbb{Q}}f(r)=f(x)\ \ (*)$$ we know that for any given $\epsilon>0$ there exists $\delta>0$ such that when $|r-x_n|<\delta$ and $r\in\mathbb{Q}$, $$|f(r)-f(x_n)|<\epsilon.$$ Now take $\epsilon_n=\frac{1}{n}$; then there exists $\delta_n'>0$ such that when $|r-x_n|<\delta_n'$ and $r\in\mathbb{Q}$, $$|f(r)-f(x_n)|<\epsilon_n=\frac{1}{n}.$$ Let $\delta_n=\min\{\delta_n',1/n\}$; obviously $\delta_n\leq\frac{1}{n}$. Then take a rational $r_n$ such that $|r_n-x_n|<\delta_n\leq\frac{1}{n}$, so that $$|f(r_n)-f(x_n)|<\epsilon_n=\frac{1}{n}.$$ By $|r_n-x_n|<\frac{1}{n}$ and $x_n\to x_0$, we know that $r_n\to x_0$. Combining this with $(*)$, we get $$\lim\limits_{n\to \infty}f(r_n)=f(x_0).$$ For any given $\epsilon>0$ there exists $N>0$ such that when $n>N$, $$\frac{1}{n}<\frac{\epsilon}{2}\ \text{and}\ |f(r_n)-f(x_0)|<\frac{\epsilon}{2}.$$ So $|f(x_n)-f(x_0)|\leq|f(x_n)-f(r_n)|+|f(r_n)-f(x_0)|<\frac{1}{n}+\frac{\epsilon}{2}<\epsilon,$ which completes the proof.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9838025569915771, "perplexity": 137.60522340093362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246661733.69/warc/CC-MAIN-20150417045741-00056-ip-10-235-10-82.ec2.internal.warc.gz"}
https://byjus.com/question-answer/balance-the-following-equation-by-ion-electron-method-kmno4-feso4-h2so4-fe2-so4-3-k2so4/
Question

# Balance the following equation by the ion-electron method:

KMnO4 + FeSO4 + H2SO4 -----> Fe2(SO4)3 + K2SO4 + H2O

Solution

The given reaction is incorrect as written (the manganese product is missing) and cannot be balanced using the ion-electron method. The following equation is the correct representation of the reaction:

KMnO4 + FeSO4 + H2SO4 --> Fe2(SO4)3 + MnSO4 + K2SO4 + H2O
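For reference, a worked sketch of the ion-electron balancing of the corrected equation, added here (the steps follow the usual acidic-medium procedure and are easy to re-check):

Oxidation half-reaction: Fe2+ ---> Fe3+ + e-   (multiplied by 5)
Reduction half-reaction: MnO4- + 8H+ + 5e- ---> Mn2+ + 4H2O
Adding the two: MnO4- + 8H+ + 5Fe2+ ---> Mn2+ + 5Fe3+ + 4H2O
In molecular form: 2KMnO4 + 10FeSO4 + 8H2SO4 ---> K2SO4 + 2MnSO4 + 5Fe2(SO4)3 + 8H2O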
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9533125758171082, "perplexity": 7786.214420212397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362589.37/warc/CC-MAIN-20211203030522-20211203060522-00607.warc.gz"}
https://learn.careers360.com/ncert/question-arrange-the-following-set-of-compounds-in-order-of-their-decreasing-relative-reactivity-with-an-electrophile-e-superscript-plus-a-chlorobenzene-2-4-dinitrochlorobenzene-p-nitrochlorobenzene/
Q

# 13.22 Arrange the following set of compounds in order of their decreasing relative reactivity with an electrophile, E^+:

(a) Chlorobenzene, 2,4-dinitrochlorobenzene, p-nitrochlorobenzene

Electrophiles are electron-deficient species, so they seek a nucleophile which donates electrons to them. The higher the electron density on a benzene ring, the higher is its reactivity towards an electrophile. $NO_{2}$ is an electron-withdrawing group; it deactivates the benzene ring towards electrophiles by decreasing the electron density of the ring. Decreasing order of reactivity with an electrophile ($E^+$):

Chlorobenzene > p-nitrochlorobenzene > 2,4-dinitrochlorobenzene
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6084083318710327, "perplexity": 7243.958632173254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145538.32/warc/CC-MAIN-20200221203000-20200221233000-00368.warc.gz"}
http://project-navel.com/navel/news/magazines/2003-06.html
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997242093086243, "perplexity": 10.49472725131493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824899.75/warc/CC-MAIN-20171021224608-20171022004608-00383.warc.gz"}
https://www.gamedev.net/forums/topic/20108-problem/
# Problem!

## Recommended Posts

I'll put it simply:

/* DEFINE KEY CHARACTERS */
char upkey;
char downkey;
char leftkey;
char rightkey;
char keypress;

/* PROTOTYPES */
void empty(void);

/* KEY ASSIGNMENTS */
printf("Choose 'UP' Key\n");
upkey=getch();   // assign up key
empty();         // empty keyboard buffer
do {
    printf("Choose 'DOWN' Key\n");
    downkey=getch();
} while(downkey==upkey);   // check for double assignment
empty();
do {
    printf("Choose 'LEFT' Key\n");
    leftkey=getch();
} while(leftkey==downkey || leftkey==upkey);
empty();
do {
    printf("Choose 'RIGHT' Key\n");
    rightkey=getch();
} while(rightkey==leftkey || rightkey==downkey || rightkey==upkey);

// ... code

keypress=getch();
switch(keypress)
{
    case upkey:    y--; break;
    case downkey:  y++; break;
    case leftkey:  x--; break;
    case rightkey: x++; break;
}

// ... code

/* EMPTY FUNCTION */
void empty(void)
{
    while(kbhit())
    {
        getch();
    }
}

It works okay... for most keys... upon trying to hit the up arrow, down arrow, left arrow, and right arrow respectively, I found that in game, hitting "up" would send me up, hitting "down" would send me up, hitting "left" would send me left, and hitting "right" would send me left. Since the keyboard buffer was emptied after each one, I figured maybe the arrow keys required a larger variable space, so I used the wchar_t variable to hold them, which is supposed to be twice as large, so it should fit them... but it didn't. Same result. Any ideas?

- Goblin
"A woodchuck would chuck as much wood as a woodchuck could if a woodchuck could chuck wood. Deal."

If you try using getch() to get a key -- like an arrow key -- that has a 2-byte (extended) ASCII code, you'll just end up with a null character. The up arrow key, for instance, is denoted by a null character followed by (char)72. To get those keypresses with getch(), you have to call getch() twice, something like this:

char k;
if ((k = getch()) == 0)
{
    switch (k = getch())
    {
        case 72:
            // code to handle up arrow key goes here
            break;
        case 75:
            // code to handle left arrow key goes here
            break;
        case 77:
            // code to handle right arrow key goes here
            break;
        case 80:
            // code to handle down arrow key goes here
            break;
    }
}

The way the code you've listed is set up, you're only reading the null character, so that's what gets assigned as every key. I think. If you want to use getch() for input then you'll have to come up with some way to flag whether the user's key selections are regular ASCII codes or extended ASCII codes, so you don't have any problems with input in the program. It's not the best way to do keyboard input for a game though. Are you programming in Windows? If so, it's much easier just to use GetAsyncKeyState(), since it won't have the problems that getch() has: double-keypress speed and simultaneous keypresses, both of which are necessities for games. Or there's always DirectInput if you're at all familiar with DirectX. If you're working in DOS, a keyboard handler written in assembly is practically a requirement for games, especially if it's something fast-paced like an action game.

-Ironblayde
Aeon Software

Assembly?!
*melts*

- Goblin
"A woodchuck would chuck as much wood as a woodchuck could if a woodchuck could chuck wood. Deal."
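Below is a small, self-contained sketch of the fix the replies point towards: read a possibly-extended key as one integer code, and compare it with a run-time chosen key using if/else (C case labels must be compile-time constants, so the original switch over upkey/downkey would not even compile). It is an illustration added here, not code from the thread: it assumes the DOS/Windows-style <conio.h> getch(), and the helper name read_key() and the 0x100 offset are arbitrary choices.

```c
#include <stdio.h>
#include <conio.h>   /* getch() -- DOS/Windows style console input */

/* Return a plain key as its ASCII code; return an extended key
   (arrow keys, function keys) as 0x100 plus its second byte, so both
   kinds of key fit in one int and can be compared directly.
   (Depending on the runtime, the prefix byte is 0 or 0xE0.) */
static int read_key(void)
{
    int c = getch();
    if (c == 0 || c == 0xE0)
        return 0x100 + getch();
    return c;
}

int main(void)
{
    printf("Choose 'UP' key\n");
    int upkey = read_key();

    int y = 0;
    printf("Press the chosen key to move up, or 'q' to quit.\n");
    for (;;) {
        int k = read_key();
        if (k == 'q')
            break;
        if (k == upkey)      /* run-time comparison instead of a switch */
            y--;
        printf("y = %d\n", y);
    }
    return 0;
}
```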
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25667500495910645, "perplexity": 15604.493307705428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892059.90/warc/CC-MAIN-20180123171440-20180123191440-00187.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-9-quadratic-relations-and-conic-sections-9-7-solve-quadratic-systems-9-7-exercises-skill-practice-page-661/15
## Algebra 2 (1st Edition) Published by McDougal Littell # Chapter 9 Quadratic Relations and Conic Sections - 9.7 Solve Quadratic Systems - 9.7 Exercises - Skill Practice - Page 661: 15 #### Answer $(-1,-4),(-6.5,7)$ #### Work Step by Step Substituting the second equation ($y=-6-2x$) into the first one we get: $4x^2-5(-6-2x)^2=-76\\4x^2-5(36+24x+4x^2)=-76\\-16x^2-104-120x=0\\2x^2+13+15x=0\\(x+1)(2x+13)=0$ Thus $x=-1$ or $x=-6.5$. If $x=-1$, then $y=-4$, and if $x=-6.5$, then $y=7$. Thus the solutions are: $(-1,-4),(-6.5,7)$
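A quick check of both solutions, added here (it is not part of the step-by-step text): for $x=-1$ we get $y=-6-2(-1)=-4$ and $4(-1)^2-5(-4)^2=4-80=-76$; for $x=-6.5$ we get $y=-6-2(-6.5)=7$ and $4(-6.5)^2-5(7)^2=169-245=-76$, so both pairs satisfy the original system.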
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9570375680923462, "perplexity": 1189.5796032332764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711003.56/warc/CC-MAIN-20221205032447-20221205062447-00836.warc.gz"}
https://tclcertifications.com/my-big-gty/237f36-sum-of-exponential-distribution
Jan 02, 2021

sum of exponential distribution

The Gamma random variable of the exponential distribution with rate parameter λ can be expressed as: $$Z=\sum_{i=1}^{n}X_{i}$$ Here, Z = gamma random variable. 1. PROPOSITION 2. 2) so – according to Prop. We obtain: PROPOSITION 4 (m = 3). An interesting property of the exponential distribution is that it can be viewed as a continuous analogue of the geometric distribution. The distribution-specific functions can accept parameters of multiple exponential distributions. 1 – we can write: The reader has likely already realized that we have the expressions of and , thanks to Prop. …, where f_X is the distribution of the random vector ….

$$X=$$ lifetime of a radioactive particle; $$X=$$ how long you have to wait for an accident to occur at a given intersection. Sum of exponential random variables over their indices. The reader might have recognized that the density of Y in Prop. 2. This has been the quality of my life for most of the last two decades. And once more, with a great effort, my mind, which is not so young anymore, started her slow process of recovery. Let be independent random variables with an exponential distribution with pairwise distinct parameters , respectively. Now, calculate the probability function at different values of x to derive the distribution curve.

• E(S_n) = Σ_{i=1}^{n} E(T_i) = n/λ.

That is, if , then, (8) (2) The rth moment of Z can be expressed as (9). Cumulant generating function: by definition, the cumulant generating function for a random variable Z is obtained from …; by expansion using a Maclaurin series, (10). Desperately searching for a cure. Then, some days ago, the miracle happened again and I found myself thinking about a theorem I was working on in July. If we let Y_i = X_i / t, i = 1, …, n − 1 then, as the Jacobian of … Consider I want x random numbers that sum up to one and that distribution is exponential. exponential distribution, mean and variance of exponential distribution, exponential distribution calculator, exponential distribution examples, memoryless property of exponential … Therefore, X is a two- … I know that they will then not be completely independent anymore.

This lecture discusses how to derive the distribution of the sum of two independent random variables. We explain first how to derive the distribution function of the sum and then how to derive its probability mass function (if the summands are discrete) or its probability density function (if the summands are continuous). Below, suppose random variable X is exponentially distributed with rate parameter λ, and $$x_{1},\dotsc,x_{n}$$ are n independent samples from X, with sample mean $$\bar{x}$$. (15.7) The above example describes the process of computing the pdf of a sum of continuous random variables. Then $$W = \min(W_1, \ldots, W_n)$$ is the winning time of the race, and $$W$$ has an Exponential distribution with rate parameter equal to the sum of the individual contestant rate parameters. I concluded this proof last night. We just have to substitute in Prop. … A paper on this same topic has been written by Markus Bibinger and it is available here; read about it, together with further references, in "Notes on the sum and maximum of independent exponentially distributed random variables with different scale parameters" by Markus Bibinger. So does anybody know a way so that the probabilities are still exponentially distributed? … identically distributed exponential random variables with mean 1/λ. The two random variables and (with n
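For reference, the closed form these propositions are building toward, namely the density of a sum of independent exponential random variables with pairwise distinct rate parameters $\lambda_1,\dots,\lambda_n$ (the hypoexponential distribution), is stated here from the standard literature rather than reconstructed from the damaged text above:

$$f_{X_1+\cdots+X_n}(t)=\Big(\prod_{i=1}^{n}\lambda_i\Big)\sum_{j=1}^{n}\frac{e^{-\lambda_j t}}{\prod_{k\neq j}(\lambda_k-\lambda_j)},\qquad t\ge 0,$$

which for $n=2$ reduces to $\frac{\lambda_1\lambda_2}{\lambda_2-\lambda_1}\big(e^{-\lambda_1 t}-e^{-\lambda_2 t}\big)$.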
https://rd.springer.com/article/10.1007%2FBF00276205
# On a transcendental equation in the stability analysis of a population growth model

## Summary

We consider the rate equation ṅ = rn for the density n of a single species population in a constant environment. We assume only that there is a positive constant solution n*, that the rate of increase r depends on the history of n and that r decreases for large n. The stability properties of the solution n* depend on the location of the eigenvalues of the linearized functional differential equation. These eigenvalues are the complex solutions λ of the equation

λ + α ∫_{−1}^{0} exp[λa] ds(a) = 0

with α > 0 and s increasing, s(−1) = 0, s(0) = 1. We give conditions on α and s which ensure that all eigenvalues have negative real part, or that there are eigenvalues with positive real part. In the case of the simplest smooth function s (s = id + 1), we obtain a theorem which describes the distribution of all eigenvalues in the complex plane for every α > 0.

Walther, H.-O. On a transcendental equation in the stability analysis of a population growth model. J. Math. Biology 3, 187–195 (1976). https://doi.org/10.1007/BF00276205
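For the special case s = id + 1 mentioned above, ds(a) = da and the characteristic equation reduces to λ + α(1 − e^{−λ})/λ = 0. As a rough numerical illustration (a sketch under the stated assumptions, not part of the paper), one can hunt for complex roots with a standard solver and look at the sign of their real parts for a given α:

```python
import numpy as np
from scipy.optimize import fsolve

def char_eq(lam, alpha):
    # lam + alpha * integral_{-1}^{0} exp(lam*a) da, using s(a) = a + 1
    return lam + alpha * (1.0 - np.exp(-lam)) / lam

def system(v, alpha):
    z = char_eq(complex(v[0], v[1]), alpha)
    return [z.real, z.imag]

def rightmost_root(alpha, n_starts=300, seed=0):
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_starts):
        x0 = [rng.uniform(-8.0, 4.0), rng.uniform(0.0, 25.0)]  # upper half plane
        sol, _, ok, _ = fsolve(system, x0, args=(alpha,), full_output=True)
        z = complex(sol[0], sol[1])
        if ok == 1 and not any(abs(z - r) < 1e-4 for r in found):
            found.append(z)
    # Only the roots this crude search happens to find; roots come in
    # conjugate pairs, so searching the upper half plane is enough.
    return max(found, key=lambda z: z.real)

for alpha in (0.5, 2.0, 8.0):
    print("alpha =", alpha, "-> rightmost root found:", rightmost_root(alpha))
```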
https://economics.stackexchange.com/questions/5549/stagflation-and-the-labor-force
Stagflation and the Labor Force

Which of the following best explains how an economy can experience stagflation?

1. Women and teenagers stayed out of the labor force.
2. A negative supply shock causes factor prices to increase.

My thoughts: While 2 is correct, I think 1 is also correct. The fact that women and teens stay out of the labor force will increase the wage and thus cause the supply curve to shift to the left, which would cause stagflation. Am I wrong?

• Are you talking about shocking the labor force (i.e. all women and teens exit the market on May 7th, 2015) or an economy in which women and teens are not a part of the labor force? – Nox May 7 '15 at 16:22
• In a normal economy, which could cause stagflation: 1 or 2? – Kun May 7 '15 at 16:33
• Number 2 is "best", because it has historical precedent in the United States 1980s, whereas suddenly removing all labor force is unprecedented. "Best" prohibits this type of "both right" answer contesting. – RegressForward May 7 '15 at 18:29
• "cause the supply curve to shift and thus cause stagflation". If I were to grade this kind of hand-waving argumentation, it'd get 0 points. – FooBar May 7 '15 at 18:59
• @RegressForward Also, the question is which of the following best explains how an economy can experience stagflation. If you replace can with has, empirical precedent is a proper argument. Otherwise, just because it didn't, doesn't mean it won't. – FooBar May 7 '15 at 19:07

Definition

In economics, stagflation, a portmanteau of stagnation and inflation, is a situation where the inflation rate is high, the economic growth rate slows down, and unemployment remains steadily high.

Stagflation hence requires a high unemployment rate. The unemployment rate can be defined as

$$u = \frac{U}{POP}\\ u =\frac{U}{LF}$$

that is, either the number of unemployed over population (more common) or over the labor force.

Direct increase of unemployment?

Removing women and teens from the labor force does not affect anything in the first definition, and decreases the unemployment rate by the second definition.

Indirect increase of unemployment?

As stagflation requires a high unemployment rate, reducing the labor force size cannot directly create stagflation. Your only argument could then be that a reduced labor force somehow leads to a higher unemployment rate.

Most likely, this is orthogonal. Equilibrium unemployment is a composite of frictional unemployment and voluntary unemployment. I cannot think of a reasonable argument why a reduction of the labor force should increase the relative share of either of these (unless, of course, voluntary unemployment is higher among men than women, which is false).

Level effects and growth rate effects

Even more importantly, there is an important distinction between shocks to the level versus shocks to the growth rate. To the extent that - after women and teens have left - the growth rate of the labor force is the same as before, long term effects are negligible. In a world with exponential growth, a shock to the level is - in the long run - negligible, as we will catch up quite quickly. A shock to the growth rate, however, is permanent.

Of course, "in the long run" is relative, but there is some idea of persistence behind stagflation. An exit of women from the labor force would, in my opinion, cause a sudden drop in output, but would not affect growth rates.

• So a drop in output caused by a leftward shift of the short run supply curve will cause inflation due to the AD/AS diagram while the AD didn't change, right?
– Kun May 7 '15 at 19:20 • @Kun You have a bunch of people who are not working any more, and hence have lower income. Are you sure that their demand is the same? – FooBar May 7 '15 at 19:23 • @Kun if the answer was sufficient and helpful, don't forget to mark it as such (answered). – FooBar May 8 '15 at 13:46
https://www.maplesoft.com/support/help/MapleSim/view.aspx?path=componentLibrary/multibody/contacts/forces/CylinderCylinderContact
Cylinder Cylinder Contact - MapleSim Help Cylinder Cylinder Contact Cylinder-cylinder contact force model Description The Cylinder Cylinder Contact model connects Cylinder contact elements. Activation Contact forces are generated only when the contacts are enabled. The active parameter selects how the contacts are enabled. It has the following settings: Always Active, the default, means the contacts are always enabled. Boolean Signal means the contacts are enabled when the enable contact boolean input is true. Start/Stop Time means the contacts are enabled at specified start time, ${T}_{\mathrm{on}}$, and disabled at a specified stop time, ${T}_{\mathrm{off}}$. The on/off parameter is used with the Start/Stop Time selection and has the following settings: Start Time means the contacts are enabled at ${T}_{\mathrm{on}}$. Stop Time means the contacts are disabled at ${T}_{\mathrm{off}}$. Start/Stop Time means the contacts are enabled at ${T}_{\mathrm{on}}$ and disabled at ${T}_{\mathrm{off}}$. Contact Properties The use record boolean parameter, if enabled, specifies the name of an external record parameter that defines the parameters of the contact. The mode parameter selects one of three modes: Linear spring and damper, Linear spring and limited damper, and Hunt and Crossley. The first two use the $c$ and $d$ parameters to set the spring and damping constants. The Hunt and Crossley model uses the parameters ${c}_{n}$, ${d}_{n}$, $n$, $p$, and $q$. See the Multibody Contact Modes help page for the resulting force equations. The $\mathrm{\mu }$ parameter is the coefficient of friction between contacting bodies. The ${k}_{\mathrm{\mu }}$ parameter is a smoothness coefficient for sliding friction, it scales $\mathrm{\mu }$ by $\mathrm{tanh}\left({k}_{\mathrm{\mu }}\left|{v}_{t}\right|\right)$, where ${v}_{t}$ is the tangential velocity. The $\mathrm{\epsilon }$ parameter specifies a minimum length used when normalizing vectors. 
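To make the roles of these parameters concrete, here is a minimal sketch of how the "Linear spring and damper" normal force and the tanh-smoothed Coulomb friction described above might combine. It is only an illustration under stated assumptions (the no-adhesion clipping and the friction direction convention are assumptions here); the actual force equations are the ones documented on the Multibody Contact Modes help page.

```python
import numpy as np

def contact_force(delta, delta_dot, v_t, c=1e4, d=0.0, mu=0.3, k_mu=1.0):
    """Sketch of a penalty-style contact force.

    delta     -- penetration depth (m), positive when the bodies overlap
    delta_dot -- penetration rate (m/s)
    v_t       -- tangential (sliding) velocity at the contact point (m/s)
    """
    if delta <= 0.0:
        return 0.0, 0.0                       # no contact, no force
    # "Linear spring and damper" mode: normal force from the spring constant c
    # and damping constant d, clipped so the contact only pushes (assumption).
    f_n = max(0.0, c * delta + d * delta_dot)
    # Coulomb friction with the tanh smoothing described above: mu is scaled
    # by tanh(k_mu * |v_t|) so the friction force vanishes smoothly as the
    # sliding velocity goes to zero; the sign convention is an assumption.
    f_t = -mu * np.tanh(k_mu * abs(v_t)) * f_n * np.sign(v_t)
    return f_n, f_t

print(contact_force(delta=1e-3, delta_dot=-0.02, v_t=0.25))
```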
Connections

| Name | Description | Modelica ID |
| --- | --- | --- |
| enable_contact | Optional boolean input; enable contact | enable_contact |
| port_1 | Connection to cylinders | port_1 |
| port_2 | Connection to cylinders | port_2 |

Parameters

| Name | Default | Units | Description | Modelica ID |
| --- | --- | --- | --- | --- |
| active | Always Active | - | Selects contact activation | active |
| on/off | Start Time | - | Selects start/stop times | onoff |
| $T_{\mathrm{on}}$ | 0 | s | On time | Ton |
| $T_{\mathrm{off}}$ | 0 | s | Off time | Toff |
| use record | false | - | Use contact properties record | useRecord |
| mode | Linear spring and damper | - | Contact force formulation | mode |
| $c$ | $10^4$ | N/m | Spring constant (c > 0) | c |
| $d$ | 0 | N·s/m | Damping constant | d |
| $c_n$ | $10^4$ | - | Nonlinear spring constant (cn > 0) | cn |
| $d_n$ | 0 | - | Nonlinear damping constant | dn |
| $n$ | 1.5 | - | Nonlinear elastic force exponent | n |
| $p$ | $n$ | - | Nonlinear damping force exponent | p |
| $q$ | 1 | - | Nonlinear damping force exponent | q |
| $\mu$ | 0 | - | Coefficient of friction | mu |
| $k_{\mu}$ | 1 | - | Smoothness coefficient for sliding friction | kTANH |
| $\epsilon$ | $1 \cdot 10^{-6}$ | - | Minimum length of vectors for normalization | eps |
| contact properties |  | - | Name of contact property record component | conparams |
| $n_{\mathrm{cylinder1}}$ | 1 | - | Number of cylinders at port 1 | nCylinder1 |
| use cylinder to cylinder contact at port 1 | false | - | True means model contacts between the cylinders at port 1 | useCylinderCylinderContact1 |
| $n_{\mathrm{cylinder2}}$ | 1 | - | Number of cylinders at port 2 | nCylinder2 |
| use cylinder to cylinder contact at port 2 | false | - | True means model contacts between the cylinders at port 2 | useCylinderCylinderContact2 |
https://reliccastle.com/threads/2536/
# CompletedPokemon Royal Version (Chronos Isles Expansion OUT NOW) This project is complete. Any future revisions will be bug fixes or small updates. Project Status Completed Project Version 1.3.2 Join the Official Discord Server: https://discord.gg/CcvnsEB Plot Welcome to the Aristo Region, a quiet region in the Pokemon world. It just so happens that your father and mother are the king and queen of this region. Just as you have turned 10 years old it is time for the Royal Contest to begin. Here you will journey out with your brother, Victor, and Sister, Alice, across the region to complete the gym challenge, all to decide who will become heir to the throne. Have you got what it takes to become the next ruler of the Aristo Region? Features Pokemon from Gen 1-8 A brand new region to explore Pokemon Reserves - Sanctuaries that breed the starter Pokemon for the various regions of the world. Several Side Quests A new evil team intent on overthrowing the crown Following Pokemon Optional gen 6 style Exp.Share Temporal Pokemon - Pokemon from different time periods Current Version All 8 Gyms and Pokemon League Post game Content 15 Legendary Pokemon to find and catch 18 Temporal Pokemon Evolution Lines to hunt down Version 1.3 Chronos Isles OUT NOW -A whole new postgame story set on the Chronos Isles -A Battle Frontier -A Training Dojo to both help level and boost stats of your Pokemon -12 New Temporal Pokemon lines (4 of which are in the base game and not the Chronos Isles) -Zen Tower Challenges - A new area with various challenges including some familiar faces -Most Legendary Pokemon now available (Only missing Ultra Beasts, some mythicals and the gen 8 dlc legendaries) -A Shiny Machine? -Various other bug/glitch/QoL updates Credits: Ultimate Title Screens: - Luka S.J. Fast Forward Scripts: - Marin - mej71 Gen 7 Follow Pokemon: Sprites mady by Larryturbo, Princess-Phoenix, Kidkatt, Zender1752, SageDeoxys. Find us at: larryturbo.deviantart.com princess-phoenix.deviantart.com kidkatt.deviantart.com zender1752.deviantart.com sagedeoxys.deviantart.com Sprites: • princess-pheonix devinatart • Jefries22 deviantart Gen 8 Icons: • leParagon • Kalalokki • NoelleMBrooks • Anarlaurendil (Only Melmetal) Link to the original resource: https://www.deviantart.com/leparagon/art/Gen-8-Icon-sprites-40x30-823261656 Smogon XY Sprite Project: Smogon Sword/Shield Sprite Project: Gen 8 Follow Sprites by SageDeoxys Sun and Moon Sprite Pack: • Marin • Zeak6464 • Tapu Fini • SpartaLazor • leparagon • BlackOutG5 • Rune • M3rein • Rigbycwts • Rot8er_ConeX • James Davy • Luka S.J. - Music used from Bensound.com - Thundaga for the amazing tutorials to help create this - My incredible wife Jess for testing everything, making custom sprites and most of all putting up with me. - Most importantly Gamefreak for making one of the best game series ever, they also own all of this so. Notes I started this project in 2017 where all that was done was the first 4 gyms. I have finally been able to finish the game with everything I wanted for it. This game is a complete passion project for myself after playing Pokemon for years and having ideas that just kept coming. I have future post game content ideas which will hopefully be released in the future, along with more Temporal Pokemon. Please let me know if you played and any feedback you have.​ Last edited: #### Aki ##### Ace trainer Member I've only played a little bit so far but I like this game. 
There's some basic gripes; the battle/overworld sprites of any Galar Pokemon are blurry, a lot of maps are a bit big even though they have good flow. But the writing in this game is really nice, I like talking to every NPC. The premise too is a pretty unique setup, and even though it kind of leads into a familiar Pokemon journey, I think what really sells it is the consistent writing. Any game could just say I'm a princess fighting for the crown, but in this game, 1. The setup really motivates my rivals/siblings in a believable way, and 2. Characters do react to meeting a princess and I love that. The fan freakouts are the best but the sweet old people just trying to wish me good luck are adorable. Anyway the actual gameplay?? The gyms are hard without being able to cheese it with a type advantage! I started with Espurr, so the water type I was able to get for the first gym had a little too much type overlap and it was still a struggle. For the second gym I thought I'd gotten lucky with that free grass pokemon? But everything knows ice moves and my whole team is weak, I'm gonna be stuck here awhile :P I'm loving my Fire/Ice Quilava, that typing is so much fun and I can't wait to find more temporal pokemon. #### Norfolk Gaming ##### Trainer Member Joined Sep 14, 2017 Posts 79 I've only played a little bit so far but I like this game. There's some basic gripes; the battle/overworld sprites of any Galar Pokemon are blurry, a lot of maps are a bit big even though they have good flow. But the writing in this game is really nice, I like talking to every NPC. The premise too is a pretty unique setup, and even though it kind of leads into a familiar Pokemon journey, I think what really sells it is the consistent writing. Any game could just say I'm a princess fighting for the crown, but in this game, 1. The setup really motivates my rivals/siblings in a believable way, and 2. Characters do react to meeting a princess and I love that. The fan freakouts are the best but the sweet old people just trying to wish me good luck are adorable. Anyway the actual gameplay?? The gyms are hard without being able to cheese it with a type advantage! I started with Espurr, so the water type I was able to get for the first gym had a little too much type overlap and it was still a struggle. For the second gym I thought I'd gotten lucky with that free grass pokemon? But everything knows ice moves and my whole team is weak, I'm gonna be stuck here awhile :P I'm loving my Fire/Ice Quilava, that typing is so much fun and I can't wait to find more temporal pokemon. Hey thanks for the feedback, I really appreciate you checking it out. Yeah the Galar Pokemon are either super tiny or the correct size but blurry, which of the two looked better. I'm looking into sorting it out though. #### Wizarmonfan ##### Rookie Member Joined Aug 28, 2017 Posts 7 Mediafire doesn't want to work for me for some reason, so I was wondering if you could please upload the game to mega nz? #### Norfolk Gaming ##### Trainer Member Joined Sep 14, 2017 Posts 79 Mediafire doesn't want to work for me for some reason, so I was wondering if you could please upload the game to mega nz? Try Link 2 and let me know if that's worked #### Wizarmonfan ##### Rookie Member Joined Aug 28, 2017 Posts 7 It worked. Thank you very much. :D ##### Just a wolf that loves the shadows on a hot day :) Member Joined Jun 29, 2018 Posts 73 Hi. I love your game so far. I have encountered a problem. 
So I tried to get the mystery gift in the menu but it just leaves me hanging and then tell me the script is taking too long and that it is restarting the game. I tried it twice. #### Norfolk Gaming ##### Trainer Member Joined Sep 14, 2017 Posts 79 Hi. I love your game so far. I have encountered a problem. So I tried to get the mystery gift in the menu but it just leaves me hanging and then tell me the script is taking too long and that it is restarting the game. I tried it twice. Hi, thanks for playing, really appreciate it. There currently isn't a mystery gift implemented in the game so that will be why. #### GalacticGaming ##### Novice Member Joined Jan 29, 2020 Posts 13 After the interview at the radio station, no matter who I talk to my game just crashes. Script 'Follow Pokemon' line 702: NoMethodError occurred undefined method -' for nil:Nilclass #### Norfolk Gaming ##### Trainer Member Joined Sep 14, 2017 Posts 79 After the interview at the radio station, no matter who I talk to my game just crashes. Script 'Follow Pokemon' line 702: NoMethodError occurred undefined method -' for nil:Nilclass Hi thanks for this, I'm currently looking at fixing it, have you got follow Pokemon on while you're there? If not then press control and they'll come back and it might work. I'm assuming it's just the part upstairs as that's the only part to progress the story. #### Azorkin ##### Rookie Member Joined Aug 2, 2020 Posts 1 Age 25 Hi there, for a lot of the game, the buddy, keep glitching out, to where i cannot tell if its out or not cause i don't see it, and when i try to interact with items and people, sometimes it brings up an error that closes the game, is there anything i can do? #### Norfolk Gaming ##### Trainer Member Joined Sep 14, 2017 Posts 79 Hi there, for a lot of the game, the buddy, keep glitching out, to where i cannot tell if its out or not cause i don't see it, and when i try to interact with items and people, sometimes it brings up an error that closes the game, is there anything i can do? Hi, I'm looking into this, I would just make sure your Pokemon is following you at all times. If your Pokemon isn't following you then press control to bring it back out and hopefully it should be ok. #### SunGoddess23 ##### Novice Member Joined Jul 16, 2018 Posts 48 Hi I'm curious what is the shiny rate for the game since there are so many way to do that. The reason I'm asking is because with the starter I want to try and get a shiny of whichever one I decided to get, which is hard since I like 2 of the starters and I love their shiny forms, and also is it possible to get HAs on the Pokemon? #### Norfolk Gaming ##### Trainer Member Joined Sep 14, 2017 Posts 79 Hi I'm curious what is the shiny rate for the game since there are so many way to do that. The reason I'm asking is because with the starter I want to try and get a shiny of whichever one I decided to get, which is hard since I like 2 of the starters and I love their shiny forms, and also is it possible to get HAs on the Pokemon? Hi, the shiny rate is the same as the normal games. The shiny charm is available later in the game and makes the rate 6 times better. There is also an area later in the game where you'll be able to get any of the starters. As for HAs there isn't currently a method of obtaining them. #### SunGoddess23 ##### Novice Member Joined Jul 16, 2018 Posts 48 Hi, the shiny rate is the same as the normal games. The shiny charm is available later in the game and makes the rate 6 times better. 
There is also an area later in the game where you'll be able to get any of the starters. As for HAs there isn't currently a method of obtaining them. Okay thank you and that is too bad because Riolu's HA is so great to have. The 2 choices I had was Riolu and Zorua but I decided to go with Riolu and just SR to get the right nature and got lucky enough to get the right nature as well as Riolu coming out female. #### Munit ##### Rookie Member Joined Jun 13, 2020 Posts 4 Is there a battle frontier or any post game similar? #### Norfolk Gaming ##### Trainer Member Joined Sep 14, 2017 Posts 79 Is there a battle frontier or any post game similar? In the current version there are rematchable gym leaders and Pokemon league. I am currently working on a post game update that will include a battle frontier. #### Solartile ##### Rookie Member Joined Aug 6, 2020 Posts 3 Age 19 Is there mega evolution? If so what megas are available? #### Norfolk Gaming ##### Trainer Member Joined Sep 14, 2017 Posts 79 Is there mega evolution? If so what megas are available? Sadly there are no megas in the game. I did consider it but it didn't end up fitting in the game anywhere. #### hyper_the_hybrid ##### Rookie Member Joined Aug 6, 2020 Posts 1 Age 17 how do i get past this part? ive done it all right, whats missing?
https://chapel-lang.org/docs/users-guide/taskpar/coforall.html
The coforall statement can be used to create an arbitrary number of related, homogeneous tasks. Syntactically, it mirrors the for-loop statement, but uses the coforall keyword in place of for. Operationally, a coforall loop creates a distinct task per loop iteration, each of which executes a copy of the loop body. Mnemonically, the coforall loop can be thought of as a concurrent forall—that is, a parallel loop in which each iteration is a concurrent task. As with the cobegin statement, the original task does not proceed until the child tasks corresponding to the coforall's iterations have completed. And, as with cobegin, the original task waits only for its immediate children, not their descendents.

The following code illustrates a simple use of the coforall loop:

config const numTasks = 8;

coforall tid in 1..numTasks do
  writeln("Hello from task ", tid, " of ", numTasks);

writeln("Signing off...");

This program will create a number of tasks equal to the configuration constant numTasks. Each task executes the loop body, printing a hello message indicating the value of its unique, private copy of the loop index variable tid (think "task ID") and the total number of tasks. As in previous examples, since the tasks are not coordinating with one another, their "Hello" messages will print out in an arbitrary order. However, the "Signing off…" message will not print until all the "Hello" messages have, since it will be executed by the original task only once the per-iteration tasks are done. Thus, the following shows a possible output of the test:

Hello from task 4 of 8
Hello from task 1 of 8
Hello from task 2 of 8
Hello from task 5 of 8
Hello from task 3 of 8
Hello from task 6 of 8
Hello from task 7 of 8
Hello from task 8 of 8
Signing off...
https://arxiv.org/abs/1210.5023
astro-ph.SR

# Title: Towards Synchronism Through Dynamic Tides in J0651: the "Antiresonance" Locking

Abstract: In recent years, the Extremely Low Mass White Dwarf (ELM WD) survey has quintupled the number of known close, detached double WD binaries (DWD). The tightest such DWD, SDSS J065133.33+284423.3 (J0651), harbors a He WD eclipsing a C/O WD every $\simeq\,12\,$min. The orbital decay of this source was recently measured to be consistent with general relativistic (GR) radiation. Here we investigate the role of dynamic tides in a J0651-like binary and we uncover the potentially new phenomenon of "antiresonance" locking. In the most probable scenario of an asynchronous binary at birth, we find that dynamic tides play a key role in explaining the measured GR-driven orbital decay, as they lock the system at stable antiresonances with the star's eigenfrequencies. We show how such locking is naturally achieved and how, while locked at an antiresonance, GR drives the evolution of the orbital separation, while dynamic tides act to synchronize the spin of the He WD with the companion's orbital motion, but \emph{only on the GR timescale}. Given the relevant orbital and spin evolution timescales, the system is clearly on its way to synchronism, if not already synchronized.

Comments: 13 pages, 4 figures, 1 table, submitted to ApJ Letters
Subjects: Solar and Stellar Astrophysics (astro-ph.SR)
Cite as: arXiv:1210.5023 [astro-ph.SR] (or arXiv:1210.5023v1 [astro-ph.SR] for this version)

## Submission history

From: Francesca Valsecchi
[v1] Thu, 18 Oct 2012 04:54:57 GMT (1820kb)
https://worksheets.tutorvista.com/conjunction-worksheet.html
Conjunction Worksheet

1. Classify the disjunction as true, false or open: 0 < - 8 $\mathrm{or}$ South Dakota is a state.
a. true b. open c. none
Solution: 0 < - 8 is false. South Dakota is a state is true. A disjunction is true if at least one of the sentences is true. So the disjunction is true.

2. Establish the truth or falsity of the sentence '- (5 - 16) = 21 $\mathrm{or}$ 95 > - 95'.
a. true b. open c. false
Solution: - (5 - 16) = - (- 11) = 11 [Simplify.] 11 ≠ 21, so - (5 - 16) = 21 is false. 95 > - 95 is true. One of the sentences is true, so the disjunction is true.

3. State whether the conjunction: All parallelograms are rectangles $\mathrm{and}$ 2 - (- 3) < 5 + $z$ is true, false or open.
a. true b. false
Solution: All parallelograms are rectangles is a false sentence. 2 - (- 3) < 5 + z, 5 < 5 + z, 0 < z is open. [Simplify.] One of the sentences is false. The conjunction is false.

4. Classify the disjunction: $x$ + 5 ≥ 8 $\mathrm{or}$ - (- 19) ≥ 0 as true, false, or open.
a. open b. true c. false
Solution: x + 5 ≥ 8, i.e. x ≥ 3, is an open sentence. - (- 19) = 19 ≥ 0 is true. One sentence is true, so the disjunction is true.

5. State whether the sentence 'Mercury is a star' $\mathrm{or}$ 'Uranus is a satellite' is true, false or open.
a. true b. false c. open
Solution: 'Mercury is a star' is false. 'Uranus is a satellite' is false. A disjunction is true only if at least one of the sentences is true. Both sentences are false, so the disjunction is false.

6. State whether the sentence 'He is the Mayor of our town' $\mathrm{and}$ '- 7 > - 9' is true, false or open.
a. open b. true c. false
Solution: 'He is the Mayor of our town' is an open sentence. - 7 > - 9 is true. The truth or falsity of the conjunction cannot be established, so it is an open sentence.

7. Classify the conjunction as true, false or open: 3 < 8 - 2 $\mathrm{and}$ 8 > 64.
a. true b. false c. open
Solution: 3 < 8 - 2, i.e. 3 < 6, is true. 8 > 64 is false. The conjunction is false. [A conjunction can be true if and only if both the sentences are true.]

8. Classify the conjunction: 2 + 3 = 5 $\mathrm{and}$ 'As You Like It' was written by Shakespeare, as true, false or open.
a. true b. false c. none
Solution: 2 + 3 = 5 is a true sentence. 'As You Like It' was written by Shakespeare is a true sentence. Both sentences are true. The conjunction is true.

9. State whether the sentence is true, false or open: 4(2$a$ + 7) = 8$a$ + 28 $\mathrm{and}$ $x$ - 8 < 0.
a. false b. true c. open
Solution: 4(2a + 7) = 4(2a) + 4(7) = 8a + 28 [Distributive property.] So 4(2a + 7) = 8a + 28 is true. x - 8 < 0, i.e. x < 8, is open. The truth or falsity of the conjunction cannot be established, so it is an open sentence.

10. State whether the sentence $x$ < 11 $\mathrm{or}$ $x$ ≥ 8 is true, false or open.
a. false b. true c. open
Solution: x < 11 is an open sentence. x ≥ 8 is also an open sentence. The truth or falsity of the disjunction cannot be established, so the disjunction is open.
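The rules these solutions rely on (a conjunction is false as soon as one part is false and true only when both are true; a disjunction is true as soon as one part is true; otherwise an open part makes the compound open) can be written out as a small sketch. This is not part of the worksheet; it uses None to stand for "open":

```python
def conj(p, q):
    # A conjunction is false as soon as either part is false,
    # open (None) if a part is undetermined, true only if both are true.
    if p is False or q is False:
        return False
    if p is None or q is None:
        return None
    return True

def disj(p, q):
    # A disjunction is true as soon as either part is true,
    # open (None) if a part is undetermined, false only if both are false.
    if p is True or q is True:
        return True
    if p is None or q is None:
        return None
    return False

# Question 1: "0 < -8" is false, "South Dakota is a state" is true.
print(disj(False, True))   # True
# Question 7: "3 < 8 - 2" is true, "8 > 64" is false.
print(conj(True, False))   # False
# Question 10: both parts contain a variable, so both are open.
print(disj(None, None))    # None, i.e. the disjunction is open
```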
https://docs.eyesopen.com/toolkits/cpp/molproptk/OEMolPropFunctions/OEGetFractionCsp3.html
# OEGetFractionCsp3

float OEGetFractionCsp3(const OEChem::OEMolBase &mol)

Returns the number of $$sp^3$$ carbons divided by the total number of carbons as described in [Lovering-2009].
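As a rough illustration of the definition above (a hand-rolled sketch with made-up atom data, not the OpenEye implementation), the quantity is simply the count of sp3-hybridized carbons over the count of all carbons:

```python
# Made-up atom records (element, hybridization) standing in for a parsed
# molecule; this only spells out the ratio used in the definition.
atoms = [("C", "sp3"), ("C", "sp2"), ("C", "sp2"), ("O", "sp3"), ("C", "sp3")]

carbons = [a for a in atoms if a[0] == "C"]
fraction_csp3 = sum(1 for _, hyb in carbons if hyb == "sp3") / len(carbons)
print(fraction_csp3)  # 0.5 for this toy molecule
```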
https://ergodicity.net/tag/conferences/
# CFP: Theory and Practice of Differential Privacy (TPDP) 2019 November 11 London, UK Colocated with CCS 2019 Differential privacy is a promising approach to privacy-preserving data analysis.  Differential privacy provides strong worst-case guarantees about the harm that a user could suffer from participating in a differentially private data analysis, but is also flexible enough to allow for a wide variety of data analyses to be performed with a high degree of utility.  Having already been the subject of a decade of intense scientific study, it has also now been deployed in products at government agencies such as the U.S. Census Bureau and companies like Apple and Google. Researchers in differential privacy span many distinct research communities, including algorithms, computer security, cryptography, databases, data mining, machine learning, statistics, programming languages, social sciences, and law.  This workshop will bring researchers from these communities together to discuss recent developments in both the theory and practice of differential privacy. Specific topics of interest for the workshop include (but are not limited to): • theory of differential privacy, • differential privacy and security, • privacy preserving machine learning, • differential privacy and statistics, • differential privacy and data analysis, • trade-offs between privacy protection and analytic utility, • differential privacy and surveys, • programming languages for differential privacy, • relaxations of the differential privacy definition, • differential privacy vs other privacy notions and methods, • experimental studies using differential privacy, • differential privacy implementations, • differential privacy and policy making, • applications of differential privacy. ### Submissions The goal of TPDP is to stimulate the discussion on the relevance of differentially private data analyses in practice. For this reason, we seek contributions from different research areas of computer science and statistics.  Authors are invited to submit a short abstract (4 pages maximum) of their work. Submissions will undergo a lightweight review process and will be judged on originality, relevance, interest and clarity. Submission should describe novel work or work that has already appeared elsewhere but that can stimulate the discussion between different communities at the workshop. Accepted abstracts will be presented at the workshop either as a talk or a poster.  The workshop will not have formal proceedings and is not intended to preclude later publication at another venue. Selected papers from the workshop will be invited to submit a full version of their work for publication in a special issue of the Journal of Privacy and Confidentiality. 
Submission website: https://easychair.org/conferences/?conf=tpdp2019 ### Important Dates Submission: June 21 (anywhere on earth) Workshop: 11/11 ### Program Committee • Michael Hay (co-chair), Colgate University • Aleksandar Nikolov (co-chair), University of Toronto • Aws Albarghouthi, University of Wisconsin–Madison • Borja Balle, Amazon • Mark Bun, Boston University • Graham Cormode, University of Warwick • Rachel Cummings, Georgia Tech University • Xi He, University of Waterloo • Gautam Kamath, University of Waterloo • Ilya Mironov, Google Research – Brain • Uri Stemmer, Ben-Gurion University • Danfeng Zhang, Penn State University # Signal boost: travel grants for SPAWC 2019 Passing a message along for my colleague Waheed Bajwa: As the US Liaison Chair of IEEE SPAWC 2019, I have received NSF funds to support travel of undergraduate and/or graduate students to Cannes, France for IEEE SPAWC 2019. Having a paper at the workshop is not a prerequisite for these grants and a number of grants are reserved for underrepresented minority students whose careers might benefit from these travel grants. Please share this with any interested students and, if you know one, please encourage her/him to consider applying for these grants. # What’s new is old in ethics and conduct (h/t to Stark Draper, Elza Erkip, Allie Fletcher, Tara Javidi, and Tsachy Weissman for sources) The IEEE Information Theory Society Board of Governors voted to approve the following statement to be included on official society events and on the website: IEEE members are committed to the highest standards of integrity, responsible behavior, and ethical and professional conduct. The IEEE Information Theory Society reaffirms its commitment to an environment free of discrimination and harassment as stated in the IEEE Code of Conduct, IEEE Code of Ethics, and IEEE Nondiscrimination Policy. In particular, as stated in the IEEE Code of Ethics and Code of Conduct, members of the society will not engage in harassment of any kind, including sexual harassment, or bullying behavior, nor discriminate against any person because of characteristics protected by law. In addition, society members will not retaliate against any IEEE member, employee or other person who reports an act of misconduct, or who reports any violation of the IEEE Code of Ethics or Code of Conduct. I guess the lawyers had to have a go at it, but this is essentially repeating that the IEEE already had rules and so here, we’re reminding you about the rules. This statement is saying “the new rules are the old rules.” We probably need more explicit new rules, however. In particular, many conferences have more detailed codes of conduct (NeurohackWeek, RSA, Usenix, APEC) that provide more detail about how the principles espoused in the text above are implemented. Often, these conferences have formal reporting procedures/policies and sanctions for violations: many IEEE conferences do not. The NSF is now requiring reporting on PIs who are “found to have committed sexual harassment” so incidents at conferences where the traveler is presenting NSF-sponsored should also be reported, it seems. While the ACM’s rules suggest making reporting procedures, perhaps a template (borrowed from another academic community?) could just become part of the standard operating procedure for running an IEEE conference. Just have a member of the organizing committee in charge, similar to having a local arrangements chair, publicity chair, etc. 
However, given the power dynamics of academic communities, perhaps people would feel more comfortable reporting incidents to someone outside the community. Relatedly, The Society also approved creating an Ad Hoc Committee on Diversity and Inclusion (I’m not on it) who have already done a ton of work on this and will find other ways to make the ITSOC (even) more open and welcoming. # Hello from the IPAM Workshop on Privacy for Biomedical Data I just arrived in LA for the IPAM Workshop on Algorithmic Challenges in Protecting Privacy for Biomedical Data. I co-organized this workshop with Cynthia Dwork, James Zou, and Sriram Sankararaman and it is (conveniently) before the semester starts and (inconveniently) overlapping with the MIT Mystery Hunt. The workshop has a really diverse set of speakers so to get everyone on the same page and anchor the discussion, we have 5 tutorial speakers and a few sessions or shorter talks. The hope is that these tutorials (which are on the first two days of the workshop) will give people some “common language” to discuss research problems. The other big change we made to the standard workshop schedule was to put in time for “breakout groups” to have smaller discussions focused on identifying the key fundamental problems that need to be addressed when thinking about privacy and biomedical data. Because of the diversity of viewpoints among participants, it seems a tall order to generate new research collaborations out of attending talks and going to lunch. But if we can, as a group, identify what the mathematical problems are (and maybe even why they are hard), this can help identify the areas of common interest. I think of these as falling into a few different categories. • Questions about demarcation. Can we formalize (mathematically) the privacy objective in different types of data sets/computations? Can we use these to categorize different types of problems? • Metrics. How do we formulate the privacy-utility tradeoffs for different problems? What is the right measure of performance? What (if anything) do we lose in guaranteeing privacy? • Possibility/impossibility. Algorithms which can guarantee privacy and utility are great, but on the flip side we should try to identify when privacy might be impossible to guarantee. This would have implications for higher-level questions about system architectures and policy. • Domain-specific questions. In some cases all of the setup is established: we want to compute function F on dataset D under differential privacy and the question is to find algorithms with optimal utility for fixed privacy loss or vice versa. Still, identifying those questions and writing them down would be a great outcome. In addition to all of this, there is a student poster session, a welcome reception, and lunches. It’s going to be a packed 3 days, and although I will miss the very end of it, I am excited to learn a lot from the participants. # Some thoughts on paper awards at conferences We (really Mohsen and Zahra) had a paper nominated for a student paper award at CAMSAP last year, but since both student authors are from Iran, their single-entry student visas prevented them from going to the conference. The award terms require that the student author present the work (in a poster session) and the conference organizers were kind enough to allow Mohsen to present his poster via Skype. It’s hardly an ideal communication channel, given how loud poster sessions are. 
Although the award went to a different paper, the experience brought up two questions that are not new but don't get a lot of discussion.

How should paper awards deal with visa issues? This is not an issue specific to students from Iran, although the US State Department's visa issuance for Iranian students is stupidly restrictive. Students from Iran are essentially precluded from attending any non-US conference unless they want to roll the dice again and wait for another visa at home. Other countries may also deny visas to students for various reasons. Requiring students to be present at the conference is discriminatory, since the award should be based on the work. Disqualifying a student for an award because of bullshit political/bureaucratic nonsense that is totally out of their control just reinforces that bullshit.

Why are best papers judged by their presentation? I have never been a judge for a paper award and I am sure that judges try to be as fair as they can. However, the award is for the paper and not its performance. I agree that scholarly communication through oral presentation is a valuable skill, but if the award is going to be determined by who gives the best show at the conference, they should retitle these to "best student paper and presentation award" or something like that. Maybe it should instead be based on video presentations to allow remote participation. If you are going to call it a paper award, then it should be based on the written work.

I don't want this to seem like a case of sour grapes. Not all student paper awards work this way, but it seems to be the trend in IEEE-ish venues. The visa issue has hurt a lot of researchers I know; they miss out on opportunities to get their name/face known, chances to meet and network with people, and the experience of being exposed to a ton of ideas in a short amount of time. Back when I had time to do conference blogging, it was a way for me to process the wide array of new things that I saw. For newer researchers (i.e. students) this is really important.

# IPAM Workshop on Algorithmic Challenges in Protecting Privacy for Biomedical Data

IPAM is hosting a workshop on "Algorithmic Challenges in Protecting Privacy for Biomedical Data," which will be held at IPAM from January 10-12, 2018. The workshop will be attended by many junior as well as senior researchers with diverse backgrounds. We want to encourage students or postdoctoral scholars who might be interested to apply and/or register for this workshop. I think it will be quite interesting and has the potential to spark a lot of interesting conversations around what we can and cannot do about privacy for medical data in general and genomic data in specific.

# DIMACS Workshop on Distributed Optimization, Information Processing, and Learning

My colleague Waheed Bajwa, Alejandro Ribeiro, and Alekh Agarwal are organizing a Workshop on Distributed Optimization, Information Processing, and Learning from August 21 to August 23, 2017 at Rutgers DIMACS. The purpose of this workshop is to bring together researchers from the fields of machine learning, signal processing, and optimization for cross-pollination of ideas related to the problems of distributed optimization, information processing, and learning.
All in all, we are expecting to have 20 to 26 invited talks from leading researchers working in these areas as well as around 20 contributed posters in the workshop. Registration is open from now until August 14 — hope to see some of you there!

# Mathematical Tools of Information-Theoretic Security Workshop: Day 1

It's been a while since I have conference-blogged but I wanted to set aside a little time for it. Before going to Allerton I went to a lovely workshop in Paris on the Mathematical Tools of Information-Theoretic Security thanks to a very kind invitation from Vincent Tan and Matthieu Bloch. This was a 2.5 day workshop covering a rather wide variety of topics, which was good for me since I learned quite a bit. I gave a talk on differential privacy and machine learning with a little more of a push on the mathematical aspects that might be interesting from an information-theory perspective. Paris was appropriately lovely, and it was great to see familiar and new faces there. Now that I am at Rutgers I should note especially our three distinguished alumnae, Şennur Ulukuş, Aylin Yener, and Lalitha Sankar.

# ISIT 2015 : statistics and learning

The advantage of flying to Hong Kong from the US is that the jet lag was such that I was actually more or less awake in the mornings. I didn't take such great notes during the plenaries, but they were rather enjoyable, and I hope that the video will be uploaded to the ITSOC website soon. There were several talks on entropy estimation in various settings that I did not take great notes on, to wit:

• OPTIMAL ENTROPY ESTIMATION ON LARGE ALPHABETS VIA BEST POLYNOMIAL APPROXIMATION (Yihong Wu, Pengkun Yang, University Of Illinois, United States)
• DOES DIRICHLET PRIOR SMOOTHING SOLVE THE SHANNON ENTROPY ESTIMATION PROBLEM? (Yanjun Han, Tsinghua University, China; Jiantao Jiao, Tsachy Weissman, Stanford University, United States)
• ADAPTIVE ESTIMATION OF SHANNON ENTROPY (Yanjun Han, Tsinghua University, China; Jiantao Jiao, Tsachy Weissman, Stanford University, United States)

I would highly recommend taking a look for those who are interested in this problem. In particular, it looks like we're getting towards more efficient entropy estimators in difficult settings (online, large alphabet), which is pretty exciting.

QUICKEST LINEAR SEARCH OVER CORRELATED SEQUENCES
Javad Heydari, Ali Tajer, Rensselaer Polytechnic Institute, United States

This talk was about hypothesis testing where the observer can control the samples being taken by traversing a graph. We have an $n$-node graph (cf. a graphical model) representing the joint distribution on $n$ variables. The data generated is i.i.d. across time according to either $F_0$ or $F_1$. At each time you get to observe the data from only one node of the graph. You can either observe the same node as before, explore by observing a different node, or make a decision about whether the data comes from $F_0$ or $F_1$. By adopting some costs for different actions you can form a dynamic programming solution for the search strategy but it's pretty heavy computationally. It turns out the optimal rule for switching has a two-threshold structure and can be quite a bit different than independent observations when the correlations are structured appropriately.
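The "two-threshold structure" mentioned above has the same flavor as the stopping rule in Wald's classical sequential probability ratio test. As a point of reference, here is my own sketch of that plain SPRT baseline (not the controlled-sensing policy from the paper):

```python
import numpy as np

def sprt(samples, logpdf0, logpdf1, a=-4.0, b=4.0):
    """Wald's SPRT: accumulate the log-likelihood ratio and stop the first
    time it leaves (a, b). Returns (decision, number of samples used)."""
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        llr += logpdf1(x) - logpdf0(x)
        if llr >= b:
            return 1, t            # decide F_1
        if llr <= a:
            return 0, t            # decide F_0
    return (1 if llr > 0 else 0), len(samples)   # forced call at the end

# Toy example: F_0 = N(0, 1), F_1 = N(0.5, 1), data truly drawn from F_1.
# The normalizing constants cancel in the ratio, so they are omitted.
rng = np.random.default_rng(0)
data = rng.normal(0.5, 1.0, size=1000)
logpdf0 = lambda x: -0.5 * x ** 2
logpdf1 = lambda x: -0.5 * (x - 0.5) ** 2
print(sprt(data, logpdf0, logpdf1))
```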
MISMATCHED ESTIMATION IN LARGE LINEAR SYSTEMS Yanting Ma, Dror Baron, North Carolina State University, United States; Ahmad Beirami, Duke University, United States The mismatch studied in this paper is a mismatch in the prior distribution for a sparse observation problem $y = Ax + \sigma_z z$, where $x \sim P$ (say a Bernoulli-Gaussian prior). The question is what happens when we do estimation assuming a different prior $Q$. The main result of the paper is an analysis of the excess MSE using a decoupling principle. Since I don’t really know anything about the replica method (except the name “replica method”), I had a little bit of a hard time following the talk as a non-expert, but thankfully there were a number of pictures and examples to help me follow along. SEARCHING FOR MULTIPLE TARGETS WITH MEASUREMENT DEPENDENT NOISE Yonatan Kaspi, University of California, San Diego, United States; Ofer Shayevitz, Tel-Aviv University, Israel; Tara Javidi, University of California, San Diego, United States This was another search paper, but this time we have, say, $K$ targets $W_1, W_2, \ldots, W_K$ uniformly distributed in the unit interval, and what we can do is query at each time $n$ a set $S_n \subseteq [0,1]$ and get a response $Y_n = X_n \oplus Z_n$ where $X_n = \mathbf{1}( \exists W_k \in S_n )$ and $Z_n \sim \mathrm{Bern}( \mu(S_n) + b )$ where $\mu$ is the Lebesgue measure. So basically you can query a set and you get a noisy indicator of whether you hit any targets, where the noise depends on the size of the set you query. At some point $\tau$ you stop and guess the target locations. You are $(\epsilon,\delta)$ successful if the probability that you are within $\delta$ of each target is less than $\epsilon$. The targeting rate is the limit of $\log(1/\delta) / \mathbb{E}[\tau]$ as $\epsilon,\delta \to 0$ (I’m being fast and loose here). Clearly there are some connections to group testing and communication with feedback, etc. They show there is a significant gap between the adaptive and nonadaptive rate here, so you can find more targets if you can adapt your queries on the fly. However, since rate is defined for a fixed number of targets, we could ask how the gap varies with $K$. They show it shrinks. ON MODEL MISSPECIFICATION AND KL SEPARATION FOR GAUSSIAN GRAPHICAL MODELS Varun Jog, University of California, Berkeley, United States; Po-Ling Loh, University of Pennsylvania, United States The graphical model for jointly Gaussian variables has no edge between nodes $i$ and $j$ if the corresponding entry $(\Sigma^{-1})_{ij} = 0$ in the inverse covariance matrix. They show a relationship between the KL divergence of two distributions and their corresponding graphs. The divergence is lower bounded by a constant if they differ in a single edge — this indicates that estimating the edge structure is important when estimating the distribution. CONVERSES FOR DISTRIBUTED ESTIMATION VIA STRONG DATA PROCESSING INEQUALITIES Aolin Xu, Maxim Raginsky, University of Illinois at Urbana–Champaign, United States Max gave a nice talk on the problem of minimizing an expected loss $\mathbb{E}[ \ell(W, \hat{W}) ]$ of a $d$-dimensional parameter $W$ which is observed noisily by separate encoders. Think of a CEO-style problem where there is a conditional distribution $P_{X|W}$ such that the observation at each node is a $d \times n$ matrix whose columns are i.i.d. and where the $j$-th row is i.i.d. according to $P_{X|W_j}$. 
Each sensor gets independent observations from the same model and can compress its observations to $b$ bits and send them over independent channels to an estimator (so no MAC here). The main result is a lower bound on the expected loss as a function of the number of bits $b$ and the mutual information between $W$ and the final estimate $\hat{W}$. The key is to use the strong data processing inequality to handle the mutual information — the constants that make up the ratio between the mutual informations are important. I’m sure Max will blog more about the result so I’ll leave a full explanation to him (see what I did there?)

More on Shannon theory etc. later!

# AISTATS 2015: a few talks from one day

I attended AISTATS for about a day and change this year — unfortunately due to teaching I missed the poster I had there, but Shuang Song presented a work on learning from data sources of different quality, which is her work with Kamalika Chaudhuri and myself. This was my first AISTATS. It had a single track of oral presentations and then poster sessions for the remaining papers. The difficulty with a single track for me is that my interest in the topics is relatively focused, and the format of a general audience with a specialist subject matter meant that I couldn’t get as much out of the talks as I would have wanted. Regardless, I did get exposed to a number of new problems. Maybe the ideas can percolate for a while and inform something in the future.

Computational Complexity of Linear Large Margin Classification With Ramp Loss
Søren Frejstrup Maibing, Christian Igel

The main result of this paper (I think) is that ERM under ramp loss is NP-hard. They gave the details of the reduction, but since I’m not a complexity theorist I got a bit lost in the weeds here.

A la Carte — Learning Fast Kernels
Zichao Yang, Andrew Wilson, Alex Smola, Le Song

Ideas like “random kitchen sinks” and other kernel approximation methods require you to have a kernel you want to approximate, but in many problems you in fact need to learn the kernel from the data. If I give you a shift-invariant kernel function $k(x,x') = k( |x - x'| )$, then you can take the Fourier transform $K(\omega)$ of $k$. This turns out to be (proportional to) a probability distribution, so you can sample random $\{\omega_i\}$ i.i.d. and build a randomized Fourier approximation of $k$ (a small sketch of this appears at the end of this post). If you don’t know the kernel function, or you have to learn it, then you could instead try to learn/estimate the transform directly. This paper was about trying to do that in a reasonably efficient way.

Learning Where to Sample in Structured Prediction
Tianlin Shi, Jacob Steinhardt, Percy Liang

This was about doing Gibbs sampling, not for MCMC sampling from the stationary distribution, but for “stochastic search” or optimization problems. The intuition was that some coordinates are “easier” than others, so we might want to focus resampling on the harder coordinates. But this might lead to inaccurate sampling. The aim here was to build a heterogeneous sampler that is cheap to compute and still does the right thing.

Tradeoffs for Space, Time, Data and Risk in Unsupervised Learning
Mario Lucic, Mesrob Ohannessian, Amin Karbasi, Andreas Krause

This paper won the best student paper award. They looked at a k-means problem where they do “data summarization” to make the problem a bit more efficient — that is, by learning over an approximation/summary of the features, they can find different tradeoffs between the running time, risk, and sample size for learning problems.
The idea is to use coresets — I’d recommend reading the paper to get a better sense of what is going on. It’s on my summer reading list. Averaged Least-Mean-Squares: Bias-Variance Trade-offs and Optimal Sampling Distributions Alexandre Defossez, Francis Bach What if you want to do SGD but you don’t want to sample the points uniformly? You’ll get a bias-variance tradeoff. This is another one of those “you have to read the paper” presentations. A nice result if you know the background literature, but if you are not a stochastic gradient aficionado, you might be totally lost. Sparsistency of $\ell_1$-Regularized M-Estimators Yen-Huan Li, Jonathan Scarlett, Pradeep Ravikumar, Volkan Cevher In this paper they find a new condition, which they call local structured smoothness, which is sufficient for certain M-estimators to be “sparsistent” — that is, they recover the support pattern of a sparse parameter asymptotically as the number of data points goes to infinity. Examples include the LASSO, regression in general linear models, and graphical model selection. Some of the other talks which were interesting but for which my notes were insufficient: • Two-stage sampled learning theory on distributions (Zoltan Szabo, Arthur Gretton, Barnabas Poczos, Bharath Sriperumbudur) • Generalized Linear Models for Aggregated Data (Avradeep Bhowmik, Joydeep Ghosh, Oluwasanmi Koyejo) • Efficient Estimation of Mutual Information for Strongly Dependent Variables (Shuyang Gao, Greg Ver Steeg, Aram Galstyan) • Sparse Submodular Probabilistic PCA (Rajiv Khanna, Joydeep Ghosh, Russell Poldrack, Oluwasanmi Koyejo)
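As a concrete reference point for the “random kitchen sinks” idea mentioned in the A la Carte summary above, here is a minimal random Fourier features sketch for the Gaussian (RBF) kernel, the case where the spectral distribution is known in closed form. The data sizes and bandwidth below are arbitrary choices for illustration.

```python
import numpy as np

def random_fourier_features(X, num_features=500, bandwidth=1.0, rng=None):
    """Map X (n x d) to features whose inner products approximate the
    RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 * bandwidth^2))."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # By Bochner's theorem, the spectral density of the RBF kernel is Gaussian.
    W = rng.normal(scale=1.0 / bandwidth, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, num_features=2000, rng=1)
approx = Z @ Z.T                                          # randomized kernel matrix
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
exact = np.exp(-sq_dists / 2.0)                           # exact RBF kernel matrix
print(np.abs(approx - exact).max())  # small, and shrinks as num_features grows
```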
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 44, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25903812050819397, "perplexity": 1416.6823839247447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400205950.35/warc/CC-MAIN-20200922094539-20200922124539-00792.warc.gz"}
https://www.compadre.org/portal/../picup/exercises/exercise.cfm?A=ParticleAccelerator
+ Particle Accelerator!

Developed by Christopher Orban - Published May 23, 2017

This exercise illustrates a charged particle being accelerated through two charged plates. The student will explore how changing the mass, charge, and the spacing between the plates affects the final velocity of the particle. Although looking at the code is an important part of this exercise, there is only a small amount of coding involved. The student will change the values of a few different variables. Much of the work of this exercise is in doing analytic calculations for the final speed of the particle. Coding is still important to this exercise because there is a "Particle repulsion" exercise that will follow this one in which the student will modify the code to allow two particles to repel from each other. This exercise will use a javascript-based programming language called [p5.js](http://p5js.org) that is very similar to C and C++ programming. (Note: If you are familiar with C or C++ the main difference you will see is that there is no main() function and instead the draw() function serves this role.) **Importantly, this exercise can be completed using any computer or chromebook without downloading any software!**

This exercise is broken up into two parts because there are two different, equivalent ways to think about the acceleration of a proton from two charged plates: You can think of the proton as having a constant acceleration due to the electric field until it leaves the plates, or you can think of the proton as being accelerated over a "potential" that increases its kinetic energy by an amount that depends on the electric field and the spacing between the plates. Either way you get the same answer, but it's interesting to think about it from two different points of view.

This exercise is designed for an algebra-based physics class at the college or high school level. It may also be useful for calculus-based physics for non-majors (e.g. engineering & science majors). This exercise is part of a series of exercises developed by Prof. Chris Orban. The next exercise is on the [Repulsion between two charges (with application to fusion!)](http://www.compadre.org/PICUP/exercises/exercise.cfm?I=253&A=ParticleRepulsion)

There are pre-and-post assessment questions associated with this exercise (not available here) that are being used in an educational research study. If interested to collaborate on that study please e-mail Prof. Chris Orban ([email protected]). The first paper from this study [is available at this link](https://doi.org/10.1119/1.5058449), the second paper which discusses the electromagnetism exercises [is available at this link](http://dx.doi.org/10.1119/perc.2017.pr.067)

Subject Area: Electricity & Magnetism
Level: High School and First Year
Programming Language: Javascript

1. Students will gain experience applying kinematics equations from classical mechanics to a situation with a proton being accelerated from between two charged plates. This will involve performing an analytic calculation for the final speed of the proton that should closely match the result of the simulation.
2. Students will gain intuition on how the charge and mass of a particle affects its behavior in an electric field by modifying the code to change the charge, mass and direction of the electric field and seeing what happens. After making these changes students will also perform analytic calculations for the final speed of the particle that should match the result of the simulation.
3. Students will also learn how to think about charged plates in terms of potential difference. Students will perform analytic calculations that use potential difference to determine the final speed of the particle.
60 min

These exercises are not tied to a specific programming language. Example implementations are provided under the Code tab, but the Exercises can be implemented in whatever platform you wish to use (e.g., Excel, Python, MATLAB, etc.).

### Part 1. Electric fields and acceleration!

In this exercise we will make a simulation of a particle being accelerated between two plates. The relevant equations in this case are these:

$$v_{xf} = v_{xi} + a_x t \nonumber$$
$$v_{xf}^2 = v_{xi}^2 + 2 a_x \Delta x \nonumber$$
$$F_x = m a_x \nonumber$$
$$F_x = q E \nonumber$$

We will use some [unusual force and electric field units](http://www.physics.ohio-state.edu/~orban/physics_coding/units.html) in this exercise, but other things should be more familiar. Specifically the variable q will be in terms of the elementary charge. So a proton will be q = 1.0;. The variable mass will be in atomic mass units, so a proton would be mass = 1.0;.

Step 1. Check out this nice animation of a [proton being accelerated through two charged plates](https://www.asc.ohio-state.edu/orban.14/stemcoding/accelerator2/accelerator.html). In the animation, notice that the initial x velocity ($v_{xi}$) is non-zero.

Step 2. Try out the accelerator code in an editor

[Click on this link to open the accelerator code in a p5.js editor](https://editor.p5js.org/ChrisOrban/sketches/1selx5sR)

Press play there to run the code. It should behave the same way it did [with the link you were given in Step 1](https://www.asc.ohio-state.edu/orban.14/stemcoding/accelerator2/accelerator.html)

Important! Create an account with the editor or sign in to your account. Then click "Duplicate" so you can have your own version of the code!

Step 3. Try to make sense of the code behind the animation. Think especially about this section:

    if ( ( x > x_plate_left) & (x < x_plate_right)) {
      deltaVx = (q*E/mass)*dt;
      t += dt;
    }

This is the change in velocity each timestep (deltaVx) when the particle is in between the two plates. The quantity in the parentheses (q*E/mass) is the acceleration.

Optional Step: Plot $v_x$ versus time by adding this code after display(); but before the end of draw()

    graph1.addPoint(vx);
    graph1.display();

This should produce a plot of vx versus time in the top right corner of the simulation.

Step 4. Calculate the acceleration

The final velocity at the end of the animation is 55.5 meters per second. (Ok, really it's pixels per second but let's just think about it as meters per second. The width of the screen would be 750 meters.) The particle spends t = 9.1 seconds in the electric field. If we can just figure out the acceleration, we should be able to use this formula to relate the initial velocity to the final velocity:

$$v_{xf} = v_{xi} + a_x t \nonumber$$

What should we use for $a_x$ in this case? Use q = 1, E = 5, and m = 1 to figure it out. You should be able to come up with 55.5 meters per second for $v_{xf}$ with the correct value for $a_x$. Do not simply use the 55.5 meters per second result to figure out what the acceleration was! We are doing a consistency check on the code! Consistency is key!

Step 5. Imagine you didn't know the time

In a laboratory setting it is often hard to figure out exactly how much time a particle spends in the electric field.
But we still know the initial velocity, the strength of the electric field, the mass of the particle and the separation ($\Delta x$) between the two plates which in this case is $\Delta x = 500-200 = 300$ meters. Use this information with this equation to come up with the 55.5 meters per second result for $v_{xf}$. $$v_{xf}^2 = v_{xi}^2 + 2 a_x \Delta x \nonumber$$ Show that you can get 55.5 meters per second for $v_{xf}$ from this equation. This is another consistency check for the code. Step 6. See what happens if the charge of the particle is doubled. Set q = 2.0 instead of 1.0. Does the charge of the particle affect the final velocity? why or why not? Step 7. See what happens if the mass of the particle is 2.0 instead of 1.0. Change the charge of the particle back to 1.0 so that the simulation is like accelerating a Deuteron instead of a proton. Note: Deuterons have about twice the mass as protons because a Deuteron is a proton and a neutron that are stuck together by nuclear forces. Protons and neutrons have roughly the same mass so the total mass of a Deuteron is about twice that of the proton. The net charge of the Deuteron is the same as the proton because neutrons are electrically neutral particles (no charge). Predict the final velocity of the Deuteron and check to see if your expectation is proven right! Show your calculation, prediction and measurement in what you turn in for this lab. Step 8. What happens if you change the electric field from 5 (the default value) to -5? Notice that the direction of the field lines changes when you do this. How fast does a Deuteron need to be traveling in order to get through the plate? Calculate why it has to be this fast! Optional: Step 9. (Extra Credit) Modify the program in some way (choose one or more) Suggestions/inspiration for modifying the program: • add a component of the initial velocity in the y direction and predict the final speed • Make the code smart enough to use negative(x,y) if the charge is less than zero and positive(x,y) if the charge is greater than zero. ### How to get full credit for Part 1!!! In what you turn in you should answer the questions asked in this programming lab: 1. Make sure to explain why the final velocity turned out to be 55.5 meters per second (Steps 4 & 5) As best you can write down the equations that you used to calculate your number and write down the number you got. You may not get exactly 55.5 but that's ok. Try to get within 10% of that number. 2. Say in words whether increasing the charge of the particle from 1.0 to 2.0 affects the final velocity (Step 6) Just write a sentence. Say whether the final velocity increases, decreases or stays the same. No calculation necessary. 3. Change the mass to 2.0 and the charge back to 1.0 so that the particle is a Deuteron. Predict the final velocity and measure it (Step 7) Make sure your calculation matches the measured result to 10%. 4. Describe what happens when the electric field is negative and figure out how fast the Deuteron needs to be traveling (Step 8) Just change the initial velocity of the Deuteron until it passes through. Then calculate why it had to be this way. Write down the number for how fast it should be going. It may not match your empirical result exactly, but it should agree to maybe 10% 5. The extra credit really is optional You can still get full credit without doing the extra credit as long as you've done everything else correctly ### Part 2. Electric fields and electric potential! 
Thinking about the problem in terms of potential difference, the relevant equations are:

$$\Delta V = E d \nonumber$$
$$\Delta KE = q \Delta V \nonumber$$
$$KE = \frac{1}{2} m v_x^2 \nonumber$$
$$\Delta KE = KE_f - KE_i \nonumber$$

where $d$ is the distance between the plates. It should be clear from the animation that the initial kinetic energy is non-zero (because $v_{xi}$ is non-zero). The purpose of this programming lab is to show that the potential difference way of thinking about the problem is just as useful as thinking about the problem in terms of forces, and maybe even more useful, if the energy is what we care about!!!

You can also use your code from the previous exercise so long as you **change the charge of the particle back to +1 and the mass of the particle back to 1.0!!!**

Step 3. The final velocity at the end of the animation is 55.5 meters per second. The initial velocity was 10 meters per second and the acceleration occurs over 300 meters. Go back to the previous programming lab and remember how you were able to explain why the final velocity is 55.5 meters per second. In the last programming lab we thought about the problem in terms of forces. Write down the equations that explained the 55.5 meters per second in what you turn in for this lab.

Step 4. The potential difference between the plates is $\Delta V = E d$ where $E$ is 5 and $d$ is 300 meters. The potential difference is therefore $\Delta V = E d = 1500$. Choose a different value for $E$, and choose a different value for $d$ by changing the variables x_plate_left and x_plate_right. Make sure that the new values of $E$ and $d$ multiply to $\Delta V = Ed = 1500$, and make sure $d < 750$ meters or else one of the plates will be off-screen. Check to see if approximately the same final velocity (55.5 meters per second) is achieved. (The final velocity should be the same because the potential difference is the same. This is one reason why the potential difference is such a useful concept.) When you turn in this lab make sure your code has the values of $E$ and $d$ that you chose and write down the measured value of the velocity to confirm that this worked.

Step 5. With your new values for $E$ and $d$, check to see what happens if the charge of the particle is doubled. Set q = 2.0 instead of 1.0. Does the charge of the particle affect the final velocity? Why or why not? Do you get the same final velocity with the original values of $E$ and $d$? Is it faster or slower?

Step 6. With your new values for $E$ and $d$, check to see what happens if the mass of the particle is 2.0 instead of 1.0 and change the charge of the particle back to 1.0. As mentioned in the last programming lab, this is like changing the particle from a proton to a deuteron. Predict the final velocity and check to see if your expectation is proven right. Do this calculation three different ways: (1) thinking about the problem in terms of acceleration and the time, as in Part 1, (2) thinking about the problem in terms of acceleration and the distance without knowing the time, as in Part 1, and (3) thinking about the problem in terms of the potential difference and the change in kinetic energy. You should be able to show that all three approaches give essentially the same answer (a short numerical cross-check of these approaches appears at the end of this exercise).

### How to get full credit for Part 2!!!

In what you turn in you should answer the questions asked in this programming lab:

1. Write down the equations that gave 55.5 meters per second from Part 1

Feel free to look back at Part 1 and just put these same equations here

2.
Choose new values for $E$ and $d$ (Step 4a) Make sure that the code you submit contains the new values for $E$ and $d$. These should multiply to $\Delta V = Ed = 1500$ and make sure $d < 750$ meters or else one of the plates will be off screen. 3. State whether you get approximately the same final velocity with the new values of $E$ and $d$ (Step 4b) This is just a yes or no question. The answer should be yes (as mentioned in step 4) or else you've done something wrong. 4. Say whether increasing the charge of the particle increases the final speed (Step 5) This is just a yes/no question. Make sure you change $E$ and $d$ before you test this. 5. Change the mass and show three different ways to calculate the final velocity (Step 6) You can calculate the final velocity using acceleration with time, or acceleration with distance or using the change in electric potential and the change in kinetic energy. Write down the equations as best you can in the comments. Don't just show the result. You should get approximately the same answer.
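To make the consistency checks above concrete, here is a small stand-alone calculation (plain Python rather than p5.js, using the simulation's unit conventions) that works through the numbers from Part 1 Steps 4–7 and the three approaches in Part 2 Step 6. The 55.5 and 9.1 figures are read off the animation, so the kinematic routes agree only to within a few percent, as the rubric allows.

```python
from math import sqrt

# Values taken from the exercise (simulation units: "meters", "seconds")
q, E, m = 1.0, 5.0, 1.0      # proton: charge, field strength, mass
vxi = 10.0                   # initial velocity
t_in_field = 9.1             # time between the plates (from the animation)
dx = 500.0 - 200.0           # plate separation: x_plate_right - x_plate_left
deltaV = E * dx              # potential difference, = 1500

a = q * E / m                                    # acceleration, = 5
v_time   = vxi + a * t_in_field                  # 10 + 5*9.1   = 55.5
v_dist   = sqrt(vxi**2 + 2 * a * dx)             # sqrt(3100)  ~ 55.7
v_energy = sqrt(vxi**2 + 2 * q * deltaV / m)     # same as v_dist
print(v_time, v_dist, v_energy)

# Part 1 Step 7 / Part 2 Step 6: a deuteron (mass 2, charge 1)
m_d = 2.0
a_d = q * E / m_d
v_d_dist   = sqrt(vxi**2 + 2 * a_d * dx)             # sqrt(1600) = 40
v_d_energy = sqrt(vxi**2 + 2 * q * deltaV / m_d)     # also 40
print(v_d_dist, v_d_energy)
```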
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8275870680809021, "perplexity": 240.82806207985954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539131.21/warc/CC-MAIN-20220521143241-20220521173241-00309.warc.gz"}
https://blog.stefan-koch.name/2011/09/02/thumbnails-upload-and-resize-images-with-zend_form_element_file
The Zend Framework includes a class called Zend_Form_Element_File for creating a file upload within a form. It uses Zend_File_Transfer for receiving the file, but the whole process lacks some methods for the often required image upload.

## What is easily possible

Let’s begin with the things that are possible. You can specify several validators for your file to check for file extensions or maximum file size. That’s stuff which is required for all files and therefore it is included. You may also specify a target directory.

## Renaming a file according to your needs

Renaming a file according to your needs is also possible, even though (often) not as easily as the other stuff. You need to add a filter after initialising, because you usually do not know the filename at runtime. At least when you upload a profile picture you often want to give it a name like the username or the user’s id. This filter will rename the upload (only one file is allowed) according to the rule in target.

## Resizing an image

The difficult part with Zend is resizing the image. Of course you can do this in your controller after you received the upload, but this is not very nice style. As Zend supports filters, we better program a new filter for this task. I called it Skoch_Filter_File_Resize:

As you might see this file also requires an adapter to ensure you can use the filter with both GD and Imagick. Thus, we need an abstract class and the implementation classes:

### Using the filter

This filter can now be attached to your Zend_Form_Element_File instance and will then resize the image to produce a thumbnail:

You may specify several options when invoking the filter. As you see in my code, I used width, height and keepRatio, resulting in two maximum sizes. The image will then be resized so that it fits both of the lengths, but the aspect ratio will be kept. The whole list of options:

• width: The maximum width of the resized image
• height: The maximum height of the resized image
• keepRatio: Keep the aspect ratio and do not resize to both width and height (usually expected)
• keepSmaller: Do not resize if the image is already smaller than the given sizes
• directory: Set a directory to store the thumbnail in. If nothing is given, the normal image will be overwritten. This will usually be used when you produce thumbnails in different sizes.
• adapter: The adapter to use for resizing. You may specify a string or an instance of an adapter.

Now it’s easily possible to resize an uploaded image. To automatically load the classes, you need to add an option to your application.ini.

### Multiple thumbnails

Often you want to create several thumbnails in different sizes. This can be done by using a so-called filter chain and the directory option of the Skoch_Filter_File_Resize. If you specify directory, the value of setDestination() will not be considered anymore. Thus, you have to pass the full path to the directory option. You can download the library and a tiny example from my github repository.

## Caveats

If you want to use the directory option together with renaming, make sure to add the Resize-filter after the Rename-filter to ensure that Resize gets the new filename and will save the thumbnail with the new filename. Otherwise you might get this structure:

/img/gallery/stefan/thumbs/Spain_1000.png
/img/gallery/stefan/1234.png

Where you probably do not want to have the filename Spain_1000.png on your server ;)

So don’t forget to add Resize after Rename.

I do not maintain a comments section.
If you have any questions or comments regarding my posts, please do not hesitate to send me an e-mail to [email protected].
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3599703013896942, "perplexity": 1270.0472382578844}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153515.0/warc/CC-MAIN-20210727233849-20210728023849-00036.warc.gz"}
https://www.sdss4.org/dr16/algorithms/ancillary/boss/luminousblue/
# Luminous Blue Galaxies at 0.7 < z < 1.7

## Contact

Jean-Paul Kneib
École Polytechnique Fédérale de Lausanne
[email protected]

## Summary

Spectra of a sample of luminous blue galaxies at 0.7 < z < 1.7 in a 143-square-degree subset of the BOSS survey area

## Finding Targets

An object whose ANCILLARY_TARGET1 value includes one or more of the bitmasks in the following table was targeted for spectroscopy as part of this ancillary target program. See SDSS bitmasks to learn how to use these values to identify objects in this ancillary target program.

| Program (bit name) | Bit number | Target Description | Number of Fibers | Number of Unique Primary Objects |
|---|---|---|---|---|
| ELG | 61 | Luminous Blue Galaxies | 3,661 | 3,515 |

## Description

Studies from the second Deep Extragalactic Evolutionary Probe (DEEP2; Davis et al. 2003) reveal that the most luminous, most star-forming blue galaxies at z ~ 1 appear to be a population that evolves into massive red galaxies at lower redshifts (Cooper et al. 2008). Sampling color-selected galaxies from either SDSS Stripe 82 or the CFHT-LS Wide fields (W1, W3, and W4) allows a measure of the clustering of the rarest, most luminous of these blue galaxies on large scales. Such a measurement has not yet been conducted, as prior galaxy-evolution motivated surveys have had a limited field of view, and have mostly targeted fainter galaxies. This dataset has been important in motivating the eBOSS project, an ongoing survey of the SDSS. The data will also be used to improve the targeting strategy of future projects such as BigBOSS (Schlegel et al. 2011).

## Target Selection

The galaxy targets were color-selected based on the CFHT-LS photometric-redshift catalog (Coupon et al. 2009). Different color selections were explored using either the (uPSF – gPSF, gPSF – rPSF) color-color diagram down to gPSF < 22.5 or the (gPSF – rPSF, rPSF – iPSF) color-color diagram down to iPSF < 21.3. A detailed description of the color selection and redshift measurements is given in Comparat et al. (2013a). Using this dataset, photometric redshifts can be re-calibrated in the CFHT-LS W3 field, thereby reducing biases in redshift estimates at z > 1. Measurements of the galaxy bias of these luminous blue galaxies are presented in Comparat et al. (2013b).

## REFERENCES

Comparat, J., et al. 2013a, MNRAS, 428, 1498
Comparat, J., et al. 2013b, MNRAS, 433, 1146
Cooper, M. C., et al. 2008, MNRAS, 383, 1058
Coupon, J., et al. 2009, A&A, 500, 981
Davis, M., et al. 2003, in Discoveries and Research Prospects from 6- to 10-Meter-Class Telescopes II, ed. P. Guhathakurta, Proc. SPIE, 4834, 161-172
Schlegel, D., et al. 2011, arXiv:1106.1706
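As a concrete illustration of the bitmask check described under "Finding Targets", here is a minimal sketch. It assumes you have already read the ANCILLARY_TARGET1 column into an integer array (for example from a catalog FITS file); the sample values below are placeholders, not data from the survey.

```python
import numpy as np

ELG_BIT = 61  # bit number for this ancillary program

# Placeholder values standing in for an ANCILLARY_TARGET1 column
ancillary_target1 = np.array(
    [0, 1 << 61, (1 << 61) | (1 << 3), 1 << 12], dtype=np.uint64
)

# True where bit 61 is set, i.e. the object was targeted as an ELG
is_elg = (ancillary_target1 & np.uint64(1 << ELG_BIT)) != 0
print(is_elg)  # [False  True  True False]
```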
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8940874934196472, "perplexity": 7163.960492279712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944996.49/warc/CC-MAIN-20230323034459-20230323064459-00277.warc.gz"}
http://mathematica.stackexchange.com/questions/16050/a-function-that-only-evaluates-on-lists-of-pairs
# A function that only evaluates on lists of pairs [duplicate]

Possible Duplicate: Why doesn’t PatternTest work with Composition?

I'd like my function to only evaluate when the argument is a list of pairs. It seems like Repeated and a question mark should work:

    fourPears[argument_?MatchQ[#, {{_, _} ..}] &]

but it does not. What's wrong with this?

The question mark is called PatternTest in this context and is not to be confused with Information, which also has a question mark as its short form. – ArgentoSapiens Dec 10 '12 at 23:22

I prefer the simpler pattern pairs : {{_, _} ..}. For example,

    f[pairs : {{_, _} ..}] := Row[{pairs, " is a list of pairs"}]

    f@a
    f[a]

    f@{a}
    f[{a}]

    f@{a, b, {c, d}}
    f[{a, b, {c, d}}]

    f@{{a, w}, {b, x}, {c, y}}
    {{a, w}, {b, x}, {c, y}} is a list of pairs

The ? in this expression (does it have a name?) has high precedence, so it sticks to argument_ and MatchQ more than MatchQ sticks to its own arguments. You need parentheses.

    fourPears[argument_?(MatchQ[#, {{_, _} ..}] &)]

should work, but it might not be the best way to accomplish this.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23006242513656616, "perplexity": 6077.980405163381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988930.94/warc/CC-MAIN-20150728002308-00340-ip-10-236-191-2.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/221989-indices-logarithms.html
# Math Help - indices & logarithms

1. ## indices & logarithms

3^2n+1 x (1/6)^n = 2^2-n x 8^n

Find n

2. ## Re: indices & logarithms

Do you mean (3^(2n+1))(1/6)^n = (2^(2-n))8^n? It helps to know that 6 = 2(3) and 8 = 2^3. Write everything in powers of 2 and 3. (The fact that there is no power of 3 on the right is something of a "giveaway"!)

3. ## Re: indices & logarithms

Stuck at this step
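Following the hint in post 2 (and assuming the parenthesization suggested there), writing both sides in powers of 2 and 3 gives $3^{n+1} \cdot 2^{-n} = 2^{2n+2}$, i.e. $3^{n+1} = 2^{3n+2}$, so taking logarithms leaves one linear equation in n. A quick numerical check of that value:

```python
from math import log, isclose

# (n+1)*log(3) = (3n+2)*log(2)  =>  n = (2*log(2) - log(3)) / (log(3) - 3*log(2))
n = (2 * log(2) - log(3)) / (log(3) - 3 * log(2))
print(n)  # approximately -0.293

lhs = 3 ** (2 * n + 1) * (1 / 6) ** n
rhs = 2 ** (2 - n) * 8 ** n
assert isclose(lhs, rhs)  # both sides agree at this n
```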
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9704380631446838, "perplexity": 6718.749655523859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500825567.38/warc/CC-MAIN-20140820021345-00327-ip-10-180-136-8.ec2.internal.warc.gz"}
http://itfeature.com/statistics/measure-of-dispersion/descriptive-statistics-multivariate-data-set
# Descriptive Statistics for a Multivariate Data Set

Much of the information contained in the data can be assessed by calculating certain summary numbers, known as descriptive statistics, such as the arithmetic mean (a measure of location) and the average of the squared distances of all the numbers from the mean (a measure of spread or variation). Here we will discuss descriptive statistics for a multivariate data set. We shall rely most heavily on descriptive statistics that measure location, variation, and linear association.

## Measure of Location

The arithmetic average of the $n$ measurements $(x_{11}, x_{21}, \ldots, x_{n1})$ on the first variable (defined in Multivariate Analysis: An Introduction) is

Sample Mean = $\bar{x}_{1}=\frac{1}{n} \sum _{j=1}^{n}x_{j1}$

The sample mean for the $n$ measurements on each of the $p$ variables (there will be $p$ sample means) is

$\bar{x}_{k} =\frac{1}{n} \sum _{j=1}^{n}x_{jk}, \quad k = 1, 2, \cdots, p$

The measure of spread (variance) for the $n$ measurements on the first variable is

$s_{1}^{2} =\frac{1}{n} \sum _{j=1}^{n}(x_{j1} -\bar{x}_{1} )^{2}$

where $\bar{x}_{1}$ is the sample mean of the measurements on the first variable. The measure of spread (variance) for the $n$ measurements on each of the $p$ variables is

$s_{k}^{2} =\frac{1}{n} \sum _{j=1}^{n}(x_{jk} -\bar{x}_{k} )^{2}, \quad k=1,2,\dots ,p$

This sample variance is often written as $s_{kk}$, i.e.

$s_{k}^{2} = s_{kk} =\frac{1}{n} \sum _{j=1}^{n}(x_{jk} -\bar{x}_{k} )^{2}, \quad k=1,2,\cdots ,p$

and the square root of the sample variance, $\sqrt{s_{kk}}$, is the sample standard deviation.

## Sample Covariance

Consider $n$ pairs of measurements on each of Variable 1 and Variable 2

$\left[\begin{array}{c} {x_{11} } \\ {x_{12} } \end{array}\right],\left[\begin{array}{c} {x_{21} } \\ {x_{22} } \end{array}\right],\cdots ,\left[\begin{array}{c} {x_{n1} } \\ {x_{n2} } \end{array}\right]$

That is, $x_{j1}$ and $x_{j2}$ are observed on the $j$th experimental item $(j=1,2,\cdots ,n)$. A measure of the linear association between the measurements of $V_1$ and $V_2$ is provided by the sample covariance

$s_{12} =\frac{1}{n} \sum _{j=1}^{n}(x_{j1} -\bar{x}_{1} )(x_{j2} -\bar{x}_{2} )$

(the average of the products of the deviations from their respective means). More generally,

$s_{ik} =\frac{1}{n} \sum _{j=1}^{n}(x_{ji} -\bar{x}_{i} )(x_{jk} -\bar{x}_{k} ), \quad i=1,2,\ldots,p \mbox{ and } k=1,2,\ldots,p$

which measures the linear association between the $i$th and $k$th variables. Variance is the most commonly used measure of dispersion (variation) in the data, and it reflects the amount of variation, and hence information, available in the data.

## Sample Correlation Coefficient

The sample correlation coefficient for the $i$th and $k$th variables is

$r_{ik} =\frac{s_{ik} }{\sqrt{s_{ii} } \sqrt{s_{kk} } } =\frac{\sum _{j=1}^{n}(x_{ji} -\bar{x}_{i} )(x_{jk} -\bar{x}_{k} ) }{\sqrt{\sum _{j=1}^{n}(x_{ji} -\bar{x}_{i} )^{2} } \sqrt{\sum _{j=1}^{n}(x_{jk} -\bar{x}_{k} )^{2} } }, \quad i=1,2,\ldots,p \mbox{ and } k=1,2,\ldots,p$

Note that $r_{ik} =r_{ki}$ for all $i$ and $k$, and that $r$ lies between -1 and +1. $r$ measures the strength of the linear association: if $r=0$, there is no linear association between the components. The sign of $r$ indicates the direction of the association.
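A short numerical sketch of these quantities, using the same 1/n convention as the formulas above (library defaults such as numpy's np.cov use 1/(n-1) instead, so the sums are written out explicitly). The small data matrix is made up purely for illustration.

```python
import numpy as np

# Rows are the n items, columns are the p variables (made-up numbers).
X = np.array([[4.0, 2.0, 0.60],
              [4.2, 2.1, 0.59],
              [3.9, 2.0, 0.58],
              [4.3, 2.1, 0.62],
              [4.1, 2.2, 0.63]])
n, p = X.shape

xbar = X.mean(axis=0)          # the p sample means
D = X - xbar                   # deviations from the means
S = (D.T @ D) / n              # p x p covariance matrix s_ik (1/n convention)
std = np.sqrt(np.diag(S))      # sample standard deviations sqrt(s_kk)
R = S / np.outer(std, std)     # correlation matrix r_ik

print(xbar)
print(S)
print(R)  # symmetric, ones on the diagonal, entries between -1 and +1
```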
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9231976270675659, "perplexity": 642.8680939932141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805894.15/warc/CC-MAIN-20171120013853-20171120033853-00602.warc.gz"}
http://lambda-the-ultimate.org/node/4448
## Why Concatenative Programming Matters

Jon Purdy riffs on Hughes' famous "Why Functional Programming Matters" with a blog post on Why Concatenative Programming Matters.

## Comment viewing options

### A Few Critiques

I'm responsible for the rather awkward definition of "concatenative programming languages" that Jon claims somewhat misses the point. He's right. There's a discussion on the concatenative mailing list about how we might be able to fix it; it also includes some of my criticisms of concatenative languages. (Set your browser's encoding to UTF-8 if you want the symbols in the FL code to not be hopelessly mangled.)

I want to take issue with a few points in the article.

"However, this does not mean that function application becomes explicit—it actually becomes unnecessary. And as it turns out, this peculiar fact makes these languages a whole lot simpler to build, use, and reason about."

As much as I love them, there's really no evidence at all that concatenative languages are easier to use or reason about. The main issue is that the equational reasoning you can do ends up being quite weak as the entire state of the program is threaded through every function. The result is that, syntactically, there is no way to determine logical dependencies between parts of a function and hence reasoning about subexpressions is quite limited. It's also somewhat easy to accidentally introduce a dependency where there shouldn't be one by smashing part of the program state you didn't mean to touch.

"So already, concatenative languages give us something applicative functional languages generally can’t: we can actually return multiple values from a function, not just tuples."

This is not really true. You can view functions in a concatenative language as functions from lists to lists which are certainly possible in most functional languages. For example, we can write a concatenative-style length function in Haskell just by using unit (for the empty list) and nested tuples (for cons). First, here's the setup:

    {-# LANGUAGE NoImplicitPrelude #-}
    {-# LANGUAGE TypeOperators #-}
    module Concatenative where

    import qualified Prelude as P
    import Prelude ((.), (+), flip, snd, Bool, Integer)

    infixl 9 !
    (!) = flip (.)

    infixl 9 :*
    type (:*) = (,)

    -- quotation
    q :: b -> a -> a :* b
    q = flip (,)

    -- forall A B. A [A -> B bool] [A -> C] [A -> C] -> C
    ifte :: a :* (a -> (b, Bool)) :* (a -> c) :* (a -> c) -> c
    ifte (((s, c), f), g) = if snd (c s) then f s else g s

    -- forall A b C. A b [A -> C] -> C b
    dip = \((s, a), f) -> (f s, a)

    -- forall A b. A b -> A
    drop = \(s, a) -> s

    -- forall A. A integer integer -> A integer
    add = \((s, a), b) -> (s, a + b)

    -- forall A b. A (list b) -> A bool
    null = \(s, xs) -> (s, P.null xs)

    -- forall A b. A (list b) -> A (list b)
    tail = \(s, xs) -> (s, P.tail xs)

    -- forall A. A -> A integer
    zero = \s -> (s, 0)
    one = \s -> (s, 1)

With all that mess, we can now write our tail-recursive length function; the type signature for 'length' is optional:

    -- length : all A b. A (list b) -> A integer
    -- length = [0] dip loop where
    --   loop = [null] [drop] [[1 +] dip tail loop] ifte
    length :: a :* [b] -> a :* Integer
    length = q zero ! dip ! loop
      where loop = q null ! q drop ! q (q (one ! add) ! dip ! tail ! loop) ! ifte

    -- xs == ((), 5)
    xs = length ((), [0, 1, 2, 3, 4])

Hopefully this demonstrates that there's not much "special" going on in a typed concatenative language. If you write 'q' using square brackets and make ':*' and '!' silent, you're almost all the way there.
Ideally, you'd finish it with a kinding restriction such that the left side of a pair was always another pair or the unit type; unfortunately, this restriction is not expressible in Haskell so you just need to be careful to respect it. (You will also want support for polymorphic recursion for functions that make use of things like linear recursion instead of pure iteration as above. You can probably figure out why this is so. Of course, GHC already handles it nowadays.) They are amenable to dataflow programming This nice property breaks down when you get into higher-order programming. This is the reason we've not seen any visualizers for languages like Joy; I'd have written one by now otherwise. You can easily roll your own control structures I'm not sure this is true at all; or, at least, not any more easily than you can in something like Haskell. My 'length' above is not too bad, but it gets quite a bit more gnarly when you're writing higher-order functions. As an exercise, try writing 'map'. Even if you write 'map', you need to be careful about how you write it. In particular, you need to restrict the function passed in to using no more than one element on the stack otherwise you'll really have a map/accumulator and the usual map fusion rule won't hold. This sort of restriction is possible both via types and in an untyped context. I've been writing about this stuff to a small audience for so long now that I've probably made plenty of inappropriate assumptions. Feel free to ask for clarification where necessary. ### Ease of reasoning I'd agree that concatenative languages are not easier to reason about, if the author had meant that they're purely superior. But he didn't. They are easier in some ways, harder in others. You correctly cite the fact that the entire state is implicitly passed to all functions. This is partially true; if the compiler doesn't care, the entire state can be accessed by any function. But this need not be so. Obviously, a strongly typed language can ensure that a function accesses only the part of the stack for which its type is suited. Less obviously, there are other data structures in addition to the stack that can be controlled in other ways. ANS Forth specifies floating point, but leaves it as an implementation issue whether there is a separate floating point stack. Suppose there were one; then none of the words which didn't use the FP stack would need access to it. Now, an FP stack of this nature is so trivial it doesn't matter. A more sophisticated problem would be raised by words that perform something like return stack manipulation. It's possible for such words to be typechecked and to be determined compatible (ANS Forth defines a few basic types, although probably without the rigor people here would expect). Forth also has a dictionary whose use could and should be checked (since runtime dictionary access for the purpose of creating named definitions is a very unusual action). ### Obviously, a strongly typed Obviously, a strongly typed language can ensure that a function accesses only the part of the stack for which its type is suited. Yes, but it conceptually is passed and the syntax reflects that. That's what I mean by "syntactically, there is no way to determine logical dependencies". ### Parametricity Indeed, this would simply be an instance of parametricity. 
Except that (1) that is a non-trivial property and needing it severely complicates proofs of things that would be simple otherwise, and (2) nobody has actually proved parametricity properties for any of these languages AFAICT, and (3) as soon as you add reflection capabilities it's out the window. In terms of security this is also pretty bad, because the stack essentially provides unrestricted ambient authority. ### All that is true and not All that is true and not true. The syntax "reflects" that everything is passed, because the syntax is, by definition, an exact reflection of a purely associative semantics (which does require passing everything). But real languages (i.e. anything other than zeroone) add other structures on top of the purely associative syntax (for example, the nested square brackets most concatenative languages use in imitation of Joy), so those real languages could also add syntaxes that reflect other types of access. ColorForth isn't at all a pure language by any definition, but it provide an example of this by tagging words when they write to the dictionary, as well as when they perform immediate actions. That "tag" is actually a special character prepended to the word, and it's only the editor's implementation that hides the special character and colors the word. Obviously, a tag like this is best used with effects that occur relatively rarely, unless you add parenthesis and a distributive tag. Manfred von Thun recently defined a language called "floy" (Flat Joy) in which he exposed a formerly internal structure of Joy called (IIRC) the "annex", where quotations get built. These annexes give the real language its tree structure, and in the real Joy language they're "typechecked" to keep quotations balanced and to make annexes appear only while the compiler is executing, never at runtime. -Wm ### Arrows There are styles of concatenative programming that do not pass the whole program state around, that structurally constrain what each subprogram sees. Arrows seem to be an effective basis for this, though they'd still need to be accompanied by an appropriate vocabulary. ### Analogy Between Concatenative Languages and Arrows In my untutored way I've wondered about this question before. Can you elaborate (or point to an elaboration) on the relation between concatenative languages and arrows? ### If you like, you can view If you like, you can view '(>>>)' as composition and 'first' as 'dip' in terms of how you tend to use them to do concatenative programming. ### Point-free style, Tacit programming In other communities concatenative programming might be identified by the words point-free style or tacit programming. These are styles that focus on composing operations rather than the variables. Though, it is still sometimes convenient to use variables locally and desugar to a point-free model. Arrows extend simple function composition with some additional structure and, importantly, operations on just part of that structure - thus enabling developers to enforce that certain subprograms only interact with certain parts of the structure. This addresses the above complaint about syntactically controlling relationships in the program. Anyhow, if you shun use of the local variables to desugar back to arrows, arrow programming will be point-free, tacit, very compositional. In Haskell the composition will be type-safe - you can't just concatenate any two programs, but you can do so whenever the types match up. 
Generalized arrows enforce the arrow structure far more strictly than the Hughes / Patterson arrows originally introduced in Haskell (which heavily use arr to lift Haskell functions). Generalized arrows are very useful if you want to embed a language that isn't a superset of Haskell. ### In other communities In other communities concatenative programming might be identified by the words point-free style or tacit programming. Not quite. I think it's important to consider concatenative languages as a subset of pointfree languages. At a minimum, there is no form of application (as there is in J or Backus's FL). As I've said elsewhere in these comments, I think a more useful definition goes a bit beyond this however. ### Application as Staging Even for purist concatenative programming, application is possible in a manner similar to ArrowApply. Function and operand are part of the state, and you apply one to another and replace the state with the result. Something like 3 f apply where f puts a function on the stack. Application, in this sense, becomes conflated with staged metaprogramming, separating the construction of the program from its execution. So constrained use of parameters can be acceptable as a form of syntactic sugar. The relevant constraint is that the parameters can be computed statically. This is close to templated functions or macros, eliminating the need to statically write many variations of functions. The staging results in a very different programming experience than application in normal functional programming. Of course, too much sugar would detract from the concatenative flavor. But developers will quickly find the right balance, being guided to it by the strict staging. Developers won't be able to concatenate programs at just any darn point, but it will still be clear to them where they can do so, and they'll still emphasize compositional reasoning with regards to the runtime behavior of the program. Meanwhile, it won't take many statically parameterized operators - such as (***) from Arrows - to effectively and syntactically control access to different parts of the runtime state. Even if primitive, such operators will fit right in with the other uses of static parameters. ### Even for purist Even for purist concatenative programming, application is possible Sure. The point though is that there's no syntax for it. So constrained use of parameters can be acceptable as a form of syntactic sugar. The relevant constraint is that the parameters can be computed statically. Yes, and Factor does this. The translation is not too bad. FL also does this with a quasi-matching syntax so you can write 〚x, y〛→ e and x and y will be translated to s1 and s2 (the list selectors) respectively in e. But developers will quickly find the right balance I think they already have in Factor. Variables are used (and translated away) for a very small percentage of functions. ### Sure There are styles of concatenative programming that do not pass the whole program state around Sure, and you can do this in "normal" concatenative languages as well; I was just generalizing a bit. For example, let's suppose a combinator called 'infra' which applies a quotation to a stack on the stack: {A} [B] infra == {A B} We can use this to write a function 'bi*' (choice of name not mine) that, given two quotations, applies the first to all elements below the top non-quotation and the second to only the top non-quotation; the second quotation is restricted to accessing at most one argument. 
In other words, we have the following semantics (where parentheses denote independent expressions and lowercase variables bind single elements): A x [F] [G] bi* == (A F) (x G) Here's the implementation: bi* = [dip] dip -- apply the first quotation [{} swap push] dip -- group the top non-quotation value into a substack infra -- apply the second quotation to the substack concat -- concatenate the main stack with the substack You can also use 'infra' commonly to limit how many values a quotation passed into a higher order function may use. For example, you could imagine an 'apply_n' combinator written in terms of 'infra' that, given a quotation and a natural number representing the number of arguments it should use, applies that quotation to exactly that number of arguments and concatenates the result back onto the main stack; if the quotation attempts to use too many arguments, you'd get a stack underflow. Obviously you can also do this statically with types to push the error to compile-time. Of course, none of this solves the problem that independence of subexpressions is not syntactically apparent. Perhaps this is not so bad if you have a good set of well-known and sufficiently general combinators that limit views of the main stack appropriately. So far as I know, no one has really experimented with this in real programs. ### I am not a stack The program state doesn't need to be a stack. That is how FORTH does it, of course, but with Arrows the program state is easily a (rich,((product,of),values)) + (or,(a,sum)). Some of those values could be stacks, of course, but that isn't necessary. If you eliminate the assumption or preconception that there must be a main stack, that frees you up to explore design spaces where independence of sub-expressions is more syntactically apparent. I know at least one designer who chose to shun first f in favor of f *** id in order to make the structure more syntactically apparent, especially as he was exploring the possibility of tile-based composition. ### A Definitional Question The program state doesn't need to be a stack. Well this is sort of a definitional question. All languages that claim to be concatenative that I'm aware of either are stack-based or have some effectively similar scheme (e.g. Enchilada's parallel term rewriting or XY's stack + continuation model). I've personally played quite a bit with alternative models but nothing particularly successful. I remember someone presenting a rather disturbing language based on a deque instead of a stack actually... There are certainly pointfree programming approaches that avoid the problem but I'm not sure I'd call them concatenative. For example, Backus's FP makes use of the "construction" combining form which is essentially an n-ary special syntax for '(&&&)'. I've wanted to add an n-ary special syntax for '(***)' called "combination" as well. I wouldn't call FP concatenative though even if composition were written via juxtaposition. I think for the term concatenative to be useful, it needs to mean more than just "composition is juxtaposition"; I think it also implies something about functions accepting and propagating "additional" arguments by default. The link in my original post covers this fairly well. ### Concatenative means more Concatenative means more than composition is juxtaposition. It suggests a heavy emphasis on composition, to the exclusion of other programming styles. The property I'd be looking for is that operations are closed, which leaves us free for immediate composition. 
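The 'infra' and 'bi*' semantics described above are easy to model directly. The following is a small illustrative sketch in Python (not Joy or Factor code, and not the concatenative definitions from the comment): the stack is a list with its top at the right, and a quotation is just a Python function from stacks to stacks.

```python
# Stack: a Python list, top of the stack is the last element.
# Quotation: a function taking a stack and returning a new stack.

def infra(stack):
    """{A} [B] infra == {A B}: run quotation B on the sub-stack A."""
    *rest, substack, quot = stack
    return rest + [quot(list(substack))]

def dip(stack):
    """A b [F] dip == (A F) b: run F underneath the top element b."""
    *rest, b, quot = stack
    return quot(rest) + [b]

def bi_star(stack):
    """A x [F] [G] bi* == (A F) (x G): F sees everything below x, G sees only x."""
    *rest, x, f, g = stack
    return f(rest) + g([x])

# Example: double everything below the top, negate the top element only.
double_all = lambda s: [2 * v for v in s]
negate_top = lambda s: s[:-1] + [-s[-1]]

print(bi_star([1, 2, 3, 10, double_all, negate_top]))  # [2, 4, 6, -10]
```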
Operations like (***) and (&&&) or even first aren't very closed unless they operate on elements already in the program state. Though, a few standard such operators won't detract much from the concatenative feel. An easy way to keep operations closed is to place operands in a stack or other state, and pass the full state from one operator to another. But is this necessary? What other options are there? Maybe staged programming, which partitions state over time. Or object capability model, which partitions state over space. Perhaps the issue is that plain old quotation is not enough for the control we desire. I've only idly pursued concatenative programming. Emphasis on composition, more generally, is the real win. I believe we'll want multi-dimensional composition (e.g. parallel, choice, staging, sequential, extension), but concatenation seems to imply keeping things in a single-dimension. ### According to Manfred, I According to Manfred, I coined the term "concatenative" (I don't remember doing that); and I never intended it to be exclusive. I did coin the word "flat" with the intent of describing PURELY associative languages, i.e. languages which always map juxtaposition to composition. The difference is that in concatenative languages, you can build programs by joining valid programs; in a flat language you can also split any valid program and always get two valid programs. So... I agree with your motives here. I'd like to see what comes of exploring your ideas. [EDIT:] but I disagree with the specific claim that there's an innate exclusion of other possibilities. -Wm ### Useful characterisation? Is that a useful characterisation at all? Assembly language is concatenative and flat too, then, and so is BASIC. ### OK, what you're claiming, I OK, what you're claiming, I think, is that assembly and BASIC are, in some respect, impurely associative over their lines. Let's take this as true. But BASIC's lines are parsed; there are therefore nearly an infinite number of possible lines (I admit that I don't know the maximum number of characters per line of any BASIC, though). Therefore, calling BASIC concatenative is not a useful characterization -- it gives no analytical power, because your splitting point doesn't split apart any of the interesting parts of the language. Assembly language is different, since in spite of a little bit of parsed syntax it does display SOME limitations on the number of possibilities per line. Machine language is a better idea, though, since it doesn't include the parsed flourishes an assembler can allow; with machine code you can consider the number of 1-byte opcodes (something less than 256) plus the number of 16-bit opcodes (obviously less than 2^16), and so on. The sum is a bit high, but it's manageable. A concatenative assembler is conceivable, even if you allow the registers to be parsed. There are, of course, stack-based assemblers that are traditionally provided with Forth implementations; I find them pleasant to work with in comparison to the traditional assemblers. (I'm not, by the way, confusing a stack-based assembler with the question of whether the assembly language is concatenative.) The trouble becomes that the assembly language is very _poorly_ concatenative; the associative property might be said to hold, but only when you keep in mind a huge list of rules, since so many operations act on hard-wired registers for which the correct value depends on previous operations. 
To TRULY model an associative assembler, you have to build a model that doesn't match the real hardware at all. So although you CAN model assembler in a concatenative manner, it's of very limited use in terms of using the strengths of either the machine or of the concatenative properties of the language. On the other hand, you can very easily build a partially concatenative VM that comes _very_ close to the properties of the machine. -Wm ### Why bytes? So concatenative has to apply on the level of bytes, not tokens, lines, or statements? That strikes me as a rather random and low-level definition. How are such accidental details of syntax useful as a classification principle for languages, whose main concern should be semantics? I answered your previous question in detail because your other comments here show that you understand computing with detail and nuance. So it feels like you're "phoning it in" with this one, or perhaps simply being hostile. To answer your previous question without any detail: no, BASIC is not concatenative, because it has a rich applicative structure. Ditto for most assembly languages, although in some ways assembly is closer to being concatenative -- but as you understand, I can't explain that concept without getting into the low-level details of assembly language, which you've now revealed are taboo. As for your current question: the point is that the semantics and the syntax of a concatenative language both must be associative. (Semantics are not the ONLY point of language design; the mapping between semantics and syntax is important too, as is the syntax itself.) If they're not both associative, it means you can't apply associativity to random text from the middle of the program until you carefully analyze it. Now, if your impurely concatenative language has a simple and clear way to tell when associativity doesn't apply (for example, in all real languages you can't play with letter association, you have to lex the words out first; and in Joy and other higher level language there's a nestable [syntax] that doesn't itself follow the associative rules), you can still use the associative property without too much analysis. -Wm ### What is "program text"? I apologise if this came across as hostile. But I'm truly mystified. You talk about program "text". But the definition of "concatenative" seems to hinge entirely on the level of syntactic abstraction or granularity that you are looking at it -- and you haven't really motivated why any given level is more relevant than any other. E.g. it seems to me that BASIC statement sequences are associative, syntactically and semantically. Why does it matter that they have inner structure? You can always destructure syntactic "atoms" on one level into something even smaller. Until you have arrived at individual bits, I suppose. So where stop, and why there? More importantly, why does it matter, i.e. what is the "analytical power" you are referring to? Maybe I miss something completely... ### Entanglement When we have a lexical scope, statements are often tied together by it. You cannot readily separate - change association of - the code that uses a name from the code that declares it. For BASIC, or at least the variations I've used, this would hinder it from being flat in the sense described initially. I think William has muddled the issue starting with trying to grant your premise ("what you're claiming, I think, is that assembly and BASIC are, in some respect, impurely associative over their lines. 
Let's take this as true.") For a program to be usefully concatenative or flat, I think you'd need a useful definition for valid program. The lack thereof limits utility for flatness of assembly or machine language. ### All good points; you're All good points; you're right. Except one minor problem. The problem with machine language and assembly isn't that we don't know what a "valid program" is; of course we can define that. The problem is that understanding three machine-language programs doesn't always help you understand their concatenation, because they have impure side effects that aren't associative. -Wm ### Impure? In what sense is modifying a register or a an absolute memory address less pure than modifying a relative stack slot (that, depending on context, may not even exist)? ### Stack items may be Stack items may be immutable, even though the array that usually underlies a stack is mutable (and note that Joy uses an immutable list instead of a mutable array, which it uses to backtrack to previous stack states). Therefore, a stack language is normally pure inasmuch as it's point-free -- even in a language like Forth that makes NO attempt to be pure. With that said, though, I wasn't trying to compare languages; I used "impure" to refer to machine language instructions. I think you'd have a hard time disagreeing with that. ### Old-school BASIC To clarify, I was referring to the BASIC of the Good Old Times, with no control structure, just GOTO. Not the bastard children of today. ;) ### It's refreshing to meet a It's refreshing to meet a fellow curmudgeon. I agree with you entirely -- although old-time BASIC needed improvement, so do all languages; you don't get to steal a language's name and put it on a totally different language. Humbug. Modern BASIC is a contradiction in terms. Grin. ### Any PL that attracted Any PL that attracted a passionate fan base (wide use doesn't require fans) must imho have had something beautiful in its elemental design. Ideally, the next generation of language designers would identify the beauty and find a way to merge it with new ideas; instead, typically they say "that language is popular, therefore we should imitate its syntax", and so we inherit from the old languages syntax rather than beauty. Enumerating disadvantages of vintage BASIC is uninteresting; what's interesting is trying to understand what's beautiful about it. ### Thank you for clarifying. My Thank you for clarifying. My problem was when you asked about assembly but then criticized me for giving a low-level answer (to a low-level question). It seemed like I'm putting a TON of effort into answering questions, and you had put NO effort into objecting to my answers. The problem with ignoring inner structure is that in order to analyze a program, you have to know precisely what the atoms of the program mean. If you're examining BASIC and defining the statements as atoms, you've got a language with a countably near-infinite number of possible atoms -- far more than you can usefully consider. In addition, many of the "atoms" have an effect on the state of the machine that is non-associative in its nature, so that the resulting statements may not actually have associative semantics against its neighbors (although it might!). You're actually ignoring structure that would be useful if you would consider it on its own terms -- as an impurely applicative language. 
A simple example of inner structure that preserves the concatenative structure of the program that contains it is the "number" parser that Forth provides. Although there are a large number of possible atoms those parsers generate, they all produce the same program effect -- to push a specific number on the stack. The inner non-concatenative structure is simple and maintains the outer associativity, even though internally, its syntax is not associative. On the other hand, you can almost always usefully analyze a language while ignoring outer structure -- you simply limit your analysis to the area contained by the outer structure. The "[" and "]" characters are a very common way to denote outer structure in concatenative languages ever since Joy. Yes, you can always break down words further until you reach the bit level. For this exact reason I wrote a purely concatenative language at the bit level -- it has only two functions, zero and one. Aside from such self-imposed silliness (grin), the point is that one still has to analyze the programs written in the language by starting with the syntax of the language, mentally following the language's defined mapping from syntax to semantics in order to get a mental model of the program's semantics. One advantage of a purely concatenative language is that, because both the syntax and the semantics are everywhere associative, you can start at any point inside the program's text, and know that the semantics you figured out are correct in the big picture. This allows you, then, to not need to figure out hard parts of the program until you've figured out the rest. This means you don't even have to _parse_ the program. For example, in my original 1-bit language, the bit sequence "01" happens to mean "drop" (discard the top item from the stack). Thus, that sequence ALWAYS means "drop", no matter where it appears. Yes, my original language was VERY wasteful, constantly throwing data away -- but in that respect it was easy to analyze. From this fact, I was able to immediately tell that the shortest nontrivial no-op was "0010101", since I knew that "0" puts 3 items on the stack, and each "01" dropped one of them; I didn't have to KNOW what each of those 3 items actually did, since I knew that they were all discarded. In a PRACTICAL language, unlike the toys I design, breaking down programs to the bit level, and even the character level, won't assist in understanding. All practical languages define their meanings at the lexeme level. Thus, this is where you MUST stop. Now, depending on the nature of your analysis you MAY stop earlier; but lexemes are usually defined so that their boundaries are easy to see. (I admit that BASIC statements meet this definition because they're each on a line by themselves, but I wasn't complaining about your question; I simply pointed out why it was a problem to do that, even though it's possible to do it.) So, let's see. "Why does it matter that they have inner structure?" Because their inner structure is giving you useful clues that you don't want to throw away. "So where to stop, and why?" Stop at the level that helps your analysis, and describe the language based on the properties that hold at that helpful level; alternately, stop at the level appropriate to the type of language you're examining, so that your analysis can be reasonably low effort. "What is the analytical power [I am] referring to?" 
Well, the ability to examine and think about a string of text (a program) and to come up with a mental model of what it does (the semantics). -Wm ### Structure OK, I kind of see what you are saying, but I still think there is something arbitrary about it that I cannot really put my fingers on. For example, it seems that you want to be able to inspect a piece of program as a sequence of lexemes, and then be able to tell what any subsequence does without context. I said this is low-level, because in almost all languages, the abstract structure of a program, on which semantics is defined, is a tree. And at that level of abstraction, you can, analogously, look at a subtree and tell what it does. I still haven't understood what the advantage is in insisting on the low-level sequential textual representation of a program, which, to me, is just an (uninteresting) encoding of its abstract structure, an artefact of our inability to edit trees more directly. Essentially, my understanding now is that "concatenative" says that this encoding is trivial. But why should I care about that at a higher level? And in a similar vein, why is it in any way preferable to have less structure on the higher levels of abstraction rather than more? (Btw, just to help me understand your example: in your bit language, if you see a 0 in a random bitstream, how do you know it's the 0 instruction, and not part of a bigger instruction? And what prevents you from splitting 01 in the middle and produce something completely unrelated?) ### "I still haven't understood "I still haven't understood what the advantage is in insisting on the low-level sequential textual representation of a program" [...] I'm not insisting on textual representation. Why would I do that? I'm telling you that textual representation is how our actual languages encode programs. Concatenative or applicative, they all encode our programs as flat text. Someday this might not be the case; but right now it IS the case. So a language that has a theory that fits well with flat text has some advantages over one that doesn't. Don't let me claim too much here, by the way. I freely admit that my argument doesn't prove "concatenative is better than applicative". I'm actually not interested in that argument. What I'm interested in is "what advantages does concatenative have, if any." "an artefact of our inability to edit trees more directly." Sure, fine; I'll grant that for the sake of argument, although I don't think trees are in any way a fundamental model of computation (so being able to edit them would merely be equivalent to a new programming language, NOT a paradigm shift). Suppose that sequential text representation IS a mere artifact of our current inability. So? Right NOW we can't edit trees; so let's use what we actually have. "But why should I care about that at a higher level? And in a similar vein, why is it in any way preferable to have less structure on the higher levels of abstraction rather than more?" I'm confused here. What do you mean "higher level"? And "abstraction"? In my limited experience, those terms are usually used to describe program design, not language analysis. I have literally no idea what you might be referring to here. ### Representational abstraction By "higher-level" I was referring to the level of representational abstraction (character seq < token seq < parse tree < AST). As a user of a language, the AST level usually is all that matters to me on a grander scale, because that is where the semantics lives. 
Everything below effectively is a representational implementation detail. It may matter for editing and reading a program properly, but not for understanding its meaning. The latter is far more important than the former. And more structure helps.

### Ok, I guess I see. I see

Ok, I guess I see. I see massive problems with this response, though.

First is that your model isn't complete or accurate -- it's like the OSI's 7-layer model of networks; it's nice to have the terminology, but nobody who actually writes network code can afford to follow it. It's a minor detail that every implementation ever written disregards your model; it's much more important that many language designs depend on violating it (indentation-based languages, for example). Concatenative languages are just another group of examples of languages that don't follow those rules -- eventually you need to take the huge number of exceptions as evidence that the rules aren't as universal as you thought.

Second, and even more importantly, where language design is done, we look for ways to allow people to use characters to express semantic intent; we can't just ignore the characters produced by people pressing keys on the keyboard. BECAUSE programming languages are generally textual, good programming language design MUST not ignore text.

Third, when you distinguish between "editing and reading" and "understanding its meaning" you're making an unimportant distinction. If a language design makes editing and reading easier, then it also makes understanding more likely to succeed.

So... I claim that there's some usefulness in how concatenative languages change the (character seq - token seq - parse tree - AST) and make it into (character seq - token seq - ASL). (ASL = Abstract Syntax List.) I don't think this shortcut trumps all of the difficulties real programmers have with real concatenative languages; I'm not that kind of tragic misunderstood artist :-). I just think there's something interesting to explore here, that might perhaps be useful, somehow.

You ended by saying "and more structure helps". Sure. But is function application the essence of structure? I don't see how that's possible.

-Wm

### In my language there are

In my language there are only two functions defined, zero and one. If you see a zero, it's ALWAYS the zero instruction, because there are none other. (User definitions are not provided -- think of this as being more like a bitcode VM than a language. Yes, I'm working on an assembler for it that'll have most of the same characteristics but also provide conveniences like names and so on.)

If you split "01" (drop) in the middle, you'll get "0" and "1" (of course). Those are not unrelated; they mean the same thing when executed one after the other as "01" does when executed all at once. To me, the creepy part is realizing that this means that if I have ANY function that ends in "0", and I concatenate it with ANY function that begins in "1", no matter what those two functions do, there's a "01" that means _drop_ right there where they meet.

Now, HOW that works does take some explaining. If you're curious, I'd be glad to explain; but first a couple of questions (assuming you ARE curious; if not, of course I won't waste your time). First, do you know combinator logic? (I think you do.) Second, have you read any part of Kerby's "A Theory of Concatenative Combinators" with any amount of understanding? How you answer those questions will allow me to tune my answer to your current understanding.
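(A minimal sketch of the associativity being discussed, in ordinary Haskell rather than the bit language above: if a program denotes the left-to-right composition of its words, then splitting the token list anywhere yields two programs that compose back to the original. The toy integer stack and word names here are illustrative only.)

```haskell
type Stack = [Int]
type Word' = Stack -> Stack        -- a "word" is just a stack transformer

-- The meaning of a program is the left-to-right composition of its words,
-- so meaning (p ++ q) == meaning q . meaning p, no matter where you split.
meaning :: [Word'] -> Stack -> Stack
meaning = foldr (flip (.)) id

dup', drop' :: Word'
dup'  (x:s) = x : x : s
dup'  []    = []
drop' (_:s) = s
drop' []    = []
```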
### Zero and One I do know of one language that uses 0 and 1 as the only instructions. But it also uses two cyclic registers to track the current meaning of an instruction, and external state. Whirl. ### Heh, that's actually not too Heh, that's actually not too bad. IMO it's a better obfuscated language than Malbolge (which is not a language at all, it's just a shoddy encryption algorithm -- they could have met their apparent goals better by implanting a virtual machine in the middle of an AES algorithm, so in order to make the machine do anything useful you'd have to control the bits that appear in the middle of AES encryption). Of course, you see the difference between Whirl and my "zeroone" -- my language has only one meaning for each symbol. Yes, my language is almost as hard to program in as Whirl; it's so dang primitive. I built a brute-force search to find zeroone source code for a given combinator, and am building an evolutionary algorithm -- although rather than attempting to find source code for a combinator, I'm trying to find a better pair of primitives. I'm wasting my time, but I'm having fun. You should see my other hobby projects. ### ZeroOne I am intrigued on how "push 3 items" become "drop 1" when concatenated with an unknown instruction. Do you have a blog where you can explain this? ### Sadly, I don't have a blog Sadly, I don't have a blog -- I've never been able to post continuously. If you'd like to explore, feel free to subscribe to the Concatenative mailing list; I can answer questions is far greater detail there. http://tech.groups.yahoo.com/group/concatenative/ For these purposes, I'm not sure what level of detail to go into. If you've read that web page by Kerby I posted earlier, you'll understand right away... Otherwise there's a little groundwork to lay first. In short, the answer is that the "zero" and "one" form a complete "basis" over combinatorial logic on a stack (the usual logic operates over a parse tree). Therefore, however we DO it, there must be a way that stringing zeros and ones together forms ALL POSSIBLE combinators. The longer answer is ... we had to figure out whether it was possible to invent a combinatorially complete basis that contained only two symbols and operated over a stack without any need for list syntax. It turns out that it is; but there are some complex restrictions that I still haven't completely figured out. I asked, we debated, and eventually Kerby turned up and was able to produce a combinator set; I then produced a different one that had some advantages (we later decided that neither set was definitely better than the other, so Kerby's definitely the inventor and the only person who could even say whether it was possible, although I did ask the question first). I'll let you digest what I've said so far. ### Minor OT: Polymorphic Recursion in Haskell Polymorphic recursion has been required of Haskell implementations since Haskell 1.3 released in 1996. . ### Concatenative Dataflow Programming with Arrows I've been developing a concatenative programming language based on an arrowized reactive/dataflow programming model. I had completely forgotten this discussion from eighteen months ago, so I was a bit surprised to realize I had contributed to it. One of my comments - about arrows allowing a richer environment than a stack - seems very relevant now. John Nowak argues that concatenative programming being "amenable to dataflow programming" breaks down for higher-order programming. I agree with this for existing languages. 
Fortunately, the arrowized model helps out a great deal by enforcing a clean staged programming model. For normal arrows, the inputs cannot be observed before the arrow is fully constructed, which results in a very rigid and inflexible program structure - but one very useful for dataflows and reactive programming (as demonstrated in a multitude of FRP libraries). To gain some flexibility, I support 'static' data types along with static decisions. In Haskell terms, a 'static' number would be a number represented in the type system, and static decisions could be modeled by dispatching on typeclasses. In my language, all literals (text, numbers, and [blocks of code]) are of this static-typed nature. (But I don't support any form of quoting.)

The concatenative program is ultimately a compile-time metaprogram for a run-time arrow structure. A subprogram like 7 roll might roll the 7th element of the stack to the top. More interestingly, "foo" load could automatically build the arrowized data-plumbing to load a value from an association list in the complex environment.

I'll provide a little example code. My language doesn't actually support type signatures (inference and introspective assertions only) so I'll just comment a few in.

```
% first  :: ((a->a')*(a*b)) -> (a'*b)       -- PRIM
% assocl :: (a*(b*c)) -> ((a*b)*c)          -- PRIM
% swap   :: (a*b) -> (b*a)                  -- PRIM
% rot3   :: (a*(b*(c*d))) -> (c*(a*(b*d)))  -- PRIM
% intro1 :: a -> (Unit*a)                   -- PRIM
% elim1  :: (Unit*a) -> a                   -- PRIM

% assocr :: ((a*b)*c) -> (a*(b*c))
assocr = swap assocl swap assocl swap

% rot2 :: (a*(b*c)) -> (b*(a*c))
rot2 = intro1 rot3 intro1 rot3 elim1 elim1

% rot4 :: (a*(b*(c*(d*e)))) -> (d*(a*(b*(c*e))))
rot4 = assocl rot3 rot2 assocr rot3

% second :: ((y->y')*(x*y)) -> (x*y')
second = assocl swap rot2 first swap

% apply :: ((x->x')*x) -> x'
apply = intro1 swap assocr first
```

I could provide a bunch of dup/over/pick instructions. But they're a bit more involved in my language since I support substructural types, and I don't want to get into those differences right here.

Also, I introduced rot3 to support block-free data plumbing. Without rot3 the same functions would use blocks, which works fine if our environment is just a single stack and blocks:

```
second = swap [swap] first swap first swap
rot2   = assocl [swap] first assocr
rot3   = [rot2] second rot2
```

The blocks of code serve a similar role as they do in Factor. Developers are more limited in their ability to 'capture' runtime-computed values into a block.

One issue I quickly discovered is that a single-stack environment is awful for concurrent dataflow programming. In concurrent dataflows, I often have multiple workflows or pipelines that are mostly independent, but occasionally interact (sharing intermediate results), merge together, or divide into new subtasks. A single stack works wonderfully for a single pipeline. But the moment I try to integrate intermediate results, it fails: I need to dig too deeply into the stack to find values from the other workflow's computation, and this digging is fragile to upstream code changes.

So I tried a few other environments. Ah! I thought. A Huet zipper based environment looks expressive! So I first tried that. And it was very expressive. I could create subtrees to my right or left, wander into them, create values and objects. I came up with a very powerful idea: modeling a "hand" to carry things as I move around. So I could model different tasks or workflows in different subtrees, and carry intermediate results from one tree to another.
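(For the record, a rough Haskell sketch of that navigation-plus-hand idea, using a flat list of workspaces rather than the tree zipper just described; the names and representation are mine, not the actual language's.)

```haskell
-- Illustrative only: a zipper of workspaces plus a "hand" for carrying values.
data Env a = Env
  { leftWs  :: [[a]]   -- workspaces to the left of the focus
  , focus   :: [a]     -- the current workspace (a stack)
  , rightWs :: [[a]]   -- workspaces to the right
  , hand    :: [a]     -- values being carried between workspaces
  }

takeItem :: Env a -> Env a               -- pick up the top of the current stack
takeItem e@(Env _ (x:s) _ h) = e { focus = s, hand = x : h }
takeItem e                   = e

putItem :: Env a -> Env a                -- put a carried value down here
putItem e@(Env _ s _ (x:h)) = e { focus = x : s, hand = h }
putItem e                   = e

goRight :: Env a -> Env a                -- shift focus to the next workspace
goRight (Env ls f (r:rs) h) = Env (f:ls) r rs h
goRight e                   = e
```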
The problem was, a tree-zipper was too expressive. It was too sparse; it was never clear how to use the space effectively. I felt lost in a maze of twisty little passages, all alike. When I felt the hungry eyes of a grue waiting for that torch I was bearing (in my newly modeled hand) to fizzle, I bravely turned my tail and fled.

So I tried a couple more environments. First, a list-zipper of stacks. This worked pretty well, except that I could never remember the relative position of a task. The normal pattern I fell into is not jack-be-nimble jumping between tasks, but rather to extend one task as far as possible, then go to another task to build it up enough to get some intermediate results.

My current environment looks like this:

```
% ENV (stack * (hand * (stackName * listOfNamedStacks)))

% apply :: ((f*s)*e) -> (s'*e)
apply = assocr intro1 swap assocr first swap elim1

% take :: ((x*s)*(h*e)) -> (s*((x*h)*e))
% put  :: (s*((x*h)*e)) -> ((x*s)*(h*e))
take = assocr rot3 rot2 assocl rot2
put  = rot2 assocr rot3 rot2 assocl
```

One point is that literals (numbers, text, blocks) are added to the current stack, to avoid burying the extended environment. The 'rot3' primitive enables this to work, by ensuring I can always do basic data plumbing without adding more blocks to the stack.

I must use static metaprogramming to access those named stacks, and I haven't implemented the functions yet. But the API I envision is: "foo" load and "foo" store to operate on remote stacks, and "foo" goto to navigate to a stack by name (which will store the current stack away under its current stackName). The load/store operations enable named stacks to serve as registers or variables, useful for storing objects that would be painful to explicitly thread through the application (singletons, context objects, or configuration info, for example). Their stack-like nature can enable use similar to lexically scoped variables, assuming disciplined developers.

Anyhow, I've come to a few conclusions:

1. The static nature of the arrow type is very promising for visualization. Continuous visualization of the data type during development should enable developers to operate at larger scales.

2. Multiple stacks for multiple tasks! It's essential. Workflow programming, task concurrency, multiple pipelines - it's all bad on a single stack.

3. The concept of a hand is awesome. It's useful for carrying things between stacks. It's also useful for operating on a single stack - i.e. it's often convenient to pick up a few items and operate deep on the stack, then drop those items. Most importantly, the hand - more generally, keeping a programmer model in the program - can potentially support deep integration of GUI with visual programming; I might try it for augmented reality.

4. At one point, I tried to model a "literals brush" - a block in the environment that controls where literals are added to the environment. This was a mistake. It resulted in every subprogram that uses literals having a highly modal meaning.

5. Imperative metaprogramming of a declarative, arrowized model has a lot of nice properties. Developers get a lot of expressive power with predictable performance. Further, the equational properties of the target model simplify reasoning and refactoring in the imperative code (e.g. we can know that the order we add operations on different stacks doesn't matter). Basically, we get the best of both worlds. (Whereas declarative metaprogramming of imperative programs seems to get the worst of both.)
6. It is not difficult to reason about application to a subset of the environment. Arrowized models do this normally with first. Syntactically, we get the benefit if we have a common and well known library of applicators (words that apply a block) that operate on well known subsets of the environment - e.g. the top one to few elements of the current stack, and perhaps also an explicitly provided list of named stacks.

7. Zipper environments are not so good. However, zippers are still promising for manipulating a document-like object that is sitting on the current stack. E.g. we could model an HTML-like document on the stack, sift through it, add and remove structures, or integrate between documents. Diagrams, higher-order programs, simple databases, DSLs, etc. can all be modeled as document-like objects.

I've been taking some flak for choosing a concatenative language. But I don't regret it, especially since it resulted in the powerful 'hands' concept and programming environments both simpler and more expressive than I would have dreamed up in an applicative language.

### a single-stack environment

a single-stack environment is awful for concurrent dataflow programming

Yes. I have lately been considering how to implement concurrency in my language, and came to essentially the same conclusion as you. Static typing enables static program structure, which solves the problem by showing you which things are separable. If you have two functions of type (r a b c -> r d), then you can copy the top three elements of the stack, run those two operations concurrently, join on both, and return two values of type d. Doing this efficiently of course means multiplexing those concurrent operations onto the available parallelism, which will be an interesting challenge.

I need to dig too deeply into the stack to find values from the other workflow's computation, and this digging is fragile to upstream code changes.

My solution to this was having an auxiliary stack, for which named local variables are sugar. Simply, if you can move computations off the data stack thread-locally, then you can reason thread-locally about data stack effects. I hate stack-shuffling and find no justification for it over named locals. When combinators are natural, they should be preferred—but the effort required to correctly factor most code is better spent elsewhere.

Their stack-like nature can enable use similar to lexically scoped variables, assuming disciplined developers.

Never assume that.

The static nature of the arrow type is very promising for visualization.

Really looking forward to seeing what you come up with in this area. My only work has been on making the kinds of stack effect diagrams seen in my article, which seem suitable for low-level (imperative) and high-level (dataflow) operations, but suffer for mid-level code that mixes styles.

I've been taking some flak for choosing a concatenative language. But I don't regret it

Ditto. :)

### named locals vs. stack shuffling

I hate stack-shuffling and find no justification for it over named locals.

I'm hearing that 'hate' word a lot recently, regarding this issue. I think stack shuffling is rarely difficult once you move beyond the newbie stage. A relevant observation:

The more complex shuffles, such as rot and roll, are only used a few dozen times in the entire code base (consisting of tens of thousands of lines), so indeed they are more of a crutch for beginners than a real tool that serious concatenative language programmers advocate using.
Good code has simple data flow and does not need stack gymnastics. Factor's optimizing compiler, web framework and UI toolkit do not have any complex shufflers at all, and together they comprise some 30,000 lines of code. But people new to concatenative stack-based languages - to them, it's dup, dip, bloop, dribble, rot... just arcane nonsense and gibberish, no better than Perl line noise to someone new to Perl. I can understand why they might hate it. They can't visualize what happens to the stack, and really shouldn't have to. My hypothesis is that your named local variables help these newbies visualize the stack in a more concrete manner, and they help manipulate objects on it. And, if my hypothesis is correct, named locals should be replacable by any other system that serves the same role, such as an IDE with good visualization features and good support for interactively automating data-plumbing code. A problem with names is that names can be syntactically used in ways that their referents cannot safely be manipulated. By analogy, I can mail you the word 'gorilla', but I cannot mail you the gorilla. Names can too easily express behaviors that are inefficient or insane - such as transporting large values between CPU and GPU, sharing state between threads, naming deleted objects, fulfilling a promise with itself, or unintuitively keeping whole environments around just to support closures. Names seem especially problematic for substructural types - syntactically, it is too easy to drop a name unused, or use a name twice, when the type does not allow it. There are other problems with names, such as nominative types binding at too large a granularity, or negotiating the meaning of a name with remote systems that might use different library versions. Names are useful. But I think names are most useful for the non-locals: singletons, configuration parameters, 'special' variables or context objects, and concurrent workflows. And even in these cases, I think they should be modeled carefully, in a way that does not confuse programmers into thinking the name is equivalent to the referent (unless it really is equivalent to the referent in all relevant scenarios). ### I heard two people having a violent agreement My hypothesis is that your named local variables help these newbies visualize the stack in a more concrete manner, and they help manipulate objects on it. And, if my hypothesis is correct, named locals should be replacable by any other system that serves the same role, such as an IDE with good visualization features and good support for interactively automating data-plumbing code. Named locals are just one representation of the data flow DAG which I find convenient for teaching and implementation purposes. Names can too easily express behaviors that are inefficient or insane - such as transporting large values between CPU and GPU, sharing state between threads, naming deleted objects, fulfilling a promise with itself, or unintuitively keeping whole environments around just to support closures. All of these things need to be addressed, and some have been. Kitten has no mutation, is memory-safe, uses eager evaluation, and has cheap closures (only copying values and sharing references that are actually used). The transport of values between CPU and GPU is a big concern, as I would like AMP to feature prominently in (distant) future versions of the language. You want tight control over what happens when, and for inefficient code to look inefficient. 
syntactically, it is too easy to drop a name unused, or use a name twice, when the type does not allow it.

That’s what good error messages are for. Names can actually help here—“x was copied but it is noncopyable” rather than “could not match type [a] with type ~[a]”.

names are most useful for the non-locals

Absolutely. A function, config value, or concurrency primitive models a behaviour, while a local models state; names are, as far as I can tell, much better for the former than the latter.

### That’s what good error

That’s what good error messages are for. Names can actually help here—“x was copied but it is noncopyable” rather than “could not match type [a] with type ~[a]”.

Names were the cause of the problem in the first place. But I suppose it's nice that you'll at least have an easy way to name the problem. ;)

Anyhow, a better error message would say where the problem occurred, and maybe render what the type looks like in context, and highlight the problem type in red. Even better would be to integrate with the IDE.

You want tight control over what happens when, and for inefficient code to look inefficient.

I do indeed.

### Names though?

Names were the cause of the problem in the first place.

How so? You can’t dup something noncopyable, whether you name it or not. And if you implement the same behaviour as dup using locals, then you at least have a source location to point to where the error is:

```
def myDup:
  ->x  // Okay, moves x.
  x    // Not okay; copies x.
  x    // Seriously not okay at this point.
```

Similar arguments apply to values with linear/affine types; I’m just being lazy about coming up with examples.

### As I see it, the explicit

As I see it, the explicit use of 'copy' - and the expansion (code path) leading to it, including dup - would make it really easy to say "look, you say copy on this no-copy item. It's OBVIOUSLY wrong." With names, multiple uses aren't nearly as obvious. There are safe usage patterns (e.g. linear sequence of method calls where each returns a linear object) and unsafe uses. Since names are for convenience (the same argument is repeated), it is not unexpected for PL designers to enable as many safe uses as possible. People make mistakes because it isn't syntactically obvious what is safe or unsafe. Dropping names is even less obvious.

### Visualisation helps

Yes, this is where program text is inadequate. As I mentioned earlier, the names are just a textual shorthand for expressing the dataflow DAG, which is better rendered as an actual DAG:

```
          +---[ myDup ]---+
          |               |
          |  ~a           |
          |   +--[ x ]----> ~a
          |  /            |
----[ x ]--<              |
          |  \            |
          |  ~a           |
          |   +--[ x ]---->
          |               |
          +---------------+
```

(Pardon my ASCII.) Here the error is much more apparent, and the UI can indicate the exact point in the diagram where a unique reference got copied when it wasn’t supposed to be.

### I agree, visualization helps

I agree, visualization helps find the error. Now if only we could have avoided it. Perhaps by not using a reference that can casually be copied or dropped when the referent cannot. :)

### A rose in any stack position

But people new to concatenative stack-based languages - to them, it's dup, dip, bloop, dribble, rot... just arcane nonsense and gibberish, no better than Perl line noise to someone new to Perl.

I don't think the problem is unfamiliarity. Stack swizzling is just a terrible UI. I became fairly proficient at programming directly in assembler (many) years ago, but I wouldn't ever want to go back to that style of coding. There is a mental tax involved.
A programming language is low level when its programs require attention to the irrelevant. I obviously don't have any experience programming in either of your (Dave's or Jon's) languages, but I will still stake the position that names are a better UI than stack swizzling, especially in the presence of visual feedback of the sort you're proposing. Consider switching from using a local stack parameter to a global configuration parameter -- can you do that by changing a name or must you 'dip trip flip fantasia'? Also, sometimes you might want to make meta-level assertions about signals in ways that don't make sense computationally (i.e. the buffer on the GPU matches the buffer on the CPU or if I were to mail this gorilla...). Names can let you talk about these things and in the presence of a visualization of the inferred stack manipulations, it should be clear when you're doing things with names that don't have a well behaved computational correspondence. ### Locals and Globals, Names and Nonsense Consider switching from using a local stack parameter to a global configuration parameter -- can you do that by changing a name Yes. Concretely, a programmer might express that as "use_opencv" store in my language, to move the top item of the current stack (ignoring the text we just added) to the top of a (potentially new) named stack identified by the text "use_opencv". A named stack can be used as a global variable for configuration data. The environment is modeled with the product (stack * (hand * (stackName * listOfNamedStacks)))). The actual motion from the top of the stack to the correct location in the listOfNamedStacks does require some rich manipulations of this environment structure. But. My language supports metaprogramming by use of introspection. (Not really different than use of typeclasses for typeful dispatch; "use_opencv" is formally a unique unit type, not a runtime value. I don't have dependent types, though they're on my wishlist.) So, when dealing with global names, all that 'drip trip flip fantasia' is automated. Global names I don't have a problem with. Names are convenient when used to speak of things that are far away - i.e. that we cannot easily pick up or point at. Even so, I use the quoted name "use_opencv" to enforce a very clear distinction between the reference and the referent. But speaking of globals is misleading. Those are a relatively small part of our programming experience. To argue we shouldn't use stack swizzling because they're painful for globals is analogous to saying we shouldn't use a mouse because it's painful to copy characters out of a charmap. "Doctor, it hurts when I do this!" "Then stop!" My position is that stack shuffling is simple and works great for the bulk of the programming experience, where there are only a few arguments or intermediates in play at a time, they have stable call positions, and they've already been gathered by the caller. This accounts for basic, reusable functions and procedures. While stack shuffling might still not be fun, it is often better than the syntactic and semantic overheads of declaring and managing names. Visualization helps with the 'mental tax'... I imagine it helps almost as much as writing with a text area instead of a line editor. Even visualization of globals can be useful to provide context. 
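(To make the named-stack idea concrete: a small Haskell model of a current stack plus named global stacks, where store and load move one value between them. This is only a model of the idea; as described above, the real language builds the equivalent plumbing by static metaprogramming over the environment type.)

```haskell
import qualified Data.Map.Strict as M

data Env v = Env
  { current :: [v]                 -- the stack being worked on
  , globals :: M.Map String [v]    -- named stacks acting as registers/variables
  } deriving Show

store :: String -> Env v -> Env v  -- move the top of the current stack to a named stack
store k (Env (x:s) g) = Env s (M.insertWith (++) k [x] g)
store _ e             = e          -- stack underflow left unhandled in this sketch

load :: String -> Env v -> Env v   -- move the top of a named stack back onto the current one
load k e@(Env s g) = case M.lookup k g of
  Just (x:xs) -> Env (x : s) (M.insert k xs g)
  _           -> e
```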
Of course, we could also support drag and drop gestures with globals that write code like "foo" load or "foo" store or "foo" loadCopy or "foo" recursiveDropStack If developers want to automate some of the local data plumbing by drawing a few wires, I can support this too (though I'd refuse to make it opaque to the user; I'd like a little understanding to occur by osmosis if nothing else). I even could model local names with metaprogramming. Similarly, I could model brainfuck with metaprogramming. It isn't hard to build a Turing tarpit for the people who really insist on working within one. sometimes you might want to make meta-level assertions about signals in ways that don't make sense computationally (i.e. the buffer on the GPU matches the buffer on the CPU or if I were to mail this gorilla...). We can assert "the gorilla in my left hand must, at runtime, weigh more than the banana in my right hand" without naming either of them. We just need a language to make such assertions - i.e. pairwise declarations relate the top two elements of the stack. Names would be useful if we wish to grab and relate global artifacts, but that's just the same story as before. In practice, I think the common case would be local assertions, since that leads to a more constructive and compositional analysis. E.g. in your CPU-to-GPU buffer example, you would generally make the assertion near the site of transfer, at which point both buffers would be present. Names might seem useful for 'dead' objects, e.g. when comparing preconditions to postconditions on a linear object, or in terms of your example - perhaps an equivalence after buffer transfer. Fortunately, there is a good name-free solution for this, too. Names aren't the only way to describe an object. We can support a primitive that creates a 'memory' of an object's type at a given point - i.e. a kind of staged reflection: memoryOfALonelyType :: x ~> (Ty x * x) This operation can be safe even for linear types. As a rather nice consequence, this nameless memory of a type as a first class static object should support first-class refactoring and decomposition of the type analysis and declarations code. Further, you never need to say things that aren't sensibly behaved; you have a sane modal distinction between an object and a description of an object. in the presence of a visualization of the inferred stack manipulations, it should be clear You seem to be assuming that "the presence of a visualization" is orthogonal to the use of a concatenative programming model where the full environment is an explicit, well-typed object to be passed from function to function. I think such an environment may be much more difficult to achieve in an applicative language, or one that heavily uses names. They don't make the environment very explicit in the programming model. ### De Bruijn vs named I agree. Even in formal calculi, most people highly prefer working with named binders over De Bruijn notation. Although the latter is simpler to deal with on a puristic technical level (no trouble with alpha equivalence etc), it is rather impractical and error-prone for most human minds. Just try following a proof computing with a De Bruijn indexed lambda calculus -- it requires 10 times as much concentration, care, and time. Names are an essential abstraction to reduce cognitive load. Same in programming languages. The fascination with stack juggling languages strikes me as an instance of falling victim to the joy of "sense of achievement". ### De Bruijn vs. 
Named - false dichotomy De Bruijn vs. Named is a false dichotomy. I'm certainly NOT suggesting we use numbers-as-names instead of symbols-as-names. That'd be like choosing the very worst of both worlds. Names aren't essential. You're just using them as a poor man's visualization tool. If I needed to follow proofs presented using De Bruijn indexes - or at least more than a few small lemmas - I would be tempted to write a program to help me visualize what is happening - i.e. implement various test cases, perhaps animate what is happening, probe intermediate states. I agree that stack programming has a cognitive load. Stack programming without automated visualization seems analogous to using a line editor instead of a text editor. Which is to say, it isn't all that difficult once you have a little experience with it, but it could be much easier. In the absence of visualization, the cognitive load for writing concatenative code seems to be lower than for reading it, perhaps because the writer knows how each word she uses will impact its environment but this is not syntactically obvious to the reader. (OTOH, effectful code, or code with callbacks or continuations, has the same problems even in applicative code with abundant use of names.) But, perhaps as a consequence of this, there is no "sense of achievement" for merely writing the code - not beyond the normal "getting started" or "mission accomplished" achievements we feel in any language. A "sense of achievement" for just getting stack shuffling right would wear out relatively quickly. But you would be hard pressed to find people more satisfied with programming than in the concatenative languages communities. What concatenative programmers feel is perhaps not 'achievement' at making things work at all, but satisfaction of doing so concisely, with easy refactoring and simple semantics (occasionally augmented with powerful metaprogramming). Or maybe it is because the language is more easily adapted to the user; conventions are very flexible. ### Easier to refactor? De Bruijn is a pretty exact analogy AFAICS. Except that you cannot even refer to the i-th stack element directly in these languages, but have to do so indirectly by extra commands that shuffle it to the top first. Frankly, from where I stand, that is even worse. I question that stack programs are easier to refactor -- exactly because they lack the abstraction of naming, and so induce stronger coupling in the correct bookkeeping of irrelevant details. ...Unless you are talking about low-level syntactic aspects of cutting & pasting code segments, which are not terribly important if you ask me. ### De Bruijn is a straw-man analogy You seem to be focusing on one small aspect: how to reference values. Things you aren't considering include: multiple return values, multiple parameters, how words in stack-based languages can continuously clean up the stack, or consequently how tail-call recursion can be expressed directly. you cannot even refer to the i-th stack element directly in these languages Sure you can. Unless you're nit-picking on the word 'refer'. We can certainly access the i-th stack element. Trivially, if you can describe stack-shufflers to extract the i-th element to the top of the stack, you could name and reuse these shufflers. So you could create a series like "roll2..roll9" to move the i-th element to the top, or "pick1..pick9" to copy the i-th element to the top. 
If you see a pattern (and you will), and if your type-system allows it, you could even abstract these words so that the 'i' value is an argument - and doing so would get you the FORTH-like pick/roll commands. In practice the need to refer to more than the third element is rare, especially in well-factored code, so you don't see developers using these much. But they're available if you want them. I question that stack programs are easier to refactor [..] Unless you are talking about low-level syntactic aspects of cutting & pasting code segments The ability to find common code segments and factor them without disentangling local names is useful for refactoring (and certainly lowers the barriers, so it happens more continuously), but there is more to it than that. Concatenative code seems more effective than applicative code when refactoring the internal structure of functions - i.e. dealing with all those temporary variables, or those cases where the same argument gets passed to a bunch of different functions. It also keeps the environment cleaner while doing so, rather than building up a bunch of irrelevant temporary variables or explicit re-assignments. There is more to well-factored concatenative code than cut-and-paste. The words should have meaningful connotations to humans. And we want the input elements on the stack to all be relevant. And we also want the inputs arranged to minimize shuffling in the common call case (if there is more than one common call-case, we'll tend to define a few words). Same with the outputs, if any: they should be arranged so they can be immediately used in the common case. It is because of this factoring that complex stack shufflers are rarely needed. stack programs [..] lack the abstraction of naming, and so induce stronger coupling in the correct bookkeeping of irrelevant details I suppose by 'irrelevant details' you mean 'locations on the stack'. True enough. But using names just trades one set of irrelevant details for another. With names, I need to deal with all these symbols and spellings that have no impact on meaning of the computation. And the bookkeeping for these irrelevant names is often horrendous: every relevant name must be spelled out at least twice: once as an LHS, once as an RHS, resulting in all sorts of verbosity. Symbols often have multiple human meanings so we often encounter accidental shadowing (the word 'map' is especially bad). I need to concern myself with lexical vs. dynamic scope (or worse). I need to concern myself with alpha equivalence, and potential name capture in macros. I need to deal with this funky semantic distinction between reference and referent. And, oh, then I need to some handwaving to explain tail calls or named return value optimizations. As bad as you imagine bookkeeping of irrelevant details for stack languages, names are worse. You just stopped thinking about it, due to your experience. The same happens to a concatenative programmer - minor stack manipulations fade into the background. But since there is less bookkeeping to perform or reason about in stack languages, the remaining language is simpler and more concise. ### Stack languages are worse But using names just trades one set of irrelevant details for another. When you write a function with names, first you write down the parameter names. It's a natural step in the design process, makes good documentation, and you'll probably need some equivalent commenting in your stack language program anyway. 
Once you've done that, the cognitive load of trying to get at the names where you need them is very low. You might have to glance up to remember what you called the parameter, but that's not as difficult as trying to keep track of where values have been shuffled on the stack by all of the other words you've written thus far. The total cognitive load for names is higher than than "trying to get at the names where you need them". With tacit programming, I quite appreciate not having to establish good names. I apparently avoid some decision fatigue by avoiding such irrelevant decisions. When I comment what a word does, it is unusual for me to name all parameters, or even to use a consistent name. And there are indirect effects. Using names for parameters leads to verbose expressions. Local names get entangled with expressions, which hinders refactoring. Also, you seem to be assuming a single-stack language. I can say for certain that modeling a second stack for the 'hand' (with take/put/juggle ops) makes it a great deal easier to prevent items from getting buried or lost. I can just pick something up if I'll need it downstream, or if I need it out of the way. The hand is intuitive; it doubles the number of items I can mentally track and manage, and eliminates most need for careful factorings. But I believe you raise a valid point. Programmers in concatenative languages must concern themselves with "keeping track of where values have been shuffled on the stack" and "getting at them where you need them". In practice, developers take steps to make this easier on themselves: factor difficult functions into small pieces, keep the stacks clean, use simple calling conventions. One could say they are trading the cognitive load of "keeping track" for the cognitive load of developing a simple factoring or API, even in cases where code won't be reused. Programmers using names can be a lot sloppier. They can write functions fifty lines or longer, create a huge mess on the stack, and simply ignore it. Which isn't a bad thing. Unfortunately for your position that "stack languages are worse", the cognitive load you complain about is easily addressed with automatic visualization. Similarly, if developers don't want to think about 'getting at an item' I can provide a little automatic code generation. I can do this without accepting the verbosity, irrelevance, decision fatigue, and multifarious semantic issues that names introduce. You might wonder: "but why is this important? why rely on an IDE? why not just use names so the problem is handled in the language?" Well, I consider PL to be a user-interface, and vice versa. We are -through text, icons, forms, gestures - influencing the behavior of a computer system. Unfortunately, most PLs are poorly designed as user-interfaces; for example, they require us to use names like a psycho with a label-maker, instead of just using a few gestures to establish our meanings. Conversely, I consider many user interface designs to be awful PLs - lacking composition, abstraction, expressiveness, reusability. I would like to address this stupid, short-sighted design gap - one that I expect has cost us more than a few billions. I'm not the first to try, but I have a new perspective on an old idea - simple, powerful, and extremely promising. I recently realized the possibility of representing a user-model within a program - i.e. a programmer-model. 
It started with the recent idea of 'navigating' a zipper-based or multi-stack environment (in order to program concurrent behaviors in Awelon), and was soon augmented: use a 'hand' (with take/put/juggle ops) to carry objects (including runtime signals or wires) as I navigate. But a programmer model is nowhere near the limit of that idea. We're modeling an environment, and we're modeling a programmer. The next step should be obvious: why not model the integrated development environment. The entire user-interface (menus, workspaces, open pages, etc.) can feasibly be modeled in the type-system of an arrow. Every button-click, gesture, menu selection, text area and character typed can be formally modeled as a concatenative extension to the 'program' that describes this IDE. Even multi-user environments are feasible, perhaps using a DVCS-like system. Such a system could support continuous extension (and reduction, where I want it) and metaprogramming, would be robust to some history rewriting (undo, macros, optimization), could be efficient through snapshots and replay. The 'user interfaces' could contain live, time-varying signals in addition to ad-hoc statics, and would be easily composed. Security could be modeled within the language. Anyhow. Summary: Names are okay for some purposes - large-volume configuration parameters, singletons, wiki pages, reusable modules or software components. But such names should be used carefully - reference distinguished from referent, explicit lookups - so that people don't gain bad intuitions about them. I believe named locals are interfering more than you know, and helping you less than you imagine. The cognitive load for tacit, concatenative programming is easily addressed and then some. I think the corresponding issues for names - being more semantic and implicit - cannot be addressed as easily. ### Unconvinced I'll agree that names are less important the more local you get, but short functions can have parameter names like x, xs, n or f. That's usually how it's done in Haskell and I doubt anyone gets decision fatigue when picking a letter for a parameter. So for locals, a single letter name like x is easier to type than a stack manipulator like... bloop ;). And as you move to larger scopes, referencing things over a larger distance, good names become more important, but it sounds like you agree with that. I'm all for designing a programming language to be a good UI, but my thinking has me doubling down on names. I don't really get the idea of IDE interaction as a program. I get that you could do that, but not why it would be interesting. And wouldn't this proposal restrict the programmer to edits that leave the program in a well-typed state at every intermediate step along the way? That seems like it would be very frustrating. I'm not sure that I understand this idea, honestly. ### Short name vs. long name is Short name vs. long name is pretty much irrelevant regarding any of my concerns. Many cognitive and verbosity benefits of short reusable names are undermined by accidental shadowing. And the real semantic issues aren't addressed at all. Also, pointing at 'x' vs. 'bloop' in terms of character count isn't convincing to me. One should certainly count the separate declaration, and the extra line typically needed to make it (let x = in, or var x = ;, foo = function(x) {, etc.). It is also my impression that, in stack programming, objects on the stack are often where they can be used without manipulation. 
I wonder what the average ratio is between parameters/local names in applicative code vs. explicit stack manipulations in concatenative code. My guess is that this ratio is between two and four, but this is something that could be tested against an actual codebase with similar libraries. I'm all for designing a programming language to be a good UI, but my thinking has me doubling down on names. I do not know of any 'good UIs' that require humans to declare and use even a tenth as many names as modern programming. But let me know if you find an exception. the idea of IDE interaction as a program [..] why it would be interesting. It is interesting because it enables well-defined but ad-hoc mashups, transclusion, views, transforms, and extensions, metaprogramming and reprogrammable UI, undo that can be precise or pervasive. It is interesting because it can unify programming environments and user-interfaces. Applications become extensions. Programming can be a flexible, ad-hoc mix of live operations or manipulation, and constructing behaviors to apply later. Security can be modeled formally and pervasively. wouldn't this proposal restrict the programmer to edits that leave the program in a well-typed state at every intermediate step along the way? The programmer would be restricted to manipulations that leave the programming environment in a well-typed state at every intermediate step. So this would include not installing extensions or running transforms for which the environment isn't well typed. If you wanted to represent an ill-typed program, you'd do it much like you might in Haskell: use an AST, or text. ### IDEs The potential benefits of short reusable names are undermined by accidental shadowing Speaking of problems that are easily remedied with IDE feedback, this is one. It is interesting because it enables well-defined but ad-hoc mashups, transclusion, views, transforms, and extensions, metaprogramming and reprogramming, undo that can be precise or pervasive. It is interesting because it can unify programming environments and user-interfaces. Applications become extensions. Programming can be a flexible, ad-hoc mix of live operations or manipulation, and constructing behaviors to apply later. Security can be modeled formally and pervasively. The IDE I'm building supports most (all?) of these features under a simple framework, but I apparently don't share your ontological view. Most specifically, I don't see anything useful about viewing arbitrary input to a UI as a program. The only thing useful you can do with arbitrary UI input is undo/redo it (possibly cloning the state of the application between). Languages typically have richer structure that makes it useful to consider them as languages. If you wanted to represent an ill-typed program, you'd do it much like you might in Haskell: use an AST, or text. OK, I assumed you had in mind more alignment between the environment and the program, like in a staged programming environment. ### OK, I do see the appeal Mapping keys / gestures onto language elements would be a good way to allow custom configuration, and there is still value in using languages when the lowest common denominator might just support undo/redo. I was already planning to have certain UIs exposed through language (I think we've discussed that in the past), but I'll have to think more about exposing every UI through its language. Maybe that is a good idea, after all. And I can see how that might map more directly to a concatenative language. I'll think more on that, too. 
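To make the 'every interaction extends a program' idea concrete, here is a toy sketch; the gesture vocabulary, the Env record, and the replay function below are my own assumptions for illustration, not a description of any existing system:

```haskell
-- Each UI action is one word; a session is just their concatenation,
-- and replaying the session reconstructs the current environment.
data Gesture = Click String | Type String | Take | Put
  deriving Show

type Session = [Gesture]          -- the ever-growing "program"

data Env = Env { focus :: [String], hand :: [String] }
  deriving Show

step :: Gesture -> Env -> Env
step (Click w) e = e { focus = w : focus e }
step (Type t)  e = e { focus = t : focus e }
step Take      e = case focus e of
  (x:xs) -> Env xs (x : hand e)
  []     -> e
step Put       e = case hand e of
  (x:xs) -> Env (x : focus e) xs
  []     -> e

replay :: Session -> Env -> Env
replay ws e = foldl (flip step) e ws

-- e.g. replay [Click "lights", Take, Click "agent", Put] (Env [] [])
```

In this toy model, snapshots and undo fall out of the representation: a snapshot is a saved Env, and undo is replaying a truncated session.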
### IDEs with Direct Semantic Access The IDE I'm building supports most (all?) of these features Oh really? I wonder if you're interpreting my words differently, or in a far more constrained (or crippled) manner, than I mean them. Let's try a scenario: copy and paste of an observed time-varying signal, along with a pair of control capabilities associated with buttons. Let's assume the capabilities can influence lighting in your house, and the signal describes the current state of the lights. You want to integrate this with a software agent that will record how long your lights are on, and turn them off after a configurable time if a motion sensor signal hasn't reported activity. Can your IDE do the copy and paste? If so, does it show up as a mess of context-sensitive code to get the signal to the agent? Or is it just a formal 'programmer hand' modeled in the development environment, with precise data plumbing semantics? I don't see anything useful about viewing arbitrary input to a UI as a program. The only thing useful you can do with arbitrary UI input is undo/redo it Nonetheless, you'll execute a subprogram for every gesture, mouse move, and manipulation. It's just, if you follow normal conventions, your subprograms will affect a separate view-state of your program (your 'IDE'). This isn't so much an issue for pure views, but the moment you start manipulating the program in rich, ad-hoc ways, you'll encounter a huge semantic disconnect between the view and the manipulation. This semantic disconnect has historically been a 'wall' between PL and UI. Even for languages like Smalltalk. This wall can be addressed by modeling UI actions - navigation, cut and paste, menu selection, etc. - as acts of programming. Let's call the generated program 'E'. Program E formally models a programmer navigating and manipulating a user interface and development environment. Every time the programmer/user does something, it is modeled as a concatenative extension to this program. (With an exception for 'undo', though even that could be modeled explicitly - capabilities to view our history or move into an alternative history.) An advantage of this design is that we now have a formal, internal meaning for exactly what it means to navigate, to view, to cut, copy, and paste - a formal meaning that will make sense from the perspective of the language rather than interpreted through an external IDE. By avoiding the indirection through an external IDE, the types involved in the operations can be much richer - live signals, capabilities, sealer/unsealer pairs, etc. Further, the context is much more robust and persistent. A programmer can develop directly (e.g. extending program E, attaching behaviors to real data-hoses), or indirectly (using program E to compose blocks or describe ASTs that will later be interpreted, without activating them). Mixed-mode systems are certainly possible, e.g. describing a software agent that, if you ever activate it, will have access to a live signal indicating the state of lights in your house. (This is an example of what I earlier described as "an ad-hoc mix of live operations or manipulations, and constructing behaviors to apply later".) Does your IDE do this? Or is it stuck like most IDEs behind a wall of syntax, with fragile context and crippled manipulations? Languages typically have richer structure that makes it useful to consider them as languages. In natural language, the rich structure is semantic, not syntactic. The same is true in the language I'm developing, Awelon.
The structure is all there in the type system. I assumed you had in mind more alignment between the environment and the program, like in a staged programming environment I do. But a developer who wants to build a program to be applied later can certainly do so within "the program" E. Thus, the scenario I believe you were envisioning is addressed well enough. ### What does Direct Semantic Access mean? Oh really? I wonder if you're interpreting my words differently, or in a far more constrained (or crippled) manner, than I mean them. Of course you do :) Let me see if I understand your scenario. I have a process running that's monitoring my house motion sensor and building a history. I have capabilities to control my lights. I have (or want to write) an agent that looks at the history of the sensor and uses it to control the lights. I would want to support most of this scenario, but I would not support cut/paste of the active process that's building the history. You'd need to leave the monitor process building the history and hook-up a reference to it. I don't really understand the rest of what you want. I like editing constructs as text (or maybe ADTs directly) and not as live objects. Also, I've thought about it a little bit more and I don't think I'm going to change my approach regarding UIs as languages. General UI elements don't need to have meaningful language representations. The most general thing I can see making sense with an arbitrary UI is to capture and rewind/replay parts of it. I do think it will often make good sense for there to be a shallow translation from UI interactions into a core application language (as is common with scriptable environments like Word or AutoCAD, etc.), but that's different. ### Most UIs and programmable Most UIs and programmable devices are live objects. I.e. UIs are provided from external services, or encapsulate capabilities you cannot access. So, for my goals of unifying PL with HCI, this is among the problems that matter most. Another relevant feature is a direct or shallow relationship between UI elements and signals/capabilities that are objects in the language. Without that, it's very difficult to programmatically observe and manipulate the things the human can see and manipulate, which is just awful. The scenario was a signal indicating current state of the lights, along with capabilities from buttons. I imagined, but did not explicate, that you were grabbing these from a UI element (e.g. a form, or a webpage). But your interpretation works well enough. Would you be able to "copy and paste" to get that reference you need? Could you do such a thing with formal meaning? How much extra knowledge would be needed to go from reference to getting your information about lights usage? Edit: I tend to think this should go without saying, but... To unify PL and UI, it is essential that the set of computations we can observe or influence are exactly the same whether we do it by hand through UI widgets, or programmatically. Further, the act of programming must be an action on the UI - i.e. directly grabbing a particular input or output field from a widget and integrating it into a software agent, or another widget - such that we don't need special under-the-hood arcane (or proprietary) knowledge to manipulate or observe stuff programmatically. It is okay to manipulate some artifacts as text or ADTs, so long as this principle is supported. 
Among other things, any text you observe or manipulate must also be subject to this principle, and thus subject to programmatic observation and manipulation. When you say "General UI elements don't need to have meaningful language representations," I think your meaning is that you have no such principle. You will lack a consistent means to programmatically extend and manipulate your user interfaces. You seem to be assuming that a common approach to user-interface with programs is sufficient. It really isn't. The most general thing I can see making sense with an arbitrary UI is to capture and rewind/replay parts of it I would like to see a future where the 'general' set of things we can do with UI include programmatic manipulation or observation, capturing live fields from one form into another (mashups, transclusion), extending the UI with software agents and new macros, and so on. Adjectives we apply to a 'good PL' such as composable, modular, expressive, extensible, abstraction, simplicity, safety - all apply to a good UI. Unifying PL and UI means more than using PL as an effective UI; it also means turning UI into a good PL. ### I don't understand the I don't understand the significance of the house light example. I think we're coming at this from different angles, but yes, one of my goals is to have the IDE function as an OS, and manipulating the lights in my house doesn't sound like a particularly challenging example problem. But you also seem to be focusing on some particular copy/paste behavior that I don't understand (e.g. "Could you do such a thing with formal meaning?" -- I don't know what you mean). Further, the act of programming must be an action on the UI I don't exactly agree with this. Widgets are made for interaction with people. Programmatically, we usually want to bind to something below this layer, though I will agree it should be just below it. As an example of what I mean, people will enter numbers character-by-character at the keypad, but programmatically we want to pass a number and not a string. When you say "General UI elements don't need to have meaningful language representations," I think your meaning is that you have no such principle. It's not clear to me that a command language is necessary for every interaction. At lower levels where the command language would look like "click (100,200); keypress 'A'; ...", it's not clear to me that it's a good idea to think of this as a language. I think I'd rather see things from the viewpoint that every data stream could be interpreted as a language, but then not use that interpretation for cases where it makes little sense. I'm open to being convinced that it is a good idea to view every UI as corresponding to a language, but I don't see this as a particularly important idea. You will lack a consistent means to programmatically extend and manipulate your user interfaces. I think you're assuming too much. The most general thing I can see making sense with an arbitrary UI is to capture and rewind/replay parts of it I would like to see programmatic manipulation [...] My wording was poor, but the context was the UI language. Yes, we can do other things with UIs themselves, but I was asking what we might do with an interaction transcript in the language of an arbitrary UI, such as "click; keypress; etc". ### significance of the house significance of the house light example The ability to copy-and-paste the capabilities AND signal - e.g. copy from a buttons and label in a 'my smart house' form, paste into a software agent. 
That was the only significant part of my question. It's also the only part I actually asked you about. I'm not suggesting it's a hard problem; the agent itself should be easy, but it will be easy ONLY if you can easily extract the necessary capabilities and signals from an ad-hoc UI component, and combine them with a sensor capability from someplace else entirely. Widgets are made for interaction with people. Programmatically, we usually want to bind to something below this layer, though I will agree it should be just below it. It's fine to have a shallow binding, so long as 'people' can directly get at this binding in order to programmatically use them somewhere else. At lower levels where the command language would look like "click (100,200); keypress 'A'; ...", it's not clear to me that it's a good idea to think of this as a language. You seem to be focusing on the 'character by character' aspect. I don't really care whether you do that in some logically intermediate state that gets validated and transformed according to input type (or other declarative rules) underlying an input form. I was asking what we might do with an interaction transcript in the language of an arbitrary UI; such as "click; keypress; etc". Access to this is useful if you want to extend or manage the set of macros, shortcuts, prediction, etc. in ways the original designer did not anticipate. But it doesn't mean every single widget must handle it. Though, whatever does handle it should ideally be subject to observation and programmatic manipulation, and thus explicitly part of the environment - so we can develop new macros, predictions and tab completion, memory of history, and other useful features that the original designer did not anticipate. ### Discoverability The ability to copy-and-paste the capabilities AND signal - e.g. copy from a buttons and label in a 'my smart house' form, paste into a software agent. That was the only significant part of my question. It's also the only part I actually asked you about. I'm not suggesting it's a hard problem; the agent itself should be easy, but it will be easy ONLY if you can easily extract the necessary capabilities and signals from an ad-hoc UI component, and combine them with a sensor capability from someplace else entirely. OK, I think I understand the example problem now. You're worried about discoverability. You have a UI that does something, and without probing into the documentation (arcane knowledge) you want to be able to capture the abilities that it has and use them programmatically somewhere else. I agree this is an important goal. So you want to have a discoverability protocol for arbitrary UIs. I'm not sure if you're literally suggesting copying/pasting the button as the UI for that (which I don't think is a good idea) or are suggesting something like "view source" followed by copy/paste or a "capture commands" mode for UIs that, instead of running commands, shows you what would have been run (and from which you could copy/paste). Anyhow, I'm on board with all of that, but I think the latter idea of "capture commands" will work better than "view source", for reasons closely related to the fact that not all UI is coherent as a language. If you try to start "at the top" and ask the UI for "what happens when I press enter", then you're going to find yourself in the plumbing of a text widget. And there might be intermediate modal dialogs, client-side prediction, etc. that would get in the way of the "real" capability layer that you're interested in. 
What makes more sense to me is to have opt-in protocols at arbitrary levels of abstraction (at the click level if desired) where you can see the commands produced by the UI at that level. ### What makes more sense to me What makes more sense to me is to have opt-in protocols at arbitrary levels of abstraction Totally agree with that. If good UI should have the properties of good PL, supporting a flexible and appropriate level of abstraction is certainly among them. But I am probably imagining a different UI structure than you. I tend to think the user-model should be processing all the user's lowest-level signals and certain view-transforms, while the UI 'structure' itself should be higher level and much closer to semantics. I guess this POV evolved from developing a couple multi-user systems, where users need different view-preferences and macros and such. ### But I am probably imagining But I am probably imagining a different UI structure than you. I think you could prove that in court. It will be interesting to see where you end up. I agree that names can have their problems, too. But compared to the alternative, those are luxury problems to have, which I'd trade any day. Seriously, I don't find these arguments convincing at all, if not totally backwards. I also don't see why multiple parameters/return values are best expressed by stacks, or why the unavoidable abstraction and capability leak you introduce everywhere by allowing everybody to see all their caller's arguments and locals is anything but a horrible misfeature. But to be honest, I also do not care all that much. As long as I don't have to deal with it, everybody is entitled to enjoy their own masochistic treat. :) ### don't see why multiple don't see why multiple parameters/return values are best expressed by stacks I haven't argued that multiple return values are "best expressed by stacks". What I would argue is that multiple return values are (1) one of the reasons your analogy to De Bruijn numbers fails, (2) one of the reasons that stack-based languages are effective at refactoring the internals of functions. the unavoidable abstraction and capability leak you introduce everywhere by allowing everybody to see all their caller's arguments and locals is [..] a horrible misfeature While FORTH does have this problem, it is not fundamentally difficult to control how much a subprogram sees in concatenative or stack-based programming. Factor, Cat, Kitten, and other stack-based languages make it easy, through use of combinators or types. John Nowak described one such combinator, 'dip', in the first post in this topic. compared to the alternative, those are luxury problems to have The biggest problem with this argument is that you've already established your complete (and apparently unabashed) ignorance of "the alternative" under discussion. Your arguments seem to consist of blind prejudice, false assertions (i-th element, unavoidable capability leak), and straw-man analogies (De Bruijn numbers). I can understand that you're comfortable with names, that any problems they might have are the 'devil you know'. I can also respect that you "do not care all that much" and that you're happy to remain ignorant of alternatives. But, please, if you're going to argue, at least understand the subject. ### Work-arounds I haven't argued that multiple return values are "best expressed by stacks". 
What I would argue is that multiple return values are (1) one of the reasons your analogy to De Bruijn numbers fails, Fine, add anonymous tuples with only indexed projection to the mix then. Equally brittle and equally tedious to work with. (2) one of the reasons that stack-based languages are effective at refactoring the internals of functions. I fail to see the connection. That's an unfounded claim that neither matches my experience, nor makes any sense to me technically. While FORTH does have this problem, it is not fundamentally difficult to control how much a subprogram sees in concatenative or stack-based programming. Factor, Cat, Kitten, and other stack-based languages make it easy, through use of combinators or types. John Nowak described one such combinator, 'dip', in the first post in this topic. Fair enough, I shouldn't have said "unavoidable", but clearly, it's the default, and that default is all wrong. The "good" behaviour should be the norm, not require extra cost and extra work by the programmer---that never works out. I'm surprised to hear you argue otherwise. The biggest problem with this argument is that you've already established your complete (and apparently unabashed) ignorance of "the alternative" under discussion. Your arguments seem to consist of blind prejudice, false assertions (i-th element, unavoidable capability leak), and straw-man analogies (De Bruijn numbers). That ad hominem was neither called for nor does it refute what I said. So far, your arguments consisted mostly of hand-waving the advantages and pointing to work-arounds for the disadvantages. Programmer-made work-arounds are okay as long as you buy into the premise that there are (significant) advantages, but I haven't heard a plausible point in case yet. add anonymous tuples with only indexed projection to the mix then Adding tuples won't save your straw-man analogy. Actually, I was already assuming you could model tuples in a system with De Bruijn numbers (using a Church encoding if nothing else). But tuples have very different characteristics from multiple return values on the stack - i.e. the code receiving the tuple cannot be transparent to the tuple's existence, so now you need to introduce extra logic to break apart the tuple and use it. Also, multiple return values were only one of many reasons listed for why your analogy fails. I'm not sure why you're focusing on it in particular. (2) one of the reasons that stack-based languages are effective at refactoring the internals of functions. I fail to see the connection. The ability to return multiple values onto the stack enables what in applicative code would be the refactoring of local sub-expression variables - i.e. we can transparently factor out the code that dumps a bunch of local variables onto the stack. This works well in conjunction with the ability to remove or process multiple items from the stack. Relevantly, this is done without introducing an intermediate tuple. A tuple must be explicitly constructed, and explicitly decomposed, both of which add code and undermine the benefits of the factoring. Often applicative code won't factor out sequences of local variables because it isn't worth the hassle (or performance risk) of building a tuple then breaking it apart. The hassle is made worse because it's very verbose (e.g. passing parameters to the function that builds the tuple, using methods or pattern matching to extract from the tuple) and the savings is often very little.
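To illustrate the two mechanisms mentioned above -- 'dip' hiding the top of the stack from a subprogram, and multiple return values living directly on the stack -- here is a small sketch using nested pairs as the stack; the encoding and the producer/consumer functions are my own, purely for illustration:

```haskell
-- Model a stack as nested pairs: the top item sits in front of "the rest".
-- 'dip' runs a subprogram on everything below the top item, so the
-- subprogram cannot see or disturb that item.
dip :: (s -> s') -> (a, s) -> (a, s')
dip f (a, s) = (a, f s)

-- A "producer" leaves two values on the stack without building a tuple;
-- code that doesn't care about them can stay transparent to their number.
producer :: s -> (Int, (String, s))
producer s = (42, ("hello", s))

-- A "consumer" takes both values straight off the stack, again with no
-- intermediate tuple to construct or pattern-match apart.
consumer :: (Int, (String, s)) -> (String, s)
consumer (n, (msg, s)) = (msg ++ " " ++ show n, s)

-- e.g. consumer (producer ())  ==  ("hello 42", ())
```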
The "good" behaviour should be the norm, not require extra cost and extra work by the programmer -- that never works out. I'm surprised to hear you argue otherwise. I'm a big proponent of the capability security model, robust compositional security (and compositional reasoning in general), path of least resistance, good defaults, and abstractions that don't lie to me. It must seem strange to you, then, that I'm promoting stack-based programming, where it seems the entire environment being accessible is the normal state of things. Thing is, I'm not. I'm promoting concatenative programming, and tacit programming. The underlying model I favor is based on John Hughes's arrow model. (See my initial post in this sub-thread.) The primitive combinators for arrows are:

    (>>>) :: (a ~> b) -> (b ~> c) -> (a ~> c)
    first :: (a ~> a') -> (a * b) ~> (a' * b)

In arrows, the 'entire environment' is passed through the (>>>) composition operator, which I agree is concerning for security. OTOH, use of 'first' enables cutting off a very precise chunk of the environment for further operations, and is great for security. Also, every use of first is static - i.e. the amount of environment accessed never depends on runtime values. This makes it easy for developers to be aware of how data and authority flows within the language at compile-time. (Edit: Cat and Kitten get the same 'static' advantage by use of typed functions, albeit at some cost to expressiveness unless they use explicit macros.) In the concatenative language, structured types such as (a * (b * (c * d))) can be used to model stacks of varying size and type. An argument to 'first' is a static 'block' (function object) placed on a stack, i.e. instead of an applicative parameter. (This is nice for composition, but does result in a Turing complete model.) In a simplistic (non-extensible) environment, this might look like:

    [fooAction] first

The blocks syntactically stand out. I think they even look a bit like jail cells. So we do have a syntactic means to easily recognize when code is idiomatically being applied to some subset of the environment. Of course, Awelon's environment is a bit richer, and developers will use combinators other than 'first' based on how much of the environment they wish to provide - e.g. there might be a special combinator that passes on the first one, two, or three elements on the stack, or developers might create one that passes on those elements plus a couple globals. Developers must know the combinators if they want to understand how the environment is impacted... or automatic stack visualization would also do the trick. This turns out to be a major boon for expressiveness. I want security, like an object capability language, but I also want expressiveness and abstractive power. With tacit programming for arrows, I now can get both - without syntactic muss or fuss. Even the problem of 'forwarding a specific set of capabilities and data' can be refactored into user-defined combinators. (Not something easy to do with applicative code.) Beyond all that, I also have another feature for security, which is static support for sealer/unsealer pairs. I can, when I wish, encapsulate parts of an environment and hide them from any subprogram that lacks the unsealer. Makes parametric code easy to write. With a decent library, I could also use these to model ADTs or objects. Edit: I find it baffling that you consider combinators and types a "work around".
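As a small, concrete illustration of the point about 'first' (using Haskell's standard Control.Arrow combinators on ordinary functions, with an environment type of my own choosing -- this is not Awelon syntax):

```haskell
import Control.Arrow (first, (>>>))

-- The environment as nested pairs: an Int on top, a String below, and
-- whatever else underneath.
type Env rest = (Int, (String, rest))

-- A subprogram that can only ever see the Int at the top.
bumpTop :: Int -> Int
bumpTop = (+ 1)

-- 'first bumpTop' statically cordons off the rest of the environment;
-- nothing below the top item can be observed or disturbed by bumpTop.
step :: Env rest -> Env rest
step = first bumpTop

-- Composition threads the whole environment, but each use of 'first'
-- fixes, at compile time, how much of it a given subprogram may touch.
twice :: Env rest -> Env rest
twice = step >>> step
```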
Should I remind you of such fine examples of applicative languages as C, JavaScript, and Lisp? If I were to guess, I'd think concatenative languages have a much better record for controlling access to the environment than applicative languages do (on a percentage of languages basis). That ad hominem was neither called for nor does it refute what I said. I had already refuted what you said (De Bruijn false analogy, i-th element, unavoidable capability leak). The slight on you was not part of those refutations, but an observation on the quality and nature of your arguments thus far. If you think it was unwarranted, I suggest you consider how I might take you implying I'm a "masochist" or involved with concatenative programming for a shallow "sense of achievement". You should also consider how using phrases like "AFAICS" and "my experience" as part of your arguments will place your sight and experience under valid scrutiny. Programmer-made work-arounds are okay as long as you buy into the premise that there are (significant) advantages, but I haven't heard a plausible point in case yet. My strategy with you has not been to present advantages, but rather to (attempt to) break you from your misconceptions (i-th element, capability leaks, the De Bruijn analogy, that "Names are an essential abstraction to reduce cognitive load."). You've already acknowledged a few advantages - such as "I agree that names can have their problems, too", or "low-level syntactic aspects of cutting & pasting code segments". You've even acknowledged the potential expressive power of full environment manipulations, though you did so in terms of security risks and abstraction leaks rather than in terms of constructing DSLs with sentinels and full-stack processing. I consider those advantages to be valuable; the 'problems' of names are really quite severe, especially for safety-by-construction, which is in part why arrow models or box-and-wire programming have been created. The syntactic challenges of factoring are relevant - without the syntactic hassle, refactoring happens more often and at a finer granularity. The expressiveness of accessing the larger environment is useful for extensible languages and metaprogramming, even if we also address security concerns. Convincing you of their value seems pointless. You've already indicated your dismissiveness towards such ideas. ("luxury problems to have", "not terribly important to me", "clearly, that default is all wrong"). The syntactic properties of concatenative programming seem advantageous, btw, for fast parsing (almost no syntax), specifying rewrite optimizations (transform sequences of words into new sequences), automatic code generation (program directly corresponds to search path), genetic programming (splicing and recombining linear sequence of code), and hygienic macros (no risk of name capture). I think it's hard to argue against the technical properties here, but I've only mentioned these so you can dismiss them. Go ahead. I've got a few guesses about which dismissive tactics you'll use. I'd side with Andreas here: I don't get your points about tuples. The ability to return multiple values onto the stack enables what in applicative code would be the refactoring of local sub-expression variables - i.e. we can transparently factor out the code that dumps a bunch of local variables onto the stack. This works well in conjunction with the ability to remove or process multiple items from the stack.
I hear you say two things: (1) it's nice to return multiple values, and (2) there is a duality with taking multiple arguments that is important. Scenario for point 1: I want to combine several producers each returning a value, and have that combination be a first-class citizen in the language (no overly heavy syntax is enforced). Scenario for point 2: I want to compose a producer and a consumer, be able to change them in sync to pass one more value, without having to touch the code that connects them together. It seems to me however that both those points are enabled with tuples in applicative code with named variables. Point (1) would be the return of a tuple (that can be either immediately decomposed into separate names by the caller, or kept as a tuple when it is a good value abstraction). Point (2) would be simply a function call consumer(producer) where the type of producer is a tuple. Relevantly, this is done without introducing an intermediate tuple. A tuple must be explicitly constructed, and explicitly decomposed, both of which add code and undermine the benefits of the factoring. In my consumer(producer) example, there is no syntactic overhead. If you don't want to manipulate an intermediate tuple, you'll write something like let (foo, bar, baz) = producer in ... I don't think this can be considered as syntactically heavy. We're introducing names, but there is no syntactic overhead to introduce several names instead of one. In fact this is an exact ML-style counterpart of Lisp's syntax for multiple-bind, multi-value-return -- which means that an ML language *without* a concept of multiple values subsumes (in theory but also in practice) use cases of multi-value return -- in applicative languages. One could also introduce "value types" that have a different kind from the usual single-word values, which limits the level of abstraction you can use on a bunch of values -- one very strict way to enforce this is to disallow naming "values" of these value types, which amounts to forcing the expanded elimination form let (foo, bar, baz) everywhere. You don't gain anything in expressiveness (only in closeness to some performance model), so I don't think that would help in this discussion. Often applicative code won't factor out sequences of local variables because it isn't worth the hassle (or performance risk) of building a tuple then breaking it apart. The hassle is made worse because it's very verbose (e.g. passing parameters to the function that builds the tuple, using methods or pattern matching to extract from the tuple) and the savings is often very little. I emphatically disagree with your "it's very verbose". ML syntax for passing a tuple to a function is in fact exactly the syntax for passing multiple arguments in C-syntax languages, which I don't think is perceived as verbose. I've already pointed out that the deconstruction syntax also coincides with the most natural (and rather light) deconstruction for multi-value return. Re. De Bruijn indices. They are often used in formalizations of core calculi that don't bother with multiple-name binders, so we have less experience with them (binding theory papers, e.g. the "Locally Nameless" paper, note that they don't handle that case for simplicity). However, it is extremely natural to extend De Bruijn indices with multiple binders: just give them consecutive indices. De Bruijn syntax being nameless, the lambda-abstraction syntax is not (lambda x t), but simply (lambda t) (implicitly shifts binders and redefines 0).
The single-value let-binding form would be (let e1 e2) (binds 0 and shifts in e2). The tuple-deconstructing let-binding form would be (let-tuple e1 e2), where e1 returns a tuple of length N (or multiple values if you want to read it that way), and indices 0..N-1 are rebound in e2, with the rest shifted by N. This gives you a light multi-return syntax that, I agree with Andreas, rather closely corresponds to what happens in concatenative languages. A corresponding multi-arguments syntax would be (lambda-tuple t), where you may have to statically note the size of the input tuple if your type inference is not good enough (it's not immediate as with let-tuple as you have to look at use sites), or you find that it helps readability -- just like the explicit typing we mentioned -- as there is no obvious "implicit" information about how many values to take as input. ### Tuples don't help the De Tuples don't help the De Bruijn analogy. That's my point. They don't help because returning multiple values as a tuple is qualitatively different than returning multiple values on the stack, especially with regards to factoring code. You hand-waved over the issue of operating on an intermediate tuple. For some reason, you even deconstructed a tuple as (foo,bar,baz) and suggested it wasn't a manipulation of an intermediate tuple. But this manipulation is important to the qualitative difference. The need for it is why tuples don't help Andreas's case. A common case is that you want to modify different elements of the data being passed to the consumer. With applicative programming, the code ends up looking like this:

    (foo,bar) = producer
    baz = makeBaz foo
    consumer (baz,42,bar)

With multiple return values on the stack, we don't need to explicitly pick apart any tuples or put them back together.

    producer makeBaz 42 swap consumer

Here 'swap' can be considered semantic noise. What happens in practice - in "well factored code" - is that producers and consumers are arranged so that, when tweaks need to be made, it's almost always on the first outputs and inputs, or first two. (It may seem arbitrary, but such scenarios come up a lot in practice.) Also, there are many cases that (due to careful factoring) end up looking more like this:

    (foo,bar) = producer1
    qux = producer2
    baz = makeBaz 42 qux foo
    consumer(baz,bar)

Which corresponds to concatenative code:

    producer1 producer2 42 makeBaz consumer

This sort of code (no stack manipulations at all) happens only by design, but after a while it starts happening often without much thinking about it (like Haskellers designing functions that are easily curried). OTOH, beginners in concatenative programming tend to have messy stacks, and noisy stack manipulations, both of which make it even harder to keep track of a stack. It's a really harsh learning curve compared to use of names, though I think that could also be addressed. Mucking around with foobar examples probably doesn't reveal some of the more common structures that can be factored out of large functions, where applicative code often has difficulty. I find there is often a lot of repeated internal-function structure within or between functions regarding, for example, subsets of else-if-then sequences or boiler-plate initializers. Many factorings that would take macros in other languages can be done directly in concatenative programming. I emphatically disagree with your "it's very verbose". [..] syntax for passing multiple arguments in C-syntax languages, which I don't think is perceived as verbose.
It is very verbose compared to tacit concatenative programming. From that perspective, C does the following things that are very verbose: • parameter declarations take horizontal and vertical space • use descriptive parameters in expression => wide expressions, take lots of horizontal space • to reduce horizontal space, assign sub-expression to intermediate named local • LHS declaration of intermediate named local also takes horizontal space; prior action inefficient • every time you factor code, extract method, you need yet another set of name declarations Use of descriptive names is especially bad for verbosity. But even non-descriptive names are very verbose compared to tacit concatenative programming. People comfortable with applicative and unfamiliar with concatenative tend to imagine that the concatenative programming will need almost as many stack manipulations as they use names. This impression might be enforced when they, through inexperience, make a mess of the stack. But in practice (at least for experienced programmers) there are fewer stack manipulations than parameters, and concatenative programmers also avoid many intermediate names and parameter declarations. the deconstruction syntax also coincides to the most natural (and rather light) deconstruction for multi-value return Most natural deconstruction? Returning multiple values directly on the stack seems very natural to me. And even if you do return a tuple as a value, I can factor out the code to deconstruct it (i.e. code to take a 3-tuple and turn it into 3 items on the stack). I appreciate multiple values being returned directly on the stack, since constructing and deconstructing tuples would hinder refactoring. This gives you a light multi-return syntax that, I agree with Andreas, rather closely corresponds to what happens in concatenative languages. Even if tuples are "lightweight", you cannot operate on returned tuples as directly as you can multiple returns on the stack. The difference is obvious and significant to anyone who has done much concatenative programming. Many concatenative languages do have some support for tuples or lists as values. Mine is among them. Multiple values can be returned either directly on the stack, or via tuple. In my experience, multiple returns on the stack is much better for factoring, while a tuple or list is useful when I want to treat structured objects (e.g. a complex number) as a single value. ### The "good" behaviour The "good" behaviour should be the norm, not require extra cost and extra work by the programmer---that never works out. I'm surprised to hear you argue otherwise. I don’t believe he was arguing otherwise. You can have a stack-based language that only allows static/safe effects, and you can have a concatenative language that’s not based on stacks: arrows, queues, and graphs are all effective models. In addition, Kitten and Cat are type-inferred, so there is no burden on the programmer to ensure that a function’s effects are localised. Kitten’s stack is simply an efficient way of mapping its semantics onto real hardware. And inference is not free, of course, but its value more than outweighs its cost. stack-based languages are effective at refactoring the internals of functions. I fail to see the connection. That's an unfounded claim that neither matches my experience, nor makes any sense to me technically. The refactoring abilities of a concatenative language are very general, which makes them difficult to describe both succinctly and convincingly. 
The situation is similar with monads, where you have a proliferation of analogies with burritos and spacesuits and who knows what else, for what is essentially a very simple concept. The first example we usually trot out is that it's very easy to "extract method" in a concatenative language—cut, paste, name. But the key point is that unlike in Joe's Imperative Language, this is something you do all the time because it is so easy. Definitions rarely exceed a few lines of code. It's also not just about strings of the same function calls, but sequences which can be arbitrarily interleaved with calls to other functions. They describe a DAG of data and control flow—factoring is what lets you collapse arbitrary subgraphs into individual nodes, reducing program complexity in a very quantifiable way. dup for example is the pattern of sending the same data to multiple places:

    var x = f(); g(x); return h(x);

In Kitten, with locals:

    f ->x x g x h

Without:

    f dup g h

As a DAG, dup is simply a bit of program wiring that does this:

     |
    / \

At any point where your program does that, you can use dup because the puzzle piece fits, by its shape and more formally its type. And there are many more useful stack combinators which give us a common language for the shapes of subprograms, in the same way that folds give us a common language for subcomputations with the power of an FSA. That is the key insight: concatenative programming is about abstracting and composing flow structures. ### f dup g swap h I'm pretty sure your f dup g h code was an incorrect translation. This serves as a fine example of the risks of in-the-head visualization. Hope you don't mind. :) Assuming f and g each return one item, the correct code would be:

    f dup g swap h

Automatic visualization of inferred types, and live programming where we can continuously see the outputs for a sample input, are technologies that could help a great deal for all programming models. But I think they'll help a great deal more for tacit concatenative programming. My hypothesis is that named locals will interfere with automatic visualization - due to repetition of names, verbose code, lack of a clear 'environment' type, jumpy data-flow via names. Parameters and local variables are simultaneously a "poor man's" visualization tool, a verbose and semantically dubious dataflow representation, and an impediment to automated visualization tools. ### Thanks Changed the example. My excuse is that it's very hot today and my brain is not operating at peak efficiency. ;) ### Thanks :) Thanks for proving my very point though. :) Despite dmbarbour's remarkable attempt to spin it in a different direction: obviously, the named form is trivial to get right and trivial to read, while the stack-only form is neither -- even for such a trivial example, and even for somebody as experienced as you. As for flow abstractions: I keep hearing that, but I don't get the argument. Applicative languages can abstract flow just as well -- but they don't necessitate it. They provide more high-level structure than a pure stack language, in that they can express tree and DAG-like flow directly, instead of having you encode everything down into sequences. You use sequentialised flow (and define abstractions like arrows) only where that is a suitable abstraction. That seems like a clear win to me. ### Non-trivial variable plumbing In my experience, algebra is full of examples of variable plumbing that are hard to express in nameless languages.
The example that tricked Jon exposed non-linearity, and this is difficult. Non-local effects, where the same variable is needed in a distant part of a computation, are another. A list of examples:

    scalar_product (a1,b1) (a2,b2) = a1*a2 + b1*b2
      -- linear, local, easy in stack languages
    norm2 (a,b) = a*a + b*b
      -- non-linear, local, not too difficult
    complex_product (a1,b1) (a2,b2) = (a1*a2-b1*b2, a1*b2 + b1*a2)
      -- non-linear, non-local, difficult

Getting the nameless form of those is often enlightening (e.g. you can get an expression that scales better to adding a third dimension), just like Bird-Meertens squiggol-ish expressions of familiar operations have a certain hieratic beauty. One may argue that we're biased in favor of the nameful syntax due to the existing practices of domain experts, though on a small scale the plumbing often looks like something we want the compiler to do for us (... by using names). ### This is another aspect of This is another aspect of programming languages that is psychology instead of mathematics. Are names an accident of history or inherently easier for humans to understand? I think the latter but even those who disagree I hope will agree that in our current culture, code with names is much easier to express and understand. Even stack based languages use names for their operators. You could imagine a stack based language without names at all, where you hold the functions that you want to call on the stack just like normal data, and then you use a CALL operator to call them (in the same way that functions are ordinary data in functional programming languages). That would be truly horrible to program in, since even getting to a primitive operator like + would require stack manipulation. psychology instead of mathematics. Are names an accident of history or inherently easier for humans to understand? We should be asking: under which conditions is it easier for humans to understand and use names? Do you use names for the clothes in your closet? Do your shoes have names? your legs? What about the last apple you ate? Your car keys? Your car? If names offered an easy way for humans to understand things, then why don't humans use them more often? My observation: We use names primarily to identify distant people, important places, and important things. For other distant items, we're more likely to use identifying descriptions along with general location. For nearby items, we're more likely to communicate by pointing or glancing, leveraging our visual and spatial abilities. Humans have very advanced visual processing and spatial memory capabilities, and we are decently skilled at structural manipulations. Further, this seems to be our preferred modus operandi for the bulk of interactions with our environments and a fair portion of our communication with other humans. Shouldn't programming be the same way? Even stack based languages use names for their operators. Sure. It's only naming local data (parameters, local variables) that is under scrutiny. ### For nearby items, we're more For nearby items, we're more likely to communicate by pointing or glancing, leveraging our visual and spatial abilities. So you see concatenative dip trip flip expressions as the equivalent of pointing? A closer analogy to me would be a conversational style where the participants are supposed to keep track of a stack of objects that they're discussing and every conversational phrase rearranges the stack.
:) It seems to me that you're complaining that humans aren't good at names and then advancing as a solution something at which they're even worse. And to then argue that the IDE is going to be playing this mini-game for us, so it doesn't matter -- well, then why introduce it at all? Why not just say that your syntax is a graph, hooked up by pointing, and for the interface pick something stable. For example, have the UI introduce names (a, b, c, ...) for locals or parameters, and then reuse them as they're consumed. Maybe have a syntax for using a symbol without consuming it for explicit duping. Or, you could look at that scheme and decide that it's better to let programmers introduce names for those symbols because they can pick mnemonically superior symbols. And that dup detection is best handled by the IDE. And then you're back at the system everyone else uses, except that you have a nice graph presentation of the term that doesn't overemphasize the binding structure. (And you can still forbid implicit shadowing!) ### Metaprogramming, Visualization, and Problematic Names then why introduce it at all? One reason is metaprogramming. Graphical PLs tend to become unwieldy at scale. Humans can handle some of the "in-the-small" tough problems by pointing, but many of the in-the-large ones should be addressed automatically. I need a formal model of the environment and operations on it to model and understand interactions at the larger scale. Anything the human can do by navigating, pointing, and manipulating, must have names so the programmer can automate the acts of navigating, pointing, and manipulating. (This is what distinguishes the programmer-model from the real programmer.) Another reason is visualization. Visualization is easy to address for tacit concatenative programming. I have a well-typed environment at any given point, and I can visualize that type over space. I have a simple linear model for operations on the environment, which is easy to treat as time. I have well-typed dataflows, and I can thus easily trace or animate the movement of data. As I've said before, tacit concatenative code has both much more to gain from automatic visualization, and much greater potential for automatic visualization, compared to name-based code. Also, names have problems. The dataflows using names are implicit, and not explicitly typed. Names can be used in many unsafe or inefficient ways. It becomes non-trivial to analyze for safe uses of substructural types, or unsafe/inefficient name-capture in staged or distributed programs. Even if a computer can analyze for them, humans can have trouble understanding the issues. Names effectively teach bad intuitions about reference vs. referent and valid dataflow. The mini-game, by comparison, would teach valid intuitions... and this can be directly extended to dependent types, in a form much akin to assisted theorem proving. Name-based code seems to be relatively verbose, lacks any way to refactor out the basic declarations and uses of names. I believe that the pervasive use of names actually becomes a form of semantic noise that interferes with understanding of the code. Not as much as trying to visualize dataflow in one's head without automatic support, of course. But the abundant proliferation of names for local data and parameters can get in the way of reading and understanding behavior. 
And then, of course, there are other psychological problems, like decision fatigue, how names really aren't something humans naturally use in-the-small, and accidental shadowing. Sure, some of these problems can be addressed or patched over. But avoiding names is a simple solution to the problems of names, and thus one that can be exercised consistently. ... I believe we can obtain a better programming experience by marginalizing use of names, and formally modeling (e.g. via staged metaprogramming) the names that remain. But support of the IDE is the other essential ingredient to that experience. If names are out of the way, we can replace them - with better visualization, better automation, better composition, better code generation and factoring, better safety-by-construction properties, better intuitions. Potentially better UI/PL integration. ### How is this possible? I don't understand how names could have serious semantic disadvantages in any of these categories when there is a straightforward translation from names to concatenative style. That such a shallow translation is possible to me implies that the use of names is just a syntax/user interface issue. Further evidence is that you're talking about using names for non-local reference and there isn't much of a semantic difference between local and non-local. Unless you meant that names were going to be treated as first class references in your non-local scheme, in which case... now you have two problems :). So you seem to be conceding that concatenative style isn't the best from an HCI standpoint, but I'd argue that HCI is exactly what you need to be worried about when picking a surface level syntax, which is all I see this as. ### how could names have serious how could names have serious semantic disadvantages in any of these categories when there is a straightforward translation from names to concatenative style? There is no simple translation. Not for names as they are used in applicative programming. Unless you meant that names were going to be treated as first class references in your non-local scheme, in which case... now you have two problems :). Yes, that's what I've been saying. We formally distinguish reference and referent, and we formally model the relationship between them. In blunt terms: we use an association list, probably with text as the key. We can take or copy objects from this list, or add items to it. You say this is "two problems". I disagree. It's at least three problems: reference, referent, AND the mapping between them. By modeling all these explicitly, we can use names safely and robustly, even in staged, distributed, or heterogeneous systems... but we lose some of the syntactic convenience and equivalence properties. In some cases - e.g. for singleton objects or configuration variables, names that are relatively stable - the convenience of names for data-plumbing is worth trading for a few other problems. In many other cases - e.g. for parameters and local variables, or keyword arguments - it often isn't worth it. Sometimes it is - I'd use keyword arguments if I had an extensible set of parameters with potential default values. But names shouldn't be so pervasive. you seem to be conceding that concatenative style isn't the best from an HCI standpoint I won't argue it's "the best", but I think it's a significant improvement over pervasive use of names. Concatenative style, and marginalizing use of names, makes many powerful HCI features feasible. Therefore, it is better from an HCI standpoint.
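For concreteness, here is a small sketch of what 'reference, referent, and the mapping between them' might look like when names are modeled explicitly as an association list; the representation and the define/copy/take' operations are my own illustration, not Awelon's actual design:

```haskell
-- References (names) and referents (values) are distinct, and the mapping
-- between them is an ordinary value that programs manipulate explicitly.
type Name    = String
type Assoc v = [(Name, v)]

define :: Name -> v -> Assoc v -> Assoc v
define n v env = (n, v) : env

-- 'copy' looks a referent up without removing it ...
copy :: Name -> Assoc v -> Maybe v
copy = lookup

-- ... while 'take'' removes it, making linear (move) semantics explicit.
take' :: Name -> Assoc v -> Maybe (v, Assoc v)
take' n env = case break ((== n) . fst) env of
  (pre, (_, v) : post) -> Just (v, pre ++ post)
  _                    -> Nothing
```

Whether a lookup can fail, and whether a referent is copied or moved, is then visible in the types rather than hidden behind the syntactic convenience of a name.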
Even when names are modeled in the environment, it's useful for them to be explicitly modeled for HCI purposes, e.g. so we can clearly visualize the linear nature of referents separate from references. HCI is exactly what you need to be worried about when picking a surface level syntax, which is all I see this as. That's what I'm doing. I'm choosing an HCI and surface syntax that doesn't come with numerous, built-in problems with semantics, verbosity, intuition, fatigue, refactoring, visualization, and automation. I'm choosing a UI that can more naturally fit the human, with regards to many local and environment manipulations not requiring names. Do you honestly think that names are the best we can possibly do? (If so, could you possibly defend that position in an argument?) How can we make programming simpler, more robust, and more scalable if we're patching over leaky abstractions like the untyped dataflow of names? ### Conversations with a Stack? So you see concatenative dip trip flip expressions as the equivalent of pointing? A closer analogy to me would be a conversational style where the participants are supposed to keep track of a stack of objects that they're discussing and every conversational phrase rearranges the stack. :) Not quite. Pointing is something we do to communicate to intelligent beings that have visual and spatial capabilities similar to our own. Yet, when we interact with our environments, we also do not seem to think in terms of names. (This is what I meant when I said: "this seems to be our preferred modus operandi for the bulk of interactions with our environments"). These interactions include navigation, shifting stuff around, opening and closing, scanning surfaces, plugging elements together. Many of the 'dip trip flip' operations correspond to these actions, not to a conversational style between Q. Swap, roll, apply, some zipper navigations, take and put with the 'hand', content-based searches - these are a very good fit for our visual and spatial operations. It doesn't take much experience to think of these operations in spatial terms. And I imagine it would take much less in the presence of continuous visual feedback. Your attempt at an analogy to a conversational style with multiple participants would only seem relevant in a multi-programmer environment, where multiple programmers are taking turns controlling a structure. That might be interesting as a means to model some protocols (contracts, trades, games) but I grant that it could be difficult to track. Visualization would help a lot, of course. But, for the most part, even in a multi-programmer system, I imagine a single programmer would de-facto have exclusive control of a territory and relatively few encounters of that sort. ### OK, so you are arguing that OK, so you are arguing that dip trip flip is a reasonable HCI form factor for a programmer looking at a visualization of the stack. I can almost believe that for the programmer writing the code, but when you're reading the code surely those words are impenetrable unless they are annotated onto a graph showing what's going on. In which case, the graph is your source of understanding when reading and the words were only really useful for building the graph. I guess that makes the situation similar to the Coq Ltac language. Again, I can believe that such an interface is useful for expert level interaction, but the transcript of such interaction doesn't seem very interesting.
I think it's a bigger problem with Coq, where things very quickly become too large to visualize at once and the Ltac history is just kind of a time-index into the evolving proof environment state. Maybe with the added benefit that you'll remember and recognize the labels for some of the steps.

when you're reading the code surely those words are impenetrable unless they are annotated onto a graph showing what's going on

Visualization (and preferably some interaction) is also important for reading the code, understanding its meaning or behavior in context and with a variety of inputs. There are many ways to render functions, and live programs can help.

I'm not sure what you are imagining when you say "annotated onto a graph", but concatenative code has a rather nice property of looking just like a timeline. There are many ways to visualize time - before and after pictures, an image for each word in a concatenative program, animations. Objects can be highlighted as they're manipulated, perhaps glow warm and cool down for a while. We could even animate a character or disembodied hand wandering around to perform actions. If readers wish to trace certain objects or areas, it would not be hard to tag objects in the visualization, highlighting them. I believe the precise nature of dataflow would make objects easier to follow than with names.

I think applicative code would severely interfere with many rich visualizations. There is no clear environment type to render. The dataflow is implicit and jumpy. Aliasing and linearity are harder to represent. The code itself is more verbose, leaving less space for a visualization canvas. Names the programmer doesn't care about are semantic noise that is difficult to remove from the visualization. The primary advantage applicative code has for reading seems to be in the absence of automatic visualization - i.e. for a static page of code, visualizing in our minds, using the names as visual cues. For reading code, names are a poor man's visualization tool.

the transcript of such interaction doesn't seem very interesting

It's just as interesting as "the code" in any program. Which is to say: it's mostly interesting to someone who is trying to understand the program, learn from it, or debug it.

things very quickly become too large to visualize at once

True. Naturally, we'd want some ability to scope what we're looking at, both in space (parts of the environment - places or things) and time (volumes of operations). I don't see any inherent difficulty with this. I think that one reason Coq becomes harder to follow is that it implicitly uses search rather than deterministic operations that (in some cases) might explicitly encode search. Another reason is that the visualization for CoqIDE seems more text than graph, and doesn't make good use of color and placement.

### Visualization is also

Visualization is also important for reading the code, understanding its meaning or behavior in context.

I'm not sure what you are imagining when you say "annotated onto a graph", but concatenative code has a rather nice property of looking just like a timeline.

Do you have something promising along these lines? Point-free code tends to be concise and read left-to-right, so I was thinking a simple IDE could just display the stack type top-to-bottom under each value/operator. Something like:

    sum = foldr (+) 0
    ['a->'b->'b] ['a] ['a list] ['b] ['a list] ['a] ['a list] ['a] ['b]

Top-to-bottom displays expected stack values, left-to-right describes stack transformations.
Reading off the last one is the inferred type of the operator you're defining. -> is used to describe higher order operators in this context.

### Representing part of the

Representing part of the environment object under each word is one potential visualization I was considering. I'd use more colors, shapes, concrete values; fewer symbols. I can imagine hovering over an object in one of these representations to see how it flows through the others (drawing a trace arrow). I have several other visualizations in mind. Bret Victor's "Drawing Dynamic Visualizations" seems appropriately related.

I imagine a programming environment that puts the concatenative code itself more in the background. Not hidden from the programmer, so much as sitting in a smaller portion of the screen than the environment/process representations and any 'live' programming examples.

I think timeline/animation on an environment can work very well. Developers can visually see the dataflows involved with accessing global configuration values or singletons in the extended environment; it becomes trivial to understand how code is entangled, or to follow important objects (e.g. linear or relevant types, global objects, etc.) to see how they are used through the environment. I think animation will be a very big part of understanding.

Many of my ideas are specific to staged programming with the arrows model. I'm not sure how well they can generalize to stack based languages that perform side-effects or observe runtime inputs. A consequence is that I can often focus on concrete values that are available at compile-time, and I don't need to deal as much with homogeneous typed lists and so on (all compile-time lists in Awelon are heterogeneously typed, as are all zippers, etc.).

### on a small scale the

on a small scale the plumbing often looks like something we want the compiler to do for us (... by using names)

The alternative I'd like to pursue is that "tough" small-scale plumbing becomes something we have our IDE help us with, using a PL that can have graphical elements where convenient. I.e. manipulate objects in the stack visualization and generate code to perform said manipulations. I think this would naturally appeal to the human spatial/visual capabilities, far more than using names. It should also teach better intuitions about how data and objects can be manipulated, compared to guessing at what the compiler does.

I envision a VR/AR programming environment. I sure wouldn't want to use names to access and manipulate most things in such an environment. I need a formal concept of moving objects in such an environment, and I want safety by construction. So tacit concatenative programming seems a more promising formal basis, even if I model names for some purposes (like different workspaces).

Many stack languages do support local names (Kitten and Factor, for example). I can't find the ref, but I recall a note indicating that 'as of this writing' Factor used named locals for around 308/38k words (less than 1%) in its internal codebases (optimizers, IDE, etc.). I can model local names in Awelon using metaprogramming, but I can't imagine them being used much more often than that. I might be tempted for your complex_product example, if I wasn't able to use a little visualization and code gen. More likely I would break it into two linear problems.

### one way to do it is often better than two

I'm not here for "spin" or sophistry. My interest is developing a better programming experience.
I've never argued that concatenative code is easy to read in the absence of visualization. But if that's your "very point", I believe it is easily addressed with technology.

Applicative languages can abstract flow just as well -- but they don't necessitate it

What I've found is that mixed-mode ad-hoc applicative with compositional dataflow is a really awful programming experience. In the context of the arrows model in Haskell, there are two cases where applicative code is often used:

1. some operators (e.g. arr, first) take applicative parameters. The consequence was that those particular inputs could not be abstracted in the arrow itself
2. in some cases, I want to perform a little static metaprogramming, e.g. repeat an arrow N times using a parameter N rather than repeating by hand. Again, this would take applicative code.

When these two influences are combined, they both contribute to a lot of ad-hoc dataflow through the applicative model. Essentially, I would be abstracting "parameterized configurations" of components. The problem was that this dataflow was not very compositional. The ad-hoc dataflow becomes a glue globbing everything in a parameterized configuration together. It was difficult to factor components, and conversely it was often challenging to ensure that parameters to different subcomponents are related in sensible ways. I found I really wanted the ability to parameterize via the dataflow, such that I can separate the site where a parameter is constructed from the site where it is applied in a compositional manner.

When I came up with the idea of modeling 'static' type parameters within the arrow itself (thus representing a staged computation), that proved suitable to solve the problem. It also inherently resulted in a staged concatenative language.

My own experience reminded me that more choice isn't necessarily better. In this case, it seems that shoving everything into the dataflow is both conceptually simpler and much more compositional. There are also fewer things to learn - there is only one way to do it.

they can express tree and DAG-like flow directly

Arguably, the only direct expression of a tree and DAG-like flow is to literally draw a tree - i.e. as done in many graphical PLs. Applicative code does enable ad-hoc dataflow by use of names, though. Each use of a name can be considered a 'goto' for data. This makes them convenient, but also makes it difficult to structurally enforce safety properties that might have been protected in a more direct representation.

instead of having you encode everything down into sequences [..] use sequentialised flow (and define abstractions like arrows) only where that is a suitable abstraction.

I'm finding that it's always a suitable abstraction. Encoding operations in sequences is actually quite pleasant:

• To encode operations on lists or tree-like structures, we can use zippers, lenses, folds, traversals.
• Zippers are close to physical intuition of editing and navigating a document with a finger or cursor.
• very concise, since we don't have the overhead of naming things or referring to things by names when they are within easy reach
• good fit for relatively linear media like text; composition in syntax in addition to semantics results in easy syntax-layer factoring
• good fit for safety-by-construction, since we have a more precise model and syntactic representation for every dataflow, copy, and drop operation on data

And then there are indirect advantages, since tacit concatenative programming also results in easier parsers, compilers, optimizers, code generators, and so on.

### My own experience reminded

My own experience reminded me that more choice isn't necessarily better. In this case, it seems that shoving everything into the dataflow is both conceptually simpler and much more compositional. There are also fewer things to learn - there is only one way to do it.

On that note.

As for concatenative vs. non, I think names are good for configuration and documentation. But as Ruby on Rails popularized, convention over configuration is often superior. Concatenative style is the ultimate convention, which offers tremendous flexibility by reducing the language primitives to the minimal set required, and as the SF-calculus showed, it allows trivially extending the language in quite novel ways, including with metacircularity.

The loss of documentation is a real downer though, and debuggers are very geared towards inspecting named variables, but if you're going to augment the default programming experience with an IDE that provides whatever concatenative programming typically loses, then it might work out just fine. I think the long history of Forth, and the growth of Factor, has demonstrated that usability isn't the real barrier for such languages, any more than usability was a barrier to the adoption of ML.

### SF-calculus showed, it

SF-calculus showed, it allows trivially extending the language in quite novel ways, including with metacircularity.

I think attributing this to concatenative style or a lack of names is probably wrong, but I don't know enough about the SF-calculus to say that definitively.

I think the long history of Forth, and the growth of Factor has demonstrated that usability isn't the real barrier for such languages,

That a barrier isn't insurmountable doesn't mean it isn't a barrier. Do you think the parenthesis soup ever stopped anyone from using Lisp?

### I think attributing this to

I think attributing this to concatenative style or a lack of names is probably wrong, but I don't know enough about the SF-calculus to say that definitively.

Using the SF-calculus trick to reflect at the level of lambdas and binders is an open problem, according to the recent papers I've been reviewing. I wasn't claiming that you can only reflect concatenatively, merely that the inherent simplicity and flexibility of the primitives of concatenative programming made it "easy". It doesn't seem so easy once you have binders.

That a barrier isn't insurmountable doesn't mean it isn't a barrier. Do you think the parenthesis soup ever stopped anyone from using Lisp?

Absolutely, but that's neither here nor there, because regardless of any barriers syntax and semantics may pose, what matters is the expressive power of the paradigm (seems very high), and how maintenance scales (unclear, which is where David will likely have his toughest fight).
As one hypothesis: if the lack of names poses a serious burden on the human mind, then that will encourage writing even smaller named functions, which in concatenative programming will probably end up being quite reusable. The deficiency of names may thus end up making programs more modular and reusable. ### Perception of The ability to refactor code easily means it happens often. Small words, with descriptive names, often make it easy to "guess the intention" of the code that uses them. Coupled with some visualization and animation (which is available due to the staged programming model), I think maintenance won't be a problem. But perception of maintenance being a problem will probably be a problem for a long time. People seem to panic every time you take something away (goto, pervasive state, ambient authority, local state, events, names, etc.), even if you replace it with something better. I wonder if people might accept it easier if they're first exposed to the graphical programming, with the concatenative code generated in the background - available for investigation and learning by osmosis. The concatenative code would then help for environment extensions, metaprogramming, pattern detection (programming by example and a few code tweaks), automatic refactoring, recognition, rewriting, replay... People need to start concrete and start small. Concatenative has the potential to make it scale. ### concatenative the ultimate Concatenative style is the ultimate convention, which offers tremendous flexibility by reducing the language primitives to the minimal set required I was quite surprised at just how flexible these languages are - easily competitive with a pattern calculus or vau calculus, due to the direct access to the environment, but it also seems easier to control access to the environment. And the metaprogramming is naturally hygienic. I had only done a little concatenative programming before starting mine. Initially, I was just going for concatenative byte-code for RDP (software components, agents, distribution language). But its simplicity and expressiveness surprised me, so I lifted it into a full language. Then I developed the programmer-model idea, which can address many difficulties I was having for AR/VR programming environments. This is taking me in some exciting directions. So, yeah: I agree that these languages offer a great deal of power by convention. ### kittens if you have two functions of type (r a b c → r d), then you can copy the top three elements of the stack, run those two operations concurrently, join on both, and return two values of type d The concurrency model you're suggesting would be very inexpressive. Consider instead introducing promise or future support, such that when you spin up an (r a b c → r d) computation, the 'd' is immediately left on the stack as a future and you only wait for it if you try to observe it in a significant manner. Another good approach is to add 'channel' support: create a bounded-buffer channel for one-to-one communication, communicate through it by pushing messages or waiting on them. (A future can be modeled as a single-message channel.) Regardless, spinning up a fixed number of threads then joining? You can do much better than that. stack-like nature can enable use similar to lexically scoped variables, assuming disciplined developers Never assume that. I suppose some metaprogramming could also support the convention, i.e. 
use word { to create a new scope-object on the current stack, use var to add words to it, and } to end the scope and try to drop the variables (type error if any types we don't know how to drop).

### Good points

Consider instead introducing promise or future support

As long as the only observable difference is performance, then that is probably the better approach.

Another good approach is to add 'channel' support: create a bounded-buffer channel for one-to-one communication, communicate through it by pushing messages or waiting on them. (A future can be modeled as a single-message channel.)

Channels are just a generalisation of mutable references (with a buffer size of n instead of 1). Perfectly fine concept—doesn’t mesh well with Kitten.

Regardless, spinning up a fixed number of threads then joining? You can do much better than that.

You’re right. I was wooed by the conceptual simplicity. A par word forks green threads, and a scheduler keeps OS/hardware threads busy:

    def parmap: ->{ xs f } option (xs last): ->x {xs init f map} {x f@} par append else: []

I’m sure the equivalents with futures or channels can be equally nice or nicer, though.

### From the user's POV, one of

From the user's POV, one of the big issues of 'parmap' is that it isn't compositional unless it runs in parallel with the calling thread: there's a huge performance difference between sequentially applying 'parmap' on two vectors of ten items vs. first gathering those into one vector of twenty items. Many operations that are conceptually commutative or associative will not behave that way with regards to performance. This distinction, if it exists, will cause headaches as developers stubbornly collect those vectors... and again, as they carefully restore them. We don't want 'conceptual simplicity' that leads to Ptolemaic epicycles. :)

Use of futures, however, is compositional in this sense. Developers can create compositional parallel strategies. They also gain more expressiveness, e.g. by passing the promises of some computations as arguments to others, or explicitly waiting for certain computations to complete before starting others.

Channels are just a generalisation of mutable references. Perfectly fine concept—doesn’t mesh well with Kitten.

References can work well in pure languages if given linear types or bound to a linear structure like an ST monad. Additionally, global variables would not be difficult to model in terms of a stack carrying a structurally typed record. I think you could make such things play nicely with Kitten. The real question is whether you want to allow your kitten near such things. ;)

### par vs. fut

From the user's POV, one of the big issues of 'parmap' is that it isn't compositional unless it runs in parallel with the calling thread

Right, par would not allow the lifetime of a thread to implicitly escape the scope in which it was forked. A useful restriction to offer but maybe a silly one to impose.

there's a huge performance difference between sequentially applying 'parmap' on two vectors of ten items vs. first gathering those into one vector of twenty items

You could evaluate the two parmap calls in a par as well. par just enables concurrency; the scheduler manages parallelism.

Could a fut combinator take one function and evaluate it as though it were a par whose other argument is the current continuation?

The real question is whether you want to allow your kitten near such things. ;)

Maybe when it’s a little older. :)
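For comparison, here is a rough Haskell sketch of the future-style alternative being discussed - illustrative only, not Kitten's actual API, and with no exception handling - where starting a computation returns immediately and waiting is a separate, explicit step:

```haskell
import Control.Concurrent      (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, readMVar)

-- A future is just an action that yields the result, blocking if needed.
newtype Future a = Future { wait :: IO a }

-- Start the computation on another (lightweight) thread and return at once.
fut :: IO a -> IO (Future a)
fut act = do
  box <- newEmptyMVar
  _   <- forkIO (act >>= putMVar box)
  pure (Future (readMVar box))   -- readMVar blocks until the result arrives

-- A parmap in this style: launch one future per element, then wait for all.
-- Futures can be started anywhere and gathered anywhere, so the available
-- parallelism does not depend on collecting everything at one call site.
parMapFut :: (a -> IO b) -> [a] -> IO [b]
parMapFut f xs = mapM (fut . f) xs >>= mapM wait
```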
### You could evaluate the two

You could evaluate the two parmap calls in a par as well.

That wouldn't address the issue I'm trying to describe. It's the act of gathering inputs to a call-site (and the subsequent distribution of results) that fails to compose. It doesn't matter whether the call itself is 'par' or 'parmap'. The reason 'futures' help is they abrogate the need to gather everything you want parallelized to one site: you can start futures one at a time or a hundred at a time, it results in the same potential for parallelism.

Could a fut combinator take one function and evaluate it as though it were a par whose other argument is the current continuation? [edited]

I suppose you could think about it that way, though it isn't clear to me what you gain by doing so. An important point is that futures also need some kind of 'future' data structure to serve as a synchronization point. The par you described doesn't have this because it joins immediately. Basically, the 'Future x' type could be implicit or explicit, but must exist. (I favor explicit, but implicit is nice for transparent parallelism.)

### Gotcha

The par you described doesn't have this because it joins immediately.

Of course! I knew there was something wrong about my thinking there. So a simple semi-explicit implementation comes to mind: fut takes a function, applies all its arguments, and gives you back a nullary function of the same result type. Calling this either blocks until the computation is finished, or retrieves the already computed result—which one is invisible to you as a caller. I just wonder how much value there is in having a separate wait operation. Effectful futures are also a little scary.

### Effectful futures are also a

Effectful futures are also a little scary.

It might be best to avoid those to start, if you can typefully distinguish pure functions from impure ones. It's easier to add a safe way to address concurrent effects later, than to take away the ability to cause unsafe effects. Also, you can potentially use a cheaper implementation - or perhaps leverage resources you otherwise couldn't (like a cloud or GPU) - if you know you won't need effects. Meanwhile, pure futures would have real value for parallel computation.

An explicit 'wait' operation can be nice if you want to make it syntactically obvious in the source code when you might wait. But I suppose that is always the case with functions, right? You might not gain anything by it.

Related: you might also provide a lazy combinator that takes a pure function (and its arguments) then returns a memoized function. Might as well complete the set. :)

### I do not understand what I am supposed to gain here

From reading the original blog entry and the articles here, I really do not get what I am supposed to gain with concatenative programming.

When I played around with Forth a long time ago, I would break my code into multiple lines with comments on each marking the current state of the stack at the end of the line, until I eventually just gave up on that and used local variables for anything that is not to be consumed as quickly as it is put on the stack. (Without the comments, I found the code nigh-impossible to modify after the fact.) These days, working in Haskell, I tend to find more elaborate point-free notation beyond the likes of f . g . h painful to try to understand, much preferring an applicative style of programming for anything beyond the simplest code.
And yes, while concatenative programming may make returning multiple values trivial, what really is the problem with tuples?

Conversely, I am not convinced by the supposed disadvantages of applicative programming. So what if many virtual machine implementations themselves are based on stacks and not ASTs - that is merely an implementation detail that I would prefer to not have to think about. And I prefer to give name bindings descriptive names beyond trivial cases (where then I do often use names like x, xs, n, f, g, etc.), as I am much more likely to remember what they are for when I come back to the code later (as I have found code where I did not take the time to come up with descriptive names difficult to understand when I came back to it after an extended period of time). In comparison, concatenative programming means no names to tell one what values are what in the first place! (Hence the necessity of extra comments for that purpose, and if one needs comments, what is one gaining here?)

### Concatenative.org lists such

Concatenative.org lists such properties as concision, simple syntax and semantics, pipeline style, easy metaprogramming. People who stick with concatenative programming past the initial rejection stage tend to speak very highly of it. But I think there isn't any one thing that anyone would say is the 'killer app' of concatenative. It's a bunch of little (but important) things.

My favorite features are probably the easy refactoring of function internals, the ability to keep my environment clean, the emphasis on composition. I also like safety-by-construction, which is a hard feature to achieve using names (names require too much analysis, especially with substructural types or aliasing), and there are some nice applications of concatenative code for automatic code generation, genetic programming, and the like.

But one thing I dislike is trying to keep stack states in my head. I've gotten decent at it - up to three items on the stack and three in my hand (a second stack) is enough for many functions without factoring - but I really think this is something that could be and should be automated by an IDE. My intuition is that the difference will be like moving from a line editor to a text editor. I mean, I write text all the time without naming each line or paragraph, and without keeping line numbers in my head. That's what programming should be like.

concatenative programming means no names to tell one what values are what in the first place! (Hence the necessity of extra comments for that purpose, and if one needs comments, what is one gaining here?)

Are you suggesting that applicative code avoids the need for 'extra comments'? I was under the impression that the general consensus is that parameter names are not sufficient documentation. Anyhow, documentation certainly doesn't require names. It is often sufficient to describe the role of inputs, or sometimes to provide a small example program like: 12 4 div = 3 or "text" [action] newButton.

Also, if we avoid names for parameters and locals, what's left is behavior. With good word choice, subprograms can often suggest their intention to any reader. I.e. instead of f . g . h you might have textToLines [wordCount] map listAverage. Even if you don't know it's correct, you can probably guess what it should do.
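As a small illustration of that last point - textToLines, wordCount, and listAverage are hypothetical words from the example above, not any particular library - here is roughly what the same pipeline might look like in Haskell, first point-free and then with named locals (a sketch that ignores the empty-input case):

```haskell
-- Point-free: the composed words carry the meaning; no intermediate names.
avgWordsPerLine :: String -> Double
avgWordsPerLine = listAverage . map wordCount . lines
  where
    wordCount :: String -> Double
    wordCount = fromIntegral . length . words

    listAverage :: [Double] -> Double
    listAverage xs = sum xs / fromIntegral (length xs)

-- The same computation with named intermediates.
avgWordsPerLine' :: String -> Double
avgWordsPerLine' text =
  let lns           = lines text
      countsPerLine = map (fromIntegral . length . words) lns
  in  sum countsPerLine / fromIntegral (length countsPerLine)
```

Whether the named version reads better is exactly the judgment call being debated in this thread.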
Since many functions are factored into smaller subprograms that each fit on one or two lines in concatenative languages, this "guess its meaning" approach often works well in conjunction with a little extra documentation. Applicative code with descriptive names tends to become very bloated due to the cycle of names being used in large horizontal sub-expressions and sub-expressions being assigned to newly declared variables. IMO, it's also easy to get lost in all the syntax, and method names fail to dominate unless the IDE bolds them or something. It is really hard to write good self-documenting code using names for data - it focuses too much attention on the ingredients rather than the process.
https://www.khanacademy.org/math/algebra-home/alg-exp-and-log/alg-properties-of-logarithms/v/using-multiple-logarithm-properties-to-simplify
# Using the properties of logarithms: multiple steps ## Video transcript We're asked to simplify log base 5 of 25 to the x power over y. So we can use some logarithm properties. And I do agree that this does require some simplification over here, that having this right over here inside of the logarithm is not a pleasant thing to look at. So the first thing that we realize-- and this is one of our logarithm properties-- is logarithm for a given base-- so let's say that the base is x-- of a/b, that is equal to log base x of a minus log base x of b. And here we have 25 to the x over y. So we can simplify. So let me write this down. I'll do this in blue. Log base 5 of 25 to the x over y using this property means that it's the same thing as log base 5 of 25 to the x power minus log base 5 of y. Now, this looks like we can do a little bit of simplifying. It seems like the relevant logarithm property here is if I have log base x of a to the b power, that's the same thing as b times log base x of a, that this exponent over here can be moved out front, which is what we did it right over there. So this part right over here can be rewritten as x times the logarithm base 5 of 25. And then, of course, we have minus log base 5 of y. And this is useful because log base 5 of 25 is actually fairly easy to think about. This part right here is asking us, what power do I have to raise 5 to to get to 25? So we have to raise 5 to the second power to get to 25. So this simplifies to 2. So then we are left with, this is equal to-- and I'll write it in front of the x now-- 2 times x minus log base 5 of y. And we're done.
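In symbols, the simplification carried out in the transcript is:

$$\log_5\frac{25^{x}}{y} \;=\; \log_5 25^{x} - \log_5 y \;=\; x\log_5 25 - \log_5 y \;=\; 2x - \log_5 y,$$

using $\log_5 25 = 2$, since $5^2 = 25$.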
https://www.fulltextarchive.com/page/The-Sewerage-of-Sea-Coast-Towns2/
# The Sewerage of Sea Coast Towns by Henry C. Adams Part 2 out of 3 which the storm water has to be held up for tidal reasons. It is found that on the average the whole of the rain on a rainy day falls within a period of 2-1/2 hours; therefore, ignoring the relief which may be afforded by overflows, if the sewers are tide-locked for a period of 2-1/2 hours or over it would appear to be necessary to provide storage for the rainfall of a whole day; but in this case again it is permissible to run a certain amount of risk, varying with the length of time the sewers are tide-locked, because, first of all, it only rains on the average on about 160 days in the year, and, secondly, when it does rain, it may not be at the time when the sewers are tide-locked, although it is frequently found that the heaviest storms occur just at the most inconvenient time, namely, about high water. Table No. 9 shows the frequency of heavy rain recorded during a period of ten years at the Birmingham Observatory, which, being in the centre of England, may be taken as an approximate average of the country. TABLE No. 9. FREQUENCY OF HEAVY RAIN ------------------------------------------------------- Total Daily Rainfall. Average Frequency of Rainfall ------------------------------------------------------- 0.4 inches and over 155 times each year 0.5 " 93 " 0.6 " 68 " 0.7 " 50 " 0.8 " 33 " 0.9 " 22 " 1.0 " 17 " 1.1 " Once each year 1.2 " Once in 17 months 1.25 " " 2 years 1.3 " " 2-1/2 1.4 " " 3-1/3 1.5 " " 5 years 1.6 " " 5 years 1.7 " " 5 years 1.8 " " 10 years 1.9 " " 10 years 2.0 " " 10 years -------------------------------------------------- It will be interesting and useful to consider the records for the year 1903, which was one of the wettest years on record, and to compare those taken in Birmingham with the mean of those given in "Symons' Rainfall," taken at thirty-seven different stations distributed over the rest of the country. TABLE No. 10. RAINFALL FOR 1903. Mean of 37 stations in Birmingham England and Wales. Daily Rainfall of 2 in and over ...... None 1 day Daily Rainfall of 1 in and over ...... 3 days 6 days Daily Rainfall of 1/2 in and over .... 17 days 25 days Number of rainy days.................. 177 days 211 days Total rainfall ...................... 33.86 in 44.89 in Amount per rainy day ................ 0.19 in 0.21 in The year 1903 was an exceptional one, but the difference existing between the figures in the above table and the average figures in Table 9 are very marked, and serve to emphasise the necessity for close investigation in each individual case. It must be further remembered that the wettest year is not necessarily the year of the heaviest rainfalls, and it is the heavy rainfalls only which affect the design of sewerage works. CHAPTER VIII. STORM WATER IN SEWERS. If the whole area of the district is not impermeable the percentage which is so must be carefully estimated, and will naturally vary in each case. The means of arriving at an estimate will also probably vary considerably according to circumstances, but the following figures, which relate to investigations recently made by the writer, may be of interest. In the town, which has a population of 10,000 and an area of 2,037 acres, the total length of roads constructed was 74,550 lineal feet, and their average width was 36 ft, including two footpaths. The average density of the population was 4.9 people per acre. 
Houses were erected adjoining a length of 43,784 lineal feet of roads, leaving 30,766 lineal feet, which for distinction may be called "undeveloped"--that is, the land adjoining them was not built upon. Dividing the length of road occupied by houses by the total number of the inhabitants of the town, the average length of road per head was 4.37 ft, and assuming five people per house and one house on each side of the road we get ten people per two houses opposite each other. Then 10 x 4.37 = 43.7 lineal feet of road frontage to each pair of opposite houses. After a very careful inspection of the whole town, the average area of the impermeable surfaces appertaining to each house was estimated at 675 sq. ft, of which 300 sq. ft was apportioned to the front roof and garden paths and 375 sq. ft to the back roof and paved yards. Dividing these figures by 43.7 lin. ft of road frontage per house, we find that the effective width of the impermeable roadway is increased by 6 ft 10 in for the front portions of each house, and by a width of 8 ft 7 in for the back portions, making a total width of 36 ft + 2(6 ft 10 in) + 2(8 ft 7 in) = 66 ft 10 in, say 67 ft. On this basis the impermeable area in the town therefore equals: 43,784 lin. ft x 67 ft = 2,933,528 sq. ft; and 30,766 lin. ft x 36 ft = 1,107,576 sq. ft. Total, 4,041,104 sq. ft, or 92.77 acres. As the population is 10,000 the impermeable area equals 404, say 400, sq. ft per head, or (92.77 x 100) / 2037 = 4.5 per cent. of the whole area of the town.

It must be remembered that when rain continues for long periods, ground which in the ordinary way would generally be considered permeable becomes soaked and eventually becomes more or less impermeable. Mr. D. E. Lloyd-Davies, M.Inst.C.E., gives two very interesting diagrams in the paper previously referred to, which show the average percentage of effective impermeable area according to the population per acre. This information, which is applicable more to large towns, has been embodied in Fig. 16, from which it will be seen that, for storms of short duration, the proportion of impervious areas equals 5 per cent. with a population of 4.9 per acre, which is a very close approximation to the 4.5 per cent. obtained in the example just described.

Where the houses are scattered at long intervals along a road the better way to arrive at an estimate of the quantity of storm water which may be expected is to ascertain the average impervious area of, or appertaining to, each house, and divide it by five, so as to get the area per head. Then the flow off from any section of road is directly obtained from the sum of the impervious area due to the length of the road, and that due to the population distributed along it.

[Illustration: FIG. 16.--VARIATION IN AVERAGE PERCENTAGE OF EFFECTIVE IMPERMEABLE AREA ACCORDING TO DENSITY OF POPULATION.]

In addition to being undesirable from a sanitary point of view, it is rarely economical to construct special storm water drains, but in all cases where they exist, allowance must be made for any rain that may be intercepted by them. Short branch sewers constructed for the conveyance of foul water alone are usually 9 in or 12 in in diameter, not because those sizes are necessary to convey the quantity of liquid which may be expected, but because it is frequently undesirable to provide smaller public sewers, and there is generally sufficient room for the storm water without increasing the size of the sewer. If this storm water were conveyed in separate sewers the cost would be double, as two sewers would be required in the place of one.
In the main sewers the difference is not so great, but generally one large sewer will be more economical than two smaller ones. Where duplicate sewers are provided and arranged so that the storm water sewer takes the rain-water from the roads, front roofs and gardens of the houses, and the foul water sewer takes the rain-water from the back roofs and paved yards, it was found in the case previously worked out in detail that in built-up roads a width of 36 ft + 2 (8 ft 7 in) = 53 ft 2 in, or, say, 160 sq. ft per lineal yard of road, would drain to the storm water sewer, and a width of 2 (6 ft 10 in) = 13 ft 8 in, or, say, 41 sq. ft per lineal yard of road, to the foul water sewer. This shows that even if the whole of the rain which falls on the impervious areas flows off, only just under 80 per cent. of it would be intercepted by the special storm water sewers. Taking an average annual rainfall of 30 in, of which 75 per cent. flows off, the quantity reaching the storm water sewer in the course of a year from each lineal yard of road would be 30/12 x 160 x 75/100 = 300 cubic feet = 1,875 gallons.

[Illustration: FIG. 17.--SECTION OF "LEAP WEIR" OVERFLOW]

The cost of constructing a separate surface water system will vary, but may be taken at an average of, approximately, 15s. 0d. per lineal yard of road. To repay this amount in thirty years at 4 per cent. would require a sum of 10.42d., say 10-1/2d., per annum; that is to say, the cost of taking the surface water into special sewers is (10-1/2d. x 1000) / 1875 = 5.6d., say 6d., per 1,000 gallons.

If the sewage has to be pumped, the extra cost of pumping by reason of the increased quantity of surface water can be looked at from two different points of view:--

1. The net cost of the gas or other fuel or electric current consumed in lifting the water.
2. The cost of the fuel consumed plus wages, stores, etc., and a proportion of the sum required to repay the capital cost of the pumping station and machinery.

The extra cost of the sewers to carry the additional quantity of storm water might also be taken into account by working out and preparing estimates for the alternative schemes. The actual cost of the fuel may be taken at approximately 1/4 d. per 1,000 gallons. The annual works and capital charges, exclusive of fuel, should be divided by the normal quantity of sewage pumped per annum, rather than by the maximum quantity which the pumps would lift if they were able to run continuously during the whole time. For a town of about 10,000 inhabitants these charges may be taken at 1-1/4 d. per 1,000 gallons, which makes the total cost of pumping, inclusive of capital charges, 1-1/2 d. per 1,000 gallons. Even if the extra cost of enlarging the sewers is added to this sum it will still be considerably below the sum of 6 d., which represents the cost of providing a separate system for the surface water.

Unless it is permissible for the sewage to have a free outlet to the sea at all states of the tide, the provision of effective storm overflows is a matter of supreme importance. Not only is it necessary for them to be constructed in well-considered positions, but they must be effective in action.
A weir constructed along one side of a manhole and parallel to the sewer is rarely efficient, as in times of storm the liquid in the sewer travels at a considerable velocity, and the greater portion of it, which should be diverted, rushes past the weir and continues to flow in the sewer; and if, as is frequently the case, it is desirable that the overflowing liquid should be screened, and vertical bars are fixed on the weir for the purpose, they block the outlet and render the overflow practically useless. Leap weir overflows are theoretically most suitable for separating the excess flow during times of storm, but in practice they rarely prove satisfactory. This is not the fault of the system, but is, in the majority of the cases, if not all, due to defective designing. The general arrangement of a leap weir overflow is shown in Fig. 17. In normal circumstances the sewage flowing along the pipe A falls down the ramp, and thence along the sewer B; when the flow is increased during storms the sewage from A shoots out from the end of the pipe into the trough C, and thence along the storm-water sewer D. In order that it should be effective the first step is to ascertain accurately the gradient of the sewer above the proposed overflow, then, the size being known, it is easy to calculate the velocity of flow for the varying depths of sewage corresponding with minimum flow, average dry weather flow, maximum dry weather flow, and six times the dry weather flow. The natural curve which the sewage would follow in its downward path as it flowed out from the end of the sewer can then be drawn out for the various depths, taking into account the fact that the velocity at the invert and sides of the sewer is less than the average velocity of flow. The ramp should be built in accordance with the calculated curves so as to avoid splashing as far as possible, and the level of the trough C fixed so that when it is placed sufficiently far from A to allow the dry weather flow to pass down the ramp it will at the same time catch the storm water when the required dilution has taken place. Due regard must be had to the altered circumstances which will arise when the growth of population occurs, for which provision is made in the scheme, so that the overflow will remain efficient. The trough C is movable, so that the width of the leap weir may be adjusted from time to time as required. The overflow should be frequently inspected, and the accumulated rubbish removed from the trough, because sticks and similar matters brought down by the sewer will probably leap the weir instead of flowing down the ramp with the sewage. It is undesirable to fix a screen in conjunction with this overflow, but if screening is essential the operation should be carried out in a special manhole built lower down the course of the storm-water sewer. Considerable wear takes place on the ramp, which should, therefore, be constructed of blue Staffordshire or other hard bricks. The ramp should terminate in a stone block to resist the impact of the falling water, and the stones which may be brought with it, which would crack stoneware pipes if such were used. In cases where it is not convenient to arrange a sudden drop in the invert of the sewer as is required for a leap weir overflow, the excess flow of storm-water may be diverted by an arrangement similar to that shown in Fig. 18. 
[Footnote: PLATE IV] In this case calculations must be made to ascertain the depth at which the sewage will flow in the pipes at the time it is diluted to the required extent; this gives the level of the lip of the diverting plate. The ordinary sewage flow will pass steadily along the invert of the sewer under the plate until it rises up to that height, when the opening becomes a submerged orifice, and its discharging capacity becomes less than when the sewage was flowing freely. This restricts the flow of the sewage, and causes it to head up on the upper side of the overflow in an endeavour to force through the orifice the same quantity as is flowing in the sewer, but as it rises the velocity carries the upper layer of the water forward up the diverting plate and thence into the storm overflow drain A deep channel is desirable, so as to govern the direction of flow at the time the overflow is in action. The diverting trough is movable, and its height above the invert can be increased easily, as may be necessary from time to time. With this arrangement the storm-water can easily be screened before it is allowed to pass out by fixing an inclined screen in the position shown in Fig. 18. [Footnote: PLATE IV] It is loose, as is the trough, and both can be lifted out when it is desired to have access to the invert of the sewer. The screen is self- cleansing, as any floating matter which may be washed against it does not stop on it and reduce its discharging capacity, but is gradually drawn down by the flow of the sewage towards the diverting plate under which it will be carried. The heavier matter in the sewage which flows along the invert will pass under the plate and be carried through to the outfall works, instead of escaping by the overflow, and perhaps creating a nuisance at that point. CHAPTER IX. WIND AND WINDMILLS. In small sewerage schemes where pumping is necessary the amount expended in the wages of an attendant who must give his whole attention to the pumping station is so much in excess of the cost of power and the sum required for the repayment of the loan for the plant and buildings that it is desirable for the economical working of the scheme to curtail the wages bill as far as possible. If oil or gas engines are employed the man cannot be absent for many minutes together while the machinery is running, and when it is not running, as for instance during the night, he must be prepared to start the pumps at very short notice, should a heavy rain storm increase the flow in the sewers to such an extent that the pump well or storage tank becomes filled up. It is a simple matter to arrange floats whereby the pump may be connected to or disconnected from a running engine by means of a friction clutch, so that when the level of the sewage in the pump well reaches the highest point desired the pump may be started, and when it is lowered to a predetermined low water level the pump will stop; but it is impracticable to control the engine in the same way, so that although the floats are a useful accessory to the plant during the temporary absence of the man in charge they will not obviate his more or less constant attendance. An electric motor may be controlled by a float, but in many cases trouble is experienced with the switch gear, probably caused by its exposure to the damp air. 
In all cases an alarm float should be fixed, which would rise as the depth of the sewage in the pump well increased, until the top water level was reached, when the float would make an electrical contact and start a continuous ringing warning bell, which could be placed either at the pumping station or at the man's residence. On hearing the bell the man would know the pump well was full, and that he must immediately repair to the pumping-station and start the pumps, otherwise the building would be flooded. If compressed air is available a hooter could be fixed, which would be heard for a considerable distance from the station. [Illustration: PLATE IV. "DIVERTING PLATE" OVERFLOW. To face page 66.] It is apparent, therefore, that a pumping machine is wanted which will work continuously without attention, and will not waste money when there is nothing to pump. There are two sources of power in nature which might be harnessed to give this result--water and wind. The use of water on such a small scale is rarely economically practicable, as even if the water is available in the vicinity of the pumping-station, considerable work has generally to be executed at the point of supply, not only to store the water in sufficient bulk at such a level that it can be usefully employed, but also to lead it to the power-house, and then to provide for its escape after it has done its work. The power-house, with its turbines and other machinery, involves a comparatively large outlay, but if the pump can be directly driven from the turbines, so that the cost of attendance is reduced to a minimum, the system should Although the wind is always available in every district, it is more frequent and powerful on the coast than inland. The velocity of the wind is ever varying within wide limits, and although the records usually give the average hourly velocity, it is not constant even for one minute. Windmills of the modern type, consisting of a wheel composed of a number of short sails fixed to a steel framework upon a braced steel tower, have been used for many years for driving machinery on farms, and less frequently for pumping water for domestic use. In a very few cases it has been utilised for pumping sewage, but there is no reason why, under proper conditions, it should not be employed to a greater extent. The reliability of the wind for pumping purposes may be gauged from the figures in the following table, No. 11, which were observed in Birmingham, and comprise a period of ten years; they are arranged in order corresponding with the magnitude of the annual rainfall:-- TABLE No. 11. MEAN HOURLY VELOCITY OF WIND Reference | Rainfall |Number of days in year during which the mean | Number | for |hourly velocity of the wind was below | | year | 6 m.p.h. | 10 m.p.h. | 15 m.p.h. | 20 m.p.h. | ----------+----------+----------+-----------+-----------+-----------+ 1... 33.86 16 88 220 314 2... 29.12 15 120 260 334 3... 28.86 39 133 263 336 4... 26.56 36 126 247 323 5... 26.51 34 149 258 330 6... 26.02 34 132 262 333 7... 25.16 33 151 276 332 8... 22.67 46 155 272 329 9... 22.30 26 130 253 337 10... 21.94 37 133 276 330 ----------+----------+----------+-----------+-----------+-----------+ Average 31.4 131.7 250.7 330.8 It may be of interest to examine the monthly figures for the two years included in the foregoing table, which had the least and the most wind respectively, such figures being set out in the following table: TABLE No. 
12 MONTHLY ANALYSIS OF WIND Number of days in each month during which the mean velocity of the wind was respectively below the value mentioned hereunder. Month | Year of least wind (No. 8) | Year of most wind (No. *8*) | | 5 10 15 20 | 5 10 15 20 | | m.p.h. m.p.h. m.p.h. m.p.h. | m.p.h. m.p.h. m.p.h. m.p.h. | ------+-------+-----+-------+-------+-------+------+------+-------+ Jan. 5 11 23 27 3 6 15 23 Feb. 5 19 23 28 0 2 8 16 Mar. 5 10 20 23 0 1 11 18 April 6 16 23 28 1 7 16 26 May 1 14 24 30 3 11 24 31 June 1 12 22 26 1 10 21 27 July 8 18 29 31 1 12 25 29 Aug. 2 9 23 30 1 9 18 30 Sept. 1 13 25 30 1 12 24 28 Oct. 5 17 21 26 0 4 16 29 Nov. 6 11 20 26 3 7 19 28 Dec. 1 5 19 24 2 7 23 29 ------+-------+-----+-------+-------+-------+------+------+-------+ Total 46 155 272 329 16 88 220 314 During the year of least wind there were only eight separate occasions upon which the average hourly velocity of the wind was less than six miles per hour for two consecutive days, and on two occasions only was it less than six miles per hour on three consecutive days. It must be remembered, however, that this does not by any means imply that during such days the wind did not rise above six miles per hour, and the probability is that a mill which could be actuated by a six-mile wind would have been at work during part of the time. It will further be observed that the greatest differences between these two years occur in the figures relating to the light winds. The number of days upon which the mean hourly velocity of the wind exceeds twenty miles per hour remains fairly constant year after year. As the greatest difficulty in connection with pumping sewage is the influx of storm water in times of rain, it will be useful to notice the rainfall at those times when the wind is at a minimum. From the following figures (Table No. 13) it will be seen that, generally speaking, when there is very little wind there is very little rain Taking the ten years enumerated in Table No. 11, we find that out of the 314 days on which the wind averaged less than six miles per hour only forty-eight of them were wet, and then the rainfall only averaged .l3 in on those days. TABLE No. 13. WIND LESS THAN 6 M.P.H. -----------+-------------+------------+--------+---------------------------------- Ref. No. | Total No. | Days on | | Rainfall on each from Table | of days in | which no | Rainy | rainy day in No. 11. | each year. | rain fell. | days. | inches. -----------+-------------+------------+--------+---------------------------------- 1 | 16 | 14 | 2 | .63 and .245 2 | 15 | 13 | 2 | .02 and .02 3 | 39 | 34 | 5 | .025, .01, .26, .02 and .03 4 | 36 | 29 | 7 | / .02, .08, .135, .10, .345, .18 | | | | \ and .02 5 | 34 | 28 | 6 | .10, .43, .01, .07, .175 and .07 6 | 32 | 27 | 5 | .10, .11, .085, .04 and .135 7 | 33 | 21 | 2 | .415 and .70 8 | 46 | 40 | 6 | .07, .035, .02, .06, .13 and .02 9 | 26 | 20 | 6 | .145, .20, .33, .125, .015 & .075 10 | 37 | 30 | 7 | / .03, .23, .165, .02, .095 | | | | \ .045 and .02 -----------+-------------+------------+--------+---------------------------------- Total | 314 | 266 | 48 | Average rainfall on each of | | | | the 48 days = .13 in The greater the height of the tower which carries the mill the greater will be the amount of effective wind obtained to drive the mill, but at the same time there are practical considerations which limit the height. 
In America many towers are as much as 100 ft high, but ordinary workmen do not voluntarily climb to such a height, with the result that the mill is not properly oiled. About 40 ft is the usual height in this country, and 60 ft should be used as a maximum.

Mr. George Phelps, in a paper read by him in 1906 before the Association of Water Engineers, stated that it was safe to assume that on an average a fifteen miles per hour wind was available for eight hours per day, and from this he gave the following figures as representing the approximate average duty with a lift of 100 ft, including friction:--

TABLE No. 14. DUTY OF WINDMILLS.

Diameter of Wheel (ft): 10 12 14 16 18 20 25 30 35 40

The following table gives the result of tests carried out by the United States Department of Agriculture at Cheyenne, Wyo., with a 14 ft diameter windmill under differing wind velocities:--

TABLE No. 15. POWER OF 14-FT WINDMILL IN VARYING WINDS.

Velocity of Wind (miles per hour): 0-5 6-10 11-15 16-20 21-25 26-30 31-35

It will be apparent from the foregoing figures that practically the whole of the pumping for a small sewerage works may be done by means of a windmill, but it is undesirable to rely entirely upon such a system, even if two mills are erected so that the plant will be in duplicate, because there is always the possibility, although it may be remote, of a lengthened period of calm, when the sewage would accumulate; and, further, the Local Government Board would not approve the scheme unless it included an engine, driven by gas, oil, or other mechanical power, for emergencies. In the case of water supply the difficulty may be overcome by providing large storage capacity, but this cannot be done for sewage without creating an intolerable nuisance. In the latter case the storage should not be less than twelve hours' dry weather flow, nor more than twenty-four.

With a well-designed mill, as has already been indicated, the wind will, for the greater part of the year, be sufficient to lift the whole of the sewage and storm-water, but, if it is allowed to do so, the standby engine will deteriorate for want of use to such an extent that when urgently needed it will not be effective. It is, therefore, desirable that the attendant should run the engine at least once in every three days to keep it in working order. If it can be conveniently arranged, it is a good plan for the attendant to run the engine for a few minutes to entirely empty the pump well about six o'clock each evening. The bulk of the day's sewage will then have been delivered, and can be disposed of when it is fresh, while at the same time the whole storage capacity is available for the night flow, and any rainfall which may occur, thus reducing the chances of the man being called up during the night. About 22 per cent. of the total daily dry weather flow of sewage is delivered between 7 p.m. and 7 a.m.

The first cost of installing a small windmill is practically the same as for an equivalent gas or oil engine plant, so that the only advantage to be looked for will be in the maintenance, which in the case of a windmill is a very small matter, and the saving which may be obtained by the reduction of the amount of attendance necessary. Generally speaking, a mill 20 ft in diameter is the largest which should be used, as when this size is exceeded it will be found that the capital cost involved is incompatible with the value of the work done by the mill, as compared with that done by a modern internal combustion engine.
Mills smaller than 8 ft in diameter are rarely employed, and then only for small work, such as a 2 1/2 in pump and a 3-ft lift. The efficiency of a windmill, measured by the number of square feet of annular sail area, decreases with the size of the mill, the 8 ft, 10 ft, and 12 ft mills being the most efficient sizes. When the diameter exceeds 12 ft, the efficiency rapidly falls off, because the peripheral velocity remains constant for any particular velocity or pressure of the wind, and as every foot increase in the diameter of the wheel makes an increase of over 3 ft in the length of the circumference, the greater the diameter the less the number of revolutions in any given time; and consequently the kinetic flywheel action which is so valuable in the smaller sizes is to a great extent lost in the larger mills. Any type of pump can be used, but the greatest efficiency will be obtained by adopting a single acting pump with a short stroke, thus avoiding the liability, inherent in a long pump rod, to buckle under compression, and obviating the use of a large number of guides which absorb a large part of the power given out by the mill. Although some of the older mills in this country are of foreign origin, there are several British manufacturers turning out well-designed and strongly-built machines in large numbers. Fig. 19 represents the general appearance and Fig. 20 the details of the type of mill made by the well-known firm of Duke and Ockenden, of Ferry Wharf, Littlehampton, Sussex. This firm has erected over 400 windmills, which, after the test of time, have proved thoroughly efficient. From Fig. 20 it will be seen that the power applied by the wheel is transmitted through spur and pinion gearing of 2 1/2 ratio to a crank shaft, the gear wheel having internal annular teeth of the involute type, giving a greater number of teeth always in contact than is the case with external gears. This minimises wear, which is an important matter, as it is difficult to properly lubricate these appliances, and they are exposed to and have to work in all sorts of weather. [Illustration: Fig. 19.--General View of Modern Windmill.] [Illustration: Fig. 20.--Details of Windmill Manufactured by Messrs. Duke and Ockenden, Littlehampton.] It will be seen that the strain on the crank shaft is taken by a bent crank which disposes the load centrally on the casting, and avoids an overhanging crank disc, which has been an objectionable feature in some other types. The position of the crank shaft relative to the rocker pin holes is studied to give a slow upward motion to the rocker with a more rapid downward stroke, the difference in speed being most marked in the longest stroke, where it is most required. In order to transmit the circular internal motion a vertical connecting rod in compression is used, which permits of a simple method of changing the length of stroke by merely altering the pin in the rocking lever, the result being that the pump rod travels in a vertical line. The governing is entirely automatic. If the pressure on the wind wheel, which it will be seen is set off the centre line of the mill and tower, exceeds that found desirable--and this can be regulated by means of a spring on the fantail--the windmill automatically turns on the turn-table and presents an ellipse to the wind instead of a circular face, thus decreasing the area exposed to the wind gradually until the wheel reaches its final position, or is hauled out of gear, when the edges only are opposed to the full force of the wind.
The whole weight of the mill is taken upon a ball-bearing turn-table to facilitate instant "hunting" of the mill to the wind to enable it to take advantage of all changes of direction. The pump rod in the windmill tower is provided with a swivel coupling, enabling the mill head to turn completely round without altering the position of the rod.

CHAPTER X.

THE DESIGN OF SEA OUTFALLS.

The detail design of a sea outfall will depend upon the level of the conduit with reference to the present surface of the shore, whether the beach is being eroded or made up, and, if any part of the structure is to be constructed above the level of the shore, whether it is likely to be subject to serious attack by waves in times of heavy gales. If there is probability of the direction of currents being affected by the construction of a solid structure or of any serious scour being caused, the design must be prepared accordingly. While there are examples of outfalls constructed of glazed stoneware socketed pipes surrounded with concrete, as shown in Fig. 21, cast iron pipes are used in the majority of cases. There is considerable variation in the design of the joints for the latter class of pipes, some of which are shown in Figs. 22, 23, and 24. Spigot and socket joints (Fig. 22), with lead run in, or even with rod lead or any of the patent forms caulked in cold, are unsuitable for use below high-water mark on account of the water which will most probably be found in the trench. Pipes having plain turned and bored joints are liable to be displaced if exposed to the action of the waves, but if such joints are also flanged, as Fig. 24, or provided with lugs, as Fig. 23, great rigidity is obtained when they are bolted up. If a flange is formed all round the joint, it is necessary, in order that its thickness may be kept within reasonable limits, to provide bolts at frequent intervals. A gusset piece to stiffen the flange should be formed between each hole and the next, and the bolt holes should be arranged so that when the pipes are laid there will not be a hole at the bottom on the vertical axis of the pipe, as when the pipes are laid in a trench below water level it is not only difficult to insert the bolt, but almost impracticable to tighten up the nut afterwards. The pipes should be laid so that the two lowest bolt holes are placed equidistant on each side of the centre line, as shown in the end views of Figs. Nos. 23 and 24. [Illustration: Fig. 21.--Stoneware Pipe and Concrete Sea Outfall.] With lug pipes, fewer bolts are used, and the lugs are made specially strong to withstand the strain put upon them in bolting up the pipes. These pipes are easier and quicker to joint under water than are the flanged pipes, so that their use is a distinct advantage when the hours of working are limited. In some cases gun-metal bolts are used, as they resist the action of sea water better than steel, but they add considerably to the cost of the outfall sewer, and the principal advantage appears to be that they are possibly easier to remove than iron or steel ones would be if at any time it was required to take out any pipe which may have been accidentally broken. On the other hand, there is a liability of severe corrosion of the metal taking place by reason of galvanic action between the gun-metal and the iron, set up by the sea water in which they are immersed. If the pipes are not to be covered with concrete, and are thus exposed to the action of the sea water, particular care should be taken to see that the coating by Dr.
Angus Smith's process is perfectly applied to them. [Illustration: Fig. 22.--Spigot and Socket Joint for Cast Iron Pipes.] [Illustration: Fig. 23.--Lug Joint for Cast Iron Pipes.] [Illustration: Fig. 24.--Turned, Bored, and Flanged Joint for Cast Iron Pipes.] Steel pipes are, on the whole, not so suitable as cast iron. They are, of course, obtainable in long lengths and are easily jointed, but their lightness, as compared with cast iron pipes, is a disadvantage in a sea outfall, where the weight of the structure adds to its stability. The extra length of steel pipes necessitates a greater extent of trench being excavated at one time, which must be well timbered to prevent the sides falling in. On the other hand, cast iron pipes are more liable to fracture by heavy stones being thrown upon them by the waves, but this is a contingency which does not frequently occur in practice. According to Trautwine, the cast iron for pipes to resist sea water should be close-grained, hard, white metal. In such metal the small quantity of contained carbon is chemically combined with the iron, but in the darker or mottled metals it is mechanically combined, and such iron soon becomes soft, like plumbago, under the influence of sea water. Hard white iron has been proved to resist sea water for forty years without deterioration, whether it is continually under water or alternately wet and dry. Several types of sea outfalls are shown in Figs. 25 to 31.[1] In the example shown in Fig. 25 a solid rock bed occurred a short distance below the sand, which was excavated so as to allow the outfall to be constructed on the rock. Anchor bolts with clevis heads were fixed into the rock, and then, after a portion of the concrete was laid, iron bands, passing around the cast iron pipes, were fastened to the anchors. This construction would not be suitable below low-water mark. Fig. 26 represents the Aberdeen sea outfall, consisting of cast iron pipes 7 ft in diameter, which are embedded in a heavy concrete breakwater 24 ft in width, except at the extreme end, where it is 30 ft wide. The 4 in wrought iron rods are only used to the last few pipes, which were in 6 ft lengths instead of 9 ft, as were the remainder. Fig. 27 shows an inexpensive method of carrying small pipes, the slotted holes in the head of the pile allowing the pipes to be laid in a straight line, even if the pile is not driven quite true, and if the level of the latter is not correct it can be adjusted by inserting a packing piece. The Great Crosby outfall sewer into the Mersey is illustrated in Fig. 28. The piles are of greenheart, and were driven to a solid foundation. The 1 3/4 in sheeting was driven to support the sides of the excavation, and was left in when the concrete was laid. Light steel rails were laid under the sewer, in continuous lengths, on steel sleepers and to 2 ft gauge. The invert blocks were of concrete, and the pipes were made of the same material, but were reinforced with steel ribs. The Waterloo (near Liverpool) sea outfall is shown in Fig. 31. [Footnote 1: Plate V.] Piling may be necessary either to support the pipes or to keep them secure in their proper position, but where there is a substratum of rock the pipes may be anchored, as shown in Figs. 25 and 26. The nature of the piling to be adopted will vary according to the character of the beach. Figs. 27, 29, 30, and 31 show various types. With steel piling and bearers, as shown in Fig.
29, it is generally difficult to drive the piles with such accuracy that the bearers may be easily bolted up through the holes provided in the piles, and, if the holes are not drilled in the piles until after they are driven to their final position, considerable time is occupied, and perhaps a tide lost in the attempt to drill them below water. There is also the difficulty of tightening up the bolts when the sewer is partly below the surface of the shore, as shown. In both the types shown in Figs. 29 and 30 it is essential that the piles and the bearers should abut closely against the pipes; otherwise the shock of the waves will cause the pipes to move and hammer against the framing, and thus lead to failure of the structure. Piles similar to Fig. 31 can only be fixed in sand, as was the case at Waterloo, because they must be absolutely true to line and level, otherwise the pipes cannot be laid in the cradles. The method of fixing these piles is described by Mr. Ben Howarth (Minutes of Proceedings of Inst.C.E., Vol. CLXXV.) as follows:--"The pile was slung vertically into position from a four-legged derrick, two legs of which were on each side of the trench; a small winch attached to one pair of the legs lifted and lowered the pile, through a block and tackle. When the pile was ready to be sunk, a 2 in iron pipe was let down the centre, and coupled to a force-pump by means of a hose; a jet of water was then forced down this pipe, driving the sand and silt away from below the pile. The pile was then rotated backwards and forwards about a quarter of a turn, by men pulling on the arms; the pile, of course, sank by its own weight, the water-jet driving the sand up through the hollow centre and into the trench, and it was always kept vertical by the sling from the derrick. As soon as the pile was down to its final level the ground was filled in round the arms, and in this running sand the pile became perfectly fast and immovable a few minutes after the sinking was completed. The whole process, from the first slinging of the pile to the final setting, did not take more than 20 or 25 minutes." [Illustration: PLATE V. ROCK BED. Fig. 26--ABERDEEN SEA OUTFALL. Fig. 27--SMALL GREAT CROSBY SEA OUTFALL. Fig. 29--CAST IRON PIPE ON STEEL CAST AND BEARERS. Fig. 31--WATERLOO (LIVERPOOL) SEA OUTFALL.] (_To face page 80_.) Screw piles may be used if the ground is suitable, but, if it is boulder clay or similar material, the best results will probably be obtained by employing rolled steel joists as piles. CHAPTER XI. THE ACTION OF SEA WATER ON CEMENT. Questions are frequently raised in connection with sea-coast works as to whether any deleterious effect will result from using sea-water for mixing the concrete or from using sand and shingle off the beach; and, further, whether the concrete, after it is mixed, will withstand the action of the elements, exposed, as it will be, to air and sea-water, rain, hot sun, and frosts. Some concrete structures have failed by decay of the material, principally between high and low water mark, and in order to ascertain the probable causes and to learn the precautions which it is necessary to take, some elaborate experiments have been carried out. To appreciate the chemical actions which may occur, it will be as well to examine analyses of sea-water and cement. The water of the Irish Channel is composed of Sodium chloride.................... 2.6439 per cent. Magnesium chloride................. 0.3150 " " Magnesium sulphate................. 
0.2066 " " Calcium sulphate................... 0.1331 " " Potassium chloride................. 0.0746 " " Magnesium bromide.................. 0.0070 " " Calcium carbonate.................. 0.0047 " " Iron carbonate..................... 0.0005 " " Magnesium nitrate.................. 0.0002 " " Lithium chloride................... Traces. Ammonium chloride.................. Traces. Silica chloride.................... Traces. Water.............................. 96.6144 -------- 100.0000 An average analysis of a Thames cement may be taken to be as follows:-- Silica................................ 23.54 per cent. Insoluble residue (sand, clay, etc.)............................ 0.40 " Alumina and ferric oxide............... 9.86 " Lime.................................. 62.08 " Magnesia............................... 1.20 " Sulphuric anhydride.................... 1.08 " Carbonic anhydride and water........... 1.34 " Alkalies and loss on analysis.......... 0.50 " ----- 100.00 The following figures give the analysis of a sample of cement expressed in terms of the complex compounds that are found:-- Sodium silicate (Na2SiO3)........ 3.43 per cent. Calcium sulphate (CaSO4)......... 2.45 " Dicalcium silicate (Ca2SiO4).... 61.89 " Dicalcium aluminate (Ca2Al2O5).. 12.14 " Dicalcium ferrate (Ca2Fe2O5)..... 4.35 " Magnesium oxide (MgO)............ 0.97 " Calcium oxide (CaO)............. 14.22 " Loss on analysis, &c............. 0.55 " ----- 100.00 Dr. W. Michaelis, the German cement specialist, gave much consideration to this matter in 1906, and formed the opinion that the free lime in the Portland cement, or the lime freed in hardening, combines with the sulphuric acid of the sea-water, which causes the mortar or cement to expand, resulting in its destruction. He proposed to neutralise this action by adding to the mortar materials rich in silica, such as trass, which would combine with the lime. Mr. J. M. O'Hara, of the Southern Pacific Laboratory, San Francisco, Cal., made a series of tests with sets of pats 4 in diameter and 1/2 in thick at the centre, tapering to a thin edge on the circumference, and also with briquettes for ascertaining the tensile strength, all of which were placed in water twenty-four hours after mixing. At first some of the pats were immersed in a "five-strength solution" of sea-water having a chemical analysis as follows:-- Sodium chloride.................... 11.5 per cent. Magnesium chloride................. 1.4 " " Magnesium sulphate................. 0.9 " " Calcium sulphate................... 0.6 " " Water.............................. 85.6 " " 100.0 This strong solution was employed in order that the probable effect of immersing the cement in sea-water might be ascertained very much quicker than could be done by observing samples actually placed in ordinary sea-water, and it is worthy of note that the various mixtures which failed in this accelerated test also subsequently failed in ordinary sea-water within a period of twelve months. Strong solutions were next made of the individual salts contained in sea-water, and pats were immersed as before, when it was found that the magnesium sulphate present in the water acted upon the calcium hydrate in the cement, forming calcium sulphate, and leaving the magnesium hydrate free. The calcium sulphate combines with the alumina of the cement, forming calcium sulpho-aluminate, which causes swelling and cracking of the concrete, and in cements containing a high proportion of alumina, leads to total destruction of all cohesion. 
The magnesium hydrate has a tendency to fill the pores of the concrete so as to make it more impervious to the destructive action of the sea-water, and disintegration may be retarded or checked. A high proportion of magnesia has been found in samples of cement which have failed under the action of sea water, but the disastrous result cannot be attributed to this substance having been in excess in the original cement, as it was probably due to the deposition of the magnesia salts from the sea-water; although, if magnesia were present in the cement in large quantities, it would cause it to expand and crack, still with the small proportion in which it occurs in ordinary cements it is probably inert. The setting of cement under the action of water always frees a portion of the lime which was combined, but over twice as much is freed when the cement sets in sea-water as in fresh water. The setting qualities of cement are due to the iron and alumina combined with calcium, so that for sea-coast work it is desirable for the alumina to be replaced by iron as far as possible. The final hardening and strength of cement is due in a great degree to the tri-calcium silicate (3CaO, SiO2), which is soluble by the sodium chloride found in sea-water, so that the resultant effect of the action of these two compounds is to enable the sea-water to gradually penetrate the mortar and rot the concrete. The concrete is softened, when there is an abnormal amount of sulphuric acid present, as a result of the reaction of the sulphuric acid of the salt dissolved by the water upon a part of the lime in the cement. The ferric oxide of the cement is unaffected by sea-water. The neat cement briquette tests showed that those immersed in sea-water attained a high degree of strength at a much quicker rate than those immersed in fresh water, but the 1 to 3 cement and sand briquette tests gave an opposite result. At the end of twelve months, however, practically all the cements set in fresh water showed greater strength than those set in sea-water. When briquettes which have been immersed in fresh water and have thoroughly hardened are broken, the cores are found to be quite dry, and if briquettes immersed in sea-water show a similar dryness there need be no hesitation in using the cement; but if, on the other hand, the briquette shows that the sea-water has permeated to the interior, the cement will lose strength by rotting until it has no cohesion at all. It must be remembered that it is only necessary for the water to penetrate to a depth of 1/2 in on each side of a briquette to render it damp all through, whereas in practical work, if the water only penetrated to the same depth, very little ill-effect would be experienced, although by successive removals of a skin 1/2 in deep the structure might in time be imperilled. The average strength in pounds per square inch of six different well-known brands of cement tested by Mr. O'Hara was as follows:--

TABLE No. 16. EFFECT OF SEA WATER ON STRENGTH OF CEMENT.

              |       Neat cement         |    1 cement to 3 sand
              |   set in       set in     |   set in       set in
              |  Sea Water   Fresh Water  |  Sea Water   Fresh Water
   7 days     |     682          548      |     214          224
  28 days     |     836          643      |     293          319
   2 months   |     913          668      |     313          359
   3 months   |     861          667      |     301          387
   6 months   |     634          654      |     309          428
   9 months   |     542          687      |     317          417
  12 months   |     372          706      |     325          432

Some tests were also made by Messrs. Westinghouse, Church, Kerr, and Co., of New York, to ascertain the effect of sea-water on the tensile strength of cement mortar. Three sets of briquettes were made, having a minimum section of one square inch.
The first were mixed with fresh water and kept in fresh water; the second were mixed with fresh water, but kept immersed in pans containing salt water; while the third were mixed with sea-water and kept in sea-water. In the experiments the proportion of cement and sand varied from 1 to 1 to 1 to 6. The results of the tests on the stronger mixtures are shown in Fig. 32. The Scandinavian Portland cement manufacturers have in hand tests on cubes of cement mortar and cement concrete, which were started in 1896, and are to extend over a period of twenty years. A report upon the tests of the first ten years was submitted at the end of 1909 to the International Association of Testing Materials at Copenhagen, and particulars of them are published in "Cement and Sea-Water," by A. Poulsen (chairman of the committee), J. Jorsen and Co., Copenhagen, 1909, price 3s. [Illustration: FIG. 32.--Tests of the Tensile Strength of Cement and Sand Briquettes, Showing the Effect of Sea Water.] Cements from representative firms in different countries were obtained for use in making the blocks, which had coloured glass beads and coloured crushed glass incorporated to facilitate identification. Each block of concrete was provided with a number plate and a lifting bolt, and was kept moist for one month before being placed in position. The sand and gravel were obtained from the beach on the west coast of Jutland. The mortar blocks were mixed in the proportion of 1 to 1, 1 to 2, and 1 to 3, and were placed in various positions, some between high and low water, so as to be exposed twice in every twenty- four hours, and others below low water, so as to be always submerged. The blocks were also deposited under these conditions in various localities, the mortar ones being placed at Esbjerb at the south of Denmark, at Vardo in the Arctic Ocean, and at Degerhamm on the Baltic, where the water is only one-seventh as salt as the North Sea, while the concrete blocks were built up in the form of a breakwater or groyne at Thyboron on the west coast of Jutland. At intervals of three, six, and twelve months, and two, four, six, ten, and twenty years, some of the blocks have, or will be, taken up and subjected to chemical tests, the material being also examined to ascertain the effect of exposure upon them. The blocks tested at intervals of less than one year after being placed in position gave very variable results, and the tests were not of much value. The mortar blocks between high and low water mark of the Arctic Ocean at Vardo suffered the worst, and only those made with the strongest mixture of cement, 1 to 1, withstood the severe frost experienced. The best results were obtained when the mortar was made compact, as such a mixture only allowed diffusion to take place so slowly that its effect was negligible; but when, on the other hand, the mortar was loose, the salts rapidly penetrated to the interior of the mass, where chemical changes took place, and caused it to disintegrate. The concrete blocks made with 1 to 3 mortar disintegrated in nearly every case, while the stronger ones remained in fairly good condition. The best results were given by concrete containing an excess of very fine sand. Mixing very finely-ground silica, or trass, with the cement proved an advantage where a weak mixture was employed, but in the other cases no benefit was observed. 
The Association of German Portland Cement Manufacturers carried out a series of tests, extending over ten years, at their testing station at Gross Lichterfeld, near Berlin, the results of which were tabulated by Mr. C. Schneider and Professor Gary. In these tests the mortar blocks were made 3 in cube and the concrete blocks l2 in cube; they were deposited in two tanks, one containing fresh water and the other sea-water, so that the effect under both conditions might be noted. In addition, concrete blocks were made, allowed to remain in moist sand for three months, and were then placed in the form of a groyne in the sea between high and low-water mark. Some of the blocks were allowed to harden for twelve months in sand before being placed, and these gave better results than the others. Two brands of German Portland cement were used in these tests, one, from which the best results were obtained, containing 65.9 per cent. of lime, and the other 62.0 per cent. of lime, together with a high percentage of alumina. In this case, also, the addition of finely-ground silica, or trass, improved the resisting power of blocks made with poor mortars, but did not have any appreciable effect on the stronger mixtures. Professor M. Moller, of Brunswick, Germany, reported to the International Association for Testing Materials, at the Copenhagen Congress previously referred to, the result of his tests on a small hollow, trapezium shape, reinforced concrete structure, which was erected in the North Sea, the interior being filled with sandy mud, which would be easily removable by flowing water. The sides were 7 cm. thick, formed of cement concrete 1:2 1/2:2, moulded elsewhere, and placed in the structure forty days after they were made, while the top and bottom were 5 cm. thick, and consisted of concrete 1:3:3, moulded _in situ_ and covered by the tide within twenty-four hours of being laid. The concrete moulded _in situ_ hardened a little at first, and then became soft when damp, and friable when dry, and white efflorescence appeared on the surface. In a short time the waves broke this concrete away, and exposed the reinforcement, which rusted and disappeared, with the result that in less than four years holes were made right through the concrete. The sides, which were formed of slabs allowed to harden before being placed in the structure, were unaffected except for a slight roughening of the surface after being exposed alternately to the sea and air for a period, of thirteen years. Professor Moller referred also to several cases which had come under his notice where cement mortar or concrete became soft and showed white efflorescence when it had been brought into contact with sea-water shortly after being made. In experiments in Atlantic City samples of dry cement in powder form were put with sea-water in a vessel which was rapidly rotated for a short time, after which the cement and the sea- water were analysed, and it was found that the sea-water had taken up the lime from the cement, and the cement had absorbed the magnesia salts from the sea-water. Some tests were carried out in 1908-9 at the Navy Yard, Charlestown, Mass., by the Aberthaw Construction Company of Boston, in conjunction with the Navy Department. The cement concrete was placed so that the lower portions of the surfaces of the specimens were always below water, the upper portions were always exposed to the air, and the middle portions were alternately exposed to each. 
Although the specimens were exposed to several months of winter frost as well as to the heat of the summer, no change was visible in any part of the concrete at the end of six months. Mons. R. Feret, Chief of the Laboratory of Bridges and Roads, Boulogne-sur-Mer, France, has given expression to the following opinions:-- 1. No cement or other hydraulic product has yet been found which presents absolute security against the decomposing action of sea-water. 2. The most injurious compound of sea-water is the acid of the dissolved sulphates, sulphuric acid being the principal agent in the decomposition of cement. 3. Portland cement for sea-water should be low in aluminium and as low as possible in lime. 4. Puzzolanic material is a valuable addition to cement for sea-water construction, 5. As little gypsum as possible should be added for regulating the time of setting to cements which are to be used in sea- water. 6. Sand containing a large proportion of fine grains must never be used in concrete or mortar for sea-water construction. 7. The proportions of the cement and aggregate for sea-water construction must be such as will produce a dense and impervious concrete. On the whole, sea-water has very little chemical effect on good Portland cements, such as are now easily obtainable, and, provided the proportion of aluminates is not too high, the varying composition of the several well-known commercial cements is of little moment. For this reason tests on blocks immersed in still salt water are of very little use in determining the probable behaviour of concrete when exposed to damage by physical and mechanical means, such as occurs in practical work. The destruction of concrete works on the sea coast is due to the alternate exposure to air and water, frost, and heat, and takes the form of cracking or scaling, the latter being the most usual when severe frosts are experienced. When concrete blocks are employed in the construction of works, they should be made as long as possible before they are required to be built in the structure, and allowed to harden in moist sand, or, if this is impracticable, the blocks should be kept in the air and thoroughly wetted each day. On placing cement or concrete blocks in sea water a white precipitate is formed on their surfaces, which shows that there is some slight chemical action, but if the mixture is dense this action is restricted to the outside, and does not harm the block. Cement mixed with sea water takes longer to harden than if mixed with fresh water, the time varying in proportion to the amount of salinity in the water. Sand and gravel from the beach, even though dry, have their surfaces covered with saline matters, which retard the setting of the cement, even when fresh water is used, as they become mixed with such water, and thus permeate the whole mass. If sea water and aggregate from the shore are used, care must be taken to see that no decaying seaweed or other organic matter is mixed with it, as every such piece will cause a weak place in the concrete. If loam, clay, or other earthy matters from the cliffs have fallen down on to the beach, the shingle must be washed before it is used in concrete. Exposure to damp air, such as is unavoidable on the coast, considerably retards the setting of cement, so that it is desirable that it should not be further retarded by the addition of gypsum, or calcium sulphate, especially if it is to be used with sea water or sea-washed sand and gravel. 
The percentage of gypsum found in cement is, however, generally considerably below the maximum allowed by the British Standard Specification, viz., 2 per cent., and is so small that, for practical purposes, it makes very little difference in sea coast work, although of course, within reasonable limits, the quicker the cement sets the better. When cement is used to joint stoneware pipe sewers near the coast, allowance must be made for this retardation of the setting, and any internal water tests which may be specified to be applied must not be made until a longer period has elapsed after the laying of the pipes than would otherwise be necessary. A high proportion of aluminates tends to cause disintegration when exposed to sea water. The most appreciable change which takes place in a good sound cement after exposure to the sea is an increase in the chlorides, while a slight increase in the magnesia and the sulphates also takes place, so that the proportion of sulphates and magnesia in the cement should be kept fairly low. Hydraulic lime exposed to the sea rapidly loses the lime and takes up magnesia and sulphates. To summarise the information upon this point, it appears that it is better to use fresh water for all purposes, but if, for the sake of economy, saline matters are introduced into the concrete, either by using sea water for mixing or by using sand and shingle from the beach, the principal effect will be to delay the time of setting to some extent, but the ultimate strength of the concrete will probably not be seriously affected. When the concrete is placed in position the portion most liable to be destroyed is that between high and low water mark, which is alternately exposed to the action of the sea and the air, but if the concrete has a well-graded aggregate, is densely mixed, and contains not more than two parts of sand to one part of cement, no ill-effect need be anticipated. CHAPTER XII DIVING. The engineer is not directly concerned with the various methods employed in constructing a sea outfall, such matters being left to the discretion of the contractor. It may, however, be briefly stated that the work frequently involves the erection of temporary steel gantries, which must be very carefully designed and solidly built if they are to escape destruction by the heavy seas. It is amazing to observe the ease with which a rough sea will twist into most fantastic shapes steel joists 10 in by 8in, or even larger in size. Any extra cost incurred in strengthening the gantries is well repaid if it avoids damage, because otherwise there is not only the expense of rebuilding the structure to be faced, but the construction of the work will be delayed possibly into another season. In order to ensure that the works below water are constructed in a substantial manner, it is absolutely necessary that the resident engineer, at least, should be able to don a diving dress and inspect the work personally. The particular points to which attention must be given include the proper laying of the pipes, so that the spigot of one is forced home into the socket of the other, the provision and tightening up of all the bolts required to be fixed, the proper driving of the piles and fixing the bracing, the dredging of a clear space in the bed of the sea in front of the outlet pipe, and other matters dependent upon the special form of construction adopted. 
If a plug is inserted in the open end of the pipes as laid, the rising of the tide will press on the plugged end and be of considerable assistance in pushing the pipes home; it will therefore be necessary to re-examine the joints to see if the bolts can be tightened up any more. Messrs. Siebe, Gorman, and Co., the well-known makers of submarine appliances, have fitted up at their works at Westminster Bridge-road, London, S.E., an experimental tank, in which engineers may make a few preliminary descents and be instructed in the art of diving; and it is distinctly more advantageous to acquire the knowledge in this way from experts than to depend solely upon the guidance of the divers engaged upon the work which the engineer desires to inspect. Only a nominal charge of one guinea for two descents is made, which sum, less out-of-pocket expenses, is remitted to the Benevolent Fund of the Institution of Civil Engineers. It is generally desirable that a complete outfit, including the air pump, should be provided for the sole use of the resident engineer, and special men should be told off to assist him in dressing and to attend to his wants while he is below water. He is then able to inspect the work while it is actually in progress, and he will not hinder or delay the divers. It is a wise precaution to be medically examined before undertaking diving work, although, with the short time which will generally be spent below water, and the shallow depths usual in this class of work, there is practically no danger; but, generally speaking, a diver should be of good physique, not unduly stout, free from heart or lung trouble and varicose veins, and should not drink or smoke to excess. It is necessary, however, to have acquaintance with the physical principles involved, and to know what to do in emergencies. A considerable amount of useful information is given by Mr. R. H. Davis in his "Diving Manual" (Siebe, Gorman, and Co., 5s.), from which many of the following notes are taken. A diving dress and equipment weighs about 175 lb, including a 40 lb lead weight carried by the diver on his chest, a similar weight on his back, and 16 lb of lead on each boot. Upon entering the water the superfluous air in the dress is driven out through the outlet valve in the helmet by the pressure of the water on the legs and body, and by the time the top of the diver's head reaches the surface his breathing becomes laboured, because the pressure of air in his lungs equals the atmospheric pressure, while the pressure upon his chest and abdomen is greater by the weight of the water thereon. He is thus breathing against a pressure, and if he has to breathe deeply, as during exertion, the effect becomes serious; so that the first thing he has to learn is to adjust the pressure of the spring on the outlet valve, so that the amount of air pumped in under pressure and retained in the diving dress counterbalances the pressure of the water outside, which is equal to a little under 1/2 lb per square inch for every foot in depth. If the diver be 6 ft tall, and stands in an upright position, the pressure on his helmet will be about 3 lb per square inch less than on his boots. The breathing is easier if the dress is kept inflated down to the abdomen, but in this case there is danger of the diver being capsized and floating feet upwards, in which position he is helpless, and the air cannot escape by the outlet valve.
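As a quick numerical check of the pressure figures just quoted, the short Python sketch below computes the water pressure at a given depth and the difference between boots and helmet for an upright diver. It is merely an illustration: the figure of 0.445 lb per square inch per foot of sea water (the "little under 1/2 lb" of the text), the working depth chosen, and the function name are assumptions of the sketch.

# Rough check of the diving-dress pressures described above.
PSI_PER_FOOT = 0.445   # assumed approximation for sea water, lb per square inch per foot

def water_pressure_psi(depth_ft):
    """Gauge pressure of the sea, in lb per square inch, at the given depth."""
    return PSI_PER_FOOT * depth_ft

print(round(water_pressure_psi(40), 1))   # pressure at an arbitrary 40 ft working depth, about 17.8 lb
print(round(water_pressure_psi(6), 1))    # boots-to-helmet difference for a 6 ft diver, about 2.7 lb ("about 3 lb" in the text)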
Air is supplied to the diver under pressure by an air pump through a flexible tube called the air pipe; and a light rope called a life line, which is used for signalling, connects the man with the surface. The descent is made by a 3 in "shot-rope," which has a heavy sinker weighing about 50 lb attached, and is previously lowered to the bottom. A 1-1/4 in rope about 15 ft long, called a "distance-line," is attached to the shot-rope about 3 ft above the sinker, and on reaching the bottom the diver takes this line with him to enable him to find his way back to the shot-rope, and thus reach the surface comfortably, instead of being hauled up by his life line. The diver must be careful in his movements that he does not fall so as suddenly to increase the depth of water in which he is immersed, because at the normal higher level the air pressure in the dress will be properly balanced against the water pressure; but if he falls, say 30 ft, the pressure of the water on his body will be increased by about 15 lb per square inch, and as the air pump cannot immediately increase the pressure in the dress to a corresponding extent, the man's body in the unresisting dress will be forced into the rigid helmet, and he will certainly be severely injured, and perhaps even killed. When descending under water the air pressure in the dress is increased, and acts upon the outside of the drum of the ear, causing pain, until the air passing through the nose and up the Eustachian tube inside the head reaches the back of the drum and balances the pressure. This may be delayed, or prevented, if the tube is partially stopped up by reason of a cold or other cause, but the balance can generally be brought about if the diver pauses in his descent and swallows his saliva; or blocks up his nose as much as possible by pressing it against the front of the helmet, closing the mouth and then making a strong effort at expiration so as to produce temporarily an extra pressure inside the throat, and so blow open the tubes; or by yawning or going through the motions thereof. If this does not act he must come up again. Provided his ears are "open," and the air pumps can keep the pressure of air equal to that of the depth of the water in which the diver may be, there is nothing to limit the rate of his descent. Now in breathing, carbonic acid gas is exhaled, the quantity varying in accordance with the amount of work done, from .014 cubic feet per minute when at rest to a maximum of about .045, and this gas must be removed by dilution with fresh air so as not to inconvenience the diver. This is not a matter of much difficulty, as the proportion in fresh air is about .03 per cent., and no effect is felt until the proportion is increased to about 0.3 per cent., which causes one to breathe twice as deeply as usual; at 0.6 per cent. there is severe panting; and at a little over 1.0 per cent. unconsciousness occurs. The effect of the carbonic acid on the diver, however, increases the deeper he descends; and at a depth of 33 ft 1 per cent. of carbonic acid will have the same effect as 2 per cent. at the surface. If the diver feels bad while under water he should signal for more air, stop moving about, and rest quietly for a minute or two, when the fresh air will revive him.
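The rule that carbonic acid is twice as harmful at 33 ft follows from the fact that its partial pressure rises with the absolute pressure, one atmosphere being added for roughly every 33 ft of sea water. The Python sketch below merely illustrates that relation; the function names, and the assumption of a simple linear pressure law, are mine.

# Illustrative: equivalent surface concentration of carbonic acid at depth.
def absolute_pressure_atm(depth_ft):
    # roughly one additional atmosphere for every 33 ft of sea water (assumed)
    return 1.0 + depth_ft / 33.0

def equivalent_surface_percentage(co2_percent, depth_ft):
    """Surface concentration giving the same partial pressure of carbonic acid."""
    return co2_percent * absolute_pressure_atm(depth_ft)

print(equivalent_surface_percentage(1.0, 33))   # 2.0, as stated in the text
print(equivalent_surface_percentage(1.0, 66))   # 3.0 at twice that depth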
The volume of air required by the diver for respiration is about 1.5 cubic feet per minute, and there is a non-return valve on the air inlet, so that in the event of the air pipe being broken, or the pump failing, the air would not escape backwards, but by closing the outlet valve the diver could retain sufficient air to enable him to reach the surface. During the time that a diver is under pressure nitrogen gas from the air is absorbed by his blood and the tissues of his body. This does not inconvenience him at the time, but when he rises the gas is given off, so that if he has been at a great depth for some considerable time, and comes up quickly, bubbles form in the blood and fill the right side of the heart with air, causing death in a few minutes. In less sudden cases the bubbles form in the brain or spinal cord, causing paralysis of the legs, which is called divers' palsy, or the only trouble which is experienced may be severe pains in the joints and muscles. It is necessary, therefore, that he shall come up by stages so as to decompress himself gradually and avoid danger. The blood can hold about twice as much gas in solution as an equal quantity of water, and when the diver is working in shallow depths, up to, say, 30 ft, the amount of nitrogen absorbed is so small that he can stop down as long as is necessary for the purposes of the work, and can come up to the surface as quickly as he likes without any danger. At greater depths approximately the first half of the upward journey may be done in one stage, and the remainder done by degrees, the longest rest being made at a few feet below the surface. The following table shows the time limits in accordance with the latest British Admiralty practice; the time under the water being that from leaving the surface to the beginning of the ascent:--

TABLE No. 17.--DIVING DATA.

                                         Stoppages in minutes       Total time
 Depth in feet.   Time under water.      at different depths.       for ascent
                                        At 20 ft.    At 10 ft.      in minutes.
 Up to 36         No limit                  -            -           0 to 1
 36 to 42         Up to 3 hours             -            -           1 to 1-1/2
                  Over 3 hours              -            5           6
 42 to 48         Up to 1 hour              -            -           1-1/2
                  1 to 3 hours              -            5           6-1/2
                  Over 3 hours              -           10           11-1/2
 48 to 54         Up to 1/2 hour            -            -           2
                  1/2 to 1-1/2 hour         -            5           7
                  1-1/2 to 3 hours          -           10           12
                  Over 3 hours              -           20           22
 54 to 60         Up to 20 minutes          -            -           2
                  20 to 45 minutes          -            5           7
                  3/4 to 1-1/2 hour         -           10           12
                  1-1/2 to 3 hours          5           15           22
                  Over 3 hours             10           20           32

When preparing to ascend the diver must tighten the air valve in his helmet to increase his buoyancy; if the valve is closed too much to allow the excess air to escape, his ascent will at first be gradual, but, as the pressure of the water reduces, the air in the dress expands, making it so stiff that he cannot move his arms to reach the valve, and he is blown up, with ever-increasing velocity, to the surface. While ascending he should exercise his muscles freely during the period of waiting at each stopping place, so as to increase the circulation, and consequently the rate at which the absorbed nitrogen is given off.
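Table No. 17 can be followed mechanically; the Python sketch below encodes its rows and looks up the stoppages for a given depth and time under water. It is only an illustration of how the table is read, with an assumed data structure, and is in no sense a substitute for the Admiralty table itself.

# Look-up of the stoppage table above. Times are in minutes; each row gives
# (maximum time under water, stop at 20 ft, stop at 10 ft, total time for ascent).
STOPPAGES = {
    36: [(None, 0, 0, 1)],
    42: [(180, 0, 0, 1.5), (None, 0, 5, 6)],
    48: [(60, 0, 0, 1.5), (180, 0, 5, 6.5), (None, 0, 10, 11.5)],
    54: [(30, 0, 0, 2), (90, 0, 5, 7), (180, 0, 10, 12), (None, 0, 20, 22)],
    60: [(20, 0, 0, 2), (45, 0, 5, 7), (90, 0, 10, 12), (180, 5, 15, 22), (None, 10, 20, 32)],
}

def stoppages(depth_ft, minutes_under_water):
    """Return (stop at 20 ft, stop at 10 ft, total ascent) in minutes."""
    for max_depth in sorted(STOPPAGES):
        if depth_ft <= max_depth:
            for max_minutes, stop20, stop10, total in STOPPAGES[max_depth]:
                if max_minutes is None or minutes_under_water <= max_minutes:
                    return stop20, stop10, total
    raise ValueError("deeper than the table provides for")

print(stoppages(50, 120))   # (0, 10, 12): 48-54 ft band, 1-1/2 to 3 hours under water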
During the progress of the works the location of the sea outfall will be clearly indicated by temporary features visible by day and lighted by night; but when completed its position must be marked in a permanent manner. The extreme end of the outfall should be indicated by a can buoy similar to that shown in Fig. 33, made by Messrs. Brown, Lenox, and Co. (Limited), Millwall, London, E., which costs about £75, including a 20 cwt. sinker and 10 fathoms of chain, and is approved for the purpose. [Illustration: FIG. 33.--CAN BUOY FOR MARKING OUTFALL SEWER.] It is not desirable to fasten the chain to any part of the outfall instead of using a sinker, because at low water the slack of the chain may become entangled, which, by preventing the buoy from rising with the tide, will lead to damage; but a special pile may be driven for the purpose of securing the buoy, at such a distance from the outlet that the chain will not foul it. The buoy should be painted with alternate vertical stripes of yellow and green, and lettered "Sewer Outfall" in white letters 12 in deep. It must be remembered that it is necessary for the plans and sections of outfall sewers and other obstructions proposed to be placed in tidal waters to be submitted to the Harbour and Fisheries Department of the Board of Trade for their approval, and no subsequent alteration in the works may be made without their consent being first obtained.

CHAPTER XIII.

THE DISCHARGE OF SEA OUTFALL SEWERS.

The head which governs the discharge of a sea outfall pipe is measured from the surface of the sewage in the tank, sewer, or reservoir at the head of the outfall to the level of the sea. As the sewage is run off the level of its surface is lowered, and at the same time the level of the sea is constantly varying as the tide rises and falls, so that the head is a variable factor, and consequently the rate of discharge varies. A curve of discharge may be plotted from calculations according to these varying conditions, but it is not necessary; and all requirements will be met if the discharges under certain stated conditions are ascertained. The most important condition, because it is the worst, is that when the level of the sea is at high water of equinoctial spring tides and the reservoir is practically empty. Sea water has a specific gravity of 1.027, and is usually taken as weighing 64.14 lb per cubic foot, while sewage may be taken as weighing 62.45 lb per cubic foot, which is the weight of fresh water at its maximum density. Now the ratio of weight between sewage and sea water is as 1 to 1.027, so that a column of sea water 12 inches in height requires a column of fresh water 12.324, or say 12-1/3 in, to balance it; therefore, in order to ascertain the effective head producing discharge it will be necessary to add on 1/3 in for every foot in depth of the sea water over the centre of the outlet. The sea outfall should be of such diameter that the contents of the reservoir can be emptied in the specified time--say, three hours--while the pumps are working to their greatest power in pouring sewage into the reservoir during the whole of the period; so that when the valves are closed the reservoir will be empty, and its entire capacity available for storage until the valves are again opened. To take a concrete example, assume that the reservoir and outfall are constructed as shown in Fig. 34, and that it is required to know the diameter of outfall pipe when the reservoir holds 1,000,000 gallons and the whole of the pumps together, including any that may be laid down to cope with any increase of the population in the future, can deliver 600,000 gallons per hour. When the reservoir is full the top water level will be 43.00 O.D., but in order to have a margin for contingencies and to allow for the loss in head due to entry of sewage into the pipe, for friction in passing around bends, and for a slight reduction in discharging capacity of the pipe by reason of incrustation, it will be desirable to take the reservoir as full, but assume that the sewage is at the level 31.00. The head of water in the sea measured above the centre of the pipe will be 21 ft, so that 21 x 1/3, or 7 in--say, 0.58 ft--must be added to the height of high water, thus reducing the effective head from 31.00 - 10.00 = 21.00 to 20.42 ft.
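The density correction just described is easily expressed in a few lines. The Python sketch below is illustrative only: the function name is mine, and it applies the exact specific gravity of 1.027 rather than the rounded 1/3 in per foot used in the text, so it reproduces the worked figure to within a hundredth of a foot.

# Effective head on a sea outfall, allowing for the greater density of sea water.
SG_SEA_WATER = 1.027   # 64.14 lb per cubic foot against 62.45 for sewage (fresh water)

def effective_head_ft(sewage_level_ft, high_water_level_ft, sea_depth_over_outlet_ft):
    """Head of sewage available for discharge, corrected for the density of sea water."""
    gross_head = sewage_level_ft - high_water_level_ft
    # each foot of sea water balances about 12.32 in of sewage, i.e. roughly 1/3 in extra per foot
    correction = sea_depth_over_outlet_ft * (SG_SEA_WATER - 1.0)
    return gross_head - correction

# Figures from the worked example: sewage at 31.00 O.D., high water at 10.00 O.D.,
# and 21 ft of sea water over the centre of the outlet.
print(round(effective_head_ft(31.00, 10.00, 21.0), 2))   # about 20.43 (the text's rounding gives 20.42)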
The quantity to be discharged will be (1,000,000 + (3 x 600,000)) / 3 = 933,333 gallons per hour = 15,555 gallons per minute, or, taking 6.23 gallons as equal to 1 cubic foot, the quantity equals 2,497 cubic feet per minute. Assume the required diameter to be 30 in; then, by Hawksley's formula, the head necessary to produce velocity = (gallons per minute)^2 / (215 x (diameter in inches)^4) = 15,555^2 / (215 x 30^4) = 1.389 ft, and the head to overcome friction = ((gallons per minute)^2 x length in yards) / (240 x (diameter in inches)^5) = (15,555^2 x 2,042) / (240 x 30^5) = 84.719 ft. Then 1.389 + 84.719 = 86.108--say, 86.11 ft; but the actual head is 20.42 ft, and the flow varies approximately as the square root of the head, so that the true flow will be about 15,555 x sqrt(20.42 / 86.11) = 7,574.8--say, 7,575 gallons. [Illustration: FIG. 34.--DIAGRAM ILLUSTRATING CALCULATIONS FOR THE DISCHARGE OF SEA OUTFALLS.] But a flow of 15,555 gallons per minute is required, and as the flow varies approximately as the fifth power of the diameter, the requisite diameter will be about the fifth root of (30^5 x 15,555 / 7,575) = 34.64 inches. Now assume a diameter of 40 in, and repeat the calculations. Then the head necessary to produce velocity = 15,555^2 / (215 x 40^4) = 0.044 ft, and the head to overcome friction = (15,555^2 x 2,042) / (240 x 40^5) = 20.104 ft. Then 0.044 + 20.104 = 20.148, say 20.15 ft, and the true flow will therefore be about 15,555 x sqrt(20.42 / 20.15) = 15,659 gallons, and the requisite diameter about the fifth root of (40^5 x 15,555 / 15,659) = 39.94 inches. When, therefore, a 30 in diameter pipe is assumed, a diameter of 34.64 in is shown to be required, and when 40 in is assumed 39.94 in is indicated. Let _a_ = difference between the two assumed diameters, _b_ = increase found over the lower diameter, _c_ = decrease found under the greater diameter, and _d_ = lower assumed diameter. Then true diameter = d + ab/(b + c) = 30 + (10 x 4.64)/(4.64 + 0.06) = 30 + 46.4/4.7 = 39.872, or, say, 40 in, which equals the required diameter.
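The trial-and-error process just worked through by hand can be carried out automatically. The Python sketch below is merely illustrative; it uses the constants 215 and 240 and the worked figures exactly as quoted above, and the simple fixed-point loop (rather than the interpolation between two trials) is an assumption of the sketch.

# Trial-and-error sizing by Hawksley's formula, as in the worked example above.
import math

GALLONS_PER_MIN = 15_555       # required discharge
LENGTH_YARDS = 2_042           # length of outfall (6,126 ft)
AVAILABLE_HEAD_FT = 20.42      # effective head found above

def total_head_ft(diameter_in):
    # head to produce velocity = G^2 / (215 d^4); head to overcome friction = G^2 L / (240 d^5)
    velocity_head = GALLONS_PER_MIN ** 2 / (215 * diameter_in ** 4)
    friction_head = GALLONS_PER_MIN ** 2 * LENGTH_YARDS / (240 * diameter_in ** 5)
    return velocity_head + friction_head

def next_trial(diameter_in):
    # the flow varies roughly as the square root of the head ...
    flow = GALLONS_PER_MIN * math.sqrt(AVAILABLE_HEAD_FT / total_head_ft(diameter_in))
    # ... and, as the text assumes, roughly as the fifth power of the diameter
    return (diameter_in ** 5 * GALLONS_PER_MIN / flow) ** 0.2

d, previous = 30.0, 0.0
while abs(d - previous) > 0.01:
    previous, d = d, next_trial(d)
print(round(d))    # 40, the diameter arrived at in the worked example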
A simpler way of arriving at the size would be to calculate it by Santo Crimp's formula for sewer discharge, namely, velocity in feet per second = 124 x (R^2)^(1/3) x sqrt(S), where R equals the hydraulic mean depth in feet, and S the ratio of fall to length; the fall being taken as the difference in level between the sewage and the sea after allowance has been made for the differing densities. In this case the fall is 20.42 ft in a length of 6,126 ft, which gives a gradient of 1 in 300. The hydraulic mean depth equals d/4; the required discharge, 2,497 cubic feet per minute, equals the area (pi d^2 / 4) multiplied by the velocity; therefore the velocity in feet per second = 4/(pi d^2) x 2,497/60 = 2,497/(15 pi d^2), and the formula then becomes

2,497/(15 pi d^2) = 124 x (d^2)^(1/3) / (4^2)^(1/3) x sqrt(1)/sqrt(300),

or d^2 x (d^2)^(1/3) = (d^8)^(1/3) = (2,497 x 16^(1/3) x sqrt(300)) / (124 x 15 x 3.14159),

or (8/3) x log d = log 2,497 + (1/3 x log 16) + (1/2 x log 300) - log 124 - log 15 - log 3.14159,

or log d = 3/8 (3.397419 + 0.401373 + 1.238561 - 2.093422 - 1.176091 - 0.497150) = 3/8 (1.270690) = 0.476509;

therefore d = 2.9958 feet = 35.9496, say 36 inches.

As it happens, this could have been obtained direct from the tables, where the discharge of a 36 in pipe at a gradient of 1 in 300 = 2,506 cubic feet per minute, as against the 2,497 cubic feet required, but the above shows the method of working when the figures in the tables do not agree with those relating to the particular case in hand. This result differs somewhat from the one previously obtained, but there remains a third method, which we can now make trial of--namely, Saph and Schoder's formula for the discharge of water mains, V = 174 x (R^2)^(1/3) x S^0.54. Substituting values similar to those taken previously, this formula can be written

2,497/(15 pi d^2) = 174 x (d^2)^(1/3) / (4^2)^(1/3) x 1^0.54 / 300^0.54,

or d^2 x (d^2)^(1/3) = (d^8)^(1/3) = (2,497 x 16^(1/3) x 300^0.54) / (174 x 15 x 3.14159),

or log d = 3/8 (3.397419 + 0.401373 + (0.54 x 2.477121) - 2.240549 - 1.176091 - 0.497150) = 3/8 (1.222647) = 0.458493;

therefore d = 2.874 feet = 34.49, say 34-1/2 inches.

By Neville's general formula the velocity in feet per second = 140 sqrt(RS) - 11 (RS)^(1/3), or, assuming a diameter of 37 inches,

V = 140 x sqrt(37/(12 x 4) x 1/300) - 11 x (37/(12 x 4 x 300))^(1/3) = 140 x sqrt(37/14,400) - 11 x (37/14,400)^(1/3) = 7.09660 - 1.50656 = 5.59 feet per second.

Discharge = area x velocity; therefore the discharge in cubic feet per minute = 5.59 x 60 x (3.14159 x 37^2) / (4 x 12^2) = 2,504, compared with the 2,497 cubic feet per minute required, showing that if this formula is used the pipe should be 37 in in diameter.

The four formulae, therefore, give different results, as follows:--

  Hawksley         = 40 in
  Neville          = 37 in
  Santo Crimp      = 36 in
  Saph and Schoder = 34-1/2 in

The circumstances of the case would probably be met by constructing the outfall 36 in in diameter.
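The discharges just quoted for Santo Crimp's and Neville's formulae can be checked directly. The Python sketch below is illustrative only; it takes the constants and units exactly as given above, and the function names are assumptions of the sketch.

# Check of the Santo Crimp and Neville figures quoted above
# (velocities in feet per second, R = d/4 in feet, S = 1/300).
import math

S = 1.0 / 300.0   # gradient

def discharge_cfm(diameter_in, velocity_fps):
    """Discharge in cubic feet per minute of a pipe running full."""
    area_sq_ft = math.pi * (diameter_in / 12.0) ** 2 / 4.0
    return velocity_fps * 60.0 * area_sq_ft

def santo_crimp_velocity(diameter_in):
    r = diameter_in / 12.0 / 4.0
    return 124.0 * r ** (2.0 / 3.0) * math.sqrt(S)

def neville_velocity(diameter_in):
    r = diameter_in / 12.0 / 4.0
    return 140.0 * math.sqrt(r * S) - 11.0 * (r * S) ** (1.0 / 3.0)

print(round(discharge_cfm(36, santo_crimp_velocity(36))))   # about 2,506 c.f.m., as in the text
print(round(discharge_cfm(37, neville_velocity(37))))       # about 2,504 c.f.m., as in the text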
It is very rarely desirable to fix a flap-valve at the end of a sea outfall pipe, as it forms a serious obstruction to the flow of the sewage, amounting, in one case the writer investigated, to a loss of eight-ninths of the available head; the head was exceptionally small, and the flap valve practically absorbed it all. The only advantage in using a flap valve occurs when the pipe is directly connected with a tank sewer below the level of high water, in which case, if the sea water were allowed to enter, it would not only occupy space required for storing sewage, but it would act on the sewage and speedily start decomposition, with the consequent emission of objectionable odours. If there is any probability of sand drifting over the mouth of the outfall pipe, the latter will keep free much better if there is no valve. Schemes have been suggested in which it was proposed to utilise a flap valve on the outlet so as to render the discharge of the sewage automatic. That is to say, the sewage was proposed to be collected in a reservoir at the head of, and directly connected to, the outfall pipe, at the outlet end of which a flap valve was to be fixed. During high water the mouth of the outfall would be closed, so that sewage would collect in the pipes, and in the reservoir beyond; then, when the tide had fallen such a distance that its level was below the level of the sewage, the flap valve would open, and the sewage flow out until the tide rose and closed the valve. There are several objections to this arrangement. First of all, a flap valve under such conditions would not remain watertight unless it were attended to almost every day, which is, of course, impracticable when the outlet is below water. As the valve would open when the sea fell to a certain level and remain open during the time it was below that level, the period of discharge would vary from, say, two hours at neap tides to about four hours at springs; and if the two hours were sufficient, the four hours would be unnecessary. Then the sewage would not only be running out and hanging about during dead water at low tide, but before that time it would be carried in one direction, and after that time in the other direction; so that it would be spread out in all quarters around the outfall, instead of being carried direct out to sea beyond chance of return, as would be the case in a well-designed scheme. When opening the valve in the reservoir, or other chamber, to allow the sewage to flow through the outfall pipe, care should be taken to open it at a slow rate, so as to prevent damage by concussion when the escaping sewage meets the sea water standing in the lower portion of the pipes. When there is considerable difference of level between the reservoir and the sea, and the valve is opened somewhat quickly, the sewage as it enters the sea will create a "water-spout," which may reach to a considerable height, and which draws undesirable attention to the fact that the sewage is then being turned into the sea.

CHAPTER XIV.

TRIGONOMETRICAL SURVEYING.

In the surveying work necessary to fix the positions of the various stations, and of the float, a few elementary trigonometrical problems are involved which can be advantageously explained by taking practical examples. Having selected the main station A, as shown in Fig. 35, and measured the length of any line A B on a convenient piece of level ground, the next step will be to fix its position upon the plan. Two prominent landmarks, C and D, such as church steeples, flag-staffs, etc., the positions of which are shown upon the ordnance map, are selected and the angles read from each of the stations A and B. Assume the line A B measures 117 ft, and the angular measurements reading from zero on that line are, from A to point C, 29° 23', and to point D, 88° 43'; and from B to point C, 212° 43', and to point D, 272° 18' 30". The actual readings can be noted, and then the arrangement of the lines and angles sketched out as shown in Fig. 35, from which it will be necessary to find the lengths A C and A D. As the three angles of a triangle equal 180°, the angle B C A = 180° - 147° 17' - 29° 23' = 3° 20', and the angle B D A = 180° - 87° 41' 30" - 88° 43' = 3° 35' 30". In any triangle the sides are proportionate to the sines of the opposite angles, and vice versa; therefore A B : A C :: sin B C A : sin A B C, or sin B C A : A B :: sin A B C : A C, or A C = (A B sin A B C) / (sin B C A) = (117 x sin 147° 17') / (sin 3° 20'), or log A C = log 117 + L sin 147° 17' - L sin 3° 20'.
The sine of an angle is equal to the sine of its supplement, so that sin 147° 17' = sin 32° 43', whence log A C = 2.0681859 + 9.7327837 - 8.7645111 = 3.0364585, therefore A C = 1087.6 feet. Similarly, sin B D A : A B :: sin A B D : A D, therefore

A D = (A B x sin A B D) / sin B D A = (117 x sin 87° 41' 30") / sin 3° 35' 30",

whence log A D = log 117 + L sin 87° 41' 30" - L sin 3° 35' 30" = 2.0681859 + 9.99964745 - 8.79688775 = 3.2709456, therefore A D = 1866.15 feet.

The length of two of the sides and all three angles of each of the two triangles A C B and A D B are now known, so that the triangles can be drawn upon the base A B by setting off the sides at the known angles, and the draughtsmanship can be checked by measuring the other known side of each triangle. The points C and D will then represent the positions of the two landmarks to which the observations were taken, and if the triangles are drawn upon a piece of tracing paper, and then superimposed upon the ordnance map so that the points C and D correspond with the landmarks, the points A and B can be pricked through on to the map, and the base line A B drawn in its correct position.

If it is desired to draw the base line on the map direct from the two known points, it will be necessary to ascertain the magnitude of the angle A D C. Now, in any triangle the tangent of half the difference of two angles is to the tangent of half their sum as the difference of the two opposite sides is to their sum; that is:--

tan 1/2 (ACD - ADC) : tan 1/2 (ACD + ADC) :: (A D - A C) : (A D + A C),

therefore, tan 1/2 (ACD - ADC) : tan 1/2 (120° 40') :: (1866.15 - 1087.6) : (1866.15 + 1087.6),

therefore, tan 1/2 (ACD - ADC) = (778.55 x tan 60° 20') / 2953.75,

or L tan 1/2 (ACD - ADC) = log 778.55 + L tan 60° 20' - log 2953.75 = 2.8912865 + 10.2444154 - 3.4703738 = 9.6653281,

therefore 1/2 (ACD - ADC) = 24° 49' 53", and ACD - ADC = 49° 39' 46".

Then algebraically

ADC = (120° 40' - 49° 39' 46") / 2 = (71° 0' 14") / 2 = 35° 30' 7",

ACD = 180° - 35° 30' 7" - 59° 20' = 85° 9' 53".

[Illustration: Fig. 35.--Arrangement of Lines and Angles.]

Now join up points C and D on the plan, and from point D set off the line D A, making an angle of 35° 30' 7" with C D, and having a length of 1866.15 ft, and from point C set off the angle A C D equal to 85° 9' 53". Then the line A C should measure 1087.6 ft long, and meet the line A D at the point A, making an angle of 59° 20'. From point A draw a line A B, 117 ft long, making an angle of 29° 23' with the line A C; join B C, then the angle A B C should measure 147° 17', and the angle B C A 3° 20'. If the lines and angles are accurately drawn, which can be proved by checking as indicated, the line A B will represent the base line in its correct position on the plan. The positions of the other stations can be calculated from the readings of the angles taken from such stations. Take stations E, F, G, and H as shown in Fig. 36, the angles which are observed being marked with an arc. It will be observed that two of the angles of each triangle are recorded, so that the third is always known. The full lines represent those sides, the lengths of which are calculated, so that the dimensions of two sides and the three angles of each triangle are known. Starting with station E, sin A E D : A D :: sin D A E : D E, therefore

D E = (A D x sin D A E) / sin A E D,

or log D E = log A D + L sin D A E - L sin A E D.

From station F, E and G are visible, but the landmark D cannot be seen; therefore, as the latter can be seen from G, it will be necessary to fix the position of G first.
Then, sin E G D : D E :: sin E D G : E G, or

E G = (D E x sin E D G) / sin E G D.

Now, sin E F G : E G :: sin F E G : F G, or

F G = (E G x sin F E G) / sin E F G,

thus allowing the position of F to be fixed; and then sin F H G : F G :: sin F G H : F H, or

F H = (F G x sin F G H) / sin F H G.

[Illustration: FIG 36.--DIAGRAM ILLUSTRATING TRIGONOMETRICAL SURVEY OF OBSERVATION STATIONS.]
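These chained sine-rule and tangent-rule reductions are easy to verify numerically. The following Python sketch is an editorial illustration, not part of the original text; the helper names dms and sine_rule are invented. It reproduces the worked example for the 117 ft base line and recovers A C, A D, and the angles of triangle A C D to within rounding of the logarithmic working above.

```python
import math

def dms(d, m=0, s=0):
    """Convert degrees, minutes, seconds to decimal degrees."""
    return d + m / 60 + s / 3600

def sine_rule(known_side, angle_opp_known, angle_opp_wanted):
    """Sine rule: the side opposite angle_opp_wanted, given the side opposite angle_opp_known.
    Angles are in decimal degrees."""
    return known_side * math.sin(math.radians(angle_opp_wanted)) / math.sin(math.radians(angle_opp_known))

AB = 117.0  # measured base line, feet

# Triangle A B C: angle A B C = 360 deg - 212 deg 43' = 147 deg 17', angle B A C = 29 deg 23',
# so angle B C A = 180 deg - 147 deg 17' - 29 deg 23' = 3 deg 20'.
AC = sine_rule(AB, dms(3, 20), dms(147, 17))          # ~1087.6 ft

# Triangle A B D: angle A B D = 360 deg - 272 deg 18' 30" = 87 deg 41' 30", angle B A D = 88 deg 43',
# so angle B D A = 3 deg 35' 30".
AD = sine_rule(AB, dms(3, 35, 30), dms(87, 41, 30))   # ~1866 ft

# Tangent rule in triangle A C D, with angle C A D = 88 deg 43' - 29 deg 23' = 59 deg 20',
# so angle ACD + angle ADC = 120 deg 40'.
half_sum = dms(120, 40) / 2
half_diff = math.degrees(math.atan((AD - AC) / (AD + AC) * math.tan(math.radians(half_sum))))
ADC = half_sum - half_diff                            # ~35 deg 30'
ACD = half_sum + half_diff                            # ~85 deg 10'

print(f"AC = {AC:.1f} ft, AD = {AD:.1f} ft")
print(f"ADC = {ADC:.4f} deg, ACD = {ACD:.4f} deg")
```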
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5902246236801147, "perplexity": 3759.629812091764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141201836.36/warc/CC-MAIN-20201129153900-20201129183900-00084.warc.gz"}
http://2010.igem.org/wiki/index.php?title=Galactose_dose_response_of_Gal1_Promoter_in_pRS415&diff=prev&oldid=144421
# Galactose dose response of Gal1 Promoter in pRS415

University of Aberdeen - ayeSwitch - iGEM 2010

# Measurement of dose responsiveness of the GAL1 promoter to galactose using construct GAL1p-(Npep-GFP)

### Aim

Previous dose response experiments using the fluorometer revealed that full GAL1 promoter induction was achieved at concentrations above 0.5% (data not shown). We wanted to examine the dose responsive behaviour of the GAL1 promoter across a full range of concentrations. Therefore the dose response experiments were repeated using lower concentrations of this inducing agent. We have therefore tested media containing 0.05%, 0.1%, 0.2%, 0.3%, 0.5%, 1% and 2% galactose.

### Protocol

1. Yeast transformed with a plasmid carrying the GAL1p-(Npep-GFP) construct was inoculated overnight into 5 ml of synthetic defined (SD) medium with amino acids: his (0.2 %), met (0.2 %), ura (0.2 %), trp (0.2 %) and raffinose (2 %) as the carbon source.
2. The following evening this cell culture was sub-cultured into a flask containing pre-warmed SD medium (50 ml) with 2% raffinose, and one of a range of concentrations of galactose between 0.05% and 2%, to achieve an optical density at 600 nm of 0.6 by 9.00 am the following morning.
3. Samples were washed into PBS, and diluted 1/20 in preparation for FACS analysis.

### Results

[[Image: Gal-facs3.jpg|300 px]]

Flow cytometry was used to quantify GFP fluorescence, with an excitation wavelength of 488 nm and an emission filter of 510 nm. The graph above summarises the FACS data, and shows that the intensity of GFP-expressing cells increases in response to the percentage of galactose in the growth medium. The GAL1 promoter in our construct showed a high degree of sensitivity to the inducing agent, with concentrations as low as 0.01% having significant inducing potential.

### Conclusion

The experiment clearly showed that the percentage of cells expressing GFP was exquisitely sensitive to the presence of galactose, with the dose response saturating above 0.1% galactose. This therefore clearly shows that the GAL1 promoter is highly sensitive, but that as a synthetic biology part, it may not exhibit ideal linear responses to inducing agent for some applications. The observed GFP expression response suggests that the GAL1 promoter behaves as an analogue switch across only a very narrow range of inducer concentrations.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9097334742546082, "perplexity": 11828.887901568043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363376.49/warc/CC-MAIN-20211207105847-20211207135847-00356.warc.gz"}
http://www.aephysum.umontreal.ca/SAPHARI2021/posters/sandrinetrotechaud/
# Two equally probable, but vastly different transitions, to a seasonally ice-free Arctic

## Implications for global warming communication

Sandrine Trotechaud1, Bruno Tremblay1,2

1 McGill University, 2 Columbia University Lamont-Doherty Earth Observatory

### Introduction

The decline in the minimum sea ice extent (Min SIE) has been the center of attention for several years. Global climate models project a seasonally ice-free Arctic within the next two to three decades. Of the 40 ensemble members (EM) from the CESM-LE model, two are of interest:

• EM 27 projects a relatively continuous decline in the Min SIE until reaching a seasonally ice-free state
• EM 13 projects a considerable recovery in the Min SIE that reaches a level similar to that of the 2000s, followed by a rapid decline.

Both EMs, however, lead to a seasonally ice-free Arctic. A recovery in the Min SIE can have an impact on the communication of global warming, because it could cast doubt in people's minds about the existence of climate change. Therefore, the objective of the project is to understand the mechanisms responsible for such trends in the Min SIE.

### Method

CESM-LE

• Fully coupled global climate model with nominal 1° resolution in all components
• 40 Ensemble Members initialized with perturbations in the temperature field of 10⁻¹⁴ K
• Annual data from 1920 to 2100

RD

A rapid decline (RD) is defined as being steeper than -0.3 million km² per year in the 5-year running mean. Gradual declines (GD) identified are less steep, but still important in the study.

### Method (continued)

Spatial Domain

The spatial domain of this study consists of the Arctic Ocean. It includes the Barents Sea Opening (BSO), the Fram Strait and the Bering Strait. The continental shelves, characterized by depths shallower than 425 meters, are shown on the figure below as a grey shaded area.

OHT

Ocean heat transport defined through a gate:

$$\mathrm{OHT} = c_p \rho\, U T A_{\mathrm{cross}}$$

$$\mathrm{OHT} = c_p \rho \sum_{i,k} F_{i,k}\, A_i\, \Delta z_k$$

### Results: EM 27

The EM 27 has two key periods identified: a gradual decline from 1999 (black) to 2016 (cyan) and a rapid decline from 2016 to 2022 (green).

### Results: EM 27 (continued)

According to a 20-year sliding window correlation of the anomalies:

• The first half of the GD isn't significantly correlated with any OHTs. When comparing the trend in the Min SIE during the first half of the GD with the model mean thermodynamic tendency (not shown here), it is shown that it is less than climatology.
• The second half of the GD and the RD are significantly anti-correlated with the BSO (r=-0.65) and the Fram Strait OHTs (r=0.60).

### Results: EM 13

The EM 13 has three key periods identified: a gradual decline from 2000 (black) to 2014 (green), a recovery period from 2014 to 2020 (pink) followed by a rapid decline from 2020 to 2029 (cyan).

Shown by the correlation of the anomalies:

• The GD is only anti-correlated significantly with the Bering Strait at the beginning and the end of the period (r=-0.55). However, the significance level remains above 80% the whole period with r=-0.40.
• The recovery period and the RD are significantly anti-correlated with the Bering Strait at r=-0.45. The SW and LW heat fluxes are significantly anti-correlated with the Min SIE (resp. r=-0.95 and r=-0.75). The LW heat flux undergoes a considerable increase during the RD.

### Discussion

From the results and with further investigation, we can note that:

• Recoveries are still possible (to the observed 1994 level) even from a nearly ice-free Arctic initial condition.
• During the rapid decline following the recovery period, the LW heat flux has a significant role in inhibiting the formation of the ice during the winter. The sea ice stays relatively young and thin, which predisposes it to a large ice loss over the next few years, constituting an RD.
• The SW heat flux varies between a moderate and an effective amplifier of the melting through the ice-albedo feedback, but is rarely the primary cause.
• The OHTs are the primary cause in the key periods identified in this study, with particular importance of the Bering Strait OHT in EM 13 and the Fram Strait/BSO OHTs in EM 27.
• Although the Bering Strait OHT is of smaller amplitude, it interacts with the sea ice over expanded continental shelves, which promotes considerable melting.

Multiple scenarios are plausible for the future trend in the Min SIE due to great climate variability. A considerable recovery in the Min SIE is possible even from a nearly ice-free Arctic condition through internal variability. However, as we saw in the case of EM 13, it is not sustainable, and the EM still projects a seasonally ice-free cover within a few years of EM 27. Hence, recoveries in the Arctic sea ice cover are possible due to climate variability, even with the presence of climate change.
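As a rough illustration of the rapid-decline criterion defined in the Method section (a 5-year running-mean trend steeper than -0.3 million km² per year), the following Python sketch flags RD years in an annual minimum-SIE series. The function names and the synthetic input series are assumptions for illustration only, not CESM-LE output.

```python
import numpy as np

def running_mean(x, window=5):
    """Centered running mean; edge values are left as NaN."""
    out = np.full(len(x), np.nan)
    half = window // 2
    for i in range(half, len(x) - half):
        out[i] = x[i - half:i + half + 1].mean()
    return out

def rapid_decline_years(years, min_sie, threshold=-0.3, window=5):
    """Return years whose running-mean slope is steeper than `threshold`
    (million km^2 per year), i.e. the RD criterion described above."""
    smooth = running_mean(np.asarray(min_sie, dtype=float), window)
    slope = np.gradient(smooth, years)  # million km^2 per year
    return [int(y) for y, s in zip(years, slope) if np.isfinite(s) and s < threshold]

# Illustrative (synthetic) series, not model data: slow decline, then a steep drop after 2020.
years = np.arange(2000, 2031)
min_sie = 6.0 - 0.05 * (years - 2000) - 0.4 * np.clip(years - 2020, 0, None)
print(rapid_decline_years(years, min_sie))
```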
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6009973883628845, "perplexity": 3393.732867482701}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00101.warc.gz"}
https://oxfordre.com/physics/view/10.1093/acrefore/9780190871994.001.0001/acrefore-9780190871994-e-20?rskey=I1Fbul
# Solar Flares

## Summary and Keywords

A solar flare is a transient increase in solar brightness powered by the release of magnetic energy stored in the Sun's corona. Flares are observed in all wavelengths of the electromagnetic spectrum. The released magnetic energy heats coronal plasma to temperatures exceeding ten million Kelvins, leading to a significant increase in solar brightness at X-ray and extreme ultraviolet wavelengths. The Sun's overall brightness is normally low at these wavelengths, and a flare can increase it by two or more orders of magnitude. The size of a given flare is traditionally characterized by its peak brightness in a soft X-ray wavelength. Flares occur with frequency inversely related to this measure of size, with those of greatest size occurring less than once per year. Images and light curves from different parts of the spectrum from many different flares have led to an accepted model framework for explaining the typical solar flare. According to this model, a sheet of electric current (a current sheet) is first formed in the corona, perhaps by a coronal mass ejection. Magnetic reconnection at this current sheet allows stored magnetic energy to be converted into bulk flow energy, heat, radiation, and a population of non-thermal electrons and ions. Some of this energy is transmitted downward to cooler layers, which are then evaporated (or ablated) upward to fill the corona with hot dense plasma. Much of the flare's bright emission comes from this newly heated plasma. Theoretical models have been proposed to describe each step in this process.

# Observation and Overview

## Light Curves

A solar flare is a sudden brightening of a small portion of the Sun's surface, powered by the release of stored magnetic energy. In current practice, a flare is identified and classified as a peak in the Sun's total brightness in the 1–8Å soft X-ray (SXR) band of NOAA's GOES satellite (The Geostationary Operational Environmental Satellite of the US National Oceanic and Atmospheric Administration). During a strong flare the Sun's brightness in this particular band can increase more than 100-fold as shown in Figure 1a. A peak exceeding $10^{-4}$, $10^{-5}$, or $10^{-6}\ \mathrm{W/m^2}$ is categorized as a flare of class X, M, or C respectively. The example flare peaks at $7\times10^{-5}\ \mathrm{W/m^2}$, designating it class M7. While describing flares in generality, the well-observed example shown in Figure 1 will be used for concrete illustration.

Figure 1. Light curves from an M7 flare on April 18, 2014. (a) SXR emission in the 1–8Å band from GOES, on a logarithmic scale. The left axis gives the intensity as a fraction of pre-flare levels, while the right gives the intensity in $\mathrm{W/m^2}$, along with ranges for X, M, and C flares. (b) Integrated intensities from ions of iron: Fe xx (red, 133Å, $T=9$ MK), Fe xviii (magenta, 94Å, $T=6$ MK), Fe xvi (green, 335Å, $T=3$ MK), from SDO/EVE, and Fe ix (blue, 171Å, $T=0.6$ MK) from SDO/AIA. All curves show the relative difference from pre-flare level on a logarithmic scale. (c) The 1600Å bandpass of SDO/AIA, showing relative difference from pre-flare on a linear scale. (d) Hard X-rays in 25–50 keV (blue) and 50–100 keV (red) band, by RHESSI, on a linear scale.
(e) The 1–8Å curve from (a), now on a linear scale (blue), and its time derivative in black. The bottom axis is UTC, the top is in minutes from peak in 1–8Å. Diamonds on (b), (c) and (d) mark the times of the images in Figure 2. A flare includes simultaneous brightening, to some degree, in virtually every wavelength in the spectrum. The extreme ultraviolet (EUV) spectral lines shown in Figure 1b brighten by anywhere from 3% (Fe ix) to 1,000% (Fe xx) above pre-flare levels. The wide range is due mostly to the wide range of quiescent values on which the comparison is based. The Sun is brightest in the visible, around 5,000Å, and even a large flare will only brighten these wavelengths by about $∼0.01$ % (Kopp, Lawrence, & Rottman, 2005; Kretzschmar et al., 2010). Therefore, only extremely large flares are detectable as increases in overall visible brightness of the Sun, or a comparable increases in its bolometric luminosity. It is easier to detect a localized increase in visible surface brightness; a flare large enough to show such an increase is termed a white light flare. Light curves of different wavelengths from a given flare tend toward one of two characteristic behaviors. Some radiation, such as microwaves, gamma rays, or the hard X-ray (HXR) curve shown in Figure 1d, originates from the flare’s footpoints, and has a brief, impulsive light curve. In general, these persist only for the initial 1–10 minutes, known as the flare’s impulsive (or rise) phase. Other radiation, such as the SXR and EUV curves from Figures 1a and 1b, originates from the coronal plasma, and evolves more gradually. These light curves rise during the impulsive phase, and then decay for times ranging from 10 minutes to over 10 hours, known as the gradual (or main) phase. The time derivative of a coronal light curve, most commonly the GOES 1–8Å curve shown in Figure 1e, tends to resemble the more impulsive footpoint light curves. This resemblance is known as the Neupert effect (Dennis & Zarro, 1993; Neupert, 1968), and is taken as evidence that energy deposited in the lower atmosphere heats and ablates material upward into the corona. This ablation process is commonly called chromosphere evaporation (Canfield et al., 1980; Antonucci et al., 1999), even though it is related to familiar chemical evaporation only by analogy. The gradually evolving coronal plasma has a temperature typically ranging from 1–30 MK. In the example from Figure 1, a ratio of SXR bands shows a temperature peaking at $Tmax≃17$ MK, and decreasing gradually thereafter. This cooling behavior is generally reflected in the light curves from progressively cooler ion species peaking at progressively later times (Aschwanden & Alexander, 2001; Qiu & Longcope, 2016; Warren, Mariska, & Doschek, 2013); compare the four curves in Figure 1b. Emission during the impulsive phase from microwaves and HXR often appears to originate from a population of energetic electrons interacting with the coronal plasma or the footpoints (Krucker et al., 2008). Spectra show this population to have a non-Maxwellian (i.e., non-thermal) distribution of energies—typically a power law, above some lower cutoff, $Ec$. The flare process evidently produces such a non-thermal electron population, at least during its impulsive phase. Photons with energies above 1 MeV, that is, gamma-rays, are observed during the impulsive phases of some large flares (Murphy, 2007). 
These sometimes show evidence of spectral lines from nuclear processes, providing clear evidence of non-thermal ions of high energies. It is generally believed that ions are being accelerated in many, if not all, flares, even though gamma ray signatures can be observed in only the largest ones. Flares, especially large ones, are frequently associated with the eruption of mass-loaded flux ropes, known as coronal mass ejections (CMEs). Several investigators have attempted to determine whether flares cause, or at least precede, CMEs or vice versa (Zhang, Dere, Howard, Kundu, & White, 2001). There are, however, numerous well-studied cases of flares occurring without CMEs (Chen et al., 2015b; Yashiro, Gopalswamy, Akiyama, Michalek, & Howard, 2005), called compact or confined flares, and of CMEs occurring without flares (Munro et al., 1979). This makes it clear that there can be no invariable cause–effect relation between these two distinct phenomena (Gosling, 1993). They are simply associated phenomena. Their association is most likely when the flare is large and has an extended gradual phase; such cases are known as eruptive flares or long-duration events. In all cases, the flare is the component that produces the enhanced brightness, intimately related to the Sun's lower atmosphere. The enhancement of EUV and X-rays increases the rate of ionization in the upper atmosphere of the Earth and other planets (Chamberlin, Woods, & Eparvier, 2008; Fuller-Rowell & Solomon, 2010). A flare thereby affects the ionosphere immediately, while a CME can have more varied effects at Earth, but only when the magnetized mass impacts its magnetosphere.

The brightness enhancement of a flare demands that it be associated with the release of energy beyond the Sun's steady luminous radiation. It is relatively straightforward to compute the total radiative loss from the hot coronal plasma, since this is readily observed in EUV and SXR. The two SXR bands from GOES provide an estimate of temperature and emission measure from which total radiative losses can be computed. Doing so for the example in Figure 1 reveals a peak coronal radiation power, $P_{c,r}\simeq 10^{27}\ \mathrm{erg/s}$, roughly 20 times greater than the power in the narrow 1–8Å bandpass of GOES. Over its multi-hour duration the coronal plasma of this particular flare radiates $\Delta E_{c,r}\simeq 5\times10^{30}$ ergs. These values are fairly average, and X flares will typically show coronal radiation up to $P_{c,r}\sim 10^{29}\ \mathrm{erg/s}$ and total energies up to a few times $10^{32}$ erg. More careful computations, made using the full differential emission measure of the coronal plasma, yield comparable values in general. The coronal plasma is only part of the flare, so the power it radiates accounts for only a fraction of the flare's total energy. That total has proven difficult to compute, although several serious attempts have been made to do so in particular cases (Emslie et al., 2004, 2005) or for collections (Emslie et al., 2012). Energy radiated from the lower atmosphere, while at lower temperature, will usually exceed, even far exceed, the coronal losses. This is, however, a much smaller fraction of the pre-flare losses at those wavelengths, and is thus far more difficult to measure.
Other contributions cannot be quantified without a good model for the flare process: the power deposited in the chromosphere by non-thermal electrons can often exceed the losses to coronal radiation, but it is not entirely clear whether the deposited energy is ultimately radiated from the corona or chromosphere (and is therefore already counted) or is lost in some other way. A CME will also carry away energy, but its source may be from the flare, or from the much larger coronal volume it affects. In light of these factors, there is no simple relationship between the peak X-ray flux, or flare class, and the total energy powering a solar flare. ## Morphology Solar flares almost always occur in active regions (ARs) of relatively strong, complex magnetic field (see Figure 2a). The most intense chromospheric emission is generally organized into elongated structures, called flare ribbons. In the prototypical case, called a two-ribbon flare, there is one ribbon in each of the AR’s magnetic polarities, and they are separated by the polarity inversion line (PIL; see Figure 2b). The ribbons generally move apart slowly over the course of the flare, providing evidence of progressing magnetic reconnection. There is also an apparent motion, especially early on, related to the formation and elongation of the ribbons (Fletcher, Pollock, & Potts, 2004; Qiu, 2009). Some ribbons can have very complex structure on their finest scales and this may undergo motions more disorderly than the foregoing description implies (Fletcher & Hudson, 2001). Coronal wavelengths show loops of hot plasma tracing out field lines interconnecting the magnetic polarities. The loops often connect points on the opposing ribbons, thereby forming an elongated arcade: the flare arcade (see Figures 2c and 2d). The loops appear later in images from progressively cooler ions, consistent with the cooling plasma scenario (Aschwanden & Alexander, 2001; Warren et al., 2013). As the ribbons spread apart, the loops anchored to them appear to be rising upward. It is generally accepted, however, that individual loops are relatively stationary, or may even be contracting downward (Forbes & Acton, 1996). The apparent upward motion is thus ascribed to the appearance of new loops piling on top of older loops as magnetic reconnection proceeds. When images are made from HXR emission, they typically show one or more concentrated sources. Often the sources fall at points along the ribbons, but not along the entire extended ribbon (Sakao, 1994). Such sources are attributed to electron deposition at the footpoint(s) of flare loop(s). The yellow contours on Figure 2b show one source on each ribbon, presumably from both footpoints of a single loop, such as the one appearing in Figure 2c. HXR footpoint sources from a single loop can appear of differing strength, as in Figure 2b. One interpretation is that the footpoint with stronger magnetic field mirrors a larger fraction of the precipitating electrons (Goff, Matthews, van Driel-Gesztelyi, & Harra, 2004; Sakao, 1994). The sources can appear to move along the ribbon, probably showing footpoints of different loops in succession. The directions and speed of this motion has been used to infer aspects of the magnetic reconnection, and the electron acceleration (Bogachev, Somov, Kosugi, & Sakao, 2005). Figure 2. Images of the solar flare on April 18, 2014. All four panels show the same $180′′×150′′$ field of view. (a) Line-of-sight magnetogram from SDO/HMI on a linear grey scale. 
Blue and cyan curves are the leading edge of the ribbons from (b), and magenta curve is the PIL. (b)–(d) show different SDO/AIA images using inverse logarithmic color scales. Figure 2(b). shows the flare ribbons in 1600Å image from 12:50. An image made at the same time from the 25–50 keV band of RHESSI is over plotted as yellow contours at 60%, 75%, and 90% of maximum. The magenta curve is the PIL. Figure 2(c). 94Å (Fe xviii) from 12:55. Figure 2(d). 171Å image from 13:09. The times of each AIA image are marked by a diamond on the corresponding curves of Figure 1. HXR images sometimes show a concentrated source between the footpoints, where the loop’s apex is expected to be (Masuda, Doschek, Boris, Oran, & Young, 1994). Flares at the limb show the source to be located just above the hottest loop visible in softer wavelengths, leading to the term above-the-looptop source for such features (see yellow contours in Figure 3a). Apparent motions of such sources has also been interpreted in terms of the time-dependance of the reconnection process (Sui & Holman, 2003). ## The Standard Model Framework The various observations have led to a model, or framework, of a generic flare. The model, shown in Figure 3b, is typically cast in terms of an eruptive flare (see Figure 3a), but most features are expected to have counterparts in compact flares. The earliest version is attributed to Carmichael (1964), Sturrock (1968), Hirayama (1974), and Kopp and Pneuman (1976), and is called the CSHKP model. Since then the basic model has been extended and augmented to accommodate new theoretical understanding and new observed features. Figure 3. The standard flare model. (a) Three images of an eruptive flare on the west limb September 10, 2017, made by SDO/AIA in its 193Å band. Cyan curves mark the solar limb, with north to the left. Each image shows the same $90′′×270′′$ field of view. Yellow curves in the final image are 60%, 75%, and 90% contours from RHESSI’s 25–50 keV image. Figure 3(b). The geometry of the standard flare model in the same orientation. Blue lines are magnetic field lines, and the red curve is the separatrix field line. A cyan shaded circle is the erupting flux rope, and the red shaded regions are outflows originating from the diffusion region (green ellipse). A blue region shows the most recently closed flux tube which forms the flare loop together with its feet, the two flare ribbons (magenta squares), separated by the PIL. The model flare is initiated when an erupting flux rope (cyan circle in Figure 3b) pulls open the flux on either side of the PIL, creating a current sheet separating upward from downward open field. Reconnection occurs at some point in the current sheet, designated by a green ellipse labeled X. Open flux is swept inward, and reconnected to form closed field lines which are then swept out by long narrow outflow jets. Loop retraction stops abruptly at a point labeled termination, at the end of the jet. The fully retracted loop becomes the flare loop and its feet form the ribbons, which are seen end-on as magenta boxes in Figure 3b. According to the foregoing, two-dimensional model, the outermost, or leading, edge of the flare ribbon anchors the separatrix (red curve in Figure 3b) which connects to the X-point. The amount of flux reconnected,$φrx$, can be computed by integrating the vertical magnetic flux over which the ribbon appears to sweep (Forbes & Priest, 1984; Poletto & Kopp, 1986; Qiu, Lee, Gary, & Wang, 2002). 
Such measurements have become reasonably routine and provide reconnection rates typically peaking in the range $\dot\varphi_{rx}\sim 3\times10^{17}\ \mathrm{Mx/s}$ in small flares to $3\times10^{19}\ \mathrm{Mx/s}$ for the largest (Tschernitz, Veronig, Thalmann, Hinterreiter, & Pötzi, 2018). The reconnection in the standard flare model reflects the structure found in studies of reconnection in generic contexts, and has been verified to some extent through observation (McKenzie, 2002). The diffusion region (green ellipse in Figure 3b) occupies only a small portion of the current sheet, thereby permitting reconnection of the fast, Petschek variety (Forbes, Priest, Seaton, & Litvinenko, 2013; Petschek, 1964). In this mode of reconnection the outflow jets are bounded by slow magnetosonic shocks, shown in dark red in Figure 3b, which accelerate, compress, and heat the jet's plasma (Forbes & Priest, 1983). If accelerated to a speed above the local fast magnetosonic speed, there will be a fast magnetosonic shock at its termination (Forbes, 1986). Some evidence has been found for such a structure in radio observations (Aurass, Vršnak, & Mann, 2002; Chen et al., 2015a). The model offers several possibilities by which the energy released by reconnection could generate a population of non-thermal electrons. Various theoretical models under study predict electron acceleration occurring within the diffusion region, in the outflow jet, or at the termination shock. Each of these possibilities offers an explanation for the population of non-thermal electrons observed in microwaves and HXRs, and each produces a distribution consistent with a power law. The electrons may be trapped near the termination, to produce an above-the-looptop source (shown in orange in Figure 3b), or they could precipitate along the flare loop to produce the footpoint sources. Some of the energy released by magnetic reconnection will be guided downward by the magnetic field to the chromospheric flare ribbons. The energy could be transported by the non-thermal electrons or it could be transported through conventional thermal conduction, which is directed almost entirely along magnetic field lines. Once the energy reaches the feet, it will raise the temperature of the chromospheric plasma, and drive upward evaporation. This scenario nicely explains the Neupert effect where coronal emission appears as a response to the chromosphere. Spectroscopic measurements confirm the fast upflow of material within the flare ribbons (Antonucci & Dennis, 1983; Milligan & Dennis, 2009). As reconnection proceeds, new loops will be moved through the outflow to lie atop the arcade. This will have the effect of moving the reconnection point upward and moving the separatrix, and hence the flare ribbons, outward. It will also produce a series of ever higher flare loops. Both effects are consistent with the observed evolution.

## The Flare Population

While every flare is unique, there is a strong tendency for extensive quantities to scale together: a large flare is usually large in every measure (known as "big flare syndrome"; Kahler, 1982). It is therefore common to characterize a flare's size by a single measure, usually by its peak flux in GOES 1–8Å, designated here by $F_{1-8}$. As was mentioned, no rigorous relation exists between this measure and any other characteristic of a flare. Nevertheless, $F_{1-8}$ is readily measured and scales with the flare's size. Flares occur at all sizes with frequency inversely dependent on size (Crosby, Aschwanden, & Dennis, 1993).
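Because flare sizes are quoted throughout by GOES class, a small illustrative helper may be useful. The sketch below is the editor's own, not code from any instrument pipeline; it simply applies the class thresholds quoted earlier ($10^{-6}$, $10^{-5}$, and $10^{-4}\ \mathrm{W/m^2}$ for C, M, and X) to a peak 1–8Å flux.

```python
def goes_class(peak_flux_wm2: float) -> str:
    """Return a GOES class string (e.g. 'M7.0') for a peak 1-8 Angstrom flux in W/m^2.

    Thresholds as quoted in the text: C >= 1e-6, M >= 1e-5, X >= 1e-4 W/m^2.
    The multiplier is the flux expressed in units of the class threshold.
    """
    for letter, threshold in (("X", 1e-4), ("M", 1e-5), ("C", 1e-6)):
        if peak_flux_wm2 >= threshold:
            return f"{letter}{peak_flux_wm2 / threshold:.1f}"
    return f"sub-C ({peak_flux_wm2:.1e} W/m^2)"

print(goes_class(7e-5))    # 'M7.0' -- the example flare of Figure 1
print(goes_class(2.0e-3))  # 'X20.0' -- X-class multipliers above 10 are allowed
```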
Figure 4 summarizes the flare activity, as characterized by $F_{1-8}$, over three solar cycles (from 1986 to 2016). Since flares are associated with ARs, they occur with highest frequency around solar maxima (1991, 2001, and 2014). At these times flaring rates can increase to over 8 C-class flares and one M-class flare per day, as shown in Figure 4b. These rates drop by more than two orders of magnitude during solar minimum. Averaged over all three cycles, M-flares occur at a mean rate of $0.33$/day, or about 1,300 over an 11-year cycle. (There were 2,047, 1,441, and 681 M-flares in these three solar cycles, whose amplitudes clearly decrease.)

Figure 4. Summary of flaring activity from 1986 to 2016, characterized by flare magnitude $F_{1-8}$. (a) The number of flares vs. date and magnitude, using an inverse color scale—darker for more flares. Blue, green, and red dashed lines mark the levels for C, M, and X flares respectively. A flare occurs above the cyan curve once per day. Red crosses show the largest flare over a 90-day window. (b) The mean frequency, averaged over 90-day windows, of C-class flares (blue) and M-class flares (green). Blue and green ticks to the left of the $y$ axis show the rates averaged over the 31-year period. The cyan dashed line is for reference to the cyan curve in (a). (c) The international sunspot number, for reference. (d) The frequency distribution for the entire 31-year interval, plotted on its side, with magnitude on the vertical axis to match panel (a) to its left. The magenta line is a power-law fit, $dN/dF\sim F^{-2.14}$.

The average flaring rate over the entire three-cycle interval can be formed into a frequency histogram, shown in Figure 4d. This clearly shows a power-law behavior, $dN/dF\sim F^{-2.14}$. This means that on average an X-flare is less probable than an M-flare by a factor of $10^{-1.14}=0.072$. They occur at an average rate of $8.7$/year, compared to 120 M-flares per year. The power-law distribution of solar flares is reminiscent of the distributions of earthquake energies, or avalanche sizes (Bak, Tang, & Wiesenfeld, 1988). This resemblance has led some investigators to propose analogous models to explain flare-size distributions (Aschwanden et al., 2018; Lu & Hamilton, 1991). It is also possible to use the empirical relationship along with observations of the current rate of small flares to forecast the likelihood of a larger flare in the near future (Wheatland, 2004). Investigations reveal that the power-law distribution continues in the direction of smaller flares, in spite of the apparent roll-off evident in Figure 4d. That is caused by the systematic undercounting of small flares when the activity, and related background, rises. (Their absence is clear as light voids in the $F_{1-8}<10^{-6}\ \mathrm{W/m^2}$ regions at solar maximum.) This confusion limit can be overcome using spatially-resolved measurements over more limited fields of view. Such measurements reveal a continuation to ever smaller flares, and perhaps still farther to yet-unobserved nano-flares. The distribution may also extend to extremely large flares, but small numbers make this unclear. Based on the power law, flares above X10 should occur at a rate of $0.63$ per year, which is 7.2% of the rate for flares X1–X10. This would amount to 19 in the 31-year sample. In actuality 16 flares were observed in this class (10 in cycle 22 and 6 in cycle 23), which is within expectations for such a small sample. Thus we cannot rule out the extension of the power law to that and still higher levels.
If it applies out to $F_{1-8}>0.1\ \mathrm{W/m^2}$ (i.e., X1000), we would expect one such super-flare every 300 years, on average. The physics of flares shows that larger flares require larger active regions and the reconnection of more flux from them. Any of these factors may turn out to have an upper bound which would in turn place a limit on the possible size of a solar flare. Several efforts have been made to estimate that upper limit (Aulanier et al., 2013; Schrijver et al., 2012; Shibata et al., 2013), but no consensus has been reached.

## Models and Theories

There have been many attempts to understand solar flares theoretically and to incorporate this understanding into models. Most efforts have tended to focus on one specific aspect, although a few notable efforts have attempted to combine multiple aspects into a single model. One focus has been on the large-scale dynamics of a solar flare, often including the CME. A second has been on the dynamics of the plasma flowing within the flare loop, with particular attention to the process of chromospheric evaporation. A final set of studies has focused on the generation of non-thermal particles and their propagation along the flare loop.

# Large-Scale Models

## Triggering and Eruption

Under the standard model framework, flares and CMEs begin together, so their earliest phases are generally combined into a single flare/CME model. This initial evolution occurs on a very large scale and is almost always modeled using the single-fluid equations of MHD (see "Magnetohydrodynamics—Overview," Priest, 2019). A successful model must explain the sudden onset of fast eruption after an extended period of slow evolution. In a number of models the eruption is initiated through a large-scale, current-driven MHD instability triggered when slow (quasi-static) evolution, driven from the lower boundary, brings the system past the instability threshold. Every active region, and thus every flare, has a different geometry. Models typically use a simplified, generic geometry, in an effort to study flare evolution in general. In one model (Biskamp & Welter, 1989; Hood & Priest, 1980; Mikic, Barnes, & Schnack, 1988) the initial magnetic field forms an equilibrium arcade across the PIL, which is sheared by slow motions of the lower boundary. This slow shearing causes a proportionately slow upward expansion of the equilibrium arcade. In some versions of this model there is a shear threshold beyond which equilibria are unstable (Kusano, Maeshiro, Yokoyama, & Sakurai, 2004). Once this threshold is crossed, the upward expansion becomes dynamic, rather than quasi-static. Other versions lack a genuine threshold, but instead exhibit an expansion of increasing speed until the system must behave dynamically, regardless of how slow the boundary moves (Mikic & Linker, 1994). In either case, the rapid upward expansion creates a current sheet at which reconnection occurs to form the erupting flux rope and thereafter the flare and CME. Several other models assume a twisted (i.e., current-carrying) flux rope exists in equilibrium prior to eruption. The rope's twist can be increased either by slow boundary motions or by reconnection (Amari, Luciani, Mikic, & Linker, 2000). Such equilibria are subject to large-scale instability for sufficient levels of twist. In one instability, the kink mode, the previously smooth axis of the flux rope develops a helical pitch, reducing the total magnetic energy.
The equilibrium becomes unstable once the field lines wrap around the straight axis by more than a critical angle, generally $2\pi$ to $3.5\pi$ depending on the particular equilibrium (Hood & Priest, 1981). A second instability concerns a twisted flux rope overlain by an external field required to balance the rope's outward hoop force. The hoop force is a repulsion between the current in a section of the rope and its image current below the boundary. This force decreases as the flux rope moves upward, away from the boundary. A stable equilibrium requires that the overlying field, and the balancing force it supplies, decrease less rapidly than the hoop force, in order that a balance can always be achieved. If this is not the case, that is, if the overlying field decreases too rapidly with height, the net upward force will increase with height leading to a run-away expansion: an eruption. This is a form of lateral kink instability known as the torus instability, and it is triggered once the flux rope enters a region where the overlying field strength decreases sufficiently rapidly with height (Kliem & Török, 2006). Both kink and torus instabilities are considered viable mechanisms to produce a CME and associated flare (Démoulin & Aulanier, 2010). No consensus has yet been reached as to whether one is invariably responsible, or simply more frequently responsible, for observed eruptions. In a separate class of models, called loss of equilibrium, the equilibrium contains a flux rope as well as one or two current sheets. Magnetic reconnection is assumed to occur at the current sheet, but slowly enough to drive quasi-static evolution (Moore & Sterling, 2006). This evolution can reach a point beyond which no neighboring equilibrium exists—a loss of equilibrium or a catastrophe—requiring rapid dynamical evolution to a new equilibrium (Forbes & Isenberg, 1991; Longcope & Forbes, 2014; Priest & Forbes, 1990; Yeates & Mackay, 2009). In one such scenario, called tether-cutting, reconnection occurs at a current sheet beneath the flux rope, causing it to rise slowly until equilibrium is lost and the flux rope rises dynamically: eruption (Moore, Sterling, Hudson, & Lemen, 2001). In an alternative scenario, called break-out, the initial equilibrium includes a horizontal current sheet above the flux rope, as well as the vertical sheet beneath it (Antiochos, DeVore, & Klimchuk, 1999). Slow reconnection at the upper sheet causes the flux rope to rise slowly, until equilibrium is lost and it erupts dynamically. Under this scenario, the lower current sheet stretches and thins, first slowly under quasi-static evolution, and then more rapidly as a result of eruption. The latter phase is accompanied by rapid reconnection at the lower sheet, termed flare reconnection, which produces the flare itself according to the standard model.

## Reconnection in a Flare

The flare itself is largely a consequence of the magnetic reconnection occurring at the current sheet beneath the erupting flux rope. At least in the standard model, the current sheet has global scale so reconnection there is typically modeled using MHD equations (the vertical structure evident in Figure 3a is more than 100 Mm long). In general such studies conform to results of more generic studies of reconnection in the MHD regime (Forbes & Priest, 1983).
To obtain steady Petschek reconnection of the form depicted in the standard model, it is necessary that the reconnection electric field be somehow localized within the large-scale current sheet (Biskamp & Schwarz, 2001; Kulsrud, 2001). This localization will not occur if the magnetic induction equation includes only a uniform resistivity. For this reason, it is common for models to use a current-dependent anomalous resistivity (Magara, Mineshige, Yokoyama, & Shibata, 1996; Ugai & Tsuda, 1977). Doing so yields numerical results closely resembling the standard model cartoon, that is, Figure 3b, including a fast magnetosonic shock at the termination (Forbes & Malherbe, 1986), although exhibiting its own dynamics (Takasao, Matsumoto, Nakamura, & Shibata, 2015). Other models have used uniform resistivity, or no resistivity at all, and cannot therefore have fast, steady reconnection. Instead they exhibit reconnection of an unsteady variety, including multiple, evolving magnetic islands (Karpen, Antiochos, & DeVore, 2012). Such reconnection is also found in more generic studies (Bhattacharjee, HuangYang, & Rogers, 2009; Loureiro, Schekochihin, & Cowley, 2007). These evolving islands have been invoked to explain certain features observed in the context of solar flares, such as supra-arcade downflows (McKenzie & Hudson, 1999; Savage, McKenzie, & Reeves, 2012). Flare reconnection differs from more generic varieties owing to the significant role played by the cool chromosphere to which the reconnecting field lines are anchored. This layer is responsible for many of the observational signatures of a solar flare. Several numerical and analytic models have examined the role of field-aligned thermal conductivity by which energy may be transported to the chromosphere (Chen, Fang, Tang, & Ding, 1999; Forbes, Malherbe, & Priest, 1989; Yokoyama & Shibata, 1997). These exhibit a layer of hot plasma surrounding the outflow jet and chromospheric evaporation, but the most detailed studies of the latter process remain those using a class of one-dimensional flare loop models. # Flare Loop Models Flare loop models consider, for the most part, the plasma dynamics in a static, closed magnetic loop. Magnetic evolution is neglected, or assumed to be complete, leaving a stationary curved tube. Plasma flows only along this static loop, with velocity parallel to the axis. The loop is assumed thin enough to reduce the problem to a single spatial dimension. Mass density, plasma velocity (parallel), and plasma pressure are all functions of axial position, and evolve in time as required by the conservation of mass, momentum, and energy. Assuming a static loop obviates the need for an equation governing magnetic field evolution, leaving a system of gas dynamic equations, rather than MHD. Restriction to a single spatial dimension permits numerical solutions to resolve scales as small as meters—scales which can develop in a flare’s low atmosphere (Fisher, Canfield, & McClymont, 1985a; MacNeice, Burgess, McWhirter, & Spicer, 1984). The energy equation typically includes radiative transport, including optically thin losses from the corona and thermal conduction along the tube’s axis. It also includes an energy source term representing the magnetic energy released to produce the flare. In some versions the source term is an ad hoc function of space and time representing a generic dissipation, and typically concentrated around the loop top (Cheng, Oran, Doschek, Boris, & Mariska, 1983; MacNeice, 1986). 
In others it is taken to be the energy deposition from non-thermal electrons, which had originated at the loop top with a specified flux and energy spectrum (Emslie & Nagai, 1985; Fisher, Canfield, & McClymont, 1985b). Both versions have been extensively studied, and produce broadly similar evolution, largely conforming to observations.

## Integrated (0D) Models

The one-dimensional gas dynamic equations can be simplified further by integrating them over the loop's coronal section. This yields two ordinary differential equations governing the time evolution of total coronal mass and energy, or equivalently, average coronal density and temperature (Antiochos & Sturrock, 1978). A few assumptions are made about the spatial profiles of the primitive quantities, but the result is a robust zero-dimensional system based on global conservation laws. The equations may be solved numerically, or analytically after a few more assumptions; examples of each are shown in Figure 5. The former approach generally shows evolution in three phases, as illustrated in Figure 5c and 5d. Analytical approaches assume this three-phase evolution (Cargill, Mariska, & Antiochos, 1995).

Figure 5. Zero-dimensional models of a flare loop of full length $L=40$ Mm to which $2\times10^{11}\ \mathrm{erg/cm^2}$ is added over 1 s. The left column, (a) and (c), shows the evolution of average coronal density (blue), along the left axis, and average coronal temperature (red), along the right axis, against a logarithmic time axis. The right column, (b) and (d), shows the evolution in temperature/density space. These curves progress clockwise, as indicated by arrows on (d). Magenta dashed curves show the line along which radiative time scales equal the conductive time scale. Violet dashed lines mark a line of constant pressure equal to the total energy input uniformly distributed over the loop volume. The top row, (a) and (c), shows the numerical solution of EBTEL (Klimchuk, Patsourakos, & Cargill, 2008), while the bottom row, (c) and (d), is from the analytic model of Cargill, Mariska, and Antiochos (1995). For comparison, a grey curve in (b) shows the evolution when the same total energy is added over 10 s rather than 1 s.

During the heating phase (see Figure 5a) energy is added and the coronal temperature rises more rapidly than density can respond. This phase is particularly distinct if the duration of energy input is short, as in the example, otherwise it overlaps the next phase. In the next phase, coronal heat is transported to the chromosphere where it drives evaporation, carrying much of the energy back into the corona (i.e., by enthalpy flux). Evaporation thus increases the corona's mass, but keeps its energy, and thus its pressure, largely constant: the evaporative phase proceeds along a line of constant $n_e T$, as shown by violet dashed lines in Figures 5a and 5c. The time scale for optically thin radiative losses, scaling inversely with the square of the density, is very long at the high temperatures and low densities found at the end of the heating phase. Evaporation thus proceeds on the shorter conductive time scale, until the coronal density has increased enough to make the radiative time-scale comparable. The corona then begins to lose energy through radiation—its final phase. It is no coincidence that the equality of these two time scales is also the condition for mechanical equilibrium, so the cooling occurs through a series of loop equilibria.
This final, radiative, phase is the longest and thus dictates the overall lifetime of the flare loop: $2000$ s in Figure 5. Unless the heating persists throughout this phase, models generally find loop cooling times shorter than the gradual phases of flares with properties similar to simulated loops. Coronal emission, including the GOES 1–8Å band, peaks when the density peaks at the end of the evaporative phase. The zero-dimensional model can relate that emission peak to the total energy input and other parameters of the loop. Warren and Antiochos (2004) followed this approach to obtain an expression for peak emission $F_{1-8}\simeq(4\times10^{-5}\ \mathrm{W/m^2})\,E_{30}^{7/4}\,L_9^{-1}\,A_{18}^{-3/4}$, where $E_{30}$ is the total energy added to the loop in units of $10^{30}$ ergs, and $L_9$ and $A_{18}$ are the loop's full length and cross sectional area in units of $10^9$ cm and $10^{18}\ \mathrm{cm^2}$ respectively. This would relate the observed SXR peak to the energy of a flare, provided the flare behaved as a single loop. However, a large flare is not well described as a single loop, so the above relation is only approximate.

## Gas-Dynamic (1D) Models

Solutions to the full gas dynamic equations generally corroborate the conclusion of zero-dimensional models that radiative cooling occurs through a sequence of equilibria. Conversely, the processes of heating and evaporation are found to be very dynamic, with flow speeds at or above the sound speed, as shown in the $t\le 40$ s curves of Figure 6. These phases are therefore better studied using the full one-dimensional gas dynamic models (Mariska, Doschek, Boris, Oran, & Young, 1982; Nagai, 1980; Pallavicini et al., 1983).

Figure 6. Evolution of a one-dimensional model of a flare loop like that shown in Figure 5: $L=40$ Mm to which $2\times10^{11}\ \mathrm{erg/cm^2}$ is added over 2 seconds. The simulation includes thermal conduction, but no non-thermal particles. The four rows show pressure, velocity, temperature, and density, reading down. The left column shows the left half of the loop's coronal section. The right column zooms in to the left footpoints, including a crude chromospheric section ($l<0$). The colors represent times, $t=0$ s (black), 5 (red), 10 (violet), 20 (green), 40 (magenta), and 120 s (yellow). The axis atop the upper left panel shows the integrated column for the initial loop, in units of $10^{19}\ \mathrm{cm^{-2}}$.

Energy added to the chromosphere, either deposited by non-thermal electrons or conducted from the corona, creates a pressure peak that drives material upward as evaporation (see velocity plots in Figure 6). This occurs in an ablative rarefaction wave with a shock at its front. The upflow speeds in the models are several hundred km/s, which is typically supersonic. Fisher, Canfield, and McClymont (1984) placed an upper bound on the evaporation speed of two to three times the isothermal sound speed. Longcope (2014) found that evaporation driven by thermal conduction flux $F_c$ reached a velocity $v_e\sim F_c^{1/3}$. Energy deposited by non-thermal electrons results in chromospheric evaporation classified as either gentle or explosive. The electrons deposit energy in cooler denser chromospheric layers, where radiative loss is particularly effective and becomes more so as heating drives up the temperature there. It is therefore possible for the deposited energy to be immediately radiated with only minor effects; this is called gentle evaporation. Optically thin losses increase with temperature until peaking at a maximum volumetric rate at around 150,000 K.
If the rate of deposition exceeds this maximum the radiation will be unable to compensate, allowing temperature and pressure to rise explosively: this is explosive evaporation. The low-lying pressure peak drives plasma upward (evaporation) as well as downward. This downward motion, essentially a back-reaction to the evaporation, is called chromospheric condensation, and has been observed (Canfield, Metcalf, Strong, & Zarro, 1987; Graham & Cauzzi, 2015).

Radiation cannot be assumed optically thin in deeper layers of the low chromosphere, or for wavelengths around very strong spectral lines. Accurate treatment of these cases requires an explicit treatment of the radiative transfer. This is currently the state of the art for flare modeling (Allred, Hawley, Abbett, & Carlsson, 2005; McClymont & Canfield, 1983), and is essential in order to accurately model the form of strong, optically thick spectral lines in a flare.

## Multi-Loop Models

Investigators have encountered some difficulties modeling the full light curves from a given flare as a single loop. A particularly vexing difficulty is that gradual phases tend to last longer than the radiative cooling time of a characteristic loop (Qiu & Longcope, 2016; Warren, 2006). This has led to the conclusion that a single flare consists of many distinct loops evolving independently after being energized at different times (Hori, Yokoyama, Kosugi, & Shibata, 1997; Qiu et al., 2012; Reeves & Warren, 2002; Warren, 2006). This means the phases of flare evolution, impulsive and gradual, do not map simply onto the phases of flare loop evolution. Energy release is not restricted to the impulsive phase, but may continue through much or all of the gradual phase. This is consistent with many observations showing ribbons continuing their spreading motion during this phase (Longcope, Qiu, & Brewer, 2016). Nor is the gradual phase entirely equivalent to the cooling phase of a single loop. Figure 1b shows ample emission by 10 MK Fe xx throughout most of the gradual phase, so the plasma cannot be cooling everywhere.

This understanding has led to models capable of reproducing with reasonable fidelity most of a flare's myriad light curves (Liu, Qiu, Longcope, & Caspi, 2013; Qiu, Sturrock, Longcope, Klimchuk, & Liu, 2013). The flare is synthesized from a set of loops, initiated in sequence, and the flare's light curve is a superposition of the light curves of the loop sequence. The flare's time scale is therefore set by the loop initialization sequence and not by the radiative cooling of a single loop. Moreover, the measured flux transfer rate, $\dot{\varphi}_{rx}$, reflects the rate of loop creation, rather than the X-point reconnection electric field that it would in steady models (Longcope, Des Jardins, Carranza-Fulmer, & Qiu, 2010; Longcope, Qiu, & Brewer, 2016).

# Non-Thermal Particle Models

A population of particles can be described by its distribution function, $f(x,E,\mu)$, depending on particle energy $E$ and the pitch-angle cosine $\mu=\cos\alpha$. Coulomb collisions among charged particles will drive their distribution toward a Maxwellian, $f \sim \sqrt{E}\,\exp(-E/k_B T)$. This limiting form is achieved after a few dozen collisions. At particle densities typical of a flare ($n_e \sim 10^{10}\ \mathrm{cm^{-3}}$), 1 keV electrons will collide with frequency $\nu_c \sim 10$ Hz, and thereby remain approximately Maxwellian during a flare. The Coulomb collision frequency scales inversely with particle energy, as $\nu_c \sim E^{-3/2}$, so electrons over $E \sim 100$ keV collide rarely and can travel more than 100 Mm before colliding once.
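These order-of-magnitude claims are easy to verify. The short check below is not from the article; it simply scales the quoted 10 Hz collision frequency at 1 keV as $E^{-3/2}$ and combines it with the relativistic electron speed to estimate the distance travelled between collisions.

```python
import math

c = 3.0e10        # speed of light [cm/s]
me_c2 = 511.0     # electron rest energy [keV]

def collision_freq(E_keV, nu_1keV=10.0):
    """Coulomb collision frequency scaled from ~10 Hz at 1 keV (values quoted in the text)."""
    return nu_1keV * E_keV ** -1.5

def speed(E_keV):
    """Relativistic electron speed for kinetic energy E_keV."""
    gamma = 1.0 + E_keV / me_c2
    return c * math.sqrt(1.0 - 1.0 / gamma ** 2)

for E in (1.0, 10.0, 100.0):
    nu = collision_freq(E)
    mfp_Mm = speed(E) / nu / 1.0e8     # rough distance between collisions in Mm (1 Mm = 1e8 cm)
    print(f"E = {E:5.0f} keV   nu_c = {nu:8.3f} Hz   distance between collisions ~ {mfp_Mm:10.1f} Mm")
```

At 100 keV this gives a collision frequency of about 0.01 Hz and a path of well over 100 Mm between collisions, consistent with the statement above.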
It is these electrons which travel unimpeded along the flare loop to deposit energy at the feet and create the flare ribbons and the footpoint HXR sources. Lacking frequent collisions, the distribution function need not remain Maxwellian at high energies. Instead it is often observed to be better described by a power law, $f \sim E^{-\delta'}$.² The entire distribution function is most often written as a sum of a Maxwellian, called the thermal component, and a non-thermal component whose distribution function is a power law restricted to $E > E_c$. The low-energy cut-off $E_c$ is formally required for normalizability, but is more physically needed so that non-thermal collision rates are low enough to justify a departure from the Maxwellian. The cut-off is generally expected at energies where the thermal component dominates the sum, and thus proves extremely difficult to constrain well by observation.

The lowest three moments of the distribution function correspond to density, fluid velocity, and pressure. A Maxwellian distribution is completely determined by these moments alone, but any other distribution requires more moments, or the entire function, to be specified. Fluid equations, such as assumed in previous sections (see also "Magnetohydrodynamics—Overview," Priest, 2019), describe the evolution of the lowest three moments and can therefore be considered a valid, but partial, description of plasma evolution. Their description is reasonably complete provided energies are low enough, and collisions frequent enough, to keep the distribution close to Maxwellian, and thus fully described by its lowest moments.

When collisions are not frequent enough, the evolution of the distribution function must be followed using the Fokker–Planck equation. This includes effects of single-particle motion such as propagation along the field line, and mirroring from points of strong field (Parker, 1958; Rosenbluth, MacDonald, & Judd, 1957). It also includes a velocity-space diffusion arising from the average effect of random, high-frequency electric and magnetic fields. These high-frequency fields can arise from Coulomb collisions or from randomly phased plasma waves of various kinds. The Coulomb contribution will, as mentioned above, cause the distribution function to relax toward a Maxwellian. The other contributions, however, can drive evolution in other directions, and can thus contribute to the creation of a non-thermal component.

## Models of Particle Acceleration

The process of generating the non-thermal component from an erstwhile Maxwellian distribution is known as particle acceleration. A wide variety of models have been proposed for the acceleration process in flares. All are set within the standard flare model framework and produce distributions resembling power laws. It has thus proven difficult to reach a consensus on which mechanism is at work in a solar flare. This question remains open.

A number of models, collectively known as second-order Fermi or stochastic acceleration (SA) models, focus on the velocity-space diffusion from a spectrum of randomly phased plasma waves. Waves of various kinds can be generated by the MHD turbulence expected within the reconnection outflow jet, featured in Figure 3b. Velocity diffusion is dominated by resonant interactions between the waves and the particles. This poses a challenge for SA models in general, since plasma waves often have phase speeds much higher than thermal particle speeds.
Many investigators have, however, been able to show that resonances can occur with different wave modes under reasonable assumptions. Some even follow the evolution of the wave spectrum (Miller, Larosa, & Moore, 1996; Petrosian, Yan, & Lazarian, 2006). Stochastic acceleration therefore remains a viable explanation for high-energy flare particles.

A related model considers the effects of an MHD shock along with turbulence capable of repeatedly scattering the particles back to the shock (i.e., effective pitch-angle scattering). These elements combine in a process known as first-order Fermi or diffusive shock acceleration (DSA), which has been extensively studied and observed in other astrophysical and space plasmas (see Blandford & Eichler, 1987, for a review). It results in a power-law distribution whose index, $\delta$, is related to the plasma compression ratio across the shock. The fast magnetosonic shock predicted at the termination point (see Figure 3) is an ideal location for DSA (Tsuneta & Naito, 1998; Mann, Aurass, & Warmuth, 2006), and some observations suggest acceleration is indeed occurring there (Chen et al., 2015a; Sui & Holman, 2003).

Charged particles can be temporarily confined either on closed field lines or between magnetic mirror points. As the magnetic field changes, it can add energy to the trapped particles through the betatron term, curvature drift, or head-on reflection from a moving mirror point. A certain class of models invokes these effects to explain particle acceleration. The magnetic field strength will have a local minimum at the end of the outflow region. This can serve as a magnetic trap, and if it shrinks in size, the particles trapped there can gain energy. This is the basis of the collapsing trap model (Somov & Kosugi, 1997; Karlický & Kosugi, 2004). Alternatively, MHD turbulence in the outflow jet could feature closed magnetic islands, often called plasmoids in reconnection models (Loureiro, Schekochihin, & Cowley, 2007; Shibayama, Kusano, Miyoshi, Nakabou, & Vekstein, 2015). These islands will tend to evolve from elongated to circular, and in so doing accelerate the particles trapped within them (Drake, Swisdak, Schoeffler, Rogers, & Kobayashi, 2006).

Finally, it is possible for the large-scale electric field, the defining feature of magnetic reconnection, to accelerate charged particles directly, in so-called direct acceleration. It has already been noted that reconnection is observed at rates $\dot{\varphi} \sim 10^{18}$ Mx/s. If this occurred as a steady, large-scale electric field along an X-line, there would be a $V \sim 10^{10}$ volt drop along it, corresponding to far more energy than is observed in any flare electron. Simple as this seems, a detailed model faces several challenges. A plasma tends to screen out any electric field component parallel to the magnetic field, and undergoes a dramatic response if subjected to a field in excess of the so-called Dreicer field ($E_D \sim 10^{-2}$ V/m for a typical flare plasma). Moreover, the simplest scenario would predict all particles of a given charge to be accelerated in the same direction, in apparent contradiction to observations showing electron precipitation at both feet of a loop (see Figure 2b).
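The size of the quoted potential drop follows from unit conversion alone; the snippet below is just that arithmetic (the flux transfer rate is the observed value quoted above).

```python
MX_PER_WB = 1.0e8                       # 1 weber = 1e8 maxwell

phi_dot_mx = 1.0e18                     # observed flux transfer rate [Mx/s], as quoted in the text
emf_volts = phi_dot_mx / MX_PER_WB      # dPhi/dt in SI units gives the potential drop in volts
print(f"EMF along the X-line ~ {emf_volts:.1e} V")   # ~1e10 V, matching the estimate above
```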
Several investigators have produced models which overcome these challenges, demonstrating the viability of direct acceleration (Emslie & Hénoux, 1995; Holman, 1985; Litvinenko, 1996; Martens, 1988).

## Models of Non-Thermal Particle Propagation

In most of these models, charged particles are energized within a certain region of the solar flare, such as the reconnection site, the outflow jet, or the termination shock. From there the particles propagate along magnetic field lines until they have dissipated their energy and rejoined the thermal population. An electron with energy $E_{\mathrm{keV}}$ (in keV) and pitch-angle cosine $\mu$ will traverse a total column $N = \int n\,dl \simeq (10^{17}\ \mathrm{cm^{-2}})\,\mu E_{\mathrm{keV}}^{2}$ before stopping. Electrons leaving the acceleration region roughly parallel to the field ($\mu \simeq 1$) with $E \ge 10$ keV will not stop until they have reached the chromosphere, where $N \simeq 10^{19}\ \mathrm{cm^{-2}}$ (see the upper left axis of Figure 6). They will lose the vast majority of this energy at the very end of their journey, leading to chromospheric energy deposition.

Virtually all the energy lost by propagating particles goes into heating the background plasma, a colder thermal population. For electrons, a very small fraction (typically $10^{-5}$) is converted to photons via bremsstrahlung, which thus provides the most direct diagnostic of the non-thermal electron population. (Ions lose a far smaller fraction to bremsstrahlung, making their detection far more challenging.) An electron with energy $E$ can emit photons of energy $\varepsilon \le E$. A single electron will emit a spectrum of bremsstrahlung photons before ultimately joining the thermal population; the complete process is known as thick-target emission. A power-law distribution of electrons, $F(E) \sim E^{-\delta}$, will thereby produce a power-law distribution of photons, $I(\varepsilon) \sim \varepsilon^{-\gamma}$, with $\gamma = \delta - 1$ in this thick-target process (Tandberg-Hanssen & Emslie, 1988). Hard X-ray spectra from flare footpoints generally exhibit power laws with $\gamma \ge 2$, corresponding to electron distributions with $\delta \ge 3$.

A coronal column $N \ll 10^{19}\ \mathrm{cm^{-2}}$ will have little effect on the energy of the electrons propagating through it. Bremsstrahlung emission under this condition, called thin-target emission, will reflect the distribution of energies at which electrons are produced (accelerated). The resulting photon spectrum will therefore have a power-law index, $\gamma = \delta + 1$, considerably softer than for thick-target emission (Tandberg-Hanssen & Emslie, 1988).

## References

Allred, J. C., Hawley, S. L., Abbett, W. P., & Carlsson, M. (2005). Radiative Hydrodynamic Models of the Optical and Ultraviolet Emission from Solar Flares. The Astrophysical Journal, 630, 573.Find this resource: Amari, T., Luciani, J. F., Mikic, Z., & Linker, J. (2000). A Twisted Flux Rope Model for Coronal Mass Ejections and Two-Ribbon Flares. The Astrophysical Journal, 529, L49.Find this resource: Antiochos, S. K., Devore, C. R., & Klimchuk, J. A. (1999). A Model for Solar Coronal Mass Ejections. The Astrophysical Journal, 510, 485.Find this resource: Antiochos, S. K., & Sturrock, P. A. (1978). Evaporative cooling of flare plasma. The Astrophysical Journal, 220, 1137.Find this resource: Antonucci, E., Alexander, D., Culhane, J. L., de Jager, C., MacNeice, P., Somov, B. V., & Zarro, D. M. (1999). Flare dynamics. In K. T. Strong, J. L. R. Saba, B. M. Haisch, & J. T. Schmelz (Eds.), The many faces of the sun: A summary of the results from NASA's Solar Maximum Mission (p. 331). New York, NY: Springer.Find this resource: Antonucci, E., & Dennis, B. R. (1983).
Observation of chromospheric evaporation during the Solar Maximum Mission. Solar Physics, 86, 67.Find this resource: Aschwanden, M. J., & Alexander, D. (2001). Flare Plasma Cooling from 30 MK down to 1 MK modeled from Yohkoh, GOES, and TRACE observations during the Bastille Day Event (14 July 2000). Solar Physics, 204, 91.Find this resource: Aschwanden, M. J., Scholkmann, F., B´ethune, W., Schmutz, W., Abramenko, V., Cheung, M. C. M., M¨uller, D., Benz, A., Chernov, G., Kritsuk, A. G., Scargle, J. D., Melatos, A., Wagoner, R. V., Trimble, V., & Green, W. H. (2018). Order out of Randomness: Self-Organization Processes in Astrophysics. Space Science Reviews, 214, 55.Find this resource: Aulanier, G., Démoulin, P., Schrijver, C. J., Janvier, M., Pariat, E., & Schmieder, B. (2013). The standard flare model in three dimensions. II. Upper limit on solar flare energy. Astronomy & Astrophysics, 549, A66.Find this resource: Aurass, H., Vršnak, B., & Mann, G. (2002). Shock-excited radio burst from reconnection outflow jet? Astronomy & Astrophysics, 384, 273.Find this resource: Bak, P., Tang, C., & Wiesenfeld, K. (1988). Self-organized criticality. Physical Review A, 38, 364.Find this resource: Bhattacharjee, A., Huang, Y.-M., Yang, H., & Rogers, B. (2009). Fast reconnection in high-Lundquist-number plasmas due to the plasmoid Instability. Physics of Plasmas, 16, 112102.Find this resource: Biskamp, D., & Schwarz, E. (2001). Localization, the clue to fast magnetic reconnection. Physics of Plasmas, 8, 4729.Find this resource: Biskamp, D., & Welter, H. (1989). Magnetic arcade evolution and instability. Solar Physics, 120, 49.Find this resource: Blandford, R., & Eichler, D. (1987). Particle acceleration at astrophysical shocks: A theory of cosmic ray origin. Physics Reports, 154, 1.Find this resource: Bogachev, S. A., Somov, B. V., Kosugi, T., & Sakao, T. (2005). The Motions of the Hard X-Ray Sources in Solar Flares: Images and Statistics. The Astrophysical Journal, 630, 561.Find this resource: Canfield, R. C., Brown, J. C., Brueckner, G. E., Cook, J. W., Craig, I. J. D., Doschek, G. A., Emslie, A. G., Henoux, J.-C., Lites, B. W., Machado, M. E., & Underwood, J. H. (1980). The Chromosphere and Transition Region. In P. A. Sturrock (Ed.), Solar flares. A monograph from Skylab Solar Workshop II (p. 231). Boulder: Colorado Associated University Press.Find this resource: Canfield, R. C., Metcalf, T. R., Strong, K. T., & Zarro, D. M. (1987). A novel observational test of momentum balance in a solar flare. Nature, 326, 165.Find this resource: Cargill, P. J., Mariska, J. T., & Antiochos, S. K. (1995). Cooling of solar flares plasmas. 1: Theoretical considerations. The Astrophysical Journal, 439, 1034.Find this resource: Carmichael, H. (1964). A Process for Flares. In W. N. Hess (Ed.), AAS-NASA Symposium on the Physics of Solar Flares (p. 451). Washington, DC: NASA.Find this resource: Chamberlin, P. C., Woods, T. N., & Eparvier, F. G. (2008). Flare Irradiance Spectral Model (FISM): Flare component algorithms and results. Space Weather, 6, S05001.Find this resource: Chen, B., Bastian, T. S., Shen, C., Gary, D. E., Krucker, S., & Glesener, L. (2015a). Particle acceleration by a solar flare termination shock. Science, 350, 1238.Find this resource: Chen, H., Zhang, J., Ma, S., Yang, S., Li, L., Huang, X., & Xiao, J. (2015b). Confined Flares in Solar Active Region 12192 from 2014 October 18 to 29. The Astrophysical Journal, 808, L24.Find this resource: Chen, P. F., Fang, C., Tang, Y. H., & Ding, M. D. (1999). 
Simulation of Magnetic Reconnection with Heat Conduction. The Astrophysical Journal, 513, 516.Find this resource: Cheng, C.-C., Oran, E. S., Doschek, G. A., Boris, J. P., & Mariska, J. T. (1983). Numerical simulations of loops heated to solar flare temperatures I. The Astrophysical Journal, 265, 1090.Find this resource: Crosby, N. B., Aschwanden, M. J., & Dennis, B. R. (1993). Frenquency distributions and correlations of solar x-ray flar parameters. Solar Physics, 143, 275.Find this resource: Démoulin, P., & Aulanier, G. (2010). Criteria for Flux Rope Eruption: Non-equilibrium Versus Torus Instability. The Astrophysical Journal, 718, 1388.Find this resource: Dennis, B. R., & Zarro, D. M. (1993). The Neupert effect - What can it tell us about the impulsive and gradual phases of solar flares? Solar Physics, 146, 177.Find this resource: Drake, J. F., Swisdak, M., Schoeffler, K. M., Rogers, B. N., & Kobayashi, S. (2006). Formation of secondary islands during magnetic reconnection. Geophysical Research Letters, 33, 13105.Find this resource: Emslie, A. G., Dennis, B. R., Holman, G. D., & Hudson, H. S. (2005). Refinements to flare energy estimates: A followup to “Energy partition in two solar flare/CME events” by A. G. Emslie et al. Journal of Geophysical Research, 110, A11103.Find this resource: Emslie, A. G., Dennis, B. R., Shih, A. Y., Chamberlin, P. C., Mewaldt, R. A., Moore, C. S., Share, G. H., Vourlidas, A., & Welsch, B. T. (2012). Global Energetics of Thirty-eight Large Solar Eruptive Events. The Astrophysical Journal, 759, 71.Find this resource: Emslie, A. G., & Hénoux, J.-C. (1995). The electrical current structure associated with solar flare electrons accelerated by large-scale electric fields. The Astrophysical Journal, 446, 371.Find this resource: Emslie, A. G., Kucharek, H., Dennis, B. R., Gopalswamy, N., Holman, G. D., Share, G. H., Vourlidas, A., Forbes, T. G., Gallagher, P. T., Mason, G. M., Metcalf, T. R., Mewaldt, R. A., Murphy, R. J., Schwartz, R. A., & Zurbuchen, T. H. (2004). Energy partition in two solar flare/CME events. Journal of Geophysical Research, 109, 10104.Find this resource: Emslie, A. G., & Nagai, F. (1985). Gas dynamics in the impulsive phase of solar flares. II - The structure of the transition region - A diagnostic of energy transport processes. The Astrophysical Journal, 288, 779.Find this resource: Fisher, G. H., Canfield, R. C., & McClymont, A. N. (1984). Chromospheric evaporation velocities in solar flares. The Astrophysical Journal, 281, L79.Find this resource: Fisher, G. H., Canfield, R. C., & McClymont, A. N. (1985a). Flare loop radiative hydrodynamics. V - Response to thick-target heating. The Astrophysical Journal, 289, 414.Find this resource: Fisher, G. H., Canfield, R. C., & McClymont, A. N. (1985b). Flare Loop Radiative Hydrodynamics. VI - Chromospheric Evaporation due to Heating by Nonthermal Electrons. The Astrophysical Journal, 289, 425.Find this resource: Fletcher, L., & Hudson, H. (2001). The Magnetic Structure and Generation of EUV Flare Ribbons. Solar Physics, 204, 69.Find this resource: Fletcher, L., Pollock, J. A., & Potts, H. E. (2004). Tracking of TRACE Ultraviolet Flare Footpoints. Solar Physics, 222, 279.Find this resource: Forbes, T. G. (1986). Fast-shock formation in line-tied magnetic reconnection models of solar flares. The Astrophysical Journal, 305, 553.Find this resource: Forbes, T. G., & Acton, L. W. (1996). Reconnection and Field Line Shrink age in Solar Flares. 
The Astrophysical Journal, 459, 330.Find this resource: Forbes, T. G., & Isenberg, P. A. (1991). A catasrophe mechanism for coronal mass ejections. The Astrophysical Journal, 373, 294.Find this resource: Forbes, T. G., & Malherbe, J. M. (1986). A shock condensation mechanism for loop prominences. The Astrophysical Journal, 302, L67.Find this resource: Forbes, T. G., Malherbe, J. M., & Priest, E. R. (1989). The formation flare loops by magnetic reconnection and chromospheric ablation. Solar Physics, 120, 285.Find this resource: Forbes, T. G., & Priest, E. R. (1983). A numerical experiment relevant to line-tied reconnection in two-ribbon flares. Solar Physics, 84, 169.Find this resource: Forbes, T. G., & Priest, E. R. (1984). Reconnection in Solar Flares. In D. Butler & K. Papadopoulos (Eds.), Solar terrestrial physics: Present and future (p. 35). Washington, DC: NASA.Find this resource: Forbes, T. G., Priest, E. R., Seaton, D. B., & Litvinenko, Y. E. (2013). Indeterminacy and instability in Petschek reconnection. Physics of Plasmas, 20, 052902.Find this resource: Fuller-Rowell, T., & Solomon, S. C. (2010). Flares, coronal mass ejections, and atmopsheric responses. In C. J. Schrijver & G. Siscoe (Eds.), Heliophysics II. Space storms and radiation: Causes and effects (p. 321). Cambridge, UK: Cambridge University Press.Find this resource: Goff, C. P., Matthews, S. A., van Driel-Gesztelyi, L., & Harra, L. K. (2004). Relating magnetic field strengths to hard X-ray emission in solar flares. Astronomy & Astrophysics, 423, 363.Find this resource: Gosling, J. T. (1993). The solar flare myth. Journal of Geophysical Research, 98, 18937.Find this resource: Graham, D. R., & Cauzzi, G. (2015). Temporal Evolution of Multiple Evaporating Ribbon Sources in a Solar Flare. The Astrophysical Journal, 807, L22.Find this resource: Hirayama, T. (1974). Theoretical Model of Flares and Prominences. I: Evaporating Flare Model. Solar Physics, 34, 323.Find this resource: Holman, G. D. (1985). Acceleration of runaway electrons and Joule heating in solar flares. The Astrophysical Journal, 293, 584.Find this resource: Hood, A. W., & Priest, E. R. (1980). Magnetic instability of coronal arcades as the origin of two-ribbon flares. Solar Physics, 66, 113.Find this resource: Hood, A. W., & Priest, E. R. (1981). Critical conditions for magnetic instabilities in force-free coronal loops. Geophysical and Astrophysical Fluid Dynamics, 17, 297.Find this resource: Hori, K., Yokoyama, T., Kosugi, T., & Shibata, K. (1997). Pseudo–Two- dimensional Hydrodynamic Modeling of Solar Flare Loops. The Astrophysical Journal, 489, 426.Find this resource: Kahler, S. W. (1982). The role of the big flare syndrome in correlations of solar energetic proton fluxes and associated microwave burst parameters. Journal of Geophysical Research, 87, 3439.Find this resource: Karlický, M., & Kosugi, T. (2004). Acceleration and heating processes in ay collapsing magnetic trap. Astronomy & Astrophysics, 419, 1159.Find this resource: Karpen, J. T., Antiochos, S. K., & DeVore, C. R. (2012). The Mechanisms for the Onset and Explosive Eruption of Coronal Mass Ejections and Eruptive Flares. The Astrophysical Journal, 760, 15.Find this resource: Kliem, B., & Török, T. (2006). Torus Instability. Physical Review Letters, 96, 255002.Find this resource: Klimchuk, J. A., Patsourakos, S., & Cargill, P. J. (2008). Highly Efficient Modeling of Dynamic Coronal Loops. The Astrophysical Journal, 682, 1351.Find this resource: Kopp, G., Lawrence, G., & Rottman, G. (2005). 
The Total Irradiance Monitor (TIM): Science Results. Solar Physics, 230, 129.Find this resource: Kopp, R. A., & Pneuman, G. W. (1976). Magnetic reconnection in the corona and the loop prominence phenomenon. Solar Physics, 50, 85.Find this resource: Kretzschmar, M., de Wit, T. D., Schmutz, W., Mekaoui, S., Hochedez, J.-F., & Dewitte, S. (2010). The effect of flares on total solar irradiance. Nature Physics, 6, 690.Find this resource: Krucker, S., Battaglia, M., Cargill, P. J., Fletcher, L., Hudson, H. S., MacKinnon, A. L., Masuda, S., Sui, L., Tomczak, M., Veronig, A. L., Vlahos, L., & White, S. M. (2008). Hard X-ray emission from the solar corona. Astronomy and Astrophysics Review, 16, 155.Find this resource: Kulsrud, R. M. (2001). Magnetic reconnection: Sweet-Parker vs. Petscheck. Earth, Planets and Space, 53, 417.Find this resource: Kusano, K., Maeshiro, T., Yokoyama, T., & Sakurai, T. (2004). The Trigger Mechanism of Solar Flares in a Coronal Arcade with Reversed Magnetic Shear. The Astrophysical Journal, 610, 537.Find this resource: Litvinenko, Y. E. (1996). Particle Acceleration in Reconnecting Current Sheets with a Nonzero Magnetic Field. The Astrophysical Journal, 462, 997.Find this resource: Liu, W.-J., Qiu, J., Longcope, D. W., & Caspi, A. (2013). Determining Heating Rates in Reconnection Formed Flare Loops of the M8.0 Flare on 2005 May 13. The Astrophysical Journal, 770, 111.Find this resource: Longcope, D. W. (2014). A Simple Model of Chromospheric Evaporation and Condensation Driven Conductively in a Solar Flare. The Astrophysical Journal, 795, 10.Find this resource: Longcope, D. W., Des Jardins, A. C., Carranza-Fulmer, T., & Qiu, J. (2010). A Quantitative Model of Energy Release and Heating by Time-dependent, Localized Reconnection in a Flare with a Thermal Looptop X-ray Source. Solar Physics, 267, 107.Find this resource: Longcope, D. W., & Forbes, T. G. (2014). Breakout and tether-cutting eruption models are both catastrophic (sometimes). Solar Physics, 6, 2091.Find this resource: Longcope, D. W., Qiu, J., & Brewer, J. (2016). A reconnection-driven model of the hard X-ray loop-top source from flare 2004-Feb-26. The Astrophysical Journal, 833, 211.Find this resource: Loureiro, N. F., Schekochihin, A. A., & Cowley, S. C. (2007). Instability of current sheets and formation of plasmoid chains. Physics of Plasmas, 14, 100703.Find this resource: Lu, E. T., & Hamilton, R. J. (1991). Avalanches and the distribution of solar flares. The Astrophysical Journal, 380, L89.Find this resource: MacNeice, P. (1986). A numerical hydrodynamic model of a heated coronal loop. Solar Physics, 103, 47.Find this resource: MacNeice, P., Burgess, A., McWhirter, R. W. P., & Spicer, D. S. (1984). A numerical model of a solar flare based on electron beam heating of the chromospheres. Solar Physics, 90, 357.Find this resource: Magara, T., Mineshige, S., Yokoyama, T., & Shibata, K. (1996). Numerical Simulation of Magnetic Reconnection in Eruptive Flares. The Astrophysical Journal, 466, 1054.Find this resource: Mann, G., Aurass, H., & Warmuth, A. (2006). Electron acceleration by the reconnection outflow shock during solar flares. Astronomy & Astrophysics, 454, 969.Find this resource: Mariska, J. T., Doschek, G. A., Boris, J. P., Oran, E. S., & Young, T. R., Jr. (1982). Solar transition region response to variations in the heating rate. The Astrophysical Journal, 255, 783.Find this resource: Martens, P. C. H. (1988). The generation of proton beams in two-ribbon flares. 
The Astrophysical Journal, 330, L131.Find this resource: Masuda, S., Kosugi, T., Hara, H., Tsuneta, S., & Ogawara, Y. (1994). A loop-top hard X-ray source in a compact solar flare as evidence for magnetic reconnection. Nature, 371, 495.Find this resource: McClymont, A. N., & Canfield, R. C. (1983). Flare loop radiative hydrodynamics. I - Basic methods. The Astrophysical Journal, 265, 483.Find this resource: McKenzie, D. E. (2002). Signatures of Reconnection in Eruptive Flares. In P. C. H. Martens & D. Cauffman (Eds.), Multi-wavelength observations of coronal structure and dynamics—Yohkoh 10th anniversary meeting (COSPAR Colloquia Series, p. 155). Elsevier.Find this resource: McKenzie, D. E., & Hudson, H. S. (1999). X-Ray Observations of Motions and Structure above a Solar Flare Arcade. The Astrophysical Journal, 519, L93.Find this resource: Mikic, Z., Barnes, D. C., & Schnack, D. D. (1988). Dynamical evolution of a solar coronal magnetic field arcade. The Astrophysical Journal, 328, 830.Find this resource: Mikic, Z., & Linker, J. A. (1994). Disruption of coronal magnetic field arcades. The Astrophysical Journal, 430, 898.Find this resource: Miller, J. A., Larosa, T. N., & Moore, R. L. (1996). Stochastic Electron Acceleration by Cascading Fast Mode Waves in Impulsive Solar Flares. The Astrophysical Journal, 461, 445.Find this resource: Milligan, R. O., & Dennis, B. R. (2009). Velocity Characteristics of Evaporated Plasma Using Hinode/EUV Imaging Spectrometer. The Astrophysical Journal, 699, 968.Find this resource: Moore, R. L., & Sterling, A. C. (2006). Initiation of Coronal Mass Ejections. In E. Robbrecht & D. Berghmans (Eds.), Solar Eruptions and Energetic Particles (Vol. 165, p. 43). Washington DC: American Geophysical Union Geophysical Monograph Series.Find this resource: Moore, R. L., Sterling, A. C., Hudson, H. S., & Lemen, J. R. (2001). Onset of the Magnetic Explosion in Solar Flares and Coronal Mass Ejections. The Astrophysical Journal, 552, 833.Find this resource: Munro, R. H., Gosling, J. T., Hildner, E., MacQueen, R. M., Poland, A. I., & Ross, C. L. (1979). The association of coronal mass ejection transients with other forms of solar activity. Solar Physics, 61, 201.Find this resource: Murphy, R. J. (2007). Solar Gamma-Ray Spectroscopy. Space Science Reviews, 130, 127.Find this resource: Nagai, F. (1980). A model of hot loops associated with solar flares. I - Gas- dynamics in the loops. Solar Physics, 68, 351.Find this resource: Neupert, W. M. (1968). Comparison of Solar X-Ray Line Emission with Microwave Emission during Flares. The Astrophysical Journal, 153, L59.Find this resource: Pallavicini, R., Peres, G., Serio, S., Vaiana, G., Acton, L., Leibacher, J., & Rosner, R. (1983). Closed coronal structures. V - Gasdynamic models of flaring loops and comparison with SMM observations. The Astrophysical Journal, 270, 270.Find this resource: Parker, E. N. (1958). Suprathermal Particle Generation in the Solar Corona. The Astrophysical Journal, 128, 677.Find this resource: Petrosian, V., Yan, H., & Lazarian, A. (2006). Damping of Magnetohydrodynamic Turbulence in Solar Flares. The Astrophysical Journal, 644, 603.Find this resource: Petschek, H. E. (1964). Magnetic field annihilation. In W. N. Hess (Ed.), AAS-NASA Symposium on the physics of solar flares (p. 425). Washington, DC: NASAFind this resource: Poletto, G., & Kopp, R. A. (1986). Macroscopic electric fields during two-ribbon flares. In D. F. Neidig (Ed.), The lower atmospheres of solar flares (p. 453). 
National Solar Observatory.Find this resource: Priest, E. R. (2019). Magnetohydrodynamics – Overview. In B. Foster (Ed.), Oxford Research Encyclopedia of Physics. Oxford, UK: Oxford University Press.Find this resource: Priest, E. R., & Forbes, T. G. (1990). Magnetic field evolution during prominence eruptions and two-ribbon flares. Solar Physics, 126, 319.Find this resource: Qiu, J. (2009). Observational Analysis of Magnetic Reconnection Sequence. The Astrophysical Journal, 692, 1110.Find this resource: Qiu, J., Lee, J., Gary, D. E., & Wang, H. (2002). Motion of Flare Footpoint Emission and Inferred Electric Field in Reconnecting Current Sheets. The Astrophysical Journal, 565, 1335.Find this resource: Qiu, J., Liu, W.-J., & Longcope, D. W. (2012). Heating of Flare Loops With Observationally Constrained Heating Functions. The Astrophysical Journal, 752, 124.Find this resource: Qiu, J., & Longcope, D. W. (2016). Long Duration Flare Emission: Impulsive Heating or Gradual Heating? The Astrophysical Journal, 820, 14.Find this resource: Qiu, J., Sturrock, Z., Longcope, D. W., Klimchuk, J. A., & Liu, W.-J. (2013). Ultraviolet and Extreme-ultraviolet Emissions at the Flare Footpoints Observed by Atmosphere Imaging Assembly. The Astrophysical Journal, 774, 14.Find this resource: Reeves, K. K., & Warren, H. P. (2002). Modeling the Cooling of Postflare Loops. The Astrophysical Journal, 578, 590.Find this resource: Rosenbluth, M. N., MacDonald, W. M., & Judd, D. L. (1957). Fokker-Planck Equation for an Inverse-Square Force. Physical Review, 107, 1.Find this resource: Sakao, T. (1994). Characteristics of solar flare hard X-ray sources revealed with the hard X-ray telescope aboard the Yohkoh satellite. PhD thesis, University of Tokyo.Find this resource: Savage, S. L., McKenzie, D. E., & Reeves, K. K. (2012), Re-interpretation of Supra-arcade Downflows in Solar Flares. The Astrophysical Journal, 747, L40.Find this resource: Schrijver, C. J., et al. (2012). Estimating the frequency of extremely energetic solar events, based on solar, stellar, lunar, and terrestrial records. Journal of Geophysical Research, 117, A08103.Find this resource: Shibata, K., et al. (2013). Can Superflares Occur on Our Sun? Publications of the Astronomical Society of Japan, 65, 49.Find this resource: Shibayama, T., Kusano, K., Miyoshi, T., Nakabou, T., & Vekstein, G. (2015). Fast magnetic reconnection supported by sporadic small-scale Petschek type shocks. Physics of Plasmas, 22, 100706.Find this resource: Somov, B. V., & Kosugi, T. (1997). Collisionless Reconnection and High-Energy Particle Acceleration in Solar Flares. The Astrophysical Journal, 485, 859.Find this resource: Sturrock, P. A. (1968). A Model of Solar Flares in IAU Symp. 35: Structure and Development of Solar Active Regions, 471.Find this resource: Sui, L., & Holman, G. D. (2003). Evidence for the Formation of a Large- Scale Current Sheet in a Solar Flare. The Astrophysical Journal, 596, L251.Find this resource: Takasao, S., Matsumoto, T., Nakamura, N., & Shibata, K. (2015). Magnetohydrodynamic Shocks in and above Post-flare Loops: Two-dimensional Simulation and a Simplified Model. The Astrophysical Journal, 805, 135.Find this resource: Tandberg-Hanssen, E., & Emslie, A. G. (1988). The physics of solar flares (Cambridge Astrophysics Series). Cambridge, UK: Cambridge University Press.Find this resource: Tschernitz, J., Veronig, A. M., Thalmann, J. K., Hinterreiter, J., & Pötzi, W. (2018). 
Reconnection Fluxes in Eruptive and Confined Flares and Implications for Superflares on the Sun. The Astrophysical Journal, 853, 41.Find this resource: Tsuneta, S., & Naito, T. (1998). Fermi Acceleration at the Fast Shock in a Solar Flare and the Impulsive Loop-Top Hard X-Ray Source. The Astrophysical Journal, 495, L67.Find this resource: Ugai, M., & Tsuda, T. (1977). Magnetic field line reconnection by localized enhancement of resistivity. I. Evolution in a compressible MHD fluid. Journal of Plasma Physics, 17, 337.Find this resource: Warren, H. P. (2006). Multithread Hydrodynamic Modeling of a Solar Flare. The Astrophysical Journal, 637, 522.Find this resource: Warren, H. P., & Antiochos, S. K. (2004). Thermal and Nonthermal Emission in Solar Flares. The Astrophysical Journal, 611, L49.Find this resource: Warren, H. P., Mariska, J. T., & Doschek, G. A. (2013). Observations of Thermal Flare Plasma with the EUV Variability Experiment. The Astrophysical Journal, 770, 116.Find this resource: Wheatland, M. S. (2004). A Bayesian Approach to Solar Flare Prediction. The Astrophysical Journal, 609, 1134.Find this resource: Yashiro, S., Gopalswamy, N., Akiyama, S., Michalek, G., & Howard, R. A. (2005). Visibility of coronal mass ejections as a function of flare location and intensity. Journal of Geophysical Research, 110, A12S05.Find this resource: Yeates, A. R., & Mackay, D. H. (2009). Initiation of Coronal Mass Ejections in a Global Evolution Model. The Astrophysical Journal, 699, 1024.Find this resource: Yokoyama, T., & Shibata, K. (1997). Magnetic Reconnection Coupled with Heat Conduction. The Astrophysical Journal, 474, L61.Find this resource: Zhang, J., Dere, K. P., Howard, R. A., Kundu, M. R., & White, S. M. (2001). On the Temporal Relationship between Coronal Mass Ejections and Flares. The Astrophysical Journal, 559, 452.Find this resource:

## Notes:

(1.) The loop in Figure 5 is chosen to resemble one of those found near the south end of the arcade in Figure 2c. It has similar length, and the energy used results in a coronal density $n_e \simeq 2\times10^{10}\ \mathrm{cm^{-3}}$ when $T \simeq 6$ MK.

(2.) Measurements tend to work with the particle flux distribution, $F(E) \sim E\,f(E)$, whose power law is traditionally written $F \sim E^{-\delta}$. The power in the latter is $\delta = \delta' + 1/2$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 92, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.873569130897522, "perplexity": 2409.574837356891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891884.11/warc/CC-MAIN-20200707080206-20200707110206-00235.warc.gz"}
https://www.khanacademy.org/science/physics/mechanical-waves-and-sound/simple-harmonic-motion-with-calculus/v/harmonic-motion-part-2-calculus
# Harmonic motion part 2 (calculus)

We test whether A cos(wt) can describe the motion of the mass on a spring by substituting into the differential equation F = -kx. Created by Sal Khan.

## Want to join the conversation?

• In some books it is given as x = A sin(wt) and in some books it is x = A cos(wt). Which one is right, or are both of them correct?
• The thing is, the only difference between the two is where you start. The function A sin(wt) is just the function A cos(wt) displaced by 90 degrees (graph it on a calculator, you'll see). So both are right. It just depends on how you decide to graph it. If you start the oscillation by compressing a spring some distance and then releasing it, then x = A cos(wt), because at time t=0, x=A (A being whatever distance you compressed the spring). But if you start the oscillation by suddenly applying a force to the spring at rest and then letting it oscillate, at t=0, x must equal 0. So in that case, x = A sin(wt) (sin 0 = 0).
• What is the purpose of omega in the equation x(t) = A cos(wt)?
• The omega is a constant in the equation that stretches the cosine wave left and right (along the x axis), just as the A at the front of the equation scales the cosine wave up and down. The bigger the omega, the more squashed the cosine wave showing the spring's position (and thus the quicker the spring's movement).
• If my knowledge of calculus is correct, derivatives lower the power of a function and integrals raise the power of a function. In this lesson, a (acceleration) is depicted as a second derivative, which is true, but the actual work appears to be that of an integral equation. In short, my question is this: why would the acceleration equation have a higher power than the distance equation?
• In addition, the power of a function is given by the exponent on the variable (in this case, x). The w, on the other hand, is a constant, so the power of w isn't related to the power of the function.
• I can understand the fact that to make the position-time equation dimensionally correct you introduce a term w (omega) multiplied by t in sin(wt). But I can't justify the appearance of a rotational quantity w in this equation, which is actually the representation of the linear motion of a spring-mass system performing SHM. Thanking you.
• The omega (curvy w) comes from the explanation of SHM using a reference circle. Omega is the angular velocity of the displacement phasor as it travels in a circular motion. This velocity is constant, unlike the linear motion of SHM. wt (omega x time) therefore equals the angular displacement, represented by the letter theta. For a little clearer understanding, the diagrams on this page may help. http://www.tutorvista.com/content/physics/physics-iii/oscillations/circle-reference.php
• Why is w the square root of k/m and not plus or minus the square root of k/m? Isn't angular velocity a vector?
• Angular velocity is a vector, but w is the magnitude of it.
• Why is x(t) = A*cos(omega*t)? And what is 'T'?
• x(t) = A*cos(omega*t) represents the function of the SHM, and T is the time period of the SHM, that is, the time taken by the system to complete one cycle: from A to O, to -A, back to O, and again to A.
• If I keep a block attached to a horizontal spring on the floor of an elevator going up with an acceleration ‘a’, and then displace the block slightly by stretching the spring-block system horizontally, will the time period of oscillation change as compared to the (T=2π√m/k)? I don't think it should, because the time period depends on the horizontal force, and the elevator changes it in the vertical direction, but my friend asserts that it should change, though she can't explain why. Which of us, if either, is correct? • Here is a simulation of a mass on a vertical spring Go to the simulation and try it out. You can measure the period of oscillation by clicking on the box for "stopwatch". To measure the period, measure the time for 10 or 20 bounces and divide by 10 or 20. Now look over on the right side and you will see that you can change gravity from that of Earth to that of Jupiter. Go ahead and do that and see if gravity makes a difference to the period. Now how does this apply to acceleration? When you are in an elevator accelerating up at a rate of a, that is exactly the same as if gravity increased from g to g+a. You can tell this by figuring out what your weight would be in the elevator - if you work it out you will see it will be m(g+a). So if the period is the same on jupiter, that means it will also be the same in an accelerating elevator, and if it is different on jupiter, that means it will also be different in the elevator. Once you know the answer by experiment, see if you can figure out why the answer is what it is. • How would we solve a problem that is an oscillation but doesn't start where cos(wt) and sin(wt) start? • That's a good question. The function would then be Asin(wt+k) where k is some constant. If the graph is just shifted (horizontally) slightly, you add or subtract a constant that is equal to the amount by which it has been shifted on the x-axis. When to add and when to subtract? - Add when it is shifted to the left - Subtract when it is shifted to the right
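The substitution this lesson describes, plugging x(t) = A cos(wt) into m x'' = -kx, can also be checked symbolically. Below is a small sketch using the sympy library; the symbol names are arbitrary.

```python
import sympy as sp

t, A, k, m = sp.symbols('t A k m', positive=True)
w = sp.sqrt(k / m)                 # trial angular frequency
x = A * sp.cos(w * t)              # proposed solution x(t) = A cos(wt)

# Newton's second law for the spring: m * x'' should equal -k * x
lhs = m * sp.diff(x, t, 2)
rhs = -k * x
print(sp.simplify(lhs - rhs))      # prints 0, so A cos(wt) satisfies F = -kx when w = sqrt(k/m)
```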
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872663974761963, "perplexity": 493.1076943543525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499967.46/warc/CC-MAIN-20230202070522-20230202100522-00373.warc.gz"}
https://cs.stackexchange.com/questions/67754/how-to-compare-two-circular-sequences-in-linear-time
# How to compare two circular sequences in linear time?

We have two circular sequences, which means there is no start or end in the sequences. How can we test whether two such sequences are equal in linear time? I have two circular sequences of E. Coli bacteria with length 4,639,221. I thought about attaching two copies of the first sequence and finding the other one in it in linear time, but I was looking for a better idea; using a suffix tree is a suggestion that I think works for this problem.

• What are your thoughts? – André Souza Lemos Dec 21 '16 at 20:18
• @AndréSouzaLemos I thought we can choose a start and concatenate two copies of the string and then find the second string in the first one, but I don't know if it's fine or not. – user137927 Dec 21 '16 at 21:04
• Have you considered suffix trees? – Yuval Filmus Dec 21 '16 at 21:16
• What did you try? Where did you get stuck? We're happy to help you understand the concepts but just solving exercises for you is unlikely to achieve that. You might find this page helpful in improving your question. – D.W. Dec 21 '16 at 22:40
• Your own suggestion in the comments seems to work. If $y$ is shorter than $x$, then we find $y$ in a "rotation" of $x$ iff $y$ is a substring of $xx$. You mention you don't know whether that is fine. What is keeping you? – Hendrik Jan Dec 22 '16 at 23:54

Start with computing the lexicographically least circular substring$^1$ of both circular sequences and then compare them directly. Alternatively you can check for the substring $A$ (the first sequence) in the string $BB$ (a concatenation of $B$, the second sequence, with itself), using for example the KMP algorithm$^2$. You might also be interested in the application of suffix trees (also suffix arrays) and this thesis reviewing the applications to DNA sequences.

$^{[1]}$ Described in K. S. Booth. Lexicographically least circular substrings. Inf. Process. Lett., 10(4/5):240-242, 1980.

$^{[2]}$ Donald E. Knuth, James H. Morris, Jr., and Vaughan R. Pratt. Fast Pattern Matching in Strings. SIAM J. Comput., 6(2), 323–350.

Start in the first list with a pointer at 1X speed and in the second list with one at 2X speed. They meet at a point after some time if they both are the same lists. From then on, you can check if each of their nodes are the same. This is linear in time. If they don't meet, they are not the same circular lists.

• I think the question is not about checking if two given pointers are at the same circular lists but whether two separate lists are equal, meaning the elements are equal and in the same order. – Evil Dec 22 '16 at 3:58
• Traverse till you get the same elements, i.e. till they meet. – vidyasagarr7 Dec 22 '16 at 4:38
• Traverse till you get the same elements, i.e. till they meet. Then check if all the elements from then on have the same value. This is what I was trying to say in the answer. – vidyasagarr7 Dec 22 '16 at 4:39
• Ok. How do you know that the start is really the same point in both lists? – Evil Dec 22 '16 at 4:51
• I think this is a misunderstanding of the question. There's no reason to think that there is any node in common between the two sequences. Suppose the input is that one sequence is "ANABAN" and the other sequence is "BANANA". Those might be stored in completely disjoint locations in memory (so your procedure would say 'they are not the same circular lists'), but the desired answer is 'Yes the latter can be obtained by rotating the former'. – D.W. Dec 22 '16 at 17:24
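A minimal sketch of the doubling trick discussed above, written in Python. Note that Python's `in` operator does not guarantee linear-time matching, so for a strict O(n) bound the substring search would be replaced by KMP or the Z-algorithm, as the answer suggests.

```python
def is_rotation(a: str, b: str) -> bool:
    """Return True if circular sequences a and b are equal, i.e. b is some rotation of a."""
    if len(a) != len(b):
        return False
    # b occurs in a+a exactly when b is a rotation of a.
    return b in a + a

print(is_rotation("BANANA", "ANABAN"))   # True: ANABAN is BANANA rotated by three positions
print(is_rotation("BANANA", "BANANB"))   # False
```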
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5515452027320862, "perplexity": 490.8720982321667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318375.80/warc/CC-MAIN-20190823104239-20190823130239-00435.warc.gz"}
https://docs.snowflake.com/en/sql-reference/sql/create-external-table.html
Categories: Table, View, & Sequence DDL # CREATE EXTERNAL TABLE¶ Creates a new external table in the current/specified schema or replaces an existing external table. When queried, an external table reads data from a set of one or more files in a specified external stage and outputs the data in a single VARIANT column. Additional columns can be defined, with each column definition consisting of a name, data type, and optionally whether the column requires a value (NOT NULL) or has any referential integrity constraints (primary key, foreign key, etc.). See the usage notes for more information. See also: In this Topic: ## Syntax¶ CREATE [ OR REPLACE ] EXTERNAL TABLE [IF NOT EXISTS] <table_name> ( [ <col_name> <col_type> AS <expr> | <part_col_name> <col_type> AS <part_expr> ] [ inlineConstraint ] [ , <col_name> <col_type> AS <expr> | <part_col_name> <col_type> AS <part_expr> ... ] [ , ... ] ) cloudProviderParams [ PARTITION BY ( <part_col_name> [, <part_col_name> ... ] ) ] [ WITH ] LOCATION = externalStage [ REFRESH_ON_CREATE = { TRUE | FALSE } ] [ AUTO_REFRESH = { TRUE | FALSE } ] [ PATTERN = '<regex_pattern>' ] FILE_FORMAT = ( { FORMAT_NAME = '<file_format_name>' | TYPE = { CSV | JSON | AVRO | ORC | PARQUET } [ formatTypeOptions ] } ) [ AWS_SNS_TOPIC = <string> ] [ COPY GRANTS ] [ COMMENT = '<string_literal>' ] Where: inlineConstraint ::= [ NOT NULL ] [ CONSTRAINT <constraint_name> ] { UNIQUE | PRIMARY KEY | { [ FOREIGN KEY ] REFERENCES <ref_table_name> [ ( <ref_col_name> ) } } [ <constraint_properties> ] For additional inline constraint details, see CREATE | ALTER TABLE … CONSTRAINT. cloudProviderParams (for Microsoft Azure) ::= [ INTEGRATION = '<integration_name>' ] externalStage ::= @[<namespace>.]<ext_stage_name>[/<path>] formatTypeOptions ::= -- If FILE_FORMAT = ( TYPE = CSV ... ) COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE RECORD_DELIMITER = '<character>' | NONE FIELD_DELIMITER = '<character>' | NONE SKIP_HEADER = <integer> SKIP_BLANK_LINES = TRUE | FALSE -- If FILE_FORMAT = ( TYPE = JSON ... ) COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE -- If FILE_FORMAT = ( TYPE = AVRO ... ) COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE -- If FILE_FORMAT = ( TYPE = PARQUET ... ) COMPRESSION = AUTO | SNAPPY | NONE ## Required Parameters¶ table_name String that specifies the identifier (i.e. name) for the table; must be unique for the schema in which the table is created. In addition, the identifier must start with an alphabetic character and cannot contain spaces or special characters unless the entire identifier string is enclosed in double quotes (e.g. "My object"). Identifiers enclosed in double quotes are also case-sensitive. For more details, see Identifier Requirements. [ WITH ] LOCATION = Specifies the external stage where the files containing data to be read are staged: @[namespace.]ext_stage_name[/path] Files are in the specified named external stage. Where: • namespace is the database and/or schema in which the external stage resides, in the form of database_name.schema_name or schema_name. It is optional if a database and schema are currently in use within the user session; otherwise, it is required. • path is an optional case-sensitive path for files in the cloud storage location (i.e. files have names that begin with a common string) that limits the set of files to load. Paths are alternatively called prefixes or folders by different cloud storage services. 
Note that the external table appends this path to any path specified in the stage definition. To view the stage definition, execute DESC STAGE stage_name and check the url property value. For example, if the stage URL includes path a and the external table location includes path b, then the external table reads files staged in stage/a/b. FILE_FORMAT = ( FORMAT_NAME = 'file_format_name' ) or . FILE_FORMAT = ( TYPE = CSV | JSON | AVRO | ORC | PARQUET [ ... ] ) String (constant) that specifies the file format: FORMAT_NAME = file_format_name Specifies an existing named file format that describes the staged data files to scan. The named file format determines the format type (CSV, JSON, etc.), as well as any other format options, for data files. TYPE = CSV | JSON | AVRO | ORC | PARQUET [ ... ] Specifies the format type of the staged data files to scan when querying the external table. If a file format type is specified, additional format-specific options can be specified. For more details, see Format Type Options (in this topic). The file format options can be configured at either the external table or stage level. Any settings specified at the external table level take precedence. Any settings not specified at either level assume the default values. Default: TYPE = CSV. Important The external table does not inherit the file format, if any, in the stage definition. You must explicitly specify any file format options for the external table using the FILE_FORMAT parameter. Note FORMAT_NAME and TYPE are mutually exclusive; to avoid unintended behavior, you should only specify one or the other when creating an external table. ## Optional Parameters¶ col_name String that specifies the column identifier (i.e. name). All the requirements for table identifiers also apply to column identifiers. External table columns are virtual columns, which are defined using an explicit expression. For more details, see Identifier Requirements. col_type String (constant) that specifies the data type for the column. The data type must match the result of expr for the column. For details about the data types that can be specified for table columns, see Data Types. expr String that specifies the expression for the column. When queried, the column returns results derived from this expression. CONSTRAINT ... String that defines an inline or out-of-line constraint for the specified column(s) in the table. For syntax details, see CREATE | ALTER TABLE … CONSTRAINT. For more information about constraints, see Constraints. REFRESH_ON_CREATE = TRUE | FALSE Specifies whether to automatically refresh the external table metadata once, immediately after the external table is created. Refreshing the external table metadata synchronizes the metadata with the current list of data files in the specified stage path. This action is required for the metadata to register any existing data files in the named external stage specified in the [ WITH ] LOCATION = setting. TRUE Snowflake automatically refreshes the external table metadata once after creation. FALSE Snowflake does not automatically refresh the external table metadata. To register any existing data files in the stage, you must manually refresh the external table metadata once using ALTER EXTERNAL TABLE … REFRESH. Default: TRUE AUTO_REFRESH = TRUE | FALSE Specifies whether Snowflake should enable triggering automatic refreshes of the external table metadata when new or updated data files are available in the named external stage specified in the [ WITH ] LOCATION = setting. 
Note • You must configure an event notification for your storage location (Amazon S3 or Microsoft Azure) to notify Snowflake when new or updated data is available to read into the external table metadata. For more information, see Refreshing External Tables Automatically for Amazon S3 (S3) or Refreshing External Tables Automatically for Azure Blob Storage (Azure). • Currently, the ability to automatically refresh the metadata is not available for external tables that reference Google Cloud Storage stages. As a workaround, we suggest following our best practices for staging your data files and periodically executing an ALTER EXTERNAL TABLE … REFRESH statement to register any missed files. For satisfactory performance, we also recommend using a selective path prefix with ALTER EXTERNAL TABLE to reduce the number of files that need to be listed and checked if they have been registered already (e.g. bucket_name/YYYY/MM/DD/ or even bucket_name/YYYY/MM/DD/HH/ depending on your volume). • When an external table is created, its metadata is refreshed automatically once unless REFRESH_ON_CREATE = FALSE. TRUE Snowflake enables triggering automatic refreshes of the external table metadata. FALSE Snowflake does not enable triggering automatic refreshes of the external table metadata. You must manually refresh the external table metadata periodically using ALTER EXTERNAL TABLE … REFRESH to synchronize the metadata with the current list of files in the stage path. Default: TRUE PATTERN = 'regex_pattern' A regular expression pattern string, enclosed in single quotes, specifying the file names and/or paths on the external stage to match. Tip For the best performance, try to avoid applying patterns that filter on a large number of files. Note Currently, this parameter is only supported when the external table metadata is refreshed manually by executing an ALTER EXTERNAL TABLE ... REFRESH statement to register files. The parameter is not supported when the metadata is refreshed using event notifications. AWS_SNS_TOPIC = string Required only when configuring AUTO_REFRESH for Amazon S3 stages using Amazon Simple Notification Service (SNS). Specifies the Amazon Resource Name (ARN) for the SNS topic for your S3 bucket. The CREATE EXTERNAL TABLE statement subscribes the Amazon Simple Queue Service (SQS) queue to the specified SNS topic. Event notifications via the SNS topic trigger metadata refreshes. For more information, see Refreshing External Tables Automatically for Amazon S3. COPY GRANTS Specifies to retain the access permissions from the original table when an external table is recreated using the CREATE OR REPLACE TABLE variant. The parameter copies all permissions, except OWNERSHIP, from the existing table to the new table. By default, the role that executes the CREATE EXTERNAL TABLE command owns the new external table. Note: The operation to copy grants occurs atomically in the CREATE EXTERNAL TABLE command (i.e. within the same transaction). COMMENT = 'string_literal' String (literal) that specifies a comment for the external table. Default: No value ### Partitioning Parameters¶ Use these parameters to partition your external table. part_col_name col_type AS part_expr Required for partitioning the data in an external table Specifies one or more partition columns in the external table. A partition column must evaluate as an expression that parses the path and/or filename information in the METADATA$FILENAME pseudocolumn. 
Partition columns optimize query performance by pruning out the data files that do not need to be scanned (i.e. partitioning the external table). A partition consists of all data files that match the path and/or filename in the expression for the partition column. part_col_name String that specifies the partition column identifier (i.e. name). All the requirements for table identifiers also apply to column identifiers. col_type String (constant) that specifies the data type for the column. The data type must match the result of part_expr for the column. part_expr String that specifies the expression for the column. The expression must include the METADATA$FILENAME pseudocolumn. External tables currently support the following subset of functions in partition expressions: List of supported functions: After defining any partition columns for the table, identify these columns using the PARTITION BY clause. [ PARTITION BY ( part_col_name [, part_col_name ... ] ) ] Specifies any partition columns to evaluate for the external table. Usage When querying an external table, include one or more partition columns in a WHERE clause, e.g.: ... WHERE part_col_name = 'filter_value' Snowflake filters on the partition columns to restrict the set of data files to scan. Note that all rows in these files are scanned. If a WHERE clause includes non-partition columns, those filters are evaluated after the data files have been filtered. A common practice is to partition the data files based on increments of time; or, if the data files are staged from multiple sources, to partition by a data source identifier and date or timestamp. ## Cloud Provider Parameters (cloudProviderParams)¶ Microsoft Azure INTEGRATION = integration_name Specifies the name of the notification integration used to automatically refresh the external table metadata using Azure Event Grid notifications. A notification integration is a Snowflake object that provides an interface between Snowflake and third-party cloud message queuing services. This parameter is required to enable auto-refresh operations for the external table. For instructions on configuring the auto-refresh capability, see Refreshing External Tables Automatically for Azure Blob Storage. ## Format Type Options (formatTypeOptions)¶ Format type options are used for loading data into and unloading data out of tables. Depending on the file format type specified (FILE_FORMAT = ( TYPE = ... )), you can include one or more of the following format-specific options (separated by blank spaces, commas, or new lines): ### TYPE = CSV¶ COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE String (constant) that specifies the current compression algorithm for the data files to be loaded. Snowflake uses this option to detect how already-compressed data files were compressed so that the compressed data in the files can be extracted for loading. Supported Values Notes AUTO Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO. GZIP BZ2 BROTLI Must be used if loading Brotli-compressed files. ZSTD Zstandard v0.8 (and higher) supported. DEFLATE Deflate-compressed files (with zlib header, RFC1950). RAW_DEFLATE Raw Deflate-compressed files (without header, RFC1951). NONE Data files to load have not been compressed. 
RECORD_DELIMITER = 'character' | NONE One or more singlebyte or multibyte characters that separate records in an input file. Accepts common escape sequences, octal values (prefixed by \\), or hex values (prefixed by 0x). For example, for records delimited by the thorn (Þ) character, specify the octal (\\336) or hex (0xDE) value. Also accepts a value of NONE. The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes. Multiple-character delimiters are also supported; however, the delimiter for RECORD_DELIMITER or FIELD_DELIMITER cannot be a substring of the delimiter for the other file format option (e.g. FIELD_DELIMITER = 'aa' RECORD_DELIMITER = 'aabb'). The delimiter is limited to a maximum of 20 characters. Default: New line character. Note that “new line” is logical such that \r\n will be understood as a new line for files on a Windows platform. FIELD_DELIMITER = 'character' | NONE One or more singlebyte or multibyte characters that separate fields in an input file. Accepts common escape sequences, octal values (prefixed by \\), or hex values (prefixed by 0x). For example, for fields delimited by the thorn (Þ) character, specify the octal (\\336) or hex (0xDE) value. Also accepts a value of NONE. The specified delimiter must be a valid UTF-8 character and not a random sequence of bytes. Multiple-character delimiters are also supported; however, the delimiter for RECORD_DELIMITER or FIELD_DELIMITER cannot be a substring of the delimiter for the other file format option (e.g. FIELD_DELIMITER = 'aa' RECORD_DELIMITER = 'aabb'). The delimiter is limited to a maximum of 20 characters. Default: comma (,) SKIP_HEADER = integer Number of lines at the start of the file to skip. Note that SKIP_HEADER does not use the RECORD_DELIMITER or FIELD_DELIMITER values to determine what a header line is; rather, it simply skips the specified number of CRLF (Carriage Return, Line Feed)-delimited lines in the file. RECORD_DELIMITER and FIELD_DELIMITER are then used to determine the rows of data to load. Default: 0 SKIP_BLANK_LINES = TRUE | FALSE Use Data loading only Definition Boolean that specifies to skip any blank lines encountered in the data files; otherwise, blank lines produce an end-of-record error (default behavior). Default: FALSE ### TYPE = JSON¶ COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE String (constant) that specifies the current compression algorithm for the data files to be loaded. Snowflake uses this option to detect how already-compressed data files were compressed so that the compressed data in the files can be extracted for loading. Supported Values Notes AUTO Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO. GZIP BZ2 BROTLI ZSTD DEFLATE Deflate-compressed files (with zlib header, RFC1950). RAW_DEFLATE Raw Deflate-compressed files (without header, RFC1951). NONE Indicates the files for loading data have not been compressed. Default: AUTO ### TYPE = AVRO¶ COMPRESSION = AUTO | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE String (constant) that specifies the current compression algorithm for the data files to be loaded. Snowflake uses this option to detect how already-compressed data files were compressed so that the compressed data in the files can be extracted for loading. 
Supported Values Notes AUTO Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO. GZIP BZ2 BROTLI ZSTD DEFLATE Deflate-compressed files (with zlib header, RFC1950). RAW_DEFLATE Raw Deflate-compressed files (without header, RFC1951). NONE Data files to load have not been compressed. Default: AUTO N/A ### TYPE = PARQUET¶ COMPRESSION = AUTO | SNAPPY | NONE String (constant) that specifies the current compression algorithm for columns in the Parquet files. Supported Values Notes AUTO Compression algorithm detected automatically. Supports the following compression algorithms: Brotli, gzip, Lempel–Ziv–Oberhumer (LZO), LZ4, Snappy, or Zstandard v0.8 (and higher). SNAPPY NONE Data files to load have not been compressed. Default: AUTO N/A ## Usage Notes¶ • External tables support external (i.e. S3, Azure, or GCS) stages only; internal (i.e. Snowflake) stages are not supported. • Every external table has a column named VALUE of type VARIANT. Additional columns might be specified. All of the columns are treated as virtual columns. • The VALUE column structures rows in a CSV data file as JSON objects with elements identified by column position, e.g. {c1: col_1_value, c2: col_2_value, c3: col_3_value ...}. • No referential integrity constraints on external tables are enforced by Snowflake. This differs from the behavior for normal tables, whereby the NOT NULL constraint on columns is enforced. • External tables include the following metadata column: • METADATA$FILENAME: Name of each staged data file included in the external table. Includes the path to the data file in the stage. • The following are not supported for external tables: • Clustering keys • Cloning • Data in XML format • Time Travel is not supported for external tables. ## Examples¶ ### Simple External Table¶ 1. Create an external stage named mystage for the storage location where a set of Parquet data files are stored. For more information, see CREATE STAGE. Amazon S3 Create an external stage using a private/protected S3 bucket named mybucket with a folder path named files: CREATE OR REPLACE STAGE mystage URL='s3://mybucket/files/' .. ; Google Cloud Storage Create an external stage using a Google Cloud Storage container named mybucket with a folder path named files: CREATE OR REPLACE STAGE mystage URL='gcs://mybucket/files' .. ; Microsoft Azure Create an external stage using an Azure storage account named myaccount and a container named mycontainer with a folder path named files: CREATE OR REPLACE STAGE mystage URL='azure://myaccount.blob.core.windows.net/mycontainer/files' .. ; Note Use the blob.core.windows.net endpoint for all supported types of Azure blob storage accounts, including Data Lake Storage Gen2. 2. Create an external table named ext_twitter_feed that references the Parquet files in the mystage external stage. The stage reference includes a folder path named daily. The external table appends this path to the stage definition, i.e. the external table references the data files in @mystage/files/daily. The SQL command specifies Parquet as the file format type.
In addition, file pattern matching is applied to include only Parquet files whose names include the string sales: Amazon S3 CREATE OR REPLACE EXTERNAL TABLE ext_twitter_feed WITH LOCATION = @mystage/daily/ AUTO_REFRESH = true FILE_FORMAT = (TYPE = PARQUET) PATTERN='.*sales.*[.]parquet'; Google Cloud Storage CREATE OR REPLACE EXTERNAL TABLE ext_twitter_feed WITH LOCATION = @mystage/daily/ FILE_FORMAT = (TYPE = PARQUET) PATTERN='.*sales.*[.]parquet'; Microsoft Azure CREATE OR REPLACE EXTERNAL TABLE ext_twitter_feed INTEGRATION = 'MY_AZURE_INT' WITH LOCATION = @mystage/daily/ AUTO_REFRESH = true FILE_FORMAT = (TYPE = PARQUET) PATTERN='.*sales.*[.]parquet'; 3. Refresh the external table metadata: ALTER EXTERNAL TABLE ext_twitter_feed REFRESH; ### Partitioned External Table¶ Create a partitioned external table that partitions data by the logical, granular details in the stage path. In the following example, the data files are organized in cloud storage with the following structure: logs/YYYY/MM/DD/HH24, e.g.: • logs/2018/08/05/0524/ • logs/2018/08/27/1408/ 1. Create an external stage named exttable_part_stage for the storage location where the data files are stored. For more information, see CREATE STAGE. The stage definition includes the path /files/logs/: Amazon S3 CREATE STAGE exttable_part_stage URL='s3://mybucket/files/logs/' .. ; Google Cloud Storage CREATE STAGE exttable_part_stage URL='gcs://mybucket/files/logs/' .. ; Microsoft Azure CREATE STAGE exttable_part_stage URL='azure://mycontainer/files/logs/' .. ; 2. Query the METADATA$FILENAME pseudocolumn in the staged data. Use the results to develop your partition column(s): SELECT metadata$filename FROM @exttable_part_stage/; +----------------------------------------+ | METADATA$FILENAME | |----------------------------------------| | files/logs/2018/08/05/0524/log.parquet | | files/logs/2018/08/27/1408/log.parquet | +----------------------------------------+ 3. Create the partitioned external table. The partition column date_part casts YYYY/MM/DD in the METADATA$FILENAME pseudocolumn as a date using TO_DATE , DATE. The SQL command also specifies Parquet as the file format type: Amazon S3 CREATE EXTERNAL TABLE exttable_part( date_part date AS TO_DATE(SPLIT_PART(metadata$filename, '/', 3) || '/' || SPLIT_PART(metadata$filename, '/', 4) || '/' || SPLIT_PART(metadata$filename, '/', 5), 'YYYY/MM/DD'), timestamp bigint AS (value:timestamp::bigint), col2 varchar AS (value:col2::varchar)) PARTITION BY (date_part) LOCATION=@exttable_part_stage/logs/ AUTO_REFRESH = true FILE_FORMAT = (TYPE = PARQUET); Google Cloud Storage CREATE EXTERNAL TABLE exttable_part( date_part date AS TO_DATE(SPLIT_PART(metadata$filename, '/', 3) || '/' || SPLIT_PART(metadata$filename, '/', 4) || '/' || SPLIT_PART(metadata$filename, '/', 5), 'YYYY/MM/DD'), timestamp bigint AS (value:timestamp::bigint), col2 varchar AS (value:col2::varchar)) PARTITION BY (date_part) LOCATION=@exttable_part_stage/logs/ AUTO_REFRESH = true FILE_FORMAT = (TYPE = PARQUET); Microsoft Azure CREATE EXTERNAL TABLE exttable_part( date_part date AS TO_DATE(SPLIT_PART(metadata$filename, '/', 3) || '/' || SPLIT_PART(metadata$filename, '/', 4) || '/' || SPLIT_PART(metadata$filename, '/', 5), 'YYYY/MM/DD'), timestamp bigint AS (value:timestamp::bigint), col2 varchar AS (value:col2::varchar)) PARTITION BY (date_part) INTEGRATION = 'MY_INT' LOCATION=@exttable_part_stage/logs/ AUTO_REFRESH = true FILE_FORMAT = (TYPE = PARQUET); 4. 
Refresh the external table metadata: ALTER EXTERNAL TABLE exttable_part REFRESH; When querying the external table, filter the data by the partition column using a WHERE clause: SELECT timestamp, col2 FROM exttable_part WHERE date_part = to_date('08/05/2018'); ### Simple external table: AUTO_REFRESH Using Amazon SNS¶ Create a non-partitioned external table in the current schema whose metadata is refreshed automatically when triggered by event notifications received from Amazon SNS: CREATE OR REPLACE EXTERNAL TABLE ext_table WITH LOCATION = @mystage/path1/ FILE_FORMAT = (TYPE = JSON) AWS_SNS_TOPIC = 'arn:aws:sns:us-west-2:001234567890:s3_mybucket'; ### Materialized View on an External Table¶ Create a materialized view based on a subquery of the columns in the external table created in the Partitioned External Table example: CREATE MATERIALIZED VIEW exttable_part_mv AS SELECT col2 FROM exttable_part; For general syntax, usage notes, and further examples for this SQL command, see CREATE MATERIALIZED VIEW.
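As a small follow-up sketch (not part of the original reference, and reusing the hypothetical table and path names from the examples above): the manual-refresh workaround recommended for Google Cloud Storage stages can be combined with a selective path prefix, and the VALUE and METADATA$FILENAME columns can be queried directly.

-- Register only the files under one day's prefix (the path shown is illustrative).
ALTER EXTERNAL TABLE exttable_part REFRESH '2018/08/05/';

-- Every external table exposes the raw row as the VARIANT column VALUE, plus the
-- METADATA$FILENAME pseudocolumn; filtering on the partition column prunes files first.
SELECT value, metadata$filename
FROM exttable_part
WHERE date_part = TO_DATE('2018-08-05')
LIMIT 10;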
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2416834533214569, "perplexity": 10822.377820528813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141715252.96/warc/CC-MAIN-20201202175113-20201202205113-00153.warc.gz"}
http://www.numdam.org/item/ITA_1995__29_1_1_0/
Algebraic and topological theory of languages RAIRO - Theoretical Informatics and Applications - Informatique Théorique et Applications, Tome 29 (1995) no. 1, pp. 1-44. @article{ITA_1995__29_1_1_0, author = {Rhodes, J. and Weil, P.}, title = {Algebraic and topological theory of languages}, journal = {RAIRO - Theoretical Informatics and Applications - Informatique Th\'eorique et Applications}, pages = {1--44}, publisher = {EDP-Sciences}, volume = {29}, number = {1}, year = {1995}, zbl = {0889.68088}, mrnumber = {1315699}, language = {en}, url = {http://www.numdam.org/item/ITA_1995__29_1_1_0/} } TY - JOUR AU - Rhodes, J. AU - Weil, P. TI - Algebraic and topological theory of languages JO - RAIRO - Theoretical Informatics and Applications - Informatique Théorique et Applications PY - 1995 DA - 1995/// SP - 1 EP - 44 VL - 29 IS - 1 PB - EDP-Sciences UR - http://www.numdam.org/item/ITA_1995__29_1_1_0/ UR - https://zbmath.org/?q=an%3A0889.68088 UR - https://www.ams.org/mathscinet-getitem?mr=1315699 LA - en ID - ITA_1995__29_1_1_0 ER - Rhodes, J.; Weil, P. Algebraic and topological theory of languages. RAIRO - Theoretical Informatics and Applications - Informatique Théorique et Applications, Tome 29 (1995) no. 1, pp. 1-44. http://www.numdam.org/item/ITA_1995__29_1_1_0/ 1. J.-C. Birget and J. Rhodes, Almost finite expansions, Journ. Pure Appl. Alg., 1984, 32, pp. 239-287. | MR 745358 | Zbl 0546.20055 2. A. De Luca and S. Varricchio, On non-counting regular classes, in Automata, languages and programming (M.S. Patersen, ed.), Lecture Notes in Computer Science, 1990, 443, Springer, pp. 74-87. | Zbl 0765.68074 3. A. De Luca and S. Varricchio, On non-counting regular classes, Theoret. Comp. Science, 1992, 100, pp. 67-104. | MR 1171435 | Zbl 0780.68084 4. S. Eilenberg, Automata, languages and machines, vol. B, Academic Press, New York, 1976. | MR 530383 | Zbl 0359.94067 5. R. Grigorchuk, Degrees of growth of finitely generated groups, and the theory of invariant means, Math. USSR Izvestyia, 1985, 25, pp. 259-300. (English translation AMS.) | MR 764305 | Zbl 0583.20023 6. K. Henckell, S. Lazarus and J. Rhodes, Prime decomposition theorem for arbitrary semigroups: general holonomy decomposition and synthesis theorem, Journ. Pure Appl. Alg., 1988, 55, pp. 127-172. | MR 968572 | Zbl 0679.20056 7. I. Herstein, Noncommutative rings, Carus Mathematical Monographs 15, Mathematical Association of America, 1968. | MR 1449137 | Zbl 0874.16001 8. J. Howie, An introduction to semigroup theory, London, Academic Press, 1976. | MR 466355 | Zbl 0355.20056 9. S. Kleene, Representation of events in nerve nets and finite automata, in Automata Studies (Shannon and McCarthy eds), Princeton, Princeton University Press, 1954, pp. 3-51. | MR 77478 10. G. Lallement, Semigroups and combinatorial applications, New York, Wiley, 1979. | MR 530552 | Zbl 0421.20025 11. J. Mccammond, The solution to the word problem for the relatively free semigroups satisfying ta = ta+b with a ≥ 6, Intern. Journ. Algebra Comput. 1, 1991, pp. 1-32. | MR 1112297 | Zbl 0732.20034 12. J. L. Menicke, Burnside groups, Lecture Notes in Mathematics 806, 1980, Springer. | Zbl 0424.00008 13. E. F. Moore, Sequential machines, Addison-Wesley, 1964, Reading, Mass. | Zbl 0147.24107 14. A. Pereira Do Lago, On the Burnside semigroups xn = xn+m, LATIN 92 (I. Simon ed.), Lecture Notes in Computer Sciences, 583, springer. 15. J.-E. Pin, Concatenation hierarchies and decidability results, in Combinatorics on words: progress and perspectives (L. 
Cummings, ed.), New York, Academic Press, 1983, pp. 195-228. | MR 910136 | Zbl 0561.68055 16. J.-E. Pin, Variétés de langages formels, Paris Masson, 1984, (English translation: Varieties of formal languages, Plenum (New York, 1986. | MR 752695 | Zbl 0636.68093 17. J. Rhodes, Infinite iteration of matrix semigroups, I, J. Algebra, 1986, 98, pp. 422-451. | MR 826135 | Zbl 0584.20053 18. J. Rhodes, Infinite iteration of matrix semigroups, II, J. Algebra, 1986, 100, pp. 25-137. | MR 839575 | Zbl 0626.20050 19. M.-P. Schützenberger, On finite monoids having only trivial subgroups, Information and Control, 1965, 8, pp. 190-194. | MR 176883 | Zbl 0131.02001 20. H. Straubing, Families of recognizable sets corresponding to certain varieties of finite monoids, Journ. Pure Appl. Alg., 1979, 15, pp. 305-318. | MR 537503 | Zbl 0414.20056 21. H. Straubing, Relational morphisms and operations on recognizable sets, RAIRO Inform. Théor., 1981, 15, pp. 149-159. | EuDML 92139 | Numdam | MR 618452 | Zbl 0463.20049 22. P. Weil, Products of languages with counter, Theoret. Comp. Science, 1990, 76, pp. 251-260. | MR 1079529 | Zbl 0704.68071 23. P. Weil, Closure of varieties of languages under products with counter, Journ. Comp. System and Sciences, 1992, 45, pp. 316-339. | MR 1193376 | Zbl 0766.20023
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2978160083293915, "perplexity": 6691.993198555765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301309.22/warc/CC-MAIN-20220119094810-20220119124810-00193.warc.gz"}
https://www.andrewcbancroft.com/blog/musings/make-bash-script-executable/
Make a Bash Script Executable If you constantly run the same set of commands at the command line, why not automate that? I found myself typing the same things over and over to deploy this website. Here’s how I encapsulated it into a script that saves me keystrokes at the command line. 1) Create a new text file with a .sh extension. I created a new file called deploy.sh for my website. 2) Add #!/bin/bash to the top of it. This is necessary for the “make it executable” part. 3) Add lines that you’d normally type at the command line. As an example, here’s the full contents of the file I use to deploy general updates to andrewcbancroft.com

#!/bin/bash

hugo

git push

4) At the command line, run chmod u+x YourScriptFileName.sh I ran chmod u+x deploy.sh to make mine executable. Now, whenever I deploy changes to my website, I run ./deploy.sh and boom. Done.
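If you want the script to do a bit more, the same pattern extends naturally. Here is a small sketch (my own variant, not the author's actual script; the git add/commit lines and the commit-message argument are assumptions about a typical Hugo deploy):

#!/bin/bash
# Stop on the first error so a failed build never gets pushed.
set -e
# Rebuild the site.
hugo
# Stage and commit everything, using the first argument as the commit message
# (falls back to a generic message if none is given).
git add -A
git commit -m "${1:-Update site}"
# Publish.
git push

You would run it the same way, e.g. ./deploy.sh "Fix typo on the about page".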
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6474698185920715, "perplexity": 5163.868833371509}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525046.5/warc/CC-MAIN-20190717041500-20190717063500-00399.warc.gz"}
https://arxiv.org/abs/1812.02086
math.MG # Title: Infinitesimal Hilbertianity of locally CAT($κ$)-spaces Abstract: We show that, given a metric space $(Y,d)$ of curvature bounded from above in the sense of Alexandrov, and a positive Radon measure $μ$ on $Y$ giving finite mass to bounded sets, the resulting metric measure space $(Y,d,μ)$ is infinitesimally Hilbertian, i.e. the Sobolev space $W^{1,2}(Y,d,μ)$ is a Hilbert space. The result is obtained by constructing an isometric embedding of the 'abstract and analytical' space of derivations into the 'concrete and geometrical' bundle whose fibre at $x\in Y$ is the tangent cone at $x$ of $Y$. The conclusion then follows from the fact that for every $x\in Y$ such a cone is a CAT(0)-space and, as such, has a Hilbert-like structure. Comments: 44 pages Subjects: Metric Geometry (math.MG) MSC classes: 51Fxx, 49J52, 46E35 Cite as: arXiv:1812.02086 [math.MG] (or arXiv:1812.02086v1 [math.MG] for this version) ## Submission history From: Elefterios Soultanis [v1] Wed, 5 Dec 2018 16:21:26 UTC (54 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9726376533508301, "perplexity": 996.9756533724704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376830479.82/warc/CC-MAIN-20181219025453-20181219051453-00376.warc.gz"}
https://cosmocoffee.info/viewtopic.php?t=1603&view=next
## Finally equations in PowerPoint Antony Lewis Posts: 1352 Joined: September 23 2004 Affiliation: University of Sussex Contact: ### Finally equations in PowerPoint For anyone who uses PowerPoint but doesn't know, the new 2010 version finally adds the workable equation editing system from Word 2007 to PowerPoint. e.g. Alt+= followed by typing 1/(x^3+α)^2 converts the input into the correctly formatted embedded equation $\frac{1}{(x^3+\alpha)^2}$. Staff and students in the UK can get it at a good price at http://www.microsoft.com/student/office ... fault.aspx
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37690970301628113, "perplexity": 12515.487905028544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589618.52/warc/CC-MAIN-20180717070721-20180717090721-00249.warc.gz"}
https://ftp.aimsciences.org/article/doi/10.3934/dcds.2010.28.495
# American Institute of Mathematical Sciences June 2010, 28(2): 495-509. doi: 10.3934/dcds.2010.28.495 ## On some strong ratio limit theorems for heat kernels 1 Department of Physics, Technion - Israel Institute of Technology, Haifa, Israel 2 Department of Theoretical Physics, Nuclear Physics Institute, Academy of Sciences, 25068 Řež, Czech Republic 3 Department of Mathematics, Technion - Israel Institute of Technology, Haifa, Israel Received December 2009 Revised April 2010 Published April 2010 We study strong ratio limit properties of the quotients of the heat kernels of subcritical and critical operators which are defined on a noncompact Riemannian manifold. Citation: Martin Fraas, David Krejčiřík, Yehuda Pinchover. On some strong ratio limit theorems for heat kernels. Discrete & Continuous Dynamical Systems, 2010, 28 (2) : 495-509. doi: 10.3934/dcds.2010.28.495
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7042027115821838, "perplexity": 4116.706482472412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358842.4/warc/CC-MAIN-20211129194957-20211129224957-00157.warc.gz"}
http://www.maa.org/press/maa-reviews/the-riemann-hypothesis-and-the-roots-of-the-riemann-zeta-function
# The Riemann Hypothesis and the Roots of the Riemann Zeta Function ###### Samuel W. Gilbert Publisher: BookSurge Publishing Publication Date: 2009 Number of Pages: 140 Format: Paperback Price: 49.95 ISBN: 9781439216385 Category: General [Reviewed by Underwood Dudley, on 08/23/2009] The Clay Mathematics Institute has offered a prize of $1,000,000 for a resolution of the Riemann Hypothesis that all of the zeros of the Riemann zeta function that lie in the critical strip 0 < Re(s) < 1 are on the line Re(s) = 1/2. Perhaps because of the prize, recently many proofs (and at least one disproof) have been put forward by a variety of authors. Two proofs made it as far as the web site arXiv.org, but were withdrawn by their authors after flaws were pointed out. This book purports to give a proof. Its author, a member of the American Mathematical Society, holds the Ph. D. degree in chemical engineering (1987, University of Illinois). He has worked for Eastman Kodak and Exxon Research and currently has a "wealth advisory practice" in Virginia. His book, 140 pages long, was published by BookSurge Publishing, an organization that enables on-demand publishing. The author does not say at whom his book is aimed, but the level of mathematics is sufficiently high that I doubt that it could be read by anyone other than professional mathematicians. As might be expected, the book contains a good deal of what could be called padding. For example, there are graphs of the first ten roots of the Riemann zeta function, six pages devoted to a chart of a sequence converging to its first imaginary root, a constant given to 1026 significant figures, and an excerpt from an encyclopedia about the Gordian knot. For this reason, and others, I can't say if his proof is correct. He says that the series for the zeta function for s > 1, the sum of the reciprocals of the sth powers of the positive integers, diverges everywhere in the critical strip 0 < Re(s) < 1 but that it "does, in fact, converge at the roots in the critical strip—and only at the roots in the critical strip—in a special geometrical sense." What this means was not clear to me and I did not exert myself sufficiently to make it clear. A good use for the book, I think, would be for an instructor in a course in analytic number theory to give it to a student with the assignment of seeing what, if anything, is there. If the proof is valid, I owe the author an apology. Woody Dudley knows enough number theory not to attack the Riemann Hypothesis, and not enough chemical engineering to attack any large open problems in that discipline.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8012692928314209, "perplexity": 552.5072709135161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174157.36/warc/CC-MAIN-20170219104614-00264-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.r-bloggers.com/2010/03/page/4/
# Monthly Archives: March 2010 ## A Demo for the Ratio Estimation in Sampling Survey (Animation) March 24, 2010 By Amber Watkins gave me a suggestion on the animation for the ratio estimation, and I think this is a good topic for my animation package. I've finished writing the initial version of the function sample.ratio() for this package, which will appear in the version 1.1-2 a couple of days later. As we know, the benefit ## ECG Signal Processing March 24, 2010 By After reading (most of) "The Scientist and Engineer's Guide to Digital Signal Processing" by Steven W. Smith, PhD, I decided to take a second crack at the ECG data. I wrote a set of R functions that implement a windowed (Blackman) sinc low-pass filter (a sketch of such a filter kernel appears at the end of this page). The convolution of filter kernel with the input signal is conducted ## Statistical learning with MARS March 24, 2010 By Steve Miller at the InformationManagement blog has been looking at predictive analytics tools for business intelligence applications, and naturally turns to the statistical modeling and prediction capabilities of R. Says Steve: The R Project for Statistical Computing continues to dazzle in the open source world, with exciting new leadership at Revolution Computing promising to align commercial R with business... ## RXQuery March 24, 2010 By I have put a new version of the RXQuery package which interfaces to the Zorba XQuery engine. This makes the package compatible with the 1.0.0 release of Zorba for external functions. The package allows one to use XQuery from within R and to use R fu... ## Lessons Learned from EC2 March 24, 2010 By A week or so ago I had my first experience using someone else's cluster on Amazon EC2. EC2 is the Amazon Elastic Compute Cloud. Users set up a virtual computing platform that runs on Amazon's servers "in the cloud." Amazon EC2 is not just another cluster. EC2 allows the user to create a disk image containing an operating system... ## Font Families for the R PDF Device March 24, 2010 By Motivated by the excellent R package pgfSweave, I begin to notice the font families in my graphs when writing Sweave documents. The default font family for PDF graphs is Helvetica, which is, in most cases (I think), inconsistent with the LaTeX font styles. Some common font families are listed in ?postscript, and we can take ## oro.nifti 0.1.4 March 24, 2010 By The latest release of oro.nifti (0.1.4) has been released on CRAN. New features include: Added text capability in the (unused) fourth pane in orthographic(). A vignette is now included (taken from dcemriS4). ## oro.dicom 0.2.5 March 24, 2010 By The latest version of oro.dicom (0.2.5) has been released on CRAN. New features include: Added "mosaic" capability when creating 3D arrays from DICOM. dicomTable() now accepts a single DICOM file. Better handling of SequenceItem tags when reading in DIC... ## R 2.11.0 due date March 23, 2010 By This is the announcement as posted in the mailing list: This is to announce that we plan to release R version 2.11.0 on Thursday, April 22, 2010. Those directly involved should review the generic schedule at http://developer.r-project.org/release-checklist.html The source tarballs will be made available daily (barring build troubles) via http://cran.r-project.org/src/base-prerelease/ For the R Core ## The "Future of Open Source" Survey – an R user's thoughts and conclusions March 23, 2010 By Over a month ago, David Smith published a call for people to participate in the "Future of Open Source" Survey.
550 people (and me) took the survey, and today I got an e-mail with the news that the 2010 survey results are analysed and were published in the "Future.Of.Open.Source blog" in the following (38 slides) presentation: I would like...
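Since the ECG entry above only sketches the idea in prose, here is a minimal R version of a Blackman-windowed sinc low-pass filter kernel (my own sketch, not code from the post; the signal vector x, the cutoff fc and the kernel length M are illustrative assumptions):

# Windowed-sinc low-pass kernel, applied to a signal by convolution.
fc <- 0.05                      # cutoff as a fraction of the sampling rate
M  <- 100                       # kernel length (even)
i  <- 0:M
h  <- ifelse(i == M/2, 2 * pi * fc, sin(2 * pi * fc * (i - M/2)) / (i - M/2))
w  <- 0.42 - 0.5 * cos(2 * pi * i / M) + 0.08 * cos(4 * pi * i / M)  # Blackman window
h  <- h * w
h  <- h / sum(h)                # normalize for unit gain at DC
x_smooth <- stats::filter(x, h, sides = 2)   # centered convolution with the signal x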
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3282047510147095, "perplexity": 4654.247232377255}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146714.29/warc/CC-MAIN-20200227125512-20200227155512-00280.warc.gz"}
http://mathhelpforum.com/trigonometry/137006-tough-question.html
1. ## Tough question I'm not strong with this, I need some help: 2(1-sin(b)sin(c)) = cos^2(b) + cos^2(c) when b+c = 180 2. Originally Posted by OmegaCenturion I'm not strong with this, I need some help: 2(1-sin(b)sin(c)) = cos^2(b) + cos^2(c) when b+c = 180 Your clue lies in "b+c=180 degrees", as now c=180-b. sin(angle) gives the vertical co-ordinate in a circle of radius 1, centre (0,0). Hence sin(b)=sin(180-b). $2\left(1-sin^2(b)\right)=cos^2b+cos^2c$ Using similar logic $cos(c)=-cos(b)$ allows you to write the entire equation in terms of any one of the four of sinc, sinb, cosc, cosb with $cos^2b+sin^2b=1$ etc 3. Originally Posted by OmegaCenturion I'm not strong with this, I need some help: 2(1-sin(b)sin(c)) = cos^2(b) + cos^2(c) when b+c = 180 $2(1-sinb. sinc) = cos^{2}b + cos^{2}c$ write this as: $2 - 2.sinb.sinc = (1-sin^{2}b) + (1-sin^{2}c)$ or, $2 - 2.sinb.sinc = 2 - sin^{2}b - sin^{2}c$ $sin^{2}b - 2.sinb.sinc + sin^{2}c = 0$ $(sinb -sinc)^2 = 0$ Note that: $b+c = 180 \rightarrow c=180-b$ $\therefore (sinb -sinc) = (sinb - sin(\pi-b)) = (sinb-sinb) = 0$ You should know that $sin(180-b) = sin(\pi-b) = sinb$ 4. thanks guys!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9561755061149597, "perplexity": 8567.58858119542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718284.75/warc/CC-MAIN-20161020183838-00426-ip-10-171-6-4.ec2.internal.warc.gz"}
https://ai.stackexchange.com/questions/13526/is-prelu-superfluous-with-respect-to-relu?noredirect=1
Is PReLU superfluous with respect to ReLU? Why do people use the $$PReLU$$ activation? $$PReLU[x] = ReLU[x] + ReLU[p*x]$$ with the parameter $$p$$ typically being a small negative number. If a fully connected layer is followed by an at least two-element $$ReLU$$ layer, then the combined layers together are capable of emulating exactly the $$PReLU$$, so why is it necessary? Am I missing something? Let's assume we have 3 Dense layers, where the activations are $$x^0 \rightarrow x^1 \rightarrow x^2$$, such that $$x^2 = \psi PReLU(x^1) + \gamma$$ and $$x^1 = PReLU(Ax^0 + b)$$ Now let's see what it would take to convert the PReLU into a ReLU \begin{align*} PReLU(x^1) &= ReLU(x^1) + ReLU(p \odot x^1)\\ &= ReLU(Ax^0+b) + ReLU(p\odot(Ax^0+b))\\ &= ReLU(Ax^0+b) + ReLU(eye(p)Ax^0 + eye(p)b)\\ &= ReLU(Ax^0+b) + ReLU(Qx^0+c) \quad s.t. \quad Q = eye(p)A, \ \ c = eye(p)b\\ &= [I, I]^T[ReLU(Ax^0+b), ReLU(Qx^0+c)]\\ \implies x^2 &= [\psi, \psi][ReLU(Ax^0+b), ReLU(Qx^0+c)]\\ &= V*ReLU(Sx^0 + d) \quad V=[\psi, \psi], \ \ S=[A, Q] \ \ d=[b, c] \end{align*} So, as you said, it is possible to break the form of the intermediary $$PReLU$$ into a pure $$ReLU$$ while keeping it as a linear model, but if you take a second look at the parameters of the model, the size increases drastically. The hidden units of S doubled, meaning that to keep $$x^2$$ the same size, $$V$$ also doubles in size. So this means that if you don't want to use the $$PReLU$$ you are learning double the parameters to achieve the same capability (granted it allows you to learn a wider span of functions as well), and if you enforce the constraints on $$V,S$$ set by the $$PReLU$$ the number of parameters is the same but you are still using more memory and more operations! I hope this example convinces you of the difference. • ok thanks, this sounds like a matter of efficiency and probably better learning/convergence capabilities. what does eye(p) stand for? Jul 23 '19 at 15:40 • eye(p) takes the vector p and makes a diagonal matrix whose diagonal is given by the elements of p (similar functionality to numpy's np.diag). Jul 23 '19 at 15:48 Here are 3 reasons I can think of: • Space - As @mshlis pointed out, size. To approximate a PReLu you require more than 1 ReLu. Even without formal proof one can easily see that PReLu is 2 adjustable (parameterizable) linear functions within 2 different ranges joined together, while ReLu is just a single adjustable (parameterizable) linear function within half that range, so you require a minimum of 2 ReLu's to approximate a PReLU. And thus space complexity increases and you require more space to store parameters. • Time - This increase in the number of ReLus directly affects training time. Here is a question on the time complexity of training a Neural Network; you can check out and work out the necessary mathematical details for the time increase for a 2x Neural Network size.
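To put rough numbers on that size difference (a back-of-the-envelope count of my own, not taken from the answers above), suppose the input has width $$d$$, the hidden layer width $$h$$ and the output width $$o$$. The PReLU version of the hidden layer carries

$$\underbrace{hd + h}_{A,\,b} \;+\; \underbrace{h}_{p} \;+\; \underbrace{oh + o}_{\psi,\,\gamma}$$

parameters, while the two-ReLU emulation carries

$$\underbrace{2hd + 2h}_{S,\,d} \;+\; \underbrace{2oh + o}_{V,\,\gamma}.$$

For example, with $$d = h = o = 128$$ this is roughly 33,000 parameters against roughly 66,000, i.e. about double, as the first answer argues.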
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9833402037620544, "perplexity": 1137.6289203150397}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00313.warc.gz"}
https://yalmip.github.io/command/linearize/
# linearize linearize returns the linearization of the polynomial $$p(x)$$ at the point value(x). ### Syntax h = linearize(p) ### Examples The linearization is performed at the current value of x x = sdpvar(1,1); f = x^2; assign(x,1); sdisplay(linearize(f)) ans = '-1+2*x' assign(x,3); sdisplay(linearize(f)) ans = '-9+6*x' The command applies to matrices as well p11 = sdpvar(1,1);p12 = sdpvar(1,1); P = [p11 p12;p12 1]; assign(P,[3 2;2 1]) sdisplay(linearize(P*P)) ans = '-13+6*p11+4*p12' '-6+2*p11+4*p12' '-6+2*p11+4*p12' '-3+4*p12'
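One more small sketch (not from the original page) showing the same command on a polynomial in two decision variables; the expected output in the comment is what the tangent-plane model at the assigned point (x,y) = (1,2) should reduce to, up to the ordering of terms:

x = sdpvar(1,1); y = sdpvar(1,1);
f = x^2*y + y^2;
assign([x;y],[1;2]);
sdisplay(linearize(f))
% f(1,2) + df/dx*(x-1) + df/dy*(y-2) = 6 + 4*(x-1) + 5*(y-2), i.e. '-8+4*x+5*y'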
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33870044350624084, "perplexity": 25470.628883770074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141187753.32/warc/CC-MAIN-20201126084625-20201126114625-00640.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-7-exponential-functions-7-3-logarithms-and-their-derivatives-exercises-page-343/66
## Calculus (3rd Edition) $$y'= 24x+47.$$ Taking $\ln$ of both sides of the equation, we get $$\ln y= \ln\left[(3x+5)(4x+9)\right]$$ Then using the properties of $\ln$, we can write $$\ln y= \ln (3x+5)+\ln(4x+9).$$ Now taking the derivative of the above equation, we have $$\frac{y'}{y}= \frac{3}{3x+5}+ \frac{4}{4x+9},$$ Hence $y'$ is given by $$y'=y\left( \frac{3}{3x+5}+ \frac{4}{4x+9}\right)=(3x+5)(4x+9)\left( \frac{3}{3x+5}+ \frac{4}{4x+9}\right)\\ =3(4x+9)+4(3x+5)=12x+27+12x+20=24x+47.$$
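As a quick cross-check (not part of the original solution), expanding the product first and differentiating term by term gives the same answer: $$y=(3x+5)(4x+9)=12x^{2}+47x+45 \quad\Longrightarrow\quad y'=24x+47.$$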
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997203946113586, "perplexity": 50.77625703619477}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00085.warc.gz"}
http://simbad.cds.unistra.fr/simbad/sim-ref?bibcode=2021A%26A...653A.107R&simbo=on
# 2021A&A...653A.107R 2021A&A...653A.107R - Astronomy and Astrophysics, volume 653A, 107-107 (2021/9-1) Preparing for LSST data. Estimating the physical properties of z < 2.5 main-sequence galaxies. RICCIO G., MALEK K., NANNI A., BOQUIEN M., BUAT V., BURGARELLA D., DONEVSKI D., HAMED M., HURLEY P., SHIRLEY R. and POLLO A. Abstract (from CDS): Aims. We study how the upcoming Legacy Survey of Space and Time (LSST) data from the Vera C. Rubin Observatory can be employed to constrain the physical properties of normal star-forming galaxies (main-sequence galaxies). Because the majority of the observed LSST objects will have no auxiliary data, we use simulated LSST data and existing real observations to test the reliability of estimates of the physical properties of galaxies, such as their star formation rate (SFR), stellar mass (Mstar), and dust luminosity (Ldust). We focus on normal star-forming galaxies because they form the majority of the galaxy population in the universe and are therefore more likely to be observed with the LSST. Methods. We performed a simulation of LSST observations and uncertainties of 50 385 real galaxies within the redshift range 0<z<2.5. In order to achieve this goal, we used the unique multi-wavelength data from the Herschel Extragalactic Legacy Project (HELP) survey. Our analysis focused on two fields, ELAIS N1 and COSMOS. To obtain the physical properties of the galaxies, we fit their spectral energy distributions (SEDs) using the Code Investigating GALaxy Emission. We simulated the LSST data by convolving the SEDs fitted by employing the multi-wavelength observations. We compared the main galaxy physical properties, such as SFR, Mstar, and Ldust obtained from the fit of the observed multi-wavelength photometry of galaxies (from the UV to the far-IR) to those obtained from the simulated LSST optical measurements alone. Results. We present the catalogue of simulated LSST observations for 23291 main-sequence galaxies in the ELAIS N1 field and for 9093 galaxies in the COSMOS field. It is available in the HELP virtual observatory. The stellar masses estimated based on the LSST measurements agree with the full UV to far-IR SED estimates because they mainly depend on the UV and optical emission, which is well covered by LSST in the considered redshift range. Instead, we obtain a clear overestimate of the dust-related properties (SFR, Ldust, Mstar) estimated with the LSST alone. They are highly correlated with redshift. We investigate the cause of this overestimate and conclude that it is related to an overestimate of the dust attenuation in both UV and near-IR. We find that it is necessary to employ auxiliary rest-frame mid-IR observations, simulated UV observations, or the far-UV attenuation (A_FUV)-Mstar relation to correct for the overestimate. We also deliver the correction formula log10(SFR_LSST/SFR_real) = 0.26 z^2 - 0.94 z + 0.87. It is based on the 32384 MS galaxies detected with Herschel.
Journal keyword(s): galaxies: fundamental parameters - galaxies: photometry - infrared: galaxies - galaxies: star formation - surveys

Number of rows: 3

| N | Identifier | Otype | ICRS (J2000) RA | ICRS (J2000) DEC | Mag U | Mag B | Mag V | Mag R | Mag I | Sp type | #ref 1850-2023 | #notes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | NAME COSMOS Field | reg | 10 00 28.60 | +02 12 21.0 | | | | | | ~ | 2630 | 0 |
| 2 | NAME NOAO Deep Wide Field | reg | 14 32 05.75 | +34 16 47.5 | | | | | | ~ | 367 | 0 |
| 3 | ELAIS N1 | reg | 16 10 01 | +54 30.6 | | | | | | ~ | 372 | 0 |
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.858280599117279, "perplexity": 7076.0053359339345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710900.9/warc/CC-MAIN-20221202082526-20221202112526-00723.warc.gz"}
https://manufacturingscience.asmedigitalcollection.asme.org/DSCC/proceedings-abstract/DSCC2020/84270/V001T21A007/1096550
Abstract

A design method is proposed for a nonlinear disturbance observer based on the notion of passivity. As an initial application, we consider here systems whose structure comprises a set of integrator cascades, though the proposed approach can be extended to a larger class of systems. We describe an explicit procedure to choose the output of the system and to design the nonlinear feedback law used by the observer, provided the system satisfies a sufficient condition for output feedback semi-passification. The output injection term in the observer scales the measurement residual with a nonlinear gain that depends on the output and a set of static design parameters. We provide guidance for parameter tuning such that the disturbance tracking performance and the transient response of the estimation error can be intuitively adjusted. Example applications to two nonlinear mechanical systems illustrate that the proposed nonlinear observer design method is quite effective, producing an observer that can estimate a wide range of disturbances without any need to know or assume the disturbance dynamics.
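The abstract does not spell out the observer equations, so the following is only a generic, constant-gain disturbance-observer sketch for a double-integrator plant — a textbook construction, not the passivity-based design proposed in the paper — meant to illustrate what scaling a measurement residual by a gain looks like in practice. All names, gains, and signals are illustrative assumptions.

```python
import numpy as np

dt, L = 1e-3, 20.0                 # time step and (here constant) observer gain
x1, x2, z = 0.0, 0.0, 0.0          # plant states and observer internal state

for k in range(20000):
    t = k * dt
    d = 1.0 + 0.5 * np.sin(2 * t)  # unknown disturbance acting on the plant
    u = -2.0 * x1 - 3.0 * x2       # some stabilizing control input
    # double integrator: x1' = x2,  x2' = u + d
    x1 += dt * x2
    x2 += dt * (u + d)
    # observer: d_hat = z + L*x2,  z' = -L*(u + d_hat)
    # so the estimation error e = d - d_hat obeys e' = d' - L*e
    d_hat = z + L * x2
    z += dt * (-L * (u + d_hat))

print(d, d_hat)                    # d_hat tracks the slowly varying disturbance
```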
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8233617544174194, "perplexity": 316.408379274746}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948871.42/warc/CC-MAIN-20230328201715-20230328231715-00099.warc.gz"}
https://math.stackexchange.com/questions/2417195/help-with-strong-induction
# Help with Strong Induction I am stuck on the inductive step of a proof by strong induction, in which I am proving proposition $S(x)$: $$\sum_{i=1}^{2^x} \frac{1}{i} \geq 1 + \frac{x}{2}$$ for $x \geq 0$. I have already finished verifying the base case, $S(0)$, and writing my inductive hypothesis, $S(x)$ for $0 \leq x \leq x + 1$. What I need to prove is $S(x+1)$: $$\sum_{i=1}^{2^{x+1}} \frac{1}{i} \geq 1 + \frac{x+1}{2}$$ but I cannot figure out how to go from point A to point B on this. What I have so far is the following (use of inductive hypothesis denoted by I.H.): \begin{eqnarray*} \sum_{i=1}^{2^{x+1}} \frac{1}{i} & = & \sum_{i=1}^{2^x} \frac{1}{i} + \sum_{i=2^x+1}^{2^{x+1}} \frac{1}{i} \\ & \stackrel{I.H.}{\geq} & 1 + \frac{x}{2} + \sum_{i=2^x+1}^{2^{x+1}} \frac{1}{i} \\ \end{eqnarray*} But in order to complete the proof with this approach, I need to show that $$\sum_{i=2^x+1}^{2^{x+1}} \frac{1}{i} \geq \frac{1}{2}$$ and I have absolutely no idea how to do that. When I consulted with my professor, he suggested that I should leverage the inequality more than I am, but I frankly can't see how to do that either. I have been staring at this proof for over 6 hours, will someone please give me a hint? Or, more preferably, could you explain a simpler/easier way to go about this proof? Thank you. • Notice that in your last sum, $i$ is always greater than $2^x$. That can give you an upper bound on all of the $1/i$ terms – JonathanZ Sep 5 '17 at 4:11 Note that $\frac{1}{i}\geq \frac{1}{2^{x+1}}, \forall i\in \{2^x+1,2^x+2,....,2^{x+1}\}$ $$\Rightarrow \sum_{i=2^x+1}^{2^{x+1}} \frac{1}{i} \geq 2^x\cdot \frac{1}{2^{x+1}}=\frac{1}{2}$$
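A quick numerical check (not part of the proof, and my own addition) can make both the claim $S(x)$ and the final bound on the added block of terms concrete; the helper name below is chosen for illustration.

```python
from fractions import Fraction

def harmonic_partial(n):
    """Exact partial sum 1/1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, i) for i in range(1, n + 1))

for x in range(0, 8):
    lhs = harmonic_partial(2 ** x)
    assert lhs >= 1 + Fraction(x, 2)                      # the claim S(x)
    # the block of terms added when going from S(x) to S(x+1) is at least 1/2
    block = harmonic_partial(2 ** (x + 1)) - lhs
    assert block >= Fraction(1, 2)
    print(x, float(lhs), float(block))
```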
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9788329005241394, "perplexity": 84.15447289581807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526237.47/warc/CC-MAIN-20190719115720-20190719141720-00009.warc.gz"}
https://crypto.stackexchange.com/questions/32506/common-modulus-attack-not-reproducible/32514
# Common Modulus Attack not reproducible

I want to calculate a simple example of the RSA common modulus attack. However, the result is not correct and I do not find my mistake.

$p=29, q=37, n=p\cdot q = 1073, \phi(n) = 1008, e_1 = 5, e_2 = 11$

Let $m = 999$. $c_1 = m^{e_1} \pmod n = 296$, $c_2 = m^{e_2} \pmod n = 555$

The extended Euclidean algorithm gives me $y_1$ and $y_2$: $y_1 \cdot e_1 + y_2 \cdot e_2 = 1$, $y_1 = -2, y_2 = 1$ (edited)

$m = c_1^{y_1} \cdot c_2^{y_2} = 296^{-2} \cdot 555^1 \pmod {1073}$

How do I calculate $296^{-2}$? I tried to get the inverse of $296 \pmod {1073}$ and then square it, but $296$ has no inverse. What am I doing wrong?

• You're not noticing that m is not coprime to n. (Encryption followed by decryption still gives the original input, but such an m [that's also not a multiple of n] gives a non-trivial factorization of n, and the ciphertext will have the same property.) – user991 Feb 6 '16 at 12:23
• The original RSA encryption scheme does not require m to be coprime to n. Why is this necessary when conducting the common modulus attack? – null Feb 6 '16 at 12:53
• I haven't checked this, but think it's not actually necessary. One could instead try using meadow inverses. – user991 Feb 6 '16 at 12:57
• But as we see, the attack above does not work, because "m is not coprime to n". So it seems to be a prerequisite, doesn't it? – null Feb 6 '16 at 13:05
• @fgrieu : Yes, and that can be done with gcd and CRT. – user991 Feb 6 '16 at 15:46

In real-world RSA the moduli are so large that the probability of finding a $c_1$ that is not coprime to $n$ is approximately zero. Moreover, if you did find such a number, then $p=\gcd(c_1,n)\neq1$ would be a factor of $n$, and in that case the attack is not even necessary, because $n$ has been factored.

$\gcd(296,1073)=37\neq 1$, so $p=37$, $q=\frac{1073}{37}=29$ and $\phi(n)=1008$.

Now you can easily compute the private key $d_2$: $e_2\cdot d_2=1 \pmod{ \phi(n)}$, so $d_2=275$.

$$m={c_2}^{d_2}\pmod n={555}^{275}\pmod{1073}=999$$

• How does your answer help me in solving the upper exercise? – null Feb 6 '16 at 17:30
• @null, The goal of the attack is breaking RSA. By factoring $n$ you can easily find the private key and decrypt the ciphertext. If your question is how to compute a $d$ with $296\cdot d\pmod{1073}=1$, you can't find such a $d$. – Meysam Ghahramani Feb 6 '16 at 19:27
• I understand. Well, if I cannot invert it, the upper attack is not possible. Why? – null Feb 6 '16 at 22:47

The problem is to reliably and efficiently find message $m$ (with $0\le m<n$) given RSA modulus $n$, distinct RSA public exponents $e_1$ and $e_2$ coprime to each other and to the unknown $\phi(n)$, and ciphertexts $c_1=m^{e_1}\bmod n$ and $c_2=m^{e_2}\bmod n$.

WLOG, and per the corrected question, $y_1$ is the negative coefficient when the extended Euclidean algorithm is applied to $e_1$ and $e_2$ in order to find $y_1$ and $y_2$ with $y_1\cdot e_1+y_2\cdot e_2=1$.

For a random choice of message $m$, the odds that $\gcd(m,n)\neq1$ are low, precisely $1-\phi(n)/n$, that is $1/p+1/q-1/n$ if $n=p\cdot q$ with $p$ and $q$ distinct primes. If $n$ is square-free (as assumed in most definitions of RSA), $\gcd(m,n)=\gcd(m^{e_1},n)$, thus the odds that $\gcd(c_1,n)\neq1$ also are $1-\phi(n)/n$. Hence, the odds that $c_1$ has no inverse for a random choice of $m$ are low (less than $2^{-510}$ for 1024-bit RSA with two 512-bit prime factors). Hence, for overwhelmingly most $m$, $c_1^{y_1}\cdot c_2^{y_2}\bmod n$ is well-defined, and is the desired $m$. But that does not quite always work.
We can make an efficient algorithm that always work, including for the definition of RSA in PKCS#1v2 where $n$ can have multiple prime factors, even though we might be unable to efficiently find any prime factor in $n$. The method goes: • Check if $c_1=0$, in which case $m=0$. • Compute $r=\gcd(c_1,n)$. That's a divisor of $n$, often $1$ (however it is possible that $r>1$, in which case $r$ divides $n$; and also that $r$ or/and $n/r$ are composite, thus factoring $n$ might remain uneasy). • Compute $s=n/r$; with the assumption that $n$ is square-free, $\gcd(r,s)=1$ holds. • Compute $i_1=((((c_1\bmod s)\cdot r)\bmod s)^{-1}\bmod s)\cdot r$, the so-called meadow inverse of $c_1$ modulo $n$, such that $i_1\cdot c_1\bmod r=0$ and $i_1\cdot c_1\bmod s=1$, with $r$ and $s$ defined as above. • Compute $i_1^{-y_1}\cdot c_2^{y_2}\bmod n$, which is the desired $m$ (as pointed by Ricky Demer in a comment to the question). Proof sketch: we prove $i_1^{-y_1}\cdot c_2^{y_2}-m\equiv0\pmod r$ and $i_1^{-y_1}\cdot c_2^{y_2}-m\equiv0\pmod s$. Example: $e_1=5$, $e_2=11$, $n=837876170870196973028071$, $c_1=621961884462245272210948$, $c_2=653042419105836777869045$. We compute • $r=932340427217$; that's a factor of $n$ (this example is crafted to make it composite) • $s=898680510263$; that's a factor of $n$ (also composite in this example) • $i_1=653042419105836777869045$ • $m=331563319321409011786785$. Note: we do not need to factor $n$ (or $r$ or $s$), as required to compute a valid private exponent $d$, as would be required by the method outlined in that other answer; and we always find $m$ with polynomial effort w.r.t. the bit size of parameters, contrary to the method in that other answer. You can actually "invert" a value m with respect to n even if $$gcd(m,n) \neq 1$$ You are looking for a value $m^{-1}$ that satisfies $$m*m^{-1} = 1 \pmod{n}$$ This is a linear congruence. You need a bit more time to solve than just simply inverting a number. But by trying all the possible values as specified in the link above, you will finally discover the "inverse" you are looking for. • Um, no you can't. By definition, if $\gcd(m,n) = k$, then any multiple of $m$ modulo $n$ is also a multiple of $k$. The closest you can get is finding a pseudoinverse $m^*$ such that $m \times m^* = k \pmod n$ (and you can do that with the same extended Euclidean algorithm used to find normal modular inverses). – Ilmari Karonen Feb 7 '16 at 20:55 • Yes, and if you try every possible pseudoinverse as you call it, one of them will be the message and it will allow you to perform the common modulus attack. – mandragore Feb 7 '16 at 21:52
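As a worked companion to the procedure above, here is a small Python sketch (my own, not from the answers) implementing the gcd/meadow-inverse recipe; on the question's toy example ($n=1073$, $e_1=5$, $e_2=11$, $c_1=296$, $c_2=555$) it recovers $m=999$ even though $296$ has no ordinary inverse modulo $1073$. It assumes Python 3.8+ for `pow(x, -1, m)`, and the function name is chosen for illustration.

```python
from math import gcd

def common_modulus_attack(n, e1, e2, c1, c2):
    def egcd(a, b):
        if b == 0:
            return (a, 1, 0)
        g, x, y = egcd(b, a % b)
        return (g, y, x - (a // b) * y)

    g, y1, y2 = egcd(e1, e2)          # y1*e1 + y2*e2 == 1
    assert g == 1
    if y1 > 0:                        # arrange for y1 to be the negative coefficient
        y1, y2 = y2, y1
        c1, c2 = c2, c1
    if c1 == 0:
        return 0
    r = gcd(c1, n)                    # divisor of n, usually 1
    s = n // r
    # meadow inverse of c1 mod n: i1*c1 == 0 (mod r) and i1*c1 == 1 (mod s)
    i1 = pow((c1 % s) * r % s, -1, s) * r % n
    return pow(i1, -y1, n) * pow(c2, y2, n) % n

print(common_modulus_attack(1073, 5, 11, 296, 555))   # 999
```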
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.841684103012085, "perplexity": 352.9301836221263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038469494.59/warc/CC-MAIN-20210418073623-20210418103623-00190.warc.gz"}
http://www.conservapedia.com/Determinant
# Determinant

The determinant of a matrix (written |A|) is a single number that depends on the elements of the matrix A. Determinants exist only for square matrices (i.e., ones where the number of rows equals the number of columns). Determinants are a basic building block of linear algebra, and are useful for finding areas and volumes of geometric figures, in Cramer's rule, and in many other areas. If the characteristic polynomial splits into linear factors, then the determinant is equal to the product of the eigenvalues of the matrix, counted by their algebraic multiplicities.

## Motivation

A matrix can be used to transform a geometric figure. For example, in the plane, if we have a triangle defined by its vertices (3,3), (5,1), and (1,4), and we wish to transform this triangle into the triangle with vertices (3,-3), (5,-9), and (1,2), we can simply do a matrix multiplication of each vertex by the matrix $\begin{pmatrix} 1 & 0 \\ -2 & 1 \\ \end{pmatrix}$. In this transformation, no matter what the shape, position, or area of the initial geometric figure is, the final geometric figure will have the same area and orientation.

It can be seen that matrix transformations of geometric figures always give resulting figures whose area is proportional to that of the initial figure, and whose orientation is either always the same, or always the reverse. This ratio is called the determinant of the matrix; it is positive when the orientation is preserved, negative when the orientation is reversed, and zero when the final figure always has zero area.

This two-dimensional concept is easily generalized to any number of dimensions. In 3D, replace area with volume, and in higher dimensions the analogous concept is called hypervolume. The determinant of a matrix is the oriented ratio of the hypervolume of the transformed figure to that of the source figure.

## How to calculate

We need to introduce two notions: the minor and the cofactor of a matrix element. Note also that the determinant of a 1x1 matrix equals the sole element of that matrix.

Minor: The minor mij of the element aij of an NxN matrix M is the determinant of the (N-1)x(N-1) matrix formed by removing the ith row and jth column from M.

Cofactor: The cofactor Cij equals the minor mij multiplied by $(-1)^{i+j}$.

The determinant is then defined to be the sum of the products of the elements of any one row or column with their corresponding cofactors.

### 2x2 case

For the 2x2 matrix $\begin{pmatrix} a & b \\ c & d\end{pmatrix}$ the determinant is simply ad-bc (for example, using the above rule on the first row).

### 3x3 case

For a general 3x3 matrix $\begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \\ \end{pmatrix}$ we can expand along the first row to find $|A|=A_{11}\begin{vmatrix}A_{22} & A_{23} \\ A_{32} & A_{33} \end{vmatrix}- A_{12}\begin{vmatrix}A_{21} & A_{23} \\ A_{31} & A_{33} \end{vmatrix}+ A_{13}\begin{vmatrix}A_{21} & A_{22} \\ A_{31} & A_{32} \end{vmatrix}$ where each of the 2x2 determinants is given above.

## Properties of determinants

The following are some useful properties of determinants. Some are useful computational aids for simplifying the algebra needed to calculate a determinant. The first property is that $|M| = |M^T|$, where the superscript "T" denotes transposition. Thus, although the following rules refer to the rows of a matrix, they apply equally well to the columns.

• The determinant is unchanged by adding a multiple of one row to any other row.
• If two rows are interchanged, the sign of the determinant changes.
• If every element of a single row has a common factor α, that factor can be pulled out front: the determinant equals α times the determinant of the matrix with the factor removed.
• If all the elements of a single row are zero (or can be made zero using the above rules), then the determinant is zero.
• | AB | = | A | | B |

In practice, one of the most efficient ways of finding the determinant of a large matrix is to add multiples of rows and/or columns until the matrix is in triangular form, such that all the elements above or below the diagonal are zero, for example $\begin{pmatrix} A_{11} & A_{12} & A_{13} \\ 0 & A_{22} & A_{23} \\ 0 & 0 & A_{33} \\ \end{pmatrix}$. The determinant of such a matrix is simply the product of the diagonal elements (use the cofactor expansion discussed above and expand down the first column).
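As a concrete companion to the cofactor expansion and the triangular-form shortcut described above, here is a small, self-contained Python sketch (my own addition, not from the article); the function names are chosen for illustration.

```python
def minor(matrix, i, j):
    """Matrix with row i and column j removed."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(matrix) if k != i]

def det(matrix):
    """Determinant by cofactor expansion along the first row."""
    n = len(matrix)
    if n == 1:
        return matrix[0][0]
    return sum((-1) ** j * matrix[0][j] * det(minor(matrix, 0, j))
               for j in range(n))

# 2x2 check (ad - bc), using the matrix from the Motivation section:
print(det([[1, 0], [-2, 1]]))                    # 1
# Triangular 3x3 check: the determinant is the product of the diagonal.
print(det([[2, 5, 1], [0, 3, 7], [0, 0, 4]]))    # 2*3*4 = 24
```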
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8940122723579407, "perplexity": 293.6320987383853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462070.88/warc/CC-MAIN-20150226074102-00293-ip-10-28-5-156.ec2.internal.warc.gz"}
https://overbrace.com/bernardparent/viewtopic.php?f=13&t=369&view=unread
The Sewol Ferry Disaster — an Engineering Education Problem The collapse of the Gyeongju auditorium in February followed by the capsizing of the Sewol ferry in April have resulted in hundreds of high school and university students losing their lives unnecessarily at a very early age. The cause of these sad events is being attributed mostly to the greediness of entrepreneurs who sacrifice public safety in favour of monetary gains as well as, in the case of the Sewol ferry, to incorrect behavior of the crew members. However, I would argue that, ultimately, these disasters are less the responsibility of the business owners or of the crew members and more the responsibility of the engineers in charge of the design of the ferry/auditorium or in charge of imposing the safety regulations. Indeed, engineers are those who approved extra cabins to be added to the Sewol ferry resulting in a too-high center of gravity which made the ship more prone to capsizing. Engineers are also those who poorly designed the Gyeongju auditorium by miscalculating the additional stresses that would result due to snow accumulation on the roof. It is also government-employed engineers who were supposed to ensure that safety regulations were respected and that the ferry and auditorium were safe for public use. If the engineers working for the government and the companies had been competent and acted responsibly, the disasters would not have occurred. In this short article, I express my opinion on why engineering disasters seem to occur more often in Korea than in other industrialized nations, and what to do to remedy the problem. 06.01.14 When I first arrived from Canada and joined the College of Engineering at Pusan National University in 2008 as a tenure-track faculty member, I recall one of my colleagues at the time mentioning that I was quite courageous to have come here. I thought perhaps he was referring to a lower standard of living I would encounter in Korea or perhaps to difficulties I would face in teaching in English to Korean students who may not be fully fluent in that language. It’s only several months later that I realized what my colleague meant: there is an educational problem in Korea and the educational quality is lagging considerably compared to other industrialized states. The difference in education quality is not a minor one: an analysis of the syllabi and exams of engineering courses taught at Pusan National University (PNU) reveals that they cover approximately 2 times less material than they should when compared to those in similarly-ranked universities in the anglosphere or Europe. After discussing these matters a multitude of times with students and professors not only from PNU but from other universities in the country as well, I have little doubt that there is a serious issue in this respect not only at PNU but in most — if not all — Colleges of Engineering in Korea. Despite the number of courses and credits being essentially the same as in universities in the U.S., the courses taught here cover substantially less material, effectively resulting in the amount learnt at the tertiary level being half of what it should be. 
By teaching the students shallow courses that require little effort and preparation to pass, the professors are not only preventing the engineering students to reach their full potential but are also encouraging them to develop bad habits (poor attention to details, poor self-organization, poor capabilities in solving challenging problems, etc) that many keep for the rest of their careers. This may explain the need of Korean engineers to acquire technology from abroad instead of developing their own even in flagship national projects such as the Naro rocket (Russian technology) or the KTX train (French technology). This may also be the root cause for the recent engineering-related disasters such as the Sewol ferry capsizing or the Gyeongju auditorium collapsing. What makes the below-par educational environment at the tertiary level in Korea initially surprising is that various studies show that the Korean high school graduates are better prepared for their university studies (both in terms of material learned and in terms of problem solving skills) than their Canadian or American counterparts. And this is corroborated by my experience teaching Korean students in their freshman and sophomore years at Pusan National University: if they study as much, my PNU students do as well or better than McGill students when taught engineering courses at the same level as in Canada. Why don't professors in Korea teach at the same level as in other industrialized nations then? I find the biggest hurdle that the professors are here facing is the too high number of courses they are required to teach. At PNU as well as in most universities in Korea, engineering professors are required to teach 9 hours of lectures per week during the semester and, often, to teach one course during the winter and summer breaks. In Canada, the United States, and the European Union, the engineering professors are required to teach at the most 6 hours per week during the semester, and none during the vacation (which is normally spent doing research or improving the course material for the following semester). In this light, it shouldn't be surprising that the engineering courses are “diluted” in Korea and cover two times too little material: the professors are required to teach two times too many courses. As long as engineering professors are subject to such a high teaching load, there is little doubt that the quality of the courses will continue to remain low resulting in more engineering disasters in the future such as the sinking of the Sewol ferry and the collapse of the Gyeongju auditorium. To improve the education of the Korean engineers, I would recommend the Ministry of Education to make the following changes to the regulations: (i) no engineering professor should be allowed to teach more than 6 hours per week, as anything more leads to shallow courses that are detrimental to the development of the engineering students; and (ii) incentives should be implemented to encourage the engineering professors to raise the rhythm and cover as much material as their counterparts abroad. Perhaps the best way that the latter could be achieved is through external reviews of the courses performed by faculty members from foreign universities with a strong reputation for tertiary educational quality. The overly shallow engineering courses in Korean universities is not an insolvable problem. It could be fixed with minor changes to the regulations. 
Not only will this reduce the number of accidents and save lives in the future but, perhaps as importantly, this will also help reverse the less-than-positive reputation Korea has acquired on the world stage in recent times with respect to engineering safety and quality. This article appeared in the Korea Herald on June 23rd, 2014. 06.23.14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22116829454898834, "perplexity": 1446.3094362707336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578596571.63/warc/CC-MAIN-20190423094921-20190423120921-00186.warc.gz"}
http://math.stackexchange.com/questions/184335/map-z-xiy-x-geq-y-with-a-branch-cut-on-the-negative-imaginary-axis
# Map $\{ z=x+iy : |x|\geq y \}$ with a branch cut on the negative imaginary axis from $[-i,0]$ to the unit disc I have a conformal map that I've been having problems with. Map the set $\{ z=x+iy : |x|\geq y \}$ with a branch cut on the negative imaginary axis from $[-i,0]$ to the unit disc. I've tried a few things but the branch cut keeps messing me up. - The interior of your domain consists of all points $z=x+iy$ which satisfies one of the three conditions: (i) $x>0$ and $y<x$, (ii) $x=0$ and $y<-1$, or (iii) $x<0$ and $y<-x$. First map $z$ to $w=iz$. This will rotate your picture counterclockwise by $90^\circ$. Then map $w$ to $\tau=w^{4/3}$. This will close the open sector, and your domain will be mapped to the complex plane minus a cut from $\tau=1$ along the real axis to $\tau=-\infty$. Then map $\tau$ to $\zeta=\sqrt{\tau-1}$, which will send your domain to the right half plane $\Re(\zeta)>0$. Finally map $\zeta$ to ${1-\zeta \over 1+\zeta}$, which will map your domain to the unit disc. - The region has two rays emanating from the origin as its boundary. One of the rays is in the $i+i$ direction, and the other is in the $-1+i$ direction. This boundary needs to become the boundary of the disc. The region under discussion is the region below these rays. We will do what we need to do to "straighten" out the rays into a vertical line, so that exponentiation will turn them into the boundary of the disc. Multiply $z$ by $i$, so that the boundary rays are now in the $-1\pm i$ directions. Raise $z$ to the $2/3$ power, using the negative real axis as a branch cut for cube root. Now the boundary rays are the imaginary axis, and the region has nonnegative real parts. Negate $z$, so that the region has nonpositive imaginary parts. And lastly, exponentiate to get the unit disc with its interior. So all together, $$z\mapsto \exp(-(iz)^{2/3})$$ where the branch cut for the cube root is the negative real axis. Now, this makes the branch cut for the entire map the positive imaginary axis, which is the opposite of what you asked for. Can you modify this approach? - What about the "branch [?] cut on the negative imaginary axis from $[-i,0]$"? I think the two sides of this cut also want to make it onto the boundary of the disc. –  Christian Blatter Aug 19 '12 at 19:00 @ChristianBlatter A branch cut will put the closed edge on one or of the sides, and leave an open edge on the other side. After the transformation, this ripped seam can glue back together inside the disc, say along a radius. I'll add the full answer now, I just wanted to give OP a chance to think about it more. –  alex.jordan Aug 19 '12 at 20:47 @ChristianBlatter Tougher than I thought! I take it back. –  alex.jordan Aug 19 '12 at 21:28 Seeing Per Manne's answer, I think I misunderstood the nature of the desired branch cut. It's just supposed to be a slit, not an entire ray. –  alex.jordan Aug 19 '12 at 21:33
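As a numerical sanity check on Per Manne's composition (my own addition, not part of either answer), the sketch below applies the chain $z \mapsto iz \mapsto w^{4/3} \mapsto \sqrt{\tau-1} \mapsto \frac{1-\zeta}{1+\zeta}$ to a few points inside the region and confirms that they land in the unit disc. It assumes Python's principal branches for the fractional power and the square root, which for interior points are compatible with the cuts chosen in that answer; the sample points are arbitrary.

```python
import cmath

def to_unit_disc(z):
    w = 1j * z                      # rotate the region by 90 degrees
    tau = w ** (4 / 3)              # open the 270-degree sector onto the slit plane
    zeta = cmath.sqrt(tau - 1)      # send the slit plane to the right half plane
    return (1 - zeta) / (1 + zeta)  # Moebius map onto the unit disc

# points strictly inside {x + iy : |x| > y} and off the cut [-i, 0]
samples = [1 + 0j, -2 - 1j, 3 - 5j, -4j, 0.5 - 0.2j]
for z in samples:
    print(z, abs(to_unit_disc(z)))  # all moduli should be < 1
```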
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7811622023582458, "perplexity": 214.4058211129298}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010749774/warc/CC-MAIN-20140305091229-00049-ip-10-183-142-35.ec2.internal.warc.gz"}
https://tonyelhabr.rbind.io/post/texas-high-schools-academics-cors/
# Correlations Between Texas High School Academic Competition Results and SAT/ACT Scores

## Introduction

I wanted to do a follow-up on my series of posts about Texas high school University Interscholastic League (UIL) academic competitions to more closely evaluate the relationship between school performance in those competitions and school-wide SAT and ACT scores. For those who may not be familiar with these tests, they are the two most popular standardized tests used for college admission in the United States. In my introduction to that series, I stated the following:

School-wide … scores on state- and national-standardized tests (e.g. the SAT) certainly are the most common measure of academic strength, but I think rankings by academic competitions may be more indicative.

Essentially, I was implying that the academic UIL scores may not correspond well, or at all, with standardized test scores. However, I did not attempt to prove this hypothesis, which is what I set out to do here. While I’m at it, I’ll show the code and provide some commentary to explain my process.

## Data Collection

While I already have collected and cleaned the UIL data that I’ll need by virtue of my work for my series of posts analyzing the UIL competitions, I did not retrieve data for standardized test scores. To my delight, the Texas Education Agency’s website publishes Texas high school SAT and ACT scores for the years 2011 through 2015. The task of scraping from this source is a perfect use-case for the super-handy {xml2} and {rvest} packages, as well as the well-known {stringr} and {purrr} packages in the {tidyverse}.

```r
library("tidyverse")
library("rlang")
library("teplot") # Personal package.

urls_tea <-
  "https://tea.texas.gov/acctres/sat_act_index.html" %>%
  xml2::read_html() %>%
  rvest::html_nodes(xpath = "//tr //td //a") %>%
  rvest::html_attr("href") %>%
  str_subset("^\\/acctres\\/[:alpha:]{3}_[Cc]ampus_[Dd]ata")
urls_tea

create_path_tea <- function(url_suffix, dir = "data-raw", ext = "csv") {
  if(!dir.exists(dir)) {
    dir.create(dir)
  }
  url_suffix %>%
    str_remove_all("acctres|\\/") %>%
    paste0(".", ext) %>%
    file.path(dir, .)
}

# NOTE(s):
# + urls_tea_dl is actually the same as urls_tea because purrr::walk() returns its first argument.
# + mode = "wb" is important! Otherwise, the downloaded files have empty lines every other line
#   (due to the way that CRs and LFs are handled).
urls_tea_dl <-
  urls_tea %>%
  walk(
    ~download.file(
      url = paste0("https://tea.texas.gov/", .x),
      destfile = create_path_tea(url_suffix = .x),
      mode = "wb"
    )
  )
```
The ACT has math, reading, english, and science sections, each having a minimum and maximum score of 1 and 36, combined for a single compos score also ranging from 1 to 36. To eliminate duplicate columns representing the same underlying “thing”. I don’t distinguish the math and reading section scores for each test in separate columns, I rename the ACT’s compos score to total, following the convention used for the SAT’s cumulative score. The other sections—writing for the SAT and english and science for the ACT— are not really analogous to sections in the other test, so they are filled with NAs appropriately. Finally, for the interEsted reader, there are some details regarding the code implementation that I document in comments (both for explaining actions for myself and for the reader). import_tea_data <- function(path, rgx_grp) { res <- path %>% rename_all(funs(tolower)) if(!is.null(rgx_grp)) { res <- res %>% filter(group %>% str_detect(rgx_grp)) } res <- res %>% select( matches( "^group$|name$|math|reading|writing|total|english|science|compos" ) ) res } import_tea_data_cleanly <- function(urls, rgx_grp, ...) { res <- urls %>% create_path_tea(...) %>% tibble(path = .) %>% mutate( test = stringr::str_extract(path, "([Ss][Aa][Tt])|([Aa][Cc][Tt])") %>% toupper(), year = stringr::str_extract(path, "[:digit:]+") %>% as.integer() ) %>% mutate(contents = purrr::map(path, ~import_tea_data(.x, rgx_grp = rgx_grp))) %>% unnest() %>% # NOTE: No longer need this columns(s) any more. select(-path) %>% mutate_at(vars(total), funs(ifelse(test == "ACT", compos, .))) %>% # NOTE: No longer need this column(s) any more. select(-compos) %>% # NOTE: Rearranging score columns in a more logical fashion. select(-total, everything(), total) %>% # NOTE: Renaming "important" columns. rename(school = campname, district = distname, county = cntyname, city = regnname) %>% mutate_if(is.character, funs(str_replace_all(., "=|\"", ""))) %>% mutate_at(vars(school, district, county, city), funs(toupper)) %>% # NOTE: Some county names are truncated and end with COUN or COUNT. # (The max seems to be 18 characters). # Fortunately, ther are no county names with COUN in their names, so the following # regular expression is sufficient. mutate_at(vars(county), funs(str_remove_all(., "\\s+COUN.*$"))) %>% # NOTE: Remove all HS/H S at the end of school names, as well as ampersands. # This seems to improve join percentages with other data sets. mutate_at(vars(school), funs(str_remove_all(., "([H]\\s*[S]$)|(\\s+\\&)") %>% str_trim())) %>% # NOTE: This is (try to) to resolve duplicates in raw data. 
# group_by_at(vars(matches("test|year|school|district|county|city"))) %>% # summarise_all(funs(max(., na.rm = TRUE))) %>% # ungroup() %>% arrange(test, year, school) res } schools_tea <- urls_tea %>% import_tea_data_cleanly(rgx_grp = "All Students") schools_tea test year school district county city math reading writing english science total ACT 2011 A C JONES BEEVILLE ISD BEE CORPUS CHRISTI 19 18 NA 17 19 18 ACT 2011 A J MOORE ACAD WACO ISD MCLENNAN WACO 19 18 NA 16 18 18 ACT 2011 A M CONS COLLEGE STATION ISD BRAZOS HUNTSVILLE 26 24 NA 23 24 24 ACT 2011 A MACEO SMITH HIGH SCHOOL DALLAS ISD DALLAS RICHARDSON 16 14 NA 13 15 14 ACT 2011 ABBOTT SCHOOL ABBOTT ISD HILL WACO 20 20 NA 19 21 20 ACT 2011 ABERNATHY ABERNATHY ISD HALE LUBBOCK 22 20 NA 19 21 21 ACT 2011 ABILENE ABILENE ISD TAYLOR ABILENE 21 21 NA 20 21 21 ACT 2011 ACADEMY ACADEMY ISD BELL WACO 24 23 NA 21 24 23 ACT 2011 ACADEMY HIGH SCHOOL HAYS CISD HAYS AUSTIN NA NA NA NA NA NA ACT 2011 ACADEMY OF CAREERS AND TECHNOLOGIE ACADEMY OF CAREERS AND TECHNOLOGIE BEXAR SAN ANTONIO 15 14 NA 12 14 14 ACT 2011 ACADEMY OF CREATIVE ED NORTH EAST ISD BEXAR SAN ANTONIO NA NA NA NA NA NA ACT 2011 ADRIAN SCHOOL ADRIAN ISD OLDHAM AMARILLO 19 18 NA 20 19 19 ACT 2011 AGUA DULCE AGUA DULCE ISD NUECES CORPUS CHRISTI 21 19 NA 18 20 19 ACT 2011 AIM CENTER VIDOR ISD ORANGE BEAUMONT NA NA NA NA NA NA ACT 2011 AKINS AUSTIN ISD TRAVIS AUSTIN 19 17 NA 16 17 17 ACT 2011 ALAMO HEIGHTS ALAMO HEIGHTS ISD BEXAR SAN ANTONIO 25 24 NA 24 24 24 ACT 2011 ALBA-GOLDEN ALBA-GOLDEN ISD WOOD KILGORE 20 19 NA 18 20 19 ACT 2011 ALBANY JR-SR ALBANY ISD SHACKELFORD ABILENE 24 22 NA 21 22 22 ACT 2011 ALDINE ALDINE ISD HARRIS HOUSTON 19 17 NA 16 18 18 1 # of total rows: 15,073 ### EDA: Year-to-Year Correlations First, before evaluating the primary concern at hand—the relationship between the academic UIL scores and the SAT/ACT scores (available in the schools_tea data created above)—I want to verify that there is some non-trivial relationship among the scores for a given school on a given test across years. (I would be surprised if this were not shown to be true.) schools_tea_cors_byyear <- schools_tea %>% distinct(test, year, school, .keep_all = TRUE) %>% filter(!is.na(total)) %>% unite(test_school, test, school) %>% widyr::pairwise_cor( feature = test_school, item = year, value = total ) %>% rename(year1 = item1, year2 = item2, cor = correlation) schools_tea_cors_byyear %>% filter(year1 <= year2) year1 year2 cor 2011 2012 0.80 2011 2013 0.76 2012 2013 0.86 2011 2014 0.69 2012 2014 0.78 2013 2014 0.83 2011 2015 0.64 2012 2015 0.74 2013 2015 0.78 2014 2015 0.86 ![](viz_schools_tea_cors_byyear_show-1.png) As expected, there are some strong correlations among the years for school-wide scores on these tests. Ok, now let’s bring in the “cleaned” school data (schools_uil) that I collected and cleaned in my UIL analysis. I’ll subset the data to include only the same years found in schools_tea—2011 through 2015. 
school city complvl_num score year conf complvl comp advanced n_state n_bycomp prnk n_defeat w HASKELL HASKELL 13 616 2011 1 District Calculator Applications 1 0 8 1.00 7 TRUE POOLVILLE POOLVILLE 13 609 2011 1 District Calculator Applications 1 0 8 0.86 6 FALSE LINDSAY LINDSAY 17 553 2011 1 District Calculator Applications 1 0 7 1.00 6 TRUE PLAINS PLAINS 3 537 2011 1 District Calculator Applications 1 0 10 1.00 9 TRUE SAN ISIDRO SAN ISIDRO 32 534 2011 1 District Calculator Applications 1 0 4 1.00 3 TRUE CANADIAN CANADIAN 7 527 2011 1 District Calculator Applications 1 0 7 1.00 6 TRUE GARDEN CITY GARDEN CITY 10 518 2011 1 District Calculator Applications 1 0 8 1.00 7 TRUE WATER VALLEY WATER VALLEY 10 478 2011 1 District Calculator Applications 0 0 8 0.86 6 FALSE GRUVER GRUVER 7 464 2011 1 District Calculator Applications 0 0 7 0.83 5 FALSE YANTIS YANTIS 19 451 2011 1 District Calculator Applications 1 0 10 1.00 9 TRUE SHINER SHINER 27 450 2011 1 District Calculator Applications 1 0 9 1.00 8 TRUE WEST TEXAS STINNETT 7 443 2011 1 District Calculator Applications 0 0 7 0.67 4 FALSE HONEY GROVE HONEY GROVE 17 440 2011 1 District Calculator Applications 1 0 7 0.83 5 FALSE LATEXO LATEXO 23 439 2011 1 District Calculator Applications 1 0 10 1.00 9 TRUE MUENSTER MUENSTER 17 436 2011 1 District Calculator Applications 0 0 7 0.67 4 FALSE VAN HORN VAN HORN 1 436 2011 1 District Calculator Applications 1 0 7 1.00 6 TRUE SLOCUM ELKHART 23 415 2011 1 District Calculator Applications 0 0 10 0.89 8 FALSE ERA ERA 17 415 2011 1 District Calculator Applications 0 0 7 0.50 3 FALSE GOLDTHWAITE GOLDTHWAITE 15 413 2011 1 District Calculator Applications 1 0 7 1.00 6 TRUE NEWCASTLE NEWCASTLE 12 408 2011 1 District Calculator Applications 1 0 10 1.00 9 TRUE 1 # of total rows: 27,359 Now let’s try to evaluate whether or not year-to-year correlations also exist with this data set. Importantly, some choice about how to quantify performance needs to be made. As I discussed in my long-form series of posts exploring the UIL academic data, the evaluation of performance is somewhat subjective. Should we use number of times a school advanced to the next level of competition in a given year? (Note that there are three competition levels—District, Region, and State.) What about the number the number of other schools it “defeated” in head-to-head competitions? In that separate analysis, I made the choice to use the percentile rank (prnk) of the school’s placings across all competition levels for a given competition type (comp). I believe this measure bests represent a school’s quality of performance (where a higher value indicates better performance). As I stated there when explaining my choice to use percent rank for identifying “dominant” individual“, ”I choose to use percent rank—which is a always a value between 0 and 1—because it inherently accounts for the wide range of number of competitors across all competitions. (For this context, a percent rank of 1 corresponds to the highest score in a given competition, and, conversely, a value of 0 corresponds to the lowest score.)” Aside from this decision regarding performance evaluation in academic UIL competitions, note that I treat the competition type (comp) in schools_uil as analogous to the test variable indicating SAT or ACT score in the schools_tea data set. For those who have not read through my UIL analysis, note that scores for five different competition types was collected—Calculator Applications, Computer Science, Mathematics, Number Sense, and Science. 
schools_uil_cors_byyear <- schools_uil %>% select(year, school, city, comp, prnk) %>% group_by(year, school, city, comp) %>% summarise(prnk_sum = sum(prnk, na.rm = TRUE)) %>% ungroup() %>% unite(comp_school, comp, school) %>% widyr::pairwise_cor( feature = comp_school, item = year, value = prnk_sum ) %>% rename(year1 = item1, year2 = item2, cor = correlation) schools_uil_cors_byyear %>% filter(year1 <= year2) table class=“table” style=“width: auto !important; margin-left: auto; margin-right: auto;“> year1 year2 cor 2011 2012 0.74 2011 2013 0.63 2012 2013 0.72 2011 2014 0.53 2012 2014 0.60 2013 2014 0.75 2011 2015 0.48 2012 2015 0.52 2013 2015 0.61 2014 2015 0.70 We can see that correlations among years do exist, as we would expect. The strength of the correlations decrease for years that are farther apart, which is also what we might expect. ### “Final” Correlation Analysis So, at this point, I have set myself up to do that which I set out to do—evaluate the relationship between the academic UIL competition scores and the national SAT/ACT scores. In order to put the two sets of data on “equal grounds”, I only evaluate math scores. In particular, I filter comp in the UIL data to just the mathematically-based competitions—Calculator Applications, Mathematics, and Number Sense—excluding Science and Computer Science. And, for the SAT/ACT data, I select only the math score, which is available fore both tests, excluding the total and reading scores also available for each and the writing, english, and science scores available for one or the other. (Perhaps the ACT’s science score could be compared to the Science UIL scores, but I choose not to do so here.) schools_uil_math <- schools_uil %>% filter(str_detect(comp, "Calculator|Math|Number")) %>% group_by(year, school, city) %>% summarise(prnk_sum = sum(prnk, na.rm = TRUE)) %>% ungroup() %>% # NOTE: "Renormalize" prnk_sum. 
mutate(math_prnk = percent_rank(prnk_sum)) %>% select(-prnk_sum) schools_uil_math year school city math_prnk 2011 ABBOTT ABBOTT 0.82 2011 ABERNATHY ABERNATHY 0.59 2011 ABILENE ABILENE 0.00 2011 ACADEMY OF FINE ARTS FORT WORTH 0.55 2011 AGUA DULCE AGUA DULCE 0.57 2011 ALAMO HEIGHTS SAN ANTONIO 0.70 2011 ALBA-GOLDEN ALBA 0.72 2011 ALBANY ALBANY 0.95 2011 ALEDO ALEDO 0.89 2011 ALEXANDER LAREDO 0.56 2011 ALICE ALICE 0.10 2011 ALLEN ALLEN 0.85 2011 ALPINE ALPINE 0.57 2011 ALTO ALTO 0.19 1 # of total rows: 5,596 schools_tea_math <- schools_tea %>% select(test, year, school, city, math) %>% filter(!is.na(math)) %>% group_by(test) %>% mutate(math_prnk = percent_rank(math)) %>% ungroup() %>% group_by(year, school, city) %>% summarise_at(vars(math_prnk), funs(mean(., na.rm = TRUE))) %>% ungroup() schools_tea_math year school city math_prnk 2011 A C JONES CORPUS CHRISTI 0.51 2011 A J MOORE ACAD WACO 0.24 2011 A M CONS HUNTSVILLE 0.97 2011 A MACEO SMITH HIGH SCHOOL RICHARDSON 0.03 2011 ABBOTT SCHOOL WACO 0.72 2011 ABERNATHY LUBBOCK 0.63 2011 ABILENE ABILENE 0.60 2011 ACADEMY HIGH SCHOOL AUSTIN 0.32 2011 ACADEMY OF CAREERS AND TECHNOLOGIE SAN ANTONIO 0.03 2011 ACADEMY OF CREATIVE ED SAN ANTONIO 0.48 2011 AGUA DULCE CORPUS CHRISTI 0.66 2011 AKINS AUSTIN 0.25 2011 ALAMO HEIGHTS SAN ANTONIO 0.95 2011 ALBA-GOLDEN KILGORE 0.52 2011 ALBANY JR-SR ABILENE 0.83 2011 ALDINE HOUSTON 0.23 1 # of total rows: 7,730 schools_join_math <- schools_tea_math %>% rename_at(vars(matches("^math")), funs(paste0("tea_", .))) %>% inner_join(schools_uil_math %>% rename_at(vars(matches("^math")), funs(paste0("uil_", .))), by = c("year", "school", "city")) %>% select(year, school, city, matches("math")) schools_join_math year school city tea_math_prnk uil_math_prnk 2011 ABILENE ABILENE 0.60 0.00 2011 ALAMO HEIGHTS SAN ANTONIO 0.95 0.70 2011 AMERICAS EL PASO 0.31 0.69 2011 ANDERSON AUSTIN 0.98 0.64 2011 ANDRESS EL PASO 0.15 0.56 2011 ARLINGTON HEIGHTS FORT WORTH 0.63 0.49 2011 AUSTIN AUSTIN 0.89 0.22 2011 AUSTIN EL PASO 0.22 0.68 2011 AUSTIN HOUSTON 0.09 0.85 2011 BEL AIR EL PASO 0.17 0.49 2011 BERKNER RICHARDSON 0.80 0.50 2011 BOSWELL FORT WORTH 0.83 0.22 2011 BOWIE AUSTIN 0.97 0.70 2011 BOWIE EL PASO 0.15 0.15 2011 BRANDEIS SAN ANTONIO 0.79 0.39 2011 BREWER FORT WORTH 0.48 0.19 2011 BURBANK SAN ANTONIO 0.22 0.76 2011 BURGES EL PASO 0.57 0.70 2011 CALALLEN CORPUS CHRISTI 0.47 0.93 2011 CANUTILLO EL PASO 0.18 0.81 1 # of total rows: 699 schools_join_math_cors <- schools_join_math %>% select(-year) %>% select_if(is.numeric) %>% corrr::correlate() schools_join_math_cors rowname tea_math_prnk uil_math_prnk tea_math_prnk NA 0.36 uil_math_prnk 0.36 NA So, this correlation value—0.36—seems fairly low. At face value, it certainly does not provide any basis to claim that schools that do well in the academic UIL competitions also do well with SAT/ACT tests. However, perhaps if I used a different methodology, the result would be different. Other metrics used to quantify academic UIL performance could be tested in some kind of sensitivity analysis. ### EDA: Year-to-Year Correlations, Cont. While I won’t do any kind of rigorous second evaluation here, I do want to try to quantify the impact of the “missing” data dropped due to mismatched school names. If all possible data had been used, would the final correlation value have increased (or decreased) with more (or less) data? 
Although finding direct answer to this question is impossible, we can evaluate the difference in the year-to-year correlations of scores from the schools that are joined with the correlations calculated for all in “unjoined” schools_tea and schools_uil data sets. If we find that there are large discrepancies (one way or the other), then we may have some reason to believe that the 0.36 number found above is misleading. To perform this task, I create a couple of intermediary data sets, as well as some functions. schools_postjoin_math_tidy <- schools_join_math %>% unite(school_city, school, city) %>% gather(metric, value, matches("prnk")) pairwise_cor_f1 <- function(data, which = c("tea", "uil")) { which <- match.arg(which) data %>% filter(metric %>% str_detect(which)) %>% # filter_at(vars(value), all_vars(!is.nan(.))) %>% widyr::pairwise_cor( feature = school_city, item = year, value = value ) %>% rename(year1 = item1, year2 = item2, cor = correlation) %>% mutate(source = which %>% toupper()) } pairwise_cor_f2 <- function(data, which = c("tea", "uil")) { which <- match.arg(which) col <- data %>% names() %>% str_subset("math") data %>% unite(school_city, school, city) %>% rename(value = !!rlang::sym(col)) %>% mutate(source = which %>% toupper()) %>% widyr::pairwise_cor( feature = school_city, item = year, value = value ) %>% rename(year1 = item1, year2 = item2, cor = correlation) %>% mutate(source = which %>% toupper()) } schools_postjoin_math_cors_byyear <- bind_rows( schools_postjoin_math_tidy %>% pairwise_cor_f1("tea"), schools_postjoin_math_tidy %>% pairwise_cor_f1("uil") ) schools_prejoin_math_cors_byyear <- bind_rows( schools_tea_math %>% pairwise_cor_f2("tea"), schools_uil_math %>% pairwise_cor_f2("uil") ) schools_math_cors_byyear_diffs <- schools_postjoin_math_cors_byyear %>% inner_join(schools_prejoin_math_cors_byyear, by = c("year1", "year2", "source"), suffix = c("_join", "_unjoin")) %>% mutate(cor_diff = cor_join - cor_unjoin) Ok, enough of the data munging—let’s review the results! schools_math_cors_byyear_diffs_wide <- schools_math_cors_byyear_diffs %>% filter(year1 <= year2) %>% select(-matches("join\$")) %>% unite(year_pair, year1, year2) %>% schools_math_cors_byyear_diffs_wide year_pair TEA UIL 2011_2012 0.09 -0.02 2011_2013 0.05 -0.04 2011_2014 0.11 -0.06 2011_2015 0.24 -0.03 2012_2013 0.01 0.00 2012_2014 0.06 0.04 2012_2015 0.15 0.00 2013_2014 -0.01 -0.03 2013_2015 0.08 0.00 2014_2015 0.05 -0.01 Note that the correlations in the joined data are a bit “stronger”—in the sense that they are more positive—among the TEA SAT/ACT data, although not in any kind of magnificent way. Additionally, the differences for the UIL data are trivial. Thus, we might say that the additional data that could have possibly increased (or decreased) the singular correlation value found—0.36—would not have changed much at all. ## Conclusion So, my initial inclination in my analysis of academic UIL competitions) seems correct—there is no significant relationship between Texas high school academic competition scores and standardized test scores (for math, between 2011 and 2015). And, with that question answered, I intend to explore this rich data set in other ways in future blog posts. ##### Tony ElHabr ###### Data person Passionate mostly about energy markets and sports analytics.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18133218586444855, "perplexity": 12775.684932896735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141748276.94/warc/CC-MAIN-20201205165649-20201205195649-00201.warc.gz"}
https://collaborate.princeton.edu/en/publications/load-shifting-in-the-smart-grid-to-participate-or-not
# Load Shifting in the Smart Grid: To Participate or Not? Yunpeng Wang, Walid Saad, Narayan B. Mandayam, H. Vincent Poor Research output: Contribution to journalArticlepeer-review 38 Scopus citations ## Abstract Demand-side management (DSM) has emerged as an important smart grid feature that allows utility companies to maintain desirable grid loads. However, the success of DSM is contingent on active customer participation. Indeed, most existing DSM studies are based on game-theoretic models that assume customers will act rationally and will voluntarily participate in DSM. In contrast, in this paper, the impact of customers' subjective behavior on each other's DSM decisions is explicitly accounted for. In particular, a noncooperative game is formulated between grid customers in which each customer can decide on whether to participate in DSM or not. In this game, customers seek to minimize a cost function that reflects their total payment for electricity. Unlike classical game-theoretic DSM studies, which assume that customers are rational in their decision-making, a novel approach is proposed based on the framework of prospect theory (PT) to explicitly incorporate the impact of customer behavior on DSM decisions. To solve the proposed game under both conventional game theory and PT, a new algorithm based on fictitious play is proposed using which the game will reach an ${\epsilon }$-mixed Nash equilibrium. Simulation results are provided to assess the impact of customer behavior on DSM. In particular, the overall participation level and grid load can depend significantly on the rationality level of the players and their risk aversion tendencies. Original language English (US) 2604-2614 11 IEEE Transactions on Smart Grid 7 6 https://doi.org/10.1109/TSG.2015.2483522 Published - Nov 2016 ## All Science Journal Classification (ASJC) codes • Computer Science(all) ## Keywords • Demand-side management (DSM) • game theory • prospect theory (PT) • smart grid
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3896715044975281, "perplexity": 2734.3830124284773}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506832.21/warc/CC-MAIN-20210116165621-20210116195621-00682.warc.gz"}
https://www.physicsforums.com/threads/the-difference-between-system-equili-and-system-and-steady-state.1623/
# The difference between system equilibrium and system steady state

• #1
hi

Can anyone explain the difference between a system at equilibrium and a system at steady-state water flow? I know that equilibrium occurs at equal rates, so no net change is produced. But I don't understand a steady-state system... Please explain it to me. Thanks.

Also, what is the difference between saturated and unsaturated hydraulic conductivity? I have a hard time understanding these topics. If anyone knows this material, can you please explain it to me? Thanks.

• #2
Tom Mattson (Staff Emeritus, Gold Member)

I'm moving this to Physics, where perhaps it will get some discussion.

• #3
Alexander

Equilibrium: dU/dx = 0 (usually this happens at extrema of potential energy).

• #4
Tom Mattson (Staff Emeritus, Gold Member)

Originally posted by hi: "I know that equilibrium occurs at equal rates, no net change is produced."

OK, at first I thought you meant the "zero force" condition, but now I am thinking that you are referring to the continuity equation. That is because when you say "equal rates", it makes me think of "equal flow rates into and out of a volume". So, that statement of equilibrium would be:

∇·j + ∂ρ/∂t = 0

"But I don't understand steady state system....Please explain it to me.... Thanks"

I dug up the old Fluid Mechanics book (it's been about 10 years!) and looked up the mathematical definition of steady state. It is

∂A/∂t = 0

for any fluid property A. That would include the density ρ, which reduces the continuity equation to:

∇·j = 0

"also the difference between saturated and unsaturated hydraulic conductivity?"

This I don't know. Our local "fluids" guy is Enigma; try sending him a PM.

edit: fixed ∂ signs.

• #5
Alexander

I would call steady state a state at which power (the rate of change of energy, dU/dt) is constant.
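To make the distinction concrete, here is a tiny illustrative simulation (my own addition, not from the thread): a tank with constant inflow and level-dependent outflow reaches a steady state in which the level stops changing (∂(level)/∂t = 0) even though water keeps flowing through it, which is different from a static equilibrium with no flow at all. The numbers are arbitrary.

```python
# Tank: d(level)/dt = inflow - k * level   (outflow grows with the level)
inflow, k, dt = 2.0, 0.5, 0.01
level = 0.0
for step in range(5000):
    level += dt * (inflow - k * level)

print(level)       # ~4.0: steady state, the level no longer changes
print(k * level)   # ~2.0: outflow equals inflow, yet the flow is nonzero
```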
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9106267690658569, "perplexity": 3364.941660461981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00308.warc.gz"}
https://docs.sympy.org/dev/modules/tensor/tensor.html
# Tensor# class sympy.tensor.tensor.TensorIndexType(name, dummy_name=None, dim=None, eps_dim=None, metric_symmetry=1, metric_name='metric', **kwargs)[source]# A TensorIndexType is characterized by its name and its metric. Parameters: name : name of the tensor type dummy_name : name of the head of dummy indices dim : dimension, it can be a symbol or an integer or None eps_dim : dimension of the epsilon tensor metric_symmetry : integer that denotes metric symmetry or None for no metric metric_name : string with the name of the metric tensor Notes The possible values of the metric_symmetry parameter are: 1 : metric tensor is fully symmetric 0 : metric tensor possesses no index symmetry -1 : metric tensor is fully antisymmetric None: there is no metric tensor (metric equals to None) The metric is assumed to be symmetric by default. It can also be set to a custom tensor by the .set_metric() method. If there is a metric the metric is used to raise and lower indices. In the case of non-symmetric metric, the following raising and lowering conventions will be adopted: psi(a) = g(a, b)*psi(-b); chi(-a) = chi(b)*g(-b, -a) From these it is easy to find: g(-a, b) = delta(-a, b) where delta(-a, b) = delta(b, -a) is the Kronecker delta (see TensorIndex for the conventions on indices). For antisymmetric metrics there is also the following equality: g(a, -b) = -delta(a, -b) If there is no metric it is not possible to raise or lower indices; e.g. the index of the defining representation of SU(N) is ‘covariant’ and the conjugate representation is ‘contravariant’; for N > 2 they are linearly independent. eps_dim is by default equal to dim, if the latter is an integer; else it can be assigned (for use in naive dimensional regularization); if eps_dim is not an integer epsilon is None. Examples >>> from sympy.tensor.tensor import TensorIndexType >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> Lorentz.metric metric(Lorentz,Lorentz) Attributes metric (the metric tensor) delta (Kronecker delta) epsilon (the Levi-Civita epsilon tensor) data ((deprecated) a property to add ndarray values, to work in a specified basis.) class sympy.tensor.tensor.TensorIndex(name, tensor_index_type, is_up=True)[source]# Represents a tensor index Parameters: name : name of the index, or True if you want it to be automatically assigned tensor_index_type : TensorIndexType of the index is_up : flag for contravariant index (is_up=True by default) Notes Tensor indices are contracted with the Einstein summation convention. An index can be in contravariant or in covariant form; in the latter case it is represented prepending a - to the index name. Adding - to a covariant (is_up=False) index makes it contravariant. Dummy indices have a name with head given by tensor_inde_type.dummy_name with underscore and a number. Similar to symbols multiple contravariant indices can be created at once using tensor_indices(s, typ), where s is a string of names. Examples >>> from sympy.tensor.tensor import TensorIndexType, TensorIndex, TensorHead, tensor_indices >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> mu = TensorIndex('mu', Lorentz, is_up=False) >>> nu, rho = tensor_indices('nu, rho', Lorentz) >>> A = TensorHead('A', [Lorentz, Lorentz]) >>> A(mu, nu) A(-mu, nu) >>> A(-mu, -rho) A(mu, -rho) >>> A(mu, -mu) A(-L_0, L_0) Attributes name tensor_index_type is_up class sympy.tensor.tensor.TensorHead(name, index_types, symmetry=None, comm=0)[source]# Tensor head of the tensor. 
Parameters: name : name of the tensor index_types : list of TensorIndexType symmetry : TensorSymmetry of the tensor comm : commutation group number Notes Similar to symbols multiple TensorHeads can be created using tensorhead(s, typ, sym=None, comm=0) function, where s is the string of names and sym is the monoterm tensor symmetry (see tensorsymmetry). A TensorHead belongs to a commutation group, defined by a symbol on number comm (see _TensorManager.set_comm); tensors in a commutation group have the same commutation properties; by default comm is 0, the group of the commuting tensors. Examples Define a fully antisymmetric tensor of rank 2: >>> from sympy.tensor.tensor import TensorIndexType, TensorHead, TensorSymmetry >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> asym2 = TensorSymmetry.fully_symmetric(-2) >>> A = TensorHead('A', [Lorentz, Lorentz], asym2) Examples with ndarray values, the components data assigned to the TensorHead object are assumed to be in a fully-contravariant representation. In case it is necessary to assign components data which represents the values of a non-fully covariant tensor, see the other examples. >>> from sympy.tensor.tensor import tensor_indices >>> from sympy import diag >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> i0, i1 = tensor_indices('i0:2', Lorentz) Specify a replacement dictionary to keep track of the arrays to use for replacements in the tensorial expression. The TensorIndexType is associated to the metric used for contractions (in fully covariant form): >>> repl = {Lorentz: diag(1, -1, -1, -1)} Let’s see some examples of working with components with the electromagnetic tensor: >>> from sympy import symbols >>> Ex, Ey, Ez, Bx, By, Bz = symbols('E_x E_y E_z B_x B_y B_z') >>> c = symbols('c', positive=True) Let’s define $$F$$, an antisymmetric tensor: >>> F = TensorHead('F', [Lorentz, Lorentz], asym2) Let’s update the dictionary to contain the matrix to use in the replacements: >>> repl.update({F(-i0, -i1): [ ... [0, Ex/c, Ey/c, Ez/c], ... [-Ex/c, 0, -Bz, By], ... [-Ey/c, Bz, 0, -Bx], ... [-Ez/c, -By, Bx, 0]]}) Now it is possible to retrieve the contravariant form of the Electromagnetic tensor: >>> F(i0, i1).replace_with_arrays(repl, [i0, i1]) [[0, -E_x/c, -E_y/c, -E_z/c], [E_x/c, 0, -B_z, B_y], [E_y/c, B_z, 0, -B_x], [E_z/c, -B_y, B_x, 0]] and the mixed contravariant-covariant form: >>> F(i0, -i1).replace_with_arrays(repl, [i0, -i1]) [[0, E_x/c, E_y/c, E_z/c], [E_x/c, 0, B_z, -B_y], [E_y/c, -B_z, 0, B_x], [E_z/c, B_y, -B_x, 0]] Energy-momentum of a particle may be represented as: >>> from sympy import symbols >>> P = TensorHead('P', [Lorentz], TensorSymmetry.no_symmetry(1)) >>> E, px, py, pz = symbols('E p_x p_y p_z', positive=True) >>> repl.update({P(i0): [E, px, py, pz]}) The contravariant and covariant components are, respectively: >>> P(i0).replace_with_arrays(repl, [i0]) [E, p_x, p_y, p_z] >>> P(-i0).replace_with_arrays(repl, [-i0]) [E, -p_x, -p_y, -p_z] The contraction of a 1-index tensor by itself: >>> expr = P(i0)*P(-i0) >>> expr.replace_with_arrays(repl, []) E**2 - p_x**2 - p_y**2 - p_z**2 Attributes name index_types rank (total number of indices) symmetry comm (commutation group) commutes_with(other)[source]# Returns 0 if self and other commute, 1 if they anticommute. Returns None if self and other neither commute nor anticommute. 
sympy.tensor.tensor.tensor_heads(s, index_types, symmetry=None, comm=0)[source]# Returns a sequence of TensorHeads from a string $$s$$ class sympy.tensor.tensor.TensExpr(*args)[source]# Abstract base class for tensor expressions Notes A tensor expression is an expression formed by tensors; currently the sums of tensors are distributed. A TensExpr can be a TensAdd or a TensMul. TensMul objects are formed by products of component tensors, and include a coefficient, which is a SymPy expression. In the internal representation contracted indices are represented by (ipos1, ipos2, icomp1, icomp2), where icomp1 is the position of the component tensor with contravariant index, ipos1 is the slot which the index occupies in that component tensor. Contracted indices are therefore nameless in the internal representation. get_matrix()[source]# DEPRECATED: do not use. Returns ndarray components data as a matrix, if components data are available and ndarray dimension does not exceed 2. replace_with_arrays(replacement_dict, indices=None)[source]# Replace the tensorial expressions with arrays. The final array will correspond to the N-dimensional array with indices arranged according to indices. Parameters: replacement_dict dictionary containing the replacement rules for tensors. indices the index order with respect to which the array is read. The original index order will be used if no value is passed. Examples >>> from sympy.tensor.tensor import TensorIndexType, tensor_indices >>> from sympy.tensor.tensor import TensorHead >>> from sympy import symbols, diag >>> L = TensorIndexType("L") >>> i, j = tensor_indices("i j", L) >>> A = TensorHead("A", [L]) >>> A(i).replace_with_arrays({A(i): [1, 2]}, [i]) [1, 2] Since ‘indices’ is optional, we can also call replace_with_arrays by this way if no specific index order is needed: >>> A(i).replace_with_arrays({A(i): [1, 2]}) [1, 2] >>> expr = A(i)*A(j) >>> expr.replace_with_arrays({A(i): [1, 2]}) [[1, 2], [2, 4]] For contractions, specify the metric of the TensorIndexType, which in this case is L, in its covariant form: >>> expr = A(i)*A(-i) >>> expr.replace_with_arrays({A(i): [1, 2], L: diag(1, -1)}) -3 Symmetrization of an array: >>> H = TensorHead("H", [L, L]) >>> a, b, c, d = symbols("a b c d") >>> expr = H(i, j)/2 + H(j, i)/2 >>> expr.replace_with_arrays({H(i, j): [[a, b], [c, d]]}) [[a, b/2 + c/2], [b/2 + c/2, d]] Anti-symmetrization of an array: >>> expr = H(i, j)/2 - H(j, i)/2 >>> repl = {H(i, j): [[a, b], [c, d]]} >>> expr.replace_with_arrays(repl) [[0, b/2 - c/2], [-b/2 + c/2, 0]] The same expression can be read as the transpose by inverting i and j: >>> expr.replace_with_arrays(repl, [j, i]) [[0, -b/2 + c/2], [b/2 - c/2, 0]] class sympy.tensor.tensor.TensAdd(*args, **kw_args)[source]# Sum of tensors. 
Parameters: free_args : list of the free indices Examples >>> from sympy.tensor.tensor import TensorIndexType, tensor_heads, tensor_indices >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> a, b = tensor_indices('a,b', Lorentz) >>> p, q = tensor_heads('p,q', [Lorentz]) >>> t = p(a) + q(a); t p(a) + q(a) Examples with components data added to the tensor expression: >>> from sympy import symbols, diag >>> x, y, z, t = symbols("x y z t") >>> repl = {} >>> repl[Lorentz] = diag(1, -1, -1, -1) >>> repl[p(a)] = [1, 2, 3, 4] >>> repl[q(a)] = [x, y, z, t] The following are: 2**2 - 3**2 - 2**2 - 7**2 ==> -58 >>> expr = p(a) + q(a) >>> expr.replace_with_arrays(repl, [a]) [x + 1, y + 2, z + 3, t + 4] Attributes args (tuple of addends) rank (rank of the tensor) free_args (list of the free indices in sorted order) canon_bp()[source]# Canonicalize using the Butler-Portugal algorithm for canonicalization under monoterm symmetries. contract_metric(g)[source]# Raise or lower indices with the metric g. Parameters: g : metric contract_all : if True, eliminate all g which are contracted Notes see the TensorIndexType docstring for the contraction conventions class sympy.tensor.tensor.TensMul(*args, **kw_args)[source]# Product of tensors. Parameters: coeff : SymPy coefficient of the tensor args Notes args[0] list of TensorHead of the component tensors. args[1] list of (ind, ipos, icomp) where ind is a free index, ipos is the slot position of ind in the icomp-th component tensor. args[2] list of tuples representing dummy indices. (ipos1, ipos2, icomp1, icomp2) indicates that the contravariant dummy index is the ipos1-th slot position in the icomp1-th component tensor; the corresponding covariant index is in the ipos2 slot position in the icomp2-th component tensor. Attributes components (list of TensorHead of the component tensors) types (list of nonrepeated TensorIndexType) free (list of (ind, ipos, icomp), see Notes) dum (list of (ipos1, ipos2, icomp1, icomp2), see Notes) ext_rank (rank of the tensor counting the dummy indices) rank (rank of the tensor) coeff (SymPy coefficient of the tensor) free_args (list of the free indices in sorted order) is_canon_bp (True if the tensor in in canonical form) canon_bp()[source]# Canonicalize using the Butler-Portugal algorithm for canonicalization under monoterm symmetries. Examples >>> from sympy.tensor.tensor import TensorIndexType, tensor_indices, TensorHead, TensorSymmetry >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> m0, m1, m2 = tensor_indices('m0,m1,m2', Lorentz) >>> A = TensorHead('A', [Lorentz]*2, TensorSymmetry.fully_symmetric(-2)) >>> t = A(m0,-m1)*A(m1,-m0) >>> t.canon_bp() -A(L_0, L_1)*A(-L_0, -L_1) >>> t = A(m0,-m1)*A(m1,-m2)*A(m2,-m0) >>> t.canon_bp() 0 contract_metric(g)[source]# Raise or lower indices with the metric g. Parameters: g : metric Notes See the TensorIndexType docstring for the contraction conventions. Examples >>> from sympy.tensor.tensor import TensorIndexType, tensor_indices, tensor_heads >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> m0, m1, m2 = tensor_indices('m0,m1,m2', Lorentz) >>> g = Lorentz.metric >>> p, q = tensor_heads('p,q', [Lorentz]) >>> t = p(m0)*q(m1)*g(-m0, -m1) >>> t.canon_bp() metric(L_0, L_1)*p(-L_0)*q(-L_1) >>> t.contract_metric(g).canon_bp() p(L_0)*q(-L_0) get_free_indices() [source]# Returns the list of free indices of the tensor. Explanation The indices are listed in the order in which they appear in the component tensors. 
Examples >>> from sympy.tensor.tensor import TensorIndexType, tensor_indices, tensor_heads >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> m0, m1, m2 = tensor_indices('m0,m1,m2', Lorentz) >>> g = Lorentz.metric >>> p, q = tensor_heads('p,q', [Lorentz]) >>> t = p(m1)*g(m0,m2) >>> t.get_free_indices() [m1, m0, m2] >>> t2 = p(m1)*g(-m1, m2) >>> t2.get_free_indices() [m2] get_indices()[source]# Returns the list of indices of the tensor. Explanation The indices are listed in the order in which they appear in the component tensors. The dummy indices are given a name which does not collide with the names of the free indices. Examples >>> from sympy.tensor.tensor import TensorIndexType, tensor_indices, tensor_heads >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> m0, m1, m2 = tensor_indices('m0,m1,m2', Lorentz) >>> g = Lorentz.metric >>> p, q = tensor_heads('p,q', [Lorentz]) >>> t = p(m1)*g(m0,m2) >>> t.get_indices() [m1, m0, m2] >>> t2 = p(m1)*g(-m1, m2) >>> t2.get_indices() [L_0, -L_0, m2] perm2tensor(g, is_canon_bp=False)[source]# Returns the tensor corresponding to the permutation g For further details, see the method in TIDS with the same name. sorted_components()[source]# Returns a tensor product with sorted components. split()[source]# Returns a list of tensors, whose product is self. Explanation Dummy indices contracted among different tensor components become free indices with the same name as the one used to represent the dummy indices. Examples >>> from sympy.tensor.tensor import TensorIndexType, tensor_indices, tensor_heads, TensorSymmetry >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> a, b, c, d = tensor_indices('a,b,c,d', Lorentz) >>> A, B = tensor_heads('A,B', [Lorentz]*2, TensorSymmetry.fully_symmetric(2)) >>> t = A(a,b)*B(-b,c) >>> t A(a, L_0)*B(-L_0, c) >>> t.split() [A(a, L_0), B(-L_0, c)] sympy.tensor.tensor.canon_bp(p)[source]# Butler-Portugal canonicalization. See tensor_can.py from the combinatorics module for the details. sympy.tensor.tensor.riemann_cyclic_replace(t_r)[source]# replace Riemann tensor with an equivalent expression R(m,n,p,q) -> 2/3*R(m,n,p,q) - 1/3*R(m,q,n,p) + 1/3*R(m,p,n,q) sympy.tensor.tensor.riemann_cyclic(t2)[source]# Replace each Riemann tensor with an equivalent expression satisfying the cyclic identity. This trick is discussed in the reference guide to Cadabra. Examples >>> from sympy.tensor.tensor import TensorIndexType, tensor_indices, TensorHead, riemann_cyclic, TensorSymmetry >>> Lorentz = TensorIndexType('Lorentz', dummy_name='L') >>> i, j, k, l = tensor_indices('i,j,k,l', Lorentz) >>> R = TensorHead('R', [Lorentz]*4, TensorSymmetry.riemann()) >>> t = R(i,j,k,l)*(R(-i,-j,-k,-l) - 2*R(-i,-k,-j,-l)) >>> riemann_cyclic(t) 0 class sympy.tensor.tensor.TensorSymmetry(*args, **kw_args)[source]# Monoterm symmetry of a tensor (i.e. any symmetric or anti-symmetric index permutation). For the relevant terminology see tensor_can.py section of the combinatorics module. Parameters: bsgs : tuple (base, sgs) BSGS of the symmetry of the tensor Notes A tensor can have an arbitrary monoterm symmetry provided by its BSGS. Multiterm symmetries, like the cyclic symmetry of the Riemann tensor (i.e., Bianchi identity), are not covered. See combinatorics module for information on how to generate BSGS for a general index permutation group. Simple symmetries can be generated using built-in methods. 
Examples

Define a symmetric tensor of rank 2

>>> from sympy.tensor.tensor import TensorIndexType, TensorSymmetry, get_symmetric_group_sgs, TensorHead
>>> Lorentz = TensorIndexType('Lorentz', dummy_name='L')
>>> sym = TensorSymmetry(get_symmetric_group_sgs(2))
>>> T = TensorHead('T', [Lorentz]*2, sym)

Note that the same can also be done using built-in TensorSymmetry methods

>>> sym2 = TensorSymmetry.fully_symmetric(2)
>>> sym == sym2
True

Attributes

base (base of the BSGS)

generators (generators of the BSGS)

rank (rank of the tensor)

classmethod direct_product(*args)[source]#

Returns a TensorSymmetry object that is a direct product of fully (anti-)symmetric index permutation groups.

Notes

Some examples for different values of (*args):

(1) vector, equivalent to TensorSymmetry.fully_symmetric(1)

(2) tensor with 2 symmetric indices, equivalent to .fully_symmetric(2)

(-2) tensor with 2 antisymmetric indices, equivalent to .fully_symmetric(-2)

(2, -2) tensor with the first 2 indices commuting and the last 2 anticommuting

(1, 1, 1) tensor with 3 indices without any symmetry

classmethod fully_symmetric(rank)[source]#

Returns a fully symmetric (antisymmetric if rank<0) TensorSymmetry object for abs(rank) indices.

classmethod no_symmetry(rank)[source]#

TensorSymmetry object for rank indices with no symmetry

classmethod riemann()[source]#

Returns a monoterm symmetry of the Riemann tensor

sympy.tensor.tensor.tensorsymmetry(*args)[source]#

Returns a TensorSymmetry object. This method is deprecated, use TensorSymmetry.direct_product() or .riemann() instead.

Explanation

One can represent a tensor with any monoterm slot symmetry group using a BSGS.

args can be a BSGS: args[0] is the base, args[1] is the sgs.

Usually tensors are in (direct products of) representations of the symmetric group; args can be a list of lists representing the shapes of Young tableaux.

Notes

For instance:

[[1]] vector

[[1]*n] symmetric tensor of rank n

[[n]] antisymmetric tensor of rank n

[[2, 2]] monoterm slot symmetry of the Riemann tensor

[[1],[1]] vector*vector

[[2],[1],[1]] (antisymmetric tensor)*vector*vector

Notice that with the shape [2, 2] we associate only the monoterm symmetries of the Riemann tensor; this is an abuse of notation, since the shape [2, 2] usually corresponds to the irreducible representation characterized by the monoterm symmetries and by the cyclic symmetry.

class sympy.tensor.tensor.TensorType(*args, **kwargs)[source]#

Class of tensor types. Deprecated, use tensor_heads() instead.

Parameters: index_types : list of TensorIndexType of the tensor indices symmetry : TensorSymmetry of the tensor

Attributes

index_types

symmetry

types (list of TensorIndexType without repetitions)

class sympy.tensor.tensor._TensorManager[source]#

Class to manage tensor properties.

Notes

Tensors belong to tensor commutation groups; each group has a label comm; there are predefined labels:

0 tensors commuting with any other tensor

1 tensors anticommuting among themselves

2 tensors not commuting, apart from those with comm=0

Other groups can be defined using set_comm; tensors in those groups commute with those with comm=0; by default they do not commute with any other group.

clear()[source]#

Clear the TensorManager.

comm_i2symbol(i)[source]#

Returns the symbol corresponding to the commutation group number.

comm_symbols2i(i)[source]#

Get the commutation group number corresponding to i.

i can be a symbol or a number or a string.

If i is not already defined its commutation group number is set.
get_comm(i, j)[source]# Return the commutation parameter for commutation group numbers i, j see _TensorManager.set_comm set_comm(i, j, c)[source]# Set the commutation parameter c for commutation groups i, j. Parameters: i, j : symbols representing commutation groups c : group commutation number Notes i, j can be symbols, strings or numbers, apart from 0, 1 and 2 which are reserved respectively for commuting, anticommuting tensors and tensors not commuting with any other group apart with the commuting tensors. For the remaining cases, use this method to set the commutation rules; by default c=None. The group commutation number c is assigned in correspondence to the group commutation symbols; it can be 0 commuting 1 anticommuting None no commutation property Examples G and GH do not commute with themselves and commute with each other; A is commuting. >>> from sympy.tensor.tensor import TensorIndexType, tensor_indices, TensorHead, TensorManager, TensorSymmetry >>> Lorentz = TensorIndexType('Lorentz') >>> i0,i1,i2,i3,i4 = tensor_indices('i0:5', Lorentz) >>> A = TensorHead('A', [Lorentz]) >>> G = TensorHead('G', [Lorentz], TensorSymmetry.no_symmetry(1), 'Gcomm') >>> GH = TensorHead('GH', [Lorentz], TensorSymmetry.no_symmetry(1), 'GHcomm') >>> TensorManager.set_comm('Gcomm', 'GHcomm', 0) >>> (GH(i1)*G(i0)).canon_bp() G(i0)*GH(i1) >>> (G(i1)*G(i0)).canon_bp() G(i1)*G(i0) >>> (G(i1)*A(i0)).canon_bp() A(i0)*G(i1) set_comms(*args)[source]# Set the commutation group numbers c for symbols i, j. Parameters: args : sequence of (i, j, c)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6933791637420654, "perplexity": 13211.260644225746}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494852.95/warc/CC-MAIN-20230127001911-20230127031911-00792.warc.gz"}
https://www.rdocumentation.org/packages/KRLS/versions/1.0-0
# KRLS v1.0-0

## Kernel-Based Regularized Least Squares

This package implements Kernel-based Regularized Least Squares (KRLS), a machine learning method for fitting multidimensional functions y = f(x) in regression and classification problems without relying on linearity or additivity assumptions. KRLS finds the best-fitting function by minimizing the squared loss of a Tikhonov regularization problem, using Gaussian kernels as radial basis functions. For further details see Hainmueller and Hazlett (2014).

## Functions in KRLS

• krls: Kernel-based Regularized Least Squares (KRLS)
• lambdasearch: Leave-one-out optimization to find $\lambda$
• summary.krls: Summary method for Kernel-based Regularized Least Squares (KRLS) Model Fits
• fdskrls: Compute first differences with KRLS
• predict.krls: Predict method for Kernel-based Regularized Least Squares (KRLS) Model Fits
• solveforc: Solve for Choice Coefficients in KRLS
• looloss: Loss Function for Leave One Out Error
• plot.krls: Plot method for Kernel-based Regularized Least Squares (KRLS) Model Fits
• gausskernel: Gaussian Kernel Distance Computation
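For reference, the Tikhonov problem mentioned in the description can be written as follows (standard KRLS notation following Hainmueller and Hazlett (2014); the bandwidth parametrization shown is schematic rather than the package's exact default):

$\hat{c} = \arg\min_{c} \sum_{i=1}^{n}\Big(y_i - \sum_{j=1}^{n} K_{ij} c_j\Big)^{2} + \lambda\, c^{\top} K c, \qquad K_{ij} = \exp\!\left(-\frac{\lVert x_i - x_j \rVert^{2}}{\sigma^{2}}\right),$

which has the closed-form solution $\hat{c} = (K + \lambda I)^{-1} y$ and fitted function $\hat{f}(x) = \sum_j \hat{c}_j \exp(-\lVert x - x_j \rVert^2 / \sigma^2)$. In terms of the functions listed above, gausskernel builds $K$, solveforc computes $\hat{c}$, and lambdasearch picks $\lambda$ by minimizing the leave-one-out loss (looloss).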
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23020420968532562, "perplexity": 8134.50716271337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146485.15/warc/CC-MAIN-20200226181001-20200226211001-00491.warc.gz"}
http://www.physicsforums.com/showthread.php?t=541289
## Simple PDE....

I'm trying to solve the PDE: $\frac{\partial^2 f(x,t)}{\partial x^2}=\frac{\partial f(x,t)}{\partial t}$ with $x \in [-1,1]$ and boundary conditions f(1,t)=f(-1,t)=0. I thought that $e^{i(kx-\omega t)}$ would work, but that obviously does not fit with the boundary conditions. Does anyone have an idea?

Blog Entries: 1 Recognitions: Gold Member Staff Emeritus

Quote by Aidyan
I'm trying to solve the PDE: $\frac{\partial^2 f(x,t)}{\partial x^2}=\frac{\partial f(x,t)}{\partial t}$ with $x \in [-1,1]$ and boundary conditions f(1,t)=f(-1,t)=0. I thought that $e^{i(kx-\omega t)}$ would work, but that obviously does not fit with the boundary conditions. Does anyone have an idea?

Your equation is the 1D heat equation, the solutions of which are very well known and understood. A Google search should yield what you need.

P.S. You will also need some kind of initial condition.

Quote by Hootenanny
Your equation is the 1D heat equation, the solutions of which are very well known and understood. A Google search should yield what you need.
P.S. You will also need some kind of initial condition.

Hmm... it looks like it isn't just a simple solution, however. It seems I'm lacking the basics... I thought this was sufficient data to solve it uniquely. What is the difference between boundary and initial conditions?

Blog Entries: 1 Recognitions: Gold Member Staff Emeritus

## Simple PDE....

Quote by Aidyan
I thought this was sufficient data to solve it uniquely,

Afraid not; without knowing the temperature distribution at a specific time you aren't going to obtain a (non-trivial) unique solution.

Quote by Aidyan
what is the difference between boundary and initial conditions?

The former specifies the temperature on the spatial boundaries of the domain (in this case x=-1 and x=1). The latter specifies the temperature distribution at a specific point in time (usually t=0, hence the term initial condition).
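For completeness (an addition, giving the standard separation-of-variables result for this problem): with the boundary conditions $f(\pm 1, t) = 0$, the solutions are superpositions of decaying sine modes,

$f(x,t) = \sum_{n=1}^{\infty} b_n \sin\!\left(\frac{n\pi (x+1)}{2}\right) e^{-(n\pi/2)^{2} t}, \qquad b_n = \int_{-1}^{1} f(x,0)\, \sin\!\left(\frac{n\pi (x+1)}{2}\right) dx.$

Each mode satisfies the PDE and vanishes at $x = \pm 1$; the coefficients $b_n$ are fixed only once the initial profile $f(x,0)$ is given, which is exactly why the boundary conditions alone do not determine a unique (non-trivial) solution.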
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8521706461906433, "perplexity": 773.4843891461609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698238192/warc/CC-MAIN-20130516095718-00081-ip-10-60-113-184.ec2.internal.warc.gz"}
http://www.rdmag.com/news/2014/03/new-algorithm-improves-efficiency-small-wind-turbines?et_cid=3830296&et_rid=623702531&location=top
News

New algorithm improves the efficiency of small wind turbines

Tue, 03/18/2014 - 9:47am

In recent years, mini wind energy has been developing in a spectacular way. According to estimates by the WWEA (World Wind Energy Association), the level of development of the mini wind energy industry is not the same as that of the wind energy industry, although forecasts are optimistic. The main reason is that the level of efficiency of small wind turbines is low. To address this problem, the UPV/EHU's research group APERT (Applied Electronics Research Team) has developed an adaptive algorithm. The improvements that are applied to the control of these turbines will in fact contribute towards making them more efficient. The study has been published in the journal Renewable Energy.

Small wind turbines tend to be located in areas where wind conditions are more unfavourable. "The control systems of current wind turbines are not adaptive; in other words, the algorithms lack the capacity to adapt to new situations," explained Iñigo Kortabarria, one of the researchers in the UPV/EHU's APERT research group. That is why "the aim of the research was to develop a new algorithm capable of adapting to new conditions or to the changes that may take place in the wind turbine," added Kortabarria. That way, the researchers have managed to increase the efficiency of wind turbines.

The speed of the wind and that of the wind turbine must be directly related if the latter is to be efficient. The same thing happens with a dancing partner: the more synchronised the rhythms of the dancers are, the more comfortable and efficient the dance is, and this can be noticed because the energy expenditure for the two partners is at a minimum level. To put it another way, the algorithm specifies the way in which the wind turbine adapts to changes. This is what the UPV/EHU researchers have focussed on: the algorithm, the set of orders that the wind turbine will receive to adapt to wind speed. "The new algorithm adapts to the environmental conditions and, what is more, it is more stable and does not move aimlessly. The risk that algorithms run is that of not adapting to the changes and, in the worst-case scenario, that of making the wind turbine operate in very unfavourable conditions, thereby reducing its efficiency."

Efficiency is the aim

Efficiency is one of the main concerns in the mini wind turbine industry. One has to bear in mind that small wind turbines tend to be located in areas where wind conditions are more unfavourable. Large wind turbines are located in mountainous areas or on the coast; however, small ones are installed in places where the wind conditions are highly variable. What is more, the mini wind turbine industry has few resources to devote to research and very often is unaware of the aerodynamic features of these wind turbines. All these aspects make it difficult to track the maximum power point (MPPT, Maximum Power Point Tracking) optimally. "There has to be a direct relation between wind speed and wind turbine speed so that the monitoring of the maximum point of power is appropriate. It is important for this to be done optimally. Otherwise, energy is not produced efficiently," explained Iñigo Kortabarria. Most of the current algorithms have not been tested under the conditions of the wind that blows in the places where small wind turbines are located.
That is why the UPV/EHU researchers have designed a test bench and have tested the algorithms that are currently being used, including the new algorithm developed in this piece of research, in the most representative conditions that could exist in the life of a wind turbine with this power. "Current algorithms cannot adapt to changes, and therefore wind turbine efficiency is severely reduced, for example, when wind density changes," asserted Kortabarria. "The experimental trials conducted clearly show that the new algorithm's capacity to adapt improves energy efficiency when the wind conditions are variable," explained Kortabarria. "We have seen that under variable conditions, in other words, in the actual conditions of a wind turbine, the new algorithm will be more efficient than the existing ones."

A novel adaptative maximum power point tracking algorithm for small wind turbines

Source: Basque Research
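For readers unfamiliar with the acronym, MPPT is usually implemented as some form of hill climbing on measured power. The sketch below is a minimal perturb-and-observe loop, the classical non-adaptive baseline that adaptive algorithms like the one described above aim to improve on; the power curve and all numbers are invented purely for illustration.

```python
def turbine_power(rotor_setpoint, wind_speed):
    """Toy power curve with a single maximum (illustrative only):
    power peaks when the rotor setpoint is about 7x the wind speed."""
    optimum = 7.0 * wind_speed
    return max(0.0, 1000.0 - (rotor_setpoint - optimum) ** 2)

def perturb_and_observe(wind_speed, setpoint=50.0, step=1.0, n_steps=200):
    """Classical hill-climbing MPPT: keep perturbing the setpoint in the
    current direction while power increases, reverse when it decreases."""
    last_power = turbine_power(setpoint, wind_speed)
    direction = 1.0
    for _ in range(n_steps):
        setpoint += direction * step
        power = turbine_power(setpoint, wind_speed)
        if power < last_power:
            direction = -direction      # power dropped, so reverse the perturbation
        last_power = power
    return setpoint, last_power

print(perturb_and_observe(wind_speed=8.0))
```

A fixed perturbation step is exactly what struggles under gusty, variable wind: the adaptive algorithm described in the article adjusts its behaviour as conditions change (for example, when air density changes) instead of blindly repeating the same perturbation.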
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8622714877128601, "perplexity": 744.1381833577454}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663637.20/warc/CC-MAIN-20140930004103-00001-ip-10-234-18-248.ec2.internal.warc.gz"}
http://primarystandards.aamt.edu.au/Webshop/Secondary/Teaching-Mathematics-Visually-Actively
## Featured resource

### Problem Pictures: Numbers - single user

The third in the series continues the tradition of bringing mathematics to life with photographs using attractive, unusual and puzzling images.

Members: $52.00 inc. GST
Others: $65.00 inc. GST

# Teaching Mathematics Visually & Actively

### 2nd edition

Tandi Clausen-May

This book is about making mathematics visible and tangible – not something that just lies flat on the page. Dipping into it will provide instantly usable suggestions across a variety of topics at different levels: from early number concepts through to fractions and ratios, algebra, aspects of geometry (including angles and circles), time and data handling. When you get a chance to read it more thoroughly you will find arguments for using these approaches, consideration of some of the pitfalls to avoid, and inspiration to develop different ways of helping students to achieve deep and connected understandings.

Formerly titled 'Teaching Maths to Pupils with Different Learning Styles', this updated edition now includes a CD with slide show presentations for each chapter, activity sheets and further resources. For any teacher who wants to provide students with opportunities for visual and kinaesthetic learning in mathematics.

Members: $64.00 inc. GST
Others: $80.00 inc. GST

ISBN-13: 978-1-4462-4086-1
Year Levels: 2 - 10
Publisher: Sage Publications
Page Count: 105
Cover type: Soft cover
Publication date: 2013
Product number: ECA012
Keywords: Algebra, Chance and data, Instructional method, Number, Professional learning/teacher education, Space/Measurement
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23320025205612183, "perplexity": 8882.192370605417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886758.34/warc/CC-MAIN-20180116224019-20180117004019-00353.warc.gz"}
https://discuss.prosemirror.net/t/how-to-create-a-nodetype-instance-to-write-the-content/3965
How to create a NodeType Instance to write the content? Hi, I have been trying to write a latex equation programmatically using the ProseMirror Math plugin, (GitHub - benrbray/prosemirror-math: Schema and plugins for "first-class" math support in Pro). This is the command I am using(prosemirror-math/insert-math-cmd.ts at master · benrbray/prosemirror-math · GitHub), But however, I don’t know how to pass the NodeType Instance to that function. addLatex() { // this.editor.schema.nodes.math_inline = "{S}_{dp}={n}_A.{n}_B.\mathit{\exp}\left(-{wd}^2\right); Sc= median\left\{{S}_{dp}\right\}" const nodeSchema = this.editor.schema.node(schema.nodes.math_inline); nodeSchema.textContent = "{S}_{dp}={n}_A.{n}_B.\mathit{\exp}\left(-{wd}^2\right); Sc= median\left\{{S}_{dp}\right\}"; this.editor.commands .insertMathInline(nodeSchema) .exec(); } insertMathInline(nodeType:NodeType): this { insertMathCmd(nodeType)(this.state, this.dispatch); return this; } const mathInline: NodeSpec = { group: "inline math", content: "text*", // important! inline: true, // important! atom: true, // important! toDOM: () => ["math-inline", { class: "math-node" }, 0], parseDOM: [{ tag: "math-inline" // important! },// ...defaultInlineMathParseRules ] }; const mathDisplay: NodeSpec = { group: "block math", content: "text*", // important! atom: true, // important! code: true, // important! toDOM: () => ["math-display", { class: "math-node" }, 0], parseDOM: [{ tag: "math-display" // important! }, //...defaultBlockMathParseRules ] }; const nodes = { doc, text, paragraph, blockquote, horizontal_rule: horizontalRule, hard_break: hardBreak, code_block: codeBlock, image, list_item: listItem, ordered_list: orderedList, bullet_list: bulletList, math_inline: mathInline, math_display: mathDisplay }; Node types should be available under schema.nodes[nodeName] Thanks, @marijn for helping me out, But how to populate the string(’\sqrt{3}’) which I want to display in the editor. Actually, I am framing it like this. But the NodeType is applying in the editor without the input string content. addLatex() { const nodeType: NodeType = this.editor.schema.nodes.math_inline; nodeType.create({ content: "\sqrt{3}" }) this.editor.commands .insertMathInline(nodeType) .exec(); } Actual One: Expected One: Going by the node spec, it has text content, and no content attribute, so you want something like nodeType.create(null, [schema.text("\sqrt{3}")]) It doesn’t help me out @marijn addLatex() { const nodeType: NodeType = schema.nodes.math_inline; nodeType.create(null, [schema.text("\sqrt{3}")]); this.editor.commands .insertMathInline(nodeType) .exec(); } I don’t know how insertMathInline works, but calling nodeType.create and throwing away the result is definitely not going to do something (it’s a pure function, returning a node). BTW This is the insertMathInline function which will call the insertMathCmd function insertMathInline(nodeType:NodeType): this { insertMathCmd(nodeType)(this.state, this.dispatch); return this; } import { Command } from "prosemirror-commands"; import { NodeType } from "prosemirror-model"; import { EditorState, NodeSelection, Transaction } from "prosemirror-state"; //////////////////////////////////////////////////////////////////////////////// /** * Returns a new command that can be used to inserts a new math node at the * users current document position, provided that the document schema actually * allows a math node to be placed there. * * @param mathNodeType An instance for either your math_inline or math_display * NodeType. 
Must belong to the same schema that your EditorState uses! **/ export function insertMathCmd(mathNodeType: NodeType): Command { return function(state:EditorState, dispatch:((tr:Transaction)=>void)|undefined){ let { $from } = state.selection, index =$from.index(); if (!$from.parent.canReplaceWith(index, index, mathNodeType)) { return false; } if (dispatch){ let tr = state.tr.replaceSelectionWith(mathNodeType.create({})); tr = tr.setSelection(NodeSelection.create(tr.doc,$from.pos)); dispatch(tr); } return true; } } You’re confused between the difference between a NodeType and a Node, but are pretty close to a working solution. What you need to do is modify the insertMathCmd to accept initial content, and then within that function use the NodeType to create a Node with that content: + export function insertMathCmd(mathNodeType: NodeType, initialText = null): Command { - export function insertMathCmd(mathNodeType: NodeType): Command { return function(state:EditorState, dispatch:((tr:Transaction)=>void)|undefined){ let { $from } = state.selection, index =$from.index(); if (!$from.parent.canReplaceWith(index, index, mathNodeType)) { return false; } if (dispatch){ + // schema.text does not allow empty strings + let mathNode = initialText ? mathNodeType.create({}, [state.schema.text(initialText)]) : mathNodeType.create({}) + let tr = state.tr.replaceSelectionWith(mathNode); - let tr = state.tr.replaceSelectionWith(mathNodeType.create({})); tr = tr.setSelection(NodeSelection.create(tr.doc,$from.pos)); dispatch(tr); } return true; } } Then you can use insertMathCmd like so: addInlineLatex() { const mathInline: NodeType = schema.nodes.math_inline; this.editor.commands .insertMathInline(mathInline, "\sqrt{3}") .exec(); } Note that prosemirror-math comes with two NodeTypes: math_inline and math_display. That’s why insertMathCmd takes in NodeType as an argument. You saved me @bhl! As I am new to the ProseMirror library, I am not sure what I am doing wrong! I did not follow this topic today, because I figured out a dirty workaround yesterday, Eventually, I have seen your PR to the base branch & opened the link to know why this PR has been made today, Then Surprise… The perfect solution has been given for my Post! also thanks, @marijn for responding to me earlier!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23053032159805298, "perplexity": 19658.364623239206}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00557.warc.gz"}
https://socratic.org/questions/what-is-the-equation-of-the-line-with-slope-m-5-17-that-passes-through-11-7
Algebra Topics

# What is the equation of the line with slope m= 5/17 that passes through (11,7) ?

Nov 20, 2015

$y = \frac{5}{17} x + \frac{64}{17}$

#### Explanation:

Since we are given a point and the slope, we use the point-slope form:

$y - {y}_{1} = m \left(x - {x}_{1}\right)$

Substitute $m = \frac{5}{17}$ and $\left({x}_{1} , {y}_{1}\right) = \left(11 , 7\right)$:

$y - 7 = \frac{5}{17} \left(x - 11\right)$

$y - 7 = \frac{5}{17} x - \frac{55}{17}$

$y = \frac{5}{17} x - \frac{55}{17} + 7$

$y = \frac{5}{17} x + \frac{64}{17}$

Check: at $x = 11$, $y = \frac{55}{17} + \frac{64}{17} = \frac{119}{17} = 7$, so the line does pass through $\left(11 , 7\right)$.

##### Impact of this question
198 views around the world
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6021571159362793, "perplexity": 2797.793572731046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371665328.87/warc/CC-MAIN-20200407022841-20200407053341-00175.warc.gz"}
https://embdev.net/topic/239283
# EmbDev.net Author: Sean Astviken (strik3r) Posted on: 2011-11-22 18:24 Attached files: Rate this post 0 ▲ useful ▼ not useful Hi! I'm relatively new at vhdl and need some help. I'm writing code that is supposed to work with a VGA-display I need the "program" to write a 4X4 colored pixel in the middle of the screen, and then with the help of four buttons, move this pixel up,down right and left on the display. I am using the resolution 640 X 480, so the middle would be 320 and 240, but since i need a 4X4 pixel, i know that it need to start at 318-322 and 238-242, but i have no idea how to write this code. I earlier written code for the sync signals so thats out of the picture. I know that i should at least need 3 processes one for the 4 buttons one for the updating of the pixel on the screen (X and Y) and one for the sync signals which iv'e already written I have attached two pictures, the first (vhdl(1)) shows the entity and a working sync process along with counters. The second picture shows just a thought i had (it's not complete), there also seems to be syntax errors in it.. I couldn't think of anything better than to take a printscreen of the code,unfortunately its a tall picture.. How can i write code for the pixel and the X/Y? How can i write code for the buttons? Can anyone help me with this?? it would be much appriciated! //Strik3r Author: lkmiller (Guest) Posted on: 2011-11-22 22:09 Rate this post 0 ▲ useful ▼ not useful Pls post your code as *.vhd attachment. Author: PittyJ (Guest) Posted on: 2011-11-23 07:37 Attached files: Rate this post 0 ▲ useful ▼ not useful I wrote in September something similar. The size of the 'Pixel' is different, but the rest should be very similar. Take it as example. Author: tzu (Guest) Posted on: 2011-11-23 08:45 Rate this post 0 ▲ useful ▼ not useful usable example but please fix this: - gated clock with 25 MHz - gated clock with 5 Hz out of gates clock - buttons sampled only every 200 ms, so possibly don't detect a short press - incoming button pins directly used on logic - extremly long logic path for pixel position check: if(VSyncCounter < 480 ) and ( HSyncCounter < 640) and (VSyncCounter >= POSY0) and (VSyncCounter < POSY1) and (HSyncCounter >= POSX0) and (HSyncCounter < POSX1) -> 60 Bits compared + Mux afterwards. Author: Lothar Miller (lkmiller) (Moderator) Posted on: 2011-11-23 10:08 Rate this post 0 ▲ useful ▼ not useful Already said: Generate25MHZ: process(CLOCK_50MHZ) is begin if rising_edge(CLOCK_50MHZ) then if(MHZ_25 = '0' ) then MHZ_25 <= '1'; else MHZ_25 <= '0'; end if; end if; -- rising edge end process Generate25MHZ; Generate25HZ: process(MHZ_25) is begin if rising_edge(MHZ_25) then if(HZ_KeyCounter = 500000) then -- ****** HZ_Key <= not(HZ_Key); : This is not the way clocks are generated! Use clock-enables instead: GenerateClockEnables: process(CLOCK_50MHZ) is begin if rising_edge(CLOCK_50MHZ) then MHZ_25 <= not MHZ_25; if(HZ_KeyCounter = 1000000-1) then -- 0...999999 = 1 Mio steps HZ_Key <= '1'; HZ_KeyCounter <= 0; else HZ_Key <= '0'; HZ_KeyCounter <= HZ_KeyCounter+1; end if; end if; end if; -- rising edge end process; GenerateSync : process(CLOCK_50MHZ) is begin -- there is only ONE clock in the design! if rising_edge(CLOCK_50MHZ) then if (MHZ_25='1') then ... ButtonCheck : process(CLOCK_50MHZ) is begin -- there is only ONE clock in the design! if rising_edge(CLOCK_50MHZ) then if (HZ_Key='1') then ... Have a look at this line: if (HZ_KeyCounter = 500000) then This is fundamentally wrong! 
An obvious beginner's mistake: a counter counting from 0 to 500000 counts 500001 cycles! Here the problem is hidden in the big number (you cannot distinguish between 500000 and 500001), but you would see the effect clearly if the counter counted from 0 to 4...

Author: Sean Astviken (strik3r)
Posted on: 2011-11-29 12:29

Attached files:

Rate this post
0 ▲ useful
▼ not useful

Thanks for all the replies! =) I've managed to get a pixel on the screen now and it moves over the screen with the help of the buttons; there is just one remaining problem: the pixel moves way too fast when I push a button. The VGA screen that I'm using works at 25 MHz, so when I press a button, the pixel "transforms" into a line. I need to decrease the moving speed of the pixel, but yet again, I have no idea how to do that. I have posted my code in the attached .vhd file.

PittyJ: I tested your construction, but there was no indication of when the blank and v_sync signals should go low, so nothing happened on the screen. As I said before, I'm really new at this, so there is a lot of code that I do not understand, but thanks for the example. I attached a sync-signal picture. My teacher told me that I needed all 4 signals for the screen to work properly.

Tzu: I'm far too new at this to know how to make those changes; I don't even know where to start.

lkmiller: In which part of my code am I supposed to put your code? Also, what does the 500000 in "if (HZ_KeyCounter = 500000) then" do?

//Strik3r
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17150427401065826, "perplexity": 7002.150511255832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190295.4/warc/CC-MAIN-20170322212950-00619-ip-10-233-31-227.ec2.internal.warc.gz"}
https://chemicalstatistician.wordpress.com/2014/03/17/video-the-hazard-function-is-the-probability-density-function-divided-by-the-survival-function/
# Video Tutorial – The Hazard Function is the Probability Density Function Divided by the Survival Function In an earlier video, I introduced the definition of the hazard function and broke it down into its mathematical components.  Recall that the definition of the hazard function for events defined on a continuous time scale is $h(t) = \lim_{\Delta t \rightarrow 0} [P(t < X \leq t + \Delta t \ | \ X > t) \ \div \ \Delta t]$. Did you know that the hazard function can be expressed as the probability density function (PDF) divided by the survival function? $h(t) = f(t) \div S(t)$ In my new Youtube video, I prove how this relationship can be obtained from the definition of the hazard function!  I am very excited to post this second video in my new Youtube channel.  You can also view the video below the fold! ### 2 Responses to Video Tutorial – The Hazard Function is the Probability Density Function Divided by the Survival Function 1. skaae says: Thanks for the explanation. The first time someone explained what appears to be a fact in most text books! I suggest you break down the partial likelihood in Cox models as well :) • I’m glad that it was useful to you! I will discuss Cox models and partial likelihoods eventually – thanks for the suggestion!
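For readers who prefer the algebra written out rather than the video, the proof boils down to two lines (standard survival-analysis notation, with $F(t)$ the CDF, so that $S(t) = 1 - F(t)$ and $f(t) = F'(t)$):

$P(t < X \leq t + \Delta t \ | \ X > t) = \frac{P(t < X \leq t + \Delta t)}{P(X > t)} = \frac{F(t + \Delta t) - F(t)}{S(t)},$

so

$h(t) = \lim_{\Delta t \rightarrow 0} \frac{F(t + \Delta t) - F(t)}{\Delta t \, S(t)} = \frac{F'(t)}{S(t)} = \frac{f(t)}{S(t)}.$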
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9116663932800293, "perplexity": 560.8716145731784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990217.27/warc/CC-MAIN-20150728002310-00024-ip-10-236-191-2.ec2.internal.warc.gz"}
https://developer.lsst.io/stack/packaging-third-party-eups-dependencies.html
# Distributing Third-Party Packages with EUPS¶ This page documents how to make a third-party software package install-able using the eups distrib install command. ## Getting Approval¶ Creating a new third-party package that will be a dependency of the LSST code typically requires an RFC. If the code is to be distributed via eups, as this page describes, the license for the third-party code should be verified and cited in the text of that RFC. The license must be compatible with the license under which we distribute our code, currently GPL3. See this page for a list of compatible licenses. ## Creating the Package¶ Repositories containing third-party packages exist in the LSST GitHub organization. (Unfortunately, it is currently difficult to distinguish between an LSST package and a third-party package: the table file in the lsst_thirdparty package and the documentation on third party software may help.) In order to distribute a new third-party package, someone with administrator privileges will have to create a new repository of this form for you. Create a development branch on that repository and set it up to distribute the package as described below. You will be able to test the package distribution off of your development branch before you merge to master. The repository, once created, needs to contain the following directories: upstream/ This directory should contain a gzipped tarball of the source code for the third-party package. Literally, that is all it should contain. The code should not be altered from whatever is distributed by the package’s author. Any changes that need to be made to the source code should be done with patches in the patches/ directory. If you are testing out a version that is not a distributed package (e.g. master), you can create the correct type of repository from within a clone of the package with, e.g.: git archive --format=tar --prefix=astrometry.net-68b1/ HEAD | gzip > astrometry.net-68b1.tar.gz ups/ This directory should contain the packages EUPS table file as well as an optional file eupspkg.cfg.sh which will contain any customized commands for installing the third-party package. patches/ This directory is optional. It contains any patches to the third-party package (which EUPS will apply using the patch command) that are required to make the package work with the stack. We discuss the contents of ups/ and patches/ in more detail below. Warning If the root directory of your repository contains any other files (e.g. README, .gitignore, etc) you will need to give special instructions on how to handle them. See the section on Other Files, below. ### The ups/ Directory¶ #### EUPS Table File¶ The ups/ directory in your repository must contain an EUPS table file named following the pattern packageName.table. It specifies what other packages your package depends on and environment variables that will be set when you setup your package. Consider the table file for the sphgeom package, sphgeom.table: setupRequired(base) setupRequired(sconsUtils) setupOptional(doxygen) envPrepend(LD_LIBRARY_PATH, ${PRODUCT_DIR}/lib) envPrepend(DYLD_LIBRARY_PATH,${PRODUCT_DIR}/lib) envPrepend(LSST_LIBRARY_PATH, ${PRODUCT_DIR}/lib) envPrepend(PYTHONPATH,${PRODUCT_DIR}/python) This tells EUPS that, in order to setup the sphgeom package, it must also setup the packages base, sconsUtils and doxygen. 
#### eupspkg.cfg.sh

eupspkg.cfg.sh is an optional script in the ups/ directory that customizes the installation of your package. Often, EUPS is smart enough to figure out how to install your package just based on the contents of the gzipped tarball in upstream/. Sometimes, however, you will need to pass some additional commands in by hand. A simple version of this can be seen in the eupspkg.cfg.sh for the GalSim package, which passes instructions to the SCons build system using the SCONSFLAGS environment variable:

export SCONSFLAGS=$SCONSFLAGS" USE_UNKNOWN_VARS=true TMV_DIR="$TMV_DIR" \
    PREFIX="$PREFIX" PYPREFIX="$PREFIX"/lib/python \
    EXTRA_LIB_PATH="$TMV_DIR"/lib EXTRA_INCLUDE_PATH="$TMV_DIR"/include"

The eupspkg.cfg.sh for the stack-distributed anaconda package is more complicated:

# EupsPkg config file. Sourced by 'eupspkg'

prep()
{
    # Select the appropriate Anaconda distribution
    OS=$(uname -s -m)
    case "$OS" in
        "Linux x86_64")   FN=Anaconda-2.1.0-Linux-x86_64.sh ;;
        "Linux "*)        FN=Anaconda-2.1.0-Linux-x86.sh ;;
        "Darwin x86_64")  FN=Anaconda-2.1.0-MacOSX-x86_64.sh ;;
        *) die "unsupported OS or architecture ($OS). try installing Anaconda manually."
    esac

    # Prefer system curl; user-installed ones sometimes behave oddly
    if [[ -x /usr/bin/curl ]]; then
        CURL=${CURL:-/usr/bin/curl}
    else
        CURL=${CURL:-curl}
    fi

    "$CURL" -s -L -o installer.sh http://repo.continuum.io/archive/$FN
}

build() { :; }

install()
{
    clean_old_install

    bash installer.sh -b -p "$PREFIX"

    if [[ $(uname -s) = Darwin* ]]; then
        # run install_name_tool on all of the libpythonX.X.dylib dynamic
        # libraries in anaconda
        for entry in $PREFIX/lib/libpython*.dylib
        do
            install_name_tool -id $entry $entry
        done
    fi

    install_ups
}

When EUPS installs a third-party package, it does so in five steps:

1. fetch
2. prep
3. config
4. build
5. install

The eupspkg.cfg.sh file allows you to customize any or all of these steps for your package. Above, we see that the prep and install steps have been customized for the Anaconda package. More detailed documentation of the purpose and capabilities of the eupspkg.cfg.sh file can be found in the source code file $EUPS_DIR/python/eups/distrib/eupspkg.py.
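As an illustration of the mechanism only (a hypothetical sketch, not the eupspkg.cfg.sh of any actual LSST package), a package with a classic configure-and-make build might override just the config, build and install steps and leave fetch and prep at their defaults:

```bash
# EupsPkg config file. Sourced by 'eupspkg'
# Hypothetical example for a configure/make style third-party package.

config()
{
    # PREFIX points at the location EUPS will install the product into.
    ./configure --prefix="$PREFIX"
}

build()
{
    make
}

install()
{
    clean_old_install   # remove any previous install at $PREFIX (same helper used above)
    make install
    install_ups         # copy the ups/ directory in alongside the installed product
}
```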
### The patches/ Directory

Sometimes, it will be necessary to change the source code in the gzipped tarball stored in upstream/ to make the package installable and runnable with the stack. If this is necessary, it is done using the patch command, which applies diffs to source code files. For each logical change that needs to be made to the source code (possibly affecting multiple files), generate a patch file by following these instructions:

1. Untar the tarball you're trying to patch (e.g., astrometry.net-0.50.tar.gz). It will generate a directory (e.g., astrometry.net-0.50/) with the source.
2. Make a copy of that directory: cp -a astrometry.net-0.50 astrometry.net-0.50.orig
3. Make any changes you need to the source in astrometry.net-0.50/
4. Create the patch with diff -ru and move it into the patches/ subdirectory: diff -ru astrometry.net-0.50.orig astrometry.net-0.50 > blah.patch

EUPS will apply these patches after it unpacks the gzipped tarball in upstream/. Patches are applied in alphabetical order, so it can be useful to start your patches with, e.g., 000-something.patch, 001-somethingelse.patch.

Note: EUPS expects the patches to be formatted according to the output of git diff, not the output of diff.

### Other Files

The form of package that has been constructed is referred to by EUPS as a 'tarball-and-patch' or 'TaP' package. Although these are standard for use in LSST, they are not the only type of package EUPS supports. When confronted with a source directory, EUPS attempts to determine what sort of package it is dealing with. If it sees any files other than the directories listed above, it concludes that the package in question is not a TaP package.

Often, it is desirable to add other files to the package (for example, README or .gitignore). EUPS will then misidentify the package type, and the build will fail. To account for this, it is necessary to explicitly flag this as a TaP package. There are two mechanisms for this, depending on the version of EUPS being used. At time of writing, LSST's Jenkins uses a version of EUPS which only supports the now-deprecated mechanism. Therefore, in the interests of future proofing, do both:

1. Add the line TAP_PACKAGE=1 to the top of ups/eupspkg.cfg.sh;
2. Add an empty file, .tap_package, to the root directory of your package.

## Testing the package

If you've created a new external package or updated an existing package, you need to test whether the new package builds and works. From within build/yourPackage (add -r to build in the current directory, which is effectively how Jenkins does it, instead of using _eupspkg/):

• rm -r _eupspkg
• eupspkg -e -v 1 fetch
• eupspkg -e -v 1 prep
• eupspkg -e -v 1 config
• eupspkg -e -v 1 build
• eupspkg -e -v 1 install
• setup -r _eupspkg/binary/yourPackage/tickets.DM-NNNN to set up the newly built version.
• When your local tests pass, git push.
• See if the stack will build with your branch in Jenkins. For the branch name, specify the branch you created above (i.e. tickets/DM-NNNN), leaving the rest of the fields as they are.
• Merge to master after Jenkins passes and your changes are reviewed.

## Updating the Package

To update the version of your external package after a new upstream release, start with a copy of the LSST stack (installed using the lsstsw tool). Then:

• Create a ticket for the package update (and/or an RFC, if it may cause more trouble), and note the ticket number NNNN.
• cd build/yourPackage
• git checkout -b tickets/DM-NNNN (where NNNN is the ticket number above)
• git clean -id
• Download a copy of the tarball from wherever the external package is distributed. Don't unzip or untar it.
• git rm the copy of the tarball that is currently in upstream/.
• Copy the new version of the external tarball into upstream/ and git add it.
• git commit

Now test your package by following the instructions above.

## Distributing the Package

Once the package builds and passes review (or vice-versa), you need to tell eups that it is available for distribution to the wide world. To do this, add an annotated tag to your package repository using:

git tag -a versionNumber -m "Some comment."

The initial versionNumber should match the external package's version number. If the package does not supply an appropriate version number, one can be generated from an upstream git SHA1 or equivalent version control revision number: use the format 0.N.SHA1, where N is 1 for the first release of the package, 2 for the second, etc. Note that the version number should never start with a letter, as EUPS regards that as semantically significant. If changes are required to the packaging (in the ups or patches directories) but not the external package source (in the upstream directory), the string .lsst1 (and .lsst2 etc. thereafter) should be appended to the external package's version number.
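For example (the version numbers here are purely illustrative), the first packaging of an upstream 2.7.3 release, and a later retag after packaging-only changes, might look like:

```bash
git tag -a 2.7.3 -m "Package upstream 2.7.3"
git tag -a 2.7.3.lsst1 -m "Packaging-only changes to ups/ and patches/; upstream source unchanged"
```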
Merge your changes to master, then push your changes to the remote repository. Push your tags to the remote repository using:

git push --tags

Now you must log onto lsst-dev as the user lsstsw (this will require special permissions): see the documentation on using this machine. Once logged in as lsstsw, the steps are:

• Build your package with the command:

rebuild yourPackage

This will cause lsst-dev to build your package and all of its dependencies. This build will be assigned a build number formatted as bNNN.

• Once the build is complete, release it to the world using:

publish -b bNNN yourPackage

This will make your package installable using:

eups distrib install yourPackage versionNumber

If you wish to add a distribution server tag to your package, you can do so by changing the publish command to:

publish -b bNNN -t yourTag yourPackage

Warning: Do not use the tag 'current' as that will overwrite all other packages marked as current and break the stack. Let the people in charge of official releases handle marking things as 'current.' It is not usually necessary to distribution-server-tag a particular third-party package.

• Generally, if you're publishing a third-party package, it should be because it is a dependency in the build of some (or all) top-level package(s). When the top-level package(s) are next published (and optionally tagged), your new package will be incorporated. If you need something sooner, you can do this publishing yourself using the steps above with the top-level package. In this case, a distribution-server tag (something like qserv-dev) is usually desirable. That makes the top-level product (or any of its dependency components, including your third-party package) installable using:

eups distrib install -t yourTag packageName

## Announcing the Package

Any new packages, major version upgrades, or other breaking changes to third-party package versions should be announced in the DM Notifications category of community.lsst.org. For upgrades to third-party packages with headers we build against, this should include a note that source packages should be cleaned and recompiled after the upgrade, because SCons/sconsUtils will not automatically detect changes in third-party headers.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36191582679748535, "perplexity": 4885.39423224912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823895.25/warc/CC-MAIN-20181212134123-20181212155623-00267.warc.gz"}
https://jeeneetqna.in/390/let-a-i-j-b-i-j-k-and-c-be-a-vector-such-that-a-c-and-then-is-equal-to
# Let a = i^−j^, b = i^+j^+k^ and c be a vector such that a × c + b = 0 and a . c = 4, then |c|^2 is equal to :

Let $\vec{a}=\hat{i}-\hat{j},\ \vec{b}=\hat{i}+\hat{j}+\hat{k}$ and $\vec{c}$ be a vector such that $\vec{a}\times\vec{c}+\vec{b}=0$ and $\vec{a}\cdot\vec{c}=4$; then $|\vec{c}|^2$ is equal to:

(1) $19\over2$
(2) $8$
(3) $17\over2$
(4) $9$
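A worked solution, added for completeness (it is not part of the original posting): from $\vec{a}\times\vec{c}+\vec{b}=0$ we have $\vec{a}\times\vec{c}=-\vec{b}$, so
$$|\vec{a}\times\vec{c}|^2=|\vec{b}|^2=3.$$
Using the identity $|\vec{a}\times\vec{c}|^2=|\vec{a}|^2|\vec{c}|^2-(\vec{a}\cdot\vec{c})^2$ with $|\vec{a}|^2=2$ and $\vec{a}\cdot\vec{c}=4$ gives
$$3=2|\vec{c}|^2-16\quad\Longrightarrow\quad |\vec{c}|^2=\frac{19}{2},$$
which is option (1).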
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9817787408828735, "perplexity": 175.840308065079}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949694.55/warc/CC-MAIN-20230401001704-20230401031704-00581.warc.gz"}
https://math.stackexchange.com/questions/1734893/does-this-definition-of-e-even-make-sense/1735035
# Does this definition of $e$ even make sense?

This sprung from a conversation here. In Stewart's Calculus textbook, he defined $e$ as the unique solution to $\lim\limits_{h\to 0}\frac{x^h-1}{h}=1$. Ahmed asked how you define $x^h$ if not by $\exp(h\ln(x))$, and I'm not sure. Does this definition of $e$ even make sense? Definition here:

The definition of $e$ as the unique number such that $$\lim_{h \to 0}\frac{e^{h} - 1}{h} = 1$$ makes sense, but there are a few points which must be established before this definition can be used:

1. Define the general power $a^{x}$ for all $a > 0$ and all real $x$. One approach is to define it as the limit of $a^{x_{n}}$ where $x_{n}$ is a sequence of rational numbers tending to $x$ (this is not so easy).
2. Based on the definition of $a^{x}$ above, show that the limit of $(a^{x} - 1)/x$ as $x \to 0$ exists for all $a > 0$ (this is hard), and hence the limit defines a function $f(a)$ for $a > 0$.
3. The function $f(x)$ defined above is continuous, strictly increasing and maps $(0, \infty)$ to $(-\infty, \infty)$ (easy if the previous points are established).

From the last point above it follows that there is a unique number $e > 1$ such that $f(e) = 1$. This is the definition of $e$ with which we started. And as can be seen, this definition must be preceded by the proof of the results mentioned in the three points above. All this is done in my blog post and in my opinion this is the most difficult route to a theory of exponential and logarithmic functions. Easier routes to the theory of exponential and logarithmic functions are covered in this post and next.

• arguably it is the most difficult but irritatingly it is also the most intuitive and direct to what we think the definitions "mean". Intuitively $b^n$ is $b$ multiplied by itself $n$ times so "obviously" $b^{n/m}$ is the $m$-th root of $b$ to the $n$ and as $x = \lim q$ then $b^x$ is the limit of $b^q$. I mean "duh" and obviously $d(b^x)/dx = C_b\, b^x$ so there must be some $e$ where $C_e = 1$ and obviously $e^x$ means $e^x$ and $\ln x$ is just a logarithm. That's obviously what it all "means". It's a pity this is the freaking hardest approach. – fleablood Apr 9 '16 at 19:54
• @fleablood: as I say in my blog post "the most intuitive and obvious approach". – Paramanand Singh Apr 9 '16 at 20:45
• That's a nice blog post btw. – fleablood Apr 9 '16 at 22:21

$b^x$ can be defined as $\lim_{q\in \mathbb Q \rightarrow x}b^q$. (Isn't it usually so defined?) Or alternatively one can define $e = \lim_{h\in \mathbb Q\rightarrow 0}\frac {x^h - 1}{h}$. I think it's legit and not circular.

• Of course, this assumes there is a limit. – fleablood Apr 9 '16 at 17:21
• It works; $b^q$ is defined for rational $q$ in the "usual" way, and you can force the limit to exist using a monotonicity argument. It is essentially the same argument that is used to prove that the only memoryless distributions, i.e. the ones with the property $P(X>t+s|X>t)=P(X>s)$, are the exponential distributions. – Ian Apr 9 '16 at 19:24
• I think defining new notions of limit is a bit complicated and non-standard. The standard calculus texts define limits of functions of a real variable and limits of functions of an integral variable (sequences). If $x_{n}$ is a sequence of rational numbers with limit $x$ then we can define $a^{x}$ as the limit of the sequence $a^{x_{n}}$. The notion of $\lim_{x \in \mathbb{Q} \to a}$ can be made precise by an appropriate definition but it is not very commonly seen. – Paramanand Singh Apr 9 '16 at 19:31
• This isn't a new definition of limits at all.
For every real $x$ there is a rational sequence of {$q_n$}$\rightarrow x$ so we define $b^x$ as $\lim b^{q_n}$. That's all my notation of $\lim_{q\in \mathbb Q\rightarrow x}$ means. We do have to clear up that such a real sequence of {$b^{q_n}$} converges but that's pretty standard and mechanical as Ian points out. (Actually it's a pain in the ass, but never mind...) – fleablood Apr 9 '16 at 19:46 • The actual difficulty is establishing the uniqueness: why is $\lim_n b^{q_n}$ the same for any given $q_n \to x$? – Ian Apr 9 '16 at 19:54
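One standard way to settle the uniqueness point raised in that last comment (a sketch, not part of the original thread): for $b > 1$, monotonicity on the rationals gives
$$b^{-1/m} \le b^{r} \le b^{1/m} \quad\text{whenever } r \in \mathbb{Q},\ |r| \le \tfrac{1}{m},$$
and $b^{1/m} \to 1$ as $m \to \infty$. So if $q_n \to x$ and $q'_n \to x$ are two rational sequences, then $q_n - q'_n \to 0$, hence
$$\frac{b^{q_n}}{b^{q'_n}} = b^{\,q_n - q'_n} \longrightarrow 1,$$
and the two limits (which exist by the monotonicity argument already mentioned) coincide. The case $0 < b < 1$ follows by taking reciprocals, and $b = 1$ is trivial.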
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9534657001495361, "perplexity": 199.8614999177262}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675598.53/warc/CC-MAIN-20191017172920-20191017200420-00033.warc.gz"}
https://www.physicsforums.com/threads/entropy-change-of-melting-ice-cube-initially-at-5c.765848/
# Entropy change of melting ice cube initially at -5°C

1. Aug 13, 2014

### Flucky

1. The problem statement, all variables and given/known data

Calculate the entropy change of an ice cube of mass 10 g, at an initial temperature of -5°C, when it completely melts.

c_ice = 2.1 kJ kg⁻¹ K⁻¹
L_ice→water = 3.34×10⁵ J kg⁻¹

2. Relevant equations

dQ = mc dT
dS = $\frac{dQ}{T}$
ΔS = $\frac{Q}{T}$
Q = mL

3. The attempt at a solution

First I set the problem out in two stages:
a) the entropy change from the ice going from -5°C to 0°C (in order to melt)
b) the entropy change from the ice going to water

For a)
dQ = mc dT ---------(1)
dS = $\frac{dQ}{T}$ ---------(2)
Putting (1) into (2): dS = $\frac{mc\,dT}{T}$
ΔS = mc∫$\frac{1}{T}$dT
ΔS = mc ln(T_f/T_i)
∴ ΔS₁ = (0.01)(2100)ln($\frac{273}{268}$) = 0.388 J K⁻¹

For b)
Q = mL = (0.01)(3.34×10⁵) = 3340 J
ΔS₂ = $\frac{Q}{T}$ = $\frac{3340}{273}$ = 12.23 J K⁻¹

∴ total ΔS = ΔS₁ + ΔS₂ = 0.388 + 12.23 = 12.62 J K⁻¹

Am I right in simply adding the two changes of entropy together? Does ΔS work like that? Cheers.

2. Aug 13, 2014

### rude man

Looks good, and the answer is yes, the entropies add. Entropy is a state function, like gravitational potential. If you went from 0 to 1 m above ground you would have g x 1 m change in potential. If you went from 1 m to 2 m there would be a further g x 1 m change in potential. Giving total change in potential = g x 2 m.

3. Aug 14, 2014

### Flucky

Great, thanks
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8679383397102356, "perplexity": 3780.555366891645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647244.44/warc/CC-MAIN-20180319234034-20180320014034-00641.warc.gz"}
http://math.stackexchange.com/questions/1612437/notation-of-the-second-derivative-where-does-the-d-go
# Notation of the second derivative - Where does the d go? In school I was taught that we use $\frac{du}{dx}$ as a notation for the first derivative of a function $u(x)$. I was also told that we could use the $d$ just like any variable. After some time we were given the notation for the second derivative and it was explained as follows: $$\frac{d(\frac{du}{dx})}{dx} = \frac{d^2 u}{dx^2}$$ What I do not get here is, if we can use the $d$ as any variable, I would get the following result: $$\frac{d(\frac{du}{dx})}{dx} =\frac{ddu}{dxdx} = \frac{d^2 u}{d^2 x^2}$$ Apparently it is not the same as the notation we were given. A $d$ is missing. I have done some research on this and found some vague comments about "There are reasons for that, but you do not need to know..." or "That is mainly a notation issue, but you do not need to know further." So what I am asking for is: Is this really just a notation thing? If so, does this mean we can actually NOT use d like a variable? If not, where does the $d$ go? I found this related question, but it does not really answer my specific question. So I would not see it as a duplicate, but correct me if my search has not been sufficient and there indeed is a similar question out there already. - $d$ is not a variable; in other words $dxdx$ is not $d$ times $x$ times $d$ times $x$. At best, you can think of $dx$ as one object (with a two letter name), an infinitesimal. – Michael Burr Jan 14 at 19:06 $d$ cannot be used just like any variable. Otherwise you will have $du/dx=u/x$ for example. – AlphaGo Jan 14 at 19:08 I have a hard time to believe somebody told you this. Maybe they said it about the entity "$dx$" – quid Jan 14 at 20:24 It's because the dx is "in parentheses", so to speak. – Mehrdad Jan 14 at 21:20 I kinda assumed $dx^2$ meant $(dx)^2$; i.e. $dx$ is basically one variable. – Akiva Weinberger Jan 15 at 0:16 where does the $d$ go? Physicist checking in. All the other answers seem to focus on whether $d$ is a variable and are neglecting the heart of your question. Simply put, $dx$ is the name of one thing, so in your example $$\frac{d^2u}{dx^2}=\frac{d^2u}{\left(dx\right)^2}$$ In your words, the "second $d$" is inside the implied parentheses. - +1. I'm surprised that so many other answers missed this aspect of the question. – mweiss Jan 15 at 16:22 I guess the same could occur with a delta, for example with the formula $U=\frac12 k \Delta x^2$ one might interpret it as $U=\frac12 k (\Delta x)^2$ (the elastic potential energy in a Hooke spring). – Jeppe Stig Nielsen Jan 15 at 22:10 Thanks for this short but good answer. Using the $dx$ as one unit and not as two separate things $d$ and $x$ clears the things up a lot. – Numenkok Balok Jan 18 at 7:50 Gotta love physicists. – Arrow Jan 18 at 11:23 Gottfried Wilhelm Leibniz, who introduced this notation in the 17th century, intended $dx$ to be an infinitely small change in $x$ and $du$ to be the corresponding infinitely small change in $u$, so that if, for example, $du/dx=3$ at a particular point that means $u$ is changing $3$ times as fast as $x$ is changing at that point. The notation $\dfrac{d^2u}{dx^2}$ actually means $\dfrac{d\left(\dfrac{du}{dx}\right)}{dx}$, the infinitely small change in $du/dx$ divided by the corresponding infinitely small change in $x$. Thus the second derivative is the rate of change of the rate of change. Notice that if $u$ is in meters and $x$ in seconds, then $du/dx$ is in $\dfrac{\text{m}}{\text{sec}}$, i.e. 
meters per second, and $d^2 u/dx^2$ is in $\dfrac{\text{m}}{\text{sec}^2}$, i.e. meters per second per second. Thus $dx^2$ means $(dx)^2$, so the units of measurement of $x$ get squared, and $d^2y$ is in the same units of measurement that $y$ is in, consistently with the fact that $y$ is not a part of what gets squared in the numerator. - $d$ is not a variable, and neither is $dx$ for that matter. It is confusing because in some cases, like the chain rule, differentials act like variables which can cancel: $$\frac{dy}{dx}\frac{dx}{dt}=\frac{dy}{dt}$$ However, it is most appropriate to think of $\frac{d}{dx}$ as an operator that does something. Thus, $\frac{d}{dx}(\frac{d}{dx} y)=\frac{d^2}{dx^2}y$. Somewhat similarly, you wouldn't say that $\sin^2 x=s^2i^2n^2x$. Edit: In case it isn't clear from the example, you cannot separate $dx$. That is, $dx$ is not $d$ times $x$. This is very much analogous to chemistry when we say things like $\Delta H$. This isn't $\Delta$ times $H$. It is $\Delta$ (change) of $H$.
An "infinitesimally small number" not equal to zero doesn't exist in $\Bbb R$. This is fine as a "heuristic" but the claim that $\frac{dy}{dx}$ actually is a ratio -- whether of infinitesimals or finite differences -- is just not true. – Bye_World Jan 14 at 19:37 That's false. Such a quantity would violate the Archmedean property of the real numbers. You have to extend the reals to the hyperreals to make use of nonzero infinitesimals. This is the closest formalization of what you're talking about that exists in mathematics and it still doesn't define the derivative as a fraction of infinitesimals, but as the standard part of a fraction of infinitesimals. Note that is not a part of standard analysis. – Bye_World Jan 14 at 21:00 Arguments about axiomatisation aside, this answer is the best intuitive answer to "where does the d go?", in my opinion. If you wanted to make it precise, you could simply say that $df$ is defined as $f(x+h)-f(x)$, and then have an implicit convention that we always take the $h\to 0$ limit whenever we write down an expression involving $d$. – Nathaniel Jan 16 at 8:06 Think of the meaning of $d/dx$. The $d$ in the numerator is an operator: it says, "take the infinitesimal difference of whatever follows $d/dx$". In contrast, the $dx$ in the denominator is just a number (yes, I know; mathematicians, please don't cringe): it is the infinitesimal difference in $x$. So $d/dx$ means "take the infinitesimal difference of whatever follows, and then divide by the number $dx$." Similarly, $d^2/dx^2$ means "take the infinitesimal difference of the infinitesimal difference of whatever follows, and then divide by the square of the number $dx$." In short, the $d$ in the numerator is an operator, whereas in the denominator, it is part of a symbol. A slightly less ambiguous notation, as suggested by user1717828, would be to put the $(dx)$ in the denominator in parenthesis, but it really isn't necessary in practice. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9414179921150208, "perplexity": 202.741514035161}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826916.34/warc/CC-MAIN-20160723071026-00205-ip-10-185-27-174.ec2.internal.warc.gz"}
http://math.stackexchange.com/users/53394/deepak
# Deepak less info reputation 11 bio website location age member for 5 months seen Apr 3 at 21:01 profile views 108 Life is more fun with SE, I guess. # 51 Questions 8 Convergence in $L^1$ space 8 Proving $f$ is a constant function 5 proving $f$ is absolutely continuous on $[0,1]$ 5 Finding a conformal map of lunar domain to upper half disk 5 $P(z)$ defines a polynomial # 636 Reputation +5 Proving $f$ is a constant function +5 proving $f$ is absolutely continuous on $[0,1]$ +5 $f$ Borel measurable and and $f=g$ a.e (Lebesgue) but $g$ is not Borel measurable +5 How to prove conformal self map of punctured disk ${0<|z|<1}$ is rotation This user has not answered any questions # 25 Tags 0 complex-analysis × 35 0 analysis × 2 0 real-analysis × 14 0 recreational-mathematics × 2 0 measure-theory × 11 0 inequality × 2 0 convergence × 4 0 integration × 2 0 education × 2 0 calculus × 2 # 5 Accounts Mathematics 636 rep 11 Area 51 151 rep 1 French Language & Usage 133 rep 3 English Language & Usage 120 rep 4 Stack Overflow 101 rep
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8274610042572021, "perplexity": 2366.320923354745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706477730/warc/CC-MAIN-20130516121437-00002-ip-10-60-113-184.ec2.internal.warc.gz"}
http://openstudy.com/updates/505b0230e4b0cc122893d3e8
## rebecca1233 2 years ago

A 20-pound bag of sugar costs (y + 3) dollars while a 10-pound package costs (y − 3) dollars. If the price per pound of sugar is the same for both packages, the equation below can be used to solve for y:

(y + 3)/20 = (y − 3)/10

What is the value, in dollars, of y?

1. ash2326: We have $\frac{y+3}{20}=\frac{y-3}{10}$; multiply both sides by 20, can you do it @rebecca1233 ?
2. rebecca1233: which side 10*20?
3. ash2326: $20\times \frac{y+3}{20}=\frac{y-3}{10}\times 20$
4. ash2326: could you simplify this?
5. rebecca1233: oh alright, give me a sec (:
6. rebecca1233: i think so
7. ash2326: cool :)
8. rebecca1233: i got 9 am i correct :)
9. ash2326: yes, good work :D did you understand?
10. rebecca1233: yes :)
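For the record, the algebra the thread walks through: multiplying both sides by 20 gives $y + 3 = 2(y - 3) = 2y - 6$, so $y = 9$. Check: $(9+3)/20 = 0.6 = (9-3)/10$.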
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997525811195374, "perplexity": 16240.134509006644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1433195036618.23/warc/CC-MAIN-20150601214356-00042-ip-10-180-206-219.ec2.internal.warc.gz"}
http://www.asiimaging.com/docs/tech_note_rs232_comm
# RS-232 Communication

All ASI controllers (TG-1000/Tiger, MS2000 and RM2000) use an RS-232 serial link to connect with any computer with an RS-232 serial port in order to utilize all of the controller's abilities. For controllers built since approximately 2008, the physical connection is most often a USB Type A to Type B cable. The controller includes a chip that emulates a serial com port over the USB connection (i.e. the controller will appear to be on a COM port). USB with an emulated RS-232 port is the only interface option on TG-1000 "Tiger" controllers, but MS2000 and RM2000 controllers will often have a physical RS-232 serial port as well. In case the operating system doesn't automatically install the drivers, they can be found at http://www.silabs.com/products/mcu/Pages/USBtoUARTBridgeVCPDrivers.aspx. As of early 2018 there was a bug affecting the "Universal" version of the SiLabs driver when used with the ASI Console programs (which depend on .NET), so use the Windows 7/8/10 version 6.x instead.

| | MS2000 and RM2000 | TG-1000/Tiger |
| --- | --- | --- |
| Baud Rate | 9600 (default), 19200, 28800, or 115200 (set by DIP switch) | 115200 |
| Data Bits | 8 | 8 |
| Parity | None | None |
| Stop Bits | 1 | 1 |
| Flow Control | None | None |

## Software

All software control uses serial commands sent over the serial port or virtual serial port. Due to the nature of serial ports, only one program can use the serial port at a time, so only one of these programs can be used at a given time to send commands and receive replies from the ASI controller.

### ASI Console and ASI Tiger Console

For MS2000, RM2000 and FW1000 controllers, ASI Console can be used to communicate via serial port or virtual serial port (USB-serial adapter) to control, configure, and update firmware. More info on downloading and setting up ASI Console is here. For the TG-1000/Tiger controller, ASI Tiger Console can be used to communicate via serial port or virtual serial port (USB-serial adapter) to control, configure, and update firmware. More info on downloading and setting up Tiger Console is here. You can send serial commands using the input box at the bottom of the scripting tab once you have it set up. Another program for interacting with the TG-1000 controller is Tiger Control Panel; like the Tiger Console it lets the user send serial commands to the controller, but it has a few additional utilities such as displaying live axis positions and states.

### LabView

LabView drivers are available from ASI, see the ASI main website under Support → Downloads (direct link).

### Terminal Programs

You can use terminal programs such as Advanced Serial Port Monitor, Termite, TeraTerm, PuTTY, and HyperTerminal to send and receive commands directly from the controller.

#### Termite

A free and easy-to-use terminal program is Termite ( https://www.compuphase.com/software_termite.htm ). Its setup dialog should be configured for talking with our controllers (through RS232 or USB) using the port settings listed above. Note: your Port and Baud rate setting requirements may be different, e.g. the baud rate is 115200 for Tiger controllers.

#### Advanced Serial Port Monitor

ASI uses Advanced Serial Port Monitor in-house for serial communications even though it is a paid program after a brief trial period. The main thing it offers that others don't is a "Spy Mode" for monitoring communication done via other programs.
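For quick tests it is also possible to skip the GUI programs and drive the virtual serial port directly from a shell. A rough sketch for Linux follows (the device path /dev/ttyUSB0 and the reply shown are hypothetical; on Windows the port appears as a COM port and a terminal program is the easier route):

```bash
# Configure the port: 115200 baud (Tiger), 8 data bits, no parity,
# 1 stop bit, no flow control, raw mode with no local echo.
stty -F /dev/ttyUSB0 115200 cs8 -parenb -cstopb -crtscts raw -echo

# Commands are plain ASCII terminated by a carriage return.
printf 'WHERE X Y\r' > /dev/ttyUSB0

# Read back the reply, e.g. ":A 1234 0" followed by <CR><LF>.
head -c 64 /dev/ttyUSB0
```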
### Micro-Manager

Micro-Manager is free, open-source microscope control software, and ASI contributes and supports device adapters to utilize ASI hardware. There are two ways to send serial commands in Micro-Manager: the first is generic for any serial device using the FreeSerialPort device adapter, and the second is specific to ASI hardware using the ASI device adapter, either ASIStage for MS-2000 and RM-2000 controllers or ASITiger for TG-1000 controllers.

#### Micro-Manager FreeSerialPort

Create a device using the FreeSerialPort adapter in the Hardware Configuration Wizard. Assign the same com port as your ASI product uses. In the Device/Property browser (MM Tools menu) set the CommandTerminator property to be \r and the ResponseTerminator property to be \n. Then modify the Command property to be whatever command you like and look for the reply in the Response property. You can do this from the Property Browser and/or from your script using the mmc.SetProperty() and mmc.GetProperty() methods. If the Response property is too long to fully display then try to copy it and paste into another program; \r indicates the start of a new line.

#### Micro-Manager ASI Device Adapters

If you haven't already, add the ASI controller to your configuration using the Hardware Configuration Wizard. Then use the properties SerialCommand and SerialResponse to send commands and view the controller's reply. You can do this from the Property Browser and/or from your script using the mmc.SetProperty() and mmc.GetProperty() methods. For TG-1000 "Tiger" controllers, this is under the TigerCommHub device. If the SerialResponse property is too long to fully display then try to copy it and paste into another program; \r indicates the start of a new line.

### Other Third Party Applications

Finally, proprietary high-level microscope control software which supports ASI controllers (e.g. Molecular Devices' Metamorph, Nikon Elements) uses serial commands to communicate with the controller. The communication details are generally hidden from the user.

## Command

The controller's instruction set is implemented using the following format:

COMMAND X=?????? Y=?????? Z=?????? <Carriage Return>

The COMMAND is a string of ASCII characters such as MOVE or HOME, which must be followed by a space. All commands are case insensitive. Many commands have abbreviated versions that help cut down on typing time and serial bus traffic. Next are the axis parameters. (Bracketed [ ] parameters are optional.) The axis name is given, followed immediately by an equal sign and the axis parameter value. Each axis must be separated from the one before by one blank space. One or more axes may be specified on a single command line. An axis symbol typed without an = assignment is often assumed to mean =0, but that behavior isn't guaranteed in general (it does, however, work for the commonly-used MOVE and MOVREL commands). Sometimes the command format may not require a parameter value (e.g., INFO X). Commands will accept integer or decimal numbers; internal truncation or rounding will occur if fractional decimals are of no meaning to the command.

axis or [axis] is the placeholder for the axis name, which is a single character. All 26 alphabet characters A-Z can be used as axis names, and the special character * means "all axes" (not including filterwheels).
For example, X and Y are typically used for a sample translation stage, Z and F are commonly used for focus axes, and A-D are the default letters for scanner axes. Filter wheels are designated by numbers; the TG-1000 can accommodate up to 10 wheels numbered 0-9.

For Tiger: when [Addr#] appears in the format, the intended card address must be prepended to the serial command, as the command is Card-Addressed.

… indicates more arguments can be sent with the same command.

All commands are completed with a Carriage Return (ASCII hex code: 0D). The controllers receive ASCII characters one at a time and place them into their memory buffer. With the exception of single hex code commands like the tilde ~, the controller will not process a command in the memory buffer until the Carriage Return <CR> has been received. Upon receiving a Carriage Return <CR>, the controller will process the command stored in its command buffer, clear the command buffer, and return a reply.

#### MS-2000 Reply Syntax

When a command is recognized, the controller will send back a colon : (hex code: 3A) to show that it is processing the command. When processing of the command is complete, an answer is returned with any requested information, typically beginning with the letter A. In some cases, the answer part of the reply is delayed until the completion of the command. The reply is terminated by a carriage return and a linefeed character <CR><LF>. In the examples below, the <CR> and <CR><LF> are implied. This programming manual gives examples in the MS-2000 reply syntax unless otherwise specified.

##### Examples

Typed commands are in THIS TYPEFACE. Controller replies are in THIS TYPEFACE.

MOVE X=1234 Z=1234.5 <CR>
:A <CR><LF>
MOVE X Y Z <CR>
:A <CR><LF>
WHERE X <CR>
:A 0 <CR><LF>
MOVE X=4 Y=3 Z=1.5 <CR>
:A <CR><LF>
WHERE X Y Z <CR>
:A 4 3 1.5 <CR><LF>
WHERE Z Y X <CR>
:A 4 3 1.5 <CR><LF>

#### Tiger Reply Syntax

The TG-1000 has two reply syntaxes; the active one is set using the VB F command. The default syntax is backwards compatible with the MS-2000 controller, including all the quirks and inconsistencies between commands. The Tiger syntax is more self-consistent and in some cases more explanatory (e.g. with WHERE), but is not backwards compatible. Choice of reply syntax is completely arbitrary and does not affect operation.

In the Tiger syntax no :A is sent back. Furthermore, whenever an axis position or command value is returned (i.e. whenever the command is a query), the axis letter is always specified. Consequently, when no information needs to be sent back and there is no error, the controller simply replies with <CR><LF> only. The above examples in Tiger reply syntax are as follows:

MOVE X=1234 Z=1234.5 <CR>
<CR><LF>
MOVE X Y Z <CR>
<CR><LF>
WHERE X <CR>
X=0 <CR><LF>
MOVE X=4 Y=3 Z=1.5 <CR>
<CR><LF>
WHERE X Y Z <CR>
X=4 Y=3 Z=1.5 <CR><LF>
WHERE Z Y X <CR>
Z=1.5 Y=3 X=4 <CR><LF>

### Query of Parameters

Most commands used to set parameter values can be queried for the current values using the question-mark syntax:

CMND X? Y? Z? F?

The controller will respond with CMND's current settings, e.g.

:A X=0 Y=1 Z=10 F=2

This feature is most useful when using a terminal program to change controller parameters to verify that you have made the changes that you think you did, or to check present settings.
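For instance, checking and then changing a stage's speed setting with the SPEED command (abbreviated S) might look like the exchange below; the values are illustrative only, and not every command supports the query syntax, so consult the command reference for your controller:

SPEED X? Y? <CR>
:A X=5.745920 Y=5.745920 <CR><LF>
SPEED X=1 Y=1 <CR>
:A <CR><LF>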
### Error Codes

When a command is received that the controller cannot interpret, for one reason or another, an error message is returned in the following format:

:N-<error code>

The error codes are as follows:

:N-1 Unknown Command (Not Issued in TG-1000)
:N-2 Unrecognized Axis Parameter (valid axes are dependent on the controller)
:N-3 Missing parameters (command received requires an axis parameter such as x=1234)
:N-4 Parameter Out of Range
:N-5 Operation failed
:N-6 Undefined Error (command is incorrect, but for none of the above reasons)
:N-7 Invalid card address
Reserved
Reserved for filterwheel
Serial Command halted by the HALT command
Reserved

## Optimizing Communication Speed

For certain applications the speed of the communication can become a limiting factor, e.g. when keeping a live view of positions and statuses of all axes on a complicated system. Some ideas and notes about improving communication speed follow.

Obviously, to increase communication speed the first step is to increase the baud rate as much as possible. Tiger uses 115200 baud always, but for the MS2000 family of controllers (including RM2000, etc.) the speed is configured using DIP switches as detailed elsewhere; normally 115200 baud can be used without any problem. To compute the raw transfer time per byte, divide 10 by the baud rate (10 because there are 8 data bits plus a start and stop bit sent for every byte), so for every millisecond only 1 byte can be sent at 9600 baud compared with more than 11 bytes at 115200 baud.

Note that the computer, drivers, and high-level software can play a strong role in determining the communication speed. As an example, we noticed that the round-trip time for position queries on a 2015-era Xeon-based Windows machine was either 16 ms or 31 ms using Advanced Serial Port Monitor, but connecting the same controller to a very similar computer with identical drivers, operating system, and serial software the round-trip time is almost always 11 ms. Using Micro-Manager on the latter computer reduces the round-trip time to 8 ms, but in general Micro-Manager seems to check serial traffic in 10 ms intervals.

Because the TG-1000/Tiger controllers are modular, servicing most commands requires that the communication card parse them, relay the message to the relevant card (e.g. for motorized axes or micro-mirror or PLC), and then relay the response. This adds some extra time compared with the MS2000 family of controllers. For instance, querying a position using home-built serial software takes 7 ms on Tiger and about half that on MS2000. The intra-controller communication happens at 57600 baud, though it could be extended to 115200 if needed. For use with Micro-Manager, for example, this extra intra-controller communication time is irrelevant because it happens within the 10 ms polling period that Micro-Manager appears to use.

If communication speed is of the utmost importance, it is possible to use binary commands, in which case the controller doesn't need to parse the human-readable commands. For MS2000 these are called low-level commands and documentation is here. For TG-1000 these are called W commands and the documentation is here. Avoiding the parsing will save about a millisecond or so. Depending on the exact nature of the command, the number of bytes transmitted can be more or less than with the usual high-level commands.
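As a concrete check of the transfer-time arithmetic above: one byte takes 10/9600 ≈ 1.04 ms at 9600 baud and 10/115200 ≈ 0.087 ms at 115200 baud, so a 20-character command plus a 10-character reply costs roughly 31 ms of raw transfer time at 9600 baud but only about 2.6 ms at 115200 baud, before any driver, operating-system, or application overhead is added.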
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.284858763217926, "perplexity": 4972.3027067958765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221217909.77/warc/CC-MAIN-20180821014427-20180821034427-00640.warc.gz"}
http://www.jstor.org/stable/119387
# Reflection and Uniqueness Theorems for Harmonic Functions

D. H. Armitage
Proceedings of the American Mathematical Society, Vol. 128, No. 1 (Jan., 2000), pp. 85-92
Stable URL: http://www.jstor.org/stable/119387
Page Count: 8

## Abstract

Suppose that h is harmonic on an open half-ball β in R^N such that the origin 0 is the centre of the flat part τ of the boundary ∂β. If h has non-negative lower limit at each point of τ and h tends to 0 sufficiently rapidly on the normal to τ at 0, then h has a harmonic continuation by reflection across τ. Under somewhat stronger hypotheses, the conclusion is that h $\equiv$ 0. These results strengthen recent theorems of Baouendi and Rothschild. While the flat boundary set τ can be replaced by a spherical surface, it cannot in general be replaced by a smooth (N − 1)-dimensional manifold.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6059864163398743, "perplexity": 2505.6775892352666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170993.54/warc/CC-MAIN-20170219104610-00230-ip-10-171-10-108.ec2.internal.warc.gz"}
https://arxiv.org/abs/1706.07262
# MOVES I. The evolving magnetic field of the planet-hosting star HD189733

Abstract: HD189733 is an active K dwarf that is, with its transiting hot Jupiter, among the most studied exoplanetary systems. In this first paper of the Multiwavelength Observations of an eVaporating Exoplanet and its Star (MOVES) program, we present a 2-year monitoring of the large-scale magnetic field of HD189733. The magnetic maps are reconstructed for five epochs of observations, namely June-July 2013, August 2013, September 2013, September 2014, and July 2015, using Zeeman-Doppler Imaging. We show that the field evolves along the five epochs, with mean values of the total magnetic field of 36, 41, 42, 32 and 37 G, respectively. All epochs show a toroidally-dominated field. Using previously published data of Moutou et al. 2007 and Fares et al. 2010, we are able to study the evolution of the magnetic field over 9 years, one of the longest monitoring campaign for a given star. While the field evolved during the observed epochs, no polarity switch of the poles was observed. We calculate the stellar magnetic field value at the position of the planet using the Potential Field Source Surface extrapolation technique. We show that the planetary magnetic environment is not homogeneous over the orbit, and that it varies between observing epochs, due to the evolution of the stellar magnetic field. This result underlines the importance of contemporaneous multi-wavelength observations to characterise exoplanetary systems. Our reconstructed maps are a crucial input for the interpretation and modelling of our MOVES multi-wavelength observations.

Comments: 14 pages, 6 figures, accepted for publication in MNRAS
Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Earth and Planetary Astrophysics (astro-ph.EP)
DOI: 10.1093/mnras/stx1581
Cite as: arXiv:1706.07262 [astro-ph.SR] (or arXiv:1706.07262v2 [astro-ph.SR] for this version)

## Submission history

From: Rim Fares
[v1] Thu, 22 Jun 2017 11:31:42 UTC (1,817 KB)
[v2] Fri, 23 Jun 2017 13:44:19 UTC (1,817 KB)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.867119312286377, "perplexity": 2856.406803658351}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315811.47/warc/CC-MAIN-20190821065413-20190821091413-00423.warc.gz"}
https://tohoku.pure.elsevier.com/en/publications/can-the-21-cm-signal-probe-population-iii-and-ii-star-formation
# Can the 21-cm signal probe Population III and II star formation?

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

## Abstract

Using varying models for the star formation rate (SFR) of Population (Pop) III and II stars at z > 6 we derive the expected redshift history of the global 21-cm signal from the intergalactic medium (IGM). To recover the observed Thomson scattering optical depth of the cosmic microwave background (CMB) requires SFRs at the level of ~10^-3 M⊙ yr^-1 Mpc^-3 at z ~ 15 from Pop III stars, or ~10^-1 M⊙ yr^-1 Mpc^-3 at z ~ 7 from Pop II stars. In the case the SFR is dominated by Pop III stars, the IGM quickly heats above the CMB at z ≳ 12 due to heating from supernovae. In addition, Lyα photons from haloes hosting Pop III stars couple the spin temperature to that of the gas, resulting in a deep absorption signal. If the SFR is dominated by Pop II stars, the IGM slowly heats and exceeds the CMB temperature at z ~ 10. However, the larger and varying fraction of Pop III stars are able to break this degeneracy. We find that the impact of the initial mass function (IMF) of Pop III stars on the 21-cm signal results in an earlier change to a positive signal if the IMF slope is ~-1.2. Measuring the 21-cm signal at z ≳ 10 with next generation radio telescopes such as the Square Kilometre Array will be able to investigate the contribution from Pop III and Pop II stars to the global SFR.

Original language: English
Pages (from-to): 654-665
Number of pages: 12
Journal: Monthly Notices of the Royal Astronomical Society
Volume: 448
Issue number: 1
DOI: https://doi.org/10.1093/mnras/stu2687
Publication status: Published - 2015 Mar 21

## Keywords

• Dark ages
• First stars
• Galaxies: formation
• Galaxies: high-redshift
• Reionization
• Stars: Population II

## ASJC Scopus subject areas

• Astronomy and Astrophysics
• Space and Planetary Science
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8612089157104492, "perplexity": 3695.216859326236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487623942.48/warc/CC-MAIN-20210616124819-20210616154819-00476.warc.gz"}
https://codeforces.com/problemset/problem/666/E
E. Forensic Examination

time limit per test: 6 seconds
memory limit per test: 768 megabytes
input: standard input
output: standard output

The country of Reberland is the archenemy of Berland. Recently the authorities of Berland arrested a Reberlandian spy who tried to bring leaflets intended for agitational propaganda into Berland illegally. Most of the leaflets contain substrings of the Absolutely Inadmissible Swearword, and maybe even the whole word.

The Berland legal system uses a difficult algorithm in order to determine the guilt of the spy. The main part of this algorithm is the following procedure. All the m leaflets brought by the spy are numbered from 1 to m. After that, the answer to q queries of the following kind must be found: "In which leaflet in the segment of numbers [l, r] does the substring [pl, pr] of the Absolutely Inadmissible Swearword occur most often?". The expert wants you to automate that procedure because this time the texts of the leaflets are too long. Help him!

Input

The first line contains the string s (1 ≤ |s| ≤ 5·10^5) — the Absolutely Inadmissible Swearword. The string s consists of only lowercase English letters.

The second line contains the only integer m (1 ≤ m ≤ 5·10^4) — the number of texts of leaflets for expertise.

Each of the next m lines contains the only string ti — the text of the i-th leaflet. The sum of lengths of all leaflet texts doesn't exceed 5·10^4. The text of the leaflets consists of only lowercase English letters.

The next line contains integer q (1 ≤ q ≤ 5·10^5) — the number of queries for expertise.

Finally, each of the last q lines contains four integers l, r, pl, pr (1 ≤ l ≤ r ≤ m, 1 ≤ pl ≤ pr ≤ |s|), where |s| is the length of the Absolutely Inadmissible Swearword.

Output

Print q lines. The i-th of them should contain two integers — the number of the text with the most occurrences and the number of occurrences of the substring [pl, pr] of the string s. If there are several such text numbers, print the smallest one.

Examples

Input
suffixtree
3
suffixtreesareawesome
cartesiantreeisworsethansegmenttree
nyeeheeheee
2
1 2 1 10
1 3 9 10

Output
1 1
3 4
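The sample above can be checked with a few lines of Python. The sketch below is not part of the problem statement: it is a naive reference implementation that counts overlapping occurrences of the query substring in each leaflet and keeps the smallest index on ties. It is far too slow for the stated limits (competitive solutions typically rely on suffix structures with segment-tree techniques), but it makes the counting convention explicit and reproduces the sample output.

```python
# Naive reference check for the sample above (illustration only; too slow
# for the real constraints).

def count_overlapping(text, pattern):
    """Count overlapping occurrences of pattern in text."""
    count, start = 0, 0
    while True:
        idx = text.find(pattern, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1  # advance by one position to allow overlaps

def answer_queries(s, leaflets, queries):
    out = []
    for l, r, pl, pr in queries:
        pattern = s[pl - 1:pr]          # 1-based inclusive -> Python slice
        best_idx, best_cnt = l, -1
        for i in range(l, r + 1):
            c = count_overlapping(leaflets[i - 1], pattern)
            if c > best_cnt:            # strict '>' keeps the smallest index on ties
                best_idx, best_cnt = i, c
        out.append((best_idx, best_cnt))
    return out

if __name__ == "__main__":
    s = "suffixtree"
    leaflets = [
        "suffixtreesareawesome",
        "cartesiantreeisworsethansegmenttree",
        "nyeeheeheee",
    ]
    queries = [(1, 2, 1, 10), (1, 3, 9, 10)]
    for idx, cnt in answer_queries(s, leaflets, queries):
        print(idx, cnt)   # expected: "1 1" and "3 4"
```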
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38772016763687134, "perplexity": 1581.59880031999}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00535.warc.gz"}
https://www.physicsforums.com/threads/stored-energy-in-a-battery.209242/
# Stored energy in a battery

1. Jan 16, 2008

### PowerIso

1. The problem statement, all variables and given/known data

A certain lead-acid storage battery has a mass of 30 kg. Starting from a fully charged state, it can supply 5 amperes for 24 hours with a terminal voltage of 12 V before it is totally discharged.

(a) If the energy stored in the fully charged battery is used to lift the battery with 100% efficiency, what height is attained? Assume that the acceleration due to gravity is 9.88 m/s^2 and is constant with height.

(b) If the energy stored is used to accelerate the battery with 100% efficiency, what velocity is attained?

(c) Gasoline contains about 4.5 x 10^7 J/kg. Compare this with the energy content per unit mass of the fully charged battery.

2. Relevant equations

3. The attempt at a solution

I don't have an attempt at the solution, mainly because I am at a loss as to where to begin. I've read the section this question is referring to over and over again and I can't seem to get any closer to solving this problem. Can anyone please give me a hint on how to solve such a problem?

2. Jan 16, 2008

### chroot

Staff Emeritus

It's just an energy problem. Start by calculating how much energy is delivered in total by a 5 A current at 12 V over 24 hours. Then, figure out the equivalent altitude where the 30 kg battery has that same amount of gravitational potential energy as was released in the form of electricity.

- Warren

3. Jan 16, 2008

### PowerIso

Thanks a lot. I ended up with 17.6 km for the height and 587 m/s for the velocity question. I'm still confused on part (c). It's been about 3 years since I've taken Physics I and II, so I'm trying to recall old information, but it's hard.
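Not part of the original thread: a quick numerical check, in Python, of the energy bookkeeping chroot suggests. It only uses E = VIt, E = mgh and E = ½mv², plus the given gasoline figure for part (c).

```python
# Quick check of the battery energy problem (illustrative, not from the thread).
m = 30.0          # battery mass, kg
g = 9.88          # m/s^2, as given in the problem
V, I = 12.0, 5.0  # volts, amperes
t = 24 * 3600.0   # 24 hours in seconds

E = V * I * t                      # total electrical energy, joules
h = E / (m * g)                    # part (a): height from E = m g h
v = (2 * E / m) ** 0.5             # part (b): speed from E = (1/2) m v^2
e_batt = E / m                     # part (c): energy per unit mass, J/kg
e_gasoline = 4.5e7                 # J/kg, given

print(f"E      = {E:.3e} J")        # ~5.18e6 J
print(f"height = {h/1000:.1f} km")  # ~17.5 km (17.6 km corresponds to g ~ 9.8)
print(f"speed  = {v:.0f} m/s")      # ~588 m/s
print(f"gasoline/battery energy-per-mass ratio ~ {e_gasoline/e_batt:.0f}")  # ~260
```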
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8940984606742859, "perplexity": 429.2365012483009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814833.62/warc/CC-MAIN-20180223194145-20180223214145-00560.warc.gz"}
https://dynamicsystems.asmedigitalcollection.asme.org/article.aspx?articleid=1414210
# Discussion: "Model Reduction of Large-Scale Discrete Plants With Specified Frequency Domain Balanced Structure" (Zadegan, A., and Zilouchian, A., 2005, ASME J. Dyn. Syst. Meas., Control, 127, pp. 486–498)

Author and Article Information

Hamid Reza Shaker, Section for Automation and Control, Department of Electronic Systems, Aalborg University, Aalborg, [email protected]

Rafael Wisniewski, Section for Automation and Control, Department of Electronic Systems, Aalborg University, Aalborg, [email protected]

J. Dyn. Sys., Meas., Control 131(6), 065501 (Nov 10, 2009) (1 page) doi:10.1115/1.4000138

History: Received November 23, 2007; Revised May 29, 2008; Published November 10, 2009; Online November 10, 2009

## Abstract

This work presents a commentary on the article published by A. Zadegan and A. Zilouchian (2005, ASME J. Dyn. Syst. Meas., Control, 127, pp. 486–498). We show that their order reduction method is not always valid and may lead to inaccurate results. A framework for solving the problem is also suggested.

## DISCUSSION

Model reduction of systems with specified frequency domain balanced structure is a reduction technique that attempts to increase the accuracy of the approximation by looking at the reduction problem within a specified frequency bound instead of the whole frequency domain. In this method it is not required to keep the approximation good outside the specified frequency bound of operation; the accuracy of the approximation can therefore be increased compared with the results obtained by applying the well-known ordinary balanced reduction method. In this method the continuous-time controllability and observability Grammians in terms of $\omega$ over a frequency bound $[\omega_1,\omega_2]$ are defined as (1-7)

$W_{cf}\triangleq\frac{1}{2\pi}\int_{\omega_1}^{\omega_2}(Ij\omega-A)^{-1}BB^{*}(-Ij\omega-A^{*})^{-1}\,d\omega$

$W_{of}\triangleq\frac{1}{2\pi}\int_{\omega_1}^{\omega_2}(-Ij\omega-A^{*})^{-1}C^{*}C(Ij\omega-A)^{-1}\,d\omega$

Similarly, for discrete-time cases, the Grammians are defined as (1-7)

$W_{cf}\triangleq\frac{1}{2\pi}\int_{\omega_1}^{\omega_2}(Ie^{j\omega}-A)^{-1}BB^{*}(Ie^{-j\omega}-A^{*})^{-1}\,d\omega$

$W_{of}\triangleq\frac{1}{2\pi}\int_{\omega_1}^{\omega_2}(Ie^{-j\omega}-A^{*})^{-1}C^{*}C(Ie^{j\omega}-A)^{-1}\,d\omega$

This model reduction technique is based on the ordinary balanced model reduction method, first proposed by Moore (8) and then improved and developed in different directions (10). The philosophy of the model reduction method proposed by Zadegan (1-7) is very similar to the one presented by Enns (9), but it is not always true and may lead to inaccurate results. In what follows we discuss the problem of the method in more detail.

In the first step of the aforementioned model reduction technique the original system should be transformed to the specified frequency domain balanced structure, i.e., the controllability and observability Grammians of the transformed system should be equal and diagonal. The second step of the reduction procedure consists of partitioning and applying the generalized singular perturbation approximation to the system with specified frequency domain balanced structure. The problem which arises in the practical implementation of the reduction technique is the infeasibility of the balancing algorithms for finding an appropriate similarity transform which should transform the original system into the frequency domain balanced structure. In order to find an appropriate similarity transform, the authors of Refs. 1-2,7 have suggested using one of the well-known numerical algorithms, proposed for the first time by Laub (7). In this algorithm the Cholesky factorization should be applied to the Grammians defined above.
Because the aforementioned Grammians are not real, we cannot apply the Cholesky factorization, and the overall Laub algorithm is therefore not applicable. If we use $W_{cf}+\mathrm{Conj}(W_{cf})$ and $W_{of}+\mathrm{Conj}(W_{of})$ instead of $W_{cf}$ and $W_{of}$, respectively, as the authors of Refs. 1-2,7 have done in their works, the Laub algorithm can be applied to them, but the structure into which the original system is transformed is no longer the frequency domain balanced structure. In the frequency domain balanced structure the Grammians should be equal and diagonal, whereas the similarity transform obtained from the aforementioned procedure can only transform the system to a structure in which the real part of the Grammians is equal and diagonal.

In order to overcome the problem, one can use input-output weights and make the dynamic system operate only within the frequency bound of interest. The frequency-weighted dynamic system can then be reduced successfully. In this case Plancherel's theorem can guarantee the correctness of the method.
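Not part of the published comment: a small numerical sketch (my own, in Python/NumPy) of the central point above. It evaluates the frequency-limited controllability Grammian by quadrature for a random stable system and shows that the result is Hermitian but generally not real, which is exactly what prevents a plain real Cholesky-based (Laub-type) balancing step; replacing it by $W_{cf}+\mathrm{Conj}(W_{cf}) = 2\,\mathrm{Re}(W_{cf})$ changes the matrix being balanced. The system matrices and frequency band are arbitrary placeholders.

```python
# Illustrative sketch (not from the paper): frequency-limited controllability
# Grammian of a random stable system, computed by numerical quadrature.
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift to make A Hurwitz
B = rng.standard_normal((n, m))

w1, w2 = 0.1, 2.0                        # frequency band of interest [rad/s]
ws = np.linspace(w1, w2, 2001)
I = np.eye(n)

vals = []
for w in ws:
    M = np.linalg.solve(1j * w * I - A, B)   # (I*j*w - A)^{-1} B
    vals.append(M @ M.conj().T)              # integrand of W_cf (Hermitian PSD)
Wcf = np.trapz(np.array(vals), ws, axis=0) / (2 * np.pi)

print("Hermitian:", np.allclose(Wcf, Wcf.conj().T))     # True
print("max |imag(Wcf)|:", np.abs(Wcf.imag).max())       # typically nonzero for a
                                                        # one-sided frequency band
# 2*Re(Wcf) is real symmetric, so Cholesky works on it, but balancing Re(Wcf)
# is not the same as balancing Wcf itself -- the point made in the comment.
```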
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8034910559654236, "perplexity": 842.1514222297144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256724.28/warc/CC-MAIN-20190522022933-20190522044933-00198.warc.gz"}
http://sachinashanbhag.blogspot.com/2014/09/the-magic-of-compounding.html
## Monday, September 8, 2014 ### The Magic of Compounding While Albert Einstein may never have said that compound interest is "the most powerful source in the universe", the exponential growth implied by the magic of compounding can lead to spectacular outcomes. Example 1: As as example, consider the following: You start with a penny on day one. On day two, you double it. So you have $0.02. On day 3, you double it again ($0.04), and so on. Without actually carrying out the math, can you guess how much money you will end up with at the end of a 30 day month? The answer of course is 2^29 pennies, which is over 5 million dollars! Example 2: The idea is also enshrined in legend. Consider, for example, the story of the chess-board and grains of rice. Essentially, a king was asked to set one grain of rice in the first square, two in the next, and to keep on doubling until all the 64 squares on the chessboard were used up. A quick calculation shows that the total number of grains would be $2^0 + 2^1 + ... + 2^{63} = 2^{64} - 1.$ Assuming each grain weights 25 mg, this corresponds to more than 450 billion tonnes of rice, which is about 1000 times larger than the annual global production. Example 3: What makes Warren Buffett fabulously wealthy? If you start with an amount $P$ and grow it at an annual growth rate of $i$ for $n$ years, you end up with,$A = P (1 + i)^n.$ Two ways to get compounding to work its magic is to have large growth rates and/or long incubation times. In Buffett's case, he has managed both; he's compounded money at more than $i = 0.20$, for a long time, $n=60$ years. With this \$100 becomes, $A = 100 (1+0.2)^{60} = 5,634,514.$ Example 4: This is in someways my favorite example, because it doesn't deal with material things. It is an insight that comes from this essay that I wrote about a while ago. I love the following quote: What Bode was saying was this: "Knowledge and productivity are like compound interest.'' Given two people of approximately the same ability and one person who works ten percent more than the other, the latter will more than twice outproduce the former. The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity - it is very much like compound interest. I don't want to give you a rate, but it is a very high rate. Given two people with exactly the same ability, the one person who manages day in and day out to get in one more hour of thinking will be tremendously more productive over a lifetime.
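Not part of the original post: a few lines of Python that check the arithmetic in Examples 1-3 (the penny doubling, the chessboard rice, and the compound-growth formula A = P(1+i)^n). The 25 mg grain mass and the i = 0.20, n = 60 figures are the ones quoted in the post.

```python
# Quick checks of the compounding examples above (illustrative only).

# Example 1: doubling a penny over a 30-day month -> 2**29 pennies on day 30
pennies = 2 ** 29
print(f"Day 30: ${pennies / 100:,.2f}")          # about $5.4 million

# Example 2: 1 + 2 + ... + 2**63 = 2**64 - 1 grains of rice, 25 mg per grain
grains = 2 ** 64 - 1
tonnes = grains * 25e-6 / 1000                    # mg -> kg -> tonnes
print(f"Rice: {tonnes:.3e} tonnes")               # ~4.6e11, i.e. > 450 billion tonnes

# Example 3: A = P (1 + i)^n with P = 100, i = 0.20, n = 60
P, i, n = 100, 0.20, 60
print(f"A = ${P * (1 + i) ** n:,.0f}")            # ~ $5.63 million
```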
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40233153104782104, "perplexity": 699.127612577354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945793.18/warc/CC-MAIN-20180423050940-20180423070940-00052.warc.gz"}
https://www.physicsforums.com/threads/infinite-square-well-with-attractive-potential.345823/
# Infinite square well with attractive potential

1. Oct 14, 2009

### Brian-san

1. The problem statement, all variables and given/known data

We have an infinite square well potential of width 2L centered at the origin, with an attractive delta function potential V0δ(x) at the origin, with the properties
$$V_0<0, -V_0>\frac{\hbar^2}{mL^2}$$
Determine the conditions for a negative energy bound state. There are a few other parts to the question, but I do not have the sheet at the moment.

2. Relevant equations

Schrodinger Equation

3. The attempt at a solution

In the absence of the delta function, we get
$$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}=E\psi$$
This differential equation has characteristic polynomial given by
$$x^2+\frac{2mE}{\hbar^2}=0, x=\pm\frac{i\sqrt{2mE}}{\hbar}$$
The solution is then
$$\psi=Asin\left(\frac{\sqrt{2mE}}{\hbar}x\right)+Bcos\left(\frac{\sqrt{2mE}}{\hbar}x\right)$$
Using the boundary condition that the wave function is zero at ±L and normalizing the wave function, I get
$$\psi=L^{-\frac{1}{2}}cos\left(\frac{n\pi x}{2L}\right), L^{-\frac{1}{2}}sin\left(\frac{n\pi x}{2L}\right)$$
where the cosine solution is for odd n and the sine solution is for even n. Also, the energy spectrum is given by
$$E_n=\frac{n^2\pi^2\hbar^2}{8mL^2}$$
for any positive integer n. This is the solution for the infinite well without the delta function; the full Schrodinger equation should be
$$-\frac{\hbar^2}{2m}\frac{\partial^2\psi}{\partial x^2}+V_0\delta(x)\psi=E\psi$$
I thought about integrating both sides over the width of the well to eliminate the delta function and get
$$-\frac{\hbar^2}{2m}\int_{-L}^{L}\frac{\partial^2\psi}{\partial x^2}dx+V_0\psi(0)=E\int_{-L}^{L}\psi dx$$
For even n this is trivial, but for odd n it seems to give me that V0=0, which is not true, nor very helpful. Obviously the bound state will most likely be the ground state n=1, so I thought it could be
$$E_1<-V_0, \frac{\pi^2\hbar^2}{8mL^2}<-V_0$$
However, I have a feeling that I am supposed to work in the conditions imposed on V0 somehow. Is the work so far on the right track, or have I missed something important? There are a few more parts to the question, but this is all I could remember without the sheet near me.

2. Oct 14, 2009

### gabbagabbahey

Actually, it gives you $V_0\psi(0)=0$, and since $V_0\neq 0$, $\psi(0)=$___? You should not be surprised by this result; the wavefunction is always zero in regions where the potential is infinite. The effect of this is simply to divide the well in two.

3. Oct 15, 2009

### Brian-san

Given the solutions I got for the wavefunction, $\psi(0)=0$ for even n, and $\psi(0)=L^{-\frac{1}{2}}$ for odd n. But since V0 is nonzero, in order for $V_0\psi(0)=0$, it would imply $\psi(0)=0$. However, wouldn't this make the wavefunction discontinuous for odd n at the origin? Would this still have the effect of splitting the potential into two regions even if it is attractive? It makes sense if the delta function has a positive coefficient. Visually, this problem should look like a square well with a large dip toward $-\infty$ at the origin. This is what would produce the negative energy bound state I'm looking for.

4. Oct 15, 2009

### gabbagabbahey

No, your solutions are invalid. The fact that $\psi(0)=0$ provides an extra boundary condition which you must apply to the general solution to Schroedinger's equation inside $|x|\leq L$...make sense?

5.
Oct 15, 2009

### Brian-san

With the additional condition that $\psi(0)=0$: if I apply that to my general solution from above, since the cosine term is nonzero at the origin, B=0 and the solution is just
$$\psi=Asin\left(\frac{\sqrt{2mE}}{\hbar}x\right)$$
Applying the boundary conditions at the walls of the well and normalizing tells me
$$\psi(x)=L^{-\frac{1}{2}}sin\left(\frac{n\pi}{L}x\right), E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}$$
Is the solution incorrect simply because I did not apply the condition at x=0 to my first solution, or do I have to solve the differential equation leaving the delta potential term intact? If it's the latter, I don't think I can still solve the differential equation by finding the roots of the characteristic equation. We derived the solution for an attractive delta potential in class, but that was without the infinite potential walls at ±L. In that case the solutions involved the exponential function, but those can't satisfy the boundary conditions at the walls of the well, so I don't think that example will be of much help.

6. Oct 15, 2009

### gabbagabbahey

As I said earlier, the delta function effectively divides your well into two halves, giving you a new boundary at $x=0$. Your original solution was incorrect because it failed to take this boundary and its corresponding boundary condition into account.

7. Oct 15, 2009

### Brian-san

Then is this solution from a previous post correct?
$$\psi(x)=L^{-\frac{1}{2}}sin\left(\frac{n\pi}{L}x\right), E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}$$
It satisfies that the wave function is zero at x=0,L,-L, is normalized, and satisfies the Schrodinger equation. Also, the last few parts ask about limits on the binding energy when $-V_0$ is large, and when $-\frac{mLV_0}{\hbar^2}=1+\delta, \delta\ll 1$, and whether the energy is continuous at $\delta=0$. That's nothing difficult once I find the relation between E and V. Once I have the correct expression for the energy states, I also have the facts that $V_0<0, -V_0>\frac{\hbar^2}{mL}$. But in what way do I combine these to find the condition for a bound energy state? Presumably it would occur when the energy of the particle is insufficient to escape the attractive well, so E+V<0.

8. Oct 16, 2009

### gabbagabbahey

Well, these are eigenstates for this potential, but they aren't really the eigenstates you were asked for now, are they? What is the general solution for $E<0$? What do you get when you apply your boundary conditions to it? (Remember to consider the regions $-L<x<0$ and $0<x<L$ separately)

9. Oct 17, 2009

### Brian-san

I solved the equations again, separately for each region, and took a few ideas from my notes when thinking about the boundary conditions at the walls of the well. Without going through that whole process, I got:
$$\psi_1(x)=A_1sin(k(L-x)), 0<x\leq L$$
$$\psi_2(x)=A_2sin(k(L+x)), -L\leq x<0$$
with the usual $k=\frac{\sqrt{2mE}}{\hbar}$. Since the wave function must be continuous at x=0, we get that $A_2=\pm A_1$. More specifically, we know $\psi(0)=0$. (The normalization constant is still $A_1=L^{-1/2}$, but it hasn't been needed for anything yet.) So if we consider the first case, $A_2=-A_1$, then at x=0
$$A_1sin(kL)=-A_1sin(kL)$$
This implies that $kL=n\pi$, and gives the usual result of
$$E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}$$
Also, in this case, the derivative of the wave function is continuous at x=0. For the case of $A_2=A_1$, the wave function is still continuous at x=0 for our expression of k, but there is now a discontinuity in the derivative.
Integrating over a small region near x=0, this can be described by the relation
$$2A_1kcos(kL)=\frac{2mV_0}{\hbar^2}A_1sin(kL)$$
If we let z=kL, then we find a transcendental equation
$$tan(z)=\frac{z\hbar^2}{mLV_0}$$
If you look at this graphically, there are an infinite number of solutions that occur where the two functions intersect. Looking at the limit $-V_0\rightarrow\infty$, then $tan(z)=0$, and $kL=n\pi$. This leads to the usual relation
$$E_n=\frac{n^2\pi^2\hbar^2}{2mL^2}$$
In the other limit, $-\frac{mLV_0}{\hbar^2}=1+\delta, \delta\ll 1$, I was thinking that
$$tan(z)=\frac{-z}{1+\delta}$$
Then I'm kind of stuck from here. I was thinking that the intersection in this case would occur at a small enough value of z so I could use the approximation
$$tan(z)\approx\frac{z}{1-\frac{1}{2}z^2}$$
Then I thought, even if that were the case, it would only apply to the first intersection point.

10. Oct 17, 2009

### gabbagabbahey

I must apologize, but I think I may have steered you in the wrong direction when I said $\psi(0)=0$. Looking again at the equation,
$$-\frac{\hbar^2}{2m}\int_{-L}^{L}\frac{d^2\psi}{d x^2}dx+V_0\psi(0)=E\int_{-L}^{L}\psi dx$$
It's true that the RHS will be zero since the wavefunction is continuous and $\psi(L)=\psi(-L)=0$, but $\frac{d\psi}{dx}$ need not be continuous, because of the delta function at the center
$$\implies \int_{-L}^{L}\frac{d^2\psi}{d x^2}dx\neq \left.\frac{d\psi}{d x}\right|_{-L}^{L}$$
which was my basis for claiming that $V_0\psi(0)=0$. Instead, for negative energy states, I think you'll want to write the general solution in the form:
$$\psi_1(x)=A_1\sinh(\kappa x)+B_1\cosh(\kappa x), -L\leq x<0$$
$$\psi_2(x)=A_2\sinh(\kappa x)+B_2\cosh(\kappa x), 0< x\leq L$$
where $\kappa\equiv\sqrt{\frac{-2mE}{\hbar^2}}$ is real and positive. Then apply your boundary conditions at $x=\pm L$ and the fact that the wavefunction is continuous at $x=0$, but its derivative has a finite discontinuity there.

Last edited: Oct 17, 2009
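Not part of the original thread: a short numerical sketch (my own, in Python) of the negative-energy bound state the thread is building toward. Writing the even-parity E < 0 solution as ψ(x) = A sinh(κ(L − |x|)) and applying the derivative-jump condition across the delta function gives tanh(κL) = κℏ²/(m|V₀|), the E < 0 analogue of the tan(z) equation in post #9; a nonzero root, and hence a negative-energy bound state, exists only when −V₀ > ℏ²/(mL). This matching condition is my own derivation in natural units (ℏ = m = L = 1), and the chosen |V₀| is arbitrary.

```python
# Sketch (not from the thread): bound state of an infinite well of half-width L
# with an attractive delta potential V0*delta(x) at the center, in units
# hbar = m = L = 1.  Even E < 0 solution psi(x) = A*sinh(kappa*(L - |x|));
# the derivative jump across the delta gives  tanh(kappa) = kappa / |V0|,
# which has a nonzero root only when |V0| > 1, i.e. -V0 > hbar^2/(m*L).
import numpy as np
from scipy.optimize import brentq

V0_abs = 3.0          # |V0| in units of hbar^2/(m*L); must exceed 1 for a bound state

def f(kappa):
    return np.tanh(kappa) - kappa / V0_abs

# For |V0| > 1, f > 0 just above kappa = 0 and f < 0 for large kappa,
# so the nonzero root can be bracketed and found with Brent's method.
kappa = brentq(f, 1e-9, 10 * V0_abs)
E = -kappa**2 / 2.0   # E = -hbar^2 kappa^2 / (2m) in these units
print(f"kappa*L = {kappa:.4f},  bound-state energy E = {E:.4f}  [hbar^2/(m L^2)]")
```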
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9743317365646362, "perplexity": 267.08212534868744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805023.14/warc/CC-MAIN-20171118190229-20171118210229-00083.warc.gz"}
https://events.saip.org.za/event/7/contributions/5004/
# SAIP 2011

12-15 July 2011
Saint George Hotel
Africa/Johannesburg timezone

## Performance monitoring of a downdraft System Johansson Biomass Gasifier™

14 Jul 2011, 17:00
2h
Asteria

Poster Presentation, Track F - Applied and Industrial Physics

### Speaker

Dr Sampson Mamphweli (University of Fort Hare)

### Description

Biomass gasification for electricity generation has attracted much attention over the past few years. This is due to the fact that biomass is a renewable resource, which is also considered to be carbon neutral. However, electricity generation using biomass gasifiers can only be technically and economically achieved at small scale using downdraft gasifier systems, which produce gas containing very small quantities of tar. This paper presents the technical and operational challenges experienced in biomass gasification for electricity generation. The data was collected at the System Johansson Biomass Gasifier installed by Eskom. NDIR and Pd/Ni gas sensors were used to measure the gas profiles, while type K thermocouples were used to measure the temperature in the reactor. This paper presents the performance monitoring results, including the gasifier operating conditions, fuel properties, gas profiles, as well as the gas heating value and cold gas efficiency.

Consider for a student award (Yes / No)? No

### Primary author

Dr Sampson Mamphweli (University of Fort Hare)

### Co-author

Prof. Edson Meyer (University of Fort Hare, Institute of Technology)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8564645051956177, "perplexity": 8868.569371881358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150067.87/warc/CC-MAIN-20210724001211-20210724031211-00077.warc.gz"}
https://dev.echo.org/developers/operations/asset_market/_call_order_update_operation/
# call_order_update_operation

This operation can be used to add collateral, cover, and adjust the margin call price for a particular user. For prediction markets the collateral and debt must always be equal.

This operation will fail if it would trigger a margin call that couldn't be filled. If the margin call hits the call price limit, it will fail if the call price is above the settlement price.

Note: This operation can be used to force a market order using the collateral without requiring outside funds.

### JSON Example

    [
      3,
      {
        "fee": {
          "amount": 0,
          "asset_id": "1.3.0"
        },
        "funding_account": "1.2.0",
        "delta_collateral": {
          "amount": 0,
          "asset_id": "1.3.0"
        },
        "delta_debt": {
          "amount": 0,
          "asset_id": "1.3.0"
        },
        "extensions": []
      }
    ]
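Not part of the original page: a small Python sketch showing how the [op_code, payload] pair above could be assembled as a plain dictionary, with a sanity check for the prediction-market rule stated in the text (equal collateral and debt deltas). The account and asset IDs are the placeholder values from the JSON example, the operation code 3 is taken from that example as well, and no particular Echo client library is assumed.

```python
# Illustrative only: build the [op_code, payload] structure shown above and
# check the documented prediction-market constraint.  IDs and amounts are the
# placeholders from the example, not real chain objects.
import json

def call_order_update(funding_account, delta_collateral, delta_debt,
                      fee_asset="1.3.0", prediction_market=False):
    if prediction_market and delta_collateral["amount"] != delta_debt["amount"]:
        raise ValueError("prediction markets require equal collateral and debt deltas")
    return [
        3,  # operation code used in the example above
        {
            "fee": {"amount": 0, "asset_id": fee_asset},
            "funding_account": funding_account,
            "delta_collateral": delta_collateral,
            "delta_debt": delta_debt,
            "extensions": [],
        },
    ]

op = call_order_update(
    funding_account="1.2.0",
    delta_collateral={"amount": 0, "asset_id": "1.3.0"},
    delta_debt={"amount": 0, "asset_id": "1.3.0"},
    prediction_market=True,
)
print(json.dumps(op, indent=2))
```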
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5720431208610535, "perplexity": 6189.608351885916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986679439.48/warc/CC-MAIN-20191018081630-20191018105130-00495.warc.gz"}