https://notes.tyrocity.com/vector-operations/
# Vector operations

If two scalars are added, the resulting scalar is unique and equals the sum of the two given scalars. For example, if the scalars 2 and 8 are added, their sum is always 2 + 8 = 10. The addition of vectors is more involved. If we add two vectors of magnitudes 2 and 8, the magnitude of the resultant vector will be 6, or 10, or any value between 6 and 10, depending on the directions of the vectors we are adding.

1. If the two given vectors act in the same direction, the magnitude of the resultant vector is 10 units.
2. If the two given vectors act in opposite directions, the magnitude of the resultant vector is 6 units.
3. If the two given vectors act in different (non-parallel) directions, the magnitude of the resultant vector lies between 6 and 10.

How to add two given vectors (geometrical representation): Suppose $\overline{AB}$ and $\overline{CD}$ are two given vectors.

i) If the two vectors act in the same direction, take the first vector $\overline{AB}$ and attach the initial point of $\overline{CD}$ to the terminal point of $\overline{AB}$. We get $\overline{AB}$ + $\overline{CD}$ = $\overline{AD}$. The magnitude of the resultant vector equals the sum of the magnitudes of $\overline{AB}$ and $\overline{CD}$; i.e. when two vectors act in the same direction, "the sum of the magnitudes of the vectors = the magnitude of the resultant vector".

ii) If the two vectors act in different directions: the procedure of addition is the same, but the direction and magnitude of the resultant vector differ. Suppose $\overline{AB}$ and $\overline{CD}$ are two given vectors acting in different directions as shown in fig(a). To add these two vectors, attach the initial point C of the second vector $\overline{CD}$ to the terminal point B of the first vector $\overline{AB}$. Now join the initial point A of the first vector $\overline{AB}$ with the terminal point D of the second vector $\overline{CD}$. The closing side, taken in reverse order, is the vector $\overline{AD}$, and it represents the resultant vector in both magnitude and direction. When two vectors act in different directions, "the sum of the magnitudes of the vectors > the magnitude of the resultant vector".

i) Vector addition is commutative: $\bar{a}$ + $\bar{b}$ = $\bar{b}$ + $\bar{a}$

Proof: Suppose $\overline{OA}$ = $\bar{a}$ and $\overline{AB}$ = $\bar{b}$ are two given vectors. To add them, connect the initial point of $\overline{AB}$ to the terminal point of $\overline{OA}$; the closing side OB, taken in reverse order, represents the resultant vector $\bar{r}$. Now draw vectors parallel to $\bar{a}$ and $\bar{b}$ to complete the parallelogram OABC. From fig(ii), $\overline{CB}$ = $\overline{OA}$ = $\bar{a}$ and $\overline{AB}$ = $\overline{OC}$ = $\bar{b}$.

From triangle OAB: $\overline{OA}$ + $\overline{AB}$ = $\overline{OB}$, i.e. $\bar{a}$ + $\bar{b}$ = $\bar{r}$ — (1)
From triangle OCB: $\overline{OC}$ + $\overline{CB}$ = $\overline{OB}$, i.e. $\bar{b}$ + $\bar{a}$ = $\bar{r}$ — (2)
From eq(1) and (2) we get $\bar{a}$ + $\bar{b}$ = $\bar{b}$ + $\bar{a}$.

ii) Vector addition is associative: ($\bar{a}$ + $\bar{b}$) + $\bar{c}$ = $\bar{a}$ + ($\bar{b}$ + $\bar{c}$)

Let $\overline{OA}$ = $\bar{a}$, $\overline{AB}$ = $\bar{b}$ and $\overline{BC}$ = $\bar{c}$ be three different vectors. To add the three vectors $\bar{a}$, $\bar{b}$ and $\bar{c}$, connect the initial point of $\overline{AB}$ to the terminal point of $\overline{OA}$, and the initial point of $\overline{BC}$ to the terminal point of $\overline{AB}$. The closing side taken in reverse order represents the resultant vector; hence the resultant is $\overline{OC}$ = $\bar{r}$.

Proof: To add three vectors we first add two of them, then add the third vector to that sum. From fig(ii):
From triangle OAB: $\overline{OA}$ + $\overline{AB}$ = $\overline{OB}$ = ($\bar{a}$ + $\bar{b}$) — (1)
From triangle OBC: $\overline{OB}$ + $\overline{BC}$ = $\overline{OC}$ = $\bar{r}$ — (2)
Substituting $\overline{OB}$ = ($\bar{a}$ + $\bar{b}$) from eq(1) into eq(2): ($\bar{a}$ + $\bar{b}$) + $\bar{c}$ = $\overline{OC}$ = $\bar{r}$ — (A)
From triangle ABC: $\overline{AB}$ + $\overline{BC}$ = $\overline{AC}$, i.e. ($\bar{b}$ + $\bar{c}$) = $\overline{AC}$ — (3)
From triangle OAC: $\overline{OA}$ + $\overline{AC}$ = $\overline{OC}$ = $\bar{r}$ — (4)
Combining eq(3) and eq(4): $\bar{a}$ + ($\bar{b}$ + $\bar{c}$) = $\overline{OC}$ = $\bar{r}$ — (B)
Comparing eq(A) and eq(B): ($\bar{a}$ + $\bar{b}$) + $\bar{c}$ = $\bar{a}$ + ($\bar{b}$ + $\bar{c}$).

Subtraction of vectors: Subtraction of vectors is also a form of addition. Adding a vector acting in the opposite direction is called subtraction of vectors. Suppose, as in fig(i), $\overline{AB}$ = $\bar{a}$ and $\overline{CD}$ = $\bar{b}$ are two vectors. To subtract $\bar{b}$ from $\bar{a}$ we add the negative vector of $\bar{b}$ to $\bar{a}$, i.e. $\bar{a}$ − $\bar{b}$ = $\bar{a}$ + (−$\bar{b}$). In fig(ii) we draw $\overline{BE}$ = (−$\bar{b}$), the negative vector of $\overline{CD}$ = $\bar{b}$, and connect the initial point of $\overline{BE}$ to the terminal point of $\overline{AB}$. The resultant of the addition of these two vectors is $\overline{AE}$. Therefore $\overline{AB}$ + $\overline{BE}$ = $\overline{AE}$, i.e. $\bar{a}$ + (−$\bar{b}$) = $\bar{a}$ − $\bar{b}$ = $\overline{AE}$.

Note: Subtraction of vectors is not commutative, i.e. $\bar{a}$ − $\bar{b}$ $\neq$ $\bar{b}$ − $\bar{a}$.
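The three magnitude cases above can be checked numerically. A minimal sketch in Python (the function name is my own; the formula is the law of cosines, with θ the angle between the two vectors):

```python
import math

def resultant_magnitude(a, b, theta):
    """Magnitude of the resultant of two vectors with magnitudes a and b
    separated by an angle theta (law of cosines)."""
    return math.sqrt(a * a + b * b + 2 * a * b * math.cos(theta))

print(resultant_magnitude(2, 8, 0))            # same direction: 10.0
print(resultant_magnitude(2, 8, math.pi))      # opposite directions: 6.0
print(resultant_magnitude(2, 8, math.pi / 3))  # other directions: between 6 and 10
```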
2019-10-19T22:11:12
{ "domain": "tyrocity.com", "url": "https://notes.tyrocity.com/vector-operations/", "openwebmath_score": 0.8548914790153503, "openwebmath_perplexity": 410.0957085701906, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795098861571, "lm_q2_score": 0.6619228825191872, "lm_q1q2_score": 0.6533043221712197 }
https://www.tutorialspoint.com/equivalent_plotting_ordering_fractions/using_common_denominator_to_order_fraction.htm
# Using a Common Denominator to Order Fractions Ordering fractions is arranging them either in increasing or decreasing order. The fractions that are to be ordered can have like or unlike denominators. In case we are required to order fractions with unlike denominators, we write their equivalent fractions with like denominators after finding their least common denominator. Then we order their numerators, and the same order applies to the original fractions. First, rewrite $\frac{9}{11}$ and $\frac{5}{6}$ so that they have a common denominator. Then use <, = or > to order $\frac{9}{11}$ and $\frac{5}{6}$. ### Solution Step 1: We must rewrite the fractions so that they have a common denominator. We can use the least common denominator (LCD). The LCD of $\frac{9}{11}$ and $\frac{5}{6}$ is 66. Step 2: Now we rewrite the fractions with this denominator. $\frac{9}{11}$ = $\frac{9×6}{11×6}$ = $\frac{54}{66}$ $\frac{5}{6}$ = $\frac{5×11}{6×11}$ = $\frac{55}{66}$ Step 3: Since $\frac{54}{66}$ and $\frac{55}{66}$ have a common denominator, we can order them using their numerators. Because 54 < 55, we have $\frac{54}{66}$ < $\frac{55}{66}$ Step 4: Writing these fractions in original form $\frac{9}{11}$ < $\frac{5}{6}$ First, rewrite $\frac{1}{9}$ and $\frac{2}{15}$ so that they have a common denominator. Then use <, = or > to order $\frac{1}{9}$ and $\frac{2}{15}$. ### Solution Step 1: We must rewrite the fractions so that they have a common denominator. We can use the least common denominator (LCD). The LCD of $\frac{1}{9}$ and $\frac{2}{15}$ is 45. Step 2: Now we rewrite the fractions with this denominator. $\frac{1}{9}$ = $\frac{1×5}{9×5}$ = $\frac{5}{45}$ $\frac{2}{15}$ = $\frac{2×3}{15×3}$ = $\frac{6}{45}$ Step 3: Since $\frac{5}{45}$ and $\frac{6}{45}$ have a common denominator, we can order them using their numerators. Because 5 < 6, we have $\frac{5}{45}$ < $\frac{6}{45}$ Step 4: Writing these fractions in original form $\frac{1}{9}$ < $\frac{2}{15}$
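The procedure can be sketched in a few lines of Python; `Fraction` keeps the arithmetic exact (the helper name is my own):

```python
from fractions import Fraction
from math import lcm

def rewrite_with_lcd(a, b):
    """Rewrite two fractions over their least common denominator,
    returning the two new numerators and the LCD."""
    d = lcm(a.denominator, b.denominator)
    return a.numerator * d // a.denominator, b.numerator * d // b.denominator, d

n1, n2, d = rewrite_with_lcd(Fraction(9, 11), Fraction(5, 6))
print(n1, n2, d)  # 54 55 66, so 9/11 < 5/6
```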
2018-09-18T17:20:32
{ "domain": "tutorialspoint.com", "url": "https://www.tutorialspoint.com/equivalent_plotting_ordering_fractions/using_common_denominator_to_order_fraction.htm", "openwebmath_score": 0.9180678129196167, "openwebmath_perplexity": 877.7653527041452, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795098861571, "lm_q2_score": 0.6619228825191871, "lm_q1q2_score": 0.6533043221712196 }
https://socratic.org/questions/what-is-the-equation-of-the-line-with-slope-m-12-11-that-passes-through-2-11
# What is the equation of the line with slope m= 12/11 that passes through (-2,11) ? Apr 9, 2018 $y = \frac{12}{11} x + \frac{145}{11}$ #### Explanation: The equation of a line in slope-intercept form is $y = m x + b$. We are given $x$, $y$, and $m$. So, plug these values in: $11 = \frac{12}{11} \cdot \left(- 2\right) + b$ $11 = - \frac{24}{11} + b$ $11 + \frac{24}{11} = b$ $\frac{121}{11} + \frac{24}{11} = b$ $\frac{145}{11} = b$ This is how I would leave it, but feel free to turn it into a mixed number or decimal. So, our equation is $y = \left(\frac{12}{11}\right) x + \frac{145}{11}$
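The same computation can be done with exact fractions in Python as a sanity check:

```python
from fractions import Fraction

m = Fraction(12, 11)
x, y = -2, 11

# From y = m*x + b, solve for the intercept:
b = y - m * x
print(b)  # 145/11
```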
2020-03-29T18:52:53
{ "domain": "socratic.org", "url": "https://socratic.org/questions/what-is-the-equation-of-the-line-with-slope-m-12-11-that-passes-through-2-11", "openwebmath_score": 0.8771723508834839, "openwebmath_perplexity": 411.69016545899274, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795095031688, "lm_q2_score": 0.6619228825191871, "lm_q1q2_score": 0.6533043219177109 }
https://plainmath.net/7902/numbers-expressed-repeating-decimal-irrational-divided-nonzero-irrational
True or False? 1) Let x and y be real numbers. If ${x}^{2}-5x={y}^{2}-5y$ and $x\ne y$, then x+y is five. 2) The real number pi can be expressed as a repeating decimal. 3) If an irrational number is divided by a nonzero integer, the result is irrational. hajavaF 1) True. ${x}^{2}-5x={y}^{2}-5y$ Rearranging the equation, ${x}^{2}-{y}^{2}=5x-5y$ ${x}^{2}-{y}^{2}=5\left(x-y\right)$ Using the identity ${x}^{2}-{y}^{2}=\left(x+y\right)\left(x-y\right)$ we get, $\left(x+y\right)\left(x-y\right)=5\left(x-y\right)$ Dividing both sides by (x−y), which is allowed since $x\ne y$, we get x+y=5. Therefore, if ${x}^{2}-5x={y}^{2}-5y$ and $x\ne y$, then x+y is five. 2) False. The decimal expansion of pi is 3.1415926535897932384...; the decimal never repeats. Pi is an irrational number. Therefore, it cannot be expressed as a ratio of two integers, and it cannot be expressed as a terminating or repeating decimal. 3) True. Any irrational number divided by a nonzero integer is irrational. Proof (by contradiction): Let x be an irrational number and a a nonzero integer. If $\frac{x}{a}$ were a rational number, it could be expressed as a ratio of two integers, $\frac{x}{a}=\frac{m}{n}$, which implies $x=\frac{am}{n}$. Since a, m and n are integers, am is an integer, so x is a ratio of two integers, which implies x is rational. This contradicts the fact that x is an irrational number. Therefore, if an irrational number is divided by a nonzero integer, the result is an irrational number.
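Statement 1 is easy to spot-check with exact arithmetic; a small sketch (the test pairs are my own choices):

```python
from fractions import Fraction

def sum_if_condition_holds(x, y):
    """If x != y and x^2 - 5x == y^2 - 5y, return x + y."""
    assert x != y and x * x - 5 * x == y * y - 5 * y
    return x + y

print(sum_if_condition_holds(Fraction(2), Fraction(3)))        # 5
print(sum_if_condition_holds(Fraction(7, 2), Fraction(3, 2)))  # 5
```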
2022-06-25T08:33:38
{ "domain": "plainmath.net", "url": "https://plainmath.net/7902/numbers-expressed-repeating-decimal-irrational-divided-nonzero-irrational", "openwebmath_score": 0.9286690950393677, "openwebmath_perplexity": 376.5376849480266, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.986979508737192, "lm_q2_score": 0.6619228825191872, "lm_q1q2_score": 0.6533043214106934 }
https://math.stackexchange.com/questions/3193222/for-two-different-products-of-primes-with-rational-powers-are-the-real-number-r
# For two different products of primes with rational powers, are the real number representations always unique? So from the fundamental theorem of arithmetic we have that every $$\prod_{i=0}^{n}p_{i}^{e_{i}}$$, for some $$e_i \in \mathbb{N}$$ and prime numbers $$p_i$$, gives a unique number in $$\mathbb{N}$$, and the set of all numbers generated in this way equals $$\mathbb{N}$$. This can be generalised to give a unique representation of every positive element of $$\mathbb{Q}$$ if $$e_i \in \mathbb{Z}$$. My question is: can we generalise this further to get a unique representation of all elements of the set containing all $$x$$ such that $$x = \prod_{i=0}^{n}s_{i}^{e_i},\ e_i \in \mathbb{Q},\ s_i \in \mathbb{N}$$ using products of prime numbers with rational powers? Or is there some case in which $$\prod_{i=0}^{n}p_{i}^{e_{i}} = \prod_{i=0}^{m}q_{i}^{f_{i}}$$ for some $$e_i, f_i \in \mathbb{Q}$$ and primes $$p_i$$ and $$q_i$$ where not all $$e_k = f_k$$ or not all $$p_k = q_k$$? I suspect that this is the case but I cannot think of a way to prove it myself. On a side note, is the set I defined above the complete set of algebraic numbers, or are there elements of the algebraic numbers that do not appear in that set? I doubt the set is the full set of algebraic numbers, but it would be useful to know if they are equivalent, or if the set has a special name of some kind. • $i$ is algebraic and it is not in your set. – user657449 Apr 19 at 5:57 If $$N$$ is the common denominator of all $$e_i$$ and $$f_i$$, we find $$\prod p_i^{Ne_i}=\left(\prod p_i^{e_i}\right)^N=\left(\prod q_i^{f_i}\right)^N=\prod q_i^{Nf_i}$$ and hence, by the uniqueness of the representation with integer exponents, (up to permutation) $$q_i=p_i$$, $$e_i=f_i$$. In other words, the representation is still unique if we use rational exponents. 
At the same time we see that all representable numbers are of the form $$\sqrt[N]M$$ with $$N,M\in\Bbb N$$.
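The clearing-denominators argument suggests a concrete representation: store a number as a map from primes to rational exponents. A sketch (helper names are mine), showing that $\sqrt[3]{8}$ and $2$ get the same representation:

```python
from fractions import Fraction

def prime_exponents(n):
    """Prime factorisation of a positive integer as a {prime: exponent} dict."""
    out, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            out[p] = out.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        out[n] = out.get(n, 0) + 1
    return out

def nth_root_representation(m, n):
    """Exponent vector of m**(1/n), with Fraction-valued exponents."""
    return {p: Fraction(e, n) for p, e in prime_exponents(m).items()}

print(nth_root_representation(8, 3))   # {2: Fraction(1, 1)} -- same as the integer 2
print(nth_root_representation(12, 2))  # 2^1 * 3^(1/2), i.e. 2*sqrt(3)
```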
2019-07-15T18:31:31
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/3193222/for-two-different-products-of-primes-with-rational-powers-are-the-real-number-r", "openwebmath_score": 0.8632200956344604, "openwebmath_perplexity": 69.42633212952353, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.986979508737192, "lm_q2_score": 0.6619228825191872, "lm_q1q2_score": 0.6533043214106934 }
http://mathhelpforum.com/algebra/167110-polynomial.html
Let $a,b,c$ be distinct integers and $P$ be a polynomial with integer coefficients such that $P(a)=b, P(b)=c, P(c)=a$. How many polynomials are there?
2018-01-20T02:08:29
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/algebra/167110-polynomial.html", "openwebmath_score": 0.8438196778297424, "openwebmath_perplexity": 40.3275809248431, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795083542037, "lm_q2_score": 0.6619228825191872, "lm_q1q2_score": 0.6533043211571847 }
https://pqnelson.wordpress.com/2012/06/28/continuity-for-functions-of-several-variables-partial-derivatives/
## Continuity for Functions of Several Variables, Partial Derivatives 1. We considered differentiating and integrating functions of a single variable. How? We began with the notion of a limit, and then considered the derivative. If we have, e.g., a polynomial (1)$\displaystyle p(x,y) = x^{3}+x^{2}y+xy^{2}+y^{3}$ we see (2)$\displaystyle p(x+\Delta x,y) = x^{3}+x^{2}y+xy^{2}+y^{3}+\bigl(3x^{2}+2xy+y^{2}\bigr)\Delta x+ \mathcal{O}(\Delta x^{2})$ Again we stop and reflect: this treats ${y}$ as if it were constant. So the derivative formed by (3)$\displaystyle \lim_{\Delta x\rightarrow0}\frac{p(x+\Delta x,y)-p(x,y)}{\Delta x}=3x^{2}+2xy+y^{2}$ is “incomplete” or partial. There is some subtlety here due to using multiple variables, and we have to discuss the problem of limits first. 2. Definition. The function ${z=f(x,y)}$ is “Continuous” at ${(x_{0},y_{0})}$ if (i) ${f(x_{0},y_{0})}$ is defined and finite; (ii) ${\displaystyle\lim_{(x,y)\rightarrow(x_{0},y_{0})}f(x,y)}$ exists and is finite; (iii) ${\displaystyle\lim_{(x,y)\rightarrow(x_{0},y_{0})}f(x,y)=f(x_{0},y_{0})}$. Note: this can be determined by picking any curve ${\gamma\colon[0,1]\rightarrow{\mathbb R}^{2}}$ which satisfies (4)$\displaystyle \gamma(t_{0}) = (x_{0},y_{0})$ for some ${0\leq t_{0}\leq 1}$, then taking (5)$\displaystyle \lim_{t\rightarrow t_{0}}f\bigl(\gamma(t)\bigr) = \lim_{(x,y)\rightarrow(x_{0},y_{0})}f(x,y).$ The subtlety here lies with ${\gamma}$ being arbitrary. If two different curves produce two different results, the limit does not exist. Let’s consider some examples and non-examples. Example 1 (Limit Exists). Find (6)$\displaystyle \lim_{(x,y)\rightarrow(2,4)}\frac{y+4}{x^{2}y-xy+4x^{2}-4x}.$ Solution: for this, we can simply plug in the values (7)\displaystyle \begin{aligned} \lim_{(x,y)\rightarrow(2,4)}\frac{y+4}{x^{2}y-xy+4x^{2}-4x} &=\frac{(4)+4}{(2)^{2}(4)-(2)(4)+4(2^{2})-4(2)}\\ &=\frac{8}{16-8+16-8}=\frac{1}{2}. 
\end{aligned} This is because the function is sufficiently nice. Example 2 (Limit Doesn’t Exist). What is (8)$\displaystyle \lim_{(x,y)\rightarrow(0,0)}\frac{x^{4}}{x^{4}+y^{2}}=?$ Solution: Let’s first approach it along the ${x}$-axis, i.e. first setting ${y=0}$. We find (9)$\displaystyle \lim_{(x,y)\rightarrow(0,0)}\frac{x^{4}}{x^{4}+y^{2}}=\lim_{x\rightarrow0}\frac{x^{4}}{x^{4}}=1.$ Now let’s approach it along the ${y}$-axis, i.e. first setting ${x=0}$. We see (10)$\displaystyle \lim_{(x,y)\rightarrow(0,0)}\frac{x^{4}}{x^{4}+y^{2}}=\lim_{y\rightarrow0}\frac{0}{0+y^{2}}=0.$ Still, approaching along the curve ${y=x^{2}}$ we see (11)$\displaystyle \lim_{(x,y)\rightarrow(0,0)}\frac{x^{4}}{x^{4}+y^{2}}=\lim_{x\rightarrow0}\frac{x^{4}}{x^{4}+x^{4}}=\frac{1}{2}.$ But we have a problem: this would imply ${0=1/2=1}$. This cannot be! So the limit cannot exist! Very sad. 3. Definition. Let ${z=f(x,y)}$ be defined on a region ${R}$ in the ${xy}$-plane, and let ${(x_{0},y_{0})}$ be an interior point of ${R}$ (we just don’t want a boundary point!). If (12)$\displaystyle \lim_{\Delta x\rightarrow0}\frac{f(x_{0}+\Delta x,y_{0})-f(x_{0},y_{0})}{\Delta x}$ exists, then it is called the “Partial Derivative” of ${z=f(x,y)}$ at ${(x_{0},y_{0})}$ with respect to ${x}$. It is denoted (13)$\displaystyle \frac{\partial}{\partial x}f = \frac{\partial}{\partial x}z =f_{x} = \partial_{x}f = \partial_{x}z$ evaluated at ${(x_{0},y_{0})}$. NB: the subscripts in the ${\partial_{x}}$ indicate what variable we are taking the partial derivative of, i.e., it’s shorthand for ${\partial_{x}=\partial/\partial x}$. Under similar conditions, (14)$\displaystyle \lim_{\Delta y\rightarrow0}\frac{f(x_{0},y_{0}+\Delta y)-f(x_{0},y_{0})}{\Delta y}$ is the partial derivative of ${z=f(x,y)}$ with respect to ${y}$ at ${(x_{0},y_{0})}$. We denote this by (15)$\displaystyle \frac{\partial f}{\partial y}=\frac{\partial z}{\partial y}=\partial_{y}f = \partial_{y}z$ among a myriad of different conventions. 4. 
Higher order partial derivatives are done by taking one derivative at a time. So if (16)$\displaystyle z=\mathrm{e}^{xy}$ for example, we have (17)$\displaystyle \partial_{y}z=x\mathrm{e}^{xy}$ and taking its derivative again yields (18)\displaystyle \begin{aligned} \partial_{y}^{2}z &= \partial_{y}\left(x\mathrm{e}^{xy}\right)\\ &=x\partial_{y}(\mathrm{e}^{xy}) \end{aligned} Note we factor ${x}$ out in front of the partial derivative with respect to ${y}$ because ${x}$ is constant with respect to ${y}$. So we then obtain (19)$\displaystyle \partial_{y}^{2}z = x^{2}\mathrm{e}^{xy}.$ We take partial derivatives one at a time, from right to left: (20)$\displaystyle \partial_{x}\partial_{y}z = \partial_{x}\bigl(\partial_{y}z\bigr).$ Question: do partial derivatives commute? I.e., is ${\partial_{x}\partial_{y}=\partial_{y}\partial_{x}}$ always? Let’s first consider an example calculation before considering an answer. Example 3. Consider the function ${u=x^{2}-y^{2}}$. Find ${\partial_{x}^{2}u+\partial_{y}^{2}u}$. Solution: We find that (21)$\displaystyle \partial_{x}u = 2x\implies \partial_{x}^{2}u = 2.$ Similarly, we find (22)$\displaystyle \partial_{y}u=-2y\implies \partial_{y}^{2}u=-2.$ Thus we conclude (23)$\displaystyle \partial_{x}^{2}u+\partial_{y}^{2}u=2-2=0.$ 5. Do Partial Derivatives Commute? Answer: not always. The conditions are fairly weak: if ${\partial_{x}z}$, ${\partial_{y}z}$, ${\partial_{x}\partial_{y}z}$ and ${\partial_{y}\partial_{x}z}$ are continuous throughout their respective domains, then (24)$\displaystyle \partial_{x}\partial_{y}z = \partial_{y}\partial_{x}z.$ Exercise 1. Let ${u(x,t) = f(x+vt) + g(x-vt)}$ where ${v\not=0}$ is some constant. Prove (25)$\displaystyle \partial_{t}^{2}u(x,t)=v^{2}\partial_{x}^{2}u(x,t).$ Exercise 2. Let ${f(x,y)=\ln(x^{2}+y^{2})}$. What is ${\partial_{x}^{2}f(x,y)+\partial_{y}^{2}f(x,y)}$? Exercise 3. Consider ${g(x,y)=1/\sqrt{x^{2}+y^{2}}}$. What is ${\partial_{x}^{2}g(x,y)}$? What is ${\partial_{y}^{2}g(x,y)}$? 
Is ${\partial_{x}\partial_{y}g(x,y)=\partial_{y}\partial_{x}g(x,y)}$? Exercise 4. Let ${f(x,y)=3x^{2}+4y^{3}+x^{2}y^{3}+\sin(xy)}$. What is ${\partial_{x}f}$? What is ${\partial_{y}f}$? Exercise 5. Let ${z=\arctan(x^{2}\mathrm{e}^{2y})}$. What is ${\partial_{x}z}$? What is ${\partial_{y}z}$?
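The commutativity question can be probed numerically with central differences; a rough sketch (step size and test point chosen arbitrarily by me), checking the mixed partials of ${z=\mathrm{e}^{xy}}$ against the analytic value ${(1+xy)\mathrm{e}^{xy}}$:

```python
import math

def partial(f, i, pt, h=1e-5):
    """Central-difference approximation to the partial derivative of f
    with respect to its i-th argument at the point pt."""
    lo, hi = list(pt), list(pt)
    hi[i] += h
    lo[i] -= h
    return (f(*hi) - f(*lo)) / (2 * h)

f = lambda x, y: math.exp(x * y)
pt = (0.7, -0.3)

d_xy = partial(lambda x, y: partial(f, 1, (x, y)), 0, pt)  # d/dx of df/dy
d_yx = partial(lambda x, y: partial(f, 0, (x, y)), 1, pt)  # d/dy of df/dx
exact = (1 + pt[0] * pt[1]) * f(*pt)

print(abs(d_xy - d_yx) < 1e-4, abs(d_xy - exact) < 1e-4)  # True True
```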
2018-01-20T15:08:43
{ "domain": "wordpress.com", "url": "https://pqnelson.wordpress.com/2012/06/28/continuity-for-functions-of-several-variables-partial-derivatives/", "openwebmath_score": 0.9821020364761353, "openwebmath_perplexity": 566.9063239751408, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795083542036, "lm_q2_score": 0.6619228825191872, "lm_q1q2_score": 0.6533043211571846 }
https://oatcookies.neocities.org/mmult
I sometimes have trouble remembering mathematical matrix syntax and the order of matrix multiplication, and I'm sure others do too, so here's a quick summary.

Matrix size and indexing is written row-first: an m-by-n matrix has m rows and n columns. (This is the exact opposite of the x,y notation in computer graphics, where the column x is specified first.) A vector of k elements is represented either as a 1-by-k row matrix or a k-by-1 column matrix. An example 2-by-3 matrix:

$$\begin{bmatrix} -3 & 4 & -2 \\ -8 & 7 & 4 \end{bmatrix}$$

Matrix multiplication multiplies an m-by-n matrix with an n-by-p matrix, producing an m-by-p matrix: the second matrix ("matrix B") must have as many rows as the first matrix ("matrix A") has columns, and the result has the same number of rows that matrix A has and the number of columns that B has. (AB has A's rows and B's columns.)

Recall the dot product (or scalar product), that takes in two equal-length vectors and returns one number. If the two vectors (of length k) are v and w, then to get their dot product, we multiply corresponding entries of v and w and sum the resulting products together. Mathematically:

$$v \cdot w = \sum_{i=1}^{k} v_i w_i$$

Dot product example: v = (1, 8, −2, 5, −2) and w = (1, 6, 2, −8, −5). Product: 1 + 48 + (−4) + (−40) + 10 = 15.

To multiply two matrixes (A and B), we take single rows of A and single columns of B, then compute their dot product and put that in the intersection. Cell i,j of the product matrix AB is calculated with row i of A and column j of B:

$$(AB)_{ij} = A_{i1}B_{1j} + A_{i2}B_{2j} + \ldots + A_{in}B_{nj} = \sum_{k=1}^{n} A_{ik}B_{kj}$$

A couple of examples, including an illustration of when a column vector is multiplied by a row vector and not vice-versa:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11}+a_{12}b_{21} & a_{11}b_{12}+a_{12}b_{22} \\ a_{21}b_{11}+a_{22}b_{21} & a_{21}b_{12}+a_{22}b_{22} \end{bmatrix}$$

$$\begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix} \begin{bmatrix} w_1 \\ w_2 \\ w_3 \end{bmatrix} = v_1w_1 + v_2w_2 + v_3w_3$$

$$\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} \begin{bmatrix} w_1 & w_2 & w_3 \end{bmatrix} = \begin{bmatrix} v_1w_1 & v_1w_2 & v_1w_3 \\ v_2w_1 & v_2w_2 & v_2w_3 \\ v_3w_1 & v_3w_2 & v_3w_3 \end{bmatrix}$$

In linear algebra, a linear function (or linear map) of n variables is often expressed as Ax, with x being a column vector of n variables and A being an n×n matrix. 
How this works is quite clear when the multiplication is carried out, in this example for n=3:

$$\begin{bmatrix} 3 & 1 & 0 \\ 0 & -2 & 5 \\ 0 & 0 & 6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 3x_1 + 1x_2 + 0 \\ 0 - 2x_2 + 5x_3 \\ 0 + 0 + 6x_3 \end{bmatrix}$$

Geometric rotations are also done with matrix multiplication. For 2d points, rotated around the origin, the following matrix is used (with θ being the rotation angle); it is then multiplied with the column vector of an (x, y) point. The result is a new column vector, with the newly rotated x and y. This is so common that GPUs have in-built hardware for this; it's what they're made for.

$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

2D rotation only has one axis and one angle, but for 3D rotation there are three axes that a point can be rotated around:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix} \qquad \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix} \qquad \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

created 2022-11-18, edited 2022-11-18.
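The row-times-column rule and the rotation example can be written out directly; a small Python sketch without any libraries (function names are mine):

```python
import math

def matmul(A, B):
    """Multiply an m-by-n matrix A by an n-by-p matrix B (lists of rows)."""
    n = len(B)
    assert all(len(row) == n for row in A), "A's columns must equal B's rows"
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(len(A))]

def rotation2d(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

# Rotating the column vector (1, 0) by 90 degrees gives approximately (0, 1).
print(matmul(rotation2d(math.pi / 2), [[1], [0]]))
```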
2023-03-27T21:01:06
{ "domain": "neocities.org", "url": "https://oatcookies.neocities.org/mmult", "openwebmath_score": 0.7955682873725891, "openwebmath_perplexity": 1029.9912978424375, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795083542036, "lm_q2_score": 0.6619228825191872, "lm_q1q2_score": 0.6533043211571846 }
https://takagrow.com/notes/alive/mathforquantum1.html
# Basic Mathematics to Understand Caltech Quantum Computing Lecture Notes (1) This is a page for me collecting mathematical terms that often appear in quantum computing, especially to read these lecture notes smoothly without worrying much about mathematical terms. If you are interested in these fields, I would say this page will also help you. • Bra-ket notation • Hermitian conjugate • Hermitian matrix • Unitary matrix • Qubit ## Bra-ket notation (Notation) In quantum computing (mechanics), a vector $$\bm{u}$$ is expressed in this way $$|u \rang$$ where $$|u \rang$$ is equivalent to $$|u \rang = \bm{u} = \begin{bmatrix} u_1 \\ \vdots \\ u_n \\ \end{bmatrix}$$ This notation $$| \, \, \rang$$ is called ket notation. If we take the conjugate transpose of $$\bm{u}$$, written $$\bm{u}^\dagger$$, we can write $$\lang u| = \bm{u}^\dagger = [u_1^\ast \, \dots \, u_n^\ast]$$ and this $$\lang \, \,|$$ notation is called bra notation. Basically a ket denotes a column vector and a bra denotes a row vector of complex conjugates. We can naturally write down the inner product of these two vectors as \begin{aligned} \lang u| u \rang &= \bm{u}^\dagger \bm{u} = [u_1^\ast \, \dots \, u_n^\ast] \begin{bmatrix} u_1 \\ \vdots \\ u_n \\ \end{bmatrix} \\ \lang u| u \rang &= \sum_{i=1}^n u_{i}^\ast \, u_i \end{aligned} We can also multiply these two vectors the other way around: \begin{aligned} |u \rang \lang u| = \bm{u} \, \bm{u}^\dagger &= \begin{bmatrix} u_1 \\ \vdots \\ u_n \\ \end{bmatrix} [u_1^\ast \, \dots \, u_n^\ast] \\\\ &= \begin{bmatrix} u_1 u_1^\ast & \dots & u_1 u_n^\ast\\ \vdots &\ddots& \vdots\\ u_n u_1^\ast & \dots & u_n u_n^\ast\\ \end{bmatrix} \end{aligned} In quantum computing, this bra-ket notation is often used to express a computational basis. Computational means that we use a bit (0 or 1) to express information. To understand this, I will take an example in $$\R^3$$. We can choose $$\bm{e_{i (i=0, 1, 2)}}$$ as an orthonormal basis for $$\R^3$$. 
$$\bm{e_0} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} , \bm{e_1} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \end{bmatrix} , \bm{e_2} =\begin{bmatrix} 0 \\ 0 \\ 1 \\ \end{bmatrix}$$ In the bra-ket world, we write those vectors as $$| i \rang = \bm{e_i}$$ For $$i=1$$, $$| 1 \rang = \bm{e_1} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \end{bmatrix}$$ $$i$$ refers to the index where the 1 sits in the vector. Note that the norm of each vector in an orthonormal basis is always 1. ## Hermitian conjugate $$A^\dagger$$ (Operator) The matrix $$A^\dagger$$ is the transposed and conjugated version of the original matrix $$A$$. For instance, let the 2 by 2 matrix $$A$$ be $$A = \begin{bmatrix} i & 1 + 2i \\ -3 & 5 - 4i\\ \end{bmatrix}$$ then the daggered version $$A^\dagger$$ is equivalent to $$A^\dagger = (A^T)^\ast = (A^\ast)^T = \begin{bmatrix} -i & -3 \\ 1-2i & 5 + 4i\\ \end{bmatrix}$$ ## Hermitian matrix (Matrix) When $$A^\dagger = A$$ $$A$$ is called a Hermitian matrix. For example, let A be $$A = \begin{bmatrix} 1 & i \\ -i & 1\\ \end{bmatrix}$$ then the daggered $$A^\dagger$$ is also $$A^\dagger = \begin{bmatrix} 1 & i \\ -i & 1\\ \end{bmatrix}$$ ## Unitary matrix (Matrix) When a matrix $$A$$ has this property, it is called a unitary matrix. 
$$A \, A^\dagger = A^\dagger \, A = I$$ Here $$I$$ is the identity matrix $$I = \begin{bmatrix} 1 && \bf{0}\\ & \ddots & \\ \bf{0} && 1\\ \end{bmatrix}$$ For example, let $$U_1$$ be $$U_1 = \begin{bmatrix} \cos{\theta} & \sin{\theta}\\ -\sin{\theta} & \cos{\theta}\\ \end{bmatrix} \\$$ then, apparently this $$U_1$$ is a unitary matrix because \begin{aligned} U_1 U_1^\dagger &= \begin{bmatrix} \cos{\theta} & \sin{\theta}\\ -\sin{\theta} & \cos{\theta}\\ \end{bmatrix} \begin{bmatrix} \cos{\theta} & -\sin{\theta}\\ \sin{\theta} & \cos{\theta}\\ \end{bmatrix} \\\\ &= \begin{bmatrix} 1 &0 \\ 0 &1\\ \end{bmatrix} \\\\ &= I \end{aligned} Another example: let $$U_2$$ be $$a, b \in \Complex, ~~~ U_2 = \begin{bmatrix} a & b \\ -b^\ast e^{i\varphi} & a^\ast e^{i\varphi} \\ \end{bmatrix}, ~~~~ |a|^2 + |b|^2 = 1$$ then as you can see \begin{aligned} U_2^\dagger &= \begin{bmatrix} a^\ast & -b e^{-i\varphi} \\ b^\ast & a e^{-i\varphi} \\ \end{bmatrix}\\\\ U_2 U_2^\dagger &= \begin{bmatrix} a & b \\ -b^\ast e^{i\varphi} & a^\ast e^{i\varphi} \\ \end{bmatrix} \begin{bmatrix} a^\ast & -b e^{-i\varphi} \\ b^\ast & a e^{-i\varphi} \\ \end{bmatrix}\\\\ &= \begin{bmatrix} 1 &0 \\ 0 &1\\ \end{bmatrix} \\\\ &= I \end{aligned} This is also a unitary matrix. And for any unitary matrix $$U$$, $$|\det{U}| = 1$$ We can rewrite any unitary matrix $$U$$ using a Hermitian matrix $$H$$ as $$U = e^{iH}$$ As you know from the first linear algebra class, when $$Q$$ is an orthogonal matrix $$Q^T Q = Q Q^T = I \\ Q^T = Q^{-1} \\ |\det{Q}| = 1\\ \vdots$$ We can think of unitary matrices as expanded (complex) versions of orthogonal matrices. ## Qubit A qubit is represented as $$| \psi \rang = \alpha |0\rang + \beta |1\rang$$ where $$\alpha$$ and $$\beta$$ are complex numbers that satisfy $$|\alpha| ^2 + |\beta| ^2 = 1$$ $$|0\rang$$ and $$|1\rang$$ are the computational basis of $$\Complex^2$$. 
As you already know $$|0 \rang = \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix} , |1 \rang = \begin{bmatrix} 0 \\ 1 \\ \end{bmatrix}$$ the number inside the ket shows the location of the 1, as an index. ## References takakun6899 at gmail.com
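These definitions are easy to verify in plain Python, since `complex` numbers are built in; a quick sketch (helper names are mine) checking the Hermitian example and the rotation-style unitary $$U_1$$:

```python
import math

def dagger(A):
    """Conjugate transpose of a matrix given as a list of rows."""
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

H = [[1, 1j], [-1j, 1]]
print(dagger(H) == H)  # True: H is Hermitian

t = 0.4
U = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
P = matmul(U, dagger(U))
print(all(abs(P[i][j] - (1 if i == j else 0)) < 1e-12
          for i in range(2) for j in range(2)))  # True: U is unitary
```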
2022-08-19T15:20:21
{ "domain": "takagrow.com", "url": "https://takagrow.com/notes/alive/mathforquantum1.html", "openwebmath_score": 1.0000098943710327, "openwebmath_perplexity": 932.2983738851188, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795079712151, "lm_q2_score": 0.6619228825191872, "lm_q1q2_score": 0.6533043209036759 }
https://www.jobilize.com/algebra/section/solving-applied-problems-involving-ellipses-by-openstax?qcr=www.quizover.com
# 8.1 The ellipse  (Page 7/16) Page 7 / 16 ## Graphing an ellipse centered at ( h , k ) by first writing it in standard form Graph the ellipse given by the equation $\text{\hspace{0.17em}}4{x}^{2}+9{y}^{2}-40x+36y+100=0.\text{\hspace{0.17em}}$ Identify and label the center, vertices, co-vertices, and foci. We must begin by rewriting the equation in standard form. $4{x}^{2}+9{y}^{2}-40x+36y+100=0$ Group terms that contain the same variable, and move the constant to the opposite side of the equation. $\left(4{x}^{2}-40x\right)+\left(9{y}^{2}+36y\right)=-100$ Factor out the coefficients of the squared terms. $4\left({x}^{2}-10x\right)+9\left({y}^{2}+4y\right)=-100$ Complete the square twice. Remember to balance the equation by adding the same constants to each side. $4\left({x}^{2}-10x+25\right)+9\left({y}^{2}+4y+4\right)=-100+100+36$ Rewrite as perfect squares. $4{\left(x-5\right)}^{2}+9{\left(y+2\right)}^{2}=36$ Divide both sides by the constant term to place the equation in standard form. $\frac{{\left(x-5\right)}^{2}}{9}+\frac{{\left(y+2\right)}^{2}}{4}=1$ Now that the equation is in standard form, we can determine the position of the major axis. Because $\text{\hspace{0.17em}}9>4,\text{\hspace{0.17em}}$ the major axis is parallel to the x -axis. 
Therefore, the equation is in the form $\text{\hspace{0.17em}}\frac{{\left(x-h\right)}^{2}}{{a}^{2}}+\frac{{\left(y-k\right)}^{2}}{{b}^{2}}=1,\text{\hspace{0.17em}}$ where $\text{\hspace{0.17em}}{a}^{2}=9\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}{b}^{2}=4.\text{\hspace{0.17em}}$ It follows that: • the center of the ellipse is $\text{\hspace{0.17em}}\left(h,k\right)=\left(5,-2\right)$ • the coordinates of the vertices are $\text{\hspace{0.17em}}\left(h±a,k\right)=\left(5±\sqrt{9},-2\right)=\left(5±3,-2\right),\text{\hspace{0.17em}}$ or $\text{\hspace{0.17em}}\left(2,-2\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(8,-2\right)$ • the coordinates of the co-vertices are $\text{\hspace{0.17em}}\left(h,k±b\right)=\left(\text{5},-2±\sqrt{4}\right)=\left(\text{5},-2±2\right),\text{\hspace{0.17em}}$ or $\text{\hspace{0.17em}}\left(5,-4\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(5,\text{0}\right)$ • the coordinates of the foci are $\text{\hspace{0.17em}}\left(h±c,k\right),\text{\hspace{0.17em}}$ where $\text{\hspace{0.17em}}{c}^{2}={a}^{2}-{b}^{2}.\text{\hspace{0.17em}}$ Solving for $\text{\hspace{0.17em}}c,\text{\hspace{0.17em}}$ we have: $\begin{array}{l}c=±\sqrt{{a}^{2}-{b}^{2}}\hfill \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}=±\sqrt{9-4}\hfill \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}=±\sqrt{5}\hfill \end{array}$ Therefore, the coordinates of the foci are $\text{\hspace{0.17em}}\left(\text{5}-\sqrt{5},-2\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(\text{5+}\sqrt{5},-2\right).$ Next we plot and label the center, vertices, co-vertices, and foci, and draw a smooth curve to form the ellipse as shown in [link] . Express the equation of the ellipse given in standard form. Identify the center, vertices, co-vertices, and foci of the ellipse. 
$4{x}^{2}+{y}^{2}-24x+2y+21=0$ $\text{\hspace{0.17em}}\frac{{\left(x-3\right)}^{2}}{4}+\frac{{\left(y+1\right)}^{2}}{16}=1;\text{\hspace{0.17em}}$ center: $\text{\hspace{0.17em}}\left(3,-1\right);\text{\hspace{0.17em}}$ vertices: $\text{\hspace{0.17em}}\left(3,-\text{5}\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(3,\text{3}\right);\text{\hspace{0.17em}}$ co-vertices: $\text{\hspace{0.17em}}\left(1,-1\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(5,-1\right);\text{\hspace{0.17em}}$ foci: $\text{\hspace{0.17em}}\left(3,-\text{1}-2\sqrt{3}\right)\text{\hspace{0.17em}}$ and $\text{\hspace{0.17em}}\left(3,-\text{1+}2\sqrt{3}\right)$ ## Solving applied problems involving ellipses Many real-world situations can be represented by ellipses, including orbits of planets, satellites, moons and comets, and shapes of boat keels, rudders, and some airplane wings. A medical device called a lithotripter uses elliptical reflectors to break up kidney stones by generating sound waves. Some buildings, called whispering chambers, are designed with elliptical domes so that a person whispering at one focus can easily be heard by someone standing at the other focus. This occurs because of the acoustic properties of an ellipse. When a sound wave originates at one focus of a whispering chamber, the sound wave will be reflected off the elliptical dome and back to the other focus. See [link] . In the whisper chamber at the Museum of Science and Industry in Chicago, two people standing at the foci—about 43 feet apart—can hear each other whisper. ## Locating the foci of a whispering chamber The Statuary Hall in the Capitol Building in Washington, D.C. is a whispering chamber. Its dimensions are 46 feet wide by 96 feet long as shown in [link] . 1. What is the standard form of the equation of the ellipse representing the outline of the room? 
Hint: assume a horizontal ellipse, and let the center of the room be the point $\left(0,0\right).$

2. If two senators standing at the foci of this room can hear each other whisper, how far apart are the senators? Round to the nearest foot.

1. We are assuming a horizontal ellipse with center $\left(0,0\right),$ so we need to find an equation of the form $\frac{{x}^{2}}{{a}^{2}}+\frac{{y}^{2}}{{b}^{2}}=1,$ where $a>b.$ We know that the length of the major axis, $2a,$ is longer than the length of the minor axis, $2b.$ So the length of the room, 96, is represented by the major axis, and the width of the room, 46, is represented by the minor axis.

• Solving for $a,$ we have $2a=96,$ so $a=48,$ and ${a}^{2}=2304.$
• Solving for $b,$ we have $2b=46,$ so $b=23,$ and ${b}^{2}=529.$

Therefore, the equation of the ellipse is $\frac{{x}^{2}}{2304}+\frac{{y}^{2}}{529}=1.$

2. To find the distance between the senators, we must find the distance between the foci, $\left(±c,0\right),$ where ${c}^{2}={a}^{2}-{b}^{2}.$ Solving for $c,$ we have:

$c=±\sqrt{{a}^{2}-{b}^{2}}=±\sqrt{2304-529}=±\sqrt{1775}\approx ±42$

The points $\left(±42,0\right)$ represent the foci. Thus, the distance between the senators is $2\left(42\right)=84$ feet.
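The arithmetic above can be reproduced in a few lines; this is just a sketch of the computation (my addition), not part of the original example:

```python
# The room is 96 ft long and 46 ft wide, so a = 48 and b = 23 in
# x^2/a^2 + y^2/b^2 = 1.
a, b = 96 / 2, 46 / 2
a2, b2 = a**2, b**2
assert (a2, b2) == (2304.0, 529.0)

# foci at (+-c, 0) with c^2 = a^2 - b^2
c = (a2 - b2) ** 0.5
assert round(c) == 42  # to the nearest foot

print("the senators are about", round(2 * c), "feet apart")  # → 84
```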
Abdullahi find the value of 2x=32 divide by 2 on each side of the equal sign to solve for x corri X=16 Michael Want to review on complex number 1.What are complex number 2.How to solve complex number problems. Beyan use the y -intercept and slope to sketch the graph of the equation y=6x how do we prove the quadratic formular hello, if you have a question about Algebra 2. I may be able to help. I am an Algebra 2 Teacher thank you help me with how to prove the quadratic equation Seidu may God blessed u for that. Please I want u to help me in sets. Opoku what is math number 4 Trista x-2y+3z=-3 2x-y+z=7 -x+3y-z=6 Need help solving this problem (2/7)^-2 x+2y-z=7 Sidiki what is the coefficient of -4× -1 Shedrak the operation * is x * y =x + y/ 1+(x × y) show if the operation is commutative if x × y is not equal to -1 An investment account was opened with an initial deposit of \$9,600 and earns 7.4% interest, compounded continuously. How much will the account be worth after 15 years? lim x to infinity e^1-e^-1/log(1+x) given eccentricity and a point find the equiation
## Definition: Cosine of a Real Variable

Let $$x\in\mathbb R$$ be any real number and let $$z$$ be the complex number obtained from $$x$$ by multiplying it with the imaginary unit, i.e. $$z:=ix$$.

The cosine of $$x$$ is a function $$f:\mathbb R\to\mathbb R$$, which is defined as the real part of the complex exponential function $\cos(x):=\Re(\exp(ix)).$

Geometrically, the cosine is a projection of the complex number $$\exp(ix)$$, which is on the unit circle, to the real axis.

(Interactive figure omitted: the graph of $\cos(x)$ together with the projection of $\exp(ix)$ happening in the complex plane, with a draggable value of $x$.)

created: 2016-02-28 18:24:40 | modified: 2020-09-23 15:18:00 | by: bookofproofs | references: [581]
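The definition can be checked numerically. The sketch below (my addition) verifies that $\exp(ix)$ lies on the unit circle and that its real part equals $\cos(x)$:

```python
import cmath, math

for x in (-2.0, -0.5, 0.0, 1.0, math.pi / 3, 3.7):
    z = cmath.exp(1j * x)                       # a point on the unit circle
    assert abs(abs(z) - 1.0) < 1e-12            # |exp(ix)| = 1
    assert abs(z.real - math.cos(x)) < 1e-12    # Re(exp(ix)) = cos(x)
```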
This is one of the 100 recipes of the IPython Cookbook, the definitive guide to high-performance scientific computing and data science in Python.

# 12.3. Simulating an Ordinary Differential Equation with SciPy

1. Let's import NumPy, SciPy (integrate package), and matplotlib.

```python
import numpy as np
import scipy.integrate as spi
import matplotlib.pyplot as plt
%matplotlib inline
```

2. We define a few parameters appearing in our model.

```python
m = 1.    # particle's mass
k = 1.    # drag coefficient
g = 9.81  # gravity acceleration
```

3. We have two variables: x and y (two dimensions). We note $\mathbf{u}=(x,y)$. The ODE we are going to simulate is:

$$\ddot{\mathbf{u}} = -\frac{k}{m} \dot{\mathbf{u}} + \mathbf{g}$$

where $\mathbf{g}$ is the gravity acceleration vector. In order to simulate this second-order ODE with SciPy, we can convert it to a first-order ODE (another option would be to solve $\dot{\mathbf{u}}$ first before integrating the solution). To do this, we consider two 2D variables: $\mathbf{u}$ and $\dot{\mathbf{u}}$. We note $\mathbf{v} = (\mathbf{u}, \dot{\mathbf{u}})$. We can express $\dot{\mathbf{v}}$ as a function of $\mathbf{v}$. Now, we create the initial vector $\mathbf{v}_0$ at time $t=0$: it has four components.

```python
# The initial position is (0, 0).
v0 = np.zeros(4)
# The initial speed vector is oriented
# to the top right.
v0[2] = 4.
v0[3] = 10.
```

4. We need to create a Python function $f$ that takes the current vector $\mathbf{v}(t_0)$ and a time $t_0$ as argument (with optional parameters), and that returns the derivative $\dot{\mathbf{v}}(t_0)$.

```python
def f(v, t0, k):
    # v has four components: v=[u, u'].
    u, udot = v[:2], v[2:]
    # We compute the second derivative u'' of u.
    udotdot = -k/m * udot
    udotdot[1] -= g
    # We return v'=[u', u''].
    return np.r_[udot, udotdot]
```

5. Now, we simulate the system for different values of $k$. We use the SciPy function odeint, defined in the scipy.integrate package.
```python
plt.figure(figsize=(6,3));

# We want to evaluate the system on 30 linearly
# spaced times between t=0 and t=3.
t = np.linspace(0., 3., 30)

# We simulate the system for different values of k.
for k in np.linspace(0., 1., 5):
    # We simulate the system and evaluate $v$ on the
    # given times.
    v = spi.odeint(f, v0, t, args=(k,))
    # We plot the particle's trajectory.
    plt.plot(v[:,0], v[:,1], 'o-', mew=1, ms=8,
             mec='w', label='k={0:.1f}'.format(k));
plt.legend();
plt.xlim(0, 12);
plt.ylim(0, 6);
```

The outermost trajectory (blue) corresponds to drag-free motion (without air resistance). It is a parabola. In the other trajectories, we can observe the increasing effect of air resistance, parameterized with $k$.

You'll find all the explanations, figures, references, and much more in the book (to be released later this summer). IPython Cookbook, by Cyrille Rossant, Packt Publishing, 2014 (500 pages).
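As a quick cross-check of the model (my own sketch, not part of the recipe), the $k=0$ case can be compared against the closed-form projectile solution $x(t) = v_{x0}t$, $y(t) = v_{y0}t - \tfrac{1}{2}gt^2$, using a plain explicit Euler integration so no SciPy is needed:

```python
# Explicit Euler integration of the same first-order system (with m = 1),
# compared at t = 3 against the drag-free analytic solution.
g = 9.81
vx0, vy0 = 4.0, 10.0  # same initial speed as v0 above

def euler(k, t_end=3.0, steps=30_000):
    dt = t_end / steps
    x = y = 0.0
    vx, vy = vx0, vy0
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        vx += -k * vx * dt
        vy += (-k * vy - g) * dt
    return x, y

x, y = euler(k=0.0)
assert abs(x - vx0 * 3.0) < 1e-6                        # x(t) = vx0*t
assert abs(y - (vy0 * 3.0 - 0.5 * g * 3.0**2)) < 1e-2   # y(t) = vy0*t - g*t^2/2
```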
##### ASVAB AFQT For Dummies: Book + 8 Practice Tests Online, 3rd Edition

Simplifying (or reducing) fractions means to make the fraction as simple as possible. You're usually required to simplify fractions on the ASVAB AFQT math subtests before you can select the correct answer. For example, if you worked out a problem and the answer was 4/8, the correct answer choice on the math subtest would probably be 1/2, which is the simplest equivalent to 4/8.

Many methods of simplifying fractions are available. Following are two of the easiest methods; you can decide which is best for you.

## Method 1: Divide by the lowest prime numbers

Try dividing the numerator and denominator by the lowest prime numbers until you can't go any further. For example, simplify 24/108.

Both the numerator and denominator are even numbers, so they can be divided by the lowest prime number, which is 2:

24/108 = 12/54

The numerator and denominator are both still even numbers, so divide by 2 again:

12/54 = 6/27

This time the denominator is an odd number, so you know it isn't divisible by 2. Try the next highest prime number, which is 3:

6/27 = 2/9

Because no common prime numbers divide evenly into both 2 and 9, the fraction is fully simplified.

## Method 2: List prime factors

With this method, you simply list the prime factors of both the numerator and the denominator, and then see whether any cancel out (are the same). Once again, simplify 24/108.

The prime factors of 24 are 2 × 2 × 2 × 3.

The prime factors of 108 are 2 × 2 × 3 × 3 × 3.

You can now write the fraction as

(2 × 2 × 2 × 3) / (2 × 2 × 3 × 3 × 3)

Two of the 2s and one of the 3s cancel out, so you can remove them from both the numerator and the denominator. What's left is

2/9
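Method 1 translates directly into code. The sketch below is my own illustration of the method, not from the book:

```python
# Repeatedly divide numerator and denominator by small numbers until no
# common factor remains. Trying every integer is fine here: by the time a
# composite comes up, its prime factors have already been divided out.
def simplify(numerator, denominator):
    p = 2
    while p <= min(numerator, denominator):
        while numerator % p == 0 and denominator % p == 0:
            numerator //= p
            denominator //= p
        p += 1
    return numerator, denominator

assert simplify(24, 108) == (2, 9)
assert simplify(4, 8) == (1, 2)
```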
# Evaluate a sum using generating function and use Inclusion-Exclusion to prove identity I need to evaluate the following sum using generating functions, and then find some counting argument to prove the resulting identity using inclusion-exclusion principle. \begin{align} \sum_{k=0}^n(-1)^k{n \choose k}(n-k) \end{align} It looks like a binomial convolution so I tried using that to obtain an identity but I can't seem to get the right one. I have found that this sum evaluates to 0 for any $n$ ## 1 Answer Let $$a_n=\sum_{k=0}^n(-1)^k\binom{n}k(n-k)\;,$$ and let $$A(x)=\sum_{n\ge 0}a_n\frac{x^n}{n!}\;;$$ as you say, it appears that $A(x)$ is the binomial convolution of a couple of exponential generating functions. One of these is apparently $$f(x)=\sum_{n\ge 0}(-1)^n\frac{x^n}{n!}=\sum_{n\ge 0}\frac{(-x)^n}{n!}=e^{-x}\;,$$ and the other is $$g(x)=\sum_{n\ge 0}n\frac{x^n}{n!}=\sum_{n\ge 1}\frac{x^n}{(n-1)!}=x\sum_{n\ge 0}\frac{x^n}{n!}=xe^x\;.$$ Thus, $$A(x)=f(x)g(x)=(e^{-x})(xe^x)=x\;,$$ and it follows that $a_1=1$, and $a_n=0$ if $n\ne 1$. Using an Iverson bracket we can say that $$\sum_{k=0}^n(-1)^k\binom{n}k(n-k)=[n=1]\;.$$ To see this as an instance of the inclusion-exclusion principle it may be helpful to write out the first few terms of the sum: $$\binom{n}0n-\binom{n}1(n-1)+\binom{n}2(n-2)-+\ldots$$ It appears that we’re starting with a crude count of $n$; as a first improvement we’re subtracting $n-1$ for each of $n$ things, and the second correction is adding $n-2$ for each pair of things from a set of $n$. One way to get these terms is to let $H_n$ be the set of maps $h:[n]\to[n]$ such that • $h$ is the identity map, and • $h$ is a constant function. It’s not hard to see that $H_n=\varnothing$ if $n\ne 1$, and $H_n$ contains the unique map from $\{1\}$ to $\{1\}$ if $n=1$, so $|H_n|=a_n$ for all $n\ge 0$. The term $\binom{n}0n=n$ counts the constant functions from $[n]$ to $[n]$. 
The first condition requires that $h(k)=k$ for each $k\in[n]$; thus, for each $k\in[n]$ there are $n-1$ constant functions that have to be thrown away, and there are $\binom{n}1$ choices for $k$, so the first correction term is $-\binom{n}1(n-1)$. Now if $k,\ell\in[n]$ and $k\ne\ell$, any constant function that maps $[n]$ to one of the $n-2$ elements of $[n]\setminus\{k,\ell\}$ violates the first condition for both $k$ and $\ell$, so it’s been thrown away twice and needs to be added back in; there are $\binom{n}2$ pairs of distinct elements of $[n]$, so the second correction term is $\binom{n}2(n-2)$. And so on. I’ll leave it to you to clean this up and express it properly as an inclusion-exclusion argument in whatever form you’re most familiar with.
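The identity derived above is easy to check numerically; here is a small sketch (my addition):

```python
# Verify that sum_{k=0}^n (-1)^k C(n,k) (n-k) equals 1 when n = 1
# and 0 otherwise, matching the Iverson-bracket form [n = 1].
from math import comb

def a(n):
    return sum((-1) ** k * comb(n, k) * (n - k) for k in range(n + 1))

assert a(1) == 1
assert all(a(n) == 0 for n in (0, 2, 3, 4, 5, 10))
```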
Lemma 42.26.1 (Key formula). In the situation above the cycle $\sum (Z_ i \to X)_*\left( \text{ord}_{B_ i}(f_ i) \text{div}_{\mathcal{N}|_{Z_ i}}(t_ i|_{Z_ i}) - \text{ord}_{B_ i}(g_ i) \text{div}_{\mathcal{L}|_{Z_ i}}(s_ i|_{Z_ i}) \right)$ is equal to the cycle $\sum (Z_ i \to X)_*\text{div}(\partial _{B_ i}(f_ i, g_ i))$ Proof. First, let us examine what happens if we replace $s_ i$ by $us_ i$ for some unit $u$ in $B_ i$. Then $f_ i$ gets replaced by $u^{-1} f_ i$. Thus the first part of the first expression of the lemma is unchanged and in the second part we add $-\text{ord}_{B_ i}(g_ i)\text{div}(u|_{Z_ i})$ (where $u|_{Z_ i}$ is the image of $u$ in the residue field) by Divisors, Lemma 31.27.3 and in the second expression we add $\text{div}(\partial _{B_ i}(u^{-1}, g_ i))$ by bi-linearity of the tame symbol. These terms agree by property (6) of the tame symbol. Let $Z \subset X$ be an irreducible closed with $\dim _\delta (Z) = n - 2$. To show that the coefficients of $Z$ of the two cycles of the lemma is the same, we may do a replacement $s_ i \mapsto us_ i$ as in the previous paragraph. In exactly the same way one shows that we may do a replacement $t_ i \mapsto vt_ i$ for some unit $v$ of $B_ i$. Since we are proving the equality of cycles we may argue one coefficient at a time. Thus we choose an irreducible closed $Z \subset X$ with $\dim _\delta (Z) = n - 2$ and compare coefficients. Let $\xi \in Z$ be the generic point and set $A = \mathcal{O}_{X, \xi }$. This is a Noetherian local domain of dimension $2$. Choose generators $\sigma$ and $\tau$ for $\mathcal{L}_\xi$ and $\mathcal{N}_\xi$. After shrinking $X$, we may and do assume $\sigma$ and $\tau$ define trivializations of the invertible sheaves $\mathcal{L}$ and $\mathcal{N}$ over all of $X$. Because $Z_ i$ is locally finite after shrinking $X$ we may assume $Z \subset Z_ i$ for all $i \in I$ and that $I$ is finite. Then $\xi _ i$ corresponds to a prime $\mathfrak q_ i \subset A$ of height $1$. 
We may write $s_ i = a_ i \sigma$ and $t_ i = b_ i \tau$ for some $a_ i$ and $b_ i$ units in $A_{\mathfrak q_ i}$. By the remarks above, it suffices to prove the lemma when $a_ i = b_ i = 1$ for all $i$. Assume $a_ i = b_ i = 1$ for all $i$. Then the first expression of the lemma is zero, because we choose $\sigma$ and $\tau$ to be trivializing sections. Write $s = f\sigma$ and $t = g \tau$ with $f$ and $g$ in the fraction field of $A$. By the previous paragraph we have reduced to the case $f_ i = f$ and $g_ i = g$ for all $i$. Moreover, for a height $1$ prime $\mathfrak q$ of $A$ which is not in $\{ \mathfrak q_ i\}$ we have that both $f$ and $g$ are units in $A_\mathfrak q$ (by our choice of the family $\{ Z_ i\}$ in the discussion preceding the lemma). Thus the coefficient of $Z$ in the second expression of the lemma is $\sum \nolimits _ i \text{ord}_{A/\mathfrak q_ i}(\partial _{B_ i}(f, g))$ which is zero by the key Lemma 42.6.3. $\square$
Algebra on random variables

I have the feeling this should be doable, or at least have an approximation, but I'm failing to find one.

Let's consider a random variable $C$ that follows a truncated exponential distribution between 0 and 1. If we observe $n$ i.i.d. copies of $C$, what is the distribution of the harmonic mean of this set? Formally, what is the PDF of

$P_1 = \dfrac{n}{\sum_{i=1}^n \frac{1}{C_i}}$

I have the feeling that this question is getting close to that answer, but I'm not sure.

Now, the main reason why I started with this was to introduce the following complication. If each random variable $C_i$ has an associated weight $w_i$, and these weights come from another truncated exponential distribution, what is the weighted harmonic mean?

$P_2 = \dfrac{\sum_{i=1}^n w_i}{\sum_{i=1}^n \frac{w_i}{C_i}}$

I'm getting that $P_2$ and $P_1$ are equivalent? Is this correct?
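Not an answer, but the question is easy to explore by simulation. In the sketch below (my own, and the unit rate parameter of the truncated exponential is an assumption), samples are drawn by inverse-CDF sampling and the two means are compared on the same sample:

```python
# Truncated Exp(1) on (0, 1): F(x) = (1 - e^-x) / (1 - e^-1), inverted below.
import math, random

random.seed(0)

def trunc_exp():
    u = random.random()
    return -math.log(1 - u * (1 - math.exp(-1)))

def harmonic_mean(c):
    return len(c) / sum(1 / x for x in c)

def weighted_harmonic_mean(c, w):
    return sum(w) / sum(wi / ci for wi, ci in zip(w, c))

n = 5
c = [trunc_exp() for _ in range(n)]
w = [trunc_exp() for _ in range(n)]

# with equal weights the two means coincide exactly...
assert abs(weighted_harmonic_mean(c, [1.0] * n) - harmonic_mean(c)) < 1e-12
# ...but with random weights they generally differ for a given sample,
# so P2 is not the same statistic as P1 pointwise
assert weighted_harmonic_mean(c, w) != harmonic_mean(c)
```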
# Big-theta notation I was wondering about big-theta ($\Theta$) notation. A) Is $\Theta(n/2) \leq \Theta(n)$ for $n$ being an integer? I know that $n/2 = O(n)$, but does it also mean that $\Theta(n/2) \leq \Theta(n)$? B) If I add two theta terms, so let's assume, we have: $\Theta(n) + 2*\Theta(n/2) + \Theta(n/3)$. Is that all $\Theta(n)$ or is it $\Theta(n) + \Theta(n/2) + \Theta(n/3)$. Again, for big-oh notation I just take the max when I add them, and I don't know if the same applies to big-theta. Since $n = O(n/2)$ is also true, as $n \le 2 (n/2)$, you get $\Theta(n)= \Theta(n/2)$. For B, yes this is all just $\Theta(n)$. Recall the first part and note $\Theta(n)+\Theta(n) = \Theta(n)$.
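As a quick numeric illustration (my addition), the sum in part B is squeezed between constant multiples of $n$, which is exactly what $\Theta(n)$ asserts:

```python
# f(n) stands in for a cost of the shape Theta(n) + 2*Theta(n/2) + Theta(n/3);
# it is bounded between 1*n and 3*n for all n >= 1, so it is Theta(n).
def f(n):
    return n + 2 * (n / 2) + n / 3

for n in range(1, 10_000):
    assert 1 * n <= f(n) <= 3 * n
    # n/2 alone is also sandwiched by constant multiples of n
    assert 0.25 * n <= n / 2 <= 1 * n
```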
# Shamir’s Quest: Collect Any 3 Keys To Unlock The Secret!

This post is on something called Shamir’s Secret Sharing. It’s a technique where you can break a secret number up into $M$ different pieces, where if you have any $N$ of those $M$ pieces, you are able to figure out the secret.

Thinking of it in video game terms, imagine there are 10 keys hidden in a level, but you can escape the level whenever you find any 7 of them. This is what Shamir’s Secret Sharing enables you to set up cryptographically.

Interestingly in this case, the term sharing in “secret sharing” doesn’t mean sharing the secret with others. It means breaking the secret up into pieces, or SHARES. Secret sharing means that you make shares out of a secret, such that if you have enough of the shares, you can recover the secret.

## How Do You Share (Split) The Secret?

The basic idea of how it works is actually really simple. This is good for us trying to learn the technique, but also good for showing its security, since there are so few moving parts.

It relies on something called the Unisolvence Theorem, which is a fancy label meaning these things:

* If you have a linear equation, it takes two (x,y) points to uniquely identify that line. No matter how you write a linear equation, if it passes through those same two points, it’s mathematically equivalent.
* If you have a quadratic equation, it takes three (x,y) points to uniquely identify that quadratic curve. Again, no matter how you write a quadratic equation, if it passes through those same three points, it’s mathematically equivalent.
* The pattern continues for equations of any degree. Cubic equations require four points to be uniquely identified, quartic equations require five points, and so on.

At a high level, how this technique works is that the number of shares (keys) you want someone to collect ($N$) defines the degree of an equation.
You use random numbers as the coefficients of the powers of $x$ in that equation, but use your secret number as the constant term. You then create $M$ data points of the form $(x,y)$ aka $(x,f(x))$. Those are your shares. You then give individual shares to people, or go hide them in your dungeon or do whatever you are going to do with them.

As soon as any one person has $N$ of those $M$ shares (data points), they will be able to figure out the equation of the curve and thus get the secret. The secret number is the constant term of the polynomial, which is also just $f(0)$.

This image below from wikipedia is great for seeing how you may have two points of a quadratic curve, but without a third point you can’t be sure what the quadratic equation is. In fact, there are an infinite number of quadratic curves that pass through any two points! Because of that, it takes the full number of required shares for you to be able to unlock the secret.

## Example: Sharing (Splitting) The Secret

First you decide how many shares you want it to take to unlock the secret. This determines the degree of your equation.

Let’s say you wanted a person to have to have four shares to unlock the secret. This means our equation will be a cubic equation, since it takes four points to uniquely define a cubic equation.

Our equation is:

$f(x) = R_1x^3 + R_2x^2 + R_3x + S$

Where the $R_i$ values are random numbers, and $S$ is the secret value.

Let’s say that our secret value is 435, and that we picked some random numbers for the equation, making the below:

$f(x) = 28x^3 + 64x^2 + 9x + 435$

We now have a function that is uniquely identifiable by any 4 points of data on its curve.

Next we decide how many pieces we are going to create total. We need at least 4 so that it is in fact solvable. Let’s make 6 shares. To do this, you just plug in 6 different values of x and pair each x value with its y value.
Let’s do that:

$\begin{array}{c|c} x & f(x) \\ \hline 1 & 536 \\ 2 & 933 \\ 3 & 1794 \\ 4 & 3287 \\ 5 & 5580 \\ 6 & 8841 \\ \end{array}$

When doing this part, remember that the secret number is $f(0)$, so make sure and not share what the value of the function is when x is 0!

You could then distribute the shares (data pairs) as you saw fit. Maybe some people are more important, so you give them more than one share, requiring a smaller amount of cooperation with them to unlock the secret. Share distribution details are totally up to you, but we now have our shares, whereby if you have any 4 of the 6 total shares, you can unlock the secret.

## How Do You Join The Secret?

Once you have the right number of shares and you know the degree of the polynomial (pre-shared “public” information), unlocking the secret is a pretty straightforward process too.

To unlock the secret, you just need to use ANY method available for creating an equation of the correct degree from a set of data points. This can be one of several different interpolation techniques, but the most common one to use seems to be Lagrange interpolation, which is something I previously wrote up that you can read about here: Lagrange Interpolation.

Once you have the equation, you can either evaluate $f(0)$, or you can write the equation in polynomial form and the constant term will be the secret value.

## Example: Joining the Secret

Let’s say that we have these four shares and are ready to get the cubic function and then unlock the secret number:

$\begin{array}{c|c} x & y \\ \hline 1 & 536 \\ 2 & 933 \\ 4 & 3287 \\ 6 & 8841 \\ \end{array}$

We could bust out some Lagrange interpolation and figure this out, but let’s be lazy… err efficient I mean. Wolfram alpha can do this for us!

Wolfram Alpha: cubic fit (1, 536), (2, 933), (4, 3287), (6, 8841)

That gives us this equation, saying that it is a perfect fit (which it is!)
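If you'd rather not lean on Wolfram Alpha, the same recovery can be done with Lagrange interpolation evaluated at $x=0$. This is a quick sketch of mine in Python, using exact rational arithmetic:

```python
# Rebuild f(0) from the four shares above. For interpolation at x = 0,
# each share contributes y_j * prod_{m != j} (0 - x_m) / (x_j - x_m).
from fractions import Fraction

shares = [(1, 536), (2, 933), (4, 3287), (6, 8841)]

def f0(points):
    total = Fraction(0)
    for j, (xj, yj) in enumerate(points):
        term = Fraction(yj)
        for m, (xm, _) in enumerate(points):
            if m != j:
                term *= Fraction(0 - xm, xj - xm)
        total += term
    return total

assert f0(shares) == 435  # the secret value
```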
$28x^3 + 64x^2 + 9x + 435$ You can see that our constant term (and $f(0)$) is the correct secret value of 435. Daaaayummm Bru… that is lit AF! We just got hacked by wolfram alpha 😛 ## A Small Complication Unfortunately, the above has a weakness. The weakness is that each share you get gives you a little bit more information about the secret value. You can read more about this in the links section at the end if you want to know more details. Ideally, you wouldn’t have any information about the secret value until you had the full number of shares required to unlock the secret. To address this problem, we are going to choose some prime number $k$ and instead of shares being $(x,y)$ data points on the curve, they are going to be $(x,y \bmod k)$. In technical terms we are going to be using points on a finite field, or a Galois field. The value we choose for $k$ needs to be larger than any of the coefficients of our terms (the random numbers) as well as larger than our secret value and larger than the number of shares we want to create. The larger the better besides that, because a larger $k$ value means a larger “brute force” space to search. If you want to use this technique in a situation which has real needs for security, please make sure and read more on this technique from more authoritative sources. I’m glossing over the details of security quite a bit, and just trying to give an intuitive understanding of this technique (: ## Source Code Below is some sample source code that implements Shamir’s Secret Sharing in C++. I use 64 bit integers, but if you were going to be using this in a realistic situation you could very well overflow 64 bit ints and get the wrong answers. I hit this problem for instance when trying to require more than about 10 shares, using a prime of 257, and generating 50 shares. If you hit the limit of 64 bit ints you can use a multi precision math library instead to have virtually unlimited sized ints. 
The boost multiprecision header library is a decent choice for multi precision integers, specifically cpp_int.

```cpp
#include <stdio.h>
#include <array>
#include <vector>
#include <math.h>
#include <random>
#include <algorithm>
#include <functional>
#include <assert.h>
#include <stdint.h>
#include <inttypes.h>

typedef int64_t TINT;
typedef std::array<TINT, 2> TShare;
typedef std::vector<TShare> TShares;

class CShamirSecretSharing
{
public:

    CShamirSecretSharing (size_t sharesNeeded, TINT prime)
        : c_sharesNeeded(sharesNeeded), c_prime(prime)
    {
        // There needs to be at least 1 share needed
        assert(sharesNeeded > 0);
    }

    // Generate N shares for a secretNumber
    TShares GenerateShares (TINT secretNumber, TINT numShares) const
    {
        // calculate our curve coefficients
        std::vector<TINT> coefficients;
        {
            // store the secret number as the first coefficient
            coefficients.resize((size_t)c_sharesNeeded);
            coefficients[0] = secretNumber;

            // randomize the rest of the coefficients
            std::array<int, std::mt19937::state_size> seed_data;
            std::random_device r;
            std::generate_n(seed_data.data(), seed_data.size(), std::ref(r));
            std::seed_seq seq(std::begin(seed_data), std::end(seed_data));
            std::mt19937 gen(seq);
            std::uniform_int_distribution<TINT> dis(1, c_prime - 1);
            for (TINT i = 1; i < c_sharesNeeded; ++i)
                coefficients[(size_t)i] = dis(gen);
        }

        // generate the shares
        TShares shares;
        shares.resize((size_t)numShares);
        for (size_t i = 0; i < (size_t)numShares; ++i)
            shares[i] = GenerateShare(i + 1, coefficients);
        return shares;
    }

    // use lagrange polynomials to find f(0) of the curve, which is the secret number
    TINT JoinShares (const TShares& shares) const
    {
        // make sure there is at least the minimum number of shares
        assert(shares.size() >= size_t(c_sharesNeeded));

        // Sigma summation loop
        TINT sum = 0;
        for (TINT j = 0; j < c_sharesNeeded; ++j)
        {
            TINT y_j = shares[(size_t)j][1];

            TINT numerator = 1;
            TINT denominator = 1;

            // Pi product loop
            for (TINT m = 0; m < c_sharesNeeded; ++m)
            {
                if (m == j)
                    continue;

                numerator = (numerator * shares[(size_t)m][0]) % c_prime;
                denominator = (denominator * (shares[(size_t)m][0] - shares[(size_t)j][0])) % c_prime;
            }

            sum = (c_prime + sum + y_j * numerator * modInverse(denominator, c_prime)) % c_prime;
        }
        return sum;
    }

    const TINT GetPrime () const { return c_prime; }

    const TINT GetSharesNeeded () const { return c_sharesNeeded; }

private:

    // Generate a single share in the form of (x, f(x))
    TShare GenerateShare (TINT x, const std::vector<TINT>& coefficients) const
    {
        TINT xpow = x;
        TINT y = coefficients[0];
        for (TINT i = 1; i < c_sharesNeeded; ++i)
        {
            y += coefficients[(size_t)i] * xpow;
            xpow *= x;
        }
        return { x, y % c_prime };
    }

    // Gives the decomposition of the gcd of a and b.
    // Returns [x,y,z] such that x = gcd(a,b) and y*a + z*b = x
    static const std::array<TINT, 3> gcdD (TINT a, TINT b)
    {
        if (b == 0)
            return { a, 1, 0 };

        const TINT n = a / b;
        const TINT c = a % b;
        const std::array<TINT, 3> r = gcdD(b, c);
        return { r[0], r[2], r[1] - r[2] * n };
    }

    // Gives the multiplicative inverse of k mod prime.
    // In other words (k * modInverse(k)) % prime = 1 for all prime > k >= 1
    static TINT modInverse (TINT k, TINT prime)
    {
        k = k % prime;
        TINT r = (k < 0) ? -gcdD(prime, -k)[2] : gcdD(prime, k)[2];
        return (prime + r) % prime;
    }

private:

    // Publicly known information
    const TINT c_prime;
    const TINT c_sharesNeeded;
};

void WaitForEnter ()
{
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}

int main (int argc, char **argv)
{
    // Parameters
    const TINT c_secretNumber = 435;
    const TINT c_sharesNeeded = 7;
    const TINT c_sharesMade = 50;  // (this parameter was lost from the original listing; any value >= c_sharesNeeded works)
    const TINT c_prime = 439; // must be a prime number larger than the other three numbers above

    // set up a secret sharing object with the public information
    CShamirSecretSharing secretSharer(c_sharesNeeded, c_prime);

    // split a secret value into multiple shares
    TShares shares = secretSharer.GenerateShares(c_secretNumber, c_sharesMade);

    // shuffle the shares, so it's random which ones are used to join
    std::array<int, std::mt19937::state_size> seed_data;
    std::random_device r;
    std::generate_n(seed_data.data(), seed_data.size(), std::ref(r));
    std::seed_seq seq(std::begin(seed_data), std::end(seed_data));
    std::mt19937 gen(seq);
    std::shuffle(shares.begin(), shares.end(), gen);

    // join the shares
    TINT joinedSecret = secretSharer.JoinShares(shares);

    // show the public information and the secrets being joined
    printf("%" PRId64 " shares needed, %zu shares made\n", secretSharer.GetSharesNeeded(), shares.size());
    printf("Prime = %" PRId64 "\n\n", secretSharer.GetPrime());
    for (TINT i = 0, c = secretSharer.GetSharesNeeded(); i < c; ++i)
        printf("Share %" PRId64 " = (%" PRId64 ", %" PRId64 ")\n", i + 1, shares[i][0], shares[i][1]);

    // show the result
    printf("\nJoined Secret = %" PRId64 "\nActual Secret = %" PRId64 "\n\n", joinedSecret, c_secretNumber);
    assert(joinedSecret == c_secretNumber);
    WaitForEnter();
    return 0;
}
```

## Example Output

Here is some example output of the program:

## Links

- Wikipedia: Shamir's Secret Sharing (Note: for some reason the example javascript implementation here only worked for odd numbered keys required)
- Wikipedia: Finite Field
- Cryptography.wikia.com: Shamir's Secret Sharing
- Java Implementation of Shamir's Secret Sharing (Note: I don't think this implementation is correct, and neither is the one
that someone posted to correct them!)

When writing this post I wondered if maybe you could use the coefficients of the other terms as secrets as well. These two links talk about the details of that:

- Cryptography Stack Exchange: Why only one secret value with Shamir's secret sharing?
- Cryptography Stack Exchange: Coefficients in Shamir's Secret Sharing Scheme

Now that you understand this, you are probably ready to start reading up on elliptic curve cryptography. Give the link below a read if you are interested in a gentle introduction to that!

- A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography

# Turning a Truth Table Into A Digital Circuit (ANF)

In this post I'm going to show how you turn a truth table into a digital logic circuit that uses XOR and AND gates.

## My Usage Case

My specific usage case for this is in my investigations into homomorphic encryption, which as you may recall is able to perform computation on encrypted data. This lets encrypted data be operated on by an untrusted source, given back to you, and then you can decrypt your data to get a result. There are lots of use cases if this can ever get fast enough to become practical, such as doing cloud computing with private data.

However, when doing homomorphic encryption (at least currently, for the techniques I'm using), you only have XOR and AND logic operations. So, I'm using the information in this post to be able to turn a lookup table, or a specific boolean function, into a logic circuit that I can feed into a homomorphic encryption based digital circuit. Essentially I want to figure out how to do a homomorphic table lookup, to try and make circuits that are as simple as possible, which will in turn be as fast and lean as possible.
If you want to know more about homomorphic encryption, here's a post I wrote which explains a very simple algorithm: Super Simple Symmetric Leveled Homomorphic Encryption Implementation

## Algebraic Normal Form

Algebraic normal form (ANF) is a way of writing a boolean function using only XOR and AND. Since it's a normal form, two functions that do the same thing will have the same representation in ANF. There are other forms for writing boolean logic, but ANF suits me best for my homomorphic encryption circuit needs!

An example of boolean logic in ANF is the below:

$f(x_1, x_2, x_3, x_4) = x_1 x_2 \oplus x_1 x_3 \oplus x_1 x_4$

It is essentially a boolean polynomial, where AND is like multiplication, and XOR is like addition. It even factors the same way. In fact, ANF is not always the smallest circuit possible; you'd have to factor out common ANDs to find the smallest way you could represent the circuit, like the below:

$f(x_1, x_2, x_3, x_4) = x_1 (x_2 \oplus x_3 \oplus x_4)$

That smaller form does 1 AND and 2 XORs, versus the ANF which does 3 ANDs and 2 XORs. In homomorphic encryption, since AND is so much more costly than XOR, minimizing the ANDs is a very nice win, and worth the effort.

## Truth Tables and Lookup Tables

A truth table is just where you specify the inputs into a boolean function and the output of that boolean function for the given input:

$\begin{array}{c|c|c|c} x_1 & x_2 & x_3 & f(x_1, x_2, x_3) \\ \hline 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ \end{array}$

A lookup table is similar in functionality, except that it has multi bit output. When dealing with digital circuits, you can make a lookup table by making a truth table per output bit. For instance, the above truth table might just be the low bit of the lookup table below, which is just a truth table for addition of the input bits.
$\begin{array}{c|c|c|c} x_1 & x_2 & x_3 & f(x_1, x_2, x_3) \\ \hline 0 & 0 & 0 & 00 \\ 0 & 0 & 1 & 01 \\ 0 & 1 & 0 & 01 \\ 0 & 1 & 1 & 10 \\ 1 & 0 & 0 & 01 \\ 1 & 0 & 1 & 10 \\ 1 & 1 & 0 & 10 \\ 1 & 1 & 1 & 11 \\ \end{array}$

## Converting Truth Table to ANF

When I first saw the explanation for converting a truth table to ANF, it looked pretty complicated, but luckily it turns out to be pretty easy. The basic idea is that you make a term for each possible combination of the x inputs, AND each term with an unknown constant, and then solve for those constants. Let's use the truth table from the last section:

$\begin{array}{c|c|c|c} x_1 & x_2 & x_3 & f(x_1, x_2, x_3) \\ \hline 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ \end{array}$

For three inputs, the starting equation looks like this:

$f(x_1, x_2, x_3) = \\ a_0 \\ \oplus a_1 x_1 \oplus a_2 x_2 \oplus a_3 x_3 \\ \oplus a_{12} x_1 x_2 \oplus a_{13} x_1 x_3 \oplus a_{23} x_2 x_3 \\ \oplus a_{123} x_1 x_2 x_3$

Now we have to solve for the a values. To solve for $a_{123}$, we just look in the truth table for the function $f(x_1, x_2, x_3)$ to see whether there is an odd or even number of ones in the output of the function. If there is an even number, it is 0, else it is 1. Since we have an even number of ones, the value is 0, so our equation becomes this:

$f(x_1, x_2, x_3) = \\ a_0 \\ \oplus a_1 x_1 \oplus a_2 x_2 \oplus a_3 x_3 \\ \oplus a_{12} x_1 x_2 \oplus a_{13} x_1 x_3 \oplus a_{23} x_2 x_3 \\ \oplus 0 \land x_1 x_2 x_3$

Note that $\land$ is the symbol for AND. I'm showing it explicitly because otherwise the equation looks weird, and a multiplication symbol isn't correct.
Since 0 ANDed with anything else is 0, and also since n XOR 0 = n, that whole last term disappears, leaving us with this equation:

$f(x_1, x_2, x_3) = \\ a_0 \\ \oplus a_1 x_1 \oplus a_2 x_2 \oplus a_3 x_3 \\ \oplus a_{12} x_1 x_2 \oplus a_{13} x_1 x_3 \oplus a_{23} x_2 x_3$

Next up, to solve for $a_{12}$, we need to limit our truth table to $f(x_1, x_2, 0)$. That truth table is below, made from the original truth table, but throwing out any row where $x_{3}$ is 1.

$\begin{array}{c|c|c|c} x_1 & x_2 & x_3 & f(x_1, x_2, 0) \\ \hline 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 \\ \end{array}$

We again just look at whether there are an odd or even number of ones in the function output, and use that to set $a_{12}$ appropriately. In this case, there are an even number, so we set it to 0, which makes that term disappear again. Our function is now down to this:

$f(x_1, x_2, x_3) = \\ a_0 \\ \oplus a_1 x_1 \oplus a_2 x_2 \oplus a_3 x_3 \\ \oplus a_{13} x_1 x_3 \oplus a_{23} x_2 x_3$

If we look at $f(x_1,0,x_3)$, we find that it also has an even number of ones, making $a_{13}$ become 0 and making that term disappear. Looking at $f(0,x_2,x_3)$, it also has an even number of ones, making $a_{23}$ become 0 and making that term disappear as well. That leaves us with this equation:

$f(x_1, x_2, x_3) = \\ a_0 \\ \oplus a_1 x_1 \oplus a_2 x_2 \oplus a_3 x_3$

To solve for $a_1$, we look at the truth table for $f(x_1,0,0)$, which is below:

$\begin{array}{c|c|c|c} x_1 & x_2 & x_3 & f(x_1, 0, 0) \\ \hline 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \\ \end{array}$

There are an odd number of ones in the output, so $a_1$ becomes 1. Finally, we get to keep a term!
The equation is below:

$f(x_1, x_2, x_3) = \\ a_0 \\ \oplus 1 \land x_1 \oplus a_2 x_2 \oplus a_3 x_3$

Since 1 AND n = n, we can drop the explicit 1 to become this:

$f(x_1, x_2, x_3) = \\ a_0 \\ \oplus x_1 \oplus a_2 x_2 \oplus a_3 x_3$

If you do the same process for $a_2$ and $a_3$, you'll find that they also have odd numbers of ones in the output, so they also become ones. That puts our equation at:

$f(x_1, x_2, x_3) = \\ a_0 \\ \oplus x_1 \oplus x_2 \oplus x_3$

Solving for $a_0$ is just looking at whether there are an odd or even number of ones in the function $f(0,0,0)$, which you can look up directly in the lookup table. It's even, so $a_0$ becomes 0, which makes our full final equation into this:

$f(x_1, x_2, x_3) = x_1 \oplus x_2 \oplus x_3$

We are done! This truth table can be implemented with 2 XORs and 0 ANDs. A pretty efficient operation! You can see this is true if you work it out with the truth table. Try it out and see!

$\begin{array}{c|c|c|c} x_1 & x_2 & x_3 & f(x_1, x_2, x_3) \\ \hline 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ \end{array}$

## Sample Code

Here is some sample code that lets you define a lookup table by implementing an integer function, and it generates the ANF for each output bit of the truth table. It also verifies that the ANF gives the correct answer. It shows you how to use this to make various circuits: bit count, addition, multiplication, division and modulus.
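Before the full program, the coefficient rule used in the walkthrough above (a term's coefficient is the parity of the function's ones over the rows where every variable outside the term is zero) can be cross-checked with a few lines of Python. This is my own sketch, not part of the original sample code, and the names are mine:

```python
def anf_terms(truth_table, num_inputs):
    """Return the ANF terms of a single-bit truth table.

    Each term is a bitmask of which input variables it multiplies together;
    a term is present when f has odd parity over the inputs that only use
    that term's variables.
    """
    terms = []
    for term_mask in range(1 << num_inputs):
        # sum f over inputs whose set bits are a subset of the term's bits
        ones = sum(truth_table[i] for i in range(1 << num_inputs) if i & ~term_mask == 0)
        if ones % 2 == 1:
            terms.append(term_mask)
    return terms

# the worked example: parity of three inputs (x1 stored in the low bit of the index)
f = [0, 1, 1, 0, 1, 0, 0, 1]
print(anf_terms(f, 3))  # -> [1, 2, 4], i.e. the terms x1, x2, x3
```

This agrees with the hand-derived answer: the only surviving terms are the three single-variable ones.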
```cpp
#include <stdio.h>
#include <array>
#include <vector>
#include <algorithm>

#define PRINT_TRUTHTABLES() 0
#define PRINT_NUMOPS() 1
#define PRINT_ANF() 1

void WaitForEnter ()
{
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}

// Returns true if every input bit that is on is also on in the term mask.
// In other words: all variables not present in the term are zero for this input.
// (The name and the test below were reconstructed; they were missing from the extracted listing.)
template <size_t NUM_INPUT_BITS>
bool InputUsesOnlyTermBits (size_t inputValue, size_t termMask)
{
    for (size_t i = 0; i < NUM_INPUT_BITS; ++i)
    {
        const size_t bitMask = 1 << i;
        if ((inputValue & bitMask) != 0 && (termMask & bitMask) == 0)
            return false;
    }
    return true;
}

template <size_t NUM_INPUT_BITS>
bool ANFHasTerm (const std::array<size_t, 1 << NUM_INPUT_BITS>& lookupTable, size_t outputBitIndex, size_t termMask)
{
    const size_t c_inputValueCount = 1 << NUM_INPUT_BITS;

    // count the ones in the function output, restricted to the inputs where
    // every variable outside of the term is zero
    int onesCount = 0;
    for (size_t i = 0; i < c_inputValueCount; ++i)
    {
        if (InputUsesOnlyTermBits<NUM_INPUT_BITS>(i, termMask) && ((lookupTable[i] >> outputBitIndex) & 1))
            onesCount++;
    }

    // the term is present if there are an odd number of ones
    return (onesCount & 1) != 0;
}

template <size_t NUM_INPUT_BITS>
void MakeANFTruthTable (const std::array<size_t, 1 << NUM_INPUT_BITS>& lookupTable, std::array<size_t, 1 << NUM_INPUT_BITS>& reconstructedLookupTable, size_t outputBitIndex)
{
    const size_t c_inputValueCount = 1 << NUM_INPUT_BITS;
    printf("-----Output Bit %zu-----\r\n", outputBitIndex);

    // print truth table if we should
#if PRINT_TRUTHTABLES()
    for (size_t inputValue = 0; inputValue < c_inputValueCount; ++inputValue)
        printf("  [%zu] = %u\r\n", inputValue, ((lookupTable[inputValue] >> outputBitIndex) & 1) ? 1 : 0);
    printf("\r\n");
#endif

    // find each ANF term
    std::vector<size_t> terms;
    {
        for (size_t termMask = 0; termMask < c_inputValueCount; ++termMask)
        {
            if (ANFHasTerm<NUM_INPUT_BITS>(lookupTable, outputBitIndex, termMask))
                terms.push_back(termMask);
        }
    }

    // print function params
#if PRINT_ANF()
    printf("f(");
    for (size_t i = 0; i < NUM_INPUT_BITS; ++i)
    {
        if (i > 0)
            printf(",");
        printf("x%zu", i + 1);
    }
    printf(") = \r\n");
#endif

    // print ANF and count XORs and ANDs
    size_t numXor = 0;
    size_t numAnd = 0;
    if (terms.size() == 0)
    {
#if PRINT_ANF()
        printf("0\r\n");
#endif
    }
    else
    {
        for (size_t termIndex = 0, termCount = terms.size(); termIndex < termCount; ++termIndex)
        {
            if (termIndex > 0)
            {
#if PRINT_ANF()
                printf("XOR ");
#endif
                ++numXor;
            }

            size_t term = terms[termIndex];
            if (term == 0)
            {
#if PRINT_ANF()
                printf("1");
#endif
            }
            else
            {
                bool firstProduct = true;
                for (size_t bitIndex = 0; bitIndex < NUM_INPUT_BITS; ++bitIndex)
                {
                    const size_t bitMask = 1 << bitIndex;
                    if ((term & bitMask) != 0)
                    {
#if PRINT_ANF()
                        printf("x%zu ", bitIndex + 1);
#endif
                        if (firstProduct)
                            firstProduct = false;
                        else
                            ++numAnd;
                    }
                }
            }
#if PRINT_ANF()
            printf("\r\n");
#endif
        }
    }
#if PRINT_ANF()
    printf("\r\n");
#endif

#if PRINT_NUMOPS()
    printf("%zu XORs, %zu ANDs\r\n\r\n", numXor, numAnd);
#endif

    // reconstruct a bit of the reconstructedLookupTable for each entry to be able to verify correctness
    const size_t c_outputBitMask = 1 << outputBitIndex;
    for (size_t valueIndex = 0; valueIndex < c_inputValueCount; ++valueIndex)
    {
        bool xorSum = false;
        for (size_t termIndex = 0, termCount = terms.size(); termIndex < termCount; ++termIndex)
        {
            size_t term = terms[termIndex];
            if (term == 0)
            {
                xorSum = 1 ^ xorSum;
            }
            else
            {
                bool andProduct = true;
                for (size_t bitIndex = 0; bitIndex < NUM_INPUT_BITS; ++bitIndex)
                {
                    const size_t bitMask = 1 << bitIndex;
                    if ((term & bitMask) != 0)
                    {
                        if ((valueIndex & bitMask) == 0)
                            andProduct = false;
                    }
                }
                xorSum = andProduct ^ xorSum;
            }
        }
        if (xorSum)
            reconstructedLookupTable[valueIndex] |= c_outputBitMask;
    }
}

template <size_t NUM_INPUT_BITS, size_t NUM_OUTPUT_BITS, typename LAMBDA>
void MakeANFLookupTable (const LAMBDA& lambda)
{
    // make lookup table
    const size_t c_outputValueMask = (1 << NUM_OUTPUT_BITS) - 1;
    const size_t c_inputValueCount = 1 << NUM_INPUT_BITS;
    std::array<size_t, c_inputValueCount> lookupTable;
    for (size_t inputValue = 0; inputValue < c_inputValueCount; ++inputValue)
        lookupTable[inputValue] = lambda(inputValue, NUM_INPUT_BITS, NUM_OUTPUT_BITS) & c_outputValueMask;

    // make the anf for each truth table (each output bit of the lookup table)
    std::array<size_t, c_inputValueCount> reconstructedLookupTable;
    std::fill(reconstructedLookupTable.begin(), reconstructedLookupTable.end(), 0);
    for (size_t outputBitIndex = 0; outputBitIndex < NUM_OUTPUT_BITS; ++outputBitIndex)
        MakeANFTruthTable<NUM_INPUT_BITS>(lookupTable, reconstructedLookupTable, outputBitIndex);

    // verify that our anf expressions perfectly re-create the lookup table
    for (size_t inputValue = 0; inputValue < c_inputValueCount; ++inputValue)
    {
        if (lookupTable[inputValue] != reconstructedLookupTable[inputValue])
            printf("ERROR: expression / lookup mismatch for index %zu\r\n", inputValue);
    }
    printf("expression / lookup verification complete.\r\n\r\n");
}

size_t CountBits (size_t inputValue, size_t numInputBits, size_t numOutputBits)
{
    // Count how many bits there are
    int result = 0;
    while (inputValue)
    {
        if (inputValue & 1)
            result++;
        inputValue = inputValue >> 1;
    }
    return result;
}

size_t AddBits (size_t inputValue, size_t numInputBits, size_t numOutputBits)
{
    // break the input bits in half and add them
    const size_t bitsA = numInputBits / 2;
    const size_t mask = (1 << bitsA) - 1;
    size_t a = inputValue & mask;
    size_t b = inputValue >> bitsA;
    return a + b;
}

size_t MultiplyBits (size_t inputValue, size_t numInputBits, size_t numOutputBits)
{
    // break the input bits in half and multiply them
    const size_t bitsA = numInputBits / 2;
    const size_t mask = (1 << bitsA) - 1;
    size_t a = inputValue & mask;
    size_t b = inputValue >> bitsA;
    return a * b;
}

size_t DivideBits (size_t inputValue, size_t numInputBits, size_t numOutputBits)
{
    // break the input bits in half and divide them
    const size_t bitsA = numInputBits / 2;
    const size_t mask = (1 << bitsA) - 1;
    size_t a = inputValue & mask;
    size_t b = inputValue >> bitsA;

    // workaround for divide by zero
    if (b == 0)
        return 0;

    return a / b;
}

size_t ModulusBits (size_t inputValue, size_t numInputBits, size_t numOutputBits)
{
    // break the input bits in half and take the modulus
    const size_t bitsA = numInputBits / 2;
    const size_t mask = (1 << bitsA) - 1;
    size_t a = inputValue & mask;
    size_t b = inputValue >> bitsA;

    // workaround for divide by zero
    if (b == 0)
        return 0;

    return a % b;
}

int main (int argc, char **argv)
{
    //MakeANFLookupTable<3, 2>(CountBits);    // Output bits needs to be enough to store the number "input bits"
    //MakeANFLookupTable<4, 3>(AddBits);      // Output bits needs to be (InputBits / 2)+1
    //MakeANFLookupTable<4, 4>(MultiplyBits); // Output bits needs to be same as input bits
    //MakeANFLookupTable<4, 2>(DivideBits);   // Output bits needs to be half of input bits (rounded down)
    //MakeANFLookupTable<4, 2>(ModulusBits);  // Output bits needs to be half of input bits (rounded down)
    //MakeANFLookupTable<10, 5>(DivideBits);  // 5 bit vs 5 bit division is amazingly complex!
    MakeANFLookupTable<4, 2>(ModulusBits);    // Output bits needs to be half of input bits (rounded down)
    WaitForEnter();
    return 0;
}
```

## Sample Code Runs

Here is the program output for a "bit count" circuit. It counts the number of bits that are 1 in the 3 bit input, and outputs the answer as 2 bit output. Note that the bit 0 output is the same functionality as the example we worked through by hand, and you can see that it comes up with the same answer.

Here is the program output for an adder circuit. It adds two 2 bit numbers, and outputs a 3 bit output.

Here is the program output for a multiplication circuit. It multiplies two 2 bit numbers, and outputs a 4 bit number.

Here is the program output for a division circuit. It divides a 2 bit number by another 2 bit number and outputs a 2 bit number. When higher bit counts are involved, the division circuit gets super complicated, it's really crazy!
5 bit divided by 5 bit is several pages of output, for instance. Note that it returns 0 whenever it would divide by 0.

Lastly, here is the program output for a modulus circuit. It divides a 2 bit number by another 2 bit number and outputs the remainder as a 2 bit number.

While the above shows you how to turn a single bit truth table into ANF, extending this to a multi bit lookup table is super simple; you just do the same process for each output bit in the lookup table.

- Finding Boolean/Logical Expressions for truth tables in algebraic normal form (ANF)
- Finding Boolean/Logical Expressions for truth tables

# Game Development Needs Data Pipeline Middleware

In 15 years I've worked at 7 different game studios, ranging from small to large, working on many different kinds of projects in a variety of roles. At almost every studio, there was some way for the game to load data at runtime that controlled how it behaved – such as the damage a weapon would do or the cost of an item upgrade. The studios that didn't have this setup could definitely have benefited from having it. After all, this is how game designers do their job!

Sometimes though, this data was maintained via excel spreadsheets (export as csv for instance and have the game read that). That is nearly the worst case scenario for data management. Better is to have an editor which can edit that data, preferably able to edit data described by schemas, which the game also uses to generate code to load that data.

Each studio I've worked at that did have game data had its own solution for its data pipeline, and while they were of varying quality, I have yet to see something that is both fast and has most of the features you'd reasonably want or expect. We really need some middleware to tackle this "solved problem" and offer it to us at a reasonable price so we can stop dealing with it. Open sourced would be fine too.
Everyone from engineers to production to content people will be much happier and more productive!

# Required Features

Here are the features I believe are required to satisfy most folks:

1. Be able to define the structure of your data in some format (define a data schema).
2. Have an editor that is able to launch quickly, quickly load up data in the data schema, and offer a nice interface for editing the data as well as searching it.
3. This edited data should be in some format that merges well (for dealing with branching), and preferably is standardized so you can use common tools on the data – such as XSLT if storing data as xml. XML isn't commonly very mergeable, so I'm not sure of the solution there, other than perhaps a custom merge utility.
4. The "data solution" / project file should store your preferences about how you want the final data format to be: xml, json, binary, other? Checkboxes for compression and encryption, etc. Switching the data format should take moments.
5. There should be a cooking process that can be run from the data editor or via command line which transforms the edited data into whatever format the destination data should be in. AKA turn the human friendly XML into machine friendly binary files which you load in with a single read and then do pointer fixup on.
6. This pipeline should generate the code that loads and interacts with the data as described in the data schema. For instance you say "load my data" and it does all the decompression, decryption, parsing, etc, giving you back a root data structure which contains compile time defined, strongly typed structures. This is important because when you change the format of the data that the game uses, no game code actually has to know or care. Whatever it takes to load your data happens when you call the function.

# Bonus Points

Here are some bonus point features that would be great to have:

1. Handle live editing of data. When the game and editor are both open, and data is edited, have it change the data on the game side in real time, and perhaps allow a callback to be intercepted in case the game needs to clear out any cached values or anything. This helps iteration time by letting people make data changes without having to relaunch the game. It also needs to be able to connect to a game over tcp/ip, and handle endian correction as needed, as well as 32 vs 64 bit processes using the same data.
2. Handle the usual problems associated with DLC and versioning in an intelligent way. Many data systems that support DLC / patching / schema updates post ship have strange rules about what data you can and can't change. Often times if you get it wrong, you make a bug that isn't always obvious. If support for this was built in, and people didn't have to concern themselves with it, it'd be great.
3. On some development environments, data must be both forwards and backwards compatible. Handling that under the covers in an intelligent way would be awesome.
4. The editor should be extensible with custom types and plugins for visualizations of data, as well as interactive editing of data. This same code path could be used to integrate parts of the game engine with the editor, for instance (slippery slope to making the editor slow, however).
5. Being able to craft custom curves, and being able to query them simply and efficiently from the game side at runtime, would be awesome.
6. Support "cook time computations". The data the user works with isn't always set up the way that would be best for the machine. It'd be great to be able to do custom calculations and transformations at cook time. Also great for building acceleration data structures.
7. You should be able to run queries against the data, or custom scripts, to answer questions like "Is anyone using this feature?" and "I need to export data from our game into a format that this other program can read."
8. Being able to export data as C++ literal data structures, for people who want to embed (at least some of) their data in the exe to reduce complexity, loading times, etc.

It should also be as fast and lightweight as possible. It should allow games to specify memory and file i/o overrides.

Localized text is also a "solved problem" that needs an available solution. It could perhaps be rolled into this, or maybe it would make most sense for it to be separate.

As another example of how having something like this would be useful: on multiple occasions at previous studios, people have suggested we change the format of the data that the game uses at runtime – for instance, from json to a binary format. In each case this has come up so far, people have said it would take too long and it got backlogged (ie killed). With data pipeline middleware that works as I describe, you would click a few checkboxes, recook your data, and test it to see your runtime results. That's as hard as it SHOULD be, but in practice it's much harder, because everyone rolls their own and the cobbler never has time to fix his shoes (;

Anyone out there want to make this happen? (:
# In a Banach Algebra, can the product of two elements be invertible if we know one of the elements is non-invertible?

In a Banach algebra, can the product of two non-invertible elements be invertible? In linear algebra, we know that if the product of two matrices is invertible, then both matrices must be invertible. The proof is very simple if we use determinants. Now for a general infinite dimensional Banach algebra, there is no such thing as a determinant, so I was wondering: if we have two elements whose product is invertible, do we know that both elements are invertible? And what if the elements commute?

Consider the Hilbert space $l^2(\mathbb{N})$, the Banach algebra of all linear and continuous operators $T:l^2(\mathbb{N}) \to l^2(\mathbb{N})$, and the left and right shift operators, that is, $L(x)=(x_2,x_3,\dots)$ and $R(x)=(0,x_1,x_2,\dots)$ for each $x=(x_1,x_2,\dots) \in l^2(\mathbb{N})$. Then $L(R(x))=x$, hence $L\circ R= I$ is invertible. But $L$ is not injective and $R$ is not surjective.

If ${\cal A}$ is any Banach algebra with unit $e$, and $a,b \in {\cal A}$ with $ab=ba$ and $ab=:c$ invertible, then $ca=ac$ and $cb=bc$, hence $ac^{-1}=c^{-1}a$ and $bc^{-1}=c^{-1}b$, and therefore $e=a(bc^{-1})=(bc^{-1})a$, thus $a$ is invertible. In the same way, $b$ is invertible.

Take the space $X=\ell^2$ and let $A$ be the algebra of all bounded linear operators on $X$. Define $f,g\in A$ like this: $f(x_1,x_2,x_3,...)=(x_2,x_3,x_4,...)$ and $g(x_1,x_2,x_3,...)=(0,x_1,x_2,x_3,...)$. Both are clearly not invertible, as $f$ is not injective and $g$ is not surjective. However, note that $fg=id_X$, which is clearly invertible.

If the elements commute then they are indeed both invertible. Suppose that $x,y\in A$ commute and $xy$ is invertible. Then there is some $a\in A$ such that $xya=axy=1$. Since $xy=yx$ it follows that $x(ya)=(ay)x=1$. So $x$ has both a left and right inverse.
It is an easy exercise to show that both one-sided inverses must be equal in this case, and so $x$ is invertible.
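For completeness, here is that exercise written out (a standard argument, not from the original thread): suppose $u$ is a left inverse and $v$ a right inverse of $x$, so $ux = 1$ and $xv = 1$. Then by associativity

$$u = u \cdot 1 = u(xv) = (ux)v = 1 \cdot v = v,$$

so the two one-sided inverses coincide, and $x$ is invertible with $x^{-1} = u = v$.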
## 305 -1. Algebra and induction

1. Abstract algebra is the study of algebraic structures (groups, rings, fields, etc). Most of this study is done at a fairly general level, and its development marks a significant shift from how mathematics had been done before, where the emphasis was on particular structures (say, the set of reals, or triangles in the Euclidean plane) and their properties.

We follow the textbook in that our approach will be with the goal of solving the question of which polynomial equations (with integer coefficients) can be solved (by radicals). Concepts will be introduced with this goal in mind, and we will begin by looking at fields (mainly number fields) and move to groups, rather than the other way around, as is customary.

The study of equations has always been an integral part of algebra, and it certainly guided its origins. The book starts with a nice short history; here is the pdf on this subject I used in lecture. It ends with a short list of names of (mostly) mathematicians that intends to illustrate the historical development of the idea of mathematical induction. Following the book, we introduce induction in the form of the well-ordering principle.

2. Notation. ${\mathbb N}=\{0,1,2,\dots\}$ is the set of natural numbers. $P={\mathbb Z}^+={\mathbb N}^+$ is the set of positive integers. ${\mathbb Z}$ is the set of integers (or whole numbers).

The well-ordering principle. Every nonempty set of natural numbers has a least element.

This should be understood as an axiomatic property of the natural numbers. Our first application is in proving the following result (what one usually calls the division algorithm).

Theorem. Given integers $m,n$ with $n>0$, there are integers $q,r$ such that $m=nq+r$ and $0\le r<n$.

Proof. Consider the set $A$ of natural numbers that can be written in the form $m-nq$ for some integer $q$. (So, as $q$ varies, the numbers $m-nq$ belong to $A$ or not depending on whether they are nonnegative.)
This set is nonempty (this can be proved by analyzing two cases, depending on whether $m\ge0$ or not; in each case, we can explicitly exhibit an element of $A$). By the well-ordering principle, $A$ has a least element. Call it $r$. Since $r\in A$, there is some integer $q$ such that $m=nq+r$; fix such $q$ and $r$ in what follows.

It remains to show that $0\le r<n$. That $0\le r$ follows from the fact that $r\in A$, and all the elements of $A$ are nonnegative. That $r<n$ is proved by contradiction: Otherwise, $0\le r-n$, and $r=m-nq$, so $r-n=m-n(q+1)=m-nq'$ where $q'=q+1$. It follows that $r-n\in A$, but this is impossible, since $r-n<r$, and $r$ is the least element of $A$. This is a contradiction. It follows that it is not the case that $r\ge n$, so $r<n$, as wanted. $\mathsf{QED}$
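The proof above is constructive: it finds the least nonnegative value of $m - nq$. As a quick illustration (mine, not from the original post), here is a small Python sketch that computes $q$ and $r$ exactly that way, stepping $m - nq$ toward its least nonnegative value:

```python
def divide(m, n):
    """Return (q, r) with m == n*q + r and 0 <= r < n, for n > 0."""
    assert n > 0
    q, r = 0, m
    # step toward the least nonnegative value of m - n*q, as in the proof
    while r < 0:
        q -= 1
        r += n
    while r >= n:
        q += 1
        r -= n
    return q, r

print(divide(17, 5))   # (3, 2)
print(divide(-17, 5))  # (-4, 3)
```

Note in particular the negative case: $-17 = 5\cdot(-4) + 3$, so the remainder is still in $[0, n)$.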
https://www.pymc.io/projects/examples/en/latest/gaussian_processes/log-gaussian-cox-process.html
# Modeling spatial point patterns with a marked log-Gaussian Cox process

## Introduction

The log-Gaussian Cox process (LGCP) is a probabilistic model of point patterns typically observed in space or time. It has two main components. First, an underlying intensity field $\lambda(s)$ of positive real values is modeled over the entire domain $X$ using an exponentially transformed Gaussian process, which constrains $\lambda$ to be positive. Then, this intensity field is used to parameterize a Poisson point process which represents a stochastic mechanism for placing points in space. Some phenomena amenable to this representation include the incidence of cancer cases across a county, or the spatiotemporal locations of crime events in a city. Both spatial and temporal dimensions can be handled equivalently within this framework, though this tutorial only addresses data in two spatial dimensions.

In more formal terms, if we have a space $X$ and $A\subseteq X$, the distribution over the number of points $Y_A$ occurring within subset $A$ is given by

$$Y_A \sim \text{Poisson}\left(\int_A \lambda(s)\, ds\right)$$

and the intensity field is defined as

$$\log \lambda(s) \sim GP(\mu(s), K(s,s'))$$

where $GP(\mu(s), K(s,s'))$ denotes a Gaussian process with mean function $\mu(s)$ and covariance kernel $K(s,s')$ for a location $s \in X$. This is one of the simplest models of point patterns of $n$ events recorded as locations $s_1,\dots,s_n$ in an arbitrary metric space. In conjunction with a Bayesian analysis, this model can be used to answer questions of interest such as:

- Does an observed point pattern imply a statistically significant shift in spatial intensities?
- What would randomly sampled patterns with the same statistical properties look like?
- Is there a statistical correlation between the frequency and magnitude of point events?
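Before the full model, it may help to see the generative mechanism in its simplest form. The sketch below (an illustrative toy, not part of the tutorial) simulates a Poisson point process on the unit square with a constant intensity, i.e. an LGCP whose Gaussian-process term is identically zero: the total count is Poisson with mean $\lambda_0 \cdot |A|$, and given the count the locations are uniform.

```python
import numpy as np

rng = np.random.default_rng(42)

# Constant intensity over the unit square (hypothetical value)
lambda_0 = 100.0
area = 1.0

# Step 1: draw the number of points, N ~ Poisson(lambda_0 * |A|)
n_points = rng.poisson(lambda_0 * area)

# Step 2: given N, scatter the points uniformly over the domain
points = rng.uniform(0.0, 1.0, size=(n_points, 2))
```

The full LGCP replaces the constant `lambda_0` with a spatially varying field drawn from an exponentiated Gaussian process.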
In this notebook, we’ll use a grid-based approximation to the full LGCP with PyMC to fit a model and analyze its posterior summaries. We will also explore the usage of a marked Poisson process, an extension of this model to account for the distribution of marks associated with each data point.

## Data

Our observational data concerns 231 sea anemones whose sizes and locations on the French coast were recorded. This data was taken from the spatstat spatial modeling package in R, which is designed to address models like the LGCP and its subsequent refinements. The original source of this data is the textbook Spatial data analysis by example by Upton and Fingleton (1985), and a longer description of the data can be found there.

```python
import warnings
from itertools import product

import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc as pm
from matplotlib import MatplotlibDeprecationWarning
from numpy.random import default_rng

warnings.filterwarnings(action="ignore", category=MatplotlibDeprecationWarning)

%config InlineBackend.figure_format = 'retina'
az.style.use("arviz-darkgrid")

data = pd.read_csv(pm.get_data("anemones.csv"))
n = data.shape[0]
```

This dataset has coordinates and discrete mark values for each anemone. While these marks are integers, for the sake of simplicity we will model these values as continuous in a later step.

```python
data.head(3)
```

|   | x   | y  | marks |
|---|-----|----|-------|
| 0 | 27  | 7  | 6     |
| 1 | 197 | 5  | 4     |
| 2 | 74  | 15 | 4     |

Let’s take a look at this data in 2D space:

```python
plt.scatter(data["x"], data["y"], c=data["marks"])
plt.colorbar(label="Anemone size")
plt.axis("equal");
```

The ‘marks’ column indicates the size of each anemone. If we were to model both the marks and the spatial distribution of points, we would be modeling a marked Poisson point process. Extending the basic point pattern model to include this feature is the second portion of this notebook.
While there are multiple ways to conduct inference, perhaps the simplest way is to slice up our domain $X$ into many small pieces $A_1, A_2,\dots,A_M$ and fix the intensity field to be constant within each subset. Then, we will treat the number of points within each $A_j$ as a Poisson random variable such that $Y_j \sim \text{Poisson}(\lambda_j)$, and we also consider $\log{\lambda_1},\dots,\log{\lambda_M}$ as a single draw from a Gaussian process.

The code below splits up the domain into grid cells, counts the number of points within each cell and also identifies its centroid.

```python
xy = data[["x", "y"]].values

# Jitter the data slightly so that none of the points fall exactly
# on cell boundaries
eps = 1e-3
rng = default_rng()
xy = xy.astype("float") + rng.standard_normal(xy.shape) * eps

resolution = 20

# Rescaling the unit of area so that our parameter estimates are
# on a manageable scale
area_per_cell = resolution**2 / 100

cells_x = int(280 / resolution)
cells_y = int(180 / resolution)

# Creating bin edges for a 2D histogram
quadrat_x = np.linspace(0, 280, cells_x + 1)
quadrat_y = np.linspace(0, 180, cells_y + 1)

# Identifying the midpoints of each grid cell (reconstructed step,
# inferred from how `centroids` is used later in the notebook)
centroids = np.asarray(list(product(quadrat_x[:-1] + 10, quadrat_y[:-1] + 10)))

# Counting the points per cell (reconstructed step, inferred from how
# `cell_counts` is used below)
cell_counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], [quadrat_x, quadrat_y])
cell_counts = cell_counts.ravel().astype(int)
```

With the points split into different cells and the cell centroids computed, we can plot our new gridded dataset as shown below.

```python
line_kwargs = {"color": "k", "linewidth": 1, "alpha": 0.5}

plt.figure(figsize=(6, 4.5))
[plt.axhline(y, **line_kwargs) for y in quadrat_y]
[plt.axvline(x, **line_kwargs) for x in quadrat_x]
plt.scatter(data["x"], data["y"], c=data["marks"], s=6)

for i, row in enumerate(centroids):
    shifted_row = row - 2
    plt.annotate(cell_counts[i], shifted_row, alpha=0.75)

plt.title("Anemone counts per grid cell"), plt.colorbar(label="Anemone size");
```

We can see that all of the counts are fairly low and range from zero to five. With all of our data prepared, we can go ahead and start writing out our probabilistic model in PyMC.
We are going to treat each of the per-cell counts $Y_1,\dots,Y_M$ above as a Poisson random variable.

## Inference

Our first step is to place prior distributions over the high-level parameters for the Gaussian process. This includes the length scale $\rho$ for the covariance function and a constant mean $\mu$ for the GP.

```python
with pm.Model() as lgcp_model:
    mu = pm.Normal("mu", sigma=3)
    rho = pm.Uniform("rho", lower=25, upper=300)
    variance = pm.InverseGamma("variance", alpha=1, beta=1)
    cov_func = variance * pm.gp.cov.Matern52(2, ls=rho)
    mean_func = pm.gp.mean.Constant(mu)
```

Next, we transform the Gaussian process into a positive-valued process via `pm.math.exp` and use the area per cell to transform the intensity function $\lambda(s)$ into rates $\lambda_i$ parameterizing the Poisson likelihood for the counts within cell $i$.

```python
with lgcp_model:
    gp = pm.gp.Latent(mean_func=mean_func, cov_func=cov_func)

    log_intensity = gp.prior("log_intensity", X=centroids)
    intensity = pm.math.exp(log_intensity)

    rates = intensity * area_per_cell
    counts = pm.Poisson("counts", mu=rates, observed=cell_counts)
```

With the model fully specified, we can start sampling from the posterior using the default NUTS sampler. I’ll also tweak the target acceptance rate to reduce the number of divergences.

```python
with lgcp_model:
    trace = pm.sample(1000, tune=2000, target_accept=0.95)
```

```
Auto-assigning NUTS sampler...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [mu, rho, variance, log_intensity_rotated_]
Sampling 4 chains for 2_000 tune and 1_000 draw iterations (8_000 + 4_000 draws total) took 8125 seconds.
There were 2 divergences after tuning. Increase `target_accept` or reparameterize.
There was 1 divergence after tuning. Increase `target_accept` or reparameterize.
```

## Interpreting the results

Posterior inference on the length-scale parameter is useful for understanding whether or not there are long-range correlations in the data.
We can also examine the mean of the log-intensity field, but since it is on the log scale it is hard to directly interpret.

```python
az.summary(trace, var_names=["mu", "rho"])
```

|     | mean    | sd     | hdi_3%  | hdi_97% | mcse_mean | mcse_sd | ess_bulk | ess_tail | r_hat |
|-----|---------|--------|---------|---------|-----------|---------|----------|----------|-------|
| mu  | -0.929  | 0.642  | -2.186  | 0.293   | 0.012     | 0.009   | 2825.0   | 2025.0   | 1.0   |
| rho | 243.612 | 44.685 | 161.203 | 299.971 | 0.658     | 0.470   | 4083.0   | 2785.0   | 1.0   |

We are also interested in looking at the value of the intensity field at a large number of new points in space. We can accommodate this within our model by including a new random variable for the latent Gaussian process evaluated at a denser set of points. Using `sample_posterior_predictive`, we generate posterior predictions on new data points contained in the variable `intensity_new`.

```python
x_new = np.linspace(5, 275, 20)
y_new = np.linspace(5, 175, 20)
xs, ys = np.asarray(np.meshgrid(x_new, y_new))
xy_new = np.asarray([xs.ravel(), ys.ravel()]).T

with lgcp_model:
    intensity_new = gp.conditional("log_intensity_new", Xnew=xy_new)

    spp_trace = pm.sample_posterior_predictive(
        trace, var_names=["log_intensity_new"], keep_size=True
    )

trace.extend(spp_trace)
intensity_samples = np.exp(trace.posterior_predictive["log_intensity_new"])
```

Let’s take a look at a few realizations of $\lambda(s)$. Since the samples are on the log scale, we’ll need to exponentiate them to obtain the spatial intensity field of our 2D Poisson process. In the plot below, the observed point pattern is overlaid.
```python
fig, axes = plt.subplots(2, 3, figsize=(8, 5), constrained_layout=True)
axes = axes.ravel()

field_kwargs = {"marker": "o", "edgecolor": "None", "alpha": 0.5, "s": 80}

for i in range(6):
    field_handle = axes[i].scatter(
        xy_new[:, 0], xy_new[:, 1], c=intensity_samples.sel(chain=0, draw=i), **field_kwargs
    )

    obs_handle = axes[i].scatter(data["x"], data["y"], s=10, color="k")
    axes[i].axis("off")
    axes[i].set_title(f"Sample {i}")

plt.figlegend(
    (obs_handle, field_handle),
    ("Observed data", r"Posterior draws of $\lambda(s)$"),
    ncol=2,
    loc=(0.2, -0.01),
    fontsize=14,
    frameon=False,
);
```

While there is some heterogeneity in the patterns these surfaces show, we obtain a posterior mean surface with very clearly defined spatial structure, with higher intensity in the upper right and lower intensity in the lower left.

```python
fig = plt.figure(figsize=(5, 4))

plt.scatter(
    xy_new[:, 0],
    xy_new[:, 1],
    c=intensity_samples.mean(("chain", "draw")),
    marker="o",
    alpha=0.75,
    s=100,
    edgecolor=None,
)

plt.title("$E[\\lambda(s) \\vert Y]$")
plt.colorbar(label="Posterior mean");
```

The spatial variation in our estimates of the intensity field may not be very meaningful if there is a lot of uncertainty. We can make a similar plot of the posterior variance (or standard deviation) in this case:

```python
fig = plt.figure(figsize=(5, 4))

plt.scatter(
    xy_new[:, 0],
    xy_new[:, 1],
    c=intensity_samples.var(("chain", "draw")),
    marker="o",
    alpha=0.75,
    s=100,
    edgecolor=None,
)
plt.title("$Var[\\lambda(s) \\vert Y]$"), plt.colorbar();
```

The posterior variance is lowest in the middle of the domain and largest in the corners and edges. This makes sense: in locations where there is more data, we have more accurate estimates for what the values of the intensity field may be.

## Authors

- This notebook was written by Christopher Krapu on September 6, 2020 and updated on April 1, 2021.
- Updated by Chris Fonnesbeck on May 31, 2022 for v4 compatibility.
## Watermark

```python
%load_ext watermark
%watermark -n -u -v -iv -w
```

```
Last updated: Wed Jun 01 2022

Python implementation: CPython
Python version       : 3.9.10
IPython version      : 8.1.1

matplotlib: 3.5.1
arviz     : 0.12.0
numpy     : 1.22.2
pymc      : 4.0.0b6
pandas    : 1.4.1
sys       : 3.9.10 | packaged by conda-forge | (main, Feb  1 2022, 21:24:11) [GCC 9.4.0]

Watermark: 2.3.0
```

All the notebooks in this example gallery are provided under the MIT License, which allows modification and redistribution for any use provided the copyright and license notices are preserved.

## Citing PyMC examples

To cite this notebook, use the DOI provided by Zenodo for the pymc-examples repository.

Important: Many notebooks are adapted from other sources: blogs, books, etc. In such cases you should cite the original source as well. Also remember to cite the relevant libraries used by your code.

Here is a citation template in bibtex:

```
@incollection{citekey,
  author    = "<notebook authors, see above>",
  title     = "<notebook title>",
  editor    = "PyMC Team",
  booktitle = "PyMC examples",
  doi       = "10.5281/zenodo.5654871"
}
```

which once rendered could look like:

- Christopher Krapu, Chris Fonnesbeck. "Modeling spatial point patterns with a marked log-Gaussian Cox process". In: PyMC Examples. Ed. by PyMC Team. DOI: 10.5281/zenodo.5654871
https://handwiki.org/wiki/Homogeneous_function
# Homogeneous function

Short description: Function with a multiplicative scaling behaviour

In mathematics, a homogeneous function is a function of several variables such that, if all its arguments are multiplied by a scalar, then its value is multiplied by some power of this scalar, called the degree of homogeneity, or simply the degree; that is, if k is an integer, a function f of n variables is homogeneous of degree k if $\displaystyle{ f(sx_1,\ldots, sx_n)=s^k f(x_1,\ldots, x_n) }$ for every $\displaystyle{ x_1, \ldots, x_n, }$ and $\displaystyle{ s\ne 0. }$ For example, a homogeneous polynomial of degree k defines a homogeneous function of degree k.

The above definition extends to functions whose domain and codomain are vector spaces over a field F: a function $\displaystyle{ f : V \to W }$ between two F-vector spaces is homogeneous of degree $\displaystyle{ k }$ if

$\displaystyle{ f(s \mathbf{v}) = s^k f(\mathbf{v}) }$ (1)

for all nonzero $\displaystyle{ s \in F }$ and $\displaystyle{ v \in V. }$ This definition is often further generalized to functions whose domain is not V, but a cone in V, that is, a subset C of V such that $\displaystyle{ \mathbf{v}\in C }$ implies $\displaystyle{ s\mathbf{v}\in C }$ for every nonzero scalar s.

In the case of functions of several real variables and real vector spaces, a slightly more general form of homogeneity, called positive homogeneity, is often considered, by requiring only that the above identities hold for $\displaystyle{ s \gt 0, }$ and allowing any real number k as a degree of homogeneity. Every homogeneous real function is positively homogeneous. The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point. A norm over a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is the absolute value of real numbers.
The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition of projective schemes.

## Definitions

The concept of a homogeneous function was originally introduced for functions of several real variables. With the definition of vector spaces at the end of the 19th century, the concept was naturally extended to functions between vector spaces, since a tuple of variable values can be considered as a coordinate vector. It is this more general point of view that is described in this article.

There are two commonly used definitions. The general one works for vector spaces over arbitrary fields, and is restricted to degrees of homogeneity that are integers. The second one supposes working over the field of real numbers, or, more generally, over an ordered field. This definition restricts the scaling factor that occurs in the definition to positive values, and is therefore called positive homogeneity, the qualifier "positive" often being omitted when there is no risk of confusion. Positive homogeneity leads one to consider more functions as homogeneous. For example, the absolute value and all norms are positively homogeneous functions that are not homogeneous. The restriction of the scaling factor to positive real values also allows considering homogeneous functions whose degree of homogeneity is any real number.

### General homogeneity

Let V and W be two vector spaces over a field F. A linear cone in V is a subset C of V such that $\displaystyle{ sx\in C }$ for all $\displaystyle{ x\in C }$ and all nonzero $\displaystyle{ s\in F. }$

A homogeneous function f from V to W is a partial function from V to W that has a linear cone C as its domain, and satisfies $\displaystyle{ f(sx) = s^kf(x) }$ for some integer k, every $\displaystyle{ x\in C, }$ and every nonzero $\displaystyle{ s\in F. }$ The integer k is called the degree of homogeneity, or simply the degree of f.
A typical example of a homogeneous function of degree k is the function defined by a homogeneous polynomial of degree k. The rational function defined by the quotient of two homogeneous polynomials is a homogeneous function; its degree is the difference of the degrees of the numerator and the denominator; its cone of definition is the linear cone of the points where the value of the denominator is not zero.

Homogeneous functions play a fundamental role in projective geometry, since any homogeneous function f from V to W defines a well-defined function between the projectivizations of V and W. The homogeneous rational functions of degree zero (those defined by the quotient of two homogeneous polynomials of the same degree) play an essential role in the Proj construction of projective schemes.

### Positive homogeneity

When working over the real numbers, or more generally over an ordered field, it is commonly convenient to consider positive homogeneity, the definition being exactly the same as that in the preceding section, with "nonzero s" replaced by "s > 0" in the definitions of a linear cone and a homogeneous function.

This change allows considering (positively) homogeneous functions with any real number as their degrees, since exponentiation with a positive real base is well defined.

Even in the case of integer degrees, there are many useful functions that are positively homogeneous without being homogeneous. This is, in particular, the case of the absolute value function and norms, which are all positively homogeneous of degree 1. They are not homogeneous since $\displaystyle{ |-x|=|x|\neq -|x| }$ if $\displaystyle{ x\neq 0. }$ This remains true in the complex case, since the field of the complex numbers $\displaystyle{ \C }$ and every complex vector space can be considered as real vector spaces.
Euler's homogeneous function theorem is a characterization of positively homogeneous differentiable functions, which may be considered as the fundamental theorem on homogeneous functions.

## Examples

A homogeneous function is not necessarily continuous, as shown by this example: the function $\displaystyle{ f }$ defined by $\displaystyle{ f(x,y) = x }$ if $\displaystyle{ xy \gt 0 }$ and $\displaystyle{ f(x, y) = 0 }$ if $\displaystyle{ xy \leq 0 }$ is homogeneous of degree 1, that is, $\displaystyle{ f(s x, s y) = s f(x,y) }$ for any real numbers $\displaystyle{ s, x, y. }$ It is discontinuous at $\displaystyle{ y = 0, x \neq 0. }$

### Simple example

The function $\displaystyle{ f(x, y) = x^2 + y^2 }$ is homogeneous of degree 2: $\displaystyle{ f(tx, ty) = (tx)^2 + (ty)^2 = t^2 \left(x^2 + y^2\right) = t^2 f(x, y). }$

### Absolute value and norms

The absolute value of a real number is a positively homogeneous function of degree 1 which is not homogeneous, since $\displaystyle{ |sx|=s|x| }$ if $\displaystyle{ s\gt 0, }$ and $\displaystyle{ |sx|=-s|x| }$ if $\displaystyle{ s\lt 0. }$

The absolute value of a complex number is a positively homogeneous function of degree $\displaystyle{ 1 }$ over the real numbers (that is, when considering the complex numbers as a vector space over the real numbers). It is not homogeneous, over the real numbers as well as over the complex numbers.

More generally, every norm and seminorm is a positively homogeneous function of degree 1 which is not a homogeneous function. As for the absolute value, if the norm or seminorm is defined on a vector space over the complex numbers, this vector space has to be considered as a vector space over the real numbers in order to apply the definition of a positively homogeneous function.
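These degree and sign distinctions are easy to check numerically. A small Python sanity check (illustrative only, not from the article): $f(x,y)=x^2+y^2$ scales by $s^2$ for any scalar $s$, while the Euclidean norm scales by $|s|$, so the norm is positively homogeneous of degree 1 but not homogeneous.

```python
import math

def f(x, y):
    # homogeneous of degree 2: f(sx, sy) = s^2 f(x, y) for every s
    return x**2 + y**2

def norm(x, y):
    # positively homogeneous of degree 1: norm(sx, sy) = |s| norm(x, y)
    return math.hypot(x, y)

s, x, y = -3.0, 2.0, 5.0
assert f(s * x, s * y) == s**2 * f(x, y)
assert math.isclose(norm(s * x, s * y), abs(s) * norm(x, y))
assert norm(s * x, s * y) != s * norm(x, y)  # fails the s < 0 identity
```

Taking a negative $s$ is the whole point here: with $s > 0$ all three identities would hold and the distinction would be invisible.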
### Linear functions

Any linear map $\displaystyle{ f : V \to W }$ between vector spaces over a field F is homogeneous of degree 1, by the definition of linearity: $\displaystyle{ f(\alpha \mathbf{v}) = \alpha f(\mathbf{v}) }$ for all $\displaystyle{ \alpha \in {F} }$ and $\displaystyle{ v \in V. }$

Similarly, any multilinear function $\displaystyle{ f : V_1 \times V_2 \times \cdots V_n \to W }$ is homogeneous of degree $\displaystyle{ n, }$ by the definition of multilinearity: $\displaystyle{ f\left(\alpha \mathbf{v}_1, \ldots, \alpha \mathbf{v}_n\right) = \alpha^n f(\mathbf{v}_1, \ldots, \mathbf{v}_n) }$ for all $\displaystyle{ \alpha \in {F} }$ and $\displaystyle{ v_1 \in V_1, v_2 \in V_2, \ldots, v_n \in V_n. }$

### Homogeneous polynomials

Monomials in $\displaystyle{ n }$ variables define homogeneous functions $\displaystyle{ f : \mathbb{F}^n \to \mathbb{F}. }$ For example, $\displaystyle{ f(x, y, z) = x^5 y^2 z^3 }$ is homogeneous of degree 10 since $\displaystyle{ f(\alpha x, \alpha y, \alpha z) = (\alpha x)^5(\alpha y)^2(\alpha z)^3 = \alpha^{10} x^5 y^2 z^3 = \alpha^{10} f(x, y, z). }$ The degree is the sum of the exponents on the variables; in this example, $\displaystyle{ 10 = 5 + 2 + 3. }$

A homogeneous polynomial is a polynomial made up of a sum of monomials of the same degree. For example, $\displaystyle{ x^5 + 2x^3 y^2 + 9xy^4 }$ is a homogeneous polynomial of degree 5. Homogeneous polynomials also define homogeneous functions.

Given a homogeneous polynomial of degree $\displaystyle{ k }$ with real coefficients that takes only positive values, one gets a positively homogeneous function of degree $\displaystyle{ k/d }$ by raising it to the power $\displaystyle{ 1 / d }$ (for any positive integer d). So for example, the following function is positively homogeneous of degree 1 but not homogeneous: $\displaystyle{ \left(x^2 + y^2 + z^2\right)^\frac{1}{2}. }$

### Min/max

For every set of weights $\displaystyle{ w_1,\dots,w_n, }$ the following functions are positively homogeneous of degree 1, but not homogeneous:

• $\displaystyle{ \min\left(\frac{x_1}{w_1}, \dots, \frac{x_n}{w_n}\right) }$ (Leontief utilities)
• $\displaystyle{ \max\left(\frac{x_1}{w_1}, \dots, \frac{x_n}{w_n}\right) }$

### Rational functions

Rational functions formed as the ratio of two homogeneous polynomials are homogeneous functions in their domain, that is, off of the linear cone formed by the zeros of the denominator. Thus, if $\displaystyle{ f }$ is homogeneous of degree $\displaystyle{ m }$ and $\displaystyle{ g }$ is homogeneous of degree $\displaystyle{ n, }$ then $\displaystyle{ f / g }$ is homogeneous of degree $\displaystyle{ m - n }$ away from the zeros of $\displaystyle{ g. }$

### Non-examples

The homogeneous real functions of a single variable have the form $\displaystyle{ x\mapsto cx^k }$ for some constant c. So, the affine function $\displaystyle{ x\mapsto x+5, }$ the natural logarithm $\displaystyle{ x\mapsto \ln(x), }$ and the exponential function $\displaystyle{ x\mapsto e^x }$ are not homogeneous.

## Euler's theorem

Roughly speaking, Euler's homogeneous function theorem asserts that the positively homogeneous functions of a given degree are exactly the solutions of a specific partial differential equation. More precisely:

Euler's homogeneous function theorem — If f is a (partial) function of n real variables that is positively homogeneous of degree k, and continuously differentiable in some open subset of $\displaystyle{ \R^n, }$ then it satisfies in this open set the partial differential equation $\displaystyle{ k\,f(x_1, \ldots,x_n)=\sum_{i=1}^n x_i\frac{\partial f}{\partial x_i}(x_1, \ldots,x_n).
}$ Conversely, every maximal continuously differentiable solution of this partial differential equation is a positively homogeneous function of degree k (here, maximal means that the solution cannot be prolongated to a function with a larger domain).

Proof: To keep the formulas simple, we set $\displaystyle{ \mathbf x=(x_1, \ldots, x_n). }$ The first part results from using the chain rule to differentiate both sides of the equation $\displaystyle{ f(s\mathbf x ) = s^k f(\mathbf x) }$ with respect to $\displaystyle{ s, }$ and taking the limit of the result when s tends to 1.

The converse is proved by integrating a simple differential equation. Let $\displaystyle{ \mathbf{x} }$ be in the interior of the domain of f. For s sufficiently close to 1, the function $\displaystyle{ g(s) = f(s \mathbf{x}) }$ is well defined. The partial differential equation implies that $\displaystyle{ sg'(s)= k f(s \mathbf{x})=k g(s). }$ The solutions of this linear differential equation have the form $\displaystyle{ g(s)=g(1)s^k. }$ Therefore, $\displaystyle{ f(s \mathbf{x}) = g(s) = s^k g(1) = s^k f(\mathbf{x}), }$ if s is sufficiently close to 1. If this solution of the partial differential equation were not defined for all positive s, then the functional equation would allow the solution to be prolongated, and the partial differential equation implies that this prolongation is unique. So, the domain of a maximal solution of the partial differential equation is a linear cone, and the solution is positively homogeneous of degree k. $\displaystyle{ \square }$

As a consequence, if $\displaystyle{ f : \R^n \to \R }$ is continuously differentiable and homogeneous of degree $\displaystyle{ k, }$ its first-order partial derivatives $\displaystyle{ \partial f/\partial x_i }$ are homogeneous of degree $\displaystyle{ k - 1. }$ This results from Euler's theorem by differentiating the partial differential equation with respect to one variable.
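Euler's identity $k\, f(\mathbf x) = \sum_i x_i\, \partial f/\partial x_i(\mathbf x)$ can be verified numerically with finite differences. The sketch below (illustrative; the polynomial, evaluation point, and step size are my own choices) checks it for a homogeneous polynomial of degree 7:

```python
def f(x, y):
    # homogeneous polynomial of degree 7: each monomial has total degree 7
    return x**5 * y**2 + 3 * x**3 * y**4

def partial(g, x, y, i, h=1e-6):
    # central-difference approximation of the i-th partial derivative
    if i == 0:
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0, k = 1.3, 0.7, 7
lhs = k * f(x0, y0)
rhs = x0 * partial(f, x0, y0, 0) + y0 * partial(f, x0, y0, 1)
assert abs(lhs - rhs) < 1e-6 * abs(lhs)  # the two sides agree numerically
```

The same check with a non-homogeneous function, say $x + 5$, would fail for every choice of $k$, matching the non-examples above.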
In the case of a function of a single real variable ($\displaystyle{ n = 1 }$), the theorem implies that a continuously differentiable and positively homogeneous function of degree k has the form $\displaystyle{ f(x)=c_+ x^k }$ for $\displaystyle{ x\gt 0 }$ and $\displaystyle{ f(x)=c_- x^k }$ for $\displaystyle{ x\lt 0. }$ The constants $\displaystyle{ c_+ }$ and $\displaystyle{ c_- }$ are not necessarily the same, as is the case for the absolute value.

## Application to differential equations

The substitution $\displaystyle{ v = y / x }$ converts the ordinary differential equation $\displaystyle{ I(x, y)\frac{\mathrm{d}y}{\mathrm{d}x} + J(x,y) = 0, }$ where $\displaystyle{ I }$ and $\displaystyle{ J }$ are homogeneous functions of the same degree, into the separable differential equation $\displaystyle{ x \frac{\mathrm{d}v}{\mathrm{d}x} = - \frac{J(1,v)}{I(1,v)} - v. }$

## Generalizations

### Homogeneity under a monoid action

The definitions given above are all specialized cases of the following more general notion of homogeneity, in which $\displaystyle{ X }$ can be any set (rather than a vector space) and the real numbers can be replaced by the more general notion of a monoid.

Let $\displaystyle{ M }$ be a monoid with identity element $\displaystyle{ 1 \in M, }$ let $\displaystyle{ X }$ and $\displaystyle{ Y }$ be sets, and suppose that on both $\displaystyle{ X }$ and $\displaystyle{ Y }$ there are defined monoid actions of $\displaystyle{ M. }$ Let $\displaystyle{ k }$ be a non-negative integer and let $\displaystyle{ f : X \to Y }$ be a map. Then $\displaystyle{ f }$ is said to be homogeneous of degree $\displaystyle{ k }$ over $\displaystyle{ M }$ if for every $\displaystyle{ x \in X }$ and $\displaystyle{ m \in M, }$ $\displaystyle{ f(mx) = m^k f(x).
}$ If in addition there is a function $\displaystyle{ M \to M, }$ denoted by $\displaystyle{ m \mapsto |m|, }$ called an absolute value then $\displaystyle{ f }$ is said to be absolutely homogeneous of degree $\displaystyle{ k }$ over $\displaystyle{ M }$ if for every $\displaystyle{ x \in X }$ and $\displaystyle{ m \in M, }$ $\displaystyle{ f(mx) = |m|^k f(x). }$ A function is homogeneous over $\displaystyle{ M }$ (resp. absolutely homogeneous over $\displaystyle{ M }$) if it is homogeneous of degree $\displaystyle{ 1 }$ over $\displaystyle{ M }$ (resp. absolutely homogeneous of degree $\displaystyle{ 1 }$ over $\displaystyle{ M }$). More generally, it is possible for the symbols $\displaystyle{ m^k }$ to be defined for $\displaystyle{ m \in M }$ with $\displaystyle{ k }$ being something other than an integer (for example, if $\displaystyle{ M }$ is the real numbers and $\displaystyle{ k }$ is a non-zero real number then $\displaystyle{ m^k }$ is defined even though $\displaystyle{ k }$ is not an integer). If this is the case then $\displaystyle{ f }$ will be called homogeneous of degree $\displaystyle{ k }$ over $\displaystyle{ M }$ if the same equality holds: $\displaystyle{ f(mx) = m^k f(x) \quad \text{ for every } x \in X \text{ and } m \in M. }$ The notion of being absolutely homogeneous of degree $\displaystyle{ k }$ over $\displaystyle{ M }$ is generalized similarly. ### Distributions (generalized functions) A continuous function $\displaystyle{ f }$ on $\displaystyle{ \R^n }$ is homogeneous of degree $\displaystyle{ k }$ if and only if $\displaystyle{ \int_{\R^n} f(tx) \varphi(x)\, dx = t^k \int_{\R^n} f(x)\varphi(x)\, dx }$ for all compactly supported test functions $\displaystyle{ \varphi }$; and nonzero real $\displaystyle{ t. 
}$ Equivalently, making a change of variable $\displaystyle{ y = tx, }$ $\displaystyle{ f }$ is homogeneous of degree $\displaystyle{ k }$ if and only if $\displaystyle{ t^{-n}\int_{\R^n} f(y)\varphi\left(\frac{y}{t}\right)\, dy = t^k \int_{\R^n} f(y)\varphi(y)\, dy }$ for all $\displaystyle{ t }$ and all test functions $\displaystyle{ \varphi. }$ The last display makes it possible to define homogeneity of distributions. A distribution $\displaystyle{ S }$ is homogeneous of degree $\displaystyle{ k }$ if $\displaystyle{ t^{-n} \langle S, \varphi \circ \mu_t \rangle = t^k \langle S, \varphi \rangle }$ for all nonzero real $\displaystyle{ t }$ and all test functions $\displaystyle{ \varphi. }$ Here the angle brackets denote the pairing between distributions and test functions, and $\displaystyle{ \mu_t : \R^n \to \R^n }$ is the mapping of scalar division by the real number $\displaystyle{ t. }$ ## Glossary of name variants Let $\displaystyle{ f : X \to Y }$ be a map between two vector spaces over a field $\displaystyle{ \mathbb{F} }$ (usually the real numbers $\displaystyle{ \R }$ or complex numbers $\displaystyle{ \Complex }$). If $\displaystyle{ S }$ is a set of scalars, such as $\displaystyle{ \Z }$, $\displaystyle{ [0, \infty) }$, or $\displaystyle{ \R }$ for example, then $\displaystyle{ f }$ is said to be homogeneous over $\displaystyle{ S }$ if $\displaystyle{ f(s x) = s f(x) }$ for every $\displaystyle{ x \in X }$ and scalar $\displaystyle{ s \in S }$. For instance, every additive map between vector spaces is homogeneous over the rational numbers $\displaystyle{ S := \Q }$ although it might not be homogeneous over the real numbers $\displaystyle{ S := \R }$. The following commonly encountered special cases and variations of this definition have their own terminology: 1. (Strict) Positive homogeneity: $\displaystyle{ f(rx) = r f(x) }$ for all $\displaystyle{ x \in X }$ and all positive real $\displaystyle{ r \gt 0 }$. 
• This property is often also called nonnegative homogeneity because for a function valued in a vector space or field, it is logically equivalent to: $\displaystyle{ f(rx) = r f(x) }$ for all $\displaystyle{ x \in X }$ and all non-negative real $\displaystyle{ r \geq 0 }$.[proof 1] However, for a function valued in the extended real numbers $\displaystyle{ [-\infty, \infty] = \R \cup \{\pm \infty\} }$, which appear in fields like convex analysis, the multiplication $\displaystyle{ 0 \cdot f(x) }$ will be undefined whenever $\displaystyle{ f(x) = \pm \infty }$ and so these statements are not necessarily interchangeable.[note 1] • This property is used in the definition of a sublinear function. • Minkowski functionals are exactly those non-negative extended real-valued functions with this property. 2. Real homogeneity: $\displaystyle{ f(rx) = r f(x) }$ for all $\displaystyle{ x \in X }$ and all real $\displaystyle{ r }$. • This property is used in the definition of a real linear functional. 3. Homogeneity: $\displaystyle{ f(sx) = s f(x) }$ for all $\displaystyle{ x \in X }$ and all scalars $\displaystyle{ s \in \mathbb{F} }$. • It is emphasized that this definition depends on the scalar field $\displaystyle{ \mathbb{F} }$ underlying the domain $\displaystyle{ X }$. • This property is used in the definition of linear functionals and linear maps. 4. Conjugate homogeneity: $\displaystyle{ f(sx) = \overline{s} f(x) }$ for all $\displaystyle{ x \in X }$ and all scalars $\displaystyle{ s \in \mathbb{F} }$. • If $\displaystyle{ \mathbb{F} = \Complex }$ then $\displaystyle{ \overline{s} }$ typically denotes the complex conjugate of $\displaystyle{ s }$. But more generally, as with semilinear maps for example, $\displaystyle{ \overline{s} }$ could be the image of $\displaystyle{ s }$ under some distinguished automorphism of $\displaystyle{ \mathbb{F} }$. • Along with additivity, this property is assumed in the definition of an antilinear map. 
It is also assumed that one of the two coordinates of a sesquilinear form has this property (such as the inner product of a Hilbert space). All of the above definitions can be generalized by replacing the condition $\displaystyle{ f(rx) = r f(x) }$ with $\displaystyle{ f(rx) = |r| f(x) }$, in which case that definition is prefixed with the word "absolute" or "absolutely." For example, 1. Absolute homogeneity: $\displaystyle{ f(sx) = |s| f(x) }$ for all $\displaystyle{ x \in X }$ and all scalars $\displaystyle{ s \in \mathbb{F} }$. • This property is used in the definition of a seminorm and a norm. If $\displaystyle{ k }$ is a fixed real number then the above definitions can be further generalized by replacing the condition $\displaystyle{ f(rx) = r f(x) }$ with $\displaystyle{ f(rx) = r^k f(x) }$ (and similarly, by replacing $\displaystyle{ f(rx) = |r| f(x) }$ with $\displaystyle{ f(rx) = |r|^k f(x) }$ for conditions using the absolute value, etc.), in which case the homogeneity is said to be "of degree $\displaystyle{ k }$" (where in particular, all of the above definitions are "of degree $\displaystyle{ 1 }$"). For instance, 1. Real homogeneity of degree $\displaystyle{ k }$: $\displaystyle{ f(rx) = r^k f(x) }$ for all $\displaystyle{ x \in X }$ and all real $\displaystyle{ r }$. 2. Homogeneity of degree $\displaystyle{ k }$: $\displaystyle{ f(sx) = s^k f(x) }$ for all $\displaystyle{ x \in X }$ and all scalars $\displaystyle{ s \in \mathbb{F} }$. 3. Absolute real homogeneity of degree $\displaystyle{ k }$: $\displaystyle{ f(rx) = |r|^k f(x) }$ for all $\displaystyle{ x \in X }$ and all real $\displaystyle{ r }$. 4. Absolute homogeneity of degree $\displaystyle{ k }$: $\displaystyle{ f(sx) = |s|^k f(x) }$ for all $\displaystyle{ x \in X }$ and all scalars $\displaystyle{ s \in \mathbb{F} }$. 
A nonzero continuous function that is homogeneous of degree $\displaystyle{ k }$ on $\displaystyle{ \R^n \backslash \lbrace 0 \rbrace }$ extends continuously to $\displaystyle{ \R^n }$ if and only if $\displaystyle{ k \gt 0 }$. 1. However, if such an $\displaystyle{ f }$ satisfies $\displaystyle{ f(rx) = r f(x) }$ for all $\displaystyle{ r \gt 0 }$ and $\displaystyle{ x \in X, }$ then necessarily $\displaystyle{ f(0) \in \{\pm \infty, 0\} }$ and whenever $\displaystyle{ f(0), f(x) \in \R }$ are both real then $\displaystyle{ f(r x) = r f(x) }$ will hold for all $\displaystyle{ r \geq 0. }$ 1. Assume that $\displaystyle{ f }$ is strictly positively homogeneous and valued in a vector space or a field. Then $\displaystyle{ f(0) = f(2 \cdot 0) = 2 f(0) }$ so subtracting $\displaystyle{ f(0) }$ from both sides shows that $\displaystyle{ f(0) = 0 }$. Writing $\displaystyle{ r := 0 }$, then for any $\displaystyle{ x \in X }$, $\displaystyle{ f(r x) = f(0) = 0 = 0 f(x) = r f(x), }$ which shows that $\displaystyle{ f }$ is nonnegative homogeneous.
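These scaling laws are easy to sanity-check numerically. The sketch below (a hypothetical example, not taken from the article) verifies that $\displaystyle{ f(x, y) = x^2 y }$ is homogeneous of degree $\displaystyle{ 3 }$ over the reals, and that the Euclidean norm is absolutely homogeneous of degree $\displaystyle{ 1 }$:

```python
import math

def f(x, y):
    # f(x, y) = x^2 * y is homogeneous of degree 3: f(rx, ry) = r^3 f(x, y)
    return x * x * y

def norm(x, y):
    # The Euclidean norm is absolutely homogeneous: ||s v|| = |s| ||v||
    return math.hypot(x, y)

x, y = 2.0, 3.0
for r in (-2.0, 0.5, 4.0):
    assert math.isclose(f(r * x, r * y), r**3 * f(x, y))
    assert math.isclose(norm(r * x, r * y), abs(r) * norm(x, y))
print("degree-3 homogeneity and absolute homogeneity verified")
```

Note that the degree-3 check passes for negative $\displaystyle{ r }$ as well, which is what distinguishes real homogeneity from mere positive homogeneity.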
https://testbook.com/question-answer/sanjya-and-saumya-together-can-finish-a-work-in-9--5f9c0f184f05b665e2c0e7ee
# Sanjya and Saumya together can finish a work in 9 days. Sanjya can do the same job on his own in 15 days. The same work Saumya can do by himself in:

1. 14 days
2. $$15\frac{1}{2}$$ days
3. $$22\frac{1}{2}$$ days
4. 22 days

Option 3 : $$22\frac{1}{2}$$ days

## Detailed Solution

Given:

Sanjya and Saumya together can finish a work in 9 days.

Sanjya can do the same job on his own in 15 days.

Calculation:

Work done per day by Saumya alone = $$\frac{1}{9} - \frac{1}{15}$$ $$= \frac{5-3}{45} = \frac{2}{45}$$

∴ Time taken by Saumya alone = $$\frac{45}{2} = 22\frac{1}{2}$$ days
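The same computation can be carried out with exact rational arithmetic; this is just a quick sketch double-checking the solution above:

```python
from fractions import Fraction

together = Fraction(1, 9)   # work per day by Sanjya and Saumya together
sanjya = Fraction(1, 15)    # work per day by Sanjya alone

saumya = together - sanjya  # work per day by Saumya alone
assert saumya == Fraction(2, 45)

days = 1 / saumya           # time for Saumya to finish the job alone
assert days == Fraction(45, 2)
print(f"Saumya alone needs {days} days")  # 45/2 = 22 1/2 days
```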
https://plainmath.net/8186/concert-promoter-produces-kinds-souvenir-shirt-sells-company-determines
# A concert promoter produces two kinds of souvenir shirt, one kind sells for \$18 and the other for \$25

A concert promoter produces two kinds of souvenir shirt; one kind sells for \$18 and the other for \$25. The company determines that the total cost, in thousands of dollars, of producing x thousand of the \$18 shirts and y thousand of the \$25 shirts is given by $C\left(x,y\right)=4{x}^{2}-6xy+3{y}^{2}+20x+19y-12.$ How many of each type of shirt must be produced and sold in order to maximize profit?

Bentley Leach

Revenue function $R\left(x,y\right)=18x+25y$

Profit function $P\left(x,y\right)=R\left(x,y\right)-C\left(x,y\right)$

$P\left(x,y\right)=\left(18x+25y\right)-\left(4{x}^{2}-6xy+3{y}^{2}+20x+19y-12\right)$

$P\left(x,y\right)=-4{x}^{2}+6xy-3{y}^{2}-2x+6y+12$

First we will find the critical point. Now,

${P}_{x}\left(x,y\right)=-8x+6y-2$

${P}_{y}\left(x,y\right)=6x-6y+6$

${P}_{xx}\left(x,y\right)=-8$

${P}_{yy}\left(x,y\right)=-6$

${P}_{xy}\left(x,y\right)=6$

Setting ${P}_{y}\left(x,y\right)=0$ gives

6x-6y+6=0

y=x+1 (i)

And ${P}_{x}\left(x,y\right)=0$ gives

6y-8x-2=0

6(x+1)-8x=2

x=2 (ii)

So, from (i), y=x+1=3.

Hence, the critical point is (2,3).

Since ${P}_{xx},{P}_{yy},\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}{P}_{xy}$ are all constant, we only need to plug the critical point into the discriminant

$D\left(x,y\right)={P}_{xx}{P}_{yy}-{P}_{xy}^{2}$

$=\left(-8\right)\left(-6\right)-{6}^{2}$

=12>0

Since D>0 and ${P}_{xx}<0$, the critical point (2,3) is a relative maximum, and the maximum value of the profit function is

$P\left(2,3\right)=-4{\left(2\right)}^{2}+6\cdot 2\cdot 3-3{\left(3\right)}^{2}-2\cdot 2+6\cdot 3+12$

$P\left(2,3\right)=19$

The maximum profit of \$19,000 will be earned if 2000 shirts of \$18 and 3000 shirts of \$25 are produced and sold.
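The critical point and the maximum profit can be double-checked numerically; this is a hypothetical sketch of the computation above:

```python
# Profit P(x, y) = -4x^2 + 6xy - 3y^2 - 2x + 6y + 12 (in thousands of dollars)
def P(x, y):
    return -4 * x**2 + 6 * x * y - 3 * y**2 - 2 * x + 6 * y + 12

# Critical point: solve P_x = -8x + 6y - 2 = 0 and P_y = 6x - 6y + 6 = 0.
# Adding the two equations gives -2x + 4 = 0, so x = 2 and y = x + 1 = 3.
x, y = 2, 3
assert -8 * x + 6 * y - 2 == 0 and 6 * x - 6 * y + 6 == 0

# Second-derivative test: D = P_xx * P_yy - P_xy^2
D = (-8) * (-6) - 6**2
assert D == 12 and D > 0      # D > 0 with P_xx < 0 => relative maximum
assert P(x, y) == 19          # maximum profit: $19,000
```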
https://anirdesh.com/math/geometry/equiangular-segments-1.php
# Equiangular Segments Equiangular segments are an interesting occurrence in geometry. We will discover the fascinating segments, their concurrency, and their length. These segments can be external or internal. ## External Equiangular Segments If equilateral triangles are erected externally on the sides of any arbitrary triangle and segments from the vertices of the equilateral triangles that do not touch the arbitrary triangle are drawn to the corresponding vertices of the arbitrary triangle, then the three segments are equal in length and are concurrent. In the figure: • The length of the external equiangular segments, pe, in terms of the sides of the arbitrary triangle is given by the formula: $${\overline{AA'}=\overline{BB'}=\overline{CC'}=p_{e}}$$ and $${p_{e}^2=\frac{1}{2}(a^2+b^2+c^2)+2\sqrt{3}K}$$, where a, b, and c are the sides of the arbitrary triangle and K is the area of the arbitrary triangle. • The equiangular segments are concurrent at a common point O. Point O is the equiangular point of ΔABC: ∠AOB = ∠BOC = ∠COA = 120°. Moreover, all of the 6 angles at O are equal to 60°. • AOCB', BOAC', and COBA' are cyclic quadrilaterals. ## Proof of Congruency and Cyclic Quadrilaterals By congruency, we mean that AA', BB', and CC' are equal in length. This is proved with the SAS congruency criterion. Using SAS criterion, two triangles are congruent if two sides are equal in length and the include angle is equal. In triangles B'CB and ACA', two of the sides are equal, where B'C = AC and CB = CA'. Also, angle ∠B'CB = 60° + C and angle ∠ACA' = C + 60°. Hence, by SAS, the two triangles are congruent. This means that the third sides, AA' and BB', of each triangle are equal in length. Using the same method, we can prove that the third segment CC' is also equal to the other two. (i) AA' = BB' = CC' To prove that the quadrilaterals AOCB', BOAC', and COBA' are cyclic, we notice that ∠CA'A = ∠CBB' because the triangles are congruent. Its measure is denoted by α1. 
Similarly ∠C'CB = ∠AA'B = α2. Also, α1 + α2 = 60°. The sum of the internal angles of a quadrilateral = 360°. We can use simple math to conclude that ∠COB = 120°. Since ∠COB + ∠CA'B = 120° + 60° = 180°, the quadrilateral COBA' must be cyclic because the opposite angles in a cyclic quadrilateral equal 180°. We can make the same conclusions for the other two quadrilaterals. (ii) AOCB', BOAC', and COBA' are cyclic quadrilaterals ## Proof of Concurrency After having proved the aforementioned quadrilaterals are cyclic, we can easily prove the concurrency of the three equiangular segments with Ceva's Theorem. Since AOCB' is a cyclic quadrilateral, we have $$\frac{v}{r} = \frac{\overline{AB'}}{l} = \frac{b}{l}$$ and $${\frac{v}{z}=\frac{\overline{CB'}}{j}=\frac{b}{j}}$$. Therefore, $${\frac{z}{r}=\frac{j}{l}}$$. This property is a property of angle bisectors. Therefore, OB' must be the angle bisector of ∠AOC. Since ∠AOC = 120°, ∠COB' = ∠AOB' = 60°. Similarly, all of the equiangular segments are angle bisectors. To summarize: $${\frac{x}{p}=\frac{l}{k}}$$, $${\frac{y}{q}=\frac{k}{j}}$$, and $${\frac{z}{r}=\frac{j}{l}}$$. Thus, by Ceva’s Theorem, the segments are concurrent since $${\frac{x}{p}\cdot\frac{y}{q}\cdot \frac{z}{r}=\frac{l}{k}\cdot\frac{k}{j}\cdot\frac{j}{l}=1}$$. ## Derivation of the Length Formula Next problem we tackle is to determine the length of the equiangular segment. Because the segments are known to be equal in length, we can use the Law of Cosines to state the following: (i) $${p_{e}^2=b^2+c^2-2bc\cdot\cos(60+A)}$$ (ii) $${p_{e}^2=b^2+c^2-}$$ $${2bc(\cos{60}\cdot\cos{A}-\sin{60}\cdot\sin{A})}$$ The area of a triangle, K, is given by the formula: $$K = \frac{1}{2}bc\sin A$$. We can substitute that into equation (ii). Also, $$\cos A$$ can be found using the Law of Cosines. 
(iii) $$p_{e}^2=b^2+c^2-$$ $$2bc\left(\frac{1}{2}\cdot\frac{b^2+c^2-a^2}{2bc}-\frac{\sqrt{3}}{2}\cdot\frac{2K}{bc}\right)$$  (Expanding and collecting using the sum property of Cosine) (iv) $${p_{e}^2=\frac{a^2+b^2+c^2}{2}+2\sqrt{3}K}$$   where K is the area of the triangle ## Internal Equiangular Segments The properties of the internal equiangular segments are similar to the properties of the external equiangular segments as noted below. In the figure: • The lengths of the internal equiangular segments, AA', BB', and CC', are equal in length. • The segments, if extended, are concurrent at O. • The six angles created by point O are all equal to 60°. • Quadrilaterals ABOC', OCAB', and BOCA' are cyclic. All of these cyclic quadrilaterals have the common angle ∠BOC = 120°. • The proof of concurrency is similar to the proof of concurrency for the external counterpart. The only difference is that point O lies outside the triangle. • Point C'' is not shown since it lies well beyond this page! • Angles α and γ are shown to prove that ΔACC' ≅ ΔAB'B by SAS criterion. Thus, CC' = BB'. If the equilateral triangles are created on the inside of the triangle ABC, then the internal equiangular segments, AA', BB', and CC' are again equal in length. The length of the segments, pi, is given by $${\overline{AA'}=\overline{BB'}=\overline{CC'}=p_{i}}$$ and $${p_{i}^2=\frac{1}{2}(a^2+b^2+c^2)-2\sqrt{3}K}$$. By simple addition, we also have $${p_{e}^2+p_{i}^2=a^2+b^2+c^2}$$. ## Proof of Concurrency To prove that the internal equiangular segments are concurrent, we need to prove that $${\left(\frac{\overline{AB''}}{\overline{-B''C}}\right)\left(\frac{\overline{CA''}}{\overline{A''B}}\right)\left(\frac{\overline{BC''}}{\overline{-C''A}}\right)=1}$$. The negative sign indicates a point lying on the line continuous with the segment on the side of the triangle rather than on the side of the triangle itself. 
For example, point B'' does not lie on the side of the triangle but on a line continuous with the side. In triangle AOB'', we know that OC is the angle bisector of ∠AOB''. Therefore: (i) $${ \frac{AB''}{-B''C}= -\frac{OA}{OC} }$$. Similarly, from triangle BOC, we have: (ii) $${ \frac{CA''}{A''B}= \frac{OC}{OB} }$$. Since C'' lies beyond this page, triangle AOC'' is not shown. The important relationship we obtain from this triangle is: (iii) $${ \frac{BC''}{-C''A}= -\frac{OB}{OA} }$$. Multiplying these three relationships, we obtain 1, completing the proof of concurrency.
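The congruence $${\overline{AA'}=\overline{BB'}=\overline{CC'}}$$ and the external length formula can be verified numerically for a concrete triangle. The sketch below is a hypothetical check: it represents points as complex numbers and, for a counterclockwise triangle, obtains each external apex by rotating a side through −60°.

```python
import cmath
import math

# Vertices of an arbitrary counterclockwise triangle, as complex numbers.
A, B, C = 0 + 0j, 4 + 0j, 1 + 3j

rot = cmath.exp(-1j * math.pi / 3)   # rotation by -60 degrees

# Apexes of the equilateral triangles erected externally on each side.
A1 = B + (C - B) * rot   # apex opposite A, erected on side BC
B1 = C + (A - C) * rot   # apex opposite B, erected on side CA
C1 = A + (B - A) * rot   # apex opposite C, erected on side AB

lengths = [abs(A1 - A), abs(B1 - B), abs(C1 - C)]
assert max(lengths) - min(lengths) < 1e-12   # the three segments are congruent

a, b, c = abs(C - B), abs(A - C), abs(B - A)   # side lengths
K = abs((B - A).real * (C - A).imag - (B - A).imag * (C - A).real) / 2  # area
p_e = math.sqrt((a**2 + b**2 + c**2) / 2 + 2 * math.sqrt(3) * K)
assert math.isclose(lengths[0], p_e)   # matches the derived formula
```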
https://socratic.org/questions/what-is-the-slope-of-a-line-parallel-to-the-line-with-equation-2x-5y-9
What is the slope of a line parallel to the line with equation 2x – 5y = 9?

Dec 18, 2016

The slope of this line is $\frac{2}{5}$, therefore by definition the slope of any parallel line is $\frac{2}{5}$

Explanation:

The slopes of two parallel lines are by definition the same. So if we find the slope of the given line, we will have found the slope of any line parallel to it. To find the slope of the given line we must convert it to slope-intercept form. Slope-intercept form is: $\textcolor{red}{y = m x + b}$ Where $\textcolor{red}{m}$ is the slope and $\textcolor{red}{b}$ is the y-intercept. We can convert the given line as follows: $\textcolor{red}{- 2 x} + 2 x - 5 y = \textcolor{red}{- 2 x} + 9$ $0 - 5 y = - 2 x + 9$ $- 5 y = - 2 x + 9$ $\frac{- 5 y}{\textcolor{red}{- 5}} = \frac{- 2 x + 9}{\textcolor{red}{- 5}}$ $\frac{- 5}{- 5} y = \frac{- 2 x}{- 5} + \frac{9}{- 5}$ $y = \frac{2}{5} x - \frac{9}{5}$ So the slope of this line is $\frac{2}{5}$, and therefore by definition the slope of any parallel line is $\frac{2}{5}$
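The conversion to slope-intercept form can also be done with exact rational arithmetic (a quick sketch, not part of the original answer):

```python
from fractions import Fraction

# For a line a*x + b*y = c with b != 0, solving for y gives
# y = (-a/b) x + c/b, so the slope is -a/b and the y-intercept is c/b.
a, b, c = 2, -5, 9              # the line 2x - 5y = 9
slope = Fraction(-a, b)
intercept = Fraction(c, b)
assert slope == Fraction(2, 5)
assert intercept == Fraction(-9, 5)

# Any parallel line, e.g. 2x - 5y = 0, has the same slope.
assert Fraction(-2, -5) == slope
```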
https://blog.peterkagey.com/tag/tiling/
## Regular Truchet Tilings I recently made my first piece of math art for my apartment: a 30″×40″ canvas print based on putting Truchet tiles on the truncated trihexagonal tiling. I first became interested in these sorts of patterns after my former colleague Shane sent me a YouTube video of the one-line Commodore 64 BASIC program: 10 PRINT CHR\$(205.5+RND(1)); : GOTO 10 I implemented a version of this program on my website, with the added feature that you could click on a section to recolor the entire component, and this idea was also the basis of Problem 2 and Problem 31 in my Open Problem Collection. I saw this idea generalized by Colin Beveridge in the article “Too good to be Truchet” in Chalkdust Magazine. In this article, Colin counts the ways of drawing hexagons, octagons, and decagons with curves connecting the midpoints of edges, and regions colored in an alternating fashion. In the case of the hexagon, there are three ways to do it, one of which looks like Palago tiles. It turns out that if you ignore the colors, the number of ways to pair up midpoints of the sides of a $$2n$$-gon in such a way that the curves connecting the midpoints don’t overlap is given by the $$n$$-th Catalan number. For example, there are $$C_4 = 14$$ ways of connecting midpoints of the sides of an octagon, where different rotations are considered distinct. There are three regular tilings of the plane by $$2n$$-gons, the square tiling, the truncated square tiling, and the truncated trihexagonal tiling. Placing a Truchet tile uniformly at random over each of the $$2n$$-gons, results in a really lovely emergent structure. 
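If you want to check the Catalan count yourself, here's a quick Python sketch (my own, not from any of the linked posts). Fixing one point's partner splits the remaining points into two independent arcs, which is exactly the Catalan recurrence, so the octagon count $$C_4$$ comes out to 14:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def matchings(n):
    # Number of ways to pair 2n points on a circle with non-crossing chords.
    # Fixing point 1's partner splits the remaining points into two
    # independent arcs with k-1 and n-k pairs respectively.
    if n == 0:
        return 1
    return sum(matchings(k) * matchings(n - 1 - k) for k in range(n))

for n in range(8):
    # Agrees with the closed form for the n-th Catalan number.
    assert matchings(n) == comb(2 * n, n) // (n + 1)

assert matchings(4) == 14   # the 14 octagon Truchet tiles
```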
If you find these designs as lovely as I do, I’d recommend taking a look at the Twitter bots @RandomTiling by Dave Richeson and @Truchet_Nested/@Trichet_Nested by @SerinDelaunay (based on an idea from Christopher Carlson) which feature a series of visually interesting generalizations of Truchet tilings and which are explained in Christopher’s blog post “Multi-scale Truchet Patterns“. Edward Borlenghi has a blog post “The Curse of Truchet’s Tiles” about how he tried—mostly unsuccessfully—to sell products based on Truchet tiles, like carpet squares and refrigerator magnets (perhaps similar to “YoYo” magnets from Magnetic Poetry). The post is filled with lots of cool, alternative designs for square Truchet tiles and how they fit together. Edward got a patent for some of his ideas, and his attempt to sell these very cool products feels like it could have been my experience in another life. If you want to see more pretty images and learn more about this, make sure to read Truchet Tilings Revisited by Robert J. Krawczyk! If you want to see what this looks like on a spherical geometry, check out Matt Zucker’s tweet. And if you want to try to draw some of these patterns for yourself, take a look at @Ayliean’s Truchet Tiles Zine.
https://socratic.org/questions/how-do-you-use-the-discriminant-to-determine-the-nature-of-the-solutions-given-9
# How do you use the discriminant to determine the nature of the solutions given 9m^2 + 24m + 16 = 0? ##### 1 Answer Feb 5, 2017 $\text{roots are real and equal}$ #### Explanation: $\text{For any quadratic equation "ax^2+bx+c=0" the nature of the roots depends on the discriminant } \Delta = {b}^{2} - 4 a c$ $\Delta > 0 \text{ roots are real and unequal}$ $\Delta = 0 \text{ roots are real and equal}$ $\Delta < 0 \text{ roots are complex}$ In this case: $9 {m}^{2} + 24 m + 16 = 0$ $a = 9 , b = 24 , c = 16$ $\Delta = {24}^{2} - 4 \times 9 \times 16$ $\Delta = 576 - 576 = 0$ $\therefore \text{ roots are real and equal}$ this can be confirmed by factorising $9 {m}^{2} + 24 m + 16 = 0 \implies {\left(3 m + 4\right)}^{2} = 0$
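A quick numerical check of the discriminant computation (a hypothetical sketch):

```python
import math

def discriminant(a, b, c):
    return b**2 - 4 * a * c

a, b, c = 9, 24, 16
delta = discriminant(a, b, c)
assert delta == 0            # roots are real and equal

root = -b / (2 * a)          # the repeated root
assert math.isclose(root, -4 / 3)
assert math.isclose(a * root**2 + b * root + c, 0, abs_tol=1e-12)
# Consistent with the factorisation (3m + 4)^2 = 0.
```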
https://math.libretexts.org/Bookshelves/Linear_Algebra/Map%3A_Linear_Algebra_(Waldron%2C_Cherney%2C_and_Denton)/09%3A_Subspaces_and_Spanning_Sets
# 9: Subspaces and Spanning Sets

It is time to study vector spaces more carefully and return to some fundamental questions:

1. $$\textit{Subspaces}$$: When is a subset of a vector space itself a vector space? (This is the notion of a $$\textit{subspace}$$.)
2. $$\textit{Linear Independence}$$: Given a collection of vectors, is there a way to tell whether they are independent, or if one is a "linear combination'' of the others?
3. $$\textit{Dimension}$$: Is there a consistent definition of how "big'' a vector space is?
4. $$\textit{Basis}$$: How do we label vectors? Can we write any vector as a sum of some basic set of vectors? How do we change our point of view from vectors labelled one way to vectors labelled in another way?

Let's start at the top!
https://proofwiki.org/wiki/Limit_of_Sine_of_X_over_X/Proof_1
# Limit of Sine of X over X/Proof 1 ## Theorem $\displaystyle \lim_{x \mathop \to 0} \frac {\sin x} x = 1$ ## Proof $\displaystyle \sin x$ $=$ $\displaystyle \sum_{n \mathop = 0}^\infty \left({-1}\right)^n \frac {x^{2n+1} }{\left({2n+1}\right)!}$ Definition of the Sine Function $\displaystyle$ $=$ $\displaystyle \left({-1}\right)^0 \frac{x^{2 \cdot 0 + 1} } { \left({2 \cdot 0 + 1}\right)!} + \sum_{n \mathop = 1}^\infty \left({-1}\right)^n \frac {x^{2n+1} } {\left({2n+1}\right)!}$ $\displaystyle$ $=$ $\displaystyle x + \sum_{n \mathop = 1}^\infty \left({-1}\right)^n \frac {x^{2n+1} }{\left({2n+1}\right)!}$ $\displaystyle \lim_{x \mathop \to 0}\frac{\sin x} x$ $=$ $\displaystyle \lim_{x \mathop \to 0} \frac{x + \sum_{n \mathop = 1}^\infty \left({-1}\right)^n \frac {x^{2n+1} }{\left({2n+1}\right)!} } x$ $\displaystyle$ $=$ $\displaystyle \lim_{x \mathop \to 0} \frac x x + \lim_{x \mathop \to 0} \frac{\sum_{n \mathop = 1}^\infty \left({-1}\right)^n \frac {x^{2n + 1} }{\left({2n+1}\right)!} } x$ $\displaystyle$ $=$ $\displaystyle 1 + \lim_{x \mathop \to 0} \frac{\sum_{n \mathop = 1}^\infty \left({-1}\right)^n \frac {x^{2n} } {\left({2n}\right)!} } 1$ Power Series is Differentiable on Interval of Convergence and L'Hôpital's Rule $\displaystyle$ $=$ $\displaystyle 1 + \lim_{x \mathop \to 0} \sum_{n \mathop = 1}^\infty \left({-1}\right)^n \frac {x^{2n} }{\left({2n}\right)!}$ $\displaystyle$ $=$ $\displaystyle 1 + \sum_{n \mathop = 1}^\infty \left({-1}\right)^n \frac {0^{2n} }{\left({2n}\right)!}$ Polynomial is Continuous $\displaystyle$ $=$ $\displaystyle 1$ $\blacksquare$
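The limit, together with the error bound suggested by the alternating series, can be checked numerically. This is an informal sketch, not part of the formal proof:

```python
import math

prev = float("inf")
for x in (0.5, 0.1, 0.01, 0.001):
    ratio = math.sin(x) / x
    err = abs(ratio - 1)
    # sin(x)/x = 1 - x^2/6 + x^4/120 - ... is alternating with decreasing
    # terms for small x, so the error of the leading term 1 is at most x^2/6.
    assert err <= x**2 / 6
    assert err < prev          # the ratio approaches 1 as x -> 0
    prev = err
```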
https://www.encyclopediaofmath.org/index.php?title=Convergence_of_measures&oldid=27239
# Convergence of measures 2010 Mathematics Subject Classification: Primary: 28A33 [MSN][ZBL] $\newcommand{\abs}[1]{\left|#1\right|}$ A concept in measure theory, determined by a certain topology in a space of measures that are defined on a certain $\sigma$-algebra $\mathcal{B}$ of subsets of a space $X$ or, more generally, in a space $\mathcal{M} (X, \mathcal{B})$ of charges, i.e. countably-additive real (resp. complex) functions $\mu: \mathcal{B}\to \mathbb R$ (resp. $\mathbb C$), often also called $\mathbb R$ (resp. $\mathbb C$) valued or signed measures. The total variation measure of a $\mathbb C$-valued measure is defined on $\mathcal{B}$ as: $\abs{\mu}(B) :=\sup\left\{ \sum_i \abs{\mu(B_i)}: \{B_i\} \text{ is a countable partition of } B\right\}.$ In the real-valued case the above definition simplifies as $\abs{\mu}(B) = \sup_{A\in \mathcal{B}, A\subset B} \left(\abs{\mu (A)} + \abs{\mu (B\setminus A)}\right).$ The total variation of $\mu$ is then defined as $\left\|\mu\right\|_v := \abs{\mu}(X)$. The space $\mathcal{M}^b (X, \mathcal{B})$ of $\mathbb R$ (resp. $\mathbb C$) valued measures with finite total variation is a Banach space and the following are the most commonly used topologies. 1) The norm or strong topology: $\mu_n\to \mu$ if and only if $\left\|\mu_n-\mu\right\|_v\to 0$. 2) The weak topology: a sequence of measures $\mu_n \rightharpoonup \mu$ if and only if $F (\mu_n)\to F(\mu)$ for every bounded linear functional $F$ on $\mathcal{M}^b$. 3) When $X$ is a topological space and $\mathcal{B}$ the corresponding $\sigma$-algebra of Borel sets, we can introduce on $\mathcal{M}^b (X, \mathcal{B})$ the narrow topology. In this case $\mu_n$ converges to $\mu$ if and only if $$\label{e:narrow} \int f\, \mathrm{d}\mu_n \to \int f\, \mathrm{d}\mu$$ for every bounded continuous function $f:X\to \mathbb R$ (resp. $\mathbb C$).
This topology is also sometimes called the weak topology, however such notation is inconsistent with the Banach space theory, see below. The following is an important consequence of the narrow convergence: if $\mu_n$ converges narrowly to $\mu$, then $\mu_n (A)\to \mu (A)$ for any Borel set such that $\abs{\mu}(\partial A) = 0$. 4) When $X$ is a locally compact topological space and $\mathcal{B}$ the $\sigma$-algebra of Borel sets yet another topology can be introduced, the so-called wide topology, or sometimes referred to as weak$^\star$ topology. A sequence $\mu_n\rightharpoonup^\star \mu$ if and only if \eqref{e:narrow} holds for continuous functions which are compactly supported. This topology is in general weaker than the narrow topology. If $X$ is compact and Hausdorff the Riesz representation theorem shows that $\mathcal{M}^b$ is the dual of the space $C(X)$ of continuous functions. Under this assumption the narrow and weak$^\star$ topology coincides with the usual weak$^\star$ topology of the Banach space theory. Since in general $C(X)$ is not a reflexive space, it turns out that the narrow topology is in general weaker than the weak topology. A topology analogous to the weak$^\star$ topology is defined in the more general space $\mathcal{M}^b_{loc}$ of locally bounded measures, i.e. those measures $\mu$ such that for any point $x\in X$ there is a neighborhood $U$ with $\abs{\mu}(U)<\infty$. #### References [AmFuPa] L. Ambrosio, N. Fusco, D. Pallara, "Functions of bounded variations and free discontinuity problems". Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 2000. MR1857292Zbl 0957.49001 [Bo] N. Bourbaki, "Elements of mathematics. Integration" , Addison-Wesley (1975) pp. Chapt.6;7;8 (Translated from French) MR0583191 Zbl 1116.28002 Zbl 1106.46005 Zbl 1106.46006 Zbl 1182.28002 Zbl 1182.28001 Zbl 1095.28002 Zbl 1095.28001 Zbl 0156.06001 [DS] N. Dunford, J.T. Schwartz, "Linear operators. 
General theory" , 1 , Interscience (1958) MR0117523 [Bi] P. Billingsley, "Convergence of probability measures" , Wiley (1968) MR0233396 Zbl 0172.21201 [Ma] P. Mattila, "Geometry of sets and measures in euclidean spaces. Cambridge Studies in Advanced Mathematics, 44. Cambridge University Press, Cambridge, 1995. MR1333890 Zbl 0911.28005 How to Cite This Entry: Convergence of measures. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Convergence_of_measures&oldid=27239 This article was adapted from an original article by R.A. Minlos (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
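Returning to the narrow topology defined above: a standard concrete example is that the point masses $\mu_n = \delta_{1/n}$ converge narrowly to $\delta_0$, since the pairing $\int f\, \mathrm{d}\mu_n = f(1/n)$ tends to $f(0)$ for every bounded continuous $f$. A small numerical sketch of this (a hypothetical illustration, not part of the entry):

```python
import math

# For mu_n = delta_{1/n}, the pairing <mu_n, f> is simply f(1/n),
# and narrow convergence to delta_0 means f(1/n) -> f(0).
f = lambda t: math.atan(t)        # a bounded continuous test function

pairings = [f(1 / n) for n in (1, 10, 100, 1000)]
gaps = [abs(p - f(0)) for p in pairings]
assert gaps == sorted(gaps, reverse=True)   # the gap shrinks monotonically
assert gaps[-1] < 1e-3                      # and tends to f(0) = 0
```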
https://proofwiki.org/wiki/That_which_produces_Medial_Whole_with_Rational_Area_is_Irrational
# That which produces Medial Whole with Rational Area is Irrational ## Theorem In the words of Euclid: If from a straight line there be subtracted a straight line which is incommensurable in square with the whole, and which with the whole makes the sum of the squares on them medial, but twice the rectangle contained by them rational, the remainder is irrational; and let it be called that which produces with a rational area a medial whole. ## Proof Let $AB$ be a straight line. Let a straight line $BC$ such that: $BC$ is incommensurable in square with $AB$ $AB^2 + BC^2$ is medial the rectangle contained by $AB$ and $BC$ is rational be cut off from $AB$. We have that: $AB^2 + BC^2$ is medial while: $2 \cdot AB \cdot BC$ is rational. Therefore $AB^2 + BC^2$ is incommensurable with $2 \cdot AB \cdot BC$. From: Proposition $7$ of Book $\text{II}$: Square of Difference and: Proposition $16$ of Book $\text{X}$: Incommensurability of Sum of Incommensurable Magnitudes it follows that: $2 \cdot AB \cdot BC$ is incommensurable with $AC^2$. But $2 \cdot AB \cdot BC$ is rational. Therefore $AC^2$ is irrational. Therefore $AC$ is irrational. Such a straight line is known as that which produces with a rational area a medial whole. $\blacksquare$ ## Historical Note This proof is Proposition $77$ of Book $\text{X}$ of Euclid's The Elements.
https://www.math24.net/inverse-functions/
# Inverse Functions

Suppose $$f : A \to B$$ is a function whose domain is the set $$A$$ and whose codomain is the set $$B.$$ The function $$f$$ is called invertible if there exists a function $$f^{-1} : B \to A$$ with the domain $$B$$ and the codomain $$A$$ such that ${{f^{-1}}\left( y \right) = x\; \text{ if and only if }\;}\kern0pt{ f\left( x \right) = y,}$ where $$x \in A,$$ $$y \in B.$$ The function $$f^{-1}$$ is then called the inverse of $$f.$$

Not all functions have an inverse. If a function $$f$$ is not injective, different elements in its domain may have the same image: $f\left( {{x_1}} \right) = f\left( {{x_2}} \right) = y_1.$ In this case, the converse relation $${f^{-1}}$$ is not a function because there are two preimages $${x_1}$$ and $${x_2}$$ for the element $${y_1}$$ in the codomain $$B.$$ So, to have an inverse, the function must be injective.

If a function $$f$$ is not surjective, not all elements in the codomain have a preimage in the domain. In this case, the converse relation $${f^{-1}}$$ is also not a function. Thus, to have an inverse, the function must be surjective.

Recall that a function which is both injective and surjective is called bijective. Hence, to have an inverse, a function $$f$$ must be bijective. The converse is also true: if $$f : A \to B$$ is bijective, then it has an inverse function $${f^{-1}}.$$

## Solved Problems

Click or tap a problem to see the solution.

### Example 1

Show that the function $$f:\mathbb{Z} \to \mathbb{Z}$$ defined by $$f\left( x \right) = x + 5$$ is bijective and find its inverse.

### Example 2

Show that the function $$g:\mathbb{R^{+}} \to \mathbb{R^{+}},$$ $$g\left( x \right) = x^2$$ is bijective and find its inverse.
### Example 3

The function $$f: \mathbb{R}\backslash\left\{ 3 \right\} \to \mathbb{R}\backslash\left\{ 1 \right\}$$ is defined as $$f\left( x \right) = \large{\frac{{x - 2}}{{x - 3}}}\normalsize.$$ Find the inverse function $$f^{-1}.$$

### Example 4

The function $$g: \mathbb{R} \to \mathbb{R}^{+}$$ is defined as $$g\left( x \right) = {e^{2x + 1}}.$$ Find the inverse function $$g^{-1}.$$

### Example 5

Consider the function $$f:\mathbb{Z}^2 \to \mathbb{Z}^2$$ defined as $$f\left( {x,y} \right) = \left( {2x - y,x + 2y} \right).$$ Find the inverse function $${f^{-1}}.$$

### Example 1.

Show that the function $$f:\mathbb{Z} \to \mathbb{Z}$$ defined by $$f\left( x \right) = x + 5$$ is bijective and find its inverse.

Solution.

It is easy to show that the function $$f$$ is injective. Using the contrapositive approach, suppose that $${x_1} \ne {x_2}$$ but $$f\left( {{x_1}} \right) = f\left( {{x_2}} \right).$$ Then we have: ${{x_1} + 5 = {x_2} + 5,}\;\; \Rightarrow {{x_1} = {x_2}.}$ This is a contradiction. Hence, the function $$f$$ is injective.

For any $$y \in \mathbb{Z}$$ in the codomain of $$f,$$ there exists a preimage $$x:$$ ${y = f\left( x \right) = x + 5,}\;\; \Rightarrow {x = y - 5.}$ We see that the function $$f$$ is surjective, and consequently, it is bijective. The inverse function is given by $x = {f^{-1}}\left( y \right) = y - 5.$

### Example 2.

Show that the function $$g:\mathbb{R^{+}} \to \mathbb{R^{+}},$$ $$g\left( x \right) = x^2$$ is bijective and find its inverse.

Solution.

By contradiction, let $${x_1} \ne {x_2}$$ but $$g\left( {{x_1}} \right) = g\left( {{x_2}} \right).$$ Then ${x_1^2 = x_2^2,}\;\; \Rightarrow {\left| {{x_1}} \right| = \left| {{x_2}} \right|.}$ Since the domain is restricted to the set of positive real numbers, we get $${x_1} = {x_2}.$$ This proves that the function $$g$$ is injective.
Take an arbitrary positive number $$y \in \mathbb{R^{+}}$$ in the codomain of $$g.$$ Find the preimage of the number:

${y = g\left( x \right) = {x^2},}\;\; \Rightarrow {x = \sqrt y .}$

It is clear that the preimage $$x$$ exists for any positive $$y,$$ so the function $$g$$ is surjective. Since the function $$g$$ is injective and surjective, it is bijective and has an inverse $$g^{-1}$$ given by

$x = {g^{-1}}\left( y \right) = \sqrt y .$

### Example 3.

The function $$f: \mathbb{R}\backslash\left\{ 3 \right\} \to \mathbb{R}\backslash\left\{ 1 \right\}$$ is defined as $$f\left( x \right) = \large{\frac{{x - 2}}{{x - 3}}}\normalsize.$$ Find the inverse function $$f^{-1}.$$

Solution.

First we check that the function $$f$$ is bijective. Let $${x_1} \ne {x_2},$$ where $${x_1},{x_2} \ne 3,$$ and suppose $$f\left( {{x_1}} \right) = f\left( {{x_2}} \right).$$ Then

$\require{cancel}{\frac{{{x_1} - 2}}{{{x_1} - 3}} = \frac{{{x_2} - 2}}{{{x_2} - 3}},}\;\; \Rightarrow {\left( {{x_1} - 2} \right)\left( {{x_2} - 3} \right) }={ \left( {{x_1} - 3} \right)\left( {{x_2} - 2} \right),}\;\; \Rightarrow {\cancel{{x_1}{x_2}} - 2{x_2} - 3{x_1} + \cancel{6} }={ \cancel{{x_1}{x_2}} - 3{x_2} - 2{x_1} + \cancel{6},}\;\; \Rightarrow {- 2{x_2} - 3{x_1} = - 3{x_2} - 2{x_1},}\;\; \Rightarrow {3{x_2} - 2{x_2} = 3{x_1} - 2{x_1},}\;\; \Rightarrow {{x_2} = {x_1}.}$

This is a contradiction. Hence, the function $$f$$ is injective.

Consider an arbitrary real number $$y \ne 1$$ in the codomain of $$f.$$ Determine the preimage of the number $$y$$ by solving the equation for $$x:$$

${y = f\left( x \right) = \frac{{x - 2}}{{x - 3}},}\;\; \Rightarrow {x - 2 = y\left( {x - 3} \right),}\;\; \Rightarrow {x - 2 = xy - 3y,}\;\; \Rightarrow {xy - x = 3y - 2,}\;\; \Rightarrow {x\left( {y - 1} \right) = 3y - 2,}\;\; \Rightarrow {x = \frac{{3y - 2}}{{y - 1}}.}$

As you can see, the preimage $$x$$ exists for any $$y \ne 1.$$ Consequently, the function $$f$$ is surjective and, hence, bijective.
The inverse function $$f^{-1}$$ is expressed as

$x = {f^{-1}}\left( y \right) = \frac{{3y - 2}}{{y - 1}}.$

### Example 4.

The function $$g: \mathbb{R} \to \mathbb{R}^{+}$$ is defined as $$g\left( x \right) = {e^{2x + 1}}.$$ Find the inverse function $$g^{-1}.$$

Solution.

We need to make sure that the function $$g$$ is bijective. By contradiction, suppose $${x_1} \ne {x_2}$$ but $$g\left( {{x_1}} \right) = g\left( {{x_2}} \right).$$ It then follows that

${{e^{2{x_1} + 1}} = {e^{2{x_2} + 1}},}\;\; \Rightarrow {\ln {e^{2{x_1} + 1}} = \ln {e^{2{x_2} + 1}},}\;\; \Rightarrow {\left( {2{x_1} + 1} \right)\ln e = \left( {2{x_2} + 1} \right)\ln e,}\;\; \Rightarrow {2{x_1} + 1 = 2{x_2} + 1,}\;\; \Rightarrow {2{x_1} = 2{x_2},}\;\; \Rightarrow {{x_1} = {x_2}.}$

This is a contradiction, so $$g$$ is injective.

Choose a positive real number $$y.$$ Solve the equation $$y = g\left( x \right)$$ for $$x:$$

${g\left( x \right) = y,}\;\; \Rightarrow {{e^{2x + 1}} = y,}\;\; \Rightarrow {2x + 1 = \ln y,}\;\; \Rightarrow {2x = \ln y - 1,}\;\; \Rightarrow {x = \frac{1}{2}\left( {\ln y - 1} \right).}$

The preimage $$x$$ exists for any $$y$$ in the codomain of $$g.$$ So, the function is surjective. Since the function $$g$$ is injective and surjective, it is bijective and has an inverse $${g^{-1}},$$ which is given by

$x = {g^{-1}}\left( y \right) = \frac{1}{2}\left( {\ln y - 1} \right).$

### Example 5.

Consider the function $$f:\mathbb{Z}^2 \to \mathbb{Z}^2$$ defined as $$f\left( {x,y} \right) = \left( {2x - y,x + 2y} \right).$$ Find the inverse function $${f^{-1}}.$$

Solution.

Check the function $$f$$ for injectivity.
Suppose that $$\left( {{x_1},{y_1}} \right) \ne \left( {{x_2},{y_2}} \right)$$ but $$f\left( {{x_1},{y_1}} \right) = f\left( {{x_2},{y_2}} \right).$$ Then

${\left( {2{x_1} - {y_1},{x_1} + 2{y_1}} \right) }={ \left( {2{x_2} - {y_2},{x_2} + 2{y_2}} \right),}\;\;\Rightarrow {\left\{ {\begin{array}{*{20}{l}} {2{x_1} - {y_1} = 2{x_2} - {y_2}}\\ {{x_1} + 2{y_1} = {x_2} + 2{y_2}} \end{array}} \right..}$

Solve this system of equations for $$\left( {{x_2},{y_2}} \right).$$ To eliminate $${y_2},$$ we multiply the first equation by $$2$$ and add the two equations:

${\left\{ {\begin{array}{*{20}{l}} {2{x_1} - {y_1} = 2{x_2} - {y_2}}\\ {{x_1} + 2{y_1} = {x_2} + 2{y_2}} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{l}} {4{x_1} - 2{y_1} = 4{x_2} - 2{y_2}}\\ {{x_1} + 2{y_1} = {x_2} + 2{y_2}} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{l}} {5{x_1} = 5{x_2}}\\ {{x_1} + 2{y_1} = {x_2} + 2{y_2}} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{l}} {{x_1} = {x_2}}\\ {{x_1} + 2{y_1} = {x_2} + 2{y_2}} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{l}} {{x_1} = {x_2}}\\ {2{y_1} = 2{y_2}} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{l}} {{x_1} = {x_2}}\\ {{y_1} = {y_2}} \end{array}} \right..}$

Since $$\left( {{x_1},{y_1}} \right) = \left( {{x_2},{y_2}} \right),$$ we get a contradiction. So, the function $$f$$ is injective.
Check the surjectivity of the function $$f.$$ Let $$\left( {a,b} \right)$$ be an arbitrary pair in the codomain of $$f.$$ Solve the equation $$f\left( {x,y} \right) = \left( {a,b} \right)$$ to express $$x,y$$ in terms of $$a,b.$$

${\left( {2x - y,x + 2y} \right) = \left( {a,b} \right),}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{c}} {2x - y = a}\\ {x + 2y = b} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{c}} {y = 2x - a}\\ {x + 2\left( {2x - a} \right) = b} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{c}} {y = 2x - a}\\ {x + 4x - 2a = b} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{c}} {y = 2x - a}\\ {5x = 2a + b} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{c}} {y = 2x - a}\\ {x = \frac{{2a + b}}{5}} \end{array}} \right.,}\;\; \Rightarrow {\left\{ {\begin{array}{*{20}{c}} {x = \frac{{2a + b}}{5}}\\ {y = \frac{{2b - a}}{5}} \end{array}} \right..}$

Thus, we can always determine the preimage $$\left( {x,y} \right)$$ for any image $$\left( {a,b} \right).$$ Hence, the function is surjective and bijective. The inverse function $${f^{-1}}$$ has already been found above. It is given by

${\left( {x,y} \right) = {f^{-1}}\left( {a,b} \right) }={ \left( {\frac{{2a + b}}{5},\frac{{2b - a}}{5}} \right).}$

We can check the result given that $$f\left( {x,y} \right) = \left( {a,b} \right):$$

${f\left( {x,y} \right) = \left( {2x - y,x + 2y} \right) }={ \left( {2 \cdot \frac{{2a + b}}{5} - \frac{{2b - a}}{5},\; \frac{{2a + b}}{5} + 2 \cdot \frac{{2b - a}}{5}} \right) }={ \left( {\frac{{4a + \cancel{2b} - \cancel{2b} + a}}{5},\; \frac{{\cancel{2a} + b + 4b - \cancel{2a}}}{5}} \right) }={ \left( {\frac{{5a}}{5},\frac{{5b}}{5}} \right) }={ \left( {a,b} \right).}$
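As a quick mechanical check of Example 3's formulas (a sketch of my own, not part of the page), exact rational arithmetic confirms that composing $$f$$ with the derived $$f^{-1}$$ returns every admissible input unchanged:

```python
from fractions import Fraction

def f(x):
    # Example 3: f(x) = (x - 2) / (x - 3), defined for x != 3
    return (x - 2) / (x - 3)

def f_inv(y):
    # The derived inverse: f^{-1}(y) = (3y - 2) / (y - 1), for y != 1
    return (3 * y - 2) / (y - 1)

# Round-trip check on a few points of the domain.
for x in [Fraction(0), Fraction(1), Fraction(5), Fraction(7, 2)]:
    assert f_inv(f(x)) == x
```

Using `Fraction` avoids floating-point round-off, so the equality check is exact.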
https://plainmath.net/algebra-i/103293-how-to-write-an-equation-of-a
Bentley Floyd 2023-03-09

How to write an equation of a line with slope of 3 and contains the point (4, 9)?

angel52594672ve

$\text{the equation of a line in}\phantom{\rule{1ex}{0ex}}\text{slope-intercept form}$ is

$y=mx+b$

$\text{where m is the slope and b the y-intercept}$

$\text{here}\phantom{\rule{1ex}{0ex}}m=3$

$⇒y=3x+b←\text{is the partial equation}$

$\text{to find b substitute}\phantom{\rule{1ex}{0ex}}\left(4,9\right)\phantom{\rule{1ex}{0ex}}\text{into the partial equation}$

$9=12+b⇒b=9-12=-3$

$⇒y=3x-3←\text{is the equation of the line}$

polemann7tm

The equation of a line in point-slope form is as follows:

$y-{y}_{1}=m\left(x-{x}_{1}\right)$

Make the following substitutions:

${x}_{1}=4$ (the x-coordinate of the point)
${y}_{1}=9$ (the y-coordinate of the same point)
$m=3$ (the slope of the line)

You will then have:

$y-9=3\left(x-4\right)$

Then simplify:

$y-9=3x-12$

$y=3x-3$
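Both answers land on the same line; a tiny numeric check (mine, not part of either answer) confirms it has slope 3 and passes through (4, 9):

```python
def y(x):
    # The line both answers derive: y = 3x - 3
    return 3 * x - 3

assert y(4) == 9                        # passes through (4, 9)
assert (y(10) - y(4)) / (10 - 4) == 3   # slope is 3
```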
http://matthew-brett.github.io/teaching/matrix_rank.html
# Matrix rank

The rank of a matrix is the number of independent rows and/or columns of a matrix. We will soon define what we mean by the word independent.

For a matrix with more columns than rows, it is the number of independent rows. For a matrix with more rows than columns, like a design matrix, it is the number of independent columns. In fact, linear algebra tells us that it is impossible to have more independent columns than there are rows, or more independent rows than there are columns. Try it with some test matrices.

A column is dependent on other columns if the values in the column can be generated by a weighted sum of one or more other columns.

To put this more formally - let's say we have a matrix $$\mathbf{X}$$ with $$M$$ rows and $$N$$ columns. Write column $$i$$ of $$\mathbf{X}$$ as $$X_{:,i}$$. Column $$i$$ is independent of the rest of $$\mathbf{X}$$ if there is no length $$N$$ column vector of weights $$\vec{c}$$, where $$c_i = 0$$, such that $$\mathbf{X} \cdot \vec{c} = X_{:,i}$$.

Let's make a design with independent columns:

>>> #: Standard imports
>>> import numpy as np
>>> # Make numpy print 4 significant digits for prettiness
>>> np.set_printoptions(precision=4, suppress=True)
>>> import matplotlib.pyplot as plt
>>> # Default to nearest neighbor interpolation, gray colormap
>>> import matplotlib
>>> matplotlib.rcParams['image.interpolation'] = 'nearest'
>>> matplotlib.rcParams['image.cmap'] = 'gray'

Hint: If running in the IPython console, consider running `%matplotlib` to enable interactive plots. If running in the Jupyter Notebook, use `%matplotlib inline`.

>>> trend = np.linspace(0, 1, 10)
>>> X = np.ones((10, 3))
>>> X[:, 0] = trend
>>> X[:, 1] = trend ** 2
>>> plt.imshow(X)
<...>

In this case, no column can be generated by a weighted sum of the other two.
We can test this with np.linalg.matrix_rank:

>>> import numpy.linalg as npl
>>> npl.matrix_rank(X)
3

This does not mean the columns are orthogonal:

>>> # Orthogonal columns have dot products of zero
>>> X.T.dot(X)
array([[ 3.5185, 2.7778, 5. ],
       [ 2.7778, 2.337 , 3.5185],
       [ 5. , 3.5185, 10. ]])

Nor does it mean that the columns have zero correlation (see Correlation and projection for the relationship between correlation and the vector dot product):

>>> np.corrcoef(X[:,0], X[:, 1])
array([[ 1. , 0.9627],
       [ 0.9627, 1. ]])

As long as each column cannot be fully predicted by the others, the column is independent.

Now let's add a fourth column that is a weighted sum of the first three:

>>> X_not_full_rank = np.zeros((10, 4))
>>> X_not_full_rank[:, :3] = X
>>> X_not_full_rank[:, 3] = np.dot(X, [-1, 0.5, 0.5])
>>> plt.imshow(X_not_full_rank)
<...>

matrix_rank is up to the job:

>>> npl.matrix_rank(X_not_full_rank)
3

A more typical situation with design matrices is that we have some dummy variable columns coding for group membership, that sum up to a column of ones.

>>> dummies = np.kron(np.eye(3), np.ones((4, 1)))
>>> plt.imshow(dummies)
<...>

So far, so good:

>>> npl.matrix_rank(dummies)
3

If we add a column of ones to model the mean, we now have an extra column that is a linear combination of other columns in the model:

>>> dummies_with_mean = np.hstack((dummies, np.ones((12, 1))))
>>> plt.imshow(dummies_with_mean)
<...>

>>> npl.matrix_rank(dummies_with_mean)
3

A matrix is full rank if the matrix rank is the same as the number of columns / rows. That is, a matrix is full rank if all the columns (or rows) are independent. If a matrix is not full rank then it is rank deficient.
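One way to make the dependence concrete (a small check of my own, not from the original page): since the fourth column was built as a weighted sum of the first three, and the first three columns have full column rank, least squares recovers those generating weights uniquely.

```python
import numpy as np

# Rebuild the design and its dependent fourth column.
trend = np.linspace(0, 1, 10)
X = np.ones((10, 3))
X[:, 0] = trend
X[:, 1] = trend ** 2
extra = X.dot([-1, 0.5, 0.5])   # the dependent column from the text

# X has full column rank, so the weight vector generating `extra`
# from the columns of X is unique, and lstsq finds it.
w, *_ = np.linalg.lstsq(X, extra, rcond=None)
print(w)   # close to [-1, 0.5, 0.5]
```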
https://math.stackexchange.com/questions/907934/medians-of-a-triangle-and-similar-triangle-properties
# Medians of a triangle and similar triangle properties

Prove using similar triangle properties that "any two medians of a triangle divide each other in the ratio $2:1$". I do not know which criteria of similar triangles must be used.

Here's a diagram:

$|AB|:|ED|=2:1$ and $AB\parallel ED$ (midpoint theorem) $\Longrightarrow \triangle ABC \sim \triangle EDC$. Now use alternate interior angles to prove $\triangle ABS \sim \triangle DES$. Use the fact that $|AB|:|ED|=2:1$ to conclude.

• Your answer is essentially correct. I just add in some details. The first two results are based on the "midpoint theorem". From those facts, we conclude that △ABC∼△EDC. – Mick Aug 24 '14 at 18:10

There are several properties of similar triangles that you can use, including:

1. If two angles of one triangle are congruent to two angles of another triangle, then the triangles are similar.
2. The three sides of one triangle are proportional to the three corresponding sides of another triangle if and only if the triangles are similar.
3. If two sides of one triangle are proportional to two sides of another triangle and their included angles are congruent, then the triangles are similar.
4. If a line divides two sides of a triangle proportionally, then it is parallel to the third side.

In this diagram, point $D$ is the midpoint of $\overline {AC}$, point $E$ is the midpoint of $\overline {BC}$, point $H$ is the midpoint of $\overline {AF}$, and point $G$ is the midpoint of $\overline {BF}$. Examine this carefully, find the similar triangles, and use their properties. Don't forget to look for congruent triangles as well.
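A coordinate sanity check of the claim being proved (this is numerical verification, not the requested similar-triangle proof): the medians of any triangle meet at the centroid, which divides each median in the ratio $2:1$ measured from the vertex.

```python
from fractions import Fraction as F

# An arbitrary triangle with exact rational coordinates.
A, B, C = (F(0), F(0)), (F(4), F(0)), (F(1), F(3))

def midpoint(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

# The centroid is the average of the three vertices.
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

for vertex, opp in [(A, midpoint(B, C)), (B, midpoint(A, C)), (C, midpoint(A, B))]:
    # G lies 2/3 of the way from each vertex to the opposite midpoint,
    # i.e. it divides every median in the ratio 2:1.
    expected = (vertex[0] + F(2, 3) * (opp[0] - vertex[0]),
                vertex[1] + F(2, 3) * (opp[1] - vertex[1]))
    assert G == expected
```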
https://www.vrcbuzz.com/operations-research/transportation-problem/
## Vogels Approximation Method (VAM)

Vogel's approximation method is an improved version of the least cost entry method. It gives a better starting solution as compared to any other method. Consider a general transportation problem with $m$ origins and $n$ destinations. …

## Least Cost Entry Method For Transportation Problem

Least cost entry method (also known as the matrix minima method) is a method of finding an initial basic feasible solution for a transportation problem. Consider a general transportation problem with $m$ origins and $n$ destinations. …

## Column Minima Method for Transportation Problem

Column minima method is a method of finding an initial basic feasible solution for a transportation problem. Consider a general transportation problem with $m$ origins and $n$ destinations. …

## Row Minima Method for Transportation Problem

Row minima method is a method of finding an initial basic feasible solution for a transportation problem. Consider a general transportation problem with $m$ origins and $n$ destinations. …

## North-West Corner Method

The North-West corner method is a method of finding an initial basic feasible solution to the transportation problem. Consider a general transportation problem with $m$ origins and $n$ destinations. …

## Transportation Problem

Transportation problem is a special class of linear programming problem that deals with transporting (or shipping) a commodity from various origins or sources (e.g. factories) to various destinations or sinks (e.g. warehouses). In this type of problem the objective is to determine the transportation schedule that minimizes the total transportation cost while satisfying …
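As a concrete illustration of the first of the initial-solution methods listed above, the north-west corner rule can be sketched as follows (a minimal sketch of my own; it assumes total supply equals total demand, and resolves degenerate ties by moving down):

```python
def north_west_corner(supply, demand):
    """Initial basic feasible solution by the north-west corner rule.

    Start at cell (0, 0); repeatedly allocate as much as possible,
    moving right when a column's demand is met and down when a row's
    supply is exhausted.
    """
    supply, demand = list(supply), list(demand)
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        qty = min(supply[i], demand[j])
        alloc[i][j] = qty
        supply[i] -= qty
        demand[j] -= qty
        if supply[i] == 0:
            i += 1
        else:
            j += 1
    return alloc
```

Note that the rule ignores the costs $c_{ij}$ entirely; that is why methods such as the least cost entry method and VAM usually give better starting solutions.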
https://mathtrifles.wordpress.com/2012/10/14/vanishing-points-in-presence-of-noise/
## Vanishing points in presence of noise

Most self-calibration algorithms require prior knowledge of the camera calibration matrix $K$; as an instance, you need it to normalize the image points as $\tilde{\mathbf{x}}=K^{-1} \mathbf{x}$ and therefore fit the essential matrix $E$. With most commercial cameras it is safe to assume that the pixels are square and have no slant; in this case, $K= \begin{bmatrix} -fk & 0 & X_c \\ 0 & -fk & Y_c \\ 0 & 0 & 1 \end{bmatrix}$, where

• $X_c, Y_c$ are the coordinates of the principal point in pixel units; it is usually an acceptable first approximation to assume that the principal point is at the centre of the sensor;
• $k$ is the number of pixels per mm; it is usually safe to take the nominal value specified by the sensor vendor;
• $f$ is the focal length in mm. Taking the nominal value specified by the lens vendor could not be sufficient; even worse if, working with a zoom lens, one tries to read the focus scale.

So it would be nice to obtain at least $f$, or better the whole calibration matrix, from the images. The calibration matrix can be computed from the image of the absolute conic which, in the above conditions, is $\omega = \begin{bmatrix} \omega_1 & 0 & \omega_2 \\ 0 & \omega_1 & \omega_3 \\ \omega_2 & \omega_3 & \omega_4 \end{bmatrix}$; it is related to $K$ by $\omega \propto K^{-T}K^{-1}= \begin{bmatrix} \frac{1}{f^2 k^2} & 0 & -\frac{X_c}{f^2 k^2} \\ 0 & \frac{1}{f^2 k^2} & -\frac{Y_c}{f^2 k^2} \\ -\frac{X_c}{f^2 k^2} & -\frac{Y_c}{f^2 k^2} & 1+\frac{X_c^2+Y_c^2}{f^2 k^2} \end{bmatrix}$.

$\omega$, in turn, can be obtained from three couples of vanishing points $\mathbf{v}_1, \mathbf{v}_1'$; $\mathbf{v}_2, \mathbf{v}_2'$; $\mathbf{v}_3, \mathbf{v}_3'$ corresponding to pencils of lines which are orthogonal in the object space; each couple contributes a constraint $\mathbf{v}^T \omega \mathbf{v}'=0$.
Three constraints are sufficient because $\omega$ has four distinct non-null entries but, being the matrix of a conic section, it is homogeneous (defined up to a scale factor). In an ideal world, things are so simple. All the lines of the pencil intersect exactly in the same point, whose coordinates are readily computed and can be used to obtain $\omega$. In the real world, nothing is perfect and things are not so simple. Straight lines must be obtained by fitting them to points (edgels), which are noisy. As a result, each couple of lines in the pencil will intersect in a different point (though all the points should be very close). Taking the centroid of the intersections may work, but it is possible to do better by realizing that we are facing a constrained minimum problem: we are looking for a pencil of straight lines, minimizing the sum of squared distances of each line from its own cloud of points, all lines constrained to intersect in the same point. Let • $N$ be the number of lines, • $M_i, i \in 1 \cdots N$ be the number of points in the cloud of the i-th line, • $x_{ij}, y_{ij}, j \in 1 \cdots M_i$ be the coordinates of the j-th point in the cloud of the i-th line, • $\theta_i, d_i$ be the parameters of the polar representation of the i-th line, • $x_I, y_I$ be the coordinates of the intersection; then the problem can be written as $min_{\{\theta_i\},\{d_i\}} \sum_{i=1}^N \sum_{j=1}^{M_i} (x_{ij} cos \theta_i + y_{ij} sin \theta_i - d_i)^2$ subject to $x_I cos \theta_i + y_I sin \theta_i - d_i = 0, i \in 1 \cdots N$ The lagrangian is $\Lambda (\{\theta_i\}, \{d_i\}, x_I, y_I)=\sum_{i=1}^N (\sum_{j=1}^{M_i} (x_{ij} cos \theta_i + y_{ij} sin \theta_i - d_i)^2 + \lambda_i (x_I cos \theta_i + y_I sin \theta_i - d_i))$ where $\lambda_i$ are the Lagrange multipliers. This is a nonlinear problem and as such needs an initial approximation to start with. 
The unconstrained straight lines can be fitted eg with the method of Alciatore and Miranda, which is closed form and fast, and in this context possible outliers can be rejected eg with RANSAC; the intersection can be obtained as the centroid of all the intersections. It is a good idea to check that no intersection be too far from the centroid; this could be a symptom of something gone astray in the previous steps. Unluckily, it is not possible to adapt to this problem the linear method I described here (reduction to the subspace of constraints) because the common intersection point is not known in advance.
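Since the post stops short of an implementation, here is one numerical way the constrained fit can be attacked (a sketch of my own, not the post's method): substituting the constraint $d_i = x_I cos \theta_i + y_I sin \theta_i$ into the objective eliminates the multipliers, and the resulting problem can be solved by alternating between the angles and the intersection point, each sub-step being a classical least-squares fit.

```python
import numpy as np

def fit_pencil(clouds, n_iter=50):
    """Alternating least-squares sketch for fitting a pencil of lines.

    clouds : list of (M_i, 2) arrays of edgels, one cloud per line.
    Returns (thetas, point): polar angles of the line normals and the
    common intersection point.
    """
    # Crude initialization at the overall centroid; the post instead
    # suggests the centroid of the pairwise line intersections.
    point = np.mean(np.vstack(clouds), axis=0)
    thetas = np.zeros(len(clouds))
    for _ in range(n_iter):
        # 1) Point fixed: the best line through it runs along the
        #    principal axis of the shifted cloud, so the line normal is
        #    the eigenvector of the smallest eigenvalue.
        for i, pts in enumerate(clouds):
            q = pts - point
            _, v = np.linalg.eigh(q.T @ q)   # eigenvalues ascending
            thetas[i] = np.arctan2(v[1, 0], v[0, 0])
        # 2) Angles fixed: the point solves the linear least-squares
        #    system n_i . p = x_ij cos(theta_i) + y_ij sin(theta_i).
        rows, rhs = [], []
        for th, pts in zip(thetas, clouds):
            n_i = np.array([np.cos(th), np.sin(th)])
            rows.append(np.tile(n_i, (len(pts), 1)))
            rhs.append(pts @ n_i)
        point, *_ = np.linalg.lstsq(np.vstack(rows), np.hstack(rhs), rcond=None)
    return thetas, point
```

Both sub-steps decrease the same objective, so the iteration is monotone; with a reasonable starting point it converges quickly, though like any alternating scheme it only guarantees a local minimum.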
http://icpc.njust.edu.cn/Problem/Pku/2508/
# Conic distance Time Limit: 1000MS Memory Limit: 65536K ## Description A cone is located in 3D such that its base of radius r is in the z = 0 plane with the center at (0,0,0). The tip of the cone is located at (0, 0, h). Two points are given on the cone surface in conic coordinates. The conic coordinates of a point p lying on the surface of the cone are two numbers: the first, d, is the distance from the tip of the cone to p and the second, A < 360, is the angle in degrees between the plane y = 0 and the plane through points (0,0,0), (0,0,h) and p, measured counterclockwise from the direction of the x axis. Given are two points p1 = (d1, A1) and p2 = (d2, A2) in the conic coordinates. What is the (shortest) distance between p1 and p2 measured on the surface of the cone? ## Input The input is a sequence of lines. Each line contains 6 floating point numbers giving values of: r, h, d1, A1, d2, and A2 ## Output For each line of input, output the (shortest) distance between points p1 and p2 on the surface of the cone with the fraction rounded to 2 decimal places. ## Sample Input 3.0 4.0 2.0 0.0 4.0 0.0 3.0 4.0 2.0 90.0 4.0 0.0 6.0 8.0 2.14 75.2 9.58 114.3 3.0 4.0 5.0 0.0 5.0 90.0 ## Sample Output 2.00 3.26 7.66 4.54 ## Source The UofA Local 2000.10.14
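No official solution accompanies the problem; a common approach (sketched here on my own initiative, reproducing the sample outputs) is to unroll the cone into a planar sector. Angles around the axis scale by $r/L$, where $L = \sqrt{r^2+h^2}$ is the slant length, and a geodesic on the cone becomes a straight segment in the plane, so the law of cosines gives the distance. Since $r/L < 1$, the unrolled angle between the two points never exceeds 180 degrees, so the straight segment always stays inside the sector.

```python
import math

def conic_distance(r, h, d1, a1, d2, a2):
    # Slant length of the cone; unrolling scales azimuth angles by r / L.
    L = math.hypot(r, h)
    delta = abs(a1 - a2)
    delta = min(delta, 360.0 - delta)        # shorter way around the axis
    phi = math.radians(delta) * r / L        # angle between unrolled rays
    # Straight segment between the unrolled points (law of cosines).
    return math.sqrt(d1 * d1 + d2 * d2 - 2.0 * d1 * d2 * math.cos(phi))

print(f"{conic_distance(3.0, 4.0, 2.0, 90.0, 4.0, 0.0):.2f}")   # 3.26
```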
https://statsletters.com/2017/11/09/solving-ordinary-linear-differential-equations-with-random-initial-conditions/
# Solving Ordinary Linear Differential Equations with Random Initial Conditions

## Introduction

Ordinary linear differential equations can be solved as trajectories given some initial conditions. But what if your initial conditions are given as probability distributions? It turns out that the problem is relatively simple to solve.

## Transformation of Random Variables

If we have a random system described as

$\dot{X}(t) = f(X(t),t) \qquad X(t_0) = X_0$

we can write this as

$X(t) = h(X_0,t)$

which is an algebraic transformation of a set of random variables into another representing a one-to-one mapping. Its inverse transform is written as

$X_0 = h^{-1}(X,t)$

and the joint density function $f(x,t)$ of $X(t)$ is given by

$f(x,t) = f_0 \left[ x_0 = h^{-1}(x,t) \right] \left| J \right|$

where $J$ is the Jacobian

$J = \left| \frac{\partial x^T_0}{\partial x} \right|.$

## Solving Linear Systems

For a system of differential equations written as

$\dot{x}(t) = A x(t) + B u(t)$

a transfer matrix can be defined

$\Phi(t,t_0) = e^{A(t-t_0)}$

which can be used to write the solution as

$x(t) = \Phi(t,t_0) x(0) + \int_{t_0}^{t} \Phi(t,s) B u(s) \, ds.$

The inverse formulation of this solution is

$x(0) = \Phi^{-1}(t,t_0) x(t) - \Phi^{-1}(t,t_0) \int_{t_0}^{t} \Phi(t,s) B u(s) \, ds.$

## Projectile Trajectory Example

Based on the formulations above we can now move on to a concrete example where a projectile is sent away in a vacuum. The differential equations describing the motion are

$\left\{ \begin{array}{rcl} \dot{p}_{x_1}(t) & = & p_{x_2}(t) \\ \dot{p}_{x_2}(t) & = & 0 \\ \dot{p}_{y_1}(t) & = & p_{y_2}(t) \\ \dot{p}_{y_2}(t) & = & -g \end{array} \right.$

where $p_{x_1}$ and $p_{y_1}$ are cartesian coordinates of the projectile in a two dimensional space while $p_{x_2}$ is the horizontal velocity and $p_{y_2}$ is the vertical velocity. We only have gravity as external force ($-g$) and no wind resistance, which means that the horizontal velocity will not change.
The matrix representation of this system becomes

$A = \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right)$

with

$B^T = \left( \begin{array}{cccc} 0 & 0 & 0 & 1 \end{array} \right).$

The transfer matrix is (matrix exponential, not element-wise exponential)

$\Phi(t,t_0) = e^{A(t-t_0)} = \left( \begin{array}{cccc} 1 & 0 & t-t_0 & 0 \\ 0 & 1 & 0 & t-t_0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)$

Calculating the solution of the differential equation gives

$x(t) = \Phi(t,0) x(0) + \int_0^t \Phi(t,s) B u(s) \, ds$

where $u(t) = -g$ and $x^T(0) = \left( \begin{array}{cccc} 0 & 0 & v_x & v_y \end{array} \right).$ The parameters $v_x$ and $v_y$ are initial velocities of the projectile. The solution becomes

$x(t) = \left( \begin{array}{c} v_x t \\ v_y t - \frac{g t^2}{2} \\ v_x \\ v_y - g t \end{array} \right)$

and the time when the projectile hits the ground is given by

$p_y(t) = v_y t - \frac{g t^2}{2} = 0 \qquad t > 0$

as

$t_{y=0} = 2 \frac{v_y}{g}.$

A visualization of the trajectory given $v_x = 1$ and $v_y = 2$ with gravity $g = 9.81$ shows an example of the motion of the projectile.

Now, if we assume that the initial state $x(0)$ can be described by a joint Gaussian distribution, we can use the formula shown earlier to say that

$f(x,t) = f_0 \left[ x(0) = h^{-1}(x,t) \right] \left| J \right| = \frac{1}{\sqrt{\left| 2 \pi \Sigma \right|}} e^{-\frac{1}{2}(x(0)-\mu)^T \Sigma^{-1} (x(0)-\mu)}$

where $\left| J \right| = \left| \Phi^{-1}(t) \right|$, $\mu^T = \left( \begin{array}{cccc} 0 & 0 & v_x & v_y \end{array} \right)$ and

$\Sigma = \left( \begin{array}{cccc} 0.00001 & 0 & 0 & 0 \\ 0 & 0.00001 & 0 & 0 \\ 0 & 0 & 0.01 & 0 \\ 0 & 0 & 0 & 0.01 \end{array} \right)$

which means that we have high confidence in the firing position but less in the initial velocity.
We are only interested in where the projectile lands, so we can marginalize out the velocities to get

$f\left(p_{x_1},p_{y_1},t\right) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,t) \, dp_{x_2} \, dp_{y_2}$

which when plotted gives the distribution over landing positions. Since we have used the landing time of the deterministic trajectory, we get a spread across the y-axis as well (the ground is located at $p_y = 0$). We could marginalize the y-direction as well to end up with the horizontal distribution of the projectile at the time when the deterministic trajectory of the projectile is expected to hit the ground.

## Conclusion

Given a set of ordinary differential equations, it is possible to derive the uncertainty of the states given a probability distribution in the initial conditions. There are two other important cases to look into as well: stochastic input signals and random parameters.
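The analytic density can be cross-checked with a Monte Carlo simulation (my own sanity check, not from the post): sample the Gaussian initial state and push each sample through the closed-form solution at the deterministic landing time.

```python
import numpy as np

rng = np.random.default_rng(0)
g, vx, vy = 9.81, 1.0, 2.0

# Gaussian initial state (p_x1, p_y1, p_x2, p_y2) with the mean and
# covariance used in the text.
mu = np.array([0.0, 0.0, vx, vy])
sigma = np.diag([1e-5, 1e-5, 0.01, 0.01])
x0 = rng.multivariate_normal(mu, sigma, size=100_000)

# Closed-form solution x(t) = Phi(t, 0) x(0) + forcing term, evaluated
# at the deterministic landing time t = 2 vy / g.
t = 2 * vy / g
px = x0[:, 0] + t * x0[:, 2]
py = x0[:, 1] + t * x0[:, 3] - 0.5 * g * t ** 2

print(px.mean(), py.mean())   # near the deterministic landing point
```

The sample means land near $(v_x t, 0)$, and the sample spread matches the propagated covariance, which is what the marginalized density above predicts.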
https://www.proofwiki.org/wiki/Set_of_Linear_Transformations_is_Isomorphic_to_Matrix_Space
# Set of Linear Transformations is Isomorphic to Matrix Space

## Theorem

Let $R$ be a ring with unity. Let $F$, $G$ and $H$ be free $R$-modules of finite dimension $p,n,m>0$ respectively. Let $\sequence {a_p}$, $\sequence {b_n}$ and $\sequence {c_m}$ be ordered bases of $F$, $G$ and $H$ respectively.

Let $\map {\LL_R} {G, H}$ denote the set of all linear transformations from $G$ to $H$. Let $\map {\MM_R} {m, n}$ be the $m \times n$ matrix space over $R$. Let $\sqbrk {u; \sequence {c_m}, \sequence {b_n} }$ be the matrix of $u$ relative to $\sequence {b_n}$ and $\sequence {c_m}$.

Let $M: \map {\LL_R} {G, H} \to \map {\MM_R} {m, n}$ be defined as:

$\forall u \in \map {\LL_R} {G, H}: \map M u = \sqbrk {u; \sequence {c_m}, \sequence {b_n} }$

Then $M$ is a module isomorphism.

### Corollary

Let $R$ be a commutative ring with unity. Let $M: \struct {\map {\LL_R} G, +, \circ} \to \struct {\map {\MM_R} n, +, \times}$ be defined as:

$\forall u \in \map {\LL_R} G: \map M u = \sqbrk {u; \sequence {a_n} }$

Then $M$ is an isomorphism.

## Proof

Let $u, v \in \map {\LL_R} {G, H}$ such that $\map M u = \map M v$. We have that the matrix of $u$ relative to $\sequence {b_n}$ and $\sequence {c_m}$ is defined as the $m \times n$ matrix $\sqbrk \alpha_{m n}$ where:

$\ds \forall \tuple {i, j} \in \closedint 1 m \times \closedint 1 n: \map u {b_j} = \sum_{i \mathop = 1}^m \alpha_{i j} \circ c_i$

So if $\map M u = \map M v$, then $u$ and $v$ agree on every element of the ordered basis $\sequence {b_n}$, and hence on all of $G$. That is:

$\map M u = \map M v \implies u = v$

and $M$ is seen to be injective.

## Motivation

What Set of Linear Transformations is Isomorphic to Matrix Space tells us is two things:

1. That the relative matrix of a linear transformation can be considered to be the same thing as the transformation itself
2. To determine the relative matrix for the composite of two linear transformations, what you do is multiply the relative matrices of those linear transformations.
Thus one has a means of direct arithmetical manipulation of linear transformations, thereby transforming geometry into algebra. In fact, matrix multiplication was purposely defined (some would say designed) so as to produce exactly this result.
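The second point of the Motivation is easy to check numerically over the standard bases of real coordinate spaces (an illustration of mine, not part of the proof): applying one map after another agrees with applying the product of their matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(2, 3))   # relative matrix of a map u : R^3 -> R^2
B = rng.integers(-3, 4, size=(3, 4))   # relative matrix of a map v : R^4 -> R^3

x = rng.integers(-5, 6, size=4)
# Applying v and then u gives the same result as applying the single
# matrix A @ B, i.e. the matrix of the composite is the product.
assert np.array_equal(A @ (B @ x), (A @ B) @ x)
```

With integer entries the comparison is exact, so the check does not depend on floating-point tolerance.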
https://examproblems4lossmodels.wordpress.com/2013/04/29/exam-c-practice-problem-17-estimating-claim-frequency/
# Exam C Practice Problem 17 – Estimating Claim Frequency

Both Problems 17-A and 17-B use the following information.

An insurance portfolio consists of independent risks. For each risk in this portfolio, the number of claims in a year has a Poisson distribution with mean $\theta$. The parameter $\theta$ follows a Gamma distribution.

A risk is randomly selected from this portfolio. Prior to obtaining any claim experience, the number of claims in a year for this risk has a distribution with mean 0.6 and variance 0.72. After observing this risk for one year, insurance company records indicate that there are 2 claims for this risk.

___________________________________________________________________________________

Problem 17-A

Given the insurance company records, what is the expected number of claims per year for this risk?

$\displaystyle (A) \ \ 0.60$

$\displaystyle (B) \ \ 0.83$

$\displaystyle (C) \ \ 0.91$

$\displaystyle (D) \ \ 1.25$

$\displaystyle (E) \ \ 2.00$

___________________________________________________________________________________

Problem 17-B

Given the insurance company records, what is the variance of the number of claims per year for this risk?
$\displaystyle (A) \ \ \frac{26}{36}$

$\displaystyle (B) \ \ \frac{32}{36}$

$\displaystyle (C) \ \ \frac{35}{36}$

$\displaystyle (D) \ \ \frac{42}{36}$

$\displaystyle (E) \ \ \frac{45}{36}$

___________________________________________________________________________________

$\copyright \ 2013 \ \ \text{Dan Ma}$
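For readers checking their own work, problems of this type rest on the standard Gamma-Poisson conjugate update. The sketch below uses our own variable names (not from the post): it recovers the Gamma parameters from the stated unconditional mean and variance, then performs the posterior update.

```python
# Sketch of the standard Gamma-Poisson conjugate machinery (variable
# names are ours, not from the original problem).
# If N | theta ~ Poisson(theta) and theta ~ Gamma(alpha, rate lam), then
# unconditionally E[N] = alpha/lam and Var[N] = alpha/lam + alpha/lam**2.

mean_N, var_N = 0.6, 0.72
var_theta = var_N - mean_N        # Var[N] = E[theta] + Var[theta]
lam = mean_N / var_theta          # alpha/lam**2 = var_theta  =>  lam = mean/var
alpha = mean_N * lam

# Observing x claims over t years updates the Gamma parameters:
x, t = 2, 1
alpha_post, lam_post = alpha + x, lam + t

pred_mean = alpha_post / lam_post           # posterior expected claims per year
pred_var = pred_mean * (1 + 1 / lam_post)   # predictive (negative binomial) variance
```

The same two quantities, `pred_mean` and `pred_var`, are what Problems 17-A and 17-B ask for.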
https://socratic.org/questions/how-do-you-factor-a-6-1-1
# How do you factor a^6+1?

Dec 9, 2015

Factor as a sum of cubes. Then we can factor the remaining quartic factor into two quadratics:

${a}^{6} + 1$

$= \left({a}^{2} + 1\right) \left({a}^{4} - {a}^{2} + 1\right)$

$= \left({a}^{2} + 1\right) \left({a}^{2} - \sqrt{3} a + 1\right) \left({a}^{2} + \sqrt{3} a + 1\right)$

#### Explanation:

The sum of cubes identity can be written:

${A}^{3} + {B}^{3} = \left(A + B\right) \left({A}^{2} - A B + {B}^{2}\right)$

So (putting $A = {a}^{2}$ and $B = 1$) we find:

${a}^{6} + 1 = {\left({a}^{2}\right)}^{3} + {1}^{3} = \left({a}^{2} + 1\right) \left({\left({a}^{2}\right)}^{2} - {a}^{2} + 1\right) = \left({a}^{2} + 1\right) \left({a}^{4} - {a}^{2} + 1\right)$

It is not possible to factor $\left({a}^{2} + 1\right)$ into linear factors with Real coefficients, so let's leave that alone and look at the remaining quartic factor $\left({a}^{4} - {a}^{2} + 1\right)$.

This does not factor as a quadratic in ${a}^{2}$ with Real coefficients, but it will factor into two quadratics with irrational coefficients. In order to get the terms in ${a}^{4}$, ${a}^{3}$, $a$ and the constant term to work out, the factorisation must be something like this:

${a}^{4} - {a}^{2} + 1 = \left({a}^{2} - k a + 1\right) \left({a}^{2} + k a + 1\right)$

Then the coefficient of ${a}^{2}$ is $2 - {k}^{2} = - 1$. Hence $k = \pm \sqrt{3}$.

${a}^{4} - {a}^{2} + 1 = \left({a}^{2} - \sqrt{3} a + 1\right) \left({a}^{2} + \sqrt{3} a + 1\right)$

Both of these quadratic factors have negative discriminants, so only Complex roots.
Dec 9, 2015

Alternatively, use Complex arithmetic to find the linear factors, then combine in conjugate pairs to derive the Real factoring:

${a}^{6} + 1 = \left({a}^{2} + 1\right) \left({a}^{2} - \sqrt{3} a + 1\right) \left({a}^{2} + \sqrt{3} a + 1\right)$

#### Explanation:

Using De Moivre's Theorem:

${\left(\cos \theta + i \sin \theta\right)}^{n} = \cos \left(n \theta\right) + i \sin \left(n \theta\right)$

Hence, the following Complex numbers are all $6$th roots of $- 1$:

$\cos \left(\frac{\pi}{6}\right) + i \sin \left(\frac{\pi}{6}\right) = \frac{\sqrt{3}}{2} + \frac{1}{2} i$

$\cos \left(\frac{\pi}{2}\right) + i \sin \left(\frac{\pi}{2}\right) = i$

$\cos \left(\frac{5 \pi}{6}\right) + i \sin \left(\frac{5 \pi}{6}\right) = - \frac{\sqrt{3}}{2} + \frac{1}{2} i$

$\cos \left(\frac{7 \pi}{6}\right) + i \sin \left(\frac{7 \pi}{6}\right) = - \frac{\sqrt{3}}{2} - \frac{1}{2} i$

$\cos \left(\frac{3 \pi}{2}\right) + i \sin \left(\frac{3 \pi}{2}\right) = - i$

$\cos \left(\frac{11 \pi}{6}\right) + i \sin \left(\frac{11 \pi}{6}\right) = \frac{\sqrt{3}}{2} - \frac{1}{2} i$

(Graph: the six $6$th roots of $-1$ plotted on the unit circle in the Complex plane.)

Taking these in conjugate pairs we find quadratic factors with Real coefficients:

$\left(a - i\right) \left(a + i\right) = {a}^{2} + 1$

$\left(a - \left(\frac{\sqrt{3}}{2} + \frac{1}{2} i\right)\right) \left(a - \left(\frac{\sqrt{3}}{2} - \frac{1}{2} i\right)\right) = {a}^{2} - \sqrt{3} a + 1$

$\left(a + \left(\frac{\sqrt{3}}{2} + \frac{1}{2} i\right)\right) \left(a + \left(\frac{\sqrt{3}}{2} - \frac{1}{2} i\right)\right) = {a}^{2} + \sqrt{3} a + 1$

Hence the factorisation with Real coefficients is:

${a}^{6} + 1 = \left({a}^{2} + 1\right) \left({a}^{2} - \sqrt{3} a + 1\right) \left({a}^{2} + \sqrt{3} a + 1\right)$
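Either derivation can be sanity-checked numerically. Since both sides are polynomials of degree 6, agreement at seven distinct points determines the identity; the sketch below (pure standard library) checks seven values:

```python
import math

# Spot-check the real factorisation
#   a^6 + 1 = (a^2 + 1)(a^2 - sqrt(3) a + 1)(a^2 + sqrt(3) a + 1)
# at seven points; two degree-6 polynomials agreeing at 7 points are equal.
r3 = math.sqrt(3)
for a in [-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0]:
    product = (a**2 + 1) * (a**2 - r3*a + 1) * (a**2 + r3*a + 1)
    assert abs(product - (a**6 + 1)) < 1e-6
```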
https://www.mathwarehouse.com/calculus/derivatives/how-to-use-the-derivative-definition.php
# How to Use the Definition of the Derivative

### Quick Overview

- The definition of the derivative is used to find derivatives of basic functions.
- The limit in the definition of the derivative always has the $$\frac 0 0$$ indeterminate form. Consequently, we cannot evaluate it directly, but have to manipulate the expression first.
- We can use the definition to find the derivative function, or to find the value of the derivative at a particular point.
- There are a few standard variations in how we write out the definition (see below for details).

### Variations in the Derivative Definition

The following are all standard variations for writing the definition of the derivative of $$f(x)$$. The examples and exercises that follow will make use of each variation.

1. $$\displaystyle f'(x) = \lim_{\Delta x \to 0} \frac{f(x+\Delta x) - f(x)}{\Delta x}$$
2. $$\displaystyle f'(x) = \lim_{h\to 0} \frac{f(x + h) - f(x)} h$$
3. $$\displaystyle f'(a) = \lim_{x\to a} \frac{f(x) - f(a)}{x-a}$$

### Examples

##### Example 1

Use the first version of the definition of the derivative to find $$f'(3)$$ for $$f(x) = 5x^2$$.

Step 1

Replace the $$x$$'s with 3's in the definition.

\begin{align*} f'(3) & = \displaystyle\lim_{\Delta x \to 0} \frac{f(3+\Delta x) - f(3)}{\Delta x} \end{align*}

Note: '$$\Delta x$$' is considered a single symbol. So replacing the $$x$$'s with 3's does not change the $$\Delta x$$'s.

Step 2

Evaluate the functions in the definition.
\begin{align*} f'(3) & = \displaystyle\lim_{\Delta x \to 0} \frac{\blue{f(3+\Delta x)} - \red{f(3)}}{\Delta x}\\[6pt] & = \displaystyle\lim_{\Delta x \to 0} \frac{\blue{5(3+\Delta x)^2} - \red{5(3^2)}}{\Delta x}\\[6pt] & = \displaystyle\lim_{\Delta x \to 0} \frac{\blue{5\left(9+6\Delta x + (\Delta x)^2\right)} - \red{45}}{\Delta x}\\[6pt] & = \displaystyle\lim_{\Delta x \to 0} \frac{\blue{45+30\Delta x + 5(\Delta x)^2} - \red{45}}{\Delta x} \end{align*}

Step 3

Simplify until the denominator no longer approaches 0 when $$\Delta x \to 0$$.

\begin{align*} f'(3) & = \displaystyle\lim_{\Delta x \to 0} \frac{45+30\Delta x + 5(\Delta x)^2 - 45}{\Delta x}\\[6pt] & = \displaystyle\lim_{\Delta x \to 0} \frac{30\Delta x + 5(\Delta x)^2}{\Delta x}\\[6pt] & = \displaystyle\lim_{\Delta x \to 0} \frac{\Delta x\left(30 + 5\Delta x\right)}{\Delta x}\\[6pt] & = \displaystyle\lim_{\Delta x \to 0} (30 + 5\Delta x) \end{align*}

Step 4

Evaluate the simpler limit.

$$\displaystyle\lim_{\blue {\Delta x \to 0}} (30 + 5\blue{\Delta x}) = 30 + 5\blue{(0)} = 30$$

$$f'(3) = 30$$ when $$f(x) = 5x^2$$

##### Example 2

Use the second version of the definition of the derivative to find $$\frac{df}{dx}$$ for the function $$f(x) = \sqrt x$$.

Step 1

Evaluate the functions in the definition.

\begin{align*} \frac{df}{dx} & = \displaystyle\lim_{h\to 0} \frac{f(x+h) - f(x)} h\\[6pt] & = \displaystyle\lim_{h\to 0} \frac{\sqrt{x+h} - \sqrt x} h \end{align*}

Step 2

Rationalize the numerator.

\begin{align*} \frac{df}{dx} & = \displaystyle\lim_{h\to 0} \frac{\sqrt{x+h} - \sqrt x} h \cdot \blue{\frac{\sqrt{x+h}+\sqrt x}{\sqrt{x+h}+\sqrt x}}\\[6pt] & = \displaystyle\lim_{h\to 0} \frac{(\sqrt{x+h})^2 - (\sqrt x)^2}{h(\sqrt{x+h}+\sqrt x)}\\[6pt] & = \displaystyle\lim_{h\to 0} \frac{x+h - x}{h(\sqrt{x+h}+\sqrt x)} \end{align*}

Step 3

Simplify until the denominator no longer approaches 0 as $$h\to 0$$.
\begin{align*} \frac{df}{dx} & = \displaystyle\lim_{h\to 0} \frac{\blue x+h - \blue x}{h(\sqrt{x+h}+\sqrt x)}\\[6pt] & = \displaystyle\lim_{h\to 0} \frac{\red h}{\red h(\sqrt{x+h}+\sqrt x)}\\[6pt] & = \displaystyle\lim_{h\to 0} \frac{1}{\sqrt{x+h}+\sqrt x} \end{align*}

Step 4

Evaluate this simpler limit.

$$\displaystyle\lim_{\blue{h\to 0}} \frac{1}{\sqrt{x+\blue h}+\sqrt x} = \frac{1}{\sqrt{x+\blue 0}+\sqrt x} = \frac{1}{\sqrt x+\sqrt x} = \frac{1}{2\sqrt x}$$

$$\displaystyle \frac{df}{dx} = \frac 1 {2\sqrt x}$$ when $$f(x) = \sqrt x$$.

##### Example 3

Use the third version of the definition of the derivative to evaluate $$f'(5)$$ when $$f(x) = \frac 1 x$$.

Step 1

Set up the definition of the derivative using the appropriate value for $$a$$. Note that $$a = 5$$ in this exercise, since we are evaluating $$f'(5)$$.

\begin{align*} f'(5) & = \displaystyle\lim_{x\to 5} \frac{f(x) - f(5)}{x - 5} \end{align*}

Step 2

Evaluate $$f(x)$$ and $$f(5)$$.

$$f'(5) = \displaystyle\lim_{x\to 5} \frac{\blue{f(x)} - \red{f(5)}}{x - 5} = \displaystyle\lim_{x\to 5} \frac{\blue{\frac 1 x} - \red{\frac 1 5}}{x - 5}$$

Step 3

Simplify until the denominator no longer approaches 0 as $$x$$ approaches 5.

\begin{align*} f'(5) & = \displaystyle\lim_{x\to 5} \frac{\frac 1 x - \frac 1 5}{x - 5}\\[6pt] & = \displaystyle\lim_{x\to 5} \frac{\left(\frac 1 x - \frac 1 5\right)}{(x - 5)} \cdot \blue{\frac{5x}{5x}}\\[6pt] & = \displaystyle\lim_{x\to 5} \frac{\frac{\blue{5x}}x - \frac{\blue{5x}} 5}{\blue{5x}(x - 5)}\\[6pt] & = \displaystyle\lim_{x\to 5} \frac{\blue 5 - \blue x}{\blue{5x}(x - 5)}\\[6pt] & = \displaystyle\lim_{x\to 5} \frac{-1\red{\left(x - 5\right)}}{5x\red{(x - 5)}}\\[6pt] & = \displaystyle\lim_{x\to 5} \left(-\frac 1 {5x}\right) \end{align*}

Step 4

Evaluate the simpler limit.

$$f'(5) = \displaystyle\lim_{\blue{x\to 5}} \left(-\frac 1 {5\blue x}\right) = - \frac 1 {5\blue{(5)}} = -\frac 1 {25}$$

$$f'(5) = -\frac 1 {25}$$ when $$f(x) = \frac 1 x$$.
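The three worked examples can be sanity-checked numerically: for a small $$h$$, the difference quotient $$\frac{f(x+h)-f(x)}{h}$$ should land close to each derivative value found above. A minimal sketch (the evaluation point $$x = 4$$ for Example 2 is our own choice):

```python
import math

# The difference quotient with a small h approximates the derivative.
def diff_quotient(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

assert abs(diff_quotient(lambda x: 5*x**2, 3) - 30) < 1e-3             # Example 1: f'(3) = 30
assert abs(diff_quotient(math.sqrt, 4) - 1/(2*math.sqrt(4))) < 1e-3    # Example 2 at x = 4
assert abs(diff_quotient(lambda x: 1/x, 5) - (-1/25)) < 1e-3           # Example 3: f'(5) = -1/25
```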
### A Word on Notation It is cumbersome to always have to write "find the derivative when $$f(x) =$$ ..." However, we can adapt the $$\frac{df}{dx}$$ notation to communicate the same idea. The notation $$\frac d {dx}\left( \sin 2x\right)$$ can be read "find the derivative of $$f(x) = \sin 2x$$." ##### Example 4 Use the first variation of the definition of the derivative to find $$\frac{d}{dx}\left(\sin 2x\right)$$ when $$x$$ is in radians. Step 1 Evaluate the functions in the definition of the derivative. \begin{align*} \frac{d}{dx}\left(\sin 2x\right) & = \lim_{\Delta x \to 0} \frac{\blue{f(x+\Delta x)} - \red{f(x)}}{\Delta x}\\[6pt] & = \lim_{\Delta x \to 0} \frac{\blue{\sin\left(2(x+\Delta x)\right)} - \red{\sin 2x}}{\Delta x}\\[6pt] & = \lim_{\Delta x \to 0} \frac{\sin\left(2x+2\Delta x\right) - \sin 2x}{\Delta x} \end{align*} Step 2 Use the Sum of Angles identity for the sine function. Recall that $$\sin(A+B) = \sin A\cos B + \sin B\cos A$$. \begin{align*} \frac{d}{dx}\left(\sin 2x\right) & = \lim_{\Delta x \to 0} \frac{\sin\left(\blue{2x}+\red{2\Delta x}\right) - \sin 2x}{\Delta x}\\[6pt] & = \lim_{\Delta x \to 0} \frac{\sin \blue{2x}\cos \red{2\Delta x}+\sin \red{2\Delta x}\cos \blue{2x} - \sin 2x}{\Delta x} \end{align*} Step 3 Rearrange the numerator so the terms containing $$\sin 2x$$ are together. Then, factor out the $$\sin 2x$$. \begin{align*} \frac{d}{dx}\left(\sin 2x\right) & = \lim_{\Delta x \to 0} \frac{\blue{\sin 2x} \cos 2\Delta x +\sin 2\Delta x \cos 2x - \blue{\sin 2x}}{\Delta x}\\[6pt] & = \lim_{\Delta x \to 0} \frac{\blue{\sin 2x}\cos 2\Delta x - \blue{\sin 2x}+\sin 2\Delta x \cos 2x}{\Delta x}\\[6pt] & = \lim_{\Delta x \to 0} \frac{\blue{\sin 2x}\left(\cos 2\Delta x - 1\right)+\sin 2\Delta x \cos 2x}{\Delta x} \end{align*} Step 4 Separate into two limits. 
\begin{align*} \frac{d}{dx}\left(\sin 2x\right) & = \lim_{\Delta x \to 0} \frac{\blue{\sin 2x \left(\cos 2\Delta x - 1\right)}+\red{\sin 2\Delta x \cos 2x }}{\Delta x}\\[6pt] & = \lim_{\Delta x \to 0}\left(\frac{\blue{\sin 2x \left(\cos 2\Delta x - 1\right)}}{\blue{\Delta x}}+\frac{\red{\sin 2\Delta x \cos 2x }}{\red{\Delta x}}\right)\\[6pt] & = \lim_{\Delta x \to 0} \frac{\blue{\sin 2x \left(\cos 2\Delta x - 1\right)}}{\blue{\Delta x}}+\lim_{\Delta x \to 0}\frac{\red{\sin 2\Delta x \cos 2x }}{\red{\Delta x}} \end{align*} Step 5 Evaluate the first limit using the techniques from Indeterminate Limits---Cosine Forms. \begin{align*} \lim_{\Delta x \to 0} \frac{\sin 2x \left(\cos 2\Delta x - 1\right)}{\Delta x} & = \blue{\lim_{\Delta x \to 0} \frac{\cos 2\Delta x - 1}{\Delta x}} \cdot \sin 2x \\[6pt] & = \blue{(0)} \cdot \sin 2x \\[6pt] & = 0 \end{align*} Our derivative calculations now look like this: $$\frac{d}{dx}\left(\sin 2x\right) = 0 + \displaystyle\lim_{\Delta x \to 0}\frac{\sin 2\Delta x \cos 2x}{\Delta x}$$ Step 6 Evaluate the remaining limit using the techniques from Indeterminate Limits---Sine Forms. \begin{align*} \lim_{\Delta x \to 0}\frac{\sin 2\Delta x \cos 2x}{\Delta x} & = \red{\lim_{\Delta x \to 0}\frac{\sin 2\Delta x}{\Delta x}}\cdot \cos 2x\\[6pt] & = \red{(2)}\cdot \cos 2x\\ & = 2\cos 2x \end{align*} $$\displaystyle \frac{df}{dx} = 2\cos 2x$$ when $$f(x) = \sin 2x$$ and $$x$$ is in radians.
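The result of Example 4 can likewise be checked numerically; a central-difference sketch (sample points chosen arbitrarily, $$x$$ in radians):

```python
import math

# Central-difference check that d/dx sin(2x) = 2 cos(2x) at several points.
h = 1e-6
for x in [0.0, 0.7, 1.5, 3.0]:
    numeric = (math.sin(2*(x + h)) - math.sin(2*(x - h))) / (2*h)
    assert abs(numeric - 2*math.cos(2*x)) < 1e-6
```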
https://123dok.net/article/the-makar-limanov-invariant-dart-europe-theses-portal.y9648v9v
# The Makar-Limanov invariant

The Makar-Limanov invariant [KML97] (ML for short) is an important tool which allows, in particular, to distinguish certain varieties from the affine space. In this chapter, we consider a homogeneous version of the ML invariant. For toric varieties and $T$-varieties of complexity one we give an explicit expression of the latter invariant in terms of the classification developed in Chapter 2. The triviality of the homogeneous ML invariant implies that of the usual one. As an application we show a first example of a non-rational affine variety having a trivial ML invariant. This is a major shortcoming for the ML invariant. Furthermore, we establish a birational characterization of affine varieties with trivial ML invariant and propose a field version of the ML invariant called the FML invariant. We conjecture that the triviality of the FML invariant implies rationality. We confirm this conjecture in dimension at most 3.

3.1. The homogeneous Makar-Limanov invariant

In this section we introduce the ML invariant and its homogeneous version, and show that there is a significant difference between these two invariants.

Definition 3.1.1. Let $X = \operatorname{Spec} A$ be a normal affine variety, and let $\operatorname{LND}(A)$ be the set of all LNDs on $A$. The Makar-Limanov invariant of $A$ (or, equivalently, of $X$) is defined as

$$\operatorname{ML}(X) = \operatorname{ML}(A) = \bigcap_{\partial \in \operatorname{LND}(A)} \ker \partial\,.$$

Similarly, if $A$ is effectively $M$-graded we let $\operatorname{LND}_h(A)$ be the set of all homogeneous LNDs on $A$, $\operatorname{LND}_{\mathrm{fib}}(A)$ be the set of all homogeneous LNDs of fiber type on $A$, and $\operatorname{LND}_{\mathrm{hor}}(A)$ be the set of all homogeneous LNDs of horizontal type on $A$. We define

$$\operatorname{ML}_h(X) = \operatorname{ML}_h(A) = \bigcap_{\partial \in \operatorname{LND}_h(A)} \ker \partial$$

the homogeneous Makar-Limanov invariant of $A$. We also let

$$\operatorname{ML}_{\mathrm{fib}}(A) = \bigcap_{\partial \in \operatorname{LND}_{\mathrm{fib}}(A)} \ker \partial, \qquad \operatorname{ML}_{\mathrm{hor}}(A) = \bigcap_{\partial \in \operatorname{LND}_{\mathrm{hor}}(A)} \ker \partial\,.$$

Clearly,

$$\operatorname{ML}(A) \subseteq \operatorname{ML}_h(A) \subseteq \operatorname{ML}_{\mathrm{fib}}(A), \quad \text{and} \quad \operatorname{ML}_h(A) = \operatorname{ML}_{\mathrm{hor}}(A) \cap \operatorname{ML}_{\mathrm{fib}}(A). \tag{11}$$

Remark 3.1.2. (i) Let $X = \operatorname{Spec} A$ be an affine variety.
Taking the kernel $\ker \partial$ of an LND $\partial$ on $A$ is the same as taking the ring of invariants $H^0(X, \mathcal{O}_X)^{\mathbb{G}_a}$ of the corresponding $\mathbb{G}_a$-action, see Remark 2.1.3. Therefore, the above invariants can be expressed in terms of the $\mathbb{G}_a$-actions on $X$.

(ii) Since two equivalent LNDs (see Definition 2.1.5) have the same kernel, to compute $\operatorname{ML}(A)$ or $\operatorname{ML}_h(A)$ it is sufficient to consider pairwise non-equivalent LNDs on $A$.

Now, we provide examples showing that, in general, the inclusions in (11) are strict and so, the homogeneous LNDs are not enough to compute the ML invariant.

Example 3.1.3. Let $A = k[x, y]$ with the grading given by $\deg x = 0$ and $\deg y = 1$. In this case, both partial derivatives $\partial_x = \partial/\partial x$ and $\partial_y = \partial/\partial y$ are homogeneous. Since $\ker \partial_x = k[y]$ and $\ker \partial_y = k[x]$ we have $\operatorname{ML}_h(A) = k$. Furthermore, it is easy to see that there is only one equivalence class of LNDs of fiber type. A representative of this class is $\partial_y$ (see Corollary 2.3.10). This yields $\operatorname{ML}_{\mathrm{fib}}(A) = k[x]$. Thus $\operatorname{ML}_h(A) \subsetneq \operatorname{ML}_{\mathrm{fib}}(A)$ in this case.

Example 3.1.4. To provide an example where $\operatorname{ML}(A) \subsetneq \operatorname{ML}_h(A)$ we consider the Koras-Russell threefold $X = \operatorname{Spec} A$, where

$$A = k[x, y, z, t]/(x + x^2 y + z^2 + t^3).$$

The ML invariant was first introduced in [KML97] to prove that $X \not\simeq \mathbb{A}^3$. In fact $\operatorname{ML}(A) = k[x]$ while $\operatorname{ML}(\mathbb{A}^3) = k$ [ML96]. In the recent paper [Dub09] Dubouloz shows that the cylinder over the Koras-Russell threefold has trivial ML invariant, i.e., $\operatorname{ML}(A[w]) = k$, where $w$ is a new variable.

Let $A[w]$ be graded by $\deg A = 0$ and $\deg w = 1$, and let $\partial$ be a homogeneous LND on $A[w]$. If $e := \deg \partial \le -1$ then $\partial(A) = 0$ and by Lemma 2.1.4 (i) we have that $\ker \partial = A$ and $\partial$ is equivalent to the partial derivative $\partial/\partial w$. If $e \ge 0$ then $\partial(w) = a w^{e+1}$, where $a \in A$, and so, by Lemma 2.1.4 (vi), $w \in \ker \partial$. Furthermore, for any $a \in A$ we have $\partial(a) = b w^e$, for a unique $b \in A$. We define a derivation $\bar\partial : A \to A$ by $\bar\partial(a) = b$. Since $\partial^r(a) = \bar\partial^r(a) w^{re}$, the derivation $\bar\partial$ is LND. This yields $\operatorname{ML}_h(A[w]) = \operatorname{ML}(A) = k[x]$ while $\operatorname{ML}(A[w]) = k$.

Remark 3.1.5. In Example 3.1.4, the $T$-action on $X \times \mathbb{A}^1$ is of complexity three.
On the contrary, in Section 3.2 we show that if $X$ is a normal affine $T$-variety of complexity zero, i.e., a toric variety, then $\operatorname{ML}(X) = \operatorname{ML}_h(X)$. To our best knowledge, it is unknown if the equality $\operatorname{ML}(X) = \operatorname{ML}_h(X)$ holds in complexity one or two. Nevertheless, Theorem 4.5 in [FZ05a] shows that it does hold for $k$-surfaces.

In the following two sections we apply the results in Sections 2.2 and 2.3 in order to compute $\operatorname{ML}_h(A)$ in the case where the complexity of the $T$-action on $\operatorname{Spec} A$ is 0 or 1. We also give some partial results for the usual invariant $\operatorname{ML}(A)$ in this particular case.

3.2. ML-invariant of toric varieties

We treat now the case of affine toric varieties. Let $\sigma \subseteq N_{\mathbb{Q}}$ be a pointed polyhedral cone and $\omega \subseteq M_{\mathbb{Q}}$ be its dual cone.

Proposition 3.2.1. Let $A = k[\omega_M]$ be an affine semigroup algebra so that $X = \operatorname{Spec} A$ is a toric variety. Then $\operatorname{ML}(A) = \operatorname{ML}_h(A) = k[\theta_M]$, where $\theta \subseteq M_{\mathbb{Q}}$ is the maximal subspace contained in $\omega$. In particular $\operatorname{ML}(A) = k$ if and only if $\sigma$ is of complete dimension, i.e., if and only if there is no torus factor in $X$.

Proof. By Corollary 2.2.11 and Theorem 2.2.7, the pairwise non-equivalent homogeneous LNDs on $A$ are in one to one correspondence with the rays of $\sigma$. For any ray $\rho \subseteq \sigma$ and any $e \in S_\rho$ as in Lemma 2.2.4, the kernel of the corresponding homogeneous LND is

$$\ker \partial_{\rho, e} = k[\tau_M],$$

where $\tau \subseteq \omega$ is the facet dual to $\rho$.

Since $\theta \subseteq \omega$ is the intersection of all facets, we have $\operatorname{ML}_h(A) = k[\theta_M]$. Furthermore, the characters in $k[\theta_M] \subseteq A$ are invertible functions on $A$ and so, by Lemma 2.1.4 (iii), $\partial(k[\theta_M]) = 0$ for all $\partial \in \operatorname{LND}(A)$. Hence $k[\theta_M] \subseteq \operatorname{ML}(A)$, proving the lemma.

3.3. ML-invariant of $T$-varieties of complexity one

In this section we give a combinatorial description of the homogeneous ML invariant of $T$-varieties of complexity one in terms of the Altmann-Hausen description. Let $A = A[C, \mathcal{D}]$, where $\mathcal{D}$ is a proper $\sigma$-polyhedral divisor on a smooth curve $C$. We first compute $\operatorname{ML}_{\mathrm{fib}}(A)$.
If $A$ is non-elliptic (elliptic, respectively) we let $\{\rho_i\}$ be the set of all rays of $\sigma$ (of all rays of $\sigma$ such that $\rho \cap \deg \mathcal{D} = \emptyset$, respectively). In both cases we let $\tau_i \subseteq M_{\mathbb{Q}}$ denote the facet dual to $\rho_i$, and $\theta = \bigcap_i \tau_i$.

Lemma 3.3.1. With the notation as above,

$$\operatorname{ML}_{\mathrm{fib}}(A) = \bigoplus_{m \in \theta_M} A_m \chi^m.$$

Proof. By Corollary 2.3.12, for every ray $\rho_i$ there is a homogeneous LND $\partial_i$ of fiber type with kernel

$$\ker \partial_i = \bigoplus_{m \in \tau_i \cap M} A_m \chi^m.$$

By Corollary 2.3.10 any homogeneous LND of fiber type on $A$ is equivalent to one of the $\partial_i$. Finally, taking the intersection $\bigcap_i \ker \partial_i$ gives the desired description of $\operatorname{ML}_{\mathrm{fib}}(A)$.

Remark 3.3.2. If $A$ is non-elliptic, then $\theta \subseteq M_{\mathbb{Q}}$ is the maximal subspace contained in $\omega$. In particular, if $A$ is parabolic then $\theta = \{0\}$ and $\operatorname{ML}_{\mathrm{fib}}(A) = A_0$, and if $A$ is hyperbolic then $\theta = M_{\mathbb{Q}}$ and $\operatorname{ML}_{\mathrm{fib}}(A) = A$.

If there is no LND of horizontal type on $A$, then $\operatorname{ML}_{\mathrm{hor}}(A) = A$ and $\operatorname{ML}_h(A) = \operatorname{ML}_{\mathrm{fib}}(A)$. In the sequel we assume that $A$ admits a homogeneous LND of horizontal type.

Lemma 3.3.3. With the notation as before, if $\partial$ is a homogeneous LND on $A$ of horizontal type, then

$$\operatorname{ML}_{\mathrm{hor}}(A) = \bigoplus_{m \in \delta \cap L} k\,\varphi_m \chi^m,$$

where $L = L(\partial)$ and $\varphi_m \in A_m$ satisfies the relation $\operatorname{div}(\varphi_m) + \mathcal{D}(m) = 0$.

Proof. We treat first the non-elliptic case. By Corollary 2.3.27 for every $\delta_i$ there is a homogeneous LND $\partial_i$ of horizontal type with kernel

$$\ker \partial_i = \bigoplus_{m \in \delta_i \cap L_i} k\,\varphi_m \chi^m,$$

where $L_i = L(\partial_i)$ and $\varphi_m \in A_m$ is such that $\operatorname{div}(\varphi_m) + \mathcal{D}(m) = 0$. By Corollary 2.3.28, any homogeneous LND of horizontal type on $A$ is equivalent to one of the $\partial_i$. Taking the intersection of all $\ker \partial_i$ gives the lemma in this case.

Let further $A$ be elliptic, and let $\partial$ be a homogeneous LND of horizontal type on $A$. Let $z_0, z \in \mathbb{P}^1$, and $\eta$ and $L$ be as in Theorem 2.3.26, so that

$$\ker \partial = \bigoplus_{m \in \eta \cap L} k\,\varphi_m \chi^m,$$

where $\varphi_m \in A_m$ satisfies $\operatorname{div}(\varphi_m)|_{\mathbb{P}^1 \setminus \{z\}} + \mathcal{D}(m)|_{\mathbb{P}^1 \setminus \{z\}} = 0$. By permuting the roles of $z_0$ and $z$ in Theorem 2.3.26 we obtain another LND $\partial'$ on $A$. The description of $\ker \partial$ and $\ker \partial'$ shows that

$$\ker \partial \cap \ker \partial' = \bigoplus_{m \in \eta \cap L \cap B} k\,\varphi_m \chi^m,$$

where $\varphi_m \in A_m$ is such that $\operatorname{div}(\varphi_m) + \mathcal{D}(m) = 0$. Now the lemma follows by an argument similar to that in the non-elliptic case.

Theorem 3.3.4.
In the notation of Lemmas 3.3.1 and 3.3.3, if there is no homogeneous LND of horizontal type on $A$, then

$$\operatorname{ML}_h(A) = \bigoplus_{m \in \theta_M} A_m \chi^m.$$

If $\partial$ is a homogeneous LND of horizontal type on $A$, then

$$\operatorname{ML}_h(A) = \bigoplus_{m \in \theta \cap \delta \cap L} k\,\varphi_m \chi^m,$$

where $L = L(\partial)$ and $\varphi_m \in A_m$ is such that $\operatorname{div}(\varphi_m) + \mathcal{D}(m) = 0$.

Proof. The assertions follow immediately by virtue of (11) and Lemmas 3.3.1 and 3.3.3.

In the following corollary we give a criterion of triviality of the homogeneous Makar-Limanov invariant $\operatorname{ML}_h(A)$.

Corollary 3.3.5. With the notation as above, $\operatorname{ML}_h(A) = k$ if and only if one of the following conditions holds.

(i) $A$ is elliptic, $\operatorname{rank}(M) \ge 2$, and $\deg \mathcal{D}$ does not intersect any of the rays of the cone $\omega$.

(ii) $A$ admits a homogeneous LND of horizontal type and $\theta \cap \delta = \{0\}$.

In particular, in both cases $\operatorname{ML}(A) = k$.

Proof. By Lemma 3.3.1, (i) holds if and only if $\operatorname{ML}_{\mathrm{fib}}(A) = k$. By Theorem 3.3.4, (ii) holds if and only if there is a homogeneous LND of horizontal type and $\operatorname{ML}_h(A) = k$.

Remark 3.3.6. It is easily seen that $\operatorname{ML}_h(A) = k$ for $A$ as in Example 2.3.31.

3.3.1. A non-rational threefold with trivial Makar-Limanov invariant.

To exhibit such an example, we let $\sigma$ be a pointed polyhedral cone in $N_{\mathbb{Q}}$, where $\operatorname{rank}(M) = n \ge 2$. We let as before $A = A[C, \mathcal{D}]$, where $\mathcal{D}$ is a proper $\sigma$-polyhedral divisor on a smooth curve $C$. By Remark 1.3.10 (iii), $\operatorname{Frac} A = k(C)(M)$ and so $X = \operatorname{Spec} A$ is birational to $C \times \mathbb{P}^n$.

By Corollary 3.3.5, if $A$ is non-elliptic and $\operatorname{ML}(A) = k$, then $A$ admits a homogeneous LND of horizontal type. So $C \simeq \mathbb{A}^1$ and $X$ is rational. On the other hand, if $A$ is elliptic, Corollary 3.3.5 (i) is independent of the curve $C$. So if (i) is fulfilled, then $\operatorname{ML}(A) = k$ while $X$ is birational to $C \times \mathbb{P}^n$. This leads to the following proposition.

Proposition 3.3.7. Let $A = A[C, \mathcal{D}]$, where $\mathcal{D}$ is a proper $\sigma$-polyhedral divisor on a smooth projective curve $C$ of positive genus. Suppose further that $\deg \mathcal{D}$ is contained in the relative interior of $\sigma$. Then $\operatorname{ML}(A) = k$ whereas $\operatorname{Spec} A$ is non-rational.

Remark 3.3.8.
It is evident that $X$ in Proposition 3.3.7 is in fact stably non-rational, i.e., $X \times \mathbb{P}^\ell$ is non-rational for all $\ell \ge 0$, cf. [Pop10, Example 1.22]. In the remainder of this section we give a concrete geometric example illustrating this proposition.

Example 3.3.9. Letting $N = \mathbb{Z}^2$ and $M = \mathbb{Z}^2$ with the canonical bases and duality, we let $\sigma \subseteq N_{\mathbb{Q}}$ be the first quadrant, $\Delta = (1,1) + \sigma$, and $h = h_\Delta$ so that $h(m_1, m_2) = m_1 + m_2$. Furthermore, we let $A = A[C, \mathcal{D}]$, where $C \subseteq \mathbb{P}^2$ is the elliptic curve with affine equation $s^2 - t^3 + t = 0$, and $\mathcal{D} = \Delta \cdot P$ is the proper $\sigma$-polyhedral divisor on $C$ with $P$ being the point at infinity of $C$.

Since $C \not\simeq \mathbb{P}^1$ and $\deg \mathcal{D} = \Delta$, $A$ satisfies the assumptions of Proposition 3.3.7. Letting $k(C)$ be the function field of $C$, by Theorem 1.5.5 we obtain

$$A_{(m_1, m_2)} = H^0(C, \mathcal{O}((m_1 + m_2) P)) \subseteq k(C).$$

The functions $t, s \in k(C)$ are regular in the affine part of $C$, and have poles of order 2 and 3 at $P$, respectively. By the Riemann-Roch theorem $\dim H^0(C, \mathcal{O}(rP)) = r$ for all $r > 0$. Hence the functions $\{t^i, t^j s : 2i \le r \text{ and } 2j + 3 \le r\}$ form a basis of $H^0(C, \mathcal{O}(rP))$ (see [Har77] Chapter IV, Proposition 4.6). In this setting the first graded pieces are the $k$-modules

$$A_{(0,0)} = A_{(1,0)} = A_{(0,1)} = k\,,$$
$$A_{(2,0)} = A_{(1,1)} = A_{(0,2)} = k + kt\,,$$
$$A_{(3,0)} = A_{(2,1)} = A_{(1,2)} = A_{(0,3)} = k + kt + ks\,,$$
$$A_{(4,0)} = A_{(3,1)} = A_{(2,2)} = A_{(1,3)} = A_{(0,4)} = k + kt + kt^2 + ks\,.$$

It is easy to see that $A$ admits the following set of generators:

$$u_1 = \chi^{(1,0)},\ u_2 = \chi^{(0,1)},\ u_3 = t\chi^{(2,0)},\ u_4 = t\chi^{(1,1)},\ u_5 = t\chi^{(0,2)},$$
$$u_6 = s\chi^{(3,0)},\ u_7 = s\chi^{(2,1)},\ u_8 = s\chi^{(1,2)},\ u_9 = s\chi^{(0,3)}.$$

So $A \simeq k^{[9]}/I$, where $k^{[9]} = k[x_1, \ldots, x_9]$, and $I$ is the ideal of relations of the $u_i$ ($i = 1, \ldots, 9$). Using software for elimination theory one can compute a minimal set of generators of $I$.

The semigroup $\omega_M$ is spanned by $(1,0)$ and $(0,1)$, so letting $v = \chi^{(1,0)}$ and $w = \chi^{(0,1)}$ we obtain

$$A = k[v, w, tv^2, tvw, tw^2, sv^3, sv^2w, svw^2, sw^3] \subseteq k[s, t, v, w]/(s^2 - t^3 + t).$$

Thus $\operatorname{Spec} A$ is birationally dominated by $C_0 \times \mathbb{A}^2$, where $C_0 = C \setminus \{P\}$. Since $C \not\simeq \mathbb{P}^1$, by Lemma 2.3.14 there is no homogeneous LND of horizontal type on $A$.
There are two rays $\rho_i \subseteq \sigma$ spanned by the vectors $(1,0)$ and $(0,1)$. Since $\deg \mathcal{D} = \Delta$ is contained in the relative interior of $\sigma$, Corollaries 2.3.10 and 2.3.12 imply that there are exactly 2 pairwise non-equivalent homogeneous LNDs $\partial_i$ of fiber type which correspond to the rays $\rho_i$, $i = 1, 2$, respectively. The facet $\tau_1$ dual to $\rho_1$ is spanned by $(0,1)$, in the notation of Lemma 2.3.7. The LNDs $\partial_i$ are induced, under the isomorphism $A \simeq k^{[9]}/I$, by corresponding LNDs on $k^{[9]}$.

The orbit closure $\Theta = \pi^{-1}(0,0)$ over $(0,0) \in C$ is general and it is isomorphic to $\mathbb{A}^2 = \operatorname{Spec} k[x_1, x_2]$. The restrictions to $\Theta$ of the $\mathbb{G}_a$-actions $\phi_i$ corresponding to $\partial_i$, $i = 1, 2$, respectively, are given by

$$\phi_1|_\Theta : (t, (x_1, x_2)) \mapsto (x_1 + t x_2, x_2) \quad \text{and} \quad \phi_2|_\Theta : (t, (x_1, x_2)) \mapsto (x_1, x_2 + t x_1).$$

Furthermore, there is a unique singular point $\bar 0 \in X$ corresponding to the fixed point of the $T$-action on $X$. The point $\bar 0$ is given by the augmentation ideal

$$A_+ = \bigoplus_{m \in \omega_M \setminus \{0\}} A_m \chi^m.$$

On the other hand, let $A = A[C, \mathcal{D}]$, where $\mathcal{D}$ is a proper $\sigma$-polyhedral divisor on a smooth projective curve $C$. By Theorem 2.5 in [KR82], if $\operatorname{Spec} A$ is smooth, then $\operatorname{Spec} A \simeq \mathbb{A}^{n+1}$ (see also Proposition 3.1 in [Süs08]). In particular, $\operatorname{Spec} A$ is rational.

3.4. Birational geometry of varieties with trivial ML invariant

In this section we establish the following birational characterization of normal affine varieties with trivial ML invariant. Let $k$ be an algebraically closed field of characteristic 0.

Theorem 3.4.1. Let $X = \operatorname{Spec} A$ be an affine variety over $k$. If $\operatorname{ML}(X) = k$ then $X \simeq_{\mathrm{bir}} Y \times \mathbb{P}^2$ for some variety $Y$. Conversely, in any birational class $Y \times \mathbb{P}^2$ there is an affine variety $X$ with $\operatorname{ML}(X) = k$.

Proof. As usual $\operatorname{tr.deg}_k(K)$ denotes the transcendence degree of the field extension $k \subseteq K$. Let $K = \operatorname{Frac} A$ be the field of rational functions on $X$, so that $\operatorname{tr.deg}_k(K) \ge 2$. Since $\operatorname{ML}(X) = k$, there exist at least 2 non-equivalent LNDs $\partial_1, \partial_2 : A \to A$. We let $L_i = \operatorname{Frac}(\ker \partial_i) \subseteq K$, for $i = 1, 2$.
By Lemma 2.1.4 (vii), $L_i \subseteq K$ is a purely transcendental extension of degree 1, for $i = 1, 2$. We let $L = L_1 \cap L_2$. By an inclusion-exclusion argument we have $\operatorname{tr.deg}_L(K) = 2$. We let $\bar A$ be the 2-dimensional algebra over $L$

$$\bar A = A \otimes_k L\,.$$

Since $\operatorname{Frac} \bar A = \operatorname{Frac} A = K$ and $L \subseteq \ker \partial_i$ for $i = 1, 2$, the LND $\partial_i$ extends to a locally nilpotent $L$-derivation $\bar\partial_i$ by setting

$$\bar\partial_i(a \otimes l) = \partial_i(a) \otimes l, \quad \text{where } a \in A \text{ and } l \in L\,.$$

Furthermore, $\ker \bar\partial_i = \bar A \cap L_i$, for $i = 1, 2$, and so

$$\ker \bar\partial_1 \cap \ker \bar\partial_2 = \bar A \cap L_1 \cap L_2 = L\,.$$

Thus the Makar-Limanov invariant of the 2-dimensional $L$-algebra $\bar A$ is trivial. By the theorem in [ML, p. 41], $\bar A$ is isomorphic to an $L$-subalgebra of $L[x_1, x_2]$, where $x_1, x_2$ are new variables. Thus $K \simeq L(x_1, x_2)$, and so $X \simeq_{\mathrm{bir}} Y \times \mathbb{P}^2$, where $Y$ is any variety with $L$ as its field of rational functions. The second assertion follows from Lemma 3.4.2 below. This completes the proof.

The following lemma provides examples of affine varieties with trivial ML invariant in any birational class $Y \times \mathbb{P}^n$, $n \ge 2$. It is a generalization of Section 3.3.1. Let us introduce some notation. As before, we let $N$ be a lattice of rank $n \ge 2$ and $M$ be its dual lattice. We let $\sigma \subseteq N_{\mathbb{Q}}$ be a pointed polyhedral cone of full dimension. We fix $p \in \operatorname{rel.int}(\sigma) \cap N$. We let $\Delta = p + \sigma$ and $h = h_\Delta$ so that $h(m) = \langle p, m \rangle > 0$, for all $m \in \omega \setminus \{0\}$. Furthermore, letting $Y$ be a projective variety and $H$ be a semiample and big Cartier $\mathbb{Z}$-divisor on $Y$, we let $A = A[Y, \mathcal{D}]$, where $\mathcal{D}$ is the proper $\sigma$-polyhedral divisor $\mathcal{D} = \Delta \cdot H$, so that

$$\mathcal{D}(m) = \langle p, m \rangle \cdot H, \quad \text{for all } m \in \omega\,.$$

Recall that $\operatorname{Frac} A = k(Y)(M)$ so that $\operatorname{Spec} A \simeq_{\mathrm{bir}} Y \times \mathbb{P}^n$.

Lemma 3.4.2. With the above notation, the affine variety $X = \operatorname{Spec} A[Y, \mathcal{D}]$ has trivial ML invariant.

Proof. Let $\{\rho_i\}_i$ be the set of all rays of $\sigma$ and $\{\tau_i\}_i$ the set of the corresponding dual facets of $\omega$. Since $rH$ is big for all $r > 0$, Theorem 2.4.6 shows that there exists $e_i \in S_{\rho_i}$ such that $\dim \Phi_{e_i}$ is positive, and so we can choose a non-zero $\varphi_i \in \Phi_{e_i}$. In this case, Theorem 2.4.4 shows that there exists a non-trivial locally nilpotent derivation $\partial_{\rho_i, e_i, \varphi_i}$, with

$$\ker \partial_{\rho_i, e_i, \varphi_i} = \bigoplus_{m \in \tau_i \cap M} A_m \chi^m.$$
Since the cone $\sigma$ is pointed and has full dimension, the same holds for $\omega$. Thus, the intersection of all facets reduces to one point, $\bigcap_i \tau_i = \{0\}$, and so

$$\bigcap_i \ker \partial_{\rho_i, e_i, \varphi_i} \subseteq A_0 = H^0(Y, \mathcal{O}_Y) = k.$$

This yields $\operatorname{ML}(A) = \operatorname{ML}_h(A) = \operatorname{ML}_{\mathrm{fib}}(A) = k$.

Example 3.4.3. With the notation as in the proof of Lemma 3.4.2, we can provide yet another explicit construction. We fix isomorphisms $M \simeq \mathbb{Z}^n$ and $N \simeq \mathbb{Z}^n$ such that the standard bases $\{\mu_1, \cdots, \mu_n\}$ and $\{\nu_1, \cdots, \nu_n\}$ for $M_{\mathbb{Q}}$ and $N_{\mathbb{Q}}$, respectively, are mutually dual. We let $\sigma$ be the first quadrant in $N_{\mathbb{Q}}$, and $p = \sum_i \nu_i$, so that

$$h(m) = \sum_i m_i, \quad \text{and} \quad \mathcal{D}(m) = \sum_i m_i \cdot H, \quad \text{where } m = (m_1, \cdots, m_n), \text{ and } m_i \in \mathbb{Q}_{\ge 0}.$$

We let $\rho_i \subseteq \sigma$ be the ray spanned by the vector $\nu_i$, and let $\tau_i$ be its dual facet. In this setting, $S_{\rho_i} = (\tau_i - \mu_i) \cap M$. Furthermore, letting $e_{i,j} = -\mu_i + \mu_j$ (where $j \ne i$) yields $h(m) = h(m + e_{i,j})$, so that $\mathcal{D}(e_{i,j}) = 0$, and $\Phi_{e_{i,j}} = H^0(Y, \mathcal{O}_Y) = k$. Choosing $\varphi_{i,j} = 1 \in \Phi_{e_{i,j}}$ we obtain that $\partial_{i,j} := \partial_{\rho_i, e_{i,j}, \varphi_{i,j}}$, given by

$$\partial_{i,j}(f \chi^m) = \langle m, \nu_i \rangle \cdot f \chi^{m + e_{i,j}}, \quad \text{where } i, j \in \{1, \cdots, n\},\ i \ne j,$$

is a homogeneous LND on $A = A[Y, \mathcal{D}]$ with degree $e_{i,j}$ and kernel

$$\ker \partial_{i,j} = \bigoplus_{m \in \tau_i \cap M} A_m \chi^m.$$

As in the proof of Lemma 3.4.2 the intersection $\bigcap_{i,j} \ker \partial_{i,j} = k$, and so $\operatorname{ML}(X) = k$.

We can give a geometrical description of $X$. Consider the corresponding $\mathcal{O}_Y$-algebra $\widetilde{\mathcal{A}}$, so that $\widetilde X = \operatorname{Spec}_Y \widetilde{\mathcal{A}}$ is the vector bundle associated to the locally free sheaf $\bigoplus_{i=1}^n \mathcal{O}_Y(H)$ (see Ch. II Ex. 5.18 in [Har77]). We let $\pi : \widetilde X \to Y$ be the corresponding affine morphism.

The morphism $\varphi : \widetilde X \to X$ induced by taking global sections corresponds to the contraction of the zero section to a point $\bar 0$. We let $\theta := \pi \circ \varphi^{-1} : X \setminus \{\bar 0\} \to Y$. The point $\bar 0$ corresponds to the augmentation ideal $A \setminus k$. It is the only attractive fixed point of the $T$-action. The orbit closures of the $T$-action on $X$ are $\Theta_y := \overline{\theta^{-1}(y)} = \theta^{-1}(y) \cup \{\bar 0\}$, for all $y \in Y$. Letting $\chi^{\mu_i} = u_i$, $\Theta_y$ is equivariantly isomorphic to $\operatorname{Spec} k[\omega_M] = \operatorname{Spec} k[u_1, \cdots, u_n] \simeq \mathbb{A}^n$.
The $\mathbb{G}_a$-action $\phi_{i,j} : \mathbb{G}_a \times X \to X$ induced by the homogeneous LND $\partial_{i,j}$ restricts to a $\mathbb{G}_a$-action on $\Theta_y$ given by

$$\phi_{i,j}|_{\Theta_y} : \mathbb{G}_a \times \mathbb{A}^n \to \mathbb{A}^n, \quad \text{where} \quad u_i \mapsto u_i + t u_j, \quad u_r \mapsto u_r, \ \forall r \neq i.$$

Moreover, the unique fixed point $\bar{0}$ is singular unless $Y$ is a projective space, and there is no other singular point. By Theorem 2.9 in [Lie09b], $X$ has rational singularities if and only if $\mathcal{O}_Y$ and $\mathcal{O}_Y(H)$ are acyclic. The latter assumption can be fulfilled by taking, for instance, $Y$ toric, or $Y$ a rational surface and $H$ a large enough multiple of an ample divisor.

3.5. A field version of the ML invariant

The main application of the ML invariant is to distinguish some varieties from the affine space. Nevertheless, this invariant is far from being optimal, as we have seen in the previous section. Indeed, there is a large class of non-rational normal affine varieties with trivial ML invariant. To eliminate such a pathology, we propose below a generalization of the classical ML invariant.

Let $A$ be a finitely generated normal domain. We define the FML invariant of $A$ as the subfield of $K = \operatorname{Frac} A$ given by

$$\operatorname{FML}(A) = \bigcap_{\partial \in \operatorname{LND}(A)} \operatorname{Frac}(\ker \partial).$$

In the case where $A$ is $M$-graded we define $\operatorname{FML}_h$ and $\operatorname{FML}_{\mathrm{fib}}$ in the analogous way.

Remark 3.5.1. Let $A = k[x_1, \ldots, x_n]$, so that $K = k(x_1, \ldots, x_n)$. For the partial derivative $\partial_i = \partial/\partial x_i$ we have $\operatorname{Frac}(\ker \partial_i) = k(x_1, \ldots, \widehat{x_i}, \ldots, x_n)$, where $\widehat{x_i}$ means that $x_i$ is omitted. This yields

$$\operatorname{FML}(A) \subseteq \bigcap_{i=1}^n \operatorname{Frac}(\ker \partial_i) = k,$$

and so $\operatorname{FML}(A) = k$. Thus, the FML invariant of the affine space is trivial.

For any finitely generated normal domain $A$ there is an inclusion $\operatorname{ML}(A) \subseteq \operatorname{FML}(A)$. A priori, since $\operatorname{FML}(\mathbb{A}^n) = k$, the FML invariant is stronger than the classical one in the sense that it can distinguish more varieties from the affine space than the classical one. In the next proposition we show that the classical ML invariant can be recovered from the FML invariant.

Proposition 3.5.2. Let $A$ be a finitely generated normal domain. Then $\operatorname{ML}(A) = \operatorname{FML}(A) \cap A$.

Proof. We must show that for any LND $\partial$ on $A$, $\ker \partial = \operatorname{Frac}(\ker \partial) \cap A$.
The inclusion "$\subseteq$" is trivial. To prove the converse inclusion, we fix an element $a \in \operatorname{Frac}(\ker \partial) \cap A$. Letting $b, c \in \ker \partial$ be such that $ac = b$, Lemma 2.1.4 (ii) shows that $a \in \ker \partial$.

Let $A = A[Y, D]$ for some proper $\sigma$-polyhedral divisor $D$ on a normal semiprojective variety $Y$. In this case $K = \operatorname{Frac} A = k(Y)(M)$, where $k(Y)(M)$ corresponds to the field of fractions of the semigroup algebra $k(Y)[M]$. It is a purely transcendental extension of $k(Y)$ of degree $\operatorname{rank} M$.

Let $\partial$ be a homogeneous LND of fiber type on $A$. By definition, $k(Y) \subseteq \operatorname{Frac}(\ker \partial)$ and so $k(Y) \subseteq \operatorname{FML}_{\mathrm{fib}}(A)$. This shows that pathological examples as in Lemma 3.4.2 cannot occur. Let us formulate the following conjecture.

Conjecture 3.5.3. Let $X$ be an affine variety. If $\operatorname{FML}(X) = k$, then $X$ is rational.

The following lemma proves Conjecture 3.5.3 in the particular case where $X \simeq_{\mathrm{bir}} C \times \mathbb{P}^n$, with $C$ a curve.

Lemma 3.5.4. Let $X = \operatorname{Spec} A$ be an affine variety such that $X \simeq_{\mathrm{bir}} C \times \mathbb{P}^n$, where $C$ is a curve with field of rational functions $L$. If $C$ has positive genus, then $\operatorname{FML}(X) \supseteq L$. In particular, if $\operatorname{FML}(X) = k$, then $C$ is rational.

Proof. Assume that $C$ has positive genus. We have $K = \operatorname{Frac} A = L(x_1, \ldots, x_n)$, where $x_1, \ldots, x_n$ are new variables. We claim that $L \subseteq \operatorname{FML}(A)$. Indeed, let $\partial$ be an LND on $A$ and let $f, g \in L \setminus k$. Since $\operatorname{tr.deg}_k(L) = 1$, there exists a polynomial $P \in k[x, y] \setminus k$ such that $P(f, g) = 0$. Applying the derivation $\partial : K \to K$ to $P(f, g)$ we obtain

$$\frac{\partial P}{\partial x}(f, g) \cdot \partial(f) + \frac{\partial P}{\partial y}(f, g) \cdot \partial(g) = 0.$$

Since $f$ and $g$ are not constant, we may suppose that $\frac{\partial P}{\partial x}(f, g) \neq 0$ and $\frac{\partial P}{\partial y}(f, g) \neq 0$. Hence $\partial(f) = 0$ if and only if $\partial(g) = 0$. This shows that one of the two following possibilities occurs: $L \subseteq \operatorname{Frac}(\ker \partial)$ or $L \cap \operatorname{Frac}(\ker \partial) = k$.

Assume first that $L \cap \operatorname{Frac}(\ker \partial) = k$. Then, by Lemma 2.1.4 (i), $\operatorname{Frac}(\ker \partial) = k(x_1, \ldots, x_n)$, and so the field extension $\operatorname{Frac}(\ker \partial) \subseteq K$ is not purely transcendental. This contradicts Lemma 2.1.4 (vii). Thus $L \subseteq \operatorname{Frac}(\ker \partial)$, proving the claim and the lemma.

Remark 3.5.5.
We can apply Lemma 3.5.4 to show that the FML invariant carries more information than the usual ML invariant. Indeed, let, in the notation of Lemma 3.4.2, $Y$ be a smooth projective curve of positive genus. Lemma 3.4.2 shows that $\operatorname{ML}(A[Y, D]) = k$, while by Lemma 3.5.4, $\operatorname{FML}(A[Y, D]) \supseteq k(Y)$.

In the following theorem we prove Conjecture 3.5.3 in dimension at most 3.

Theorem 3.5.6. Let $X$ be an affine variety of dimension $\dim X \leq 3$. If $\operatorname{FML}(X) = k$, then $X$ is rational.

Proof. Since $\operatorname{FML}(X)$ is trivial, the same holds for $\operatorname{ML}(X)$. If $\dim X \leq 2$, then $\operatorname{ML}(X) = k$ implies that $X$ is rational (see e.g. [ML, p. 41]). Assume that $\dim X = 3$. Lemma 3.4.1 implies that $X \simeq_{\mathrm{bir}} C \times \mathbb{P}^2$ for some curve $C$, while by Lemma 3.5.4, $C$ is a rational curve.
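To make the algebraic-dependence step in the proof of Lemma 3.5.4 concrete, here is an illustrative instance with an assumed curve (in characteristic zero): take $C$ with affine equation $y^2 = x^3 + 1$ and $f = x$, $g = y$ in $L = k(C)$.

```latex
% Illustration of the relation P(f, g) = 0 from the proof of Lemma 3.5.4,
% for the (assumed) curve y^2 = x^3 + 1 with f = x, g = y:
P(X, Y) = Y^2 - X^3 - 1, \qquad
\frac{\partial P}{\partial x}(f, g)\,\partial(f) + \frac{\partial P}{\partial y}(f, g)\,\partial(g)
    = -3 f^2\, \partial(f) + 2 g\, \partial(g) = 0.
```

Since $-3f^2$ and $2g$ are non-zero in $L$, we get $\partial(f) = 0$ if and only if $\partial(g) = 0$, which is exactly the dichotomy used in the proof.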
https://oalevelsolutions.com/past-papers-solutions/cambridge-international-examinations/as-a-level-mathematics-9709/pure-mathematics-p1-9709-01/year-2016-may-june-p1-9709-13/cie_16_mj_9709_13_q_3/
# Past Papers’ Solutions | Cambridge International Examinations (CIE) | AS & A level | Mathematics 9709 | Pure Mathematics 1 (P1-9709/01) | Year 2016 | May-Jun | (P1-9709/13) | Q#3

Question

A curve is such that  and passes through the point P(1,9). The gradient of the curve at P is 2.

i. Find the value of the constant k.

ii. Find the equation of the curve.

Solution

i. We are given that;

Gradient (slope) of the curve is the derivative of the equation of the curve. Hence the gradient of the curve  with respect to  is:

Hence, we have an expression for the gradient of the curve. We are also given that the curve passes through the point P(1,9) and the gradient of the curve at P is 2.

The gradient (slope) of the curve at a particular point is the derivative of the equation of the curve at that particular point, and can be found by substituting the x-coordinate of that point in the expression for the gradient of the curve;

From the given information we can write;

Substituting ;

ii. We can find the equation of the curve from its derivative through integration;

We are given that;

As we found in (i) that ;

Therefore;

Rule for integration of  is:

Rule for integration of  is:

If a point  lies on the curve , we can find out the value of . We substitute the values of  and  in the equation obtained from integration of the derivative of the curve i.e. .

We are also given that the curve passes through the point P(1,9) and the gradient of the curve at P is 2. Substitution of the x and y coordinates of point P in the above equation;

Therefore the equation of the curve is;
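The displayed equations from the original paper did not survive extraction, so the sketch below uses a stand-in gradient dy/dx = k·x² purely to illustrate the two-step method of the solution: fix k from the known gradient at P, then integrate and fix the constant of integration from the point P.

```python
# Hedged sketch of the method with an invented stand-in derivative
# dy/dx = k*x**2 (the original expression is lost); P = (1, 9), gradient 2.
from fractions import Fraction as Fr

x0, y0, gradient_at_P = Fr(1), Fr(9), Fr(2)

# (i) k * x0**2 = 2 at x0 = 1, hence k = 2
k = gradient_at_P / x0**2

# (ii) y = k*x**3/3 + C, and y(x0) = y0 fixes C
C = y0 - k * x0**3 / 3
print(k, C)  # k = 2 and C = 25/3, so y = (2/3)x^3 + 25/3
```

The same two steps (substitute the point into the gradient, then integrate and substitute the point again) apply whatever the actual given derivative was.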
https://yutsumura.com/page/19/?wpfpaction=add&postid=2707
## Problem 399

Prove that the cubic polynomial $x^3-2$ is irreducible over the field $\Q(i)$.

## Problem 398

Prove that any algebraically closed field is infinite.

## Problem 397

Suppose $A$ is a positive definite symmetric $n\times n$ matrix. (a) Prove that $A$ is invertible. (b) Prove that $A^{-1}$ is symmetric. (c) Prove that $A^{-1}$ is positive-definite. (MIT, Linear Algebra Exam Problem)

## Problem 396

A real symmetric $n \times n$ matrix $A$ is called positive definite if $\mathbf{x}^{\trans}A\mathbf{x}>0$ for all nonzero vectors $\mathbf{x}$ in $\R^n$. (a) Prove that the eigenvalues of a real symmetric positive-definite matrix $A$ are all positive. (b) Prove that if the eigenvalues of a real symmetric matrix $A$ are all positive, then $A$ is positive-definite.

## Problem 395

Suppose that the vectors $\mathbf{v}_1=\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \qquad \mathbf{v}_2=\begin{bmatrix} -4 \\ 0 \\ -3 \\ -2 \\ 1 \end{bmatrix}$ are basis vectors for the null space of a $4\times 5$ matrix $A$. Find a vector $\mathbf{x}$ such that $\mathbf{x}\neq0, \quad \mathbf{x}\neq \mathbf{v}_1, \quad \mathbf{x}\neq \mathbf{v}_2,$ and $A\mathbf{x}=\mathbf{0}.$ (Stanford University, Linear Algebra Exam Problem)

## Problem 394

Determine the values of $x$ so that the matrix $A=\begin{bmatrix} 1 & 1 & x \\ 1 &x &x \\ x & x & x \end{bmatrix}$ is invertible. For those values of $x$, find the inverse matrix $A^{-1}$.

## Problem 393

(a) Let $A$ be a $6\times 6$ matrix and suppose that $A$ can be written as $A=BC,$ where $B$ is a $6\times 5$ matrix and $C$ is a $5\times 6$ matrix. Prove that the matrix $A$ cannot be invertible. (b) Let $A$ be a $2\times 2$ matrix and suppose that $A$ can be written as $A=BC,$ where $B$ is a $2\times 3$ matrix and $C$ is a $3\times 2$ matrix. Can the matrix $A$ be invertible?
## Problem 392

Let $V$ be the subspace of $\R^4$ defined by the equation $x_1-x_2+2x_3+6x_4=0.$ Find a linear transformation $T$ from $\R^3$ to $\R^4$ such that the null space $\calN(T)=\{\mathbf{0}\}$ and the range $\calR(T)=V$. Describe $T$ by its matrix $A$.

## Problem 391

(a) Is the matrix $A=\begin{bmatrix} 1 & 2\\ 0& 3 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 3 & 0\\ 1& 2 \end{bmatrix}$? (b) Is the matrix $A=\begin{bmatrix} 0 & 1\\ 5& 3 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 1 & 2\\ 4& 3 \end{bmatrix}$? (c) Is the matrix $A=\begin{bmatrix} -1 & 6\\ -2& 6 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 3 & 0\\ 0& 2 \end{bmatrix}$? (d) Is the matrix $A=\begin{bmatrix} -1 & 6\\ -2& 6 \end{bmatrix}$ similar to the matrix $B=\begin{bmatrix} 1 & 2\\ -1& 4 \end{bmatrix}$?

## Problem 390

Prove that if $A$ and $B$ are similar matrices, then their determinants are the same.

## Problem 389

(a) A $2 \times 2$ matrix $A$ satisfies $\tr(A^2)=5$ and $\tr(A)=3$. Find $\det(A)$. (b) A $2 \times 2$ matrix has two parallel columns and $\tr(A)=5$. Find $\tr(A^2)$. (c) A $2\times 2$ matrix $A$ has $\det(A)=5$ and positive integer eigenvalues. What is the trace of $A$? (Harvard University, Linear Algebra Exam Problem)

## Problem 388

Let $A$ be an $n\times n$ matrix and let $\lambda_1, \lambda_2, \dots, \lambda_n$ be all the eigenvalues of $A$. (Some of them may be the same.) For each positive integer $k$, prove that $\lambda_1^k, \lambda_2^k, \dots, \lambda_n^k$ are all the eigenvalues of $A^k$.

## Problem 387

Let $A$ be an $n\times n$ matrix. Its only eigenvalues are $1, 2, 3, 4, 5$, possibly with multiplicities. What is the nullity of the matrix $A+I_n$, where $I_n$ is the $n\times n$ identity matrix? (The Ohio State University, Linear Algebra Final Exam Problem)

## Problem 386

Find all eigenvalues of the matrix $A=\begin{bmatrix} 0 & i & i & i \\ i &0 & i & i \\ i & i & 0 & i \\ i & i & i & 0 \end{bmatrix},$ where $i=\sqrt{-1}$.
For each eigenvalue of $A$, determine its algebraic multiplicity and geometric multiplicity.

## Problem 385

Let $A=\begin{bmatrix} 2 & -1 & -1 \\ -1 &2 &-1 \\ -1 & -1 & 2 \end{bmatrix}.$ Determine whether the matrix $A$ is diagonalizable. If it is diagonalizable, then diagonalize $A$. That is, find a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.

## Problem 384

Let $A$ be an $n\times n$ matrix with the characteristic polynomial $p(t)=t^3(t-1)^2(t-2)^5(t+2)^4.$ Assume that the matrix $A$ is diagonalizable. (a) Find the size of the matrix $A$. (b) Find the dimension of the eigenspace $E_2$ corresponding to the eigenvalue $\lambda=2$. (c) Find the nullity of $A$. (The Ohio State University, Linear Algebra Final Exam Problem)

## Problem 383

Let $A=\begin{bmatrix} 1 & 1 & 1 \\ 0 &0 &1 \\ 0 & 0 & 1 \end{bmatrix}$ be a $3\times 3$ matrix. Then find the formula for $A^n$ for any positive integer $n$.

## Problem 382

Let $\lambda$ be an eigenvalue of $n\times n$ matrices $A$ and $B$ corresponding to the same eigenvector $\mathbf{x}$. (a) Show that $2\lambda$ is an eigenvalue of $A+B$ corresponding to $\mathbf{x}$. (b) Show that $\lambda^2$ is an eigenvalue of $AB$ corresponding to $\mathbf{x}$. (The Ohio State University, Linear Algebra Final Exam Problem)

## Problem 381

Consider the matrix $A=\begin{bmatrix} 3/2 & 2\\ -1& -3/2 \end{bmatrix} \in M_{2\times 2}(\R).$ (a) Find the eigenvalues and corresponding eigenvectors of $A$. (b) Show that for $\mathbf{v}=\begin{bmatrix} 1 \\ 0 \end{bmatrix}\in \R^2$, we can choose $n$ large enough so that the length $\|A^n\mathbf{v}\|$ is as small as we like. (University of California, Berkeley, Linear Algebra Final Exam Problem)

$A=\begin{bmatrix} 6 & 2 & 2 & 2 &2 \\ 2 & 6 & 2 & 2 & 2 \\ 2 & 2 & 6 & 2 & 2 \\ 2 & 2 & 2 & 6 & 2 \\ 2 & 2 & 2 & 2 & 6 \end{bmatrix}.$
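As a quick check on Problem 385 above (a sketch using plain arithmetic rather than a CAS): the matrix there is $3I - J$, where $J$ is the all-ones matrix, so its eigenvalues are $0$ on $(1,1,1)$ and $3$ with multiplicity two, and $A$ is diagonalizable.

```python
# Hedged verification for Problem 385: A = [[2,-1,-1],[-1,2,-1],[-1,-1,2]]
# has eigenvalue 0 on (1,1,1) and eigenvalue 3 on (1,-1,0) and (1,0,-1),
# hence three independent eigenvectors and A is diagonalizable.

A = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

pairs = [(0, [1, 1, 1]), (3, [1, -1, 0]), (3, [1, 0, -1])]
checks = [matvec(A, v) == [lam * x for x in v] for lam, v in pairs]
print(checks)  # [True, True, True]
```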
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/13/lesson/13.3.5/problem/13-154
13-154. Consider the curve $y=12\sqrt{x}$.

1. For what value of $x$ is the slope of the line tangent to the curve $2$?

$m=\lim \limits_{h\to0}\frac{12\sqrt{x+h}-12\sqrt{x}}{h}$

$m=\lim_{h\to0}\frac{12\sqrt{x+h}-12\sqrt{x}}{h}\cdot\frac{\sqrt{x+h}+\sqrt{x}}{\sqrt{x+h}+\sqrt{x}}$

2. At what point is the line tangent to the curve? Use your value of $x$ from part (a) to compute the corresponding value of $y$.

3. What is the equation of this tangent line? This is a line with a slope of $2$, passing through the point you found in part (b).
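Rationalizing the difference quotient in the hint gives $m = 6/\sqrt{x}$, so the slope is $2$ at $x = 9$, the point is $(9, 36)$, and the tangent line is $y = 2x + 18$. A numerical sketch of that working:

```python
# Hedged check: the difference quotient of y = 12*sqrt(x) tends to 6/sqrt(x),
# so the slope equals 2 at x = 9, the tangent point is (9, 36), and the
# tangent line is y = 2x + 18.
import math

def f(x):
    return 12 * math.sqrt(x)

h = 1e-8
x = 9.0
slope = (f(x + h) - f(x)) / h        # difference quotient near h = 0
point = (x, f(x))                    # (9, 36)
intercept = point[1] - 2 * point[0]  # b in y = 2x + b, so b = 18
print(slope, point, intercept)
```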
https://blog.zilin.one/tag/maths-contest/
## Crazy Telescoping

In mathematics, a telescoping series is a series whose partial sums eventually only have a fixed number of terms after cancellation. I learnt this technique when I was doing maths olympiad. However, I only learnt the buzzword ‘telescoping’ last year, since I received my education in China and we call it ‘the method of differences’.

Anyway, yesterday, I was helping Po-Shen Loh to proctor the Annual Virginia Regional Mathematics Contest at CMU. Unfortunately, I did not take my breakfast. All I could digest were the seven problems. And even worse, I got stuck on the last problem, which is the following. Find $$\sum_{n=1}^\infty\frac{n}{(2^n+2^{-n})^2}+\frac{(-1)^nn}{(2^n-2^{-n})^2}.$$

It turns out that the punch line is, as you might have expected, telescoping. But it is just a bit crazy.

There is Pleasure sure, In being Mad, which none but Madmen know! — John Dryden, “The Spanish Friar”

First consider the first term, $$\frac{1}{(2^1+2^{-1})^2}+\frac{-1}{(2^1-2^{-1})^2}=\frac{-2\times 2}{(2^2-2^{-2})^2}.$$ Then take a look at the second term, $$\frac{2}{(2^2+2^{-2})^2}+\frac{2}{(2^2-2^{-2})^2}.$$ Aha, the sum of the first two terms becomes $$\frac{2}{(2^2+2^{-2})^2}+\frac{-2}{(2^2-2^{-2})^2}=\frac{-4\times 2}{(2^4-2^{-4})^2},$$ which again will interact nicely with the 4th term (not the 3rd term). After all, the sum of the 1st, 2nd, 4th, 8th, …, $n=(2^m)$th terms is $$\frac{-4n}{(2^{2n}-2^{-2n})^2}.$$ Note that this goes to zero.

But this only handles the terms indexed by 2’s powers. Naturally, the next thing to look at is the sum of the 3rd, 6th, 12th, 24th, …, $n=(3\times 2^m)$th terms, which amazingly turns out to have the telescoping phenomenon again and is equal to $$\frac{-4n}{(2^{2n}-2^{-2n})^2}.$$ Again, this goes to zero. We claim that starting with an odd index, say $(2l+1)$, the partial sum of the $(2l+1), 2(2l+1), \ldots, 2^m(2l+1), \ldots$‘th terms goes to zero.
For people who are willing to suffer a bit more for rigorousness, the argument for the previous claim can be easily formalized by the method of differences. The key fact we have been exploiting above is the following identity. $$\frac{2n}{(2^{2n}+2^{-2n})^2}+\frac{2n}{(2^{2n}-2^{-2n})^2} = \frac{4n}{(2^{2n}-2^{-2n})^2}-\frac{8n}{(2^{4n}-2^{-4n})^2}.$$ The last bit of the ingredient is the absolute summability of the original sequence, which implies that changing the order of summation does not affect the result. Hence the sum in total is $0$.
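A numerical sanity check of the final claim (exact rational arithmetic; the cutoff of 120 terms is an arbitrary choice): since $n/(2^n \pm 2^{-n})^2 = n\,4^n/(4^n \pm 1)^2$, the partial sums should shrink toward $0$.

```python
# Hedged check that sum_{n>=1} n/(2^n+2^-n)^2 + (-1)^n n/(2^n-2^-n)^2 = 0:
# partial sums computed exactly with Fractions become vanishingly small.
from fractions import Fraction

def term(n):
    # n/(2^n + 2^-n)^2 = n*4^n/(4^n + 1)^2, and similarly with a minus sign
    return (Fraction(n * 4**n, (4**n + 1) ** 2)
            + (-1) ** n * Fraction(n * 4**n, (4**n - 1) ** 2))

partial = sum(term(n) for n in range(1, 121))
print(float(partial))  # extremely close to 0, matching the telescoping argument
```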
https://learn.careers360.com/jobs/question-in-the-following-question-a-matrix-of-certain-characters-is-given-these-characters-follow-a-certain--9956/
In the following question, a matrix of certain characters is given. These characters follow a certain trend, row-wise or column-wise. Find out this trend and choose the missing character accordingly.

Option 1) 1
Option 2) 2
Option 3) 3
Option 4) 5
Option 5) 4

In each row, the third value is obtained by taking the difference of the digit sums (reduced to a single digit) of the first and second values.

$\Rightarrow 9+6+3=18=1+8=9$
$\Rightarrow 8+4+4=16=1+6=7$
$9-7=2$
$\Rightarrow 4+6+4=14=1+4=5$
$\Rightarrow 9+0+3=12=1+2=3$
$5-3=2$

Hence the missing character is 2 (Option 2).
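The same rule can be checked mechanically. A sketch (the matrix image did not survive extraction, so the triples 963/844 and 464/903 below are reconstructed from the worked arithmetic):

```python
# Hedged check of the row rule: third value = digit sum of the first value
# minus digit sum of the second value, digit sums reduced to one digit.

def digit_sum(n: int) -> int:
    """Repeatedly sum decimal digits until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

rows = [(963, 844), (464, 903)]
results = [digit_sum(a) - digit_sum(b) for a, b in rows]
print(results)  # both rows give 2, so the missing character is 2
```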
https://planetmath.org/ProofOfChineseRemainderTheorem
# proof of Chinese remainder theorem

First we prove that $\mathfrak{a}_{i}+\prod_{j\neq i}\mathfrak{a}_{j}=R$ for each $i$. Without loss of generality, assume that $i=1$. Then

$R=(\mathfrak{a}_{1}+\mathfrak{a}_{2})(\mathfrak{a}_{1}+\mathfrak{a}_{3})\cdots(\mathfrak{a}_{1}+\mathfrak{a}_{n}),$

since each factor $\mathfrak{a}_{1}+\mathfrak{a}_{j}$ is $R$. Expanding the product, each term will contain $\mathfrak{a}_{1}$ as a factor, except the term $\mathfrak{a}_{2}\mathfrak{a}_{3}\cdots\mathfrak{a}_{n}$. So we have

$(\mathfrak{a}_{1}+\mathfrak{a}_{2})(\mathfrak{a}_{1}+\mathfrak{a}_{3})\cdots(\mathfrak{a}_{1}+\mathfrak{a}_{n})\subseteq\mathfrak{a}_{1}+\mathfrak{a}_{2}\mathfrak{a}_{3}\cdots\mathfrak{a}_{n},$

and hence the expression on the right hand side must equal $R$.

Now we can prove that $\prod\mathfrak{a}_{i}=\bigcap\mathfrak{a}_{i}$, by induction. The statement is trivial for $n=1$. For $n=2$, note that

$\mathfrak{a}_{1}\cap\mathfrak{a}_{2}=(\mathfrak{a}_{1}\cap\mathfrak{a}_{2})R=(\mathfrak{a}_{1}\cap\mathfrak{a}_{2})(\mathfrak{a}_{1}+\mathfrak{a}_{2})\subseteq\mathfrak{a}_{2}\mathfrak{a}_{1}+\mathfrak{a}_{1}\mathfrak{a}_{2}=\mathfrak{a}_{1}\mathfrak{a}_{2},$

and the reverse inclusion is obvious, since each $\mathfrak{a}_{i}$ is an ideal. Assume that the statement is proved for $n-1$, and consider it for $n$. Then

$\bigcap_{1}^{n}\mathfrak{a}_{i}=\mathfrak{a}_{1}\cap\bigcap_{2}^{n}\mathfrak{a}_{i}=\mathfrak{a}_{1}\cap\prod_{2}^{n}\mathfrak{a}_{i},$

using the induction hypothesis in the last step. But using the fact proved above and the $n=2$ case, we see that

$\mathfrak{a}_{1}\cap\prod_{2}^{n}\mathfrak{a}_{i}=\mathfrak{a}_{1}\cdot\prod_{2}^{n}\mathfrak{a}_{i}=\prod_{1}^{n}\mathfrak{a}_{i}.$

Finally, we are ready to prove the theorem.
Consider the ring homomorphism $R\to\prod R/\mathfrak{a}_{i}$ defined by projection on each component of the product: $x\mapsto(\mathfrak{a}_{1}+x,\mathfrak{a}_{2}+x,\dots,\mathfrak{a}_{n}+x)$. It is easy to see that the kernel of this map is $\bigcap\mathfrak{a}_{i}$, which is also $\prod\mathfrak{a}_{i}$ by the earlier part of the proof. So it only remains to show that the map is surjective.

Accordingly, take an arbitrary element $(\mathfrak{a}_{1}+x_{1},\mathfrak{a}_{2}+x_{2},\dots,\mathfrak{a}_{n}+x_{n})$ of $\prod R/\mathfrak{a}_{i}$. Using the first part of the proof, for each $i$, we can find elements $y_{i}\in\mathfrak{a}_{i}$ and $z_{i}\in\prod_{j\neq i}\mathfrak{a}_{j}$ such that $y_{i}+z_{i}=1$. Put

$x=x_{1}z_{1}+x_{2}z_{2}+\dots+x_{n}z_{n}.$

Then for each $i$,

$\mathfrak{a}_{i}+x=\mathfrak{a}_{i}+x_{i}z_{i},$ since $x_{j}z_{j}\in\mathfrak{a}_{i}$ for all $j\neq i$,

$=\mathfrak{a}_{i}+x_{i}y_{i}+x_{i}z_{i},$ since $x_{i}y_{i}\in\mathfrak{a}_{i}$,

$=\mathfrak{a}_{i}+x_{i}(y_{i}+z_{i})=\mathfrak{a}_{i}+x_{i}\cdot 1=\mathfrak{a}_{i}+x_{i}.$

Thus the map is surjective as required, and induces the isomorphism

$\frac{R}{\prod\mathfrak{a}_{i}}\to\prod\frac{R}{\mathfrak{a}_{i}}.$
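The surjectivity construction is completely explicit, which a small integer example makes clear. A sketch for $R = \mathbb{Z}$ with the pairwise coprime ideals $(3), (5), (7)$ (the moduli and target residues are arbitrary choices):

```python
# Concrete instance of the proof's construction over Z: for each i, find
# y_i in (n_i) and z_i in the product of the other ideals with y_i + z_i = 1
# (via the extended Euclidean algorithm), then set x = sum(x_i * z_i).
from math import prod

def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

moduli = [3, 5, 7]
targets = [2, 3, 2]          # we want x = 2 (mod 3), 3 (mod 5), 2 (mod 7)

x = 0
for n_i, x_i in zip(moduli, targets):
    others = prod(moduli) // n_i          # generator of prod_{j != i} (n_j)
    g, s, t = ext_gcd(n_i, others)        # s*n_i + t*others == 1
    z_i = t * others                      # y_i + z_i == 1 with y_i = s*n_i
    x += x_i * z_i

residues = [x % n for n in moduli]
print(x, residues)  # residues == [2, 3, 2]
```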
https://math.stackexchange.com/questions/525854/proximal-operator-of-a-quadratic-function
Proximal Operator of a Quadratic Function

What is a proximal operator and how would one derive it in general for a function? In particular, if I had a function: $f(x) = x^TQx + b^Tx + c$, how would I get the proximal operator for this if $Q$ were an $m \times m$ symmetric positive semidefinite matrix?

The proximal operator is the map $\def\prox{\mathop{\rm prox}\nolimits}\prox_f \colon \mathbb R^m \to \mathbb R^m$ given by $$\prox_f(x) = {\rm argmin}_{y \in \mathbb R^m} f(y) + \frac 12\|x-y\|^2$$ For the given $f$ and $x,y \in \mathbb R^m$, we have \begin{align*} g_x(y) &:= f(y) + \frac 12\|x-y\|^2\\ &= y^tQy + b^ty + c + \frac 12x^tx - x^ty + \frac 12y^ty\\ &= y^t\left(Q + \frac 12 I\right)y + (b-x)^ty + c + \frac 12x^tx \end{align*} Now $Q + \frac 12 I$ is symmetric and positive definite, hence $g_x$ has a unique minimum. To find it, we compute $g_x$'s critical point; we have for $h \in \mathbb R^m$: \begin{align*} g_x'(y)h &= 2y^t\left(Q + \frac 12 I\right)h + (b-x)^t h \end{align*} so $g_x'(y) = 0$ iff $$y = \left(2Q + I\right)^{-1}(x-b)$$ That is $$\prox_f(x) = \left(2Q + I\right)^{-1}(x-b)$$
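Note that expanding $\frac12\|x-y\|^2$ contributes $-x^\top y$, so the minimizer is $y^* = (2Q+I)^{-1}(x-b)$, where the gradient $2Qy + b + (y - x)$ of $g_x$ vanishes. A quick exact-arithmetic check on a small example (the $2 \times 2$ matrix and vectors are arbitrary test data):

```python
# Hedged check of prox_f(x) = (2Q + I)^{-1}(x - b) on a 2x2 example,
# using exact rationals and the explicit 2x2 inverse.
from fractions import Fraction as Fr

Q = [[Fr(2), Fr(1)], [Fr(1), Fr(2)]]   # symmetric positive definite
b = [Fr(1), Fr(-1)]
x = [Fr(3), Fr(5)]

# Solve (2Q + I) y = x - b via the explicit 2x2 inverse.
M = [[2*Q[0][0] + 1, 2*Q[0][1]], [2*Q[1][0], 2*Q[1][1] + 1]]
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
rhs = [x[0] - b[0], x[1] - b[1]]
y = [( M[1][1]*rhs[0] - M[0][1]*rhs[1]) / det,
     (-M[1][0]*rhs[0] + M[0][0]*rhs[1]) / det]

# Gradient of g_x(y) = y'Qy + b'y + c + (1/2)|x - y|^2 is 2Qy + b + (y - x).
grad = [2*(Q[i][0]*y[0] + Q[i][1]*y[1]) + b[i] + (y[i] - x[i]) for i in range(2)]
print(y, grad)  # grad == [0, 0], so y is the unconstrained minimizer
```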
http://mathoverflow.net/questions/151279/which-rate-of-growth-of-the-sobolev-norms-guarantees-analyticity
# Which rate of growth of the Sobolev norms guarantees analyticity?

Let $u\in C^\infty(\mathbb T^k)$, where $\mathbb T^k$ is the $k$-dimensional torus. (Equivalently, $u$ is defined on $\mathbb R^k$ and is $2\pi$-periodic with respect to each argument.) We define the semi-norm $$\|u\|_s=\|(-\Delta)^{s/2}u\|_{L^2(\mathbb T^k)}=\Big(\sum_{\ell_1,\ldots,\ell_k\in\mathbb Z}(\ell_1^2+\cdots+\ell^2_k)^{s/2}\big|\hat{u}_{\ell_1,\ldots,\ell_k}\big|^2\Big)^{1/2},$$ where $s>0$ and $u(x_1,\ldots,x_k)=\sum_{\ell_1,\ldots,\ell_k\in\mathbb Z}\mathrm{e}^{2\pi i(\ell_1x_1+\cdots+\ell_kx_k)}\hat{u}_{\ell_1,\ldots,\ell_k}$.

My question is the following: Which rate of growth of $\|u\|_s$, as $s\to\infty$, implies that $u$ extends holomorphically to an open neighborhood of $\mathbb R^k$ in $\mathbb C^k$? More specifically: Which rate of growth of $\|u\|_s$, as $s\to\infty$, guarantees that $u$ extends holomorphically to $$\Omega_\alpha= \{(x_1+iy_1,\ldots,x_k+iy_k): x_1,y_1,\ldots,x_k,y_k\in\mathbb R\,\&\,|y_1|,\ldots,|y_k|<\alpha\} \subset \mathbb C^k,$$ for a given $\alpha>0$?

- potentially related: en.wikipedia.org/wiki/Hardy_space – Otis Chodosh Dec 9 '13 at 15:37
- Not the kind of answer I am looking for. I do know that the rate of growth is expected to be over-exponential, as exponential growth corresponds only to trigonometric polynomials. – smyrlis Dec 9 '13 at 15:42
- Have a look at volume 3 of Lions-Magenes' book Non-Homogeneous Boundary Value Problems and Applications – Liviu Nicolaescu Dec 9 '13 at 16:22

The rate of growth must be $(cs)^s$ for some $c>0$. In my sketch of the proof I assume for simplicity that $k=1$ and that the Fourier coefficients $a_n$ are zero for $n<0$. The function has an analytic extension in a neighborhood of the unit circle if the $|a_n|$ decrease faster than a geometric progression, that is, $\log |a_n|^2\leq-\delta n$ for some $\delta>0$.
Then your norm squared is $$\Phi(s)=\sum_{1}^\infty n^s|a_n|^2\leq\sum e^{s\log n-\delta n}.$$ We consider this as a Dirichlet series of one complex variable $s$. Let $m(s)$ be the maximal term of the sum in the RHS. It is known from the theory of Dirichlet series that for every $\epsilon$ $$\Phi(s)\leq A(\epsilon)m(s+1+\epsilon).$$ The maximal term on the right hand side is found by calculus, by maximizing with respect to $n$, and we get $$\Phi(s)\leq (cs)^s$$ for some $c>0.$

In the opposite direction, suppose that $\Phi(s)\leq (cs)^s$. Then $$|a_n|^2n^s\leq (cs)^s.$$ This gives an estimate for $|a_n|$ with parameter $s$. Minimizing with respect to $s$ we obtain the desired $\log|a_n|\leq -\delta n$, for some $\delta>0$.

For the inequality from the theory of Dirichlet series (this is also no more than calculus) I refer to the proof of Theorem III.2.1 of S. Mandelbrojt, Series de Dirichlet. Principes et methodes, Paris: Gauthier-Villars, 1969. The only references that I know are in French, Russian and Ukrainian, sorry.

- Could you provide a Russian reference? – smyrlis Dec 10 '13 at 8:10
- MR0584943 Leontʹev, A. F. Ryady eksponent. – Alexandre Eremenko Dec 10 '13 at 14:42
- The formula is not explicitly written there, but the estimate is contained on page 177 of Leontiev. – Alexandre Eremenko Dec 10 '13 at 18:57
- How is $c$ related to $\alpha$? – smyrlis Dec 10 '13 at 20:38
- Small $\alpha$ gives large $c$. According to my computation, $\alpha c$ is bounded above and below. – Alexandre Eremenko Dec 10 '13 at 21:06
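A numerical illustration of the $(cs)^s$ rate (a sketch with assumed data, not part of the answer): take $a_n = e^{-n}$, so $u$ extends holomorphically to a strip, and compare $\log\Phi(s)$ with $s\log s$.

```python
# Hedged sketch: for a_n = exp(-n) (so log|a_n|^2 = -2n), the squared norm
# Phi(s) = sum_{n>=1} n^s |a_n|^2 should satisfy Phi(s) <= (cs)^s; here we
# check log Phi(s) stays below s*log(s), i.e. the bound with c = 1.
import math

def log_phi(s, n_max=5000):
    # log of sum exp(s*log n - 2n), via log-sum-exp for numerical stability
    logs = [s * math.log(n) - 2 * n for n in range(1, n_max + 1)]
    m = max(logs)
    return m + math.log(sum(math.exp(t - m) for t in logs))

s = 80.0
lp = log_phi(s)
print(lp, s * math.log(s))  # lp lies below s*log(s)
```

The maximal term occurs near $n = s/2$, consistent with the calculus step in the answer.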
https://socratic.org/questions/how-do-you-find-the-arcsin-sin-7pi-6
# How do you find the arcsin(sin((7pi)/6))?

Feb 10, 2015

$\arcsin \left(\sin \left(7 \frac{\pi}{6}\right)\right) = - \frac{\pi}{6}$.

The range of the function $\arcsin \left(x\right)$ is, by definition, $- \frac{\pi}{2} \le \arcsin \left(x\right) \le \frac{\pi}{2}$. It means that we have to find an angle $\alpha$ that lies between $- \frac{\pi}{2}$ and $\frac{\pi}{2}$ and whose sine equals $\sin \left(7 \frac{\pi}{6}\right)$.

From trigonometry we know that $\sin \left(\phi + \pi\right) = - \sin \left(\phi\right)$ for any angle $\phi$. This is easy to see if we use the definition of a sine as the ordinate of the end of a radius in the unit circle that forms an angle $\phi$ with the X-axis (measured counterclockwise from the X-axis to the radius). We also know that sine is an odd function, that is $\sin \left(- \phi\right) = - \sin \left(\phi\right)$. We will use both properties as follows:

$\sin \left(7 \frac{\pi}{6}\right) = \sin \left(\frac{\pi}{6} + \pi\right) = - \sin \left(\frac{\pi}{6}\right) = \sin \left(- \frac{\pi}{6}\right)$

As we see, the angle $\alpha = - \frac{\pi}{6}$ fits our conditions. It is in the range from $- \frac{\pi}{2}$ to $\frac{\pi}{2}$ and its sine equals $\sin \left(7 \frac{\pi}{6}\right)$. Therefore, it is the correct answer to the problem.
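The identity is easy to confirm numerically as well:

```python
# Quick numerical check that arcsin(sin(7*pi/6)) = -pi/6.
import math

value = math.asin(math.sin(7 * math.pi / 6))
print(value)  # approximately -pi/6, i.e. about -0.5236
```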
https://pdfkul.com/the-prevalent-dimension-of-graphs-mark-mcclure_5ad2518e7f8b9a34698b4573.html
Mark McClure, Department of Mathematics, St. Olaf College, North eld, MN 55057, USA (email: [email protected]) THE PREVALENT DIMENSION OF GRAPHS Abstract f We show that the upper entropy dimension of the prevalent function 2 C [0; 1] is 2. 1 Prevalence The extension of the various notions of \almost every" in R n to in nite dimensional spaces is an interesting and dicult problem. Perhaps the simplest and most successful generalization has been through the use of category. Banach's application of category to the investigation of di erentiability is classic. As another example, [HP] demonstrates that the graph of the generic function has lower entropy dimension one and upper entropy dimension two. There are fundamental diculties, however, with attempts to extend measures to in nite dimensional spaces. Prevalence is a notion de ned in [HSY] which generalizes the measure theoretic \almost every" without actually de ning a measure on the entire space. An equivalent notion was originally introduced in [Chr] as pointed out in [HSY2]. Prevalence is de ned as follows: Let V be a Banach space. A Borel set A  V will be called shy if there is a positive Borel measure  on V such that (A + v) = 0 for every v 2 V . More generally, a subset of a shy Borel set will be called shy. In [HSY] it is shown that shyness satis es all the properties one would expect of a generalization of measure zero. For example: 1. Shyness is shift invariant. 2. Shyness is closed under countable unions. 3. A subset of a shy set is shy. Key Words: Prevalence, Fractal Dimensions 1 2 Mark McClure 4. A shy set has empty interior. 5. If V = Rn , then the shy sets coincide with the measure zero sets. The complement of a shy set will be called prevalent. The goal here is to investigate the prevalent dimensional properties of graphs of functions. 2 Dimension In this section, we de ne the upper entropy index, , and from that the upper b . 
For " > 0, the "-square mesh for R 2 is de ned as the entropy dimension,  collection of closed squares f[i"; (i + 1)"]  [j"; (j + 1)"]gi;j2Z . For a totally bounded set E  R 2 , de ne N" (E ) = # of "-mesh squares which meet E and ) (E ) = lim sup log;Nlog" (E " : "!0 An easy but important property of  is that it respects closure. That is (E ) = (E ). Another ([F] p. 41) is that the limsup need only be taken along any sequence fcng1 n=1 where c 2 (0; 1) and we still obtain the same value. One problem with  is that it is not -stable. In other words it is possible that ([n En ) > supn f(En )g. For example, (Q ) = 1 even though Q is countable. For this reason,  is used to de ne a new set function, b , de ned by: b (E ) = inf fsupf(En )g : E = [n En g: n This new -stable set function, b , is the upper entropy index. See [Edg] section 6.5 or [F] sections 3.1 through 3.3 for reference. We may now state the main result. Let C [0; 1] denote the Banach space of continuous, real valued functions de ned on [0; 1] with the uniform metric . For f 2 C [0; 1], let G(f ) = f(x; f (x)) : x 2 [0; 1]g denote the graph of f . Theorem 2.1 The set ff 2 C [0; 1] : b (G(f )) = 2g is a prevalent subset of C [0; 1]. 3 Application In this section, we prove several lemmas and Theorem 2.1. First we x some notation. Let I = [k2;m; (k + 1)2;m]  [0; 1] be a dyadic interval, where k; m 2 N are xed. For f 2 C [0; 1], let GI (f ) = f(x; f (x))gx2I be that 3 Prevalent Dimensions of Graphs portion of the graph of f lying over I . For any interval [a; b]  [0; 1] de ne Rf [a; b] = supfjf (x) ; f (y)j : a < x; y < bg. For n > m, let ;n (f ) = 2n M2 k Xn;m ; ( +1)2 i=k2n;m 1 Rf [i2;n; (i + 1)2;n]: For 2 [1; 2), let A = ff 2 C [0; 1] : (GI (f )) > g. 
Lemma 3.1 For every $f \in C[0,1]$ and natural number $n > m$,
$$M_{2^{-n}}(f) \le N_{2^{-n}}(G_I(f)) \le 2^{n-m+1} + M_{2^{-n}}(f).$$
Proof: See [F] proposition 11.1. $\Box$

Corollary 3.1 For every $f \in C[0,1]$,
$$\Delta(G_I(f)) = \limsup_{n \to \infty} \frac{\log M_{2^{-n}}(f)}{\log 2^n}.$$
Proof: Note that $\lim_{n \to \infty} 2^{-n} M_{2^{-n}}(f) = \infty$. Thus
$$1 \le \frac{N_{2^{-n}}(G_I(f))}{M_{2^{-n}}(f)} \le \frac{2^{n-m+1} + M_{2^{-n}}(f)}{M_{2^{-n}}(f)} \to 1,$$
or $M_{2^{-n}}(f) \approx N_{2^{-n}}(G_I(f))$ as $n \to \infty$. The result easily follows. $\Box$

Lemma 3.2 The set $A_\alpha$ is a $G_{\delta\sigma}$ subset of $C[0,1]$.
Proof: For any rational number $q \in (\alpha, 2)$ and any natural number $n > m$, let
$$A_q(n) = \left\{f \in C[0,1] : \frac{\log M_{2^{-n}}(f)}{\log 2^n} > q\right\}.$$
Note that
$$A_\alpha = \bigcup_{q \in \mathbb{Q} \cap (\alpha,2)} \bigcup_{k=1}^{\infty} \bigcap_{n=k}^{\infty} A_q(n).$$
So it suffices to show that $A_q(n)$ is an open set. Let $f \in A_q(n)$. Choose $\varepsilon > 0$ so small that
$$\frac{\log(M_{2^{-n}}(f) - \varepsilon)}{\log 2^n} > q.$$
Suppose that $g \in C[0,1]$ satisfies $\rho(f,g) < \varepsilon 2^{-n}$. Then the triangle inequality yields
$$|f(x) - f(y)| \le |f(x) - g(x)| + |g(x) - g(y)| + |g(y) - f(y)|.$$
Thus
$$|g(x) - g(y)| \ge |f(x) - f(y)| - 2\varepsilon 2^{-n} \ge |f(x) - f(y)| - \varepsilon 2^{m-n}.$$
Therefore $R_g[i2^{-n}, (i+1)2^{-n}] \ge R_f[i2^{-n}, (i+1)2^{-n}] - 2^{m-n}\varepsilon$ and
$$\frac{\log(M_{2^{-n}}(g))}{\log 2^n} \ge \frac{\log(M_{2^{-n}}(f) - \varepsilon)}{\log 2^n} > q.$$
Thus $g \in A_q(n)$ and $A_q(n)$ is open. $\Box$

Lemma 3.3 For all $f \in C[0,1]$ and $\lambda \ne 0$, $\Delta(G_I(\lambda f)) = \Delta(G_I(f))$.
Proof: This is a simple consequence of the fact that $R_{\lambda f}[a,b] = |\lambda| R_f[a,b]$. $\Box$

Lemma 3.4 For all $f, g \in C[0,1]$,
$$\Delta(G_I(f+g)) \le \max\{\Delta(G_I(f)), \Delta(G_I(g))\}.$$
Proof: This is a simple consequence of the inequality
$$R_{f+g}[a,b] \le R_f[a,b] + R_g[a,b] \le 2\max\{R_f[a,b], R_g[a,b]\}. \quad \Box$$

Lemma 3.5 For all $\alpha < 2$, $A_\alpha$ is a prevalent, Borel set.
Proof: $A_\alpha$ is a Borel set by lemma 3.2. Let $g \in C[0,1]$ satisfy $\Delta(G_I(g)) > \alpha$. The existence of such a $g$ is guaranteed by the fact that the typical $g \in C[0,1]$ satisfies $\Delta(G_I(g)) = 2$ (see [HP], Proposition 2). Let $\mu$ be the Lebesgue type measure concentrated on the line $[g]$ defined by
$$[g] = \{\lambda g \in C[0,1] : \lambda \in [0,1]\}.$$
Let $h \in C[0,1]$. We will show that $\#\{(A_\alpha^c + h) \cap [g]\} \le 1$. Therefore, $\mu(A_\alpha^c + h) = 0$. Suppose that $f_1, f_2 \in A_\alpha^c$ are such that $f_1 + h \in [g]$ and $f_2 + h \in [g]$.
Then there exist $\lambda_1, \lambda_2 \in [0,1]$ such that $f_1 + h = \lambda_1 g$ and $f_2 + h = \lambda_2 g$. This implies $h = \lambda_1 g - f_1 = \lambda_2 g - f_2$. Thus $f_1 - f_2 = (\lambda_1 - \lambda_2)g$. This can only happen if $\lambda_1 = \lambda_2$ by lemmas 3.3 and 3.4. Therefore, $f_1 = f_2$. Since $h$ is arbitrary, this says that $A_\alpha^c$ is a shy set, or $A_\alpha$ is a prevalent set. $\Box$

By expressing $\{f \in C[0,1] : \Delta(G_I(f)) = 2\}$ as the countable intersection
$$\{f \in C[0,1] : \Delta(G_I(f)) = 2\} = \bigcap_{\alpha \in \mathbb{Q} \cap (1,2)} A_\alpha,$$
we obtain the following:

Corollary 3.2 The set $\{f \in C[0,1] : \Delta(G_I(f)) = 2\}$ is a prevalent, Borel subset of $C[0,1]$.

Finally, we prove theorem 2.1.
Proof: Let $\{I_n\}_{n=1}^{\infty}$ be an enumeration of the dyadic intervals and let
$$A_n = \{f \in C[0,1] : \Delta(G_{I_n}(f)) = 2\}.$$
Then $A_n$ is a prevalent, Borel set by corollary 3.2, as is $A = \bigcap_{1}^{\infty} A_n$, being the countable intersection of prevalent, Borel sets. If
$$B = \{f \in C[0,1] : \hat{\Delta}(G(f)) = 2\},$$
then we claim that $A \subseteq B$. Let $f \in A$ and let $G(f) = \bigcup_{1}^{\infty} E_n$ be a decomposition. Since $\Delta$ respects closure, we may assume that the $E_n$'s are closed. Since $G(f)$ is closed, one of the $E_n$'s must be somewhere dense by the Baire category theorem. Therefore, $E_n \supseteq G_{I_k}(f)$ for some $n, k$. Thus $\Delta(E_n) \ge \Delta(G_{I_k}(f)) = 2$ and $\hat{\Delta}(G(f)) = 2$. Therefore, $B$ is a prevalent set since it is the superset of a prevalent, Borel set. $\Box$

References

[Chr] J. P. R. Christensen, "On Sets of Haar Measure Zero in Abelian Polish Groups", Israel J. Math. 13 (1972), 255-60.

[Edg] G. A. Edgar. Measure, Topology, and Fractal Geometry. Springer-Verlag, New York, NY, 1990.

[F] K. J. Falconer. Fractal Geometry: Mathematical Foundations and Applications. John Wiley and Sons, West Sussex, England, 1990.

[HP] P. D. Humke and G. Petruska. The Packing Dimension of a Typical Continuous Function is 2. Real Analysis Exchange, 14 (1989), 345-58.

[HSY] Brian Hunt, Tim Sauer, and James Yorke. Prevalence: A translation invariant "almost every" on infinite dimensional spaces, Bulletin of the American Mathematical Society, 27 (1992), 217-38.

[HSY2] Brian Hunt, Tim Sauer, and James Yorke. "Prevalence: An Addendum", Bulletin of the American Mathematical Society, 28 (1993), 306-7.
# Combinatorics, don't seem to get it right

In how many ways can I give out $n$ white balls (identical) and $n$ coloured balls (different colours) to $2n$ boxes so that each box holds:

1) at most one ball
2) at most one white ball (there can be several coloured balls)
3) at most one coloured ball (there can be several white balls)
4) an equal number of white and coloured balls

Hints:

1) So every box gets exactly one ball. Label the colours $C_1$ to $C_n$. The ball $C_1$ can be assigned in $2n$ ways. For each such way, the ball $C_2$ can be assigned in $2n-1$ ways. And so on.

2) Ball $C_1$ can be assigned in $2n$ ways. For each such way, ball $C_2$ can be assigned in $2n$ ways. And so on. For every way of assigning the coloured balls, how many ways are there to choose the boxes that will receive a white?

3) Count the number of ways to assign the coloured balls. We have already mentioned how to find this. For each way, how many ways are there to assign the whites? (This component of the calculation is done using Stars and Bars.)

4) The only thing that matters is the coloured balls.

For 1), the coloured balls are all distinct. The ball coloured $C_1$ can be put in any one of the $2n$ boxes. A box can contain at most one ball. So for every way of assigning a box to ball $C_1$, there are $2n-1$ ways to assign a box to ball $C_2$. And then there are $2n-2$ ways to assign a box to ball $C_3$, and so on. Thus there are $(2n)(2n-1)(2n-2)\cdots(2n-n+1)$ ways to assign boxes to all the coloured balls. If you wish, this number can be written alternately as $\frac{(2n)!}{n!}$. Once we have assigned boxes to the coloured balls, there is only one way to assign boxes to the white balls, since every box is supposed to hold exactly one ball. Thus the answer to 1) is $\frac{(2n)!}{n!}$.

One can do problem 1) in other ways. For example, we can choose which boxes will hold white balls. This can be done in $\binom{2n}{n}$ ways. For every way of deciding which boxes hold white balls, the remaining $n$ coloured balls can be assigned to the remaining $n$ boxes in $n!$ ways. That gives a total of $\binom{2n}{n}n!$, which can be simplified in various ways, for example to $\frac{(2n)!}{n!}$.

For problem 2), there are no restrictions except that a box can contain at most one white. So ball $C_1$ can be assigned a box in $2n$ ways. Since a box can hold more than one coloured ball, the ball $C_2$ can also be assigned a box in $2n$ ways, and so on, for a total of $(2n)^n$ ways. For every one of these ways, we can choose the boxes that will hold a white ball in $\binom{2n}{n}$ ways, for a total of $(2n)^n \binom{2n}{n}$.
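These closed forms are easy to sanity-check by brute force for a small case. The sketch below (mine, not from the original answer) enumerates all placements for $n=2$ and compares the counts with $\frac{(2n)!}{n!}$ and $(2n)^n\binom{2n}{n}$:

```python
from itertools import combinations, permutations, product
from math import comb, factorial

n = 2
boxes = range(2 * n)  # 2n boxes, labelled 0 .. 2n-1

# Problem 1: at most one ball per box, so the n distinct coloured balls
# occupy n distinct boxes; the identical white balls must then fill the
# remaining n boxes in exactly one way.
count1 = len(list(permutations(boxes, n)))
assert count1 == factorial(2 * n) // factorial(n)  # (2n)!/n!

# Problem 2: coloured balls are unrestricted (any box, repeats allowed),
# and the n identical whites go into n distinct boxes (at most one each).
count2 = len(list(product(boxes, repeat=n))) * len(list(combinations(boxes, n)))
assert count2 == (2 * n) ** n * comb(2 * n, n)  # (2n)^n * C(2n, n)

print(count1, count2)  # -> 12 96
```

For $n=2$ this gives $12$ and $96$, agreeing with the formulas derived above.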
# Proof that determinant rank equals row/column rank

Let $A$ be a $m \times n$ matrix with entries from some field $F$. Define the determinant rank of $A$ to be the largest possible size of a nonzero minor, i.e. the size of the largest invertible square submatrix of $A$. It is true that the determinant rank is equal to the rank of the matrix, which we define to be the dimension of the row/column space.

It's not difficult to see that $\text{rank} \geq \text{determinant rank}$: if some square submatrix of $A$ is invertible, then its columns/rows are linearly independent, which implies that the corresponding rows/columns of $A$ are also linearly independent. Is there a nice proof for the converse?

If the matrix $A$ has rank $k$, then it has $k$ linearly independent rows. Those form a $k\times n$ submatrix, which of course also has rank $k$. But if it has rank $k$, then it has $k$ linearly independent columns. Those form a $k\times k$ submatrix of $A$, which of course also has rank $k$. But a $k\times k$ submatrix with rank $k$ is a full-rank square matrix, therefore invertible, thus it has a non-zero determinant. And therefore the determinant rank has to be at least $k$.
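The equality can also be checked computationally for a concrete matrix. Here is an illustrative sketch (the function names are my own, not from the post) that brute-forces the determinant rank by scanning all square submatrices, using the Leibniz formula for the small determinants:

```python
from itertools import combinations, permutations

def det(M):
    """Determinant via the Leibniz formula; fine for small matrices."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation = (-1)^(number of inversions)
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1
        for i, p in enumerate(perm):
            prod *= M[i][p]
        total += (-1) ** inv * prod
    return total

def determinant_rank(A):
    """Largest k such that some k x k submatrix has nonzero determinant."""
    m, n = len(A), len(A[0])
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[r][c] for c in cols] for r in rows]) != 0:
                    return k
    return 0

A = [[1, 2, 3],
     [2, 4, 6],   # twice the first row, so the rank drops to 2
     [1, 0, 1]]
print(determinant_rank(A))  # -> 2, matching the row rank of A
```

This is exponential in the matrix size and only meant as a check of the statement; in practice the rank is computed by Gaussian elimination.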
New Zealand Level 8 - NCEA Level 3

# Central limit theorem

Lesson

The standard version of the Central Limit Theorem, first proved by the French mathematician Pierre-Simon Laplace in 1810, states that if random samples of size $n$ are drawn from any population with mean $\mu$ and variance $\sigma^2$, the sampling distribution of $\overline{x}$ will be approximately normally distributed with a mean $\mu_{\overline{x}}=\mu$ and a variance $\sigma_{\overline{x}}^2=\frac{\sigma^2}{n}$.

That statement is certainly a mouthful, so to unpick it, we will develop the idea a little more carefully. The Central Limit Theorem is one of the most important theorems in all of statistics, so it is really important to understand what it all means. To do this we will build the concept using a much simplified scenario: we will use a population of only five numbers. In reality many populations are infinite in size, but using just five numbers makes it easy to build the idea. Our samples taken from that population will be smaller subsets of the five numbers, starting from single number sampling and progressing through to samples of size $4$ and $5$.

##### Step 1

Suppose we begin sampling, with replacement, in single units from the set of numbers $1,2,3,4,5$. Because our sample size is just one, the sample possibilities are $1,2,3,4$ or $5$, and the probability of each of these being selected is obviously $\frac{1}{5}$. We could imagine sampling single numbers continuously, so that a list of results might look something like this:

$1,5,4,5,2,3,1,2,3,5,4,1,2,2,3,4,5,2,4,3,\dots$

We expect the average of these draws to converge on the number $3$ simply because the mean value of the equally likely outcomes is given by $\frac{\Sigma x}{n}=\frac{1+2+3+4+5}{5}=3$.
This makes perfect sense because there is no reason for the random process to favour one number over any other number, so in the long term the average should tend toward $3$. In the draw shown above, as each number is drawn, we see the successive averages forming as:

- $\frac{1}{1}=1$, $\frac{1+5}{2}=3$
- $\frac{1+5+4}{3}=3.333$
- $\frac{1+5+4+5}{4}=3.75$
- $\frac{1+5+4+5+2}{5}=3.4,\dots$

and so on. The variance could also be progressively determined, but we know that this would also tend to the variance of the five numbers, easily determined as the average of the squared deviations of each possible outcome from $3$. This value is exactly $2$. The probability distribution is clearly discrete and uniform.

##### Step 2

Suppose now, instead of sampling one number at a time, we sample two numbers at a time from the set, like this: $2-4$, $1-3$, $5-2$, ... and begin writing down the average of each pair like this: $(2,4)\rightarrow3$, $(1,3)\rightarrow2$, $(5,2)\rightarrow3.5$, etc. We find that the long term average of these averages (in other words the average of $3,2,3.5,\dots$) tends toward the average of $3$ as well. We can convince ourselves of this by the following argument. Suppose we take the overall average of the first three $2$-number averages above. Then the overall average becomes:

\begin{align*}
\text{Average} &= \frac{\frac{2+4}{2}+\frac{1+3}{2}+\frac{5+2}{2}}{3}\\
&= \frac{(2+4)+(1+3)+(5+2)}{2\times3}\\
&= \frac{2+4+1+3+5+2}{6}
\end{align*}

But this is just the average formed by six $1$-number averages, and we have already agreed that the sequence of these tends to the theoretical average of $3$ because of the fact that the single numbers are all equally likely to be selected.
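The means and variances quoted in these steps can also be verified by enumerating every equally likely sample exactly, rather than by long-run simulation. A small sketch (mine, not part of the lesson) for sample sizes $1$, $2$ and $3$, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

population = [1, 2, 3, 4, 5]

def sample_means(k):
    """Exact means of all 5**k equally likely size-k samples (with replacement)."""
    return [Fraction(sum(s), k) for s in product(population, repeat=k)]

for k in (1, 2, 3):
    means = sample_means(k)
    mu = sum(means) / len(means)                          # mean of the sample means
    var = sum((m - mu) ** 2 for m in means) / len(means)  # their variance
    print(k, mu, var)
# -> 1 3 2
#    2 3 1
#    3 3 2/3
```

The mean of the sample means stays at $3$ while the variance comes out as the population variance $2$ divided by the sample size, exactly the pattern this lesson builds toward.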
From the counting principle, there are $5^2=25$ possible ordered pairs when selecting, with replacement, $2$ numbers from $5$. These are listed in the table below, with the average of each pair shown in brackets.

|  |  |  |  |  |
| --- | --- | --- | --- | --- |
| $1,1$ $(1)$ | $2,1$ $(1.5)$ | $3,1$ $(2)$ | $4,1$ $(2.5)$ | $5,1$ $(3)$ |
| $1,2$ $(1.5)$ | $2,2$ $(2)$ | $3,2$ $(2.5)$ | $4,2$ $(3)$ | $5,2$ $(3.5)$ |
| $1,3$ $(2)$ | $2,3$ $(2.5)$ | $3,3$ $(3)$ | $4,3$ $(3.5)$ | $5,3$ $(4)$ |
| $1,4$ $(2.5)$ | $2,4$ $(3)$ | $3,4$ $(3.5)$ | $4,4$ $(4)$ | $5,4$ $(4.5)$ |
| $1,5$ $(3)$ | $2,5$ $(3.5)$ | $3,5$ $(4)$ | $4,5$ $(4.5)$ | $5,5$ $(5)$ |

Note that not all of these averages are different, with the most common average being $3$, occurring $5$ times. Using computer software we can verify that the average of these averages is still $3$, but the variance has changed: it is now just $1$. The distribution of these averages changes from a discrete uniform one to a discrete symmetric triangular one. Most of the pairs have an average of $3$, and fewer and fewer averages of a given value appear as we move further and further away from an average of $3$.

##### Step 3

The plot thickens. Suppose we now sample three numbers at a time. The $5^3=125$ possible averages (too many to write them all down) range from $1$ to $5$ again, with most of them (in fact $19$ of them) having the value $3$. Here is a frequency and probability distribution table of these $125$ averages.
| $\overline{x}$ | $f$ | $P(\overline{x})$ |
| --- | --- | --- |
| $1$ | $1$ | $0.008$ |
| $1\frac{1}{3}$ | $3$ | $0.024$ |
| $1\frac{2}{3}$ | $6$ | $0.048$ |
| $2$ | $10$ | $0.08$ |
| $2\frac{1}{3}$ | $15$ | $0.12$ |
| $2\frac{2}{3}$ | $18$ | $0.144$ |
| $3$ | $19$ | $0.152$ |
| $3\frac{1}{3}$ | $18$ | $0.144$ |
| $3\frac{2}{3}$ | $15$ | $0.12$ |
| $4$ | $10$ | $0.08$ |
| $4\frac{1}{3}$ | $6$ | $0.048$ |
| $4\frac{2}{3}$ | $3$ | $0.024$ |
| $5$ | $1$ | $0.008$ |

Once again we find the average of all possible averages stubbornly remains at $3$, but the variance continues to reduce: the variance of these averages turns out to be exactly $\frac{2}{3}$. Something really interesting is happening to the sampling distribution. The distribution of averages looks more and more like a normal distribution, with most of the averages being $3$ and the frequency of other averages reducing as the value of the average moves away from $3$ above and below.

##### Step 4 and beyond

Sampling four numbers at a time produces $5^4=625$ averages, and the average of these averages again remains at $3$. The variance continues to decrease; it now becomes $\frac{2}{4}=\frac{1}{2}$. The probability of selecting four numbers with an average of $3$ becomes, from a simple count, $\frac{85}{625}=0.136$.

Sampling $5$ numbers produces $3125$ possible averages, and the average of these averages is still $3$, but the variance reduces further to $\frac{2}{5}=0.4$. The distribution of these averages looks approximately normal (our particular population, being a discrete finite distribution with lowest number $1$ and highest number $5$, can never be truly normal).

##### Conclusions

If we look back at steps 1 to 4 and beyond, we notice two important things:

1. The average of the $n$-size sample averages remained constant at $3$.
2. The variance of the $n$-size sample averages became the variance of the population divided by $n$.

If we call the average of the averages $\mu_{\overline{x}}$, and the variance of the averages $\sigma_{\overline{x}}^2$, then it is true that, for any $n$-size sample drawn from a population consisting of the five numbers $1,2,3,4,5$, we have:

1. $\mu_{\overline{x}}=3=\mu$
2. $\sigma_{\overline{x}}^2=\frac{2}{n}=\frac{\sigma^2}{n}$

The remarkable result that this investigation is leading to is known as the Central Limit Theorem.

### The Central Limit Theorem again

The Central Limit Theorem states that: if random samples of size $n$ are drawn from any population with mean $\mu$ and variance $\sigma^2$, the sampling distribution of $\overline{x}$ will be approximately normally distributed with a mean $\mu_{\overline{x}}=\mu$ and a variance $\sigma_{\overline{x}}^2=\frac{\sigma^2}{n}$.

What a profound statement this is! Look back across our discussion and see that as the sample size increased, the average of the averages, $\mu_{\overline{x}}$, remained constant at $3$ and the variance $\sigma_{\overline{x}}^2$ reduced steadily as $\frac{2}{1}$, $\frac{2}{2}$, $\frac{2}{3}$, $\frac{2}{4}$ and $\frac{2}{5}$. A sample of size $6$ would show a variance of $\frac{2}{6}=\frac{1}{3}$, and so on.

The Central Limit Theorem (CLT for short) will be put to use in later chapters. Sample means can be determined and certain probabilistic inferences can be made about the population mean $\mu$ itself, even though it may not be known.

### Further notes

1. As a general rule, the variance of the sampling distribution for samples of size $n$ from any size population will reduce as $n$ increases.
2. As the variance reduces, the sampling distribution becomes less dispersed (more compacted) around the mean value.
3. As the sample size increases, it becomes more likely that the collection of sample averages will resemble the true population average. Thus, the variation is expected to drop. The beauty of the Central Limit Theorem is that it tells us how it drops.
4. Irrespective of the distribution of the population, the sampling distribution for large $n$ (usually taken as $n=30$ or more) becomes asymptotically normal.
5. If the variance of the sampling distribution is given by $\sigma_{\overline{x}}^2=\frac{\sigma^2}{n}$, then the standard deviation is given by $\sigma_{\overline{x}}=\frac{\sigma}{\sqrt{n}}$.
6. In many instances we take real life samples without replacement. For example, we might conduct a survey of $100$ people's height, drawn randomly from a population in a particular location. As each person's height is recorded, we exclude that person from being remeasured. In such situations, provided the sample is large enough, the sampling distribution will still be approximately normal, with the sampling mean and standard deviation given approximately by $\mu$ and $\frac{\sigma}{\sqrt{n}}$.

### An example

##### Question

A regular tetrahedral dice has four sides labelled $1,2,3$ and $4$. Let $X$ be the outcome when the dice is rolled.

1. What type of distribution does $X$ represent?
2. Samples of size $64$ are taken from the distribution (in other words, each sample constitutes a roll of the dice $64$ times) and the means $\overline{x}$ of each sample are calculated and recorded. What type of distribution does $\overline{x}$ represent?
3. Calculate the mean $\mu_{\overline{x}}$ and standard deviation $\sigma_{\overline{x}}$ of $\overline{x}$.

Q1: $X$ has a discrete uniform probability distribution. Note that a continuous uniform probability distribution is a distribution where the random variable could assume a continuous range of values between a minimum and maximum value.
Q2: With a large sample size like this, any population distribution at all would have a sampling distribution that was approximately normal. We are concerned with a uniform distribution, but even if the distribution was $U$-shaped, the corresponding sampling distribution for a sample size of $64$ would be fairly close to normal.

Q3: The mean of the sampling distribution, as we have seen, will be exactly the same as the mean of the population distribution. In other words, $\mu_{\overline{x}}=\frac{1+2+3+4}{4}=2.5$. The variance of the population can be evaluated as:

\begin{align*}
\sigma^2 &= \frac{\Sigma(X-\mu)^2}{n}\\
&= \frac{(1-2.5)^2+(2-2.5)^2+(3-2.5)^2+(4-2.5)^2}{4}\\
&= \frac{5}{4}\\
&= 1.25
\end{align*}

so the population standard deviation is $\sigma=\sqrt{1.25}=\frac{\sqrt{5}}{2}\approx1.118$. The standard deviation of the sampling distribution is therefore $\sigma_{\overline{x}}=\frac{\sigma}{\sqrt{n}}=\frac{1.118}{\sqrt{64}}\approx0.140$.

Hence the sampling distribution has a mean of $2.5$ and a standard deviation of about $0.140$. This information can be used to predict the results of future samples of size $64$ taken from the same population. For example, using the empirical rule, we can state that there is about a 68% chance that a future sample of size $64$ will have a mean somewhere between $2.5\pm0.140$.

#### Worked examples

##### Question 1

Consider a fair $6$ sided dice, with faces labeled from $1$ to $6$. Let $X$ be the outcome when the dice is rolled.

1. What type of distribution does $X$ represent?
   - A. Continuous Uniform Distribution
   - B. Discrete Uniform Distribution
   - C. Normal Distribution
   - D. Exponential Distribution
2. Many samples of size $75$ are taken from the distribution, and the means of each of the samples $\overline{X}$ calculated. What type of distribution does $\overline{X}$ approximately represent?
   - A. Discrete Uniform Distribution
   - B. Exponential Distribution
   - C. Continuous Uniform Distribution
   - D. Normal Distribution
3. Calculate the mean of $\overline{X}$.
4. Calculate the standard deviation of $\overline{X}$ corresponding to a sample size of $75$. Round your answer to $2$ decimal places.

##### Question 2

A discrete random variable $X$ has a mean of $0.1$ and a variance of $1.3$. Samples of $60$ observations of $X$ are taken and $\overline{X}$, the mean of each sample, was calculated.

1. What is the mean of $\overline{X}$?
2. What is the standard deviation of $\overline{X}$? Round your answer to two decimal places.
3. Using your answers to part (a) and part (b), calculate $P(0<\overline{X}<0.2)$. Write your answer to two decimal places.
4. Using your answers to part (a) and part (b), calculate $P(\overline{X}<0.3\mid\overline{X}>0.2)$.

##### Question 3

The weight of small tins of tuna represented by the random variable $X$ is normally distributed with a mean of $90.4$ g and a standard deviation of $6.5$ g.

1. If the cans are advertised as weighing $92$ g, what is the probability a randomly chosen can is underweight? Round your answer to two decimal places.
2. What is the expected value of $\overline{X}$, the sample mean of a randomly chosen sample of size $50$?
3. Calculate the standard deviation for $\overline{X}$. Round your answer to three decimal places.
4. Calculate the probability, $p$, that a randomly chosen sample of size $50$ has a mean weight less than the advertised weight, using the central limit theorem.
5. $45$ samples, each of size $50$, are taken. Calculate the probability, $q$, that more than $41$ samples each have a mean weight less than the advertised weight, using the central limit theorem.
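The first parts of Question 3 reduce to standard normal calculations, which can be checked numerically. A quick sketch (mine, not part of the lesson) using the error function to build the normal CDF:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma, n = 90.4, 6.5, 50

p_single = phi((92 - mu) / sigma)   # part 1: one can weighs under 92 g
se = sigma / sqrt(n)                # part 3: std dev of the sample mean
p_mean = phi((92 - mu) / se)        # part 4: sample mean under 92 g (CLT)

print(round(p_single, 2), round(se, 3), round(p_mean, 3))  # -> 0.6 0.919 0.959
```

Note how much more likely the *sample mean* is to fall below the advertised weight than a single can is: the sampling distribution is far narrower.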
Recall that the Central Limit Theorem states: if random samples of size $n$ are drawn from any population with mean $\mu$ and variance $\sigma^2$, the sampling distribution of $\overline{x}$ will be approximately normally distributed with a mean $\mu_{\overline{x}}=\mu$ and a variance $\sigma_{\overline{x}}^2=\frac{\sigma^2}{n}$. The theorem allows us to make probability statements on the location of any sample mean from future samples that we take. Here is an explained example to follow.

The weights of adult female cows from a very large Australian cattle station are determined to be normally distributed with a mean of $720$ kg and a standard deviation of $150$ kg. In a major farm study, $36$ adult female cows are randomly chosen and weighed using a large weighing machine, and the average weight is recorded. This sampling procedure is repeated daily for $3$ months using different cows on the property, and each time the mean weight is recorded. Consider the following four questions:

1. What is the approximate probability that a sample mean exceeds $750$ kg?
2. What is the probability that a sample mean lies between $700$ kg and $740$ kg?
3. Given that the sample mean does not exceed $740$ kg, what is the approximate probability that the sample mean is at least $700$ kg?
4. What is the probability that a sample mean is not within the range $670$ kg to $770$ kg?

Q1: Mathematically, we are determining the probability $P(\overline{x}\ge750)$ on the assumption that the sampling distribution is approximately normal with mean $\mu_{\overline{x}}=720$ and standard deviation determined as:

\begin{align*}
\sigma_{\overline{x}} &= \frac{\sigma}{\sqrt{n}}\\
&= \frac{150}{\sqrt{36}}\\
&= 25
\end{align*}

To do this we are best to transform the problem to a standard normal problem and use tables or computer software to evaluate the relevant area. So we have:

\begin{align*}
P(\overline{x}\ge750) &\approx P\left(z\ge\frac{750-\mu_{\overline{x}}}{\sigma_{\overline{x}}}\right)\\
&= P\left(z\ge\frac{750-720}{25}\right)\\
&= P(z\ge1.2)\\
&= 0.1151
\end{align*}

Hence there is a little more than an 11% chance that $\overline{x}$ will exceed $750$ kg.

Q2: This interval can be transformed to an interval on the standard normal distribution so that:

\begin{align*}
P(700\le\overline{x}\le740) &\approx P\left(\frac{700-720}{25}\le z\le\frac{740-720}{25}\right)\\
&= P(-0.8\le z\le0.8)\\
&= 2\times P(0\le z\le0.8)\\
&= 2\times0.2881\\
&= 0.5762
\end{align*}

This means that there is about a 58% chance of a sample mean between $700$ kg and $740$ kg.

Q3: This is a conditional probability statement. It almost looks the same as question 2, but there is an important difference. What question 3 is asking can be mathematically described as $P(\overline{x}\ge700\mid\overline{x}\le740)$. This can be reinterpreted as

$$\frac{P(700\le\overline{x}\le740)}{P(\overline{x}\le740)}.$$

In other words, the numerator probability is exactly the same as that for question 2, but the total fraction increases because of the reduction of sample space probability present in the denominator. This is what the conditional part of the question is doing - reducing the total amount of probability available. Thus, using the result of question 2, and noting that $P(z\le0.8)=0.5+0.2881=0.7881$, we have:

\begin{align*}
\frac{P(700\le\overline{x}\le740)}{P(\overline{x}\le740)} &\approx \frac{0.5762}{P\left(z\le\frac{740-720}{25}\right)}\\
&= \frac{0.5762}{P(z\le0.8)}\\
&= \frac{0.5762}{0.7881}\\
&= 0.7311
\end{align*}

This represents about a 73% chance of the sample mean being at least $700$ kg given that the sample mean doesn't exceed $740$ kg.

Q4: This last question can be done a number of ways, but perhaps the best way is to split it up into the sum of two equal probabilities:

\begin{align*}
P(\overline{x}\le670 \text{ or } \overline{x}\ge770) &\approx 2\times P(\overline{x}\le670)\\
&= 2\times P\left(z\le\frac{670-720}{25}\right)\\
&= 2\times P(z\le-2)\\
&= 2\times0.0228\\
&= 0.0456
\end{align*}

Thus there is about a $4\frac{1}{2}$% chance of this event happening.
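The four answers above can be verified with the same standard normal CDF built from the error function; a short check (mine, not part of the lesson):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, se = 720, 150 / sqrt(36)   # standard error = 25 kg

q1 = 1 - phi((750 - mu) / se)                         # P(xbar >= 750)
q2 = phi((740 - mu) / se) - phi((700 - mu) / se)      # P(700 <= xbar <= 740)
q3 = q2 / phi((740 - mu) / se)                        # conditional probability
q4 = phi((670 - mu) / se) + 1 - phi((770 - mu) / se)  # outside [670, 770]

print(round(q1, 4), round(q2, 4), round(q3, 4), round(q4, 4))
# -> 0.1151 0.5763 0.7312 0.0455
```

The tiny differences in the last decimal place compared with the worked values come from rounding the table entries used above.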
### Outcomes

#### S8-2

Make inferences from surveys and experiments:

A. determining estimates and confidence intervals for means, proportions, and differences, recognising the relevance of the central limit theorem
B. using methods such as resampling or randomisation to assess

#### 91582

Use statistical methods to make a formal inference
2022-01-28T22:03:44
{ "domain": "mathspace.co", "url": "https://mathspace.co/textbooks/syllabuses/Syllabus-411/topics/Topic-7332/subtopics/Subtopic-97726/?activeTab=theory", "openwebmath_score": 0.8621714115142822, "openwebmath_perplexity": 1051.8435126579875, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795072052384, "lm_q2_score": 0.6619228758499942, "lm_q1q2_score": 0.6533043138143015 }
http://mathinsight.org/discrete_dynamical_system_elementary_problem_solutions_2
# Math Insight

### Solutions to elementary discrete dynamical systems problems, part 2

The following is a set of solutions to the elementary discrete dynamical systems problems, part 2.

#### Problem 1

1. To find the equilibria, we plug in $x_{n+1}=x_n = E$. \begin{align*} E &= E^2\\ E-E^2 &=0\\ E(1-E) &=0\\ E=0 & \quad \text{or} \quad E=1 \end{align*}
2. To determine the stability of the equilibria, the first step is to write the system as $x_{n+1}=f(x_n)$. In this case, the dynamical system is already in that form, with $f(x)=x^2$. The derivative $f'(x)=2x$, and we evaluate it at the equilibria. For $E=0$: $f'(0) = 0$. Since $|f'(0)| \lt 1$, the equilibrium $x_n=0$ is stable. For $E=1$: $f'(1) = 2$. Since $f'(1)>1$, the equilibrium $x_n=1$ is unstable.
3. The below cobwebbing confirms that the equilibrium $x_n=0$ is stable and the equilibrium $x_n=1$ is unstable.

#### Problem 2

1. To find the equilibria, plug in $y_{t+1}=y_t = E$. \begin{align*} E &= E^3\\ E - E^3 &= 0\\ E(1-E^2) &= 0\\ E(1-E)(1+E) &= 0\\ E=0 \quad \text{or} \quad E &= \pm 1 \end{align*}
2. The dynamical system is already in the form $y_{t+1}=g(y_t)$ where $g(y)=y^3$. Since $g'(y)=3y^2$, we calculate that at the equilibria $g'(0)=0$, $g'(1)=g'(-1)=3$. The equilibrium $y_t=0$ is stable since $|g'(0)|<1$. The equilibria $y_t=1$ and $y_t=-1$ are unstable since $g'(1)>1$ and $g'(-1)>1$.
3. The below cobwebbing confirms that the equilibrium $y_t=0$ is stable and the equilibria $y_t=1$ and $y_t=-1$ are unstable.

#### Problem 3

Consider the dynamical system \begin{align*} z_{t+1} -z_t &= z_t(1-z_t) \quad \text{for $t=0,1,2,3, \ldots$} \end{align*}

1. Plug in $z_{t+1}=z_t=E$. \begin{align*} E - E &= E(1-E)\\ 0 &= E(1-E)\\ E &= 0 \quad \text{or} \quad E=1 \end{align*}
2. We need to rewrite the system in the form $z_{t+1}=f(z_t)$. Just add $z_t$ to both sides. \begin{align*} z_{t+1} -z_t &= z_t(1-z_t)\\ z_{t+1} &= z_t + z_t(1-z_t)\\ &= f(z_t) \end{align*} where $f(z)=z+z(1-z)$.
The derivative of $f$ is \begin{align*} f'(z) &= 1 + 1(1-z) + z(-1)\\ &= 1 + 1-z-z\\ &= 2 -2z. \end{align*} Plugging in $z_t=0$: $f'(0) = 2 >1$, so the equilibrium $z_t=0$ is unstable. Plugging in $z_t=1$: $|f'(1)|=0 < 1$, so the equilibrium $z_t=1$ is stable.

3. The below cobwebbing confirms that the equilibrium $z_t=0$ is unstable and the equilibrium $z_t=1$ is stable.

#### Problem 4

1. Plugging in $w_{n+1}=w_n=E$, \begin{align*} E-E&= 3E(1-E/2)\\ 0 &= 3E(1-E/2)\\ E &=0 \quad \text{or} \quad E=2 \end{align*}
2. We solve for $w_{n+1}$ by adding $w_n$ to both sides of the dynamical system: \begin{align*} w_{n+1} &= w_n + 3w_n(1-w_n/2)\\ &= g(w_n) \end{align*} where $g(w)=w+3w(1-w/2)$. The derivative is \begin{align*} g'(w)&=1+3(1-w/2)-3w/2\\ &=1 +3 -3w/2-3w/2\\ &= 4 - 3w. \end{align*} Since $g'(0)=4 >1$, the equilibrium $w_n=0$ is unstable. Since $|g'(2)| = |-2| > 1$, the equilibrium $w_n=2$ is also unstable.

#### Problem 5

1. Let $v_{n+1}=v_n=E$, so that \begin{align*} E - E &= 0.9E(2-E)\\ 0 &= 0.9E(2-E)\\ E &=0 \quad \text{or} \quad E = 2. \end{align*}
2. Adding $v_n$ to both sides: \begin{align*} v_{n+1} &= v_n + 0.9v_n(2-v_n)\\ &= g(v_n) \end{align*} where $g(v) = v+0.9v(2-v)$. The derivative is \begin{align*} g'(v) &= 1 + 0.9(2-v) - 0.9v\\ &=2.8 - 1.8v. \end{align*} Since $g'(0)=2.8 > 1$, the equilibrium $v_n=0$ is unstable. Since $|g'(2)| = |-0.8| = 0.8 < 1$, the equilibrium $v_n=2$ is stable.

#### Problem 6

1. Let $u_{t+1}=u_t = E$, so that \begin{align*} E - E &= a E(1-E)\\ 0 &= aE(1-E)\\ E &=0 \quad \text{or} \quad E=1 \end{align*}
2. Add $u_t$ to both sides to solve for $u_{t+1}$: \begin{align*} u_{t+1} &= u_t+ a u_t(1-u_t)\\ &= f(u_t) \end{align*} where $f(u) = u+au(1-u)$. The derivative is \begin{align*} f'(u) &= 1 + a(1-u) - au\\ &=1+a -2au \end{align*} The equilibrium $u_t=0$ is stable when $|f'(0)|< 1$. Since $f'(0) = 1+a$, this condition is satisfied when $|1+a|<1$. We see that the equilibrium $u_t=0$ is stable when $-2 < a < 0$.
It is unstable when $|f'(0)|=|1+a|>1$, i.e., when $a < -2$ or $a>0$. The equilibrium $u_t=1$ is stable when $|f'(1)| = |1-a| < 1$, i.e., when $0 < a < 2$. The equilibrium $u_t=1$ is unstable when $|f'(1)| = |1-a| > 1$, i.e., when $a < 0$ or $a > 2$.

#### Problem 7

Consider the dynamical system \begin{align*} s_{t+1} -s_t &= 0.5s_t(1-s_t/b) \quad \text{for $t=0,1,2,3, \ldots$} \end{align*} where $b$ is a positive parameter.

1. Let $s_{t+1}=s_t = E$. \begin{align*} E - E &= 0.5E(1-E/b)\\ 0 &= 0.5 E(1-E/b)\\ E &=0 \quad \text{or} \quad E=b \end{align*}
2. Write the system as \begin{align*} s_{t+1} &=s_t+ 0.5s_t(1-s_t/b)\\ &= h(s_t) \end{align*} where $h(s) = s+0.5s(1-s/b)$. The derivative is \begin{align*} h'(s) &= 1 + 0.5(1-s/b) - 0.5s/b\\ &= 1.5 - s/b. \end{align*} Since $h'(0) = 1.5 > 1$, the equilibrium $s_t=0$ is unstable. Since $|h'(b)| = |1.5 - 1 |= 0.5 < 1$, the equilibrium $s_t =b$ is stable.
3. The stability of the equilibria does not depend on $b$.
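The stability verdicts can be spot-checked by brute-force iteration. The sketch below is an addition, not part of the original solutions; it iterates the maps from Problems 1, 5, and 7 starting near their equilibria:

```python
# Iterate a map from a point near an equilibrium and see where the orbit goes.
def iterate(f, x0, n=200):
    x = x0
    for _ in range(n):
        x = f(x)
    return x

# Problem 1: x_{n+1} = x_n^2; x=0 is stable, x=1 is unstable
assert abs(iterate(lambda x: x * x, 0.3)) < 1e-9            # pulled into 0
assert abs(iterate(lambda x: x * x, 0.99) - 1) > 0.5        # pushed away from 1

# Problem 5: v_{n+1} = v_n + 0.9 v_n (2 - v_n); v=2 is stable
assert abs(iterate(lambda v: v + 0.9 * v * (2 - v), 1.5) - 2) < 1e-6

# Problem 7: s_{n+1} = s_n + 0.5 s_n (1 - s_n/b); s=b is stable for any b>0
for b in (0.5, 5.0):
    assert abs(iterate(lambda s, b=b: s + 0.5 * s * (1 - s / b), 0.8 * b) - b) < 1e-6
```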
2017-07-22T06:55:54
{ "domain": "mathinsight.org", "url": "http://mathinsight.org/discrete_dynamical_system_elementary_problem_solutions_2", "openwebmath_score": 1.0000100135803223, "openwebmath_perplexity": 996.9553312510361, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795072052383, "lm_q2_score": 0.6619228758499942, "lm_q1q2_score": 0.6533043138143014 }
https://plainmath.net/high-school-geometry/83864-the-exterior-angle-theorem-for-triangles
2022-07-22

Proof of the Exterior Angle Theorem. The exterior angle theorem for triangles states: “The measure of an exterior angle of a triangle is equal to the sum of the measures of the two interior angles that are not adjacent to it; this is the exterior angle theorem. The sum of the measures of the three exterior angles (one for each vertex) of any triangle is 360 degrees.” How can we prove this theorem? So, for the triangle above, we need to prove why $\mathrm{\angle }CBD=\mathrm{\angle }BAC+\mathrm{\angle }BCA$

Brienueentismvh Expert

Step 1 In the $\mathrm{△}ABC$, we know that ABD is a straight line. So $\mathrm{\angle }ABC={180}^{\circ }-\mathrm{\angle }CBD$. From the angle sum property of triangles we can infer that $\mathrm{\angle }BAC+\mathrm{\angle }ABC+\mathrm{\angle }BCA={180}^{\circ }$ or $\mathrm{\angle }ABC={180}^{\circ }-\left(\mathrm{\angle }BAC+\mathrm{\angle }BCA\right)$.

Step 2 Therefore: $\mathrm{\angle }ABC={180}^{\circ }-\mathrm{\angle }CBD={180}^{\circ }-\left(\mathrm{\angle }BAC+\mathrm{\angle }BCA\right)$ $⇒-\mathrm{\angle }CBD=-\left(\mathrm{\angle }BAC+\mathrm{\angle }BCA\right)$ $⇒-\mathrm{\angle }CBD×-1=-\left(\mathrm{\angle }BAC+\mathrm{\angle }BCA\right)×-1$ $⇒\mathrm{\angle }CBD=\mathrm{\angle }BAC+\mathrm{\angle }BCA$

Ciara Rose Expert

Step 1 Euclid gives a visually satisfying proof of the exterior angle theorem by drawing BE parallel to AC, and observing that $\mathrm{\angle }CBE=\mathrm{\angle }ACB$ (alternate interior angles) and $\mathrm{\angle }EBD=\mathrm{\angle }CAB$ (corresponding angles), making $\mathrm{\angle }CBD=\mathrm{\angle }ACB+\mathrm{\angle }CAB$. This theorem includes the further important result that the three angles of a triangle sum to ${180}^{o}$, or "two right angles" as Euclid says.
Step 2 But if, as I suspect, the true intent of OP's question is, assuming the truth of the exterior angle theorem, prove that the sum of the three exterior angles of a triangle is ${360}^{o}$, then we can argue as follows. Since by the exterior angle theorem $\mathrm{\angle }CBD=\mathrm{\angle }BCA+\mathrm{\angle }BAC$ and $\mathrm{\angle }ACE=\mathrm{\angle }CBA+\mathrm{\angle }BAC$ and $\mathrm{\angle }BAF=\mathrm{\angle }CBA+\mathrm{\angle }BCA$ then by addition $\mathrm{\angle }CBD+\mathrm{\angle }ACE+\mathrm{\angle }BAF=2\mathrm{\angle }BCA+2\mathrm{\angle }CBA+2\mathrm{\angle }BAC=2\cdot {180}^{o}={360}^{o}$.
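A numerical sanity check of the theorem on a concrete triangle (an added aside; the coordinates below are an arbitrary choice): the exterior angle at $B$ should equal the sum of the remote interior angles at $A$ and $C$.

```python
import math

def interior_angle(p, q, r):
    # interior angle at vertex q of triangle pqr, in degrees
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    cos_q = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos_q))

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
exterior_B = 180.0 - interior_angle(A, B, C)   # angle CBD, with D on ray AB past B
remote_sum = interior_angle(B, A, C) + interior_angle(A, C, B)
assert abs(exterior_B - remote_sum) < 1e-9
```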
2023-02-08T21:12:06
{ "domain": "plainmath.net", "url": "https://plainmath.net/high-school-geometry/83864-the-exterior-angle-theorem-for-triangles", "openwebmath_score": 0.8064074516296387, "openwebmath_perplexity": 229.1031855919809, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795125670754, "lm_q2_score": 0.6619228691808012, "lm_q1q2_score": 0.6533043107810672 }
http://mathhelpforum.com/statistics/36799-multiplication-addition-rules.html
# Math Help - Multiplication and Addition rules 1. ## Multiplication and Addition rules Well I want to ask what is the difference between Multiplication and Addition rules and when to use them to find out the probabilities of events' occurence. For example: When you toss two coins what's the probability that they are heads up? Thank you for helping me. 2. Originally Posted by fromia Well I want to ask what is the difference between Multiplication and Addition rules and when to use them to find out the probabilities of events' occurence. For example: When you toss two coins what's the probability that they are heads up? Thank you for helping me. You use multiplication when the two events are independent of one another. In the case of flipping coins, the probability that a coin is heads-up is 1/2. So the probability of flipping two coins heads-up is (1/2)(1/2) = 1/4. You add distinct probabilities when you want to find the probability of an event that encompasses several probabilities, such as: What is the probability of drawing an 8 or a 9 from a standard deck of 52 cards? It is the same as the probability of drawing an 8 plus the probability of drawing a 9.
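A brute-force enumeration makes both rules concrete. This is an added illustration, not part of the original thread:

```python
from itertools import product

# Multiplication rule (independent events): P(both coins heads) = (1/2)(1/2)
flips = list(product("HT", repeat=2))
p_two_heads = sum(1 for f in flips if f == ("H", "H")) / len(flips)
assert p_two_heads == 0.25

# Addition rule (mutually exclusive events): P(8 or 9) = P(8) + P(9)
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
deck = [r for r in ranks for _suit in range(4)]     # 52 cards, tracked by rank
p_8_or_9 = sum(1 for r in deck if r in ("8", "9")) / len(deck)
assert p_8_or_9 == 4/52 + 4/52
```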
2015-10-09T17:47:52
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/statistics/36799-multiplication-addition-rules.html", "openwebmath_score": 0.8604564666748047, "openwebmath_perplexity": 161.97717843191458, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.9869795125670754, "lm_q2_score": 0.6619228691808012, "lm_q1q2_score": 0.6533043107810672 }
http://thousandfold.net/cz/2013/11/
## A useful trick for computing gradients w.r.t. matrix arguments, with some examples I’ve spent hours this week and last week computing, recomputing, and checking expressions for matrix gradients of functions. It turns out that except in the simplest of cases, the most painfree method for finding such gradients is to use the Frechet derivative (this is one of the few concrete benefits I derived from the differential geometry course I took back in grad school). Remember that the Frechet derivative of a function $$f : X \rightarrow \mathbb{R}$$ at a point $$x$$ is defined as the unique linear operator $$d$$ that is tangent to $$f$$ at $$x$$, i.e. that satisfies $f(x+h) = f(x) + d(h) + o(\|h\|).$ This definition of differentiability makes sense whenever $$X$$ is a normed linear space. If $$f$$ has a gradient, then the Frechet derivative exists and the gradient satisfies the relation $$d(h) = \langle \nabla f(x), h \rangle.$$ ### Simple application As an example application, lets compute the gradient of the function $f(X) = \langle A, XX^T \rangle := \mathrm{trace}(A^T XX^T) = \sum_{ij} A_{ij} (XX^T)_{ij}$ over the linear space of $$m$$ by $$n$$ real-valued matrices equipped with the Frobenius norm. First we can expand out $$f(X+H)$$ as $f(X + H) = \langle A, (X+H)(X+H)^T \rangle = \langle A, XX^T + XH^T + HX^T + HH^T \rangle$ Now we observe that the terms which involve more than one power of $$H$$ are $$O(\|H\|^2) = o(\|H\|)$$ as $$H \rightarrow 0$$, so $f(X + H) = f(X) + \langle A, XH^T + HX^T \rangle + o(\|H\|).$ It follows that $d(H) = \langle A, XH^T + HX^T \rangle = \mathrm{trace}(A^TXH^T) + \mathrm{trace}(A^THX^T),$ which is clearly a linear function of $$H$$ as desired. 
To write this in a way that exposes the gradient, we use the cyclicity properties of the trace, and exploit its invariance under transposes to see that \begin{align} d(H) & = \mathrm{trace}(HX^TA) + \mathrm{trace}(X^TA^T H) \\ & = \mathrm{trace}(X^TAH) + \mathrm{trace}(X^TA^T H) \\ & = \langle AX, H \rangle + \langle A^TX, H \rangle \\ & = \langle (A + A^T)X, H \rangle. \end{align} The gradient of $$f$$ at $$X$$ is evidently $$(A + A^T)X$$. ### More complicated application If you have the patience to work through a lot of algebra, you could probably calculate the above gradient component by component using the standard rules of differential calculus, then back out the simple matrix expression $$(A + A^T)X$$. But what if we partitioned $$X$$ into $$X = [\begin{matrix}X_1^T & X_2^T \end{matrix}]^T$$ and desired the derivative of $f(X_1, X_2) = \mathrm{trace}\left(A \left[\begin{matrix} X_1 \\ X_2 \end{matrix}\right] \left[\begin{matrix}X_1 \\ X_2 \end{matrix} \right]^T\right)$ with respect to $$X_2$$? Then the bookkeeping necessary becomes even more tedious if you want to compute component by component derivatives (I imagine, not having attempted it). On the other hand, the Frechet derivative route is not significantly more complicated. 
Some basic manipulations allow us to claim \begin{align} f(X_1, X_2 + H) & = \mathrm{trace}\left(A \left[\begin{matrix} X_1 \\ X_2 + H \end{matrix}\right] \left[\begin{matrix}X_1 \\ X_2 + H \end{matrix} \right]^T\right) \\ & = f(X_1, X_2) + \mathrm{trace}\left(A \left[\begin{matrix} 0 & X_1 H^T \\ H X_2^T & H X_2^T + X_2 H^T + H H^T \end{matrix} \right]\right) \end{align} Once again we drop the $$o(\|H\|)$$ terms to see that $d(H) = \mathrm{trace}\left(A \left[\begin{matrix} 0 & X_1 H^T \\ H X_2^T & H X_2^T + X_2 H^T \end{matrix} \right]\right).$ To find a simple expression for the gradient, we partition $$A$$ (conformally with our partitioning of $$X$$ into $$X_1$$ and $$X_2$$) as $A = \left[\begin{matrix} A_1 & A_2 \\ A_3 & A_4 \end{matrix} \right].$ Given this partitioning, \begin{align} d(H) & = \mathrm{trace}\left(\left[\begin{matrix} A_2 H X_1^T & \\ & A_3 X_1 H^T + A_4 H X_2^T + A_4 X_2 H^T \end{matrix}\right] \right) \\ & = \langle A_2^TX_1, H \rangle + \langle A_3X_1, H \rangle + \langle A_4^T X_2, H \rangle + \langle A_4X_2, H \rangle \\ & = \langle (A_2^T + A_3)X_1 + (A_4^T + A_4)X_2, H \rangle. \end{align} The first equality comes from noting that the trace of a block matrix is simply the trace of its diagonal parts, and the second comes from manipulating the traces using their cyclicity and invariance to transposes. Thus $$\nabla_{X_2} f(X_1, X_2) = (A_2^T + A_3)X_1 + (A_4^T + A_4)X_2.$$ ### A masterclass application Maybe you didn’t find the last example convincing. Here’s a function I needed to compute the matrix gradient for— a task which I defy you to accomplish using standard calculus operations—: $f(V) = \langle 1^T K^T, \log(1^T \mathrm{e}^{VV^T}) \rangle = \log(1^T \mathrm{e}^{VV^T})K1.$ Here, $$K$$ is an $$n \times n$$ matrix (nonsymmetric in general), $$V$$ is an $$n \times d$$ matrix, and $$1$$ is a column vector of ones of length $$n$$. The exponential $$\mathrm{e}^{VV^T}$$ is computed entrywise, as is the $$\log$$. 
To motivate why you might want to take the gradient of this function, consider the situation that $$K_{ij}$$ measures how similar items $$i$$ and $$j$$ are in a nonsymmetric manner, and the rows of $$V$$ are coordinates for representations of the items in Euclidean space. Then $$(1^T K)_j$$ measures how similar item $$j$$ is to all the items, and $(1^T \mathrm{e}^{VV^T})_j = \sum_{\ell=1}^n \mathrm{e}^{v_\ell^T v_j}$ is a measure of how similar the embedding $$v_j$$ is to the embeddings of all the items. Thus, if we constrain all the embeddings to have norm 1, maximizing $$f(V)$$ with respect to $$V$$ ensures that the embeddings capture the item similarities in some sense. (Why do you care about this particular sense? That’s another story altogether.) Ignoring the constraints (you could use a projected gradient method for the optimization problem), we’re now interested in finding the gradient of $$f$$. In the following, I use the notation $$A \odot B$$ to indicate the pointwise product of two matrices. \begin{align} f(V + H) & = \langle 1^T K, \log(1^T \mathrm{e}^{(V+H)(V+H)^T} \rangle \\ & = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} \odot \mathrm{e}^{VH^T} \odot \mathrm{e}^{HV^T} \odot \mathrm{e}^{HH^T} ]) \rangle \end{align} One can use the series expansion of the exponential to see that \begin{align} \mathrm{e}^{VH^T} & = 11^T + VH^T + o(\|H\|), \\ \mathrm{e}^{HV^T} & = 11^T + HV^T + o(\|H\|), \text{ and}\\ \mathrm{e}^{HH^T} & = 11^T + o(\|H\|). \end{align} It follows that \begin{multline} f(V + H) = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} \odot (11^T + VH^T + o(\|H\|)) \\ \odot (11^T + HV^T + o(\|H\|)) \odot (11^T + o(\|H\|)) ]) \rangle. 
\end{multline} \begin{align} f(V + H) & = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} \odot(11^T + VH^T + HV^T + o(\|H\|) )]) \rangle \\ & = \langle 1^T K, \log(1^T [\mathrm{e}^{VV^T} + e^{VV^T} \odot (VH^T + HV^T) + o(\|H\|) )]) \rangle \end{align} Now recall the linear approximation of $$\log$$: $\log(x) = \log(x_0) + \frac{1}{x_0} (x-x_0) + O(|x- x_0|^2).$ Apply this approximation pointwise to conclude that \begin{multline} f(V + H) = \langle 1^T K, \log(1^T \mathrm{e}^{VV^T}) + \\ \{1^T \mathrm{e}^{VV^T}\}^{-1}\odot (1^T [\mathrm{e}^{VV^T} \odot (VH^T + HV^T) + o(\|H\|)]) \rangle, \end{multline} where $$\{x\}^{-1}$$ denotes the pointwise inverse of a vector. Take $$D$$ to be the diagonal matrix with diagonal entries given by $$1^T \mathrm{e}^{VV^T}$$. We have shown that $f(V + H) = f(V) + \langle K^T1, D^{-1} [\mathrm{e}^{VV^T} \odot (VH^T + HV^T)]1 \rangle + o(\|H\|),$ so \begin{align} d(H) & = \langle K^T1, D^{-1} [\mathrm{e}^{VV^T} \odot (VH^T + HV^T)]1 \rangle \\ & = \langle D^{-1}K^T 11^T, \mathrm{e}^{VV^T} \odot (VH^T + HV^T) \rangle \\ & = \langle \mathrm{e}^{VV^T} \odot D^{-1}K^T 11^T, (VH^T + HV^T) \rangle. \end{align} The second equality follows from the standard properties of inner products and the third from the observation that $\langle A, B\odot C \rangle = \sum_{ij} A_{ij}*B_{ij}*C_{ij} = \langle B \odot A, C \rangle.$ Finally, manipulations in the vein of the two preceding examples allow us to claim that $\nabla_V f(V) = [\mathrm{e}^{VV^T} \odot (11^T K D^{-1} + D^{-1} K^T 11^T)] V.$ As a caveat, note that if instead $$f(V) = \log(1^T \mathrm{e}^{VV^T} ) K^T 1$$, then one should substitute $$K$$ for $$K^T$$ in the last expression.
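As a sanity check on the first example's result, $\nabla f(X)=(A+A^T)X$ for $f(X)=\mathrm{trace}(A^TXX^T)$, here is a dependency-free finite-difference sketch. This is an addition, not from the original post; the concrete $A$ and $X$ below are arbitrary choices:

```python
# f(X) = trace(A^T X X^T); the claimed gradient is (A + A^T) X.
def f(A, X):
    m, n = len(X), len(X[0])
    return sum(A[i][j] * sum(X[i][k] * X[j][k] for k in range(n))
               for i in range(m) for j in range(m))

def grad(A, X):
    m, n = len(X), len(X[0])
    return [[sum((A[i][k] + A[k][i]) * X[k][j] for k in range(m))
             for j in range(n)] for i in range(m)]

A = [[1.0, 2.0], [3.0, 4.0]]
X = [[0.5, -1.0], [2.0, 0.25]]
G, h = grad(A, X), 1e-6
for i in range(2):
    for j in range(2):
        Xp = [row[:] for row in X]
        Xp[i][j] += h
        fd = (f(A, Xp) - f(A, X)) / h      # forward difference in one entry
        assert abs(fd - G[i][j]) < 1e-4
```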
2021-07-24T04:56:03
{ "domain": "thousandfold.net", "url": "http://thousandfold.net/cz/2013/11/", "openwebmath_score": 0.9845360517501831, "openwebmath_perplexity": 911.5378959062883, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795125670755, "lm_q2_score": 0.6619228691808011, "lm_q1q2_score": 0.6533043107810672 }
https://www.jobilize.com/trigonometry/test/using-algebra-to-simplify-trigonometric-expressions-by-openstax?qcr=www.quizover.com
# 9.1 Solving trigonometric equations with identities (Page 4/9)

## Verifying an identity using algebra and even/odd identities

Verify the identity: $\frac{{\sin}^{2}\left(-\theta\right)-{\cos}^{2}\left(-\theta\right)}{\sin\left(-\theta\right)-\cos\left(-\theta\right)}=\cos\theta-\sin\theta$

Verify the identity $\frac{{\sin}^{2}\theta-1}{\tan\theta\,\sin\theta-\tan\theta}=\frac{\sin\theta+1}{\tan\theta}.$

$\begin{array}{ccc}\hfill \frac{{\sin}^{2}\theta-1}{\tan\theta\,\sin\theta-\tan\theta}& =& \frac{\left(\sin\theta+1\right)\left(\sin\theta-1\right)}{\tan\theta\left(\sin\theta-1\right)}\hfill \\ & =& \frac{\sin\theta+1}{\tan\theta}\hfill \end{array}$

## Verifying an identity involving cosines and cotangents

Verify the identity: $\left(1-{\cos}^{2}x\right)\left(1+{\cot}^{2}x\right)=1.$ We will work on the left side of the equation.

## Using algebra to simplify trigonometric expressions

We have seen that algebra is very important in verifying trigonometric identities, but it is just as critical in simplifying trigonometric expressions before solving. Being familiar with the basic properties and formulas of algebra, such as the difference of squares formula, the perfect square formula, or substitution, will simplify the work involved with trigonometric expressions and equations.
For example, the equation $\left(\sin x+1\right)\left(\sin x-1\right)=0$ resembles the equation $\left(x+1\right)\left(x-1\right)=0,$ which uses the factored form of the difference of squares. Using algebra makes finding a solution straightforward and familiar. We can set each factor equal to zero and solve. This is one example of recognizing algebraic patterns in trigonometric expressions or equations. Another example is the difference of squares formula, ${a}^{2}-{b}^{2}=\left(a-b\right)\left(a+b\right),$ which is widely used in many areas other than mathematics, such as engineering, architecture, and physics. We can also create our own identities by continually expanding an expression and making the appropriate substitutions. Using algebraic properties and formulas makes many trigonometric equations easier to understand and solve.

## Writing the trigonometric expression as an algebraic expression

Write the following trigonometric expression as an algebraic expression: $2{\cos}^{2}\theta+\cos\theta-1.$ Notice that the pattern displayed has the same form as a standard quadratic expression, $a{x}^{2}+bx+c.$ Letting $\cos\theta=x,$ we can rewrite the expression as follows: $2{x}^{2}+x-1$ This expression can be factored as $\left(2x-1\right)\left(x+1\right).$ If it were set equal to zero and we wanted to solve the equation, we would use the zero factor property and solve each factor for $x.$ At this point, we would replace $x$ with $\cos\theta$ and solve for
$\theta.$

## Rewriting a trigonometric expression using the difference of squares

Rewrite the trigonometric expression using the difference of squares: $4\,{\cos}^{2}\theta-1.$ Notice that both the coefficient and the trigonometric expression in the first term are squared, and the square of the number 1 is 1. This is the difference of squares. $\begin{array}{ccc}\hfill 4\,{\cos}^{2}\theta-1& =& {\left(2\cos\theta\right)}^{2}-1\hfill \\ & =& \left(2\cos\theta-1\right)\left(2\cos\theta+1\right)\hfill \end{array}$

Rewrite the trigonometric expression using the difference of squares: $25-9\,{\sin}^{2}\theta.$ This is a difference of squares formula: $25-9\,{\sin}^{2}\theta=\left(5-3\sin\theta\right)\left(5+3\sin\theta\right).$
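A quick numeric spot-check of these factorisations (an added aside, not part of the original lesson):

```python
import math

for theta in (0.3, 1.1, 2.5, 4.0):
    c, s = math.cos(theta), math.sin(theta)
    # quadratic in cos(theta): 2cos^2 + cos - 1 = (2cos - 1)(cos + 1)
    assert math.isclose(2*c*c + c - 1, (2*c - 1)*(c + 1), abs_tol=1e-12)
    # difference of squares: 4cos^2 - 1 = (2cos - 1)(2cos + 1)
    assert math.isclose(4*c*c - 1, (2*c - 1)*(2*c + 1), abs_tol=1e-12)
    # difference of squares: 25 - 9sin^2 = (5 - 3sin)(5 + 3sin)
    assert math.isclose(25 - 9*s*s, (5 - 3*s)*(5 + 3*s), abs_tol=1e-12)
```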
2019-06-26T11:44:22
{ "domain": "jobilize.com", "url": "https://www.jobilize.com/trigonometry/test/using-algebra-to-simplify-trigonometric-expressions-by-openstax?qcr=www.quizover.com", "openwebmath_score": 0.9025177359580994, "openwebmath_perplexity": 690.6361573850811, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795125670754, "lm_q2_score": 0.6619228691808012, "lm_q1q2_score": 0.6533043107810672 }
https://artofproblemsolving.com/wiki/index.php/2018_AIME_II_Problems/Problem_4
# 2018 AIME II Problems/Problem 4

## Problem

In equiangular octagon $CAROLINE$, $CA = RO = LI = NE = \sqrt{2}$ and $AR = OL = IN = EC = 1$. The self-intersecting octagon $CORNELIA$ encloses six non-overlapping triangular regions. Let $K$ be the area enclosed by $CORNELIA$, that is, the total area of the six triangular regions. Then $K = \dfrac{a}{b}$, where $a$ and $b$ are relatively prime positive integers. Find $a + b$.

## Solution

We can draw $CORNELIA$ and introduce some points. The diagram is essentially a 3x3 grid where each of the 9 squares making up the grid has a side length of 1. In order to find the area of $CORNELIA$, we need to find 4 times the area of $\bigtriangleup ACY$ and 2 times the area of $\bigtriangleup YZW$. Using similar triangles $\bigtriangleup ARW$ and $\bigtriangleup YZW$ (we compare their heights), $YZ = \frac{1}{3}$. Therefore, the area of $\bigtriangleup YZW$ is $\frac{1}{3}\cdot\frac{1}{2}\cdot\frac{1}{2} = \frac{1}{12}$. Since $YZ = \frac{1}{3}$ and $XY = ZQ$, $XY = \frac{1}{3}$ and $CY = \frac{4}{3}$. Therefore, the area of $\bigtriangleup ACY$ is $\frac{4}{3}\cdot 1\cdot\frac{1}{2} = \frac{2}{3}$. Our final answer is $\frac{1}{12}\cdot 2 + \frac{2}{3}\cdot 4 = \frac{17}{6}$, so $17 + 6 = \boxed{023}$.

## Solution 2

$CAROLINE$ is essentially a plus sign with side length 1 with a few diagonals, which motivates us to coordinate bash. We let $N = (1, 0)$ and $E = (0, 1)$. To find $CORNELIA$'s self-intersections, we take $$CO\colon y = 2, \quad AI\colon y = -3x + 6, \quad RN\colon y = 3x - 3$$ and plug them in to get $C_1 = (\frac{4}{3}, 2)$, where $C_1$ is the intersection of $CO$ and $AI$, and $C_2 = (\frac{5}{3}, 2)$, the intersection of $RN$ and $CO$. We also track the intersection of $AI$ and $RN$ to get $(\frac{3}{2}, \frac{3}{2})$. By vertical symmetry, the other 2 points of intersection should have the same x-coordinates.
We can then proceed with Solution 1 to calculate the area of the triangle (compare the $y$-coordinates of $A,R,I,N$ and $CO$ and $EL$).
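As a quick arithmetic check of Solution 1's final tally (an added aside): two small triangles of area $\frac{1}{12}$ and four large triangles of area $\frac{2}{3}$:

```python
from fractions import Fraction

K = 2 * Fraction(1, 12) + 4 * Fraction(2, 3)
assert K == Fraction(17, 6)
assert K.numerator + K.denominator == 23    # the requested a + b
```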
2022-11-30T09:38:11
{ "domain": "artofproblemsolving.com", "url": "https://artofproblemsolving.com/wiki/index.php/2018_AIME_II_Problems/Problem_4", "openwebmath_score": 0.9330405592918396, "openwebmath_perplexity": 184.0615851480608, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795125670754, "lm_q2_score": 0.6619228691808011, "lm_q1q2_score": 0.6533043107810671 }
https://drexel28.wordpress.com/2011/01/20/representation-theory-the-group-algebra-pt-iii/
# Abstract Nonsense ## The Group Algebra (Pt. III) Point of post: This post is a continuation of this one. As one last tid-bit we decide precisely when $\mathcal{A}\left(G\right)$ is a commutative algebra, but first we need  a small lemma: Lemma: Let $G$ be a finite group and $\mathcal{A}\left(G\right)$ the group algebra on $G$. Define $\tilde{G}=\{\delta_g\}_{g\in G}$. Then $\tilde{G}$ is a multiplicative group and $\tilde{G}\cong G$. Proof: We have from the first theorem in this chapter that $\ast$ is an associative binary operation and that $\delta_e$ is an identity. Thus, it suffices to show that each element of $\tilde{G}$ has a two-sided inverse. But, this follows immediately from the previous theorem since if $\delta_g\in\tilde{G}$ then $\delta_{g^{-1}}\in\tilde{G}$ and $\delta_{g}\ast \delta_{g^{-1}}=\delta_{gg^{-1}}=\delta_e=\delta_{g^{-1}g}=\delta_{g^{-1}}\ast\delta_g$ To show that $G\cong\tilde{G}$ we merely note that the canonical association is in fact an isomorphism. More directly define $\phi:G\to\tilde{G}:g\mapsto \delta_g$ This is clearly surjective and since $|G|=\left|\tilde{G}\right|<\infty$ it must be a bijection. The fact that it’s a homomorphism follows almost immediately from the preceding theorem, namely: $\phi\left(gh\right)=\delta_{gh}=\delta_g\ast\delta_h=\phi\left(g\right)\ast\phi\left(h\right)$ the conclusion follows. $\blacksquare$ With this we are now able to state explicitly when $\mathcal{A}\left(G\right)$ is a commutative algebra: Theorem: Let $G$ be a finite group and $\mathcal{A}\left(G\right)$ the group algebra over $G$. Then, $\mathcal{A}\left(G\right)$ is a commutative algebra if and only if $G$ is abelian. Proof: Suppose first that $G$ is abelian. Then, by the lemma we know that $\tilde{G}$ is abelian and so in particular they all commute with each other. The rest is a simple calculation. 
Namely, if $a,b\in\mathcal{A}\left(G\right)$ then \displaystyle \begin{aligned}a\ast b &= \left(\sum_{g\in G}a(g)\delta_g\right)\ast\left(\sum_{h\in G}b(h)\delta_h\right)\\ &= \sum_{g\in G}\left(a(g)\delta_g\ast\left(\sum_{h\in G}b(h)\delta_h\right)\right)\\ &= \sum_{g\in G}\sum_{h\in G}\left(a(g)\delta_g\ast b(h)\delta_h\right)\\ &= \sum_{g\in G}\sum_{h\in G}\left(a(g)b(h)\right)\left(\delta_g\ast\delta_h\right)\\ &= \sum_{g\in G}\sum_{h\in G}\left(b(h)a(g)\right)\left(\delta_h\ast\delta_g\right)\\ &= \sum_{g\in G}\sum_{h\in G}b(h)\delta_h\ast a(g)\delta_g\\ &= \sum_{g\in G}\left(\left(\sum_{h\in G}b(h)\delta_h\right)\ast a(g)\delta_g\right)\\ &= \left(\sum_{h\in G}b(h)\delta_h\right)\ast\left(\sum_{g\in G}a(g)\delta_g\right)\\ &= b\ast a\end{aligned} and since $a,b\in\mathcal{A}\left(G\right)$ is arbitrary the conclusion follows. Conversely, if $\mathcal{A}\left(G\right)$ is a commutative algebra then it easily follows that $\tilde{G}$ is an abelian group and since $\tilde{G}\cong G$ we may conclude that $G$ is abelian. The conclusion follows. $\blacksquare$

References:

1. Simon, Barry. Representations of Finite and Compact Groups. Providence, RI: American Mathematical Society, 1996. Print.
2. Serre, Jean Pierre. Linear Representations of Finite Groups. New York: Springer-Verlag, 1977. Print.

January 20, 2011 -

1. […] be a finite group and be as before. We claim that in the usual inner product on the group algebra is (up to a scalar factor) orthonormal. More […] Pingback by Representation Theory: Matrix Entry Functions Form an (almost) Orthonormal Basis « Abstract Nonsense | February 23, 2011 | Reply
2. […] Note that this gives an alternate proof to our previous one that the group algebra is a commutative algebra if and only if is abelian. Indeed, \$latex […] Pingback by Representation Theory: Class Functions « Abstract Nonsense | March 21, 2011 | Reply
3. […] shows up in the decomposition of is equal to where the inner product is taken in the group algebra.
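The theorem lends itself to a quick numerical sanity check. The sketch below (not from the original post; the group encodings and test elements are my own) implements the convolution product $(a\ast b)(g)=\sum_{h\in G}a(h)\,b(h^{-1}g)$ for groups given by explicit multiplication rules, and confirms commutativity over $\mathbb{Z}_4$ and its failure over $S_3$ using the relation $\delta_g\ast\delta_h=\delta_{gh}$:

```python
from itertools import permutations

def convolve(a, b, mul, inv, G):
    # (a * b)(g) = sum over h in G of a(h) * b(h^{-1} g)
    return {g: sum(a[h] * b[mul(inv(h), g)] for h in G) for g in G}

# Z_4 (abelian): convolution commutes for arbitrary a, b
Z4 = list(range(4))
add4 = lambda x, y: (x + y) % 4
neg4 = lambda x: (-x) % 4
a = {g: g + 1 for g in Z4}
b = {g: g * g - 2 for g in Z4}
assert convolve(a, b, add4, neg4, Z4) == convolve(b, a, add4, neg4, Z4)

# S_3 (non-abelian): delta functions at non-commuting elements fail to
# commute, since delta_g * delta_h = delta_{gh}
S3 = list(permutations(range(3)))
comp = lambda p, q: tuple(p[q[i]] for i in range(3))          # p after q
pinv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))  # inverse permutation
delta = lambda g0: {g: 1 if g == g0 else 0 for g in S3}
g, h = (1, 0, 2), (0, 2, 1)                                   # two transpositions
assert comp(g, h) != comp(h, g)
assert convolve(delta(g), delta(h), comp, pinv, S3) == delta(comp(g, h))
assert convolve(delta(g), delta(h), comp, pinv, S3) != convolve(delta(h), delta(g), comp, pinv, S3)
```

The same `convolve` works over any coefficient field once the values are drawn from it; integer coefficients are used here only to keep the equality checks exact.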
## Markov Chains and absorption probabilities

Could someone please help me with this question?

A single-celled organism contains N particles, some of which are of type A, the others of type B. The cell is said to be in state i, where $0\leq  i \leq N$, if it contains exactly i particles of type A. Daughter cells are formed by cell division, but first each particle replicates itself; the daughter cell inherits N particles chosen at random from the 2i particles of type A and 2N-2i of type B in the parent cell.

Find the absorption probabilities and expected times to absorption for the case N = 3.

I have that the absorbing states are i=0 and i=3, but I need help finding the transition matrix P.
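Not part of the original thread, but here is one way the matrix can be built numerically: since the daughter draws N particles without replacement from the 2i type-A and 2N−2i type-B copies, row i of P is a hypergeometric distribution, and the standard linear systems over the transient states {1, 2} then give the absorption probabilities and expected times:

```python
from math import comb

N = 3
states = range(N + 1)

# P[i][j]: daughter in state j given parent in state i (hypergeometric draw
# of N particles from 2i type-A and 2N-2i type-B copies); math.comb returns
# 0 when k > n, which zeroes out the impossible transitions automatically
P = [[comb(2 * i, j) * comb(2 * N - 2 * i, N - j) / comb(2 * N, N)
      for j in states] for i in states]

# transient states are 1 and 2; solve (I - Q) h = r for absorption at state N
# and (I - Q) t = 1 for expected times, where Q is P restricted to {1, 2}
q11, q12, q21, q22 = P[1][1], P[1][2], P[2][1], P[2][2]
det = (1 - q11) * (1 - q22) - q12 * q21
h1 = (P[1][N] * (1 - q22) + q12 * P[2][N]) / det   # absorb at 3 from state 1
h2 = (P[2][N] * (1 - q11) + q21 * P[1][N]) / det   # absorb at 3 from state 2
t1 = ((1 - q22) + q12) / det                       # expected steps from state 1
t2 = ((1 - q11) + q21) / det                       # expected steps from state 2

print(h1, h2, t1, t2)  # 1/3, 2/3, 5, 5
```

So starting from one type-A particle the chain fixes at all-A with probability 1/3 (2/3 from two), and the expected time to absorption is 5 divisions from either transient state; absorption at state 0 is the complementary probability.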
# Quantificational Logic

Quantificational logic is an extension of propositional logic. It is the logic of expressions such as “for any”, “for all”, “there is some”, and “there is exactly one”. Quantificational logic is also called first-order logic, predicate logic, or predicate calculus. The universal quantifier $\forall$ means “for all”, “for each”, “for any”, or “for every”. The existential quantifier $\exists$ means “there exists”.

Example. $\forall x\exists y P(x,y)$

Quantificational formulae such as the one in the above example are merely strings of symbols which are neither true nor false as logical statements until they are accompanied by proper interpretations. An interpretation of a quantificational formula has to specify the following.

1. The universe $U$, the nonempty set from which the values of the variables ($x$, $y$, $z$, etc.) are drawn,
2. for each, say $k$-ary, predicate symbol $P$, which $k$-tuples of members of $U$ the predicate is true of, and
3. what elements of the universe correspond to any constant symbols, and what functions from the universe to itself correspond to function symbols mentioned in the formula.

Example. Let $U=\{0,1\}$ and $P$ be the less-than relation, i.e. $P(x,y)$ is “$x$ is less than $y$.” Then

• $P(0,0)$ is false
• $P(0,1)$ is true
• $P(1,0)$ is false
• $P(1,1)$ is false

The quantificational formula $\forall x\exists y P(x,y)$ is false because there is no value of $y$ in the universe for which $P(x,y)$ is true when $x=1$.

Example. Let $U=\{0,1\}$ and $P$ be the not-equal relation, i.e. $P(x,y)$ is “$x$ is not equal to $y$.” Then

• $P(0,0)$ is false
• $P(0,1)$ is true
• $P(1,0)$ is true
• $P(1,1)$ is false

Hence $\forall x\exists y P(x,y)$ is true.

In general, the universe of an interpretation is an infinite set, so it is impossible to specify the values of the predicate for every combination of elements.
As a remedy, we restate the definition in terms of relations by saying an interpretation of a quantificational formula consists of

1. a nonempty set called the universe,
2. for each $k$-place predicate symbol, a $k$-ary relation on the universe, and
3. for each $k$-place function symbol, a $k$-ary function from the universe to itself.

Example. Let $U=\mathbb{N}$, the set of natural numbers, and $P$ be the less-than relation. Then the formula $\forall x\exists y P(x,y)$ is true.

Example. Let $U=\mathbb{N}$ and $P$ be the greater-than relation. Then the formula $\forall x\exists y P(x,y)$ is false.

Example. An example of a formula involving a function: let $U=\mathbb{N}$ and consider the formula $\forall x\exists y(x+y=0)$. The constant symbol 0 is interpreted as zero and the binary function symbol $+$ represents addition. The formula is false. For example, when $x=1$, there is no value of $y$ in the universe such that $x+y=0$. If $U=\mathbb{Z}$, the set of integers, however, the formula is true.

Two formulae are equivalent if they are true under exactly the same interpretations.

Example. $\forall x\exists y P(x,y)$ and $\forall y\exists x P(y,x)$ are equivalent. If two formulae $F$ and $G$ are equivalent, we write $F\equiv G$.

A model of a formula is an interpretation in which it is true. A satisfiable formula of quantificational logic is one that has a model. A valid formula, also called a theorem, is a formula that is true under every interpretation. Valid formulae are the quantificational analogs of tautologies in propositional logic.

Example. $\forall x(P(x)\wedge Q(x))\Longrightarrow\forall y P(y)$ is a valid formula. $\forall x P(x)\wedge\exists y\neg P(y)$ is unsatisfiable.

Note. $\exists x (H(x)\wedge B(x))\not\equiv\exists xH(x)\wedge\exists x B(x)$. Also $\exists x (H(x)\wedge B(x))\not\equiv\exists x H(x)\wedge B(x)$: the $x$ in $B(x)$ in the second formula is a free variable, i.e.
$\exists x H(x)\wedge B(x)\equiv(\exists x H(x))\wedge B(x)$

The laws we learned in propositional logic are carried over. For example, the Distributive Laws: \begin{align}\forall x(P(x)\wedge(Q(x)\vee R(x)))&\equiv\forall x((P(x)\wedge Q(x))\vee(P(x)\wedge R(x)))\label{eq:distlaw1}\\\forall x(P(x)\vee(Q(x)\wedge R(x)))&\equiv\forall x((P(x)\vee Q(x))\wedge(P(x)\vee R(x)))\label{eq:distlaw2}\end{align}

Quantificational Equivalence Rule 1 (Proposition Substitutions). Suppose $F$ and $G$ are quantificational formulae and $F'$ and $G'$ are propositional formulae that result from $F$ and $G$, respectively, by replacing each subformula by a corresponding propositional variable at all of its occurrences in both $F$ and $G$. Suppose $F'\equiv G'$ as formulae of propositional logic. Then replacing $F$ by $G$ in any formula results in an equivalent formula.

Example. $\forall x\neg\neg P(x)\equiv\forall x P(x)$ since $p\equiv\neg\neg p$ and so $\neg\neg P(x)$ can be replaced by $P(x)$.

Example. Replacing $P(x)$, $Q(x)$ and $R(x)$ by $p$, $q$ and $r$, respectively, turns $P(x)\wedge(Q(x)\vee R(x))$ into $p\wedge(q\vee r)$ and $(P(x)\wedge Q(x))\vee(P(x)\wedge R(x))$ into $(p\wedge q)\vee(p\wedge r)$. Since $p\wedge(q\vee r)\equiv (p\wedge q)\vee(p\wedge r)$, $P(x)\wedge(Q(x)\vee R(x))$ can be replaced by $(P(x)\wedge Q(x))\vee(P(x)\wedge R(x))$. Hence we have the equivalence in \eqref{eq:distlaw1}.

Quantificational Equivalence Rule 2 (Change of Variables). Let $F$ be a formula containing a subformula $\Box x G$, where $\Box$ is either $\forall$ or $\exists$. Assume $G$ has no bound occurrence of $x$ and let $G'$ be the result of replacing $x$ by $y$ everywhere in $G$. Then replacing $\Box x G$ by $\Box y G'$ within the formula results in an equivalent formula.

Example.
$\exists x (H(x)\wedge B(x))\equiv\exists y (H(y)\wedge B(y))$

Quantificational Equivalence Rule 3 (Quantifier negation). \begin{align*}\neg\forall x F&\equiv\exists x\neg F\\\neg\exists x F&\equiv\forall x\neg F\end{align*}

Quantificational Equivalence Rule 4 (Scope change). Suppose the variable $x$ does not appear in $G$. Let $\Box$ denote either $\forall$ or $\exists$. Let $\diamond$ denote either $\vee$ or $\wedge$. Then \begin{align*}(\Box x F\diamond G)&\equiv\Box x(F\diamond G)\\(G\diamond\Box x F)&\equiv\Box x(G\diamond F)\end{align*}

Example. Let $r$ be “it rains”, $\mathrm{outside}(x)$ “$x$ is outside” and $\mathrm{wet}(x)$ “$x$ gets wet”. Then “If it is raining, then anything that is outside will get wet” is written as the quantificational formula $$r\Longrightarrow\forall x(\mathrm{outside}(x)\Longrightarrow\mathrm{wet}(x))$$ This is equivalent to $$\forall x(r\Longrightarrow(\mathrm{outside}(x)\Longrightarrow\mathrm{wet}(x)))$$ as a consequence of scope change. The transformed formula says, in plain English, “Any object, if it is raining, will get wet if it is outside”, which sounds less natural than the original statement.

Example. As a consequence of scope change we obtain \begin{align*}(\forall x P(x)\vee\exists y Q(y))&\equiv\forall x\exists y (P(x)\vee Q(y))\\&\equiv\exists y\forall x(P(x)\vee Q(y))\end{align*}

Example. For $(\forall x P(x)\vee\exists x Q(x))$, neither quantifier can be moved out because the quantified variable $x$ appears in both subformulae. However, instead of scope change one can use the change of variables rule to turn it into $(\forall x P(x)\vee\exists y Q(y))$, which is shown in the previous example.

Through repeated application of the quantificational equivalence rules, all quantifiers can be pulled out to the beginning of the formula. Such a formula is said to be in prenex normal form.

Example. Let $L(x,y)$ be “$x$ loves $y$”.
Then the statement “everyone has a unique beloved” can be written as the quantificational formula $$\forall x\exists y (L(x,y)\wedge\forall z (L(x,z)\Longrightarrow y=z))$$ It can be transformed to the formula in prenex form $$\forall x\exists y\forall z(L(x,y)\wedge(L(x,z)\Longrightarrow y=z))$$

Example. Translate into quantificational logic and put into prenex form: “If there are any ants, then one of them is the queen.”

Solution. Let $A(x)$ be “$x$ is an ant” and $Q(x)$ “$x$ is a queen.” Then the statement can be written as the quantificational formula $$\exists x A(x)\Longrightarrow\exists y (A(y)\wedge Q(y)\wedge\forall z (Q(z)\Longrightarrow z=y))$$ Its direct translation is “If there exists an $x$ such that $x$ is an ant, then there exists a $y$ such that $y$ is an ant and $y$ is a queen and any $z$ that is a queen is equal to $y$.” The formula can be transformed to \begin{align*}\neg\exists x A(x)\vee\exists y(A(y)\wedge Q(y)\wedge\forall z (Q(z)\Longrightarrow z=y))&\equiv\forall x\neg A(x)\vee\exists y(A(y)\wedge Q(y)\wedge\forall z (Q(z)\Longrightarrow z=y))\\&\equiv\forall x\exists y\forall z(\neg A(x)\vee(A(y)\wedge Q(y)\wedge (Q(z)\Longrightarrow z=y)))\end{align*}

References.

[1] Essential Discrete Mathematics for Computer Science, Harry Lewis and Rachel Zax, Princeton University Press, 2019
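Since the universes in the worked examples are finite, the formulae can be checked mechanically by exhausting all variable assignments. A small sketch (not from the text; the function names are mine) reproducing the $U=\{0,1\}$ examples and the Note’s non-equivalence:

```python
from itertools import product

U = [0, 1]

def forall_exists(P):
    # evaluates  forall x exists y P(x, y)  over the finite universe U
    return all(any(P(x, y) for y in U) for x in U)

# the two interpretations from the text
assert not forall_exists(lambda x, y: x < y)   # less-than: false (x = 1 fails)
assert forall_exists(lambda x, y: x != y)      # not-equal: true

# the Note's non-equivalence: some interpretation of H, B separates
# exists x (H(x) and B(x))  from  (exists x H(x)) and (exists x B(x))
found = False
for hbits, bbits in product(product([False, True], repeat=2), repeat=2):
    H = dict(zip(U, hbits))
    B = dict(zip(U, bbits))
    lhs = any(H[x] and B[x] for x in U)
    rhs = any(H[x] for x in U) and any(B[x] for x in U)
    if lhs != rhs:
        found = True   # e.g. H true only at 0, B true only at 1
assert found
```

The same brute-force style extends to any formula over a finite universe, which is exactly why the text’s relational restatement matters for infinite universes, where no such exhaustive check is possible.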
# How do you find the slope that is perpendicular to the line y = 6x + 2?

Dec 24, 2016

The slope of a perpendicular line will be $- \frac{1}{\textcolor{red}{6}}$

#### Explanation:

Let the slope of a line be called $\textcolor{red}{m}$. The slope of a line perpendicular to this first line will be $- \frac{1}{m}$.

The line in this problem is in the slope-intercept form: $y = \textcolor{red}{m} x + \textcolor{blue}{b}$, where $\textcolor{red}{m}$ is the slope and $\textcolor{blue}{b}$ is the y-intercept.

Therefore, a line perpendicular to the line $y = \textcolor{red}{6} x + \textcolor{blue}{2}$ will have a slope of: $- \frac{1}{\textcolor{red}{6}}$
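A quick check of the negative-reciprocal rule (not part of the original answer), using exact rational arithmetic:

```python
from fractions import Fraction

def perpendicular_slope(m):
    # the negative reciprocal -1/m of a nonzero slope
    return -1 / Fraction(m)

m = Fraction(6)                      # slope of y = 6x + 2
m_perp = perpendicular_slope(m)
assert m_perp == Fraction(-1, 6)
assert m * m_perp == -1              # perpendicular slopes multiply to -1
```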
# Transformations of random variables

I am trying to derive the probability density function (PDF) of the random variable $$S=\frac{ AX+BY+CZ+D }{ EX+FY+GZ+H }.$$ We know that the random variables $$X, Y, Z$$ are independent and each of them follows a Gaussian distribution. I do not know how to derive the PDF of $$S$$. Any kind of help would be appreciated.

If $$X,Y,Z$$ are independent and Gaussian, then $$P = AX+BY+CZ+D$$ and $$Q = EX+FY+GZ+H$$ are both Gaussian, but not (necessarily) independent. $$(P,Q)$$ will jointly follow a bivariate normal distribution.
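The key fact in the reply — that $P$ and $Q$ are jointly normal and generally correlated — can be illustrated by simulation. The coefficients below are arbitrary choices of mine, not from the question; for standard normal $X,Y,Z$ one has $\operatorname{Cov}(P,Q)=AE+BF+CG$, which the sample covariance should approximate:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = 1.0, 2.0, -1.0, 0.5      # illustrative coefficients only
E, F, G, H = 0.5, -1.0, 2.0, 1.0

n = 200_000
X, Y, Z = rng.standard_normal((3, n))  # independent standard normals
P = A * X + B * Y + C * Z + D
Q = E * X + F * Y + G * Z + H

# for standard normal X, Y, Z: Cov(P, Q) = A*E + B*F + C*G
cov_theory = A * E + B * F + C * G
cov_sample = np.cov(P, Q)[0, 1]
assert abs(cov_sample - cov_theory) < 0.1
```

From there $S = P/Q$ can be studied as a ratio of two correlated normal variables, with the bivariate normal parameters (means, variances, covariance) read off from the coefficients as above.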
### 12 Laplace PDE in Polar coordinates

_______________________________________________________________________________________

#### 12.1 Laplace PDE inside quarter-circle (Haberman 2.5.5 (c))

problem number 103

This is problem 2.5.5 part (c) from Richard Haberman, Applied Partial Differential Equations, 5th edition.

Solve Laplace equation $\frac{\partial ^2 u}{\partial r^2} + \frac{1}{r }\frac{\partial u}{\partial r} + \frac{1}{r^2} \frac{\partial ^2 u}{\partial \theta ^2} =0$ inside a quarter circle of radius 1 with $$0 \leq \theta \leq \frac{\pi }{2}$$ and $$0 \leq r \leq 1$$, with the following boundary conditions \begin{align*} u(r,0) &= 0 \\ u(r,\frac{\pi }{2}) &= 0 \\ \frac{\partial u}{\partial r}(1,\theta ) &= f(\theta ) \\ \end{align*}

Mathematica $\text{DSolve}\left [\left \{\frac{u^{(0,2)}(r,\theta )}{r^2}+\frac{u^{(1,0)}(r,\theta )}{r}+u^{(2,0)}(r,\theta )=0,\left \{u^{(1,0)}(1,\theta )=f(\theta ),u\left (r,\frac{\pi }{2}\right )=0,u(r,0)=0\right \}\right \},u(r,\theta ),\{r,\theta \},\text{Assumptions}\to \left \{0\leq r\leq 1\land 0\leq \theta \leq \frac{\pi }{2}\right \}\right ]$

Maple $u \left ( r,\theta \right ) =\sum _{n=1}^{\infty } \left ( 2\,{\frac{\int _{0}^{\pi /2}\!f \left ( \theta \right ) \sin \left ( 2\,n\theta \right ) \,{\rm d}\theta \,{r}^{2\,n}\sin \left ( 2\,n\theta \right ) }{\pi \,n}} \right )$

Hand solution

The Laplace PDE in polar coordinates is $$r^{2}\frac{\partial ^{2}u}{\partial r^{2}}+r\frac{\partial u}{\partial r}+\frac{\partial ^{2}u}{\partial \theta ^{2}}=0\tag{A}$$ With boundary conditions \begin{align} u\left ( r,0\right ) & =0\nonumber \\ u\left ( r,\frac{\pi }{2}\right ) & =0\tag{B}\\ u\left ( 1,\theta \right ) & =f\left ( \theta \right ) \nonumber \end{align} Assuming the solution can be written as $u\left ( r,\theta \right ) =R\left ( r\right ) \Theta \left ( \theta \right )$ and substituting this assumed solution back into (A) gives $r^{2}R^{\prime \prime }\Theta +rR^{\prime }\Theta +R\Theta ^{\prime \prime }=0$ Dividing the above
by $$R\Theta \neq 0$$ gives\begin{align*} r^{2}\frac{R^{\prime \prime }}{R}+r\frac{R^{\prime }}{R}+\frac{\Theta ^{\prime \prime }}{\Theta } & =0\\ r^{2}\frac{R^{\prime \prime }}{R}+r\frac{R^{\prime }}{R} & =-\frac{\Theta ^{\prime \prime }}{\Theta } \end{align*} Since each side depends on a different independent variable and the two sides are equal, they must both equal the same constant, say $$\lambda$$: $r^{2}\frac{R^{\prime \prime }}{R}+r\frac{R^{\prime }}{R}=-\frac{\Theta ^{\prime \prime }}{\Theta }=\lambda$ This results in the following two ODEs. The boundary conditions in (B) are also transferred to each ODE. This gives\begin{align} \Theta ^{\prime \prime }+\lambda \Theta & =0\nonumber \\ \Theta \left ( 0\right ) & =0\tag{1}\\ \Theta \left ( \frac{\pi }{2}\right ) & =0\nonumber \end{align} And\begin{align} r^{2}R^{\prime \prime }+rR^{\prime }-\lambda R & =0\tag{2}\\ \left \vert R\left ( 0\right ) \right \vert & <\infty \nonumber \end{align}

Starting with (1), consider the case $$\lambda <0$$. The solution in this case will be $\Theta =A\cosh \left ( \sqrt{-\lambda }\theta \right ) +B\sinh \left ( \sqrt{-\lambda }\theta \right )$ Applying the first B.C. gives $$A=0$$. The solution becomes $$\Theta =B\sinh \left ( \sqrt{-\lambda }\theta \right )$$. Applying the second B.C. gives $0=B\sinh \left ( \sqrt{-\lambda }\frac{\pi }{2}\right )$ But $$\sinh$$ is zero only when its argument is zero, which is not the case here. Therefore $$B=0$$, giving the trivial solution. Hence $$\lambda <0$$ is not an eigenvalue.

Case $$\lambda =0$$. The ODE becomes $$\Theta ^{\prime \prime }=0$$ with solution $$\Theta =A\theta +B$$. The first B.C. gives $$0=B$$. The solution becomes $$\Theta =A\theta$$. The second B.C. gives $$0=A\frac{\pi }{2}$$, hence $$A=0$$ and the trivial solution. Therefore $$\lambda =0$$ is not an eigenvalue.
Case $$\lambda >0$$ The ODE becomes $$\Theta ^{\prime \prime }+\lambda \Theta =0$$ with solution $\Theta =A\cos \left ( \sqrt{\lambda }\theta \right ) +B\sin \left ( \sqrt{\lambda }\theta \right )$ The first B.C. gives $$0=A$$. The solution becomes $\Theta =B\sin \left ( \sqrt{\lambda }\theta \right )$ And the second B.C. gives $0=B\sin \left ( \sqrt{\lambda }\frac{\pi }{2}\right )$ For non-trivial solution $$\sin \left ( \sqrt{\lambda }\frac{\pi }{2}\right ) =0$$ or $$\sqrt{\lambda }\frac{\pi }{2}=n\pi$$ for $$n=1,2,3,\cdots$$. Hence the eigenvalues are\begin{align*} \sqrt{\lambda _{n}} & =2n\\ \lambda _{n} & =4n^{2}\qquad n=1,2,3,\cdots \end{align*} And the eigenfunctions are $$\Theta _{n}\left ( \theta \right ) =B_{n}\sin \left ( 2n\theta \right ) \qquad n=1,2,3,\cdots \tag{3}$$ Now the $$R$$ ODE is solved. There is one case to consider, which is $$\lambda >0$$ based on the above. The ODE is\begin{align*} r^{2}R^{\prime \prime }+rR^{\prime }-\lambda _{n}R & =0\\ r^{2}R^{\prime \prime }+rR^{\prime }-4n^{2}R & =0\qquad n=1,2,3,\cdots \end{align*} This is Euler ODE. Let $$R\left ( r\right ) =r^{p}$$. Then $$R^{\prime }=pr^{p-1}$$ and $$R^{\prime \prime }=p\left ( p-1\right ) r^{p-2}$$. 
This gives\begin{align*} r^{2}\left ( p\left ( p-1\right ) r^{p-2}\right ) +r\left ( pr^{p-1}\right ) -4n^{2}r^{p} & =0\\ \left ( \left ( p^{2}-p\right ) r^{p}\right ) +pr^{p}-4n^{2}r^{p} & =0\\ r^{p}p^{2}-pr^{p}+pr^{p}-4n^{2}r^{p} & =0\\ p^{2}-4n^{2} & =0\\ p & =\pm 2n \end{align*} Hence the solution is$R\left ( r\right ) =Cr^{2n}+D\frac{1}{r^{2n}}$ Applying the condition that $$\left \vert R\left ( 0\right ) \right \vert <\infty$$ implies $$D=0$$, and the solution becomes$$R_{n}\left ( r\right ) =C_{n}r^{2n}\qquad n=1,2,3,\cdots \tag{4}$$ Using (3,4) the solution $$u_{n}\left ( r,\theta \right )$$ is\begin{align*} u_{n}\left ( r,\theta \right ) & =R_{n}\Theta _{n}\\ & =C_{n}r^{2n}B_{n}\sin \left ( 2n\theta \right ) \\ & =B_{n}r^{2n}\sin \left ( 2n\theta \right ) \end{align*} Where $$C_{n}B_{n}$$ was combined into one constant $$B_{n}$$. (No need to introduce new symbol). The final solution is\begin{align*} u\left ( r,\theta \right ) & =\sum _{n=1}^{\infty }u_{n}\left ( r,\theta \right ) \\ & =\sum _{n=1}^{\infty }B_{n}r^{2n}\sin \left ( 2n\theta \right ) \end{align*} Now the nonhomogeneous condition is applied to find $$B_{n}$$.$\frac{\partial }{\partial r}u\left ( r,\theta \right ) =\sum _{n=1}^{\infty }B_{n}\left ( 2n\right ) r^{2n-1}\sin \left ( 2n\theta \right )$ Hence $$\frac{\partial }{\partial r}u\left ( 1,\theta \right ) =f\left ( \theta \right )$$ becomes$f\left ( \theta \right ) =\sum _{n=1}^{\infty }2B_{n}n\sin \left ( 2n\theta \right )$ Multiplying by $$\sin \left ( 2m\theta \right )$$ and integrating gives\begin{align} \int _{0}^{\frac{\pi }{2}}f\left ( \theta \right ) \sin \left ( 2m\theta \right ) d\theta & =\int _{0}^{\frac{\pi }{2}}\sin \left ( 2m\theta \right ) \sum _{n=1}^{\infty }2B_{n}n\sin \left ( 2n\theta \right ) d\theta \nonumber \\ & =\sum _{n=1}^{\infty }2nB_{n}\int _{0}^{\frac{\pi }{2}}\sin \left ( 2m\theta \right ) \sin \left ( 2n\theta \right ) d\theta \tag{5} \end{align} When $$n=m$$ then\begin{align*} \int _{0}^{\frac{\pi }{2}}\sin 
\left ( 2m\theta \right ) \sin \left ( 2n\theta \right ) d\theta & =\int _{0}^{\frac{\pi }{2}}\sin ^{2}\left ( 2n\theta \right ) d\theta \\ & =\int _{0}^{\frac{\pi }{2}}\left ( \frac{1}{2}-\frac{1}{2}\cos 4n\theta \right ) d\theta \\ & =\frac{1}{2}\left [ \theta \right ] _{0}^{\frac{\pi }{2}}-\frac{1}{2}\left [ \frac{\sin 4n\theta }{4n}\right ] _{0}^{\frac{\pi }{2}}\\ & =\frac{\pi }{4}-\left ( \frac{1}{8n}\left ( \sin \frac{4n}{2}\pi \right ) -\sin \left ( 0\right ) \right ) \end{align*} And since $$n$$ is integer, then $$\sin \frac{4n}{2}\pi =\sin 2n\pi =0$$ and the above becomes $$\frac{\pi }{4}$$. Now for the case when $$n\neq m$$ using $$\sin A\sin B=\frac{1}{2}\left ( \cos \left ( A-B\right ) -\cos \left ( A+B\right ) \right )$$ then\begin{align*} \int _{0}^{\frac{\pi }{2}}\sin \left ( 2m\theta \right ) \sin \left ( 2n\theta \right ) d\theta & =\int _{0}^{\frac{\pi }{2}}\frac{1}{2}\left ( \cos \left ( 2m\theta -2n\theta \right ) -\cos \left ( 2m\theta +2n\theta \right ) \right ) d\theta \\ & =\frac{1}{2}\int _{0}^{\frac{\pi }{2}}\cos \left ( 2m\theta -2n\theta \right ) d\theta -\frac{1}{2}\int _{0}^{\frac{\pi }{2}}\cos \left ( 2m\theta +2n\theta \right ) d\theta \\ & =\frac{1}{2}\int _{0}^{\frac{\pi }{2}}\cos \left ( \left ( 2m-2n\right ) \theta \right ) d\theta -\frac{1}{2}\int _{0}^{\frac{\pi }{2}}\cos \left ( \left ( 2m+2n\right ) \theta \right ) d\theta \\ & =\frac{1}{2}\left [ \frac{\sin \left ( \left ( 2m-2n\right ) \theta \right ) }{\left ( 2m-2n\right ) }\right ] _{0}^{\frac{\pi }{2}}-\frac{1}{2}\left [ \frac{\sin \left ( \left ( 2m+2n\right ) \theta \right ) }{\left ( 2m+2n\right ) }\right ] _{0}^{\frac{\pi }{2}}\\ & =\frac{1}{4\left ( m-n\right ) }\left [ \sin \left ( \left ( 2m-2n\right ) \theta \right ) \right ] _{0}^{\frac{\pi }{2}}-\frac{1}{4\left ( m+n\right ) }\left [ \sin \left ( \left ( 2m+2n\right ) \theta \right ) \right ] _{0}^{\frac{\pi }{2}}\\ & =\frac{1}{4\left ( m-n\right ) }\left [ \sin \left ( \left ( 2m-2n\right ) \frac{\pi 
}{2}\right ) -0\right ] -\frac{1}{4\left ( m+n\right ) }\left [ \sin \left ( \left ( 2m+2n\right ) \frac{\pi }{2}\right ) -0\right ] \end{align*} Since $$\left ( 2m-2n\right ) \frac{\pi }{2}=\pi \left ( m-n\right )$$ is an integer multiple of $$\pi$$ and also $$\left ( 2m+2n\right ) \frac{\pi }{2}$$ is an integer multiple of $$\pi$$, the whole term above becomes zero. Therefore (5) becomes $\int _{0}^{\frac{\pi }{2}}f\left ( \theta \right ) \sin \left ( 2m\theta \right ) d\theta =2mB_{m}\frac{\pi }{4}$ Hence $B_{n}=\frac{2}{\pi n}\int _{0}^{\frac{\pi }{2}}f\left ( \theta \right ) \sin \left ( 2n\theta \right ) d\theta$

Summary: the final solution is $u\left ( r,\theta \right ) =\frac{2}{\pi }\sum _{n=1}^{\infty }\frac{1}{n}\left [ \int _{0}^{\frac{\pi }{2}}f\left ( \theta \right ) \sin \left ( 2n\theta \right ) d\theta \right ] \left ( r^{2n}\sin \left ( 2n\theta \right ) \right )$

_______________________________________________________________________________________

#### 12.2 Laplace PDE inside semi-circle

problem number 104

Solve Laplace equation $\frac{\partial ^2 u}{\partial r^2} + \frac{1}{r }\frac{\partial u}{\partial r} + \frac{1}{r^2} \frac{\partial ^2 u}{\partial \theta ^2} =0$ inside a semi-circle of radius 1 with $$0 \leq \theta \leq \pi$$ and $$0 \leq r \leq 1$$, with the following boundary conditions \begin{align*} u(r,0) &= 0 \\ u(r,\pi ) &= 0 \\ u(0,\theta ) &= 0 \\ u(1,\theta ) &= f(\theta ) \end{align*}

Mathematica $\text{DSolve}\left [\left \{\frac{u^{(0,2)}(r,\theta )}{r^2}+\frac{u^{(1,0)}(r,\theta )}{r}+u^{(2,0)}(r,\theta )=0,\{u(r,0)=0,u(r,\pi )=0,u(0,\theta )=0,u(1,\theta )=f(\theta )\}\right \},u(r,\theta ),\{r,\theta \},\text{Assumptions}\to \{0\leq r\leq 1\land 0\leq \theta \leq \pi \}\right ]$

Maple $u \left ( r,\theta \right ) =\sum _{n=1}^{\infty } \left ( 2\,{\frac{ \left ( \int _{0}^{\pi }\!\sin \left ( n\theta \right ) f \left ( \theta \right ) \,{\rm d}\theta \,{r}^{n}-1/2\,{\it \_C5} \left ( n \right ) \pi \, \left ({r}^{n}-{r}^{-n} \right ) \right ) \sin \left (
n\theta \right ) }{\pi }} \right )$ Hand solution The Laplace PDE in polar coordinates is  $$r^{2}\frac{\partial ^{2}u}{\partial r^{2}}+r\frac{\partial u}{\partial r}+\frac{\partial ^{2}u}{\partial \theta ^{2}}=0\tag{A}$$ With\begin{align} \frac{\partial u}{\partial r}\left ( a,\theta \right ) & =0\nonumber \\ u\left ( b,\theta \right ) & =g\left ( \theta \right ) \tag{B} \end{align} Assuming the solution can be written as $u\left ( r,\theta \right ) =R\left ( r\right ) \Theta \left ( \theta \right )$ And substituting this assumed solution back into the (A) gives$r^{2}R^{\prime \prime }\Theta +rR^{\prime }\Theta +R\Theta ^{\prime \prime }=0$ Dividing the above by $$R\Theta$$ gives\begin{align*} r^{2}\frac{R^{\prime \prime }}{R}+r\frac{R^{\prime }}{R}+\frac{\Theta ^{\prime \prime }}{\Theta } & =0\\ r^{2}\frac{R^{\prime \prime }}{R}+r\frac{R^{\prime }}{R} & =-\frac{\Theta ^{\prime \prime }}{\Theta } \end{align*} Since each side depends on different independent variable and they are equal, they must be equal to same constant. say $$\lambda$$. $r^{2}\frac{R^{\prime \prime }}{R}+r\frac{R^{\prime }}{R}=-\frac{\Theta ^{\prime \prime }}{\Theta }=\lambda$ This results in the following two ODE’s. The boundaries conditions in (B) are also transferred to each ODE. This results in\begin{align} \Theta ^{\prime \prime }+\lambda \Theta & =0\tag{1}\\ \Theta \left ( -\pi \right ) & =\Theta \left ( \pi \right ) \nonumber \\ \Theta ^{\prime }\left ( -\pi \right ) & =\Theta ^{\prime }\left ( \pi \right ) \nonumber \end{align} And\begin{align} r^{2}R^{\prime \prime }+rR^{\prime }-\lambda R & =0\tag{2}\\ R^{\prime }\left ( a\right ) & =0\nonumber \end{align} Starting with (1) Case $$\lambda <0$$ The solution is$\Theta \left ( \theta \right ) =A\cosh \left ( \sqrt{\lambda }\theta \right ) +B\sinh \left ( \sqrt{\lambda }\theta \right )$ First B.C. 
gives\begin{align*} \Theta \left ( -\pi \right ) & =\Theta \left ( \pi \right ) \\ A\cosh \left ( -\sqrt{\lambda }\pi \right ) +B\sinh \left ( -\sqrt{\lambda }\pi \right ) & =A\cosh \left ( \sqrt{\lambda }\pi \right ) +B\sinh \left ( \sqrt{\lambda }\pi \right ) \\ A\cosh \left ( \sqrt{\lambda }\pi \right ) -B\sinh \left ( \sqrt{\lambda }\pi \right ) & =A\cosh \left ( \sqrt{\lambda }\pi \right ) +B\sinh \left ( \sqrt{\lambda }\pi \right ) \\ 2B\sinh \left ( \sqrt{\lambda }\pi \right ) & =0 \end{align*} But $$\sinh \left ( \sqrt{\lambda }\pi \right ) =0$$ only at zero and $$\lambda \neq 0$$, hence $$B=0$$ and the solution becomes\begin{align*} \Theta \left ( \theta \right ) & =A\cosh \left ( \sqrt{\lambda }\theta \right ) \\ \Theta ^{\prime }\left ( \theta \right ) & =A\sqrt{\lambda }\cosh \left ( \sqrt{\lambda }\theta \right ) \end{align*} Applying the second B.C. gives\begin{align*} \Theta ^{\prime }\left ( -\pi \right ) & =\Theta ^{\prime }\left ( \pi \right ) \\ A\sqrt{\lambda }\cosh \left ( -\sqrt{\lambda }\pi \right ) & =A\sqrt{\lambda }\cosh \left ( \sqrt{\lambda }\pi \right ) \\ A\sqrt{\lambda }\cosh \left ( \sqrt{\lambda }\pi \right ) & =A\sqrt{\lambda }\cosh \left ( \sqrt{\lambda }\pi \right ) \\ 2A\sqrt{\lambda }\cosh \left ( \sqrt{\lambda }\pi \right ) & =0 \end{align*} But $$\cosh \left ( \sqrt{\lambda }\pi \right ) \neq 0$$ hence $$A=0$$. Therefore trivial solution and $$\lambda <0$$ is not an eigenvalue. Case $$\lambda =0$$ The solution is $$\Theta =A\theta +B$$. Applying the first B.C. gives\begin{align*} \Theta \left ( -\pi \right ) & =\Theta \left ( \pi \right ) \\ -A\pi +B & =\pi A+B\\ 2\pi A & =0\\ A & =0 \end{align*} And the solution becomes $$\Theta =B_{0}$$. A constant. Hence $$\lambda =0$$ is an eigenvalue. 
Case $$\lambda >0$$ The solution becomes\begin{align*} \Theta & =A\cos \left ( \sqrt{\lambda }\theta \right ) +B\sin \left ( \sqrt{\lambda }\theta \right ) \\ \Theta ^{\prime } & =-A\sqrt{\lambda }\sin \left ( \sqrt{\lambda }\theta \right ) +B\sqrt{\lambda }\cos \left ( \sqrt{\lambda }\theta \right ) \end{align*} Applying first B.C. gives\begin{align} \Theta \left ( -\pi \right ) & =\Theta \left ( \pi \right ) \nonumber \\ A\cos \left ( -\sqrt{\lambda }\pi \right ) +B\sin \left ( -\sqrt{\lambda }\pi \right ) & =A\cos \left ( \sqrt{\lambda }\pi \right ) +B\sin \left ( \sqrt{\lambda }\pi \right ) \nonumber \\ A\cos \left ( \sqrt{\lambda }\pi \right ) -B\sin \left ( \sqrt{\lambda }\pi \right ) & =A\cos \left ( \sqrt{\lambda }\pi \right ) +B\sin \left ( \sqrt{\lambda }\pi \right ) \nonumber \\ 2B\sin \left ( \sqrt{\lambda }\pi \right ) & =0 \tag{3} \end{align} Applying second B.C. gives\begin{align} \Theta ^{\prime }\left ( -\pi \right ) & =\Theta ^{\prime }\left ( \pi \right ) \nonumber \\ -A\sqrt{\lambda }\sin \left ( -\sqrt{\lambda }\pi \right ) +B\sqrt{\lambda }\cos \left ( -\sqrt{\lambda }\pi \right ) & =-A\sqrt{\lambda }\sin \left ( \sqrt{\lambda }\pi \right ) +B\sqrt{\lambda }\cos \left ( \sqrt{\lambda }\pi \right ) \nonumber \\ A\sqrt{\lambda }\sin \left ( \sqrt{\lambda }\pi \right ) +B\sqrt{\lambda }\cos \left ( \sqrt{\lambda }\pi \right ) & =-A\sqrt{\lambda }\sin \left ( \sqrt{\lambda }\pi \right ) +B\sqrt{\lambda }\cos \left ( \sqrt{\lambda }\pi \right ) \nonumber \\ A\sqrt{\lambda }\sin \left ( \sqrt{\lambda }\pi \right ) & =-A\sqrt{\lambda }\sin \left ( \sqrt{\lambda }\pi \right ) \nonumber \\ 2A\sin \left ( \sqrt{\lambda }\pi \right ) & =0 \tag{4} \end{align} Equations (3,4) can be both zero only if $$A=B=0$$ which gives trivial solution, or when $$\sin \left ( \sqrt{\lambda }\pi \right ) =0$$. Therefore taking $$\sin \left ( \sqrt{\lambda }\pi \right ) =0$$ gives a non-trivial solution. 
Hence\begin{align*} \sqrt{\lambda }\pi & =n\pi \qquad n=1,2,3,\cdots \\ \lambda _{n} & =n^{2}\qquad n=1,2,3,\cdots \end{align*} Hence the solution for $$\Theta$$ is$$\Theta =A_{0}+\sum _{n=1}^{\infty }A_{n}\cos \left ( n\theta \right ) +B_{n}\sin \left ( n\theta \right ) \tag{5}$$ Now the $$R$$ equation is solved The case for $$\lambda =0$$ gives\begin{align*} r^{2}R^{\prime \prime }+rR^{\prime } & =0\\ R^{\prime \prime }+\frac{1}{r}R^{\prime } & =0\qquad r\neq 0 \end{align*} As was done in last problem, the solution to this is$R\left ( r\right ) =A\ln \left \vert r\right \vert +C$ Since $$r>0$$ no need to keep worrying about $$\left \vert r\right \vert$$ and is removed for simplicity. Applying the B.C. gives$R^{\prime }=A\frac{1}{r}$ Evaluating at $$r=a$$ gives$0=A\frac{1}{a}$ Hence $$A=0$$, and the solution becomes$R\left ( r\right ) =C_{0}$ Which is a constant. Case $$\lambda >0$$ The ODE in this case is $r^{2}R^{\prime \prime }+rR^{\prime }-n^{2}R=0\qquad n=1,2,3,\cdots$ Let $$R=r^{p}$$, the above becomes\begin{align*} r^{2}p\left ( p-1\right ) r^{p-2}+rpr^{p-1}-n^{2}r^{p} & =0\\ p\left ( p-1\right ) r^{p}+pr^{p}-n^{2}r^{p} & =0\\ p\left ( p-1\right ) +p-n^{2} & =0\\ p^{2} & =n^{2}\\ p & =\pm n \end{align*} Hence the solution is$R_{n}\left ( r\right ) =Cr^{n}+D\frac{1}{r^{n}}\qquad n=1,2,3,\cdots$ Applying the boundary condition $$R^{\prime }\left ( a\right ) =0$$ gives\begin{align*} R_{n}^{\prime }\left ( r\right ) & =nC_{n}r^{n-1}-nD_{n}\frac{1}{r^{n+1}}\\ 0 & =R_{n}^{\prime }\left ( a\right ) \\ & =nC_{n}a^{n-1}-nD_{n}\frac{1}{a^{n+1}}\\ & =nC_{n}a^{2n}-nD_{n}\\ & =C_{n}a^{2n}-D_{n}\\ D_{n} & =C_{n}a^{2n} \end{align*} The solution becomes\begin{align*} R_{n}\left ( r\right ) & =C_{n}r^{n}+C_{n}a^{2n}\frac{1}{r^{n}}\qquad n=1,2,3,\cdots \\ & =C_{n}\left ( r^{n}+\frac{a^{2n}}{r^{n}}\right ) \end{align*} Hence the complete solution for $$R\left ( r\right )$$ is$$R\left ( r\right ) =C_{0}+\sum _{n=1}^{\infty }C_{n}\left ( r^{n}+\frac{a^{2n}}{r^{n}}\right ) 
\tag{6}$$ Using (5),(6)  gives\begin{align*} u_{n}\left ( r,\theta \right ) & =R_{n}\Theta _{n}\\ u\left ( r,\theta \right ) & =\left [ C_{0}+\sum _{n=1}^{\infty }C_{n}\left ( r^{n}+\frac{a^{2n}}{r^{n}}\right ) \right ] \left [ A_{0}+\sum _{n=1}^{\infty }A_{n}\cos \left ( n\theta \right ) +B_{n}\sin \left ( n\theta \right ) \right ] \\ & =D_{0}+\sum _{n=1}^{\infty }A_{n}\cos \left ( n\theta \right ) C_{n}\left ( r^{n}+\frac{a^{2n}}{r^{n}}\right ) +\sum _{n=1}^{\infty }B_{n}\sin \left ( n\theta \right ) C_{n}\left ( r^{n}+\frac{a^{2n}}{r^{n}}\right ) \end{align*} Where $$D_{0}=C_{0}A_{0}$$. To simplify more, $$A_{n}C_{n}$$ is combined to $$A_{n}$$ and $$B_{n}C_{n}$$ is combined to $$B_{n}$$. The full solution is$u\left ( r,\theta \right ) =D_{0}+\sum _{n=1}^{\infty }A_{n}\left ( r^{n}+\frac{a^{2n}}{r^{n}}\right ) \cos \left ( n\theta \right ) +\sum _{n=1}^{\infty }B_{n}\left ( r^{n}+\frac{a^{2n}}{r^{n}}\right ) \sin \left ( n\theta \right )$ The final nonhomogeneous B.C. is applied.\begin{align*} u\left ( b,\theta \right ) & =g\left ( \theta \right ) \\ g\left ( \theta \right ) & =D_{0}+\sum _{n=1}^{\infty }A_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \cos \left ( n\theta \right ) +\sum _{n=1}^{\infty }B_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \sin \left ( n\theta \right ) \end{align*} For $$n=0$$, integrating both sides give\begin{align*} \int _{-\pi }^{\pi }g\left ( \theta \right ) d\theta & =\int _{-\pi }^{\pi }D_{0}d\theta \\ D_{0} & =\frac{1}{2\pi }\int _{-\pi }^{\pi }g\left ( \theta \right ) d\theta \end{align*} For $$n>0$$, multiplying both sides by $$\cos \left ( m\theta \right )$$ and integrating gives\begin{align*} \int _{-\pi }^{\pi }g\left ( \theta \right ) \cos \left ( m\theta \right ) d\theta & =\int _{-\pi }^{\pi }D_{0}\cos \left ( m\theta \right ) d\theta \\ & +\int _{-\pi }^{\pi }\sum _{n=1}^{\infty }A_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \cos \left ( m\theta \right ) \cos \left ( n\theta \right ) d\theta \\ & +\int _{-\pi }^{\pi 
}\sum _{n=1}^{\infty }B_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \cos \left ( m\theta \right ) \sin \left ( n\theta \right ) d\theta \end{align*} Hence\begin{align} \int _{-\pi }^{\pi }g\left ( \theta \right ) \cos \left ( m\theta \right ) d\theta & =\int _{-\pi }^{\pi }D_{0}\cos \left ( m\theta \right ) d\theta \nonumber \\ & +\sum _{n=1}^{\infty }A_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \int _{-\pi }^{\pi }\cos \left ( m\theta \right ) \cos \left ( n\theta \right ) d\theta \nonumber \\ & +\sum _{n=1}^{\infty }B_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \int _{-\pi }^{\pi }\cos \left ( m\theta \right ) \sin \left ( n\theta \right ) d\theta \tag{7} \end{align} But \begin{align*} \int _{-\pi }^{\pi }\cos \left ( m\theta \right ) \cos \left ( n\theta \right ) d\theta & =\pi \qquad n=m\neq 0\\ \int _{-\pi }^{\pi }\cos \left ( m\theta \right ) \cos \left ( n\theta \right ) d\theta & =0\qquad n\neq m \end{align*} And$\int _{-\pi }^{\pi }\cos \left ( m\theta \right ) \sin \left ( n\theta \right ) d\theta =0\qquad$ And$\int _{-\pi }^{\pi }D_{0}\cos \left ( m\theta \right ) d\theta =0$ Then (7) becomes\begin{align} \int _{-\pi }^{\pi }g\left ( \theta \right ) \cos \left ( n\theta \right ) d\theta & =\pi A_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \nonumber \\ A_{n} & =\frac{1}{\pi }\frac{\int _{-\pi }^{\pi }g\left ( \theta \right ) \cos \left ( n\theta \right ) d\theta }{b^{n}+\frac{a^{2n}}{b^{n}}} \tag{8} \end{align} Again, multiplying both sides by $$\sin \left ( m\theta \right )$$ and integrating gives\begin{align*} \int _{-\pi }^{\pi }g\left ( \theta \right ) \sin \left ( m\theta \right ) d\theta & =\int _{-\pi }^{\pi }D_{0}\sin \left ( m\theta \right ) d\theta \\ & +\int _{-\pi }^{\pi }\sum _{n=1}^{\infty }A_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \sin \left ( m\theta \right ) \cos \left ( n\theta \right ) d\theta \\ & +\int _{-\pi }^{\pi }\sum _{n=1}^{\infty }B_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \sin \left ( m\theta \right ) \sin 
\left ( n\theta \right ) d\theta \end{align*} Hence\begin{align} \int _{-\pi }^{\pi }g\left ( \theta \right ) \sin \left ( m\theta \right ) d\theta & =\int _{-\pi }^{\pi }D_{0}\sin \left ( m\theta \right ) d\theta \nonumber \\ & +\sum _{n=1}^{\infty }A_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \int _{-\pi }^{\pi }\sin \left ( m\theta \right ) \cos \left ( n\theta \right ) d\theta \nonumber \\ & +\sum _{n=1}^{\infty }B_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \int _{-\pi }^{\pi }\sin \left ( m\theta \right ) \sin \left ( n\theta \right ) d\theta \tag{9} \end{align} But \begin{align*} \int _{-\pi }^{\pi }\sin \left ( m\theta \right ) \sin \left ( n\theta \right ) d\theta & =\pi \qquad n=m\neq 0\\ \int _{-\pi }^{\pi }\sin \left ( m\theta \right ) \sin \left ( n\theta \right ) d\theta & =0\qquad n\neq m \end{align*} And$\int _{-\pi }^{\pi }\sin \left ( m\theta \right ) \cos \left ( n\theta \right ) d\theta =0$ And$\int _{-\pi }^{\pi }D_{0}\sin \left ( m\theta \right ) d\theta =0$ Then (9) becomes\begin{align*} \int _{-\pi }^{\pi }g\left ( \theta \right ) \sin \left ( n\theta \right ) d\theta & =\pi B_{n}\left ( b^{n}+\frac{a^{2n}}{b^{n}}\right ) \\ B_{n} & =\frac{1}{\pi }\frac{\int _{-\pi }^{\pi }g\left ( \theta \right ) \sin \left ( n\theta \right ) d\theta }{b^{n}+\frac{a^{2n}}{b^{n}}} \end{align*} This completes the solution. 
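The coefficient formulas just derived can be sanity-checked numerically. The sketch below (not part of the original notes) assumes the radii $a=1$, $b=2$ and a test boundary function $g(\theta)=\cos\theta + \tfrac{1}{2}\sin 2\theta$; it computes $D_0$, $A_n$, $B_n$ by quadrature on a periodic grid and verifies that the truncated series reproduces $g$ at $r=b$ and has vanishing radial derivative at $r=a$.

```python
import numpy as np

a, b, N = 1.0, 2.0, 10                              # assumed radii and truncation
g = lambda th: np.cos(th) + 0.5 * np.sin(2 * th)    # assumed test boundary data

# Uniform grid on [-pi, pi); the periodic Riemann sum is spectrally accurate.
M = 4096
th = -np.pi + 2 * np.pi * np.arange(M) / M
dth = 2 * np.pi / M

D0 = np.sum(g(th)) * dth / (2 * np.pi)
A = [np.sum(g(th) * np.cos(n * th)) * dth / (np.pi * (b**n + a**(2 * n) / b**n))
     for n in range(1, N + 1)]
B = [np.sum(g(th) * np.sin(n * th)) * dth / (np.pi * (b**n + a**(2 * n) / b**n))
     for n in range(1, N + 1)]

def u(r, th):
    # Truncated series solution from the summary formulas.
    s = D0 * np.ones_like(th)
    for n in range(1, N + 1):
        rad = r**n + a**(2 * n) / r**n
        s += rad * (A[n - 1] * np.cos(n * th) + B[n - 1] * np.sin(n * th))
    return s

h = 1e-6
print(np.max(np.abs(u(b, th) - g(th))))                          # small
print(np.max(np.abs((u(a + h, th) - u(a - h, th)) / (2 * h))))   # small
```

For this $g$ only the $n=1$ cosine and $n=2$ sine modes are nonzero, so both boundary conditions are satisfied to near machine precision.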
Summary\begin{align*} u\left ( r,\theta \right ) & =D_{0}+\sum _{n=1}^{\infty }A_{n}\left ( r^{n}+\frac{a^{2n}}{r^{n}}\right ) \cos \left ( n\theta \right ) +\sum _{n=1}^{\infty }B_{n}\left ( r^{n}+\frac{a^{2n}}{r^{n}}\right ) \sin \left ( n\theta \right ) \\ D_{0} & =\frac{1}{2\pi }\int _{-\pi }^{\pi }g\left ( \theta \right ) d\theta \\ A_{n} & =\frac{1}{\pi }\frac{\int _{-\pi }^{\pi }g\left ( \theta \right ) \cos \left ( n\theta \right ) d\theta }{b^{n}+\frac{a^{2n}}{b^{n}}}\\ B_{n} & =\frac{1}{\pi }\frac{\int _{-\pi }^{\pi }g\left ( \theta \right ) \sin \left ( n\theta \right ) d\theta }{b^{n}+\frac{a^{2n}}{b^{n}}} \end{align*} _______________________________________________________________________________________ #### 12.3 Laplace PDE inside circular annulus, Neumann boundary conditions using unspecified functions (Haberman 2.5.8 (b)) problem number 105 This is problem 2.5.8 part (b) from Richard Haberman applied partial differential equations, 5th edition Solve Laplace equation $\frac{\partial ^2 u}{\partial r^2} + \frac{1}{r }\frac{\partial u}{\partial r} + \frac{1}{r^2} \frac{\partial ^2 u}{\partial \theta ^2} =0$ Inside circular annulus $$a<r<b$$ subject to the following boundary conditions \begin{align*} \frac{\partial u}{\partial r}(a,\theta ) &= 0 \\ u(b,\theta ) &= g(\theta ) \end{align*} Mathematica $\text{DSolve}\left [\left \{\frac{u^{(0,2)}(r,\theta )}{r^2}+\frac{u^{(1,0)}(r,\theta )}{r}+u^{(2,0)}(r,\theta )=0,\left \{u^{(1,0)}(a,\theta )=0,u(b,\theta )=g(\theta )\right \}\right \},u(r,\theta ),\{r,\theta \},\text{Assumptions}\to a<r\leq b\right ]$ Maple $u \left ( r,\theta \right ) ={\it invfourier} \left ({\frac{{\it fourier} \left ( g \left ( \theta \right ) ,\theta ,s \right ){{\rm e}^{-s \left ( \ln \left ( b \right ) -\ln \left ( r \right ) \right ) }}}{{{\rm e}^{2\, \left ( \ln \left ( a \right ) -\ln \left ( b \right ) \right ) s}}+1}},s,\theta \right ) +{\it invfourier} \left ({\frac{{\it fourier} \left ( g \left ( \theta \right ) ,\theta ,s 
\right ){{\rm e}^{s \left ( 2\,\ln \left ( a \right ) -\ln \left ( b \right ) -\ln \left ( r \right ) \right ) }}}{{{\rm e}^{2\, \left ( \ln \left ( a \right ) -\ln \left ( b \right ) \right ) s}}+1}},s,\theta \right )$ But has unresolved Invfourier and Fourier calls _______________________________________________________________________________________ #### 12.4 Laplace PDE inside circular annulus, Dirichlet boundary conditions using specified functions problem number 106 Solve Laplace equation $\frac{\partial ^2 u}{\partial r^2} + \frac{1}{r }\frac{\partial u}{\partial r} + \frac{1}{r^2} \frac{\partial ^2 u}{\partial \theta ^2} =0$ Inside circular annulus $$1<r<2$$ subject to the following boundary conditions \begin{align*} u(1,\theta ) &= 0 \\ u(2,\theta ) &= \sin \theta \end{align*} Mathematica $\left \{\left \{u(r,\theta )\to \begin{array}{cc} \{ & \begin{array}{cc} \frac{2 \left (r^2-1\right ) \sin (\theta )}{3 r} & 1\leq r\leq 2 \\ \text{Indeterminate} & \text{True} \\\end{array} \\\end{array}\right \}\right \}$ Maple $u \left ( r,\theta \right ) =2/3\,{\frac{\sin \left ( \theta \right ) \left ({r}^{2}-1 \right ) }{r}}$ _______________________________________________________________________________________ #### 12.5 Laplace PDE outside a disk, periodic boundary conditions problem number 107 Solve Laplace equation in polar coordinates outside a disk Solve for $$u\left ( r,\theta \right )$$ \begin{align*} \frac{\partial ^{2}u}{\partial r^{2}}+\frac{1}{r}\frac{\partial u}{\partial r} +\frac{1}{r^{2}}\frac{\partial ^{2}u}{\partial \theta ^{2}} & =0\\ a & \leq r \\ 0 & <\theta \leq 2\pi \end{align*} Boundary conditions \begin{align*} u\left ( a,\theta \right ) & =f\left ( \theta \right ) \\ \left \vert u\left ( 0,\theta \right ) \right \vert & <\infty \\ u\left ( r,0\right ) & =u\left ( r,2\pi \right ) \\ \frac{\partial u}{\partial \theta }\left ( r,0\right ) & =\frac{\partial u}{\partial \theta }\left ( r,2\pi \right ) \end{align*} Mathematica 
$\text{DSolve}\left [\left \{\frac{u^{(0,2)}(r,\theta )}{r^2}+\frac{u^{(1,0)}(r,\theta )}{r}+u^{(2,0)}(r,\theta )=0,\left \{u(a,\theta )=f(\theta ),u(r,-\pi )=u(r,\pi ),u^{(0,1)}(r,-\pi )=u^{(0,1)}(r,\pi )\right \}\right \},u(r,\theta ),\{r,\theta \},\text{Assumptions}\to \{a>0,r>a\}\right ]$ Maple $u \left ( r,\theta \right ) =1/2\,{\frac{1}{\pi } \left ( 2\,\sum _{n=1}^{\infty } \left ({\frac{\int _{-\pi }^{\pi }\!\sin \left ( n\theta \right ) f \left ( \theta \right ) \,{\rm d}\theta \sin \left ( n\theta \right ) +\int _{-\pi }^{\pi }\!f \left ( \theta \right ) \cos \left ( n\theta \right ) \,{\rm d}\theta \cos \left ( n\theta \right ) }{\pi } \left ({\frac{r}{a}} \right ) ^{-n}} \right ) \pi +\int _{-\pi }^{\pi }\!f \left ( \theta \right ) \,{\rm d}\theta \right ) }$
# Enumerating set combinations in an order that maximises the number of previously unseen subsets Consider a set $S=\{a,b,c,d,e,f,g,h,i,j,k\}$, $\left|S\right|=11$. There are ${11 \choose 5} = 462$ combinations of $S$'s members of size $5$. There are $462! \approx 1.419 × 10^{1032}$ possible orderings of those sets. All orderings are in $\Phi_0$. For a given ordering, $\phi \in \Phi_0$, $x_i^\phi$ is the set placed at position $i$. With the initial set $\Phi_0$ defined above, the subsequent sets are defined as $$\Phi_i = \left\{\,\phi \in \Phi_{i-1} \mid \not\exists \phi^\prime \in \Phi_{i-1} , C\left(i,\phi^\prime\right) \gt C\left(i,\phi\right)\, \right\}$$ where $C\left(i,\phi\right)$ is the number of subsets given by $x^\phi_i$ which do not already exist as subsets of previous $x^\phi$'s $$C\left(i,\phi\right)=\left|2^{x^\phi_i} \setminus \bigcup_{\forall h<i}{2^{x^\phi_h}} \right|$$ I'm looking for ways to efficiently enumerate any one of the orderings that exist in $\Phi_{462}$. Any suggestions, pointing towards relevant/potentially useful algorithms or papers, would be appreciated. There are $\sum_{j=1}^4 \binom{11}{j} = 561$ smaller subsets, and each $x^\phi$ contains $\sum_{j=1}^4 \binom{5}{j} = 30$ of them. If you put all $462$ $5$-element sets in a priority queue with priority corresponding to the number of subsets which haven't yet appeared, after each pop you have to check $30$ subsets to see whether they're appearing for the first time, and for each $k$-element subset that is appearing for the first time you have to update the priorities of $\binom{11}{5-k}$ sets. There's an easy upper bound of $25410$ on the number of updates. As a follow-up optimisation, once every smaller subset has been seen (which happens after you've removed the first 90 elements from the priority queue), you can just iterate through the rest. If your priority queue is e.g. a binary heap, this will save you a lot of $O(\lg n)$ pops. • This gets most of the way there. 
This is guaranteed to give an ordering where $C(i,\phi)$ is monotonically decreasing (which is the main desirable feature), but not necessarily a member of $\Phi_{462}$. Consider the priority queue when selecting the $i$th set in the ordering: there can be multiple sets with the same priority. At least one of them will allow you to construct an ordering in $\Phi_{462}$, but not necessarily all of them. – Gareth A. Lloyd May 21 '15 at 12:25 • I misunderstood the details of $\Phi_i$. Perhaps the question would be clearer if it didn't mention $\Phi_i$ at all, but defined a "new subset count" vector for each $\phi$ and asked for a lexicographically maximum "new subset count" vector. – Peter Taylor May 21 '15 at 20:46
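The greedy priority-queue idea from the answer can be sketched in a few lines. This is not the answerer's code, and for brevity it uses a plain argmax over the remaining sets rather than an actual heap; it counts the nonempty subsets of each $k$-set (including the set itself). On a small instance it illustrates that the "new subset count" sequence is nonincreasing.

```python
from itertools import combinations

def greedy_order(S, k):
    """Greedy heuristic: repeatedly pick a k-set contributing the most unseen
    (nonempty) subsets; ties are broken arbitrarily."""
    def subsets(s):
        return {frozenset(c) for j in range(1, k + 1) for c in combinations(s, j)}

    remaining = {frozenset(c) for c in combinations(S, k)}
    seen, order, counts = set(), [], []
    while remaining:
        best = max(remaining, key=lambda s: len(subsets(s) - seen))
        counts.append(len(subsets(best) - seen))
        seen |= subsets(best)
        order.append(best)
        remaining.remove(best)
    return order, counts

# Small instance: the 3-subsets of a 6-element set (20 sets, 41 distinct subsets).
order, counts = greedy_order(range(6), 3)
print(counts)
```

The first set always contributes $1 + 3 + 3 = 7$ new subsets, and across the whole ordering every nonempty subset of size at most $3$ appears exactly once in the counts, so they sum to $\binom{6}{1}+\binom{6}{2}+\binom{6}{3} = 41$.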
# User:Temperal/The Problem Solver's Resource1 Introduction | Other Tips and Tricks | Methods of Proof | You are currently viewing page 1. ## Trigonometric Formulas Note that all measurements are in radians. ### Basic Facts $\sin (-A)=-\sin A$ $\cos (-A)=\cos A$ $\tan (-A)=-\tan A$ $\sin (\pi-A) = \sin A$ $\cos (\pi-A) = -\cos A$ $\sin (\pi+A) = -\sin A$ $\cos (\pi+A) = -\cos A$ $\tan (\pi+A) = \tan A$ $\sin (\pi/2-A)=\cos A$ $\cos (\pi/2-A)=\sin A$ $\tan (\pi/2-A)=\cot A$ $\cot (\pi/2-A)=\tan A$ $\sec (\pi/2-A)=\csc A$ $\csc (\pi/2-A)=\sec A$ The above can all be seen clearly by examining the graphs or plotting on a unit circle - the reader can figure that out themselves. ### Terminology and Notation $\cot A=\frac{1}{\tan A}$, but $\cot A\ne\tan^{-1} A$, the former being the reciprocal and the latter the inverse. $\csc A=\frac{1}{\sin A}$, but $\csc A\ne\sin^{-1} A$. $\sec A=\frac{1}{\cos A}$, but $\sec A\ne\cos^{-1} A$. Speaking of inverses: $\tan^{-1} A=\text{atan } A=\arctan A$ $\cos^{-1} A=\text{acos } A=\arccos A$ $\sin^{-1} A=\text{asin } A=\arcsin A$ ### Sum of Angle Formulas $\sin (A \pm B)=\sin A \cos B \pm \cos A \sin B$ If we can prove this one, the other ones can be derived easily using the "Basic Facts" identities above. In fact, we can simply prove the addition case, for plugging $-B$ in place of $B$ into the addition case gives the subtraction case. As it turns out, there's quite a nice geometric proof of the addition case, though other methods, such as de Moivre's Theorem, exist. The following proof is taken from the Art of Problem Solving, Vol. 2 and is due to Masakazu Nihei of Japan, who originally had it published in Mathematics & Informatics Quarterly, Vol. 3, No. 
2: $[asy] pair A,B,C; C=(0,0); B=(10,0); A=(6,4); draw(A--B--C--cycle); label("A",A,N); label("B",B,E); label("C",C,W); draw(A--(6,0)); label("\beta",A,(-1,-2)); label("\alpha",A,(1,-2.5)); label("H",(6,0),S); draw((6,0)--(5.5,0)--(5.5,0.5)--(6,0.5)--cycle); [/asy]$ Figure 1 We'll find $[ABC]$ in two different ways: $\frac{1}{2}(AB)(AC)(\sin \angle BAC)$ and $[ABH]+[ACH]$. We let $AH=1$. We have: $[ABC]=[ABH]+[ACH]$ $\frac{1}{2}(AC)(AB)(\sin \angle BAC)=\frac{1}{2}(AH)(BH)+\frac{1}{2}(AH)(CH)$ $\frac{1}{2}\left(\frac{1}{\cos \beta}\right)\left(\frac{1}{\cos \alpha}\right)(\sin \angle BAC)=\frac{1}{2}(1)(\tan \alpha+\tan \beta)$ $\frac{\sin (\alpha+\beta)}{\cos \alpha \cos \beta}=\frac{\sin \alpha}{\cos \alpha}+\frac{\sin \beta}{\cos \beta}$ $\sin(\alpha+\beta)=\sin \alpha \cos \beta +\sin \beta \cos \alpha$ $\mathbb{QED.}$ $\cos (A \pm B)=\cos A \cos B \mp \sin A \sin B$ $\tan (A \pm B)=\frac{\tan A \pm \tan B}{1 \mp \tan A \tan B}$ The following identities can be easily derived by plugging $A=B$ into the above: $\sin2A=2\sin A \cos A$ $\cos2A=\cos^2 A - \sin^2 A$ or $\cos2A=2\cos^2 A -1$ or $\cos2A=1- 2 \sin^2 A$ $\tan2A=\frac{2\tan A}{1-\tan^2 A}$ ### Pythagorean identities $\sin^2 A+\cos^2 A=1$ $1 + \tan^2 A = \sec^2 A$ $1 + \cot^2 A = \csc^2 A$ for all $A$. These can be easily seen by going back to the unit circle and the definition of these trig functions. ### Other Formulas #### Law of Cosines In a triangle with sides $a$, $b$, and $c$ opposite angles $A$, $B$, and $C$, respectively, $c^2=a^2+b^2-2ab\cos C$ and: #### Law of Sines $\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}=2R$ where $R$ is the radius of the circumcircle of $\triangle ABC$ Proof: In the diagram below, circle $O$ circumscribes triangle $ABC$. $OD$ is perpendicular to $BC$. Since $\triangle ODB \cong \triangle ODC$, $BD = CD = \frac a2$ and $\angle BOD = \angle COD$. But $\angle BOC = 2\angle BAC$ making $\angle BOD = \angle COD = \theta$, where $\theta = \angle BAC$. 
Therefore, we can use simple trig in right triangle $BOD$ to find that $\sin \theta = \frac{\frac a2}R \Leftrightarrow \frac a{\sin\theta} = 2R.$ The same holds for $b$ and $c$, thus establishing the identity. #### Law of Tangents If $A$ and $B$ are angles in a triangle opposite sides $a$ and $b$ respectively, then $$\frac{a-b}{a+b}=\frac{\tan (A-B)/2}{\tan (A+B)/2} .$$ The proof of this is less trivial than that of the law of sines and cosines, but still fairly easy: Let $s$ and $d$ denote $(A+B)/2$, $(A-B)/2$, respectively. By the law of sines, $$\frac{a-b}{a+b} = \frac{\sin A - \sin B}{\sin A + \sin B} = \frac{ \sin(s+d) - \sin (s-d)}{\sin(s+d) + \sin(s-d)} .$$ By the angle addition identities, $$\frac{\sin(s+d) - \sin(s-d)}{\sin(s+d) + \sin(s-d)} = \frac{2\cos s \sin d}{2\sin s \cos d} = \frac{\tan d}{\tan s} = \frac{\tan (A-B)/2}{\tan (A+B)/2}$$ as desired. #### Area of a Triangle The area of a triangle can be found by $\frac 12ab\sin C$ This can be easily proven by the well-known formula $\frac{1}{2}ah_a$ - considering one of the triangles into which the altitude $h_a$ divides $\triangle ABC$, we see that $h_a=b\sin C$ and hence $[ABC]=\frac 12ab\sin C$ as desired.
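These identities are easy to spot-check numerically. The short script below (an illustration, not part of the wiki page) verifies the sum-of-angle formulas at random angles, and checks the law of cosines against the law of sines on a triangle scaled so that $2R = 1$, i.e. each side equals the sine of its opposite angle.

```python
import math, random

random.seed(0)
# Spot-check the sum-of-angle formulas at random angles.
for _ in range(100):
    A, B = random.uniform(-3, 3), random.uniform(-3, 3)
    assert math.isclose(math.sin(A + B),
                        math.sin(A) * math.cos(B) + math.cos(A) * math.sin(B),
                        abs_tol=1e-12)
    assert math.isclose(math.cos(A + B),
                        math.cos(A) * math.cos(B) - math.sin(A) * math.sin(B),
                        abs_tol=1e-12)

# Law of cosines vs. law of sines: with 2R = 1, each side is the sine of
# its opposite angle, so c from the law of cosines must equal sin(C).
A, B = 1.0, 0.7
C = math.pi - A - B
a, b = math.sin(A), math.sin(B)
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(C))
print(c, math.sin(C))   # the two agree
```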
• # question_answer Direction: Study the following information carefully and answer the questions given below: A survey was conducted on 2500 persons to know their liking for different types of houses. 68% of the people like Duplexes, 69.6% like Villas and 64% like apartments. 14.4% of them like only Duplexes, 16.8% like only Villas and 8.8% like only apartments. Now, answer the following questions based on this information. What per cent of people like only Duplexes and Villas but not apartments? A)  4.8%               B)  4.2%   C) 5.4%   D) 4%                  E)  3.8% Reqd. $\%=\frac{120}{2500}\times 100=4.8\%$
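The count of 120 in the given answer can be recovered by inclusion-exclusion over the Venn-diagram regions, assuming (as the question implies) that every surveyed person likes at least one type of house:

```python
# Region counts, assuming every surveyed person likes at least one type.
total = 2500
D, V, A = 1700, 1740, 1600               # 68%, 69.6%, 64% of 2500
only_D, only_V, only_A = 360, 420, 220   # 14.4%, 16.8%, 8.8% of 2500

# Unknown regions: x = only D&V, y = only D&A, z = only V&A, t = all three.
s1 = D - only_D        # x + y + t
s2 = V - only_V        # x + z + t
s3 = A - only_A        # y + z + t
rest = total - only_D - only_V - only_A   # x + y + z + t

t = (s1 + s2 + s3) - 2 * rest   # since s1 + s2 + s3 = 2(x + y + z) + 3t
x = (rest - t) - (s3 - t)       # (x + y + z) minus (y + z)
print(x, 100 * x / total)       # 120 people, i.e. 4.8%
```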
Imagine that you were asked to discover whether the workplace is generally happy. Unfortunately, your organization happens to be run by a semi-evil dictator who would fire anyone he knows for sure is unhappy. Normally you might use a standard survey, but you cannot in good conscience ask individuals to fill out a survey that may get them fired. What can you do? Enter Differential Privacy. The key is to give an individual plausible deniability in their specific response, but collectively provide a relevant statistic. With randomness at our side, we can have a surprisingly simple survey approach that will let us know approximately if most people are happy or unhappy. # Randomized Response Function Our survey, where we ask whether an employee is happy or unhappy, can be generalized to a binary function that asks whether an employee is happy, returning True when happy or False when unhappy. With this generalization in mind we can write an algorithm that can guarantee privacy at an individual level with any binary response query. This same approach will work regardless of whether you are asking an individual a survey question, sending anonymized user event statistics over the internet, or peeking into a test set without overfitting (to be covered in more detail later). The privacy preserving function that we will use is called the "randomized response function", where individually we can return either True or False regardless of what our true response value is. The algorithm is quite simple:

1. Randomly pick True or False
2. If you picked True, return the true response
3. Else return a new random True or False

Super simple! 
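The three steps translate directly into code. This is a sketch of the idea rather than the notebook referenced later in the post; it also confirms empirically that a happy employee reports True about $3/4$ of the time.

```python
import random

def randomized_response(true_answer):
    """Steps 1-3 above: report the truth half the time, a fresh coin flip otherwise."""
    if random.random() < 0.5:       # step 1: randomly pick True or False
        return true_answer          # step 2: picked True -> return the true response
    return random.random() < 0.5    # step 3: picked False -> new random True/False

random.seed(0)
# A happy employee's reports over many surveys are True about 3/4 of the time.
reports = [randomized_response(True) for _ in range(100_000)]
print(sum(reports) / len(reports))   # close to 0.75
```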
If we break down every possible combination of values, we can see that while there is uncertainty about an individual's response, when many individuals are combined together we begin to see a sketch of the true number of True and False values:

| True Response | First Random Value | Second Random Value | Output |
|---|---|---|---|
| True | True | True | True |
| True | True | False | True |
| True | False | True | True |
| True | False | False | False |
| False | True | True | False |
| False | True | False | False |
| False | False | True | True |
| False | False | False | False |

From this table we see when True Response = True, $p(True \vert True) = 3/4$. When True Response = False, $p(True \vert False) = 1/4$. If we ask an individual what their true response is and they say True, we expect $3/4$ of the time that their true response is True. So if you start firing people who say False, roughly $1/4$ of those you fire will actually be happy - enough that even our semi-evil dictator takes pause. In our example, our dictator is only semi-evil. A fully evil dictator simply does not care about a false positive rate of $1/4$ and will still fire anyone who tells him they are unhappy, knowing some happy employees will be fired as well. What this approach guarantees is that even if most unhappy employees will be fired ($3/4$ on average) there will still be some unhappy employees left around. Our evil dictator will never be completely successful picking out individuals, and even if they fired $1/4$ of the workplace we will still be able to estimate workplace happiness. That's pretty cool! # Population Happiness The next step is to use our individually randomized survey and determine if these individuals are happy overall. The expected number of happy responses is $E[\#True] = \tfrac{3}{4}\#Happy + \tfrac{1}{4}\#Unhappy$. Individuals are happy overall when the number of happy responses is greater than the number of unhappy responses. Put symbolically this is true if $\#True > \#False$. This will begin to happen when $\#True \approx \#False$, and continue until $\#Surveys = \#True$. 
When $\#True = \#False$ we expect $E[\#True] = \#Surveys / 2$. Therefore a simple approximate test to see if employees are generally happy is when $\#True > \#Surveys/2$. # Putting it All Together For this algorithm, the number of times our happiness threshold $\#True > \#Surveys/2$ fails to indicate the correct response is captured in the CDF of a Binomial Distribution. If you want to model how randomized response will perform with your population, plot the CDF of the Binomial distribution for $n=\#employees$ and $p=0.5$. In practice, it's always good to test randomized methods with simulation. This algorithm works great as the number of employees grows. If we only have 10 employees, expect a lot of noise. However if you have 100, we have a much better algorithm. While analyses with this approach are not exact, one should be able to feel comfortable saying that employees are fairly happy when the threshold is passed. One can make the threshold more strict by adding an offset, such as one standard deviation $\#True > \#Surveys/2 + \sqrt{\#Surveys}/2$. This significantly reduces the false positive rate but at the cost of false negatives. I've put the implementation of this up on github in an ipython notebook, hopefully others find it helpful! # Cautionary Notes This approach in itself doesn't fully prevent information leakage when there are multiple queries asked to the same employee, either by asking the same query multiple times or asking multiple queries that are related. If a spy attempts to ask an employee the same question twice or more, the employee will inadvertently leak more information about their true response value. A simple solution to this is to only ever answer a question once as an employee, or alternatively return random values after the first response. The spy realizes they can't ask the same question twice, so instead they ask multiple related questions much like the 20-questions game. 
For example, if you wanted to determine a user's age, you could ask multiple threshold questions like $\text{age} < a_1$, $\text{age} < a_2$, $\text{age} < a_3$ for increasing cutoffs $a_1 < a_2 < a_3$. Although these are all binary questions, they are not independent and will probabilistically leak some information about the age of a user (ex: if $\text{age} < a_1$ then $\text{age} < a_2$ and $\text{age} < a_3$). With dependent queries the privacy guarantees will fail without modifications to incorporate additional noise. However, privacy is still preserved if the questions are independent. Less realistically, there are also timing attacks that one could make that reveal when a user is returning their true response. With the simple implementation of the randomized response function there will be different cycle counts and memory accesses arising from the naive implementation using a branch and drawing the random numbers lazily; it takes longer to draw two random numbers than one. It's very unlikely this difference will be measurable with a fast random number generator; however, one could easily make a branchless version not susceptible to timing attacks.
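The 10-versus-100 employee comparison made earlier can be reproduced with a small simulation. This is a sketch (not the post's notebook) that assumes a workplace where 75% of employees are genuinely happy and measures how often the threshold test $\#True > \#Surveys/2$ correctly reports "happy":

```python
import random

def randomized_report(happy):
    """One randomized response: the truth half the time, a coin flip otherwise."""
    return happy if random.random() < 0.5 else random.random() < 0.5

def detection_rate(happiness, n_trials=2000):
    """Fraction of simulated surveys where #True > n/2 reports 'happy'."""
    n = len(happiness)
    hits = 0
    for _ in range(n_trials):
        n_true = sum(randomized_report(h) for h in happiness)
        hits += n_true > n / 2
    return hits / n_trials

random.seed(0)
rates = {}
for n in (10, 100):
    team = [True] * (3 * n // 4) + [False] * (n - 3 * n // 4)  # 75% happy
    rates[n] = detection_rate(team)
print(rates)   # the test is far more reliable at n = 100
```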
# Efficiently sampling GP posteriors ## VFE approximate posterior In exact GP regression, we are typically interested in placing a GP prior over a latent function $$f$$, written as \begin{align} f \sim \mathcal{GP}(m, k), \end{align} where $$m$$ is the mean function, hereafter assumed to be $$0$$ without loss of generality, and $$k$$ is the covariance function. We model observations $$y_n$$ via an observation model with Gaussian noise \begin{align} y_n = f_n + \epsilon_n, \text{ where } \epsilon_n \sim \mathcal{N}(0, \sigma^2). \end{align} Conditioning on data with inputs $$\mathbf{X} = (\mathbf{x}_1, ..., \mathbf{x}_N)^\top$$ and outputs $$\mathbf{y} = (y_1, ..., y_N)^\top$$ and using Bayes' rule, we obtain the exact GP posterior for the latent function $$f$$ at a set of prediction locations $$\mathbf{X}^*$$, written $$\mathbf{f}_{\mathbf{X}^*}$$, as \begin{split}\begin{align} p(\mathbf{f}_{\mathbf{X^*}} | \mathbf{X^*}, \mathbf{X}, \mathbf{y}) &= \mathcal{N}(\mathbf{f}; \mathbf{m}, \mathbf{K}), \\ \end{align}\end{split} where the mean vector $$\mathbf{m}$$ and covariance matrix $$\mathbf{K}$$ are given by \begin{split}\begin{align} \mathbf{m} &= \mathbf{K}_{\mathbf{X^*}\mathbf{X}} \left(\mathbf{K}_{\mathbf{X}\mathbf{X}} + \sigma^2 \mathbf{I} \right)^{-1} \mathbf{y}, \\ \mathbf{K} &= \mathbf{K}_{\mathbf{X^*}\mathbf{X^*}} - \mathbf{K}_{\mathbf{X^*}\mathbf{X}} \left(\mathbf{K}_{\mathbf{X}\mathbf{X}} + \sigma^2 \mathbf{I} \right)^{-1} \mathbf{K}_{\mathbf{X}\mathbf{X^*}}. \end{align}\end{split} and the boldface $$\mathbf{K}_{\cdot, \cdot}$$ matrices are given by evaluating the covariance function $$k$$ at the locations specified by the subscripts. Computing this posterior, or evaluating the marginal log-likelihood $$\log p(\mathbf{y} | \mathbf{X})$$, involves a computational cost of $$\mathcal{O}(N^3)$$, where $$N$$ is the number of datapoints we are conditioning on. The computational bottleneck in these expressions is the matrix inversion displayed above. 
The Variational Free Energy (VFE) approximation for GPs gets around this issue by using an approximate posterior with pseudo-points at a learnable set of locations $$\bar{\mathbf{X}}$$. The posterior mean and covariance under this approximation are \begin{split}\begin{align} \mathbf{m} &= \sigma^{-2} \mathbf{K}_{\mathbf{X}^*\bar{\mathbf{X}}} \boldsymbol{\Sigma}^{-1} \mathbf{K}_{\bar{\mathbf{X}}\mathbf{X}} \mathbf{y}, \\ \mathbf{K} &= \mathbf{K}_{\mathbf{X}^*\mathbf{X}^*} - \mathbf{K}_{\mathbf{X}^*\bar{\mathbf{X}}} \mathbf{K}_{\bar{\mathbf{X}}\bar{\mathbf{X}}}^{-1}\mathbf{K}_{\bar{\mathbf{X}}\mathbf{X}^*} + \mathbf{K}_{\mathbf{X}^*\bar{\mathbf{X}}} \boldsymbol{\Sigma}^{-1}\mathbf{K}_{\bar{\mathbf{X}}\mathbf{X}^*}, \\ \boldsymbol{\Sigma} &= \mathbf{K}_{\bar{\mathbf{X}}\bar{\mathbf{X}}} + \sigma^{-2} \mathbf{K}_{\bar{\mathbf{X}}\mathbf{X}} \mathbf{K}_{\mathbf{X}\bar{\mathbf{X}}}. \end{align}\end{split} These expressions require the inversion of the (smaller) $$\boldsymbol{\Sigma}$$ matrix, reducing the computational cost down to $$\mathcal{O}(NM^2)$$, where $$M$$ is the number of pseudopoints. Unfortunately, sampling from this posterior is still very costly, because it requires computing the Cholesky factor of the $$\mathbf{K}$$ matrix, which still involves a $$\mathbf{K}_{\mathbf{X}^*\mathbf{X}^*}$$ factor. The cost of Cholesky factorisation still scales cubically as $$\mathcal{O}(K^3)$$, where $$K$$ is the number of locations $$\mathbf{X}^*$$ at which to sample, which is prohibitively expensive. The procedure developed by Wilson et al. [WBT+20] gets around this difficulty in an elegant way which combines existing approximations to efficiently draw approximate samples from the VFE posterior. ## Matheron's rule We start from Matheron's rule for Gaussian random variables, stated and proved below. Lemma (Matheron's rule) Suppose $$\mathbf{a}$$ and $$\mathbf{b}$$ are jointly distributed Gaussian random variables. Then \begin{align} \mathbf{a} | (\mathbf{b} = \boldsymbol{\beta}) \stackrel{d}{=} \mathbf{a} + \text{Cov}(\mathbf{a}, \mathbf{b}) \text{Cov}(\mathbf{b}, \mathbf{b})^{-1} (\boldsymbol{\beta} - \mathbf{b}), \end{align} where $$\stackrel{d}{=}$$ denotes equality in distribution. 
Derivation: Matheron's rule Computing the mean of the right hand side, we see \begin{align} \mathbb{E} \left[ \mathbf{a} + \text{Cov}(\mathbf{a}, \mathbf{b}) \text{Cov}(\mathbf{b}, \mathbf{b})^{-1} (\boldsymbol{\beta} - \mathbf{b}) \right] = \mathbf{m}_{\mathbf{a}} + \text{Cov}(\mathbf{a}, \mathbf{b}) \text{Cov}(\mathbf{b}, \mathbf{b})^{-1} (\boldsymbol{\beta} - \mathbf{m}_{\mathbf{b}}), \end{align} where $$\mathbf{m}_{\mathbf{a}}$$ and $$\mathbf{m}_{\mathbf{b}}$$ are the prior means of $$\mathbf{a}$$ and $$\mathbf{b}$$. We see that the right-hand side is precisely the expression for the mean of $$\mathbf{a}$$ conditioned on $$\mathbf{b} = \boldsymbol{\beta}$$. Similarly, taking covariances of both sides we obtain \begin{align} \text{Cov} \left[ \mathbf{a} + \text{Cov}(\mathbf{a}, \mathbf{b}) \text{Cov}(\mathbf{b}, \mathbf{b})^{-1} (\boldsymbol{\beta} - \mathbf{b}) \right] = \text{Cov}(\mathbf{a}, \mathbf{a}) - \text{Cov}(\mathbf{a}, \mathbf{b}) \text{Cov}(\mathbf{b}, \mathbf{b})^{-1} \text{Cov}(\mathbf{b}, \mathbf{a}), \end{align} which is exactly the expression of the conditional of a Gaussian distribution. Matheron's rule therefore gives a straightforward way to condition a variable $$\mathbf{a}$$ on the event $$\mathbf{b} = \boldsymbol{\beta}$$ after the variables $$\mathbf{a}$$ and $$\mathbf{b}$$ have been sampled from the prior. In particular, if we already have jointly sampled $$\mathbf{a}$$ and $$\mathbf{b}$$ from the prior Gaussian, we can adjust $$\mathbf{a}$$ by adding $$\text{Cov}(\mathbf{a}, \mathbf{b}) \text{Cov}(\mathbf{b}, \mathbf{b})^{-1} (\boldsymbol{\beta} - \mathbf{b})$$ to it, to obtain a valid sample of $$\mathbf{a} | (\mathbf{b} = \boldsymbol{\beta})$$. In the case of VFE GPs, the joint distribution of the latent function at the inducing locations $$\mathbf{f}_{\bar{\mathbf{X}}}$$ and the latent function at the prediction locations $$\mathbf{f}_{\mathbf{X}^*}$$ is Gaussian. 
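Matheron's rule is easy to verify numerically on a small joint Gaussian. The standalone NumPy sketch below (not part of the original tutorial) applies the update to many joint prior samples and checks that the empirical mean and covariance of the adjusted samples match the analytic Gaussian conditional:

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint zero-mean Gaussian over (a, b) with dim(a) = 2, dim(b) = 3.
L = rng.normal(size=(5, 5))
S = L @ L.T + 5.0 * np.eye(5)                 # well-conditioned joint covariance
Saa, Sab, Sbb = S[:2, :2], S[:2, 2:], S[2:, 2:]
beta = rng.normal(size=3)                     # the value we condition on: b = beta

K = Sab @ np.linalg.inv(Sbb)                  # Cov(a, b) Cov(b, b)^{-1}

# Draw joint prior samples and apply the Matheron update to each one.
ab = rng.multivariate_normal(np.zeros(5), S, size=200_000)
a, b = ab[:, :2], ab[:, 2:]
a_cond = a + (beta - b) @ K.T

# Empirical moments should match the analytic Gaussian conditional.
mean_exact = K @ beta
cov_exact = Saa - K @ Sab.T
print(np.abs(a_cond.mean(axis=0) - mean_exact).max())   # small
print(np.abs(np.cov(a_cond.T) - cov_exact).max())       # small
```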
Therefore, we can draw a sample from the approximate posterior by first sampling $$\mathbf{f}_{\mathbf{X}^*}$$ and $$\mathbf{f}_{\bar{\mathbf{X}}}$$ from the GP prior, then draw a sample $$\mathbf{u} \sim \mathbf{f}_{\bar{\mathbf{X}}} | \mathbf{X}, \bar{\mathbf{X}}, \mathbf{y}$$ from the approximate posterior over the inducing values and lastly adjust $$\mathbf{f}_{\mathbf{X}^*}$$ according to Matheron's rule \begin{align} \mathbf{f}_{\mathbf{X}^*} \leftarrow \mathbf{f}_{\mathbf{X}^*} + \mathbf{K}_{\mathbf{X}^*\bar{\mathbf{X}}} \mathbf{K}_{\bar{\mathbf{X}}\bar{\mathbf{X}}}^{-1} (\mathbf{u} - \mathbf{f}_{\bar{\mathbf{X}}}). \end{align} This procedure thus draws a sample from the prior, and adjusts it in accordance with the event that the inducing point values are $$\mathbf{f}_{\bar{\mathbf{X}}} = \mathbf{u}$$, giving exact samples from the (approximate) VFE posterior. However, jointly sampling $$\mathbf{f}_{\bar{\mathbf{X}}}$$ and $$\mathbf{f}_{\mathbf{X}^*}$$ from the prior still involves an expensive Cholesky factorisation, since we now have to factorise the covariance matrix corresponding to both the $$\bar{\mathbf{X}}$$ and $$\mathbf{X}^*$$ locations. Fortunately, for a broad class of covariance matrices, we can cheaply draw approximate samples from the prior using Random Fourier Features (RFF) [RR+07]. We can then plug these approximate samples into Matheron's rule to obtain approximate samples from the VFE posterior. Random Fourier Features are applicable to stationary covariances such as the EQ, Matern, Laplace or Cauchy covariances. Given such a covariance, the RFF algorithm produces a randomly sampled set of sinusoids $$\boldsymbol{\phi}(\mathbf{x}) = (\phi_1(\mathbf{x}), ..., \phi_F(\mathbf{x}))^\top$$, where $$F$$ is a prespecified number of sinusoids to use, such that the linear-in-the-parameters model \begin{align} f(\mathbf{x}) = \boldsymbol{\phi}(\mathbf{x})^\top \mathbf{w}, \text{ where } \mathbf{w} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}). \end{align} has, in expectation, a covariance $$k$$. The cost of sampling RFFs is linear in the number of features $$F$$ to draw, thereby avoiding computing a Cholesky factor of the prior covariance. 
This comes at the cost that the prior samples are now approximate; however, their quality can be improved by using more features, and the approximation error can be quantitatively bounded (see Wilson et al. [WBT+20] for quantitative bounds on this approximation). Putting everything together, we arrive at the following algorithm for drawing approximate samples from the VFE GP posterior, referred to as pathwise conditioning.

Algorithm (Approximate sampling via pathwise conditioning)

Given a VFE GP with a covariance function $$k$$ and a distribution $$q$$ over the inducing point values $$\mathbf{u}$$, with corresponding inducing inputs $$\Xb$$, the following procedure yields an approximate sample from the VFE posterior:

1. Sample $$\mathbf{u} \sim q(\mathbf{u})$$.
2. Sample $$\boldsymbol{\phi}(\cdot) = (\phi_1(\cdot), ..., \phi_F(\cdot))^\top$$ using the RFF approximation.
3. Sample weights $$\mathbf{w} = (w_1, ..., w_F)^\top$$, where $$w_i \sim \mathcal{N}(0, 1)$$ independently.
4. Return the function \begin{align} f(\cdot) = \boldsymbol{\phi}(\cdot)^\top \mathbf{w} + \mathbf{K}_{\cdot, \Xb} \mathbf{K}_{\Xb, \Xb}^{-1} \big(\mathbf{u} - \boldsymbol{\phi}\big(\Xb\big)^\top \mathbf{w}\big). \end{align}

## Implementation¶

We can now implement this algorithm, reusing code from the examples on the VFE approximation as well as the RFF approximation. First we write down the ConstantMean and EQcovariance classes, adding the sample_rff method to the EQcovariance class. This draws an RFF sample from the GP prior and returns it as a function rff, which can be queried and differentiated at any input location in constant time.
```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_probability as tfp


class ConstantMean(tf.keras.Model):

    def __init__(self, dtype, name='constant_mean'):
        super().__init__(name=name, dtype=dtype)
        self.constant = tf.Variable(tf.constant(0., dtype=dtype))

    def __call__(self, x):
        return self.constant * tf.ones(x.shape[0], dtype=self.dtype)


class EQcovariance(tf.keras.Model):

    def __init__(self, log_coeff, log_scales, dim, dtype, trainable=True,
                 name='eq_covariance', **kwargs):

        super().__init__(name=name, dtype=dtype, **kwargs)

        # Convert parameters to tensors
        log_coeff = tf.convert_to_tensor(log_coeff, dtype=dtype)
        log_scales = tf.convert_to_tensor(log_scales, dtype=dtype)

        # Reshape parameter tensors
        log_coeff = tf.squeeze(log_coeff)
        log_scales = tf.reshape(log_scales, (-1,))

        # Set input dimensionality
        self.dim = dim

        # Set EQ parameters
        self.log_scales = tf.Variable(log_scales, trainable=trainable)
        self.log_coeff = tf.Variable(log_coeff, trainable=trainable)

    def __call__(self, x1, x2, diag=False, epsilon=None):

        # Convert to tensors
        x1 = tf.convert_to_tensor(x1, dtype=self.dtype)
        x2 = tf.convert_to_tensor(x2, dtype=self.dtype)

        # Get vector of squared lengthscales
        scales = self.scales ** 2

        # If calculating full covariance, add dimensions to broadcast
        if not diag:
            x1 = x1[:, None, :]
            x2 = x2[None, :, :]
            scales = scales[None, None, :]

        # Compute quadratic (summing over input dimensions),
        # exponentiate and multiply by coefficient
        quad = - 0.5 * (x1 - x2) ** 2 / scales
        quad = tf.reduce_sum(quad, axis=-1)
        eq_cov = self.coeff ** 2 * tf.exp(quad)

        # Add jitter for invertibility
        if epsilon is not None:
            eq_cov = eq_cov + epsilon * tf.eye(eq_cov.shape[0],
                                               dtype=self.dtype)

        return eq_cov

    @property
    def scales(self):
        return tf.math.exp(self.log_scales)

    @property
    def coeff(self):
        return tf.math.exp(self.log_coeff)

    def sample_rff(self, num_features):

        # Dimension of data space
        x_dim = self.scales.shape[0]

        # Sample random frequencies and scale by inverse lengthscales
        omega = tf.random.normal(shape=(num_features, x_dim), dtype=self.dtype)
        omega = omega / self.scales[None, :]

        # Draw normally distributed RFF weights
        weights = tf.random.normal(mean=0., stddev=1., shape=(num_features,),
                                   dtype=self.dtype)

        # Draw one uniformly distributed phase per feature
        phi = tf.random.uniform(minval=0., maxval=(2 * np.pi),
                                shape=(num_features, 1), dtype=self.dtype)

        def rff(x):
            features = tf.cos(tf.einsum('fd, nd -> fn', omega, x) + phi)
            features = (2 / num_features) ** 0.5 * features * self.coeff
            return tf.einsum('f, fn -> n', weights, features)

        return rff
```

### Sanity check and sampling¶

We can sanity check the classes above by using them to sample data from a GP with a zero mean and an EQ covariance, and compute the ground truth posterior, shown below. The implementation seems to be working correctly.

```python
# Set random seed and tf.dtype
np.random.seed(0)
dtype = tf.float64

# Num. observations (N)
N = 100

# EQ covariance hyperparameters
log_coeff = 0.
log_scale = 0.
noise = 1e-2
dim = 1

# Initialise covariance
ground_truth_cov = EQcovariance(log_coeff=log_coeff,
                                log_scales=log_scale,
                                dim=dim,
                                dtype=dtype)

# Pick inputs at random
x_train = np.random.uniform(low=-4., high=4., size=(N, 1))

# Compute covariance matrix terms
K_train_train = ground_truth_cov(x_train, x_train, epsilon=1e-12)
I_noise = noise * np.eye(N)

# Sample y_train from the generative model
y_train = np.dot(np.linalg.cholesky(K_train_train + I_noise),
                 np.random.normal(loc=0., scale=1., size=(N, 1)))

# Locations at which to plot the posterior predictive
x_plot = np.linspace(-8., 8., 100)[:, None]

# Covariances between training and plot locations
K_train_plot = ground_truth_cov(x_train, x_plot)
K_plot_train = ground_truth_cov(x_plot, x_train)
K_plot_diag = ground_truth_cov(x_plot, x_plot, diag=True)

# Mean and standard deviation of y_plot | x_train, y_train
y_plot_mean = np.dot(K_plot_train,
                     np.linalg.solve(K_train_train + I_noise, y_train))[:, 0]
f_plot_var = K_plot_diag - np.diag(
    np.dot(K_plot_train, np.linalg.solve(K_train_train + I_noise, K_train_plot)))
y_plot_var = f_plot_var + noise
y_plot_std = y_plot_var ** 0.5

# Plot observed data and exact posterior predictive
plt.figure(figsize=(10, 3))
plt.plot(x_plot, y_plot_mean - 2*y_plot_std, '--', color='purple', zorder=2)
plt.plot(x_plot, y_plot_mean, color='purple', zorder=2, label='Exact post.')
plt.plot(x_plot, y_plot_mean + 2*y_plot_std, '--', color='purple', zorder=2)

# Plot sampled data
plt.scatter(x_train, y_train, color='red', marker='+', zorder=3,
            label=r'Observed $\mathbf{y}$')

# Plot formatting
plt.title('Synthetic data and ground truth', fontsize=22)
plt.xticks(np.arange(-8, 9, 4), fontsize=14)
plt.yticks(np.arange(-8, 9, 4), fontsize=14)
plt.legend(loc='lower right', fontsize=14)
plt.xlim([-8., 8.])
plt.xlabel('$x$', fontsize=18)
plt.ylabel('$y$', fontsize=18)
plt.show()
```

### The model¶

Now let's implement the GP which we will sample from. We will use the implementation we have for GPs approximated using the VFE approximation. The only new bit here is the method sample_posterior. This method returns a posterior sample, which is itself a function that can be queried at arbitrary inputs.
```python
class VFEGP(tf.keras.Model):

    def __init__(self, x_train, y_train, x_ind_init, mean, cov, log_noise,
                 trainable_noise, dtype, name='vfe_gp', **kwargs):

        super().__init__(name=name, dtype=dtype, **kwargs)

        # Set training data and inducing point initialisation
        self.x_train = tf.convert_to_tensor(x_train, dtype=dtype)
        self.y_train = tf.convert_to_tensor(y_train, dtype=dtype)

        # Set inducing points
        self.x_ind = tf.convert_to_tensor(x_ind_init, dtype=dtype)
        self.x_ind = tf.Variable(self.x_ind)

        # Set mean and covariance functions
        self.mean = mean
        self.cov = cov

        # Set log of noise parameter
        self.log_noise = tf.convert_to_tensor(log_noise, dtype=dtype)
        self.log_noise = tf.Variable(self.log_noise, trainable=trainable_noise)

    @property
    def noise(self):
        return tf.math.exp(self.log_noise)

    def post_pred(self, x_pred):

        # Number of training points and inducing points
        N = self.y_train.shape[0]
        M = self.x_ind.shape[0]

        # Compute covariance terms
        K_ind_ind = self.cov(self.x_ind, self.x_ind, epsilon=1e-9)
        K_train_ind = self.cov(self.x_train, self.x_ind)
        K_ind_train = self.cov(self.x_ind, self.x_train)
        K_pred_ind = self.cov(x_pred, self.x_ind)
        K_ind_pred = self.cov(self.x_ind, x_pred)
        K_pred_pred_diag = self.cov(x_pred, x_pred, diag=True)

        # Compute intermediate matrices using Cholesky for numerical stability
        L, U, A, B, B_chol = self.compute_intermediate_matrices(K_ind_ind,
                                                                K_ind_train)

        # Compute mean
        diff = self.y_train  # - self.mean(self.x_train)[:, None]
        beta = tf.linalg.cholesky_solve(B_chol, tf.matmul(U, diff))
        beta = tf.linalg.triangular_solve(tf.transpose(L, (1, 0)), beta,
                                          lower=False)
        mean = tf.matmul(K_pred_ind / self.noise ** 2, beta)[:, 0]

        C = tf.linalg.triangular_solve(L, K_ind_pred)
        D = tf.linalg.triangular_solve(B_chol, C)

        # Compute variance
        var = K_pred_pred_diag + self.noise ** 2
        var = var - tf.linalg.diag_part(tf.matmul(C, C, transpose_a=True))
        var = var + tf.linalg.diag_part(tf.matmul(D, D, transpose_a=True))

        return mean, var

    def free_energy(self):

        # Number of training points and inducing points
        N = self.y_train.shape[0]
        M = self.x_ind.shape[0]

        # Compute covariance terms
        K_ind_ind = self.cov(self.x_ind, self.x_ind, epsilon=1e-9)
        K_train_ind = self.cov(self.x_train, self.x_ind)
        K_ind_train = self.cov(self.x_ind, self.x_train)
        K_train_train_diag = self.cov(self.x_train, self.x_train, diag=True)

        # Compute intermediate matrices using Cholesky for numerical stability
        L, U, A, B, B_chol = self.compute_intermediate_matrices(K_ind_ind,
                                                                K_ind_train)

        # Compute log-normalising constant terms
        log_pi = - N / 2 * tf.math.log(tf.constant(2 * np.pi, dtype=self.dtype))
        log_det_B = - tf.reduce_sum(tf.math.log(tf.linalg.diag_part(B_chol)))
        log_det_noise = - N / 2 * tf.math.log(self.noise ** 2)

        # Log of determinant of normalising term
        log_det = log_pi + log_det_B + log_det_noise

        # Compute quadratic form
        diff = self.y_train - self.mean(self.x_train)[:, None]
        c = tf.linalg.triangular_solve(B_chol, tf.matmul(A, diff),
                                       lower=True) / self.noise
        quad = - 0.5 * tf.reduce_sum((diff / self.noise) ** 2)
        quad = quad + 0.5 * tf.reduce_sum(c ** 2)

        # Compute trace term
        trace = - 0.5 * tf.reduce_sum(K_train_train_diag) / self.noise ** 2
        trace = trace + 0.5 * tf.linalg.trace(tf.matmul(A, A, transpose_b=True))

        free_energy = (log_det + quad + trace) / N

        return free_energy

    def sample_posterior(self, num_features):

        # Number of inducing points
        M = self.x_ind.shape[0]

        # Draw a sample function from the RFF prior - rff_prior is a function
        rff_prior = self.cov.sample_rff(num_features)

        # Compute covariance terms
        K_ind_ind = self.cov(self.x_ind, self.x_ind, epsilon=1e-9)
        K_train_ind = self.cov(self.x_train, self.x_ind)
        K_ind_train = self.cov(self.x_ind, self.x_train)

        # Compute intermediate matrices using Cholesky for numerical stability
        L, U, A, B, B_chol = self.compute_intermediate_matrices(K_ind_ind,
                                                                K_ind_train)

        # Compute mean of VFE posterior over inducing values
        u_mean = self.noise ** -2 * \
            L @ tf.linalg.cholesky_solve(B_chol, U @ self.y_train)

        # Compute Cholesky of covariance of VFE posterior over inducing values
        u_cov_chol = tf.linalg.triangular_solve(B_chol,
                                                tf.transpose(L, (1, 0)))

        # Sample inducing values from the VFE posterior
        rand = tf.random.normal((M, 1), dtype=self.dtype)
        u = u_mean[:, 0] + tf.matmul(u_cov_chol, rand, transpose_a=True)[:, 0]

        # Matheron adjustment term: K_ind_ind^{-1} (u - f(x_ind))
        v = tf.linalg.cholesky_solve(L,
                                     (u - rff_prior(self.x_ind))[:, None])[:, 0]

        def post_sample(x):
            K_x_ind = self.cov(x, self.x_ind)
            Phi_w = rff_prior(x)
            return Phi_w + (K_x_ind @ v[:, None])[:, 0]

        return post_sample

    def compute_intermediate_matrices(self, K_ind_ind, K_ind_train):

        # Compute the following matrices, in a numerically stable way
        # L = chol(K_ind_ind)
        # U = iL K_ind_train
        # A = U / noise
        # B = I + A A.T
        M = K_ind_ind.shape[0]
        L = tf.linalg.cholesky(K_ind_ind)
        U = tf.linalg.triangular_solve(L, K_ind_train, lower=True)
        A = U / self.noise
        B = tf.eye(M, dtype=self.dtype) + tf.matmul(A, A, transpose_b=True)
        B_chol = tf.linalg.cholesky(B)

        return L, U, A, B, B_chol
```

## Posterior sampling¶

Finally, we can train the model on some data. We'll use two datasets: one consisting of the GP-generated data we created earlier, and another containing the sinusoidal data from the RFF example.

### GP generated data¶

We train the VFE GP model on the GP-generated data we created earlier (the training procedure is identical to the VFE GP example) and visualise approximate samples from the posterior.

```python
# Set random seed and tensor dtype
tf.random.set_seed(0)

# Number of inducing points and their initialisation range
M = 10
inducing_range = (-4., -2.)

log_noise = np.log(1e-1)
log_coeff = np.log(1e0)
log_scales = [np.log(1e0)]
trainable = True

# Define mean and covariance
mean = ConstantMean(dtype=dtype)
cov = EQcovariance(log_coeff=log_coeff,
                   log_scales=log_scales,
                   dim=1,
                   dtype=dtype,
                   trainable=trainable)

# Initial locations of inducing points
x_ind_dist = tfp.distributions.Uniform(*inducing_range)
x_ind_init = x_ind_dist.sample(sample_shape=(M, 1))
x_ind_init = tf.cast(x_ind_init, dtype=dtype)

# Re-define sparse VFEGP with trainable noise
vfe_gp = VFEGP(mean=mean,
               cov=cov,
               log_noise=log_noise,
               x_train=x_train,
               y_train=y_train,
               x_ind_init=x_ind_init,
               dtype=dtype,
               trainable_noise=trainable)

num_steps = 1000

x_pred = tf.linspace(-8., 8., 100)[:, None]
x_pred = tf.cast(x_pred, dtype=dtype)

# NOTE: the gradient step was missing from the extracted text;
# an Adam optimiser is assumed here
optimiser = tf.keras.optimizers.Adam(learning_rate=1e-2)

for step in range(num_steps + 1):

    with tf.GradientTape() as tape:
        free_energy = vfe_gp.free_energy()
        loss = - free_energy

    gradients = tape.gradient(loss, vfe_gp.trainable_variables)
    optimiser.apply_gradients(zip(gradients, vfe_gp.trainable_variables))

    # Print optimisation information (helper from the VFE example)
    print_info(vfe_gp, step)

# Plot posterior (helper from the VFE example)
plot(vfe_gp,
     ground_truth_cov,
     noise,
     x_pred,
     x_train,
     y_train,
     x_ind_init,
     step,
     plot_post_pred=True,
     num_samples=10)
```

```
Step: 1000 Free energy: 0.547 Coeff: 1.17 Scales: [1.116] Noise: 0.10
```

First, we observe that the VFE GP posterior (shaded yellow) closely matches the exact posterior (dashed purple). The samples drawn by pathwise conditioning (yellow lines) appear qualitatively good, matching the VFE posterior. Note that calling vfe_gp.sample_posterior returns a function which can be queried and differentiated at arbitrary input locations - at constant time once the sample has been drawn.

### Sinusoidal data¶

We can also train the model on a dataset similar to the sinusoidal data used in the RFF example, where we observed that an approximate sparse posterior based solely on RFFs gave a rather poor approximation (see the second dataset in that example).
The poor performance of RFF is due to the fact that an RFF-only model is equivalent to a finite basis function model, whose weights are quickly pinned down once enough data have been observed, leading to overly narrow error bars - a behaviour referred to as variance starvation.

```python
# Number of datapoints to generate
num_data = 3000

# Generate sinusoidal data with a gap in input space
x_train_sine = 4 * np.pi * (np.random.uniform(size=(5 * num_data // 4, 1)) - 0.5)
x_train_sine = np.sort(x_train_sine, axis=0)
x_train_sine = np.concatenate([x_train_sine[:(2*x_train_sine.shape[0]//7)],
                               x_train_sine[-(2*x_train_sine.shape[0]//7):]],
                              axis=0)
y_train_sine = np.sin(x_train_sine) + \
    1e-1 * np.random.normal(size=x_train_sine.shape)

# Plot data
plt.figure(figsize=(10, 3))
plt.scatter(x_train_sine, y_train_sine, marker='+', color='red', s=1, alpha=1.0)

# Format plot
plt.title('Toy sinusoidal data', fontsize=18)
plt.xlabel('$x$', fontsize=18)
plt.ylabel('$y$', fontsize=18)
plt.xticks(np.arange(-10, 11, 4), fontsize=14)
plt.yticks(np.arange(-6, 7, 3), fontsize=14)
plt.xlim([-8., 8.])
plt.ylim([-8., 4.])
plt.show()
```

```python
# Set random seed and tensor dtype
tf.random.set_seed(0)

# Number of inducing points and their initialisation ranges
M = 20
inducing_ranges = [(-6., -2.), (2., 6.)]

log_noise = np.log(1e-2)
log_coeff = np.log(1e0)
log_scales = [np.log(1e0)]
trainable = True

# Define mean and covariance
mean = ConstantMean(dtype=dtype)
cov = EQcovariance(log_coeff=log_coeff,
                   log_scales=log_scales,
                   dim=1,
                   dtype=dtype,
                   trainable=trainable)

# Initial locations of inducing points
x_ind_dist1 = tfp.distributions.Uniform(*inducing_ranges[0])
x_ind_dist2 = tfp.distributions.Uniform(*inducing_ranges[1])
x_ind_init = tf.concat([x_ind_dist1.sample(sample_shape=(M//2, 1)),
                        x_ind_dist2.sample(sample_shape=(M//2, 1))],
                       axis=0)
x_ind_init = tf.cast(x_ind_init, dtype=dtype)

# Re-define sparse VFEGP with trainable noise
vfe_gp = VFEGP(mean=mean,
               cov=cov,
               log_noise=log_noise,
               x_train=x_train_sine,
               y_train=y_train_sine,
               x_ind_init=x_ind_init,
               dtype=dtype,
               trainable_noise=trainable)

num_steps = 1000

x_pred = tf.linspace(-8., 8., 100)[:, None]
x_pred = tf.cast(x_pred, dtype=dtype)

# NOTE: the gradient step was missing from the extracted text;
# an Adam optimiser is assumed here
optimiser = tf.keras.optimizers.Adam(learning_rate=1e-2)

for step in range(num_steps + 1):

    with tf.GradientTape() as tape:
        free_energy = vfe_gp.free_energy()
        loss = - free_energy

    gradients = tape.gradient(loss, vfe_gp.trainable_variables)
    optimiser.apply_gradients(zip(gradients, vfe_gp.trainable_variables))

    # Print optimisation information (helper from the VFE example)
    print_info(vfe_gp, step)

# EQ covariance hyperparameters of the trained model
log_coeff = vfe_gp.cov.log_coeff
log_scale = vfe_gp.cov.log_scales[0]
noise = vfe_gp.noise
dim = 1

# Initialise covariance with the learnt hyperparameters
exact_cov = EQcovariance(log_coeff=log_coeff,
                         log_scales=log_scale,
                         dim=dim,
                         dtype=dtype)

# Plot posterior (helper from the VFE example)
plot(vfe_gp,
     exact_cov,
     noise,
     x_pred,
     x_train_sine,
     y_train_sine,
     x_ind_init,
     step,
     plot_post_pred=True,
     num_samples=10)
```

```
Step: 1000 Free energy: 0.739 Coeff: 0.76 Scales: [1.39] Noise: 0.07
```

We observe that the VFE GP has produced a sensible posterior. The VFE posterior matches the exact posterior quite closely, indicating that the inducing points summarise the exact posterior accurately. In addition, we observe that pathwise conditioning has produced sensible samples which do not suffer from variance starvation. In an RFF-only approximate model, the weights of the basis functions are constrained such that the weighted sum of the features passes close to the datapoints, thus constraining the model severely across all inputs. In the pathwise conditioning approximation this issue is no longer present, because the uncertainty in the inducing point values helps maintain the uncertainty in the samples outside the range of the data.

## Conclusion¶

We have seen how RFFs can be used in conjunction with the VFE approximation to efficiently draw samples from approximate GP posteriors. Matheron's rule allows us to condition RFF samples on particular values of the pseudo-points. This procedure yields high quality samples which can be queried and differentiated at arbitrary input locations.
These samples have a host of useful applications, ranging from Thompson-sampling Bayesian optimisation to posterior sampling for simulating dynamical systems whose dynamics are governed by GPs, among many other interesting problems. The original paper by Wilson et al. [WBT+20], as well as the extended journal paper by the same authors [WBT+21], give a great deal of experimental results and theoretical analysis for this method.

## References¶

- [RR+07] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, 2007.
- [Tit09] Michalis Titsias. Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics, 567–574. PMLR, 2009.
- [WBT+20] James Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, and Marc Deisenroth. Efficiently sampling functions from Gaussian process posteriors. In International Conference on Machine Learning, 10292–10302. PMLR, 2020.
- [WBT+21] James T. Wilson, Viacheslav Borovitskiy, Alexander Terenin, Peter Mostowsky, and Marc Peter Deisenroth. Pathwise conditioning of Gaussian processes. Journal of Machine Learning Research, 2021.
# Finding an equation to the surface S that is bounded between $z=x^2-y^2$ inside the cylinder $x^2+y^2=1$

How does one find a parametric equation for the surface $S$ bounded by $z=x^2-y^2$ inside the cylinder $x^2+y^2=1$, with $C$ the boundary of that surface?

While reading the solution of one of the questions, they said that a parametric equation for the surface $S$ is

$$\vec r(\rho,\theta) = \rho\cos\theta\,\hat i + \rho\sin\theta\,\hat j + \rho^2\cos 2\theta\,\hat k, \qquad 0\le \theta \le 2\pi,\ 0\le \rho \le 1,$$

and a parametric equation for the boundary curve $C$ is

$$\vec r(\theta) = \cos\theta\,\hat i + \sin\theta\,\hat j + \cos 2\theta\,\hat k, \qquad 0\le \theta \le 2\pi.$$

But how did they reach these equations? I would appreciate any kind of help.

Any point inside the cylinder $x^2 + y^2 = 1$ can be represented by
$$x = r \cos t,\quad y = r \sin t,\qquad 0 \le r \le 1,\ 0 \le t \le 2\pi.$$
Then
$$z = x^2 - y^2 = r^2 (\cos^2 t - \sin^2 t) = r^2 \cos (2t).$$
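The algebra above can also be verified numerically; a small sketch (numpy; variable names are our own) checks that every point of the parametrisation lies on the surface and inside the cylinder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample random parameter values in the stated ranges
rho = rng.uniform(0.0, 1.0, 1000)
theta = rng.uniform(0.0, 2 * np.pi, 1000)

# The proposed parametrisation of the surface
x = rho * np.cos(theta)
y = rho * np.sin(theta)
z = rho ** 2 * np.cos(2 * theta)

# Every point satisfies z = x^2 - y^2 ...
assert np.allclose(z, x ** 2 - y ** 2)
# ... and lies inside (or on) the cylinder x^2 + y^2 = 1
assert np.all(x ** 2 + y ** 2 <= 1.0 + 1e-12)
print("parametrisation verified")
```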
## Fun example: Empty colimit does not commute with empty limit

One important property of filtered colimits is that they commute with finite limits in the category of sets.

Theorem: Let $F \colon I \times J \to \mathbf{Set}$ be a functor, where $I$ is a filtered small category and $J$ is a finite category. Then the natural mapping
$$\operatorname*{colim}_{i \in I} \lim_{j \in J} F(i,j) \longrightarrow \lim_{j \in J} \operatorname*{colim}_{i \in I} F(i,j)$$
is an isomorphism.

This statement is used, for example, to check that a continuous morphism of sites $f$ commuting with finite limits induces a morphism of topoi, i.e. that the pullback map $f^{-1}$ is exact (note that a continuous morphism of sites does not induce a morphism of topoi in the general case).

The definition of a filtered system has a somewhat strange condition in that it must be non-empty. For a while I was not sure why it is essential to require this property. It turns out that without this condition, filtered colimits would not commute with finite limits. Namely, the empty colimit does not commute with the empty limit (and only with it!).

I claim that the limit over an empty diagram in any category is simply a final object in that category. Indeed, let $D$ be an empty diagram in a category $\mathcal{C}$. Then $\lim D$ is an object in $\mathcal{C}$ such that any other object $X$ admits exactly one morphism $X \to \lim D$. This means that $\lim D$ is a final object in $\mathcal{C}$. Clearly, a final object in $\mathbf{Set}$ is the one-point set $\{*\}$. Similarly, the colimit of an empty diagram is nothing more than an initial object in $\mathcal{C}$. Clearly, an initial object in $\mathbf{Set}$ is the empty set $\varnothing$.

Applying these considerations to the empty diagram, the natural map above is the map $\varnothing \to \{*\}$, which is clearly not an isomorphism!
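In $\mathbf{Set}$ these two facts can be made concrete with a toy sketch (Python; encoding the empty product as the set of empty tuples is our own illustration):

```python
from itertools import product

# The limit of the empty diagram in Set is the empty product, which has
# exactly one element — the empty tuple. This is the terminal (one-point) set.
empty_limit = list(product())  # product of zero sets
assert empty_limit == [()]

# The colimit of the empty diagram in Set is the empty disjoint union —
# the initial (empty) set.
empty_colimit = []
assert len(empty_colimit) == 0

# The canonical map empty_colimit -> empty_limit is the empty function:
# it exists and is unique, but it is not surjective, so it is not an
# isomorphism.
assert len(empty_colimit) != len(empty_limit)
print("empty colimit has", len(empty_colimit), "elements;",
      "empty limit has", len(empty_limit))
```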
# 1992 AJHSME Problems/Problem 13

## Problem

Five test scores have a mean (average score) of $90$, a median (middle score) of $91$ and a mode (most frequent score) of $94$. The sum of the two lowest test scores is

$\text{(A)}\ 170 \qquad \text{(B)}\ 171 \qquad \text{(C)}\ 176 \qquad \text{(D)}\ 177 \qquad \text{(E)}\ \text{not determined by the information given}$

## Solution

Because there is an odd number of scores, $91$ must be the middle score. Since there are two scores above $91$, and $94$ appears most frequently (so at least twice) with $94>91$, the value $94$ must appear exactly twice. Also, the sum of the five numbers is $90 \times 5 = 450$. Thus, the sum of the two lowest scores is $450-91-94-94= \boxed{\text{(B)}\ 171}$.
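A quick check with one concrete score set (Python; the two lowest scores below are one choice among many consistent with the constraints — only their sum is forced):

```python
from statistics import mean, median, mode

# Scores consistent with mean 90, median 91, mode 94
scores = [85, 86, 91, 94, 94]

assert mean(scores) == 90
assert median(scores) == 91
assert mode(scores) == 94

# The sum of the two lowest scores is forced by the constraints
two_lowest = sum(sorted(scores)[:2])
print(two_lowest)  # 171
```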
Tweedie distribution

In probability and statistics, the Tweedie distributions are a family of probability distributions which include the purely continuous normal and gamma distributions, the purely discrete scaled Poisson distribution, and the class of mixed compound Poisson–gamma distributions which have positive mass at zero, but are otherwise continuous.[1] For any random variable Y that obeys a Tweedie distribution, the variance var(Y) relates to the mean E(Y) by the power law,

${\displaystyle {\text{var}}\,(Y)=a[{\text{E}}\,(Y)]^{p},}$

where a and p are positive constants.

The Tweedie distributions were named by Bent Jørgensen[2] after Maurice Tweedie, a statistician and medical physicist at the University of Liverpool, UK, who presented the first thorough study of these distributions in 1984.[1][3]

Examples

The Tweedie distributions include a number of familiar distributions as well as some unusual ones, each being specified by the domain of the index parameter p. We have the

- extreme stable distributions, p < 0,
- normal distribution, p = 0,
- Poisson distribution, p = 1,
- compound Poisson–gamma distribution, 1 < p < 2,
- gamma distribution, p = 2,
- positive stable distributions, 2 < p < 3,
- inverse Gaussian distribution, p = 3,
- positive stable distributions, p > 3, and
- extreme stable distributions, p = ∞.

For 0 < p < 1 no Tweedie model exists.

Definitions

Tweedie distributions are a special case of exponential dispersion models, a class of models used to describe error distributions for the generalized linear model.[4] The term "exponential dispersion model" refers to the exponential form that these models take, evident from the canonical equation used to describe the distribution Pλ,θ of the random variable Z on the measurable sets A,

${\displaystyle P_{\lambda ,\theta }(Z\in A)=\int _{A}\exp[\theta \cdot z-\lambda \kappa (\theta )]\cdot \nu _{\lambda }\,(dz),}$

with the interrelated measures νλ. Here θ is the canonical parameter; the cumulant function is

${\displaystyle \kappa (\theta )=\lambda ^{-1}\log \int e^{\theta z}\cdot \nu _{\lambda }\,(dz);}$

λ is the index parameter; and z the canonical statistic. This equation represents a family of exponential dispersion models ED*(θ,λ) that are completely determined by the parameters θ and λ and the cumulant function.
The models just described are additive models with the property that the distribution of the sum of independent random variables, ${\displaystyle Z_{+}=Z_{1}+\cdots +Z_{n},}$ for which Zi ~ ED*(θ,λi) with fixed θ and various λ are members of the family of distributions with the same θ, ${\displaystyle Z_{+}\sim ED^{*}(\theta ,\lambda _{1}+\cdots +\lambda _{n}).}$ Reproductive exponential dispersion models A second class of exponential dispersion models exists designated by the random variable ${\displaystyle Y=Z/\lambda \sim ED(\mu ,\sigma ^{2}),}$ where σ2 = 1/λ, known as reproductive exponential dispersion models. They have the property that for n independent random variables Yi ~ ED(μ,σ2/wi), with weighting factors wi and ${\displaystyle w=\sum _{i=1}^{n}w_{i},}$ a weighted average of the variables gives, ${\displaystyle w^{-1}\sum _{i=1}^{n}w_{i}Y_{i}\sim ED(\mu ,\sigma ^{2}/w).}$ For reproductive models the weighted average of independent random variables with fixed μ and σ2 and various values for wi is a member of the family of distributions with same μ and σ2. The Tweedie exponential dispersion models are both additive and reproductive; we thus have the duality transformation ${\displaystyle Y\mapsto Z=Y/\sigma ^{2}.}$ Scale invariance A third property of the Tweedie models is that they are scale invariant: For a reproductive exponential dispersion model ED(μ,σ2) and any positive constant c we have the property of closure under scale transformation, ${\displaystyle cED(\mu ,\sigma ^{2})=ED(c\mu ,c^{2-p}\sigma ^{2}),}$ where the index parameter p is a real-valued unitless constant. With this transformation the new variable Y’ = cY belongs to the family of distributions with fixed μ and σ2 but different values of c. The Tweedie power variance function To define the variance function for exponential dispersion models we make use of the mean value mapping, the relationship between the canonical parameter θ and the mean μ. 
It is defined by the function

${\displaystyle \tau (\theta )=\kappa ^{\prime }(\theta )=\mu .}$

The variance function V(μ) is constructed from the mean value mapping,

${\displaystyle V(\mu )=\tau ^{\prime }[\tau ^{-1}(\mu )].}$

Here the −1 exponent in τ−1(μ) denotes an inverse function rather than a reciprocal. The mean and variance of an additive random variable are then E(Z) = λμ and var(Z) = λV(μ).

Scale invariance implies that the variance function obeys the relationship V(μ) = μp.[4]

The Tweedie cumulant generating functions

The properties of exponential dispersion models give us two differential equations.[4] The first relates the mean value mapping and the variance function to each other,

${\displaystyle {\frac {\partial \tau ^{-1}(\mu )}{\partial \mu }}={\frac {1}{V(\mu )}}.}$

The second shows how the mean value mapping is related to the cumulant function,

${\displaystyle {\frac {\partial \kappa (\theta )}{\partial \theta }}=\tau (\theta ).}$

These equations can be solved to obtain the cumulant function for different cases of the Tweedie models. A cumulant generating function (CGF) may then be obtained from the cumulant function. The additive CGF is generally specified by the equation

${\displaystyle K^{*}(s)=\log[{\text{E}}(e^{sZ})]=\lambda [\kappa (\theta +s)-\kappa (\theta )],}$

and the reproductive CGF by

${\displaystyle K(s)=\log[{\text{E}}(e^{sY})]=\lambda [\kappa (\theta +s/\lambda )-\kappa (\theta )],}$

where s is the generating function variable.
The cumulant functions for specific values of the index parameter p are[4] ${\displaystyle \kappa _{p}(\theta )={\begin{cases}{\dfrac {\alpha -1}{\alpha }}\left({\dfrac {\theta }{\alpha -1}}\right)^{\alpha }&\quad p\neq 1,2,\\-\log(-\theta )&\quad p=2,\\e^{\theta }&\quad p=1,\end{cases}}}$ where α is the Tweedie exponent ${\displaystyle \alpha ={\dfrac {p-2}{p-1}}.}$ For the additive Tweedie models the CGFs take the form, ${\displaystyle K_{p}^{*}(s;\theta ,\lambda )={\begin{cases}\lambda \kappa _{p}(\theta )[(1+s/\theta )^{\alpha }-1]&\quad p\neq 1,2,\\-\lambda \log(1+s/\theta )&\quad p=2,\\\lambda e^{\theta }(e^{s}-1)&\quad p=1,\end{cases}}}$ and for the reproductive models, ${\displaystyle K_{p}(s;\theta ,\lambda )={\begin{cases}\lambda \kappa _{p}(\theta )\left\{[1+s/(\theta \lambda )]^{\alpha }-1\right\}&\quad p\neq 1,2,\\-\lambda \log[1+s/(\theta \lambda )]&\quad p=2,\\\lambda e^{\theta }(e^{s/\lambda }-1)&\quad p=1.\end{cases}}}$ The additive and reproductive Tweedie models are conventionally denoted by the symbols Tw*p(θ,λ) and Twp(θ,σ2), respectively. The first and second derivatives of the CGFs, with s = 0, yields the mean and variance, respectively. One can thus confirm that for the additive models the variance relates to the mean by the power law, ${\displaystyle \mathrm {var} (Z)\propto \mathrm {E} (Z)^{p}.}$ The Tweedie convergence theorem The Tweedie exponential dispersion models are fundamental in statistical theory consequent to their roles as foci of convergence for a wide range of statistical processes. Jørgensen et al proved a theorem that specifies the asymptotic behaviour of variance functions known as the Tweedie convergence theorem".[5] This theorem, in technical terms, is stated thus:[4] The unit variance function is regular of order p at zero (or infinity) provided that V(μ) ~ c0μp for μ as it approaches zero (or infinity) for all real values of p and c0 > 0. 
Then for a unit variance function regular of order p at either zero or infinity and for ${\displaystyle p\notin (0,1),}$ for any ${\displaystyle \mu >0}$, and ${\displaystyle \sigma ^{2}>0}$ we have ${\displaystyle c^{-1}ED(c\mu ,\sigma ^{2}c^{2-p})\rightarrow Tw_{p}(\mu ,c_{0}\sigma ^{2})}$ as ${\displaystyle c\downarrow 0}$ or ${\displaystyle c\rightarrow \infty }$, respectively, where the convergence is through values of c such that is in the domain of θ and cp−2/σ2 is in the domain of λ. The model must be infinitely divisible as c2−p approaches infinity.[4] In nontechnical terms this theorem implies that any exponential dispersion model that asymptotically manifests a variance-to-mean power law is required to have a variance function that comes within the domain of attraction of a Tweedie model. Almost all distribution functions with finite cumulant generating functions qualify as exponential dispersion models and most exponential dispersion models manifest variance functions of this form. Hence many probability distributions have variance functions that express this asymptotic behavior, and the Tweedie distributions become foci of convergence for a wide range of data types.[6] The Tweedie models and Taylor’s power law Taylor's law is an empirical law in ecology that relates the variance of the number of individuals of a species per unit area of habitat to the corresponding mean by a power-law relationship.[7] For the population count Y with mean µ and variance var(Y), Taylor’s law is written, ${\displaystyle {\text{var}}\,(Y)=a\mu ^{p},}$ where a and p are both positive constants. Since L. R. Taylor described this law in 1961 there have been many different explanations offered to explain it, ranging from animal behavior,[7] a random walk model,[8] a stochastic birth, death, immigration and emigration model,[9] to a consequence of equilibrium and non-equilibrium statistical mechanics.[10] No consensus exists as to an explanation for this model. 
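The variance-to-mean power law of the compound Poisson–gamma case can be checked by simulation. The sketch below (numpy; function and variable names are our own) uses the standard (λ, α, γ) representation of a Tweedie variable with index 1 < p < 2 and dispersion φ, for which var(Y) = φ·μ^p, and recovers the power-law exponent from simulated samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tweedie(mu, p=1.5, phi=1.0, size=100_000):
    """Simulate a compound Poisson–gamma (Tweedie, 1 < p < 2) variable
    with mean mu and variance phi * mu**p."""
    lam = mu ** (2 - p) / (phi * (2 - p))  # Poisson rate
    alpha = (2 - p) / (p - 1)              # gamma shape per event
    gam = phi * (p - 1) * mu ** (p - 1)    # gamma scale

    n = rng.poisson(lam, size)             # number of gamma summands
    total = np.zeros(size)
    pos = n > 0
    # Sum of n iid Gamma(alpha, gam) variables is Gamma(n * alpha, gam)
    total[pos] = rng.gamma(alpha * n[pos], gam)
    return total

mus = np.array([1.0, 2.0, 4.0, 8.0])
samples = [sample_tweedie(mu) for mu in mus]
means = np.array([s.mean() for s in samples])
variances = np.array([s.var() for s in samples])

# Fit the power-law exponent p from log(var) against log(mean)
p_hat = np.polyfit(np.log(means), np.log(variances), 1)[0]
print(p_hat)  # close to the chosen index p = 1.5
```

The fitted exponent falls between 1 and 2, consistent with the range of exponents commonly observed for Taylor's law.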
Since Taylor’s law is mathematically identical to the variance-to-mean power law that characterizes the Tweedie models, it seemed reasonable to use these models and the Tweedie convergence theorem to explain the observed clustering of animals and plants associated with Taylor’s law.[11][12] The majority of the observed values for the power-law exponent p have fallen in the interval (1,2) and so the Tweedie compound Poisson–gamma distribution would seem applicable. Comparison of the empirical distribution function to the theoretical compound Poisson–gamma distribution has provided a means to verify consistency of this hypothesis.[11] Whereas conventional models for Taylor’s law have tended to involve ad hoc animal behavioral or population dynamic assumptions, the Tweedie convergence theorem would imply that Taylor’s law results from a general mathematical convergence effect much as the central limit theorem governs the convergence behavior of certain types of random data. Indeed, any mathematical model, approximation or simulation that is designed to yield Taylor’s law (on the basis of this theorem) is required to converge to the form of the Tweedie models.[6] Tweedie convergence and 1/f noise Pink noise, or 1/f noise, refers to a pattern of noise characterized by a power-law relationship between its intensities S(f) at different frequencies f, ${\displaystyle S(f)\propto 1/f^{\gamma },}$ where the dimensionless exponent γ ∈ [0,1]. It is found within a diverse number of natural processes.[13] Many different explanations for 1/f noise exist; a widely held hypothesis is based on self-organized criticality, in which dynamical systems close to a critical point are thought to manifest scale-invariant spatial and/or temporal behavior. In this subsection a mathematical connection between 1/f noise and the Tweedie variance-to-mean power law will be described.
To begin, we first need to introduce self-similar processes: For the sequence of numbers ${\displaystyle Y=(Y_{i}:i=0,1,2,\ldots ,N)}$ with mean ${\displaystyle {\hat {\mu }}={\text{E}}(Y_{i}),}$ deviations ${\displaystyle y_{i}=Y_{i}-{\hat {\mu }},}$ variance ${\displaystyle {\hat {\sigma }}^{2}={\text{E}}(y_{i}^{2}),}$ and autocorrelation function ${\displaystyle r(k)={\text{E}}(y_{i}y_{i+k})/{\text{E}}(y_{i}^{2})}$ with lag k, if the autocorrelation of this sequence has the long range behavior ${\displaystyle r(k)\sim k^{-d}L(k)}$ as k→∞ and where L(k) is a slowly varying function at large values of k, this sequence is called a self-similar process.[14] The method of expanding bins can be used to analyze self-similar processes. Consider a set of equal-sized non-overlapping bins that divides the original sequence of N elements into groups of m equal-sized segments (N/m an integer) so that new reproductive sequences, based on the mean values, can be defined: ${\displaystyle Y_{i}^{(m)}=(Y_{im-m+1}+\cdots +Y_{im})/m.}$ The variance determined from this sequence will scale as the bin size changes such that ${\displaystyle {\text{var}}[Y^{(m)}]={\hat {\sigma }}^{2}m^{-d}}$ if and only if the autocorrelation has the limiting form[15] ${\displaystyle \lim _{k\to \infty }r(k)/k^{-d}=(2-d)(1-d)/2.}$ One can also construct a set of corresponding additive sequences ${\displaystyle Z_{i}^{(m)}=mY_{i}^{(m)},}$ based on the expanding bins, ${\displaystyle Z_{i}^{(m)}=(Y_{im-m+1}+\cdots +Y_{im}).}$ Provided the autocorrelation function exhibits the same behavior, the additive sequences will obey the relationship ${\displaystyle {\text{var}}[Z_{i}^{(m)}]=m^{2}{\text{var}}[Y^{(m)}]=({\hat {\sigma }}^{2}/{\hat {\mu }}^{2-d}){\text{E}}[Z_{i}^{(m)}]^{2-d}.}$ Since ${\displaystyle {\hat {\mu }}}$ and ${\displaystyle {\hat {\sigma }}^{2}}$ are constants this relationship constitutes a variance-to-mean power law, with p = 2 - d.[6][16] The biconditional relationship above between the
variance-to-mean power law and power law autocorrelation function, and the Wiener–Khinchin theorem[17] imply that any sequence that exhibits a variance-to-mean power law by the method of expanding bins will also manifest 1/f noise, and vice versa. Moreover, the Tweedie convergence theorem, by virtue of its central limit-like effect of generating distributions that manifest variance-to-mean power functions, will also generate processes that manifest 1/f noise.[6] The Tweedie convergence theorem thus provides an alternative explanation for the origin of 1/f noise, based on its central limit-like effect. Much as the central limit theorem requires certain kinds of random processes to have as a focus of their convergence the Gaussian distribution and thus express white noise, the Tweedie convergence theorem requires certain non-Gaussian processes to have as a focus of convergence the Tweedie distributions that express 1/f noise.[6] The Tweedie models and multifractality From the properties of self-similar processes, the power-law exponent p = 2 - d is related to the Hurst exponent H and the fractal dimension D by[15] ${\displaystyle D=2-H=2-p/2.}$ A one-dimensional data sequence of self-similar data may demonstrate a variance-to-mean power law with local variations in the value of p and hence in the value of D. When fractal structures manifest local variations in fractal dimension, they are said to be multifractals. Examples of data sequences that exhibit local variations in p like this include the eigenvalue deviations of the Gaussian Orthogonal and Unitary Ensembles.[6] The Tweedie compound Poisson–gamma distribution has served to model multifractality based on local variations in the Tweedie exponent α. Consequently, in conjunction with the variation of α, the Tweedie convergence theorem can be viewed as having a role in the genesis of such multifractals.
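The method of expanding bins described above can be sketched in code. This toy example uses assumed, uncorrelated Poisson data (not data from the cited studies): it builds the additive sequences Z(m) for a range of bin sizes and recovers the power-law exponent p = 2 − d; for uncorrelated counts d = 1, so the fitted exponent should be close to 1:

```python
import numpy as np

def expanding_bins(seq, m):
    """Mean-based (reproductive) and sum-based (additive) sequences
    for bin size m; any trailing remainder is dropped."""
    n = (len(seq) // m) * m
    bins = seq[:n].reshape(-1, m)
    return bins.mean(axis=1), bins.sum(axis=1)

rng = np.random.default_rng(2)
seq = rng.poisson(4.0, 1_000_000).astype(float)

zmeans, zvars = [], []
for m in [2, 4, 8, 16, 32, 64]:
    _, z = expanding_bins(seq, m)
    zmeans.append(z.mean())
    zvars.append(z.var())

# For uncorrelated Poisson data the additive sequences give p = 1
slope = np.polyfit(np.log(zmeans), np.log(zvars), 1)[0]
print(f"power-law exponent p ≈ {slope:.2f}")
```

A sequence with long-range correlations (0 < d < 1) analysed the same way would instead yield an exponent between 1 and 2.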
The variation of α has been found to obey the asymmetric Laplace distribution in certain cases.[18] This distribution has been shown to be a member of the family of geometric Tweedie models,[19] that manifest as limiting distributions in a convergence theorem for geometric dispersion models. Applications Regional organ blood flow Regional organ blood flow has been traditionally assessed by the injection of radiolabelled polyethylene microspheres into the arterial circulation of animals, of a size such that they become entrapped within the microcirculation of organs. The organ to be assessed is then divided into equal-sized cubes and the amount of radiolabel within each cube is evaluated by liquid scintillation counting and recorded. The amount of radioactivity within each cube is taken to reflect the blood flow through that sample at the time of injection. It is possible to evaluate adjacent cubes from an organ in order to additively determine the blood flow through larger regions. Through the work of J B Bassingthwaighte and others an empirical power law has been derived between the relative dispersion of blood flow of tissue samples (RD = standard deviation/mean) of mass m relative to reference-sized samples:[20] ${\displaystyle RD(m)=RD(m_{\text{ref}})\left({\frac {m}{m_{\text{ref}}}}\right)^{1-D_{s}}}$ This power law exponent Ds has been called a fractal dimension. Bassingthwaighte’s power law can be shown to directly relate to the variance-to-mean power law. Regional organ blood flow can thus be modelled by the Tweedie compound Poisson–gamma distribution.[21] In this model each tissue sample could be considered to contain a random (Poisson) distributed number of entrapment sites, each with gamma distributed blood flow. Blood flow at this microcirculatory level has been observed to obey a gamma distribution,[22] thus providing support for this hypothesis.
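A minimal sketch of the relative-dispersion analysis, assuming for illustration uncorrelated gamma-distributed microflows rather than real microsphere data: merging m adjacent pieces and regressing log RD on log m recovers the exponent 1 − Ds, which equals −1/2 (i.e. Ds = 1.5) in the uncorrelated limit:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical microflows: gamma-distributed flow in 2**14 smallest pieces
flow = rng.gamma(2.0, 1.0, 2**14)

rds, masses = [], []
m = 1
while m <= 256:
    agg = flow.reshape(-1, m).sum(axis=1)   # merge m adjacent pieces
    rds.append(agg.std() / agg.mean())      # relative dispersion RD(m)
    masses.append(m)
    m *= 2

# RD(m) ~ m**(1 - Ds); uncorrelated flows give Ds = 1.5
slope = np.polyfit(np.log(masses), np.log(rds), 1)[0]
print(f"1 - Ds ≈ {slope:.2f}")   # close to -0.5
```

Real blood-flow data show correlated heterogeneity, which pushes the fitted Ds below 1.5; the uncorrelated case is only the baseline.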
Cancer metastasis The "experimental cancer metastasis assay"[23] has some resemblance to the above method to measure regional blood flow. Groups of syngeneic and age-matched mice are given intravenous injections of equal-sized aliquots of suspensions of cloned cancer cells and then after a set period of time their lungs are removed and the number of cancer metastases enumerated within each pair of lungs. If other groups of mice are injected with different cancer cell clones then the number of metastases per group will differ in accordance with the metastatic potentials of the clones. It has long been recognized that there can be considerable intraclonal variation in the numbers of metastases per mouse despite the best attempts to keep the experimental conditions within each clonal group uniform.[23] This variation is larger than would be expected on the basis of a Poisson distribution of numbers of metastases per mouse in each clone and when the variance of the number of metastases per mouse was plotted against the corresponding mean a power law was found.[24] The variance-to-mean power law for metastases was found to also hold for spontaneous murine metastases[25] and for case series of human metastases.[26] Since hematogenous metastasis occurs in direct relationship to regional blood flow[27] and videomicroscopic studies indicate that the passage and entrapment of cancer cells within the circulation appear analogous to the microsphere experiments[28] it seemed plausible to propose that the variation in numbers of hematogenous metastases could reflect heterogeneity in regional organ blood flow.[29] The blood flow model was based on the Tweedie compound Poisson–gamma distribution, a distribution governing a continuous random variable.
For that reason in the metastasis model it was assumed that blood flow was governed by that distribution and that the number of regional metastases occurred as a Poisson process for which the intensity was directly proportional to blood flow. This led to the description of the Poisson negative binomial (PNB) distribution as a discrete equivalent to the Tweedie compound Poisson–gamma distribution. The probability generating function for the PNB distribution is ${\displaystyle G(s)=\exp \left[\lambda {\frac {\alpha -1}{\alpha }}\left({\frac {\theta }{\alpha -1}}\right)^{\alpha }\left\{\left(1-{\frac {1}{\theta }}+{\frac {s}{\theta }}\right)^{\alpha }-1\right\}\right]}$ The relationship between the mean and variance of the PNB distribution is then ${\displaystyle {\text{var}}\,(Y)=a{\text{E}}(Y)^{b}+{\text{E}}(Y),}$ which, in the range of many experimental metastasis assays, would be indistinguishable from the variance-to-mean power law. For sparse data, however, this discrete variance-to-mean relationship would behave more like that of a Poisson distribution where the variance equaled the mean. Genomic structure and evolution The local density of Single Nucleotide Polymorphisms (SNPs) within the human genome, as well as that of genes, appears to cluster in accord with the variance-to-mean power law and the Tweedie compound Poisson–gamma distribution.[30][31] In the case of SNPs their observed density reflects the assessment techniques, the availability of genomic sequences for analysis, and the nucleotide heterozygosity.[32] The first two factors reflect ascertainment errors inherent to the collection methods, the latter factor reflects an intrinsic property of the genome. In the coalescent model of population genetics each genetic locus has its own unique history. Within the evolution of a population from some species some genetic loci could presumably be traced back to a relatively recent common ancestor whereas other loci might have more ancient genealogies. 
More ancient genomic segments would have had more time to accumulate SNPs and to experience recombination. R R Hudson has proposed a model where recombination could cause variation in the time to the most recent common ancestor for different genomic segments.[33] A high recombination rate could cause a chromosome to contain a large number of small segments with less correlated genealogies. Assuming a constant background rate of mutation the number of SNPs per genomic segment would accumulate proportionately to the time to the most recent common ancestor. Current population genetic theory would indicate that these times would be gamma distributed, on average.[34] The Tweedie compound Poisson–gamma distribution would suggest a model whereby the SNP map would consist of multiple small genomic segments, with the mean number of SNPs per segment gamma distributed as per Hudson’s model. The distribution of genes within the human genome also demonstrated a variance-to-mean power law, when the method of expanding bins was used to determine the corresponding variances and means.[31] Similarly the number of genes per enumerative bin was found to obey a Tweedie compound Poisson–gamma distribution. This probability distribution was deemed compatible with two different biological models: the microarrangement model, where the number of genes per unit genomic length was determined by the sum of a random number of smaller genomic segments derived by random breakage and reconstruction of protochromosomes. These smaller segments would be assumed to carry on average a gamma distributed number of genes. In the alternative gene cluster model, genes would be distributed randomly within the protochromosomes. Over large evolutionary timescales there would occur tandem duplication, mutations, insertions, deletions and rearrangements that could affect the genes through a stochastic birth, death and immigration process to yield the Tweedie compound Poisson–gamma distribution.
Both these mechanisms would implicate neutral evolutionary processes that would result in regional clustering of genes. Random matrix theory The Gaussian unitary ensemble (GUE) consists of complex Hermitian matrices that are invariant under unitary transformations whereas the Gaussian orthogonal ensemble (GOE) consists of real symmetric matrices invariant under orthogonal transformations. The ranked eigenvalues En from these random matrices obey Wigner’s semicircular distribution: For an N×N matrix the average density for eigenvalues of size E will be ${\displaystyle {\bar {\rho }}(E)={\begin{cases}{\sqrt {2N-E^{2}}}/\pi &\quad \left\vert E\right\vert <{\sqrt {2N}}\\0&\quad \left\vert E\right\vert >{\sqrt {2N}}\end{cases}}}$ as N → ∞. Integration of the semicircular rule provides the number of eigenvalues on average less than E, ${\displaystyle {\bar {\eta }}(E)={\frac {1}{2\pi }}\left[E{\sqrt {2N-E^{2}}}+2N\arcsin \left({\frac {E}{\sqrt {2N}}}\right)+\pi N\right].}$ The ranked eigenvalues can be unfolded, or renormalized, with the equation ${\displaystyle e_{n}={\bar {\eta }}(E_{n})=\int \limits _{-\infty }^{E_{n}}dE^{\prime }{\bar {\rho }}(E^{\prime }).}$ This removes the trend of the sequence from the fluctuating portion. If we look at the absolute value of the difference between the actual and expected cumulative number of eigenvalues ${\displaystyle \left|{\bar {D}}_{n}\right|=\left|n-{\bar {\eta }}(E_{n})\right|}$ we obtain a sequence of eigenvalue fluctuations which, using the method of expanding bins, reveals a variance-to-mean power law.[6] The eigenvalue fluctuations of both the GUE and the GOE manifest this power law with the power law exponents ranging between 1 and 2, and they similarly manifest 1/f noise spectra.
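The unfolding procedure can be sketched numerically. The code below is an illustrative sketch (N = 400 is an arbitrary choice): it samples a GOE matrix with the normalization matching the semicircle formulas above, then computes the fluctuation sequence |D̄n| from the average counting function η̄(E):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 400

# Sample a GOE matrix: real symmetric, off-diagonal variance 1/2,
# diagonal variance 1, matching the semicircle edge at sqrt(2N)
A = rng.normal(size=(N, N))
H = (A + A.T) / 2
E = np.sort(np.linalg.eigvalsh(H))

def eta_bar(E, N):
    """Average cumulative eigenvalue count from the semicircle law."""
    E = np.clip(E, -np.sqrt(2 * N), np.sqrt(2 * N))
    return (E * np.sqrt(2 * N - E**2)
            + 2 * N * np.arcsin(E / np.sqrt(2 * N))
            + np.pi * N) / (2 * np.pi)

# Unfolded fluctuations |n - eta_bar(E_n)|
n = np.arange(1, N + 1)
D = np.abs(n - eta_bar(E, N))
print(f"mean fluctuation = {D.mean():.2f}")
```

Applying the expanding-bins analysis from the earlier subsection to the sequence D would then reveal the variance-to-mean power law described in the text.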
These eigenvalue fluctuations also correspond to the Tweedie compound Poisson–gamma distribution and they exhibit multifractality.[6] The distribution of prime numbers The second Chebyshev function ψ(x) is given by, ${\displaystyle \psi (x)=\sum _{{\hat {p}}^{k}\leq x}\log {\hat {p}}=\sum _{n\leq x}\Lambda (n)}$ where the summation extends over all prime powers ${\displaystyle {\hat {p}}^{k}}$ not exceeding x, x runs over the positive real numbers, and ${\displaystyle \Lambda (n)}$ is the von Mangoldt function. The function ψ(x) is related to the prime-counting function π(x), and as such provides information with regards to the distribution of prime numbers amongst the real numbers. It is asymptotic to x, a statement equivalent to the prime number theorem, and it can also be shown to be related to the zeros ρ of the Riemann zeta function located in the critical strip, where the real part of the zeta zero ρ is between 0 and 1. Then ψ expressed for x greater than one can be written: ${\displaystyle \psi _{0}(x)=x-\sum _{\rho }{\frac {x^{\rho }}{\rho }}-\ln 2\pi -{\frac {1}{2}}\ln(1-x^{-2})}$ where ${\displaystyle \psi _{0}(x)=\lim _{\varepsilon \rightarrow 0}{\frac {\psi (x-\varepsilon )+\psi (x+\varepsilon )}{2}}.}$ The Riemann hypothesis states that the nontrivial zeros of the Riemann zeta function all have real part ½. These zeta function zeros are related to the distribution of prime numbers. Schoenfeld[35] has shown that if the Riemann hypothesis is true then ${\displaystyle \Delta (x)=\left\vert \psi (x)-x\right\vert <{\sqrt {x}}\log ^{2}(x)/(8\pi )}$ for all ${\displaystyle x>73.2}$. If we analyze the Chebyshev deviations Δ(n) on the integers n using the method of expanding bins and plot the variance versus the mean, a variance-to-mean power law can be demonstrated.[36] Moreover, these deviations correspond to the Tweedie compound Poisson-gamma distribution and they exhibit 1/f noise.
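The deviations Δ(x) are straightforward to compute for small x. The sketch below (illustrative; the range 10,000 is an arbitrary choice) builds the von Mangoldt function with a sieve, accumulates ψ(n), and checks Schoenfeld's conditional bound over the integers above 73.2, a range in which the bound is known to hold numerically:

```python
import numpy as np

def mangoldt(nmax):
    """Von Mangoldt function Λ(n) for n = 0..nmax (Λ(0)=Λ(1)=0)."""
    lam = np.zeros(nmax + 1)
    sieve = np.ones(nmax + 1, dtype=bool)
    sieve[:2] = False
    for p in range(2, nmax + 1):
        if sieve[p]:
            sieve[2 * p::p] = False   # mark proper multiples composite
            pk = p
            while pk <= nmax:         # Λ(p^k) = log p for every power
                lam[pk] = np.log(p)
                pk *= p
    return lam

nmax = 10_000
psi = np.cumsum(mangoldt(nmax))       # psi(n) = sum_{m<=n} Λ(m)
x = np.arange(nmax + 1)
delta = np.abs(psi - x)               # Chebyshev deviations Δ(n)

# Schoenfeld's conditional bound, checked for integers > 73.2
bound = np.sqrt(x[74:]) * np.log(x[74:])**2 / (8 * np.pi)
print("bound holds:", np.all(delta[74:] < bound))
```

The sequence `delta` is the one to which the expanding-bins analysis of [36] would then be applied.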
Other applications Applications of Tweedie distributions include: References 1. ^ a b Tweedie, M.C.K. (1984). "An index which distinguishes between some important exponential families". In Ghosh, J.K.; Roy, J. Statistics: Applications and New Directions. Proceedings of the Indian Statistical Institute Golden Jubilee International Conference. Calcutta: Indian Statistical Institute. pp. 579–604. MR 786162. 2. ^ Jørgensen, B (1987). "Exponential dispersion models". Journal of the Royal Statistical Society, Series B. 49 (2): 127–162. JSTOR 2345415. 3. ^ Smith, C.A.B. (1997). "Obituary: Maurice Charles Kenneth Tweedie, 1919–96". Journal of the Royal Statistical Society: Series A (Statistics in Society). 160 (1): 151–154. doi:10.1111/1467-985X.00052. 4. Jørgensen, Bent (1997). The theory of dispersion models. Chapman & Hall. ISBN 978-0412997112. 5. ^ Jørgensen, B; Martinez, JR; Tsao, M (1994). "Asymptotic behaviour of the variance function". Scandinavian Journal of Statistics. 21: 223–243. 6. Kendal, W. S.; Jørgensen, B. (2011). "Tweedie convergence: A mathematical basis for Taylor's power law, 1/f noise, and multifractality". Physical Review E. 84 (6): 066120. Bibcode:2011PhRvE..84f6120K. doi:10.1103/PhysRevE.84.066120. PMID 22304168. 7. ^ a b Taylor, LR (1961). "Aggregation, variance and the mean". Nature. 189: 732–735. doi:10.1038/189732a0. 8. ^ Hanski, I (1980). "Spatial patterns and movements in coprophagous beetles". Oikos. 34: 293–310. doi:10.2307/3544289. 9. ^ Anderson, RD; Crawley, GM; Hassell, M (1982). "Variability in the abundance of animal and plant species". Nature. 296: 245–248. doi:10.1038/296245a0. 10. ^ Fronczak, A; Fronczak, P (2010). "Origins of Taylor's power law for fluctuation scaling in complex systems". Phys Rev E. 81: 066112. doi:10.1103/physreve.81.066112. 11. ^ a b c Kendal, WS (2002). "Spatial aggregation of the Colorado potato beetle described by an exponential dispersion model". Ecological Modelling. 151: 261–269. 
doi:10.1016/s0304-3800(01)00494-x. 12. ^ Kendal, WS (2004). "Taylor's ecological power law as a consequence of scale invariant exponential dispersion models". Ecol Complex. 1: 193–209. doi:10.1016/j.ecocom.2004.05.001. 13. ^ Dutta, P; Horn, PM (1981). "Low frequency fluctuations in solids: 1/f noise". Rev Mod Phys. 53: 497–516. doi:10.1103/revmodphys.53.497. 14. ^ Leland, WE; Taqqu, MS; Willinger, W; Wilson, DV (1994). "On the self-similar nature of ethernet traffic". IEEE/ACM Trans Networking. 2: 1–15. doi:10.1109/90.282603. 15. ^ a b Tsybakov, B; Georganas, ND (1997). "On self-similar traffic in ATM queues: definitions, overflow probability bound, and cell delay distribution". IEEE/ACM Trans Networking. 5: 397–409. doi:10.1109/90.611104. 16. ^ Kendal, WS (2007). "Scale invariant correlations between genes and SNPs on Human chromosome 1 reveal potential evolutionary mechanisms". J Theor Biol. 245: 329–340. doi:10.1016/j.jtbi.2006.10.010. 17. ^ McQuarrie, DA (1976). Statistical Mechanics. Harper & Row. 18. ^ Kendal, WS (2014). "Multifractality attributed to dual central limit-like convergence effects". Physica A. 401: 22–33. doi:10.1016/j.physa.2014.01.022. 19. ^ Jørgensen, B; Kokonendji, CC (2011). "Dispersion models for geometric sums". Braz J Probab Stat. 25: 263–293. doi:10.1214/10-bjps136. 20. ^ Bassingthwaighte, JB (1989). "Fractal nature of regional myocardial blood flow heterogeneity". Circ Res. 65: 578–590. doi:10.1161/01.res.65.3.578. 21. ^ Kendal, WS (2001). "A stochastic model for the self-similar heterogeneity of regional organ blood flow". Proc Natl Acad Sci U S A. 98: 837–841. doi:10.1073/pnas.98.3.837. 22. ^ Honig, CR; Feldstein, ML; Frierson, JL (1977). "Capillary lengths, anastomoses, and estimated capillary transit times in skeletal muscle". Am J Physiol Heart Circul Physiol. 233: H122–H129. 23. ^ a b Fidler, IJ; Kripke, M (1977). "Metastasis results from preexisting variant cells within a malignant tumor". Science. 197: 893–895.
Bibcode:1977Sci...197..893F. doi:10.1126/science.887927. 24. ^ Kendal, WS; Frost, P (1987). "Experimental metastasis: a novel application of the variance-to-mean power function". J Natl Cancer Inst. 79: 1113–1115. 25. ^ Kendal, WS (1999). "Clustering of murine lung metastases reflects fractal nonuniformity in regional lung blood flow". Invasion Metastasis. 18: 285–296. doi:10.1159/000024521. 26. ^ Kendal, WS; Lagerwaard, FJ; Agboola, O (2000). "Characterization of the frequency distribution for human hematogenous metastases: evidence for clustering and a power variance function". Clin Exp Metastasis. 18: 219–229. 27. ^ Weiss, L; Bronk, J; Pickren, JW; Lane, WW (1981). "Metastatic patterns and target organ arterial blood flow". Invasion Metastasis. 1: 126–135. 28. ^ Chambers, AF; Groom, AC; MacDonald, IC (2002). "Dissemination and growth of cancer cells in metastatic sites". Nature Rev Cancer. 2: 563–572. doi:10.1038/nrc865. 29. ^ Kendal, WS (2002). "A frequency distribution for the number of hematogenous organ metastases". Invasion Metastasis. 1: 126–135. 30. ^ Kendal, WS (2003). "An exponential dispersion model for the distribution of human single nucleotide polymorphisms". Mol Biol Evol. 20: 579–590. doi:10.1093/molbev/msg057. 31. ^ a b Kendal, WS (2004). "A scale invariant clustering of genes on human chromosome 7". BMC Evol Biol. 4: 3. doi:10.1186/1471-2148-4-3. PMID 15040817. 32. ^ Sachidanandam, R; Weissman, D; Schmidt, SC; et al. (2001). "A map of human genome variation containing 1.42 million single nucleotide polymorphisms". Nature. 409: 928–933. doi:10.1038/35057149. PMID 11237013. 33. ^ Hudson, RR (1991). "Gene genealogies and the coalescent process". Oxford Surveys in Evolutionary Biology. 7: 1–44. 34. ^ Tavare, S; Balding, DJ; Griffiths, RC; Donnelly, P (1997). "Inferring coalescent times from DNA sequence data". Genetics. 145: 505–518. 35. ^ Schoenfeld, J (1976). "Sharper bounds for the Chebyshev functions θ(x) and ψ(x). II". Math Computation.
30 (134): 337–360. doi:10.1090/s0025-5718-1976-0457374-x. 36. ^ Kendal, WS (2013). "Fluctuation scaling and 1/f noise: shared origins from the Tweedie family of statistical distributions". J Basic Appl Phys. 2: 40–49. 37. ^ Haberman, S.; Renshaw, A. E. (1996). "Generalized linear models and actuarial science". The Statistician. 45: 407–436. doi:10.2307/2988543. 38. ^ Renshaw, A. E. 1994. Modelling the claims process in the presence of covariates. ASTIN Bulletin 24: 265–286. 39. ^ Jørgensen, B.; Paes de Souza, M. C. (1994). "Fitting Tweedie's compound Poisson model to insurance claims data". Scand. Actuar. J. 1: 69–93. 40. ^ Haberman, S., and Renshaw, A. E. 1998. Actuarial applications of generalized linear models. In Statistics in Finance, D. J. Hand and S. D. Jacka (eds), Arnold, London. 41. ^ Mildenhall, S. J. 1999. A systematic relationship between minimum bias and generalized linear models. 1999 Proceedings of the Casualty Actuarial Society 86: 393–487. 42. ^ Murphy, K. P., Brockman, M. J., and Lee, P. K. W. (2000). Using generalized linear models to build dynamic pricing systems. Casualty Actuarial Forum, Winter 2000. 43. ^ Smyth, G.K.; Jørgensen, B. (2002). "Fitting Tweedie's compound Poisson model to insurance claims data: dispersion modelling" (PDF). ASTIN Bulletin. 32: 143–157. doi:10.2143/ast.32.1.1020. 44. ^ Davidian, M (1990). "Estimation of variance functions in assays with possible unequal replication and nonnormal data". Biometrika. 77: 43–54. doi:10.1093/biomet/77.1.43. 45. ^ Davidian, M.; Carroll, R. J.; Smith, W. (1988). "Variance functions and the minimum detectable concentration in assays". Biometrika. 75: 549–556. doi:10.1093/biomet/75.3.549. 46. ^ Aalen, O. O. (1992). "Modelling heterogeneity in survival analysis by the compound Poisson distribution". Ann. Appl. Probab. 2: 951–972. doi:10.1214/aoap/1177005583. 47. ^ Hougaard, P.; Harvald, B.; Holm, N. V. (1992).
"Measuring the similarities between the lifetimes of adult Danish twins born between 1881–1930". Journal of the American Statistical Association. 87: 17–24. doi:10.1080/01621459.1992.10475170. 48. ^ Hougaard, P (1986). "Survival models for heterogeneous populations derived from stable distributions". Biometrika. 73: 387–396. doi:10.1093/biomet/73.2.387. 49. ^ Gilchrist, R. and Drinkwater, D. 1999. Fitting Tweedie models to data with probability of zero responses. Proceedings of the 14th International Workshop on Statistical Modelling, Graz, pp. 207–214. 50. ^ a b Smyth, G. K. 1996. Regression analysis of quantity data with exact zeros. Proceedings of the Second Australia–Japan Workshop on Stochastic Models in Engineering, Technology and Management. Technology Management Centre, University of Queensland, 572–580. 51. ^ Hasan, M.M.; Dunn, P.K. (2010) "Two Tweedie distributions that are near-optimal for modelling monthly rainfall in Australia", International Journal of Climatology, doi:10.1002/joc.2162 52. ^ Candy, S. G. (2004). "Modelling catch and effort data using generalized linear models, the Tweedie distribution, random vessel effects and random stratum-by-year effects". CCAMLR Science. 11: 59–80. 53. ^ Kendal, WS; Jørgensen, B (2011). "Taylor's power law and fluctuation scaling explained by a central-limit-like convergence". Phys. Rev. E. 83: 066115. doi:10.1103/physreve.83.066115. 54. ^ Kendal, WS (2015). "Self-organized criticality attributed to a central limit-like convergence effect". Physica A. 421: 141–150. doi:10.1016/j.physa.2014.11.035.
# Root of a multi-variable derivative How I got to the problem: Let $f(x,y)=\frac{1}{\sqrt{(x-a_1)^2+(y-a_2)^2}}+\frac{1}{\sqrt{(x-b_1)^2+(y-b_2)^2}}$, where $a_1,a_2,b_1,b_2 \in \mathbb{R}$, $a=(a_1,a_2)\neq (b_1,b_2)=b$ are fixed and $x,y \in \mathbb{R}$, $b \neq (x,y)\neq a$. (You can also see $a$ and $b$ as vectors in $\mathbb{R}^2$.) I want to show that the derivative of this function has exactly one point $z=(x_0,y_0)$ where the derivative is zero, i.e. $D(f(z))=0$. One can easily see that $p_1((x,y))=\frac{\partial}{\partial x}f=\frac{a_1-x}{\left(\left(x-a_1\right){}^2+\left(y-a_2\right){}^2\right){}^{3/2}}+\frac{b_1-x}{\left(\left(x-b_1\right){}^2+\left(y-b_2\right){}^2\right){}^{3/2}}$ whereas $p_2((x,y))=\frac{\partial}{\partial y}f=\frac{a_2-y}{\left(\left(x-a_1\right){}^2+\left(y-a_2\right){}^2\right){}^{3/2}} +\frac{b_2-y}{\left(\left(x-b_1\right){}^2+\left(y-b_2\right){}^2\right){}^{3/2}}$ Before going any deeper, let us look at an example function: I set $a=(0,0)$ and $b=(1,0)$ and plot the function. We can easily see that the only point where the derivative should be zero is exactly between the peaks, so we guess that $z=\frac{a+b}{2}$. Substituting it in, we easily verify that $p_1(z)=p_2(z)=0$ (so the total derivative will also be 0 at $z$). Now we have to prove that this is the only solution, which is the hard part for me. We can play around with this a bit and get to two final equations (right below). Now the real problem: $(1)\; 0=\left(a_1-x\right)\left(\left(b_2-y\right){}^2+\left(b_1-x\right){}^2\right){}^{3/2}+\left(b_1-x\right)\left(\left(a_1-x\right){}^2+\left(a_2-y\right){}^2\right){}^{3/2}$ $(2)\; 0=\left(b_2-y\right) \left(\left(a_1-x\right){}^2+\left(a_2-y\right){}^2\right){}^{3/2}+\left(a_2-y\right) \left(\left(b_1-x\right){}^2+\left(b_2-y\right){}^2\right){}^{3/2}$ We are done if I can show that $(1)$ and $(2)$ imply that $(x,y)=z=\frac{a+b}{2}$. But even Mathematica fails on that one; I hope you can provide some help on this final step.
Your problem is clearly invariant under translations and rotations, so that you may assume without loss of generality that $a=(0,0)$ and $b=(b_1,0)$ with $b_1>0$. Then equation (2) has only one solution: $y=0$. Equation (1) reduces to $$-x|b_1-x|^3+(b_1-x)|x|^3=0.$$ There are no solutions with $x<0$ or $x>b_1$. Trivial solutions are $x=0$ and $x=b_1$, but these are not solutions of $\nabla f=0$. The only solution in $(0,b_1)$ is $x=b_1/2$.
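The answer's conclusion can be double-checked numerically. The sketch below (with an arbitrary choice of $a$ and $b$) evaluates the gradient $(p_1, p_2)$ at the midpoint, where it vanishes, and at another point, where it does not:

```python
import numpy as np

def grad_f(x, y, a, b):
    """Gradient of f(x,y) = 1/|r-a| + 1/|r-b| at r = (x, y)."""
    ra = np.array([x - a[0], y - a[1]])
    rb = np.array([x - b[0], y - b[1]])
    return -ra / np.linalg.norm(ra)**3 - rb / np.linalg.norm(rb)**3

a, b = np.array([0.2, -1.0]), np.array([3.0, 0.7])
mid = (a + b) / 2

print(grad_f(*mid, a, b))        # ~ [0, 0] up to floating-point error
print(grad_f(1.0, 1.0, a, b))    # nonzero away from the midpoint
```

At the midpoint the two difference vectors are exact negatives with equal norms, so the two terms of the gradient cancel, matching the claim that $z=(a+b)/2$ is the critical point.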
# SP20:Lecture 3 prep Please come to lecture 3 knowing the following definitions (you can click on the terms or symbols for more information, or you can review the entire lecture notes from last semester here): Definition: Subset If $A$ and $B$ are sets, then $A$ is a subset of $B$ (written $A \subseteq B$) if every $a \in A$ is also in $B$. Definition: Power set The power set of a set $A$ (written $2^A$) is the set of all subsets of $A$. Formally, $2^A := \{B \mid B \subseteq A\}$. Definition: Union If $A$ and $B$ are sets, then the union of $A$ and $B$ (written $A \cup B$) is given by $A \cup B := \{x \mid x \in A \text{ or } x \in B\}$. Definition: Intersection If $A$ and $B$ are sets, then the intersection of $A$ and $B$ (written $A \cap B$) is given by $A \cap B := \{x \mid x \in A \text{ and } x \in B\}$. Definition: Set difference If $A$ and $B$ are sets, then the set difference $A$ minus $B$ (written $A \setminus B$) is given by $A \setminus B := \{x \mid x \in A \text{ and } x \notin B\}$.
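These definitions map directly onto Python's built-in set operations; the power set can be built with `itertools` (a small illustrative sketch, not part of the course notes):

```python
from itertools import chain, combinations

A, B = {1, 2}, {2, 3}

assert {2}.issubset(A)            # subset
assert A | B == {1, 2, 3}         # union
assert A & B == {2}               # intersection
assert A - B == {1}               # set difference

def power_set(s):
    """All subsets of s, i.e. the power set of s."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

print(power_set(A))   # [set(), {1}, {2}, {1, 2}]
```

Note that the power set of an $n$-element set has $2^n$ elements, which is what motivates the $2^A$ notation.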
## Monday, June 11, 2012

### Topological Spaces

The $n$-dimensional Euclidean space, $\mathbb{R}^n$, is the richest space that we know of. Notions of distance, norm, inner product etc. are understood in $\mathbb{R}^n$. Moreover, limits, continuity and differentiability are well defined on it. We would like to see how these concepts (or more abstract versions of them) could be generalized to other spaces that are not as rich as Euclidean spaces. Topological spaces are the most abstract kind of such spaces. We shall begin the discussion on Riemannian manifolds by defining topological spaces. Definition 1.1: Topological space A set $X$ together with a collection of its subsets $T$, usually denoted by $(X, T)$, is known as a topological space if the following axioms are satisfied. 1. $X$ and $\phi$ are included in $T$. 2. Any union of sets in $T$ is included in $T$. 3. Any finite intersection of sets in $T$ is included in $T$. In such cases, sets in $T$ are called open sets and $T$ itself is referred to as a topology of $X$. An open neighborhood of a point $x \in X$ is any open set containing $x$. In this series of articles we use the terms neighborhood and open neighborhood interchangeably. Example $(X, T)$ with $X = \{1, 2, 3\}, T = \{\{1,2,3\}, \phi, \{2, 3\}, \{2\}, \{3\}\}$ is a topological space since all three axioms are satisfied. Some checks are shown below. For $\{2\} \in T$ and $\{2, 3\} \in T$, 1. $\{2\} \cup \{2, 3\} = \{2, 3\}$ is also in $T$ (Axiom 2) 2. $\{2\} \cap \{2, 3\} = \{2\}$ is also in $T$ (Axiom 3) Similarly, the reader can verify the same for other unions and finite intersections of sets in $T$. In this example $\{1, 2, 3\}, \{2, 3\}$ and $\{2\}$ are open neighborhoods of $2 \in X$. Non-Example $(X, T)$ with $X = \{1, 2, 3\}, T = \{\{1,2,3\}, \phi, \{1, 2\}, \{2, 3\}\}$ is not a topological space. Because, for $\{1, 2\}, \{2, 3\} \in T$, their intersection, $\{1, 2\} \cap \{2, 3\} = \{2\}$ is not in $T$.
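For a finite set $X$ these axiom checks can be done mechanically. The sketch below (illustrative Python, not part of the original article) verifies closure under pairwise unions and intersections — which suffices when $T$ is finite — and reproduces the example and non-example above:

```python
from itertools import combinations

def is_topology(X, T):
    """Check the three topology axioms for a finite X and finite T.
    For finite T, closure under pairwise union/intersection implies
    closure under all finite unions and intersections."""
    T = [frozenset(s) for s in T]
    if frozenset(X) not in T or frozenset() not in T:
        return False                      # Axiom 1
    for S1, S2 in combinations(T, 2):
        if S1 | S2 not in T:              # Axiom 2
            return False
        if S1 & S2 not in T:              # Axiom 3
            return False
    return True

X = {1, 2, 3}
T_good = [{1, 2, 3}, set(), {2, 3}, {2}, {3}]
T_bad = [{1, 2, 3}, set(), {1, 2}, {2, 3}]
print(is_topology(X, T_good), is_topology(X, T_bad))   # True False
```

The non-example fails exactly where the text says: $\{1,2\} \cap \{2,3\} = \{2\}$ is missing from $T$.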
From now on we may refer to a set $X$ as a topological space without explicit reference to its topology $T$. In such references it is implied that a valid topology is defined on $X$.

### Euclidean Topology and Metric Topology

In order to understand the rather abstract notion of a topological space presented in the previous section, let us now consider a particular concrete example of a topological space. We pick the $n$-dimensional Euclidean space, $\mathbb{R}^n$. As understood from the previous section, in order to define a topological space, we need to identify a set ($X$) and a collection of its subsets ($T$) that is closed under unions and finite intersections. 'Open sets' is the name given to the members of $T$. In the present example, $X$ is the infinite set consisting of all $n$-dimensional vectors in $\mathbb{R}^n$. What is $T$ then? In order to define $T$, we first define open sets on $\mathbb{R}^n$ such that the axioms in Definition 1.1 are satisfied. Then, all $n$-dimensional vectors together with the collection of open sets so defined will form a topological space. We now proceed to defining an open set in $\mathbb{R}^n$, starting with the definition of an open ball.

Definition 1.2: Open ball (Euclidean spaces)

For any $x_0 \in \mathbb{R}^n$ and $\varepsilon > 0$, the open ball of radius $\varepsilon$ around $x_0$ is the set $B_{\varepsilon}(x_0) = \{x \in \mathbb{R}^n \mid \|x - x_0\| < \varepsilon \}$.

An open ball around a point $x_0 \in \mathbb{R}^2$ is shown graphically in Figure 1. The radius of the ball, $\varepsilon$, can be arbitrarily small. The intuition is that an open ball contains all points that are sufficiently close to the given point. How do we define 'closeness'? We use the concept of distance between two points in $\mathbb{R}^n$.

Figure 1: An open ball in $\mathbb{R}^2$. Note that the circumference of the circle is not included in the 'open' ball.
It is worth noting that this definition of an open ball depends only on the notion of distance between two points, i.e., $\| x - x_0 \|$. In Euclidean spaces, we measure the distance between any two points by taking the vector norm of the difference vector. This is known as the Euclidean distance or Euclidean metric. It is not necessary to have a Euclidean space (or any normed vector space, for that matter) to measure the distance between two points. In fact, the notion of distance between any two points is present in the more general metric space. While discussing $\mathbb{R}^n$ we would like to extend the concepts to this more general structure.

Simply put, a metric space is a set with a notion of distance between any two points. A metric space is usually denoted by $(M, d)$ where $M$ is the set and $d$ is the distance function or the metric. Any $n$-dimensional Euclidean space, along with the Euclidean distance, forms a metric space. The following definition of an open ball in a metric space is a direct generalization of the above definition for Euclidean spaces.

Definition 1.3: Open ball (Metric space)

Let $(M, d)$ be a metric space. For any $x_0 \in M$ and $\varepsilon > 0$, the open ball of radius $\varepsilon$ around $x_0$ is the set $B_{\varepsilon}(x_0) = \{x \in M \mid d(x, x_0) < \varepsilon \}$.

We next present the definition of an open set for a general metric space. This also covers open sets in Euclidean spaces, since they are metric spaces.

Definition 1.4: Open set (Metric space)

Let $(M, d)$ be a metric space. A subset $A \subseteq M$ is said to be an open set if it contains an open ball around each of its points; or equivalently, an open set is a subset of $M$ that can be realized as a union of open balls in $M$.

Figure 2. An open set in $\mathbb{R}^2$. An open ball itself is an open set.

Figure 2 visualizes an open set in $\mathbb{R}^2$. Note that an open ball itself is an open set according to the above definition.
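Definitions 1.2 and 1.3 are easy to experiment with numerically. The small sketch below (function names are our own; any metric `d` can be plugged in) highlights two things: ball membership uses a strict inequality, and which points lie in a ball depends on the chosen metric:

```python
import math

def euclidean(p, q):
    """The Euclidean metric on R^n."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    """Another valid metric on R^n; it induces different open balls."""
    return sum(abs(a - b) for a, b in zip(p, q))

def in_open_ball(x, x0, eps, d):
    """Membership in B_eps(x0) per Definition 1.3 -- strict inequality,
    so points at distance exactly eps (the boundary) are excluded."""
    return d(x, x0) < eps
```

For instance, the point $(0.6, 0.6)$ lies in the Euclidean unit ball around the origin (distance $\approx 0.85$) but not in the Manhattan unit ball (distance $1.2$).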
Also recall that an open neighborhood of a point is defined as an open set containing the point. Hence, whenever we refer to an open neighborhood of some point $x_0$, it is sufficient to visualize an open ball around $x_0$ with an arbitrarily small radius. The reader is encouraged to verify that, following Definition 1.4, the open sets of a metric space are closed under unions and finite intersections and therefore satisfy the axioms in Definition 1.1. Therefore, a metric space (and hence the $n$-dimensional Euclidean space) together with the collection of open sets defined as in Definition 1.4 is a topological space. In the Euclidean case this topology is referred to as the Euclidean topology, and in the more general case of a metric space it is referred to as the metric topology.

### Continuity

We will now discuss the topological definition of continuity of a function, perhaps the most important concept defined on topological spaces. We start from the usual $\varepsilon$-$\delta$ definition of continuity of a function at a point in $\mathbb{R}^n$. The intuition is that a function $f$ is continuous at $x_0$ if it is possible to make $f(x)$ arbitrarily close to $f(x_0)$ by choosing $x$ values sufficiently close to $x_0$. It is formalized as follows:

Definition 1.5: Continuity at a point (Euclidean space)

A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is said to be continuous at $x_0 \in \mathbb{R}^n$ if $\forall \varepsilon > 0, \exists \delta > 0$ such that, $\|x - x_0 \| < \delta \Rightarrow \|f(x) - f(x_0)\| < \varepsilon$.

Following Definition 1.2, this is equivalent to stating that it is possible to confine $f(x)$ to an arbitrarily small open ball around $f(x_0)$ by confining $x$ to a sufficiently small open ball around $x_0$. Definition 1.5 could easily be generalized to a metric space by replacing the norms of difference vectors with the distance metric.
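Definition 1.5 can be probed numerically. The sketch below (our own helper, a sample-based check rather than a proof) tests a proposed $\delta$ for $f(x) = x^2$ at $x_0 = 1$; the working $\delta = \min(1, \varepsilon/3)$ comes from the bound $|x^2 - 1| = |x-1||x+1| \le 3|x-1|$ when $|x-1| < 1$:

```python
def check_continuity_at(f, x0, eps, delta, samples=10001):
    """Sample-based check of Definition 1.5: every sampled x with
    |x - x0| < delta should satisfy |f(x) - f(x0)| < eps.
    (A numeric spot check over a grid, not a proof.)"""
    for i in range(samples):
        x = x0 - delta + 2 * delta * i / (samples - 1)
        if abs(x - x0) < delta and abs(f(x) - f(x0)) >= eps:
            return False
    return True

f = lambda x: x * x
eps = 1e-3
delta = min(1.0, eps / 3)   # |x^2 - 1| <= 3|x - 1| when |x - 1| < 1
```

A too-generous choice such as $\delta = 1$ fails the same check, since e.g. $x = 1.9$ satisfies $|x - 1| < 1$ but $|f(x) - f(1)| = 2.61 \ge \varepsilon$.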
Further, we can convert this definition to an equivalent one involving open neighborhoods, using the definition of an open neighborhood in Euclidean spaces.

Figure 3: Defining continuity using the notion of open neighborhoods.

Definition 1.6: Continuity at a point (Euclidean space)

A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is said to be continuous at $x_0 \in \mathbb{R}^n$ if for any open neighborhood $V$ of $f(x_0)$, $\exists$ an open neighborhood $U$ of $x_0$ such that $f(U) \subset V$.

Or equivalently,

Definition 1.7: Continuity at a point (Euclidean space)

A function $f: \mathbb{R}^n \to \mathbb{R}^m$ is said to be continuous at $x_0 \in \mathbb{R}^n$ if for any open neighborhood $V$ of $f(x_0)$, $f^{-1}(V)$ is an open neighborhood of $x_0$, where the inverse (or pre-image) of $f$ is defined as $f^{-1}(V) = \{x \in \mathbb{R}^n \mid f(x) \in V\}$ for $V \subseteq \mathbb{R}^m$.

This definition does not use distances but only the concept of open neighborhoods. Therefore, one could generalize this definition directly to topological spaces. However, continuity at a point is not a very useful concept for topological spaces. Hence, we present the topological definition of a continuous function. In a Euclidean (or a metric) space, a continuous function is defined as a function which is continuous at all points of its domain.

Definition 1.8: Continuous function (Topological space)

A function $f:X \to Y$ between two topological spaces $X, Y$ is said to be continuous if for every open set $V \subseteq Y$, $f^{-1}(V)$ is open in $X$.

In the next article of the series we shall continue our discussion towards understanding topological manifolds, differentiable manifolds and Riemannian manifolds.

### References

[1] John M. Lee. Introduction to Topological Manifolds. Graduate Texts in Mathematics. Springer, 2011.

[2] P.A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
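For finite topological spaces, Definition 1.8 is directly computable. The sketch below (helper names are our own; the topology on the domain is the finite example from the beginning of the post) tests whether the preimage of every open set is open:

```python
def is_continuous(f, TX, TY):
    """Definition 1.8 for finite topological spaces: f (given as a dict)
    is continuous iff the preimage of every open set of Y is open in X."""
    TX = set(map(frozenset, TX))
    domain = max(TX, key=len)          # the whole space X is the largest open set
    for V in TY:
        preimage = frozenset(x for x in domain if f[x] in V)
        if preimage not in TX:
            return False
    return True

TX = [{1, 2, 3}, set(), {2, 3}, {2}, {3}]   # the topology from the earlier example
TY = [{'a', 'b'}, set(), {'a'}]
f_cont = {1: 'b', 2: 'a', 3: 'a'}           # preimage of {'a'} is {2, 3}: open
f_disc = {1: 'a', 2: 'b', 3: 'b'}           # preimage of {'a'} is {1}: not open
```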
https://math.stackexchange.com/questions/1708882/gradient-of-the-frobenius-norm-or-matrix-trace-of-an-expression-involving-a-ma
# Gradient of the Frobenius Norm (or matrix trace) of an expression involving a matrix and its inverse For real, positive definite (square) matrices $\mathbf{A}$, $\mathbf{X}$, and $\mathbf{C}$, I would like to find an expression for the following gradient: $\nabla_\mathbf{X} || \mathbf{AX}+\mathbf{X}^{-1}\mathbf{C} ||_F$ where $|| \cdot ||_F$ represents the Frobenius norm (although if another choice of norm makes this easier, I'd be interested to know). I know that this corresponds to the trace as $\nabla_\mathbf{X} \mathrm{Trace}\left[ \left(\mathbf{AX}+\mathbf{X}^{-1}\mathbf{C}\right)^T \left(\mathbf{AX}+\mathbf{X}^{-1}\mathbf{C}\right) \right]$ which expands to $\nabla_\mathbf{X} \mathrm{Trace}\left[ \mathbf{XAAX} + \mathbf{CX}^{-1}\mathbf{AX} + \mathbf{XAX}^{-1}\mathbf{C}+\mathbf{C}\mathbf{X}^{-1}\mathbf{X}^{-1}\mathbf{C}\right]$ And I believe the trace of the sum is the sum of the traces, so each of these terms may be considered separately. The first and last terms seem like they could probably be evaluated in terms of (111) and (119) in the matrix cookbook, but I'm not sure about the middle two, which have both $\mathbf{X}$ and $\mathbf{X}^{-1}$. Any guidance would be appreciated. For convenience, define the variable \eqalign{ M &= AX+X^{-1}C \cr dM &= A\,dX - X^{-1}\,dX\,X^{-1}C \cr } and the function \eqalign{ f &= \|M\|_F^2 \,=\, M:M \cr\cr df &= 2M:dM \cr &= 2M:A\,dX - 2M:X^{-1}\,dX\,X^{-1}C \cr &= 2(A^TM - X^{-T}MC^TX^{-T}) : dX \cr } where a colon is used to denote the Frobenius Inner Product. Your question was about a slightly different, but related function \eqalign{ h &= f^\frac{1}{2} \cr h^2 &= f \cr 2hdh &= df \,=\, 2(A^TM - X^{-T}MC^TX^{-T}) : dX \cr\cr \frac{\partial h}{\partial X} &= \frac{A^TM - X^{-T}MC^TX^{-T}}{h} \cr } • Thank you so much. Could you clarify how you go between these two lines? 
\eqalign{ df &= 2M:A\,dX - 2M:X^{-1}\,dX\,X^{-1}C \cr &= 2(A^TM - X^{-T}MC^TX^{-T}) : dX \cr } – Nick Mar 23 '16 at 15:47 • The Frobenius mixed product rules are\eqalign{A:BC&=B^TA:C\cr &=AC^T:B\cr (A\otimes B):(X\otimes Y)&=(A:X)\otimes(B:Y)\cr A:B\circ C&=A\circ B:C}where $\otimes$ is the Kronecker product, $\circ$ is the Hadamard product, and juxtaposition is the standard matrix product. It is also worth noting that the Hadamard and Frobenius products are commutative.$$.$$These rules can be derived from the Frobenius-trace equivalence$$A:B={\rm tr}(A^TB)$$ – lynn Mar 24 '16 at 15:26
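As a quick numerical sanity check of the accepted answer (not part of the original thread), the analytic gradient of $h$ can be compared against a central finite difference. The 2x2 matrices, values, and plain-list helpers below are our own choices, standing in for a linear-algebra library:

```python
import math, random

def mm(A, B):    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def tr(A):       return [[A[j][i] for j in range(2)] for i in range(2)]
def add(A, B):   return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
def scale(t, A): return [[t * A[i][j] for j in range(2)] for i in range(2)]
def inv(A):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
def frob(A, B):  return sum(A[i][j] * B[i][j] for i in range(2) for j in range(2))

A = [[2.0, 0.0], [0.0, 3.0]]
C = [[1.0, 0.0], [0.0, 1.0]]
X = [[1.0, 0.5], [0.5, 2.0]]

def h(X):
    M = add(mm(A, X), mm(inv(X), C))   # M = AX + X^{-1}C
    return math.sqrt(frob(M, M))       # h = ||M||_F

# analytic gradient from the answer: (A^T M - X^{-T} M C^T X^{-T}) / h
M  = add(mm(A, X), mm(inv(X), C))
Xi = tr(inv(X))                        # X^{-T}
G  = scale(1.0 / h(X),
           add(mm(tr(A), M), scale(-1.0, mm(mm(Xi, M), mm(tr(C), Xi)))))

# central finite-difference directional derivative along a fixed direction D
random.seed(0)
D = [[random.random() for _ in range(2)] for _ in range(2)]
t = 1e-6
numeric  = (h(add(X, scale(t, D))) - h(add(X, scale(-t, D)))) / (2 * t)
analytic = frob(G, D)                  # <dh/dX, D> in the Frobenius inner product
```

The two directional derivatives agree to roughly machine-plus-differencing precision, which supports the gradient expression derived above.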
http://math.stackexchange.com/questions/104449/how-to-find-a-generic-parabola-through-3-arbitrary-points-in-r2
# How to find a generic parabola through 3 arbitrary points in R^2? Given $(a,b)$, $(c,d)$, and $(e,f)$ (assume non-collinear and $a\neq c$, $c\neq e$, and $a\neq e$), is there a generic way to find a parabolic function between the three? - Yes. Write $y = Ax^2 + Bx + C$. Substitute in the three points; if the values $a$, $c$ and $e$ are distinct, you get a nondegenerate system of three linear equations in three unknowns. Solve for $A$, $B$ and $C$ and you are there. - The assumption here, of course, is that the parabola sought has an axis parallel to a coordinate axis. A parabola in general position, however, is only completely determined by four points. – J. M. Feb 1 '12 at 1:56 Correct. He seeks a quadratic. This gives one, but it has a graph so that $y$ is a function of $x$. – ncmathsadist Feb 1 '12 at 2:04 Otherwise the problem is underdetermined. – ncmathsadist Feb 1 '12 at 2:05
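The accepted answer's recipe is just a three-equation linear solve. A sketch (our own function, applying Cramer's rule to the 3x3 Vandermonde system) for three sample points:

```python
def parabola_through(p1, p2, p3):
    """Solve for A, B, C with y = A x^2 + B x + C through three points
    with distinct x-coordinates, via Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    V = [[x1 * x1, x1, 1], [x2 * x2, x2, 1], [x3 * x3, x3, 1]]
    d = det3(V)  # nonzero exactly when the x-coordinates are distinct
    ys = [y1, y2, y3]
    def with_col(j):
        return [[ys[i] if k == j else V[i][k] for k in range(3)] for i in range(3)]
    return tuple(det3(with_col(j)) / d for j in range(3))

A, B, C = parabola_through((0, 1), (1, 0), (2, 3))   # gives y = 2x^2 - 3x + 1
```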
http://zbmath.org/?q=an:1217.47098
# zbMATH — the first resource for mathematics

Fixed point and mean ergodic theorems for new nonlinear mappings in Hilbert spaces. (English) Zbl 1217.47098

Let $H$ be a Hilbert space and $C$ be a nonempty closed convex subset of $H$.
Then a mapping $T:C\to C$ is called 2-generalized hybrid if there are $\alpha_1, \alpha_2, \beta_1, \beta_2 \in \mathbb{R}$ such that

$$\alpha_1 \|T^2x - Ty\|^2 + \alpha_2 \|Tx - Ty\|^2 + (1 - \alpha_1 - \alpha_2)\|x - Ty\|^2 \le \beta_1 \|T^2x - y\|^2 + \beta_2 \|Tx - y\|^2 + (1 - \beta_1 - \beta_2)\|x - y\|^2 \quad \text{for all } x, y \in C.$$

This is a new and broad class of nonlinear mappings, covering several known classes such as nonexpansive mappings, nonspreading mappings, hybrid mappings, $(\alpha, \beta)$-generalized hybrid mappings and quasi-nonexpansive mappings. Theorem 3.1 asserts that, provided $C$ is a nonempty closed convex subset of a Hilbert space $H$, a 2-generalized hybrid mapping $T:C\to C$ has a fixed point in $C$ if and only if $\{T^n z\}$ is bounded for some $z \in C$. Several known fixed point results for the subclasses of 2-generalized hybrid mappings mentioned above are proved as consequences of Theorem 3.1. An even broader class of nonlinear mappings, that of $n$-generalized hybrid mappings, is mentioned. An analogue of Theorem 3.1 can be proved for this class as well. Two other important results for the class of 2-generalized hybrid mappings are proved: a nonlinear ergodic theorem of Baillon's type (Theorem 4.1) and a weak convergence theorem of Mann's type (Theorem 5.3).

##### MSC:
47H10 Fixed point theorems for nonlinear operators on topological linear spaces
47H05 Monotone operators (with respect to duality) and generalizations
47H25 Nonlinear ergodic theorems
https://math.stackexchange.com/questions/991918/quadratic-formula?noredirect=1
$$0 = a(x^2 + \frac ba x) + c = a(x^2 + \frac ba x + \frac{b^2}{4a^2}) -\frac{b^2}{4a} + c$$ $$= a(x + \frac b{2a})^2 + c - \frac{b^2}{4a}$$ It is clear that the above equation simplifies to the quadratic formula, yet I was curious why the simplification is carried out as seen above from the original $ax^2 + bx + c = 0$; for instance, how does it become $a(x^2 + \frac bax) + c$?

## marked as duplicate by Jonas Meyer, user147263, apnorton, Daniel Fischer, Adam Hughes Dec 12 '14 at 22:11

$$ax^2+bx+c=0\tag{1}$$ Factoring $a$ out of the first two terms, $$a\left(x^2 + \frac bax\right) + c=0$$
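The completed-square form leads directly to the roots. A sketch (our own function name; it assumes real roots, i.e. $b^2 - 4ac \ge 0$) that follows exactly the algebra above:

```python
import math

def roots_by_completing_square(a, b, c):
    """From a(x + b/(2a))^2 + c - b^2/(4a) = 0 it follows that
    (x + b/(2a))^2 = (b^2 - 4ac) / (4a^2); take square roots."""
    shift = b / (2 * a)
    rhs = (b * b - 4 * a * c) / (4 * a * a)
    s = math.sqrt(rhs)          # assumes rhs >= 0 (real roots)
    return (-shift - s, -shift + s)

r1, r2 = roots_by_completing_square(1, -5, 6)   # x^2 - 5x + 6 = (x - 2)(x - 3)
```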
https://wiki.math.ucr.edu/index.php?title=009B_Sample_Final_1,_Problem_4&oldid=1231
# 009B Sample Final 1, Problem 4 Compute the following integrals. a) ${\displaystyle \int e^{x}(x+\sin(e^{x}))~dx}$ b) ${\displaystyle \int {\frac {2x^{2}+1}{2x^{2}+x}}~dx}$ c) ${\displaystyle \int \sin ^{3}x~dx}$ Foundations: Recall: 1. Integration by parts tells us that ${\displaystyle \int u~dv=uv-\int v~du}$. 2. Through partial fraction decomposition, we can write the fraction  ${\displaystyle {\frac {1}{(x+1)(x+2)}}={\frac {A}{x+1}}+{\frac {B}{x+2}}}$  for some constants ${\displaystyle A,B}$. 3. We have the Pythagorean identity ${\displaystyle \sin ^{2}(x)=1-\cos ^{2}(x)}$. Solution: (a) Step 1: We first distribute to get ${\displaystyle \int e^{x}(x+\sin(e^{x}))~dx\,=\,\int e^{x}x~dx+\int e^{x}\sin(e^{x})~dx.}$ Now, for the first integral on the right hand side of the last equation, we use integration by parts. Let ${\displaystyle u=x}$ and ${\displaystyle dv=e^{x}dx}$. Then, ${\displaystyle du=dx}$ and ${\displaystyle v=e^{x}}$. So, we have ${\displaystyle {\begin{array}{rcl}\displaystyle {\int e^{x}(x+\sin(e^{x}))~dx}&=&\displaystyle {{\bigg (}xe^{x}-\int e^{x}~dx{\bigg )}+\int e^{x}\sin(e^{x})~dx}\\&&\\&=&\displaystyle {xe^{x}-e^{x}+\int e^{x}\sin(e^{x})~dx}.\\\end{array}}}$ Step 2: Now, for the one remaining integral, we use ${\displaystyle u}$-substitution. Let ${\displaystyle u=e^{x}}$. Then, ${\displaystyle du=e^{x}dx}$. So, we have ${\displaystyle {\begin{array}{rcl}\displaystyle {\int e^{x}(x+\sin(e^{x}))~dx}&=&\displaystyle {xe^{x}-e^{x}+\int \sin(u)~du}\\&&\\&=&\displaystyle {xe^{x}-e^{x}-\cos(u)+C}\\&&\\&=&\displaystyle {xe^{x}-e^{x}-\cos(e^{x})+C}.\\\end{array}}}$ (b) Step 1: First, we add and subtract ${\displaystyle x}$ from the numerator.
So, we have ${\displaystyle {\begin{array}{rcl}\displaystyle {\int {\frac {2x^{2}+1}{2x^{2}+x}}~dx}&=&\displaystyle {\int {\frac {2x^{2}+x-x+1}{2x^{2}+x}}~dx}\\&&\\&=&\displaystyle {\int {\frac {2x^{2}+x}{2x^{2}+x}}~dx+\int {\frac {1-x}{2x^{2}+x}}~dx}\\&&\\&=&\displaystyle {\int ~dx+\int {\frac {1-x}{2x^{2}+x}}~dx}.\\\end{array}}}$ Step 2: Now, we need to use partial fraction decomposition for the second integral. Since ${\displaystyle 2x^{2}+x=x(2x+1)}$, we let ${\displaystyle {\frac {1-x}{2x^{2}+x}}={\frac {A}{x}}+{\frac {B}{2x+1}}}$. Multiplying both sides of the last equation by ${\displaystyle x(2x+1)}$, we get ${\displaystyle 1-x=A(2x+1)+Bx}$. If we let ${\displaystyle x=0}$, the last equation becomes ${\displaystyle 1=A}$. If we let ${\displaystyle x=-{\frac {1}{2}}}$, then we get  ${\displaystyle {\frac {3}{2}}=-{\frac {1}{2}}\,B}$. Thus, ${\displaystyle B=-3}$. So, in summation, we have  ${\displaystyle {\frac {1-x}{2x^{2}+x}}={\frac {1}{x}}+{\frac {-3}{2x+1}}}$. Step 3: If we plug in the last equation from Step 2 into our final integral in Step 1, we have ${\displaystyle {\begin{array}{rcl}\displaystyle {\int {\frac {2x^{2}+1}{2x^{2}+x}}~dx}&=&\displaystyle {\int ~dx+\int {\frac {1}{x}}~dx+\int {\frac {-3}{2x+1}}~dx}\\&&\\&=&\displaystyle {x+\ln x+\int {\frac {-3}{2x+1}}~dx}.\\\end{array}}}$ Step 4: For the final remaining integral, we use ${\displaystyle u}$-substitution. Let ${\displaystyle u=2x+1}$. Then, ${\displaystyle du=2\,dx}$ and  ${\displaystyle {\frac {du}{2}}=dx}$. Thus, our final integral becomes ${\displaystyle {\begin{array}{rcl}\displaystyle {\int {\frac {2x^{2}+1}{2x^{2}+x}}~dx}&=&\displaystyle {x+\ln x+\int {\frac {-3}{2x+1}}~dx}\\&&\\&=&\displaystyle {x+\ln x+\int {\frac {-3}{2u}}~du}\\&&\\&=&\displaystyle {x+\ln x-{\frac {3}{2}}\ln u+C}.\\\end{array}}}$ ${\displaystyle \int {\frac {2x^{2}+1}{2x^{2}+x}}~dx\,=\,x+\ln x-{\frac {3}{2}}\ln(2x+1)+C.}$ (c) Step 1: First, we write ${\displaystyle \int \sin ^{3}x~dx=\int \sin ^{2}x\sin x~dx}$. 
Using the identity ${\displaystyle \sin ^{2}x+\cos ^{2}x=1}$, we get ${\displaystyle \sin ^{2}x=1-\cos ^{2}x}$. If we use this identity, we have ${\displaystyle \int \sin ^{3}x~dx=\int (1-\cos ^{2}x)\sin x~dx}$. Step 2: Now, we proceed by ${\displaystyle u}$-substitution. Let ${\displaystyle u=\cos x}$. Then, ${\displaystyle du=-\sin xdx}$. So we have ${\displaystyle {\begin{array}{rcl}\displaystyle {\int \sin ^{3}x~dx}&=&\displaystyle {\int -(1-u^{2})~du}\\&&\\&=&\displaystyle {-u+{\frac {u^{3}}{3}}+C}\\&&\\&=&\displaystyle {-\cos x+{\frac {\cos ^{3}x}{3}}+C}.\\\end{array}}}$ (a)  ${\displaystyle xe^{x}-e^{x}-\cos(e^{x})+C}$ (b)  ${\displaystyle x+\ln x-{\frac {3}{2}}\ln(2x+1)+C}$ (c)  ${\displaystyle -\cos x+{\frac {\cos ^{3}x}{3}}+C}$
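Each antiderivative above can be sanity-checked by differentiating it numerically and comparing with the original integrand (a numerical spot check of our own, not part of the original solution):

```python
import math

def dnum(F, x, h=1e-6):
    """Central finite-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

# (a): d/dx [x e^x - e^x - cos(e^x)] should equal e^x (x + sin(e^x))
Fa = lambda x: x * math.exp(x) - math.exp(x) - math.cos(math.exp(x))
fa = lambda x: math.exp(x) * (x + math.sin(math.exp(x)))

# (b): d/dx [x + ln x - (3/2) ln(2x+1)] should equal (2x^2+1)/(2x^2+x), for x > 0
Fb = lambda x: x + math.log(x) - 1.5 * math.log(2 * x + 1)
fb = lambda x: (2 * x * x + 1) / (2 * x * x + x)

# (c): d/dx [-cos x + cos^3 x / 3] should equal sin^3 x
Fc = lambda x: -math.cos(x) + math.cos(x) ** 3 / 3
fc = lambda x: math.sin(x) ** 3
```

Evaluating `dnum` at a few sample points and comparing with the integrands confirms all three answers (up to the constant of integration, which differentiation ignores).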
https://www.cut-the-knot.org/triangle/SineCosineLawsEquivalent.shtml
# The Law of Cosines and the Law of Sines Are Equivalent ### Sidney H. Kung Cupertino, CA December 2016 Using the projection property of a triangle, we show that the Law of Sines and the Law of Cosines are equivalent. Refer to the figure below: $\alpha+\beta+\gamma=\pi,\,$ $a=b\cos\gamma+c\cos\beta.\,$ (Note that the latter holds even when one of the angles $\beta,\gamma\,$ is obtuse.) By squaring, \begin{align} a^2 &= b^2\cos^2\gamma+c^2\cos^2\beta+2bc\cos\beta\cos\gamma\\ &=b^2(1-\sin^2\gamma)+c^2(1-\sin^2\beta)+2bc\cos\beta\cos\gamma\\ &=b^2+c^2+2bc[\cos\beta\cos\gamma-\sin\beta\sin\gamma]\\ &\qquad\qquad-[b^2\sin^2\gamma+c^2\sin^2\beta-2bc\sin\beta\sin\gamma]\\ &=[b^2+c^2+2bc\cos (\beta+\gamma)]-(b\sin\gamma-c\sin\beta)^2\\ &=[b^2+c^2-2bc\cos \alpha]-(b\sin\gamma-c\sin\beta)^2, \end{align} where we used the Addition Formula for Cosine. Thus, (1) $a^2=[b^2+c^2-2bc\cos (\alpha)]-(b\sin\gamma-c\sin\beta)^2.$ This expression shows that the Law of Sines and the Law of Cosines are equivalent, in that the two conditions $a^2=b^2+c^2-2bc\cos (\alpha)\,$ and $b\sin\gamma-c\sin\beta=0\,$ either hold or fail simultaneously; the latter is equivalent to the Law of Sines $\displaystyle \frac{\sin\beta}{b}=\frac{\sin\gamma}{c},\,$ while the former is the expression of the Law of Cosines. There have been articles giving, separately, a proof for the Laws of Sines and Cosines or a proof of the Law of Sines using the Law of Cosines ([1],[2],[3]). But with (1) we can do both simultaneously. ### References 1. Law of cosines, Sec 3.7 Using the law of sines, wikipedia.org 2. Patrik Nystedt, A proof of the law of sines using the law of cosines, to appear, 2017 Mathematics Magazine 3. W. W. Sawyer, Prelude to Mathematics, Dover, 2011, pp 37-39
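Identity (1) is easy to confirm numerically on a concrete triangle (our own check, with vertex coordinates chosen arbitrarily; the angles are computed from edge vectors with `atan2`, independently of either law):

```python
import math

# Triangle with vertices A, B, C; sides a, b, c are opposite those vertices.
Ax, Ay = 0.0, 0.0
Bx, By = 4.0, 0.0
Cx, Cy = 1.0, 3.0

a = math.hypot(Bx - Cx, By - Cy)   # side opposite vertex A
b = math.hypot(Ax - Cx, Ay - Cy)   # side opposite vertex B
c = math.hypot(Ax - Bx, Ay - By)   # side opposite vertex C

def vertex_angle(px, py, qx, qy, rx, ry):
    """Angle at vertex P in triangle PQR, via atan2 of cross and dot products."""
    ux, uy = qx - px, qy - py
    vx, vy = rx - px, ry - py
    return math.atan2(abs(ux * vy - uy * vx), ux * vx + uy * vy)

alpha = vertex_angle(Ax, Ay, Bx, By, Cx, Cy)
beta  = vertex_angle(Bx, By, Ax, Ay, Cx, Cy)
gamma = vertex_angle(Cx, Cy, Ax, Ay, Bx, By)

lhs = a * a
rhs = (b * b + c * c - 2 * b * c * math.cos(alpha)) \
      - (b * math.sin(gamma) - c * math.sin(beta)) ** 2
```

For any genuine triangle the correction term $(b\sin\gamma - c\sin\beta)^2$ vanishes, so (1) reduces to the Law of Cosines, and the projection identity $a = b\cos\gamma + c\cos\beta$ checks out as well.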
http://nrich.maths.org/7385/solution
# Fit for Photocopying ##### Stage: 4 Challenge Level: Sam from Appleton Thorn school made a start by noting the following: If we start off by enlarging an A4 sheet to an A2 sheet we can say: 2 A4 sheets go into an A3 sheet. 2 A3 sheets go into an A2 sheet. From this information we can gather that the number of A4 sheets that fit into an A2 sheet is $2\times2 = 4$. Many more of you including Niharika (Leicester High School for Girls) and Declan (Thomas Keble) worked out that: A4 ->  A3 = 2 = $2^1$ A4 -> A2 = 4 = $2^2$ A4 -> A1 = 8 = $2^3$ A4 -> A0 = 16 = $2^4$ and also: A5 -> A4 = 2 = $2^1$ A5 -> A3 = 4 = $2^2$ A5 -> A2 = 8 = $2^3$ A5 -> A1 = 16 = $2^4$ A5 -> A0 = 32 = $2^5$ Jonathan (Najing International School), Muntej (Wilson's School) and someone who didn't leave their name, so we will call them Anon, then noticed a formula for getting from A(n) -> A(m): $2^{n-m}$ Anon showed that this works for $n<m$ as well as $n>m$: Take for example the transition of A3 to A5. This is a scale factor of ${1\over2} \times {1\over2} = {1\over4}$. Using the formula, $3-5=-2$ so $2^{-2} = {1\over{2^2}} = {1\over4}$. Using these equations, Chensheng from Wells Cathedral School calculated that the percentage needed to scale by in order to photocopy an A3 poster to an A4 sheet is about 70.7%.
To go from an A3 sheet to an A4 sheet you need to halve the area. Because area is quadratic and length is linear the length is decreased by a scale factor $\sqrt{1\over2}$ which is 0.707 so an A3 sheet needs to be reduced to 70.7% to fit an A4 sheet. We received lots of solutions for expressing the length of the longer side of a sheet of paper in terms of its shorter side. Well done to William (Barton Community School), Peter (Torquay Boys' Grammar School), Chensheng, Muntej, Niharika and Anon for getting it right. Here is Anon's method. To express the longer side of paper A(n) in terms of its short side, we must consider that the longer side of paper A(n) is the same length as the shorter side of paper (n-1). Using the above formula we can see that the area of the larger sheet is twice that of the smaller sheet and we must square root the whole ratio to find the length. $1:2$ becomes $\sqrt{1} : \sqrt{2}$. Therefore, as the shorter side of paper A(n-1) is $\sqrt{2}$ times longer than the shorter side of paper A(n) and equal to length of the longer side of paper A(n), the longer side, L, can be expressed as: L = S x $\sqrt{2}$, where S is the shorter side. Niharika, Anon and Declan used this formula to work out the dimensions of an A0 sheet of paper with area $1m^2$. Niharika used simultaneous equations. $l\times s=1$ and $l=s\times\sqrt{2}$ $(s\times\sqrt{2})\times s = 1$ $s^2\times\sqrt{2} = 1$ ${s^2} = {1\over{\sqrt{2}}}$ ${s}={1\over{2^{1\over4}}}$ so $l={1\over{2^{1\over4}}}\times 2^{1\over2} = 2^{1\over4}$ This helped them find the exact dimensions of an A4 sheet. This is Anon's answer. The area of an A4 sheet can be calculated as $1\times{2^{0-4}} = 2^{-4} = 0.0625m^2$. $a = l\times s$ $0.0625 = s\times\sqrt{2}\times s$ $0.0625 = s^2\times\sqrt{2}$ ${s^2} = {0.0625\over{\sqrt{2}}}$ $s = \sqrt{0.0625\over{\sqrt{2}}}$ Therefore: $l = {0.0625\over{\sqrt{0.0625\over{\sqrt{2}}}}}$ Can you simplify these equations? 
Now see if you can use them to define A(-1) and A($1\over2$)
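The relationships worked out above can be bundled into a few lines (function names are our own):

```python
import math

def a_size(n):
    """(long, short) side lengths in metres of A(n): the area is 2^(-n) m^2
    and the long side is sqrt(2) times the short side."""
    area = 2.0 ** (-n)
    short = (area / math.sqrt(2)) ** 0.5
    return short * math.sqrt(2), short

def sheets(n, m):
    """Number of A(n) sheets that tile one A(m) sheet: 2^(n - m)."""
    return 2 ** (n - m)

scale_a3_to_a4 = math.sqrt(0.5)   # linear reduction factor, about 70.7%

L0, S0 = a_size(0)                # A0: 2^(1/4) m by 2^(-1/4) m, area 1 m^2
L4, S4 = a_size(4)                # A4: area 2^(-4) = 0.0625 m^2
```

Note that `a_size` also answers the closing question for fractional indices such as $n = -1$ or $n = \tfrac12$, since the formula never assumes $n$ is a whole number.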
https://math.stackexchange.com/questions/2389394/continuity-of-bijection-with-continuous-factor-continuous-on-each-fiber-and-ea
# Continuity of bijection with continuous factor, continuous on each fiber, and each fiber connected. Suppose $X$, $Y$, and $Z$ are compact metric spaces. Suppose $g: Y\to Z$ is a continuous surjection and each fiber $g^{-1}(z)$ is connected. Suppose $f:X\to Y$ is a bijection such that $g\circ f:X \to Z$ is continuous and the restriction of $f$ to $f^{-1}g^{-1}(z)$ is continuous for all $z\in Z$. Does it follow that $f$ is continuous? As answered by Adayah here: Bijection with continuous factors continuous on each fiber, the answer is "no" without the connectedness assumption on the fibers. Inspired by Adayah's answer, we can find a function $f$ that is not continuous. Let $X=Y=[0,1]\times[0,1]$, and $Z=[0,1]$. Define $f$ and $g$ by $$f(x,y)=\begin{cases} (x,y)\quad y<1/2 \\ (1-x,y)\quad y\ge1/2 \end{cases}$$ $$g(x,y)=y$$ Then $f$ is a bijection and $g$ is a continuous surjection, and $g\circ f$ is continuous since $(g\circ f)(x,y)=y$. The preimage of some $z\in Z$ under $g\circ f$ is the connected set $[0,1]\times \{z\}$, which $f$ acts on either as the identity if $z<1/2$ or by the reflection $x\mapsto 1-x$ in the first coordinate if $z\ge1/2$.
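The discontinuity of this $f$ along the line $y = 1/2$ can be seen numerically (a quick illustration of our own, not part of the original post): approaching $(0.3, 0.5)$ from below, the images converge to $(0.3, 0.5)$, yet the value at the limit point is $(0.7, 0.5)$, a jump of $0.4$ in the first coordinate.

```python
def f(x, y):
    """The bijection from the counterexample above."""
    return (x, y) if y < 0.5 else (1 - x, y)

# images of points approaching (0.3, 0.5) from below
below = [f(0.3, 0.5 - 10 ** -k) for k in range(1, 8)]
at_limit = f(0.3, 0.5)
jump = abs(at_limit[0] - below[-1][0])
```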
https://web2.0calc.com/questions/cylinders-geometry-question
# Cylinders - Geometry Question

A cylindrical can that is six inches high has a label that is $60\pi$ square inches in area and exactly covers the outside of the can excluding the top and bottom lids. What is the radius of the can, in inches? (Guest, Jun 30, 2018)

#1 If we unwrapped the label, we would have a rectangle whose area is: 6 * circumference of the can = area of the label. So we have that 6 * 2 * pi * r = 60 pi. Dividing out pi gives 12r = 60, and dividing both sides by 12 gives r = 5 in = radius of the can. (CPhill, Jun 30, 2018)
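The same computation in Python (a trivial check: divide the label area by height times $2\pi$, since the unrolled label is a height-by-circumference rectangle):

```python
import math

height = 6
label_area = 60 * math.pi                       # area of the unrolled label
radius = label_area / (height * 2 * math.pi)    # area = height * circumference
assert math.isclose(radius, 5)
```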
2018-11-19T16:38:59
{ "domain": "0calc.com", "url": "https://web2.0calc.com/questions/cylinders-geometry-question", "openwebmath_score": 0.3924829363822937, "openwebmath_perplexity": 9810.524875443205, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795091201804, "lm_q2_score": 0.6619228691808012, "lm_q1q2_score": 0.6533043084994886 }
https://en.wikipedia.org/wiki/Minkowski%27s_question_mark_function
# Minkowski's question mark function

*Minkowski question mark function. Left: ?(x). Right: ?(x) − x.*

In mathematics, the Minkowski question mark function (or the slippery devil's staircase), denoted by ?(x), is a function possessing various unusual fractal properties, defined by Hermann Minkowski (1904, pages 171–172). It maps quadratic irrationals to rational numbers on the unit interval, via an expression relating the continued fraction expansions of the quadratics to the binary expansions of the rationals, given by Arnaud Denjoy in 1938. In addition, it maps rational numbers to dyadic rationals, as can be seen by a recursive definition closely related to the Stern–Brocot tree.

## Definition

If $[a_0; a_1, a_2, \ldots]$ is the continued fraction representation of an irrational number x, then ${\rm ?}(x) = a_0 + 2 \sum_{n=1}^\infty \frac{(-1)^{n+1}}{2^{a_1 + \cdots + a_n}}$ whereas: If $[a_0; a_1, a_2, \ldots, a_m]$ is a continued fraction representation of a rational number x, then ${\rm ?}(x) = a_0 + 2 \sum_{n=1}^m \frac{(-1)^{n+1}}{2^{a_1 + \cdots + a_n}}$

## Intuitive explanation

To get some intuition for the definition above, consider the different ways of interpreting an infinite string of bits beginning with 0 as a real number in [0,1]. One obvious way to interpret such a string is to place a binary point after the first 0 and read the string as a binary expansion: thus, for instance, the string 001001001001001001001001... represents the binary number 0.010010010010..., or 2/7. Another interpretation views a string as the continued fraction [0; a1, a2, … ], where the integers ai are the run lengths in a run-length encoding of the string. The same example string 001001001001001001001001... then corresponds to $[0; 2, 1, 2, 1, 2, 1, \ldots] = \frac{\sqrt3-1}{2}$.
If the string ends in an infinitely long run of the same bit, we ignore it and terminate the representation; this is suggested by the formal "identity": [0; a1, … ,an, ∞] = [0; a1, … ,an+1/∞] = [0; a1, … ,an+0] = [0; a1, … ,an]. The effect of the question mark function on [0,1] can then be understood as mapping the second interpretation of a string to the first interpretation of the same string,[1][2] just as the Cantor function can be understood as mapping a triadic base 3 representation to a base 2 representation. Our example string gives the equality $?\left(\frac{\sqrt3-1}{2}\right)=\frac{2}{7}.$ ## Recursive definition for rational arguments For rational numbers in the unit interval, the function may also be defined recursively; if p/q and r/s are reduced fractions such that | psrq | = 1 (so that they are adjacent elements of a row of the Farey sequence) then[2] $?\left(\frac{p+r}{q+s}\right) = \frac12 \left(?\bigg(\frac pq\bigg) + {}?\bigg(\frac rs\bigg)\right)$ Using the base cases $?\left(\frac{0}{1}\right) = 0 \quad \mbox{ and } \quad ?\left(\frac{1}{1}\right)=1$ it is then possible to compute ?(x) for any rational x, starting with the Farey sequence of order 2, then 3, etc. If $p_{n-1}/q_{n-1}$ and $p_{n}/q_{n}$ are two successive convergents of a continued fraction, then the matrix $\begin{pmatrix} p_{n-1} & p_{n} \\ q_{n-1} & q_{n} \end{pmatrix}$ has determinant ±1. Such a matrix is an element of SL(2,Z), the group of two-by-two matrices with determinant ±1. This group is related to the modular group. ### Algorithm This recursive definition naturally lends itself to an algorithm for computing the function to any desired degree of accuracy for any real number, as the following C function demonstrates. The algorithm descends the Stern–Brocot tree in search of the input x, and sums the terms of the binary expansion of y = ?(x) on the way. 
As long as the loop invariant $qr-ps=1$ remains satisfied there is no need to reduce the fraction $\frac m n = \frac{p+r}{q+s},$ since it is already in lowest terms. Another invariant is $\frac p q \le x < \frac r s.$ The for loop in this program may be analyzed somewhat like a while loop, with the conditional break statements in the first three lines making up the condition. The only statements in the loop that can possibly affect the invariants are in the last two lines, and these can be shown to preserve the truth of both invariants as long as the first three lines have executed successfully without breaking out of the loop. A third invariant for the body of the loop (up to floating point precision) is $y \le \; ?(x) < y + d,$ but since d is halved at the beginning of the loop before any conditions are tested, our conclusion is only that $y \le \; ?(x) < y + 2d$ at the termination of the loop. To prove termination, it is sufficient to note that the sum $q+s$ increases by at least 1 with every iteration of the loop, and that the loop will terminate when this sum is too large to be represented in the primitive C data type long. However, in practice, the conditional break when "y+d==y" is what ensures the termination of the loop in a reasonable amount of time.

```c
/* Minkowski's question mark function */
double minkowski(double x) {
    long p = x;
    if ((double)p > x) --p;  /* p = floor(x) */
    long q = 1, r = p + 1, s = 1, m, n;
    double d = 1, y = p;
    if (x < (double)p || (p < 0) ^ (r <= 0))
        return x;  /* out of range ?(x) =~ x */
    for (;;) {  /* invariants: q*r-p*s==1 && (double)p/q <= x && x < (double)r/s */
        d /= 2;
        if (y + d == y) break;                    /* reached max possible precision */
        m = p + r; if ((m < 0) ^ (p < 0)) break;  /* sum overflowed */
        n = q + s; if (n < 0) break;              /* sum overflowed */
        if (x < (double)m / n)
            r = m, s = n;
        else
            y += d, p = m, q = n;
    }
    return y + d;  /* final round-off */
}
```

## Self-symmetry

The question mark is clearly visually self-similar.
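A Python port of the same Stern–Brocot descent may be easier to experiment with (a sketch; the name `question_mark` is ours, and the overflow guards are dropped because Python integers are unbounded, so only the precision break remains):

```python
import math

def question_mark(x):
    """Approximate ?(x) by descending the Stern-Brocot tree toward x."""
    p = math.floor(x)
    q, r, s = 1, p + 1, 1
    d, y = 1.0, float(p)
    while True:
        d /= 2
        if y + d == y:       # reached max possible double precision
            return y + d     # final round-off
        m, n = p + r, q + s  # mediant (p+r)/(q+s), already in lowest terms
        if x < m / n:
            r, s = m, n      # x lies below the mediant: go left
        else:
            y += d           # x lies at or above: emit a 1-bit of ?(x)
            p, q = m, n
```

Spot checks against known values: ?(1/2) = 1/2, ?(1/3) = 1/4, and ?((√3−1)/2) = 2/7 from the worked example above.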
A monoid of self-similarities may be generated by two operators S and R acting on the unit square and defined as follows: $\begin{array}{lcl} S(x, y) &=& \left( \frac{x}{x+1}, \frac{y}{2} \right) \\ R(x, y) &=& \left( 1-x, 1-y \right)\,. \end{array}$ Visually, S shrinks the unit square to its bottom-left quarter, while R performs a point reflection through its center. A point on the graph of ? has coordinates (x, ?(x)) for some x in the unit interval. Such a point is transformed by S and R into another point of the graph, because ? satisfies the following identities for all $x\in [0,1]$: $\begin{array}{lcl} ?\left(\frac{x}{x+1}\right) &=& \frac{?(x)}{2} \\ ?(1-x) &=& 1-?(x)\,. \end{array}$ These two operators may be repeatedly combined, forming a monoid. A general element of the monoid is then $S^{a_1} R S^{a_2} R S^{a_3} \cdots$ for positive integers $a_1, a_2, a_3, \ldots$. Each such element describes a self-similarity of the question mark function. This monoid is sometimes called the period-doubling monoid, and all period-doubling fractal curves have a self-symmetry described by it (the de Rham curve, of which the question mark is a special case, is a category of such curves). Note also that the elements of the monoid are in correspondence with the rationals, by means of the identification of $a_1, a_2, a_3, \ldots$ with the continued fraction $[0; a_1, a_2, a_3, \ldots]$. Since both $S: x \mapsto \frac{x}{x+1}$ and $T: x \mapsto 1-x$ are linear fractional transformations with integer coefficients, the monoid may be regarded as a subset of the modular group PSL(2,Z). ## Properties of ?(x) ?(x) − x The question mark function is a strictly increasing and continuous,[3] but not absolutely continuous function. The derivative vanishes on the rational numbers. There are several constructions for a measure that, when integrated, yields the question mark function. One such construction is obtained by measuring the density of the Farey numbers on the real number line. 
The question mark measure is the prototypical example of what are sometimes referred to as multi-fractal measures. The question mark function maps rational numbers to dyadic rational numbers, meaning those whose base two representation terminates, as may be proven by induction from the recursive construction outlined above. It maps quadratic irrationals to non-dyadic rational numbers. It is an odd function, and satisfies the functional equation ?(x + 1) = ?(x) + 1; consequently x → ?(x) − x is an odd periodic function with period one. If ?(x) is irrational, then x is either algebraic of degree greater than two, or transcendental. The question mark function has fixed points at 0, 1/2 and 1, and at least two more, symmetric about the midpoint. One is approximately 0.42037.[3] In 1943, Raphaël Salem raised the question of whether the Fourier–Stieltjes coefficients of the question mark function vanish at infinity.[4] In other words, he wanted to know whether or not $\lim_{n \to \infty}\int_0^{1}e^{2\pi inx}d?(x)=0.$ This was answered affirmatively by Jordan and Sahlsten,[5] as a special case of a result on Gibbs measures. The graph of the Minkowski question mark function is a special case of fractal curves known as de Rham curves.

## Conway box function

The function ?(x) is invertible, and the inverse function has also attracted the attention of various mathematicians, in particular John Conway, who discovered it independently, and whose notation for ?−1(x) is x with a box drawn around it. The box function can be computed as an encoding of the base two expansion of $(x-\lfloor x \rfloor)/2$, where $\lfloor x \rfloor$ denotes the floor function. To the right of the point, this will have n1 0s, followed by n2 1s, then n3 0s and so on. For $n_0 = \lfloor x \rfloor$, x = [n0; n1, n2, n3, … ], where the term on the right is a continued fraction.
2015-09-01T08:22:08
{ "domain": "wikipedia.org", "url": "https://en.wikipedia.org/wiki/Minkowski%27s_question_mark_function", "openwebmath_score": 0.8378725647926331, "openwebmath_perplexity": 504.5258046598727, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795091201804, "lm_q2_score": 0.6619228691808012, "lm_q1q2_score": 0.6533043084994886 }
https://www.esaral.com/q/the-sums-of-n-terms-of-two-arithmetic-progressions-are-in-the-ratio-5n-4-9n-6-find-the-ratio-of-their-18th-terms-84666/
The sums of n terms of two arithmetic progressions are in the ratio 5n + 4: 9n + 6. Find the ratio of their 18th terms. Question: The sums of $n$ terms of two arithmetic progressions are in the ratio $5 n+4: 9 n+6$. Find the ratio of their $18^{\text {th }}$ terms. Solution: Let $a_{1}, a_{2}$, and $d_{1}, d_{2}$ be the first terms and the common differences of the first and second arithmetic progression respectively. According to the given condition, $\frac{\text { Sum of } n \text { terms of first A.P. }}{\text { Sum of } n \text { terms of second A.P. }}=\frac{5 n+4}{9 n+6}$ $\Rightarrow \frac{\frac{n}{2}\left[2 a_{1}+(n-1) d_{1}\right]}{\frac{n}{2}\left[2 a_{2}+(n-1) d_{2}\right]}=\frac{5 n+4}{9 n+6}$ $\Rightarrow \frac{2 a_{1}+(n-1) d_{1}}{2 a_{2}+(n-1) d_{2}}=\frac{5 n+4}{9 n+6}$ (1) Substituting $n=35$ in (1), we obtain $\frac{2 a_{1}+34 d_{1}}{2 a_{2}+34 d_{2}}=\frac{5(35)+4}{9(35)+6}$ $\Rightarrow \frac{a_{1}+17 d_{1}}{a_{2}+17 d_{2}}=\frac{179}{321}$ …(2) $\frac{18^{\text {th }} \text { term of first A.P. }}{18^{\text {th }} \text { term of second A.P }}=\frac{a_{1}+17 d_{1}}{a_{2}+17 d_{2}}$ (3) From (2) and (3), we obtain $\frac{18^{\text {th }} \text { term of first A.P. }}{18^{\text {th }} \text { term of second A.P. }}=\frac{179}{321}$ Thus, the ratio of the $18^{\text {th }}$ terms of the two A.P.s is $179: 321$.
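One way to double-check the answer is to pick concrete progressions whose partial sums realize the given ratio, e.g. $S_n = \tfrac{n}{2}(5n+4)$ and $S_n = \tfrac{n}{2}(9n+6)$ (these correspond to $2a_1 = 9, d_1 = 5$ and $2a_2 = 15, d_2 = 9$ — an illustrative choice, not forced by the problem):

```python
from fractions import Fraction

def S1(n):  # partial sums of an AP with 2a = 9, d = 5
    return Fraction(n * (5 * n + 4), 2)

def S2(n):  # partial sums of an AP with 2a = 15, d = 9
    return Fraction(n * (9 * n + 6), 2)

# the ratio of sums is (5n+4):(9n+6) by construction
assert S1(7) / S2(7) == Fraction(5 * 7 + 4, 9 * 7 + 6)

# 18th term = S(18) - S(17); the ratio comes out 179:321 as derived
ratio = (S1(18) - S1(17)) / (S2(18) - S2(17))
assert ratio == Fraction(179, 321)
```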
2022-08-08T09:34:27
{ "domain": "esaral.com", "url": "https://www.esaral.com/q/the-sums-of-n-terms-of-two-arithmetic-progressions-are-in-the-ratio-5n-4-9n-6-find-the-ratio-of-their-18th-terms-84666/", "openwebmath_score": 0.9174430966377258, "openwebmath_perplexity": 277.03286829384814, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795091201804, "lm_q2_score": 0.6619228691808011, "lm_q1q2_score": 0.6533043084994885 }
http://mathhelpforum.com/math-topics/281977-i-need-help-circle-theorums.html
# Thread: I need help with circle theorems 1. ## I need help with circle theorems Can I get help on this question? It is a 6 marker 2. ## Re: I need help with circle theorems Actually, I'd start backward a bit. Angles ADO and ABO are both right angles. (Prove this!) Hence you can get angle DOB in terms of y. Can you fill in the steps? -Dan 3. ## Re: I need help with circle theorems It is really hard and I don't understand it 4. ## Re: I need help with circle theorems Another hint then: Knowing that angle ABO is a right angle, then you know that 2x + angle CBO + 90 degrees = 180 degrees. (Again, prove it.) So what is angle CBO in terms of x? Here's a site with a list of the circle theorems. Maybe having all of them in one spot might help. -Dan 5. ## Re: I need help with circle theorems Originally Posted by Shook It is really hard and I don't understand it Do you understand that $y=m(\angle BAD)=\frac{1}{2}\left( {arc(BCD) - arc(BD)} \right)~?$ The quadrilateral $ABOD$ contains two right angles & the sum of all its angles is $2\pi$. Therefore $m(\angle BOD)=\pi-y.$ Look at the figure: $m(\angle COD)=\pi-2x=m(arc DC)$; moreover since $2x=0.5(arc BC)$ then $4x=m(\angle COB)$. $m(\angle BOC)+m(\angle COD)+m(\angle DOB)=2\pi$ Now you finish and show us your work.
2019-03-24T23:14:44
{ "domain": "mathhelpforum.com", "url": "http://mathhelpforum.com/math-topics/281977-i-need-help-circle-theorums.html", "openwebmath_score": 0.6462113857269287, "openwebmath_perplexity": 1543.639337398935, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES\n\n", "lm_q1_score": 0.986979508737192, "lm_q2_score": 0.6619228691808012, "lm_q1q2_score": 0.6533043082459798 }
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-2-cumulative-review-exercises-page-435/2
## Precalculus (6th Edition) Blitzer The zeros of the function are $1\,\text{ and }-1$ with the least possible multiplicity of two. The zeros of a function are the points where its graph intersects the x-axis, i.e., the points whose y-coordinate is zero. Observe from the given graph that the function touches the x-axis at $x=1\,\text{ and }-1$. Thus, $1\,\text{ and }-1$ are the zeros of the function. This implies that $\left( x-1 \right)$ and $\left( x+1 \right)$ are factors of the function. The multiplicity of a zero r is the exponent of the corresponding factor of the function. If r is of even multiplicity, then the graph touches the x-axis and turns around at r. If r is of odd multiplicity, then the graph crosses the x-axis at r. Since the graph does not cross the x-axis but instead turns around at the zeros, the zeros have even multiplicities. Since the least even number is two, the factors $\left( x-1 \right)$ and $\left( x+1 \right)$ each have multiplicity at least two. Therefore, the zeros of the function are $1\,\text{ and }-1$ with the least possible multiplicity of two.
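For instance, the least-degree polynomial consistent with this description is $f(x)=(x-1)^2(x+1)^2$ (an illustrative choice on our part; the actual function behind the graph is not given). A quick numeric check confirms it touches the x-axis at $\pm 1$ without crossing:

```python
def f(x):
    # zeros at +1 and -1, each of multiplicity two (assumed model of the graph)
    return (x - 1)**2 * (x + 1)**2

for z in (1, -1):
    assert f(z) == 0
    # positive on both sides of each zero: the graph touches, never crosses
    assert f(z - 0.01) > 0 and f(z + 0.01) > 0
```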
2020-04-08T16:19:33
{ "domain": "gradesaver.com", "url": "https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-2-cumulative-review-exercises-page-435/2", "openwebmath_score": 0.8859764337539673, "openwebmath_perplexity": 110.13844678375489, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795087371921, "lm_q2_score": 0.6619228691808012, "lm_q1q2_score": 0.6533043082459798 }
http://tasks.illustrativemathematics.org/content-standards/HSA/SSE/B/3/tasks/167
# Increasing or Decreasing? Variation 2 Alignments to Content Standards: A-SSE.A.1.b A-SSE.B.3 Consider the expression $$\frac{R_1 + R_2}{R_1R_2}$$ where $R_1$ and $R_2$ are positive. Suppose we increase the value of $R_1$ while keeping $R_2$ constant. Find an equivalent expression whose structure makes clear whether the value of the expression increases, decreases, or stays the same. ## IM Commentary The purpose of this task is to help students see manipulation of expressions as an activity undertaken for a purpose. Variation 1 of this task presents a related more complex expression already in the correct form to answer the question. The expression arises in physics as the reciprocal of the combined resistance of two resistors in parallel. However, the context is not explicitly considered here. ## Solution We rewrite $$\frac{R_1+R_2}{R_1R_2}$$ in the form $$\frac{R_1}{R_1R_2} + \frac{R_2}{R_1R_2} = \frac{1}{R_2} + \frac{1}{R_1}.$$ Now the bigger a (positive) number is, the smaller its reciprocal is, and so the value of this expression decreases as $R_1$ increases.
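A numeric illustration of the conclusion (values chosen arbitrarily): with $R_2$ held fixed, the expression $\frac{1}{R_2} + \frac{1}{R_1}$ strictly decreases as $R_1$ increases.

```python
def combined(R1, R2):
    # (R1 + R2)/(R1*R2), which equals 1/R2 + 1/R1
    return (R1 + R2) / (R1 * R2)

R2 = 3.0  # held constant
values = [combined(R1, R2) for R1 in (1.0, 2.0, 4.0, 8.0)]
assert all(a > b for a, b in zip(values, values[1:]))  # strictly decreasing
```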
2022-05-28T17:44:14
{ "domain": "illustrativemathematics.org", "url": "http://tasks.illustrativemathematics.org/content-standards/HSA/SSE/B/3/tasks/167", "openwebmath_score": 0.8045588731765747, "openwebmath_perplexity": 382.0107134927103, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.986979508737192, "lm_q2_score": 0.6619228691808011, "lm_q1q2_score": 0.6533043082459797 }
http://notes.reasoning.page/html/difference-two-squares
# A calculus of the absurd ##### 4.3.3 Difference of two squares • Theorem 4.3.1 Let $$x, y \in \mathbb {R}$$, in which case $$\Big (x + y\Big )\Big (x + (-y)\Big ) \equiv x^2 - y^2$$ I have also seen this referred to as the "third binomial law", but I believe this terminology is non-standard in English-speaking countries. Proof: we use the standard method for proving identities (see Section 5.2). \begin{align} \Big (x + y\Big )\Big (x + (-y)\Big ) &= (x+y) \times (x) + (x+y) \times (-y) \\ &= x \times x + y \times x - x \times y - y \times y \\ &= x^2 - y^2 \end{align} This shows up often, and if you don't spot it (as I have done a few times) it often makes life very painful.
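A brute-force spot check of the identity over random real values (not a proof — just a sanity check of the algebra above):

```python
import random

random.seed(0)
for _ in range(100):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    # (x + y)(x - y) == x^2 - y^2, up to floating-point error
    assert abs((x + y) * (x - y) - (x * x - y * y)) < 1e-9
```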
2023-01-28T01:10:29
{ "domain": "reasoning.page", "url": "http://notes.reasoning.page/html/difference-two-squares", "openwebmath_score": 1.0000100135803223, "openwebmath_perplexity": 1142.854724104177, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.986979508737192, "lm_q2_score": 0.6619228691808011, "lm_q1q2_score": 0.6533043082459797 }
https://math.stackexchange.com/questions/1680513/latin-square-and-groups/1680529
# Latin square and groups From what I can gather, the multiplication table of each group is a Latin square. But I wonder, how can I give an example of a Latin square which isn't the table of a group operation? Consider the square with $1*1=2$, $2*2=3$, and $3*3=1$ (this diagonal in fact forces the rest of the square): $$\begin{array}{c|ccc} * & 1 & 2 & 3 \\ \hline 1 & 2 & 1 & 3 \\ 2 & 1 & 3 & 2 \\ 3 & 3 & 2 & 1 \end{array}$$ This cannot represent a group because there is no identity element: $1*1=2$, $2*2=3$, and $3*3=1$. The answer is yes; there are just a few (up to relabeling) $3\times 3$ Latin squares. One is the group multiplication table for the cyclic group of order 3. The other will be the multiplication table of what is called a quasigroup. These are algebraic objects interesting in their own right and even have their own representation theory. In this case, the idea is to try and make a Latin square where no element acts as an identity element. You will probably get a right (or left) identity, though.
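Such a square can be verified mechanically. Below is one concrete 3×3 square with $1*1=2$, $2*2=3$, $3*3=1$ (reconstructed here; that diagonal, together with the Latin property, determines the rest), checked to be a Latin square with no identity element:

```python
# Cayley-style table for the quasigroup: table[(a, b)] = a * b
table = {
    (1, 1): 2, (1, 2): 1, (1, 3): 3,
    (2, 1): 1, (2, 2): 3, (2, 3): 2,
    (3, 1): 3, (3, 2): 2, (3, 3): 1,
}
elems = (1, 2, 3)

# every row and every column is a permutation of {1,2,3} -> Latin square
for a in elems:
    assert sorted(table[(a, b)] for b in elems) == [1, 2, 3]
    assert sorted(table[(b, a)] for b in elems) == [1, 2, 3]

# no element acts as a two-sided identity, so this is not a group table
assert not any(all(table[(e, a)] == a and table[(a, e)] == a for a in elems)
               for e in elems)
```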
2019-07-23T18:31:17
{ "domain": "stackexchange.com", "url": "https://math.stackexchange.com/questions/1680513/latin-square-and-groups/1680529", "openwebmath_score": 0.9013772010803223, "openwebmath_perplexity": 84.38553016482483, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795083542036, "lm_q2_score": 0.6619228691808012, "lm_q1q2_score": 0.653304307992471 }
https://mathsgee.com/36607/mathbf-vectors-euclidean-product-mathbf-mathbf-mathbf-mathbf
Prove that if $\mathbf{u}$ and $\mathbf{v}$ are vectors in $R^{n}$ with the Euclidean inner product, then $$\mathbf{u} \cdot \mathbf{v}=\frac{1}{4}\|\mathbf{u}+\mathbf{v}\|^{2}-\frac{1}{4}\|\mathbf{u}-\mathbf{v}\|^{2}$$ Proof: $\|\mathbf{u}+\mathbf{v}\|^{2}=(\mathbf{u}+\mathbf{v}) \cdot(\mathbf{u}+\mathbf{v})=\|\mathbf{u}\|^{2}+2(\mathbf{u} \cdot \mathbf{v})+\|\mathbf{v}\|^{2}$ $\|\mathbf{u}-\mathbf{v}\|^{2}=(\mathbf{u}-\mathbf{v}) \cdot(\mathbf{u}-\mathbf{v})=\|\mathbf{u}\|^{2}-2(\mathbf{u} \cdot \mathbf{v})+\|\mathbf{v}\|^{2}$ Subtracting the second expansion from the first gives $\|\mathbf{u}+\mathbf{v}\|^{2}-\|\mathbf{u}-\mathbf{v}\|^{2}=4(\mathbf{u} \cdot \mathbf{v})$, and dividing by 4 yields the identity. by Diamond (88,832 points)
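A numeric check of this polarization identity on random vectors (an illustration, not part of the proof):

```python
import random

random.seed(1)
u = [random.uniform(-5, 5) for _ in range(4)]
v = [random.uniform(-5, 5) for _ in range(4)]

def norm_sq(w):
    return sum(a * a for a in w)

lhs = sum(a * b for a, b in zip(u, v))          # u . v
rhs = (norm_sq([a + b for a, b in zip(u, v)])   # ||u+v||^2 / 4
       - norm_sq([a - b for a, b in zip(u, v)])) / 4  # minus ||u-v||^2 / 4
assert abs(lhs - rhs) < 1e-9
```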
2023-03-23T11:33:54
{ "domain": "mathsgee.com", "url": "https://mathsgee.com/36607/mathbf-vectors-euclidean-product-mathbf-mathbf-mathbf-mathbf", "openwebmath_score": 0.7340888977050781, "openwebmath_perplexity": 13285.616398502954, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795083542036, "lm_q2_score": 0.6619228691808012, "lm_q1q2_score": 0.653304307992471 }
https://learn.careers360.com/jobs/question-a-whole-seller-allows-a-discount-of-20-on-the-list-price-to-retailer-the-retailer-sells-at-5-discount-on-the-list-price-if-the-customer-paid-rs-38-for-an-article-what-profit-is-made-by-the-retailer-8372/
Q # A wholesaler allows a discount of 20% on the list price to a retailer. The retailer sells at 5 A wholesaler allows a discount of 20% on the list price to a retailer. The retailer sells at a 5% discount on the list price. If the customer paid Rs. 38 for an article, what profit is made by the retailer? • Option 1) Rs. 10 • Option 2) Rs. 8 • Option 3) Rs. 6 • Option 4) Rs. 7 • Option 5) Rs. 12 Solution: Let the list price MP = 100. Then CP to the retailer = 80 and SP by the retailer = 95, so the profit is 15 per 100 of list price. The customer paid Rs. 38, which corresponds to SP = 95; hence a list price of 100 corresponds to Rs. 40. With MP = 40, profit = $\dpi{100} 40\times \frac{15}{100}=6$, i.e. Rs. 6.
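The same reasoning, run forward from the amount paid (a small sketch; variable names are ours):

```python
paid = 38                    # the customer pays the retailer's selling price
list_price = paid / 0.95     # retailer sells at 5% off the list price
cost = 0.80 * list_price     # retailer buys at 20% off the list price
profit = paid - cost
assert abs(profit - 6) < 1e-9   # Rs. 6, matching Option 3
```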
2020-01-24T16:28:21
{ "domain": "careers360.com", "url": "https://learn.careers360.com/jobs/question-a-whole-seller-allows-a-discount-of-20-on-the-list-price-to-retailer-the-retailer-sells-at-5-discount-on-the-list-price-if-the-customer-paid-rs-38-for-an-article-what-profit-is-made-by-the-retailer-8372/", "openwebmath_score": 0.4237094819545746, "openwebmath_perplexity": 10549.53866480456, "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9869795079712153, "lm_q2_score": 0.6619228691808011, "lm_q1q2_score": 0.6533043077389622 }