Datasets:
AI4M
Formal statement is: lemma holomorphic_on_paste_across_line: assumes S: "open S" and "d \<noteq> 0" and holf1: "f holomorphic_on (S \<inter> {z. d \<bullet> z < k})" and holf2: "f holomorphic_on (S \<inter> {z. k < d \<bullet> z})" and contf: "continuous_on S f" shows "f holomorphic_on S" Informal statement is: If $S$ is open, $d \neq 0$, $f$ is holomorphic on the two open parts of $S$ lying on either side of the line $\{z \mid d \bullet z = k\}$, and $f$ is continuous on $S$, then $f$ is holomorphic on all of $S$.
State Before: α : Type u_3 α' : Type ?u.4493903 β : Type u_2 β' : Type ?u.4493909 γ : Type ?u.4493912 E : Type ?u.4493915 inst✝⁹ : MeasurableSpace α inst✝⁸ : MeasurableSpace α' inst✝⁷ : MeasurableSpace β inst✝⁶ : MeasurableSpace β' inst✝⁵ : MeasurableSpace γ μ μ' : Measure α ν✝ ν' : Measure β τ : Measure γ inst✝⁴ : NormedAddCommGroup E inst✝³ : SigmaFinite ν✝ inst✝² : SigmaFinite μ ι : Type u_1 inst✝¹ : Finite ι ν : ι → Measure β inst✝ : ∀ (i : ι), SigmaFinite (ν i) ⊢ Measure.prod μ (sum ν) = sum fun i => Measure.prod μ (ν i) State After: α : Type u_3 α' : Type ?u.4493903 β : Type u_2 β' : Type ?u.4493909 γ : Type ?u.4493912 E : Type ?u.4493915 inst✝⁹ : MeasurableSpace α inst✝⁸ : MeasurableSpace α' inst✝⁷ : MeasurableSpace β inst✝⁶ : MeasurableSpace β' inst✝⁵ : MeasurableSpace γ μ μ' : Measure α ν✝ ν' : Measure β τ : Measure γ inst✝⁴ : NormedAddCommGroup E inst✝³ : SigmaFinite ν✝ inst✝² : SigmaFinite μ ι : Type u_1 inst✝¹ : Finite ι ν : ι → Measure β inst✝ : ∀ (i : ι), SigmaFinite (ν i) s : Set α t : Set β hs : MeasurableSet s ht : MeasurableSet t ⊢ ↑↑(sum fun i => Measure.prod μ (ν i)) (s ×ˢ t) = ↑↑μ s * ↑↑(sum ν) t Tactic: refine' prod_eq fun s t hs ht => _ State Before: α : Type u_3 α' : Type ?u.4493903 β : Type u_2 β' : Type ?u.4493909 γ : Type ?u.4493912 E : Type ?u.4493915 inst✝⁹ : MeasurableSpace α inst✝⁸ : MeasurableSpace α' inst✝⁷ : MeasurableSpace β inst✝⁶ : MeasurableSpace β' inst✝⁵ : MeasurableSpace γ μ μ' : Measure α ν✝ ν' : Measure β τ : Measure γ inst✝⁴ : NormedAddCommGroup E inst✝³ : SigmaFinite ν✝ inst✝² : SigmaFinite μ ι : Type u_1 inst✝¹ : Finite ι ν : ι → Measure β inst✝ : ∀ (i : ι), SigmaFinite (ν i) s : Set α t : Set β hs : MeasurableSet s ht : MeasurableSet t ⊢ ↑↑(sum fun i => Measure.prod μ (ν i)) (s ×ˢ t) = ↑↑μ s * ↑↑(sum ν) t State After: no goals Tactic: simp_rw [sum_apply _ (hs.prod ht), sum_apply _ ht, prod_prod, ENNReal.tsum_mul_left]
# Terms ## operator > ## $\color{red}{\nrightarrow}$ >> ### inner product( n right arrow ) >> ### $ \color{magenta}{q^{(n,r)}_{\nrightarrow}}$ > ## $\color{red}{\circlearrowleft}$ >> ### cross product ( circle arrow left ) >> ### $ \color{magenta}{q^{(n,r)}_{\circlearrowleft}}$ > ## $\color{red}{\looparrowright}$ >> ### projection ( loop arrow right ) >> ### $ \color{magenta}{q^{(n,r)}_{\looparrowright}}$ > ## $\color{red}{\pitchfork}$ >> ### rotation in special ($\pi/2$) ( pitch fork ) >>> ## $\color{magenta}{q^{(r',r)}_{\pitchfork \theta} = q^{(cos\theta \, r + sin\theta\, (n, r))}_{,\circlearrowright} = q^{(cos\theta \, r + sin\theta\, n\, r)}_{} }$ --- > ## $\color{red}{\curvearrowleft}$ >> ### rotation in general ( curve arrow left ) >>> ## $\color{magenta}{q^{(r',r)}_{\curvearrowleft \theta}} = q^{\upuparrows+\Rsh'}_{v} \\ \because q^{\upuparrows} = q^{(r,n)}_{\looparrowright} = q^{\frac{(r,n)\,n}{(n,n)}}_{\nrightarrow, \nrightarrow} = q^{(r,n)\, n}_{\nrightarrow} = q^{}_{} \\ \because q^{\Rsh}_{v} = q^{r-\upuparrows}_{v} \\ \because q^{\Rsh'}_{v} = q^{\Rsh',\Rsh}_{\pitchfork,\theta} = q^{cos\theta \, \Rsh - sin\theta \, (n,\Rsh)}_{\circlearrowright} = q^{cos\theta \big(r-\upuparrows \big)}_{v} + q^{sin\theta\big(n, (r-\upuparrows)\big)}_{\circlearrowright} = q^{cos\theta \big(r-\upuparrows \big)}_{v} + q^{sin\theta\big(n, r\big)}_{\circlearrowright} \\ = q^{(cos\theta) \big(r-\upuparrows \big)}_{v} + q^{(sin\theta) \big(n, r \big)}_{\circlearrowright} \\ q^{\upuparrows}_{v} + q^{cos\theta(r-\upuparrows) + \sin\theta(n,r)}_{\circlearrowright} = q^{(r,n)n + \cos\theta r - \cos\theta (r,n)n + \sin\theta(n,r)}_{\nrightarrow, \nrightarrow, \circlearrowright} \\ \large { \color {magenta}{\because q^{(r \cdot \hat{n}) \hat{n} \, (1-\cos\theta) \, + \, \cos\theta \, + \, \sin\theta \, (\hat{n} \times r)}_{\nrightarrow, \circlearrowright} }}$ --- ## symbols > ## $\color{red}{\upuparrows}$ >> ### rotation in general ( up up arrows ) >>> ### $\color{magenta}{q^{\upuparrows}_{v} = q^{r,n}_{\looparrowright}}$ > ## $\color{red}{\Rsh}$ >> ### rotation in general ( Rsh ) >>> ## $\color{magenta}{q^{\Rsh} = q^{r-\upuparrows}_{v}}$ > ## $\color{red}{\Rsh'}$ >> ### rotation in general ( Rsh ) >>> ## $\color{magenta}{q^{(\Rsh',\Rsh)}_{\pitchfork \theta}}$ # $\color{rec}{\text{Quaternion Matrix}}$ > ### $ (s + ai + bj + ck) =: (s,\vec{v}), \quad (s'+ a'i + b'j + c'k) =: (s', \vec{v'}) \\ (s,\vec{v})(s',\vec{v'}) = (ss' - (aa' + bb' + cc'), s(a'+b'+c') + s'(a+b+c) + (bc'-cb')_{jkj} + (ca'-ac')_{kik} + (ab'-ba')_{iji}) \\ \therefore \color{magenta}{\Big(ss' - \big(\vec{v} \cdot \vec{v'}\big), \; s\vec{v'} + a\vec{v} + \big(\vec{v} \times \vec{v'}\big) \Big)} \\ \begin{array}{l|r} s + ai + bj + ck & \\ s' + a'i + b'j + c'k & \times \\ \end{array} = \left( \begin{array}{l|l} ss' - aa' - bb' - cc' \\ as' + sa' - cb' + bc' \\ bs' + ca' + sb' - ac' \\ cs' - ba' + ab' + sc' \end{array}\right) \iff \left[ \begin{array}{rrrr} s & -a & -b & -c \\ a & s & -c & b \\ b & c & s & -a \\ c & -b & a & s \\ \end{array}\right] \left[ \begin{array}{rrrr} s' & -a' & -b' & -c' \\ a' & s' & -c' & b' \\ b' & c' & s' & -a' \\ c' & -b' & a' & s' \\ \end{array}\right] \iff \left[ \begin{array}{rrrr} s & -a & -b & -c \\ a & s & -c & b \\ b & c & s & -a \\ c & -b & a & s \\ \end{array}\right] \left[ \begin{array}{} s'\\ a'\\ b'\\ c'\\ \end{array}\right] $ --- > ### $ q = (q^s_{r}, q^v_{v}),\quad q' = (q^s_{r'}, q^{v}_{v'})\\ \big(q^s_r q^s_{r'} - \big( q^v_{v} \cdot q^{v}_{v'} \big), \; q^s_{r} q^{v}_{v'} + q^s_{r'}q^v_{v} + \big( 
q^v_{v} \times q^{v}_{v'} \big) \big) \\ \begin{array}{} q = (q^s_r,q^i_a, q^j_b,q^k_c)\\ q' = (q^s_{r'}, q^i_{a'}, q^j_{b'}, q^k_{c'}) \end{array} \left[ \begin {array}{}q^s_r \\ q^i_a \\ q^j_b \\ q^k_c \end{array} \right] \left[ \begin{array}{} q^s_{r'} \\ q^i_{a'} \\ q^j_{b'} \\ q^k_{c'} \end{array}\right] = \left( \begin{array}{l|l|l} q^s_r q^s_{r'} - q^i_a q^i_{a'} - q^j_b q^j_{b'} - q^k_c q^k_{c'} & scalar & ijk = -1 & i^2,j^2,k^2 = -1 \\ q^i_a q^s_{r'} + q^s_r q^i_{a'} - q^k_c q^j_{b'} + q^j_b q^k_{c'} & i & jk = i & kj = -i \\ q^j_b q^s_{r'} + q^k_c q^i_{a'} + q^s_r q^j_{b'} - q^i_a q^k_{c'} & j & ki = j & ik = -j \\ q^k_c q^s_{r'} - q^j_b q^i_{a'} + q^i_a q^j_{b'} + q^s_r q^k_{c'} & k & ij = k & ji = -1 \end{array}\right) $ --- > ### $ q = (q^r_s, q^v_v)= (q^r_s,q^{a}_i, q^{b}_j,q^{c}_k),\quad q' = (q^{r'}_s, q^{v'}_v)=(q^{r'}_s, q^{a'}_i, q^{b'}_j, q^{c'}_k) \\ \Big(q^1_s q^2_s - \big( q^1_v \cdot q^2_v \big), \; q^1_s q^2_v + q^2_s q^1_v + \big( q^1_v \times q^2_v \big) \Big) \\ \left[ \begin {array}{}q^r_s \\ q^{a}_i \\ q^{b}_j \\ q^{c}_k \end{array} \right] \left[ \begin{array}{} q^{r'}_s \\ q^{a'}_i \\ q^{b'}_j \\ q^{c'}_k \end{array}\right] = \left( \begin{array}{l|l|l} q^r_s q^{r'}_s - q^{a}_i q^{a'}_i - q^{b}_j q^{b'}_j - q^{c}_k q^{c'}_k & q^r_s q^{r'}_s - (q^{a}_i q^{a'}_i + q^{b}_j q^{b'}_j + q^{c}_k q^{c'}_k) & scalar\\ q^{a}_i q^{r'}_s + q^r_s q^{a'}_i - q^{c}_k q^{b'}_j + q^l_j q^{c'}_k & q^{a}_i q^{r'}_s + q^r_s q^{a'}_i + (q^l_j q^{c'}_k - q^{c}_k q^{b'}_j) & i\\ q^{b}_j q^{r'}_s + q^{c}_k q^{a'}_i + q^r_s q^{b'}_j - q^{a}_i q^{c'}_k & q^{b}_j q^{r'}_s + q^r_s q^{b'}_j + (q^{c}_k q^{a'}_i - q^{a}_i q^{c'}_k) & j\\ q^{c}_k q^{r'}_s - q^{b}_j q^{a'}_i + q^{a}_i q^{b'}_j + q^r_s q^{c'}_k & q^{c}_k q^{r'}_s + q^r_s q^{c'}_k + (q^{a}_i q^{b'}_j - q^{b}_j q^{a'}_i) & k \end{array}\right) \because ijk = -1,\quad i^2, j^2 , k^2 = -1,\quad ij = k ,jk = i,ki= j $ ```python # Quaternion Matrix import IPython as Ipy Ipy.display.YouTubeVideo('https://www.youtube.com/watch?v=3Ki14CsP_9k&list=PLpzmRsG7u_gr0FO12cBWj-15_e0yqQQ1U&index=2',width=800,height=640) ``` ```python import sympy as sm a,b,c,d = sm.symbols('a:d') a1,b1,c1,d1 = sm.symbols("a' b' c' d'") # M[1,1] -> 1st row dot 1st col = \sum_{j} A[1,j] x B[j,1] # M[2,1] -> 2nd row dot 1st col = \sum_{j} A[2,j] x B[j,1] # M[3,1] -> 3rd row dot 1st col = \sum_{j} A[3,j] x B[j,1] # M[4,1] -> 4th row dot 1st col = \sum_{j} A[4,j] x B[j,1] # M[1,2] -> 1st row dot 2st col = \sum_{j} A[1,j] x B[j,2] # M[2,2] -> 2nd row dot 2st col = \sum_{j} A[2,j] x B[j,2] # M[3,2] -> 3rd row dot 2st col = \sum_{j} A[3,j] x B[j,2] # M[4,2] -> 4th row dot 2st col = \sum_{j} A[4,j] x B[j,2] # M[i,j] -> \sum_{k} jA[i,k] x B[k,j] # quaternion muliply matrix # a + bi + cj + dk = (a,b,c,d) # a = scalar b = coeffient of i Q = sm.Matrix([ [a,-b,-c,-d], [b, a,-d, c], [c, d, a,-b], [d,-c, b, a]]) q = sm.Matrix([a1,b1,c1,d1]) # list(i for i in zip(q1,[1,2,3,4])) # dict(i for i in zip(q1,[1,2,3,4])) # scalar ### (M[1,1]= scalar positon | when a scalar join other scalar then 1 times) ### (M[2,1]= i positon | when a scalar join other i then 1 times) ### (M[3,1]= j positon | when a scalar join other j then 1 times) ### (M[4,1]= k positon | when a scalar join other k then 1 times) s = sm.Matrix([ [ 1, 0, 0, 0], [ 0, 1, 0, 0], [ 0, 0, 1, 0], [ 0, 0, 0, 1], ]) # i ### (M[1,1]= scalar positon | when i join other i then -1 times) ### (M[2,1]= i positon | when i join other scalar then 1 times) ### (M[3,1]= j positon | when i join other k then -1 times) ### (M[4,1]= k 
positon | when i join other j then 1 times) i = sm.Matrix([ [ 0,-1, 0, 0], [ 1, 0, 0, 0], [ 0, 0, 0,-1], [ 0, 0, 1, 0], ]) # j ### (M[1,1]= scalar positon | when j by other j then -1 times) ### (M[2,1]= i positon | when j join other k then 1 times) ### (M[3,1]= j positon | when j join other scalar then 1 times) ### (M[4,1]= k positon | when j join other i then -1 times) j = sm.Matrix([ [ 0, 0,-1, 0], [ 0, 0, 0, 1], [ 1, 0, 0, 0], [ 0,-1, 0, 0], ]) # k ### (M[1,1]= scalar positon | when k join other k then -1 times) ### (M[2,1]= i positon | when k join other j then -1 times) ### (M[3,1]= j positon | when k jion other i then 1 times) ### (M[4,1]= k positon | when k join other scalar then 1 times) k = sm.Matrix([ [ 0, 0, 0,-1], [ 0, 0,-1, 0], [ 0, 1, 0, 0], [ 1, 0, 0, 0], ]) q1 = (1*s+2*i+ 3*j+4*k) # minor i.minorMatrix(1,1) # adjoint(adjugate in sympy)-붙어있는 matrix i.adjugate() ``` $\displaystyle \left[\begin{matrix}0 & 1 & 0 & 0\\-1 & 0 & 0 & 0\\0 & 0 & 0 & 1\\0 & 0 & -1 & 0\end{matrix}\right]$ ```python R = sm.Matrix([ [a,b,c,d], [b,a,d,c], [c,d,a,b], [d,c,b,a]]) R1 = sm.Matrix([ [a1,b1,c1,d1], [b1,a1,d1,c1], [c1,d1,a1,b1], [d1,c1,b1,a1]]) R*R1 ``` $\displaystyle \left[\begin{matrix}a a' + b b' + c c' + d d' & a b' + a' b + c d' + c' d & a c' + a' c + b d' + b' d & a d' + a' d + b c' + b' c\\a b' + a' b + c d' + c' d & a a' + b b' + c c' + d d' & a d' + a' d + b c' + b' c & a c' + a' c + b d' + b' d\\a c' + a' c + b d' + b' d & a d' + a' d + b c' + b' c & a a' + b b' + c c' + d d' & a b' + a' b + c d' + c' d\\a d' + a' d + b c' + b' c & a c' + a' c + b d' + b' d & a b' + a' b + c d' + c' d & a a' + b b' + c c' + d d'\end{matrix}\right]$ ```python (1*s+2*i+3*j+4*k)*(1*s+2*i+4*j+9*k) ``` $\displaystyle \left[\begin{matrix}-51 & -15 & 3 & -15\\15 & -51 & -15 & -3\\-3 & 15 & -51 & -15\\15 & 3 & 15 & -51\end{matrix}\right]$ ```python (1*s+2*i+ 3*j+4*k)*sm.Matrix([1,2,4,9]) ``` $\displaystyle \left[\begin{matrix}-51\\15\\-3\\15\end{matrix}\right]$ ```python # conjugate q1_ =1*s - 2*i - 3*j - 4*k q1*q1_ ``` $\displaystyle \left[\begin{matrix}30 & 0 & 0 & 0\\0 & 30 & 0 & 0\\0 & 0 & 30 & 0\\0 & 0 & 0 & 30\end{matrix}\right]$ --- # $\color{red}{\text{Quaternion Multiplication}}$ > ### $ ( q^{r}_{s} q^{v}_{v}) \; ( q^{r'}_{s}, q^{v'}_{v} ) \\ \therefore \Big( q^{r}_{s} q^{r'}_{s} - \big( q^{v}_{v} \cdot q^{v'}_{v} \big), \; q^{r}_{s} q^{v'}_{v} + q^{r'}_{s} q^{v}_{v} + \big( q^v_{v} \times q^{v'}_{v} \big) \Big) $ --- > ### $ (0, q^{v}_{v}) \; (0,q^{v'}_{v}) \\ \therefore \Big ( - \big( q^{v}_{v} \cdot q^{v'}_{v} \big) , \quad q^{v}_{v} \times q^{v'}_{v} \; \Big) $ --- > ### $ \color{magenta}{\text{commutative of multiplication}} \\ (0,q^{v'}_{v}) \; (0, q^{v}_{v}) \\ \therefore \Big ( - \big( q^{v}_{v} \cdot q^{v'}_{v} \big) , \quad - \big( q^{v}_{v} \times q^{v'}_{v} \big) \; \Big) $ > ### $ q_1 = (0, \vec{qv_{1}}) \quad q_2 = (0,\vec{qv_{2}}) \\ \begin{array}{c|l} \\ &q_1 q_2 = \Big(-\vec{qv_{1}} \cdot \vec{qv_{2}}, \; \vec{qv_{1}}\times\vec{qv_{2}} \Big) \\ \pm & q_2 q_1 = \Big(-\vec{qv_{1}} \cdot \vec{qv_{2}}, \; - \vec{qv_{1}}\times\vec{qv_{2}} \Big) \\ & \hline \\ & q_1 q_2 + q_2 q_1 = \Big(-2\big(\vec{qv_1} \cdot \vec{qv_2},\; 0\big) \Big) = -2\Big( \vec{q_1} \cdot \vec{q_2}\Big)\\ & q_1 q_2 - q_2 q_1 = \Big(0,\; 2\big(\vec{qv_1} \times \vec{qv_2}\big) \Big) = 2\Big( \vec{qv_1} \times \vec{qv_2}\Big) \end{array} \\ \because \vec{qv_1} \times \vec{qv_2} = \frac{1}{2}\Big(q_1 q_2 - q_2 q_1 \Big) = \frac{1}{2}\Big[q_1,q_2\Big] $ ## 교환자 ($\color{magenta}{commutator}$) > ### $ \therefore \Big[q_1, 
q_2 \Big] = 2\big( \vec{qv_1} \times \vec{qv_2} \big) \quad s.t \;q_1=(0,\vec{qv_1}),q_2=(0,\vec{qv_2}) \\ \therefore \vec{qv_1} \times \vec{qv_2} = \frac{1}{2}\Big[q_1, q_2 \Big] \quad s.t \; \Big[A,B\Big] = AB - BA $ ```python # commutator import IPython as Ipy Ipy.display.IFrame('https://ko.wikipedia.org/wiki/%EA%B5%90%ED%99%98%EC%9E%90_(%ED%99%98%EB%A1%A0)#%EC%A0%95%EC%9D%98',width=800,height=600) ``` ```python Ipy.display.YouTubeVideo('https://www.youtube.com/watch?v=UaK2q22mMEg&list=PLpzmRsG7u_gr0FO12cBWj-15_e0yqQQ1U&index=4',width=800,height=600) ``` # $\color{red}{\text{Rotation}}$ of special angle > ### $ q_{r} = (0, q^{r}_v) \\ \theta = \text{Rotation angle} \\ q_{r'} = (r0, q^{r'}_v) \\ q_a = (0, q^a_v) \quad \text{rotation axis} \\ q_n = (0, q^n_v) \quad \text{rotation axis normal vector} \\ q^{r',r}_{\looparrowright} = q_{\perp} = (0, {\large \frac{ q^{r',r}_{\nrightarrow}} { q^{r,r}_{\nrightarrow}} q^{r}_{v}}) = (0, \; {\large q^{r',}_{\nrightarrow} \frac{q^{r}_v \; q^{r}_v}{q^{r,r}_{\nrightarrow}}}) = (0, \; {\large q^{r'}_v \cdot \frac{q^{r}_v \; q^{r}_v}{q^{r}_v \cdot q^{r}_v}}) = (0, \; {\large \frac{cos(\theta)\;||q^{r'}_v||\; ||q^{r}_v|| \; q^{r}_v}{||q^{r}_v||^2}}) = (0,\; cos(\theta)q^{r}_v)\; \because ||q^{r}_v|| = ||q^{r'}_v|| \\ q^{r',}_{\looparrowright} q^{n,r}_{\circlearrowright} = (0,{\large \frac{q^{r',}_{\nrightarrow}( q^{n,r}_{\circlearrowright})(q^{n,r}_{\circlearrowright})}{q^{(n,r),(n,r)}_{\circlearrowright \nrightarrow \circlearrowright}}}) = (0,q^{r',}_{\nrightarrow}{\large \frac{ q^{(n,r)(n,r)}_{\circlearrowright \circlearrowright}}{q^{(n,r)\nrightarrow (n,r)}_{\circlearrowright \circlearrowright}}}) = (0,{\large \frac{q^{r'}_v \cdot (q_n \times q_{r})(q_n \times q_{r})}{(q_n \times q_{r})\cdot (q_n \times q_{r})}}) = \cos(90-\theta)(q^n_{v} \times q^{r}_v) = \sin(\theta)(q^n_{v} \times q^{r}_v) \; \because ||(q^n_v \times q^{r}_v)|| = ||q^{r}_v||,\; ||q^n_v|| = 1 $ --- > # $ \because \color{red}{q^{r'}_v = q^{(r',r)}_{\perp \theta} = q^{(r',r)}_{\looparrowright} + q^{r \looparrowright (n,r)}_{,\circlearrowright}} $ > ### $ \begin{array}{} cos(\theta)q^{r}_v + q^{r\looparrowright (n,r)}_{,\circlearrowright} \end{array} \left \{ \begin{array}{} \because || q^{r'}_n || = || q^{r}_v || = || q^{(n,r)}_{\circlearrowright} ||, || q^n_v || = 1 \\ || q^{(v,(n,r))}_{\looparrowright \circlearrowright}|| = || \sin (\theta) q^{r'}_v || = || \cos(90-\theta)q^{r'}_v || = || \sin(\theta)q^{r}_v || = || sin(\theta) q^{(n,r)}_{\circlearrowright} || \\ \therefore q^{(r',(n,r))}_{\looparrowright, \circlearrowright} = sin(\theta)q^{(n,r)}_{\circlearrowright} \\ \because q^n q^{r} = (-q^{(n,r)}_{\looparrowright}, q^{(n,r)}_{\circlearrowright}) = q^{(n,r)}_{\circlearrowright} \;| q^{(n,r)}_{\nrightarrow} = 0 \\ \therefore q^{(r \looparrowright (n,r)}_{\circlearrowright} = sin(\theta) q^n_v q^{r}_v \end{array} \right . 
\\ \therefore cos(\theta)q^{r}_v + sin(\theta)q^n_v q^{r}_v $ > ### $ \cos(\theta)q^r_v + \sin(\theta)q^n_v q^r_v = \big( \cos(\theta) + \sin(\theta)q^n_v \big) q^r_v \\ \because q^n = (0, q^n_v), (q^n)^2 = (0,q^n_v)(0,q^n_v) = (-(q^n_v \cdot q^n_v), q^n_v \times q^n_v) = -1 $ > ## $ \color{red}{\therefore \exp^{\theta q^n_v} q^r_v}$ ```python # Rotate q_1 = (1, -1, 0) by \pi/3 radian about the axix q_{axix} = (1,1,1) # qr = cos(\theta)q1 + sin(\theta)(\hat{qn} x q1) # = cos()qv1 + sin()qn*q1 \because (0,qvn)(0,qv1) = (-qvn \dot qv1, qvn x qv1) /cz qvn \perp qv1 \thf (0,qvn x qv1) = qvn x qv1 # = cos()qv1 + sin()((qvn*qv1) # = ( cos() + sin()qvn )qv1 # = \cz qvn^2 = -1 qvn*qvn = (0,\hat{qvn})(0,\hat{qvn}) = (-(qvn \cdot qvn), qvn x qvn) = (-1, 0) = -1 i = sm.Matrix([[0,-1,0,0], [1,0,0,0], [0,0,0,-1], [0,0,1,0]]) j = sm.Matrix([[0,0,-1,0], [0,0,0,1], [1,0,0,0], [0,-1,0,0]]) k = sm.Matrix([[0,0,0,-1], [0,0,-1,0], [0,1,0,0], [1,0,0,0]]) q0 = i - j qa = i + j + k theta = sm.pi/3 qn = qa/qa.col(0).norm() (sm.cos(theta)*q0 + sm.sin(theta)*qn*q0).col(0) ``` $\displaystyle \left[\begin{matrix}0\\1\\0\\-1\end{matrix}\right]$ ```python # Rotate q_1 = (1, -1, 0) by \pi/3 radian about the axix q_{axix} = (1,1,1) # qr = cos(\theta)q1 + sin(\theta)(\hat{qn} x q1) # = cos()qv1 + sin()qn*q1 \because (0,qvn)(0,qv1) = (-qvn \dot qv1, qvn x qv1) /cz qvn \perp qv1 \thf (0,qvn x qv1) = qvn x qv1 # = cos()qv1 + sin()((qvn*qv1) # = ( cos() + sin()qvn )qv1 # = \cz qvn^2 = -1 qvn*qvn = (0,\hat{qvn})(0,\hat{qvn}) = (-(qvn \cdot qvn), qvn x qvn) = (-1, 0) = -1 import sympy as sm q1 = sm.Matrix([0,1,-1,0]) qa = sm.Matrix([0,1,1,1]) theta = sm.pi/3 l9 = 0 for l8 in qa: l9 += sm.sqrt(l8) qn = (1/sm.sqrt(l9))*qa sm.cos(theta)*q1 nc1 = qn[1:,0].cross(q1[1:,0]) sm.cos(theta)*q1[1:,:] + sm.sin(theta)*nc1 ``` $\displaystyle \sqrt{3}$ --- # Rotation in general $\big( \color{red}{\text{Rodrigues Rotation}} = \color{red}{q^{(r',r)}_{\curvearrowleft \theta}}\big)$ > ### $ q^0 = (0,q^0_v)\\ \theta = \text{rotation angle by radian} \\ q^{r}= q^{r,0}_{\curvearrowleft \theta} = (0,q^r_{v}) \\ q^{axis} = (0,q^{axis}_v)\\ q^{n} = (0,q^{axis}_\hat{v})\\ q^{\upuparrows} = (0, q^{(0,q^{n}_{v})}_{\looparrowleft}) \\ q^{\Rsh} = (0, q^{(0-\upuparrows)} ) \\ q^{\Rsh'} = q^{(\Rsh',\Rsh)}_{\perp,\theta} \\ q_{\perp} = (0,q^v_{\perp}), \quad q'_{\perp} = (0,q'^v_{\perp}) \\ q_{\ulcorner} = (0,q^v_{\ulcorner}), \quad q'_{\ulcorner} = (0,q'^v_{\ulcorner}) \\ q_{\parallel} = (0,q^v_{\parallel})$ > ### $ q_r = q^v_{\parallel} + q'^{v}_{\ulcorner} \left \{ \begin {array}{} q'^{v}_{\ulcorner} = {\large \exp^{\theta \hat{q}^v_{axis}}} q^v_{\ulcorner} = {\large \exp^{\theta q^v_n}q^v_{\perp}}\\ q'^{v}_{\perp} = cos\theta\;\big( q^v_{\perp} \big) + sin\theta \;\big(q^v_{\parallel} \times q^v_{\perp}\big) = cos\theta\;(q^v_0 - q^v_{\parallel}) + sin\theta\;q^v_n \times (q^v_0 - q^v_{\parallel}) = cos\theta \, (q^v_{0} - q^v_{\parallel}) + sin\theta\; q^v_n \times q^v_0 \\ \quad = cos\theta\; q^v_{0} - cos\theta \, q^v_{\parallel} + sin \theta q^v_n q^v_{0}\\ q^v_{\ulcorner} = \big( q^v_0 - q^v_{\parallel}\big) = q^v_{\perp} = \big(q^v_0 - q^v_{\parallel}\big) \\ q^v_{\parallel} = {\large q^v_0 \cdot \frac{q^v_{n} q^v_n}{q^v_n \cdot q^v_n}} = q^v_0 \cdot q^v_n q^v_n \end{array}\right . 
$

> ### $ \therefore q^v_r = q^v_{\parallel} + cos\theta\; q^v_{0} - cos\theta\; q^v_{\parallel} + sin\theta\; q^v_n q^v_{0} \\ \quad = (1 - cos\theta)q^v_{\parallel} +cos\theta\; q^v_0 + sin\theta\; q^v_n q^v_0 \\ \quad = (1 - cos\theta)\big(q^v_0 \cdot q^v_n q^v_n \big) +cos\theta\; q^v_0 + sin\theta\; q^v_n q^v_0 $

> # $ q^v_{\parallel} = \frac{q^v_0 \cdot q^v_{a}\, q^v_{a}}{q^v_{a}\cdot q^v_{a}} = q^v_0 \cdot q^v_n\, q^v_n $

> # $ q^v_{\lceil} = q^v_0 - q^v_{\parallel} $

> # $ q^v_{r\lceil} = \cos(\theta)q^v_{\lceil} + \sin(\theta)(q^v_{n}\times q^v_{\lceil}) $

```python
# plot the base plane, the rotation axis and the vectors used above
import matplotlib.pyplot as plt
%matplotlib widget
import numpy as np
import sympy as sm

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
xi = np.linspace(-1.5,1.5,100)
xi,yi = np.meshgrid(xi,xi)
zi = np.zeros_like(xi)
ax.plot_surface(xi,yi,zi,alpha=0.3)
ax.set_xlim3d(-2,2)
ax.set_ylim3d(-2,2)
ax.set_zlim3d(-2,2)
ax.set_xlabel('x')
ax.quiver([0],[0],[0],[0],[0],[2],color='r',arrow_length_ratio=0.1)
ax.quiver([0],[0],[0],[1],[-1],[1],color='r',arrow_length_ratio=0.1)
ax.quiver([0],[0],[0],[1],[1],[0],color='r',arrow_length_ratio=0.1)
ax.quiver([0],[0],[0],[1],[1],[1],color='r',arrow_length_ratio=0.1)
```
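The final rotation formula derived above, $q^v_r = (1-\cos\theta)\,(q^v_0 \cdot q^v_n)\,q^v_n + \cos\theta\, q^v_0 + \sin\theta\,(q^v_n \times q^v_0)$, can be checked numerically. The following cell is a minimal sketch added here (not part of the original notebook); the helper name `rodrigues` is chosen for illustration. It reproduces the result $(1, 0, -1)$ obtained in the SymPy cells above for rotating $(1,-1,0)$ by $\pi/3$ about the axis $(1,1,1)$.

```python
# Minimal numerical check of the Rodrigues rotation formula derived above
# (a sketch added for illustration, not part of the original notebook).
import numpy as np

def rodrigues(v, axis, theta):
    """Rotate vector v by angle theta (radians) about the given axis."""
    n = np.asarray(axis, dtype=float)
    n = n / np.linalg.norm(n)          # unit rotation axis n-hat
    v = np.asarray(v, dtype=float)
    return ((1 - np.cos(theta)) * np.dot(v, n) * n
            + np.cos(theta) * v
            + np.sin(theta) * np.cross(n, v))

# rotate (1, -1, 0) by pi/3 about the axis (1, 1, 1)
print(rodrigues([1, -1, 0], [1, 1, 1], np.pi/3))   # approximately [ 1.  0. -1.]
```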
module BoehmBerarducci

%default total

-- NOTE: Issues with scoped implicits:
-- https://github.com/idris-lang/Idris-dev/issues/2346

NatQ : Type
NatQ = {A : Type} -> (A -> A) -> A -> A

unNatQ : {A : Type} -> (A -> A) -> A -> NatQ -> A
unNatQ f a q = q f a

succQ : NatQ -> NatQ
succQ q = \f, a => f (q f a)

zeroQ : NatQ
zeroQ = \f, a => a

fromNatQ : NatQ -> Nat
fromNatQ q = unNatQ S Z q

-- NOTE: Issue #2346 / 1
toNatQ : Nat -> NatQ
toNatQ (S n) = succQ (toNatQ n)
toNatQ Z = zeroQ

iterated : Nat -> (a -> a) -> a -> a
iterated (S n) f a = f (iterated n f a)
iterated Z f a = a

test_iterated : (n : Nat) -> iterated n S Z = n
test_iterated (S n) = rewrite test_iterated n in Refl
test_iterated Z = Refl

-- NOTE: Issue #2346 / 1
-- test_fromNatQ : (n : Nat) -> fromNatQ (iterated n succQ zeroQ) = n
-- test_fromNatQ (S n) = rewrite test_fromNatQ n in Refl
-- test_fromNatQ Z = Refl

-- TODO: Unknown issue
-- test_toNatQ : (n : Nat) -> toNatQ n = iterated n succQ zeroQ
-- test_toNatQ (S n) = rewrite test_toNatQ n in Refl
-- test_toNatQ Z = Refl

ListQ : Type -> Type
ListQ A = {B : Type} -> (A -> B -> B) -> B -> B

unListQ : {A, B : Type} -> (A -> B -> B) -> B -> ListQ A -> B
unListQ f b q = q f b

consQ : {A : Type} -> A -> ListQ A -> ListQ A
consQ a q = \f, b => f a (q f b)

nilQ : {A : Type} -> ListQ A
nilQ = \f, b => b

fromListQ : {A : Type} -> ListQ A -> List A
fromListQ q = unListQ (::) [] q

-- NOTE: Issue #2346 / 1
toListQ : {A : Type} -> List A -> ListQ A
toListQ (a :: aa) = consQ a (toListQ aa)
toListQ [] = nilQ
## Linear Regression
<br/>

Linear regression is a linear approach for **modelling** the relationship between a scalar **dependent variable y** and one or more **independent variables denoted X**: existing data are used to estimate the response for unseen data. The linear regression model assumes that the relationship between the dependent variable **y_i** and the p-vector of regressors **x_i** is linear.

$$y_{i} = \beta _{0}1 + \beta _{1}x_{i1} + ... + \beta _{p}x_{ip} + \varepsilon _{i} = x_{i}^{T}\beta + \varepsilon _{i}$$

where $^{T}$ denotes the transpose, so that $x_{i}^{T}\beta$ is the inner product between the vectors $x_{i}$ and $\beta$. These equations can be stacked together and written in vector form as:

$$ y = X\beta + \varepsilon $$

where,<br/>
$ y=\begin{pmatrix} y_{1}\\ y_{2}\\ ...\\ y_{n} \end{pmatrix} $ , $X=\begin{pmatrix} x_{1}^{T}\\ x_{2}^{T}\\ ...\\ x_{n}^{T}\end{pmatrix} =\begin{pmatrix} 1 &x _{11} & ... &x _{1p} \\ 1& x _{21} & ... & x _{2p}\\ ...& ... & ... & ...\\ 1& x _{n1}& ... & x _{np} \end{pmatrix}$ , <br/>
$\beta = \begin{pmatrix} \beta _{0}\\ \beta _{1}\\ \beta _{2}\\ ...\\ \beta _{p} \end{pmatrix}$ , and $\varepsilon = \begin{pmatrix} \varepsilon _{1}\\ \varepsilon _{2}\\ ...\\ \varepsilon _{n} \end{pmatrix}$

- $y_{i}$ is called the dependent variable.
- $x_{i}$ is called the independent variable.
- $\beta$ is a (p+1)-dimensional **parameter vector**, where $\beta _{0}$ is the intercept (constant/offset) term. Its elements are also called effects, and their estimates are called "estimated effects" or regression coefficients.
- $\varepsilon _{i}$ is called the _error term_ or _disturbance term_; it is the difference between the observed value $y_{i}$ and the model value $x_{i}\beta$,
$$\varepsilon _{i} = y_{i} - x_{i}\beta$$
The error terms are assumed to satisfy:
1. they are independent
2. they are normally distributed

#### Probability density function:
$$p(\varepsilon _{i}) = \frac{1}{\sqrt{2\pi \sigma ^{2}}} exp(-\frac{\varepsilon _{i}^{2}}{2\sigma ^{2}})$$
$$\Rightarrow p(y_{i}|x_{i},\beta) = \frac{1}{\sqrt{2\pi \sigma^{2}}} exp(-\frac{(y_{i}-x_{i}\beta)^{2}}{2\sigma ^{2}})$$

#### Likelihood estimation: use the data to estimate the parameters that make the observations most probable (equivalently, that minimize $\varepsilon$)
$$Likelihood(\beta ) = \prod_{i=1}^{m}p(y_{i}|x_{i},\beta) =\prod_{i=1}^{m} \frac{1}{\sqrt{2\pi \sigma^{2}}} exp(-\frac{(y_{i}-x_{i}\beta)^{2}}{2\sigma ^{2}})$$

#### Logarithm of the likelihood
$$log\, L(\beta ) = log\, \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi \sigma^{2}}} exp(-\frac{(y_{i}-x_{i}\beta)^{2}}{2\sigma ^{2}})$$
$$=\sum_{i=1}^{m} log\, \frac{1}{\sqrt{2\pi \sigma^{2}}} exp(-\frac{(y_{i}-x_{i}\beta)^{2}}{2\sigma ^{2}})$$
$$= m\, log\, \frac{1}{\sqrt{2\pi \sigma^{2}}} - \frac{1}{\sigma^{2}}\cdot \frac{1}{2}\sum_{i=1}^{m}(y_{i}-x_{i}\beta)^{2}$$

#### Least squares
Maximizing the log-likelihood is therefore equivalent to minimizing
$$J(\beta ) = \frac{1}{2}\sum_{i=1}^{m}(y_{i}-x_{i}\beta)^{2}$$

#### Objective function
$$ \begin{equation} \begin{split} J(\beta )&= \frac{1}{2}\sum_{i=1}^{m}(x_{i}\beta-y_{i})^{2} \\ &= \frac{1}{2}(X\beta -y)^{T}(X\beta -y) \\ &= \frac{1}{2}(\beta ^{T}X ^{T} - y ^{T})(X\beta -y) \\ &= \frac{1}{2}(\beta ^{T}X ^{T}X\beta -\beta ^{T}X ^{T}y- y ^{T}X\beta+y ^{T}y) \\ \end{split} \end{equation} $$

#### Partial derivatives $\triangledown_{\beta} J(\beta)$
$$ \begin{equation} \begin{split} \triangledown_{\beta } J(\beta ) &=\triangledown_{\beta } (\frac{1}{2}(\beta ^{T}X ^{T}X\beta -\beta ^{T}X ^{T}y- y ^{T}X\beta+y ^{T}y)) \\ &= \frac{1}{2}(2X ^{T}X\beta - X ^{T}y - (y^{T}X)^{T}) \\ &= X^{T}X\beta - X^{T}y \end{split} \end{equation} $$

Setting $\triangledown_{\beta} J(\beta) = 0$ gives:
$$\beta = (X^{T}X)^{-1}\,X^{T}y$$

#### Evaluation:
In statistics, the coefficient of determination, denoted $R^2$ or $r^2$ and pronounced "R squared", is the proportion of the variance in the dependent variable that is predictable from the independent variable(s):
$$R^{2} = 1 - \frac{\sum_{i=1}^{m}(\hat{y_{i}}-y_{i})^{2}}{\sum_{i=1}^{m}(y_{i} - \overline{y})^{2}}$$
where, <br />
$\hat{y_{i}}$ is the estimated y <br />
$\overline{y}$ is the mean of y
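The closed-form solution $\beta = (X^TX)^{-1}X^Ty$ and the $R^2$ score above can be checked numerically. The cell below is a minimal sketch added here (not part of the original notes); it uses NumPy on synthetic data, and the names `X_raw`, `beta_hat`, and `r2` are chosen purely for illustration.

```python
# Sketch: ordinary least squares via the normal equations, plus R^2.
import numpy as np

rng = np.random.default_rng(0)

# synthetic data: y = 2 + 3*x1 - 1*x2 + noise
n, p = 200, 2
X_raw = rng.normal(size=(n, p))
eps = rng.normal(scale=0.5, size=n)
y = 2 + 3*X_raw[:, 0] - 1*X_raw[:, 1] + eps

# design matrix with a leading column of ones for the intercept beta_0
X = np.hstack([np.ones((n, 1)), X_raw])

# normal equations: beta = (X^T X)^{-1} X^T y (solved without forming the inverse)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# coefficient of determination R^2
y_hat = X @ beta_hat
r2 = 1 - np.sum((y_hat - y)**2) / np.sum((y - y.mean())**2)

print(beta_hat)  # approximately [2, 3, -1]
print(r2)        # close to 1 for this low-noise example
```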
The sphere of radius $r$ around a point $a$ is bounded.
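A one-line justification (a sketch added here; it is not part of the source statement): the sphere is contained in the closed ball of the same radius, and any subset of a ball is bounded,
$$S(a,r) = \{x \mid \operatorname{dist}(x,a) = r\} \subseteq \{x \mid \operatorname{dist}(x,a) \le r\} = \overline{B}(a,r).$$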
# Spectral Estimation of Random Signals *This jupyter/Python notebook is part of a [collection of notebooks](../index.ipynb) in the masters module [Digital Signal Processing](http://www.int.uni-rostock.de/Digitale-Signalverarbeitung.48.0.html), Comunications Engineering, Universität Rostock. Please direct questions and suggestions to <mailto:[email protected]>.* ## Parametric Methods ### Motivation Non-parametric methods for the estimation of the power spectral density (PSD), like the [periodogram](periodogram.ipynb) or [Welch's method](welch_method.ipynb), don't rely on a-priori information about the process generating the random signal. Often some a-priori information is available that can be used to formulate a parametric model of the random process. The goal is then to estimate these parameters in order to characterize the random signal. Such techniques are known as *[parametric methods](https://en.wikipedia.org/wiki/Spectral_density_estimation#Parametric_estimation)* or *model-based methods*. The incorporation of a-priori knowledge can improve the estimation of the PSD significantly, as long as the underlying model is a valid description of the random process. The parametric model of the random process can also be used to generate random signals with a desired PSD. ### Process Models For the remainder we assume wide-sense stationary real-valued random processes. For many applications the process can be modeled by a linear-time invariant (LTI) system where $n[k]$ is [white noise](../random_signals/white_noise.ipynb) and $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ denotes the transfer function of the system. In general, the random signal $x[k]$ will be correlated as a result of the processing of the uncorrelated input signal $n[k]$ by the system $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$. Due to the white noise assumption $\Phi_{nn}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = N_0$, the PSD of the random process is given as \begin{equation} \Phi_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = N_0 \cdot | H(\mathrm{e}^{\,\mathrm{j}\,\Omega}) |^2 \end{equation} Parametric methods model the system $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ by a limited number of parameters. These parameters are then estimated from $x[k]$, providing an estimate $\hat{H}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of the transfer function. This estimate is then used to calculate the desired estimate $\hat{\Phi}_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of the PSD. #### Autoregressive model The [autoregressive](https://en.wikipedia.org/wiki/Autoregressive_model) (AR) model assumes a recursive system with a direct path. Its output relation is given as \begin{equation} x[k] = \sum_{n=1}^{N} a_n \cdot x[k-n] + n[k] \end{equation} where $a_n$ denote the coefficients of the recursive path and $N$ the order of the model. Its system function $H(z)$ is derived by $z$-transformation of the output relation \begin{equation} H(z) = \frac{1}{1 - \sum_{n=1}^{N} a_n z^{-n}} \end{equation} Hence, the AR model is a pole-only model of the system. #### Moving average model The [moving average](https://en.wikipedia.org/wiki/Moving-average_model) (MA) model assumes a non-recursive system. The output relation is given as \begin{equation} x[k] = \sum_{m=0}^{M-1} b_m \cdot n[k-m] = h[k] * n[k] \end{equation} with the impulse response of the system $h[k] = [ b_0, b_1, \dots, b_{M-1} ]$. The MA model is a finite impulse response (FIR) model of the random process. 
Its system function is given as \begin{equation} H(z) = \mathcal{Z} \{ h[k] \} = \sum_{m=0}^{M-1} b_m \; z^{-m} \end{equation} #### Autoregressive moving average model The [autoregressive moving average](https://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model) (ARMA) model is a combination of the AR and MA model. It constitutes a general linear process model. Its output relation is given as \begin{equation} x[k] = \sum_{n=1}^{N} a_n \cdot x[k-n] + \sum_{m=0}^{M-1} b_m \cdot n[k-m] \end{equation} Its system function reads \begin{equation} H(z) = \frac{\sum_{m=0}^{M-1} b_m \; z^{-m}}{1 - \sum_{n=1}^{N} a_n z^{-n}} \end{equation} ### Parametric Spectral Estimation The models above describe the synthesis of the samples $x[k]$ from the white noise $n[k]$. For spectral estimation only the random signal $x[k]$ is known and we are aiming at estimating the parameters of the model. This can be achieved by determining an analyzing system $G(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ such to decorrelate the signal $x[k]$ where $e[k]$ should be white noise. Due to its desired operation, the filter $G(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ is also denoted as *whitening filter*. The optimal filter $G(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ is given by the inverse system $\frac{1}{H(\mathrm{e}^{\,\mathrm{j}\,\Omega})}$. However, $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ is in general not known. But this nevertheless implies that our linear process model of $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ also applies to $G(\mathrm{e}^{\,\mathrm{j}\,\Omega})$. Various techniques have been developed to estimate the parameters of the filter $G(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ such that $e[k]$ becomes decorrelated. For instance, by expressing the auto-correlation function (ACF) $\varphi_{xx}[\kappa]$ in terms of the model parameters and solving with respect to these. The underlying set of equations are known as [Yule-Walker equations](https://en.wikipedia.org/wiki/Autoregressive_model#Yule-Walker_equations). Once the model parameters have been estimated, these can be used to calculate an estimate $\hat{G}(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ of the analysis system. The desired estimate of the PSD is then given as \begin{equation} \hat{\Phi}_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \frac{\Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega})}{|\hat{G}(\mathrm{e}^{\,\mathrm{j}\,\Omega})|^2} \end{equation} where if $e[k]$ is white noise, $\Phi_{ee}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = N_0$. ### Example In the following example $n[k]$ is drawn from normal distributed white noise with $N_0 = 1$. The Yule-Walker equations are used to estimate the parameters of an AR model of $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$. The implementation provided by `statsmodels.api.regression.yule_walker` returns the estimated AR coefficients of the system $H(\mathrm{e}^{\,\mathrm{j}\,\Omega})$. These parameters are then used to numerically evaluate the estimated transfer function, resulting in $\hat{\Phi}_{xx}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = 1 \cdot \vert \hat{H}(\mathrm{e}^{\,\mathrm{j}\,\Omega}) \vert^2$. 
```python
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import scipy.signal as sig

%matplotlib inline

K = 4096  # length of random signal
N = 3  # order of AR model
a = np.array((1, -1, .5))  # coefficients of AR model

# generate random signal n[k]
np.random.seed(2)
n = np.random.normal(size=K)

# AR model for random signal x[k]
x = np.zeros(K)
for k in np.arange(3, K):
    x[k] = a[0]*x[k-1] + a[1]*x[k-2] + a[2]*x[k-3] + n[k]

# estimate AR parameters by Yule-Walker method
rho, sigma = sm.regression.yule_walker(x, order=N, method='mle')

# compute true and estimated transfer function
Om, H = sig.freqz(1, np.insert(-a, 0, 1))
Om, He = sig.freqz(1, np.insert(-rho, 0, 1))

# compute PSD by Welch method
Om2, Pxx = sig.welch(x, return_onesided=True)

# plot PSDs
plt.figure(figsize=(10, 5))
plt.plot(Om, np.abs(H)**2, label=r'$\Phi_{xx}(e^{j\Omega})$')
plt.plot(Om2*2*np.pi, .5*np.abs(Pxx), 'k-', alpha=.5, label=r'$\hat{\Phi}_{xx}(e^{j\Omega})$ (Welch)')
plt.plot(Om, np.abs(He)**2, label=r'$\hat{\Phi}_{xx}(e^{j\Omega})$ (parametric)')
plt.xlabel(r'$\Omega$')
plt.axis([0, np.pi, 0, 20])
plt.legend()
plt.grid()
```

**Exercise**

* Change the order `N` of the AR model used for estimation by the Yule-Walker equations. What happens if the order is smaller or higher than the order of the true system? Why?
* Change the number of samples `K`. Is the estimator consistent?

Solution: Choosing the order of the estimated AR model differently from the true process results in a model mismatch, with the consequence of potentially large deviations between the estimated and the true PSD. These deviations are typically larger when the order is chosen too small, since the model then cannot fit the true process at all. Choosing the order larger is typically less problematic, since some of the AR coefficients are estimated as approximately zero due to the lower order of the process generating the random signal. Increasing the number of samples seems to lower the bias and variance of the estimated PSD; the estimator can therefore be assumed to be consistent.

**Copyright**

This notebook is provided as [Open Educational Resource](https://en.wikipedia.org/wiki/Open_educational_resources). Feel free to use the notebook for your own purposes. The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Sascha Spors, Digital Signal Processing - Lecture notes featuring computational examples*.
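As a supplement to the example above: the Yule-Walker equations mentioned in the text can also be set up and solved directly from the sample auto-correlation, without `statsmodels`. The cell below is a minimal sketch added here (not part of the original notebook); the helper name `yule_walker_ar` is chosen for illustration, and it follows the notebook's sign convention $x[k] = \sum_{n=1}^{N} a_n x[k-n] + n[k]$.

```python
# Sketch: solve the Yule-Walker normal equations from the biased sample ACF.
import numpy as np
import scipy.linalg

def yule_walker_ar(x, order):
    """Estimate AR coefficients a_1..a_order and the noise variance from x."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    K = len(x)
    # biased sample auto-correlation r[0..order]
    r = np.array([np.dot(x[:K - k], x[k:]) / K for k in range(order + 1)])
    # Toeplitz system R a = r[1:], with R[i, j] = r[|i - j|]
    R = scipy.linalg.toeplitz(r[:order])
    a_hat = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - a_hat @ r[1:]
    return a_hat, sigma2

# usage with the signal x and order N generated in the example above
# a_hat, sigma2 = yule_walker_ar(x, N)
# print(a_hat)   # should be close to the true coefficients (1, -1, 0.5)
```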
#ifndef NETWORK_CANUDPRECEIVER_H
#define NETWORK_CANUDPRECEIVER_H

#include <cstdint>

#include <gsl/gsl>
#include <QObject>

#include "tincan/canrawframe.h"
#include "network/udpasyncreceiver.h"

namespace network {

class Can_udp_receiver final : public QObject, public udp::Async_receiver
{
  Q_OBJECT
public:
  Can_udp_receiver() = default;
  Can_udp_receiver(const Can_udp_receiver&) = delete;
  Can_udp_receiver(Can_udp_receiver&&) = delete;
  Can_udp_receiver& operator=(const Can_udp_receiver&) = delete;
  Can_udp_receiver& operator=(Can_udp_receiver&&) = delete;

  void handle_receive(gsl::span<std::uint8_t> buffer) override;

signals:
  void received_frame(std::uint64_t, tin::Can_raw_frame);
};

} // namespace network

#endif // NETWORK_CANUDPRECEIVER_H
Formal statement is: lemma (in bounded_bilinear) isCont: "isCont f a \<Longrightarrow> isCont g a \<Longrightarrow> isCont (\<lambda>x. f x ** g x) a" Informal statement is: If $f$ and $g$ are continuous at $a$, then the function $f \cdot g$ is continuous at $a$.
From Hammer Require Import Hammer. Require Import Recdef. From compcert Require Import Coqlib. From compcert Require Import Maps. From compcert Require Import Errors. Local Open Scope nat_scope. Local Open Scope error_monad_scope. Module Type TYPE_ALGEBRA. Parameter t: Type. Parameter eq: forall (x y: t), {x=y} + {x<>y}. Parameter default: t. Parameter sub: t -> t -> Prop. Axiom sub_refl: forall x, sub x x. Axiom sub_trans: forall x y z, sub x y -> sub y z -> sub x z. Parameter sub_dec: forall x y, {sub x y} + {~sub x y}. Parameter lub: t -> t -> t. Axiom lub_left: forall x y z, sub x z -> sub y z -> sub x (lub x y). Axiom lub_right: forall x y z, sub x z -> sub y z -> sub y (lub x y). Axiom lub_min: forall x y z, sub x z -> sub y z -> sub (lub x y) z. Parameter glb: t -> t -> t. Axiom glb_left: forall x y z, sub z x -> sub z y -> sub (glb x y) x. Axiom glb_right: forall x y z, sub z x -> sub z y -> sub (glb x y) y. Axiom glb_max: forall x y z, sub z x -> sub z y -> sub z (glb x y). Parameter low_bound: t -> t. Parameter high_bound: t -> t. Axiom low_bound_sub: forall t, sub (low_bound t) t. Axiom low_bound_minorant: forall x y, sub x y -> sub (low_bound y) x. Axiom high_bound_sub: forall t, sub t (high_bound t). Axiom high_bound_majorant: forall x y, sub x y -> sub y (high_bound x). Parameter weight: t -> nat. Parameter max_weight: nat. Axiom weight_range: forall t, weight t <= max_weight. Axiom weight_sub: forall x y, sub x y -> weight x <= weight y. Axiom weight_sub_strict: forall x y, sub x y -> x <> y -> weight x < weight y. End TYPE_ALGEBRA. Module SubSolver (T: TYPE_ALGEBRA). Inductive bounds : Type := B (lo: T.t) (hi: T.t) (SUB: T.sub lo hi). Definition constraint : Type := (positive * positive)%type. Record typenv : Type := Typenv { te_typ: PTree.t bounds; te_sub: list constraint }. Definition initial : typenv := {| te_typ := PTree.empty _; te_sub := nil |}. Definition type_def (e: typenv) (x: positive) (ty: T.t) : res typenv := match e.(te_typ)!x with | None => let b := B ty (T.high_bound ty) (T.high_bound_sub ty) in OK {| te_typ := PTree.set x b e.(te_typ); te_sub := e.(te_sub) |} | Some(B lo hi s1) => match T.sub_dec ty hi with | left s2 => let lo' := T.lub lo ty in if T.eq lo lo' then OK e else let b := B lo' hi (T.lub_min lo ty hi s1 s2) in OK {| te_typ := PTree.set x b e.(te_typ); te_sub := e.(te_sub) |} | right _ => Error (MSG "bad definition of variable " :: POS x :: nil) end end. Fixpoint type_defs (e: typenv) (rl: list positive) (tyl: list T.t) {struct rl}: res typenv := match rl, tyl with | nil, nil => OK e | r1::rs, ty1::tys => do e1 <- type_def e r1 ty1; type_defs e1 rs tys | _, _ => Error (msg "arity mismatch") end. Definition type_use (e: typenv) (x: positive) (ty: T.t) : res typenv := match e.(te_typ)!x with | None => let b := B (T.low_bound ty) ty (T.low_bound_sub ty) in OK {| te_typ := PTree.set x b e.(te_typ); te_sub := e.(te_sub) |} | Some(B lo hi s1) => match T.sub_dec lo ty with | left s2 => let hi' := T.glb hi ty in if T.eq hi hi' then OK e else let b := B lo hi' (T.glb_max hi ty lo s1 s2) in OK {| te_typ := PTree.set x b e.(te_typ); te_sub := e.(te_sub) |} | right _ => Error (MSG "bad use of variable " :: POS x :: nil) end end. Fixpoint type_uses (e: typenv) (rl: list positive) (tyl: list T.t) {struct rl}: res typenv := match rl, tyl with | nil, nil => OK e | r1::rs, ty1::tys => do e1 <- type_use e r1 ty1; type_uses e1 rs tys | _, _ => Error (msg "arity mismatch") end. 
Definition type_move (e: typenv) (r1 r2: positive) : res (bool * typenv) := if peq r1 r2 then OK (false, e) else match e.(te_typ)!r1, e.(te_typ)!r2 with | None, None => OK (false, {| te_typ := e.(te_typ); te_sub := (r1, r2) :: e.(te_sub) |}) | Some(B lo1 hi1 s1), None => let b2 := B lo1 (T.high_bound lo1) (T.high_bound_sub lo1) in OK (true, {| te_typ := PTree.set r2 b2 e.(te_typ); te_sub := if T.sub_dec hi1 lo1 then e.(te_sub) else (r1, r2) :: e.(te_sub) |}) | None, Some(B lo2 hi2 s2) => let b1 := B (T.low_bound hi2) hi2 (T.low_bound_sub hi2) in OK (true, {| te_typ := PTree.set r1 b1 e.(te_typ); te_sub := if T.sub_dec hi2 lo2 then e.(te_sub) else (r1, r2) :: e.(te_sub) |}) | Some(B lo1 hi1 s1), Some(B lo2 hi2 s2) => if T.sub_dec hi1 lo2 then OK (false, e) else match T.sub_dec lo1 hi2 with | left s => let lo2' := T.lub lo1 lo2 in let hi1' := T.glb hi1 hi2 in let b1 := B lo1 hi1' (T.glb_max hi1 hi2 lo1 s1 s) in let b2 := B lo2' hi2 (T.lub_min lo1 lo2 hi2 s s2) in if T.eq lo2 lo2' then if T.eq hi1 hi1' then OK (false, {| te_typ := e.(te_typ); te_sub := (r1, r2) :: e.(te_sub) |}) else OK (true, {| te_typ := PTree.set r1 b1 e.(te_typ); te_sub := (r1, r2) :: e.(te_sub) |}) else if T.eq hi1 hi1' then OK (true, {| te_typ := PTree.set r2 b2 e.(te_typ); te_sub := (r1, r2) :: e.(te_sub) |}) else OK (true, {| te_typ := PTree.set r2 b2 (PTree.set r1 b1 e.(te_typ)); te_sub := (r1, r2) :: e.(te_sub) |}) | right _ => Error(MSG "ill-typed move from " :: POS r1 :: MSG " to " :: POS r2 :: nil) end end. Fixpoint solve_rec (e: typenv) (changed: bool) (q: list constraint) : res (typenv * bool) := match q with | nil => OK (e, changed) | (r1, r2) :: q' => do (changed1, e1) <- type_move e r1 r2; solve_rec e1 (changed || changed1) q' end. Definition weight_bounds (ob: option bounds) : nat := match ob with None => T.max_weight + 1 | Some(B lo hi s) => T.weight hi - T.weight lo end. Lemma weight_bounds_1: forall lo hi s, weight_bounds (Some (B lo hi s)) < weight_bounds None. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.weight_bounds_1". intros; simpl. generalize (T.weight_range hi); omega. Qed. Lemma weight_bounds_2: forall lo1 hi1 s1 lo2 hi2 s2, T.sub lo2 lo1 -> T.sub hi1 hi2 -> lo1 <> lo2 \/ hi1 <> hi2 -> weight_bounds (Some (B lo1 hi1 s1)) < weight_bounds (Some (B lo2 hi2 s2)). Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.weight_bounds_2". intros; simpl. generalize (T.weight_sub _ _ s1) (T.weight_sub _ _ s2) (T.weight_sub _ _ H) (T.weight_sub _ _ H0); intros. destruct H1. assert (T.weight lo2 < T.weight lo1) by (apply T.weight_sub_strict; auto). omega. assert (T.weight hi1 < T.weight hi2) by (apply T.weight_sub_strict; auto). omega. Qed. Hint Resolve T.sub_refl: ty. Lemma weight_type_move: forall e r1 r2 changed e', type_move e r1 r2 = OK(changed, e') -> (e'.(te_sub) = e.(te_sub) \/ e'.(te_sub) = (r1, r2) :: e.(te_sub)) /\ (forall r, weight_bounds e'.(te_typ)!r <= weight_bounds e.(te_typ)!r) /\ (changed = true -> weight_bounds e'.(te_typ)!r1 + weight_bounds e'.(te_typ)!r2 < weight_bounds e.(te_typ)!r1 + weight_bounds e.(te_typ)!r2). Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.weight_type_move". unfold type_move; intros. destruct (peq r1 r2). inv H. split; auto. split; intros. omega. discriminate. destruct (te_typ e)!r1 as [[lo1 hi1 s1]|] eqn:E1; destruct (te_typ e)!r2 as [[lo2 hi2 s2]|] eqn:E2. - destruct (T.sub_dec hi1 lo2). inv H. split; auto. split; intros. omega. discriminate. destruct (T.sub_dec lo1 hi2); try discriminate. set (lo2' := T.lub lo1 lo2) in *. 
set (hi1' := T.glb hi1 hi2) in *. assert (S1': T.sub hi1' hi1) by (eapply T.glb_left; eauto). assert (S2': T.sub lo2 lo2') by (eapply T.lub_right; eauto). set (b1 := B lo1 hi1' (T.glb_max hi1 hi2 lo1 s1 s)) in *. set (b2 := B lo2' hi2 (T.lub_min lo1 lo2 hi2 s s2)) in *. Local Opaque weight_bounds. destruct (T.eq lo2 lo2'); destruct (T.eq hi1 hi1'); inversion H; clear H; subst changed e'; simpl. + split; auto. split; intros. omega. discriminate. + assert (weight_bounds (Some b1) < weight_bounds (Some (B lo1 hi1 s1))) by (apply weight_bounds_2; auto with ty). split; auto. split; intros. rewrite PTree.gsspec. destruct (peq r r1). subst r. rewrite E1. omega. omega. rewrite PTree.gss. rewrite PTree.gso by auto. rewrite E2. omega. + assert (weight_bounds (Some b2) < weight_bounds (Some (B lo2 hi2 s2))) by (apply weight_bounds_2; auto with ty). split; auto. split; intros. rewrite PTree.gsspec. destruct (peq r r2). subst r. rewrite E2. omega. omega. rewrite PTree.gss. rewrite PTree.gso by auto. rewrite E1. omega. + assert (weight_bounds (Some b1) < weight_bounds (Some (B lo1 hi1 s1))) by (apply weight_bounds_2; auto with ty). assert (weight_bounds (Some b2) < weight_bounds (Some (B lo2 hi2 s2))) by (apply weight_bounds_2; auto with ty). split; auto. split; intros. rewrite ! PTree.gsspec. destruct (peq r r2). subst r. rewrite E2. omega. destruct (peq r r1). subst r. rewrite E1. omega. omega. rewrite PTree.gss. rewrite PTree.gso by auto. rewrite PTree.gss. omega. - set (b2 := B lo1 (T.high_bound lo1) (T.high_bound_sub lo1)) in *. assert (weight_bounds (Some b2) < weight_bounds None) by (apply weight_bounds_1). inv H; simpl. split. destruct (T.sub_dec hi1 lo1); auto. split; intros. rewrite PTree.gsspec. destruct (peq r r2). subst r; rewrite E2; omega. omega. rewrite PTree.gss. rewrite PTree.gso by auto. rewrite E1. omega. - set (b1 := B (T.low_bound hi2) hi2 (T.low_bound_sub hi2)) in *. assert (weight_bounds (Some b1) < weight_bounds None) by (apply weight_bounds_1). inv H; simpl. split. destruct (T.sub_dec hi2 lo2); auto. split; intros. rewrite PTree.gsspec. destruct (peq r r1). subst r; rewrite E1; omega. omega. rewrite PTree.gss. rewrite PTree.gso by auto. rewrite E2. omega. - inv H. split; auto. simpl; split; intros. omega. congruence. Qed. Definition weight_constraints (b: PTree.t bounds) (cstr: list constraint) : nat := List.fold_right (fun xy n => n + weight_bounds b!(fst xy) + weight_bounds b!(snd xy)) 0 cstr. Remark weight_constraints_tighter: forall b1 b2, (forall r, weight_bounds b1!r <= weight_bounds b2!r) -> forall q, weight_constraints b1 q <= weight_constraints b2 q. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.weight_constraints_tighter". induction q; simpl. omega. generalize (H (fst a)) (H (snd a)); omega. Qed. Lemma weight_solve_rec: forall q e changed e' changed', solve_rec e changed q = OK(e', changed') -> (forall r, weight_bounds e'.(te_typ)!r <= weight_bounds e.(te_typ)!r) /\ weight_constraints e'.(te_typ) e'.(te_sub) + (if changed' && negb changed then 1 else 0) <= weight_constraints e.(te_typ) e.(te_sub) + weight_constraints e.(te_typ) q. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.weight_solve_rec". induction q; simpl; intros. - inv H. split. intros; omega. replace (changed' && negb changed') with false. omega. destruct changed'; auto. - destruct a as [r1 r2]; monadInv H; simpl. rename x into changed1. rename x0 into e1. exploit weight_type_move; eauto. intros [A [B C]]. exploit IHq; eauto. intros [D E]. split. intros. eapply le_trans. eapply D. eapply B. 
assert (P: weight_constraints (te_typ e1) (te_sub e) <= weight_constraints (te_typ e) (te_sub e)) by (apply weight_constraints_tighter; auto). assert (Q: weight_constraints (te_typ e1) (te_sub e1) <= weight_constraints (te_typ e1) (te_sub e) + weight_bounds (te_typ e1)!r1 + weight_bounds (te_typ e1)!r2). { destruct A as [Q|Q]; rewrite Q. omega. simpl. omega. } assert (R: weight_constraints (te_typ e1) q <= weight_constraints (te_typ e) q) by (apply weight_constraints_tighter; auto). set (ch1 := if changed' && negb (changed || changed1) then 1 else 0) in *. set (ch2 := if changed' && negb changed then 1 else 0) in *. destruct changed1. assert (ch2 <= ch1 + 1). { unfold ch2, ch1. rewrite orb_true_r. simpl. rewrite andb_false_r. destruct (changed' && negb changed); omega. } exploit C; eauto. omega. assert (ch2 <= ch1). { unfold ch2, ch1. rewrite orb_false_r. omega. } generalize (B r1) (B r2); omega. Qed. Definition weight_typenv (e: typenv) : nat := weight_constraints e.(te_typ) e.(te_sub). Function solve_constraints (e: typenv) {measure weight_typenv e}: res typenv := match solve_rec {| te_typ := e.(te_typ); te_sub := nil |} false e.(te_sub) with | OK(e', false) => OK e | OK(e', true) => solve_constraints e' | Error msg => Error msg end. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.weight_typenv". intros. exploit weight_solve_rec; eauto. simpl. intros [A B]. unfold weight_typenv. omega. Qed. Definition typassign := positive -> T.t. Definition makeassign (e: typenv) : typassign := fun x => match e.(te_typ)!x with Some(B lo hi s) => lo | None => T.default end. Definition solve (e: typenv) : res typassign := do e' <- solve_constraints e; OK(makeassign e'). Definition satisf (te: typassign) (e: typenv) : Prop := (forall x lo hi s, e.(te_typ)!x = Some(B lo hi s) -> T.sub lo (te x) /\ T.sub (te x) hi) /\ (forall x y, In (x, y) e.(te_sub) -> T.sub (te x) (te y)). Lemma satisf_initial: forall te, satisf te initial. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.satisf_initial". unfold initial; intros; split; simpl; intros. rewrite PTree.gempty in H; discriminate. contradiction. Qed. Lemma type_def_incr: forall te x ty e e', type_def e x ty = OK e' -> satisf te e' -> satisf te e. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_def_incr". unfold type_def; intros. destruct (te_typ e)!x as [[lo hi s1]|] eqn:E. - destruct (T.sub_dec ty hi); try discriminate. destruct (T.eq lo (T.lub lo ty)); monadInv H. auto. destruct H0 as [P Q]; split; auto; intros. destruct (peq x x0). + subst x0. rewrite E in H; inv H. exploit (P x); simpl. rewrite PTree.gss; eauto. intuition. apply T.sub_trans with (T.lub lo0 ty); auto. eapply T.lub_left; eauto. + eapply P; simpl. rewrite PTree.gso; eauto. - inv H. destruct H0 as [P Q]; split; auto; intros. eapply P; simpl. rewrite PTree.gso; eauto. congruence. Qed. Hint Resolve type_def_incr: ty. Lemma type_def_sound: forall te x ty e e', type_def e x ty = OK e' -> satisf te e' -> T.sub ty (te x). Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_def_sound". unfold type_def; intros. destruct H0 as [P Q]. destruct (te_typ e)!x as [[lo hi s1]|] eqn:E. - destruct (T.sub_dec ty hi); try discriminate. destruct (T.eq lo (T.lub lo ty)); monadInv H. + apply T.sub_trans with lo. rewrite e0. eapply T.lub_right; eauto. eapply P; eauto. + apply T.sub_trans with (T.lub lo ty). eapply T.lub_right; eauto. eapply (P x). simpl. rewrite PTree.gss; eauto. - inv H. eapply (P x); simpl. rewrite PTree.gss; eauto. Qed. 
Lemma type_defs_incr: forall te xl tyl e e', type_defs e xl tyl = OK e' -> satisf te e' -> satisf te e. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_defs_incr". induction xl; destruct tyl; simpl; intros; monadInv H; eauto with ty. Qed. Hint Resolve type_defs_incr: ty. Lemma type_defs_sound: forall te xl tyl e e', type_defs e xl tyl = OK e' -> satisf te e' -> list_forall2 T.sub tyl (map te xl). Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_defs_sound". induction xl; destruct tyl; simpl; intros; monadInv H. constructor. constructor; eauto. eapply type_def_sound; eauto with ty. Qed. Lemma type_use_incr: forall te x ty e e', type_use e x ty = OK e' -> satisf te e' -> satisf te e. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_use_incr". unfold type_use; intros. destruct (te_typ e)!x as [[lo hi s1]|] eqn:E. - destruct (T.sub_dec lo ty); try discriminate. destruct (T.eq hi (T.glb hi ty)); monadInv H. auto. destruct H0 as [P Q]; split; auto; intros. destruct (peq x x0). + subst x0. rewrite E in H; inv H. exploit (P x); simpl. rewrite PTree.gss; eauto. intuition. apply T.sub_trans with (T.glb hi0 ty); auto. eapply T.glb_left; eauto. + eapply P; simpl. rewrite PTree.gso; eauto. - inv H. destruct H0 as [P Q]; split; auto; intros. eapply P; simpl. rewrite PTree.gso; eauto. congruence. Qed. Hint Resolve type_use_incr: ty. Lemma type_use_sound: forall te x ty e e', type_use e x ty = OK e' -> satisf te e' -> T.sub (te x) ty. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_use_sound". unfold type_use; intros. destruct H0 as [P Q]. destruct (te_typ e)!x as [[lo hi s1]|] eqn:E. - destruct (T.sub_dec lo ty); try discriminate. destruct (T.eq hi (T.glb hi ty)); monadInv H. + apply T.sub_trans with hi. eapply P; eauto. rewrite e0. eapply T.glb_right; eauto. + apply T.sub_trans with (T.glb hi ty). eapply (P x). simpl. rewrite PTree.gss; eauto. eapply T.glb_right; eauto. - inv H. eapply (P x); simpl. rewrite PTree.gss; eauto. Qed. Lemma type_uses_incr: forall te xl tyl e e', type_uses e xl tyl = OK e' -> satisf te e' -> satisf te e. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_uses_incr". induction xl; destruct tyl; simpl; intros; monadInv H; eauto with ty. Qed. Hint Resolve type_uses_incr: ty. Lemma type_uses_sound: forall te xl tyl e e', type_uses e xl tyl = OK e' -> satisf te e' -> list_forall2 T.sub (map te xl) tyl. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_uses_sound". induction xl; destruct tyl; simpl; intros; monadInv H. constructor. constructor; eauto. eapply type_use_sound; eauto with ty. Qed. Lemma type_move_incr: forall te e r1 r2 e' changed, type_move e r1 r2 = OK(changed, e') -> satisf te e' -> satisf te e. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_move_incr". unfold type_move; intros. destruct H0 as [P Q]. destruct (peq r1 r2). inv H; split; auto. destruct (te_typ e)!r1 as [[lo1 hi1 s1]|] eqn:E1; destruct (te_typ e)!r2 as [[lo2 hi2 s2]|] eqn:E2. - destruct (T.sub_dec hi1 lo2). inv H; split; auto. destruct (T.sub_dec lo1 hi2); try discriminate. set (lo := T.lub lo1 lo2) in *. set (hi := T.glb hi1 hi2) in *. destruct (T.eq lo2 lo); destruct (T.eq hi1 hi); monadInv H; simpl in *. + subst e'; simpl in *. split; auto. + subst e'; simpl in *. split; auto. intros. destruct (peq x r1). subst x. rewrite E1 in H. injection H; intros; subst lo0 hi0. exploit (P r1). rewrite PTree.gss; eauto. intuition. apply T.sub_trans with (T.glb hi1 hi2); auto. eapply T.glb_left; eauto. eapply P. rewrite PTree.gso; eauto. 
+ subst e'; simpl in *. split; auto. intros. destruct (peq x r2). subst x. rewrite E2 in H. injection H; intros; subst lo0 hi0. exploit (P r2). rewrite PTree.gss; eauto. intuition. apply T.sub_trans with (T.lub lo1 lo2); auto. eapply T.lub_right; eauto. eapply P. rewrite PTree.gso; eauto. + split; auto. intros. destruct (peq x r1). subst x. rewrite E1 in H. injection H; intros; subst lo0 hi0. exploit (P r1). rewrite PTree.gso; eauto. rewrite PTree.gss; eauto. intuition. apply T.sub_trans with (T.glb hi1 hi2); auto. eapply T.glb_left; eauto. destruct (peq x r2). subst x. rewrite E2 in H. injection H; intros; subst lo0 hi0. exploit (P r2). rewrite PTree.gss; eauto. intuition. apply T.sub_trans with (T.lub lo1 lo2); auto. eapply T.lub_right; eauto. eapply P. rewrite ! PTree.gso; eauto. - inv H; simpl in *. split; intros. eapply P. rewrite PTree.gso; eauto. congruence. apply Q. destruct (T.sub_dec hi1 lo1); auto with coqlib. - inv H; simpl in *. split; intros. eapply P. rewrite PTree.gso; eauto. congruence. apply Q. destruct (T.sub_dec hi2 lo2); auto with coqlib. - inv H; simpl in *. split; auto. Qed. Hint Resolve type_move_incr: ty. Lemma type_move_sound: forall te e r1 r2 e' changed, type_move e r1 r2 = OK(changed, e') -> satisf te e' -> T.sub (te r1) (te r2). Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_move_sound". unfold type_move; intros. destruct H0 as [P Q]. destruct (peq r1 r2). subst r2. apply T.sub_refl. destruct (te_typ e)!r1 as [[lo1 hi1 s1]|] eqn:E1; destruct (te_typ e)!r2 as [[lo2 hi2 s2]|] eqn:E2. - destruct (T.sub_dec hi1 lo2). inv H. apply T.sub_trans with hi1. eapply P; eauto. apply T.sub_trans with lo2; auto. eapply P; eauto. destruct (T.sub_dec lo1 hi2); try discriminate. set (lo := T.lub lo1 lo2) in *. set (hi := T.glb hi1 hi2) in *. destruct (T.eq lo2 lo); destruct (T.eq hi1 hi); monadInv H; simpl in *. + subst e'; simpl in *. apply Q; auto. + subst e'; simpl in *. apply Q; auto. + subst e'; simpl in *. apply Q; auto. + apply Q; auto. - inv H; simpl in *. destruct (T.sub_dec hi1 lo1). + apply T.sub_trans with hi1. eapply P; eauto. rewrite PTree.gso; eauto. apply T.sub_trans with lo1; auto. eapply P. rewrite PTree.gss; eauto. + auto with coqlib. - inv H; simpl in *. destruct (T.sub_dec hi2 lo2). + apply T.sub_trans with hi2. eapply P. rewrite PTree.gss; eauto. apply T.sub_trans with lo2; auto. eapply P. rewrite PTree.gso; eauto. + auto with coqlib. - inv H. simpl in Q; auto. Qed. Lemma solve_rec_incr: forall te q e changed e' changed', solve_rec e changed q = OK(e', changed') -> satisf te e' -> satisf te e. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.solve_rec_incr". induction q; simpl; intros. - inv H. auto. - destruct a as [r1 r2]; monadInv H. eauto with ty. Qed. Lemma solve_rec_sound: forall te r1 r2 q e changed e' changed', solve_rec e changed q = OK(e', changed') -> In (r1, r2) q -> satisf te e' -> T.sub (te r1) (te r2). Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.solve_rec_sound". induction q; simpl; intros. - contradiction. - destruct a as [r3 r4]; monadInv H. destruct H0. + inv H. eapply type_move_sound; eauto. eapply solve_rec_incr; eauto. + eapply IHq; eauto with ty. Qed. Lemma type_move_false: forall e r1 r2 e', type_move e r1 r2 = OK(false, e') -> te_typ e' = te_typ e /\ T.sub (makeassign e r1) (makeassign e r2). Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_move_false". unfold type_move; intros. destruct (peq r1 r2). inv H. split; auto. apply T.sub_refl. 
unfold makeassign; destruct (te_typ e)!r1 as [[lo1 hi1 s1]|] eqn:E1; destruct (te_typ e)!r2 as [[lo2 hi2 s2]|] eqn:E2. - destruct (T.sub_dec hi1 lo2). inv H. split; auto. eapply T.sub_trans; eauto. destruct (T.sub_dec lo1 hi2); try discriminate. set (lo := T.lub lo1 lo2) in *. set (hi := T.glb hi1 hi2) in *. destruct (T.eq lo2 lo); destruct (T.eq hi1 hi); try discriminate. monadInv H. split; auto. rewrite e0. unfold lo. eapply T.lub_left; eauto. - discriminate. - discriminate. - inv H. split; auto. apply T.sub_refl. Qed. Lemma solve_rec_false: forall r1 r2 q e changed e', solve_rec e changed q = OK(e', false) -> changed = false /\ (In (r1, r2) q -> T.sub (makeassign e r1) (makeassign e r2)). Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.solve_rec_false". induction q; simpl; intros. - inv H. tauto. - destruct a as [r3 r4]; monadInv H. exploit IHq; eauto. intros [P Q]. destruct changed; try discriminate. destruct x; try discriminate. exploit type_move_false; eauto. intros [U V]. split. auto. intros [A|A]. inv A. auto. exploit Q; auto. unfold makeassign; rewrite U; auto. Qed. Lemma solve_constraints_incr: forall te e e', solve_constraints e = OK e' -> satisf te e' -> satisf te e. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.solve_constraints_incr". intros te e; functional induction (solve_constraints e); intros. - inv H. auto. - exploit solve_rec_incr; eauto. intros [A B]. split; auto. intros; eapply solve_rec_sound; eauto. - discriminate. Qed. Lemma solve_constraints_sound: forall e e', solve_constraints e = OK e' -> satisf (makeassign e') e'. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.solve_constraints_sound". intros e0; functional induction (solve_constraints e0); intros. - inv H. split; intros. unfold makeassign; rewrite H. split; auto with ty. exploit solve_rec_false. eauto. intros [A B]. eapply B; eauto. - eauto. - discriminate. Qed. Theorem solve_sound: forall e te, solve e = OK te -> satisf te e. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.solve_sound". unfold solve; intros. monadInv H. eapply solve_constraints_incr. eauto. eapply solve_constraints_sound; eauto. Qed. Lemma type_def_complete: forall te e x ty, satisf te e -> T.sub ty (te x) -> exists e', type_def e x ty = OK e' /\ satisf te e'. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_def_complete". unfold type_def; intros. destruct H as [P Q]. destruct (te_typ e)!x as [[lo hi s1]|] eqn:E. - destruct (T.sub_dec ty hi). destruct (T.eq lo (T.lub lo ty)). exists e; split; auto. split; auto. econstructor; split; eauto. split; simpl; auto; intros. rewrite PTree.gsspec in H. destruct (peq x0 x). inv H. exploit P; eauto. intuition. eapply T.lub_min; eauto. eapply P; eauto. elim n. apply T.sub_trans with (te x); auto. eapply P; eauto. - econstructor; split; eauto. split; simpl; auto; intros. rewrite PTree.gsspec in H. destruct (peq x0 x). inv H. split; auto. apply T.high_bound_majorant; auto. eapply P; eauto. Qed. Lemma type_defs_complete: forall te xl tyl e, satisf te e -> list_forall2 T.sub tyl (map te xl) -> exists e', type_defs e xl tyl = OK e' /\ satisf te e'. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_defs_complete". induction xl; intros; inv H0; simpl. econstructor; eauto. exploit (type_def_complete te e a a1); auto. intros (e1 & P & Q). exploit (IHxl al e1); auto. intros (e2 & U & V). exists e2; split; auto. rewrite P; auto. Qed. Lemma type_use_complete: forall te e x ty, satisf te e -> T.sub (te x) ty -> exists e', type_use e x ty = OK e' /\ satisf te e'. Proof. 
hammer_hook "Subtyping" "Subtyping.SubSolver.type_use_complete". unfold type_use; intros. destruct H as [P Q]. destruct (te_typ e)!x as [[lo hi s1]|] eqn:E. - destruct (T.sub_dec lo ty). destruct (T.eq hi (T.glb hi ty)). exists e; split; auto. split; auto. econstructor; split; eauto. split; simpl; auto; intros. rewrite PTree.gsspec in H. destruct (peq x0 x). inv H. exploit P; eauto. intuition. eapply T.glb_max; eauto. eapply P; eauto. elim n. apply T.sub_trans with (te x); auto. eapply P; eauto. - econstructor; split; eauto. split; simpl; auto; intros. rewrite PTree.gsspec in H. destruct (peq x0 x). inv H. split; auto. apply T.low_bound_minorant; auto. eapply P; eauto. Qed. Lemma type_uses_complete: forall te xl tyl e, satisf te e -> list_forall2 T.sub (map te xl) tyl -> exists e', type_uses e xl tyl = OK e' /\ satisf te e'. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_uses_complete". induction xl; intros; inv H0; simpl. econstructor; eauto. exploit (type_use_complete te e a b1); auto. intros (e1 & P & Q). exploit (IHxl bl e1); auto. intros (e2 & U & V). exists e2; split; auto. rewrite P; auto. Qed. Lemma type_move_complete: forall te e r1 r2, satisf te e -> T.sub (te r1) (te r2) -> exists changed e', type_move e r1 r2 = OK(changed, e') /\ satisf te e'. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.type_move_complete". unfold type_move; intros. elim H; intros P Q. assert (Q': forall x y, In (x, y) ((r1, r2) :: te_sub e) -> T.sub (te x) (te y)). { intros. destruct H1; auto. congruence. } destruct (peq r1 r2). econstructor; econstructor; eauto. destruct (te_typ e)!r1 as [[lo1 hi1 s1]|] eqn:E1; destruct (te_typ e)!r2 as [[lo2 hi2 s2]|] eqn:E2. - exploit (P r1); eauto. intros [L1 U1]. exploit (P r2); eauto. intros [L2 U2]. destruct (T.sub_dec hi1 lo2). econstructor; econstructor; eauto. destruct (T.sub_dec lo1 hi2). destruct (T.eq lo2 (T.lub lo1 lo2)); destruct (T.eq hi1 (T.glb hi1 hi2)); econstructor; econstructor; split; eauto; split; auto; simpl; intros. + rewrite PTree.gsspec in H1. destruct (peq x r1). clear e0. inv H1. split; auto. apply T.glb_max. auto. apply T.sub_trans with (te r2); auto. eapply P; eauto. + rewrite PTree.gsspec in H1. destruct (peq x r2). clear e0. inv H1. split; auto. apply T.lub_min. apply T.sub_trans with (te r1); auto. auto. eapply P; eauto. + rewrite ! PTree.gsspec in H1. destruct (peq x r2). inv H1. split; auto. apply T.lub_min; auto. apply T.sub_trans with (te r1); auto. destruct (peq x r1). inv H1. split; auto. apply T.glb_max; auto. apply T.sub_trans with (te r2); auto. eapply P; eauto. + elim n1. apply T.sub_trans with (te r1); auto. apply T.sub_trans with (te r2); auto. - econstructor; econstructor; split; eauto; split. + simpl; intros. rewrite PTree.gsspec in H1. destruct (peq x r2). inv H1. exploit P; eauto. intuition. apply T.sub_trans with (te r1); auto. apply T.high_bound_majorant. apply T.sub_trans with (te r1); auto. eapply P; eauto. + destruct (T.sub_dec hi1 lo1); auto. - econstructor; econstructor; split; eauto; split. + simpl; intros. rewrite PTree.gsspec in H1. destruct (peq x r1). inv H1. exploit P; eauto. intuition. apply T.low_bound_minorant. apply T.sub_trans with (te r2); auto. apply T.sub_trans with (te r2); auto. eapply P; eauto. + destruct (T.sub_dec hi2 lo2); auto. - econstructor; econstructor; split; eauto; split; auto. Qed. 
Lemma solve_rec_complete: forall te q e changed, satisf te e -> (forall r1 r2, In (r1, r2) q -> T.sub (te r1) (te r2)) -> exists e' changed', solve_rec e changed q = OK(e', changed') /\ satisf te e'. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.solve_rec_complete". induction q; simpl; intros. - econstructor; econstructor; eauto. - destruct a as [r1 r2]. exploit (type_move_complete te e r1 r2); auto. intros (changed1 & e1 & A & B). exploit (IHq e1 (changed || changed1)); auto. intros (e' & changed' & C & D). exists e'; exists changed'. rewrite A; simpl; rewrite C; auto. Qed. Lemma solve_constraints_complete: forall te e, satisf te e -> exists e', solve_constraints e = OK e' /\ satisf te e'. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.solve_constraints_complete". intros te e. functional induction (solve_constraints e); intros. - exists e; auto. - exploit (solve_rec_complete te (te_sub e) {| te_typ := te_typ e; te_sub := nil |} false). destruct H; split; auto. simpl; tauto. destruct H; auto. intros (e1 & changed1 & P & Q). apply IHr. congruence. - exploit (solve_rec_complete te (te_sub e) {| te_typ := te_typ e; te_sub := nil |} false). destruct H; split; auto. simpl; tauto. destruct H; auto. intros (e1 & changed1 & P & Q). congruence. Qed. Lemma solve_complete: forall te e, satisf te e -> exists te', solve e = OK te'. Proof. hammer_hook "Subtyping" "Subtyping.SubSolver.solve_complete". intros. unfold solve. destruct (solve_constraints_complete te e H) as (e' & P & Q). econstructor. rewrite P. simpl. eauto. Qed. End SubSolver.
[STATEMENT] lemma qbs_eval_morphism: "qbs_eval \<in> (exp_qbs X Y) \<Otimes>\<^sub>Q X \<rightarrow>\<^sub>Q Y" [PROOF STATE] proof (prove) goal (1 subgoal): 1. qbs_eval \<in> (X \<Rightarrow>\<^sub>Q Y) \<Otimes>\<^sub>Q X \<rightarrow>\<^sub>Q Y [PROOF STEP] proof(rule qbs_morphismI,simp) [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] fix f [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] assume "f \<in> pair_qbs_Mx (exp_qbs X Y) X" [PROOF STATE] proof (state) this: f \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] let ?f1 = "fst \<circ> f" [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] let ?f2 = "snd \<circ> f" [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] define g :: "real \<Rightarrow> real \<times> _" where "g \<equiv> \<lambda>r.(r,?f2 r)" [PROOF STATE] proof (state) this: g \<equiv> \<lambda>r. (r, (snd \<circ> f) r) goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] have "g \<in> qbs_Mx (real_quasi_borel \<Otimes>\<^sub>Q X)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. g \<in> qbs_Mx (\<real>\<^sub>Q \<Otimes>\<^sub>Q X) [PROOF STEP] proof(auto simp add: pair_qbs_Mx_def) [PROOF STATE] proof (state) goal (2 subgoals): 1. fst \<circ> g \<in> borel_measurable real_borel 2. snd \<circ> g \<in> qbs_Mx X [PROOF STEP] have "fst \<circ> g = id" [PROOF STATE] proof (prove) goal (1 subgoal): 1. fst \<circ> g = id [PROOF STEP] by(auto simp add: g_def comp_def) [PROOF STATE] proof (state) this: fst \<circ> g = id goal (2 subgoals): 1. fst \<circ> g \<in> borel_measurable real_borel 2. snd \<circ> g \<in> qbs_Mx X [PROOF STEP] thus "fst \<circ> g \<in> real_borel \<rightarrow>\<^sub>M real_borel" [PROOF STATE] proof (prove) using this: fst \<circ> g = id goal (1 subgoal): 1. fst \<circ> g \<in> borel_measurable real_borel [PROOF STEP] by(auto simp add: measurable_ident) [PROOF STATE] proof (state) this: fst \<circ> g \<in> borel_measurable real_borel goal (1 subgoal): 1. snd \<circ> g \<in> qbs_Mx X [PROOF STEP] next [PROOF STATE] proof (state) goal (1 subgoal): 1. snd \<circ> g \<in> qbs_Mx X [PROOF STEP] have "snd \<circ> g = ?f2" [PROOF STATE] proof (prove) goal (1 subgoal): 1. snd \<circ> g = snd \<circ> f [PROOF STEP] by(auto simp add: g_def) [PROOF STATE] proof (state) this: snd \<circ> g = snd \<circ> f goal (1 subgoal): 1. snd \<circ> g \<in> qbs_Mx X [PROOF STEP] thus "snd \<circ> g \<in> qbs_Mx X" [PROOF STATE] proof (prove) using this: snd \<circ> g = snd \<circ> f goal (1 subgoal): 1. 
snd \<circ> g \<in> qbs_Mx X [PROOF STEP] using \<open>f \<in> pair_qbs_Mx (exp_qbs X Y) X\<close> pair_qbs_Mx_def [PROOF STATE] proof (prove) using this: snd \<circ> g = snd \<circ> f f \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X pair_qbs_Mx ?X ?Y \<equiv> {f. fst \<circ> f \<in> qbs_Mx ?X \<and> snd \<circ> f \<in> qbs_Mx ?Y} goal (1 subgoal): 1. snd \<circ> g \<in> qbs_Mx X [PROOF STEP] by auto [PROOF STATE] proof (state) this: snd \<circ> g \<in> qbs_Mx X goal: No subgoals! [PROOF STEP] qed [PROOF STATE] proof (state) this: g \<in> qbs_Mx (\<real>\<^sub>Q \<Otimes>\<^sub>Q X) goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] moreover [PROOF STATE] proof (state) this: g \<in> qbs_Mx (\<real>\<^sub>Q \<Otimes>\<^sub>Q X) goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] have "?f1 \<in> exp_qbs_Mx X Y" [PROOF STATE] proof (prove) goal (1 subgoal): 1. fst \<circ> f \<in> exp_qbs_Mx X Y [PROOF STEP] using \<open>f \<in> pair_qbs_Mx (exp_qbs X Y) X\<close> [PROOF STATE] proof (prove) using this: f \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X goal (1 subgoal): 1. fst \<circ> f \<in> exp_qbs_Mx X Y [PROOF STEP] by(simp add: pair_qbs_Mx_def) [PROOF STATE] proof (state) this: fst \<circ> f \<in> exp_qbs_Mx X Y goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: g \<in> qbs_Mx (\<real>\<^sub>Q \<Otimes>\<^sub>Q X) fst \<circ> f \<in> exp_qbs_Mx X Y [PROOF STEP] have "(\<lambda>(r,x). (?f1 r x)) \<circ> g \<in> qbs_Mx Y" [PROOF STATE] proof (prove) using this: g \<in> qbs_Mx (\<real>\<^sub>Q \<Otimes>\<^sub>Q X) fst \<circ> f \<in> exp_qbs_Mx X Y goal (1 subgoal): 1. (\<lambda>(r, x). (fst \<circ> f) r x) \<circ> g \<in> qbs_Mx Y [PROOF STEP] by (auto simp add: exp_qbs_Mx_def qbs_morphism_def) (metis (mono_tags, lifting) case_prod_conv comp_apply cond_case_prod_eta) [PROOF STATE] proof (state) this: (\<lambda>(r, x). (fst \<circ> f) r x) \<circ> g \<in> qbs_Mx Y goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] moreover [PROOF STATE] proof (state) this: (\<lambda>(r, x). (fst \<circ> f) r x) \<circ> g \<in> qbs_Mx Y goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] have "(\<lambda>(r,x). (?f1 r x)) \<circ> g = qbs_eval \<circ> f" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<lambda>(r, x). (fst \<circ> f) r x) \<circ> g = qbs_eval \<circ> f [PROOF STEP] by(auto simp add: case_prod_unfold g_def qbs_eval_def) [PROOF STATE] proof (state) this: (\<lambda>(r, x). (fst \<circ> f) r x) \<circ> g = qbs_eval \<circ> f goal (1 subgoal): 1. \<And>\<alpha>. \<alpha> \<in> pair_qbs_Mx (X \<Rightarrow>\<^sub>Q Y) X \<Longrightarrow> qbs_eval \<circ> \<alpha> \<in> qbs_Mx Y [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: (\<lambda>(r, x). (fst \<circ> f) r x) \<circ> g \<in> qbs_Mx Y (\<lambda>(r, x). (fst \<circ> f) r x) \<circ> g = qbs_eval \<circ> f [PROOF STEP] show "qbs_eval \<circ> f \<in> qbs_Mx Y" [PROOF STATE] proof (prove) using this: (\<lambda>(r, x). 
(fst \<circ> f) r x) \<circ> g \<in> qbs_Mx Y (\<lambda>(r, x). (fst \<circ> f) r x) \<circ> g = qbs_eval \<circ> f goal (1 subgoal): 1. qbs_eval \<circ> f \<in> qbs_Mx Y [PROOF STEP] by simp [PROOF STATE] proof (state) this: qbs_eval \<circ> f \<in> qbs_Mx Y goal: No subgoals! [PROOF STEP] qed
[STATEMENT] lemma child_uneq: "t \<in> fst ` fset xs \<Longrightarrow> Node r xs \<noteq> t" [PROOF STATE] proof (prove) goal (1 subgoal): 1. t \<in> fst ` fset xs \<Longrightarrow> Node r xs \<noteq> t [PROOF STEP] using dtree_size_decr_aux' [PROOF STATE] proof (prove) using this: ?t1.0 \<in> fst ` fset ?xs \<Longrightarrow> size ?t1.0 < size (Node ?r ?xs) goal (1 subgoal): 1. t \<in> fst ` fset xs \<Longrightarrow> Node r xs \<noteq> t [PROOF STEP] by fast
module Main main : IO () main = do printLn $ divNat 1809022195644390369852458 91238741987 printLn $ div (-432642342742368327462378462387) 36473264372 -- values at the border between C ints and bignums printLn $ div 1073741822 73892 printLn $ div 1073741823 73892 printLn $ div 1073741824 73892 printLn $ div (-1073741823) 73892 printLn $ div (-1073741824) 73892 printLn $ div (-1073741825) 73892
(* Title: Fun With Functions Author: Tobias Nipkow *) theory FunWithFunctions imports Complex_Main begin text{* See \cite{Tao2006}. Was first brought to our attention by Herbert Ehler who provided a similar proof. *} theorem identity1: fixes f :: "nat \<Rightarrow> nat" assumes fff: "\<And>n. f(f(n)) < f(Suc(n))" shows "f(n) = n" proof - { fix m n have key: "n \<le> m \<Longrightarrow> n \<le> f(m)" proof(induct n arbitrary: m) case 0 show ?case by simp next case (Suc n) hence "m \<noteq> 0" by simp then obtain k where [simp]: "m = Suc k" by (metis not0_implies_Suc) have "n \<le> f(k)" using Suc by simp hence "n \<le> f(f(k))" using Suc by simp also have "\<dots> < f(m)" using fff by simp finally show ?case by simp qed } hence "\<And>n. n \<le> f(n)" by simp hence "\<And>n. f(n) < f(Suc n)" by(metis fff order_le_less_trans) hence "f(n) < n+1" by (metis fff lift_Suc_mono_less_iff[of f] Suc_eq_plus1) with `n \<le> f(n)` show "f n = n" by arith qed text{* See \cite{Tao2006}. Possible extension: Should also hold if the range of @{text f} is the reals! *} lemma identity2: fixes f :: "nat \<Rightarrow> nat" assumes "f(k) = k" and "k \<ge> 2" and f_times: "\<And>m n. f(m*n) = f(m)*f(n)" and f_mono: "\<And>m n. m<n \<Longrightarrow> f m < f n" shows "f(n) = n" proof - have 0: "f(0) = 0" by (metis f_mono f_times mult_1_right mult_is_0 nat_less_le nat_mult_eq_cancel_disj not_less_eq) have 1: "f(1) = 1" by (metis f_mono f_times gr_implies_not0 mult_eq_self_implies_10 nat_mult_1_right zero_less_one) have 2: "f 2 = 2" proof - have "2 + (k - 2) = k" using `k \<ge> 2` by arith hence "f(2) \<le> 2" using mono_nat_linear_lb[of f 2 "k - 2",OF f_mono] `f k = k` by simp arith thus "f 2 = 2" using 1 f_mono[of 1 2] by arith qed show ?thesis proof(induct rule:less_induct) case (less i) show ?case proof cases assume "i\<le>1" thus ?case using 0 1 by (auto simp add:le_Suc_eq) next assume "~i\<le>1" show ?case proof cases assume "i mod 2 = 0" hence "EX k. i=2*k" by arith then obtain k where "i = 2*k" .. hence "0 < k" and "k<i" using `~i\<le>1` by arith+ hence "f(k) = k" using less(1) by blast thus "f(i) = i" using `i = 2*k` by(simp add:f_times 2) next assume "i mod 2 \<noteq> 0" hence "EX k. i=2*k+1" by arith then obtain k where "i = 2*k+1" .. hence "0<k" and "k+1<i" using `~i\<le>1` by arith+ have "2*k < f(2*k+1)" proof - have "2*k = 2*f(k)" using less(1) `i=2*k+1` by simp also have "\<dots> = f(2*k)" by(simp add:f_times 2) also have "\<dots> < f(2*k+1)" using f_mono[of "2*k" "2*k+1"] by simp finally show ?thesis . qed moreover have "f(2*k+1) < 2*(k+1)" proof - have "f(2*k+1) < f(2*k+2)" using f_mono[of "2*k+1" "2*k+2"] by simp also have "\<dots> = f(2*(k+1))" by simp also have "\<dots> = 2*f(k+1)" by(simp only:f_times 2) also have "f(k+1) = k+1" using less(1) `i=2*k+1` `~i\<le>1` by simp finally show ?thesis . qed ultimately show "f(i) = i" using `i = 2*k+1` by arith qed qed qed qed text{* One more from Tao's booklet. If @{text f} is also assumed to be continuous, @{term"f(x::real) = x+1"} holds for all reals, not only rationals. Extend the proof! *} theorem plus1: fixes f :: "real \<Rightarrow> real" assumes 0: "f 0 = 1" and f_add: "\<And>x y. 
f(x+y+1) = f x + f y" assumes "r : \<rat>" shows "f(r) = r + 1" proof - { fix i :: int have "f(real i) = real i + 1" proof (induct i rule: int_induct [where k=0]) case base show ?case using 0 by simp next case (step1 i) have "f(real(i+1)) = f(real i + 0 + 1)" by simp also have "\<dots> = f(real i) + f 0" by(rule f_add) also have "\<dots> = real(i+1) + 1" using step1 0 by simp finally show ?case . next case (step2 i) have "f(real i) = f(real(i - 1) + 0 + 1)" by simp also have "\<dots> = f(real(i - 1)) + f 0" by(rule f_add) also have "\<dots> = f(real(i - 1)) + 1 " using 0 by simp finally show ?case using step2 by simp qed } note f_int = this { fix n r have "f(real(Suc n)*r + real n) = real(Suc n) * f r" proof(induct n) case 0 show ?case by simp next case (Suc n) have "real(Suc(Suc n))*r + real(Suc n) = r + (real(Suc n)*r + real n) + 1" (is "?a = ?b") by(simp add:real_of_nat_Suc field_simps) hence "f ?a = f ?b" by simp also have "\<dots> = f r + f(real(Suc n)*r + real n)" by(rule f_add) also have "\<dots> = f r + real(Suc n) * f r" by(simp only:Suc) finally show ?case by(simp add:real_of_nat_Suc field_simps) qed } note 1 = this { fix n::nat and r assume "n\<noteq>0" have "f(real(n)*r + real(n - 1)) = real(n) * f r" proof(cases n) case 0 thus ?thesis using `n\<noteq>0` by simp next case Suc thus ?thesis using `n\<noteq>0` by (simp add:1) qed } note f_mult = this from `r:\<rat>` obtain i::int and n::nat where r: "r = real i/real n" and "n\<noteq>0" by(fastforce simp:Rats_eq_int_div_nat) have "real(n)*f(real i/real n) = f(real i + real(n - 1))" using `n\<noteq>0` by(simp add:f_mult[symmetric]) also have "\<dots> = f(real(i + int n - 1))" using `n\<noteq>0`[simplified] by (metis One_nat_def Suc_leI int_1 add_diff_eq real_of_int_add real_of_int_of_nat_eq zdiff_int) also have "\<dots> = real(i + int n - 1) + 1" by(rule f_int) also have "\<dots> = real i + real n" by arith finally show ?thesis using `n\<noteq>0` unfolding r by (simp add:field_simps) qed text{* The only total model of a naive recursion equation of factorial on integers is 0 for all negative arguments. Probably folklore. *} theorem ifac_neg0: fixes ifac :: "int \<Rightarrow> int" assumes ifac_rec: "\<And>i. ifac i = (if i=0 then 1 else i*ifac(i - 1))" shows "i<0 \<Longrightarrow> ifac i = 0" proof(rule ccontr) assume 0: "i<0" "ifac i \<noteq> 0" { fix j assume "j \<le> i" have "ifac j \<noteq> 0" apply(rule int_le_induct[OF `j\<le>i`]) apply(rule `ifac i \<noteq> 0`) apply (metis `i<0` ifac_rec linorder_not_le mult_eq_0_iff) done } note below0 = this { fix j assume "j<i" have "1 < -j" using `j<i` `i<0` by arith have "ifac(j - 1) \<noteq> 0" using `j<i` by(simp add: below0) then have "\<bar>ifac (j - 1)\<bar> < (-j) * \<bar>ifac (j - 1)\<bar>" using `j<i` mult_le_less_imp_less[OF order_refl[of "abs(ifac(j - 1))"] `1 < -j`] by(simp add:mult.commute) hence "abs(ifac(j - 1)) < abs(ifac j)" using `1 < -j` by(simp add: ifac_rec[of "j"] abs_mult) } note not_wf = this let ?f = "%j. nat(abs(ifac(i - int(j+1))))" obtain k where "\<not> ?f (Suc k) < ?f k" using wf_no_infinite_down_chainE[OF wf_less, of "?f"] by blast moreover have "i - int (k + 1) - 1 = i - int (Suc k + 1)" by arith ultimately show False using not_wf[of "i - int(k+1)"] by (simp only:) arith qed end
Formal statement is: lemmas prime_divprod_pow_nat = prime_elem_divprod_pow[where ?'a = nat] Informal statement is: If $p$ is a prime natural number, $a$ and $b$ are coprime natural numbers, and $p^n$ divides $a b$, then $p^n$ divides $a$ or $p^n$ divides $b$.
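For example (an illustrative instance of the statement above): take $p = 2$, $n = 3$, $a = 8$, $b = 9$; then $a$ and $b$ are coprime and $2^3 = 8$ divides $ab = 72$, so the lemma gives $2^3 \mid a$ or $2^3 \mid b$, and indeed $2^3 \mid 8$.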
= = Influence on postwar culture = =
program ftr implicit doubleprecision (a-h,o-z) integer*4,parameter :: nf=20000 real*8, dimension(nf) :: w, t complex*16, dimension(nf) :: ac,ft complex*16 im,te real*4 a1,a2,a3,a4,a5 character*32 fil1,fil2,fil3 im=(0.d0,1.d0) open(20,file='ftr_in',status='old') read(20,*) nm,beta read(20,*) fil1 open(25,file='cor1', status='old') open(26,file='fft.dat') ! energy units eu=27.2d0/1836.15 emx=4d0 write(*,*) 'energy unit',eu do 1 i=1,nm read(25,1000) x1,ac(i),xx t(i)=x1 1 continue print *,'ac',ac(1) ac(1)=ac(1)/2.d0 ac(nm)=ac(nm)/2.d0 !-------- damp the signal ---------------- ! beta = 0d0 ! beta=-log(beta/x2)/t(nm)**2 ! do i=1,nm ! ac(i)=ac(i)*exp(-beta*t(i)**2) ! enddo pi=4.d0*atan(1.d0) dt=abs(t(2)-t(1)) write(*,*) dt,nm h=0.5d0/nm/dt*pi nw=emx/h write(*,*) h,nw de = 0.05 nw = 100 do 3 i=1,nw w(i)=de*i 3 continue print *,im,ac(1),dt,t(1),w(1) do 2 k=1,nw ft(k)=(0.d0,0.d0) do i=1,nm ft(k) = ac(i)*exp(im*w(k)*t(i)) + ft(k) enddo ! fin=dreal(ft(k))*dt fin=abs(ft(k))*dt !ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc print *,'ft',ft(k) write(26,1000) w(k),ft(k)*dt 2 continue 1001 format(20(e14.5,1x)) 1000 format(20(e14.7,1x)) stop end
[STATEMENT] lemma (in Module) fsps_chain_boundTr2_2:"\<lbrakk>R module N; free_generator R M H; f \<in> H \<rightarrow> carrier N; C \<subseteq> fsps R M N f H; \<forall>a\<in>C. \<forall>b\<in>C. fst a \<subseteq> fst b \<or> fst b \<subseteq> fst a; C \<noteq> {}; (a, b) \<in> C; x \<in> fgs R M a; (a1, b1) \<in> C; x \<in> fgs R M a1\<rbrakk> \<Longrightarrow> b x = b1 x" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>R module N; free_generator R M H; f \<in> H \<rightarrow> carrier N; C \<subseteq> fsps R M N f H; \<forall>a\<in>C. \<forall>b\<in>C. fst a \<subseteq> fst b \<or> fst b \<subseteq> fst a; C \<noteq> {}; (a, b) \<in> C; x \<in> fgs R M a; (a1, b1) \<in> C; x \<in> fgs R M a1\<rbrakk> \<Longrightarrow> b x = b1 x [PROOF STEP] apply (frule_tac x = "(a, b)" in bspec, assumption, thin_tac "\<forall>a\<in>C. \<forall>b\<in>C. fst a \<subseteq> fst b \<or> fst b \<subseteq> fst a", frule_tac x = "(a1, b1)" in bspec, assumption, thin_tac "\<forall>ba\<in>C. fst (a, b) \<subseteq> fst ba \<or> fst ba \<subseteq> fst (a, b)", simp) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>R module N; free_generator R M H; f \<in> H \<rightarrow> carrier N; C \<subseteq> fsps R M N f H; C \<noteq> {}; (a, b) \<in> C; x \<in> fgs R M a; (a1, b1) \<in> C; x \<in> fgs R M a1; a \<subseteq> a1 \<or> a1 \<subseteq> a\<rbrakk> \<Longrightarrow> b x = b1 x [PROOF STEP] apply (erule disjE) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<lbrakk>R module N; free_generator R M H; f \<in> H \<rightarrow> carrier N; C \<subseteq> fsps R M N f H; C \<noteq> {}; (a, b) \<in> C; x \<in> fgs R M a; (a1, b1) \<in> C; x \<in> fgs R M a1; a \<subseteq> a1\<rbrakk> \<Longrightarrow> b x = b1 x 2. \<lbrakk>R module N; free_generator R M H; f \<in> H \<rightarrow> carrier N; C \<subseteq> fsps R M N f H; C \<noteq> {}; (a, b) \<in> C; x \<in> fgs R M a; (a1, b1) \<in> C; x \<in> fgs R M a1; a1 \<subseteq> a\<rbrakk> \<Longrightarrow> b x = b1 x [PROOF STEP] apply (rule fsps_chain_boundTr2_1[of N H f C a b a1 b1], assumption+) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>R module N; free_generator R M H; f \<in> H \<rightarrow> carrier N; C \<subseteq> fsps R M N f H; C \<noteq> {}; (a, b) \<in> C; x \<in> fgs R M a; (a1, b1) \<in> C; x \<in> fgs R M a1; a1 \<subseteq> a\<rbrakk> \<Longrightarrow> b x = b1 x [PROOF STEP] apply (frule fsps_chain_boundTr2_1[of N H f C a1 b1 a b], assumption+) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>R module N; free_generator R M H; f \<in> H \<rightarrow> carrier N; C \<subseteq> fsps R M N f H; C \<noteq> {}; (a, b) \<in> C; x \<in> fgs R M a; (a1, b1) \<in> C; x \<in> fgs R M a1; a1 \<subseteq> a; b1 x = b x\<rbrakk> \<Longrightarrow> b x = b1 x [PROOF STEP] apply (rule sym, assumption) [PROOF STATE] proof (prove) goal: No subgoals! [PROOF STEP] done
function [node,face,elem]=meshabox(p0,p1,opt,nodesize) % % [node,face,elem]=meshabox(p0,p1,opt,nodesize) % % create the surface and tetrahedral mesh of a box geometry % % author: Qianqian Fang, <fangq at nmr.mgh.harvard.edu> % % input: % p0: coordinates (x,y,z) for one end of the box diagonal % p1: coordinates (x,y,z) for the other end of the box diagonal % opt: maximum volume of the tetrahedral elements % nodesize: 1 or an 8x1 array, size of the element near each vertex % % output: % node: node coordinates, 3 columns for x, y and z respectively % face: integer array with dimensions of NB x 3, each row represents % a surface mesh face element % elem: integer array with dimensions of NE x 4, each row represents % a tetrahedron % % example: % [node,face,elem]=meshabox([2 3 2],[6 12 15],0.1,1); % plotmesh(node,elem,'x>4'); % % -- this function is part of iso2mesh toolbox (http://iso2mesh.sf.net) % if(nargin<4) nodesize=1; end [node,elem,face]=surf2mesh([],[],p0,p1,1,opt,[],[],nodesize); elem=elem(:,1:4); face=face(:,1:3);
\hypertarget{project-velocity}{% \section{Project Velocity}\label{project-velocity}} Question: What is the development speed for an organization? \hypertarget{description}{% \subsection{Description}\label{description}} Project velocity is the number of issues, the number of pull requests, volume of commits, and number of contributors as an indicator of 'innovation'. \hypertarget{objectives}{% \subsection{Objectives}\label{objectives}} Gives an Open Source Program Office (OSPO) manager a way to compare the project velocity across a portfolio of projects. The OSPO manager can use the Project Velocity metric to: \begin{itemize} \tightlist \item Report project velocity of open source projects vs in-house projects \item Compare project velocity across a portfolio of projects \item Identify which projects grow beyond internal contributors (when filtering internal vs. external contributors) \item Identify promising areas in which to get involved \item Highlight areas likely to be the successful platforms over the next several years \end{itemize} \href{https://www.cncf.io/blog/2017/06/05/30-highest-velocity-open-source-projects}{See Example} \hypertarget{implementation}{% \subsection{Implementation}\label{implementation}} Base metrics include: \begin{itemize} \tightlist \item \href{https://github.com/chaoss/wg-evolution/blob/master/metrics/Issues_Closed.md}{issues closed} \item \href{https://github.com/chaoss/wg-evolution/blob/master/metrics/Reviews.md}{number of reviews} \item \href{https://github.com/chaoss/wg-evolution/blob/master/metrics/Code_Changes.md}{\# of code changes} \item \href{https://github.com/chaoss/wg-risk/blob/master/metrics/Committers.md}{\# of committers} \end{itemize} \hypertarget{filters}{% \subsubsection{Filters}\label{filters}} \begin{itemize} \tightlist \item Internal vs external contributors \item Project sources (e.g., internal repositories, open-source repositories, and competitor open-source repositories) \item Time \end{itemize} \hypertarget{visualizations}{% \subsubsection{Visualizations}\label{visualizations}} \begin{itemize} \tightlist \item X-Axis: Logarithmic scale for Code Changes \item Y-Axis: Logarithmic scale of Sum of Number of Issues and Number of Reviews \item Dot-size: Committers \item Dots are projects \end{itemize} \includegraphics{images/project-velocity_visualization.png} \href{https://www.cncf.io/blog/2017/06/05/30-highest-velocity-open-source-projects/}{From CNCF} \hypertarget{tools-providing-the-metric}{% \subsubsection{Tools providing the Metric}\label{tools-providing-the-metric}} \begin{itemize} \tightlist \item CNCF - \url{https://github.com/cncf/velocity} \end{itemize} \hypertarget{references}{% \subsection{References}\label{references}} \begin{itemize} \tightlist \item \href{https://www.threefivetwo.com/blog/can-open-source-innovation-work-in-the-enterprise}{Can Open Source Innovation work in the Enterprise?} \item \href{https://www.nearform.com/blog/want-a-high-performing-culture-make-way-for-open-innovation}{Open Innovation for a High Performance Culture} \item \href{https://www.cio.com/article/3213146/open-source-is-powering-the-digital-enterprise.html}{Open Source for the Digital Enterprise} \item \href{https://www.cncf.io/blog/2017/06/05/30-highest-velocity-open-source-projects}{Highest Velocity Open Source Projects} \end{itemize}
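\hypertarget{illustrative-sketch}{%
\subsection{Illustrative sketch}\label{illustrative-sketch}}

A minimal sketch of how one data point of the visualization described
under Implementation could be computed, assuming per-project counts of
code changes, closed issues, reviews, and committers are already
available; the function and field names below are illustrative and not
part of any tool listed above:

\begin{verbatim}
import math

def velocity_point(code_changes, issues_closed, reviews, committers):
    # X-axis: logarithmic scale of code changes
    x = math.log10(max(code_changes, 1))
    # Y-axis: logarithmic scale of the sum of issues and reviews
    y = math.log10(max(issues_closed + reviews, 1))
    # Dot size: number of committers
    return {"x": x, "y": y, "size": committers}

# Hypothetical counts for a single project
print(velocity_point(code_changes=12000, issues_closed=3400,
                     reviews=2100, committers=250))
\end{verbatim}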
PICacheInstance <- function(id = NULL, lastRefreshTime = NULL, willRefreshAfter = NULL, scheduledExpirationTime = NULL, user = NULL, webException = NULL) { if (is.null(id) == FALSE) { if (is.character(id) == FALSE) { return (print(paste0("Error: id must be a string."))) } } if (is.null(lastRefreshTime) == FALSE) { if (is.character(lastRefreshTime) == FALSE) { return (print(paste0("Error: lastRefreshTime must be a string."))) } } if (is.null(willRefreshAfter) == FALSE) { if (is.character(willRefreshAfter) == FALSE) { return (print(paste0("Error: willRefreshAfter must be a string."))) } } if (is.null(scheduledExpirationTime) == FALSE) { if (is.character(scheduledExpirationTime) == FALSE) { return (print(paste0("Error: scheduledExpirationTime must be a string."))) } } if (is.null(user) == FALSE) { if (is.character(user) == FALSE) { return (print(paste0("Error: user must be a string."))) } } if (is.null(webException) == FALSE) { className <- attr(webException, "className") if ((is.null(className)) || (className != "PIWebException")) { return (print(paste0("Error: the class from the parameter webException should be PIWebException."))) } } value <- list( Id = id, LastRefreshTime = lastRefreshTime, WillRefreshAfter = willRefreshAfter, ScheduledExpirationTime = scheduledExpirationTime, User = user, WebException = webException) valueCleaned <- rmNullObs(value) attr(valueCleaned, "className") <- "PICacheInstance" return(valueCleaned) }
Formal statement is: lemma bilinear_times: fixes c::"'a::real_algebra" shows "bilinear (\<lambda>x y::'a. x*y)" Informal statement is: The multiplication operation is bilinear.
lemma closed_negations: fixes S :: "'a::real_normed_vector set" assumes "closed S" shows "closed ((\<lambda>x. -x) ` S)"
/- Copyright (c) 2019 Neil Strickland. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Neil Strickland -/ import tactic.ring import data.pnat.prime /-! # Euclidean algorithm for ℕ This file sets up a version of the Euclidean algorithm that only works with natural numbers. Given `0 < a, b`, it computes the unique `(w, x, y, z, d)` such that the following identities hold: * `a = (w + x) d` * `b = (y + z) d` * `w * z = x * y + 1` `d` is then the gcd of `a` and `b`, and `a' := a / d = w + x` and `b' := b / d = y + z` are coprime. This story is closely related to the structure of SL₂(ℕ) (as a free monoid on two generators) and the theory of continued fractions. ## Main declarations * `xgcd_type`: Helper type in defining the gcd. Encapsulates `(wp, x, y, zp, ap, bp)`. where `wp` `zp`, `ap`, `bp` are the variables getting changed through the algorithm. * `is_special`: States `wp * zp = x * y + 1` * `is_reduced`: States `ap = a ∧ bp = b` ## Notes See `nat.xgcd` for a very similar algorithm allowing values in `ℤ`. -/ open nat namespace pnat /-- A term of xgcd_type is a system of six naturals. They should be thought of as representing the matrix [[w, x], [y, z]] = [[wp + 1, x], [y, zp + 1]] together with the vector [a, b] = [ap + 1, bp + 1]. -/ @[derive inhabited] structure xgcd_type := (wp x y zp ap bp : ℕ) namespace xgcd_type variable (u : xgcd_type) instance : has_sizeof xgcd_type := ⟨λ u, u.bp⟩ /-- The has_repr instance converts terms to strings in a way that reflects the matrix/vector interpretation as above. -/ instance : has_repr xgcd_type := ⟨λ u, "[[[" ++ (repr (u.wp + 1)) ++ ", " ++ (repr u.x) ++ "], [" ++ (repr u.y) ++ ", " ++ (repr (u.zp + 1)) ++ "]], [" ++ (repr (u.ap + 1)) ++ ", " ++ (repr (u.bp + 1)) ++ "]]"⟩ def mk' (w : ℕ+) (x : ℕ) (y : ℕ) (z : ℕ+) (a : ℕ+) (b : ℕ+) : xgcd_type := mk w.val.pred x y z.val.pred a.val.pred b.val.pred def w : ℕ+ := succ_pnat u.wp def z : ℕ+ := succ_pnat u.zp def a : ℕ+ := succ_pnat u.ap def b : ℕ+ := succ_pnat u.bp def r : ℕ := (u.ap + 1) % (u.bp + 1) def q : ℕ := (u.ap + 1) / (u.bp + 1) def qp : ℕ := u.q - 1 /-- The map v gives the product of the matrix [[w, x], [y, z]] = [[wp + 1, x], [y, zp + 1]] and the vector [a, b] = [ap + 1, bp + 1]. The map vp gives [sp, tp] such that v = [sp + 1, tp + 1]. -/ def vp : ℕ × ℕ := ⟨ u.wp + u.x + u.ap + u.wp * u.ap + u.x * u.bp, u.y + u.zp + u.bp + u.y * u.ap + u.zp * u.bp ⟩ def v : ℕ × ℕ := ⟨u.w * u.a + u.x * u.b, u.y * u.a + u.z * u.b⟩ def succ₂ (t : ℕ × ℕ) : ℕ × ℕ := ⟨t.1.succ, t.2.succ⟩ theorem v_eq_succ_vp : u.v = succ₂ u.vp := by { ext; dsimp [v, vp, w, z, a, b, succ₂]; repeat { rw [nat.succ_eq_add_one] }; ring } /-- is_special holds if the matrix has determinant one. -/ def is_special : Prop := u.wp + u.zp + u.wp * u.zp = u.x * u.y def is_special' : Prop := u.w * u.z = succ_pnat (u.x * u.y) theorem is_special_iff : u.is_special ↔ u.is_special' := begin dsimp [is_special, is_special'], split; intro h, { apply eq, dsimp [w, z, succ_pnat], rw [← h], repeat { rw [nat.succ_eq_add_one] }, ring }, { apply nat.succ.inj, replace h := congr_arg (coe : ℕ+ → ℕ) h, rw [mul_coe, w, z] at h, repeat { rw [succ_pnat_coe, nat.succ_eq_add_one] at h }, repeat { rw [nat.succ_eq_add_one] }, rw [← h], ring } end /-- is_reduced holds if the two entries in the vector are the same. The reduction algorithm will produce a system with this property, whose product vector is the same as for the original system. 
-/ def is_reduced : Prop := u.ap = u.bp def is_reduced' : Prop := u.a = u.b theorem is_reduced_iff : u.is_reduced ↔ u.is_reduced' := ⟨ congr_arg succ_pnat, succ_pnat_inj ⟩ def flip : xgcd_type := { wp := u.zp, x := u.y, y := u.x, zp := u.wp, ap := u.bp, bp := u.ap } @[simp] theorem flip_w : (flip u).w = u.z := rfl @[simp] theorem flip_x : (flip u).x = u.y := rfl @[simp] theorem flip_y : (flip u).y = u.x := rfl @[simp] theorem flip_z : (flip u).z = u.w := rfl @[simp] theorem flip_a : (flip u).a = u.b := rfl @[simp] theorem flip_b : (flip u).b = u.a := rfl theorem flip_is_reduced : (flip u).is_reduced ↔ u.is_reduced := by { dsimp [is_reduced, flip], split; intro h; exact h.symm } theorem flip_is_special : (flip u).is_special ↔ u.is_special := by { dsimp [is_special, flip], rw[mul_comm u.x, mul_comm u.zp, add_comm u.zp] } theorem flip_v : (flip u).v = (u.v).swap := by { dsimp [v], ext, { simp only, ring }, { simp only, ring } } /-- Properties of division with remainder for a / b. -/ theorem rq_eq : u.r + (u.bp + 1) * u.q = u.ap + 1 := nat.mod_add_div (u.ap + 1) (u.bp + 1) theorem qp_eq (hr : u.r = 0) : u.q = u.qp + 1 := begin by_cases hq : u.q = 0, { let h := u.rq_eq, rw [hr, hq, mul_zero, add_zero] at h, cases h }, { exact (nat.succ_pred_eq_of_pos (nat.pos_of_ne_zero hq)).symm } end /-- The following function provides the starting point for our algorithm. We will apply an iterative reduction process to it, which will produce a system satisfying is_reduced. The gcd can be read off from this final system. -/ def start (a b : ℕ+) : xgcd_type := ⟨0, 0, 0, 0, a - 1, b - 1⟩ theorem start_is_special (a b : ℕ+) : (start a b).is_special := by { dsimp [start, is_special], refl } theorem start_v (a b : ℕ+) : (start a b).v = ⟨a, b⟩ := begin dsimp [start, v, xgcd_type.a, xgcd_type.b, w, z], rw [one_mul, one_mul, zero_mul, zero_mul, zero_add, add_zero], rw [← nat.pred_eq_sub_one, ← nat.pred_eq_sub_one], rw [nat.succ_pred_eq_of_pos a.pos, nat.succ_pred_eq_of_pos b.pos] end def finish : xgcd_type := xgcd_type.mk u.wp ((u.wp + 1) * u.qp + u.x) u.y (u.y * u.qp + u.zp) u.bp u.bp theorem finish_is_reduced : u.finish.is_reduced := by { dsimp [is_reduced], refl } theorem finish_is_special (hs : u.is_special) : u.finish.is_special := begin dsimp [is_special, finish] at hs ⊢, rw [add_mul _ _ u.y, add_comm _ (u.x * u.y), ← hs], ring end theorem finish_v (hr : u.r = 0) : u.finish.v = u.v := begin let ha : u.r + u.b * u.q = u.a := u.rq_eq, rw [hr, zero_add] at ha, ext, { change (u.wp + 1) * u.b + ((u.wp + 1) * u.qp + u.x) * u.b = u.w * u.a + u.x * u.b, have : u.wp + 1 = u.w := rfl, rw [this, ← ha, u.qp_eq hr], ring }, { change u.y * u.b + (u.y * u.qp + u.z) * u.b = u.y * u.a + u.z * u.b, rw [← ha, u.qp_eq hr], ring } end /-- This is the main reduction step, which is used when u.r ≠ 0, or equivalently b does not divide a. -/ def step : xgcd_type := xgcd_type.mk (u.y * u.q + u.zp) u.y ((u.wp + 1) * u.q + u.x) u.wp u.bp (u.r - 1) /-- We will apply the above step recursively. The following result is used to ensure that the process terminates. 
-/ theorem step_wf (hr : u.r ≠ 0) : sizeof u.step < sizeof u := begin change u.r - 1 < u.bp, have h₀ : (u.r - 1) + 1 = u.r := nat.succ_pred_eq_of_pos (nat.pos_of_ne_zero hr), have h₁ : u.r < u.bp + 1 := nat.mod_lt (u.ap + 1) u.bp.succ_pos, rw[← h₀] at h₁, exact lt_of_succ_lt_succ h₁, end theorem step_is_special (hs : u.is_special) : u.step.is_special := begin dsimp [is_special, step] at hs ⊢, rw [mul_add, mul_comm u.y u.x, ← hs], ring end /-- The reduction step does not change the product vector. -/ theorem step_v (hr : u.r ≠ 0) : u.step.v = (u.v).swap := begin let ha : u.r + u.b * u.q = u.a := u.rq_eq, let hr : (u.r - 1) + 1 = u.r := (add_comm _ 1).trans (add_tsub_cancel_of_le (nat.pos_of_ne_zero hr)), ext, { change ((u.y * u.q + u.z) * u.b + u.y * (u.r - 1 + 1) : ℕ) = u.y * u.a + u.z * u.b, rw [← ha, hr], ring }, { change ((u.w * u.q + u.x) * u.b + u.w * (u.r - 1 + 1) : ℕ) = u.w * u.a + u.x * u.b, rw [← ha, hr], ring } end /-- We can now define the full reduction function, which applies step as long as possible, and then applies finish. Note that the "have" statement puts a fact in the local context, and the equation compiler uses this fact to help construct the full definition in terms of well-founded recursion. The same fact needs to be introduced in all the inductive proofs of properties given below. -/ def reduce : xgcd_type → xgcd_type | u := dite (u.r = 0) (λ h, u.finish) (λ h, have sizeof u.step < sizeof u, from u.step_wf h, flip (reduce u.step)) theorem reduce_a {u : xgcd_type} (h : u.r = 0) : u.reduce = u.finish := by { rw [reduce], simp only, rw [if_pos h] } theorem reduce_b {u : xgcd_type} (h : u.r ≠ 0) : u.reduce = u.step.reduce.flip := by { rw [reduce], simp only, rw [if_neg h, step] } theorem reduce_reduced : ∀ (u : xgcd_type), u.reduce.is_reduced | u := dite (u.r = 0) (λ h, by { rw [reduce_a h], exact u.finish_is_reduced }) (λ h, have sizeof u.step < sizeof u, from u.step_wf h, by { rw [reduce_b h, flip_is_reduced], apply reduce_reduced }) theorem reduce_reduced' (u : xgcd_type) : u.reduce.is_reduced' := (is_reduced_iff _).mp u.reduce_reduced theorem reduce_special : ∀ (u : xgcd_type), u.is_special → u.reduce.is_special | u := dite (u.r = 0) (λ h hs, by { rw [reduce_a h], exact u.finish_is_special hs }) (λ h hs, have sizeof u.step < sizeof u, from u.step_wf h, by { rw [reduce_b h], exact (flip_is_special _).mpr (reduce_special _ (u.step_is_special hs)) }) theorem reduce_special' (u : xgcd_type) (hs : u.is_special) : u.reduce.is_special' := (is_special_iff _).mp (u.reduce_special hs) theorem reduce_v : ∀ (u : xgcd_type), u.reduce.v = u.v | u := dite (u.r = 0) (λ h, by {rw[reduce_a h, finish_v u h]}) (λ h, have sizeof u.step < sizeof u, from u.step_wf h, by { rw[reduce_b h, flip_v, reduce_v (step u), step_v u h, prod.swap_swap] }) end xgcd_type section gcd variables (a b : ℕ+) def xgcd : xgcd_type := (xgcd_type.start a b).reduce def gcd_d : ℕ+ := (xgcd a b).a def gcd_w : ℕ+ := (xgcd a b).w def gcd_x : ℕ := (xgcd a b).x def gcd_y : ℕ := (xgcd a b).y def gcd_z : ℕ+ := (xgcd a b).z def gcd_a' : ℕ+ := succ_pnat ((xgcd a b).wp + (xgcd a b).x) def gcd_b' : ℕ+ := succ_pnat ((xgcd a b).y + (xgcd a b).zp) theorem gcd_a'_coe : ((gcd_a' a b) : ℕ) = (gcd_w a b) + (gcd_x a b) := by { dsimp [gcd_a', gcd_x, gcd_w, xgcd_type.w], rw [nat.succ_eq_add_one, nat.succ_eq_add_one, add_right_comm] } theorem gcd_b'_coe : ((gcd_b' a b) : ℕ) = (gcd_y a b) + (gcd_z a b) := by { dsimp [gcd_b', gcd_y, gcd_z, xgcd_type.z], rw [nat.succ_eq_add_one, nat.succ_eq_add_one, add_assoc] } theorem gcd_props : let 
d := gcd_d a b, w := gcd_w a b, x := gcd_x a b, y := gcd_y a b, z := gcd_z a b, a' := gcd_a' a b, b' := gcd_b' a b in (w * z = succ_pnat (x * y) ∧ (a = a' * d) ∧ (b = b' * d) ∧ z * a' = succ_pnat (x * b') ∧ w * b' = succ_pnat (y * a') ∧ (z * a : ℕ) = x * b + d ∧ (w * b : ℕ) = y * a + d ) := begin intros, let u := (xgcd_type.start a b), let ur := u.reduce, have ha : d = ur.a := rfl, have hb : d = ur.b := u.reduce_reduced', have ha' : (a' : ℕ) = w + x := gcd_a'_coe a b, have hb' : (b' : ℕ) = y + z := gcd_b'_coe a b, have hdet : w * z = succ_pnat (x * y) := u.reduce_special' rfl, split, exact hdet, have hdet' : ((w * z) : ℕ) = x * y + 1 := by { rw [← mul_coe, hdet, succ_pnat_coe] }, have huv : u.v = ⟨a, b⟩ := (xgcd_type.start_v a b), let hv : prod.mk (w * d + x * ur.b : ℕ) (y * d + z * ur.b : ℕ) = ⟨a, b⟩ := u.reduce_v.trans (xgcd_type.start_v a b), rw [← hb, ← add_mul, ← add_mul, ← ha', ← hb'] at hv, have ha'' : (a : ℕ) = a' * d := (congr_arg prod.fst hv).symm, have hb'' : (b : ℕ) = b' * d := (congr_arg prod.snd hv).symm, split, exact eq ha'', split, exact eq hb'', have hza' : (z * a' : ℕ) = x * b' + 1, by { rw [ha', hb', mul_add, mul_add, mul_comm (z : ℕ), hdet'], ring }, have hwb' : (w * b' : ℕ) = y * a' + 1, by { rw [ha', hb', mul_add, mul_add, hdet'], ring }, split, { apply eq, rw [succ_pnat_coe, nat.succ_eq_add_one, mul_coe, hza'] }, split, { apply eq, rw [succ_pnat_coe, nat.succ_eq_add_one, mul_coe, hwb'] }, rw [ha'', hb''], repeat { rw [← mul_assoc] }, rw [hza', hwb'], split; ring, end theorem gcd_eq : gcd_d a b = gcd a b := begin rcases gcd_props a b with ⟨h₀, h₁, h₂, h₃, h₄, h₅, h₆⟩, apply dvd_antisymm, { apply dvd_gcd, exact dvd.intro (gcd_a' a b) (h₁.trans (mul_comm _ _)).symm, exact dvd.intro (gcd_b' a b) (h₂.trans (mul_comm _ _)).symm}, { have h₇ : (gcd a b : ℕ) ∣ (gcd_z a b) * a := (nat.gcd_dvd_left a b).trans (dvd_mul_left _ _), have h₈ : (gcd a b : ℕ) ∣ (gcd_x a b) * b := (nat.gcd_dvd_right a b).trans (dvd_mul_left _ _), rw[h₅] at h₇, rw dvd_iff, exact (nat.dvd_add_iff_right h₈).mpr h₇,} end theorem gcd_det_eq : (gcd_w a b) * (gcd_z a b) = succ_pnat ((gcd_x a b) * (gcd_y a b)) := (gcd_props a b).1 theorem gcd_b_eq : b = (gcd_b' a b) * (gcd a b) := (gcd_eq a b) ▸ (gcd_props a b).2.2.1 theorem gcd_rel_left' : (gcd_z a b) * (gcd_a' a b) = succ_pnat ((gcd_x a b) * (gcd_b' a b)) := (gcd_props a b).2.2.2.1 theorem gcd_rel_right' : (gcd_w a b) * (gcd_b' a b) = succ_pnat ((gcd_y a b) * (gcd_a' a b)) := (gcd_props a b).2.2.2.2.1 theorem gcd_rel_left : ((gcd_z a b) * a : ℕ) = (gcd_x a b) * b + (gcd a b) := (gcd_eq a b) ▸ (gcd_props a b).2.2.2.2.2.1 theorem gcd_rel_right : ((gcd_w a b) * b : ℕ) = (gcd_y a b) * a + (gcd a b) := (gcd_eq a b) ▸ (gcd_props a b).2.2.2.2.2.2 end gcd end pnat
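-- A concrete sanity check of the identities promised by `gcd_props`
-- (illustrative values only, not a trace of the `xgcd` computation):
-- for a = 8 and b = 12, the tuple w = 1, x = 1, y = 1, z = 2, d = 4
-- satisfies the three defining identities.
example : 8 = (1 + 1) * 4 := rfl       -- a = (w + x) * d
example : 12 = (1 + 2) * 4 := rfl      -- b = (y + z) * d
example : 1 * 2 = 1 * 1 + 1 := rfl     -- w * z = x * y + 1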
{-# OPTIONS --safe #-} module Definition.Conversion.HelperDecidable where open import Definition.Untyped open import Definition.Untyped.Properties open import Definition.Typed open import Definition.Typed.Properties open import Definition.Conversion open import Definition.Conversion.Whnf open import Definition.Conversion.Soundness open import Definition.Conversion.Symmetry open import Definition.Conversion.Stability open import Definition.Conversion.Conversion open import Definition.Conversion.Lift open import Definition.Typed.Consequences.Syntactic open import Definition.Typed.Consequences.Substitution open import Definition.Typed.Consequences.Injectivity open import Definition.Typed.Consequences.Reduction open import Definition.Typed.Consequences.Equality open import Definition.Typed.Consequences.Inequality as IE open import Definition.Typed.Consequences.NeTypeEq open import Definition.Typed.Consequences.SucCong open import Tools.Nat open import Tools.Product open import Tools.Empty open import Tools.Nullary import Tools.PropositionalEquality as PE dec-relevance : ∀ (r r′ : Relevance) → Dec (r PE.≡ r′) dec-relevance ! ! = yes PE.refl dec-relevance ! % = no (λ ()) dec-relevance % ! = no (λ ()) dec-relevance % % = yes PE.refl dec-level : ∀ (l l′ : Level) → Dec (l PE.≡ l′) dec-level ⁰ ⁰ = yes PE.refl dec-level ⁰ ¹ = no (λ ()) dec-level ¹ ⁰ = no (λ ()) dec-level ¹ ¹ = yes PE.refl -- Algorithmic equality of variables infers propositional equality. strongVarEq : ∀ {m n A Γ l} → Γ ⊢ var n ~ var m ↑! A ^ l → n PE.≡ m strongVarEq (var-refl x x≡y) = x≡y -- Helper function for decidability of applications. dec~↑!-app : ∀ {k k₁ l l₁ F F₁ G G₁ rF B Γ Δ lF lG lΠ lK} → ⊢ Γ ≡ Δ → Γ ⊢ k ∷ Π F ^ rF ° lF ▹ G ° lG ° lΠ ^ [ ! , ι lΠ ] → Δ ⊢ k₁ ∷ Π F₁ ^ rF ° lF ▹ G₁ ° lG ° lΠ ^ [ ! , ι lΠ ] → Γ ⊢ k ~ k₁ ↓! B ^ lK → Dec (Γ ⊢ l [genconv↑] l₁ ∷ F ^ [ rF , ι lF ]) → Dec (∃ λ A → ∃ λ lA → Γ ⊢ k ∘ l ^ lΠ ~ k₁ ∘ l₁ ^ lΠ ↑! A ^ lA) dec~↑!-app Γ≡Δ k k₁ k~k₁ (yes p) = let whnfA , neK , neL = ne~↓! k~k₁ ⊢A , ⊢k , ⊢l = syntacticEqTerm (soundness~↓! k~k₁) l≡l , ΠFG₁≡A = neTypeEq neK k ⊢k H , E , A≡ΠHE = Π≡A ΠFG₁≡A whnfA F≡H , rF≡rH , lF≡lH , lG≡lE , G₁≡E = injectivity (PE.subst (λ x → _ ⊢ _ ≡ x ^ _) A≡ΠHE ΠFG₁≡A) in yes (E [ _ ] , _ , app-cong (PE.subst₂ (λ x y → _ ⊢ _ ~ _ ↓! x ^ y) A≡ΠHE (PE.sym l≡l) k~k₁) (convConvTerm%! p F≡H)) dec~↑!-app Γ≡Δ k k₁ k~k₁ (no ¬p) = no (λ { (_ , _ , app-cong k~k₁′ p) → let whnfA , neK , neL = ne~↓! k~k₁′ ⊢A , ⊢k , ⊢l = syntacticEqTerm (soundness~↓! k~k₁′) l≡l , Π≡Π = neTypeEq neK k ⊢k F≡F , rF≡rF , lF≡lF , lG≡lG , G≡G = injectivity Π≡Π in ¬p (convConvTerm%! (PE.subst₂ (λ x y → _ ⊢ _ [genconv↑] _ ∷ _ ^ [ x , ι y ]) (PE.sym rF≡rF) (PE.sym lF≡lF) p) (sym F≡F)) }) nonNeutralℕ : Neutral ℕ → ⊥ nonNeutralℕ () nonNeutralU : ∀ {r l} → Neutral (Univ r l) → ⊥ nonNeutralU () Idℕ-elim : ∀ {Γ l A B t u t' u'} → Neutral A → Γ ⊢ Id A t u ~ Id ℕ t' u' ↑! B ^ l → ⊥ Idℕ-elim neA (Id-cong x x₁ x₂) = let _ , _ , neℕ = ne~↓! x in ⊥-elim (nonNeutralℕ neℕ) Idℕ-elim neA (Id-ℕ x x₁) = ⊥-elim (nonNeutralℕ neA) Idℕ-elim neA (Id-ℕ0 x) = ⊥-elim (nonNeutralℕ neA) Idℕ-elim neA (Id-ℕS x x₁) = ⊥-elim (nonNeutralℕ neA) Idℕ-elim' : ∀ {Γ l A B t u t' u'} → Neutral A → Γ ⊢ Id ℕ t u ~ Id A t' u' ↑! B ^ l → ⊥ Idℕ-elim' neA e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! 
e))) e in Idℕ-elim neA e' conv↑-inversion : ∀ {Γ l A t u} → Whnf A → Whnf t → Whnf u → Γ ⊢ t [conv↑] u ∷ A ^ l → Γ ⊢ t [conv↓] u ∷ A ^ l conv↑-inversion whnfA whnft whnfu ([↑]ₜ B t′ u′ D d d′ whnfB whnft′ whnfu′ t<>u) = let et = whnfRed*Term d whnft eu = whnfRed*Term d′ whnfu eA = whnfRed* D whnfA in PE.subst₃ (λ A X Y → _ ⊢ X [conv↓] Y ∷ A ^ _) (PE.sym eA) (PE.sym et) (PE.sym eu) t<>u Idℕ0-elim- : ∀ {Γ l t} → Neutral t → Γ ⊢ t [conv↓] zero ∷ ℕ ^ l → ⊥ Idℕ0-elim- net (ℕ-ins ()) Idℕ0-elim- net (ne-ins x x₁ x₂ ()) Idℕ0-elim- () (zero-refl x) Idℕ0-elim : ∀ {Γ l A t u u'} → Neutral t → Γ ⊢ Id ℕ t u ~ Id ℕ zero u' ↑! A ^ l → ⊥ Idℕ0-elim net (Id-cong x y x₂) = let e = conv↑-inversion ℕₙ (ne net) zeroₙ y in Idℕ0-elim- net e Idℕ0-elim net (Id-ℕ () x₁) Idℕ0-elim () (Id-ℕ0 x) Idℕ0-elim' : ∀ {Γ l A t u u'} → Neutral t → Γ ⊢ Id ℕ zero u ~ Id ℕ t u' ↑! A ^ l → ⊥ Idℕ0-elim' net e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in Idℕ0-elim net e' IdℕS-elim- : ∀ {Γ l t n} → Neutral t → Γ ⊢ t [conv↓] suc n ∷ ℕ ^ l → ⊥ IdℕS-elim- net (ℕ-ins ()) IdℕS-elim- net (ne-ins x x₁ x₂ ()) IdℕS-elim- () (suc-cong x) IdℕS-elim : ∀ {Γ l A t u n u'} → Neutral t → Γ ⊢ Id ℕ t u ~ Id ℕ (suc n) u' ↑! A ^ l → ⊥ IdℕS-elim net (Id-cong x y x₂) = let e = conv↑-inversion ℕₙ (ne net) sucₙ y in IdℕS-elim- net e IdℕS-elim net (Id-ℕ () x₁) IdℕS-elim () (Id-ℕS x _) IdℕS-elim' : ∀ {Γ l A t u n u'} → Neutral t → Γ ⊢ Id ℕ (suc n) u ~ Id ℕ t u' ↑! A ^ l → ⊥ IdℕS-elim' net e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in IdℕS-elim net e' Idℕ0S-elim- : ∀ {Γ l n} → Γ ⊢ zero [conv↓] suc n ∷ ℕ ^ l → ⊥ Idℕ0S-elim- (ℕ-ins ()) Idℕ0S-elim- (ne-ins x x₁ x₂ ()) Idℕ0S-elim : ∀ {Γ l A u u' n} → Γ ⊢ Id ℕ zero u ~ Id ℕ (suc n) u' ↑! A ^ l → ⊥ Idℕ0S-elim (Id-cong x y x₂) = let e = conv↑-inversion ℕₙ zeroₙ sucₙ y in Idℕ0S-elim- e Idℕ0S-elim (Id-ℕ () _) Idℕ0S-elim' : ∀ {Γ l A u u' n} → Γ ⊢ Id ℕ (suc n) u ~ Id ℕ zero u' ↑! A ^ l → ⊥ Idℕ0S-elim' e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in Idℕ0S-elim e' IdU-elim : ∀ {Γ l A B t u t' u' rU lU} → Neutral A → Γ ⊢ Id A t u ~ Id (Univ rU lU) t' u' ↑! B ^ l → ⊥ IdU-elim neA (Id-cong x x₁ x₂) = let _ , _ , neU = ne~↓! x in ⊥-elim (nonNeutralU neU) IdU-elim neA (Id-U x x₁) = ⊥-elim (nonNeutralU neA) IdU-elim neA (Id-Uℕ x) = ⊥-elim (nonNeutralU neA) IdU-elim neA (Id-UΠ x x₁) = ⊥-elim (nonNeutralU neA) IdU-elim' : ∀ {Γ l A B t u t' u' rU lU} → Neutral A → Γ ⊢ Id (Univ rU lU) t u ~ Id A t' u' ↑! B ^ l → ⊥ IdU-elim' neA e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in IdU-elim neA e' IdUℕ-elim : ∀ {Γ l A t u t' u' rU lU} → Γ ⊢ Id (Univ rU lU) t u ~ Id ℕ t' u' ↑! A ^ l → ⊥ IdUℕ-elim (Id-cong () x₁ x₂) IdℕU-elim : ∀ {Γ l A t u t' u' rU lU} → Γ ⊢ Id ℕ t u ~ Id (Univ rU lU) t' u' ↑! A ^ l → ⊥ IdℕU-elim (Id-cong () x₁ x₂) IdUUℕ-elim : ∀ {Γ l A t u u'} → Neutral t → Γ ⊢ Id (U ⁰) t u ~ Id (U ⁰) ℕ u' ↑! A ^ l → ⊥ IdUUℕ-elim () (Id-Uℕ x) IdUUℕ-elim' : ∀ {Γ l A t u u'} → Neutral t → Γ ⊢ Id (U ⁰) ℕ u ~ Id (U ⁰) t u' ↑! A ^ l → ⊥ IdUUℕ-elim' () (Id-Uℕ x) IdUUΠ-elim- : ∀ {Γ l A rA B X t} → Neutral t → Γ ⊢ t [conv↓] Π A ^ rA ° ⁰ ▹ B ° ⁰ ° ⁰ ∷ X ^ l → ⊥ IdUUΠ-elim- net (η-eq x x₁ x₂ x₃ x₄ x₅ (ne ()) x₇) IdUUΠ-elim : ∀ {Γ l A rA B X t u u'} → Neutral t → Γ ⊢ Id (U ⁰) t u ~ Id (U ⁰) (Π A ^ rA ° ⁰ ▹ B ° ⁰ ° ⁰) u' ↑! 
X ^ l → ⊥ IdUUΠ-elim net (Id-cong x y x₂) = let e = conv↑-inversion Uₙ (ne net) Πₙ y in IdUUΠ-elim- net e IdUUΠ-elim net (Id-U () x₁) IdUUΠ-elim () (Id-UΠ x x₁) IdUUΠ-elim' : ∀ {Γ l A rA B X t u u'} → Neutral t → Γ ⊢ Id (U ⁰) (Π A ^ rA ° ⁰ ▹ B ° ⁰ ° ⁰) u ~ Id (U ⁰) t u' ↑! X ^ l → ⊥ IdUUΠ-elim' net e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in IdUUΠ-elim net e' IdUUΠℕ-elim- : ∀ {Γ l A rA B X} → Γ ⊢ Π A ^ rA ° ⁰ ▹ B ° ⁰ ° ⁰ [conv↓] ℕ ∷ X ^ l → ⊥ IdUUΠℕ-elim- (η-eq x x₁ x₂ x₃ x₄ x₅ (ne ()) x₇) IdUUΠℕ-elim : ∀ {Γ l A rA B X u u'} → Γ ⊢ Id (U ⁰) (Π A ^ rA ° ⁰ ▹ B ° ⁰ ° ⁰) u ~ Id (U ⁰) ℕ u' ↑! X ^ l → ⊥ IdUUΠℕ-elim (Id-cong x y x₂) = let e = conv↑-inversion Uₙ Πₙ ℕₙ y in IdUUΠℕ-elim- e IdUUΠℕ-elim (Id-U () x₁) IdUUΠℕ-elim' : ∀ {Γ l A rA B X u u'} → Γ ⊢ Id (U ⁰) ℕ u ~ Id (U ⁰) (Π A ^ rA ° ⁰ ▹ B ° ⁰ ° ⁰) u' ↑! X ^ l → ⊥ IdUUΠℕ-elim' e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in IdUUΠℕ-elim e' castℕ-elim : ∀ {Γ l A B B' X t e t' e'} → Neutral A → Γ ⊢ cast ⁰ A B e t ~ cast ⁰ ℕ B' e' t' ↑! X ^ l → ⊥ castℕ-elim neA (cast-cong () x₁ x₂ x₃ x₄) castℕ-elim () (cast-ℕ x x₁ x₂ x₃) castℕ-elim () (cast-ℕℕ x x₁ x₂) castℕ-elim () (cast-ℕΠ x x₁ x₂ x₃) castℕ-elim' : ∀ {Γ l A B B' X t e t' e'} → Neutral A → Γ ⊢ cast ⁰ ℕ B e t ~ cast ⁰ A B' e' t' ↑! X ^ l → ⊥ castℕ-elim' neA e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in castℕ-elim neA e' castΠ-elim : ∀ {Γ l A B B' X t e t' e' r P Q} → Neutral A → Γ ⊢ cast ⁰ A B e t ~ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) B' e' t' ↑! X ^ l → ⊥ castΠ-elim neA (cast-cong () x₁ x₂ x₃ x₄) castΠ-elim () (cast-Π x x₁ x₂ x₃ x₄) castΠ-elim () (cast-Πℕ x x₁ x₂ x₃) castΠ-elim () (cast-ΠΠ%! x x₁ x₂ x₃ x₄) castΠ-elim () (cast-ΠΠ!% x x₁ x₂ x₃ x₄) castΠ-elim' : ∀ {Γ l A B B' X t e t' e' r P Q} → Neutral A → Γ ⊢ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) B e t ~ cast ⁰ A B' e' t' ↑! X ^ l → ⊥ castΠ-elim' neA e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in castΠ-elim neA e' castℕℕ-elim : ∀ {Γ l A X t e t' e'} → Neutral A → Γ ⊢ cast ⁰ ℕ A e t ~ cast ⁰ ℕ ℕ e' t' ↑! X ^ l → ⊥ castℕℕ-elim neA (cast-cong () x₁ x₂ x₃ x₄) castℕℕ-elim (var n) (cast-ℕ () x₁ x₂ x₃) castℕℕ-elim () (cast-ℕℕ x x₁ x₂) castℕℕ-elim' : ∀ {Γ l A X t e t' e'} → Neutral A → Γ ⊢ cast ⁰ ℕ ℕ e t ~ cast ⁰ ℕ A e' t' ↑! X ^ l → ⊥ castℕℕ-elim' neA e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in castℕℕ-elim neA e' castℕΠ-elim : ∀ {Γ l A A' X t e t' e' r P Q} → Γ ⊢ cast ⁰ ℕ A e t ~ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) A' e' t' ↑! X ^ l → ⊥ castℕΠ-elim (cast-cong () x₁ x₂ x₃ x₄) castℕΠ-elim' : ∀ {Γ l A A' X t e t' e' r P Q} → Γ ⊢ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) A e t ~ cast ⁰ ℕ A' e' t' ↑! X ^ l → ⊥ castℕΠ-elim' (cast-cong () x₁ x₂ x₃ x₄) castℕneΠ-elim : ∀ {Γ l A X t e t' e' r P Q} → Neutral A → Γ ⊢ cast ⁰ ℕ A e t ~ cast ⁰ ℕ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) e' t' ↑! X ^ l → ⊥ castℕneΠ-elim neA (cast-cong () x₁ x₂ x₃ x₄) castℕneΠ-elim neA (cast-ℕ () x₁ x₂ x₃) castℕneΠ-elim () (cast-ℕΠ x x₁ x₂ x₃) castℕneΠ-elim' : ∀ {Γ l A X t e t' e' r P Q} → Neutral A → Γ ⊢ cast ⁰ ℕ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) e t ~ cast ⁰ ℕ A e' t' ↑! X ^ l → ⊥ castℕneΠ-elim' neA e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in castℕneΠ-elim neA e' castℕℕΠ-elim : ∀ {Γ l X t e t' e' r P Q} → Γ ⊢ cast ⁰ ℕ ℕ e t ~ cast ⁰ ℕ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) e' t' ↑! X ^ l → ⊥ castℕℕΠ-elim (cast-cong () x₁ x₂ x₃ x₄) castℕℕΠ-elim (cast-ℕ () x₁ x₂ x₃) castℕℕΠ-elim' : ∀ {Γ l X t e t' e' r P Q} → Γ ⊢ cast ⁰ ℕ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) e t ~ cast ⁰ ℕ ℕ e' t' ↑! 
X ^ l → ⊥ castℕℕΠ-elim' e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in castℕℕΠ-elim e' castΠneℕ-elim : ∀ {Γ l A X t e t' e' r P Q r' P' Q'} → Neutral A → Γ ⊢ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) A e t ~ cast ⁰ (Π P' ^ r' ° ⁰ ▹ Q' ° ⁰ ° ⁰) ℕ e' t' ↑! X ^ l → ⊥ castΠneℕ-elim neA (cast-cong () x₁ x₂ x₃ x₄) castΠneℕ-elim neA (cast-Π x () x₂ x₃ x₄) castΠneℕ-elim () (cast-Πℕ x x₁ x₂ x₃) castΠneℕ-elim' : ∀ {Γ l A X t e t' e' r P Q r' P' Q'} → Neutral A → Γ ⊢ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) ℕ e t ~ cast ⁰ (Π P' ^ r' ° ⁰ ▹ Q' ° ⁰ ° ⁰) A e' t' ↑! X ^ l → ⊥ castΠneℕ-elim' neA e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in castΠneℕ-elim neA e' castΠneΠ-elim : ∀ {Γ l A X t e t' e' r P Q r' P' Q' r'' P'' Q''} → Neutral A → Γ ⊢ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) A e t ~ cast ⁰ (Π P' ^ r' ° ⁰ ▹ Q' ° ⁰ ° ⁰) (Π P'' ^ r'' ° ⁰ ▹ Q'' ° ⁰ ° ⁰) e' t' ↑! X ^ l → ⊥ castΠneΠ-elim neA (cast-cong () x₁ x₂ x₃ x₄) castΠneΠ-elim neA (cast-Π x () x₂ x₃ x₄) castΠneΠ-elim () (cast-ΠΠ%! x x₁ x₂ x₃ x₄) castΠneΠ-elim () (cast-ΠΠ!% x x₁ x₂ x₃ x₄) castΠneΠ-elim' : ∀ {Γ l A X t e t' e' r P Q r' P' Q' r'' P'' Q''} → Neutral A → Γ ⊢ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) (Π P'' ^ r'' ° ⁰ ▹ Q'' ° ⁰ ° ⁰) e t ~ cast ⁰ (Π P' ^ r' ° ⁰ ▹ Q' ° ⁰ ° ⁰) A e' t' ↑! X ^ l → ⊥ castΠneΠ-elim' neA e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in castΠneΠ-elim neA e' castΠΠℕ-elim : ∀ {Γ l X t e t' e' r P Q r' P' Q' r'' P'' Q''} → Γ ⊢ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) ℕ e t ~ cast ⁰ (Π P' ^ r' ° ⁰ ▹ Q' ° ⁰ ° ⁰) (Π P'' ^ r'' ° ⁰ ▹ Q'' ° ⁰ ° ⁰) e' t' ↑! X ^ l → ⊥ castΠΠℕ-elim (cast-cong () x₁ x₂ x₃ x₄) castΠΠℕ-elim (cast-Π x () x₂ x₃ x₄) castΠΠℕ-elim' : ∀ {Γ l X t e t' e' r P Q r' P' Q' r'' P'' Q''} → Γ ⊢ cast ⁰ (Π P ^ r ° ⁰ ▹ Q ° ⁰ ° ⁰) (Π P'' ^ r'' ° ⁰ ▹ Q'' ° ⁰ ° ⁰) e t ~ cast ⁰ (Π P' ^ r' ° ⁰ ▹ Q' ° ⁰ ° ⁰) ℕ e' t' ↑! X ^ l → ⊥ castΠΠℕ-elim' e = let _ , _ , e' = sym~↑! (reflConEq (wfEqTerm (soundness~↑! e))) e in castΠΠℕ-elim e' castΠΠ!%-elim : ∀ {Γ l A A' X t e t' e' P Q P' Q'} → Γ ⊢ cast ⁰ (Π P ^ ! ° ⁰ ▹ Q ° ⁰ ° ⁰) A e t ~ cast ⁰ (Π P' ^ % ° ⁰ ▹ Q' ° ⁰ ° ⁰) A' e' t' ↑! X ^ l → ⊥ castΠΠ!%-elim (cast-cong () x₁ x₂ x₃ x₄) castΠΠ%!-elim : ∀ {Γ l A A' X t e t' e' P Q P' Q'} → Γ ⊢ cast ⁰ (Π P ^ % ° ⁰ ▹ Q ° ⁰ ° ⁰) A e t ~ cast ⁰ (Π P' ^ ! ° ⁰ ▹ Q' ° ⁰ ° ⁰) A' e' t' ↑! X ^ l → ⊥ castΠΠ%!-elim (cast-cong () x₁ x₂ x₃ x₄) -- Helper functions for decidability for neutrals decConv↓Term-ℕ-ins : ∀ {t u Γ l} → Γ ⊢ t [conv↓] u ∷ ℕ ^ l → Γ ⊢ t ~ t ↓! ℕ ^ l → Γ ⊢ t ~ u ↓! ℕ ^ l decConv↓Term-ℕ-ins (ℕ-ins x) t~t = x decConv↓Term-ℕ-ins (ne-ins x x₁ () x₃) t~t decConv↓Term-ℕ-ins (zero-refl x) ([~] A D whnfB ()) decConv↓Term-ℕ-ins (suc-cong x) ([~] A D whnfB ()) decConv↓Term-U-ins : ∀ {t u Γ r lU l} → Γ ⊢ t [conv↓] u ∷ Univ r lU ^ l → Γ ⊢ t ~ t ↓! Univ r lU ^ l → Γ ⊢ t ~ u ↓! Univ r lU ^ l decConv↓Term-U-ins (ne x) t~r = x decConv↓Term-ne-ins : ∀ {t u A Γ l} → Neutral A → Γ ⊢ t [conv↓] u ∷ A ^ l → ∃ λ B → ∃ λ lB → Γ ⊢ t ~ u ↓! B ^ lB decConv↓Term-ne-ins neA (ne-ins x x₁ x₂ x₃) = _ , _ , x₃ -- Helper function for decidability for impossibility of terms not being equal -- as neutrals when they are equal as terms and the first is a neutral. decConv↓Term-ℕ : ∀ {t u Γ l} → Γ ⊢ t [conv↓] u ∷ ℕ ^ l → Γ ⊢ t ~ t ↓! ℕ ^ l → ¬ (Γ ⊢ t ~ u ↓! 
ℕ ^ l) → ⊥ decConv↓Term-ℕ (ℕ-ins x) t~t ¬u~u = ¬u~u x decConv↓Term-ℕ (ne-ins x x₁ () x₃) t~t ¬u~u decConv↓Term-ℕ (zero-refl x) ([~] A D whnfB ()) ¬u~u decConv↓Term-ℕ (suc-cong x) ([~] A D whnfB ()) ¬u~u decConv↓Term-U : ∀ {t u Γ r lU l} → Γ ⊢ t [conv↓] u ∷ Univ r lU ^ l → Γ ⊢ t ~ t ↓! Univ r lU ^ l → ¬ (Γ ⊢ t ~ u ↓! Univ r lU ^ l) → ⊥ decConv↓Term-U (ne x) t~t ¬u~u = ¬u~u x
# A quick, practical intro to the Jupyter Notebook Fernando Perez ## Introduction The IPython Notebook is an **interactive computing environment** that enables users to author notebook documents that include: - Live code - Interactive widgets - Plots - Narrative text - Equations - Images - Video These documents provide a **complete and self-contained record of a computation** that can be converted to various formats and shared with others using email, [Dropbox](http://dropbox.com), version control systems (like git/[GitHub](http://github.com)) or [nbviewer.ipython.org](http://nbviewer.ipython.org). ### Components The IPython Notebook combines three components: * **The notebook web application**: An interactive web application for writing and running code interactively and authoring notebook documents. * **Kernels**: Separate processes started by the notebook web application that run users' code in a given language and return output back to the notebook web application. The kernel also handles things like computations for interactive widgets, tab completion and introspection. * **Notebook documents**: Self-contained documents that contain a representation of all content visible in the notebook web application, including inputs and outputs of the computations, narrative text, equations, images, and rich media representations of objects. Each notebook document has its own kernel. ## Notebook web application The notebook web application enables users to: * **Edit code in the browser**, with automatic syntax highlighting, indentation, and tab completion/introspection. * **Run code from the browser**, with the results of computations attached to the code which generated them. * See the results of computations with **rich media representations**, such as HTML, LaTeX, PNG, SVG, PDF, etc. * Create and use **interactive JavaScript widgets**, which bind interactive user interface controls and visualizations to reactive kernel side computations. * Author **narrative text** using the [Markdown](https://daringfireball.net/projects/markdown/) markup language. * Build **hierarchical documents** that are organized into sections with different levels of headings. * Include mathematical equations using **LaTeX syntax in Markdown**, which are rendered in-browser by [MathJax](http://www.mathjax.org/). ## Kernels Through IPython's kernel and messaging architecture, the Notebook allows code to be run in a range of different programming languages. For each notebook document that a user opens, the web application starts a kernel that runs the code for that notebook. Each kernel is capable of running code in a single programming language and there are kernels available in over [100 programming languages](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels). [IPython](https://github.com/ipython/ipython) is the default kernel; it runs Python code. Each of these kernels communicates with the notebook web application and web browser using a JSON over ZeroMQ/WebSockets message protocol that is described [here](https://jupyter-client.readthedocs.io/en/latest/messaging.html#messaging). Most users don't need to know about these details, but it helps to understand that "kernels run code." ## Notebook documents Notebook documents contain the **inputs and outputs** of an interactive session as well as **narrative text** that accompanies the code but is not meant for execution. 
**Rich output** generated by running code, including HTML, images, video, and plots, is embedded in the notebook, which makes it a complete and self-contained record of a computation. When you run the notebook web application on your computer, notebook documents are just **files on your local filesystem with a `.ipynb` extension**. This allows you to use familiar workflows for organizing your notebooks into folders and sharing them with others using email, Dropbox and version control systems. Notebooks consist of a **linear sequence of cells**. There are three basic cell types: * **Code cells:** Input and output of live code that is run in the kernel * **Markdown cells:** Narrative text with embedded LaTeX equations * **Raw cells:** Unformatted text that is included, without modification, when notebooks are converted to different formats using nbconvert Internally, notebook documents are **[JSON](http://en.wikipedia.org/wiki/JSON) data** with **binary values [base64](http://en.wikipedia.org/wiki/Base64) encoded**. This allows them to be **read and manipulated programmatically** by any programming language. Because JSON is a text format, notebook documents are version control friendly. **Notebooks can be exported** to different static formats including HTML, reStructuredText, LaTeX, PDF, and slide shows using Jupyter's `nbconvert` utility. Furthermore, any notebook document available from a **public URL or on GitHub can be shared** via http://nbviewer.jupyter.org. This service loads the notebook document from the URL and renders it as a static web page. The resulting web page may thus be shared with others **without their needing to install Jupyter**. ### Body The body of a notebook is composed of cells. Each cell contains either markdown, code input, code output, or raw text. Cells can be included in any order and edited at-will, allowing for a large amount of flexibility for constructing a narrative. - **Markdown cells** - These are used to build a nicely formatted narrative around the code in the document. The majority of this lesson is composed of markdown cells. - **Code cells** - These are used to define the computational code in the document. They come in two forms: the *input cell* where the user types the code to be executed, and the *output cell* which is the representation of the executed code. Depending on the code, this representation may be a simple scalar value, or something more complex like a plot or an interactive widget. - **Raw cells** - These are used when text needs to be included in raw form, without execution or transformation. This is what the three types of cells look like: #### Modality The notebook user interface is *modal*. This means that the keyboard behaves differently depending upon the current mode of the notebook. A notebook has two modes: **edit** and **command**. **Edit mode** is indicated by a blue cell border and a prompt showing in the editor area. When a cell is in edit mode, you can type into the cell, like a normal text editor. **Command mode** is indicated by a grey cell background. When in command mode, the structure of the notebook can be modified as a whole, but the text in individual cells cannot be changed. Most importantly, the keyboard is mapped to a set of shortcuts for efficiently performing notebook and cell actions. For example, pressing **`c`** when in command mode will copy the current cell; no modifier is needed. Enter edit mode by pressing `Enter` or using the mouse to click on a cell's editor area. 
Enter command mode by pressing `Esc` or using the mouse to click *outside* a cell's editor area. Do not attempt to type into a cell when in command mode; unexpected things will happen! ```python %pylab inline plot(rand(100)) ``` #### Mouse navigation The first concept to understand in mouse-based navigation is that **cells can be selected by clicking on them.** The currently selected cell is indicated with a blue outline or gray background depending on whether the notebook is in edit or command mode. Clicking inside a cell's editor area will enter edit mode. Clicking on the prompt or the output area of a cell will enter command mode. The second concept to understand in mouse-based navigation is that **cell actions usually apply to the currently selected cell**. For example, to run the code in a cell, select it and then click the <button class='btn btn-default btn-xs'><i class="fa fa-play icon-play"></i></button> button in the toolbar or the **`Run -> Run Selected Cells`** menu item. Similarly, to copy a cell, select it and then click the <button class='btn btn-default btn-xs'><i class="fa fa-copy icon-copy"></i></button> button in the toolbar or the **`Edit -> Copy`** menu item. With this simple pattern, it should be possible to perform nearly every action with the mouse. Markdown cells have one other state which can be modified with the mouse. These cells can either be rendered or unrendered. When they are rendered, a nice formatted representation of the cell's contents will be presented. When they are unrendered, the raw text source of the cell will be presented. To render the selected cell with the mouse, click the <button class='btn btn-default btn-xs'><i class="fa fa-play icon-play"></i></button> button in the toolbar or the **`Run -> Run Selected Cells`** menu item. To unrender the selected cell, double click on the cell. #### Keyboard Navigation The modal user interface of the IPython Notebook has been optimized for efficient keyboard usage. This is made possible by having two different sets of keyboard shortcuts: one set that is active in edit mode and another in command mode. The most important keyboard shortcuts are **`Enter`**, which enters edit mode, and **`Esc`**, which enters command mode. In edit mode, most of the keyboard is dedicated to typing into the cell's editor. Thus, in edit mode there are relatively few shortcuts. In command mode, the entire keyboard is available for shortcuts, so there are many more possibilities. The following shortcuts have been found to be the most useful in day-to-day tasks: - Basic navigation: **`enter`**, **`shift-enter`**, **`up/k`**, **`down/j`** - Saving the notebook: **`s`** - Cell types: **`y`**, **`m`**, **`r`** - Cell creation: **`a`**, **`b`** - Cell editing: **`x`**, **`c`**, **`v`**, **`d`**, **`z`**, **`ctrl+shift+-`** - Kernel operations: **`i`**, **`.`** You can fully customize JupyterLab's keybindings by accessing the **`Settings -> Advanced Settings Editor`** menu item. # Running Code First and foremost, the Jupyter Notebook is an interactive environment for writing and running code. Jupyter is capable of running code in a wide range of languages. However, this notebook, and the default kernel in Jupyter, runs Python code. 
## Code cells allow you to enter and run Python code Run a code cell using `Shift-Enter` or pressing the <button class='btn btn-default btn-xs'><i class="icon-play fa fa-play"></i></button> button in the toolbar above: ```python a = 10 ``` ```python print(a + 1) ``` Note the difference between the above printing statement and the operation below: ```python a + 1 ``` When a value is *returned* by a computation, it is displayed with a number that tells you this is the output value of a given cell. You can later refer to any of these values (should you need one that you forgot to assign to a named variable). The last three are available respectively as auto-generated variables called `_`, `__` and `___` (one, two and three underscores). In addition to these three convenience ones for recent results, you can use `_N`, where N is the number in `[N]`, to access any numbered output. There are two other keyboard shortcuts for running code: * `Alt-Enter` runs the current cell and inserts a new one below. * `Ctrl-Enter` runs the current cell and enters command mode. ## Managing the IPython Kernel Code is run in a separate process called the IPython Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the <button class='btn btn-default btn-xs'><i class='icon-stop fa fa-stop'></i></button> button in the toolbar above. ```python import time time.sleep(10) ``` If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via ctypes to segfault the Python interpreter: ```python import sys from ctypes import CDLL # This will crash a Linux or Mac system # equivalent calls can be made on Windows dll = 'dylib' if sys.platform == 'darwin' else 'so.6' libc = CDLL("libc.%s" % dll) libc.time(-1) # BOOM!! ``` ## Cell menu The "Run" menu has a number of items for running code in different ways, including * Run Selected Cells * Run All Cells * Run Selected Cell or Current Line in Console * Run All Above Selected Cell * Run Selected Cell and All Below * Restart Kernel and Run All Cells ## Restarting the kernel The kernel maintains the state of a notebook's computations. You can reset this state by restarting the kernel. This is done by clicking on the <button class='btn btn-default btn-xs'><i class='fa fa-repeat icon-repeat'></i></button> in the toolbar above. ## sys.stdout and sys.stderr The stdout and stderr streams are displayed as text in the output area. ```python print("hi, stdout") ``` ```python from __future__ import print_function print('hi, stderr', file=sys.stderr) ``` ## Output is asynchronous All output is displayed as it is generated in the Kernel: instead of blocking on the execution of the entire cell, output is made available to the Notebook immediately as it is generated by the kernel (even though the whole cell is submitted for execution as a single unit). If you execute the next cell, you will see the output one piece at a time, not all at the end: ```python import time, sys for i in range(8): print(i) time.sleep(0.5) ``` ## Large outputs To better handle large outputs, the output area can be collapsed. Run the following cell and then click on the vertical blue bar to the left of the output: ```python for i in range(50): print(i) ``` --- # Markdown Cells Text can be added to IPython Notebooks using Markdown cells. Markdown is a popular markup language that is a superset of HTML. 
Its specification can be found here: <http://daringfireball.net/projects/markdown/> You can view the source of a cell by double clicking on it, or while the cell is selected in command mode, press `Enter` to edit it. Once a cell has been edited, use `Shift-Enter` to re-render it. ## Markdown basics You can make text *italic* or **bold**. You can build nested itemized or enumerated lists: * One - Sublist - This - Sublist - That - The other thing * Two - Sublist * Three - Sublist Now another list: 1. Here we go 1. Sublist 2. Sublist 2. There we go 3. Now this You can add horizontal rules: --- Here is a blockquote: > Beautiful is better than ugly. > Explicit is better than implicit. > Simple is better than complex. > Complex is better than complicated. > Flat is better than nested. > Sparse is better than dense. > Readability counts. > Special cases aren't special enough to break the rules. > Although practicality beats purity. > Errors should never pass silently. > Unless explicitly silenced. > In the face of ambiguity, refuse the temptation to guess. > There should be one-- and preferably only one --obvious way to do it. > Although that way may not be obvious at first unless you're Dutch. > Now is better than never. > Although never is often better than *right* now. > If the implementation is hard to explain, it's a bad idea. > If the implementation is easy to explain, it may be a good idea. > Namespaces are one honking great idea -- let's do more of those! And shorthand for links: [IPython's website](http://ipython.org) You can add headings using Markdown's syntax: # Heading 1 # Heading 2 ## Heading 2.1 ## Heading 2.2 ## Embedded code You can embed code meant for illustration instead of execution in Python: def f(x): """a docstring""" return x**2 or other languages: for (i=0; i<n; i++) { printf("hello %d\n", i); x += 4; } ## LaTeX equations Courtesy of MathJax, you can include mathematical expressions both inline: $e^{i\pi} + 1 = 0$ and displayed: $$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$$ Use single dollar delimiters for inline math, so `$this is inline\int math$` will give $this is inline\int math$, for example to refer to a variable within text. Double dollars `$$\int_0^{2\pi} f(r, \phi) \partial \phi $$` is used for standalone formulas: $$\int_0^{2\pi} f(r, \phi) \partial \phi $$ ## Github flavored markdown (GFM) The Notebook webapp supports Github flavored markdown, meaning that you can use triple backticks for code blocks <pre> ```python print "Hello World" ``` ```javascript console.log("Hello World") ``` </pre> Gives ```python print "Hello World" ``` ```javascript console.log("Hello World") ``` And a table like this: <pre> | This | is | |------|------| | a | table| </pre> A nice HTML Table | This | is | |------|------| | a | table| ## General HTML Because Markdown is a superset of HTML you can even add things like HTML tables: <table> <tr> <th>Header 1</th> <th>Header 2</th> </tr> <tr> <td>row 1, cell 1</td> <td>row 1, cell 2</td> </tr> <tr> <td>row 2, cell 1</td> <td>row 2, cell 2</td> </tr> </table> ## Local files If you have local files in your Notebook directory, you can refer to these files in Markdown cells directly: [subdirectory/]<filename> For example, in the images folder, we have the Python logo: and a video with the HTML5 video tag: <video controls src="images/animation.m4v" /> These do not embed the data into the notebook file, and require that the files exist when you are viewing the notebook. 
### Security of local files Note that this means that the IPython notebook server also acts as a generic file server for files inside the same tree as your notebooks. Access is not granted outside the notebook folder so you have strict control over what files are visible, but for this reason it is highly recommended that you do not run the notebook server with a notebook directory at a high level in your filesystem (e.g. your home directory). When you run the notebook in a password-protected manner, local file access is restricted to authenticated users unless read-only views are active. --- # Typesetting Equations The Markdown parser included in IPython is MathJax-aware. This means that you can freely mix in mathematical expressions using the [MathJax subset of Tex and LaTeX](http://docs.mathjax.org/en/latest/tex.html#tex-support). You can use single-dollar signs to include inline math, e.g. `$e^{i \pi} = -1$` will render as $e^{i \pi} = -1$, and double-dollars for displayed math: ``` $$ e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i $$ ``` renders as: $$ e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i $$ You can also use more complex LaTeX constructs for displaying math, such as: ``` \begin{align} \dot{x} & = \sigma(y-x) \\ \dot{y} & = \rho x - y - xz \\ \dot{z} & = -\beta z + xy \end{align} ``` to produce the Lorenz equations: \begin{align} \dot{x} & = \sigma(y-x) \\ \dot{y} & = \rho x - y - xz \\ \dot{z} & = -\beta z + xy \end{align} Please refer to the MathJax documentation for a comprehensive description of which parts of LaTeX are supported, but note that Jupyter's support for LaTeX is **limited to mathematics**. You can **not** use LaTeX typesetting constructs for text or document structure; for text formatting you should restrict yourself to Markdown syntax.
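Since notebook documents are plain JSON (as noted above), they can be inspected programmatically with nothing more than the standard library. The sketch below is an added illustration, not part of the original notebook; `example.ipynb` is a placeholder for any notebook file on disk, and the keys shown correspond to the nbformat 4 schema.

```python
import json

# Load a notebook document from disk; "example.ipynb" is a placeholder path.
with open("example.ipynb", encoding="utf-8") as f:
    nb = json.load(f)

# Every nbformat 4 document carries a format version and a flat list of cells.
print(nb["nbformat"], nb["nbformat_minor"])
for cell in nb["cells"]:
    # cell_type is "code", "markdown", or "raw"; source holds the cell text.
    src = "".join(cell["source"])
    print(cell["cell_type"], "|", src[:40].replace("\n", " "))
```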
__precompile__() module SeqHax using ArgParse using Bio.Seq include("ProgressLoggers.jl") include("utils.jl") include("comp.jl") include("length.jl") include("interleave.jl") function parse_cli() s = ArgParseSettings() ## Global @add_arg_table s begin "comp" help = "Calculate nucleotide composition of sequences" action = :command "join" help = "Join separate R1/R2 files into an interleaved file" action = :command "length" help = "Count read lengths" action = :command "split" help = "Split an interleaved file into separate R1/R2 files" action = :command end Comp.add_args(s) Length.add_args(s) Interleave.add_join_args(s) Interleave.add_split_args(s) return parse_args(s) end function main() cli = parse_cli() cmd = cli["%COMMAND%"] mainfuncs = Dict{AbstractString, Any}( "comp" => Comp.main, "join" => Interleave.join_main, "length" => Length.main, "split" => Interleave.split_main, "preappend" => PreApp.main, ) return mainfuncs[cmd](cli[cmd]) end end # module SeqHax
SUBROUTINE GFLD (RHO,PHI,RZ,ETH,EPI,ERD,UX,KSYMP) C C GFLD COMPUTES THE RADIATED FIELD INCLUDING GROUND WAVE. C COMPLEX CUR,EPI,CIX,CIY,CIZ,EXA,XX1,XX2,U,U2,ERV,EZV,ERH,EPH COMPLEX EZH,EX,EY,ETH,UX,ERD COMMON /DATA/ LD,N1,N2,N,NP,M1,M2,M,MP,X(300),Y(300),Z(300),SI(300 1),BI(300),ALP(300),BET(300),ICON1(300),ICON2(300),ITAG(300),ICONX( 2300),WLAM,IPSYM COMMON /ANGL/ SALP(300) COMMON /CRNT/ AIR(300),AII(300),BIR(300),BII(300),CIR(300),CII(300 1),CUR(900) COMMON /GWAV/ U,U2,XX1,XX2,R1,R2,ZMH,ZPH DIMENSION CAB(1), SAB(1) EQUIVALENCE (CAB(1),ALP(1)), (SAB(1),BET(1)) DATA PI,TP/3.141592654,6.283185308/ R=SQRT(RHO*RHO+RZ*RZ) IF (KSYMP.EQ.1) GO TO 1 IF (ABS(UX).GT..5) GO TO 1 IF (R.GT.1.E5) GO TO 1 GO TO 4 C C COMPUTATION OF SPACE WAVE ONLY C 1 IF (RZ.LT.1.E-20) GO TO 2 THET=ATAN(RHO/RZ) GO TO 3 2 THET=PI*.5 3 CALL FFLD (THET,PHI,ETH,EPI) ARG=-TP*R EXA=CMPLX(COS(ARG),SIN(ARG))/R ETH=ETH*EXA EPI=EPI*EXA ERD=(0.,0.) RETURN C C COMPUTATION OF SPACE AND GROUND WAVES. C 4 U=UX U2=U*U PHX=-SIN(PHI) PHY=COS(PHI) RX=RHO*PHY RY=-RHO*PHX CIX=(0.,0.) CIY=(0.,0.) CIZ=(0.,0.) C C SUMMATION OF FIELD FROM INDIVIDUAL SEGMENTS C DO 17 I=1,N DX=CAB(I) DY=SAB(I) DZ=SALP(I) RIX=RX-X(I) RIY=RY-Y(I) RHS=RIX*RIX+RIY*RIY RHP=SQRT(RHS) IF (RHP.LT.1.E-6) GO TO 5 RHX=RIX/RHP RHY=RIY/RHP GO TO 6 5 RHX=1. RHY=0. 6 CALP=1.-DZ*DZ IF (CALP.LT.1.D-6) GO TO 7 CALP=SQRT(CALP) CBET=DX/CALP SBET=DY/CALP CPH=RHX*CBET+RHY*SBET SPH=RHY*CBET-RHX*SBET GO TO 8 7 CPH=RHX SPH=RHY 8 EL=PI*SI(I) RFL=-1. C C INTEGRATION OF (CURRENT)*(PHASE FACTOR) OVER SEGMENT AND IMAGE FOR C CONSTANT, SINE, AND COSINE CURRENT DISTRIBUTIONS C DO 16 K=1,2 RFL=-RFL RIZ=RZ-Z(I)*RFL RXYZ=SQRT(RIX*RIX+RIY*RIY+RIZ*RIZ) RNX=RIX/RXYZ RNY=RIY/RXYZ RNZ=RIZ/RXYZ OMEGA=-(RNX*DX+RNY*DY+RNZ*DZ*RFL) SILL=OMEGA*EL TOP=EL+SILL BOT=EL-SILL IF (ABS(OMEGA).LT.1.E-7) GO TO 9 A=2.*SIN(SILL)/OMEGA GO TO 10 9 A=(2.-OMEGA*OMEGA*EL*EL/3.)*EL 10 IF (ABS(TOP).LT.1.E-7) GO TO 11 TOO=SIN(TOP)/TOP GO TO 12 11 TOO=1.-TOP*TOP/6. 12 IF (ABS(BOT).LT.1.E-7) GO TO 13 BOO=SIN(BOT)/BOT GO TO 14 13 BOO=1.-BOT*BOT/6. 14 B=EL*(BOO-TOO) C=EL*(BOO+TOO) RR=A*AIR(I)+B*BII(I)+C*CIR(I) RI=A*AII(I)-B*BIR(I)+C*CII(I) ARG=TP*(X(I)*RNX+Y(I)*RNY+Z(I)*RNZ*RFL) EXA=CMPLX(COS(ARG),SIN(ARG))*CMPLX(RR,RI)/TP IF (K.EQ.2) GO TO 15 XX1=EXA R1=RXYZ ZMH=RIZ GO TO 16 15 XX2=EXA R2=RXYZ ZPH=RIZ 16 CONTINUE C C CALL SUBROUTINE TO COMPUTE THE FIELD OF SEGMENT INCLUDING GROUND C WAVE. C CALL GWAVE (ERV,EZV,ERH,EZH,EPH) ERH=ERH*CPH*CALP+ERV*DZ EPH=EPH*SPH*CALP EZH=EZH*CPH*CALP+EZV*DZ EX=ERH*RHX-EPH*RHY EY=ERH*RHY+EPH*RHX CIX=CIX+EX CIY=CIY+EY 17 CIZ=CIZ+EZH ARG=-TP*R EXA=CMPLX(COS(ARG),SIN(ARG)) CIX=CIX*EXA CIY=CIY*EXA CIZ=CIZ*EXA RNX=RX/R RNY=RY/R RNZ=RZ/R THX=RNZ*PHY THY=-RNZ*PHX THZ=-RHO/R ETH=CIX*THX+CIY*THY+CIZ*THZ EPI=CIX*PHX+CIY*PHY ERD=CIX*RNX+CIY*RNY+CIZ*RNZ RETURN END
lemma Lim_transform_away_at: fixes a b :: "'a::t1_space" assumes ab: "a \<noteq> b" and fg: "\<forall>x. x \<noteq> a \<and> x \<noteq> b \<longrightarrow> f x = g x" and fl: "(f \<longlongrightarrow> l) (at a)" shows "(g \<longlongrightarrow> l) (at a)"
REBOL [ Title: "Rebol 3 parser" Author: "Shixin Zeng<[email protected]>" Rights: "Copyright (C) Atronix Engineering, Inc. 2017" Type: 'module Exports: [scan-source] ] debug: :print err: _ pos: _ line-no: 1 last-line: _ open-at: 0x0 binary-source?: false syntax-errors: [ invalid-integer "Invalid integer" invalid-time "Invalid time" invalid-tuple "Invalid tuple" invalid-issue "Invalid issue" invalid-path "Invalid path" invalid-refinement "Invalid refinement" invalid-char "Invalid char" invalid-hex-digit "Invalid hex digit" invalid-lit-word-path "Invalid lit word or path" invalid-get-word-path "Invalid get word or path" invalid-utf8-char "Invalid utf8 char" odd-binary-digit "Dangling digit at the end" ;must be in pair missing-close-paren "Missing a close parenthesis ')'" missing-close-brace "Missing a close brace '}'" missing-close-bracket "Missing a close bracket ']'" missing-close-quote {Missing a quotation mark (")} unrecognized-named-escape "Unrecognized named escape" mal-construct "Malconstruct" syntax-error "Syntax error" ] abort: function [ reason [word!] pos [binary! string!] <with> err ][ ;trace off print ["Aborting due to" reason "at:" loc: locate pos] print line: to string! find-line pos loc pointer: make string! to integer! loc/2 for i 1 (loc/2 - 1) 1 [ append pointer either #"^-" = pick line i [ #"^-" ][ #" " ] ] append pointer "^^" assert [loc/2 = length pointer] print pointer ;print ["Last open-at:" mold open-at] ;print ["source:" to string! pos] err: reason fail (select* syntax-errors reason else spaced ["Unknown error" reason]) ] digit: charset "0123456789" hex-digit: charset "0123456789ABCDEFabcdef" letter: charset [#"a" - #"z" #"A" - #"Z"] sign: charset "+-" byte: complement charset {} lit-prefix: {'} space-char: charset "^@^(01)^(02)^(03)^(04)^(05)^(06)^(07)^(08)^(09)^(0A)^(0B)^(0C)^(0D)^(0E)^(0F)^(10)^(11)^(12)^(13)^(14)^(15)^(16)^(17)^(18)^(19)^(1A)^(1B)^(1C)^(1D)^(1E)^(1F)^(20)^(7F)" non-space: complement space-char regular-word-char: charset ["!&*=?" #"A" - #"Z" #"_" "^`" #"a" - #"z" #"|" #"~" #"^(80)" - #"^(BF)" ;old control chars and alternate chars #"^(C2)" - #"^(FE)" ] ;can appear in anywhere in a word special-char: charset "@%\:'<>+-~|_.,#$" non-quote: complement charset "^"" non-close-brace: complement charset "^}" A: charset "Aa" B: charset "Bb" C: charset "Cc" D: charset "Dd" E: charset "Ee" F: charset "Ff" G: charset "Gg" H: charset "Hh" I: charset "Ii" J: charset "Jj" K: charset "Kk" L: charset "Ll" M: charset "Mm" N: charset "Nn" O: charset "Oo" P: charset "Pp" Q: charset "Qq" R: charset "Rr" S: charset "Ss" T: charset "Tt" U: charset "Uu" V: charset "Vv" W: charset "Ww" X: charset "Xx" Y: charset "Yy" Z: charset "Zz" utf8-single-byte: charset [ #"^(00)" - #"^(7F)" ;0xxxxxxx ] utf8-first-in-2: charset [ #"^(C0)" - #"^(DF)" ;110xxxxx ] utf8-first-in-3: charset [ #"^(E0)" - #"^(EF)" ;1110xxxx ] utf8-first-in-4: charset [ #"^(F0)" - #"^(F7)" ;11110xxx ] utf8-non-first-byte: charset [ #"^(80)" - #"^(BF)" ;10xxxxxx ] utf8-char: [ utf8-single-byte | [utf8-first-in-2 2 utf8-non-first-byte] | [utf8-first-in-3 3 utf8-non-first-byte] | [utf8-first-in-4 4 utf8-non-first-byte] ] skip-char: _ skip-char-or-abort: _ ; newlines will have some pairs, ; each pair represents a start of a new line, with x being the index in the ; source, and y being the length of the last line break (1 for CR or LF, and 2 ; for CRLF) new-lines: make block! 128 add-new-line: procedure [ src len [integer!] 
{Length of line-break (1 or 2)} ][ idx: index-of src if idx > first (last new-lines) [ append new-lines to pair! reduce [idx len] ] ] line-break: [ "^(0A)^(0D)" pos: (add-new-line pos 2) | #"^(0A)" pos: (add-new-line pos 1) | #"^(0D)" pos: (add-new-line pos 1) ] space: [ line-break | space-char ] non-space-delimiter: charset {()[]{}^"/;} delimiter: [ space | non-space-delimiter | end ] find-line: function [ {Find the line @ser is in} ser [binary! string!] pos [blank! pair!] ][ if blank? pos [ pos: locate ser ] start: to integer! first pick new-lines pos/x either pos/x < length new-lines [ next-line: pick new-lines (pos/x + 1) end-of-line: to integer! next-line/x - next-line/y ][ cur: ser while [not tail? cur][ if find? "^(0A)^(0D)" to char! to integer! cur/1 [ break ] cur: next cur ] end-of-line: index-of cur ] copy/part skip (head ser) (start - 1) (end-of-line - start) ] locate: function [ {Find out the location of the series, returning a pair: line x col} ser [binary! string!] ][ ;trace off ;return reduce [{} 0x0] lines: back tail new-lines idx: index-of ser start: head lines for-skip lines -1 [ ;print ["Looking at lines:" first lines] if lines/1/x <= idx [ ;print ["Found start at:" start] start: lines break ] ] to pair! reduce [ index-of start 1 + (index-of ser) - start/1/x ] ] open-brace: [ pos: #"^{" (open-at: locate pos) ] open-bracket: [ pos: #"[" (open-at: locate pos) ] open-quote: [ pos: #"^"" (open-at: locate pos) ] open-paren: [ pos: #"(" (open-at: locate pos) ] required-close-brace: [ #"^}" | pos: (abort 'missing-close-brace pos) ] required-quote: [ #"^"" | pos: (abort 'missing-close-quote pos) ] required-close-paren: [ [#")" | pos: (abort 'missing-close-paren pos)] ] required-close-bracket: [ [#"]" | pos: (abort 'missing-close-bracket pos)] ] required-close-angle: [ ] unsigned-integer: context [ val: 0 s: _ rule: [ pos: copy s [ some [digit | #"'"] ;allow leading zeros ] (if error? err: try [val: to integer! to string! s][ ;FIXME: examine the error for better error message abort 'invalid-integer pos ]) ] ] integer: context [ val: 0 s: _ rule: [ pos: copy s [ opt sign unsigned-integer/rule ] (if error? err: try [val: to integer! to string! s][ ;FIXME: examine the error for better error message abort 'invalid-integer pos ]) ] ] unsigned-decimal: context [ fraction: [ [#"," | #"."] any digit ] exponential: [ E integer/rule ] rule: [ opt [ digit any [digit | #"'"] ] [ [fraction opt exponential] | [opt fraction exponential] ] ] ] decimal: context [ val: _ s: _ rule: [ copy s [ opt sign unsigned-decimal/rule ] (val: to decimal! to string! s) ] ] any-number: context [ val: _ rule: [ decimal/rule (val: decimal/val) | integer/rule (val: integer/val) ] ] percent: context [ val: 0% rule: [ any-number/rule #"%" (val: to percent! any-number/val / 100) ] ] money: context [ val: $0.00 s: _ sign-char: _ rule: [ opt copy sign-char sign ;-$2.1, not $-2.1 #"$" copy s unsigned-decimal/rule ( val: to money! to decimal! to string! either blank? sign [s][join-of sign-char s] ) ] ] time: context [ val: 00:00:00 start: _ hour: 0 minute: 0 sec: 0.0 intra-day: false init: does [ val: 00:00:00 hour: 0 minute: 0 sec: 0.0 intra-day: false ] rule: [ (init) start: opt [integer/rule (hour: integer/val)] #":" integer/rule (minute: integer/val) opt [ #":" any-number/rule (sec: any-number/val) opt [ [ [A M] | [P M] (hour: hour + 12) ] (intra-day: true) ] ] ( val: make time! 
reduce [hour minute sec] if all [intra-day any [hour < 0 hour >= 24]][ abort 'invalid-time start ;do not use 'pos' because it would be overwritten by 'integer/rule' ] ) ] ] pair: context [ val: 0x0 x0: _ rule: [ [ any-number/rule (x0: any-number/val) X any-number/rule (val: to pair! reduce [x0 any-number/val]) ] ] ] date: context [ val: _ day: _ month: _ year: _ tz: _ tm: _ sep: _ sign-char: _ init: does [ day: _ month: _ year: _ tm: _ tz: _ ] named-month: [ J A N opt [U opt [A opt [R opt Y]]] (month: 1) | F E B opt [R opt [U opt [A opt [ R opt Y]]]] (month: 2) | M A R opt [C opt opt H] (month: 3) | A P R opt [I opt opt L] (month: 4) | M A Y (month: 5) | J U N opt E (month: 6) | J U L opt Y (month: 7) | A U G opt [U opt [S opt T]] (month: 8) | S E P opt [T opt [E opt [ M opt [B opt [E opt R]]]]] (month: 9) | O C T opt [O opt [B opt [E opt R]]] (month: 10) | N O V opt [E opt [ M opt [B opt [E opt R]]]] (month: 11) | D E C opt [E opt [ M opt [B opt [E opt R]]]] (month: 12) ] rule: [ (init) [ unsigned-integer/rule (day: unsigned-integer/val) set sep [#"-" | #"/"] (sep: to char! sep) ;without converting to char!, sep would be an integer if source is binary!! [ unsigned-integer/rule (month: unsigned-integer/val) | named-month ] sep unsigned-integer/rule (year: unsigned-integer/val) opt [ #"/" time/rule (tm: time/val) opt [ set sign-char sign (sign-char: to char! sign-char) time/rule ( either sign-char = #"-" [ tz: negate time/val ][ tz: time/val ] ) ] ] ( val: reduce [day month year] if tm [append val tm] if tz [append val tz] val: make date! val ) ] ] ] string: context [ val: _ s: _ b: _ c: _ named-escapes: [ "null" "^(null)" "line" "^(line)" "tab" "^(tab)" "page" "^(page)" "escape" "^(escape)" "esc" "^(esc)" "back" "^(back)" "del" "^(del)" ] escapes: [ #"/" #"^/" #"!" #"^!" #"-" #"^-" #"~" #"^~" #"@" #"^@" #"a" #"^a" #"b" #"^b" #"c" #"^c" #"d" #"^d" #"e" #"^e" #"f" #"^f" #"g" #"^g" #"h" #"^h" #"i" #"^i" #"j" #"^j" #"k" #"^k" #"l" #"^l" #"m" #"^m" #"n" #"^n" #"o" #"^o" #"p" #"^p" #"q" #"^q" #"r" #"^r" #"s" #"^s" #"t" #"^t" #"u" #"^u" #"v" #"^v" #"w" #"^w" #"x" #"^x" #"y" #"^y" #"z" #"^z" #"[" #"^[" #"\" #"^\" #"]" #"^]" #"_" #"^_" ] unescape: func [ src [string! binary!] "modified" ][ parse src [ while [ change {^^^^} {^^} | change ["^^" open-paren pos: [ copy s [ 4 digit | 2 hex-digit] and ")" (c: debase/base to binary! s 16) ;and ")" is to prevent it from matching "ba" in "back" | copy s [some letter] (c: select named-escapes lowercase to string! s if blank? c [abort 'unrecognized-named-escape pos]) ] required-close-paren] c | change ["^^" set b utf8-char (b: to char! b c: any [select escapes (lowercase b) b])] c | end break | skip-char-or-abort ] ] src ] init: does [ val: _ b: _ c: _ ] non-close-brace: complement charset "^}" in-brace-rule: [ any [ {^^^^} | "^^{" | "^^}" | line-break ;unescaped braces | open-brace in-brace-rule required-close-brace | non-close-brace ] ] rule: [ (init) [ open-quote copy val [ any [ {^^^^} | {^^"} | line-break pos: (abort 'missing-close-quote pos) | non-quote ] ] required-quote | open-brace copy val [ in-brace-rule ] required-close-brace ]( ;process escaping ;print ["unescapped string:" mold val] ;trace on unescape val if binary? val [ val: decode 'text val ;to-string would convert #{0D} to "^(0A)"!!! ] ;trace off ;print ["escapped string:" mold val] ) ] ] binary: context [ val: _ h1: _ h2: _ s: _ base64-char: charset [ #"A" - #"Z" #"a" - #"z" #"0" - #"9" "+-=" ] base2-char: charset "01" rule: [ "#" open-brace (val: make binary! 
1) copy val [ some [hex-digit | space] ] (val: debase/base to binary! val 16) required-close-brace | "2#" open-brace (s: make string! 1) copy val [ some [base2-char | space] ] (val: debase/base to binary! val 2) required-close-brace | "64#" open-brace copy val [ some [base64-char | space] ] (val: debase to binary! val) required-close-brace ] ] word: context [ val: _ s: _ num-starter: charset "+-." ;must be followed by a non-digit rule: [ copy s [ [ num-starter [and delimiter | and #":"] ;num-starters can be words by themselves | [ opt num-starter [regular-word-char | num-starter] any [num-starter | regular-word-char | digit] ] | some #"/" and delimiter ;all-slash words ] ] (val: to word! to string! s) ;special words starting with #"<", not a tag | "<<" and delimiter (val: '<<) | "<=" and delimiter (val: '<=) | "<>" and delimiter (val: '<>) | "<|" and delimiter (val: to word! "<|") | "<-" and delimiter (val: to word! "<-") | ">>" and delimiter (val: '>>) | ">=" and delimiter (val: '>=) ; some special chars can be signle-char words ; "@%\:,',$" are exceptions | #"#" and delimiter (val: _) ;bug??? | #"<" and delimiter (val: '<) | #">" and delimiter (val: '>) ] ] get-word: context [ val: _ rule: [ #":" [word/rule (val: to get-word! word/val) | pos: (abort 'invalid-get-word-path pos)] ] ] set-word: context [ val: _ rule: [ word/rule #":" (val: to set-word! word/val) ] ] lit-word: context [ val: _ rule: [ lit-prefix [ word/rule (val: to lit-word! word/val) | pos: (abort 'invalid-lit-word-path pos) ] ] ] issue: context [ val: _ issue-char: charset "',.+-" rule: [ #"#" [ copy val some [ ;delimiter reject issue-char | regular-word-char | digit ] (val: to issue! to string! val) | pos: (abort 'invalid-issue pos) ] ] ] refinement: context [ val: _ rule: [ pos: #"/" [ some #"/" fail ; all slashes are words ;exclude some words that can't be refinement | pos: [#"<" | #">"] (abort 'invalid-refinement pos) | integer/rule (val: to refinement! integer/val) | word/rule (val: to refinement! word/val) ] ] ] get-path: context [ val: _ rule: [ #":" path/rule (val: to get-path! path/val) ] ] set-path: context [ val: _ rule: [ path/rule #":" (val: to set-path! path/val) ] ] lit-path: context [ val: _ rule: [ lit-prefix [ pos: [lit-prefix | #":"] (abort 'invalid-lit-word-path pos) | path/rule (val: to lit-path! path/val) ] ] ] file: context [ val: _ s: _ white-char: charset [#"^(00)" - #"^(20)"] valid-in-quotes: complement charset {:;^"} valid: complement union white-char charset {:;()[]^"} rule: [ #"%" [ [ open-quote copy s [any valid-in-quotes] required-quote ] | [ copy s [any valid] ] ] (val: to file! s) ] ] stack: make block! 32 group: context [ val: _ rule: [ open-paren rebol/rule (val: as group! rebol/val) required-close-paren ] ] block: context [ val: _ rule: [ open-bracket rebol/rule (val: rebol/val) required-close-bracket ] ] tag: context [ val: _ non-angle-bracket: complement charset ">" rule: [ #"<" copy val [ some [ ; "<>" is a word! [#"^"" any non-quote required-quote] ; ">" can be embedded if it's quoted | non-angle-bracket ] ] #">" (val: to tag! val); | pos: (abort 'missing-close-angle-bracket)] ] ] url: context [ val: _ s: _ c: _ rule: [ copy val [ word/rule "://" any [ [#"%" 2 hex-digit] | #"/" | and delimiter break | skip-char-or-abort ] ] ( ; replace all %xx parse val [ while [ change [#"%" copy s [2 hex-digit] ( c: debase/base to binary! s 16 unless binary-source? [ c: to char! to integer! c ] )] c | end break | skip-char-or-abort ] ] val: to url! 
val ) ] ] char: context [ val: _ rule: [ #"#" [ #"^"" copy val pos: [ "^^^^" | "^^^"" | "^^" open-paren some [letter | digit] required-close-paren ;named escape | "^^" skip-char-or-abort ;escape | non-quote ] #"^"" | #"^{" copy val pos: [ "^^^^" | "^^^"" | "^^" open-paren some [letter | digit] required-close-paren ;named escape | "^^" skip-char-or-abort ;escape | non-close-brace ] #"^}" ] ( string/unescape val either string? val [ unless 1 = length val [ ;print ["length of string is not 1" mold val] abort 'invalid-char pos ] val: first val ][; binary! ;dump val val: to char! to-integer/unsigned val ] ) ] ] construct: context [ val: _ start: _ rule: [ #"#" and #"[" (insert/only stack start) start: [ block/rule ( start: take stack ;debug ["block:" mold block/val] switch/default length block/val [ 2 [ insert block/val :make val: do bind block/val lib ] 1 [; #[false] #[true] or #[none] val: switch/default first block/val [ true [true] false [false] none [_] ][ abort 'mal-construct start ] ] ][ abort 'mal-construct start ] ) | (start: take stack) fail ] ] ] byte-integer: context [ val: _ four-or-less: charset "01234" five-or-less: charset "012345" rule: [ copy val [ #"1" opt [digit opt digit] | #"2" [ four-or-less digit | #"5" five-or-less ] | digit opt digit ] (val: to integer! to string! val) ] ] tuple: context [ val: _ rule: [ (val: make binary! 7) opt #"+" byte-integer/rule (append val byte-integer/val) #"." byte-integer/rule (append val byte-integer/val) some [#"." [ byte-integer/rule (append val byte-integer/val) | pos: (abort 'invalid-tuple pos) ] ] (val: to tuple! val) ] ] email: context [ val: _ leading-email-char: complement union space-char charset "<@" non-at: complement union space-char charset "@" rule: [ copy val [ leading-email-char any non-at #"@" any [delimiter break | skip-char-or-abort] ; "to delimiter" causes invalid-rule error ??? ] (val: to email! val) ] ] void: context [ val: _ rule: [ #"(" any space #")" (value: ()) ] ] comment: context [ val: _ rule: [ copy val [ #";" any [ line-break break | end | skip-char-or-abort ] ] ;(print ["comment found:" mold val]) ] ] pre-parse: func [ source [binary! string!] ][ clear stack pos: source last-line: source line-no: 1 err: _ clear new-lines append new-lines [1x0] ] path: context [ val: _ rule: [ (insert/only stack nested-path/val) nested-path/rule ( val: nested-path/val nested-path/val: take stack ) | (nested-path/val: take stack) fail ] ] nested-path: context [ val: _ rule: [ word/rule ( val: make path! 1 append/only val word/val ) some [ #"/" [ #"|" and delimiter (append/only val '|) | "'|" and delimiter (append/only val to lit-bar! 
'|) | #"_" and delimiter (append/only val _) | block/rule (append/only val block/val) ; [ ;| void/rule (append/only val block/val) ; ( | group/rule (append/only val group/val) ; ( | string/rule (append/only val string/val) ; { | char/rule (append/only val char/val) ; #{ or #" | binary/rule (append/only val binary/val) ; #{ or 64# | construct/rule (append/only val construct/val) ; #[ | issue/rule (append/only val issue/val) ; # | file/rule (append/only val file/val) ; % | money/rule (append/only val money/val) ; $ ;| lit-path/rule (append/only val lit-path/val) ; ' ;| get-path/rule (append/only val get-path/val) ; : | lit-word/rule (append/only val lit-word/val) ; ' | get-word/rule (append/only val get-word/val) ; : | email/rule (append/only val email/val) | pair/rule (append/only val pair/val) ;before percent | tuple/rule (append/only val tuple/val) ;before percent | percent/rule (append/only val percent/val) ;before decimal | decimal/rule (append/only val decimal/val) ;before integer | time/rule (append/only val time/val) ;before integer ;| date/rule (append/only val date/val) ;before integer | integer/rule ;[and delimiter | (abort 'invalid-integer)] (append/only val integer/val) | url/rule (append/only val url/val) ;before set-word ;| set-path/rule (append/only val set-path/val) ;before path ;| path/rule (append/only val path/val) ;before word | set-word/rule (append/only val set-word/val) ;before word | word/rule (append/only val word/val) ;before refinement, because #"/" could be a word | refinement/rule (append/only val refinement/val) ;#"/" | tag/rule (append/only val tag/val) ;#"<" | pos: (abort 'invalid-path pos) ] ] ] ] nested-rebol: context [ val: _ rule: [ (val: make block! 1) any [ pos: ; sequence is important end | space | comment/rule | #"|" and delimiter (append/only val '|) | "'|" and delimiter (append/only val to lit-bar! '|) | #"_" and delimiter (append/only val _) | block/rule (append/only val block/val) ; [ ;| void/rule (append/only val block/val) ; ( | group/rule (append/only val group/val) ; ( | string/rule (append/only val string/val) ; { | char/rule (append/only val char/val) ; #{ or #" | binary/rule (append/only val binary/val) ; #{ or 64# | construct/rule (append/only val construct/val) ; #[ | issue/rule (append/only val issue/val) ; # | file/rule (append/only val file/val) ; % | money/rule (append/only val money/val) ; $ | lit-path/rule (append/only val lit-path/val) ; ' | get-path/rule (append/only val get-path/val) ; : | lit-word/rule (append/only val lit-word/val) ; ' | get-word/rule (append/only val get-word/val) ; : | email/rule (append/only val email/val) | pair/rule (append/only val pair/val) ;before percent | tuple/rule (append/only val tuple/val) ;before percent | percent/rule (append/only val percent/val) ;before decimal | decimal/rule (append/only val decimal/val) ;before integer | time/rule (append/only val time/val) ;before integer | date/rule (append/only val date/val) ;before integer | integer/rule [and delimiter | pos: (abort 'invalid-integer pos)] (append/only val integer/val) | url/rule (append/only val url/val) ;before set-word | set-path/rule (append/only val set-path/val) ;before path | path/rule (append/only val path/val) ;before word | set-word/rule (append/only val set-word/val) ;before word | word/rule (append/only val word/val) ;before refinement, because #"/" could be a word | refinement/rule (append/only val refinement/val) ;#"/" | tag/rule (append/only val tag/val) ;#"<" ;| skip ();invalid UTF8 byte? 
| [and [#")" | #"]"] | pos: (abort 'syntax-error pos)] ] ] ] rebol: context [ val: _ rule: [ (insert/only stack nested-rebol/val) nested-rebol/rule ( val: nested-rebol/val nested-rebol/val: take stack ) | (nested-rebol/val: take stack) fail ] ] scan-source: function [ source [binary! string!] <with> binary-source? skip-char skip-char-or-abort ][ ;debug ["scanning:" mold source] binary-source?: binary? source either binary-source? [ skip-char: [utf8-char] skip-char-or-abort: [utf8-char | pos: (abort 'invalid-utf8-char pos)] ][ skip-char: skip-char-or-abort: [skip] ] pre-parse source ;trace on ret: parse source rebol/rule ;trace off if ret [ return rebol/val ] fail "Uncatched syntax error" ]
lemma complex_derivative_transform_within_open: "\<lbrakk>f holomorphic_on s; g holomorphic_on s; open s; z \<in> s; \<And>w. w \<in> s \<Longrightarrow> f w = g w\<rbrakk> \<Longrightarrow> deriv f z = deriv g z"
(* Author: Norbert Schirmer Maintainer: Norbert Schirmer, norbert.schirmer at web de License: LGPL *) (* Title: XVcgEx.thy Author: Norbert Schirmer, TU Muenchen Copyright (C) 2006-2008 Norbert Schirmer Some rights reserved, TU Muenchen This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this library; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA *) header "Examples for Parallel Assignments" theory XVcgEx imports "../XVcg" begin record "globals" = "G_'"::"nat" "H_'"::"nat" record 'g vars = "'g state" + A_' :: nat B_' :: nat C_' :: nat I_' :: nat M_' :: nat N_' :: nat R_' :: nat S_' :: nat Arr_' :: "nat list" Abr_':: string term "BASIC \<acute>A :== x, \<acute>B :== y END" term "BASIC \<acute>G :== \<acute>H, \<acute>H :== \<acute>G END" term "BASIC LET (x,y) = (\<acute>A,b); z = \<acute>B IN \<acute>A :== x, \<acute>G :== \<acute>A + y + z END" lemma "\<Gamma>\<turnstile> \<lbrace>\<acute>A = 0\<rbrace> \<lbrace>\<acute>A < 0\<rbrace> \<longmapsto> BASIC LET (a,b,c) = foo \<acute>A IN \<acute>A :== a, \<acute>B :== b, \<acute>C :== c END \<lbrace>\<acute>A = x \<and> \<acute>B = y \<and> \<acute>C = c\<rbrace>" apply vcg oops lemma "\<Gamma>\<turnstile> \<lbrace>\<acute>A = 0\<rbrace> \<lbrace>\<acute>A < 0\<rbrace> \<longmapsto> BASIC LET (a,b,c) = foo \<acute>A IN \<acute>A :== a, \<acute>G :== b + \<acute>B, \<acute>H :== c END \<lbrace>\<acute>A = x \<and> \<acute>G = y \<and> \<acute>H = c\<rbrace>" apply vcg oops definition foo:: "nat \<Rightarrow> (nat \<times> nat \<times> nat)" where "foo n = (n,n+1,n+2)" lemma "\<Gamma>\<turnstile> \<lbrace>\<acute>A = 0\<rbrace> \<lbrace>\<acute>A < 0\<rbrace> \<longmapsto> BASIC LET (a,b,c) = foo \<acute>A IN \<acute>A :== a, \<acute>G :== b + \<acute>B, \<acute>H :== c END \<lbrace>\<acute>A = x \<and> \<acute>G = y \<and> \<acute>H = c\<rbrace>" apply (vcg add: foo_def snd_conv fst_conv) oops end
#include <boost/thread.hpp> void run_thread() { return; } int main() { boost::thread t(run_thread); t.join(); /* wait for the thread to finish before exiting */ return 0; }
C09AAJ Example Program Results Parameters read from file :: Wavelet : Haar End mode : Zero N = 8 Input data X : 2.000 5.000 8.000 9.000 7.000 4.000 -1.000 1.000 Length of wavelet filter : 2 Number of Levels : 3 Number of coefficients in each level: 1 1 2 4 Total number of wavelet coefficients : 8 Wavelet coefficients C: 12.374 4.596 -5.000 5.500 -2.121 -0.707 2.121 -1.414 Reconstruction Y : 2.000 5.000 8.000 9.000 7.000 4.000 -1.000 1.000
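The coefficients listed above are consistent with an orthonormal Haar transform, so they can be cross-checked outside NAG. The sketch below is an added illustration, assuming the PyWavelets package is available and that its orthonormal Haar filters with zero end-extension match the normalisation used by the C09AAJ example; treat it as a sanity check rather than a reference implementation.

```python
import pywt

# Input data X from the example output above.
x = [2.0, 5.0, 8.0, 9.0, 7.0, 4.0, -1.0, 1.0]

# Three-level Haar DWT with zero end-extension.
coeffs = pywt.wavedec(x, "haar", mode="zero", level=3)
for name, c in zip(["cA3", "cD3", "cD2", "cD1"], coeffs):
    print(name, [round(float(v), 3) for v in c])

# Inverse transform recovers the original data (Reconstruction Y above).
print(pywt.waverec(coeffs, "haar", mode="zero"))
```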
library(methods) library(data.table) {{rimport}}('__init__.r', 'plot.r') indir = {{i.indir | R}} outdir = {{o.outdir | R}} plink = {{args.plink | R}} indep = {{args.indep | R}} highld = {{args.highld | R}} devpars = {{args.devpars | R}} pihat = {{args.pihat | R}} samid = {{R(args.samid) if not args.samid.startswith("function") else args.samid}} annofile = {{args.anno | R}} doplot = {{args.plot | R}} seed = {{args.seed | R}} shell$load_config(plink = plink) bedfile = Sys.glob(file.path(indir, '*.bed')) input = tools::file_path_sans_ext(bedfile) output = file.path(outdir, basename(input)) params = list( bfile = input, exclude = highld, range = T, `indep-pairwise` = indep, out = output ) shell$plink(params, .raise = TRUE, .report = TRUE, .fg = TRUE)$reset() prunein = paste0(output, '.prune.in') params = list( bfile = input, extract = prunein, genome = T, out = output ) shell$plink(params, .raise = TRUE, .report = TRUE, .fg = TRUE)$reset() genome = read.table(paste0(output, '.genome'), row.names = NULL, header = T, check.names = F) # "unmelt" it # FID1 IID1 FID2 IID2 RT EZ Z0 Z1 Z2 PI_HAT PHE DST PPC RATIO # s1 s1 s2 s2 UN NA 1.0000 0.0000 0.0000 0.0000 -1 0.866584 0.0000 0.9194 # s1 s1 s2 s2 UN NA 0.4846 0.3724 0.1431 0.3293 -1 0.913945 0.7236 2.0375 # s1 s1 s3 s3 UN NA 1.0000 0.0000 0.0000 0.0000 -1 0.867186 0.0000 1.0791 genome$SAMPLE1 = paste(genome$FID1, genome$IID1, sep = "\t") genome$SAMPLE2 = paste(genome$FID2, genome$IID2, sep = "\t") # get all samples samples = unique(c(genome$SAMPLE1, genome$SAMPLE2)) # make paired into a distance-like matrix similarity = dcast(genome, SAMPLE1 ~ SAMPLE2, value.var = "PI_HAT") rm(genome) # get the rownames back samids = unlist(similarity[, 1, drop = TRUE]) rownames(similarity) = samids similarity = similarity[, -1, drop = F] # get samples that didn't involved missedrow = setdiff(samples, rownames(similarity)) missedcol = setdiff(samples, colnames(similarity)) similarity[missedrow, ] = NA similarity[, missedcol] = NA # order the matrix similarity = similarity[samples, samples, drop = F] # transpose the matrix to get the symmetric values sim2 = t(similarity) isna = is.na(similarity) # fill the na's with their symmetric values similarity[isna] = sim2[isna] rm(sim2) # still missing: keep them similarity[is.na(similarity)] = 0 # get the marks (samples that fail the pihat cutoff) nsams = length(samples) fails = which(similarity > pihat) marks = data.frame(x = (fails - 1)%%nsams + 1, y = ceiling(fails/nsams)) diag(similarity) = 1 failflags = rep(F, nrow(marks)) freqs = as.data.frame(table(factor(as.matrix(marks)))) freqs = freqs[order(freqs$Freq, decreasing = T), 'Var1', drop = T] ibd.fail = c() while (sum(failflags) < nrow(marks)) { samidx = freqs[1] ibd.fail = c(ibd.fail, samples[samidx]) freqs = freqs[-1] sapply(1:nrow(marks), function(i) { if (samidx %in% marks[i,]) failflags[i] <<- T }) } if (!is.null(ibd.fail)) writeLines(ibd.fail, con = file(paste0(output, '.ibd.fail'))) if (doplot) { set.seed(seed) library(ComplexHeatmap) fontsize8 = gpar(fontsize = 8) fontsize9 = gpar(fontsize = 9) ht_opt$heatmap_row_names_gp = fontsize8 ht_opt$heatmap_column_names_gp = fontsize8 ht_opt$legend_title_gp = fontsize9 ht_opt$legend_labels_gp = fontsize8 ht_opt$simple_anno_size = unit(3, "mm") samids = sapply(samples, function(sid) { fidiid = unlist(strsplit(sid, "\t", fixed = TRUE)) if (is.function(samid)) { do.call(samid, as.list(fidiid)) } else { gsub("fid", fidiid[1], gsub("iid", fidiid[2], samid)) } }) rownames(similarity) = samids colnames(similarity) = samids 
annos = list() if (is.true(annofile)) { options(stringsAsFactors = TRUE) andata = read.table.inopts(annofile, list(cnames = TRUE, rnames = TRUE)) andata = andata[samids,,drop = FALSE] for (anname in colnames(andata)) { annos[[anname]] = as.matrix(andata[, anname]) } annos$annotation_name_gp = fontsize8 annos = do.call(HeatmapAnnotation, annos) } params = list( name = "PI_HAT", cell_fun = function(j, i, x, y, width, height, fill) { if (similarity[i, j] > pihat && i != j) grid.points(x, y, pch = 4, size = unit(.5, "char")) }, #heatmap_legend_param = list( # title_gp = fontsize9, # labels_gp = fontsize8 #), clustering_distance_rows = function(m) as.dist(1-m), clustering_distance_columns = function(m) as.dist(1-m), top_annotation = if (length(annos) == 0) NULL else annos ) plot.heatmap2( similarity, paste0(output, '.ibd.png'), params = params, draw = list( annotation_legend_list = list( Legend( labels = paste(">", pihat), title = "", type = "points", pch = 4, title_gp = fontsize9, labels_gp = fontsize8)), merge_legend = TRUE), devpars = devpars) }
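For readers more familiar with Python, the reshaping step above (turning the pairwise `.genome` table into a symmetric PI_HAT matrix) can be sketched with pandas. This is an added illustration, not part of the pipeline: `output.genome` is a placeholder for the plink `--genome` output file, and the column names follow the format shown in the comments above.

```python
import pandas as pd

# Read the whitespace-delimited plink --genome table; the path is a placeholder.
genome = pd.read_csv("output.genome", sep=r"\s+")

# Build composite sample keys, mirroring paste(FID, IID, sep = "\t") in R.
genome["S1"] = genome["FID1"].astype(str) + "\t" + genome["IID1"].astype(str)
genome["S2"] = genome["FID2"].astype(str) + "\t" + genome["IID2"].astype(str)

samples = sorted(set(genome["S1"]) | set(genome["S2"]))

# "Unmelt" into a square matrix (one row per pair is assumed), then fill each
# missing entry from its transposed counterpart and zero whatever remains.
pihat = (genome.pivot(index="S1", columns="S2", values="PI_HAT")
               .reindex(index=samples, columns=samples))
pihat = pihat.combine_first(pihat.T).fillna(0.0)
```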
[STATEMENT] lemma ffb_fbd_galois: "(bd\<^sub>\<F> f) \<stileturn> (fb\<^sub>\<F> f)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. bd\<^sub>\<F> f \<stileturn> fb\<^sub>\<F> f [PROOF STEP] unfolding adj_def ffb_set klift_prop [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<forall>x y. (\<mu> (\<P> f x) \<subseteq> y) = (x \<subseteq> {x. f x \<subseteq> y}) [PROOF STEP] by blast
From CReal.MetricSpace Require Export M_pack. Theorem PNP : forall p : Prop, ~(p /\ ~p) . Proof. unfold not. intros. destruct H. apply H0 in H. apply H. Qed. Lemma le_one : forall (m n : nat), (m <= n)%nat \/ (n <= m)%nat. Proof. intro m. induction m. -intros. left. induction n. apply le_n. apply le_S. apply IHn. -intros. destruct n. +right. apply le_0_n. +destruct IHm with (n := n). {left. apply le_n_S. auto. } {right. apply le_n_S. auto. } Qed. Lemma le_equ : forall (m n : nat), (m <= n)%nat -> (n <= m)%nat -> m = n. Proof. intro m. induction m as [| m' IH]. -intros. destruct n. auto. inversion H0. -intros. destruct n. +inversion H. +apply le_S_n in H. apply le_S_n in H0. assert (m' = n). apply IH. apply H. apply H0. auto. Qed. Lemma always_greater : forall (m n : nat), exists N, (m < N)%nat /\ (n < N)%nat. Proof. intro m. induction m. -intros. exists (S n). split. apply neq_0_lt. unfold not. intros. inversion H. unfold lt. apply le_n. -intros. destruct IHm with (n :=n) as [N']. destruct H. exists (S N'). split. apply lt_n_S. auto. unfold lt. unfold lt in H0. apply (le_trans (S n) N' (S N')). auto. apply le_S. apply le_n. Qed.
One of the key elements for most cloud solutions and hosting environments is networking. In Azure, using the ARM deployment model, this lab will cover various topics regarding Azure Virtual Networks and how to use them. We will cover proper CIDR selection, region selection, and deploying new virtual networks.
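Before the lab itself, it can help to see what "proper CIDR selection" amounts to in practice: picking an address space large enough for the virtual network, carving it into non-overlapping subnets, and checking it against peered ranges. The sketch below is an added illustration using only Python's standard library; the address ranges are arbitrary examples, not values prescribed by the lab.

```python
import ipaddress

# Candidate VNet address space (example value only).
vnet = ipaddress.ip_network("10.0.0.0/16")

# Carve the space into /24 subnets for individual workloads.
subnets = list(vnet.subnets(new_prefix=24))
print(vnet.num_addresses)      # 65536 addresses available in the VNet
print(len(subnets))            # 256 possible /24 subnets
print(subnets[0], subnets[1])  # 10.0.0.0/24 10.0.1.0/24

# A peered VNet must not overlap this address space.
peer = ipaddress.ip_network("10.1.0.0/16")
print(vnet.overlaps(peer))     # False, so the two ranges can be peered
```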
The Acoustic Sound Organization ("AcoSoundOrg") is a virtual panoply covering all things ukulele! Video lessons, local performances in Tokyo, a blog, music sales, and much, much more. I first learned of AcoSoundOrg through its YouTube postings. Frankly, I'm not sure who comprises the actual organization but there seem to be several regulars covering everything from classic Rock and Roll to Jazz and beyond. Some are better than others. All are interesting. Given my preference for Jazz I recently watched a duo play a spirited arrangement of Take the A Train. If you are patient you can find some chord charts on their website for downloading. I found a TAB of Moon River in JPEG format that I was able to print out after adding it to Apple Images. The video accompanying this article has a beautiful soundtrack, streamed from YouTube. It was amazing to hear such a beautiful track; I really want to know more about the performers, and through this article I was able to find their details and all of their tracks.
State Before: C : Type u inst✝¹ : Category C inst✝ : HasStrictTerminalObjects C I : C hI : IsTerminal I A : C f g : I ⟶ A ⊢ f = g State After: C : Type u inst✝¹ : Category C inst✝ : HasStrictTerminalObjects C I : C hI : IsTerminal I A : C f g : I ⟶ A this : IsIso f ⊢ f = g Tactic: haveI := hI.isIso_from f State Before: C : Type u inst✝¹ : Category C inst✝ : HasStrictTerminalObjects C I : C hI : IsTerminal I A : C f g : I ⟶ A this : IsIso f ⊢ f = g State After: C : Type u inst✝¹ : Category C inst✝ : HasStrictTerminalObjects C I : C hI : IsTerminal I A : C f g : I ⟶ A this✝ : IsIso f this : IsIso g ⊢ f = g Tactic: haveI := hI.isIso_from g State Before: C : Type u inst✝¹ : Category C inst✝ : HasStrictTerminalObjects C I : C hI : IsTerminal I A : C f g : I ⟶ A this✝ : IsIso f this : IsIso g ⊢ f = g State After: no goals Tactic: exact eq_of_inv_eq_inv (hI.hom_ext (inv f) (inv g))
2 Joahaz was twenty and three years old when he began to reign; and he reigned three months in Jerusalem. 3 And the king of Egypt deposed him at Jerusalem, and fined the land a hundred talents of silver and a talent of gold. 4 And the king of Egypt made Eliakim his brother king over Judah and Jerusalem, and changed his name to Jehoiakim. And Neco took Joahaz his brother, and carried him to Egypt. 5 Jehoiakim was twenty and five years old when he began to reign; and he reigned eleven years in Jerusalem: and he did that which was evil in the sight of Jehovah his God. 7 Nebuchadnezzar also carried of the vessels of the house of Jehovah to Babylon, and put them in his temple at Babylon. 9 Jehoiachin was eight years old when he began to reign; and he reigned three months and ten days in Jerusalem: and he did that which was evil in the sight of Jehovah. 10 And at the return of the year king Nebuchadnezzar sent, and brought him to Babylon, with the goodly vessels of the house of Jehovah, and made Zedekiah his brother king over Judah and Jerusalem. 12 and he did that which was evil in the sight of Jehovah his God; he humbled not himself before Jeremiah the prophet [speaking] from the mouth of Jehovah. 13 And he also rebelled against king Nebuchadnezzar, who had made him swear by God: but he stiffened his neck, and hardened his heart against turning unto Jehovah, the God of Israel. 14 Moreover all the chiefs of the priests, and the people, trespassed very greatly after all the abominations of the nations; and they polluted the house of Jehovah which he had hallowed in Jerusalem. 16 but they mocked the messengers of God, and despised his words, and scoffed at his prophets, until the wrath of Jehovah arose against his people, till there was no remedy. 17 Therefore he brought upon them the king of the Chaldeans, who slew their young men with the sword in the house of their sanctuary, and had no compassion upon young man or virgin, old man or hoary-headed: he gave them all into his hand. 18 And all the vessels of the house of God, great and small, and the treasures of the house of Jehovah, and the treasures of the king, and of his princes, all these he brought to Babylon. 21 to fulfil the word of Jehovah by the mouth of Jeremiah, until the land had enjoyed its sabbaths: [for] as long as it lay desolate it kept sabbath, to fulfil threescore and ten years. 23 Thus saith Cyrus king of Persia, All the kingdoms of the earth hath Jehovah, the God of heaven, given me; and he hath charged me to build him a house in Jerusalem, which is in Judah. Whosoever there is among you of all his people, Jehovah his God be with him, and let him go up.
#pragma once #include "Identifier.hpp" #include <cstdint> #include <boost/spirit/home/x3/support/ast/variant.hpp> namespace pdl::spirit::syntax::literal { struct Auto : Annotation<Auto> { [[maybe_unused]] bool discovered = false; }; struct Default : Annotation<Default> { [[maybe_unused]] bool discovered = false; }; struct Placeholder : Annotation<Placeholder> { uint16_t value = 0; }; struct Designator : Annotation<Designator> { Identifier member; }; struct Numeric : Annotation<Numeric> { std::int64_t value = 0; }; struct Float32 : Annotation<Float32> { float value = 0.0; }; struct Float64 : Annotation<Float64> { double value = 0.0; }; struct Boolean : Annotation<Boolean> { bool value = false; }; struct String : Annotation<String> { std::string value; }; struct MacAddress : Annotation<MacAddress> { std::string value; }; struct IPv4Address : Annotation<IPv4Address> { std::string value; }; struct Definition : Annotation<Definition> { std::string value; }; struct Prefix : Annotation<Prefix> { std::string value; }; struct MappingMemberAccess : x3::variant<Placeholder, Numeric, Definition>, Annotation<MappingMemberAccess> { }; struct MappingMember : Annotation<MappingMember> { std::string statement; std::string member; MappingMemberAccess value; }; struct DefaultValue : x3::variant<Placeholder, Numeric, Definition>, Annotation<DefaultValue> { using base_type::base_type; using base_type::operator=; }; struct Literal : x3::variant<Default, Numeric, Float32, Float64, Boolean, String, MacAddress, IPv4Address>, Annotation<Literal> { using base_type::base_type; using base_type::operator=; }; struct Id : x3::variant<Auto, Numeric, Definition>, Annotation<Id> { using base_type::base_type; using base_type::operator=; }; struct Variable : x3::variant<Literal, types::InternalDefines, Definition>, Annotation<Variable> { using base_type::base_type; using base_type::operator=; }; } // namespace literal. BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Auto, discovered) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Default, discovered) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Placeholder, value) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Designator, member) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Numeric, value) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Float32, value) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Float64, value) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Boolean, value) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::String, value) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::MacAddress, value) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::IPv4Address, value) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Definition, value) BOOST_FUSION_ADAPT_STRUCT(pdl::spirit::syntax::literal::Prefix, value)
#include <boost/algorithm/string/config.hpp>
lemma aff_dim_convex_hull: fixes S :: "'n::euclidean_space set" shows "aff_dim (convex hull S) = aff_dim S"
Sky's Dog Walking and Pet Sitting is focused on providing outstanding pet care. Long walks for your young, energy-crazed dog? Or just showing some love to a dog with separation anxiety? How about anywhere in between? They provide it all, including baths on request, driving to and from the vet for any scheduled visits, and much more. They also provide in-home pet sitting; this can be for a few hours or for a few days, keeping your animals healthy and happy in your own home while you are away. Services & Rates Sky's Dog Walking and Pet Sitting's main goal is to offer very affordable pet services while providing the same quality and care as some of the most expensive options. Dog Walking The rate for dog walking is $15 a visit/day. This provides a 1 to 2 hour session with your dog, depending on your dog's needs. It is also completely personalized; tell them what your dog needs. A 2 hour walk? An hour walk and an hour of play? Or just a little walk and some attention? Whatever suits your dog! In-home Pet Sitting All of their pet sitting is done in-home, meaning they only provide pet sitting services in a client's (your) home. Pets are most comfortable in their own homes, and any additional housekeeping can also be provided during the visit or stay, such as bringing in the mail, feeding fish, watering plants, etc. Day Pet Sitting Daily visits for feeding, playtime, and any home care need. This is $15 to $30 a visit depending on location, number of dogs, and duration of the visit. Overnight Pet Sitting This is also done in the client's home and is the same as a daily visit except 24/7, with an overnight stay. This is typically $30 a day/night, and there is an additional $5 charge per dog or cat. Additional Services Baths $5 Vet visits (driving to and from) $5 to $15 (depending on the location of the vet's office)
From Test Require Import tactic. Section FOFProblem. Variable Universe : Set. Variable UniverseElement : Universe. Variable wd_ : Universe -> Universe -> Prop. Variable col_ : Universe -> Universe -> Universe -> Prop. Variable col_swap1_1 : (forall A B C : Universe, (col_ A B C -> col_ B A C)). Variable col_swap2_2 : (forall A B C : Universe, (col_ A B C -> col_ B C A)). Variable col_triv_3 : (forall A B : Universe, col_ A B B). Variable wd_swap_4 : (forall A B : Universe, (wd_ A B -> wd_ B A)). Variable col_trans_5 : (forall P Q A B C : Universe, ((wd_ P Q /\ (col_ P Q A /\ (col_ P Q B /\ col_ P Q C))) -> col_ A B C)). Theorem pipo_6 : (forall O E Eprime S U1 A AX B BX C CX BXMAX CXMAX AB AC IAC T A1 A2 BXprimeprime CXprime ABXprimeprime ACXprimeprime : Universe, ((wd_ A B /\ (wd_ A C /\ (wd_ B C /\ (wd_ A1 A2 /\ (wd_ C CXprime /\ (wd_ B BXprimeprime /\ (wd_ A BXprimeprime /\ (wd_ O E /\ (wd_ E Eprime /\ (wd_ O Eprime /\ (wd_ S U1 /\ (col_ O E AX /\ (col_ O E BX /\ (col_ O E CX /\ (col_ O E BXMAX /\ (col_ O E CXMAX /\ (col_ O E T /\ (col_ O E AB /\ (col_ O E AC /\ (col_ O E IAC /\ (col_ A A1 A2 /\ (col_ O E ABXprimeprime /\ (col_ O E ACXprimeprime /\ (col_ A1 A2 BXprimeprime /\ (col_ S U1 B /\ (col_ A1 A2 C /\ (col_ S U1 CXprime /\ col_ A B C))))))))))))))))))))))))))) -> col_ A B BXprimeprime)). Proof. time tac. Qed. End FOFProblem.
[STATEMENT] lemma pot_comp_cong: assumes "snd (rep_nat_term u) = snd (rep_nat_term v) \<Longrightarrow> to1 u v = to2 u v" shows "pot_comp to1 u v = pot_comp to2 u v" [PROOF STATE] proof (prove) goal (1 subgoal): 1. pot_comp to1 u v = pot_comp to2 u v [PROOF STEP] using assms [PROOF STATE] proof (prove) using this: snd (rep_nat_term u) = snd (rep_nat_term v) \<Longrightarrow> to1 u v = to2 u v goal (1 subgoal): 1. pot_comp to1 u v = pot_comp to2 u v [PROOF STEP] by (simp add: pot_comp comparator_of_def split: order.split)
State Before: α : Type u β : Type v inst✝ : Group α i : ℤ a b : α ⊢ (a * b * a⁻¹) ^ i = a * b ^ i * a⁻¹ State After: case ofNat α : Type u β : Type v inst✝ : Group α a b : α a✝ : ℕ ⊢ (a * b * a⁻¹) ^ Int.ofNat a✝ = a * b ^ Int.ofNat a✝ * a⁻¹ case negSucc α : Type u β : Type v inst✝ : Group α a b : α a✝ : ℕ ⊢ (a * b * a⁻¹) ^ Int.negSucc a✝ = a * b ^ Int.negSucc a✝ * a⁻¹ Tactic: induction' i State Before: case ofNat α : Type u β : Type v inst✝ : Group α a b : α a✝ : ℕ ⊢ (a * b * a⁻¹) ^ Int.ofNat a✝ = a * b ^ Int.ofNat a✝ * a⁻¹ State After: case ofNat α : Type u β : Type v inst✝ : Group α a b : α a✝ : ℕ ⊢ (a * b * a⁻¹) ^ Int.ofNat a✝ = a * b ^ Int.ofNat a✝ * a⁻¹ Tactic: change (a * b * a⁻¹) ^ (_ : ℤ) = a * b ^ (_ : ℤ) * a⁻¹ State Before: case ofNat α : Type u β : Type v inst✝ : Group α a b : α a✝ : ℕ ⊢ (a * b * a⁻¹) ^ Int.ofNat a✝ = a * b ^ Int.ofNat a✝ * a⁻¹ State After: no goals Tactic: simp [zpow_ofNat] State Before: case negSucc α : Type u β : Type v inst✝ : Group α a b : α a✝ : ℕ ⊢ (a * b * a⁻¹) ^ Int.negSucc a✝ = a * b ^ Int.negSucc a✝ * a⁻¹ State After: case negSucc α : Type u β : Type v inst✝ : Group α a b : α a✝ : ℕ ⊢ a * ((b ^ (a✝ + 1))⁻¹ * a⁻¹) = a * (b ^ (a✝ + 1))⁻¹ * a⁻¹ Tactic: simp [zpow_negSucc, conj_pow] State Before: case negSucc α : Type u β : Type v inst✝ : Group α a b : α a✝ : ℕ ⊢ a * ((b ^ (a✝ + 1))⁻¹ * a⁻¹) = a * (b ^ (a✝ + 1))⁻¹ * a⁻¹ State After: no goals Tactic: rw [mul_assoc]
from datetime import datetime from sklearn.metrics import classification_report, confusion_matrix from tensorflow.python.keras.models import load_model from tensorflow.python.keras.callbacks import ModelCheckpoint, TensorBoard import pickle from tensorflow.python.keras.layers import Dense, Input, Dropout, Flatten, concatenate from tensorflow.python.keras.layers.convolutional import Conv1D from tensorflow.python.keras.layers.pooling import MaxPool1D from tqdm import tqdm import numpy as np from tensorflow.python.keras.layers.recurrent import LSTM from tensorflow.python.keras.models import Model from tensorflow.python.keras.utils import plot_model import pickle from helpers import load_batch, dump_batch import numpy as np from constants import NUM_DIMENSIONS_WORD2VEC, HEADLINES_INPUT_LEN, BODIES_INPUT_LEN # from gensim.models import KeyedVectors import os from datetime import datetime def load_token_vectors(filepath): with open(filepath,"rb") as f: return pickle.load(f) def join(*paths): return '/'.join(paths) def encode_docs(documents,token_vectors,num_dimensions_word2vec,max_len): """Encodes a text document so that the model can use it as input""" vectors_sequences = np.zeros(shape=(len(documents),max_len,num_dimensions_word2vec),dtype="float16") for document_ind,document in tqdm(enumerate(documents)): tokens = document.split() for i in range(min(max_len,len(tokens))): token = tokens[i] if token in token_vectors: vectors_sequences[document_ind,i] = token_vectors[token] return vectors_sequences def process_data(folder_prefix): headlines, bodies, one_hot_stances = load_batch(["headlines","bodies","one_hot_stances"],folder_prefix=folder_prefix) print("Dataset loaded") #encoding dataset token_vectors = load_token_vectors("token_vectors.pickle") encoded_bodies =encode_docs(bodies,token_vectors,NUM_DIMENSIONS_WORD2VEC,BODIES_INPUT_LEN) encoded_headlines =encode_docs(headlines,token_vectors,NUM_DIMENSIONS_WORD2VEC,HEADLINES_INPUT_LEN) print("Finished encoding dataset") Y_train = np.asarray(one_hot_stances)[:,:3] return encoded_headlines,encoded_bodies,Y_train def save_doc(filename,text): with open(filename,"w") as f: f.write(text) def evaluate(model,X,Y,name): preds = model.predict(X) harsh_preds = np.argmax(preds,axis=-1) harsh_y = np.argmax(Y,axis=-1) report = classification_report(harsh_y,harsh_preds,target_names=["Agree","Disagree","Discusses"]) conf_matrix = confusion_matrix(harsh_y,harsh_preds) evaluate_str = '\n\n'.join((name,report,str(conf_matrix))) print(evaluate_str) # save_doc(f"report-{name}",evaluate_str) def main(): #shutting down kmp warnings os.environ["KMP_WARNINGS"] = "off" #loading dataset model_path = input("Enter model filepath") model = load_model(model_path) encoded_headlines_train,encoded_bodies_train,Y_train = process_data("clean_training_data") encoded_headlines_valid,encoded_bodies_valid,Y_valid = process_data("clean_testing_data") evaluate(model,[encoded_headlines_train,encoded_bodies_train],Y_train,name=f"train-{model_path}") evaluate(model,[encoded_headlines_valid,encoded_bodies_valid],Y_valid,name=f"valid-{model_path}") if __name__ == "__main__": main()
theory TLB imports "HOL-Word.Word" PTABLE_TLBJ.PageTable_seL4 L3_LIBJ.L3_Lib begin record tlb_flags = nG :: "1 word" (* nG = 0 means global *) perm_APX :: "1 word" (* Access permission bit 2 *) perm_AP :: "2 word" (* Access permission bits 1 and 0 *) perm_XN :: "1 word" (* Execute-never bit *) type_synonym asid = "8 word" datatype tlb_entry = EntrySmall (asid_of :"asid option") "20 word" "20 word" tlb_flags | EntrySection (asid_of :"asid option") "12 word" "12 word" tlb_flags type_synonym tlb = "tlb_entry set" (* polymorphic lookup type *) datatype 'e lookup_type = Miss | Incon | Hit 'e instantiation lookup_type :: (_) order begin definition less_eq_lookup_type: "e \<le> e' \<equiv> e' = Incon \<or> e' = e \<or> e = Miss" definition less_lookup_type: "e < (e'::'a lookup_type) \<equiv> e \<le> e' \<and> e \<noteq> e'" instance by intro_classes (auto simp add: less_lookup_type less_eq_lookup_type) end class entry_op = fixes range_of :: "'a \<Rightarrow> vaddr set" begin definition entry_set :: "'a set \<Rightarrow> vaddr \<Rightarrow> 'a set" where "entry_set t v \<equiv> {e\<in>t. v : range_of e}" end definition lookup :: "(vaddr \<Rightarrow> 'a set ) \<Rightarrow> vaddr \<Rightarrow> 'a lookup_type" where "lookup t v \<equiv> if t v = {} then Miss else if \<exists>x. t v = {x} then Hit (the_elem (t v)) else Incon" (* access a lookup as "lookup' t v \<equiv> lookup (entry_set t) v" *) (* Address translation *) fun va_to_pa :: "vaddr \<Rightarrow> tlb_entry \<Rightarrow> paddr" where "va_to_pa va (EntrySmall a vba pba fl) = Addr ((ucast pba << 12) OR (addr_val va AND mask 12))" | "va_to_pa va (EntrySection a vba pba fl) = Addr ((ucast pba << 20) OR (addr_val va AND mask 20))" instantiation tlb_entry :: entry_op begin definition "range_of (e :: tlb_entry) \<equiv> case e of EntrySmall a vba pba fl \<Rightarrow> Addr ` {(ucast vba) << 12 .. ((ucast vba) << 12) + (2^12 - 1)} | EntrySection a vba pba fl \<Rightarrow> Addr ` {(ucast vba) << 20 .. ((ucast vba) << 20) + (2^20 - 1)}" instance .. 
end (* page table walk interface *) definition to_tlb_flags :: "arm_perm_bits \<Rightarrow> tlb_flags" where "to_tlb_flags perms \<equiv> \<lparr>nG = arm_p_nG perms, perm_APX = arm_p_APX perms, perm_AP = arm_p_AP perms, perm_XN = arm_p_XN perms \<rparr>" definition "tag_conv (a::8 word) fl \<equiv> (if nG fl = 0 then None else Some a)" definition pt_walk :: "asid \<Rightarrow> heap \<Rightarrow> paddr \<Rightarrow> vaddr \<Rightarrow> tlb_entry option" where "pt_walk a hp rt v \<equiv> case get_pde hp rt v of None \<Rightarrow> None | Some InvalidPDE \<Rightarrow> None | Some ReservedPDE \<Rightarrow> None | Some (SectionPDE bpa perms) \<Rightarrow> Some (EntrySection (tag_conv a (to_tlb_flags perms)) (ucast (addr_val v >> 20) :: 12 word) ((word_extract 31 20 (addr_val bpa)):: 12 word) (to_tlb_flags perms)) | Some (PageTablePDE p) \<Rightarrow> (case get_pte hp p v of None \<Rightarrow> None | Some InvalidPTE \<Rightarrow> None | Some (SmallPagePTE bpa perms) \<Rightarrow> Some(EntrySmall (tag_conv a (to_tlb_flags perms)) (ucast (addr_val v >> 12) :: 20 word) ((word_extract 31 12 (addr_val bpa)):: 20 word) (to_tlb_flags perms)))" definition "is_fault e \<equiv> (e = None)" (* Flush operations *) datatype flush_type = FlushTLB | Flushvarange "vaddr set" | FlushASID asid | FlushASIDvarange asid "vaddr set" definition flush_tlb :: "tlb \<Rightarrow> tlb" where "flush_tlb t \<equiv> {}" definition flush_tlb_vset :: "tlb \<Rightarrow> vaddr set \<Rightarrow> tlb" where "flush_tlb_vset t vset = t - (\<Union>v\<in>vset. {e \<in> t. v \<in> range_of e})" (* consistency polymorphic defs *) definition consistent0 :: "(vaddr \<Rightarrow> 'b lookup_type) \<Rightarrow> (vaddr \<Rightarrow> 'b option) \<Rightarrow> vaddr \<Rightarrow> bool" where "consistent0 lukup ptwalk va \<equiv> (lukup va = Hit (the (ptwalk va)) \<and> \<not>is_fault (ptwalk va)) \<or> lukup va = Miss" lemma consistent_not_Incon_imp: "consistent0 lukup ptwalk va \<Longrightarrow> lukup va \<noteq> Incon \<and> (\<forall>e. lukup va = Hit e \<longrightarrow> e = the (ptwalk va) \<and> ptwalk va \<noteq> None)" apply (clarsimp simp: consistent0_def is_fault_def) by force lemma consistent_not_Incon': "consistent0 lukup ptwalk va = (lukup va \<noteq> Incon \<and> (\<forall>e. lukup va = Hit e \<longrightarrow> e = the (ptwalk va) \<and> ptwalk va \<noteq> None))" by ((cases "lukup va"); simp add: consistent0_def is_fault_def) end
module Main import Common.Util import Common.Interfaces import Specifications.DiscreteOrderedGroup import Specifications.OrderedRing import Proofs.GroupTheory import Proofs.TranslationInvarianceTheory import Proofs.DiscreteOrderTheory import Proofs.Interval import Instances.Notation import Instances.TrustInteger import Instances.ZZ import Addition.Carry import Addition.Absorb import Addition.Reduce %default total testCarry : Integer -> String testCarry x = case decideBetween {leq = IntegerLeq} (-18) 18 x of Yes inRange => show $ result $ computeCarry integerDiscreteOrderedGroup 9 (CheckIntegerLeq Oh) x inRange No _ => "Error" testCarryZZ : ZZ -> String testCarryZZ x = case decideBetween {leq = LTEZ} (-18) 18 x of Yes inRange => show $ result $ computeCarry zzDiscreteOrderedGroup 9 bound x inRange No _ => "Error" where bound : LTEZ 1 8 bound = LtePosPos (LTESucc LTEZero) integerDigits : Vect n Integer -> Maybe (Vect n (Digit IntegerLeq Ng 18)) integerDigits = maybeDigits {leq = IntegerLeq} Ng 18 testAddition : Vect (S k) (Digit {s = Integer} IntegerLeq Ng 18) -> (Carry, Vect (S k) Integer) testAddition inputs = outputs $ reduce integerDiscreteOrderedRing 9 (CheckIntegerLeq Oh) inputs main : IO () main = do printLn $ map testCarry [(-20)..20] printLn $ map testCarryZZ (map fromInteger [(-21)..21]) printLn $ liftA testAddition (integerDigits (reverse [17,2,13,4,5,-15,0])) ||| compile time test test1 : testCarryZZ (-15) = testCarry (-15) test1 = Refl ||| compile time test test2 : testCarryZZ 12 = "(P, 2)" test2 = Refl
{-# OPTIONS --without-K --safe #-} open import Categories.Category module Categories.Category.BicartesianClosed {o ℓ e} (𝒞 : Category o ℓ e) where open import Level open import Categories.Category.CartesianClosed 𝒞 open import Categories.Category.Cocartesian 𝒞 record BicartesianClosed : Set (levelOfTerm 𝒞) where field cartesianClosed : CartesianClosed cocartesian : Cocartesian module cartesianClosed = CartesianClosed cartesianClosed module cocartesian = Cocartesian cocartesian open cartesianClosed public open cocartesian public
Children are asked to describe their moms – and themselves – using given adjectives (smart, good-looking, strong, friendly…) or inserting new ones; to draw themselves, their moms and their whole families; to indicate their favourite activities and games with their moms; to create a real artwork, colouring the outline of a heart and making a drawing recalling an adventure they've had together; and finally, to colour a large "I love you" and leave their handprint – to be completed with their moms'. You and I, Mom. The perfect mom's gift!
Three years ago Michael Ivanov and I started the Davis Wiki with a simple goal: collect all the interesting and amazing things we'd learned about Davis in one spot, and let other people help and build on what we knew. The Davis Wiki has become an awesome success, and today provides an invaluable resource in ways we couldn't have even imagined when we started. Lots of people asked us about doing similar things for their communities. Other people, living in different places and having different interests, wanted to set up wikis for their communities. We helped a few people set up wikis, and helped provide a home to the Rochester Wiki (http://rocwiki.org), Santa Cruz wiki (http://scwiki.org), a housing coop (http://anthill.org), Chico Wiki (http://chicowiki.org), and the Pittsburgh wiki (http://pghwiki.org). As time passed, more and more folks came to us and asked us about wikis for their communities. We took some preemptive steps toward helping out more communities, and helping out the Davis Wiki in turn, when we fundraised and purchased a server last year. Even then, though, we didn't really have the proper infrastructure to accommodate all of these different communities and do so in a way that let them have complete control over their wikis. What we've been working on will do just that. We're calling this project wiki:wikispot:Front Page Wiki Spot. It's a nonprofit effort to let communities anywhere in the world initiate, maintain, publicize and fund wikis. We hope to provide a trusting home for wikis, encourage development of collaborative software, and promote adoption of the wiki as a tool for enriching communities. Basically, we're going to be doing what some of us have been doing all along, but with better, stronger infrastructure behind us. After seeing the amazing things we've accomplished together with the Davis Wiki so far, I think we've all come to realize that wikis can be an invaluable asset for communities, and that's why we've been working hard to make sure that every community can do what we've been doing in Davis. We want the only barrier to success to be drive and vision, not technical know-how. We think that it's important that our communities have a safe, noncommercial home, too. There's a lot of really terrific things we could talk about that make Wiki Spot special, like the way we're focused on making it easy to share and connect among communities, but you should go wiki:wikispot:Front Page check stuff out for yourself, and read up wiki:wikispot:about Wiki Spot. Here's to the next three years of Davis Wiki and to the very new Wiki Spot! Users/PhilipNeustrom Special thanks A special thanks goes out to the following individuals who have helped make this project a reality (in no particular order): David Poole, Arlen Abraham, Brent Laabs, Graham Freeman, Amit Vainsencher, Zac Morris, Mike Ivanov, Jon McKamey, Jack Haskel, Reid Serozi, Eric Talevich, Charles McLaughlin, Adam Dewitz, Jason Aller, David Reid, Heather Carpenter. What's new on Davis Wiki? Okay, now for some major changes you'll notice here on the Davis Wiki: The old Davis Wiki map is now gone, but it's been replaced by something new. The new way to add a location to a page is to use {{{Address}}}. Check out wiki:wikispot:Help with Maps for a lot more about the new system. The downside to this is that we didn't convert over all of the old pages, which means we'll have to work to fix up and add addresses to a lot of the pages on the wiki.
Most will probably find the new mapping system, which wiki:wikispot:Mapping Discussion currently uses Google Maps, to be a big improvement. You can now double click on nearly anything to edit it! Check out the explanation of wiki:wikispot:Quick Edit. Big idea: Interwiki community One of the big ideas behind the Wiki Spot network of wikis is that we are an interconnected group of wikis. We've made it really, really easy to watch changes on a bunch of different wikis you find interesting, and as Wiki Spot grows and provides a home to more communities you're bound to find this more useful. You can watch the recent changes of many wikis at once using the wiki:wikispot:Interwiki Recent Changes page, and you can watch all of your bookmarks on the wiki:wikispot:Interwiki Bookmarks page! Just click "watch this wiki" by your name in the upper right hand corner to, well, watch a wiki! You can do a lot of other cool stuff, so wiki:wikispot:Front Page go explore! Other stuff! You can all now wiki:wikispot:create a wiki, though you probably knew that if you got to this point, reading-wise. Interwiki links look slightly different, and are more like normal wiki links. They're now written with double quotes around the page name. For example: {{{wiki:wikispot:A_Page_in_the_Wiki A Page in the Wiki}}} is now: {{{wiki:wikispot:"A Page in the Wiki"}}} which displays as: wiki:wikispot:A Page in the Wiki You might have noticed the interwiki icon looks different too. That's because all Wiki Spot wikis get their little logo included in all interwiki links. Non-Wiki Spot wikis keep the old icons, the interwiki community dudes. If you want to get complicated, you can even do interwiki redirects in much the same way (see wiki:wikispot:Help with Linking). Help Files (e.g. Help on Editing) have generally been moved from constituent wikis to the Wiki Spot hub; see wiki:wikispot:Help with Editing there. The general reason is that it makes it much, much easier to update if we ever add new features. Which we will. Most of these help files have been renamed and rewritten as well to remove the Engrish. Links now work in Headings. Interwiki links too, even though they look a little funny. Page Includes have gotten much simpler, and {{{Include}}} has basically replaced all of the functionality of {{{IncludePages}}} without being so damn confusing. See wiki:wikispot:Help with Macros for details. User Pages are now in a separate namespace: every user's homepage is now in {{{Users/PersonName}}}. This is done partially because Wiki Spot may get a ton of users, but it also makes the distinction between public pages and user pages more apparent, as this was a continual source of confusion here. You can choose to set up a single user page in wiki:wikispot:User Settings. wiki:wikispot:Quick Edit is so awesome it gets its own page! Creative Commons Attribution 3.0 is the new license, as we have upgraded from 2.0. (All content before this date is still available under Creative Commons 2.0.)
"""Precompute and cache shapenet pointclouds. """ import os import sys WORKING_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..") sys.path.append(WORKING_DIR) import tqdm # noqa: E402 import argparse # noqa: E402 import glob # noqa: E402 import numpy as np # noqa: E402 import trimesh # noqa: E402 from multiprocessing import Pool # noqa: E402 import pickle # noqa: E402 from collections import OrderedDict # noqa: E402 def sample_vertex(filename): mesh = trimesh.load(filename) v = np.array(mesh.vertices, dtype=np.float32) np.random.shuffle(v) return v[:n_points] def wrapper(arg): return arg, sample_vertex(arg) def update(args): filename, point_samp = args fname = "/".join(filename.split("/")[-4:-1]) # note: input comes from async `wrapper` sampled_points[ fname ] = point_samp # put answer into correct index of result list pbar.update() def get_args(): """Parse command line arguments.""" parser = argparse.ArgumentParser( description="Precompute and cache shapenet pointclouds." ) parser.add_argument( "--file_pattern", type=str, default="**/*.ply", help="filename pattern for files to be rendered.", ) parser.add_argument( "--input_root", type=str, default="data/shapenet_simplified", help="path to input mesh root.", ) parser.add_argument( "--output_pkl", type=str, default="data/shapenet_points.pkl", help="path to output image root.", ) parser.add_argument( "--n_points", type=int, default=4096, help="Number of points to sample per shape", ) parser.add_argument( "--n_jobs", type=int, default=-1, help="Number of processes to use. Use all if set to -1.", ) args = parser.parse_args() return args def main(): args = get_args() patt = os.path.join(WORKING_DIR, args.input_root, args.file_pattern) in_files = glob.glob(patt, recursive=True) global sampled_points global n_points global pbar sampled_points = OrderedDict() n_points = args.n_points pbar = tqdm.tqdm(total=len(in_files)) pool = Pool(processes=None if args.n_jobs == -1 else args.n_jobs) for fname in in_files: pool.apply_async(wrapper, args=(fname,), callback=update) pool.close() pool.join() pbar.close() with open(args.output_pkl, "wb") as fh: pickle.dump(sampled_points, fh) if __name__ == "__main__": main()
lemma at_to_0: "at a = filtermap (\<lambda>x. x + a) (at 0)" for a :: "'a::real_normed_vector"
theory CS_Ch2 imports Main begin value "1 + (2::nat)" value "1 + (2::int)" value "1 - (2::nat)" value "1 - (2::int)" fun add :: "nat \<Rightarrow> nat \<Rightarrow> nat" where "add 0 n = n" | "add (Suc m) n = Suc(add m n)" lemma add_zero_does_nothing[simp]: "add a 0 = a" apply(induction a) apply(auto) done lemma add_succ[simp]: "add a (Suc b) = Suc(add a b)" apply(induction a) apply auto done lemma add_comm: "add a b = add b a" apply(induction a) apply(auto) done fun double :: "nat \<Rightarrow> nat" where "double 0 = 0" | "double (Suc n) = Suc (Suc (double n))" lemma double_is_add_self: "double m = add m m" apply(induction m) apply(auto) done fun count :: "'a \<Rightarrow> 'a list \<Rightarrow> nat" where "count _ [] = 0" | "count e (x # xs) = (if e = x then Suc (count e xs) else count e xs)" lemma count_less_than_length: "count x xs \<le> length xs" apply(induction xs) apply(auto) done fun snoc :: "'a list \<Rightarrow> 'a \<Rightarrow> 'a list" where "snoc [] a = [a]" | "snoc (x # xs) a = x # (snoc xs a)" fun reverse :: "'a list \<Rightarrow> 'a list" where "reverse [] = []" | "reverse (x # xs) = (reverse xs) @ [x]" lemma rev_distributes_over_app: "reverse (xs @ ys) = (reverse ys) @ (reverse xs)" apply(induction xs) apply(auto) done lemma rev_undos_self: "reverse (reverse xs) = xs" apply(induction xs) apply(auto) apply(simp add: rev_distributes_over_app) done fun sum :: "nat \<Rightarrow> nat" where "sum 0 = 0" | "sum (Suc n) = add (Suc n) (sum n)" lemma add_doubles_under_div: "add n (m div 2) = (n + n + m) div 2" apply(induction n) apply(auto) done lemma euler_is_cheeky: "sum n = n * (n + 1) div 2" apply(induction n) apply(auto) apply(simp add: add_doubles_under_div) done datatype 'a tree = Tip | Node "'a tree" 'a "'a tree" fun contents :: "'a tree \<Rightarrow> 'a list" where "contents Tip = []" | "contents (Node a b c) = b # (contents a) @ (contents c)" fun treesum :: "nat tree \<Rightarrow> nat" where "treesum Tip = 0" | "treesum (Node a b c) = b + (treesum a) + (treesum c)" lemma sum_preserved: "treesum t = listsum (contents t)" apply(induction t) apply(auto) done datatype 'a tree2 = Node "'a tree2" 'a "'a tree2" | Leaf "'a option" fun mirror2 :: "'a tree2 \<Rightarrow> 'a tree2" where "mirror2 (Leaf a) = Leaf a" | "mirror2 (Node a b c) = Node (mirror2 c) b (mirror2 a)" lemma "mirror2(mirror2(t)) = t" apply(induction t) apply(auto) done fun pre_order :: "'a tree2 \<Rightarrow> 'a option list" where "pre_order (Leaf a) = [a]" | "pre_order (Node a b c) = (Some b) # (pre_order a) @ (pre_order c)" fun post_order :: "'a tree2 \<Rightarrow> 'a option list" where "post_order (Leaf a) = [a]" | "post_order (Node a b c) = (post_order a) @ (post_order c) @ [Some b]" lemma "pre_order (mirror2 t) = rev (post_order t)" apply(induction t) apply(auto) done fun intersperse :: "'a \<Rightarrow> 'a list \<Rightarrow> 'a list" where "intersperse a [] = []" | "intersperse a (x # []) = [a, x]" | "intersperse a (x # xs) = [a, x] @ intersperse a xs" lemma "map f (intersperse a xs) = intersperse (f a) (map f xs)" apply(induction xs rule: intersperse.induct) apply(auto) done fun itadd :: "nat \<Rightarrow> nat \<Rightarrow> nat" where "itadd 0 n = n" | "itadd (Suc m) n = itadd m (Suc n)" lemma "itadd m n = add n m" apply(induction m arbitrary: n) apply(auto) done datatype tree0 = Leaf | Node tree0 tree0 (* note: using "+" and "1" here is important for algebra_simps to kick in! 
*) fun nodes :: "tree0 \<Rightarrow> nat" where "nodes Leaf = 1" | "nodes (Node l r) = 1 + (nodes l) + (nodes r)" fun explode :: "nat \<Rightarrow> tree0 \<Rightarrow> tree0" where "explode 0 t = t" | "explode (Suc n) t = explode n (Node t t)" theorem "nodes (explode n t) = 2^n * nodes t + 2^n - 1" apply (induction n arbitrary: t) apply (auto simp add: algebra_simps) done datatype exp = Var | Const int | Add exp exp | Mult exp exp fun eval :: "exp \<Rightarrow> int \<Rightarrow> int" where "eval Var x = x" | "eval (Const a) _ = a" | "eval (Add a b) x = (eval a x) + (eval b x)" | "eval (Mult a b) x = (eval a x) * (eval b x)" fun evalp' :: "int list \<Rightarrow> int \<Rightarrow> nat \<Rightarrow> int" where "evalp' [] _ _ = 0" | "evalp' (x # xs) val exp = (val^exp)*x + (evalp' xs val (Suc exp))" fun evalp :: "int list \<Rightarrow> int \<Rightarrow> int" where "evalp xs val = evalp' xs val 0" fun coeff_add :: "int list \<Rightarrow> int list \<Rightarrow> int list" where "coeff_add (x # xs) (y # ys) = (x + y) # (coeff_add xs ys)" | "coeff_add [] ys = ys" | "coeff_add xs [] = xs" fun shift_list_by :: "nat \<Rightarrow> 'a \<Rightarrow> 'a list \<Rightarrow> 'a list" where "shift_list_by 0 val xs = xs" | "shift_list_by (Suc n) val xs = val # (shift_list_by n val xs)" fun list_sum :: "int list \<Rightarrow> int list \<Rightarrow> int list" where "list_sum (x # xs) (y # ys) = (x+y) # list_sum xs ys" | "list_sum [] ys = ys" | "list_sum xs [] = xs" lemma "list_sum xs xs = map (\<lambda>x. x+x) xs" apply(induction xs) apply(auto) done fun fill_with :: "int list \<Rightarrow> int \<Rightarrow> int list" where "fill_with (x # xs) val = val # fill_with xs val" | "fill_with [] _ = []" (* fun coeff_mult' :: "int list \<Rightarrow> nat \<Rightarrow> int list \<Rightarrow> int list" where "coeff_mult' [] n ys = []" | "coeff_mult' (x # xs) n ys = list_sum (coeff_mult' xs (Suc n) ys) (shift_list_by n 0 (map (\<lambda>v. x*v) ys))" fun coeff_mult :: "int list \<Rightarrow> int list \<Rightarrow> int list" where "coeff_mult xs ys = coeff_mult' xs 0 ys" *) fun coeff_mult_constant :: "int \<Rightarrow> int list \<Rightarrow> int list" where "coeff_mult_constant a [] = []" | "coeff_mult_constant a (x # xs) = (a * x) # coeff_mult_constant a xs" fun coeff_mult :: "int list \<Rightarrow> int list \<Rightarrow> int list" where "coeff_mult [] ys = []" | "coeff_mult (x # xs) ys = coeff_add (coeff_mult_constant x ys) (coeff_mult xs (0 # ys))" (* I originally used a map instead of coeff_mult_constant, but I had a hard time trying to prove anything involving map. 
*) lemma "coeff_mult_constant a xs = coeff_mult [a] xs" apply(induction xs arbitrary: a) apply(auto simp add: algebra_simps map_add_def) done fun coeffs :: "exp \<Rightarrow> int list" where "coeffs Var = [0, 1]" | "coeffs (Const a) = [a]" | "coeffs (Add a b) = coeff_add (coeffs a) (coeffs b)" | "coeffs (Mult a b) = coeff_mult (coeffs a) (coeffs b)" lemma add_is_plus: "evalp' (coeff_add a b) x d = evalp' a x d + evalp' b x d" apply(induction a b arbitrary: d rule: coeff_add.induct) apply(auto simp add: algebra_simps) done lemma multiplying_by_a_constant_distributes: "evalp' (coeff_mult_constant a xs) x d = a * (evalp' xs x d)" apply(induction xs arbitrary: d) apply(auto simp add: algebra_simps add_is_plus) done lemma multiplying_by_x_extends_coeff_list: "x * evalp' a x d = evalp' (0 # a) x d" apply(induction a arbitrary: d) apply(auto simp add: algebra_simps) done lemma succ_of_exponent_mults_by_x: "evalp' a x (Suc d) = x * evalp' a x d" apply(induction d) apply(auto simp add: algebra_simps multiplying_by_x_extends_coeff_list) done lemma evalp_mult_can_swap: "evalp' a x 0 * evalp' b x n = evalp' b x 0 * evalp' a x n" apply(induction n) apply(auto simp add: algebra_simps succ_of_exponent_mults_by_x) done lemma mult_is_star: "evalp' (coeff_mult a b) x 0 = (evalp' a x 0) * (evalp' b x 0)" apply(induction a arbitrary: b) apply(auto simp add: algebra_simps add_is_plus multiplying_by_a_constant_distributes evalp_mult_can_swap) done lemma "evalp (coeffs poly) x = eval poly x" apply(induction poly rule: coeffs.induct) apply(auto simp add: algebra_simps mult_is_star add_is_plus) done end
Looking for an ambitious role that combines project management and technical solutions? Want to work in a challenging international work environment with highly educated technical colleagues? Is cooperation within the company more than self-evident for you? Do complex multidisciplinary projects energise you? Then applying for a role as Project Manager at Lloyd's Register might be a good idea! Proficient in the English and Dutch languages, written and spoken, at business level. Strong client and commercial skills. Some companies seem to work for their shareholders. You could say that Lloyd's Register's shareholder is society! Lloyd's Register has been fiercely independent since its start more than 257 years ago as what would become the world's first ship classification society. It's also where Lloyd's Register first started contributing to the safety of the world's critical infrastructure, helping ship builders and owners make safer ocean-going vessels. Today that tradition continues as Lloyd's Register, still owned by the Lloyd's Register Foundation, offers its deep technical expertise to asset owners, providing quality assurance and certification to everything from cruise ships to the pressure equipment that helps power cities everywhere! Contribute to our success story. Please apply to find out more.
lemma LIMSEQ_n_over_Suc_n: "(\<lambda>n. of_nat n / of_nat (Suc n) :: 'a :: real_normed_field) \<longlonglongrightarrow> 1"
#include <puppet/compiler/catalog.hpp> #include <puppet/compiler/exceptions.hpp> #include <puppet/cast.hpp> #include <boost/graph/hawick_circuits.hpp> #include <boost/graph/graphviz.hpp> #include <boost/format.hpp> #include <rapidjson/document.h> #include <rapidjson/prettywriter.h> using namespace std; using namespace puppet::runtime; using namespace puppet::runtime::values; namespace puppet { namespace compiler { ostream& operator<<(ostream& out, relationship relation) { // These are labels for edges in a dependency graph. // Thus, they should always read as "A <string> B", where the string explains why A depends on B. switch (relation) { case relationship::contains: out << "contains"; break; case relationship::before: out << "after"; break; case relationship::require: out << "requires"; break; case relationship::notify: out << "notified by"; break; case relationship::subscribe: out << "subscribes to"; break; default: throw runtime_error("unexpected relationship."); } return out; } resource_cycle_exception::resource_cycle_exception(std::string const& message) : runtime_error(message) { } catalog::catalog(string node, string environment) : _node(rvalue_cast(node)), _environment(rvalue_cast(environment)) { } string const& catalog::node() const { return _node; } string const& catalog::environment() const { return _environment; } resource* catalog::add(types::resource type, resource const* container, shared_ptr<evaluation::scope> scope, boost::optional<ast::context> context, bool virtualized, bool exported) { // Ensure the resource name is fully qualified if (!type.fully_qualified()) { throw runtime_error("resource name is not fully qualified."); } // Attempt to find an existing resource if (find(type)) { return nullptr; } _resources.emplace_back(resource(rvalue_cast(type), container, rvalue_cast(scope), rvalue_cast(context), exported)); auto resource = &_resources.back(); // Map the type to the resource _resource_map[resource->type()] = resource; // Append to the type list _resource_lists[resource->type().type_name()].emplace_back(resource); // Realize the resource if not virtual if (!virtualized) { realize(*resource); } return resource; } resource* catalog::find(runtime::types::resource const& type) { // Call the const overload, but cast away the const return // catalog::find should never mutate the catalog's state, but this overload should allow the resource to be modified return const_cast<resource*>(static_cast<catalog const*>(this)->find(type)); } resource const* catalog::find(runtime::types::resource const& type) const { if (!type.fully_qualified()) { return nullptr; } // Find the resource type and title auto it = _resource_map.find(type); if (it == _resource_map.end()) { return nullptr; } return it->second; } size_t catalog::size() const { return _resources.size(); } void catalog::each(function<bool(resource&)> const& callback, string const& type, size_t offset) { // Adapt the given function so that we cast away const-ness of the resource auto adapted = [&](resource const& r) { return callback(const_cast<resource&>(r)); }; // Enumerate the resources using the const overload return static_cast<catalog const*>(this)->each(adapted, type); } void catalog::each(function<bool(resource const&)> const& callback, std::string const& type, size_t offset) const { // If no type given, enumerate all resources if (type.empty()) { for (size_t i = offset; i < _resources.size(); ++i) { if (!callback(_resources[i])) { break; } } return; } // A type was given, enumerate resources of that type only auto 
it = _resource_lists.find(type); if (it == _resource_lists.end()) { return; } for (size_t i = offset; i < it->second.size(); ++i) { if (!callback(*it->second[i])) { break; } } } void catalog::each_edge(compiler::resource const& resource, function<bool(relationship, compiler::resource const&)> const& callback) const { if (resource.virtualized()) { return; } // Get the out edges from this resource for (auto const& edge : make_iterator_range(boost::out_edges(resource.vertex_id(), _graph))) { if (!callback(_graph[edge], *_graph[boost::target(edge, _graph)])) { break; } } } void catalog::relate(relationship relation, resource const& source, resource const& target) { if (source.virtualized()) { throw runtime_error("source cannot be a virtual resource."); } if (target.virtualized()) { throw runtime_error("target cannot be a virtual resource."); } auto source_ptr = &source; auto target_ptr = &target; // For before and notify, swap the target and source vertices so that the target depends on the source if (relation == relationship::before || relation == relationship::notify) { source_ptr = &target; target_ptr = &source; } // Add the edge to the graph if it doesn't already exist for (auto const& edge : make_iterator_range(boost::out_edges(source_ptr->vertex_id(), _graph))) { if (_graph[boost::target(edge, _graph)] == target_ptr && _graph[edge] == relation) { return; } } boost::add_edge(source_ptr->vertex_id(), target_ptr->vertex_id(), relation, _graph); } void catalog::realize(compiler::resource& resource) { if (!resource.virtualized()) { return; } // Realize the resource resource.realize(boost::add_vertex(&resource, _graph)); // Add a relationship from container to this resource if (resource.container()) { // Add a relationship to the container relate(relationship::contains, *resource.container(), resource); } } void catalog::populate_graph() { string const before_parameter = "before"; string const notify_parameter = "notify"; string const require_parameter = "require"; string const subscribe_parameter = "subscribe"; // Loop through each resource and add metaparameter relationships for (auto const& resource : _resources) { // Skip any virtual resources if (resource.virtualized()) { continue; } // Add the relationships from the metaparameters populate_relationships(resource, before_parameter, relationship::before); populate_relationships(resource, notify_parameter, relationship::notify); populate_relationships(resource, require_parameter, relationship::require); populate_relationships(resource, subscribe_parameter, relationship::subscribe); } } void catalog::write(ostream& out) const { // Declare an adapter for RapidJSON's pretty writter struct stream_adapter { explicit stream_adapter(ostream& stream) : _stream(stream) { } void Put(char c) { _stream.put(c); } void Flush() { } private: ostream& _stream; } adapter(out); // Create the document and treat it as an object json_document document; document.SetObject(); auto& allocator = document.GetAllocator(); // Create an array to store the classes json_value classes; classes.SetArray(); // Create an array to store the resources json_value resources; resources.SetArray(); resources.Reserve(_resources.size(),allocator); // Create an array to store the dependency edges json_value edges; edges.SetArray(); for (auto const& resource : _resources) { // Skip virtual resources if (resource.virtualized()) { continue; } if (resource.type().is_class()) { auto& title = resource.type().title(); classes.PushBack(rapidjson::StringRef(title.c_str(), title.size()), 
allocator); } // Add the resource resources.PushBack(resource.to_json(allocator, *this), allocator); // Add the edges for this resource each_edge(resource, [&](relationship relation, compiler::resource const& target) { if (relation != relationship::contains) { // The top level edges are only containment edges return true; } // Create an edge object from source to target json_value edge; edge.SetObject(); edge.AddMember( "source", json_value(boost::lexical_cast<string>(resource.type()).c_str(), allocator), allocator); edge.AddMember( "target", json_value(boost::lexical_cast<string>(target.type()).c_str(), allocator), allocator); edges.PushBack(rvalue_cast(edge), allocator); return true; }); } // Write out the catalog attributes document.AddMember("name", rapidjson::StringRef(_node.c_str(), _node.size()), allocator); document.AddMember("version", static_cast<int64_t>(std::time(nullptr)), allocator); document.AddMember("environment", rapidjson::StringRef(_environment.c_str(), _environment.size()), allocator); // Write out the resources document.AddMember("resources", rvalue_cast(resources), allocator); // Write out the containment edges document.AddMember("edges", rvalue_cast(edges), allocator); // Write out the declared classes document.AddMember("classes", rvalue_cast(classes), allocator); // Write the document to the stream rapidjson::PrettyWriter<stream_adapter> writer{adapter}; writer.SetIndent(' ', 2); document.Accept(writer); // Flush the stream with one last newline out << endl; } void catalog::write_graph(ostream& out) { out << "digraph resources {\n"; // Output the vertices for (auto const& vertex : boost::make_iterator_range(boost::vertices(_graph))) { auto resource = _graph[vertex]; if (resource->virtualized()) { // Don't write out a vertex unless the resource was realized continue; } out << " " << vertex << " [label=" << boost::escape_dot_string(boost::lexical_cast<string>(resource->type())) << "];\n"; } // Output the edges for (auto const& edge : boost::make_iterator_range(boost::edges(_graph))) { auto source = boost::source(edge, _graph); auto target = boost::target(edge, _graph); if (_graph[source]->virtualized() || _graph[target]->virtualized()) { // Don't write out vertices unless source and target are both realized continue; } out << " " << source << " -> " << target << " [label=\"" << _graph[edge] << "\"];\n"; } out << "}\n"; } struct cycle_visitor { explicit cycle_visitor(vector<string>& cycles) : _cycles(cycles) { } template <typename Path, typename Graph> void cycle(Path const& path, Graph const& graph) { ostringstream cycle; bool first = true; for (auto const& id : path) { if (first) { first = false; } else { cycle << " => "; } auto resource = graph[id]; cycle << resource->type() << " declared at " << resource->path() << ":" << resource->line(); } // Append on the first vertex again to complete the cycle auto resource = graph[path.front()]; cycle << " => " << resource->type(); _cycles.push_back(cycle.str()); } vector<string>& _cycles; }; void catalog::detect_cycles() { // Check for cycles in the graph vector<string> cycles; boost::hawick_unique_circuits(_graph, cycle_visitor(cycles)); if (cycles.empty()) { return; } // At least one cycle found, so throw an exception ostringstream message; message << "found " << cycles.size() << " resource dependency cycle" << (cycles.size() == 1 ? ":\n" : "s:\n"); for (size_t i = 0; i < cycles.size(); ++i) { if (i > 0) { message << "\n"; } message << " " << i + 1 << ". 
" << cycles[i]; } throw resource_cycle_exception(message.str()); } void catalog::populate_relationships(resource const& source, string const& name, compiler::relationship relationship) { auto attribute = source.get(name); if (!attribute) { return; } attribute->value().each_resource([&](types::resource const& target_resource) { // Locate the target in the catalog auto target = find(target_resource); if (!target) { throw evaluation_exception( (boost::format("resource %1% (declared at %2%:%3%) cannot form a '%4%' relationship with resource %5%: the resource does not exist in the catalog.") % source.type() % source.path() % source.line() % name % target_resource ).str(), attribute->value_context(), {} ); } if (&source == target) { throw evaluation_exception( (boost::format("resource %1% cannot form a relationship with itself.") % source.type() ).str(), attribute->value_context(), {} ); } // Add the relationship relate(relationship, source, *target); }, [&](string const& message) { throw evaluation_exception( (boost::format("resource %1% (declared at %2%:%3%) cannot form a '%4%' relationship: %5%") % source.type() % source.path() % source.line() % name % message ).str(), attribute->value_context(), {} ); }); } }} // namespace puppet::compiler
{-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE DeriveDataTypeable, DeriveGeneric #-} -- | -- Module : Statistics.Distribution.Normal -- Copyright : (c) 2009 Bryan O'Sullivan -- License : BSD3 -- -- Maintainer : [email protected] -- Stability : experimental -- Portability : portable -- -- The normal distribution. This is a continuous probability -- distribution that describes data that cluster around a mean. module Statistics.Distribution.Normal ( NormalDistribution -- * Constructors -- , normalDistr --, normalDistrE , standard ) where import Data.Data (Data, Typeable) import GHC.Generics (Generic) import Numeric.MathFunctions.Constants (m_sqrt_2, m_sqrt_2_pi) import Numeric.SpecFunctions (erfc, invErfc) import qualified Statistics.Distribution as D import Statistics.Internal -- | The normal distribution. data NormalDistribution = ND { mean :: {-# UNPACK #-} !Double , stdDev :: {-# UNPACK #-} !Double , ndPdfDenom :: {-# UNPACK #-} !Double , ndCdfDenom :: {-# UNPACK #-} !Double } deriving (Eq, Typeable, Data, Generic) instance Show NormalDistribution where showsPrec i (ND m s _ _) = defaultShow2 "normalDistr" m s i instance Read NormalDistribution where readPrec = defaultReadPrecM2 "normalDistr" normalDistrE instance D.Distribution NormalDistribution where cumulative = cumulative complCumulative = complCumulative instance D.ContDistr NormalDistribution where logDensity = logDensity quantile = quantile complQuantile = complQuantile -- | Standard normal distribution with mean equal to 0 and variance equal to 1 standard :: NormalDistribution standard = ND { mean = 0.0 , stdDev = 1.0 , ndPdfDenom = log m_sqrt_2_pi , ndCdfDenom = m_sqrt_2 } -- | Create normal distribution from parameters. -- -- IMPORTANT: prior to 0.10 release second parameter was variance not -- standard deviation. normalDistrE :: Double -- ^ Mean of distribution -> Double -- ^ Standard deviation of distribution -> Maybe NormalDistribution normalDistrE m sd | sd > 0 = Just ND { mean = m , stdDev = sd , ndPdfDenom = log $ m_sqrt_2_pi * sd , ndCdfDenom = m_sqrt_2 * sd } | otherwise = Nothing logDensity :: NormalDistribution -> Double -> Double logDensity d x = (-xm * xm / (2 * sd * sd)) - ndPdfDenom d where xm = x - mean d sd = stdDev d cumulative :: NormalDistribution -> Double -> Double cumulative d x = erfc ((mean d - x) / ndCdfDenom d) / 2 complCumulative :: NormalDistribution -> Double -> Double complCumulative d x = erfc ((x - mean d) / ndCdfDenom d) / 2 quantile :: NormalDistribution -> Double -> Double quantile d p | p == 0 = -inf | p == 1 = inf | p == 0.5 = mean d | p > 0 && p < 1 = x * ndCdfDenom d + mean d | otherwise = error $ "Statistics.Distribution.Normal.quantile: p must be in [0,1] range. Got: "++show p where x = - invErfc (2 * p) inf = 1/0 complQuantile :: NormalDistribution -> Double -> Double complQuantile d p | p == 0 = inf | p == 1 = -inf | p == 0.5 = mean d | p > 0 && p < 1 = x * ndCdfDenom d + mean d | otherwise = error $ "Statistics.Distribution.Normal.complQuantile: p must be in [0,1] range. Got: "++show p where x = invErfc (2 * p) inf = 1/0
(* Not heaps *) Require Import Coq.Arith.Arith. Require Import Coq.Arith.EqNat. Require Import Maps. Require Import Imp. Require Import Coq.Lists.List. Import List Notations. Require Import Smallstep. Module DistIMP. Inductive aexp : Type := | AId : id -> aexp | ANum : nat -> aexp | APlus : aexp -> aexp -> aexp | AMinus : aexp -> aexp -> aexp | AMult : aexp -> aexp -> aexp. Inductive bexp : Type := | BTrue : bexp | BFalse : bexp | BEq : aexp -> aexp -> bexp | BLe : aexp -> aexp -> bexp | BNot : bexp -> bexp | BAnd : bexp -> bexp -> bexp. Inductive com : Type := | CSkip : com | CAss : id -> aexp -> com | CSeq : com -> com -> com | CIf : bexp -> com -> com -> com | CWhile : bexp -> com -> com (* Distributed Commands*) | CSend : aexp -> id -> id -> com | CRecieve: com. Notation "'SKIP'" := CSkip. Notation "x '::=' a" := (CAss x a) (at level 60). Notation "c1 ;; c2" := (CSeq c1 c2) (at level 80, right associativity). Notation "'WHILE' b 'DO' c 'END'" := (CWhile b c) (at level 80, right associativity). Notation "'IFB' b 'THEN' c1 'ELSE' c2 'FI'" := (CIf b c1 c2) (at level 80, right associativity). Notation "'SEND' a 'TO' id1 'CALLED' id2" := (CSend a id1 id2) (at level 80, right associativity). Notation "'RECEIVE'" := (CRecieve) (at level 80, right associativity). (*Definition state := total_map nat. Definition empty_state : state := t_empty 0.*) Inductive triple (A B C : Type) : Type := | trip : A -> B -> C -> triple A B C. Notation "x '*' y '*' z" := (triple x y z) (at level 70, right associativity). Definition sb := list (aexp * id * id). Definition rb := list (aexp * id). Definition st := total_map nat. Inductive State : Type := | state : sb -> rb -> st -> State. Definition empty_state : State := state nil nil (t_empty 0). Check state. Inductive aval : aexp -> Prop := av_num : forall n, aval (ANum n). Reserved Notation " t '/' st '==>a' t' " (at level 40, st at level 39). Inductive astep : State -> aexp -> aexp -> Prop := | AS_Id : forall sb rb st i, AId i / state sb rb st ==>a ANum (st i) | AS_Plus : forall st n1 n2, APlus (ANum n1) (ANum n2) / st ==>a ANum (n1 + n2) | AS_Plus1 : forall st a1 a1' a2, a1 / st ==>a a1' -> (APlus a1 a2) / st ==>a (APlus a1' a2) | AS_Plus2 : forall st v1 a2 a2', aval v1 -> a2 / st ==>a a2' -> (APlus v1 a2) / st ==>a (APlus v1 a2') | AS_Minus : forall st n1 n2, (AMinus (ANum n1) (ANum n2)) / st ==>a (ANum (minus n1 n2)) | AS_Minus1 : forall st a1 a1' a2, a1 / st ==>a a1' -> (AMinus a1 a2) / st ==>a (AMinus a1' a2) | AS_Minus2 : forall st v1 a2 a2', aval v1 -> a2 / st ==>a a2' -> (AMinus v1 a2) / st ==>a (AMinus v1 a2') | AS_Mult : forall st n1 n2, (AMult (ANum n1) (ANum n2)) / st ==>a (ANum (mult n1 n2)) | AS_Mult1 : forall st a1 a1' a2, a1 / st ==>a a1' -> (AMult (a1) (a2)) / st ==>a (AMult (a1') (a2)) | AS_Mult2 : forall st v1 a2 a2', aval v1 -> a2 / st ==>a a2' -> (AMult v1 a2) / st ==>a (AMult v1 a2') where " t '/' st '==>a' t' " := (astep st t t'). Reserved Notation " t '/' st '==>b' t' " (at level 40, st at level 39). 
Inductive bstep : State -> bexp -> bexp -> Prop := | BS_Eq : forall st n1 n2, (BEq (ANum n1) (ANum n2)) / st ==>b (if (beq_nat n1 n2) then BTrue else BFalse) | BS_Eq1 : forall st a1 a1' a2, a1 / st ==>a a1' -> (BEq a1 a2) / st ==>b (BEq a1' a2) | BS_Eq2 : forall st v1 a2 a2', aval v1 -> a2 / st ==>a a2' -> (BEq v1 a2) / st ==>b (BEq v1 a2') | BS_LtEq : forall st n1 n2, (BLe (ANum n1) (ANum n2)) / st ==>b (if (leb n1 n2) then BTrue else BFalse) | BS_LtEq1 : forall st a1 a1' a2, a1 / st ==>a a1' -> (BLe a1 a2) / st ==>b (BLe a1' a2) | BS_LtEq2 : forall st v1 a2 a2', aval v1 -> a2 / st ==>a a2' -> (BLe v1 a2) / st ==>b (BLe v1 (a2')) | BS_NotTrue : forall st, (BNot BTrue) / st ==>b BFalse | BS_NotFalse : forall st, (BNot BFalse) / st ==>b BTrue | BS_NotStep : forall st b1 b1', b1 / st ==>b b1' -> (BNot b1) / st ==>b (BNot b1') | BS_AndTrueTrue : forall st, (BAnd BTrue BTrue) / st ==>b BTrue | BS_AndTrueFalse : forall st, (BAnd BTrue BFalse) / st ==>b BFalse | BS_AndFalse : forall st b2, (BAnd BFalse b2) / st ==>b BFalse | BS_AndTrueStep : forall st b2 b2', b2 / st ==>b b2' -> (BAnd BTrue b2) / st ==>b (BAnd BTrue b2') | BS_AndStep : forall st b1 b1' b2, b1 / st ==>b b1' -> (BAnd b1 b2) / st ==>b (BAnd b1' b2) where " t '/' st '==>b' t' " := (bstep st t t'). Inductive cstep : (com * State) -> (com * State) -> Prop := | CS_AssStep : forall (sb1 : sb) (rb1 : rb) (st1 : st) (i : id) (a a' : aexp), a / state sb1 rb1 st1 ==>a a' -> cstep (i ::= a, state sb1 rb1 st1) (i ::= a', state sb1 rb1 st1) | CS_Ass : forall (sb1 : sb) (rb1 : rb) (st1 : st) (i : id) (n : nat), cstep (i ::= (ANum n), state sb1 rb1 st1) (SKIP, state sb1 rb1 (t_update st1 i n)) | CS_SeqStep : forall (st: State) (c1 c1': com) (st' : State) (c2 : com), cstep (c1, st) (c1', st') -> cstep (c1 ;; c2, st) (c1' ;; c2, st') | CS_SeqFinish : forall (sb1 : sb) (rb1 : rb) (st1 : st) (c2 : com), cstep (SKIP ;; c2, state sb1 rb1 st1) (c2, state sb1 rb1 st1) | CS_IfTrue : forall (sb1 : sb) (rb1 : rb) (st1 : st) (c1 c2 : com), cstep (IFB BTrue THEN c1 ELSE c2 FI, state sb1 rb1 st1) (c1, state sb1 rb1 st1) | CS_IfFalse : forall (sb1 : sb) (rb1 : rb) (st1 : st) (c1 c2 : com), cstep (IFB BFalse THEN c1 ELSE c2 FI, state sb1 rb1 st1) (c2, state sb1 rb1 st1) | CS_IfStep : forall (sb1 : sb) (rb1 : rb) (st1 : st) (b b' : bexp) (c1 c2: com), b / state sb1 rb1 st1 ==>b b' -> cstep (IFB b THEN c1 ELSE c2 FI, state sb1 rb1 st1) ((IFB b' THEN c1 ELSE c2 FI), state sb1 rb1 st1) | CS_While : forall (sb1 : sb) (rb1 : rb) (st1 : st) (b : bexp) (c1 : com), cstep (WHILE b DO c1 END, state sb1 rb1 st1) ((IFB b THEN (c1;; (WHILE b DO c1 END)) ELSE SKIP FI), state sb1 rb1 st1) | CS_Send1 : forall (sb1 : sb) (rb1 : rb) (st1 : st) (a a' : aexp) (x z : id), a / state sb1 rb1 st1 ==>a a' -> cstep (SEND a TO x CALLED z, state sb1 rb1 st1) (SEND a' TO x CALLED z, state sb1 rb1 st1) | CS_Send2 : forall (sb1 : sb) (rb1 : rb) (st1 : st) (a : aexp) (x z : id) (n : nat), a = ANum n -> cstep (SEND a TO x CALLED z, state sb1 rb1 st1) (SKIP, state (app sb1 (cons (a, x, z) nil)) rb1 st1) | CS_Rec1 : forall (sb1 : sb) (st1 : st), cstep (RECEIVE, state sb1 nil st1) (SKIP ;; RECEIVE, state sb1 nil st1) | CS_Rec2 : forall (sb1 : sb) (rb1 : rb) (st1 : st) (a : aexp) (z : id), cstep (RECEIVE, state sb1 (app (cons (a, z) nil) rb1) st1) (z ::= a, state sb1 rb1 st1). Inductive imp : Type := | machine : id -> com -> State -> imp. 
Inductive dist_imp : (imp * imp) -> (imp * imp) -> Prop := | imp_step_1 : forall (c1 c1' c2 : com) (st1 st1' st2 : State) (x y : id), cstep (c1, st1) (c1', st1') -> dist_imp ((machine x c1 st1), (machine y c2 st2)) ((machine x c1' st1'), (machine y c2 st2)) | imp_step_2 : forall (c1 c2' c2 : com) (st1 st2' st2 : State) (x y : id), cstep (c2, st2) (c2', st2') -> dist_imp ((machine x c1 st1), (machine y c2 st2)) ((machine x c1 st1), (machine y c2' st2')) | send_y : forall (c1 c2 : com) (sb1 sb2 : sb) (rb1 rb2 : rb) (st1 st2 : st) (a : aexp) (x y z : id), dist_imp ((machine x c1 (state (cons (a, y, z) sb1) rb1 st1)), (machine y c2 (state sb2 rb2 st2))) ((machine x c1 (state sb1 rb1 st1)), ((machine y c2 (state sb2 (app (cons (a, z) nil) rb2) st2)))) | send_x : forall (c1 c2 : com) (sb1 sb2 : sb) (rb1 rb2 : rb) (st1 st2 : st) (a : aexp) (x y z : id), dist_imp ((machine x c1 (state sb1 rb1 st1)), (machine y c2 (state (cons (a, x, z) sb2) rb2 st2))) ((machine x c1 (state sb1 (app (cons (a, z) nil) rb1) st1)), (machine y c2 (state sb2 rb2 st2))). Definition cdist_imp := multi dist_imp. Lemma proof_of_concept : forall x y n z, multi dist_imp ((machine x (SEND (ANum n) TO y CALLED z) empty_state), (machine y (RECEIVE) empty_state)) ((machine x SKIP empty_state), (machine y (z ::= (ANum n)) empty_state)). Proof. intros. eapply multi_step. apply imp_step_1. eapply CS_Send2. reflexivity. eapply multi_step. apply send_y. fold empty_state. eapply multi_step. apply imp_step_2. apply CS_Rec2. fold empty_state. eapply multi_refl. Qed. Lemma proof_of_concept' : forall x y z, multi dist_imp ((machine x ((SEND (APlus (APlus (ANum 1) (ANum 1)) (ANum 1)) TO y CALLED z) ;; RECEIVE) empty_state), (machine y (RECEIVE ;; SEND (APlus (APlus (ANum 1) (ANum 1)) (ANum 1)) TO x CALLED z) empty_state)) ((machine x SKIP (state nil nil (t_update (t_empty 0) z 3))), (machine y SKIP (state nil nil (t_update (t_empty 0) z 3)))). Proof. intros. eapply multi_step. apply imp_step_2. apply CS_SeqStep. apply CS_Rec1. eapply multi_step. apply imp_step_1. apply CS_SeqStep. apply CS_Send1. apply AS_Plus1. apply AS_Plus. eapply multi_step. apply imp_step_1. apply CS_SeqStep. apply CS_Send1. apply AS_Plus. eapply multi_step. apply imp_step_1. apply CS_SeqStep. eapply CS_Send2. reflexivity. eapply multi_step. apply imp_step_2. apply CS_SeqStep. apply CS_SeqFinish. eapply multi_step. apply send_y. eapply multi_step. apply imp_step_2. apply CS_SeqStep. apply CS_Rec2. eapply multi_step. apply imp_step_2. apply CS_SeqStep. apply CS_Ass. eapply multi_step. apply imp_step_2. apply CS_SeqFinish. eapply multi_step. apply imp_step_1. apply CS_SeqFinish. eapply multi_step. apply imp_step_2. apply CS_Send1. apply AS_Plus1. apply AS_Plus. eapply multi_step. apply imp_step_2. apply CS_Send1. apply AS_Plus. eapply multi_step. apply imp_step_2. eapply CS_Send2. reflexivity. eapply multi_step. apply send_x. eapply multi_step. apply imp_step_1. apply CS_Rec2. eapply multi_step. apply imp_step_1. apply CS_Ass. eapply multi_refl. Qed. End DistIMP.
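The two proof_of_concept lemmas above exercise a simple message-passing discipline: CS_Send2 appends an evaluated value to the sender's send buffer, the send_x/send_y rules of dist_imp transfer it into the peer's receive buffer, and CS_Rec2 turns the head of the receive buffer into an assignment. A minimal Python sketch of just that buffer protocol (the Machine/transfer names are hypothetical, not part of the Coq development) may make the data flow easier to follow:

```python
# Sketch of the DistIMP send/receive buffer discipline (illustration only;
# class and function names here are hypothetical, not from the Coq code).

class Machine:
    def __init__(self, ident):
        self.ident = ident
        self.sb = []      # send buffer: (value, destination id, variable name)
        self.rb = []      # receive buffer: (value, variable name)
        self.store = {}   # variable store

    def send(self, value, dest, name):
        # CS_Send2: a fully evaluated value is queued in the send buffer
        self.sb.append((value, dest, name))

    def receive(self):
        # CS_Rec2: the head of the receive buffer becomes an assignment
        if self.rb:
            value, name = self.rb.pop(0)
            self.store[name] = value

def transfer(sender, receiver):
    # send_x / send_y: a queued message addressed to the receiver is moved
    # from the sender's send buffer to the receiver's receive buffer
    value, dest, name = sender.sb.pop(0)
    assert dest == receiver.ident
    receiver.rb.append((value, name))

x, y = Machine("x"), Machine("y")
x.send(3, "y", "z")   # SEND (ANum 3) TO y CALLED z
transfer(x, y)
y.receive()           # RECEIVE
assert y.store == {"z": 3}
```

The second lemma interleaves two such exchanges, one in each direction, which is exactly what the alternating imp_step_1/imp_step_2 applications in its proof script spell out.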
Formal statement is: lemma (in t2_space) LIMSEQ_Uniq: "\<exists>\<^sub>\<le>\<^sub>1l. X \<longlonglongrightarrow> l" Informal statement is: If $X$ is a sequence in a T2 (Hausdorff) space, then $X$ converges to at most one limit.
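For reference, the standard argument behind this uniqueness (a sketch, not the Isabelle proof): two distinct limits could be separated by disjoint open sets, which a single sequence cannot eventually inhabit simultaneously.

```latex
% Sketch: suppose X \to l, X \to l' and l \neq l'. By T2 choose open
% U \ni l and V \ni l' with U \cap V = \emptyset; eventually X_n \in U and
% eventually X_n \in V, so some X_n \in U \cap V = \emptyset, a contradiction.
l \neq l' \;\wedge\; X_n \to l \;\wedge\; X_n \to l'
\;\Longrightarrow\; \exists N.\ \forall n \ge N.\ X_n \in U \cap V = \emptyset
```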
section \<open>Fib and Gcd commute\<close> theory Fibonacci imports "HOL-Computational_Algebra.Primes" begin text \<open>A few proofs of laws taken from Graham et al. @{cite "concrete-math"}.\<close> subsection \<open>Fibonacci numbers\<close> fun fib :: "nat \<Rightarrow> nat" where "fib 0 = 0" | "fib (Suc 0) = 1" | "fib (Suc (Suc n)) = fib n + fib (Suc n)" lemma fib_positive: "fib (Suc n) > 0" by (induction n rule: fib.induct) auto subsection \<open>Fib and gcd commute\<close> lemma fib_add: "fib (Suc (n + k)) = fib (Suc k) * fib (Suc n) + fib k * fib n" by (induction n rule: fib.induct) (auto simp: distrib_left) lemma coprime_fib_Suc: "coprime (fib n) (fib (Suc n))" proof (induction n rule: fib.induct) case (3 x) then show ?case by (metis coprime_iff_gcd_eq_1 fib.simps(3) gcd.commute gcd_add1) qed auto lemma gcd_fib_add: "gcd (fib m) (fib (n + m)) = gcd (fib m) (fib n)" proof (cases m) case 0 then show ?thesis by simp next case (Suc k) then have "gcd (fib m) (fib (n + m)) = gcd (fib k * fib n) (fib (Suc k))" by (metis add_Suc_right fib_add gcd.commute gcd_add_mult mult.commute) also have "\<dots> = gcd (fib n) (fib (Suc k))" using coprime_commute coprime_fib_Suc gcd_mult_left_left_cancel by blast also have "\<dots> = gcd (fib m) (fib n)" using Suc by (simp add: ac_simps) finally show ?thesis . qed lemma gcd_fib_diff: "gcd (fib m) (fib (n - m)) = gcd (fib m) (fib n)" if "m \<le> n" proof - have "gcd (fib m) (fib (n - m)) = gcd (fib m) (fib (n - m + m))" by (simp add: gcd_fib_add) also from \<open>m \<le> n\<close> have "n - m + m = n" by simp finally show ?thesis . qed lemma gcd_fib_mod: "gcd (fib m) (fib (n mod m)) = gcd (fib m) (fib n)" if "0 < m" proof (induction n rule: less_induct) case (less n) show ?case proof - have "n mod m = (if n < m then n else (n - m) mod m)" by (rule mod_if) also have "gcd (fib m) (fib \<dots>) = gcd (fib m) (fib n)" using gcd_fib_diff less.IH that by fastforce finally show ?thesis . qed qed theorem fib_gcd: "fib (gcd m n) = gcd (fib m) (fib n)" proof (induction m n rule: gcd_nat_induct) case (step m n) then show ?case by (metis gcd.commute gcd_fib_mod gcd_red_nat) qed auto end
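The closing theorem fib_gcd is easy to spot-check numerically; a small Python sketch (independent of the Isabelle theory, using the same recurrence) that confirms it for small arguments:

```python
from functools import lru_cache
from math import gcd

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Same recurrence as the theory: fib 0 = 0, fib 1 = 1, fib (n+2) = fib n + fib (n+1)
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# fib (gcd m n) = gcd (fib m) (fib n)
assert all(fib(gcd(m, n)) == gcd(fib(m), fib(n))
           for m in range(30) for n in range(30))
print("fib_gcd holds for all m, n < 30")
```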
#include <stdlib.h> #include <stdio.h> #include <math.h> #include <assert.h> #include <gsl/gsl_math.h> #include <gsl/gsl_integration.h> #include <gsl/gsl_spline.h> #include <gsl/gsl_sort.h> #include <gsl/gsl_roots.h> #include <gsl/gsl_sf_bessel.h> #include "cosmocalc.h" #include "weaklens.h" double chiLim; static void comp_lens_power_spectrum(lensPowerSpectra lps); double lens_power_spectrum(double ell, lensPowerSpectra lps) { if(lps->initFlag == 1 || lps->currCosmoNum != cosmoData.cosmoNum || lps->currWLNum != wlData.wlNum) { lps->initFlag = 0; lps->currCosmoNum = cosmoData.cosmoNum; lps->currWLNum = wlData.wlNum; comp_lens_power_spectrum(lps); } return exp(gsl_spline_eval(lps->spline,log(ell),lps->accel)); } double nonlinear_powspec_for_lens(double k, double a) { return nonlinear_powspec(k,a); } static double lenskern(double chi, double chis) { if(chi > chis) return 0.0; else return 1.5*cosmoData.OmegaM*(100.0/CSOL)*(100.0/CSOL)/acomvdist(chi)*(chis-chi)/chis; } static double lenspk_integrand(double chi, void *p) { lensPowerSpectra lps = (lensPowerSpectra) p; double sn = lps->sn; if(chi == 0.0 || chi < chiLim) return 0.0; else return lenskern(chi,lps->chis1)*lenskern(chi,lps->chis2)*(nonlinear_powspec_for_lens(lps->ell/chi,acomvdist(chi)) + sn); } static void comp_lens_power_spectrum(lensPowerSpectra lps) { #define WORKSPACE_NUM 100000 #define ABSERR 1e-12 #define RELERR 1e-12 #define TABLE_LENGTH 1000 gsl_integration_workspace *workspace; gsl_function F; double result,abserr; double logltab[TABLE_LENGTH]; double logpkltab[TABLE_LENGTH]; double chimax; int i; //fill in bin information chiLim = lps->chiLim; if(lps->chis1 > lps->chis2) chimax = lps->chis1; else chimax = lps->chis2; fprintf(stderr,"doing lens pk - chiLim = %lf, chiMax = %lf\n",chiLim,chimax); //init workspace = gsl_integration_workspace_alloc((size_t) WORKSPACE_NUM); F.function = &lenspk_integrand; F.params = lps; //make table double lnlmin = log(wlData.lmin); double lnlmax = log(wlData.lmax); for(i=0;i<TABLE_LENGTH;++i) { logltab[i] = i*(lnlmax-lnlmin)/(TABLE_LENGTH-1) + lnlmin; lps->ell = exp(logltab[i]); gsl_integration_qag(&F,0.0,chimax,ABSERR,RELERR,(size_t) WORKSPACE_NUM,GSL_INTEG_GAUSS51,workspace,&result,&abserr); logpkltab[i] = log(result); } //free gsl_integration_workspace_free(workspace); //init splines and accels if(lps->spline != NULL) gsl_spline_free(lps->spline); lps->spline = gsl_spline_alloc(gsl_interp_akima,(size_t) (TABLE_LENGTH)); gsl_spline_init(lps->spline,logltab,logpkltab,(size_t) (TABLE_LENGTH)); if(lps->accel != NULL) gsl_interp_accel_reset(lps->accel); else lps->accel = gsl_interp_accel_alloc(); #undef TABLE_LENGTH #undef ABSERR #undef RELERR #undef WORKSPACE_NUM } lensPowerSpectra init_lens_power_spectrum(double zs1, double zs2) { lensPowerSpectra lps; lps = (lensPowerSpectra)malloc(sizeof(_lensPowerSpectra)); assert(lps != NULL); lps->initFlag = 1; lps->zs1 = zs1; lps->zs2 = zs2; lps->chis1 = comvdist(1.0/(1.0 + zs1)); lps->chis2 = comvdist(1.0/(1.0 + zs2)); lps->spline = NULL; lps->accel = NULL; return lps; } void free_lens_power_spectrum(lensPowerSpectra lps) { if(lps->spline != NULL) gsl_spline_free(lps->spline); if(lps->accel != NULL) gsl_interp_accel_free(lps->accel); free(lps); } //////////////////////////////////////// // corr. funcs! 
////////////////////////////////////// static double lenscfp_integrand(double ell, void *p) { lensCorrFunc lcf = (lensCorrFunc) p; return ell/2.0/M_PI*lens_power_spectrum(ell,lcf->lps)*gsl_sf_bessel_J0(ell*lcf->theta/60.0/180.0*M_PI); } static double lenscfm_integrand(double ell, void *p) { lensCorrFunc lcf = (lensCorrFunc) p; return ell/2.0/M_PI*lens_power_spectrum(ell,lcf->lps)*gsl_sf_bessel_Jn(4,ell*lcf->theta/60.0/180.0*M_PI); } static void comp_lens_corr_funcs(lensCorrFunc lcf) { #define WORKSPACE_NUM 100000 #define ABSERR 1e-12 #define RELERR 1e-12 #define TABLE_LENGTH 1000 gsl_integration_workspace *workspace; gsl_function F; double result,abserr; double logttab[TABLE_LENGTH]; double logcfptab[TABLE_LENGTH]; double logcfmtab[TABLE_LENGTH]; int i; double lntmin; double lntmax; //init workspace = gsl_integration_workspace_alloc((size_t) WORKSPACE_NUM); F.params = lcf; lntmin = log(wlData.tmin); lntmax = log(wlData.tmax); //make tables F.function = &lenscfp_integrand; for(i=0;i<TABLE_LENGTH;++i) { logttab[i] = i*(lntmax-lntmin)/(TABLE_LENGTH-1) + lntmin; lcf->theta = exp(logttab[i]); gsl_integration_qag(&F,wlData.lmin,wlData.lmax,ABSERR,RELERR,(size_t) WORKSPACE_NUM,GSL_INTEG_GAUSS51,workspace,&result,&abserr); logcfptab[i] = log(result); } F.function = &lenscfm_integrand; for(i=0;i<TABLE_LENGTH;++i) { logttab[i] = i*(lntmax-lntmin)/(TABLE_LENGTH-1) + lntmin; lcf->theta = exp(logttab[i]); gsl_integration_qag(&F,wlData.lmin,wlData.lmax,ABSERR,RELERR,(size_t) WORKSPACE_NUM,GSL_INTEG_GAUSS51,workspace,&result,&abserr); logcfmtab[i] = log(result); } //free gsl_integration_workspace_free(workspace); //init splines and accels if(lcf->splineP != NULL) gsl_spline_free(lcf->splineP); lcf->splineP = gsl_spline_alloc(gsl_interp_akima,(size_t) (TABLE_LENGTH)); gsl_spline_init(lcf->splineP,logttab,logcfptab,(size_t) (TABLE_LENGTH)); if(lcf->accelP != NULL) gsl_interp_accel_reset(lcf->accelP); else lcf->accelP = gsl_interp_accel_alloc(); if(lcf->splineM != NULL) gsl_spline_free(lcf->splineM); lcf->splineM = gsl_spline_alloc(gsl_interp_akima,(size_t) (TABLE_LENGTH)); gsl_spline_init(lcf->splineM,logttab,logcfmtab,(size_t) (TABLE_LENGTH)); if(lcf->accelM != NULL) gsl_interp_accel_reset(lcf->accelM); else lcf->accelM = gsl_interp_accel_alloc(); #undef TABLE_LENGTH #undef ABSERR #undef RELERR #undef WORKSPACE_NUM } double lens_corr_func_minus(double theta, lensCorrFunc lcf) { if(lcf->initFlag == 1 || lcf->currCosmoNum != cosmoData.cosmoNum || lcf->currWLNum != wlData.wlNum) { lcf->initFlag = 0; lcf->currCosmoNum = cosmoData.cosmoNum; lcf->currWLNum = wlData.wlNum; comp_lens_corr_funcs(lcf); } return exp(gsl_spline_eval(lcf->splineM,log(theta),lcf->accelM)); } double lens_corr_func_plus(double theta, lensCorrFunc lcf) { if(lcf->initFlag == 1 || lcf->currCosmoNum != cosmoData.cosmoNum || lcf->currWLNum != wlData.wlNum) { lcf->initFlag = 0; lcf->currCosmoNum = cosmoData.cosmoNum; lcf->currWLNum = wlData.wlNum; comp_lens_corr_funcs(lcf); } return exp(gsl_spline_eval(lcf->splineP,log(theta),lcf->accelP)); } lensCorrFunc init_lens_corr_func(lensPowerSpectra lps) { lensCorrFunc lcf; lcf = (lensCorrFunc)malloc(sizeof(_lensCorrFunc)); assert(lcf != NULL); lcf->initFlag = 1; lcf->lps = lps; lcf->splineM = NULL; lcf->accelM = NULL; lcf->splineP = NULL; lcf->accelP = NULL; return lcf; } void free_lens_corr_func(lensCorrFunc lcf) { if(lcf->splineM != NULL) gsl_spline_free(lcf->splineM); if(lcf->accelM != NULL) gsl_interp_accel_free(lcf->accelM); if(lcf->splineP != NULL) gsl_spline_free(lcf->splineP); if(lcf->accelP 
!= NULL) gsl_interp_accel_free(lcf->accelP); free(lcf); }
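The quadrature in comp_lens_power_spectrum is a flat-sky Limber-style integral: for each multipole ell it integrates lenskern(chi, chis1) * lenskern(chi, chis2) * P_delta(ell/chi, a(chi)) over comoving distance and tabulates log P_kappa against log ell for spline interpolation. A schematic Python version of the same integrand (the scale factor, matter power spectrum and constants below are toy stand-ins and assumptions, not the cosmocalc/weaklens implementations) is:

```python
# Schematic mirror of lenspk_integrand / comp_lens_power_spectrum.
from scipy.integrate import quad

OMEGA_M = 0.3
H0_OVER_C = 100.0 / 299792.458        # (100 km/s/Mpc) / c, as in lenskern

def a_of_chi(chi):
    # toy scale factor at comoving distance chi (the C code calls acomvdist)
    return 1.0 / (1.0 + chi / 3000.0)

def p_delta(k, a):
    # toy matter power spectrum (stands in for nonlinear_powspec_for_lens)
    return a**2 * k / (1.0 + (k / 0.1) ** 3)

def lens_kernel(chi, chi_s):
    if chi <= 0.0 or chi >= chi_s:
        return 0.0
    return 1.5 * OMEGA_M * H0_OVER_C**2 / a_of_chi(chi) * (chi_s - chi) / chi_s

def lens_power(ell, chi_s1, chi_s2, chi_lim=0.0):
    def integrand(chi):
        if chi <= max(chi_lim, 0.0):
            return 0.0
        k1, k2 = lens_kernel(chi, chi_s1), lens_kernel(chi, chi_s2)
        return k1 * k2 * p_delta(ell / chi, a_of_chi(chi))
    val, _ = quad(integrand, 0.0, max(chi_s1, chi_s2), limit=200)
    return val

print(lens_power(ell=500.0, chi_s1=2000.0, chi_s2=2400.0))
```

The correlation functions in the second half of the file are then Bessel transforms of this spectrum: lenscfp_integrand and lenscfm_integrand weight P(ell) with J0 and J4 kernels for xi_plus and xi_minus respectively.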
The current Romanian reconnaissance battalions (the 313th, the 317th and the 528th) are also considered special forces units, and were formed in the 1960s during the communist regime. After the revolution, the units suffered from a lack of funds, which resulted in the temporary disbandment of the 313th Battalion. However, their equipment has been completely overhauled in the past few years, and their combat readiness and capabilities have returned to full strength.
State Before: ι : Type ?u.113737 𝕜 : Type u_1 R : Type ?u.113743 M : Type u_2 N : Type ?u.113749 inst✝⁵ : LinearOrderedSemifield 𝕜 inst✝⁴ : OrderedAddCommMonoid M inst✝³ : OrderedAddCommMonoid N inst✝² : MulActionWithZero 𝕜 M inst✝¹ : MulActionWithZero 𝕜 N inst✝ : OrderedSMul 𝕜 M s : Set M a b : M c : 𝕜 h : 0 < c ⊢ a ≤ c⁻¹ • b ↔ c • a ≤ b State After: no goals Tactic: rw [← smul_le_smul_iff_of_pos h, smul_inv_smul₀ h.ne']
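The rewrite steps recorded above (smul_le_smul_iff_of_pos followed by smul_inv_smul₀) implement the familiar rescaling argument for a positive scalar; in ordinary notation (a sketch, not the Lean term):

```latex
% Multiplying both sides by c preserves \le because the scalar action is
% order-preserving (OrderedSMul), and c \cdot (c^{-1} \cdot b) = b since c \neq 0.
c > 0 \;\Longrightarrow\;
\Bigl( a \le c^{-1} \cdot b
  \;\Longleftrightarrow\; c \cdot a \le c \cdot (c^{-1} \cdot b) = b \Bigr)
```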
module Adjoint where import Category import Functor open Category open Functor using (Functor) module Adj where open Functor.Projections using (Map; map) data _⊢_ {ℂ ⅅ : Cat}(F : Functor ℂ ⅅ)(G : Functor ⅅ ℂ) : Set1 where adjunction : (_* : {X : Obj ℂ}{Y : Obj ⅅ} -> Map F X ─→ Y -> X ─→ Map G Y) (_# : {X : Obj ℂ}{Y : Obj ⅅ} -> X ─→ Map G Y -> Map F X ─→ Y) (inv₁ : {X : Obj ℂ}{Y : Obj ⅅ}(g : X ─→ Map G Y) -> g # * == g) (inv₂ : {X : Obj ℂ}{Y : Obj ⅅ}(f : Map F X ─→ Y) -> f * # == f) (nat₁ : {X₁ X₂ : Obj ℂ}{Y₁ Y₂ : Obj ⅅ} (f : Y₁ ─→ Y₂)(g : X₂ ─→ X₁)(h : Map F X₁ ─→ Y₁) -> (f ∘ h ∘ map F g) * == map G f ∘ (h *) ∘ g ) (nat₂ : {X₁ X₂ : Obj ℂ}{Y₁ Y₂ : Obj ⅅ} (f : Y₁ ─→ Y₂)(g : X₂ ─→ X₁)(h : X₁ ─→ Map G Y₁) -> (map G f ∘ h ∘ g) # == f ∘ (h #) ∘ map F g ) -> F ⊢ G open Adj public
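The record above is the hom-set presentation of an adjunction: _* and _# form a bijection between the two hom-sets, inv₁/inv₂ say the two maps are mutually inverse, and nat₁/nat₂ state naturality in both arguments. In conventional notation (a summary of what the fields encode, not extra Agda code):

```latex
F \dashv G \quad\Longleftrightarrow\quad
\mathrm{Hom}_{\mathbb{D}}(F X,\, Y) \;\cong\; \mathrm{Hom}_{\mathbb{C}}(X,\, G Y)
\ \text{naturally in } X, Y, \qquad
(f \circ h \circ F g)^{*} \;=\; G f \circ h^{*} \circ g .
```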
Be classy, show temper and shine bright. If you do, winning takes care of itself. This quote totally expresses who I really am. I believe that a woman should combine a lot of traits inside of her, but only a real and strong one is able to keep them all together. I am a very optimistic person; no matter what happens in life, I like to keep my head up and look at things from a positive perspective. I believe that a positive mindset is the key to everything. Once I put my mind to something, I always achieve it. I am also very romantic and passionate. I like going for long walks in the park or on the beach and having dates at the shore or somewhere in the mountains. But as I am a very diverse person, I like to mix things up from time to time and have fun. I think that life was meant to be lived to its fullest, and I am not wasting a minute on empty things. I like to go out with my friends, visit my family and play with my cousins; what can I say, I really love kids. I am a very adventurous person: I like visiting new places, trying new things and doing something unusual and a little crazy from time to time. When it comes to the things that I like and the way I dress, I have to say that the diversity of my interests is very broad. I like listening to almost every kind of music, watching every film genre and reading different kinds of books. Most of the time it depends on my mood, but I'm just trying to develop myself in every possible way. My fashion style is mostly classy: I like wearing dresses, skirts, blouses and heels. But sometimes you may find me wearing casual clothes and feeling just amazing. I can make even simple jeans, a t-shirt and sneakers rock. If you are looking for a real lady, someone who knows how to be classy, keep it real and take it to the top, and crack jokes from time to time, then you are on the right page. The bigger the plot, the more interesting the book is. I think that there is no time on this planet to waste, and with all these amazing opportunities, people have to live it up!! That's what I'm trying to do. My range of interests is so big; there are so many things that I'm doing right now and still so many that I want to try. One of my biggest interests is learning languages. I like discovering new ways to communicate with people and make new friends. Even though I still have a lot to learn, I'm trying to do my best and develop myself every single day. I like to travel, and I think that travelling and discovering different countries and islands, meeting a lot of new people, learning about new cultures and trying local cuisines are the best things to do. I've been to a lot of cities in Ukraine, and I hope I will travel abroad very soon. You never know where life may take you, so I am open to anything. I also like spending time with my family, friends and all my little cousins. I think that family is one of the most important things in life, and I truly want to find someone reliable to have a strong and big family with. Summer has to be one of my favorite seasons. I like hot weather, swimming in the sea and bathing in the sun. My dream is to live somewhere near the beach and be surrounded by palm trees. As you can see, I'm trying to explore as many things as I can and live my life to the fullest; all I need is a good man by my side to make my life journey perfect. We all want that somebody to make us feel safe and give us love. I am here to prove to everyone that love has no borders and no distance.
No matter what the circumstances in life are, no matter if we are happy or sad, we all need somebody to share our ups and downs with. I am here to find not a superhero, but simply my own hero. From my point of view, a woman can't be completely strong and powerful if she doesn't have a strong man by her side, someone who can make her feel special and turn the whole world upside down just to make his queen happy. I am looking for that type of man: someone mentally and physically strong, someone who knows what he wants from life and how to take care of his business. No one is perfect, and neither am I, but I didn't come here to look for a magical superhero to make all of my wishes come true; I came here to find a man who can show his woman that she's loved and the most precious person to him. I believe that if two people are perfect for each other, if they are soul mates, everything will fall into place. I am willing to make my man happy and give him all he needs, but he also has to do the same for me. Only mutual love, work, support, trust, affection and passion can make relationships work. I believe that passion and affection are very important, so I am the type of person who can kiss you passionately and show you my affection every single day. I hope to find someone with a good sense of humor; I like having fun and joking around from time to time, and we all know that laughter is the best medicine. I hope you can relate to my description of the perfect man. If you do, don't hesitate to write me. I can't wait to meet my love and have a happy and adventurous life with him.
# JavaCall-style wrappers around a Java CMAESOptimizer class (the method names
# match the CMA-ES optimizer shipped with Apache Commons Math; the Java types
# RandomGenerator, ConvergenceChecker, OptimizationData, PointValuePair and
# List are assumed to be imported elsewhere in the surrounding package).

# Construct the optimizer by dispatching to the matching Java constructor.
function CMAESOptimizer(arg0::jint, arg1::jdouble, arg2::jboolean, arg3::jint, arg4::jint, arg5::RandomGenerator, arg6::jboolean, arg7::ConvergenceChecker)
    return CMAESOptimizer((jint, jdouble, jboolean, jint, jint, RandomGenerator, jboolean, ConvergenceChecker), arg0, arg1, arg2, arg3, arg4, arg5, arg6, arg7)
end

# Histories of internal CMA-ES statistics (D, fitness, mean, sigma) recorded
# during the run, each returned as a Java List object.
function get_statistics_d_history(obj::CMAESOptimizer)
    return jcall(obj, "getStatisticsDHistory", List, ())
end

function get_statistics_fitness_history(obj::CMAESOptimizer)
    return jcall(obj, "getStatisticsFitnessHistory", List, ())
end

function get_statistics_mean_history(obj::CMAESOptimizer)
    return jcall(obj, "getStatisticsMeanHistory", List, ())
end

function get_statistics_sigma_history(obj::CMAESOptimizer)
    return jcall(obj, "getStatisticsSigmaHistory", List, ())
end

# Run the optimization on the given OptimizationData arguments and return the
# best PointValuePair found.
function optimize(obj::CMAESOptimizer, arg0::Vector{OptimizationData})
    return jcall(obj, "optimize", PointValuePair, (Vector{OptimizationData},), arg0)
end
Formal statement is: lemma setdist_eq_0_sing_1: "setdist {x} S = 0 \<longleftrightarrow> S = {} \<or> x \<in> closure S" Informal statement is: The distance between a singleton set and a set $S$ is zero if and only if $S$ is empty or $x$ is in the closure of $S$.
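Unfolding the definition makes the two disjuncts visible (a sketch of why the statement holds, not the Isabelle proof): setdist is an infimum of pointwise distances, and setdist involving an empty set is defined to be 0.

```latex
\operatorname{setdist}(\{x\}, S) \;=\; \inf_{y \in S} d(x, y), \qquad
\operatorname{setdist}(\{x\}, \emptyset) \;=\; 0 \ \text{(by definition)},
```

and for nonempty $S$, $\inf_{y \in S} d(x, y) = 0$ holds exactly when every ball around $x$ meets $S$, i.e. when $x \in \operatorname{closure} S$.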
(*3 ω ∈ {noShortIf, full} Statementω ⇒ EmptyStatement | (*ExpressionStatement OptionalSemicolon*) | VariableDefinition OptionalSemicolon | Block | LabeledStatementω | IfStatementω | SwitchStatement | DoStatement OptionalSemicolon | WhileStatementω | ForStatementω | WithStatementω | ContinueStatement OptionalSemicolon | BreakStatement OptionalSemicolon | ReturnStatement OptionalSemicolon | ThrowStatement OptionalSemicolon | TryStatement OptionalSemicolon ⇒ ; Empty Statement JavaScript 1.4 Grammar up Monday, May 3, 1999 This is an LR(1) grammar written by waldemar that describes the state of ECMAScript as of February 1999. The grammar is complete except for semicolon insertion (the OptionalSemicolon grammar state can sometimes reduce to «empty») and distinguishing RegularExpression from / and /=. Also, there is some controversy about elision in array literals, so this feature has been omitted for now. Grammar syntax Grammar productions may expand nonterminals into empty right sides. Such right sides are indicated as «empty». A number of rules in the grammar occur in groups of analogous rules. Rather than list them individually, these groups have been summarized using the shorthand illustrated by the example below: Statements such as α ∈ {normal, initial} β ∈ {allowIn, noIn} introduce grammar arguments α and β. If these arguments later parametrize the nonterminal on the left side of a rule, that rule is implicitly replicated into a set of rules in each of which a grammar argument is consistently substituted by one of its variants. For example, AssignmentExpressionα,β ⇒ ConditionalExpressionα,β | LeftSideExpressionα = AssignmentExpressionnormal,β | LeftSideExpressionα CompoundAssignment AssignmentExpressionnormal,β expands into the following four rules: AssignmentExpressionnormal,allowIn ⇒ ConditionalExpressionnormal,allowIn | LeftSideExpressionnormal = AssignmentExpressionnormal,allowIn | LeftSideExpressionnormal CompoundAssignment AssignmentExpressionnormal,allowIn AssignmentExpressionnormal,noIn ⇒ ConditionalExpressionnormal,noIn | LeftSideExpressionnormal = AssignmentExpressionnormal,noIn | LeftSideExpressionnormal CompoundAssignment AssignmentExpressionnormal,noIn AssignmentExpressioninitial,allowIn ⇒ ConditionalExpressioninitial,allowIn | LeftSideExpressioninitial = AssignmentExpressionnormal,allowIn | LeftSideExpressioninitial CompoundAssignment AssignmentExpressionnormal,allowIn AssignmentExpressioninitial,noIn ⇒ ConditionalExpressioninitial,noIn | LeftSideExpressioninitial = AssignmentExpressionnormal,noIn | LeftSideExpressioninitial CompoundAssignment AssignmentExpressionnormal,noIn AssignmentExpressionnormal,allowIn is now an unparametrized nonterminal and processed normally by the grammar. Some of the expanded rules (such as the fourth one in the example above) may be unreachable from the starting nonterminal Program; these are ignored. 
Expressions α ∈ {normal, initial} β ∈ {allowIn, noIn} Primary Expressions PrimaryExpressionnormal ⇒ SimpleExpression | FunctionExpression | ObjectLiteral PrimaryExpressioninitial ⇒ SimpleExpression SimpleExpression ⇒ this | null | true | false | Number | String | Identifier | RegularExpression | ParenthesizedExpression | ArrayLiteral ParenthesizedExpression ⇒ ( Expressionnormal,allowIn ) Function Expressions FunctionExpression ⇒ AnonymousFunction | NamedFunction Object Literals ObjectLiteral ⇒ { } | { FieldList } FieldList ⇒ LiteralField | FieldList , LiteralField LiteralField ⇒ Identifier : AssignmentExpressionnormal,allowIn Array Literals ArrayLiteral ⇒ [ ] | [ ElementList ] ElementList ⇒ LiteralElement | ElementList , LiteralElement LiteralElement ⇒ AssignmentExpressionnormal,allowIn Left-Side Expressions LeftSideExpressionα ⇒ CallExpressionα | ShortNewExpression CallExpressionα ⇒ PrimaryExpressionα | FullNewExpression | CallExpressionα MemberOperator | CallExpressionα Arguments FullNewExpression ⇒ new FullNewSubexpression Arguments ShortNewExpression ⇒ new ShortNewSubexpression FullNewSubexpression ⇒ PrimaryExpressionnormal | FullNewExpression | FullNewSubexpression MemberOperator ShortNewSubexpression ⇒ FullNewSubexpression | ShortNewExpression MemberOperator ⇒ [ Expressionnormal,allowIn ] | . Identifier Arguments ⇒ ( ) | ( ArgumentList ) ArgumentList ⇒ AssignmentExpressionnormal,allowIn | ArgumentList , AssignmentExpressionnormal,allowIn Postfix Operators PostfixExpressionα ⇒ LeftSideExpressionα | LeftSideExpressionα ++ | LeftSideExpressionα -- Unary Operators UnaryExpressionα ⇒ PostfixExpressionα | delete LeftSideExpressionnormal | void UnaryExpressionnormal | typeof UnaryExpressionnormal | ++ LeftSideExpressionnormal | -- LeftSideExpressionnormal | + UnaryExpressionnormal | - UnaryExpressionnormal | ~ UnaryExpressionnormal | ! 
UnaryExpressionnormal Multiplicative Operators MultiplicativeExpressionα ⇒ UnaryExpressionα | MultiplicativeExpressionα * UnaryExpressionnormal | MultiplicativeExpressionα / UnaryExpressionnormal | MultiplicativeExpressionα % UnaryExpressionnormal Additive Operators AdditiveExpressionα ⇒ MultiplicativeExpressionα | AdditiveExpressionα + MultiplicativeExpressionnormal | AdditiveExpressionα - MultiplicativeExpressionnormal Bitwise Shift Operators ShiftExpressionα ⇒ AdditiveExpressionα | ShiftExpressionα << AdditiveExpressionnormal | ShiftExpressionα >> AdditiveExpressionnormal | ShiftExpressionα >>> AdditiveExpressionnormal Relational Operators RelationalExpressionα,allowIn ⇒ ShiftExpressionα | RelationalExpressionα,allowIn < ShiftExpressionnormal | RelationalExpressionα,allowIn > ShiftExpressionnormal | RelationalExpressionα,allowIn <= ShiftExpressionnormal | RelationalExpressionα,allowIn >= ShiftExpressionnormal | RelationalExpressionα,allowIn instanceof ShiftExpressionnormal | RelationalExpressionα,allowIn in ShiftExpressionnormal RelationalExpressionα,noIn ⇒ ShiftExpressionα | RelationalExpressionα,noIn < ShiftExpressionnormal | RelationalExpressionα,noIn > ShiftExpressionnormal | RelationalExpressionα,noIn <= ShiftExpressionnormal | RelationalExpressionα,noIn >= ShiftExpressionnormal | RelationalExpressionα,noIn instanceof ShiftExpressionnormal Equality Operators EqualityExpressionα,β ⇒ RelationalExpressionα,β | EqualityExpressionα,β == RelationalExpressionnormal,β | EqualityExpressionα,β != RelationalExpressionnormal,β | EqualityExpressionα,β === RelationalExpressionnormal,β | EqualityExpressionα,β !== RelationalExpressionnormal,β Binary Bitwise Operators BitwiseAndExpressionα,β ⇒ EqualityExpressionα,β | BitwiseAndExpressionα,β & EqualityExpressionnormal,β BitwiseXorExpressionα,β ⇒ BitwiseAndExpressionα,β | BitwiseXorExpressionα,β ^ BitwiseAndExpressionnormal,β BitwiseOrExpressionα,β ⇒ BitwiseXorExpressionα,β | BitwiseOrExpressionα,β | BitwiseXorExpressionnormal,β Binary Logical Operators LogicalAndExpressionα,β ⇒ BitwiseOrExpressionα,β | LogicalAndExpressionα,β && BitwiseOrExpressionnormal,β LogicalOrExpressionα,β ⇒ LogicalAndExpressionα,β | LogicalOrExpressionα,β || LogicalAndExpressionnormal,β Conditional Operator ConditionalExpressionα,β ⇒ LogicalOrExpressionα,β | LogicalOrExpressionα,β ? 
AssignmentExpressionnormal,β : AssignmentExpressionnormal,β Assignment Operators AssignmentExpressionα,β ⇒ ConditionalExpressionα,β | LeftSideExpressionα = AssignmentExpressionnormal,β | LeftSideExpressionα CompoundAssignment AssignmentExpressionnormal,β CompoundAssignment ⇒ *= | /= | %= | += | -= | <<= | >>= | >>>= | &= | ^= | |= Expressions Expressionα,β ⇒ AssignmentExpressionα,β | Expressionα,β , AssignmentExpressionnormal,β OptionalExpression ⇒ Expressionnormal,allowIn | «empty» Statements ω ∈ {noShortIf, full} Statementω ⇒ EmptyStatement | ExpressionStatement OptionalSemicolon | VariableDefinition OptionalSemicolon | Block | LabeledStatementω | IfStatementω | SwitchStatement | DoStatement OptionalSemicolon | WhileStatementω | ForStatementω | WithStatementω | ContinueStatement OptionalSemicolon | BreakStatement OptionalSemicolon | ReturnStatement OptionalSemicolon | ThrowStatement OptionalSemicolon | TryStatement OptionalSemicolon ⇒ ; Empty Statement EmptyStatement ⇒ ; Expression Statement ExpressionStatement ⇒ Expressioninitial,allowIn Variable Definition VariableDefinition ⇒ var VariableDeclarationListallowIn VariableDeclarationListβ ⇒ VariableDeclarationβ | VariableDeclarationListβ , VariableDeclarationβ VariableDeclarationβ ⇒ Identifier VariableInitializerβ VariableInitializerβ ⇒ «empty» | = AssignmentExpressionnormal,β Block Block ⇒ { BlockStatements } BlockStatements ⇒ «empty» | BlockStatementsPrefix BlockStatementsPrefix ⇒ Statementfull | BlockStatementsPrefix Statementfull Labeled Statements LabeledStatementω ⇒ Identifier : Statementω If Statement IfStatementfull ⇒ if ParenthesizedExpression Statementfull | if ParenthesizedExpression StatementnoShortIf else Statementfull IfStatementnoShortIf ⇒ if ParenthesizedExpression StatementnoShortIf else StatementnoShortIf Switch Statement SwitchStatement ⇒ switch ParenthesizedExpression { } | switch ParenthesizedExpression { CaseGroups LastCaseGroup } CaseGroups ⇒ «empty» | CaseGroups CaseGroup CaseGroup ⇒ CaseGuards BlockStatementsPrefix LastCaseGroup ⇒ CaseGuards BlockStatements CaseGuards ⇒ CaseGuard | CaseGuards CaseGuard CaseGuard ⇒ case Expressionnormal,allowIn : | default : Do-While Statement DoStatement ⇒ do Statementfull while ParenthesizedExpression While Statement WhileStatementω ⇒ while ParenthesizedExpression Statementω For Statements ForStatementω ⇒ for ( ForInitializer ; OptionalExpression ; OptionalExpression ) Statementω | for ( ForInBinding in Expressionnormal,allowIn ) Statementω ForInitializer ⇒ «empty» | Expressionnormal,noIn | var VariableDeclarationListnoIn ForInBinding ⇒ LeftSideExpressionnormal | var VariableDeclarationnoIn With Statement WithStatementω ⇒ with ParenthesizedExpression Statementω Continue and Break Statements ContinueStatement ⇒ continue OptionalLabel BreakStatement ⇒ break OptionalLabel OptionalLabel ⇒ «empty» | Identifier Return Statement ReturnStatement ⇒ return OptionalExpression Throw Statement ThrowStatement ⇒ throw Expressionnormal,allowIn Try Statement TryStatement ⇒ try Block CatchClauses | try Block FinallyClause | try Block CatchClauses FinallyClause CatchClauses ⇒ CatchClause | CatchClauses CatchClause CatchClause ⇒ catch ( Identifier ) Block FinallyClause ⇒ finally Block Function Definition FunctionDefinition ⇒ NamedFunction AnonymousFunction ⇒ function FormalParametersAndBody NamedFunction ⇒ function Identifier FormalParametersAndBody FormalParametersAndBody ⇒ ( FormalParameters ) { TopStatements } FormalParameters ⇒ «empty» | FormalParametersPrefix FormalParametersPrefix ⇒ 
FormalParameter | FormalParametersPrefix , FormalParameter FormalParameter ⇒ Identifier Programs Program ⇒ TopStatements TopStatements ⇒ «empty» | TopStatementsPrefix TopStatementsPrefix ⇒ TopStatement | TopStatementsPrefix TopStatement TopStatement ⇒ Statementfull | FunctionDefinition Waldemar Horwat Last modified Monday, May 3, 1999 *)
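The "Grammar syntax" paragraph at the top of this comment describes how parametrized productions expand: each grammar argument (α, β, ω) is substituted consistently throughout a rule. A tiny Python sketch of that expansion step (the rule encoding below is made up for illustration; it is not part of the grammar file):

```python
from itertools import product

# Grammar arguments and their variants, as in the "Grammar syntax" example.
ARGS = {"alpha": ("normal", "initial"), "beta": ("allowIn", "noIn")}

# One parametrized production, encoded with str.format placeholders.
RULE = ("AssignmentExpression[{alpha},{beta}]",
        ["LeftSideExpression[{alpha}]", "=", "AssignmentExpression[normal,{beta}]"])

def expand(rule, args):
    """Yield every consistent substitution of the grammar arguments."""
    lhs, rhs = rule
    names = list(args)
    for values in product(*(args[n] for n in names)):
        subst = dict(zip(names, values))
        yield lhs.format(**subst), [sym.format(**subst) for sym in rhs]

for lhs, rhs in expand(RULE, ARGS):
    print(lhs, "=>", " ".join(rhs))
# Prints four expansions of the '=' alternative, mirroring the pattern of the
# grammar's own AssignmentExpression example.
```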
module cws where open import lib open import cws-types public ---------------------------------------------------------------------------------- -- Run-rewriting rules ---------------------------------------------------------------------------------- data gratr2-nt : Set where _ws-plus-67 : gratr2-nt _ws : gratr2-nt _start : gratr2-nt _posinfo : gratr2-nt _otherpunct-bar-58 : gratr2-nt _otherpunct-bar-57 : gratr2-nt _otherpunct-bar-56 : gratr2-nt _otherpunct-bar-55 : gratr2-nt _otherpunct-bar-54 : gratr2-nt _otherpunct-bar-53 : gratr2-nt _otherpunct-bar-52 : gratr2-nt _otherpunct-bar-51 : gratr2-nt _otherpunct-bar-50 : gratr2-nt _otherpunct-bar-49 : gratr2-nt _otherpunct-bar-48 : gratr2-nt _otherpunct-bar-47 : gratr2-nt _otherpunct-bar-46 : gratr2-nt _otherpunct-bar-45 : gratr2-nt _otherpunct-bar-44 : gratr2-nt _otherpunct-bar-43 : gratr2-nt _otherpunct-bar-42 : gratr2-nt _otherpunct-bar-41 : gratr2-nt _otherpunct-bar-40 : gratr2-nt _otherpunct-bar-39 : gratr2-nt _otherpunct-bar-38 : gratr2-nt _otherpunct-bar-37 : gratr2-nt _otherpunct-bar-36 : gratr2-nt _otherpunct-bar-35 : gratr2-nt _otherpunct-bar-34 : gratr2-nt _otherpunct-bar-33 : gratr2-nt _otherpunct-bar-32 : gratr2-nt _otherpunct-bar-31 : gratr2-nt _otherpunct-bar-30 : gratr2-nt _otherpunct-bar-29 : gratr2-nt _otherpunct-bar-28 : gratr2-nt _otherpunct-bar-27 : gratr2-nt _otherpunct-bar-26 : gratr2-nt _otherpunct-bar-25 : gratr2-nt _otherpunct-bar-24 : gratr2-nt _otherpunct-bar-23 : gratr2-nt _otherpunct-bar-22 : gratr2-nt _otherpunct-bar-21 : gratr2-nt _otherpunct-bar-20 : gratr2-nt _otherpunct-bar-19 : gratr2-nt _otherpunct-bar-18 : gratr2-nt _otherpunct-bar-17 : gratr2-nt _otherpunct-bar-16 : gratr2-nt _otherpunct-bar-15 : gratr2-nt _otherpunct-bar-14 : gratr2-nt _otherpunct-bar-13 : gratr2-nt _otherpunct-bar-12 : gratr2-nt _otherpunct : gratr2-nt _numpunct-bar-9 : gratr2-nt _numpunct-bar-8 : gratr2-nt _numpunct-bar-7 : gratr2-nt _numpunct-bar-6 : gratr2-nt _numpunct-bar-11 : gratr2-nt _numpunct-bar-10 : gratr2-nt _numpunct : gratr2-nt _numone-range-4 : gratr2-nt _numone : gratr2-nt _num-plus-5 : gratr2-nt _num : gratr2-nt _nonws-plus-70 : gratr2-nt _nonws : gratr2-nt _entity : gratr2-nt _entities : gratr2-nt _comment-star-64 : gratr2-nt _comment : gratr2-nt _aws-bar-66 : gratr2-nt _aws-bar-65 : gratr2-nt _aws : gratr2-nt _anynonwschar-bar-69 : gratr2-nt _anynonwschar-bar-68 : gratr2-nt _anynonwschar : gratr2-nt _anychar-bar-63 : gratr2-nt _anychar-bar-62 : gratr2-nt _anychar-bar-61 : gratr2-nt _anychar-bar-60 : gratr2-nt _anychar-bar-59 : gratr2-nt _anychar : gratr2-nt _alpha-range-2 : gratr2-nt _alpha-range-1 : gratr2-nt _alpha-bar-3 : gratr2-nt _alpha : gratr2-nt gratr2-nt-eq : gratr2-nt → gratr2-nt → 𝔹 gratr2-nt-eq _ws-plus-67 _ws-plus-67 = tt gratr2-nt-eq _ws _ws = tt gratr2-nt-eq _start _start = tt gratr2-nt-eq _posinfo _posinfo = tt gratr2-nt-eq _otherpunct-bar-58 _otherpunct-bar-58 = tt gratr2-nt-eq _otherpunct-bar-57 _otherpunct-bar-57 = tt gratr2-nt-eq _otherpunct-bar-56 _otherpunct-bar-56 = tt gratr2-nt-eq _otherpunct-bar-55 _otherpunct-bar-55 = tt gratr2-nt-eq _otherpunct-bar-54 _otherpunct-bar-54 = tt gratr2-nt-eq _otherpunct-bar-53 _otherpunct-bar-53 = tt gratr2-nt-eq _otherpunct-bar-52 _otherpunct-bar-52 = tt gratr2-nt-eq _otherpunct-bar-51 _otherpunct-bar-51 = tt gratr2-nt-eq _otherpunct-bar-50 _otherpunct-bar-50 = tt gratr2-nt-eq _otherpunct-bar-49 _otherpunct-bar-49 = tt gratr2-nt-eq _otherpunct-bar-48 _otherpunct-bar-48 = tt gratr2-nt-eq _otherpunct-bar-47 _otherpunct-bar-47 = tt gratr2-nt-eq _otherpunct-bar-46 
_otherpunct-bar-46 = tt gratr2-nt-eq _otherpunct-bar-45 _otherpunct-bar-45 = tt gratr2-nt-eq _otherpunct-bar-44 _otherpunct-bar-44 = tt gratr2-nt-eq _otherpunct-bar-43 _otherpunct-bar-43 = tt gratr2-nt-eq _otherpunct-bar-42 _otherpunct-bar-42 = tt gratr2-nt-eq _otherpunct-bar-41 _otherpunct-bar-41 = tt gratr2-nt-eq _otherpunct-bar-40 _otherpunct-bar-40 = tt gratr2-nt-eq _otherpunct-bar-39 _otherpunct-bar-39 = tt gratr2-nt-eq _otherpunct-bar-38 _otherpunct-bar-38 = tt gratr2-nt-eq _otherpunct-bar-37 _otherpunct-bar-37 = tt gratr2-nt-eq _otherpunct-bar-36 _otherpunct-bar-36 = tt gratr2-nt-eq _otherpunct-bar-35 _otherpunct-bar-35 = tt gratr2-nt-eq _otherpunct-bar-34 _otherpunct-bar-34 = tt gratr2-nt-eq _otherpunct-bar-33 _otherpunct-bar-33 = tt gratr2-nt-eq _otherpunct-bar-32 _otherpunct-bar-32 = tt gratr2-nt-eq _otherpunct-bar-31 _otherpunct-bar-31 = tt gratr2-nt-eq _otherpunct-bar-30 _otherpunct-bar-30 = tt gratr2-nt-eq _otherpunct-bar-29 _otherpunct-bar-29 = tt gratr2-nt-eq _otherpunct-bar-28 _otherpunct-bar-28 = tt gratr2-nt-eq _otherpunct-bar-27 _otherpunct-bar-27 = tt gratr2-nt-eq _otherpunct-bar-26 _otherpunct-bar-26 = tt gratr2-nt-eq _otherpunct-bar-25 _otherpunct-bar-25 = tt gratr2-nt-eq _otherpunct-bar-24 _otherpunct-bar-24 = tt gratr2-nt-eq _otherpunct-bar-23 _otherpunct-bar-23 = tt gratr2-nt-eq _otherpunct-bar-22 _otherpunct-bar-22 = tt gratr2-nt-eq _otherpunct-bar-21 _otherpunct-bar-21 = tt gratr2-nt-eq _otherpunct-bar-20 _otherpunct-bar-20 = tt gratr2-nt-eq _otherpunct-bar-19 _otherpunct-bar-19 = tt gratr2-nt-eq _otherpunct-bar-18 _otherpunct-bar-18 = tt gratr2-nt-eq _otherpunct-bar-17 _otherpunct-bar-17 = tt gratr2-nt-eq _otherpunct-bar-16 _otherpunct-bar-16 = tt gratr2-nt-eq _otherpunct-bar-15 _otherpunct-bar-15 = tt gratr2-nt-eq _otherpunct-bar-14 _otherpunct-bar-14 = tt gratr2-nt-eq _otherpunct-bar-13 _otherpunct-bar-13 = tt gratr2-nt-eq _otherpunct-bar-12 _otherpunct-bar-12 = tt gratr2-nt-eq _otherpunct _otherpunct = tt gratr2-nt-eq _numpunct-bar-9 _numpunct-bar-9 = tt gratr2-nt-eq _numpunct-bar-8 _numpunct-bar-8 = tt gratr2-nt-eq _numpunct-bar-7 _numpunct-bar-7 = tt gratr2-nt-eq _numpunct-bar-6 _numpunct-bar-6 = tt gratr2-nt-eq _numpunct-bar-11 _numpunct-bar-11 = tt gratr2-nt-eq _numpunct-bar-10 _numpunct-bar-10 = tt gratr2-nt-eq _numpunct _numpunct = tt gratr2-nt-eq _numone-range-4 _numone-range-4 = tt gratr2-nt-eq _numone _numone = tt gratr2-nt-eq _num-plus-5 _num-plus-5 = tt gratr2-nt-eq _num _num = tt gratr2-nt-eq _nonws-plus-70 _nonws-plus-70 = tt gratr2-nt-eq _nonws _nonws = tt gratr2-nt-eq _entity _entity = tt gratr2-nt-eq _entities _entities = tt gratr2-nt-eq _comment-star-64 _comment-star-64 = tt gratr2-nt-eq _comment _comment = tt gratr2-nt-eq _aws-bar-66 _aws-bar-66 = tt gratr2-nt-eq _aws-bar-65 _aws-bar-65 = tt gratr2-nt-eq _aws _aws = tt gratr2-nt-eq _anynonwschar-bar-69 _anynonwschar-bar-69 = tt gratr2-nt-eq _anynonwschar-bar-68 _anynonwschar-bar-68 = tt gratr2-nt-eq _anynonwschar _anynonwschar = tt gratr2-nt-eq _anychar-bar-63 _anychar-bar-63 = tt gratr2-nt-eq _anychar-bar-62 _anychar-bar-62 = tt gratr2-nt-eq _anychar-bar-61 _anychar-bar-61 = tt gratr2-nt-eq _anychar-bar-60 _anychar-bar-60 = tt gratr2-nt-eq _anychar-bar-59 _anychar-bar-59 = tt gratr2-nt-eq _anychar _anychar = tt gratr2-nt-eq _alpha-range-2 _alpha-range-2 = tt gratr2-nt-eq _alpha-range-1 _alpha-range-1 = tt gratr2-nt-eq _alpha-bar-3 _alpha-bar-3 = tt gratr2-nt-eq _alpha _alpha = tt gratr2-nt-eq _ _ = ff open import rtn gratr2-nt cws-start : gratr2-nt → 𝕃 gratr2-rule cws-start _ws-plus-67 = 
(just "P197" , nothing , just _ws-plus-67 , inj₁ _aws :: inj₁ _ws-plus-67 :: []) :: (just "P196" , nothing , just _ws-plus-67 , inj₁ _aws :: []) :: [] cws-start _ws = (just "P198" , nothing , just _ws , inj₁ _ws-plus-67 :: []) :: [] cws-start _start = (just "File" , nothing , just _start , inj₁ _entities :: []) :: [] cws-start _posinfo = (just "Posinfo" , nothing , just _posinfo , []) :: [] cws-start _otherpunct-bar-58 = (just "P175" , nothing , just _otherpunct-bar-58 , inj₁ _otherpunct-bar-57 :: []) :: (just "P174" , nothing , just _otherpunct-bar-58 , inj₂ '|' :: []) :: [] cws-start _otherpunct-bar-57 = (just "P173" , nothing , just _otherpunct-bar-57 , inj₁ _otherpunct-bar-56 :: []) :: (just "P172" , nothing , just _otherpunct-bar-57 , inj₂ '□' :: []) :: [] cws-start _otherpunct-bar-56 = (just "P171" , nothing , just _otherpunct-bar-56 , inj₁ _otherpunct-bar-55 :: []) :: (just "P170" , nothing , just _otherpunct-bar-56 , inj₂ 'Π' :: []) :: [] cws-start _otherpunct-bar-55 = (just "P169" , nothing , just _otherpunct-bar-55 , inj₁ _otherpunct-bar-54 :: []) :: (just "P168" , nothing , just _otherpunct-bar-55 , inj₂ 'ι' :: []) :: [] cws-start _otherpunct-bar-54 = (just "P167" , nothing , just _otherpunct-bar-54 , inj₁ _otherpunct-bar-53 :: []) :: (just "P166" , nothing , just _otherpunct-bar-54 , inj₂ 'λ' :: []) :: [] cws-start _otherpunct-bar-53 = (just "P165" , nothing , just _otherpunct-bar-53 , inj₁ _otherpunct-bar-52 :: []) :: (just "P164" , nothing , just _otherpunct-bar-53 , inj₂ '∀' :: []) :: [] cws-start _otherpunct-bar-52 = (just "P163" , nothing , just _otherpunct-bar-52 , inj₁ _otherpunct-bar-51 :: []) :: (just "P162" , nothing , just _otherpunct-bar-52 , inj₂ 'π' :: []) :: [] cws-start _otherpunct-bar-51 = (just "P161" , nothing , just _otherpunct-bar-51 , inj₁ _otherpunct-bar-50 :: []) :: (just "P160" , nothing , just _otherpunct-bar-51 , inj₂ '★' :: []) :: [] cws-start _otherpunct-bar-50 = (just "P159" , nothing , just _otherpunct-bar-50 , inj₁ _otherpunct-bar-49 :: []) :: (just "P158" , nothing , just _otherpunct-bar-50 , inj₂ '☆' :: []) :: [] cws-start _otherpunct-bar-49 = (just "P157" , nothing , just _otherpunct-bar-49 , inj₁ _otherpunct-bar-48 :: []) :: (just "P156" , nothing , just _otherpunct-bar-49 , inj₂ '·' :: []) :: [] cws-start _otherpunct-bar-48 = (just "P155" , nothing , just _otherpunct-bar-48 , inj₁ _otherpunct-bar-47 :: []) :: (just "P154" , nothing , just _otherpunct-bar-48 , inj₂ '⇐' :: []) :: [] cws-start _otherpunct-bar-47 = (just "P153" , nothing , just _otherpunct-bar-47 , inj₁ _otherpunct-bar-46 :: []) :: (just "P152" , nothing , just _otherpunct-bar-47 , inj₂ '➔' :: []) :: [] cws-start _otherpunct-bar-46 = (just "P151" , nothing , just _otherpunct-bar-46 , inj₁ _otherpunct-bar-45 :: []) :: (just "P150" , nothing , just _otherpunct-bar-46 , inj₂ '➾' :: []) :: [] cws-start _otherpunct-bar-45 = (just "P149" , nothing , just _otherpunct-bar-45 , inj₁ _otherpunct-bar-44 :: []) :: (just "P148" , nothing , just _otherpunct-bar-45 , inj₂ '↑' :: []) :: [] cws-start _otherpunct-bar-44 = (just "P147" , nothing , just _otherpunct-bar-44 , inj₁ _otherpunct-bar-43 :: []) :: (just "P146" , nothing , just _otherpunct-bar-44 , inj₂ '●' :: []) :: [] cws-start _otherpunct-bar-43 = (just "P145" , nothing , just _otherpunct-bar-43 , inj₁ _otherpunct-bar-42 :: []) :: (just "P144" , nothing , just _otherpunct-bar-43 , inj₂ '(' :: []) :: [] cws-start _otherpunct-bar-42 = (just "P143" , nothing , just _otherpunct-bar-42 , inj₁ _otherpunct-bar-41 :: []) :: (just "P142" , 
nothing , just _otherpunct-bar-42 , inj₂ ')' :: []) :: [] cws-start _otherpunct-bar-41 = (just "P141" , nothing , just _otherpunct-bar-41 , inj₁ _otherpunct-bar-40 :: []) :: (just "P140" , nothing , just _otherpunct-bar-41 , inj₂ ':' :: []) :: [] cws-start _otherpunct-bar-40 = (just "P139" , nothing , just _otherpunct-bar-40 , inj₁ _otherpunct-bar-39 :: []) :: (just "P138" , nothing , just _otherpunct-bar-40 , inj₂ '.' :: []) :: [] cws-start _otherpunct-bar-39 = (just "P137" , nothing , just _otherpunct-bar-39 , inj₁ _otherpunct-bar-38 :: []) :: (just "P136" , nothing , just _otherpunct-bar-39 , inj₂ '[' :: []) :: [] cws-start _otherpunct-bar-38 = (just "P135" , nothing , just _otherpunct-bar-38 , inj₁ _otherpunct-bar-37 :: []) :: (just "P134" , nothing , just _otherpunct-bar-38 , inj₂ ']' :: []) :: [] cws-start _otherpunct-bar-37 = (just "P133" , nothing , just _otherpunct-bar-37 , inj₁ _otherpunct-bar-36 :: []) :: (just "P132" , nothing , just _otherpunct-bar-37 , inj₂ ',' :: []) :: [] cws-start _otherpunct-bar-36 = (just "P131" , nothing , just _otherpunct-bar-36 , inj₁ _otherpunct-bar-35 :: []) :: (just "P130" , nothing , just _otherpunct-bar-36 , inj₂ '!' :: []) :: [] cws-start _otherpunct-bar-35 = (just "P129" , nothing , just _otherpunct-bar-35 , inj₁ _otherpunct-bar-34 :: []) :: (just "P128" , nothing , just _otherpunct-bar-35 , inj₂ '{' :: []) :: [] cws-start _otherpunct-bar-34 = (just "P127" , nothing , just _otherpunct-bar-34 , inj₁ _otherpunct-bar-33 :: []) :: (just "P126" , nothing , just _otherpunct-bar-34 , inj₂ '}' :: []) :: [] cws-start _otherpunct-bar-33 = (just "P125" , nothing , just _otherpunct-bar-33 , inj₁ _otherpunct-bar-32 :: []) :: (just "P124" , nothing , just _otherpunct-bar-33 , inj₂ '⇒' :: []) :: [] cws-start _otherpunct-bar-32 = (just "P123" , nothing , just _otherpunct-bar-32 , inj₁ _otherpunct-bar-31 :: []) :: (just "P122" , nothing , just _otherpunct-bar-32 , inj₂ '?' 
:: []) :: [] cws-start _otherpunct-bar-31 = (just "P121" , nothing , just _otherpunct-bar-31 , inj₁ _otherpunct-bar-30 :: []) :: (just "P120" , nothing , just _otherpunct-bar-31 , inj₂ 'Λ' :: []) :: [] cws-start _otherpunct-bar-30 = (just "P119" , nothing , just _otherpunct-bar-30 , inj₁ _otherpunct-bar-29 :: []) :: (just "P118" , nothing , just _otherpunct-bar-30 , inj₂ 'ρ' :: []) :: [] cws-start _otherpunct-bar-29 = (just "P117" , nothing , just _otherpunct-bar-29 , inj₁ _otherpunct-bar-28 :: []) :: (just "P116" , nothing , just _otherpunct-bar-29 , inj₂ 'ε' :: []) :: [] cws-start _otherpunct-bar-28 = (just "P115" , nothing , just _otherpunct-bar-28 , inj₁ _otherpunct-bar-27 :: []) :: (just "P114" , nothing , just _otherpunct-bar-28 , inj₂ 'β' :: []) :: [] cws-start _otherpunct-bar-27 = (just "P113" , nothing , just _otherpunct-bar-27 , inj₁ _otherpunct-bar-26 :: []) :: (just "P112" , nothing , just _otherpunct-bar-27 , inj₂ '-' :: []) :: [] cws-start _otherpunct-bar-26 = (just "P111" , nothing , just _otherpunct-bar-26 , inj₁ _otherpunct-bar-25 :: []) :: (just "P110" , nothing , just _otherpunct-bar-26 , inj₂ '𝒌' :: []) :: [] cws-start _otherpunct-bar-25 = (just "P109" , nothing , just _otherpunct-bar-25 , inj₁ _otherpunct-bar-24 :: []) :: (just "P108" , nothing , just _otherpunct-bar-25 , inj₂ '=' :: []) :: [] cws-start _otherpunct-bar-24 = (just "P107" , nothing , just _otherpunct-bar-24 , inj₁ _otherpunct-bar-23 :: []) :: (just "P106" , nothing , just _otherpunct-bar-24 , inj₂ 'ς' :: []) :: [] cws-start _otherpunct-bar-23 = (just "P105" , nothing , just _otherpunct-bar-23 , inj₁ _otherpunct-bar-22 :: []) :: (just "P104" , nothing , just _otherpunct-bar-23 , inj₂ 'θ' :: []) :: [] cws-start _otherpunct-bar-22 = (just "P103" , nothing , just _otherpunct-bar-22 , inj₁ _otherpunct-bar-21 :: []) :: (just "P102" , nothing , just _otherpunct-bar-22 , inj₂ '+' :: []) :: [] cws-start _otherpunct-bar-21 = (just "P101" , nothing , just _otherpunct-bar-21 , inj₁ _otherpunct-bar-20 :: []) :: (just "P100" , nothing , just _otherpunct-bar-21 , inj₂ '<' :: []) :: [] cws-start _otherpunct-bar-20 = (just "P99" , nothing , just _otherpunct-bar-20 , inj₁ _otherpunct-bar-19 :: []) :: (just "P98" , nothing , just _otherpunct-bar-20 , inj₂ '>' :: []) :: [] cws-start _otherpunct-bar-19 = (just "P97" , nothing , just _otherpunct-bar-19 , inj₁ _otherpunct-bar-18 :: []) :: (just "P96" , nothing , just _otherpunct-bar-19 , inj₂ '≃' :: []) :: [] cws-start _otherpunct-bar-18 = (just "P95" , nothing , just _otherpunct-bar-18 , inj₁ _otherpunct-bar-17 :: []) :: (just "P94" , nothing , just _otherpunct-bar-18 , inj₂ '\"' :: []) :: [] cws-start _otherpunct-bar-17 = (just "P93" , nothing , just _otherpunct-bar-17 , inj₁ _otherpunct-bar-16 :: []) :: (just "P92" , nothing , just _otherpunct-bar-17 , inj₂ 'δ' :: []) :: [] cws-start _otherpunct-bar-16 = (just "P91" , nothing , just _otherpunct-bar-16 , inj₁ _otherpunct-bar-15 :: []) :: (just "P90" , nothing , just _otherpunct-bar-16 , inj₂ 'χ' :: []) :: [] cws-start _otherpunct-bar-15 = (just "P89" , nothing , just _otherpunct-bar-15 , inj₁ _otherpunct-bar-14 :: []) :: (just "P88" , nothing , just _otherpunct-bar-15 , inj₂ 'μ' :: []) :: [] cws-start _otherpunct-bar-14 = (just "P87" , nothing , just _otherpunct-bar-14 , inj₁ _otherpunct-bar-13 :: []) :: (just "P86" , nothing , just _otherpunct-bar-14 , inj₂ 'υ' :: []) :: [] cws-start _otherpunct-bar-13 = (just "P85" , nothing , just _otherpunct-bar-13 , inj₁ _otherpunct-bar-12 :: []) :: (just "P84" , nothing , just 
_otherpunct-bar-13 , inj₂ 'φ' :: []) :: [] cws-start _otherpunct-bar-12 = (just "P83" , nothing , just _otherpunct-bar-12 , inj₂ 'ω' :: []) :: (just "P82" , nothing , just _otherpunct-bar-12 , inj₂ '◂' :: []) :: [] cws-start _otherpunct = (just "P176" , nothing , just _otherpunct , inj₁ _otherpunct-bar-58 :: []) :: [] cws-start _numpunct-bar-9 = (just "P76" , nothing , just _numpunct-bar-9 , inj₁ _numpunct-bar-8 :: []) :: (just "P75" , nothing , just _numpunct-bar-9 , inj₂ '-' :: []) :: [] cws-start _numpunct-bar-8 = (just "P74" , nothing , just _numpunct-bar-8 , inj₁ _numpunct-bar-7 :: []) :: (just "P73" , nothing , just _numpunct-bar-8 , inj₂ '~' :: []) :: [] cws-start _numpunct-bar-7 = (just "P72" , nothing , just _numpunct-bar-7 , inj₁ _numpunct-bar-6 :: []) :: (just "P71" , nothing , just _numpunct-bar-7 , inj₂ '#' :: []) :: [] cws-start _numpunct-bar-6 = (just "P70" , nothing , just _numpunct-bar-6 , inj₂ '/' :: []) :: (just "P69" , nothing , just _numpunct-bar-6 , inj₂ '_' :: []) :: [] cws-start _numpunct-bar-11 = (just "P80" , nothing , just _numpunct-bar-11 , inj₁ _numpunct-bar-10 :: []) :: (just "P79" , nothing , just _numpunct-bar-11 , inj₁ _numone :: []) :: [] cws-start _numpunct-bar-10 = (just "P78" , nothing , just _numpunct-bar-10 , inj₁ _numpunct-bar-9 :: []) :: (just "P77" , nothing , just _numpunct-bar-10 , inj₂ '\'' :: []) :: [] cws-start _numpunct = (just "P81" , nothing , just _numpunct , inj₁ _numpunct-bar-11 :: []) :: [] cws-start _numone-range-4 = (just "P64" , nothing , just _numone-range-4 , inj₂ '9' :: []) :: (just "P63" , nothing , just _numone-range-4 , inj₂ '8' :: []) :: (just "P62" , nothing , just _numone-range-4 , inj₂ '7' :: []) :: (just "P61" , nothing , just _numone-range-4 , inj₂ '6' :: []) :: (just "P60" , nothing , just _numone-range-4 , inj₂ '5' :: []) :: (just "P59" , nothing , just _numone-range-4 , inj₂ '4' :: []) :: (just "P58" , nothing , just _numone-range-4 , inj₂ '3' :: []) :: (just "P57" , nothing , just _numone-range-4 , inj₂ '2' :: []) :: (just "P56" , nothing , just _numone-range-4 , inj₂ '1' :: []) :: (just "P55" , nothing , just _numone-range-4 , inj₂ '0' :: []) :: [] cws-start _numone = (just "P65" , nothing , just _numone , inj₁ _numone-range-4 :: []) :: [] cws-start _num-plus-5 = (just "P67" , nothing , just _num-plus-5 , inj₁ _numone :: inj₁ _num-plus-5 :: []) :: (just "P66" , nothing , just _num-plus-5 , inj₁ _numone :: []) :: [] cws-start _num = (just "P68" , nothing , just _num , inj₁ _num-plus-5 :: []) :: [] cws-start _nonws-plus-70 = (just "P205" , nothing , just _nonws-plus-70 , inj₁ _anynonwschar :: inj₁ _nonws-plus-70 :: []) :: (just "P204" , nothing , just _nonws-plus-70 , inj₁ _anynonwschar :: []) :: [] cws-start _nonws = (just "P206" , nothing , just _nonws , inj₁ _nonws-plus-70 :: []) :: [] cws-start _entity = (just "EntityWs" , nothing , just _entity , inj₁ _posinfo :: inj₁ _ws :: inj₁ _posinfo :: []) :: (just "EntityNonws" , nothing , just _entity , inj₁ _nonws :: []) :: (just "EntityComment" , nothing , just _entity , inj₁ _posinfo :: inj₁ _comment :: inj₁ _posinfo :: []) :: [] cws-start _entities = (just "Entity" , nothing , just _entities , inj₁ _entity :: inj₁ _entities :: []) :: (just "EndEntity" , nothing , just _entities , []) :: [] cws-start _comment-star-64 = (just "P189" , nothing , just _comment-star-64 , inj₁ _anychar :: inj₁ _comment-star-64 :: []) :: (just "P188" , nothing , just _comment-star-64 , []) :: [] cws-start _comment = (just "P190" , nothing , just _comment , inj₂ '%' :: inj₁ _comment-star-64 :: 
inj₂ '\n' :: []) :: [] cws-start _aws-bar-66 = (just "P194" , nothing , just _aws-bar-66 , inj₁ _aws-bar-65 :: []) :: (just "P193" , nothing , just _aws-bar-66 , inj₂ '\n' :: []) :: [] cws-start _aws-bar-65 = (just "P192" , nothing , just _aws-bar-65 , inj₂ ' ' :: []) :: (just "P191" , nothing , just _aws-bar-65 , inj₂ '\t' :: []) :: [] cws-start _aws = (just "P195" , nothing , just _aws , inj₁ _aws-bar-66 :: []) :: [] cws-start _anynonwschar-bar-69 = (just "P202" , nothing , just _anynonwschar-bar-69 , inj₁ _anynonwschar-bar-68 :: []) :: (just "P201" , nothing , just _anynonwschar-bar-69 , inj₁ _alpha :: []) :: [] cws-start _anynonwschar-bar-68 = (just "P200" , nothing , just _anynonwschar-bar-68 , inj₁ _otherpunct :: []) :: (just "P199" , nothing , just _anynonwschar-bar-68 , inj₁ _numpunct :: []) :: [] cws-start _anynonwschar = (just "P203" , nothing , just _anynonwschar , inj₁ _anynonwschar-bar-69 :: []) :: [] cws-start _anychar-bar-63 = (just "P186" , nothing , just _anychar-bar-63 , inj₁ _anychar-bar-62 :: []) :: (just "P185" , nothing , just _anychar-bar-63 , inj₁ _alpha :: []) :: [] cws-start _anychar-bar-62 = (just "P184" , nothing , just _anychar-bar-62 , inj₁ _anychar-bar-61 :: []) :: (just "P183" , nothing , just _anychar-bar-62 , inj₁ _numpunct :: []) :: [] cws-start _anychar-bar-61 = (just "P182" , nothing , just _anychar-bar-61 , inj₁ _anychar-bar-60 :: []) :: (just "P181" , nothing , just _anychar-bar-61 , inj₂ '\t' :: []) :: [] cws-start _anychar-bar-60 = (just "P180" , nothing , just _anychar-bar-60 , inj₁ _anychar-bar-59 :: []) :: (just "P179" , nothing , just _anychar-bar-60 , inj₂ ' ' :: []) :: [] cws-start _anychar-bar-59 = (just "P178" , nothing , just _anychar-bar-59 , inj₁ _otherpunct :: []) :: (just "P177" , nothing , just _anychar-bar-59 , inj₂ '%' :: []) :: [] cws-start _anychar = (just "P187" , nothing , just _anychar , inj₁ _anychar-bar-63 :: []) :: [] cws-start _alpha-range-2 = (just "P51" , nothing , just _alpha-range-2 , inj₂ 'Z' :: []) :: (just "P50" , nothing , just _alpha-range-2 , inj₂ 'Y' :: []) :: (just "P49" , nothing , just _alpha-range-2 , inj₂ 'X' :: []) :: (just "P48" , nothing , just _alpha-range-2 , inj₂ 'W' :: []) :: (just "P47" , nothing , just _alpha-range-2 , inj₂ 'V' :: []) :: (just "P46" , nothing , just _alpha-range-2 , inj₂ 'U' :: []) :: (just "P45" , nothing , just _alpha-range-2 , inj₂ 'T' :: []) :: (just "P44" , nothing , just _alpha-range-2 , inj₂ 'S' :: []) :: (just "P43" , nothing , just _alpha-range-2 , inj₂ 'R' :: []) :: (just "P42" , nothing , just _alpha-range-2 , inj₂ 'Q' :: []) :: (just "P41" , nothing , just _alpha-range-2 , inj₂ 'P' :: []) :: (just "P40" , nothing , just _alpha-range-2 , inj₂ 'O' :: []) :: (just "P39" , nothing , just _alpha-range-2 , inj₂ 'N' :: []) :: (just "P38" , nothing , just _alpha-range-2 , inj₂ 'M' :: []) :: (just "P37" , nothing , just _alpha-range-2 , inj₂ 'L' :: []) :: (just "P36" , nothing , just _alpha-range-2 , inj₂ 'K' :: []) :: (just "P35" , nothing , just _alpha-range-2 , inj₂ 'J' :: []) :: (just "P34" , nothing , just _alpha-range-2 , inj₂ 'I' :: []) :: (just "P33" , nothing , just _alpha-range-2 , inj₂ 'H' :: []) :: (just "P32" , nothing , just _alpha-range-2 , inj₂ 'G' :: []) :: (just "P31" , nothing , just _alpha-range-2 , inj₂ 'F' :: []) :: (just "P30" , nothing , just _alpha-range-2 , inj₂ 'E' :: []) :: (just "P29" , nothing , just _alpha-range-2 , inj₂ 'D' :: []) :: (just "P28" , nothing , just _alpha-range-2 , inj₂ 'C' :: []) :: (just "P27" , nothing , just _alpha-range-2 , inj₂ 
'B' :: []) :: (just "P26" , nothing , just _alpha-range-2 , inj₂ 'A' :: []) :: [] cws-start _alpha-range-1 = (just "P9" , nothing , just _alpha-range-1 , inj₂ 'j' :: []) :: (just "P8" , nothing , just _alpha-range-1 , inj₂ 'i' :: []) :: (just "P7" , nothing , just _alpha-range-1 , inj₂ 'h' :: []) :: (just "P6" , nothing , just _alpha-range-1 , inj₂ 'g' :: []) :: (just "P5" , nothing , just _alpha-range-1 , inj₂ 'f' :: []) :: (just "P4" , nothing , just _alpha-range-1 , inj₂ 'e' :: []) :: (just "P3" , nothing , just _alpha-range-1 , inj₂ 'd' :: []) :: (just "P25" , nothing , just _alpha-range-1 , inj₂ 'z' :: []) :: (just "P24" , nothing , just _alpha-range-1 , inj₂ 'y' :: []) :: (just "P23" , nothing , just _alpha-range-1 , inj₂ 'x' :: []) :: (just "P22" , nothing , just _alpha-range-1 , inj₂ 'w' :: []) :: (just "P21" , nothing , just _alpha-range-1 , inj₂ 'v' :: []) :: (just "P20" , nothing , just _alpha-range-1 , inj₂ 'u' :: []) :: (just "P2" , nothing , just _alpha-range-1 , inj₂ 'c' :: []) :: (just "P19" , nothing , just _alpha-range-1 , inj₂ 't' :: []) :: (just "P18" , nothing , just _alpha-range-1 , inj₂ 's' :: []) :: (just "P17" , nothing , just _alpha-range-1 , inj₂ 'r' :: []) :: (just "P16" , nothing , just _alpha-range-1 , inj₂ 'q' :: []) :: (just "P15" , nothing , just _alpha-range-1 , inj₂ 'p' :: []) :: (just "P14" , nothing , just _alpha-range-1 , inj₂ 'o' :: []) :: (just "P13" , nothing , just _alpha-range-1 , inj₂ 'n' :: []) :: (just "P12" , nothing , just _alpha-range-1 , inj₂ 'm' :: []) :: (just "P11" , nothing , just _alpha-range-1 , inj₂ 'l' :: []) :: (just "P10" , nothing , just _alpha-range-1 , inj₂ 'k' :: []) :: (just "P1" , nothing , just _alpha-range-1 , inj₂ 'b' :: []) :: (just "P0" , nothing , just _alpha-range-1 , inj₂ 'a' :: []) :: [] cws-start _alpha-bar-3 = (just "P53" , nothing , just _alpha-bar-3 , inj₁ _alpha-range-2 :: []) :: (just "P52" , nothing , just _alpha-bar-3 , inj₁ _alpha-range-1 :: []) :: [] cws-start _alpha = (just "P54" , nothing , just _alpha , inj₁ _alpha-bar-3 :: []) :: [] cws-return : maybe gratr2-nt → 𝕃 gratr2-rule cws-return _ = [] cws-rtn : gratr2-rtn cws-rtn = record { start = _start ; _eq_ = gratr2-nt-eq ; gratr2-start = cws-start ; gratr2-return = cws-return } open import run ptr open noderiv ------------------------------------------ -- Length-decreasing rules ------------------------------------------ len-dec-rewrite : Run → maybe (Run × ℕ) len-dec-rewrite {- Entity-} ((Id "Entity") :: (ParseTree (parsed-entity x0)) :: _::_(ParseTree (parsed-entities x1)) rest) = just (ParseTree (parsed-entities (norm-entities (Entity x0 x1))) ::' rest , 3) len-dec-rewrite {- EntityComment-} ((Id "EntityComment") :: (ParseTree (parsed-posinfo x0)) :: (ParseTree parsed-comment) :: _::_(ParseTree (parsed-posinfo x1)) rest) = just (ParseTree (parsed-entity (norm-entity (EntityComment x0 x1))) ::' rest , 4) len-dec-rewrite {- EntityNonws-} ((Id "EntityNonws") :: _::_(ParseTree parsed-nonws) rest) = just (ParseTree (parsed-entity (norm-entity EntityNonws)) ::' rest , 2) len-dec-rewrite {- EntityWs-} ((Id "EntityWs") :: (ParseTree (parsed-posinfo x0)) :: (ParseTree parsed-ws) :: _::_(ParseTree (parsed-posinfo x1)) rest) = just (ParseTree (parsed-entity (norm-entity (EntityWs x0 x1))) ::' rest , 4) len-dec-rewrite {- File-} ((Id "File") :: _::_(ParseTree (parsed-entities x0)) rest) = just (ParseTree (parsed-start (norm-start (File x0))) ::' rest , 2) len-dec-rewrite {- P0-} ((Id "P0") :: _::_(InputChar 'a') rest) = just (ParseTree parsed-alpha-range-1 ::' 
rest , 2) len-dec-rewrite {- P1-} ((Id "P1") :: _::_(InputChar 'b') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P10-} ((Id "P10") :: _::_(InputChar 'k') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P100-} ((Id "P100") :: _::_(InputChar '<') rest) = just (ParseTree parsed-otherpunct-bar-21 ::' rest , 2) len-dec-rewrite {- P101-} ((Id "P101") :: _::_(ParseTree parsed-otherpunct-bar-20) rest) = just (ParseTree parsed-otherpunct-bar-21 ::' rest , 2) len-dec-rewrite {- P102-} ((Id "P102") :: _::_(InputChar '+') rest) = just (ParseTree parsed-otherpunct-bar-22 ::' rest , 2) len-dec-rewrite {- P103-} ((Id "P103") :: _::_(ParseTree parsed-otherpunct-bar-21) rest) = just (ParseTree parsed-otherpunct-bar-22 ::' rest , 2) len-dec-rewrite {- P104-} ((Id "P104") :: _::_(InputChar 'θ') rest) = just (ParseTree parsed-otherpunct-bar-23 ::' rest , 2) len-dec-rewrite {- P105-} ((Id "P105") :: _::_(ParseTree parsed-otherpunct-bar-22) rest) = just (ParseTree parsed-otherpunct-bar-23 ::' rest , 2) len-dec-rewrite {- P106-} ((Id "P106") :: _::_(InputChar 'ς') rest) = just (ParseTree parsed-otherpunct-bar-24 ::' rest , 2) len-dec-rewrite {- P107-} ((Id "P107") :: _::_(ParseTree parsed-otherpunct-bar-23) rest) = just (ParseTree parsed-otherpunct-bar-24 ::' rest , 2) len-dec-rewrite {- P108-} ((Id "P108") :: _::_(InputChar '=') rest) = just (ParseTree parsed-otherpunct-bar-25 ::' rest , 2) len-dec-rewrite {- P109-} ((Id "P109") :: _::_(ParseTree parsed-otherpunct-bar-24) rest) = just (ParseTree parsed-otherpunct-bar-25 ::' rest , 2) len-dec-rewrite {- P11-} ((Id "P11") :: _::_(InputChar 'l') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P110-} ((Id "P110") :: _::_(InputChar '𝒌') rest) = just (ParseTree parsed-otherpunct-bar-26 ::' rest , 2) len-dec-rewrite {- P111-} ((Id "P111") :: _::_(ParseTree parsed-otherpunct-bar-25) rest) = just (ParseTree parsed-otherpunct-bar-26 ::' rest , 2) len-dec-rewrite {- P112-} ((Id "P112") :: _::_(InputChar '-') rest) = just (ParseTree parsed-otherpunct-bar-27 ::' rest , 2) len-dec-rewrite {- P113-} ((Id "P113") :: _::_(ParseTree parsed-otherpunct-bar-26) rest) = just (ParseTree parsed-otherpunct-bar-27 ::' rest , 2) len-dec-rewrite {- P114-} ((Id "P114") :: _::_(InputChar 'β') rest) = just (ParseTree parsed-otherpunct-bar-28 ::' rest , 2) len-dec-rewrite {- P115-} ((Id "P115") :: _::_(ParseTree parsed-otherpunct-bar-27) rest) = just (ParseTree parsed-otherpunct-bar-28 ::' rest , 2) len-dec-rewrite {- P116-} ((Id "P116") :: _::_(InputChar 'ε') rest) = just (ParseTree parsed-otherpunct-bar-29 ::' rest , 2) len-dec-rewrite {- P117-} ((Id "P117") :: _::_(ParseTree parsed-otherpunct-bar-28) rest) = just (ParseTree parsed-otherpunct-bar-29 ::' rest , 2) len-dec-rewrite {- P118-} ((Id "P118") :: _::_(InputChar 'ρ') rest) = just (ParseTree parsed-otherpunct-bar-30 ::' rest , 2) len-dec-rewrite {- P119-} ((Id "P119") :: _::_(ParseTree parsed-otherpunct-bar-29) rest) = just (ParseTree parsed-otherpunct-bar-30 ::' rest , 2) len-dec-rewrite {- P12-} ((Id "P12") :: _::_(InputChar 'm') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P120-} ((Id "P120") :: _::_(InputChar 'Λ') rest) = just (ParseTree parsed-otherpunct-bar-31 ::' rest , 2) len-dec-rewrite {- P121-} ((Id "P121") :: _::_(ParseTree parsed-otherpunct-bar-30) rest) = just (ParseTree parsed-otherpunct-bar-31 ::' rest , 2) len-dec-rewrite {- P122-} ((Id "P122") :: _::_(InputChar '?') rest) = just (ParseTree 
parsed-otherpunct-bar-32 ::' rest , 2) len-dec-rewrite {- P123-} ((Id "P123") :: _::_(ParseTree parsed-otherpunct-bar-31) rest) = just (ParseTree parsed-otherpunct-bar-32 ::' rest , 2) len-dec-rewrite {- P124-} ((Id "P124") :: _::_(InputChar '⇒') rest) = just (ParseTree parsed-otherpunct-bar-33 ::' rest , 2) len-dec-rewrite {- P125-} ((Id "P125") :: _::_(ParseTree parsed-otherpunct-bar-32) rest) = just (ParseTree parsed-otherpunct-bar-33 ::' rest , 2) len-dec-rewrite {- P126-} ((Id "P126") :: _::_(InputChar '}') rest) = just (ParseTree parsed-otherpunct-bar-34 ::' rest , 2) len-dec-rewrite {- P127-} ((Id "P127") :: _::_(ParseTree parsed-otherpunct-bar-33) rest) = just (ParseTree parsed-otherpunct-bar-34 ::' rest , 2) len-dec-rewrite {- P128-} ((Id "P128") :: _::_(InputChar '{') rest) = just (ParseTree parsed-otherpunct-bar-35 ::' rest , 2) len-dec-rewrite {- P129-} ((Id "P129") :: _::_(ParseTree parsed-otherpunct-bar-34) rest) = just (ParseTree parsed-otherpunct-bar-35 ::' rest , 2) len-dec-rewrite {- P13-} ((Id "P13") :: _::_(InputChar 'n') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P130-} ((Id "P130") :: _::_(InputChar '!') rest) = just (ParseTree parsed-otherpunct-bar-36 ::' rest , 2) len-dec-rewrite {- P131-} ((Id "P131") :: _::_(ParseTree parsed-otherpunct-bar-35) rest) = just (ParseTree parsed-otherpunct-bar-36 ::' rest , 2) len-dec-rewrite {- P132-} ((Id "P132") :: _::_(InputChar ',') rest) = just (ParseTree parsed-otherpunct-bar-37 ::' rest , 2) len-dec-rewrite {- P133-} ((Id "P133") :: _::_(ParseTree parsed-otherpunct-bar-36) rest) = just (ParseTree parsed-otherpunct-bar-37 ::' rest , 2) len-dec-rewrite {- P134-} ((Id "P134") :: _::_(InputChar ']') rest) = just (ParseTree parsed-otherpunct-bar-38 ::' rest , 2) len-dec-rewrite {- P135-} ((Id "P135") :: _::_(ParseTree parsed-otherpunct-bar-37) rest) = just (ParseTree parsed-otherpunct-bar-38 ::' rest , 2) len-dec-rewrite {- P136-} ((Id "P136") :: _::_(InputChar '[') rest) = just (ParseTree parsed-otherpunct-bar-39 ::' rest , 2) len-dec-rewrite {- P137-} ((Id "P137") :: _::_(ParseTree parsed-otherpunct-bar-38) rest) = just (ParseTree parsed-otherpunct-bar-39 ::' rest , 2) len-dec-rewrite {- P138-} ((Id "P138") :: _::_(InputChar '.') rest) = just (ParseTree parsed-otherpunct-bar-40 ::' rest , 2) len-dec-rewrite {- P139-} ((Id "P139") :: _::_(ParseTree parsed-otherpunct-bar-39) rest) = just (ParseTree parsed-otherpunct-bar-40 ::' rest , 2) len-dec-rewrite {- P14-} ((Id "P14") :: _::_(InputChar 'o') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P140-} ((Id "P140") :: _::_(InputChar ':') rest) = just (ParseTree parsed-otherpunct-bar-41 ::' rest , 2) len-dec-rewrite {- P141-} ((Id "P141") :: _::_(ParseTree parsed-otherpunct-bar-40) rest) = just (ParseTree parsed-otherpunct-bar-41 ::' rest , 2) len-dec-rewrite {- P142-} ((Id "P142") :: _::_(InputChar ')') rest) = just (ParseTree parsed-otherpunct-bar-42 ::' rest , 2) len-dec-rewrite {- P143-} ((Id "P143") :: _::_(ParseTree parsed-otherpunct-bar-41) rest) = just (ParseTree parsed-otherpunct-bar-42 ::' rest , 2) len-dec-rewrite {- P144-} ((Id "P144") :: _::_(InputChar '(') rest) = just (ParseTree parsed-otherpunct-bar-43 ::' rest , 2) len-dec-rewrite {- P145-} ((Id "P145") :: _::_(ParseTree parsed-otherpunct-bar-42) rest) = just (ParseTree parsed-otherpunct-bar-43 ::' rest , 2) len-dec-rewrite {- P146-} ((Id "P146") :: _::_(InputChar '●') rest) = just (ParseTree parsed-otherpunct-bar-44 ::' rest , 2) len-dec-rewrite {- P147-} 
((Id "P147") :: _::_(ParseTree parsed-otherpunct-bar-43) rest) = just (ParseTree parsed-otherpunct-bar-44 ::' rest , 2) len-dec-rewrite {- P148-} ((Id "P148") :: _::_(InputChar '↑') rest) = just (ParseTree parsed-otherpunct-bar-45 ::' rest , 2) len-dec-rewrite {- P149-} ((Id "P149") :: _::_(ParseTree parsed-otherpunct-bar-44) rest) = just (ParseTree parsed-otherpunct-bar-45 ::' rest , 2) len-dec-rewrite {- P15-} ((Id "P15") :: _::_(InputChar 'p') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P150-} ((Id "P150") :: _::_(InputChar '➾') rest) = just (ParseTree parsed-otherpunct-bar-46 ::' rest , 2) len-dec-rewrite {- P151-} ((Id "P151") :: _::_(ParseTree parsed-otherpunct-bar-45) rest) = just (ParseTree parsed-otherpunct-bar-46 ::' rest , 2) len-dec-rewrite {- P152-} ((Id "P152") :: _::_(InputChar '➔') rest) = just (ParseTree parsed-otherpunct-bar-47 ::' rest , 2) len-dec-rewrite {- P153-} ((Id "P153") :: _::_(ParseTree parsed-otherpunct-bar-46) rest) = just (ParseTree parsed-otherpunct-bar-47 ::' rest , 2) len-dec-rewrite {- P154-} ((Id "P154") :: _::_(InputChar '⇐') rest) = just (ParseTree parsed-otherpunct-bar-48 ::' rest , 2) len-dec-rewrite {- P155-} ((Id "P155") :: _::_(ParseTree parsed-otherpunct-bar-47) rest) = just (ParseTree parsed-otherpunct-bar-48 ::' rest , 2) len-dec-rewrite {- P156-} ((Id "P156") :: _::_(InputChar '·') rest) = just (ParseTree parsed-otherpunct-bar-49 ::' rest , 2) len-dec-rewrite {- P157-} ((Id "P157") :: _::_(ParseTree parsed-otherpunct-bar-48) rest) = just (ParseTree parsed-otherpunct-bar-49 ::' rest , 2) len-dec-rewrite {- P158-} ((Id "P158") :: _::_(InputChar '☆') rest) = just (ParseTree parsed-otherpunct-bar-50 ::' rest , 2) len-dec-rewrite {- P159-} ((Id "P159") :: _::_(ParseTree parsed-otherpunct-bar-49) rest) = just (ParseTree parsed-otherpunct-bar-50 ::' rest , 2) len-dec-rewrite {- P16-} ((Id "P16") :: _::_(InputChar 'q') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P160-} ((Id "P160") :: _::_(InputChar '★') rest) = just (ParseTree parsed-otherpunct-bar-51 ::' rest , 2) len-dec-rewrite {- P161-} ((Id "P161") :: _::_(ParseTree parsed-otherpunct-bar-50) rest) = just (ParseTree parsed-otherpunct-bar-51 ::' rest , 2) len-dec-rewrite {- P162-} ((Id "P162") :: _::_(InputChar 'π') rest) = just (ParseTree parsed-otherpunct-bar-52 ::' rest , 2) len-dec-rewrite {- P163-} ((Id "P163") :: _::_(ParseTree parsed-otherpunct-bar-51) rest) = just (ParseTree parsed-otherpunct-bar-52 ::' rest , 2) len-dec-rewrite {- P164-} ((Id "P164") :: _::_(InputChar '∀') rest) = just (ParseTree parsed-otherpunct-bar-53 ::' rest , 2) len-dec-rewrite {- P165-} ((Id "P165") :: _::_(ParseTree parsed-otherpunct-bar-52) rest) = just (ParseTree parsed-otherpunct-bar-53 ::' rest , 2) len-dec-rewrite {- P166-} ((Id "P166") :: _::_(InputChar 'λ') rest) = just (ParseTree parsed-otherpunct-bar-54 ::' rest , 2) len-dec-rewrite {- P167-} ((Id "P167") :: _::_(ParseTree parsed-otherpunct-bar-53) rest) = just (ParseTree parsed-otherpunct-bar-54 ::' rest , 2) len-dec-rewrite {- P168-} ((Id "P168") :: _::_(InputChar 'ι') rest) = just (ParseTree parsed-otherpunct-bar-55 ::' rest , 2) len-dec-rewrite {- P169-} ((Id "P169") :: _::_(ParseTree parsed-otherpunct-bar-54) rest) = just (ParseTree parsed-otherpunct-bar-55 ::' rest , 2) len-dec-rewrite {- P17-} ((Id "P17") :: _::_(InputChar 'r') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P170-} ((Id "P170") :: _::_(InputChar 'Π') rest) = just (ParseTree 
parsed-otherpunct-bar-56 ::' rest , 2) len-dec-rewrite {- P171-} ((Id "P171") :: _::_(ParseTree parsed-otherpunct-bar-55) rest) = just (ParseTree parsed-otherpunct-bar-56 ::' rest , 2) len-dec-rewrite {- P172-} ((Id "P172") :: _::_(InputChar '□') rest) = just (ParseTree parsed-otherpunct-bar-57 ::' rest , 2) len-dec-rewrite {- P173-} ((Id "P173") :: _::_(ParseTree parsed-otherpunct-bar-56) rest) = just (ParseTree parsed-otherpunct-bar-57 ::' rest , 2) len-dec-rewrite {- P174-} ((Id "P174") :: _::_(InputChar '|') rest) = just (ParseTree parsed-otherpunct-bar-58 ::' rest , 2) len-dec-rewrite {- P175-} ((Id "P175") :: _::_(ParseTree parsed-otherpunct-bar-57) rest) = just (ParseTree parsed-otherpunct-bar-58 ::' rest , 2) len-dec-rewrite {- P176-} ((Id "P176") :: _::_(ParseTree parsed-otherpunct-bar-58) rest) = just (ParseTree parsed-otherpunct ::' rest , 2) len-dec-rewrite {- P177-} ((Id "P177") :: _::_(InputChar '%') rest) = just (ParseTree parsed-anychar-bar-59 ::' rest , 2) len-dec-rewrite {- P178-} ((Id "P178") :: _::_(ParseTree parsed-otherpunct) rest) = just (ParseTree parsed-anychar-bar-59 ::' rest , 2) len-dec-rewrite {- P179-} ((Id "P179") :: _::_(InputChar ' ') rest) = just (ParseTree parsed-anychar-bar-60 ::' rest , 2) len-dec-rewrite {- P18-} ((Id "P18") :: _::_(InputChar 's') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P180-} ((Id "P180") :: _::_(ParseTree parsed-anychar-bar-59) rest) = just (ParseTree parsed-anychar-bar-60 ::' rest , 2) len-dec-rewrite {- P181-} ((Id "P181") :: _::_(InputChar '\t') rest) = just (ParseTree parsed-anychar-bar-61 ::' rest , 2) len-dec-rewrite {- P182-} ((Id "P182") :: _::_(ParseTree parsed-anychar-bar-60) rest) = just (ParseTree parsed-anychar-bar-61 ::' rest , 2) len-dec-rewrite {- P183-} ((Id "P183") :: _::_(ParseTree parsed-numpunct) rest) = just (ParseTree parsed-anychar-bar-62 ::' rest , 2) len-dec-rewrite {- P184-} ((Id "P184") :: _::_(ParseTree parsed-anychar-bar-61) rest) = just (ParseTree parsed-anychar-bar-62 ::' rest , 2) len-dec-rewrite {- P185-} ((Id "P185") :: _::_(ParseTree parsed-alpha) rest) = just (ParseTree parsed-anychar-bar-63 ::' rest , 2) len-dec-rewrite {- P186-} ((Id "P186") :: _::_(ParseTree parsed-anychar-bar-62) rest) = just (ParseTree parsed-anychar-bar-63 ::' rest , 2) len-dec-rewrite {- P187-} ((Id "P187") :: _::_(ParseTree parsed-anychar-bar-63) rest) = just (ParseTree parsed-anychar ::' rest , 2) len-dec-rewrite {- P189-} ((Id "P189") :: (ParseTree parsed-anychar) :: _::_(ParseTree parsed-comment-star-64) rest) = just (ParseTree parsed-comment-star-64 ::' rest , 3) len-dec-rewrite {- P19-} ((Id "P19") :: _::_(InputChar 't') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P190-} ((Id "P190") :: (InputChar '%') :: (ParseTree parsed-comment-star-64) :: _::_(InputChar '\n') rest) = just (ParseTree parsed-comment ::' rest , 4) len-dec-rewrite {- P191-} ((Id "P191") :: _::_(InputChar '\t') rest) = just (ParseTree parsed-aws-bar-65 ::' rest , 2) len-dec-rewrite {- P192-} ((Id "P192") :: _::_(InputChar ' ') rest) = just (ParseTree parsed-aws-bar-65 ::' rest , 2) len-dec-rewrite {- P193-} ((Id "P193") :: _::_(InputChar '\n') rest) = just (ParseTree parsed-aws-bar-66 ::' rest , 2) len-dec-rewrite {- P194-} ((Id "P194") :: _::_(ParseTree parsed-aws-bar-65) rest) = just (ParseTree parsed-aws-bar-66 ::' rest , 2) len-dec-rewrite {- P195-} ((Id "P195") :: _::_(ParseTree parsed-aws-bar-66) rest) = just (ParseTree parsed-aws ::' rest , 2) len-dec-rewrite {- P196-} ((Id 
"P196") :: _::_(ParseTree parsed-aws) rest) = just (ParseTree parsed-ws-plus-67 ::' rest , 2) len-dec-rewrite {- P197-} ((Id "P197") :: (ParseTree parsed-aws) :: _::_(ParseTree parsed-ws-plus-67) rest) = just (ParseTree parsed-ws-plus-67 ::' rest , 3) len-dec-rewrite {- P198-} ((Id "P198") :: _::_(ParseTree parsed-ws-plus-67) rest) = just (ParseTree parsed-ws ::' rest , 2) len-dec-rewrite {- P199-} ((Id "P199") :: _::_(ParseTree parsed-numpunct) rest) = just (ParseTree parsed-anynonwschar-bar-68 ::' rest , 2) len-dec-rewrite {- P2-} ((Id "P2") :: _::_(InputChar 'c') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P20-} ((Id "P20") :: _::_(InputChar 'u') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P200-} ((Id "P200") :: _::_(ParseTree parsed-otherpunct) rest) = just (ParseTree parsed-anynonwschar-bar-68 ::' rest , 2) len-dec-rewrite {- P201-} ((Id "P201") :: _::_(ParseTree parsed-alpha) rest) = just (ParseTree parsed-anynonwschar-bar-69 ::' rest , 2) len-dec-rewrite {- P202-} ((Id "P202") :: _::_(ParseTree parsed-anynonwschar-bar-68) rest) = just (ParseTree parsed-anynonwschar-bar-69 ::' rest , 2) len-dec-rewrite {- P203-} ((Id "P203") :: _::_(ParseTree parsed-anynonwschar-bar-69) rest) = just (ParseTree parsed-anynonwschar ::' rest , 2) len-dec-rewrite {- P204-} ((Id "P204") :: _::_(ParseTree parsed-anynonwschar) rest) = just (ParseTree parsed-nonws-plus-70 ::' rest , 2) len-dec-rewrite {- P205-} ((Id "P205") :: (ParseTree parsed-anynonwschar) :: _::_(ParseTree parsed-nonws-plus-70) rest) = just (ParseTree parsed-nonws-plus-70 ::' rest , 3) len-dec-rewrite {- P206-} ((Id "P206") :: _::_(ParseTree parsed-nonws-plus-70) rest) = just (ParseTree parsed-nonws ::' rest , 2) len-dec-rewrite {- P21-} ((Id "P21") :: _::_(InputChar 'v') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P22-} ((Id "P22") :: _::_(InputChar 'w') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P23-} ((Id "P23") :: _::_(InputChar 'x') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P24-} ((Id "P24") :: _::_(InputChar 'y') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P25-} ((Id "P25") :: _::_(InputChar 'z') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P26-} ((Id "P26") :: _::_(InputChar 'A') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P27-} ((Id "P27") :: _::_(InputChar 'B') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P28-} ((Id "P28") :: _::_(InputChar 'C') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P29-} ((Id "P29") :: _::_(InputChar 'D') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P3-} ((Id "P3") :: _::_(InputChar 'd') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P30-} ((Id "P30") :: _::_(InputChar 'E') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P31-} ((Id "P31") :: _::_(InputChar 'F') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P32-} ((Id "P32") :: _::_(InputChar 'G') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P33-} ((Id "P33") :: _::_(InputChar 'H') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P34-} ((Id "P34") :: _::_(InputChar 'I') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) 
len-dec-rewrite {- P35-} ((Id "P35") :: _::_(InputChar 'J') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P36-} ((Id "P36") :: _::_(InputChar 'K') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P37-} ((Id "P37") :: _::_(InputChar 'L') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P38-} ((Id "P38") :: _::_(InputChar 'M') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P39-} ((Id "P39") :: _::_(InputChar 'N') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P4-} ((Id "P4") :: _::_(InputChar 'e') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P40-} ((Id "P40") :: _::_(InputChar 'O') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P41-} ((Id "P41") :: _::_(InputChar 'P') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P42-} ((Id "P42") :: _::_(InputChar 'Q') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P43-} ((Id "P43") :: _::_(InputChar 'R') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P44-} ((Id "P44") :: _::_(InputChar 'S') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P45-} ((Id "P45") :: _::_(InputChar 'T') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P46-} ((Id "P46") :: _::_(InputChar 'U') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P47-} ((Id "P47") :: _::_(InputChar 'V') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P48-} ((Id "P48") :: _::_(InputChar 'W') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P49-} ((Id "P49") :: _::_(InputChar 'X') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P5-} ((Id "P5") :: _::_(InputChar 'f') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P50-} ((Id "P50") :: _::_(InputChar 'Y') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P51-} ((Id "P51") :: _::_(InputChar 'Z') rest) = just (ParseTree parsed-alpha-range-2 ::' rest , 2) len-dec-rewrite {- P52-} ((Id "P52") :: _::_(ParseTree parsed-alpha-range-1) rest) = just (ParseTree parsed-alpha-bar-3 ::' rest , 2) len-dec-rewrite {- P53-} ((Id "P53") :: _::_(ParseTree parsed-alpha-range-2) rest) = just (ParseTree parsed-alpha-bar-3 ::' rest , 2) len-dec-rewrite {- P54-} ((Id "P54") :: _::_(ParseTree parsed-alpha-bar-3) rest) = just (ParseTree parsed-alpha ::' rest , 2) len-dec-rewrite {- P55-} ((Id "P55") :: _::_(InputChar '0') rest) = just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P56-} ((Id "P56") :: _::_(InputChar '1') rest) = just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P57-} ((Id "P57") :: _::_(InputChar '2') rest) = just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P58-} ((Id "P58") :: _::_(InputChar '3') rest) = just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P59-} ((Id "P59") :: _::_(InputChar '4') rest) = just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P6-} ((Id "P6") :: _::_(InputChar 'g') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P60-} ((Id "P60") :: _::_(InputChar '5') rest) = just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P61-} ((Id "P61") :: _::_(InputChar '6') rest) 
= just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P62-} ((Id "P62") :: _::_(InputChar '7') rest) = just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P63-} ((Id "P63") :: _::_(InputChar '8') rest) = just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P64-} ((Id "P64") :: _::_(InputChar '9') rest) = just (ParseTree parsed-numone-range-4 ::' rest , 2) len-dec-rewrite {- P65-} ((Id "P65") :: _::_(ParseTree parsed-numone-range-4) rest) = just (ParseTree parsed-numone ::' rest , 2) len-dec-rewrite {- P66-} ((Id "P66") :: _::_(ParseTree parsed-numone) rest) = just (ParseTree parsed-num-plus-5 ::' rest , 2) len-dec-rewrite {- P67-} ((Id "P67") :: (ParseTree parsed-numone) :: _::_(ParseTree parsed-num-plus-5) rest) = just (ParseTree parsed-num-plus-5 ::' rest , 3) len-dec-rewrite {- P68-} ((Id "P68") :: _::_(ParseTree parsed-num-plus-5) rest) = just (ParseTree parsed-num ::' rest , 2) len-dec-rewrite {- P69-} ((Id "P69") :: _::_(InputChar '_') rest) = just (ParseTree parsed-numpunct-bar-6 ::' rest , 2) len-dec-rewrite {- P7-} ((Id "P7") :: _::_(InputChar 'h') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P70-} ((Id "P70") :: _::_(InputChar '/') rest) = just (ParseTree parsed-numpunct-bar-6 ::' rest , 2) len-dec-rewrite {- P71-} ((Id "P71") :: _::_(InputChar '#') rest) = just (ParseTree parsed-numpunct-bar-7 ::' rest , 2) len-dec-rewrite {- P72-} ((Id "P72") :: _::_(ParseTree parsed-numpunct-bar-6) rest) = just (ParseTree parsed-numpunct-bar-7 ::' rest , 2) len-dec-rewrite {- P73-} ((Id "P73") :: _::_(InputChar '~') rest) = just (ParseTree parsed-numpunct-bar-8 ::' rest , 2) len-dec-rewrite {- P74-} ((Id "P74") :: _::_(ParseTree parsed-numpunct-bar-7) rest) = just (ParseTree parsed-numpunct-bar-8 ::' rest , 2) len-dec-rewrite {- P75-} ((Id "P75") :: _::_(InputChar '-') rest) = just (ParseTree parsed-numpunct-bar-9 ::' rest , 2) len-dec-rewrite {- P76-} ((Id "P76") :: _::_(ParseTree parsed-numpunct-bar-8) rest) = just (ParseTree parsed-numpunct-bar-9 ::' rest , 2) len-dec-rewrite {- P77-} ((Id "P77") :: _::_(InputChar '\'') rest) = just (ParseTree parsed-numpunct-bar-10 ::' rest , 2) len-dec-rewrite {- P78-} ((Id "P78") :: _::_(ParseTree parsed-numpunct-bar-9) rest) = just (ParseTree parsed-numpunct-bar-10 ::' rest , 2) len-dec-rewrite {- P79-} ((Id "P79") :: _::_(ParseTree parsed-numone) rest) = just (ParseTree parsed-numpunct-bar-11 ::' rest , 2) len-dec-rewrite {- P8-} ((Id "P8") :: _::_(InputChar 'i') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P80-} ((Id "P80") :: _::_(ParseTree parsed-numpunct-bar-10) rest) = just (ParseTree parsed-numpunct-bar-11 ::' rest , 2) len-dec-rewrite {- P81-} ((Id "P81") :: _::_(ParseTree parsed-numpunct-bar-11) rest) = just (ParseTree parsed-numpunct ::' rest , 2) len-dec-rewrite {- P82-} ((Id "P82") :: _::_(InputChar '◂') rest) = just (ParseTree parsed-otherpunct-bar-12 ::' rest , 2) len-dec-rewrite {- P83-} ((Id "P83") :: _::_(InputChar 'ω') rest) = just (ParseTree parsed-otherpunct-bar-12 ::' rest , 2) len-dec-rewrite {- P84-} ((Id "P84") :: _::_(InputChar 'φ') rest) = just (ParseTree parsed-otherpunct-bar-13 ::' rest , 2) len-dec-rewrite {- P85-} ((Id "P85") :: _::_(ParseTree parsed-otherpunct-bar-12) rest) = just (ParseTree parsed-otherpunct-bar-13 ::' rest , 2) len-dec-rewrite {- P86-} ((Id "P86") :: _::_(InputChar 'υ') rest) = just (ParseTree parsed-otherpunct-bar-14 ::' rest , 2) len-dec-rewrite {- P87-} ((Id "P87") :: 
_::_(ParseTree parsed-otherpunct-bar-13) rest) = just (ParseTree parsed-otherpunct-bar-14 ::' rest , 2) len-dec-rewrite {- P88-} ((Id "P88") :: _::_(InputChar 'μ') rest) = just (ParseTree parsed-otherpunct-bar-15 ::' rest , 2) len-dec-rewrite {- P89-} ((Id "P89") :: _::_(ParseTree parsed-otherpunct-bar-14) rest) = just (ParseTree parsed-otherpunct-bar-15 ::' rest , 2) len-dec-rewrite {- P9-} ((Id "P9") :: _::_(InputChar 'j') rest) = just (ParseTree parsed-alpha-range-1 ::' rest , 2) len-dec-rewrite {- P90-} ((Id "P90") :: _::_(InputChar 'χ') rest) = just (ParseTree parsed-otherpunct-bar-16 ::' rest , 2) len-dec-rewrite {- P91-} ((Id "P91") :: _::_(ParseTree parsed-otherpunct-bar-15) rest) = just (ParseTree parsed-otherpunct-bar-16 ::' rest , 2) len-dec-rewrite {- P92-} ((Id "P92") :: _::_(InputChar 'δ') rest) = just (ParseTree parsed-otherpunct-bar-17 ::' rest , 2) len-dec-rewrite {- P93-} ((Id "P93") :: _::_(ParseTree parsed-otherpunct-bar-16) rest) = just (ParseTree parsed-otherpunct-bar-17 ::' rest , 2) len-dec-rewrite {- P94-} ((Id "P94") :: _::_(InputChar '\"') rest) = just (ParseTree parsed-otherpunct-bar-18 ::' rest , 2) len-dec-rewrite {- P95-} ((Id "P95") :: _::_(ParseTree parsed-otherpunct-bar-17) rest) = just (ParseTree parsed-otherpunct-bar-18 ::' rest , 2) len-dec-rewrite {- P96-} ((Id "P96") :: _::_(InputChar '≃') rest) = just (ParseTree parsed-otherpunct-bar-19 ::' rest , 2) len-dec-rewrite {- P97-} ((Id "P97") :: _::_(ParseTree parsed-otherpunct-bar-18) rest) = just (ParseTree parsed-otherpunct-bar-19 ::' rest , 2) len-dec-rewrite {- P98-} ((Id "P98") :: _::_(InputChar '>') rest) = just (ParseTree parsed-otherpunct-bar-20 ::' rest , 2) len-dec-rewrite {- P99-} ((Id "P99") :: _::_(ParseTree parsed-otherpunct-bar-19) rest) = just (ParseTree parsed-otherpunct-bar-20 ::' rest , 2) len-dec-rewrite {- EndEntity-} (_::_(Id "EndEntity") rest) = just (ParseTree (parsed-entities (norm-entities EndEntity)) ::' rest , 1) len-dec-rewrite {- P188-} (_::_(Id "P188") rest) = just (ParseTree parsed-comment-star-64 ::' rest , 1) len-dec-rewrite {- Posinfo-} (_::_(Posinfo n) rest) = just (ParseTree (parsed-posinfo (ℕ-to-string n)) ::' rest , 1) len-dec-rewrite x = nothing rrs : rewriteRules rrs = record { len-dec-rewrite = len-dec-rewrite }
[STATEMENT]
lemma (in orset) add_rem_commute:
  assumes "i \<notin> is"
  shows "\<langle>Add i e1\<rangle> \<rhd> \<langle>Rem is e2\<rangle> = \<langle>Rem is e2\<rangle> \<rhd> \<langle>Add i e1\<rangle>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. \<langle>Add i e1\<rangle> \<rhd> \<langle>Rem is e2\<rangle> = \<langle>Rem is e2\<rangle> \<rhd> \<langle>Add i e1\<rangle>
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
  i \<notin> is
goal (1 subgoal):
 1. \<langle>Add i e1\<rangle> \<rhd> \<langle>Rem is e2\<rangle> = \<langle>Rem is e2\<rangle> \<rhd> \<langle>Add i e1\<rangle>
[PROOF STEP]
by(auto simp add: interpret_op_def kleisli_def op_elem_def, fastforce)
(* Title: HOL/Auth/n_germanSimp_lemma_on_inv__53.thy Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences *) header{*The n_germanSimp Protocol Case Study*} theory n_germanSimp_lemma_on_inv__53 imports n_germanSimp_base begin section{*All lemmas on causal relation between inv__53 and some rule r*} lemma n_RecvReqSVsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvReqS N i)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvReqS N i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P3 s" apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Ident ''CurCmd'')) (Const Empty)) (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv3) ''Cmd'')) (Const Inv))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P3 s" apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv3) ''Cmd'')) (Const Inv)) (eqn (IVar (Field (Para (Ident ''Cache'') p__Inv3) ''State'')) (Const I))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P3 s" apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Ident ''CurCmd'')) (Const Empty)) (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv3) ''Cmd'')) (Const Inv))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_RecvReqE__part__0Vsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvReqE__part__0 N i)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvReqE__part__0 N i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_RecvReqE__part__1Vsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvReqE__part__1 N i)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvReqE__part__1 N i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_SendInv__part__0Vsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendInv__part__0 i)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_SendInv__part__0 i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_SendInv__part__1Vsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendInv__part__1 i)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_SendInv__part__1 i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P3 s" apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Ident ''CurCmd'')) (Const ReqS)) (eqn (IVar (Field (Para (Ident ''Chan3'') p__Inv4) ''Cmd'')) (Const InvAck))) (eqn (IVar (Para (Ident ''InvSet'') p__Inv3)) (Const true))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_SendInvAckVsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendInvAck i)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_SendInvAck i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P3 s" apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Ident ''CurCmd'')) (Const ReqS)) (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv3) ''Cmd'')) (Const Inv))) (eqn (IVar (Field (Para (Ident ''Chan2'') p__Inv4) ''Cmd'')) (Const Inv))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_RecvInvAckVsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvInvAck i)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvInvAck i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_SendGntSVsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendGntS i)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_SendGntS i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_SendGntEVsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_SendGntE N i)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_SendGntE N i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_RecvGntSVsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvGntS i)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvGntS i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_RecvGntEVsinv__53: assumes a1: "(\<exists> i. i\<le>N\<and>r=n_RecvGntE i)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain i where a1:"i\<le>N\<and>r=n_RecvGntE i" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4" apply fastforce done have "(i=p__Inv4)\<or>(i=p__Inv3)\<or>(i~=p__Inv3\<and>i~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(i=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(i~=p__Inv3\<and>i~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_StoreVsinv__53: assumes a1: "\<exists> i d. i\<le>N\<and>d\<le>N\<and>r=n_Store i d" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__53 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done end
import jams
import json
import pandas as pd
import numpy as np
import os

times = []
titles = []
lists = []
name = []

jam_path = 'DataBase/jams'
num_jams = len(os.listdir(jam_path))

for j in os.listdir(jam_path):
    jam = jams.load(f'{jam_path}/{j}')  # Load Jam Path
    meta = jam['file_metadata']
    title = meta['title']
    metadata = meta['identifiers']
    youtube = metadata['youtube_url']
    duration = meta['duration']
    duration = duration/60  # Calculate duration
    nome = str(j)
    name.append(nome.split('.')[0])  # Split the archive name
    times.append(duration)  # Add duration
    lists.append(youtube)  # Add link
    titles.append(title)  # Add title

data = pd.DataFrame()
data['id'] = name
data['link'] = lists
data['title'] = titles
data['time'] = times
data.to_csv('DataBase/youtube.csv', index=True)
print(data)
module Cats.Category.Fun where open import Cats.Trans public using (Trans ; component ; natural ; id ; _∘_) open import Relation.Binary using (Rel ; IsEquivalence ; _Preserves₂_⟶_⟶_) open import Level open import Cats.Category open import Cats.Functor using (Functor) module _ {lo la l≈ lo′ la′ l≈′} (C : Category lo la l≈) (D : Category lo′ la′ l≈′) where infixr 4 _≈_ private module C = Category C module D = Category D open D.≈ open D.≈-Reasoning Obj : Set (lo ⊔ la ⊔ l≈ ⊔ lo′ ⊔ la′ ⊔ l≈′) Obj = Functor C D _⇒_ : Obj → Obj → Set (lo ⊔ la ⊔ la′ ⊔ l≈′) _⇒_ = Trans record _≈_ {F G} (θ ι : F ⇒ G) : Set (lo ⊔ l≈′) where constructor ≈-intro field ≈-elim : ∀ {c} → component θ c D.≈ component ι c open _≈_ public equiv : ∀ {F G} → IsEquivalence (_≈_ {F} {G}) equiv = record { refl = ≈-intro refl ; sym = λ eq → ≈-intro (sym (≈-elim eq)) ; trans = λ eq₁ eq₂ → ≈-intro (trans (≈-elim eq₁) (≈-elim eq₂)) } Fun : Category (lo ⊔ la ⊔ l≈ ⊔ lo′ ⊔ la′ ⊔ l≈′) (lo ⊔ la ⊔ la′ ⊔ l≈′) (lo ⊔ l≈′) Fun = record { Obj = Obj ; _⇒_ = _⇒_ ; _≈_ = _≈_ ; id = id ; _∘_ = _∘_ ; equiv = equiv ; ∘-resp = λ θ≈θ′ ι≈ι′ → ≈-intro (D.∘-resp (≈-elim θ≈θ′) (≈-elim ι≈ι′)) ; id-r = ≈-intro D.id-r ; id-l = ≈-intro D.id-l ; assoc = ≈-intro D.assoc }
-- exercises in "Type-Driven Development with Idris"
-- chapter 10

import Shape

-- check that all functions are total
%default total

area : Shape -> Double
area x with (shapeView x)
  area (triangle base height) | STriangle = 0.5 * base * height
  area (rectangle width height) | SRectangle = width * height
  area (circle radius) | SCircle = pi * radius * radius
Jasper Lawrence is from Sebastopol, CA. He starts at Davis as a freshman in the Fall 2010 semester, majoring in linguistics. He's a fan of Magic: The Gathering, Runescape, and Guitar Hero/Rock Band.

20100610 12:49:36   Hi Jasper, and Welcome to the Wiki and to Davis! Have you checked out the two comics & cards retailers in town? Droms Comics and Cards and Bizarro World. Users/TomGarberson

20100610 12:54:01   Sonoma County, represent! Users/WilliamLewis
If $r \neq 0$ and $s \neq t$, then the arc length of the part of the circle with radius $r$ and center $z$ from $s$ to $t$ is less than $2\pi$.
State Before: n d : ℕ n_coprime_d : coprime n d ⊢ coprime (natAbs (↑n - ↑d * ⌊↑n / ↑d⌋)) d State After: n d : ℕ n_coprime_d : coprime n d this : ↑n % ↑d = ↑n - ↑d * ⌊↑n / ↑d⌋ ⊢ coprime (natAbs (↑n - ↑d * ⌊↑n / ↑d⌋)) d Tactic: have : (n : ℤ) % d = n - d * ⌊(n : ℚ) / d⌋ := Int.mod_nat_eq_sub_mul_floor_rat_div State Before: n d : ℕ n_coprime_d : coprime n d this : ↑n % ↑d = ↑n - ↑d * ⌊↑n / ↑d⌋ ⊢ coprime (natAbs (↑n - ↑d * ⌊↑n / ↑d⌋)) d State After: n d : ℕ n_coprime_d : coprime n d this : ↑n % ↑d = ↑n - ↑d * ⌊↑n / ↑d⌋ ⊢ coprime (natAbs (↑n % ↑d)) d Tactic: rw [← this] State Before: n d : ℕ n_coprime_d : coprime n d this : ↑n % ↑d = ↑n - ↑d * ⌊↑n / ↑d⌋ ⊢ coprime (natAbs (↑n % ↑d)) d State After: n d : ℕ n_coprime_d : coprime n d this✝ : ↑n % ↑d = ↑n - ↑d * ⌊↑n / ↑d⌋ this : coprime d n ⊢ coprime (natAbs (↑n % ↑d)) d Tactic: have : d.coprime n := n_coprime_d.symm State Before: n d : ℕ n_coprime_d : coprime n d this✝ : ↑n % ↑d = ↑n - ↑d * ⌊↑n / ↑d⌋ this : coprime d n ⊢ coprime (natAbs (↑n % ↑d)) d State After: no goals Tactic: rwa [Nat.coprime, Nat.gcd_rec] at this
State Before: α : Type u_1 inst✝² : CanonicallyOrderedAddMonoid α inst✝¹ : Sub α inst✝ : OrderedSub α a b c d : α ⊢ b - a + a = b ↔ a ≤ b State After: α : Type u_1 inst✝² : CanonicallyOrderedAddMonoid α inst✝¹ : Sub α inst✝ : OrderedSub α a b c d : α ⊢ a + (b - a) = b ↔ a ≤ b Tactic: rw [add_comm] State Before: α : Type u_1 inst✝² : CanonicallyOrderedAddMonoid α inst✝¹ : Sub α inst✝ : OrderedSub α a b c d : α ⊢ a + (b - a) = b ↔ a ≤ b State After: no goals Tactic: exact add_tsub_cancel_iff_le
function [gcjLat, gcjLng] = bd2gcj(bdLat, bdLng)

gcjLat = bdLat;
gcjLng = bdLng;

x_pi = pi * 3000.0 / 180.0;

inChina = ~outOfChina(bdLat, bdLng);
if ~any(inChina), return; end

x = bdLng(inChina) - 0.0065;
y = bdLat(inChina) - 0.006;
z = hypot(x, y) - 0.00002 * sin(y * x_pi);
theta = atan2(y, x) - 0.000003 * cos(x * x_pi);

gcjLng(inChina) = z.*cos(theta);
gcjLat(inChina) = z.*sin(theta);

end
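The function above is the usual BD-09 to GCJ-02 de-offset applied to Baidu map coordinates. For readers without MATLAB, a minimal single-point Python sketch of the same arithmetic follows; the outOfChina guard from the original is omitted here, and the sample coordinates are purely illustrative.

import math

def bd09_to_gcj02(bd_lat, bd_lng):
    # Same arithmetic as bd2gcj above, for a single coordinate pair
    x_pi = math.pi * 3000.0 / 180.0
    x = bd_lng - 0.0065
    y = bd_lat - 0.006
    z = math.hypot(x, y) - 0.00002 * math.sin(y * x_pi)
    theta = math.atan2(y, x) - 0.000003 * math.cos(x * x_pi)
    return z * math.sin(theta), z * math.cos(theta)  # (gcj_lat, gcj_lng)

# Illustrative only: a BD-09 point near Beijing
gcj_lat, gcj_lng = bd09_to_gcj02(39.915, 116.404)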
Darden was ranked 47th on the Cleveland Browns' top 100 players list.
{-# OPTIONS --without-K --safe #-}

open import Categories.Category using (Category)

-- Defines the following properties of a Category:
-- Cartesian -- a Cartesian category is a category with all products
-- (for the induced Monoidal structure, see Cartesian.Monoidal)

module Categories.Category.Cartesian {o ℓ e} (𝒞 : Category o ℓ e) where

open import Level using (levelOfTerm)
open import Data.Nat using (ℕ; zero; suc)

open Category 𝒞
open HomReasoning

open import Categories.Category.BinaryProducts 𝒞 using (BinaryProducts; module BinaryProducts)
open import Categories.Object.Terminal 𝒞 using (Terminal)

private
  variable
    A B C D W X Y Z : Obj
    f f′ g g′ h i : A ⇒ B

-- Cartesian monoidal category
record Cartesian : Set (levelOfTerm 𝒞) where
  field
    terminal : Terminal
    products : BinaryProducts

  open BinaryProducts products using (_×_)

  power : Obj → ℕ → Obj
  power A 0 = Terminal.⊤ terminal
  power A 1 = A
  power A (suc (suc n)) = A × power A (suc n)
Formal statement is: lemma complex_is_Real_iff: "z \<in> \<real> \<longleftrightarrow> Im z = 0" Informal statement is: A complex number $z$ is real if and only if its imaginary part is zero.
[STATEMENT]
lemma represents_div:
  fixes x y :: "'a :: field_char_0"
  assumes "p represents x"
    and "q represents y"
    and "poly q 0 \<noteq> 0"
  shows "(poly_div p q) represents (x / y)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. poly_div p q represents x / y
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
  p represents x
  q represents y
  poly q 0 \<noteq> 0
goal (1 subgoal):
 1. poly_div p q represents x / y
[PROOF STEP]
by (intro representsI ipoly_poly_div ipoly_poly_div_nonzero, auto)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% QPSK demonstration packet-based transceiver for Chilipepper
% Toyon Research Corp.
% http://www.toyon.com/chilipepper.php
% Created 10/17/2012
% [email protected]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%#codegen
function [d_i_out, d_q_out] = qpsk_rx_srrc(d_i_in, d_q_in)

persistent buf_i buf_q

OS_RATE = 8;
f = SRRC;

if isempty(buf_i)
    buf_i = zeros(1,OS_RATE*2+1);
    buf_q = zeros(1,OS_RATE*2+1);
end

buf_i = [buf_i(2:end) d_i_in];
buf_q = [buf_q(2:end) d_q_in];

d_i_out = buf_i*f;
d_q_out = buf_q*f;
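This receive-side SRRC block is a one-sample-per-call FIR: it keeps a persistent delay line of length 2*OS_RATE + 1 and returns the inner product of that buffer with the filter taps f. A small Python sketch of the same sliding-buffer structure is given below; the taps are a placeholder (a real design would come from an SRRC generator, as the SRRC constant does in the MATLAB code), so only the buffering logic is meant to carry over.

import numpy as np

class SlidingFir:
    """One-sample-at-a-time FIR mirroring the persistent-buffer structure
    of qpsk_rx_srrc. Taps are assumed given (e.g. an SRRC design of
    length 2*OS_RATE + 1)."""

    def __init__(self, taps):
        self.taps = np.asarray(taps, dtype=float)
        self.buf = np.zeros(len(self.taps))

    def step(self, sample):
        # shift the delay line by one, append the new sample,
        # then take the inner product with the taps
        self.buf = np.concatenate([self.buf[1:], [sample]])
        return float(self.buf @ self.taps)

# placeholder taps, illustrative only (not a true SRRC design)
taps = np.hanning(17) / np.hanning(17).sum()
fir_i = SlidingFir(taps)
fir_q = SlidingFir(taps)
d_i_out = fir_i.step(0.7)
d_q_out = fir_q.step(-0.7)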
Shortly before Bloody Sunday was broadcast, Nesbitt described it as "difficult but extraordinary" and "emotionally draining". The broadcast on ITV in January 2002 and its promotion did not pass without incident; he was criticised by Unionists for saying that Protestants in Northern Ireland felt "a collective guilt" over the killings. His parents' home was also vandalised and he received death threats. During the awards season, Nesbitt won the British Independent Film Award for Best Performance by an Actor in a British Independent Film and was nominated for the British Academy Television Award for Best Actor. The film was also screened at film festivals such as the Stockholm International Film Festival, where Nesbitt was presented with the Best Actor award.
# Copyright (c) 2018-2021, Carnegie Mellon University # See LICENSE for details Verifier := rec( MaxError := 6, IgnoreRoots := false ); if not IsBound(GenerateErrorReports) then GenerateErrorReports := true; fi; #F HandleTestError ( <func-name>, <func-args>, <bool> ) #F Creates a directory and reports the given function call in #F 'test.g', backtrace in 'backtrace' and current configuration in #F 'conf.g' #F #F Boolean parameter, if true, tells HandleTestError to all Error() #F thus causing an exception, if false, program will continue #F normally. #F HandleTestError := function (FuncName, FuncArgs, doErr) local dir,file,arg,n,k; if IsBound(GenerateErrorReports) and GenerateErrorReports = false then return; fi; dir := Concat(Conf("tmp_dir"), Conf("path_sep"), "Error-", String(TimeInSecs()), Conf("path_sep")); file := Concat(dir, "test.g"); SysMkdir(dir); # Create a file (truncate if it exists) PrintTo(file, "Import(spiral.spl, spiral.nt, spiral.code); \n"); #F AppendTo(file, "config_update_val(\"remove_temporaries\", SRC_CMDLINE, int_val(0));\n"); #F AppendTo(file,"config_update_val(\"tmp_dir\", SRC_CMDLINE, str_val(\"./tmp\"));\n"); AppendTo(file, "GenerateErrorReports := false; \n"); AppendTo(file, "SPL_DEFAULTS := ", SPL_DEFAULTS, ";\n"); # write function arguments n := 1; for arg in FuncArgs do AppendTo(file, "arg", String(n), " := "); if IsString(arg) then AppendTo(file, "\"", arg, "\""); else AppendTo(file, arg); fi; AppendTo(file, ";\n\n"); n := n+1; od; # write function call k := 1; AppendTo(file, "result := ", FuncName, "( "); while k<>n do AppendTo(file, "arg", String(k)); if k <> n-1 then AppendTo(file, ", "); fi; k := k + 1; od; AppendTo(file, " );\n\n"); # other info # NOTE: implement a config dump in sys_conf #F AppendTo(Concat(dir, "conf.g"), ConfigProfileList(), ";\n"); BacktraceTo(Concat(dir, "backtrace.txt"), 100); if doErr then Error(FuncName, " failed, see ", dir); else Print(FuncName, " failed, see ", dir, "\n"); fi; end; #F DeriveSPLOptions ( <spl>, <spl-options-record> ) #F Merges the defaults with <spl-options-record>, derives other #F fields, such as dataType, from <spl>, and returns a complete #F options record. 
#F DeriveSPLOptions := function (S, R) # set options R := MergeSPLOptionsRecord(R); # check if MPI req'd # if IsDMP(S) then # R.language := "c.mpi.mpich"; # fi; # if user didn't specify data type determine it from S if R.dataType = "no default" then if IsRealSPL(S) then R.dataType := "real"; else R.dataType := "complex"; fi; else ;# prevent user from doing nonsense #if not IsRealSPL(S) and R.dataType = "real" then # Error("invalid combination: complex <S> and real data type"); #fi; fi; return R; end; #F DeriveScalarType ( <spl-options-record> ) #F DeriveScalarType := function(SPLOpts) local suffix; if IsBound(SPLOpts.customDataType) then return SPLOpts.customDataType; elif IsBound(SPLOpts.customReal) and SPLOpts.dataType = "real" then return SPLOpts.customReal; elif IsBound(SPLOpts.customComplex) and SPLOpts.dataType = "complex" then return SPLOpts.customComplex; else if SPLOpts.dataType = "real" then suffix := ""; elif SPLOpts.dataType = "complex" then suffix := "_cplx"; else Error("SPLOpts.dataType has invalid value '", SPLOpts.dataType, "'"); fi; if SPLOpts.precision = "single" then return Concat("float",suffix); elif SPLOpts.precision = "double" then return Concat("double",suffix); elif SPLOpts.precision = "extended" then return Concat("long_double",suffix); else Error("SPLOpts.precision has invalid value '", SPLOpts.dataType, "'"); fi; fi; end; ProgInputType := rec( SPLSource := 0, TargetSource := 1, ObjFile := 2 ); #F DeriveType ( <spl-options-record> ) DeriveType := (opts)->When(IsBound(opts.vector), opts.vector.isa.ctype, DeriveScalarType(opts)); #F ProgSPL ( <spl> , <spl-options-record> ) #F Convert <spl> to a 'Prog' record used by xxxProg functions. #F See gap/src/spiral_spl_prog.c for details. #F ProgSPL := function (SPL, Opts) local prog; Opts := DeriveSPLOptions(SPL, Opts); prog := rec(); prog.profile := Opts.language; prog.type := ProgInputType.SPLSource; prog.data_type := DeriveScalarType(Opts); if IsBound(Opts.zeroBits) then prog.zero_bits := Opts.zeroBits; else prog.zero_bits := 0; fi; prog.dim_rows := EvalScalar(SPL.dimensions[1]); prog.dim_cols := EvalScalar(SPL.dimensions[2]); prog.auto_dim := 0; if IsBound(Opts.compflags) then prog.compiler_flags := Opts.compflags; fi; if IsBound(Opts.file) then prog.file := Opts.file; else prog.file := SysTmpName(); fi; if IsBound(Opts.subName) then prog.sub_name := Opts.subName; else prog.sub_name := "sub"; fi; prog.spl_file := prog.file; return prog; end; #F Valid <compare-type>'s are #F "random": compare on random vector (default) #F "basis" : compare on standard basis #F <int> : compare on <int> random standard base vectors #F VerifierOpts := function (CO) local opts; opts := Concat("-g -e ", String(Verifier.MaxError), " "); if CO = "basis" then return Concat(opts, " -b"); elif CO = "random" then return Concat(opts, " -r"); elif IsInt(CO) then return Concat(opts, " -s ", String(CO)); else Error("<CO> must be an integer, \"random\", or \"basis\""); fi; end;
function [AB]=vectorTensorProductArray(a,b)

%%
%Determine multiplication order
if size(a,2)==3 %type B*A
    B=a; A=b;
    multType='post';
else %type A*B
    B=b; A=a;
    multType='pre';
end

%Perform product
switch multType
    case 'pre' %type A*B
        % A1_1*B1 + A1_2*B2 + A1_3*B3
        % A2_1*B1 + A2_2*B2 + A2_3*B3
        % A3_1*B1 + A3_2*B2 + A3_3*B3
        AB=[A(:,1).*B(:,1)+A(:,4).*B(:,2)+A(:,7).*B(:,3) ...
            A(:,2).*B(:,1)+A(:,5).*B(:,2)+A(:,8).*B(:,3) ...
            A(:,3).*B(:,1)+A(:,6).*B(:,2)+A(:,9).*B(:,3)];
    case 'post' %type B*A
        % A1_1*B1 + A1_2*B2 + A1_3*B3
        % A2_1*B1 + A2_2*B2 + A2_3*B3
        % A3_1*B1 + A3_2*B2 + A3_3*B3
        AB=[A(:,1).*B(:,1)+A(:,2).*B(:,2)+A(:,3).*B(:,3) ...
            A(:,4).*B(:,1)+A(:,5).*B(:,2)+A(:,6).*B(:,3) ...
            A(:,7).*B(:,1)+A(:,8).*B(:,2)+A(:,9).*B(:,3)];
end

%%
% _*GIBBON footer text*_
%
% License: <https://github.com/gibbonCode/GIBBON/blob/master/LICENSE>
%
% GIBBON: The Geometry and Image-based Bioengineering add-On. A toolbox for
% image segmentation, image-based modeling, meshing, and finite element
% analysis.
%
% Copyright (C) 2006-2022 Kevin Mattheus Moerman and the GIBBON contributors
%
% This program is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program. If not, see <http://www.gnu.org/licenses/>.
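vectorTensorProductArray evaluates row-wise tensor-vector products, with each 3x3 tensor flattened column-wise into a row of an n-by-9 array. As a cross-check rather than a literal port of that column layout, the same two products written against full (n, 3, 3) tensors in NumPy look like this:

import numpy as np

# a: (n, 3, 3) array of tensors, b: (n, 3) array of vectors
rng = np.random.default_rng(0)
a = rng.standard_normal((5, 3, 3))
b = rng.standard_normal((5, 3))

ab_pre = np.einsum('nij,nj->ni', a, b)   # per-row A*B  (tensor acting on vector)
ab_post = np.einsum('ni,nij->nj', b, a)  # per-row B*A  (vector acting on tensor)

# sanity check against the explicit per-row products
assert np.allclose(ab_pre[0], a[0] @ b[0])
assert np.allclose(ab_post[0], b[0] @ a[0])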
module Deps.Parser -- Implementation heavily inspired by Idris' Language.JSON and Blodwen import public Text.Parser import Deps.Data import Deps.Lexer %default total %access public export Parser : Type -> Type Parser a = Grammar IdrisToken True a symbol : String -> Parser () symbol expectedName = do symbolName <- match Symbol if symbolName == expectedName then pure () else fail ("Expected: " ++ expectedName) exactIdent : String -> Parser () exactIdent expectedName = do identName <- match Identifier if identName == expectedName then pure () else fail ("Expected: " ++ expectedName) namespace_ : Parser Namespace namespace_ = do ns <- sepBy1 (symbol ".") (match Identifier) pure ns module_ : Parser Namespace module_ = do exactIdent "module" namespace_ import_ : Parser Import import_ = do exactIdent "import" reexp <- option False (do exactIdent "public" pure True) ns <- namespace_ nsAs <- option ns (do exactIdent "as" namespace_) pure (MkImport reexp ns nsAs) program : Grammar IdrisToken False IdrisHead program = do moduleNs <- option defaultModuleNs module_ imports <- many import_ pure (MkIdrisHead moduleNs imports) runParser : String -> Grammar IdrisToken e a -> Maybe a runParser str parser = let tokens = map tok (runLexer str) validTokens = filter (not . ignored) tokens Right (res, _) = parse parser validTokens | Nothing in Just res
(* Contribution to the Coq Library V6.3 (July 1999) *) (****************************************************************************) (* *) (* Constructive Geometry *) (* according to *) (* Jan von Plato *) (* *) (* Coq V5.10 *) (* *) (* Gilles Kahn *) (* INRIA *) (* Sophia-Antipolis *) (* *) (* *) (* Acknowledgments: This work started in Turin in May 95 with Yves Bertot. *) (* Thanks to the Newton Institute for providing an exceptional work *) (* environment in Summer 1995 so that I could complete this. *) (****************************************************************************) (****************************************************************************) (* part2.v *) (****************************************************************************) Require Import basis. Require Import part1. Theorem thm4_1a : forall (x : Segment) (l : Line), DiLn l (ln x) -> Apart (origin x) l \/ Apart (extremity x) l. Proof. intros x l. generalize (inc_ln2 x); generalize (inc_ln1 x). unfold Incident in |- *. generalize (el_ax x l (ln x)). tauto. Qed. Theorem thm4_1b : forall (x : Segment) (l : Line), Apart (origin x) l \/ Apart (extremity x) l -> DiLn l (ln x). Proof. intros x l. generalize (inc_ln2 x); generalize (inc_ln1 x). unfold Incident in |- *. intros H' H'0 H'1; elim H'1; intro H'2; clear H'1. elim (cmp_apt_diln (origin x) l (ln x)); tauto. elim (cmp_apt_diln (extremity x) l (ln x)); tauto. Qed. Hint Resolve thm4_1a thm4_1b. Theorem thm4_1c : forall (x : Twolines) (a : Point), DiPt a (pt x) -> Apart a (line1 x) \/ Apart a (line2 x). Proof. intros x a. generalize (inc_pt2 x); generalize (inc_pt1 x). unfold Incident in |- *. intros H' H'0 H'1. generalize (el_ax (Seg a (pt x) H'1) (line1 x) (line2 x)); simpl in |- *. cut (DiLn (line1 x) (line2 x)). tauto. elim x; auto. Qed. Theorem thm4_1d : forall (x : Twolines) (a : Point), Apart a (line1 x) \/ Apart a (line2 x) -> DiPt a (pt x). Proof. intros x a. generalize (inc_pt2 x); generalize (inc_pt1 x). unfold Incident in |- *. intros H' H'0 H'1; elim H'1; intro H'2; clear H'1. generalize (cmp_apt_dipt a (pt x) (line1 x)); tauto. generalize (cmp_apt_dipt a (pt x) (line2 x)); tauto. Qed. Theorem Symmetry_of_Apart : forall x y : Segment, Apart (origin x) (ln y) \/ Apart (extremity x) (ln y) -> Apart (origin y) (ln x) \/ Apart (extremity y) (ln x). intros x y H'. apply thm4_1a. apply sym_DiLn; auto. Qed. Theorem thm4_3a : forall (x : Segment) (c : Point), Apart c (ln x) -> DiPt c (origin x). Proof. intros x c H'. elim (cmp_apt_dipt c (origin x) (ln x)); trivial. intro H0; elim (inc_ln1 x); trivial. Qed. Theorem thm4_3b : forall (x : Segment) (c : Point), Apart c (ln x) -> DiPt c (extremity x). Proof. intros x c H'. elim (cmp_apt_dipt c (extremity x) (ln x)); trivial. intro H0; elim (inc_ln2 x); trivial. Qed. Definition Side1 : Triangle -> Segment. Proof. intro H'; elim H'; clear H'. intros summit base H'. apply (Seg summit (origin base)). apply thm4_3a; trivial. Defined. Definition Side2 : Triangle -> Segment. Proof. intro H'; elim H'; clear H'. intros summit base H'. apply (Seg summit (extremity base)). apply thm4_3b; trivial. Defined. Theorem auxs1 : forall t : Triangle, origin (base t) = extremity (Side1 t). Proof. intro t; elim t; auto. Qed. Theorem auxs2 : forall t : Triangle, extremity (base t) = extremity (Side2 t). Proof. intro t; elim t; auto. Qed. Theorem auxs3 : forall t : Triangle, summit t = origin (Side1 t). Proof. intro t; elim t; auto. Qed. Theorem auxs4 : forall t : Triangle, summit t = origin (Side2 t). Proof. intro t; elim t; auto. 
Qed. Theorem thm4_3c : forall t : Triangle, DiLn (ln (base t)) (ln (Side1 t)). Proof. intro H'; elim H'; clear H'. intros summit base Tri_cond. elim (cmp_apt_diln summit (ln base) (ln (Side1 (Tri summit base Tri_cond)))); auto. Qed. Theorem thm4_3d : forall t : Triangle, DiLn (ln (base t)) (ln (Side2 t)). Proof. intro H'; elim H'; clear H'. intros summit base Tri_cond. elim (cmp_apt_diln summit (ln base) (ln (Side2 (Tri summit base Tri_cond)))); auto. Qed. Hint Resolve thm4_3c thm4_3d.
\begin{document}

\chapter{Collecting Data}
This chapter describes how the Lightning Network's base information is gathered and which tools are used for the analysis of said network. The Lightning Network is essentially composed of a client/daemon running on a Linux-based machine that acts as a \textit{watchdog} over the Bitcoin blockchain in case any attacker decides to act maliciously. Since the Lightning Network is a \textit{trustless} off-chain payment network, it is not yet possible to watch a third-party blockchain; that is, a full Bitcoin node and its blockchain are required to run locally and cannot be delegated to an external arbiter. Downloading and maintaining a blockchain requires a server that is constantly up and running: for this reason a Raspberry Pi 2 model B with a custom Debian-based distribution was set up along with a storage unit of 1 TeraByte. The main Bitcoin blockchain size is 181,223 Gigabyte at the time of writing\footnote{https://www.blockchain.com/charts/blocks-size}, while the testing blockchain (the Bitcoin developer environment) is only 25,975 Gigabyte. The Lightning Network is implemented for both the main and the testing blockchain, but this work refers only to the testnet environment; nonetheless, everything that is presented here can be replicated on the main network.

\section{Clients}
There are two main daemons running on the Raspberry Pi, the btcd and lnd clients. The first is a particular implementation of the Bitcoin protocol that included a fix to a very serious security threat known as \textit{transaction malleability} \cite{Andrychowicz2015}, which caused the Mt. Gox bankruptcy in 2014 \cite{Decker2014}; the fix was later implemented in the official Bitcoin client during the SegWit update. Both clients are written in Go, a programming language created by Google that puts emphasis on multithreading. It is important to notice that lnd is not the only implementation of the Lightning Network, though it is the only repository where the original inventors of the Lightning Network are directly involved. Other implementations such as \textit{c-lightning} or \textit{eclair} are valid alternatives that comply with the Lightning Network specifications known as \textit{lightning-rfc}, consisting of twelve BOLTs (Basis of Lightning Technology) that describe the requirements to be satisfied in order to use the network.

\subsection{btcd}
btcd\footnote{https://github.com/btcsuite/btcd} is a Go Bitcoin client which was the first to include a fix to the malleability problem. The client is able to relay every Bitcoin transaction and to process the same blocks and transactions as the official client without causing blockchain forks. This implementation corrects a design flaw in which the wallet functionality is integrated within the Bitcoin client in a 1-to-1 relationship with the user, i.e. two users of the same device must run two different Bitcoin clients; with btcd it is instead possible to build a 1-to-many relationship, allowing multiple users to share the same Bitcoin blockchain on the device they have in common. btcd was the first client to implement the SegWit update, which was essential to the success of the Lightning Network. The use of btcd was later considered not mandatory after the official SegWit update, which was activated on August 24th, 2017.
The client offers a large set of commands to help monitor the blockchain.

\subsection{lnd}
lnd is the official Lightning Network client, which lets the user participate in the micropayment network. In order to participate in the network, the lnd client has to be synced with the Bitcoin client before starting the channel creation phase. lnd also provides wallet functionality, enabling the user to manage funds and to create the very first channels; channel creation is fully automated and by default it picks random nodes across the network to open channels with, by contacting bootstrap nodes. Through an automated system called \textit{autopilot}, an agent will attempt to automatically open channels that put the user's current node in a good spot with respect to the network topology, that is, a more central position in the network. It is also possible to set the maximum number of channels a user wants to open (the default is 5) and how many funds should be allocated from the bitcoin wallet to the lightning network wallet (for example, autopilot.allocation = 0.6 means that 60\% of the funds of the wallet should automatically be used as channel funds). It is also possible to manage the channels manually through a subroutine called \textit{lncli}, a command line interface that exposes a set of API calls that help with node management. These APIs were fundamental to retrieving the current state of the network, as they are able to return the entire network information. These APIs are also remotely reachable, running by default on port 10009, which makes it possible to provide Lightning Network services to third parties (which need to be trusted); authentication is performed through a system known as \textit{macaroons}\footnote{https://github.com/rescrv/libmacaroons}, an evolution of the cookie authentication mechanism.

To pay through the Lightning Network, the payee must create a new invoice, a well-formatted bech32-encoded string as specified in lightning-rfc 11\footnote{https://github.com/lightningnetwork/lightning-rfc/blob/master/11-payment-encoding.md}, by using the command \textit{addinvoice} provided by the lncli client. The invoice is then passed to the payer, which is able to decode it using \textit{decodepayreq}, revealing the destination, payment hash and value of the payment request. Once the payer has verified the validity of this information, it proceeds to perform the payment by calling the \textit{payinvoice} method; the payment route can either be chosen by the client according to its metrics or be left to user preferences. As a matter of fact, the entire network topology can be obtained through the command \textit{describegraph}, which returns a JSON file containing node and edge information (the details will be shown in the next section). Therefore, every node of the network has a priori knowledge of the network when it comes to choosing the payment path. The discovery of the network is performed through a \textit{gossip} protocol in which peers exchange messages containing channel and node announcements/updates, as specified in lightning-rfc 7\footnote{https://github.com/lightningnetwork/lightning-rfc/blob/master/07-routing-gossip.md}. Because of that, there are plenty of network explorers across the internet which can be consulted for a quick network representation.
\section{Data Collection}
The period of observation went from May 15th to July 30th (2018), but the analysis of the data is carried out on the timespan that goes from May 25th to June 27th and comprises one snapshot per day. Every snapshot was taken at 4:00pm CEST in order to minimize the discrepancy between European worktime and US worktime; however, there are no significant changes between nighttime and daytime due to the nature of Lightning Network channels. Snapshots of the network were initially taken every 10 minutes, but since there were no appreciable changes between consecutive snapshots and because of the size of each snapshot (about 4MB each), the snapshot interval was increased to 30 minutes. A bash script, set to run every 30 minutes thanks to the \textit{cron} daemon task scheduler, uses the lncli client to retrieve the current state of the Lightning Network; the data is then stored in folders according to the time it was taken. The raw data comprises isolated nodes (nodes without channels) and several connected components; for this reason it has been necessary to trim the graph of this superfluous data, a task performed by a Python script that removes every isolated node and selects the largest component of the graph.

\section{Data Representation}
The lncli method \textit{describegraph} returns a JSONObject composed of two JSONArrays, \textit{nodes} and \textit{edges}, inside of which every tuple is a JSONObject containing all the information of nodes and channels respectively. The more significant properties reside in the channel object, as the node information (except for the public key parameter that acts as a unique identifier) is just cosmetic customization set by user preferences; on the contrary, the channel object holds the fundamental properties that define and shape the Lightning Network:

\lstdefinestyle{toplisting}{
  float=ht!,
  floatplacement=th,
}
\lstset{
  float=t,
  floatplacement=t,
  basicstyle=\small\ttfamily,
  string=[s]{"}{"},
  stringstyle=\color{weborange}\bfseries,
  comment=[l]{:},
  commentstyle=\color{black},
  numbers=left,
  numberstyle=\small,
  numbersep=5pt,
  frame = lines,
  backgroundcolor=\color{backgroundgray},
  framexleftmargin=15pt,
  caption=JSON schema of the network topology.,
  captionpos=b,
}
\singlespacing
\begin{lstlisting}[style=toplisting]
{
  "nodes": [
    {
      "last_update": integer,
      "pub_key": string,
      "alias": string,
      "addresses": [
        {
          "network": string,
          "addr": string
        }
      ],
      "color": string
    }
  ],
  "edges": [
    {
      "channel_id": string,
      "chan_point": string,
      "last_update": integer,
      "node1_pub": string,
      "node2_pub": string,
      "capacity": string,
      "node1_policy": {
        "time_lock_delta": integer,
        "min_htlc": string,
        "fee_base_msat": string,
        "fee_rate_milli_msat": string
      },
      "node2_policy": {
        "time_lock_delta": integer,
        "min_htlc": string,
        "fee_base_msat": string,
        "fee_rate_milli_msat": string
      }
    }
  ]
}
\end{lstlisting}
\onehalfspacing

\begin{itemize}
\item \textit{capacity}: the total amount of coins invested in the channel. The exact allocation of the money (i.e. the exact channel balance) is not possible to know, for privacy as well as overall system security.
\item \textit{nodeX\_policy}: describes the conditions to which every payer must adhere in case the node becomes an active participant of a payment route. There are two policies, one for each node of the channel. This information comprises:
\begin{itemize}
\item \textit{time\_lock\_delta}: the locktime after which a pending payment is not considered valid anymore.
\item \textit{min\_htlc}: the minimum number of coins that this channel accepts to deliver.
\item \textit{fee\_base\_msat}: the base fee that the channel retains after completing a delivery, regardless of the total amount delivered.
\item \textit{fee\_rate\_milli\_msat}: a scaling factor that adds to the base fee depending on the size of the delivered amount of satoshi.
\end{itemize}
\end{itemize}

\section{Analysis Tools}
The tools used to carry out the data analysis were Matlab and NetworkX \cite{Hagberg2008}, a framework written in Python designed around graph exploration that offers plenty of functionality in terms of state-of-the-art techniques and methodologies, allowing the creation, manipulation and study of the dynamics, structure and function of complex networks. A special version of Python was installed for this task, the Anaconda platform, which comprises more than one thousand four hundred ready-to-use scientific packages, alongside the matplotlib package for plotting graphs. All the code that was written ran on Python Jupyter Notebook, an interactive environment in which it is possible to combine executable code, markdown-formatted text, and media visualization.
\end{document}
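The collection and trimming pipeline described in the chapter above (parse the output of lncli describegraph, drop isolated nodes, keep the largest connected component) can be sketched with NetworkX as follows; the snapshot file name is a hypothetical placeholder, and only the JSON fields shown in the listing above are used.

```python
import json
import networkx as nx

# Hypothetical snapshot produced by `lncli describegraph` (file name is illustrative)
with open("describegraph_2018-06-01.json") as f:
    topo = json.load(f)

G = nx.Graph()
for node in topo["nodes"]:
    G.add_node(node["pub_key"], alias=node.get("alias", ""))
for edge in topo["edges"]:
    G.add_edge(edge["node1_pub"], edge["node2_pub"], capacity=int(edge["capacity"]))

# Trim superfluous data: remove isolated nodes, keep the largest connected component
G.remove_nodes_from(list(nx.isolates(G)))
largest = max(nx.connected_components(G), key=len)
G = G.subgraph(largest).copy()

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "channels")
```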
(* Title: HOL/Auth/n_germanSymIndex_lemma_inv__2_on_rules.thy Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences *) header{*The n_germanSymIndex Protocol Case Study*} theory n_germanSymIndex_lemma_inv__2_on_rules imports n_germanSymIndex_lemma_on_inv__2 begin section{*All lemmas on causal relation between inv__2*} lemma lemma_inv__2_on_rules: assumes b1: "r \<in> rules N" and b2: "(\<exists> p__Inv0 p__Inv2. p__Inv0\<le>N\<and>p__Inv2\<le>N\<and>p__Inv0~=p__Inv2\<and>f=inv__2 p__Inv0 p__Inv2)" shows "invHoldForRule s f r (invariants N)" proof - have c1: "(\<exists> i d. i\<le>N\<and>d\<le>N\<and>r=n_Store i d)\<or> (\<exists> i. i\<le>N\<and>r=n_SendReqS i)\<or> (\<exists> i. i\<le>N\<and>r=n_SendReqE__part__0 i)\<or> (\<exists> i. i\<le>N\<and>r=n_SendReqE__part__1 i)\<or> (\<exists> i. i\<le>N\<and>r=n_RecvReqS N i)\<or> (\<exists> i. i\<le>N\<and>r=n_RecvReqE N i)\<or> (\<exists> i. i\<le>N\<and>r=n_SendInv__part__0 i)\<or> (\<exists> i. i\<le>N\<and>r=n_SendInv__part__1 i)\<or> (\<exists> i. i\<le>N\<and>r=n_SendInvAck i)\<or> (\<exists> i. i\<le>N\<and>r=n_RecvInvAck i)\<or> (\<exists> i. i\<le>N\<and>r=n_SendGntS i)\<or> (\<exists> i. i\<le>N\<and>r=n_SendGntE N i)\<or> (\<exists> i. i\<le>N\<and>r=n_RecvGntS i)\<or> (\<exists> i. i\<le>N\<and>r=n_RecvGntE i)" apply (cut_tac b1, auto) done moreover { assume d1: "(\<exists> i d. i\<le>N\<and>d\<le>N\<and>r=n_Store i d)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_StoreVsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_SendReqS i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_SendReqSVsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_SendReqE__part__0 i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_SendReqE__part__0Vsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_SendReqE__part__1 i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_SendReqE__part__1Vsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_RecvReqS N i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_RecvReqSVsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_RecvReqE N i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_RecvReqEVsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_SendInv__part__0 i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_SendInv__part__0Vsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_SendInv__part__1 i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_SendInv__part__1Vsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_SendInvAck i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_SendInvAckVsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_RecvInvAck i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_RecvInvAckVsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_SendGntS i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_SendGntSVsinv__2) done } moreover { assume d1: "(\<exists> i. 
i\<le>N\<and>r=n_SendGntE N i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_SendGntEVsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_RecvGntS i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_RecvGntSVsinv__2) done } moreover { assume d1: "(\<exists> i. i\<le>N\<and>r=n_RecvGntE i)" have "invHoldForRule s f r (invariants N)" apply (cut_tac b2 d1, metis n_RecvGntEVsinv__2) done } ultimately show "invHoldForRule s f r (invariants N)" by satx qed end
module unifac use unifacdata implicit none private save integer :: ng !< Number of active sub-groups in unifac !> Structure for active unifac parameters type :: unifacdb logical :: FloryHuggins logical :: StavermanGuggenheim integer, allocatable, dimension(:) :: mainGroupMapping integer, allocatable, dimension(:,:) :: vik !< (nc,ng) [-] Number of groups in one component real, allocatable, dimension(:,:) :: ajk !< (ng,ng) [K] Group interaction energy real, allocatable, dimension(:,:) :: bjk !< (ng,ng) [-] Group interaction energy real, allocatable, dimension(:,:) :: cjk !< (ng,ng) [1/K] Group interaction energy real, allocatable, dimension(:) :: Qk !< (ng) [] Molecyle group surface area real, allocatable, dimension(:) :: qi !< [-] (nc) Combinatorial term param real, allocatable, dimension(:) :: ri !< [-] (nc) Combinatorial term param contains procedure :: dealloc => unifacdb_dealloc procedure :: assign_unifacdb generic, public :: assignment(=) => assign_unifacdb end type unifacdb type(unifacdb) :: unifdb !< Active unifac parameters public :: init_unifac, cleanup_unifac public :: GeFloryHuggins, GeStavermanGuggenheim public :: GeUNIFAC, Ge_UNIFAC_GH_SG public :: unifacdb, unifdb public :: setUNIFACgroupInteraction, getUNIFACgroupInteraction contains !> Initiate unifac model !! subroutine init_unifac(UFdb,mrulestr) use thermopack_var, only: complist, nc use stringmod, only: str_eq ! type (unifacdb), intent(out) :: UFdb character(len=*), intent(in) :: mrulestr ! Locals integer, dimension(nSubGroups) :: activeGroups type (unifacComp), dimension(nc) :: activeUnifacComp real, dimension(nSubGroups) :: Rk integer :: i,j,k,err,idx if ( str_eq(mrulestr,"UMR") ) then ! print *,"USING UMR MIXING RULE" UFdb%FloryHuggins = .false. UFdb%StavermanGuggenheim = .true. else if ( str_eq(mrulestr,"VTPR") ) then ! print *,"USING VTPR MIXING RULE" UFdb%FloryHuggins = .false. UFdb%StavermanGuggenheim = .false. else if ( str_eq(mrulestr,"UNIFAC") ) then UFdb%FloryHuggins = .true. UFdb%StavermanGuggenheim = .true. else call stoperror("Invalid unifac mixing rule.") end if ! Find componets in database do i=1,nc activeUnifacComp(i) = unifacCompdb(getUNIFACcompdbidx(complist(i))) enddo activeGroups = 0 do i=1,nc do j=1,nSubGroups activeGroups(j) = activeGroups(j) + activeUnifacComp(i)%v(j) enddo enddo ng = 0 do j=1,nSubGroups if (activeGroups(j) > 0) then ng = ng + 1 activeGroups(ng) = j ! Mapping for active sub-groups endif enddo ! Deallocat previously allocated memory call cleanup_unifac(UFdb) ! Allocate new memory allocate(UFdb%vik(nc,ng),UFdb%ajk(ng,ng),& UFdb%bjk(ng,ng),UFdb%cjk(ng,ng),& UFdb%Qk(ng),UFdb%qi(nc),& UFdb%ri(nc),UFdb%mainGroupMapping(ng),STAT=err) if (err /= 0) Call StopError('Could not allocate unifac memory!') ! Set up Qk, and main group mapping do j=1,ng UFdb%Qk(j) = unifacprmdb(activeGroups(j))%Qk Rk(j) = unifacprmdb(activeGroups(j))%Rk UFdb%mainGroupMapping(j) = unifacprmdb(activeGroups(j))%mainGrp enddo ! Set up vik, qi and ri do i=1,nc do j=1,ng UFdb%vik(i,j) = activeUnifacComp(i)%v(activeGroups(j)) enddo UFdb%qi(i) = sum(UFdb%vik(i,:)*UFdb%Qk) UFdb%ri(i) = sum(UFdb%vik(i,:)*Rk(1:ng)) enddo ! 
Set up ajk, bjk and cjk do j=1,ng do k=1,ng idx = getUNIFACujkdbidx(UFdb%mainGroupMapping(j),UFdb%mainGroupMapping(k)) if (idx > 0) then UFdb%ajk(j,k) = unifacUijdb(idx)%aij UFdb%bjk(j,k) = unifacUijdb(idx)%bij UFdb%cjk(j,k) = unifacUijdb(idx)%cij else print *,'UNIFAC interaction energies not found for main groups: ',& UFdb%mainGroupMapping(j),UFdb%mainGroupMapping(k) UFdb%ajk(j,k) = 0.0 UFdb%bjk(j,k) = 0.0 UFdb%cjk(j,k) = 0.0 endif enddo enddo end subroutine init_unifac !> Set interaction parameters for tunig subroutine setUNIFACgroupInteraction(UFdb,i,j,aij,bij,cij) ! type (unifacdb), intent(out) :: UFdb integer, intent(in) :: i,j ! Main group indices real, intent(in) :: aij,bij,cij ! integer :: ii, jj ! Set aij, bij and cij do ii=1,ng do jj=1,ng if (UFdb%mainGroupMapping(ii) == i .and. UFdb%mainGroupMapping(jj) == j) then UFdb%ajk(ii,jj) = aij UFdb%bjk(ii,jj) = bij UFdb%cjk(ii,jj) = cij endif enddo enddo end subroutine setUNIFACgroupInteraction !> Set interaction parameters for tunig subroutine getUNIFACgroupInteraction(UFdb,i,j,aij,bij,cij) ! type (unifacdb), intent(in) :: UFdb integer, intent(in) :: i,j ! Main group indices real, intent(out) :: aij,bij,cij ! integer :: ii, jj ! Set aij, bij and cij do ii=1,ng do jj=1,ng if (UFdb%mainGroupMapping(ii) == i .and. UFdb%mainGroupMapping(jj) == j) then aij = UFdb%ajk(ii,jj) bij = UFdb%bjk(ii,jj) cij = UFdb%cjk(ii,jj) return endif enddo enddo end subroutine getUNIFACgroupInteraction !> Initiate unifac model !! subroutine cleanup_unifac(UFdb) ! type (unifacdb), intent(inout) :: UFdb integer :: err ! De-allocate memory err = 0 if (allocated (UFdb%vik)) deallocate (UFdb%vik, STAT=err) if (err /= 0) Call StopError('Could not deallocate UFdb%vik!') if (allocated (UFdb%ajk)) deallocate (UFdb%ajk, STAT=err) if (err /= 0) Call StopError('Could not deallocate UFdb%ajk!') if (allocated (UFdb%bjk)) deallocate (UFdb%bjk, STAT=err) if (err /= 0) Call StopError('Could not deallocate UFdb%bjk!') if (allocated (UFdb%cjk)) deallocate (UFdb%cjk, STAT=err) if (err /= 0) Call StopError('Could not deallocate UFdb%cjk!') if (allocated (UFdb%Qk)) deallocate (UFdb%Qk, STAT=err) if (err /= 0) Call StopError('Could not deallocate UFdb%Qk!') if (allocated (UFdb%qi)) deallocate (UFdb%qi, STAT=err) if (err /= 0) Call StopError('Could not deallocate UFdb%qi!') if (allocated (UFdb%ri)) deallocate (UFdb%ri, STAT=err) if (err /= 0) Call StopError('Could not deallocate UFdb%ri!') if (allocated (UFdb%mainGroupMapping)) deallocate (UFdb%mainGroupMapping, STAT=err) if (err /= 0) Call StopError('Could not deallocate UFdb%mainGroupMapping!') end subroutine cleanup_unifac !> Get index of entry containg component information in the unifac db !! \author MH, 2015 function getUNIFACcompdbidx(compid) result(idx) integer :: idx character(len=*), intent(in) :: compid logical :: found idx = 1 found = .false. do while (idx <= nUnifacComp .and. .not. found) if (trim(compid) /= trim(unifacCompdb(idx)%uid)) then idx = idx + 1 else found = .true. endif enddo if (.not. found) then idx = 0 call stoperror('UNIFAC component not found: '//trim(compid)) endif end function getUNIFACcompdbidx !> Get index of entry containg energies of mixing in the unifac db !! \author MH, 2015 function getUNIFACujkdbidx(id1,id2) result(idx) integer, intent(in) :: id1,id2 integer :: idx ! logical :: found idx = 1 found = .false. do while (idx <= nUnifacUij .and. .not. found) if (id1 == unifacUijdb(idx)%mgi .and. id2 == unifacUijdb(idx)%mgj) then found = .true. else idx = idx + 1 endif enddo if (.not. 
found) then idx = 0 endif end function getUNIFACujkdbidx !> Calculate Flory Huggins combinatorial term !! \author MH, 2015 subroutine GeFloryHuggins(n,UFdb,Ge,dGedn,d2Gedn2) use thermopack_var, only: nc ! real, dimension(nc), intent(in) :: n type (unifacdb), intent(in) :: UFdb real, intent(out) :: Ge real, optional, dimension(nc), intent(out) :: dGedn real, optional, dimension(nc,nc), intent(out) :: d2Gedn2 ! Locals integer :: i,j real, dimension(nc) :: lnr real :: sumn, sumnr, sumlnr, lnsumnr, lnsumn ! sumn = sum(n) sumnr = sum(n*UFdb%ri) lnr = log(UFdb%ri) sumlnr = sum(n*lnr) lnsumnr = log(sumnr) lnsumn = log(sumn) Ge = sumlnr - sumn*lnsumnr + sumn*lnsumn if (present(dGedn)) then do i=1,nc dGedn(i) = lnr(i) - lnsumnr + lnsumn + 1.0 - sumn*UFdb%ri(i)/sumnr enddo endif if (present(d2Gedn2)) then do i=1,nc do j=i,nc d2Gedn2(i,j) = - (UFdb%ri(i) + UFdb%ri(j))/sumnr + 1.0/sumn & + sumn*UFdb%ri(i)*UFdb%ri(j)/sumnr**2 d2Gedn2(j,i) = d2Gedn2(i,j) enddo enddo endif end subroutine GeFloryHuggins !> Calculate Staverman-Guggenheim combinatorial term !! \author MH, 2015 subroutine GeStavermanGuggenheim(n,UFdb,Ge,dGedn,d2Gedn2) use thermopack_var, only: nc ! real, dimension(nc), intent(in) :: n type (unifacdb), intent(in) :: UFdb real, intent(out) :: Ge real, optional, dimension(nc), intent(out) :: dGedn real, optional, dimension(nc,nc), intent(out) :: d2Gedn2 ! Locals integer :: i,j real, dimension(nc) :: lnqor !, phi_i, theta_i real :: sumn, sumnr, lnsumnr real :: sumnq, lnsumnq real, parameter :: zdiv2 = 5.0 ! sumn = sum(n) sumnr = sum(n*UFdb%ri) sumnq = sum(n*UFdb%qi) lnqor = log(UFdb%qi/UFdb%ri) lnsumnr = log(sumnr) lnsumnq = log(sumnq) Ge = zdiv2*(sum(n*UFdb%qi*lnqor) - sumnq*lnsumnq + sumnq*lnsumnr) !Ge = zdiv2*(sum(n*UFdb%qi*log(theta_i/phi_i))) if (present(dGedn)) then do i=1,nc dGedn(i) = zdiv2*UFdb%qi(i)*(lnqor(i) - lnsumnq + lnsumnr - 1.0 & + UFdb%ri(i)*sumnq/(UFdb%qi(i)*sumnr)) !theta_i = n*UFdb%qi/sumnq !phi_i = n*UFdb%ri/sumnr !dGedn(i) = zdiv2*UFdb%qi(i)*(log(theta_i(i)/phi_i(i))-1+phi_i(i)/theta_i(i)) enddo endif if (present(d2Gedn2)) then do i=1,nc do j=i,nc d2Gedn2(i,j) = zdiv2*(-UFdb%qi(i)*UFdb%qi(j)/sumnq & + (UFdb%qi(i)*UFdb%ri(j) + UFdb%qi(j)*UFdb%ri(i))/sumnr & - UFdb%ri(i)*UFdb%ri(j)*sumnq/sumnr**2) d2Gedn2(j,i) = d2Gedn2(i,j) enddo enddo endif end subroutine GeStavermanGuggenheim !> Calculate UNIFAC Ge (Ae) !! \author MH, 2015 subroutine GeUNIFAC(n,T,UFdb,Ge,dGedT,d2GedT2,dGedn,d2GedndT,d2Gedn2) use thermopack_var, only: nc use thermopack_constants, only: kRgas ! real, dimension(nc), intent(in) :: n real, intent(in) :: T type (unifacdb), intent(in) :: UFdb real, intent(out) :: Ge real, optional, intent(out) :: dGedT,d2GedT2 real, optional, dimension(nc), intent(out) :: dGedn,d2GedndT real, optional, dimension(nc,nc), intent(out) :: d2Gedn2 ! Locals integer :: i,j, k real, dimension(ng) :: theta_j, lambda_k, sum_nlvlj, sum_thetaj_Ejk real, dimension(ng,ng) :: Ejk, dEjkdT, d2EjkdT2 real, dimension(nc,ng) :: theta_ij, lambda_ik, dlambda_k_dn, sum_vijQjEjk real, dimension(nc,ng) :: d2lambda_k_dndT real, dimension(nc,nc,ng) :: d2lambda_k_dn2 real, dimension(nc) :: sum_vikQk, sum_vQLambda, sum_vijQjdLdTdiff real, dimension(nc,nc) :: sum_Q_v_dlambda_k_dn real, dimension(ng) :: sum_nivijQjEjk, dlambda_k_dT, d2lambda_k_dT2 real, dimension(ng) :: sum_nivijQjdEjkdT real, dimension(nc,ng) :: sum_vijQjdEjkdT, dlambda_ik_dT, d2lambda_ik_dT2 real, dimension(nc,ng) :: sum_thetaij_Ejk, sum_vijQjd2EjkdT2 real :: sum_nivikQk, temp real, dimension(nc) :: tempsum ! 
Calculate mixture energies do j=1,ng do k=1,ng Ejk(j,k) = exp(-(UFdb%ajk(j,k)/T + UFdb%bjk(j,k) + UFdb%cjk(j,k)*T)) enddo enddo ! Do we need temperature differentials? if (present(dGedT) .or. present(d2GedT2) .or. present(d2GedndT)) then do j=1,ng do k=1,ng dEjkdT(j,k) = Ejk(j,k)*(UFdb%ajk(j,k)/T**2 - UFdb%cjk(j,k)) if (present(d2GedT2)) then d2EjkdT2(j,k) = dEjkdT(j,k)*(UFdb%ajk(j,k)/T**2 - UFdb%cjk(j,k)) & - Ejk(j,k)*2.0*UFdb%ajk(j,k)/T**3 endif enddo enddo endif ! Calculate some useful sums do j=1,nc sum_vikQk(j) = sum(UFdb%vik(j,:)*UFdb%Qk(:)) enddo sum_nivikQk = sum(n*sum_vikQk) ! Calculate theta_ij do i=1,nc do j=1,ng theta_ij(i,j) = UFdb%vik(i,j)*UFdb%Qk(j)/sum_vikQk(i) enddo enddo ! Calculate theta_j do j=1,ng sum_nlvlj(j) = sum(n*UFdb%vik(:,j)) theta_j(j) = sum_nlvlj(j)*UFdb%Qk(j)/sum_nivikQk enddo ! Lambda_k do k=1,ng sum_thetaj_Ejk(k) = sum(theta_j*Ejk(:,k)) enddo lambda_k = log(sum_thetaj_Ejk) ! Lambda_ik do i=1,nc do k=1,ng sum_thetaij_Ejk(i,k) = sum(theta_ij(i,:)*Ejk(:,k)) enddo enddo lambda_ik = log(sum_thetaij_Ejk) ! Calculate Ge do i=1,nc sum_vQLambda(i) = sum(UFdb%vik(i,:)*UFdb%Qk*(lambda_k - lambda_ik(i,:))) enddo Ge = - sum(n*sum_vQLambda) ! Calculate differential of lambda_k wrpt. ni if (present(dGedn) .or. present(d2GedndT) .or. present(d2Gedn2)) then do i=1,nc do k=1,ng sum_vijQjEjk(i,k) = sum(UFdb%vik(i,:)*UFdb%Qk*Ejk(:,k)) enddo enddo do k=1,ng sum_nivijQjEjk(k) = sum(n*sum_vijQjEjk(:,k)) enddo do i=1,nc dlambda_k_dn(i,:) = sum_vijQjEjk(i,:)/sum_nivijQjEjk - sum_vikQk(i)/sum_nivikQk enddo endif ! dGedn if (present(dGedn)) then do i=1,nc tempsum(i) = sum(sum_nlvlj*UFdb%Qk*dlambda_k_dn(i,:)) enddo dGedn = -sum_vQLambda - tempsum endif ! d2Gedn2 if (present(d2Gedn2)) then do i=1,nc do j=1,nc sum_Q_v_dlambda_k_dn(i,j) = sum(UFdb%Qk*UFdb%vik(j,:)*dlambda_k_dn(i,:)) d2lambda_k_dn2(i,j,:) = - sum_vijQjEjk(i,:)*sum_vijQjEjk(j,:)/sum_nivijQjEjk**2 + sum_vikQk(i)*sum_vikQk(j)/sum_nivikQk**2 enddo enddo do i=1,nc do j=1,nc temp = sum(sum_nlvlj*d2lambda_k_dn2(i,j,:)*UFdb%Qk) d2Gedn2(i,j) = -(sum_Q_v_dlambda_k_dn(i,j) + sum_Q_v_dlambda_k_dn(j,i)) - temp enddo enddo endif if (present(d2GedndT) .or. present(d2GedT2) .or. present(dGedT)) then do i=1,nc do k=1,ng sum_vijQjdEjkdT(i,k) = sum(UFdb%vik(i,:)*UFdb%Qk*dEjkdT(:,k)) sum_vijQjd2EjkdT2(i,k) = sum(UFdb%vik(i,:)*UFdb%Qk*d2EjkdT2(:,k)) enddo enddo ! Does this work? dlambda_ik_dT = sum_vijQjdEjkdT/sum_vijQjEjk d2lambda_ik_dT2 = sum_vijQjd2EjkdT2/sum_vijQjEjk - dlambda_ik_dT*dlambda_ik_dT do k=1,ng sum_nivijQjdEjkdT(k) = sum(n*sum_vijQjdEjkdT(:,k)) dlambda_k_dT(k) = sum_nivijQjdEjkdT(k)/sum_nivijQjEjk(k) d2lambda_k_dT2(k) = sum(n*sum_vijQjd2EjkdT2(:,k))/sum_nivijQjEjk(k) - dlambda_k_dT(k)**2 enddo do i=1,nc sum_vijQjdLdTdiff(i) = sum(UFdb%vik(i,:)*UFdb%Qk*(dlambda_k_dT - dlambda_ik_dT(i,:))) enddo endif ! Calculate differential of Ge/RT wrpt. ni and T if (present(d2GedndT)) then do i=1,nc d2lambda_k_dndT(i,:) = sum_vijQjdEjkdT(i,:)/sum_nivijQjEjk - sum_vijQjEjk(i,:)*sum_nivijQjdEjkdT/sum_nivijQjEjk**2 enddo do i=1,nc tempsum(i) = sum(sum_nlvlj*UFdb%Qk*d2lambda_k_dndT(i,:)) enddo d2GedndT = -sum_vijQjdLdTdiff - tempsum endif ! Calculate differential of Ge/RT wrpt. T if (present(dGedT)) then dGedT = -sum(n*sum_vijQjdLdTdiff) endif ! Calculate second differential of Ge/RT wrpt. T if (present(d2GedT2)) then d2GedT2 = 0.0 do i=1,nc d2GedT2 = d2GedT2 - n(i)*sum(UFdb%vik(i,:)*UFdb%Qk*(d2lambda_k_dT2 - d2lambda_ik_dT2(i,:))) enddo endif ! 
Final scaling of results if (present(d2GedT2)) then d2GedT2 = kRgas*(2.0*dGedT + T*d2GedT2) endif if (present(dGedT)) then dGedT = kRgas*(Ge + dGedT*T) endif if (present(d2GedndT)) then d2GedndT = kRgas*(dGedn + d2GedndT*T) endif Ge = Ge*kRgas*T if (present(dGedn)) then dGedn = dGedn*kRgas*T endif if (present(d2Gedn2)) then d2Gedn2 = d2Gedn2*kRgas*T endif end subroutine GeUNIFAC !> Calculate excess Gibbs combined from UNIFAC, Staverman-Guggenheim and !! Flory-Huggins !! \author MH, 2015 subroutine Ge_UNIFAC_GH_SG(n,T,UFdb,Ge,dGedT,d2GedT2,dGedn,d2GedndT,d2Gedn2) use thermopack_var, only: nc use thermopack_constants, only: kRgas ! real, dimension(nc), intent(in) :: n real, intent(in) :: T type (unifacdb), intent(in) :: UFdb real, intent(out) :: Ge ! [kJ] real, intent(out) :: dGedT,d2GedT2 real, dimension(nc), intent(out) :: dGedn,d2GedndT real, dimension(nc,nc), intent(out) :: d2Gedn2 ! Locals real :: Ge_FH, Ge_SG real, dimension(nc) :: dGedn_FH, dGedn_SG real, dimension(nc,nc) :: d2Gedn2_FH, d2Gedn2_SG call GeUNIFAC(n,T,UFdb,Ge,dGedT,d2GedT2,dGedn,d2GedndT,d2Gedn2) ! Ge = 0.0 ! dGedT = 0.0 ! d2GedT2 = 0.0 ! dGedn = 0.0 ! d2GedndT = 0.0 ! d2Gedn2 = 0.0 if (UFdb%FloryHuggins) then !print *,'FloryHuggins' call GeFloryHuggins(n,UFdb,Ge_FH,dGedn_FH,d2Gedn2_FH) Ge = Ge + Ge_FH*kRgas*T dGedn = dGedn + dGedn_FH*kRgas*T d2Gedn2 = d2Gedn2 + d2Gedn2_FH*kRgas*T dGedT = dGedT + Ge_FH*kRgas d2GedndT = d2GedndT + dGedn_FH*kRgas endif if (UFdb%StavermanGuggenheim) then !print *,'StavermanGuggenheim' call GeStavermanGuggenheim(n,UFdb,Ge_SG,dGedn_SG,d2Gedn2_SG) Ge = Ge + Ge_SG*kRgas*T dGedn = dGedn + dGedn_SG*kRgas*T d2Gedn2 = d2Gedn2 + d2Gedn2_SG*kRgas*T dGedT = dGedT + Ge_SG*kRgas d2GedndT = d2GedndT + dGedn_SG*kRgas endif end subroutine Ge_UNIFAC_GH_SG subroutine assign_unifacdb(u1,u2) class(unifacdb), intent(inout) :: u1 class(unifacdb), intent(in) :: u2 ! 
Locals integer :: ierr u1%FloryHuggins = u2%FloryHuggins u1%StavermanGuggenheim = u2%StavermanGuggenheim if (allocated(u2%mainGroupMapping)) then if (allocated(u1%mainGroupMapping)) then deallocate(u1%mainGroupMapping, stat=ierr) if (ierr /= 0) call stoperror("Not able to deallocate u1%mainGroupMapping") endif allocate(u1%mainGroupMapping(size(u2%mainGroupMapping,dim=1)), stat=ierr) if (ierr /= 0) call stoperror("Not able to allocate u1%mainGroupMapping") u1%mainGroupMapping = u2%mainGroupMapping endif if (allocated(u2%vik)) then if (allocated(u1%vik)) then deallocate(u1%vik, stat=ierr) if (ierr /= 0) call stoperror("Not able to deallocate u1%vik") endif allocate(u1%vik(size(u2%vik,dim=1),size(u2%vik,dim=2)), stat=ierr) if (ierr /= 0) call stoperror("Not able to allocate u1%vik") u1%vik = u2%vik endif if (allocated(u2%ajk)) then if (allocated(u1%ajk)) then deallocate(u1%ajk, stat=ierr) if (ierr /= 0) call stoperror("Not able to deallocate u1%ajk") endif allocate(u1%ajk(size(u2%ajk,dim=1),size(u2%ajk,dim=2)), stat=ierr) if (ierr /= 0) call stoperror("Not able to allocate u1%ajk") u1%ajk = u2%ajk endif if (allocated(u2%bjk)) then if (allocated(u1%bjk)) then deallocate(u1%bjk, stat=ierr) if (ierr /= 0) call stoperror("Not able to deallocate u1%bjk") endif allocate(u1%bjk(size(u2%bjk,dim=1),size(u2%bjk,dim=2)), stat=ierr) if (ierr /= 0) call stoperror("Not able to allocate u1%bjk") u1%bjk = u2%bjk endif if (allocated(u2%cjk)) then if (allocated(u1%cjk)) then deallocate(u1%cjk, stat=ierr) if (ierr /= 0) call stoperror("Not able to deallocate u1%cjk") endif allocate(u1%cjk(size(u2%cjk,dim=1),size(u2%cjk,dim=2)), stat=ierr) if (ierr /= 0) call stoperror("Not able to allocate u1%cjk") u1%cjk = u2%cjk endif if (allocated(u2%Qk)) then if (allocated(u1%Qk)) then deallocate(u1%Qk, stat=ierr) if (ierr /= 0) call stoperror("Not able to deallocate u1%Qk") endif allocate(u1%Qk(size(u2%Qk,dim=1)), stat=ierr) if (ierr /= 0) call stoperror("Not able to allocate u1%Qk") u1%Qk = u2%Qk endif if (allocated(u2%qi)) then if (allocated(u1%qi)) then deallocate(u1%qi, stat=ierr) if (ierr /= 0) call stoperror("Not able to deallocate u1%qi") endif allocate(u1%qi(size(u2%qi,dim=1)), stat=ierr) if (ierr /= 0) call stoperror("Not able to allocate u1%qi") u1%qi = u2%qi endif if (allocated(u2%ri)) then if (allocated(u1%ri)) then deallocate(u1%ri, stat=ierr) if (ierr /= 0) call stoperror("Not able to deallocate u1%ri") endif allocate(u1%ri(size(u2%ri,dim=1)), stat=ierr) if (ierr /= 0) call stoperror("Not able to allocate u1%ri") u1%ri = u2%ri endif end subroutine assign_unifacdb subroutine unifacdb_dealloc(u) use utilities, only: deallocate_real, deallocate_real_2 class(unifacdb), intent(inout) :: u ! Locals integer :: ierr ierr = 0 if (allocated(u%mainGroupMapping)) deallocate(u%mainGroupMapping, stat=ierr) if (ierr /= 0) call stoperror("unifacdb_dealloc: Not able to deallocate u%mainGroupMapping") if (allocated(u%vik)) deallocate(u%vik, stat=ierr) if (ierr /= 0) call stoperror("unifacdb_dealloc: Not able to deallocate u%vik") call deallocate_real_2(u%ajk,"u%ajk") call deallocate_real_2(u%bjk,"u%bjk") call deallocate_real_2(u%cjk,"u%cjk") call deallocate_real(u%Qk,"u%Qk") call deallocate_real(u%qi,"u%qi") call deallocate_real(u%ri,"u%ri") end subroutine unifacdb_dealloc end module unifac
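For reference, the combinatorial terms evaluated by GeFloryHuggins and GeStavermanGuggenheim above reduce to the following expressions (a sketch of what the code computes, with the coordination-number factor $z/2 = 5$ hard-coded in the routine; $\phi_i$, $\theta_i$ and $x_i$ denote the volume, surface-area and mole fractions built from $r_i$, $q_i$ and the mole numbers $n_i$):

$$
\frac{G^E_{\mathrm{FH}}}{RT} = \sum_i n_i \ln\frac{\phi_i}{x_i},
\qquad
\frac{G^E_{\mathrm{SG}}}{RT} = \frac{z}{2}\sum_i n_i q_i \ln\frac{\theta_i}{\phi_i},
\qquad
\phi_i = \frac{n_i r_i}{\sum_j n_j r_j},\quad
\theta_i = \frac{n_i q_i}{\sum_j n_j q_j},\quad
x_i = \frac{n_i}{\sum_j n_j}.
$$

Both routines return the reduced (dimensionless) value; Ge_UNIFAC_GH_SG rescales the contributions by $R T$ (kRgas*T) when assembling the total excess Gibbs energy.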
module Main import IdrisScript import IdrisScript.Date main : JS_IO () main = do current <- now if !(getDay current) == Friday then putStrLn' "It's Friday! I'm in love!" else putStrLn' "Meh!"
subroutine secbou(j ,nmmaxj ,kmax ,icx ,icy , & & lstsci ,lsecfl ,kfu ,irocol ,norow , & & s0 ,s1 ,dps ,r1 ,sour , & & sink ,gdp ) !----- GPL --------------------------------------------------------------------- ! ! Copyright (C) Stichting Deltares, 2011-2016. ! ! This program is free software: you can redistribute it and/or modify ! it under the terms of the GNU General Public License as published by ! the Free Software Foundation version 3. ! ! This program is distributed in the hope that it will be useful, ! but WITHOUT ANY WARRANTY; without even the implied warranty of ! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ! GNU General Public License for more details. ! ! You should have received a copy of the GNU General Public License ! along with this program. If not, see <http://www.gnu.org/licenses/>. ! ! contact: [email protected] ! Stichting Deltares ! P.O. Box 177 ! 2600 MH Delft, The Netherlands ! ! All indications and logos of, and references to, "Delft3D" and "Deltares" ! are registered trademarks of Stichting Deltares, and remain the property of ! Stichting Deltares. All rights reserved. ! !------------------------------------------------------------------------------- ! $Id: secbou.f90 5717 2016-01-12 11:35:24Z mourits $ ! $HeadURL: https://svn.oss.deltares.nl/repos/delft3d/tags/6686/src/engines_gpl/flow2d3d/packages/kernel/src/compute/secbou.f90 $ !!--description----------------------------------------------------------------- ! ! Function: Horizontal boundary conditions for secondary ! flow (spiral motion intensity) is stored in R1 ! The horizontal and vertical boundary conditions ! assume I_BE + I_CE ! Secondary flow only in 2D situation ! Method used: ! !!--pseudo code and references-------------------------------------------------- ! NONE !!--declarations---------------------------------------------------------------- use precision ! use globaldata ! implicit none ! type(globdat),target :: gdp ! ! The following list of pointer parameters is used to point inside the gdp structure ! real(fp) , pointer :: eps real(fp) , pointer :: dryflc ! ! Global variables ! integer, intent(in) :: icx !! Increment in the x-dir., if icx= nmax !! then computation proceeds in the x- !! dir. if icx=1 then computation pro- !! ceeds in the y-dir. integer, intent(in) :: icy !! Increment in the y-dir. (see icx) integer :: j !! Begin pointer for arrays which have !! been transformed into 1d arrays. !! due to the shift in the 2nd (m-) !! index, j = -2*nmax + 1 integer, intent(in) :: kmax ! Description and declaration in esm_alloc_int.f90 integer, intent(in) :: lsecfl ! Description and declaration in dimens.igs integer, intent(in) :: lstsci ! Description and declaration in esm_alloc_int.f90 integer :: nmmaxj ! Description and declaration in dimens.igs integer, intent(in) :: norow ! Description and declaration in esm_alloc_int.f90 integer, dimension(5, norow), intent(in) :: irocol ! Description and declaration in esm_alloc_int.f90 integer, dimension(gdp%d%nmlb:gdp%d%nmub), intent(in) :: kfu ! Description and declaration in esm_alloc_int.f90 real(prec), dimension(gdp%d%nmlb:gdp%d%nmub), intent(in) :: dps ! Description and declaration in esm_alloc_real.f90 real(fp), dimension(gdp%d%nmlb:gdp%d%nmub), intent(in) :: s0 ! Description and declaration in esm_alloc_real.f90 real(fp), dimension(gdp%d%nmlb:gdp%d%nmub), intent(in) :: s1 ! Description and declaration in esm_alloc_real.f90 real(fp), dimension(gdp%d%nmlb:gdp%d%nmub, kmax, lstsci), intent(out) :: r1 ! 
Description and declaration in esm_alloc_real.f90 real(fp), dimension(gdp%d%nmlb:gdp%d%nmub, kmax, lstsci), intent(in) :: sink ! Description and declaration in esm_alloc_real.f90 real(fp), dimension(gdp%d%nmlb:gdp%d%nmub, kmax, lstsci), intent(in) :: sour ! Description and declaration in esm_alloc_real.f90 ! ! ! Local variables ! integer :: ddb integer :: ic integer :: icxy integer :: k ! 2dH application integer :: mf ! IROCOL(2,IC)-1 integer :: ml ! IROCOL(3,IC) integer :: n ! IROCOL(1,IC) integer :: nmf integer :: nmfu ! NMF+ICX integer :: nml integer :: nmlu ! NML+ICX real(fp) :: epsh real(fp) :: equili ! Sour/sink real(fp) :: h0new real(fp) :: h0old real(fp) :: sinkhn ! ! !! executable statements ------------------------------------------------------- ! eps => gdp%gdconst%eps dryflc => gdp%gdnumeco%dryflc ! epsh = 0.5*eps*dryflc ! ddb = gdp%d%ddbound icxy = max(icx, icy) k = 1 ! ! loop over computational grid ! do ic = 1, norow ! n = irocol(1, ic) mf = irocol(2, ic) - 1 ml = irocol(3, ic) nmf = (n + ddb)*icy + (mf + ddb)*icx - icxy nml = (n + ddb)*icy + (ml + ddb)*icx - icxy nmlu = nml + icx nmfu = nmf + icx ! !***open boundary at begin of row ! if (kfu(nmf)==1) then h0new = s1(nmfu) + real(dps(nmfu),fp) sinkhn = sink(nmfu, k, lsecfl)*h0new if (abs(sinkhn)>epsh) then h0old = s0(nmfu) + real(dps(nmfu),fp) equili = sour(nmfu, k, lsecfl)*h0old/sinkhn r1(nmf, k, lsecfl) = equili else r1(nmf, k, lsecfl) = 0.0 endif endif ! ! open boundary at end of row ! if (kfu(nml)==1) then h0new = s1(nml) + real(dps(nml),fp) sinkhn = sink(nml, k, lsecfl)*h0new if (abs(sinkhn)>epsh) then h0old = s0(nml) + real(dps(nml),fp) equili = sour(nml, k, lsecfl)*h0old/sinkhn r1(nmlu, k, lsecfl) = equili else r1(nmlu, k, lsecfl) = 0.0 endif endif enddo end subroutine secbou
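At each open boundary point the loop above assigns the equilibrium spiral-motion intensity formed from the source and sink terms of the adjacent interior point; schematically (a sketch of the assignment in the code, with $h^{\text{old}} = s_0 + d_{ps}$ and $h^{\text{new}} = s_1 + d_{ps}$ the old and new water depths, and $\varepsilon_h = 0.5\,\mathrm{eps}\,\mathrm{dryflc}$ as set in the routine):

$$
r_1 \;=\;
\begin{cases}
\dfrac{\mathrm{sour}\; h^{\text{old}}}{\mathrm{sink}\; h^{\text{new}}}, & \left|\mathrm{sink}\; h^{\text{new}}\right| > \varepsilon_h,\\[1ex]
0, & \text{otherwise.}
\end{cases}
$$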
import numpy as np from gym import utils from gym.envs.mujoco import mujoco_env class HalfCheetahRandomParamsDiscreteEnv(mujoco_env.MujocoEnv, utils.EzPickle): def __init__(self): mujoco_env.MujocoEnv.__init__(self, 'half_cheetah.xml', 5) utils.EzPickle.__init__(self) def step(self, action): xposbefore = self.sim.data.qpos[0] self.do_simulation(action, self.frame_skip) xposafter = self.sim.data.qpos[0] ob = self._get_obs() reward_ctrl = - 0.1 * np.square(action).sum() reward_run = (xposafter - xposbefore)/self.dt reward = reward_ctrl + reward_run done = False return ob, reward, done, dict(reward_run=reward_run, reward_ctrl=reward_ctrl) def _get_obs(self): return np.concatenate([ self.sim.data.qpos.flat[1:], self.sim.data.qvel.flat, ]) def reset_model(self): # torso mass if self.np_random.rand() < 0.1: torso_mass = 13.0 else: torso_mass = 1.0 self.model.body_mass[1] = torso_mass # ground friction if self.np_random.rand() < 0.1: ground_friction = 3.1 else: ground_friction = 0.1 for i in range(0, 9): self.model.geom_friction[i][0] = ground_friction # joint damping if self.np_random.rand() < 0.1: dof_damping = 11.0 else: dof_damping = 1.0 for i in range(3,9): self.model.dof_damping[i] = dof_damping #print(torso_mass) #print(ground_friction) #print(dof_damping) #print() qpos = self.init_qpos + self.np_random.uniform(low=-.1, high=.1, size=self.model.nq) qvel = self.init_qvel + self.np_random.randn(self.model.nv) * .1 self.set_state(qpos, qvel) return self._get_obs() def viewer_setup(self): self.viewer.cam.distance = self.model.stat.extent * 0.5
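A minimal usage sketch for the environment above, assuming MuJoCo is installed and the file is importable under the hypothetical module name half_cheetah_random_params; the import path and rollout length are illustrative choices, while the 4-tuple step return matches the class as written.

```python
# Hypothetical module name for the file above
from half_cheetah_random_params import HalfCheetahRandomParamsDiscreteEnv

env = HalfCheetahRandomParamsDiscreteEnv()
obs = env.reset()                       # re-samples torso mass, friction and damping
total_reward = 0.0
for _ in range(100):
    action = env.action_space.sample()  # random policy, just to exercise the env
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break
print(total_reward, info["reward_run"], info["reward_ctrl"])
```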
[STATEMENT] lemma new_free_vars_on_heap: assumes "\<Gamma> : e \<Down>\<^bsub>L\<^esub> \<Delta> : z" shows "fv (\<Delta>, z) - domA \<Delta> \<subseteq> fv (\<Gamma>, e) - domA \<Gamma>" [PROOF STATE] proof (prove) goal (1 subgoal): 1. fv (\<Delta>, z) - domA \<Delta> \<subseteq> fv (\<Gamma>, e) - domA \<Gamma> [PROOF STEP] using reds_fresh_fv[OF assms(1)] reds_doesnt_forget[OF assms(1)] [PROOF STATE] proof (prove) using this: ?x \<in> fv (\<Delta>, z) \<and> (?x \<notin> domA \<Delta> \<or> ?x \<in> set L) \<Longrightarrow> ?x \<in> fv (\<Gamma>, e) domA \<Gamma> \<subseteq> domA \<Delta> goal (1 subgoal): 1. fv (\<Delta>, z) - domA \<Delta> \<subseteq> fv (\<Gamma>, e) - domA \<Gamma> [PROOF STEP] by auto
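Informal statement (a paraphrase of the lemma above): if $\Gamma : e \Downarrow_L \Delta : z$, then
$$
\mathrm{fv}(\Delta, z) \setminus \mathrm{domA}\,\Delta \;\subseteq\; \mathrm{fv}(\Gamma, e) \setminus \mathrm{domA}\,\Gamma,
$$
i.e. evaluation may allocate new bindings on the heap but introduces no new free variables.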