# Classical Euler methods

### Table of contents
1. Chapter 2: Classical Methods
    1. [Section 1.1: Explicit Euler](#explicitEuler)
    1. [Section 1.2: Implicit Euler](#implicitEuler)

```python
# If you do not have numpy, matplotlib, scipy or nodepy, run this cell
!pip install numpy       # This is the basic package in python with all the numerical functions
!pip install scipy       # This package has some functions to deal with polynomials
!pip install matplotlib  # This package allows to plot
!pip install nodepy      # This package has some interesting features for RK methods
```

```python
# We need a couple of packages in this chapter
import numpy as np               # This is the basic package in python with all the numerical functions
import matplotlib.pyplot as plt  # This package allows to plot
from scipy import optimize       # This module has optimization tools
from nodepy import *             # This package has already implemented some functions for Runge-Kutta and multistep methods
```

We want to **approximate** the ODE on $I:=[t_0, t_{end}]\subset \mathbb{R}$ for the unknown variable $y:I\to \mathbb{R}^{S}$ with *continuous* function $F:I\times \mathbb{R}^S\to\mathbb{R}^S$

$$
\begin{equation}
\begin{cases}\label{eq:IVP}
\frac{dy}{dt} = F(t,y(t)),\\
y(0)=y_0.
\end{cases}
\end{equation}
$$

* Why approximate? The solution may be unknown, or too complicated to compute analytically.
* How do we want to approximate? We want to be **accurate** and we want to preserve the physical properties we have seen before.

## Explicit Euler <a id='explicitEuler'></a>

Consider the time interval $I=[t_0,t_{end}]$ and let us subdivide it into $N$ subintervals,

$$t_0=t^0<t^1< \dots <t^n < \dots <t^N=t_{end}.$$

We approximate naïvely the integral form

$$
y(t^{n+1})=y(t^n) +\int_{t^n}^{t^{n+1}} F(s,y(s))\,ds \approx y(t^n) +\underbrace{(t^{n+1}-t^n)}_{\Delta t^n} F(t^n,y(t^n)),
$$

leading to the method (forward Euler/explicit Euler), where we use $y^n$ to approximate $y(t^n)$:

$$
\begin{cases}
y^0=y_0,\\
y^{n+1}=y^n+\Delta t^n F(t^n,y^n), \qquad n=0,\dots, N-1.
\end{cases}
$$

```python
def explicitEuler(func, tspan, y_0):
    '''
    Simple implementation of the explicit Euler method.
    Input are
    func: the function F of the ODE, which takes as input y and t: F(y,t)
    tspan: the vector of all timesteps (t^0,...,t^N)
    y_0: the initial condition
    '''
    N_time=len(tspan)         # N+1
    dim=len(y_0)              # S
    y=np.zeros((dim,N_time))  # initializing the variable of solutions
    y[:,0]=y_0                # first timestep
    for n in range(N_time-1): # n=0,..., N-1
        # explicit Euler step: y^{n+1} = y^n + dt^n * F(t^n, y^n)
        y[:,n+1]=y[:,n]+(tspan[n+1]-tspan[n])*func(y[:,n],tspan[n])
    return tspan, y
```

Test on

$$
\begin{equation}\label{eq:linear_test}
\begin{aligned}
& \begin{cases}
c_1'(t)=c_2(t)-5c_1(t),\\
c_2'(t)=5c_1(t)-c_2(t),
\end{cases}\\
&c_1(0)=c_1^0=0.9, \quad c_2(0)=c_2^0=0.1\,,\\
&t\in [0,3].
\end{aligned}
\end{equation}
$$

```python
# Define the function F
def linSysF(y,t=0):
    # evolution function of the linear test system
    F=np.zeros(np.shape(y))
    F[0] = y[1]-5*y[0]
    F[1] = 5*y[0]-y[1]
    return F

## Now we plot the solution with different number of timesteps
for N in [100,30,10]:
    tspan=np.linspace(0,3,N)
    y0=np.array([0.9,0.1])
    tt,yy=explicitEuler(linSysF,tspan,y0)
    A=np.array([[-5,1],[5,-1]])
    y_exact=np.zeros((len(y0),len(tt)))
    for it, t in enumerate(tt):
        y_exact[:,it]=y0+(1-np.exp(-6*t))/6*np.dot(A,y0)
    plt.figure()
    plt.plot(tt,y_exact[0,:],":", label="c1 ex")
    plt.plot(tt,y_exact[1,:],":", label="c2 ex")
    plt.plot(tt,yy[0,:],label="c1")
    plt.plot(tt,yy[1,:],label="c2")
    plt.title("N=%d"%N)
    plt.legend()
```
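*Why this formula for `y_exact`?* For this particular matrix one can check that $A^2=-6A$, so $A^k=(-6)^{k-1}A$ for $k\geq 1$ and the exponential series collapses:

$$
e^{tA} = I + \sum_{k\geq 1} \frac{t^k A^k}{k!} = I + \frac{1-e^{-6t}}{6} A,
$$

hence $y(t)=e^{tA}y_0=y_0+\frac{1-e^{-6t}}{6}Ay_0$, which is exactly the expression implemented above.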
Preliminarily, we can observe that
1. The more points we put in the time discretization, the better the solution gets
1. Explicit Euler does not preserve **unconditionally** the positivity of the solution ($N=10$)
1. The total mass is conserved
$$
c_1^{n+1}+c_2^{n+1}=c_1^{n}+c_2^{n}+ \Delta t\left( -5c_1^{n}+c_2^{n}+5c_1^{n}-c_2^{n} \right) = c_1^{n}+c_2^{n}
$$

### Error analysis

The error that we observe
$$e_n=y(t^n)-y^n$$
is composed of several parts that we can separate and study independently.

#### Consistency error

Given the exact solution $y(t)$, we define the consistency error to be
$$
\varepsilon_n = y(t^{n+1})-y(t^n) - \Delta t F(t^n,y(t^n)) = \int_{t^n}^{t^{n+1}} y'(t) -y'(t^n)\, dt.
$$
Notice that $|\varepsilon_n|\leq \Delta t \,\omega (y',\Delta t)$, where $\omega$ is the modulus of continuity of a bounded function, i.e.,
$$
\omega(f,\Delta t):= \max_{t,t': |t-t'|\leq \Delta t} |f(t)-f(t')|.
$$
Essentially, this is the error that we obtain by substituting the exact solution into the method. It is one of the two ingredients that drive the error of a method.

Going back to the error, we observe that
$$
e_{n+1}=y(t^{n+1})-y^{n+1}=e_n +\varepsilon_n +\Delta t \big(F(t^n,y(t^n))-F(t^n,y^n)\big);
$$
using the Lipschitz continuity of $F$, we have
$$
|e_{n+1}|\leq |e_n| +|\varepsilon_n| +\Delta t L|y(t^n)-y^n| =(1+L\Delta t)|e_n| +|\varepsilon_n|.
$$
Using the **Discrete Gronwall Lemma** we obtain
$$
|e_{n}|\leq e^{L|t^n-t^0|}|e_0| + \sum_{i=0}^{n-1} e^{L(t^n-t^{i+1})}|\varepsilon_i|.
$$
This tells us that, except for the initial error (which usually we can bound accurately or know exactly), the consistency error leads this sum. So, if we keep $\varepsilon_n$ small enough, the final error will be small enough. Using the estimate for $\varepsilon_i$ and supposing $\Delta t^n=\Delta t$, we can collect
$$
\begin{align}
|e_{n}|&\leq e^{L|t^n-t^0|}|e_0| + \Delta t \,\omega(y',\Delta t) \sum_{i=0}^{n-1} e^{L(t^n-t^{i+1})}\\
&\leq e^{L|t^n-t^0|}|y^0-y(t^0)| + \Delta t \,\omega(y',\Delta t) \frac{e^{L(t^n-t^{0})}-1}{L}.
\end{align}
$$
This shows that the solution converges to the exact one as $\Delta t \to 0$, if the initial datum is correct. If we know more about the regularity of the solution ($y\in \mathcal C^2$), we can say that
$$
|y(t^n)-y^n|\leq e^{L(t^n-t^0)}|y^0-y(t^0)| + \Delta t \int_{t^0}^{t^n} e^{L(t^n-s)} |y''(s)| ds.
$$

#### Local vs Global Error

A small remark must be made in order to understand how the global error is generated from the local one. The local truncation error is the one committed in a single time step, i.e., using a Taylor expansion and supposing $y^0=y(t^0)$,
$$
e_1=|y^1-y(t^1)|=\left|y^0 +\Delta t F(t^0,y^0) - \left(y(t^0) + \Delta t y'(t^0) + \frac{1}{2} \Delta t^2 y''(t^0) + \mathcal{O}(\Delta t^3)\right)\right| = \frac{1}{2} \Delta t^2 |y''(t^0)| + \mathcal{O}(\Delta t^3).
$$
In one step we see an error of $\mathcal O (\Delta t^2)$; when integrating over the whole time interval $[t^0,t^N]$ one obtains an $\mathcal O (\Delta t)$, as we have seen before. Naïvely, one can see it as if in every step we were adding an $\mathcal O (\Delta t^2)$ to the global error:
$$
e_N\approx \sum_{i=1}^N |y(t^i)-y^i| \approx N \Delta t^2 \max_{t\in [t^0,t^N]} |y''(t)| = \frac{t^N-t^0}{\Delta t} \Delta t^2\max_{t\in [t^0,t^N]} |y''(t)|= (t^N-t^0)\, \Delta t\max_{t\in [t^0,t^N]} |y''(t)|.
$$
The **order of accuracy** of a method is the largest *integer* $p$ such that the error can be bounded by
$$
|e_N| \leq C \Delta t^p \qquad \forall \Delta t \in \mathbb R ^+.
$$
This definition is of course meant to be verified in the limit $\Delta t\to 0$ (in realistic cases we stop at $\approx 10^{-14}$). The explicit Euler method is of order (at least) 1 (one can check with numerical tests that it is not of order 2).

#### Roundoff effects

We should always keep in mind that the error we studied above is not the only one that a computer produces. Indeed, at each operation (initial value, evaluation of $F$, sums, products) we introduce a roundoff error due to the machine precision. One can derive estimates similar to the ones above, showing that this error stays controlled. We ignore this error in this course.

### Stability

Question: letting $\Delta t \to 0$ should produce nice results, but what can we say about $\Delta t \gg 0$? Can we say qualitatively when a method is stable/reliable, in particular when *stiff* problems are considered? *Stiff* problems are the ones for which a *standard* discretization cannot produce decent results. This rough description can be made more precise and studied only in limited cases. In particular, we restrict ourselves to linear problems with constant coefficients,
$$
y'(t)=My(t)
$$
with $M$ a constant matrix. We fix the timestep $\Delta t$. The explicit Euler method gives the update
$$
y^{n}=y^{n-1}+\Delta t M y^{n-1} =(I+\Delta t M) y^{n-1} = (I+\Delta t M)^{n} y^0.
$$
Doing a change of basis given by the nonsingular matrix $S$ so that $\hat{M}=S^{-1} M S$ is in Jordan canonical form, and defining $\hat{y}^n=S^{-1} y^n$, we have that
$$
\hat y^{n}=\hat y^{n-1}+\Delta t \hat M \hat{y}^{n-1} =(I+\Delta t \hat M) \hat{y}^{n-1} = (I+\Delta t \hat M)^{n} \hat{y}^0.
$$
This means that for each distinct eigenvalue $q$ we can study the linear scalar equation
$$
y'(t)= q y(t).
$$
The other components that correspond to the same Jordan block will depend on this solution, but will not contribute to its behaviour.

The final question is whether $(1+\Delta t q)^N$ is an *acceptable* approximation of $e^{N\Delta t q }$. We are interested in bounded behaviors for $N\to \infty$: the exact solution stays bounded when $\mathrm{Re}(q)\leq 0$, and the numerical one when $|1+\Delta t q|\leq 1$ ($q$ could be complex, as it is an eigenvalue of $M$). Rewriting the problem with $z=q\Delta t$, one can see that
$$
y^n=y^{n-1}+z y^{n-1}=(1+z)y^{n-1}
$$
and the method will be stable if $|1+z|\leq 1$, which describes the region of the complex plane given by the circle of radius 1 centered at $(-1,0)$. The function $R(z):=1+z$ is the *stability function* of the explicit Euler method. For a general method applied to Dahlquist's equation
$$
y'(t)=qy(t),
$$
denoting $z=\Delta t q$, the method can be written as
$$
y^{n+1}=R(z) y^n.
$$

```python
## We will see soon how to write a RK method
## This is the explicit Euler method written into the RK formalism
## and we plot the stability region using the nodepy module
A=np.array([[0]])
b=np.array([1])
exEuler=runge_kutta_method.ExplicitRungeKuttaMethod(A,b)
p,q=exEuler.stability_function()
print(p)
exEuler.plot_stability_region();
```

#### How can we ensure that we stay in the stability region?

We want $z=q\Delta t$ to stay in the stability region. On $q$ we have no control; hence, we can only modify $\Delta t$.
In particular, denoting $q=p+ir$ with $p,r \in \mathbb R$ and $p\leq 0$, the stability relation we have seen before requires at least that the real part verifies
$$
|1+\Delta t p|\leq|1+\Delta t p + i \Delta t r|\leq 1
\quad\Longrightarrow\quad
-1\leq 1-\Delta t |p|
\quad\Longrightarrow\quad
\Delta t \leq \frac{2}{|p|},
$$
where $|p|$ is surely bounded by the Lipschitz constant $L$ of the function $F$. So, it is necessary to check that
$$
\Delta t \leq \frac{2}{L}.
$$
This can be generalized also to nonlinear problems.

#### Imaginary eigenvalues

If the problem we are considering has only imaginary eigenvalues, then we cannot solve it stably with the explicit Euler method. An example is
$$
u''=-u,
$$
whose exact solution, for the initial data below, is
$$
u=\sin(t).
$$
We can put it into a system of first order ODEs with initial conditions:
$$
\begin{cases}
u'=v,\\
v'=-u,\\
u(0) = 0,\\
v(0) = 1.
\end{cases}
$$

```python
# Define the function F
def linSysF(y,t=0):
    # evolution function
    F=np.zeros(np.shape(y))
    F[0] = y[1]
    F[1] = -y[0]
    return F

## Now we plot the solution
dt = 1
T_end = 100
tspan=np.linspace(0,T_end,int(T_end/dt)+1)
y0=np.array([0,1])
tt,yy=explicitEuler(linSysF,tspan,y0)
plt.figure()
plt.plot(tt,np.sin(tt),":", label="c1 ex")
plt.plot(tt,yy[0,:],label="c1")
plt.title("dt=%g"%dt)
plt.legend()
plt.show()
plt.plot(tt,yy[0,:]-np.sin(tt))
plt.title("Error")
```

## Implicit Euler <a id='implicitEuler'></a>

The implicit Euler method approximates our problem with the following strategy:
$$
y^{n+1}=y^n +\Delta t F(t^{n+1},y^{n+1}).
$$

1. It is not always easy to find the solution of such a method; for example, when $F$ is nonlinear, one may need nonlinear solvers (e.g. Newton's method, Broyden, and so on)
1. We can compute the error estimates similarly to explicit Euler, obtaining that implicit Euler is also a *first* order method
1. More interesting are the **stability** properties of this scheme. Consider again Dahlquist's equation $$y'=qy$$ and the implicit Euler method:
$$
\begin{align}
y^{n+1}=y^n+ \Delta t q y^{n+1},\\
(1-\Delta t q) y^{n+1}=y^n,\\
y^{n+1}=\frac{1}{1-\Delta t q} y^n.
\end{align}
$$

So the stability function is $R(z)=\frac{1}{1-z}$ and the stability region $\mathcal S := \lbrace z \in \mathbb C : |R(z)|\leq 1 \rbrace$ contains the whole left complex semiplane. Indeed, if $\mathrm{Re}(z)\leq 0$, then $\mathrm{Re}(1-z)\geq 1$ and $|1-z|\geq 1$, so $|R(z)|\leq 1$.

```python
## We will see soon how to write a RK method
## This is the implicit Euler and we plot the stability region
A=np.array([[1]])
b=np.array([1])
imEuler=runge_kutta_method.RungeKuttaMethod(A,b)
p,q=imEuler.stability_function()
print(p) ## Numerator
print(q) ## Denominator
imEuler.plot_stability_region(bounds=[-10,4, -5,5]);
```
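As a quick sanity check of the two stability functions (a minimal snippet using only the formulas derived above): on the imaginary axis, where the eigenvalues $\pm i$ of the oscillator problem live, $|1+z|>1$ while $|1/(1-z)|<1$, which explains the growing amplitude of explicit Euler on $u''=-u$ and the damping implicit Euler would produce instead.

```python
z = 1j*np.linspace(0.1, 2, 20)         # z = i*dt*r, purely imaginary eigenvalues
R_ex = np.abs(1+z)                     # explicit Euler stability function |1+z|
R_im = np.abs(1/(1-z))                 # implicit Euler stability function |1/(1-z)|
print(R_ex.min() > 1, R_im.max() < 1)  # True True: amplification vs damping

# For the linear test system, the eigenvalues of M are 0 and -6, so the bound
# dt <= 2/|p| predicts stability of explicit Euler only for dt <= 1/3; with N=10
# on [0,3] (dt = 3/9) we sit exactly at the boundary, where positivity is lost.
M = np.array([[-5.,1.],[5.,-1.]])
print(np.linalg.eigvals(M))            # approximately [ 0. -6.]
```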
#### Unconditionally TVD/positivity preserving

For some classes of linear problems, it can be shown that implicit Euler is **positivity preserving** for positive systems or, more generally for finite difference methods for hyperbolic conservation laws, that it is total variation diminishing (TVD). The crucial point is that these properties hold independently of the size of $\Delta t$. For TVD one can read the SSPRK article by Gottlieb, Shu and Tadmor [link](https://www.researchgate.net/publication/2365594_Strong_Stability-Preserving_High-Order_Time_Discretization_Methods).

##### TVD for problems in incremental form

The implicit Euler method for problems in incremental form, i.e.,
$$
U^{n+1}_j=U^{n}_j +\Delta t \left [ C_{j+1/2}(U_{j+1}^{n+1}-U_{j}^{n+1})-D_{j-1/2}(U_{j}^{n+1}-U_{j-1}^{n+1}) \right]
$$
where $C_{j+1/2},D_{j+1/2}\geq 0$, is TVD independently of $\Delta t$.

###### Proof (Harten)

Define
$$
TV(U^n) = \sum_j |U^n_{j+1}-U^n_j|.
$$
We can compute
$$
U^{n+1}_j=U^{n}_j +\Delta t \left [ C_{j+1/2}(U_{j+1}^{n+1}-U_{j}^{n+1})-D_{j-1/2}(U_{j}^{n+1}-U_{j-1}^{n+1}) \right]\\
[1+\Delta t (C_{j+1/2}+D_{j+1/2})](U^{n+1}_{j+1}-U_j^{n+1})=U^{n}_{j+1}-U_j^{n} +\Delta t \left [ C_{j+3/2}(U_{j+2}^{n+1}-U_{j+1}^{n+1})+D_{j-1/2}(U_{j}^{n+1}-U_{j-1}^{n+1}) \right]\\
[1+\Delta t (C_{j+1/2}+D_{j+1/2})]|U^{n+1}_{j+1}-U_j^{n+1}|\leq|U^{n}_{j+1}-U_j^{n}| +\Delta t \left [ C_{j+3/2}|U_{j+2}^{n+1}-U_{j+1}^{n+1}|+D_{j-1/2}|U_{j}^{n+1}-U_{j-1}^{n+1}| \right]\\
TV(U^{n+1}) +\Delta t \sum_j(C_{j+1/2}+D_{j+1/2})|U^{n+1}_{j+1}-U^{n+1}_j| \leq TV(U^{n}) +\Delta t \sum_j(C_{j+1/2}+D_{j+1/2})|U^{n+1}_{j+1}-U^{n+1}_j| \\
TV(U^{n+1}) \leq TV(U^n),
$$
where, after summing over $j$ and shifting indices, the two sums on the left and on the right coincide and cancel.

Reminder: for conservation laws, total variation diminishing means that the total variation of the discrete solution does not increase in time, so the scheme does not create new spurious oscillations.

##### Positivity for production-destruction systems

We will see the positivity for a specific case: a production-destruction system with constant coefficients. It can be written as
$$
y'=My
$$
with
$$
M_{ii}<0,\qquad M_{ij}\geq 0,\, i\neq j, \qquad \sum_{i}M_{ij}=0.
$$
The linear system at the beginning of this chapter falls into this class. This system stays positive if $y_i^0\geq 0$. The implicit Euler method is also positive:
$$
(I-\Delta t M)y^{n+1}= y^{n}.
$$

##### Theorem

Defining $A:=I-\Delta t M$, we can prove that $A$ is nonsingular and that $A^{-1}\geq 0$, i.e., every entry of the matrix is nonnegative.

##### Proof

1. $A$ is strictly diagonally dominant by columns. Indeed,
$$
0< A_{ii} = 1+\Delta t |M_{ii}| > \Delta t \sum_{j:j\neq i} |M_{ji}| = \sum_{j:j\neq i} |A_{ji}|.
$$
Hence, $A$ is nonsingular.

2. The Jacobi method converges and the Jacobi matrix is nonnegative ([Jacobi method](https://en.wikipedia.org/wiki/Jacobi_method)). Define the Jacobi matrix $B=D^{-1}(D-A)$, with $D=\text{diag}(A)$. The diagonal of $B$ is 0, and $B$ is similar to $DBD^{-1}=(D-A)D^{-1}$, whose off-diagonal entries are
$$
[(D-A)D^{-1}]_{ji}=\frac{-A_{ji}}{A_{ii}}, \quad j\neq i.
$$
So, by the strict diagonal dominance by columns, the spectral radius of $B$ is bounded by
$$
\rho(B) = \rho\big((D-A)D^{-1}\big) \leq \big\|(D-A)D^{-1}\big\|_{1} =\max_{i}\sum_{j\neq i} \frac{|A_{ji}|}{|A_{ii}|} < 1,
$$
hence the iterative Jacobi method converges to the solution of $Ay^{n+1}=y^n$. The method reads
$$
w^{(k+1)}=D^{-1}\big(y^n + (D-A)w^{(k)}\big),
$$
which is a combination of nonnegative matrices and vectors: $D^{-1}$ has positive diagonal and $(D-A)\geq 0$ entrywise. Hence, the iterates $w^{(k)}$ stay nonnegative if $y^n$ is nonnegative. Knowing that $Dy^{n+1}=(D-A)y^{n+1} +y^n$, the error at each iteration reads
$$
e^{(k+1)}:=w^{(k+1)}-y^{n+1} = D^{-1}\big(y^n + (D-A)w^{(k)}\big)-D^{-1}\big(y^n + (D-A)y^{n+1}\big)=D^{-1}(D-A)(w^{(k)}-y^{n+1})= B e^{(k)}.
$$
Knowing that $\rho(B)<1$, the iteration process converges to the solution of the system, which is therefore nonnegative.
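A quick numerical confirmation of the theorem on the production-destruction matrix used throughout this chapter (a minimal check; by the theorem, any $\Delta t>0$ should print `True`):

```python
M = np.array([[-5.,1.],[5.,-1.]])  # production-destruction matrix from above
for dt in [0.01, 0.3, 10.0]:
    A = np.eye(2) - dt*M           # the matrix of the implicit Euler system
    print(dt, np.all(np.linalg.inv(A) >= 0))  # A^{-1} >= 0 entrywise
```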
```python
def implicitEulerLinear(M, tspan, y_0):
    '''
    Simple implementation of the implicit Euler for linear systems y'=My, with M a constant matrix.
    Input are
    M: the ODE constant matrix
    tspan: vector of timesteps (t^0,...,t^N)
    y_0: initial value
    '''
    N_time=len(tspan)         # N+1
    dim=len(y_0)              # S
    y=np.zeros((dim,N_time))  # initializing the variable of solutions
    y[:,0]=y_0                # first timestep
    for n in range(N_time-1): # n=0,..., N-1
        # the matrix that needs to be inverted: A = I - dt*M
        A= np.eye(dim)-(tspan[n+1]-tspan[n])*M
        y[:,n+1]=np.linalg.solve(A,y[:,n])
    return tspan, y
```

```python
# Test implicit Euler on the linear systems: first the production destruction system with matrix
linSysM=np.array([[-5,1],[5,-1]])

def linSysF(y,t=0):
    # evolution function
    F=np.zeros(np.shape(y))
    F[0] = y[1]-5*y[0]
    F[1] = 5*y[0]-y[1]
    return F

for N in [100,30,10]:
    tspan=np.linspace(0,3,N)
    y0=np.array([0.9,0.1])
    tt,yy=implicitEulerLinear(linSysM,tspan,y0)
    A=np.array([[-5,1],[5,-1]])
    y_exact=np.zeros((len(y0),len(tt)))
    for it, t in enumerate(tt):
        y_exact[:,it]=y0+(1-np.exp(-6*t))/6*np.dot(A,y0)
    plt.figure()
    plt.plot(tt,y_exact[0,:],":", label="c1 ex")
    plt.plot(tt,y_exact[1,:],":", label="c2 ex")
    plt.plot(tt,yy[0,:],label="c1")
    plt.plot(tt,yy[1,:],label="c2")
    plt.title("N=%d"%N)
    plt.legend()
```

#### Let's check the order of accuracy of the implicit and explicit Euler!

```python
# Convergence error
def linSysF(y,t=0):
    # evolution function
    F=np.zeros(np.shape(y))
    F[0] = y[1]-5*y[0]
    F[1] = -F[0]
    return F

linSysM=np.array([[-5,1],[5,-1]])
y0=np.array([0.9,0.1])

def exact_sol(t):
    return y0+(1-np.exp(-6*t))/6*np.dot(linSysM,y0)

def error(tt,yy):
    '''
    Compute the average error over the whole time domain, in norm 2 on the components of the system
    '''
    errors=np.zeros(len(tt))
    for it, t in enumerate(tt):
        errors[it]=np.linalg.norm(yy[:,it]-exact_sol(t))
    return np.mean(errors)

Ns=[2**k for k in range(1,12)]
errorEx=np.zeros(len(Ns))
errorIm=np.zeros(len(Ns))
dts= np.zeros(len(Ns))
for iN, N in enumerate(Ns):
    tspan=np.linspace(0,3,N)
    dts[iN]=tspan[1]-tspan[0]
    tt,yy=explicitEuler(linSysF,tspan,y0)
    errorEx[iN]=error(tt,yy)
    tt,yy=implicitEulerLinear(linSysM,tspan,y0)
    errorIm[iN]=error(tt,yy)

plt.figure()
plt.loglog(dts,errorEx,label="ex Euler")
plt.loglog(dts,errorIm, label="im Euler")
plt.loglog(dts,0.1*dts,":", label="order 1")
plt.loglog(dts,0.1*dts**2., ":", label="order 2")
plt.legend()
```

```python
### Test the implicit Euler method with the linear system with purely imaginary eigenvalues
```

## Extra exercise: code implicit Euler for nonlinear fluxes

* Use a nonlinear solver to solve $y^{n+1}-\Delta t F(y^{n+1},t^{n+1})=y^n$ (**scipy.optimize.newton**, scipy.optimize.broyden1)
* Use a lambda function to define the nonlinear function
* Search the documentation on Google

```python
def implicitEuler(func, tspan, y_0):
    '''
    Implicit Euler method with a nonlinear solver.
    Input:
    func: (nonlinear) function of the ODE
    tspan: vector of timesteps (t^0,...,t^N)
    y_0: initial value
    '''
    N_time=len(tspan)         # N+1
    dim=len(y_0)              # S
    y=np.zeros((dim,N_time))  # initializing the variable of solutions
    y[:,0]=y_0                # first timestep
    for n in range(N_time-1): # n=0,..., N-1
        # solve the nonlinear equation z - dt*F(z,t^{n+1}) - y^n = 0;
        # fsolve is one possible choice, newton or broyden1 also work
        dt = tspan[n+1]-tspan[n]
        nonlinear_eq = lambda z: z - dt*func(z,tspan[n+1]) - y[:,n]
        y[:,n+1] = optimize.fsolve(nonlinear_eq, y[:,n])
    return tspan, y
```

```python
## Nonlinear 3x3 system production destruction
def nonlinear_system3_flux(u,t=0):
    ff=np.zeros(len(u))
    ff[0]= -u[0]*u[1]/(u[0]+1)
    ff[1]= u[0]*u[1]/(u[0]+1) -0.3*u[1]
    ff[2]= 0.3*u[1]
    return ff

y_0 = np.array([9.98,0.01,0.01])
T_fin = 30
```

```python
## Run implicit Euler method and plot the solution
tt=np.linspace(0,T_fin, 100)
tt,yy = implicitEuler(nonlinear_system3_flux, tt, y_0)
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.title("implicit Euler")
plt.plot(tt,yy[0,:])
plt.plot(tt,yy[1,:])
plt.plot(tt,yy[2,:])

tt,yy = explicitEuler(nonlinear_system3_flux, tt, y_0)
plt.subplot(122)
plt.title("explicit Euler")
plt.plot(tt,yy[0,:])
plt.plot(tt,yy[1,:])
plt.plot(tt,yy[2,:])
```

```python
## Nonlinear stiff problem: Robertson
def Robertson_flux(u,t=0,alpha=10**4,beta=0.04, gamma=3*10**7):
    ff=np.zeros(np.shape(u))
    ff[0] = alpha*u[1]*u[2]-beta*u[0]
    ff[1] = beta*u[0]-alpha*u[1]*u[2] - gamma*u[1]**2
    ff[2] = gamma*u[1]**2
    return ff

NN=10000
tt = np.array([10**k for k in np.linspace(-7,11,NN)])
y_0 = np.array([1.,10**-20,10**-20])
```

```python
tt,yy = implicitEuler(Robertson_flux, tt, y_0)
plt.semilogx(tt,yy[0,:])
plt.semilogx(tt,yy[1,:]*10**4)
plt.semilogx(tt,yy[2,:])
plt.ylim([-0.05, 1.05])
```

```python
tt,yy = explicitEuler(Robertson_flux, tt, y_0)
plt.semilogx(tt,yy[0,:])
plt.semilogx(tt,yy[1,:]*10**4)
plt.semilogx(tt,yy[2,:])
plt.ylim([-0.05, 1.05])
```
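Why does explicit Euler fail so badly on Robertson? A rough way to see the stiffness is to look at the eigenvalues of the Jacobian of the flux (the Jacobian below is written out by hand from `Robertson_flux`; the evaluation point is a hypothetical state, merely representative of the trajectory):

```python
def Robertson_jacobian(u, alpha=10**4, beta=0.04, gamma=3*10**7):
    # d(flux)/du of the Robertson system defined above
    return np.array([
        [-beta,  alpha*u[2],                alpha*u[1]],
        [ beta, -alpha*u[2]-2*gamma*u[1],  -alpha*u[1]],
        [ 0.,    2*gamma*u[1],              0.       ]])

u = np.array([0.8, 1e-5, 0.2])  # a hypothetical state along the trajectory
print(np.linalg.eigvals(Robertson_jacobian(u)))
# one eigenvalue is of order -10^3: the explicit Euler bound dt <= 2/|p| would
# force a very small dt, while the log-spaced steps above grow astronomically larger;
# implicit Euler, being unconditionally stable, can take those huge steps.
```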
/* Copyright 2017 International Business Machines Corporation Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ #pragma once #include <kernelpp/types.h> #include <kernelpp/kernel.h> #include <gsl.h> namespace mwe { namespace kernels { using kernelpp::compute_mode; using kernelpp::error_code; /* Declare a kernel 'add' which is overloaded to operate * on single or array-like inputs. * * Also, declare the compute modes (CPU,GPU) which * this kernel will support. */ KERNEL_DECL(add, compute_mode::CPU, compute_mode::CUDA) { template <compute_mode> static kernelpp::variant<int, error_code> op( int a, int b ); template <compute_mode> static error_code op( const gsl::span<int> a, const gsl::span<int> b, gsl::span<int> result ); }; } }
State Before: C : Type u inst✝² : Category C D : Type ?u.77182 inst✝¹ : Category D inst✝ : HasPullbacks C P : MorphismProperty C hP : StableUnderComposition P hP' : RespectsIso P hP'' : StableUnderBaseChange P ⊢ StableUnderComposition (MorphismProperty.diagonal P) State After: C : Type u inst✝² : Category C D : Type ?u.77182 inst✝¹ : Category D inst✝ : HasPullbacks C P : MorphismProperty C hP : StableUnderComposition P hP' : RespectsIso P hP'' : StableUnderBaseChange P X Y Z : C f : X ⟶ Y g : Y ⟶ Z h₁ : MorphismProperty.diagonal P f h₂ : MorphismProperty.diagonal P g ⊢ MorphismProperty.diagonal P (f ≫ g) Tactic: introv X h₁ h₂ State Before: C : Type u inst✝² : Category C D : Type ?u.77182 inst✝¹ : Category D inst✝ : HasPullbacks C P : MorphismProperty C hP : StableUnderComposition P hP' : RespectsIso P hP'' : StableUnderBaseChange P X Y Z : C f : X ⟶ Y g : Y ⟶ Z h₁ : MorphismProperty.diagonal P f h₂ : MorphismProperty.diagonal P g ⊢ MorphismProperty.diagonal P (f ≫ g) State After: C : Type u inst✝² : Category C D : Type ?u.77182 inst✝¹ : Category D inst✝ : HasPullbacks C P : MorphismProperty C hP : StableUnderComposition P hP' : RespectsIso P hP'' : StableUnderBaseChange P X Y Z : C f : X ⟶ Y g : Y ⟶ Z h₁ : MorphismProperty.diagonal P f h₂ : MorphismProperty.diagonal P g ⊢ P (pullback.diagonal f ≫ (pullbackDiagonalMapIdIso f f g).inv ≫ pullback.snd) Tactic: rw [diagonal_iff, pullback.diagonal_comp] State Before: C : Type u inst✝² : Category C D : Type ?u.77182 inst✝¹ : Category D inst✝ : HasPullbacks C P : MorphismProperty C hP : StableUnderComposition P hP' : RespectsIso P hP'' : StableUnderBaseChange P X Y Z : C f : X ⟶ Y g : Y ⟶ Z h₁ : MorphismProperty.diagonal P f h₂ : MorphismProperty.diagonal P g ⊢ P (pullback.diagonal f ≫ (pullbackDiagonalMapIdIso f f g).inv ≫ pullback.snd) State After: no goals Tactic: exact hP _ _ h₁ (by simpa [hP'.cancel_left_isIso] using hP''.snd _ _ h₂) State Before: C : Type u inst✝² : Category C D : Type ?u.77182 inst✝¹ : Category D inst✝ : HasPullbacks C P : MorphismProperty C hP : StableUnderComposition P hP' : RespectsIso P hP'' : StableUnderBaseChange P X Y Z : C f : X ⟶ Y g : Y ⟶ Z h₁ : MorphismProperty.diagonal P f h₂ : MorphismProperty.diagonal P g ⊢ P ((pullbackDiagonalMapIdIso f f g).inv ≫ pullback.snd) State After: no goals Tactic: simpa [hP'.cancel_left_isIso] using hP''.snd _ _ h₂
inductive rtc {α : Type} (r : α → α → Prop) : α → α → Prop | base : ∀ a, rtc a a | trans : ∀ {a b c}, r a b → rtc b c → rtc a c lemma rtc_is_transitive {α : Type} {a b c: α} {r : α → α → Prop}: rtc r a b → rtc r b c → rtc r a c := begin intros hab hbc, induction hab with a a p b hap hpb hbcpc, exact hbc, exact rtc.trans hap (hbcpc hbc) end definition weak_cr {α : Type} (r : α → α → Prop): Prop := ∀ {a b c: α}, r a b → r a c → ∃ d: α, rtc r b d ∧ rtc r c d definition cr {α : Type} (r : α → α → Prop): Prop := ∀ a b c: α, rtc r a b → rtc r a c → ∃ d: α, rtc r b d ∧ rtc r c d constant α: Type constant r: α → α → Prop definition ir (a b: α) := r b a constant r_wf: well_founded ir constant r_weak_cr: weak_cr r lemma newmann: cr r := begin intros x, apply well_founded.induction r_wf x _, clear x, intros a ih b c hab hac, simp [ir] at ih, cases hab with _ _ p _ hap hpb, existsi c, split, exact hac, exact rtc.base r c, cases hac with _ _ q _ haq hqc, existsi b, split, exact rtc.base r b, exact rtc.trans hap hpb, cases r_weak_cr hap haq with s hs, cases hs with hps hqs, cases ih q haq s c hqs hqc with t ht, cases ht with hst hct, have hpt: rtc r p t, from rtc_is_transitive hps hst, cases ih p hap b t hpb hpt with d hd, cases hd with hbd htd, clear hap hpb haq hqc hps hqs hst hpt p q s, have hcd: rtc r c d, from rtc_is_transitive hct htd, existsi d, exact ⟨hbd, hcd⟩ end
From Coq Require Extraction. From ExtLib Require Import MonadFix. From SimpleIO Require Import SimpleIO. From CoqFFI Require Import Int. From Guess Require Import Guess. Definition main : io_unit := let target : i63 := 5 in let max_attempt : nat := 10%nat in IO.unsafe_run (guess target max_attempt). Extraction "main.ml" main.
import LMT variable {I} [Nonempty I] {E} [Nonempty E] [Nonempty (A I E)] example {a1 a2 a3 : A I E} : (((a1).write i2 ((a1).read i1)).read i1) ≠ ((a1).read i1) → False := by arr
The first clue to a diagnosis of AML is typically an abnormal result on a complete blood count. While an excess of abnormal white blood cells (leukocytosis) is a common finding, and leukemic blasts are sometimes seen, AML can also present with isolated decreases in platelets, red blood cells, or even with a low white blood cell count (leukopenia). While a presumptive diagnosis of AML can be made by examination of the peripheral blood smear when there are circulating leukemic blasts, a definitive diagnosis usually requires an adequate bone marrow aspiration and biopsy.
import numpy
import pytest
from mpmath import mp

import orthopy
import quadpy


@pytest.mark.parametrize(
    "scheme",
    [quadpy.e1r2.gauss_hermite(n) for n in range(5, 12)]
    + [quadpy.e1r2.genz_keister(n) for n in range(8)],
)
def test_scheme(scheme):
    print(scheme.name)
    tol = 1.0e-14
    assert scheme.points.dtype == numpy.float64, scheme.name
    assert scheme.weights.dtype == numpy.float64, scheme.name

    def eval_orthopolys(x):
        return orthopy.e1r2.tree(
            x, scheme.degree + 1, standardization="normal", symbolic=False
        )

    approximate = scheme.integrate(eval_orthopolys)

    exact = numpy.zeros(approximate.shape)
    exact[0] = numpy.sqrt(numpy.sqrt(numpy.pi))

    diff = numpy.abs(approximate - exact)
    k, _ = numpy.where(diff > tol)
    assert len(k) > 0, "{} -- Degree is higher than {}.".format(
        scheme.name, scheme.degree
    )
    degree = k[0] - 1
    assert degree == scheme.degree, "{} -- Observed: {} expected: {}".format(
        scheme.name, degree, scheme.degree
    )
    return


@pytest.mark.parametrize("scheme", [quadpy.e1r2.gauss_hermite(2)])
def test_show(scheme):
    scheme.show()
    return


def test_hermite_mpmath():
    mp.dps = 51
    scheme = quadpy.e1r2.gauss_hermite(4, mode="mpmath")
    tol = 1.0e-50

    x1 = mp.sqrt((3 - mp.sqrt(6)) / 2)
    x2 = mp.sqrt((3 + mp.sqrt(6)) / 2)
    assert (abs(scheme.points - [-x2, -x1, +x1, +x2]) < tol).all()

    w1 = mp.sqrt(mp.pi) / 4 / (3 - mp.sqrt(6))
    w2 = mp.sqrt(mp.pi) / 4 / (3 + mp.sqrt(6))
    assert (abs(scheme.weights - [w2, w1, w1, w2]) < tol).all()
    return


if __name__ == "__main__":
    scheme_ = quadpy.e1r2.gauss_hermite(10)
    # test_scheme takes only the scheme; the tolerance is fixed inside the function
    test_scheme(scheme_)
    test_show(scheme_)
[GOAL] K : Type u_1 V₁ : Type u_2 V₂ : Type u_3 ι₁ : Type u_4 ι₂ : Type u_5 inst✝⁸ : Field K inst✝⁷ : AddCommGroup V₁ inst✝⁶ : Module K V₁ inst✝⁵ : AddCommGroup V₂ inst✝⁴ : Module K V₂ inst✝³ : Fintype ι₁ inst✝² : Fintype ι₂ inst✝¹ : DecidableEq ι₁ inst✝ : DecidableEq ι₂ B₁ : Basis ι₁ K V₁ B₂ : Basis ι₂ K V₂ u : V₁ →ₗ[K] V₂ ⊢ ↑(toMatrix (Basis.dualBasis B₂) (Basis.dualBasis B₁)) (↑Module.Dual.transpose u) = (↑(toMatrix B₁ B₂) u)ᵀ [PROOFSTEP] ext i j [GOAL] case a.h K : Type u_1 V₁ : Type u_2 V₂ : Type u_3 ι₁ : Type u_4 ι₂ : Type u_5 inst✝⁸ : Field K inst✝⁷ : AddCommGroup V₁ inst✝⁶ : Module K V₁ inst✝⁵ : AddCommGroup V₂ inst✝⁴ : Module K V₂ inst✝³ : Fintype ι₁ inst✝² : Fintype ι₂ inst✝¹ : DecidableEq ι₁ inst✝ : DecidableEq ι₂ B₁ : Basis ι₁ K V₁ B₂ : Basis ι₂ K V₂ u : V₁ →ₗ[K] V₂ i : ι₁ j : ι₂ ⊢ ↑(toMatrix (Basis.dualBasis B₂) (Basis.dualBasis B₁)) (↑Module.Dual.transpose u) i j = (↑(toMatrix B₁ B₂) u)ᵀ i j [PROOFSTEP] simp only [LinearMap.toMatrix_apply, Module.Dual.transpose_apply, B₁.dualBasis_repr, B₂.dualBasis_apply, Matrix.transpose_apply, LinearMap.comp_apply] [GOAL] K : Type u_1 V₁ : Type u_2 V₂ : Type u_3 ι₁ : Type u_4 ι₂ : Type u_5 inst✝⁸ : Field K inst✝⁷ : AddCommGroup V₁ inst✝⁶ : Module K V₁ inst✝⁵ : AddCommGroup V₂ inst✝⁴ : Module K V₂ inst✝³ : Fintype ι₁ inst✝² : Fintype ι₂ inst✝¹ : DecidableEq ι₁ inst✝ : DecidableEq ι₂ B₁ : Basis ι₁ K V₁ B₂ : Basis ι₂ K V₂ M : Matrix ι₁ ι₂ K ⊢ ↑(toLin (Basis.dualBasis B₁) (Basis.dualBasis B₂)) Mᵀ = ↑Module.Dual.transpose (↑(toLin B₂ B₁) M) [PROOFSTEP] apply (LinearMap.toMatrix B₁.dualBasis B₂.dualBasis).injective [GOAL] case a K : Type u_1 V₁ : Type u_2 V₂ : Type u_3 ι₁ : Type u_4 ι₂ : Type u_5 inst✝⁸ : Field K inst✝⁷ : AddCommGroup V₁ inst✝⁶ : Module K V₁ inst✝⁵ : AddCommGroup V₂ inst✝⁴ : Module K V₂ inst✝³ : Fintype ι₁ inst✝² : Fintype ι₂ inst✝¹ : DecidableEq ι₁ inst✝ : DecidableEq ι₂ B₁ : Basis ι₁ K V₁ B₂ : Basis ι₂ K V₂ M : Matrix ι₁ ι₂ K ⊢ ↑(LinearMap.toMatrix (Basis.dualBasis B₁) (Basis.dualBasis B₂)) (↑(toLin (Basis.dualBasis B₁) (Basis.dualBasis B₂)) Mᵀ) = ↑(LinearMap.toMatrix (Basis.dualBasis B₁) (Basis.dualBasis B₂)) (↑Module.Dual.transpose (↑(toLin B₂ B₁) M)) [PROOFSTEP] rw [LinearMap.toMatrix_toLin, LinearMap.toMatrix_transpose, LinearMap.toMatrix_toLin]
#ifndef AMICI_MISC_H #define AMICI_MISC_H #include "amici/defines.h" #include "amici/exception.h" #include "amici/vector.h" #include <sunmatrix/sunmatrix_sparse.h> // SUNMatrixContent_Sparse #include <algorithm> #include <vector> #include <memory> #include <regex> #include <gsl/gsl-lite.hpp> namespace amici { /** * @brief creates a slice from existing data * * @param data to be sliced * @param index slice index * @param size slice size * @return span of the slice */ template <class T> gsl::span<T> slice(std::vector<T> &data, int index, unsigned size) { if ((index + 1) * size > data.size()) throw std::out_of_range("requested slice is out of data range"); if (size > 0) return gsl::make_span(&data.at(index*size), size); return gsl::make_span(static_cast<T*>(nullptr), 0); } /** * @brief creates a constant slice from existing constant data * * @param data to be sliced * @param index slice index * @param size slice size * @return span of the slice */ template <class T> const gsl::span<const T> slice(const std::vector<T> &data, int index, unsigned size) { if ((index + 1) * size > data.size()) throw std::out_of_range("requested slice is out of data range"); if (size > 0) return gsl::make_span(&data.at(index*size), size); return gsl::make_span(static_cast<T*>(nullptr), 0); } /** * @brief local helper to check whether the provided buffer has the expected * size * @param buffer buffer to which values are to be written * @param expected_size expected size of the buffer */ template <class T> void checkBufferSize(gsl::span<T> buffer, typename gsl::span<T>::index_type expected_size) { if (buffer.size() != expected_size) throw AmiException("Incorrect buffer size! Was %u, expected %u.", buffer.size(), expected_size); } /* TODO: templating writeSlice breaks implicit conversion between vector & span not sure whether this is fixable */ /** * @brief local helper function to write computed slice to provided buffer (span) * @param slice computed value * @param buffer buffer to which values are to be written */ template <class T> void writeSlice(const gsl::span<const T> slice, gsl::span<T> buffer) { checkBufferSize(buffer, slice.size()); std::copy(slice.begin(), slice.end(), buffer.data()); }; /** * @brief local helper function to write computed slice to provided buffer (vector) * @param s computed value * @param b buffer to which values are to be written */ template <class T> void writeSlice(const std::vector<T> &s, std::vector<T> &b) { writeSlice(gsl::make_span(s.data(), s.size()), gsl::make_span(b.data(), b.size())); }; /** * @brief local helper function to write computed slice to provided buffer (vector/span) * @param s computed value * @param b buffer to which values are to be written */ template <class T> void writeSlice(const std::vector<T> &s, gsl::span<T> b) { writeSlice(gsl::make_span(s.data(), s.size()), b); }; /** * @brief local helper function to write computed slice to provided buffer (AmiVector/span) * @param s computed value * @param b buffer to which values are to be written */ void writeSlice(const AmiVector &s, gsl::span<realtype> b); /** * @brief Remove parameter scaling according to the parameter scaling in pscale * * All vectors must be of same length. 
* * @param bufferScaled scaled parameters * @param pscale parameter scaling * @param bufferUnscaled unscaled parameters are written to the array */ void unscaleParameters(gsl::span<const realtype> bufferScaled, gsl::span<const ParameterScaling> pscale, gsl::span<realtype> bufferUnscaled); /** * @brief Remove parameter scaling according to `scaling` * * @param scaledParameter scaled parameter * @param scaling parameter scaling * * @return Unscaled parameter */ double getUnscaledParameter(double scaledParameter, ParameterScaling scaling); /** * @brief Apply parameter scaling according to `scaling` * @param unscaledParameter * @param scaling parameter scaling * @return Scaled parameter */ double getScaledParameter(double unscaledParameter, ParameterScaling scaling); /** * @brief Apply parameter scaling according to `scaling` * @param bufferUnscaled * @param pscale parameter scaling * @param bufferScaled destination */ void scaleParameters(gsl::span<const realtype> bufferUnscaled, gsl::span<const ParameterScaling> pscale, gsl::span<realtype> bufferScaled); /** * @brief Returns the current backtrace as std::string * @param maxFrames Number of frames to include * @return Backtrace */ std::string backtraceString(int maxFrames); /** * @brief Convert std::regex_constants::error_type to string * @param err_type error type * @return Error type as string */ std::string regexErrorToString(std::regex_constants::error_type err_type); /** * @brief Format printf-style arguments to std::string * @param fmt Format string * @param ap Argument list pointer * @return Formatted String */ std::string printfToString(const char *fmt, va_list ap); /** * @brief Generic implementation for a context manager, explicitly deletes copy * and move operators for derived classes */ class ContextManager{ public: ContextManager() = default; ContextManager(ContextManager &other) = delete; ContextManager(ContextManager &&other) = delete; }; } // namespace amici #endif // AMICI_MISC_H
Formal statement is: lemma convex_on_add [intro]: assumes "convex_on S f" and "convex_on S g" shows "convex_on S (\<lambda>x. f x + g x)" Informal statement is: If $f$ and $g$ are convex functions on a set $S$, then $f + g$ is convex on $S$.
lemma vimage_algebra_sigma: assumes X: "X \<subseteq> Pow \<Omega>'" and f: "f \<in> \<Omega> \<rightarrow> \<Omega>'" shows "vimage_algebra \<Omega> f (sigma \<Omega>' X) = sigma \<Omega> {f -` A \<inter> \<Omega> | A. A \<in> X }" (is "?V = ?S")
/*!
 * @file
 * @brief
 * @copyright alphya 2020-2021
 * Distributed under the Boost Software License, Version 1.0.
 * (See accompanying file LICENSE.md or copy at http://boost.org/LICENSE_1_0.txt)
 */

#ifndef NYARUGA_UTIL_CHECK_SEMANTICS_HPP
#define NYARUGA_UTIL_CHECK_SEMANTICS_HPP

#pragma once

// Tests whether semantics hold, using an array filled with random values and a
// T -> bool function.
// For example, when we want to check that a set of values of some type forms a
// group, a Concept can only require things such as the operation (+) being
// defined; it cannot check that e.g. associativity actually holds.
// For such cases, helper functions are defined here to run tests with a
// function that checks a law such as associativity.

#include <utility>
#include <concepts>
#include <array>
#include <boost/hana/functional/overload.hpp>

namespace nyaruga {

namespace util {

namespace detail {

template <typename ArrayLike, typename Fn, std::size_t... I>
constexpr bool check_semantics_impl(ArrayLike&& a, Fn&& f, std::index_sequence<I...>)
{
   return ( f(a[I]) && ... );
}

} // namespace detail

namespace check_semantics_ { // protection from unintended ADL

static inline constexpr auto check_semantics = boost::hana::overload(
   []<std::size_t N>(auto&& a, auto&& f)
      requires requires() { { f(a[0]) } -> std::same_as<bool>; }
   {
      return detail::check_semantics_impl(std::forward<decltype(a)>(a),
         std::forward<decltype(f)>(f), std::make_index_sequence<N>{});
   },
   []<std::size_t N, typename T, template <class, std::size_t> class Array>
      (const Array<T, N>& a, auto&& f)
      requires requires() { { f(a[0]) } -> std::same_as<bool>; }
   {
      return detail::check_semantics_impl(std::forward<decltype(a)>(a),
         std::forward<decltype(f)>(f), std::make_index_sequence<N>{});
   },
   []<std::size_t N, typename T, template <class, std::size_t> class Array>
      (T (&a)[N], auto&& f)
      requires requires() { { f(a[0]) } -> std::same_as<bool>; }
   {
      return detail::check_semantics_impl(std::forward<decltype(a)>(a),
         std::forward<decltype(f)>(f), std::make_index_sequence<N>{});
   },
   []<std::size_t N, typename T, template <class, std::size_t> class Array>
      (T (&&a)[N], auto&& f)
      requires requires() { { f(a[0]) } -> std::same_as<bool>; }
   {
      return detail::check_semantics_impl(std::forward<decltype(a)>(a),
         std::forward<decltype(f)>(f), std::make_index_sequence<N>{});
   }
);

} // namespace check_semantics_

using namespace check_semantics_;

/* Usage

// Generate random values (for this, <random_array>, which generates a compile-time
// random array, works well)
constexpr std::array<int, 5> a = {2, 4, 5, 2, 5};

// Prepare a function that checks some semantics and returns a bool (a one-argument function)
auto f = [](int a){ return a == a; };

// Check that every value in the array satisfies the semantics
static_assert(check_semantics(a, f), "");

*/

} // nyaruga :: util

#endif // NYARUGA_UTIL_CHECK_SEMANTICS_HPP
(**************************************************************) (* Copyright Dominique Larchey-Wendling [*] *) (* Jean-François Monin [+] *) (* *) (* [*] Affiliation Univ. Lorraine - CNRS - LORIA *) (* [+] Affiliation VERIMAG - Univ. Grenoble Alpes *) (**************************************************************) (* This file is distributed under the terms of the *) (* CeCILL v2.1 FREE SOFTWARE LICENSE AGREEMENT *) (**************************************************************) From Coq Require Import Utf8. From MuRec Require Import sigma relations schemes computable_def. Section prec_compute. Variables (X Y : Type) (F : X → Y → Prop) (Ffun : functional F) (Fcomp : ∀x, computable (F x)) (G : X → nat → Y → Y → Prop) (Gfun : ∀ x n, functional (G x n)) (Gcomp : ∀ x n y, computable (G x n y)) (x : X). Section prim_rec_compute_props. Variables (n : nat) (e : ∃y, prim_rec F G x (S n) y). Local Fact prc_TC1 : ∃y, prim_rec F G x n y. Proof. destruct e as (? & yn₁ & ? & ?). now exists yn₁. Qed. Variables (yn : Y) (Hyn : prim_rec F G x n yn). Local Fact prc_TC2 : ∃y, G x n yn y. Proof. destruct e as (yn₁' & yn₁ & Hyn₁ & Hyn₁'). exists yn₁'. now rewrite <- (prim_rec_fun Ffun Gfun Hyn₁ Hyn). Qed. Variables (yn' : Y) (Hyn' : G x n yn yn'). Local Fact prc_PO1 : prim_rec F G x (S n) yn'. Proof. now exists yn. Qed. End prim_rec_compute_props. Arguments prc_TC1 {_} _. Arguments prc_TC2 {_} _ {_} _. Arguments prc_PO1 {_ _} _ {_} _. Fixpoint prim_rec_compute m : computable (prim_rec F G x m) := match m with | 0 => λ e, Fcomp x e | S n => λ e, let (yn , y_yn) := prim_rec_compute n (prc_TC1 e) in let (yn', yn_yn') := Gcomp x n yn (prc_TC2 e y_yn) in ⟪yn', prc_PO1 y_yn yn_yn'⟫ end. End prec_compute. Arguments prim_rec_compute {X Y F} Ffun Fcomp {G} Gfun Gcomp x m.
The Primer is intended to act as a basic introduction to all the concepts on the blog in a structured approach. The blog is free, but it could take you weeks or even months to get up to speed by reading through the first two and a half years of posts. Reading the Primer gets you all caught up today.
A law professor at the University of Cape Coast, Philip Ebow Bondzi-Simpson, says the Single Spine Salary Structure (SSSS) is unsustainable and needs to be reviewed. According to the law professor, the relationship between what is paid by government to the public sector by way of remuneration and compensation and what is collected by way of tax revenues does not match in any way. Prof. Bondzi-Simpson disclosed this while speaking at this year’s inaugural Mfantsipim ‘Dwen Hwe Kan’ Lectures, held in Accra recently under the theme ‘Nation-Building and the National Interest: A Regime for Public Sector Compensation’. He said government spends more on the payment of public sector remuneration and compensation than it collects through taxes. “All together, the ratio between what is paid out and what is collected is about 70 percent. In other words, the public sector compensation issue matters because the more that is paid out as wages, salaries, allowances and other remunerations, the less money would be available for infrastructural development. So the 70 percent that is paid out is an unattainable and unsustainable arrangement,” Professor Bondzi-Simpson said. “About seven years ago, the SSSS was introduced. Everybody in the public sector was put on one ladder or the other but then a number of problems started emerging. You can identify amongst these problems the quantum issues and the relativity issues. Quantum issues simply say we are not getting enough and relativity issues simply says how can we be getting less than those in that profession or in that sector. It started off with the police and they got huge hikes. Not too long after that those in the security services and the revenue agencies had theirs out and the parities were disconcerting,” he recounted. “The present salary structure has multiple spines, and it is confusing and prone to unjustified and significant intra and inter-sectoral variations,” he said.
Formal statement is: lemma locally_compact_UNIV: "locally compact (UNIV :: 'a :: heine_borel set)" Informal statement is: The set of all real numbers is locally compact.
#include <boost/spirit/home/qi/string/tst.hpp>
Formal statement is: lemma Lim_bounded: "f \<longlonglongrightarrow> l \<Longrightarrow> \<forall>n\<ge>M. f n \<le> C \<Longrightarrow> l \<le> C" for l :: "'a::linorder_topology" Informal statement is: If $f$ converges to $l$ and is bounded above by $C$ for all $n \geq M$, then $l \leq C$.
subroutine RESGHOSTBC( & state & ,istatelo0 & ,istatehi0 & ,nstatecomp & ,ibcBoxlo0 & ,ibcBoxhi0 & ,idir & ,side & ,ncomp & ) implicit none integer*8 ch_flops COMMON/ch_timer/ ch_flops integer CHF_ID(0:5,0:5) data CHF_ID/ 1,0,0,0,0,0 ,0,1,0,0,0,0 ,0,0,1,0,0,0 ,0,0,0,1,0,0 ,0 &,0,0,0,1,0 ,0,0,0,0,0,1 / integer nstatecomp integer istatelo0 integer istatehi0 REAL*8 state( & istatelo0:istatehi0, & 0:nstatecomp-1) integer ibcBoxlo0 integer ibcBoxhi0 integer idir integer side integer ncomp integer nc integer ii integer i REAL*8 nearval, farval ii = side*CHF_ID(0,idir) do nc = 0, ncomp-1 do i = ibcBoxlo0,ibcBoxhi0 nearval = state(i-ii,nc) farval = state(i-2*ii,nc) state(i,nc) = (2.0d0)*nearval - farval enddo enddo return end subroutine HORESGHOSTBC( & state & ,istatelo0 & ,istatehi0 & ,nstatecomp & ,ibcBoxlo0 & ,ibcBoxhi0 & ,idir & ,side & ,ncomp & ) implicit none integer*8 ch_flops COMMON/ch_timer/ ch_flops integer CHF_ID(0:5,0:5) data CHF_ID/ 1,0,0,0,0,0 ,0,1,0,0,0,0 ,0,0,1,0,0,0 ,0,0,0,1,0,0 ,0 &,0,0,0,1,0 ,0,0,0,0,0,1 / integer nstatecomp integer istatelo0 integer istatehi0 REAL*8 state( & istatelo0:istatehi0, & 0:nstatecomp-1) integer ibcBoxlo0 integer ibcBoxhi0 integer idir integer side integer ncomp integer nc integer ii integer i REAL*8 nearval, midval, farval ii = side*CHF_ID(0,idir) do nc = 0, ncomp-1 do i = ibcBoxlo0,ibcBoxhi0 nearval = state(i-ii,nc) midval = state(i-2*ii,nc) farval = state(i-3*ii,nc) state(i,nc) = (3.0d0)*(nearval - midval) + farval enddo enddo return end
subroutine rmtabl (st) integer st integer mem( 60000) common/cdsmem/mem integer i integer walker, bucket, node bucket = st do 23000 i = 1, 43 bucket = bucket + 1 walker = mem (bucket) 23002 if (.not.(walker .ne. 0))goto 23003 node = walker walker = mem (node + 0) call dsfree (node) goto 23002 23003 continue 23000 continue 23001 continue call dsfree (st) return end
[STATEMENT] lemma \<gamma>_simps [simp]: shows "arr \<gamma>" and "src \<gamma> = src u" and "trg \<gamma> = src f" and "dom \<gamma> = w" and "cod \<gamma> = w'" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (arr \<gamma> &&& src \<gamma> = src u) &&& trg \<gamma> = src f &&& local.dom \<gamma> = w &&& cod \<gamma> = w' [PROOF STEP] using \<gamma>_in_hom [PROOF STATE] proof (prove) using this: \<guillemotleft>\<gamma> : src u \<rightarrow> src f\<guillemotright> \<guillemotleft>\<gamma> : w \<Rightarrow> w'\<guillemotright> goal (1 subgoal): 1. (arr \<gamma> &&& src \<gamma> = src u) &&& trg \<gamma> = src f &&& local.dom \<gamma> = w &&& cod \<gamma> = w' [PROOF STEP] by auto
# this is the modelfile for joint meta-analysis # for humerus and femur data # treating the studies as parallel designs # model { #### Both tests, complete crosstabs for TPR and FPR: group 1 for(k1 in 1:K.G1) { # observational part of the model # x and p order is 11 10 01 00 # first humerus, second femur # 1 2 3 4 5 6 # eta.xi order is eta1 (hum) eta2 (fem) eta3 (jtpr) xi1 (hum) xi2 (fem) xi3 (jfpr) # x.down.11.G1[k1] ~ dbin(p.down.11.G1[k1], N.down.G1[k1]) x.down.1dot.G1[k1] ~ dbin(p.down.1dot.G1[k1], N.down.G1[k1]) x.down.dot1.G1[k1] ~ dbin(p.down.dot1.G1[k1], N.down.G1[k1]) # p11.DOWN <-ilogit(eta3) p.down.11.G1[k1] <- ilogit(eta.xi.G1[k1,3]) # Sensitivity for test 1 (humerus) p.down.1dot.G1[k1] <- ilogit(eta.xi.G1[k1,1]) # Sensitivity for test 2 (femur) p.down.dot1.G1[k1] <- ilogit(eta.xi.G1[k1,2]) # p11.NODOWN x.nodown.11.G1[k1] ~ dbin(p.nodown.11.G1[k1], N.nodown.G1[k1]) x.nodown.1dot.G1[k1] ~ dbin(p.nodown.1dot.G1[k1], N.nodown.G1[k1]) x.nodown.dot1.G1[k1] ~ dbin(p.nodown.dot1.G1[k1], N.nodown.G1[k1]) # p11.nodown <-ilogit(eta3) p.nodown.11.G1[k1] <- ilogit(eta.xi.G1[k1,6]) # Sensitivity for test 1 (humerus) p.nodown.1dot.G1[k1] <- ilogit(eta.xi.G1[k1,4]) # Sensitivity for test 2 (femur) p.nodown.dot1.G1[k1] <- ilogit(eta.xi.G1[k1,5]) # structural part of the model eta.xi.G1[k1, 1:6] ~dmnorm(Eta.Xi[1:6], Omega[1:6,1:6]) } for(k4 in 1:K.G4) { # observational part of the model x.down.dot1.G4[k4] ~ dbin(p.down.dot1.G4[k4], N.down.G4[k4]) x.nodown.dot1.G4[k4] ~ dbin(p.nodown.dot1.G4[k4], N.nodown.G4[k4]) p.down.dot1.G4[k4] <- ilogit(eta.xi.G4[k4, 1]) p.nodown.dot1.G4[k4] <- ilogit(eta.xi.G4[k4, 2]) # structural part of the model eta.xi.G4[k4, 1:2] ~ dmnorm(Eta.Xi.G4[1:2], Omega.G4[1:2, 1:2]) } # prior for mean -- independent vague priors suffice # -- otherwise, mvnormal with vague Wishart Eta.Xi[1] ~ dnorm(mu0[1], prec.mu0[1]) Eta.Xi[2] ~ dnorm(mu0[2], prec.mu0[2]) Eta.Xi[3] ~ dnorm(mu0[3], prec.mu0[3]) Eta.Xi[4] ~ dnorm(mu0[4], prec.mu0[4]) Eta.Xi[5] ~ dnorm(mu0[5], prec.mu0[5]) Eta.Xi[6] ~ dnorm(mu0[6], prec.mu0[6]) # S is a diagonal matrix of sd's S[1,1] ~ dunif(s.lower[1], s.upper[1]) # or use inverse gamma S[2,2] ~ dunif(s.lower[2], s.upper[2]) S[3,3] ~ dunif(s.lower[3], s.upper[3]) S[4,4] ~ dunif(s.lower[4], s.upper[4]) S[5,5] ~ dunif(s.lower[5], s.upper[5]) S[6,6] ~ dunif(s.lower[6], s.upper[6]) for(r1 in 1:5) { for(c1 in (r1+1):6) { S[r1,c1] <- 0 S[c1,r1] <- 0 } } # LL' = U'U is the cholesky decomposition of the correlation matrix (L=U') L[1,1] <- 1 L[1,2] <- 0 L[1,3] <- 0 L[1,4] <- 0 L[1,5] <- 0 L[1,6] <- 0 L[2,1] <- cos(phi.21) L[2,2] <- sin(phi.21) L[2,3] <- 0 L[2,4] <- 0 L[2,5] <- 0 L[2,6] <- 0 L[3,1] <- cos(phi.31) L[3,2] <- cos(phi.32)*sin(phi.31) L[3,3] <- sin(phi.32)*sin(phi.31) L[3,4] <- 0 L[3,5] <- 0 L[3,6] <- 0 L[4,1] <- cos(phi.41) L[4,2] <- cos(phi.42)*sin(phi.41) L[4,3] <- cos(phi.43)*sin(phi.42)*sin(phi.41) L[4,4] <- sin(phi.43)*sin(phi.42)*sin(phi.41) L[4,5] <- 0 L[4,6] <- 0 L[5,1] <- cos(phi.51) L[5,2] <- cos(phi.52)*sin(phi.51) L[5,3] <- cos(phi.53)*sin(phi.52)*sin(phi.51) L[5,4] <- cos(phi.54)*sin(phi.53)*sin(phi.52)*sin(phi.51) L[5,5] <- sin(phi.54)*sin(phi.53)*sin(phi.52)*sin(phi.51) L[5,6] <- 0 L[6,1] <- cos(phi.61) L[6,2] <- cos(phi.62)*sin(phi.61) L[6,3] <- cos(phi.63)*sin(phi.62)*sin(phi.61) L[6,4] <- cos(phi.64)*sin(phi.63)*sin(phi.62)*sin(phi.61) L[6,5] <- cos(phi.65)*sin(phi.64)*sin(phi.63)*sin(phi.62)*sin(phi.61) L[6,6] <- sin(phi.65)*sin(phi.64)*sin(phi.63)*sin(phi.62)*sin(phi.61) phi.21 ~ dunif(chol.r.lower[1], chol.r.upper[1]) phi.31 ~ 
dunif(chol.r.lower[2], chol.r.upper[2]) phi.32 ~ dunif(chol.r.lower[3], chol.r.upper[3]) phi.41 ~ dunif(chol.r.lower[4], chol.r.upper[4]) phi.42 ~ dunif(chol.r.lower[5], chol.r.upper[5]) phi.43 ~ dunif(chol.r.lower[6], chol.r.upper[6]) phi.51 ~ dunif(chol.r.lower[7], chol.r.upper[7]) phi.52 ~ dunif(chol.r.lower[8], chol.r.upper[8]) phi.53 ~ dunif(chol.r.lower[9], chol.r.upper[9]) phi.54 ~ dunif(chol.r.lower[10], chol.r.upper[10]) phi.61 ~ dunif(chol.r.lower[11], chol.r.upper[11]) phi.62 ~ dunif(chol.r.lower[12], chol.r.upper[12]) phi.63 ~ dunif(chol.r.lower[13], chol.r.upper[13]) phi.64 ~ dunif(chol.r.lower[14], chol.r.upper[14]) phi.65 ~ dunif(chol.r.lower[15], chol.r.upper[15]) R <- L %*% t(L) Tau <- S %*% R %*% S Omega <- inverse(Tau) # Omega.G4 and Eta.Xi.G4 # I <- diag(rep(1,6)) # P.G4 <- I[,c(2, 5, 1, 3, 4, 6)] Omega.G4 <- t(P.G4) %*% Omega %*% P.G4 Eta.Xi.G4 <- Eta.Xi %*% P.G4 # Data to report tpr.hum <- ilogit(Eta.Xi[1]) tpr.fem <- ilogit(Eta.Xi[2]) jtpr <- ilogit(Eta.Xi[3]) fpr.hum <- ilogit(Eta.Xi[4]) fpr.fem <- ilogit(Eta.Xi[5]) jfpr <- ilogit(Eta.Xi[6]) diff.tpr <- tpr.hum - tpr.fem diff.fpr <- fpr.hum - fpr.fem or.tpr <- exp(Eta.Xi[1]-Eta.Xi[2]) or.fpr <- exp(Eta.Xi[4]-Eta.Xi[5]) diff.eta <- Eta.Xi[1]-Eta.Xi[2] diff.xi <- Eta.Xi[4]-Eta.Xi[5] }
\section{Conclusion}
\label{sec:conclusion}
\input{TOG/fig_tile_results}

In this paper, we present a gaze-contingent neural scene representation and view synthesis method tailored for egocentric and stereoscopic VR viewing. To unlock the practical deployment of neural radiance fields in VR, we overcome their challenges such as high latency, low resolution, and low fidelity that are exacerbated with stereoscopic and immersive displays. This is achieved by encoding not only the scene content but also \textit{how} human vision perceives it, i.e., the gaze-contingent visual acuity and stereopsis. Our network individually synthesizes foveal, mid-, and far-periphery retinal images, which are then blended to form a wide field-of-view image. We also derive an analytical model depicting the quality-latency balance and optimize these two essential factors based on psychophysical study data.

Orthogonal to the traditional rendering pipeline, our method takes advantage of NeRF's versatile capability of synthesizing not only virtual content but also physically captured scenes, as demonstrated in \Cref{fig:results:comparison2}. Compared with alternative neural view synthesis approaches (\cite{mildenhall2020nerf}), our method achieves significantly faster synthesis and higher perceptual fidelity for high resolution and high FoV immersive viewing. Furthermore, with the support of view-dependent effects (\Cref{fig:vd}), our method is robust to occluded scenes with complex geometry, a challenge faced by panorama-based synthesis approaches (e.g., \cite{Lin:DeepPanorama}).

\paragraph{Limitations and future work}
While our method achieves superior performance compared to existing approaches, its broader deployment requires combating several constraints. \new{Our multi-spherical representation renders optimal quality when the virtual camera is within the innermost sphere. Although our method unlocks a larger translation range than panorama + neural dis-occlusion approaches, the optimal translation range is constrained by the radius of this sphere; however, enlarging it indefinitely would decrease the synthesized image quality.} Therefore, devising an adaptive and dynamic system that automatically optimizes the coordinates may shed light on allowing traversal-scale translation ranges without decreasing the perceptual quality or performance. Similarly, multiscale coordinates that consider various levels of detail of the 3D space have shown their effectiveness at interpolating geometries \cite{winkler2010multi}. Developing a corresponding network that synthesizes the imagery from global to local levels of detail would be an interesting direction for future work that extends the applicability to large-scale scenes with strong occlusions.
Currently, we sample the scene fully based on eccentricity, considering acuity and stereopsis. However, we envision that fine-grained visual sensitivity analysis, such as luminance \cite{Tursun:2019:LCA} or depth \cite{Sun:20:OE}, would provide more insights on achieving higher quality and/or faster performance.

For simplicity, the spatial-temporal joint optimization in \Cref{sec:method:optimization} connects the output precision and latency to the number of spheres ($\sphereNum$) but not their radii $\mathbf{\sphereRadius}$. This is due to the potentially significantly higher parameter sampling cost. Incorporating the radii into a single training process may significantly reduce the time consumption of the optimization. With an adaptive training process, a content-aware distribution (i.e., $\mathbf{\sphereRadius}$) of the spheres would further improve the synthesis quality and performance.

\new{The perceptually-based method is orthogonal yet compatible with other remarkable neural radiance field inference acceleration solutions, e.g., \cite{autoint,Wizadwongsa2021NeX}. Combining various perspectives may unlock future instant and high-quality immersive viewing experiences such as teleportation.}
lemma measurable_Ex1[measurable (raw)]: assumes [simp]: "countable I" and [measurable]: "\<And>i. i \<in> I \<Longrightarrow> Measurable.pred M (P i)" shows "Measurable.pred M (\<lambda>x. \<exists>!i\<in>I. P i x)"
{- | This module provides normalized multi-dimensional versions of the transforms in @fftw@. The forwards transforms in this module are identical to those in "Numeric.FFT.Vector.Unnormalized". The backwards transforms are normalized to be their inverse operations (approximately, due to floating point precision). For more information on the underlying transforms, see <http://www.fftw.org/fftw3_doc/What-FFTW-Really-Computes.html>. @since 0.2 -} module Numeric.FFT.Vector.Invertible.Multi ( -- * Creating and executing 'Plan's run, plan, execute, -- * Complex-to-complex transforms U.dft, idft, -- * Real-to-complex transforms U.dftR2C, dftC2R, ) where import Numeric.FFT.Vector.Base import qualified Numeric.FFT.Vector.Unnormalized.Multi as U import Data.Complex import qualified Data.Vector.Storable as VS -- | A backward discrete Fourier transform which is the inverse of 'U.dft'. The output and input sizes are the same (@n@). idft :: TransformND (Complex Double) (Complex Double) idft = U.idft {normalizationND = \ns -> constMultOutput $ 1 / toEnum (VS.product ns)} -- | A normalized backward discrete Fourier transform which is the left inverse of -- 'U.dftR2C'. (Specifically, @run dftC2R . run dftR2C == id@.) -- -- This 'Transform' behaves differently than the others: -- -- - Calling @planND dftC2R dims@ where @dims = [n0, ..., nk]@ creates a 'Plan' whose /output/ size is @dims@, and whose -- /input/ size is @[n0, ..., nk \`div\` 2 + 1]@. -- -- - If @length v == n0 * ... * nk@, then @length (run dftC2R v) == n0 * ... * 2*(nk-1)@. -- dftC2R :: TransformND (Complex Double) Double dftC2R = U.dftC2R {normalizationND = \ns -> constMultOutput $ 1 / toEnum (VS.product ns)}
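-- As a standalone illustration of the normalization convention documented
-- above (a sketch added for clarity, not part of the package's API): both
-- 'idft' and 'dftC2R' scale their output by the reciprocal of the product
-- of all transform dimensions.
normFactorND :: VS.Vector Int -> Double
normFactorND ns = 1 / fromIntegral (VS.product ns)
-- e.g. normFactorND (VS.fromList [4, 4, 8]) == 1 / 128 == 7.8125e-3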
using PolyChaos, Test, LinearAlgebra # values taken from https://en.wikipedia.org/wiki/Hermite_polynomials#Definition c = [ [-0.0] , [-1.0, -0.0] , [0.0, -3.0, -0.0] , [3.0, 0.0, -6.0, -0.0] , [-0.0, 15.0, 0.0, -10.0, -0.0] , [-15.0, -0.0, 45.0, 0.0, -15.0, -0.0] , [0.0, -105.0, -0.0, 105.0, 0.0, -21.0, -0.0] , [105.0, 0.0, -420.0, -0.0, 210.0, 0.0, -28.0, -0.0] , [-0.0, 945.0, 0.0, -1260.0, -0.0, 378.0, 0.0, -36.0, -0.0] , [-945.0, -0.0, 4725.0, 0.0, -3150.0, -0.0, 630.0, 0.0, -45.0, -0.0] , [0.0, -10395.0, -0.0, 17325.0, 0.0, -6930.0, -0.0, 990.0, 0.0, -55.0, -0.0] ] α, β = rm_hermite_prob(length(c)) cc = rec2coeff(α,β) @testset "rec2coeff" begin for (i, c_) in enumerate(c) @test isapprox(norm(c_ - cc[i]), 0) end end
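# For context, a minimal sketch of the computation `rec2coeff` performs,
# assuming the usual three-term convention
#   p_{k+1}(x) = (x - α_{k+1}) p_k(x) - β_{k+1} p_{k-1}(x),
# written here directly on ascending-power coefficient vectors.
# `rec2coeff_sketch` is a hypothetical name for illustration only, not part
# of PolyChaos.
function rec2coeff_sketch(α, β)
    polys = Vector{Vector{Float64}}()
    pprev = [1.0]           # p₀ = 1
    p = [-α[1], 1.0]        # p₁ = x - α₁
    push!(polys, p)
    for k in 2:length(α)
        xp = vcat(0.0, p)                                            # x ⋅ pₖ
        ap = vcat(α[k] .* p, 0.0)                                    # αₖ ⋅ pₖ, padded
        bp = vcat(β[k] .* pprev, zeros(length(xp) - length(pprev)))  # βₖ ⋅ pₖ₋₁, padded
        pnext = xp .- ap .- bp
        push!(polys, pnext)
        pprev, p = p, pnext
    end
    return polys
end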
[STATEMENT] lemma almost_all_commutative': "finite S \<Longrightarrow> (\<And>x. x \<in> S \<Longrightarrow> \<forall>\<^sub>\<infinity>i. P x (i::nat)) \<Longrightarrow> (\<forall>\<^sub>\<infinity>i. \<forall>x \<in> S. P x i)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>finite S; \<And>x. x \<in> S \<Longrightarrow> \<forall>\<^sub>\<infinity>i. P x i\<rbrakk> \<Longrightarrow> \<forall>\<^sub>\<infinity>i. \<forall>x\<in>S. P x i [PROOF STEP] using almost_all_commutative [PROOF STATE] proof (prove) using this: finite ?S \<Longrightarrow> (\<forall>x\<in>?S. \<forall>\<^sub>\<infinity>i. ?P x i) = (\<forall>\<^sub>\<infinity>i. \<forall>x\<in>?S. ?P x i) goal (1 subgoal): 1. \<lbrakk>finite S; \<And>x. x \<in> S \<Longrightarrow> \<forall>\<^sub>\<infinity>i. P x i\<rbrakk> \<Longrightarrow> \<forall>\<^sub>\<infinity>i. \<forall>x\<in>S. P x i [PROOF STEP] by blast
\documentclass{mrl} \title{Sets of spent outputs} \authors{Sarang Noether\footnote{\texttt{[email protected]}}} \affiliations{Monero Research Lab} \date{\today} \type{TECHNICAL NOTE} \ident{MRL-0007} \newtheorem{definition}{Definition} \newtheorem{example}{Example} \begin{document} \begin{abstract} This technical note generalizes the concept of spent outputs using basic set theory. The definition captures a variety of earlier work on identifying such outputs. We quantify the effects of this analysis on the Monero blockchain and give a brief overview of mitigations. \end{abstract} \section{Introduction} Transactions in Monero generate \textit{outputs} (sometimes called \textit{notes} in other literature) destined for a set of recipients by consuming one or more existing outputs under the sender's control. For each spent output in the transaction, the sender chooses a collection of arbitrary outputs from the blockchain into a \textit{ring}. The transaction includes a proof that for each ring, any of the ring's outputs is equiprobable as the spent output. A \textit{key image} (also called a \textit{tag} in other literature) is included to ensure that no spent output has been spent in any previous transaction. It is important for sender anonymity that no output in a ring is otherwise known to have been spent from external information. If an output is known to be spent, an observer can reduce the effective size of the ring as an anonymity set. If this process continues with enough outputs, the true spent output may be identified. We stress that Monero outputs cannot be linked to the wallet address of the sender, providing an additional layer of protection. At Monero's launch, senders could choose any ring size, including a ring containing only a single output; this output is obviously the true spend. With this information, it is possible to deduce other spent outputs in small rings. Later protocol upgrades added consensus-enforced minimum ring sizes that have increased over time. These increases, as well as a transition to outputs with confidential amounts, have all but eliminated the effects of these early trivial rings. However, it is possible to generate more complex sets of rings that together reveal spent outputs, even though it may not be possible to identify which transaction spent such an output. In this technical note, we define the idea of a spent output in a general way using basic set theory. We show that this definition captures several known methods for spent output identification. Using a tool available to all Monero users, we quantify the occurrence of many spent outputs on the Monero blockchain, showing that modern transactions are essentially unaffected by them. Independent concurrent work in \cite{unpub1,unpub2} (to appear) makes a similar definition and provides a somewhat more general algorithm for identifying certain spent outputs, as well as performing formal analysis. \section{Definition} Let $\mathcal{N}$ be the (finite) set of all outputs on a blockchain. We define a \textit{ring} as a subset of $\mathcal{N}$. A ring containing exactly $n$ elements is an $n$\textit{-ring}. We often use lowercase letters as generic outputs. \begin{definition} Let $\{R_i\}_{i=1}^n$ be a set of rings. We say each $R_i$ is \textit{spent} if $$\left| \bigcup_{i=1}^n R_i \right| = n.$$ An output is \textit{spent} if it is an element of a spent ring. \end{definition} \begin{example} Let $R = \{a\}$ be a 1-ring. Then the output $a$ (and $R$ itself) is spent. 
\end{example} \begin{example} Let $R = \{a,b\}$, $S = \{b,c\}$, and $T = \{a,c\}$ be rings. Then each output (and ring) is spent. \end{example} \section{Specific cases} Earlier work such as \cite{mrl0001,mrl0004,kumar,moser,wijaya} has suggested several classes of spent outputs. We review some of them briefly and show how they fit into our definition. \subsection{Chain reaction} The so-called \textit{chain reaction} method uses trivial rings to iteratively identify spent outputs. This method first marks all 1-rings as spent, and removes the corresponding outputs from all other rings. It repeats this process until no 1-rings remain. This method was also initially presented in the context of an active attack, where the adversary spends many outputs in 1-rings in an attempt to identify honest users' spent outputs. At the end of the iteration process, each spent ring contributes a single unique spent output that was the last such identified output in the ring. This means the collection of all such spent rings matches our definition. \subsection{Ring repetition} The so-called \textit{ring repetition} method uses multiple appearances of the same ring to identify spent outputs. This method simply identifies a collection of $n$ separate $n$-rings containing the same outputs, from which we can conclude that the ring is spent. This analysis was initially presented as a semi-cooperative attack, where an adversary generates ring repetitions of controlled outputs to signal to other adversarial users that the ring is spent. This method trivially matches our definition. \subsection{Subset analysis} The standard Monero toolset includes an optional blackball tool that scans the blockchain and flags certain classes of spent outputs. In addition to the chain reaction and ring repetition methods, the tool can also perform a \textit{subset analysis}. In this method, the tool iterates over each ring. For each of the $2^n-1$ (nonempty) subsets of an $n$-ring $R$, it counts the number of occurrences of the subset as a standalone ring elsewhere. If the sum of all such counts for subsets of $R$ is exactly $n$, it flags $R$ as spent. This method trivially matches our definition. \subsection{Other analysis} Absent other information, our definition completely captures on-chain spent outputs. However, other sources exist in practice that may be used to flag outputs as spent, either by attackers or by users who wish to avoid selecting such outputs in new rings. \begin{itemize} \item \textbf{Chain forks}: In the event of a chain fork, a user may choose to spend the same output on multiple forks. The construction of Monero key images means that each spend of the same output will yield the same key image. An observer who sees distinct rings on multiple forks with the same key image can conclude that the spent output must appear in the intersection of all such rings, which is statistically likely to reveal the spent output. Observe that this analysis is beyond the scope of our definition. \item \textbf{Output age distribution}: A variety of heuristics exist that may give an adversary a statistical advantage in guessing the spent output in a ring. For example, spend analysis on transparent blockchains suggests that recently-generated outputs are more likely to be spent than older outputs. We note that in practice, selection of non-spent ring elements according to a distribution matching expected spend patterns easily mitigates the effectiveness of this particular heuristic.
There exist other heuristics that we do not consider here. Such heuristics do not inherently provide proof that a given output is spent, and are beyond the scope of our definition. \end{itemize} \section{Mitigations} It is possible in theory for each user to scan her copy of the blockchain, identify all spent outputs using whatever information sources are available, and ensure that she does not choose spent outputs as ring members in future transactions. However, a complete set-theoretic characterization using our definition is impractical. Even the use of an integrated blackball tool that performs only a partial analysis may take several hours for a recent snapshot of the Monero blockchain, and would need to be regularly updated for maximal privacy. Fortunately, the risk to users from spent output identification is negligible. Chain reaction effects among small rings dissipated quickly early in the Monero blockchain's history. As mandatory minimum ring sizes have increased, the likelihood of an accidental ring union producing a set of spent outputs is vanishingly small. While an attacker could generate collections of rings maliciously designed to produce spent outputs, a non-cooperating attacker may need to perform intensive computations to detect them; further, the generating attacker can always identify her own controlled outputs regardless of their association with other rings, making such an attack unlikely to be of additional value since it costs the attacker fees. The number of spent outputs produced from a chain fork depends heavily on the number of existing outputs spent on multiple chains, and requires a large fraction of the existing network to participate. Further, modern selection algorithms for ring members strongly favor newer outputs, meaning the effects of a fork dissipate quickly over time. In practice, the combination of these effects renders such attacks generally impractical. To quantify these effects, we analyzed the Monero blockchain in October 2018 using the integrated blackball tool. This tool examined several classes of spent outputs: \begin{itemize} \item outputs included in 1-rings \item outputs in repeated rings (discussed above) \item outputs identified by subset analysis (discussed above) \item outputs identified by chain reaction analysis (discussed above) \end{itemize} We further classify these outputs by whether they use confidential amounts. Modern transactions choose only decoys that use confidential amounts. Table \ref{table:spent} shows the results of this analysis. \begin{table}[ht] \begin{center} \begin{tabular}{rrr} & Legacy outputs & Confidential outputs \\ \hline 1-ring & 12147067 & 0 \\ Repeated & 40 & 5 \\ Subset & 5916927 & 0 \\ Chain reaction & 749688 & 0 \\ \hline Total spent outputs & 18813722 & 5 \\ Total outputs on chain & 21850122 & 7445622 \\ \end{tabular} \caption{Spent output analysis for the Monero blockchain as of October 2018 using the integrated blackball tool} \label{table:spent} \end{center} \end{table} While the analysis shows that 86\% of all non-confidential outputs are identified as spent, effectively 0\% of confidential outputs are. Since modern transactions only use the latter type as decoys, the effect of spent output analysis on anonymity is completely negligible. \section{Conclusion} We have presented a simple set-theoretic definition that completely characterizes spent outputs on the Monero blockchain given only set information about the ring elements themselves. This definition captures and generalizes other analyses presented elsewhere.
While this definition does not address external information from sources like forked chains or temporal analysis, it offers insight into the selection of outputs toward optimal spend anonymity. While a complete analysis of all spent outputs on the Monero blockchain is computationally infeasible, we quantified several known classes of spent outputs and determined that modern transactions are unaffected by them. \bibliographystyle{plain} \nocite{*} \bibliography{refs} \end{document}
theory prop_43 imports Main "$HIPSTER_HOME/IsaHipster" begin datatype 'a list = Nil2 | Cons2 "'a" "'a list" fun takeWhile :: "('a => bool) => 'a list => 'a list" where "takeWhile x (Nil2) = Nil2" | "takeWhile x (Cons2 z xs) = (if x z then Cons2 z (takeWhile x xs) else Nil2)" fun dropWhile :: "('a => bool) => 'a list => 'a list" where "dropWhile x (Nil2) = Nil2" | "dropWhile x (Cons2 z xs) = (if x z then dropWhile x xs else Cons2 z xs)" fun append :: "'a list => 'a list => 'a list" where "append (Nil2) y = y" | "append (Cons2 z xs) y = Cons2 z (append xs y)" (*hipster takeWhile dropWhile append *) theorem x0 : "(append (takeWhile p xs) (dropWhile p xs)) = xs" by (tactic \<open>Subgoal.FOCUS_PARAMS (K (Tactic_Data.hard_tac @{context})) @{context} 1\<close>) end
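(* A quick executable sanity check for theorem x0 that could sit just before
   the `end` above: evaluating both sides on a small Boolean list. *)
value "append (takeWhile (\<lambda>x. x) (Cons2 True (Cons2 False Nil2)))
              (dropWhile (\<lambda>x. x) (Cons2 True (Cons2 False Nil2)))"
(* evaluates to Cons2 True (Cons2 False Nil2), i.e. the original list *)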
module Data.LeftistHeap import Decidable.Order %default total mutual export data Heap : .(constraint : Ordered a rel) -> .(count : Nat) -> Type where Empty : Heap _ Z Node : (n : Nat) -> (value : a) -> {countLeft : Nat} -> (left : Heap constraint countLeft) -> {countRight : Nat} -> (right : Heap constraint countRight) -> .{auto fitsLeft : Fits value left} -> .{auto fitsRight : Fits value right} -> .{auto leftistPrf : LTE (rank right) (rank left)} -> .{auto rankPrf : n = S $ rank right} -> Heap constraint (S $ countLeft + countRight) Fits : {constraint : Ordered a rel} -> a -> Heap constraint cnt -> Type Fits {cnt = Z} _ _ = () Fits {cnt = S _} x h = rel x (findMin h) rank : Heap _ _ -> Nat rank Empty = Z rank (Node _ _ _ right) = S $ rank right export findMin : .{constraint : Ordered a _} -> Heap constraint (S _) -> a findMin (Node _ value _ _) = value export empty : Heap _ Z empty = Empty makeFit : .{constraint : Ordered a rel} -> .(fitsValue : a) -> (value : a) -> {count1 : Nat} -> {count2 : Nat} -> (h1 : Heap constraint count1) -> (h2 : Heap constraint count2) -> .{auto fits1 : Fits value h1} -> .{auto fits2 : Fits value h2} -> .{auto relPrf : rel fitsValue value} -> Subset (Heap constraint (S $ count1 + count2)) (Fits fitsValue) makeFit {count1} {count2} {relPrf} fitsValue value h1 h2 with (order {to = LTE} (rank h1) (rank h2)) | (Left _) = rewrite plusCommutative count1 count2 in Element (Node _ value h2 h1) relPrf | (Right _) = Element (Node _ value h1 h2) relPrf covering mergeHelper : .{constraint : Ordered a rel} -> .{value : a} -> {count1 : Nat} -> {count2 : Nat} -> (h1 : Heap constraint count1) -> (h2 : Heap constraint count2) -> .{auto fits1 : Fits value h1} -> .{auto fits2 : Fits value h2} -> Subset (Heap constraint (count1 + count2)) (Fits value) mergeHelper Empty Empty = Element Empty () mergeHelper {fits1} h@(Node {countLeft} {countRight} n _ _ _) Empty = rewrite plusZeroRightNeutral (countLeft + countRight) in Element h fits1 mergeHelper {fits2} Empty h@(Node {countLeft} {countRight} n _ _ _) = Element h fits2 mergeHelper {value} {rel} (Node {countLeft = countLeft1} {countRight = countRight1} _ value1 left1 right1) (Node {countLeft = countLeft2} {countRight = countRight2} _ value2 left2 right2) = case order {to = rel} value1 value2 of Left orderPrf => rewrite sym $ plusAssociative countLeft1 countRight1 (S $ countLeft2 + countRight2) in let (Element mergedHeap fitsMergedHeap) = mergeHelper {value = value1} right1 (Node _ value2 left2 right2) in makeFit value value1 left1 mergedHeap Right orderPrf => rewrite sym $ plusSuccRightSucc (countLeft1 + countRight1) (countLeft2 + countRight2) in rewrite plusCommutative countLeft2 countRight2 in rewrite plusAssociative (countLeft1 + countRight1) countRight2 countLeft2 in let (Element mergedHeap fitsMergedHeap) = mergeHelper {value = value2} (Node _ value1 left1 right1) right2 in makeFit value value2 mergedHeap left2 export merge : .{constraint : Ordered a rel} -> {count1 : Nat} -> {count2 : Nat} -> (h1 : Heap constraint count1) -> (h2 : Heap constraint count2) -> Heap constraint (count1 + count2) merge Empty Empty = Empty merge {count1} h Empty = rewrite plusZeroRightNeutral count1 in h merge Empty h = h merge h1@(Node _ _ _ _) h2@(Node _ _ _ _) = assert_total $ case order {to = rel} (findMin h1) (findMin h2) of Left orderPrf => case mergeHelper {value = (findMin h1)} h1 h2 {fits1 = reflexive (findMin h1)} of Element h _ => h Right orderPrf => case mergeHelper {value = (findMin h2)} h1 h2 {fits2 = reflexive (findMin h2)} of Element h 
_ => h export insert : .{constraint : Ordered a _} -> .{n : Nat} -> a -> Heap constraint n -> Heap constraint (S n) insert value heap = merge (Node 1 value Empty Empty) heap export deleteMin : .{constraint : Ordered a _} -> {n : Nat} -> Heap constraint (S n) -> Heap constraint n deleteMin (Node _ _ left right) = merge left right namespace OrderedLeftistHeap export data CountedHeap : .(constraint : Ordered a rel) -> Type where MkCountedHeap : (n : Nat) -> (Heap constraint n) -> CountedHeap constraint export empty : CountedHeap _ empty = MkCountedHeap Z empty export count : CountedHeap _ -> Nat count (MkCountedHeap n _) = n export findMin : .{constraint : Ordered ty _} -> CountedHeap constraint -> Maybe ty findMin (MkCountedHeap Z _) = Nothing findMin (MkCountedHeap (S _) h) = Just $ findMin h export merge : .{constraint : Ordered _ _} -> CountedHeap constraint -> CountedHeap constraint -> CountedHeap constraint merge (MkCountedHeap count1 h1) (MkCountedHeap count2 h2) = MkCountedHeap (count1 + count2) (merge h1 h2) export insert : .{constraint : Ordered ty _} -> ty -> CountedHeap constraint -> CountedHeap constraint insert a (MkCountedHeap n h) = MkCountedHeap (S n) (insert a h) export deleteMin : .{constraint : Ordered ty _} -> CountedHeap constraint -> CountedHeap constraint deleteMin orig@(MkCountedHeap Z h) = orig deleteMin (MkCountedHeap (S n) h) = MkCountedHeap n (deleteMin h)
theorem Cauchy_integral_formula_convex_simple: assumes "convex S" and holf: "f holomorphic_on S" and "z \<in> interior S" "valid_path \<gamma>" "path_image \<gamma> \<subseteq> S - {z}" "pathfinish \<gamma> = pathstart \<gamma>" shows "((\<lambda>w. f w / (w - z)) has_contour_integral (2*pi * \<i> * winding_number \<gamma> z * f z)) \<gamma>"
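(* In conventional notation, the conclusion is Cauchy's integral formula on a
   convex set: for f holomorphic on S, z an interior point of S, and γ a
   closed valid path in S - {z},
       ∮_γ f(w)/(w - z) dw = 2πi · n(γ, z) · f(z),
   where n(γ, z) is the winding number of γ around z. *)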
(* Title: HOL/ex/Cartouche_Examples.thy Author: Makarius *) section \<open>Some examples with text cartouches\<close> theory Cartouche_Examples imports MainRLT keywords "cartouche" :: diag begin subsection \<open>Regular outer syntax\<close> text \<open>Text cartouches may be used in the outer syntax category \<open>text\<close>, as alternative to the traditional \<open>verbatim\<close> tokens. An example is this text block.\<close> \<comment> \<open>The same works for small side-comments.\<close> notepad begin txt \<open>Cartouches work as additional syntax for embedded language tokens (types, terms, props) and as a replacement for the \<open>altstring\<close> category (for literal fact references). For example:\<close> fix x y :: 'a assume \<open>x = y\<close> note \<open>x = y\<close> have \<open>x = y\<close> by (rule \<open>x = y\<close>) from \<open>x = y\<close> have \<open>x = y\<close> . txt \<open>Of course, this can be nested inside formal comments and antiquotations, e.g. like this @{thm \<open>x = y\<close>} or this @{thm sym [OF \<open>x = y\<close>]}.\<close> have \<open>x = y\<close> by (tactic \<open>resolve_tac \<^context> @{thms \<open>x = y\<close>} 1\<close>) \<comment> \<open>more cartouches involving ML\<close> end subsection \<open>Outer syntax: cartouche within command syntax\<close> ML \<open> Outer_Syntax.command \<^command_keyword>\<open>cartouche\<close> "" (Parse.cartouche >> (fn s => Toplevel.keep (fn _ => writeln s))) \<close> cartouche \<open>abc\<close> cartouche \<open>abc \<open>\<alpha>\<beta>\<gamma>\<close> xzy\<close> subsection \<open>Inner syntax: string literals via cartouche\<close> ML \<open> local fun mk_char (s, pos) = let val c = if Symbol.is_ascii s then ord s else if s = "\<newline>" then 10 else error ("String literal contains illegal symbol: " ^ quote s ^ Position.here pos); in list_comb (Syntax.const \<^const_syntax>\<open>Char\<close>, String_Syntax.mk_bits_syntax 8 c) end; fun mk_string [] = Const (\<^const_syntax>\<open>Nil\<close>, \<^typ>\<open>string\<close>) | mk_string (s :: ss) = Syntax.const \<^const_syntax>\<open>Cons\<close> $ mk_char s $ mk_string ss; in fun string_tr content args = let fun err () = raise TERM ("string_tr", args) in (case args of [(c as Const (\<^syntax_const>\<open>_constrain\<close>, _)) $ Free (s, _) $ p] => (case Term_Position.decode_position p of SOME (pos, _) => c $ mk_string (content (s, pos)) $ p | NONE => err ()) | _ => err ()) end; end; \<close> syntax "_cartouche_string" :: \<open>cartouche_position \<Rightarrow> string\<close> ("_") parse_translation \<open> [(\<^syntax_const>\<open>_cartouche_string\<close>, K (string_tr (Symbol_Pos.cartouche_content o Symbol_Pos.explode)))] \<close> term \<open>\<open>\<close>\<close> term \<open>\<open>abc\<close>\<close> term \<open>\<open>abc\<close> @ \<open>xyz\<close>\<close> term \<open>\<open>\<newline>\<close>\<close> subsection \<open>Alternate outer and inner syntax: string literals\<close> subsubsection \<open>Nested quotes\<close> syntax "_string_string" :: \<open>string_position \<Rightarrow> string\<close> ("_") parse_translation \<open> [(\<^syntax_const>\<open>_string_string\<close>, K (string_tr Lexicon.explode_string))] \<close> term \<open>""\<close> term \<open>"abc"\<close> term \<open>"abc" @ "xyz"\<close> term \<open>"\<newline>"\<close> term \<open>"\001\010\100"\<close> subsubsection \<open>Further nesting: antiquotations\<close> ML \<open> \<^term>\<open>""\<close>; \<^term>\<open>"abc"\<close>; \<^term>\<open>"abc" @ "xyz"\<close>; 
\<^term>\<open>"\<newline>"\<close>; \<^term>\<open>"\001\010\100"\<close>; \<close> text \<open> \<^ML>\<open> ( \<^term>\<open>""\<close>; \<^term>\<open>"abc"\<close>; \<^term>\<open>"abc" @ "xyz"\<close>; \<^term>\<open>"\<newline>"\<close>; \<^term>\<open>"\001\010\100"\<close> ) \<close> \<close> subsubsection \<open>Uniform nesting of sub-languages: document source, ML, term, string literals\<close> text \<open> \<^ML>\<open> ( \<^term>\<open>""\<close>; \<^term>\<open>"abc"\<close>; \<^term>\<open>"abc" @ "xyz"\<close>; \<^term>\<open>"\<newline>"\<close>; \<^term>\<open>"\001\010\100"\<close> ) \<close> \<close> subsection \<open>Proof method syntax: ML tactic expression\<close> ML \<open> structure ML_Tactic: sig val set: (Proof.context -> tactic) -> Proof.context -> Proof.context val ml_tactic: Input.source -> Proof.context -> tactic end = struct structure Data = Proof_Data(type T = Proof.context -> tactic fun init _ = K no_tac); val set = Data.put; fun ml_tactic source ctxt = let val ctxt' = ctxt |> Context.proof_map (ML_Context.expression (Input.pos_of source) (ML_Lex.read "Theory.local_setup (ML_Tactic.set (fn ctxt: Proof.context => (" @ ML_Lex.read_source source @ ML_Lex.read ")))")); in Data.get ctxt' ctxt end; end \<close> subsubsection \<open>Explicit version: method with cartouche argument\<close> method_setup ml_tactic = \<open> Scan.lift Args.cartouche_input >> (fn arg => fn ctxt => SIMPLE_METHOD (ML_Tactic.ml_tactic arg ctxt)) \<close> lemma \<open>A \<and> B \<longrightarrow> B \<and> A\<close> apply (ml_tactic \<open>resolve_tac \<^context> @{thms impI} 1\<close>) apply (ml_tactic \<open>eresolve_tac \<^context> @{thms conjE} 1\<close>) apply (ml_tactic \<open>resolve_tac \<^context> @{thms conjI} 1\<close>) apply (ml_tactic \<open>ALLGOALS (assume_tac \<^context>)\<close>) done lemma \<open>A \<and> B \<longrightarrow> B \<and> A\<close> by (ml_tactic \<open>blast_tac ctxt 1\<close>) ML \<open>@{lemma \<open>A \<and> B \<longrightarrow> B \<and> A\<close> by (ml_tactic \<open>blast_tac ctxt 1\<close>)}\<close> text \<open>\<^ML>\<open>@{lemma \<open>A \<and> B \<longrightarrow> B \<and> A\<close> by (ml_tactic \<open>blast_tac ctxt 1\<close>)}\<close>\<close> subsubsection \<open>Implicit version: method with special name "cartouche" (dynamic!)\<close> method_setup "cartouche" = \<open> Scan.lift Args.cartouche_input >> (fn arg => fn ctxt => SIMPLE_METHOD (ML_Tactic.ml_tactic arg ctxt)) \<close> lemma \<open>A \<and> B \<longrightarrow> B \<and> A\<close> apply \<open>resolve_tac \<^context> @{thms impI} 1\<close> apply \<open>eresolve_tac \<^context> @{thms conjE} 1\<close> apply \<open>resolve_tac \<^context> @{thms conjI} 1\<close> apply \<open>ALLGOALS (assume_tac \<^context>)\<close> done lemma \<open>A \<and> B \<longrightarrow> B \<and> A\<close> by (\<open>resolve_tac \<^context> @{thms impI} 1\<close>, \<open>eresolve_tac \<^context> @{thms conjE} 1\<close>, \<open>resolve_tac \<^context> @{thms conjI} 1\<close>, \<open>assume_tac \<^context> 1\<close>+) subsection \<open>ML syntax\<close> text \<open>Input source with position information:\<close> ML \<open> val s: Input.source = \<open>abc123def456\<close>; Output.information ("Look here!" ^ Position.here (Input.pos_of s)); \<open>abc123def456\<close> |> Input.source_explode |> List.app (fn (s, pos) => if Symbol.is_digit s then Position.report pos Markup.ML_numeral else ()); \<close> end
using LambertW """ struct NoNoise end """ struct NoNoise end export NoNoise """ mutable struct RectLinEscapeNoise β::Float64 endofrefrperiod::Array{Float64, 1} end """ mutable struct RectLinEscapeNoise β::Float64 endofrefrperiod::Array{Float64, 1} end export RectLinEscapeNoise function RectLinEscapeNoise(array; β = 0.1) RectLinEscapeNoise(β, array) end @inline function _gettimetothreshold(v, input, threshold, tau, v_reset) (v < threshold) ? tau*log((input-v)/(input-threshold)) : 0 end @inline function _getspiketimesampleparams(v, input, threshold, tau, beta, v_reset) a = beta*(input-threshold) b = beta*tau*(input-v) return a, b end @inline function getspiketimesample(v, input, threshold, t, tau, beta, v_reset, isinrefrperiod, t_refr_rest) threshold >= input && return typeof(v)(Inf) isinrefrperiod && (v = v_reset) #a, b = _getspiketimesampleparams(v, input, threshold, tau, beta, v_reset) y = -log(1-rand()) a = beta*(input-threshold) if v < threshold tsample = tau*(1+lambertw(-exp(-1-y/(a*tau)))) + y/a + _gettimetothreshold(v, input, threshold, tau, v_reset) else b = beta*tau*(input-v) tsample = tau*lambertw(-b*exp(-(y+b)/(a*tau))/(a*tau)) + (b + y)/a end isinrefrperiod ? (tsample += t_refr_rest) : (tsample += t) #relative to absolute time return tsample end function _evaluateanalyticdistribution!(ISIdistr, v, input, threshold, tau, beta, v_reset, t_refr_rest, trise, i) if v < threshold if i <= trise+t_refr_rest push!(ISIdistr,0.) else push!(ISIdistr,beta*((input-threshold)-(input-v)*exp(-(i-t_refr_rest)/tau))* exp(-beta*(input-threshold)*(i-t_refr_rest-trise-tau)-beta*tau*(input-v)*(exp(-(i-t_refr_rest)/tau)))) end else push!(ISIdistr,beta*(input-threshold-(input-v)*exp(-i/tau))* exp(-beta*i*(input-threshold)+beta*tau*(input-v)*(1-exp(-i/tau)))) end end function getanalyticdistribution(v, input, threshold, t, tau, beta, v_reset, isinrefrperiod, t_refr_rest; t_lag = []) threshold >= input && error("input too small, no spike ever") isinrefrperiod && (v = v_reset) ISIdistr = [] trise = _gettimetothreshold(v, input, threshold, tau, v_reset) if isempty(t_lag) for i in 0:0.01:100 _evaluateanalyticdistribution!(ISIdistr, v, input, threshold, tau, beta, v_reset, t_refr_rest, trise, i) push!(t_lag,i) end else for i in t_lag _evaluateanalyticdistribution!(ISIdistr, v, input, threshold, tau, beta, v_reset, t_refr_rest, trise, i) end end t_lag .+= t return t_lag, ISIdistr, trise end export getanalyticdistribution
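# Hypothetical usage of `getspiketimesample` above; all numbers are
# illustrative placeholders (membrane potential in mV, time in ms), not
# values prescribed by this module.
v0, input, θ = -65.0, 20.0, -50.0     # below threshold, suprathreshold drive
τ, β, v_reset = 10.0, 0.1, -70.0
t_spike = getspiketimesample(v0, input, θ, 0.0, τ, β, v_reset, false, 0.0)
# `t_spike` is an absolute time: the deterministic rise to threshold plus a
# stochastic delay drawn via y = -log(1 - rand()) and the Lambert W function,
# so rerunning the call yields a fresh sample.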
module Parser where import Data.Ratio import Data.Complex import Number import LispData import Text.ParserCombinators.Parsec hiding (spaces) import Text.Parsec.Number --TODO Learn monads again and check what mapM is doing -- Recognizes if a character is a valid Scheme symbol symbol :: Parser Char symbol = oneOf "-.!#$%&|*+/:<=>?@^_~" -- Skips one or more spaces spaces1 :: Parser () spaces1 = skipMany1 space escapedChars :: Parser Char escapedChars = do _ <- char '\\' x <- oneOf "\"tnr\\" case x of 't' -> return '\t' 'n' -> return '\n' 'r' -> return '\r' '\\' -> return '\\' _ -> return x -- Parses a string which starts with a " and ends with a " -- TODO \\t \\n \\r \\ \" parseString :: Parser LispVal parseString = do _ <- char '"' --Starts with a " -- Stops at " x <- many $ escapedChars <|> noneOf "\"" _ <- char '"' --ends with a " return $ String x -- Parses a symbol parseAtom :: Parser LispVal parseAtom = do first <- (letter <|> symbol) <?> "I HATE PARSEC" --first <- choice symlpars --the following chars must be one of letter, digit or symbol rest <- many (letter <|> digit <|> symbol) let atom = first :rest --catch special atoms case atom of "#t" -> return $ Bool True "#f" -> return $ Bool False ('-':x:_) -> case x of ' ' -> return $ Atom atom _ -> parseNumber --return $ (LispNumber . Integer . read) atom --TODO THIS IS NOT GOOD, try to parse -3o _ -> {-trace ("attom"++ show atom)-} (return $ Atom atom) parseComplex :: Parser LispVal parseComplex = do f <- parseNumber s <- char '+' n <- parseNumber i <- char 'i' let x = case f of ((LispNumber (Integer n))) -> (fromIntegral n :: Double) ((LispNumber (Rational n))) -> (realToFrac n) ((LispNumber (Real n))) -> n _ -> 0 let y = case n of ((LispNumber (Integer p))) -> (fromIntegral p :: Double) ((LispNumber (Rational p))) -> (realToFrac p) ((LispNumber (Real p))) -> p _ -> 0 return $ (LispNumber . Complex) (x :+ y) parseNegFloat :: Parser LispVal parseNegFloat = do s <- sign beforeDot <- int _ <- char '.' <?> "Floating Point Parse Error: expecting ." afterDot <- int let d = s (read (show beforeDot ++ "." ++ (show afterDot))) return $ (LispNumber . Real) d parseNegRational :: Parser LispVal parseNegRational = do top <- int _ <- char '/' bottom <- int return $ (LispNumber . Rational) (top % bottom) parseInteger :: Parser LispVal parseInteger = do int' <- many1 digit return $ (LispNumber . Integer . read) int' parseNegInteger :: Parser LispVal parseNegInteger = do _ <- char '-' int' <- many1 digit return $ (LispNumber . Integer . negate . read) int' parseVector :: Parser LispVal parseVector = do _ <- char '(' vec <- sepBy parseExpr spaces1 _ <- char ')' return $ Vector vec parseList :: Parser LispVal -- Parses lisp expressions which are separated by one or more whitespace characters parseList = List <$> sepBy parseExpr spaces1 parseDottedList :: Parser LispVal parseDottedList = do -- expressions before the dot, each followed by whitespace head' <- endBy parseExpr spaces1 -- parses a dot then exactly one space and saves the expression after the space tail' <- char '.'
>> spaces1 >> parseExpr return $ DottedList head' tail' -- TODO READ r5rs parseQuoted :: Parser LispVal parseQuoted = do _ <- char '\'' x <- parseExpr return $ List [Atom "quote", x] parseNumber :: Parser LispVal parseNumber = try parseNegFloat <|> try parseNegRational <|> try parseNegInteger --TODO IF SOMETHING BLOWS UP IN MY FACE, IT'S BECAUSE OF THIS <|> parseInteger -- etc parseExpr :: Parser LispVal parseExpr = try parseAtom --first try to parse an atom <|> try parseComplex <|> parseNumber <|> parseString -- if this fails try to parse a string <|> parseQuoted <|> do _ <- char '(' -- parses a normal list until it encounters a dot, at which point it will go back and start to parse -- a dotted list x <- try parseList <|> parseDottedList _ <- char ')' return x
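-- Hypothetical GHCi session exercising `parseExpr` (the rendering of the
-- Right values assumes LispData defines a structural Show instance for
-- LispVal):
--
-- >>> parse parseExpr "scheme" "(+ 1 2)"
-- Right (List [Atom "+",LispNumber (Integer 1),LispNumber (Integer 2)])
--
-- >>> parse parseExpr "scheme" "'foo"
-- Right (List [Atom "quote",Atom "foo"])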
import MyNat.Definition namespace MyNat open MyNat /-! # Advanced Proposition world ## Level 1: the `constructor` tactic. The logical symbol `∧` means "and". If `P` and `Q` are propositions, then `P ∧ Q` is the proposition "`P` and `Q`". If your *goal* is `P ∧ Q` then you can make progress with the [`constructor` tactic](../Tactics/constructor.lean.md)., which turns one goal `⊢ P ∧ Q` into two goals, namely `⊢ P` and `⊢ Q`. In the level below, after a `constructor`, you can finish off the two new sub-goals with the `exact` tactic since both `p` and `q` provide exactly what we need. You could also use the `assumption` tactic. ## Lemma If `P` and `Q` are true, then `P ∧ Q` is true. -/ example (P Q : Prop) (p : P) (q : Q) : P ∧ Q := by constructor exact p exact q /-! Next up [Level 2](./Level2.lean.md) -/
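/-! As noted above, the `assumption` tactic also closes both sub-goals; here is
the same lemma again with that variant: -/

example (P Q : Prop) (p : P) (q : Q) : P ∧ Q := by
  constructor
  assumption
  assumption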
import Base: show """ The structure used for robot requests. """ mutable struct RobotRequest id::String name::String description::String status::String end """ The structure returned when any robot requests are made. """ mutable struct RobotResponse robotId::String userId::Union{Nothing, String} status::String id::String name::String description::String createdTimestamp::String lastUpdatedTimestamp::String links::Dict{String, Any} end function show(io::IO, r::RobotResponse) println(io, "GraffSDK Robot:") println(io, " - ID: $(r.id)") println(io, " - Name: $(r.name)") println(io, " - Description: $(r.description)") println(io, " - Status: $(r.status)") println(io, " - Created: $(r.createdTimestamp)") println(io, " - Last Updated: $(r.lastUpdatedTimestamp)") end
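# Hypothetical usage of the structures above; IDs, timestamps, and field
# values are illustrative placeholders, not real GraffSDK data.
req = RobotRequest("robot-1", "Rover", "Warehouse mapping robot", "ACTIVE")
resp = RobotResponse("robot-1", nothing, "ACTIVE", "id-123", "Rover",
                     "Warehouse mapping robot", "2018-10-01T00:00:00Z",
                     "2018-10-02T00:00:00Z", Dict{String, Any}())
show(stdout, resp)   # prints the formatted summary defined above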
/- Copyright (c) 2022 Sina Hazratpour. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. ---------------- # Introduction to Lean Sina Hazratpour Introduction to Proof MATH 301, Johns Hopkins University, Fall 2022 -/ import ..prooflab set_option pp.beta true set_option pp.generalized_field_notation false -- set_option pp.all true /- In this lesson, we learn the very basics of the language of Lean, and how to write some mathematical statements and their proofs in Lean. We also learn about tactics and how to write some trivial proofs using the following tactics: 1. `refl` 2. `exact` 3. `rw` and its variants 4. `change` -/ namespace PROOFS /-! # What is Lean Lean is a functional programming language (developed by Leonardo de Moura at Microsoft Research) that makes it easy to write correct and maintainable code. We use Lean as an __interactive theorem prover__. - Lean helps us prove the __correctness__ of mathematical statements, - Lean lets us develop __automation__ to support __mathematical reasoning__ and __proof__. At its core, Lean is what is known as a __type checker__. This means that we can write expressions and ask the system to check that they are well formed, and also ask the system to tell us what type of object they denote. - Lean parses input, interprets it as expressions in a formal logical language, and - Lean determines the types of these formal expressions. -/ /-! ## Types (Collections) In Lean things are largely organized as __types__ (collections) and __terms__ (elements). __In Lean every expression is typed__. You can use the `#check` command to print its type. Some familiar number systems such as __natural numbers (ℕ)__, __integers (ℤ)__, __rational numbers (ℚ)__, and __real numbers (ℝ)__ are encoded as types in Lean. (Lean's `#check` command tells us the type of the object we have constructed.) -/ #check ℕ -- the type of natural numbers #check ℤ -- the type of integers (ℤ is a special symbol typed by \Z ) #check ℚ -- rationals #check ℝ -- real numbers #check empty -- All types, except the __empty__ type, have terms (or elements, or inhabitants). For instance `0 : ℕ`. We read the expression `a : A` as `a is an A`. So `0 : ℕ` says `0 is a natural number`, and `0 : ℤ` says that `0 is an integer`. section -- `section` declares an environment #check bool -- the type of booleans #check tt #check ff -- you can put parentheses around an infix operation (like disjunction and conjunction) to talk about the operation itself. #check (∨) #check (∧) #check 0 #check (0 : ℤ) #check 2 + 2 #check (2 : ℝ) #check ([1, 2, 3] : list ℤ) -- bracket notation for lists #check [(1 : ℕ), (-1 : ℤ) ] -- gives an error because the members of the list must have the same type #check (1, 2, 3) -- round brackets for tuples #check ((1 : ℝ), (-1 : ℝ)) #check ((1,-1) : ℝ × ℝ) #check λ x, x + 1 -- the empty type (`empty`) does not have any terms: we cannot find any `a` such that `a : empty`. end section -- we use `sections` to limit the __scope__ of a variable. Whatever variable we introduce in between `section ... end` remains hidden from the rest of the file. #check 2 + 2 set_option pp.all true #check 2 + 2 end /- We can introduce new types as follows: -/ variable A : Type -- "Let A be a type" variable A' : Type variables B C X Y Z U V W : Type /- We do not know anything about types `A .... W` except that they are types. We don't even know if they have any terms. Note that the variables `A ...
W` are global variables which means they are accessible throughout the entire file, and any file which imports this file. -/ /-! ### New Types from the Old -/ -- We know that `A` and `B` are types because in the previous section we declared so by `variable A : Type`. Below we define new types from the old ones by describing what the terms of these new types are. section #check A × B -- the type of pairs: the terms of `A × B` are pairs `(a,b)` where `a : A` and `b : B`. Therefore, if `a : A` and `b : B` then `(a,b) : A × B`. #check ℕ × ℕ #check (1, 2) #check A ⊕ B -- the sum type #check A → B -- the type of functions which we talk about in the lecture: a term of type A → B is a function which takes a term of A and returns a term of B. #check λ (x : ℤ), x + 1 #check λ (x : ℝ), x + 2 #check list A -- for any type `A` we can form a new type `list A`: the terms of `list A` are lists (finite ordered sequences) of terms of `A`. #check list ℕ #check [0,2,5] -- Lean infers the type of the list `[0,2,5]` #check [2,0,5] -- this is a different list than the one above #check list empty #check ([] : list empty) -- the empty list of the empty type #check ([] : list ℕ) -- the empty list of natural numbers #check (-2 : ℤ) #check ([-2] : list ℤ) -- the list that has only one member in it, namely `-2 : ℤ`; the round brackets tell Lean to consider `-2` as an integer #check [2, 1] #check ([-2, 1] : list ℤ) #check [1,16,2,10,1,1,1] end -- Some new types out of the type of natural numbers section #check ℕ × ℕ #check list ℕ #check ℕ → list (ℕ × ℕ) end /-! ### Propositions - Some Lean expressions are __propositions__, i.e. assertions that can be judged to be true or false. For instance `2 + 2 = 5` is a proposition, albeit a false one. - Propositions are terms of type `Prop`. Here are some examples: -/ section #check 2 + 2 = 4 #check 2 + 2 = 5 -- the command #check does not check the validity, only the type #check ∀ x y z n : ℕ, n > 2 → x * y * z ≠ 0 → x^n + y^n ≠ z^n end /- We introduce a proposition by the expression `P : Prop`. -/ section variables P Q : Prop -- let "P and Q be propositions" #check P #check Prop -- this is the type of all propositions. So, we have `P : Prop : Type`. We can think of propositions (such as `P`) as types and proofs of propositions as terms of types. So you can have `rfl : 2 + 2 = 4 : Prop : Type`. #check P ∧ Q -- the conjunction of `P` and `Q` (and) #check P ∨ Q -- the disjunction of `P` and `Q` (or) #check P → Q -- the implication (if `P` then `Q`) #check P ↔ Q -- biimplication (`P` if and only if `Q`) end /- For `P : Prop`, we read `hp : P` as "hp is a proof of P", or we have a hypothesis "P" verified by "hp", or "P is true by virtue of hp". -/ section -- Terms of propositions are proofs of propositions. #check (rfl : 1 = 1) -- `rfl` is a proof of reflexivity of equality, things like `x = x` #check (rfl : 2 + 2 = 4) --rfl refers to "reflexivity"; `rfl` works because the "normal" forms of `2 + 2` and `4` are syntactically the same. #check rfl #check @rfl -- this gives a more explicit type of `rfl` #check ∀ x y : ℤ, x + y = y + x -- says "for all integers x and y, the sum x + y is equal to the sum y + x." #check (rfl : ∀ x y : ℤ, x + y = y + x) variables x y : ℤ #check (rfl : x + y = y + x) -- syntactically these expressions are not the same. `rfl` works for syntactic equality and definitional equality, while `x + y` and `y + x` are only propositionally equal. variable a : ℕ #check (rfl : a + 0 = x) -- Lean knows `a` is a natural number because we told it so.
Then it infers that the `+` operation is an operation between two natural numbers. Then it infers that `0` is a natural number. And then it infers that the equality `=` is between two natural numbers. And then it expects `x` to be a natural number, but we told Lean `variable x y : ℤ`. So these are not equal for simple type-checking reasons. #check (rfl : a + 0 = a) -- a bit weird: what's going on is that Lean knows that it has to use `rfl` to establish syntactic equality, but definitionally `a` is the same as `a + 0`. So it converts the second `a` into `a + 0` and then uses `rfl`. #check (add_comm : ∀ x y : ℤ, x + y = y + x) -- here we invoke a lemma. We borrowed it from the Lean library. end /- The term `rfl` is the (trivial) proof that any term is equal to itself. Here is a term-style proof of ` 2 = 2 `. Here `2 = 2` is the statement we want to prove and `rfl` is the proof of it. -/ /- To assert a statement in Lean we use the `example` environment which includes - context: the parameters which are used in the statement of the lemma. Think of context as a way of telling Lean "let x, y, z, and n be natural numbers". A statement only makes sense in an appropriate context. - the statement we want to prove - the proof of the statement (which comes in two styles: term style and tactic style). A general __term style__ form of an `example` is as follows: example (context_of_our_assumptions) : statement_of_lemma := proof_of_lemma -/ -- in the example below the context is empty. example : 2 = 2 := rfl -- An assertion can have a nonempty __context__: below, the context comprises four natural-number variables `a b c d`. The statement we want to prove is that `(a + b) * (c + d) = (a + b) * (c + d)`, and the proof is by reflexivity. example (a b c d : ℕ) : (a + b) * (c + d) = (a + b) * (c + d) := rfl -- The term `rfl` proves more than syntactic equalities; it proves the equalities between terms which are __definitionally equal__. example : 2 + 3 = 5 := rfl -- `( ( ) ( ) ) ( ( ) ( ) ( ) ) = ( ( ) ( ) ( ) ( ) ( ) )` /-! ## Tactics - There is another way of writing proofs which is closer to how mathematicians write their proofs in maths books and journal papers. For instance, you might have seen a proof being written in the following style: "To prove the _forward direction_, first _unfold_ the definition. Suppose `x` is an arbitrary blah which satisfies the property blah blah. By definition there is some `y` which satisfies the property blah blah blah. Now, _apply_ the previous lemma to `y`, and _specialize_ the result to when `y` is `y₀`." - These are __instructions__ given by the author to the reader for finding the relevant proof that the author has in mind. In a similar way, tactics are instructions that we give to proof assistants (in our case Lean) to construct a proof term. __Tactics__ are like cooking recipes for making a dish while the __term proofs__ are the food. - The point of tactics -- and this point becomes clear after the third lecture -- is that when we are faced with constructing a complicated proof, we can break down the process into __multiple intermediary easier goals__ which we know how to solve. This is such a crucial technique, not only in Lean but in all of mathematics. And while we are constructing these smaller proofs to be later composed, we interact with Lean to see the result of our instructions. - Like informal proofs, proof tactics support an incremental style of writing proofs, in which you unfold a proof and work on goals one step at a time.
- A general form of an `example` in __tactic style__: example (context_of_our_assumptions) : statement_we_want_to_prove := begin tactic_1 [...], tactic_2 [...], ⋮ tactic_n [...], end **Note**: Even if we prove a theorem in tactic mode, what is __stored__ in Lean is the __proof term__ corresponding to this tactic proof. Lean has an automatic way of converting a tactic proof to a term proof, and we usually do not see this unless we use the command `show_term`. -/ /-! ### Tactic refl -/ example : 3 = 1 + 2 := begin refl, -- `refl` is a __tactic__ corresponding to the reflexivity proof end -- We use `refl` instead in the _tactic mode_: example (a b c d : ℕ) : (a + b) * (c + d) = (a + b) * (c + d) := begin refl, end -- set_option trace.type_context.is_def_eq true -- set_option trace.type_context.is_def_eq_detail true example : 2 + 3 = 5 := begin refl, end example : (0 : ℕ) + (0 : ℕ) = (0 : ℕ) := -- experiment with changing the first/last ℕ to ℤ begin refl, end example : (0 : ℤ) + (0 : ℕ) = (0 : ℤ) := -- experiment with changing the first/last ℕ to ℤ -- this has something to do with coercion begin refl, end example (x : ℕ) : x + 0 = x := -- true by definition begin refl, end #check nat.add example (x : ℕ) : 0 + x = x := -- true by proof begin sorry end example : (2 : ℝ) + (2 : ℝ) = (4 : ℝ) := begin refl, end example : (2 : ℝ) + (2 : ℕ) = (4 : ℝ) := begin -- refl, refl, -- why did the previous one work? end -- try `refl` below; does it work? example (x y : ℝ) : x + y = y + x := begin sorry end -- what about here? does `refl` work? example : ∀ x y : ℝ, (x + y) ^ 3 = x ^ 3 + 3 * x ^ 2 * y + 3 * x * y ^ 2 + y ^ 3 := begin sorry end /-! ### Tactic exact The `exact` tactic allows us to provide direct proof terms. If the goal is ` ⊢ X ` then `exact hp` will close the goal if and only if `hp : X`. -/ -- Uncomment the lines below to see various other ways that Lean can display info: -- set_option pp.notation false -- set_option pp.parens true example (h : 2 + 2 = 5) : 2 + 2 = 5 := -- The goal is to construct a proof of ` 2 + 2 = 5 `. begin exact h, -- this is like copying; we copy `h` from the local context of our hypotheses end example (h : 2 + 2 = 5) : 2 + 2 = 4 := begin refl, end example (x : ℕ) (h₁ : 5 = 2 + x) (h₂ : 2 + x = 4) : 5 = 2 + x := -- The goal is to construct a proof of ` 5 = 2 + x `. begin exact h₁, end /-! ### Tactic rewrite The equality symbol `=` is meant to formalize what we mean when we say “something is the same as something else” (e.g. “ascorbic acid is vitamin C” or “5+7=12”). We are asserting that two different descriptions refer to the same object. Since the notion of equality is very general and can be applied to virtually any domain of discourse, it falls under the purview of logic. One of the earliest kinds of proofs one encounters in mathematics is proof by calculation. This usually involves a chain of equalities using lemmas expressing properties of operations on numbers and other structures. It also uses the fundamental property of equality: if two terms denote the same thing, then we should be able to substitute one for the other in any expression. Let's say `E` is an expression containing `a` somewhere: `E = ... a ...`. If ` a = a' ` then we should be able to replace ` a ` with ` a' ` in `E` and get an equal expression `E' = ... a' ... `. This operation is called rewriting, and the Lean "tactic" for this is `rw`. -/ example (m n : ℕ) (h₁ : m + 1 = 7) (h₂ : n = m) : n + 1 = 7 := begin -- we want to prove that `n + 1 = 7`.
Since we know that `n = m` we need to replace `n` by `m` in the goal. Then we use hypothesis `h₁`. rw h₂, -- replaces `n` by `m` in the goal. exact h₁, -- use the hypothesis `h₁` to finish the proof. end -- transitivity of equality via `rw` example (x y z : ℝ) (h₁ : x = y) (h₂ : y = z) : x = z := begin rw h₁, -- changes the goal `x = z` to `y = z` by replacing `x` with `y` in virtue of `h₁`. -- all we need to prove now is `y = z` which we do by `h₂`. exact h₂, end #check eq.trans -- symmetry of equality via `rw` example {X : Type} (x y : X) (h₁ : x = y) : y = x := begin rw h₁, end #check eq.symm -- try refl first; what do you get? example (x y : ℕ) (h₁ : x = 0) (h₂ : y = 0) : x + y = 0 := begin -- refl, rw h₁, -- Uses the hypothesis h₁ : x = 0 to change all x's to 0's in the goal. rw h₂, -- Uses the hypothesis h₂ : y = 0 to change all y's to 0's in the goal. -- 0 + 0 = 0 is done end -- try refl first, why does it not work? example (x y : ℕ) (h : x = y) : x + 0 = y + 0 := begin -- refl, rw h, end /-! #### Variants of rewrite-/ /- We already have seen the simple version of the rewrite tactic: 1. `rw h₁` (rewrites `h₁` in the current goal) We now see some useful variants of `rw` tactic: 2. `rw ← h₁` (backward rewrite) 3. `rw h₁ at h₂` (rewrites hypothesis `h₁` in the hypothesis `h₂`) 4. `rw ← h₁ at h₂` (backward rewrites hypothesis `h₁` in the hypothesis `h₂`) 5. `rw h at *` (rewrites hypothesis `h` everywhere) -/ /- Rewrite in the opposite direction-/ example (m n : ℕ) (h₁ : m + 1 = 7) (h₂ : m = n) : n + 1 = 7 := begin -- we want to prove that `n + 1 = 7`. Since, by `h₂` we know that `m = n` we need to replace `n` by `m` in the goal. However, for this we need to use `h₂` in the opposite direction of the above. Then we use the hypothesis `h₁`. rw ← h₂, exact h₁, end -- transitivity of equality via `rw` example (x y z : ℝ) (h₁ : x = y) (h₂ : y = z) : x = z := begin rw h₁, -- changes the goal `x = z` to `y = z` by replacing `x` with `y` in virtue of `h₁`. -- all we need to prove now is `y = z` which we do by `h₂`. exact h₂, end /- another proof involves replacing `y` with `z` (in virtue of `h₂`) in hypothesis `h₁` to get a new hypothesis `x = z` which is our goal. This proof uses the following variant of the rewrite tactic: `rw h₁ at h₂` (rewrites hypothesis `h₁` in the hypothesis `h₂`) -/ example (x y z : ℝ) (h₁ : x = y) (h₂ : y = z) : x = z := begin rw h₂ at h₁, exact h₁, end #check eq.trans -- for transitivity of equality relation example (x : ℕ) (h₁ : 5 = 2 + x) (h₂ : 2 + x = 4) : 5 = 4 := -- we sub 2 + x in h₁ with 4 because of h₂. begin rw h₂ at h₁, exact h₁, end /- `rw h at *` rewrites `h` everywhere, in the goal and all hypotheses. -/ example (x y z : ℕ) (h₁ : x = 2) (h₂ : 2 + x = y) (h₃ : z = x + 2): x + z = x + y := begin rw h₃, -- this changes the goal by replacing `z` with `x + 2` rw ← h₂, rw h₁, end /-! ### Tactic change -/ /- If we tweak our assumptions a little bit as follows, we are not able to directly use `rw` anymore. -/ example (x y z : ℕ) (h₁ : x + 0 = y) (h₂ : y = z) : x = z := begin -- rw h₁, -- fails because rw works up to syntactic equality change x = y at h₁, -- change works up to _definitional_ equality rw h₁, -- now it works exact h₂, end end PROOFS
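/- One more worked example combining `change`, `rw`, and `exact` from this lesson; it could sit just before `end PROOFS` above: -/

example (a b : ℕ) (h₁ : a = b) (h₂ : b + 0 = 3) : a = 3 :=
begin
  change b = 3 at h₂, -- `b + 0` is definitionally equal to `b`
  rw h₁, -- the goal `a = 3` becomes `b = 3`
  exact h₂,
end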
{-# LANGUAGE DataKinds #-} {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE PatternSynonyms #-} {-# LANGUAGE RankNTypes #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE UndecidableInstances #-} {-# OPTIONS_GHC -fplugin GHC.TypeLits.KnownNat.Solver #-} {-# OPTIONS_GHC -fplugin GHC.TypeLits.Normalise #-} module Backprop.Learn.Model.Neural ( -- * Fully connected -- ** Feed-forward FC, pattern FC, FCp, fcBias, fcWeights -- *** With activation function , FCA, pattern FCA, _fcaActivation -- ** Recurrent , FCR, pattern FCR, FCRp, fcrBias, fcrInputWeights, fcrStateWeights -- *** With activation function , FCRA, pattern FCRA, _fcraStore, _fcraActivation ) where import Backprop.Learn.Model.Combinator import Backprop.Learn.Model.Regression import Backprop.Learn.Model.State import Data.Tuple import GHC.TypeNats import Lens.Micro import Numeric.Backprop import Numeric.LinearAlgebra.Static.Backprop import qualified Numeric.LinearAlgebra.Static as H -- | Fully connected feed-forward layer with bias. Parameterized by its -- initialization distribution. -- -- Note that this has no activation function; to use as a model with -- activation function, chain it with an activation function using 'RMap', -- ':.~', etc.; see 'FCA' for a convenient type synonym and constructor. -- -- Without any activation function, this is essentially a multivariate -- linear regression. -- -- With the logistic function as an activation function, this is -- essentially multivariate logistic regression. (See 'Logistic') type FC i o = LinReg i o pattern FC :: FC i o pattern FC = LinReg -- | Convenient synonym for an 'FC' post-composed with a simple -- parameterless activation function. type FCA i o = RMap (R o) (R o) (FC i o) -- | Construct an 'FCA' using a generating function and activation -- function. -- -- Some common ones include 'logistic' and @'vmap' 'reLU'@. pattern FCA :: (forall s. Reifies s W => BVar s (R o) -> BVar s (R o)) -- ^ '_fcaActivation' -> FCA i o pattern FCA { _fcaActivation } = RM _fcaActivation FC type FCp = LRp fcWeights :: Lens (FCp i o) (FCp i' o) (L o i) (L o i') fcWeights = lrBeta fcBias :: Lens' (FCp i o) (R o) fcBias = lrAlpha -- | Fully connected recurrent layer with bias. -- -- Parameterized by its initialization distributions, and also the function -- to compute the new state from previous input. -- -- @ -- instance 'Learn' ('R' i) (R o) ('FCR' h i o) where -- type 'LParamMaybe' (FCR h i o) = ''Just' ('FCRp' h i o) -- type 'LStateMaybe' (FCR h i o) = 'Just (R h) -- @ type FCR h i o = Recurrent (R (i + h)) (R i) (R h) (R o) (FC (i + h) o) -- | Construct an 'FCR' pattern FCR :: (KnownNat h, KnownNat i) => (forall s. Reifies s W => BVar s (R o) -> BVar s (R h)) -- ^ '_fcrSTore' -> FCR h i o pattern FCR { _fcrStore } <- Rec { _recLoop = _fcrStore , _recLearn = FC } where FCR s = Rec { _recSplit = H.split , _recJoin = (H.#) , _recLoop = s , _recLearn = FC } {-# COMPLETE FCR #-} -- | Convenient synonym for an 'FCR' post-composed with a simple -- parameterless activation function. type FCRA h i o = RMap (R o) (R o) (FCR h i o) -- | Construct an 'FCRA' using a generating function and activation -- function. -- -- Some common ones include 'logistic' and @'vmap' 'reLU'@. pattern FCRA :: (KnownNat h, KnownNat i) => (forall s. Reifies s W => BVar s (R o) -> BVar s (R h)) -- ^ '_fcraStore' -> (forall s. 
Reifies s W => BVar s (R o) -> BVar s (R o)) -- ^ '_fcraActivation' -> FCRA h i o pattern FCRA { _fcraStore, _fcraActivation } = RM _fcraActivation (FCR _fcraStore) type FCRp h i o = FCp (i + h) o lensIso :: (s -> (a, x)) -> ((b, x) -> t) -> Lens s t a b lensIso f g h x = g <$> _1 h (f x) fcrInputWeights :: (KnownNat h, KnownNat i, KnownNat i', KnownNat o) => Lens (FCRp h i o) (FCRp h i' o) (L o i) (L o i') fcrInputWeights = fcWeights . lensIso H.splitCols (uncurry (H.|||)) fcrStateWeights :: (KnownNat h, KnownNat h', KnownNat i, KnownNat o) => Lens (FCRp h i o) (FCRp h' i o) (L o h) (L o h') fcrStateWeights = fcWeights . lensIso (swap . H.splitCols) (uncurry (H.|||) . swap) fcrBias :: Lens' (FCRp h i o) (R o) fcrBias = fcBias
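-- A minimal concrete instantiation of the synonyms above (purely
-- illustrative): using `id` as the activation makes this `FCA` behave
-- exactly like a plain `FC` layer. Any function of type
-- `forall s. Reifies s W => BVar s (R 3) -> BVar s (R 3)`, e.g. the
-- `logistic` mentioned in the haddocks above, could be substituted.
identityFCA :: FCA 4 3
identityFCA = FCA { _fcaActivation = id }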
[GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f ⊢ ∃ h, f ≫ h = g [PROOFSTEP] let G : (⟨X⟩ : ModuleCat ℤ) ⟶ ⟨A⟩ := { g with map_smul' := by intros dsimp rw [map_zsmul] } [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f ⊢ ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x [PROOFSTEP] intros [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f r✝ : ℤ x✝ : ↑(ModuleCat.mk ↑X) ⊢ AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r✝ • x✝) = ↑(RingHom.id ℤ) r✝ • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x✝ [PROOFSTEP] dsimp [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f r✝ : ℤ x✝ : ↑(ModuleCat.mk ↑X) ⊢ ↑g (r✝ • x✝) = r✝ • ↑g x✝ [PROOFSTEP] rw [map_zsmul] [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } ⊢ ∃ h, f ≫ h = g [PROOFSTEP] let F : (⟨X⟩ : ModuleCat ℤ) ⟶ ⟨Y⟩ := { f with map_smul' := by intros dsimp rw [map_zsmul] } [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } ⊢ ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x [PROOFSTEP] intros [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono 
f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } r✝ : ℤ x✝ : ↑(ModuleCat.mk ↑X) ⊢ AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r✝ • x✝) = ↑(RingHom.id ℤ) r✝ • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x✝ [PROOFSTEP] dsimp [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } r✝ : ℤ x✝ : ↑(ModuleCat.mk ↑X) ⊢ ↑f (r✝ • x✝) = r✝ • ↑f x✝ [PROOFSTEP] rw [map_zsmul] [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } ⊢ ∃ h, f ≫ h = g [PROOFSTEP] have : Mono F := by refine' ⟨fun {Z} α β eq1 => _⟩ -- Porting note: trouble getting to ℤ-module from ModuleCat ℤ -- AddCommGroup.intModule not defeq to .isModule let α' : AddCommGroupCat.of Z ⟶ X := @LinearMap.toAddMonoidHom _ _ _ _ _ _ _ _ (_) _ _ α let β' : AddCommGroupCat.of Z ⟶ X := @LinearMap.toAddMonoidHom _ _ _ _ _ _ _ _ (_) _ _ β have eq2 : α' ≫ f = β' ≫ f := by ext x simp only [CategoryTheory.comp_apply, LinearMap.toAddMonoidHom_coe] simpa only [ModuleCat.coe_comp, LinearMap.coe_mk, Function.comp_apply] using FunLike.congr_fun eq1 x rw [cancel_mono] at eq2 
have : ⇑α' = ⇑β' := congrArg _ eq2 ext x apply congrFun this _ [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } ⊢ Mono F [PROOFSTEP] refine' ⟨fun {Z} α β eq1 => _⟩ -- Porting note: trouble getting to ℤ-module from ModuleCat ℤ -- AddCommGroup.intModule not defeq to .isModule [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F ⊢ α = β [PROOFSTEP] let α' : AddCommGroupCat.of Z ⟶ X := @LinearMap.toAddMonoidHom _ _ _ _ _ _ _ _ (_) _ _ α [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun 
(↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F α' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom α ⊢ α = β [PROOFSTEP] let β' : AddCommGroupCat.of Z ⟶ X := @LinearMap.toAddMonoidHom _ _ _ _ _ _ _ _ (_) _ _ β [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F α' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom α β' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom β ⊢ α = β [PROOFSTEP] have eq2 : α' ≫ f = β' ≫ f := by ext x simp only [CategoryTheory.comp_apply, LinearMap.toAddMonoidHom_coe] simpa only [ModuleCat.coe_comp, LinearMap.coe_mk, Function.comp_apply] using FunLike.congr_fun eq1 x [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { 
toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F α' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom α β' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom β ⊢ α' ≫ f = β' ≫ f [PROOFSTEP] ext x [GOAL] case w A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F α' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom α β' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom β x : ↑(of ↑Z) ⊢ ↑(α' ≫ f) x = ↑(β' ≫ f) x [PROOFSTEP] simp only [CategoryTheory.comp_apply, LinearMap.toAddMonoidHom_coe] [GOAL] case w A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F α' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom α β' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom β x : ↑(of ↑Z) ⊢ ↑(LinearMap.toAddMonoidHom α ≫ f) x = ↑(LinearMap.toAddMonoidHom β ≫ f) x [PROOFSTEP] simpa only [ModuleCat.coe_comp, LinearMap.coe_mk, Function.comp_apply] using FunLike.congr_fun eq1 x [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled 
AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F α' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom α β' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom β eq2 : α' ≫ f = β' ≫ f ⊢ α = β [PROOFSTEP] rw [cancel_mono] at eq2 [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F α' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom α β' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom β eq2 : α' = β' ⊢ α = β [PROOFSTEP] have : ⇑α' = ⇑β' := congrArg _ eq2 [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : 
ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F α' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom α β' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom β eq2 : α' = β' this : ↑α' = ↑β' ⊢ α = β [PROOFSTEP] ext x [GOAL] case h A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } Z : ModuleCat ℤ α β : Z ⟶ ModuleCat.mk ↑X eq1 : α ≫ F = β ≫ F α' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom α β' : of ↑Z ⟶ X := LinearMap.toAddMonoidHom β eq2 : α' = β' this : ↑α' = ↑β' x : ↑Z ⊢ ↑α x = ↑β x [PROOFSTEP] apply congrFun this _ [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } this : Mono F ⊢ ∃ h, f ≫ h = g [PROOFSTEP] refine' 
⟨(Injective.factorThru G F).toAddMonoidHom, _⟩ [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } this : Mono F ⊢ f ≫ LinearMap.toAddMonoidHom (Injective.factorThru G F) = g [PROOFSTEP] ext x [GOAL] case w A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (ModuleCat.mk A) X Y : Bundled AddCommGroup g : X ⟶ Bundled.mk A f : X ⟶ Y m : Mono f G : ModuleCat.mk ↑X ⟶ ModuleCat.mk A := { toAddHom := { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := g.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑g) (x + y) = ZeroHom.toFun (↑g) x + ZeroHom.toFun (↑g) y) } x) } F : ModuleCat.mk ↑X ⟶ ModuleCat.mk ↑Y := { toAddHom := { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) }, map_smul' := (_ : ∀ (r : ℤ) (x : ↑(ModuleCat.mk ↑X)), AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := f.toFun, map_add' := (_ : ∀ (x y : ↑X), ZeroHom.toFun (↑f) (x + y) = ZeroHom.toFun (↑f) x + ZeroHom.toFun (↑f) y) } x) } this : Mono F x : (forget (Bundled AddCommGroup)).obj X ⊢ ↑(f ≫ LinearMap.toAddMonoidHom (Injective.factorThru G F)) x = ↑g x [PROOFSTEP] convert FunLike.congr_fun (Injective.comp_factorThru G F) x [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f ⊢ ∃ h, f ≫ h = g [PROOFSTEP] let G : (⟨X, inferInstance⟩ : AddCommGroupCat) ⟶ ⟨A, inferInstance⟩ := @LinearMap.toAddMonoidHom _ _ _ _ _ _ _ _ (_) _ _ g [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g ⊢ ∃ h, f ≫ h = g [PROOFSTEP] let F : (⟨X, inferInstance⟩ : AddCommGroupCat) ⟶ ⟨Y, inferInstance⟩ := @LinearMap.toAddMonoidHom _ _ _ _ _ _ _ _ (_) (_) _ f [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective 
(Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f ⊢ ∃ h, f ≫ h = g [PROOFSTEP] have : Mono F := by rw [mono_iff_injective] intro _ _ h exact ((ModuleCat.mono_iff_injective f).mp m) h [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f ⊢ Mono F [PROOFSTEP] rw [mono_iff_injective] [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f ⊢ Function.Injective ↑F [PROOFSTEP] intro _ _ h [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f a₁✝ a₂✝ : ↑(Bundled.mk ↑X) h : ↑F a₁✝ = ↑F a₂✝ ⊢ a₁✝ = a₂✝ [PROOFSTEP] exact ((ModuleCat.mono_iff_injective f).mp m) h [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F ⊢ ∃ h, f ≫ h = g [PROOFSTEP] refine ⟨@LinearMap.mk _ _ _ _ _ _ _ _ _ (_) _ (Injective.factorThru G F).toAddHom ?_, ?_⟩ [GOAL] case refine_1 A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F ⊢ ∀ (r : ℤ) (x : ↑Y), AddHom.toFun (↑(Injective.factorThru G F)) (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun (↑(Injective.factorThru G F)) x case refine_2 A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F ⊢ f ≫ { toAddHom := ↑(Injective.factorThru G F), map_smul' := ?refine_1 } = g [PROOFSTEP] change ∀ r, ∀ x, (Injective.factorThru G F).toFun _ = _ • (Injective.factorThru G F).toFun _ [GOAL] case refine_1 A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F ⊢ ∀ (r : ℤ) (x : ↑Y), ZeroHom.toFun (↑(Injective.factorThru G F)) (r • x) = ↑(RingHom.id ℤ) r • ZeroHom.toFun (↑(Injective.factorThru G F)) x [PROOFSTEP] intro m x [GOAL] case refine_1 A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m✝ : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F m : ℤ x : ↑Y ⊢ ZeroHom.toFun (↑(Injective.factorThru G F)) (m • x) = ↑(RingHom.id ℤ) m • ZeroHom.toFun (↑(Injective.factorThru G F)) x [PROOFSTEP] rw 
[AddMonoidHom.toFun_eq_coe, RingHom.id_apply] [GOAL] case refine_1 A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m✝ : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F m : ℤ x : ↑Y ⊢ ↑(Injective.factorThru G F) (m • x) = m • ↑(Injective.factorThru G F) x [PROOFSTEP] induction' m using Int.induction_on with n hn n hn [GOAL] case refine_1.hz A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y ⊢ ↑(Injective.factorThru G F) (0 • x) = 0 • ↑(Injective.factorThru G F) x [PROOFSTEP] rw [zero_smul] [GOAL] case refine_1.hz A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y ⊢ ↑(Injective.factorThru G F) (0 • x) = 0 [PROOFSTEP] convert map_zero (M := Y) (N := A) (F := Y →+ A) _ -- Porting note: hell of non-defeq instances; somehow this worked [GOAL] case h.e'_2.h.e'_6 A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y ⊢ 0 • x = 0 [PROOFSTEP] refine @zero_smul ℤ Y (MonoidWithZero.toZero) (AddMonoid.toZero) ?_ x [GOAL] case refine_1.hp A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (↑n • x) = ↑n • ↑(Injective.factorThru G F) x ⊢ ↑(Injective.factorThru G F) ((↑n + 1) • x) = (↑n + 1) • ↑(Injective.factorThru G F) x [PROOFSTEP] conv_rhs => rw [add_smul] [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (↑n • x) = ↑n • ↑(Injective.factorThru G F) x | (↑n + 1) • ↑(Injective.factorThru G F) x [PROOFSTEP] rw [add_smul] [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (↑n • x) = ↑n • ↑(Injective.factorThru G F) x | (↑n + 1) • ↑(Injective.factorThru G F) x [PROOFSTEP] rw [add_smul] [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (↑n • x) = ↑n • ↑(Injective.factorThru G F) x | (↑n + 1) • 
↑(Injective.factorThru G F) x [PROOFSTEP] rw [add_smul] [GOAL] case refine_1.hp A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (↑n • x) = ↑n • ↑(Injective.factorThru G F) x ⊢ ↑(Injective.factorThru G F) ((↑n + 1) • x) = ↑n • ↑(Injective.factorThru G F) x + 1 • ↑(Injective.factorThru G F) x [PROOFSTEP] rw [← hn, one_smul, ← map_add] [GOAL] case refine_1.hp A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (↑n • x) = ↑n • ↑(Injective.factorThru G F) x ⊢ ↑(Injective.factorThru G F) ((↑n + 1) • x) = ↑(Injective.factorThru G F) (↑n • x + x) [PROOFSTEP] congr [GOAL] case refine_1.hp.h.e_6.h A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (↑n • x) = ↑n • ↑(Injective.factorThru G F) x ⊢ (↑n + 1) • x = ↑n • x + x [PROOFSTEP] convert @add_smul ℤ Y _ _ ?_ n 1 x [GOAL] case h.e'_3.h.e'_6 A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (↑n • x) = ↑n • ↑(Injective.factorThru G F) x ⊢ x = 1 • x [PROOFSTEP] refine @one_smul ℤ Y _ ?_ x |>.symm [GOAL] case refine_1.hn A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (-↑n • x) = -↑n • ↑(Injective.factorThru G F) x ⊢ ↑(Injective.factorThru G F) ((-↑n - 1) • x) = (-↑n - 1) • ↑(Injective.factorThru G F) x [PROOFSTEP] conv_rhs => rw [sub_smul] [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (-↑n • x) = -↑n • ↑(Injective.factorThru G F) x | (-↑n - 1) • ↑(Injective.factorThru G F) x [PROOFSTEP] rw [sub_smul] [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (-↑n • x) = -↑n • ↑(Injective.factorThru G F) x | (-↑n - 1) • ↑(Injective.factorThru G F) x [PROOFSTEP] rw [sub_smul] [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f 
G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (-↑n • x) = -↑n • ↑(Injective.factorThru G F) x | (-↑n - 1) • ↑(Injective.factorThru G F) x [PROOFSTEP] rw [sub_smul] [GOAL] case refine_1.hn A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (-↑n • x) = -↑n • ↑(Injective.factorThru G F) x ⊢ ↑(Injective.factorThru G F) ((-↑n - 1) • x) = -↑n • ↑(Injective.factorThru G F) x - 1 • ↑(Injective.factorThru G F) x [PROOFSTEP] rw [← hn, one_smul, ← map_sub] [GOAL] case refine_1.hn A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (-↑n • x) = -↑n • ↑(Injective.factorThru G F) x ⊢ ↑(Injective.factorThru G F) ((-↑n - 1) • x) = ↑(Injective.factorThru G F) (-↑n • x - x) [PROOFSTEP] congr [GOAL] case refine_1.hn.h.e_6.h A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (-↑n • x) = -↑n • ↑(Injective.factorThru G F) x ⊢ (-↑n - 1) • x = -↑n • x - x [PROOFSTEP] convert @sub_smul ℤ Y _ _ ?_ (-n) 1 x [GOAL] case h.e'_3.h.e'_6 A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑Y n : ℕ hn : ↑(Injective.factorThru G F) (-↑n • x) = -↑n • ↑(Injective.factorThru G F) x ⊢ x = 1 • x [PROOFSTEP] refine @one_smul ℤ Y _ ?_ x |>.symm [GOAL] case refine_2 A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F ⊢ f ≫ { toAddHom := ↑(Injective.factorThru G F), map_smul' := (_ : ∀ (r : ℤ) (x : ↑Y), ZeroHom.toFun (↑(Injective.factorThru G F)) (r • x) = ↑(RingHom.id ℤ) r • ZeroHom.toFun (↑(Injective.factorThru G F)) x) } = g [PROOFSTEP] ext x [GOAL] case refine_2.h A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this : Mono F x : ↑X ⊢ ↑(f ≫ { toAddHom := ↑(Injective.factorThru G F), map_smul' := (_ : ∀ (r : ℤ) (x : ↑Y), ZeroHom.toFun (↑(Injective.factorThru G F)) (r • x) = ↑(RingHom.id ℤ) r • ZeroHom.toFun (↑(Injective.factorThru G F)) x) }) x = ↑g x [PROOFSTEP] have := congrFun (congrArg (fun H => H.toFun) (Injective.comp_factorThru G F)) x [GOAL] case refine_2.h A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A 
f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this✝ : Mono F x : ↑X this : (fun H => H.toFun) (F ≫ Injective.factorThru G F) x = (fun H => H.toFun) G x ⊢ ↑(f ≫ { toAddHom := ↑(Injective.factorThru G F), map_smul' := (_ : ∀ (r : ℤ) (x : ↑Y), ZeroHom.toFun (↑(Injective.factorThru G F)) (r • x) = ↑(RingHom.id ℤ) r • ZeroHom.toFun (↑(Injective.factorThru G F)) x) }) x = ↑g x [PROOFSTEP] simp only [ModuleCat.coe_comp, Function.comp_apply] at this [GOAL] case refine_2.h A : Type u inst✝¹ : AddCommGroup A inst✝ : Injective (Bundled.mk A) X Y : ModuleCat ℤ g : X ⟶ ModuleCat.mk A f : X ⟶ Y m : Mono f G : Bundled.mk ↑X ⟶ Bundled.mk A := LinearMap.toAddMonoidHom g F : Bundled.mk ↑X ⟶ Bundled.mk ↑Y := LinearMap.toAddMonoidHom f this✝ : Mono F x : ↑X this : ZeroHom.toFun (↑(LinearMap.toAddMonoidHom f ≫ Injective.factorThru (LinearMap.toAddMonoidHom g) (LinearMap.toAddMonoidHom f))) x = ZeroHom.toFun (↑(LinearMap.toAddMonoidHom g)) x ⊢ ↑(f ≫ { toAddHom := ↑(Injective.factorThru G F), map_smul' := (_ : ∀ (r : ℤ) (x : ↑Y), ZeroHom.toFun (↑(Injective.factorThru G F)) (r • x) = ↑(RingHom.id ℤ) r • ZeroHom.toFun (↑(Injective.factorThru G F)) x) }) x = ↑g x [PROOFSTEP] apply this [GOAL] A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ I : Ideal ℤ g : { x // x ∈ I } →ₗ[ℤ] A ⊢ ∃ g', ∀ (x : ℤ) (mem : x ∈ I), ↑g' x = ↑g { val := x, property := mem } [PROOFSTEP] rcases IsPrincipalIdealRing.principal I with ⟨m, rfl⟩ [GOAL] case mk.intro A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A ⊢ ∃ g', ∀ (x : ℤ) (mem : x ∈ Submodule.span ℤ {m}), ↑g' x = ↑g { val := x, property := mem } [PROOFSTEP] by_cases m_eq_zero : m = 0 [GOAL] case pos A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : m = 0 ⊢ ∃ g', ∀ (x : ℤ) (mem : x ∈ Submodule.span ℤ {m}), ↑g' x = ↑g { val := x, property := mem } [PROOFSTEP] subst m_eq_zero [GOAL] case pos A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A ⊢ ∃ g', ∀ (x : ℤ) (mem : x ∈ Submodule.span ℤ {0}), ↑g' x = ↑g { val := x, property := mem } [PROOFSTEP] refine' ⟨{ toFun := _ map_add' := _ map_smul' := _ }, fun n hn => _⟩ [GOAL] case pos.refine'_1 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A ⊢ ℤ → A [PROOFSTEP] intro _ [GOAL] case pos.refine'_1 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A a✝ : ℤ ⊢ A [PROOFSTEP] exact g 0 [GOAL] case pos.refine'_2 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A ⊢ ℤ → ℤ → ↑g 0 = ↑g 0 + ↑g 0 [PROOFSTEP] intro _ _ [GOAL] case pos.refine'_2 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A x✝ y✝ : ℤ ⊢ ↑g 0 = ↑g 0 + ↑g 0 [PROOFSTEP] simp only [map_zero, add_zero] [GOAL] case pos.refine'_3 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A ⊢ ∀ (r x : ℤ), AddHom.toFun { toFun := fun a => ↑g 0, map_add' := (_ : ℤ → ℤ → ↑g 0 = ↑g 0 + ↑g 0) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := fun a => ↑g 0, map_add' := (_ : ℤ → ℤ → ↑g 0 = ↑g 0 + ↑g 0) } x [PROOFSTEP] intro n1 _ [GOAL] case pos.refine'_3 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} 
} →ₗ[ℤ] A n1 x✝ : ℤ ⊢ AddHom.toFun { toFun := fun a => ↑g 0, map_add' := (_ : ℤ → ℤ → ↑g 0 = ↑g 0 + ↑g 0) } (n1 • x✝) = ↑(RingHom.id ℤ) n1 • AddHom.toFun { toFun := fun a => ↑g 0, map_add' := (_ : ℤ → ℤ → ↑g 0 = ↑g 0 + ↑g 0) } x✝ [PROOFSTEP] simp only [map_zero, smul_zero] [GOAL] case pos.refine'_4 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A n : ℤ hn : n ∈ Submodule.span ℤ {0} ⊢ ↑{ toAddHom := { toFun := fun a => ↑g 0, map_add' := (_ : ℤ → ℤ → ↑g 0 = ↑g 0 + ↑g 0) }, map_smul' := (_ : ∀ (n1 : ℤ), ℤ → ↑g 0 = ↑(RingHom.id ℤ) n1 • ↑g 0) } n = ↑g { val := n, property := hn } [PROOFSTEP] rw [Submodule.span_singleton_eq_bot.mpr rfl, Submodule.mem_bot] at hn [GOAL] case pos.refine'_4 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A n : ℤ hn✝ : n ∈ Submodule.span ℤ {0} hn : n = 0 ⊢ ↑{ toAddHom := { toFun := fun a => ↑g 0, map_add' := (_ : ℤ → ℤ → ↑g 0 = ↑g 0 + ↑g 0) }, map_smul' := (_ : ∀ (n1 : ℤ), ℤ → ↑g 0 = ↑(RingHom.id ℤ) n1 • ↑g 0) } n = ↑g { val := n, property := hn✝ } [PROOFSTEP] simp only [hn, map_zero] [GOAL] case pos.refine'_4 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A n : ℤ hn✝ : n ∈ Submodule.span ℤ {0} hn : n = 0 ⊢ 0 = ↑g { val := 0, property := (_ : (fun x => x ∈ Submodule.span ℤ {0}) 0) } [PROOFSTEP] symm [GOAL] case pos.refine'_4 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ g : { x // x ∈ Submodule.span ℤ {0} } →ₗ[ℤ] A n : ℤ hn✝ : n ∈ Submodule.span ℤ {0} hn : n = 0 ⊢ ↑g { val := 0, property := (_ : (fun x => x ∈ Submodule.span ℤ {0}) 0) } = 0 [PROOFSTEP] convert map_zero g [GOAL] case neg A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 ⊢ ∃ g', ∀ (x : ℤ) (mem : x ∈ Submodule.span ℤ {m}), ↑g' x = ↑g { val := x, property := mem } [PROOFSTEP] set gₘ := g ⟨m, Submodule.subset_span (Set.mem_singleton _)⟩ with gm_eq [GOAL] case neg A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } ⊢ ∃ g', ∀ (x : ℤ) (mem : x ∈ Submodule.span ℤ {m}), ↑g' x = ↑g { val := x, property := mem } [PROOFSTEP] refine' ⟨{ toFun := _ map_add' := _ map_smul' := _ }, fun n hn => _⟩ [GOAL] case neg.refine'_1 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } ⊢ ℤ → A [PROOFSTEP] intro n [GOAL] case neg.refine'_1 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n : ℤ ⊢ A [PROOFSTEP] exact n • DivisibleBy.div gₘ m [GOAL] case neg.refine'_2 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } ⊢ ∀ (x y : ℤ), (x + y) • DivisibleBy.div gₘ m = x • DivisibleBy.div 
gₘ m + y • DivisibleBy.div gₘ m [PROOFSTEP] intro n1 n2 [GOAL] case neg.refine'_2 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n1 n2 : ℤ ⊢ (n1 + n2) • DivisibleBy.div gₘ m = n1 • DivisibleBy.div gₘ m + n2 • DivisibleBy.div gₘ m [PROOFSTEP] simp only [add_smul] [GOAL] case neg.refine'_3 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } ⊢ ∀ (r x : ℤ), AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } (r • x) = ↑(RingHom.id ℤ) r • AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } x [PROOFSTEP] intro n1 n2 [GOAL] case neg.refine'_3 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n1 n2 : ℤ ⊢ AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } (n1 • n2) = ↑(RingHom.id ℤ) n1 • AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } n2 [PROOFSTEP] dsimp [GOAL] case neg.refine'_3 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n1 n2 : ℤ ⊢ (n1 * n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m [PROOFSTEP] rw [mul_smul] [GOAL] case neg.refine'_4 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) 
} n : ℤ hn : n ∈ Submodule.span ℤ {m} ⊢ ↑{ toAddHom := { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) }, map_smul' := (_ : ∀ (n1 n2 : ℤ), AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } (n1 • n2) = ↑(RingHom.id ℤ) n1 • AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } n2) } n = ↑g { val := n, property := hn } [PROOFSTEP] rw [Submodule.mem_span_singleton] at hn [GOAL] case neg.refine'_4 A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n : ℤ hn✝ : n ∈ Submodule.span ℤ {m} hn : ∃ a, a • m = n ⊢ ↑{ toAddHom := { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) }, map_smul' := (_ : ∀ (n1 n2 : ℤ), AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } (n1 • n2) = ↑(RingHom.id ℤ) n1 • AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } n2) } n = ↑g { val := n, property := hn✝ } [PROOFSTEP] rcases hn with ⟨n, rfl⟩ [GOAL] case neg.refine'_4.intro A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n : ℤ hn : n • m ∈ Submodule.span ℤ {m} ⊢ ↑{ toAddHom := { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) 
}) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) }, map_smul' := (_ : ∀ (n1 n2 : ℤ), AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } (n1 • n2) = ↑(RingHom.id ℤ) n1 • AddHom.toFun { toFun := fun n => n • DivisibleBy.div gₘ m, map_add' := (_ : ∀ (n1 n2 : ℤ), (n1 + n2) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n1 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m + n2 • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) } n2) } (n • m) = ↑g { val := n • m, property := hn } [PROOFSTEP] simp only [gm_eq, Algebra.id.smul_eq_mul, LinearMap.coe_mk] [GOAL] case neg.refine'_4.intro A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n : ℤ hn : n • m ∈ Submodule.span ℤ {m} ⊢ ↑{ toFun := fun n => n • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m, map_add' := (_ : ∀ (x y : ℤ), (fun n => n • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) (x + y) = (fun n => n • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) x + (fun n => n • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m) y) } (n * m) = ↑g { val := n * m, property := (_ : (fun x => x ∈ Submodule.span ℤ {m}) (n * m)) } [PROOFSTEP] dsimp [GOAL] case neg.refine'_4.intro A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n : ℤ hn : n • m ∈ Submodule.span ℤ {m} ⊢ (n * m) • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = ↑g { val := n * m, property := hn } [PROOFSTEP] rw [mul_smul] -- Porting note: used to be able to just rw [Div...] 
[GOAL] case neg.refine'_4.intro A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n : ℤ hn : n • m ∈ Submodule.span ℤ {m} ⊢ n • m • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = ↑g { val := n * m, property := hn } [PROOFSTEP] have s := congrArg (fun l => n • l) <| DivisibleBy.div_cancel gₘ m_eq_zero [GOAL] case neg.refine'_4.intro A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n : ℤ hn : n • m ∈ Submodule.span ℤ {m} s : (fun l => n • l) (m • DivisibleBy.div gₘ m) = (fun l => n • l) gₘ ⊢ n • m • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = ↑g { val := n * m, property := hn } [PROOFSTEP] dsimp at s [GOAL] case neg.refine'_4.intro A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n : ℤ hn : n • m ∈ Submodule.span ℤ {m} s : n • m • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n • ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } ⊢ n • m • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = ↑g { val := n * m, property := hn } [PROOFSTEP] rw [s, ← LinearMap.map_smul] [GOAL] case neg.refine'_4.intro A : Type u inst✝¹ : AddCommGroup A inst✝ : DivisibleBy A ℤ m : ℤ g : { x // x ∈ Submodule.span ℤ {m} } →ₗ[ℤ] A m_eq_zero : ¬m = 0 gₘ : A := ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } gm_eq : gₘ = ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } n : ℤ hn : n • m ∈ Submodule.span ℤ {m} s : n • m • DivisibleBy.div (↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) m = n • ↑g { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) } ⊢ ↑g (n • { val := m, property := (_ : m ∈ ↑(Submodule.span ℤ {m})) }) = ↑g { val := n * m, property := hn } [PROOFSTEP] congr
[GOAL] R : Type u₁ L : Type u₂ inst✝² : CommRing R inst✝¹ : LieRing L inst✝ : LieAlgebra R L src✝ : L →ₗ[R] UniversalEnvelopingAlgebra R L := LinearMap.comp (AlgHom.toLinearMap (mkAlgHom R L)) ιₜ x y : L ⊢ AddHom.toFun { toAddHom := src✝.toAddHom, map_smul' := (_ : ∀ (r : R) (x : L), AddHom.toFun src✝.toAddHom (r • x) = ↑(RingHom.id R) r • AddHom.toFun src✝.toAddHom x) }.toAddHom ⁅x, y⁆ = ⁅AddHom.toFun { toAddHom := src✝.toAddHom, map_smul' := (_ : ∀ (r : R) (x : L), AddHom.toFun src✝.toAddHom (r • x) = ↑(RingHom.id R) r • AddHom.toFun src✝.toAddHom x) }.toAddHom x, AddHom.toFun { toAddHom := src✝.toAddHom, map_smul' := (_ : ∀ (r : R) (x : L), AddHom.toFun src✝.toAddHom (r • x) = ↑(RingHom.id R) r • AddHom.toFun src✝.toAddHom x) }.toAddHom y⁆ [PROOFSTEP] suffices mkAlgHom R L (ιₜ ⁅x, y⁆ + ιₜ y * ιₜ x) = mkAlgHom R L (ιₜ x * ιₜ y) by rw [AlgHom.map_mul] at this ; simp [LieRing.of_associative_ring_bracket, ← this] [GOAL] R : Type u₁ L : Type u₂ inst✝² : CommRing R inst✝¹ : LieRing L inst✝ : LieAlgebra R L src✝ : L →ₗ[R] UniversalEnvelopingAlgebra R L := LinearMap.comp (AlgHom.toLinearMap (mkAlgHom R L)) ιₜ x y : L this : ↑(mkAlgHom R L) (↑ιₜ ⁅x, y⁆ + ↑ιₜ y * ↑ιₜ x) = ↑(mkAlgHom R L) (↑ιₜ x * ↑ιₜ y) ⊢ AddHom.toFun { toAddHom := src✝.toAddHom, map_smul' := (_ : ∀ (r : R) (x : L), AddHom.toFun src✝.toAddHom (r • x) = ↑(RingHom.id R) r • AddHom.toFun src✝.toAddHom x) }.toAddHom ⁅x, y⁆ = ⁅AddHom.toFun { toAddHom := src✝.toAddHom, map_smul' := (_ : ∀ (r : R) (x : L), AddHom.toFun src✝.toAddHom (r • x) = ↑(RingHom.id R) r • AddHom.toFun src✝.toAddHom x) }.toAddHom x, AddHom.toFun { toAddHom := src✝.toAddHom, map_smul' := (_ : ∀ (r : R) (x : L), AddHom.toFun src✝.toAddHom (r • x) = ↑(RingHom.id R) r • AddHom.toFun src✝.toAddHom x) }.toAddHom y⁆ [PROOFSTEP] rw [AlgHom.map_mul] at this [GOAL] R : Type u₁ L : Type u₂ inst✝² : CommRing R inst✝¹ : LieRing L inst✝ : LieAlgebra R L src✝ : L →ₗ[R] UniversalEnvelopingAlgebra R L := LinearMap.comp (AlgHom.toLinearMap (mkAlgHom R L)) ιₜ x y : L this : ↑(mkAlgHom R L) (↑ιₜ ⁅x, y⁆ + ↑ιₜ y * ↑ιₜ x) = ↑(mkAlgHom R L) (↑ιₜ x) * ↑(mkAlgHom R L) (↑ιₜ y) ⊢ AddHom.toFun { toAddHom := src✝.toAddHom, map_smul' := (_ : ∀ (r : R) (x : L), AddHom.toFun src✝.toAddHom (r • x) = ↑(RingHom.id R) r • AddHom.toFun src✝.toAddHom x) }.toAddHom ⁅x, y⁆ = ⁅AddHom.toFun { toAddHom := src✝.toAddHom, map_smul' := (_ : ∀ (r : R) (x : L), AddHom.toFun src✝.toAddHom (r • x) = ↑(RingHom.id R) r • AddHom.toFun src✝.toAddHom x) }.toAddHom x, AddHom.toFun { toAddHom := src✝.toAddHom, map_smul' := (_ : ∀ (r : R) (x : L), AddHom.toFun src✝.toAddHom (r • x) = ↑(RingHom.id R) r • AddHom.toFun src✝.toAddHom x) }.toAddHom y⁆ [PROOFSTEP] simp [LieRing.of_associative_ring_bracket, ← this] [GOAL] R : Type u₁ L : Type u₂ inst✝² : CommRing R inst✝¹ : LieRing L inst✝ : LieAlgebra R L src✝ : L →ₗ[R] UniversalEnvelopingAlgebra R L := LinearMap.comp (AlgHom.toLinearMap (mkAlgHom R L)) ιₜ x y : L ⊢ ↑(mkAlgHom R L) (↑ιₜ ⁅x, y⁆ + ↑ιₜ y * ↑ιₜ x) = ↑(mkAlgHom R L) (↑ιₜ x * ↑ιₜ y) [PROOFSTEP] exact RingQuot.mkAlgHom_rel _ (Rel.lie_compat x y) [GOAL] R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f✝ f : L →ₗ⁅R⁆ A ⊢ ∀ ⦃x y : TensorAlgebra R L⦄, Rel R L x y → ↑(↑(TensorAlgebra.lift R) ↑f) x = ↑(↑(TensorAlgebra.lift R) ↑f) y [PROOFSTEP] intro a b h [GOAL] R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f✝ f : L →ₗ⁅R⁆ A a b : TensorAlgebra R L h : 
Rel R L a b ⊢ ↑(↑(TensorAlgebra.lift R) ↑f) a = ↑(↑(TensorAlgebra.lift R) ↑f) b [PROOFSTEP] induction' h with x y [GOAL] case lie_compat R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f✝ f : L →ₗ⁅R⁆ A a b : TensorAlgebra R L x y : L ⊢ ↑(↑(TensorAlgebra.lift R) ↑f) (↑ιₜ ⁅x, y⁆ + ↑ιₜ y * ↑ιₜ x) = ↑(↑(TensorAlgebra.lift R) ↑f) (↑ιₜ x * ↑ιₜ y) [PROOFSTEP] simp only [LieRing.of_associative_ring_bracket, map_add, TensorAlgebra.lift_ι_apply, LieHom.coe_toLinearMap, LieHom.map_lie, map_mul, sub_add_cancel] [GOAL] R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f✝ f : L →ₗ⁅R⁆ A ⊢ (fun F => LieHom.comp (AlgHom.toLieHom F) (ι R)) ((fun f => ↑(RingQuot.liftAlgHom R) { val := ↑(TensorAlgebra.lift R) ↑f, property := (_ : ∀ ⦃a b : TensorAlgebra R L⦄, Rel R L a b → ↑(↑(TensorAlgebra.lift R) ↑f) a = ↑(↑(TensorAlgebra.lift R) ↑f) b) }) f) = f [PROOFSTEP] ext -- Porting note: was -- simp only [ι, mkAlgHom, TensorAlgebra.lift_ι_apply, LieHom.coe_toLinearMap, -- LinearMap.toFun_eq_coe, LinearMap.coe_comp, LieHom.coe_comp, AlgHom.coe_toLieHom, -- LieHom.coe_mk, Function.comp_apply, AlgHom.toLinearMap_apply, -- RingQuot.liftAlgHom_mkAlgHom_apply] [GOAL] case h R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f✝ f : L →ₗ⁅R⁆ A x✝ : L ⊢ ↑((fun F => LieHom.comp (AlgHom.toLieHom F) (ι R)) ((fun f => ↑(RingQuot.liftAlgHom R) { val := ↑(TensorAlgebra.lift R) ↑f, property := (_ : ∀ ⦃a b : TensorAlgebra R L⦄, Rel R L a b → ↑(↑(TensorAlgebra.lift R) ↑f) a = ↑(↑(TensorAlgebra.lift R) ↑f) b) }) f)) x✝ = ↑f x✝ [PROOFSTEP] simp only [LieHom.coe_comp, Function.comp_apply, AlgHom.coe_toLieHom, UniversalEnvelopingAlgebra.ι_apply, mkAlgHom] [GOAL] case h R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f✝ f : L →ₗ⁅R⁆ A x✝ : L ⊢ ↑(↑(RingQuot.liftAlgHom R) { val := ↑(TensorAlgebra.lift R) ↑f, property := (_ : ∀ ⦃a b : TensorAlgebra R L⦄, Rel R L a b → ↑(↑(TensorAlgebra.lift R) ↑f) a = ↑(↑(TensorAlgebra.lift R) ↑f) b) }) (↑(RingQuot.mkAlgHom R (Rel R L)) (↑ιₜ x✝)) = ↑f x✝ [PROOFSTEP] rw [RingQuot.liftAlgHom_mkAlgHom_apply] [GOAL] case h R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f✝ f : L →ₗ⁅R⁆ A x✝ : L ⊢ ↑(↑(TensorAlgebra.lift R) ↑f) (↑ιₜ x✝) = ↑f x✝ [PROOFSTEP] simp only [TensorAlgebra.lift_ι_apply, LieHom.coe_toLinearMap] [GOAL] R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A F : UniversalEnvelopingAlgebra R L →ₐ[R] A ⊢ (fun f => ↑(RingQuot.liftAlgHom R) { val := ↑(TensorAlgebra.lift R) ↑f, property := (_ : ∀ ⦃a b : TensorAlgebra R L⦄, Rel R L a b → ↑(↑(TensorAlgebra.lift R) ↑f) a = ↑(↑(TensorAlgebra.lift R) ↑f) b) }) ((fun F => LieHom.comp (AlgHom.toLieHom F) (ι R)) F) = F [PROOFSTEP] apply RingQuot.ringQuot_ext' [GOAL] case w R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A F : UniversalEnvelopingAlgebra R L →ₐ[R] A ⊢ AlgHom.comp ((fun f => ↑(RingQuot.liftAlgHom R) { val := ↑(TensorAlgebra.lift R) ↑f, property := (_ : ∀ ⦃a b : TensorAlgebra R L⦄, Rel R L a b → ↑(↑(TensorAlgebra.lift R) ↑f) a = 
↑(↑(TensorAlgebra.lift R) ↑f) b) }) ((fun F => LieHom.comp (AlgHom.toLieHom F) (ι R)) F)) (RingQuot.mkAlgHom R (Rel R L)) = AlgHom.comp F (RingQuot.mkAlgHom R (Rel R L)) [PROOFSTEP] ext -- Porting note: was -- simp only [ι, mkAlgHom, TensorAlgebra.lift_ι_apply, LieHom.coe_toLinearMap, -- LinearMap.toFun_eq_coe, LinearMap.coe_comp, LieHom.coe_linearMap_comp, -- AlgHom.comp_toLinearMap, Function.comp_apply, AlgHom.toLinearMap_apply, -- RingQuot.liftAlgHom_mkAlgHom_apply, AlgHom.coe_toLieHom, LieHom.coe_mk] [GOAL] case w.w.h R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A F : UniversalEnvelopingAlgebra R L →ₐ[R] A x✝ : L ⊢ ↑(LinearMap.comp (AlgHom.toLinearMap (AlgHom.comp ((fun f => ↑(RingQuot.liftAlgHom R) { val := ↑(TensorAlgebra.lift R) ↑f, property := (_ : ∀ ⦃a b : TensorAlgebra R L⦄, Rel R L a b → ↑(↑(TensorAlgebra.lift R) ↑f) a = ↑(↑(TensorAlgebra.lift R) ↑f) b) }) ((fun F => LieHom.comp (AlgHom.toLieHom F) (ι R)) F)) (RingQuot.mkAlgHom R (Rel R L)))) ιₜ) x✝ = ↑(LinearMap.comp (AlgHom.toLinearMap (AlgHom.comp F (RingQuot.mkAlgHom R (Rel R L)))) ιₜ) x✝ [PROOFSTEP] simp [mkAlgHom] [GOAL] R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A x : L ⊢ ↑(↑(lift R) f) (↑(ι R) x) = ↑f x [PROOFSTEP] rw [← Function.comp_apply (f := lift R f) (g := ι R) (x := x), ι_comp_lift] [GOAL] R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A x : L ⊢ ↑(↑(lift R) f) (↑(mkAlgHom R L) (↑ιₜ x)) = ↑f x [PROOFSTEP] simpa using lift_ι_apply R f x [GOAL] R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A g : UniversalEnvelopingAlgebra R L →ₐ[R] A ⊢ ↑g ∘ ↑(ι R) = ↑f ↔ g = ↑(lift R) f [PROOFSTEP] refine' Iff.trans _ (lift R).symm_apply_eq [GOAL] R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A g : UniversalEnvelopingAlgebra R L →ₐ[R] A ⊢ ↑g ∘ ↑(ι R) = ↑f ↔ ↑(lift R).symm g = f [PROOFSTEP] constructor [GOAL] case mp R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A g : UniversalEnvelopingAlgebra R L →ₐ[R] A ⊢ ↑g ∘ ↑(ι R) = ↑f → ↑(lift R).symm g = f [PROOFSTEP] intro h [GOAL] case mp R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A g : UniversalEnvelopingAlgebra R L →ₐ[R] A h : ↑g ∘ ↑(ι R) = ↑f ⊢ ↑(lift R).symm g = f [PROOFSTEP] ext [GOAL] case mp.h R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A g : UniversalEnvelopingAlgebra R L →ₐ[R] A h : ↑g ∘ ↑(ι R) = ↑f x✝ : L ⊢ ↑(↑(lift R).symm g) x✝ = ↑f x✝ [PROOFSTEP] simp [← h] [GOAL] case mpr R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A g : UniversalEnvelopingAlgebra R L →ₐ[R] A ⊢ ↑(lift R).symm g = f → ↑g ∘ ↑(ι R) = ↑f [PROOFSTEP] intro h [GOAL] case mpr R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L 
→ₗ⁅R⁆ A g : UniversalEnvelopingAlgebra R L →ₐ[R] A h : ↑(lift R).symm g = f ⊢ ↑g ∘ ↑(ι R) = ↑f [PROOFSTEP] ext [GOAL] case mpr.h R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A g : UniversalEnvelopingAlgebra R L →ₐ[R] A h : ↑(lift R).symm g = f x✝ : L ⊢ (↑g ∘ ↑(ι R)) x✝ = ↑f x✝ [PROOFSTEP] simp [← h] [GOAL] R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A g₁ g₂ : UniversalEnvelopingAlgebra R L →ₐ[R] A h : LieHom.comp (AlgHom.toLieHom g₁) (ι R) = LieHom.comp (AlgHom.toLieHom g₂) (ι R) ⊢ ↑(lift R).symm g₁ = ↑(lift R).symm g₂ [PROOFSTEP] ext [GOAL] case h R : Type u₁ L : Type u₂ inst✝⁴ : CommRing R inst✝³ : LieRing L inst✝² : LieAlgebra R L A : Type u₃ inst✝¹ : Ring A inst✝ : Algebra R A f : L →ₗ⁅R⁆ A g₁ g₂ : UniversalEnvelopingAlgebra R L →ₐ[R] A h : LieHom.comp (AlgHom.toLieHom g₁) (ι R) = LieHom.comp (AlgHom.toLieHom g₂) (ι R) x✝ : L ⊢ ↑(↑(lift R).symm g₁) x✝ = ↑(↑(lift R).symm g₂) x✝ [PROOFSTEP] simp [h]
\section{Identified Impact}\label{sec:impact}

This research will introduce new state-of-the-art methodologies for solving NLP-related problems in business processes. Machine learning research has evolved rapidly with the introduction of deep learning techniques built upon tremendous computational power, yet their use in business processes is still limited, because the problems arising in real-world applications differ in nature from those studied in academic research. We will draw the attention of the academic research community to problems faced by business processes that are currently not well addressed.

Our work will help industries improve their workforce allocation, as many tasks that require little human intervention will be handled in an automated manner. This will free up a company's human resources for more creative and innovative tasks, eventually leading to cost reduction, since companies can focus on the quality of their workforce instead of its quantity.

At present, the work done by the few industries that automate their business processes is not made available in the public domain. A few conglomerates develop methods to address some of their problems, but they tend not to open-source their findings in order to retain a financial advantage. The findings from our project will be made public through publications and code-sharing websites such as GitHub\footnote{\url{https://github.com/}}, which can help industries with fewer resources to enhance their performance. Instead of spending resources on automating their text-related processes, they can focus on their core issues \& ideas, while our work helps them with the automation.
Set Implicit Arguments. From TLC Require Import LibTactics LibLogic LibFun LibProd LibContainer LibSet LibRelation LibPer LibMonoid. Local Notation path := rtclosure. From iris_time.union_find.math Require Import LibNatExtra LibIter InverseNatNat Ackermann InverseAckermann MiscArith TLCBuffer UnionFind01Data UnionFind03Link UnionFind04Compress UnionFind05IteratedCompression UnionFind11Rank UnionFind13RankLink UnionFind14RankCompress UnionFind21Parent UnionFind22ParentEvolution UnionFind23Evolution UnionFind24Pleasant UnionFind41Potential UnionFind42PotentialCompress. (* -------------------------------------------------------------------------- *) (* We now study how level, index, and potential evolve over time. *) (* -------------------------------------------------------------------------- *) Section R. Variable r : nat. Hypothesis r_geq_1: 1 <= r. Notation alphar := (alphar r). Notation rankr := (rankr r). Notation prek := (prek r). Notation k := (k r). Notation i := (i r). Notation phi := (phi r). Notation Phi := (Phi r). Section StudyOfEvolution. Variable V : Type. Variable D : set V. Variables F F' : binary V. Variables K K' : V -> nat. Hypothesis is_rdsf_F : is_rdsf D F K. Hypothesis ev : evolution D F K F' K'. (* -------------------------------------------------------------------------- *) Section NonRoot. (* Now, let us study a vertex [v], which we assume already has a parent. *) Variable v : V. Hypothesis v_has_a_parent: ~ is_root F v. (* -------------------------------------------------------------------------- *) (* [rankr v] remains constant. *) Lemma non_root_has_constant_rankr: rankr K v = rankr K' v. Proof using ev v_has_a_parent. unfold rankr. forwards: non_root_has_constant_rank; eauto. Qed. (* [rankr (p v)] grows. *) Lemma rankrpv_grows: rankr K (p F v) <= rankr K' (p F' v). Proof using is_rdsf_F ev v_has_a_parent. unfold rankr. eapply plus_le_plus. eauto using Kpv_grows. Qed. (* -------------------------------------------------------------------------- *) (* Because [rankr v] is constant while [rankr (p v)] may grow, [k v] can only grow. *) Lemma kv_grows: k F K v <= k F' K' v. Proof using is_rdsf_F ev v_has_a_parent r_geq_1. unfold k, prek, defk. eapply plus_le_plus. forwards f: non_root_has_constant_rankr; eauto. rewrite <- f. eapply betaf_monotonic; eauto using rankrpv_grows with k monotonic. Qed. (* -------------------------------------------------------------------------- *) (* As long as [k v] remains constant, [i v] can only grow. *) Lemma iv_grows_if_kv_constant: k F K v = k F' K' v -> i F K v <= i F' K' v. Proof using is_rdsf_F ev v_has_a_parent. clear r_geq_1. introv h. unfold i. assert (h': prek F K v = prek F' K' v). { unfold k in h. lia. } rewrite <- h'. forwards f: non_root_has_constant_rankr; eauto. rewrite <- f. eapply betaf_monotonic; eauto using rankrpv_grows with i. Qed. (* -------------------------------------------------------------------------- *) (* The potential of [v] cannot increase. (Lemma 4.5 on pages 18-19.) *) Lemma phiv_cannot_increase_nonroot: phi F' K' v <= phi F K v. Proof using is_rdsf_F ev v_has_a_parent r_geq_1. intros. assert (hK: rankr K v = rankr K' v). { eauto using non_root_has_constant_rankr. } assert (hR: ~ is_root F' v). { eauto using non_root_forever. } (* For [phi F K v], we are in case 2 or 3. *) tests hc : (alphar (rankr K v) = alphar (rankr K (p F v))). (* Case 2. *) { forwards hphi: phi_case_2 F; eauto. rewrite hphi. clear hphi. (* For [phi F' K' v], we are in case 2 or 3. 
*) tests hc' : (alphar (rankr K' v) = alphar (rankr K' (p F' v))). (* Case 2. *) (* This is the non-trivial case. *) { forwards hphi: phi_case_2 F'; eauto. rewrite hphi. clear hphi. eapply plus_le_plus. rewrite <- hK. eapply lexpo_cannot_increase. { eauto using i_le_rank. } { eauto using kv_grows. } { rewrite hK. rewrite hc'. eauto using k_lt_alphar, is_rdsf_evolution. } { exact iv_grows_if_kv_constant. } } (* Case 3. Immediate, since the new potential (i.e., the left-hand side of the inequality) is zero. *) { forwards hphi: phi_case_3 F'; eauto. rewrite hphi. clear hphi. lia. } } (* Case 3. *) { forwards hphi: phi_case_3 F; eauto. rewrite hphi. clear hphi. forwards: alphar_rankr_grows_along_edges_corollary; eauto. (* For [phi F' K' v], we must be in case 3 too. *) assert (alphar (rankr K' v) < alphar (rankr K' (p F' v))). { rewrite <- hK. eapply Nat.lt_le_trans. eauto. eapply alphar_monotonic. eauto. eauto using rankrpv_grows. } rewrite phi_case_3 by solve [ eauto | lia ]. (* Thus, the potential was zero and remains zero. *) eauto. } Qed. End NonRoot. (* -------------------------------------------------------------------------- *) (* Let [v] be an arbitrary vertex (possibly a root). If its rank is preserved, then its potential cannot increase. *) Lemma phiv_cannot_increase: forall v, rankr K v = rankr K' v -> phi F' K' v <= phi F K v. Proof using is_rdsf_F ev r_geq_1. introv h. tests : (is_root F v). { (* Case: [v] is a root. The fact that the rank of [v] does not change is sufficient to prove that its potential cannot increase. *) rewrite (@phi_case_1 _ _ F K) by assumption. rewrite phi_upper_bound. rewrite h. eauto. } { (* Case: [v] is not a root. *) eauto using phiv_cannot_increase_nonroot. } Qed. End StudyOfEvolution. (* -------------------------------------------------------------------------- *) (* The increase of the potential [Phi] during a link is at most 2. This is Lemma 4.7 on page 19. The formal proof follows roughly the paper proof, except we find it necessary to make certain case analyses explicit. *) Lemma potential_increase_during_link_preliminary: forall V D F K K' (x y : V), is_rdsf D F K -> x <> y -> x \in D -> y \in D -> is_root F x -> is_root F y -> (* In this lemma, we consider only the following two cases. It is easy to later argue that, by symmetry, this covers all cases. *) K x < K y /\ K' = K \/ K x = K y /\ K' = fupdate K y (1 + K y) -> (* The following hypothesis is redundant, but it is convenient to require it. *) evolution D F K (link F x y) K' -> (* The conclusion. *) Phi D (link F x y) K' <= Phi D F K + 2. Proof using r_geq_1. intros. unfold Phi. (* Either the rank of [y] remains the same, or it increases by one, and in that case, [x] and [y] initially have the same rank. We fully distinguish these two cases, as the first one is trivial. *) branches; unpack. (* Case: the rank of [y] is unchanged. In fact, every rank is unchanged. *) { subst K'. (* We do not even need the two-credit slack. *) match goal with |- ?a <= ?a' + _ => cut (a <= a'); [ lia | ] end. (* Every rank is unchanged, so no potential can increase. Easy. *) eapply fold_pointwise; eauto with finite typeclass_instances. eauto using phiv_cannot_increase. } (* Case: the rank of [y] is increased by one, and [x] and [y] initially have the same rank. Then, the potential of [y] may increase, but the potential of [x] decreases, and this compensates (up to a two-credit slack) for the increase in the potential of [y]. *) assert (hKx: rankr K x = rankr K y). { unfold rankr. lia. 
} assert (hK'y: rankr K' y = rankr K y + 1). { unfold rankr. subst K'. rewrite fupdate_eq by eauto. case_if; lia. } (* [is_rdsf] is preserved. Proving this is a bit painful, due to the way things are set up. We must do some house-keeping before we can apply the lemma [is_rdsf_link]. *) assert (is_rdsf D (link F x y) K'). { assert (hF: link F x y = link_by_rank_F F K y x). { unfold link_by_rank_F. cases_if. false. unfold rankr in *. lia. reflexivity. } assert (hK: K' = link_by_rank_K K y x). { unfold link_by_rank_K. cases_if. eauto. } rewrite hF. rewrite hK. eauto using is_rdsf_link. } (* Set [y] and [x] aside. *) do 2 rewrite (@fold_isolate _ D y) by eauto with finite typeclass_instances. simpl. do 2 try rewrite (@fold_isolate _ (D \-- y) x); try solve [ eauto with finite typeclass_instances | rew_set in * ]. simpl. (* Divide the goal into two subgoals. First, the total potential of all vertices other than [x] and [y] cannot increase. Second, the total potential of [x] and [y] increases by at most two. *) match goal with |- ?p + (?q + ?r) <= ?p' + (?q' + ?r') + ?a => cut (r <= r' /\ p + q <= p' + q' + a); [ lia | split ] end. (* Subgoal 1: the vertices other than [x] and [y]. Again, their rank is unchanged, so their potential cannot increase. Easy. *) { eapply fold_pointwise; eauto with finite typeclass_instances. intros v ?. assert (rankr K v = rankr K' v). { unfold rankr. subst K'. rewrite fupdate_neq. auto. rew_set in *. tauto. } eauto using phiv_cannot_increase. } (* Subgoal 2: examine how the total potential of [x] and [y] evolves. This is the non-trivial part of this proof. *) (* The rank of [x] has not changed. *) assert (hK'x: rankr K x = rankr K' x). { subst K'. unfold rankr. rewrite fupdate_neq; eauto. } (* The parent of [x] is now [y]. *) forwards hpx: link_sets_parent F K x y; eauto using is_root_link. (* The (new) level and index of [x] are at least 1. *) assert (hkx: 1 <= k (link F x y) K' x). { unfold k. lia. } assert (hix: 1 <= i (link F x y) K' x). { eauto using i_ge_1, x_no_longer_a_root. } (* The vertex [y] is a root, both before and after the link. Thus, the potential of [y] before and after the link is as follows. *) forwards h: phi_case_1 r F K y; eauto. rewrite h. clear h. forwards h: phi_case_1 r (link F x y) K' y; eauto using is_root_link. rewrite h. clear h. (* The vertex [x] was a root before the link. Its potential before the link is as follows. *) forwards h: phi_case_1 r F K x; eauto. rewrite h. clear h. (* After the link, [x] is no longer a root. After the link, the rank of [x] is strictly less than the rank of [y]. Yet, we don't know whether the images of these ranks through [alphar] are equal or distinct. Thus, we don't know whether we are in case 2 or 3. A case analysis appears to be necessary; indeed, if we attempt to treat these two cases together, the fact that the subtraction of [i] is known to be safe only in case 2 seems to become a problem. This case analysis does not explicitly appear in Alstrup et al.'s paper. *) tests hranks: (alphar (rankr K' x) = alphar (rankr K' y)). (* We are in case 2. The new potential of [x] is as follows. *) { forwards h: phi_case_2 r (link F x y) K' x; eauto using x_no_longer_a_root. congruence. rewrite h. clear h. (* The subtractions are safe. *) forwards: phi_case_2_safe_k r (link F x y) K' x; try rewrite hpx; eauto using x_no_longer_a_root. forwards: phi_case_2_safe_i r (link F x y) K' x; try rewrite hpx; eauto using x_no_longer_a_root. (* Simplify. *) rewrite <- hK'x in *. clear hK'x. rewrite hKx in *. clear hKx. 
rewrite hK'y in *. clear hK'y. rewrite <- hranks in *. clear hranks. (* Conclude via low-level arithmetic reasoning. We do not even need the two-credit slack in this case. *) eapply random_arithmetic_lemma_01; eauto. } (* We are in case 3. The new potential of [x] is zero. *) { forwards h: phi_case_3 r (link F x y) K' x; try rewrite hpx; eauto using x_no_longer_a_root. rewrite h. clear h. (* Simplify. *) rewrite <- hK'x in *. clear hK'x. rewrite hKx in *. clear hKx. rewrite hK'y in *. clear hK'y. (* Exploit [alphar (n + 1) <= alphar n + 1]. *) rewrite alphar_grows_one_by_one by eauto. (* Conclude via low-level arithmetic reasoning. This is where we need the two-credit slack. *) eapply random_arithmetic_lemma_02. eapply alphar_positive. } Qed. (* PUBLIC *) Lemma potential_increase_during_link: forall V D F F' K K' (x y : V), is_rdsf D F K -> x <> y -> x \in D -> y \in D -> is_root F x -> is_root F y -> F' = link_by_rank_F F K x y -> K' = link_by_rank_K K x y -> Phi D F' K' <= Phi D F K + 2. Proof using r_geq_1. intros. assert (evolution D F K F' K'). { subst. eauto with evolution. } subst. unfold link_by_rank_F, link_by_rank_K in *. three_ways (K x) (K y); eapply potential_increase_during_link_preliminary; eauto with lia. Qed. (* -------------------------------------------------------------------------- *) (* Instantiate the theory of [UnionFind24Pleasant]. *) (* The predicate [top_part] corresponds to the ``top part of the path'' mentioned in the proof of Lemma 4.11. *) Definition top_part V z F K (x : V) := is_repr F x z /\ alphar (rankr K x) = alphar (rankr K z). Definition pleasant V z F K (x : V) := pleasant (top_part z) (@k V) F K x. Definition displeasure V z F K (x : V) := displeasure (top_part z) (@k V) F K x. (* The predicate [top_part] is hereditary. *) Lemma top_part_hereditary: forall V D F K (x y z : V), @is_rdsf V D F K -> top_part z F K x -> F x y -> top_part z F K y. Proof using r_geq_1. unfold top_part. intros. unpack. (* The representative of [y], the parent of [x], must be [z]. So, there is a path from [y] to [z]. *) assert (path F y z). { eauto using path_from_parent_to_repr_F with is_dsf. } split. { eauto using is_repr_is_root with is_repr. } (* Because [alphar . rankr] is constant all the way from [x] to [z], [x] and its parent have the same image through this function. *) assert (alphar (rankr K x) <= alphar (rankr K y)). { eauto using alphar_rankr_grows_along_a_path with rtclosure. } assert (alphar (rankr K y) <= alphar (rankr K z)). { eauto using alphar_rankr_grows_along_a_path. } lia. Qed. (* One step of path compression at [x] does not affect the ``top part of the path'' above [x]. *) Lemma compress_preserves_top_part_above_y: forall V D F K x y z, @is_rdsf V D F K -> F x y -> path F y z -> forall v z', (* note: we allow [z'] and [z] to be distinct; this makes life easier down the road *) top_part z' F K v -> top_part z' (compress F x z) K v. Proof using. unfold top_part. intros. unpack. split; eauto. { eauto using compress_preserves_is_repr_direct with is_dsf. } Qed. (* This result bounds the number of vertices in the top part of the path which are unpleasant (i.e., whose potential does not decrease). *) (* The proof of Lemma 4.11 says this is a strict inequality. This is true. Here, we do not exploit the fact that a level is at least 1, which is why we end up establishing a large inequality. *) Lemma bounded_displeasure_alstrup: forall V D F K, @is_rdsf V D F K -> forall x z, top_part z F K x -> displeasure z F K x <= alphar (rankr K z). Proof using r_geq_1. 
intros. eapply bounded_displeasure_preliminary_2. { eauto. } { eauto using top_part_hereditary. } { intros y hTopLevel hNonRoot. (* There is an edge from [y] to its parent. *) forwards: parent_spec. eauto. (* The parent of [y] satisfies [top_part], too. Which implies that its [alphar . rankr] is equal to that of [z]. *) forwards [ ? hrpy ]: top_part_hereditary; eauto. rewrite <- hrpy. clear hrpy. (* The result follows. *) eauto using k_lt_alphar. } { assumption. } Qed. (* -------------------------------------------------------------------------- *) (* During a path compression step at [x], the vertex [x] is the only one whose potential changes. So, if the potential of [x] decreases, then the total potential [Phi] decreases as well. *) Lemma from_phi_to_Phi: forall V (D : set V) F K, is_rdsf D F K -> forall x z, ~ is_root F x -> is_repr F x z -> phi (compress F x z) K x < phi F K x -> Phi D (compress F x z) K < Phi D F K. Proof using r_geq_1. intros. assert (x \in D). { eauto with confined is_dsf. } (* Set [x] aside. *) unfold Phi. do 2 rewrite (@fold_isolate _ D x) by eauto with finite typeclass_instances. simpl. (* Treat [x] on the one hand, and the other vertices on the other hand. *) match goal with |- ?p + ?q < ?p' + ?q' => cut (p < p' /\ q <= q'); [ lia | split ] end. { assumption. } { eapply fold_pointwise; eauto with finite typeclass_instances. eauto using phiv_cannot_increase, compress_evolution, pleasant_non_root. } Qed. (* -------------------------------------------------------------------------- *) (* If [alphar . rankr] is constant all the way from [x] to its root [z], then a pleasant path compression step at [x] causes the potential of [x] to decrease. This is Lemma 4.10 in Alstrup et al.'s paper. *) Lemma pleasant_phi: forall V (D : set V) F K, is_rdsf D F K -> forall x z, pleasant z F K x -> is_repr F x z -> phi (compress F x z) K x < phi F K x. Proof using r_geq_1. introv ? (hTopLevel & hNonRoot & hY) ?. destruct hTopLevel. destruct hY as [ y ? ]. unpack. (* The representative of the parent of [x] must be [z]. So, there is a path from the parent of [x] to [z]. *) assert (path F (p F x) z). { eauto using path_from_parent_to_repr. } (* Because [alphar . rankr] is constant all the way from [x] to [z], [x] and its parent have the same image through this function. *) assert (alphar (rankr K x) <= alphar (rankr K (p F x))). { eauto using alphar_rankr_grows_along_edges. } assert (alphar (rankr K (p F x)) <= alphar (rankr K z)). { eauto using alphar_rankr_grows_along_a_path. } assert (alphar (rankr K x) = alphar (rankr K (p F x))). { lia. } (* Hence, before compression, the potential of [x] is given by case 2. *) forwards h: phi_case_2 r F K x; eauto. rewrite h. clear h. (* No rank changes during compression. Hence, after compression, the potential of [x] is also given by case 2. *) assert (hxpx: alphar (rankr K x) = alphar (rankr K (p (compress F x z) x))). { erewrite compress_changes_parent_of_x_to_z by eauto. lia. } rewrite phi_case_2 by eauto using compress_preserves_roots_converse. (* Simplify, and apply a low-level arithmetic lemma. *) eapply plus_lt_plus. forwards: compress_evolution x; eauto. eapply lexpo_cannot_increase_and_decreases_if. (* yay! *) { eauto using i_le_rank. } { eauto using i_ge_1, is_rdsf_evolution, non_root_forever with lia. } { eauto using i_le_rank, is_rdsf_evolution, non_root_forever with lia. } { eauto using kv_grows. } { rewrite hxpx. eauto using k_lt_alphar, is_rdsf_evolution, non_root_forever with lia. } { eauto using iv_grows_if_kv_constant. 
} { eauto using prove_lexpo_decreases, kv_grows, kx_ky_compress. } Qed. (* During an arbitrary path compression step, the total potential [Phi] cannot increase. *) Lemma arbitrary_Phi: forall V (D : set V) F K, is_rdsf D F K -> forall x y z, F x y -> path F y z -> Phi D (compress F x z) K <= Phi D F K. Proof using r_geq_1. intros. unfold Phi. eapply fold_pointwise; eauto with finite typeclass_instances. eauto using phiv_cannot_increase with evolution. Qed. (* -------------------------------------------------------------------------- *) (* We now evaluate the amortized cost of path compression in the ``top part'' of the path. *) Lemma amortized_cost_fw_ipc_top_part_inductive: forall V F x l F', @fw_ipc V F x l F' -> forall D K z, is_rdsf D F K -> top_part z F K x -> Phi D F' K + l <= Phi D F K + displeasure z F K x. Proof using r_geq_1. unfold displeasure. induction 1; intros. (* FWIPCBase *) { lia. } (* FWIPCStep *) (* At this point, we unfortunately have two names for [z], so we must first argue that they are the same vertex. *) match goal with h: top_part ?z' _ _ _ |- _ => assert (z' = z); [ destruct h | subst z' ] end. { eapply is_repr_is_equiv_is_repr_bis; eauto with is_dsf. eauto using path_is_equiv with rtclosure is_dsf. } (* Argue that compression does not affect the displeasure of [y]. This seems obvious, but is in fact non-trivial, as we must check that it affects neither the function [k] nor the predicate [top_part]. *) forwards: compress_preserves_displeasure_of_y (top_part z) x y z; eauto using is_repr_path, (@compress_preserves_k_above_y r), compress_preserves_top_part_above_y. (* Use the induction hypothesis. *) forwards: IHfw_ipc; eauto using is_rdsf_compress, is_repr_path, compress_preserves_top_part_above_y, top_part_hereditary. (* Now, perform case analysis. *) tests : (pleasant z F K x); unfold pleasant in *. { (* Case: [x] is pleasant. Then, path compression at [x] frees up one unit of potential, which pays for this step. The induction hypothesis does the rest. *) erewrite displeasure_parent_if_pleasant by eauto. forwards: from_phi_to_Phi x; eauto 8 using pleasant_phi, a_root_has_no_parent_contrapositive with is_dsf is_repr is_equiv. lia. } { (* Case: [x] is unpleasant. Then, we pay for this path compression step out of the [k] credits that we explicitly request from the client. *) erewrite displeasure_parent_if_unpleasant by eauto. forwards: arbitrary_Phi; eauto using is_repr_path. lia. } Qed. (* A corollary. *) Lemma amortized_cost_fw_ipc_top_part: forall V D F K x z l F', @fw_ipc V F x l F' -> is_rdsf D F K -> top_part z F K x -> Phi D F' K + l <= Phi D F K + alphar (rankr K z). Proof using r_geq_1. intros. rewrite amortized_cost_fw_ipc_top_part_inductive by eauto. rewrite bounded_displeasure_alstrup by eauto. reflexivity. Qed. (* -------------------------------------------------------------------------- *) (* Say [x] is ``easy'' if [alphar . rankr] is NOT constant all the way from [x] to its root [z], but maps [x] and its parent to the same value. In this case, a path compression step at [x] causes the potential of [x] to decrease. This is Lemma 4.9 in Alstrup et al.'s paper. *) Lemma easy_phi: forall V (D : set V) F K, is_rdsf D F K -> forall x, ~ is_root F x -> forall z, is_repr F x z -> alphar (rankr K x) = alphar (rankr K (p F x)) -> alphar (rankr K (p F x)) < alphar (rankr K z) -> phi (compress F x z) K x < phi F K x. Proof using r_geq_1. intros. (* The representative of the parent of [x] must be [z]. So, there is a path from the parent of [x] to [z]. 
*) assert (path F (p F x) z). { eauto using path_from_parent_to_repr. } (* Before compression, the potential of [x] is given by case 2. Hence, it is non-zero. *) forwards h: phi_case_2_lower_bound r F K x; eauto. (* After compression, the parent of [x] is [z], so the potential of [x] is given by case 3. Hence, it is zero. *) assert (alphar (rankr K x) <> alphar (rankr K (p (compress F x z) x))). { erewrite compress_changes_parent_of_x_to_z by eauto. lia. } rewrite phi_case_3 by eauto using compress_preserves_roots_converse. (* The result follows. *) lia. Qed. (* -------------------------------------------------------------------------- *) (* The following result covers the so-called ``bottom part'' of the path. It combines Lemma 4.8 and the ``bottom part'' of Lemma 4.10, and calls the previous result [amortized_cost_fw_ipc_top_part] when it reaches the ``top part'' of the path. *) Lemma amortized_cost_fw_ipc_bottom_part: forall V F x l F', @fw_ipc V F x l F' -> forall D K z, is_rdsf D F K -> is_repr F x z -> Phi D F' K + l <= Phi D F K + 2 * alphar (rankr K z) - alphar (rankr K x). Proof using r_geq_1. induction 1; intros. (* FWIPCBase *) { (* [x] is [z]. *) assert (x = z). { eauto using a_path_out_of_a_root_is_trivial, is_repr_path. } subst z. (* The result follows, quite trivially. *) lia. } (* FWIPCStep *) (* At this point, we unfortunately have two names for [z], so we must first argue that they are the same vertex. *) match goal with h: is_repr _ x ?z' |- _ => assert (z' = z); [ | subst z' ] end. { eapply is_repr_is_equiv_is_repr_bis; eauto with is_dsf. eauto using path_is_equiv with rtclosure is_dsf. } (* The parent of [x] is [y]. *) assert (hpx: p F x = y). { eauto using parent_unique. } (* The function [alphar . rankr] grows along the path from [x] to [z]. *) assert (alphar (rankr K x) <= alphar (rankr K z)). { eauto using alphar_rankr_grows_along_a_path, is_repr_path. } (* Perform case analysis: are [x] and [z] at the same height? *) tests: (alphar (rankr K x) = alphar (rankr K z)). (* Case: yes, [x] and [z] are at the same height. This means [x] is in fact in the top part of the path. *) { assert (top_part z F K x). { split; eauto. } (* The previous analysis applies. *) forwards: amortized_cost_fw_ipc_top_part x; eauto with fw_ipc. (* The result follows. *) lia. } (* Case: no, [x] and [z] are at different heights. *) (* Use the induction hypothesis. *) forwards: IHfw_ipc; eauto using is_rdsf_compress, is_repr_path, compress_preserves_is_repr_direct with is_dsf. (* Perform case analysis: are [x] and its parent at the same height? *) tests: (alphar (rankr K x) = alphar (rankr K (p F x))). (* Case: [x] is easy. *) { (* Path compression at [x] frees up one unit of potential, which pays for this step. *) forwards: from_phi_to_Phi x; eauto 8 using easy_phi, a_root_has_no_parent_contrapositive with lia. rewrite hpx in *. lia. } (* Case: [x] is not easy. *) { forwards: arbitrary_Phi; eauto using is_repr_path. rewrite hpx in *. (* In that case, [alphar . rankr] at [y] is one more than at [x], which pays for this step. *) assert (alphar (rankr K x) <= alphar (rankr K y)). { eauto using alphar_rankr_grows_along_a_path with rtclosure. } lia. } Qed. (* As a corollary, we obtain the amortized cost of iterated path compression, in a formulation based on [fw_ipc]. *) Lemma amortized_cost_fw_ipc: forall V F x l F', @fw_ipc V F x l F' -> forall D K z, is_rdsf D F K -> is_repr F x z -> Phi D F' K + l < Phi D F K + 2 * alphar (rankr K z). Proof using r_geq_1. intros. 
forwards: amortized_cost_fw_ipc_bottom_part; eauto. assert (alphar (rankr K x) > 0). { eauto using alphar_positive. } assert (alphar (rankr K z) > 0). { eauto using alphar_positive. } lia. Qed. (* -------------------------------------------------------------------------- *) (* This corollary combines [ipc_defined], [amortized_cost_fw_ipc], and [bw_ipc_fw_ipc], so as to obtain the amortized cost of iterated path compression, in a variant based on [bw_ipc], and in a form where the [ipc] predicate appears as part of the conclusion (not as a hypothesis). *) (* PUBLIC *) Lemma amortized_cost_of_iterated_path_compression_local: forall V D F K x, @is_rdsf V D F K -> x \in D -> exists l F' z, is_repr F x z /\ bw_ipc F x l F' /\ Phi D F' K + l < Phi D F K + 2 * alphar (rankr K z). Proof using r_geq_1. intros. forwards (l&F'&?): ipc_defined x. eauto with is_dsf. forwards (z&?): is_dsf_defined_is_repr x. eauto with is_dsf. exists l F' z. splits; eauto using amortized_cost_fw_ipc, bw_ipc_fw_ipc with is_dsf. Qed. (* A simplified version, where we bound the rankr of [z] by the cardinality of [D], plus [r - 1]. *) (* PUBLIC *) Lemma amortized_cost_of_iterated_path_compression_global: forall V D F K x, @is_rdsf V D F K -> x \in D -> exists l F', bw_ipc F x l F' /\ Phi D F' K + l < Phi D F K + 2 * alphar (card D + (r - 1)). Proof using r_geq_1. intros. forwards (l&F'&z&?): amortized_cost_of_iterated_path_compression_local; eauto. unpack. exists l F'. splits; eauto. assert (K z <= log2 (card D)). { eapply rank_is_logarithmic; eauto 6 using is_equiv_in_D_direct, path_is_equiv, is_repr_path with is_dsf. } assert (1 <= card D). { eapply card_ge_one; eauto with finite. } assert (K z < card D). { forwards: log2_lt_n (card D); eauto. lia. } assert (rankr K z <= card D + (r - 1)). { unfold rankr. lia. } assert (alphar (rankr K z) <= alphar (card D + (r - 1))). { eapply alphar_monotonic; eauto. } lia. Qed. End R. (* A further simplified version, where we take [r] to be 1. *) (* PUBLIC *) Lemma amortized_cost_of_iterated_path_compression_simplified: forall V D F K x, @is_rdsf V D F K -> x \in D -> exists l F', bw_ipc F x l F' /\ Phi 1 D F' K + l <= Phi 1 D F K + 2 * alpha (card D) + 3. Proof using. intros. forwards (l&F'&?&hb): amortized_cost_of_iterated_path_compression_global; eauto. exists l F'. splits; eauto. replace (card D + (1 - 1)) with (card D) in hb by lia. unfold alphar, prealphar, defalphar in hb. assert (f: alpha (card D + 1) <= alpha (card D) + 1). { eauto using alpha_grows_one_by_one. } lia. Qed.
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <boost/beast.hpp>
#include <boost/beast/ssl.hpp>
#include <cctype>
#include <iostream>

namespace net = boost::asio;
namespace ssl = net::ssl;
namespace beast = boost::beast;
namespace http = beast::http;
namespace websocket = beast::websocket;
using tcp = net::ip::tcp;
using stream_t = websocket::stream<ssl::stream<tcp::socket>>;

// Render a (possibly binary) payload as printable text, escaping
// non-printable bytes as 0xNN.
std::string hexify(std::string const& in)
{
    std::string result;
    result.reserve(in.size() * 4); // worst case: every byte becomes "0xNN"

    auto to_hex = [](unsigned char c) -> std::string {
        static const char hexdigit[] = "0123456789abcdef";
        auto r = std::string(4, ' ');
        r[0] = '0';
        r[1] = 'x';
        r[2] = hexdigit[(c >> 4) & 0xf];
        r[3] = hexdigit[c & 0xf];
        return r;
    };

    for (unsigned char c : in) {
        if (std::isprint(c))
            result += static_cast<char>(c);
        else
            result += to_hex(c);
    }
    return result;
}

int main()
{
    std::string host = "vstream.binance.com";
    auto const port = "443";
    auto const path = "/ws/BTC-211112-59000-P@depth20";
    auto const rpcJson = "{\"method\":\"BINARY\", \"params\":[\"false\"], \"id\":1}";

    net::io_context ioc;
    tcp::resolver resolver{ioc};
    ssl::context ctx{ssl::context::sslv23};
    ctx.set_verify_mode(ssl::verify_none);

    stream_t s{ioc, ctx};
    net::connect(beast::get_lowest_layer(s), resolver.resolve(host, port));

    // Set SNI; many TLS servers reject handshakes without it
    if (!SSL_set_tlsext_host_name(s.next_layer().native_handle(), host.c_str()))
        throw beast::system_error{
            beast::error_code(static_cast<int>(::ERR_get_error()),
                              net::error::get_ssl_category()),
            "failed to set SNI hostname"};

    // TLS handshake, then the websocket upgrade handshake
    s.next_layer().handshake(ssl::stream_base::client);
    s.handshake(host + ":" + port, path);
    std::cout << "connected." << std::endl;

    // Send the JSON-RPC request defined above to the websocket
    s.write(net::buffer(std::string(rpcJson)));

    {
        beast::multi_buffer m_buf;
        s.read(m_buf); // blocks until one complete message arrives
        std::string strbuf = hexify(beast::buffers_to_string(m_buf.data()));
        m_buf.consume(m_buf.size());
        std::cout << "received: " << strbuf << std::endl;
    }
    return 0;
}
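// ---------------------------------------------------------------------------
// Added sketch (not part of the original program). The program above reads a
// single message and exits; for a live depth stream one would typically keep
// reading in a loop. A hypothetical continuation, assuming the connected
// stream `s` and the `hexify` helper from above:
//
//     for (;;) {
//         beast::multi_buffer buf;
//         s.read(buf); // blocks until a complete websocket message arrives
//         std::cout << hexify(beast::buffers_to_string(buf.data())) << std::endl;
//         buf.consume(buf.size()); // drop the bytes we just printed
//     }
// ---------------------------------------------------------------------------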
\documentclass[aspectratio=169]{beamer}
\usefonttheme[onlymath]{serif}
\usepackage[UTF8, scheme = plain]{ctex}
\usepackage[utf8]{inputenc}
\usepackage{graphicx} % Allows including images
\usepackage{booktabs} % Allows the use of \toprule, \midrule and \bottomrule in tables
\usepackage{subfigure}
\usepackage{subfiles}
\usepackage{url}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{xcolor,colortbl}
\usepackage[backend=bibtex,sorting=none]{biblatex}
\usepackage{pifont}% http://ctan.org/pkg/pifont
\usepackage{wrapfig}
\newcommand{\cmark}{\ding{51}}%
\newcommand{\xmark}{\ding{55}}%
\usepackage[ruled,vlined,noend]{algorithm2e}
\input{math_commands.tex}
\addbibresource{reference.bib}

\definecolor{NJUPurple}{rgb}{0.41568, 0, 0.37255}
\colorlet{LightNJUPurple}{white!60!NJUPurple}
\colorlet{SuperLightNJUPurple}{white!90!NJUPurple}
\usecolortheme[named=NJUPurple]{structure}

%Information to be included in the title page:
\title{Unsupervised Meta-Learning for Reinforcement Learning}
\author{\href{mailto:}{田鸿龙}}
\institute{LAMDA, Nanjing University}
\date{\today}

%Logo in every slide
\logo{%
\makebox[0.98\paperwidth]{
\href{https://www.nju.edu.cn}{\includegraphics[height=0.75cm,keepaspectratio]{logos/nju_logo.jpg}}
\hfill%
\href{https://www.lamda.nju.edu.cn}{\includegraphics[height=0.75cm,keepaspectratio]{logos/lamda_logo.png}}%
}
}

\setbeamertemplate{blocks}[rounded][shadow=true]
\setbeamercolor{block title}{fg=white,bg=LightNJUPurple}
\setbeamercolor{block body}{fg=black,bg=SuperLightNJUPurple}
\setbeamerfont{title}{shape=\bfseries,size=\Large}
\setbeamerfont{author}{shape=\bfseries}

\makeatletter
\setbeamertemplate{title page}{%
\vbox{}
\vfill
\vskip2cm%<- added
\begingroup
\centering
\begin{beamercolorbox}[sep=8pt,center]{title}
\usebeamerfont{title}\inserttitle\par%
\ifx\insertsubtitle\@empty%
\else%
\vskip0.25em%
{\usebeamerfont{subtitle}\usebeamercolor[fg]{subtitle}\insertsubtitle\par}%
\fi%
\end{beamercolorbox}%
\vskip1em\par
\vfill%<- added
\begin{beamercolorbox}[sep=8pt,center]{author}
\usebeamerfont{author}\insertauthor
\end{beamercolorbox}
\vskip-0.2cm%<- changed
\begin{beamercolorbox}[sep=8pt,center]{institute}
\usebeamerfont{institute}\insertinstitute
\end{beamercolorbox}
\vfill%<- added
\begin{beamercolorbox}[sep=8pt,center]{date}
\usebeamerfont{date}\insertdate
\end{beamercolorbox}%
\vskip0.5cm%<- changed
\endgroup
% \vfill%<- removed
}
\makeatother

\AtBeginSection[]
{
\begin{frame}
\frametitle{Table of Contents}
\tableofcontents[
currentsection,
currentsubsection,
subsectionstyle=show/show/hide,
sectionstyle=show/shaded
]
\end{frame}
}
% you can comment to disable TOC before every subsection
\AtBeginSubsection[]
{
\begin{frame}
\frametitle{Table of Contents}
\tableofcontents[
currentsection,
currentsubsection,
sectionstyle=show/shaded,
subsectionstyle=show/shaded/hide,
]
\end{frame}
}

% shape, colour of item, nested item bullets in itemize only
\setbeamertemplate{itemize item}[circle]
\setbeamercolor{itemize item}{fg=NJUPurple}
\setbeamertemplate{itemize subitem}[circle]
\setbeamercolor{itemize subitem}{fg=LightNJUPurple}
\setbeamertemplate{itemize subsubitem}[circle]
\setbeamercolor{itemize subsubitem}{fg=SuperLightNJUPurple}

% font size of nested and nested-within-nested bullets in both itemize and enumerate
% options are \tiny, \small, \scriptsize, \normalsize, \footnotesize, \large, \Large, \LARGE, \huge and \Huge
\setbeamerfont{itemize/enumerate subbody}{size=\scriptsize}
\setbeamerfont{itemize/enumerate subsubbody}{size=\scriptsize}

\newenvironment{splitframe}[5]
%[1] ==> 1 parameter passed through {}
%[2] ==> 2 parameters passed through {}{}
%[4] ==> 4 parameters passed through {}{}{}{}
{
\begin{frame}{#3}
\begin{columns}
\column{#1\linewidth}
\centering
#4
\column{#2\linewidth}
\centering
#5
\end{columns}
\centering
\vspace{\baselineskip} % adds one line space
}
%Inside the first pair of braces (ABOVE) is set what your new environment will do before the text within, then inside the second pair of braces (BELOW) declare what your new environment will do after the text. Note second pair can be empty braces too.
{
\end{frame}
}

\begin{document}

\frame{\titlepage}

\begin{frame}
\frametitle{Table of Contents}
\tableofcontents[hidesubsections]
\end{frame}

\section{Preliminary Knowledge}

\begin{frame}
\frametitle{Terminology}
\begin{itemize}
\item task: a problem that an RL algorithm needs to solve
\item MDP = CMP + Reward Mechanisms
\begin{itemize}
\item one-to-one correspondence between MDP and task
\end{itemize}
\item CMP: controlled Markov process
\begin{itemize}
\item namely the dynamics of the environment
\item consists of \textbf{state space, action space, initial state distribution, transition dynamics}...
\end{itemize}
\item Reward Mechanisms: $r(s, a, s^\prime, t)$
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Terminology (cont.)}
\begin{itemize}
\item skill: a latent-conditioned policy that alters the state of the environment in a consistent way
\item $Z \sim p(z)$ is a latent variable; we refer to a policy conditioned on a fixed $Z$ as a ``skill''
\item policy (skill) = parameters $\theta$ + latent variable $Z$
\item one-to-one correspondence between skill and task
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Mutual Information}
\begin{itemize}
\item the mutual information (MI) of two random variables is a measure of the mutual dependence between the two variables
\item $\mathrm{I}(x, y)= \mathrm{KL}[p(x, y) \| p(x) p(y)] =-\iint p(x, y) \ln \frac{p(x) p(y)}{p(x, y)} \mathrm{d} x \mathrm{d} y$
\begin{itemize}
\item Kullback–Leibler divergence: a directed divergence between two distributions
\item the \textbf{larger} the MI, the \textbf{more divergent} $p(x, y)$ is from $p(x) p(y)$, and hence the \textbf{more dependent} $x$ and $y$ are
\end{itemize}
\item or $\mathrm{I}(x, y)=\mathrm{H}(x)-\mathrm{H}(x \mid y)$
\begin{itemize}
\item $\mathrm{H}(y \mid x)=-\iint p(x, y) \ln p(y \mid x) \mathrm{d} y \mathrm{d} x$
\end{itemize}
\end{itemize}
\end{frame}

\section{An Unsupervised RL Algorithm: Diversity is All You Need}

\begin{frame}
\frametitle{An Unsupervised RL Algorithm: Diversity is All You Need}
\begin{figure}
\includegraphics[width=0.8\textwidth]{imgs/DIAYN_tittle.png}
\end{figure}
\end{frame}

\begin{frame}
\frametitle{Motivation}
\begin{itemize}
\item Autonomous acquisition of useful skills without any reward signal.
\item Why without any reward signal?
\begin{itemize}
\item in sparse-reward settings, learning useful skills without supervision may help address challenges in exploration
\item skills can serve as primitives for hierarchical RL, effectively shortening the episode length
\item in many practical settings, interacting with the environment is essentially free, but evaluating the reward requires human feedback
\item it is challenging to design a reward function that elicits the desired behaviors from the agent (without imitation samples, it is hard to design a reward function)
\item when given an unfamiliar environment, it is challenging to determine what tasks an agent should be able to learn
\end{itemize}
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Motivation (cont.)}
\begin{itemize}
\item Autonomous acquisition of useful skills without any reward signal.
\item How to define ``useful skills''?
\begin{itemize}
\item consider the setting where the reward function is unknown, so we want to learn a set of skills by \textbf{maximizing the utility of this set}
\end{itemize}
\item How to maximize the utility of this set?
\begin{itemize}
\item each skill individually is distinct
\item the skills collectively explore large parts of the state space
\end{itemize}
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{Key Idea: Using discriminability between skills as an objective}
\begin{itemize}
\item design a reward function which depends only on the CMP
\item skills that are merely distinguishable \xmark
\item skills that are diverse in a semantically meaningful way \cmark
\begin{itemize}
\item action distributions \xmark (actions that do not affect the environment are not visible to an outside observer)
\item state distributions \cmark
\end{itemize}
\end{itemize}
\end{frame}

\begin{frame}
\frametitle{How It Works}
\begin{itemize}
\item [1] use the skill to dictate the states that the agent visits
\begin{itemize}
\item one-to-one correspondence between skill and $Z$ (at any given time, the parameters $\theta$ are fixed)
\item $Z \sim p(z)$, so the sampled values of $Z$ differ from each other
\item make the state distribution depend on $Z$ (and vice versa); the state distributions then become diverse
\end{itemize}
\item [2] ensure that states, not actions, are used to distinguish skills
\begin{itemize}
\item given the state, the action should carry no information about the skill
\item making the action depend directly on the skill would be a \textbf{trivial} solution, which we had better avoid
\end{itemize}
\item [3] viewing all skills together with $p(z)$ as a mixture of policies, we maximize the entropy $\mathcal{H}[A \mid S]$
\item Attention: \textcolor{NJUPurple}{2} may cause the network to ignore the input $Z$, but \textcolor{NJUPurple}{1} avoids this; it may also cause the outputs (actions) to collapse to the same one, but \textcolor{NJUPurple}{3} avoids this
\end{itemize}
$$
\begin{aligned}
\mathcal{F}(\theta) & \textcolor{red}{\triangleq I(S ; Z)+\mathcal{H}[A \mid S]-I(A ; Z \mid S)} \\
&=(\mathcal{H}[Z]-\mathcal{H}[Z \mid S])+\mathcal{H}[A \mid S]-(\mathcal{H}[A \mid S]-\mathcal{H}[A \mid S, Z]) \\
&=\mathcal{H}[Z]-\mathcal{H}[Z \mid S]+\mathcal{H}[A \mid S, Z]
\end{aligned}
$$
\end{frame}
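% Added aside (not in the original slides): a quick sanity check of the objective $\mathcal{F}(\theta)$, using only the mutual-information identities above.
\begin{frame}
\frametitle{How It Works: a quick sanity check}
\begin{itemize}
\item with a uniform $p(z)$ over $N$ skills, $\mathcal{H}[Z] = \log N$
\item if the skills visit disjoint sets of states, then $z$ is a deterministic function of $s$, so $\mathcal{H}[Z \mid S] = 0$ and $I(S ; Z) = \mathcal{H}[Z] - \mathcal{H}[Z \mid S] = \log N$, the maximum possible value
\item if instead all skills induce the same state distribution, then $\mathcal{H}[Z \mid S] = \mathcal{H}[Z]$ and $I(S ; Z) = 0$
\item the remaining term $\mathcal{H}[A \mid S, Z]$ favors the most random policy among those achieving the same discriminability
\end{itemize}
\end{frame}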
s)]-\mathbb{E}_{z \sim p(z)}[\log p(z)] \\ & \geq \mathcal{H}[A \mid S, Z]+\mathbb{E}_{z \sim p(z), s \sim \pi(z)}\left[\log q_{\phi}(z \mid s)-\log p(z)\right] \triangleq \mathcal{G}(\theta, \phi) \end{aligned} $$ \begin{itemize} \item $\mathcal{G}(\theta, \phi)$ is a variational lower bound \end{itemize} \end{frame} \begin{frame} \frametitle{Implementation} \begin{minipage}{0.45\linewidth} \includegraphics[width=\linewidth]{imgs/model.pdf} \end{minipage} \hfill \begin{minipage}{0.45\linewidth} \begin{itemize} \item maxize a cumulative pseudo-reward by SAC \item pseudo-reward: $r_{z}(s, a) \triangleq \log q_{\phi}(z \mid s)-\log p(z)$ \end{itemize} \end{minipage} \end{frame} \begin{frame} \frametitle{Algorithm} \begin{algorithm}[H] \small \DontPrintSemicolon \SetAlgoLined \While{not converged}{ Sample skill $z \sim p(z)$ and initial state $s_0 \sim p_0(s)$\; \For{$t \leftarrow 1$ \KwTo $steps\_per\_episode$}{ Sample action $a_t \sim \pi_{\theta}(a_t \mid s_t, z)$ from skill.\; Step environment: $s_{t+1} \sim p(s_{t+1} \mid s_t, a_t)$.\; Compute $q_{\phi}(z \mid s_{t+1})$ with discriminator.\; Set skill reward $r_t = \log q_{\phi}(z \mid s_{t+1}) - \log p(z)$\; Update policy ($\theta$) to maximize $r_t$ with SAC.\; Update discriminator ($\phi$) with SGD.\; } } \caption{DIAYN} \end{algorithm} \end{frame} \begin{frame} \frametitle{Applications} \begin{itemize} \item adapting skills to maximize a reward \item hierarchical RL \item imitation learning \item \textbf{unsupervised meta RL} \end{itemize} \end{frame} \section{Unsupervised Meta-Learning for Reinforcement Learning} \begin{frame} \frametitle{Unsupervised Meta-Learning for Reinforcement Learnin} \begin{figure} \includegraphics[width=0.8\textwidth]{imgs/UML_tittle.png} \end{figure} \end{frame} \begin{frame} \frametitle{Motivation} \begin{itemize} \item aim to do so without depending on any human supervision or information about the tasks that will be provided for meta-testing \item assumptions of prior work \xmark \begin{itemize} \item a fixed tasks distribution \item tasks of meta-train and meta-test are sample from this distribution \end{itemize} \item Why not pre-specified task distribution? \begin{itemize} \item specifying a task distribution is tedious and requires a significant amount of supervision \item the performance of meta-learning algorithms critically depends on the meta-training task distribution, and meta-learning algorithms generalize best to new tasks which are drawn from the same distribution as the meta-training tasks \end{itemize} \item assumptions of this work: the environment dynamics(CMP) remain the same \item \textbf{"environment-specific learning procedure"} \end{itemize} \end{frame} \begin{frame} \frametitle{Attention} \begin{itemize} \item this paper have been rejected(maybe twice) \item this paper make some vary strong assumption when analysising: \begin{itemize} \item deterministic dynamics(the "future work" of 2018, but authors maybe forget it...) \item only get a reward when the end state(two case have been concerned) \end{itemize} \item the expriment may be not enough and convincing \item there are something wrong (at least ambiguous) in the paper... 
\end{itemize} \end{frame} \begin{frame} \frametitle{Definition of Terminology and Symbol} \begin{itemize} \item MDP: $M=(S, A, P, \gamma, \rho, r)$ \item CMP: $C=(S, A, P, \gamma, \rho)$ \item S: state space \item A: action space \item P: transition dynamics \item $\gamma$: discount factor \item $\rho$: initial state distribution \item dataset of experience(for MDP): $\mathcal{D}=\left\{\left(s_{i}, a_{i}, r_{i}, s_{i}^{\prime}\right)\right\} \sim M$ \item learning algorithm(for MDP): $f: \mathcal{D} \rightarrow \pi$ \end{itemize} \end{frame} \begin{frame} \frametitle{Definition of Terminology and Symbol(cont.)} \begin{itemize} \item for CMP: $R(f, r_z) = \sum_i \E_{\substack{\pi = f(\{\tau_1, \cdots, \tau_{i-1}\}) \\ \tau \sim \pi}} \left[\sum_t r_z(s_t, a_t) \right]$ \item evaluate the learning procedure f by summing its cumulative reward across iterations \end{itemize} \end{frame} \begin{frame} \frametitle{Key Idea} \begin{itemize} \item from the perspective of "no free lunch theorem": the assumption that the dynamics remain the same across tasks affords us an inductive bias with which we pay for our lunch \item our results are lower bounds for the performance of general learning procedures \end{itemize} \end{frame} \begin{frame} \frametitle{Regret for certain Task Distribution(given CMP)} \begin{itemize} \item For a task distribution $p(r_z)$, the optimal learning procedure $f^∗$ is given by\\ $f^{*} \triangleq \arg \max _{f} \mathbb{E}_{p\left(r_{z}\right)}\left[R\left(f, r_{z}\right)\right]$ \item regret of a certain learning procedure and task distribution:\\ $\operatorname{REGRET}\left(f, p\left(r_{z}\right)\right) \triangleq \mathbb{E}_{p\left(r_{z}\right)}\left[R\left(f^{*}, r_{z}\right)\right]-\mathbb{E}_{p\left(r_{z}\right)}\left[R\left(f, r_{z}\right)\right]$ \item Obviously\\$f^{*} \triangleq \arg \min _{f} \operatorname{REGRET}\left(f, p\left(r_{z}\right)\right) $\\ and\\$\operatorname{REGRET}\left(f^*, p\left(r_{z}\right)\right) = 0$ \item $f^*$ should be the output of traditional "meta RL algorithm" \end{itemize} \end{frame} \begin{frame} \frametitle{Regret for worst-case Task Distribution(given CMP)} \begin{itemize} \item evaluate a learning procedure $f$ based on its regret against the worst-case task distribution for CMP $C$:\\ $\operatorname{REGRET}_{\mathrm{WC}}(f, C)=\max _{p\left(r_{z}\right)} \operatorname{REGRET}\left(f, p\left(r_{z}\right)\right)$ \item by this way, we do not need any prior knowledge of $p(r_z)$ \item Attention: CMP may lead to \textbf{inductive bias} \end{itemize} \end{frame} \begin{frame} \frametitle{Optimal Unsupervised Learning Procedure} \begin{definition} The optimal unsupervised learning procedure $f_C^*$ for a CMP $C$ is defined as \begin{equation*} \vspace{-1em} f_C^* \triangleq \argmin_f \textsc{Regret}_{\textsc{WC}}(f, C). 
\end{equation*} \end{definition} \begin{itemize} \item "unsupervised" means you do not need "reward"(like DIAYN) \item $f_C^*$ should be the output of our "unsupervised meta RL algorithm" \end{itemize} \end{frame} \begin{frame} \frametitle{Optimal Unsupervised Meta-learner} \begin{definition} The optimal unsupervised meta-learner $\gF^*(C) = f_C^*$ is a function that takes as input a CMP $C$ and outputs the corresponding optimal unsupervised learning procedure $f_C^*$: \begin{equation*} \vspace{-1em} \gF^* \triangleq \argmin_{\gF} \textsc{Regret}_{\textsc{WC}}(\gF(C), C) \end{equation*} \end{definition} \begin{itemize} \item the optimal unsupervised meta-learner $\gF^*$ is universal, it does not depend on any particular task distribution, or any particular CMP \end{itemize} \end{frame} \begin{frame} \frametitle{Min-Max} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{imgs/labeled_regret_v2.png} \end{figure} $$\min _{f} \max _{p} \quad \operatorname{Regret}(f, p)$$ \end{frame} \begin{frame} \frametitle{Analysis by Case Study} \begin{itemize} \item Special Case: Goal-Reaching Tasks \item General Case: Trajectory-Matching Tasks \item in these case, we make some assumption such as deterministic dynamics, then generalize it \end{itemize} \end{frame} \begin{frame} \frametitle{Special Case: Goal-Reaching Tasks} consider episodes with finite horizon T and a discount factor of $\gamma = 1$\\ reward: $r_{g}\left(s_{t}\right) \triangleq \mathbf{1}(t=T) \cdot \mathbf{1}\left(s_{t}=g\right)$ \begin{figure} \centering \includegraphics[width=0.4\linewidth]{imgs/Grid-world-problem-The-agent-moves-in-four-directions-to-find-the-goal-marked-with-a.png} \end{figure} \end{frame} \begin{frame} \frametitle{Optimal Learing Procedure for known $p(s_g)$} \begin{itemize} \item Define $f_\pi$ as the learning procedure that \textbf{uses policy $\pi$ to explore until the goal is found}, and then always returns to the goal state($f$ is a learning procedure, which is something like SAC or PPO...) \item the goal of meta-RL (for known $p(s_g)$): find the best exploration policy $\pi$ \item probability that policy $\pi$ visits state $s$ at time step $t = T$: $\rho_{\pi}^{T}(s)$ \item expected hitting time of this goal state: \\ $$\mathrm{HITTINGTIME}_{\pi}\left(s_{g}\right)=\frac{1}{\rho_{\pi}^{T}\left(s_{g}\right)}$$ \item tips: "hitting time" means the \textbf{epected number of episodes} we need to make our end-state to be the goal-state(explore by the given policy $\pi$) \end{itemize} \end{frame} \begin{frame} \frametitle{Optimal Learing Procedure for known $p(s_g)$(cont.)} \begin{itemize} \item difinition of regret: \\ $$\operatorname{REGRET}\left(f, p\left(r_{z}\right)\right) \triangleq \mathbb{E}_{p\left(r_{z}\right)}\left[R\left(f^{*}, r_{z}\right)\right]-\mathbb{E}_{p\left(r_{z}\right)}\left[R\left(f, r_{z}\right)\right]$$ \item regret of the learning procedure $f_\pi$:\\ $$\operatorname{REGRET}\left(f_{\pi}, p\left(r_{g}\right)\right)=\int\mathrm{HITTINGTIME}_{\pi}\left(s_{g}\right) p\left(s_{g}\right) d s_{g}=\int \frac{p\left(s_{g}\right)}{\rho_{\pi}^{T}\left(s_{g}\right)} d s_{g}$$ \item exploration policy for the optimal meta-learner, $\pi^*$, satisfies: \\ $$\rho_{\pi^{*}}^{T}\left(s_{g}\right)=\frac{\sqrt{p\left(s_{g}\right)}}{\int \sqrt{p\left(s_{g}^{\prime}\right)} d s_{g}^{\prime}}$$ \end{itemize} \end{frame} \begin{frame} \frametitle{Optimal Learing Procedure for Unknown $p(s_g)$} \begin{lemma} \label{lemma:uniform} Let $\pi$ be a policy for which $\rho_\pi^T(s)$ is uniform. 
Then $f_\pi$ is has lowest worst-case regret among learning procedures in $\gF_\pi$. \end{lemma} (proof is straight by disproval) \begin{itemize} \item finding such a policy $\pi$ is challenging, especially in \textbf{high-dimensional} state spaces and in the absense of resets \item acquiring $f_\pi$ directly without every computing $\pi$ \end{itemize} \end{frame} \begin{frame} \frametitle{Optimal Learing Procedure for Unknown $p(s_g)$(cont.)} \begin{itemize} \item what we want: $\rho_\pi^T(s)$ is a uniform distribution \item how to do: define a latent variable $z$, make $z$ and $s_T$, and sample $z$ from a uniform distributions \item there exists a conditional distribution $\mu(s_T | z)$ (more detail later), change it to maximize the mutual information:\\ $$\max _{\mu\left(s_{T} \mid z\right)} I_{\mu}\left(s_{T} ; z\right)$$ \item still need to make sure maximize the mutual information can make $s_T$ uniform \end{itemize} \end{frame} \begin{frame} \frametitle{Optimal Learing Procedure for Unknown $p(s_g)$(cont.)} \begin{lemma} \label{lemma:MImax} Assume there exists a conditional distribution $\mu(s_T \mid z)$ satisfying the following two properties: \begin{enumerate} \item The marginal distribution over terminal states is uniform: $\mu(s_T) = \int \mu(s_T \mid z) \mu(z) dz = \textsc{Unif}(\gS)$; and \item The conditional distribution $\mu(s_T \mid z)$ is a Dirac: $\forall z, s_T \; \exists s_z \text{ s.t. } \mu(s_T \mid z) = \mathbf{1}(s_T = s_z)$. \end{enumerate} Then any solution $\mu(s_T \mid z)$ to the mutual information objective satisfies the following: % distribution $\mu^* \in \gP_\mu$ such that $\mu^*(s_T) = \textsc{Unif}(\gS)$ is uniform over goal states and $\mu^*(s_T \mid z) \stackrel{d}{=} \mathbbm{1}(s_T = s_z)$ is a Dirac distribution, centered at a state $s_z$ specified by the latent $z$. Then any distribution $\mu$ that maximizes mutual information $I_\mu(s_T; z)$ satisfies \begin{equation*} \mu(s_T) = \textsc{Unif}(\gS) \quad \text{and} \quad \mu(s_T \mid z) = \mathbf{1}(s_T = s_z). \end{equation*} \end{lemma} \end{frame} \begin{frame} \frametitle{Optimal Learing Procedure for Unknown $p(s_g)$(cont.)} \begin{itemize} \item how to get $\mu(s_T | z)$ ? \item define a latent-conditioned policy $\mu(a \mid s,z)$ \item then we have\\ $$\mu(\tau, z)=\mu(z) p\left(s_{1}\right) \prod_{t} p\left(s_{t+1} \mid s_{t}, a_{t}\right) \mu\left(a_{t} \mid s_{t}, z\right)$$ \item get marginal likelihood by integrate the trajectory except $s_T$\\$$\mu\left(s_{T}, z\right)=\int \mu(\tau, z) d s_{1} a_{1} \cdots a_{T-1}$$ \item divide by $\mu(z)$ (which is a uniform distribution): $\mu(s_T\mid z) = \frac{\mu\left(s_{T}, z\right)}{\mu(z)}$ \item then make $r_{z}\left(s_{T}, a_{T}\right) \triangleq \log p\left(s_{T} \mid z\right)$ \end{itemize} \end{frame} \begin{frame} \frametitle{Optimal Learing Procedure for Unknown $p(s_g)$(cont.)} what wrong with it? \begin{equation*} \begin{aligned} I_\mu(s_T;z)&=\mathcal{H}[S_T] - \mathcal{H}[S_T\mid Z]\\ &=\mathbb{E}_{z \sim p(z), s_T \sim \mu(s_T\mid z)}[\log \mu(s_T \mid z) - \log \mu(s_T)] \end{aligned} \end{equation*} but... how to get $\log \mu(s_T)$? 
\begin{equation*} \begin{aligned} I_\mu(s_T;z)&=\mathcal{H}[Z] - \mathcal{H}[Z\mid S_T]\\ &=\mathbb{E}_{z \sim p(z), s_T \sim \mu(s_T\mid z)}[\log \mu(z\mid s_T) - \log \mu(z)] \end{aligned} \end{equation*} $\log \mu(z\mid s_T)$ is also difficult to get(because we do not have $\mu(s_T)$), but we can learn $\mu(z\mid s_T)$ directedly, just like DIAYN \end{frame} \begin{frame} \frametitle{General Case: Trajectory-Matching Tasks} \begin{itemize} \item “trajectory-matching” tasks: only provide a positive reward when the policy executes the \textbf{optimal trajectory}\\$$r_{\tau}^{*}(\tau) \triangleq \mathbf{1}\left(\tau=\tau^{*}\right)$$ \item trajectory-matching case is actually a generalization of the typical reinforcement learning case with Markovian rewards \item hitting time and regret (for known $p(\tau^*)$)\\ $$\mathrm{HITTINGTIME}_{\pi}\left(\tau^{*}\right)=\frac{1}{\pi\left(\tau^{*}\right)}$$ $\operatorname{REGRET}\left(f_{\pi}, p\left(r_{\tau}\right)\right)=\int \left.\mathrm{HITTING} \operatorname{TIME}_{\pi}(\tau) p(\tau) d \tau\right)=\int \frac{p(\tau)}{\pi(\tau)} d \tau$ \end{itemize} \end{frame} \begin{frame} \frametitle{General Case: Trajectory-Matching Tasks(cont.)} for unknow $p(\tau^*)$, we have lemma, again \begin{lemma} Let $\pi$ be a policy for which $\pi(\tau)$ is uniform. Then $f_\pi$ has lowest worst-case regret among learning procedures in $\gF_\pi$. \end{lemma} and we maxize the object just the same as last time$$I(\tau ; z)=\mathcal{H}[\tau]-\mathcal{H}[\tau \mid z]$$ \end{frame} \begin{frame} \frametitle{General Reward Maximizing Tasks} \begin{itemize} \item that trajectory-matching is a super-set of the problem of optimizing any possible Markovian reward function at test-time \item bounding the worst-case regret on $R_\pi$ minimizes an upper bound on the worst-case regret on $R_{s,a}$: $$\min _{r_{\tau} \in R_{\tau}} \mathbb{E}_{\pi}\left[r_{\tau}(\tau)\right] \leq \min _{r \in R_{s, a}} \mathbb{E}_{\pi}\left[\sum_{t} r\left(s_{t}, a_{t}\right)\right]$$ \item (bound is too loose, is it realy work?) \end{itemize} \end{frame} \begin{frame} \frametitle{Algorithm} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/agorithm.png} \end{figure} \end{frame} \begin{frame} \frametitle{Performance} Unsupervised meta-learning accelerates learning \begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/performance1.png} \end{figure} \end{frame} \begin{frame} \frametitle{Performance(cont.)} Comparison with handcrafted tasks \begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/performance2.png} \end{figure} \end{frame} \begin{frame} \frametitle{Discussion: 能不能再“无监督”一点?} \begin{itemize} \item 文中强调了他们的算法是基于给定CMP的情况下的,也就是说算法对reward mechanism不作要求,但是要求所有的task都有相同的CMP。 \item 能否直接去掉“固定CMP”的约束?\xmark \item 能否使用其他meta-RL的方法,例如PEARL,得到关于CMP的context,再根据这个context做unsupervised meta-RL? \end{itemize} \end{frame} \begin{frame} \frametitle{Discussion: 能不能再“有监督”一点?} \begin{itemize} \item 文中一直再强调task distribution设计很困难,试图直接放弃设计task distribution,直接从CMP中获得prori knowledge。但是这样的方式完全抛弃了加入expert knowledge的可能性。 \item 有没有更好的融合expert knowledge和environment dynamics的方式? 
\item 在Goal-Reaching Tasks中,如果到达goal state的奖赏不同,满足min-max的探索策略则将不再是均匀分布,而是和最终的奖赏有关。 \end{itemize} \end{frame} \begin{frame} \frametitle{Discussion: 结合上面两点,能不能显式的使用对抗策略,在无监督meta-RL和监督meta-RL中寻找平衡?} \begin{itemize} \item 可以理解为,无监督meta-RL的精髓就是在给定某个特性(文中是CMP)后,根据对抗的思想得到一个“能在最差的情况下都表现的足够好的learning procedure” \item 文章中经过分析认为对抗的思想蕴含在“每个状态出现的频率相同”这一假设上。 \item 是否可以结合前面的讨论,显式的对抗,使用更弱一点的假设,从而引入expert knowledge。 \end{itemize} \end{frame} \begin{frame} \frametitle{Discussion: 关于stochastic dynamics} \begin{itemize} \item 被作者遗忘的"future work" \item 同样使用context-based表示dynamics \item 其实现在的方法可以直接应用在stochastic dynamics,但是需要更多的理论证明 \end{itemize} \end{frame} \begin{frame} \centering \textbf{Thank You!} \end{frame} \end{document}
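As a concrete reading of the DIAYN pseudocode in the slides above, here is a minimal Python sketch of one training episode. It is illustrative only: `policy`, `discriminator`, `sac_update`, `disc_update`, and `env` are hypothetical stand-ins for the skill-conditioned policy, the learned $q_\phi$, the SAC and discriminator updates, and the environment, not the authors' implementation.

```python
import numpy as np

n_skills = 10
p_z = np.full(n_skills, 1.0 / n_skills)  # fixed uniform prior over skills

def diayn_reward(log_q, z):
    # pseudo-reward r_z(s, a) = log q_phi(z | s') - log p(z)
    return log_q[z] - np.log(p_z[z])

def diayn_episode(env, policy, discriminator, sac_update, disc_update, steps_per_episode):
    z = np.random.choice(n_skills, p=p_z)   # sample one skill for the whole episode
    s = env.reset()
    for _ in range(steps_per_episode):
        a = policy(s, z)                    # a_t ~ pi_theta(. | s_t, z)
        s_next = env.step(a)                # s_{t+1} ~ p(. | s_t, a_t)
        log_q = discriminator(s_next)       # log q_phi(. | s_{t+1}), length n_skills
        r = diayn_reward(log_q, z)          # no task reward is ever used
        sac_update(s, a, r, s_next, z)      # update theta to maximize r (plus entropy)
        disc_update(s_next, z)              # SGD step on the discriminator phi
        s = s_next
```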
From stbor.lang Require Import steps_retag steps_progress. From stbor.sim Require Export instance. Set Default Proof Using "Type". Instance insert_gmap_proper `{Countable K} {A : cmraT} (i: K) : Proper ((≡) ==> (≡) ==> (≡)) (insert (M:=gmap K A) i). Proof. intros m1 m2 Eq a1 a2 Eqa i'. case (decide (i = i')) => [->|?]. - by rewrite 2!lookup_insert Eq. - do 2 (rewrite lookup_insert_ne //). Qed. Lemma priv_loc_not_public r l t t' : priv_loc r l t → (∃ (h: heapletR), rtm r !! t' ≡ Some (to_tgkR tkPub, h)) → t ≠ t'. Proof. intros [k1 [h1 [Eqh1 [Inl Eqk]]]] [h2 Eqh2] ?. subst t'. move : Eqh2. rewrite Eqh1. intros [Eq1 ?]%Some_equiv_inj. simpl in *. destruct Eqk as [|[]]; subst k1. - inversion Eq1. - apply Cinr_inj in Eq1. inversion Eq1. Qed. Lemma local_access_eq r l (h: heapletR) t t' stk n stk' kind σs σt : σt.(sst) !! l = Some stk → access1 stk kind t' σt.(scs) = Some (n, stk') → wsat r σs σt → rtm r !! t ≡ Some (to_tgkR tkLocal, h) → is_Some (h !! l) → t' = Tagged t ∧ stk' = stk. Proof. intros Eql Eqstk WSAT Eqt' Inl. destruct WSAT as (WFS & WFT & VALID & PINV & CINV & SREL). specialize (PINV _ _ _ Eqt') as [? Eqs]. destruct Inl as [sst' Eqsst']. have VLh: ✓ h. { apply (pair_valid (to_tgkR tkLocal) h). rewrite -pair_valid -Some_valid -Eqt'. apply VALID. } have VLsst: ✓ sst'. { change (✓ (Some sst')). rewrite -Eqsst'. apply VLh. } apply to_agree_uninj in VLsst as [[ss st] VLsst]. specialize (Eqs l ss st). rewrite Eqsst' in Eqs. destruct Eqs as [Eqh1 [Eqh2 Eqs]]; [by rewrite VLsst|done|]. rewrite /= Eql in Eqs. simplify_eq. destruct (access1_in_stack _ _ _ _ _ _ Eqstk) as [it [?%elem_of_list_singleton [Eqt ?]]]. subst. split; [done|]. destruct (tag_unique_head_access σt.(scs) (init_stack (Tagged t)) (Tagged t) None kind) as [stk1 Eq1]; [by exists []|]. rewrite Eq1 in Eqstk. by simplify_eq. Qed. Lemma protector_access_eq r l (h: heapletR) t t' stk n stk' kind σs σt : σt.(sst) !! l = Some stk → access1 stk kind t' σt.(scs) = Some (n, stk') → wsat r σs σt → rtm r !! t ≡ Some (to_tgkR tkUnique, h) → is_Some (h !! l) → (∃ (c: call_id) T L, rcm r !! c ≡ Excl' T ∧ T !! t = Some L ∧ l ∈ L) → t' = Tagged t. Proof. intros Eql Eqstk WSAT Eqh Inl PRO. destruct PRO as (c & T & tls & Eqc & EqT & Intls). destruct WSAT as (WFS & WFT & VALID & PINV & CINV & SREL). destruct Inl as [sa Eqs]. move : (proj1 VALID t). rewrite Eqh. intros [_ VALh]. specialize (VALh l). rewrite Eqs in VALh. apply to_agree_uninj in VALh as [[ss st] Eqsa]. have Eqss : h !! l ≡ Some (to_agree (ss, st)) by rewrite Eqsa Eqs. specialize (PINV _ _ _ Eqh) as [? PINV]. specialize (PINV _ _ _ Eqss). specialize (CINV _ _ Eqc) as [Ltc CINV]. specialize (CINV _ _ EqT) as [Ltt CINV]. specialize (CINV _ Intls) as (stk1 & pm1 & Eqstk1 & In1 & NDIS1). rewrite Eqstk1 in Eql. simplify_eq. destruct PINV as [Eqs1 [Eqs2 HD]]; [by do 3 eexists|]. destruct HD as (stk1 & Eq1 & opro & stk2 & Eq2). rewrite Eq1 in Eqstk1. simplify_eq. have ?: opro = Some c. { destruct (state_wf_stack_item _ WFT _ _ Eq1) as [_ ND]. have Eq := stack_item_tagged_NoDup_eq _ _ (mkItem Unique (Tagged t) opro) t ND In1 ltac:(by left) eq_refl eq_refl. by inversion Eq. } subst opro. eapply (access1_incompatible_head_protector _ _ _ _ _ _ _ _ ltac:(by eexists) Ltc); eauto. Qed. Lemma priv_loc_UB_access_eq r l kind σs σt t t' stk : σt.(sst) !! l = Some stk → is_Some (access1 stk kind t' σt.(scs)) → wsat r σs σt → priv_loc r l t → t' = Tagged t. Proof. intros Eql [[]] WSAT (k & h & Eqh & Inl & [|[? PRO]]); subst k. - eapply local_access_eq; eauto. - eapply protector_access_eq; eauto. Qed. 
Lemma priv_pub_access_UB (r: resUR) l kind σs σt t t' stk : σt.(sst) !! l = Some stk → is_Some (access1 stk kind t' σt.(scs)) → wsat r σs σt → priv_loc r l t → (if t' is (Tagged t2) then ∃ (h: heapletR), rtm r !! t2 ≡ Some (to_tgkR tkPub, h) else True) → False. Proof. intros Eql IS WSAT PV PB. have Eq := priv_loc_UB_access_eq _ _ _ _ _ _ _ _ Eql IS WSAT PV. rewrite Eq in PB. by apply (priv_loc_not_public _ _ _ _ PV PB). Qed. Lemma local_unique_access_head r σs (σt: state) l stk kind n' stk' t ss st k h (WFT: Wf σt) (LU: k = tkLocal ∨ k = tkUnique): σt.(sst) !! l = Some stk → access1 stk kind (Tagged t) σt.(scs) = Some (n', stk') → tmap_inv r σs σt → rtm r !! t ≡ Some (to_tgkR k, h) → h !! l ≡ Some $ to_agree (ss,st) → ∃ it, it.(perm) = Unique ∧ it.(tg) = Tagged t ∧ is_stack_head it stk ∧ σt.(shp) !! l = Some st ∧ σs.(shp) !! l = Some ss. Proof. intros Eqstk ACC1 PINV HL Eqs. specialize (PINV _ _ _ HL) as [? PINV]. specialize (PINV _ _ _ Eqs). destruct (access1_in_stack _ _ _ _ _ _ ACC1) as (it & Init & Eqti & NDIS). destruct PINV as [Eqss [? HD]]. { destruct LU; subst k; [done|]. rewrite /= -Eqti. exists stk, it.(perm), it.(protector). repeat split; [done| |done]. by destruct it. } exists (mkItem Unique (Tagged t) it.(protector)). repeat split; [|done..]. destruct LU; subst k. - rewrite /= Eqstk in HD. simplify_eq. apply elem_of_list_singleton in Init. subst it. simpl. by eexists. - destruct HD as (stk1&Eq1&opro& stk2&HD). rewrite Eq1 in Eqstk. simplify_eq. have ?: opro = it.(protector). { destruct (state_wf_stack_item _ WFT _ _ Eq1) as [_ ND]. have Eq := stack_item_tagged_NoDup_eq _ _ (mkItem Unique (Tagged t) opro) t ND Init ltac:(by left) Eqti eq_refl. by inversion Eq. } subst opro. by eexists. Qed. Lemma public_read_head r σs (σt: state) l stk n' stk' t ss st h: σt.(sst) !! l = Some stk → access1 stk AccessRead (Tagged t) σt.(scs) = Some (n', stk') → wsat r σs σt → rtm r !! t ≡ Some (to_tgkR tkPub, h) → h !! l ≡ Some $ to_agree (ss,st) → ∃ it, it ∈ stk ∧ it.(tg) = Tagged t ∧ it.(perm) ≠ Disabled ∧ t ∈ active_SRO stk ∧ σt.(shp) !! l = Some st ∧ σs.(shp) !! l = Some ss ∧ arel r ss st. Proof. intros Eqstk ACC WSAT HL Eqs. have WSAT1 := WSAT. destruct WSAT as (WFS & WFT & VALID & PINV & CINV & SREL). destruct SREL as (Eqst & Eqnp & Eqcs & Eqnc & PV). specialize (PINV _ _ _ HL) as [? PINV]. specialize (PINV _ _ _ Eqs). destruct (access1_in_stack _ _ _ _ _ _ ACC) as (it & Init & Eqti & NDIS). destruct PINV as [Eqss [Eqss' HD]]. { rewrite /= -Eqti. exists stk, it.(perm), it.(protector). repeat split; [done| |done]. by destruct it. } destruct HD as (stk1 & Eqstk1 & InSRO). rewrite Eqstk1 in Eqstk. simplify_eq. exists it. repeat split; [done..|]. destruct (PV l) as [PB|[t' PR]]; cycle 2. { exfalso. have Eq := priv_loc_UB_access_eq _ _ AccessRead σs σt t' (Tagged t) _ Eqstk1 ltac:(by eexists) WSAT1 PR. simplify_eq. destruct PR as (k1 & h1 & Eqh1 & ? & CASE). move : Eqh1. rewrite HL. rewrite -(left_id None op (Some _)). destruct CASE as [|[]]; subst k1. - apply tagKindR_local_exclusive_r. - apply tagKindR_uniq_exclusive_r. } { rewrite (state_wf_dom _ WFT) elem_of_dom. by eexists. } destruct (PB _ Eqss) as (st' & Eqst' & AREL). rewrite Eqst' in Eqss'. by simplify_eq. Qed. 
Lemma priv_loc_UB_retag_access_eq r σs σt c l old new mut T kind α' nxtp' (FRZ: if mut is Immutable then is_freeze T else True) (N2P: kind ≠ TwoPhase) : retag σt.(sst) σt.(snp) σt.(scs) c l old kind (RefPtr mut) T = Some (new, α', nxtp') → wsat r σs σt → ∀ i t, (i < tsize T)%nat → priv_loc r (l +ₗ i) t → (if old is (Tagged t2) then ∃ (h: heapletR), rtm r !! t2 ≡ Some (to_tgkR tkPub, h) else True) → False. Proof. intros RT WSAT i t Lti. have NZST: (0 < tsize T)%nat by lia. destruct (retag_change_ref_NZST _ _ _ _ _ _ _ _ _ _ _ _ NZST RT); subst new nxtp'. destruct (retag_Some_Ref _ _ _ _ _ _ _ _ _ _ _ _ NZST FRZ N2P RT _ Lti) as (stk & stk' & Eqstk & Eqstk' & GR). apply grant_access1 in GR; [|by destruct mut]. revert Eqstk GR WSAT. by apply priv_pub_access_UB. Qed.
% Test file for Scaling Functions % % Copyright (c) 2018 Department of Computer Science, % University of Toronto, Canada, % Vector Institute, Canada % % License % This file is under the LGPL license, you can % redistribute it and/or modify it under the terms of the GNU Lesser General % Public License as published by the Free Software Foundation, either version 3 % of the License, or (at your option) any later version. This file is % distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; % without even the implied warranty of MERCHANTABILITY or FITNESS FOR A % PARTICULAR PURPOSE. See the GNU Lesser General Public License for more % details. % % This function is part of the Covarep project: http://covarep.github.io/covarep % % Author % Yingxue Wang <[email protected]> % Sean Robertson <[email protected]> % classdef test_scales < matlab.unittest.TestCase properties scaling_function end methods(TestMethodSetup) function setScalingFunction(testCase) testCase.scaling_function = MelScaling(); end end methods (Test) function test_scales_invertible(testCase) for hertz = 200:100:2000 scale = testCase.scaling_function.hertz_to_scale(hertz); testCase.verifyEqual(hertz, testCase.scaling_function.scale_to_hertz(scale),'relTol',sqrt(eps)); end end end end
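The MelScaling class exercised by this MATLAB test is not shown here; as a rough illustration of the invertibility property being checked, the following Python sketch assumes the common HTK-style mel formula (an assumption, not necessarily Covarep's exact definition) and runs the same 200-2000 Hz round trip.

```python
import math

def hertz_to_scale(hertz):
    # HTK-style mel scale, natural-log form: m = 1127 * ln(1 + f / 700)  (assumed formula)
    return 1127.0 * math.log(1.0 + hertz / 700.0)

def scale_to_hertz(scale):
    # exact algebraic inverse of hertz_to_scale
    return 700.0 * (math.exp(scale / 1127.0) - 1.0)

for hertz in range(200, 2001, 100):
    roundtrip = scale_to_hertz(hertz_to_scale(hertz))
    assert abs(roundtrip - hertz) < 1e-8 * hertz  # relative tolerance ~ sqrt(eps)
```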
theory Pow imports Main begin section {* Pow *} fun pow :: "nat \<Rightarrow> nat \<Rightarrow> nat" where "pow _ 0 = 1" | "pow a (Suc n') = a * pow a n'" lemma test_pow: "pow x 0 = 1" by auto theorem pow_0: "\<forall>x :: nat. pow x 0 = 1" by simp end
lemma rel_frontier_deformation_retract_of_punctured_convex: fixes S :: "'a::euclidean_space set" assumes "convex S" "convex T" "bounded S" and arelS: "a \<in> rel_interior S" and relS: "rel_frontier S \<subseteq> T" and affS: "T \<subseteq> affine hull S" obtains r where "homotopic_with_canon (\<lambda>x. True) (T - {a}) (T - {a}) id r" "retraction (T - {a}) (rel_frontier S) r"
{-# OPTIONS --without-K --rewriting #-} open import HoTT module groups.KernelCstImageCst {i j k} (G : Group i) (H : Group j) (K : Group k) (H-ab : is-abelian H) where private module H = Group H open import groups.KernelImage {G = G} {H = H} {K = K} cst-hom cst-hom H-ab Ker-cst-quot-Im-cst : Ker/Im ≃ᴳ H Ker-cst-quot-Im-cst = ≃-to-≃ᴳ (equiv to from to-from from-to) to-pres-comp where to : Ker/Im.El → H.El to = SetQuot-rec H.El-level to' to-rel where to' : Ker.El (cst-hom {G = H} {H = K}) → H.El to' ker = fst ker abstract to-rel : ∀ {h₁ h₂} → ker/im-rel' h₁ h₂ → h₁ == h₂ to-rel {h₁} {h₂} = Trunc-rec (H.El-level _ _) λ{(_ , 0=h₁h₂⁻¹) → H.zero-diff-same h₁ h₂ (! 0=h₁h₂⁻¹)} from : H.El → Ker/Im.El from h = q[ h , idp ] abstract to-from : ∀ h → to (from h) == h to-from h = idp from-to : ∀ k/i → from (to k/i) == k/i from-to = SetQuot-elim (λ _ → =-preserves-set SetQuot-level) (λ _ → ap q[_] $ ker-El=-out idp) (λ _ → prop-has-all-paths-↓ (SetQuot-level _ _)) to-pres-comp : preserves-comp Ker/Im.comp H.comp to to-pres-comp = SetQuot-elim (λ _ → Π-is-set λ _ → =-preserves-set H.El-level) (λ _ → SetQuot-elim (λ _ → =-preserves-set H.El-level) (λ _ → idp) (λ _ → prop-has-all-paths-↓ (H.El-level _ _))) (λ _ → prop-has-all-paths-↓ (Π-is-prop λ _ → H.El-level _ _))
# Phase 3: Data visualization

#1 Number of Player of the Week awards by US state
stevilo_nazivov_zvezne = c(0,0,45,0,44+27+71+23,41,0,0,31,57+17,32,0,0,44,23,0,0,0,17,0,0,43, 29,26,0,0,0,0,0,0,0,0,36+36,26,0,59,61,33,37,0,0,0,9,30+56+61,47,0,0,0,0,26,0)
nazivi_zvezne <- statepop
# Add a new column with the number of awards
nazivi_zvezne$Nazivi <- stevilo_nazivov_zvezne
graf_zda <- plot_usmap(data = nazivi_zvezne, values = "Nazivi", lines = "black") +
  scale_fill_continuous(name = "Število", label = scales::comma) +
  theme(legend.position = "right") +
  ggtitle("Število nazivov igralca tedna po zveznih državah v sezonah 1984/85 do 2017/18")

#2 Team win rate in the regular season vs. win rate in the Play-Offs
graf_korelacija_uspesn <- ggplot(data=podatki_ekipe_imensko, aes(x=podatki_ekipe_imensko$Uspesnost_redni_del, y=podatki_ekipe_imensko$Playoff_uspesnost)) +
  geom_jitter(size=(podatki_ekipe_imensko$Stevilo_playoffov)/1.8, shape=21, color="black", fill="dodgerblue4", alpha=0.7) +
  geom_smooth(method = "lm", size=0.6, alpha= 0.3, color= "dodgerblue2") +
  ggtitle("Korelacija deleža zmag v rednem delu in izločilnih bojih (Play-Offih)") +
  xlab("Uspešnost v rednem delu") + ylab("Uspešnost v Play-offih") +
  annotate("text", x = 0.63, y = 0.22, label = "Velikost kroga pomeni relativno število uvrstitev v izločilne boje", size=3) +
  theme_bw()

#3 Number of Player of the Week awards by number of seasons in the league (0 = draft season)
graf_sezone <- ggplot(data=tabela_nazivi_po_letih, aes(x=Var1,y=Freq)) +
  geom_point(size=3, shape=16, color="dodgerblue4", alpha = 0.7) +
  geom_hline(yintercept=mean(tabela_nazivi_po_letih$Freq), color="gray30", size=0.6) +
  geom_smooth(method="loess", color="dodgerblue2", size=0.6, alpha = 0.3) +
  ggtitle("Število nazivov glede na število sezon v ligi") +
  xlab("Število sezon v ligi") + ylab("Število nazivov") +
  theme_bw() +
  annotate("text", x = 14, y = 145, label = "Število sezon pomeni število že zaključenih sezon pred trenutno", size = 3)

#4 Height of awarded players by playing position
igralci_tedna_pozicijefilt2 <- igralci_tedna[igralci_tedna$Pozicija != "G-F" & igralci_tedna$Pozicija != "F-C" & igralci_tedna$Pozicija != "F",]
graf_visina_pozicija <- ggplot(data=igralci_tedna_pozicijefilt2, aes(x=reorder(Pozicija,igralci_tedna_pozicijefilt2$Visina), y=Visina)) +
  geom_boxplot(alpha=I(0.7), fill="dodgerblue4", outlier.size = 0.3) +
  ggtitle("Višina nagrajenih košarkarjev glede na pozicijo igranja") +
  xlab("Pozicija igranja") + ylab("Višina igralcev") +
  theme_bw()

#5 Number of Player of the Week awards by playing position
# Adjusted table - positions G-F and F-C are dropped because of the very small number of such players:
igralci_tedna_pozicijefilt <- igralci_tedna[igralci_tedna$Pozicija != "G-F" & igralci_tedna$Pozicija != "F-C",]
graf_pozicije_cas <- ggplot(data=igralci_tedna_pozicijefilt, aes(x=igralci_tedna_pozicijefilt$Sezona_okrajsano)) +
  geom_histogram(binwidth=1.1, fill="dodgerblue4", alpha= 0.7) +
  facet_grid(~Pozicija) +
  theme_bw() +
  theme(axis.text.x = element_text(angle=90)) +
  ggtitle("Osvojeni nazivi igralca tedna glede na pozicijo igranja skozi čas") +
  xlab("Leto") + ylab("Število")

#6 --> analiza/analiza.r
#7 --> analiza/analiza.r

#8 Number of wins and number of Player of the Week awards
podatki.join <- inner_join(x = podatki_ekipe_imensko, y= stevilo_nazivov, by = "Ekipa") # %>% select("Ekipa", "Stevilo", "Uspesnost_redni_del")
join_za_graf <- podatki.join %>% arrange(desc(Zmage_redni_del+Playoff_zmage))
graf_zmage_nazivi <- ggplot(data=join_za_graf, aes(x=reorder(Kratice,desc(Zmage_redni_del+Playoff_zmage)), y=Stevilo, fill=Zmage_redni_del+Playoff_zmage)) +
  theme(axis.text.x = element_text(angle=90)) +
  geom_col() +
  ggtitle("Število zmag in število osvojenih nazivov igralca tedna po ekipah") +
  xlab("Ekipa") + ylab("Igralec tedna") +
  labs(fill="Zmage")
import Lean example : True := by apply True.intro --^ textDocument/hover example : True := by simp [True.intro] --^ textDocument/hover example (n : Nat) : True := by match n with | Nat.zero => _ --^ textDocument/hover | n + 1 => _ /-- My tactic -/ macro "mytac" o:"only"? e:term : tactic => `(exact $e) example : True := by mytac only True.intro --^ textDocument/hover --^ textDocument/hover --^ textDocument/hover /-- My way better tactic -/ macro_rules | `(tactic| mytac $[only]? $e) => `(apply $e) example : True := by mytac only True.intro --^ textDocument/hover /-- My ultimate tactic -/ elab_rules : tactic | `(tactic| mytac $[only]? $e) => `(tactic| refine $e) >>= Lean.Elab.Tactic.evalTactic example : True := by mytac only True.intro --^ textDocument/hover /-- My notation -/ macro "mynota" e:term : term => e #check mynota 1 --^ textDocument/hover /-- My way better notation -/ macro_rules | `(mynota $e) => `(2 * $e) #check mynota 1 --^ textDocument/hover -- macro_rules take precedence over elab_rules for term/command, so use new syntax syntax "mynota'" term : term /-- My ultimate notation -/ elab_rules : term | `(mynota' $e) => `($e * $e) >>= (Lean.Elab.Term.elabTerm · none) #check mynota' 1 --^ textDocument/hover /-- My command -/ macro "mycmd" e:term : command => `(def hi := $e) mycmd 1 --^ textDocument/hover /-- My way better command -/ macro_rules | `(mycmd $e) => `(@[inline] def hi := $e) mycmd 1 --^ textDocument/hover syntax "mycmd'" term : command /-- My ultimate command -/ elab_rules : command | `(mycmd' $e) => `(/-- hi -/ @[inline] def hi := $e) >>= Lean.Elab.Command.elabCommand mycmd' 1 --^ textDocument/hover #check ({ a := }) -- should not show `sorry` --^ textDocument/hover example : True := by simp [id True.intro] --^ textDocument/hover --^ textDocument/hover example : Id Nat := do let mut n := 1 n := 2 --^ textDocument/hover n
[STATEMENT] lemma lstrict_prefix_llength_less: assumes "lstrict_prefix xs ys" shows "llength xs < llength ys" [PROOF STATE] proof (prove) goal (1 subgoal): 1. llength xs < llength ys [PROOF STEP] proof(rule ccontr) [PROOF STATE] proof (state) goal (1 subgoal): 1. \<not> llength xs < llength ys \<Longrightarrow> False [PROOF STEP] assume "\<not> llength xs < llength ys" [PROOF STATE] proof (state) this: \<not> llength xs < llength ys goal (1 subgoal): 1. \<not> llength xs < llength ys \<Longrightarrow> False [PROOF STEP] moreover [PROOF STATE] proof (state) this: \<not> llength xs < llength ys goal (1 subgoal): 1. \<not> llength xs < llength ys \<Longrightarrow> False [PROOF STEP] from \<open>lstrict_prefix xs ys\<close> [PROOF STATE] proof (chain) picking this: lstrict_prefix xs ys [PROOF STEP] have "xs \<sqsubseteq> ys" "xs \<noteq> ys" [PROOF STATE] proof (prove) using this: lstrict_prefix xs ys goal (1 subgoal): 1. xs \<sqsubseteq> ys &&& xs \<noteq> ys [PROOF STEP] unfolding lstrict_prefix_def [PROOF STATE] proof (prove) using this: xs \<sqsubseteq> ys \<and> xs \<noteq> ys goal (1 subgoal): 1. xs \<sqsubseteq> ys &&& xs \<noteq> ys [PROOF STEP] by simp_all [PROOF STATE] proof (state) this: xs \<sqsubseteq> ys xs \<noteq> ys goal (1 subgoal): 1. \<not> llength xs < llength ys \<Longrightarrow> False [PROOF STEP] from \<open>xs \<sqsubseteq> ys\<close> [PROOF STATE] proof (chain) picking this: xs \<sqsubseteq> ys [PROOF STEP] have "llength xs \<le> llength ys" [PROOF STATE] proof (prove) using this: xs \<sqsubseteq> ys goal (1 subgoal): 1. llength xs \<le> llength ys [PROOF STEP] by(rule lprefix_llength_le) [PROOF STATE] proof (state) this: llength xs \<le> llength ys goal (1 subgoal): 1. \<not> llength xs < llength ys \<Longrightarrow> False [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: \<not> llength xs < llength ys llength xs \<le> llength ys [PROOF STEP] have "llength xs = llength ys" [PROOF STATE] proof (prove) using this: \<not> llength xs < llength ys llength xs \<le> llength ys goal (1 subgoal): 1. llength xs = llength ys [PROOF STEP] by auto [PROOF STATE] proof (state) this: llength xs = llength ys goal (1 subgoal): 1. \<not> llength xs < llength ys \<Longrightarrow> False [PROOF STEP] with \<open>xs \<sqsubseteq> ys\<close> [PROOF STATE] proof (chain) picking this: xs \<sqsubseteq> ys llength xs = llength ys [PROOF STEP] have "xs = ys" [PROOF STATE] proof (prove) using this: xs \<sqsubseteq> ys llength xs = llength ys goal (1 subgoal): 1. xs = ys [PROOF STEP] by(rule lprefix_llength_eq_imp_eq) [PROOF STATE] proof (state) this: xs = ys goal (1 subgoal): 1. \<not> llength xs < llength ys \<Longrightarrow> False [PROOF STEP] with \<open>xs \<noteq> ys\<close> [PROOF STATE] proof (chain) picking this: xs \<noteq> ys xs = ys [PROOF STEP] show False [PROOF STATE] proof (prove) using this: xs \<noteq> ys xs = ys goal (1 subgoal): 1. False [PROOF STEP] by contradiction [PROOF STATE] proof (state) this: False goal: No subgoals! [PROOF STEP] qed
lemma measurable_Sup_measurable: assumes f: "f \<in> space N \<rightarrow> A" shows "f \<in> measurable N (Sup {M. space M = A \<and> f \<in> measurable N M})"
-- Andreas, 2015-07-22 Fixed bug in test for empty type of sizes {-# OPTIONS --copatterns #-} open import Common.Size record U (i : Size) : Set where coinductive field out : (j : Size< i) → U j open U fixU : {A : Size → Set} (let C = λ i → A i → U i) → ∀ i (f : ∀ i → (∀ (j : Size< i) → C j) → C i) → C i out (fixU i f a) j = out (f i {!λ (j : Size< i) → fixU j f!} a) j -- Giving should succeed (even if termination checking might not succeed yet)
State Before: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 ⊢ (Integrable fun x => f (R • x)) ↔ Integrable f State After: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 ⊢ ∀ {g : E → F}, Integrable g → ∀ {S : ℝ}, S ≠ 0 → Integrable fun x => g (S • x) Tactic: suffices ∀ {g : E → F} (hg : Integrable g μ) {S : ℝ} (hS : S ≠ 0), Integrable (fun x => g (S • x)) μ by refine' ⟨fun hf => _, fun hf => this hf hR⟩ convert this hf (inv_ne_zero hR) rw [← mul_smul, mul_inv_cancel hR, one_smul] State Before: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 ⊢ ∀ {g : E → F}, Integrable g → ∀ {S : ℝ}, S ≠ 0 → Integrable fun x => g (S • x) State After: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 g : E → F hg : Integrable g S : ℝ hS : S ≠ 0 ⊢ Integrable fun x => g (S • x) Tactic: intro g hg S hS State Before: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 g : E → F hg : Integrable g S : ℝ hS : S ≠ 0 ⊢ Integrable fun x => g (S • x) State After: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 g : E → F hg : Integrable g S : ℝ hS : S ≠ 0 t : E ≃ᵐ E := Homeomorph.toMeasurableEquiv (Homeomorph.smul (IsUnit.unit (_ : IsUnit S))) ⊢ Integrable fun x => g (S • x) Tactic: let t := ((Homeomorph.smul (isUnit_iff_ne_zero.2 hS).unit).toMeasurableEquiv : E ≃ᵐ E) State Before: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 g : E → F hg : Integrable g S : ℝ hS : S ≠ 0 t : E ≃ᵐ E := Homeomorph.toMeasurableEquiv (Homeomorph.smul (IsUnit.unit (_ : IsUnit S))) ⊢ Integrable g State After: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 g : E → F hg : Integrable g S : ℝ hS : S ≠ 0 t : E ≃ᵐ E := Homeomorph.toMeasurableEquiv (Homeomorph.smul (IsUnit.unit (_ : IsUnit S))) ⊢ ENNReal.ofReal (abs (S ^ finrank ℝ E)⁻¹) ≠ 0 Tactic: rwa [map_add_haar_smul μ hS, integrable_smul_measure _ ENNReal.ofReal_ne_top] State Before: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 
inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 g : E → F hg : Integrable g S : ℝ hS : S ≠ 0 t : E ≃ᵐ E := Homeomorph.toMeasurableEquiv (Homeomorph.smul (IsUnit.unit (_ : IsUnit S))) ⊢ ENNReal.ofReal (abs (S ^ finrank ℝ E)⁻¹) ≠ 0 State After: no goals Tactic: simpa only [Ne.def, ENNReal.ofReal_eq_zero, not_le, abs_pos] using inv_ne_zero (pow_ne_zero _ hS) State Before: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 this : ∀ {g : E → F}, Integrable g → ∀ {S : ℝ}, S ≠ 0 → Integrable fun x => g (S • x) ⊢ (Integrable fun x => f (R • x)) ↔ Integrable f State After: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 this : ∀ {g : E → F}, Integrable g → ∀ {S : ℝ}, S ≠ 0 → Integrable fun x => g (S • x) hf : Integrable fun x => f (R • x) ⊢ Integrable f Tactic: refine' ⟨fun hf => _, fun hf => this hf hR⟩ State Before: F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 this : ∀ {g : E → F}, Integrable g → ∀ {S : ℝ}, S ≠ 0 → Integrable fun x => g (S • x) hf : Integrable fun x => f (R • x) ⊢ Integrable f State After: case h.e'_5.h.h.e'_1 F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 this : ∀ {g : E → F}, Integrable g → ∀ {S : ℝ}, S ≠ 0 → Integrable fun x => g (S • x) hf : Integrable fun x => f (R • x) x✝ : E ⊢ x✝ = R • R⁻¹ • x✝ Tactic: convert this hf (inv_ne_zero hR) State Before: case h.e'_5.h.h.e'_1 F : Type u_2 inst✝⁶ : NormedAddCommGroup F E : Type u_1 inst✝⁵ : NormedAddCommGroup E inst✝⁴ : NormedSpace ℝ E inst✝³ : MeasurableSpace E inst✝² : BorelSpace E inst✝¹ : FiniteDimensional ℝ E μ : Measure E inst✝ : IsAddHaarMeasure μ f : E → F R : ℝ hR : R ≠ 0 this : ∀ {g : E → F}, Integrable g → ∀ {S : ℝ}, S ≠ 0 → Integrable fun x => g (S • x) hf : Integrable fun x => f (R • x) x✝ : E ⊢ x✝ = R • R⁻¹ • x✝ State After: no goals Tactic: rw [← mul_smul, mul_inv_cancel hR, one_smul]
# Agave Platform Science API
#
# Power your digital lab and reduce the time from theory to discovery using the Agave Science-as-a-Service API Platform. Agave provides hosted services that allow researchers to manage data, conduct experiments, and publish and share results from anywhere at any time.
#
# Agave Platform version: 2.2.14
#
# Generated by: https://github.com/swagger-api/swagger-codegen.git

#' ClientAPISubscription Class
#'
#'
#'
#' @field apiContext The base url path of the API.
#' @field apiName The name of the API.
#' @field apiProvider The user who registered the API.
#' @field apiVersion The current major version of the API. This is appended to the api_context to create the base API url.
#' @field status
#' @field tier
#'
#' @importFrom R6 R6Class
#' @importFrom jsonlite fromJSON toJSON
#' @export
ClientAPISubscription <- R6::R6Class(
  'ClientAPISubscription',
  public = list(
    `apiContext` = NULL,
    `apiName` = NULL,
    `apiProvider` = NULL,
    `apiVersion` = NULL,
    `status` = NULL,
    `tier` = NULL,
    initialize = function(`apiContext`, `apiName`, `apiProvider`, `apiVersion`, `status`, `tier`){
      if (!missing(`apiContext`)) {
        stopifnot(is.character(`apiContext`), length(`apiContext`) == 1)
        self$`apiContext` <- `apiContext`
      }
      if (!missing(`apiName`)) {
        stopifnot(is.character(`apiName`), length(`apiName`) == 1)
        self$`apiName` <- `apiName`
      }
      if (!missing(`apiProvider`)) {
        stopifnot(is.character(`apiProvider`), length(`apiProvider`) == 1)
        self$`apiProvider` <- `apiProvider`
      }
      if (!missing(`apiVersion`)) {
        stopifnot(is.character(`apiVersion`), length(`apiVersion`) == 1)
        self$`apiVersion` <- `apiVersion`
      }
      if (!missing(`status`)) {
        stopifnot(R6::is.R6(`status`))
        self$`status` <- `status`
      }
      if (!missing(`tier`)) {
        stopifnot(R6::is.R6(`tier`))
        self$`tier` <- `tier`
      }
    },
    asJSON = function() {
      self$toJSON()
    },
    toJSON = function() {
      ClientAPISubscriptionObject <- list()
      if (!is.null(self$`apiContext`)) {
        ClientAPISubscriptionObject[['apiContext']] <- self$`apiContext`
      } else {
        ClientAPISubscriptionObject[['apiContext']] <- NULL
      }
      if (!is.null(self$`apiName`)) {
        ClientAPISubscriptionObject[['apiName']] <- self$`apiName`
      } else {
        ClientAPISubscriptionObject[['apiName']] <- NULL
      }
      if (!is.null(self$`apiProvider`)) {
        ClientAPISubscriptionObject[['apiProvider']] <- self$`apiProvider`
      } else {
        ClientAPISubscriptionObject[['apiProvider']] <- NULL
      }
      if (!is.null(self$`apiVersion`)) {
        ClientAPISubscriptionObject[['apiVersion']] <- self$`apiVersion`
      } else {
        ClientAPISubscriptionObject[['apiVersion']] <- NULL
      }
      if (!is.null(self$`status`)) {
        ClientAPISubscriptionObject[['status']] <- self$`status`$toJSON()
      } else {
        ClientAPISubscriptionObject[['status']] <- NULL
      }
      if (!is.null(self$`tier`)) {
        ClientAPISubscriptionObject[['tier']] <- self$`tier`$toJSON()
      } else {
        ClientAPISubscriptionObject[['tier']] <- NULL
      }
      ClientAPISubscriptionObject
    },
    fromJSON = function(ClientAPISubscriptionObject) {
      if (is.character(ClientAPISubscriptionObject)) {
        # parse the JSON string that was passed in (the generated code referenced
        # an undefined variable here)
        ClientAPISubscriptionObject <- jsonlite::fromJSON(ClientAPISubscriptionObject)
      }
      if ("result" %in% names(ClientAPISubscriptionObject)) {
        ClientAPISubscriptionObject <- ClientAPISubscriptionObject$result
      }
      if (!is.null(ClientAPISubscriptionObject$`apiContext`)) {
        self$`apiContext` <- ClientAPISubscriptionObject$`apiContext`
      }
      if (!is.null(ClientAPISubscriptionObject$`apiName`)) {
        self$`apiName` <- ClientAPISubscriptionObject$`apiName`
      }
      if (!is.null(ClientAPISubscriptionObject$`apiProvider`)) {
        self$`apiProvider` <- ClientAPISubscriptionObject$`apiProvider`
      }
      if (!is.null(ClientAPISubscriptionObject$`apiVersion`)) {
        self$`apiVersion` <- ClientAPISubscriptionObject$`apiVersion`
      }
      if (!is.null(ClientAPISubscriptionObject$`status`)) {
        statusObject <- ClientAPISubscriptionStatusType$new()
        statusObject$fromJSON(jsonlite::toJSON(ClientAPISubscriptionObject$status, auto_unbox = TRUE))
        self$`status` <- statusObject
      }
      if (!is.null(ClientAPISubscriptionObject$`tier`)) {
        tierObject <- ClientSubscriptionTier$new()
        tierObject$fromJSON(jsonlite::toJSON(ClientAPISubscriptionObject$tier, auto_unbox = TRUE))
        self$`tier` <- tierObject
      }
    },
    toJSONString = function() {
      sprintf(
        '{ "apiContext": %s, "apiName": %s, "apiProvider": %s, "apiVersion": %s, "status": %s, "tier": %s }',
        ifelse( is.null(self$`apiContext`), "null", paste0('"', self$`apiContext`, '"')),
        ifelse( is.null(self$`apiName`), "null", paste0('"', self$`apiName`, '"')),
        ifelse( is.null(self$`apiProvider`), "null", paste0('"', self$`apiProvider`, '"')),
        ifelse( is.null(self$`apiVersion`), "null", paste0('"', self$`apiVersion`, '"')),
        self$`status`$toJSON(),
        self$`tier`$toJSON()
      )
    },
    fromJSONString = function(ClientAPISubscriptionJson) {
      ClientAPISubscriptionObject <- jsonlite::fromJSON(ClientAPISubscriptionJson)
      self$fromJSON(ClientAPISubscriptionObject)
    }
  )
)
[STATEMENT] lemma horner_sum_less_iff_lexordp: \<open>horner_sum of_bool 2 bs < horner_sum of_bool 2 cs \<longleftrightarrow> ord_class.lexordp (rev bs) (rev cs)\<close> if \<open>length bs = length cs\<close> [PROOF STATE] proof (prove) goal (1 subgoal): 1. (horner_sum of_bool (2::'a) bs < horner_sum of_bool (2::'a) cs) = ord_class.lexordp (rev bs) (rev cs) [PROOF STEP] proof - [PROOF STATE] proof (state) goal (1 subgoal): 1. (horner_sum of_bool (2::'a) bs < horner_sum of_bool (2::'a) cs) = ord_class.lexordp (rev bs) (rev cs) [PROOF STEP] have \<open>horner_sum of_bool 2 (rev bs) < horner_sum of_bool 2 (rev cs) \<longleftrightarrow> ord_class.lexordp bs cs\<close> if \<open>length bs = length cs\<close> for bs cs [PROOF STATE] proof (prove) goal (1 subgoal): 1. (horner_sum of_bool (2::'a) (rev bs) < horner_sum of_bool (2::'a) (rev cs)) = ord_class.lexordp bs cs [PROOF STEP] using that [PROOF STATE] proof (prove) using this: length bs = length cs goal (1 subgoal): 1. (horner_sum of_bool (2::'a) (rev bs) < horner_sum of_bool (2::'a) (rev cs)) = ord_class.lexordp bs cs [PROOF STEP] proof (induction bs cs rule: list_induct2) [PROOF STATE] proof (state) goal (2 subgoals): 1. (horner_sum of_bool (2::'a) (rev []) < horner_sum of_bool (2::'a) (rev [])) = ord_class.lexordp [] [] 2. \<And>x xs y ys. \<lbrakk>length xs = length ys; (horner_sum of_bool (2::'a) (rev xs) < horner_sum of_bool (2::'a) (rev ys)) = ord_class.lexordp xs ys\<rbrakk> \<Longrightarrow> (horner_sum of_bool (2::'a) (rev (x # xs)) < horner_sum of_bool (2::'a) (rev (y # ys))) = ord_class.lexordp (x # xs) (y # ys) [PROOF STEP] case Nil [PROOF STATE] proof (state) this: goal (2 subgoals): 1. (horner_sum of_bool (2::'a) (rev []) < horner_sum of_bool (2::'a) (rev [])) = ord_class.lexordp [] [] 2. \<And>x xs y ys. \<lbrakk>length xs = length ys; (horner_sum of_bool (2::'a) (rev xs) < horner_sum of_bool (2::'a) (rev ys)) = ord_class.lexordp xs ys\<rbrakk> \<Longrightarrow> (horner_sum of_bool (2::'a) (rev (x # xs)) < horner_sum of_bool (2::'a) (rev (y # ys))) = ord_class.lexordp (x # xs) (y # ys) [PROOF STEP] then [PROOF STATE] proof (chain) picking this: [PROOF STEP] show ?case [PROOF STATE] proof (prove) goal (1 subgoal): 1. (horner_sum of_bool (2::'a) (rev []) < horner_sum of_bool (2::'a) (rev [])) = ord_class.lexordp [] [] [PROOF STEP] by simp [PROOF STATE] proof (state) this: (horner_sum of_bool (2::'a) (rev []) < horner_sum of_bool (2::'a) (rev [])) = ord_class.lexordp [] [] goal (1 subgoal): 1. \<And>x xs y ys. \<lbrakk>length xs = length ys; (horner_sum of_bool (2::'a) (rev xs) < horner_sum of_bool (2::'a) (rev ys)) = ord_class.lexordp xs ys\<rbrakk> \<Longrightarrow> (horner_sum of_bool (2::'a) (rev (x # xs)) < horner_sum of_bool (2::'a) (rev (y # ys))) = ord_class.lexordp (x # xs) (y # ys) [PROOF STEP] next [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>x xs y ys. \<lbrakk>length xs = length ys; (horner_sum of_bool (2::'a) (rev xs) < horner_sum of_bool (2::'a) (rev ys)) = ord_class.lexordp xs ys\<rbrakk> \<Longrightarrow> (horner_sum of_bool (2::'a) (rev (x # xs)) < horner_sum of_bool (2::'a) (rev (y # ys))) = ord_class.lexordp (x # xs) (y # ys) [PROOF STEP] case (Cons b bs c cs) [PROOF STATE] proof (state) this: length bs = length cs (horner_sum of_bool (2::'a) (rev bs) < horner_sum of_bool (2::'a) (rev cs)) = ord_class.lexordp bs cs goal (1 subgoal): 1. \<And>x xs y ys. 
\<lbrakk>length xs = length ys; (horner_sum of_bool (2::'a) (rev xs) < horner_sum of_bool (2::'a) (rev ys)) = ord_class.lexordp xs ys\<rbrakk> \<Longrightarrow> (horner_sum of_bool (2::'a) (rev (x # xs)) < horner_sum of_bool (2::'a) (rev (y # ys))) = ord_class.lexordp (x # xs) (y # ys) [PROOF STEP] with horner_sum_nonnegative [of \<open>rev bs\<close>] horner_sum_nonnegative [of \<open>rev cs\<close>] horner_sum_bound [of \<open>rev bs\<close>] horner_sum_bound [of \<open>rev cs\<close>] [PROOF STATE] proof (chain) picking this: (0::'a) \<le> horner_sum of_bool (2::'a) (rev bs) (0::'a) \<le> horner_sum of_bool (2::'a) (rev cs) horner_sum of_bool (2::'a) (rev bs) < (2::'a) ^ length (rev bs) horner_sum of_bool (2::'a) (rev cs) < (2::'a) ^ length (rev cs) length bs = length cs (horner_sum of_bool (2::'a) (rev bs) < horner_sum of_bool (2::'a) (rev cs)) = ord_class.lexordp bs cs [PROOF STEP] show ?case [PROOF STATE] proof (prove) using this: (0::'a) \<le> horner_sum of_bool (2::'a) (rev bs) (0::'a) \<le> horner_sum of_bool (2::'a) (rev cs) horner_sum of_bool (2::'a) (rev bs) < (2::'a) ^ length (rev bs) horner_sum of_bool (2::'a) (rev cs) < (2::'a) ^ length (rev cs) length bs = length cs (horner_sum of_bool (2::'a) (rev bs) < horner_sum of_bool (2::'a) (rev cs)) = ord_class.lexordp bs cs goal (1 subgoal): 1. (horner_sum of_bool (2::'a) (rev (b # bs)) < horner_sum of_bool (2::'a) (rev (c # cs))) = ord_class.lexordp (b # bs) (c # cs) [PROOF STEP] by (auto simp add: horner_sum_append not_less Cons intro: add_strict_increasing2 add_increasing) [PROOF STATE] proof (state) this: (horner_sum of_bool (2::'a) (rev (b # bs)) < horner_sum of_bool (2::'a) (rev (c # cs))) = ord_class.lexordp (b # bs) (c # cs) goal: No subgoals! [PROOF STEP] qed [PROOF STATE] proof (state) this: length ?bs1 = length ?cs1 \<Longrightarrow> (horner_sum of_bool (2::'a) (rev ?bs1) < horner_sum of_bool (2::'a) (rev ?cs1)) = ord_class.lexordp ?bs1 ?cs1 goal (1 subgoal): 1. (horner_sum of_bool (2::'a) bs < horner_sum of_bool (2::'a) cs) = ord_class.lexordp (rev bs) (rev cs) [PROOF STEP] from that this [of \<open>rev bs\<close> \<open>rev cs\<close>] [PROOF STATE] proof (chain) picking this: length bs = length cs length (rev bs) = length (rev cs) \<Longrightarrow> (horner_sum of_bool (2::'a) (rev (rev bs)) < horner_sum of_bool (2::'a) (rev (rev cs))) = ord_class.lexordp (rev bs) (rev cs) [PROOF STEP] show ?thesis [PROOF STATE] proof (prove) using this: length bs = length cs length (rev bs) = length (rev cs) \<Longrightarrow> (horner_sum of_bool (2::'a) (rev (rev bs)) < horner_sum of_bool (2::'a) (rev (rev cs))) = ord_class.lexordp (rev bs) (rev cs) goal (1 subgoal): 1. (horner_sum of_bool (2::'a) bs < horner_sum of_bool (2::'a) cs) = ord_class.lexordp (rev bs) (rev cs) [PROOF STEP] by simp [PROOF STATE] proof (state) this: (horner_sum of_bool (2::'a) bs < horner_sum of_bool (2::'a) cs) = ord_class.lexordp (rev bs) (rev cs) goal: No subgoals! [PROOF STEP] qed
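Since the lemma just proved concerns finite bit-lists, it can also be sanity-checked numerically. The Python sketch below (an illustration, not part of the Isabelle development) brute-forces all pairs of equal-length bit-lists up to length 6 and confirms that comparing little-endian Horner sums agrees with lexicographically comparing the reversed lists.

```python
from itertools import product

def horner_sum(bits):
    # horner_sum of_bool 2 [b0, b1, ...] = b0 + 2*b1 + 4*b2 + ... (bits[0] is the LSB)
    total = 0
    for b in reversed(bits):
        total = 2 * total + b
    return total

for n in range(1, 7):
    for bs in product([0, 1], repeat=n):
        for cs in product([0, 1], repeat=n):
            lhs = horner_sum(bs) < horner_sum(cs)
            rhs = list(reversed(bs)) < list(reversed(cs))  # strict lexicographic order
            assert lhs == rhs
```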
context("Color detection")

test_that("has_color, option", {
  withr::with_options(
    c(crayon.enabled = FALSE),
    expect_false(has_color())
  )
  withr::with_options(
    c(crayon.enabled = TRUE),
    expect_true(has_color())
  )
})

test_that("has_color, rstudio", {
  mockery::stub(has_color, "rstudio_with_ansi_support", TRUE)
  mockery::stub(has_color, "rstudioapi::callFun", TRUE)
  expect_true(has_color())
})

test_that("has_color, not a terminal", {
  mockery::stub(has_color, "rstudio_with_ansi_support", FALSE)
  mockery::stub(has_color, "isatty", FALSE)
  withr::with_options(
    list(crayon.enabled = NULL),
    expect_false(has_color())
  )
})

test_that("has_color, windows terminal", {
  mockery::stub(has_color, "rstudio_with_ansi_support", FALSE)
  mockery::stub(has_color, "os_type", "windows")
  withr::with_envvar(
    c(ConEmuANSI = "ON", CMDER_ROOT = ""),
    expect_true(has_color())
  )
  withr::with_envvar(
    c(ConEmuANSI = "OFF", CMDER_ROOT = "/foobar"),
    expect_true(has_color())
  )
  withr::with_options(
    list(crayon.enabled = NULL),
    withr::with_envvar(
      c(ConEmuANSI = "OFF", CMDER_ROOT = NA_character_),
      expect_false(has_color())
    )
  )
})

test_that("number of colors is detected", {
  nc <- num_colors()
  expect_true(nc > 0)
  expect_equal(nc, as.integer(nc))
})

test_that("closure based memoization works", {
  # use `crayon.colors` option to force expected color return
  old.opt <- options(crayon.colors = 42, crayon.enabled=TRUE)
  expect_equal(num_colors(forget = TRUE), 42)
  options(crayon.colors = 43)
  expect_equal(num_colors(), 42)
  expect_equal(num_colors(forget = TRUE), 43)
  # reset options and run one more time
  options(old.opt)
  expect_equal(num_colors(), 43)
  # reset cache to original value
  try(num_colors(forget = TRUE))
})

test_that("tput errors are handled gracefully", {
  ## if tput errors num_colors is 8
  mockery::stub(get_terminal_colors, "system", function(...) stop("Error!"))
  expect_equal(get_terminal_colors(), 8)

  ## if tput returns nothing num_colors is 8
  mockery::stub(get_terminal_colors, "system", function(...) character(0))
  expect_equal(get_terminal_colors(), 8)

  ## if tput returns a non-number num_colors is 8
  mockery::stub(get_terminal_colors, "system", function(...) "no colors!")
  expect_equal(get_terminal_colors(), 8)

  ## if tput returns a number the result is that number
  mockery::stub(get_terminal_colors, "system", function(...) 16)
  expect_equal(get_terminal_colors(), 16)
})
function out = convert_caffe2img( out ) assert(length(size(out)) <= 4, 'Only support at most 4-D data for convert.'); out = single(permute(out(:,:,end:-1:1,:), [2 1 3 4])); end
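For readers more familiar with numpy, here is a rough Python rendering of what this MATLAB helper does, under the assumption that the input is a 3-D or 4-D Caffe blob laid out as (width, height, BGR channels[, batch]); the exact layout is an assumption on my part, not stated in the original.

```python
import numpy as np

def convert_caffe2img(out):
    # MATLAB equivalent: single(permute(out(:, :, end:-1:1, :), [2 1 3 4]))
    # assumes out has 3 or 4 dimensions so that axis 2 is the channel axis
    assert out.ndim <= 4, 'Only support at most 4-D data for convert.'
    out = np.flip(out, axis=2)    # reverse the channel axis (e.g. BGR -> RGB)
    out = np.swapaxes(out, 0, 1)  # swap the two spatial axes (width <-> height)
    return out.astype(np.float32)
```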
{-# LANGUAGE BangPatterns #-} {-# LANGUAGE DefaultSignatures #-} {-# LANGUAGE DerivingStrategies #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE EmptyCase #-} {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE PolyKinds #-} {-# LANGUAGE Trustworthy #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE TypeSynonymInstances #-} {-# language RankNTypes #-} -- | A \"confusing\" default implementation of a 'Traversable'-like class that -- produces code very much like derived instances. It uses the same magic -- behind @Control.Lens.Traversal.confusing@. module Generics.Deriving.TraversableConf ( -- * Generic Traversable class GTraversable(..) -- * Default method , gtraversedefault -- * Internal Traversable class , GTraversable'(..) ) where import Control.Applicative (Const, WrappedMonad(..), ZipList) import qualified Data.Monoid as Monoid (First, Last, Product, Sum) import Data.Monoid (Dual) import Generics.Linear import Generics.Deriving.Foldable import Generics.Deriving.Functor import Data.Complex (Complex) import Data.Ord (Down) import Data.Proxy (Proxy) import Data.Functor.Identity (Identity) import qualified Data.Functor.Product as Functor (Product) import qualified Data.Functor.Sum as Functor (Sum) import Data.Functor.Compose (Compose) import Data.List.NonEmpty (NonEmpty) import qualified Data.Semigroup as Semigroup (First, Last) import Data.Semigroup (Arg, Max, Min, WrappedMonoid) -------------------------------------------------------------------------------- -- Generic traverse -------------------------------------------------------------------------------- class GTraversable' t where gtraverse' :: Applicative f => (a -> f b) -> t a -> CY f (t b) instance GTraversable' V1 where gtraverse' _ x = pure $ case x of instance GTraversable' U1 where gtraverse' _ U1 = pure U1 instance GTraversable' Par1 where gtraverse' f (Par1 a) = Par1 <$> liftCY (f a) instance GTraversable' (K1 i c) where gtraverse' _ (K1 a) = pure (K1 a) instance GTraversable' f => GTraversable' (M1 i c f) where gtraverse' f (M1 a) = M1 <$> gtraverse' f a instance GTraversable' f => GTraversable' (MP1 m f) where gtraverse' f (MP1 a) = (\x -> MP1 x) <$> gtraverse' f a instance (GTraversable' f, GTraversable' g) => GTraversable' (f :+: g) where gtraverse' f (L1 a) = L1 <$> gtraverse' f a gtraverse' f (R1 a) = R1 <$> gtraverse' f a instance (GTraversable' f, GTraversable' g) => GTraversable' (f :*: g) where gtraverse' f (a :*: b) = (:*:) <$> gtraverse' f a <*> gtraverse' f b instance (GTraversable' f, GTraversable g) => GTraversable' (f :.: g) where gtraverse' f (Comp1 x) = Comp1 <$> gtraverse' (gtraverse f) x instance GTraversable' UAddr where gtraverse' _ (UAddr a) = pure (UAddr a) instance GTraversable' UChar where gtraverse' _ (UChar c) = pure (UChar c) instance GTraversable' UDouble where gtraverse' _ (UDouble d) = pure (UDouble d) instance GTraversable' UFloat where gtraverse' _ (UFloat f) = pure (UFloat f) instance GTraversable' UInt where gtraverse' _ (UInt i) = pure (UInt i) instance GTraversable' UWord where gtraverse' _ (UWord w) = pure (UWord w) class (GFunctor t, GFoldable t) => GTraversable t where gtraverse :: Applicative f => (a -> f b) -> t a -> f (t b) default gtraverse :: (Generic1 t, GTraversable' (Rep1 t), Applicative f) => (a -> f b) -> t a -> f (t b) gtraverse = gtraversedefault gsequenceA :: Applicative f => t (f a) -> f (t a) gsequenceA = gtraverse id gmapM :: Monad m => (a -> m b) -> t a -> m (t b) gmapM f = unwrapMonad . 
gtraverse (WrapMonad . f) gsequence :: Monad m => t (m a) -> m (t a) gsequence = gmapM id gtraversedefault :: (Generic1 t, GTraversable' (Rep1 t), Applicative f) => (a -> f b) -> t a -> f (t b) gtraversedefault f x = lowerCY $ to1 <$> gtraverse' f (from1 x) {-# INLINE gtraversedefault #-} -- Base types instances instance GTraversable ((,) a) instance GTraversable ((,,) a b) instance GTraversable ((,,,) a b c) instance GTraversable [] instance GTraversable (Arg a) instance GTraversable Complex instance GTraversable (Const m) instance GTraversable Down instance GTraversable Dual instance GTraversable (Either a) instance GTraversable Monoid.First instance GTraversable (Semigroup.First) instance GTraversable Identity instance GTraversable Monoid.Last instance GTraversable Semigroup.Last instance GTraversable Max instance GTraversable Maybe instance GTraversable Min instance GTraversable NonEmpty instance GTraversable Monoid.Product instance (GTraversable f, GTraversable g) => GTraversable (Functor.Product f g) instance (GTraversable f, GTraversable g) => GTraversable (Compose f g) instance GTraversable Proxy instance GTraversable Monoid.Sum instance (GTraversable f, GTraversable g) => GTraversable (Functor.Sum f g) instance GTraversable WrappedMonoid instance GTraversable ZipList -- The types below are stolen from kan-extensions, and used in the same way as -- Control.Lens.Traversal.confusing. Note that this is *not* equivalent to -- applying `confusing` itself to a plain traversal: the latter seems to make a -- mess with types like -- -- data Gramp f a = Gramp Int a (f a) newtype Curried g h a = Curried (forall r. g (a -> r) -> h r) instance Functor g => Functor (Curried g h) where fmap f (Curried g) = Curried (g . fmap (.f)) {-# INLINE fmap #-} instance (Functor g, g ~ h) => Applicative (Curried g h) where pure a = Curried (fmap ($ a)) {-# INLINE pure #-} Curried mf <*> Curried ma = Curried (ma . mf . fmap (.)) {-# INLINE (<*>) #-} lowerCurried :: Applicative f => Curried f g a -> g a lowerCurried (Curried f) = f (pure id) {-# INLINE lowerCurried #-} newtype Yoneda f a = Yoneda { runYoneda :: forall b. (a -> b) -> f b } lowerYoneda :: Yoneda f a -> f a lowerYoneda (Yoneda f) = f id {-# INLINE lowerYoneda #-} instance Functor (Yoneda f) where fmap f m = Yoneda (\k -> runYoneda m (k . f)) {-# INLINE fmap #-} instance Applicative f => Applicative (Yoneda f) where pure a = Yoneda (\f -> pure (f a)) {-# INLINE pure #-} Yoneda m <*> Yoneda n = Yoneda (\f -> m (f .) <*> n id) {-# INLINE (<*>) #-} -- Lifted from the implementation of Control.Lens.Traversal.confusing liftCurriedYoneda :: Applicative f => f a -> Curried (Yoneda f) (Yoneda f) a liftCurriedYoneda fa = Curried (`yap` fa) {-# INLINE liftCurriedYoneda #-} yap :: Applicative f => Yoneda f (a -> b) -> f a -> Yoneda f b yap (Yoneda k) fa = Yoneda (\ab_r -> k (ab_r .) <*> fa) {-# INLINE yap #-} -- This wrapper makes it easy to swap out implementations. -- See, for example, https://github.com/glguy/generic-traverse, -- which is essentially the same but uses a custom @Boggle@ -- type. I don't have a good sense of the tradeoffs between -- the two. newtype CY f a = CY { unCY :: Curried (Yoneda f) (Yoneda f) a } deriving newtype (Functor, Applicative) liftCY :: Applicative f => f a -> CY f a liftCY = CY . liftCurriedYoneda lowerCY :: Applicative f => CY f a -> f a lowerCY = lowerYoneda . lowerCurried . unCY
```python
from sympy import Symbol, expand

x = Symbol('x')
ex = (x + 3/(x**2))**5
expand(ex)
```

$\displaystyle x^{5} + 15 x^{2} + \frac{90}{x} + \frac{270}{x^{4}} + \frac{405}{x^{7}} + \frac{243}{x^{10}}$

Answer: 15
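As a quick check of the quoted answer (the coefficient of $x^{2}$ in the expansion), the coefficient can be extracted directly with sympy's `coeff`:

```python
from sympy import Symbol, expand

x = Symbol('x')
ex = (x + 3/x**2)**5
print(expand(ex).coeff(x, 2))  # -> 15
```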
module HelloWorld

export
hello : String
hello = "Hello, World!"

export
version : String
version = "1.0.0"
-- Andreas, 2020-09-26, issue #4944. -- Size solver got stuck on projected variables which are left over -- in some size constraints by the generalization feature. -- {-# OPTIONS --sized-types #-} -- {-# OPTIONS --show-implicit #-} -- {-# OPTIONS -v tc.conv.size:60 -v tc.size:30 -v tc.meta.assign:10 #-} open import Agda.Builtin.Size variable i : Size postulate A : Set data ListA (i : Size) : Set where nil : ListA i cons : (j : Size< i) (t : A) (as : ListA j) → ListA i postulate node : A → ListA ∞ → A R : (i : Size) (as as′ : ListA i) → Set test : -- {i : Size} -- made error vanish (t u : A) (as : ListA i) → R (↑ (↑ i)) (cons (↑ i) t (cons i u as)) (cons _ (node t (cons _ u nil)) as) variable t u : A as : ListA i postulate tst2 : R _ (cons _ t (cons _ u as)) (cons _ (node t (cons _ u nil)) as) -- Should pass.
[STATEMENT] lemma typeof_box_operand_comp[simp]: "typeof \<circ> box_operand = (\<lambda>_. None)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. typeof \<circ> box_operand = Map.empty [PROOF STEP] by auto
----------------------------------------------------------------------------- -- | -- Module : Numeric.Statistics.Histogram -- Copyright : (c) A. V. H. McPhail 2010, 2014 -- License : BSD3 -- -- Maintainer : haskell.vivian.mcphail <at> gmail <dot> com -- Stability : provisional -- Portability : portable -- -- create histograms from density functions -- ----------------------------------------------------------------------------- module Numeric.Statistics.Histogram ( cumulativeToHistogram , gaussianHistogram ) where ----------------------------------------------------------------------------- --import qualified Data.Packed.Vector as V import qualified Data.Vector.Storable as V import qualified Numeric.GSL.Histogram as H --import qualified Numeric.GSL.Histogram2D as H2 import qualified Numeric.GSL.Distribution.Continuous as C --import Numeric.LinearAlgebra.Algorithms --import Numeric.LinearAlgebra.Interface() ----------------------------------------------------------------------------- vectorToTuples = toTuples . V.toList where toTuples [] = error "need a minimum of two elements" toTuples [_] = error "need a minimum of two elements" toTuples [x1,x2] = [(x1,x2)] toTuples (x1:x2:xs) = (x1,x2) : (toTuples (x2:xs)) ----------------------------------------------------------------------------- cumulativeToHistogram :: (Double -> Double) -- ^ the cumulative distribution function D(x <= X) -> V.Vector Double -- ^ the bins -> H.Histogram -- ^ the resulting histogram cumulativeToHistogram f v = H.addListWeighted (H.emptyRanges v) $ map (\(x1,x2) -> ((x1 + x2) / 2.0,f x2 - f x1)) (vectorToTuples v) gaussianHistogram :: Double -- ^ mean -> Double -- ^ standard deviation -> V.Vector Double -- ^ the bins -> H.Histogram -- ^ the resulting histogram gaussianHistogram u s = cumulativeToHistogram (\x -> C.density_1p C.Gaussian C.Lower s (x-u)) -----------------------------------------------------------------------------
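The construction above — the weight of each bin $[b_i, b_{i+1})$ is $D(b_{i+1}) - D(b_i)$ for the cumulative distribution $D$ — is easy to mirror in Python. A sketch with `scipy.stats.norm` standing in for the GSL Gaussian (all names illustrative):

```python
import numpy as np
from scipy.stats import norm

def cumulative_to_histogram(cdf, bins):
    """Per-bin weights cdf(b_{i+1}) - cdf(b_i), as in cumulativeToHistogram."""
    bins = np.asarray(bins)
    if bins.size < 2:
        raise ValueError("need a minimum of two elements")
    return np.diff(cdf(bins))

def gaussian_histogram(mu, sigma, bins):
    return cumulative_to_histogram(lambda x: norm.cdf(x, loc=mu, scale=sigma), bins)

print(gaussian_histogram(0.0, 1.0, np.linspace(-3.0, 3.0, 7)))
```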
module Libra.Parser import Libra.Constants import Libra.Data import public Control.Monad.Identity import public Lightyear import Lightyear.Char import Lightyear.Strings %access export %default partial {- Parse Combinators -} parseName : Parser SExpr parseName = lexeme (letter >>= \x => commitTo $ do { xs <- many alphaNum let name = pack (x :: xs) pure (MkName name) }) parseExpr : Parser SExpr parseExpr = parseName <|>| ( MkSExpr <$> parens (many parseExpr) ) out : Identity (Result str SExpr) -> Either (List (str, String)) SExpr out (Id (Failure xs)) = Left xs out (Id (Success x y)) = Right y
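For readers unfamiliar with Lightyear's combinators, here is a rough Python recursive-descent equivalent of the grammar encoded by `parseName`/`parseExpr` (a name is a letter followed by alphanumerics; an expression is a name or a parenthesized list of expressions). Purely illustrative:

```python
import re

def parse_sexpr(src, i=0):
    """Parse a name (letter, then alphanumerics) or a parenthesized
    list of sub-expressions -- the same grammar as parseExpr above."""
    while i < len(src) and src[i].isspace():
        i += 1
    if i >= len(src):
        raise SyntaxError("unexpected end of input")
    m = re.match(r"[A-Za-z][A-Za-z0-9]*", src[i:])
    if m:
        return ("Name", m.group()), i + m.end()
    if src[i] == "(":
        items, i = [], i + 1
        while True:
            while i < len(src) and src[i].isspace():
                i += 1
            if i < len(src) and src[i] == ")":
                return ("SExpr", items), i + 1
            node, i = parse_sexpr(src, i)
            items.append(node)
    raise SyntaxError(f"unexpected character at position {i}")

print(parse_sexpr("(foo (bar baz))")[0])
# -> ('SExpr', [('Name', 'foo'), ('SExpr', [('Name', 'bar'), ('Name', 'baz')])])
```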
/**
 * @file simple_http_client.cpp
 *
 * @date 2015-02-03
 *
 * @author Pablo Rodríguez González
 *
 */
#include <boost/program_options.hpp>
#include <algorithm>  // for_each
#include <cstdlib>
#include <iostream>
#include <map>        // response headers
#include <string>
#include <boost/asio.hpp>
#include <boost/algorithm/string.hpp>  // trim
using namespace std;

class HTTPGetRequest {
public:
  HTTPGetRequest(const string &host, const string &path) :
      host(host), path(path) {
  }
  const string& Host() const {
    return host;
  }
  const string& Path() const {
    return path;
  }
private:
  string host;
  string path;
};

std::ostream & operator<<(std::ostream &os, const HTTPGetRequest &request) {
  os << "GET " << request.Path() << " HTTP/1.0\r\n";
  os << "Host: " << request.Host() << "\r\n";
  os << "Accept: */*\r\n";
  os << "Connection: close\r\n\r\n";
  return os;
}

class HTTPResponse {
public:
  HTTPResponse(istream &response_stream) {
    response_stream >> http_version;
    response_stream >> status_code;
    getline(response_stream, status_message); // keeps the space after the status code
    string header;
    using namespace boost;
    while (getline(response_stream, header)) {
      trim(header); // drop the trailing '\r' that getline leaves on each line
      auto separator_position = header.find_first_of(":");
      if (separator_position != string::npos) {
        headers[header.substr(0, separator_position)] = header.substr(
            separator_position + 2, string::npos);
      }
    }
  }
  const map<string, string>& Headers() const {
    return headers;
  }
  const string& HttpVersion() const {
    return http_version;
  }
  unsigned int StatusCode() const {
    return status_code;
  }
  const string& StatusMessage() const {
    return status_message;
  }
private:
  string http_version;
  unsigned int status_code;
  string status_message;
  map<string, string> headers;
};

std::ostream & operator<<(std::ostream &os, const HTTPResponse &response) {
  os << "HTTPResponse: " << endl;
  os << response.HttpVersion() << " ";
  os << response.StatusCode() << " ";
  os << response.StatusMessage() << endl;
  auto headers = response.Headers();
  for_each(headers.begin(), headers.end(),
      [&os](const pair<string, string> &pair) {
        os << pair.first << ": " << pair.second << endl;
      });
  return os;
}

void ParseCommandLine(int argc, const char *argv[], string &host, int &port,
    string &path);

int main(int argc, const char *argv[]) {
  int port;
  string host;
  string path;
  ParseCommandLine(argc, argv, host, port, path);
  try {
    using namespace boost::asio;
    using namespace boost::asio::ip;
    using boost::asio::streambuf;
    io_service communication_service;
    auto server = tcp::resolver(communication_service).resolve(
        boost::asio::ip::tcp::resolver::query(host, to_string(port)));
    tcp::socket connection(communication_service);
    connect(connection, server);
    streambuf request;
    ostream request_stream(&request);
    HTTPGetRequest http_get(host, path);
    request_stream << http_get;
    write(connection, request);
    streambuf response;
    read_until(connection, response, "\r\n\r\n"); // read the whole header block
    istream response_stream(&response);
    HTTPResponse http_response(response_stream);
    cout << http_response;
  } catch (exception &e) {
    cerr << "An exception has occurred: " << e.what() << endl;
  }
  return 0;
}

void ParseCommandLine(int argc, const char *argv[], string &host, int &port,
    string &path) {
  using namespace boost::program_options;
  using boost::program_options::error;
  options_description command_line_description("Arguments");
  command_line_description.add_options()
      ("port", value<int>(&port)->default_value(80), "Port number")
      ("host", value<string>(&host)->required(), "Host name: www.example.com")
      ("path", value<string>(&path)->default_value("/"), "Server path");
  try {
    variables_map arguments;
    store(parse_command_line(argc, argv, command_line_description), arguments);
    notify(arguments);
  } catch (error &wrong_arguments) {
    cerr << wrong_arguments.what() << endl;
    cerr << command_line_description << endl;
    exit(EXIT_FAILURE);
  }
}
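The request framing the client emits (`GET <path> HTTP/1.0`, `Host`, `Accept`, `Connection: close`, then a blank line) is independent of Boost.Asio. As a sanity check, a minimal Python sketch sending the same bytes over a raw socket, with placeholder host/path values:

```python
import socket

host, path, port = "example.com", "/", 80  # illustrative values

request = (f"GET {path} HTTP/1.0\r\n"
           f"Host: {host}\r\n"
           f"Accept: */*\r\n"
           f"Connection: close\r\n\r\n")

with socket.create_connection((host, port)) as conn:
    conn.sendall(request.encode("ascii"))
    # First line of the reply is the status line, e.g. "HTTP/1.0 200 OK"
    print(conn.makefile("rb").readline().decode().strip())
```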
/- Copyright (c) 2021 Jakob von Raumer. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Jakob von Raumer ! This file was ported from Lean 3 source module linear_algebra.coevaluation ! leanprover-community/mathlib commit d6814c584384ddf2825ff038e868451a7c956f31 ! Please do not edit these lines, except to modify the commit id ! if you have ported upstream changes. -/ import Mathbin.LinearAlgebra.Contraction import Mathbin.LinearAlgebra.FiniteDimensional import Mathbin.LinearAlgebra.Dual /-! # The coevaluation map on finite dimensional vector spaces Given a finite dimensional vector space `V` over a field `K` this describes the canonical linear map from `K` to `V ⊗ dual K V` which corresponds to the identity function on `V`. ## Tags coevaluation, dual module, tensor product ## Future work * Prove that this is independent of the choice of basis on `V`. -/ noncomputable section section coevaluation open TensorProduct FiniteDimensional open TensorProduct BigOperators universe u v variable (K : Type u) [Field K] variable (V : Type v) [AddCommGroup V] [Module K V] [FiniteDimensional K V] /-- The coevaluation map is a linear map from a field `K` to a finite dimensional vector space `V`. -/ def coevaluation : K →ₗ[K] V ⊗[K] Module.Dual K V := let bV := Basis.ofVectorSpace K V (Basis.singleton Unit K).constr K fun _ => ∑ i : Basis.ofVectorSpaceIndex K V, bV i ⊗ₜ[K] bV.Coord i #align coevaluation coevaluation theorem coevaluation_apply_one : (coevaluation K V) (1 : K) = let bV := Basis.ofVectorSpace K V ∑ i : Basis.ofVectorSpaceIndex K V, bV i ⊗ₜ[K] bV.Coord i := by simp only [coevaluation, id] rw [(Basis.singleton Unit K).constr_apply_fintype K] simp only [Fintype.univ_punit, Finset.sum_const, one_smul, Basis.singleton_repr, Basis.equivFun_apply, Basis.coe_ofVectorSpace, one_nsmul, Finset.card_singleton] #align coevaluation_apply_one coevaluation_apply_one open TensorProduct /-- This lemma corresponds to one of the coherence laws for duals in rigid categories, see `category_theory.monoidal.rigid`. -/ theorem contractLeft_assoc_coevaluation : (contractLeft K V).rtensor _ ∘ₗ (TensorProduct.assoc K _ _ _).symm.toLinearMap ∘ₗ (coevaluation K V).ltensor (Module.Dual K V) = (TensorProduct.lid K _).symm.toLinearMap ∘ₗ (TensorProduct.rid K _).toLinearMap := by letI := Classical.decEq (Basis.ofVectorSpaceIndex K V) apply TensorProduct.ext apply (Basis.ofVectorSpace K V).dualBasis.ext; intro j; apply LinearMap.ext_ring rw [LinearMap.compr₂_apply, LinearMap.compr₂_apply, TensorProduct.mk_apply] simp only [LinearMap.coe_comp, Function.comp_apply, LinearEquiv.coe_toLinearMap] rw [rid_tmul, one_smul, lid_symm_apply] simp only [LinearEquiv.coe_toLinearMap, LinearMap.ltensor_tmul, coevaluation_apply_one] rw [TensorProduct.tmul_sum, LinearEquiv.map_sum]; simp only [assoc_symm_tmul] rw [LinearMap.map_sum]; simp only [LinearMap.rtensor_tmul, contractLeft_apply] simp only [Basis.coe_dualBasis, Basis.coord_apply, Basis.repr_self_apply, TensorProduct.ite_tmul] rw [Finset.sum_ite_eq']; simp only [Finset.mem_univ, if_true] #align contract_left_assoc_coevaluation contractLeft_assoc_coevaluation /-- This lemma corresponds to one of the coherence laws for duals in rigid categories, see `category_theory.monoidal.rigid`. 
-/ theorem contractLeft_assoc_coevaluation' : (contractLeft K V).ltensor _ ∘ₗ (TensorProduct.assoc K _ _ _).toLinearMap ∘ₗ (coevaluation K V).rtensor V = (TensorProduct.rid K _).symm.toLinearMap ∘ₗ (TensorProduct.lid K _).toLinearMap := by letI := Classical.decEq (Basis.ofVectorSpaceIndex K V) apply TensorProduct.ext apply LinearMap.ext_ring; apply (Basis.ofVectorSpace K V).ext; intro j rw [LinearMap.compr₂_apply, LinearMap.compr₂_apply, TensorProduct.mk_apply] simp only [LinearMap.coe_comp, Function.comp_apply, LinearEquiv.coe_toLinearMap] rw [lid_tmul, one_smul, rid_symm_apply] simp only [LinearEquiv.coe_toLinearMap, LinearMap.rtensor_tmul, coevaluation_apply_one] rw [TensorProduct.sum_tmul, LinearEquiv.map_sum]; simp only [assoc_tmul] rw [LinearMap.map_sum]; simp only [LinearMap.ltensor_tmul, contractLeft_apply] simp only [Basis.coord_apply, Basis.repr_self_apply, TensorProduct.tmul_ite] rw [Finset.sum_ite_eq]; simp only [Finset.mem_univ, if_true] #align contract_left_assoc_coevaluation' contractLeft_assoc_coevaluation' end coevaluation
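In conventional notation, the `coevaluation` map defined in this file is

$$\operatorname{coev}_V \colon K \to V \otimes V^{*}, \qquad 1 \longmapsto \sum_i e_i \otimes e^{i},$$

where $(e_i)$ is the chosen basis of $V$ and $(e^{i})$ the dual basis. The lemmas `contractLeft_assoc_coevaluation` and `contractLeft_assoc_coevaluation'` are the corresponding coherence (zig-zag) laws for duals: composing coevaluation with the evaluation/contraction map on either side yields, up to the unit isomorphisms, the identity of $V^{*}$ and of $V$ respectively.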
import numpy
import tables

fileName = 'carray1.h5'
shape = (200, 300)
atom = tables.UInt8Atom()
filters = tables.Filters(complevel=5, complib='zlib')

h5f = tables.open_file(fileName, 'w')
ca = h5f.create_carray(h5f.root, 'carray', atom, shape, filters=filters)
# Fill a hyperslab in ``ca``.
ca[10:60, 20:70] = numpy.ones((50, 50))
h5f.close()

# Re-open and read another hyperslab
h5f = tables.open_file(fileName)
print(h5f)
print(h5f.root.carray[8:12, 18:22])
h5f.close()
State Before: a b : ℝ hab : a < b l : Filter ℝ f f' g g' : ℝ → ℝ hff' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt f (f' x) x hgg' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt g (g' x) x hcf : ContinuousOn f (Ioc a b) hcg : ContinuousOn g (Ioc a b) hg' : ∀ (x : ℝ), x ∈ Ioo a b → g' x ≠ 0 hfb : f b = 0 hgb : g b = 0 hdiv : Tendsto (fun x => f' x / g' x) (𝓝[Iio b] b) l ⊢ Tendsto (fun x => f x / g x) (𝓝[Iio b] b) l State After: case refine'_1 a b : ℝ hab : a < b l : Filter ℝ f f' g g' : ℝ → ℝ hff' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt f (f' x) x hgg' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt g (g' x) x hcf : ContinuousOn f (Ioc a b) hcg : ContinuousOn g (Ioc a b) hg' : ∀ (x : ℝ), x ∈ Ioo a b → g' x ≠ 0 hfb : f b = 0 hgb : g b = 0 hdiv : Tendsto (fun x => f' x / g' x) (𝓝[Iio b] b) l ⊢ Tendsto (fun x => f x) (𝓝[Iio b] b) (𝓝 0) case refine'_2 a b : ℝ hab : a < b l : Filter ℝ f f' g g' : ℝ → ℝ hff' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt f (f' x) x hgg' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt g (g' x) x hcf : ContinuousOn f (Ioc a b) hcg : ContinuousOn g (Ioc a b) hg' : ∀ (x : ℝ), x ∈ Ioo a b → g' x ≠ 0 hfb : f b = 0 hgb : g b = 0 hdiv : Tendsto (fun x => f' x / g' x) (𝓝[Iio b] b) l ⊢ Tendsto (fun x => g x) (𝓝[Iio b] b) (𝓝 0) Tactic: refine' lhopital_zero_left_on_Ioo hab hff' hgg' hg' _ _ hdiv State Before: case refine'_1 a b : ℝ hab : a < b l : Filter ℝ f f' g g' : ℝ → ℝ hff' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt f (f' x) x hgg' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt g (g' x) x hcf : ContinuousOn f (Ioc a b) hcg : ContinuousOn g (Ioc a b) hg' : ∀ (x : ℝ), x ∈ Ioo a b → g' x ≠ 0 hfb : f b = 0 hgb : g b = 0 hdiv : Tendsto (fun x => f' x / g' x) (𝓝[Iio b] b) l ⊢ Tendsto (fun x => f x) (𝓝[Iio b] b) (𝓝 0) State After: case refine'_1 a b : ℝ hab : a < b l : Filter ℝ f f' g g' : ℝ → ℝ hff' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt f (f' x) x hgg' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt g (g' x) x hcf : ContinuousOn f (Ioc a b) hcg : ContinuousOn g (Ioc a b) hg' : ∀ (x : ℝ), x ∈ Ioo a b → g' x ≠ 0 hfb : f b = 0 hgb : g b = 0 hdiv : Tendsto (fun x => f' x / g' x) (𝓝[Iio b] b) l ⊢ Tendsto (fun x => f x) (𝓝[Ioo a b] b) (𝓝 (f b)) Tactic: rw [← hfb, ← nhdsWithin_Ioo_eq_nhdsWithin_Iio hab] State Before: case refine'_1 a b : ℝ hab : a < b l : Filter ℝ f f' g g' : ℝ → ℝ hff' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt f (f' x) x hgg' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt g (g' x) x hcf : ContinuousOn f (Ioc a b) hcg : ContinuousOn g (Ioc a b) hg' : ∀ (x : ℝ), x ∈ Ioo a b → g' x ≠ 0 hfb : f b = 0 hgb : g b = 0 hdiv : Tendsto (fun x => f' x / g' x) (𝓝[Iio b] b) l ⊢ Tendsto (fun x => f x) (𝓝[Ioo a b] b) (𝓝 (f b)) State After: no goals Tactic: exact ((hcf b <| right_mem_Ioc.mpr hab).mono Ioo_subset_Ioc_self).tendsto State Before: case refine'_2 a b : ℝ hab : a < b l : Filter ℝ f f' g g' : ℝ → ℝ hff' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt f (f' x) x hgg' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt g (g' x) x hcf : ContinuousOn f (Ioc a b) hcg : ContinuousOn g (Ioc a b) hg' : ∀ (x : ℝ), x ∈ Ioo a b → g' x ≠ 0 hfb : f b = 0 hgb : g b = 0 hdiv : Tendsto (fun x => f' x / g' x) (𝓝[Iio b] b) l ⊢ Tendsto (fun x => g x) (𝓝[Iio b] b) (𝓝 0) State After: case refine'_2 a b : ℝ hab : a < b l : Filter ℝ f f' g g' : ℝ → ℝ hff' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt f (f' x) x hgg' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt g (g' x) x hcf : ContinuousOn f (Ioc a b) hcg : ContinuousOn g (Ioc a b) hg' : ∀ (x : ℝ), x ∈ Ioo a b → g' x ≠ 0 hfb : f b = 0 hgb : g b = 0 hdiv : Tendsto (fun x => f' x / g' x) (𝓝[Iio b] b) l ⊢ Tendsto (fun x => g x) (𝓝[Ioo a b] b) (𝓝 (g b)) Tactic: rw [← 
hgb, ← nhdsWithin_Ioo_eq_nhdsWithin_Iio hab] State Before: case refine'_2 a b : ℝ hab : a < b l : Filter ℝ f f' g g' : ℝ → ℝ hff' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt f (f' x) x hgg' : ∀ (x : ℝ), x ∈ Ioo a b → HasDerivAt g (g' x) x hcf : ContinuousOn f (Ioc a b) hcg : ContinuousOn g (Ioc a b) hg' : ∀ (x : ℝ), x ∈ Ioo a b → g' x ≠ 0 hfb : f b = 0 hgb : g b = 0 hdiv : Tendsto (fun x => f' x / g' x) (𝓝[Iio b] b) l ⊢ Tendsto (fun x => g x) (𝓝[Ioo a b] b) (𝓝 (g b)) State After: no goals Tactic: exact ((hcg b <| right_mem_Ioc.mpr hab).mono Ioo_subset_Ioc_self).tendsto
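Stripped of the tactic-state bookkeeping, the lemma being proved is a one-sided l'Hôpital rule at the right endpoint $b$: if $f, g$ are differentiable on $(a,b)$, continuous on $(a,b]$, $g' \neq 0$ on $(a,b)$, and $f(b) = g(b) = 0$, then

$$\frac{f'(x)}{g'(x)} \xrightarrow[x \to b^-]{} l \quad\Longrightarrow\quad \frac{f(x)}{g(x)} \xrightarrow[x \to b^-]{} l,$$

where $l$ is an arbitrary filter in the formal statement (covering finite limits, $\pm\infty$, etc.). The two `refine'` goals discharge the remaining hypotheses of `lhopital_zero_left_on_Ioo` — that $f \to 0$ and $g \to 0$ as $x \to b^-$ — which follow from continuity on $(a,b]$ together with $f(b) = g(b) = 0$.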
# Looker API 4.0 (Beta) Reference # # Welcome to the future! API 4.0 co-exists with APIs 3.1 and 3.0. (3.0 should no longer be used.) The \"beta\" tag means updates for API 4.0 may include breaking changes, but as always we will work to minimize them. ### Authorization The classic method of API authorization uses Looker **API3** credentials for authorization and access control. Looker admins can create API3 credentials on Looker's **Admin/Users** page. API 4.0 adds additional ways to authenticate API requests, including OAuth and CORS requests. For details, see [Looker API Authorization](https://looker.com/docs/r/api/authorization). ### API Explorer The API Explorer is a Looker-provided utility with many new and unique features for learning and using the Looker API and SDKs. It is a replacement for the 'api-docs' page currently provided on Looker instances. For details, see the [API Explorer documentation](https://looker.com/docs/r/api/explorer). ### Looker Language SDKs The Looker API is a RESTful system that should be usable by any programming language capable of making HTTPS requests. SDKs for a variety of programming languages are also provided to streamline using the API. Looker has an OpenSource [sdk-codegen project](https://github.com/looker-open-source/sdk-codegen) that provides several language SDKs. Language SDKs generated by `sdk-codegen` have an Authentication manager that can automatically authenticate API requests when needed. For details on available Looker SDKs, see [Looker API Client SDKs](https://looker.com/docs/r/api/client_sdks). ### API Versioning Future releases of Looker expand the latest API version release-by-release to securely expose more and more of the core power of the Looker platform to API client applications. API endpoints marked as \"beta\" may receive breaking changes without warning (but we will try to avoid doing that). Stable (non-beta) API endpoints should not receive breaking changes in future releases. For details, see [Looker API Versioning](https://looker.com/docs/r/api/versioning). ### In This Release API 4.0 version was introduced so we can make adjustments to API functions, parameters, and response types to fix bugs and inconsistencies. These changes fall outside the bounds of non-breaking additive changes we can make to our stable API 3.1. One benefit of these type adjustments in API 4.0 is dramatically better support for strongly typed languages like TypeScript, Kotlin, Swift, Go, C#, and more. While API 3.1 is still the de-facto Looker API (\"current\", \"stable\", \"default\", etc), the bulk of our development activity has shifted to API 4.0, where all new features are added. The API Explorer can be used to [interactively compare](https://looker.com/docs/r/api/explorer#comparing_api_versions) the differences between API 3.1 and 4.0. ### API and SDK Support Policies Looker API versions and language SDKs have varying support levels. Please read the API and SDK [support policies](https://looker.com/docs/r/api/support-policy) for more information. # # OpenAPI spec version: 4.0.21.18 # # Generated by: https://github.com/swagger-api/swagger-codegen.git #' @title Config operations #' @description looker.Config #' #' @field path Stores url path of the request. #' @field apiClient Handles the client-server communication. #' @field userAgent Set the user agent of the request. 
#' #' @importFrom R6 R6Class #' #' @section Methods: #' \describe{ #' #' all_legacy_features Get All Legacy Features #' #' #' all_locales Get All Locales #' #' #' all_timezones Get All Timezones #' #' #' api_spec Get an API specification #' #' #' cloud_storage_configuration Get Cloud Storage #' #' #' create_digest_email_send Deliver digest email contents #' #' #' custom_welcome_email Get Custom Welcome Email #' #' #' digest_emails_enabled Get Digest_emails #' #' #' get_setting Get Setting #' #' #' internal_help_resources Get Internal Help Resources #' #' #' internal_help_resources_content Get Internal Help Resources Content #' #' #' legacy_feature Get Legacy Feature #' #' #' mobile_settings Get Mobile_Settings #' #' #' set_setting Set Setting #' #' #' update_cloud_storage_configuration Update Cloud Storage #' #' #' update_custom_welcome_email Update Custom Welcome Email Content #' #' #' update_custom_welcome_email_test Send a test welcome email to the currently logged in user with the supplied content #' #' #' update_digest_emails_enabled Update Digest_emails #' #' #' update_internal_help_resources Update internal help resources configuration #' #' #' update_internal_help_resources_content Update internal help resources content #' #' #' update_legacy_feature Update Legacy Feature #' #' #' update_whitelabel_configuration Update Whitelabel configuration #' #' #' versions Get ApiVersion #' #' #' whitelabel_configuration Get Whitelabel configuration #' #' } #' #' @export ConfigApi <- R6::R6Class( 'ConfigApi', public = list( userAgent = "R-SDK", apiClient = NULL, initialize = function(apiClient){ if (!missing(apiClient)) { self$apiClient <- apiClient } else { self$apiClient <- ApiClient$new() } }, all_legacy_features = function(...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/legacy_features" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- LegacyFeature$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- LegacyFeature$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, all_locales = function(...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/locales" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) 
if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- Locale$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- Locale$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, all_timezones = function(...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/timezones" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- Timezone$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- Timezone$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, api_spec = function(api_version, specification, ...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/api_spec/{api_version}/{specification}" if (!missing(`api_version`)) { urlPath <- gsub(paste0("\\{", "api_version", "\\}"), `api_version`, urlPath) } if (!missing(`specification`)) { urlPath <- gsub(paste0("\\{", "specification", "\\}"), `specification`, urlPath) } resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { # void response, no need to return anything } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, cloud_storage_configuration = function(...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/cloud_storage" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) 
if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- BackupConfiguration$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- BackupConfiguration$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, create_digest_email_send = function(...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/digest_email_send" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "POST", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- DigestEmailSend$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- DigestEmailSend$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, custom_welcome_email = function(...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/custom_welcome_email" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- CustomWelcomeEmail$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- CustomWelcomeEmail$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, digest_emails_enabled = function(...){ args <- list(...) 
queryParams <- list() headerParams <- character() urlPath <- "/digest_emails_enabled" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- DigestEmails$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- DigestEmails$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, get_setting = function(fields, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`fields`)) { queryParams['fields'] <- fields } urlPath <- "/setting" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- Setting$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- Setting$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, internal_help_resources = function(...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/internal_help_resources_enabled" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) 
if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- InternalHelpResources$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- InternalHelpResources$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, internal_help_resources_content = function(...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/internal_help_resources_content" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- InternalHelpResourcesContent$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- InternalHelpResourcesContent$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, legacy_feature = function(legacy_feature_id, ...){ args <- list(...) queryParams <- list() headerParams <- character() urlPath <- "/legacy_features/{legacy_feature_id}" if (!missing(`legacy_feature_id`)) { urlPath <- gsub(paste0("\\{", "legacy_feature_id", "\\}"), `legacy_feature_id`, urlPath) } resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- LegacyFeature$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- LegacyFeature$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, mobile_settings = function(...){ args <- list(...) 
queryParams <- list() headerParams <- character() urlPath <- "/mobile/settings" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- MobileSettings$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- MobileSettings$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, set_setting = function(body, fields, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`fields`)) { queryParams['fields'] <- fields } if (!missing(`body`)) { body <- `body`$toJSONString() } else { body <- NULL } urlPath <- "/setting" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "PATCH", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- Setting$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- Setting$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, update_cloud_storage_configuration = function(body, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`body`)) { body <- `body`$toJSONString() } else { body <- NULL } urlPath <- "/cloud_storage" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "PATCH", queryParams = queryParams, headerParams = headerParams, body = body, ...) 
if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- BackupConfiguration$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- BackupConfiguration$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, update_custom_welcome_email = function(body, send_test_welcome_email, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`send_test_welcome_email`)) { queryParams['send_test_welcome_email'] <- send_test_welcome_email } if (!missing(`body`)) { body <- `body`$toJSONString() } else { body <- NULL } urlPath <- "/custom_welcome_email" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "PATCH", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- CustomWelcomeEmail$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- CustomWelcomeEmail$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, update_custom_welcome_email_test = function(body, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`body`)) { body <- `body`$toJSONString() } else { body <- NULL } urlPath <- "/custom_welcome_email_test" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "PUT", queryParams = queryParams, headerParams = headerParams, body = body, ...) 
if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- WelcomeEmailTest$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- WelcomeEmailTest$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, update_digest_emails_enabled = function(body, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`body`)) { body <- `body`$toJSONString() } else { body <- NULL } urlPath <- "/digest_emails_enabled" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "PATCH", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- DigestEmails$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- DigestEmails$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, update_internal_help_resources = function(body, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`body`)) { body <- `body`$toJSONString() } else { body <- NULL } urlPath <- "/internal_help_resources" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "PATCH", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- InternalHelpResources$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- InternalHelpResources$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, update_internal_help_resources_content = function(body, ...){ args <- list(...) 
queryParams <- list() headerParams <- character() if (!missing(`body`)) { body <- `body`$toJSONString() } else { body <- NULL } urlPath <- "/internal_help_resources_content" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "PATCH", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- InternalHelpResourcesContent$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- InternalHelpResourcesContent$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, update_legacy_feature = function(legacy_feature_id, body, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`body`)) { body <- `body`$toJSONString() } else { body <- NULL } urlPath <- "/legacy_features/{legacy_feature_id}" if (!missing(`legacy_feature_id`)) { urlPath <- gsub(paste0("\\{", "legacy_feature_id", "\\}"), `legacy_feature_id`, urlPath) } resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "PATCH", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- LegacyFeature$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- LegacyFeature$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, update_whitelabel_configuration = function(body, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`body`)) { body <- `body`$toJSONString() } else { body <- NULL } urlPath <- "/whitelabel_configuration" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "PUT", queryParams = queryParams, headerParams = headerParams, body = body, ...) 
if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- WhitelabelConfiguration$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- WhitelabelConfiguration$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, versions = function(fields, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`fields`)) { queryParams['fields'] <- fields } urlPath <- "/versions" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- ApiVersion$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- ApiVersion$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } }, whitelabel_configuration = function(fields, ...){ args <- list(...) queryParams <- list() headerParams <- character() if (!missing(`fields`)) { queryParams['fields'] <- fields } urlPath <- "/whitelabel_configuration" resp <- self$apiClient$callApi(url = paste0(self$apiClient$basePath, urlPath), method = "GET", queryParams = queryParams, headerParams = headerParams, body = body, ...) if (httr::status_code(resp) >= 200 && httr::status_code(resp) <= 299) { data <- jsonlite::fromJSON(httr::content(resp, "text", encoding = "UTF-8"),simplifyVector = FALSE) if (is.null(names(data))) { returnObjects <- lapply(data, function(x) { returnObject <- WhitelabelConfiguration$new() #returnObject$fromJSON(jsonlite::toJSON(x, auto_unbox = FALSE)) returnObject$fromJSONObject(x) returnObject }) Response$new(returnObjects, resp) } else { returnObject <- WhitelabelConfiguration$new() #result <- returnObject$fromJSON(httr::content(resp, "text", encoding = "UTF-8")) result <- returnObject$fromJSONObject(data) Response$new(returnObject, resp) } } else if (httr::status_code(resp) >= 400 && httr::status_code(resp) <= 499) { Response$new("API client error", resp) } else if (httr::status_code(resp) >= 500 && httr::status_code(resp) <= 599) { Response$new("API server error", resp) } } ) )
(* * Copyright 2016, NICTA * * This software may be distributed and modified according to the terms of * the GNU General Public License version 2. Note that NO WARRANTY is provided. * See "LICENSE_GPLv2.txt" for details. * * @TAG(NICTA_GPL) *) theory AutoCorresModifiesProofs imports "../../lib/ml-helpers/TermPatternAntiquote" "../../tools/autocorres/AutoCorres" "../../lib/SIMPL_Lemmas" "../../lib/Monad_WP/NonDetMonadVCG" begin text \<open> Generate modifies specs, i.e. specifications of which globals fields may potentially be modified by each function. It turns out that ac_corres is not strong enough to automagically transfer C-parser's modifies theorems across to the AutoCorres functions. This is because the modifies specs are unconditional, whereas our ac_corres theorems have preconditions on the initial states. In other words, the modifies spec is a syntactic property of a function rather than a semantic one. Fortunately, this also makes it straightforward to prove them from scratch over our newly-generated functions. \<close> section \<open>Rules for modifies proof method\<close> text \<open> Transferring modifies rules for un-translated functions. These functions are defined to be equivalent to their SIMPL specs (via L1_call_simpl), so the limitations of ac_corres do not apply. \<close> lemma autocorres_modifies_transfer: notes select_wp[wp] hoare_seq_ext[wp] fixes \<Gamma> globals f' f_'proc modifies_eqn P xf assumes f'_def: "f' \<equiv> AC_call_L1 P globals xf (L1_call_simpl check_termination \<Gamma> f_'proc)" assumes f_modifies: "\<forall>\<sigma>. \<Gamma>\<turnstile>\<^bsub>/UNIV\<^esub> {\<sigma>} Call f_'proc {t. modifies_eqn (globals t) (globals \<sigma>)}" shows "\<lbrace> \<lambda>s. s = \<sigma> \<rbrace> f' \<lbrace> \<lambda>_ s. modifies_eqn s \<sigma> \<rbrace>" apply (clarsimp simp: f'_def AC_call_L1_def L2_call_L1_def L1_call_simpl_def) apply (simp add: liftM_def bind_assoc) apply wp apply (clarsimp split: sum.splits) apply wp apply (clarsimp simp: in_monad select_def split: xstate.splits) apply (case_tac xa; clarsimp) apply (drule exec_normal[OF singletonI _ f_modifies[rule_format]]) apply (clarsimp simp: mex_def meq_def) apply (drule exec_abrupt[OF singletonI _ f_modifies[rule_format]]) apply (clarsimp simp: mex_def meq_def) apply blast done text \<open> A monadic Hoare triple for a modifies spec looks like "\<lbrace>\<lambda>s. s = \<sigma>\<rbrace> prog \<lbrace>\<lambda>s. \<exists>x1 x2... s = \<sigma>\<lparr>field1 := x1, ...\<rparr>\<rbrace>" where (fieldk, xk) are the possibly-modified fields. To prove it, we rewrite the precondition to an invariant: \<lbrace>\<lambda>s. \<exists>x1 x2... s = \<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> prog \<lbrace>\<lambda>_ s. \<exists>x1 x2... s = \<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> Then we carry the invariant through each statement of the program. \<close> text \<open> Adapter for apply rules to an invariant goal. We allow "I" and "I'" to be different (in our modifies proof, "I" will have \<exists>-quantified vars lifted out). \<close> lemma valid_inv_weaken: "\<lbrakk> valid P f (\<lambda>_. R); \<And>s. I s \<Longrightarrow> P s; \<And>s. R s \<Longrightarrow> I' s \<rbrakk> \<Longrightarrow> valid I f (\<lambda>_. I')" by (fastforce simp: valid_def) text \<open> Our function modifies rules have a schematic precond, so this rule avoids weakening the invariant when applying those rules and ending up with an underspecified P. \<close> lemma valid_inv_weaken': "\<lbrakk> valid I f (\<lambda>_. 
Q); \<And>s. Q s \<Longrightarrow> I' s \<rbrakk> \<Longrightarrow> valid I f (\<lambda>_. I')" by (rule valid_inv_weaken) text \<open> Used by modifies_initial_tac to instantiate a schematic precondition to an invariant. \<close> lemma valid_invI: "valid I f (\<lambda>_. I) \<Longrightarrow> valid I f (\<lambda>_. I)" by - text \<open>For rewriting foralls in premises.\<close> lemma All_to_all: "Trueprop (\<forall>x. P x) \<equiv> (\<And>x. P x)" by presburger subsection \<open>Hoare rules for state invariants\<close> named_theorems valid_inv lemmas [valid_inv] = fail_inv gets_inv return_inv hoare_K_bind lemma when_inv[valid_inv]: "\<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace> \<Longrightarrow> \<lbrace>I\<rbrace> when c f \<lbrace>\<lambda>_. I\<rbrace>" apply wp apply auto done lemma bind_inv[valid_inv]: "\<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace> \<Longrightarrow> (\<And>x. \<lbrace>I\<rbrace> g x \<lbrace>\<lambda>_. I\<rbrace>) \<Longrightarrow> \<lbrace>I\<rbrace> f >>= g \<lbrace>\<lambda>_. I\<rbrace>" apply wp apply auto done lemma guard_inv[valid_inv]: "\<lbrace>I\<rbrace> guard G \<lbrace>\<lambda>_. I\<rbrace>" by (fastforce simp: valid_def) lemma modify_inv[valid_inv]: "(\<And>s. I s \<Longrightarrow> I (f s)) \<Longrightarrow> \<lbrace>I\<rbrace> modify f \<lbrace>\<lambda>_. I\<rbrace>" apply wp by simp lemma skip_inv[valid_inv]: "\<lbrace>I\<rbrace> skip \<lbrace>\<lambda>_. I\<rbrace>" by (rule skip_wp) lemma select_inv[valid_inv]: "\<lbrace>I\<rbrace> select f \<lbrace>\<lambda>_. I\<rbrace>" by (rule hoare_weaken_pre, rule select_wp, blast) lemma condition_inv[valid_inv]: "\<lbrace>I\<rbrace> t \<lbrace>\<lambda>_. I\<rbrace> \<Longrightarrow> \<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace> \<Longrightarrow> \<lbrace>I\<rbrace> condition c t f\<lbrace>\<lambda>_. I\<rbrace>" apply (rule hoare_weaken_pre) apply (rule condition_wp) apply auto done lemma whileLoop_inv[valid_inv]: "\<lbrakk>\<And>r. \<lbrace>I\<rbrace> b r \<lbrace>\<lambda>_. I\<rbrace> \<rbrakk> \<Longrightarrow> \<lbrace>I\<rbrace> whileLoop c b r \<lbrace>\<lambda>_. I\<rbrace>" apply (rule whileLoop_wp) apply (blast intro: hoare_weaken_pre) apply assumption done lemma valid_case_prod_inv[valid_inv]: "(\<And>x y. \<lbrace>I\<rbrace> f x y \<lbrace>\<lambda>_. I\<rbrace>) \<Longrightarrow> \<lbrace>I\<rbrace> case v of (x, y) \<Rightarrow> f x y \<lbrace>\<lambda>_. I\<rbrace>" apply wp apply auto done lemma unknown_inv[valid_inv]: "\<lbrace>I\<rbrace> unknown \<lbrace>\<lambda>_. I\<rbrace>" apply (unfold unknown_def) apply (rule select_inv) done lemma throwError_inv[valid_inv]: "\<lbrace>I\<rbrace> throwError e \<lbrace>\<lambda>_. I\<rbrace>" by wp lemma catch_inv[valid_inv]: "\<lbrakk> \<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace>; \<And>e. \<lbrace>I\<rbrace> h e \<lbrace>\<lambda>_. I\<rbrace> \<rbrakk> \<Longrightarrow> \<lbrace>I\<rbrace> catch f h \<lbrace>\<lambda>_. I\<rbrace>" apply wp apply assumption apply (simp add: validE_def) done lemma whileLoopE_inv[valid_inv]: "(\<And>r. \<lbrace>I\<rbrace> b r \<lbrace>\<lambda>_. I\<rbrace>) \<Longrightarrow> \<lbrace>I\<rbrace> whileLoopE c b r \<lbrace>\<lambda>_. I\<rbrace>" apply (unfold whileLoopE_def) apply (rule whileLoop_inv) apply (auto simp: lift_def split: sum.splits intro: throwError_inv) done lemma bindE_inv[valid_inv]: "\<lbrakk> \<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace>; \<And>x. \<lbrace>I\<rbrace> g x \<lbrace>\<lambda>_. 
I\<rbrace> \<rbrakk> \<Longrightarrow> \<lbrace>I\<rbrace> f >>=E g \<lbrace>\<lambda>_. I\<rbrace>" apply (unfold bindE_def) apply (rule bind_inv) apply (auto simp: lift_def split: sum.splits intro: throwError_inv) done lemma returnOk_inv[valid_inv]: "\<lbrace>I\<rbrace> returnOk x \<lbrace>\<lambda>_. I\<rbrace>" apply (simp add: returnOk_def) done lemma liftE_inv[valid_inv]: "\<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace> \<Longrightarrow> \<lbrace>I\<rbrace> liftE f \<lbrace>\<lambda>_. I\<rbrace>" by wp lemma getsE_inv[valid_inv]: "\<lbrace>I\<rbrace> getsE f \<lbrace>\<lambda>r. I\<rbrace>" apply (unfold getsE_def) apply (blast intro: liftE_inv gets_inv) done lemma skipE_inv[valid_inv]: "\<lbrace>I\<rbrace> skipE \<lbrace>\<lambda>r. I\<rbrace>" apply (unfold skipE_def) apply (blast intro: liftE_inv returnOk_inv) done lemma modifyE_inv[valid_inv]: "(\<And>s. I s \<Longrightarrow> I (f s)) \<Longrightarrow> \<lbrace>I\<rbrace> modifyE f \<lbrace>\<lambda>_. I\<rbrace>" apply (unfold modifyE_def) apply (blast intro: liftE_inv modify_inv) done lemma guardE_inv[valid_inv]: "\<lbrace>I\<rbrace> guardE G \<lbrace>\<lambda>_. I\<rbrace>" apply (unfold guardE_def) apply (blast intro: liftE_inv guard_inv) done lemma whenE_inv[valid_inv]: "\<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace> \<Longrightarrow> \<lbrace>I\<rbrace> whenE c f \<lbrace>\<lambda>_. I\<rbrace>" apply (unfold whenE_def) apply (blast intro: returnOk_inv) done lemma unless_inv[valid_inv]: "\<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace> \<Longrightarrow> \<lbrace>I\<rbrace> unless c f \<lbrace>\<lambda>_. I\<rbrace>" apply (unfold unless_def) by (rule when_inv) lemma unlessE_inv[valid_inv]: "\<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace> \<Longrightarrow> \<lbrace>I\<rbrace> unlessE c f \<lbrace>\<lambda>_. I\<rbrace>" apply (unfold unlessE_def) by (blast intro: returnOk_inv) lemma handleE'_inv[valid_inv]: "\<lbrakk> \<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace>; \<And>e. \<lbrace>I\<rbrace> h e \<lbrace>\<lambda>_. I\<rbrace> \<rbrakk> \<Longrightarrow> \<lbrace>I\<rbrace> handleE' f h \<lbrace>\<lambda>_. I\<rbrace>" by (fastforce simp: handleE'_def intro: return_inv bind_inv split: sum.splits) lemma handleE_inv[valid_inv]: "\<lbrakk> \<lbrace>I\<rbrace> f \<lbrace>\<lambda>_. I\<rbrace>; \<And>e. \<lbrace>I\<rbrace> h e \<lbrace>\<lambda>_. I\<rbrace> \<rbrakk> \<Longrightarrow> \<lbrace>I\<rbrace> handleE f h \<lbrace>\<lambda>_. I\<rbrace>" apply (unfold handleE_def) by (rule handleE'_inv) text \<open>@{term measure_call} appears in AutoCorres-generated calls to recursive functions.\<close> lemma measure_call_inv[valid_inv]: "\<lbrakk>\<And>m. \<lbrace>I\<rbrace> f m \<lbrace>\<lambda>_. I\<rbrace>\<rbrakk> \<Longrightarrow> \<lbrace>I\<rbrace> measure_call f \<lbrace>\<lambda>_. I\<rbrace>" by (fastforce simp: measure_call_def valid_def) text \<open> Recursion base case for AutoCorres-generated specs. NB: we don't make this valid_inv because it conflicts with bind_inv. Instead we apply it manually. \<close> lemma modifies_recursive_0: "\<lbrace>I\<rbrace> do guard (\<lambda>_. (0 :: nat) < 0); f od \<lbrace>\<lambda>_. 
I\<rbrace>" by simp text \<open>These rules are currently sufficient for all the kernel code.\<close> thm valid_inv section \<open>Modifies proof procedure\<close> text \<open> The most important assumptions are currently: - skip_heap_abs is set; we don't deal with lifted_globals - all functions are translated to the nondet_monad - we assume a specific format of modifies rule from the C-parser (see comment for gen_modifies_prop) - function_name_prefix="" and function_name_suffix="'" as per default (FIXME: get these from function_info once it's been fixed) The top-level procedure gets all kernel functions in topological order, then does the modifies proofs for each function (or recursive function group). The "scope" option should be supported (TODO: but not yet tested) and in that case the C-parser modifies rules will be transferred directly. \<close> ML \<open> structure AutoCorresModifiesProofs = struct (* Translate a term, top-down, stopping once a conversion has been applied. * trans is an assoc-list of terms to translate. * Bound vars in trans are interpreted relative to outside t. *) fun translate_term trans t = case assoc (trans, t) of SOME t' => t' | NONE => case t of f $ x => translate_term trans f $ translate_term trans x | Abs (v, vT, b) => Abs (v, vT, translate_term (map (apply2 (incr_boundvars 1)) trans) b) | _ => t; (* Remove "Hoare.meq" and "Hoare.mex" scaffolding from SIMPL modifies specs *) fun modifies_simp ctxt = Conv.fconv_rule (Raw_Simplifier.rewrite ctxt true @{thms meq_def[THEN eq_reflection] mex_def[THEN eq_reflection]}); fun modifies_simp_term ctxt = Raw_Simplifier.rewrite_term (Proof_Context.theory_of ctxt) @{thms meq_def[THEN eq_reflection] mex_def[THEN eq_reflection]} []; (* Translate c-parser's "modifies" specs of the form * \<forall>\<sigma>. \<Gamma>\<turnstile>\<^bsub>/UNIV\<^esub> {\<sigma>} Call f_'proc {t. mex x1... meq (globals t) ((globals s)\<lparr>f1 := x1...\<rparr>)} * to specs on the AutoCorres-generated monad * \<lbrace>\<lambda>s. s = \<sigma>\<rbrace> f' \<lbrace>\<lambda>_ s. \<exists>x1... \<sigma> = s\<lparr>f1 := x1...\<rparr>\<rbrace> * * This involves: * - talking about the "globals" state instead of "globals myvars" * - removing "meq" and "mex" which are unnecessary for our proof method * and have buggy syntax translations * - using the monadic hoare predicate * * Returns tuple of (state var, function arg vars, measure var, prop). * The returned vars are Free in the prop; the measure var is NONE for non-recursive functions. *) fun gen_modifies_prop ctxt (fn_info: FunctionInfo.function_info Symtab.table) (prog_info: ProgramInfo.prog_info) f_name c_prop = let val f_info = the (Symtab.lookup fn_info f_name); val globals_type = #globals_type prog_info; val globals_term = #globals_getter prog_info; val ac_ret_type = #return_type f_info; val state0_var = Free ("\<sigma>", globals_type); val @{term_pat "Trueprop (\<forall>\<sigma>. _\<turnstile>\<^bsub>/UNIV\<^esub> {\<sigma>} Call ?f_'proc {t. 
?c_modifies_eqn})"} = c_prop; (* Bound 0 = s, Bound 1 = \<sigma> in c_prop *) val modifies_eqn = c_modifies_eqn |> translate_term [(globals_term $ Bound 0, Bound 0), (globals_term $ Bound 1, state0_var)]; val modifies_postcond = Abs (Name.uu_, ac_ret_type, Abs ("s", globals_type, modifies_eqn)) |> modifies_simp_term ctxt; val arg_vars = map Free (#args f_info); val measure_var = if FunctionInfo.is_function_recursive f_info (* will not clash with arg_vars as C identifiers do not contain primes *) then SOME (Free ("measure'", @{typ nat})) else NONE; val f_const = #const f_info val f_call = betapplys (f_const, (case measure_var of SOME v => [v] | NONE => []) @ arg_vars); val modifies_prop = @{mk_term "Trueprop (\<lbrace>\<lambda>s. s = ?state\<rbrace> ?call \<lbrace>?postcond\<rbrace>)" (state, call, postcond)} (state0_var, f_call, modifies_postcond); in (state0_var, arg_vars, measure_var, modifies_prop) end; (* Solve invariant goals of the form * \<And>s. (\<exists>x1 x2... s = \<sigma>\<lparr>field1 := x1, field2 := x2, ...\<rparr>) \<Longrightarrow> * (\<exists>x1 x2... f s = \<sigma>\<lparr>field1 := x1, field2 := x2, ...\<rparr>) * where f is some update function (usually id, but for modify statements * it is the modifying function). * We assume that s is all-quantified and \<sigma> is free. *) fun modifies_invariant_tac quiet_fail ctxt n st = if Thm.nprems_of st = 0 then no_tac st else let val globals_typ = Syntax.read_typ ctxt "globals"; val globals_cases = Proof_Context.get_thm ctxt "globals.cases"; val globals_splits = hd (Proof_Context.get_thms ctxt "globals.splits"); (* The fastest way (so far) is manually splitting s and \<sigma>, then simplifying. * The Isar analogue would be * elim exE, case_tac "s", case_tac "\<sigma>", simp *) (* \<sigma> is free, so obtaining the split rule is straightforward *) val sigma_free = Free ("\<sigma>", globals_typ); val case_sigma = Drule.infer_instantiate' ctxt [SOME (Thm.cterm_of ctxt sigma_free)] globals_cases; (* However, s is bound and accessing it requires some awkward contortions *) (* globals.splits is an equation that looks like * (\<And>r. ?P r) \<equiv> (\<And>fields... ?P {fields}) * We walk down the current goal and apply globals.splits to the quantifier for the correct s. * The correct s would be the one that appears in our invariant assumption of the form * s = \<sigma>\<lparr>updates...\<rparr> * (We removed the preceding \<exists>'s using exE beforehand.) 
*) fun split_s_tac st = let val subgoal = Logic.get_goal (Thm.prop_of st) n; val prems = Logic.strip_assums_hyp subgoal; val env = Term.strip_all_vars subgoal |> map snd |> rev; fun find @{term_pat "Trueprop (?s = _)"} = (case s of Bound s_idx => if fastype_of1 (env, s) = globals_typ then [s_idx] else [] | _ => []) | find _ = [] val s_idx = case maps find prems of [] => raise THM ("modifies_invariant_tac: failed to find invariant assumption", n, [st]) | idx::_ => idx; fun split_conv idx (Const ("Pure.all", _) $ Abs (_, _, body)) ct = if idx > s_idx then Conv.forall_conv (fn _ => split_conv (idx - 1) body) ctxt ct else let val (_, cP) = Thm.dest_comb ct val inst = Drule.infer_instantiate' ctxt [SOME cP] globals_splits (*val _ = @{trace} (cP, inst, case Thm.prop_of inst of @{term_pat "?x \<equiv> _"} => x, Thm.term_of ct);*) in inst end | split_conv _ _ ct = Conv.no_conv ct; (* shouldn't happen *) in Conv.gconv_rule (split_conv (length env - 1) subgoal) n st |> Seq.single end handle e as THM _ => if quiet_fail then no_tac st else reraise e; (* avoid contextual rules, like split_pair_Ex, that lead simp down the garden path *) val globals_record_simps = maps (Proof_Context.get_thms ctxt) ["globals.ext_inject", "globals.update_convs"]; in st |> (DETERM (REPEAT (eresolve_tac ctxt @{thms exE} n)) THEN split_s_tac THEN resolve_tac ctxt [case_sigma] n THEN SOLVES (asm_full_simp_tac (put_simpset HOL_ss ctxt addsimps globals_record_simps) n)) end (* Convert initial modifies goal of the form * \<lbrace>\<lambda>s. s = \<sigma>\<rbrace> prog \<lbrace>\<lambda>_ s. \<exists>x1... s = \<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> * to one where the modifies is expressed as an invariant: * \<lbrace>\<lambda>s. \<exists>x1... s = \<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> prog \<lbrace>\<lambda>_ s. \<exists>x1... s = \<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> *) fun modifies_initial_tac ctxt n = resolve_tac ctxt @{thms hoare_weaken_pre} n THEN resolve_tac ctxt @{thms valid_invI} n; (* Incremental nets. (Why isn't this standard??) *) type incr_net = (int * thm) Net.net * int; fun build_incr_net rls = (Tactic.build_net rls, length rls); fun add_to_incr_net th (net, sz) = (Net.insert_term (K false) (Thm.concl_of th, (sz + 1, th)) net, sz + 1); (* guessed from tactic.ML *) fun net_of (n: incr_net) = fst n; (* Apply a callee's modifies rule to its call site. * The current goal should be expressed as an invariant: * \<lbrace>\<lambda>s. \<exists>x1... s = \<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> f args... \<lbrace>\<lambda>_ s. \<exists>x1... s = \<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> * We assume that callee_modifies contains the correct modifies rule * and is unique (no backtracking). *) fun modifies_call_tac (callee_modifies: incr_net) ctxt n = DETERM ( (* We move the existentials out of the precondition: \<And>x1... \<lbrace>\<lambda>s. s = \<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> f args... \<lbrace>\<lambda>_ s. \<exists>x1... s = \<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> *) REPEAT (resolve_tac ctxt @{thms hoare_ex_pre} n) THEN resolve_tac ctxt @{thms valid_inv_weaken'} n THEN (* Then we can apply the modifies rule, which looks like: \<lbrace>\<lambda>s. s = ?\<sigma>\<rbrace> f ?args... \<lbrace>\<lambda>_ s. \<exists>x1... 
s = ?\<sigma>\<lparr>field1 := x1, ...\<rparr> \<rbrace> *) DETERM (resolve_from_net_tac ctxt (net_of callee_modifies) n) THEN modifies_invariant_tac true ctxt n); (* VCG for trivial state invariants, such as globals modifies specs. * Takes vcg rules from "valid_inv". *) val valid_invN = Context.theory_name @{theory} ^ ".valid_inv" fun modifies_vcg_tac leaf_tac ctxt n = let val vcg_rules = Named_Theorems.get ctxt valid_invN |> Tactic.build_net; fun vcg n st = Seq.make (fn () => let (* do not backtrack once we have matched vcg_rules *) val st' = DETERM (resolve_from_net_tac ctxt vcg_rules n) st; in Seq.pull ((case Seq.pull st' of NONE => leaf_tac ctxt n | SOME _ => (K (K st') THEN_ALL_NEW vcg) n) st) end); in vcg n end; (* Specify and prove modifies for one (non-recursive) function. *) fun do_modifies_one ctxt fn_info (prog_info: ProgramInfo.prog_info) callee_modifies f_name = let val c_modifies_prop = Thm.prop_of (Proof_Context.get_thm ctxt (f_name ^ "_modifies")); val (state0_var, arg_vars, measure_var, ac_modifies_prop) = gen_modifies_prop ctxt fn_info prog_info f_name c_modifies_prop; val _ = if isSome measure_var then error ("do_modifies_one bug: got recursive function " ^ f_name) else (); val f_def = the (Symtab.lookup fn_info f_name) |> #definition; fun leaf_tac ctxt n = FIRST [modifies_call_tac callee_modifies ctxt n, modifies_invariant_tac true ctxt n, print_tac ctxt ("do_modifies_one failed (goal " ^ string_of_int n ^ ")")]; val thm = Goal.prove ctxt (map (fn (Free (v, _)) => v) (state0_var :: arg_vars)) [] ac_modifies_prop (K (Method.NO_CONTEXT_TACTIC ctxt (Method.unfold [f_def] ctxt []) THEN modifies_initial_tac ctxt 1 THEN modifies_vcg_tac leaf_tac ctxt 1 THEN leaf_tac ctxt 1)); in thm end; (* Make a list of conjunctions. *) fun mk_conj_list [] = @{term "HOL.True"} | mk_conj_list [x] = x | mk_conj_list (x::xs) = HOLogic.mk_conj (x, (mk_conj_list xs)) (* Specify and prove modifies for a recursive function group. *) fun do_modifies_recursive ctxt fn_info (prog_info: ProgramInfo.prog_info) (callee_modifies: incr_net) f_names = let (* Collect information *) val c_modifies_props = map (fn f_name => Thm.prop_of (Proof_Context.get_thm ctxt (f_name ^ "_modifies"))) f_names; val modifies_props = map2 (gen_modifies_prop ctxt fn_info prog_info) f_names c_modifies_props; val f_defs = map (fn f_name => the (Symtab.lookup fn_info f_name) |> #definition) f_names; fun free_name (Free (v, _)) = v; (* * We do the proof in three parts. * * First, we prove modifies on the base case (measure' = 0) for each function. * This is trivially handled by @{thm modifies_recursive_0}. *) val base_case_props = map (fn (state0_var, arg_vars, SOME measure_var, prop) => (state0_var, arg_vars, subst_free [(measure_var, @{term "0 :: nat"})] prop)) modifies_props; val base_case_leaf_tac = modifies_invariant_tac true; val base_case_thms = map2 (fn (state0_var, arg_vars, prop) => fn f_def => Goal.prove ctxt (map free_name (state0_var :: arg_vars)) [] prop (K (EqSubst.eqsubst_tac ctxt [0] [f_def] 1 THEN modifies_initial_tac ctxt 1 THEN resolve_tac ctxt @{thms modifies_recursive_0} 1 THEN base_case_leaf_tac ctxt 1))) base_case_props f_defs; (* * Next, we prove the induction step "measure'" \<rightarrow> "Suc measure'". * We create an induction hypothesis for each function, quantifying * over its variables: * \<And>\<sigma> arg1 arg2... \<lbrace>\<lambda>s. s = \<sigma>\<rbrace> f measure' arg1 arg2... \<lbrace>\<lambda>s. \<exists>x1... 
s = \<sigma>\<lparr>f1 := x1...\<rparr>\<rbrace> * Then, we can perform the VCG-based proof as usual, using these * hypotheses in modifies_call_tac. *) val inductive_hyps = map (fn (state0_var, arg_vars, SOME measure_var, prop) => fold Logic.all (state0_var :: arg_vars) prop) modifies_props; val inductive_props = map (fn (state0_var, arg_vars, SOME measure_var, prop) => (state0_var, arg_vars, measure_var, subst_free [(measure_var, @{term "Suc"} $ measure_var)] prop)) modifies_props; val inductive_thms = map2 (fn (state0_var, arg_vars, measure_var, prop) => fn f_def => Goal.prove ctxt (map free_name (state0_var :: measure_var :: arg_vars)) inductive_hyps prop (fn {context, prems} => let val callee_modifies' = fold add_to_incr_net prems callee_modifies; fun inductive_leaf_tac ctxt n = FIRST [modifies_call_tac callee_modifies' ctxt n, modifies_invariant_tac true ctxt n]; in EqSubst.eqsubst_tac ctxt [0] [f_def] 1 THEN (* AutoCorres specifies recursive calls to use "measure - 1", * which in our case becomes "Suc measure - 1". Simplify to "measure". *) Method.NO_CONTEXT_TACTIC ctxt (Method.unfold @{thms diff_Suc_1} ctxt []) THEN modifies_initial_tac ctxt 1 THEN modifies_vcg_tac inductive_leaf_tac ctxt 1 THEN inductive_leaf_tac ctxt 1 end)) inductive_props f_defs (* * Third, we create a combined modifies prop * (\<forall>\<sigma> arg1... \<lbrace>\<lambda>s. s = \<sigma>\<rbrace> f1 measure' arg1... \<lbrace>...\<rbrace>) \<and> * (\<forall>\<sigma> arg1... \<lbrace>\<lambda>s. s = \<sigma>\<rbrace> f2 measure' arg1... \<lbrace>...\<rbrace>) \<and> ... * and apply induction on measure', solving the subgoals using the * theorems from before. * Note that we quantify over args because arg names may clash between functions. * * We pre-proved the induction steps separately for convenience * (e.g. so we can access the hypotheses as facts instead of premises). *) fun hd_of_equal [x] = x | hd_of_equal (x::xs) = if forall (fn x' => x = x') xs then x else raise TERM ("do_modifies_group bug: unequal terms", xs); val (measure_var, final_props) = modifies_props |> map (fn (state0_var, arg_vars, SOME measure_var, prop) => (measure_var, fold (fn Free (v, T) => fn P => HOLogic.mk_all (v, T, P)) (state0_var :: arg_vars) (HOLogic.dest_Trueprop prop))) |> (fn xs => let val props = map snd xs; val measure_var = map fst xs |> hd_of_equal; in (measure_var, props) end); val combined_prop = HOLogic.mk_Trueprop (mk_conj_list final_props); fun intro_tac ctxt rls n = TRY ((resolve_tac ctxt rls THEN_ALL_NEW intro_tac ctxt rls) n); fun elim_tac ctxt rls n = TRY ((eresolve_tac ctxt rls THEN_ALL_NEW elim_tac ctxt rls) n); (*fun maybe_print_tac msg ctxt = print_tac ctxt msg;*) fun maybe_print_tac msg ctxt = all_tac; (*val _ = @{trace} ("inductive thms", inductive_thms);*) val combined_thm = Goal.prove ctxt [free_name measure_var] [] combined_prop (K (Induct.induct_tac ctxt false (* simplifier *) [[SOME (NONE, (measure_var, false))]] (* variables *) [] (* arbitrary: *) [] (* ??? 
*) (SOME @{thms nat.induct}) (* induct rule *) [] (* extra thms *) 1 THEN maybe_print_tac "base case" ctxt THEN (* base case *) SOLVES ( (((DETERM o intro_tac ctxt @{thms conjI allI}) THEN' K (maybe_print_tac "base case'" ctxt)) THEN_ALL_NEW resolve_tac ctxt base_case_thms) 1 ) THEN maybe_print_tac "inductive case" ctxt THEN (* recursive case *) SOLVE ( (((DETERM o (intro_tac ctxt @{thms conjI allI} THEN_ALL_NEW elim_tac ctxt @{thms conjE}) THEN_ALL_NEW (fn n => Conv.gconv_rule (Raw_Simplifier.rewrite ctxt true @{thms All_to_all}) n #> Seq.succeed)) THEN' K (maybe_print_tac "inductive case'" ctxt)) THEN_ALL_NEW (resolve_tac ctxt inductive_thms THEN_ALL_NEW Method.assm_tac ctxt)) 1 ))); (* Finally, we extract theorems for individual functions. *) val final_thms = HOLogic.conj_elims ctxt combined_thm |> map (fn thm => thm |> Thm.equal_elim (Raw_Simplifier.rewrite ctxt true @{thms All_to_all} (Thm.cprop_of thm)) |> Thm.forall_elim_vars 0); in final_thms end; (* Prove and store modifies rules for one function or recursive function group. *) fun prove_modifies (fn_info: FunctionInfo.function_info Symtab.table) (prog_info: ProgramInfo.prog_info) (callee_modifies: incr_net) (results: thm Symtab.table) (f_names: string list) (thm_names: string list) ctxt : (thm list * Proof.context) option = let val f_infos = map (the o Symtab.lookup fn_info) f_names; val maybe_thms = if length f_names = 1 andalso #is_simpl_wrapper (hd f_infos) then let val f_name = hd f_names; val _ = tracing (f_name ^ " is un-translated; transferring C-parser's modifies rule directly"); val f_def = the (Symtab.lookup fn_info f_name) |> #definition; val orig_modifies = Proof_Context.get_thm ctxt (f_name ^ "_modifies"); val transfer_thm = @{thm autocorres_modifies_transfer}; val thm = transfer_thm OF [f_def, orig_modifies]; in SOME [modifies_simp ctxt thm] end else let val callees = map (FunctionInfo.all_callees o the o Symtab.lookup fn_info) f_names |> Symset.union_sets |> Symset.dest; val missing_callees = callees |> filter_out (fn callee => Symtab.defined results callee orelse member op= f_names callee); in if not (null missing_callees) then (warning ("Can't prove modifies; depends on functions without modifies proofs: " ^ commas missing_callees); NONE) else if length f_names = 1 then SOME ([do_modifies_one ctxt fn_info prog_info callee_modifies (hd f_names)]) else SOME (do_modifies_recursive ctxt fn_info prog_info callee_modifies f_names) end; in case maybe_thms of SOME thms => let val (_, ctxt) = Local_Theory.notes (map2 (fn thm_name => fn thm => ((Binding.name thm_name, []), [([thm], [])])) thm_names thms) ctxt; in SOME (thms, ctxt) end | NONE => NONE end; fun define_modifies_group fn_info prog_info f_names (acc as (callee_modifies, results, ctxt)) = (tracing ("Doing modifies proof for: " ^ commas f_names); case f_names |> filter (fn f_name => not (isSome (try (Proof_Context.get_thm ctxt) (f_name ^ "_modifies")))) of [] => (case prove_modifies fn_info prog_info callee_modifies results f_names (map (fn f_name => f_name ^ "'_modifies") f_names) ctxt of NONE => acc | SOME (thms, ctxt') => (fold add_to_incr_net thms callee_modifies, fold Symtab.update_new (f_names ~~ thms) results, ctxt')) | missing => (warning ("Can't do proof because C-parser modifies rules are missing for: " ^ commas missing); acc)); (* * This is the top-level wrapper that generates modifies rules for the most * recently translated set of functions from a given C file. 
*) fun new_modifies_rules filename ctxt = let val all_fn_info = Symtab.lookup (AutoCorresFunctionInfo.get (Proof_Context.theory_of ctxt)) filename |> the; val ts_info = FunctionInfo.Phasetab.lookup all_fn_info FunctionInfo.TS |> the; val prog_info = ProgramInfo.get_prog_info ctxt filename; (* Assume that the user has already generated and named modifies rules * for previously-translated callees. *) val existing_modifies = Symtab.dest ts_info |> List.mapPartial (fn (fn_name, fn_def) => try (fn _ => (fn_name, Proof_Context.get_thm ctxt (fn_name ^ "'_modifies"))) ()) |> Symtab.make; (* We will do modifies proofs for these functions *) val pending_fn_info = Symtab.dest ts_info |> List.mapPartial (fn (f, info) => if Symtab.defined existing_modifies f then NONE else SOME (f, info)) |> Symtab.make; val (call_graph, _) = FunctionInfo.calc_call_graph pending_fn_info; val (callee_modifies, results, ctxt') = fold (define_modifies_group ts_info prog_info) (#topo_sorted_functions call_graph |> map Symset.dest) (build_incr_net (Symtab.dest existing_modifies |> map snd), existing_modifies, ctxt) in ctxt' end end; \<close> end
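A minimal usage sketch for the theory above. This is an assumption-laden illustration, not part of the original file: the file name "prog.c" and the locale name `prog` are hypothetical; only `AutoCorresModifiesProofs.new_modifies_rules` and the `skip_heap_abs` requirement come from the theory itself.

```isabelle
(* Hypothetical sketch: "prog.c" and the locale name are assumptions. *)
theory Prog_Modifies
imports AutoCorresModifiesProofs
begin

install_C_file "prog.c"               (* C-parser; also proves the f_modifies rules *)
autocorres [skip_heap_abs] "prog.c"   (* the proof procedure assumes skip_heap_abs *)

context prog begin
(* Proves and notes an f'_modifies theorem for every translated function *)
local_setup \<open>AutoCorresModifiesProofs.new_modifies_rules "prog.c"\<close>
end

end
```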
lemma complex_cnj_cancel_iff [simp]: "cnj x = cnj y \<longleftrightarrow> x = y"
  by (simp add: complex_eq_iff)
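The lemma holds because conjugation is an involution and hence injective; a LaTeX restatement of the argument (not the Isabelle proof text):

$$\overline{x} = \overline{y} \;\Longrightarrow\; x = \overline{\overline{x}} = \overline{\overline{y}} = y,$$

and the converse direction is trivial.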
;;; -*- syntax: common-lisp; package: KEIM; base: 10; mode: LISP -*-
;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;                                                                      ;;
;; Copyright (C) 1993 by AG Siekmann, Fachbereich Informatik,           ;;
;; Universitaet des Saarlandes, Saarbruecken, Germany.                  ;;
;; All rights reserved.                                                 ;;
;; For information about this program, write to:                       ;;
;;   KEIM Project                                                       ;;
;;   AG Siekmann/FB Informatik                                          ;;
;;   Universitaet des Saarlandes                                        ;;
;;   Postfach 1150                                                      ;;
;;   D-66041 Saarbruecken                                               ;;
;;   Germany                                                            ;;
;; electronic mail: [email protected]                               ;;
;;                                                                      ;;
;; The author makes no representations about the suitability of this   ;;
;; software for any purpose. It is provided "AS IS" without express or ;;
;; implied warranty. In particular, it must be understood that this    ;;
;; software is an experimental version, and is not suitable for use in ;;
;; any safety-critical application, and the author denies a license    ;;
;; for such use.                                                        ;;
;;                                                                      ;;
;; You may use, copy, modify and distribute this software for any      ;;
;; noncommercial and non-safety-critical purpose. Use of this software ;;
;; in a commercial product is not included under this license. You     ;;
;; must maintain this copyright statement in all copies of this        ;;
;; software that you modify or distribute.                              ;;
;;                                                                      ;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;

(in-package :omega)

(th~defproblem boolos-curious-inference
 (in boolos)
 (assumption a1 (forall (lam (n i) (= (f n one) (s one)))))
 (assumption a2 (forall (lam (x i) (= (f one (s x)) (s (s (f one x)))))))
 (assumption a3 (forall (lam (n i) (forall (lam (x i) (= (f (s n) (s x)) (f n (f (s n) x))))))))
 (assumption a4 (D one))
 (assumption a5 (forall (lam (x i) (implies (D x) (D (s x))))))
 (conclusion conc (D (f (s (s (s (s one)))) (s (s (s (s one))))))))

;;;; The following lemmata are proposed by Boolos
;; lemma conc (N one)
;; lemma conc (forall (lam (y i) (implies (N y) (N (s y)))))
;; lemma conc (N (s (s (s (s one)))))
;; lemma conc (E one)
;; lemma conc (forall (lam (y i) (implies (E y) (E (s y)))))
;; lemma conc (E (s one))
;; lemma conc (forall (lam (nn i) (implies (N nn) (forall (lam (x i) (implies (N x) (E (f nn x))))))))
;; -> lemma ?? (M one)
;; -> -> lemma ?? (forall (lam (x i) (implies (N x) (Q x))))
;; -> -> -> lemma ?? (Q one)
;; -> -> -> lemma ?? (forall (lam (x i) (implies (Q x) (Q (s x)))))
;; -> lemma ?? (forall (lam (y i) (implies (M y) (M (s y)))))
;; -> -> lemma ?? (forall (lam (x i) (implies (N x) (P x))))
;; -> -> -> lemma ?? (P one)
;; -> -> -> lemma ?? (forall (lam (x i) (implies (P x) (P (s x)))))
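In conventional notation (writing $s$ for successor and $1$ for `one`), the assumptions define Boolos's fast-growing function $f$ and a predicate $D$ closed under successor, and the conclusion asserts $D$ at $f$ of two large arguments:

$$f(n,1) = s(1), \qquad f(1, s(x)) = s(s(f(1,x))), \qquad f(s(n), s(x)) = f(n, f(s(n), x)),$$

$$D(1),\quad \forall x.\, D(x) \rightarrow D(s(x)) \;\;\vdash\;\; D\big(f(s^4(1),\, s^4(1))\big).$$

The commented lemmata above are the cut formulas Boolos proposed to keep the proof short.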
import tools.auto.finish
import _target.deps.mini_crush.src.mini_crush

namespace arith

-- *Arithmetic Expressions Over Natural Numbers

-- **Source Language

-- binary operation syntax
inductive binop : Type
| Plus
| Times

-- arithmetic expression syntax
inductive exp : Type
| Const : ℕ → exp
| Binop : binop → exp → exp → exp

-- binop denotational semantics
def binopDenote : binop → ℕ → ℕ → ℕ
| binop.Plus := nat.add
| binop.Times := nat.mul

-- exp denotational semantics
def expDenote : exp → ℕ
| (exp.Const n) := n
| (exp.Binop b e1 e2) := (binopDenote b) (expDenote e1) (expDenote e2)

-- tests using both #reduce and #eval
#reduce (decidable.to_bool(expDenote (exp.Const 42) = 42))
#eval (decidable.to_bool(expDenote (exp.Const 42) = 42))
#reduce (decidable.to_bool(expDenote (exp.Binop binop.Plus (exp.Const 2) (exp.Const 2)) = 4))
#eval (decidable.to_bool(expDenote (exp.Binop binop.Plus (exp.Const 2) (exp.Const 2)) = 4))
#reduce (decidable.to_bool(expDenote (exp.Binop binop.Times (exp.Binop binop.Plus (exp.Const 2) (exp.Const 2)) (exp.Const 7)) = 28))
#eval (decidable.to_bool(expDenote (exp.Binop binop.Times (exp.Binop binop.Plus (exp.Const 2) (exp.Const 2)) (exp.Const 7)) = 28))

-- **Target Language

-- instruction syntax
inductive instr : Type
| iConst : ℕ → instr
| iBinop : binop → instr

-- program and stack syntax
-- mark as reducible so lean can unfold the definitions during typechecking
@[reducible] def prog : Type := list instr
@[reducible] def stack : Type := list ℕ

-- instr denotational semantics
def instrDenote (i : instr) (s : stack) : option stack :=
match i with
| (instr.iConst n) := some (n :: s)
| (instr.iBinop b) :=
  match s with
  | arg1 :: arg2 :: s' := some ((binopDenote b) arg1 arg2 :: s')
  | _ := none
  end
end

-- prog denotational semantics
def progDenote : prog → stack → option stack
| [] s := some s
| (i :: p') s :=
  match instrDenote i s with
  | none := none
  | some s' := progDenote p' s'
  end

-- **Translation
def compile : exp → prog
| (exp.Const n) := instr.iConst n :: []
| (exp.Binop b e1 e2) := compile e2 ++ compile e1 ++ instr.iBinop b :: []

-- compilation examples
#reduce compile (exp.Const 42)
#reduce compile (exp.Binop binop.Plus (exp.Const 2) (exp.Const 2))
#reduce compile (exp.Binop binop.Times (exp.Binop binop.Plus (exp.Const 2) (exp.Const 2))(exp.Const 7))

-- compilation-evaluation examples
#reduce progDenote (compile (exp.Const 42)) []
#reduce progDenote (compile (exp.Binop binop.Plus (exp.Const 2) (exp.Const 2))) []
#reduce progDenote (compile (exp.Binop binop.Times (exp.Binop binop.Plus (exp.Const 2) (exp.Const 2))(exp.Const 7))) []

-- **Translation Correctness
lemma compile_correct' (e : exp) :
  ∀ p s, progDenote (compile e ++ p) s = progDenote p (expDenote e :: s) :=
begin
  induction e; intros,
  case exp.Const
  { unfold compile, unfold expDenote, simp,
    unfold progDenote, unfold instrDenote, simp,
    unfold instrDenote._match_1, unfold progDenote._match_1, refl },
  case exp.Binop
  { unfold compile, unfold expDenote, simp,
    /- can replace these lines with simph -/
    rw ih_2, /- -/
    rw ih_1, /- -/
    unfold progDenote, unfold instrDenote, simp,
    unfold instrDenote._match_1, unfold instrDenote._match_2,
    unfold progDenote._match_1, refl }
end

lemma compile_correct'_lean (e : exp) :
  ∀ p s, progDenote (compile e ++ p) s = progDenote p (expDenote e :: s) :=
begin
  induction e; intros,
  case exp.Const {refl},
  case exp.Binop
  { unfold compile expDenote, simph, refl }
end

@[simp] lemma compile_correct'_crush (e : exp) :
  ∀ p s, progDenote (compile e ++ p) s = progDenote p (expDenote e :: s) :=
by
mini_crush lemma app_nil_end {A : Type} (l : list A) : l = l ++ list.nil := by induction l; simph theorem compile_correct (e : exp) : progDenote (compile e) [] = some (expDenote e :: []) := begin rw app_nil_end (compile e), rw compile_correct', refl end -- TODO: excessive memory consumption theorem compile_correct_crush (e : exp) : progDenote (compile e) [] = some (expDenote e :: []) := by mini_crush end arith namespace typed -- *Typed Expressions -- **Source Language -- atomic types inductive type : Type | Nat | Bool open type -- binop types inductive tbinop : type → type → type → Type | TPlus : tbinop Nat Nat Nat | TTimes : tbinop Nat Nat Nat | TEq : ∀ t, tbinop t t Bool | TLt : tbinop Nat Nat Bool open tbinop -- expression types inductive texp : type → Type | TNConst : ℕ → texp Nat | TBConst : bool → texp Bool | TBinop : ∀ {t1 t2 t}, tbinop t1 t2 t → texp t1 → texp t2 → texp t open texp -- type denotational semantics def typeDenote : type → Type | Nat := ℕ | Bool := bool -- tbinop denotational semantics def tbinopDenote : ∀ {arg1 arg2 res} (b : tbinop arg1 arg2 res), typeDenote arg1 → typeDenote arg2 → typeDenote res | ._ ._ ._ TPlus := nat.add | ._ ._ ._ TTimes := nat.mul | ._ ._ ._ (TEq Nat) := (λ x y, decidable.to_bool $ x = y) | ._ ._ ._ (TEq Bool) := (λ x y, decidable.to_bool $ x = y) | ._ ._ ._ TLt := (λ x y, decidable.to_bool $ nat.le x y) -- texp denotational semantics def texpDenote : ∀ {t}, texp t → typeDenote t | ._ (TNConst n) := n | ._ (TBConst b) := b | ._ (@TBinop _ _ _ b e1 e2) := (tbinopDenote b) (texpDenote e1) (texpDenote e2) #reduce texpDenote (TNConst 42) #reduce texpDenote (TBConst true) -- TODO: is there a way to remove need for _'s? #reduce texpDenote (TBinop TTimes (TBinop TPlus (TNConst 2) (TNConst 2)) (TNConst 7)) #reduce texpDenote (TBinop (TEq Nat) (TBinop TPlus (TNConst 2) (TNConst 2)) (TNConst 7)) #reduce texpDenote (TBinop TLt (TBinop TPlus (TNConst 2) (TNConst 2)) (TNConst 7)) -- **Target Language -- stack type to describe how expressions affect the stack @[reducible] def tstack := list type inductive tinstr : tstack → tstack → Type | TiNConst : ∀ s, ℕ → tinstr s (Nat :: s) | TiBConst : ∀ s, bool → tinstr s (Bool :: s) | TiBinop : ∀ arg1 arg2 res s, tbinop arg1 arg2 res → tinstr (arg1 :: arg2 :: s) (res :: s) open tinstr inductive tprog : tstack → tstack → Type | TNil : ∀ s, tprog s s | TCons : ∀ s1 s2 s3, tinstr s1 s2 → tprog s2 s3 → tprog s1 s3 open tprog def vstack : tstack → Type | [] := unit | (t :: ts) := typeDenote t × vstack ts def tinstrDenote : ∀ {ts} {ts'}, tinstr ts ts' → vstack ts → vstack ts' | ._ ._ (TiNConst _ n) := (λ s, (n, s)) | ._ ._ (TiBConst _ b) := (λ s, (b, s)) | ._ ._ (TiBinop _ _ _ _ b) := (λ s, let (arg1, (arg2, s')) := s in ((tbinopDenote b) arg1 arg2, s')) def tprogDenote : ∀ {ts} {ts'}, tprog ts ts' → vstack ts → vstack ts' | ._ ._ (TNil _) := (λ s, s) | ._ ._ (TCons _ _ _ i p) := (λ s, tprogDenote p (tinstrDenote i s)) -- **Translation def tconcat : ∀ {ts} {ts'} {ts''}, tprog ts ts' → tprog ts' ts'' → tprog ts ts'' | ._ ._ ts'' (TNil _) := (λ p, p) | ._ ._ ts'' (TCons _ _ _ i p1) := (λ p, TCons _ _ _ i (tconcat p1 p)) def tcompile : ∀ {t} (e : texp t) (ts : tstack), tprog ts (t :: ts) | ._ (TNConst n) _ := TCons _ _ _ (TiNConst _ n) (TNil _) | ._ (TBConst b) _ := TCons _ _ _ (TiBConst _ b) (TNil _) | ._ (@TBinop _ _ _ b e1 e2) _ := tconcat (tcompile e2 _) (tconcat (tcompile e1 _) (TCons _ _ _ (TiBinop _ _ _ _ b) (TNil _))) #reduce tprogDenote (tcompile (TNConst 42) []) () #reduce tprogDenote (tcompile (TBConst true) []) () 
#reduce tprogDenote (tcompile (TBinop TTimes (TBinop TPlus (TNConst 2) (TNConst 2)) (TNConst 7)) []) () #reduce tprogDenote (tcompile (TBinop (TEq Nat) (TBinop TPlus (TNConst 2) (TNConst 2)) (TNConst 7)) []) () #reduce tprogDenote (tcompile (TBinop TLt (TBinop TPlus (TNConst 2) (TNConst 2)) (TNConst 7)) []) () -- **Translation Correctness @[simp] lemma tconcat_correct : ∀ ts ts' ts'' (p : tprog ts ts') (p' : tprog ts' ts'') (s : vstack ts), tprogDenote (tconcat p p') s = tprogDenote p' (tprogDenote p s) := by mini_crush @[simp] lemma tcompile_correct' : ∀ t (e : texp t) ts (s : vstack ts), tprogDenote (tcompile e ts) s = (texpDenote e, s) := by mini_crush theorem tcompile_correct : ∀ t (e : texp t), tprogDenote (tcompile e []) () = (texpDenote e, ()) := by mini_crush end typed
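The essential step in the `arith` correctness development above is the strengthened statement `compile_correct'`, which generalizes over an arbitrary continuation program `p` and initial stack `s` so that the `Binop` induction case goes through; in mathematical notation:

$$\forall p\, s.\quad \texttt{progDenote}\,(\texttt{compile}\;e \mathbin{+\!\!+} p)\; s \;=\; \texttt{progDenote}\; p\; (\texttt{expDenote}\;e :: s).$$

The main theorem `compile_correct` is the instance $p = [\,]$, $s = [\,]$, obtained after rewriting with `app_nil_end`.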
Formal statement is: lemma Re_divide_numeral [simp]: "Re (z / numeral w) = Re z / numeral w" Informal statement is: Taking the real part commutes with division by a numeral: the real part of $z$ divided by the numeral $w$ equals the real part of $z$, divided by that numeral.
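A one-line informal justification (a sketch, not the Isabelle proof): a numeral denotes a real scalar, and taking real parts is linear over real scalars, so

$$\operatorname{Re}\!\left(\frac{z}{n}\right) \;=\; \frac{1}{n}\operatorname{Re}(z) \;=\; \frac{\operatorname{Re}(z)}{n}, \qquad n = \operatorname{numeral} w \ge 1.$$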
[STATEMENT] lemma trunc_ell2_norm_mono: \<open>M \<subseteq> N \<Longrightarrow> norm (trunc_ell2 M \<psi>) \<le> norm (trunc_ell2 N \<psi>)\<close> [PROOF STATE] proof (prove) goal (1 subgoal): 1. M \<subseteq> N \<Longrightarrow> norm (trunc_ell2 M \<psi>) \<le> norm (trunc_ell2 N \<psi>) [PROOF STEP] proof (rule power2_le_imp_le[rotated], force, transfer) [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>M N \<psi>. \<lbrakk>M \<subseteq> N; has_ell2_norm \<psi>\<rbrakk> \<Longrightarrow> (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] fix M N :: \<open>'a set\<close> and \<psi> :: \<open>'a \<Rightarrow> complex\<close> [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>M N \<psi>. \<lbrakk>M \<subseteq> N; has_ell2_norm \<psi>\<rbrakk> \<Longrightarrow> (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] assume \<open>M \<subseteq> N\<close> and \<open>has_ell2_norm \<psi>\<close> [PROOF STATE] proof (state) this: M \<subseteq> N has_ell2_norm \<psi> goal (1 subgoal): 1. \<And>M N \<psi>. \<lbrakk>M \<subseteq> N; has_ell2_norm \<psi>\<rbrakk> \<Longrightarrow> (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] have \<open>(ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 = (\<Sum>\<^sub>\<infinity>i\<in>M. (cmod (\<psi> i))\<^sup>2)\<close> [PROOF STATE] proof (prove) goal (1 subgoal): 1. (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 = (\<Sum>\<^sub>\<infinity>i\<in>M. (cmod (\<psi> i))\<^sup>2) [PROOF STEP] unfolding ell2_norm_square [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<Sum>\<^sub>\<infinity>i. (cmod (if i \<in> M then \<psi> i else 0))\<^sup>2) = (\<Sum>\<^sub>\<infinity>i\<in>M. (cmod (\<psi> i))\<^sup>2) [PROOF STEP] apply (rule infsum_cong_neutral) [PROOF STATE] proof (prove) goal (3 subgoals): 1. \<And>x. x \<in> M - UNIV \<Longrightarrow> (cmod (\<psi> x))\<^sup>2 = 0 2. \<And>x. x \<in> UNIV - M \<Longrightarrow> (cmod (if x \<in> M then \<psi> x else 0))\<^sup>2 = 0 3. \<And>x. x \<in> UNIV \<inter> M \<Longrightarrow> (cmod (if x \<in> M then \<psi> x else 0))\<^sup>2 = (cmod (\<psi> x))\<^sup>2 [PROOF STEP] by auto [PROOF STATE] proof (state) this: (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 = (\<Sum>\<^sub>\<infinity>i\<in>M. (cmod (\<psi> i))\<^sup>2) goal (1 subgoal): 1. \<And>M N \<psi>. \<lbrakk>M \<subseteq> N; has_ell2_norm \<psi>\<rbrakk> \<Longrightarrow> (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] also [PROOF STATE] proof (state) this: (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 = (\<Sum>\<^sub>\<infinity>i\<in>M. (cmod (\<psi> i))\<^sup>2) goal (1 subgoal): 1. \<And>M N \<psi>. \<lbrakk>M \<subseteq> N; has_ell2_norm \<psi>\<rbrakk> \<Longrightarrow> (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] have \<open>\<dots> \<le> (\<Sum>\<^sub>\<infinity>i\<in>N. (cmod (\<psi> i))\<^sup>2)\<close> [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<Sum>\<^sub>\<infinity>i\<in>M. 
(cmod (\<psi> i))\<^sup>2) \<le> (\<Sum>\<^sub>\<infinity>i\<in>N. (cmod (\<psi> i))\<^sup>2) [PROOF STEP] apply (rule infsum_mono2) [PROOF STATE] proof (prove) goal (4 subgoals): 1. (\<lambda>i. (cmod (\<psi> i))\<^sup>2) summable_on M 2. (\<lambda>i. (cmod (\<psi> i))\<^sup>2) summable_on N 3. M \<subseteq> N 4. \<And>x. x \<in> N - M \<Longrightarrow> 0 \<le> (cmod (\<psi> x))\<^sup>2 [PROOF STEP] using \<open>has_ell2_norm \<psi>\<close> \<open>M \<subseteq> N\<close> [PROOF STATE] proof (prove) using this: has_ell2_norm \<psi> M \<subseteq> N goal (4 subgoals): 1. (\<lambda>i. (cmod (\<psi> i))\<^sup>2) summable_on M 2. (\<lambda>i. (cmod (\<psi> i))\<^sup>2) summable_on N 3. M \<subseteq> N 4. \<And>x. x \<in> N - M \<Longrightarrow> 0 \<le> (cmod (\<psi> x))\<^sup>2 [PROOF STEP] by (auto simp add: ell2_norm_square has_ell2_norm_def simp flip: norm_power intro: summable_on_subset_banach) [PROOF STATE] proof (state) this: (\<Sum>\<^sub>\<infinity>i\<in>M. (cmod (\<psi> i))\<^sup>2) \<le> (\<Sum>\<^sub>\<infinity>i\<in>N. (cmod (\<psi> i))\<^sup>2) goal (1 subgoal): 1. \<And>M N \<psi>. \<lbrakk>M \<subseteq> N; has_ell2_norm \<psi>\<rbrakk> \<Longrightarrow> (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] also [PROOF STATE] proof (state) this: (\<Sum>\<^sub>\<infinity>i\<in>M. (cmod (\<psi> i))\<^sup>2) \<le> (\<Sum>\<^sub>\<infinity>i\<in>N. (cmod (\<psi> i))\<^sup>2) goal (1 subgoal): 1. \<And>M N \<psi>. \<lbrakk>M \<subseteq> N; has_ell2_norm \<psi>\<rbrakk> \<Longrightarrow> (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] have \<open>\<dots> = (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2\<close> [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<Sum>\<^sub>\<infinity>i\<in>N. (cmod (\<psi> i))\<^sup>2) = (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] unfolding ell2_norm_square [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<Sum>\<^sub>\<infinity>i\<in>N. (cmod (\<psi> i))\<^sup>2) = (\<Sum>\<^sub>\<infinity>i. (cmod (if i \<in> N then \<psi> i else 0))\<^sup>2) [PROOF STEP] apply (rule infsum_cong_neutral) [PROOF STATE] proof (prove) goal (3 subgoals): 1. \<And>x. x \<in> UNIV - N \<Longrightarrow> (cmod (if x \<in> N then \<psi> x else 0))\<^sup>2 = 0 2. \<And>x. x \<in> N - UNIV \<Longrightarrow> (cmod (\<psi> x))\<^sup>2 = 0 3. \<And>x. x \<in> N \<inter> UNIV \<Longrightarrow> (cmod (\<psi> x))\<^sup>2 = (cmod (if x \<in> N then \<psi> x else 0))\<^sup>2 [PROOF STEP] by auto [PROOF STATE] proof (state) this: (\<Sum>\<^sub>\<infinity>i\<in>N. (cmod (\<psi> i))\<^sup>2) = (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 goal (1 subgoal): 1. \<And>M N \<psi>. \<lbrakk>M \<subseteq> N; has_ell2_norm \<psi>\<rbrakk> \<Longrightarrow> (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] finally [PROOF STATE] proof (chain) picking this: (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] show \<open>(ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. 
if i \<in> N then \<psi> i else 0))\<^sup>2\<close> [PROOF STATE] proof (prove) using this: (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 goal (1 subgoal): 1. (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 [PROOF STEP] by - [PROOF STATE] proof (state) this: (ell2_norm (\<lambda>i. if i \<in> M then \<psi> i else 0))\<^sup>2 \<le> (ell2_norm (\<lambda>i. if i \<in> N then \<psi> i else 0))\<^sup>2 goal: No subgoals! [PROOF STEP] qed
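Stripped of the proof-state traces, the calculation above is the inequality chain

$$\lVert \mathrm{trunc\_ell2}\; M\, \psi \rVert^2 \;=\; \sum_{i\in M} \lvert\psi(i)\rvert^2 \;\le\; \sum_{i\in N} \lvert\psi(i)\rvert^2 \;=\; \lVert \mathrm{trunc\_ell2}\; N\, \psi \rVert^2 \qquad (M\subseteq N),$$

where the middle step is `infsum_mono2`; summability on $N$ comes from `has_ell2_norm \<psi>`, and summability on the subset $M$ follows via `summable_on_subset_banach`.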
SUBROUTINE CHEGVX_F95( A, B, W, ITYPE, JOBZ, UPLO, VL, VU, IL, IU, & & M, IFAIL, ABSTOL, INFO ) ! .. USE STATEMENTS .. USE LA_PRECISION, ONLY: WP => SP USE LA_AUXMOD, ONLY: ERINFO, LSAME USE F77_LAPACK, ONLY: LAMCH_F77 => SLAMCH USE F77_LAPACK, ONLY: HEGVX_F77 => LA_HEGVX, ILAENV_F77 => ILAENV ! .. IMPLICIT STATEMENT .. IMPLICIT NONE ! .. CHARACTER ARGUMENTS .. CHARACTER(LEN=1), INTENT(IN), OPTIONAL :: JOBZ, UPLO ! .. SCALAR ARGUMENTS .. INTEGER, INTENT(IN), OPTIONAL :: IL, IU, ITYPE INTEGER, INTENT(OUT), OPTIONAL :: INFO, M REAL(WP), INTENT(IN), OPTIONAL :: ABSTOL, VL, VU ! .. ARRAY ARGUMENTS .. INTEGER, INTENT(OUT), OPTIONAL, TARGET :: IFAIL(:) COMPLEX(WP), INTENT(INOUT) :: A(:,:), B(:,:) REAL(WP), INTENT(OUT) :: W(:) !---------------------------------------------------------------------- ! ! Purpose ! ======= ! ! LA_SYGVX and LA_HEGVX compute selected eigenvalues and, optionally, ! the corresponding eigenvectors of generalized eigenvalue problems of ! the form ! A*z = lambda*B*z, A*B*z = lambda*z, and B*A*z = lambda*z, ! where A and B are real symmetric in the case of LA_SYGVX and complex ! Hermitian in the case of LA_HEGVX. In both cases B is positive ! definite. Eigenvalues and eigenvectors can be selected by specifying ! either a range of values or a range of indices for the desired ! eigenvalues. ! ! ========= ! ! SUBROUTINE LA_SYGVX / LA_HEGVX (A, B, W, ITYPE= itype, & ! JOBZ= jobz, UPLO= uplo, VL= vl, VU= vu, IL= il, & ! IU= iu, M= m, IFAIL= ifail, ABSTOL= abstol, INFO= info ) ! <type>(<wp>), INTENT(INOUT) :: A(:,:), B(:,:) ! REAL(<wp>), INTENT(OUT) :: W(:) ! INTEGER, INTENT(IN), OPTIONAL :: ITYPE ! CHARACTER(LEN=1), INTENT(IN), OPTIONAL :: JOBZ, UPLO ! REAL(<wp>), INTENT(IN), OPTIONAL :: VL, VU ! INTEGER, INTENT(IN), OPTIONAL :: IL, IU ! INTEGER, INTENT(OUT), OPTIONAL :: M ! INTEGER, INTENT(OUT), OPTIONAL :: IFAIL(:) ! REAL(<wp>), INTENT(IN), OPTIONAL :: ABSTOL ! INTEGER, INTENT(OUT), OPTIONAL :: INFO ! where ! <type> ::= REAL | COMPLEX ! <wp> ::= KIND(1.0) | KIND(1.0D0) ! ! Arguments ! ========= ! ! A (input/output) REAL or COMPLEX square array, shape (:,:). ! On entry, the matrix A. ! If UPLO = 'U', the upper triangular part of A contains the ! upper triangular part of matrix A. If UPLO = 'L', the lower ! triangular part of A contains the lower triangular part of ! matrix A. ! On exit, if JOBZ = 'V', the first M columns of A contain the ! orthonormal eigenvectors corresponding to the selected ! eigenvalues, with the i-th column of A holding the eigenvector ! associated with the eigenvalue in W(i). ! The eigenvectors are normalized as follows: ! if ITYPE = 1 or 2: Z^H * B * Z = I , ! if ITYPE = 3: Z^H * B^-1 * Z = I . ! If an eigenvector fails to converge, then that column of A ! contains the latest approximation to the eigenvector and the ! index of the eigenvector is returned in IFAIL. ! If JOBZ = 'N', then the upper triangle (if UPLO = 'U') or the ! lower triangle (if UPLO = 'L') of A, including the diagonal, is ! destroyed. ! B (input/output) REAL or COMPLEX square array, shape (:,:) with ! size(B,1) = size(A,1). ! On entry, the matrix B. ! If UPLO = 'U', the upper triangular part of B contains the ! upper triangular part of matrix B. If UPLO = 'L', the lower ! triangular part of B contains the lower triangular part of ! matrix B. ! On exit, the part of B containing the matrix is overwritten by ! the triangular factor U or L of the Cholesky factorization ! B = U^H*U or B = L*L^H. ! W (output) REAL array, shape (:) with size(W) = size(A,1). ! 
The first M elements contain the selected eigenvalues in ! ascending order. ! ITYPE Optional (input) INTEGER. ! Specifies the problem type to be solved: ! = 1: A*z = lambda*B*z ! = 2: A*B*z = lambda*z ! = 3: B*A*z = lambda*z ! Default value: 1. ! JOBZ Optional (input) CHARACTER(LEN=1). ! = 'N': Computes eigenvalues only; ! = 'V': Computes eigenvalues and eigenvectors. ! Default value: 'N'. ! UPLO Optional (input) CHARACTER(LEN=1). ! = 'U': Upper triangles of A and B are stored; ! = 'L': Lower triangles of A and B are stored. ! Default value: 'U'. ! VL,VU Optional (input) REAL. ! The lower and upper bounds of the interval to be searched for ! eigenvalues. VL < VU. ! Default values: VL = -HUGE(<wp>) and VU = HUGE(<wp>), where ! <wp> ::= KIND(1.0) | KIND(1.0D0). ! Note: Neither VL nor VU may be present if IL and/or IU is ! present. ! IL,IU Optional (input) INTEGER. ! The indices of the smallest and largest eigenvalues to be ! returned. The IL-th through IU-th eigenvalues will be found. ! 1<=IL<=IU<=size(A,1). ! Default values: IL = 1 and IU = size(A,1). ! Note: Neither IL nor IU may be present if VL and/or VU is ! present. ! Note: All eigenvalues are calculated if none of the arguments ! VL, VU, IL and IU are present. ! M Optional (output) INTEGER. ! The total number of eigenvalues found. 0 <= M <= size(A,1). ! Note: If IL and IU are present then M = IU - IL + 1. ! IFAIL Optional (output) INTEGER array, shape (:) with size(IFAIL) = ! size(A,1). ! If INFO = 0, the first M elements of IFAIL are zero. ! If INFO > 0, then IFAIL contains the indices of the ! eigenvectors that failed to converge. ! Note: IFAIL should be present if JOBZ = 'V'. ! ABSTOL Optional (input) REAL. ! The absolute error tolerance for the eigenvalues. An approximate ! eigenvalue is accepted as converged when it is determined to lie ! in an interval [a,b] of width less than or equal to ! ABSTOL + EPSILON(1.0_<wp>) * max(| a |, | b |), ! where <wp> is the working precision. If ABSTOL <= 0, then ! EPSILON(1.0_<wp>)* ||T||1 will be used in its place, where ! ||T||1 is the l1 norm of the tridiagonal matrix obtained by ! reducing the generalized eigenvalue problem to tridiagonal form. ! Eigenvalues will be computed most accurately when ABSTOL is set ! to twice the underflow threshold 2 * LA_LAMCH(1.0_<wp>, 'S'), ! not zero. ! Default value: 0.0_<wp>. ! Note: If this routine returns with 0 < INFO <= n, then some ! eigenvectors did not converge. ! Try setting ABSTOL to 2 * LA_LAMCH(1.0_<wp>, 'S'). ! INFO Optional (output) INTEGER. ! = 0: successful exit. ! < 0: if INFO = -i, the i-th argument had an illegal value. ! > 0: the algorithm failed to converge or matrix B is not ! positive definite: ! <= n: the algorithm failed to converge; if INFO = i, then i ! eigenvectors failed to converge. Their indices are stored ! in array IFAIL. ! > n: if INFO = n+i, for 1 <= i <= n, then the leading minor ! of order i of B is not positive definite. The ! factorization of B could not be completed and no ! eigenvalues or eigenvectors were computed. ! n is the order of A. ! If INFO is not present and an error occurs, then the program is ! terminated with an error message. !----------------------------------------------------------------------- ! .. LOCAL PARAMETERS .. CHARACTER(LEN=8), PARAMETER :: SRNAME = 'LA_HEGVX' CHARACTER(LEN=6), PARAMETER :: BSNAME = 'ZHETRD' ! .. LOCAL SCALARS .. 
CHARACTER(LEN=1) :: LJOBZ, LUPLO, LRANGE INTEGER :: N, LINFO, LDA, LDZ, LZ, LIL, LIU, LM, LWORK, NB, ISTAT, & & SIFAIL, LDB, LITYPE INTEGER, TARGET :: ISTAT1(1) REAL(WP) :: LABSTOL, LVL, LVU ! .. LOCAL ARRAYS .. INTEGER, POINTER :: IWORK(:), LIFAIL(:) COMPLEX(WP), POINTER :: WORK(:), Z(:,:) REAL(WP), POINTER :: RWORK(:) ! .. INTRINSIC FUNCTIONS .. INTRINSIC HUGE, PRESENT, SIZE ! .. EXECUTABLE STATEMENTS .. N = SIZE(A,1); LDA = MAX(1,N); LDB=MAX(1,SIZE(B,1)); LINFO = 0; ISTAT = 0 IF( PRESENT(ITYPE) )THEN; LITYPE = ITYPE; ELSE; LITYPE = 1; END IF IF( PRESENT(IFAIL) )THEN; SIFAIL = SIZE(IFAIL);ELSE; SIFAIL = N; END IF IF( PRESENT(JOBZ ) )THEN; LJOBZ=JOBZ; ELSE; LJOBZ = 'N'; ENDIF IF( PRESENT(UPLO) ) THEN; LUPLO = UPLO; ELSE; LUPLO = 'U'; END IF IF( PRESENT(VL) )THEN; LVL = VL; ELSE; LVL = -HUGE(LVL); ENDIF IF( PRESENT(VU) )THEN; LVU = VU; ELSE; LVU = HUGE(LVU); ENDIF IF( PRESENT(IL) )THEN; LIL = IL; ELSE; LIL = 1; ENDIF IF( PRESENT(IU) )THEN; LIU = IU; ELSE; LIU = N; ENDIF ! .. TEST THE ARGUMENTS IF( PRESENT(VL) .OR. PRESENT(VU) )THEN ; LRANGE = 'V'; LM=N ELSE IF( PRESENT(IL) .OR. PRESENT(IU) )THEN ; LRANGE = 'I'; LM=LIU-LIL+1 ELSE ; LRANGE = 'A'; LM=N; END IF IF( SIZE( A, 2 ) /= N .OR. N < 0 )THEN; LINFO = -1 ELSE IF (SIZE (B, 2) /= N ) THEN; LINFO = -2 ELSE IF( SIZE( W ) /= N )THEN; LINFO = -3 ELSE IF( LITYPE < 1 .OR. LITYPE > 3 )THEN; LINFO = -4 ELSE IF( .NOT.LSAME(LJOBZ,'V') .AND. .NOT.LSAME(LJOBZ,'N') )THEN; LINFO = -5 ELSE IF( .NOT.LSAME(LUPLO,'U') .AND. .NOT.LSAME(LUPLO,'L') )THEN; LINFO = -6 ELSE IF( LVU < LVL )THEN ; LINFO = -7 ELSE IF( (PRESENT(VL) .OR. PRESENT(VU)) .AND. & (PRESENT(IL) .OR. PRESENT(IU)) )THEN; LINFO = -8 ELSE IF( LSAME(LRANGE, 'I') .AND. ( LIU < MIN( N, LIL ) .OR. LIU>N))THEN; LINFO = -9 ELSE IF( N < LIU )THEN; LINFO = -10 ELSE IF( SIFAIL /= N )THEN; LINFO = -12 ELSE IF( N > 0 )THEN IF(LSAME(LJOBZ, 'V')) THEN LDZ = MAX(1,N); LZ=LM ELSE LDZ = 1; LZ=1 ENDIF IF( PRESENT(IFAIL) )THEN; LIFAIL => IFAIL ELSE; ALLOCATE( LIFAIL(N), STAT=ISTAT ); END IF ! .. DETERMINE THE WORKSPACE NB = ILAENV_F77( 1, BSNAME, LUPLO, N, -1, -1, -1 ) IF( NB < 5 .OR. NB >= N )THEN NB = 5 END IF LWORK = N*(3+NB) ALLOCATE(Z(LDZ, LZ), STAT=ISTAT) IF (ISTAT /= 0) LINFO = -100 ALLOCATE(IWORK(10*5*N), WORK(10*LWORK), RWORK(7*N), STAT=ISTAT) IF( ISTAT /= 0 )THEN DEALLOCATE(IWORK, WORK, STAT=ISTAT1(1)) LWORK = N*8*10 ALLOCATE(IWORK(10*5*N), WORK(LWORK), RWORK(7 *N), STAT=ISTAT) IF( ISTAT /= 0 ) THEN LINFO = - 100 ELSE CALL ERINFO( -200, SRNAME, LINFO ) ENDIF END IF IF( LINFO == 0 )THEN IF( PRESENT(ABSTOL) )THEN; LABSTOL = ABSTOL ELSE; LABSTOL = 2*LAMCH_F77('Safe minimum'); ENDIF ! .. CALL LAPACK77 ROUTINE CALL HEGVX_F77( LITYPE, LJOBZ, LRANGE, LUPLO, N, A, LDA, B, & & LDB, LVL, LVU, LIL, LIU, LABSTOL, LM, W, Z, LDZ, WORK, & & LWORK, RWORK, IWORK, LIFAIL, LINFO ) IF( PRESENT(M) ) M = LM IF (LSAME(LJOBZ,'V')) A(1:LDZ, 1:LM)=Z(1:LDZ, 1:LM) END IF DEALLOCATE(IWORK, WORK, RWORK, STAT=ISTAT1(1)) DEALLOCATE(Z, STAT=ISTAT) END IF CALL ERINFO(LINFO,SRNAME,INFO,ISTAT) END SUBROUTINE CHEGVX_F95
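For quick reference, the three generalized eigenproblems selected by the ITYPE argument, together with the eigenvector normalizations documented above, in display form:

$$\mathrm{ITYPE}=1:\; A z = \lambda B z, \qquad \mathrm{ITYPE}=2:\; A B z = \lambda z, \qquad \mathrm{ITYPE}=3:\; B A z = \lambda z,$$

$$Z^{H} B\, Z = I \;\;(\mathrm{ITYPE}=1,2), \qquad Z^{H} B^{-1} Z = I \;\;(\mathrm{ITYPE}=3).$$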
!
! Interpolate_Utility
!
! Container module for interpolation routine modules
!
!
! CREATION HISTORY:
!       Written by:  Paul van Delst, CIMSS/SSEC 06-Oct-2006
!                    [email protected]
!

MODULE Interpolate_Utility

  ! ------------------
  ! Environment set up
  ! ------------------
  ! Modules used
  USE Linear_Interpolation,     ONLY: Linear_Interpolate
  USE Polynomial_Interpolation, ONLY: Polynomial_Interpolate
  USE Spline_Interpolation,     ONLY: Spline_Initialize, &
                                      Spline_Interpolate
  ! Disable all implicit typing
  IMPLICIT NONE

  ! ------------
  ! Visibilities
  ! ------------
  PRIVATE
  PUBLIC :: Linear_Interpolate
  PUBLIC :: Polynomial_Interpolate
  PUBLIC :: Spline_Initialize
  PUBLIC :: Spline_Interpolate

  ! -----------------
  ! Module parameters
  ! -----------------
  ! RCS Id field. NOTE: the original Id string was lost in extraction;
  ! '$Id$' is an unexpanded RCS keyword placeholder, not the original value.
  CHARACTER(*), PRIVATE, PARAMETER :: MODULE_RCS_ID = '$Id$'

END MODULE Interpolate_Utility
""" initMat!(matrix::SparseMatrixCSC{Tv, Ti}, diag_l::Ti, diag_u::Ti, size::Ti; scale::Real = 1.0, shift::Real = 0.0, sparsity::Real = 0.9) where {Tv<:Complex, Ti<:Integer} Initialization of matrix to be generated. This function is for non-Hermitian case, in which the entries of matrix are complex scalars. In this function, the parameters `diag_l` and `diag_u` determine the range (`[diag_l, diag_u]`) of lower triangular part to be filled. The two parameters should also be negatif, which refer to the offfsets of diagonals in the lower triangular part. The parameter `size` defines the size of matrix to be generated. The non-zero elements of the initialized matrices are randomly generated in the interval (0,1). Two optionals parameters `shift` and `scale` allow shifting and scaling this interval. Another optional parameter `sparsity` is more important. It determines the possility of the elements within the band of diagonal determined by `diag_l` and `diag_u` to be non-zeros. In other words, the initalized matrix would be more sparse with a smaller number of `sparisty` parameter. ## Examples ```julia julia> Am=spzeros(ComplexF64, 50, 50); julia> initMat!(Am, -20, -10, 50) julia> Am 50×50 SparseMatrixCSC{ComplexF64, Int64} with 335 stored entries: ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⣟⡳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⣿⣿⣢⣵⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠙⢟⣾⣿⣽⢲⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠙⢽⣿⣛⣿⣵⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠙⠽⣻⡾⣿⣷⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠙⢟⣻⢿⢻⣗⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠙⢿⣽⣯⣻⣳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢿⣛⢿⡾⣷⣄⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⢯⡽⣯⣜⣷⢄⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠚⠋⠛⠚⠓⠀⠀⠀⠀⠀ ``` ## Examples ```julia julia> Am=spzeros(ComplexF64, 50, 50); julia> initMat!(Am, -20, -10, 50, sparsity=0.1) julia> Am 50×50 SparseMatrixCSC{ComplexF64, Int64} with 36 stored entries: ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠠⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠉⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠂⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⢂⠀⠈⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠂⠈⠀⠤⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⢀⢀⠅⠂⠄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠂⠀⠠⠈⠰⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠂⠈⠀⠡⢀⠀⠀⠀⠀⠀⠀⠀⠀ ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠄⢄⡆⢀⠀⠀⠀⠀⠀⠀ ```⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀ """ function initMat!( matrix::SparseMatrixCSC{Tv, Ti}, diag_l::Ti, diag_u::Ti, size::Ti; scale::Real = 1.0, shift::Real = 0.0, sparsity::Real = 0.9 ) where {Tv<:Complex, Ti<:Integer} diag_l = abs(diag_l) diag_u = abs(diag_u) if diag_l < diag_u error( "for initialisationg of matrix, please ensure abs(diag_l) < abas(diag_u) ", ) end if diag_u >= size error( "for initialisationg of matrix, please ensure abs(diag_u) < size ", ) end cnt::Ti = (diag_l - diag_u + 1) * (2 * size - diag_l - diag_u) / 2 rnd = rand(MersenneTwister(1234), Float64, cnt) for i = 1:cnt if real(rnd[i]) < 1.0 - sparsity rnd[i] = 0.0 end end idx = 0 for i = diag_u + 1:size for j = max(1, i - diag_l) : i - diag_u idx += 1 matrix[i, j] = Tv(scale * (rnd[idx]) + shift) end end end """ initMat!(matrix::SparseMatrixCSC{Tv, Ti}, diag_l::Ti, diag_u::Ti, size::Ti; scale::Real = 1.0, shift::Real = 0.0, sparsity::Real = 0.9) where {Tv<:Real, Ti<:Integer} Initialization of matrix to be generated. This function is for non-Symmetric case, in which the entries of matrix are real scalars. The usage of this function is quite similar as the one for non-Hermitian case, the only difference is that the entries of matrices are real scalars. 
## Examples
```julia
julia> Am=spzeros(Float64, 50, 50);
julia> initMat!(Am, -20, -10, 50, sparsity=0.5)
julia> Am
50×50 SparseMatrixCSC{Float64, Int64} with 178 stored entries:
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠌⡱⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠍⠫⢠⡀⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠈⠏⣌⠺⣑⠰⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠙⣣⢑⡿⢡⢀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠅⣊⡼⠖⠷⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠉⢀⣫⠇⢛⡆⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠑⠟⡕⣭⠉⣱⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⢐⡚⢘⠔⣡⢄⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠪⡌⡆⣜⣆⢀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠒⠉⠂⠘⠁⠀⠀⠀⠀⠀
```
"""
function initMat!(
    matrix::SparseMatrixCSC{Tv, Ti},
    diag_l::Ti,
    diag_u::Ti,
    size::Ti;
    scale::Real = 1.0,
    shift::Real = 0.0,
    sparsity::Real = 0.9
) where {Tv<:Real, Ti<:Integer}
    diag_l = abs(diag_l)
    diag_u = abs(diag_u)

    if diag_u < 2
        error(
            "for initialisation of non-symmetric matrix, please ensure abs(diag_u) >= 2 ",
        )
    end

    if diag_l < diag_u
        error(
            "for initialisation of matrix, please ensure abs(diag_l) >= abs(diag_u) ",
        )
    end

    if diag_u >= size
        error(
            "for initialisation of matrix, please ensure abs(diag_u) < size ",
        )
    end

    # number of entries in the band formed by diagonals -diag_l .. -diag_u
    cnt::Ti = div((diag_l - diag_u + 1) * (2 * size - diag_l - diag_u), 2)
    rnd = rand(MersenneTwister(1234), Float64, cnt)

    # an entry inside the band is kept (non-zero) with probability `sparsity`
    for i = 1:cnt
        if rnd[i] < 1.0 - sparsity
            rnd[i] = 0.0
        end
    end

    idx = 0
    for i = diag_u + 1:size
        for j = max(1, i - diag_l) : i - diag_u
            idx += 1
            matrix[i, j] = Tv(scale * (rnd[idx]) + shift)
        end
    end
end
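Both variants fill the band between diagonals `-abs(diag_l)` and `-abs(diag_u)`, and the closed-form count `cnt` is just the sum of the diagonal lengths. As a point of reference (not part of the package), here is a rough Python analogue together with a check of the count formula; the parameter names mirror the Julia code, while the RNG and seeding behavior do not:

```python
# Illustrative Python analogue of initMat! (band fill below the diagonal);
# names mirror the Julia code, the random stream does not.
import numpy as np
import scipy.sparse as sp

def band_count(diag_l, diag_u, n):
    """Closed-form count of slots on diagonals -abs(diag_l)..-abs(diag_u)."""
    l, u = abs(diag_l), abs(diag_u)
    return (l - u + 1) * (2 * n - l - u) // 2

def init_mat(diag_l, diag_u, n, scale=1.0, shift=0.0, sparsity=0.9, seed=1234):
    l, u = abs(diag_l), abs(diag_u)
    rng = np.random.default_rng(seed)
    mat = sp.lil_matrix((n, n))
    for i in range(u, n):                      # 0-based row index
        for j in range(max(0, i - l), i - u + 1):
            r = rng.random()
            if r >= 1.0 - sparsity:            # keep with probability `sparsity`
                mat[i, j] = scale * r + shift
    return mat.tocsr()

# Diagonal -k of an n x n matrix holds n - k entries, so for n = 50 and
# offsets -20..-10 the band has sum(50 - k, k = 10..20) = 385 slots.
assert band_count(-20, -10, 50) == sum(50 - k for k in range(10, 21))
print(init_mat(-20, -10, 50).nnz)  # roughly 0.9 * 385 stored entries
```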
(* Inductive bool : Type := | true : bool | false : bool . *) (* true -> 100 false -> true *) Definition btype (b: bool) : Type := match b with | true => nat | false => bool end. Check btype. Compute (btype true). Compute (btype false). (* Definition foo (b: bool) : btype b := match b with | true => 100 | false => true end. *) (* e1 : Type e2 : Type ========= forall (x:e1), e2 : Type *) (* foo: bool -> nat + bool *) (* foo: forall (b: bool), btype b *) (* forall (n: nat), vector n *) (* nat -> list *) (* Definition foo: forall (b: bool), btype b := fun b: bool => match b with | true => 100 | false => true end. *) Definition foo (b: bool) : btype b := match b with | true => 100 | false => true end. Check foo. Compute foo true. Compute foo false. Check foo. Check btype. Definition bar (b: bool) : Type := match b with | true => nat | false => bool end. (* forall b : bool, btype b forall b : bool, Type *) (* 9 Feb 2017 *) (* Inductive TrueT : Type := | TT : TrueT . Inductive FalseT: Type := . Definition istrue (b: bool) : Type := match b with | true => TrueT | false => FalseT end. Compute istrue true . Compute istrue false. Check (TT: istrue true). Definition true_is_true : istrue true := TT. (* Definition false_is_true : istrue false := ???. *) Definition false_is_not_true : istrue false -> FalseT := fun x : istrue false => match x with end : FalseT . (* P -> Q P ------------ Q e: P -> Q e1 : P -------------------------- e e1 : Q *) (* Fixpoint evenb (n: nat) : bool := match n with | 0 => true | S n' => match n' with | 0 => false | S n'' => evenb n'' end end. *) Fixpoint evenb (n: nat) : bool := match n with | 0 => true | 1 => false | S (S m) => evenb m end. Definition two_plus_even_is_even : forall n:nat, istrue (evenb n) -> istrue (evenb (2 + n)) := fun n:nat => fun pf: istrue (evenb n) => pf. Definition even_plus_two_is_even : forall n:nat, istrue (evenb n) -> istrue (evenb (n+2)) := fix even_plus_two_is_even n := match n return istrue (evenb n) -> istrue (evenb (n+2)) with | 0 => fun x => x | 1 => fun x => x | S (S m) => fun x => even_plus_two_is_even m x end. Inductive even: nat -> Type := | zero_is_even : even 0 | succ_succ_even (n: nat) (pf: even n) : even (S (S n)) . Check even. Check (even: nat -> Type). (* * zero_is_even : even 0 * succ_succ_even 0 zero_is_even : even (S (S 0)) * succ_succ_even (S(S 0)) (succ_succ_even 0 zero_is_even) : even (S (S (S (S 0))) *) (* Definition even' (n: nat) : Type := exists m, n = 2 * m. *) (* e : vector (1 + n) e: vector (S(n)) e : T S===T ------------- e : S *) Definition two_plus_even_is_even' : forall n:nat, even n -> even (2+n) := fun n:nat => fun pf: even n => succ_succ_even n (pf). Definition even_plus_two_is_even' : forall n:nat, even n -> even (n+2) := fix even_plus_two_is_even n := match n return even n -> even (n+2) with | 0 | 1 => fun x => succ_succ_even _ x | S (S m) => fun pf : even (S (S m)) => succ_succ_even (m+2) ( even_plus_two_is_even m match pf in even k return match k with 0 | 1 => TrueT | S(S l) => even l end with | zero_is_even => TT | succ_succ_even m pf => pf end) end. (* Compute (even_plus_two_is_even' 0 zero_is_even). *) Definition even_plus_two_is_even'': forall n:nat, even n -> even (n+2). Proof. fix 1. intros. destruct n. - simpl. apply succ_succ_even. assumption. - destruct n. + apply succ_succ_even. assumption. + inversion H. subst. simpl. apply succ_succ_even. apply even_plus_two_is_even''. assumption. Defined. Compute (even_plus_two_is_even'' 0 zero_is_even). 
Definition induction_for_nat: forall P : nat -> Type, P 0 -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n := fun P => fun (base: P 0) => fun (step: forall n : nat, P n -> P (S n)) => fix ind n := match n return P n with | 0 => base | S m => step m (ind m) end. Inductive EqT (A: Type) : A -> A -> Type := | eqrefl (x: A) : EqT A x x . Check eqrefl. Definition succ_is_not_0: forall n: nat, EqT nat (S n) 0 -> FalseT := fun n => fun (pf: EqT nat (S n) 0) => match pf in EqT _ x y return match x with | 0 => match y with | 0 => TrueT | S _ => FalseT end | S _ => match y with | 0 => FalseT | S _ => TrueT end end with | eqrefl _ z => match z with | 0 => TT | S _ => TT end end : FalseT. Definition succ_is_not_0': forall n: nat, EqT nat (S n) 0 -> FalseT. Proof. intros. inversion H. Defined. (* Inductive EqT (A: Type) (x: A) : A -> Type := | eqrefl : EqT A x x . Definition succ_is_not_0: forall n: nat, EqT nat (S n) 0 -> FalseT := fun n => fun (pf: EqT nat (S n) 0) => match pf in EqT _ _ y return match y with 0 => FalseT | S _ => TrueT end with | eqrefl _ _ => TT end. Definition succ_is_not_0': forall n: nat, EqT nat (S n) 0 -> FalseT. Proof. intros. inversion H. Defined. *) (* forall P: Prop (x y: P), x = y *) *) (* 9 Feb 2017 : Proposition *) Definition istrue (b: bool) : Prop := match b with | true => True | false => False end. Print istrue. Compute istrue true . Compute istrue false. Check (I: istrue true). Definition true_is_true : istrue true := I. (* Definition false_is_true : istrue false := ???. *) Definition false_is_not_true : istrue false -> False := fun x : istrue false => match x with end : False . (* P -> Q P ------------ Q e: P -> Q e1 : P -------------------------- e e1 : Q *) (* Fixpoint evenb (n: nat) : bool := match n with | 0 => true | S n' => match n' with | 0 => false | S n'' => evenb n'' end end. *) Fixpoint evenb (n: nat) : bool := match n with | 0 => true | 1 => false | S (S m) => evenb m end. Definition two_plus_even_is_even : forall n:nat, istrue (evenb n) -> istrue (evenb (2 + n)) := fun n:nat => fun pf: istrue (evenb n) => pf. Check ( (fun n:nat => fun pf: istrue (evenb n) => pf) : (forall n:nat, istrue (evenb n) -> istrue (evenb (2 + n))) ). Definition even_plus_two_is_even : forall n:nat, istrue (evenb n) -> istrue (evenb (n+2)) := fix even_plus_two_is_even n := match n return istrue (evenb n) -> istrue (evenb (n+2)) with | 0 => fun x => x | 1 => fun x => x | S (S m) => fun x => even_plus_two_is_even m x end. Check( (fix even_plus_two_is_even n := match n return istrue (evenb n) -> istrue (evenb (n+2)) with | 0 => fun x => x | 1 => fun x => x | S (S m) => fun x => even_plus_two_is_even m x end) : (forall n:nat, istrue (evenb n) -> istrue (evenb (n+2))) ). Inductive even: nat -> Prop := | zero_is_even : even 0 | succ_succ_even (n: nat) (pf: even n) : even (S (S n)) . Check even. Check (even: nat -> Type). (* * zero_is_even : even 0 * succ_succ_even 0 zero_is_even : even (S (S 0)) * succ_succ_even (S(S 0)) (succ_succ_even 0 zero_is_even) : even (S (S (S (S 0))) *) (* Definition even' (n: nat) : Type := exists m, n = 2 * m. *) (* e : vector (1 + n) e: vector (S(n)) e : T S===T ------------- e : S *) Definition two_plus_even_is_even' : forall n:nat, even n -> even (2+n) := fun n:nat => fun pf: even n => succ_succ_even n (pf). 
Definition even_plus_two_is_even' : forall n:nat, even n -> even (n+2) := fix even_plus_two_is_even n := match n return even n -> even (n+2) with | 0 | 1 => fun x => succ_succ_even _ x | S (S m) => fun pf : even (S (S m)) => succ_succ_even (m+2) ( even_plus_two_is_even m match pf in even k return (match k with 0 | 1 => True | S(S l) => even l end) : Prop with | zero_is_even => I | succ_succ_even m pf => pf end) end. (* Compute (even_plus_two_is_even' 0 zero_is_even). *) Theorem even_plus_two_is_even'': forall n:nat, even n -> even (n+2). Proof. fix 1. intros. destruct n. - simpl. apply succ_succ_even. assumption. - destruct n. + apply succ_succ_even. assumption. + inversion H. subst. simpl. apply succ_succ_even. apply even_plus_two_is_even''. assumption. Defined. Print even_plus_two_is_even''. Compute (even_plus_two_is_even'' 0 zero_is_even). Definition induction_for_nat: forall P : nat -> Prop, P 0 -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n := fun P => fun (base: P 0) => fun (step: forall n : nat, P n -> P (S n)) => fix ind n := match n return P n with | 0 => base | S m => step m (ind m) end. Print eq. Check (eq 0 0). (* Inductive EqT (A: Type) : A -> A -> Prop := *) (* | eqrefl (x: A) : EqT A x x *) (* . *) (* Check eqrefl. *) (* Definition succ_is_not_0: forall n: nat, S n = 0 -> False := fun n => fun (pf: S n = 0) => match pf in x = y return match x with | 0 => match y with | 0 => TrueT | S _ => FalseT end | S _ => match y with | 0 => FalseT | S _ => TrueT end end with | eqrefl _ z => match z with | 0 => TT | S _ => TT end end : FalseT. Definition succ_is_not_0': forall n: nat, EqT nat (S n) 0 -> FalseT. Proof. intros. inversion H. Defined. *) (* Inductive EqT (A: Type) : A -> A -> Prop := *) (* | eqrefl (x: A) : EqT A x x *) (* . *) (* Inductive EqT (A: Type) (x: A) : A -> Prop := | eqrefl : EqT A x x . *) Definition succ_is_not_0: forall n: nat, S n = 0 -> False := fun n => fun (pf: S n = 0) => match pf in _ = y return match y with 0 => False | S _ => True end with | eq_refl _ => I end. Print eq. Definition succ_is_not_0': forall n: nat, S n = 0 -> False. Proof. intros. inversion H. Defined. (* forall P: Prop (x y: P), x = y *) Fixpoint sum n := match n with | 0 => 0 | S m => S m + sum m end. Compute sum 100. Require Import Lia. Lemma sum_n_times: forall n, 2 * sum n = n * (n+1). Proof. intros. induction n; simpl; lia. Qed. Print sum_n_times. (* 10 Feb 2017 *) (* forall, ->, =, P -> False, True, False, /\, \/, exists *) (* Conjunction *) (* Inductive and (P: Prop) (Q: Prop) : Prop := | conj (x: P) (y: Q) : and P Q . (1) a : P, b: Q => conj a b : and P Q *) Print and. Check (forall n: nat, n = 0 /\ (n = 1 -> False) : Prop). (* Disjunction *) (* Inductive or (P: Prop) (Q: Prop) : Prop := | left (a: P) : or P Q | right (b: Q) : or P Q . *) Print or. Check (forall n: nat, n = 0 \/ (n = 0 -> False) : Prop). (* Axiom axiom_of_choice: forall A B (R: A -> B -> Prop), (forall a: A, exists b: B, R a b) -> exists f: A -> B, forall a: A, R a (f a). *) (* Existential *) (* Inductive ex (A: Type) (P: A -> Prop) : Prop := | exist (a: A) (pf: P a) : ex A P . *) Print ex. Check (forall P: Prop, ((P->False)->False) -> P). Check (forall P: Prop, P \/ P->False). Check ((exists a:nat, exists b:nat, exists c:nat, a*a*a + b*b*b = c*c*c) -> False). (* (exists a:A, e) === ex A (fun a => e) (exists b:A, P b) (exists ryu:A, P ryu) *) Lemma ryu: exists n, n = 0 \/ n = 1. Proof. exists 0. left. auto. Qed. Require Import Program. Lemma ryu2: (exists n, n = n+1) -> False. Proof. intros EX. 
destruct EX as [n EQ]. revert EQ. induction n. - intros. simpl in EQ. inversion EQ. - intros. simpl in EQ. apply IHn. dependent destruction EQ. assumption. Qed. Print ryu2.
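The `even_plus_two_is_even` lemmas above take real work in Coq because `n + 2` does not reduce when `n` is a variable (Coq's `plus` recurses on its first argument, which is why the `2 + n` direction is trivial). For comparison, here is a sketch of the same predicate in Lean 4 (not part of the original notes): there, `Nat.add` recurses on its second argument, so `n + 2` is definitionally `Nat.succ (Nat.succ n)` and the lemma is a single constructor application.

```lean
inductive Even : Nat → Prop where
  | zero : Even 0
  | succSucc (n : Nat) : Even n → Even (n + 2)

-- `n + 2` unfolds to `Nat.succ (Nat.succ n)`, so the constructor
-- proves the goal directly, with no recursion or case analysis.
theorem even_add_two {n : Nat} (h : Even n) : Even (n + 2) :=
  Even.succSucc n h
```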
function VaR = garchkvar(data, model, ar, ma, p, q, max_forecast, alpha)
%{
-----------------------------------------------------------------------
 PURPOSE:
 Value-at-Risk estimation for both long and short positions
 (model estimation)
-----------------------------------------------------------------------
 USAGE:
 VaR = garchkvar(data, model, ar, ma, p, q, max_forecast, alpha)

 INPUTS:
 data:          (T x 1) vector of data
 model:         'GARCH', 'GJR', 'AGARCH', 'NAGARCH'
 ar:            positive scalar integer representing the order of AR
 ma:            positive scalar integer representing the order of MA
 p:             positive scalar integer representing the order of ARCH
 q:             positive scalar integer representing the order of GARCH
 max_forecast:  maximum number of forecasts (e.g. 1 trading month = 22 days)
 alpha:         confidence level

 OUTPUTS:
 VaR:           a vector of VaR forecasts
-----------------------------------------------------------------------
 Author: Alexandros Gabrielsen, [email protected]
 Date:   11/2011
-----------------------------------------------------------------------
%}

if nargin == 0
    error('Data, GARCH Model, AR, MA, ARCH, GARCH, Maximum Number of Forecasts, Confidence Level')
end

if size(data,2) > 1
    error('Data vector should be a column vector')
end

if (length(ar) > 1) | (length(ma) > 1) | ar < 0 | ma < 0
    error('AR and MA should be positive scalars')
end

if (length(p) > 1) | (length(q) > 1) | p < 0 | q < 0 | alpha < 0 | alpha > 1
    error('P, Q and alpha should be positive scalars')
end

% Estimate the model
[parameters, stderrors, LLF, ht, kt, nu, resids] = garchk(data, strcat(model), ar, ma, 0, p, q, 0);

% Call the garchkfor2 function to pass the parameters
[MF, VF, KF, MC, VC] = garchkfor2(data, resids, ht, kt, parameters, strcat(model), ar, ma, p, q, max_forecast);

% Estimate the inverse CDF using degrees of freedom expressed as a
% function of the forecasted conditional kurtosis
cdfqtile1 = tinv(alpha, (4*KF-6)./(KF-3));
cdfqtile2 = tinv(1-alpha, (4*KF-6)./(KF-3));

% Estimate VaR
VaR = [MC + cdfqtile1.*VC, MC + cdfqtile2.*VC];

end
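A note on the kurtosis-to-degrees-of-freedom step: a Student-t distribution with nu degrees of freedom has kurtosis K = 3 + 6/(nu - 4), so inverting gives nu = (4K - 6)/(K - 3), which is exactly what the two `tinv` calls use. Below is a minimal Python sketch of the final VaR step, with placeholder forecast arrays standing in for the `garchkfor2` outputs MC, VC and KF; `scipy.stats.t.ppf` plays the role of MATLAB's `tinv`.

```python
# Sketch of the VaR step above in Python; MC, VC, KF stand for the forecasted
# conditional mean, volatility and kurtosis, as in the MATLAB code.
import numpy as np
from scipy.stats import t

MC = np.zeros(22)          # placeholder mean forecasts (22 trading days)
VC = np.full(22, 0.02)     # placeholder volatility forecasts
KF = np.full(22, 4.5)      # placeholder conditional kurtosis (> 3, fat tails)
alpha = 0.05

# dof of a Student-t whose kurtosis equals the forecasted KF
nu = (4.0 * KF - 6.0) / (KF - 3.0)
VaR_long  = MC + t.ppf(alpha, nu) * VC         # lower tail (long position)
VaR_short = MC + t.ppf(1.0 - alpha, nu) * VC   # upper tail (short position)
```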
import topology.tactic import topology.algebra.monoid import topology.instances.real import analysis.special_functions.trigonometric example {X Y : Type*} [topological_space X] [topological_space Y] (f₁ f₂ : X → Y) (hf₁ : continuous f₁) (hf₂ : continuous f₂) (g : Y → ℝ) (hg : continuous g) : continuous (λ x, (max (g (f₁ x)) (g (f₂ x))) + 1) := by guard_proof_term { continuity } ((hg.comp hf₁).max (hg.comp hf₂)).add continuous_const example {κ ι : Type} (K : κ → Type) [∀ k, topological_space (K k)] (I : ι → Type) [∀ i, topological_space (I i)] (e : κ ≃ ι) (F : Π k, homeomorph (K k) (I (e k))) : continuous (λ (f : Π k, K k) (i : ι), F (e.symm i) (f (e.symm i))) := by guard_proof_term { continuity } @continuous_pi _ _ _ _ _ (λ (f : Π k, K k) i, (F (e.symm i)) (f (e.symm i))) (λ (i : ι), ((F (e.symm i)).continuous).comp (continuous_apply (e.symm i))) open real local attribute [continuity] continuous_exp continuous_sin continuous_cos example : continuous (λ x : ℝ, exp ((max x (-x)) + sin x)^2) := by guard_proof_term { continuity } (continuous_exp.comp ((continuous_id.max continuous_id.neg).add continuous_sin)).pow 2 example : continuous (λ x : ℝ, exp ((max x (-x)) + sin (cos x))^2) := by guard_proof_term { continuity } (continuous_exp.comp ((continuous_id'.max continuous_id'.neg).add (continuous_sin.comp continuous_cos))).pow 2 -- Without `md := semireducible` in the call to `apply_rules` in `continuity`, -- this last example would have run off the rails: -- ``` -- example : continuous (λ x : ℝ, exp ((max x (-x)) + sin (cos x))^2) := -- by show_term { continuity! } -- ``` -- produces lots of subgoals, including many copies of -- ⊢ continuous complex.re -- ⊢ continuous complex.exp -- ⊢ continuous coe -- ⊢ continuous (λ (x : ℝ), ↑x)
If $f$ is a polynomial, then $a \cdot (bx^n \cdot f) = (abx^n) \cdot f$.
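Written out, this is just associativity of scalar and polynomial multiplication in the polynomial ring: $a \cdot (b x^n \cdot f) = (a b) \cdot (x^n \cdot f) = (a b x^n) \cdot f$.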
It is difficult to predict the future. What is going to happen in the stock market? Will my property value continue to increase? Will I be able to afford to send my children to college? The current stable economy will not last forever. No one knows what the future holds, but I do know that the organizations with the best leadership will be positioned to be the most competitive, the most desirable to work at and the most likely to prosper in any economic climate.

When a leader freaks out and starts changing things, their people are intensely affected. Inconsistent leadership creates an environment that is full of distractions for the people they lead. When the people are distracted, they cannot focus on the purpose, mission and tasks at hand for the enterprise.

It will always come back to what you believe. What do you believe about keeping your commitments? What do you believe about your people? What do you believe about what is in your control? What do you believe about the values that you say you have and make decisions by?

It is common to want to change, cut, reorganize and eliminate during uncertain times. In uncertain times, the foundation of great leadership is staying steady. The key is to understand where and what to stay steady with. It is your values that keep you steady. Your values detail what you believe about who you are as an organization, how you see your people, what you believe your commitment to the community is and what excellence looks like in your organization.

People do not realize how quickly they are moved by external pressure. They are quick to change what they do. They don’t stay steady with their values. They may not even give their values a thought. If they have a great set of values, the values will serve them in good and difficult times – that’s part of the test of a great set of values. The values are your rock. They are your foundation.

How you see your people is critical. The people you lead know if you believe in them or resent them. They know if you trust them or fear them. And they know if you see them as important or inessential to your success. When you believe that your people create solutions to the challenges you face, you treat them differently than if you see them as the cause of those problems. This critical belief about the people you work with will shape every decision you make and your effectiveness as a powerful leader.

Truly great leaders also have a complete understanding of four areas of control. They understand what they can control. They understand what they cannot control. They understand what they should not try to control. And they understand when something is out of control. When you focus on what you cannot control, weak economic conditions and soft markets, you start making excuses for underperformance. When you understand what is truly in your control, you naturally focus on those areas, create solutions to challenges and lead your people in a more powerful way. If you, as the leader, are freaking out over things that are out of your control, then your people will too. If you, as the leader, are steady, calm and focused on what you can control, then so are the people you lead. In challenging times, as in any times, calm and focused always wins out.

Accountability is first and foremost about people. So is leadership. Accountability is not something you do by yourself. Nor is it something you can ask of your people if you, as the leader, are not living it yourself. That is never going to work.
When you keep your commitments to your people, when you are an accountable leader, your people respond differently. They focus, they are productive and they seek both your success and the success of the organization. Accountability is never about taking the easy way out. There is no quick fix. Accountability is labor intensive. Time is a friend to accountability. Accountability is an investment you make in your people and in your organization for the long haul. It is there when times are good and anyone can prosper. It is there when times are tough, when only some prosper.

We live in the present. The past is gone. There is only now. How we lead now, today, will determine the future of our relationships, our organizations and the results we achieve. Accountability is the future of leadership. In the future, those organizations that focus on accountability will always have the best people and the most committed people, and will achieve the greatest results.
some : Bool -> Bool
some True = True
some False = True
\documentclass[aps, prd,
%%%%%%%%%%%%%
% Choose one of the two following options:
%preprint,%
%onecolumn,
twocolumn,%
%%%%%%%%%%%%
%tightenlines,
superscriptaddress,
showpacs,
nofootinbib,
eqsecnum,
amsmath,
amssymb,
floatfix
%floats
]{revtex4}
%\maxdeadcycles=1000
%\pdfimageresolution=72x
\usepackage{hyperref}
\usepackage{bm}
\usepackage{graphicx}
\usepackage[usenames]{color}
\usepackage{ulem}
\usepackage{amsmath}
\usepackage{amssymb}
\maxdeadcycles=5000
%%%%%%%%%%%%
% Uncomment the following line to display all labels
%\usepackage{showkeys}
%%%%%%%%%%%%
\allowdisplaybreaks
% Better to do this locally for a given very long equation:
% {\allowdisplaybreaks \begin{eqnarray} ... \end{eqnarray}}
% \noindent
\newcommand{\ui}{\mathrm{i}}
\newcommand{\ud}{\mathrm{d}}
\newcommand{\uD}{\mathrm{D}}
\newcommand{\bmSeffp}{{\bm{S}_{0}^+}}
\newcommand{\bmSeffm}{{\bm{S}_{0}^-}}
\newcommand{\Seffp}{{S_{0}^+}}
\newcommand{\Seffm}{{S_{0}^-}}
\newcommand{\IAP}{\affiliation{Institut d'Astrophysique de Paris, UMR 7095 CNRS Universit\'e Pierre \& Marie Curie, 98$^{\text{bis}}$ boulevard Arago, 75014 Paris, France}}
\newcommand{\Maryland}{\affiliation{Maryland Center for Fundamental Physics \& Joint Space-Science Institute,\\ Department of Physics, University of Maryland, College Park, MD 20742, USA}}
\newcommand{\red}{\textcolor{red}}
\newcommand{\blue}{\textcolor{blue}}
\newcommand{\comment}[1]{\textcolor{red}{[#1]}}
\newcommand{\tanja}[1]{\textcolor{green}{#1}}
\newcommand{\gf}[1]{\textcolor{cyan}{#1}}
\newcommand{\ab}[1]{\textcolor{blue}{#1}}

\begin{document}

\title{Spin effects on gravitational waves from inspiraling compact binaries at second post-Newtonian order}

\author{Alessandra Buonanno} \Maryland%
\author{Guillaume Faye} \IAP %
\author{Tanja Hinderer} \Maryland %

\date{\today}

\begin{abstract}
We calculate the gravitational waveform for spinning, precessing compact binary inspirals through second post-Newtonian order in the amplitude. When spins are collinear with the orbital angular momentum and the orbits are quasi-circular, we further provide explicit expressions for the gravitational-wave polarizations and the decomposition into spin-weighted spherical-harmonic modes. Knowledge of the second post-Newtonian spin terms in the waveform could be used to improve the physical content of analytical templates for data analysis of compact binary inspirals and for more accurate comparisons with numerical-relativity simulations.
\end{abstract}

\pacs{04.30.-w, 04.25.-g}

\maketitle

\section{Introduction}
\label{sec:intro}

Coalescing compact binary systems are a key source of gravitational radiation for ground-based gravitational-wave detectors such as the advanced Laser Interferometer Gravitational Wave Observatory (LIGO)~\cite{Abbott:2007}, the advanced Virgo~\cite{Acernese:2008}, the GEO-HF~\cite{Grote:2008zz}, the Large Cryogenic Gravitational Telescope (LCGT) (or KAGRA)~\cite{Kuroda:2010}, coming into operation within the next few years, and future space-based detectors~\cite{lisa,ESALISAwebsite}. For this class of gravitational-wave sources, the signal detection and interpretation will be based on the method of matched filtering~\cite{Finn1992,Finn1993}, where the noisy detector output is cross correlated with a bank of theoretical templates. The accuracy requirement on the templates is that they remain as much as possible phase coherent with the signal over the hundreds to thousands of cycles of inspiral that are within the detector's sensitive bandwidth.
Constructing such accurate templates has motivated a significant research effort during the past 30 years. In the regime where the separation between the two bodies is large, gravitational waveforms can be computed using the post-Newtonian (PN) approximation method~\cite{Sasaki:2003xr, Blanchet2006, Futamase:2007zz}. In the post-Newtonian scheme, the results are written as an asymptotic expansion in powers of $v_A/c$, with $v_A$ being the magnitude of the orbital coordinate velocity $\bm{v}_A$ of body $A$ at a given time. This approximation is physically relevant for $v_A/c \ll 1$, i.e. in the so-called inspiraling regime, where the radiation reaction forces, of order $\sim (v_A/c)^5$, are negligible over an orbital period and act adiabatically on a quasiconservative system. In the domain of validity of the post-Newtonian scheme, the separation $r \sim (G m_A/v^2)\sim (c/v)^2$, with $m=m_1 + m_2$ and $v = |\bm{v}| \equiv |\bm{v}_1 - \bm{v}_2 |$, remains large with respect to the radii of both compact objects $\sim G m_A/c^2$ or, in other words, the bodies can be regarded effectively as point particles.

Post-Newtonian waveforms cease to be reliable near the end of the inspiral and the coalescence phase, where numerical-relativity simulations should be used to predict the gravitational-wave signal~\cite{Pretorius2005a, Campanelli2006a, Baker2006a}. By combining the information from post-Newtonian predictions and the numerical-relativity simulations, it is possible to accurately and analytically describe the gravitational-wave signal during the entire inspiral, plunge, merger and ringdown process~\cite{Buonanno99, Buonanno00, DJS00, Buonanno-Cook-Pretorius:2007, Ajith:2008, Damour2009a, Pan:2009wj, Santamaria:2010yb, Pan:2011gk}.

For nonspinning binaries, the post-Newtonian expansion has been iterated to $3.5$PN order beyond the leading Newtonian order in the gravitational-wave phasing~\cite{Blanchet95a, Blanchet04, Blanchet2005b}. The gravitational-wave amplitude has been computed through $3$PN order~\cite{Blanchet96a, Kidder07, Kidder2008, BFIS} and the quadrupole mode through $3.5$PN order~\cite{Faye:2012we}. However, black hole binaries could potentially have large spins~\cite{Miller2009}, which may be misaligned with the orbital angular momentum, in which case the precession effects add significant complexity to the emitted gravitational waves~\cite{Apostolatos1994}. Ignoring the effects of black hole spins could lead to a reduction in the signal-to-noise ratio and decrease the detection efficiency~\cite{Apostolatos1996, Buonanno:2002fy}, although this should be overcome with phenomenological and physical models~\cite{Pan:2003qt, Buonanno2004, Buonanno:2005pt, Buonanno06, Ajith:2009, Pan:2009wj, Ajith:2011ec, Brown:2012gs, Taracchini:2012ig}. Maximizing the payoffs for astrophysics will require extracting the source parameters from the gravitational-wave signal using template models computed from the most accurate physical prediction available~\cite{CutlerFlanagan1994, PoissonWill95, vanderSluys, AjithBose2009}. Spin effects in the waveform are currently known through much lower post-Newtonian order than for nonspinning binaries.
More specifically, spin effects are known through $2.5$PN order in the phase~\cite{Mikoczi:2005dn,Faye-Blanchet-Buonanno:2006, Blanchet-Buonanno-Faye:2006}, $1.5$PN order in the polarizations for spin-orbit effects~\cite{Kidder:1995zr, Arun:2009}, 2PN order for the spin${}_1$-spin${}_2$ effects~\cite{Kidder:1995zr, Will96} and partially at $3$PN order in the polarizations for the tail-induced spin-orbit effects~\cite{BlanchetEtAl:2011}. In this paper, we compute all spin effects in the gravitational-wave strain tensor through 2PN order. This requires knowledge of the influence of the spins on the system's orbital dynamics as well as on the radiative multipole moments. At this PN order, nonlinear spin effects attributable to the spin-induced quadrupole moments of the compact objects first appear. Using results from Refs.~\cite{1980AnPhy.130..188B,PortoRothstein2006,Porto:2008jj,SP10}, we derive the stress-energy tensor with self-spin terms and compute the self-induced quadrupole terms in the equations of motion and in the source multipole moments at 2PN order. Our results are in agreement with previous calculations~\cite{Poisson:1997ha,Damour01c, Steinhoff:2010zz,Porto:2012x}.

The two main inputs entering our calculation of the gravitational-wave strain tensor through 2PN order are (i) the results of Refs.~\cite{Kidder:1995zr, Poisson:1997ha, Blanchet-Buonanno-Faye:2006} for the influence of the spins on the system's orbital dynamics, which have also been derived by effective field theory and canonical methods~\cite{PortoRothstein2006,Porto:2008tb, Porto:2010tr,Damour:2007nc, Steinhoff08, Steinhoff08a, Steinhoff08b}, and (ii) the spin effects in the system's radiative multipole moments~\cite{Blanchet-Buonanno-Faye:2006}. Recently, the necessary knowledge to compute the waveform at 2.5PN order was obtained using the effective field theory approach~\cite{Porto:2010tr,Porto:2012x}. Here we use (i) and (ii) in the multipolar wave generation formalism~\cite{thorne80, Blanchet:1992br, Blanchet:1995fr} to obtain the waveform for spinning, precessing binaries through 2PN order.

To compute the gravitational polarizations from this result, one must specify an appropriate source frame and project the strain tensor onto a polarization triad. For precessing systems, there are several frames that could be employed~\cite{Finn1993, Kidder:1995zr, Buonanno:2002fy, Schmidt:2010it,OShaughnessy2011, Ochsner2012, 2011PhRvD..84l4011B,Schmidt:2012rh}. For nonprecessing binaries with the spins collinear to the orbital angular momentum, the most natural frame is the one used for nonspinning binaries. Therefore, instead of choosing one frame, for simplicity, we specialize to the nonprecessing case and quasicircular orbits and provide the explicit expressions for the gravitational polarizations. Lengthy calculations are performed with the help of the scientific software \textsc{mathematica}{\footnotesize \textregistered}, supplemented by the package xTensor~\cite{xtensor} dedicated to tensor calculus. Our generic, precessing result is available in \textsc{mathematica} format upon request and can be used to compute the polarizations for specific choices of frame. We notice that the 2PN terms linear in the spins in the polarizations, for circular orbits, were also computed in Ref.~\cite{1998PhRvD..57.6168O}. However, these results contain errors in the multipole moments, which were corrected in Ref.~\cite{Blanchet-Buonanno-Faye:2006}.
For future work at the interface of analytical and numerical relativity, we also explicitly compute the decomposition of the strain tensor into spin-weighted spherical-harmonic modes for nonprecessing spinning binaries on circular orbits. The test-particle limit of these results can also be directly compared with the black-hole perturbation calculations of Refs.~\cite{Tagoshi:1996gh,Pan2010hz}, and we verify that the relevant terms agree. The organization of the paper is as follows. In Sec.~\ref{sec:modelling}, we review the Lagrangian for compact objects with self-induced spin effects ~\cite{1980AnPhy.130..188B,PortoRothstein2006,Porto:2008jj,Steinhoff:2010zz}, compute the stress-energy tensor and derive the self-induced spin couplings in the two-body acceleration and source multipole moments ~\cite{Poisson:1997ha,Damour01c,Steinhoff:2010zz,Porto:2012x}. In Sec.~\ref{sec:dynamics} we summarize the necessary information about spin effects in the equations of motion and the wave generation necessary for our calculation. In Sec.~\ref{sec:SO} we calculate the spin-orbit effects at 2PN order in the strain tensor for generic precessing binaries. In Sec.~\ref{sec:SS} we complete the knowledge of 2PN spin-spin terms by including the spin self-induced quadrupole terms in addition to the spin${}_1$-spin${}_2$ terms obtained in Ref.~\cite{Kidder:1995zr}. In Sec.~\ref{sec:pol} we specialize to quasicircular orbits and explicitly give the polarization tensors for nonprecessing systems. Then, in Sec.~\ref{sec:modes} we decompose the polarizations into spin-weighted spherical-harmonic modes. Finally, Sec.~\ref{sec:conclusions} summarizes our main findings. %We use units where Newton constant is $G =1$, but retain factors of %the speed of light $c$, with $1/c^2$ serving as the formal expansion %parameter for the post-Newtonian expansion. We use lowercase Latin letters $a, b, ..., i, j, ...$ for indices of spatial tensors. Spatial indices are contracted with the Euclidean metric, with up or down placement of the indices having no meaning and repeated indices summed over. We use angular brackets to denote the symmetric, trace-free (STF) projection of tensors, e.g., $T_{\langle ij\rangle} = {\rm STF}[T_{ij}]=T_{(ij)}-\frac{1}{3}\delta_{ij}T_{kk}$, where the round parentheses indicate the symmetrization operation. Square parentheses indicate antisymmetrized indices, e.g., $T_{[ij]} = \frac{1}{2} (T_{ij} - T_{ji})$. The letter $L=i_1... i_\ell$ signifies a multi-index composed of $\ell$ STF indices. The transverse-traceless (TT) projection operator is denoted ${\cal P}_{ijab}^\mathrm{TT}={\cal P}_{a(i}{\cal P}_{j)b}-\frac{1}{2}{\cal P}_{ij}{\cal P}_{ab}$, where ${\cal P}_{ij}=\delta_{ij}-N_iN_j$ is the projector orthogonal to the unit direction $\bm{N}=\bm{X}/R$ of a radiative coordinate system $X^\mu=(cT, \bm{X})$, where the boldface denotes a spatial three-vector. As usual, $g_{\mu\nu}$ represents the space-time metric and $g$ its determinant. The quantity $\varepsilon_{ijk}$ is the antisymmetric Levi-Civit\`a symbol, with $\varepsilon_{123}=1$, and $\epsilon_{\mu\nu\rho\sigma}$ stands for the Levi-Civit\`a four-volume form, with $\epsilon_{0123} = + \sqrt{-g}$. Henceforth, we shall indicate the spin${}_1$-spin${}_2$ terms with $S_1S_2$, the spin${}^2_1$, spin${}^2_2$ terms with $S^2$ and the total spin-spin terms with ${\rm SS}$. 
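For instance, for a dyadic $T_{ij} = a_i b_j$ the STF projection defined above gives
%
\begin{equation*}
a_{\langle i} b_{j\rangle} = \frac{1}{2}\left(a_i b_j + a_j b_i\right) - \frac{1}{3}\,\delta_{ij}\, a_k b_k \, ,
\end{equation*}
%
whose trace vanishes by construction.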
Throughout the paper, we retain only the terms relevant to our calculations and omit all other terms, which either are already known or appear at a higher post-Newtonian order than required for our purposes.

\section{Modeling spinning compact objects with self-induced quadrupoles}
\label{sec:modelling}

In this section we review the construction of a Lagrangian for compact objects with self-induced quadrupole spin effects~\cite{Tulczyjew1959,1980AnPhy.130..188B,PortoRothstein2006,Porto:2008jj, Steinhoff:2010zz}, compute the stress-energy tensor and derive the self-induced spin couplings in the two-body acceleration and source multipole moments. Our findings are in agreement with previous results~\cite{Poisson:1997ha,Damour01c,Steinhoff:2010zz,Porto:2012x}.

\subsection{Lagrangian for compact objects with self-induced spin effects}

A Lagrangian for a system of spinning compact objects with nondynamical\footnote{We shall not include kinetic terms in the Lagrangian for the quadrupole moment that can describe resonance effects in neutron stars.} self-induced quadrupole moments can be obtained by augmenting the Lagrangian for point particles with $L^{\text{S}^2}_A$ describing the quadrupole-curvature coupling for each body $A$. Since the action for body $A$ must admit a covariant representation, the corresponding Lagrangian $L^{\text{S}^2}_A$ should be a function of the four-velocity $u_A^\mu$, the metric $g_{\mu\nu}$, the Riemann tensor $R^\lambda_{~\rho\mu\nu}$ and its covariant derivatives, evaluated at the worldline point $y_A^\mu$, and the spin variables entering via the antisymmetric spin tensor $S_A^{\mu\nu}$.

The spin tensor $S_A^{\mu\nu}$ contains six degrees of freedom. It is well known that, in order to reduce them to the three physical degrees of freedom, a spin supplementary condition (SSC) should be imposed~\cite{BOC79}. This is equivalent to performing a shift of the worldline $y_A^\mu$. In this paper we specialize to the SSC $S_A^{\mu\nu}p^A_\nu=0$, which is equivalent to $S_A^{\mu\nu}u^A_\nu=0$ since $p_A^\mu \approx m_A c u_A^\mu$ through 2.5PN order. To ensure the preservation of the SSC under the evolution, we follow Ref.~\cite{Porto:2008jj} and introduce the spin tensor ${{{{\cal{S}}}}}_A^{\mu\nu} =S_A^{\mu\nu} + 2 u_A^{[\mu} S_A^{\nu]\lambda} u^A_\lambda$. The spin tensor ${{{{\cal{S}}}}}_A^{\mu\nu}$ automatically satisfies the algebraic identity ${{{{\cal{S}}}}}^{\mu\nu}_A u^A_\nu = 0$, which provides three constraints that can be used to reduce the spin degrees of freedom from six to three.

From the above discussion and Refs.~\cite{Porto:2005ac, PortoRothstein2006}, we assume that the Lagrangian of particle $A$ is of the form $L^{\text{S}^2}_A=L_{A\, \mu\nu\lambda\rho} {{{\cal{S}}}}_A^{\mu\nu} {{{\cal{S}}}}_A^{\lambda\rho}$, where $L_{A\, \mu\nu\lambda\rho}$ is a polynomial in the Riemann tensor and its derivatives, as well as the four-velocity $u_A^\mu$. As noticed in Ref.~\cite{DE98a}, any term proportional to $\nabla_{...}R_{\alpha\beta}$ evaluated at point $y_A^\mu$ can be recast into a redefinition of the gravitational field.
As a result, the Riemann tensor may be replaced in each of its occurrences by the Weyl tensor $C^\lambda_{~\rho\mu\nu}$, which can be decomposed into a combination of the gravitoelectric- and gravitomagnetic-type STF tidal quadrupole moments $G_{\mu\nu}^A \equiv G_{\mu\nu}(y_A^\alpha) \equiv - c^2 R_{\mu\alpha\nu\beta} u_A^{\alpha} u_A^{\beta}$ and $H^A_{\mu\nu}\equiv H_{\mu\nu}(y_A^\alpha) \equiv 2 c^3 R^{A*}_{\mu\alpha\nu\beta} u_A^{\alpha} u_A^{\beta}$ with $R^*_{\mu\nu\alpha\beta} \equiv \frac{1}{2} \epsilon_{\mu\nu\rho\sigma} R^{\rho\sigma}_{~~\alpha\beta}$. More generally, the multiple space derivatives of $C^\lambda_{~\rho\mu\nu}$ at point $y_A^\mu$ may be expressed in terms of some STF tidal multipole moments $G^A_{\mu_1 ...\mu_\ell}$ and $H^A_{\mu_1 ... \mu_\ell}$ of parity $1$ and $-1$ respectively. However, those higher-order moments will play no role in this paper. Taking into account that the contraction of the velocity vector $u_A^\nu$ with both $G^A_{\mu\nu}$ and ${{{\cal{S}}}}^{\mu\nu}_A$ vanishes, that the spin and tidal multipole tensors are traceless, and that the Lagrangian must obey parity and time-reversal symmetries %and that we are interested in describing %the self-induced quadrupole terms, we obtain~\cite{1980AnPhy.130..188B,Porto:2005ac,PortoRothstein2006, Porto:2008jj} % \begin{equation} \label{eq:LSSA} L^{\text{S}^2}_A = - \frac{\kappa_A}{2 m_A c^2} G_{\mu\nu} S^\mu_{A \lambda}\,S_A^{\lambda \nu} \, . \end{equation} % Here, we have also assumed that the rotating body is axially symmetric and we have replaced ${{{\cal{S}}}}^{\mu \nu}_A$ with $S^{\mu \nu}_A$ since the difference between these spin variables contributes to the equations of motion at $\mathcal{O}(S_A^3)$, where $S_A= \sqrt{|S_A^\mu S^A_\mu|}$ with $S^A_\mu = \epsilon_{\rho\sigma\nu\mu} S^{\rho\sigma}_A p^\nu_A/(2 m_A c)$. For a neutron star the numerical constant $\kappa_A$ in Eq.~(\ref{eq:LSSA}) depends on the equation of state of the fluid \cite{Laarakkers99}. For an isolated black hole $\kappa_A=1$~\cite{Poisson:1997ha,Damour01c}, but for a black hole in a compact binary $\kappa_A$ can deviate from 1. However, these deviations occur at PN orders that are much higher than the ones considered here. We notice that the leading contribution $\kappa_A=1$ can be obtained by computing the acceleration of body $A$ from Eq.~(\ref{eq:LSSA}) in a compact binary for $m_A\ll m$ and matching it with the acceleration of a test particle in the gravitational field of a Kerr black hole of mass $m$~\cite{Porto:2005ac}. \subsection{Effective stress-energy tensor with self-induced quadrupoles} The piece of the stress-energy tensor encoding the self-induced quadrupole dynamics of body $A$ reads by definition % \begin{equation} \label{eq:TSSAdef} T^{\mu\nu}_{\text{quad},A} = \frac{2}{\sqrt{-g}} \frac{\delta}{\delta g_{\mu\nu}(x)} \int d\tau_A\, L^{\text{S}^2}_A[y_A^\alpha(\tau_A), S^{\alpha\beta}_A(\tau_A)] \, , \end{equation} % where $L_A^{\text{S}^2}$ is the Lagrangian~\eqref{eq:LSSA}. To determine the action of the operator $\delta/\delta g_{\mu\nu}$, which stands for the usual ``functional derivative'' with respect to the field $g_{\mu\nu}$, we need to adopt a specific model for the spin. The rotational state of the extended object $A$ is usually represented by a tetrad of orthonormal vectors $e^{\mu}_{A\overline{\alpha}}(\tau_A)$ with $\overline{\alpha} \in \{0,1,2,3\}$ along the worldline $y^\mu_A$ with affine parameter $\tau_A$. 
The corresponding angular rotation tensor is then defined as $\Omega^{\mu\nu}_A = \eta^{\overline{\alpha}\overline{\beta}} e_{A\overline{\alpha}}^{\mu} D e^\nu_{A\overline{\beta}}/d\tau_A$. We now make the reasonable physical hypothesis that the rotation of the axially symmetric object takes place about the symmetry axis. The moment of inertia $I_A$ along that direction is a 2PN-order quantity $\sim G^2 m_A^3/c^4$ for compactness parameters of order 1, whereas $\Omega^{\mu\nu}_A \sim V_A/R_A$, with $R_A$ the radius of body $A$ and $V_A$ its typical internal velocity, is roughly equal to $c^3/(G m_A)$. In the weak-field limit where $G$ goes formally to zero, the spin must satisfy the relation $S^{\mu\nu}_A = I_A \Omega^{\mu\nu}_A$, as in special relativity~\cite{HR74}. In the presence of a nonnegligible gravitational field, this relation is expected to be modified by nonminimally coupled terms proportional to positive powers of $R^A_{\mu\nu\alpha\beta}$ times positive powers of $I_A$ and $S^{\mu\nu}_A$~\cite{Porto:2005ac}:
%
\begin{equation}
\label{eq:hatspin}
\hat{S}_A^{\mu\nu} = I_A \Big[ \Omega_A^{\mu\nu} + \mathcal{O}\Big(\frac{\hat{S}_A}{c^2}\Big) \Big] \, .
\end{equation}
%
Here we use a hat to distinguish the generic spin variable from the one related to our specific spin model. The corrections $ I_A\times \mathcal{O}(\hat{S}_A/c^2)$ are not relevant for the two-body dynamics in this paper because they correspond to the 4.5PN order when taking into account the factor $\mathcal{O}(1/c)$ contained in the spin variable.

Using the definition~\eqref{eq:hatspin} for the spin variables, we compute in a covariant manner the variation of the action
%
\begin{align}
\mathcal{A}^{\text{S}^2} &= \int d\tau_A L_A^{\text{S}^2}(\tau_A) \nonumber \\
&= \int \frac{d^4 x}{c} \sqrt{-g} \int d\lambda_A L_A^{\text{S}^2}(\lambda_A) \frac{\delta^4(x^\alpha - y^\alpha_A(\lambda_A))}{\sqrt{-g}} \, ,
\end{align}
%
when the metric varies by $\delta g_{\mu\nu}(x)$, and find the following quadrupolar piece of the stress-energy tensor
%
\begin{align}
\label{eq:TSSA}
T^{\mu\nu}_{\text{quad},A} & = \frac{\kappa_A}{m_A c^2} \Big[ \frac{n^*_A}{2} \Big(-3 u_A^\mu u^\nu_A G^A_{\lambda \rho} \hat{S}^{\lambda \sigma}_A \hat{S}_\sigma^{A\rho}\nonumber \\
& \qquad - c^2 R^{(\mu}_{A\lambda\rho \tau} u_A^{\nu)} \hat{S}^\lambda_{A\sigma} \hat{S}_A^{\sigma \rho} u_A^\tau + G^{(\mu}_{A\lambda} \hat{S}^{\nu)}_{A\rho} \hat{S}_A^{\rho \lambda} \Big) \nonumber \\
& + \nabla_\rho \Big( I_A c\, n_A^* (G^{A(\mu}_\lambda u_A^{\nu)} \hat{S}_A^{\lambda \rho} - G_\lambda^{A\rho} \hat{S}^{\lambda(\mu}_A u_A^{\nu)} ) \Big) \Big] \nonumber \\
& -2 \nabla_\lambda \nabla_\rho \Big[ n^*_A \hat{S}_A^{\sigma [\lambda} u_A^{(\mu]} \hat{S}_{\sigma}^{A[\nu)} u_A^{\rho]} \Big] \, ,
\end{align}
%
where we have indicated with $n^*_A$ the Dirac-type scalar density $n^*_A(x^\mu) = \int d\lambda_A \, \delta^4(x^\mu- y_A^\mu(\lambda_A))/\sqrt{-g(x^\nu)}$ and, in the last term, we have adopted the convention that symmetrization of indices applies after antisymmetrization. As derived in Ref.~\cite{Tulczyjew1959}, the most general form of the effective stress-energy tensor is
%
\begin{multline}
T^{\mu\nu}_{\text{skel} , A}(x^\mu) = \\\sum_{\ell = 0}^{+\infty} \nabla_{\lambda_1} \nabla_{\lambda_2} ... \nabla_{\lambda_\ell} \Big[t_A^{\mu\nu | \lambda_1 \lambda_2 ...
\lambda_\ell}(\tau_A) n^*_A(x^\mu) \Big] \, , \label{skeletonTmunu} \end{multline} %% where $\tau_A$ is the proper time of the $A$th worldline at event $y_A^\mu$ with $y_A^0 = x^0$ and the coefficients $t^{\mu\nu | \lambda_1 \lambda_2 ... \lambda_\ell}_A(\tau_A)$ are the ``skeleton'' multipole moments. The latter are not arbitrary but satisfy algebraic constraints imposed by the equation of conservation $\nabla_\nu T^{\mu\nu}_\text{skel} =0$. Let us check that we can indeed recast the total stress-energy tensor, including the monopolar, dipolar and quadrupolar pieces, in the form (\ref{skeletonTmunu}). If we add $T^{\mu\nu}_{\text{quad}}$ to the monopolar and dipolar contributions~\cite{Papa51spin,Tulczyjew1959,Dixon1964, Tagoshi-Ohashi-Owen:2001,Faye-Blanchet-Buonanno:2006} % \begin{align} \label{eq:TMon} T^{\mu\nu}_{\text{mon}+\text{dipole}} = \sum_{A} \Big[n_A^* \tilde{p}_A^{(\mu} u^{\nu)}_A c + \nabla_\lambda \Big( n_A^* c\, u_A^{(\mu} \tilde{S}_A^{\nu) \lambda}\Big)\Big] \,, \end{align} % and redefine the spin variable entering the quadrupolar piece as % \begin{equation} S^{\mu\nu}_A = \tilde{ S}^{\mu\nu}_A - \frac{2 \kappa_A}{m_A c^2} I_A \hat{S}_A^{\lambda[\mu} G_{A\lambda}^{\nu]} \,, \label{S:red} \end{equation} % we obtain the total stress-energy tensor in the form % \begin{subequations} \label{eq:TmunuJ} \begin{align} \label{eq:Tmunu} T^{\mu\nu} & = \sum_{A} \Big[n_A^* \Big( p_A^{(\mu} u^{\nu)}_A c + \frac{1}{3} R^{(\mu}_{A\tau\lambda\rho} J_A^{\nu)\tau\lambda\rho}c^2 \Big) \nonumber \\ & \qquad + \nabla_\lambda \Big( n_A^* c\, u_A^{(\mu} S_A^{\nu) \lambda}\Big) \nonumber \\ & \qquad - \frac{2}{3} \nabla_{\lambda}\nabla_{\rho}\Big(n_A^* c^2 J_A^{\lambda(\mu\nu) \rho} \Big)\Big] \, , \end{align} % where the four-rank tensor $J_A^{\lambda\rho\mu\nu}$ takes the following expression in our effective description: % \begin{equation} \label{JA} J_A^{\lambda\rho\mu\nu} = \frac{3 \kappa_A}{m_A c^2} S_A^{\sigma[\lambda} u^{\rho]}_A S^{A[\mu}_\sigma u_A^{\nu]} \, . \end{equation} \end{subequations} % Consistently with the approximation already made in the spin model~\eqref{eq:hatspin}, we have neglected here the difference of order $I_A\times \mathcal{O}(\hat{S}_A/c^2)$ between the spins $\hat{S}^{\mu\nu}_A$ and $S^{\mu\nu}_A$ in the above formula. The net result is that Eq.~(\ref{eq:Tmunu}) matches Eq.~(\ref{skeletonTmunu}) for $\ell = 0,1,2$ as expected. Moreover, Eqs.~(\ref{eq:TmunuJ}) agree with Refs.~\cite{Steinhoff:2010zz,SP10}. Lastly, the conservation of the stress-energy tensor~\eqref{eq:Tmunu} is equivalent to the equation of motion for the particle worldline, supplemented by the spin precession equation~\cite{SP10}. They read % \begin{subequations} \begin{align} \label{eq:Dixon_EOM} \frac{D p_A^\mu}{d\tau_A} &= - \frac{c}{2} R^\mu_{A\rho\nu\lambda} u_A^\rho S^{\nu \lambda}_A - \frac{c^2}{3} \nabla_\tau R^\mu_{A\rho\nu\lambda} J_A^{\tau\rho\nu\lambda} \, , \\ \label{eq:Dixon_precession} \frac{DS^{\mu\nu}}{d\tau_A} &= 2 c \, p_A^{[\mu} u_A^{\nu]} + \frac{4c^2}{3} R^{[\mu}_{A\tau\lambda\rho} J_A^{\nu]\tau\lambda\rho} \, . \end{align} \end{subequations} % Those equations are in full agreement with the equations of evolution derived from the Dixon formalism truncated at the quadrupolar order~\cite{Dixon1974}. 
\subsection{Self-induced quadrupole terms in the 2PN binary dynamics and source multipole moments}

Once the stress-energy tensor has been derived, the post-Newtonian equations of motion and the source multipole moments parametrizing the linearized gravitational field outside the system can be computed by means of the usual standard techniques~\cite{Blanchet2006}. At 2PN order, the accelerations including the self-spin interactions were obtained in Refs.~\cite{Poisson:1997ha,Damour01c}, but the self-induced quadrupole effects in the source multipole moments were never explicitly included in the standard version of the post-Newtonian scheme, although recently they were calculated at 3PN order using effective-field-theory techniques~\cite{Porto:2010zg}. Here we can use the results of the previous section, which constitutes a natural extension of the standard post-Newtonian approximation for spinning compact bodies~\cite{Faye-Blanchet-Buonanno:2006}, and explicitly derive the self-induced quadrupole couplings in the 2PN dynamics and source multipole moments.

Henceforth, we define the spin vectors $S_A^i$ by the relation $S_i^A/c = g_{ij}^A S_A^j$, where $S_i^A$ is the three-form induced on the hypersurface $t= {\rm const}$ by $S_\mu^A$. Note that it is $S^i_A/c$ that has the dimension of a spin, while $S^i_A$ has been rescaled in order to have a nonzero Newtonian limit for compact objects.

In the post-Newtonian formalism for point particles in the harmonic gauge, it is convenient to represent effectively the source by the mass density $\sigma = (T^{00}+T^{ii})/c^2$, the current density $\sigma_i = T^{0i}/c$, and the stress density $\sigma_{ij}= T^{ij}$. They are essentially the components of the stress-energy tensor rescaled so as not to vanish in the formal limit $c \to +\infty$ for weakly stressed, standard matter. At 2PN order, the second term on the right-hand side of Eq.~(\ref{eq:Tmunu}) does not contribute. From the last term, we obtain the following self-spin contributions:
%
\begin{subequations}
\begin{align}
\label{eq:sigma}
& \sigma^{\text{S}^2} = \frac{\kappa_1}{2 m_1 c^2} \partial_{ij} [\delta_1 S_1^{ki} S_1^{kj}] + 1 \leftrightarrow 2 + \mathcal{O}\Big(\frac{S_A^2}{c^4} \Big) \, ,\\
& \sigma^{\text{S}^2}_i = \mathcal{O}\Big(\frac{S_A^2}{c^2}\Big) \, , \\
& \sigma^{\text{S}^2}_{ij} = \mathcal{O}\Big(\frac{S_A^2}{c^2}\Big) \, ,
\end{align}
\end{subequations}
%
where $1 \leftrightarrow 2$ represents the counterpart of the preceding term with particles 1 and 2 exchanged, and $\delta_1 \equiv \delta^3(\bm{x}-\bm{y}_1)$. At 2PN order, the spin$^{2}$ part of the equations of motion~\eqref{eq:Dixon_EOM} for, say, the first particle, reduces to
%
\begin{equation}
\frac{D(u_i^1 c)}{d\tau_1} = \text{non-}S_1^2\text{ terms} -\frac{\kappa_1}{2m_1^2} \partial_k R^1_{i0j0} S_1^{lk} S_1^{lj} + \mathcal{O} \Big(\frac{S_1^2}{c^4} \Big) \, .
\end{equation}
%
The only occurrence of self-spin interactions at 2PN order on the left-hand side of the above equation comes from the gradient of the time component of the metric, $g_{00} = -1 + 2 V/c^2 + \mathcal{O}(1/c^4)$, where the Newton-like potential $V$ satisfies $\Box V = - 4\pi G \sigma$. Although $V$ coincides with the Newtonian potential $U$ in the leading approximation, it contains higher-order corrections, including quadratic-in-spin terms coming from the mass density~\eqref{eq:sigma}, which are smaller than $U$ by a factor $\mathcal{O}(1/c^4)$.
They read
%
\begin{align}
V_{\text{S}^2} &= - \frac{2\pi G \kappa_1}{m_1 c^2} \partial_{ij} \Delta^{-1} [\delta_1 S_1^{ki} S_1^{kj}] + 1 \leftrightarrow 2 + \mathcal{O} \Big(\frac{S_A^2}{c^4} \Big) \nonumber \\
& = \frac{ G \kappa_1}{2m_1 c^2} \partial_{ij} \frac{1}{r_1} S_1^{ki} S_1^{kj} + 1 \leftrightarrow 2 + \mathcal{O} \Big(\frac{S_A^2}{c^4} \Big)\, ,
\end{align}
%
with $\partial_i = \partial/\partial x^i$ and $r_1\equiv |\bm{x}-\bm{y}_1|$, the symbol $\Delta^{-1}$ standing for the inverse Laplacian operator. Other potentials appear at the 1PN approximation or beyond, but their sources cannot contain a self-induced quadrupole below $\mathcal{O}(1/c^4)$; thus they are negligible here. The self-induced spin part of the acceleration $\bm{a}_1$ of the first particle is therefore given by
%
\begin{equation}
(a^i_1)_{\text{S}^2} = -c^2 (\Gamma_{~0 i}^0)_{\text{S}^2} - \frac{\kappa_1}{2m_1^2} \partial_k R^1_{i0j0} S_1^{lk} S_1^{lj} + \mathcal{O} \Big(\frac{S_A^2}{c^4} \Big) \ .
\end{equation}
%
Replacement of the Christoffel symbols $\Gamma_{~\mu\nu}^\lambda$ and the Riemann tensor by the leading-order values
%
\begin{align}
\Gamma_{~0 i}^0 = - \frac{\partial_i V}{c^2} + \mathcal{O}\Big(\frac{1}{c^4}\Big) \, , ~ R_{i0j0} = - \frac{ \partial_{ij} U}{c^2} + \mathcal{O}\Big(\frac{1}{c^4}\Big) \, ,
\end{align}
%
with $U = G m_1/r_1 + G m_2 /r_2 + \mathcal{O}(1/c^2)$ yields the more explicit result (setting $\partial_{1i} \equiv \partial/\partial y_1^i$):
%
\begin{align}
\label{aSS}
(a^i_1)_{\text{S}^2} &= - \frac{G}{2c^2} \partial_{1ijk} \frac{1}{r} \Big[ \frac{\kappa_2}{m_2} S_2^j S_2^k + \frac{m_2 \kappa_1}{m_1^2} S_1^j S_1^k \Big] \nonumber \\
&+ \mathcal{O} \Big( \frac{1}{c^4} \Big) \, ,
\end{align}
%
which agrees with Refs.~\cite{Poisson:1997ha,Damour01c} in the center-of-mass frame, for $S_A^i/c = \varepsilon_{ijk} S^{jk} + \mathcal{O}( 1/c^3)$.

Self-induced quadrupolar deformations of the bodies also produce 2PN-order terms in the source multipole moments $I_L$ and $J_L$. Those are defined as volume integrals whose integrands are certain polynomials in the densities $\sigma$, $\sigma_i$ and $\sigma_{ij}$ as well as some gravitational potentials, such as $V$, that parametrize the metric. Now, since those potentials are multiplied by prefactors of order $\mathcal{O}(1/c^2)$ and cannot themselves contain spin${}^2$ interactions below the 2PN order, monomials involving one potential or more may be ignored for the calculation. The remaining sources are linear in the $\sigma$ variables. With the help of the general formula~(5.15) of Ref.~\cite{Blanchet98}, it is then immediate to get the self-spin contribution to $I_L$:
%
\begin{equation}
I_L^{\text{S}^2} =\int d^3\!\bm{x} \, x^{\langle i_1}\!\! ...\, x^{i_\ell \rangle}\sigma_{\text{S}^2} + \mathcal{O} \Big(\frac{S_A^2}{c^4} \Big) \, .
\end{equation}
%
Inserting expression~\eqref{eq:sigma} for $\sigma_{\text{S}^2}$ and performing a straightforward integration, we arrive at
%
\begin{equation}
I_L^{\text{S}^2} = \frac{\kappa_1}{2m_1 c^2} \partial_{1ij} (y_1^{\langle i_1}\! \! ...\, y_1^{i_\ell \rangle}) S_1^{ki} S_1^{kj} + 1 \leftrightarrow 2 + \mathcal{O} \Big( \frac{S_A^2}{c^4}\Big) \, .
\end{equation}
%
We can show similarly that $J_L$ is of order $\mathcal{O}(S_A^2/c^2)$.
As a result, at the accuracy level required for the 2PN waveform, the only terms quadratic in one of the spins that originate from the source moments come from the quadrupole $\ell=2$, for which we have
%
\begin{equation}
\label{eq:ISSresult}
I_{ij}^{\text{S}^2} = - \frac{\kappa_1}{m_1 c^4} S_1^{\langle i} S_1^{j\rangle} + 1 \leftrightarrow 2 + \mathcal{O} \Big( \frac{1}{c^6}\Big)\, ,
\end{equation}
%
whereas similar terms in $(I_L)_{\ell \ge 3}$ or $(J_L)_{\ell \ge 2}$ lie beyond our approximation. The above correction to the mass quadrupole agrees with that of Porto \textit{et al}.~\cite{Porto:2010zg} truncated at 2PN order. It is formally of order $\mathcal{O}(1/c^4)$ but, because $\dot{\bm{S}}_A= \mathcal{O}(1/c^2)$, it is cast to the 3PN order in the waveform expansion given below [see Eq.~\eqref{eq:hij}] after the second time derivative is applied. This result was already argued in Ref.~\cite{Racine2008}.

\section{Two-body dynamics with spin effects through 2PN order}
\label{sec:dynamics}

The equations of motion in harmonic coordinates for the relative orbital separation $\bm{x}=r\,\bm{n}$ in the center-of-mass frame are~\cite{Blanchet2006}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subequations}
\label{eq:eom}
\begin{eqnarray}
\frac{d^2 x^i}{dt^2}&=&a^i_{\rm Newt}+\frac{1}{c^2}a^i_{\rm 1PN}+ \frac{1}{c^3}a^i_{\rm SO} \nonumber \\
&& +\frac{1}{c^4}\left[a^i_{\rm S_1S_2}+a^i_{\text{S}^2}+ a^i_{\rm 2PN}\right],
\label{eq:eomscale}
\end{eqnarray}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
where
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{align}
\bm{a}_{\rm Newt}&=-\frac{G m}{r^2}\bm{n} \, , \\
\bm{a}_{\rm 1PN}&=-\frac{G m}{r^2}\left\{\left[(1+3\nu)v^2- \frac{3}{2}\nu\dot r^2-2(2+\nu)\frac{G m}{r}\right]\bm{n} \right.\nonumber\\
& \qquad \qquad -2 \dot r (2-\nu) \bm{v}\Bigg\},
\end{align}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
with $m\equiv m_1+m_2$, $\nu\equiv m_1\,m_2/m^2$, $\bm{n}=\bm{x}/r$ and $\bm{v}=d\bm{x}/dt$. The 2PN acceleration given, e.g., in Ref.~\cite{Kidder:1995zr} will not be needed for our calculation. The spin-orbit terms are~\cite{Kidder:1995zr}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{align}
&\bm{a}_{\rm SO}=\frac{G}{r^3} \left\{6\left[(\bm{n} \times \bm{v})\cdot \left(2 \bm{S} + \delta\, \bm{\Sigma}\right)\right]\bm{n} \right. \label{eq:aSO} \\
& \left. \qquad \qquad - \left[\bm{v}\times \left(7 \bm{S}+ 3\delta\, \bm{\Sigma}\right)\right] +3\dot r \left[\bm{n}\times \left(3 \bm{S}+ \delta\,\bm{\Sigma}\right)\right] \right\} \, ,\nonumber
\end{align}
\end{subequations}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
where we have set $\delta=(m_1-m_2)/m$ and
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subequations}\label{SDelta}\begin{align}
\bm{S} &\equiv \bm{S}_1 + \bm{S}_2\,,\\
\bm{\Sigma} &\equiv m\left[\frac{\bm{S}_2}{m_2} - \frac{\bm{S}_1}{m_1}\right]\,.
\end{align}\end{subequations}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The spin$_1$-spin$_2$ interaction terms are~\cite{Kidder:1995zr}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subequations}
\begin{align}
&\bm{a}_{\rm S_1 S_2}=-\frac{3G}{m \nu r^4}\bigg[ \left[ ( \bm{S}_1 \cdot \bm{S}_2)-5 (\bm{n}\cdot \bm{S}_1) (\bm{n} \cdot \bm{S}_2)\right]\bm{n} \nonumber\\
& \qquad \qquad \qquad \quad +(\bm{n} \cdot \bm{S}_1)\bm{S}_2+(\bm{n}\cdot \bm{S}_2)\bm{S}_1 \bigg].
\end{align}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
As originally computed in Ref.~\cite{Poisson:1997ha} [see Eq.~(\ref{aSS}) above], an additional term due to the influence of the spin-induced mass quadrupole moment on the motion arises at 2PN order:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{align}
&\bm{a}_{\text{S}^2}=-\frac{3G}{2m \nu r^4}\bigg\{ \bm{n}\left[\frac{\kappa_1}{q} S_1^2+q \, \kappa_2 S_2^2 \right]\nonumber\\
& \ \ \ +2 \left[\frac{\kappa_1}{q}(\bm{n} \cdot \bm{S}_1)\bm{S}_1 + q \, \kappa_2 (\bm{n}\cdot \bm{S}_2)\bm{S}_2\right]\nonumber\\
& \ \ \ -\bm{n}\left[\frac{5\kappa_1}{q} (\bm{n}\cdot \bm{S}_1)^2+5 q \, \kappa_2 (\bm{n}\cdot \bm{S}_2)^2\right]\bigg\}. \label{eq:aspinspin}
\end{align}
\end{subequations}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Here, $q=m_1/m_2$ is the mass ratio, and we recall that the parameters $\kappa_A$ characterize the mass quadrupole moments of the bodies: for an axisymmetric rotating body, the quadrupole moment scalar is $Q_{A}=-\kappa_A S_A^2/m$, with $\kappa_A=1$ for a Kerr black hole. We find that the quadratic spin contribution to the acceleration can be rewritten in a simpler way by introducing the spin variables
%
\begin{align}
\bmSeffp &= \frac{m}{m_1} \left(\frac{\kappa_1}{\kappa_2} \right)^{1/4} (1+ \sqrt{1-\kappa_1 \kappa_2})^{1/2} \bm{S}_1 \nonumber \\
&+ \frac{m}{m_2} \left(\frac{\kappa_2}{\kappa_1} \right)^{1/4} (1- \sqrt{1-\kappa_1 \kappa_2})^{1/2} \bm{S}_2 \, ,
\end{align}
%
and $\bmSeffm$, which is obtained by exchanging the labels 1 and 2 in the above equation.\footnote{In the formal limit where the induced quadrupole of at least one body vanishes, so that e.g. $\kappa_2 \to 0$, we may define the effective spins as: $\bmSeffp = \frac{m}{m_1} \sqrt{2} \bm{S}_1$, $\bmSeffm = \frac{m}{m_1} \frac{\kappa_1}{\sqrt{2}} \bm{S}_1 + \frac{m}{m_2} \sqrt{2} \bm{S}_2$.} Those variables generalize the quantity $\bm{S}_0$ of Ref.~\cite{Damour01c} to the case where the two bodies are not black holes. In terms of these spin variables, the spin-spin part of the acceleration reads
%
\begin{multline} \label{eq:ssaccel}
\bm{a}_{\rm S_1 S_2} + \bm{a}_{\text{S}^2} = - \frac{3G}{2 m r^4} [ \bm{n} \, (\bmSeffp \cdot \bmSeffm) + (\bm{n} \cdot \bmSeffp) \, \bmSeffm \\
+ (\bm{n} \cdot \bmSeffm) \, \bmSeffp - 5 \bm{n} \, (\bm{n} \cdot \bmSeffp) (\bm{n} \cdot \bmSeffm) ] \, .
\end{multline}
The spin precession equations through 2PN order are~\cite{Kidder:1995zr, Racine:2008qv}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subequations}
\begin{align}
&\frac{d\bm{S}}{dt}=\frac{G m \nu}{c^2 r^2}\bigg\{ \left[-4 (\bm{v}\cdot \bm{S})- 2 \delta\,(\bm{v}\cdot \bm{\Sigma})\right]\bm{n} \nonumber\\
& \qquad~ +\left[3 (\bm{n}\cdot \bm{S})+ \delta\,(\bm{n}\cdot \bm{\Sigma})\right]\bm{v}+\dot r \left[2 \bm{S}+ \delta\,\bm{\Sigma}\right]\bigg\},\\
&\frac{d\bm{\Sigma}}{dt}= \frac{G m}{c^2 r^2}\bigg\{ \left[-2 \delta\,(\bm{v}\cdot \bm{S})- 2(1-2\nu)(\bm{v}\cdot \bm{\Sigma})\right]\bm{n}\nonumber\\
& \qquad~ +\left[\delta\, (\bm{n}\cdot \bm{S})+ (1-\nu)(\bm{n}\cdot \bm{\Sigma})\right]\bm{v}\nonumber\\
& \qquad~ + \dot r \left[\delta\,\bm{S}+ (1-2\nu)\bm{\Sigma}\right]\bigg\}.
\end{align}
\end{subequations}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
It is often convenient to use a different set of spin variables $S^{\rm c}_{\!Ai}$ whose magnitude remains constant and that obey precession equations of the form $d{\bm{S}}^{\rm c}_A/dt=\bm{\Omega}_A \times \bm{S}_A^{\rm c}$. The relationship between the spin variables appearing in the equations of motion above and the constant-magnitude spin variables is~\cite{Blanchet-Buonanno-Faye:2006}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subequations}
\begin{align}
\bm{S}_{\rm c}&= \bm{S}+\frac{G m \nu}{r c^2}\left[2 \bm{S}+ \delta\,\bm{\Sigma}\right]\nonumber\\
&-\frac{\nu}{2 c^2}\left[(\bm{v}\cdot \bm{S})+ \delta\,(\bm{v}\cdot \bm{\Sigma})\right]\bm{v}\, ,\\
\bm{\Sigma}_{\rm c}&= \bm{\Sigma}+\frac{G m}{r c^2}\left[\delta\,\bm{S} +(1-2\nu)\bm{\Sigma}\right]\nonumber\\
&-\frac{1}{2 c^2}\left[\delta\,(\bm{v}\cdot \bm{S})+ (1-3\nu)(\bm{v}\cdot \bm{\Sigma})\right]\bm{v}.
\end{align}
\end{subequations}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\section{Waveforms with spin effects at 2PN order}

\subsection{General formalism}
\label{subsec:general}

The gravitational radiation from the two-body system is calculated from the symmetric trace-free radiative multipole moments $I_{L}$ and $J_{L}$ using the general formula from Ref.~\cite{thorne80} truncated at 2PN order,
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{align} \label{eq:hij}
&h_{ij}^{\mathrm{TT}}=\frac{2G}{Rc^4}\bigg\{I_{ab}^{(2)}+ \frac{1}{3c} I_{abc}^{(3)}N^c+\frac{1}{12 c^2}I_{abcd}^{(4)} N^{c}N^{d}\nonumber\\
&+ \frac{1}{60 c^3}I^{(5)}_{abcde}N^{c}N^{d}N^{e}+\frac{1}{360c^4} I^{(6)}_{abcdef}N^{c}N^{d}N^{e}N^{f} \nonumber\\
& + N^{k}\varepsilon_{cka}\bigg[\frac{4}{3c}J_{bc}^{(2)}+ \frac{1}{2c^2}J_{bcd}^{(3)}N^d +\frac{2}{15 c^3} J_{bcde}^{(4)}N^{d}N^{e}\nonumber\\
& \qquad \qquad + \frac{1}{36c^4}J_{bcdef}^{(5)}N^{d}N^{e}N^{f}\bigg]\bigg\}{\cal P}_{ijab}^{\mathrm{TT}},
\end{align}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
where $\bm{N}$ is the unit vector pointing from the center of mass of the source to the observer's location and $R$ is the distance between the source and the observer. Here, the superscript ${(n)}$ signifies the $n$th time derivative, and the transverse-traceless projection operator is
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{equation}
{\cal P}_{ijab}^{\mathrm{TT}}={\cal P}_{a(i}{\cal P}_{j)b}- \frac{1}{2}{\cal P}_{ij}{\cal P}_{ab},
\end{equation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
with ${\cal P}_{ij}=\delta_{ij}-N_{i}N_{j}$. The gravitational radiation (\ref{eq:hij}) can be rewritten in a post-Newtonian expansion as
%
\begin{align} \label{eq:hexpansion}
h_{ij}^{\mathrm{TT}} =&\frac{1}{c^4}\,\bigg[ h^{\rm Newt}_{ij\ \mathrm{TT}} + \frac{1}{c^2}\,h^{\rm 1PN}_{ij\ \mathrm{TT}}+\frac{1}{c^2}\,h^{\rm 1PN SO}_{ij\ \mathrm{TT}} + \frac{1}{c^3}\,h^{\rm 1.5PN SO}_{ij\ \mathrm{TT}} \nonumber \\
& \; \; + \frac{1}{c^4}\,h^{\rm 2PN}_{ij\ \mathrm{TT}} + \frac{1}{c^4}\,h^{\rm 2PN SO}_{ij\ \mathrm{TT}} + \frac{1}{c^4}\,h^{\rm 2PN SS}_{ij\ \mathrm{TT}} + \cdots\bigg]\,.
\end{align}
%
The 1PN and 1.5PN spin terms are given explicitly in Refs.~\cite{Kidder:1995zr, Arun:2009}. The terms in the source multipole moments that are \textit{a priori} needed to compute the spin-orbit waveform exactly at 2PN order are identified by considering their schematic structure,
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subequations}
\begin{align}
I_{L} &=I_{L}^{\rm Newt}+ \frac{1}{c^2}I_{L}^{\rm 1PN}+\frac{1}{c^3}I_{L}^{\rm SO}\nonumber \\
& \qquad \quad~~ + \frac{1}{c^4}(I_{L}^{\rm 2PN}+I_L^{\rm SS}) \,, \label{eq:ulstruct}\\
J_{L}&=J_{L}^{\rm Newt}+\frac{1}{c}J_{L}^{\rm SO}+\frac{1}{c^2}J_{L}^{\rm 1PN} \nonumber \\
& \qquad \quad~~ + \frac{1}{c^3}J_{L}^{\rm 1.5PNSO}\,, \label{eq:vlstruct}
\end{align}
\end{subequations}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
together with the scalings of Eqs.~(\ref{eq:hij}) and (\ref{eq:eomscale}).
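To illustrate this counting (the following schematic tally is ours, not an equation of the computation itself), consider the current quadrupole: it enters Eq.~\eqref{eq:hij} with a prefactor $1/c$, its spin-orbit piece carries an extra $1/c$ according to Eq.~\eqref{eq:vlstruct}, and evaluating its second time derivative with the 1PN-accurate equations of motion brings in a further $1/c^2$:
%
\begin{equation*}
\underbrace{\frac{1}{c}}_{\text{Eq.~\eqref{eq:hij}}} \times \underbrace{\frac{1}{c}}_{J_{ab}^{\rm SO}} \times \underbrace{\frac{1}{c^2}}_{\text{1PN motion}} = \frac{1}{c^4} \, ,
\end{equation*}
%
i.e., precisely a 2PN contribution relative to the leading quadrupolar radiation, in agreement with the list of required pieces given next.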
Specifically, the following pieces are required: $(I_{abc}^{\rm Newt})^{(3)}$ using the $1.5$PN motion and $(I_{abc}^{\rm SO})^{(3)}$ with $\bm{a}^{\rm Newt}$, $(J_{ab}^{\rm SO})^{(2)}$ with the 1PN motion and the spin evolution, $(J_{ab}^{\rm 1.5PNSO})^{(2)}$ with $\bm{a}^{\rm Newt}$, $(J_{ab}^{\rm Newt})^{(2)}$ with the $1.5$PN-accurate motion, and $(J_{abcd}^{\rm SO})^{(4)}$ with $\bm{a}^{\rm Newt}$. For the SS part, we need $(I_{ab}^{\rm Newt})^{(2)}$ with $\bm{a}^{\rm SS}$, as the time derivative of $I_{ab}^{\rm SS}$ does not contribute at 2PN order. When we write the waveform in terms of the constant-magnitude spin variables, there is an additional contribution to the 2PN spin piece of the waveform coming from $J_{ab}^{\rm SO}$ with $\bm{a}^{\rm Newt}$ and the 1PN conversion factor in $\bm{\Sigma}_{\rm c}$. The relevant spin contributions to the multipole moments are~\cite{Blanchet-Buonanno-Faye:2006}
\begin{widetext}
\begin{subequations}
\begin{align}\label{JijS}
J^{\rm spin}_{ij} &= \frac{\nu}{c}\biggl\{-\frac{3}{2} r \,n^{\langle i}\, \Sigma^{j\rangle}\biggr\}\nonumber\\
&+ \frac{\nu}{c^3}\biggl\{\left(\frac{3}{7}-\frac{16}{7}\nu\right) r \, \dot r\,v^{\langle i}\, \Sigma^{j\rangle} + \frac{3}{7} \,\delta \, r \, \dot r \,v^{\langle i}\,S^{j\rangle} + \left[\left(\frac{27}{14}-\frac{109}{14}\nu\right) (\bm{v}\cdot\bm{\Sigma}) + \frac{27}{14} \delta \,(\bm{v}\cdot\bm{S})\right] r \, n^{\langle i}\, v^{j\rangle}\nonumber\\
& \qquad + \left[\left(-\frac{11}{14}+\frac{47}{14}\nu\right) (\bm{n}\cdot\bm{\Sigma}) - \frac{11}{14} \delta \,(\bm{n}\cdot\bm{S})\right]r \, v^{\langle i} v^{j\rangle} + \left[\left(\frac{19}{28}+\frac{13}{28}\nu\right) \frac{G m}{r} + \left(-\frac{29}{28}+\frac{143}{28}\nu\right) v^2\right] r \, n^{\langle i} \, \Sigma^{j\rangle}\nonumber\\
&\qquad + \left[\left(-\frac{4}{7}+\frac{31}{14}\nu\right) (\bm{n}\cdot \bm{\Sigma}) - \frac{29}{14} \delta \,(\bm{n}\cdot \bm{S})\right] G m \, n^{\langle i}\, n^{j\rangle} + \left[-\frac{1}{14}\frac{G m}{r} - \frac{2}{7} v^2\right] \delta\, r \, n^{\langle i}\, S^{j\rangle}\biggr\}\,, \\
I^{\rm spin}_{ijk} &= \frac{\nu}{c^3} \, r^2 \, \biggl\{-\frac{9}{2}\,\delta \,n^{\langle i}n^j(\bm{v}\times\bm{S})^{k\rangle}- \frac{3}{2}\,(3-11\nu)\,n^{\langle i}n^j(\bm{v}\times\bm{\Sigma})^{k\rangle} \nonumber\\
& \qquad \quad~~ +3\,\delta \,n^{\langle i} v^j(\bm{n}\times\bm{S})^{k\rangle}+ 3\,(1-3\nu)\,n^{\langle i}v^j(\bm{n}\times \bm{\Sigma})^{k\rangle}\biggr\} \,,\\
J^{\rm spin}_{ijkl} &= -\frac{5\nu}{2 c} \, r^3 \, \left\{\delta\, n^{\langle i}n^j n^k S^{l\rangle}+ (1-3\nu)n^{\langle i}n^{j} n^{k}\Sigma^{l\rangle}\right\}\,.
\end{align}
\label{spinmultis}
\end{subequations}
\end{widetext}
The nonspinning contributions to the multipole moments that we employed in our calculation are
%
\begin{subequations} \label{nonspinmultis}
\begin{align}
I_{ij} &=m\nu \, r^2 \, n^{\langle i}n^{j\rangle}\,, \\
I_{ijk} &= -m\nu\, r^3 \, \delta \, n^{\langle i} n^j n^{k \rangle}\,, \\
J_{ij} &=- m\nu\, r^2 \, \delta \, \varepsilon_{ab\langle i} n^{j\rangle}n^{a}v^b \,.
\end{align}
\end{subequations}

\subsection{Spin-orbit effects}
\label{sec:SO}

Using the multipole moments of Eqs. (\ref{spinmultis}) and (\ref{nonspinmultis}) in Eq.
(\ref{eq:hij}) and substituting the equations of motion (\ref{eq:eom}) and (\ref{eq:aspinspin}), we find the following 2PN spin-orbit piece: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{widetext} \begin{align} \label{eq:hijSO} h_{ij \ {\rm{TT}}}^{\rm 2PN SO}&= \frac{2 G^2 m\nu}{r^2R}{\cal P}^{\rm TT}_{ijab}\Bigg\{ n^a \, n^b \left[\frac{5}{2} (3-13\nu) \, \dot r^2 \, (\bm{n}\times \bm{\Sigma}_{\rm c})\cdot \bm{N} +30 (1-4\nu)(\bm{n}\cdot \bm{N}) \, \dot r \, (\bm{n}\times\bm{v})\cdot \bm{\Sigma}_{\rm c} \right.\nonumber\\ & \left. \; \; \; - (7-29\nu) \, \dot r \, (\bm{v}\times \bm{\Sigma}_{\rm c})\cdot \bm{N} -6(1-4\nu) (\bm{v}\cdot \bm{N})(\bm{n}\times\bm{v})\cdot \bm{\Sigma}_{\rm c} - \frac{1}{2}(3-13\nu) \, v^2 \, (\bm{n}\times \bm{\Sigma}_{\rm c})\cdot \bm{N} \right.\nonumber\\ & \left. \; \; \; - \frac{2G m}{3r}(1-5\nu) (\bm{n}\times \bm{\Sigma}_{\rm c})\cdot \bm{N} +\delta \left( \frac{35}{2}\, \dot r^2 \, (\bm{n}\times \bm{S}_{\rm c})\cdot \bm{N}- \frac{7}{2} \, v^2 \, (\bm{n}\times \bm{S}_{\rm c})\cdot \bm{N} + \ 60 (\bm{n}\cdot \bm{N}) \, \dot r (\bm{n}\times\bm{v})\cdot \bm{S}_{\rm c} \right.\right. \nonumber\\ & \; \; \; - 12(\bm{v}\cdot \bm{N})(\bm{n}\times\bm{v})\cdot \bm{S}_{\rm c} -13 \, \dot r \, (\bm{v}\times \bm{S}_{\rm c})\cdot \bm{N} \bigg) \bigg]\, + \, n^{a}(\bm{n}\times \bm{S}_{\rm c})^{b} \delta \bigg[ 35 (\bm{n}\cdot \bm{N}) \, \dot r^2- 14 (\bm{v}\cdot \bm{N}) \, \dot r- 7 (\bm{n}\cdot \bm{N}) \, v^2 \bigg]\nonumber\\ %%%% %%%% & + n^{a}(\bm{n}\times \bm{N})^{b}\left[\frac{5}{2}(3-13\nu) \, \dot r^2 \, (\bm{n}\cdot \bm{\Sigma}_{\rm c})-\frac{1}{2}(3-13\nu) \, v^2 \, (\bm{n}\cdot \bm{\Sigma}_{\rm c})+ \frac{15}{2}(1-3\nu) \, \dot r^2 \, (\bm{n}\cdot \bm{N}) (\bm{N}\cdot \bm{\Sigma}_{\rm c}) \right.\nonumber\\ & \left. \; \; \; -5 (1-3\nu)\, \dot r \, (\bm{v}\cdot \bm{N}) (\bm{N}\cdot \bm{\Sigma}_{\rm c}) - \frac{3}{2}(1-3\nu) \, v^2 \, (\bm{n}\cdot \bm{N})(\bm{N}\cdot \bm{\Sigma}_{\rm c})-\frac{2Gm}{r}(1-3\nu) (\bm{n}\cdot \bm{N}) (\bm{N}\cdot \bm{\Sigma}_{\rm c})\right.\nonumber\\ &\left. \; \; \; + \frac{4Gm}{3r}(1-5\nu) (\bm{n}\cdot \bm{\Sigma}_{\rm c}) - (3+11\nu) \, \dot r \, (\bm{v}\cdot \bm{\Sigma}_{\rm c})+ \delta \left(\frac{4Gm}{r} (\bm{n}\cdot \bm{S}_{\rm c}) + \frac{35}{2}\, \dot r^2 \, (\bm{n}\cdot \bm{S}_{\rm c}) - \frac{7}{2} \, v^2 \, (\bm{n}\cdot \bm{S}_{\rm c})\right. \right.\nonumber\\ &\left. \; \; \;+ \frac{15}{2} \, \dot r^2 \,(\bm{n}\cdot \bm{N}) (\bm{N}\cdot \bm{S}_{\rm c})- \frac{2Gm}{r}(\bm{n}\cdot \bm{N}) (\bm{N}\cdot \bm{S}_{\rm c})- \frac{3}{2} \, v^2 \, (\bm{n}\cdot \bm{N})(\bm{N}\cdot \bm{S}_{\rm c}) - 5 \, \dot r \, (\bm{v}\cdot \bm{N}) (\bm{N}\cdot \bm{S}_{\rm c}) + \, \dot r \, (\bm{v}\cdot \bm{S}_{\rm c}) \bigg) \right] \nonumber\\ %%% %%% &+ n^{a}(\bm{n}\times \bm{\Sigma}_{\rm c})^{b} \bigg[5 (3-13\nu)(\bm{n}\cdot \bm{N})\, \dot r^2\, - (3-13\nu) (\bm{n}\cdot \bm{N})\, v^2-2(3-14 \nu)(\bm{v}\cdot \bm{N}) \, \dot r \nonumber\\ &\left. \; \; \; - \frac{4Gm}{3r}(1-5\nu)(\bm{n}\cdot \bm{N}) \right]\, + \, n^{a}(\bm{n}\times\bm{v})^{b} \, \dot r \, \left[2 (1-4\nu)(\bm{N}\cdot \bm{\Sigma}_{\rm c})+6 \delta\, (\bm{N}\cdot \bm{S}_{\rm c})\right]\nonumber\\ %%% %%% &+ (\bm{n}\times\bm{N})^{a} \Sigma_{\rm c}^{b} \left[ \frac{5}{4} (1+7\nu)\, \dot r^2 + \frac{15}{4} (1-3\nu) (\bm{n}\cdot \bm{N})^2 \, \dot r^2 - 5(1-3\nu) (\bm{n}\cdot \bm{N})(\bm{v}\cdot \bm{N})\dot r+ \frac{5}{3}(1-3\nu) (\bm{v}\cdot \bm{N})^2\right.\nonumber\\ & \left. 
\; \; \; + \frac{1}{12} (11-25\nu) v^2- \frac{3}{4}(1-3\nu) (\bm{n}\cdot \bm{N})^2 \, v^2 - \frac{G m}{3r} (11+2\nu)-\frac{G m}{r}(1-3\nu) (\bm{n}\cdot \bm{N})^2\right] \, \nonumber\\ %%%% & + (\bm{n}\times\bm{N})^{a} S_{\rm c}^{ b} \ \delta\left[-\frac{5}{4}\, \dot r^2+ \frac{15}{4} (\bm{n}\cdot \bm{N})^2 \, \dot r^2- 5(\bm{n}\cdot \bm{N})(\bm{v}\cdot \bm{N}) \, \dot r+ \frac{5}{3}(\bm{v}\cdot \bm{N})^2+\frac{1}{4} \, v^2\right. \nonumber\\ & \left. \; \; \; -\frac{3}{4} (\bm{n}\cdot \bm{N})^2 \, v^2 - \frac{G m}{r} (\bm{n}\cdot \bm{N})^2\right] \, + \, (\bm{n}\times\bm{v})^{a}\Sigma_{\rm c}^{b} \ (1-4\nu) \bigg[2(\bm{v}\cdot \bm{N})- 2(\bm{n}\cdot \bm{N}) \dot r\bigg] \nonumber\\ %%% %%% & +n^{a} \, v^{ b}\left[36 (-1+4\nu) (\bm{n}\cdot \bm{N}) (\bm{n}\times \bm{v}) \cdot \bm{\Sigma}_{\rm c} -4(2-9\nu) \, \dot r \, (\bm{n}\times \bm{\Sigma}_{\rm c})\cdot \bm{N}+ \frac{2}{3}(13-55\nu) (\bm{v}\times \bm{\Sigma}_{\rm c})\cdot \bm{N} \right.\nonumber\\ &\left. \; \; \; +\delta \left( -72 (\bm{n}\cdot \bm{N}) (\bm{n}\times \bm{v}) \cdot \bm{S}_{\rm c}- 20 \, \dot r\, (\bm{n}\times \bm{S}_{\rm c})\cdot \bm{N}+ \frac{50}{3} (\bm{v}\times \bm{S}_{\rm c})\cdot \bm{N}\right)\right] \nonumber\\ %%% %%% &+ (\bm{n}\times\bm{v})^{a} S_{\rm c}^{b} \delta\left[-6 (\bm{n}\cdot \bm{N}) \dot r+ \frac{14}{3}(\bm{v}\cdot \bm{N}) \right]\, + n^{a} (\bm{v}\times \bm{S}_{\rm c})^{b} \delta \bigg[-26 \, \dot r \, (\bm{n}\cdot \bm{N})+ 12 (\bm{v}\cdot \bm{N})\bigg]\nonumber\\ %%%% & + n^{a}(\bm{v}\times \bm{\Sigma}_{\rm c})^{b}\left[ 2(-7+29\nu) \, \dot r \, (\bm{n}\cdot \bm{N}) + \frac{2}{3}(10-43\nu) (\bm{v}\cdot \bm{N})\right] \, + \, v^{a}(\bm{v}\times\bm{S}_{\rm c})^{b} \, \delta\, \frac{64}{3} (\bm{n}\cdot \bm{N})\nonumber\\ %%% & + v^{a}(\bm{n}\times \bm{\Sigma}_{\rm c})^{b} \left[-2(5-22\nu)\, \dot r \, (\bm{n}\cdot \bm{N})+\frac{4}{3}\left(1- 6 \nu\right) (\bm{v}\cdot \bm{N}) \right] \, + \, v^{a}(\bm{v}\times \bm{\Sigma}_{\rm c})^{b} \ \frac{2}{3} (16-67 \nu) (\bm{n}\cdot \bm{N})\nonumber\\ %%% & +v^{a}(\bm{n}\times \bm{S}_{\rm c})^{b} \delta\left[ -26 \, \dot r \, (\bm{n}\cdot \bm{N})+ \frac{4}{3}(\bm{v}\cdot \bm{N})\right]\, + \, v^{a} (\bm{n}\times \bm{v})^{b} \left[ 2(-1+4\nu) (\bm{N}\cdot \bm{\Sigma}_{\rm c}) - \frac{14}{3}\delta\, (\bm{N}\cdot \bm{S}_{\rm c})\right] \nonumber\\ %%%% & + v^{a}(\bm{n}\times \bm{N})^{b}\left[-(3-23\nu)\, \dot r\, (\bm{n}\cdot\bm{\Sigma}_{\rm c})- 5 (1-3\nu)\, \dot r\, (\bm{n}\cdot \bm{N})(\bm{N}\cdot \bm{\Sigma}_{\rm c})+ \frac{2}{3}(1+8\nu) (\bm{v}\cdot \bm{\Sigma}_{\rm c})\right.\nonumber\\ &\left. 
\; \; \; + \frac{10}{3}(1-3\nu)(\bm{v}\cdot \bm{N}) (\bm{N}\cdot \bm{\Sigma}_{\rm c})+ \delta \left( \frac{10}{3} (\bm{v}\cdot \bm{N}) (\bm{N}\cdot \bm{S}_{\rm c})- 11 \, \dot r\, (\bm{n}\cdot \bm{S}_{\rm c})- 5 \, \dot r\, (\bm{n}\cdot \bm{N})(\bm{N}\cdot \bm{S}_{\rm c}) \right.\right.\nonumber\\
& \; \; \; - \frac{2}{3} (\bm{v}\cdot \bm{S}_{\rm c}) \bigg)\bigg] \, + \, S_{\rm c}^{a} (\bm{v}\times \bm{N})^{b} \delta\left[ \frac{5}{6}\, \dot r-\frac{5}{2} \, \dot r \, (\bm{n}\cdot \bm{N})^2+ \frac{10}{3}(\bm{v}\cdot \bm{N})(\bm{n}\cdot \bm{N})\right]\nonumber\\
%%%
& + \Sigma_{\rm c}^{a} (\bm{v}\times \bm{N})^{b} \left[ -\frac{29}{6}(1+\nu) \, \dot r - \frac{5}{2} (1-3\nu) \, \dot r \, (\bm{n}\cdot \bm{N})^2+ \frac{10}{3} (1-3\nu) (\bm{v}\cdot \bm{N}) (\bm{n}\cdot \bm{N}) \right] \nonumber\\
%%%
& + v^{a} (\bm{v}\times \bm{N})^{b}\left[ -\frac{40\nu}{3} (\bm{n}\cdot \bm{\Sigma}_{\rm c})+ \frac{10}{3} (1-3\nu)(\bm{n}\cdot \bm{N}) (\bm{N}\cdot \bm{\Sigma}_{\rm c}) + \delta \left(\frac{20}{3} (\bm{n}\cdot \bm{S}_{\rm c}) + \frac{10}{3} (\bm{n}\cdot \bm{N})(\bm{N}\cdot \bm{S}_{\rm c}) \right)\right] \nonumber\\
%%%
&+ v^{a} \, v^{b} \left[ \left(\frac{2}{3}-4\nu\right)(\bm{n}\times \bm{\Sigma}_{\rm c})\cdot \bm{N}+ \frac{2}{3} \delta\,(\bm{n}\times \bm{S}_{\rm c})\cdot \bm{N} \right]\nonumber\\
%%%%
&\, +({\bm\Sigma}_{\rm c} \times \bm{N})^{a}n^{b}\bigg[\frac{5}{4}(1+7\nu)\dot r^2+\frac{15}{4}(1-3\nu)\dot r^2(\bm{n}\cdot \bm{N})^2+5(-1+3\nu)\dot r(\bm{n}\cdot \bm{N})(\bm{v}\cdot \bm{N})+\frac{5}{3}(1-3\nu)(\bm{v}\cdot \bm{N})^2\nonumber\\
& \; \; \; +\frac{1}{12}(11-25\nu)v^2+\frac{3}{4}(-1+3\nu)(\bm{n}\cdot \bm{N})^2 v^2 +\frac{G m}{3r}(-17+10\nu)+\frac{G m}{r}(-1+3\nu)(\bm{n}\cdot \bm{N})^2\bigg]\nonumber\\
%%%%%%%%
& +({\bm S}_{\rm c}\times \bm{N})^{a}{n}^{b}\delta\bigg[-\frac{5}{4}\dot r^2+\frac{15}{4}\dot r^2({\bm n}\cdot \bm{N})^2-5\dot r ({\bm n}\cdot \bm{N})({\bm v}\cdot \bm{N})+\frac{5}{3}({\bm v}\cdot \bm{N})^2+\frac{1}{4}v^2-\frac{3}{4}v^2 ({\bm n}\cdot \bm{N})^2\nonumber\\
& \; \; \; -\frac{2G m}{r}- \frac{G m}{r}({\bm n}\cdot \bm{N})^2\bigg]\nonumber\\
%%%%%%
&+(\bm{\Sigma}_{\rm c}\times \bm{N})^{a}v^{b}\bigg[-\frac{29}{6}(1+\nu)\dot r+\frac{5}{2}(-1+3\nu)\dot r(\bm{n}\cdot \bm{N})^2+\frac{10}{3}(1-3\nu)(\bm{n}\cdot \bm{N})(\bm{v}\cdot \bm{N})\bigg]\nonumber\\
& +(\bm{S}_{\rm c}\times \bm{N})^{a}v^{b}\delta\bigg[\frac{5}{6}\dot r-\frac{5}{2}\dot r (\bm{n}\cdot \bm{N})^2+\frac{10}{3}(\bm{n}\cdot \bm{N})(\bm{v}\cdot \bm{N})\bigg]\nonumber\\
& +(\bm{v}\times \bm{N})^{a}n^{b}\bigg[(-3+23 \nu)\dot r (\bm{n}\cdot \bm{\Sigma}_{\rm c})+5(-1+3\nu) \dot r(\bm{n}\cdot \bm{N})(\bm{\Sigma}_{\rm c}\cdot \bm{N}) +\frac{1}{3}(5+7\nu) (\bm{v}\cdot \bm{\Sigma}_{\rm c})\nonumber\\
& \; \; \; +\frac{10}{3}(1-3\nu)(\bm{v}\cdot \bm{N})(\bm{\Sigma}_{\rm c}\cdot \bm{N})+ \delta \left(-11\dot r (\bm{n}\cdot \bm{S}_{\rm c})-5 \dot r (\bm{n}\cdot \bm{N})(\bm{S}_{\rm c}\cdot \bm{N}) +\frac{1}{3}(\bm{v}\cdot \bm{S}_{\rm c})+\frac{10}{3}(\bm{v}\cdot\bm{N})(\bm{S}_{\rm c}\cdot \bm{N})\right) \bigg] \Bigg\}
\end{align}
\end{widetext}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
These contributions add linearly to the other known terms in the waveform. Note that in Eq.
(\ref{eq:hijSO}) we have already anticipated the transverse-traceless projection and simplified the expression using $\delta^{ij}_{\mathrm{TT}}=N^i_{\mathrm{TT}}=N^j_{\mathrm{TT}}=0$ and the interchange identity~\cite{Kidder:1995zr}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{equation}
{\cal P}_{ijab}^{\mathrm{TT}} \ A^a (\bm{B}\times \bm{N})^b= {\cal P}_{ijab}^{\mathrm{TT}}\ B^a(\bm{A}\times \bm{N})^b,
\end{equation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for any vectors $\bm{A}$ and $\bm{B}$.

\subsection{Spin-spin effects}
\label{sec:SS}

Spin-spin terms in the waveform at 2PN order are entirely attributable to the equations of motion; they arise when substituting $\bm{a}^{\rm SS}$ in the time derivatives of $I_{ab}^{\rm Newt}$. The second time derivative of the contribution $I_{ab}^{\text{S}^2}$ given in Eq.~\eqref{eq:ISSresult} is at least of 3PN order (because the spins are constant at the leading approximation) and is therefore negligible for our calculation. We derive
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{align} \label{eq:hijSS}
h^{\rm 2PN SS}_{ij\ \mathrm{TT}} &=\frac{6G^2 \nu}{r^3 R} {\cal P}_{ijab}^{\mathrm{TT}} \bigg\{ \nonumber \\
& \quad ~ n^a \, n^b \Big[5 (\bm{n}\cdot \bmSeffp)(\bm{n}\cdot \bmSeffm)- (\bmSeffp\cdot \bmSeffm) \Big]\nonumber\\
& \qquad ~ - n^a \, \Seffp^b (\bm{n}\cdot \bmSeffm)- n^a \, \Seffm^b (\bm{n}\cdot \bmSeffp) \bigg\}\,.
\end{align}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
We notice that the spin-orbit contributions at 2PN order are zero for an equal-mass, equal-spin black-hole binary. This is a consequence of the multipoles (\ref{spinmultis}) being zero for this highly symmetric binary configuration, for which $\delta=0$ and $\bm{\Sigma}=\bm{0}$. The general results (\ref{eq:hijSO}) and (\ref{eq:hijSS}) are available upon request as a \textsc{mathematica} notebook; they can be used to compute the gravitational polarizations and spherical harmonic modes for precessing binaries for any choice of the source frame and the polarization triad~\cite{Finn1993, Kidder:1995zr,Buonanno:2002fy,Schmidt:2010it,OShaughnessy2011, Ochsner2012, 2011PhRvD..84l4011B,Schmidt:2012rh}. Below, we shall derive the polarizations and spin-weighted spherical-harmonic modes for the case of nonprecessing compact binaries on circular orbits.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Reduction to quasicircular orbits}

We now specialize Eqs.~(\ref{eq:hijSO}) and (\ref{eq:hijSS}) to the case of orbits that have a constant separation $r$ in the absence of radiation reaction and for which the precession time scale is much longer than an orbital period. The details of the derivation of the modified Kepler law relating the orbit-averaged orbital angular frequency $\omega$ and the orbit-averaged orbital separation are discussed in Ref.~\cite{Racine2008}. The instantaneous accelerations (\ref{eq:eom}) and (\ref{eq:ssaccel}) are projected onto a triad consisting of the following unit vectors: $\bm{n}=\bm{x}/r$, the vector $\bm{\ell}=\bm{L}_{\rm N}/|\bm{L}_{\rm N}|$ orthogonal to the instantaneous orbital plane, where $\bm{L}_{\rm N}=m\nu\, \bm{x}\times \bm{v}$ denotes the Newtonian orbital angular momentum, and $\bm{\lambda}=\bm{\ell}\times \bm{n}$.
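For concreteness, the construction of this triad is straightforward to set up numerically; the following minimal sketch (ours, with illustrative names, and not part of the original computation) builds $\bm{n}$, $\bm{\lambda}$ and $\bm{\ell}$ from the instantaneous relative position and velocity:
\begin{verbatim}
import numpy as np

def orbital_triad(x, v, m, nu):
    # x, v: relative position and velocity (3-vectors)
    r = np.linalg.norm(x)
    n = x / r                        # unit separation vector
    L_N = m * nu * np.cross(x, v)    # Newtonian orbital angular momentum
    ell = L_N / np.linalg.norm(L_N)  # unit normal to the orbital plane
    lam = np.cross(ell, n)           # lambda = ell x n completes the triad
    return n, lam, ell
\end{verbatim}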
The orbital separation $r$ and angular frequency $\omega$ are decomposed into their orbit-averaged pieces, indicated by an overbar, and remaining fluctuating pieces, $r=\bar r+\delta r$ and $\omega=\bar \omega+\delta \omega$. Projecting the equations of motion along $\bm{\lambda}$ yields the equality $2 \omega \, \dot{r} + \dot{\omega}\, r = (\bm{\lambda}\cdot\bm{a})$ or, equivalently~\cite{Racine2008},
%
\begin{equation}
\frac{d}{dt} (\omega\, r^2) = - \frac{3G}{2 m \omega\,r^3c^4} \frac{d}{dt} \big[(\bm{n} \cdot \bmSeffp) (\bm{n} \cdot \bmSeffm)\big] \, .
\end{equation}
%
At the 2PN order, $r$ and $\omega$ can be replaced by the constants $\bar{r}$ and $\bar{\omega}$, respectively, on the right-hand side. The expression for $\omega\, r^2$ follows from (i) dropping the time derivatives in the above equation, and (ii) adding an integration constant determined by averaging $\omega\, r^2$ over an orbit. Inserting the result in the projection along $\bm{n}$ of the equations of motion,
%
\begin{equation}
\ddot{r} - \omega^2 r = (\bm{n}\cdot \bm{a}) \, ,
\end{equation}
%
and linearizing in $\delta r$, we find an explicit solution of the resulting differential equation:
%
\begin{subequations}
\begin{align} \label{eq:rdot}
\dot{r} &= \frac{d\delta r}{dt} \nonumber \\
& = - \frac{\omega}{2 m^2 r c^4} [(\bm{n} \cdot \bmSeffp) (\bm{\lambda} \cdot \bmSeffm) + (\bm{\lambda}\cdot \bmSeffp) (\bm{n}\cdot \bmSeffm)] \, ,\\
\omega^2 &= \frac{\ddot{r}-(\bm{n} \cdot \bm{a})}{r} \nonumber \\*
& = \frac{G m}{r^3} \Big[1 - (3-\nu) \frac{G m}{rc^2} \nonumber \\*
& \qquad \quad- \Big(\frac{G m}{r c^2}\Big)^\frac{1}{2} \frac{5 (\bm{\ell}\cdot \bm{S}_{\rm c})+ 3 \delta\, (\bm{\ell}\cdot \bm{\Sigma}_{\rm c})}{mrc^2} \nonumber \\*
& \qquad \quad + \frac{1}{2m^2 r^2c^4} \Big((\bmSeffp \cdot \bmSeffm) + 2 (\bm{\ell}\cdot \bmSeffp) (\bm{\ell}\cdot \bmSeffm)\nonumber \\*
& \qquad \quad ~-5 (\bm{n}\cdot\bmSeffp) (\bm{n}\cdot \bmSeffm)\Big) \Big] \, . \label{eq:omegaofr}
\end{align}
\end{subequations}
%
Inverting Eq.~(\ref{eq:omegaofr}) to write $r$ as a function of $\omega$ in Eq.~(\ref{eq:hijSO}) and inserting there the expression~\eqref{eq:rdot} for $\dot{r}$, we obtain the following spin-orbit terms in the waveform:
%
\begin{widetext}
\begin{eqnarray} \label{eq:hijSOcirc}
h_{ij\ {\rm{TT}}}^{\mathrm{2PN SO}}&=& \frac{G^2 \nu m \omega^2}{3R}{\cal P}_{ijab}^{\mathrm{TT}}\Bigg\{ n^a \, n^b\, \left[ 4 (1-7\nu)(\bm{\ell}\cdot \bm{\Sigma}_{\rm c})(\bm{\lambda}\cdot \bm{N})- (13-59\nu)(\bm{n}\times \bm{\Sigma}_{\rm c})\cdot \bm{N}- 21\delta\,(\bm{n}\times \bm{S}_{\rm c})\cdot \bm{N}\right]\nonumber\\
&& \, + \lambda^a \, \lambda^b \, \left[ 4(7-24\nu)(\bm{\ell}\cdot \bm{\Sigma}_{\rm c})(\bm{\lambda}\cdot \bm{N})+ 4(1-6\nu)(\bm{n}\times \bm{\Sigma}_{\rm c})\cdot \bm{N}+ \delta \bigg(4 (\bm{n}\times \bm{S}_{\rm c})\cdot \bm{N}+ 52 ( \bm{\ell}\cdot \bm{S}_{\rm c})(\bm{\lambda}\cdot \bm{N})\bigg)\right] \nonumber\\
&& \, + \lambda^{a} \, n^{b} \, \bigg[ 4(13-55\nu)(\bm{\lambda}\times \bm{\Sigma}_{\rm c}) \cdot \bm{N}+ 2(-63+239\nu)(\bm{n}\cdot \bm{N})(\bm{\ell}\cdot \bm{\Sigma}_{\rm c}) \nonumber\\
&& \left.
\; \; \; \; \; + \delta \bigg(100 (\bm{\lambda}\times \bm{S}_{\rm c})\cdot \bm{N}- 262(\bm{n}\cdot \bm{N})(\bm{\ell}\cdot \bm{S}_{\rm c}) \bigg)\right] \, + \Sigma_{\rm c}^{a}\, \ell^{b} \, 12 (1-4\nu) (\bm{\lambda}\cdot \bm{N}) \nonumber\\ %%%% %%% &&+ \lambda^{a} \, {\ell}^{b}\bigg[ 12(-1+4\nu)(\bm{N} \cdot \bm{\Sigma}_{\rm c})+ 8(1-6\nu) (\bm{\lambda}\cdot \bm{\Sigma}_{\rm c}) (\bm{\lambda}\cdot \bm{N})+ 4 (-16+67\nu) (\bm{n}\cdot \bm{\Sigma}_{\rm c}) (\bm{n}\cdot \bm{N}) \nonumber\\ && \left. \; \; \; \; \; +\delta \bigg( - 28(\bm{N}\cdot \bm{S}_{\rm c})+ 8 (\bm{\lambda}\cdot \bm{S}_{\rm c}) (\bm{\lambda}\cdot \bm{N})- 128(\bm{n}\cdot \bm{S}_{\rm c}) (\bm{n}\cdot \bm{N})\bigg) \right]\nonumber\\ %%%% &&+ n^{a} \, {\ell}^{b}\bigg[ 2(-13+59\nu) (\bm{\lambda}\cdot \bm{\Sigma}_{\rm c})(\bm{n}\cdot \bm{N})+ 4(-10+43\nu) (\bm{n}\cdot \bm{\Sigma}_{\rm c})(\bm{\lambda}\cdot \bm{N}) \nonumber\\ && \left. \; \; \; \; \; + \delta \bigg( -42 (\bm{\lambda}\cdot \bm{S}_{\rm c})(\bm{n}\cdot \bm{N})- 72 (\bm{n}\cdot \bm{S}_{\rm c})(\bm{\lambda}\cdot \bm{N})\bigg)\right]\, + S_{\rm c}^{a}\, \ell^{b} \, 28\delta\, (\bm{\lambda}\cdot \bm{N}) \nonumber\\ %%%% &&+ n^{a}(\bm{n}\times \bm{N})^{b}\left[ -(1+\nu) (\bm{n}\cdot \bm{\Sigma}_{\rm c})- 21(1-3\nu) (\bm{n}\cdot \bm{N}) (\bm{N}\cdot \bm{\Sigma}_{\rm c}) + \delta \bigg(3(\bm{n} \cdot \bm{S}_{\rm c}) - 21 (\bm{n}\cdot \bm{N}) (\bm{N}\cdot \bm{S}_{\rm c}) \bigg)\right]\nonumber\\ %%%%% &&+ \lambda^{a}(\bm{n}\times \bm{N})^{b}\left[ 2(7+23\nu) (\bm{\lambda}\cdot \bm{\Sigma}_{\rm c})+ 40 (1-3\nu)(\bm{N}\cdot \bm{\Sigma}_{\rm c}) (\bm{\lambda} \cdot \bm{N}) + \delta \bigg( 40 (\bm{N}\cdot \bm{S}_{\rm c}) (\bm{\lambda} \cdot \bm{N}) - 2(\bm{\lambda}\cdot \bm{S}_{\rm c})\bigg)\right]\nonumber\\ %%%%% && + \Sigma_{\rm c}^{a} (\bm{n}\times \bm{N})^{b}\bigg[-(21+17\nu)+ 20 (1-3\nu) (\bm{\lambda}\cdot \bm{N})^2+ 21(-1+3\nu)(\bm{n}\cdot \bm{N})^2\bigg]\nonumber\\ %%%%%% &&+ S_{\rm c}^{a} (\bm{n}\times \bm{N})^{b} \delta \bigg[ -9+20 (\bm{\lambda}\cdot \bm{N})^2-21(\bm{n}\cdot \bm{N})^2\bigg]\, + S_{\rm c}^{a} \, (\bm{\lambda}\times \bm{N})^{b} \, 40 \, \delta \, (\bm{\lambda}\cdot \bm{N}) (\bm{n} \cdot \bm{N}) \nonumber\\ %%%%%% &&+ \lambda^{a} \, (\bm{\lambda}\times \bm{N})^{b} \left[ -80 \nu (\bm{n}\cdot \bm{\Sigma}_{\rm c})+ 20(1-3\nu) (\bm{n}\cdot \bm{N}) (\bm{N}\cdot \bm{\Sigma}_{\rm c})+ \delta \bigg(40 (\bm{n}\cdot \bm{S}_{\rm c})+ 20 (\bm{N} \cdot \bm{S}_{\rm c}) (\bm{n}\cdot \bm{N})\bigg)\right]\nonumber\\ %%%% &&+\Sigma_{\rm c}^{a} \, (\bm{\lambda}\times \bm{N})^{b} \, 40(1-3\nu) (\bm{\lambda}\cdot \bm{N}) (\bm{n} \cdot \bm{N}) \Bigg\} \,. \end{eqnarray} \end{widetext} %%%%%%% Here, we have used that % \begin{equation} (\bm{n}\times\bm{S}_{\rm c})^i=-\lambda^i(\bm{\ell}\cdot \bm{S}_{\rm c})+ \ell^i(\bm{\lambda}\cdot \bm{S}_{\rm c}), \end{equation} % and similarly for $\bm{\Sigma}_{\rm c}$. Finally, we derive the 2PN spin-spin terms for circular orbits. 
They read
%
\begin{align}
& h_{ij \ \mathrm{TT}}^{\mathrm{2PN SS}}= \frac{2G \nu \omega^2}{m R} {\cal P}_{ijab}^{\mathrm{TT}} \bigg\{ n^a\, n^b \Big[-\frac{8}{3} ( \bmSeffp\cdot \bmSeffm) \nonumber \\
& \qquad \qquad \quad + \frac{2}{3} (\bm{\ell}\cdot \bmSeffp) (\bm{\ell}\cdot \bmSeffm) + \frac{40}{3} (\bm{n}\cdot \bmSeffp) (\bm{n}\cdot \bmSeffm) \Big]\nonumber \\
& \qquad + \lambda^a \, \lambda^b \Big[ \frac{2}{3} ( \bmSeffp\cdot \bmSeffm) + \frac{4}{3} (\bm{\ell}\cdot \bmSeffp) (\bm{\ell}\cdot \bmSeffm) \nonumber \\
& \qquad \qquad \quad - \frac{10}{3} (\bm{n}\cdot \bmSeffp) (\bm{n}\cdot \bmSeffm) \Big] \nonumber \\
& \qquad -2 n^a \, \lambda^b \Big[ (\bm{n}\cdot \bmSeffp) (\bm{\lambda}\cdot \bmSeffm)+ (\bm{n}\cdot \bmSeffm) (\bm{\lambda}\cdot \bmSeffp)\Big] \nonumber \\
& \qquad -3 (\bm{n}\cdot \bmSeffp)\, n^{(a} \, \Seffm^{b)} - 3 (\bm{n}\cdot \bmSeffm) \, n^{(a} \, \Seffp^{b)} \bigg\} \, .
\end{align}
%

\subsection{Polarizations for nonprecessing, spinning compact bodies}
\label{sec:pol}

The two polarization states $h_+$ and $h_\times$ are obtained by choosing a coordinate system and taking linear combinations of the components of $h_{ij}^\mathrm{TT}$. Using an orthonormal triad consisting of $\bm{N}$ and two polarization vectors $\bm{P}$ and $\bm{Q}$, the polarizations are
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subequations} \label{eq:polarizations}
\begin{eqnarray}
h_+&=& \frac{1}{2}\left(P^iP^j-Q^iQ^j\right)h_{ij}^\mathrm{TT}\,,\\
h_\times&=&\frac{1}{2}\left(P^iQ^j+Q^iP^j\right)h_{ij}^\mathrm{TT}\, .
\end{eqnarray}
\end{subequations}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Although different choices of $\bm{P}$ and $\bm{Q}$ give different polarizations, the particular linear combination of $h_+$ and $h_\times$ corresponding to the physical strain measured in a detector is independent of the convention used. For nonspinning binaries, one usually chooses a coordinate system such that the orbital plane lies in the $x\mbox{-}y$ plane, and the direction of gravitational-wave propagation $\bm{N}$ is in the $x\mbox{-}z$ plane. When the spins of the bodies are aligned or anti-aligned with the orbital angular momentum, the system's evolution is qualitatively similar to the case of nonspinning bodies. This case is characterized by the absence of precession of the spins and orbital angular momentum; thus, the orbital plane remains fixed in space. However, the spins still contribute to the phase and correct the amplitude of the waveform, which we provide explicitly in this subsection. We use the conventions that the $z$ axis coincides with $\bm{\ell}$, and that the vectors $\bm{\ell}$, $\bm{N}$, $\bm{n}$, and $\bm{\lambda}$ have the following $(x,y,z)$ components:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{subequations}
\begin{eqnarray}
\bm{\ell}&=& (0,0,1), \qquad \bm{N}= (\sin\theta, 0 ,\cos\theta), \\
\bm{n}&=& (\sin\Phi, -\cos\Phi, 0), \qquad \bm{\lambda}= (\cos \Phi, \sin \Phi, 0),
\end{eqnarray}
\end{subequations}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
where $\Phi$ is the orbital phase, defined such that at the initial time $\bm{n}$ points in the $x$ direction.
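As a usage illustration (again a sketch of ours; the input \texttt{hTT} stands for any transverse-traceless strain tensor, e.g. the sum of the pieces derived above, and $\bm{P}$, $\bm{Q}$ are the polarization vectors specified just below), these conventions and the projections~(\ref{eq:polarizations}) translate into a few lines of code:
\begin{verbatim}
import numpy as np

def unit_vectors(theta, Phi):
    # (x,y,z) components of ell, N, n, lambda in the conventions above
    ell = np.array([0.0, 0.0, 1.0])
    N   = np.array([np.sin(theta), 0.0, np.cos(theta)])
    n   = np.array([np.sin(Phi), -np.cos(Phi), 0.0])
    lam = np.array([np.cos(Phi),  np.sin(Phi), 0.0])
    return ell, N, n, lam

def polarizations(hTT, P, Q):
    # h_+ = (P_i P_j - Q_i Q_j) hTT_ij / 2 ; h_x = (P_i Q_j + Q_i P_j) hTT_ij / 2
    h_plus  = 0.5 * (P @ hTT @ P - Q @ hTT @ Q)
    h_cross = 0.5 * (P @ hTT @ Q + Q @ hTT @ P)
    return h_plus, h_cross
\end{verbatim}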
We use the following polarization vectors:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{equation}
\bm{P}=\bm{N}\times \bm{\ell}, \ \ \ \bm{Q}=\bm{N}\times \bm{P}.
\end{equation}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The vector $\bm{P}$ points toward the ascending node, where the orbital separation vector crosses the plane of the sky from below. With these conventions, Eqs.~(\ref{eq:polarizations}) with Eqs.~(\ref{eq:hijSOcirc}), specialized to the case where the only nonvanishing spin components are $(\bm{\Sigma}_{\rm c}\cdot \bm{\ell})$ and $(\bm{S}_{\rm c}\cdot\bm{\ell})$, become
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{widetext}
\begin{eqnarray} \label{eq:hplus}
h_+^{\mathrm{2PN \ spin}} &=&-\frac{G^2\nu m \omega^2}{12 R }\cos\Phi \ \sin\theta\left\{ 3 \delta\, (\bm{\ell}\cdot \bm{S}_{\rm c})(-33+\cos ^2\theta)+ \left[(-93+167\nu)+ 9(1-3\nu)\cos^2\theta\right]( \bm{\ell}\cdot \bm{\Sigma}_{\rm c})\right\} \nonumber\\
&&-\frac{9G^2\nu m\omega^2}{4 R }\cos (3\Phi) \ \sin\theta \left\{ \delta\,(5-\cos^2\theta)( \bm{\ell}\cdot \bm{S}_{\rm c}) + 3(1-3\nu)\sin^2\theta (\bm{\ell}\cdot \bm{\Sigma}_{\rm c})\right\}\nonumber \\
&&-\frac{2G \nu \omega^2}{m R }\cos (2\Phi)\left(1+\cos^2 \theta\right) ( \bm{\ell}\cdot \bmSeffp )(\bm{\ell}\cdot \bmSeffm )\, , \\
h_\times^{\mathrm{2PN \ spin}}&=& -\frac{G^2\nu m \omega^2}{48 R }\sin\Phi \sin(2\theta)\left\{ 6\delta\, ( \bm{\ell}\cdot \bm{S}_{\rm c}) \left(-33+\cos^2\theta\right)+ \left[(-171+289 \nu)+ 3(1-3\nu)\cos(2\theta)\right]( \bm{\ell}\cdot \bm{\Sigma}_{\rm c})\right\} \nonumber\\
&&-\frac{9G^2\nu m \omega^2}{8 R }\sin(3\Phi)\sin(2\theta)\left\{ \delta\, ( \bm{\ell}\cdot \bm{S}_{\rm c})\left(7-3\cos^2\theta\right)+ 3(1-3\nu)\sin^2\theta ( \bm{\ell} \cdot \bm{\Sigma}_{\rm c})\right\}\nonumber\\
&&-\frac{4G\nu \omega^2}{m R}\sin(2\Phi)\cos\theta ( \bm{\ell}\cdot \bmSeffp )( \bm{\ell}\cdot \bmSeffm )\,. \label{eq:hcross}
\end{eqnarray}
\end{widetext}
Here, the convention for the 2PN spin pieces of the polarizations is analogous to that adopted for the PN expansion of the waveform (\ref{eq:hexpansion}), with the expansion coefficients related by Eqs.~(\ref{eq:polarizations}) at each PN order.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Gravitational modes for nonprecessing, spinning compact bodies}
\label{sec:modes}

The gravitational-wave modes are obtained by expanding the complex polarization
\begin{equation} \label{hcomplex}
h = h_+ - i h_\times\,,
\end{equation}
into spin-weighted $s=-2$ spherical harmonics as
%
\begin{equation}\label{eq:modeexp}
h(\theta,\phi) = \sum_{\ell = 2}^{+\infty} \sum_{m=-\ell}^{\ell} h_{\ell m}\, {}_{-2}Y^{\ell m}(\theta,\phi) \, ,
\end{equation}
%
where
%
\begin{equation}
{}_{-s} Y^{\ell m}(\theta,\phi) = (-1)^s \sqrt{\frac{2\ell + 1}{4\pi}} \, d_{sm}^\ell(\theta)\, e^{i m \phi}\,,
\end{equation}
%
with
%
\begin{eqnarray}
&&d_{sm}^\ell(\theta) = \sum_{k=\max(0,m-s)}^{\min(\ell+m,\ell-s)} \frac{(-1)^k}{k!} \nonumber\\
&&\times \frac{\sqrt{(\ell + m)! (\ell - m)!(\ell + s)!(\ell - s)!}}{(k - m + s)! (\ell + m - k)! (\ell - k - s)!} \nonumber\\
&&\times \left(\cos (\theta/2)\right)^{2\ell+m-2k-s} \left( \sin (\theta/2) \right)^{2k-m+s} \,.
\end{eqnarray}
%
The modes $h_{\ell m}$ can be extracted by computing
\begin{equation} \label{eq:hlm}
h_{\ell m} = \int d\Omega \, h(\theta, \phi) \, {}_{-2}{Y}^{\ell m*}(\theta,\phi) \,,
\end{equation}
%
where the integration is over the solid angle $\int d\Omega=\int^{\pi}_0 \sin\theta \, d\theta \int^{2\pi}_0 d\phi$, and using the orthogonality property $\int d\Omega \ {}_{-s}Y^{\ell m}(\theta,\phi) \, {}_{-s}Y^{\ell' m' *}(\theta,\phi) = \delta^{\ell \ell'} \delta^{m m'}$, where $\delta^{\ell \ell'}$ is the Kronecker symbol and the star denotes complex conjugation.
%
Using Eqs.~(\ref{eq:hplus}) and (\ref{eq:hcross}) in Eq.~(\ref{eq:hlm}), we find the following nonvanishing modes:
%
\begin{equation}
( h_{\ell m})^{\mathrm{2PN\,spin}}=-\frac{2 G^2 m \nu \, \omega^2}{R}\sqrt{\frac{16\pi}{5}}e^{-im\Phi} \ \hat h_{\ell m},
\end{equation}
%
\begin{subequations} \label{eq:hlmnonprec}
\begin{eqnarray}
\hat h_{21}&=& -\frac{43}{21} \delta\, (\bm{\ell}\cdot \bm{S}_{\rm c})+ \frac{1}{42}(-79+139\nu)(\bm{\ell}\cdot \bm{\Sigma}_{\rm c})\,, \\
\hat h_{22}&=& \frac{(\bm{\ell}\cdot \bmSeffp)(\bm{\ell}\cdot \bmSeffm )}{G m^2} \,, \label{eq:h22} \\
\hat h_{31}&=& \frac{1}{24\sqrt{14}} \delta\, (\bm{\ell}\cdot \bm{S}_{\rm c})+ \frac{5}{24\sqrt{14}}(1-3\nu)(\bm{\ell}\cdot \bm{\Sigma}_{\rm c}) \,, \\
\hat h_{33}&=&-\frac{3\sqrt{105}}{8\sqrt{2}} \delta\, (\bm{\ell}\cdot \bm{S}_{\rm c})- \frac{9}{8}\sqrt{\frac{15}{14}}(1-3\nu)(\bm{\ell}\cdot \bm{\Sigma}_{\rm c})\,, \\
\hat h_{41}&=&\frac{\sqrt{5}}{168\sqrt{2}}\delta\, (\bm{\ell}\cdot \bm{S}_{\rm c})+ \frac{\sqrt{5}}{168\sqrt{2}}(1-3\nu)(\bm{\ell}\cdot \bm{\Sigma}_{\rm c})\,, \\
\hat h_{43}&=&\frac{9\sqrt{5}}{8\sqrt{14}}\delta\, (\bm{\ell}\cdot \bm{S}_{\rm c})+ \frac{9\sqrt{5}}{8\sqrt{14}}(1-3\nu)(\bm{\ell}\cdot \bm{\Sigma}_{\rm c}) \,.
\end{eqnarray}
\end{subequations}
%
We have explicitly checked that in the test-mass limit $\nu\to 0$, Eqs.~(\ref{eq:hlmnonprec}) reduce to the 2PN ${\cal O}(q)$ and ${\cal O}(q^2)$ terms given in Eqs.~(22) of Ref.~\cite{Pan2010hz} (see also Ref.~\cite{Tagoshi:1996gh}), after accounting for the factor of $(-i)^m$ attributable to the different conventions for the phase origin, as explained in Ref.~\cite{Arun:2009}. It is interesting to note from Eq.~(\ref{eq:h22}) that in the nonprecessing case, the dominant $h_{22}$ mode contains only terms that are quadratic in the spins at 2PN order. By contrast, for precessing binaries, the 2PN spin-orbit terms will give a nonvanishing contribution to the $22$ mode.

\section{CONCLUSIONS}
\label{sec:conclusions}

We have extended the knowledge of the spin terms in the gravitational-wave strain tensor to 2PN accuracy for precessing binaries. Our result includes the spin-orbit as well as the spin${}_1$-spin${}_2$ and spin${}^2_1$, spin${}_{2}^2$ effects. The quadratic-in-spin terms are entirely due to the equations of motion, whereas the 2PN spin-orbit terms come from both the corrections to the orbital dynamics and the radiation field. For a given choice of an orthonormal polarization triad and a source frame, the gravitational-wave polarizations can be obtained by projecting our result for the gravitational-wave strain tensor given in Secs.~\ref{sec:SO} and~\ref{sec:SS} orthogonal to the propagation direction.
For precessing binaries, there is no preferred unique choice of the source frame~\cite{Finn1993, Kidder:1995zr,Buonanno:2002fy,Schmidt:2010it, OShaughnessy2011, Ochsner2012, 2011PhRvD..84l4011B,Schmidt:2012rh}, but when the spins are collinear with the orbital angular momentum, the procedure to obtain the polarizations can be carried out in a fashion similar to that for nonspinning binaries. For the nonprecessing case and circular orbits, we provided ready-to-use expressions for the gravitational polarizations in Sec.~\ref{sec:pol}, which could be directly employed in time-domain post-Newtonian, phenomenological, and effective-one-body--based template models~\cite{Kidder:1995zr, Arun:2009,Ajith:2008,Damour2009a,Pan:2009wj, Santamaria:2010yb,Pan:2011gk}. In view of the current interest in interfacing analytical and numerical relativity, we also provided the decomposition of the waveform into spin-weighted spherical-harmonic modes for nonprecessing binaries and quasicircular orbits. We verified that the test-particle limit of our result reduces to the expressions obtained from black-hole perturbation theory~\cite{Tagoshi:1996gh,Pan2010hz}. We noted that for spins collinear with the orbital angular momentum, the dominant $h_{22}$ mode of the waveform contains only quadratic-in-spin effects, since the spin-orbit contributions vanish in this case, although they are nonzero for generic, precessing configurations.

\begin{acknowledgments}
A.B. acknowledges partial support from NSF Grants No. PHY-0903631 and No. PHY-1208881, and NASA Grant No. NNX09AI81G. A.B. also thanks the Kavli Institute for Theoretical Physics (supported by the NSF Grant No. PHY11-25915) for hospitality during the preparation of this manuscript. T.H. acknowledges support from NSF Grants No. PHY-0903631 and No. PHY-1208881, and the Maryland Center for Fundamental Physics. We thank Gilles Esposito-Far\`ese, Larry Kidder, and Etienne Racine for useful interactions, as well as David Delavaquerie for help in finalizing one of our \textsc{mathematica} codes.
\end{acknowledgments}

\appendix*
\section{USEFUL IDENTITIES}

Depending on the way the waveform is computed, the result may take various forms, which are not immediately seen to be equivalent. Their difference vanishes by virtue of certain dimensional identities valid in three dimensions. They all amount to expressing the fact that a tensor with four antisymmetrized indices must vanish. We present here two such identities, which turned out to be particularly useful for our checks, together with Eqs.~(5.2) of Ref.~\cite{Faye-Blanchet-Buonanno:2006}. Let $\bm{U}_A=U_A^i$, for $A\in \{1,2,3,4\}$, be four vectors of $\mathbb{R}^3$. The first identity tells us that for any vector $\bm{U}$, we must have
%
\begin{align}
& (\bm{U}_1 \times \bm{U}_2)^{(i} [U_3^{j)} (\bm{U}_4 \cdot \bm{U}) - U_4^{j)} (\bm{U}_3 \cdot \bm{U})] \\
& ~ = U_4^{(i} [(\bm{U}\times \bm{U}_1)^{j)} (\bm{U}_2 \cdot \bm{U}_3) - (\bm{U}\times \bm{U}_2)^{j)} (\bm{U}_1 \cdot \bm{U}_3)] \nonumber \\
& \quad + U_3^{(i} [(\bm{U}\times \bm{U}_2)^{j)} (\bm{U}_1 \cdot \bm{U}_4) - (\bm{U}\times \bm{U}_1)^{j)} (\bm{U}_2 \cdot \bm{U}_4)] \, . \nonumber
\end{align}
%
To show this, we compute $\varepsilon^i_{~ab} \varepsilon^{mjk} \varepsilon_{mpq} U_1^a U_2^b U_3^p U_4^q$ in two different ways: (i) we group the first two epsilons, which are next expanded in terms of the identity tensor $\delta^i_{~j}$ using the standard formula $\varepsilon_{iab} \varepsilon^{mjk} =3!
\delta_{~[i}^m \delta_{~a}^j \delta_{~b]}^k$; (ii) we group the last two epsilons and apply the contracted version of the previous equation: $\varepsilon^{mjk} \varepsilon_{mpq} = 2 \delta^j_{~[p} \delta^k_{~q]}$. One of the remaining free indices, say $k$, is finally contracted with $U_k$. The second identity reads:
\begin{widetext}
%
\begin{align} \label{eq:identity2}
& \delta^{ij} [U_1^2 U_2^2 U_3^2 - U_1^2 (\bm{U}_2 \cdot \bm{U}_3)^2 - U_2^2 (\bm{U}_3 \cdot \bm{U}_1)^2 - U_3^2 (\bm{U}_1 \cdot \bm{U}_2)^2 + 2 (\bm{U}_1 \cdot \bm{U}_2) (\bm{U}_2 \cdot \bm{U}_3) (\bm{U}_3 \cdot \bm{U}_1)] \nonumber \\
& \qquad + 2 U_1^{(i} U_3^{j)} [ U_2^2 (\bm{U}_3 \cdot \bm{U}_1) - (\bm{U}_1 \cdot \bm{U}_2) (\bm{U}_2 \cdot \bm{U}_3)] + 2 U_1^{(i} U_2^{j)} [ U_3^2 (\bm{U}_1 \cdot \bm{U}_2) - (\bm{U}_2 \cdot \bm{U}_3) (\bm{U}_3 \cdot \bm{U}_1)] \nonumber \\
& \qquad + 2 U_2^{(i} U_3^{j)} [ U_1^2 (\bm{U}_2 \cdot \bm{U}_3) - (\bm{U}_1 \cdot \bm{U}_2) (\bm{U}_1 \cdot \bm{U}_3)] + U_1^i U_1^j [ (\bm{U}_2 \cdot \bm{U}_3)^2 - U_2^2 U_3^2] + U_2^i U_2^j [ (\bm{U}_1 \cdot \bm{U}_3)^2 - U_1^2 U_3^2] \nonumber \\
& \qquad + U_3^i U_3^j [ (\bm{U}_1 \cdot \bm{U}_2)^2 - U_1^2 U_2^2] =0 \, .
\end{align}
%
\end{widetext}
It is proved by contracting the equality $U_1^{[a} U_2^b U_3^c \delta^{i]j}=0$ with $U_{1a} U_{2b} U_{3c}$ and expanding. As the trace of the left-hand side of Eq.~\eqref{eq:identity2} is identically zero, the nontrivial content of this identity consists of its STF part.

\begin{thebibliography}{94}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname bibnamefont\endcsname\relax \def\bibnamefont#1{#1}\fi
\expandafter\ifx\csname bibfnamefont\endcsname\relax \def\bibfnamefont#1{#1}\fi
\expandafter\ifx\csname citenamefont\endcsname\relax \def\citenamefont#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax \def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}

\bibitem[{\citenamefont{Abbott et~al.}(2009)}]{Abbott:2007}
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Abbott}} \bibnamefont{\textit{et~al}.} (\bibinfo{collaboration}{LIGO Scientific Collaboration}), \bibinfo{journal}{Rep. Prog. Phys.} \textbf{\bibinfo{volume}{72}}, \bibinfo{pages}{076901} (\bibinfo{year}{2009}).

\bibitem[{\citenamefont{Acernese et~al.}(2008)}]{Acernese:2008}
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Acernese}} \bibnamefont{\textit{et~al}.}, \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{25}}, \bibinfo{pages}{184001} (\bibinfo{year}{2008}).

\bibitem[{\citenamefont{Grote}(2008)}]{Grote:2008zz}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Grote}} (\bibinfo{collaboration}{LIGO Scientific Collaboration}), \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{25}}, \bibinfo{pages}{114043} (\bibinfo{year}{2008}).
\bibitem[{\citenamefont{Kuroda and the {LCGT}~Collaboration}(2010)}]{Kuroda:2010}
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Kuroda}} \bibnamefont{and} \bibinfo{author}{\bibnamefont{the {LCGT}~Collaboration}}, \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{27}}, \bibinfo{pages}{084004} (\bibinfo{year}{2010}).

\bibitem[{\citenamefont{Prince et~al.}(2007)}]{lisa}
\bibinfo{author}{\bibfnamefont{T.~A.}~\bibnamefont{Prince}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Binetruy}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Centrella}}, \bibinfo{author}{\bibfnamefont{L.~S.}~\bibnamefont{Finn}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Hogan}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Nelemans}}, \bibinfo{author}{\bibfnamefont{E.~S.}~\bibnamefont{Phinney}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Schutz}} (\bibinfo{collaboration}{LISA International Science Team}), \bibinfo{type}{Technical Report}, \bibinfo{institution}{LISA science case document}, \bibinfo{year}{2007}, \bibinfo{note}{\url{http://list.caltech.edu/mission_documents}}.

\bibitem[{ESA()}]{ESALISAwebsite}
\bibinfo{howpublished}{\url{http://sci.esa.int/lisa}}.

\bibitem[{\citenamefont{Finn}(1992)}]{Finn1992}
\bibinfo{author}{\bibfnamefont{L.~S.} \bibnamefont{Finn}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{46}}, \bibinfo{pages}{5236} (\bibinfo{year}{1992}).

\bibitem[{\citenamefont{Finn and Chernoff}(1993)}]{Finn1993}
\bibinfo{author}{\bibfnamefont{L.~S.} \bibnamefont{Finn}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.~F.} \bibnamefont{Chernoff}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{47}}, \bibinfo{pages}{2198} (\bibinfo{year}{1993}).

\bibitem[{\citenamefont{Sasaki and Tagoshi}(2003)}]{Sasaki:2003xr}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Sasaki}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Tagoshi}}, \bibinfo{journal}{Living Rev. Rel.} \textbf{\bibinfo{volume}{6}}, \bibinfo{pages}{6} (\bibinfo{year}{2003}), \bibinfo{howpublished}{\url{http://www.livingreviews.org/lrr-2003-6}}.

\bibitem[{\citenamefont{Blanchet}(2006)}]{Blanchet2006}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibinfo{journal}{Living Rev. Rel.} \textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{4} (\bibinfo{year}{2006}), \bibinfo{howpublished}{\url{http://www.livingreviews.org/lrr-2006-4}}.

\bibitem[{\citenamefont{Futamase and Itoh}(2007)}]{Futamase:2007zz}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Futamase}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Itoh}}, \bibinfo{journal}{Living Rev. Rel.} \textbf{\bibinfo{volume}{10}}, \bibinfo{pages}{2} (\bibinfo{year}{2007}), \bibinfo{howpublished}{\url{http://www.livingreviews.org/lrr-2007-2}}.

\bibitem[{\citenamefont{Pretorius}(2005)}]{Pretorius2005a}
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Pretorius}}, \bibinfo{journal}{Phys.\ Rev.\ Lett.} \textbf{\bibinfo{volume}{95}}, \bibinfo{pages}{121101} (\bibinfo{year}{2005}).

\bibitem[{\citenamefont{Campanelli et~al.}(2006)\citenamefont{Campanelli, Lousto, Marronetti, and Zlochower}}]{Campanelli2006a}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Campanelli}}, \bibinfo{author}{\bibfnamefont{C.~O.}~\bibnamefont{Lousto}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Marronetti}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Zlochower}}, \bibinfo{journal}{Phys.\ Rev.\ Lett.} \textbf{\bibinfo{volume}{96}}, \bibinfo{pages}{111101} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Baker et~al.}(2006)\citenamefont{Baker, Centrella, Choi, Koppitz, and van Meter}}]{Baker2006a}
\bibinfo{author}{\bibfnamefont{J.~G.} \bibnamefont{Baker}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Centrella}}, \bibinfo{author}{\bibfnamefont{D.-I.} \bibnamefont{Choi}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Koppitz}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{van Meter}}, \bibinfo{journal}{Phys.\ Rev.\ Lett.} \textbf{\bibinfo{volume}{96}}, \bibinfo{pages}{111102} (\bibinfo{year}{2006}).

\bibitem[{\citenamefont{Buonanno and Damour}(1999)}]{Buonanno99}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{59}}, \bibinfo{pages}{084006} (\bibinfo{year}{1999}).

\bibitem[{\citenamefont{Buonanno and Damour}(2000)}]{Buonanno00}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{62}}, \bibinfo{pages}{064015} (\bibinfo{year}{2000}).

\bibitem[{\citenamefont{Damour et~al.}(2000)\citenamefont{Damour, Jaranowski, and Schafer}}]{DJS00}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Jaranowski}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Schafer}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{62}}, \bibinfo{pages}{084011} (\bibinfo{year}{2000}).

\bibitem[{\citenamefont{Buonanno et~al.}(2007)\citenamefont{Buonanno, Cook, and Pretorius}}]{Buonanno-Cook-Pretorius:2007}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{G.~B.} \bibnamefont{Cook}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Pretorius}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{75}}, \bibinfo{pages}{124018} (\bibinfo{year}{2007}).

\bibitem[{\citenamefont{Ajith et~al.}(2008)}]{Ajith:2008}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Ajith}} \bibnamefont{\textit{et~al}.}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{104017} (\bibinfo{year}{2008}).

\bibitem[{\citenamefont{Damour and Nagar}(2009)}]{Damour2009a}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Nagar}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{79}}, \bibinfo{pages}{081503} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Pan et~al.}(2010)\citenamefont{Pan, Buonanno, Buchman, Chu, Kidder, Pfeiffer, and Scheel}}]{Pan:2009wj}
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{L.~T.} \bibnamefont{Buchman}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Chu}}, \bibinfo{author}{\bibfnamefont{L.~E.} \bibnamefont{Kidder}}, \bibinfo{author}{\bibfnamefont{H.~P.} \bibnamefont{Pfeiffer}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Scheel}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{084041} (\bibinfo{year}{2010}).

\bibitem[{\citenamefont{Santamar\'{i}a et~al.}(2010)}]{Santamaria:2010yb}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Santamar\'{i}a}} \bibnamefont{\textit{et~al}.}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{82}}, \bibinfo{pages}{064016} (\bibinfo{year}{2010}).

\bibitem[{\citenamefont{Pan et~al.}(2011)\citenamefont{Pan, Buonanno, Boyle, Buchman, Kidder, Pfeiffer, and Scheel}}]{Pan:2011gk}
\bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Boyle}}, \bibinfo{author}{\bibfnamefont{L.~T.} \bibnamefont{Buchman}}, \bibinfo{author}{\bibfnamefont{L.~E.} \bibnamefont{Kidder}}, \bibinfo{author}{\bibfnamefont{H.~P.} \bibnamefont{Pfeiffer}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Scheel}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{84}}, \bibinfo{pages}{124052} (\bibinfo{year}{2011}).

\bibitem[{\citenamefont{Blanchet et~al.}(1995)\citenamefont{Blanchet, Damour, and Iyer}}]{Blanchet95a}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.~R.} \bibnamefont{Iyer}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{51}}, \bibinfo{pages}{5360} (\bibinfo{year}{1995}).

\bibitem[{\citenamefont{Blanchet et~al.}(2004)\citenamefont{Blanchet, Damour, and Esposito-Far\`{e}se}}]{Blanchet04}
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Esposito-Far\`{e}se}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{69}}, \bibinfo{pages}{124007} (\bibinfo{year}{2004}).

\bibitem[{\citenamefont{Arun et~al.}(2004)\citenamefont{Arun, Blanchet, Iyer, and Qusailah}}]{Blanchet2005b}
\bibinfo{author}{\bibnamefont{K.~G.
Arun}}, \bibinfo{author}{\bibnamefont{L.~ Blanchet}}, \bibinfo{author}{\bibfnamefont{B.~R.} \bibnamefont{Iyer}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.~S.~S.} \bibnamefont{Qusailah}}, \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{21}}, \bibinfo{pages}{377} (\bibinfo{year}{2004}). %, \eprint{arXiv:astro-ph/0507692}. \bibitem[{\citenamefont{Blanchet et~al.}(1996)\citenamefont{Blanchet, Iyer, Will, and Wiseman}}]{Blanchet96a} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibinfo{author}{\bibfnamefont{B.~R.} \bibnamefont{Iyer}}, \bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{Will}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~G.} \bibnamefont{Wiseman}}, \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{13}}, \bibinfo{pages}{575} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{{Kidder} et~al.}(2007)\citenamefont{{Kidder}, {Blanchet}, and {Iyer}}}]{Kidder07} \bibinfo{author}{\bibfnamefont{L.~E.} \bibnamefont{{Kidder}}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{{Blanchet}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.~R.} \bibnamefont{{Iyer}}}, \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{24}}, \bibinfo{pages}{5307} (\bibinfo{year}{2007}). \bibitem[{\citenamefont{Kidder}(2008)}]{Kidder2008} \bibinfo{author}{\bibfnamefont{L.~E.} \bibnamefont{Kidder}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{044016} (\bibinfo{year}{2008}).%, \eprint{0710.0614}. \bibitem[{\citenamefont{Blanchet et~al.}(2008)\citenamefont{Blanchet, Faye, Iyer, and Sinha}}]{BFIS} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Faye}}, \bibinfo{author}{\bibfnamefont{B.~R.} \bibnamefont{Iyer}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Sinha}}, \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{25}}, \bibinfo{pages}{165003} (\bibinfo{year}{2008}).%, \eprint{0802.1249}. \bibitem[{\citenamefont{Faye et~al.}(2012)\citenamefont{Faye, Marsat, Blanchet, and Iyer}}]{Faye:2012we} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Faye}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Marsat}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.~R.} \bibnamefont{Iyer}}, \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{29}}, \bibinfo{pages}{175004} (\bibinfo{year}{2012}). %, \eprint{1204.1043}. \bibitem[{\citenamefont{{Miller} et~al.}(2009)\citenamefont{{Miller}, {Reynolds}, {Fabian}, {Miniutti}, and {Gallo}}}]{Miller2009} \bibinfo{author}{\bibfnamefont{J.~M.} \bibnamefont{{Miller}}}, \bibinfo{author}{\bibfnamefont{C.~S.} \bibnamefont{{Reynolds}}}, \bibinfo{author}{\bibfnamefont{A.~C.} \bibnamefont{{Fabian}}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{{Miniutti}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.~C.} \bibnamefont{{Gallo}}}, \bibinfo{journal}{\apj} \textbf{\bibinfo{volume}{697}}, \bibinfo{pages}{900} (\bibinfo{year}{2009}).%, %\eprint{0902.2840}. 
\bibitem[{\citenamefont{Apostolatos et~al.}(1994)\citenamefont{Apostolatos, Cutler, Sussman, and Thorne}}]{Apostolatos1994} \bibinfo{author}{\bibfnamefont{T.~A.} \bibnamefont{Apostolatos}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Cutler}}, \bibinfo{author}{\bibfnamefont{G.~J.} \bibnamefont{Sussman}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{K.~S.} \bibnamefont{Thorne}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{49}}, \bibinfo{pages}{6274 } (\bibinfo{year}{1994}). \bibitem[{\citenamefont{{Apostolatos}}(1996)}]{Apostolatos1996} \bibinfo{author}{\bibfnamefont{T.~A.} \bibnamefont{{Apostolatos}}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{54}}, \bibinfo{pages}{2421} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Buonanno et~al.}(2003)\citenamefont{Buonanno, Chen, and Vallisneri}}]{Buonanno:2002fy} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Chen}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Vallisneri}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{67}}, \bibinfo{pages}{104025} (\bibinfo{year}{2003}).%, \eprint{gr-qc/0211087}. \bibitem[{\citenamefont{Pan et~al.}(2004)\citenamefont{Pan, Buonanno, Chen, and Vallisneri}}]{Pan:2003qt} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Chen}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Vallisneri}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{69}}, \bibinfo{pages}{104017} (\bibinfo{year}{2004}).%, \eprint{gr-qc/0310034}. \bibitem[{\citenamefont{Buonanno et~al.}(2004)\citenamefont{Buonanno, Chen, Pan, and Vallisneri}}]{Buonanno2004} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Chen}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Pan}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Vallisneri}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{70}}, \bibinfo{pages}{104003} (\bibinfo{year}{2004}). \bibitem[{\citenamefont{Buonanno et~al.}(2005)\citenamefont{Buonanno, Chen, Pan, Tagoshi, and Vallisneri}}]{Buonanno:2005pt} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Chen}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Tagoshi}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Vallisneri}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{72}}, \bibinfo{pages}{084027} (\bibinfo{year}{2005}).%, \eprint{gr-qc/0508064}. \bibitem[{\citenamefont{Buonanno et~al.}(2006)\citenamefont{Buonanno, Chen, and Damour}}]{Buonanno06} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Chen}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{74}}, \bibinfo{pages}{104005} (\bibinfo{year}{2006}).%, \eprint{gr-qc/0508067}. 
\bibitem[{\citenamefont{{Ajith} et~al.}(2009)\citenamefont{{Ajith}, {Hannam}, {Husa}, {Chen}, {Bruegmann}, {Dorband}, {Mueller}, {Ohme}, {Pollney}, {Reisswig} et~al.}}]{Ajith:2009} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{{Ajith}}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{{Hannam}}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{{Husa}}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{{Chen}}}, \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{{Bruegmann}}}, \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{{Dorband}}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{{Mueller}}}, \bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{{Ohme}}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{{Pollney}}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{{Reisswig}}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{{Santamar\'{i}a}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{{Seiler}}}, \bibinfo{journal}{Phys.\ Rev.\ Lett.} \textbf{\bibinfo{volume}{106}}, \bibinfo{pages}{241101} (\bibinfo{year}{2011}) %, \bibinfo{journal}{ArXiv e-prints} %(\bibinfo{year}{2009}).%, \eprint{0909.2867}. \bibitem[{\citenamefont{Ajith}(2011)}]{Ajith:2011ec} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Ajith}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{84}}, \bibinfo{pages}{084037} (\bibinfo{year}{2011}).%, \eprint{1107.1267}. \bibitem[{\citenamefont{Brown et~al.}(2012)\citenamefont{Brown, Lundgren, and O'Shaughnessy}}]{Brown:2012gs} \bibinfo{author}{\bibfnamefont{D.~A.} \bibnamefont{Brown}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Lundgren}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{O'Shaughnessy}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{064020} (\bibinfo{year}{2012}).%, \eprint{1203.6060}. \bibitem[{\citenamefont{Taracchini et~al.}(2012)\citenamefont{Taracchini, Pan, Buonanno, Barausse, Boyle et~al.}}]{Taracchini:2012ig} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Taracchini}}, \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Barausse}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Boyle}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Chu}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Lovelace}}, \bibinfo{author}{\bibfnamefont{H.~P.} \bibnamefont{Pfeiffer}}, \bibfnamefont{and} \bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Scheel}} \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{024011} (\bibinfo{year}{2012}).%, \eprint{1202.0790}. \bibitem[{\citenamefont{Cutler and Flanagan}(1994)}]{CutlerFlanagan1994} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Cutler}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{E.~E.} \bibnamefont{Flanagan}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{49}}, \bibinfo{pages}{2658} (\bibinfo{year}{1994}). \bibitem[{\citenamefont{{Poisson} and {Will}}(1995)}]{PoissonWill95} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{{Poisson}}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{{Will}}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{52}}, \bibinfo{pages}{848} (\bibinfo{year}{1995}).%, \eprint{arXiv:gr-qc/9502040}. 
\bibitem[{\citenamefont{{van der Sluys} et~al.}(2008)\citenamefont{{van der Sluys}, {R{\"o}ver}, {Stroeer}, {Raymond}, {Mandel}, {Christensen}, {Kalogera}, {Meyer}, and {Vecchio}}}]{vanderSluys} \bibinfo{author}{\bibfnamefont{M.~V.} \bibnamefont{{van der Sluys}}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{{R{\"o}ver}}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{{Stroeer}}}, \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{{Raymond}}}, \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{{Mandel}}}, \bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{{Christensen}}}, \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{{Kalogera}}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{{Meyer}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{{Vecchio}}}, \bibinfo{journal}{Astrophys.\ J.\ Lett.} \textbf{\bibinfo{volume}{688}}, \bibinfo{pages}{L61} (\bibinfo{year}{2008}).%, \eprint{0710.1897}. \bibitem[{\citenamefont{Ajith and Bose}(2009)}]{AjithBose2009} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Ajith}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Bose}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{79}}, \bibinfo{pages}{084032} (\bibinfo{year}{2009}).%, % \eprint{arXiv:0901.4936[gr-qc]}. \bibitem[{\citenamefont{Mikoczi et~al.}(2005)\citenamefont{Mikoczi, Vasuth, and Gergely}}]{Mikoczi:2005dn} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Mikoczi}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Vasuth}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.~A.} \bibnamefont{Gergely}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{71}}, \bibinfo{pages}{124043} (\bibinfo{year}{2005}).%, \eprint{astro-ph/0504538}. \bibitem[{\citenamefont{Faye et~al.}(2006)\citenamefont{Faye, Blanchet, and Buonanno}}]{Faye-Blanchet-Buonanno:2006} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Faye}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{74}}, \bibinfo{eid}{104033} (\bibinfo{year}{2006}). \bibitem[{\citenamefont{Blanchet et~al.}(2006)\citenamefont{Blanchet, Buonanno, and Faye}}]{Blanchet-Buonanno-Faye:2006} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Faye}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{74}}, \bibinfo{eid}{104034} (\bibinfo{year}{2006}). \bibitem[{\citenamefont{Kidder}(1995)}]{Kidder:1995zr} \bibinfo{author}{\bibfnamefont{L.~E.} \bibnamefont{Kidder}}, \bibinfo{journal}{Phys. Rev. D} \textbf{\bibinfo{volume}{52}}, \bibinfo{pages}{821} (\bibinfo{year}{1995}). \bibitem[{\citenamefont{Arun et~al.}(2009)\citenamefont{Arun, Buonanno, Faye, and Ochsner}}]{Arun:2009} \bibinfo{author}{\bibfnamefont{K.~G.}~\bibnamefont{Arun}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Faye}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Ochsner}}, \bibinfo{journal}{Phys. Rev. D} \textbf{\bibinfo{volume}{79}}, \bibinfo{pages}{104023} (\bibinfo{year}{2009}).%, \eprint{0810.5336}. 
\bibitem[{\citenamefont{Will and Wiseman}(1996)}]{Will96} \bibinfo{author}{\bibfnamefont{C.~M.} \bibnamefont{Will}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.~G.} \bibnamefont{Wiseman}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{54}}, \bibinfo{pages}{4813} (\bibinfo{year}{1996}). \bibitem[{\citenamefont{Blanchet et~al.}(2011)\citenamefont{Blanchet, Buonanno, and Faye}}]{BlanchetEtAl:2011} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Faye}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{84}}, \bibinfo{pages}{064041} (\bibinfo{year}{2011}).%, \eprint{1104.5659}. \bibitem[{\citenamefont{{Bailey} and {Israel}}(1980)}]{1980AnPhy.130..188B} \bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{{Bailey}}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{{Israel}}}, \bibinfo{journal}{Ann. Phys. (N.Y.)} \textbf{\bibinfo{volume}{130}}, \bibinfo{pages}{188} (\bibinfo{year}{1980}). \bibitem[{\citenamefont{{Porto} and {Rothstein}}(2006)}]{PortoRothstein2006} \bibinfo{author}{\bibfnamefont{R.~A.} \bibnamefont{{Porto}}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.~Z.} \bibnamefont{{Rothstein}}}, \bibinfo{journal}{Phys.\ Rev.\ Lett.} \textbf{\bibinfo{volume}{97}}, \bibinfo{eid}{021101} (\bibinfo{year}{2006}).%, \eprint{arXiv:gr-qc/0604099}. \bibitem[{\citenamefont{Porto and Rothstein}(2008{\natexlab{a}})}]{Porto:2008jj} \bibinfo{author}{\bibfnamefont{R.~A.} \bibnamefont{Porto}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.~Z.} \bibnamefont{Rothstein}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{78}}, \bibinfo{pages}{044013} (\bibinfo{year}{2008}{\natexlab{a}}).%, % \eprint{0804.0260}. \bibitem[{\citenamefont{Steinhoff and Puetzfeld}(2010)}]{SP10} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Steinhoff}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Puetzfeld}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{044019} (\bibinfo{year}{2010}). \bibitem[{\citenamefont{Poisson}(1998)}]{Poisson:1997ha} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Poisson}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{57}}, \bibinfo{pages}{5287} (\bibinfo{year}{1998}). %, \eprint{gr-qc/9709032}. \bibitem[{\citenamefont{Damour}(2001)}]{Damour01c} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{64}}, \bibinfo{pages}{124013} (\bibinfo{year}{2001}).%, \eprint{gr-qc/0103018}. \bibitem[{\citenamefont{Steinhoff}(2011)}]{Steinhoff:2010zz} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Steinhoff}}, \bibinfo{journal}{Annalen Phys. (Berlin)} \textbf{\bibinfo{volume}{523}}, \bibinfo{pages}{296} (\bibinfo{year}{2011}).%, \eprint{1106.4203}. \bibitem[{\citenamefont{{Porto} et~al.}(2012)\citenamefont{{Porto}, {Ross}, and {Rothstein}}}]{Porto:2012x} \bibinfo{author}{\bibfnamefont{R.~A.} \bibnamefont{{Porto}}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{{Ross}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.~Z.} \bibnamefont{{Rothstein}}}, \bibinfo{journal}{J. Cosmol. Astropart. Phys. 09} (\bibinfo{year}{2012}) 028. %\eprint{1203.2962}. 
\bibitem[{\citenamefont{Porto and Rothstein}(2008{\natexlab{b}})}]{Porto:2008tb} \bibinfo{author}{\bibfnamefont{R.~A.} \bibnamefont{Porto}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.~Z.} \bibnamefont{Rothstein}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{78}}, \bibinfo{pages}{044012} (\bibinfo{year}{2008}{\natexlab{b}}).%, % \eprint{0802.0720}. \bibitem[{\citenamefont{Porto}(2010)}]{Porto:2010tr} \bibinfo{author}{\bibfnamefont{R.~A.} \bibnamefont{Porto}}, \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{27}}, \bibinfo{pages}{205001} (\bibinfo{year}{2010}).%, \eprint{1005.5730}. \bibitem[{\citenamefont{Damour et~al.}(2008)\citenamefont{Damour, Jaranowski, and Schaefer}}]{Damour:2007nc} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}}, \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Jaranowski}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Schafer}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{064032} (\bibinfo{year}{2008}).%, \eprint{0711.1048}. \bibitem[{\citenamefont{{Steinhoff} et~al.}(2008{\natexlab{a}})\citenamefont{{Steinhoff}, {Sch{\"a}fer}, and {Hergt}}}]{Steinhoff08} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{{Steinhoff}}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{{Sch{\"a}fer}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{{Hergt}}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{77}}, \bibinfo{eid}{104018} (\bibinfo{year}{2008}{\natexlab{a}}).%, \eprint{0805.3136}. \bibitem[{\citenamefont{{Steinhoff} et~al.}(2008{\natexlab{b}})\citenamefont{{Steinhoff}, {Hergt}, and {Sch{\"a}fer}}}]{Steinhoff08a} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{{Steinhoff}}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{{Hergt}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{{Sch{\"a}fer}}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{77}}, \bibinfo{pages}{081501} (\bibinfo{year}{2008}{\natexlab{b}}).%, %\eprint{0712.1716}. \bibitem[{\citenamefont{{Steinhoff} et~al.}(2008{\natexlab{c}})\citenamefont{{Steinhoff}, {Hergt}, and {Sch{\"a}fer}}}]{Steinhoff08b} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{{Steinhoff}}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{{Hergt}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{{Sch{\"a}fer}}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{78}}, \bibinfo{eid}{101503} (\bibinfo{year}{2008}{\natexlab{c}}).%, \eprint{0809.2200}. \bibitem[{\citenamefont{Thorne}(1980)}]{thorne80} \bibinfo{author}{\bibfnamefont{K.~S.} \bibnamefont{Thorne}}, \bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{52}}, \bibinfo{pages}{299} (\bibinfo{year}{1980}). \bibitem[{\citenamefont{Blanchet and Damour}(1992)}]{Blanchet:1992br} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}}, \bibinfo{journal}{Ann. Inst. Henri Poincare, Phys. Theor.} \textbf{\bibinfo{volume}{50}}, \bibinfo{pages}{377} (\bibinfo{year}{1989}), \bibinfo{howpublished}{\url{http://www.numdam.org/item?id=AIHPA_1989__50_4_377_0}}. \bibitem[{\citenamefont{Blanchet}(1995)}]{Blanchet:1995fr} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{51}}, \bibinfo{pages}{2559} (\bibinfo{year}{1995}).%, \eprint{gr-qc/9501030}. 
\bibitem[{\citenamefont{Schmidt et~al.}(2011)\citenamefont{Schmidt, Hannam, Husa, and Ajith}}]{Schmidt:2010it} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Schmidt}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Hannam}}, \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Husa}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Ajith}}, \bibinfo{journal}{Phys. Rev. D} \textbf{\bibinfo{volume}{84}}, \bibinfo{pages}{024046} (\bibinfo{year}{2011}).%, \eprint{1012.2879}. \bibitem[{\citenamefont{{O'Shaughnessy} et~al.}(2011)\citenamefont{{O'Shaughnessy}, {Vaishnav}, {Healy}, {Meeks}, and {Shoemaker}}}]{OShaughnessy2011} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{{O'Shaughnessy}}}, \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{{Vaishnav}}}, \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{{Healy}}}, \bibinfo{author}{\bibfnamefont{Z.}~\bibnamefont{{Meeks}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{{Shoemaker}}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{84}}, \bibinfo{eid}{124002} (\bibinfo{year}{2011}).%, \eprint{1109.5224}. \bibitem[{\citenamefont{{Ochsner} and {O'Shaughnessy}}(2012)}]{Ochsner2012} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{{Ochsner}}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{{O'Shaughnessy}}}, \bibinfo{journal}{Phys. Rev. D} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{104037} (\bibinfo{year}{2012}).% % \eprint{1205.2287}. \bibitem[{\citenamefont{{Boyle} et~al.}(2011)\citenamefont{{Boyle}, {Owen}, and {Pfeiffer}}}]{2011PhRvD..84l4011B} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{{Boyle}}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{{Owen}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.~P.} \bibnamefont{{Pfeiffer}}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{84}}, \bibinfo{eid}{124011} (\bibinfo{year}{2011}).%, \eprint{1110.2965}. \bibitem[{\citenamefont{Schmidt et~al.}(2012)\citenamefont{Schmidt, Hannam, and Husa}}]{Schmidt:2012rh} \bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Schmidt}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Hannam}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Husa}} \bibinfo{journal}{Phys. Rev. D} \textbf{\bibinfo{volume}{86}}, \bibinfo{pages}{104063} (\bibinfo{year}{2012}). % (\bibinfo{year}{2012}), \eprint{1207.3088}. \bibitem[{\citenamefont{Mart\'in-Garc\'ia et~al.}(GPL 2002--2012)\citenamefont{Mart\'in-Garc\'ia, Garc\'ia-Parrado, Stecchina, Wardell, Pitrou, Brizuela, Yllanes, Faye, Stein, Portugal et~al.}}]{xtensor} \bibinfo{author}{\bibfnamefont{J.~M.} \bibnamefont{Mart\'in-Garc\'ia}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Garc\'ia-Parrado}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Stecchina}}, \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Wardell}}, \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Pitrou}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Brizuela}}, \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Yllanes}}, \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Faye}}, \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Stein}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Portugal}}, \bibnamefont{et~al.} (\bibinfo{year}{GPL 2002--2012}), \bibinfo{note}{http://www.xact.es/}. 
\bibitem[{\citenamefont{{Owen} et~al.}(1998)\citenamefont{{Owen}, {Tagoshi}, and {Ohashi}}}]{1998PhRvD..57.6168O} \bibinfo{author}{\bibfnamefont{B.~J.} \bibnamefont{{Owen}}}, \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{{Tagoshi}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{{Ohashi}}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{57}}, \bibinfo{pages}{6168} (\bibinfo{year}{1998}).%, \eprint{arXiv:gr-qc/9710134}. \bibitem[{\citenamefont{Tagoshi et~al.}(1996)\citenamefont{Tagoshi, Shibata, Tanaka, and Sasaki}}]{Tagoshi:1996gh} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Tagoshi}}, \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Shibata}}, \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Tanaka}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Sasaki}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{54}}, \bibinfo{pages}{1439} (\bibinfo{year}{1996}).%, \eprint{gr-qc/9603028}. \bibitem[{\citenamefont{Pan et~al.}(2011{\natexlab{b}})\citenamefont{Pan, Buonanno, Fujita, Racine, and Tagoshi}}]{Pan2010hz} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Pan}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Fujita}}, \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Racine}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Tagoshi}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{83}}, \bibinfo{pages}{064003} (\bibinfo{year}{2011}{\natexlab{b}}). %\eprint{1006.0431}. \bibitem[{\citenamefont{Tulczyjew}(1959)}]{Tulczyjew1959} \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Tulczyjew}}, \bibinfo{journal}{Acta Phys. Polon.} \textbf{\bibinfo{volume}{18}}, \bibinfo{pages}{393} (\bibinfo{year}{1959}). \bibitem[{\citenamefont{Barker and O'Connell}(1979)}]{BOC79} \bibinfo{author}{\bibfnamefont{B.~M.} \bibnamefont{Barker}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{R.~F.} \bibnamefont{O'Connell}}, \bibinfo{journal}{Gen. Relativ. Gravit.} \textbf{\bibinfo{volume}{11}}, \bibinfo{pages}{149} (\bibinfo{year}{1979}). \bibitem[{\citenamefont{Porto}(2006)}]{Porto:2005ac} \bibinfo{author}{\bibfnamefont{R.~A.} \bibnamefont{Porto}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{73}}, \bibinfo{pages}{104031} (\bibinfo{year}{2006}).%, \eprint{gr-qc/0511061}. \bibitem[{\citenamefont{Damour and Esposito-Far{\`e}se}(1998)}]{DE98a} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Damour}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Esposito-Far{\`e}se}}, \bibinfo{journal}{Phys.Rev.D} \textbf{\bibinfo{volume}{58}}, \bibinfo{pages}{042001} (\bibinfo{year}{1998}).%, \eprint{gr-qc/9803031}. \bibitem[{\citenamefont{{Laarakkers} and {Poisson}}(1999)}]{Laarakkers99} \bibinfo{author}{\bibfnamefont{W.~G.} \bibnamefont{{Laarakkers}}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{{Poisson}}}, \bibinfo{journal}{\apj} \textbf{\bibinfo{volume}{512}}, \bibinfo{pages}{282} (\bibinfo{year}{1999}).%, \eprint{arXiv:gr-qc/9709033}. \bibitem[{\citenamefont{Hanson and Regge}(1974)}]{HR74} \bibinfo{author}{\bibfnamefont{A.~J.} \bibnamefont{Hanson}} \bibnamefont{and} \bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Regge}}, \bibinfo{journal}{Ann. Phys. (N.Y.)} \textbf{\bibinfo{volume}{87}}, \bibinfo{pages}{498} (\bibinfo{year}{1974}). \bibitem[{\citenamefont{Papapetrou}(1951)}]{Papa51spin} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Papapetrou}}, \bibinfo{journal}{Proc. R. Soc. A} \textbf{\bibinfo{volume}{209}}, \bibinfo{pages}{248} (\bibinfo{year}{1951}). 
\bibitem[{\citenamefont{Dixon}(1964)}]{Dixon1964} \bibinfo{author}{\bibfnamefont{W.~G.} \bibnamefont{Dixon}}, \bibinfo{journal}{Nuovo Cimento} \textbf{\bibinfo{volume}{34}}, \bibinfo{pages}{317} (\bibinfo{year}{1964}). \bibitem[{\citenamefont{{Tagoshi} et~al.}(2001)\citenamefont{{Tagoshi}, {Ohashi}, and {Owen}}}]{Tagoshi-Ohashi-Owen:2001} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{{Tagoshi}}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{{Ohashi}}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{B.~J.} \bibnamefont{{Owen}}}, \bibinfo{journal}{Phys.\ Rev.\ D} \textbf{\bibinfo{volume}{63}}, \bibinfo{pages}{044006} (\bibinfo{year}{2001}). \bibitem[{\citenamefont{Dixon}(1974)}]{Dixon1974} \bibinfo{author}{\bibfnamefont{W.~G.} \bibnamefont{Dixon}}, \bibinfo{journal}{Phil. Trans. R. Soc. A} \textbf{\bibinfo{volume}{277}}, \bibinfo{pages}{59} (\bibinfo{year}{1974}). \bibitem[{\citenamefont{Porto et~al.}(2011)\citenamefont{Porto, Ross, and Rothstein}}]{Porto:2010zg} \bibinfo{author}{\bibfnamefont{R.~A.} \bibnamefont{Porto}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Ross}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{I.~Z.} \bibnamefont{Rothstein}}, \bibinfo{journal}{J. Cosmol. Astropart. Phys.} \textbf{\bibinfo{volume}{1103}}, \bibinfo{pages}{009} (\bibinfo{year}{2011}).%, \eprint{1007.1312}. \bibitem[{\citenamefont{Blanchet}(1998)}]{Blanchet98} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Blanchet}}, \bibinfo{journal}{Classical Quantum Gravity} \textbf{\bibinfo{volume}{15}}, \bibinfo{pages}{1971} (\bibinfo{year}{1998}). \bibitem[{\citenamefont{Racine et~al.}(2009)\citenamefont{Racine, Buonanno, and Kidder}}]{Racine2008} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Racine}}, \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Buonanno}}, \bibnamefont{and} \bibinfo{author}{\bibfnamefont{L.~E.} \bibnamefont{Kidder}}, \bibinfo{journal}{Phys. Rev. D} \textbf{\bibinfo{volume}{80}}, \bibinfo{pages}{044010} (\bibinfo{year}{2009}). \bibitem[{\citenamefont{Racine}(2008)}]{Racine:2008qv} \bibinfo{author}{\bibfnamefont{E.}~\bibnamefont{Racine}}, \bibinfo{journal}{\prd} \textbf{\bibinfo{volume}{78}}, \bibinfo{pages}{044021} (\bibinfo{year}{2008}). \end{thebibliography} \end{document}
[STATEMENT] theorem sup_loc_rev [simp]: "(G \<turnstile> (rev a) <=l rev b) = (G \<turnstile> a <=l b)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. G \<turnstile> rev a <=l rev b = G \<turnstile> a <=l b [PROOF STEP] proof - [PROOF STATE] proof (state) goal (1 subgoal): 1. G \<turnstile> rev a <=l rev b = G \<turnstile> a <=l b [PROOF STEP] have "\<forall>b. (G \<turnstile> (rev a) <=l rev b) = (G \<turnstile> a <=l b)" (is "\<forall>b. ?Q a b" is "?P a") [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<forall>b. G \<turnstile> rev a <=l rev b = G \<turnstile> a <=l b [PROOF STEP] proof (induct a) [PROOF STATE] proof (state) goal (2 subgoals): 1. \<forall>b. G \<turnstile> rev [] <=l rev b = G \<turnstile> [] <=l b 2. \<And>a1 a2. \<forall>b. G \<turnstile> rev a2 <=l rev b = G \<turnstile> a2 <=l b \<Longrightarrow> \<forall>b. G \<turnstile> rev (a1 # a2) <=l rev b = G \<turnstile> (a1 # a2) <=l b [PROOF STEP] show "?P []" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<forall>b. G \<turnstile> rev [] <=l rev b = G \<turnstile> [] <=l b [PROOF STEP] by simp [PROOF STATE] proof (state) this: \<forall>b. G \<turnstile> rev [] <=l rev b = G \<turnstile> [] <=l b goal (1 subgoal): 1. \<And>a1 a2. \<forall>b. G \<turnstile> rev a2 <=l rev b = G \<turnstile> a2 <=l b \<Longrightarrow> \<forall>b. G \<turnstile> rev (a1 # a2) <=l rev b = G \<turnstile> (a1 # a2) <=l b [PROOF STEP] fix l ls [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>a1 a2. \<forall>b. G \<turnstile> rev a2 <=l rev b = G \<turnstile> a2 <=l b \<Longrightarrow> \<forall>b. G \<turnstile> rev (a1 # a2) <=l rev b = G \<turnstile> (a1 # a2) <=l b [PROOF STEP] assume IH: "?P ls" [PROOF STATE] proof (state) this: \<forall>b. G \<turnstile> rev ls <=l rev b = G \<turnstile> ls <=l b goal (1 subgoal): 1. \<And>a1 a2. \<forall>b. G \<turnstile> rev a2 <=l rev b = G \<turnstile> a2 <=l b \<Longrightarrow> \<forall>b. G \<turnstile> rev (a1 # a2) <=l rev b = G \<turnstile> (a1 # a2) <=l b [PROOF STEP] { [PROOF STATE] proof (state) this: \<forall>b. G \<turnstile> rev ls <=l rev b = G \<turnstile> ls <=l b goal (1 subgoal): 1. \<And>a1 a2. \<forall>b. G \<turnstile> rev a2 <=l rev b = G \<turnstile> a2 <=l b \<Longrightarrow> \<forall>b. G \<turnstile> rev (a1 # a2) <=l rev b = G \<turnstile> (a1 # a2) <=l b [PROOF STEP] fix b [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>a1 a2. \<forall>b. G \<turnstile> rev a2 <=l rev b = G \<turnstile> a2 <=l b \<Longrightarrow> \<forall>b. G \<turnstile> rev (a1 # a2) <=l rev b = G \<turnstile> (a1 # a2) <=l b [PROOF STEP] have "?Q (l#ls) b" [PROOF STATE] proof (prove) goal (1 subgoal): 1. G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b [PROOF STEP] proof (cases b) [PROOF STATE] proof (state) goal (2 subgoals): 1. b = [] \<Longrightarrow> G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b 2. \<And>a list. b = a # list \<Longrightarrow> G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b [PROOF STEP] case Nil [PROOF STATE] proof (state) this: b = [] goal (2 subgoals): 1. b = [] \<Longrightarrow> G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b 2. \<And>a list. b = a # list \<Longrightarrow> G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b [PROOF STEP] thus ?thesis [PROOF STATE] proof (prove) using this: b = [] goal (1 subgoal): 1. 
G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b [PROOF STEP] by (auto dest: sup_loc_length) [PROOF STATE] proof (state) this: G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b goal (1 subgoal): 1. \<And>a list. b = a # list \<Longrightarrow> G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b [PROOF STEP] next [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>a list. b = a # list \<Longrightarrow> G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b [PROOF STEP] case (Cons a list) [PROOF STATE] proof (state) this: b = a # list goal (1 subgoal): 1. \<And>a list. b = a # list \<Longrightarrow> G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b [PROOF STEP] show ?thesis [PROOF STATE] proof (prove) goal (1 subgoal): 1. G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b [PROOF STEP] proof [PROOF STATE] proof (state) goal (2 subgoals): 1. G \<turnstile> rev (l # ls) <=l rev b \<Longrightarrow> G \<turnstile> (l # ls) <=l b 2. G \<turnstile> (l # ls) <=l b \<Longrightarrow> G \<turnstile> rev (l # ls) <=l rev b [PROOF STEP] assume "G \<turnstile> (l # ls) <=l b" [PROOF STATE] proof (state) this: G \<turnstile> (l # ls) <=l b goal (2 subgoals): 1. G \<turnstile> rev (l # ls) <=l rev b \<Longrightarrow> G \<turnstile> (l # ls) <=l b 2. G \<turnstile> (l # ls) <=l b \<Longrightarrow> G \<turnstile> rev (l # ls) <=l rev b [PROOF STEP] thus "G \<turnstile> rev (l # ls) <=l rev b" [PROOF STATE] proof (prove) using this: G \<turnstile> (l # ls) <=l b goal (1 subgoal): 1. G \<turnstile> rev (l # ls) <=l rev b [PROOF STEP] by (clarsimp simp add: Cons IH sup_loc_length sup_loc_append) [PROOF STATE] proof (state) this: G \<turnstile> rev (l # ls) <=l rev b goal (1 subgoal): 1. G \<turnstile> rev (l # ls) <=l rev b \<Longrightarrow> G \<turnstile> (l # ls) <=l b [PROOF STEP] next [PROOF STATE] proof (state) goal (1 subgoal): 1. G \<turnstile> rev (l # ls) <=l rev b \<Longrightarrow> G \<turnstile> (l # ls) <=l b [PROOF STEP] assume "G \<turnstile> rev (l # ls) <=l rev b" [PROOF STATE] proof (state) this: G \<turnstile> rev (l # ls) <=l rev b goal (1 subgoal): 1. G \<turnstile> rev (l # ls) <=l rev b \<Longrightarrow> G \<turnstile> (l # ls) <=l b [PROOF STEP] hence G: "G \<turnstile> (rev ls @ [l]) <=l (rev list @ [a])" [PROOF STATE] proof (prove) using this: G \<turnstile> rev (l # ls) <=l rev b goal (1 subgoal): 1. G \<turnstile> (rev ls @ [l]) <=l rev list @ [a] [PROOF STEP] by (simp add: Cons) [PROOF STATE] proof (state) this: G \<turnstile> (rev ls @ [l]) <=l rev list @ [a] goal (1 subgoal): 1. G \<turnstile> rev (l # ls) <=l rev b \<Longrightarrow> G \<turnstile> (l # ls) <=l b [PROOF STEP] hence "length (rev ls) = length (rev list)" [PROOF STATE] proof (prove) using this: G \<turnstile> (rev ls @ [l]) <=l rev list @ [a] goal (1 subgoal): 1. length (rev ls) = length (rev list) [PROOF STEP] by (auto dest: sup_loc_length) [PROOF STATE] proof (state) this: length (rev ls) = length (rev list) goal (1 subgoal): 1. G \<turnstile> rev (l # ls) <=l rev b \<Longrightarrow> G \<turnstile> (l # ls) <=l b [PROOF STEP] from this G [PROOF STATE] proof (chain) picking this: length (rev ls) = length (rev list) G \<turnstile> (rev ls @ [l]) <=l rev list @ [a] [PROOF STEP] obtain "G \<turnstile> rev ls <=l rev list" "G \<turnstile> l <=o a" [PROOF STATE] proof (prove) using this: length (rev ls) = length (rev list) G \<turnstile> (rev ls @ [l]) <=l rev list @ [a] goal (1 subgoal): 1. 
(\<lbrakk>G \<turnstile> rev ls <=l rev list; G \<turnstile> l <=o a\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by (simp add: sup_loc_append) [PROOF STATE] proof (state) this: G \<turnstile> rev ls <=l rev list G \<turnstile> l <=o a goal (1 subgoal): 1. G \<turnstile> rev (l # ls) <=l rev b \<Longrightarrow> G \<turnstile> (l # ls) <=l b [PROOF STEP] thus "G \<turnstile> (l # ls) <=l b" [PROOF STATE] proof (prove) using this: G \<turnstile> rev ls <=l rev list G \<turnstile> l <=o a goal (1 subgoal): 1. G \<turnstile> (l # ls) <=l b [PROOF STEP] by (simp add: Cons IH) [PROOF STATE] proof (state) this: G \<turnstile> (l # ls) <=l b goal: No subgoals! [PROOF STEP] qed [PROOF STATE] proof (state) this: G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b goal: No subgoals! [PROOF STEP] qed [PROOF STATE] proof (state) this: G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b goal (1 subgoal): 1. \<And>a1 a2. \<forall>b. G \<turnstile> rev a2 <=l rev b = G \<turnstile> a2 <=l b \<Longrightarrow> \<forall>b. G \<turnstile> rev (a1 # a2) <=l rev b = G \<turnstile> (a1 # a2) <=l b [PROOF STEP] } [PROOF STATE] proof (state) this: G \<turnstile> rev (l # ls) <=l rev ?ba2 = G \<turnstile> (l # ls) <=l ?ba2 goal (1 subgoal): 1. \<And>a1 a2. \<forall>b. G \<turnstile> rev a2 <=l rev b = G \<turnstile> a2 <=l b \<Longrightarrow> \<forall>b. G \<turnstile> rev (a1 # a2) <=l rev b = G \<turnstile> (a1 # a2) <=l b [PROOF STEP] thus "?P (l#ls)" [PROOF STATE] proof (prove) using this: G \<turnstile> rev (l # ls) <=l rev ?ba2 = G \<turnstile> (l # ls) <=l ?ba2 goal (1 subgoal): 1. \<forall>b. G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b [PROOF STEP] by blast [PROOF STATE] proof (state) this: \<forall>b. G \<turnstile> rev (l # ls) <=l rev b = G \<turnstile> (l # ls) <=l b goal: No subgoals! [PROOF STEP] qed [PROOF STATE] proof (state) this: \<forall>b. G \<turnstile> rev a <=l rev b = G \<turnstile> a <=l b goal (1 subgoal): 1. G \<turnstile> rev a <=l rev b = G \<turnstile> a <=l b [PROOF STEP] thus ?thesis [PROOF STATE] proof (prove) using this: \<forall>b. G \<turnstile> rev a <=l rev b = G \<turnstile> a <=l b goal (1 subgoal): 1. G \<turnstile> rev a <=l rev b = G \<turnstile> a <=l b [PROOF STEP] by blast [PROOF STATE] proof (state) this: G \<turnstile> rev a <=l rev b = G \<turnstile> a <=l b goal: No subgoals! [PROOF STEP] qed
# Jinliang Yang
# purpose: calculate the GCA and SCA
# updated: 3.7.2012

#Row Barcode KRN Note.x Pedigree Genotype Note.y
#3133 8757 11-8757-23 OP 16 10g-1028-21/2048 Tzi8 x P39 Diallel

# input data=d should contain trait and Genotype (Tzi x P39)
getca2 <- function(data=d, trait="KRN", square=FALSE){
    ### check the required cols
    if (!("Genotype" %in% names(data) & trait %in% names(data) ))
        stop(paste("Make sure your data frame contains columns Genotype and ", trait, sep=""));
    data$Genotype <- as.character(data$Genotype)

    ### remove the self genotype
    idx <- grep("@", data$Genotype);
    if(sum(idx)>0) data <- data[-idx,];

    ### get the p1, p2 and reciprocal col
    if(!("p1" %in% names(data) & "p2" %in% names(data) & "reciprocal" %in% names(data) )){
        data$p1 <- NA;
        data$p2 <- NA;
        data$reciprocal <- NA;
        data$Genotype <- as.character(data$Genotype);
        for(i in 1:nrow(data)){
            tem <- unlist(strsplit(data$Genotype[i], split="x"));
            tem <- sort(tem)
            data$p1[i] <- tem[1];
            data$p2[i] <- tem[2];
            data$reciprocal[i] <- paste(tem[1], tem[2], sep="_")
        }
    }

    ### start the calculation according to Method 4, Model I
    uhat <- mean(data[,trait], na.rm=T);
    inbred <- sort(unique(c(data$p1, data$p2)));

    ### General Combining Ability
    gca <- data.frame(Genotype=inbred, GCA=NA);
    for(j in 1:length(inbred)){
        inbredj = inbred[j];
        ghat <- mean(subset(data, p1==inbredj | p2==inbredj)[, trait], na.rm=T)-uhat;
        gca[gca$Genotype==inbredj,]$GCA <- ghat;
    }

    ### Specific Combining Ability
    sca <- matrix(NA, nrow=length(inbred), ncol=length(inbred)+1);
    sca <- as.data.frame(sca)
    names(sca) <- c(trait, inbred)
    sca[,1] <- inbred

    specific <- unique(data$reciprocal);
    scatable <- data.frame()
    for(k in 1:length(specific)){
        tem2 <- unlist(strsplit(specific[k], split="_"));
        myp1 <- tem2[1];
        myp2 <- tem2[2];
        pheno <- subset(data, (p1==myp1 & p2==myp2) | (p1==myp2 & p2==myp1));
        shat <- mean(pheno[, trait], na.rm=T) - subset(gca, Genotype==myp1)$GCA -
            subset(gca, Genotype==myp2)$GCA - uhat;
        if(square){
            sca[sca[,1]==myp1, myp2] <- shat;
        } else {
            temid <- paste(myp1, myp2, sep="x");
            temsca <- data.frame(Genotype=temid, SCA=shat)
            scatable <- rbind(scatable, temsca)
        }
    }

    outputlist <- list();
    names(gca)[2] <- trait;
    outputlist[[1]] <- gca;
    names(scatable)[2] <- trait;
    if(square){
        outputlist[[2]] <- sca;
    } else{
        outputlist[[2]] <- scatable;
    }
    return(outputlist)
}

###############################################
mydata <- read.csv("pheno_diallel_master_BLUE.csv")

krnca <- getca2(data=mydata, trait="KRN", square=FALSE)
akwca <- getca2(data=mydata, trait="AKW", square=FALSE)
tkwca <- getca2(data=mydata, trait="TKW", square=FALSE)
kcca <- getca2(data=mydata, trait="KC", square=FALSE)
cdca <- getca2(data=mydata, trait="CD", square=FALSE)
cwca <- getca2(data=mydata, trait="CW", square=FALSE)
clca <- getca2(data=mydata, trait="CL", square=FALSE)

########## GCA #################
gca_master <- merge(krnca[[1]], akwca[[1]], by="Genotype")
gca_master <- merge(gca_master, tkwca[[1]], by="Genotype")
gca_master <- merge(gca_master, kcca[[1]], by="Genotype")
gca_master <- merge(gca_master, cdca[[1]], by="Genotype")
gca_master <- merge(gca_master, cwca[[1]], by="Genotype")
gca_master <- merge(gca_master, clca[[1]], by="Genotype")
write.table(gca_master, "pheno_diallel_master_BLUE_GCA.csv", sep=",", row.names=FALSE, quote=FALSE)

########## SCA #################
sca_master <- merge(krnca[[2]], akwca[[2]], by="Genotype")
sca_master <- merge(sca_master, tkwca[[2]], by="Genotype")
sca_master <- merge(sca_master, kcca[[2]], by="Genotype")
sca_master <- merge(sca_master, cdca[[2]], by="Genotype")
sca_master <- merge(sca_master, cwca[[2]], by="Genotype")
sca_master <- merge(sca_master, clca[[2]], by="Genotype")
write.table(sca_master, "pheno_diallel_master_BLUE_SCA.csv", sep=",", row.names=FALSE, quote=FALSE)

# note: panel.cor is a user-defined panel function, not defined in this script
pairs(d[, c(4,7,9, 11:13)], text.panel = diag, upper.panel=panel.smooth,
      lower.panel=panel.cor, gap=0, main="KRN Correlation Plots of three obs")

#################################################################################
# make the design matrix
#####################################################################################################################################################
design <- read.table("/Users/yangjl/Documents/Heterosis_GWAS/SNP/diallel_4impute.txt", header=TRUE)

dm <- matrix(0, nrow=378, ncol=28+378)
dm <- as.data.frame(dm)
p <- as.character(unique(design$p1));
names(dm) <- c("Geno", p, as.character(design$geno))
dm$Geno <- as.character(design$geno);
for(i in 1:nrow(dm)){
    tem <- unlist(strsplit(dm$Geno[i], split="x"));
    p1 <- tem[1];
    p2 <- tem[2];
    dm[i, p1] <- 1;
    dm[i, p2] <- 1;
    dm[i, dm$Geno[i]] <- 1;
}

#################
trait <- read.csv("pheno_diallel_master_raw_040612.csv")
trait$Curtiss <- 0; # Only B73
trait$Dairy <- 0;
trait$Johnson <- 0;
trait$zumwalt <- 0;
for (i in 1:nrow(trait)){
    farm <- as.character(trait$Farm[i]);
    trait[i, farm] <- 1;
}

#############################
#FUNCTION to GCA and SCA
getca1 <- function(traitnm="TKW"){
    tem <- merge(trait[, c("Genotype2", traitnm, "Dairy", "Johnson", "zumwalt")], dm, by.x="Genotype2", by.y="Geno")
    tem <- tem[!is.na(tem[,2]),]
    yield <- tem[,2]

    idx1 <- numeric(ncol(tem))
    for(i in 3:ncol(tem)){
        idx1[i] <- sum(tem[,i])
    }
    idx <- which(idx1==0)
    dmatrix1 <- as.matrix(tem[, -idx]);

    fit1 <- lm(yield ~ dmatrix1)
    ped.hat1 <- fit1$coef
    ped.hat1 <- ped.hat1[!is.na(ped.hat1)]
    names(ped.hat1) <- gsub("dmatrix1", "", names(ped.hat1));
    names(ped.hat1)[1] <- "yhat"

    gcaout <- data.frame(Genotype=names(ped.hat1[1:29]), GCA=ped.hat1[1:29]);
    names(gcaout)[2] <- traitnm;
    scaout <- data.frame(Genotype=names(ped.hat1[30:227]), SCA=ped.hat1[30:227]);
    names(scaout)[2] <- traitnm;

    output <- list(gca=gcaout, sca=scaout)
    return(output)
}

#############
krn1 <- getca1("KRN")
krntem <- merge(krnca[[2]], krn1[[2]], by="Genotype", all.x=TRUE)
krntem[is.na(krntem)] <- 0

kc <- getca1("KC")
akw <- getca1("AKW")
tkw <- getca1("TKW")
cd <- getca1("CD")
cw <- getca1("CW")
cl <- getca1("CL")

#########################################################################################################################
x1 <- c("P1", "P1", "P1", "P1", "P2", "P2");
x2 <- c("P2", "P2", "P3", "P3", "P3", "P3")
yield <- c(10, 12, 20, 22, 14, 16)
sample <- data.frame(Parent1=x1, Parent2=x2, Yield=yield)
# getca2() and the merge with dm below expect a Genotype column
sample$Genotype <- paste(x1, x2, sep="x")

################################
getca2(data=sample, trait="Yield")

#####################################################################################################################################################
X <- matrix(c(1, 1, 0, 1, 0, 0,
              1, 1, 0, 1, 0, 0,
              1, 0, 1, 0, 1, 0,
              1, 0, 1, 0, 1, 0,
              0, 1, 1, 0, 0, 1,
              0, 1, 1, 0, 0, 1), ncol=6, byrow=TRUE)
y <- sample$Yield
XX=t(X)%*%X
library(MASS)
XXgi=ginv(XX)
Px=X%*%XXgi%*%t(X)
yhat=Px%*%y
fit <- lm(y~X)

y=c(4,1,3,5,3,3,1)
X=matrix(c(
    1,1,0,0,0,1,0,0,
    1,1,0,0,0,0,1,0,
    1,0,1,0,0,0,1,0,
    1,0,1,0,0,0,0,1,
    1,0,0,1,0,0,0,1,
    1,0,0,0,1,1,0,0,
    1,0,0,0,1,0,1,0
    ),byrow=T,nrow=7)
fit <- lm(y ~X)
XX=t(X)%*%X
XXgi=ginv(XX)
Px=X%*%XXgi%*%t(X)
yhat=Px%*%y

#################
data1 <- read.csv("~/Documents/Heterosis_GWAS/pheno2011/LL_KN_AKW_test.csv");
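# --- hand check of the Method 4 arithmetic on the toy diallel above ---
# A minimal sketch (assumes `sample` and its added Genotype column from the
# block above): getca2() takes GCA as a parent's cross-mean deviation from
# the grand mean, and SCA as a cross mean minus both GCAs and the grand mean.
uhat <- mean(sample$Yield)                             # grand mean: 94/6 ~ 15.667
gca.P1 <- mean(c(10, 12, 20, 22)) - uhat               # crosses with P1:  0.333
gca.P2 <- mean(c(10, 12, 14, 16)) - uhat               # crosses with P2: -2.667
gca.P3 <- mean(c(20, 22, 14, 16)) - uhat               # crosses with P3:  2.333
sca.P1P2 <- mean(c(10, 12)) - gca.P1 - gca.P2 - uhat   # P1xP2: -2.333
sca.P1P3 <- mean(c(20, 22)) - gca.P1 - gca.P3 - uhat   # P1xP3:  2.667
sca.P2P3 <- mean(c(14, 16)) - gca.P2 - gca.P3 - uhat   # P2xP3: -0.333
# these should reproduce the GCA/SCA tables from getca2(data=sample, trait="Yield")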
###########################################################
### Read the raw data and subset the diallel records
file1 <- read.csv("LL_KRN_test.csv")
dim(file1) #8678 8

diallel <- subset(file1, Note.y=="Diallel" | Note.y=="NAM Filler");
dim(diallel) #[1] 2850 8
#summary(diallel)

###########################################################
### Delete duplicated input and missing data
# Get rid of 4 duplicated records
dup <- which(duplicated(diallel$Barcode))
diallel <- diallel[!duplicated(diallel$Barcode),]

# Get rid of the missing records
### checking for outliers
diallel$KRN <- as.numeric(as.character(diallel$KRN));
diallel[is.na(diallel$KRN),]
diallel <- diallel[!is.na(diallel$KRN),]
hist(diallel$KRN, breaks=10)
dim(diallel) #[1] 2837 8

data2 <- read.csv("~/Documents/Heterosis_GWAS/pheno2011/LL_KRN_test.csv");
data3 <- read.csv("~/Documents/Heterosis_GWAS/pheno2011/LL_cob.test.csv");
data1 <- subset(data1, )

dm2 <- merge(sample, dm, by.x="Genotype", by.y="Geno")
yield <- dm2$Yield
dmatrix <- as.matrix(dm2[, 7:52]);
fit <- lm(yield ~ dmatrix)
version = v"6.2.1"

include("../common.jl")

# Build the tarballs
build_tarballs(ARGS, configure(version)...; preferred_gcc_version=v"6", julia_compat="1.7")
lemma open_greaterThan [continuous_intros, simp]: "open {a <..}"
(* Author: Wenda Li <[email protected] / [email protected]> *) theory Count_Circle imports Count_Half_Plane begin subsection \<open>Polynomial roots within a circle (open ball)\<close> \<comment> \<open>Roots counted WITH multiplicity\<close> definition proots_ball::"complex poly \<Rightarrow> complex \<Rightarrow> real \<Rightarrow> nat" where "proots_ball p z0 r = proots_count p (ball z0 r)" \<comment> \<open>Roots counted WITHOUT multiplicity\<close> definition proots_ball_card ::"complex poly \<Rightarrow> complex \<Rightarrow> real \<Rightarrow> nat" where "proots_ball_card p z0 r = card (proots_within p (ball z0 r))" lemma proots_ball_code1[code]: "proots_ball p z0 r = ( if r \<le> 0 then 0 else if p\<noteq>0 then proots_upper (fcompose (p \<circ>\<^sub>p [:z0, of_real r:]) [:\<i>,-1:] [:\<i>,1:]) else Code.abort (STR ''proots_ball fails when p=0.'') (\<lambda>_. proots_ball p z0 r) )" proof (cases "p=0 \<or> r\<le>0") case False have "proots_ball p z0 r = proots_count (p \<circ>\<^sub>p [:z0, of_real r:]) (ball 0 1)" unfolding proots_ball_def apply (rule proots_uball_eq[THEN arg_cong]) using False by auto also have "... = proots_upper (fcompose (p \<circ>\<^sub>p [:z0, of_real r:]) [:\<i>,-1:] [:\<i>,1:])" unfolding proots_upper_def apply (rule proots_ball_plane_eq[THEN arg_cong]) using False pcompose_eq_0[of p "[:z0, of_real r:]"] by (simp add: pcompose_eq_0) finally show ?thesis using False by auto qed (auto simp:proots_ball_def ball_empty) lemma proots_ball_card_code1[code]: "proots_ball_card p z0 r = ( if r \<le> 0 \<or> p=0 then 0 else proots_upper_card (fcompose (p \<circ>\<^sub>p [:z0, of_real r:]) [:\<i>,-1:] [:\<i>,1:]) )" proof (cases "p=0 \<or> r\<le>0") case True moreover have ?thesis when "r\<le>0" proof - have "proots_within p (ball z0 r) = {}" by (simp add: ball_empty that) then show ?thesis unfolding proots_ball_card_def using that by auto qed moreover have ?thesis when "r>0" "p=0" unfolding proots_ball_card_def using that infinite_ball[of r z0] by auto ultimately show ?thesis by argo next case False then have "p\<noteq>0" "r>0" by auto have "proots_ball_card p z0 r = card (proots_within (p \<circ>\<^sub>p [:z0, of_real r:]) (ball 0 1))" unfolding proots_ball_card_def by (rule proots_card_uball_eq[OF \<open>r>0\<close>, THEN arg_cong]) also have "... 
= proots_upper_card (fcompose (p \<circ>\<^sub>p [:z0, of_real r:]) [:\<i>,-1:] [:\<i>,1:])" unfolding proots_upper_card_def apply (rule proots_card_ball_plane_eq[THEN arg_cong]) using False pcompose_eq_0[of p "[:z0, of_real r:]"] by (simp add: pcompose_eq_0) finally show ?thesis using False by auto qed subsection \<open>Polynomial roots on a circle (sphere)\<close> \<comment> \<open>Roots counted WITH multiplicity\<close> definition proots_sphere::"complex poly \<Rightarrow> complex \<Rightarrow> real \<Rightarrow> nat" where "proots_sphere p z0 r = proots_count p (sphere z0 r)" \<comment> \<open>Roots counted WITHOUT multiplicity\<close> definition proots_sphere_card ::"complex poly \<Rightarrow> complex \<Rightarrow> real \<Rightarrow> nat" where "proots_sphere_card p z0 r = card (proots_within p (sphere z0 r))" lemma proots_sphere_card_code1[code]: "proots_sphere_card p z0 r = ( if r=0 then (if poly p z0=0 then 1 else 0) else if r < 0 \<or> p=0 then 0 else (if poly p (z0-r) =0 then 1 else 0) + proots_unbounded_line_card (fcompose (p \<circ>\<^sub>p [:z0, of_real r:]) [:\<i>,-1:] [:\<i>,1:]) 0 1 )" proof - have ?thesis when "r=0" proof - have "proots_within p {z0} = (if poly p z0 = 0 then {z0} else {})" by auto then show ?thesis unfolding proots_sphere_card_def using that by simp qed moreover have ?thesis when "r\<noteq>0" "r < 0 \<or> p=0" proof - have ?thesis when "r<0" proof - have "proots_within p (sphere z0 r) = {}" by (auto simp add: ball_empty that) then show ?thesis unfolding proots_sphere_card_def using that by auto qed moreover have ?thesis when "r>0" "p=0" unfolding proots_sphere_card_def using that infinite_sphere[of r z0] by auto ultimately show ?thesis using that by argo qed moreover have ?thesis when "r>0" "p\<noteq>0" proof - define pp where "pp = p \<circ>\<^sub>p [:z0, of_real r:]" define ppp where "ppp=fcompose pp [:\<i>, - 1:] [:\<i>, 1:]" have "pp\<noteq>0" unfolding pp_def using that pcompose_eq_0 by force have "proots_sphere_card p z0 r = card (proots_within pp (sphere 0 1))" unfolding proots_sphere_card_def pp_def by (rule proots_card_usphere_eq[OF \<open>r>0\<close>, THEN arg_cong]) also have "... = card (proots_within pp {-1} \<union> proots_within pp (sphere 0 1 - {-1}))" by (simp add: insert_absorb proots_within_union) also have "... = card (proots_within pp {-1}) + card (proots_within pp (sphere 0 1 - {-1}))" apply (rule card_Un_disjoint) using \<open>pp\<noteq>0\<close> by auto also have "... = card (proots_within pp {-1}) + card (proots_within ppp {x. 0 = Im x})" using proots_card_sphere_axis_eq[OF \<open>pp\<noteq>0\<close>,folded ppp_def] by simp also have "... = (if poly p (z0-r) =0 then 1 else 0) + proots_unbounded_line_card ppp 0 1" proof - have "proots_within pp {-1} = (if poly p (z0-r) =0 then {-1} else {})" unfolding pp_def by (auto simp:poly_pcompose) then have "card (proots_within pp {-1}) = (if poly p (z0-r) =0 then 1 else 0)" by auto moreover have "{x. Im x = 0} = unbounded_line 0 1" unfolding unbounded_line_def apply auto by (metis complex_is_Real_iff of_real_Re of_real_def) then have "card (proots_within ppp {x. 
0 = Im x}) = proots_unbounded_line_card ppp 0 1" unfolding proots_unbounded_line_card_def by simp ultimately show ?thesis by auto qed finally show ?thesis apply (fold pp_def,fold ppp_def) using that by auto qed ultimately show ?thesis by auto qed subsection \<open>Polynomial roots on a closed ball\<close> \<comment> \<open>Roots counted WITH multiplicity\<close> definition proots_cball::"complex poly \<Rightarrow> complex \<Rightarrow> real \<Rightarrow> nat" where "proots_cball p z0 r = proots_count p (cball z0 r)" \<comment> \<open>Roots counted WITHOUT multiplicity\<close> definition proots_cball_card ::"complex poly \<Rightarrow> complex \<Rightarrow> real \<Rightarrow> nat" where "proots_cball_card p z0 r = card (proots_within p (cball z0 r))" (*FIXME: this surely can be optimised/refined.*) lemma proots_cball_card_code1[code]: "proots_cball_card p z0 r = ( if r=0 then (if poly p z0=0 then 1 else 0) else if r < 0 \<or> p=0 then 0 else ( let pp=fcompose (p \<circ>\<^sub>p [:z0, of_real r:]) [:\<i>,-1:] [:\<i>,1:] in (if poly p (z0-r) =0 then 1 else 0) + proots_unbounded_line_card pp 0 1 + proots_upper_card pp ) )" proof - have ?thesis when "r=0" proof - have "proots_within p {z0} = (if poly p z0 = 0 then {z0} else {})" by auto then show ?thesis unfolding proots_cball_card_def using that by simp qed moreover have ?thesis when "r\<noteq>0" "r < 0 \<or> p=0" proof - have ?thesis when "r<0" proof - have "proots_within p (cball z0 r) = {}" by (auto simp add: ball_empty that) then show ?thesis unfolding proots_cball_card_def using that by auto qed moreover have ?thesis when "r>0" "p=0" unfolding proots_cball_card_def using that infinite_cball[of r z0] by auto ultimately show ?thesis using that by argo qed moreover have ?thesis when "p\<noteq>0" "r>0" proof - define pp where "pp=fcompose (p \<circ>\<^sub>p [:z0, of_real r:]) [:\<i>,-1:] [:\<i>,1:]" have "proots_cball_card p z0 r = card (proots_within p (sphere z0 r) \<union> proots_within p (ball z0 r))" unfolding proots_cball_card_def apply (simp add:proots_within_union) by (metis Diff_partition cball_diff_sphere sphere_cball) also have "... = card (proots_within p (sphere z0 r)) + card (proots_within p (ball z0 r))" apply (rule card_Un_disjoint) using \<open>p\<noteq>0\<close> by auto also have "... = (if poly p (z0-r) =0 then 1 else 0) + proots_unbounded_line_card pp 0 1 + proots_upper_card pp" using proots_sphere_card_code1[of p z0 r,folded pp_def,unfolded proots_sphere_card_def] proots_ball_card_code1[of p z0 r,folded pp_def,unfolded proots_ball_card_def] that by simp finally show ?thesis apply (fold pp_def) using that by auto qed ultimately show ?thesis by auto qed end
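Read as ordinary mathematics, the two rewriting steps behind proots_ball_code1 above (per the lemmas proots_uball_eq and proots_ball_plane_eq it applies, roots counted with multiplicity) first rescale the ball to the unit disk and then pull the disk back to the upper half-plane along a Möbius map. A sketch, writing $\tilde p$ for the denominator-cleared numerator that fcompose produces:

$$
\#\{z : p(z)=0,\ |z-z_0|<r\}
= \#\{w : p(z_0+rw)=0,\ |w|<1\}
= \#\{u : \tilde p(u)=0,\ \operatorname{Im} u>0\},
\qquad
\tilde p(u) = \text{numerator of } p\!\left(z_0 + r\,\frac{i-u}{i+u}\right),
$$

since $u \mapsto (i-u)/(i+u)$ maps the upper half-plane bijectively onto the open unit disk; the half-plane count on the right is exactly what proots_upper computes.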
[STATEMENT] lemma irreflexive_inf_closed: "irreflexive x \<Longrightarrow> irreflexive (x \<sqinter> y)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. irreflexive x \<Longrightarrow> irreflexive (x \<sqinter> y) [PROOF STEP] by (simp add: le_infI1)
L.A.M.B. started out as a collaboration with LeSportsac in 2003. The name L.A.M.B. is an acronym which stands for Love. Angel. Music. Baby., which is also the name of Stefani's first solo album.
[STATEMENT] lemma domino_finite: assumes "d \<in> domino" shows "finite d" [PROOF STATE] proof (prove) goal (1 subgoal): 1. finite d [PROOF STEP] using assms [PROOF STATE] proof (prove) using this: d \<in> domino goal (1 subgoal): 1. finite d [PROOF STEP] proof induct [PROOF STATE] proof (state) goal (2 subgoals): 1. \<And>i j. finite {(i, j), (i, j + 1)} 2. \<And>i j. finite {(i, j), (i + 1, j)} [PROOF STEP] fix i j :: nat [PROOF STATE] proof (state) goal (2 subgoals): 1. \<And>i j. finite {(i, j), (i, j + 1)} 2. \<And>i j. finite {(i, j), (i + 1, j)} [PROOF STEP] show "finite {(i, j), (i, j + 1)}" [PROOF STATE] proof (prove) goal (1 subgoal): 1. finite {(i, j), (i, j + 1)} [PROOF STEP] by (intro finite.intros) [PROOF STATE] proof (state) this: finite {(i, j), (i, j + 1)} goal (1 subgoal): 1. \<And>i j. finite {(i, j), (i + 1, j)} [PROOF STEP] show "finite {(i, j), (i + 1, j)}" [PROOF STATE] proof (prove) goal (1 subgoal): 1. finite {(i, j), (i + 1, j)} [PROOF STEP] by (intro finite.intros) [PROOF STATE] proof (state) this: finite {(i, j), (i + 1, j)} goal: No subgoals! [PROOF STEP] qed
(* Title: JinjaThreads/J/SmallTypeSafe.thy Author: Tobias Nipkow, Andreas Lochbihler *) section \<open>Type Safety Proof\<close> theory TypeSafe imports Progress DefAssPreservation begin subsection\<open>Basic preservation lemmas\<close> text\<open>First two easy preservation lemmas.\<close> theorem (in J_conf_read) shows red_preserves_hconf: "\<lbrakk> extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>; P,E,hp s \<turnstile> e : T; hconf (hp s) \<rbrakk> \<Longrightarrow> hconf (hp s')" and reds_preserves_hconf: "\<lbrakk> extTA,P,t \<turnstile> \<langle>es,s\<rangle> [-ta\<rightarrow>] \<langle>es',s'\<rangle>; P,E,hp s \<turnstile> es [:] Ts; hconf (hp s) \<rbrakk> \<Longrightarrow> hconf (hp s')" proof (induct arbitrary: T E and Ts E rule: red_reds.inducts) case RedNew thus ?case by(auto intro: hconf_heap_ops_mono) next case RedNewFail thus ?case by(auto intro: hconf_heap_ops_mono) next case RedNewArray thus ?case by(auto intro: hconf_heap_ops_mono) next case RedNewArrayFail thus ?case by(auto intro: hconf_heap_ops_mono) next case (RedAAss h a U n i v U' h' l) from \<open>sint i < int n\<close> \<open>0 <=s i\<close> have "nat (sint i) < n" by(metis nat_less_iff sint_0 word_sle_def) thus ?case using RedAAss by(fastforce elim: hconf_heap_write_mono intro: addr_loc_type.intros simp add: conf_def) next case RedFAss thus ?case by(fastforce elim: hconf_heap_write_mono intro: addr_loc_type.intros simp add: conf_def) next case RedCASSucceed thus ?case by(fastforce elim: hconf_heap_write_mono intro: addr_loc_type.intros simp add: conf_def) next case (RedCallExternal s a U M Ts T' D vs ta va h' ta' e' s') hence "P,hp s \<turnstile> a\<bullet>M(vs) : T" by(fastforce simp add: external_WT'_iff dest: sees_method_fun) with RedCallExternal show ?case by(auto dest: external_call_hconf) qed auto theorem (in J_heap) red_preserves_lconf: "\<lbrakk> extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>; P,E,hp s \<turnstile> e:T; P,hp s \<turnstile> lcl s (:\<le>) E \<rbrakk> \<Longrightarrow> P,hp s' \<turnstile> lcl s' (:\<le>) E" and reds_preserves_lconf: "\<lbrakk> extTA,P,t \<turnstile> \<langle>es,s\<rangle> [-ta\<rightarrow>] \<langle>es',s'\<rangle>; P,E,hp s \<turnstile> es[:]Ts; P,hp s \<turnstile> lcl s (:\<le>) E \<rbrakk> \<Longrightarrow> P,hp s' \<turnstile> lcl s' (:\<le>) E" proof(induct arbitrary: T E and Ts E rule:red_reds.inducts) case RedNew thus ?case by(fastforce intro:lconf_hext hext_heap_ops simp del: fun_upd_apply) next case RedNewFail thus ?case by(auto intro:lconf_hext hext_heap_ops simp del: fun_upd_apply) next case RedNewArray thus ?case by(fastforce intro:lconf_hext hext_heap_ops simp del: fun_upd_apply) next case RedNewArrayFail thus ?case by(fastforce intro:lconf_hext hext_heap_ops simp del: fun_upd_apply) next case RedLAss thus ?case by(fastforce elim: lconf_upd simp add: conf_def simp del: fun_upd_apply ) next case RedAAss thus ?case by(fastforce intro:lconf_hext hext_heap_ops simp del: fun_upd_apply) next case RedFAss thus ?case by(fastforce intro:lconf_hext hext_heap_ops simp del: fun_upd_apply) next case RedCASSucceed thus ?case by(fastforce intro:lconf_hext hext_heap_ops simp del: fun_upd_apply) next case (BlockRed e h x V vo ta e' h' x' T T' E) note red = \<open>extTA,P,t \<turnstile> \<langle>e,(h, x(V := vo))\<rangle> -ta\<rightarrow> \<langle>e',(h', x')\<rangle>\<close> note IH = \<open>\<And>T E. 
    \<lbrakk>P,E,hp (h, x(V := vo)) \<turnstile> e : T; P,hp (h, x(V := vo)) \<turnstile> lcl (h, x(V := vo)) (:\<le>) E\<rbrakk>
    \<Longrightarrow> P,hp (h', x') \<turnstile> lcl (h', x') (:\<le>) E\<close>
  note wt = \<open>P,E,hp (h, x) \<turnstile> {V:T=vo; e} : T'\<close>
  note lconf = \<open>P,hp (h, x) \<turnstile> lcl (h, x) (:\<le>) E\<close>
  from lconf_hext[OF lconf[simplified] red_hext_incr[OF red, simplified]]
  have "P,h' \<turnstile> x (:\<le>) E" .
  moreover from wt have "P,E(V\<mapsto>T),h \<turnstile> e : T'" by(cases vo, auto)
  moreover from lconf wt have "P,h \<turnstile> x(V := vo) (:\<le>) E(V \<mapsto> T)"
    by(cases vo)(simp add: lconf_def,auto intro: lconf_upd2 simp add: conf_def)
  ultimately have "P,h' \<turnstile> x' (:\<le>) E(V\<mapsto>T)" by(auto intro: IH[simplified])
  with \<open>P,h' \<turnstile> x (:\<le>) E\<close> show ?case
    by(auto simp add: lconf_def split: if_split_asm)
next
  case (RedCallExternal s a U M Ts T' D vs ta va h' ta' e' s')
  from \<open>P,t \<turnstile> \<langle>a\<bullet>M(vs),hp s\<rangle> -ta\<rightarrow>ext \<langle>va,h'\<rangle>\<close> have "hp s \<unlhd> h'" by(rule red_external_hext)
  with \<open>s' = (h', lcl s)\<close> \<open>P,hp s \<turnstile> lcl s (:\<le>) E\<close>
  show ?case by(auto intro: lconf_hext)
qed auto

text\<open>Combining conformance of heap and local variables:\<close>

definition (in J_heap_conf_base) sconf :: "env \<Rightarrow> ('addr, 'heap) Jstate \<Rightarrow> bool" ("_ \<turnstile> _ \<surd>" [51,51]50)
where "E \<turnstile> s \<surd> \<equiv> let (h,l) = s in hconf h \<and> P,h \<turnstile> l (:\<le>) E \<and> preallocated h"

context J_conf_read begin

lemma red_preserves_sconf:
  "\<lbrakk> extTA,P,t \<turnstile> \<langle>e,s\<rangle> -tas\<rightarrow> \<langle>e',s'\<rangle>; P,E,hp s \<turnstile> e : T; E \<turnstile> s \<surd> \<rbrakk> \<Longrightarrow> E \<turnstile> s' \<surd>"
apply(auto dest: red_preserves_hconf red_preserves_lconf simp add:sconf_def)
apply(fastforce dest: red_hext_incr intro: preallocated_hext)
done

lemma reds_preserves_sconf:
  "\<lbrakk> extTA,P,t \<turnstile> \<langle>es,s\<rangle> [-ta\<rightarrow>] \<langle>es',s'\<rangle>; P,E,hp s \<turnstile> es [:] Ts; E \<turnstile> s \<surd> \<rbrakk> \<Longrightarrow> E \<turnstile> s' \<surd>"
apply(auto dest: reds_preserves_hconf reds_preserves_lconf simp add: sconf_def)
apply(fastforce dest: reds_hext_incr intro: preallocated_hext)
done

end

lemma (in J_heap_base) wt_external_call:
  "\<lbrakk> conf_extRet P h va T; P,E,h \<turnstile> e : T \<rbrakk> \<Longrightarrow> \<exists>T'. P,E,h \<turnstile> extRet2J e va : T' \<and> P \<turnstile> T' \<le> T"
by(cases va)(auto simp add: conf_def)

subsection "Subject reduction"

theorem (in J_conf_read) assumes wf: "wf_J_prog P"
  shows subject_reduction:
  "\<lbrakk> extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>; E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e:T; P,hp s \<turnstile> t \<surd>t \<rbrakk>
  \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> e':T' \<and> P \<turnstile> T' \<le> T"
  and subjects_reduction:
  "\<lbrakk> extTA,P,t \<turnstile> \<langle>es,s\<rangle> [-ta\<rightarrow>] \<langle>es',s'\<rangle>; E \<turnstile> s \<surd>; P,E,hp s \<turnstile> es[:]Ts; P,hp s \<turnstile> t \<surd>t \<rbrakk>
  \<Longrightarrow> \<exists>Ts'.
  P,E,hp s' \<turnstile> es'[:]Ts' \<and> P \<turnstile> Ts' [\<le>] Ts"
proof (induct arbitrary: T E and Ts E rule:red_reds.inducts)
  case RedNew thus ?case by(auto dest: allocate_SomeD)
next
  case RedNewFail thus ?case unfolding sconf_def
    by(fastforce intro:typeof_OutOfMemory preallocated_heap_ops simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case NewArrayRed thus ?case by fastforce
next
  case RedNewArray thus ?case by(auto dest: allocate_SomeD)
next
  case RedNewArrayNegative thus ?case unfolding sconf_def
    by(fastforce intro: preallocated_heap_ops simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case RedNewArrayFail thus ?case unfolding sconf_def
    by(fastforce intro:typeof_OutOfMemory preallocated_heap_ops simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case (CastRed e s ta e' s' C T E)
  have esse: "extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>"
    and IH: "\<And>T E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> e' : T' \<and> P \<turnstile> T' \<le> T"
    and hconf: "E \<turnstile> s \<surd>"
    and wtc: "P,E,hp s \<turnstile> Cast C e : T" by fact+
  thus ?case
  proof(clarsimp)
    fix T'
    assume wte: "P,E,hp s \<turnstile> e : T'" "is_type P T"
    from wte and hconf and IH and \<open>P,hp s \<turnstile> t \<surd>t\<close>
    have "\<exists>U. P,E,hp s' \<turnstile> e' : U \<and> P \<turnstile> U \<le> T'" by simp
    then obtain U where wtee: "P,E,hp s' \<turnstile> e' : U" and UsTT: "P \<turnstile> U \<le> T'" by blast
    from wtee \<open>is_type P T\<close> have "P,E,hp s' \<turnstile> Cast T e' : T" by(rule WTrtCast)
    thus "\<exists>T'. P,E,hp s' \<turnstile> Cast T e' : T' \<and> P \<turnstile> T' \<le> T" by blast
  qed
next
  case RedCast thus ?case by(clarsimp simp add: is_refT_def)
next
  case RedCastFail thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case (InstanceOfRed e s ta e' s' U T E)
  have IH: "\<And>T E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> e' : T' \<and> P \<turnstile> T' \<le> T"
    and hconf: "E \<turnstile> s \<surd>"
    and wtc: "P,E,hp s \<turnstile> e instanceof U : T"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wtc obtain T' where "P,E,hp s \<turnstile> e : T'" by auto
  from IH[OF hconf this tconf] obtain T'' where "P,E,hp s' \<turnstile> e' : T''" by auto
  with wtc show ?case by auto
next
  case RedInstanceOf thus ?case by(clarsimp)
next
  case (BinOpRed1 e\<^sub>1 s ta e\<^sub>1' s' bop e\<^sub>2 T E)
  have red: "extTA,P,t \<turnstile> \<langle>e\<^sub>1, s\<rangle> -ta\<rightarrow> \<langle>e\<^sub>1', s'\<rangle>"
    and IH: "\<And>T E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e\<^sub>1:T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>U.
      P,E,hp s' \<turnstile> e\<^sub>1' : U \<and> P \<turnstile> U \<le> T"
    and conf: "E \<turnstile> s \<surd>" and wt: "P,E,hp s \<turnstile> e\<^sub>1 \<guillemotleft>bop\<guillemotright> e\<^sub>2 : T"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wt obtain T1 T2 where wt1: "P,E,hp s \<turnstile> e\<^sub>1 : T1"
    and wt2: "P,E,hp s \<turnstile> e\<^sub>2 : T2" and wtbop: "P \<turnstile> T1\<guillemotleft>bop\<guillemotright>T2 : T" by auto
  from IH[OF conf wt1 tconf] obtain T1'
    where wt1': "P,E,hp s' \<turnstile> e\<^sub>1' : T1'" and sub: "P \<turnstile> T1' \<le> T1" by blast
  from WTrt_binop_widen_mono[OF wtbop sub widen_refl]
  obtain T' where wtbop': "P \<turnstile> T1'\<guillemotleft>bop\<guillemotright>T2 : T'" and sub': "P \<turnstile> T' \<le> T" by blast
  from wt1' WTrt_hext_mono[OF wt2 red_hext_incr[OF red]] wtbop'
  have "P,E,hp s' \<turnstile> e\<^sub>1' \<guillemotleft>bop\<guillemotright> e\<^sub>2 : T'" by(rule WTrtBinOp)
  with sub' show ?case by blast
next
  case (BinOpRed2 e\<^sub>2 s ta e\<^sub>2' s' v\<^sub>1 bop T E)
  have red: "extTA,P,t \<turnstile> \<langle>e\<^sub>2,s\<rangle> -ta\<rightarrow> \<langle>e\<^sub>2',s'\<rangle>" by fact
  have IH: "\<And>E T. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e\<^sub>2:T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>U. P,E,hp s' \<turnstile> e\<^sub>2' : U \<and> P \<turnstile> U \<le> T"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  have conf: "E \<turnstile> s \<surd>" and wt: "P,E,hp s \<turnstile> (Val v\<^sub>1) \<guillemotleft>bop\<guillemotright> e\<^sub>2 : T" by fact+
  from wt obtain T1 T2 where wt1: "P,E,hp s \<turnstile> Val v\<^sub>1 : T1"
    and wt2: "P,E,hp s \<turnstile> e\<^sub>2 : T2" and wtbop: "P \<turnstile> T1\<guillemotleft>bop\<guillemotright>T2 : T" by auto
  from IH[OF conf wt2 tconf] obtain T2'
    where wt2': "P,E,hp s' \<turnstile> e\<^sub>2' : T2'" and sub: "P \<turnstile> T2' \<le> T2" by blast
  from WTrt_binop_widen_mono[OF wtbop widen_refl sub]
  obtain T' where wtbop': "P \<turnstile> T1\<guillemotleft>bop\<guillemotright>T2' : T'" and sub': "P \<turnstile> T' \<le> T" by blast
  from WTrt_hext_mono[OF wt1 red_hext_incr[OF red]] wt2' wtbop'
  have "P,E,hp s' \<turnstile> Val v\<^sub>1 \<guillemotleft>bop\<guillemotright> e\<^sub>2' : T'" by(rule WTrtBinOp)
  with sub' show ?case by blast
next
  case (RedBinOp bop v1 v2 v s)
  from \<open>E \<turnstile> s \<surd>\<close> have preh: "preallocated (hp s)" by(cases s)(simp add: sconf_def)
  from \<open>P,E,hp s \<turnstile> Val v1 \<guillemotleft>bop\<guillemotright> Val v2 : T\<close>
  obtain T1 T2 where "typeof\<^bsub>hp s\<^esub> v1 = \<lfloor>T1\<rfloor>" "typeof\<^bsub>hp s\<^esub> v2 = \<lfloor>T2\<rfloor>" "P \<turnstile> T1\<guillemotleft>bop\<guillemotright>T2 : T" by auto
  with wf preh have "P,hp s \<turnstile> v :\<le> T" using \<open>binop bop v1 v2 = \<lfloor>Inl v\<rfloor>\<close> by(rule binop_type)
  thus ?case by(auto simp add: conf_def)
next
  case (RedBinOpFail bop v1 v2 a s)
  from \<open>E \<turnstile> s \<surd>\<close> have preh: "preallocated (hp s)" by(cases s)(simp add: sconf_def)
  from \<open>P,E,hp s \<turnstile> Val v1 \<guillemotleft>bop\<guillemotright> Val v2 : T\<close>
  obtain T1 T2 where "typeof\<^bsub>hp s\<^esub> v1 = \<lfloor>T1\<rfloor>" "typeof\<^bsub>hp s\<^esub> v2 = \<lfloor>T2\<rfloor>" "P \<turnstile> T1\<guillemotleft>bop\<guillemotright>T2 : T" by auto
  with wf preh have "P,hp s \<turnstile> Addr a :\<le> Class Throwable" using \<open>binop bop v1 v2 = \<lfloor>Inr a\<rfloor>\<close> by(rule
    binop_type)
  thus ?case by(auto simp add: conf_def)
next
  case RedVar thus ?case by (fastforce simp:sconf_def lconf_def conf_def)
next
  case LAssRed thus ?case by(blast intro:widen_trans)
next
  case RedLAss thus ?case by fastforce
next
  case (AAccRed1 a s ta a' s' i T E)
  have IH: "\<And>E T. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> a : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> a' : T' \<and> P \<turnstile> T' \<le> T"
    and assa: "extTA,P,t \<turnstile> \<langle>a,s\<rangle> -ta\<rightarrow> \<langle>a',s'\<rangle>"
    and wt: "P,E,hp s \<turnstile> a\<lfloor>i\<rceil> : T"
    and hconf: "E \<turnstile> s \<surd>"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wt have wti: "P,E,hp s \<turnstile> i : Integer" by auto
  from wti red_hext_incr[OF assa] have wti': "P,E,hp s' \<turnstile> i : Integer" by - (rule WTrt_hext_mono)
  { assume wta: "P,E,hp s \<turnstile> a : T\<lfloor>\<rceil>"
    from IH[OF hconf wta tconf]
    obtain U where wta': "P,E,hp s' \<turnstile> a' : U" and UsubT: "P \<turnstile> U \<le> T\<lfloor>\<rceil>" by fastforce
    with wta' wti' have ?case by(cases U, auto simp add: widen_Array) }
  moreover
  { assume wta: "P,E,hp s \<turnstile> a : NT"
    from IH[OF hconf wta tconf] have "P,E,hp s' \<turnstile> a' : NT" by fastforce
    from this wti' have ?case by(fastforce intro:WTrtAAccNT) }
  ultimately show ?case using wt by auto
next
  case (AAccRed2 i s ta i' s' a T E)
  have IH: "\<And>E T. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> i : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> i' : T' \<and> P \<turnstile> T' \<le> T"
    and issi: "extTA,P,t \<turnstile> \<langle>i,s\<rangle> -ta\<rightarrow> \<langle>i',s'\<rangle>"
    and wt: "P,E,hp s \<turnstile> Val a\<lfloor>i\<rceil> : T"
    and sconf: "E \<turnstile> s \<surd>"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wt have wti: "P,E,hp s \<turnstile> i : Integer" by auto
  from wti IH sconf tconf have wti': "P,E,hp s' \<turnstile> i' : Integer" by blast
  from wt show ?case
  proof (rule WTrt_elim_cases)
    assume wta: "P,E,hp s \<turnstile> Val a : T\<lfloor>\<rceil>"
    from wta red_hext_incr[OF issi] have wta': "P,E,hp s' \<turnstile> Val a : T\<lfloor>\<rceil>" by (rule WTrt_hext_mono)
    from wta' wti' show ?case by(fastforce)
  next
    assume wta: "P,E,hp s \<turnstile> Val a : NT"
    from wta red_hext_incr[OF issi] have wta': "P,E,hp s' \<turnstile> Val a : NT" by (rule WTrt_hext_mono)
    from wta' wti' show ?case by(fastforce elim:WTrtAAccNT)
  qed
next
  case RedAAccNull thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case RedAAccBounds thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case (RedAAcc h a T n i v l T' E)
  from \<open>E \<turnstile> (h, l) \<surd>\<close> have "hconf h" by(clarsimp simp add: sconf_def)
  from \<open>0 <=s i\<close> \<open>sint i < int n\<close>
  have "nat (sint i) < n" by(metis nat_less_iff sint_0 word_sle_def)
  with \<open>typeof_addr h a = \<lfloor>Array_type T n\<rfloor>\<close>
  have "P,h \<turnstile> a@ACell (nat (sint i)) : T" by(auto intro: addr_loc_type.intros)
  from heap_read_conf[OF \<open>heap_read h a (ACell (nat (sint i))) v\<close> this] \<open>hconf h\<close>
  have "P,h \<turnstile> v :\<le> T" by simp
  thus ?case using RedAAcc by(auto simp add: conf_def)
next
  case (AAssRed1 a s ta a' s' i e T E)
  have IH: "\<And>E T.
      \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> a : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> a' : T' \<and> P \<turnstile> T' \<le> T"
    and assa: "extTA,P,t \<turnstile> \<langle>a,s\<rangle> -ta\<rightarrow> \<langle>a',s'\<rangle>"
    and wt: "P,E,hp s \<turnstile> a\<lfloor>i\<rceil> := e : T"
    and sconf: "E \<turnstile> s \<surd>"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wt have void: "T = Void" by blast
  from wt have wti: "P,E,hp s \<turnstile> i : Integer" by auto
  from wti red_hext_incr[OF assa] have wti': "P,E,hp s' \<turnstile> i : Integer" by - (rule WTrt_hext_mono)
  { assume wta: "P,E,hp s \<turnstile> a : NT"
    from IH[OF sconf wta tconf] have wta': "P,E,hp s' \<turnstile> a' : NT" by fastforce
    from wt wta obtain V where wte: "P,E,hp s \<turnstile> e : V" by(auto)
    from wte red_hext_incr[OF assa] have wte': "P,E,hp s' \<turnstile> e : V" by - (rule WTrt_hext_mono)
    from wta' wti' wte' void have ?case by(fastforce elim: WTrtAAssNT) }
  moreover
  { fix U
    assume wta: "P,E,hp s \<turnstile> a : U\<lfloor>\<rceil>"
    from IH[OF sconf wta tconf]
    obtain U' where wta': "P,E,hp s' \<turnstile> a' : U'" and UsubT: "P \<turnstile> U' \<le> U\<lfloor>\<rceil>" by fastforce
    with wta' have ?case
    proof(cases U')
      case NT
      assume UNT: "U' = NT"
      from UNT wt wta obtain V where wte: "P,E,hp s \<turnstile> e : V" by(auto)
      from wte red_hext_incr[OF assa] have wte': "P,E,hp s' \<turnstile> e : V" by - (rule WTrt_hext_mono)
      from wta' UNT wti' wte' void show ?thesis by(fastforce elim: WTrtAAssNT)
    next
      case (Array A)
      have UA: "U' = A\<lfloor>\<rceil>" by fact
      with UA UsubT wt wta obtain V where wte: "P,E,hp s \<turnstile> e : V" by auto
      from wte red_hext_incr[OF assa] have wte': "P,E,hp s' \<turnstile> e : V" by - (rule WTrt_hext_mono)
      with wta' wte' UA wti' void show ?thesis by (fast elim:WTrtAAss)
    qed(simp_all add: widen_Array) }
  ultimately show ?case using wt by blast
next
  case (AAssRed2 i s ta i' s' a e T E)
  have IH: "\<And>E T. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> i : T; P,hp s \<turnstile> t \<surd>t \<rbrakk> \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> i' : T' \<and> P \<turnstile> T' \<le> T"
    and issi: "extTA,P,t \<turnstile> \<langle>i,s\<rangle> -ta\<rightarrow> \<langle>i',s'\<rangle>"
    and wt: "P,E,hp s \<turnstile> Val a\<lfloor>i\<rceil> := e : T"
    and sconf: "E \<turnstile> s \<surd>"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wt have void: "T = Void" by blast
  from wt have wti: "P,E,hp s \<turnstile> i : Integer" by auto
  from IH[OF sconf wti tconf] have wti': "P,E,hp s' \<turnstile> i' : Integer" by fastforce
  from wt show ?case
  proof(rule WTrt_elim_cases)
    fix U T'
    assume wta: "P,E,hp s \<turnstile> Val a : U\<lfloor>\<rceil>" and wte: "P,E,hp s \<turnstile> e : T'"
    from wte red_hext_incr[OF issi] have wte': "P,E,hp s' \<turnstile> e : T'" by - (rule WTrt_hext_mono)
    from wta red_hext_incr[OF issi] have wta': "P,E,hp s' \<turnstile> Val a : U\<lfloor>\<rceil>" by - (rule WTrt_hext_mono)
    from wta' wti' wte' void show ?case by (fastforce elim:WTrtAAss)
  next
    fix T'
    assume wta: "P,E,hp s \<turnstile> Val a : NT" and wte: "P,E,hp s \<turnstile> e : T'"
    from wte red_hext_incr[OF issi] have wte': "P,E,hp s' \<turnstile> e : T'" by - (rule WTrt_hext_mono)
    from wta red_hext_incr[OF issi] have wta': "P,E,hp s' \<turnstile> Val a : NT" by - (rule WTrt_hext_mono)
    from wta' wti' wte' void show ?case by (fastforce elim:WTrtAAss)
  qed
next
  case (AAssRed3 e s ta e' s' a i T E)
  have IH: "\<And>E T.
      \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> e' : T' \<and> P \<turnstile> T' \<le> T"
    and issi: "extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>"
    and wt: "P,E,hp s \<turnstile> Val a\<lfloor>Val i\<rceil> := e : T"
    and sconf: "E \<turnstile> s \<surd>"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wt have void: "T = Void" by blast
  from wt have wti: "P,E,hp s \<turnstile> Val i : Integer" by auto
  from wti red_hext_incr[OF issi] have wti': "P,E,hp s' \<turnstile> Val i : Integer" by - (rule WTrt_hext_mono)
  from wt show ?case
  proof(rule WTrt_elim_cases)
    fix U T'
    assume wta: "P,E,hp s \<turnstile> Val a : U\<lfloor>\<rceil>" and wte: "P,E,hp s \<turnstile> e : T'"
    from wta red_hext_incr[OF issi] have wta': "P,E,hp s' \<turnstile> Val a : U\<lfloor>\<rceil>" by - (rule WTrt_hext_mono)
    from IH[OF sconf wte tconf] obtain V where wte': "P,E,hp s' \<turnstile> e' : V" by fastforce
    from wta' wti' wte' void show ?case by (fastforce elim:WTrtAAss)
  next
    fix T'
    assume wta: "P,E,hp s \<turnstile> Val a : NT" and wte: "P,E,hp s \<turnstile> e : T'"
    from wta red_hext_incr[OF issi] have wta': "P,E,hp s' \<turnstile> Val a : NT" by - (rule WTrt_hext_mono)
    from IH[OF sconf wte tconf] obtain V where wte': "P,E,hp s' \<turnstile> e' : V" by fastforce
    from wta' wti' wte' void show ?case by (fastforce elim:WTrtAAss)
  qed
next
  case RedAAssNull thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case RedAAssBounds thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case RedAAssStore thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case RedAAss thus ?case by(auto simp del:fun_upd_apply)
next
  case (ALengthRed a s ta a' s' T E)
  note IH = \<open>\<And>T'. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> a : T'; P,hp s \<turnstile> t \<surd>t\<rbrakk>
    \<Longrightarrow> \<exists>T''. P,E,hp s' \<turnstile> a' : T'' \<and> P \<turnstile> T'' \<le> T'\<close>
  from \<open>P,E,hp s \<turnstile> a\<bullet>length : T\<close> show ?case
  proof(rule WTrt_elim_cases)
    fix T'
    assume [simp]: "T = Integer"
      and wta: "P,E,hp s \<turnstile> a : T'\<lfloor>\<rceil>"
    from wta \<open>E \<turnstile> s \<surd>\<close> IH \<open>P,hp s \<turnstile> t \<surd>t\<close>
    obtain T'' where wta': "P,E,hp s' \<turnstile> a' : T''" and sub: "P \<turnstile> T'' \<le> T'\<lfloor>\<rceil>" by blast
    from sub have "P,E,hp s' \<turnstile> a'\<bullet>length : Integer"
      unfolding widen_Array
    proof(rule disjE)
      assume "T'' = NT"
      with wta' show ?thesis by(auto)
    next
      assume "\<exists>V.
        T'' = V\<lfloor>\<rceil> \<and> P \<turnstile> V \<le> T'"
      then obtain V where "T'' = V\<lfloor>\<rceil>" "P \<turnstile> V \<le> T'" by blast
      with wta' show ?thesis by -(rule WTrtALength, simp)
    qed
    thus ?thesis by(simp)
  next
    assume "P,E,hp s \<turnstile> a : NT"
    with \<open>E \<turnstile> s \<surd>\<close> IH \<open>P,hp s \<turnstile> t \<surd>t\<close>
    obtain T'' where wta': "P,E,hp s' \<turnstile> a' : T''" and sub: "P \<turnstile> T'' \<le> NT" by blast
    from sub have "T'' = NT" by auto
    with wta' show ?thesis by(auto)
  qed
next
  case (RedALength h a T n l T' E)
  from \<open>P,E,hp (h, l) \<turnstile> addr a\<bullet>length : T'\<close> \<open>typeof_addr h a = \<lfloor>Array_type T n\<rfloor>\<close>
  have [simp]: "T' = Integer" by(auto)
  thus ?case by(auto)
next
  case RedALengthNull thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case (FAccRed e s ta e' s' F D T E)
  have IH: "\<And>E T. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>U. P,E,hp s' \<turnstile> e' : U \<and> P \<turnstile> U \<le> T"
    and conf: "E \<turnstile> s \<surd>"
    and wt: "P,E,hp s \<turnstile> e\<bullet>F{D} : T"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  \<comment> \<open>Now distinguish the two cases how wt can have arisen.\<close>
  { fix T' C fm
    assume wte: "P,E,hp s \<turnstile> e : T'"
      and icto: "class_type_of' T' = \<lfloor>C\<rfloor>"
      and has: "P \<turnstile> C has F:T (fm) in D"
    from IH[OF conf wte tconf]
    obtain U where wte': "P,E,hp s' \<turnstile> e' : U" and UsubC: "P \<turnstile> U \<le> T'" by auto
    \<comment> \<open>Now distinguish what @{term U} can be.\<close>
    with UsubC have ?case
    proof(cases "U = NT")
      case True
      thus ?thesis using wte' by(blast intro:WTrtFAccNT widen_refl)
    next
      case False
      with icto UsubC obtain C' where icto': "class_type_of' U = \<lfloor>C'\<rfloor>"
        and C'subC: "P \<turnstile> C' \<preceq>\<^sup>* C" by(rule widen_is_class_type_of)
      from has_field_mono[OF has C'subC] wte' icto'
      show ?thesis by(auto intro!:WTrtFAcc)
    qed }
  moreover
  { assume "P,E,hp s \<turnstile> e : NT"
    hence "P,E,hp s' \<turnstile> e' : NT" using IH[OF conf _ tconf] by fastforce
    hence ?case by(fastforce intro:WTrtFAccNT widen_refl) }
  ultimately show ?case using wt by blast
next
  case RedFAcc thus ?case unfolding sconf_def
    by(fastforce dest: heap_read_conf intro: addr_loc_type.intros simp add: conf_def)
next
  case RedFAccNull thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case (FAssRed1 e s ta e' s' F D e\<^sub>2)
  have red: "extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>"
    and IH: "\<And>E T. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>U.
      P,E,hp s' \<turnstile> e' : U \<and> P \<turnstile> U \<le> T"
    and conf: "E \<turnstile> s \<surd>"
    and wt: "P,E,hp s \<turnstile> e\<bullet>F{D}:=e\<^sub>2 : T"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wt have void: "T = Void" by blast
  \<comment> \<open>We distinguish if @{term e} has type @{term NT} or a Class type\<close>
  { assume "P,E,hp s \<turnstile> e : NT"
    hence "P,E,hp s' \<turnstile> e' : NT" using IH[OF conf _ tconf] by fastforce
    moreover obtain T\<^sub>2 where "P,E,hp s \<turnstile> e\<^sub>2 : T\<^sub>2" using wt by auto
    from this red_hext_incr[OF red] have "P,E,hp s' \<turnstile> e\<^sub>2 : T\<^sub>2" by(rule WTrt_hext_mono)
    ultimately have ?case using void by(blast intro!:WTrtFAssNT) }
  moreover
  { fix T' C TF T\<^sub>2 fm
    assume wt\<^sub>1: "P,E,hp s \<turnstile> e : T'"
      and icto: "class_type_of' T' = \<lfloor>C\<rfloor>"
      and wt\<^sub>2: "P,E,hp s \<turnstile> e\<^sub>2 : T\<^sub>2"
      and has: "P \<turnstile> C has F:TF (fm) in D"
      and sub: "P \<turnstile> T\<^sub>2 \<le> TF"
    obtain U where wt\<^sub>1': "P,E,hp s' \<turnstile> e' : U" and UsubC: "P \<turnstile> U \<le> T'"
      using IH[OF conf wt\<^sub>1 tconf] by blast
    have wt\<^sub>2': "P,E,hp s' \<turnstile> e\<^sub>2 : T\<^sub>2" by(rule WTrt_hext_mono[OF wt\<^sub>2 red_hext_incr[OF red]])
    \<comment> \<open>Is @{term U} the null type or a class type?\<close>
    have ?case
    proof(cases "U = NT")
      case True
      with wt\<^sub>1' wt\<^sub>2' void show ?thesis by(blast intro!:WTrtFAssNT)
    next
      case False
      with icto UsubC obtain C' where icto': "class_type_of' U = \<lfloor>C'\<rfloor>"
        and "subclass": "P \<turnstile> C' \<preceq>\<^sup>* C" by(rule widen_is_class_type_of)
      have "P \<turnstile> C' has F:TF (fm) in D" by(rule has_field_mono[OF has "subclass"])
      with wt\<^sub>1' show ?thesis using wt\<^sub>2' sub void icto' by(blast intro:WTrtFAss)
    qed }
  ultimately show ?case using wt by blast
next
  case (FAssRed2 e\<^sub>2 s ta e\<^sub>2' s' v F D T E)
  have red: "extTA,P,t \<turnstile> \<langle>e\<^sub>2,s\<rangle> -ta\<rightarrow> \<langle>e\<^sub>2',s'\<rangle>"
    and IH: "\<And>E T. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e\<^sub>2 : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>U.
      P,E,hp s' \<turnstile> e\<^sub>2' : U \<and> P \<turnstile> U \<le> T"
    and conf: "E \<turnstile> s \<surd>"
    and wt: "P,E,hp s \<turnstile> Val v\<bullet>F{D}:=e\<^sub>2 : T"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wt have [simp]: "T = Void" by auto
  from wt show ?case
  proof (rule WTrt_elim_cases)
    fix U C TF T\<^sub>2 fm
    assume wt\<^sub>1: "P,E,hp s \<turnstile> Val v : U"
      and icto: "class_type_of' U = \<lfloor>C\<rfloor>"
      and has: "P \<turnstile> C has F:TF (fm) in D"
      and wt\<^sub>2: "P,E,hp s \<turnstile> e\<^sub>2 : T\<^sub>2"
      and TsubTF: "P \<turnstile> T\<^sub>2 \<le> TF"
    have wt\<^sub>1': "P,E,hp s' \<turnstile> Val v : U" by(rule WTrt_hext_mono[OF wt\<^sub>1 red_hext_incr[OF red]])
    obtain T\<^sub>2' where wt\<^sub>2': "P,E,hp s' \<turnstile> e\<^sub>2' : T\<^sub>2'" and T'subT: "P \<turnstile> T\<^sub>2' \<le> T\<^sub>2"
      using IH[OF conf wt\<^sub>2 tconf] by blast
    have "P,E,hp s' \<turnstile> Val v\<bullet>F{D}:=e\<^sub>2' : Void"
      by(rule WTrtFAss[OF wt\<^sub>1' icto has wt\<^sub>2' widen_trans[OF T'subT TsubTF]])
    thus ?case by auto
  next
    fix T\<^sub>2
    assume null: "P,E,hp s \<turnstile> Val v : NT" and wt\<^sub>2: "P,E,hp s \<turnstile> e\<^sub>2 : T\<^sub>2"
    from null have "v = Null" by simp
    moreover obtain T\<^sub>2' where "P,E,hp s' \<turnstile> e\<^sub>2' : T\<^sub>2' \<and> P \<turnstile> T\<^sub>2' \<le> T\<^sub>2"
      using IH[OF conf wt\<^sub>2 tconf] by blast
    ultimately show ?thesis by(fastforce intro:WTrtFAssNT)
  qed
next
  case RedFAss thus ?case by(auto simp del:fun_upd_apply)
next
  case RedFAssNull thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case (CASRed1 e s ta e' s' D F e2 e3)
  from CASRed1.prems(2) consider (NT) T2 T3 where
      "P,E,hp s \<turnstile> e : NT" "T = Boolean" "P,E,hp s \<turnstile> e2 : T2" "P,E,hp s \<turnstile> e3 : T3"
    | (RefT) U T' C fm T2 T3 where
      "P,E,hp s \<turnstile> e : U" "T = Boolean" "class_type_of' U = \<lfloor>C\<rfloor>" "P \<turnstile> C has F:T' (fm) in D"
      "P,E,hp s \<turnstile> e2 : T2" "P \<turnstile> T2 \<le> T'" "P,E,hp s \<turnstile> e3 : T3" "P \<turnstile> T3 \<le> T'" "volatile fm"
    by fastforce
  thus ?case
  proof cases
    case NT
    have "P,E,hp s' \<turnstile> e' : NT"
      using CASRed1.hyps(2)[OF CASRed1.prems(1) NT(1) CASRed1.prems(3)] by auto
    moreover from NT CASRed1.hyps(1)[THEN red_hext_incr]
    have "P,E,hp s' \<turnstile> e2 : T2" "P,E,hp s' \<turnstile> e3 : T3" by(auto intro: WTrt_hext_mono)
    ultimately show ?thesis using NT by(auto intro: WTrtCASNT)
  next
    case RefT
    from CASRed1.hyps(2)[OF CASRed1.prems(1) RefT(1) CASRed1.prems(3)]
    obtain U' where wt1: "P,E,hp s' \<turnstile> e' : U'" "P \<turnstile> U' \<le> U" by blast
    from RefT CASRed1.hyps(1)[THEN red_hext_incr]
    have wt2: "P,E,hp s' \<turnstile> e2 : T2" and wt3: "P,E,hp s' \<turnstile> e3 : T3" by(auto intro: WTrt_hext_mono)
    show ?thesis
    proof(cases "U' = NT")
      case True
      with RefT wt1 wt2 wt3 show ?thesis by(auto intro: WTrtCASNT)
    next
      case False
      with RefT(3) wt1 obtain C' where icto': "class_type_of' U' = \<lfloor>C'\<rfloor>"
        and "subclass": "P \<turnstile> C' \<preceq>\<^sup>* C" by(blast intro: widen_is_class_type_of)
      have "P \<turnstile> C' has F:T' (fm) in D" by(rule has_field_mono[OF RefT(4) "subclass"])
      with RefT wt1 wt2 wt3 icto' show ?thesis by(auto intro!: WTrtCAS)
    qed
  qed
next
  case (CASRed2 e s ta e' s' v D F e3)
  consider (Null) "v = Null"
    | (Val) U C T' fm T2 T3 where
      "class_type_of' U = \<lfloor>C\<rfloor>" "P \<turnstile> C has F:T' (fm) in D"
      "volatile fm" "P,E,hp s \<turnstile> e : T2" "P \<turnstile> T2 \<le> T'"
"P,E,hp s \<turnstile> e3 : T3" "P \<turnstile> T3 \<le> T'" "T = Boolean" "typeof\<^bsub>hp s\<^esub> v = \<lfloor>U\<rfloor>" using CASRed2.prems(2) by auto then show ?case proof cases case Null then show ?thesis using CASRed2 by(force dest: red_hext_incr intro: WTrt_hext_mono WTrtCASNT) next case Val from CASRed2.hyps(1) have hext: "hp s \<unlhd> hp s'" by(auto dest: red_hext_incr) with Val(9) have "typeof\<^bsub>hp s'\<^esub> v = \<lfloor>U\<rfloor>" by(rule type_of_hext_type_of) moreover from CASRed2.hyps(2)[OF CASRed2.prems(1) Val(4) CASRed2.prems(3)] Val(5) obtain T2' where "P,E,hp s' \<turnstile> e' : T2'" "P \<turnstile> T2' \<le> T'" by(auto intro: widen_trans) moreover from Val(6) hext have "P,E,hp s' \<turnstile> e3 : T3" by(rule WTrt_hext_mono) ultimately show ?thesis using Val by(auto intro: WTrtCAS) qed next case (CASRed3 e s ta e' s' v D F v') consider (Null) "v = Null" | (Val) U C T' fm T2 T3 where "T = Boolean" "class_type_of' U = \<lfloor>C\<rfloor>" "P \<turnstile> C has F:T' (fm) in D" "volatile fm" "P \<turnstile> T2 \<le> T'" "P,E,hp s \<turnstile> e : T3" "P \<turnstile> T3 \<le> T'" "typeof\<^bsub>hp s\<^esub> v = \<lfloor>U\<rfloor>" "typeof\<^bsub>hp s\<^esub> v' = \<lfloor>T2\<rfloor>" using CASRed3.prems(2) by auto then show ?case proof cases case Null then show ?thesis using CASRed3 by(force dest: red_hext_incr intro: type_of_hext_type_of WTrtCASNT) next case Val from CASRed3.hyps(1) have hext: "hp s \<unlhd> hp s'" by(auto dest: red_hext_incr) with Val(8,9) have "typeof\<^bsub>hp s'\<^esub> v = \<lfloor>U\<rfloor>" "typeof\<^bsub>hp s'\<^esub> v' = \<lfloor>T2\<rfloor>" by(blast intro: type_of_hext_type_of)+ moreover from CASRed3.hyps(2)[OF CASRed3.prems(1) Val(6) CASRed3.prems(3)] Val(7) obtain T3' where "P,E,hp s' \<turnstile> e' : T3'" "P \<turnstile> T3' \<le> T'" by(auto intro: widen_trans) ultimately show ?thesis using Val by(auto intro: WTrtCAS) qed next case CASNull thus ?case unfolding sconf_def by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf]) next case (CallObj e s ta e' s' M es T E) have red: "extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>" and IH: "\<And>E T. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>U. 
      P,E,hp s' \<turnstile> e' : U \<and> P \<turnstile> U \<le> T"
    and conf: "E \<turnstile> s \<surd>"
    and wt: "P,E,hp s \<turnstile> e\<bullet>M(es) : T"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  \<comment> \<open>We distinguish if @{term e} has type @{term NT} or a Class type\<close>
  from wt show ?case
  proof(rule WTrt_elim_cases)
    fix T' C Ts meth D Us
    assume wte: "P,E,hp s \<turnstile> e : T'"
      and icto: "class_type_of' T' = \<lfloor>C\<rfloor>"
      and "method": "P \<turnstile> C sees M:Ts\<rightarrow>T = meth in D"
      and wtes: "P,E,hp s \<turnstile> es [:] Us"
      and subs: "P \<turnstile> Us [\<le>] Ts"
    obtain U where wte': "P,E,hp s' \<turnstile> e' : U" and UsubC: "P \<turnstile> U \<le> T'"
      using IH[OF conf wte tconf] by blast
    show ?thesis
    proof(cases "U = NT")
      case True
      moreover have "P,E,hp s' \<turnstile> es [:] Us"
        by(rule WTrts_hext_mono[OF wtes red_hext_incr[OF red]])
      ultimately show ?thesis using wte' by(blast intro!:WTrtCallNT)
    next
      case False
      with icto UsubC obtain C' where icto': "class_type_of' U = \<lfloor>C'\<rfloor>"
        and "subclass": "P \<turnstile> C' \<preceq>\<^sup>* C" by(rule widen_is_class_type_of)
      obtain Ts' T' meth' D' where method': "P \<turnstile> C' sees M:Ts'\<rightarrow>T' = meth' in D'"
        and subs': "P \<turnstile> Ts [\<le>] Ts'" and sub': "P \<turnstile> T' \<le> T"
        using Call_lemma[OF "method" "subclass" wf] by fast
      have wtes': "P,E,hp s' \<turnstile> es [:] Us"
        by(rule WTrts_hext_mono[OF wtes red_hext_incr[OF red]])
      show ?thesis using wtes' wte' icto' subs method' subs' sub' by(blast intro:widens_trans)
    qed
  next
    fix Ts
    assume "P,E,hp s \<turnstile> e:NT"
    hence "P,E,hp s' \<turnstile> e' : NT" using IH[OF conf _ tconf] by fastforce
    moreover
    fix Ts assume wtes: "P,E,hp s \<turnstile> es [:] Ts"
    have "P,E,hp s' \<turnstile> es [:] Ts" by(rule WTrts_hext_mono[OF wtes red_hext_incr[OF red]])
    ultimately show ?thesis by(blast intro!:WTrtCallNT)
  qed
next
  case (CallParams es s ta es' s' v M T E)
  have reds: "extTA,P,t \<turnstile> \<langle>es,s\<rangle> [-ta\<rightarrow>] \<langle>es',s'\<rangle>"
    and IH: "\<And>Ts E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> es [:] Ts; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>Ts'.
      P,E,hp s' \<turnstile> es' [:] Ts' \<and> P \<turnstile> Ts' [\<le>] Ts"
    and conf: "E \<turnstile> s \<surd>"
    and wt: "P,E,hp s \<turnstile> Val v\<bullet>M(es) : T"
    and tconf: "P,hp s \<turnstile> t \<surd>t" by fact+
  from wt show ?case
  proof (rule WTrt_elim_cases)
    fix U C Ts meth D Us
    assume wte: "P,E,hp s \<turnstile> Val v : U"
      and icto: "class_type_of' U = \<lfloor>C\<rfloor>"
      and "P \<turnstile> C sees M:Ts\<rightarrow>T = meth in D"
      and wtes: "P,E,hp s \<turnstile> es [:] Us"
      and "P \<turnstile> Us [\<le>] Ts"
    moreover have "P,E,hp s' \<turnstile> Val v : U"
      by(rule WTrt_hext_mono[OF wte reds_hext_incr[OF reds]])
    moreover obtain Us' where "P,E,hp s' \<turnstile> es' [:] Us'" "P \<turnstile> Us' [\<le>] Us"
      using IH[OF conf wtes tconf] by blast
    ultimately show ?thesis by(fastforce intro:WTrtCall widens_trans)
  next
    fix Us
    assume null: "P,E,hp s \<turnstile> Val v : NT" and wtes: "P,E,hp s \<turnstile> es [:] Us"
    from null have "v = Null" by simp
    moreover obtain Us' where "P,E,hp s' \<turnstile> es' [:] Us' \<and> P \<turnstile> Us' [\<le>] Us"
      using IH[OF conf wtes tconf] by blast
    ultimately show ?thesis by(fastforce intro:WTrtCallNT)
  qed
next
  case (RedCall s a U M Ts T pns body D vs T' E)
  have hp: "typeof_addr (hp s) a = \<lfloor>U\<rfloor>"
    and "method": "P \<turnstile> class_type_of U sees M: Ts\<rightarrow>T = \<lfloor>(pns,body)\<rfloor> in D"
    and wt: "P,E,hp s \<turnstile> addr a\<bullet>M(map Val vs) : T'" by fact+
  obtain Ts' where wtes: "P,E,hp s \<turnstile> map Val vs [:] Ts'"
    and subs: "P \<turnstile> Ts' [\<le>] Ts" and T'isT: "T' = T"
    using wt "method" hp wf by(auto 4 3 dest: sees_method_fun)
  from wtes subs have length_vs: "length vs = length Ts"
    by(auto simp add: WTrts_conv_list_all2 dest!: list_all2_lengthD)
  have UsubD: "P \<turnstile> ty_of_htype U \<le> Class (class_type_of U)"
    by(cases U)(simp_all add: widen_array_object)
  from sees_wf_mdecl[OF wf "method"] obtain T''
    where wtabody: "P,[this#pns [\<mapsto>] Class D#Ts] \<turnstile> body :: T''"
    and T''subT: "P \<turnstile> T'' \<le> T" and length_pns: "length pns = length Ts"
    by(fastforce simp:wf_mdecl_def simp del:map_upds_twist)
  from wtabody have "P,Map.empty(this#pns [\<mapsto>] Class D#Ts),hp s \<turnstile> body : T''"
    by(rule WT_implies_WTrt)
  hence "P,E(this#pns [\<mapsto>] Class D#Ts),hp s \<turnstile> body : T''"
    by(rule WTrt_env_mono) simp
  hence "P,E,hp s \<turnstile> blocks (this#pns) (Class D#Ts) (Addr a#vs) body : T''"
    using wtes subs hp sees_method_decl_above[OF "method"] length_vs length_pns UsubD
    by(auto simp add:wt_blocks rel_list_all2_Cons2 intro: widen_trans)
  with T''subT T'isT show ?case by blast
next
  case (RedCallExternal s a U M Ts T' D vs ta va h' ta' e' s')
  from \<open>P,t \<turnstile> \<langle>a\<bullet>M(vs),hp s\<rangle> -ta\<rightarrow>ext \<langle>va,h'\<rangle>\<close> have "hp s \<unlhd> h'" by(rule red_external_hext)
  with \<open>P,E,hp s \<turnstile> addr a\<bullet>M(map Val vs) : T\<close>
  have "P,E,h' \<turnstile> addr a\<bullet>M(map Val vs) : T" by(rule WTrt_hext_mono)
  moreover from \<open>typeof_addr (hp s) a = \<lfloor>U\<rfloor>\<close> \<open>P \<turnstile> class_type_of U sees M: Ts\<rightarrow>T' = Native in D\<close> \<open>P,E,hp s \<turnstile> addr a\<bullet>M(map Val vs) : T\<close>
  have "P,hp s \<turnstile> a\<bullet>M(vs) : T'" by(fastforce simp add: external_WT'_iff dest: sees_method_fun)
  ultimately show ?case using RedCallExternal
    by(auto 4 3 intro: red_external_conf_extRet[OF wf] intro!: wt_external_call
       simp add: sconf_def dest: sees_method_fun[where C="class_type_of
U"]) next case RedCallNull thus ?case unfolding sconf_def by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf]) next case (BlockRed e h x V vo ta e' h' x' T T' E) note IH = \<open>\<And>T E. \<lbrakk>E \<turnstile> (h, x(V := vo)) \<surd>; P,E,hp (h, x(V := vo)) \<turnstile> e : T; P,hp (h, x(V := vo)) \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'. P,E,hp (h', x') \<turnstile> e' : T' \<and> P \<turnstile> T' \<le> T\<close>[simplified] from \<open>P,E,hp (h, x) \<turnstile> {V:T=vo; e} : T'\<close> have "P,E(V\<mapsto>T),h \<turnstile> e : T'" by(cases vo, auto) moreover from \<open>E \<turnstile> (h, x) \<surd>\<close> \<open>P,E,hp (h, x) \<turnstile> {V:T=vo; e} : T'\<close> have "(E(V \<mapsto> T)) \<turnstile> (h, x(V := vo)) \<surd>" by(cases vo)(simp add: lconf_def sconf_def,auto simp add: sconf_def conf_def intro: lconf_upd2) ultimately obtain T'' where wt': "P,E(V\<mapsto>T),h' \<turnstile> e' : T''" "P \<turnstile> T'' \<le> T'" using \<open>P,hp (h, x) \<turnstile> t \<surd>t\<close> by(auto dest: IH) { fix v assume vo: "x' V = \<lfloor>v\<rfloor>" from \<open>(E(V \<mapsto> T)) \<turnstile> (h, x(V := vo)) \<surd>\<close> \<open>extTA,P,t \<turnstile> \<langle>e,(h, x(V := vo))\<rangle> -ta\<rightarrow> \<langle>e',(h', x')\<rangle>\<close> \<open>P,E(V\<mapsto>T),h \<turnstile> e : T'\<close> have "P,h' \<turnstile> x' (:\<le>) (E(V \<mapsto> T))" by(auto simp add: sconf_def dest: red_preserves_lconf) with vo have "\<exists>T'. typeof\<^bsub>h'\<^esub> v = \<lfloor>T'\<rfloor> \<and> P \<turnstile> T' \<le> T" by(fastforce simp add: sconf_def lconf_def conf_def) then obtain T' where "typeof\<^bsub>h'\<^esub> v = \<lfloor>T'\<rfloor>" "P \<turnstile> T' \<le> T" by blast hence ?case using wt' vo by(auto) } moreover { assume "x' V = None" with wt' have ?case by(auto) } ultimately show ?case by blast next case RedBlock thus ?case by auto next case (SynchronizedRed1 o' s ta o'' s' e T E) have red: "extTA,P,t \<turnstile> \<langle>o',s\<rangle> -ta\<rightarrow> \<langle>o'',s'\<rangle>" by fact have IH: "\<And>T E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> o' : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> o'' : T' \<and> P \<turnstile> T' \<le> T" by fact have conf: "E \<turnstile> s \<surd>" by fact have wt: "P,E,hp s \<turnstile> sync(o') e : T" by fact+ thus ?case proof(rule WTrt_elim_cases) fix To assume wto: "P,E,hp s \<turnstile> o' : To" and refT: "is_refT To" and wte: "P,E,hp s \<turnstile> e : T" from IH[OF conf wto \<open>P,hp s \<turnstile> t \<surd>t\<close>] obtain To' where "P,E,hp s' \<turnstile> o'' : To'" and sub: "P \<turnstile> To' \<le> To" by auto moreover have "P,E,hp s' \<turnstile> e : T" by(rule WTrt_hext_mono[OF wte red_hext_incr[OF red]]) moreover have "is_refT To'" using refT sub by(auto intro: widen_refT) ultimately show ?thesis by(auto) qed next case SynchronizedNull thus ?case unfolding sconf_def by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf]) next case LockSynchronized thus ?case by(auto) next case (SynchronizedRed2 e s ta e' s' a T E) have red: "extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>" by fact have IH: "\<And>T E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'. 
      P,E,hp s' \<turnstile> e' : T' \<and> P \<turnstile> T' \<le> T" by fact
  have conf: "E \<turnstile> s \<surd>" by fact
  have wt: "P,E,hp s \<turnstile> insync(a) e : T" by fact
  thus ?case
  proof(rule WTrt_elim_cases)
    fix Ta
    assume "P,E,hp s \<turnstile> e : T"
      and hpa: "typeof_addr (hp s) a = \<lfloor>Ta\<rfloor>"
    from \<open>P,E,hp s \<turnstile> e : T\<close> conf \<open>P,hp s \<turnstile> t \<surd>t\<close>
    obtain T' where "P,E,hp s' \<turnstile> e' : T'" "P \<turnstile> T' \<le> T" by(blast dest: IH)
    moreover from red have hext: "hp s \<unlhd> hp s'" by(auto dest: red_hext_incr)
    with hpa have "P,E,hp s' \<turnstile> addr a : ty_of_htype Ta" by(auto intro: typeof_addr_hext_mono)
    ultimately show ?thesis by auto
  qed
next
  case UnlockSynchronized thus ?case by(auto)
next
  case SeqRed thus ?case
    apply(auto)
    apply(drule WTrt_hext_mono[OF _ red_hext_incr], assumption)
    by auto
next
  case (CondRed b s ta b' s' e1 e2 T E)
  have red: "extTA,P,t \<turnstile> \<langle>b,s\<rangle> -ta\<rightarrow> \<langle>b',s'\<rangle>" by fact
  have IH: "\<And>T E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> b : T; P,hp s \<turnstile> t \<surd>t\<rbrakk>
    \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> b' : T' \<and> P \<turnstile> T' \<le> T" by fact
  have conf: "E \<turnstile> s \<surd>" by fact
  have wt: "P,E,hp s \<turnstile> if (b) e1 else e2 : T" by fact
  thus ?case
  proof(rule WTrt_elim_cases)
    fix T1 T2
    assume wtb: "P,E,hp s \<turnstile> b : Boolean"
      and wte1: "P,E,hp s \<turnstile> e1 : T1"
      and wte2: "P,E,hp s \<turnstile> e2 : T2"
      and lub: "P \<turnstile> lub(T1, T2) = T"
    from IH[OF conf wtb \<open>P,hp s \<turnstile> t \<surd>t\<close>] have "P,E,hp s' \<turnstile> b' : Boolean" by(auto)
    moreover have "P,E,hp s' \<turnstile> e1 : T1" by(rule WTrt_hext_mono[OF wte1 red_hext_incr[OF red]])
    moreover have "P,E,hp s' \<turnstile> e2 : T2" by(rule WTrt_hext_mono[OF wte2 red_hext_incr[OF red]])
    ultimately show ?thesis using lub by auto
  qed
next
  case (ThrowRed e s ta e' s' T E)
  have IH: "\<And>T E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk>
    \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> e' : T' \<and> P \<turnstile> T' \<le> T" by fact
  have conf: "E \<turnstile> s \<surd>" by fact
  have wt: "P,E,hp s \<turnstile> throw e : T" by fact
  then obtain T' where wte: "P,E,hp s \<turnstile> e : T'"
    and nobject: "P \<turnstile> T' \<le> Class Throwable" by auto
  from IH[OF conf wte \<open>P,hp s \<turnstile> t \<surd>t\<close>]
  obtain T'' where wte': "P,E,hp s' \<turnstile> e' : T''" and PT'T'': "P \<turnstile> T'' \<le> T'" by blast
  from nobject PT'T'' have "P \<turnstile> T'' \<le> Class Throwable"
    by(auto simp add: widen_Class)(erule notE, rule rtranclp_trans)
  hence "P,E,hp s' \<turnstile> throw e' : T" using wte' PT'T'' by -(erule WTrtThrow)
  thus ?case by(auto)
next
  case RedThrowNull thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case (TryRed e s ta e' s' C V e2 T E)
  have red: "extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>" by fact
  have IH: "\<And>T E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk> \<Longrightarrow> \<exists>T'.
      P,E,hp s' \<turnstile> e' : T' \<and> P \<turnstile> T' \<le> T" by fact
  have conf: "E \<turnstile> s \<surd>" by fact
  have wt: "P,E,hp s \<turnstile> try e catch(C V) e2 : T" by fact
  thus ?case
  proof(rule WTrt_elim_cases)
    fix T1
    assume wte: "P,E,hp s \<turnstile> e : T1"
      and wte2: "P,E(V \<mapsto> Class C),hp s \<turnstile> e2 : T"
      and sub: "P \<turnstile> T1 \<le> T"
    from IH[OF conf wte \<open>P,hp s \<turnstile> t \<surd>t\<close>]
    obtain T1' where "P,E,hp s' \<turnstile> e' : T1'" and "P \<turnstile> T1' \<le> T1" by(auto)
    moreover have "P,E(V \<mapsto> Class C),hp s' \<turnstile> e2 : T"
      by(rule WTrt_hext_mono[OF wte2 red_hext_incr[OF red]])
    ultimately show ?thesis using sub by(auto elim: widen_trans)
  qed
next
  case RedTryFail thus ?case unfolding sconf_def
    by(fastforce simp add: xcpt_subcls_Throwable[OF _ wf])
next
  case RedSeq thus ?case by auto
next
  case RedCondT thus ?case by(auto dest: is_lub_upper)
next
  case RedCondF thus ?case by(auto dest: is_lub_upper)
next
  case RedWhile thus ?case by(fastforce)
next
  case RedTry thus ?case by auto
next
  case RedTryCatch thus ?case by(fastforce)
next
  case (ListRed1 e s ta e' s' es Ts E)
  note IH = \<open>\<And>T E. \<lbrakk>E \<turnstile> s \<surd>; P,E,hp s \<turnstile> e : T; P,hp s \<turnstile> t \<surd>t\<rbrakk>
    \<Longrightarrow> \<exists>T'. P,E,hp s' \<turnstile> e' : T' \<and> P \<turnstile> T' \<le> T\<close>
  from \<open>P,E,hp s \<turnstile> e # es [:] Ts\<close>
  obtain T Ts' where "Ts = T # Ts'" "P,E,hp s \<turnstile> e : T" "P,E,hp s \<turnstile> es [:] Ts'" by auto
  with IH[of E T] \<open>E \<turnstile> s \<surd>\<close>
    WTrts_hext_mono[OF \<open>P,E,hp s \<turnstile> es [:] Ts'\<close> red_hext_incr[OF \<open>extTA,P,t \<turnstile> \<langle>e,s\<rangle> -ta\<rightarrow> \<langle>e',s'\<rangle>\<close>]]
  show ?case using \<open>P,hp s \<turnstile> t \<surd>t\<close> by(auto simp add: list_all2_Cons2 intro: widens_refl)
next
  case ListRed2 thus ?case
    by(fastforce dest: hext_typeof_mono[OF reds_hext_incr])
qed(fastforce)+

end
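Stripped of the Isabelle syntax, `subject_reduction` above is the standard preservation half of type safety for the JinjaThreads source language. In conventional notation (a restatement of the theorem statement, not an additional result; the turnstiles abbreviate the Isabelle judgements used above):

$$
\frac{P,t \vdash \langle e,s\rangle \xrightarrow{ta} \langle e',s'\rangle \qquad E \vdash s\,\surd \qquad P,E,\operatorname{hp} s \vdash e : T \qquad P,\operatorname{hp} s \vdash t\,\surd t}
{\exists T'.\ P,E,\operatorname{hp} s' \vdash e' : T' \ \wedge\ P \vdash T' \le T}
$$

that is, a single reduction step of a well-typed expression in a conformant state yields an expression that is again well typed, at a possibly more specific type.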
classdef TestImageUtilities < CoreTest
    % TestImageUtilities. Tests for the MimImageUtilities class.
    %
    %
    %     Licence
    %     -------
    %     Part of the TD Pulmonary Toolkit. https://github.com/tomdoel/pulmonarytoolkit
    %     Author: Tom Doel, 2012.  www.tomdoel.com
    %     Distributed under the GNU GPL v3 licence. Please see website for details.
    %

    methods
        function obj = TestImageUtilities
            obj.TestCT;
        end
    end

    methods (Access = private)
        function TestCT(obj)
            % Expects the CT series with the most images to be chosen.
            datasets = {};
            datasets{end + 1} = obj.MakeDataset('OT', 12, 'OT0');
            datasets{end + 1} = obj.MakeDataset('CT', 12, 'CT1');
            datasets{end + 1} = obj.MakeDataset('CT', 14, 'CTlarge');
            datasets{end + 1} = obj.MakeDataset('CT', 1, 'CT2');
            datasets{end + 1} = obj.MakeDataset('MR', 13, 'MR1');
            datasets{end + 1} = obj.MakeDataset('OT', 25, 'OT1');
            obj.Assert(strcmp(MimImageUtilities.FindBestSeries(datasets), 'CTlarge'));
        end

        function TestMR(obj)
            % Expects the largest MR series to be chosen, even though an
            % OT series with more images is present.
            datasets = {};
            datasets{end + 1} = obj.MakeDataset('OT', 12, 'OT0');
            datasets{end + 1} = obj.MakeDataset('CT', 12, 'CT1');
            datasets{end + 1} = obj.MakeDataset('CT', 14, 'CTlarge');
            datasets{end + 1} = obj.MakeDataset('CT', 1, 'CT2');
            datasets{end + 1} = obj.MakeDataset('MR', 13, 'MR1');
            datasets{end + 1} = obj.MakeDataset('MR', 100, 'MRlarge');
            datasets{end + 1} = obj.MakeDataset('OT', 250, 'OT1');
            obj.Assert(strcmp(MimImageUtilities.FindBestSeries(datasets), 'MRlarge'));
        end

        function TestOther(obj)
            % With no CT or MR series present, the series with the most
            % images is expected to win.
            datasets = {};
            datasets{end + 1} = obj.MakeDataset('AB', 12, 'CT1');
            datasets{end + 1} = obj.MakeDataset('CD', 14, 'CTlarge');
            datasets{end + 1} = obj.MakeDataset('EF', 1, 'CT2');
            datasets{end + 1} = obj.MakeDataset('GH', 1300, 'GHuid');
            datasets{end + 1} = obj.MakeDataset('IJ', 100, 'MRlarge');
            datasets{end + 1} = obj.MakeDataset('KL', 250, 'OT1');
            obj.Assert(strcmp(MimImageUtilities.FindBestSeries(datasets), 'GHuid'));
        end
    end

    methods (Access = private, Static)
        function fake_dataset = MakeDataset(modality, num_images, uid)
            % Builds a minimal fake series struct for FindBestSeries.
            fake_dataset = struct;
            fake_dataset.Modality = modality;
            fake_dataset.NumberOfImages = num_images;
            fake_dataset.SeriesUid = uid;
        end
    end
end
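Taken together, the three tests pin down a modality-priority heuristic for `MimImageUtilities.FindBestSeries`: among CT and MR series, pick the one with the most images; if there are none, fall back to the largest series of any modality. The sketch below reconstructs that heuristic purely from the test expectations; the function names `FindBestSeriesSketch`/`PickLargest` are hypothetical, and this is an assumption about the behaviour, not MIM's actual implementation.

```matlab
function best_uid = FindBestSeriesSketch(datasets)
    % Illustrative reconstruction (assumption): prefer the largest CT or
    % MR series; otherwise take the largest series of any modality.
    best_uid = PickLargest(datasets, {'CT', 'MR'});
    if isempty(best_uid)
        best_uid = PickLargest(datasets, {});
    end
end

function uid = PickLargest(datasets, modalities)
    % Returns the SeriesUid of the series with the most images whose
    % Modality is in the given cell array (or any modality if empty).
    uid = [];
    best_count = -1;
    for i = 1 : numel(datasets)
        d = datasets{i};
        modality_ok = isempty(modalities) || any(strcmp(d.Modality, modalities));
        if modality_ok && d.NumberOfImages > best_count
            best_count = d.NumberOfImages;
            uid = d.SeriesUid;
        end
    end
end
```

Against the fixtures above this picks 'CTlarge', 'MRlarge' and 'GHuid' respectively, matching all three assertions.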
Require Import UFO.Rel.Definitions.
Require Import UFO.Rel.BasicFacts.
Require Import UFO.Rel.Monotone.
Require Import UFO.Rel.Compat_sub.
Require Import UFO.Util.Subset.
Require Import UFO.Util.Postfix.
Require Import UFO.Lang.BindingsFacts.
Require Import UFO.Lang.Static.
Require Import UFO.Lang.StaticFacts.

Set Implicit Arguments.

Section section_ccompat_tm_up.

Context (EV LV : Set).
Context (Ξ : XEnv EV LV).
Context (δ₁ δ₂ : EV → eff0) (δ : EV → IRel 𝓤_Sig).
Context (ρ₁ ρ₂ : LV → lbl0) (ρ : LV → IRel 𝓣_Sig).
Context (T : ty ∅ EV LV ∅) (E₁ E₂ : eff ∅ EV LV ∅) (ℓ : lbl LV ∅).

Lemma ccompat_tm_up n ξ₁ ξ₂ t₁ t₂ :
  n ⊨ 𝓣⟦ Ξ ⊢ (ty_ms (ms_res T E₁) ℓ) # E₂ ⟧ δ₁ δ₂ δ ρ₁ ρ₂ ρ ξ₁ ξ₂ t₁ t₂ →
  n ⊨ 𝓣⟦ Ξ ⊢ T # ((ef_lbl ℓ) :: (E₁ ++ E₂)) ⟧ δ₁ δ₂ δ ρ₁ ρ₂ ρ ξ₁ ξ₂ (⇧ t₁) (⇧ t₂).
Proof.
intro Ht.
change (⇧ t₁) with (ktx_plug (ktx_up ktx_hole) t₁).
change (⇧ t₂) with (ktx_plug (ktx_up ktx_hole) t₂).
eapply plug0 with (Ta := ty_ms (ms_res T E₁) ℓ).
+ intro ; simpl ; auto.
+ intro ; simpl ; auto.
+ iintro ξ₁' ; iintro ξ₂' ; iintro Hξ₁' ; iintro Hξ₂' ;
  iintro v₁ ; iintro v₂ ; iintro Hv.
  bind_hole ; apply 𝓦_in_𝓣.
  destruct v₁ as [ | | | m₁ [ | X₁] | m₁ [ | X₁] ],
           v₂ as [ | | | m₂ [ | X₂] | m₂ [ | X₂] ] ;
  simpl in Hv ;
  idestruct Hv as m₁' Hv ; idestruct Hv as m₂' Hv ;
  idestruct Hv as X₁' Hv ; idestruct Hv as X₂' Hv ;
  idestruct Hv as Hv Hr ;
  ielim_prop Hv ; destruct Hv as [Hv₁ Hv₂] ;
  inversion Hv₁ ; inversion Hv₂ ; clear Hv₁ Hv₂ ; subst m₁' m₂' X₁' X₂'.
  idestruct Hr as HX₁X₂ Hr ; idestruct Hr as r₁ Hr ; idestruct Hr as r₂ Hr ;
  idestruct Hr as Hr₁r₂ Hr ; idestruct Hr as HX Hr.
  ielim_prop Hr₁r₂ ; destruct Hr₁r₂; subst m₁ m₂.
  ielim_prop HX₁X₂ ; destruct HX₁X₂ as [HX₁ HX₂].
  simpl ktx_plug.
  destruct ℓ as [ α | [ α | X ] ] ; simpl in Hr ; [ | destruct α | ].
  { eapply fold_𝓦 with (ψ := 𝓣⟦ Ξ ⊢ T # (ef_lbl (lbl_var α) :: E₁) ⟧ δ₁ δ₂ δ ρ₁ ρ₂ ρ).
    + apply ccompat_eff_In with (ε := ef_lbl (lbl_var α)).
      { left ; trivial. }
      repeat ieexists ; repeat isplit ; try iintro_prop ; crush.
    + crush.
    + clear.
      do 7 iintro.
      later_shift.
      eapply ccompat_sub ; try eassumption.
      { apply st_reflexive. }
      { rewrite app_comm_cons ; apply se_app_l. }
  }
  { eapply fold_𝓦 with (ψ := 𝓣⟦ Ξ ⊢ T # (ef_lbl (lbl_id (lid_f X)) :: E₁) ⟧ δ₁ δ₂ δ ρ₁ ρ₂ ρ).
    + apply ccompat_eff_In with (ε := ef_lbl (lbl_id (lid_f X))).
      { left ; trivial. }
      simpl in HX₁, HX₂.
      inversion HX₁ ; inversion HX₂ ; clear HX₁ HX₂ ; subst X₁ X₂.
      simpl 𝓾_Fun.
      repeat ieexists ; repeat isplit ; try iintro_prop ; crush.
    + crush.
    + clear.
      do 7 iintro ; later_shift.
      eapply ccompat_sub ; try eassumption.
      { apply st_reflexive. }
      { rewrite app_comm_cons ; apply se_app_l. }
  }
+ apply postfix_refl.
+ apply postfix_refl.
+ eapply ccompat_sub ; try eassumption.
  { apply st_reflexive. }
  { apply se_cons_r ; apply se_app_r. }
Qed.

End section_ccompat_tm_up.

Section section_compat_tm_up.

Context (n : nat).
Context (EV LV V : Set).
Context (Ξ : XEnv EV LV).
Context (Γ : V → ty ∅ EV LV ∅).
Context (T : ty ∅ EV LV ∅) (E₁ E₂ : eff ∅ EV LV ∅) (ℓ : lbl LV ∅).

Lemma compat_tm_up t₁ t₂ :
  n ⊨ ⟦ Ξ Γ ⊢ t₁ ≼ˡᵒᵍ t₂ : (ty_ms (ms_res T E₁) ℓ) # E₂ ⟧ →
  n ⊨ ⟦ Ξ Γ ⊢ (⇧ t₁) ≼ˡᵒᵍ (⇧ t₂) : T # ((ef_lbl ℓ) :: (E₁ ++ E₂)) ⟧.
Proof.
intro Ht.
iintro ξ₁ ; iintro ξ₂ ; iintro δ₁ ; iintro δ₂ ; iintro δ ;
iintro ρ₁ ; iintro ρ₂ ; iintro ρ ; iintro γ₁ ; iintro γ₂.
iintro Hξ ; iintro Hδ ; iintro Hρ ; iintro Hγ.
simpl subst_tm.
apply ccompat_tm_up.
iespecialize Ht.
ispecialize Ht ; [ eassumption | ].
ispecialize Ht ; [ eassumption | ].
ispecialize Ht ; [ eassumption | ].
ispecialize Ht ; [ eassumption | ].
apply Ht.
Qed.
Lemma compat_ktx_up T' E' K₁ K₂ :
  n ⊨ ⟦ Ξ Γ ⊢ K₁ ≼ˡᵒᵍ K₂ : T' # E' ⇢ (ty_ms (ms_res T E₁) ℓ) # E₂ ⟧ →
  n ⊨ ⟦ Ξ Γ ⊢ (ktx_up K₁) ≼ˡᵒᵍ (ktx_up K₂) : T' # E' ⇢ T # ((ef_lbl ℓ) :: (E₁ ++ E₂)) ⟧.
Proof.
intro HK.
iintro ξ₁ ; iintro ξ₂ ; iintro δ₁ ; iintro δ₂ ; iintro δ ;
iintro ρ₁ ; iintro ρ₂ ; iintro ρ ; iintro γ₁ ; iintro γ₂.
iintro Hξ ; iintro Hδ ; iintro Hρ ; iintro Hγ.
iintro ξ₁' ; iintro ξ₂' ; iintro Hξ₁' ; iintro Hξ₂' ;
iintro t₁ ; iintro t₂ ; iintro Ht.
iespecialize HK.
ispecialize HK ; [ eassumption | ].
ispecialize HK ; [ eassumption | ].
ispecialize HK ; [ eassumption | ].
ispecialize HK ; [ eassumption | ].
ielim_vars HK ; [ | eassumption | eassumption ].
iespecialize HK.
ispecialize HK ; [ eassumption | ].
simpl ktx_plug.
apply ccompat_tm_up.
apply HK.
Qed.

End section_compat_tm_up.
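In inference-rule form, the two compatibility lemmas say that the `⇧` construct (and the corresponding context former `ktx_up`) can be pushed through the logical approximation relation. Writing $(T\,!\,E_1)^{\ell}$ for `ty_ms (ms_res T E₁) ℓ` (paper-style notation assumed here, only to abbreviate the Coq statement), `compat_tm_up` restates as:

$$
\frac{\Xi;\Gamma \vdash t_1 \preceq^{\log} t_2 : (T\,!\,E_1)^{\ell} \ \#\ E_2}
{\Xi;\Gamma \vdash\ \Uparrow t_1 \preceq^{\log} \Uparrow t_2 : T \ \#\ \ell :: (E_1 +\!+ E_2)}
$$

with `compat_ktx_up` the analogous rule for evaluation contexts, obtained by instantiating the hole of the context with related terms and reusing `ccompat_tm_up`.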