Exercise material of the MSc-level course **Numerical Methods in Geotechnical Engineering**. Held at Technische Universität Bergakademie Freiberg. Comments to: *Prof. Dr. Thomas Nagel, Chair of Soil Mechanics and Foundation Engineering, Geotechnical Institute, Technische Universität Bergakademie Freiberg.* https://tu-freiberg.de/en/soilmechanics

```python
import numpy as np
import matplotlib.pyplot as plt

#Some plot settings
plt.style.use('seaborn-deep')
plt.rcParams['lines.linewidth']= 2.0
plt.rcParams['lines.color']= 'black'
plt.rcParams['legend.frameon']=True
plt.rcParams['font.family'] = 'serif'
plt.rcParams['legend.fontsize']=14
plt.rcParams['font.size'] = 14
plt.rcParams['axes.spines.right'] = False
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.left'] = True
plt.rcParams['axes.spines.bottom'] = True
plt.rcParams['axes.axisbelow'] = True
plt.rcParams['figure.figsize'] = (12, 6)
```

# Exercise 7 - 1D Consolidation, Terzaghi

## Governing differential equation

Neglecting lateral displacements and lateral fluid flow

$$ \epsilon_{xx} = \epsilon_{yy} = 0 \quad \text{and} \quad q_x = q_y = 0 $$

the volume balance simplifies to

$$ \dot{\epsilon}_{zz} = \frac{\partial q_z}{\partial z} $$

Now we substitute the constitutive law (linear elasticity) of the solid and Darcy's law for the fluid motion into the volume balance:

$$ \dot{\sigma}_{zz}^\text{eff} = E_\text{S} \dot{\epsilon}_{zz} \quad \text{and} \quad \frac{\partial q_z}{\partial z} = - \frac{k}{\mu_\text{F}} \frac{\partial^2 p}{\partial z^2} \quad \text{yields} \quad \frac{\dot{\sigma}_{zz}^\text{eff}}{E_\text{S}} = - \frac{k}{\mu_\text{F}} \frac{\partial^2 p}{\partial z^2} $$

Assuming constant total stress (constant external loads) we find with the use of the effective stress principle

$$ \sigma_{zz} = \sigma_{zz}^\text{eff} + p = \text{const} \quad \rightarrow \quad \dot{\sigma}_{zz} = 0 = \dot{\sigma}_{zz}^\text{eff} + \dot{p} $$

Substitution yields the linear, homogeneous partial differential equation

$$ \dot{p} = \frac{E_\text{S} k}{\mu_\text{F}} p_{,zz} = \frac{E_\text{S} K_\text{D}}{\gamma_\text{F}} p_{,zz} = c_\text{v} p_{,zz} \quad \text{with } K_\text{D} = \frac{k\rho_\text{F} g}{\mu_\text{F}} $$

$[k] = \text{m}^2$: intrinsic permeability

$[K_\text{D}] = \text{m s}^{-1}$: hydraulic conductivity

$[c_\text{v}] = \text{m}^2\text{s}^{-1}$: coefficient of consolidation
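To get a feel for the time scale of the problem, we can evaluate $c_\text{v}$ once by hand. This is a quick back-of-the-envelope sketch using the parameter values that appear in the material routines further below ($E_\text{S} = 5\,\text{MPa}$, $K_\text{D} = 10^{-9}\,\text{m s}^{-1}$, $\gamma_\text{F} = \rho_\text{F}\,g$); the column height of 10 m is the one used later.

```python
#coefficient of consolidation for the material data used below
E_S = 5.0e6             #Pa
K_D = 1.0e-9            #m/s
gamma_F = 1000. * 9.81  #N/m³
c_v = E_S * K_D / gamma_F
print('c_v = %.2e m²/s' % c_v)              #approx. 5.1e-7 m²/s

#characteristic consolidation time of a 10 m column (single drainage)
H = 10.
print('H²/c_v = %.0f d' % (H**2/c_v/86400.))  #approx. 2271 d, i.e. several years
```

The dimensionless time $T_\text{v} = c_\text{v} t / H^2$ thus stays well below unity during the one-year simulation considered below, which is why the pore pressures decay only gradually.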
## Weak form

The pore pressure can have (the essential/Dirichlet) boundary conditions in the form:

$$ p = \bar{p}\ \forall z \in \partial \Omega_\mathrm{D} $$

We now introduce a test function $\eta$ which vanishes where the pore pressure is given

$$ \eta = 0\ \forall z \in \partial \Omega_\mathrm{D} $$

and construct the weak form (using integration by parts):

\begin{align} 0 &= \int \limits_0^H \eta \left[\dot{p} - c_\text{v} p_{,zz} \right] \text{d}z \\ &= \int \limits_0^H \left[\eta \dot{p} - \left( \eta c_\text{v} p_{,z} \right)_{,z} + \eta_{,z} c_\text{v} p_{,z} \right] \, \text{d}z \\ &= \int \limits_0^H \left[\eta \dot{p} + \eta_{,z} c_\text{v} p_{,z} \right] \, \text{d}z + \left[ \eta E_\text{S} q_z \right]^H_0 \end{align}

where the natural/Neumann boundary conditions have appeared.

## Finite elements in 1D

We have a soil column of height $H$ on top of the bed rock at $z=0$. We first create an element class. An element knows the number of nodes it has, their IDs in the global node vector, and the coordinates of its nodes. Linear elements have 2 nodes and 2 quadrature points, quadratic elements 3 nodes and 3 quadrature points. The natural coordinates of the element run from -1 to 1, and the quadrature points and weights are taken directly from NumPy.

```python
#element class
class line_element(): #local coordinates go from -1 to 1
    #takes number of nodes, global nodal coordinates, global node ids
    def __init__(self, nnodes=2, ncoords=[0.,1.], nids=[0,1]):
        self.__nnodes = nnodes
        if (len(ncoords) != self.__nnodes):
            raise Exception("Number of coordinates does not match number of nodes of element (%i vs of %i)"
                            %(self.__nnodes,len(ncoords)))
        else:
            self.__coords = np.array(ncoords)
            self.__natural_coords = (self.__coords-self.__coords[0])/(self.__coords[-1]-self.__coords[0])*2. - 1.
        if (len(nids) != self.__nnodes):
            raise Exception("Number of node IDs does not match number of nodes of element (%i vs of %i)"
                            %(self.__nnodes,len(nids)))
        else:
            self.__global_ids = np.array(nids)
        self.__quad_degree = self.__nnodes
        self.__quad_points, self.__quad_weights = np.polynomial.legendre.leggauss(self.__quad_degree)
```

Next, we wish to generate a one-dimensional mesh by specifying the length of a line, the number of elements into which the mesh is to be split, and the number of nodes per element.

```python
def number_of_nodes(nelems,nodes_per_elem):
    return nelems*nodes_per_elem - (nelems - 1)

def generate_mesh(domain_length,nelems,nodes_per_elem):
    nn = number_of_nodes(nelems,nodes_per_elem)
    #coordinate vector of global nodes
    global_nodal_coordinates = np.linspace(0.,domain_length,nn)
    global_solution = np.array([0.]*nn)
    #generate elements
    element_vector = []
    for i in range(nelems):
        node_start = (nodes_per_elem-1)*i
        element_vector.append(
            line_element(nodes_per_elem,
                         global_nodal_coordinates[node_start:node_start+nodes_per_elem],
                         list(range(node_start,node_start+nodes_per_elem))))
    return global_nodal_coordinates, element_vector, global_solution
```
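A short usage sketch of the mesh generator (the numbers are arbitrary test values): a mesh of 4 quadratic elements over 10 m should have $4 \cdot 3 - 3 = 9$ equally spaced nodes.

```python
#quick check of the mesh generator
nodes_demo, elements_demo, solution_demo = generate_mesh(10., 4, 3)
print(number_of_nodes(4, 3))  #9
print(nodes_demo)             #[ 0.    1.25  2.5  ...  10. ]
```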
## Shape functions in 1D

As in exercise 06, we allow linear and higher-order shape functions.

```python
#N
def shape_function(element_order,xi):
    if (element_order == 2): #-1,1
        return np.array([(1.-xi)/2., (1.+xi)/2.])
    elif (element_order == 3): #-1, 0, 1
        return np.array([(xi - 1.)*xi/2., (1-xi)*(1+xi), (1+xi)*xi/2.])

#dN_dxi
def dshape_function_dxi(element_order,xi):
    if (element_order == 2): #-1,1
        #the gradients are constant; the factor xi/xi == 1 only gives the result
        #the same array shape as xi, which is used later for plotting
        return np.array([-0.5*xi/xi, 0.5*xi/xi])
    elif (element_order == 3): #-1,0,1
        return np.array([xi - 0.5,-2.*xi,xi + 0.5])

#dz_dxi
def element_jacobian(element,xi):
    element_order = element._line_element__nnodes
    Jacobian = 0.
    Jacobian += dshape_function_dxi(element_order,xi).dot(element._line_element__coords)
    return Jacobian

#dN_dz
def grad_shape_function(element,xi):
    element_order = element._line_element__nnodes
    Jac = element_jacobian(element,xi)
    return dshape_function_dxi(element_order,xi)/Jac
```
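Two quick checks of these definitions (a sketch; the tested values are arbitrary): the shape functions should form a partition of unity, and the two-point Gauss rule used for linear elements integrates polynomials up to degree three exactly.

```python
#partition of unity for the quadratic shape functions
xi = np.linspace(-1., 1., 5)
print(np.allclose(shape_function(3, xi).sum(axis=0), 1.0))  #True

#2-point Gauss rule is exact up to cubic polynomials: ∫(ξ³+ξ²)dξ = 2/3 on [-1,1]
pts, wts = np.polynomial.legendre.leggauss(2)
print(np.isclose(np.sum(wts*(pts**3 + pts**2)), 2./3.))     #True
```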
## Time and space discretization, Picard iterations

Using a backward Euler approach for simplicity we find the time-discrete weak form as ($p_{n+1} \equiv p$)

$$ 0 = \int \limits_0^H \left[\eta \frac{p - p_n}{\Delta t} + \eta_{,z} c_\text{v} p_{,z} \right] \, \text{d}z + \left[ \eta E_\text{S} q_z \right]^H_0 $$

Introducing standard FE approximations

$$ p \approx N_i \hat{p}_i, \quad \eta \approx N_i \hat{\eta}_i, \quad \frac{\partial p}{\partial z} \approx \nabla N_i \hat{p}_i, \quad \frac{\partial \eta}{\partial z} \approx \nabla N_i \hat{\eta}_i $$

This yields

$$ \begin{align} 0 &= \int \limits_0^H \left[ N_i \hat{\eta}_i \frac{N_k \hat{p}_k - N_k \hat{p}_{n,k}}{\Delta t} + \nabla N_i \hat{\eta}_i c_\text{v} \nabla N_k \hat{p}_k \right] \, \text{d}z + \left[ N_i \hat{\eta}_i E_\text{S} q_z \right]^H_0 \end{align} $$

Now we bring all quantities associated with the unknown pressure to the left-hand side (LHS) and all known quantities to the RHS:

$$ \hat{\eta}_i \int \limits_0^H \left[ N_i \frac{1}{\Delta t} N_k + \nabla N_i c_\text{v} \nabla N_k \right] \, \text{d}z\ \hat{p}_k = \hat{\eta}_i \left[ N_{n_\text{n}} E_\text{S} \bar{q}_z|_{z=H} \delta_{n_\text{n}i} - N_{n_\text{n}} E_\text{S} \bar{q}_z|_{z=0} \delta_{i0} \right] + \hat{\eta}_i \int \limits_0^H N_i \frac{1}{\Delta t} N_k \, \text{d}z\ \hat{p}_{n,k} $$

This can be simplified by realizing that the nodal test function values are arbitrary, and thus

$$ \int \limits_0^H \left[ N_i \frac{1}{\Delta t} N_k + \nabla N_i c_\text{v} \nabla N_k \right] \, \text{d}z\ \hat{p}_k = N_{n_\text{n}} E_\text{S} \bar{q}_z|_{z=H} \delta_{n_\text{n}i} - N_{n_\text{n}} E_\text{S} \bar{q}_z|_{z=0} \delta_{i0} + \int \limits_0^H N_i \frac{1}{\Delta t} N_k \, \text{d}z\ \hat{p}_{n,k} $$

which leaves us with $n_\text{n}$ equations for the $n_\text{n}$ unknown nodal pressures $\hat{p}_k$. If any coefficients in the above are taken as pressure-dependent, the system could be solved repeatedly using Picard iterations. For strong non-linearities, a Newton linearization would typically be used.

### Question: How would the equation differ for an explicit time integration scheme (forward Euler)?

What we require now is the local assembler to calculate the stiffness matrix and the local right-hand side. Local integration is performed by Gauss quadrature:

$$ \int \limits_{-1}^1 f(\xi)\,\text{d}\xi \approx \sum \limits_{i=1}^{n_\text{gp}} f(\xi_i) w_i $$

## Local assembler

```python
def Stiffness(z):
    E0 = 5.e6 #Pa
    return E0

def Conductivity(z):
    kf = 1.e-9 #m/s
    return kf

def SpecificWeight(z):
    g = 9.81 #m/s²
    rhow = 1000. #kg/m³
    return g*rhow

def ConsolidationCoeff(z): #m²/s
    return Stiffness(z)*Conductivity(z)/SpecificWeight(z)
```

```python
def local_assembler(elem,dt,prev_sol,mass_lumping=False):
    element_order = elem._line_element__nnodes
    K_loc = np.zeros((element_order,element_order))
    M_loc = np.zeros((element_order,element_order))
    b_loc = np.zeros(element_order)
    z_nodes = elem._line_element__coords
    for i in range(elem._line_element__quad_degree):
        #local integration point coordinate
        xi = elem._line_element__quad_points[i]
        #shape function
        N = shape_function(element_order,xi)
        #gradient of shape function
        dN_dX = grad_shape_function(elem,xi)
        #determinant of Jacobian
        detJ = np.abs(element_jacobian(elem,xi))
        #integration weight
        w = elem._line_element__quad_weights[i]
        #global integration point coordinate (for spatially varying properties)
        z_glob = np.dot(N,z_nodes)
        #evaluation of local material/structural properties (not needed separately here)
        E = Stiffness(z_glob)
        #evaluation of the local consolidation coefficient
        CV = ConsolidationCoeff(z_glob)
        #assembly of local stiffness matrix
        M_loc = np.outer(N,N) / dt
        if (mass_lumping):
            M_loc = np.diag(M_loc.sum(0)) #diagonal of column sum
        K_loc += (np.outer(dN_dX,dN_dX) * CV + M_loc)* w * detJ
        #assembly of local RHS
        p_prev = np.dot(N,prev_sol) #pressure in integration point
        b_loc += N * p_prev/dt * w * detJ
    return K_loc,b_loc
```

## Global assembly

Now we can construct the global matrix system $\mathbf{K}\mathbf{u} = \mathbf{f}$ or $\mathbf{A}\mathbf{x}=\mathbf{b}$ (see lecture script).

```python
def global_assembler(nodes,elements,solution,dt,mass_lumping=False):
    K_glob = np.zeros((len(nodes),len(nodes)))
    b_glob = np.zeros(len(nodes))
    for i,elem in enumerate(elements):
        start_id = elem._line_element__global_ids[0]
        end_id = elem._line_element__global_ids[-1]
        K_i, b_i = local_assembler(elem,dt,solution[start_id:end_id+1],mass_lumping)
        K_glob[start_id:end_id+1,start_id:end_id+1] += K_i
        b_glob[start_id:end_id+1] += b_i
    return K_glob, b_glob
```

## Application of boundary conditions

First we apply flux boundary conditions

```python
def apply_Neumann_bc(b_glob,node_id,value):
    b_glob[node_id] += value
    return b_glob
```

Then we apply Dirichlet boundary conditions

```python
def apply_Dirichlet_bc(K_glob,b_glob,node_id,value):
    K_glob[node_id,:] = 0.# = K_glob[:,node_id] = 0.
    K_glob[node_id,node_id] = 1.
    b_glob[node_id] = value
    return K_glob, b_glob
```

## Application of initial conditions

Since we're dealing with a time-dependent problem (a rate problem), we require initial conditions for the pore pressure, i.e. $p_0 = p(t=0)\ \forall\ z$. Due to the very specific assumptions in deriving this consolidation equation, the initial pressure is given by the (suddenly) applied load:

$$ p_0 = \sigma_{zz} = \text{const.} $$

```python
def apply_initial_conditions(solution,sig_v):
    solution *= 0.
    solution += sig_v
    return
```
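Before setting up the time loop, a quick sanity check of the assembler (a sketch using only the functions defined above; the element length `h` and the time step are arbitrary test values). For a single linear element of length $h$, the assembled matrix should equal the standard closed-form FE matrices $\frac{c_\text{v}}{h}\begin{pmatrix}1&-1\\-1&1\end{pmatrix} + \frac{h}{6\Delta t}\begin{pmatrix}2&1\\1&2\end{pmatrix}$.

```python
#compare the numerically integrated element matrix with the analytic one
h, dt_test = 0.5, 3600.
elem = line_element(2, [0., h], [0, 1])
K_loc, b_loc = local_assembler(elem, dt_test, np.array([0., 0.]))
cv = ConsolidationCoeff(0.)
K_ref = cv/h * np.array([[1., -1.], [-1., 1.]]) \
      + h/(6.*dt_test) * np.array([[2., 1.], [1., 2.]])
print(np.allclose(K_loc, K_ref), np.allclose(b_loc, 0.))  #True True
```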
## Time loop and problem solution

We now establish the time loop. In each time step we perform the global assembly and apply the boundary conditions: the pore pressure is set to zero at the free-draining top, while the impermeable base at $z=0$ retains the natural zero-flux condition.

```python
def time_loop(dt,nodes,elements,solution,mass_lumping=False):
    #initial values
    t_end = 366*24*60*60 #s
    absolute_tolerance = 1.e-6
    max_iter = 100
    iteration_counter = np.array([0])
    apply_initial_conditions(solution,200.e3)

    y = [solution] #create a list that will hold the solution vectors at all time points
    times = np.array([0.])
    #
    while times[-1]+dt < t_end: #repeat the loop as long as the final time step is below the end point
        times = np.append(times,times[-1]+dt) #here define the next time point as the previous time point plus the time increment dt
        y_old = y[-1] #starting value for recursive update
        i = 0
        #
        while True:
            K, f = global_assembler(nodes,elements,y[-1],dt,mass_lumping)
            #f = apply_Neumann_bc(f,len(nodes)-1,0)
            K, f = apply_Dirichlet_bc(K, f, len(nodes)-1, 0.) #free draining top
            solution = np.linalg.solve(K,f)
            i += 1
            if (np.abs(np.linalg.norm(solution) - np.linalg.norm(y_old)) < absolute_tolerance or i > max_iter):
                #if change is below tolerance, stop iterations
                break
            y_old = solution #preparation of next recursion
        y.append(solution) #append the newly found solution to the solution vector
        iteration_counter = np.append(iteration_counter,i) #store how many iterations this time step took to converge
    return times, y, iteration_counter
```

```python
#spatial discretization
H = 10.
nel = 20
n_per_el = 3
nodes,elements,solution=generate_mesh(H,nel,n_per_el)
```

```python
times, sols, iters = time_loop(24*60*60,nodes,elements,solution)
```

```python
plt.xlabel('$z$ / m')
plt.ylabel('$p$ / kPa')
plt.title('Finite element solution')
plt.plot(nodes, sols[0]/1.e3, marker='o', label='$t = %i$ d' %(times[0]/60/60/24))
plt.plot(nodes, sols[1]/1.e3, marker='o', label='$t = %i$ d' %(times[1]/60/60/24))
plt.plot(nodes, sols[10]/1.e3, marker='o', label='$t = %i$ d' %(times[10]/60/60/24))
plt.plot(nodes, sols[30]/1.e3, marker='o', label='$t = %i$ d' %(times[30]/60/60/24))
plt.plot(nodes, sols[200]/1.e3, marker='o', label='$t = %i$ d' %(times[200]/60/60/24))
plt.plot(nodes, sols[-1]/1.e3, marker='o', label='$t = %i$ d' %(times[-1]/60/60/24))
plt.legend();
```

```python
#from matplotlib import animation
#from IPython.display import HTML

# First set up the figure, the axis, and the plot element we want to animate
#fig, ax = plt.subplots();
#ax.set_xlim(( 0, 11));
#ax.set_ylim((0, 210));
#line, = ax.plot([], [], lw=2);

# initialization function: plot the background of each frame
#def init():
#    line.set_data(nodes, sols[0]/1e3)
#    return (line,)

# animation function. This is called sequentially
#def animate(i):
#    x = nodes
#    y = sols[i]/1e3
#    line.set_data(x, y)
#    return (line,)

# call the animator. blit=True means only re-draw the parts that have changed.
#anim = animation.FuncAnimation(fig, animate, init_func=init,
#                               frames=len(sols)-1, interval=20, blit=True);
#HTML(anim.to_html5_video())
```

### Convergence study

Let's do a simple convergence study.

```python
dts = [12.*3600,24.*3600,48.*3600]
nels = [10,20,40]
H = 10.
fig, ax = plt.subplots(nrows=3,ncols=3,figsize=(18,18))
for i,dt in enumerate(dts):
    for j,nel in enumerate(nels):
        #print("Running (n,dt) combination ", dt, nel)
        number_of_elements = nel
        nodes_per_element = 2
        nodes,elements,solution=generate_mesh(H,number_of_elements,nodes_per_element)
        times, sols, iters = time_loop(dt,nodes,elements,solution)
        nodes,elements,solution=generate_mesh(H,number_of_elements,nodes_per_element)
        times_lumped, sols_lumped, iters_lumped = time_loop(dt,nodes,elements,solution,True)
        #
        ax[i][j].plot(nodes, sols[0]/1.e3, marker='o', color='green', label='$t = %i$ d' %(times[0]/60/60/24))
        ax[i][j].plot(nodes, sols[1]/1.e3, marker='o', color='red', label='$t = %i$ d' %(times[1]/60/60/24))
        ax[i][j].plot(nodes, sols[10]/1.e3, marker='o', color='blue', label='$t = %i$ d' %(times[10]/60/60/24))
        #
        ax[i][j].plot(nodes, sols_lumped[0]/1.e3, marker='d', ls=':', color='green')
        ax[i][j].plot(nodes, sols_lumped[1]/1.e3, marker='d', ls=':', color='red')
        ax[i][j].plot(nodes, sols_lumped[10]/1.e3, marker='d', ls=':', color='blue')
        #
        ax[i][j].set_xlabel('$z$ / m')
        ax[i][j].set_ylabel('$p$ / kPa')
        ax[i][j].set_title('dt = %i h, nel = %i' %(dt/3600,nel))
        ax[i][j].legend(fontsize=10)
fig.tight_layout()
```

We observe that time step size and discretization size act together, and that unsuitable choices can lead to unphysical oscillations.
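As a closing check (a sketch that is not part of the original exercise sheet), the numerical solution can be compared against Terzaghi's classical series solution for a layer drained at the top ($z=H$) and impermeable at the base ($z=0$),

$$ p(z,t) = p_0 \sum_{m=0}^{\infty} \frac{2}{M} \sin\!\left(M\,\frac{H-z}{H}\right) e^{-M^2 T_\text{v}}, \quad M = \frac{\pi}{2}(2m+1), \quad T_\text{v} = \frac{c_\text{v} t}{H^2} $$

```python
def terzaghi_series(z, t, H, cv, p0, n_terms=200):
    #series solution: drained at z=H, impermeable at z=0
    Tv = cv*t/H**2
    u = np.zeros_like(z)
    for m in range(n_terms):
        M = 0.5*np.pi*(2*m + 1)
        u += 2.0/M * np.sin(M*(H - z)/H) * np.exp(-M**2*Tv)
    return p0*u

#re-run the base case and compare at t = 30 d
nodes,elements,solution = generate_mesh(10.,20,3)
times, sols, iters = time_loop(24*60*60,nodes,elements,solution)
z_fine = np.linspace(0.,10.,200)
plt.plot(nodes, sols[30]/1.e3, marker='o', ls='', label='FEM, $t = 30$ d')
plt.plot(z_fine, terzaghi_series(z_fine, times[30], 10., ConsolidationCoeff(0.), 200.e3)/1.e3,
         label='series solution')
plt.xlabel('$z$ / m')
plt.ylabel('$p$ / kPa')
plt.legend();
```

The unphysical oscillations mentioned above are also consistent with a commonly cited accuracy condition for consolidation analyses (due to Vermeer & Verruijt): with a consistent mass matrix and linear elements, the time step should roughly satisfy $\Delta t \gtrsim \Delta z^2/(6 c_\text{v})$, which couples the temporal and spatial discretization sizes in exactly the way observed here.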
/- Copyright (c) 2021 Alex Zhao. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Alex Zhao ! This file was ported from Lean 3 source module number_theory.frobenius_number ! leanprover-community/mathlib commit 327c3c0d9232d80e250dc8f65e7835b82b266ea5 ! Please do not edit these lines, except to modify the commit id ! if you have ported upstream changes. -/ import Mathbin.Data.Nat.Modeq import Mathbin.GroupTheory.Submonoid.Basic import Mathbin.GroupTheory.Submonoid.Membership import Mathbin.Tactic.Ring import Mathbin.Tactic.Zify /-! # Frobenius Number in Two Variables > THIS FILE IS SYNCHRONIZED WITH MATHLIB4. > Any changes to this file require a corresponding PR to mathlib4. In this file we first define a predicate for Frobenius numbers, then solve the 2-variable variant of this problem. ## Theorem Statement Given a finite set of relatively prime integers all greater than 1, their Frobenius number is the largest positive integer that cannot be expressed as a sum of nonnegative multiples of these integers. Here we show the Frobenius number of two relatively prime integers `m` and `n` greater than 1 is `m * n - m - n`. This result is also known as the Chicken McNugget Theorem. ## Implementation Notes First we define Frobenius numbers in general using `is_greatest` and `add_submonoid.closure`. Then we proceed to compute the Frobenius number of `m` and `n`. For the upper bound, we begin with an auxiliary lemma showing `m * n` is not attainable, then show `m * n - m - n` is not attainable. Then for the construction, we create a `k_1` which is `k mod n` and `0 mod m`, then show it is at most `k`. Then `k_1` is a multiple of `m`, so `(k-k_1)` is a multiple of n, and we're done. ## Tags frobenius number, chicken mcnugget, chinese remainder theorem, add_submonoid.closure -/ open Nat #print FrobeniusNumber /- /-- A natural number `n` is the **Frobenius number** of a set of natural numbers `s` if it is an upper bound on the complement of the additive submonoid generated by `s`. In other words, it is the largest number that can not be expressed as a sum of numbers in `s`. -/ def FrobeniusNumber (n : ℕ) (s : Set ℕ) : Prop := IsGreatest { k | k ∉ AddSubmonoid.closure s } n #align is_frobenius_number FrobeniusNumber -/ variable {m n : ℕ} #print frobeniusNumber_pair /- /-- The **Chicken Mcnugget theorem** stating that the Frobenius number of positive numbers `m` and `n` is `m * n - m - n`. -/ theorem frobeniusNumber_pair (cop : coprime m n) (hm : 1 < m) (hn : 1 < n) : FrobeniusNumber (m * n - m - n) {m, n} := by simp_rw [FrobeniusNumber, AddSubmonoid.mem_closure_pair] have hmn : m + n ≤ m * n := add_le_mul hm hn constructor · push_neg intro a b h apply cop.mul_add_mul_ne_mul (add_one_ne_zero a) (add_one_ne_zero b) simp only [Nat.sub_sub, smul_eq_mul] at h zify at h⊢ rw [← sub_eq_zero] at h⊢ rw [← h] ring · intro k hk dsimp at hk contrapose! 
hk let x := chinese_remainder cop 0 k have hx : x.val < m * n := chinese_remainder_lt_mul cop 0 k (ne_bot_of_gt hm) (ne_bot_of_gt hn) suffices key : x.1 ≤ k · obtain ⟨a, ha⟩ := modeq_zero_iff_dvd.mp x.2.1 obtain ⟨b, hb⟩ := (modeq_iff_dvd' key).mp x.2.2 exact ⟨a, b, by rw [mul_comm, ← ha, mul_comm, ← hb, Nat.add_sub_of_le key]⟩ refine' modeq.le_of_lt_add x.2.2 (lt_of_le_of_lt _ (add_lt_add_right hk n)) rw [Nat.sub_add_cancel (le_tsub_of_add_le_left hmn)] exact modeq.le_of_lt_add (x.2.1.trans (modeq_zero_iff_dvd.mpr (Nat.dvd_sub' (dvd_mul_right m n) dvd_rfl)).symm) (lt_of_lt_of_le hx le_tsub_add) #align is_frobenius_number_pair frobeniusNumber_pair -/
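/- A small usage sketch (an addition, not part of the ported file): for `m = 3` and
`n = 5` the theorem yields the classical Chicken McNugget value `3 * 5 - 3 - 5 = 7`.
Whether `norm_num` discharges the coprimality and `1 < _` side goals directly
depends on the mathlib version; this is stated here as an assumption. -/
example : FrobeniusNumber (3 * 5 - 3 - 5) ({3, 5} : Set ℕ) :=
  frobeniusNumber_pair (by norm_num) (by norm_num) (by norm_num)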
theory Aenderung
imports Main ExecutableHelper BeispielPerson Handlung Zahlenwelt "HOL-Library.Multiset"
begin

section\<open>Changes in Worlds\<close>
text\<open>In this section we will model changes in worlds and, based on those, agreements.\<close>

text\<open>In a change, a person can either lose or gain something.
This simple model is of course tailored to our number worlds, in which the type
\<^typ>\<open>'etwas\<close> is normally an \<^typ>\<open>int\<close>.\<close>
datatype ('person, 'etwas) aenderung = Verliert \<open>'person\<close> \<open>'etwas\<close> | Gewinnt \<open>'person\<close> \<open>'etwas\<close>

text\<open>Example: \<^term>\<open>[Gewinnt Alice 3, Verliert Bob 3]::(person, int) aenderung list\<close>.\<close>

text\<open>The person or persons affected by a \<^typ>\<open>('person, 'etwas) aenderung\<close>.\<close>
definition betroffen :: \<open>('person, 'etwas) aenderung \<Rightarrow> 'person\<close>
  where
\<open>betroffen a \<equiv> case a of Verliert p _ \<Rightarrow> p | Gewinnt p _ \<Rightarrow> p\<close>

definition betroffene :: \<open>('person, 'etwas) aenderung list \<Rightarrow> 'person list\<close>
  where
\<open>betroffene as \<equiv> map betroffen as\<close>

(*<*)
lemma betroffene_case_aenderung:
  \<open>betroffene = map (case_aenderung (\<lambda>p _. p) (\<lambda>p _. p))\<close>
  by(simp add: fun_eq_iff betroffene_def betroffen_def)
(*>*)

beispiel \<open>betroffene [Verliert Alice (2::int), Gewinnt Bob 3, Gewinnt Carol 2, Verliert Eve 1]
          = [Alice, Bob, Carol, Eve]\<close> by eval
beispiel \<open>betroffene [Verliert Alice (5::nat), Gewinnt Bob 3, Verliert Eve 7] = [Alice, Bob, Eve]\<close> by eval
beispiel \<open>betroffene [Verliert Alice (5::nat), Gewinnt Alice 3] = [Alice, Alice]\<close> by eval

(*<*)
text\<open>The delta to get from \<^term>\<open>i1\<close> to \<^term>\<open>i2\<close>.\<close>
definition delta_num
  :: \<open>'person \<Rightarrow> 'etwas::{ord,minus} \<Rightarrow> 'etwas \<Rightarrow> (('person, 'etwas) aenderung) option\<close>
  where
\<open>delta_num p i1 i2 = (
        if i1 > i2 then Some (Verliert p (i1 - i2))
   else if i1 < i2 then Some (Gewinnt p (i2 - i1))
   else None
)\<close>

lemma \<open>delta_num p i1 i2 = Some (Gewinnt p (i::int)) \<Longrightarrow> i > 0\<close>
  by(auto simp add: delta_num_def split_ifs)
lemma \<open>delta_num p i1 i2 = Some (Verliert p (i::int)) \<Longrightarrow> i > 0\<close>
  by(auto simp add: delta_num_def split_ifs)
lemma \<open>delta_num p1 i1 i2 = Some (Gewinnt p2 (i::int)) \<Longrightarrow> p1 = p2\<close>
  by(auto simp add: delta_num_def split_ifs)
lemma \<open>delta_num p1 i1 i2 = Some (Verliert p2 (i::int)) \<Longrightarrow> p1 = p2\<close>
  by(auto simp add: delta_num_def split_ifs)

beispiel \<open>delta_num Alice (2::int) 6 = Some (Gewinnt Alice 4)\<close> by eval
beispiel \<open>delta_num Alice (-2::int) 6 = Some (Gewinnt Alice 8)\<close> by eval

lemma delta_num_same: \<open>delta_num p (a::'a::ordered_ab_group_add) a = None\<close>
  by(simp add: delta_num_def)

text\<open>The absolute delta between \<^term>\<open>i1\<close> and \<^term>\<open>i2\<close>.
This basically merges the two terms.\<close>
definition sum_delta_num
  :: \<open>'person \<Rightarrow> 'etwas::{ord,zero,plus,uminus,minus} \<Rightarrow> 'etwas \<Rightarrow> (('person, 'etwas) aenderung) option\<close>
  where
\<open>sum_delta_num p i1 i2 = (
  let s = i1 + i2 in
        if s < 0 then Some (Verliert p (-s))
   else if s > 0 then Some (Gewinnt p s)
   else None
)\<close>

beispiel \<open>sum_delta_num Alice (2::int) 6 = Some (Gewinnt Alice 8)\<close> by eval
beispiel \<open>sum_delta_num Alice (-2::int) 6 = Some (Gewinnt Alice 4)\<close> by eval

lemma sum_delta_num_delta_num:
  fixes i1::\<open>'a::ordered_ab_group_add\<close>
  shows \<open>sum_delta_num p i1 i2 = delta_num p 0 (i1+i2)\<close>
  by(simp add: sum_delta_num_def delta_num_def Let_def)

lemma delta_num_sum_delta_num:
  fixes i1::\<open>'a::ordered_ab_group_add\<close>
  shows \<open>delta_num p i1 i2 = sum_delta_num p (-i1) i2\<close>
  by(simp add: sum_delta_num_def delta_num_def Let_def)
(*>*)

subsection\<open>Deltas\<close>
text\<open>Deltas, i.e., differences between worlds. A delta is a list of changes.
We define the \<^theory_text>\<open>type_synonym\<close> delta as the function that computes such a list,
given the action (Handlung) that causes the change.\<close>
(*One could introduce a class Delta world with a delta function :: welt -> welt -> [Aenderung person etwas].
  This class would then relate worlds to persons and things.
  That would require MultiParamTypeClasses. A simple function is easier.*)
type_synonym ('welt, 'person, 'etwas) delta = \<open>'welt handlung \<Rightarrow> (('person, 'etwas) aenderung) list\<close>

(*<*)
definition aenderung_val :: \<open>('person, ('etwas::uminus)) aenderung \<Rightarrow> 'etwas\<close>
  where
\<open>aenderung_val a \<equiv> case a of Verliert _ n \<Rightarrow> -n | Gewinnt _ n \<Rightarrow> n\<close>

beispiel \<open>aenderung_val (Verliert Alice (2::int)) = -2\<close> by eval
beispiel \<open>aenderung_val (Gewinnt Alice (2::int)) = 2\<close> by eval

lemma betroffen_simps[simp]:
  \<open>betroffen (Gewinnt a ab) = a\<close>
  \<open>betroffen (Verliert a ab) = a\<close>
  by(simp add: betroffen_def)+

lemma aenderung_val_simps[simp]:
  \<open>aenderung_val (Gewinnt a ab) = ab\<close>
  \<open>aenderung_val (Verliert a ab) = -ab\<close>
  by(simp add: aenderung_val_def)+

fun delta_num_map
  :: \<open>(('person::enum \<rightharpoonup> ('etwas::{zero,minus,ord})), 'person, 'etwas) delta\<close>
  where
\<open>delta_num_map (Handlung vor nach) =
  List.map_filter
    (\<lambda>p. case (the_default (vor p) 0, the_default (nach p) 0) of (a,b) \<Rightarrow> delta_num p a b)
    (Enum.enum::'person list)\<close>

beispiel\<open>delta_num_map
  (Handlung [Alice \<mapsto> 5::int, Bob \<mapsto> 10, Eve \<mapsto> 1]
            [Alice \<mapsto> 3, Bob \<mapsto> 13, Carol \<mapsto> 2])
  = [Verliert Alice 2, Gewinnt Bob 3, Gewinnt Carol 2, Verliert Eve 1]\<close> by eval

fun delta_num_fun
  :: \<open>(('person::enum \<Rightarrow> ('etwas::{minus,ord})), 'person, 'etwas) delta\<close>
  where
\<open>delta_num_fun (Handlung vor nach) =
  List.map_filter (\<lambda>p. delta_num p (vor p) (nach p)) Enum.enum\<close>

beispiel \<open>delta_num_fun
  (Handlung ((\<lambda>p. 0::int)(Alice:=8, Bob:=12, Eve:=7))
            ((\<lambda>p. 0::int)(Alice:=3, Bob:=15, Eve:=0)))
  = [Verliert Alice 5, Gewinnt Bob 3, Verliert Eve 7]\<close> by eval

lemma delta_num_map:
  \<open>delta_num_map (Handlung m1 m2) = delta_num_fun (Handlung (\<lambda>p. the_default (m1 p) 0) (\<lambda>p.
the_default (m2 p) 0))\<close>
  by(simp)

(*TODO: does this if belong in swap.thy?*)
term \<open>map_aenderung\<close>
definition aenderung_swap
  :: \<open>'person \<Rightarrow> 'person \<Rightarrow> ('person, 'etwas) aenderung \<Rightarrow> ('person, 'etwas) aenderung\<close>
  where
\<open>aenderung_swap p1 p2 a \<equiv> map_aenderung (\<lambda>p. if p = p1 then p2 else if p = p2 then p1 else p) id a\<close>

beispiel\<open>aenderung_swap Alice Bob (Gewinnt Alice (3::nat)) = Gewinnt Bob 3\<close> by eval
beispiel\<open>aenderung_swap Alice Bob (Gewinnt Bob (3::nat)) = Gewinnt Alice 3\<close> by eval
beispiel\<open>aenderung_swap Alice Bob (Gewinnt Carol (3::nat)) = Gewinnt Carol 3\<close> by eval

lemma aenderung_swap_id: \<open>aenderung_swap p1 p2 (aenderung_swap p1 p2 a) = a\<close>
  apply(simp add: aenderung_swap_def)
  apply(cases \<open>a\<close>)
  by simp_all

lemma aenderung_swap_sym: \<open>aenderung_swap p1 p2 = aenderung_swap p2 p1\<close>
  apply(simp add: fun_eq_iff aenderung_swap_def, intro allI, rename_tac a)
  apply(case_tac \<open>a\<close>)
  by simp_all

lemma map_map_aenderung_swap:
  \<open>map (map (aenderung_swap p1 p2)) \<circ> (map (map (aenderung_swap p1 p2)) \<circ> kons) = kons\<close>
  by(simp add: fun_eq_iff aenderung_swap_id comp_def)

lemma swap_map_map_aenderung_swap:
  \<open>swap p2 p1 (map (map (aenderung_swap p2 p1)) \<circ> swap p1 p2 (map (map (aenderung_swap p1 p2)) \<circ> kons)) = kons\<close>
  apply(subst aenderung_swap_sym)
  apply(subst swap_symmetric)
  apply(subst swap_fun_comp_id)
  apply(simp add: map_map_aenderung_swap)
  done
(*>*)

text\<open>A list of changes can be executed.\<close>
fun aenderung_ausfuehren
  :: \<open>('person, 'etwas::{plus,minus}) aenderung list \<Rightarrow> ('person \<Rightarrow> 'etwas) \<Rightarrow> ('person \<Rightarrow> 'etwas)\<close>
  where
\<open>aenderung_ausfuehren [] bes = bes\<close>
| \<open>aenderung_ausfuehren (Verliert p n # deltas) bes = aenderung_ausfuehren deltas \<lbrakk>bes(p -= n)\<rbrakk>\<close>
| \<open>aenderung_ausfuehren (Gewinnt p n # deltas) bes = aenderung_ausfuehren deltas \<lbrakk>bes(p += n)\<rbrakk>\<close>

text\<open>The local variable \<^term_type>\<open>bes :: ('person \<Rightarrow> 'etwas)\<close> represents the current possessions.
The output of the function is the modified possessions after the change has been executed.\<close>

beispiel
\<open>aenderung_ausfuehren
  [Verliert Alice (2::int), Gewinnt Bob 3, Gewinnt Carol 2, Verliert Eve 1]
  (\<euro>(Alice:=8, Bob:=3, Eve:= 5))
 = (\<euro>(Alice:=6, Bob:=6, Carol:=2, Eve:= 4))\<close> by eval
beispiel
\<open>aenderung_ausfuehren
  [Verliert Alice (2::int), Verliert Alice 6]
  (\<euro>(Alice:=8, Bob:=3, Eve:= 5))
 = (\<euro>(Bob:=3, Eve:= 5))\<close> by eval

text\<open>In the previous example \<^const>\<open>Alice\<close> loses everything.
Since she now holds the \<^const>\<open>DEFAULT\<close> value of \<^term>\<open>0::int\<close>, her possessions are no longer displayed.\<close>

(*<*)
(*TODO: upstream and simplify!*)
lemma swap_aenderung_ausfuehren:
  \<open>swap p1 p2 (Aenderung.aenderung_ausfuehren a bes)
    = Aenderung.aenderung_ausfuehren (map (aenderung_swap p1 p2) a) (swap p1 p2 bes)\<close>
  apply(induction \<open>a\<close> arbitrary: \<open>bes\<close>)
   apply(simp)
  apply(simp)
  apply(case_tac \<open>a1\<close>)
  subgoal
   apply(simp)
   apply(simp add: aenderung_swap_def, safe)
     apply (simp_all add: fun_upd_twist swap_def Fun.swap_def)
   done
  apply(simp)
  apply(simp add: aenderung_swap_def, safe)
    apply (simp_all add: fun_upd_twist swap_def Fun.swap_def)
  done
(*>*)

subsection\<open>Agreements\<close>
text\<open>A \<^typ>\<open>('person, 'etwas) aenderung list\<close> such as
\<^term>\<open>[Gewinnt Alice (3::int), Verliert Bob 3]\<close> could be used nicely to model an agreement
between \<^const>\<open>Alice\<close> and \<^const>\<open>Bob\<close>.
However, this representation is impractical to work with. For example,
  \<^item> \<^term>\<open>[Gewinnt Alice (3::int), Verliert Bob 3]\<close>
  \<^item> \<^term>\<open>[Verliert Bob 3, Gewinnt Alice (3::int)]\<close>
  \<^item> \<^term>\<open>[Gewinnt Alice (1::int), Gewinnt Alice 1, Gewinnt Alice 1, Verliert Bob 3, Verliert Carol 0]\<close>
are all equivalent when viewed extensionally.
It is more practical to choose a representation in which syntactic and semantic equivalence coincide.
This means an agreement must have a unique representation.
One candidate would be a map of type \<^typ>\<open>'person \<rightharpoonup> 'etwas\<close>, since it uniquely assigns
an \<^typ>\<open>'etwas\<close> to a \<^typ>\<open>'person\<close>.
This only works, however, if @{typ [source=true] \<open>'etwas::{uminus,plus}\<close>} can be represented
with plus and minus, in order to express \<^const>\<open>Gewinnt\<close> and \<^const>\<open>Verliert\<close>.
Yet even this representation is not unique, since e.g. \<^term>\<open>[Alice \<mapsto> 0] = Map.empty\<close>
holds semantically, as long as \<^term>\<open>0\<close> is a neutral element.
Therefore we represent an agreement as a total function of type
@{typ [source=true] \<open>'person \<Rightarrow> ('etwas::{uminus, plus, zero})\<close>}.
The term \<^term>\<open>(\<lambda>_. 0)(Alice := 3, Bob := -3)\<close> means \<^const>\<open>Alice\<close> gains 3,
\<^const>\<open>Bob\<close> loses 3.
\<close>

type_synonym ('person, 'etwas) abmachung = \<open>'person \<Rightarrow> 'etwas\<close>

text\<open>The following function converts a list of changes into an agreement.
Personally, I find it nicer to write down a list of changes, but mathematically,
an agreement is superior. The following function allows us to still write agreements
as lists of changes, while carrying out the actual computations on the agreement.\<close>

fun to_abmachung
  :: \<open>('person, 'etwas::{ord,zero,plus,minus,uminus}) aenderung list \<Rightarrow> ('person, 'etwas) abmachung\<close>
  where
\<open>to_abmachung [] = (\<lambda>p.
0)\<close>
| \<open>to_abmachung (delta # deltas) = \<lbrakk>(to_abmachung deltas)(betroffen delta += aenderung_val delta)\<rbrakk>\<close>

beispiel \<open>[to_abmachung [Gewinnt Alice (3::int)], to_abmachung [Gewinnt Alice 3, Verliert Bob 3]]
  = [(\<lambda>p.0)(Alice := 3), (\<lambda>p.0)(Alice := 3, Bob := -3)]\<close> by eval

(*<*)
lemma to_abmachung_simp_call:
  \<open>to_abmachung (delta # deltas) p =
    (if p = betroffen delta then (to_abmachung deltas p) + (aenderung_val delta) else (to_abmachung deltas p))\<close>
  by(simp)

lemma to_abmachung_fold_induct_helper:
  fixes as :: \<open>('person, 'etwas::ordered_ab_group_add) aenderung list\<close>
  shows \<open>fold (\<lambda>a acc. \<lbrakk>acc(betroffen a += aenderung_val a)\<rbrakk>) as abmachung = (\<lambda>p. to_abmachung as p + abmachung p)\<close>
  apply(induction \<open>as\<close> arbitrary:\<open>abmachung\<close>)
  by(simp add: fun_eq_iff)+

lemma to_abmachung_fold:
  fixes as :: \<open>('person, 'etwas::ordered_ab_group_add) aenderung list\<close>
  shows \<open>to_abmachung as = fold (\<lambda>a acc. \<lbrakk>acc(betroffen a += aenderung_val a)\<rbrakk>) as (\<lambda>_. 0)\<close>
  apply(subst to_abmachung_fold_induct_helper[where abmachung=\<open>\<lambda>_. 0\<close>])
  by simp

lemma to_abmachung_List_map_filter_simp_call:
  fixes f :: \<open>'person::enum \<Rightarrow> ('person, 'etwas::ordered_ab_group_add) aenderung option\<close>
  assumes valid_f: \<open>\<And>p a. f p = Some a \<Longrightarrow> betroffen a = p\<close>
  shows \<open>p \<in> set as \<Longrightarrow> distinct as \<Longrightarrow>
          to_abmachung (List.map_filter f as) p = (case f p of Some a \<Rightarrow> to_abmachung [a] p | None \<Rightarrow> 0)\<close>
proof(induction \<open>as\<close>)
  case Nil then show \<open>?case\<close> by simp
next
  case (Cons a as)
  have filter_not_in_set: \<open>p \<notin> set ps \<Longrightarrow> to_abmachung (List.map_filter f ps) p = 0\<close> for p ps
    apply(induction \<open>ps\<close>)
     apply(simp add: List.map_filter_simps)
    apply(simp add: List.map_filter_simps split:option.split)
    apply(clarsimp, rename_tac a ps a2)
    apply(subgoal_tac \<open>betroffen a2 = a\<close>)
     apply simp
    by(auto dest: valid_f)
  from Cons show \<open>?case\<close>
    apply(simp add: List.map_filter_simps)
    apply(safe)
     apply(case_tac \<open>f p\<close>)
      apply(simp add: filter_not_in_set; fail)
     apply(simp add: filter_not_in_set)
     using filter_not_in_set apply blast
    apply(simp)
    apply(case_tac \<open>f a\<close>)
     apply(simp add: filter_not_in_set; fail)
    apply(simp add: filter_not_in_set)
    by(auto dest: valid_f)
qed

lemma to_abmachung_List_map_filter_enum_simp_call:
  fixes f :: \<open>'person::enum \<Rightarrow> ('person, 'etwas::ordered_ab_group_add) aenderung option\<close>
  assumes valid_f: \<open>\<And>p a.
f p = Some a \<Longrightarrow> betroffen a = p\<close>
  shows \<open>to_abmachung (List.map_filter f Enum.enum) p = (case f p of Some a \<Rightarrow> to_abmachung [a] p | None \<Rightarrow> 0)\<close>
  apply(rule to_abmachung_List_map_filter_simp_call)
    using valid_f apply(simp)
   apply(simp add: enum_class.enum_UNIV)
  apply(simp add: enum_class.enum_distinct)
  done

fun abmachung_to_aenderung_list
  :: \<open>'person list \<Rightarrow> ('person, 'etwas::{ord,zero,plus,minus,uminus}) abmachung \<Rightarrow> ('person, 'etwas) aenderung list\<close>
  where
\<open>abmachung_to_aenderung_list [] _ = []\<close>
| \<open>abmachung_to_aenderung_list (p#ps) a =
    (if a p = 0
     then abmachung_to_aenderung_list ps a
     else (if a p > 0 then Gewinnt p (a p) else Verliert p (- (a p))) # abmachung_to_aenderung_list ps a
    )\<close>

definition abmachung_to_aenderung
  :: \<open>('person::enum, 'etwas::{ord,zero,plus,minus,uminus}) abmachung \<Rightarrow> ('person, 'etwas) aenderung list\<close>
  where
\<open>abmachung_to_aenderung \<equiv> abmachung_to_aenderung_list Enum.enum\<close>

beispiel \<open>abmachung_to_aenderung ((\<lambda>p.0)(Alice := (3::int), Bob := -3)) = [Gewinnt Alice 3, Verliert Bob 3]\<close>
  by eval

definition aenderung_to_abmachung
  :: \<open>('person, 'etwas) aenderung list \<Rightarrow> ('person::enum, 'etwas::{ord,zero,plus,minus,uminus}) abmachung\<close>
  where
\<open>aenderung_to_abmachung \<equiv> to_abmachung\<close>

lemma
  fixes as :: \<open>('person::enum, int) aenderung list\<close>
  shows \<open>abmachung_to_aenderung (aenderung_to_abmachung as) = as\<close>
  (* nitpick as = [Verliert person\<^sub>1 (- 1)] *)
  oops (*does not hold, because aenderungen are not unique*)

lemma abmachung_to_aenderung_list_to_abmachung_not_in_ps:
  \<open>p \<notin> set ps \<Longrightarrow> to_abmachung (abmachung_to_aenderung_list ps a) p = 0\<close>
  by(induction \<open>ps\<close>) simp+

lemma abmachung_to_aenderung_list_not_in_ps:
  \<open>p \<notin> set ps \<Longrightarrow> abmachung_to_aenderung_list ps (a(p := v)) = abmachung_to_aenderung_list ps a\<close>
  apply(induction \<open>ps\<close>)
   apply(simp)
  apply(simp)
  by fastforce

definition abmachung_dom :: \<open>('person, 'etwas::zero) abmachung \<Rightarrow> 'person set\<close> where
  \<open>abmachung_dom m = {a.
m a \<noteq> 0}\<close>

lemma abmachung_dom_swap:
  \<open>abmachung_dom (swap p1 p2 a) = (swap p1 p2 id) ` (abmachung_dom a)\<close>
  apply(simp add: abmachung_dom_def)
  apply(simp add: image_def)
  apply(rule Collect_cong)
  apply(simp add: swap_def Fun.swap_def)
  by fast

lemma to_abmachung_abmachung_to_aenderung_list_induct_helper:
  fixes a :: \<open>('person::enum, 'etwas::ordered_ab_group_add) abmachung\<close>
  shows \<open>abmachung_dom a \<subseteq> set ps \<Longrightarrow> distinct ps \<Longrightarrow> to_abmachung (abmachung_to_aenderung_list ps a) = a\<close>
  apply(induction \<open>ps\<close> arbitrary: \<open>a\<close>)
   apply(simp add: abmachung_dom_def)
   apply fastforce
  apply(rename_tac p ps a)
  apply(simp)
  apply(simp add: abmachung_to_aenderung_list_to_abmachung_not_in_ps)
  apply(case_tac \<open>p \<notin> abmachung_dom a\<close>)
   apply(subgoal_tac \<open>abmachung_dom a \<subseteq> set ps\<close>)
    apply(simp add: abmachung_dom_def; fail)
   apply(simp add: abmachung_dom_def)
   apply blast
  apply(subgoal_tac \<open>abmachung_dom (a(p := 0)) \<subseteq> set ps\<close>)
   prefer 2
   apply(simp add: abmachung_dom_def)
   apply blast
  apply(subgoal_tac \<open>to_abmachung (abmachung_to_aenderung_list ps a) = (a(p := 0))\<close>) (*instantiate IH*)
   prefer 2
   apply(simp)
   apply (metis abmachung_to_aenderung_list_not_in_ps)
  apply(simp)
  by fastforce

lemma aenderung_to_abmachung_abmachung_to_aenderung:
  fixes a :: \<open>('person::enum, 'etwas::ordered_ab_group_add) abmachung\<close>
  shows \<open>aenderung_to_abmachung (abmachung_to_aenderung a) = a\<close>
  apply(simp add: abmachung_to_aenderung_def aenderung_to_abmachung_def)
  apply(rule to_abmachung_abmachung_to_aenderung_list_induct_helper)
   apply(simp add: enum_class.enum_UNIV)
  apply(simp add: enum_class.enum_distinct)
  done
(*>*)

text\<open>The persons affected by an agreement.\<close>
definition abmachungs_betroffene :: \<open>('person::enum, 'etwas::zero) abmachung \<Rightarrow> 'person list\<close>
  where
\<open>abmachungs_betroffene a \<equiv> [p. p \<leftarrow> Enum.enum, a p \<noteq> 0]\<close>

beispiel \<open>abmachungs_betroffene (to_abmachung [Gewinnt Bob (3::int), Verliert Alice 3]) = [Alice, Bob]\<close>
  by eval

(*<*)
lemma abmachungs_betroffene_simp: \<open>abmachungs_betroffene a = filter (\<lambda>p. a p \<noteq> 0) Enum.enum\<close>
proof -
  have \<open>concat (map (\<lambda>p. if a p \<noteq> 0 then [p] else []) as) = filter (\<lambda>p. a p \<noteq> 0) as\<close> for as
    by(induction \<open>as\<close>) auto
  thus \<open>?thesis\<close> by(simp add: abmachungs_betroffene_def)
qed

lemma abmachungs_betroffene_distinct: \<open>distinct (abmachungs_betroffene a)\<close>
  apply(simp add: abmachungs_betroffene_simp)
  using enum_class.enum_distinct distinct_filter by blast

lemma abmachungs_betroffene_is_dom: \<open>set (abmachungs_betroffene a) = abmachung_dom a\<close>
  by(simp add: abmachung_dom_def abmachungs_betroffene_simp enum_class.enum_UNIV)

lemma set_abmachungs_betroffene_swap:
  \<open>set (abmachungs_betroffene (swap p1 p2 a)) = (swap p1 p2 id) ` set (abmachungs_betroffene a)\<close>
  apply(simp add: abmachungs_betroffene_simp enum_class.enum_UNIV)
  apply(simp add: image_def)
  apply(rule Collect_cong)
  apply(simp add: swap_def Fun.swap_def)
  by fast
(*>*)

text\<open>An agreement can be executed.
This effectively updates the given \<^term>\<open>besitz::('person \<Rightarrow> 'etwas)\<close> function.\<close>

definition abmachung_ausfuehren
  :: \<open>('person, 'etwas::{plus,minus}) abmachung \<Rightarrow> ('person \<Rightarrow> 'etwas) \<Rightarrow> ('person \<Rightarrow> 'etwas)\<close>
  where
\<open>abmachung_ausfuehren a besitz \<equiv> \<lambda>p. a p + (besitz p)\<close>

beispiel
\<open>abmachung_ausfuehren
  (to_abmachung [Gewinnt Alice 3, Verliert Bob 3])
  (\<euro>(Alice:=8, Bob:=3, Eve:= 5))
 = (\<euro>(Alice:=11, Bob:=0, Eve:= 5))\<close>
  by(code_simp)

(*<*)
lemma abmachung_ausfuehren_swap:
  \<open>abmachung_ausfuehren (swap p1 p2 a) (swap p1 p2 welt) = swap p2 p1 (abmachung_ausfuehren a welt)\<close>
  by(auto simp add: abmachung_ausfuehren_def swap_def Fun.swap_def)

lemma aenderung_ausfuehren_abmachung_to_aenderung_induction_helper:
  fixes welt :: \<open>'person::enum \<Rightarrow> 'etwas::ordered_ab_group_add\<close>
  shows \<open>abmachung_dom abmachung \<subseteq> set ps \<Longrightarrow> distinct ps \<Longrightarrow>
          aenderung_ausfuehren (abmachung_to_aenderung_list ps abmachung) welt p = welt p + (abmachung p)\<close>
  apply(induction \<open>ps\<close> arbitrary: \<open>abmachung\<close> \<open>welt\<close>)
   apply(simp add: abmachung_dom_def; fail)
  apply(simp)
  apply(rename_tac pa ps abmachung welt)
  apply(subgoal_tac \<open>abmachung_dom (abmachung(pa := 0)) \<subseteq> set ps\<close>)
   prefer 2
   subgoal
    apply(simp add: abmachung_dom_def)
    by blast
  (*thank you sledgehammer isar proofs*)
proof -
  fix pa :: \<open>'person\<close> and psa :: \<open>'person list\<close> and abmachunga :: \<open>'person \<Rightarrow> 'etwas\<close> and welta :: \<open>'person \<Rightarrow> 'etwas\<close>
  assume a1: \<open>pa \<notin> set psa \<and> distinct psa\<close>
  assume a2: \<open>abmachung_dom (abmachunga(pa := 0)) \<subseteq> set psa\<close>
  assume \<open>\<And>abmachung welt. abmachung_dom abmachung \<subseteq> set psa \<Longrightarrow>
            aenderung_ausfuehren (abmachung_to_aenderung_list psa abmachung) welt p = (welt p::'etwas) + abmachung p\<close>
  then have f3: \<open>\<And>f.
f p + (abmachunga(pa := 0)) p = aenderung_ausfuehren (abmachung_to_aenderung_list psa abmachunga) f p\<close>
    using a2 a1 by (metis (full_types) abmachung_to_aenderung_list_not_in_ps)
  then have \<open>pa = p \<longrightarrow>
      (0 < abmachunga pa \<longrightarrow> abmachunga pa \<noteq> 0 \<longrightarrow>
        aenderung_ausfuehren (abmachung_to_aenderung_list psa abmachunga) \<lbrakk>welta (pa += abmachunga pa)\<rbrakk> p = welta p + abmachunga p) \<and>
      (\<not> 0 < abmachunga pa \<longrightarrow>
        (abmachunga pa = 0 \<longrightarrow>
          aenderung_ausfuehren (abmachung_to_aenderung_list psa abmachunga) welta p = welta p + abmachunga p) \<and>
        (abmachunga pa \<noteq> 0 \<longrightarrow>
          aenderung_ausfuehren (abmachung_to_aenderung_list psa abmachunga) \<lbrakk>welta (pa += abmachunga pa)\<rbrakk> p = welta p + abmachunga p))\<close>
    by force
  then show \<open>(0 < abmachunga pa \<longrightarrow> abmachunga pa \<noteq> 0 \<longrightarrow>
        aenderung_ausfuehren (abmachung_to_aenderung_list psa abmachunga) \<lbrakk>welta (pa += abmachunga pa)\<rbrakk> p = welta p + abmachunga p) \<and>
      (\<not> 0 < abmachunga pa \<longrightarrow>
        (abmachunga pa = 0 \<longrightarrow>
          aenderung_ausfuehren (abmachung_to_aenderung_list psa abmachunga) welta p = welta p + abmachunga p) \<and>
        (abmachunga pa \<noteq> 0 \<longrightarrow>
          aenderung_ausfuehren (abmachung_to_aenderung_list psa abmachunga) \<lbrakk>welta (pa += abmachunga pa)\<rbrakk> p = welta p + abmachunga p))\<close>
    using f3 by (metis fun_upd_other)
qed

lemma aenderung_ausfuehren_abmachung_to_aenderung:
  fixes welt :: \<open>'person::enum \<Rightarrow> 'etwas::ordered_ab_group_add\<close>
  shows \<open>aenderung_ausfuehren (abmachung_to_aenderung abmachung) welt p = welt p + (abmachung p)\<close>
  apply(simp add: abmachung_to_aenderung_def)
  apply(rule aenderung_ausfuehren_abmachung_to_aenderung_induction_helper)
   apply(simp add: enum_class.enum_UNIV)
  apply(simp add: enum_class.enum_distinct)
  done
(*>*)

text\<open>Executing an agreement and executing the corresponding list of changes are equivalent.\<close>
(*TODO: does this make a good [code] rule? I cannot measure performance changes.*)
lemma abmachung_ausfuehren_aenderung:
  fixes abmachung :: \<open>('person::enum, 'etwas::ordered_ab_group_add) abmachung\<close>
  shows \<open>abmachung_ausfuehren abmachung = aenderung_ausfuehren (abmachung_to_aenderung abmachung)\<close>
  by(simp add: abmachung_ausfuehren_def fun_eq_iff aenderung_ausfuehren_abmachung_to_aenderung)

subsection\<open>Consensus\<close>
text\<open>According to \<^url>\<open>https://de.wikipedia.org/wiki/Konsens#Konsens_im_Rechtssystem\<close>,
consensus can be defined as follows (translated from the German):
"the agreement of the declarations of intent of both contracting parties on the points of the contract".

We can thus use \<^term>\<open>to_abmachung [Gewinnt Alice 3, Verliert Bob 3]\<close> to model consensus.
All affected parties must have the same understanding of the agreement.
For example, the entire consensus in a world can be represented as
\<^typ>\<open>'person \<Rightarrow> ('person, 'etwas) abmachung list\<close>,
where each person is assigned exactly those agreements to which they consent.
The agreements are kept in a list rather than a set, since a person may be willing
to execute an agreement several times.
\<close>

type_synonym ('person, 'etwas) globaler_konsens = \<open>'person \<Rightarrow> ('person, 'etwas) abmachung list\<close>

text\<open>
The following example reads as follows:
\<^term>\<open>(\<lambda>_.
[])(
    Alice := [to_abmachung [Gewinnt Alice 3], to_abmachung [Gewinnt Alice 3, Verliert Bob 3]],
    Bob := [to_abmachung [Gewinnt Alice 3, Verliert Bob 3]]) :: (person, int) globaler_konsens\<close>

\<^const>\<open>Alice\<close> consents to the following:
  \<^item> \<^const>\<open>Alice\<close> receives 3.
  \<^item> \<^const>\<open>Alice\<close> receives 3 and \<^const>\<open>Bob\<close> must give up 3.

\<^const>\<open>Bob\<close> consents to the following:
  \<^item> \<^const>\<open>Alice\<close> receives 3 and \<^const>\<open>Bob\<close> must give up 3.

We could thus say that there is consensus between \<^const>\<open>Alice\<close> and \<^const>\<open>Bob\<close>
that 3 possessions pass from \<^const>\<open>Bob\<close> to \<^const>\<open>Alice\<close>.
In addition, in this example it would also be okay for \<^const>\<open>Alice\<close>
if she received 3 possessions without \<^const>\<open>Bob\<close> losing 3.
\<close>

(*<*)
definition konsensswap
  :: \<open>'person \<Rightarrow> 'person \<Rightarrow> ('person, 'etwas) globaler_konsens \<Rightarrow> ('person, 'etwas) globaler_konsens\<close>
  where
\<open>konsensswap p1 p2 kons \<equiv> swap p1 p2 ((map (swap p1 p2)) \<circ> kons)\<close>

lemma konsensswap_id[simp]: \<open>konsensswap p1 p2 (konsensswap p1 p2 kons) = kons\<close>
  apply(simp add: konsensswap_def)
  apply(subst swap_fun_map_comp_id)
  by simp

lemma konsensswap_sym: \<open>konsensswap p1 p2 = konsensswap p2 p1\<close>
  by(simp add: fun_eq_iff konsensswap_def swap_symmetric)

lemma konsensswap_apply: \<open>konsensswap p1 p2 kons p = map (swap p1 p2) (swap p1 p2 kons p)\<close>
  apply(simp add: konsensswap_def comp_def)
  by(rule swap_cases, simp_all add: swap_a swap_b swap_nothing)

lemma konsensswap_same[simp]: \<open>konsensswap p p konsens = konsens\<close>
  by(simp add: konsensswap_def swap_id_comp)

lemma konsensswap_swap_id: \<open>konsensswap p1 p2 konsens (swap p1 p2 id p) = map (swap p1 p2) (konsens p)\<close>
  apply(simp add: konsensswap_apply)
  by (simp add: swap_fun_swap_id)
(*>*)

text\<open>The following predicate checks whether there is consensus on a given agreement.\<close>
definition enthaelt_konsens
  :: \<open>('person::enum, 'etwas::zero) abmachung \<Rightarrow> ('person, 'etwas) globaler_konsens \<Rightarrow> bool\<close>
  where
\<open>enthaelt_konsens abmachung konsens \<equiv>
  \<forall>betroffene_person \<in> set (abmachungs_betroffene abmachung). abmachung \<in> set (konsens betroffene_person)\<close>

(*<*)
lemma swap_konsensswap_swap:
  \<open>swap p2 p1 ` set (konsensswap p1 p2 konsens (swap p1 p2 id p)) = (set (konsens p))\<close>
  apply(simp add: konsensswap_apply)
  apply(simp add: swap_fun_swap_id)
  by (simp add: image_comp)

lemma enthaelt_konsens_swap:
  \<open>enthaelt_konsens (swap p1 p2 a) (konsensswap p1 p2 konsens) = enthaelt_konsens a konsens\<close>
  apply(simp add: enthaelt_konsens_def abmachungs_betroffene_is_dom)
  apply(simp add: abmachung_dom_swap)
  apply(rule ball_cong)
   apply(simp; fail)
  by(simp add: swap_in_set_of_functions swap_konsensswap_swap)
(*>*)
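text\<open>A small illustrating example (added here as an illustration; it evaluates because
\<^typ>\<open>person\<close> is an enum type, which gives executable equality on agreements):
there is consensus on the agreement since both affected persons have it in their lists.\<close>
beispiel \<open>enthaelt_konsens (to_abmachung [Gewinnt Alice (3::int), Verliert Bob 3])
  ((\<lambda>_. [])(
    Alice := [to_abmachung [Gewinnt Alice 3, Verliert Bob 3]],
    Bob := [to_abmachung [Gewinnt Alice 3, Verliert Bob 3]]))\<close>
  by eval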
text\<open>Redeem, i.e. remove, an (executed) agreement.\<close>
definition konsens_entfernen
  :: \<open>('person::enum, 'etwas::zero) abmachung
      \<Rightarrow> ('person \<Rightarrow> ('person, 'etwas) abmachung list) \<Rightarrow> ('person \<Rightarrow> ('person, 'etwas) abmachung list)\<close>
  where
\<open>konsens_entfernen abmachung kons =
  fold (\<lambda>p k. k(p := remove1 abmachung (k p))) (abmachungs_betroffene abmachung) kons\<close>

beispiel \<open>konsens_entfernen
    (to_abmachung [Gewinnt Alice (3::int), Verliert Bob 3])
    ((\<lambda>_. [])(
      Alice := [to_abmachung [Gewinnt Alice 3], to_abmachung [Gewinnt Alice 3, Verliert Bob 3]],
      Bob := [to_abmachung [Gewinnt Alice 3, Verliert Bob 3]])
    )
  = (\<lambda>_. [])(
      Alice := [to_abmachung [Gewinnt Alice 3]],
      Bob := [])\<close>
  by eval

(*<*)
lemma konsens_entfernen_fold_induct_helper_helper:
  \<open>a \<notin> set as \<Longrightarrow> fold (\<lambda>a k. k(a := f (k a))) as kons a = kons a\<close>
  by(induction \<open>as\<close> arbitrary: \<open>kons\<close>) simp+

lemma konsens_entfernen_fold_induct_helper:
  \<open>x \<in> set as \<Longrightarrow> distinct as \<Longrightarrow> fold (\<lambda>a k. k(a := f (k a))) as kons x = f (kons x)\<close>
  apply(induction \<open>as\<close> arbitrary: \<open>kons\<close>)
   apply(simp; fail)
  apply(simp)
  apply(erule disjE)
   apply(simp)
   apply(simp add: konsens_entfernen_fold_induct_helper_helper; fail)
  apply(simp)
  apply blast
  done
(*>*)

text\<open>Alternative definition:\<close>
lemma konsens_entfernen_simp:
  \<open>konsens_entfernen a kons =
    (\<lambda>p. if p \<in> set (abmachungs_betroffene a) then remove1 a (kons p) else (kons p))\<close>
  apply(simp add: konsens_entfernen_def fun_eq_iff)
  apply(intro allI conjI impI)
   apply(subst konsens_entfernen_fold_induct_helper, simp_all)
   apply(simp add: abmachungs_betroffene_distinct)
  apply(simp add: konsens_entfernen_fold_induct_helper_helper)
  done

(*<*)
lemma remove1_konsensswap:
  \<open>remove1 (swap p1 p2 a) (konsensswap p1 p2 kons p) = map (swap p1 p2) (remove1 a (swap p1 p2 kons p))\<close>
  by(simp add: konsensswap_apply remove1_swap)

lemma konsens_entfernen_konsensswap:
  \<open>konsensswap p2 p1 (konsens_entfernen (swap p1 p2 a) (konsensswap p1 p2 kons)) = konsens_entfernen a kons\<close>
  apply(simp add: konsens_entfernen_simp fun_eq_iff)
  apply(safe)
   apply(simp add: set_abmachungs_betroffene_swap)
   apply(simp add: konsensswap_apply)
   apply(simp add: swap_if_move_inner)
   apply(simp add: swap_id_in_set)
   apply(subst(2) remove1_swap2[of \<open>p1\<close> \<open>p2\<close>, symmetric])
   apply(auto simp add: konsensswap_apply swap_def Fun.swap_def)[1] (*wants helper*)
  apply(simp add: set_abmachungs_betroffene_swap)
  apply(simp add: konsensswap_apply)
  apply(simp add: swap_if_move_inner)
  apply(simp add: swap_id_in_set)
  apply(simp add: konsensswap_apply swap_def comp_def)
  by (simp add: transpose_commute)

lemma to_abmachung_delta_num_fun_simp_call: (*stronger than the usual ordered_ab_group_add*)
  fixes vor::\<open>('person::enum \<Rightarrow> 'etwas::linordered_ab_group_add)\<close>
  shows \<open>to_abmachung (delta_num_fun (Handlung vor nach)) p = nach p - vor p\<close>
  apply(simp)
  apply(subst to_abmachung_List_map_filter_enum_simp_call)
   subgoal by(auto simp add: delta_num_def split: if_split_asm)
  by(simp add: delta_num_def)
(*>*)

text\<open>The following predicate checks whether an agreement was removed correctly from the consensus.
Normally, this should happen right after an agreement has been redeemed.\<close>
definition konsens_wurde_entfernt
  :: \<open>('person::enum, 'etwas::zero) abmachung
      \<Rightarrow> ('person, 'etwas) globaler_konsens \<Rightarrow> ('person, 'etwas) globaler_konsens \<Rightarrow> bool\<close>
  where
\<open>konsens_wurde_entfernt abmachung konsens_vor konsens_nach \<equiv>
  \<forall>betroffene_person \<in> set (abmachungs_betroffene abmachung).
mset (konsens_vor betroffene_person) = mset (abmachung#(konsens_nach betroffene_person))\<close>

text\<open>We have to use multisets (\<^const>\<open>mset\<close>) here, since an agreement can occur several times
but is redeemed only once, and the order in which the agreements are arranged does not matter.\<close>
(*TODO: do I want to use multisets from the start?*)

text\<open>The following does not hold:
\<^term>\<open>konsens_wurde_entfernt a konsens (konsens_entfernen a konsens)\<close>,
since \<^const>\<open>konsens_entfernen\<close> only removes an existing consensus.
If the given consensus does not exist, nothing happens!\<close>
beispiel
  \<open>konsens = (\<lambda>_. []) \<Longrightarrow> a = to_abmachung [Gewinnt Alice (3::int), Verliert Bob 3] \<Longrightarrow>
    \<not> konsens_wurde_entfernt a konsens (konsens_entfernen a konsens)\<close>
  by(simp, eval)

text\<open>However, if we do have consensus, then \<^const>\<open>konsens_wurde_entfernt\<close>
and \<^const>\<open>konsens_entfernen\<close> do behave as expected.\<close>
lemma konsens_wurde_entfernt_konsens_entfernen:
  \<open>enthaelt_konsens a konsens \<Longrightarrow> konsens_wurde_entfernt a konsens (konsens_entfernen a konsens)\<close>
  apply(simp add: konsens_wurde_entfernt_def)
  apply(simp add: konsens_entfernen_simp)
  by (simp add: enthaelt_konsens_def)

(*<*)
(*makes the simplifier loop*)
lemma \<open>add_mset (swap p1 p2 a) (image_mset (swap p1 p2) M) = image_mset (swap p1 p2) (add_mset a M)\<close>
  by simp

lemma konsens_wurde_entfernt_swap:
  \<open>konsens_wurde_entfernt (swap p1 p2 a) (konsensswap p1 p2 konsens_vor) (konsensswap p1 p2 konsens_nach)
    = konsens_wurde_entfernt a konsens_vor konsens_nach\<close>
  apply(simp add: konsens_wurde_entfernt_def abmachungs_betroffene_is_dom)
  apply(simp add: abmachung_dom_swap)
  apply(rule ball_cong)
   apply(simp; fail)
  apply(simp add: konsensswap_swap_id)
  (*TODO: wow, ugly*)
  by (metis (no_types, opaque_lifting) comp_apply image_mset_add_mset multiset.map_comp multiset.map_ident_strong swap1)
(*>*)

text\<open>Given an action, the following function computes the agreement
from which this action could have resulted.\<close>
definition reverse_engineer_abmachung
  :: \<open>('person::enum \<Rightarrow> 'etwas::linordered_ab_group_add) handlung \<Rightarrow> ('person, 'etwas) abmachung\<close>
  where
\<open>reverse_engineer_abmachung h \<equiv> fold (\<lambda>p acc. acc(p := (nachher h p) - (vorher h p))) Enum.enum (\<lambda>_. 0)\<close>

text\<open>If the agreement has type \<^typ>\<open>(person, int) abmachung\<close>, it is unique.\<close>
lemma reverse_engineer_abmachung_delta_num_fun:
  \<open>reverse_engineer_abmachung h = to_abmachung (delta_num_fun h)\<close>
  apply(simp add: fun_eq_iff reverse_engineer_abmachung_def)
  apply(cases \<open>h\<close>, simp del: delta_num_fun.simps)
  apply(subst to_abmachung_delta_num_fun_simp_call)
  apply(subst fold_enum_fun_update_call)
  by simp

(*<*)
lemma reverse_engineer_abmachung_same: \<open>reverse_engineer_abmachung (Handlung v v) = (\<lambda>_.
0)\<close>
  by(simp add: reverse_engineer_abmachung_def fun_eq_iff fold_enum_fun_update_call)

lemma reverse_engineer_abmachung_swap:
  \<open>reverse_engineer_abmachung (Handlung (swap p1 p2 vor) (swap p1 p2 nach))
    = swap p1 p2 (reverse_engineer_abmachung (Handlung vor nach))\<close>
  by(simp add: fun_eq_iff reverse_engineer_abmachung_def fold_enum_fun_update swap_def)
(*>*)

lemma reverse_engineer_abmachung:
  \<open>reverse_engineer_abmachung (Handlung welt welt') = a \<longleftrightarrow> abmachung_ausfuehren a welt = welt'\<close>
  apply(simp add: abmachung_ausfuehren_def fun_eq_iff)
  apply(simp add: reverse_engineer_abmachung_def fold_enum_fun_update_call)
  by (metis add_diff_cancel diff_add_cancel)

end
(* Title:      HOL/Library/Word.thy
   Author:     Jeremy Dawson and Gerwin Klein, NICTA, et al.
*)

section \<open>A type of finite bit strings\<close>

theory Word
imports
  "HOL-Library.Type_Length"
begin

subsection \<open>Preliminaries\<close>

lemma signed_take_bit_decr_length_iff:
  \<open>signed_take_bit (LENGTH('a::len) - Suc 0) k = signed_take_bit (LENGTH('a) - Suc 0) l
    \<longleftrightarrow> take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close>
  by (cases \<open>LENGTH('a)\<close>) (simp_all add: signed_take_bit_eq_iff_take_bit_eq)

subsection \<open>Fundamentals\<close>

subsubsection \<open>Type definition\<close>

quotient_type (overloaded) 'a word = int / \<open>\<lambda>k l. take_bit LENGTH('a) k = take_bit LENGTH('a::len) l\<close>
  morphisms rep Word
  by (auto intro!: equivpI reflpI sympI transpI)

hide_const (open) rep \<comment> \<open>only for foundational purpose\<close>
hide_const (open) Word \<comment> \<open>only for code generation\<close>

subsubsection \<open>Basic arithmetic\<close>

instantiation word :: (len) comm_ring_1
begin

lift_definition zero_word :: \<open>'a word\<close> is 0 .

lift_definition one_word :: \<open>'a word\<close> is 1 .

lift_definition plus_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>(+)\<close>
  by (auto simp add: take_bit_eq_mod intro: mod_add_cong)

lift_definition minus_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>(-)\<close>
  by (auto simp add: take_bit_eq_mod intro: mod_diff_cong)

lift_definition uminus_word :: \<open>'a word \<Rightarrow> 'a word\<close> is uminus
  by (auto simp add: take_bit_eq_mod intro: mod_minus_cong)

lift_definition times_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>(*)\<close>
  by (auto simp add: take_bit_eq_mod intro: mod_mult_cong)

instance
  by (standard; transfer) (simp_all add: algebra_simps)

end

context
  includes lifting_syntax
  notes
    power_transfer [transfer_rule]
    transfer_rule_of_bool [transfer_rule]
    transfer_rule_numeral [transfer_rule]
    transfer_rule_of_nat [transfer_rule]
    transfer_rule_of_int [transfer_rule]
begin

lemma power_transfer_word [transfer_rule]:
  \<open>(pcr_word ===> (=) ===> pcr_word) (^) (^)\<close>
  by transfer_prover

lemma [transfer_rule]:
  \<open>((=) ===> pcr_word) numeral numeral\<close>
  by transfer_prover

lemma [transfer_rule]:
  \<open>((=) ===> pcr_word) int of_nat\<close>
  by transfer_prover

lemma [transfer_rule]:
  \<open>((=) ===> pcr_word) (\<lambda>k. k) of_int\<close>
proof -
  have \<open>((=) ===> pcr_word) of_int of_int\<close>
    by transfer_prover
  then show ?thesis
    by (simp add: id_def)
qed

lemma [transfer_rule]:
  \<open>(pcr_word ===> (\<longleftrightarrow>)) even ((dvd) 2 :: 'a::len word \<Rightarrow> bool)\<close>
proof -
  have even_word_unfold: "even k \<longleftrightarrow> (\<exists>l. take_bit LENGTH('a) k = take_bit LENGTH('a) (2 * l))"
    (is "?P \<longleftrightarrow> ?Q")
    for k :: int
  proof
    assume ?P
    then show ?Q
      by auto
  next
    assume ?Q
    then obtain l where "take_bit LENGTH('a) k = take_bit LENGTH('a) (2 * l)" ..
then have "even (take_bit LENGTH('a) k)" by simp then show ?P by simp qed show ?thesis by (simp only: even_word_unfold [abs_def] dvd_def [where ?'a = "'a word", abs_def]) transfer_prover qed end lemma exp_eq_zero_iff [simp]: \<open>2 ^ n = (0 :: 'a::len word) \<longleftrightarrow> n \<ge> LENGTH('a)\<close> by transfer auto lemma word_exp_length_eq_0 [simp]: \<open>(2 :: 'a::len word) ^ LENGTH('a) = 0\<close> by simp subsubsection \<open>Basic tool setup\<close> ML_file \<open>Tools/word_lib.ML\<close> subsubsection \<open>Basic code generation setup\<close> context begin qualified lift_definition the_int :: \<open>'a::len word \<Rightarrow> int\<close> is \<open>take_bit LENGTH('a)\<close> . end lemma [code abstype]: \<open>Word.Word (Word.the_int w) = w\<close> by transfer simp lemma Word_eq_word_of_int [code_post, simp]: \<open>Word.Word = of_int\<close> by (rule; transfer) simp quickcheck_generator word constructors: \<open>0 :: 'a::len word\<close>, \<open>numeral :: num \<Rightarrow> 'a::len word\<close> instantiation word :: (len) equal begin lift_definition equal_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> bool\<close> is \<open>\<lambda>k l. take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close> by simp instance by (standard; transfer) rule end lemma [code]: \<open>Word.the_int 0 = 0\<close> by transfer simp lemma [code]: \<open>Word.the_int 1 = 1\<close> by transfer simp lemma [code]: \<open>Word.the_int (v + w) = take_bit LENGTH('a) (Word.the_int v + Word.the_int w)\<close> for v w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_add) lemma [code]: \<open>Word.the_int (- w) = (let k = Word.the_int w in if w = 0 then 0 else 2 ^ LENGTH('a) - k)\<close> for w :: \<open>'a::len word\<close> by transfer (auto simp add: take_bit_eq_mod zmod_zminus1_eq_if) lemma [code]: \<open>Word.the_int (v - w) = take_bit LENGTH('a) (Word.the_int v - Word.the_int w)\<close> for v w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_diff) lemma [code]: \<open>Word.the_int (v * w) = take_bit LENGTH('a) (Word.the_int v * Word.the_int w)\<close> for v w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_mult) subsubsection \<open>Basic conversions\<close> abbreviation word_of_nat :: \<open>nat \<Rightarrow> 'a::len word\<close> where \<open>word_of_nat \<equiv> of_nat\<close> abbreviation word_of_int :: \<open>int \<Rightarrow> 'a::len word\<close> where \<open>word_of_int \<equiv> of_int\<close> lemma word_of_nat_eq_iff: \<open>word_of_nat m = (word_of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m = take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma word_of_int_eq_iff: \<open>word_of_int k = (word_of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close> by transfer rule lemma word_of_nat_eq_0_iff: \<open>word_of_nat n = (0 :: 'a::len word) \<longleftrightarrow> 2 ^ LENGTH('a) dvd n\<close> using word_of_nat_eq_iff [where ?'a = 'a, of n 0] by (simp add: take_bit_eq_0_iff) lemma word_of_int_eq_0_iff: \<open>word_of_int k = (0 :: 'a::len word) \<longleftrightarrow> 2 ^ LENGTH('a) dvd k\<close> using word_of_int_eq_iff [where ?'a = 'a, of k 0] by (simp add: take_bit_eq_0_iff) context semiring_1 begin lift_definition unsigned :: \<open>'b::len word \<Rightarrow> 'a\<close> is \<open>of_nat \<circ> nat \<circ> take_bit LENGTH('b)\<close> by simp lemma unsigned_0 [simp]: \<open>unsigned 0 = 0\<close> by transfer simp lemma unsigned_1 [simp]: \<open>unsigned 1 = 
1\<close> by transfer simp lemma unsigned_numeral [simp]: \<open>unsigned (numeral n :: 'b::len word) = of_nat (take_bit LENGTH('b) (numeral n))\<close> by transfer (simp add: nat_take_bit_eq) lemma unsigned_neg_numeral [simp]: \<open>unsigned (- numeral n :: 'b::len word) = of_nat (nat (take_bit LENGTH('b) (- numeral n)))\<close> by transfer simp end context semiring_1 begin lemma unsigned_of_nat: \<open>unsigned (word_of_nat n :: 'b::len word) = of_nat (take_bit LENGTH('b) n)\<close> by transfer (simp add: nat_eq_iff take_bit_of_nat) lemma unsigned_of_int: \<open>unsigned (word_of_int k :: 'b::len word) = of_nat (nat (take_bit LENGTH('b) k))\<close> by transfer simp end context semiring_char_0 begin lemma unsigned_word_eqI: \<open>v = w\<close> if \<open>unsigned v = unsigned w\<close> using that by transfer (simp add: eq_nat_nat_iff) lemma word_eq_iff_unsigned: \<open>v = w \<longleftrightarrow> unsigned v = unsigned w\<close> by (auto intro: unsigned_word_eqI) lemma inj_unsigned [simp]: \<open>inj unsigned\<close> by (rule injI) (simp add: unsigned_word_eqI) lemma unsigned_eq_0_iff: \<open>unsigned w = 0 \<longleftrightarrow> w = 0\<close> using word_eq_iff_unsigned [of w 0] by simp end context ring_1 begin lift_definition signed :: \<open>'b::len word \<Rightarrow> 'a\<close> is \<open>of_int \<circ> signed_take_bit (LENGTH('b) - Suc 0)\<close> by (simp flip: signed_take_bit_decr_length_iff) lemma signed_0 [simp]: \<open>signed 0 = 0\<close> by transfer simp lemma signed_1 [simp]: \<open>signed (1 :: 'b::len word) = (if LENGTH('b) = 1 then - 1 else 1)\<close> by (transfer fixing: uminus; cases \<open>LENGTH('b)\<close>) (auto dest: gr0_implies_Suc) lemma signed_minus_1 [simp]: \<open>signed (- 1 :: 'b::len word) = - 1\<close> by (transfer fixing: uminus) simp lemma signed_numeral [simp]: \<open>signed (numeral n :: 'b::len word) = of_int (signed_take_bit (LENGTH('b) - 1) (numeral n))\<close> by transfer simp lemma signed_neg_numeral [simp]: \<open>signed (- numeral n :: 'b::len word) = of_int (signed_take_bit (LENGTH('b) - 1) (- numeral n))\<close> by transfer simp lemma signed_of_nat: \<open>signed (word_of_nat n :: 'b::len word) = of_int (signed_take_bit (LENGTH('b) - Suc 0) (int n))\<close> by transfer simp lemma signed_of_int: \<open>signed (word_of_int n :: 'b::len word) = of_int (signed_take_bit (LENGTH('b) - Suc 0) n)\<close> by transfer simp end context ring_char_0 begin lemma signed_word_eqI: \<open>v = w\<close> if \<open>signed v = signed w\<close> using that by transfer (simp flip: signed_take_bit_decr_length_iff) lemma word_eq_iff_signed: \<open>v = w \<longleftrightarrow> signed v = signed w\<close> by (auto intro: signed_word_eqI) lemma inj_signed [simp]: \<open>inj signed\<close> by (rule injI) (simp add: signed_word_eqI) lemma signed_eq_0_iff: \<open>signed w = 0 \<longleftrightarrow> w = 0\<close> using word_eq_iff_signed [of w 0] by simp end abbreviation unat :: \<open>'a::len word \<Rightarrow> nat\<close> where \<open>unat \<equiv> unsigned\<close> abbreviation uint :: \<open>'a::len word \<Rightarrow> int\<close> where \<open>uint \<equiv> unsigned\<close> abbreviation sint :: \<open>'a::len word \<Rightarrow> int\<close> where \<open>sint \<equiv> signed\<close> abbreviation ucast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>ucast \<equiv> unsigned\<close> abbreviation scast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>scast \<equiv> signed\<close> context includes lifting_syntax begin lemma [transfer_rule]: 
\<open>(pcr_word ===> (=)) (nat \<circ> take_bit LENGTH('a)) (unat :: 'a::len word \<Rightarrow> nat)\<close>
  using unsigned.transfer [where ?'a = nat] by simp

lemma [transfer_rule]:
  \<open>(pcr_word ===> (=)) (take_bit LENGTH('a)) (uint :: 'a::len word \<Rightarrow> int)\<close>
  using unsigned.transfer [where ?'a = int] by (simp add: comp_def)

lemma [transfer_rule]:
  \<open>(pcr_word ===> (=)) (signed_take_bit (LENGTH('a) - Suc 0)) (sint :: 'a::len word \<Rightarrow> int)\<close>
  using signed.transfer [where ?'a = int] by simp

lemma [transfer_rule]:
  \<open>(pcr_word ===> pcr_word) (take_bit LENGTH('a)) (ucast :: 'a::len word \<Rightarrow> 'b::len word)\<close>
proof (rule rel_funI)
  fix k :: int and w :: \<open>'a word\<close>
  assume \<open>pcr_word k w\<close>
  then have \<open>w = word_of_int k\<close>
    by (simp add: pcr_word_def cr_word_def relcompp_apply)
  moreover have \<open>pcr_word (take_bit LENGTH('a) k) (ucast (word_of_int k :: 'a word))\<close>
    by transfer (simp add: pcr_word_def cr_word_def relcompp_apply)
  ultimately show \<open>pcr_word (take_bit LENGTH('a) k) (ucast w)\<close>
    by simp
qed

lemma [transfer_rule]:
  \<open>(pcr_word ===> pcr_word) (signed_take_bit (LENGTH('a) - Suc 0)) (scast :: 'a::len word \<Rightarrow> 'b::len word)\<close>
proof (rule rel_funI)
  fix k :: int and w :: \<open>'a word\<close>
  assume \<open>pcr_word k w\<close>
  then have \<open>w = word_of_int k\<close>
    by (simp add: pcr_word_def cr_word_def relcompp_apply)
  moreover have \<open>pcr_word (signed_take_bit (LENGTH('a) - Suc 0) k) (scast (word_of_int k :: 'a word))\<close>
    by transfer (simp add: pcr_word_def cr_word_def relcompp_apply)
  ultimately show \<open>pcr_word (signed_take_bit (LENGTH('a) - Suc 0) k) (scast w)\<close>
    by simp
qed

end

lemma of_nat_unat [simp]:
  \<open>of_nat (unat w) = unsigned w\<close>
  by transfer simp

lemma of_int_uint [simp]:
  \<open>of_int (uint w) = unsigned w\<close>
  by transfer simp

lemma of_int_sint [simp]:
  \<open>of_int (sint a) = signed a\<close>
  by transfer (simp_all add: take_bit_signed_take_bit)

lemma nat_uint_eq [simp]:
  \<open>nat (uint w) = unat w\<close>
  by transfer simp

lemma sgn_uint_eq [simp]:
  \<open>sgn (uint w) = of_bool (w \<noteq> 0)\<close>
  by transfer (simp add: less_le)

text \<open>Aliases only for code generation\<close>

context
begin

qualified lift_definition of_int :: \<open>int \<Rightarrow> 'a::len word\<close>
  is \<open>take_bit LENGTH('a)\<close> .

qualified lift_definition of_nat :: \<open>nat \<Rightarrow> 'a::len word\<close>
  is \<open>int \<circ> take_bit LENGTH('a)\<close> .
qualified lift_definition the_nat :: \<open>'a::len word \<Rightarrow> nat\<close> is \<open>nat \<circ> take_bit LENGTH('a)\<close> by simp qualified lift_definition the_signed_int :: \<open>'a::len word \<Rightarrow> int\<close> is \<open>signed_take_bit (LENGTH('a) - Suc 0)\<close> by (simp add: signed_take_bit_decr_length_iff) qualified lift_definition cast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> is \<open>take_bit LENGTH('a)\<close> by simp qualified lift_definition signed_cast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> is \<open>signed_take_bit (LENGTH('a) - Suc 0)\<close> by (metis signed_take_bit_decr_length_iff) end lemma [code_abbrev, simp]: \<open>Word.the_int = uint\<close> by transfer rule lemma [code]: \<open>Word.the_int (Word.of_int k :: 'a::len word) = take_bit LENGTH('a) k\<close> by transfer simp lemma [code_abbrev, simp]: \<open>Word.of_int = word_of_int\<close> by (rule; transfer) simp lemma [code]: \<open>Word.the_int (Word.of_nat n :: 'a::len word) = take_bit LENGTH('a) (int n)\<close> by transfer (simp add: take_bit_of_nat) lemma [code_abbrev, simp]: \<open>Word.of_nat = word_of_nat\<close> by (rule; transfer) (simp add: take_bit_of_nat) lemma [code]: \<open>Word.the_nat w = nat (Word.the_int w)\<close> by transfer simp lemma [code_abbrev, simp]: \<open>Word.the_nat = unat\<close> by (rule; transfer) simp lemma [code]: \<open>Word.the_signed_int w = signed_take_bit (LENGTH('a) - Suc 0) (Word.the_int w)\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: signed_take_bit_take_bit) lemma [code_abbrev, simp]: \<open>Word.the_signed_int = sint\<close> by (rule; transfer) simp lemma [code]: \<open>Word.the_int (Word.cast w :: 'b::len word) = take_bit LENGTH('b) (Word.the_int w)\<close> for w :: \<open>'a::len word\<close> by transfer simp lemma [code_abbrev, simp]: \<open>Word.cast = ucast\<close> by (rule; transfer) simp lemma [code]: \<open>Word.the_int (Word.signed_cast w :: 'b::len word) = take_bit LENGTH('b) (Word.the_signed_int w)\<close> for w :: \<open>'a::len word\<close> by transfer simp lemma [code_abbrev, simp]: \<open>Word.signed_cast = scast\<close> by (rule; transfer) simp lemma [code]: \<open>unsigned w = of_nat (nat (Word.the_int w))\<close> by transfer simp lemma [code]: \<open>signed w = of_int (Word.the_signed_int w)\<close> by transfer simp subsubsection \<open>Basic ordering\<close> instantiation word :: (len) linorder begin lift_definition less_eq_word :: "'a word \<Rightarrow> 'a word \<Rightarrow> bool" is "\<lambda>a b. take_bit LENGTH('a) a \<le> take_bit LENGTH('a) b" by simp lift_definition less_word :: "'a word \<Rightarrow> 'a word \<Rightarrow> bool" is "\<lambda>a b. 
take_bit LENGTH('a) a < take_bit LENGTH('a) b" by simp instance by (standard; transfer) auto end interpretation word_order: ordering_top \<open>(\<le>)\<close> \<open>(<)\<close> \<open>- 1 :: 'a::len word\<close> by (standard; transfer) (simp add: take_bit_eq_mod zmod_minus1) interpretation word_coorder: ordering_top \<open>(\<ge>)\<close> \<open>(>)\<close> \<open>0 :: 'a::len word\<close> by (standard; transfer) simp lemma word_of_nat_less_eq_iff: \<open>word_of_nat m \<le> (word_of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m \<le> take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma word_of_int_less_eq_iff: \<open>word_of_int k \<le> (word_of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k \<le> take_bit LENGTH('a) l\<close> by transfer rule lemma word_of_nat_less_iff: \<open>word_of_nat m < (word_of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m < take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma word_of_int_less_iff: \<open>word_of_int k < (word_of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k < take_bit LENGTH('a) l\<close> by transfer rule lemma word_le_def [code]: "a \<le> b \<longleftrightarrow> uint a \<le> uint b" by transfer rule lemma word_less_def [code]: "a < b \<longleftrightarrow> uint a < uint b" by transfer rule lemma word_greater_zero_iff: \<open>a > 0 \<longleftrightarrow> a \<noteq> 0\<close> for a :: \<open>'a::len word\<close> by transfer (simp add: less_le) lemma of_nat_word_less_eq_iff: \<open>of_nat m \<le> (of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m \<le> take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma of_nat_word_less_iff: \<open>of_nat m < (of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m < take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma of_int_word_less_eq_iff: \<open>of_int k \<le> (of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k \<le> take_bit LENGTH('a) l\<close> by transfer rule lemma of_int_word_less_iff: \<open>of_int k < (of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k < take_bit LENGTH('a) l\<close> by transfer rule subsection \<open>Enumeration\<close> lemma inj_on_word_of_nat: \<open>inj_on (word_of_nat :: nat \<Rightarrow> 'a::len word) {0..<2 ^ LENGTH('a)}\<close> by (rule inj_onI; transfer) (simp_all add: take_bit_int_eq_self) lemma UNIV_word_eq_word_of_nat: \<open>(UNIV :: 'a::len word set) = word_of_nat ` {0..<2 ^ LENGTH('a)}\<close> (is \<open>_ = ?A\<close>) proof show \<open>word_of_nat ` {0..<2 ^ LENGTH('a)} \<subseteq> UNIV\<close> by simp show \<open>UNIV \<subseteq> ?A\<close> proof fix w :: \<open>'a word\<close> show \<open>w \<in> (word_of_nat ` {0..<2 ^ LENGTH('a)} :: 'a word set)\<close> by (rule image_eqI [of _ _ \<open>unat w\<close>]; transfer) simp_all qed qed instantiation word :: (len) enum begin definition enum_word :: \<open>'a word list\<close> where \<open>enum_word = map word_of_nat [0..<2 ^ LENGTH('a)]\<close> definition enum_all_word :: \<open>('a word \<Rightarrow> bool) \<Rightarrow> bool\<close> where \<open>enum_all_word = All\<close> definition enum_ex_word :: \<open>('a word \<Rightarrow> bool) \<Rightarrow> bool\<close> where \<open>enum_ex_word = Ex\<close> instance by standard (simp_all add: enum_all_word_def enum_ex_word_def enum_word_def distinct_map inj_on_word_of_nat flip: UNIV_word_eq_word_of_nat) end lemma [code]: 
\<open>Enum.enum_all P \<longleftrightarrow> list_all P Enum.enum\<close> \<open>Enum.enum_ex P \<longleftrightarrow> list_ex P Enum.enum\<close> for P :: \<open>'a::len word \<Rightarrow> bool\<close> by (simp_all add: enum_all_word_def enum_ex_word_def enum_UNIV list_all_iff list_ex_iff) subsection \<open>Bit-wise operations\<close> instantiation word :: (len) semiring_modulo begin lift_definition divide_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>\<lambda>a b. take_bit LENGTH('a) a div take_bit LENGTH('a) b\<close> by simp lift_definition modulo_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>\<lambda>a b. take_bit LENGTH('a) a mod take_bit LENGTH('a) b\<close> by simp instance proof show "a div b * b + a mod b = a" for a b :: "'a word" proof transfer fix k l :: int define r :: int where "r = 2 ^ LENGTH('a)" then have r: "take_bit LENGTH('a) k = k mod r" for k by (simp add: take_bit_eq_mod) have "k mod r = ((k mod r) div (l mod r) * (l mod r) + (k mod r) mod (l mod r)) mod r" by (simp add: div_mult_mod_eq) also have "... = (((k mod r) div (l mod r) * (l mod r)) mod r + (k mod r) mod (l mod r)) mod r" by (simp add: mod_add_left_eq) also have "... = (((k mod r) div (l mod r) * l) mod r + (k mod r) mod (l mod r)) mod r" by (simp add: mod_mult_right_eq) finally have "k mod r = ((k mod r) div (l mod r) * l + (k mod r) mod (l mod r)) mod r" by (simp add: mod_simps) with r show "take_bit LENGTH('a) (take_bit LENGTH('a) k div take_bit LENGTH('a) l * l + take_bit LENGTH('a) k mod take_bit LENGTH('a) l) = take_bit LENGTH('a) k" by simp qed qed end instance word :: (len) semiring_parity proof show "\<not> 2 dvd (1::'a word)" by transfer simp show even_iff_mod_2_eq_0: "2 dvd a \<longleftrightarrow> a mod 2 = 0" for a :: "'a word" by transfer (simp_all add: mod_2_eq_odd take_bit_Suc) show "\<not> 2 dvd a \<longleftrightarrow> a mod 2 = 1" for a :: "'a word" by transfer (simp_all add: mod_2_eq_odd take_bit_Suc) qed lemma word_bit_induct [case_names zero even odd]: \<open>P a\<close> if word_zero: \<open>P 0\<close> and word_even: \<open>\<And>a. P a \<Longrightarrow> 0 < a \<Longrightarrow> a < 2 ^ (LENGTH('a) - Suc 0) \<Longrightarrow> P (2 * a)\<close> and word_odd: \<open>\<And>a. 
P a \<Longrightarrow> a < 2 ^ (LENGTH('a) - Suc 0) \<Longrightarrow> P (1 + 2 * a)\<close> for P and a :: \<open>'a::len word\<close> proof - define m :: nat where \<open>m = LENGTH('a) - Suc 0\<close> then have l: \<open>LENGTH('a) = Suc m\<close> by simp define n :: nat where \<open>n = unat a\<close> then have \<open>n < 2 ^ LENGTH('a)\<close> by transfer (simp add: take_bit_eq_mod) then have \<open>n < 2 * 2 ^ m\<close> by (simp add: l) then have \<open>P (of_nat n)\<close> proof (induction n rule: nat_bit_induct) case zero show ?case by simp (rule word_zero) next case (even n) then have \<open>n < 2 ^ m\<close> by simp with even.IH have \<open>P (of_nat n)\<close> by simp moreover from \<open>n < 2 ^ m\<close> even.hyps have \<open>0 < (of_nat n :: 'a word)\<close> by (auto simp add: word_greater_zero_iff l word_of_nat_eq_0_iff) moreover from \<open>n < 2 ^ m\<close> have \<open>(of_nat n :: 'a word) < 2 ^ (LENGTH('a) - Suc 0)\<close> using of_nat_word_less_iff [where ?'a = 'a, of n \<open>2 ^ m\<close>] by (simp add: l take_bit_eq_mod) ultimately have \<open>P (2 * of_nat n)\<close> by (rule word_even) then show ?case by simp next case (odd n) then have \<open>Suc n \<le> 2 ^ m\<close> by simp with odd.IH have \<open>P (of_nat n)\<close> by simp moreover from \<open>Suc n \<le> 2 ^ m\<close> have \<open>(of_nat n :: 'a word) < 2 ^ (LENGTH('a) - Suc 0)\<close> using of_nat_word_less_iff [where ?'a = 'a, of n \<open>2 ^ m\<close>] by (simp add: l take_bit_eq_mod) ultimately have \<open>P (1 + 2 * of_nat n)\<close> by (rule word_odd) then show ?case by simp qed moreover have \<open>of_nat (nat (uint a)) = a\<close> by transfer simp ultimately show ?thesis by (simp add: n_def) qed lemma bit_word_half_eq: \<open>(of_bool b + a * 2) div 2 = a\<close> if \<open>a < 2 ^ (LENGTH('a) - Suc 0)\<close> for a :: \<open>'a::len word\<close> proof (cases \<open>2 \<le> LENGTH('a::len)\<close>) case False have \<open>of_bool (odd k) < (1 :: int) \<longleftrightarrow> even k\<close> for k :: int by auto with False that show ?thesis by transfer (simp add: eq_iff) next case True obtain n where length: \<open>LENGTH('a) = Suc n\<close> by (cases \<open>LENGTH('a)\<close>) simp_all show ?thesis proof (cases b) case False moreover have \<open>a * 2 div 2 = a\<close> using that proof transfer fix k :: int from length have \<open>k * 2 mod 2 ^ LENGTH('a) = (k mod 2 ^ n) * 2\<close> by simp moreover assume \<open>take_bit LENGTH('a) k < take_bit LENGTH('a) (2 ^ (LENGTH('a) - Suc 0))\<close> with \<open>LENGTH('a) = Suc n\<close> have \<open>take_bit LENGTH('a) k = take_bit n k\<close> by (auto simp add: take_bit_Suc_from_most) ultimately have \<open>take_bit LENGTH('a) (k * 2) = take_bit LENGTH('a) k * 2\<close> by (simp add: take_bit_eq_mod) with True show \<open>take_bit LENGTH('a) (take_bit LENGTH('a) (k * 2) div take_bit LENGTH('a) 2) = take_bit LENGTH('a) k\<close> by simp qed ultimately show ?thesis by simp next case True moreover have \<open>(1 + a * 2) div 2 = a\<close> using that proof transfer fix k :: int from length have \<open>(1 + k * 2) mod 2 ^ LENGTH('a) = 1 + (k mod 2 ^ n) * 2\<close> using pos_zmod_mult_2 [of \<open>2 ^ n\<close> k] by (simp add: ac_simps) moreover assume \<open>take_bit LENGTH('a) k < take_bit LENGTH('a) (2 ^ (LENGTH('a) - Suc 0))\<close> with \<open>LENGTH('a) = Suc n\<close> have \<open>take_bit LENGTH('a) k = take_bit n k\<close> by (auto simp add: take_bit_Suc_from_most) ultimately have \<open>take_bit LENGTH('a) (1 + k * 2) = 1 + take_bit LENGTH('a) k * 2\<close> by 
(simp add: take_bit_eq_mod) with True show \<open>take_bit LENGTH('a) (take_bit LENGTH('a) (1 + k * 2) div take_bit LENGTH('a) 2) = take_bit LENGTH('a) k\<close> by (auto simp add: take_bit_Suc) qed ultimately show ?thesis by simp qed qed lemma even_mult_exp_div_word_iff: \<open>even (a * 2 ^ m div 2 ^ n) \<longleftrightarrow> \<not> ( m \<le> n \<and> n < LENGTH('a) \<and> odd (a div 2 ^ (n - m)))\<close> for a :: \<open>'a::len word\<close> by transfer (auto simp flip: drop_bit_eq_div simp add: even_drop_bit_iff_not_bit bit_take_bit_iff, simp_all flip: push_bit_eq_mult add: bit_push_bit_iff_int) instantiation word :: (len) semiring_bits begin lift_definition bit_word :: \<open>'a word \<Rightarrow> nat \<Rightarrow> bool\<close> is \<open>\<lambda>k n. n < LENGTH('a) \<and> bit k n\<close> proof fix k l :: int and n :: nat assume *: \<open>take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close> show \<open>n < LENGTH('a) \<and> bit k n \<longleftrightarrow> n < LENGTH('a) \<and> bit l n\<close> proof (cases \<open>n < LENGTH('a)\<close>) case True from * have \<open>bit (take_bit LENGTH('a) k) n \<longleftrightarrow> bit (take_bit LENGTH('a) l) n\<close> by simp then show ?thesis by (simp add: bit_take_bit_iff) next case False then show ?thesis by simp qed qed instance proof show \<open>P a\<close> if stable: \<open>\<And>a. a div 2 = a \<Longrightarrow> P a\<close> and rec: \<open>\<And>a b. P a \<Longrightarrow> (of_bool b + 2 * a) div 2 = a \<Longrightarrow> P (of_bool b + 2 * a)\<close> for P and a :: \<open>'a word\<close> proof (induction a rule: word_bit_induct) case zero have \<open>0 div 2 = (0::'a word)\<close> by transfer simp with stable [of 0] show ?case by simp next case (even a) with rec [of a False] show ?case using bit_word_half_eq [of a False] by (simp add: ac_simps) next case (odd a) with rec [of a True] show ?case using bit_word_half_eq [of a True] by (simp add: ac_simps) qed show \<open>bit a n \<longleftrightarrow> odd (a div 2 ^ n)\<close> for a :: \<open>'a word\<close> and n by transfer (simp flip: drop_bit_eq_div add: drop_bit_take_bit bit_iff_odd_drop_bit) show \<open>0 div a = 0\<close> for a :: \<open>'a word\<close> by transfer simp show \<open>a div 1 = a\<close> for a :: \<open>'a word\<close> by transfer simp show \<open>a mod b div b = 0\<close> for a b :: \<open>'a word\<close> apply transfer apply (simp add: take_bit_eq_mod) apply (smt (verit, best) Euclidean_Rings.pos_mod_bound Euclidean_Rings.pos_mod_sign div_int_pos_iff nonneg1_imp_zdiv_pos_iff zero_less_power zmod_le_nonneg_dividend) done show \<open>(1 + a) div 2 = a div 2\<close> if \<open>even a\<close> for a :: \<open>'a word\<close> using that by transfer (auto dest: le_Suc_ex simp add: take_bit_Suc elim!: evenE) show \<open>(2 :: 'a word) ^ m div 2 ^ n = of_bool ((2 :: 'a word) ^ m \<noteq> 0 \<and> n \<le> m) * 2 ^ (m - n)\<close> for m n :: nat by transfer (simp, simp add: exp_div_exp_eq) show "a div 2 ^ m div 2 ^ n = a div 2 ^ (m + n)" for a :: "'a word" and m n :: nat apply transfer apply (auto simp add: not_less take_bit_drop_bit ac_simps simp flip: drop_bit_eq_div) apply (simp add: drop_bit_take_bit) done show "a mod 2 ^ m mod 2 ^ n = a mod 2 ^ min m n" for a :: "'a word" and m n :: nat by transfer (auto simp flip: take_bit_eq_mod simp add: ac_simps) show \<open>a * 2 ^ m mod 2 ^ n = a mod 2 ^ (n - m) * 2 ^ m\<close> if \<open>m \<le> n\<close> for a :: "'a word" and m n :: nat using that apply transfer apply (auto simp flip: take_bit_eq_mod) apply (auto simp flip: push_bit_eq_mult simp 
add: push_bit_take_bit split: split_min_lin) done show \<open>a div 2 ^ n mod 2 ^ m = a mod (2 ^ (n + m)) div 2 ^ n\<close> for a :: "'a word" and m n :: nat by transfer (auto simp add: not_less take_bit_drop_bit ac_simps simp flip: take_bit_eq_mod drop_bit_eq_div split: split_min_lin) show \<open>even ((2 ^ m - 1) div (2::'a word) ^ n) \<longleftrightarrow> 2 ^ n = (0::'a word) \<or> m \<le> n\<close> for m n :: nat by transfer (simp flip: drop_bit_eq_div mask_eq_exp_minus_1 add: bit_simps even_drop_bit_iff_not_bit not_less) show \<open>even (a * 2 ^ m div 2 ^ n) \<longleftrightarrow> n < m \<or> (2::'a word) ^ n = 0 \<or> m \<le> n \<and> even (a div 2 ^ (n - m))\<close> for a :: \<open>'a word\<close> and m n :: nat proof transfer show \<open>even (take_bit LENGTH('a) (k * 2 ^ m) div take_bit LENGTH('a) (2 ^ n)) \<longleftrightarrow> n < m \<or> take_bit LENGTH('a) ((2::int) ^ n) = take_bit LENGTH('a) 0 \<or> (m \<le> n \<and> even (take_bit LENGTH('a) k div take_bit LENGTH('a) (2 ^ (n - m))))\<close> for m n :: nat and k l :: int by (auto simp flip: take_bit_eq_mod drop_bit_eq_div push_bit_eq_mult simp add: div_push_bit_of_1_eq_drop_bit drop_bit_take_bit drop_bit_push_bit_int [of n m]) qed qed end lemma bit_word_eqI: \<open>a = b\<close> if \<open>\<And>n. n < LENGTH('a) \<Longrightarrow> bit a n \<longleftrightarrow> bit b n\<close> for a b :: \<open>'a::len word\<close> using that by transfer (auto simp add: nat_less_le bit_eq_iff bit_take_bit_iff) lemma bit_imp_le_length: \<open>n < LENGTH('a)\<close> if \<open>bit w n\<close> for w :: \<open>'a::len word\<close> using that by transfer simp lemma not_bit_length [simp]: \<open>\<not> bit w LENGTH('a)\<close> for w :: \<open>'a::len word\<close> by transfer simp lemma finite_bit_word [simp]: \<open>finite {n. bit w n}\<close> for w :: \<open>'a::len word\<close> proof - have \<open>{n. bit w n} \<subseteq> {0..LENGTH('a)}\<close> by (auto dest: bit_imp_le_length) moreover have \<open>finite {0..LENGTH('a)}\<close> by simp ultimately show ?thesis by (rule finite_subset) qed lemma bit_numeral_word_iff [simp]: \<open>bit (numeral w :: 'a::len word) n \<longleftrightarrow> n < LENGTH('a) \<and> bit (numeral w :: int) n\<close> by transfer simp lemma bit_neg_numeral_word_iff [simp]: \<open>bit (- numeral w :: 'a::len word) n \<longleftrightarrow> n < LENGTH('a) \<and> bit (- numeral w :: int) n\<close> by transfer simp instantiation word :: (len) ring_bit_operations begin lift_definition not_word :: \<open>'a word \<Rightarrow> 'a word\<close> is not by (simp add: take_bit_not_iff) lift_definition and_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>and\<close> by simp lift_definition or_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is or by simp lift_definition xor_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is xor by simp lift_definition mask_word :: \<open>nat \<Rightarrow> 'a word\<close> is mask . 
lift_definition set_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is set_bit by (simp add: set_bit_def) lift_definition unset_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is unset_bit by (simp add: unset_bit_def) lift_definition flip_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is flip_bit by (simp add: flip_bit_def) lift_definition push_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is push_bit proof - show \<open>take_bit LENGTH('a) (push_bit n k) = take_bit LENGTH('a) (push_bit n l)\<close> if \<open>take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close> for k l :: int and n :: nat proof - from that have \<open>take_bit (LENGTH('a) - n) (take_bit LENGTH('a) k) = take_bit (LENGTH('a) - n) (take_bit LENGTH('a) l)\<close> by simp moreover have \<open>min (LENGTH('a) - n) LENGTH('a) = LENGTH('a) - n\<close> by simp ultimately show ?thesis by (simp add: take_bit_push_bit) qed qed lift_definition drop_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>\<lambda>n. drop_bit n \<circ> take_bit LENGTH('a)\<close> by (simp add: take_bit_eq_mod) lift_definition take_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>\<lambda>n. take_bit (min LENGTH('a) n)\<close> by (simp add: ac_simps) (simp only: flip: take_bit_take_bit) instance apply (standard; transfer) apply (auto simp add: minus_eq_not_minus_1 mask_eq_exp_minus_1 bit_simps set_bit_def flip_bit_def take_bit_drop_bit simp flip: drop_bit_eq_div take_bit_eq_mod) apply (simp_all add: drop_bit_take_bit flip: push_bit_eq_mult) done end lemma [code]: \<open>push_bit n w = w * 2 ^ n\<close> for w :: \<open>'a::len word\<close> by (fact push_bit_eq_mult) lemma [code]: \<open>Word.the_int (drop_bit n w) = drop_bit n (Word.the_int w)\<close> by transfer (simp add: drop_bit_take_bit min_def le_less less_diff_conv) lemma [code]: \<open>Word.the_int (take_bit n w) = (if n < LENGTH('a::len) then take_bit n (Word.the_int w) else Word.the_int w)\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: not_le not_less ac_simps min_absorb2) lemma [code_abbrev]: \<open>push_bit n 1 = (2 :: 'a::len word) ^ n\<close> by (fact push_bit_of_1) context includes bit_operations_syntax begin lemma [code]: \<open>NOT w = Word.of_int (NOT (Word.the_int w))\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_not_take_bit) lemma [code]: \<open>Word.the_int (v AND w) = Word.the_int v AND Word.the_int w\<close> by transfer simp lemma [code]: \<open>Word.the_int (v OR w) = Word.the_int v OR Word.the_int w\<close> by transfer simp lemma [code]: \<open>Word.the_int (v XOR w) = Word.the_int v XOR Word.the_int w\<close> by transfer simp lemma [code]: \<open>Word.the_int (mask n :: 'a::len word) = mask (min LENGTH('a) n)\<close> by transfer simp lemma [code]: \<open>set_bit n w = w OR push_bit n 1\<close> for w :: \<open>'a::len word\<close> by (fact set_bit_eq_or) lemma [code]: \<open>unset_bit n w = w AND NOT (push_bit n 1)\<close> for w :: \<open>'a::len word\<close> by (fact unset_bit_eq_and_not) lemma [code]: \<open>flip_bit n w = w XOR push_bit n 1\<close> for w :: \<open>'a::len word\<close> by (fact flip_bit_eq_xor) context includes lifting_syntax begin lemma set_bit_word_transfer [transfer_rule]: \<open>((=) ===> pcr_word ===> pcr_word) set_bit set_bit\<close> by (unfold set_bit_def) transfer_prover lemma unset_bit_word_transfer [transfer_rule]: \<open>((=) 
===> pcr_word ===> pcr_word) unset_bit unset_bit\<close> by (unfold unset_bit_def) transfer_prover lemma flip_bit_word_transfer [transfer_rule]: \<open>((=) ===> pcr_word ===> pcr_word) flip_bit flip_bit\<close> by (unfold flip_bit_def) transfer_prover lemma signed_take_bit_word_transfer [transfer_rule]: \<open>((=) ===> pcr_word ===> pcr_word) (\<lambda>n k. signed_take_bit n (take_bit LENGTH('a::len) k)) (signed_take_bit :: nat \<Rightarrow> 'a word \<Rightarrow> 'a word)\<close> proof - let ?K = \<open>\<lambda>n (k :: int). take_bit (min LENGTH('a) n) k OR of_bool (n < LENGTH('a) \<and> bit k n) * NOT (mask n)\<close> let ?W = \<open>\<lambda>n (w :: 'a word). take_bit n w OR of_bool (bit w n) * NOT (mask n)\<close> have \<open>((=) ===> pcr_word ===> pcr_word) ?K ?W\<close> by transfer_prover also have \<open>?K = (\<lambda>n k. signed_take_bit n (take_bit LENGTH('a::len) k))\<close> by (simp add: fun_eq_iff signed_take_bit_def bit_take_bit_iff ac_simps) also have \<open>?W = signed_take_bit\<close> by (simp add: fun_eq_iff signed_take_bit_def) finally show ?thesis . qed end end subsection \<open>Conversions including casts\<close> subsubsection \<open>Generic unsigned conversion\<close> context semiring_bits begin lemma bit_unsigned_iff [bit_simps]: \<open>bit (unsigned w) n \<longleftrightarrow> possible_bit TYPE('a) n \<and> bit w n\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: bit) (simp add: bit_of_nat_iff bit_nat_iff bit_take_bit_iff) end lemma possible_bit_word[simp]: \<open>possible_bit TYPE(('a :: len) word) m \<longleftrightarrow> m < LENGTH('a)\<close> by (simp add: possible_bit_def linorder_not_le) context semiring_bit_operations begin lemma unsigned_minus_1_eq_mask: \<open>unsigned (- 1 :: 'b::len word) = mask LENGTH('b)\<close> by (transfer fixing: mask) (simp add: nat_mask_eq of_nat_mask_eq) lemma unsigned_push_bit_eq: \<open>unsigned (push_bit n w) = take_bit LENGTH('b) (push_bit n (unsigned w))\<close> for w :: \<open>'b::len word\<close> proof (rule bit_eqI) fix m assume \<open>possible_bit TYPE('a) m\<close> show \<open>bit (unsigned (push_bit n w)) m = bit (take_bit LENGTH('b) (push_bit n (unsigned w))) m\<close> proof (cases \<open>n \<le> m\<close>) case True with \<open>possible_bit TYPE('a) m\<close> have \<open>possible_bit TYPE('a) (m - n)\<close> by (simp add: possible_bit_less_imp) with True show ?thesis by (simp add: bit_unsigned_iff bit_push_bit_iff Bit_Operations.bit_push_bit_iff bit_take_bit_iff not_le ac_simps) next case False then show ?thesis by (simp add: not_le bit_unsigned_iff bit_push_bit_iff Bit_Operations.bit_push_bit_iff bit_take_bit_iff) qed qed lemma unsigned_take_bit_eq: \<open>unsigned (take_bit n w) = take_bit n (unsigned w)\<close> for w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_unsigned_iff bit_take_bit_iff Bit_Operations.bit_take_bit_iff) end context unique_euclidean_semiring_with_bit_operations begin lemma unsigned_drop_bit_eq: \<open>unsigned (drop_bit n w) = drop_bit n (take_bit LENGTH('b) (unsigned w))\<close> for w :: \<open>'b::len word\<close> by (rule bit_eqI) (auto simp add: bit_unsigned_iff bit_take_bit_iff bit_drop_bit_eq Bit_Operations.bit_drop_bit_eq possible_bit_def dest: bit_imp_le_length) end lemma ucast_drop_bit_eq: \<open>ucast (drop_bit n w) = drop_bit n (ucast w :: 'b::len word)\<close> if \<open>LENGTH('a) \<le> LENGTH('b)\<close> for w :: \<open>'a::len word\<close> by (rule bit_word_eqI) (use that in \<open>auto simp add: bit_unsigned_iff bit_drop_bit_eq dest: 
bit_imp_le_length\<close>) context semiring_bit_operations begin context includes bit_operations_syntax begin lemma unsigned_and_eq: \<open>unsigned (v AND w) = unsigned v AND unsigned w\<close> for v w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps) lemma unsigned_or_eq: \<open>unsigned (v OR w) = unsigned v OR unsigned w\<close> for v w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps) lemma unsigned_xor_eq: \<open>unsigned (v XOR w) = unsigned v XOR unsigned w\<close> for v w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps) end end context ring_bit_operations begin context includes bit_operations_syntax begin lemma unsigned_not_eq: \<open>unsigned (NOT w) = take_bit LENGTH('b) (NOT (unsigned w))\<close> for w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps) end end context unique_euclidean_semiring_numeral begin lemma unsigned_greater_eq [simp]: \<open>0 \<le> unsigned w\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: less_eq) simp lemma unsigned_less [simp]: \<open>unsigned w < 2 ^ LENGTH('b)\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: less) simp end context linordered_semidom begin lemma word_less_eq_iff_unsigned: "a \<le> b \<longleftrightarrow> unsigned a \<le> unsigned b" by (transfer fixing: less_eq) (simp add: nat_le_eq_zle) lemma word_less_iff_unsigned: "a < b \<longleftrightarrow> unsigned a < unsigned b" by (transfer fixing: less) (auto dest: preorder_class.le_less_trans [OF take_bit_nonnegative]) end subsubsection \<open>Generic signed conversion\<close> context ring_bit_operations begin lemma bit_signed_iff [bit_simps]: \<open>bit (signed w) n \<longleftrightarrow> possible_bit TYPE('a) n \<and> bit w (min (LENGTH('b) - Suc 0) n)\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: bit) (auto simp add: bit_of_int_iff Bit_Operations.bit_signed_take_bit_iff min_def) lemma signed_push_bit_eq: \<open>signed (push_bit n w) = signed_take_bit (LENGTH('b) - Suc 0) (push_bit n (signed w :: 'a))\<close> for w :: \<open>'b::len word\<close> apply (simp add: bit_eq_iff bit_simps possible_bit_less_imp min_less_iff_disj) apply (cases n, simp_all add: min_def) done lemma signed_take_bit_eq: \<open>signed (take_bit n w) = (if n < LENGTH('b) then take_bit n (signed w) else signed w)\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: take_bit; cases \<open>LENGTH('b)\<close>) (auto simp add: Bit_Operations.signed_take_bit_take_bit Bit_Operations.take_bit_signed_take_bit take_bit_of_int min_def less_Suc_eq) context includes bit_operations_syntax begin lemma signed_not_eq: \<open>signed (NOT w) = signed_take_bit LENGTH('b) (NOT (signed w))\<close> for w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps possible_bit_less_imp min_less_iff_disj) (auto simp: min_def) lemma signed_and_eq: \<open>signed (v AND w) = signed v AND signed w\<close> for v w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_signed_iff bit_and_iff Bit_Operations.bit_and_iff) lemma signed_or_eq: \<open>signed (v OR w) = signed v OR signed w\<close> for v w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_signed_iff bit_or_iff Bit_Operations.bit_or_iff) lemma signed_xor_eq: \<open>signed (v XOR w) = signed v XOR signed w\<close> for v w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_signed_iff bit_xor_iff Bit_Operations.bit_xor_iff) end end subsubsection \<open>More\<close> lemma sint_greater_eq: \<open>- (2 ^ (LENGTH('a) - Suc 
0)) \<le> sint w\<close> for w :: \<open>'a::len word\<close> proof (cases \<open>bit w (LENGTH('a) - Suc 0)\<close>) case True then show ?thesis by transfer (simp add: signed_take_bit_eq_if_negative minus_exp_eq_not_mask or_greater_eq ac_simps) next have *: \<open>- (2 ^ (LENGTH('a) - Suc 0)) \<le> (0::int)\<close> by simp case False then show ?thesis by transfer (auto simp add: signed_take_bit_eq intro: order_trans *) qed lemma sint_less: \<open>sint w < 2 ^ (LENGTH('a) - Suc 0)\<close> for w :: \<open>'a::len word\<close> by (cases \<open>bit w (LENGTH('a) - Suc 0)\<close>; transfer) (simp_all add: signed_take_bit_eq signed_take_bit_def not_eq_complement mask_eq_exp_minus_1 OR_upper) lemma unat_div_distrib: \<open>unat (v div w) = unat v div unat w\<close> proof transfer fix k l have \<open>nat (take_bit LENGTH('a) k) div nat (take_bit LENGTH('a) l) \<le> nat (take_bit LENGTH('a) k)\<close> by (rule div_le_dividend) also have \<open>nat (take_bit LENGTH('a) k) < 2 ^ LENGTH('a)\<close> by (simp add: nat_less_iff) finally show \<open>(nat \<circ> take_bit LENGTH('a)) (take_bit LENGTH('a) k div take_bit LENGTH('a) l) = (nat \<circ> take_bit LENGTH('a)) k div (nat \<circ> take_bit LENGTH('a)) l\<close> by (simp add: nat_take_bit_eq div_int_pos_iff nat_div_distrib take_bit_nat_eq_self_iff) qed lemma unat_mod_distrib: \<open>unat (v mod w) = unat v mod unat w\<close> proof transfer fix k l have \<open>nat (take_bit LENGTH('a) k) mod nat (take_bit LENGTH('a) l) \<le> nat (take_bit LENGTH('a) k)\<close> by (rule mod_less_eq_dividend) also have \<open>nat (take_bit LENGTH('a) k) < 2 ^ LENGTH('a)\<close> by (simp add: nat_less_iff) finally show \<open>(nat \<circ> take_bit LENGTH('a)) (take_bit LENGTH('a) k mod take_bit LENGTH('a) l) = (nat \<circ> take_bit LENGTH('a)) k mod (nat \<circ> take_bit LENGTH('a)) l\<close> by (simp add: nat_take_bit_eq mod_int_pos_iff less_le nat_mod_distrib take_bit_nat_eq_self_iff) qed lemma uint_div_distrib: \<open>uint (v div w) = uint v div uint w\<close> proof - have \<open>int (unat (v div w)) = int (unat v div unat w)\<close> by (simp add: unat_div_distrib) then show ?thesis by (simp add: of_nat_div) qed lemma unat_drop_bit_eq: \<open>unat (drop_bit n w) = drop_bit n (unat w)\<close> by (rule bit_eqI) (simp add: bit_unsigned_iff bit_drop_bit_eq) lemma uint_mod_distrib: \<open>uint (v mod w) = uint v mod uint w\<close> proof - have \<open>int (unat (v mod w)) = int (unat v mod unat w)\<close> by (simp add: unat_mod_distrib) then show ?thesis by (simp add: of_nat_mod) qed context semiring_bit_operations begin lemma unsigned_ucast_eq: \<open>unsigned (ucast w :: 'c::len word) = take_bit LENGTH('c) (unsigned w)\<close> for w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_unsigned_iff Word.bit_unsigned_iff bit_take_bit_iff not_le) end context ring_bit_operations begin lemma signed_ucast_eq: \<open>signed (ucast w :: 'c::len word) = signed_take_bit (LENGTH('c) - Suc 0) (unsigned w)\<close> for w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps min_less_iff_disj) lemma signed_scast_eq: \<open>signed (scast w :: 'c::len word) = signed_take_bit (LENGTH('c) - Suc 0) (signed w)\<close> for w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps min_less_iff_disj) end lemma uint_nonnegative: "0 \<le> uint w" by (fact unsigned_greater_eq) lemma uint_bounded: "uint w < 2 ^ LENGTH('a)" for w :: "'a::len word" by (fact unsigned_less) lemma uint_idem: "uint w mod 2 ^ LENGTH('a) = uint w" for w :: "'a::len word" by transfer (simp 
add: take_bit_eq_mod) lemma word_uint_eqI: "uint a = uint b \<Longrightarrow> a = b" by (fact unsigned_word_eqI) lemma word_uint_eq_iff: "a = b \<longleftrightarrow> uint a = uint b" by (fact word_eq_iff_unsigned) lemma uint_word_of_int_eq: \<open>uint (word_of_int k :: 'a::len word) = take_bit LENGTH('a) k\<close> by transfer rule lemma uint_word_of_int: "uint (word_of_int k :: 'a::len word) = k mod 2 ^ LENGTH('a)" by (simp add: uint_word_of_int_eq take_bit_eq_mod) lemma word_of_int_uint: "word_of_int (uint w) = w" by transfer simp lemma word_div_def [code]: "a div b = word_of_int (uint a div uint b)" by transfer rule lemma word_mod_def [code]: "a mod b = word_of_int (uint a mod uint b)" by transfer rule lemma split_word_all: "(\<And>x::'a::len word. PROP P x) \<equiv> (\<And>x. PROP P (word_of_int x))" proof fix x :: "'a word" assume "\<And>x. PROP P (word_of_int x)" then have "PROP P (word_of_int (uint x))" . then show "PROP P x" by (simp only: word_of_int_uint) qed lemma sint_uint: \<open>sint w = signed_take_bit (LENGTH('a) - Suc 0) (uint w)\<close> for w :: \<open>'a::len word\<close> by (cases \<open>LENGTH('a)\<close>; transfer) (simp_all add: signed_take_bit_take_bit) lemma unat_eq_nat_uint: \<open>unat w = nat (uint w)\<close> by simp lemma ucast_eq: \<open>ucast w = word_of_int (uint w)\<close> by transfer simp lemma scast_eq: \<open>scast w = word_of_int (sint w)\<close> by transfer simp lemma uint_0_eq: \<open>uint 0 = 0\<close> by (fact unsigned_0) lemma uint_1_eq: \<open>uint 1 = 1\<close> by (fact unsigned_1) lemma word_m1_wi: "- 1 = word_of_int (- 1)" by simp lemma uint_0_iff: "uint x = 0 \<longleftrightarrow> x = 0" by (auto simp add: unsigned_word_eqI) lemma unat_0_iff: "unat x = 0 \<longleftrightarrow> x = 0" by (auto simp add: unsigned_word_eqI) lemma unat_0: "unat 0 = 0" by (fact unsigned_0) lemma unat_gt_0: "0 < unat x \<longleftrightarrow> x \<noteq> 0" by (auto simp: unat_0_iff [symmetric]) lemma ucast_0: "ucast 0 = 0" by (fact unsigned_0) lemma sint_0: "sint 0 = 0" by (fact signed_0) lemma scast_0: "scast 0 = 0" by (fact signed_0) lemma sint_n1: "sint (- 1) = - 1" by (fact signed_minus_1) lemma scast_n1: "scast (- 1) = - 1" by (fact signed_minus_1) lemma uint_1: "uint (1::'a::len word) = 1" by (fact uint_1_eq) lemma unat_1: "unat (1::'a::len word) = 1" by (fact unsigned_1) lemma ucast_1: "ucast (1::'a::len word) = 1" by (fact unsigned_1) instantiation word :: (len) size begin lift_definition size_word :: \<open>'a word \<Rightarrow> nat\<close> is \<open>\<lambda>_. LENGTH('a)\<close> .. instance .. end lemma word_size [code]: \<open>size w = LENGTH('a)\<close> for w :: \<open>'a::len word\<close> by (fact size_word.rep_eq) lemma word_size_gt_0 [iff]: "0 < size w" for w :: "'a::len word" by (simp add: word_size) lemmas lens_gt_0 = word_size_gt_0 len_gt_0 lemma lens_not_0 [iff]: \<open>size w \<noteq> 0\<close> for w :: \<open>'a::len word\<close> by auto lift_definition source_size :: \<open>('a::len word \<Rightarrow> 'b) \<Rightarrow> nat\<close> is \<open>\<lambda>_. LENGTH('a)\<close> . lift_definition target_size :: \<open>('a \<Rightarrow> 'b::len word) \<Rightarrow> nat\<close> is \<open>\<lambda>_. LENGTH('b)\<close> .. lift_definition is_up :: \<open>('a::len word \<Rightarrow> 'b::len word) \<Rightarrow> bool\<close> is \<open>\<lambda>_. LENGTH('a) \<le> LENGTH('b)\<close> .. lift_definition is_down :: \<open>('a::len word \<Rightarrow> 'b::len word) \<Rightarrow> bool\<close> is \<open>\<lambda>_. LENGTH('a) \<ge> LENGTH('b)\<close> .. 
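text\<open>For illustration (an added sketch, not part of the original file; it assumes
the numeral length types from \<open>HOL-Library.Type_Length\<close> and guesses the proof
method): widening an 8-bit word to 16 bits is an \<open>up\<close> cast, while the converse
direction is a \<open>down\<close> cast.\<close>
(* hypothetical concrete instances of is_up/is_down, reduced to LENGTH comparisons *)
lemma \<open>is_up (ucast :: 8 word \<Rightarrow> 16 word)\<close> and \<open>is_down (ucast :: 16 word \<Rightarrow> 8 word)\<close>
  by (simp_all add: is_up.rep_eq is_down.rep_eq)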
lemma is_up_eq: \<open>is_up f \<longleftrightarrow> source_size f \<le> target_size f\<close> for f :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> by (simp add: source_size.rep_eq target_size.rep_eq is_up.rep_eq) lemma is_down_eq: \<open>is_down f \<longleftrightarrow> target_size f \<le> source_size f\<close> for f :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> by (simp add: source_size.rep_eq target_size.rep_eq is_down.rep_eq) lift_definition word_int_case :: \<open>(int \<Rightarrow> 'b) \<Rightarrow> 'a::len word \<Rightarrow> 'b\<close> is \<open>\<lambda>f. f \<circ> take_bit LENGTH('a)\<close> by simp lemma word_int_case_eq_uint [code]: \<open>word_int_case f w = f (uint w)\<close> by transfer simp translations "case x of XCONST of_int y \<Rightarrow> b" \<rightleftharpoons> "CONST word_int_case (\<lambda>y. b) x" "case x of (XCONST of_int :: 'a) y \<Rightarrow> b" \<rightharpoonup> "CONST word_int_case (\<lambda>y. b) x" subsection \<open>Arithmetic operations\<close> lemma div_word_self: \<open>w div w = 1\<close> if \<open>w \<noteq> 0\<close> for w :: \<open>'a::len word\<close> using that by transfer simp lemma mod_word_self [simp]: \<open>w mod w = 0\<close> for w :: \<open>'a::len word\<close> apply (cases \<open>w = 0\<close>) apply auto using div_mult_mod_eq [of w w] by (simp add: div_word_self) lemma div_word_less: \<open>w div v = 0\<close> if \<open>w < v\<close> for w v :: \<open>'a::len word\<close> using that by transfer simp lemma mod_word_less: \<open>w mod v = w\<close> if \<open>w < v\<close> for w v :: \<open>'a::len word\<close> using div_mult_mod_eq [of w v] using that by (simp add: div_word_less) lemma div_word_one [simp]: \<open>1 div w = of_bool (w = 1)\<close> for w :: \<open>'a::len word\<close> proof transfer fix k :: int show \<open>take_bit LENGTH('a) (take_bit LENGTH('a) 1 div take_bit LENGTH('a) k) = take_bit LENGTH('a) (of_bool (take_bit LENGTH('a) k = take_bit LENGTH('a) 1))\<close> proof (cases \<open>take_bit LENGTH('a) k > 1\<close>) case False with take_bit_nonnegative [of \<open>LENGTH('a)\<close> k] have \<open>take_bit LENGTH('a) k = 0 \<or> take_bit LENGTH('a) k = 1\<close> by linarith then show ?thesis by auto next case True then show ?thesis by simp qed qed lemma mod_word_one [simp]: \<open>1 mod w = 1 - w * of_bool (w = 1)\<close> for w :: \<open>'a::len word\<close> using div_mult_mod_eq [of 1 w] by auto lemma div_word_by_minus_1_eq [simp]: \<open>w div - 1 = of_bool (w = - 1)\<close> for w :: \<open>'a::len word\<close> by (auto intro: div_word_less simp add: div_word_self word_order.not_eq_extremum) lemma mod_word_by_minus_1_eq [simp]: \<open>w mod - 1 = w * of_bool (w < - 1)\<close> for w :: \<open>'a::len word\<close> proof (cases \<open>w = - 1\<close>) case True then show ?thesis by simp next case False moreover have \<open>w < - 1\<close> using False by (simp add: word_order.not_eq_extremum) ultimately show ?thesis by (simp add: mod_word_less) qed text \<open>Legacy theorems:\<close> lemma word_add_def [code]: "a + b = word_of_int (uint a + uint b)" by transfer (simp add: take_bit_add) lemma word_sub_wi [code]: "a - b = word_of_int (uint a - uint b)" by transfer (simp add: take_bit_diff) lemma word_mult_def [code]: "a * b = word_of_int (uint a * uint b)" by transfer (simp add: take_bit_eq_mod mod_simps) lemma word_minus_def [code]: "- a = word_of_int (- uint a)" by transfer (simp add: take_bit_minus) lemma word_0_wi: "0 = word_of_int 0" by transfer simp lemma word_1_wi: "1 = word_of_int 1" by transfer simp 
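text\<open>A small worked example of the modular arithmetic behind these legacy
theorems (an added sketch; the proof method is a plausible guess): incrementing
the largest 8-bit word wraps around to zero.\<close>
(* after transfer the goal is take_bit 8 (255 + 1) = take_bit 8 0 on int *)
lemma \<open>(255 :: 8 word) + 1 = 0\<close>
  by transfer simp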
lift_definition word_succ :: "'a::len word \<Rightarrow> 'a word" is "\<lambda>x. x + 1" by (auto simp add: take_bit_eq_mod intro: mod_add_cong) lift_definition word_pred :: "'a::len word \<Rightarrow> 'a word" is "\<lambda>x. x - 1" by (auto simp add: take_bit_eq_mod intro: mod_diff_cong) lemma word_succ_alt [code]: "word_succ a = word_of_int (uint a + 1)" by transfer (simp add: take_bit_eq_mod mod_simps) lemma word_pred_alt [code]: "word_pred a = word_of_int (uint a - 1)" by transfer (simp add: take_bit_eq_mod mod_simps) lemmas word_arith_wis = word_add_def word_sub_wi word_mult_def word_minus_def word_succ_alt word_pred_alt word_0_wi word_1_wi lemma wi_homs: shows wi_hom_add: "word_of_int a + word_of_int b = word_of_int (a + b)" and wi_hom_sub: "word_of_int a - word_of_int b = word_of_int (a - b)" and wi_hom_mult: "word_of_int a * word_of_int b = word_of_int (a * b)" and wi_hom_neg: "- word_of_int a = word_of_int (- a)" and wi_hom_succ: "word_succ (word_of_int a) = word_of_int (a + 1)" and wi_hom_pred: "word_pred (word_of_int a) = word_of_int (a - 1)" by (transfer, simp)+ lemmas wi_hom_syms = wi_homs [symmetric] lemmas word_of_int_homs = wi_homs word_0_wi word_1_wi lemmas word_of_int_hom_syms = word_of_int_homs [symmetric] lemma double_eq_zero_iff: \<open>2 * a = 0 \<longleftrightarrow> a = 0 \<or> a = 2 ^ (LENGTH('a) - Suc 0)\<close> for a :: \<open>'a::len word\<close> proof - define n where \<open>n = LENGTH('a) - Suc 0\<close> then have *: \<open>LENGTH('a) = Suc n\<close> by simp have \<open>a = 0\<close> if \<open>2 * a = 0\<close> and \<open>a \<noteq> 2 ^ (LENGTH('a) - Suc 0)\<close> using that by transfer (auto simp add: take_bit_eq_0_iff take_bit_eq_mod *) moreover have \<open>2 ^ LENGTH('a) = (0 :: 'a word)\<close> by transfer simp then have \<open>2 * 2 ^ (LENGTH('a) - Suc 0) = (0 :: 'a word)\<close> by (simp add: *) ultimately show ?thesis by auto qed subsection \<open>Ordering\<close> lift_definition word_sle :: \<open>'a::len word \<Rightarrow> 'a word \<Rightarrow> bool\<close> is \<open>\<lambda>k l. signed_take_bit (LENGTH('a) - Suc 0) k \<le> signed_take_bit (LENGTH('a) - Suc 0) l\<close> by (simp flip: signed_take_bit_decr_length_iff) lift_definition word_sless :: \<open>'a::len word \<Rightarrow> 'a word \<Rightarrow> bool\<close> is \<open>\<lambda>k l. 
signed_take_bit (LENGTH('a) - Suc 0) k < signed_take_bit (LENGTH('a) - Suc 0) l\<close> by (simp flip: signed_take_bit_decr_length_iff) notation word_sle ("'(\<le>s')") and word_sle ("(_/ \<le>s _)" [51, 51] 50) and word_sless ("'(<s')") and word_sless ("(_/ <s _)" [51, 51] 50) notation (input) word_sle ("(_/ <=s _)" [51, 51] 50) lemma word_sle_eq [code]: \<open>a <=s b \<longleftrightarrow> sint a \<le> sint b\<close> by transfer simp lemma [code]: \<open>a <s b \<longleftrightarrow> sint a < sint b\<close> by transfer simp lemma signed_ordering: \<open>ordering word_sle word_sless\<close> apply (standard; transfer) using signed_take_bit_decr_length_iff by force+ lemma signed_linorder: \<open>class.linorder word_sle word_sless\<close> by (standard; transfer) (auto simp add: signed_take_bit_decr_length_iff) interpretation signed: linorder word_sle word_sless by (fact signed_linorder) lemma word_sless_eq: \<open>x <s y \<longleftrightarrow> x <=s y \<and> x \<noteq> y\<close> by (fact signed.less_le) lemma word_less_alt: "a < b \<longleftrightarrow> uint a < uint b" by (fact word_less_def) lemma word_zero_le [simp]: "0 \<le> y" for y :: "'a::len word" by (fact word_coorder.extremum) lemma word_m1_ge [simp] : "word_pred 0 \<ge> y" (* FIXME: delete *) by transfer (simp add: mask_eq_exp_minus_1) lemma word_n1_ge [simp]: "y \<le> -1" for y :: "'a::len word" by (fact word_order.extremum) lemmas word_not_simps [simp] = word_zero_le [THEN leD] word_m1_ge [THEN leD] word_n1_ge [THEN leD] lemma word_gt_0: "0 < y \<longleftrightarrow> 0 \<noteq> y" for y :: "'a::len word" by (simp add: less_le) lemmas word_gt_0_no [simp] = word_gt_0 [of "numeral y"] for y lemma word_sless_alt: "a <s b \<longleftrightarrow> sint a < sint b" by transfer simp lemma word_le_nat_alt: "a \<le> b \<longleftrightarrow> unat a \<le> unat b" by transfer (simp add: nat_le_eq_zle) lemma word_less_nat_alt: "a < b \<longleftrightarrow> unat a < unat b" by transfer (auto simp add: less_le [of 0]) lemmas unat_mono = word_less_nat_alt [THEN iffD1] instance word :: (len) wellorder proof fix P :: "'a word \<Rightarrow> bool" and a assume *: "(\<And>b. (\<And>a. a < b \<Longrightarrow> P a) \<Longrightarrow> P b)" have "wf (measure unat)" .. moreover have "{(a, b :: ('a::len) word). a < b} \<subseteq> measure unat" by (auto simp add: word_less_nat_alt) ultimately have "wf {(a, b :: ('a::len) word). 
a < b}" by (rule wf_subset) then show "P a" using * by induction blast qed lemma wi_less: "(word_of_int n < (word_of_int m :: 'a::len word)) = (n mod 2 ^ LENGTH('a) < m mod 2 ^ LENGTH('a))" by transfer (simp add: take_bit_eq_mod) lemma wi_le: "(word_of_int n \<le> (word_of_int m :: 'a::len word)) = (n mod 2 ^ LENGTH('a) \<le> m mod 2 ^ LENGTH('a))" by transfer (simp add: take_bit_eq_mod) subsection \<open>Bit-wise operations\<close> context includes bit_operations_syntax begin lemma uint_take_bit_eq: \<open>uint (take_bit n w) = take_bit n (uint w)\<close> by transfer (simp add: ac_simps) lemma take_bit_word_eq_self: \<open>take_bit n w = w\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> using that by transfer simp lemma take_bit_length_eq [simp]: \<open>take_bit LENGTH('a) w = w\<close> for w :: \<open>'a::len word\<close> by (rule take_bit_word_eq_self) simp lemma bit_word_of_int_iff: \<open>bit (word_of_int k :: 'a::len word) n \<longleftrightarrow> n < LENGTH('a) \<and> bit k n\<close> by transfer rule lemma bit_uint_iff: \<open>bit (uint w) n \<longleftrightarrow> n < LENGTH('a) \<and> bit w n\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: bit_take_bit_iff) lemma bit_sint_iff: \<open>bit (sint w) n \<longleftrightarrow> n \<ge> LENGTH('a) \<and> bit w (LENGTH('a) - 1) \<or> bit w n\<close> for w :: \<open>'a::len word\<close> by transfer (auto simp add: bit_signed_take_bit_iff min_def le_less not_less) lemma bit_word_ucast_iff: \<open>bit (ucast w :: 'b::len word) n \<longleftrightarrow> n < LENGTH('a) \<and> n < LENGTH('b) \<and> bit w n\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: bit_take_bit_iff ac_simps) lemma bit_word_scast_iff: \<open>bit (scast w :: 'b::len word) n \<longleftrightarrow> n < LENGTH('b) \<and> (bit w n \<or> LENGTH('a) \<le> n \<and> bit w (LENGTH('a) - Suc 0))\<close> for w :: \<open>'a::len word\<close> by transfer (auto simp add: bit_signed_take_bit_iff le_less min_def) lemma bit_word_iff_drop_bit_and [code]: \<open>bit a n \<longleftrightarrow> drop_bit n a AND 1 = 1\<close> for a :: \<open>'a::len word\<close> by (simp add: bit_iff_odd_drop_bit odd_iff_mod_2_eq_one and_one_eq) lemma word_not_def: "NOT (a::'a::len word) = word_of_int (NOT (uint a))" and word_and_def: "(a::'a word) AND b = word_of_int (uint a AND uint b)" and word_or_def: "(a::'a word) OR b = word_of_int (uint a OR uint b)" and word_xor_def: "(a::'a word) XOR b = word_of_int (uint a XOR uint b)" by (transfer, simp add: take_bit_not_take_bit)+ definition even_word :: \<open>'a::len word \<Rightarrow> bool\<close> where [code_abbrev]: \<open>even_word = even\<close> lemma even_word_iff [code]: \<open>even_word a \<longleftrightarrow> a AND 1 = 0\<close> by (simp add: and_one_eq even_iff_mod_2_eq_zero even_word_def) lemma map_bit_range_eq_if_take_bit_eq: \<open>map (bit k) [0..<n] = map (bit l) [0..<n]\<close> if \<open>take_bit n k = take_bit n l\<close> for k l :: int using that proof (induction n arbitrary: k l) case 0 then show ?case by simp next case (Suc n) from Suc.prems have \<open>take_bit n (k div 2) = take_bit n (l div 2)\<close> by (simp add: take_bit_Suc) then have \<open>map (bit (k div 2)) [0..<n] = map (bit (l div 2)) [0..<n]\<close> by (rule Suc.IH) moreover have \<open>bit (r div 2) = bit r \<circ> Suc\<close> for r :: int by (simp add: fun_eq_iff bit_Suc) moreover from Suc.prems have \<open>even k \<longleftrightarrow> even l\<close> by (auto simp add: take_bit_Suc elim!: evenE oddE) arith+ 
ultimately show ?case by (simp only: map_Suc_upt upt_conv_Cons flip: list.map_comp) (simp add: bit_0) qed lemma take_bit_word_Bit0_eq [simp]: \<open>take_bit (numeral n) (numeral (num.Bit0 m) :: 'a::len word) = 2 * take_bit (pred_numeral n) (numeral m)\<close> (is ?P) and take_bit_word_Bit1_eq [simp]: \<open>take_bit (numeral n) (numeral (num.Bit1 m) :: 'a::len word) = 1 + 2 * take_bit (pred_numeral n) (numeral m)\<close> (is ?Q) and take_bit_word_minus_Bit0_eq [simp]: \<open>take_bit (numeral n) (- numeral (num.Bit0 m) :: 'a::len word) = 2 * take_bit (pred_numeral n) (- numeral m)\<close> (is ?R) and take_bit_word_minus_Bit1_eq [simp]: \<open>take_bit (numeral n) (- numeral (num.Bit1 m) :: 'a::len word) = 1 + 2 * take_bit (pred_numeral n) (- numeral (Num.inc m))\<close> (is ?S) proof - define w :: \<open>'a::len word\<close> where \<open>w = numeral m\<close> moreover define q :: nat where \<open>q = pred_numeral n\<close> ultimately have num: \<open>numeral m = w\<close> \<open>numeral (num.Bit0 m) = 2 * w\<close> \<open>numeral (num.Bit1 m) = 1 + 2 * w\<close> \<open>numeral (Num.inc m) = 1 + w\<close> \<open>pred_numeral n = q\<close> \<open>numeral n = Suc q\<close> by (simp_all only: w_def q_def numeral_Bit0 [of m] numeral_Bit1 [of m] ac_simps numeral_inc numeral_eq_Suc flip: mult_2) have even: \<open>take_bit (Suc q) (2 * w) = 2 * take_bit q w\<close> for w :: \<open>'a::len word\<close> by (rule bit_word_eqI) (auto simp add: bit_take_bit_iff bit_double_iff) have odd: \<open>take_bit (Suc q) (1 + 2 * w) = 1 + 2 * take_bit q w\<close> for w :: \<open>'a::len word\<close> by (rule bit_eqI) (auto simp add: bit_take_bit_iff bit_double_iff even_bit_succ_iff) show ?P using even [of w] by (simp add: num) show ?Q using odd [of w] by (simp add: num) show ?R using even [of \<open>- w\<close>] by (simp add: num) show ?S using odd [of \<open>- (1 + w)\<close>] by (simp add: num) qed subsection \<open>More shift operations\<close> lift_definition signed_drop_bit :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a::len word\<close> is \<open>\<lambda>n. 
drop_bit n \<circ> signed_take_bit (LENGTH('a) - Suc 0)\<close> using signed_take_bit_decr_length_iff by (simp add: take_bit_drop_bit) force lemma bit_signed_drop_bit_iff [bit_simps]: \<open>bit (signed_drop_bit m w) n \<longleftrightarrow> bit w (if LENGTH('a) - m \<le> n \<and> n < LENGTH('a) then LENGTH('a) - 1 else m + n)\<close> for w :: \<open>'a::len word\<close> apply transfer apply (auto simp add: bit_drop_bit_eq bit_signed_take_bit_iff not_le min_def) apply (metis add.commute le_antisym less_diff_conv less_eq_decr_length_iff) apply (metis le_antisym less_eq_decr_length_iff) done lemma [code]: \<open>Word.the_int (signed_drop_bit n w) = take_bit LENGTH('a) (drop_bit n (Word.the_signed_int w))\<close> for w :: \<open>'a::len word\<close> by transfer simp lemma signed_drop_bit_of_0 [simp]: \<open>signed_drop_bit n 0 = 0\<close> by transfer simp lemma signed_drop_bit_of_minus_1 [simp]: \<open>signed_drop_bit n (- 1) = - 1\<close> by transfer simp lemma signed_drop_bit_signed_drop_bit [simp]: \<open>signed_drop_bit m (signed_drop_bit n w) = signed_drop_bit (m + n) w\<close> for w :: \<open>'a::len word\<close> proof (cases \<open>LENGTH('a)\<close>) case 0 then show ?thesis using len_not_eq_0 by blast next case (Suc n) then show ?thesis by (force simp add: bit_signed_drop_bit_iff not_le less_diff_conv ac_simps intro!: bit_word_eqI) qed lemma signed_drop_bit_0 [simp]: \<open>signed_drop_bit 0 w = w\<close> by transfer (simp add: take_bit_signed_take_bit) lemma sint_signed_drop_bit_eq: \<open>sint (signed_drop_bit n w) = drop_bit n (sint w)\<close> proof (cases \<open>LENGTH('a) = 0 \<or> n=0\<close>) case False then show ?thesis apply simp apply (rule bit_eqI) by (auto simp add: bit_sint_iff bit_drop_bit_eq bit_signed_drop_bit_iff dest: bit_imp_le_length) qed auto subsection \<open>Single-bit operations\<close> lemma set_bit_eq_idem_iff: \<open>Bit_Operations.set_bit n w = w \<longleftrightarrow> bit w n \<or> n \<ge> LENGTH('a)\<close> for w :: \<open>'a::len word\<close> by (simp add: bit_eq_iff) (auto simp add: bit_simps not_le) lemma unset_bit_eq_idem_iff: \<open>unset_bit n w = w \<longleftrightarrow> bit w n \<longrightarrow> n \<ge> LENGTH('a)\<close> for w :: \<open>'a::len word\<close> by (simp add: bit_eq_iff) (auto simp add: bit_simps dest: bit_imp_le_length) lemma flip_bit_eq_idem_iff: \<open>flip_bit n w = w \<longleftrightarrow> n \<ge> LENGTH('a)\<close> for w :: \<open>'a::len word\<close> using linorder_le_less_linear by (simp add: bit_eq_iff) (auto simp add: bit_simps) subsection \<open>Rotation\<close> lift_definition word_rotr :: \<open>nat \<Rightarrow> 'a::len word \<Rightarrow> 'a::len word\<close> is \<open>\<lambda>n k. concat_bit (LENGTH('a) - n mod LENGTH('a)) (drop_bit (n mod LENGTH('a)) (take_bit LENGTH('a) k)) (take_bit (n mod LENGTH('a)) k)\<close> subgoal for n k l by (simp add: concat_bit_def nat_le_iff less_imp_le take_bit_tightened [of \<open>LENGTH('a)\<close> k l \<open>n mod LENGTH('a::len)\<close>]) done lift_definition word_rotl :: \<open>nat \<Rightarrow> 'a::len word \<Rightarrow> 'a::len word\<close> is \<open>\<lambda>n k. 
concat_bit (n mod LENGTH('a)) (drop_bit (LENGTH('a) - n mod LENGTH('a)) (take_bit LENGTH('a) k)) (take_bit (LENGTH('a) - n mod LENGTH('a)) k)\<close> subgoal for n k l by (simp add: concat_bit_def nat_le_iff less_imp_le take_bit_tightened [of \<open>LENGTH('a)\<close> k l \<open>LENGTH('a) - n mod LENGTH('a::len)\<close>]) done lift_definition word_roti :: \<open>int \<Rightarrow> 'a::len word \<Rightarrow> 'a::len word\<close> is \<open>\<lambda>r k. concat_bit (LENGTH('a) - nat (r mod int LENGTH('a))) (drop_bit (nat (r mod int LENGTH('a))) (take_bit LENGTH('a) k)) (take_bit (nat (r mod int LENGTH('a))) k)\<close> subgoal for r k l by (simp add: concat_bit_def nat_le_iff less_imp_le take_bit_tightened [of \<open>LENGTH('a)\<close> k l \<open>nat (r mod int LENGTH('a::len))\<close>]) done lemma word_rotl_eq_word_rotr [code]: \<open>word_rotl n = (word_rotr (LENGTH('a) - n mod LENGTH('a)) :: 'a::len word \<Rightarrow> 'a word)\<close> by (rule ext, cases \<open>n mod LENGTH('a) = 0\<close>; transfer) simp_all lemma word_roti_eq_word_rotr_word_rotl [code]: \<open>word_roti i w = (if i \<ge> 0 then word_rotr (nat i) w else word_rotl (nat (- i)) w)\<close> proof (cases \<open>i \<ge> 0\<close>) case True moreover define n where \<open>n = nat i\<close> ultimately have \<open>i = int n\<close> by simp moreover have \<open>word_roti (int n) = (word_rotr n :: _ \<Rightarrow> 'a word)\<close> by (rule ext, transfer) (simp add: nat_mod_distrib) ultimately show ?thesis by simp next case False moreover define n where \<open>n = nat (- i)\<close> ultimately have \<open>i = - int n\<close> \<open>n > 0\<close> by simp_all moreover have \<open>word_roti (- int n) = (word_rotl n :: _ \<Rightarrow> 'a word)\<close> by (rule ext, transfer) (simp add: zmod_zminus1_eq_if flip: of_nat_mod of_nat_diff) ultimately show ?thesis by simp qed lemma bit_word_rotr_iff [bit_simps]: \<open>bit (word_rotr m w) n \<longleftrightarrow> n < LENGTH('a) \<and> bit w ((n + m) mod LENGTH('a))\<close> for w :: \<open>'a::len word\<close> proof transfer fix k :: int and m n :: nat define q where \<open>q = m mod LENGTH('a)\<close> have \<open>q < LENGTH('a)\<close> by (simp add: q_def) then have \<open>q \<le> LENGTH('a)\<close> by simp have \<open>m mod LENGTH('a) = q\<close> by (simp add: q_def) moreover have \<open>(n + m) mod LENGTH('a) = (n + q) mod LENGTH('a)\<close> by (subst mod_add_right_eq [symmetric]) (simp add: \<open>m mod LENGTH('a) = q\<close>) moreover have \<open>n < LENGTH('a) \<and> bit (concat_bit (LENGTH('a) - q) (drop_bit q (take_bit LENGTH('a) k)) (take_bit q k)) n \<longleftrightarrow> n < LENGTH('a) \<and> bit k ((n + q) mod LENGTH('a))\<close> using \<open>q < LENGTH('a)\<close> by (cases \<open>q + n \<ge> LENGTH('a)\<close>) (auto simp add: bit_concat_bit_iff bit_drop_bit_eq bit_take_bit_iff le_mod_geq ac_simps) ultimately show \<open>n < LENGTH('a) \<and> bit (concat_bit (LENGTH('a) - m mod LENGTH('a)) (drop_bit (m mod LENGTH('a)) (take_bit LENGTH('a) k)) (take_bit (m mod LENGTH('a)) k)) n \<longleftrightarrow> n < LENGTH('a) \<and> (n + m) mod LENGTH('a) < LENGTH('a) \<and> bit k ((n + m) mod LENGTH('a))\<close> by simp qed lemma bit_word_rotl_iff [bit_simps]: \<open>bit (word_rotl m w) n \<longleftrightarrow> n < LENGTH('a) \<and> bit w ((n + (LENGTH('a) - m mod LENGTH('a))) mod LENGTH('a))\<close> for w :: \<open>'a::len word\<close> by (simp add: word_rotl_eq_word_rotr bit_word_rotr_iff) lemma bit_word_roti_iff [bit_simps]: \<open>bit (word_roti k w) n \<longleftrightarrow> n < LENGTH('a) \<and> 
bit w (nat ((int n + k) mod int LENGTH('a)))\<close> for w :: \<open>'a::len word\<close> proof transfer fix k l :: int and n :: nat define m where \<open>m = nat (k mod int LENGTH('a))\<close> have \<open>m < LENGTH('a)\<close> by (simp add: nat_less_iff m_def) then have \<open>m \<le> LENGTH('a)\<close> by simp have \<open>k mod int LENGTH('a) = int m\<close> by (simp add: nat_less_iff m_def) moreover have \<open>(int n + k) mod int LENGTH('a) = int ((n + m) mod LENGTH('a))\<close> by (subst mod_add_right_eq [symmetric]) (simp add: of_nat_mod \<open>k mod int LENGTH('a) = int m\<close>) moreover have \<open>n < LENGTH('a) \<and> bit (concat_bit (LENGTH('a) - m) (drop_bit m (take_bit LENGTH('a) l)) (take_bit m l)) n \<longleftrightarrow> n < LENGTH('a) \<and> bit l ((n + m) mod LENGTH('a))\<close> using \<open>m < LENGTH('a)\<close> by (cases \<open>m + n \<ge> LENGTH('a)\<close>) (auto simp add: bit_concat_bit_iff bit_drop_bit_eq bit_take_bit_iff nat_less_iff not_le not_less ac_simps le_diff_conv le_mod_geq) ultimately show \<open>n < LENGTH('a) \<and> bit (concat_bit (LENGTH('a) - nat (k mod int LENGTH('a))) (drop_bit (nat (k mod int LENGTH('a))) (take_bit LENGTH('a) l)) (take_bit (nat (k mod int LENGTH('a))) l)) n \<longleftrightarrow> n < LENGTH('a) \<and> nat ((int n + k) mod int LENGTH('a)) < LENGTH('a) \<and> bit l (nat ((int n + k) mod int LENGTH('a)))\<close> by simp qed lemma uint_word_rotr_eq: \<open>uint (word_rotr n w) = concat_bit (LENGTH('a) - n mod LENGTH('a)) (drop_bit (n mod LENGTH('a)) (uint w)) (uint (take_bit (n mod LENGTH('a)) w))\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_concat_bit_eq) lemma [code]: \<open>Word.the_int (word_rotr n w) = concat_bit (LENGTH('a) - n mod LENGTH('a)) (drop_bit (n mod LENGTH('a)) (Word.the_int w)) (Word.the_int (take_bit (n mod LENGTH('a)) w))\<close> for w :: \<open>'a::len word\<close> using uint_word_rotr_eq [of n w] by simp subsection \<open>Split and cat operations\<close> lift_definition word_cat :: \<open>'a::len word \<Rightarrow> 'b::len word \<Rightarrow> 'c::len word\<close> is \<open>\<lambda>k l. 
concat_bit LENGTH('b) l (take_bit LENGTH('a) k)\<close> by (simp add: bit_eq_iff bit_concat_bit_iff bit_take_bit_iff) lemma word_cat_eq: \<open>(word_cat v w :: 'c::len word) = push_bit LENGTH('b) (ucast v) + ucast w\<close> for v :: \<open>'a::len word\<close> and w :: \<open>'b::len word\<close> by transfer (simp add: concat_bit_eq ac_simps) lemma word_cat_eq' [code]: \<open>word_cat a b = word_of_int (concat_bit LENGTH('b) (uint b) (uint a))\<close> for a :: \<open>'a::len word\<close> and b :: \<open>'b::len word\<close> by transfer (simp add: concat_bit_take_bit_eq) lemma bit_word_cat_iff [bit_simps]: \<open>bit (word_cat v w :: 'c::len word) n \<longleftrightarrow> n < LENGTH('c) \<and> (if n < LENGTH('b) then bit w n else bit v (n - LENGTH('b)))\<close> for v :: \<open>'a::len word\<close> and w :: \<open>'b::len word\<close> by transfer (simp add: bit_concat_bit_iff bit_take_bit_iff) definition word_split :: \<open>'a::len word \<Rightarrow> 'b::len word \<times> 'c::len word\<close> where \<open>word_split w = (ucast (drop_bit LENGTH('c) w) :: 'b::len word, ucast w :: 'c::len word)\<close> definition word_rcat :: \<open>'a::len word list \<Rightarrow> 'b::len word\<close> where \<open>word_rcat = word_of_int \<circ> horner_sum uint (2 ^ LENGTH('a)) \<circ> rev\<close> subsection \<open>More on conversions\<close> lemma int_word_sint: \<open>sint (word_of_int x :: 'a::len word) = (x + 2 ^ (LENGTH('a) - 1)) mod 2 ^ LENGTH('a) - 2 ^ (LENGTH('a) - 1)\<close> by transfer (simp flip: take_bit_eq_mod add: signed_take_bit_eq_take_bit_shift) lemma sint_sbintrunc': "sint (word_of_int bin :: 'a word) = signed_take_bit (LENGTH('a::len) - 1) bin" by (simp add: signed_of_int) lemma uint_sint: "uint w = take_bit LENGTH('a) (sint w)" for w :: "'a::len word" by transfer (simp add: take_bit_signed_take_bit) lemma bintr_uint: "LENGTH('a) \<le> n \<Longrightarrow> take_bit n (uint w) = uint w" for w :: "'a::len word" by transfer (simp add: min_def) lemma wi_bintr: "LENGTH('a::len) \<le> n \<Longrightarrow> word_of_int (take_bit n w) = (word_of_int w :: 'a word)" by transfer simp lemma word_numeral_alt: "numeral b = word_of_int (numeral b)" by (induct b, simp_all only: numeral.simps word_of_int_homs) declare word_numeral_alt [symmetric, code_abbrev] lemma word_neg_numeral_alt: "- numeral b = word_of_int (- numeral b)" by (simp only: word_numeral_alt wi_hom_neg) declare word_neg_numeral_alt [symmetric, code_abbrev] lemma uint_bintrunc [simp]: "uint (numeral bin :: 'a word) = take_bit (LENGTH('a::len)) (numeral bin)" by transfer rule lemma uint_bintrunc_neg [simp]: "uint (- numeral bin :: 'a word) = take_bit (LENGTH('a::len)) (- numeral bin)" by transfer rule lemma sint_sbintrunc [simp]: "sint (numeral bin :: 'a word) = signed_take_bit (LENGTH('a::len) - 1) (numeral bin)" by transfer simp lemma sint_sbintrunc_neg [simp]: "sint (- numeral bin :: 'a word) = signed_take_bit (LENGTH('a::len) - 1) (- numeral bin)" by transfer simp lemma unat_bintrunc [simp]: "unat (numeral bin :: 'a::len word) = nat (take_bit (LENGTH('a)) (numeral bin))" by transfer simp lemma unat_bintrunc_neg [simp]: "unat (- numeral bin :: 'a::len word) = nat (take_bit (LENGTH('a)) (- numeral bin))" by transfer simp lemma size_0_eq: "size w = 0 \<Longrightarrow> v = w" for v w :: "'a::len word" by transfer simp lemma uint_ge_0 [iff]: "0 \<le> uint x" by (fact unsigned_greater_eq) lemma uint_lt2p [iff]: "uint x < 2 ^ LENGTH('a)" for x :: "'a::len word" by (fact unsigned_less) lemma sint_ge: "- (2 ^ (LENGTH('a) - 1)) \<le> sint x" for x :: 
"'a::len word" using sint_greater_eq [of x] by simp lemma sint_lt: "sint x < 2 ^ (LENGTH('a) - 1)" for x :: "'a::len word" using sint_less [of x] by simp lemma uint_m2p_neg: "uint x - 2 ^ LENGTH('a) < 0" for x :: "'a::len word" by (simp only: diff_less_0_iff_less uint_lt2p) lemma uint_m2p_not_non_neg: "\<not> 0 \<le> uint x - 2 ^ LENGTH('a)" for x :: "'a::len word" by (simp only: not_le uint_m2p_neg) lemma lt2p_lem: "LENGTH('a) \<le> n \<Longrightarrow> uint w < 2 ^ n" for w :: "'a::len word" using uint_bounded [of w] by (rule less_le_trans) simp lemma uint_le_0_iff [simp]: "uint x \<le> 0 \<longleftrightarrow> uint x = 0" by (fact uint_ge_0 [THEN leD, THEN antisym_conv1]) lemma uint_nat: "uint w = int (unat w)" by transfer simp lemma uint_numeral: "uint (numeral b :: 'a::len word) = numeral b mod 2 ^ LENGTH('a)" by (simp flip: take_bit_eq_mod add: of_nat_take_bit) lemma uint_neg_numeral: "uint (- numeral b :: 'a::len word) = - numeral b mod 2 ^ LENGTH('a)" by (simp flip: take_bit_eq_mod add: of_nat_take_bit) lemma unat_numeral: "unat (numeral b :: 'a::len word) = numeral b mod 2 ^ LENGTH('a)" by transfer (simp add: take_bit_eq_mod nat_mod_distrib nat_power_eq) lemma sint_numeral: "sint (numeral b :: 'a::len word) = (numeral b + 2 ^ (LENGTH('a) - 1)) mod 2 ^ LENGTH('a) - 2 ^ (LENGTH('a) - 1)" by (metis int_word_sint word_numeral_alt) lemma word_of_int_0 [simp, code_post]: "word_of_int 0 = 0" by (fact of_int_0) lemma word_of_int_1 [simp, code_post]: "word_of_int 1 = 1" by (fact of_int_1) lemma word_of_int_neg_1 [simp]: "word_of_int (- 1) = - 1" by (simp add: wi_hom_syms) lemma word_of_int_numeral [simp] : "(word_of_int (numeral bin) :: 'a::len word) = numeral bin" by (fact of_int_numeral) lemma word_of_int_neg_numeral [simp]: "(word_of_int (- numeral bin) :: 'a::len word) = - numeral bin" by (fact of_int_neg_numeral) lemma word_int_case_wi: "word_int_case f (word_of_int i :: 'b word) = f (i mod 2 ^ LENGTH('b::len))" by transfer (simp add: take_bit_eq_mod) lemma word_int_split: "P (word_int_case f x) = (\<forall>i. x = (word_of_int i :: 'b::len word) \<and> 0 \<le> i \<and> i < 2 ^ LENGTH('b) \<longrightarrow> P (f i))" by transfer (auto simp add: take_bit_eq_mod) lemma word_int_split_asm: "P (word_int_case f x) = (\<nexists>n. 
x = (word_of_int n :: 'b::len word) \<and> 0 \<le> n \<and> n < 2 ^ LENGTH('b::len) \<and> \<not> P (f n))"
  by transfer (auto simp add: take_bit_eq_mod)

lemma uint_range_size: "0 \<le> uint w \<and> uint w < 2 ^ size w"
  by transfer simp

lemma sint_range_size: "- (2 ^ (size w - Suc 0)) \<le> sint w \<and> sint w < 2 ^ (size w - Suc 0)"
  by (simp add: word_size sint_greater_eq sint_less)

lemma sint_above_size: "2 ^ (size w - 1) \<le> x \<Longrightarrow> sint w < x"
  for w :: "'a::len word"
  unfolding word_size by (rule less_le_trans [OF sint_lt])

lemma sint_below_size: "x \<le> - (2 ^ (size w - 1)) \<Longrightarrow> x \<le> sint w"
  for w :: "'a::len word"
  unfolding word_size by (rule order_trans [OF _ sint_ge])

lemma word_unat_eq_iff: \<open>v = w \<longleftrightarrow> unat v = unat w\<close>
  for v w :: \<open>'a::len word\<close>
  by (fact word_eq_iff_unsigned)

subsection \<open>Testing bits\<close>

lemma bin_nth_uint_imp: "bit (uint w) n \<Longrightarrow> n < LENGTH('a)"
  for w :: "'a::len word"
  by transfer (simp add: bit_take_bit_iff)

lemma bin_nth_sint: "LENGTH('a) \<le> n \<Longrightarrow> bit (sint w) n = bit (sint w) (LENGTH('a) - 1)"
  for w :: "'a::len word"
  by (transfer fixing: n) (simp add: bit_signed_take_bit_iff le_diff_conv min_def)

lemma num_of_bintr': "take_bit (LENGTH('a::len)) (numeral a :: int) = (numeral b) \<Longrightarrow> numeral a = (numeral b :: 'a word)"
proof (transfer fixing: a b)
  assume \<open>take_bit LENGTH('a) (numeral a :: int) = numeral b\<close>
  then have \<open>take_bit LENGTH('a) (take_bit LENGTH('a) (numeral a :: int)) = take_bit LENGTH('a) (numeral b)\<close>
    by simp
  then show \<open>take_bit LENGTH('a) (numeral a :: int) = take_bit LENGTH('a) (numeral b)\<close>
    by simp
qed

lemma num_of_sbintr': "signed_take_bit (LENGTH('a::len) - 1) (numeral a :: int) = (numeral b) \<Longrightarrow> numeral a = (numeral b :: 'a word)"
proof (transfer fixing: a b)
  assume \<open>signed_take_bit (LENGTH('a) - 1) (numeral a :: int) = numeral b\<close>
  then have \<open>take_bit LENGTH('a) (signed_take_bit (LENGTH('a) - 1) (numeral a :: int)) = take_bit LENGTH('a) (numeral b)\<close>
    by simp
  then show \<open>take_bit LENGTH('a) (numeral a :: int) = take_bit LENGTH('a) (numeral b)\<close>
    by (simp add: take_bit_signed_take_bit)
qed

lemma num_abs_bintr: "(numeral x :: 'a word) = word_of_int (take_bit (LENGTH('a::len)) (numeral x))"
  by transfer simp

lemma num_abs_sbintr: "(numeral x :: 'a word) = word_of_int (signed_take_bit (LENGTH('a::len) - 1) (numeral x))"
  by transfer (simp add: take_bit_signed_take_bit)

text \<open>
  \<open>cast\<close> -- note that there is no argument for the new length, since it is determined by the
  type of the result; thus in \<open>cast w = w\<close> the type means cast to the length of \<open>w\<close>!
\<close> lemma bit_ucast_iff: \<open>bit (ucast a :: 'a::len word) n \<longleftrightarrow> n < LENGTH('a::len) \<and> bit a n\<close> by transfer (simp add: bit_take_bit_iff) lemma ucast_id [simp]: "ucast w = w" by transfer simp lemma scast_id [simp]: "scast w = w" by transfer (simp add: take_bit_signed_take_bit) lemma ucast_mask_eq: \<open>ucast (mask n :: 'b word) = mask (min LENGTH('b::len) n)\<close> by (simp add: bit_eq_iff) (auto simp add: bit_mask_iff bit_ucast_iff) \<comment> \<open>literal u(s)cast\<close> lemma ucast_bintr [simp]: "ucast (numeral w :: 'a::len word) = word_of_int (take_bit (LENGTH('a)) (numeral w))" by transfer simp (* TODO: neg_numeral *) lemma scast_sbintr [simp]: "scast (numeral w ::'a::len word) = word_of_int (signed_take_bit (LENGTH('a) - Suc 0) (numeral w))" by transfer simp lemma source_size: "source_size (c::'a::len word \<Rightarrow> _) = LENGTH('a)" by transfer simp lemma target_size: "target_size (c::_ \<Rightarrow> 'b::len word) = LENGTH('b)" by transfer simp lemma is_down: "is_down c \<longleftrightarrow> LENGTH('b) \<le> LENGTH('a)" for c :: "'a::len word \<Rightarrow> 'b::len word" by transfer simp lemma is_up: "is_up c \<longleftrightarrow> LENGTH('a) \<le> LENGTH('b)" for c :: "'a::len word \<Rightarrow> 'b::len word" by transfer simp lemma is_up_down: \<open>is_up c \<longleftrightarrow> is_down d\<close> for c :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> and d :: \<open>'b::len word \<Rightarrow> 'a::len word\<close> by transfer simp context fixes dummy_types :: \<open>'a::len \<times> 'b::len\<close> begin private abbreviation (input) UCAST :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>UCAST == ucast\<close> private abbreviation (input) SCAST :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>SCAST == scast\<close> lemma down_cast_same: \<open>UCAST = scast\<close> if \<open>is_down UCAST\<close> by (rule ext, use that in transfer) (simp add: take_bit_signed_take_bit) lemma sint_up_scast: \<open>sint (SCAST w) = sint w\<close> if \<open>is_up SCAST\<close> using that by transfer (simp add: min_def Suc_leI le_diff_iff) lemma uint_up_ucast: \<open>uint (UCAST w) = uint w\<close> if \<open>is_up UCAST\<close> using that by transfer (simp add: min_def) lemma ucast_up_ucast: \<open>ucast (UCAST w) = ucast w\<close> if \<open>is_up UCAST\<close> using that by transfer (simp add: ac_simps) lemma ucast_up_ucast_id: \<open>ucast (UCAST w) = w\<close> if \<open>is_up UCAST\<close> using that by (simp add: ucast_up_ucast) lemma scast_up_scast: \<open>scast (SCAST w) = scast w\<close> if \<open>is_up SCAST\<close> using that by transfer (simp add: ac_simps) lemma scast_up_scast_id: \<open>scast (SCAST w) = w\<close> if \<open>is_up SCAST\<close> using that by (simp add: scast_up_scast) lemma isduu: \<open>is_up UCAST\<close> if \<open>is_down d\<close> for d :: \<open>'b word \<Rightarrow> 'a word\<close> using that is_up_down [of UCAST d] by simp lemma isdus: \<open>is_up SCAST\<close> if \<open>is_down d\<close> for d :: \<open>'b word \<Rightarrow> 'a word\<close> using that is_up_down [of SCAST d] by simp lemmas ucast_down_ucast_id = isduu [THEN ucast_up_ucast_id] lemmas scast_down_scast_id = isdus [THEN scast_up_scast_id] lemma up_ucast_surj: \<open>surj (ucast :: 'b word \<Rightarrow> 'a word)\<close> if \<open>is_up UCAST\<close> by (rule surjI) (use that in \<open>rule ucast_up_ucast_id\<close>) lemma up_scast_surj: \<open>surj (scast :: 'b word \<Rightarrow> 'a word)\<close> if \<open>is_up 
SCAST\<close> by (rule surjI) (use that in \<open>rule scast_up_scast_id\<close>) lemma down_ucast_inj: \<open>inj_on UCAST A\<close> if \<open>is_down (ucast :: 'b word \<Rightarrow> 'a word)\<close> by (rule inj_on_inverseI) (use that in \<open>rule ucast_down_ucast_id\<close>) lemma down_scast_inj: \<open>inj_on SCAST A\<close> if \<open>is_down (scast :: 'b word \<Rightarrow> 'a word)\<close> by (rule inj_on_inverseI) (use that in \<open>rule scast_down_scast_id\<close>) lemma ucast_down_wi: \<open>UCAST (word_of_int x) = word_of_int x\<close> if \<open>is_down UCAST\<close> using that by transfer simp lemma ucast_down_no: \<open>UCAST (numeral bin) = numeral bin\<close> if \<open>is_down UCAST\<close> using that by transfer simp end lemmas word_log_defs = word_and_def word_or_def word_xor_def word_not_def lemma bit_last_iff: \<open>bit w (LENGTH('a) - Suc 0) \<longleftrightarrow> sint w < 0\<close> (is \<open>?P \<longleftrightarrow> ?Q\<close>) for w :: \<open>'a::len word\<close> proof - have \<open>?P \<longleftrightarrow> bit (uint w) (LENGTH('a) - Suc 0)\<close> by (simp add: bit_uint_iff) also have \<open>\<dots> \<longleftrightarrow> ?Q\<close> by (simp add: sint_uint) finally show ?thesis . qed lemma drop_bit_eq_zero_iff_not_bit_last: \<open>drop_bit (LENGTH('a) - Suc 0) w = 0 \<longleftrightarrow> \<not> bit w (LENGTH('a) - Suc 0)\<close> for w :: "'a::len word" proof (cases \<open>LENGTH('a)\<close>) case (Suc n) then show ?thesis apply transfer apply (simp add: take_bit_drop_bit) by (simp add: bit_iff_odd_drop_bit drop_bit_take_bit odd_iff_mod_2_eq_one) qed auto lemma unat_div: \<open>unat (x div y) = unat x div unat y\<close> by (fact unat_div_distrib) lemma unat_mod: \<open>unat (x mod y) = unat x mod unat y\<close> by (fact unat_mod_distrib) subsection \<open>Word Arithmetic\<close> lemmas less_eq_word_numeral_numeral [simp] = word_le_def [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_numeral_numeral [simp] = word_less_def [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_eq_word_minus_numeral_numeral [simp] = word_le_def [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_minus_numeral_numeral [simp] = word_less_def [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_eq_word_numeral_minus_numeral [simp] = word_le_def [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_numeral_minus_numeral [simp] = word_less_def [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_eq_word_minus_numeral_minus_numeral [simp] = word_le_def [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_minus_numeral_minus_numeral [simp] = word_less_def [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask 
mask_eq_exp_minus_1] for a b lemmas less_word_numeral_minus_1 [simp] = word_less_def [of \<open>numeral a\<close> \<open>- 1\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_minus_numeral_minus_1 [simp] = word_less_def [of \<open>- numeral a\<close> \<open>- 1\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas sless_eq_word_numeral_numeral [simp] = word_sle_eq [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_word_numeral_numeral [simp] = word_sless_alt [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_eq_word_minus_numeral_numeral [simp] = word_sle_eq [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_word_minus_numeral_numeral [simp] = word_sless_alt [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_eq_word_numeral_minus_numeral [simp] = word_sle_eq [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_word_numeral_minus_numeral [simp] = word_sless_alt [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_eq_word_minus_numeral_minus_numeral [simp] = word_sle_eq [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_word_minus_numeral_minus_numeral [simp] = word_sless_alt [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas div_word_numeral_numeral [simp] = word_div_def [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_minus_numeral_numeral [simp] = word_div_def [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_numeral_minus_numeral [simp] = word_div_def [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_minus_numeral_minus_numeral [simp] = word_div_def [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_minus_1_numeral [simp] = word_div_def [of \<open>- 1\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_minus_1_minus_numeral [simp] = word_div_def [of \<open>- 1\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_numeral_numeral [simp] = word_mod_def [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_minus_numeral_numeral [simp] = word_mod_def [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas 
mod_word_numeral_minus_numeral [simp] = word_mod_def [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_minus_numeral_minus_numeral [simp] = word_mod_def [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_minus_1_numeral [simp] = word_mod_def [of \<open>- 1\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_minus_1_minus_numeral [simp] = word_mod_def [of \<open>- 1\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemma signed_drop_bit_of_1 [simp]: \<open>signed_drop_bit n (1 :: 'a::len word) = of_bool (LENGTH('a) = 1 \<or> n = 0)\<close> apply (transfer fixing: n) apply (cases \<open>LENGTH('a)\<close>) apply (auto simp add: take_bit_signed_take_bit) apply (auto simp add: take_bit_drop_bit gr0_conv_Suc simp flip: take_bit_eq_self_iff_drop_bit_eq_0) done lemma take_bit_word_beyond_length_eq: \<open>take_bit n w = w\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> using that by transfer simp lemmas word_div_no [simp] = word_div_def [of "numeral a" "numeral b"] for a b lemmas word_mod_no [simp] = word_mod_def [of "numeral a" "numeral b"] for a b lemmas word_less_no [simp] = word_less_def [of "numeral a" "numeral b"] for a b lemmas word_le_no [simp] = word_le_def [of "numeral a" "numeral b"] for a b lemmas word_sless_no [simp] = word_sless_eq [of "numeral a" "numeral b"] for a b lemmas word_sle_no [simp] = word_sle_eq [of "numeral a" "numeral b"] for a b lemma size_0_same': "size w = 0 \<Longrightarrow> w = v" for v w :: "'a::len word" by (unfold word_size) simp lemmas size_0_same = size_0_same' [unfolded word_size] lemmas unat_eq_0 = unat_0_iff lemmas unat_eq_zero = unat_0_iff lemma mask_1: "mask 1 = 1" by simp lemma mask_Suc_0: "mask (Suc 0) = 1" by simp lemma bin_last_bintrunc: "odd (take_bit l n) \<longleftrightarrow> l > 0 \<and> odd n" by simp lemma push_bit_word_beyond [simp]: \<open>push_bit n w = 0\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> using that by (transfer fixing: n) (simp add: take_bit_push_bit) lemma drop_bit_word_beyond [simp]: \<open>drop_bit n w = 0\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> using that by (transfer fixing: n) (simp add: drop_bit_take_bit) lemma signed_drop_bit_beyond: \<open>signed_drop_bit n w = (if bit w (LENGTH('a) - Suc 0) then - 1 else 0)\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> by (rule bit_word_eqI) (simp add: bit_signed_drop_bit_iff that) lemma take_bit_numeral_minus_numeral_word [simp]: \<open>take_bit (numeral m) (- numeral n :: 'a::len word) = (case take_bit_num (numeral m) n of None \<Rightarrow> 0 | Some q \<Rightarrow> take_bit (numeral m) (2 ^ numeral m - numeral q))\<close> (is \<open>?lhs = ?rhs\<close>) proof (cases \<open>LENGTH('a) \<le> numeral m\<close>) case True then have *: \<open>(take_bit (numeral m) :: 'a word \<Rightarrow> 'a word) = id\<close> by (simp add: fun_eq_iff take_bit_word_eq_self) have **: \<open>2 ^ numeral m = (0 :: 'a word)\<close> using True by (simp flip: exp_eq_zero_iff) show ?thesis by (auto simp only: * ** split: option.split dest!: 
take_bit_num_eq_None_imp [where ?'a = \<open>'a word\<close>] take_bit_num_eq_Some_imp [where ?'a = \<open>'a word\<close>]) simp_all next case False then show ?thesis by (transfer fixing: m n) simp qed lemma of_nat_inverse: \<open>word_of_nat r = a \<Longrightarrow> r < 2 ^ LENGTH('a) \<Longrightarrow> unat a = r\<close> for a :: \<open>'a::len word\<close> by (metis id_apply of_nat_eq_id take_bit_nat_eq_self_iff unsigned_of_nat) subsection \<open>Transferring goals from words to ints\<close> lemma word_ths: shows word_succ_p1: "word_succ a = a + 1" and word_pred_m1: "word_pred a = a - 1" and word_pred_succ: "word_pred (word_succ a) = a" and word_succ_pred: "word_succ (word_pred a) = a" and word_mult_succ: "word_succ a * b = b + a * b" by (transfer, simp add: algebra_simps)+ lemma uint_cong: "x = y \<Longrightarrow> uint x = uint y" by simp lemma uint_word_ariths: fixes a b :: "'a::len word" shows "uint (a + b) = (uint a + uint b) mod 2 ^ LENGTH('a::len)" and "uint (a - b) = (uint a - uint b) mod 2 ^ LENGTH('a)" and "uint (a * b) = uint a * uint b mod 2 ^ LENGTH('a)" and "uint (- a) = - uint a mod 2 ^ LENGTH('a)" and "uint (word_succ a) = (uint a + 1) mod 2 ^ LENGTH('a)" and "uint (word_pred a) = (uint a - 1) mod 2 ^ LENGTH('a)" and "uint (0 :: 'a word) = 0 mod 2 ^ LENGTH('a)" and "uint (1 :: 'a word) = 1 mod 2 ^ LENGTH('a)" by (simp_all only: word_arith_wis uint_word_of_int_eq flip: take_bit_eq_mod) lemma uint_word_arith_bintrs: fixes a b :: "'a::len word" shows "uint (a + b) = take_bit (LENGTH('a)) (uint a + uint b)" and "uint (a - b) = take_bit (LENGTH('a)) (uint a - uint b)" and "uint (a * b) = take_bit (LENGTH('a)) (uint a * uint b)" and "uint (- a) = take_bit (LENGTH('a)) (- uint a)" and "uint (word_succ a) = take_bit (LENGTH('a)) (uint a + 1)" and "uint (word_pred a) = take_bit (LENGTH('a)) (uint a - 1)" and "uint (0 :: 'a word) = take_bit (LENGTH('a)) 0" and "uint (1 :: 'a word) = take_bit (LENGTH('a)) 1" by (simp_all add: uint_word_ariths take_bit_eq_mod) lemma sint_word_ariths: fixes a b :: "'a::len word" shows "sint (a + b) = signed_take_bit (LENGTH('a) - 1) (sint a + sint b)" and "sint (a - b) = signed_take_bit (LENGTH('a) - 1) (sint a - sint b)" and "sint (a * b) = signed_take_bit (LENGTH('a) - 1) (sint a * sint b)" and "sint (- a) = signed_take_bit (LENGTH('a) - 1) (- sint a)" and "sint (word_succ a) = signed_take_bit (LENGTH('a) - 1) (sint a + 1)" and "sint (word_pred a) = signed_take_bit (LENGTH('a) - 1) (sint a - 1)" and "sint (0 :: 'a word) = signed_take_bit (LENGTH('a) - 1) 0" and "sint (1 :: 'a word) = signed_take_bit (LENGTH('a) - 1) 1" subgoal by transfer (simp add: signed_take_bit_add) subgoal by transfer (simp add: signed_take_bit_diff) subgoal by transfer (simp add: signed_take_bit_mult) subgoal by transfer (simp add: signed_take_bit_minus) apply (metis of_int_sint scast_id sint_sbintrunc' wi_hom_succ) apply (metis of_int_sint scast_id sint_sbintrunc' wi_hom_pred) apply (simp_all add: sint_uint) done lemma word_pred_0_n1: "word_pred 0 = word_of_int (- 1)" unfolding word_pred_m1 by simp lemma succ_pred_no [simp]: "word_succ (numeral w) = numeral w + 1" "word_pred (numeral w) = numeral w - 1" "word_succ (- numeral w) = - numeral w + 1" "word_pred (- numeral w) = - numeral w - 1" by (simp_all add: word_succ_p1 word_pred_m1) lemma word_sp_01 [simp]: "word_succ (- 1) = 0 \<and> word_succ 0 = 1 \<and> word_pred 0 = - 1 \<and> word_pred 1 = 0" by (simp_all add: word_succ_p1 word_pred_m1) \<comment> \<open>alternative approach to lifting arithmetic equalities\<close> 
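text \<open>
  Before turning to that alternative approach, the modular characterisations in
  \<open>uint_word_ariths\<close> and \<open>sint_word_ariths\<close> can be sanity-checked on a concrete word length.
  The following \<open>value\<close> commands are a sketch only: they assume that the code generator
  setup for \<open>word\<close> and for numeral length types such as \<open>8\<close> is available, as provided by
  the \<open>[code]\<close> equations in this theory.
\<close>

value \<open>uint ((200 :: 8 word) + 100)\<close>  \<comment> \<open>yields \<open>44\<close>, i.e. \<open>(200 + 100) mod 2 ^ 8\<close>\<close>

value \<open>uint (word_succ (255 :: 8 word))\<close>  \<comment> \<open>yields \<open>0\<close>: the successor wraps around\<close>

value \<open>sint ((127 :: 8 word) + 1)\<close>  \<comment> \<open>yields \<open>- 128\<close>: signed wrap-around at \<open>2 ^ 7\<close>\<close>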
lemma word_of_int_Ex: "\<exists>y. x = word_of_int y" by (rule_tac x="uint x" in exI) simp subsection \<open>Order on fixed-length words\<close> lift_definition udvd :: \<open>'a::len word \<Rightarrow> 'a::len word \<Rightarrow> bool\<close> (infixl \<open>udvd\<close> 50) is \<open>\<lambda>k l. take_bit LENGTH('a) k dvd take_bit LENGTH('a) l\<close> by simp lemma udvd_iff_dvd: \<open>x udvd y \<longleftrightarrow> unat x dvd unat y\<close> by transfer (simp add: nat_dvd_iff) lemma udvd_iff_dvd_int: \<open>v udvd w \<longleftrightarrow> uint v dvd uint w\<close> by transfer rule lemma udvdI [intro]: \<open>v udvd w\<close> if \<open>unat w = unat v * unat u\<close> proof - from that have \<open>unat v dvd unat w\<close> .. then show ?thesis by (simp add: udvd_iff_dvd) qed lemma udvdE [elim]: fixes v w :: \<open>'a::len word\<close> assumes \<open>v udvd w\<close> obtains u :: \<open>'a word\<close> where \<open>unat w = unat v * unat u\<close> proof (cases \<open>v = 0\<close>) case True moreover from True \<open>v udvd w\<close> have \<open>w = 0\<close> by transfer simp ultimately show thesis using that by simp next case False then have \<open>unat v > 0\<close> by (simp add: unat_gt_0) from \<open>v udvd w\<close> have \<open>unat v dvd unat w\<close> by (simp add: udvd_iff_dvd) then obtain n where \<open>unat w = unat v * n\<close> .. moreover have \<open>n < 2 ^ LENGTH('a)\<close> proof (rule ccontr) assume \<open>\<not> n < 2 ^ LENGTH('a)\<close> then have \<open>n \<ge> 2 ^ LENGTH('a)\<close> by (simp add: not_le) then have \<open>unat v * n \<ge> 2 ^ LENGTH('a)\<close> using \<open>unat v > 0\<close> mult_le_mono [of 1 \<open>unat v\<close> \<open>2 ^ LENGTH('a)\<close> n] by simp with \<open>unat w = unat v * n\<close> have \<open>unat w \<ge> 2 ^ LENGTH('a)\<close> by simp with unsigned_less [of w, where ?'a = nat] show False by linarith qed ultimately have \<open>unat w = unat v * unat (word_of_nat n :: 'a word)\<close> by (auto simp add: take_bit_nat_eq_self_iff unsigned_of_nat intro: sym) with that show thesis . qed lemma udvd_imp_mod_eq_0: \<open>w mod v = 0\<close> if \<open>v udvd w\<close> using that by transfer simp lemma mod_eq_0_imp_udvd [intro?]: \<open>v udvd w\<close> if \<open>w mod v = 0\<close> proof - from that have \<open>unat (w mod v) = unat 0\<close> by simp then have \<open>unat w mod unat v = 0\<close> by (simp add: unat_mod_distrib) then have \<open>unat v dvd unat w\<close> .. then show ?thesis by (simp add: udvd_iff_dvd) qed lemma udvd_imp_dvd: \<open>v dvd w\<close> if \<open>v udvd w\<close> for v w :: \<open>'a::len word\<close> proof - from that obtain u :: \<open>'a word\<close> where \<open>unat w = unat v * unat u\<close> .. then have \<open>(word_of_nat (unat w) :: 'a word) = word_of_nat (unat v * unat u)\<close> by simp then have \<open>w = v * u\<close> by simp then show \<open>v dvd w\<close> .. qed lemma exp_dvd_iff_exp_udvd: \<open>2 ^ n dvd w \<longleftrightarrow> 2 ^ n udvd w\<close> for v w :: \<open>'a::len word\<close> proof assume \<open>2 ^ n udvd w\<close> then show \<open>2 ^ n dvd w\<close> by (rule udvd_imp_dvd) next assume \<open>2 ^ n dvd w\<close> then obtain u :: \<open>'a word\<close> where \<open>w = 2 ^ n * u\<close> .. then have \<open>w = push_bit n u\<close> by (simp add: push_bit_eq_mult) then show \<open>2 ^ n udvd w\<close> by transfer (simp add: take_bit_push_bit dvd_eq_mod_eq_0 flip: take_bit_eq_mod) qed lemma udvd_nat_alt: \<open>a udvd b \<longleftrightarrow> (\<exists>n. 
unat b = n * unat a)\<close> by (auto simp add: udvd_iff_dvd) lemma udvd_unfold_int: \<open>a udvd b \<longleftrightarrow> (\<exists>n\<ge>0. uint b = n * uint a)\<close> unfolding udvd_iff_dvd_int by (metis dvd_div_mult_self dvd_triv_right uint_div_distrib uint_ge_0) lemma unat_minus_one: \<open>unat (w - 1) = unat w - 1\<close> if \<open>w \<noteq> 0\<close> proof - have "0 \<le> uint w" by (fact uint_nonnegative) moreover from that have "0 \<noteq> uint w" by (simp add: uint_0_iff) ultimately have "1 \<le> uint w" by arith from uint_lt2p [of w] have "uint w - 1 < 2 ^ LENGTH('a)" by arith with \<open>1 \<le> uint w\<close> have "(uint w - 1) mod 2 ^ LENGTH('a) = uint w - 1" by (auto intro: mod_pos_pos_trivial) with \<open>1 \<le> uint w\<close> have "nat ((uint w - 1) mod 2 ^ LENGTH('a)) = nat (uint w) - 1" by (auto simp del: nat_uint_eq) then show ?thesis by (simp only: unat_eq_nat_uint word_arith_wis mod_diff_right_eq) (metis of_int_1 uint_word_of_int unsigned_1) qed lemma measure_unat: "p \<noteq> 0 \<Longrightarrow> unat (p - 1) < unat p" by (simp add: unat_minus_one) (simp add: unat_0_iff [symmetric]) lemmas uint_add_ge0 [simp] = add_nonneg_nonneg [OF uint_ge_0 uint_ge_0] lemmas uint_mult_ge0 [simp] = mult_nonneg_nonneg [OF uint_ge_0 uint_ge_0] lemma uint_sub_lt2p [simp]: "uint x - uint y < 2 ^ LENGTH('a)" for x :: "'a::len word" and y :: "'b::len word" using uint_ge_0 [of y] uint_lt2p [of x] by arith subsection \<open>Conditions for the addition (etc) of two words to overflow\<close> lemma uint_add_lem: "(uint x + uint y < 2 ^ LENGTH('a)) = (uint (x + y) = uint x + uint y)" for x y :: "'a::len word" by (metis add.right_neutral add_mono_thms_linordered_semiring(1) mod_pos_pos_trivial of_nat_0_le_iff uint_lt2p uint_nat uint_word_ariths(1)) lemma uint_mult_lem: "(uint x * uint y < 2 ^ LENGTH('a)) = (uint (x * y) = uint x * uint y)" for x y :: "'a::len word" by (metis mod_pos_pos_trivial uint_lt2p uint_mult_ge0 uint_word_ariths(3)) lemma uint_sub_lem: "uint x \<ge> uint y \<longleftrightarrow> uint (x - y) = uint x - uint y" by (metis diff_ge_0_iff_ge of_nat_0_le_iff uint_nat uint_sub_lt2p uint_word_of_int unique_euclidean_semiring_numeral_class.mod_less word_sub_wi) lemma uint_add_le: "uint (x + y) \<le> uint x + uint y" unfolding uint_word_ariths by (simp add: zmod_le_nonneg_dividend) lemma uint_sub_ge: "uint (x - y) \<ge> uint x - uint y" unfolding uint_word_ariths by (simp flip: take_bit_eq_mod add: take_bit_int_greater_eq_self_iff) lemma int_mod_ge: \<open>a \<le> a mod n\<close> if \<open>a < n\<close> \<open>0 < n\<close> for a n :: int using that order.trans [of a 0 \<open>a mod n\<close>] by (cases \<open>a < 0\<close>) auto lemma mod_add_if_z: "\<lbrakk>x < z; y < z; 0 \<le> y; 0 \<le> x; 0 \<le> z\<rbrakk> \<Longrightarrow> (x + y) mod z = (if x + y < z then x + y else x + y - z)" for x y z :: int apply (simp add: not_less) by (metis (no_types) add_strict_mono diff_ge_0_iff_ge diff_less_eq minus_mod_self2 mod_pos_pos_trivial) lemma uint_plus_if': "uint (a + b) = (if uint a + uint b < 2 ^ LENGTH('a) then uint a + uint b else uint a + uint b - 2 ^ LENGTH('a))" for a b :: "'a::len word" using mod_add_if_z [of "uint a" _ "uint b"] by (simp add: uint_word_ariths) lemma mod_sub_if_z: "\<lbrakk>x < z; y < z; 0 \<le> y; 0 \<le> x; 0 \<le> z\<rbrakk> \<Longrightarrow> (x - y) mod z = (if y \<le> x then x - y else x - y + z)" for x y z :: int using mod_pos_pos_trivial [of "x - y + z" z] by (auto simp add: not_le) lemma uint_sub_if': "uint (a - b) = (if uint b \<le> uint a then uint a 
- uint b else uint a - uint b + 2 ^ LENGTH('a))" for a b :: "'a::len word" using mod_sub_if_z [of "uint a" _ "uint b"] by (simp add: uint_word_ariths) lemma word_of_int_inverse: "word_of_int r = a \<Longrightarrow> 0 \<le> r \<Longrightarrow> r < 2 ^ LENGTH('a) \<Longrightarrow> uint a = r" for a :: "'a::len word" by transfer (simp add: take_bit_int_eq_self) lemma unat_split: "P (unat x) \<longleftrightarrow> (\<forall>n. of_nat n = x \<and> n < 2^LENGTH('a) \<longrightarrow> P n)" for x :: "'a::len word" by (auto simp add: unsigned_of_nat take_bit_nat_eq_self) lemma unat_split_asm: "P (unat x) \<longleftrightarrow> (\<nexists>n. of_nat n = x \<and> n < 2^LENGTH('a) \<and> \<not> P n)" for x :: "'a::len word" by (auto simp add: unsigned_of_nat take_bit_nat_eq_self) lemma un_ui_le: \<open>unat a \<le> unat b \<longleftrightarrow> uint a \<le> uint b\<close> by transfer (simp add: nat_le_iff) lemma unat_plus_if': \<open>unat (a + b) = (if unat a + unat b < 2 ^ LENGTH('a) then unat a + unat b else unat a + unat b - 2 ^ LENGTH('a))\<close> for a b :: \<open>'a::len word\<close> apply (auto simp add: not_less le_iff_add) apply (metis (mono_tags, lifting) of_nat_add of_nat_unat take_bit_nat_eq_self_iff unsigned_less unsigned_of_nat unsigned_word_eqI) apply (smt (verit, ccfv_SIG) dbl_simps(3) dbl_simps(5) numerals(1) of_nat_0_le_iff of_nat_add of_nat_eq_iff of_nat_numeral of_nat_power of_nat_unat uint_plus_if' unsigned_1) done lemma unat_sub_if_size: "unat (x - y) = (if unat y \<le> unat x then unat x - unat y else unat x + 2 ^ size x - unat y)" proof - { assume xy: "\<not> uint y \<le> uint x" have "nat (uint x - uint y + 2 ^ LENGTH('a)) = nat (uint x + 2 ^ LENGTH('a) - uint y)" by simp also have "... = nat (uint x + 2 ^ LENGTH('a)) - nat (uint y)" by (simp add: nat_diff_distrib') also have "... = nat (uint x) + 2 ^ LENGTH('a) - nat (uint y)" by (metis nat_add_distrib nat_eq_numeral_power_cancel_iff order_less_imp_le unsigned_0 unsigned_greater_eq unsigned_less) finally have "nat (uint x - uint y + 2 ^ LENGTH('a)) = nat (uint x) + 2 ^ LENGTH('a) - nat (uint y)" . } then show ?thesis by (simp add: word_size) (metis nat_diff_distrib' uint_sub_if' un_ui_le unat_eq_nat_uint unsigned_greater_eq) qed lemmas unat_sub_if' = unat_sub_if_size [unfolded word_size] lemma uint_split: "P (uint x) = (\<forall>i. word_of_int i = x \<and> 0 \<le> i \<and> i < 2^LENGTH('a) \<longrightarrow> P i)" for x :: "'a::len word" by transfer (auto simp add: take_bit_eq_mod) lemma uint_split_asm: "P (uint x) = (\<nexists>i. word_of_int i = x \<and> 0 \<le> i \<and> i < 2^LENGTH('a) \<and> \<not> P i)" for x :: "'a::len word" by (auto simp add: unsigned_of_int take_bit_int_eq_self) subsection \<open>Some proof tool support\<close> \<comment> \<open>use this to stop, eg. 
\<open>2 ^ LENGTH(32)\<close> being simplified\<close> lemma power_False_cong: "False \<Longrightarrow> a ^ b = c ^ d" by auto lemmas unat_splits = unat_split unat_split_asm lemmas unat_arith_simps = word_le_nat_alt word_less_nat_alt word_unat_eq_iff unat_sub_if' unat_plus_if' unat_div unat_mod lemmas uint_splits = uint_split uint_split_asm lemmas uint_arith_simps = word_le_def word_less_alt word_uint_eq_iff uint_sub_if' uint_plus_if' \<comment> \<open>\<open>unat_arith_tac\<close>: tactic to reduce word arithmetic to \<open>nat\<close>, try to solve via \<open>arith\<close>\<close> ML \<open> val unat_arith_simpset = @{context} (* TODO: completely explicitly determined simpset *) |> fold Simplifier.add_simp @{thms unat_arith_simps} |> fold Splitter.add_split @{thms if_split_asm} |> fold Simplifier.add_cong @{thms power_False_cong} |> simpset_of fun unat_arith_tacs ctxt = let fun arith_tac' n t = Arith_Data.arith_tac ctxt n t handle Cooper.COOPER _ => Seq.empty; in [ clarify_tac ctxt 1, full_simp_tac (put_simpset unat_arith_simpset ctxt) 1, ALLGOALS (full_simp_tac (put_simpset HOL_ss ctxt |> fold Splitter.add_split @{thms unat_splits} |> fold Simplifier.add_cong @{thms power_False_cong})), rewrite_goals_tac ctxt @{thms word_size}, ALLGOALS (fn n => REPEAT (resolve_tac ctxt [allI, impI] n) THEN REPEAT (eresolve_tac ctxt [conjE] n) THEN REPEAT (dresolve_tac ctxt @{thms of_nat_inverse} n THEN assume_tac ctxt n)), TRYALL arith_tac' ] end fun unat_arith_tac ctxt = SELECT_GOAL (EVERY (unat_arith_tacs ctxt)) \<close> method_setup unat_arith = \<open>Scan.succeed (SIMPLE_METHOD' o unat_arith_tac)\<close> "solving word arithmetic via natural numbers and arith" \<comment> \<open>\<open>uint_arith_tac\<close>: reduce to arithmetic on int, try to solve by arith\<close> ML \<open> val uint_arith_simpset = @{context} (* TODO: completely explicitly determined simpset *) |> fold Simplifier.add_simp @{thms uint_arith_simps} |> fold Splitter.add_split @{thms if_split_asm} |> fold Simplifier.add_cong @{thms power_False_cong} |> simpset_of; fun uint_arith_tacs ctxt = let fun arith_tac' n t = Arith_Data.arith_tac ctxt n t handle Cooper.COOPER _ => Seq.empty; in [ clarify_tac ctxt 1, full_simp_tac (put_simpset uint_arith_simpset ctxt) 1, ALLGOALS (full_simp_tac (put_simpset HOL_ss ctxt |> fold Splitter.add_split @{thms uint_splits} |> fold Simplifier.add_cong @{thms power_False_cong})), rewrite_goals_tac ctxt @{thms word_size}, ALLGOALS (fn n => REPEAT (resolve_tac ctxt [allI, impI] n) THEN REPEAT (eresolve_tac ctxt [conjE] n) THEN REPEAT (dresolve_tac ctxt @{thms word_of_int_inverse} n THEN assume_tac ctxt n THEN assume_tac ctxt n)), TRYALL arith_tac' ] end fun uint_arith_tac ctxt = SELECT_GOAL (EVERY (uint_arith_tacs ctxt)) \<close> method_setup uint_arith = \<open>Scan.succeed (SIMPLE_METHOD' o uint_arith_tac)\<close> "solving word arithmetic via integers and arith" subsection \<open>More on overflows and monotonicity\<close> lemma no_plus_overflow_uint_size: "x \<le> x + y \<longleftrightarrow> uint x + uint y < 2 ^ size x" for x y :: "'a::len word" by (auto simp add: word_size word_le_def uint_add_lem uint_sub_lem) lemmas no_olen_add = no_plus_overflow_uint_size [unfolded word_size] lemma no_ulen_sub: "x \<ge> x - y \<longleftrightarrow> uint y \<le> uint x" for x y :: "'a::len word" by (auto simp add: word_size word_le_def uint_add_lem uint_sub_lem) lemma no_olen_add': "x \<le> y + x \<longleftrightarrow> uint y + uint x < 2 ^ LENGTH('a)" for x y :: "'a::len word" by (simp add: ac_simps no_olen_add) 
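text \<open>
  Read operationally, \<open>no_olen_add\<close> states that \<open>x \<le> x + y\<close> holds exactly when the
  addition does not wrap around. A small concrete check (again a sketch, presupposing
  the default code setup for \<open>word\<close>):
\<close>

value \<open>(100 :: 8 word) \<le> 100 + 50\<close>  \<comment> \<open>\<open>True\<close>: \<open>150 < 2 ^ 8\<close>, no wrap-around\<close>

value \<open>(200 :: 8 word) \<le> 200 + 100\<close>  \<comment> \<open>\<open>False\<close>: \<open>300\<close> wraps around to \<open>44\<close>\<close>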
lemmas olen_add_eqv = trans [OF no_olen_add no_olen_add' [symmetric]] lemmas uint_plus_simple_iff = trans [OF no_olen_add uint_add_lem] lemmas uint_plus_simple = uint_plus_simple_iff [THEN iffD1] lemmas uint_minus_simple_iff = trans [OF no_ulen_sub uint_sub_lem] lemmas uint_minus_simple_alt = uint_sub_lem [folded word_le_def] lemmas word_sub_le_iff = no_ulen_sub [folded word_le_def] lemmas word_sub_le = word_sub_le_iff [THEN iffD2] lemma word_less_sub1: "x \<noteq> 0 \<Longrightarrow> 1 < x \<longleftrightarrow> 0 < x - 1" for x :: "'a::len word" by transfer (simp add: take_bit_decr_eq) lemma word_le_sub1: "x \<noteq> 0 \<Longrightarrow> 1 \<le> x \<longleftrightarrow> 0 \<le> x - 1" for x :: "'a::len word" by transfer (simp add: int_one_le_iff_zero_less less_le) lemma sub_wrap_lt: "x < x - z \<longleftrightarrow> x < z" for x z :: "'a::len word" by (simp add: word_less_def uint_sub_lem) (meson linorder_not_le uint_minus_simple_iff uint_sub_lem word_less_iff_unsigned) lemma sub_wrap: "x \<le> x - z \<longleftrightarrow> z = 0 \<or> x < z" for x z :: "'a::len word" by (simp add: le_less sub_wrap_lt ac_simps) lemma plus_minus_not_NULL_ab: "x \<le> ab - c \<Longrightarrow> c \<le> ab \<Longrightarrow> c \<noteq> 0 \<Longrightarrow> x + c \<noteq> 0" for x ab c :: "'a::len word" by uint_arith lemma plus_minus_no_overflow_ab: "x \<le> ab - c \<Longrightarrow> c \<le> ab \<Longrightarrow> x \<le> x + c" for x ab c :: "'a::len word" by uint_arith lemma le_minus': "a + c \<le> b \<Longrightarrow> a \<le> a + c \<Longrightarrow> c \<le> b - a" for a b c :: "'a::len word" by uint_arith lemma le_plus': "a \<le> b \<Longrightarrow> c \<le> b - a \<Longrightarrow> a + c \<le> b" for a b c :: "'a::len word" by uint_arith lemmas le_plus = le_plus' [rotated] lemmas le_minus = leD [THEN thin_rl, THEN le_minus'] (* FIXME *) lemma word_plus_mono_right: "y \<le> z \<Longrightarrow> x \<le> x + z \<Longrightarrow> x + y \<le> x + z" for x y z :: "'a::len word" by uint_arith lemma word_less_minus_cancel: "y - x < z - x \<Longrightarrow> x \<le> z \<Longrightarrow> y < z" for x y z :: "'a::len word" by uint_arith lemma word_less_minus_mono_left: "y < z \<Longrightarrow> x \<le> y \<Longrightarrow> y - x < z - x" for x y z :: "'a::len word" by uint_arith lemma word_less_minus_mono: "a < c \<Longrightarrow> d < b \<Longrightarrow> a - b < a \<Longrightarrow> c - d < c \<Longrightarrow> a - b < c - d" for a b c d :: "'a::len word" by uint_arith lemma word_le_minus_cancel: "y - x \<le> z - x \<Longrightarrow> x \<le> z \<Longrightarrow> y \<le> z" for x y z :: "'a::len word" by uint_arith lemma word_le_minus_mono_left: "y \<le> z \<Longrightarrow> x \<le> y \<Longrightarrow> y - x \<le> z - x" for x y z :: "'a::len word" by uint_arith lemma word_le_minus_mono: "a \<le> c \<Longrightarrow> d \<le> b \<Longrightarrow> a - b \<le> a \<Longrightarrow> c - d \<le> c \<Longrightarrow> a - b \<le> c - d" for a b c d :: "'a::len word" by uint_arith lemma plus_le_left_cancel_wrap: "x + y' < x \<Longrightarrow> x + y < x \<Longrightarrow> x + y' < x + y \<longleftrightarrow> y' < y" for x y y' :: "'a::len word" by uint_arith lemma plus_le_left_cancel_nowrap: "x \<le> x + y' \<Longrightarrow> x \<le> x + y \<Longrightarrow> x + y' < x + y \<longleftrightarrow> y' < y" for x y y' :: "'a::len word" by uint_arith lemma word_plus_mono_right2: "a \<le> a + b \<Longrightarrow> c \<le> b \<Longrightarrow> a \<le> a + c" for a b c :: "'a::len word" by uint_arith lemma word_less_add_right: "x < y - z \<Longrightarrow> z \<le> y 
\<Longrightarrow> x + z < y" for x y z :: "'a::len word" by uint_arith lemma word_less_sub_right: "x < y + z \<Longrightarrow> y \<le> x \<Longrightarrow> x - y < z" for x y z :: "'a::len word" by uint_arith lemma word_le_plus_either: "x \<le> y \<or> x \<le> z \<Longrightarrow> y \<le> y + z \<Longrightarrow> x \<le> y + z" for x y z :: "'a::len word" by uint_arith lemma word_less_nowrapI: "x < z - k \<Longrightarrow> k \<le> z \<Longrightarrow> 0 < k \<Longrightarrow> x < x + k" for x z k :: "'a::len word" by uint_arith lemma inc_i: "1 \<le> i \<Longrightarrow> i < m \<Longrightarrow> 1 \<le> i + 1 \<and> i + 1 \<le> m" for i m :: "'a::len word" by uint_arith lemma udvd_incr_lem: "up < uq \<Longrightarrow> up = ua + n * uint K \<Longrightarrow> uq = ua + n' * uint K \<Longrightarrow> up + uint K \<le> uq" by auto (metis int_distrib(1) linorder_not_less mult.left_neutral mult_right_mono uint_nonnegative zless_imp_add1_zle) lemma udvd_incr': "p < q \<Longrightarrow> uint p = ua + n * uint K \<Longrightarrow> uint q = ua + n' * uint K \<Longrightarrow> p + K \<le> q" unfolding word_less_alt word_le_def by (metis (full_types) order_trans udvd_incr_lem uint_add_le) lemma udvd_decr': assumes "p < q" "uint p = ua + n * uint K" "uint q = ua + n' * uint K" shows "uint q = ua + n' * uint K \<Longrightarrow> p \<le> q - K" proof - have "\<And>w wa. uint (w::'a word) \<le> uint wa + uint (w - wa)" by (metis (no_types) add_diff_cancel_left' diff_add_cancel uint_add_le) moreover have "uint K + uint p \<le> uint q" using assms by (metis (no_types) add_diff_cancel_left' diff_add_cancel udvd_incr_lem word_less_def) ultimately show ?thesis by (meson add_le_cancel_left order_trans word_less_eq_iff_unsigned) qed lemmas udvd_incr_lem0 = udvd_incr_lem [where ua=0, unfolded add_0_left] lemmas udvd_incr0 = udvd_incr' [where ua=0, unfolded add_0_left] lemmas udvd_decr0 = udvd_decr' [where ua=0, unfolded add_0_left] lemma udvd_minus_le': "xy < k \<Longrightarrow> z udvd xy \<Longrightarrow> z udvd k \<Longrightarrow> xy \<le> k - z" unfolding udvd_unfold_int by (meson udvd_decr0) lemma udvd_incr2_K: "p < a + s \<Longrightarrow> a \<le> a + s \<Longrightarrow> K udvd s \<Longrightarrow> K udvd p - a \<Longrightarrow> a \<le> p \<Longrightarrow> 0 < K \<Longrightarrow> p \<le> p + K \<and> p + K \<le> a + s" unfolding udvd_unfold_int apply (simp add: uint_arith_simps split: if_split_asm) apply (metis (no_types, opaque_lifting) le_add_diff_inverse le_less_trans udvd_incr_lem) using uint_lt2p [of s] by simp subsection \<open>Arithmetic type class instantiations\<close> lemmas word_le_0_iff [simp] = word_zero_le [THEN leD, THEN antisym_conv1] lemma word_of_int_nat: "0 \<le> x \<Longrightarrow> word_of_int x = of_nat (nat x)" by simp text \<open> note that \<open>iszero_def\<close> is only for class \<open>comm_semiring_1_cancel\<close>, which requires word length \<open>\<ge> 1\<close>, ie \<open>'a::len word\<close> \<close> lemma iszero_word_no [simp]: "iszero (numeral bin :: 'a::len word) = iszero (take_bit LENGTH('a) (numeral bin :: int))" by (metis iszero_def uint_0_iff uint_bintrunc) text \<open>Use \<open>iszero\<close> to simplify equalities between word numerals.\<close> lemmas word_eq_numeral_iff_iszero [simp] = eq_numeral_iff_iszero [where 'a="'a::len word"] subsection \<open>Word and nat\<close> lemma word_nchotomy: "\<forall>w :: 'a::len word. \<exists>n. w = of_nat n \<and> n < 2 ^ LENGTH('a)" by (metis of_nat_unat ucast_id unsigned_less) lemma of_nat_eq: "of_nat n = w \<longleftrightarrow> (\<exists>q. 
n = unat w + q * 2 ^ LENGTH('a))" for w :: "'a::len word" using mod_div_mult_eq [of n "2 ^ LENGTH('a)", symmetric] by (auto simp flip: take_bit_eq_mod simp add: unsigned_of_nat) lemma of_nat_eq_size: "of_nat n = w \<longleftrightarrow> (\<exists>q. n = unat w + q * 2 ^ size w)" unfolding word_size by (rule of_nat_eq) lemma of_nat_0: "of_nat m = (0::'a::len word) \<longleftrightarrow> (\<exists>q. m = q * 2 ^ LENGTH('a))" by (simp add: of_nat_eq) lemma of_nat_2p [simp]: "of_nat (2 ^ LENGTH('a)) = (0::'a::len word)" by (fact mult_1 [symmetric, THEN iffD2 [OF of_nat_0 exI]]) lemma of_nat_gt_0: "of_nat k \<noteq> 0 \<Longrightarrow> 0 < k" by (cases k) auto lemma of_nat_neq_0: "0 < k \<Longrightarrow> k < 2 ^ LENGTH('a::len) \<Longrightarrow> of_nat k \<noteq> (0 :: 'a word)" by (auto simp add : of_nat_0) lemma Abs_fnat_hom_add: "of_nat a + of_nat b = of_nat (a + b)" by simp lemma Abs_fnat_hom_mult: "of_nat a * of_nat b = (of_nat (a * b) :: 'a::len word)" by (simp add: wi_hom_mult) lemma Abs_fnat_hom_Suc: "word_succ (of_nat a) = of_nat (Suc a)" by transfer (simp add: ac_simps) lemma Abs_fnat_hom_0: "(0::'a::len word) = of_nat 0" by simp lemma Abs_fnat_hom_1: "(1::'a::len word) = of_nat (Suc 0)" by simp lemmas Abs_fnat_homs = Abs_fnat_hom_add Abs_fnat_hom_mult Abs_fnat_hom_Suc Abs_fnat_hom_0 Abs_fnat_hom_1 lemma word_arith_nat_add: "a + b = of_nat (unat a + unat b)" by simp lemma word_arith_nat_mult: "a * b = of_nat (unat a * unat b)" by simp lemma word_arith_nat_Suc: "word_succ a = of_nat (Suc (unat a))" by (subst Abs_fnat_hom_Suc [symmetric]) simp lemma word_arith_nat_div: "a div b = of_nat (unat a div unat b)" by (metis of_int_of_nat_eq of_nat_unat of_nat_div word_div_def) lemma word_arith_nat_mod: "a mod b = of_nat (unat a mod unat b)" by (metis of_int_of_nat_eq of_nat_mod of_nat_unat word_mod_def) lemmas word_arith_nat_defs = word_arith_nat_add word_arith_nat_mult word_arith_nat_Suc Abs_fnat_hom_0 Abs_fnat_hom_1 word_arith_nat_div word_arith_nat_mod lemma unat_cong: "x = y \<Longrightarrow> unat x = unat y" by (fact arg_cong) lemma unat_of_nat: \<open>unat (word_of_nat x :: 'a::len word) = x mod 2 ^ LENGTH('a)\<close> by transfer (simp flip: take_bit_eq_mod add: nat_take_bit_eq) lemmas unat_word_ariths = word_arith_nat_defs [THEN trans [OF unat_cong unat_of_nat]] lemmas word_sub_less_iff = word_sub_le_iff [unfolded linorder_not_less [symmetric] Not_eq_iff] lemma unat_add_lem: "unat x + unat y < 2 ^ LENGTH('a) \<longleftrightarrow> unat (x + y) = unat x + unat y" for x y :: "'a::len word" by (metis mod_less unat_word_ariths(1) unsigned_less) lemma unat_mult_lem: "unat x * unat y < 2 ^ LENGTH('a) \<longleftrightarrow> unat (x * y) = unat x * unat y" for x y :: "'a::len word" by (metis mod_less unat_word_ariths(2) unsigned_less) lemma le_no_overflow: "x \<le> b \<Longrightarrow> a \<le> a + b \<Longrightarrow> x \<le> a + b" for a b x :: "'a::len word" using word_le_plus_either by blast lemma uint_div: \<open>uint (x div y) = uint x div uint y\<close> by (fact uint_div_distrib) lemma uint_mod: \<open>uint (x mod y) = uint x mod uint y\<close> by (fact uint_mod_distrib) lemma no_plus_overflow_unat_size: "x \<le> x + y \<longleftrightarrow> unat x + unat y < 2 ^ size x" for x y :: "'a::len word" unfolding word_size by unat_arith lemmas no_olen_add_nat = no_plus_overflow_unat_size [unfolded word_size] lemmas unat_plus_simple = trans [OF no_olen_add_nat unat_add_lem] lemma word_div_mult: "\<lbrakk>0 < y; unat x * unat y < 2 ^ LENGTH('a)\<rbrakk> \<Longrightarrow> x * y div y = x" for x y :: 
"'a::len word" by (simp add: unat_eq_zero unat_mult_lem word_arith_nat_div) lemma div_lt': "i \<le> k div x \<Longrightarrow> unat i * unat x < 2 ^ LENGTH('a)" for i k x :: "'a::len word" by unat_arith (meson le_less_trans less_mult_imp_div_less not_le unsigned_less) lemmas div_lt'' = order_less_imp_le [THEN div_lt'] lemma div_lt_mult: "\<lbrakk>i < k div x; 0 < x\<rbrakk> \<Longrightarrow> i * x < k" for i k x :: "'a::len word" by (metis div_le_mono div_lt'' not_le unat_div word_div_mult word_less_iff_unsigned) lemma div_le_mult: "\<lbrakk>i \<le> k div x; 0 < x\<rbrakk> \<Longrightarrow> i * x \<le> k" for i k x :: "'a::len word" by (metis div_lt' less_mult_imp_div_less not_less unat_arith_simps(2) unat_div unat_mult_lem) lemma div_lt_uint': "i \<le> k div x \<Longrightarrow> uint i * uint x < 2 ^ LENGTH('a)" for i k x :: "'a::len word" unfolding uint_nat by (metis div_lt' int_ops(7) of_nat_unat uint_mult_lem unat_mult_lem) lemmas div_lt_uint'' = order_less_imp_le [THEN div_lt_uint'] lemma word_le_exists': "x \<le> y \<Longrightarrow> \<exists>z. y = x + z \<and> uint x + uint z < 2 ^ LENGTH('a)" for x y z :: "'a::len word" by (metis add.commute diff_add_cancel no_olen_add) lemmas plus_minus_not_NULL = order_less_imp_le [THEN plus_minus_not_NULL_ab] lemmas plus_minus_no_overflow = order_less_imp_le [THEN plus_minus_no_overflow_ab] lemmas mcs = word_less_minus_cancel word_less_minus_mono_left word_le_minus_cancel word_le_minus_mono_left lemmas word_l_diffs = mcs [where y = "w + x", unfolded add_diff_cancel] for w x lemmas word_diff_ls = mcs [where z = "w + x", unfolded add_diff_cancel] for w x lemmas word_plus_mcs = word_diff_ls [where y = "v + x", unfolded add_diff_cancel] for v x lemma le_unat_uoi: \<open>y \<le> unat z \<Longrightarrow> unat (word_of_nat y :: 'a word) = y\<close> for z :: \<open>'a::len word\<close> by transfer (simp add: nat_take_bit_eq take_bit_nat_eq_self_iff le_less_trans) lemmas thd = times_div_less_eq_dividend lemmas uno_simps [THEN le_unat_uoi] = mod_le_divisor div_le_dividend lemma word_mod_div_equality: "(n div b) * b + (n mod b) = n" for n b :: "'a::len word" by (fact div_mult_mod_eq) lemma word_div_mult_le: "a div b * b \<le> a" for a b :: "'a::len word" by (metis div_le_mult mult_not_zero order.not_eq_order_implies_strict order_refl word_zero_le) lemma word_mod_less_divisor: "0 < n \<Longrightarrow> m mod n < n" for m n :: "'a::len word" by (simp add: unat_arith_simps) lemma word_of_int_power_hom: "word_of_int a ^ n = (word_of_int (a ^ n) :: 'a::len word)" by (induct n) (simp_all add: wi_hom_mult [symmetric]) lemma word_arith_power_alt: "a ^ n = (word_of_int (uint a ^ n) :: 'a::len word)" by (simp add : word_of_int_power_hom [symmetric]) lemma unatSuc: "1 + n \<noteq> 0 \<Longrightarrow> unat (1 + n) = Suc (unat n)" for n :: "'a::len word" by unat_arith subsection \<open>Cardinality, finiteness of set of words\<close> lemma inj_on_word_of_int: \<open>inj_on (word_of_int :: int \<Rightarrow> 'a word) {0..<2 ^ LENGTH('a::len)}\<close> unfolding inj_on_def by (metis atLeastLessThan_iff word_of_int_inverse) lemma range_uint: \<open>range (uint :: 'a word \<Rightarrow> int) = {0..<2 ^ LENGTH('a::len)}\<close> apply transfer apply (auto simp add: image_iff) apply (metis take_bit_int_eq_self_iff) done lemma UNIV_eq: \<open>(UNIV :: 'a word set) = word_of_int ` {0..<2 ^ LENGTH('a::len)}\<close> by (auto simp add: image_iff) (metis atLeastLessThan_iff linorder_not_le uint_split) lemma card_word: "CARD('a word) = 2 ^ LENGTH('a::len)" by (simp add: UNIV_eq card_image 
inj_on_word_of_int) lemma card_word_size: "CARD('a word) = 2 ^ size x" for x :: "'a::len word" unfolding word_size by (rule card_word) end instance word :: (len) finite by standard (simp add: UNIV_eq) subsection \<open>Bitwise Operations on Words\<close> context includes bit_operations_syntax begin lemma word_wi_log_defs: "NOT (word_of_int a) = word_of_int (NOT a)" "word_of_int a AND word_of_int b = word_of_int (a AND b)" "word_of_int a OR word_of_int b = word_of_int (a OR b)" "word_of_int a XOR word_of_int b = word_of_int (a XOR b)" by (transfer, rule refl)+ lemma word_no_log_defs [simp]: "NOT (numeral a) = word_of_int (NOT (numeral a))" "NOT (- numeral a) = word_of_int (NOT (- numeral a))" "numeral a AND numeral b = word_of_int (numeral a AND numeral b)" "numeral a AND - numeral b = word_of_int (numeral a AND - numeral b)" "- numeral a AND numeral b = word_of_int (- numeral a AND numeral b)" "- numeral a AND - numeral b = word_of_int (- numeral a AND - numeral b)" "numeral a OR numeral b = word_of_int (numeral a OR numeral b)" "numeral a OR - numeral b = word_of_int (numeral a OR - numeral b)" "- numeral a OR numeral b = word_of_int (- numeral a OR numeral b)" "- numeral a OR - numeral b = word_of_int (- numeral a OR - numeral b)" "numeral a XOR numeral b = word_of_int (numeral a XOR numeral b)" "numeral a XOR - numeral b = word_of_int (numeral a XOR - numeral b)" "- numeral a XOR numeral b = word_of_int (- numeral a XOR numeral b)" "- numeral a XOR - numeral b = word_of_int (- numeral a XOR - numeral b)" by (transfer, rule refl)+ text \<open>Special cases for when one of the arguments equals 1.\<close> lemma word_bitwise_1_simps [simp]: "NOT (1::'a::len word) = -2" "1 AND numeral b = word_of_int (1 AND numeral b)" "1 AND - numeral b = word_of_int (1 AND - numeral b)" "numeral a AND 1 = word_of_int (numeral a AND 1)" "- numeral a AND 1 = word_of_int (- numeral a AND 1)" "1 OR numeral b = word_of_int (1 OR numeral b)" "1 OR - numeral b = word_of_int (1 OR - numeral b)" "numeral a OR 1 = word_of_int (numeral a OR 1)" "- numeral a OR 1 = word_of_int (- numeral a OR 1)" "1 XOR numeral b = word_of_int (1 XOR numeral b)" "1 XOR - numeral b = word_of_int (1 XOR - numeral b)" "numeral a XOR 1 = word_of_int (numeral a XOR 1)" "- numeral a XOR 1 = word_of_int (- numeral a XOR 1)" apply (simp_all add: word_uint_eq_iff unsigned_not_eq unsigned_and_eq unsigned_or_eq unsigned_xor_eq of_nat_take_bit ac_simps unsigned_of_int) apply (simp_all add: minus_numeral_eq_not_sub_one) apply (simp_all only: sub_one_eq_not_neg bit.xor_compl_right take_bit_xor bit.double_compl) apply simp_all done text \<open>Special cases for when one of the arguments equals -1.\<close> lemma word_bitwise_m1_simps [simp]: "NOT (-1::'a::len word) = 0" "(-1::'a::len word) AND x = x" "x AND (-1::'a::len word) = x" "(-1::'a::len word) OR x = -1" "x OR (-1::'a::len word) = -1" " (-1::'a::len word) XOR x = NOT x" "x XOR (-1::'a::len word) = NOT x" by (transfer, simp)+ lemma word_of_int_not_numeral_eq [simp]: \<open>(word_of_int (NOT (numeral bin)) :: 'a::len word) = - numeral bin - 1\<close> by transfer (simp add: not_eq_complement) lemma uint_and: \<open>uint (x AND y) = uint x AND uint y\<close> by transfer simp lemma uint_or: \<open>uint (x OR y) = uint x OR uint y\<close> by transfer simp lemma uint_xor: \<open>uint (x XOR y) = uint x XOR uint y\<close> by transfer simp \<comment> \<open>get from commutativity, associativity etc of \<open>int_and\<close> etc to same for \<open>word_and etc\<close>\<close> lemmas bwsimps = wi_hom_add 
word_wi_log_defs lemma word_bw_assocs: "(x AND y) AND z = x AND y AND z" "(x OR y) OR z = x OR y OR z" "(x XOR y) XOR z = x XOR y XOR z" for x :: "'a::len word" by (fact ac_simps)+ lemma word_bw_comms: "x AND y = y AND x" "x OR y = y OR x" "x XOR y = y XOR x" for x :: "'a::len word" by (fact ac_simps)+ lemma word_bw_lcs: "y AND x AND z = x AND y AND z" "y OR x OR z = x OR y OR z" "y XOR x XOR z = x XOR y XOR z" for x :: "'a::len word" by (fact ac_simps)+ lemma word_log_esimps: "x AND 0 = 0" "x AND -1 = x" "x OR 0 = x" "x OR -1 = -1" "x XOR 0 = x" "x XOR -1 = NOT x" "0 AND x = 0" "-1 AND x = x" "0 OR x = x" "-1 OR x = -1" "0 XOR x = x" "-1 XOR x = NOT x" for x :: "'a::len word" by simp_all lemma word_not_dist: "NOT (x OR y) = NOT x AND NOT y" "NOT (x AND y) = NOT x OR NOT y" for x :: "'a::len word" by simp_all lemma word_bw_same: "x AND x = x" "x OR x = x" "x XOR x = 0" for x :: "'a::len word" by simp_all lemma word_ao_absorbs [simp]: "x AND (y OR x) = x" "x OR y AND x = x" "x AND (x OR y) = x" "y AND x OR x = x" "(y OR x) AND x = x" "x OR x AND y = x" "(x OR y) AND x = x" "x AND y OR x = x" for x :: "'a::len word" by (auto intro: bit_eqI simp add: bit_and_iff bit_or_iff) lemma word_not_not [simp]: "NOT (NOT x) = x" for x :: "'a::len word" by (fact bit.double_compl) lemma word_ao_dist: "(x OR y) AND z = x AND z OR y AND z" for x :: "'a::len word" by (fact bit.conj_disj_distrib2) lemma word_oa_dist: "x AND y OR z = (x OR z) AND (y OR z)" for x :: "'a::len word" by (fact bit.disj_conj_distrib2) lemma word_add_not [simp]: "x + NOT x = -1" for x :: "'a::len word" by (simp add: not_eq_complement) lemma word_plus_and_or [simp]: "(x AND y) + (x OR y) = x + y" for x :: "'a::len word" by transfer (simp add: plus_and_or) lemma leoa: "w = x OR y \<Longrightarrow> y = w AND y" for x :: "'a::len word" by auto lemma leao: "w' = x' AND y' \<Longrightarrow> x' = x' OR w'" for x' :: "'a::len word" by auto lemma word_ao_equiv: "w = w OR w' \<longleftrightarrow> w' = w AND w'" for w w' :: "'a::len word" by (auto intro: leoa leao) lemma le_word_or2: "x \<le> x OR y" for x y :: "'a::len word" by (simp add: or_greater_eq uint_or word_le_def) lemmas le_word_or1 = xtrans(3) [OF word_bw_comms (2) le_word_or2] lemmas word_and_le1 = xtrans(3) [OF word_ao_absorbs (4) [symmetric] le_word_or2] lemmas word_and_le2 = xtrans(3) [OF word_ao_absorbs (8) [symmetric] le_word_or2] lemma bit_horner_sum_bit_word_iff [bit_simps]: \<open>bit (horner_sum of_bool (2 :: 'a::len word) bs) n \<longleftrightarrow> n < min LENGTH('a) (length bs) \<and> bs ! 
n\<close> by transfer (simp add: bit_horner_sum_bit_iff) definition word_reverse :: \<open>'a::len word \<Rightarrow> 'a word\<close> where \<open>word_reverse w = horner_sum of_bool 2 (rev (map (bit w) [0..<LENGTH('a)]))\<close> lemma bit_word_reverse_iff [bit_simps]: \<open>bit (word_reverse w) n \<longleftrightarrow> n < LENGTH('a) \<and> bit w (LENGTH('a) - Suc n)\<close> for w :: \<open>'a::len word\<close> by (cases \<open>n < LENGTH('a)\<close>) (simp_all add: word_reverse_def bit_horner_sum_bit_word_iff rev_nth) lemma word_rev_rev [simp] : "word_reverse (word_reverse w) = w" by (rule bit_word_eqI) (auto simp add: bit_word_reverse_iff bit_imp_le_length Suc_diff_Suc) lemma word_rev_gal: "word_reverse w = u \<Longrightarrow> word_reverse u = w" by (metis word_rev_rev) lemma word_rev_gal': "u = word_reverse w \<Longrightarrow> w = word_reverse u" by simp lemma uint_2p: "(0::'a::len word) < 2 ^ n \<Longrightarrow> uint (2 ^ n::'a::len word) = 2 ^ n" by (cases \<open>n < LENGTH('a)\<close>; transfer; force) lemma word_of_int_2p: "(word_of_int (2 ^ n) :: 'a::len word) = 2 ^ n" by (induct n) (simp_all add: wi_hom_syms) subsubsection \<open>shift functions in terms of lists of bools\<close> lemma drop_bit_word_numeral [simp]: \<open>drop_bit (numeral n) (numeral k) = (word_of_int (drop_bit (numeral n) (take_bit LENGTH('a) (numeral k))) :: 'a::len word)\<close> by transfer simp lemma drop_bit_word_Suc_numeral [simp]: \<open>drop_bit (Suc n) (numeral k) = (word_of_int (drop_bit (Suc n) (take_bit LENGTH('a) (numeral k))) :: 'a::len word)\<close> by transfer simp lemma drop_bit_word_minus_numeral [simp]: \<open>drop_bit (numeral n) (- numeral k) = (word_of_int (drop_bit (numeral n) (take_bit LENGTH('a) (- numeral k))) :: 'a::len word)\<close> by transfer simp lemma drop_bit_word_Suc_minus_numeral [simp]: \<open>drop_bit (Suc n) (- numeral k) = (word_of_int (drop_bit (Suc n) (take_bit LENGTH('a) (- numeral k))) :: 'a::len word)\<close> by transfer simp lemma signed_drop_bit_word_numeral [simp]: \<open>signed_drop_bit (numeral n) (numeral k) = (word_of_int (drop_bit (numeral n) (signed_take_bit (LENGTH('a) - 1) (numeral k))) :: 'a::len word)\<close> by transfer simp lemma signed_drop_bit_word_Suc_numeral [simp]: \<open>signed_drop_bit (Suc n) (numeral k) = (word_of_int (drop_bit (Suc n) (signed_take_bit (LENGTH('a) - 1) (numeral k))) :: 'a::len word)\<close> by transfer simp lemma signed_drop_bit_word_minus_numeral [simp]: \<open>signed_drop_bit (numeral n) (- numeral k) = (word_of_int (drop_bit (numeral n) (signed_take_bit (LENGTH('a) - 1) (- numeral k))) :: 'a::len word)\<close> by transfer simp lemma signed_drop_bit_word_Suc_minus_numeral [simp]: \<open>signed_drop_bit (Suc n) (- numeral k) = (word_of_int (drop_bit (Suc n) (signed_take_bit (LENGTH('a) - 1) (- numeral k))) :: 'a::len word)\<close> by transfer simp lemma take_bit_word_numeral [simp]: \<open>take_bit (numeral n) (numeral k) = (word_of_int (take_bit (min LENGTH('a) (numeral n)) (numeral k)) :: 'a::len word)\<close> by transfer rule lemma take_bit_word_Suc_numeral [simp]: \<open>take_bit (Suc n) (numeral k) = (word_of_int (take_bit (min LENGTH('a) (Suc n)) (numeral k)) :: 'a::len word)\<close> by transfer rule lemma take_bit_word_minus_numeral [simp]: \<open>take_bit (numeral n) (- numeral k) = (word_of_int (take_bit (min LENGTH('a) (numeral n)) (- numeral k)) :: 'a::len word)\<close> by transfer rule lemma take_bit_word_Suc_minus_numeral [simp]: \<open>take_bit (Suc n) (- numeral k) = (word_of_int (take_bit (min LENGTH('a) (Suc 
n)) (- numeral k)) :: 'a::len word)\<close> by transfer rule lemma signed_take_bit_word_numeral [simp]: \<open>signed_take_bit (numeral n) (numeral k) = (word_of_int (signed_take_bit (numeral n) (take_bit LENGTH('a) (numeral k))) :: 'a::len word)\<close> by transfer rule lemma signed_take_bit_word_Suc_numeral [simp]: \<open>signed_take_bit (Suc n) (numeral k) = (word_of_int (signed_take_bit (Suc n) (take_bit LENGTH('a) (numeral k))) :: 'a::len word)\<close> by transfer rule lemma signed_take_bit_word_minus_numeral [simp]: \<open>signed_take_bit (numeral n) (- numeral k) = (word_of_int (signed_take_bit (numeral n) (take_bit LENGTH('a) (- numeral k))) :: 'a::len word)\<close> by transfer rule lemma signed_take_bit_word_Suc_minus_numeral [simp]: \<open>signed_take_bit (Suc n) (- numeral k) = (word_of_int (signed_take_bit (Suc n) (take_bit LENGTH('a) (- numeral k))) :: 'a::len word)\<close> by transfer rule lemma False_map2_or: "\<lbrakk>set xs \<subseteq> {False}; length ys = length xs\<rbrakk> \<Longrightarrow> map2 (\<or>) xs ys = ys" by (induction xs arbitrary: ys) (auto simp: length_Suc_conv) lemma align_lem_or: assumes "length xs = n + m" "length ys = n + m" and "drop m xs = replicate n False" "take m ys = replicate m False" shows "map2 (\<or>) xs ys = take m xs @ drop m ys" using assms proof (induction xs arbitrary: ys m) case (Cons a xs) then show ?case by (cases m) (auto simp: length_Suc_conv False_map2_or) qed auto lemma False_map2_and: "\<lbrakk>set xs \<subseteq> {False}; length ys = length xs\<rbrakk> \<Longrightarrow> map2 (\<and>) xs ys = xs" by (induction xs arbitrary: ys) (auto simp: length_Suc_conv) lemma align_lem_and: assumes "length xs = n + m" "length ys = n + m" and "drop m xs = replicate n False" "take m ys = replicate m False" shows "map2 (\<and>) xs ys = replicate (n + m) False" using assms proof (induction xs arbitrary: ys m) case (Cons a xs) then show ?case by (cases m) (auto simp: length_Suc_conv set_replicate_conv_if False_map2_and) qed auto subsubsection \<open>Mask\<close> lemma minus_1_eq_mask: \<open>- 1 = (mask LENGTH('a) :: 'a::len word)\<close> by (rule bit_eqI) (simp add: bit_exp_iff bit_mask_iff) lemma mask_eq_decr_exp: \<open>mask n = 2 ^ n - (1 :: 'a::len word)\<close> by (fact mask_eq_exp_minus_1) lemma mask_Suc_rec: \<open>mask (Suc n) = 2 * mask n + (1 :: 'a::len word)\<close> by (simp add: mask_eq_exp_minus_1) context begin qualified lemma bit_mask_iff [bit_simps]: \<open>bit (mask m :: 'a::len word) n \<longleftrightarrow> n < min LENGTH('a) m\<close> by (simp add: bit_mask_iff not_le) end lemma mask_bin: "mask n = word_of_int (take_bit n (- 1))" by transfer simp lemma and_mask_bintr: "w AND mask n = word_of_int (take_bit n (uint w))" by transfer (simp add: ac_simps take_bit_eq_mask) lemma and_mask_wi: "word_of_int i AND mask n = word_of_int (take_bit n i)" by (simp add: take_bit_eq_mask of_int_and_eq of_int_mask_eq) lemma and_mask_wi': "word_of_int i AND mask n = (word_of_int (take_bit (min LENGTH('a) n) i) :: 'a::len word)" by (auto simp add: and_mask_wi min_def wi_bintr) lemma and_mask_no: "numeral i AND mask n = word_of_int (take_bit n (numeral i))" unfolding word_numeral_alt by (rule and_mask_wi) lemma and_mask_mod_2p: "w AND mask n = word_of_int (uint w mod 2 ^ n)" by (simp only: and_mask_bintr take_bit_eq_mod) lemma uint_mask_eq: \<open>uint (mask n :: 'a::len word) = mask (min LENGTH('a) n)\<close> by transfer simp lemma and_mask_lt_2p: "uint (w AND mask n) < 2 ^ n" by (metis take_bit_eq_mask take_bit_int_less_exp unsigned_take_bit_eq) 
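
\<comment> \<open>Illustration, added and not part of the original theory: instantiating
  \<open>and_mask_lt_2p\<close> above bounds a masked value by the corresponding power of
  two, e.g. the low four bits of any word are below \<open>16\<close>:\<close>
lemma example_mask_bound: "uint (w AND mask 4) < 16"
  using and_mask_lt_2p [of w 4] by simp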
lemma mask_eq_iff: "w AND mask n = w \<longleftrightarrow> uint w < 2 ^ n" apply (auto simp flip: take_bit_eq_mask) apply (metis take_bit_int_eq_self_iff uint_take_bit_eq) apply (simp add: take_bit_int_eq_self unsigned_take_bit_eq word_uint_eqI) done lemma and_mask_dvd: "2 ^ n dvd uint w \<longleftrightarrow> w AND mask n = 0" by (simp flip: take_bit_eq_mask take_bit_eq_mod unsigned_take_bit_eq add: dvd_eq_mod_eq_0 uint_0_iff) lemma and_mask_dvd_nat: "2 ^ n dvd unat w \<longleftrightarrow> w AND mask n = 0" by (simp flip: take_bit_eq_mask take_bit_eq_mod unsigned_take_bit_eq add: dvd_eq_mod_eq_0 unat_0_iff uint_0_iff) lemma word_2p_lem: "n < size w \<Longrightarrow> w < 2 ^ n = (uint w < 2 ^ n)" for w :: "'a::len word" by transfer simp lemma less_mask_eq: fixes x :: "'a::len word" assumes "x < 2 ^ n" shows "x AND mask n = x" by (metis (no_types) assms lt2p_lem mask_eq_iff not_less word_2p_lem word_size) lemmas mask_eq_iff_w2p = trans [OF mask_eq_iff word_2p_lem [symmetric]] lemmas and_mask_less' = iffD2 [OF word_2p_lem and_mask_lt_2p, simplified word_size] lemma and_mask_less_size: "n < size x \<Longrightarrow> x AND mask n < 2 ^ n" for x :: \<open>'a::len word\<close> unfolding word_size by (erule and_mask_less') lemma word_mod_2p_is_mask [OF refl]: "c = 2 ^ n \<Longrightarrow> c > 0 \<Longrightarrow> x mod c = x AND mask n" for c x :: "'a::len word" by (auto simp: word_mod_def uint_2p and_mask_mod_2p) lemma mask_eqs: "(a AND mask n) + b AND mask n = a + b AND mask n" "a + (b AND mask n) AND mask n = a + b AND mask n" "(a AND mask n) - b AND mask n = a - b AND mask n" "a - (b AND mask n) AND mask n = a - b AND mask n" "a * (b AND mask n) AND mask n = a * b AND mask n" "(b AND mask n) * a AND mask n = b * a AND mask n" "(a AND mask n) + (b AND mask n) AND mask n = a + b AND mask n" "(a AND mask n) - (b AND mask n) AND mask n = a - b AND mask n" "(a AND mask n) * (b AND mask n) AND mask n = a * b AND mask n" "- (a AND mask n) AND mask n = - a AND mask n" "word_succ (a AND mask n) AND mask n = word_succ a AND mask n" "word_pred (a AND mask n) AND mask n = word_pred a AND mask n" using word_of_int_Ex [where x=a] word_of_int_Ex [where x=b] unfolding take_bit_eq_mask [symmetric] by (transfer; simp add: take_bit_eq_mod mod_simps)+ lemma mask_power_eq: "(x AND mask n) ^ k AND mask n = x ^ k AND mask n" for x :: \<open>'a::len word\<close> using word_of_int_Ex [where x=x] unfolding take_bit_eq_mask [symmetric] by (transfer; simp add: take_bit_eq_mod mod_simps)+ lemma mask_full [simp]: "mask LENGTH('a) = (- 1 :: 'a::len word)" by transfer simp subsubsection \<open>Slices\<close> definition slice1 :: \<open>nat \<Rightarrow> 'a::len word \<Rightarrow> 'b::len word\<close> where \<open>slice1 n w = (if n < LENGTH('a) then ucast (drop_bit (LENGTH('a) - n) w) else push_bit (n - LENGTH('a)) (ucast w))\<close> lemma bit_slice1_iff [bit_simps]: \<open>bit (slice1 m w :: 'b::len word) n \<longleftrightarrow> m - LENGTH('a) \<le> n \<and> n < min LENGTH('b) m \<and> bit w (n + (LENGTH('a) - m) - (m - LENGTH('a)))\<close> for w :: \<open>'a::len word\<close> by (auto simp add: slice1_def bit_ucast_iff bit_drop_bit_eq bit_push_bit_iff not_less not_le ac_simps dest: bit_imp_le_length) definition slice :: \<open>nat \<Rightarrow> 'a::len word \<Rightarrow> 'b::len word\<close> where \<open>slice n = slice1 (LENGTH('a) - n)\<close> lemma bit_slice_iff [bit_simps]: \<open>bit (slice m w :: 'b::len word) n \<longleftrightarrow> n < min LENGTH('b) (LENGTH('a) - m) \<and> bit w (n + LENGTH('a) - (LENGTH('a) - 
m))\<close> for w :: \<open>'a::len word\<close> by (simp add: slice_def word_size bit_slice1_iff) lemma slice1_0 [simp] : "slice1 n 0 = 0" unfolding slice1_def by simp lemma slice_0 [simp] : "slice n 0 = 0" unfolding slice_def by auto lemma ucast_slice1: "ucast w = slice1 (size w) w" unfolding slice1_def by (simp add: size_word.rep_eq) lemma ucast_slice: "ucast w = slice 0 w" by (simp add: slice_def slice1_def) lemma slice_id: "slice 0 t = t" by (simp only: ucast_slice [symmetric] ucast_id) lemma rev_slice1: \<open>slice1 n (word_reverse w :: 'b::len word) = word_reverse (slice1 k w :: 'a::len word)\<close> if \<open>n + k = LENGTH('a) + LENGTH('b)\<close> proof (rule bit_word_eqI) fix m assume *: \<open>m < LENGTH('a)\<close> from that have **: \<open>LENGTH('b) = n + k - LENGTH('a)\<close> by simp show \<open>bit (slice1 n (word_reverse w :: 'b word) :: 'a word) m \<longleftrightarrow> bit (word_reverse (slice1 k w :: 'a word)) m\<close> unfolding bit_slice1_iff bit_word_reverse_iff using * ** by (cases \<open>n \<le> LENGTH('a)\<close>; cases \<open>k \<le> LENGTH('a)\<close>) auto qed lemma rev_slice: "n + k + LENGTH('a::len) = LENGTH('b::len) \<Longrightarrow> slice n (word_reverse (w::'b word)) = word_reverse (slice k w :: 'a word)" unfolding slice_def word_size by (simp add: rev_slice1) subsubsection \<open>Revcast\<close> definition revcast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>revcast = slice1 LENGTH('b)\<close> lemma bit_revcast_iff [bit_simps]: \<open>bit (revcast w :: 'b::len word) n \<longleftrightarrow> LENGTH('b) - LENGTH('a) \<le> n \<and> n < LENGTH('b) \<and> bit w (n + (LENGTH('a) - LENGTH('b)) - (LENGTH('b) - LENGTH('a)))\<close> for w :: \<open>'a::len word\<close> by (simp add: revcast_def bit_slice1_iff) lemma revcast_slice1 [OF refl]: "rc = revcast w \<Longrightarrow> slice1 (size rc) w = rc" by (simp add: revcast_def word_size) lemma revcast_rev_ucast [OF refl refl refl]: "cs = [rc, uc] \<Longrightarrow> rc = revcast (word_reverse w) \<Longrightarrow> uc = ucast w \<Longrightarrow> rc = word_reverse uc" by (metis rev_slice1 revcast_slice1 ucast_slice1 word_size) lemma revcast_ucast: "revcast w = word_reverse (ucast (word_reverse w))" using revcast_rev_ucast [of "word_reverse w"] by simp lemma ucast_revcast: "ucast w = word_reverse (revcast (word_reverse w))" by (fact revcast_rev_ucast [THEN word_rev_gal']) lemma ucast_rev_revcast: "ucast (word_reverse w) = word_reverse (revcast w)" by (fact revcast_ucast [THEN word_rev_gal']) text "linking revcast and cast via shift" lemmas wsst_TYs = source_size target_size word_size lemmas sym_notr = not_iff [THEN iffD2, THEN not_sym, THEN not_iff [THEN iffD1]] subsection \<open>Split and cat\<close> lemmas word_split_bin' = word_split_def lemmas word_cat_bin' = word_cat_eq \<comment> \<open>this odd result is analogous to \<open>ucast_id\<close>, result to the length given by the result type\<close> lemma word_cat_id: "word_cat a b = b" by transfer (simp add: take_bit_concat_bit_eq) lemma word_cat_split_alt: "\<lbrakk>size w \<le> size u + size v; word_split w = (u,v)\<rbrakk> \<Longrightarrow> word_cat u v = w" unfolding word_split_def by (rule bit_word_eqI) (auto simp add: bit_word_cat_iff not_less word_size bit_ucast_iff bit_drop_bit_eq) lemmas word_cat_split_size = sym [THEN [2] word_cat_split_alt [symmetric]] subsubsection \<open>Split and slice\<close> lemma split_slices: assumes "word_split w = (u, v)" shows "u = slice (size v) w \<and> v = slice 0 w" unfolding word_size proof (intro 
conjI) have \<section>: "\<And>n. \<lbrakk>ucast (drop_bit LENGTH('b) w) = u; LENGTH('c) < LENGTH('b)\<rbrakk> \<Longrightarrow> \<not> bit u n" by (metis bit_take_bit_iff bit_word_of_int_iff diff_is_0_eq' drop_bit_take_bit less_imp_le less_nat_zero_code of_int_uint unsigned_drop_bit_eq) show "u = slice LENGTH('b) w" proof (rule bit_word_eqI) show "bit u n = bit ((slice LENGTH('b) w)::'a word) n" if "n < LENGTH('a)" for n using assms bit_imp_le_length unfolding word_split_def bit_slice_iff by (fastforce simp add: \<section> ac_simps word_size bit_ucast_iff bit_drop_bit_eq) qed show "v = slice 0 w" by (metis Pair_inject assms ucast_slice word_split_bin') qed lemma slice_cat1 [OF refl]: "\<lbrakk>wc = word_cat a b; size a + size b \<le> size wc\<rbrakk> \<Longrightarrow> slice (size b) wc = a" by (rule bit_word_eqI) (auto simp add: bit_slice_iff bit_word_cat_iff word_size) lemmas slice_cat2 = trans [OF slice_id word_cat_id] lemma cat_slices: "\<lbrakk>a = slice n c; b = slice 0 c; n = size b; size c \<le> size a + size b\<rbrakk> \<Longrightarrow> word_cat a b = c" by (rule bit_word_eqI) (auto simp add: bit_slice_iff bit_word_cat_iff word_size) lemma word_split_cat_alt: assumes "w = word_cat u v" and size: "size u + size v \<le> size w" shows "word_split w = (u,v)" proof - have "ucast ((drop_bit LENGTH('c) (word_cat u v))::'a word) = u" "ucast ((word_cat u v)::'a word) = v" using assms by (auto simp add: word_size bit_ucast_iff bit_drop_bit_eq bit_word_cat_iff intro: bit_eqI) then show ?thesis by (simp add: assms(1) word_split_bin') qed lemma horner_sum_uint_exp_Cons_eq: \<open>horner_sum uint (2 ^ LENGTH('a)) (w # ws) = concat_bit LENGTH('a) (uint w) (horner_sum uint (2 ^ LENGTH('a)) ws)\<close> for ws :: \<open>'a::len word list\<close> by (simp add: bintr_uint concat_bit_eq push_bit_eq_mult) lemma bit_horner_sum_uint_exp_iff: \<open>bit (horner_sum uint (2 ^ LENGTH('a)) ws) n \<longleftrightarrow> n div LENGTH('a) < length ws \<and> bit (ws ! 
(n div LENGTH('a))) (n mod LENGTH('a))\<close> for ws :: \<open>'a::len word list\<close> proof (induction ws arbitrary: n) case Nil then show ?case by simp next case (Cons w ws) then show ?case by (cases \<open>n \<ge> LENGTH('a)\<close>) (simp_all only: horner_sum_uint_exp_Cons_eq, simp_all add: bit_concat_bit_iff le_div_geq le_mod_geq bit_uint_iff Cons) qed subsection \<open>Rotation\<close> lemma word_rotr_word_rotr_eq: \<open>word_rotr m (word_rotr n w) = word_rotr (m + n) w\<close> by (rule bit_word_eqI) (simp add: bit_word_rotr_iff ac_simps mod_add_right_eq) lemma word_rot_lem: "\<lbrakk>l + k = d + k mod l; n < l\<rbrakk> \<Longrightarrow> ((d + n) mod l) = n" for l::nat by (metis (no_types, lifting) add.commute add.right_neutral add_diff_cancel_left' mod_if mod_mult_div_eq mod_mult_self2 mod_self) lemma word_rot_rl [simp]: \<open>word_rotl k (word_rotr k v) = v\<close> proof (rule bit_word_eqI) show "bit (word_rotl k (word_rotr k v)) n = bit v n" if "n < LENGTH('a)" for n using that by (auto simp: word_rot_lem word_rotl_eq_word_rotr word_rotr_word_rotr_eq bit_word_rotr_iff algebra_simps split: nat_diff_split) qed lemma word_rot_lr [simp]: \<open>word_rotr k (word_rotl k v) = v\<close> proof (rule bit_word_eqI) show "bit (word_rotr k (word_rotl k v)) n = bit v n" if "n < LENGTH('a)" for n using that by (auto simp add: word_rot_lem word_rotl_eq_word_rotr word_rotr_word_rotr_eq bit_word_rotr_iff algebra_simps split: nat_diff_split) qed lemma word_rot_gal: \<open>word_rotr n v = w \<longleftrightarrow> word_rotl n w = v\<close> by auto lemma word_rot_gal': \<open>w = word_rotr n v \<longleftrightarrow> v = word_rotl n w\<close> by auto lemma word_rotr_rev: \<open>word_rotr n w = word_reverse (word_rotl n (word_reverse w))\<close> proof (rule bit_word_eqI) fix m assume \<open>m < LENGTH('a)\<close> moreover have \<open>1 + ((int m + int n mod int LENGTH('a)) mod int LENGTH('a) + ((int LENGTH('a) * 2) mod int LENGTH('a) - (1 + (int m + int n mod int LENGTH('a)))) mod int LENGTH('a)) = int LENGTH('a)\<close> apply (cases \<open>(1 + (int m + int n mod int LENGTH('a))) mod int LENGTH('a) = 0\<close>) using zmod_zminus1_eq_if [of \<open>1 + (int m + int n mod int LENGTH('a))\<close> \<open>int LENGTH('a)\<close>] apply simp_all apply (auto simp add: algebra_simps) apply (metis (mono_tags, opaque_lifting) Abs_fnat_hom_add mod_Suc mod_mult_self2_is_0 of_nat_Suc of_nat_mod semiring_char_0_class.of_nat_neq_0) apply (metis (no_types, opaque_lifting) Abs_fnat_hom_add less_not_refl mod_Suc of_nat_Suc of_nat_gt_0 of_nat_mod) done then have \<open>int ((m + n) mod LENGTH('a)) = int (LENGTH('a) - Suc ((LENGTH('a) - Suc m + LENGTH('a) - n mod LENGTH('a)) mod LENGTH('a)))\<close> using \<open>m < LENGTH('a)\<close> by (simp only: of_nat_mod mod_simps) (simp add: of_nat_diff of_nat_mod Suc_le_eq add_less_mono algebra_simps mod_simps) then have \<open>(m + n) mod LENGTH('a) = LENGTH('a) - Suc ((LENGTH('a) - Suc m + LENGTH('a) - n mod LENGTH('a)) mod LENGTH('a))\<close> by simp ultimately show \<open>bit (word_rotr n w) m \<longleftrightarrow> bit (word_reverse (word_rotl n (word_reverse w))) m\<close> by (simp add: word_rotl_eq_word_rotr bit_word_rotr_iff bit_word_reverse_iff) qed lemma word_roti_0 [simp]: "word_roti 0 w = w" by transfer simp lemma word_roti_add: "word_roti (m + n) w = word_roti m (word_roti n w)" by (rule bit_word_eqI) (simp add: bit_word_roti_iff nat_less_iff mod_simps ac_simps) lemma word_roti_conv_mod': "word_roti n w = word_roti (n mod int (size w)) w" by transfer simp lemmas 
word_roti_conv_mod = word_roti_conv_mod' [unfolded word_size] end subsubsection \<open>"Word rotation commutes with bit-wise operations\<close> \<comment> \<open>using locale to not pollute lemma namespace\<close> locale word_rotate begin context includes bit_operations_syntax begin lemma word_rot_logs: "word_rotl n (NOT v) = NOT (word_rotl n v)" "word_rotr n (NOT v) = NOT (word_rotr n v)" "word_rotl n (x AND y) = word_rotl n x AND word_rotl n y" "word_rotr n (x AND y) = word_rotr n x AND word_rotr n y" "word_rotl n (x OR y) = word_rotl n x OR word_rotl n y" "word_rotr n (x OR y) = word_rotr n x OR word_rotr n y" "word_rotl n (x XOR y) = word_rotl n x XOR word_rotl n y" "word_rotr n (x XOR y) = word_rotr n x XOR word_rotr n y" by (rule bit_word_eqI, auto simp add: bit_word_rotl_iff bit_word_rotr_iff bit_and_iff bit_or_iff bit_xor_iff bit_not_iff algebra_simps not_le)+ end end lemmas word_rot_logs = word_rotate.word_rot_logs lemma word_rotx_0 [simp] : "word_rotr i 0 = 0 \<and> word_rotl i 0 = 0" by transfer simp_all lemma word_roti_0' [simp] : "word_roti n 0 = 0" by transfer simp declare word_roti_eq_word_rotr_word_rotl [simp] subsection \<open>Maximum machine word\<close> context includes bit_operations_syntax begin lemma word_int_cases: fixes x :: "'a::len word" obtains n where "x = word_of_int n" and "0 \<le> n" and "n < 2^LENGTH('a)" by (rule that [of \<open>uint x\<close>]) simp_all lemma word_nat_cases [cases type: word]: fixes x :: "'a::len word" obtains n where "x = of_nat n" and "n < 2^LENGTH('a)" by (rule that [of \<open>unat x\<close>]) simp_all lemma max_word_max [intro!]: \<open>n \<le> - 1\<close> for n :: \<open>'a::len word\<close> by (fact word_order.extremum) lemma word_of_int_2p_len: "word_of_int (2 ^ LENGTH('a)) = (0::'a::len word)" by simp lemma word_pow_0: "(2::'a::len word) ^ LENGTH('a) = 0" by (fact word_exp_length_eq_0) lemma max_word_wrap: \<open>x + 1 = 0 \<Longrightarrow> x = - 1\<close> for x :: \<open>'a::len word\<close> by (simp add: eq_neg_iff_add_eq_0) lemma word_and_max: \<open>x AND - 1 = x\<close> for x :: \<open>'a::len word\<close> by (fact word_log_esimps) lemma word_or_max: \<open>x OR - 1 = - 1\<close> for x :: \<open>'a::len word\<close> by (fact word_log_esimps) lemma word_ao_dist2: "x AND (y OR z) = x AND y OR x AND z" for x y z :: "'a::len word" by (fact bit.conj_disj_distrib) lemma word_oa_dist2: "x OR y AND z = (x OR y) AND (x OR z)" for x y z :: "'a::len word" by (fact bit.disj_conj_distrib) lemma word_and_not [simp]: "x AND NOT x = 0" for x :: "'a::len word" by (fact bit.conj_cancel_right) lemma word_or_not [simp]: \<open>x OR NOT x = - 1\<close> for x :: \<open>'a::len word\<close> by (fact bit.disj_cancel_right) lemma word_xor_and_or: "x XOR y = x AND NOT y OR NOT x AND y" for x y :: "'a::len word" by (fact bit.xor_def) lemma uint_lt_0 [simp]: "uint x < 0 = False" by (simp add: linorder_not_less) lemma word_less_1 [simp]: "x < 1 \<longleftrightarrow> x = 0" for x :: "'a::len word" by (simp add: word_less_nat_alt unat_0_iff) lemma uint_plus_if_size: "uint (x + y) = (if uint x + uint y < 2^size x then uint x + uint y else uint x + uint y - 2^size x)" by (simp add: take_bit_eq_mod word_size uint_word_of_int_eq uint_plus_if') lemma unat_plus_if_size: "unat (x + y) = (if unat x + unat y < 2^size x then unat x + unat y else unat x + unat y - 2^size x)" for x y :: "'a::len word" by (simp add: size_word.rep_eq unat_arith_simps) lemma word_neq_0_conv: "w \<noteq> 0 \<longleftrightarrow> 0 < w" for w :: "'a::len word" by (fact 
word_coorder.not_eq_extremum) lemma max_lt: "unat (max a b div c) = unat (max a b) div unat c" for c :: "'a::len word" by (fact unat_div) lemma uint_sub_if_size: "uint (x - y) = (if uint y \<le> uint x then uint x - uint y else uint x - uint y + 2^size x)" by (simp add: size_word.rep_eq uint_sub_if') lemma unat_sub: \<open>unat (a - b) = unat a - unat b\<close> if \<open>b \<le> a\<close> by (meson that unat_sub_if_size word_le_nat_alt) lemmas word_less_sub1_numberof [simp] = word_less_sub1 [of "numeral w"] for w lemmas word_le_sub1_numberof [simp] = word_le_sub1 [of "numeral w"] for w lemma word_of_int_minus: "word_of_int (2^LENGTH('a) - i) = (word_of_int (-i)::'a::len word)" by simp lemma word_of_int_inj: \<open>(word_of_int x :: 'a::len word) = word_of_int y \<longleftrightarrow> x = y\<close> if \<open>0 \<le> x \<and> x < 2 ^ LENGTH('a)\<close> \<open>0 \<le> y \<and> y < 2 ^ LENGTH('a)\<close> using that by (transfer fixing: x y) (simp add: take_bit_int_eq_self) lemma word_le_less_eq: "x \<le> y \<longleftrightarrow> x = y \<or> x < y" for x y :: "'z::len word" by (auto simp add: order_class.le_less) lemma mod_plus_cong: fixes b b' :: int assumes 1: "b = b'" and 2: "x mod b' = x' mod b'" and 3: "y mod b' = y' mod b'" and 4: "x' + y' = z'" shows "(x + y) mod b = z' mod b'" proof - from 1 2[symmetric] 3[symmetric] have "(x + y) mod b = (x' mod b' + y' mod b') mod b'" by (simp add: mod_add_eq) also have "\<dots> = (x' + y') mod b'" by (simp add: mod_add_eq) finally show ?thesis by (simp add: 4) qed lemma mod_minus_cong: fixes b b' :: int assumes "b = b'" and "x mod b' = x' mod b'" and "y mod b' = y' mod b'" and "x' - y' = z'" shows "(x - y) mod b = z' mod b'" using assms [symmetric] by (auto intro: mod_diff_cong) lemma word_induct_less [case_names zero less]: \<open>P m\<close> if zero: \<open>P 0\<close> and less: \<open>\<And>n. n < m \<Longrightarrow> P n \<Longrightarrow> P (1 + n)\<close> for m :: \<open>'a::len word\<close> proof - define q where \<open>q = unat m\<close> with less have \<open>\<And>n. n < word_of_nat q \<Longrightarrow> P n \<Longrightarrow> P (1 + n)\<close> by simp then have \<open>P (word_of_nat q :: 'a word)\<close> proof (induction q) case 0 show ?case by (simp add: zero) next case (Suc q) show ?case proof (cases \<open>1 + word_of_nat q = (0 :: 'a word)\<close>) case True then show ?thesis by (simp add: zero) next case False then have *: \<open>word_of_nat q < (word_of_nat (Suc q) :: 'a word)\<close> by (simp add: unatSuc word_less_nat_alt) then have **: \<open>n < (1 + word_of_nat q :: 'a word) \<longleftrightarrow> n \<le> (word_of_nat q :: 'a word)\<close> for n by (metis (no_types, lifting) add.commute inc_le le_less_trans not_less of_nat_Suc) have \<open>P (word_of_nat q)\<close> by (simp add: "**" Suc.IH Suc.prems) with * have \<open>P (1 + word_of_nat q)\<close> by (rule Suc.prems) then show ?thesis by simp qed qed with \<open>q = unat m\<close> show ?thesis by simp qed lemma word_induct: "P 0 \<Longrightarrow> (\<And>n. P n \<Longrightarrow> P (1 + n)) \<Longrightarrow> P m" for P :: "'a::len word \<Rightarrow> bool" by (rule word_induct_less) lemma word_induct2 [case_names zero suc, induct type]: "P 0 \<Longrightarrow> (\<And>n. 
1 + n \<noteq> 0 \<Longrightarrow> P n \<Longrightarrow> P (1 + n)) \<Longrightarrow> P n" for P :: "'b::len word \<Rightarrow> bool" by (induction rule: word_induct_less; force) subsection \<open>Recursion combinator for words\<close> definition word_rec :: "'a \<Rightarrow> ('b::len word \<Rightarrow> 'a \<Rightarrow> 'a) \<Rightarrow> 'b word \<Rightarrow> 'a" where "word_rec forZero forSuc n = rec_nat forZero (forSuc \<circ> of_nat) (unat n)" lemma word_rec_0 [simp]: "word_rec z s 0 = z" by (simp add: word_rec_def) lemma word_rec_Suc [simp]: "1 + n \<noteq> 0 \<Longrightarrow> word_rec z s (1 + n) = s n (word_rec z s n)" for n :: "'a::len word" by (simp add: unatSuc word_rec_def) lemma word_rec_Pred: "n \<noteq> 0 \<Longrightarrow> word_rec z s n = s (n - 1) (word_rec z s (n - 1))" by (metis add.commute diff_add_cancel word_rec_Suc) lemma word_rec_in: "f (word_rec z (\<lambda>_. f) n) = word_rec (f z) (\<lambda>_. f) n" by (induct n) simp_all lemma word_rec_in2: "f n (word_rec z f n) = word_rec (f 0 z) (f \<circ> (+) 1) n" by (induct n) simp_all lemma word_rec_twice: "m \<le> n \<Longrightarrow> word_rec z f n = word_rec (word_rec z f (n - m)) (f \<circ> (+) (n - m)) m" proof (induction n arbitrary: z f) case zero then show ?case by (metis diff_0_right word_le_0_iff word_rec_0) next case (suc n z f) show ?case proof (cases "1 + (n - m) = 0") case True then show ?thesis by (simp add: add_diff_eq) next case False then have eq: "1 + n - m = 1 + (n - m)" by simp with False have "m \<le> n" by (metis "suc.prems" add.commute dual_order.antisym eq_iff_diff_eq_0 inc_le leI) with False "suc.hyps" show ?thesis using suc.IH [of "f 0 z" "f \<circ> (+) 1"] by (simp add: word_rec_in2 eq add.assoc o_def) qed qed lemma word_rec_id: "word_rec z (\<lambda>_. id) n = z" by (induct n) auto lemma word_rec_id_eq: "(\<And>m. m < n \<Longrightarrow> f m = id) \<Longrightarrow> word_rec z f n = z" by (induction n) (auto simp add: unatSuc unat_arith_simps(2)) lemma word_rec_max: assumes "\<forall>m\<ge>n. m \<noteq> - 1 \<longrightarrow> f m = id" shows "word_rec z f (- 1) = word_rec z f n" proof - have \<section>: "\<And>m. \<lbrakk>m < - 1 - n\<rbrakk> \<Longrightarrow> (f \<circ> (+) n) m = id" using assms by (metis (mono_tags, lifting) add.commute add_diff_cancel_left' comp_apply less_le olen_add_eqv plus_minus_no_overflow word_n1_ge) have "word_rec z f (- 1) = word_rec (word_rec z f (- 1 - (- 1 - n))) (f \<circ> (+) (- 1 - (- 1 - n))) (- 1 - n)" by (meson word_n1_ge word_rec_twice) also have "... = word_rec z f n" by (metis (no_types, lifting) \<section> diff_add_cancel minus_diff_eq uminus_add_conv_diff word_rec_id_eq) finally show ?thesis . qed end subsection \<open>Tool support\<close> ML_file \<open>Tools/smt_word.ML\<close> end
------------------------------------------------------------------------ -- Polymorphic and iso-recursive types ------------------------------------------------------------------------ module SystemF.Type where open import Data.Fin using (Fin; zero; suc) open import Data.Fin.Substitution open import Data.Fin.Substitution.Lemmas open import Data.Nat using (ℕ; _+_) open import Data.Star using (Star; ε; _◅_) open import Data.Vec using (Vec; []; _∷_; lookup; map) open import Relation.Binary.PropositionalEquality as PropEq using (refl; _≡_; cong; cong₂; sym) open PropEq.≡-Reasoning ------------------------------------------------------------------------ -- Polymorphic and iso-recursive types infixr 7 _→'_ -- Types with up to n free type variables data Type (n : ℕ) : Set where var : Fin n → Type n -- type variable _→'_ : Type n → Type n → Type n -- arrow/function type ∀' : Type (1 + n) → Type n -- universal type μ : Type (1 + n) → Type n -- recursive type ------------------------------------------------------------------------ -- Substitutions in types module TypeSubst where module TypeApp {T : ℕ → Set} (l : Lift T Type) where open Lift l hiding (var) infixl 8 _/_ -- Apply a substitution to a type _/_ : ∀ {m n} → Type m → Sub T m n → Type n var x / σ = lift (lookup σ x) (a →' b) / σ = (a / σ) →' (b / σ) ∀' a / σ = ∀' (a / σ ↑) μ a / σ = μ (a / σ ↑) open Application (record { _/_ = _/_ }) using (_/✶_) -- Helper lemmas relating application of simple substitutions (_/_) -- to application of sequences of substititions (_/✶_). These are -- used to derive other (more general) lemmas below. →'-/✶-↑✶ : ∀ k {m n a b} (ρs : Subs T m n) → (a →' b) /✶ ρs ↑✶ k ≡ (a /✶ ρs ↑✶ k) →' (b /✶ ρs ↑✶ k) →'-/✶-↑✶ k ε = refl →'-/✶-↑✶ k (ρ ◅ ρs) = cong₂ _/_ (→'-/✶-↑✶ k ρs) refl ∀'-/✶-↑✶ : ∀ k {m n a} (ρs : Subs T m n) → (∀' a) /✶ ρs ↑✶ k ≡ ∀' (a /✶ ρs ↑✶ (1 + k)) ∀'-/✶-↑✶ k ε = refl ∀'-/✶-↑✶ k (ρ ◅ ρs) = cong₂ _/_ (∀'-/✶-↑✶ k ρs) refl μT-/✶-↑✶ : ∀ k {m n a} (ρs : Subs T m n) → (μ a) /✶ ρs ↑✶ k ≡ μ (a /✶ ρs ↑✶ (1 + k)) μT-/✶-↑✶ k ε = refl μT-/✶-↑✶ k (ρ ◅ ρs) = cong₂ _/_ (μT-/✶-↑✶ k ρs) refl -- Defining the abstract members var and _/_ in -- Data.Fin.Substitution.TermSubst for T = Type gives us access to a -- number of (generic) substitution functions out-of-the-box. typeSubst : TermSubst Type typeSubst = record { var = var; app = TypeApp._/_ } open TermSubst typeSubst public hiding (var) weaken↑ : ∀ {n} → Type (1 + n) → Type (2 + n) weaken↑ a = a / wk ↑ infix 8 _[/_] -- Shorthand for single-variable type substitutions _[/_] : ∀ {n} → Type (1 + n) → Type n → Type n a [/ b ] = a / sub b -- Substitution lemmas. module TypeLemmas where -- FIXME: The following lemmas are generic and should go somewhere -- else. module AdditionalLemmas {T} (lemmas : TermLemmas T) where open TermLemmas lemmas -- Weakening commutes with single-variable substitution weaken-sub : ∀ {n} (a : T (1 + n)) (b : T n) → weaken (a / sub b) ≡ a / wk ↑ / sub (weaken b) weaken-sub a b = begin weaken (a / sub b) ≡⟨ sym (/-wk′ (a / sub b)) ⟩ a / sub b / wk ≡⟨ sub-commutes a ⟩ a / wk ↑ / sub (b / wk) ≡⟨ cong (λ c → a / wk ↑ / sub c) (/-wk′ b) ⟩ a / wk ↑ / sub (weaken b) ∎ where /-wk′ : ∀ {n} (a : T n) → a / wk ≡ weaken a /-wk′ a = /-wk {t = a} -- Weakening commutes with reverse composition of substitutions. 
map-weaken-⊙ : ∀ {m n k} (σ₁ : Sub T m n) (σ₂ : Sub T n k) → map weaken (σ₁ ⊙ σ₂) ≡ (map weaken σ₁) ⊙ (σ₂ ↑) map-weaken-⊙ σ₁ σ₂ = begin map weaken (σ₁ ⊙ σ₂) ≡⟨ map-weaken ⟩ (σ₁ ⊙ σ₂) ⊙ wk ≡⟨ sym ⊙-assoc ⟩ σ₁ ⊙ (σ₂ ⊙ wk) ≡⟨ cong (λ σ₂ → σ₁ ⊙ σ₂) ⊙-wk ⟩ σ₁ ⊙ (wk ⊙ (σ₂ ↑)) ≡⟨ ⊙-assoc ⟩ (σ₁ ⊙ wk) ⊙ (σ₂ ↑) ≡⟨ cong (λ σ₁ → σ₁ ⊙ (σ₂ ↑)) (sym map-weaken) ⟩ (map weaken σ₁) ⊙ (σ₂ ↑) ∎ -- Giving concrete definitions (i.e. proofs) for the abstract members -- (i.e. lemmas) in Data.Fin.Substitution.Lemmas.TermLemmas for T = -- Type gives us access to a number of (generic) substitutions lemmas -- out-of-the-box. typeLemmas : TermLemmas Type typeLemmas = record { termSubst = TypeSubst.typeSubst ; app-var = refl ; /✶-↑✶ = Lemma./✶-↑✶ } where module Lemma {T₁ T₂} {lift₁ : Lift T₁ Type} {lift₂ : Lift T₂ Type} where open TypeSubst open Lifted lift₁ using () renaming (_↑✶_ to _↑✶₁_; _/✶_ to _/✶₁_) open Lifted lift₂ using () renaming (_↑✶_ to _↑✶₂_; _/✶_ to _/✶₂_) /✶-↑✶ : ∀ {m n} (ρs₁ : Subs T₁ m n) (ρs₂ : Subs T₂ m n) → (∀ k x → var x /✶₁ ρs₁ ↑✶₁ k ≡ var x /✶₂ ρs₂ ↑✶₂ k) → ∀ k t → t /✶₁ ρs₁ ↑✶₁ k ≡ t /✶₂ ρs₂ ↑✶₂ k /✶-↑✶ ρs₁ ρs₂ hyp k (var x) = hyp k x /✶-↑✶ ρs₁ ρs₂ hyp k (a →' b) = begin (a →' b) /✶₁ ρs₁ ↑✶₁ k ≡⟨ TypeApp.→'-/✶-↑✶ _ k ρs₁ ⟩ (a /✶₁ ρs₁ ↑✶₁ k) →' (b /✶₁ ρs₁ ↑✶₁ k) ≡⟨ cong₂ _→'_ (/✶-↑✶ ρs₁ ρs₂ hyp k a) (/✶-↑✶ ρs₁ ρs₂ hyp k b) ⟩ (a /✶₂ ρs₂ ↑✶₂ k) →' (b /✶₂ ρs₂ ↑✶₂ k) ≡⟨ sym (TypeApp.→'-/✶-↑✶ _ k ρs₂) ⟩ (a →' b) /✶₂ ρs₂ ↑✶₂ k ∎ /✶-↑✶ ρs₁ ρs₂ hyp k (∀' a) = begin (∀' a) /✶₁ ρs₁ ↑✶₁ k ≡⟨ TypeApp.∀'-/✶-↑✶ _ k ρs₁ ⟩ ∀' (a /✶₁ ρs₁ ↑✶₁ (1 + k)) ≡⟨ cong ∀' (/✶-↑✶ ρs₁ ρs₂ hyp (1 + k) a) ⟩ ∀' (a /✶₂ ρs₂ ↑✶₂ (1 + k)) ≡⟨ sym (TypeApp.∀'-/✶-↑✶ _ k ρs₂) ⟩ (∀' a) /✶₂ ρs₂ ↑✶₂ k ∎ /✶-↑✶ ρs₁ ρs₂ hyp k (μ a) = begin (μ a) /✶₁ ρs₁ ↑✶₁ k ≡⟨ TypeApp.μT-/✶-↑✶ _ k ρs₁ ⟩ μ (a /✶₁ ρs₁ ↑✶₁ (1 + k)) ≡⟨ cong μ (/✶-↑✶ ρs₁ ρs₂ hyp (1 + k) a) ⟩ μ (a /✶₂ ρs₂ ↑✶₂ (1 + k)) ≡⟨ sym (TypeApp.μT-/✶-↑✶ _ k ρs₂) ⟩ (μ a) /✶₂ ρs₂ ↑✶₂ k ∎ open TermLemmas typeLemmas public hiding (var) open TypeSubst public using (_[/_]; _/Var_; weaken↑; module Lifted) -- The above lemma /✶-↑✶ specialized to single substitutions /-↑⋆ : ∀ {T₁ T₂} {lift₁ : Lift T₁ Type} {lift₂ : Lift T₂ Type} → let open Lifted lift₁ using () renaming (_↑⋆_ to _↑⋆₁_; _/_ to _/₁_) open Lifted lift₂ using () renaming (_↑⋆_ to _↑⋆₂_; _/_ to _/₂_) in ∀ {n k} (ρ₁ : Sub T₁ n k) (ρ₂ : Sub T₂ n k) → (∀ i x → var x /₁ ρ₁ ↑⋆₁ i ≡ var x /₂ ρ₂ ↑⋆₂ i) → ∀ i a → a /₁ ρ₁ ↑⋆₁ i ≡ a /₂ ρ₂ ↑⋆₂ i /-↑⋆ ρ₁ ρ₂ hyp i a = /✶-↑✶ (ρ₁ ◅ ε) (ρ₂ ◅ ε) hyp i a open AdditionalLemmas typeLemmas public ------------------------------------------------------------------------ -- Encoding/translation of additional type operators module TypeOperators where open TypeLemmas hiding (id) -- Type of the polymorphic identity id : ∀ {n} → Type n id = ∀' ((var zero) →' (var zero)) -- Bottom/initial/zero type ⊥ : ∀ {n} → Type n ⊥ = ∀' (var zero) -- Top/terminal/unit type ⊤ : ∀ {n} → Type n ⊤ = id -- Existential type ∃ : ∀ {n} → Type (1 + n) → Type n ∃ a = ∀' (∀' (weaken↑ a →' var (suc zero)) →' var zero) infixr 7 _→ⁿ_ -- n-ary function type _→ⁿ_ : ∀ {n k} → Vec (Type n) k → Type n → Type n [] →ⁿ z = z (a ∷ as) →ⁿ z = as →ⁿ a →' z -- Record/finite tuple rec : ∀ {n k} → Vec (Type n) k → Type n rec [] = ⊤ rec (a ∷ as) = ∀' ((map weaken (a ∷ as) →ⁿ var zero) →' var zero) ------------------------------------------------------------------------ -- Lemmas about encoded type operators module TypeOperatorLemmas where open TypeOperators open TypeLemmas hiding (_/_; id) module TypeOperatorAppLemmas {T} (l : Lift T Type) where open 
      TypeSubst.TypeApp l

    -- Type substitution commutes with the translation of n-ary
    -- function types
    /-→ⁿ : ∀ {m n k} (as : Vec (Type m) k) (b : Type m) (σ : Sub T m n) →
           (as →ⁿ b) / σ ≡ map (λ a → a / σ) as →ⁿ b / σ
    /-→ⁿ []       _ _ = refl
    /-→ⁿ (a ∷ as) b σ = /-→ⁿ as (a →' b) σ

    -- FIXME: Write similar lemmas for the remaining type operators

  private
    module WeakeningLemmas where
      open TypeOperatorAppLemmas TypeSubst.varLift

      -- Weakening commutes with the translation of n-ary function
      -- types
      weaken-→ⁿ : ∀ {n k} (as : Vec (Type n) k) (b : Type n) →
                  weaken (as →ⁿ b) ≡ map weaken as →ⁿ weaken b
      weaken-→ⁿ as b = /-→ⁿ as b VarSubst.wk

      -- FIXME: Write similar lemmas for the remaining type operators

  open TypeOperatorAppLemmas TypeSubst.termLift public
  open WeakeningLemmas public
Formal statement is: lemma scaleR_mono: "a \<le> b \<Longrightarrow> x \<le> y \<Longrightarrow> 0 \<le> b \<Longrightarrow> 0 \<le> x \<Longrightarrow> a *\<^sub>R x \<le> b *\<^sub>R y" Informal statement is: If $a \leq b$, $x \leq y$, $0 \leq b$, and $0 \leq x$, then $a \cdot x \leq b \cdot y$.
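A one-line informal proof of the above (added for clarity; the standard chaining argument):
$$a \cdot x \;\le\; b \cdot x \;\le\; b \cdot y,$$
where the first step uses $a \le b$ together with $0 \le x$, and the second uses $x \le y$ together with $0 \le b$.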
State Before:
α : Type u_1
inst✝ : Ring α
n✝¹ : ℤ
n✝ : ℕ
⊢ ↑n✝¹ ^ ↑n✝ = ↑(Int.pow n✝¹ n✝)
State After: no goals
Tactic: simp
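Reading of the printed goal (note added for orientation, not part of the original sample): in a ring $\alpha$, casting commutes with taking powers, i.e. the cast of `Int.pow n✝¹ n✝` equals the corresponding power of the casts; the Tactic line records that `simp` discharges this directly.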
Require Import RelationClasses. Require Import Program. From sflib Require Import sflib. From Paco Require Import paco. From PromisingLib Require Import Axioms. From PromisingLib Require Import Basic. From PromisingLib Require Import Loc. From PromisingLib Require Import DenseOrder. From PromisingLib Require Import Language. From PromisingLib Require Import Event. Require Import Time. Require Import View. Require Import Cell. Require Import Memory. Require Import MemoryFacts. Require Import TView. Require Import Local. Require Import Thread. Require Import Configuration. Require Import PromiseConsistent. Require Import Cover. Require Import MemorySplit. Require Import MemoryMerge. Require Import FulfillStep. Require Import Pred. Require Import Trace. Require Import MemoryProps. Require Import LowerMemory. Require Import FulfillStep. Require Import ReorderStepPromise. Require Import Pred. Require Import Trace. Require Import SeqLib. Set Implicit Arguments. Variant lower_step {lang} e (th0 th1: Thread.t lang): Prop := | lower_step_intro (STEP: Thread.program_step e th0 th1) (NRELEASE: ~ release_event e) (MEM: lower_memory th1.(Thread.memory) th0.(Thread.memory)) (SAME: is_na_write e -> th1.(Thread.memory) = th0.(Thread.memory)) . Lemma lower_step_step lang: (@lower_step lang) <3= (@Thread.step lang true). Proof. i. inv PR. econs 2. ss. Qed. Lemma tau_lower_step_tau_step lang: tau (@lower_step lang) <2= (@Thread.tau_step lang). Proof. apply tau_mon. i. inv PR. econs. econs 2. ss. Qed. Definition is_lower_kind (kind: Memory.op_kind) (msg: Message.t): Prop := match kind with | Memory.op_kind_lower msg' => msg = msg' | _ => False end. Lemma write_lower_memory_lower prom0 mem0 loc from to msg prom1 mem1 kind (WRITE: Memory.write prom0 mem0 loc from to msg prom1 mem1 kind) (LOWER: lower_memory mem1 mem0): Memory.op_kind_is_lower kind. Proof. inv WRITE. inv PROMISE; eauto. { eapply Memory.add_get0 in MEM. des. hexploit lower_memory_get_inv; eauto. i. des; clarify. } { eapply Memory.split_get0 in MEM. des. hexploit lower_memory_get_inv; try apply GET1; eauto. i. des; clarify. } { eapply Memory.remove_get0 in MEM. des. hexploit lower_memory_get; eauto. i. des; clarify. } Qed. Lemma write_same_memory_same prom0 mem0 loc from to msg prom1 mem1 kind (WRITE: Memory.write prom0 mem0 loc from to msg prom1 mem1 kind) (LOWER: mem1 = mem0): is_lower_kind kind msg. Proof. inv WRITE. inv PROMISE; eauto. { eapply Memory.add_get0 in MEM. des. clarify. } { eapply Memory.split_get0 in MEM. des. clarify. } { eapply Memory.lower_get0 in MEM. des. clarify. } { eapply Memory.remove_get0 in MEM. des. clarify. } Qed. Lemma write_na_future ts prom0 mem0 loc from to msg prom1 mem1 msgs kinds kind (WRITE: Memory.write_na ts prom0 mem0 loc from to msg prom1 mem1 msgs kinds kind): Memory.future mem0 mem1. Proof. induction WRITE. - inv WRITE. exploit Memory.promise_op; eauto. i. econs 2; eauto. econs; eauto. econs. apply Time.bot_spec. - etrans; try exact IHWRITE. inv WRITE_EX. exploit Memory.promise_op; eauto. i. econs 2; eauto. econs; eauto. + unguard. des; subst; ss. econs. ss. + unguard. des; subst; ss. econs. apply Time.bot_spec. Qed. Lemma write_na_lower_memory_lower ts prom0 mem0 loc from to msg prom1 mem1 msgs kinds kind (WRITE: Memory.write_na ts prom0 mem0 loc from to msg prom1 mem1 msgs kinds kind) (LOWER: mem1 = mem0) : (<<KINDS: List.Forall2 (fun kind '(_, _, msg) => is_lower_kind kind msg) kinds msgs>>) /\ (<<KIND: is_lower_kind kind (Message.concrete msg None)>>) . Proof. induction WRITE. 
{ hexploit write_same_memory_same; eauto. } exploit write_na_future; try exact WRITE. i. inv WRITE_EX. inv PROMISE. - exploit Memory.add_get0; try exact MEM. i. des. exploit Memory.future_get1; try exact GET0; eauto. { unguard. des; subst; ss. } i. des. clarify. - exploit Memory.split_get0; try exact MEM. i. des. exploit Memory.future_get1; try exact GET1; eauto. i. des. clarify. - cut (mem1 = mem'). { i. clarify. exploit IHWRITE; eauto. i. des. splits; auto. econs; eauto. ss. eapply Memory.lower_get0 in MEM. des; clarify. } eapply Memory.ext. i. erewrite (@Memory.lower_o mem'); eauto. condtac; ss. des. subst. exploit Memory.lower_get0; try exact MEM. i. des. exploit Memory.future_get1; try exact GET0; eauto. i. des. clarify. rewrite GET1. f_equal. f_equal. eapply Message.antisym; eauto. - unguard. des; subst; ss. Qed. Lemma write_step_lower_memory_lower lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind (STEP: Local.write_step lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind) (LOWER: lower_memory mem2 mem1): Memory.op_kind_is_lower kind. Proof. inv STEP. eapply write_lower_memory_lower; eauto. Qed. Lemma write_step_lower_memory_lower_same lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind (STEP: Local.write_step lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind) (LOWER: mem2 = mem1): is_lower_kind kind (Message.concrete val released). Proof. inv STEP. eapply write_same_memory_same; eauto. Qed. Lemma write_na_step_lower_memory_lower lc1 sc1 mem1 loc from to val ord lc2 sc2 mem2 msgs kinds kind (STEP: Local.write_na_step lc1 sc1 mem1 loc from to val ord lc2 sc2 mem2 msgs kinds kind) (LOWER: mem2 = mem1): (<<KINDS: List.Forall2 (fun kind '(_, _, msg) => is_lower_kind kind msg) kinds msgs>>) /\ (<<KIND: is_lower_kind kind (Message.concrete val None)>>) . Proof. inv STEP. eapply write_na_lower_memory_lower; eauto. Qed. Lemma write_lower_lower_memory promises1 mem1 loc from to msg promises2 mem2 kind (WRITE: Memory.write promises1 mem1 loc from to msg promises2 mem2 kind) (KIND: Memory.op_kind_is_lower kind): lower_memory mem2 mem1. Proof. inv WRITE. inv PROMISE; ss. econs. i. erewrite (@Memory.lower_o mem2); eauto. condtac; ss; try refl. des. subst. exploit Memory.lower_get0; try exact MEM. i. des. rewrite GET. econs. ss. Qed. Lemma write_lower_lower_memory_same promises1 mem1 loc from to msg promises2 mem2 kind (WRITE: Memory.write promises1 mem1 loc from to msg promises2 mem2 kind) (KIND: is_lower_kind kind msg): mem2 = mem1. Proof. inv WRITE. inv PROMISE; ss. eapply Memory.ext. i. erewrite (@Memory.lower_o mem2); eauto. condtac; ss; try refl. des. subst. exploit Memory.lower_get0; try exact MEM. i. des. rewrite GET. econs. Qed. Lemma write_na_lower_lower_memory ts promises1 mem1 loc from to val promises2 mem2 msgs kinds kind (WRITE: Memory.write_na ts promises1 mem1 loc from to val promises2 mem2 msgs kinds kind) (KINDS: List.Forall2 (fun kind '(_, _, msg) => is_lower_kind kind msg) kinds msgs) (KIND: is_lower_kind kind (Message.concrete val None)): mem2 = mem1. Proof. induction WRITE; eauto using write_lower_lower_memory. { inv WRITE; ss. inv PROMISE; ss. clarify. eapply lower_same_same; eauto. } inv KINDS. etrans; try eapply IHWRITE; eauto. inv WRITE_EX; ss. inv PROMISE; ss. clarify. eapply lower_same_same; eauto. Qed. 
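(* The memory-level lemmas above, restated for Local.write_step and Local.write_na_step, followed by transport of promise steps and lower steps along lower_memory. *)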
Lemma write_step_lower_lower_memory lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind (STEP: Local.write_step lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind) (KIND: Memory.op_kind_is_lower kind): lower_memory mem2 mem1. Proof. inv STEP. eapply write_lower_lower_memory; eauto. Qed. Lemma write_step_lower_lower_memory_same lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind (STEP: Local.write_step lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind) (KIND: is_lower_kind kind (Message.concrete val released)): mem2 = mem1. Proof. inv STEP. eapply write_lower_lower_memory_same; eauto. Qed. Lemma write_na_step_lower_lower_memory lc1 sc1 mem1 loc from to val ord lc2 sc2 mem2 msgs kinds kind (STEP: Local.write_na_step lc1 sc1 mem1 loc from to val ord lc2 sc2 mem2 msgs kinds kind) (KINDS: List.Forall2 (fun kind '(_, _, msg) => is_lower_kind kind msg) kinds msgs) (KIND: is_lower_kind kind (Message.concrete val None)): mem2 = mem1. Proof. inv STEP. eapply write_na_lower_lower_memory; eauto. Qed. Lemma lower_memory_promise_step lang pf e st1 lc1 sc1 mem1 st2 lc2 sc2 mem2 lc1' mem1' (LC1: lower_local lc1' lc1) (MEM1: lower_memory mem1' mem1) (STEP: Thread.step pf e (Thread.mk lang st1 lc1 sc1 mem1) (Thread.mk lang st2 lc2 sc2 mem2)) (PROMISE: is_promise e) (WF1: Local.wf lc1' mem1'): exists lc2' mem2', (<<STEP: Thread.step pf e (Thread.mk lang st1 lc1' sc1 mem1') (Thread.mk lang st2 lc2' sc2 mem2')>>) /\ (<<LC2: lower_local lc2' lc2>>) /\ (<<MEM2: lower_memory mem2' mem2>>). Proof. inv STEP; inv STEP0; [|inv LOCAL; ss]. exploit lower_memory_promise_step; try apply WF1; eauto. i. des. esplits; eauto. econs 1. econs; eauto. Qed. Lemma lower_memory_promise_steps lang tr st1 lc1 sc1 mem1 st2 lc2 sc2 mem2 lc1' mem1' (LC1: lower_local lc1' lc1) (MEM1: lower_memory mem1' mem1) (STEP: Trace.steps tr (Thread.mk lang st1 lc1 sc1 mem1) (Thread.mk lang st2 lc2 sc2 mem2)) (PROMISE: List.Forall (fun x => is_promise (snd x)) tr) (WF1: Local.wf lc1' mem1') (SC1: Memory.closed_timemap sc1 mem1') (CLOSED1: Memory.closed mem1'): exists tr' lc2' mem2', (<<STEP: Trace.steps tr' (Thread.mk lang st1 lc1' sc1 mem1') (Thread.mk lang st2 lc2' sc2 mem2')>>) /\ (<<EVENTS: List.Forall2 (fun x y => snd x = snd y) tr tr'>>) /\ (<<LC2: lower_local lc2' lc2>>) /\ (<<MEM2: lower_memory mem2' mem2>>). Proof. revert lc1' mem1' LC1 MEM1 WF1 SC1 CLOSED1. dependent induction STEP; i. { esplits; eauto. } inv PROMISE. destruct th1. ss. exploit lower_memory_promise_step; try exact STEP; eauto. i. des. exploit Thread.step_future; try exact STEP1; eauto. s. i. des. exploit IHSTEP; eauto. i. des. esplits. - econs 2; eauto. - econs 2; eauto. - ss. - ss. Qed. Lemma lower_memory_lower_step lang e st1 lc1 sc1 mem1 st2 lc2 sc2 mem2 lc1' mem1' (LC1: lower_local lc1' lc1) (MEM1: lower_memory mem1' mem1) (STEP: lower_step e (Thread.mk lang st1 lc1 sc1 mem1) (Thread.mk lang st2 lc2 sc2 mem2)) (WF1: Local.wf lc1 mem1) (WF1': Local.wf lc1' mem1') (CLOSED: Memory.closed mem1) (CLOSED1': Memory.closed mem1'): exists e' lc2' mem2', (<<STEP: lower_step e' (Thread.mk lang st1 lc1' sc1 mem1') (Thread.mk lang st2 lc2' sc2 mem2')>>) /\ (<<EVENT: lower_event e' e>>) /\ (<<LC2: lower_local lc2' lc2>>) /\ (<<MEM2: lower_memory mem2' mem2>>). Proof. inv STEP. inv STEP0. inv LOCAL; ss. { esplits. - econs; [econs; try econs 1|..]; eauto; ss. refl. - ss. - ss. - ss. } { exploit lower_memory_read_step; try exact MEM1; eauto. i. des. esplits. - econs; [econs; try econs 2|..]; eauto; ss. refl. 
- econs. ss. - ss. - ss. } { exploit lower_memory_write_step; try exact MEM1; eauto; try refl. i. des. replace sc_src1 with sc2 in *; cycle 1. { inv LOCAL0. inv STEP. ss. } esplits. - econs; [econs; try econs 3|..]; eauto; ss. { exploit write_step_lower_memory_lower; try exact LOCAL0; eauto. i. inv KIND; ss. eapply write_step_lower_lower_memory; eauto. } { i. subst. i. exploit write_step_lower_memory_lower_same; try exact LOCAL0; eauto. i. destruct kind; ss. subst. eapply write_step_lower_lower_memory_same; eauto. ss. inv LOCAL0; inv STEP. destruct ord; ss. } - econs. ss. - ss. - ss. } { exploit lower_memory_read_step; try exact MEM1; eauto; try refl. i. des. exploit Local.read_step_future; try exact LOCAL1; eauto. i. des. exploit Local.read_step_future; try exact STEP; eauto. i. des. exploit lower_memory_write_step; try exact MEM1; eauto; try refl. i. des. replace sc_src1 with sc2 in *; cycle 1. { inv LOCAL2. inv STEP0. ss. } esplits. - econs; [econs; try econs 4|..]; eauto; ss. exploit write_step_lower_memory_lower; try exact LOCAL2; eauto. i. inv KIND; ss. eapply write_step_lower_lower_memory; eauto. - econs; ss. - ss. - ss. } { exploit lower_memory_fence_step; try exact LC1; eauto; try refl. i. des. replace sc_src1 with sc2 in *; cycle 1. { inv LOCAL0. inv STEP. unfold TView.write_fence_sc. condtac; ss. destruct ordw; ss. } esplits. - econs; [econs; try econs 5|..]; eauto; ss. refl. - ss. - ss. - ss. } { exploit lower_memory_write_na_step; try exact MEM1; eauto; try refl. i. des. replace sc_src1 with sc2 in *; cycle 1. { inv LOCAL0. inv STEP. ss. } esplits. - assert (mem_src1 = mem1'). { exploit write_na_step_lower_memory_lower; try exact LOCAL0; eauto. i. des. eapply write_na_step_lower_lower_memory; eauto. { clear - KINDS KINDS0. induction KINDS; ss. } { subst. auto. } } econs; [econs; try econs 8|..]; eauto; ss. subst. refl. - econs; ss. - ss. - ss. } { exploit lower_memory_is_racy; try exact MEM1; try eapply LOCAL0; eauto. i. esplits. - econs; [econs; try econs 9|..]; eauto; ss. refl. - ss. - ss. - ss. } Qed. Lemma lower_memory_lower_steps lang st1 lc1 sc1 mem1 st2 lc2 sc2 mem2 lc1' mem1' (LC1: lower_local lc1' lc1) (MEM1: lower_memory mem1' mem1) (STEP: rtc (tau lower_step) (Thread.mk lang st1 lc1 sc1 mem1) (Thread.mk lang st2 lc2 sc2 mem2)) (WF1: Local.wf lc1 mem1) (WF1': Local.wf lc1' mem1') (SC1: Memory.closed_timemap sc1 mem1) (SC1': Memory.closed_timemap sc1 mem1') (CLOSED: Memory.closed mem1) (CLOSED1': Memory.closed mem1'): exists lc2' mem2', (<<STEP: rtc (tau lower_step) (Thread.mk lang st1 lc1' sc1 mem1') (Thread.mk lang st2 lc2' sc2 mem2')>>) /\ (<<LC2: lower_local lc2' lc2>>) /\ (<<MEM2: lower_memory mem2' mem2>>). Proof. revert lc1' mem1' LC1 MEM1 WF1' SC1' CLOSED1'. dependent induction STEP; i. { esplits; eauto. } inv H. destruct y. exploit lower_memory_lower_step; try exact TSTEP; eauto. i. des. exploit Thread.step_future; try eapply lower_step_step; try exact TSTEP; eauto. s. i. des. exploit Thread.step_future; try eapply lower_step_step; try exact STEP0; eauto. s. i. des. exploit IHSTEP; eauto. i. des. esplits. - econs 2; eauto. econs; eauto. inv EVENT0; ss. - ss. - ss. Qed. Lemma same_memory_promise_step lang pf pf' e (th1 th2 th1' th2': Thread.t lang) (STEP: Thread.step pf e th1 th2) (STEP': Thread.step pf' e th1' th2') (PROMISE: is_promise e) (MEM: th1.(Thread.memory) = th1'.(Thread.memory)): th2.(Thread.memory) = th2'.(Thread.memory). Proof. inv STEP; inv STEP0; try by inv LOCAL; ss. inv STEP'; inv STEP; try by inv LOCAL0; ss. inv LOCAL. inv LOCAL0. ss. subst. 
exploit Memory.promise_op; try exact PROMISE0. i. exploit Memory.promise_op; try exact PROMISE1. i. eapply Memory.op_inj; eauto. Qed. Lemma same_memory_promise_steps lang tr tr' (th1 th2 th1' th2': Thread.t lang) (STEP: Trace.steps tr th1 th2) (STEP': Trace.steps tr' th1' th2') (PROMISE: List.Forall (fun x => is_promise (snd x)) tr) (TRACE: List.Forall2 (fun x y => snd x = snd y) tr tr') (MEM: th1.(Thread.memory) = th1'.(Thread.memory)): th2.(Thread.memory) = th2'.(Thread.memory). Proof. revert tr' th1' th2' STEP' TRACE MEM. induction STEP; i. { inv TRACE. inv STEP'; ss. } subst. inv PROMISE. inv TRACE. inv STEP'. inv TR. ss. subst. exploit same_memory_promise_step; try exact MEM; eauto. Qed. Lemma promise_steps_trace_promise_steps lang (th1 th2: Thread.t lang) (STEPS: rtc (tau (@pred_step is_promise _)) th1 th2): exists tr, (<<STEPS: Trace.steps tr th1 th2>>) /\ (<<PROMISE: List.Forall (fun x => is_promise (snd x)) tr>>). Proof. induction STEPS; eauto. des. inv H. inv TSTEP. inv STEP. esplits; eauto. Qed. Lemma trace_promise_steps_promise_steps lang tr (th1 th2: Thread.t lang) (STEPS: Trace.steps tr th1 th2) (PROMISE: List.Forall (fun x => is_promise (snd x)) tr): rtc (tau (@pred_step is_promise _)) th1 th2. Proof. induction STEPS; eauto. inv PROMISE; ss. inv H1. exploit IHSTEPS; eauto. intros x. etrans; try exact x. econs 2; try refl. econs. - econs; eauto. econs; eauto. - destruct e; ss. Qed. Lemma trace_eq_promise (tr1 tr2: Trace.t) (EQ: List.Forall2 (fun x y => snd x = snd y) tr1 tr2) (PROMISE1: List.Forall (fun x => is_promise (snd x)) tr1): List.Forall (fun x => is_promise (snd x)) tr2. Proof. induction EQ; eauto. inv PROMISE1. econs; eauto. congr. Qed. Lemma promise_step_sc lang pf e (th1 th2: Thread.t lang) (STEP: Thread.step pf e th1 th2) (PROMISE: is_promise e): th1.(Thread.sc) = th2.(Thread.sc). Proof. inv STEP; inv STEP0; inv LOCAL; ss. Qed. Lemma promise_steps_sc lang tr (th1 th2: Thread.t lang) (STEPS: Trace.steps tr th1 th2) (PROMISE: List.Forall (fun x => is_promise (snd x)) tr): th1.(Thread.sc) = th2.(Thread.sc). Proof. induction STEPS; eauto. subst. inv PROMISE. ss. exploit promise_step_sc; try exact STEP; eauto. i. rewrite x0. eauto. Qed. Lemma write_lower_promises_le promises1 mem1 loc from to msg promises2 mem2 kind (WRITE: Memory.write promises1 mem1 loc from to msg promises2 mem2 kind) (LOWER: lower_memory mem2 mem1): Memory.le promises2 promises1. Proof. exploit write_lower_memory_lower; eauto. i. inv WRITE. inv PROMISE; ss. ii. revert LHS. erewrite Memory.remove_o; eauto. condtac; ss. erewrite Memory.lower_o; eauto. condtac; ss. Qed. Lemma write_na_lower_promises_le ts promises1 mem1 loc from to msg promises2 mem2 msgs kinds kind (WRITE: Memory.write_na ts promises1 mem1 loc from to msg promises2 mem2 msgs kinds kind) (LOWER: mem2 = mem1): Memory.le promises2 promises1. Proof. exploit write_na_lower_memory_lower; eauto. i. des. clear LOWER. induction WRITE. { inv WRITE. inv PROMISE; ss. ii. revert LHS. erewrite Memory.remove_o; eauto. condtac; ss. erewrite Memory.lower_o; eauto. condtac; ss. } inv KINDS. hexploit IHWRITE; eauto. i. etrans; eauto. inv WRITE_EX. inv PROMISE; ss. ii. revert LHS. erewrite Memory.remove_o; eauto. condtac; ss. erewrite Memory.lower_o; eauto. condtac; ss. Qed. 
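(* A lower step never grows the promise set, leaves sc unchanged, and can only lower the memory; lower_steps_future extends this to sequences of lower steps. *)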
Lemma lower_step_future lang e (th1 th2: Thread.t lang) (STEP: lower_step e th1 th2): (<<PROMISES: Memory.le th2.(Thread.local).(Local.promises) th1.(Thread.local).(Local.promises)>>) /\ (<<SC: th1.(Thread.sc) = th2.(Thread.sc)>>) /\ (<<MEM: lower_memory th2.(Thread.memory) th1.(Thread.memory)>>). Proof. inv STEP. splits; ss. { inv STEP0. inv LOCAL; ss; try by inv LOCAL0; ss. - inv LOCAL0. ss. eapply write_lower_promises_le; eauto. - inv LOCAL1. inv LOCAL2. ss. eapply write_lower_promises_le; eauto. - inv LOCAL0. ss. eapply write_na_lower_promises_le; eauto. } { inv STEP0. inv LOCAL; ss. - inv LOCAL0. ss. - inv LOCAL2. ss. - inv LOCAL0. unfold TView.write_fence_sc. condtac; ss. destruct ordw; ss. - inv LOCAL0. ss. } Qed. Lemma lower_steps_future lang (th1 th2: Thread.t lang) (STEPS: rtc (tau lower_step) th1 th2): (<<PROMISES: Memory.le th2.(Thread.local).(Local.promises) th1.(Thread.local).(Local.promises)>>) /\ (<<SC: th1.(Thread.sc) = th2.(Thread.sc)>>) /\ (<<MEM: lower_memory th2.(Thread.memory) th1.(Thread.memory)>>). Proof. induction STEPS; eauto. { splits; ss. refl. } inv H. exploit lower_step_future; try exact TSTEP; eauto. i. des. splits; try congr; etrans; eauto. Qed. Lemma step_split_pure lang pf e (th1 th2: Thread.t lang) (STEP: Thread.step pf e th1 th2) (NPROMISE: no_promise e) (NRELEASE: ~ release_event e) (MEM: th2.(Thread.memory) = th1.(Thread.memory)) : (<<LOWER: tau lower_step th1 th2>>) /\ (<<SC: th2.(Thread.sc) = th1.(Thread.sc)>>). Proof. assert (SC: th2.(Thread.sc) = th1.(Thread.sc)). { inv STEP; inv STEP0; auto. inv LOCAL; ss. { inv LOCAL0; ss. } { inv LOCAL1; inv LOCAL2; ss. } { inv LOCAL0; ss. unfold TView.write_fence_sc. destruct (Ordering.le Ordering.seqcst ordw) eqn:ORD; ss. exfalso. eapply NRELEASE. destruct ordw; ss. } { inv LOCAL0; ss. } } esplits; eauto. econs. { inv STEP; [inv STEP0; ss|]. econs; eauto. rewrite MEM. refl. } { destruct e; ss. } Qed. Lemma memory_op_diff_only mem0 loc from to msg mem1 kind (WRITE: Memory.op mem0 loc from to msg mem1 kind) loc' to' from' msg' (SOME: Memory.get loc' to' mem1 = Some (from', msg')) : (exists from'' msg'', (<<GET: Memory.get loc' to' mem0 = Some (from'', msg'')>>) /\ (<<MSG: Message.le msg' msg''>>)) \/ ((<<LOC: loc' = loc>>) /\ (<<TO: to' = to>>) /\ (<<FROM: from' = from>>) /\ (<<MSG: msg' = msg>>) /\ (<<NONE: Memory.get loc' to' mem0 = None>>)). Proof. inv WRITE. { erewrite Memory.add_o in SOME; eauto. des_ifs. { right. ss. des; clarify. splits; auto. eapply Memory.add_get0; eauto. } { left. esplits; eauto. refl. } } { erewrite Memory.split_o in SOME; eauto. des_ifs. { right. ss. des; clarify. splits; auto. eapply Memory.split_get0 in SPLIT. des; auto. } { left. ss. des; clarify. eapply Memory.split_get0 in SPLIT. des; clarify. esplits; eauto. refl. } { left. esplits; eauto. refl. } } { erewrite Memory.lower_o in SOME; eauto. des_ifs. { ss. des; clarify. left. eapply lower_succeed_wf in LOWER; eauto. des; eauto. } { left. esplits; eauto. refl. } } { erewrite Memory.remove_o in SOME; eauto. des_ifs. left. esplits; eauto. refl. } Qed. Lemma promise_step_tau_promise_step lang pf e (th0 th1: Thread.t lang) (STEP: Thread.promise_step pf e th0 th1) : tau (@pred_step is_promise _) th0 th1. Proof. econs; eauto. { econs; eauto. { econs; eauto. econs 1; eauto. } { inv STEP; ss. } } { inv STEP; ss. } Qed. Lemma lower_step_tau_lower_step lang e (th0 th1: Thread.t lang) (STEP: lower_step e th0 th1) : tau lower_step th0 th1. Proof. inv STEP. econs; eauto. { econs; eauto. } { destruct e; ss. } Qed. 
Qed.
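(* An outstanding promise can be lowered onto itself without changing promises or memory; split_memory_write below uses this to factor a write into a promise followed by a lower-kind write. *)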
Lemma memory_lower_exists prom0 mem0 loc from to msg (CLOSED: Memory.closed mem0) (MLE: Memory.le prom0 mem0) (MSG: msg <> Message.reserve) (GET: Memory.get loc to prom0 = Some (from, msg)) (BOT: Memory.bot_none prom0) : Memory.promise prom0 mem0 loc from to msg prom0 mem0 (Memory.op_kind_lower msg). Proof. inv CLOSED. exploit CLOSED0. { eapply MLE. eauto. } i. des. hexploit Memory.lower_exists; try eassumption. { hexploit memory_get_ts_strong; eauto. i. des; clarify. rewrite BOT in GET. ss. } { refl. } i. des. hexploit Memory.lower_exists_le; eauto. i. des. hexploit lower_same_same; try apply H. i. subst. hexploit lower_same_same; try apply H0. i. subst. econs; eauto. Qed. Lemma split_memory_write promises1 mem1 loc from to msg promises2 mem2 kind (MESSAGE: msg <> Message.reserve) (WRITE: Memory.write promises1 mem1 loc from to msg promises2 mem2 kind): exists promises1', (<<PROMISE: Memory.promise promises1 mem1 loc from to msg promises1' mem2 kind>>) /\ (<<WRITE_LOWER: Memory.write promises1' mem2 loc from to msg promises2 mem2 (Memory.op_kind_lower msg)>>). Proof. exploit MemoryFacts.write_time_lt; eauto. i. assert (MSG: Message.wf msg). { inv WRITE. inv PROMISE; inv MEM; ss. - inv ADD. ss. - inv SPLIT. ss. - inv LOWER. ss. } assert (MSG_TO: Memory.message_to msg loc to). { inv WRITE. inv PROMISE; ss. } inv WRITE. esplits; eauto. exploit Memory.promise_get0; eauto. { inv PROMISE; ss. } i. des. exploit Memory.lower_exists_same; try exact GET_PROMISES; eauto. i. exploit Memory.lower_exists_same; try exact GET_MEM; eauto. Qed. Lemma split_write lc0 sc0 mem0 loc from to val releasedm released ord lc2 sc2 mem2 kind (REL_WF: View.opt_wf releasedm) (REL_CLOSED: Memory.closed_opt_view releasedm mem0) (WF0: Local.wf lc0 mem0) (SC0: Memory.closed_timemap sc0 mem0) (MEM0: Memory.closed mem0) (ORD: Ordering.le ord Ordering.relaxed) (STEP: Local.write_step lc0 sc0 mem0 loc from to val releasedm released ord lc2 sc2 mem2 kind): exists released' lc1, (<<RELEASED: released' = TView.write_released (Local.tview lc0) sc0 loc to releasedm ord>>) /\ (<<PROMISE: Local.promise_step lc0 mem0 loc from to (Message.concrete val released') lc1 mem2 kind>>) /\ (<<WRITE: Local.write_step lc1 sc0 mem2 loc from to val releasedm released ord lc2 sc2 mem2 (Memory.op_kind_lower (Message.concrete val released'))>>). Proof. exploit write_promise_fulfill; eauto. i. des. exploit Local.promise_step_future; eauto. i. des. exploit fulfill_write; try exact STEP2; eauto. subst. inv STEP1. ss. Qed. Lemma reorder_write_lower_rtc_promise lang promises1 mem1 loc from to msg kind th2 th3 (LE: Memory.le promises1 mem1) (KIND: Memory.op_kind_is_lower kind) (WRITE: Memory.write promises1 mem1 loc from to msg th2.(Thread.local).(Local.promises) th2.(Thread.memory) kind) (STEPS: rtc (tau (@pred_step is_promise lang)) th2 th3): exists th2', (<<STEPS: rtc (tau (@pred_step is_promise lang)) (Thread.mk _ th2.(Thread.state) (Local.mk th2.(Thread.local).(Local.tview) promises1) th2.(Thread.sc) mem1) th2'>>) /\ (<<WRITE: Memory.write th2'.(Thread.local).(Local.promises) th2'.(Thread.memory) loc from to msg th3.(Thread.local).(Local.promises) th3.(Thread.memory) kind>>) /\ (<<STATE: th2'.(Thread.state) = th3.(Thread.state)>>) /\ (<<TVIEW: th2'.(Thread.local).(Local.tview) = th3.(Thread.local).(Local.tview)>>) /\ (<<SC: th2'.(Thread.sc) = th3.(Thread.sc)>>). Proof. revert promises1 mem1 LE WRITE. induction STEPS; i. { esplits; try refl. ss. } inv H. inv TSTEP. inv STEP. inv STEP0; inv STEP; inv LOCAL; ss. 
  exploit reorder_memory_write_lower_promise; try exact WRITE; eauto. i. des.
  hexploit Memory.promise_le; try exact PROMISE0; eauto. i. des.
  exploit IHSTEPS; eauto. i. des.
  esplits; try exact WRITE1; eauto.
  econs 2; eauto. econs.
  - econs; [do 4 econs; eauto|]; ss.
    inv WRITE0. inv PROMISE1; ss.
    eapply lower_closed_message_inv; eauto.
  - ss.
Qed.

Lemma memory_write_lower_refl_inv promises1 mem1 loc from to msg promises2 mem2
      (WRITE: Memory.write promises1 mem1 loc from to msg promises2 mem2 (Memory.op_kind_lower msg)):
  mem1 = mem2.
Proof.
  inv WRITE. inv PROMISE; ss.
  apply Memory.ext. i.
  exploit Memory.lower_get0; try exact MEM. i. des.
  erewrite (@Memory.lower_o mem2); eauto. condtac; ss.
  des. subst. ss.
Qed.

Lemma promise_remove_messages promises0 mem0 loc from to msg promises1 mem1 kind promises2
      (PROMISE: Memory.promise promises0 mem0 loc from to msg promises1 mem1 kind)
      (REMOVE: Memory.remove promises1 loc from to msg promises2):
  Messages.of_memory promises1 <4= (Messages.of_memory promises2 \4/ committed mem0 promises0 mem1 promises2).
Proof.
  s. i. inv PR. revert GET. inv PROMISE; ss.
  { erewrite Memory.add_o; eauto. condtac; ss.
    - i. des. symmetry in GET. inv GET. right.
      exploit Memory.remove_get0; eauto. i. des.
      exploit Memory.add_get0; try exact MEM. i. des.
      econs; [econs; eauto|]. ii. inv H. congr.
    - i. left. econs.
      erewrite Memory.remove_o; eauto. condtac; ss.
      erewrite Memory.add_o; eauto. condtac; ss. }
  { erewrite Memory.split_o; eauto. repeat (condtac; ss).
    - i. des. symmetry in GET. inv GET. right.
      exploit Memory.remove_get0; eauto. i. des.
      exploit Memory.split_get0; try exact MEM. i. des.
      econs; [econs; eauto|]. ii. inv H. congr.
    - guardH o. i. des. symmetry in GET. inv GET. left. econs.
      exploit Memory.split_get0; try exact PROMISES. i. des.
      erewrite Memory.remove_o; eauto. condtac; ss.
    - i. left. econs.
      erewrite Memory.remove_o; eauto. condtac; ss.
      erewrite Memory.split_o; eauto. repeat (condtac; ss). }
  { erewrite Memory.lower_o; eauto. condtac; ss.
    - i. des. symmetry in GET. inv GET. right.
      exploit Memory.remove_get0; eauto. i. des.
      exploit Memory.lower_get0; try exact MEM. i. des.
      exploit Memory.lower_get0; try exact PROMISES. i. des.
      econs; [econs; eauto|]. ii. inv H. congr.
    - i. left. econs.
      erewrite Memory.remove_o; eauto. condtac; ss.
      erewrite Memory.lower_o; eauto. condtac; ss. }
  { exploit Memory.remove_get0; try exact REMOVE. i. des.
    exploit Memory.remove_get0; try exact PROMISES. i. des.
    congr. }
Qed.

Lemma lower_remove_remove mem1 loc from to msg1 msg2 mem2 mem3
      (LOWER: Memory.lower mem1 loc from to msg1 msg2 mem2)
      (REMOVE: Memory.remove mem2 loc from to msg2 mem3):
  Memory.remove mem1 loc from to msg1 mem3.
Proof.
  exploit Memory.lower_get0; eauto. i. des.
  exploit Memory.remove_exists; try exact GET. i. des.
  replace mem0 with mem3 in *; ss.
  apply Memory.ext. i.
  erewrite Memory.remove_o; eauto. erewrite Memory.lower_o; eauto.
  erewrite (@Memory.remove_o mem0); eauto. condtac; ss.
Qed.

Lemma write_lower_promises promises1 mem1 loc from to msg promises2 mem2 kind l t
      (WRITE: Memory.write promises1 mem1 loc from to msg promises2 mem2 kind)
      (KIND: Memory.op_kind_is_lower kind):
  Memory.get l t promises2 =
  if loc_ts_eq_dec (l, t) (loc, to) then None else Memory.get l t promises1.
Proof.
  inv WRITE. inv PROMISE; ss.
  erewrite Memory.remove_o; eauto. condtac; ss.
  erewrite Memory.lower_o; eauto. condtac; ss.
Qed.
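(* Key decomposition: a non-release thread step factors into promise-only steps that already produce the final memory and sc, followed by lower steps that leave the memory unchanged. *)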
Lemma split_step lang pf e (th0 th2: Thread.t lang) (STEP: Thread.step pf e th0 th2) (NRELEASE: ~ release_event e) (LOCAL: Local.wf (Thread.local th0) (Thread.memory th0)) (SC: Memory.closed_timemap (Thread.sc th0) (Thread.memory th0)) (CLOSED: Memory.closed (Thread.memory th0)): exists th1, (<<PROMISES: rtc (tau (@pred_step is_promise _)) th0 th1>>) /\ (<<LOWER: rtc (tau lower_step) th1 th2>>) /\ (<<STATE: th0.(Thread.state) = th1.(Thread.state)>>) /\ (<<MEM: th1.(Thread.memory) = th2.(Thread.memory)>>) /\ (<<SC: th1.(Thread.sc) = th2.(Thread.sc)>>) /\ (<<FIN: Messages.of_memory th1.(Thread.local).(Local.promises) <4= (Messages.of_memory th2.(Thread.local).(Local.promises) \4/ committed th0.(Thread.memory) th0.(Thread.local).(Local.promises) th2.(Thread.memory) th2.(Thread.local).(Local.promises))>>). Proof. dup STEP. inv STEP. { (* promise *) exists th2. inv STEP1. ss. splits; eauto. econs; eauto. econs. - econs; [econs|]; eauto. ss. - ss. } inv STEP1. inv LOCAL0; ss. { (* silent *) exploit step_split_pure; eauto; ss. i. des. esplits; eauto. } { (* read *) exploit step_split_pure; eauto; ss. i. des. esplits; eauto. s. i. inv LOCAL1. eauto. } { (* write *) exploit split_write; try exact LOCAL1; eauto. { destruct ord; ss. } i. des. esplits. - econs 2; eauto. eapply promise_step_tau_promise_step. econs; eauto. - econs 2; eauto. eapply lower_step_tau_lower_step. econs; [econs; eauto|..]; eauto. refl. - ss. - ss. - inv WRITE. ss. - clear STEP0 WRITE. inv PROMISE. inv LOCAL1. inv WRITE. destruct lc1. ss. exploit Memory.promise_inj; [exact PROMISE|exact PROMISE0|]. i. des. subst. eapply promise_remove_messages; eauto. } { (* update *) exploit Local.read_step_future; eauto. i. des. exploit split_write; try exact LOCAL2; eauto. { destruct ordw; ss. } i. des. exploit reorder_read_promise_diff; try exact LOCAL1; eauto. { inv WRITE. exploit MemoryFacts.write_time_lt; eauto. i. ii. inv H. timetac. } i. des. esplits. - econs 2; eauto. eapply promise_step_tau_promise_step. econs; eauto. - econs 2; eauto. eapply lower_step_tau_lower_step. econs; [econs; eauto|..]; eauto. refl. - ss. - ss. - inv WRITE. ss. - clear STEP0 WRITE PROMISE STEP2. inv STEP1. inv LOCAL1. inv LOCAL2. inv WRITE. destruct lc1. ss. exploit Memory.promise_inj; [exact PROMISE|exact PROMISE0|]. i. des. subst. eapply promise_remove_messages; eauto. } { (* fence *) exploit step_split_pure; eauto; ss. i. des. esplits; eauto. s. i. inv LOCAL1. eauto. } { (* na write *) clear STEP0. inv LOCAL1. ss. cut (exists th1 kinds' kind', rtc (tau (@pred_step is_promise _)) (Thread.mk _ st1 lc1 sc1 mem1) th1 /\ Memory.write_na (View.rlx (TView.cur (Local.tview lc1)) loc) th1.(Thread.local).(Local.promises) th1.(Thread.memory) loc from to val promises2 mem2 msgs kinds' kind' /\ th1.(Thread.state) = st1 /\ th1.(Thread.local).(Local.tview) = lc1.(Local.tview) /\ th1.(Thread.memory) = mem2 /\ th1.(Thread.sc) = sc1 /\ Messages.of_memory th1.(Thread.local).(Local.promises) <4= (Messages.of_memory promises2 \4/ committed mem1 lc1.(Local.promises) mem2 promises2)). { i. des. destruct th1. ss. subst. rewrite <- H2 in *. esplits; eauto. econs 2; eauto. econs. - econs. + econs; cycle 1. * econs 8. econs; eauto. * ss. + ss. + refl. + ss. - ss. } destruct lc1. ss. remember (View.rlx (TView.cur tview) loc) as ts eqn:TS. assert (LE: Time.le (View.rlx (TView.cur tview) loc) ts) by (subst; refl). clear TS. induction WRITE. { inv WRITE. esplits. - econs 2; eauto. eapply promise_step_tau_promise_step. econs; eauto. - s. hexploit Memory.promise_get0; try exact PROMISE. 
{ inv PROMISE; ss. } i. des. hexploit Memory.get_ts; try exact GET_MEM. i. des. { subst. inv WRITABLE. } hexploit Memory.lower_exists_same; try exact GET_PROMISES; eauto. i. hexploit Memory.lower_exists_same; try exact GET_MEM; eauto. i. econs 1. + ss. + econs; eauto. econs 3; eauto; ss. econs. apply Time.bot_spec. - refl. - refl. - refl. - refl. - s. eapply promise_remove_messages; eauto. } exploit Memory.write_future; try exact WRITE_EX; try apply LOCAL; eauto. { unguard. des; subst; ss. econs; ss. } i. des. exploit IHWRITE; eauto. { eapply Memory.future_closed_timemap; eauto. } { econs; try apply LOCAL; eauto. eapply TView.future_closed; eauto. apply LOCAL. } { econs. eapply TimeFacts.le_lt_lt; eauto. } intros x. des. clear IHWRITE. exploit split_memory_write; try exact WRITE_EX. { unguard. des; subst; ss. } i. des. exploit reorder_write_lower_rtc_promise; try exact x; try apply WRITE_LOWER; eauto. { eapply Memory.promise_le; try eapply LOCAL; eauto. } s. i. des. esplits. - econs 2; try exact STEPS. econs; [econs|]. + econs. econs. econs; eauto. econs; eauto. unguard. des; subst; ss. econs. ss. + ss. + ss. - econs 2; eauto. - congr. - congr. - exploit memory_write_lower_refl_inv; try exact WRITE0. i. congr. - congr. - destruct th1, th2'. ss. subst. i. inv PR. destruct (classic ((x1, x2) = (loc, to'))). { inv H. right. econs. - eapply unchangable_write_na; eauto. inv WRITE0. inv PROMISE0. exploit Memory.lower_get0; try exact PROMISES. i. des. exploit Memory.lower_get0; try exact MEM. i. des. exploit Memory.remove_get0; try exact REMOVE. i. des. rewrite GET in *. inv GET0. econs; eauto. - ii. exploit unchangable_promise; eauto. i. exploit unchangable_rtc_increase; try exact STEPS; eauto. s. i. inv x6. inv WRITE0. inv PROMISE0. exploit Memory.lower_get0; try exact PROMISES. i. des. congr. } specialize (x5 x1 x2 x3 x4). exploit x5. { econs. erewrite write_lower_promises; try exact WRITE0; eauto. condtac; ss. des. subst. ss. } i. des; eauto. right. inv x7. econs; eauto. ii. apply NUNCHANGABLE. eapply unchangable_write; try exact WRITE_EX. ss. } { exploit step_split_pure; eauto; ss. i. des. esplits; eauto. } Qed. Lemma reorder_lower_step_promise_step lang pf e1 e2 (th0 th1 th2: @Thread.t lang) (WF: Local.wf th0.(Thread.local) th0.(Thread.memory)) (CLOSED: Memory.closed th0.(Thread.memory)) (STEP1: lower_step e1 th0 th1) (STEP2: Thread.step pf e2 th1 th2) (ISPROMISE: is_promise e2) (CONS: Local.promise_consistent th2.(Thread.local)): exists th1', (<<STEP1: Thread.step pf e2 th0 th1'>>) /\ (<<STEP2: lower_step e1 th1' th2>>) /\ (<<STATE: th0.(Thread.state) = th1'.(Thread.state)>>). Proof. inv STEP2; [|inv STEP; inv LOCAL; ss]. inv STEP. ss. inv STEP1. inv STEP. inv LOCAL0; ss. { esplits. - econs; ss. econs; eauto. - econs; ss. refl. - ss. } { exploit reorder_read_promise_diff; eauto. { ii. inv H. inv LOCAL1. inv LOCAL. ss. exploit Memory.promise_get0; eauto. { inv PROMISE; ss. exploit Memory.remove_get0; try exact MEM0. i. des. congr. } i. des. exploit Memory.promise_get1; eauto. { inv PROMISE; ss. exploit Memory.remove_get0; try exact MEM0. i. des. congr. } i. des. inv MSG_LE. rewrite GET_MEM in *. inv GET0. exploit CONS; try exact GET_PROMISES; ss. unfold TimeMap.join, View.singleton_ur_if. condtac. - unfold View.singleton_ur, TimeMap.singleton. ss. unfold LocFun.add, LocFun.init, LocFun.find. condtac; ss. i. exploit TimeFacts.join_lt_des; eauto. i. des. exploit TimeFacts.join_lt_des; try exact AC. i. des. timetac. - unfold View.singleton_rw, TimeMap.singleton. ss. 
unfold LocFun.add, LocFun.init, LocFun.find. condtac; ss. i. exploit TimeFacts.join_lt_des; eauto. i. des. exploit TimeFacts.join_lt_des; try exact AC. i. des. timetac. } i. des. esplits. - econs. econs; eauto. - econs; ss; try refl. econs; eauto. - ss. } { exploit write_step_lower_memory_lower; eauto. i. exploit reorder_write_lower_promise; eauto. { destruct ord; ss. } i. des. destruct (Ordering.le ord Ordering.na) eqn:EQ. { hexploit SAME; auto. i. subst. hexploit write_step_lower_memory_lower_same; try apply LOCAL1; eauto. i. hexploit write_step_lower_lower_memory_same; try exact STEP2; eauto. i. subst. esplits. - econs. econs; eauto. - econs; ss. { econs; eauto. } { refl. } - ss. } { exploit write_step_lower_lower_memory; try exact STEP2; eauto. i. esplits. - econs. econs; eauto. - econs; ss. { econs; eauto. } { rewrite EQ. ss. } - ss. } } { exploit Local.read_step_future; eauto. i. des. exploit write_step_lower_memory_lower; eauto. i. exploit reorder_write_lower_promise; try exact LOCAL2; eauto. { destruct ordw; ss. } i. des. exploit write_step_lower_lower_memory; try exact STEP2; eauto. i. hexploit write_step_promise_consistent; try exact STEP2; eauto. i. exploit reorder_read_promise_diff; try exact LOCAL1; eauto. { ii. inv H0. clear LOCAL LOCAL2 STEP2. inv LOCAL1. inv STEP1. ss. exploit Memory.promise_get0; eauto. { inv PROMISE; ss. exploit Memory.remove_get0; try exact MEM0. i. des. congr. } i. des. exploit Memory.promise_get1; eauto. { inv PROMISE; ss. exploit Memory.remove_get0; try exact MEM0. i. des. congr. } i. des. inv MSG_LE. rewrite GET_MEM in *. inv GET0. exploit H; try exact GET_PROMISES; ss. unfold TimeMap.join, View.singleton_ur_if. condtac. - unfold View.singleton_ur, TimeMap.singleton. ss. unfold LocFun.add, LocFun.init, LocFun.find. condtac; ss. i. exploit TimeFacts.join_lt_des; eauto. i. des. exploit TimeFacts.join_lt_des; try exact AC. i. des. timetac. - unfold View.singleton_rw, TimeMap.singleton. ss. unfold LocFun.add, LocFun.init, LocFun.find. condtac; ss. i. exploit TimeFacts.join_lt_des; eauto. i. des. exploit TimeFacts.join_lt_des; try exact AC. i. des. timetac. } i. des. esplits. - econs. econs; eauto. - econs; ss. econs; eauto. - ss. } { exploit reorder_fence_promise; eauto. { destruct ordw; ss. } i. des. esplits. - econs. econs; eauto. - econs; ss; try refl. econs; eauto. - ss. } { exploit write_na_step_lower_memory_lower; eauto. i. des. exploit reorder_write_na_lower_promise; try apply LOCAL1; eauto. { eapply List.Forall_forall. i. eapply list_Forall2_in2 in KINDS; eauto. des. des_ifs. destruct x; ss. } { destruct kind0; ss. } { inv LOCAL1. destruct ord; ss. } i. des. exploit write_na_step_lower_lower_memory; try exact STEP2; eauto. i. esplits. - econs. econs; eauto. - econs; ss. { econs; eauto. } { subst. refl. } - ss. } { exploit reorder_racy_read_promise; eauto. i. des. esplits. - econs. econs; eauto. - econs; ss; try refl. econs; eauto. - ss. } Qed. 
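(* The single-step reordering above, lifted to sequences: a block of lower steps commutes past a whole trace of promise steps. *)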
Lemma reorder_lower_steps_promise_steps lang tr (th0 th1 th2: @Thread.t lang) (WF: Local.wf th0.(Thread.local) th0.(Thread.memory)) (SC: Memory.closed_timemap th0.(Thread.sc) th0.(Thread.memory)) (CLOSED: Memory.closed th0.(Thread.memory)) (STEPS1: rtc (tau lower_step) th0 th1) (STEPS2: Trace.steps tr th1 th2) (PROMISE: List.Forall (fun x => is_promise (snd x)) tr) (CONS: Local.promise_consistent th2.(Thread.local)): exists tr' th1', (<<STEPS1: Trace.steps tr' th0 th1'>>) /\ (<<TRACE: List.Forall2 (fun x y => snd x = snd y) tr tr'>>) /\ (<<STEPS2: rtc (tau lower_step) th1' th2>>) /\ (<<STATE: th0.(Thread.state) = th1'.(Thread.state)>>). Proof. revert tr th2 STEPS2 PROMISE CONS. induction STEPS1; i. { esplits; eauto using Forall2_refl. clear - PROMISE STEPS2. induction STEPS2; eauto. subst. inv PROMISE. ss. rewrite <- IHSTEPS2; eauto. inv STEP; [|inv STEP0; inv LOCAL; ss]. inv STEP0. ss. } inv H. exploit Thread.step_future; try eapply lower_step_step; eauto. i. des. exploit IHSTEPS1; eauto. i. des. cut (exists tr'' th1'', Trace.steps tr'' x th1'' /\ List.Forall2 (fun x y => snd x = snd y) tr' tr'' /\ lower_step e th1'' th1' /\ x.(Thread.state) = th1''.(Thread.state)). { i. des. esplits; eauto. eapply Forall2_trans; eauto. congr. } exploit Trace.steps_future; try exact STEPS0; eauto. i. des. hexploit rtc_tau_step_promise_consistent; try eapply rtc_implies; try eapply tau_lower_step_tau_step; try exact STEPS3; eauto. i. assert (PROMISE': List.Forall (fun x => is_promise (snd x)) tr') by eauto using trace_eq_promise. clear z STEPS1 IHSTEPS1 STEPS2 STEPS3. clear - WF SC CLOSED TSTEP STEPS0 H PROMISE'. rename tr' into tr, th1' into z, STEPS0 into STEPS, PROMISE' into PROMISE. revert x e WF SC CLOSED TSTEP. induction STEPS; i. { esplits; eauto. } subst. inv PROMISE. ss. exploit Thread.step_future; try eapply lower_step_step; eauto. i. des. exploit Thread.step_future; try eapply STEP; eauto. i. des. hexploit Trace.steps_promise_consistent; try exact STEPS; eauto. i. exploit reorder_lower_step_promise_step; try exact TSTEP; eauto. i. des. exploit Thread.step_future; try eapply STEP1; eauto. i. des. exploit IHSTEPS; try exact STEP2; eauto. i. des. esplits; try exact x2; eauto. congr. Qed. Definition delayed {lang} (st0 st1: lang.(Language.state)) lc0 lc1 sc mem: Prop := (<<MEM: Memory.closed mem>>) /\ (<<SC: Memory.closed_timemap sc mem>>) /\ (<<LOCAL0: Local.wf lc0 mem>>) /\ (<<LOCAL1: Local.wf lc1 mem>>) /\ (<<PROMISES: Memory.le lc1.(Local.promises) lc0.(Local.promises)>>) /\ exists lc1' mem', (<<STEPS: rtc (tau lower_step) (Thread.mk _ st0 lc0 sc mem) (Thread.mk _ st1 lc1' sc mem')>>) /\ (<<MEM: lower_memory mem' mem>>) /\ (<<LOCAL: lower_local lc1' lc1>>). Lemma delayed_refl lang (st: lang.(Language.state)) lc mem sc (MEM: Memory.closed mem) (SC: Memory.closed_timemap sc mem) (LOCAL: Local.wf lc mem) : delayed st st lc lc sc mem. Proof. red. esplits; eauto; refl. Qed. Lemma delayed_step lang (st0 st1 st2: Language.state lang) lc0 lc1 lc2 mem1 sc1 mem2 sc2 pf e (STEP: Thread.step pf e (Thread.mk _ st1 lc1 sc1 mem1) (Thread.mk _ st2 lc2 sc2 mem2)) (CONS: Local.promise_consistent lc2) (NRELEASE: ~ release_event e) (DELAYED: delayed st0 st1 lc0 lc1 sc1 mem1) : exists lc0', (<<PROMISES: rtc (tau (@pred_step is_promise _)) (Thread.mk _ st0 lc0 sc1 mem1) (Thread.mk _ st0 lc0' sc2 mem2)>>) /\ (<<DELAYED: delayed st0 st2 lc0' lc2 sc2 mem2>>). Proof. unfold delayed in DELAYED. des. exploit Thread.step_future; try exact STEP; eauto. s. i. des. 
exploit Thread.rtc_tau_step_future; try eapply rtc_implies; try eapply tau_lower_step_tau_step; eauto. s. i. des. exploit split_step; try exact STEP; eauto. s. i. des. exploit promise_steps_trace_promise_steps; eauto. i. des. clear STEP PROMISES. rename STEPS0 into PROMISES. destruct th1. ss. subst. exploit lower_memory_promise_steps; try exact MEM0; try exact PROMISES; eauto. i. des. rename STEP into PROMISES_L. exploit Trace.steps_future; try exact PROMISES; eauto. s. i. des. exploit Trace.steps_future; try exact PROMISES_L; eauto. s. i. des. exploit lower_memory_lower_steps; try exact MEM2; try exact LOWER; eauto. i. des. rename STEP into LOWER_L. clear LOWER. hexploit lower_local_consistent; try exact LC0; eauto. i. hexploit rtc_tau_step_promise_consistent; try eapply rtc_implies; try eapply tau_lower_step_tau_step; try exact LOWER_L; eauto. s. i. exploit reorder_lower_steps_promise_steps; try exact STEPS; try exact PROMISES_L; eauto. { eapply trace_eq_promise; eauto. } s. i. des. subst. move STEPS1 at bottom. exploit same_memory_promise_steps; [exact PROMISES|exact STEPS1|..]; eauto. { eapply Forall2_trans; eauto. congr. } s. i. subst. destruct th1'. ss. exploit lower_steps_future; try exact STEPS2. s. i. des. subst. exploit promise_steps_sc; try exact STEPS1; eauto. { repeat (eapply trace_eq_promise; eauto). } s. i. subst. exploit trace_promise_steps_promise_steps; try exact STEPS1; eauto. { repeat (eapply trace_eq_promise; eauto). } i. esplits; try exact x0. clear x0. unfold delayed. exploit Trace.steps_future; try exact STEPS1; eauto. s. i. des. splits; auto. - exploit lower_steps_future; try exact LOWER_L. s. i. des. etrans; eauto. etrans; eauto. inv LC0. ss. - esplits. + etrans; eauto. + ss. + ss. Qed. Lemma writable_mon_non_release view1 view2 sc1 sc2 loc ts ord1 ord2 (VIEW: View.le view1 view2) (ORD: Ordering.le ord1 ord2) (WRITABLE: TView.writable view2 sc2 loc ts ord2): TView.writable view1 sc1 loc ts ord1. Proof. inv WRITABLE. econs; eauto. eapply TimeFacts.le_lt_lt; try apply VIEW; auto. Qed. Lemma write_tview_mon_non_release tview1 tview2 sc1 sc2 loc ts ord1 ord2 (TVIEW: TView.le tview1 tview2) (WF2: TView.wf tview2) (ORD: Ordering.le ord1 ord2): TView.le (TView.write_tview tview1 sc1 loc ts ord1) (TView.write_tview tview2 sc2 loc ts ord2). Proof. unfold TView.write_tview, View.singleton_ur_if. econs; repeat (condtac; aggrtac); (try by etrans; [apply TVIEW|aggrtac]); (try by rewrite <- ? View.join_r; econs; aggrtac); (try apply WF2). Qed. Lemma read_fence_tview_mon_non_release tview1 tview2 ord1 ord2 (TVIEW: TView.le tview1 tview2) (WF2: TView.wf tview2) (ORD: Ordering.le ord1 ord2): TView.le (TView.read_fence_tview tview1 ord1) (TView.read_fence_tview tview2 ord2). Proof. unfold TView.read_fence_tview. econs; repeat (condtac; aggrtac); (try by etrans; [apply TVIEW|aggrtac]); (try by rewrite <- ? View.join_r; aggrtac; rewrite <- ? TimeMap.join_r; apply TVIEW). Qed. Lemma write_fence_tview_mon_non_release tview1 tview2 sc1 sc2 ord1 ord2 (TVIEW: TView.le tview1 tview2) (ORD: Ordering.le ord1 ord2) (WF1: TView.wf tview1) (NREL: Ordering.le ord2 Ordering.strong_relaxed): TView.le (TView.write_fence_tview tview1 sc1 ord1) (TView.write_fence_tview tview2 sc2 ord2). Proof. unfold TView.write_fence_tview, TView.write_fence_sc. econs; repeat (condtac; aggrtac). all: try by destruct ord1, ord2; ss. all: try by etrans; [apply TVIEW|aggrtac]. Qed. 
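(* Stability under future memories: lower-kind writes, reads, fences, and na writes can be replayed in any Memory.future_weak extension, culminating in future_lower_step and delayed_future. *)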
Lemma future_write_lower promises1 mem1 loc from to msg promises2 mem2 kind mem1' msg' (WRITE: Memory.write promises1 mem1 loc from to msg promises2 mem2 kind) (KIND: Memory.op_kind_is_lower kind) (LE1: Memory.le promises1 mem1') (MSG_LE: Message.le msg' msg) (MSG_WF: Message.wf msg') (MSG_CLOSED: Memory.closed_message msg' mem1') (FUTURE1: Memory.future_weak mem1 mem1'): exists mem2', (<<WRITE': Memory.write promises1 mem1' loc from to msg' promises2 mem2' kind>>) /\ (<<FUTURE2: Memory.future_weak mem2 mem2'>>). Proof. inv WRITE. inv PROMISE; ss. exploit Memory.lower_get0; try exact PROMISES. i. des. exploit Memory.lower_exists; try exact GET; try eapply MSG_WF. { inv MEM. inv LOWER. ss. } { etrans; eauto. } i. des. exploit Memory.lower_exists_le; try apply LE1; eauto. i. des. exploit Memory.lower_get0; try exact x0. i. des. exploit Memory.remove_exists; try exact GET2. i. des. exploit lower_remove_remove; try exact PROMISES; eauto. i. exploit lower_remove_remove; try exact x0; eauto. i. exploit Memory.remove_inj; [exact x4|exact x3|]. i. subst. clear x3 x4. esplits. { econs; try exact x2. econs; eauto. - inv MSG_LE; ss. inv TS. econs. etrans; eauto. inv RELEASED; try apply Time.bot_spec. apply LE. - ii. subst. inv MSG_LE. ss. } { clear - FUTURE1 MEM x1 MSG_LE MSG_WF MSG_CLOSED TS. inv FUTURE1. econs; i. - revert GET. erewrite Memory.lower_o; eauto. condtac; ss; i. + des. inv GET. exploit Memory.lower_get0; try exact x1. i. des. esplits; eauto; try refl. right. splits; ss. eapply Memory.lower_closed_message; eauto. + guardH o. erewrite Memory.lower_o; eauto. condtac; ss. guardH o0. exploit SOUND; eauto. i. des; esplits; eauto. right. splits; auto. eapply Memory.lower_closed_message; eauto. - revert GET2. erewrite Memory.lower_o; eauto. condtac; ss; i. + des. inv GET2. inv MSG_LE. inv MSG_WF. inv MSG_CLOSED. inv TS. splits; ss. * eapply Memory.lower_closed_opt_view; eauto. * etrans; eauto. inv RELEASED; try apply Time.bot_spec. apply LE. + guardH o. revert GET1. erewrite Memory.lower_o; eauto. condtac; ss. i. guardH o0. exploit COMPLETE1; eauto. i. des. splits; auto. eapply Memory.lower_closed_opt_view; eauto. - revert GET2. erewrite Memory.lower_o; eauto. condtac; ss; i. + des. inv GET2. inv MSG_LE. inv MSG_WF. inv MSG_CLOSED. inv TS. splits; ss. * eapply Memory.lower_closed_opt_view; eauto. * etrans; eauto. inv RELEASED; try apply Time.bot_spec. apply LE. + guardH o. revert GET1. erewrite Memory.lower_o; eauto. condtac; ss. i. guardH o0. exploit COMPLETE2; eauto. i. des. splits; auto. eapply Memory.lower_closed_opt_view; eauto. } Qed. Lemma future_write_na_lower ts promises1 mem1 loc from to val promises2 mem2 msgs kinds kind ts' mem1' (WRITE: Memory.write_na ts promises1 mem1 loc from to val promises2 mem2 msgs kinds kind) (KINDS: List.Forall (fun x => Memory.op_kind_is_lower x) kinds) (KIND: Memory.op_kind_is_lower kind) (LE1: Memory.le promises1 mem1') (TS: Time.le ts' ts) (FUTURE1: Memory.future_weak mem1 mem1'): exists mem2', (<<WRITE': Memory.write_na ts' promises1 mem1' loc from to val promises2 mem2' msgs kinds kind>>) /\ (<<FUTURE2: Memory.future_weak mem2 mem2'>>). Proof. revert ts' mem1' LE1 TS FUTURE1. induction WRITE; i. { exploit future_write_lower; try eassumption; try refl; eauto. i. des. esplits. - econs 1; eauto. eapply TimeFacts.le_lt_lt; eauto. - ss. } inv KINDS. exploit future_write_lower; try exact WRITE_EX; try exact LE1; try refl; eauto. { unguard. des; subst; ss. econs. ss. } { unguard. des; subst; ss. econs. ss. } i. des. 
hexploit Memory.write_le; try exact WRITE'; eauto. i. des. exploit IHWRITE; eauto; try refl. i. des. esplits. - econs 2; eauto. eapply TimeFacts.le_lt_lt; eauto. - ss. Qed. Lemma future_read_step lc1 mem1 loc to val released ord lc2 lc1' mem1' (STEP: Local.read_step lc1 mem1 loc to val released ord lc2) (LOCAL1: lower_local lc1' lc1) (MEM1: Memory.future_weak mem1 mem1') (WF: Local.wf lc1 mem1) (CLOSED: Memory.closed mem1): exists lc2' released', (<<STEP': Local.read_step lc1' mem1' loc to val released' ord lc2'>>) /\ (<<RELEASED: View.opt_le released' released>>) /\ (<<LOCAL2: lower_local lc2' lc2>>). Proof. inv STEP. exploit Memory.future_weak_get1; try exact GET; eauto; ss. i. des. inv MSG_LE. esplits. - econs; eauto. + etrans; eauto. + inv LOCAL1. ss. eapply TViewFacts.readable_mon; try apply TVIEW; eauto. refl. - ss. - inv LOCAL1. ss. econs. eapply TViewFacts.read_tview_mon; try apply TVIEW; eauto. + apply WF. + inv CLOSED. exploit CLOSED0; eauto. i. des. inv MSG_WF. ss. + refl. Qed. Lemma future_write_step lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind lc1' sc1' mem1' releasedm' (STEP: Local.write_step lc1 sc1 mem1 loc from to val releasedm released ord lc2 sc2 mem2 kind) (KIND: Memory.op_kind_is_lower kind) (LOCAL1: lower_local lc1' lc1) (MEM1: Memory.future_weak mem1 mem1') (RELM_LE: View.opt_le releasedm' releasedm) (RELM_WF: View.opt_wf releasedm) (RELM_CLOSED: Memory.closed_opt_view releasedm mem1) (RELM_WF': View.opt_wf releasedm') (RELM_CLOSED': Memory.closed_opt_view releasedm' mem1') (WF: Local.wf lc1 mem1) (WF': Local.wf lc1' mem1') (CLOSED: Memory.closed mem1) (CLOSED': Memory.closed mem1'): exists released' lc2' sc2' mem2', (<<STEP': Local.write_step lc1' sc1' mem1' loc from to val releasedm' released' ord lc2' sc2' mem2' kind>>) /\ (<<LOCAL2: lower_local lc2' lc2>>) /\ (<<MEM2: Memory.future_weak mem2 mem2'>>). Proof. inv STEP. inv LOCAL1. ss. exploit TViewFacts.write_future0; try apply WF'; try apply RELM_WF'. s. i. des. exploit future_write_lower; try exact WRITE. { ss. } { apply WF'. } { econs; [refl|]. eapply TViewFacts.write_released_mon; eauto; try refl. apply WF. } { eauto. } { econs. unfold TView.write_released. condtac; ss. econs. apply Memory.join_closed_view. - inv RELM_CLOSED'; ss. inv CLOSED'. econs; ii; eauto. - unfold LocFun.add. condtac; ss. inv WRITE. inv PROMISE; ss. exploit Memory.lower_get0; try exact MEM. i. des. inv MSG_LE. exploit Memory.future_weak_get1; try exact GET; eauto; ss. i. des. inv MSG_LE. condtac; apply Memory.join_closed_view; try apply WF'; viewtac. } { ss. } i. des. esplits. - econs; eauto. ss. eapply writable_mon_non_release; eauto; try refl. apply TVIEW. - econs; ss. eapply write_tview_mon_non_release; eauto; try refl. apply WF. - ss. Qed. Lemma future_write_na_step lc1 sc1 mem1 loc from to val ord lc2 sc2 mem2 msgs kinds kind lc1' sc1' mem1' (STEP: Local.write_na_step lc1 sc1 mem1 loc from to val ord lc2 sc2 mem2 msgs kinds kind) (KINDS: List.Forall (fun x => Memory.op_kind_is_lower x) kinds) (KIND: Memory.op_kind_is_lower kind) (LOCAL1: lower_local lc1' lc1) (MEM1: Memory.future_weak mem1 mem1') (WF: Local.wf lc1 mem1) (WF': Local.wf lc1' mem1') (CLOSED: Memory.closed mem1) (CLOSED': Memory.closed mem1'): exists lc2' sc2' mem2', (<<STEP': Local.write_na_step lc1' sc1' mem1' loc from to val ord lc2' sc2' mem2' msgs kinds kind>>) /\ (<<LOCAL2: lower_local lc2' lc2>>) /\ (<<MEM2: Memory.future_weak mem2 mem2'>>). Proof. inv STEP. inv LOCAL1. ss. 
exploit future_write_na_lower; try exact WRITE; try exact MEM1; eauto; try apply WF'; try apply TVIEW. i. des. esplits. - econs; eauto. - econs; ss. eapply write_tview_mon_non_release; eauto; try refl. apply WF. - ss. Qed. Lemma future_is_racy lc1 mem1 loc to ord lc1' mem1' (STEP: Local.is_racy lc1 mem1 loc to ord) (LOCAL1: lower_local lc1' lc1) (MEM1: Memory.future_weak mem1 mem1') (WF: Local.wf lc1 mem1) (CLOSED: Memory.closed mem1): <<STEP': Local.is_racy lc1' mem1' loc to ord>>. Proof. inv STEP. inv LOCAL1. ss. exploit Memory.future_weak_get1; eauto. i. des. econs; eauto; ss. - eapply TViewFacts.racy_view_mon; try apply TVIEW; eauto. - ii. subst. inv MSG_LE. ss. - i. exploit MSG2; eauto. i. subst. inv MSG_LE. ss. Qed. Lemma future_lower_step lang e st1 lc1 sc1 mem1 st2 lc2 sc2 mem2 lc1' sc1' mem1' (STEP: @lower_step lang e (Thread.mk _ st1 lc1 sc1 mem1) (Thread.mk _ st2 lc2 sc2 mem2)) (LOCAL1: lower_local lc1' lc1) (MEM1: Memory.future_weak mem1 mem1') (WF: Local.wf lc1 mem1) (WF': Local.wf lc1' mem1') (SC: Memory.closed_timemap sc1 mem1) (SC': Memory.closed_timemap sc1' mem1') (CLOSED: Memory.closed mem1) (CLOSED': Memory.closed mem1'): exists e' lc2' sc2' mem2', (<<STEP': lower_step e' (Thread.mk _ st1 lc1' sc1' mem1') (Thread.mk _ st2 lc2' sc2' mem2')>>) /\ (<<EVENT: ThreadEvent.get_machine_event e = ThreadEvent.get_machine_event e'>>) /\ (<<LOCAL2: lower_local lc2' lc2>>) /\ (<<MEM2: Memory.future_weak mem2 mem2'>>). Proof. inv STEP. inv STEP0. inv LOCAL; ss. { (* silent *) esplits; eauto. - econs; [econs; eauto|..]; eauto. refl. - ss. } { (* read *) exploit future_read_step; try exact LOCAL0; try exact LOCAL1; try exact MEM1; eauto. i. des. esplits. - econs. + econs; [|econs 2]; eauto. + ss. + ss. refl. + ss. - ss. - ss. - ss. } { (* write *) exploit future_write_step; try exact LOCAL0; try exact LOCAL1; try exact SC1; try exact MEM1; eauto. { inv LOCAL0. eapply write_lower_memory_lower; eauto. } i. des. esplits. - econs. + econs; [|econs 3]; eauto. + ss. + ss. inv LOCAL0. inv STEP'. eapply write_lower_lower_memory; eauto. eapply write_lower_memory_lower; try exact WRITE; eauto. + i. ss. inv LOCAL0. inv STEP'. eapply write_lower_lower_memory_same; eauto. eapply write_same_memory_same in WRITE; eauto. destruct ord; ss. - ss. - ss. - ss. } { (* update *) exploit future_read_step; try exact LOCAL0; try exact LOCAL1; try exact MEM1; eauto. i. des. exploit Local.read_step_future; try exact LOCAL0; eauto. i. des. exploit Local.read_step_future; try exact STEP'; eauto. i. des. exploit future_write_step; try exact LOCAL2; try exact LOCAL3; try exact SC1; try exact MEM1; try exact RELEASED; eauto. { inv LOCAL2. eapply write_lower_memory_lower; eauto. } i. des. esplits. - econs. + econs; [|econs 4]; eauto. + ss. + ss. inv LOCAL2. inv STEP'0. eapply write_lower_lower_memory; eauto. eapply write_lower_memory_lower; try exact WRITE; eauto. + i. ss. - ss. - ss. - ss. } { (* fence *) inv LOCAL0. esplits. - econs. + econs; [|econs 5]; eauto. econs; eauto; ss. i. subst. ss. + ss. + ss. refl. + ss. - ss. - inv LOCAL1. econs. ss. eapply write_fence_tview_mon_non_release; eauto; try refl. + eapply read_fence_tview_mon_non_release; eauto; try refl. apply WF. + eapply TViewFacts.read_fence_future; apply WF'. + destruct ordw; ss. - ss. } { (* na write *) hexploit SAME; eauto. i. subst. hexploit write_na_step_lower_memory_lower; eauto. i. des. exploit future_write_na_step; try exact LOCAL0; try exact LOCAL1; try exact SC1; try exact MEM1; eauto. { eapply List.Forall_forall. i. 
eapply list_Forall2_in2 in KINDS; eauto. des. des_ifs. destruct x; ss. } { destruct kind; ss. } i. des. assert (mem2' = mem1'). { ss. inv LOCAL0. inv STEP'. eapply write_na_lower_lower_memory; eauto; eapply write_na_lower_memory_lower; try exact WRITE; eauto. } subst. esplits. - econs. + econs; [|econs 8]; eauto. + ss. + ss. refl. + ss. - ss. - ss. - ss. } { (* racy read *) inv LOCAL0. exploit future_is_racy; try exact RACE; try exact LOCAL1; try exact MEM1; eauto. i. des. esplits. - econs. + econs; [|econs 9]; eauto. + ss. + ss. refl. + ss. - ss. - ss. - ss. } Qed. Lemma delayed_future mem1 sc1 lang (st0 st1: lang.(Language.state)) lc0 lc1 mem0 sc0 (DELAYED: delayed st0 st1 lc0 lc1 sc0 mem0) (WF: Local.wf lc0 mem1) (SC: Memory.closed_timemap sc1 mem1) (MEM: Memory.closed mem1) (MEM_FUTURE: Memory.future_weak mem0 mem1): delayed st0 st1 lc0 lc1 sc1 mem1. Proof. unfold delayed in *. des. splits; auto. { inv LOCAL1. econs; ss. - eapply TView.future_weak_closed; eauto. - inv WF. etrans; try exact PROMISES1. ss. } cut (exists lc2' sc2' mem1', rtc (tau lower_step) (Thread.mk _ st0 lc0 sc1 mem1) (Thread.mk _ st1 lc2' sc2' mem1') /\ lower_local lc2' lc1'). { i. des. exploit lower_steps_future; try exact H. s. i. des. subst. esplits; eauto. etrans; eauto. } clear lc1 LOCAL1 PROMISES LOCAL MEM1. rename LOCAL0 into WF0. assert (LOCAL: lower_local lc0 lc0) by refl. revert LOCAL WF. generalize lc0 at 1 3 4. remember (Thread.mk lang st0 lc0 sc0 mem0) as th0. remember (Thread.mk lang st1 lc1' sc0 mem') as th1. move STEPS at top. revert_until STEPS. induction STEPS; i; subst. { inv Heqth1. esplits; eauto. } inv H. destruct y. exploit lower_step_future; try exact TSTEP. s. i. des. subst. exploit future_lower_step; try exact TSTEP; try exact LOCAL; try exact MEM_FUTURE; eauto. i. des. exploit Thread.step_future; try eapply lower_step_step; try exact TSTEP; eauto. s. i. des. exploit Thread.step_future; try eapply lower_step_step; try exact STEP'; eauto. s. i. des. exploit IHSTEPS; try exact LOCAL2; try exact MEM2; eauto. i. des. esplits. - econs 2; eauto. econs; eauto. congr. - ss. Qed. Section CLOSED. Variable loc_na: Loc.t -> Prop. Definition closed_future_timemap (tm: TimeMap.t) (mem0 mem1: Memory.t): Prop := forall loc (NA: loc_na loc), Memory.get loc (tm loc) mem1 = Memory.get loc (tm loc) mem0. Record closed_future_view (vw: View.t) (mem0 mem1: Memory.t): Prop := closed_future_view_intro { closed_future_pln: closed_future_timemap vw.(View.pln) mem0 mem1; closed_future_rlx: closed_future_timemap vw.(View.rlx) mem0 mem1; }. Record closed_future_tview (tvw: TView.t) (mem0 mem1: Memory.t): Prop := closed_future_tview_intro { closed_future_rel: forall loc, closed_future_view (tvw.(TView.rel) loc) mem0 mem1; closed_future_cur: closed_future_view tvw.(TView.cur) mem0 mem1; closed_future_acq: closed_future_view tvw.(TView.acq) mem0 mem1; }. End CLOSED.
State Before: K R : Type v V M : Type w inst✝⁵ : CommRing R inst✝⁴ : AddCommGroup M inst✝³ : Module R M inst✝² : Field K inst✝¹ : AddCommGroup V inst✝ : Module K V f : End R M ⊢ eigenspace f 0 = LinearMap.ker f State After: no goals Tactic: simp [eigenspace]
# Input location and time window (in months) for Foam data files.
type Foam_Data_Path <: Models_Data_Path
    dir_input :: ASCIIString
    filename :: ASCIIString
    start_on_month :: Int64
    last_on_month :: Int64
end

# Keyword constructor; the first_day/last_day keywords populate the
# start_on_month/last_on_month fields.
function AFoamProblem(; dir_input :: ASCIIString = "/home/3Tstorage/foam/run_sd1/storage/",
                        filename :: ASCIIString = "data/control_sd1.nc",
                        first_day :: Int64 = 0,
                        last_day :: Int64 = 0)
    return Foam_Data_Path(dir_input, filename, first_day, last_day)
end

export Foam_Data_Path
export AFoamProblem
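# Minimal usage sketch (hypothetical values; assumes the surrounding package
# defines Models_Data_Path and a Julia version that still provides ASCIIString):
prob = AFoamProblem(first_day = 1, last_day = 12)
prob.dir_input  # defaults to "/home/3Tstorage/foam/run_sd1/storage/"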
lemma is_nth_power_nth_power': assumes "n dvd n'" shows "is_nth_power n (m ^ n')"
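(* Proof idea: from "n dvd n'" obtain k with n' = n * k; then m ^ n' = (m ^ k) ^ n, exhibiting m ^ n' as an nth power. *)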
[STATEMENT] lemma token_smallest_accepting_rank: assumes "smallest_accepting_rank = Some i" shows "\<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x \<longleftrightarrow> (x > n \<or> (\<exists>j \<ge> i. rank x n = Some j) \<or> token_run x n \<in> F)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] proof - [PROOF STATE] proof (state) goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] from assms [PROOF STATE] proof (chain) picking this: smallest_accepting_rank = Some i [PROOF STEP] have "accept" "finite fail" "infinite (succeed i)" "\<forall>j < i. finite (succeed j)" [PROOF STATE] proof (prove) using this: smallest_accepting_rank = Some i goal (1 subgoal): 1. (accept\<^sub>M &&& finite fail) &&& infinite (succeed i) &&& \<forall>j<i. finite (succeed j) [PROOF STEP] using smallest_accepting_rank_properties [PROOF STATE] proof (prove) using this: smallest_accepting_rank = Some i smallest_accepting_rank = Some ?i \<Longrightarrow> accept\<^sub>M smallest_accepting_rank = Some ?i \<Longrightarrow> finite fail smallest_accepting_rank = Some ?i \<Longrightarrow> finite (local.merge ?i) smallest_accepting_rank = Some ?i \<Longrightarrow> infinite (succeed ?i) smallest_accepting_rank = Some ?i \<Longrightarrow> \<forall>j<?i. finite (succeed j) smallest_accepting_rank = Some ?i \<Longrightarrow> ?i < max_rank goal (1 subgoal): 1. (accept\<^sub>M &&& finite fail) &&& infinite (succeed i) &&& \<forall>j<i. finite (succeed j) [PROOF STEP] by blast+ [PROOF STATE] proof (state) this: accept\<^sub>M finite fail infinite (succeed i) \<forall>j<i. finite (succeed j) goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] then [PROOF STATE] proof (chain) picking this: accept\<^sub>M finite fail infinite (succeed i) \<forall>j<i. finite (succeed j) [PROOF STEP] obtain n\<^sub>1 where n\<^sub>1_def: "\<forall>x \<ge> n\<^sub>1. token_succeeds x" [PROOF STATE] proof (prove) using this: accept\<^sub>M finite fail infinite (succeed i) \<forall>j<i. finite (succeed j) goal (1 subgoal): 1. (\<And>n\<^sub>1. \<forall>x\<ge>n\<^sub>1. token_succeeds x \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] unfolding accept_def MOST_nat_le [PROOF STATE] proof (prove) using this: \<exists>m. \<forall>n\<ge>m. token_succeeds n finite fail infinite (succeed i) \<forall>j<i. finite (succeed j) goal (1 subgoal): 1. (\<And>n\<^sub>1. \<forall>x\<ge>n\<^sub>1. token_succeeds x \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by blast [PROOF STATE] proof (state) this: \<forall>x\<ge>n\<^sub>1. token_succeeds x goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] define n\<^sub>2 where "n\<^sub>2 = Suc (Max (fail_t \<union> \<Union>{succeed_t j | j. j < i}))" (is "_ = Suc (Max ?S)") [PROOF STATE] proof (state) this: n\<^sub>2 = Suc (Max (fail_t \<union> \<Union> {succeed_t j |j. j < i})) goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. 
rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] define n\<^sub>3 where "n\<^sub>3 = Max ({LEAST m. stable_rank_at x m | x. x < n\<^sub>1 \<and> token_squats x})" (is "_ = Max ?S'") [PROOF STATE] proof (state) this: n\<^sub>3 = Max {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] define n where "n = Max {n\<^sub>1, n\<^sub>2, n\<^sub>3}" [PROOF STATE] proof (state) this: n = Max {n\<^sub>1, n\<^sub>2, n\<^sub>3} goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] have "finite ?S" and "finite ?S'" [PROOF STATE] proof (prove) goal (1 subgoal): 1. finite (fail_t \<union> \<Union> {succeed_t j |j. j < i}) &&& finite {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} [PROOF STEP] using \<open>finite fail\<close> \<open>\<forall>j < i. finite (succeed j)\<close> [PROOF STATE] proof (prove) using this: finite fail \<forall>j<i. finite (succeed j) goal (1 subgoal): 1. finite (fail_t \<union> \<Union> {succeed_t j |j. j < i}) &&& finite {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} [PROOF STEP] unfolding finite_fail_t finite_succeed_t [PROOF STATE] proof (prove) using this: finite fail_t \<forall>j<i. finite (succeed_t j) goal (1 subgoal): 1. finite (fail_t \<union> \<Union> {succeed_t j |j. j < i}) &&& finite {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} [PROOF STEP] by fastforce+ [PROOF STATE] proof (state) this: finite (fail_t \<union> \<Union> {succeed_t j |j. j < i}) finite {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] { [PROOF STATE] proof (state) this: finite (fail_t \<union> \<Union> {succeed_t j |j. j < i}) finite {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] fix x [PROOF STATE] proof (state) goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] assume "x < n\<^sub>1" "token_squats x" [PROOF STATE] proof (state) this: x < n\<^sub>1 token_squats x goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence "(LEAST m. stable_rank_at x m) \<in> ?S'" (is "?m \<in> _") [PROOF STATE] proof (prove) using this: x < n\<^sub>1 token_squats x goal (1 subgoal): 1. (LEAST m. stable_rank_at x m) \<in> {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} [PROOF STEP] by blast [PROOF STATE] proof (state) this: (LEAST m. stable_rank_at x m) \<in> {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence "?m \<le> n\<^sub>3" [PROOF STATE] proof (prove) using this: (LEAST m. 
stable_rank_at x m) \<in> {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} goal (1 subgoal): 1. (LEAST m. stable_rank_at x m) \<le> n\<^sub>3 [PROOF STEP] using Max.coboundedI[OF \<open>finite ?S'\<close>] n\<^sub>3_def [PROOF STATE] proof (prove) using this: (LEAST m. stable_rank_at x m) \<in> {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} ?a \<in> {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} \<Longrightarrow> ?a \<le> Max {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} n\<^sub>3 = Max {LEAST m. stable_rank_at x m |x. x < n\<^sub>1 \<and> token_squats x} goal (1 subgoal): 1. (LEAST m. stable_rank_at x m) \<le> n\<^sub>3 [PROOF STEP] by simp [PROOF STATE] proof (state) this: (LEAST m. stable_rank_at x m) \<le> n\<^sub>3 goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] moreover [PROOF STATE] proof (state) this: (LEAST m. stable_rank_at x m) \<le> n\<^sub>3 goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] obtain k where "stable_rank x k" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<And>k. stable_rank x k \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] using \<open>x < n\<^sub>1\<close> \<open>token_squats x\<close> stable_rank_equiv_token_squats [PROOF STATE] proof (prove) using this: x < n\<^sub>1 token_squats x token_squats ?x = (\<exists>i. stable_rank ?x i) goal (1 subgoal): 1. (\<And>k. stable_rank x k \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by blast [PROOF STATE] proof (state) this: stable_rank x k goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence "stable_rank_at x ?m" [PROOF STATE] proof (prove) using this: stable_rank x k goal (1 subgoal): 1. stable_rank_at x (LEAST m. stable_rank_at x m) [PROOF STEP] by (metis stable_rank_equiv LeastI) [PROOF STATE] proof (state) this: stable_rank_at x (LEAST m. stable_rank_at x m) goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: (LEAST m. stable_rank_at x m) \<le> n\<^sub>3 stable_rank_at x (LEAST m. stable_rank_at x m) [PROOF STEP] have "stable_rank_at x n\<^sub>3" [PROOF STATE] proof (prove) using this: (LEAST m. stable_rank_at x m) \<le> n\<^sub>3 stable_rank_at x (LEAST m. stable_rank_at x m) goal (1 subgoal): 1. stable_rank_at x n\<^sub>3 [PROOF STEP] by (rule stable_rank_at_ge) [PROOF STATE] proof (state) this: stable_rank_at x n\<^sub>3 goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence "\<exists>i. \<forall>m' \<ge> n. rank x m' = Some i" [PROOF STATE] proof (prove) using this: stable_rank_at x n\<^sub>3 goal (1 subgoal): 1. \<exists>i. \<forall>m'\<ge>n. rank x m' = Some i [PROOF STEP] unfolding n_def stable_rank_at_def [PROOF STATE] proof (prove) using this: \<exists>i. \<forall>m\<ge>n\<^sub>3. rank x m = Some i goal (1 subgoal): 1. \<exists>i. \<forall>m'\<ge>Max {n\<^sub>1, n\<^sub>2, n\<^sub>3}. 
rank x m' = Some i [PROOF STEP] by fastforce [PROOF STATE] proof (state) this: \<exists>i. \<forall>m'\<ge>n. rank x m' = Some i goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] } [PROOF STATE] proof (state) this: \<lbrakk>?x2 < n\<^sub>1; token_squats ?x2\<rbrakk> \<Longrightarrow> \<exists>i. \<forall>m'\<ge>n. rank ?x2 m' = Some i goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] note Stable = this [PROOF STATE] proof (state) this: \<lbrakk>?x2 < n\<^sub>1; token_squats ?x2\<rbrakk> \<Longrightarrow> \<exists>i. \<forall>m'\<ge>n. rank ?x2 m' = Some i goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] have "\<And>m j. j < i \<Longrightarrow> m \<in> succeed_t j \<Longrightarrow> m < n" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>m j. \<lbrakk>j < i; m \<in> succeed_t j\<rbrakk> \<Longrightarrow> m < n [PROOF STEP] using Max.coboundedI[OF \<open>finite ?S\<close>] [PROOF STATE] proof (prove) using this: ?a \<in> fail_t \<union> \<Union> {succeed_t j |j. j < i} \<Longrightarrow> ?a \<le> Max (fail_t \<union> \<Union> {succeed_t j |j. j < i}) goal (1 subgoal): 1. \<And>m j. \<lbrakk>j < i; m \<in> succeed_t j\<rbrakk> \<Longrightarrow> m < n [PROOF STEP] unfolding n_def n\<^sub>2_def [PROOF STATE] proof (prove) using this: ?a \<in> fail_t \<union> \<Union> {succeed_t j |j. j < i} \<Longrightarrow> ?a \<le> Max (fail_t \<union> \<Union> {succeed_t j |j. j < i}) goal (1 subgoal): 1. \<And>m j. \<lbrakk>j < i; m \<in> succeed_t j\<rbrakk> \<Longrightarrow> m < Max {n\<^sub>1, Suc (Max (fail_t \<union> \<Union> {succeed_t j |j. j < i})), n\<^sub>3} [PROOF STEP] by fastforce [PROOF STATE] proof (state) this: \<lbrakk>?j < i; ?m \<in> succeed_t ?j\<rbrakk> \<Longrightarrow> ?m < n goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence Succeed: "\<And>m j x. n \<le> m \<Longrightarrow> token_run x m \<notin> F - {q\<^sub>0} \<Longrightarrow> token_run x (Suc m) \<in> F \<Longrightarrow> rank x m = Some j \<Longrightarrow> i \<le> j" [PROOF STATE] proof (prove) using this: \<lbrakk>?j < i; ?m \<in> succeed_t ?j\<rbrakk> \<Longrightarrow> ?m < n goal (1 subgoal): 1. \<And>m j x. \<lbrakk>n \<le> m; token_run x m \<notin> F - {q\<^sub>0}; token_run x (Suc m) \<in> F; rank x m = Some j\<rbrakk> \<Longrightarrow> i \<le> j [PROOF STEP] by (metis not_le succeed_t_inclusion) [PROOF STATE] proof (state) this: \<lbrakk>n \<le> ?m; token_run ?x ?m \<notin> F - {q\<^sub>0}; token_run ?x (Suc ?m) \<in> F; rank ?x ?m = Some ?j\<rbrakk> \<Longrightarrow> i \<le> ?j goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] have "\<And>m. m \<in> fail_t \<Longrightarrow> m < n" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>m. m \<in> fail_t \<Longrightarrow> m < n [PROOF STEP] using Max.coboundedI[OF \<open>finite ?S\<close>] [PROOF STATE] proof (prove) using this: ?a \<in> fail_t \<union> \<Union> {succeed_t j |j. 
j < i} \<Longrightarrow> ?a \<le> Max (fail_t \<union> \<Union> {succeed_t j |j. j < i}) goal (1 subgoal): 1. \<And>m. m \<in> fail_t \<Longrightarrow> m < n [PROOF STEP] unfolding n_def n\<^sub>2_def [PROOF STATE] proof (prove) using this: ?a \<in> fail_t \<union> \<Union> {succeed_t j |j. j < i} \<Longrightarrow> ?a \<le> Max (fail_t \<union> \<Union> {succeed_t j |j. j < i}) goal (1 subgoal): 1. \<And>m. m \<in> fail_t \<Longrightarrow> m < Max {n\<^sub>1, Suc (Max (fail_t \<union> \<Union> {succeed_t j |j. j < i})), n\<^sub>3} [PROOF STEP] by fastforce [PROOF STATE] proof (state) this: ?m \<in> fail_t \<Longrightarrow> ?m < n goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence Fail: "\<And>m x. n \<le> m \<Longrightarrow> x \<le> m \<Longrightarrow> sink (token_run x m) \<or> \<not>sink (token_run x (Suc m)) \<or> \<not>token_run x (Suc m) \<notin> F" [PROOF STATE] proof (prove) using this: ?m \<in> fail_t \<Longrightarrow> ?m < n goal (1 subgoal): 1. \<And>m x. \<lbrakk>n \<le> m; x \<le> m\<rbrakk> \<Longrightarrow> sink (token_run x m) \<or> \<not> sink (token_run x (Suc m)) \<or> \<not> token_run x (Suc m) \<notin> F [PROOF STEP] using fail_t_inclusion [PROOF STATE] proof (prove) using this: ?m \<in> fail_t \<Longrightarrow> ?m < n \<lbrakk>?x \<le> ?n; \<not> sink (token_run ?x ?n); sink (token_run ?x (Suc ?n)); token_run ?x (Suc ?n) \<notin> F\<rbrakk> \<Longrightarrow> ?n \<in> fail_t goal (1 subgoal): 1. \<And>m x. \<lbrakk>n \<le> m; x \<le> m\<rbrakk> \<Longrightarrow> sink (token_run x m) \<or> \<not> sink (token_run x (Suc m)) \<or> \<not> token_run x (Suc m) \<notin> F [PROOF STEP] by fastforce [PROOF STATE] proof (state) this: \<lbrakk>n \<le> ?m; ?x \<le> ?m\<rbrakk> \<Longrightarrow> sink (token_run ?x ?m) \<or> \<not> sink (token_run ?x (Suc ?m)) \<or> \<not> token_run ?x (Suc ?m) \<notin> F goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] { [PROOF STATE] proof (state) this: \<lbrakk>n \<le> ?m; ?x \<le> ?m\<rbrakk> \<Longrightarrow> sink (token_run ?x ?m) \<or> \<not> sink (token_run ?x (Suc ?m)) \<or> \<not> token_run ?x (Suc ?m) \<notin> F goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] fix m x [PROOF STATE] proof (state) goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] assume "m \<ge> n" "m \<ge> x" [PROOF STATE] proof (state) this: n \<le> m x \<le> m goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] moreover [PROOF STATE] proof (state) this: n \<le> m x \<le> m goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] { [PROOF STATE] proof (state) this: n \<le> m x \<le> m goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. 
rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] assume "token_succeeds x" "token_run x m \<notin> F" [PROOF STATE] proof (state) this: token_succeeds x token_run x m \<notin> F goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] then [PROOF STATE] proof (chain) picking this: token_succeeds x token_run x m \<notin> F [PROOF STEP] obtain m' where "x \<le> m'" and "token_run x m' \<notin> F - {q\<^sub>0}" and "token_run x (Suc m') \<in> F" [PROOF STATE] proof (prove) using this: token_succeeds x token_run x m \<notin> F goal (1 subgoal): 1. (\<And>m'. \<lbrakk>x \<le> m'; token_run x m' \<notin> F - {q\<^sub>0}; token_run x (Suc m') \<in> F\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] using token_run_enter_final_states [PROOF STATE] proof (prove) using this: token_succeeds x token_run x m \<notin> F token_run ?x ?n \<in> F \<Longrightarrow> \<exists>m\<ge>?x. token_run ?x m \<notin> F - {q\<^sub>0} \<and> token_run ?x (Suc m) \<in> F goal (1 subgoal): 1. (\<And>m'. \<lbrakk>x \<le> m'; token_run x m' \<notin> F - {q\<^sub>0}; token_run x (Suc m') \<in> F\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] unfolding token_succeeds_def [PROOF STATE] proof (prove) using this: \<exists>n. token_run x n \<in> F token_run x m \<notin> F token_run ?x ?n \<in> F \<Longrightarrow> \<exists>m\<ge>?x. token_run ?x m \<notin> F - {q\<^sub>0} \<and> token_run ?x (Suc m) \<in> F goal (1 subgoal): 1. (\<And>m'. \<lbrakk>x \<le> m'; token_run x m' \<notin> F - {q\<^sub>0}; token_run x (Suc m') \<in> F\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by meson [PROOF STATE] proof (state) this: x \<le> m' token_run x m' \<notin> F - {q\<^sub>0} token_run x (Suc m') \<in> F goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] moreover [PROOF STATE] proof (state) this: x \<le> m' token_run x m' \<notin> F - {q\<^sub>0} token_run x (Suc m') \<in> F goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence "\<not>sink (token_run x m')" [PROOF STATE] proof (prove) using this: x \<le> m' token_run x m' \<notin> F - {q\<^sub>0} token_run x (Suc m') \<in> F goal (1 subgoal): 1. \<not> sink (token_run x m') [PROOF STEP] by (metis Diff_empty Diff_insert0 \<open>token_run x m \<notin> F\<close> initial_in_F_token_run token_is_not_in_sink) [PROOF STATE] proof (state) this: \<not> sink (token_run x m') goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: x \<le> m' token_run x m' \<notin> F - {q\<^sub>0} token_run x (Suc m') \<in> F \<not> sink (token_run x m') [PROOF STEP] obtain j' where "rank x m' = Some j'" [PROOF STATE] proof (prove) using this: x \<le> m' token_run x m' \<notin> F - {q\<^sub>0} token_run x (Suc m') \<in> F \<not> sink (token_run x m') goal (1 subgoal): 1. (\<And>j'. rank x m' = Some j' \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by simp [PROOF STATE] proof (state) this: rank x m' = Some j' goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. 
token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] moreover [PROOF STATE] proof (state) this: rank x m' = Some j' goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] have "m \<le> m'" [PROOF STATE] proof (prove) goal (1 subgoal): 1. m \<le> m' [PROOF STEP] by (metis \<open>token_run x m \<notin> F\<close> token_stays_in_final_states[OF \<open>token_run x (Suc m') \<in> F\<close>] add_Suc_right add_Suc_shift less_imp_Suc_add not_le) [PROOF STATE] proof (state) this: m \<le> m' goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] moreover [PROOF STATE] proof (state) this: m \<le> m' goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence "m' \<ge> n" [PROOF STATE] proof (prove) using this: m \<le> m' goal (1 subgoal): 1. n \<le> m' [PROOF STEP] using \<open>x \<le> m\<close> \<open>m \<ge> n\<close> [PROOF STATE] proof (prove) using this: m \<le> m' x \<le> m n \<le> m goal (1 subgoal): 1. n \<le> m' [PROOF STEP] by simp [PROOF STATE] proof (state) this: n \<le> m' goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence "j' \<ge> i" [PROOF STATE] proof (prove) using this: n \<le> m' goal (1 subgoal): 1. i \<le> j' [PROOF STEP] using Succeed[OF _ \<open>token_run x m' \<notin> F - {q\<^sub>0}\<close> \<open>token_run x (Suc m') \<in> F\<close> \<open>rank x m' = Some j'\<close>] [PROOF STATE] proof (prove) using this: n \<le> m' n \<le> m' \<Longrightarrow> i \<le> j' goal (1 subgoal): 1. i \<le> j' [PROOF STEP] by blast [PROOF STATE] proof (state) this: i \<le> j' goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] moreover [PROOF STATE] proof (state) this: i \<le> j' goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] obtain k where "rank x x = Some k" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<And>k. rank x x = Some k \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] using rank_initial[of x] [PROOF STATE] proof (prove) using this: \<exists>i. rank x x = Some i goal (1 subgoal): 1. (\<And>k. rank x x = Some k \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by blast [PROOF STATE] proof (state) this: rank x x = Some k goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: rank x m' = Some j' m \<le> m' i \<le> j' rank x x = Some k [PROOF STEP] obtain j where "rank x m = Some j" [PROOF STATE] proof (prove) using this: rank x m' = Some j' m \<le> m' i \<le> j' rank x x = Some k goal (1 subgoal): 1. (\<And>j. 
rank x m = Some j \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by (metis rank_continuous[OF \<open>rank x x = Some k\<close>, of "m' - x"] \<open>x \<le> m'\<close> \<open>x \<le> m\<close> diff_le_mono le_add_diff_inverse) [PROOF STATE] proof (state) this: rank x m = Some j goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence "\<exists>j \<ge> i. rank x m = Some j" [PROOF STATE] proof (prove) using this: rank x m = Some j goal (1 subgoal): 1. \<exists>j\<ge>i. rank x m = Some j [PROOF STEP] using rank_monotonic \<open>rank x m' = Some j'\<close> \<open>j' \<ge> i\<close> \<open>m \<le> m'\<close>[THEN le_Suc_ex] [PROOF STATE] proof (prove) using this: rank x m = Some j \<lbrakk>rank ?x ?n = Some ?i; rank ?x (?n + ?m) = Some ?j\<rbrakk> \<Longrightarrow> ?j \<le> ?i rank x m' = Some j' i \<le> j' \<exists>n. m' = m + n goal (1 subgoal): 1. \<exists>j\<ge>i. rank x m = Some j [PROOF STEP] by (blast dest: le_Suc_ex trans_le_add1) [PROOF STATE] proof (state) this: \<exists>j\<ge>i. rank x m = Some j goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] } [PROOF STATE] proof (state) this: \<lbrakk>token_succeeds x; token_run x m \<notin> F\<rbrakk> \<Longrightarrow> \<exists>j\<ge>i. rank x m = Some j goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] moreover [PROOF STATE] proof (state) this: \<lbrakk>token_succeeds x; token_run x m \<notin> F\<rbrakk> \<Longrightarrow> \<exists>j\<ge>i. rank x m = Some j goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] { [PROOF STATE] proof (state) this: \<lbrakk>token_succeeds x; token_run x m \<notin> F\<rbrakk> \<Longrightarrow> \<exists>j\<ge>i. rank x m = Some j goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] assume "\<not>token_succeeds x" [PROOF STATE] proof (state) this: \<not> token_succeeds x goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] hence "\<And>m. token_run x m \<notin> F" [PROOF STATE] proof (prove) using this: \<not> token_succeeds x goal (1 subgoal): 1. \<And>m. token_run x m \<notin> F [PROOF STEP] unfolding token_succeeds_def [PROOF STATE] proof (prove) using this: \<nexists>n. token_run x n \<in> F goal (1 subgoal): 1. \<And>m. token_run x m \<notin> F [PROOF STEP] by blast [PROOF STATE] proof (state) this: token_run x ?m \<notin> F goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] moreover [PROOF STATE] proof (state) this: token_run x ?m \<notin> F goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] have "\<not>(\<exists>j \<ge> i. rank x m = Some j)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<not> (\<exists>j\<ge>i. 
rank x m = Some j) [PROOF STEP] proof (cases "token_squats x") [PROOF STATE] proof (state) goal (2 subgoals): 1. token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) 2. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] case True \<comment> \<open>The token already stabilised\<close> [PROOF STATE] proof (state) this: token_squats x goal (2 subgoals): 1. token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) 2. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] have "x < n\<^sub>1" [PROOF STATE] proof (prove) goal (1 subgoal): 1. x < n\<^sub>1 [PROOF STEP] using \<open>\<not>token_succeeds x\<close> n\<^sub>1_def [PROOF STATE] proof (prove) using this: \<not> token_succeeds x \<forall>x\<ge>n\<^sub>1. token_succeeds x goal (1 subgoal): 1. x < n\<^sub>1 [PROOF STEP] by (metis not_le) [PROOF STATE] proof (state) this: x < n\<^sub>1 goal (2 subgoals): 1. token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) 2. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] then [PROOF STATE] proof (chain) picking this: x < n\<^sub>1 [PROOF STEP] obtain k where "\<forall>m' \<ge> n. rank x m' = Some k" [PROOF STATE] proof (prove) using this: x < n\<^sub>1 goal (1 subgoal): 1. (\<And>k. \<forall>m'\<ge>n. rank x m' = Some k \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] using Stable[OF _ True] [PROOF STATE] proof (prove) using this: x < n\<^sub>1 x < n\<^sub>1 \<Longrightarrow> \<exists>i. \<forall>m'\<ge>n. rank x m' = Some i goal (1 subgoal): 1. (\<And>k. \<forall>m'\<ge>n. rank x m' = Some k \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by blast [PROOF STATE] proof (state) this: \<forall>m'\<ge>n. rank x m' = Some k goal (2 subgoals): 1. token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) 2. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] moreover [PROOF STATE] proof (state) this: \<forall>m'\<ge>n. rank x m' = Some k goal (2 subgoals): 1. token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) 2. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] hence "stable_rank x k" [PROOF STATE] proof (prove) using this: \<forall>m'\<ge>n. rank x m' = Some k goal (1 subgoal): 1. stable_rank x k [PROOF STEP] unfolding stable_rank_def MOST_nat_le [PROOF STATE] proof (prove) using this: \<forall>m'\<ge>n. rank x m' = Some k goal (1 subgoal): 1. \<exists>m. \<forall>n\<ge>m. rank x n = Some k [PROOF STEP] by blast [PROOF STATE] proof (state) this: stable_rank x k goal (2 subgoals): 1. token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) 2. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] moreover [PROOF STATE] proof (state) this: stable_rank x k goal (2 subgoals): 1. token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) 2. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] have "q\<^sub>0 \<notin> F" [PROOF STATE] proof (prove) goal (1 subgoal): 1. q\<^sub>0 \<notin> F [PROOF STEP] by (metis \<open>\<And>m. token_run x m \<notin> F\<close> initial_in_F_token_run) [PROOF STATE] proof (state) this: q\<^sub>0 \<notin> F goal (2 subgoals): 1. 
token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) 2. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] ultimately \<comment> \<open>Hence the rank is smaller than i\<close> [PROOF STATE] proof (chain) picking this: \<forall>m'\<ge>n. rank x m' = Some k stable_rank x k q\<^sub>0 \<notin> F [PROOF STEP] have "k < i" and "rank x m = Some k" [PROOF STATE] proof (prove) using this: \<forall>m'\<ge>n. rank x m' = Some k stable_rank x k q\<^sub>0 \<notin> F goal (1 subgoal): 1. k < i &&& rank x m = Some k [PROOF STEP] using stable_rank_bounded \<open>infinite (succeed i)\<close> \<open>n \<le> m\<close> [PROOF STATE] proof (prove) using this: \<forall>m'\<ge>n. rank x m' = Some k stable_rank x k q\<^sub>0 \<notin> F \<lbrakk>stable_rank ?x ?j; infinite (succeed ?i); q\<^sub>0 \<notin> F\<rbrakk> \<Longrightarrow> ?j < ?i infinite (succeed i) n \<le> m goal (1 subgoal): 1. k < i &&& rank x m = Some k [PROOF STEP] by blast+ [PROOF STATE] proof (state) this: k < i rank x m = Some k goal (2 subgoals): 1. token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) 2. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] thus ?thesis [PROOF STATE] proof (prove) using this: k < i rank x m = Some k goal (1 subgoal): 1. \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] by simp [PROOF STATE] proof (state) this: \<not> (\<exists>j\<ge>i. rank x m = Some j) goal (1 subgoal): 1. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] next [PROOF STATE] proof (state) goal (1 subgoal): 1. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] case False \<comment> \<open>Then token is already in a sink\<close> [PROOF STATE] proof (state) this: \<not> token_squats x goal (1 subgoal): 1. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] have "sink (token_run x m)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. sink (token_run x m) [PROOF STEP] proof (rule ccontr) [PROOF STATE] proof (state) goal (1 subgoal): 1. \<not> sink (token_run x m) \<Longrightarrow> False [PROOF STEP] assume "\<not>sink (token_run x m)" [PROOF STATE] proof (state) this: \<not> sink (token_run x m) goal (1 subgoal): 1. \<not> sink (token_run x m) \<Longrightarrow> False [PROOF STEP] moreover [PROOF STATE] proof (state) this: \<not> sink (token_run x m) goal (1 subgoal): 1. \<not> sink (token_run x m) \<Longrightarrow> False [PROOF STEP] obtain m' where "m < m'" and "sink (token_run x m')" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<And>m'. \<lbrakk>m < m'; sink (token_run x m')\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by (metis False token_squats_def le_add2 not_le not_less_eq_eq token_stays_in_sink) [PROOF STATE] proof (state) this: m < m' sink (token_run x m') goal (1 subgoal): 1. \<not> sink (token_run x m) \<Longrightarrow> False [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: \<not> sink (token_run x m) m < m' sink (token_run x m') [PROOF STEP] obtain m'' where "m \<le> m''" and "\<not>sink (token_run x m'')" and "sink (token_run x (Suc m''))" [PROOF STATE] proof (prove) using this: \<not> sink (token_run x m) m < m' sink (token_run x m') goal (1 subgoal): 1. (\<And>m''. 
\<lbrakk>m \<le> m''; \<not> sink (token_run x m''); sink (token_run x (Suc m''))\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis [PROOF STEP] by (metis le_add1 less_imp_Suc_add token_run_P) [PROOF STATE] proof (state) this: m \<le> m'' \<not> sink (token_run x m'') sink (token_run x (Suc m'')) goal (1 subgoal): 1. \<not> sink (token_run x m) \<Longrightarrow> False [PROOF STEP] thus False [PROOF STATE] proof (prove) using this: m \<le> m'' \<not> sink (token_run x m'') sink (token_run x (Suc m'')) goal (1 subgoal): 1. False [PROOF STEP] by (metis Fail \<open>\<And>m. token_run x m \<notin> F\<close> \<open>n \<le> m\<close> \<open>x \<le> m\<close> le_trans) [PROOF STATE] proof (state) this: False goal: No subgoals! [PROOF STEP] qed \<comment> \<open>Hence there is no rank\<close> [PROOF STATE] proof (state) this: sink (token_run x m) goal (1 subgoal): 1. \<not> token_squats x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] thus ?thesis [PROOF STATE] proof (prove) using this: sink (token_run x m) goal (1 subgoal): 1. \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] by simp [PROOF STATE] proof (state) this: \<not> (\<exists>j\<ge>i. rank x m = Some j) goal: No subgoals! [PROOF STEP] qed [PROOF STATE] proof (state) this: \<not> (\<exists>j\<ge>i. rank x m = Some j) goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: token_run x ?m \<notin> F \<not> (\<exists>j\<ge>i. rank x m = Some j) [PROOF STEP] have "\<not>(\<exists>j \<ge> i. rank x m = Some j) \<and> token_run x m \<notin> F" [PROOF STATE] proof (prove) using this: token_run x ?m \<notin> F \<not> (\<exists>j\<ge>i. rank x m = Some j) goal (1 subgoal): 1. \<not> (\<exists>j\<ge>i. rank x m = Some j) \<and> token_run x m \<notin> F [PROOF STEP] by blast [PROOF STATE] proof (state) this: \<not> (\<exists>j\<ge>i. rank x m = Some j) \<and> token_run x m \<notin> F goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] } [PROOF STATE] proof (state) this: \<not> token_succeeds x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) \<and> token_run x m \<notin> F goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: n \<le> m x \<le> m \<lbrakk>token_succeeds x; token_run x m \<notin> F\<rbrakk> \<Longrightarrow> \<exists>j\<ge>i. rank x m = Some j \<not> token_succeeds x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) \<and> token_run x m \<notin> F [PROOF STEP] have "(\<exists>j \<ge> i. rank x m = Some j) \<or> token_run x m \<in> F \<longleftrightarrow> token_succeeds x" [PROOF STATE] proof (prove) using this: n \<le> m x \<le> m \<lbrakk>token_succeeds x; token_run x m \<notin> F\<rbrakk> \<Longrightarrow> \<exists>j\<ge>i. rank x m = Some j \<not> token_succeeds x \<Longrightarrow> \<not> (\<exists>j\<ge>i. rank x m = Some j) \<and> token_run x m \<notin> F goal (1 subgoal): 1. ((\<exists>j\<ge>i. rank x m = Some j) \<or> token_run x m \<in> F) = token_succeeds x [PROOF STEP] by (cases "token_succeeds x") (blast, simp) [PROOF STATE] proof (state) this: ((\<exists>j\<ge>i. 
rank x m = Some j) \<or> token_run x m \<in> F) = token_succeeds x goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] } [PROOF STATE] proof (state) this: \<lbrakk>n \<le> ?m2; ?x2 \<le> ?m2\<rbrakk> \<Longrightarrow> ((\<exists>j\<ge>i. rank ?x2 ?m2 = Some j) \<or> token_run ?x2 ?m2 \<in> F) = token_succeeds ?x2 goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] moreover \<comment> \<open>By definition of n all tokens @{term "\<And>x. x \<ge> n"} succeed\<close> [PROOF STATE] proof (state) this: \<lbrakk>n \<le> ?m2; ?x2 \<le> ?m2\<rbrakk> \<Longrightarrow> ((\<exists>j\<ge>i. rank ?x2 ?m2 = Some j) \<or> token_run ?x2 ?m2 \<in> F) = token_succeeds ?x2 goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] have "\<And>m x. m \<ge> n \<Longrightarrow> \<not>x \<le> m \<Longrightarrow> token_succeeds x" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>m x. \<lbrakk>n \<le> m; \<not> x \<le> m\<rbrakk> \<Longrightarrow> token_succeeds x [PROOF STEP] using n_def n\<^sub>1_def [PROOF STATE] proof (prove) using this: n = Max {n\<^sub>1, n\<^sub>2, n\<^sub>3} \<forall>x\<ge>n\<^sub>1. token_succeeds x goal (1 subgoal): 1. \<And>m x. \<lbrakk>n \<le> m; \<not> x \<le> m\<rbrakk> \<Longrightarrow> token_succeeds x [PROOF STEP] by force [PROOF STATE] proof (state) this: \<lbrakk>n \<le> ?m; \<not> ?x \<le> ?m\<rbrakk> \<Longrightarrow> token_succeeds ?x goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: \<lbrakk>n \<le> ?m2; ?x2 \<le> ?m2\<rbrakk> \<Longrightarrow> ((\<exists>j\<ge>i. rank ?x2 ?m2 = Some j) \<or> token_run ?x2 ?m2 \<in> F) = token_succeeds ?x2 \<lbrakk>n \<le> ?m; \<not> ?x \<le> ?m\<rbrakk> \<Longrightarrow> token_succeeds ?x [PROOF STEP] show ?thesis [PROOF STATE] proof (prove) using this: \<lbrakk>n \<le> ?m2; ?x2 \<le> ?m2\<rbrakk> \<Longrightarrow> ((\<exists>j\<ge>i. rank ?x2 ?m2 = Some j) \<or> token_run ?x2 ?m2 \<in> F) = token_succeeds ?x2 \<lbrakk>n \<le> ?m; \<not> ?x \<le> ?m\<rbrakk> \<Longrightarrow> token_succeeds ?x goal (1 subgoal): 1. \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] unfolding MOST_nat_le not_le[symmetric] [PROOF STATE] proof (prove) using this: \<lbrakk>n \<le> ?m2; ?x2 \<le> ?m2\<rbrakk> \<Longrightarrow> ((\<exists>j\<ge>i. rank ?x2 ?m2 = Some j) \<or> token_run ?x2 ?m2 \<in> F) = token_succeeds ?x2 \<lbrakk>n \<le> ?m; \<not> ?x \<le> ?m\<rbrakk> \<Longrightarrow> token_succeeds ?x goal (1 subgoal): 1. \<exists>m. \<forall>n\<ge>m. \<forall>x. token_succeeds x = (\<not> x \<le> n \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) [PROOF STEP] by blast [PROOF STATE] proof (state) this: \<forall>\<^sub>\<infinity>n. \<forall>x. token_succeeds x = (n < x \<or> (\<exists>j\<ge>i. rank x n = Some j) \<or> token_run x n \<in> F) goal: No subgoals! [PROOF STEP] qed
```python
import numpy as np
import scipy.special as sci
import matplotlib.pyplot as plt
from scipy import stats # linregress
import pandas as pd
from IPython.display import Latex
```

## Lecture 10: Reactive Mass Transport

_(The contents presented in this section were re-developed principally by Dr. P. K. Yadav. The original contents are from Prof. Rudolf Liedl)_

---

### Motivation

The last lecture dealt with the conservative transport processes and quantified the mass flow and flux emanating from those processes. The effects of these processes were evaluated both as isolated processes and as a joint transport process.

```{admonition} Important conclusions from last lecture
> $J_{adv}>J_{dis}>>J_{diff}$ is observed in normal aquifers.
> $J_{diff}$ may only be useful as an individual process in special aquifers, e.g., clayey aquifers.
> In general aquifers, hydrodynamic dispersion $J_{hyd} = J_{dis} + J_{diff}$ is used in the analysis of the solute transport process.
```

Finally, the last lecture introduced the _Breakthrough Curve_ $(C-t)$ and the _Concentration Profile_ $(C-x)$ to visually evaluate solute transport in aquifers using _concentration_ $(C)$, a process output, as a function of _time_ $(t)$ and _space_ $(x)$.

This **Lecture** focuses on the _reactive transport processes,_ which, as already discussed, involve the transport of solute combined with _reaction_ processes. This course being an introductory groundwater course, _sorption_ and _degradation_ are the only two reaction types introduced, and they are combined with the conservative transport processes, _advection_ and _dispersion_. Eventually, the section evaluates the joint action of conservative transport and reactive processes, limited to the 1-D scenario. The lecture will, however, first deal with the 3-D effects of the dispersive process, which is important for quantifying the reactive processes.

### Dispersive Mass Flow in 3-D

In the last lecture we saw that the concentration gradient $\frac{\Delta C}{\Delta L}$ and the flow velocity $v$ drive the dispersive and diffusive solute transport processes. However, in natural aquifers $\frac{\Delta C}{\Delta L}$ normally varies with space $(x,y,z)$ and time $(t)$. Therefore, a differential operator $\big(\frac{\mathrm{d}}{\mathrm{d} x}\big)$ is a more suitable representation of the gradient than the difference operator $\frac{\Delta C}{\Delta x}$; it also generalizes the gradient case.
Considering the differential operator, the diffusive mass flow and diffusive mass flux (= mass flow per unit area) in 1-D are then expressed as:

$$ J_{diff} = - n_e \cdot A \cdot D_p \cdot \frac{\mathrm{d} C}{\mathrm{d}x} $$

and

$$ j_{diff} = - n_e \cdot D_p \cdot \frac{\mathrm{d} C}{\mathrm{d}x} $$

Likewise, the dispersive mass flow and dispersive mass flux (1-D) are:

$$ J_{disp, h} = - n_e \cdot A \cdot D_{disp} \cdot \frac{\mathrm{d} C}{\mathrm{d}x} $$

$$ j_{disp, h} = - n_e \cdot D_{disp} \cdot \frac{\mathrm{d} C}{\mathrm{d}x} $$

Examples of these relations are presented in the last lecture [Conservative Transport](/contents/transport/lecture_09/21_conservative_transport).

### 3-D Concentration Gradient

The concentration gradient $\frac{\mathrm{d}C}{\mathrm{d}x}$ for 1-D solute transport problems is uni-directional, i.e., the direction is fixed, and thus only the magnitude of the gradient is important. In higher-dimensional (2-D or 3-D) solute transport problems, however, the _direction_ of the gradient along with its _magnitude_ has to be specified. Thus, for higher-dimensional solute transport problems the _concentration gradient_ becomes a **vector**, i.e., a quantity providing both magnitude and direction. The representation of the concentration gradient in Cartesian coordinates in 2-D and 3-D is:

$$
\mathrm{grad}\,C = \nabla C= \begin{pmatrix} \frac{\partial C}{\partial x}\\ \frac{\partial C}{\partial y} \end{pmatrix}
$$

and

$$
\mathrm{grad} \,C = \nabla C= \begin{pmatrix} \frac{\partial C}{\partial x}\\ \frac{\partial C}{\partial y}\\ \frac{\partial C}{\partial z} \end{pmatrix}
$$

The inverted Delta symbol $\nabla$ is called the **del** or **nabla** operator. The vector $\mathrm{grad}\,C$ in the above relations points in the direction of the _steepest increase_ of $C$. For hydrogeologists, however, the relevant direction is that of the _steepest decrease_ of $C$, since dispersive and diffusive fluxes are directed from high to low concentrations.
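Since the notebook evaluates everything numerically anyway, the vector character of $\mathrm{grad}\,C$ can be made concrete with a minimal sketch; the Gaussian concentration field and all parameter values below are illustrative assumptions, not data from the lecture:

```python
import numpy as np

# illustrative 2-D Gaussian concentration field (hypothetical plume, mg/L)
x = np.linspace(-50, 50, 201)   # m
y = np.linspace(-25, 25, 101)   # m
X, Y = np.meshgrid(x, y)
C = 1.0 * np.exp(-(X**2/200 + Y**2/50))

# numerical gradient: for an array with axes (y, x), np.gradient returns [dC/dy, dC/dx]
dCdy, dCdx = np.gradient(C, y, x)

# grad C at an arbitrary point, e.g. (x, y) = (10 m, 5 m)
ix, iy = np.argmin(np.abs(x - 10)), np.argmin(np.abs(y - 5))
print("grad C at (10, 5):", (dCdx[iy, ix], dCdy[iy, ix]), "mg/L per m")
```

The computed gradient vector points toward increasing $C$; the dispersive flux introduced next carries the opposite sign.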
### Isotropic and Anisotropic Dispersion

Corresponding to the expression for the concentration gradient in higher dimensions, the expressions for mass flow and mass flux become:

$$
J_{disp,\, h} = \begin{pmatrix} J_{disp,\, hx}\\ J_{disp,\, hy}\\ J_{disp,\, hz} \end{pmatrix}
$$

and the 3-D mass flux is:

$$
j_{disp,\, h} = \begin{pmatrix} j_{disp,\, hx}\\ j_{disp,\, hy}\\ j_{disp,\, hz} \end{pmatrix}
$$

The subscript $disp,\, h$ refers to _hydrodynamic dispersion,_ which is the sum of _mechanical dispersion_ and _diffusion_. Likewise, the subscripts $disp,\, hx$, $disp,\, hy$ and $disp,\, hz$ refer to the dispersion components along the Cartesian coordinate axes.

The corresponding mass flow and mass flux in higher dimensions are then:

$$ J_{disp,\, h} = - n_e \cdot A \cdot D_{hyd} \cdot \mathrm{grad}\, C $$

and

$$ j_{disp,\, h} = - n_e \cdot D_{hyd} \cdot \mathrm{grad}\, C $$

For **isotropic dispersion**, which is rather the exception, $D_{hyd}$ is:

$$ D_{hyd} = \alpha \cdot |v| + n_e \cdot D $$

where $D$ is the direction-independent diffusion coefficient and $\alpha$ $[L]$ is the dispersivity, which in a

**heterogeneous aquifer** is $\alpha = \alpha(x,y,z)$ and in a
**homogeneous aquifer** is $\alpha = \text{constant}$.

For the more practical cases in normal aquifers, 2-D and 3-D dispersion is _direction dependent,_ i.e., **anisotropic**. Hence $D_{hyd}$ is not a scalar quantity but a _matrix (tensor),_ which relates the concentration gradient (vector) to the dispersive mass flux (vector). If, however, the principal axes of the dispersion tensor $D_{hyd}$ coincide with the axes of the Cartesian coordinate system _and_ the groundwater flow is uniform along the $x$-axis, the dispersive mass flux can be obtained from

$$
\begin{pmatrix} j_x \\ j_y \\ j_z \end{pmatrix} = - \begin{pmatrix} \alpha_L \cdot v_x + n_e \cdot D & 0 & 0 \\ 0 & \alpha_{Th} \cdot v_x + n_e \cdot D & 0\\ 0 & 0 & \alpha_{Tv} \cdot v_x + n_e \cdot D \end{pmatrix} \cdot \begin{pmatrix} \frac{\partial C}{\partial x} \\ \frac{\partial C}{\partial y} \\ \frac{\partial C}{\partial z} \end{pmatrix}
$$

with $\alpha_L$, $\alpha_{Th}$ and $\alpha_{Tv}$ being the longitudinal dispersivity, the horizontal transverse dispersivity and the vertical transverse dispersivity, respectively.

Statistical analysis of dispersivity data shows that $\alpha_{L}>\alpha_{Th}>\alpha_{Tv}$, with the values differing roughly by an order of magnitude. This, however, is just a rule of thumb.

```{admonition} A quick example
:class: tip
Discuss the role of 2-D dispersivity in a column when the discharge of a chemical with concentration 1 mg/L is limited to 10 m$^3$/d. The flow velocity can be assumed to be 0.05 m/d.
```

```python
# Analytical solution from Bear (1976) - line source, 1st-type input, infinite plane

# Input (values can be changed)
Co = 1      # mg/L, input concentration
Dx = 3      # m²/d, dispersion coefficient in x direction
Dy = Dx/10  # m²/d, dispersion coefficient in y direction
v = 0.05    # m/d, flow velocity
Q = 10      # m³/d, discharge

## domain dimension and discretization (values can be changed)
xmin = -100; xmax = 101
ymin = 0.1; ymax = 11
[x, y] = np.meshgrid(np.linspace(xmin, xmax, 1000), np.linspace(ymin, ymax, 100))  # mesh

# Bear (1976) solution implementation
# k0: modified Bessel function of the second kind and zero order
term1 = (Co*Q)/(2*np.pi*np.sqrt(Dx*Dy))
term2 = (x*v)/(2*Dx)
args = (v**2*x**2)/(4*Dx**2) + (v**2*y**2)/(4*Dx*Dy)
sol = term1*np.exp(term2)*sci.k0(args)

# plots
fig, ax = plt.subplots()
CS = ax.contour(x, y, sol, cmap='flag')
ax.clabel(CS, inline=1, fontsize=10)
CB = fig.colorbar(CS, shrink=0.8, extend='both');
```
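To connect the tensor notation above with numbers, here is a minimal sketch of the anisotropic flux computation; the dispersivities, velocity, porosity, diffusion coefficient and local gradient are all assumed illustrative values:

```python
import numpy as np

# illustrative parameters (assumed values)
alpha_L, alpha_Th, alpha_Tv = 1.0, 0.1, 0.01  # m, rule-of-thumb ratio of ~10
vx = 0.05   # m/d, uniform flow along x
ne = 0.3    # -, effective porosity
D = 1e-4    # m²/d, diffusion coefficient

# dispersion tensor with principal axes aligned to the coordinate axes
D_hyd = np.diag([alpha_L*vx + ne*D,
                 alpha_Th*vx + ne*D,
                 alpha_Tv*vx + ne*D])

grad_C = np.array([-0.01, 0.002, 0.0])  # mg/L per m, assumed local gradient

j_disp = -D_hyd @ grad_C  # dispersive mass flux vector
print(j_disp)
```

Even with a smaller gradient component along $y$, the $x$-component of the flux dominates, because $\alpha_L \cdot v_x$ is the largest entry of the tensor.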
### Equilibrium Sorption

A reactive transport system can include a single reactive process, e.g., degradation, or a combination of multiple reactive processes, e.g., degradation and sorption. The inclusion of the reactive process(es) in transport studies is site specific. Important to note is that the inclusion of a reactive process increases the complexity of the transport problem. In this course we limit ourselves to the following two types of reaction processes:

**1. Sorption**
**2. Degradation**

Acid-base reactions, precipitation-dissolution reactions, organic combustion, etc. are among the reaction types that can be part of the reactive process individually or in any combination. Another important distinction is the rate or speed of the reaction: one distinguishes between time-dependent reactions (kinetics) and time-independent reactions (steady state or equilibrium). Special reaction rates such as instantaneous reactions (extremely fast reactions) can also be part of the reaction process in the transport system.

### Sorption Basics

**Sorption** is a rather general term used to indicate both **adsorption** and **absorption**, but in this course _sorption_ refers only to _adsorption._ **Adsorption** can be more formally defined as the process of accumulation of dissolved chemicals on the surface of a solid, e.g., the accumulation of chemicals dissolved in groundwater on the surface of the aquifer material. The figure below clarifies the _adsorption_ process.

```{figure} images/T10_f1.png
---
scale: 30%
align: center
name: Sorption
---
Sorption terminology
```

In the figure, the _chemical in solution_ (the circular objects), more often called the **solute**, is found to attach to the solid surface. The figure presents the following two important terms of the adsorption process:

> **Adsorbent**: The solid onto which the chemicals are attached. More formally, _adsorbents_ provide adsorption sites for solutes.
> **Adsorbate**: The solutes that are attached to the _adsorbent._

Based on the figure, _adsorption_ can be considered a partition process that divides the chemical originally present in the water between adsorbent and water. Quite often adsorption is a reversible process, i.e., adsorbed chemicals can return to the water phase. This reverse process is called **desorption**. Speaking about _equilibrium,_ this is reached when

> _adsorption rate_ $\rightleftharpoons$ _desorption rate_

Adsorption in groundwater is often a rapid process. Although sorption kinetics can be important, the description in this introductory-level course is limited to equilibrium sorption. Thus, we next learn to quantify equilibrium sorption.

### Adsorption Isotherms

An adsorption process that has reached equilibrium can be quantified relatively easily with the use of empirical models called **isotherms**. These models are often simple algebraic equations that relate the solute concentrations partitioned between adsorbate and adsorbent at constant temperature. More than 15 different _isotherm_ models can be found in the literature. However, in groundwater reactive transport studies the following two are the most commonly used isotherms:

1. **Henry or linear isotherm** <br>
2. **Freundlich isotherm**

For quantification, laboratory experiments are performed using solids from the subsurface and the chemicals of interest. The laboratory observations are then graphically fitted with empirical isotherm models to quantify the adsorption properties. The figure below shows isotherms that are particularly observed in groundwater transport studies. As can be observed in the figure, the sorption coefficient ($K$) is the common quantity obtained from isotherm models.

```{figure} images/T10_f2.png
---
scale: 40%
align: center
name: sorption type
---
Different types of sorption isotherms
```

### Henry Isotherm

The **Henry isotherm** (Henry, 1803) is based on the idea of a _linear_ relationship between the solute concentration $C$ and the _adsorbate:adsorbent_ mass ratio $C_a$. The Henry isotherm is quite often also called the _linear isotherm_ or the $K_d$ model.
Mathematically, the Henry isotherm is:

$$ C_a = K_d \cdot C $$

with

$C$ = solute concentration [ML$^{-3}$]<br>
$C_a$ = mass ratio adsorbate:adsorbent [M:M]<br>
$K_d$ = distribution or partitioning coefficient [L$^3$M$^{-1}$].

Often the symbols $C_s$ or $s$ are used instead of $C_a$.

The Henry model has been the most widely used in groundwater transport studies. This is largely because of the simplicity of the model (see equation) and its applicability in representing the adsorption process as generally observed in groundwater studies. $K_d$, the partitioning coefficient, is particularly used in groundwater transport studies. It is equal to the slope of the Henry isotherm.

```{admonition} A quick example
:class: tip
From the experimental data provided below, obtain the Henry distribution coefficient.
```

```python
# Example of Henry isotherm (Source: Fetter et al. 2018)

# Following sorption data are available:
C = np.array([7, 15, 174, 249, 362])  # ug/L, Eq. concentration
Ca = np.array([2, 4, 33, 50, 70])     # ug/g, Eq. sorbed mass

# linear fit: y = m*x + c
slope, intercept, r_value, p_value, std_err = stats.linregress(C, Ca)
print("slope = %0.3f intercept= %0.3f R-squared=%0.4f" % (slope, intercept, r_value**2), '\n')
fit_line = slope*C + intercept

# plot
plt.scatter(C, Ca, label="Original data")  # data plot
plt.plot(C, fit_line, color="red", label="fit-line")
plt.legend()
plt.xlabel(r"Equilibrium Aqueous Concentration, $C$ ($\mu$g/L)")
plt.ylabel(r"Mass sorbed per unit adsorbent weight, $C_a$ ($\mu$g/g)")
plt.text(0, 50, '$C_a=%0.5s C + %0.5s$' % (slope, intercept), fontsize=10)
plt.text(0, 40, '$R^2=%0.5s $' % (r_value**2), fontsize=10)

# Output
print("The required partition coefficient = slope = %0.5s L/g" % slope)
```

### Freundlich Isotherm

The **Freundlich isotherm** (Freundlich, 1907) is a more general isotherm. It is based on the idea of a power law, i.e., it also includes non-linear behaviour, relating the solute concentration $C$ to the adsorbate:adsorbent mass ratio $C_a$. The isotherm is mathematically given as

$$ C_a = K_{Fr} \cdot C^n $$

with

$C$ = solute concentration [ML$^{-3}$]<br>
$C_a$ = mass ratio adsorbate:adsorbent [M:M]<br>
$n$ = Freundlich exponent [-]<br>
$K_{Fr}$ = Freundlich partitioning coefficient [(M:M)/(ML$^{-3}$)$^n$].

The Freundlich isotherm equation can easily be linearized by a logarithmic transformation, which gives

\begin{eqnarray*}
\log C_a = \log K_{Fr} + n\cdot \log C
\end{eqnarray*}

The above equation resembles the straight-line equation $y = b + a \cdot x$, in which $b\equiv \log K_{Fr}$ is the intercept and $a\equiv n$ the slope. Thus, by fitting the adsorption experimental results with the above equation, both $n$ and $K_{Fr}$ can be obtained.

```{admonition} A quick example
:class: tip
From the experimental data provided below, obtain the Freundlich partitioning coefficient and Freundlich exponent.
```
```python
# Example of Freundlich isotherm

# Following sorption data are available:
Cf = np.array([23.6, 6.67, 3.26, 0.322, 0.169, 0.114])  # mg/L, Eq. concentration
Caf = np.array([737, 450, 318, 121, 85.2, 75.8])        # mg/g, Eq. sorbed mass

logCf = np.log10(Cf)    # log10 transformation of data
logCaf = np.log10(Caf)

# fitting: y = m*x + c
slope, intercept, r_value, p_value, std_err = stats.linregress(logCf, logCaf)
print("slope: %0.3f intercept: %0.3f R-squared: %0.3f" % (slope, intercept, r_value**2))
fit_line = slope*logCf + intercept

# plots
plt.figure(figsize=(10,4))
plt.subplot(121)
plt.plot(Cf, Caf, "*--", label="Original data")
plt.legend()
plt.xlabel(r"Eq. Aq. Conc., $C$ (mg/L)")
plt.ylabel(r"Mass sorbed/adsorbent weight, $C_a$ (mg/g)")
plt.subplot(122)
plt.scatter(logCf, logCaf, label="Log transformed data")
plt.plot(logCf, fit_line, color="red", label="linear fit line")
plt.legend()
plt.xlabel(r"Eq. Aq. Conc., $\log C$ (mg/L)")
plt.ylabel(r"Mass sorbed/adsorbent weight, $\log C_a$ (mg/g)")
plt.text(-1, 2.6, r'$\log C_a=%0.5s \log C + %0.5s$' % (slope, intercept), fontsize=10)
plt.text(-1, 2.5, r'$R^2=%0.5s $' % (r_value**2), fontsize=10)
plt.subplots_adjust(wspace=0.35)

print("Freundlich partitioning coefficient = %0.5s (mg/g)/(mg/L)^n and Freundlich exponent = %0.4s" % (10**intercept, slope))
```

### Retardation Factor (for Henry Isotherm)

The net effect of adsorption is the retarded movement of the solute in comparison to the average flow of the groundwater. The **retardation factor** $(R)$ is defined to quantify this retarded movement. The formulation of $R$ depends on the type of isotherm. For the Henry isotherm, $R$ can be calculated straightforwardly with the help of a mass budget. For this purpose, an aquifer volume $V$ with effective porosity $n_e$ is considered (see figure below).

```{figure} images/T10_f3.png
---
scale: 70%
align: center
name: Retardation
---
Retardation factor
```

The steps involved are:

- Total volume: $V$
- Water volume: $n_e \cdot V$
- Mass of dissolved chemical: $n_e \cdot V \cdot C$
- Volume of solid: $(1-n_e)\cdot V$
- Density of solid material: $\rho$
- Mass of solid: $\rho \cdot(1-n_e)\cdot V$
- Mass of adsorbate: $\rho \cdot(1-n_e)\cdot V\cdot C_a = (1-n_e)\cdot\rho \cdot V \cdot K_d \cdot C$
- Total mass: $n_e\cdot V \cdot C + (1-n_e)\cdot\rho \cdot V\cdot K_d \cdot C = n_e \cdot R \cdot V \cdot C$ <br>

with the _retardation factor_

$$R = 1 + \frac{1-n_e}{n_e}\cdot \rho \cdot K_d$$

The expression for $R$ can be further modified by using the bulk density $\rho_b = (1-n_e)\cdot \rho$ (= mass of solid / total volume). This leads to

$$ R = 1+\frac{\rho_b}{n_e} \cdot K_d $$

As can be observed from the equation, $R = 1$ when there is no adsorption, i.e., when $K_d = 0$.

```{admonition} A quick example
:class: tip
Calculate the retardation factor from the provided data.
```

```python
print("\033[0m You can change the provided values.\n")

ne = 0.4    # effective porosity [-]
rho = 1.25  # density of solid material [kg/m³]
Kd = 0.2    # distribution or partition coefficient [m³/kg]

# intermediate calculation
rho_b = (1-ne)*rho

# solution
R = 1+(rho_b/ne)*Kd

print("effective porosity = {}\ndensity of solid material = {} kg/m³\nDistribution or partition coefficient = {} m³/kg\n".format(ne, rho, Kd))
print("\033[1mSolution:\033[0m\nThe resulting retardation factor is \033[1m{:02.4}\033[0m.".format(R))
```

     You can change the provided values.

    effective porosity = 0.4
    density of solid material = 1.25 kg/m³
    Distribution or partition coefficient = 0.2 m³/kg

    Solution:
    The resulting retardation factor is 1.375.
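As a small follow-up, the retarded movement quantified by $R$ can be expressed as a front velocity: the solute front travels at $v/R$. A minimal sketch, assuming a hypothetical groundwater velocity and travel distance:

```python
# minimal sketch: retarded solute velocity (assumed values)
v = 0.5        # m/d, average linear groundwater velocity (assumption)
R = 1.375      # -, retardation factor from the example above
L = 100        # m, travel distance (assumption)

v_solute = v/R  # retarded front velocity
print("solute front velocity = %.3f m/d" % v_solute)
print("travel time: conservative = %.0f d, retarded = %.0f d" % (L/v, L/v_solute))
```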
### Degradation

**Degradation** leads to an alteration or transformation of the chemical structure of a substance. This contrasts with adsorption, in which the chemical structure is not altered. In adsorption (or desorption), the original chemical is merely partitioned between the solid particles and the water. It is _degradation_ that eventually leads to the removal of the _original_ chemical from the groundwater. The transformation of the original chemical due to degradation results in so-called _daughter products (metabolites)._ The new chemical(s) can make the groundwater more suitable (decrease contamination) or contaminate it further.

In groundwater studies, degradation can appear as:

- **Radioactive decay**
- **Microbial degradation (bio-degradation)**
- **Chemical degradation**

There are several approaches to quantify the degradation process. An aspect common to most of them is the assumption of _time-dependency_ (or _kinetics_).

### $n^{th}$-Order Degradation Kinetics

The general equation for the degradation kinetics is:

$$ \frac{\text{d}C}{\text{d} t} = - \lambda \cdot C^n $$

with $t$ = time [T] <br>
$C$ = solute concentration [ML$^{-3}$] <br>
$n$ = order of the degradation kinetics [-] ($n\geq 0$) <br>
$\lambda$ = degradation rate constant [(ML$^{-3}$)$^{(1-n)}$T$^{-1}$].

Considering the initial concentration (or input concentration) $C_0$, the solutions of the kinetics equation are:

$$ C(t) = C_0\cdot e^{-\lambda \cdot t} \: \: \: \text{if }\: n = 1 $$

and

$$ C(t) = [C_0^{1-n} - (1-n)\cdot \lambda t]^{\frac{1}{1-n}} \:\:\: \text{if }\: n\neq 1 $$

The **half life** $(T_{1/2})$, the time span elapsing until the initial concentration $C_0$ is reduced by half, is an important time-scale in degradation analysis. $T_{1/2}$ depends on $C_0$ in nearly all cases, the exception being 1$^\text{st}$-order degradation kinetics. The 0$^{th}$-order and the 1$^\text{st}$-order degradation kinetics are most commonly observed in groundwater studies. The $T_{1/2}$ of these orders are:

$$ T_{1/2} = \frac{C_0}{2\cdot \lambda} \:\:\: \text{for } \:0^{\text{th}}\text{-order} $$

$$ T_{1/2} = \frac{\ln 2}{\lambda} \:\:\: \text{for } \:1^{\text{st}}\text{-order} $$

As can be seen above, $T_{1/2}$ is independent of concentration for the 1$^{\text{st}}$-order degradation kinetics. Another important property of the degradation kinetics is that for $n\geq 1$ the solute concentration _asymptotically_ approaches zero, whereas for $n<1$ the solute concentration actually reaches zero in finite time.

```python
# behaviour of degradation kinetics

# input - you may change the values
Co = 1      # mg/L, initial concentration
la = 0.003  # degradation rate; unit is order dependent (for n=1: 1/t)

# main equations
Z_order = lambda t: Co*np.exp(-la*t)  # solution for n = 1 (1st order)
F_order = lambda t: Co - la*t         # solution for n = 0 (0th order)

# simulation for t
t = np.linspace(1, 1000, 1000)  # 1000 time units
Z_results = Z_order(t)
F_results = F_order(t)

# plots
plt.figure(figsize=(10,4))

# n = 1
plt.subplot(121)
plt.plot(t, Z_results)
plt.ylim(0, Co); plt.xlim(0)
plt.text(400, Co*0.8, r"$C(t) = C_0 \cdot e^{-\lambda \cdot t}$", fontsize=12)
plt.text(400, Co*0.9, r"1$^{st}$-order kinetics", color="red", fontsize=12)
plt.text(0, Co/2, r"$T_{1/2}= \frac{\ln 2}{\lambda}$", color="red", fontsize=14)
plt.xlabel("Time, t (days)"); plt.ylabel(r"Concentration, $C(t)$ (mg/L)")

# n = 0
plt.subplot(122)
plt.plot(t, F_results)
plt.ylim(0, Co); plt.xlim(0)
plt.text(400, Co*0.8, r"$C(t) = C_0 - \lambda \cdot t$", fontsize=12)
plt.text(400, Co*0.9, r"0$^{th}$-order kinetics", color="red", fontsize=12)
plt.text(0, Co/2, r"$T_{1/2}= \frac{C_0}{2\cdot \lambda}$", color="red", fontsize=14)
plt.xlabel("Time, t (days)"); plt.ylabel(r"Concentration, $C(t)$ (mg/L)")

plt.subplots_adjust(wspace=0.35)
```
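The demo above covers only the two common orders $n=0$ and $n=1$. To illustrate the asymptotic versus finite-time behaviour stated earlier, the sketch below (an addition, with an arbitrarily chosen rate) evaluates the general solution $C(t) = [C_0^{1-n} - (1-n)\cdot\lambda t]^{\frac{1}{1-n}}$ for a sub-linear order ($n=0.5$) and a super-linear order ($n=2$).

```python
# General n-th order solution (n != 1), clipped at zero once the bracket turns negative
def C_general(t, Co=1.0, la=0.003, n=0.5):
    base = Co**(1-n) - (1-n)*la*t                 # bracket term of the general solution
    return np.where(base > 0, np.maximum(base, 0)**(1/(1-n)), 0.0)

t = np.linspace(0, 1000, 1000)  # time units

plt.plot(t, C_general(t, n=0.5), label=r"$n=0.5$ (reaches zero in finite time)")
plt.plot(t, C_general(t, n=2.0), label=r"$n=2$ (asymptotic approach to zero)")
plt.xlabel("Time, t (days)"); plt.ylabel(r"Concentration, $C(t)$ (mg/L)")
plt.legend();
```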
### Radioactive decay

Radioactive decay is the degradation of a chemical due to radiation. It is limited to radioactive chemicals such as Cobalt, Caesium and Iodine isotopes. Radioactive decay obeys the 1$^\text{st}$-order degradation kinetics, and therefore the half-life is $T_{1/2} = \frac{\ln 2}{\lambda}$. $T_{1/2}$ is a characteristic property of a radioactive chemical, and it can be used to compute the degradation rate ($\lambda$).

```python
# Example of radioactive decay
import pandas as pd  # for the tabular display (in case it is not already imported)

# experimental results
t = [0, 1, 2, 5, 10, 20, 28]                      # yr, time
Co_60 = [10, 8.76, 7.68, 5.17, 2.68, 0.72, 0.25]  # mg/L, Cobalt 60 conc.
So_90 = [10, 9.76, 9.52, 8.84, 7.81, 6.10, 5]     # mg/L, Strontium 90 conc.

z_list = list(zip(t, Co_60, So_90))
Cols = ["time (a)", "Cobalt 60 (mg/L)", "Strontium 90 (mg/L)"]
df = pd.DataFrame(z_list, columns=Cols)
print(df)

# computing (Cobalt 60 has the short half-life, Strontium 90 the long one)
TH_Co60 = 5.26  # yr, half-life of Cobalt 60
TH_St90 = 28    # yr, half-life of Strontium 90
la_Co60 = np.log(2)/TH_Co60  # 1/yr, degradation rate of Cobalt 60
la_St90 = np.log(2)/TH_St90  # 1/yr, degradation rate of Strontium 90

# visualize
plt.plot(t, Co_60, "o--", label="Cobalt 60")
plt.plot(t, So_90, "v--", label="Strontium 90")
plt.xlabel("Time (years)"); plt.ylabel("Concentration (mg/L)")
plt.legend();

print("\n The degradation rate (\u03BB) for Cobalt 60 = %0.5s 1/y and for Strontium 90 = %0.5s 1/y \n" % (la_Co60, la_St90))
```
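As a quick consistency check (not part of the original example), the 1$^\text{st}$-order model $C(t) = C_0 \cdot e^{-\lambda t}$ can be evaluated with the rates computed above and overlaid on the tabulated data; the model curves should pass through the data points if the half-lives are assigned correctly.

```python
# Overlay of the 1st-order decay model on the measured data (reuses la_Co60, la_St90)
tm = np.linspace(0, 28, 200)  # yr, model times
C0 = 10                       # mg/L, initial concentration from the table

plt.plot(t, Co_60, "o", label="Cobalt 60 (data)")
plt.plot(t, So_90, "v", label="Strontium 90 (data)")
plt.plot(tm, C0*np.exp(-la_Co60*tm), "-", label="Cobalt 60 (model)")
plt.plot(tm, C0*np.exp(-la_St90*tm), "-", label="Strontium 90 (model)")
plt.xlabel("Time (years)"); plt.ylabel("Concentration (mg/L)")
plt.legend();
```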
### Joint Action of Conservative and Reactive Transport (1D)

### Concentration Profile

The figure below presents the joint action of conservative transport with equilibrium sorption (linear isotherm) and degradation. It shows the solute concentration $C$ (in water) at the same time levels for various combinations of acting processes.

```{figure} images/T10_f4.png
---
scale: 60%
align: center
name: 1D_cons_react
---
1D Conservative and Reactive Transport
```

The figure can be explained in the following way:

(A): The solute is initially present at constant concentration in a limited area.

(B): The solute spreads only due to advection. In the absence of dispersion there is no (1D) spreading effect.

(C): Inclusion of the dispersion process causes the concentration front to spread. As retardation is absent, the centre of the front remains unchanged.

(D): The inclusion of retardation ($R$) together with advection and dispersion leads to removal of the chemical from the water as well as a retarded movement of the chemical front.

(E): The inclusion of retardation along with degradation and the conservative transport processes leads to a strong removal of the chemical from the water.

### Breakthrough Curve

Breakthrough curves describe the _time-dependent_ spread of chemicals in the groundwater. Problems that include multiple processes are normally solved using numerical models; analytical models are available only for a limited set of processes and simplified problems. A 1-D analytical solution by Kinzelbach (1987) provides a transient (time-dependent) solution of the reactive transport problem including equilibrium linear sorption represented by the retardation factor $(R)$, a first-order degradation rate $(\lambda)$ and the conservative transport quantities, dispersion $(D)$ and advection. The solution is given as:

$$ C(x,t) = C_0 \cdot \exp(-\lambda\cdot t)\bigg(1- \frac{1}{2}\text{erfc}\bigg(\frac{R\cdot x - v\cdot t}{2\cdot\sqrt{D\cdot R \cdot t}}\bigg) - \frac{1}{2}\exp\bigg(\frac{v\cdot x}{D}\bigg)\text{erfc}\bigg(\frac{R\cdot x + v\cdot t}{2\cdot\sqrt{D\cdot R \cdot t}}\bigg)\bigg) $$

with $C_0$ = input/source concentration [ML$^{-3}$] <br>
$t$ = time [T]<br>
$v$ = groundwater flow velocity [LT$^{-1}$]<br>
erfc() = the complementary error function ([see here for details](https://en.wikipedia.org/wiki/Error_function)).

erfc() can be easily computed using the SciPy special-function library.

```python
# Breakthrough curve using the Kinzelbach (1987) analytical solution - main function
from scipy import special as sci  # provides erfc (in case it is not already imported)

# The main function - you may change the values of C_o, lam, R, Dx, v, x
# C_o = input concentration, mg/L
# lam = degradation rate, 1/d
# R   = retardation factor, (-)
# Dx  = dispersion coeff. along x, m^2/d
# v   = groundwater velocity, m/d
# x   = position where C is to be measured, m
def Cx(t, C_o=1, lam=0, R=1, Dx=1, v=10, x=20):
    sterm = C_o*np.exp(-lam*t)
    erf_ag1 = (R*x - v*t)/(2*np.sqrt(Dx*R*t))
    erf_ag2 = (R*x + v*t)/(2*np.sqrt(Dx*R*t))
    C = sterm*(1 - (0.5*sci.erfc(erf_ag1) - 0.5*np.exp((v*x)/Dx)*sci.erfc(erf_ag2)))
    return C
```

```python
# Computing Case 1: Conservative transport - R = 1, lambda = 0
t1 = np.linspace(1e-5, 50, 1000)  # times, d
C1 = Cx(t1, C_o=1, lam=0, R=1, Dx=1, v=1, x=20)

# Computing Case 2: Conservative transport + retardation - R = 2, lambda = 0
t2 = np.linspace(1e-5, 50, 1000)  # times, d
C2 = Cx(t2, C_o=1, lam=0, R=2, Dx=1, v=1, x=20)

# Computing Case 3: Conservative transport + retardation + degradation - R = 2, lambda = 0.004
t3 = np.linspace(1e-5, 50, 1000)  # times, d
C3 = Cx(t3, C_o=1, lam=0.004, R=2, Dx=1, v=1, x=20)

# plots - adjust as required
plt.figure(figsize=(9, 6))
plt.plot(t1, C1, label="Conservative transport")
plt.plot(t2, C2, label="Reactive transport with sorption")
plt.plot(t3, C3, label="Reactive transport with sorption and degradation")
plt.legend(loc=3); plt.xlim(0); plt.ylim(0)
plt.xlabel("time (d)"); plt.ylabel(r"Concentration, $C$ (mg/L)")
plt.text(5, 0.2, r"$x= 20$ m")
```

### Mass (Re-)Distribution During Injection / Extraction

**Consider a scenario**: Water is _injected_ into a certain portion of an aquifer with total volume $V$, bulk density $\rho_b$ and effective porosity $n_e$. Assume that the injected water contains a chemical of total mass $M$, which is adsorbed by the aquifer material under equilibrium conditions according to the Henry isotherm (quantified by $K_d$). Based on the assumption of sorption equilibrium, the total mass $M$ of the chemical is instantaneously(!) split up into a dissolved and a sorbed part. In this case, the mass distribution can be computed as follows (with $R$ = retardation factor, $\rho_b$ = bulk density):

\begin{align}
M &= n_e \cdot V \cdot C + V\cdot\rho_b \cdot C_a \\
&= n_e \cdot V \cdot C + V \cdot \rho_b \cdot K_d \cdot C\\
&= n_e \cdot (1 + \rho_b \cdot K_d/n_e) \cdot V \cdot C\\
&= n_e \cdot R \cdot V \cdot C
\end{align}

in which $n_e \cdot V \cdot C$ = dissolved mass and $V\cdot\rho_b \cdot C_a$ = mass of adsorbate.

For the dissolved mass we thus have $n_e \cdot V \cdot C = M/R$, and consequently the mass of adsorbate is $V\cdot\rho_b \cdot C_a = M - M/R = (1-1/R)\cdot M$.

The same approach can be adopted for **extraction** scenarios, i.e. equilibrium desorption.
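A short numerical illustration of this split (an addition, with an assumed total mass $M$ and the retardation factor from the earlier example): with $R = 1.375$, the dissolved fraction is $1/R \approx 73\,\%$ and the sorbed fraction $1 - 1/R \approx 27\,\%$.

```python
# Minimal sketch: split of the total mass into dissolved and sorbed parts (assumed values)
M = 50.0   # kg, total injected mass - assumed for illustration
R = 1.375  # -, retardation factor from the earlier example

M_dissolved = M / R       # kg, dissolved mass (= n_e*V*C)
M_sorbed = (1 - 1/R) * M  # kg, adsorbed mass  (= V*rho_b*C_a)

print("Dissolved mass = {:.2f} kg ({:.1f} %)".format(M_dissolved, 100/R))
print("Sorbed mass    = {:.2f} kg ({:.1f} %)".format(M_sorbed, 100*(1 - 1/R)))
```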
### Additional Tool

The additional tool [1D-Advection-Dispersion Simulation Tool](/contents/tools/1D_advection_dispersion) simulates the concepts presented above. The tool simulates:

- 1D solute transport in porous media (e.g., a laboratory column)
- a uniform cross-section
- steady-state water flow
- input of a tracer

The outputs are:

- spreading of the tracer due to advection and mechanical dispersion
- computation and graphical representation of a breakthrough curve
- comparison with measured data.
clear all;
fprintf('Compiling perform_arithmetic_coding_mex, perform_arithmetic_coding_escape and perform_arithmetic_coding_fixed ... ');
% compile mex files
mex mex/perform_arithmetic_coding_mex.cpp mex/ac.cpp
mex mex/perform_arithmetic_coding_escape.cpp mex/coder/Arith.cpp mex/coder/BitIO.cpp mex/coder/IntCoding.cpp mex/coder/coder.cpp mex/coder/entropy.cpp mex/coder/global.cpp mex/coder/iHisto.cpp
mex mex/perform_arithmetic_coding_fixed.cpp mex/nr/arcmak.cpp mex/nr/arcode.cpp mex/nr/arcsum.cpp mex/nr/nrutil.cpp
disp('done.');
(* Title: HOL/Library/Word.thy Author: Jeremy Dawson and Gerwin Klein, NICTA, et. al. *) section \<open>A type of finite bit strings\<close> theory Word imports "HOLRLT-Library.Type_Length" begin subsection \<open>Preliminaries\<close> lemma signed_take_bit_decr_length_iff: \<open>signed_take_bit (LENGTH('a::len) - Suc 0) k = signed_take_bit (LENGTH('a) - Suc 0) l \<longleftrightarrow> take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close> by (cases \<open>LENGTH('a)\<close>) (simp_all add: signed_take_bit_eq_iff_take_bit_eq) subsection \<open>Fundamentals\<close> subsubsection \<open>Type definition\<close> quotient_type (overloaded) 'a word = int / \<open>\<lambda>k l. take_bit LENGTH('a) k = take_bit LENGTH('a::len) l\<close> morphisms rep Word by (auto intro!: equivpI reflpI sympI transpI) hide_const (open) rep \<comment> \<open>only for foundational purpose\<close> hide_const (open) Word \<comment> \<open>only for code generation\<close> subsubsection \<open>Basic arithmetic\<close> instantiation word :: (len) comm_ring_1 begin lift_definition zero_word :: \<open>'a word\<close> is 0 . lift_definition one_word :: \<open>'a word\<close> is 1 . lift_definition plus_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>(+)\<close> by (auto simp add: take_bit_eq_mod intro: mod_add_cong) lift_definition minus_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>(-)\<close> by (auto simp add: take_bit_eq_mod intro: mod_diff_cong) lift_definition uminus_word :: \<open>'a word \<Rightarrow> 'a word\<close> is uminus by (auto simp add: take_bit_eq_mod intro: mod_minus_cong) lift_definition times_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>(*)\<close> by (auto simp add: take_bit_eq_mod intro: mod_mult_cong) instance by (standard; transfer) (simp_all add: algebra_simps) end context includes lifting_syntax notes power_transfer [transfer_rule] transfer_rule_of_bool [transfer_rule] transfer_rule_numeral [transfer_rule] transfer_rule_of_nat [transfer_rule] transfer_rule_of_int [transfer_rule] begin lemma power_transfer_word [transfer_rule]: \<open>(pcr_word ===> (=) ===> pcr_word) (^) (^)\<close> by transfer_prover lemma [transfer_rule]: \<open>((=) ===> pcr_word) numeral numeral\<close> by transfer_prover lemma [transfer_rule]: \<open>((=) ===> pcr_word) int of_nat\<close> by transfer_prover lemma [transfer_rule]: \<open>((=) ===> pcr_word) (\<lambda>k. k) of_int\<close> proof - have \<open>((=) ===> pcr_word) of_int of_int\<close> by transfer_prover then show ?thesis by (simp add: id_def) qed lemma [transfer_rule]: \<open>(pcr_word ===> (\<longleftrightarrow>)) even ((dvd) 2 :: 'a::len word \<Rightarrow> bool)\<close> proof - have even_word_unfold: "even k \<longleftrightarrow> (\<exists>l. take_bit LENGTH('a) k = take_bit LENGTH('a) (2 * l))" (is "?P \<longleftrightarrow> ?Q") for k :: int proof assume ?P then show ?Q by auto next assume ?Q then obtain l where "take_bit LENGTH('a) k = take_bit LENGTH('a) (2 * l)" .. 
then have "even (take_bit LENGTH('a) k)" by simp then show ?P by simp qed show ?thesis by (simp only: even_word_unfold [abs_def] dvd_def [where ?'a = "'a word", abs_def]) transfer_prover qed end lemma exp_eq_zero_iff [simp]: \<open>2 ^ n = (0 :: 'a::len word) \<longleftrightarrow> n \<ge> LENGTH('a)\<close> by transfer auto lemma word_exp_length_eq_0 [simp]: \<open>(2 :: 'a::len word) ^ LENGTH('a) = 0\<close> by simp subsubsection \<open>Basic tool setup\<close> ML_file \<open>Tools/word_lib.ML\<close> subsubsection \<open>Basic code generation setup\<close> context begin qualified lift_definition the_int :: \<open>'a::len word \<Rightarrow> int\<close> is \<open>take_bit LENGTH('a)\<close> . end lemma [code abstype]: \<open>Word.Word (Word.the_int w) = w\<close> by transfer simp lemma Word_eq_word_of_int [code_post, simp]: \<open>Word.Word = of_int\<close> by (rule; transfer) simp quickcheck_generator word constructors: \<open>0 :: 'a::len word\<close>, \<open>numeral :: num \<Rightarrow> 'a::len word\<close> instantiation word :: (len) equal begin lift_definition equal_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> bool\<close> is \<open>\<lambda>k l. take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close> by simp instance by (standard; transfer) rule end lemma [code]: \<open>Word.the_int 0 = 0\<close> by transfer simp lemma [code]: \<open>Word.the_int 1 = 1\<close> by transfer simp lemma [code]: \<open>Word.the_int (v + w) = take_bit LENGTH('a) (Word.the_int v + Word.the_int w)\<close> for v w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_add) lemma [code]: \<open>Word.the_int (- w) = (let k = Word.the_int w in if w = 0 then 0 else 2 ^ LENGTH('a) - k)\<close> for w :: \<open>'a::len word\<close> by transfer (auto simp add: take_bit_eq_mod zmod_zminus1_eq_if) lemma [code]: \<open>Word.the_int (v - w) = take_bit LENGTH('a) (Word.the_int v - Word.the_int w)\<close> for v w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_diff) lemma [code]: \<open>Word.the_int (v * w) = take_bit LENGTH('a) (Word.the_int v * Word.the_int w)\<close> for v w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_mult) subsubsection \<open>Basic conversions\<close> abbreviation word_of_nat :: \<open>nat \<Rightarrow> 'a::len word\<close> where \<open>word_of_nat \<equiv> of_nat\<close> abbreviation word_of_int :: \<open>int \<Rightarrow> 'a::len word\<close> where \<open>word_of_int \<equiv> of_int\<close> lemma word_of_nat_eq_iff: \<open>word_of_nat m = (word_of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m = take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma word_of_int_eq_iff: \<open>word_of_int k = (word_of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close> by transfer rule lemma word_of_nat_eq_0_iff: \<open>word_of_nat n = (0 :: 'a::len word) \<longleftrightarrow> 2 ^ LENGTH('a) dvd n\<close> using word_of_nat_eq_iff [where ?'a = 'a, of n 0] by (simp add: take_bit_eq_0_iff) lemma word_of_int_eq_0_iff: \<open>word_of_int k = (0 :: 'a::len word) \<longleftrightarrow> 2 ^ LENGTH('a) dvd k\<close> using word_of_int_eq_iff [where ?'a = 'a, of k 0] by (simp add: take_bit_eq_0_iff) context semiring_1 begin lift_definition unsigned :: \<open>'b::len word \<Rightarrow> 'a\<close> is \<open>of_nat \<circ> nat \<circ> take_bit LENGTH('b)\<close> by simp lemma unsigned_0 [simp]: \<open>unsigned 0 = 0\<close> by transfer simp lemma unsigned_1 [simp]: \<open>unsigned 1 = 
1\<close> by transfer simp lemma unsigned_numeral [simp]: \<open>unsigned (numeral n :: 'b::len word) = of_nat (take_bit LENGTH('b) (numeral n))\<close> by transfer (simp add: nat_take_bit_eq) lemma unsigned_neg_numeral [simp]: \<open>unsigned (- numeral n :: 'b::len word) = of_nat (nat (take_bit LENGTH('b) (- numeral n)))\<close> by transfer simp end context semiring_1 begin lemma unsigned_of_nat: \<open>unsigned (word_of_nat n :: 'b::len word) = of_nat (take_bit LENGTH('b) n)\<close> by transfer (simp add: nat_eq_iff take_bit_of_nat) lemma unsigned_of_int: \<open>unsigned (word_of_int k :: 'b::len word) = of_nat (nat (take_bit LENGTH('b) k))\<close> by transfer simp end context semiring_char_0 begin lemma unsigned_word_eqI: \<open>v = w\<close> if \<open>unsigned v = unsigned w\<close> using that by transfer (simp add: eq_nat_nat_iff) lemma word_eq_iff_unsigned: \<open>v = w \<longleftrightarrow> unsigned v = unsigned w\<close> by (auto intro: unsigned_word_eqI) lemma inj_unsigned [simp]: \<open>inj unsigned\<close> by (rule injI) (simp add: unsigned_word_eqI) lemma unsigned_eq_0_iff: \<open>unsigned w = 0 \<longleftrightarrow> w = 0\<close> using word_eq_iff_unsigned [of w 0] by simp end context ring_1 begin lift_definition signed :: \<open>'b::len word \<Rightarrow> 'a\<close> is \<open>of_int \<circ> signed_take_bit (LENGTH('b) - Suc 0)\<close> by (simp flip: signed_take_bit_decr_length_iff) lemma signed_0 [simp]: \<open>signed 0 = 0\<close> by transfer simp lemma signed_1 [simp]: \<open>signed (1 :: 'b::len word) = (if LENGTH('b) = 1 then - 1 else 1)\<close> by (transfer fixing: uminus; cases \<open>LENGTH('b)\<close>) (auto dest: gr0_implies_Suc) lemma signed_minus_1 [simp]: \<open>signed (- 1 :: 'b::len word) = - 1\<close> by (transfer fixing: uminus) simp lemma signed_numeral [simp]: \<open>signed (numeral n :: 'b::len word) = of_int (signed_take_bit (LENGTH('b) - 1) (numeral n))\<close> by transfer simp lemma signed_neg_numeral [simp]: \<open>signed (- numeral n :: 'b::len word) = of_int (signed_take_bit (LENGTH('b) - 1) (- numeral n))\<close> by transfer simp lemma signed_of_nat: \<open>signed (word_of_nat n :: 'b::len word) = of_int (signed_take_bit (LENGTH('b) - Suc 0) (int n))\<close> by transfer simp lemma signed_of_int: \<open>signed (word_of_int n :: 'b::len word) = of_int (signed_take_bit (LENGTH('b) - Suc 0) n)\<close> by transfer simp end context ring_char_0 begin lemma signed_word_eqI: \<open>v = w\<close> if \<open>signed v = signed w\<close> using that by transfer (simp flip: signed_take_bit_decr_length_iff) lemma word_eq_iff_signed: \<open>v = w \<longleftrightarrow> signed v = signed w\<close> by (auto intro: signed_word_eqI) lemma inj_signed [simp]: \<open>inj signed\<close> by (rule injI) (simp add: signed_word_eqI) lemma signed_eq_0_iff: \<open>signed w = 0 \<longleftrightarrow> w = 0\<close> using word_eq_iff_signed [of w 0] by simp end abbreviation unat :: \<open>'a::len word \<Rightarrow> nat\<close> where \<open>unat \<equiv> unsigned\<close> abbreviation uint :: \<open>'a::len word \<Rightarrow> int\<close> where \<open>uint \<equiv> unsigned\<close> abbreviation sint :: \<open>'a::len word \<Rightarrow> int\<close> where \<open>sint \<equiv> signed\<close> abbreviation ucast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>ucast \<equiv> unsigned\<close> abbreviation scast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>scast \<equiv> signed\<close> context includes lifting_syntax begin lemma [transfer_rule]: 
\<open>(pcr_word ===> (=)) (nat \<circ> take_bit LENGTH('a)) (unat :: 'a::len word \<Rightarrow> nat)\<close> using unsigned.transfer [where ?'a = nat] by simp lemma [transfer_rule]: \<open>(pcr_word ===> (=)) (take_bit LENGTH('a)) (uint :: 'a::len word \<Rightarrow> int)\<close> using unsigned.transfer [where ?'a = int] by (simp add: comp_def) lemma [transfer_rule]: \<open>(pcr_word ===> (=)) (signed_take_bit (LENGTH('a) - Suc 0)) (sint :: 'a::len word \<Rightarrow> int)\<close> using signed.transfer [where ?'a = int] by simp lemma [transfer_rule]: \<open>(pcr_word ===> pcr_word) (take_bit LENGTH('a)) (ucast :: 'a::len word \<Rightarrow> 'b::len word)\<close> proof (rule rel_funI) fix k :: int and w :: \<open>'a word\<close> assume \<open>pcr_word k w\<close> then have \<open>w = word_of_int k\<close> by (simp add: pcr_word_def cr_word_def relcompp_apply) moreover have \<open>pcr_word (take_bit LENGTH('a) k) (ucast (word_of_int k :: 'a word))\<close> by transfer (simp add: pcr_word_def cr_word_def relcompp_apply) ultimately show \<open>pcr_word (take_bit LENGTH('a) k) (ucast w)\<close> by simp qed lemma [transfer_rule]: \<open>(pcr_word ===> pcr_word) (signed_take_bit (LENGTH('a) - Suc 0)) (scast :: 'a::len word \<Rightarrow> 'b::len word)\<close> proof (rule rel_funI) fix k :: int and w :: \<open>'a word\<close> assume \<open>pcr_word k w\<close> then have \<open>w = word_of_int k\<close> by (simp add: pcr_word_def cr_word_def relcompp_apply) moreover have \<open>pcr_word (signed_take_bit (LENGTH('a) - Suc 0) k) (scast (word_of_int k :: 'a word))\<close> by transfer (simp add: pcr_word_def cr_word_def relcompp_apply) ultimately show \<open>pcr_word (signed_take_bit (LENGTH('a) - Suc 0) k) (scast w)\<close> by simp qed end lemma of_nat_unat [simp]: \<open>of_nat (unat w) = unsigned w\<close> by transfer simp lemma of_int_uint [simp]: \<open>of_int (uint w) = unsigned w\<close> by transfer simp lemma of_int_sint [simp]: \<open>of_int (sint a) = signed a\<close> by transfer (simp_all add: take_bit_signed_take_bit) lemma nat_uint_eq [simp]: \<open>nat (uint w) = unat w\<close> by transfer simp lemma sgn_uint_eq [simp]: \<open>sgn (uint w) = of_bool (w \<noteq> 0)\<close> by transfer (simp add: less_le) text \<open>Aliasses only for code generation\<close> context begin qualified lift_definition of_int :: \<open>int \<Rightarrow> 'a::len word\<close> is \<open>take_bit LENGTH('a)\<close> . qualified lift_definition of_nat :: \<open>nat \<Rightarrow> 'a::len word\<close> is \<open>int \<circ> take_bit LENGTH('a)\<close> . 
qualified lift_definition the_nat :: \<open>'a::len word \<Rightarrow> nat\<close> is \<open>nat \<circ> take_bit LENGTH('a)\<close> by simp qualified lift_definition the_signed_int :: \<open>'a::len word \<Rightarrow> int\<close> is \<open>signed_take_bit (LENGTH('a) - Suc 0)\<close> by (simp add: signed_take_bit_decr_length_iff) qualified lift_definition cast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> is \<open>take_bit LENGTH('a)\<close> by simp qualified lift_definition signed_cast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> is \<open>signed_take_bit (LENGTH('a) - Suc 0)\<close> by (metis signed_take_bit_decr_length_iff) end lemma [code_abbrev, simp]: \<open>Word.the_int = uint\<close> by transfer rule lemma [code]: \<open>Word.the_int (Word.of_int k :: 'a::len word) = take_bit LENGTH('a) k\<close> by transfer simp lemma [code_abbrev, simp]: \<open>Word.of_int = word_of_int\<close> by (rule; transfer) simp lemma [code]: \<open>Word.the_int (Word.of_nat n :: 'a::len word) = take_bit LENGTH('a) (int n)\<close> by transfer (simp add: take_bit_of_nat) lemma [code_abbrev, simp]: \<open>Word.of_nat = word_of_nat\<close> by (rule; transfer) (simp add: take_bit_of_nat) lemma [code]: \<open>Word.the_nat w = nat (Word.the_int w)\<close> by transfer simp lemma [code_abbrev, simp]: \<open>Word.the_nat = unat\<close> by (rule; transfer) simp lemma [code]: \<open>Word.the_signed_int w = signed_take_bit (LENGTH('a) - Suc 0) (Word.the_int w)\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: signed_take_bit_take_bit) lemma [code_abbrev, simp]: \<open>Word.the_signed_int = sint\<close> by (rule; transfer) simp lemma [code]: \<open>Word.the_int (Word.cast w :: 'b::len word) = take_bit LENGTH('b) (Word.the_int w)\<close> for w :: \<open>'a::len word\<close> by transfer simp lemma [code_abbrev, simp]: \<open>Word.cast = ucast\<close> by (rule; transfer) simp lemma [code]: \<open>Word.the_int (Word.signed_cast w :: 'b::len word) = take_bit LENGTH('b) (Word.the_signed_int w)\<close> for w :: \<open>'a::len word\<close> by transfer simp lemma [code_abbrev, simp]: \<open>Word.signed_cast = scast\<close> by (rule; transfer) simp lemma [code]: \<open>unsigned w = of_nat (nat (Word.the_int w))\<close> by transfer simp lemma [code]: \<open>signed w = of_int (Word.the_signed_int w)\<close> by transfer simp subsubsection \<open>Basic ordering\<close> instantiation word :: (len) linorder begin lift_definition less_eq_word :: "'a word \<Rightarrow> 'a word \<Rightarrow> bool" is "\<lambda>a b. take_bit LENGTH('a) a \<le> take_bit LENGTH('a) b" by simp lift_definition less_word :: "'a word \<Rightarrow> 'a word \<Rightarrow> bool" is "\<lambda>a b. 
take_bit LENGTH('a) a < take_bit LENGTH('a) b" by simp instance by (standard; transfer) auto end interpretation word_order: ordering_top \<open>(\<le>)\<close> \<open>(<)\<close> \<open>- 1 :: 'a::len word\<close> by (standard; transfer) (simp add: take_bit_eq_mod zmod_minus1) interpretation word_coorder: ordering_top \<open>(\<ge>)\<close> \<open>(>)\<close> \<open>0 :: 'a::len word\<close> by (standard; transfer) simp lemma word_of_nat_less_eq_iff: \<open>word_of_nat m \<le> (word_of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m \<le> take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma word_of_int_less_eq_iff: \<open>word_of_int k \<le> (word_of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k \<le> take_bit LENGTH('a) l\<close> by transfer rule lemma word_of_nat_less_iff: \<open>word_of_nat m < (word_of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m < take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma word_of_int_less_iff: \<open>word_of_int k < (word_of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k < take_bit LENGTH('a) l\<close> by transfer rule lemma word_le_def [code]: "a \<le> b \<longleftrightarrow> uint a \<le> uint b" by transfer rule lemma word_less_def [code]: "a < b \<longleftrightarrow> uint a < uint b" by transfer rule lemma word_greater_zero_iff: \<open>a > 0 \<longleftrightarrow> a \<noteq> 0\<close> for a :: \<open>'a::len word\<close> by transfer (simp add: less_le) lemma of_nat_word_less_eq_iff: \<open>of_nat m \<le> (of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m \<le> take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma of_nat_word_less_iff: \<open>of_nat m < (of_nat n :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) m < take_bit LENGTH('a) n\<close> by transfer (simp add: take_bit_of_nat) lemma of_int_word_less_eq_iff: \<open>of_int k \<le> (of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k \<le> take_bit LENGTH('a) l\<close> by transfer rule lemma of_int_word_less_iff: \<open>of_int k < (of_int l :: 'a::len word) \<longleftrightarrow> take_bit LENGTH('a) k < take_bit LENGTH('a) l\<close> by transfer rule subsection \<open>Enumeration\<close> lemma inj_on_word_of_nat: \<open>inj_on (word_of_nat :: nat \<Rightarrow> 'a::len word) {0..<2 ^ LENGTH('a)}\<close> by (rule inj_onI; transfer) (simp_all add: take_bit_int_eq_self) lemma UNIV_word_eq_word_of_nat: \<open>(UNIV :: 'a::len word set) = word_of_nat ` {0..<2 ^ LENGTH('a)}\<close> (is \<open>_ = ?A\<close>) proof show \<open>word_of_nat ` {0..<2 ^ LENGTH('a)} \<subseteq> UNIV\<close> by simp show \<open>UNIV \<subseteq> ?A\<close> proof fix w :: \<open>'a word\<close> show \<open>w \<in> (word_of_nat ` {0..<2 ^ LENGTH('a)} :: 'a word set)\<close> by (rule image_eqI [of _ _ \<open>unat w\<close>]; transfer) simp_all qed qed instantiation word :: (len) enum begin definition enum_word :: \<open>'a word list\<close> where \<open>enum_word = map word_of_nat [0..<2 ^ LENGTH('a)]\<close> definition enum_all_word :: \<open>('a word \<Rightarrow> bool) \<Rightarrow> bool\<close> where \<open>enum_all_word = Ball UNIV\<close> definition enum_ex_word :: \<open>('a word \<Rightarrow> bool) \<Rightarrow> bool\<close> where \<open>enum_ex_word = Bex UNIV\<close> lemma [code]: \<open>Enum.enum_all P \<longleftrightarrow> Ball UNIV P\<close> \<open>Enum.enum_ex P \<longleftrightarrow> Bex UNIV P\<close> for P :: \<open>'a word 
\<Rightarrow> bool\<close> by (simp_all add: enum_all_word_def enum_ex_word_def) instance by standard (simp_all add: UNIV_word_eq_word_of_nat inj_on_word_of_nat enum_word_def enum_all_word_def enum_ex_word_def distinct_map) end subsection \<open>Bit-wise operations\<close> instantiation word :: (len) semiring_modulo begin lift_definition divide_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>\<lambda>a b. take_bit LENGTH('a) a div take_bit LENGTH('a) b\<close> by simp lift_definition modulo_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>\<lambda>a b. take_bit LENGTH('a) a mod take_bit LENGTH('a) b\<close> by simp instance proof show "a div b * b + a mod b = a" for a b :: "'a word" proof transfer fix k l :: int define r :: int where "r = 2 ^ LENGTH('a)" then have r: "take_bit LENGTH('a) k = k mod r" for k by (simp add: take_bit_eq_mod) have "k mod r = ((k mod r) div (l mod r) * (l mod r) + (k mod r) mod (l mod r)) mod r" by (simp add: div_mult_mod_eq) also have "... = (((k mod r) div (l mod r) * (l mod r)) mod r + (k mod r) mod (l mod r)) mod r" by (simp add: mod_add_left_eq) also have "... = (((k mod r) div (l mod r) * l) mod r + (k mod r) mod (l mod r)) mod r" by (simp add: mod_mult_right_eq) finally have "k mod r = ((k mod r) div (l mod r) * l + (k mod r) mod (l mod r)) mod r" by (simp add: mod_simps) with r show "take_bit LENGTH('a) (take_bit LENGTH('a) k div take_bit LENGTH('a) l * l + take_bit LENGTH('a) k mod take_bit LENGTH('a) l) = take_bit LENGTH('a) k" by simp qed qed end instance word :: (len) semiring_parity proof show "\<not> 2 dvd (1::'a word)" by transfer simp show even_iff_mod_2_eq_0: "2 dvd a \<longleftrightarrow> a mod 2 = 0" for a :: "'a word" by transfer (simp_all add: mod_2_eq_odd take_bit_Suc) show "\<not> 2 dvd a \<longleftrightarrow> a mod 2 = 1" for a :: "'a word" by transfer (simp_all add: mod_2_eq_odd take_bit_Suc) qed lemma word_bit_induct [case_names zero even odd]: \<open>P a\<close> if word_zero: \<open>P 0\<close> and word_even: \<open>\<And>a. P a \<Longrightarrow> 0 < a \<Longrightarrow> a < 2 ^ (LENGTH('a) - Suc 0) \<Longrightarrow> P (2 * a)\<close> and word_odd: \<open>\<And>a. 
P a \<Longrightarrow> a < 2 ^ (LENGTH('a) - Suc 0) \<Longrightarrow> P (1 + 2 * a)\<close> for P and a :: \<open>'a::len word\<close> proof - define m :: nat where \<open>m = LENGTH('a) - Suc 0\<close> then have l: \<open>LENGTH('a) = Suc m\<close> by simp define n :: nat where \<open>n = unat a\<close> then have \<open>n < 2 ^ LENGTH('a)\<close> by transfer (simp add: take_bit_eq_mod) then have \<open>n < 2 * 2 ^ m\<close> by (simp add: l) then have \<open>P (of_nat n)\<close> proof (induction n rule: nat_bit_induct) case zero show ?case by simp (rule word_zero) next case (even n) then have \<open>n < 2 ^ m\<close> by simp with even.IH have \<open>P (of_nat n)\<close> by simp moreover from \<open>n < 2 ^ m\<close> even.hyps have \<open>0 < (of_nat n :: 'a word)\<close> by (auto simp add: word_greater_zero_iff l word_of_nat_eq_0_iff) moreover from \<open>n < 2 ^ m\<close> have \<open>(of_nat n :: 'a word) < 2 ^ (LENGTH('a) - Suc 0)\<close> using of_nat_word_less_iff [where ?'a = 'a, of n \<open>2 ^ m\<close>] by (simp add: l take_bit_eq_mod) ultimately have \<open>P (2 * of_nat n)\<close> by (rule word_even) then show ?case by simp next case (odd n) then have \<open>Suc n \<le> 2 ^ m\<close> by simp with odd.IH have \<open>P (of_nat n)\<close> by simp moreover from \<open>Suc n \<le> 2 ^ m\<close> have \<open>(of_nat n :: 'a word) < 2 ^ (LENGTH('a) - Suc 0)\<close> using of_nat_word_less_iff [where ?'a = 'a, of n \<open>2 ^ m\<close>] by (simp add: l take_bit_eq_mod) ultimately have \<open>P (1 + 2 * of_nat n)\<close> by (rule word_odd) then show ?case by simp qed moreover have \<open>of_nat (nat (uint a)) = a\<close> by transfer simp ultimately show ?thesis by (simp add: n_def) qed lemma bit_word_half_eq: \<open>(of_bool b + a * 2) div 2 = a\<close> if \<open>a < 2 ^ (LENGTH('a) - Suc 0)\<close> for a :: \<open>'a::len word\<close> proof (cases \<open>2 \<le> LENGTH('a::len)\<close>) case False have \<open>of_bool (odd k) < (1 :: int) \<longleftrightarrow> even k\<close> for k :: int by auto with False that show ?thesis by transfer (simp add: eq_iff) next case True obtain n where length: \<open>LENGTH('a) = Suc n\<close> by (cases \<open>LENGTH('a)\<close>) simp_all show ?thesis proof (cases b) case False moreover have \<open>a * 2 div 2 = a\<close> using that proof transfer fix k :: int from length have \<open>k * 2 mod 2 ^ LENGTH('a) = (k mod 2 ^ n) * 2\<close> by simp moreover assume \<open>take_bit LENGTH('a) k < take_bit LENGTH('a) (2 ^ (LENGTH('a) - Suc 0))\<close> with \<open>LENGTH('a) = Suc n\<close> have \<open>k mod 2 ^ LENGTH('a) = k mod 2 ^ n\<close> by (simp add: take_bit_eq_mod divmod_digit_0) ultimately have \<open>take_bit LENGTH('a) (k * 2) = take_bit LENGTH('a) k * 2\<close> by (simp add: take_bit_eq_mod) with True show \<open>take_bit LENGTH('a) (take_bit LENGTH('a) (k * 2) div take_bit LENGTH('a) 2) = take_bit LENGTH('a) k\<close> by simp qed ultimately show ?thesis by simp next case True moreover have \<open>(1 + a * 2) div 2 = a\<close> using that proof transfer fix k :: int from length have \<open>(1 + k * 2) mod 2 ^ LENGTH('a) = 1 + (k mod 2 ^ n) * 2\<close> using pos_zmod_mult_2 [of \<open>2 ^ n\<close> k] by (simp add: ac_simps) moreover assume \<open>take_bit LENGTH('a) k < take_bit LENGTH('a) (2 ^ (LENGTH('a) - Suc 0))\<close> with \<open>LENGTH('a) = Suc n\<close> have \<open>k mod 2 ^ LENGTH('a) = k mod 2 ^ n\<close> by (simp add: take_bit_eq_mod divmod_digit_0) ultimately have \<open>take_bit LENGTH('a) (1 + k * 2) = 1 + take_bit LENGTH('a) k * 2\<close> by 
(simp add: take_bit_eq_mod) with True show \<open>take_bit LENGTH('a) (take_bit LENGTH('a) (1 + k * 2) div take_bit LENGTH('a) 2) = take_bit LENGTH('a) k\<close> by (auto simp add: take_bit_Suc) qed ultimately show ?thesis by simp qed qed lemma even_mult_exp_div_word_iff: \<open>even (a * 2 ^ m div 2 ^ n) \<longleftrightarrow> \<not> ( m \<le> n \<and> n < LENGTH('a) \<and> odd (a div 2 ^ (n - m)))\<close> for a :: \<open>'a::len word\<close> by transfer (auto simp flip: drop_bit_eq_div simp add: even_drop_bit_iff_not_bit bit_take_bit_iff, simp_all flip: push_bit_eq_mult add: bit_push_bit_iff_int) instantiation word :: (len) semiring_bits begin lift_definition bit_word :: \<open>'a word \<Rightarrow> nat \<Rightarrow> bool\<close> is \<open>\<lambda>k n. n < LENGTH('a) \<and> bit k n\<close> proof fix k l :: int and n :: nat assume *: \<open>take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close> show \<open>n < LENGTH('a) \<and> bit k n \<longleftrightarrow> n < LENGTH('a) \<and> bit l n\<close> proof (cases \<open>n < LENGTH('a)\<close>) case True from * have \<open>bit (take_bit LENGTH('a) k) n \<longleftrightarrow> bit (take_bit LENGTH('a) l) n\<close> by simp then show ?thesis by (simp add: bit_take_bit_iff) next case False then show ?thesis by simp qed qed instance proof show \<open>P a\<close> if stable: \<open>\<And>a. a div 2 = a \<Longrightarrow> P a\<close> and rec: \<open>\<And>a b. P a \<Longrightarrow> (of_bool b + 2 * a) div 2 = a \<Longrightarrow> P (of_bool b + 2 * a)\<close> for P and a :: \<open>'a word\<close> proof (induction a rule: word_bit_induct) case zero have \<open>0 div 2 = (0::'a word)\<close> by transfer simp with stable [of 0] show ?case by simp next case (even a) with rec [of a False] show ?case using bit_word_half_eq [of a False] by (simp add: ac_simps) next case (odd a) with rec [of a True] show ?case using bit_word_half_eq [of a True] by (simp add: ac_simps) qed show \<open>bit a n \<longleftrightarrow> odd (a div 2 ^ n)\<close> for a :: \<open>'a word\<close> and n by transfer (simp flip: drop_bit_eq_div add: drop_bit_take_bit bit_iff_odd_drop_bit) show \<open>0 div a = 0\<close> for a :: \<open>'a word\<close> by transfer simp show \<open>a div 1 = a\<close> for a :: \<open>'a word\<close> by transfer simp have \<section>: "\<And>i n. (i::int) mod 2 ^ n = 0 \<or> 0 < i mod 2 ^ n" by (metis le_less take_bit_eq_mod take_bit_nonnegative) have less_power: "\<And>n i p. 
(i::int) mod numeral p ^ n < numeral p ^ n" by simp show \<open>a mod b div b = 0\<close> for a b :: \<open>'a word\<close> apply transfer apply (simp add: take_bit_eq_mod mod_eq_0_iff_dvd dvd_def) by (metis (no_types, opaque_lifting) "\<section>" Euclidean_Division.pos_mod_bound Euclidean_Division.pos_mod_sign le_less_trans mult_eq_0_iff take_bit_eq_mod take_bit_nonnegative zdiv_eq_0_iff zmod_le_nonneg_dividend) show \<open>(1 + a) div 2 = a div 2\<close> if \<open>even a\<close> for a :: \<open>'a word\<close> using that by transfer (auto dest: le_Suc_ex simp add: take_bit_Suc elim!: evenE) show \<open>(2 :: 'a word) ^ m div 2 ^ n = of_bool ((2 :: 'a word) ^ m \<noteq> 0 \<and> n \<le> m) * 2 ^ (m - n)\<close> for m n :: nat by transfer (simp, simp add: exp_div_exp_eq) show "a div 2 ^ m div 2 ^ n = a div 2 ^ (m + n)" for a :: "'a word" and m n :: nat apply transfer apply (auto simp add: not_less take_bit_drop_bit ac_simps simp flip: drop_bit_eq_div) apply (simp add: drop_bit_take_bit) done show "a mod 2 ^ m mod 2 ^ n = a mod 2 ^ min m n" for a :: "'a word" and m n :: nat by transfer (auto simp flip: take_bit_eq_mod simp add: ac_simps) show \<open>a * 2 ^ m mod 2 ^ n = a mod 2 ^ (n - m) * 2 ^ m\<close> if \<open>m \<le> n\<close> for a :: "'a word" and m n :: nat using that apply transfer apply (auto simp flip: take_bit_eq_mod) apply (auto simp flip: push_bit_eq_mult simp add: push_bit_take_bit split: split_min_lin) done show \<open>a div 2 ^ n mod 2 ^ m = a mod (2 ^ (n + m)) div 2 ^ n\<close> for a :: "'a word" and m n :: nat by transfer (auto simp add: not_less take_bit_drop_bit ac_simps simp flip: take_bit_eq_mod drop_bit_eq_div split: split_min_lin) show \<open>even ((2 ^ m - 1) div (2::'a word) ^ n) \<longleftrightarrow> 2 ^ n = (0::'a word) \<or> m \<le> n\<close> for m n :: nat by transfer (simp flip: drop_bit_eq_div mask_eq_exp_minus_1 add: bit_simps even_drop_bit_iff_not_bit not_less) show \<open>even (a * 2 ^ m div 2 ^ n) \<longleftrightarrow> n < m \<or> (2::'a word) ^ n = 0 \<or> m \<le> n \<and> even (a div 2 ^ (n - m))\<close> for a :: \<open>'a word\<close> and m n :: nat proof transfer show \<open>even (take_bit LENGTH('a) (k * 2 ^ m) div take_bit LENGTH('a) (2 ^ n)) \<longleftrightarrow> n < m \<or> take_bit LENGTH('a) ((2::int) ^ n) = take_bit LENGTH('a) 0 \<or> (m \<le> n \<and> even (take_bit LENGTH('a) k div take_bit LENGTH('a) (2 ^ (n - m))))\<close> for m n :: nat and k l :: int by (auto simp flip: take_bit_eq_mod drop_bit_eq_div push_bit_eq_mult simp add: div_push_bit_of_1_eq_drop_bit drop_bit_take_bit drop_bit_push_bit_int [of n m]) qed qed end lemma bit_word_eqI: \<open>a = b\<close> if \<open>\<And>n. n < LENGTH('a) \<Longrightarrow> bit a n \<longleftrightarrow> bit b n\<close> for a b :: \<open>'a::len word\<close> using that by transfer (auto simp add: nat_less_le bit_eq_iff bit_take_bit_iff) lemma bit_imp_le_length: \<open>n < LENGTH('a)\<close> if \<open>bit w n\<close> for w :: \<open>'a::len word\<close> using that by transfer simp lemma not_bit_length [simp]: \<open>\<not> bit w LENGTH('a)\<close> for w :: \<open>'a::len word\<close> by transfer simp lemma finite_bit_word [simp]: \<open>finite {n. bit w n}\<close> for w :: \<open>'a::len word\<close> proof - have \<open>{n. 
bit w n} \<subseteq> {0..LENGTH('a)}\<close> by (auto dest: bit_imp_le_length) moreover have \<open>finite {0..LENGTH('a)}\<close> by simp ultimately show ?thesis by (rule finite_subset) qed lemma bit_numeral_word_iff [simp]: \<open>bit (numeral w :: 'a::len word) n \<longleftrightarrow> n < LENGTH('a) \<and> bit (numeral w :: int) n\<close> by transfer simp lemma bit_neg_numeral_word_iff [simp]: \<open>bit (- numeral w :: 'a::len word) n \<longleftrightarrow> n < LENGTH('a) \<and> bit (- numeral w :: int) n\<close> by transfer simp instantiation word :: (len) ring_bit_operations begin lift_definition not_word :: \<open>'a word \<Rightarrow> 'a word\<close> is not by (simp add: take_bit_not_iff) lift_definition and_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>and\<close> by simp lift_definition or_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is or by simp lift_definition xor_word :: \<open>'a word \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is xor by simp lift_definition mask_word :: \<open>nat \<Rightarrow> 'a word\<close> is mask . lift_definition set_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is set_bit by (simp add: set_bit_def) lift_definition unset_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is unset_bit by (simp add: unset_bit_def) lift_definition flip_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is flip_bit by (simp add: flip_bit_def) lift_definition push_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is push_bit proof - show \<open>take_bit LENGTH('a) (push_bit n k) = take_bit LENGTH('a) (push_bit n l)\<close> if \<open>take_bit LENGTH('a) k = take_bit LENGTH('a) l\<close> for k l :: int and n :: nat proof - from that have \<open>take_bit (LENGTH('a) - n) (take_bit LENGTH('a) k) = take_bit (LENGTH('a) - n) (take_bit LENGTH('a) l)\<close> by simp moreover have \<open>min (LENGTH('a) - n) LENGTH('a) = LENGTH('a) - n\<close> by simp ultimately show ?thesis by (simp add: take_bit_push_bit) qed qed lift_definition drop_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>\<lambda>n. drop_bit n \<circ> take_bit LENGTH('a)\<close> by (simp add: take_bit_eq_mod) lift_definition take_bit_word :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a word\<close> is \<open>\<lambda>n. 
take_bit (min LENGTH('a) n)\<close> by (simp add: ac_simps) (simp only: flip: take_bit_take_bit) instance apply (standard; transfer) apply (auto simp add: minus_eq_not_minus_1 mask_eq_exp_minus_1 bit_simps set_bit_def flip_bit_def take_bit_drop_bit simp flip: drop_bit_eq_div take_bit_eq_mod) apply (simp_all add: drop_bit_take_bit flip: push_bit_eq_mult) done end lemma [code]: \<open>push_bit n w = w * 2 ^ n\<close> for w :: \<open>'a::len word\<close> by (fact push_bit_eq_mult) lemma [code]: \<open>Word.the_int (drop_bit n w) = drop_bit n (Word.the_int w)\<close> by transfer (simp add: drop_bit_take_bit min_def le_less less_diff_conv) lemma [code]: \<open>Word.the_int (take_bit n w) = (if n < LENGTH('a::len) then take_bit n (Word.the_int w) else Word.the_int w)\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: not_le not_less ac_simps min_absorb2) lemma [code_abbrev]: \<open>push_bit n 1 = (2 :: 'a::len word) ^ n\<close> by (fact push_bit_of_1) context includes bit_operations_syntax begin lemma [code]: \<open>NOT w = Word.of_int (NOT (Word.the_int w))\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_not_take_bit) lemma [code]: \<open>Word.the_int (v AND w) = Word.the_int v AND Word.the_int w\<close> by transfer simp lemma [code]: \<open>Word.the_int (v OR w) = Word.the_int v OR Word.the_int w\<close> by transfer simp lemma [code]: \<open>Word.the_int (v XOR w) = Word.the_int v XOR Word.the_int w\<close> by transfer simp lemma [code]: \<open>Word.the_int (mask n :: 'a::len word) = mask (min LENGTH('a) n)\<close> by transfer simp lemma [code]: \<open>set_bit n w = w OR push_bit n 1\<close> for w :: \<open>'a::len word\<close> by (fact set_bit_eq_or) lemma [code]: \<open>unset_bit n w = w AND NOT (push_bit n 1)\<close> for w :: \<open>'a::len word\<close> by (fact unset_bit_eq_and_not) lemma [code]: \<open>flip_bit n w = w XOR push_bit n 1\<close> for w :: \<open>'a::len word\<close> by (fact flip_bit_eq_xor) context includes lifting_syntax begin lemma set_bit_word_transfer [transfer_rule]: \<open>((=) ===> pcr_word ===> pcr_word) set_bit set_bit\<close> by (unfold set_bit_def) transfer_prover lemma unset_bit_word_transfer [transfer_rule]: \<open>((=) ===> pcr_word ===> pcr_word) unset_bit unset_bit\<close> by (unfold unset_bit_def) transfer_prover lemma flip_bit_word_transfer [transfer_rule]: \<open>((=) ===> pcr_word ===> pcr_word) flip_bit flip_bit\<close> by (unfold flip_bit_def) transfer_prover lemma signed_take_bit_word_transfer [transfer_rule]: \<open>((=) ===> pcr_word ===> pcr_word) (\<lambda>n k. signed_take_bit n (take_bit LENGTH('a::len) k)) (signed_take_bit :: nat \<Rightarrow> 'a word \<Rightarrow> 'a word)\<close> proof - let ?K = \<open>\<lambda>n (k :: int). take_bit (min LENGTH('a) n) k OR of_bool (n < LENGTH('a) \<and> bit k n) * NOT (mask n)\<close> let ?W = \<open>\<lambda>n (w :: 'a word). take_bit n w OR of_bool (bit w n) * NOT (mask n)\<close> have \<open>((=) ===> pcr_word ===> pcr_word) ?K ?W\<close> by transfer_prover also have \<open>?K = (\<lambda>n k. signed_take_bit n (take_bit LENGTH('a::len) k))\<close> by (simp add: fun_eq_iff signed_take_bit_def bit_take_bit_iff ac_simps) also have \<open>?W = signed_take_bit\<close> by (simp add: fun_eq_iff signed_take_bit_def) finally show ?thesis . 
qed end end subsection \<open>Conversions including casts\<close> subsubsection \<open>Generic unsigned conversion\<close> context semiring_bits begin lemma bit_unsigned_iff [bit_simps]: \<open>bit (unsigned w) n \<longleftrightarrow> possible_bit TYPE('a) n \<and> bit w n\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: bit) (simp add: bit_of_nat_iff bit_nat_iff bit_take_bit_iff) end lemma possible_bit_word[simp]: \<open>possible_bit TYPE(('a :: len) word) m \<longleftrightarrow> m < LENGTH('a)\<close> by (simp add: possible_bit_def linorder_not_le) context semiring_bit_operations begin lemma unsigned_minus_1_eq_mask: \<open>unsigned (- 1 :: 'b::len word) = mask LENGTH('b)\<close> by (transfer fixing: mask) (simp add: nat_mask_eq of_nat_mask_eq) lemma unsigned_push_bit_eq: \<open>unsigned (push_bit n w) = take_bit LENGTH('b) (push_bit n (unsigned w))\<close> for w :: \<open>'b::len word\<close> proof (rule bit_eqI) fix m assume \<open>possible_bit TYPE('a) m\<close> show \<open>bit (unsigned (push_bit n w)) m = bit (take_bit LENGTH('b) (push_bit n (unsigned w))) m\<close> proof (cases \<open>n \<le> m\<close>) case True with \<open>possible_bit TYPE('a) m\<close> have \<open>possible_bit TYPE('a) (m - n)\<close> by (simp add: possible_bit_less_imp) with True show ?thesis by (simp add: bit_unsigned_iff bit_push_bit_iff Bit_Operations.bit_push_bit_iff bit_take_bit_iff not_le ac_simps) next case False then show ?thesis by (simp add: not_le bit_unsigned_iff bit_push_bit_iff Bit_Operations.bit_push_bit_iff bit_take_bit_iff) qed qed lemma unsigned_take_bit_eq: \<open>unsigned (take_bit n w) = take_bit n (unsigned w)\<close> for w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_unsigned_iff bit_take_bit_iff Bit_Operations.bit_take_bit_iff) end context unique_euclidean_semiring_with_bit_operations begin lemma unsigned_drop_bit_eq: \<open>unsigned (drop_bit n w) = drop_bit n (take_bit LENGTH('b) (unsigned w))\<close> for w :: \<open>'b::len word\<close> by (rule bit_eqI) (auto simp add: bit_unsigned_iff bit_take_bit_iff bit_drop_bit_eq Bit_Operations.bit_drop_bit_eq possible_bit_def dest: bit_imp_le_length) end lemma ucast_drop_bit_eq: \<open>ucast (drop_bit n w) = drop_bit n (ucast w :: 'b::len word)\<close> if \<open>LENGTH('a) \<le> LENGTH('b)\<close> for w :: \<open>'a::len word\<close> by (rule bit_word_eqI) (use that in \<open>auto simp add: bit_unsigned_iff bit_drop_bit_eq dest: bit_imp_le_length\<close>) context semiring_bit_operations begin context includes bit_operations_syntax begin lemma unsigned_and_eq: \<open>unsigned (v AND w) = unsigned v AND unsigned w\<close> for v w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps) lemma unsigned_or_eq: \<open>unsigned (v OR w) = unsigned v OR unsigned w\<close> for v w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps) lemma unsigned_xor_eq: \<open>unsigned (v XOR w) = unsigned v XOR unsigned w\<close> for v w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps) end end context ring_bit_operations begin context includes bit_operations_syntax begin lemma unsigned_not_eq: \<open>unsigned (NOT w) = take_bit LENGTH('b) (NOT (unsigned w))\<close> for w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps) end end context unique_euclidean_semiring_numeral begin lemma unsigned_greater_eq [simp]: \<open>0 \<le> unsigned w\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: less_eq) simp lemma unsigned_less [simp]: \<open>unsigned w < 2 ^ 
LENGTH('b)\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: less) simp end context linordered_semidom begin lemma word_less_eq_iff_unsigned: "a \<le> b \<longleftrightarrow> unsigned a \<le> unsigned b" by (transfer fixing: less_eq) (simp add: nat_le_eq_zle) lemma word_less_iff_unsigned: "a < b \<longleftrightarrow> unsigned a < unsigned b" by (transfer fixing: less) (auto dest: preorder_class.le_less_trans [OF take_bit_nonnegative]) end subsubsection \<open>Generic signed conversion\<close> context ring_bit_operations begin lemma bit_signed_iff [bit_simps]: \<open>bit (signed w) n \<longleftrightarrow> possible_bit TYPE('a) n \<and> bit w (min (LENGTH('b) - Suc 0) n)\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: bit) (auto simp add: bit_of_int_iff Bit_Operations.bit_signed_take_bit_iff min_def) lemma signed_push_bit_eq: \<open>signed (push_bit n w) = signed_take_bit (LENGTH('b) - Suc 0) (push_bit n (signed w :: 'a))\<close> for w :: \<open>'b::len word\<close> apply (simp add: bit_eq_iff bit_simps possible_bit_less_imp min_less_iff_disj) apply (cases n, simp_all add: min_def) done lemma signed_take_bit_eq: \<open>signed (take_bit n w) = (if n < LENGTH('b) then take_bit n (signed w) else signed w)\<close> for w :: \<open>'b::len word\<close> by (transfer fixing: take_bit; cases \<open>LENGTH('b)\<close>) (auto simp add: Bit_Operations.signed_take_bit_take_bit Bit_Operations.take_bit_signed_take_bit take_bit_of_int min_def less_Suc_eq) context includes bit_operations_syntax begin lemma signed_not_eq: \<open>signed (NOT w) = signed_take_bit LENGTH('b) (NOT (signed w))\<close> for w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps possible_bit_less_imp min_less_iff_disj) (auto simp: min_def) lemma signed_and_eq: \<open>signed (v AND w) = signed v AND signed w\<close> for v w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_signed_iff bit_and_iff Bit_Operations.bit_and_iff) lemma signed_or_eq: \<open>signed (v OR w) = signed v OR signed w\<close> for v w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_signed_iff bit_or_iff Bit_Operations.bit_or_iff) lemma signed_xor_eq: \<open>signed (v XOR w) = signed v XOR signed w\<close> for v w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_signed_iff bit_xor_iff Bit_Operations.bit_xor_iff) end end subsubsection \<open>More\<close> lemma sint_greater_eq: \<open>- (2 ^ (LENGTH('a) - Suc 0)) \<le> sint w\<close> for w :: \<open>'a::len word\<close> proof (cases \<open>bit w (LENGTH('a) - Suc 0)\<close>) case True then show ?thesis by transfer (simp add: signed_take_bit_eq_if_negative minus_exp_eq_not_mask or_greater_eq ac_simps) next have *: \<open>- (2 ^ (LENGTH('a) - Suc 0)) \<le> (0::int)\<close> by simp case False then show ?thesis by transfer (auto simp add: signed_take_bit_eq intro: order_trans *) qed lemma sint_less: \<open>sint w < 2 ^ (LENGTH('a) - Suc 0)\<close> for w :: \<open>'a::len word\<close> by (cases \<open>bit w (LENGTH('a) - Suc 0)\<close>; transfer) (simp_all add: signed_take_bit_eq signed_take_bit_def not_eq_complement mask_eq_exp_minus_1 OR_upper) lemma unat_div_distrib: \<open>unat (v div w) = unat v div unat w\<close> proof transfer fix k l have \<open>nat (take_bit LENGTH('a) k) div nat (take_bit LENGTH('a) l) \<le> nat (take_bit LENGTH('a) k)\<close> by (rule div_le_dividend) also have \<open>nat (take_bit LENGTH('a) k) < 2 ^ LENGTH('a)\<close> by (simp add: nat_less_iff) finally show \<open>(nat \<circ> take_bit LENGTH('a)) 
(take_bit LENGTH('a) k div take_bit LENGTH('a) l) = (nat \<circ> take_bit LENGTH('a)) k div (nat \<circ> take_bit LENGTH('a)) l\<close> by (simp add: nat_take_bit_eq div_int_pos_iff nat_div_distrib take_bit_nat_eq_self_iff) qed lemma unat_mod_distrib: \<open>unat (v mod w) = unat v mod unat w\<close> proof transfer fix k l have \<open>nat (take_bit LENGTH('a) k) mod nat (take_bit LENGTH('a) l) \<le> nat (take_bit LENGTH('a) k)\<close> by (rule mod_less_eq_dividend) also have \<open>nat (take_bit LENGTH('a) k) < 2 ^ LENGTH('a)\<close> by (simp add: nat_less_iff) finally show \<open>(nat \<circ> take_bit LENGTH('a)) (take_bit LENGTH('a) k mod take_bit LENGTH('a) l) = (nat \<circ> take_bit LENGTH('a)) k mod (nat \<circ> take_bit LENGTH('a)) l\<close> by (simp add: nat_take_bit_eq mod_int_pos_iff less_le nat_mod_distrib take_bit_nat_eq_self_iff) qed lemma uint_div_distrib: \<open>uint (v div w) = uint v div uint w\<close> proof - have \<open>int (unat (v div w)) = int (unat v div unat w)\<close> by (simp add: unat_div_distrib) then show ?thesis by (simp add: of_nat_div) qed lemma unat_drop_bit_eq: \<open>unat (drop_bit n w) = drop_bit n (unat w)\<close> by (rule bit_eqI) (simp add: bit_unsigned_iff bit_drop_bit_eq) lemma uint_mod_distrib: \<open>uint (v mod w) = uint v mod uint w\<close> proof - have \<open>int (unat (v mod w)) = int (unat v mod unat w)\<close> by (simp add: unat_mod_distrib) then show ?thesis by (simp add: of_nat_mod) qed context semiring_bit_operations begin lemma unsigned_ucast_eq: \<open>unsigned (ucast w :: 'c::len word) = take_bit LENGTH('c) (unsigned w)\<close> for w :: \<open>'b::len word\<close> by (rule bit_eqI) (simp add: bit_unsigned_iff Word.bit_unsigned_iff bit_take_bit_iff not_le) end context ring_bit_operations begin lemma signed_ucast_eq: \<open>signed (ucast w :: 'c::len word) = signed_take_bit (LENGTH('c) - Suc 0) (unsigned w)\<close> for w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps min_less_iff_disj) lemma signed_scast_eq: \<open>signed (scast w :: 'c::len word) = signed_take_bit (LENGTH('c) - Suc 0) (signed w)\<close> for w :: \<open>'b::len word\<close> by (simp add: bit_eq_iff bit_simps min_less_iff_disj) end lemma uint_nonnegative: "0 \<le> uint w" by (fact unsigned_greater_eq) lemma uint_bounded: "uint w < 2 ^ LENGTH('a)" for w :: "'a::len word" by (fact unsigned_less) lemma uint_idem: "uint w mod 2 ^ LENGTH('a) = uint w" for w :: "'a::len word" by transfer (simp add: take_bit_eq_mod) lemma word_uint_eqI: "uint a = uint b \<Longrightarrow> a = b" by (fact unsigned_word_eqI) lemma word_uint_eq_iff: "a = b \<longleftrightarrow> uint a = uint b" by (fact word_eq_iff_unsigned) lemma uint_word_of_int_eq: \<open>uint (word_of_int k :: 'a::len word) = take_bit LENGTH('a) k\<close> by transfer rule lemma uint_word_of_int: "uint (word_of_int k :: 'a::len word) = k mod 2 ^ LENGTH('a)" by (simp add: uint_word_of_int_eq take_bit_eq_mod) lemma word_of_int_uint: "word_of_int (uint w) = w" by transfer simp lemma word_div_def [code]: "a div b = word_of_int (uint a div uint b)" by transfer rule lemma word_mod_def [code]: "a mod b = word_of_int (uint a mod uint b)" by transfer rule lemma split_word_all: "(\<And>x::'a::len word. PROP P x) \<equiv> (\<And>x. PROP P (word_of_int x))" proof fix x :: "'a word" assume "\<And>x. PROP P (word_of_int x)" then have "PROP P (word_of_int (uint x))" . 
then show "PROP P x" by (simp only: word_of_int_uint) qed lemma sint_uint: \<open>sint w = signed_take_bit (LENGTH('a) - Suc 0) (uint w)\<close> for w :: \<open>'a::len word\<close> by (cases \<open>LENGTH('a)\<close>; transfer) (simp_all add: signed_take_bit_take_bit) lemma unat_eq_nat_uint: \<open>unat w = nat (uint w)\<close> by simp lemma ucast_eq: \<open>ucast w = word_of_int (uint w)\<close> by transfer simp lemma scast_eq: \<open>scast w = word_of_int (sint w)\<close> by transfer simp lemma uint_0_eq: \<open>uint 0 = 0\<close> by (fact unsigned_0) lemma uint_1_eq: \<open>uint 1 = 1\<close> by (fact unsigned_1) lemma word_m1_wi: "- 1 = word_of_int (- 1)" by simp lemma uint_0_iff: "uint x = 0 \<longleftrightarrow> x = 0" by (auto simp add: unsigned_word_eqI) lemma unat_0_iff: "unat x = 0 \<longleftrightarrow> x = 0" by (auto simp add: unsigned_word_eqI) lemma unat_0: "unat 0 = 0" by (fact unsigned_0) lemma unat_gt_0: "0 < unat x \<longleftrightarrow> x \<noteq> 0" by (auto simp: unat_0_iff [symmetric]) lemma ucast_0: "ucast 0 = 0" by (fact unsigned_0) lemma sint_0: "sint 0 = 0" by (fact signed_0) lemma scast_0: "scast 0 = 0" by (fact signed_0) lemma sint_n1: "sint (- 1) = - 1" by (fact signed_minus_1) lemma scast_n1: "scast (- 1) = - 1" by (fact signed_minus_1) lemma uint_1: "uint (1::'a::len word) = 1" by (fact uint_1_eq) lemma unat_1: "unat (1::'a::len word) = 1" by (fact unsigned_1) lemma ucast_1: "ucast (1::'a::len word) = 1" by (fact unsigned_1) instantiation word :: (len) size begin lift_definition size_word :: \<open>'a word \<Rightarrow> nat\<close> is \<open>\<lambda>_. LENGTH('a)\<close> .. instance .. end lemma word_size [code]: \<open>size w = LENGTH('a)\<close> for w :: \<open>'a::len word\<close> by (fact size_word.rep_eq) lemma word_size_gt_0 [iff]: "0 < size w" for w :: "'a::len word" by (simp add: word_size) lemmas lens_gt_0 = word_size_gt_0 len_gt_0 lemma lens_not_0 [iff]: \<open>size w \<noteq> 0\<close> for w :: \<open>'a::len word\<close> by auto lift_definition source_size :: \<open>('a::len word \<Rightarrow> 'b) \<Rightarrow> nat\<close> is \<open>\<lambda>_. LENGTH('a)\<close> . lift_definition target_size :: \<open>('a \<Rightarrow> 'b::len word) \<Rightarrow> nat\<close> is \<open>\<lambda>_. LENGTH('b)\<close> .. lift_definition is_up :: \<open>('a::len word \<Rightarrow> 'b::len word) \<Rightarrow> bool\<close> is \<open>\<lambda>_. LENGTH('a) \<le> LENGTH('b)\<close> .. lift_definition is_down :: \<open>('a::len word \<Rightarrow> 'b::len word) \<Rightarrow> bool\<close> is \<open>\<lambda>_. LENGTH('a) \<ge> LENGTH('b)\<close> .. lemma is_up_eq: \<open>is_up f \<longleftrightarrow> source_size f \<le> target_size f\<close> for f :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> by (simp add: source_size.rep_eq target_size.rep_eq is_up.rep_eq) lemma is_down_eq: \<open>is_down f \<longleftrightarrow> target_size f \<le> source_size f\<close> for f :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> by (simp add: source_size.rep_eq target_size.rep_eq is_down.rep_eq) lift_definition word_int_case :: \<open>(int \<Rightarrow> 'b) \<Rightarrow> 'a::len word \<Rightarrow> 'b\<close> is \<open>\<lambda>f. f \<circ> take_bit LENGTH('a)\<close> by simp lemma word_int_case_eq_uint [code]: \<open>word_int_case f w = f (uint w)\<close> by transfer simp translations "case x of XCONST of_int y \<Rightarrow> b" \<rightleftharpoons> "CONST word_int_case (\<lambda>y. 
b) x" "case x of (XCONST of_int :: 'a) y \<Rightarrow> b" \<rightharpoonup> "CONST word_int_case (\<lambda>y. b) x" subsection \<open>Arithmetic operations\<close> lemma div_word_self: \<open>w div w = 1\<close> if \<open>w \<noteq> 0\<close> for w :: \<open>'a::len word\<close> using that by transfer simp lemma mod_word_self [simp]: \<open>w mod w = 0\<close> for w :: \<open>'a::len word\<close> apply (cases \<open>w = 0\<close>) apply auto using div_mult_mod_eq [of w w] by (simp add: div_word_self) lemma div_word_less: \<open>w div v = 0\<close> if \<open>w < v\<close> for w v :: \<open>'a::len word\<close> using that by transfer simp lemma mod_word_less: \<open>w mod v = w\<close> if \<open>w < v\<close> for w v :: \<open>'a::len word\<close> using div_mult_mod_eq [of w v] using that by (simp add: div_word_less) lemma div_word_one [simp]: \<open>1 div w = of_bool (w = 1)\<close> for w :: \<open>'a::len word\<close> proof transfer fix k :: int show \<open>take_bit LENGTH('a) (take_bit LENGTH('a) 1 div take_bit LENGTH('a) k) = take_bit LENGTH('a) (of_bool (take_bit LENGTH('a) k = take_bit LENGTH('a) 1))\<close> proof (cases \<open>take_bit LENGTH('a) k > 1\<close>) case False with take_bit_nonnegative [of \<open>LENGTH('a)\<close> k] have \<open>take_bit LENGTH('a) k = 0 \<or> take_bit LENGTH('a) k = 1\<close> by linarith then show ?thesis by auto next case True then show ?thesis by simp qed qed lemma mod_word_one [simp]: \<open>1 mod w = 1 - w * of_bool (w = 1)\<close> for w :: \<open>'a::len word\<close> using div_mult_mod_eq [of 1 w] by simp lemma div_word_by_minus_1_eq [simp]: \<open>w div - 1 = of_bool (w = - 1)\<close> for w :: \<open>'a::len word\<close> by (auto intro: div_word_less simp add: div_word_self word_order.not_eq_extremum) lemma mod_word_by_minus_1_eq [simp]: \<open>w mod - 1 = w * of_bool (w < - 1)\<close> for w :: \<open>'a::len word\<close> apply (cases \<open>w = - 1\<close>) apply (auto simp add: word_order.not_eq_extremum) using div_mult_mod_eq [of w \<open>- 1\<close>] by simp text \<open>Legacy theorems:\<close> lemma word_add_def [code]: "a + b = word_of_int (uint a + uint b)" by transfer (simp add: take_bit_add) lemma word_sub_wi [code]: "a - b = word_of_int (uint a - uint b)" by transfer (simp add: take_bit_diff) lemma word_mult_def [code]: "a * b = word_of_int (uint a * uint b)" by transfer (simp add: take_bit_eq_mod mod_simps) lemma word_minus_def [code]: "- a = word_of_int (- uint a)" by transfer (simp add: take_bit_minus) lemma word_0_wi: "0 = word_of_int 0" by transfer simp lemma word_1_wi: "1 = word_of_int 1" by transfer simp lift_definition word_succ :: "'a::len word \<Rightarrow> 'a word" is "\<lambda>x. x + 1" by (auto simp add: take_bit_eq_mod intro: mod_add_cong) lift_definition word_pred :: "'a::len word \<Rightarrow> 'a word" is "\<lambda>x. 
x - 1" by (auto simp add: take_bit_eq_mod intro: mod_diff_cong) lemma word_succ_alt [code]: "word_succ a = word_of_int (uint a + 1)" by transfer (simp add: take_bit_eq_mod mod_simps) lemma word_pred_alt [code]: "word_pred a = word_of_int (uint a - 1)" by transfer (simp add: take_bit_eq_mod mod_simps) lemmas word_arith_wis = word_add_def word_sub_wi word_mult_def word_minus_def word_succ_alt word_pred_alt word_0_wi word_1_wi lemma wi_homs: shows wi_hom_add: "word_of_int a + word_of_int b = word_of_int (a + b)" and wi_hom_sub: "word_of_int a - word_of_int b = word_of_int (a - b)" and wi_hom_mult: "word_of_int a * word_of_int b = word_of_int (a * b)" and wi_hom_neg: "- word_of_int a = word_of_int (- a)" and wi_hom_succ: "word_succ (word_of_int a) = word_of_int (a + 1)" and wi_hom_pred: "word_pred (word_of_int a) = word_of_int (a - 1)" by (transfer, simp)+ lemmas wi_hom_syms = wi_homs [symmetric] lemmas word_of_int_homs = wi_homs word_0_wi word_1_wi lemmas word_of_int_hom_syms = word_of_int_homs [symmetric] lemma double_eq_zero_iff: \<open>2 * a = 0 \<longleftrightarrow> a = 0 \<or> a = 2 ^ (LENGTH('a) - Suc 0)\<close> for a :: \<open>'a::len word\<close> proof - define n where \<open>n = LENGTH('a) - Suc 0\<close> then have *: \<open>LENGTH('a) = Suc n\<close> by simp have \<open>a = 0\<close> if \<open>2 * a = 0\<close> and \<open>a \<noteq> 2 ^ (LENGTH('a) - Suc 0)\<close> using that by transfer (auto simp add: take_bit_eq_0_iff take_bit_eq_mod *) moreover have \<open>2 ^ LENGTH('a) = (0 :: 'a word)\<close> by transfer simp then have \<open>2 * 2 ^ (LENGTH('a) - Suc 0) = (0 :: 'a word)\<close> by (simp add: *) ultimately show ?thesis by auto qed subsection \<open>Ordering\<close> lift_definition word_sle :: \<open>'a::len word \<Rightarrow> 'a word \<Rightarrow> bool\<close> is \<open>\<lambda>k l. signed_take_bit (LENGTH('a) - Suc 0) k \<le> signed_take_bit (LENGTH('a) - Suc 0) l\<close> by (simp flip: signed_take_bit_decr_length_iff) lift_definition word_sless :: \<open>'a::len word \<Rightarrow> 'a word \<Rightarrow> bool\<close> is \<open>\<lambda>k l. 
lift_definition word_sless :: \<open>'a::len word \<Rightarrow> 'a word \<Rightarrow> bool\<close>
  is \<open>\<lambda>k l. signed_take_bit (LENGTH('a) - Suc 0) k < signed_take_bit (LENGTH('a) - Suc 0) l\<close>
  by (simp flip: signed_take_bit_decr_length_iff)

notation
  word_sle ("'(\<le>s')") and
  word_sle ("(_/ \<le>s _)" [51, 51] 50) and
  word_sless ("'(<s')") and
  word_sless ("(_/ <s _)" [51, 51] 50)

notation (input)
  word_sle ("(_/ <=s _)" [51, 51] 50)

lemma word_sle_eq [code]:
  \<open>a <=s b \<longleftrightarrow> sint a \<le> sint b\<close>
  by transfer simp

lemma [code]:
  \<open>a <s b \<longleftrightarrow> sint a < sint b\<close>
  by transfer simp

lemma signed_ordering: \<open>ordering word_sle word_sless\<close>
  apply (standard; transfer)
  using signed_take_bit_decr_length_iff by force+

lemma signed_linorder: \<open>class.linorder word_sle word_sless\<close>
  by (standard; transfer) (auto simp add: signed_take_bit_decr_length_iff)

interpretation signed: linorder word_sle word_sless
  by (fact signed_linorder)

lemma word_sless_eq:
  \<open>x <s y \<longleftrightarrow> x <=s y \<and> x \<noteq> y\<close>
  by (fact signed.less_le)

lemma word_less_alt: "a < b \<longleftrightarrow> uint a < uint b"
  by (fact word_less_def)

lemma word_zero_le [simp]: "0 \<le> y" for y :: "'a::len word"
  by (fact word_coorder.extremum)

lemma word_m1_ge [simp]: "word_pred 0 \<ge> y" (* FIXME: delete *)
  by transfer (simp add: mask_eq_exp_minus_1)

lemma word_n1_ge [simp]: "y \<le> -1" for y :: "'a::len word"
  by (fact word_order.extremum)

lemmas word_not_simps [simp] =
  word_zero_le [THEN leD] word_m1_ge [THEN leD] word_n1_ge [THEN leD]

lemma word_gt_0: "0 < y \<longleftrightarrow> 0 \<noteq> y" for y :: "'a::len word"
  by (simp add: less_le)

lemmas word_gt_0_no [simp] = word_gt_0 [of "numeral y"] for y

lemma word_sless_alt: "a <s b \<longleftrightarrow> sint a < sint b"
  by transfer simp

lemma word_le_nat_alt: "a \<le> b \<longleftrightarrow> unat a \<le> unat b"
  by transfer (simp add: nat_le_eq_zle)

lemma word_less_nat_alt: "a < b \<longleftrightarrow> unat a < unat b"
  by transfer (auto simp add: less_le [of 0])

lemmas unat_mono = word_less_nat_alt [THEN iffD1]

instance word :: (len) wellorder
proof
  fix P :: "'a word \<Rightarrow> bool" and a
  assume *: "(\<And>b. (\<And>a. a < b \<Longrightarrow> P a) \<Longrightarrow> P b)"
  have "wf (measure unat)" ..
  moreover have "{(a, b :: ('a::len) word). a < b} \<subseteq> measure unat"
    by (auto simp add: word_less_nat_alt)
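  (* The strict word order is contained in the well-founded measure relation
     on unat, so well-foundedness transfers by wf_subset. *)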
a < b}" by (rule wf_subset) then show "P a" using * by induction blast qed lemma wi_less: "(word_of_int n < (word_of_int m :: 'a::len word)) = (n mod 2 ^ LENGTH('a) < m mod 2 ^ LENGTH('a))" by transfer (simp add: take_bit_eq_mod) lemma wi_le: "(word_of_int n \<le> (word_of_int m :: 'a::len word)) = (n mod 2 ^ LENGTH('a) \<le> m mod 2 ^ LENGTH('a))" by transfer (simp add: take_bit_eq_mod) subsection \<open>Bit-wise operations\<close> context includes bit_operations_syntax begin lemma uint_take_bit_eq: \<open>uint (take_bit n w) = take_bit n (uint w)\<close> by transfer (simp add: ac_simps) lemma take_bit_word_eq_self: \<open>take_bit n w = w\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> using that by transfer simp lemma take_bit_length_eq [simp]: \<open>take_bit LENGTH('a) w = w\<close> for w :: \<open>'a::len word\<close> by (rule take_bit_word_eq_self) simp lemma bit_word_of_int_iff: \<open>bit (word_of_int k :: 'a::len word) n \<longleftrightarrow> n < LENGTH('a) \<and> bit k n\<close> by transfer rule lemma bit_uint_iff: \<open>bit (uint w) n \<longleftrightarrow> n < LENGTH('a) \<and> bit w n\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: bit_take_bit_iff) lemma bit_sint_iff: \<open>bit (sint w) n \<longleftrightarrow> n \<ge> LENGTH('a) \<and> bit w (LENGTH('a) - 1) \<or> bit w n\<close> for w :: \<open>'a::len word\<close> by transfer (auto simp add: bit_signed_take_bit_iff min_def le_less not_less) lemma bit_word_ucast_iff: \<open>bit (ucast w :: 'b::len word) n \<longleftrightarrow> n < LENGTH('a) \<and> n < LENGTH('b) \<and> bit w n\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: bit_take_bit_iff ac_simps) lemma bit_word_scast_iff: \<open>bit (scast w :: 'b::len word) n \<longleftrightarrow> n < LENGTH('b) \<and> (bit w n \<or> LENGTH('a) \<le> n \<and> bit w (LENGTH('a) - Suc 0))\<close> for w :: \<open>'a::len word\<close> by transfer (auto simp add: bit_signed_take_bit_iff le_less min_def) lemma bit_word_iff_drop_bit_and [code]: \<open>bit a n \<longleftrightarrow> drop_bit n a AND 1 = 1\<close> for a :: \<open>'a::len word\<close> by (simp add: bit_iff_odd_drop_bit odd_iff_mod_2_eq_one and_one_eq) lemma word_not_def: "NOT (a::'a::len word) = word_of_int (NOT (uint a))" and word_and_def: "(a::'a word) AND b = word_of_int (uint a AND uint b)" and word_or_def: "(a::'a word) OR b = word_of_int (uint a OR uint b)" and word_xor_def: "(a::'a word) XOR b = word_of_int (uint a XOR uint b)" by (transfer, simp add: take_bit_not_take_bit)+ definition even_word :: \<open>'a::len word \<Rightarrow> bool\<close> where [code_abbrev]: \<open>even_word = even\<close> lemma even_word_iff [code]: \<open>even_word a \<longleftrightarrow> a AND 1 = 0\<close> by (simp add: and_one_eq even_iff_mod_2_eq_zero even_word_def) lemma map_bit_range_eq_if_take_bit_eq: \<open>map (bit k) [0..<n] = map (bit l) [0..<n]\<close> if \<open>take_bit n k = take_bit n l\<close> for k l :: int using that proof (induction n arbitrary: k l) case 0 then show ?case by simp next case (Suc n) from Suc.prems have \<open>take_bit n (k div 2) = take_bit n (l div 2)\<close> by (simp add: take_bit_Suc) then have \<open>map (bit (k div 2)) [0..<n] = map (bit (l div 2)) [0..<n]\<close> by (rule Suc.IH) moreover have \<open>bit (r div 2) = bit r \<circ> Suc\<close> for r :: int by (simp add: fun_eq_iff bit_Suc) moreover from Suc.prems have \<open>even k \<longleftrightarrow> even l\<close> by (auto simp add: take_bit_Suc elim!: evenE oddE) arith+ 
ultimately show ?case by (simp only: map_Suc_upt upt_conv_Cons flip: list.map_comp) simp qed lemma take_bit_word_Bit0_eq [simp]: \<open>take_bit (numeral n) (numeral (num.Bit0 m) :: 'a::len word) = 2 * take_bit (pred_numeral n) (numeral m)\<close> (is ?P) and take_bit_word_Bit1_eq [simp]: \<open>take_bit (numeral n) (numeral (num.Bit1 m) :: 'a::len word) = 1 + 2 * take_bit (pred_numeral n) (numeral m)\<close> (is ?Q) and take_bit_word_minus_Bit0_eq [simp]: \<open>take_bit (numeral n) (- numeral (num.Bit0 m) :: 'a::len word) = 2 * take_bit (pred_numeral n) (- numeral m)\<close> (is ?R) and take_bit_word_minus_Bit1_eq [simp]: \<open>take_bit (numeral n) (- numeral (num.Bit1 m) :: 'a::len word) = 1 + 2 * take_bit (pred_numeral n) (- numeral (Num.inc m))\<close> (is ?S) proof - define w :: \<open>'a::len word\<close> where \<open>w = numeral m\<close> moreover define q :: nat where \<open>q = pred_numeral n\<close> ultimately have num: \<open>numeral m = w\<close> \<open>numeral (num.Bit0 m) = 2 * w\<close> \<open>numeral (num.Bit1 m) = 1 + 2 * w\<close> \<open>numeral (Num.inc m) = 1 + w\<close> \<open>pred_numeral n = q\<close> \<open>numeral n = Suc q\<close> by (simp_all only: w_def q_def numeral_Bit0 [of m] numeral_Bit1 [of m] ac_simps numeral_inc numeral_eq_Suc flip: mult_2) have even: \<open>take_bit (Suc q) (2 * w) = 2 * take_bit q w\<close> for w :: \<open>'a::len word\<close> by (rule bit_word_eqI) (auto simp add: bit_take_bit_iff bit_double_iff) have odd: \<open>take_bit (Suc q) (1 + 2 * w) = 1 + 2 * take_bit q w\<close> for w :: \<open>'a::len word\<close> by (rule bit_eqI) (auto simp add: bit_take_bit_iff bit_double_iff even_bit_succ_iff) show ?P using even [of w] by (simp add: num) show ?Q using odd [of w] by (simp add: num) show ?R using even [of \<open>- w\<close>] by (simp add: num) show ?S using odd [of \<open>- (1 + w)\<close>] by (simp add: num) qed subsection \<open>More shift operations\<close> lift_definition signed_drop_bit :: \<open>nat \<Rightarrow> 'a word \<Rightarrow> 'a::len word\<close> is \<open>\<lambda>n. 
drop_bit n \<circ> signed_take_bit (LENGTH('a) - Suc 0)\<close> using signed_take_bit_decr_length_iff by (simp add: take_bit_drop_bit) force lemma bit_signed_drop_bit_iff [bit_simps]: \<open>bit (signed_drop_bit m w) n \<longleftrightarrow> bit w (if LENGTH('a) - m \<le> n \<and> n < LENGTH('a) then LENGTH('a) - 1 else m + n)\<close> for w :: \<open>'a::len word\<close> apply transfer apply (auto simp add: bit_drop_bit_eq bit_signed_take_bit_iff not_le min_def) apply (metis add.commute le_antisym less_diff_conv less_eq_decr_length_iff) apply (metis le_antisym less_eq_decr_length_iff) done lemma [code]: \<open>Word.the_int (signed_drop_bit n w) = take_bit LENGTH('a) (drop_bit n (Word.the_signed_int w))\<close> for w :: \<open>'a::len word\<close> by transfer simp lemma signed_drop_bit_of_0 [simp]: \<open>signed_drop_bit n 0 = 0\<close> by transfer simp lemma signed_drop_bit_of_minus_1 [simp]: \<open>signed_drop_bit n (- 1) = - 1\<close> by transfer simp lemma signed_drop_bit_signed_drop_bit [simp]: \<open>signed_drop_bit m (signed_drop_bit n w) = signed_drop_bit (m + n) w\<close> for w :: \<open>'a::len word\<close> proof (cases \<open>LENGTH('a)\<close>) case 0 then show ?thesis using len_not_eq_0 by blast next case (Suc n) then show ?thesis by (force simp add: bit_signed_drop_bit_iff not_le less_diff_conv ac_simps intro!: bit_word_eqI) qed lemma signed_drop_bit_0 [simp]: \<open>signed_drop_bit 0 w = w\<close> by transfer (simp add: take_bit_signed_take_bit) lemma sint_signed_drop_bit_eq: \<open>sint (signed_drop_bit n w) = drop_bit n (sint w)\<close> proof (cases \<open>LENGTH('a) = 0 \<or> n=0\<close>) case False then show ?thesis apply simp apply (rule bit_eqI) by (auto simp add: bit_sint_iff bit_drop_bit_eq bit_signed_drop_bit_iff dest: bit_imp_le_length) qed auto subsection \<open>Rotation\<close> lift_definition word_rotr :: \<open>nat \<Rightarrow> 'a::len word \<Rightarrow> 'a::len word\<close> is \<open>\<lambda>n k. concat_bit (LENGTH('a) - n mod LENGTH('a)) (drop_bit (n mod LENGTH('a)) (take_bit LENGTH('a) k)) (take_bit (n mod LENGTH('a)) k)\<close> subgoal for n k l by (simp add: concat_bit_def nat_le_iff less_imp_le take_bit_tightened [of \<open>LENGTH('a)\<close> k l \<open>n mod LENGTH('a::len)\<close>]) done lift_definition word_rotl :: \<open>nat \<Rightarrow> 'a::len word \<Rightarrow> 'a::len word\<close> is \<open>\<lambda>n k. concat_bit (n mod LENGTH('a)) (drop_bit (LENGTH('a) - n mod LENGTH('a)) (take_bit LENGTH('a) k)) (take_bit (LENGTH('a) - n mod LENGTH('a)) k)\<close> subgoal for n k l by (simp add: concat_bit_def nat_le_iff less_imp_le take_bit_tightened [of \<open>LENGTH('a)\<close> k l \<open>LENGTH('a) - n mod LENGTH('a::len)\<close>]) done lift_definition word_roti :: \<open>int \<Rightarrow> 'a::len word \<Rightarrow> 'a::len word\<close> is \<open>\<lambda>r k. 
concat_bit (LENGTH('a) - nat (r mod int LENGTH('a))) (drop_bit (nat (r mod int LENGTH('a))) (take_bit LENGTH('a) k)) (take_bit (nat (r mod int LENGTH('a))) k)\<close> subgoal for r k l by (simp add: concat_bit_def nat_le_iff less_imp_le take_bit_tightened [of \<open>LENGTH('a)\<close> k l \<open>nat (r mod int LENGTH('a::len))\<close>]) done lemma word_rotl_eq_word_rotr [code]: \<open>word_rotl n = (word_rotr (LENGTH('a) - n mod LENGTH('a)) :: 'a::len word \<Rightarrow> 'a word)\<close> by (rule ext, cases \<open>n mod LENGTH('a) = 0\<close>; transfer) simp_all lemma word_roti_eq_word_rotr_word_rotl [code]: \<open>word_roti i w = (if i \<ge> 0 then word_rotr (nat i) w else word_rotl (nat (- i)) w)\<close> proof (cases \<open>i \<ge> 0\<close>) case True moreover define n where \<open>n = nat i\<close> ultimately have \<open>i = int n\<close> by simp moreover have \<open>word_roti (int n) = (word_rotr n :: _ \<Rightarrow> 'a word)\<close> by (rule ext, transfer) (simp add: nat_mod_distrib) ultimately show ?thesis by simp next case False moreover define n where \<open>n = nat (- i)\<close> ultimately have \<open>i = - int n\<close> \<open>n > 0\<close> by simp_all moreover have \<open>word_roti (- int n) = (word_rotl n :: _ \<Rightarrow> 'a word)\<close> by (rule ext, transfer) (simp add: zmod_zminus1_eq_if flip: of_nat_mod of_nat_diff) ultimately show ?thesis by simp qed lemma bit_word_rotr_iff [bit_simps]: \<open>bit (word_rotr m w) n \<longleftrightarrow> n < LENGTH('a) \<and> bit w ((n + m) mod LENGTH('a))\<close> for w :: \<open>'a::len word\<close> proof transfer fix k :: int and m n :: nat define q where \<open>q = m mod LENGTH('a)\<close> have \<open>q < LENGTH('a)\<close> by (simp add: q_def) then have \<open>q \<le> LENGTH('a)\<close> by simp have \<open>m mod LENGTH('a) = q\<close> by (simp add: q_def) moreover have \<open>(n + m) mod LENGTH('a) = (n + q) mod LENGTH('a)\<close> by (subst mod_add_right_eq [symmetric]) (simp add: \<open>m mod LENGTH('a) = q\<close>) moreover have \<open>n < LENGTH('a) \<and> bit (concat_bit (LENGTH('a) - q) (drop_bit q (take_bit LENGTH('a) k)) (take_bit q k)) n \<longleftrightarrow> n < LENGTH('a) \<and> bit k ((n + q) mod LENGTH('a))\<close> using \<open>q < LENGTH('a)\<close> by (cases \<open>q + n \<ge> LENGTH('a)\<close>) (auto simp add: bit_concat_bit_iff bit_drop_bit_eq bit_take_bit_iff le_mod_geq ac_simps) ultimately show \<open>n < LENGTH('a) \<and> bit (concat_bit (LENGTH('a) - m mod LENGTH('a)) (drop_bit (m mod LENGTH('a)) (take_bit LENGTH('a) k)) (take_bit (m mod LENGTH('a)) k)) n \<longleftrightarrow> n < LENGTH('a) \<and> (n + m) mod LENGTH('a) < LENGTH('a) \<and> bit k ((n + m) mod LENGTH('a))\<close> by simp qed lemma bit_word_rotl_iff [bit_simps]: \<open>bit (word_rotl m w) n \<longleftrightarrow> n < LENGTH('a) \<and> bit w ((n + (LENGTH('a) - m mod LENGTH('a))) mod LENGTH('a))\<close> for w :: \<open>'a::len word\<close> by (simp add: word_rotl_eq_word_rotr bit_word_rotr_iff) lemma bit_word_roti_iff [bit_simps]: \<open>bit (word_roti k w) n \<longleftrightarrow> n < LENGTH('a) \<and> bit w (nat ((int n + k) mod int LENGTH('a)))\<close> for w :: \<open>'a::len word\<close> proof transfer fix k l :: int and n :: nat define m where \<open>m = nat (k mod int LENGTH('a))\<close> have \<open>m < LENGTH('a)\<close> by (simp add: nat_less_iff m_def) then have \<open>m \<le> LENGTH('a)\<close> by simp have \<open>k mod int LENGTH('a) = int m\<close> by (simp add: nat_less_iff m_def) moreover have \<open>(int n + k) mod int LENGTH('a) = int 
((n + m) mod LENGTH('a))\<close> by (subst mod_add_right_eq [symmetric]) (simp add: of_nat_mod \<open>k mod int LENGTH('a) = int m\<close>) moreover have \<open>n < LENGTH('a) \<and> bit (concat_bit (LENGTH('a) - m) (drop_bit m (take_bit LENGTH('a) l)) (take_bit m l)) n \<longleftrightarrow> n < LENGTH('a) \<and> bit l ((n + m) mod LENGTH('a))\<close> using \<open>m < LENGTH('a)\<close> by (cases \<open>m + n \<ge> LENGTH('a)\<close>) (auto simp add: bit_concat_bit_iff bit_drop_bit_eq bit_take_bit_iff nat_less_iff not_le not_less ac_simps le_diff_conv le_mod_geq) ultimately show \<open>n < LENGTH('a) \<and> bit (concat_bit (LENGTH('a) - nat (k mod int LENGTH('a))) (drop_bit (nat (k mod int LENGTH('a))) (take_bit LENGTH('a) l)) (take_bit (nat (k mod int LENGTH('a))) l)) n \<longleftrightarrow> n < LENGTH('a) \<and> nat ((int n + k) mod int LENGTH('a)) < LENGTH('a) \<and> bit l (nat ((int n + k) mod int LENGTH('a)))\<close> by simp qed lemma uint_word_rotr_eq: \<open>uint (word_rotr n w) = concat_bit (LENGTH('a) - n mod LENGTH('a)) (drop_bit (n mod LENGTH('a)) (uint w)) (uint (take_bit (n mod LENGTH('a)) w))\<close> for w :: \<open>'a::len word\<close> by transfer (simp add: take_bit_concat_bit_eq) lemma [code]: \<open>Word.the_int (word_rotr n w) = concat_bit (LENGTH('a) - n mod LENGTH('a)) (drop_bit (n mod LENGTH('a)) (Word.the_int w)) (Word.the_int (take_bit (n mod LENGTH('a)) w))\<close> for w :: \<open>'a::len word\<close> using uint_word_rotr_eq [of n w] by simp subsection \<open>Split and cat operations\<close> lift_definition word_cat :: \<open>'a::len word \<Rightarrow> 'b::len word \<Rightarrow> 'c::len word\<close> is \<open>\<lambda>k l. concat_bit LENGTH('b) l (take_bit LENGTH('a) k)\<close> by (simp add: bit_eq_iff bit_concat_bit_iff bit_take_bit_iff) lemma word_cat_eq: \<open>(word_cat v w :: 'c::len word) = push_bit LENGTH('b) (ucast v) + ucast w\<close> for v :: \<open>'a::len word\<close> and w :: \<open>'b::len word\<close> by transfer (simp add: concat_bit_eq ac_simps) lemma word_cat_eq' [code]: \<open>word_cat a b = word_of_int (concat_bit LENGTH('b) (uint b) (uint a))\<close> for a :: \<open>'a::len word\<close> and b :: \<open>'b::len word\<close> by transfer (simp add: concat_bit_take_bit_eq) lemma bit_word_cat_iff [bit_simps]: \<open>bit (word_cat v w :: 'c::len word) n \<longleftrightarrow> n < LENGTH('c) \<and> (if n < LENGTH('b) then bit w n else bit v (n - LENGTH('b)))\<close> for v :: \<open>'a::len word\<close> and w :: \<open>'b::len word\<close> by transfer (simp add: bit_concat_bit_iff bit_take_bit_iff) definition word_split :: \<open>'a::len word \<Rightarrow> 'b::len word \<times> 'c::len word\<close> where \<open>word_split w = (ucast (drop_bit LENGTH('c) w) :: 'b::len word, ucast w :: 'c::len word)\<close> definition word_rcat :: \<open>'a::len word list \<Rightarrow> 'b::len word\<close> where \<open>word_rcat = word_of_int \<circ> horner_sum uint (2 ^ LENGTH('a)) \<circ> rev\<close> subsection \<open>More on conversions\<close> lemma int_word_sint: \<open>sint (word_of_int x :: 'a::len word) = (x + 2 ^ (LENGTH('a) - 1)) mod 2 ^ LENGTH('a) - 2 ^ (LENGTH('a) - 1)\<close> by transfer (simp flip: take_bit_eq_mod add: signed_take_bit_eq_take_bit_shift) lemma sint_sbintrunc': "sint (word_of_int bin :: 'a word) = signed_take_bit (LENGTH('a::len) - 1) bin" by (simp add: signed_of_int) lemma uint_sint: "uint w = take_bit LENGTH('a) (sint w)" for w :: "'a::len word" by transfer (simp add: take_bit_signed_take_bit) lemma bintr_uint: "LENGTH('a) \<le> n 
\<Longrightarrow> take_bit n (uint w) = uint w" for w :: "'a::len word" by transfer (simp add: min_def) lemma wi_bintr: "LENGTH('a::len) \<le> n \<Longrightarrow> word_of_int (take_bit n w) = (word_of_int w :: 'a word)" by transfer simp lemma word_numeral_alt: "numeral b = word_of_int (numeral b)" by (induct b, simp_all only: numeral.simps word_of_int_homs) declare word_numeral_alt [symmetric, code_abbrev] lemma word_neg_numeral_alt: "- numeral b = word_of_int (- numeral b)" by (simp only: word_numeral_alt wi_hom_neg) declare word_neg_numeral_alt [symmetric, code_abbrev] lemma uint_bintrunc [simp]: "uint (numeral bin :: 'a word) = take_bit (LENGTH('a::len)) (numeral bin)" by transfer rule lemma uint_bintrunc_neg [simp]: "uint (- numeral bin :: 'a word) = take_bit (LENGTH('a::len)) (- numeral bin)" by transfer rule lemma sint_sbintrunc [simp]: "sint (numeral bin :: 'a word) = signed_take_bit (LENGTH('a::len) - 1) (numeral bin)" by transfer simp lemma sint_sbintrunc_neg [simp]: "sint (- numeral bin :: 'a word) = signed_take_bit (LENGTH('a::len) - 1) (- numeral bin)" by transfer simp lemma unat_bintrunc [simp]: "unat (numeral bin :: 'a::len word) = nat (take_bit (LENGTH('a)) (numeral bin))" by transfer simp lemma unat_bintrunc_neg [simp]: "unat (- numeral bin :: 'a::len word) = nat (take_bit (LENGTH('a)) (- numeral bin))" by transfer simp lemma size_0_eq: "size w = 0 \<Longrightarrow> v = w" for v w :: "'a::len word" by transfer simp lemma uint_ge_0 [iff]: "0 \<le> uint x" by (fact unsigned_greater_eq) lemma uint_lt2p [iff]: "uint x < 2 ^ LENGTH('a)" for x :: "'a::len word" by (fact unsigned_less) lemma sint_ge: "- (2 ^ (LENGTH('a) - 1)) \<le> sint x" for x :: "'a::len word" using sint_greater_eq [of x] by simp lemma sint_lt: "sint x < 2 ^ (LENGTH('a) - 1)" for x :: "'a::len word" using sint_less [of x] by simp lemma uint_m2p_neg: "uint x - 2 ^ LENGTH('a) < 0" for x :: "'a::len word" by (simp only: diff_less_0_iff_less uint_lt2p) lemma uint_m2p_not_non_neg: "\<not> 0 \<le> uint x - 2 ^ LENGTH('a)" for x :: "'a::len word" by (simp only: not_le uint_m2p_neg) lemma lt2p_lem: "LENGTH('a) \<le> n \<Longrightarrow> uint w < 2 ^ n" for w :: "'a::len word" using uint_bounded [of w] by (rule less_le_trans) simp lemma uint_le_0_iff [simp]: "uint x \<le> 0 \<longleftrightarrow> uint x = 0" by (fact uint_ge_0 [THEN leD, THEN antisym_conv1]) lemma uint_nat: "uint w = int (unat w)" by transfer simp lemma uint_numeral: "uint (numeral b :: 'a::len word) = numeral b mod 2 ^ LENGTH('a)" by (simp flip: take_bit_eq_mod add: of_nat_take_bit) lemma uint_neg_numeral: "uint (- numeral b :: 'a::len word) = - numeral b mod 2 ^ LENGTH('a)" by (simp flip: take_bit_eq_mod add: of_nat_take_bit) lemma unat_numeral: "unat (numeral b :: 'a::len word) = numeral b mod 2 ^ LENGTH('a)" by transfer (simp add: take_bit_eq_mod nat_mod_distrib nat_power_eq) lemma sint_numeral: "sint (numeral b :: 'a::len word) = (numeral b + 2 ^ (LENGTH('a) - 1)) mod 2 ^ LENGTH('a) - 2 ^ (LENGTH('a) - 1)" by (metis int_word_sint word_numeral_alt) lemma word_of_int_0 [simp, code_post]: "word_of_int 0 = 0" by (fact of_int_0) lemma word_of_int_1 [simp, code_post]: "word_of_int 1 = 1" by (fact of_int_1) lemma word_of_int_neg_1 [simp]: "word_of_int (- 1) = - 1" by (simp add: wi_hom_syms) lemma word_of_int_numeral [simp] : "(word_of_int (numeral bin) :: 'a::len word) = numeral bin" by (fact of_int_numeral) lemma word_of_int_neg_numeral [simp]: "(word_of_int (- numeral bin) :: 'a::len word) = - numeral bin" by (fact of_int_neg_numeral) lemma 
word_int_case_wi:
  "word_int_case f (word_of_int i :: 'b word) = f (i mod 2 ^ LENGTH('b::len))"
  by transfer (simp add: take_bit_eq_mod)

lemma word_int_split:
  "P (word_int_case f x) =
    (\<forall>i. x = (word_of_int i :: 'b::len word) \<and> 0 \<le> i \<and> i < 2 ^ LENGTH('b) \<longrightarrow> P (f i))"
  by transfer (auto simp add: take_bit_eq_mod)

lemma word_int_split_asm:
  "P (word_int_case f x) =
    (\<nexists>n. x = (word_of_int n :: 'b::len word) \<and> 0 \<le> n \<and> n < 2 ^ LENGTH('b::len) \<and> \<not> P (f n))"
  by transfer (auto simp add: take_bit_eq_mod)

lemma uint_range_size: "0 \<le> uint w \<and> uint w < 2 ^ size w"
  by transfer simp

lemma sint_range_size: "- (2 ^ (size w - Suc 0)) \<le> sint w \<and> sint w < 2 ^ (size w - Suc 0)"
  by (simp add: word_size sint_greater_eq sint_less)

lemma sint_above_size: "2 ^ (size w - 1) \<le> x \<Longrightarrow> sint w < x"
  for w :: "'a::len word"
  unfolding word_size by (rule less_le_trans [OF sint_lt])

lemma sint_below_size: "x \<le> - (2 ^ (size w - 1)) \<Longrightarrow> x \<le> sint w"
  for w :: "'a::len word"
  unfolding word_size by (rule order_trans [OF _ sint_ge])

lemma word_unat_eq_iff:
  \<open>v = w \<longleftrightarrow> unat v = unat w\<close>
  for v w :: \<open>'a::len word\<close>
  by (fact word_eq_iff_unsigned)


subsection \<open>Testing bits\<close>

lemma bin_nth_uint_imp: "bit (uint w) n \<Longrightarrow> n < LENGTH('a)"
  for w :: "'a::len word"
  by transfer (simp add: bit_take_bit_iff)

lemma bin_nth_sint:
  "LENGTH('a) \<le> n \<Longrightarrow> bit (sint w) n = bit (sint w) (LENGTH('a) - 1)"
  for w :: "'a::len word"
  by (transfer fixing: n) (simp add: bit_signed_take_bit_iff le_diff_conv min_def)

lemma num_of_bintr':
  "take_bit (LENGTH('a::len)) (numeral a :: int) = (numeral b) \<Longrightarrow>
    numeral a = (numeral b :: 'a word)"
proof (transfer fixing: a b)
  assume \<open>take_bit LENGTH('a) (numeral a :: int) = numeral b\<close>
  then have \<open>take_bit LENGTH('a) (take_bit LENGTH('a) (numeral a :: int)) = take_bit LENGTH('a) (numeral b)\<close>
    by simp
  then show \<open>take_bit LENGTH('a) (numeral a :: int) = take_bit LENGTH('a) (numeral b)\<close>
    by simp
qed

lemma num_of_sbintr':
  "signed_take_bit (LENGTH('a::len) - 1) (numeral a :: int) = (numeral b) \<Longrightarrow>
    numeral a = (numeral b :: 'a word)"
proof (transfer fixing: a b)
  assume \<open>signed_take_bit (LENGTH('a) - 1) (numeral a :: int) = numeral b\<close>
  then have \<open>take_bit LENGTH('a) (signed_take_bit (LENGTH('a) - 1) (numeral a :: int)) = take_bit LENGTH('a) (numeral b)\<close>
    by simp
  then show \<open>take_bit LENGTH('a) (numeral a :: int) = take_bit LENGTH('a) (numeral b)\<close>
    by (simp add: take_bit_signed_take_bit)
qed

lemma num_abs_bintr:
  "(numeral x :: 'a word) = word_of_int (take_bit (LENGTH('a::len)) (numeral x))"
  by transfer simp

lemma num_abs_sbintr:
  "(numeral x :: 'a word) = word_of_int (signed_take_bit (LENGTH('a::len) - 1) (numeral x))"
  by transfer (simp add: take_bit_signed_take_bit)

text \<open>
  \<open>cast\<close> -- note that there is no argument for the new length, since it is
  determined by the type of the result; thus in \<open>cast w = w\<close> the type dictates
  the length \<open>w\<close> is cast to.
\<close> lemma bit_ucast_iff: \<open>bit (ucast a :: 'a::len word) n \<longleftrightarrow> n < LENGTH('a::len) \<and> bit a n\<close> by transfer (simp add: bit_take_bit_iff) lemma ucast_id [simp]: "ucast w = w" by transfer simp lemma scast_id [simp]: "scast w = w" by transfer (simp add: take_bit_signed_take_bit) lemma ucast_mask_eq: \<open>ucast (mask n :: 'b word) = mask (min LENGTH('b::len) n)\<close> by (simp add: bit_eq_iff) (auto simp add: bit_mask_iff bit_ucast_iff) \<comment> \<open>literal u(s)cast\<close> lemma ucast_bintr [simp]: "ucast (numeral w :: 'a::len word) = word_of_int (take_bit (LENGTH('a)) (numeral w))" by transfer simp (* TODO: neg_numeral *) lemma scast_sbintr [simp]: "scast (numeral w ::'a::len word) = word_of_int (signed_take_bit (LENGTH('a) - Suc 0) (numeral w))" by transfer simp lemma source_size: "source_size (c::'a::len word \<Rightarrow> _) = LENGTH('a)" by transfer simp lemma target_size: "target_size (c::_ \<Rightarrow> 'b::len word) = LENGTH('b)" by transfer simp lemma is_down: "is_down c \<longleftrightarrow> LENGTH('b) \<le> LENGTH('a)" for c :: "'a::len word \<Rightarrow> 'b::len word" by transfer simp lemma is_up: "is_up c \<longleftrightarrow> LENGTH('a) \<le> LENGTH('b)" for c :: "'a::len word \<Rightarrow> 'b::len word" by transfer simp lemma is_up_down: \<open>is_up c \<longleftrightarrow> is_down d\<close> for c :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> and d :: \<open>'b::len word \<Rightarrow> 'a::len word\<close> by transfer simp context fixes dummy_types :: \<open>'a::len \<times> 'b::len\<close> begin private abbreviation (input) UCAST :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>UCAST == ucast\<close> private abbreviation (input) SCAST :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>SCAST == scast\<close> lemma down_cast_same: \<open>UCAST = scast\<close> if \<open>is_down UCAST\<close> by (rule ext, use that in transfer) (simp add: take_bit_signed_take_bit) lemma sint_up_scast: \<open>sint (SCAST w) = sint w\<close> if \<open>is_up SCAST\<close> using that by transfer (simp add: min_def Suc_leI le_diff_iff) lemma uint_up_ucast: \<open>uint (UCAST w) = uint w\<close> if \<open>is_up UCAST\<close> using that by transfer (simp add: min_def) lemma ucast_up_ucast: \<open>ucast (UCAST w) = ucast w\<close> if \<open>is_up UCAST\<close> using that by transfer (simp add: ac_simps) lemma ucast_up_ucast_id: \<open>ucast (UCAST w) = w\<close> if \<open>is_up UCAST\<close> using that by (simp add: ucast_up_ucast) lemma scast_up_scast: \<open>scast (SCAST w) = scast w\<close> if \<open>is_up SCAST\<close> using that by transfer (simp add: ac_simps) lemma scast_up_scast_id: \<open>scast (SCAST w) = w\<close> if \<open>is_up SCAST\<close> using that by (simp add: scast_up_scast) lemma isduu: \<open>is_up UCAST\<close> if \<open>is_down d\<close> for d :: \<open>'b word \<Rightarrow> 'a word\<close> using that is_up_down [of UCAST d] by simp lemma isdus: \<open>is_up SCAST\<close> if \<open>is_down d\<close> for d :: \<open>'b word \<Rightarrow> 'a word\<close> using that is_up_down [of SCAST d] by simp lemmas ucast_down_ucast_id = isduu [THEN ucast_up_ucast_id] lemmas scast_down_scast_id = isdus [THEN scast_up_scast_id] lemma up_ucast_surj: \<open>surj (ucast :: 'b word \<Rightarrow> 'a word)\<close> if \<open>is_up UCAST\<close> by (rule surjI) (use that in \<open>rule ucast_up_ucast_id\<close>) lemma up_scast_surj: \<open>surj (scast :: 'b word \<Rightarrow> 'a word)\<close> if \<open>is_up 
SCAST\<close> by (rule surjI) (use that in \<open>rule scast_up_scast_id\<close>) lemma down_ucast_inj: \<open>inj_on UCAST A\<close> if \<open>is_down (ucast :: 'b word \<Rightarrow> 'a word)\<close> by (rule inj_on_inverseI) (use that in \<open>rule ucast_down_ucast_id\<close>) lemma down_scast_inj: \<open>inj_on SCAST A\<close> if \<open>is_down (scast :: 'b word \<Rightarrow> 'a word)\<close> by (rule inj_on_inverseI) (use that in \<open>rule scast_down_scast_id\<close>) lemma ucast_down_wi: \<open>UCAST (word_of_int x) = word_of_int x\<close> if \<open>is_down UCAST\<close> using that by transfer simp lemma ucast_down_no: \<open>UCAST (numeral bin) = numeral bin\<close> if \<open>is_down UCAST\<close> using that by transfer simp end lemmas word_log_defs = word_and_def word_or_def word_xor_def word_not_def lemma bit_last_iff: \<open>bit w (LENGTH('a) - Suc 0) \<longleftrightarrow> sint w < 0\<close> (is \<open>?P \<longleftrightarrow> ?Q\<close>) for w :: \<open>'a::len word\<close> proof - have \<open>?P \<longleftrightarrow> bit (uint w) (LENGTH('a) - Suc 0)\<close> by (simp add: bit_uint_iff) also have \<open>\<dots> \<longleftrightarrow> ?Q\<close> by (simp add: sint_uint) finally show ?thesis . qed lemma drop_bit_eq_zero_iff_not_bit_last: \<open>drop_bit (LENGTH('a) - Suc 0) w = 0 \<longleftrightarrow> \<not> bit w (LENGTH('a) - Suc 0)\<close> for w :: "'a::len word" proof (cases \<open>LENGTH('a)\<close>) case (Suc n) then show ?thesis apply transfer apply (simp add: take_bit_drop_bit) by (simp add: bit_iff_odd_drop_bit drop_bit_take_bit odd_iff_mod_2_eq_one) qed auto lemma unat_div: \<open>unat (x div y) = unat x div unat y\<close> by (fact unat_div_distrib) lemma unat_mod: \<open>unat (x mod y) = unat x mod unat y\<close> by (fact unat_mod_distrib) subsection \<open>Word Arithmetic\<close> lemmas less_eq_word_numeral_numeral [simp] = word_le_def [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_numeral_numeral [simp] = word_less_def [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_eq_word_minus_numeral_numeral [simp] = word_le_def [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_minus_numeral_numeral [simp] = word_less_def [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_eq_word_numeral_minus_numeral [simp] = word_le_def [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_numeral_minus_numeral [simp] = word_less_def [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_eq_word_minus_numeral_minus_numeral [simp] = word_le_def [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_minus_numeral_minus_numeral [simp] = word_less_def [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask 
mask_eq_exp_minus_1] for a b lemmas less_word_numeral_minus_1 [simp] = word_less_def [of \<open>numeral a\<close> \<open>- 1\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas less_word_minus_numeral_minus_1 [simp] = word_less_def [of \<open>- numeral a\<close> \<open>- 1\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas sless_eq_word_numeral_numeral [simp] = word_sle_eq [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_word_numeral_numeral [simp] = word_sless_alt [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_eq_word_minus_numeral_numeral [simp] = word_sle_eq [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_word_minus_numeral_numeral [simp] = word_sless_alt [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_eq_word_numeral_minus_numeral [simp] = word_sle_eq [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_word_numeral_minus_numeral [simp] = word_sless_alt [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_eq_word_minus_numeral_minus_numeral [simp] = word_sle_eq [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas sless_word_minus_numeral_minus_numeral [simp] = word_sless_alt [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified sint_sbintrunc sint_sbintrunc_neg] for a b lemmas div_word_numeral_numeral [simp] = word_div_def [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_minus_numeral_numeral [simp] = word_div_def [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_numeral_minus_numeral [simp] = word_div_def [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_minus_numeral_minus_numeral [simp] = word_div_def [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_minus_1_numeral [simp] = word_div_def [of \<open>- 1\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas div_word_minus_1_minus_numeral [simp] = word_div_def [of \<open>- 1\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_numeral_numeral [simp] = word_mod_def [of \<open>numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_minus_numeral_numeral [simp] = word_mod_def [of \<open>- numeral a\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas 
mod_word_numeral_minus_numeral [simp] = word_mod_def [of \<open>numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_minus_numeral_minus_numeral [simp] = word_mod_def [of \<open>- numeral a\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_minus_1_numeral [simp] = word_mod_def [of \<open>- 1\<close> \<open>numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemmas mod_word_minus_1_minus_numeral [simp] = word_mod_def [of \<open>- 1\<close> \<open>- numeral b\<close>, simplified uint_bintrunc uint_bintrunc_neg unsigned_minus_1_eq_mask mask_eq_exp_minus_1] for a b lemma signed_drop_bit_of_1 [simp]: \<open>signed_drop_bit n (1 :: 'a::len word) = of_bool (LENGTH('a) = 1 \<or> n = 0)\<close> apply (transfer fixing: n) apply (cases \<open>LENGTH('a)\<close>) apply (auto simp add: take_bit_signed_take_bit) apply (auto simp add: take_bit_drop_bit gr0_conv_Suc simp flip: take_bit_eq_self_iff_drop_bit_eq_0) done lemma take_bit_word_beyond_length_eq: \<open>take_bit n w = w\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> using that by transfer simp lemmas word_div_no [simp] = word_div_def [of "numeral a" "numeral b"] for a b lemmas word_mod_no [simp] = word_mod_def [of "numeral a" "numeral b"] for a b lemmas word_less_no [simp] = word_less_def [of "numeral a" "numeral b"] for a b lemmas word_le_no [simp] = word_le_def [of "numeral a" "numeral b"] for a b lemmas word_sless_no [simp] = word_sless_eq [of "numeral a" "numeral b"] for a b lemmas word_sle_no [simp] = word_sle_eq [of "numeral a" "numeral b"] for a b lemma size_0_same': "size w = 0 \<Longrightarrow> w = v" for v w :: "'a::len word" by (unfold word_size) simp lemmas size_0_same = size_0_same' [unfolded word_size] lemmas unat_eq_0 = unat_0_iff lemmas unat_eq_zero = unat_0_iff lemma mask_1: "mask 1 = 1" by simp lemma mask_Suc_0: "mask (Suc 0) = 1" by simp lemma bin_last_bintrunc: "odd (take_bit l n) \<longleftrightarrow> l > 0 \<and> odd n" by simp lemma push_bit_word_beyond [simp]: \<open>push_bit n w = 0\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> using that by (transfer fixing: n) (simp add: take_bit_push_bit) lemma drop_bit_word_beyond [simp]: \<open>drop_bit n w = 0\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> using that by (transfer fixing: n) (simp add: drop_bit_take_bit) lemma signed_drop_bit_beyond: \<open>signed_drop_bit n w = (if bit w (LENGTH('a) - Suc 0) then - 1 else 0)\<close> if \<open>LENGTH('a) \<le> n\<close> for w :: \<open>'a::len word\<close> by (rule bit_word_eqI) (simp add: bit_signed_drop_bit_iff that) lemma take_bit_numeral_minus_numeral_word [simp]: \<open>take_bit (numeral m) (- numeral n :: 'a::len word) = (case take_bit_num (numeral m) n of None \<Rightarrow> 0 | Some q \<Rightarrow> take_bit (numeral m) (2 ^ numeral m - numeral q))\<close> (is \<open>?lhs = ?rhs\<close>) proof (cases \<open>LENGTH('a) \<le> numeral m\<close>) case True then have *: \<open>(take_bit (numeral m) :: 'a word \<Rightarrow> 'a word) = id\<close> by (simp add: fun_eq_iff take_bit_word_eq_self) have **: \<open>2 ^ numeral m = (0 :: 'a word)\<close> using True by (simp flip: exp_eq_zero_iff) show ?thesis by (auto simp only: * ** split: option.split dest!: 
take_bit_num_eq_None_imp [where ?'a = \<open>'a word\<close>] take_bit_num_eq_Some_imp [where ?'a = \<open>'a word\<close>]) simp_all next case False then show ?thesis by (transfer fixing: m n) simp qed lemma of_nat_inverse: \<open>word_of_nat r = a \<Longrightarrow> r < 2 ^ LENGTH('a) \<Longrightarrow> unat a = r\<close> for a :: \<open>'a::len word\<close> by (metis id_apply of_nat_eq_id take_bit_nat_eq_self_iff unsigned_of_nat) subsection \<open>Transferring goals from words to ints\<close> lemma word_ths: shows word_succ_p1: "word_succ a = a + 1" and word_pred_m1: "word_pred a = a - 1" and word_pred_succ: "word_pred (word_succ a) = a" and word_succ_pred: "word_succ (word_pred a) = a" and word_mult_succ: "word_succ a * b = b + a * b" by (transfer, simp add: algebra_simps)+ lemma uint_cong: "x = y \<Longrightarrow> uint x = uint y" by simp lemma uint_word_ariths: fixes a b :: "'a::len word" shows "uint (a + b) = (uint a + uint b) mod 2 ^ LENGTH('a::len)" and "uint (a - b) = (uint a - uint b) mod 2 ^ LENGTH('a)" and "uint (a * b) = uint a * uint b mod 2 ^ LENGTH('a)" and "uint (- a) = - uint a mod 2 ^ LENGTH('a)" and "uint (word_succ a) = (uint a + 1) mod 2 ^ LENGTH('a)" and "uint (word_pred a) = (uint a - 1) mod 2 ^ LENGTH('a)" and "uint (0 :: 'a word) = 0 mod 2 ^ LENGTH('a)" and "uint (1 :: 'a word) = 1 mod 2 ^ LENGTH('a)" by (simp_all only: word_arith_wis uint_word_of_int_eq flip: take_bit_eq_mod) lemma uint_word_arith_bintrs: fixes a b :: "'a::len word" shows "uint (a + b) = take_bit (LENGTH('a)) (uint a + uint b)" and "uint (a - b) = take_bit (LENGTH('a)) (uint a - uint b)" and "uint (a * b) = take_bit (LENGTH('a)) (uint a * uint b)" and "uint (- a) = take_bit (LENGTH('a)) (- uint a)" and "uint (word_succ a) = take_bit (LENGTH('a)) (uint a + 1)" and "uint (word_pred a) = take_bit (LENGTH('a)) (uint a - 1)" and "uint (0 :: 'a word) = take_bit (LENGTH('a)) 0" and "uint (1 :: 'a word) = take_bit (LENGTH('a)) 1" by (simp_all add: uint_word_ariths take_bit_eq_mod) lemma sint_word_ariths: fixes a b :: "'a::len word" shows "sint (a + b) = signed_take_bit (LENGTH('a) - 1) (sint a + sint b)" and "sint (a - b) = signed_take_bit (LENGTH('a) - 1) (sint a - sint b)" and "sint (a * b) = signed_take_bit (LENGTH('a) - 1) (sint a * sint b)" and "sint (- a) = signed_take_bit (LENGTH('a) - 1) (- sint a)" and "sint (word_succ a) = signed_take_bit (LENGTH('a) - 1) (sint a + 1)" and "sint (word_pred a) = signed_take_bit (LENGTH('a) - 1) (sint a - 1)" and "sint (0 :: 'a word) = signed_take_bit (LENGTH('a) - 1) 0" and "sint (1 :: 'a word) = signed_take_bit (LENGTH('a) - 1) 1" subgoal by transfer (simp add: signed_take_bit_add) subgoal by transfer (simp add: signed_take_bit_diff) subgoal by transfer (simp add: signed_take_bit_mult) subgoal by transfer (simp add: signed_take_bit_minus) apply (metis of_int_sint scast_id sint_sbintrunc' wi_hom_succ) apply (metis of_int_sint scast_id sint_sbintrunc' wi_hom_pred) apply (simp_all add: sint_uint) done lemma word_pred_0_n1: "word_pred 0 = word_of_int (- 1)" unfolding word_pred_m1 by simp lemma succ_pred_no [simp]: "word_succ (numeral w) = numeral w + 1" "word_pred (numeral w) = numeral w - 1" "word_succ (- numeral w) = - numeral w + 1" "word_pred (- numeral w) = - numeral w - 1" by (simp_all add: word_succ_p1 word_pred_m1) lemma word_sp_01 [simp]: "word_succ (- 1) = 0 \<and> word_succ 0 = 1 \<and> word_pred 0 = - 1 \<and> word_pred 1 = 0" by (simp_all add: word_succ_p1 word_pred_m1) \<comment> \<open>alternative approach to lifting arithmetic equalities\<close> 
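(* Sketch of that approach (illustrative, not part of the original text): for a
   word-level equality, obtain integer preimages via word_of_int_Ex below and
   rewrite with the wi_homs homomorphism rules, reducing the goal to an
   arithmetic identity on int taken modulo 2 ^ LENGTH('a). *)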
lemma word_of_int_Ex: "\<exists>y. x = word_of_int y" by (rule_tac x="uint x" in exI) simp subsection \<open>Order on fixed-length words\<close> lift_definition udvd :: \<open>'a::len word \<Rightarrow> 'a::len word \<Rightarrow> bool\<close> (infixl \<open>udvd\<close> 50) is \<open>\<lambda>k l. take_bit LENGTH('a) k dvd take_bit LENGTH('a) l\<close> by simp lemma udvd_iff_dvd: \<open>x udvd y \<longleftrightarrow> unat x dvd unat y\<close> by transfer (simp add: nat_dvd_iff) lemma udvd_iff_dvd_int: \<open>v udvd w \<longleftrightarrow> uint v dvd uint w\<close> by transfer rule lemma udvdI [intro]: \<open>v udvd w\<close> if \<open>unat w = unat v * unat u\<close> proof - from that have \<open>unat v dvd unat w\<close> .. then show ?thesis by (simp add: udvd_iff_dvd) qed lemma udvdE [elim]: fixes v w :: \<open>'a::len word\<close> assumes \<open>v udvd w\<close> obtains u :: \<open>'a word\<close> where \<open>unat w = unat v * unat u\<close> proof (cases \<open>v = 0\<close>) case True moreover from True \<open>v udvd w\<close> have \<open>w = 0\<close> by transfer simp ultimately show thesis using that by simp next case False then have \<open>unat v > 0\<close> by (simp add: unat_gt_0) from \<open>v udvd w\<close> have \<open>unat v dvd unat w\<close> by (simp add: udvd_iff_dvd) then obtain n where \<open>unat w = unat v * n\<close> .. moreover have \<open>n < 2 ^ LENGTH('a)\<close> proof (rule ccontr) assume \<open>\<not> n < 2 ^ LENGTH('a)\<close> then have \<open>n \<ge> 2 ^ LENGTH('a)\<close> by (simp add: not_le) then have \<open>unat v * n \<ge> 2 ^ LENGTH('a)\<close> using \<open>unat v > 0\<close> mult_le_mono [of 1 \<open>unat v\<close> \<open>2 ^ LENGTH('a)\<close> n] by simp with \<open>unat w = unat v * n\<close> have \<open>unat w \<ge> 2 ^ LENGTH('a)\<close> by simp with unsigned_less [of w, where ?'a = nat] show False by linarith qed ultimately have \<open>unat w = unat v * unat (word_of_nat n :: 'a word)\<close> by (auto simp add: take_bit_nat_eq_self_iff unsigned_of_nat intro: sym) with that show thesis . qed lemma udvd_imp_mod_eq_0: \<open>w mod v = 0\<close> if \<open>v udvd w\<close> using that by transfer simp lemma mod_eq_0_imp_udvd [intro?]: \<open>v udvd w\<close> if \<open>w mod v = 0\<close> proof - from that have \<open>unat (w mod v) = unat 0\<close> by simp then have \<open>unat w mod unat v = 0\<close> by (simp add: unat_mod_distrib) then have \<open>unat v dvd unat w\<close> .. then show ?thesis by (simp add: udvd_iff_dvd) qed lemma udvd_imp_dvd: \<open>v dvd w\<close> if \<open>v udvd w\<close> for v w :: \<open>'a::len word\<close> proof - from that obtain u :: \<open>'a word\<close> where \<open>unat w = unat v * unat u\<close> .. then have \<open>(word_of_nat (unat w) :: 'a word) = word_of_nat (unat v * unat u)\<close> by simp then have \<open>w = v * u\<close> by simp then show \<open>v dvd w\<close> .. qed lemma exp_dvd_iff_exp_udvd: \<open>2 ^ n dvd w \<longleftrightarrow> 2 ^ n udvd w\<close> for v w :: \<open>'a::len word\<close> proof assume \<open>2 ^ n udvd w\<close> then show \<open>2 ^ n dvd w\<close> by (rule udvd_imp_dvd) next assume \<open>2 ^ n dvd w\<close> then obtain u :: \<open>'a word\<close> where \<open>w = 2 ^ n * u\<close> .. then have \<open>w = push_bit n u\<close> by (simp add: push_bit_eq_mult) then show \<open>2 ^ n udvd w\<close> by transfer (simp add: take_bit_push_bit dvd_eq_mod_eq_0 flip: take_bit_eq_mod) qed lemma udvd_nat_alt: \<open>a udvd b \<longleftrightarrow> (\<exists>n. 
unat b = n * unat a)\<close> by (auto simp add: udvd_iff_dvd) lemma udvd_unfold_int: \<open>a udvd b \<longleftrightarrow> (\<exists>n\<ge>0. uint b = n * uint a)\<close> unfolding udvd_iff_dvd_int by (metis dvd_div_mult_self dvd_triv_right uint_div_distrib uint_ge_0) lemma unat_minus_one: \<open>unat (w - 1) = unat w - 1\<close> if \<open>w \<noteq> 0\<close> proof - have "0 \<le> uint w" by (fact uint_nonnegative) moreover from that have "0 \<noteq> uint w" by (simp add: uint_0_iff) ultimately have "1 \<le> uint w" by arith from uint_lt2p [of w] have "uint w - 1 < 2 ^ LENGTH('a)" by arith with \<open>1 \<le> uint w\<close> have "(uint w - 1) mod 2 ^ LENGTH('a) = uint w - 1" by (auto intro: mod_pos_pos_trivial) with \<open>1 \<le> uint w\<close> have "nat ((uint w - 1) mod 2 ^ LENGTH('a)) = nat (uint w) - 1" by (auto simp del: nat_uint_eq) then show ?thesis by (simp only: unat_eq_nat_uint word_arith_wis mod_diff_right_eq) (metis of_int_1 uint_word_of_int unsigned_1) qed lemma measure_unat: "p \<noteq> 0 \<Longrightarrow> unat (p - 1) < unat p" by (simp add: unat_minus_one) (simp add: unat_0_iff [symmetric]) lemmas uint_add_ge0 [simp] = add_nonneg_nonneg [OF uint_ge_0 uint_ge_0] lemmas uint_mult_ge0 [simp] = mult_nonneg_nonneg [OF uint_ge_0 uint_ge_0] lemma uint_sub_lt2p [simp]: "uint x - uint y < 2 ^ LENGTH('a)" for x :: "'a::len word" and y :: "'b::len word" using uint_ge_0 [of y] uint_lt2p [of x] by arith subsection \<open>Conditions for the addition (etc) of two words to overflow\<close> lemma uint_add_lem: "(uint x + uint y < 2 ^ LENGTH('a)) = (uint (x + y) = uint x + uint y)" for x y :: "'a::len word" by (metis add.right_neutral add_mono_thms_linordered_semiring(1) mod_pos_pos_trivial of_nat_0_le_iff uint_lt2p uint_nat uint_word_ariths(1)) lemma uint_mult_lem: "(uint x * uint y < 2 ^ LENGTH('a)) = (uint (x * y) = uint x * uint y)" for x y :: "'a::len word" by (metis mod_pos_pos_trivial uint_lt2p uint_mult_ge0 uint_word_ariths(3)) lemma uint_sub_lem: "uint x \<ge> uint y \<longleftrightarrow> uint (x - y) = uint x - uint y" by (metis diff_ge_0_iff_ge of_nat_0_le_iff uint_nat uint_sub_lt2p uint_word_of_int unique_euclidean_semiring_numeral_class.mod_less word_sub_wi) lemma uint_add_le: "uint (x + y) \<le> uint x + uint y" unfolding uint_word_ariths by (simp add: zmod_le_nonneg_dividend) lemma uint_sub_ge: "uint (x - y) \<ge> uint x - uint y" unfolding uint_word_ariths by (simp flip: take_bit_eq_mod add: take_bit_int_greater_eq_self_iff) lemma int_mod_ge: \<open>a \<le> a mod n\<close> if \<open>a < n\<close> \<open>0 < n\<close> for a n :: int proof (cases \<open>a < 0\<close>) case True with \<open>0 < n\<close> show ?thesis by (metis less_trans not_less pos_mod_conj) next case False with \<open>a < n\<close> show ?thesis by simp qed lemma mod_add_if_z: "\<lbrakk>x < z; y < z; 0 \<le> y; 0 \<le> x; 0 \<le> z\<rbrakk> \<Longrightarrow> (x + y) mod z = (if x + y < z then x + y else x + y - z)" for x y z :: int apply (simp add: not_less) by (metis (no_types) add_strict_mono diff_ge_0_iff_ge diff_less_eq minus_mod_self2 mod_pos_pos_trivial) lemma uint_plus_if': "uint (a + b) = (if uint a + uint b < 2 ^ LENGTH('a) then uint a + uint b else uint a + uint b - 2 ^ LENGTH('a))" for a b :: "'a::len word" using mod_add_if_z [of "uint a" _ "uint b"] by (simp add: uint_word_ariths) lemma mod_sub_if_z: "\<lbrakk>x < z; y < z; 0 \<le> y; 0 \<le> x; 0 \<le> z\<rbrakk> \<Longrightarrow> (x - y) mod z = (if y \<le> x then x - y else x - y + z)" for x y z :: int using mod_pos_pos_trivial [of "x - y + z" 
z] by (auto simp add: not_le) lemma uint_sub_if': "uint (a - b) = (if uint b \<le> uint a then uint a - uint b else uint a - uint b + 2 ^ LENGTH('a))" for a b :: "'a::len word" using mod_sub_if_z [of "uint a" _ "uint b"] by (simp add: uint_word_ariths) lemma word_of_int_inverse: "word_of_int r = a \<Longrightarrow> 0 \<le> r \<Longrightarrow> r < 2 ^ LENGTH('a) \<Longrightarrow> uint a = r" for a :: "'a::len word" by transfer (simp add: take_bit_int_eq_self) lemma unat_split: "P (unat x) \<longleftrightarrow> (\<forall>n. of_nat n = x \<and> n < 2^LENGTH('a) \<longrightarrow> P n)" for x :: "'a::len word" by (auto simp add: unsigned_of_nat take_bit_nat_eq_self) lemma unat_split_asm: "P (unat x) \<longleftrightarrow> (\<nexists>n. of_nat n = x \<and> n < 2^LENGTH('a) \<and> \<not> P n)" for x :: "'a::len word" by (auto simp add: unsigned_of_nat take_bit_nat_eq_self) lemma un_ui_le: \<open>unat a \<le> unat b \<longleftrightarrow> uint a \<le> uint b\<close> by transfer (simp add: nat_le_iff) lemma unat_plus_if': \<open>unat (a + b) = (if unat a + unat b < 2 ^ LENGTH('a) then unat a + unat b else unat a + unat b - 2 ^ LENGTH('a))\<close> for a b :: \<open>'a::len word\<close> apply (auto simp add: not_less le_iff_add) apply (metis (mono_tags, lifting) of_nat_add of_nat_unat take_bit_nat_eq_self_iff unsigned_less unsigned_of_nat unsigned_word_eqI) apply (smt (verit, ccfv_SIG) dbl_simps(3) dbl_simps(5) numerals(1) of_nat_0_le_iff of_nat_add of_nat_eq_iff of_nat_numeral of_nat_power of_nat_unat uint_plus_if' unsigned_1) done lemma unat_sub_if_size: "unat (x - y) = (if unat y \<le> unat x then unat x - unat y else unat x + 2 ^ size x - unat y)" proof - { assume xy: "\<not> uint y \<le> uint x" have "nat (uint x - uint y + 2 ^ LENGTH('a)) = nat (uint x + 2 ^ LENGTH('a) - uint y)" by simp also have "... = nat (uint x + 2 ^ LENGTH('a)) - nat (uint y)" by (simp add: nat_diff_distrib') also have "... = nat (uint x) + 2 ^ LENGTH('a) - nat (uint y)" by (metis nat_add_distrib nat_eq_numeral_power_cancel_iff order_less_imp_le unsigned_0 unsigned_greater_eq unsigned_less) finally have "nat (uint x - uint y + 2 ^ LENGTH('a)) = nat (uint x) + 2 ^ LENGTH('a) - nat (uint y)" . } then show ?thesis by (simp add: word_size) (metis nat_diff_distrib' uint_sub_if' un_ui_le unat_eq_nat_uint unsigned_greater_eq) qed lemmas unat_sub_if' = unat_sub_if_size [unfolded word_size] lemma uint_split: "P (uint x) = (\<forall>i. word_of_int i = x \<and> 0 \<le> i \<and> i < 2^LENGTH('a) \<longrightarrow> P i)" for x :: "'a::len word" by transfer (auto simp add: take_bit_eq_mod) lemma uint_split_asm: "P (uint x) = (\<nexists>i. word_of_int i = x \<and> 0 \<le> i \<and> i < 2^LENGTH('a) \<and> \<not> P i)" for x :: "'a::len word" by (auto simp add: unsigned_of_int take_bit_int_eq_self) subsection \<open>Some proof tool support\<close> \<comment> \<open>use this to stop, eg. 
\<open>2 ^ LENGTH(32)\<close> being simplified\<close> lemma power_False_cong: "False \<Longrightarrow> a ^ b = c ^ d" by auto lemmas unat_splits = unat_split unat_split_asm lemmas unat_arith_simps = word_le_nat_alt word_less_nat_alt word_unat_eq_iff unat_sub_if' unat_plus_if' unat_div unat_mod lemmas uint_splits = uint_split uint_split_asm lemmas uint_arith_simps = word_le_def word_less_alt word_uint_eq_iff uint_sub_if' uint_plus_if' \<comment> \<open>\<open>unat_arith_tac\<close>: tactic to reduce word arithmetic to \<open>nat\<close>, try to solve via \<open>arith\<close>\<close> ML \<open> val unat_arith_simpset = @{context} (* TODO: completely explicitly determined simpset *) |> fold Simplifier.add_simp @{thms unat_arith_simps} |> fold Splitter.add_split @{thms if_split_asm} |> fold Simplifier.add_cong @{thms power_False_cong} |> simpset_of fun unat_arith_tacs ctxt = let fun arith_tac' n t = Arith_Data.arith_tac ctxt n t handle Cooper.COOPER _ => Seq.empty; in [ clarify_tac ctxt 1, full_simp_tac (put_simpset unat_arith_simpset ctxt) 1, ALLGOALS (full_simp_tac (put_simpset HOL_ss ctxt |> fold Splitter.add_split @{thms unat_splits} |> fold Simplifier.add_cong @{thms power_False_cong})), rewrite_goals_tac ctxt @{thms word_size}, ALLGOALS (fn n => REPEAT (resolve_tac ctxt [allI, impI] n) THEN REPEAT (eresolve_tac ctxt [conjE] n) THEN REPEAT (dresolve_tac ctxt @{thms of_nat_inverse} n THEN assume_tac ctxt n)), TRYALL arith_tac' ] end fun unat_arith_tac ctxt = SELECT_GOAL (EVERY (unat_arith_tacs ctxt)) \<close> method_setup unat_arith = \<open>Scan.succeed (SIMPLE_METHOD' o unat_arith_tac)\<close> "solving word arithmetic via natural numbers and arith" \<comment> \<open>\<open>uint_arith_tac\<close>: reduce to arithmetic on int, try to solve by arith\<close> ML \<open> val uint_arith_simpset = @{context} (* TODO: completely explicitly determined simpset *) |> fold Simplifier.add_simp @{thms uint_arith_simps} |> fold Splitter.add_split @{thms if_split_asm} |> fold Simplifier.add_cong @{thms power_False_cong} |> simpset_of; fun uint_arith_tacs ctxt = let fun arith_tac' n t = Arith_Data.arith_tac ctxt n t handle Cooper.COOPER _ => Seq.empty; in [ clarify_tac ctxt 1, full_simp_tac (put_simpset uint_arith_simpset ctxt) 1, ALLGOALS (full_simp_tac (put_simpset HOL_ss ctxt |> fold Splitter.add_split @{thms uint_splits} |> fold Simplifier.add_cong @{thms power_False_cong})), rewrite_goals_tac ctxt @{thms word_size}, ALLGOALS (fn n => REPEAT (resolve_tac ctxt [allI, impI] n) THEN REPEAT (eresolve_tac ctxt [conjE] n) THEN REPEAT (dresolve_tac ctxt @{thms word_of_int_inverse} n THEN assume_tac ctxt n THEN assume_tac ctxt n)), TRYALL arith_tac' ] end fun uint_arith_tac ctxt = SELECT_GOAL (EVERY (uint_arith_tacs ctxt)) \<close> method_setup uint_arith = \<open>Scan.succeed (SIMPLE_METHOD' o uint_arith_tac)\<close> "solving word arithmetic via integers and arith" subsection \<open>More on overflows and monotonicity\<close> lemma no_plus_overflow_uint_size: "x \<le> x + y \<longleftrightarrow> uint x + uint y < 2 ^ size x" for x y :: "'a::len word" by (auto simp add: word_size word_le_def uint_add_lem uint_sub_lem) lemmas no_olen_add = no_plus_overflow_uint_size [unfolded word_size] lemma no_ulen_sub: "x \<ge> x - y \<longleftrightarrow> uint y \<le> uint x" for x y :: "'a::len word" by (auto simp add: word_size word_le_def uint_add_lem uint_sub_lem) lemma no_olen_add': "x \<le> y + x \<longleftrightarrow> uint y + uint x < 2 ^ LENGTH('a)" for x y :: "'a::len word" by (simp add: ac_simps no_olen_add) 
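text \<open>An illustrative check, not part of the original theory: the \<open>uint_arith\<close>
  method set up above reduces word goals to linear arithmetic over their unsigned
  integer values. The lemma below (its name is ours) sketches the intended use on
  a typical no-overflow goal; it is one direction of \<open>no_olen_add\<close>.\<close>

lemma uint_arith_demo:
  fixes x y :: \<open>'a::len word\<close>
  shows \<open>uint x + uint y < 2 ^ LENGTH('a) \<Longrightarrow> x \<le> x + y\<close>
  by uint_arith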
lemmas olen_add_eqv = trans [OF no_olen_add no_olen_add' [symmetric]] lemmas uint_plus_simple_iff = trans [OF no_olen_add uint_add_lem] lemmas uint_plus_simple = uint_plus_simple_iff [THEN iffD1] lemmas uint_minus_simple_iff = trans [OF no_ulen_sub uint_sub_lem] lemmas uint_minus_simple_alt = uint_sub_lem [folded word_le_def] lemmas word_sub_le_iff = no_ulen_sub [folded word_le_def] lemmas word_sub_le = word_sub_le_iff [THEN iffD2] lemma word_less_sub1: "x \<noteq> 0 \<Longrightarrow> 1 < x \<longleftrightarrow> 0 < x - 1" for x :: "'a::len word" by transfer (simp add: take_bit_decr_eq) lemma word_le_sub1: "x \<noteq> 0 \<Longrightarrow> 1 \<le> x \<longleftrightarrow> 0 \<le> x - 1" for x :: "'a::len word" by transfer (simp add: int_one_le_iff_zero_less less_le) lemma sub_wrap_lt: "x < x - z \<longleftrightarrow> x < z" for x z :: "'a::len word" by (simp add: word_less_def uint_sub_lem) (meson linorder_not_le uint_minus_simple_iff uint_sub_lem word_less_iff_unsigned) lemma sub_wrap: "x \<le> x - z \<longleftrightarrow> z = 0 \<or> x < z" for x z :: "'a::len word" by (simp add: le_less sub_wrap_lt ac_simps) lemma plus_minus_not_NULL_ab: "x \<le> ab - c \<Longrightarrow> c \<le> ab \<Longrightarrow> c \<noteq> 0 \<Longrightarrow> x + c \<noteq> 0" for x ab c :: "'a::len word" by uint_arith lemma plus_minus_no_overflow_ab: "x \<le> ab - c \<Longrightarrow> c \<le> ab \<Longrightarrow> x \<le> x + c" for x ab c :: "'a::len word" by uint_arith lemma le_minus': "a + c \<le> b \<Longrightarrow> a \<le> a + c \<Longrightarrow> c \<le> b - a" for a b c :: "'a::len word" by uint_arith lemma le_plus': "a \<le> b \<Longrightarrow> c \<le> b - a \<Longrightarrow> a + c \<le> b" for a b c :: "'a::len word" by uint_arith lemmas le_plus = le_plus' [rotated] lemmas le_minus = leD [THEN thin_rl, THEN le_minus'] (* FIXME *) lemma word_plus_mono_right: "y \<le> z \<Longrightarrow> x \<le> x + z \<Longrightarrow> x + y \<le> x + z" for x y z :: "'a::len word" by uint_arith lemma word_less_minus_cancel: "y - x < z - x \<Longrightarrow> x \<le> z \<Longrightarrow> y < z" for x y z :: "'a::len word" by uint_arith lemma word_less_minus_mono_left: "y < z \<Longrightarrow> x \<le> y \<Longrightarrow> y - x < z - x" for x y z :: "'a::len word" by uint_arith lemma word_less_minus_mono: "a < c \<Longrightarrow> d < b \<Longrightarrow> a - b < a \<Longrightarrow> c - d < c \<Longrightarrow> a - b < c - d" for a b c d :: "'a::len word" by uint_arith lemma word_le_minus_cancel: "y - x \<le> z - x \<Longrightarrow> x \<le> z \<Longrightarrow> y \<le> z" for x y z :: "'a::len word" by uint_arith lemma word_le_minus_mono_left: "y \<le> z \<Longrightarrow> x \<le> y \<Longrightarrow> y - x \<le> z - x" for x y z :: "'a::len word" by uint_arith lemma word_le_minus_mono: "a \<le> c \<Longrightarrow> d \<le> b \<Longrightarrow> a - b \<le> a \<Longrightarrow> c - d \<le> c \<Longrightarrow> a - b \<le> c - d" for a b c d :: "'a::len word" by uint_arith lemma plus_le_left_cancel_wrap: "x + y' < x \<Longrightarrow> x + y < x \<Longrightarrow> x + y' < x + y \<longleftrightarrow> y' < y" for x y y' :: "'a::len word" by uint_arith lemma plus_le_left_cancel_nowrap: "x \<le> x + y' \<Longrightarrow> x \<le> x + y \<Longrightarrow> x + y' < x + y \<longleftrightarrow> y' < y" for x y y' :: "'a::len word" by uint_arith lemma word_plus_mono_right2: "a \<le> a + b \<Longrightarrow> c \<le> b \<Longrightarrow> a \<le> a + c" for a b c :: "'a::len word" by uint_arith lemma word_less_add_right: "x < y - z \<Longrightarrow> z \<le> y 
\<Longrightarrow> x + z < y" for x y z :: "'a::len word" by uint_arith lemma word_less_sub_right: "x < y + z \<Longrightarrow> y \<le> x \<Longrightarrow> x - y < z" for x y z :: "'a::len word" by uint_arith lemma word_le_plus_either: "x \<le> y \<or> x \<le> z \<Longrightarrow> y \<le> y + z \<Longrightarrow> x \<le> y + z" for x y z :: "'a::len word" by uint_arith lemma word_less_nowrapI: "x < z - k \<Longrightarrow> k \<le> z \<Longrightarrow> 0 < k \<Longrightarrow> x < x + k" for x z k :: "'a::len word" by uint_arith lemma inc_i: "1 \<le> i \<Longrightarrow> i < m \<Longrightarrow> 1 \<le> i + 1 \<and> i + 1 \<le> m" for i m :: "'a::len word" by uint_arith lemma udvd_incr_lem: "up < uq \<Longrightarrow> up = ua + n * uint K \<Longrightarrow> uq = ua + n' * uint K \<Longrightarrow> up + uint K \<le> uq" by auto (metis int_distrib(1) linorder_not_less mult.left_neutral mult_right_mono uint_nonnegative zless_imp_add1_zle) lemma udvd_incr': "p < q \<Longrightarrow> uint p = ua + n * uint K \<Longrightarrow> uint q = ua + n' * uint K \<Longrightarrow> p + K \<le> q" unfolding word_less_alt word_le_def by (metis (full_types) order_trans udvd_incr_lem uint_add_le) lemma udvd_decr': assumes "p < q" "uint p = ua + n * uint K" "uint q = ua + n' * uint K" shows "uint q = ua + n' * uint K \<Longrightarrow> p \<le> q - K" proof - have "\<And>w wa. uint (w::'a word) \<le> uint wa + uint (w - wa)" by (metis (no_types) add_diff_cancel_left' diff_add_cancel uint_add_le) moreover have "uint K + uint p \<le> uint q" using assms by (metis (no_types) add_diff_cancel_left' diff_add_cancel udvd_incr_lem word_less_def) ultimately show ?thesis by (meson add_le_cancel_left order_trans word_less_eq_iff_unsigned) qed lemmas udvd_incr_lem0 = udvd_incr_lem [where ua=0, unfolded add_0_left] lemmas udvd_incr0 = udvd_incr' [where ua=0, unfolded add_0_left] lemmas udvd_decr0 = udvd_decr' [where ua=0, unfolded add_0_left] lemma udvd_minus_le': "xy < k \<Longrightarrow> z udvd xy \<Longrightarrow> z udvd k \<Longrightarrow> xy \<le> k - z" unfolding udvd_unfold_int by (meson udvd_decr0) lemma udvd_incr2_K: "p < a + s \<Longrightarrow> a \<le> a + s \<Longrightarrow> K udvd s \<Longrightarrow> K udvd p - a \<Longrightarrow> a \<le> p \<Longrightarrow> 0 < K \<Longrightarrow> p \<le> p + K \<and> p + K \<le> a + s" unfolding udvd_unfold_int apply (simp add: uint_arith_simps split: if_split_asm) apply (metis (no_types, opaque_lifting) le_add_diff_inverse le_less_trans udvd_incr_lem) using uint_lt2p [of s] by simp subsection \<open>Arithmetic type class instantiations\<close> lemmas word_le_0_iff [simp] = word_zero_le [THEN leD, THEN antisym_conv1] lemma word_of_int_nat: "0 \<le> x \<Longrightarrow> word_of_int x = of_nat (nat x)" by simp text \<open> note that \<open>iszero_def\<close> is only for class \<open>comm_semiring_1_cancel\<close>, which requires word length \<open>\<ge> 1\<close>, ie \<open>'a::len word\<close> \<close> lemma iszero_word_no [simp]: "iszero (numeral bin :: 'a::len word) = iszero (take_bit LENGTH('a) (numeral bin :: int))" by (metis iszero_def uint_0_iff uint_bintrunc) text \<open>Use \<open>iszero\<close> to simplify equalities between word numerals.\<close> lemmas word_eq_numeral_iff_iszero [simp] = eq_numeral_iff_iszero [where 'a="'a::len word"] subsection \<open>Word and nat\<close> lemma word_nchotomy: "\<forall>w :: 'a::len word. \<exists>n. w = of_nat n \<and> n < 2 ^ LENGTH('a)" by (metis of_nat_unat ucast_id unsigned_less) lemma of_nat_eq: "of_nat n = w \<longleftrightarrow> (\<exists>q. 
n = unat w + q * 2 ^ LENGTH('a))" for w :: "'a::len word" using mod_div_mult_eq [of n "2 ^ LENGTH('a)", symmetric] by (auto simp flip: take_bit_eq_mod simp add: unsigned_of_nat) lemma of_nat_eq_size: "of_nat n = w \<longleftrightarrow> (\<exists>q. n = unat w + q * 2 ^ size w)" unfolding word_size by (rule of_nat_eq) lemma of_nat_0: "of_nat m = (0::'a::len word) \<longleftrightarrow> (\<exists>q. m = q * 2 ^ LENGTH('a))" by (simp add: of_nat_eq) lemma of_nat_2p [simp]: "of_nat (2 ^ LENGTH('a)) = (0::'a::len word)" by (fact mult_1 [symmetric, THEN iffD2 [OF of_nat_0 exI]]) lemma of_nat_gt_0: "of_nat k \<noteq> 0 \<Longrightarrow> 0 < k" by (cases k) auto lemma of_nat_neq_0: "0 < k \<Longrightarrow> k < 2 ^ LENGTH('a::len) \<Longrightarrow> of_nat k \<noteq> (0 :: 'a word)" by (auto simp add : of_nat_0) lemma Abs_fnat_hom_add: "of_nat a + of_nat b = of_nat (a + b)" by simp lemma Abs_fnat_hom_mult: "of_nat a * of_nat b = (of_nat (a * b) :: 'a::len word)" by (simp add: wi_hom_mult) lemma Abs_fnat_hom_Suc: "word_succ (of_nat a) = of_nat (Suc a)" by transfer (simp add: ac_simps) lemma Abs_fnat_hom_0: "(0::'a::len word) = of_nat 0" by simp lemma Abs_fnat_hom_1: "(1::'a::len word) = of_nat (Suc 0)" by simp lemmas Abs_fnat_homs = Abs_fnat_hom_add Abs_fnat_hom_mult Abs_fnat_hom_Suc Abs_fnat_hom_0 Abs_fnat_hom_1 lemma word_arith_nat_add: "a + b = of_nat (unat a + unat b)" by simp lemma word_arith_nat_mult: "a * b = of_nat (unat a * unat b)" by simp lemma word_arith_nat_Suc: "word_succ a = of_nat (Suc (unat a))" by (subst Abs_fnat_hom_Suc [symmetric]) simp lemma word_arith_nat_div: "a div b = of_nat (unat a div unat b)" by (metis of_int_of_nat_eq of_nat_unat of_nat_div word_div_def) lemma word_arith_nat_mod: "a mod b = of_nat (unat a mod unat b)" by (metis of_int_of_nat_eq of_nat_mod of_nat_unat word_mod_def) lemmas word_arith_nat_defs = word_arith_nat_add word_arith_nat_mult word_arith_nat_Suc Abs_fnat_hom_0 Abs_fnat_hom_1 word_arith_nat_div word_arith_nat_mod lemma unat_cong: "x = y \<Longrightarrow> unat x = unat y" by (fact arg_cong) lemma unat_of_nat: \<open>unat (word_of_nat x :: 'a::len word) = x mod 2 ^ LENGTH('a)\<close> by transfer (simp flip: take_bit_eq_mod add: nat_take_bit_eq) lemmas unat_word_ariths = word_arith_nat_defs [THEN trans [OF unat_cong unat_of_nat]] lemmas word_sub_less_iff = word_sub_le_iff [unfolded linorder_not_less [symmetric] Not_eq_iff] lemma unat_add_lem: "unat x + unat y < 2 ^ LENGTH('a) \<longleftrightarrow> unat (x + y) = unat x + unat y" for x y :: "'a::len word" by (metis mod_less unat_word_ariths(1) unsigned_less) lemma unat_mult_lem: "unat x * unat y < 2 ^ LENGTH('a) \<longleftrightarrow> unat (x * y) = unat x * unat y" for x y :: "'a::len word" by (metis mod_less unat_word_ariths(2) unsigned_less) lemma le_no_overflow: "x \<le> b \<Longrightarrow> a \<le> a + b \<Longrightarrow> x \<le> a + b" for a b x :: "'a::len word" using word_le_plus_either by blast lemma uint_div: \<open>uint (x div y) = uint x div uint y\<close> by (fact uint_div_distrib) lemma uint_mod: \<open>uint (x mod y) = uint x mod uint y\<close> by (fact uint_mod_distrib) lemma no_plus_overflow_unat_size: "x \<le> x + y \<longleftrightarrow> unat x + unat y < 2 ^ size x" for x y :: "'a::len word" unfolding word_size by unat_arith lemmas no_olen_add_nat = no_plus_overflow_unat_size [unfolded word_size] lemmas unat_plus_simple = trans [OF no_olen_add_nat unat_add_lem] lemma word_div_mult: "\<lbrakk>0 < y; unat x * unat y < 2 ^ LENGTH('a)\<rbrakk> \<Longrightarrow> x * y div y = x" for x y :: 
"'a::len word" by (simp add: unat_eq_zero unat_mult_lem word_arith_nat_div) lemma div_lt': "i \<le> k div x \<Longrightarrow> unat i * unat x < 2 ^ LENGTH('a)" for i k x :: "'a::len word" by unat_arith (meson le_less_trans less_mult_imp_div_less not_le unsigned_less) lemmas div_lt'' = order_less_imp_le [THEN div_lt'] lemma div_lt_mult: "\<lbrakk>i < k div x; 0 < x\<rbrakk> \<Longrightarrow> i * x < k" for i k x :: "'a::len word" by (metis div_le_mono div_lt'' not_le unat_div word_div_mult word_less_iff_unsigned) lemma div_le_mult: "\<lbrakk>i \<le> k div x; 0 < x\<rbrakk> \<Longrightarrow> i * x \<le> k" for i k x :: "'a::len word" by (metis div_lt' less_mult_imp_div_less not_less unat_arith_simps(2) unat_div unat_mult_lem) lemma div_lt_uint': "i \<le> k div x \<Longrightarrow> uint i * uint x < 2 ^ LENGTH('a)" for i k x :: "'a::len word" unfolding uint_nat by (metis div_lt' int_ops(7) of_nat_unat uint_mult_lem unat_mult_lem) lemmas div_lt_uint'' = order_less_imp_le [THEN div_lt_uint'] lemma word_le_exists': "x \<le> y \<Longrightarrow> \<exists>z. y = x + z \<and> uint x + uint z < 2 ^ LENGTH('a)" for x y z :: "'a::len word" by (metis add.commute diff_add_cancel no_olen_add) lemmas plus_minus_not_NULL = order_less_imp_le [THEN plus_minus_not_NULL_ab] lemmas plus_minus_no_overflow = order_less_imp_le [THEN plus_minus_no_overflow_ab] lemmas mcs = word_less_minus_cancel word_less_minus_mono_left word_le_minus_cancel word_le_minus_mono_left lemmas word_l_diffs = mcs [where y = "w + x", unfolded add_diff_cancel] for w x lemmas word_diff_ls = mcs [where z = "w + x", unfolded add_diff_cancel] for w x lemmas word_plus_mcs = word_diff_ls [where y = "v + x", unfolded add_diff_cancel] for v x lemma le_unat_uoi: \<open>y \<le> unat z \<Longrightarrow> unat (word_of_nat y :: 'a word) = y\<close> for z :: \<open>'a::len word\<close> by transfer (simp add: nat_take_bit_eq take_bit_nat_eq_self_iff le_less_trans) lemmas thd = times_div_less_eq_dividend lemmas uno_simps [THEN le_unat_uoi] = mod_le_divisor div_le_dividend lemma word_mod_div_equality: "(n div b) * b + (n mod b) = n" for n b :: "'a::len word" by (fact div_mult_mod_eq) lemma word_div_mult_le: "a div b * b \<le> a" for a b :: "'a::len word" by (metis div_le_mult mult_not_zero order.not_eq_order_implies_strict order_refl word_zero_le) lemma word_mod_less_divisor: "0 < n \<Longrightarrow> m mod n < n" for m n :: "'a::len word" by (simp add: unat_arith_simps) lemma word_of_int_power_hom: "word_of_int a ^ n = (word_of_int (a ^ n) :: 'a::len word)" by (induct n) (simp_all add: wi_hom_mult [symmetric]) lemma word_arith_power_alt: "a ^ n = (word_of_int (uint a ^ n) :: 'a::len word)" by (simp add : word_of_int_power_hom [symmetric]) lemma unatSuc: "1 + n \<noteq> 0 \<Longrightarrow> unat (1 + n) = Suc (unat n)" for n :: "'a::len word" by unat_arith subsection \<open>Cardinality, finiteness of set of words\<close> lemma inj_on_word_of_int: \<open>inj_on (word_of_int :: int \<Rightarrow> 'a word) {0..<2 ^ LENGTH('a::len)}\<close> unfolding inj_on_def by (metis atLeastLessThan_iff word_of_int_inverse) lemma range_uint: \<open>range (uint :: 'a word \<Rightarrow> int) = {0..<2 ^ LENGTH('a::len)}\<close> apply transfer apply (auto simp add: image_iff) apply (metis take_bit_int_eq_self_iff) done lemma UNIV_eq: \<open>(UNIV :: 'a word set) = word_of_int ` {0..<2 ^ LENGTH('a::len)}\<close> by (auto simp add: image_iff) (metis atLeastLessThan_iff linorder_not_le uint_split) lemma card_word: "CARD('a word) = 2 ^ LENGTH('a::len)" by (simp add: UNIV_eq card_image 
inj_on_word_of_int) lemma card_word_size: "CARD('a word) = 2 ^ size x" for x :: "'a::len word" unfolding word_size by (rule card_word) end instance word :: (len) finite by standard (simp add: UNIV_eq) subsection \<open>Bitwise Operations on Words\<close> context includes bit_operations_syntax begin lemma word_wi_log_defs: "NOT (word_of_int a) = word_of_int (NOT a)" "word_of_int a AND word_of_int b = word_of_int (a AND b)" "word_of_int a OR word_of_int b = word_of_int (a OR b)" "word_of_int a XOR word_of_int b = word_of_int (a XOR b)" by (transfer, rule refl)+ lemma word_no_log_defs [simp]: "NOT (numeral a) = word_of_int (NOT (numeral a))" "NOT (- numeral a) = word_of_int (NOT (- numeral a))" "numeral a AND numeral b = word_of_int (numeral a AND numeral b)" "numeral a AND - numeral b = word_of_int (numeral a AND - numeral b)" "- numeral a AND numeral b = word_of_int (- numeral a AND numeral b)" "- numeral a AND - numeral b = word_of_int (- numeral a AND - numeral b)" "numeral a OR numeral b = word_of_int (numeral a OR numeral b)" "numeral a OR - numeral b = word_of_int (numeral a OR - numeral b)" "- numeral a OR numeral b = word_of_int (- numeral a OR numeral b)" "- numeral a OR - numeral b = word_of_int (- numeral a OR - numeral b)" "numeral a XOR numeral b = word_of_int (numeral a XOR numeral b)" "numeral a XOR - numeral b = word_of_int (numeral a XOR - numeral b)" "- numeral a XOR numeral b = word_of_int (- numeral a XOR numeral b)" "- numeral a XOR - numeral b = word_of_int (- numeral a XOR - numeral b)" by (transfer, rule refl)+ text \<open>Special cases for when one of the arguments equals 1.\<close> lemma word_bitwise_1_simps [simp]: "NOT (1::'a::len word) = -2" "1 AND numeral b = word_of_int (1 AND numeral b)" "1 AND - numeral b = word_of_int (1 AND - numeral b)" "numeral a AND 1 = word_of_int (numeral a AND 1)" "- numeral a AND 1 = word_of_int (- numeral a AND 1)" "1 OR numeral b = word_of_int (1 OR numeral b)" "1 OR - numeral b = word_of_int (1 OR - numeral b)" "numeral a OR 1 = word_of_int (numeral a OR 1)" "- numeral a OR 1 = word_of_int (- numeral a OR 1)" "1 XOR numeral b = word_of_int (1 XOR numeral b)" "1 XOR - numeral b = word_of_int (1 XOR - numeral b)" "numeral a XOR 1 = word_of_int (numeral a XOR 1)" "- numeral a XOR 1 = word_of_int (- numeral a XOR 1)" apply (simp_all add: word_uint_eq_iff unsigned_not_eq unsigned_and_eq unsigned_or_eq unsigned_xor_eq of_nat_take_bit ac_simps unsigned_of_int) apply (simp_all add: minus_numeral_eq_not_sub_one) apply (simp_all only: sub_one_eq_not_neg bit.xor_compl_right take_bit_xor bit.double_compl) apply simp_all done text \<open>Special cases for when one of the arguments equals -1.\<close> lemma word_bitwise_m1_simps [simp]: "NOT (-1::'a::len word) = 0" "(-1::'a::len word) AND x = x" "x AND (-1::'a::len word) = x" "(-1::'a::len word) OR x = -1" "x OR (-1::'a::len word) = -1" " (-1::'a::len word) XOR x = NOT x" "x XOR (-1::'a::len word) = NOT x" by (transfer, simp)+ lemma word_of_int_not_numeral_eq [simp]: \<open>(word_of_int (NOT (numeral bin)) :: 'a::len word) = - numeral bin - 1\<close> by transfer (simp add: not_eq_complement) lemma uint_and: \<open>uint (x AND y) = uint x AND uint y\<close> by transfer simp lemma uint_or: \<open>uint (x OR y) = uint x OR uint y\<close> by transfer simp lemma uint_xor: \<open>uint (x XOR y) = uint x XOR uint y\<close> by transfer simp \<comment> \<open>get from commutativity, associativity etc of \<open>int_and\<close> etc to same for \<open>word_and etc\<close>\<close> lemmas bwsimps = wi_hom_add 
word_wi_log_defs lemma word_bw_assocs: "(x AND y) AND z = x AND y AND z" "(x OR y) OR z = x OR y OR z" "(x XOR y) XOR z = x XOR y XOR z" for x :: "'a::len word" by (fact ac_simps)+ lemma word_bw_comms: "x AND y = y AND x" "x OR y = y OR x" "x XOR y = y XOR x" for x :: "'a::len word" by (fact ac_simps)+ lemma word_bw_lcs: "y AND x AND z = x AND y AND z" "y OR x OR z = x OR y OR z" "y XOR x XOR z = x XOR y XOR z" for x :: "'a::len word" by (fact ac_simps)+ lemma word_log_esimps: "x AND 0 = 0" "x AND -1 = x" "x OR 0 = x" "x OR -1 = -1" "x XOR 0 = x" "x XOR -1 = NOT x" "0 AND x = 0" "-1 AND x = x" "0 OR x = x" "-1 OR x = -1" "0 XOR x = x" "-1 XOR x = NOT x" for x :: "'a::len word" by simp_all lemma word_not_dist: "NOT (x OR y) = NOT x AND NOT y" "NOT (x AND y) = NOT x OR NOT y" for x :: "'a::len word" by simp_all lemma word_bw_same: "x AND x = x" "x OR x = x" "x XOR x = 0" for x :: "'a::len word" by simp_all lemma word_ao_absorbs [simp]: "x AND (y OR x) = x" "x OR y AND x = x" "x AND (x OR y) = x" "y AND x OR x = x" "(y OR x) AND x = x" "x OR x AND y = x" "(x OR y) AND x = x" "x AND y OR x = x" for x :: "'a::len word" by (auto intro: bit_eqI simp add: bit_and_iff bit_or_iff) lemma word_not_not [simp]: "NOT (NOT x) = x" for x :: "'a::len word" by (fact bit.double_compl) lemma word_ao_dist: "(x OR y) AND z = x AND z OR y AND z" for x :: "'a::len word" by (fact bit.conj_disj_distrib2) lemma word_oa_dist: "x AND y OR z = (x OR z) AND (y OR z)" for x :: "'a::len word" by (fact bit.disj_conj_distrib2) lemma word_add_not [simp]: "x + NOT x = -1" for x :: "'a::len word" by (simp add: not_eq_complement) lemma word_plus_and_or [simp]: "(x AND y) + (x OR y) = x + y" for x :: "'a::len word" by transfer (simp add: plus_and_or) lemma leoa: "w = x OR y \<Longrightarrow> y = w AND y" for x :: "'a::len word" by auto lemma leao: "w' = x' AND y' \<Longrightarrow> x' = x' OR w'" for x' :: "'a::len word" by auto lemma word_ao_equiv: "w = w OR w' \<longleftrightarrow> w' = w AND w'" for w w' :: "'a::len word" by (auto intro: leoa leao) lemma le_word_or2: "x \<le> x OR y" for x y :: "'a::len word" by (simp add: or_greater_eq uint_or word_le_def) lemmas le_word_or1 = xtrans(3) [OF word_bw_comms (2) le_word_or2] lemmas word_and_le1 = xtrans(3) [OF word_ao_absorbs (4) [symmetric] le_word_or2] lemmas word_and_le2 = xtrans(3) [OF word_ao_absorbs (8) [symmetric] le_word_or2] lemma bit_horner_sum_bit_word_iff [bit_simps]: \<open>bit (horner_sum of_bool (2 :: 'a::len word) bs) n \<longleftrightarrow> n < min LENGTH('a) (length bs) \<and> bs ! 
n\<close> by transfer (simp add: bit_horner_sum_bit_iff) definition word_reverse :: \<open>'a::len word \<Rightarrow> 'a word\<close> where \<open>word_reverse w = horner_sum of_bool 2 (rev (map (bit w) [0..<LENGTH('a)]))\<close> lemma bit_word_reverse_iff [bit_simps]: \<open>bit (word_reverse w) n \<longleftrightarrow> n < LENGTH('a) \<and> bit w (LENGTH('a) - Suc n)\<close> for w :: \<open>'a::len word\<close> by (cases \<open>n < LENGTH('a)\<close>) (simp_all add: word_reverse_def bit_horner_sum_bit_word_iff rev_nth) lemma word_rev_rev [simp] : "word_reverse (word_reverse w) = w" by (rule bit_word_eqI) (auto simp add: bit_word_reverse_iff bit_imp_le_length Suc_diff_Suc) lemma word_rev_gal: "word_reverse w = u \<Longrightarrow> word_reverse u = w" by (metis word_rev_rev) lemma word_rev_gal': "u = word_reverse w \<Longrightarrow> w = word_reverse u" by simp lemma uint_2p: "(0::'a::len word) < 2 ^ n \<Longrightarrow> uint (2 ^ n::'a::len word) = 2 ^ n" by (cases \<open>n < LENGTH('a)\<close>; transfer; force) lemma word_of_int_2p: "(word_of_int (2 ^ n) :: 'a::len word) = 2 ^ n" by (induct n) (simp_all add: wi_hom_syms) subsubsection \<open>shift functions in terms of lists of bools\<close> lemma drop_bit_word_numeral [simp]: \<open>drop_bit (numeral n) (numeral k) = (word_of_int (drop_bit (numeral n) (take_bit LENGTH('a) (numeral k))) :: 'a::len word)\<close> by transfer simp lemma drop_bit_word_Suc_numeral [simp]: \<open>drop_bit (Suc n) (numeral k) = (word_of_int (drop_bit (Suc n) (take_bit LENGTH('a) (numeral k))) :: 'a::len word)\<close> by transfer simp lemma drop_bit_word_minus_numeral [simp]: \<open>drop_bit (numeral n) (- numeral k) = (word_of_int (drop_bit (numeral n) (take_bit LENGTH('a) (- numeral k))) :: 'a::len word)\<close> by transfer simp lemma drop_bit_word_Suc_minus_numeral [simp]: \<open>drop_bit (Suc n) (- numeral k) = (word_of_int (drop_bit (Suc n) (take_bit LENGTH('a) (- numeral k))) :: 'a::len word)\<close> by transfer simp lemma signed_drop_bit_word_numeral [simp]: \<open>signed_drop_bit (numeral n) (numeral k) = (word_of_int (drop_bit (numeral n) (signed_take_bit (LENGTH('a) - 1) (numeral k))) :: 'a::len word)\<close> by transfer simp lemma signed_drop_bit_word_Suc_numeral [simp]: \<open>signed_drop_bit (Suc n) (numeral k) = (word_of_int (drop_bit (Suc n) (signed_take_bit (LENGTH('a) - 1) (numeral k))) :: 'a::len word)\<close> by transfer simp lemma signed_drop_bit_word_minus_numeral [simp]: \<open>signed_drop_bit (numeral n) (- numeral k) = (word_of_int (drop_bit (numeral n) (signed_take_bit (LENGTH('a) - 1) (- numeral k))) :: 'a::len word)\<close> by transfer simp lemma signed_drop_bit_word_Suc_minus_numeral [simp]: \<open>signed_drop_bit (Suc n) (- numeral k) = (word_of_int (drop_bit (Suc n) (signed_take_bit (LENGTH('a) - 1) (- numeral k))) :: 'a::len word)\<close> by transfer simp lemma take_bit_word_numeral [simp]: \<open>take_bit (numeral n) (numeral k) = (word_of_int (take_bit (min LENGTH('a) (numeral n)) (numeral k)) :: 'a::len word)\<close> by transfer rule lemma take_bit_word_Suc_numeral [simp]: \<open>take_bit (Suc n) (numeral k) = (word_of_int (take_bit (min LENGTH('a) (Suc n)) (numeral k)) :: 'a::len word)\<close> by transfer rule lemma take_bit_word_minus_numeral [simp]: \<open>take_bit (numeral n) (- numeral k) = (word_of_int (take_bit (min LENGTH('a) (numeral n)) (- numeral k)) :: 'a::len word)\<close> by transfer rule lemma take_bit_word_Suc_minus_numeral [simp]: \<open>take_bit (Suc n) (- numeral k) = (word_of_int (take_bit (min LENGTH('a) (Suc 
n)) (- numeral k)) :: 'a::len word)\<close> by transfer rule lemma signed_take_bit_word_numeral [simp]: \<open>signed_take_bit (numeral n) (numeral k) = (word_of_int (signed_take_bit (numeral n) (take_bit LENGTH('a) (numeral k))) :: 'a::len word)\<close> by transfer rule lemma signed_take_bit_word_Suc_numeral [simp]: \<open>signed_take_bit (Suc n) (numeral k) = (word_of_int (signed_take_bit (Suc n) (take_bit LENGTH('a) (numeral k))) :: 'a::len word)\<close> by transfer rule lemma signed_take_bit_word_minus_numeral [simp]: \<open>signed_take_bit (numeral n) (- numeral k) = (word_of_int (signed_take_bit (numeral n) (take_bit LENGTH('a) (- numeral k))) :: 'a::len word)\<close> by transfer rule lemma signed_take_bit_word_Suc_minus_numeral [simp]: \<open>signed_take_bit (Suc n) (- numeral k) = (word_of_int (signed_take_bit (Suc n) (take_bit LENGTH('a) (- numeral k))) :: 'a::len word)\<close> by transfer rule lemma False_map2_or: "\<lbrakk>set xs \<subseteq> {False}; length ys = length xs\<rbrakk> \<Longrightarrow> map2 (\<or>) xs ys = ys" by (induction xs arbitrary: ys) (auto simp: length_Suc_conv) lemma align_lem_or: assumes "length xs = n + m" "length ys = n + m" and "drop m xs = replicate n False" "take m ys = replicate m False" shows "map2 (\<or>) xs ys = take m xs @ drop m ys" using assms proof (induction xs arbitrary: ys m) case (Cons a xs) then show ?case by (cases m) (auto simp: length_Suc_conv False_map2_or) qed auto lemma False_map2_and: "\<lbrakk>set xs \<subseteq> {False}; length ys = length xs\<rbrakk> \<Longrightarrow> map2 (\<and>) xs ys = xs" by (induction xs arbitrary: ys) (auto simp: length_Suc_conv) lemma align_lem_and: assumes "length xs = n + m" "length ys = n + m" and "drop m xs = replicate n False" "take m ys = replicate m False" shows "map2 (\<and>) xs ys = replicate (n + m) False" using assms proof (induction xs arbitrary: ys m) case (Cons a xs) then show ?case by (cases m) (auto simp: length_Suc_conv set_replicate_conv_if False_map2_and) qed auto subsubsection \<open>Mask\<close> lemma minus_1_eq_mask: \<open>- 1 = (mask LENGTH('a) :: 'a::len word)\<close> by (rule bit_eqI) (simp add: bit_exp_iff bit_mask_iff) lemma mask_eq_decr_exp: \<open>mask n = 2 ^ n - (1 :: 'a::len word)\<close> by (fact mask_eq_exp_minus_1) lemma mask_Suc_rec: \<open>mask (Suc n) = 2 * mask n + (1 :: 'a::len word)\<close> by (simp add: mask_eq_exp_minus_1) context begin qualified lemma bit_mask_iff [bit_simps]: \<open>bit (mask m :: 'a::len word) n \<longleftrightarrow> n < min LENGTH('a) m\<close> by (simp add: bit_mask_iff not_le) end lemma mask_bin: "mask n = word_of_int (take_bit n (- 1))" by transfer simp lemma and_mask_bintr: "w AND mask n = word_of_int (take_bit n (uint w))" by transfer (simp add: ac_simps take_bit_eq_mask) lemma and_mask_wi: "word_of_int i AND mask n = word_of_int (take_bit n i)" by (simp add: take_bit_eq_mask of_int_and_eq of_int_mask_eq) lemma and_mask_wi': "word_of_int i AND mask n = (word_of_int (take_bit (min LENGTH('a) n) i) :: 'a::len word)" by (auto simp add: and_mask_wi min_def wi_bintr) lemma and_mask_no: "numeral i AND mask n = word_of_int (take_bit n (numeral i))" unfolding word_numeral_alt by (rule and_mask_wi) lemma and_mask_mod_2p: "w AND mask n = word_of_int (uint w mod 2 ^ n)" by (simp only: and_mask_bintr take_bit_eq_mod) lemma uint_mask_eq: \<open>uint (mask n :: 'a::len word) = mask (min LENGTH('a) n)\<close> by transfer simp lemma and_mask_lt_2p: "uint (w AND mask n) < 2 ^ n" by (metis take_bit_eq_mask take_bit_int_less_exp unsigned_take_bit_eq) 
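text \<open>An illustrative instance, not part of the original theory: \<open>and_mask_mod_2p\<close>
  above says that conjunction with \<open>mask n\<close> is reduction modulo \<open>2 ^ n\<close> on the
  unsigned value. Concretely (lemma name ours), keeping the low three bits is
  reduction mod 8.\<close>

lemma and_mask_mod_demo:
  fixes w :: \<open>'a::len word\<close>
  shows \<open>w AND mask 3 = word_of_int (uint w mod 8)\<close>
  by (simp add: and_mask_mod_2p)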
lemma mask_eq_iff: "w AND mask n = w \<longleftrightarrow> uint w < 2 ^ n" apply (auto simp flip: take_bit_eq_mask) apply (metis take_bit_int_eq_self_iff uint_take_bit_eq) apply (simp add: take_bit_int_eq_self unsigned_take_bit_eq word_uint_eqI) done lemma and_mask_dvd: "2 ^ n dvd uint w \<longleftrightarrow> w AND mask n = 0" by (simp flip: take_bit_eq_mask take_bit_eq_mod unsigned_take_bit_eq add: dvd_eq_mod_eq_0 uint_0_iff) lemma and_mask_dvd_nat: "2 ^ n dvd unat w \<longleftrightarrow> w AND mask n = 0" by (simp flip: take_bit_eq_mask take_bit_eq_mod unsigned_take_bit_eq add: dvd_eq_mod_eq_0 unat_0_iff uint_0_iff) lemma word_2p_lem: "n < size w \<Longrightarrow> w < 2 ^ n = (uint w < 2 ^ n)" for w :: "'a::len word" by transfer simp lemma less_mask_eq: fixes x :: "'a::len word" assumes "x < 2 ^ n" shows "x AND mask n = x" by (metis (no_types) assms lt2p_lem mask_eq_iff not_less word_2p_lem word_size) lemmas mask_eq_iff_w2p = trans [OF mask_eq_iff word_2p_lem [symmetric]] lemmas and_mask_less' = iffD2 [OF word_2p_lem and_mask_lt_2p, simplified word_size] lemma and_mask_less_size: "n < size x \<Longrightarrow> x AND mask n < 2 ^ n" for x :: \<open>'a::len word\<close> unfolding word_size by (erule and_mask_less') lemma word_mod_2p_is_mask [OF refl]: "c = 2 ^ n \<Longrightarrow> c > 0 \<Longrightarrow> x mod c = x AND mask n" for c x :: "'a::len word" by (auto simp: word_mod_def uint_2p and_mask_mod_2p) lemma mask_eqs: "(a AND mask n) + b AND mask n = a + b AND mask n" "a + (b AND mask n) AND mask n = a + b AND mask n" "(a AND mask n) - b AND mask n = a - b AND mask n" "a - (b AND mask n) AND mask n = a - b AND mask n" "a * (b AND mask n) AND mask n = a * b AND mask n" "(b AND mask n) * a AND mask n = b * a AND mask n" "(a AND mask n) + (b AND mask n) AND mask n = a + b AND mask n" "(a AND mask n) - (b AND mask n) AND mask n = a - b AND mask n" "(a AND mask n) * (b AND mask n) AND mask n = a * b AND mask n" "- (a AND mask n) AND mask n = - a AND mask n" "word_succ (a AND mask n) AND mask n = word_succ a AND mask n" "word_pred (a AND mask n) AND mask n = word_pred a AND mask n" using word_of_int_Ex [where x=a] word_of_int_Ex [where x=b] unfolding take_bit_eq_mask [symmetric] by (transfer; simp add: take_bit_eq_mod mod_simps)+ lemma mask_power_eq: "(x AND mask n) ^ k AND mask n = x ^ k AND mask n" for x :: \<open>'a::len word\<close> using word_of_int_Ex [where x=x] unfolding take_bit_eq_mask [symmetric] by (transfer; simp add: take_bit_eq_mod mod_simps)+ lemma mask_full [simp]: "mask LENGTH('a) = (- 1 :: 'a::len word)" by transfer simp subsubsection \<open>Slices\<close> definition slice1 :: \<open>nat \<Rightarrow> 'a::len word \<Rightarrow> 'b::len word\<close> where \<open>slice1 n w = (if n < LENGTH('a) then ucast (drop_bit (LENGTH('a) - n) w) else push_bit (n - LENGTH('a)) (ucast w))\<close> lemma bit_slice1_iff [bit_simps]: \<open>bit (slice1 m w :: 'b::len word) n \<longleftrightarrow> m - LENGTH('a) \<le> n \<and> n < min LENGTH('b) m \<and> bit w (n + (LENGTH('a) - m) - (m - LENGTH('a)))\<close> for w :: \<open>'a::len word\<close> by (auto simp add: slice1_def bit_ucast_iff bit_drop_bit_eq bit_push_bit_iff not_less not_le ac_simps dest: bit_imp_le_length) definition slice :: \<open>nat \<Rightarrow> 'a::len word \<Rightarrow> 'b::len word\<close> where \<open>slice n = slice1 (LENGTH('a) - n)\<close> lemma bit_slice_iff [bit_simps]: \<open>bit (slice m w :: 'b::len word) n \<longleftrightarrow> n < min LENGTH('b) (LENGTH('a) - m) \<and> bit w (n + LENGTH('a) - (LENGTH('a) - 
m))\<close> for w :: \<open>'a::len word\<close> by (simp add: slice_def word_size bit_slice1_iff) lemma slice1_0 [simp] : "slice1 n 0 = 0" unfolding slice1_def by simp lemma slice_0 [simp] : "slice n 0 = 0" unfolding slice_def by auto lemma ucast_slice1: "ucast w = slice1 (size w) w" unfolding slice1_def by (simp add: size_word.rep_eq) lemma ucast_slice: "ucast w = slice 0 w" by (simp add: slice_def slice1_def) lemma slice_id: "slice 0 t = t" by (simp only: ucast_slice [symmetric] ucast_id) lemma rev_slice1: \<open>slice1 n (word_reverse w :: 'b::len word) = word_reverse (slice1 k w :: 'a::len word)\<close> if \<open>n + k = LENGTH('a) + LENGTH('b)\<close> proof (rule bit_word_eqI) fix m assume *: \<open>m < LENGTH('a)\<close> from that have **: \<open>LENGTH('b) = n + k - LENGTH('a)\<close> by simp show \<open>bit (slice1 n (word_reverse w :: 'b word) :: 'a word) m \<longleftrightarrow> bit (word_reverse (slice1 k w :: 'a word)) m\<close> unfolding bit_slice1_iff bit_word_reverse_iff using * ** by (cases \<open>n \<le> LENGTH('a)\<close>; cases \<open>k \<le> LENGTH('a)\<close>) auto qed lemma rev_slice: "n + k + LENGTH('a::len) = LENGTH('b::len) \<Longrightarrow> slice n (word_reverse (w::'b word)) = word_reverse (slice k w :: 'a word)" unfolding slice_def word_size by (simp add: rev_slice1) subsubsection \<open>Revcast\<close> definition revcast :: \<open>'a::len word \<Rightarrow> 'b::len word\<close> where \<open>revcast = slice1 LENGTH('b)\<close> lemma bit_revcast_iff [bit_simps]: \<open>bit (revcast w :: 'b::len word) n \<longleftrightarrow> LENGTH('b) - LENGTH('a) \<le> n \<and> n < LENGTH('b) \<and> bit w (n + (LENGTH('a) - LENGTH('b)) - (LENGTH('b) - LENGTH('a)))\<close> for w :: \<open>'a::len word\<close> by (simp add: revcast_def bit_slice1_iff) lemma revcast_slice1 [OF refl]: "rc = revcast w \<Longrightarrow> slice1 (size rc) w = rc" by (simp add: revcast_def word_size) lemma revcast_rev_ucast [OF refl refl refl]: "cs = [rc, uc] \<Longrightarrow> rc = revcast (word_reverse w) \<Longrightarrow> uc = ucast w \<Longrightarrow> rc = word_reverse uc" by (metis rev_slice1 revcast_slice1 ucast_slice1 word_size) lemma revcast_ucast: "revcast w = word_reverse (ucast (word_reverse w))" using revcast_rev_ucast [of "word_reverse w"] by simp lemma ucast_revcast: "ucast w = word_reverse (revcast (word_reverse w))" by (fact revcast_rev_ucast [THEN word_rev_gal']) lemma ucast_rev_revcast: "ucast (word_reverse w) = word_reverse (revcast w)" by (fact revcast_ucast [THEN word_rev_gal']) text "linking revcast and cast via shift" lemmas wsst_TYs = source_size target_size word_size lemmas sym_notr = not_iff [THEN iffD2, THEN not_sym, THEN not_iff [THEN iffD1]] subsection \<open>Split and cat\<close> lemmas word_split_bin' = word_split_def lemmas word_cat_bin' = word_cat_eq \<comment> \<open>this odd result is analogous to \<open>ucast_id\<close>, result to the length given by the result type\<close> lemma word_cat_id: "word_cat a b = b" by transfer (simp add: take_bit_concat_bit_eq) lemma word_cat_split_alt: "\<lbrakk>size w \<le> size u + size v; word_split w = (u,v)\<rbrakk> \<Longrightarrow> word_cat u v = w" unfolding word_split_def by (rule bit_word_eqI) (auto simp add: bit_word_cat_iff not_less word_size bit_ucast_iff bit_drop_bit_eq) lemmas word_cat_split_size = sym [THEN [2] word_cat_split_alt [symmetric]] subsubsection \<open>Split and slice\<close> lemma split_slices: assumes "word_split w = (u, v)" shows "u = slice (size v) w \<and> v = slice 0 w" unfolding word_size proof (intro 
conjI) have \<section>: "\<And>n. \<lbrakk>ucast (drop_bit LENGTH('b) w) = u; LENGTH('c) < LENGTH('b)\<rbrakk> \<Longrightarrow> \<not> bit u n" by (metis bit_take_bit_iff bit_word_of_int_iff diff_is_0_eq' drop_bit_take_bit less_imp_le less_nat_zero_code of_int_uint unsigned_drop_bit_eq) show "u = slice LENGTH('b) w" proof (rule bit_word_eqI) show "bit u n = bit ((slice LENGTH('b) w)::'a word) n" if "n < LENGTH('a)" for n using assms bit_imp_le_length unfolding word_split_def bit_slice_iff by (fastforce simp add: \<section> ac_simps word_size bit_ucast_iff bit_drop_bit_eq) qed show "v = slice 0 w" by (metis Pair_inject assms ucast_slice word_split_bin') qed lemma slice_cat1 [OF refl]: "\<lbrakk>wc = word_cat a b; size a + size b \<le> size wc\<rbrakk> \<Longrightarrow> slice (size b) wc = a" by (rule bit_word_eqI) (auto simp add: bit_slice_iff bit_word_cat_iff word_size) lemmas slice_cat2 = trans [OF slice_id word_cat_id] lemma cat_slices: "\<lbrakk>a = slice n c; b = slice 0 c; n = size b; size c \<le> size a + size b\<rbrakk> \<Longrightarrow> word_cat a b = c" by (rule bit_word_eqI) (auto simp add: bit_slice_iff bit_word_cat_iff word_size) lemma word_split_cat_alt: assumes "w = word_cat u v" and size: "size u + size v \<le> size w" shows "word_split w = (u,v)" proof - have "ucast ((drop_bit LENGTH('c) (word_cat u v))::'a word) = u" "ucast ((word_cat u v)::'a word) = v" using assms by (auto simp add: word_size bit_ucast_iff bit_drop_bit_eq bit_word_cat_iff intro: bit_eqI) then show ?thesis by (simp add: assms(1) word_split_bin') qed lemma horner_sum_uint_exp_Cons_eq: \<open>horner_sum uint (2 ^ LENGTH('a)) (w # ws) = concat_bit LENGTH('a) (uint w) (horner_sum uint (2 ^ LENGTH('a)) ws)\<close> for ws :: \<open>'a::len word list\<close> by (simp add: bintr_uint concat_bit_eq push_bit_eq_mult) lemma bit_horner_sum_uint_exp_iff: \<open>bit (horner_sum uint (2 ^ LENGTH('a)) ws) n \<longleftrightarrow> n div LENGTH('a) < length ws \<and> bit (ws ! 
(n div LENGTH('a))) (n mod LENGTH('a))\<close> for ws :: \<open>'a::len word list\<close> proof (induction ws arbitrary: n) case Nil then show ?case by simp next case (Cons w ws) then show ?case by (cases \<open>n \<ge> LENGTH('a)\<close>) (simp_all only: horner_sum_uint_exp_Cons_eq, simp_all add: bit_concat_bit_iff le_div_geq le_mod_geq bit_uint_iff Cons) qed subsection \<open>Rotation\<close> lemma word_rotr_word_rotr_eq: \<open>word_rotr m (word_rotr n w) = word_rotr (m + n) w\<close> by (rule bit_word_eqI) (simp add: bit_word_rotr_iff ac_simps mod_add_right_eq) lemma word_rot_lem: "\<lbrakk>l + k = d + k mod l; n < l\<rbrakk> \<Longrightarrow> ((d + n) mod l) = n" for l::nat by (metis (no_types, lifting) add.commute add.right_neutral add_diff_cancel_left' mod_if mod_mult_div_eq mod_mult_self2 mod_self) lemma word_rot_rl [simp]: \<open>word_rotl k (word_rotr k v) = v\<close> proof (rule bit_word_eqI) show "bit (word_rotl k (word_rotr k v)) n = bit v n" if "n < LENGTH('a)" for n using that by (auto simp: word_rot_lem word_rotl_eq_word_rotr word_rotr_word_rotr_eq bit_word_rotr_iff algebra_simps split: nat_diff_split) qed lemma word_rot_lr [simp]: \<open>word_rotr k (word_rotl k v) = v\<close> proof (rule bit_word_eqI) show "bit (word_rotr k (word_rotl k v)) n = bit v n" if "n < LENGTH('a)" for n using that by (auto simp add: word_rot_lem word_rotl_eq_word_rotr word_rotr_word_rotr_eq bit_word_rotr_iff algebra_simps split: nat_diff_split) qed lemma word_rot_gal: \<open>word_rotr n v = w \<longleftrightarrow> word_rotl n w = v\<close> by auto lemma word_rot_gal': \<open>w = word_rotr n v \<longleftrightarrow> v = word_rotl n w\<close> by auto lemma word_rotr_rev: \<open>word_rotr n w = word_reverse (word_rotl n (word_reverse w))\<close> proof (rule bit_word_eqI) fix m assume \<open>m < LENGTH('a)\<close> moreover have \<open>1 + ((int m + int n mod int LENGTH('a)) mod int LENGTH('a) + ((int LENGTH('a) * 2) mod int LENGTH('a) - (1 + (int m + int n mod int LENGTH('a)))) mod int LENGTH('a)) = int LENGTH('a)\<close> apply (cases \<open>(1 + (int m + int n mod int LENGTH('a))) mod int LENGTH('a) = 0\<close>) using zmod_zminus1_eq_if [of \<open>1 + (int m + int n mod int LENGTH('a))\<close> \<open>int LENGTH('a)\<close>] apply simp_all apply (auto simp add: algebra_simps) apply (metis (mono_tags, opaque_lifting) Abs_fnat_hom_add mod_Suc mod_mult_self2_is_0 of_nat_Suc of_nat_mod semiring_char_0_class.of_nat_neq_0) apply (metis (no_types, opaque_lifting) Abs_fnat_hom_add less_not_refl mod_Suc of_nat_Suc of_nat_gt_0 of_nat_mod) done then have \<open>int ((m + n) mod LENGTH('a)) = int (LENGTH('a) - Suc ((LENGTH('a) - Suc m + LENGTH('a) - n mod LENGTH('a)) mod LENGTH('a)))\<close> using \<open>m < LENGTH('a)\<close> by (simp only: of_nat_mod mod_simps) (simp add: of_nat_diff of_nat_mod Suc_le_eq add_less_mono algebra_simps mod_simps) then have \<open>(m + n) mod LENGTH('a) = LENGTH('a) - Suc ((LENGTH('a) - Suc m + LENGTH('a) - n mod LENGTH('a)) mod LENGTH('a))\<close> by simp ultimately show \<open>bit (word_rotr n w) m \<longleftrightarrow> bit (word_reverse (word_rotl n (word_reverse w))) m\<close> by (simp add: word_rotl_eq_word_rotr bit_word_rotr_iff bit_word_reverse_iff) qed lemma word_roti_0 [simp]: "word_roti 0 w = w" by transfer simp lemma word_roti_add: "word_roti (m + n) w = word_roti m (word_roti n w)" by (rule bit_word_eqI) (simp add: bit_word_roti_iff nat_less_iff mod_simps ac_simps) lemma word_roti_conv_mod': "word_roti n w = word_roti (n mod int (size w)) w" by transfer simp lemmas 
word_roti_conv_mod = word_roti_conv_mod' [unfolded word_size] end subsubsection \<open>Word rotation commutes with bit-wise operations\<close> \<comment> \<open>using locale to not pollute lemma namespace\<close> locale word_rotate begin context includes bit_operations_syntax begin lemma word_rot_logs: "word_rotl n (NOT v) = NOT (word_rotl n v)" "word_rotr n (NOT v) = NOT (word_rotr n v)" "word_rotl n (x AND y) = word_rotl n x AND word_rotl n y" "word_rotr n (x AND y) = word_rotr n x AND word_rotr n y" "word_rotl n (x OR y) = word_rotl n x OR word_rotl n y" "word_rotr n (x OR y) = word_rotr n x OR word_rotr n y" "word_rotl n (x XOR y) = word_rotl n x XOR word_rotl n y" "word_rotr n (x XOR y) = word_rotr n x XOR word_rotr n y" by (rule bit_word_eqI, auto simp add: bit_word_rotl_iff bit_word_rotr_iff bit_and_iff bit_or_iff bit_xor_iff bit_not_iff algebra_simps not_le)+ end end lemmas word_rot_logs = word_rotate.word_rot_logs lemma word_rotx_0 [simp] : "word_rotr i 0 = 0 \<and> word_rotl i 0 = 0" by transfer simp_all lemma word_roti_0' [simp] : "word_roti n 0 = 0" by transfer simp declare word_roti_eq_word_rotr_word_rotl [simp] subsection \<open>Maximum machine word\<close> context includes bit_operations_syntax begin lemma word_int_cases: fixes x :: "'a::len word" obtains n where "x = word_of_int n" and "0 \<le> n" and "n < 2^LENGTH('a)" by (rule that [of \<open>uint x\<close>]) simp_all lemma word_nat_cases [cases type: word]: fixes x :: "'a::len word" obtains n where "x = of_nat n" and "n < 2^LENGTH('a)" by (rule that [of \<open>unat x\<close>]) simp_all lemma max_word_max [intro!]: \<open>n \<le> - 1\<close> for n :: \<open>'a::len word\<close> by (fact word_order.extremum) lemma word_of_int_2p_len: "word_of_int (2 ^ LENGTH('a)) = (0::'a::len word)" by simp lemma word_pow_0: "(2::'a::len word) ^ LENGTH('a) = 0" by (fact word_exp_length_eq_0) lemma max_word_wrap: \<open>x + 1 = 0 \<Longrightarrow> x = - 1\<close> for x :: \<open>'a::len word\<close> by (simp add: eq_neg_iff_add_eq_0) lemma word_and_max: \<open>x AND - 1 = x\<close> for x :: \<open>'a::len word\<close> by (fact word_log_esimps) lemma word_or_max: \<open>x OR - 1 = - 1\<close> for x :: \<open>'a::len word\<close> by (fact word_log_esimps) lemma word_ao_dist2: "x AND (y OR z) = x AND y OR x AND z" for x y z :: "'a::len word" by (fact bit.conj_disj_distrib) lemma word_oa_dist2: "x OR y AND z = (x OR y) AND (x OR z)" for x y z :: "'a::len word" by (fact bit.disj_conj_distrib) lemma word_and_not [simp]: "x AND NOT x = 0" for x :: "'a::len word" by (fact bit.conj_cancel_right) lemma word_or_not [simp]: \<open>x OR NOT x = - 1\<close> for x :: \<open>'a::len word\<close> by (fact bit.disj_cancel_right) lemma word_xor_and_or: "x XOR y = x AND NOT y OR NOT x AND y" for x y :: "'a::len word" by (fact bit.xor_def) lemma uint_lt_0 [simp]: "uint x < 0 = False" by (simp add: linorder_not_less) lemma word_less_1 [simp]: "x < 1 \<longleftrightarrow> x = 0" for x :: "'a::len word" by (simp add: word_less_nat_alt unat_0_iff) lemma uint_plus_if_size: "uint (x + y) = (if uint x + uint y < 2^size x then uint x + uint y else uint x + uint y - 2^size x)" by (simp add: take_bit_eq_mod word_size uint_word_of_int_eq uint_plus_if') lemma unat_plus_if_size: "unat (x + y) = (if unat x + unat y < 2^size x then unat x + unat y else unat x + unat y - 2^size x)" for x y :: "'a::len word" by (simp add: size_word.rep_eq unat_arith_simps) lemma word_neq_0_conv: "w \<noteq> 0 \<longleftrightarrow> 0 < w" for w :: "'a::len word" by (fact
word_coorder.not_eq_extremum) lemma max_lt: "unat (max a b div c) = unat (max a b) div unat c" for c :: "'a::len word" by (fact unat_div) lemma uint_sub_if_size: "uint (x - y) = (if uint y \<le> uint x then uint x - uint y else uint x - uint y + 2^size x)" by (simp add: size_word.rep_eq uint_sub_if') lemma unat_sub: \<open>unat (a - b) = unat a - unat b\<close> if \<open>b \<le> a\<close> by (meson that unat_sub_if_size word_le_nat_alt) lemmas word_less_sub1_numberof [simp] = word_less_sub1 [of "numeral w"] for w lemmas word_le_sub1_numberof [simp] = word_le_sub1 [of "numeral w"] for w lemma word_of_int_minus: "word_of_int (2^LENGTH('a) - i) = (word_of_int (-i)::'a::len word)" by simp lemma word_of_int_inj: \<open>(word_of_int x :: 'a::len word) = word_of_int y \<longleftrightarrow> x = y\<close> if \<open>0 \<le> x \<and> x < 2 ^ LENGTH('a)\<close> \<open>0 \<le> y \<and> y < 2 ^ LENGTH('a)\<close> using that by (transfer fixing: x y) (simp add: take_bit_int_eq_self) lemma word_le_less_eq: "x \<le> y \<longleftrightarrow> x = y \<or> x < y" for x y :: "'z::len word" by (auto simp add: order_class.le_less) lemma mod_plus_cong: fixes b b' :: int assumes 1: "b = b'" and 2: "x mod b' = x' mod b'" and 3: "y mod b' = y' mod b'" and 4: "x' + y' = z'" shows "(x + y) mod b = z' mod b'" proof - from 1 2[symmetric] 3[symmetric] have "(x + y) mod b = (x' mod b' + y' mod b') mod b'" by (simp add: mod_add_eq) also have "\<dots> = (x' + y') mod b'" by (simp add: mod_add_eq) finally show ?thesis by (simp add: 4) qed lemma mod_minus_cong: fixes b b' :: int assumes "b = b'" and "x mod b' = x' mod b'" and "y mod b' = y' mod b'" and "x' - y' = z'" shows "(x - y) mod b = z' mod b'" using assms [symmetric] by (auto intro: mod_diff_cong) lemma word_induct_less [case_names zero less]: \<open>P m\<close> if zero: \<open>P 0\<close> and less: \<open>\<And>n. n < m \<Longrightarrow> P n \<Longrightarrow> P (1 + n)\<close> for m :: \<open>'a::len word\<close> proof - define q where \<open>q = unat m\<close> with less have \<open>\<And>n. n < word_of_nat q \<Longrightarrow> P n \<Longrightarrow> P (1 + n)\<close> by simp then have \<open>P (word_of_nat q :: 'a word)\<close> proof (induction q) case 0 show ?case by (simp add: zero) next case (Suc q) show ?case proof (cases \<open>1 + word_of_nat q = (0 :: 'a word)\<close>) case True then show ?thesis by (simp add: zero) next case False then have *: \<open>word_of_nat q < (word_of_nat (Suc q) :: 'a word)\<close> by (simp add: unatSuc word_less_nat_alt) then have **: \<open>n < (1 + word_of_nat q :: 'a word) \<longleftrightarrow> n \<le> (word_of_nat q :: 'a word)\<close> for n by (metis (no_types, lifting) add.commute inc_le le_less_trans not_less of_nat_Suc) have \<open>P (word_of_nat q)\<close> by (simp add: "**" Suc.IH Suc.prems) with * have \<open>P (1 + word_of_nat q)\<close> by (rule Suc.prems) then show ?thesis by simp qed qed with \<open>q = unat m\<close> show ?thesis by simp qed lemma word_induct: "P 0 \<Longrightarrow> (\<And>n. P n \<Longrightarrow> P (1 + n)) \<Longrightarrow> P m" for P :: "'a::len word \<Rightarrow> bool" by (rule word_induct_less) lemma word_induct2 [case_names zero suc, induct type]: "P 0 \<Longrightarrow> (\<And>n. 
1 + n \<noteq> 0 \<Longrightarrow> P n \<Longrightarrow> P (1 + n)) \<Longrightarrow> P n" for P :: "'b::len word \<Rightarrow> bool" by (induction rule: word_induct_less; force) subsection \<open>Recursion combinator for words\<close> definition word_rec :: "'a \<Rightarrow> ('b::len word \<Rightarrow> 'a \<Rightarrow> 'a) \<Rightarrow> 'b word \<Rightarrow> 'a" where "word_rec forZero forSuc n = rec_nat forZero (forSuc \<circ> of_nat) (unat n)" lemma word_rec_0 [simp]: "word_rec z s 0 = z" by (simp add: word_rec_def) lemma word_rec_Suc [simp]: "1 + n \<noteq> 0 \<Longrightarrow> word_rec z s (1 + n) = s n (word_rec z s n)" for n :: "'a::len word" by (simp add: unatSuc word_rec_def) lemma word_rec_Pred: "n \<noteq> 0 \<Longrightarrow> word_rec z s n = s (n - 1) (word_rec z s (n - 1))" by (metis add.commute diff_add_cancel word_rec_Suc) lemma word_rec_in: "f (word_rec z (\<lambda>_. f) n) = word_rec (f z) (\<lambda>_. f) n" by (induct n) simp_all lemma word_rec_in2: "f n (word_rec z f n) = word_rec (f 0 z) (f \<circ> (+) 1) n" by (induct n) simp_all lemma word_rec_twice: "m \<le> n \<Longrightarrow> word_rec z f n = word_rec (word_rec z f (n - m)) (f \<circ> (+) (n - m)) m" proof (induction n arbitrary: z f) case zero then show ?case by (metis diff_0_right word_le_0_iff word_rec_0) next case (suc n z f) show ?case proof (cases "1 + (n - m) = 0") case True then show ?thesis by (simp add: add_diff_eq) next case False then have eq: "1 + n - m = 1 + (n - m)" by simp with False have "m \<le> n" by (metis "suc.prems" add.commute dual_order.antisym eq_iff_diff_eq_0 inc_le leI) with False "suc.hyps" show ?thesis using suc.IH [of "f 0 z" "f \<circ> (+) 1"] by (simp add: word_rec_in2 eq add.assoc o_def) qed qed lemma word_rec_id: "word_rec z (\<lambda>_. id) n = z" by (induct n) auto lemma word_rec_id_eq: "(\<And>m. m < n \<Longrightarrow> f m = id) \<Longrightarrow> word_rec z f n = z" by (induction n) (auto simp add: unatSuc unat_arith_simps(2)) lemma word_rec_max: assumes "\<forall>m\<ge>n. m \<noteq> - 1 \<longrightarrow> f m = id" shows "word_rec z f (- 1) = word_rec z f n" proof - have \<section>: "\<And>m. \<lbrakk>m < - 1 - n\<rbrakk> \<Longrightarrow> (f \<circ> (+) n) m = id" using assms by (metis (mono_tags, lifting) add.commute add_diff_cancel_left' comp_apply less_le olen_add_eqv plus_minus_no_overflow word_n1_ge) have "word_rec z f (- 1) = word_rec (word_rec z f (- 1 - (- 1 - n))) (f \<circ> (+) (- 1 - (- 1 - n))) (- 1 - n)" by (meson word_n1_ge word_rec_twice) also have "... = word_rec z f n" by (metis (no_types, lifting) \<section> diff_add_cancel minus_diff_eq uminus_add_conv_diff word_rec_id_eq) finally show ?thesis . qed end subsection \<open>Tool support\<close> ML_file \<open>Tools/smt_word.ML\<close> end
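The \<open>word_rec\<close> combinator above recurses on \<open>unat\<close>, firing the step function once per word successor. A minimal self-contained sanity check of one unfolding (an illustrative sketch: the theory name, lemma name and the import path "HOL-Library.Word" are our assumptions, not part of the source):

theory Word_Rec_Demo
  imports "HOL-Library.Word"
begin

\<comment> \<open>On the word 1 the step function fires exactly once: instantiate
    \<open>word_rec_Suc\<close> at \<open>n = 0\<close> and simplify with \<open>word_rec_0\<close>.\<close>
lemma word_rec_one:
  \<open>word_rec z s 1 = s 0 z\<close> for s :: \<open>'a::len word \<Rightarrow> 'b \<Rightarrow> 'b\<close>
  using word_rec_Suc [where n = 0] by simp

end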
(* Copyright 2018 Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. *) theory task_list_pop_front_mem imports tasks begin text \<open>Up to two locales per function in the binary.\<close> locale task_list_pop_front_function = tasks_context + fixes rsp\<^sub>0 rbp\<^sub>0 a task_list_pop_front_ret :: \<open>64 word\<close> and v\<^sub>0 :: \<open>8 word\<close> and blocks :: \<open>(nat \<times> 64 word \<times> nat) set\<close> assumes seps: \<open>seps blocks\<close> and masters: \<open>master blocks (a, 1) 0\<close> \<open>master blocks (rsp\<^sub>0, 8) 1\<close> \<open>master blocks (rsp\<^sub>0-8, 8) 2\<close> \<open>master blocks (rsp\<^sub>0-16, 8) 3\<close> \<open>master blocks (rsp\<^sub>0-32, 8) 4\<close> and ret_address: \<open>outside task_list_pop_front_ret 317 464\<close> \<comment> \<open>Only works for non-recursive functions.\<close> begin text \<open> The Floyd invariant expresses for some locations properties that are invariably true. Simply expresses that a byte in the memory remains untouched. \<close> definition pp_\<Theta> :: \<open>_ \<Rightarrow> _ \<Rightarrow> _ \<Rightarrow> floyd_invar\<close> where \<open>pp_\<Theta> list first next \<equiv> [ \<comment> \<open>precondition\<close> boffset+317 \<mapsto> \<lambda>\<sigma>. regs \<sigma> rsp = rsp\<^sub>0 \<and> regs \<sigma> rbp = rbp\<^sub>0 \<and> regs \<sigma> rdi = list \<and> \<sigma> \<turnstile> *[list,8] = first \<and> \<sigma> \<turnstile> *[first + 0x58,8] = next \<and> \<sigma> \<turnstile> *[rsp\<^sub>0,8] = boffset+task_list_pop_front_ret \<and> \<sigma> \<turnstile> *[a,1] = v\<^sub>0, boffset+430 \<mapsto> \<lambda>\<sigma>. regs \<sigma> rsp = rsp\<^sub>0-8 \<and> regs \<sigma> rbp = rsp\<^sub>0-8 \<and> (first \<noteq> 0 \<longrightarrow> \<sigma> \<turnstile> *[list,8] = next) \<and> \<sigma> \<turnstile> *[rsp\<^sub>0-32,8] = list \<and> \<sigma> \<turnstile> *[rsp\<^sub>0-16,8] = first \<and> \<sigma> \<turnstile> *[rsp\<^sub>0-8,8] = rbp\<^sub>0 \<and> \<sigma> \<turnstile> *[rsp\<^sub>0,8] = boffset+task_list_pop_front_ret \<and> \<sigma> \<turnstile> *[a,1] = v\<^sub>0, boffset+462 \<mapsto> \<lambda>\<sigma>. regs \<sigma> rsp = rsp\<^sub>0-8 \<and> regs \<sigma> rbp = rsp\<^sub>0-8 \<and> \<sigma> \<turnstile> *[rsp\<^sub>0-8,8] = rbp\<^sub>0 \<and> \<sigma> \<turnstile> *[rsp\<^sub>0,8] = boffset+task_list_pop_front_ret \<and> \<sigma> \<turnstile> *[a,1] = v\<^sub>0, \<comment> \<open>postcondition\<close> boffset+task_list_pop_front_ret \<mapsto> \<lambda>\<sigma>. 
\<sigma> \<turnstile> *[a,1] = v\<^sub>0 \<and> regs \<sigma> rsp = rsp\<^sub>0+8 \<and> regs \<sigma> rbp = rbp\<^sub>0 ]\<close> text \<open>Adding some rules to the simplifier to simplify proofs.\<close> schematic_goal pp_\<Theta>_zero[simp]: \<open>pp_\<Theta> list first next boffset = ?x\<close> unfolding pp_\<Theta>_def by simp schematic_goal pp_\<Theta>_numeral_l[simp]: \<open>pp_\<Theta> list first next (n + boffset) = ?x\<close> unfolding pp_\<Theta>_def by simp schematic_goal pp_\<Theta>_numeral_r[simp]: \<open>pp_\<Theta> list first next (boffset + n) = ?x\<close> unfolding pp_\<Theta>_def by simp lemma rewrite_task_list_pop_front_mem: assumes \<open>master blocks (list, 8) 5\<close> \<open>master blocks (list+8, 8) 6\<close> \<open>master blocks (first+0x58, 8) 7\<close> \<open>master blocks (first+0x60, 8) 8\<close> \<open>master blocks (next+0x60, 8) 9\<close> shows \<open>is_std_invar task_list_pop_front_ret (floyd.invar task_list_pop_front_ret (pp_\<Theta> list first next))\<close> proof - note masters = masters assms show ?thesis text \<open>Boilerplate code to start the VCG\<close> apply (rule floyd_invarI) apply (rewrite at \<open>floyd_vcs task_list_pop_front_ret \<hole> _\<close> pp_\<Theta>_def) apply (intro floyd_vcsI) text \<open>Subgoal for rip = boffset+317\<close> subgoal premises prems for \<sigma> text \<open>Insert relevant knowledge\<close> apply (insert prems seps ret_address) text \<open>Apply VCG/symb.\ execution\<close> apply (restart_symbolic_execution?, (symbolic_execution masters: masters)+, (finish_symbolic_execution masters: masters)?)+ done text \<open>Subgoal for rip = boffset+430\<close> subgoal premises prems for \<sigma> text \<open>Insert relevant knowledge\<close> apply (insert prems seps ret_address) text \<open>Apply VCG/symb.\ execution\<close> apply (restart_symbolic_execution?, (symbolic_execution masters: masters)+, (finish_symbolic_execution masters: masters)?)+ done text \<open>Subgoal for rip = boffset+462\<close> subgoal premises prems for \<sigma> text \<open>Insert relevant knowledge\<close> apply (insert prems seps ret_address) text \<open>Apply VCG/symb.\ execution\<close> apply (restart_symbolic_execution?, (symbolic_execution masters: masters)+, (finish_symbolic_execution masters: masters)?)+ done text \<open>Trivial ending subgoal.\<close> subgoal by simp done qed end end
Test translator definition with union output
lemma convex_hull_2_alt: "convex hull {a,b} = {a + u *\<^sub>R (b - a) | u. 0 \<le> u \<and> u \<le> 1}"
REBOL [ Title: "Regression tests script for Red Compiler" Author: "Boleslav Březovský" File: %regression-test-redc-4.r Rights: "Copyright (C) 2016 Boleslav Březovský. All rights reserved." License: "BSD-3 - https://github.com/red/red/blob/origin/BSD-3-License.txt" ] ; cd %../ ;--separate-log-file ~~~start-file~~~ "Red Compiler Regression tests part 4" ===start-group=== "Red regressions #1501 - #2000" ; help functions for crash and compiler-problem detection true?: func [value] [not not value] crashed?: does [true? find qt/output "*** Runtime Error"] compiled?: does [true? not find qt/comp-output "Error"] script-error?: does [true? find qt/output "Script Error"] compiler-error?: does [true? find qt/comp-output "*** Compiler Internal Error"] compilation-error?: does [true? find qt/comp-output "*** Compilation Error"] loading-error: func [value] [found? find qt/comp-output join "*** Loading Error: " value] compilation-error: func [value] [found? find qt/comp-output join "*** Compilation Error: " value] syntax-error: func [value] [found? find qt/comp-output join "*** Syntax Error: " value] script-error: func [value] [found? find qt/comp-output join "*** Script Error: " value] ; -test-: :--test-- ; --test--: func [value] [probe value -test- value] --test-- "#1524" --compile-and-run-this-red {parse [x][keep 1]} --assert not crashed? --test-- "#1589" --compile-and-run-this-red {power -1 0.5} --assert not crashed? --test-- "#1598" --compile-and-run-this-red {3x4 // 1.1} --assert not crashed? --test-- "#1679" --compile-and-run-this-red {probe switch 1 []} --assert equal? qt/output "none^/" --test-- "#1694" --compile-and-run-this-red { do [ f: func [x] [x] probe try [f/only 3] ] } --assert true? find qt/output "arg2: 'only" --test-- "#1698" --compile-and-run-this-red { h: make hash! [] loop 10 [insert tail h 1] } --assert not crashed? --test-- "#1700" --compile-and-run-this-red {change-dir %../} --assert not crashed? --test-- "#1702" --compile-and-run-this-red { offset?: func [ series1 series2 ] [ (index? series2) - (index? series1) ] cmp: context [ shift-window: func [look-ahead-buffer positions][ set look-ahead-buffer skip get look-ahead-buffer positions ] match-length: func [a b /local start][ start: a while [all [a/1 = b/1 not tail? a]][a: next a b: next b] probe offset? start a ] find-longest-match: func [ search data /local pos len off length result ] [ pos: data length: 0 result: head insert insert clear [] 0 0 while [pos: find/case/reverse pos first data] [ if (len: match-length pos data) > length [ if len > 15 [ break ] length: len ] ] result ] lz77: context [ result: copy [] compress: func [ data [any-string!] /local look-ahead-buffer search-buffer position length ] [ clear result look-ahead-buffer: data search-buffer: data while [not empty? look-ahead-buffer] [ set [position length] find-longest-match search-buffer look-ahead-buffer shift-window 'look-ahead-buffer length + 1 ] ] ] ] cmp/lz77/compress "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" } --assert not script-error? --test-- "#1720" --compile-and-run-this-red {write http://abc.com compose [ {} {} ]} --assert not crashed? --assert script-error? --test-- "#1730" --compile-and-run-this-red {reduce does ["ok"]} --assert not crashed? --compile-and-run-this-red {do [reduce does ["ok"]]} --assert not crashed? --test-- "#1758" --compile-and-run-this-red {do [system/options/path: none]} --assert not crashed? 
--test-- "#1774" --compile-this-red {system/options/} --assert syntax-error "Invalid path! value" --test-- "#1831" --compile-and-run-this-red {do [function [a] [repeat a/1]]} --assert not crashed? --test-- "#1836" --compile-and-run-this-red { do [ content: [a [b] c] rule: [any [ set item word! (print item) | mark: () into [rule] stop: (prin "STOP: " probe stop)] ] parse content rule ] } --assert not crashed? --test-- "#1842" --compile-and-run-this-red {do [throw 10]} --assert not crashed? --test-- "#1858" --compile-and-run-this-red { probe type? try [ f: func [] [f] f ] } --assert not crashed? --assert equal? qt/output "error!^/" --test-- "#1866" --compile-and-run-this-red {do [parse "abc" [(return 1)]]} --assert not crashed? --test-- "#1868" --compile-this-red { dot2d: func [a [pair!] b [pair!] return: [float!]][ (to float! a/x * to float! b/x) + (to float! b/y * to float! b/y) ] norm: func [a [pair!] return: [integer!] /local d2 ][ d2: dot2d a a res: to integer! (square-root d2) return res ] distance: func [a [pair!] b [pair!] return: [integer!] /local res ][ norm (a - b) ] } --assert compiled? --test-- "#1878" --compile-and-run-this-red { digit: charset "0123456789" content: "a&&1b&&2c" block: copy [] parse content [ collect into block any [ remove keep ["&&" some digit] (remove/part head block 1 probe head block) | skip ] ] } --assert not crashed? --test-- "#1894" --compile-and-run-this-red {parse [1] [collect into test keep [skip]]} --assert not crashed? --test-- "#1895" --compile-and-run-this-red { fn: func [body [block!]] [collect [do body]] fn [x: 1] } --assert not crashed? --test-- "#1907" --compile-and-run-this-red {do [set: 1]} --assert not crashed? --test-- "#1935" --compile-and-run-this-red {do load {test/:}} --assert not crashed? --test-- "#1969" --compile-and-run-this-red { foo: func [a [float!] b [float!]][a + b] #system [ #call [foo 2.0 4.0] fl: as red-float! stack/arguments probe fl/value ] } --assert equal? "6" trim/all qt/output --test-- "#1974" --compile-and-run-this-red { do [ f1: func [p [string!]][print p] reflect f1 'spec ] } --assert not crashed? ===end-group=== ~~~end-file~~~
Jesus taught how we can be one of his relations, or part of his family. It is a two-step process. First, we must hear God’s word. Second, we must put it into practice. His brother James was listening. James later wrote, “But prove yourselves doers of the word, and not merely hearers, who delude themselves” (James 1:22). He said we are supposed to “receive the word implanted” (v. 21). First, we must each ask ourselves, “Do we want to be close with Jesus, even as close as a brother?” Second, “What are we willing to do in order to achieve that personal closeness with him?” Frankly, not everyone is willing to enter into a personal relationship with Jesus. That’s okay; it is a personal choice, but for those who do crave intimacy with the Lord Jesus, he has shown the way. Jesus put an emphasis on God’s word. He told us we must put ourselves in a position to hear God’s word. Bear in mind they didn’t have the written word in abundance like we do today. We can “hear” the word in many ways. One of those ways, I would suggest, is in reading it for ourselves. Furthermore, there is an incredible amount of free teaching available on television, radio and the internet. Many ministers even allow people to download this content to their own device. I find that remarkable. What other profession gives away its work product for free? We can hear great messages every single day. Brother James teaches us that we are to receive this word implanted. When you hear the word, you must then receive it. Just hearing someone speak the Word of God won’t do a thing for you if you don’t receive it. I believe James is telling us to plant the Word of God in our hearts. This principle is well taught by Jesus in Luke 8:15, where the sower sows the word to our hearts. There are quite a few scriptures that speak about the relationship between God’s Word and our hearts. Look at Deuteronomy 6:6, for example: “These words, which I am commanding you today, shall be on your heart.” This is our starting place, then. We can’t do God’s Word if we do not first hear it and, in that hearing, receive it into our hearts. This is not a brain game. It is a matter of the heart. What, then, is the condition of your heart? Is it the good soil that Jesus speaks about in Luke 8, or is it hard like the stony ground? Fellowship with Jesus begins with God’s Word. Perhaps hearing that thrills you. I hope so. Maybe, though, you are one of those who does not want to give time and place to the Word of God. I am very sorry, because there is no getting around this one. No amount of intellectualization, even, is going to provide an argument that supports that position. We must all come to a place in our lives where we are willing to let Jesus be the absolute Lord of our lives and do, therefore, what he says. Either he is Lord of our lives or he is not, but if he is, then we should follow his instructions and his teachings. He came to give us abundant life. That is all he is trying to do with his instruction. He is trying to bless us, but in order for us to receive the greatest of gifts we are going to have to taste humility. As you turn your face to Jesus, soften your heart and receive his word implanted therein. Then we can talk about putting it into practice. Then we probably won’t have to.
(* PREAMBLE *) From compcert.cfrontend Require Csem. From trancert.lib Require All. From trancert.invariants Require Common. From trancert.properties Require State Events. From trancert.analysis Require Genv. From trancert.transformations Require Specification. From trancert Require simulations.Def. Import Csem lib.All Coqlib Csyntax Ctypes Events Values Memory Mem Errors invariants.Common transformations.Specification common.Events properties.Events common.Values properties.State Globalenvs Globalenvs.Genv analysis.Genv analysis.Function simulations.Def analysis.Program CsemAugmented . Local Notation program := Csyntax.program. Local Notation fundef := Csyntax.fundef. Local Notation ident := AST.ident. Local Notation function := Csyntax.function. Local Notation type := Ctypes.type. Section bisimr. Import Def.Tag. Context {p p': program} {bisim_params: Type} (B:sim_rel p p' bisim_params). Record match_exprstate b f f' fid a a' e e' (* k1 k2 C C' *) m m' k k' := { mcse_fundef: match_fundef B fid f f' ; mcse_expr: match_expr B b f fid a a' ; mcse_env: match_env B b f fid e e' ; mcse_mem: match_mem B b f fid m m' ; mcse_cont: match_cont B b f fid k k' ; (* mcse_ctx: context k1 k2 C /\ context k1 k2 C' /\ match_ctx B b C C' ; *) }. Record match_state b f f' fid a a' e e' m m' k k' := { mcss_fundef: match_fundef B fid f f' ; mcss_stmt: match_stmt B b f fid a a' ; mcss_env: match_env B b f fid e e' ; mcss_mem: match_mem B b f fid m m' ; mcss_cont: match_cont B b f fid k k' ; }. Record match_callstate b f__callee f__callee' id__callee tra tra' m m' k k' := { mcsc_fundef: match_fundef B id__callee f__callee f__callee' ; mcsc_args: list_forall2 (match_value B b f__callee id__callee) tra tra' ; mcsc_mem: match_mem B b f__callee id__callee m m' ; mcsc_cont: match_cont B b f__callee id__callee k k' ; }. Record match_returnstate b f f' id v v' m m' k k' := { mcsr_fundef: match_fundef B id f f' ; mcsr_value: match_value B b f id v v' ; mcsr_mem: match_mem B b f id m m' ; mcsr_cont: match_cont B b f id k k' ; }. Inductive BisimR (b : bisim_params) :estate -> estate -> Prop := | bisim_State: forall stmt ev k m f f' stmt' ev' k' m' fid, match_state b (Ctypes.Internal f) (Ctypes.Internal f') fid stmt stmt' ev ev' m m' k k' -> BisimR b (EState fid f stmt k ev m) (EState fid f' stmt' k' ev' m') | bisim_ExprState: forall e ev k m f f' e' ev' k' m' fid, match_exprstate b (Ctypes.Internal f)(Ctypes.Internal f') fid e e' ev ev' m m' k k' -> BisimR b (EExprState fid f e k ev m) (EExprState fid f' e' k' ev' m') | bisim_CallState: forall vals k m id__callee f__callee f'__callee vals' k' m', match_callstate b f__callee f'__callee id__callee vals vals' m m' k k' -> BisimR b (ECallstate id__callee f__callee vals k m) (ECallstate id__callee f'__callee vals' k' m') | bisim_Returnstate: forall f f' fid v v' k k' m m', match_returnstate b f f' fid v v' m m' k k' -> BisimR b (EReturnstate fid f v k m) (EReturnstate fid f' v' k' m') | bisim_Stuckstate: BisimR b EStuckstate EStuckstate. End bisimr. (* Section Estep. Lemma estep_commute: forall s1 s2 s1' params tra, P__state PARAMS s1 -> P'__state PARAMS s1' -> estep p s1 tra s2 -> BisimR p params s1 s1' -> exists s2' tra' params', BisimR (Csem.globalenv p ) params' s2 s2' /\ list_forall2 (match_event PARAMS) tra tra' /\ estep p' s1' tra' s2'. Proof. intros s1 s2 s1' params tra H H0 H1 H2. inv H1; inv H2. - edestruct match_ctx_inj1 as (C' & a1' & HC' & HmatchC & ? & Ha1 ); eauto. rename a into a1, a' into a2, m into m1, m' into m2, m'0 into m1'. subst. 
edestruct (lred__fwd PARAMS) as ( a2' & m2' & b' & Hincr & Hexpr & Hmem & Hlred' ); eauto. + eapply wf_C_P__expr. eapply H4. eapply (wf_P__expr PARAMS); eauto. econstructor. + eapply (wf_P__mem PARAMS); eauto. econstructor. + eapply (wf_P__env PARAMS); eauto. econstructor. + exists (ExprState f' (C' a2') k' ev' m2'), E0, b'. split; last split; auto; try by econstructor. econstructor; eauto. * eapply env_stable; eauto. * eapply cont_stable; eauto. * eapply wf_match_ctx2; eauto. eapply ctx_stable; eauto. - edestruct match_ctx_inj1 as (C' & a1' & HC' & HmatchC & ? & Ha1 ); eauto. rename a into a1, a' into a2, m into m1, m' into m2, m'0 into m1'. subst. edestruct (rred__fwd PARAMS) as ( a2' & m2' & b' & tra' & Htra & Henv & Hexpr & Hmem & Hrred ); eauto. + eapply wf_C_P__expr. eapply H4. eapply wf_P__expr; eauto. econstructor. + eapply wf_P__mem; eauto. econstructor. + exists (ExprState f' (C' a2') k' ev' m2'), tra', b'. split; last split; auto; try by econstructor. econstructor; eauto; try by econstructor. * eapply env_stable; eauto. * eapply cont_stable; eauto. * eapply wf_match_ctx2; eauto. eapply ctx_stable; eauto. - edestruct match_ctx_inj1 as (C' & a1' & HC' & HmatchC & ? & Ha1 ); eauto. rename a into a1, m into m1, m' into m2. subst. inversion H3. subst. assert (exists fid1, fundef_id p fd fid1 ) as (fid1 & Hfid). { edestruct spec; eauto. apply globalenv_wf in rs_pre_wf. inv rs_pre_wf. unfold fundef_id, Globalenvs.Genv.find_funct, Globalenvs.Genv.find_funct_ptr, Globalenvs.Genv.find_def in *. repeat option_cases. edestruct gwf_ds_correspond; eauto. subst. autoinj. eauto. } edestruct (callred__fwd PARAMS) as ( b' & vals' & fd' & Hincr & Hargs & Hfundef' & Hfid' & Hcallred); eauto. + eapply wf_C_P__expr. eapply H4. eapply wf_P__expr; eauto. econstructor. + eapply wf_P__mem; eauto. econstructor. + eexists (Callstate fd' _ _ _). exists E0, b'. split; eauto. econstructor; eauto. * eapply mem_stable; eauto. * eapply cont_stable; eauto. eapply match_cont_kcall; eauto. * eapply list_forall2_imply; eauto. intros v1 v2 H7 H8 H11. eapply val_stable; eauto. * split; constructor; eauto. - edestruct match_ctx_inj1 as (C' & a1' & HC' & HmatchC & ? & Ha1 ); eauto. rename a into a1, m into m1, m' into m2. subst. exists Stuckstate, E0, params. repeat econstructor; eauto. contradict H4. inv H4. + edestruct match_expr_eval; eauto. subst. constructor. + edestruct match_expr_eloc; eauto. decomp. subst. constructor. + rename e0 into a1', m2 into m1', m' into m2'. edestruct match_ctx_inj2 as (C'' & a'' & HC & HCC' & ? & Hmexpr); eauto. subst. edestruct (lred__bwd PARAMS) as ( a2 & m2 & b' & Hincr & Hexpr & Hmem & Hlred' ); try eapply H1. * eapply wf_C_P'__expr with (C := C0); eauto. eapply wf_C_P'__expr with (C := C'); eauto. eapply wf_P'__expr; eauto. econstructor. * eapply wf_P__mem ; eauto. econstructor. * eapply (wf_P__env PARAMS); eauto. econstructor. * eassumption. * eassumption. * assumption. * econstructor 3; eauto. + rename e0 into a1', m2 into m1', m' into m2'. edestruct match_ctx_inj2 as (C'' & a'' & HC & HCC' & ? & Hmexpr); eauto. subst. edestruct (rred__bwd PARAMS) as ( a2 & m2 & b' & tra & Hincr & Htra & Hexpr & Hmem & Hlred' ); try eapply H1. * eapply wf_C_P'__expr with (C := C0); eauto. eapply wf_C_P'__expr with (C := C'); eauto. eapply wf_P'__expr; eauto. econstructor. * eapply wf_P'__mem ; eauto. econstructor. * eassumption. * eassumption. * econstructor 4; eauto. + edestruct match_ctx_inj2 as (C'' & a'' & HC & HCC & ? & Hmexpr ); eauto. subst. rename e0 into a', fd into fd', m2 into m'. 
assert (exists fid1, fundef_id p' fd' fid1 ) as (fid1 & Hfid). { edestruct spec; eauto. inv H1. apply globalenv_wf in rs_post_wf. inv rs_post_wf. unfold fundef_id, Globalenvs.Genv.find_funct, Globalenvs.Genv.find_funct_ptr, Globalenvs.Genv.find_def in *. repeat option_cases. edestruct gwf_ds_correspond; eauto. subst. autoinj. eauto. } edestruct (callred__bwd PARAMS) as ( b' & vals' & fd & Hincr & Hargs & Hfundef' & Hfid' & Hcallred);try eapply H1. * eapply wf_C_P'__expr. eapply H2. eapply wf_C_P'__expr. eapply HC'. eapply wf_P'__expr; eauto. econstructor. * eapply wf_P'__mem; eauto. econstructor. * eassumption. * eassumption. * eassumption. * econstructor 5; eassumption. Qed. End Bisim . *)
module LaxWendroff function generate_solver(f₀, f, c) function lax_wendroff!(g, f, i, i⁻, i⁺) g[i] = f[i] - 0.5*c*(f[i⁺]-f[i⁻]) + 0.5*c^2*(f[i⁺]-2*f[i]+f[i⁻]) end function solve!() lax_wendroff!(f, f₀, 1, length(f), 2) @inbounds @fastmath @simd for i = 2:length(f)-1 lax_wendroff!(f, f₀, i, i-1, i+1) end lax_wendroff!(f, f₀, length(f), length(f)-1, 1) end function solve!(h, h₀) lax_wendroff!(h, h₀, 1, length(f), 2) @inbounds @fastmath @simd for i = 2:length(f)-1 lax_wendroff!(h, h₀, i, i-1, i+1) end lax_wendroff!(h, h₀, length(f), length(f)-1, 1) end return solve! end function generate_solver(f₀, f) function lax_wendroff!(g, f, i, i⁻, i⁺, c) g[i] = f[i] - 0.5*c*(f[i⁺]-f[i⁻]) + 0.5*c^2*(f[i⁺]-2*f[i]+f[i⁻]) end function solve!(c) lax_wendroff!(f, f₀, 1, length(f), 2, c) @inbounds @fastmath @simd for i = 2:length(f)-1 lax_wendroff!(f, f₀, i, i-1, i+1, c) end lax_wendroff!(f, f₀, length(f), length(f)-1, 1, c) end function solve!(h, h₀, c) lax_wendroff!(h, h₀, 1, length(f), 2, c) @inbounds @fastmath @simd for i = 2:length(f)-1 lax_wendroff!(h, h₀, i, i-1, i+1, c) end lax_wendroff!(h, h₀, length(f), length(f)-1, 1, c) end return solve! end end # module
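The module above wires the classic Lax–Wendroff update `g[i] = f[i] - c/2*(f[i+1]-f[i-1]) + c^2/2*(f[i+1]-2f[i]+f[i-1])` to periodic boundaries by passing wrapped indices at both ends. A vectorized NumPy sketch of one step under the same assumptions (uniform grid, constant Courant number `c = u*dt/dx`; stability requires |c| ≤ 1):

```python
import numpy as np

def lax_wendroff_step(f0: np.ndarray, c: float) -> np.ndarray:
    """One Lax-Wendroff step for 1D linear advection, periodic boundaries."""
    fp = np.roll(f0, -1)   # f[i+1], wrapping the last cell to the first
    fm = np.roll(f0, +1)   # f[i-1], wrapping the first cell to the last
    return f0 - 0.5 * c * (fp - fm) + 0.5 * c**2 * (fp - 2.0 * f0 + fm)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
f = np.exp(-((x - 0.5) ** 2) / 0.005)    # Gaussian pulse
for _ in range(100):
    f = lax_wendroff_step(f, c=0.5)      # pulse advects to the right
```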
(* Author: Tobias Nipkow, 2007 *) theory QEdlo_inf imports DLO begin subsection "Quantifier elimination with infinitesimals" text{* This section presents a new quantifier elimination procedure for dense linear orders based on (the simulation of) infinitesimals. It is a fairly straightforward adaptation of the analogous algorithm by Loos and Weispfenning for linear arithmetic described in \S\ref{sec:lin-inf}. *} fun asubst_peps :: "nat \<Rightarrow> atom \<Rightarrow> atom fm" ("asubst\<^sub>+") where "asubst_peps k (Less 0 0) = FalseF" | "asubst_peps k (Less 0 (Suc j)) = Atom(Less k j)" | "asubst_peps k (Less (Suc i) 0) = (if i=k then TrueF else Or (Atom(Less i k)) (Atom(Eq i k)))" | "asubst_peps k (Less (Suc i) (Suc j)) = Atom(Less i j)" | "asubst_peps k (Eq 0 0) = TrueF" | "asubst_peps k (Eq 0 _) = FalseF" | "asubst_peps k (Eq _ 0) = FalseF" | "asubst_peps k (Eq (Suc i) (Suc j)) = Atom(Eq i j)" abbreviation subst_peps :: "atom fm \<Rightarrow> nat \<Rightarrow> atom fm" ("subst\<^sub>+") where "subst\<^sub>+ \<phi> k \<equiv> amap\<^bsub>fm\<^esub> (asubst\<^sub>+ k) \<phi>" definition "nolb \<phi> xs l x = (\<forall>y\<in>{l<..<x}. y \<notin> LB \<phi> xs)" lemma nolb_And[simp]: "nolb (And \<phi>\<^sub>1 \<phi>\<^sub>2) xs l x = (nolb \<phi>\<^sub>1 xs l x \<and> nolb \<phi>\<^sub>2 xs l x)" apply(clarsimp simp:nolb_def) apply blast done lemma nolb_Or[simp]: "nolb (Or \<phi>\<^sub>1 \<phi>\<^sub>2) xs l x = (nolb \<phi>\<^sub>1 xs l x \<and> nolb \<phi>\<^sub>2 xs l x)" apply(clarsimp simp:nolb_def) apply blast done declare[[simp_depth_limit=3]] lemma I_subst_peps2: "nqfree \<phi> \<Longrightarrow> xs!l < x \<Longrightarrow> nolb \<phi> xs (xs!l) x \<Longrightarrow> x \<notin> EQ \<phi> xs \<Longrightarrow> \<forall>y \<in> {xs!l <.. x}. DLO.I \<phi> (y#xs) \<Longrightarrow> DLO.I (subst\<^sub>+ \<phi> l) xs" proof(induct \<phi>) case FalseF thus ?case by simp (metis linorder_antisym_conv1 linorder_neq_iff) next case (Atom a) show ?case proof(cases "(l,a)" rule:asubst_peps.cases) case 3 thus ?thesis using Atom by (auto simp: nolb_def EQ_def Ball_def) (metis One_nat_def linorder_antisym_conv1 not_less_iff_gr_or_eq) qed (insert Atom, auto simp: nolb_def EQ_def Ball_def) next case Or thus ?case by(simp add: Ball_def)(metis order_refl innermost_intvl) qed simp_all declare[[simp_depth_limit=50]] lemma I_subst_peps: "nqfree \<phi> \<Longrightarrow> DLO.I (subst\<^sub>+ \<phi> l) xs \<longrightarrow> (\<exists>leps>xs!l. \<forall>x. xs!l < x \<and> x \<le> leps \<longrightarrow> DLO.I \<phi> (x#xs))" proof(induct \<phi>) case TrueF thus ?case by simp (metis no_ub) next case (Atom a) show ?case proof (cases "(l,a)" rule: asubst_peps.cases) case 2 thus ?thesis using Atom apply(auto) apply(drule dense) apply(metis One_nat_def xt1(7)) done next case 3 thus ?thesis using Atom apply(auto) apply (metis no_ub) apply (metis no_ub less_trans) apply (metis no_ub) done next case 4 thus ?thesis using Atom by(auto)(metis no_ub) next case 5 thus ?thesis using Atom by(auto)(metis no_ub) next case 8 thus ?thesis using Atom by(auto)(metis no_ub) qed (insert Atom, auto) next case And thus ?case apply clarsimp apply(rule_tac x="min leps lepsa" in exI) apply simp done next case Or thus ?case by force qed simp_all definition "qe_eps\<^sub>1(\<phi>) = (let as = DLO.atoms\<^sub>0 \<phi>; lbs = lbounds as; ebs = ebounds as in list_disj (inf\<^sub>- \<phi> # map (subst\<^sub>+ \<phi>) lbs @ map (subst \<phi>) ebs))" theorem I_qe_eps1: assumes "nqfree \<phi>" shows "DLO.I (qe_eps\<^sub>1 \<phi>) xs = (\<exists>x. 
DLO.I \<phi> (x#xs))" (is "?QE = ?EX") proof let ?as = "DLO.atoms\<^sub>0 \<phi>" let ?ebs = "ebounds ?as" assume ?QE { assume "DLO.I (inf\<^sub>- \<phi>) xs" hence ?EX using `?QE` min_inf[of \<phi> xs] `nqfree \<phi>` by(auto simp add:qe_eps\<^sub>1_def amap_fm_list_disj) } moreover { assume "\<forall>i \<in> set ?ebs. \<not>DLO.I \<phi> (xs!i # xs)" "\<not> DLO.I (inf\<^sub>- \<phi>) xs" with `?QE` `nqfree \<phi>` obtain l where "DLO.I (subst\<^sub>+ \<phi> l) xs" by(fastforce simp: I_subst qe_eps\<^sub>1_def set_ebounds set_lbounds) then obtain leps where "DLO.I \<phi> (leps#xs)" using I_subst_peps[OF `nqfree \<phi>`] by fastforce hence ?EX .. } ultimately show ?EX by blast next let ?as = "DLO.atoms\<^sub>0 \<phi>" let ?ebs = "ebounds ?as" assume ?EX then obtain x where x: "DLO.I \<phi> (x#xs)" .. { assume "DLO.I (inf\<^sub>- \<phi>) xs" hence ?QE using `nqfree \<phi>` by(auto simp:qe_eps\<^sub>1_def) } moreover { assume "\<exists>k \<in> set ?ebs. DLO.I (subst \<phi> k) xs" hence ?QE by(auto simp:qe_eps\<^sub>1_def) } moreover { assume "\<not> DLO.I (inf\<^sub>- \<phi>) xs" and "\<forall>k \<in> set ?ebs. \<not> DLO.I (subst \<phi> k) xs" hence noE: "\<forall>e \<in> EQ \<phi> xs. \<not> DLO.I \<phi> (e#xs)" using `nqfree \<phi>` by (auto simp:set_ebounds EQ_def I_subst nth_Cons' split:split_if_asm) hence "x \<notin> EQ \<phi> xs" using x by fastforce obtain l where "l \<in> LB \<phi> xs" "l < x" using LBex[OF `nqfree \<phi>` x `\<not> DLO.I(inf\<^sub>- \<phi>) xs` `x \<notin> EQ \<phi> xs`] .. have "\<exists>l\<in>LB \<phi> xs. l<x \<and> nolb \<phi> xs l x \<and> (\<forall>y. l < y \<and> y \<le> x \<longrightarrow> DLO.I \<phi> (y#xs))" using dense_interval[where P = "\<lambda>x. DLO.I \<phi> (x#xs)", OF finite_LB `l\<in>LB \<phi> xs` `l<x` x] x innermost_intvl[OF `nqfree \<phi>` _ _ `x \<notin> EQ \<phi> xs`] by (simp add:nolb_def) then obtain m where *: "Less (Suc m) 0 \<in> set ?as \<and> xs!m < x \<and> nolb \<phi> xs (xs!m) x \<and> (\<forall>y. xs!m < y \<and> y \<le> x \<longrightarrow> DLO.I \<phi> (y#xs))" by blast then have "DLO.I (subst\<^sub>+ \<phi> m) xs" using noE by(auto intro!: I_subst_peps2[OF `nqfree \<phi>`]) with * have ?QE by(simp add:qe_eps\<^sub>1_def bex_Un set_lbounds set_ebounds) metis } ultimately show ?QE by blast qed lemma qfree_asubst_peps: "qfree (asubst\<^sub>+ k a)" by(cases "(k,a)" rule:asubst_peps.cases) simp_all lemma qfree_subst_peps: "nqfree \<phi> \<Longrightarrow> qfree (subst\<^sub>+ \<phi> k)" by(induct \<phi>) (simp_all add:qfree_asubst_peps) lemma qfree_qe_eps\<^sub>1: "nqfree \<phi> \<Longrightarrow> qfree(qe_eps\<^sub>1 \<phi>)" apply(simp add:qe_eps\<^sub>1_def) apply(rule qfree_list_disj) apply (auto simp:qfree_min_inf qfree_subst_peps qfree_map_fm) done definition "qe_eps = DLO.lift_nnf_qe qe_eps\<^sub>1" lemma qfree_qe_eps: "qfree(qe_eps \<phi>)" by(simp add: qe_eps_def DLO.qfree_lift_nnf_qe qfree_qe_eps\<^sub>1) lemma I_qe_eps: "DLO.I (qe_eps \<phi>) xs = DLO.I \<phi> xs" by(simp add:qe_eps_def DLO.I_lift_nnf_qe qfree_qe_eps\<^sub>1 I_qe_eps1) end
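The key idea of `asubst\<^sub>+` is to evaluate an atom at the virtual point $x = l + \varepsilon$, where $l$ is the lower bound `xs!k` and $\varepsilon$ an infinitesimal. Written out with named variables instead of de Bruijn indices, the cases above amount to:

```latex
\begin{align*}
  (x < x)\,[x := l + \varepsilon] &\rightsquigarrow \mathit{false}, &
  (x < y)\,[x := l + \varepsilon] &\rightsquigarrow l < y,\\
  (y < x)\,[x := l + \varepsilon] &\rightsquigarrow y < l \lor y = l
    \quad(\mathit{true}\text{ if }y \equiv l), &
  (x = y)\,[x := l + \varepsilon] &\rightsquigarrow \mathit{false}
    \quad(y \not\equiv x),
\end{align*}
```

since $\varepsilon$ is smaller than the distance to every other term; atoms not mentioning $x$ are unchanged up to reindexing.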
cdis Forecast Systems Laboratory cdis NOAA/OAR/ERL/FSL cdis 325 Broadway cdis Boulder, CO 80303 cdis cdis Forecast Research Division cdis Local Analysis and Prediction Branch cdis LAPS cdis cdis This software and its documentation are in the public domain and cdis are furnished "as is." The United States government, its cdis instrumentalities, officers, employees, and agents make no cdis warranty, express or implied, as to the usefulness of the software cdis and documentation for any purpose. They assume no responsibility cdis (1) for the use of the software and documentation; or (2) to provide cdis technical support to users. cdis cdis Permission to use, copy, modify, and distribute this software is cdis hereby granted, provided that the entire disclaimer notice appears cdis in all copies. All modifications to this software must be clearly cdis documented, and are solely the responsibility of the agent making cdis the modifications. If significant modifications or enhancements cdis are made to this software, the FSL Software Policy Manager cdis ([email protected]) should be notified. cdis cdis cdis cdis cdis cdis cdis c c subroutine solar_normal(ni,nj,topo,dx,dy,lat,lon 1 ,sol_alt,sol_azi,alt_norm) c Compute solar altitude normal to the terrain include 'trigd.inc' angleunitvectors(a1,a2,a3,b1,b2,b3) = acosd(a1*b1+a2*b2+a3*b3) real topo(ni,nj) ! I Terrain elevation (m) real lat(ni,nj) ! I Lat (deg) real lon(ni,nj) ! I Lon (deg) real sol_alt(ni,nj) ! I Solar Altitude (deg) real sol_azi(ni,nj) ! I Solar Azimuth (deg) real alt_norm(ni,nj) ! O Solar Alt w.r.t. terrain normal real dx(ni,nj) ! I Grid spacing in X direction (m) real dy(ni,nj) ! I Grid spacing in Y direction (m) real rot(ni,nj) ! L Rotation Angle (deg) write(6,*)' Subroutine solar_normal' ! call get_grid_spacing_array(lat,lon,ni,nj,dx,dy) ! Default value alt_norm = sol_alt call projrot_latlon_2d(lat,lon,ni,nj,rot,istatus) do j=2,nj-1 do i=2,ni-1 ! Determine centered terrain slope dterdx = (topo(i+1,j )-topo(i-1,j )) / (2. * dx(i,j)) dterdy = (topo(i ,j+1)-topo(i ,j-1)) / (2. * dy(i,j)) terrain_slope = sqrt(dterdx**2 + dterdy**2) if(terrain_slope .gt. .001)then ! machine/terrain epsilon threshold ! Direction cosines of terrain normal dircos_tx = -dterdx / (sqrt(dterdx**2 + 1.)) dircos_ty = -dterdy / (sqrt(dterdy**2 + 1.)) dircos_tz = 1.0 / sqrt(1.0 + terrain_slope**2) sol_azi_grid = sol_azi(i,j) - rot(i,j) ! Direction cosines of sun dircos_sx = cosd(sol_alt(i,j)) * sind(sol_azi_grid) dircos_sy = cosd(sol_alt(i,j)) * cosd(sol_azi_grid) dircos_sz = sind(sol_alt(i,j)) ! Angle between terrain normal and sun result = angleunitvectors(dircos_tx,dircos_ty,dircos_tz 1 ,dircos_sx,dircos_sy,dircos_sz) alt_norm(i,j) = 90. - result if(i .eq. ni/2 .and. j .eq. nj/2)then write(6,*)'solar alt/az, dterdx, dterdy, alt_norm', 1 sol_alt(i,j),sol_azi(i,j),dterdx,dterdy,alt_norm(i,j) write(6,*)' dircos_t ',dircos_tx,dircos_ty,dircos_tz write(6,*)' dircos_s ',dircos_sx,dircos_sy,dircos_sz write(6,*)' terrain slope angle: ',90.-asind(dircos_tz) write(6,*)' rot = ',rot(i,j) endif else alt_norm(i,j) = sol_alt(i,j) ! terrain virtually flat if(i .eq. ni/2 .and. j .eq. nj/2)then write(6,*)'solar alt/az, alt_norm', 1 sol_alt(i,j),sol_azi(i,j),alt_norm(i,j) endif endif enddo !i enddo !j return end
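In outline, `solar_normal` builds one unit vector for the terrain normal and one for the sun from direction cosines, takes the angle between them, and reports 90° minus that angle as the slope-relative solar altitude. A compact Python sketch of the geometry (using the standard unit normal to $z = h(x,y)$; the Fortran above normalizes the $x$ and $y$ direction cosines separately, which agrees closely for gentle slopes):

```python
import numpy as np

def solar_alt_normal(dhdx, dhdy, sol_alt_deg, sol_azi_deg):
    """Solar altitude measured from the terrain surface, in degrees."""
    n = np.array([-dhdx, -dhdy, 1.0])
    n /= np.linalg.norm(n)                        # terrain unit normal
    alt, azi = np.radians([sol_alt_deg, sol_azi_deg])
    s = np.array([np.cos(alt) * np.sin(azi),      # sun unit vector
                  np.cos(alt) * np.cos(azi),
                  np.sin(alt)])
    return 90.0 - np.degrees(np.arccos(np.clip(n @ s, -1.0, 1.0)))

# Flat terrain: altitude w.r.t. the normal equals the solar altitude.
assert abs(solar_alt_normal(0.0, 0.0, 30.0, 135.0) - 30.0) < 1e-9
```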
! { dg-do run } ! { dg-options "-fcheck=recursion" } ! { dg-shouldfail "Recursion check" } ! ! { dg-output "Fortran runtime error: Recursive call to nonrecursive procedure 'f'" } ! ! PR fortran/39577 ! ! Invalid - recursion program test call f(.false.) call f(.true.) contains subroutine f(rec) logical :: rec if(rec) then call g() end if return end subroutine f subroutine g() call f(.false.) return end subroutine g end program test
\documentclass[12pt]{article} \usepackage{graphics} \newcommand{\kg}{\mathrm{kg}} \newcommand{\m}{\mathrm{m}} \newcommand{\s}{\mathrm{s}} \renewcommand{\deg}{\mathrm{deg}} \newcommand{\km}{\mathrm{km}} \newcommand{\cm}{\mathrm{cm}} \newcommand{\mps}{\m\,\s^{-1}} \newcommand{\mpss}{\m\,\s^{-2}} \newcommand{\kgpmmm}{\kg\,\m^{-3}} \newcommand{\N}{\mathrm{N}} \newcommand{\J}{\mathrm{J}} \newcommand{\Npmm}{\N\,\m^{-2}} \newcommand{\tv}[1]{\mathbf{\vec{#1}}} \newcommand{\dd}{\mathrm{d}} \newcommand{\cell}[1]{\texttt{{#1}}} \newcounter{problem} \addtolength{\oddsidemargin}{-1in} \addtolength{\textheight}{\headheight} \setlength{\headheight}{0in} \addtolength{\textheight}{\headsep} \setlength{\headsep}{0in} \setlength{\marginparwidth}{2in} \begin{document} \section*{NYU Physics 1---midterm exam} Thursday 2009 October 15 in lecture. \section*{Name:} ~ \vfill ~ \clearpage \paragraph{Problem~\theproblem:}\refstepcounter{problem}% The Moon is (1/80) the mass of the Earth. If the compositions are similar, what is the approximate radius of the Moon? Give your answer in units of $\km$. State your assumptions and show your work explicitly. ~ \vfill ~ \paragraph{Problem~\theproblem:}\refstepcounter{problem}% In lecture we considered a car sliding around a banked circular turn at constant speed, in the absence of friction. Now imagine that the car has static friction acting for transverse forces (that is, opposing sliding up or down the bank) with a coefficient of $0.2$. Draw the free-body diagram for the car---show only the forces and their directions and approximate magnitudes---when it is going around the turn at the maximum speed at which the car can go without sliding uphill or downhill. If you need to assume anything, state your assumptions explicitly. ~ \vfill ~ \clearpage \paragraph{Problem~\theproblem:}\refstepcounter{problem}% Compute the velocity of the Space Shuttle in its orbit around the Earth in units of $\mps$. Show your calculation. Assume the shuttle is orbiting very close to the surface of the Earth (not a bad approximation, as we discussed in class). If you need to assume anything else, state your assumptions explicitly. ~ \vfill ~ \paragraph{Problem~\theproblem:}\refstepcounter{problem}% A block of mass $m$ sits on an inclined plane, inclined at an angle $\theta = 20\,\deg$ to the horizontal. If the coefficient of friction is $\mu = 0.9$ and the acceleration due to gravity is $g$, what is the magnitude of the frictional force on the block, in terms of the symbols given? If you need to assume anything, state your assumptions clearly. ~ \vfill ~ \clearpage \paragraph{Problem~\theproblem:}\refstepcounter{problem}% Imagine a particle of mass $m$ moving back and forth in the $x$ direction according to the equation \begin{equation} x(t) = A\,\cos\left(\omega\,t\right) \quad . \end{equation} What is the $x$-direction force as a function of time $F_x(t)$? ~ \vfill ~ \paragraph{Problem~\theproblem:}\refstepcounter{problem}% What formulae should go into cells \cell{E9} and \cell{F9} in this spreadsheet, which is integrating a trajectory? State your formulae in terms of the cell numbers (such as ``\cell{I8}''), not variables (such as ``$a_x$''). \\ \resizebox{\textwidth}{!}{\includegraphics{../xls/exam.eps}} ~ \vfill ~ \clearpage \paragraph{Problem~\theproblem:}\refstepcounter{problem}% In lecture, I swung a full cup of iced tea in a vertical loop over my head, like a roller coaster doing a loop-de-loop.
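(placeholder - removed below)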
Explain, in \emph{50 words or less} why the iced tea did not fall out when the cup was upside-down. ~ \vfill ~ \end{document}
theorem ex (h₁ : α = β) (as : List α) (bs : List β) (h₂ : (h ▸ as) = bs) : True := True.intro
From iris.base_logic Require Export invariants. From iris.program_logic Require Export weakestpre. From iris.heap_lang Require Export lang proofmode notation. From iris.heap_lang.lib Require Export nondet_bool. From iris_examples.proph.lib Require Import typed_proph. From iris_examples.proph Require Import clairvoyant_coin_spec. (* Clairvoyant coin with *typed* prophecies. *) Definition new_coin: val := λ: <>, (ref (nondet_bool #()), NewProph). Definition read_coin : val := λ: "cp", !(Fst "cp"). Definition toss_coin : val := λ: "cp", let: "c" := Fst "cp" in let: "p" := Snd "cp" in let: "r" := nondet_bool #() in "c" <- "r";; resolve_proph: "p" to: "r";; #(). Section proof. Context `{!heapG Σ}. Definition coin (cp : val) (bs : list bool) : iProp Σ := (∃ (c : loc) (p : proph_id) (b : bool) (bs' : list bool), ⌜cp = (#c, #p)%V⌝ ∗ ⌜bs = b :: bs'⌝ ∗ c ↦ #b ∗ (typed_proph_prop BoolTypedProph) p bs')%I. Lemma coin_exclusive (cp : val) (bs1 bs2 : list bool) : coin cp bs1 -∗ coin cp bs2 -∗ False. Proof. iIntros "H1 H2". iDestruct "H1" as (c1 p1 b1 bs'1) "(-> & -> & _ & Hp1)". iDestruct "H2" as (c2 p2 b2 bs'2) "(% & -> & _ & Hp2)". simplify_eq. iApply (typed_proph_prop_excl BoolTypedProph). iFrame. Qed. Lemma new_coin_spec : {{{ True }}} new_coin #() {{{ c bs, RET c; coin c bs }}}. Proof. iIntros (Φ) "_ HΦ". wp_lam. wp_apply (typed_proph_wp_new_proph BoolTypedProph); first done. iIntros (bs p) "Hp". wp_apply nondet_bool_spec; first done. iIntros (b) "_". wp_alloc c as "Hc". wp_pair. iApply ("HΦ" $! _ (b :: bs)). iExists c, p, b, bs. by iFrame. Qed. Lemma read_coin_spec cp bs : {{{ coin cp bs }}} read_coin cp {{{ b bs', RET #b; ⌜bs = b :: bs'⌝ ∗ coin cp bs }}}. Proof. iIntros (Φ) "Hc HΦ". iDestruct "Hc" as (c p b bs') "[-> [-> [Hc Hp]]]". wp_lam. wp_load. iApply "HΦ". iSplit; first done. iExists c, p, b, bs'. by iFrame. Qed. Lemma toss_coin_spec cp bs : {{{ coin cp bs }}} toss_coin cp {{{ b bs', RET #(); ⌜bs = b :: bs'⌝ ∗ coin cp bs' }}}. Proof. iIntros (Φ) "Hc HΦ". iDestruct "Hc" as (c p b bs') "[-> [-> [Hc Hp]]]". wp_lam. wp_pures. wp_apply nondet_bool_spec; first done. iIntros (r) "_". wp_store. wp_apply (typed_proph_wp_resolve BoolTypedProph with "[Hp]"); try done. wp_pures. iIntros (bs) "-> Hp". wp_seq. iApply "HΦ"; iSplit; first done. iExists c, p, r, bs. by iFrame. Qed. End proof. Definition clairvoyant_coin_spec_instance `{!heapG Σ} : clairvoyant_coin_spec.clairvoyant_coin_spec Σ := {| clairvoyant_coin_spec.new_coin_spec := new_coin_spec; clairvoyant_coin_spec.read_coin_spec := read_coin_spec; clairvoyant_coin_spec.toss_coin_spec := toss_coin_spec; clairvoyant_coin_spec.coin_exclusive := coin_exclusive |}. Typeclasses Opaque coin.
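Operationally, the abstract state `coin cp bs` says the head of `bs` is the coin's current value and the tail is the prophesied future. A hypothetical Python model of just that state evolution (no concurrency, heap, or separation logic; purely to illustrate the three specs):

```python
import random

class ClairvoyantCoin:
    """Toy model: bs = current value :: prophesied future tosses."""
    def __init__(self, future: int = 8):
        # new_coin: nondeterministic current value plus a prophecy list
        self.bs = [random.choice((False, True)) for _ in range(future + 1)]

    def read(self) -> bool:
        return self.bs[0]       # read_coin: head of bs, state unchanged

    def toss(self) -> bool:
        self.bs.pop(0)          # toss_coin: bs = b :: bs' steps to bs'
        return self.bs[0]

c = ClairvoyantCoin()
predicted = c.bs[1]             # "clairvoyance": the next toss is already fixed
assert c.toss() == predicted
```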
State Before: α : Type u J : Type w inst✝³ : SmallCategory J inst✝² : FinCategory J inst✝¹ : SemilatticeSup α inst✝ : OrderBot α x y : α ⊢ colimit (pair x y) = Finset.sup Finset.univ (pair x y).toPrefunctor.obj State After: no goals Tactic: rw [finite_colimit_eq_finset_univ_sup (pair x y)] State Before: α : Type u J : Type w inst✝³ : SmallCategory J inst✝² : FinCategory J inst✝¹ : SemilatticeSup α inst✝ : OrderBot α x y : α ⊢ x ⊔ (y ⊔ ⊥) = x ⊔ y State After: no goals Tactic: rw [sup_bot_eq]
# eval_piecewise has the same calling convention as eval_factor. # It simplifies piecewise expressions. eval_piecewise := proc(e :: specfunc(piecewise), kb :: t_kb, mode :: identical(`*`,`+`), loops :: list([identical(product,Product,sum,Sum), name=range]), $) local default, kbs, pieces, i, cond, inds, res, s, r, x, b, a; default := 0; # the catch-all "else" result kbs[1] := kb; for i from 1 by 2 to nops(e) do if i = nops(e) then default := op(i,e); pieces[i] := default; else # Simplify piecewise conditions using KB cond := op(i,e); # cond := eval_factor(cond, kbs[i], `+`, []); kbs[i+1] := assert(cond, kbs[i]); # This condition is false in context, so delete this piece # by not putting anything inside "pieces" if kbs[i+1] :: t_kb then # find all of the assertions in "kbs[i+1] - kbs[i]" cond := map(proc(cond::[identical(assert),anything], $) op(2,cond) end proc, kb_subtract(kbs[i+1], kbs[i])); if nops(cond) = 0 then default := op(i+1,e); pieces[i] := default; break; else cond := `if`(nops(cond)=1, op(1,cond), And(op(cond))); end if; end if; # TODO: Extend KB interface to optimize for # entails(kb,cond) := nops(kb_subtract(assert(cond,kb),kb))=0 kbs[i+2] := assert(Not(cond), kbs[i]); if not(kb_entails(kbs[i], kbs[i+2])) then pieces[i] := cond; pieces[i+1] := op(i+1,e); else # This condition is false in context, so delete this piece # by not putting anything inside "pieces" end if end if end do; # Combine duplicate branches at end inds := [indices(pieces, 'nolist', 'indexorder')]; for i in ListTools:-Reverse(select(type, inds, 'even')) do if Testzero(pieces[i]-default) then pieces[i ] := evaln(pieces[i ]); pieces[i-1] := evaln(pieces[i-1]); else break; end if end do; # Special processing for when the pieces are few res := [entries(pieces, 'nolist', 'indexorder')]; if nops(res) <= 1 then return eval_factor(default, kb, mode, loops); end if; if nops(res) <= 3 and op(1,res) :: '{`=`,And(specfunc(And),Not(specfunc(Not(`=`),And)))}' and Testzero(default - mode()) then # Reduce product(piecewise(i=3,f(i),1),i=1..10) to f(3) r := op(1,res); r := `if`(r::`=`, And(r), select(type,r,`=`)); for i from 1 to nops(loops) do x := op([i,2,1],loops); s, r := selectremove(depends, r, x); for cond in s do if ispoly(lhs(cond)-rhs(cond), 'linear', x, 'b', 'a') then b := Normalizer(-b/a); if kb_entails(kb, And(b :: integer, op([i,2,2,1],loops) <= b, b <= op([i,2,2,2],loops)) ) then kb := assert(x=b, kb);# TODO: why not just use kb? ASSERT(type(kb,t_kb), "eval_piecewise{product of pw}: not a kb"); res := `if`(op(1,res) = cond, op(2,res), piecewise(bool_And(op(remove(`=`, op(1,res), cond))), op(2..-1,res))); return eval_factor(eval(res, x=b), kb, mode, eval(subsop(i=NULL, loops), x=b)); end if; end if; end do; if nops(r) = 0 then break end if; end do; end if; # Recursively process pieces inds := [indices(pieces, 'nolist', 'indexorder')]; for i in inds do if i::even or i=op(-1,inds) then # only simplify if the piece is not default; # note that kbs[i] could be NotAKB(), but this is still valid if not Testzero(pieces[i] - default) then pieces[i] := eval_factor(pieces[i], kbs[i], mode, []); end if; end if; end do; res := piecewise(entries(pieces, 'nolist', 'indexorder')); for i in loops do res := op(1,i)(res, op(2,i)) end do; return res; end proc;
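Two of the clean-up steps above are easy to state in isolation: branches whose conditions are unsatisfiable in the ambient KB are dropped, and trailing branches whose value already equals the catch-all default are merged into it. A small Python sketch of the second step, representing a piecewise as `(condition, value)` pairs plus a default:

```python
def merge_trailing(pieces, default):
    """Drop (condition, value) pairs at the end whose value equals default."""
    out = list(pieces)
    while out and out[-1][1] == default:
        out.pop()
    return out

# piecewise(i=3, f3, i=4, 1, 1) with default 1 collapses to piecewise(i=3, f3, 1)
assert merge_trailing([("i=3", "f3"), ("i=4", 1)], 1) == [("i=3", "f3")]
```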
classdef TuneAGC < adi.common.DebugAttribute & adi.common.RegisterReadWrite properties (Nontunable, Hidden) CustomAGC = 0; AttackDelay = 1; PeakOverloadWaitTime = 10; AGCLockLevel = 10; DecStepSizeFullTableCase3 = 3; ADCLargeOverloadThresh = 58; ADCSmallOverloadThresh = 47; DecStepSizeFullTableCase2 = 3; DecStepSizeFullTableCase1 = 3; LargeLMTOverloadThresh = 35; SmallLMTOverloadThresh = 25; SettlingDelay = 3; EnergyLostThresh = 3; LowPowerThresh = 15; IncrementGainStep FAGCLockLevelGainIncreaseUpperLimit = 7; FAGCLPThreshIncrementTime = 3; DecPowMeasurementDuration = 16; end properties (Constant, Hidden, Access = private) % Register addresses in hexadecimal AttackDelay_Reg = '022'; PeakOverloadWaitTime_Reg = '0FE'; AGCLockLevel_Reg = '101'; DecStepSizeFullTableCase3_Reg = '103'; ADCSmallOverloadThresh_Reg = '104'; ADCLargeOverloadThresh_Reg = '105'; DecStepSizeFullTableCase2_Reg = '106'; DecStepSizeFullTableCase1_Reg = '106'; LargeLMTOverloadThresh_Reg = '108'; SmallLMTOverloadThresh_Reg = '107'; SettlingDelay_Reg = '111'; EnergyLostThresh_Reg = '112'; LowPowerThresh_Reg = '114'; IncrementGainStep_Reg = '117'; FAGCLockLevelGainIncreaseUpperLimit_Reg = '118'; FAGCLPThreshIncrementTime_Reg = '11B'; DecPowMeasurementDuration_Reg = '15C'; % Register mask in binary AttackDelay_Mask = '11000000'; PeakOverloadWaitTime_Mask = '11100000'; AGCLockLevel_Mask = '10000000'; DecStepSizeFullTableCase3_Mask = '11100011'; DecStepSizeFullTableCase2_Mask = '10001111'; DecStepSizeFullTableCase1_Mask = '11110000'; LargeLMTOverloadThresh_Mask = '11000000'; SmallLMTOverloadThresh_Mask = '11000000'; SettlingDelay_Mask = '11100000'; EnergyLostThresh_Mask = '11000000'; LowPowerThresh_Mask = '10000000'; IncrementGainStep_Mask = '00011111'; FAGCLockLevelGainIncreaseUpperLimit_Mask = '11000000'; DecPowMeasurementDuration_Mask = '11110000'; % Bit-shifts to be applied DecStepSizeFullTableCase3_BitShift = 2; DecStepSizeFullTableCase2_BitShift = 4; IncrementGainStep_BitShift = 5; end methods function set.AttackDelay(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',63}, ... '', 'AttackDelay'); obj.AttackDelay = value; if obj.ConnectedToDevice obj.setRegister(value, obj.AttackDelay_Reg, obj.AttackDelay_Mask); end end function set.PeakOverloadWaitTime(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',31}, ... '', 'PeakOverloadWaitTime'); obj.PeakOverloadWaitTime = value; if obj.ConnectedToDevice obj.setRegister(value, obj.PeakOverloadWaitTime_Reg, obj.PeakOverloadWaitTime_Mask); end end function set.AGCLockLevel(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',127}, ... '', 'AGCLockLevel'); obj.AGCLockLevel = value; if obj.ConnectedToDevice obj.setRegister(value, obj.AGCLockLevel_Reg, obj.AGCLockLevel_Mask); end end function set.DecStepSizeFullTableCase3(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',7}, ... 
'', 'DecStepSizeFullTableCase3'); obj.DecStepSizeFullTableCase3 = value; if obj.ConnectedToDevice obj.setRegister(value, obj.DecStepSizeFullTableCase3_Reg, obj.DecStepSizeFullTableCase3_Mask, obj.DecStepSizeFullTableCase3_BitShift); end end function set.ADCLargeOverloadThresh(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',255}, ... '', 'ADCLargeOverloadThresh'); obj.ADCLargeOverloadThresh = value; if obj.ConnectedToDevice obj.setDebugAttributeLongLong('adi,gc-adc-large-overload-thresh',value); end end function set.ADCSmallOverloadThresh(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',obj.ADCLargeOverloadThresh}, ... '', 'ADCSmallOverloadThresh'); obj.ADCSmallOverloadThresh = value; if obj.ConnectedToDevice obj.setDebugAttributeLongLong('adi,gc-adc-small-overload-thresh',value); end end function set.DecStepSizeFullTableCase2(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',7}, ... '', 'DecStepSizeFullTableCase2'); obj.DecStepSizeFullTableCase2 = value; if obj.ConnectedToDevice obj.setRegister(value, obj.DecStepSizeFullTableCase2_Reg, obj.DecStepSizeFullTableCase2_Mask, obj.DecStepSizeFullTableCase2_BitShift); end end function set.DecStepSizeFullTableCase1(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',15}, ... '', 'DecStepSizeFullTableCase1'); obj.DecStepSizeFullTableCase1 = value; if obj.ConnectedToDevice obj.setRegister(value, obj.DecStepSizeFullTableCase1_Reg, obj.DecStepSizeFullTableCase1_Mask); end end function set.LargeLMTOverloadThresh(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',63}, ... '', 'LargeLMTOverloadThresh'); obj.LargeLMTOverloadThresh = value; if obj.ConnectedToDevice obj.setRegister(value, obj.LargeLMTOverloadThresh_Reg, obj.LargeLMTOverloadThresh_Mask); end end function set.SmallLMTOverloadThresh(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',obj.LargeLMTOverloadThresh}, ... '', 'SmallLMTOverloadThresh'); obj.SmallLMTOverloadThresh = value; if obj.ConnectedToDevice obj.setRegister(value, obj.SmallLMTOverloadThresh_Reg, obj.SmallLMTOverloadThresh_Mask); end end function set.SettlingDelay(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',31}, ... '', 'SettlingDelay'); obj.SettlingDelay = value; if obj.ConnectedToDevice obj.setRegister(value, obj.SettlingDelay_Reg, obj.SettlingDelay_Mask); end end function set.EnergyLostThresh(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',63}, ... '', 'SettlingDelay'); obj.EnergyLostThresh = value; if obj.ConnectedToDevice obj.setRegister(value, obj.EnergyLostThresh_Reg, obj.EnergyLostThresh_Mask); end end function set.LowPowerThresh(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... 
{ 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',63}, ... '', 'LowPowerThresh'); obj.LowPowerThresh = value; if obj.ConnectedToDevice obj.setDebugAttributeLongLong('adi,gc-low-power-thresh',value); end end function set.IncrementGainStep(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',7}, ... '', 'IncrementGainStep'); obj.IncrementGainStep = value; if obj.ConnectedToDevice obj.setRegister(value, obj.IncrementGainStep_Reg, obj.IncrementGainStep_Mask, obj.IncrementGainStep_BitShift); end end function set.FAGCLockLevelGainIncreaseUpperLimit(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',63}, ... '', 'FAGCLockLevelGainIncreaseUpperLimit'); obj.FAGCLockLevelGainIncreaseUpperLimit = value; if obj.ConnectedToDevice obj.setDebugAttributeLongLong('adi,fagc-lock-level-gain-increase-upper-limit',value); end end function set.FAGCLPThreshIncrementTime(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',255}, ... '', 'FAGCLPThreshIncrementTime'); obj.FAGCLPThreshIncrementTime = value; if obj.ConnectedToDevice obj.setDebugAttributeLongLong('adi,fagc-lp-thresh-increment-time',value); end end function set.DecPowMeasurementDuration(obj, value) validateattributes( value, { 'double','single', 'uint32' }, ... { 'real', 'nonnegative','scalar', 'finite', 'nonnan', 'nonempty','integer','>=',0,'<=',15}, ... '', 'DecPowMeasurementDuration'); obj.DecPowMeasurementDuration = value; if obj.ConnectedToDevice obj.setRegister(value, obj.DecPowMeasurementDuration_Reg, obj.DecPowMeasurementDuration_Mask); end end function WriteDebugAttributes(obj) if obj.ConnectedToDevice obj.setDebugAttributeLongLong('adi,gc-adc-large-overload-thresh',obj.ADCLargeOverloadThresh); obj.setDebugAttributeLongLong('adi,gc-adc-small-overload-thresh',obj.ADCSmallOverloadThresh); obj.setDebugAttributeLongLong('adi,gc-low-power-thresh',obj.LowPowerThresh); obj.setDebugAttributeLongLong('adi,fagc-lock-level-gain-increase-upper-limit',obj.FAGCLockLevelGainIncreaseUpperLimit); obj.setDebugAttributeLongLong('adi,fagc-lp-thresh-increment-time',obj.FAGCLPThreshIncrementTime); end end function WriteToRegisters(obj) if obj.ConnectedToDevice obj.setRegister(obj.AttackDelay, obj.AttackDelay_Reg, obj.AttackDelay_Mask); obj.setRegister(obj.PeakOverloadWaitTime, obj.PeakOverloadWaitTime_Reg, obj.PeakOverloadWaitTime_Mask); obj.setRegister(obj.AGCLockLevel, obj.AGCLockLevel_Reg, obj.AGCLockLevel_Mask); obj.setRegister(obj.DecStepSizeFullTableCase3, obj.DecStepSizeFullTableCase3_Reg, obj.DecStepSizeFullTableCase3_Mask, obj.DecStepSizeFullTableCase3_BitShift); obj.setRegister(obj.DecStepSizeFullTableCase2, obj.DecStepSizeFullTableCase2_Reg, obj.DecStepSizeFullTableCase2_Mask, obj.DecStepSizeFullTableCase2_BitShift); obj.setRegister(obj.DecStepSizeFullTableCase1, obj.DecStepSizeFullTableCase1_Reg, obj.DecStepSizeFullTableCase1_Mask); obj.setRegister(obj.LargeLMTOverloadThresh, obj.LargeLMTOverloadThresh_Reg, obj.LargeLMTOverloadThresh_Mask); obj.setRegister(obj.SmallLMTOverloadThresh, obj.SmallLMTOverloadThresh_Reg, obj.SmallLMTOverloadThresh_Mask); obj.setRegister(obj.SettlingDelay, obj.SettlingDelay_Reg, obj.SettlingDelay_Mask); obj.setRegister(obj.EnergyLostThresh, 
obj.EnergyLostThresh_Reg, obj.EnergyLostThresh_Mask); obj.setRegister(obj.IncrementGainStep, obj.IncrementGainStep_Reg, obj.IncrementGainStep_Mask, obj.IncrementGainStep_BitShift); obj.setRegister(obj.DecPowMeasurementDuration, obj.DecPowMeasurementDuration_Reg, obj.DecPowMeasurementDuration_Mask); end end function value = ReadFromRegister(obj, prop_name) if obj.ConnectedToDevice switch prop_name case 'AttackDelay' value = obj.getRegister(obj.AttackDelay_Reg, obj.AttackDelay_Mask); case 'PeakOverloadWaitTime' value = obj.getRegister(obj.PeakOverloadWaitTime_Reg, obj.PeakOverloadWaitTime_Mask); case 'AGCLockLevel' value = obj.getRegister(obj.AGCLockLevel_Reg, obj.AGCLockLevel_Mask); case 'DecStepSizeFullTableCase3' value = obj.getRegister(obj.DecStepSizeFullTableCase3_Reg, obj.DecStepSizeFullTableCase3_Mask, obj.DecStepSizeFullTableCase3_BitShift); case 'ADCSmallOverloadThresh' value = obj.getRegister(obj.ADCSmallOverloadThresh_Reg); case 'ADCLargeOverloadThresh' value = obj.getRegister(obj.ADCLargeOverloadThresh_Reg); case 'DecStepSizeFullTableCase2' value = obj.getRegister(obj.DecStepSizeFullTableCase2_Reg, obj.DecStepSizeFullTableCase2_Mask, obj.DecStepSizeFullTableCase2_BitShift); case 'DecStepSizeFullTableCase1' value = obj.getRegister(obj.DecStepSizeFullTableCase1_Reg, obj.DecStepSizeFullTableCase1_Mask); case 'LargeLMTOverloadThresh' value = obj.getRegister(obj.LargeLMTOverloadThresh_Reg, obj.LargeLMTOverloadThresh_Mask); case 'SmallLMTOverloadThresh' value = obj.getRegister(obj.SmallLMTOverloadThresh_Reg, obj.SmallLMTOverloadThresh_Mask); case 'SettlingDelay' value = obj.getRegister(obj.SettlingDelay_Reg, obj.SettlingDelay_Mask); case 'EnergyLostThresh' value = obj.getRegister(obj.EnergyLostThresh_Reg, obj.EnergyLostThresh_Mask); case 'LowPowerThresh' value = obj.getRegister(obj.LowPowerThresh_Reg, obj.LowPowerThresh_Mask); case 'IncrementGainStep' value = obj.getRegister(obj.IncrementGainStep_Reg, obj.IncrementGainStep_Mask, obj.IncrementGainStep_BitShift); case 'FAGCLockLevelGainIncreaseUpperLimit' value = obj.getRegister(obj.FAGCLockLevelGainIncreaseUpperLimit_Reg, obj.FAGCLockLevelGainIncreaseUpperLimit_Mask); case 'FAGCLPThreshIncrementTime' value = obj.getRegister(obj.FAGCLPThreshIncrementTime_Reg); case 'DecPowMeasurementDuration' value = obj.getRegister(obj.DecPowMeasurementDuration_Reg, obj.DecPowMeasurementDuration_Mask); otherwise error('Attempted to read unknown property %s\n', prop_name); end end end end end
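All of the setters funnel through `setRegister(value, reg, mask, shift?)`: a read-modify-write where, judging from the property ranges above, mask bits set to 1 mark the neighbouring bits to preserve, and the optional shift places the field within the byte (e.g. `IncrementGainStep` in 0..7 with mask `00011111` and shift 5 occupies bits 7:5). A hedged Python sketch of that pattern (the mask convention is inferred, not taken from the toolbox source):

```python
def set_register(read, write, addr: int, value: int, mask: int, shift: int = 0):
    """Read-modify-write: keep the bits set in `mask`, place `value`
    (shifted) into the remaining bits of the 8-bit register at `addr`."""
    keep = read(addr) & mask
    write(addr, keep | ((value << shift) & ~mask & 0xFF))

regs = {0x117: 0b0000_0111}                       # pretend device state
set_register(regs.get, regs.__setitem__, 0x117,   # IncrementGainStep = 5
             value=0b101, mask=0b0001_1111, shift=5)
assert regs[0x117] == 0b1010_0111                 # low 5 bits untouched
```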
!------------------------------------------------------------------------------ ! ! Program name: ! ! fibonacci ! ! Purpose: ! ! This program calls a recursive function to calculate the nth value of ! the Fibonacci sequence. ! fib(1) = 1, fib(2) = 1, fib(i) = fib(i-1) + fib(i-2), i.e., ! 1, 1, 2, 3, 5, 8, 13, ... ! ! !------------------------------------------------------------------------------ PROGRAM fibonacci IMPLICIT NONE INTEGER :: n, nfib INTEGER, SAVE :: count = 0 WRITE (*, *) 'Input n:' READ (*, *) n nfib = fib(n) WRITE (*, *) 'The nth value of the Fibonacci sequence, where n = ', n, & ', is: ', nfib WRITE (*, *) 'Function fib was invoked ', count, ' times.' CONTAINS RECURSIVE FUNCTION fib(n) RESULT(fib_result) INTEGER, INTENT(in) :: n INTEGER :: fib_result count = count + 1 ! Increment count of function calls SELECT CASE (n) CASE (1) ! First value is 1 fib_result = 1 CASE (2) ! Second value is 1 fib_result = 1 CASE (3:) fib_result = fib(n-1) + fib(n-2) ! Any others is sum of two previous END SELECT ! WRITE (*,*) 'n, fib = ', n, fib_result ! Write out intermediate values END FUNCTION fib END PROGRAM fibonacci
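A useful observation about the program above: with no memoization, the counter itself grows like the sequence, since `calls(n) = 1 + calls(n-1) + calls(n-2)` with `calls(1) = calls(2) = 1`, which solves to `2*fib(n) - 1`. A Python check of the same recursion and counter:

```python
count = 0

def fib(n: int) -> int:
    """Naive recursive Fibonacci, mirroring the Fortran function."""
    global count
    count += 1                      # same role as the SAVEd counter above
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

value = fib(10)
assert value == 55 and count == 2 * value - 1   # 109 invocations for n = 10
```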
--Rohan and Enrico import data.set namespace prop_logic inductive fml | atom (i : ℕ) | imp (a b : fml) | not (a : fml) open fml infixr ` →' `:50 := imp local notation ` ¬' ` := fml.not --CAN I MAKE THIS A FUNCTION INTO PROP INSTEAD OF TYPE???? inductive thm : fml → Prop | axk (p q) : thm (p →' q →' p) | axs (p q r) : thm $ (p →' q →' r) →' (p →' q) →' (p →' r) | axn (p q) : thm $ (¬'q →' ¬'p) →' p →' q | mp (p q) : thm p → thm (p →' q) → thm q lemma p_of_p_of_p_of_p_of_p (p : fml) : thm ((p →' (p →' p)) →' (p →' p)) := thm.mp (p →' ((p →' p) →' p)) ((p →' (p →' p)) →' (p →' p)) (thm.axk p (p →' p)) (thm.axs p (p →' p) p) lemma p_of_p (p : fml) : thm (p →' p) := thm.mp (p →' p →' p) (p →' p) (thm.axk p p) (p_of_p_of_p_of_p_of_p p) lemma p_of_p' (p : fml) : thm (p →' p) := begin have lemma1 := p_of_p_of_p_of_p_of_p p, have lemma2 := thm.axk p p, exact thm.mp _ _ lemma2 lemma1, end inductive consequence (G : set fml) : fml → Prop | axk (p q) : consequence (p →' q →' p) | axs (p q r) : consequence $ (p →' q →' r) →' (p →' q) →' (p →' r) | axn (p q) : consequence $ (¬'q →' ¬'p) →' p →' q | mp (p q) : consequence p → consequence (p →' q) → consequence q | of_G (g ∈ G) : consequence g lemma consequence_of_thm (f : fml) (H : thm f) (G : set fml) : consequence G f := begin induction H, exact consequence.axk G H_p H_q, exact consequence.axs G H_p H_q H_r, exact consequence.axn G H_p H_q, exact consequence.mp H_p H_q H_ih_a H_ih_a_1, end lemma thm_of_consequence_null (f : fml) (H : consequence ∅ f) : thm f := begin induction H, exact thm.axk H_p H_q, exact thm.axs H_p H_q H_r, exact thm.axn H_p H_q, exact thm.mp H_p H_q H_ih_a H_ih_a_1, rw set.mem_empty_eq at H_H, contradiction, end theorem deduction (G : set fml) (p q : fml) (H : consequence (G ∪ {p}) q) : consequence G (p →' q) := begin induction H, have H1 := consequence.axk G H_p H_q, have H2 := consequence.axk G (H_p →' H_q →' H_p) p, exact consequence.mp _ _ H1 H2, have H6 := consequence.axs G H_p H_q H_r, have H7 := consequence.axk G ((H_p →' H_q →' H_r) →' (H_p →' H_q) →' H_p →' H_r) p, exact consequence.mp _ _ H6 H7, have H8 := consequence.axn G H_p H_q, have H9 := consequence.axk G ((¬' H_q →' ¬' H_p) →' H_p →' H_q) p, exact consequence.mp _ _ H8 H9, have H3 := consequence.axs G p H_p H_q, have H4 := consequence.mp _ _ H_ih_a_1 H3, exact consequence.mp _ _ H_ih_a H4, rw set.mem_union at H_H, cases H_H, have H51 := consequence.of_G H_g H_H, have H52 := consequence.axk G H_g p, exact consequence.mp _ _ H51 H52, rw set.mem_singleton_iff at H_H, rw H_H, exact consequence_of_thm _ (p_of_p p) G, end lemma part1 (p : fml) : consequence {¬' (¬' p)} p := begin have H1 := consequence.axk {¬' (¬' p)} p p, have H2 := consequence.axk {¬' (¬' p)} (¬' (¬' p)) (¬' (¬' (p →' p →' p))), have H3 := consequence.of_G (¬' (¬' p))(set.mem_singleton (¬' (¬' p))), have H4 := consequence.mp _ _ H3 H2, have H5 := consequence.axn {¬' (¬' p)} (¬' p) (¬' (p →' p →' p)), have H6 := consequence.mp _ _ H4 H5, have H7 := consequence.axn {¬' (¬' p)} (p →' p →' p) p, have H8 := consequence.mp _ _ H6 H7, exact consequence.mp _ _ H1 H8, end lemma p_of_not_not_p (p : fml) : thm ((¬' (¬' p)) →' p) := begin have H1 := deduction ∅ (¬' (¬' p)) p, rw set.empty_union at H1, have H2 := H1 (part1 p), exact thm_of_consequence_null (¬' (¬' p) →' p) H2, end theorem not_not_p_of_p (p : fml) : thm (p →' (¬' (¬' p))) := begin have H1 := thm.axn p (¬' (¬' p)), have H2 := p_of_not_not_p (¬' p), exact thm.mp _ _ H2 H1, end end prop_logic
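To see the Hilbert system in action outside the proof assistant, here is a small hypothetical Python checker for the same calculus (`axk`, `axs`, and modus ponens suffice for `p →' p`); it replays the five steps of `p_of_p` above. Formulas are nested tuples, and all names are our own:

```python
def imp(a, b): return ('imp', a, b)

def axk(p, q):    return imp(p, imp(q, p))
def axs(p, q, r): return imp(imp(p, imp(q, r)), imp(imp(p, q), imp(p, r)))

def check(proof):
    """proof: list of ('axk',p,q) | ('axs',p,q,r) | ('mp',i,j) steps,
    where i, j index earlier theorems. Returns the proved formulas."""
    thms = []
    for step in proof:
        if step[0] == 'axk':
            thms.append(axk(step[1], step[2]))
        elif step[0] == 'axs':
            thms.append(axs(step[1], step[2], step[3]))
        elif step[0] == 'mp':
            p, pq = thms[step[1]], thms[step[2]]
            assert pq == imp(p, pq[2]), "mp needs p and p ->' q"
            thms.append(pq[2])
    return thms

# The derivation of p ->' p from p_of_p, for p an atom:
p = ('atom', 0)
steps = [('axk', p, p),             # p ->' (p ->' p)
         ('axk', p, imp(p, p)),     # p ->' ((p ->' p) ->' p)
         ('axs', p, imp(p, p), p),  # the axs instance used above
         ('mp', 1, 2),              # (p ->' (p ->' p)) ->' (p ->' p)
         ('mp', 0, 3)]              # p ->' p
assert check(steps)[-1] == imp(p, p)
```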
Help send a Leader to CLS and change their life.

Meet Justin, a 7-year leader and a recent graduate of Central Leaders School. Justin's CLS adventure began in 2010, when he was a 12-year-old 6th grader. He was really excited because one of the classes he got to take during the week was Learn to Splash, a class that teaches the basics of a variety of swimming strokes, water polo, synchronized swimming, and basic life-saving skills. He didn't know how to swim but was very interested in learning. He had such a great time that he took the class every year of the 7 years he attended Central Leaders School. The confidence he gained from Learn to Splash inspired him to try out for his school swim team, train to become a lifeguard, and eventually get a job lifeguarding at his local YMCA. He says his experience was life-changing. "I had been through a lot. I grew up on the streets, and the Leaders program kept me away from all that." Justin received a scholarship to attend Harris-Stowe State University and is pursuing a degree in criminal justice.

Justin, like so many other leaders, could attend Leaders School because he received financial support from the Alumni Scholarship Fund, which supports students who want to attend CLS but need assistance getting there. This fund would not exist without your generous support. You can give a tax-deductible gift on the form below or via text by texting CLS to 71777 and following the prompts. Thank you for making it possible for students like Justin to keep coming to CLS and having life-transforming experiences.
Sunday school will begin again in the fall on Rally Day, Sunday, September 9, 2018 at 9:45 a.m.

All children who are not yet old enough for three-year-old Sunday school are invited to join us for this interactive program featuring fun and age-appropriate Bible stories, singing, crafts, prayer and a snack. A grown-up must accompany their child.

Godly Play is a Montessori-based Sunday School program. This teaching style allows children the freedom to move about the classroom, choosing from a variety of activities such as puzzles, books and art that reinforce the weekly Bible lesson.

A thriving, high-energy, Bible-based, interactive Sunday Church School program is offered at Mount Pleasant Lutheran Church. Children from preschool through high school are invited to attend Sunday Church School.

Middle School Sunday School
We have a few lessons where all of our Middle School students meet together, but typically keep each grade separate. Classes are interactive and geared toward middle-school learners. We also incorporate a service project (making snack bags for Open Table, making breakfast bags for community meals, etc.) into each month, which the students love!

High School Sunday School is the place to be between 9:45 and 10:45 a.m.! Students are invited to explore topics that are interesting and relevant to them during High School Sunday School. We hit on all the hot topics of our time and wrestle with what our response as Christians should be. This program is a relaxed, conversation-based program. Come and join leaders Pastor Krista and Scott Reske… there is always a plentiful supply of Pop-Tarts, bagels, donuts or other treats, and the fridge is always full of either chocolate milk or OJ.
/- Copyright (c) 2018 Kenny Lau. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Kenny Lau, Yury Kudryashov -/ import Mathlib.PrePort import Mathlib.Lean3Lib.init.default import Mathlib.algebra.char_p.basic import Mathlib.data.equiv.ring import Mathlib.algebra.group_with_zero.power import Mathlib.algebra.iterate_hom import Mathlib.PostPort universes u l v u_1 namespace Mathlib /-! # The perfect closure of a field -/ /-- A perfect ring is a ring of characteristic p that has p-th root. -/ class perfect_ring (R : Type u) [comm_semiring R] (p : ℕ) [fact (nat.prime p)] [char_p R p] where pth_root' : R → R frobenius_pth_root' : ∀ (x : R), coe_fn (frobenius R p) (pth_root' x) = x pth_root_frobenius' : ∀ (x : R), pth_root' (coe_fn (frobenius R p) x) = x /-- Frobenius automorphism of a perfect ring. -/ def frobenius_equiv (R : Type u) [comm_semiring R] (p : ℕ) [fact (nat.prime p)] [char_p R p] [perfect_ring R p] : R ≃+* R := ring_equiv.mk (ring_hom.to_fun (frobenius R p)) (perfect_ring.pth_root' p) perfect_ring.pth_root_frobenius' perfect_ring.frobenius_pth_root' sorry sorry /-- `p`-th root of an element in a `perfect_ring` as a `ring_hom`. -/ def pth_root (R : Type u) [comm_semiring R] (p : ℕ) [fact (nat.prime p)] [char_p R p] [perfect_ring R p] : R →+* R := ↑(ring_equiv.symm (frobenius_equiv R p)) @[simp] theorem coe_frobenius_equiv {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] : ⇑(frobenius_equiv R p) = ⇑(frobenius R p) := rfl @[simp] theorem coe_frobenius_equiv_symm {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] : ⇑(ring_equiv.symm (frobenius_equiv R p)) = ⇑(pth_root R p) := rfl @[simp] theorem frobenius_pth_root {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] (x : R) : coe_fn (frobenius R p) (coe_fn (pth_root R p) x) = x := ring_equiv.apply_symm_apply (frobenius_equiv R p) x @[simp] theorem pth_root_pow_p {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] (x : R) : coe_fn (pth_root R p) x ^ p = x := frobenius_pth_root x @[simp] theorem pth_root_frobenius {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] (x : R) : coe_fn (pth_root R p) (coe_fn (frobenius R p) x) = x := ring_equiv.symm_apply_apply (frobenius_equiv R p) x @[simp] theorem pth_root_pow_p' {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] (x : R) : coe_fn (pth_root R p) (x ^ p) = x := pth_root_frobenius x theorem left_inverse_pth_root_frobenius {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] : function.left_inverse ⇑(pth_root R p) ⇑(frobenius R p) := pth_root_frobenius theorem right_inverse_pth_root_frobenius {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] : function.right_inverse ⇑(pth_root R p) ⇑(frobenius R p) := frobenius_pth_root theorem commute_frobenius_pth_root {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] : function.commute ⇑(frobenius R p) ⇑(pth_root R p) := fun (x : R) => Eq.trans (frobenius_pth_root x) (Eq.symm (pth_root_frobenius x)) theorem eq_pth_root_iff {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] {x : R} {y : R} : x = coe_fn (pth_root R p) y ↔ coe_fn (frobenius R p) x = y := equiv.eq_symm_apply (ring_equiv.to_equiv (frobenius_equiv R 
p)) theorem pth_root_eq_iff {R : Type u} [comm_semiring R] {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] {x : R} {y : R} : coe_fn (pth_root R p) x = y ↔ x = coe_fn (frobenius R p) y := equiv.symm_apply_eq (ring_equiv.to_equiv (frobenius_equiv R p)) theorem monoid_hom.map_pth_root {R : Type u} [comm_semiring R] {S : Type v} [comm_semiring S] (f : R →* S) {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] [char_p S p] [perfect_ring S p] (x : R) : coe_fn f (coe_fn (pth_root R p) x) = coe_fn (pth_root S p) (coe_fn f x) := sorry theorem monoid_hom.map_iterate_pth_root {R : Type u} [comm_semiring R] {S : Type v} [comm_semiring S] (f : R →* S) {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] [char_p S p] [perfect_ring S p] (x : R) (n : ℕ) : coe_fn f (nat.iterate (⇑(pth_root R p)) n x) = nat.iterate (⇑(pth_root S p)) n (coe_fn f x) := function.semiconj.iterate_right (monoid_hom.map_pth_root f) n x theorem ring_hom.map_pth_root {R : Type u} [comm_semiring R] {S : Type v} [comm_semiring S] (g : R →+* S) {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] [char_p S p] [perfect_ring S p] (x : R) : coe_fn g (coe_fn (pth_root R p) x) = coe_fn (pth_root S p) (coe_fn g x) := monoid_hom.map_pth_root (ring_hom.to_monoid_hom g) x theorem ring_hom.map_iterate_pth_root {R : Type u} [comm_semiring R] {S : Type v} [comm_semiring S] (g : R →+* S) {p : ℕ} [fact (nat.prime p)] [char_p R p] [perfect_ring R p] [char_p S p] [perfect_ring S p] (x : R) (n : ℕ) : coe_fn g (nat.iterate (⇑(pth_root R p)) n x) = nat.iterate (⇑(pth_root S p)) n (coe_fn g x) := monoid_hom.map_iterate_pth_root (ring_hom.to_monoid_hom g) x n theorem injective_pow_p {R : Type u} [comm_semiring R] (p : ℕ) [fact (nat.prime p)] [char_p R p] [perfect_ring R p] {x : R} {y : R} (hxy : x ^ p = y ^ p) : x = y := function.left_inverse.injective left_inverse_pth_root_frobenius hxy /-- `perfect_closure K p` is the quotient by this relation. -/ inductive perfect_closure.r (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : ℕ × K → ℕ × K → Prop where | intro : ∀ (n : ℕ) (x : K), perfect_closure.r K p (n, x) (n + 1, coe_fn (frobenius K p) x) /-- The perfect closure is the smallest extension that makes frobenius surjective. -/ def perfect_closure (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] := Quot sorry namespace perfect_closure /-- Constructor for `perfect_closure`. -/ def mk (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ × K) : perfect_closure K p := Quot.mk (r K p) x @[simp] theorem quot_mk_eq_mk (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ × K) : Quot.mk (r K p) x = mk K p x := rfl /-- Lift a function `ℕ × K → L` to a function on `perfect_closure K p`. 
-/ def lift_on {K : Type u} [comm_ring K] {p : ℕ} [fact (nat.prime p)] [char_p K p] {L : Type u_1} (x : perfect_closure K p) (f : ℕ × K → L) (hf : ∀ (x y : ℕ × K), r K p x y → f x = f y) : L := quot.lift_on x f hf @[simp] theorem lift_on_mk {K : Type u} [comm_ring K] {p : ℕ} [fact (nat.prime p)] [char_p K p] {L : Type u_1} (f : ℕ × K → L) (hf : ∀ (x y : ℕ × K), r K p x y → f x = f y) (x : ℕ × K) : lift_on (mk K p x) f hf = f x := rfl theorem induction_on {K : Type u} [comm_ring K] {p : ℕ} [fact (nat.prime p)] [char_p K p] (x : perfect_closure K p) {q : perfect_closure K p → Prop} (h : ∀ (x : ℕ × K), q (mk K p x)) : q x := quot.induction_on x h protected instance has_mul (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : Mul (perfect_closure K p) := { mul := Quot.lift (fun (x : ℕ × K) => Quot.lift (fun (y : ℕ × K) => mk K p (prod.fst x + prod.fst y, nat.iterate (⇑(frobenius K p)) (prod.fst y) (prod.snd x) * nat.iterate (⇑(frobenius K p)) (prod.fst x) (prod.snd y))) (mul_aux_right K p x)) sorry } @[simp] theorem mk_mul_mk (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ × K) (y : ℕ × K) : mk K p x * mk K p y = mk K p (prod.fst x + prod.fst y, nat.iterate (⇑(frobenius K p)) (prod.fst y) (prod.snd x) * nat.iterate (⇑(frobenius K p)) (prod.fst x) (prod.snd y)) := rfl protected instance comm_monoid (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : comm_monoid (perfect_closure K p) := comm_monoid.mk Mul.mul sorry (mk K p (0, 1)) sorry sorry sorry theorem one_def (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : 1 = mk K p (0, 1) := rfl protected instance inhabited (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : Inhabited (perfect_closure K p) := { default := 1 } protected instance has_add (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : Add (perfect_closure K p) := { add := Quot.lift (fun (x : ℕ × K) => Quot.lift (fun (y : ℕ × K) => mk K p (prod.fst x + prod.fst y, nat.iterate (⇑(frobenius K p)) (prod.fst y) (prod.snd x) + nat.iterate (⇑(frobenius K p)) (prod.fst x) (prod.snd y))) (add_aux_right K p x)) sorry } @[simp] theorem mk_add_mk (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ × K) (y : ℕ × K) : mk K p x + mk K p y = mk K p (prod.fst x + prod.fst y, nat.iterate (⇑(frobenius K p)) (prod.fst y) (prod.snd x) + nat.iterate (⇑(frobenius K p)) (prod.fst x) (prod.snd y)) := rfl protected instance has_neg (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : Neg (perfect_closure K p) := { neg := Quot.lift (fun (x : ℕ × K) => mk K p (prod.fst x, -prod.snd x)) sorry } @[simp] theorem neg_mk (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ × K) : -mk K p x = mk K p (prod.fst x, -prod.snd x) := rfl protected instance has_zero (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : HasZero (perfect_closure K p) := { zero := mk K p (0, 0) } theorem zero_def (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : 0 = mk K p (0, 0) := rfl theorem mk_zero (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (n : ℕ) : mk K p (n, 0) = 0 := sorry theorem r.sound (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (m : ℕ) (n : ℕ) (x : K) (y : K) (H : nat.iterate (⇑(frobenius K p)) m x = y) : mk K p (n, x) = mk K p (m + n, y) := sorry protected instance comm_ring (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : comm_ring 
(perfect_closure K p) := comm_ring.mk Add.add sorry 0 sorry sorry Neg.neg (ring.sub._default Add.add sorry 0 sorry sorry Neg.neg) sorry sorry comm_monoid.mul sorry comm_monoid.one sorry sorry sorry sorry sorry theorem eq_iff' (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ × K) (y : ℕ × K) : mk K p x = mk K p y ↔ ∃ (z : ℕ), nat.iterate (⇑(frobenius K p)) (prod.fst y + z) (prod.snd x) = nat.iterate (⇑(frobenius K p)) (prod.fst x + z) (prod.snd y) := sorry theorem nat_cast (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (n : ℕ) (x : ℕ) : ↑x = mk K p (n, ↑x) := sorry theorem int_cast (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℤ) : ↑x = mk K p (0, ↑x) := sorry theorem nat_cast_eq_iff (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ) (y : ℕ) : ↑x = ↑y ↔ ↑x = ↑y := sorry protected instance char_p (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : char_p (perfect_closure K p) p := char_p.mk fun (x : ℕ) => eq.mpr (id (Eq._oldrec (Eq.refl (↑x = 0 ↔ p ∣ x)) (Eq.symm (propext (char_p.cast_eq_zero_iff K p x))))) (eq.mpr (id (Eq._oldrec (Eq.refl (↑x = 0 ↔ ↑x = 0)) (Eq.symm nat.cast_zero))) (eq.mpr (id (Eq._oldrec (Eq.refl (↑x = ↑0 ↔ ↑x = 0)) (propext (nat_cast_eq_iff K p x 0)))) (eq.mpr (id (Eq._oldrec (Eq.refl (↑x = ↑0 ↔ ↑x = 0)) nat.cast_zero)) (iff.refl (↑x = 0))))) theorem frobenius_mk (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ × K) : coe_fn (frobenius (perfect_closure K p) p) (mk K p x) = mk K p (prod.fst x, prod.snd x ^ p) := sorry /-- Embedding of `K` into `perfect_closure K p` -/ def of (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : K →+* perfect_closure K p := ring_hom.mk (fun (x : K) => mk K p (0, x)) sorry sorry sorry sorry theorem of_apply (K : Type u) [comm_ring K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : K) : coe_fn (of K p) x = mk K p (0, x) := rfl theorem eq_iff (K : Type u) [integral_domain K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ × K) (y : ℕ × K) : Quot.mk (r K p) x = Quot.mk (r K p) y ↔ nat.iterate (⇑(frobenius K p)) (prod.fst y) (prod.snd x) = nat.iterate (⇑(frobenius K p)) (prod.fst x) (prod.snd y) := sorry protected instance has_inv (K : Type u) [field K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : has_inv (perfect_closure K p) := has_inv.mk (Quot.lift (fun (x : ℕ × K) => Quot.mk (r K p) (prod.fst x, prod.snd x⁻¹)) sorry) protected instance field (K : Type u) [field K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : field (perfect_closure K p) := field.mk comm_ring.add sorry comm_ring.zero sorry sorry comm_ring.neg comm_ring.sub sorry sorry comm_ring.mul sorry comm_ring.one sorry sorry sorry sorry sorry has_inv.inv sorry sorry sorry protected instance perfect_ring (K : Type u) [field K] (p : ℕ) [fact (nat.prime p)] [char_p K p] : perfect_ring (perfect_closure K p) p := perfect_ring.mk (fun (e : perfect_closure K p) => lift_on e (fun (x : ℕ × K) => mk K p (prod.fst x + 1, prod.snd x)) sorry) sorry sorry theorem eq_pth_root (K : Type u) [field K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (x : ℕ × K) : mk K p x = nat.iterate (⇑(pth_root (perfect_closure K p) p)) (prod.fst x) (coe_fn (of K p) (prod.snd x)) := sorry /-- Given a field `K` of characteristic `p` and a perfect ring `L` of the same characteristic, any homomorphism `K →+* L` can be lifted to `perfect_closure K p`. 
-/ def lift (K : Type u) [field K] (p : ℕ) [fact (nat.prime p)] [char_p K p] (L : Type v) [comm_semiring L] [char_p L p] [perfect_ring L p] : (K →+* L) ≃ (perfect_closure K p →+* L) := equiv.mk (fun (f : K →+* L) => ring_hom.mk (fun (e : perfect_closure K p) => lift_on e (fun (x : ℕ × K) => nat.iterate (⇑(pth_root L p)) (prod.fst x) (coe_fn f (prod.snd x))) sorry) sorry sorry sorry sorry) (fun (f : perfect_closure K p →+* L) => ring_hom.comp f (of K p)) sorry sorry end Mathlib
The current training ground is located at Bodymoor Heath near Kingsbury in north Warwickshire, the site for which was purchased by former chairman Doug Ellis in the early 1970s from a local farmer. Although Bodymoor Heath was state-of-the-art in the 1970s, by the late 1990s the facilities had started to look dated. In November 2005, Ellis and Aston Villa plc announced a state of the art GB£13 million redevelopment of Bodymoor in two phases. However, work on Bodymoor was suspended by Ellis due to financial problems, and was left in an unfinished state until new owner Randy Lerner made it one of his priorities to make the site one of the best in world football. The new training ground was officially unveiled on 6 May 2007, by then manager Martin O'Neill, then team captain Gareth Barry and 1982 European Cup winning team captain Dennis Mortimer, with the Aston Villa squad moving in for the 2007–08 season.
{-# OPTIONS --without-K --safe --no-sized-types --no-guardedness #-}

module Agda.Builtin.Float where

open import Agda.Builtin.Bool
open import Agda.Builtin.Nat
open import Agda.Builtin.Int
open import Agda.Builtin.String

postulate Float : Set
{-# BUILTIN FLOAT Float #-}

primitive
  primFloatEquality          : Float → Float → Bool
  primFloatLess              : Float → Float → Bool
  primFloatNumericalEquality : Float → Float → Bool
  primFloatNumericalLess     : Float → Float → Bool
  primNatToFloat             : Nat → Float
  primFloatPlus              : Float → Float → Float
  primFloatMinus             : Float → Float → Float
  primFloatTimes             : Float → Float → Float
  primFloatNegate            : Float → Float
  primFloatDiv               : Float → Float → Float
  primFloatSqrt              : Float → Float
  primRound                  : Float → Int
  primFloor                  : Float → Int
  primCeiling                : Float → Int
  primExp                    : Float → Float
  primLog                    : Float → Float
  primSin                    : Float → Float
  primCos                    : Float → Float
  primTan                    : Float → Float
  primASin                   : Float → Float
  primACos                   : Float → Float
  primATan                   : Float → Float
  primATan2                  : Float → Float → Float
  primShowFloat              : Float → String
A set $S$ is open if and only if for every $x \in S$, there exists an $e > 0$ such that the ball of radius $e$ centered at $x$ is contained in $S$.
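For a machine-checked rendering of this characterization, here is a minimal Lean 4 sketch stated for a set of reals; it leans on Mathlib's `Metric.isOpen_iff` (the lemma name is believed correct for current Mathlib, but treat the exact import path as an assumption):

import Mathlib.Topology.MetricSpace.Basic

-- Open iff every point has a ball around it contained in the set.
example (S : Set ℝ) : IsOpen S ↔ ∀ x ∈ S, ∃ e > 0, Metric.ball x e ⊆ S :=
  Metric.isOpen_iff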
## Author: Alan Ruttenberg ## Project: OHD ## Date: May, 2013 ## ## Demonstrates simple statistics on our data set with R. Here draw distribution of age at first procedure. ## ## Modifed by Bill Duncan Oct. 10, 2017 ## Summary of changes: ## After updating to GraphDB SE 8.3 and R 3.4.2, the endpoint url was changed and the rrdf library no ## no longer worked. To fix, I updated the current_endpoint in environment.r and I am using the SPARQL ## library (i.e., current_sparqlr <<- "SPARQL") for queries. ## Two important side effects: ## 1. In order to get dates to work, I had to cast them as strings; e.g.: bind(str(?birth_date) as ?bdate) ## 2. To count rows in results, use ncol instead of nrows ## ## In the SPARQL query, I also changed the results from returning a sample (i.e., SAMPLE(?birth_date)) ## to returning all birth dates. The labels on the plot were updated to reflect this. years_difference_from_dates <- function(first_date,second_date) { start <- as.Date(first_date) end <- as.Date(second_date) as.numeric((end-start)/365.25) } mean_median_and_sd_from_dates_in_years <- function (first_date,second_date,print=TRUE) { years <- years_difference_from_dates(first_date,second_date); if (print) { cat(paste("mean: ", mean(years),", median: ", median(years), ", standard deviation: ",sd(years),"\n"))} c(as.numeric(mean(years)),as.numeric(median(years)),as.numeric(sd(years))); } # http://stats.stackexchange.com/questions/12232/calculating-the-parameters-of-a-beta-distribution-using-the-mean-and-variance estimate_beta_distribution_parameters <- function(mean, sd) { var <- sd*sd; alpha <- ((1 - mean) / var - 1 / mean) * mean ^ 2 beta <- alpha * (1 / mean - 1) return(params = list(shape1 = alpha, shape2 = beta)) } # show a histogram of date differences with a beta function fit to it histogram_fit_to_distribution_of_dates_in_years <- function (first_date,second_date,breaks=20,topic="times",subtitle="") { # compute years different years <- years_difference_from_dates(first_date,second_date); count <- length(years) # pick arbitrary bigger than we expect, so we can scale 0-1 years_max = 110; # scale years to fit beta years_scaled <- years/years_max; # estimate the parameters from the means and sd beta_parameter_estimates <- estimate_beta_distribution_parameters(mean(years_scaled),sd(years_scaled)); # I'm not sure if this is right. How to scale the fit to the counts? distscale <- count/breaks; # compute the beta distribution years_distribution <- fitdistr(years_scaled,"beta",beta_parameter_estimates); # histogram it hist(as.numeric(years),breaks=breaks, main=paste("Distribution of ",topic,", fit to",expression("beta"),"distribution"), xlab="Years", ylab="Count", sub=subtitle ); # draw the function on top of it curve(distscale*dbeta(x/years_max,years_distribution$estimate[1],years_distribution$estimate[2]),add=TRUE) years_distribution } age_to_first_treatment_statistics <- function () { # retrieve a row for each patient with the birth date first treatment date for each queryRes <- queryc("SELECT ?patient (str(sample(?birth_date)) as ?bdate) (str(min(?treatdatei)) as ?treatdate) WHERE { ?patient rdf:type dental_patient: . ?patient participates_in: ?procedure. ?procedure rdf:type dental_procedure: . ?procedure occurrence_date: ?treatdatei . 
?patient birth_date: ?birth_date } group by ?patient"); # currently the date fields are strings # converty them to dates queryRes[,"bdate"] <- as.Date(queryRes[,"bdate"]); queryRes[,"treatdate"] <- as.Date(queryRes[,"treatdate"]); # count how many procedures had a patient participate query <- "SELECT distinct ?procedure WHERE { ?patient rdf:type dental_patient: . ?patient participates_in: ?procedure. ?procedure rdf:type dental_procedure: . ?procedure occurrence_date: ?treatdatei . ?patient birth_date: ?birth_date . } " if(current_sparqlr == "rrdf") { withpatient <- nrow(queryc(query)) } else { withpatient <- ncol(queryc(query)) } ## count the total procedures if(current_sparqlr == "rrdf") { total <- nrow(queryc("SELECT distinct ?procedure WHERE { ?procedure rdf:type dental_procedure: . } "));} else # use when current_sparqlr == "SPARQL" { total <- ncol(queryc("SELECT distinct ?procedure WHERE { ?procedure rdf:type dental_procedure: . } "));} # compute mean, median, sd meansd<-mean_median_and_sd_from_dates_in_years(queryRes[,"bdate"],queryRes[,"treatdate"]); # visualize histogram_fit_to_distribution_of_dates_in_years( queryRes[,"bdate"],queryRes[,"treatdate"], topic="age at first treatment", subtitle=paste("median=",signif(meansd[2],3),"sd=",signif(meansd[3],3)," ", # summarize how many first treatments, summarized from how many treatments, from how many total treatments dim(queryRes)[1],"first from",withpatient,"procedures of ",total) ); } age_at_first_dental_procedure_statistics <- function () { # retrieve a row for each patient with the birth date first procedure date for each queryRes <- queryc(" SELECT ?patient (str(?birth_date) as ?bdate) (min(?age) as ?first_procedure_age) (str(min(?procdatei)) as ?procdate) WHERE { ?patient rdf:type dental_patient: . ?patient participates_in: ?procedure. ?procedure rdf:type dental_procedure: . ?procedure occurrence_date: ?procdatei . ?patient birth_date: ?birth_date . bind(year(?procdatei)-year(?birth_date) as ?age) . } group by ?patient ?birth_date"); values <- c(queryRes[,"first_procedure_age"]); hist( values #, breaks = seq(0,100, by = 10) #, labels = TRUE , main = "Distribution of patient's age during first dental procedure \n (fit to normal distribution)" , sub = paste("N:", format(length(values), big.mark = ","),"patients", " mean age:", round(mean(values), digits = 2), " SD age:", round(sd(values), digits = 2)) , xlab = "Patient age" #, ylim = c(0, 200) , ylim = c(0, 0.03) , xlim = c(0, 100) , freq = FALSE , col= "lightgreen" , font.lab = 2 ) # lines(density(values), col="darkblue", lwd = 3) curve(dnorm(x, mean=mean(values), sd=sd(values)), add=TRUE, col="darkblue", lwd = 3) } age_at_first_dental_encounter_statistics <- function () { # retrieve a row for each patient with the birth date first procedure date for each queryRes <- queryc(" SELECT DISTINCT ?patient (min(?age) as ?min_age) (str(min(?encounter_date)) as ?min_encounter_date) WHERE { ?patient rdf:type dental_patient: . ?patient participates_in: ?encounter. ?encounter rdf:type health_care_encounter: . ?encounter occurrence_date: ?encounter_date . ?patient birth_date: ?birth_date . bind(year(?encounter_date)-year(?birth_date) as ?age) . 
} group by ?patient"); values <- c(queryRes[,"min_age"]); hist( values #, breaks = seq(0,100, by = 10) , breaks = 20 #, labels = TRUE , main = "Distribution of patient's age during first dental encounter \n (fit to normal distribution)" , sub = paste("N:", format(length(values), big.mark = ","),"patients", " mean age:", round(mean(values), digits = 2), " SD age:", round(sd(values), digits = 2)) , xlab = "Patient age" #, ylim = c(0, 200) , ylim = c(0, 0.02) , xlim = c(0, 100) , freq = FALSE , col= "lightgreen" , font.lab = 2 ) # lines(density(values), col="darkblue", lwd = 3) curve(dnorm(x, mean=mean(values), sd=sd(values)), add=TRUE, col="darkblue", lwd = 3) } which_distribution_for_ages <- function () { queryRes <- queryc(" SELECT ?patient ?bdate (str(min(?procdatei)) as ?procdate) WHERE { ?patient rdf:type dental_patient: . ?patient participates_in: ?procedure. ?procedure rdf:type dental_procedure: . ?procedure occurrence_date: ?procdatei . ?patient birth_date: ?birth_date bind(str(?birth_date) as ?bdate) } group by ?patient ?bdate"); compare_normal_to_beta_distribution( years_difference_from_dates(as.Date(queryRes[,"bdate"]), as.Date(queryRes[,"procdate"]))/115); } # See http://en.wikipedia.org/wiki/Kolmogorov–Smirnov_test - a way of testing goodness of fit of a probability distribution to a sample compare_normal_to_beta_distribution <- function (data) { beta_parameter_estimates <- estimate_beta_distribution_parameters(mean(data),sd(data)); # compute the beta distribution beta_parameters <- fitdistr(data,"beta",beta_parameter_estimates); # compute the normal distribution normal_parameters <- fitdistr(data,"normal") # Now do the KS test for the normal distribution and print cat("Normal\n"); print(ks.test(data,"pnorm",normal_parameters$estimate[1],normal_parameters$estimate[2])) # And do the KS test for the beta distribution and print cat("Beta\n"); print(ks.test(data,"pbeta",beta_parameters$estimate[1],beta_parameters$estimate[2])) }
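The method-of-moments formulas in `estimate_beta_distribution_parameters` above invert the Beta distribution's mean and variance. A minimal, self-contained Python sketch (my own cross-check, independent of the R script and its SPARQL endpoint) round-trips a mean/sd pair through the same formulas:

# Method-of-moments Beta fit, mirroring estimate_beta_distribution_parameters
# in the R script above; toy inputs, no SPARQL endpoint needed.
def beta_params_from_moments(mean, sd):
    var = sd * sd
    alpha = ((1 - mean) / var - 1 / mean) * mean ** 2
    beta = alpha * (1 / mean - 1)
    return alpha, beta

alpha, beta = beta_params_from_moments(0.3, 0.1)  # gives alpha = 6, beta = 14
# Beta(a, b) has mean a/(a+b) and variance a*b / ((a+b)**2 * (a+b+1)),
# so the recovered moments should match the inputs.
mean_back = alpha / (alpha + beta)
var_back = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
print(round(mean_back, 6), round(var_back, 6))  # 0.3 0.01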
module Language.LSP.CodeAction

public export
data IdrisAction
  = CaseSplit
  | ExprSearch
  | GenerateDef
  | MakeCase
  | MakeClause
  | MakeLemma
  | MakeWith

export
Eq IdrisAction where
  CaseSplit   == CaseSplit   = True
  ExprSearch  == ExprSearch  = True
  GenerateDef == GenerateDef = True
  MakeCase    == MakeCase    = True
  MakeClause  == MakeClause  = True
  MakeLemma   == MakeLemma   = True
  MakeWith    == MakeWith    = True
  _           == _           = False

export
Show IdrisAction where
  show CaseSplit   = "CaseSplit"
  show ExprSearch  = "ExprSearch"
  show GenerateDef = "GenerateDef"
  show MakeCase    = "MakeCase"
  show MakeClause  = "MakeClause"
  show MakeLemma   = "MakeLemma"
  show MakeWith    = "MakeWith"
#!/usr/bin/env python3 # -*- coding: utf-8 -*- """ Created on Sat Jun 15 19:55:43 2019 """ # Python imports. import random import numpy as np from collections import defaultdict # Local classes. from tensor_rl.agents.AgentClass import Agent class OptimalAgent(Agent): ''' Implementation for the modified R-Max Agent [Sham's thesis] ''' def __init__(self, states, state_map, actions, par_tensor, times, gamma=0.95, horizon=2, name="Optimal", greedy=False): name = name Agent.__init__(self, name=name, actions=actions, gamma=gamma) self.states = states self.state_map = state_map self.horizon = horizon self.greedy = greedy self.times = times self.par_tensor = par_tensor self.reset() # print(self.states) # print(self.actions) print(self.par_tensor) self.policy = defaultdict(type(self.actions[0])) self.update_all() def reset(self): ''' Summary: Resets the agent back to its tabula rasa config. ''' self.action_map = {} k = 0 for a in self.actions: self.action_map[a] = k k += 1 def act(self, state, reward): # print(state) action = self.policy[state] return action def get_r(self, state, action): return self.par_tensor[self.action_map[action]][self.state_map[state]][len(self.states)] def get_next_r(self, state, action): probs = self.par_tensor[self.action_map[action]][self.state_map[state]][:len(self.states)] rs = np.zeros((len(self.states))) for s in self.states: rs[self.state_map[s]] = max(self.par_tensor[:,self.state_map[s],len(self.states)]) return np.dot(probs, rs) def get_best_action(self, state): max_a = random.choice(self.actions) max_q = self.get_r(state, max_a) + self.gamma * self.get_next_r(state, max_a) for a in self.actions: r = self.get_r(state, a) nr = self.get_next_r(state, a) rew = r + self.gamma * nr print(a, r, nr) if rew > max_q: max_q = rew max_a = a return max_a def get_policy_action(self, state, q): return self.actions[np.argmax(q[self.state_map[state]])] def get_ns_dist(self, state, action): return self.par_tensor[self.action_map[action]][self.state_map[state]][:len(self.states)] def get_state_vals(self, q): print("max:", np.max(q, axis=1)) s_vals = np.zeros(len(self.states)) for s in self.states: s_vals[self.state_map[s]] = q[self.state_map[s]][self.action_map[self.get_policy_action(s,q)]] return s_vals def planning(self, n_iter=10000): q = np.zeros((len(self.states), len(self.actions))) prev_q = np.copy(q) for i in range(n_iter): for s in self.states: for a in self.actions: q[self.state_map[s]][self.action_map[a]] = self.get_r(s,a) + self.gamma * np.dot(self.get_ns_dist(s,a), np.max(q, axis=1)) if np.linalg.norm(q-prev_q) < 1e-3: print("iter for ", i, "times") break prev_q = np.copy(q) print(q) return q def update_all(self): ''' After recovering parameters, we calculate the best actions for all state-action pairs once and use them forever. ''' q = self.planning() for s in self.states: print("getting policy for ", s) # self.policy[s] = self.get_best_action(s) self.policy[s] = self.get_policy_action(s, q) print("policies updated") self.print_policy() def print_policy(self): for s in self.states: print("s: ", s.get_data(), "a: ",self.policy[s]) if __name__ == "__main__": states = [1, 2, 3] state_map = {1: 0, 2: 1, 3: 2} actions = [0, 1] par_tensor = np.array([[[.2, .3, .5, 1],[.2, .3, .5, 0],[.2, .3, .5, -1]], [[.3, .5, .2, 0.5],[.3, .5, .2, 0.7],[.3, .5, .2, -0.8]]]) ag = OptimalAgent(states, state_map, actions, par_tensor, 100) # print(ag.get_best_action(2)) q = ag.planning(100) print(q) print(ag.get_policy_action(2, q))
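The `planning` method above is plain value iteration on the recovered parameter tensor. A stripped-down sketch of the same Bellman update, on a hypothetical two-state, two-action MDP (toy numbers of my own, not the agent's tensor), may make the loop easier to follow:

import numpy as np

# P[a][s] = next-state distribution, R[a][s] = immediate reward (toy values).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.5, 0.7]])
gamma = 0.95

q = np.zeros((2, 2))  # q[s][a], as in OptimalAgent.planning
for _ in range(1000):
    prev = q.copy()
    for s in range(2):
        for a in range(2):
            # Bellman optimality update: r + gamma * E[max_a' q(s', a')]
            q[s, a] = R[a, s] + gamma * P[a, s] @ np.max(q, axis=1)
    if np.linalg.norm(q - prev) < 1e-6:
        break
print(q)  # converged action values; the greedy policy is argmax over columns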
(* Title: HOL/Algebra/IntRing.thy Author: Stephan Hohe, TU Muenchen Author: Clemens Ballarin *) theory IntRing imports "~~/src/HOL/Number_Theory/Primes" QuotRing Lattice Int begin section \<open>The Ring of Integers\<close> subsection \<open>Some properties of @{typ int}\<close> lemma dvds_eq_abseq: fixes k :: int shows "l dvd k \<and> k dvd l \<longleftrightarrow> \<bar>l\<bar> = \<bar>k\<bar>" apply rule apply (simp add: zdvd_antisym_abs) apply (simp add: dvd_if_abs_eq) done subsection \<open>\<open>\<Z>\<close>: The Set of Integers as Algebraic Structure\<close> abbreviation int_ring :: "int ring" ("\<Z>") where "int_ring \<equiv> \<lparr>carrier = UNIV, mult = op *, one = 1, zero = 0, add = op +\<rparr>" lemma int_Zcarr [intro!, simp]: "k \<in> carrier \<Z>" by simp lemma int_is_cring: "cring \<Z>" apply (rule cringI) apply (rule abelian_groupI, simp_all) defer 1 apply (rule comm_monoidI, simp_all) apply (rule distrib_right) apply (fast intro: left_minus) done (* lemma int_is_domain: "domain \<Z>" apply (intro domain.intro domain_axioms.intro) apply (rule int_is_cring) apply (unfold int_ring_def, simp+) done *) subsection \<open>Interpretations\<close> text \<open>Since definitions of derived operations are global, their interpretation needs to be done as early as possible --- that is, with as few assumptions as possible.\<close> interpretation int: monoid \<Z> rewrites "carrier \<Z> = UNIV" and "mult \<Z> x y = x * y" and "one \<Z> = 1" and "pow \<Z> x n = x^n" proof - \<comment> "Specification" show "monoid \<Z>" by standard auto then interpret int: monoid \<Z> . \<comment> "Carrier" show "carrier \<Z> = UNIV" by simp \<comment> "Operations" { fix x y show "mult \<Z> x y = x * y" by simp } show "one \<Z> = 1" by simp show "pow \<Z> x n = x^n" by (induct n) simp_all qed interpretation int: comm_monoid \<Z> rewrites "finprod \<Z> f A = prod f A" proof - \<comment> "Specification" show "comm_monoid \<Z>" by standard auto then interpret int: comm_monoid \<Z> . \<comment> "Operations" { fix x y have "mult \<Z> x y = x * y" by simp } note mult = this have one: "one \<Z> = 1" by simp show "finprod \<Z> f A = prod f A" by (induct A rule: infinite_finite_induct, auto) qed interpretation int: abelian_monoid \<Z> rewrites int_carrier_eq: "carrier \<Z> = UNIV" and int_zero_eq: "zero \<Z> = 0" and int_add_eq: "add \<Z> x y = x + y" and int_finsum_eq: "finsum \<Z> f A = sum f A" proof - \<comment> "Specification" show "abelian_monoid \<Z>" by standard auto then interpret int: abelian_monoid \<Z> . \<comment> "Carrier" show "carrier \<Z> = UNIV" by simp \<comment> "Operations" { fix x y show "add \<Z> x y = x + y" by simp } note add = this show zero: "zero \<Z> = 0" by simp show "finsum \<Z> f A = sum f A" by (induct A rule: infinite_finite_induct, auto) qed interpretation int: abelian_group \<Z> (* The equations from the interpretation of abelian_monoid need to be repeated. Since the morphisms through which the abelian structures are interpreted are not the identity, the equations of these interpretations are not inherited. *) (* FIXME *) rewrites "carrier \<Z> = UNIV" and "zero \<Z> = 0" and "add \<Z> x y = x + y" and "finsum \<Z> f A = sum f A" and int_a_inv_eq: "a_inv \<Z> x = - x" and int_a_minus_eq: "a_minus \<Z> x y = x - y" proof - \<comment> "Specification" show "abelian_group \<Z>" proof (rule abelian_groupI) fix x assume "x \<in> carrier \<Z>" then show "\<exists>y \<in> carrier \<Z>. 
y \<oplus>\<^bsub>\<Z>\<^esub> x = \<zero>\<^bsub>\<Z>\<^esub>" by simp arith qed auto then interpret int: abelian_group \<Z> . \<comment> "Operations" { fix x y have "add \<Z> x y = x + y" by simp } note add = this have zero: "zero \<Z> = 0" by simp { fix x have "add \<Z> (- x) x = zero \<Z>" by (simp add: add zero) then show "a_inv \<Z> x = - x" by (simp add: int.minus_equality) } note a_inv = this show "a_minus \<Z> x y = x - y" by (simp add: int.minus_eq add a_inv) qed (simp add: int_carrier_eq int_zero_eq int_add_eq int_finsum_eq)+ interpretation int: "domain" \<Z> rewrites "carrier \<Z> = UNIV" and "zero \<Z> = 0" and "add \<Z> x y = x + y" and "finsum \<Z> f A = sum f A" and "a_inv \<Z> x = - x" and "a_minus \<Z> x y = x - y" proof - show "domain \<Z>" by unfold_locales (auto simp: distrib_right distrib_left) qed (simp add: int_carrier_eq int_zero_eq int_add_eq int_finsum_eq int_a_inv_eq int_a_minus_eq)+ text \<open>Removal of occurrences of @{term UNIV} in interpretation result --- experimental.\<close> lemma UNIV: "x \<in> UNIV \<longleftrightarrow> True" "A \<subseteq> UNIV \<longleftrightarrow> True" "(\<forall>x \<in> UNIV. P x) \<longleftrightarrow> (\<forall>x. P x)" "(EX x : UNIV. P x) \<longleftrightarrow> (EX x. P x)" "(True \<longrightarrow> Q) \<longleftrightarrow> Q" "(True \<Longrightarrow> PROP R) \<equiv> PROP R" by simp_all interpretation int (* FIXME [unfolded UNIV] *) : partial_order "\<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr>" rewrites "carrier \<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr> = UNIV" and "le \<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr> x y = (x \<le> y)" and "lless \<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr> x y = (x < y)" proof - show "partial_order \<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr>" by standard simp_all show "carrier \<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr> = UNIV" by simp show "le \<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr> x y = (x \<le> y)" by simp show "lless \<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr> x y = (x < y)" by (simp add: lless_def) auto qed interpretation int (* FIXME [unfolded UNIV] *) : lattice "\<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr>" rewrites "join \<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr> x y = max x y" and "meet \<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr> x y = min x y" proof - let ?Z = "\<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr>" show "lattice ?Z" apply unfold_locales apply (simp add: least_def Upper_def) apply arith apply (simp add: greatest_def Lower_def) apply arith done then interpret int: lattice "?Z" . show "join ?Z x y = max x y" apply (rule int.joinI) apply (simp_all add: least_def Upper_def) apply arith done show "meet ?Z x y = min x y" apply (rule int.meetI) apply (simp_all add: greatest_def Lower_def) apply arith done qed interpretation int (* [unfolded UNIV] *) : total_order "\<lparr>carrier = UNIV::int set, eq = op =, le = op \<le>\<rparr>" by standard clarsimp subsection \<open>Generated Ideals of \<open>\<Z>\<close>\<close> lemma int_Idl: "Idl\<^bsub>\<Z>\<^esub> {a} = {x * a | x. True}" apply (subst int.cgenideal_eq_genideal[symmetric]) apply simp apply (simp add: cgenideal_def) done lemma multiples_principalideal: "principalideal {x * a | x. 
True } \<Z>" by (metis UNIV_I int.cgenideal_eq_genideal int.cgenideal_is_principalideal int_Idl) lemma prime_primeideal: assumes prime: "prime p" shows "primeideal (Idl\<^bsub>\<Z>\<^esub> {p}) \<Z>" apply (rule primeidealI) apply (rule int.genideal_ideal, simp) apply (rule int_is_cring) apply (simp add: int.cgenideal_eq_genideal[symmetric] cgenideal_def) apply clarsimp defer 1 apply (simp add: int.cgenideal_eq_genideal[symmetric] cgenideal_def) apply (elim exE) proof - fix a b x assume "a * b = x * p" then have "p dvd a * b" by simp then have "p dvd a \<or> p dvd b" by (metis prime prime_dvd_mult_eq_int) then show "(\<exists>x. a = x * p) \<or> (\<exists>x. b = x * p)" by (metis dvd_def mult.commute) next assume "UNIV = {uu. EX x. uu = x * p}" then obtain x where "1 = x * p" by best then have "\<bar>p * x\<bar> = 1" by (simp add: mult.commute) then show False using prime by (auto dest!: abs_zmult_eq_1 simp: prime_def) qed subsection \<open>Ideals and Divisibility\<close> lemma int_Idl_subset_ideal: "Idl\<^bsub>\<Z>\<^esub> {k} \<subseteq> Idl\<^bsub>\<Z>\<^esub> {l} = (k \<in> Idl\<^bsub>\<Z>\<^esub> {l})" by (rule int.Idl_subset_ideal') simp_all lemma Idl_subset_eq_dvd: "Idl\<^bsub>\<Z>\<^esub> {k} \<subseteq> Idl\<^bsub>\<Z>\<^esub> {l} \<longleftrightarrow> l dvd k" apply (subst int_Idl_subset_ideal, subst int_Idl, simp) apply (rule, clarify) apply (simp add: dvd_def) apply (simp add: dvd_def ac_simps) done lemma dvds_eq_Idl: "l dvd k \<and> k dvd l \<longleftrightarrow> Idl\<^bsub>\<Z>\<^esub> {k} = Idl\<^bsub>\<Z>\<^esub> {l}" proof - have a: "l dvd k \<longleftrightarrow> (Idl\<^bsub>\<Z>\<^esub> {k} \<subseteq> Idl\<^bsub>\<Z>\<^esub> {l})" by (rule Idl_subset_eq_dvd[symmetric]) have b: "k dvd l \<longleftrightarrow> (Idl\<^bsub>\<Z>\<^esub> {l} \<subseteq> Idl\<^bsub>\<Z>\<^esub> {k})" by (rule Idl_subset_eq_dvd[symmetric]) have "l dvd k \<and> k dvd l \<longleftrightarrow> Idl\<^bsub>\<Z>\<^esub> {k} \<subseteq> Idl\<^bsub>\<Z>\<^esub> {l} \<and> Idl\<^bsub>\<Z>\<^esub> {l} \<subseteq> Idl\<^bsub>\<Z>\<^esub> {k}" by (subst a, subst b, simp) also have "Idl\<^bsub>\<Z>\<^esub> {k} \<subseteq> Idl\<^bsub>\<Z>\<^esub> {l} \<and> Idl\<^bsub>\<Z>\<^esub> {l} \<subseteq> Idl\<^bsub>\<Z>\<^esub> {k} \<longleftrightarrow> Idl\<^bsub>\<Z>\<^esub> {k} = Idl\<^bsub>\<Z>\<^esub> {l}" by blast finally show ?thesis . qed lemma Idl_eq_abs: "Idl\<^bsub>\<Z>\<^esub> {k} = Idl\<^bsub>\<Z>\<^esub> {l} \<longleftrightarrow> \<bar>l\<bar> = \<bar>k\<bar>" apply (subst dvds_eq_abseq[symmetric]) apply (rule dvds_eq_Idl[symmetric]) done subsection \<open>Ideals and the Modulus\<close> definition ZMod :: "int \<Rightarrow> int \<Rightarrow> int set" where "ZMod k r = (Idl\<^bsub>\<Z>\<^esub> {k}) +>\<^bsub>\<Z>\<^esub> r" lemmas ZMod_defs = ZMod_def genideal_def lemma rcos_zfact: assumes kIl: "k \<in> ZMod l r" shows "\<exists>x. k = x * l + r" proof - from kIl[unfolded ZMod_def] have "\<exists>xl\<in>Idl\<^bsub>\<Z>\<^esub> {l}. k = xl + r" by (simp add: a_r_coset_defs) then obtain xl where xl: "xl \<in> Idl\<^bsub>\<Z>\<^esub> {l}" and k: "k = xl + r" by auto from xl obtain x where "xl = x * l" by (auto simp: int_Idl) with k have "k = x * l + r" by simp then show "\<exists>x. k = x * l + r" .. qed lemma ZMod_imp_zmod: assumes zmods: "ZMod m a = ZMod m b" shows "a mod m = b mod m" proof - interpret ideal "Idl\<^bsub>\<Z>\<^esub> {m}" \<Z> by (rule int.genideal_ideal) fast from zmods have "b \<in> ZMod m a" unfolding ZMod_def by (simp add: a_repr_independenceD) then have "\<exists>x. 
b = x * m + a" by (rule rcos_zfact) then obtain x where "b = x * m + a" by fast then have "b mod m = (x * m + a) mod m" by simp also have "\<dots> = ((x * m) mod m) + (a mod m)" by (simp add: mod_add_eq) also have "\<dots> = a mod m" by simp finally have "b mod m = a mod m" . then show "a mod m = b mod m" .. qed lemma ZMod_mod: "ZMod m a = ZMod m (a mod m)" proof - interpret ideal "Idl\<^bsub>\<Z>\<^esub> {m}" \<Z> by (rule int.genideal_ideal) fast show ?thesis unfolding ZMod_def apply (rule a_repr_independence'[symmetric]) apply (simp add: int_Idl a_r_coset_defs) proof - have "a = m * (a div m) + (a mod m)" by (simp add: mult_div_mod_eq [symmetric]) then have "a = (a div m) * m + (a mod m)" by simp then show "\<exists>h. (\<exists>x. h = x * m) \<and> a = h + a mod m" by fast qed simp qed lemma zmod_imp_ZMod: assumes modeq: "a mod m = b mod m" shows "ZMod m a = ZMod m b" proof - have "ZMod m a = ZMod m (a mod m)" by (rule ZMod_mod) also have "\<dots> = ZMod m (b mod m)" by (simp add: modeq[symmetric]) also have "\<dots> = ZMod m b" by (rule ZMod_mod[symmetric]) finally show ?thesis . qed corollary ZMod_eq_mod: "ZMod m a = ZMod m b \<longleftrightarrow> a mod m = b mod m" apply (rule iffI) apply (erule ZMod_imp_zmod) apply (erule zmod_imp_ZMod) done subsection \<open>Factorization\<close> definition ZFact :: "int \<Rightarrow> int set ring" where "ZFact k = \<Z> Quot (Idl\<^bsub>\<Z>\<^esub> {k})" lemmas ZFact_defs = ZFact_def FactRing_def lemma ZFact_is_cring: "cring (ZFact k)" apply (unfold ZFact_def) apply (rule ideal.quotient_is_cring) apply (intro ring.genideal_ideal) apply (simp add: cring.axioms[OF int_is_cring] ring.intro) apply simp apply (rule int_is_cring) done lemma ZFact_zero: "carrier (ZFact 0) = (\<Union>a. {{a}})" apply (insert int.genideal_zero) apply (simp add: ZFact_defs A_RCOSETS_defs r_coset_def) done lemma ZFact_one: "carrier (ZFact 1) = {UNIV}" apply (simp only: ZFact_defs A_RCOSETS_defs r_coset_def ring_record_simps) apply (subst int.genideal_one) apply (rule, rule, clarsimp) apply (rule, rule, clarsimp) apply (rule, clarsimp, arith) apply (rule, clarsimp) apply (rule exI[of _ "0"], clarsimp) done lemma ZFact_prime_is_domain: assumes pprime: "prime p" shows "domain (ZFact p)" apply (unfold ZFact_def) apply (rule primeideal.quotient_is_domain) apply (rule prime_primeideal[OF pprime]) done end
Require Export Fiat.Common.Coq__8_4__8_5__Compat. Require Import Fiat.Narcissus.Common.Specs Fiat.Narcissus.Formats.AsciiOpt. Require Import Bedrock.Word Coq.ZArith.ZArith Coq.Strings.Ascii Coq.Strings.String. Section String. (* this has an exact idential structure to _FixList_ *) Context {B : Type}. Context {cache : Cache}. Context {cacheAddNat : CacheAdd cache nat}. Context {monoid : Monoid B}. Context {monoidUnit : QueueMonoidOpt monoid bool}. Fixpoint format_string (xs : string) (ce : CacheFormat) : Comp (B * CacheFormat) := match xs with | EmptyString => ret (mempty, addE ce 0) | String x xs' => `(b1, env1) <- format_ascii x ce; `(b2, env2) <- format_string xs' env1; ret (mappend b1 b2, env2) end%comp. Fixpoint encode_string (xs : string) (ce : CacheFormat) : B * CacheFormat := match xs with | EmptyString => (mempty, addE ce 0) | String x xs' => let (b1, env1) := encode_ascii x ce in let (b2, env2) := encode_string xs' env1 in (mappend b1 b2, env2) end. Fixpoint decode_string (s : nat) (b : B) (cd : CacheDecode) : option (string * B * CacheDecode) := match s with | O => Some (EmptyString, b, addD cd 0) | S s' => `(x, b1, e1) <- decode_ascii b cd; `(xs, b2, e2) <- decode_string s' b1 e1; Some (String x xs, b2, e2) end. Local Opaque format_ascii. Local Opaque encode_ascii. Theorem String_decode_correct {P : CacheDecode -> Prop} (P_OK : forall b cd, P cd -> P (addD cd b)) : forall sz, CorrectDecoder monoid (fun ls => length ls = sz) (fun ls => length ls = sz) eq format_string (decode_string sz) P format_string. Proof. split. { intros env env' xenv l l' ext ? Eeq Ppred Penc. subst. generalize dependent env. revert env' xenv l' env_OK. induction l. { intros. inversion Penc; subst; clear Penc. rewrite mempty_left; eexists _, _; intuition eauto. simpl; eauto. apply add_correct; eauto. } { intros. simpl in *. unfold Bind2 in *; computes_to_inv; subst. injection Penc''; intros; subst. destruct v; destruct v0. destruct (proj1 (Ascii_decode_correct P_OK) _ _ _ _ _ (mappend b0 ext) env_OK Eeq I Penc) as [? [? [? xenv_OK] ] ]. simpl. rewrite <- mappend_assoc, H; simpl; split_and; subst. destruct (IHl _ _ _ H4 _ H1 Penc') as [? [? ?] ]. split_and; subst. setoid_rewrite H3; simpl; eexists _, _; intuition eauto. simpl; unfold Bind2; eauto. } } { induction sz; simpl; intros. { split; eauto; injections; repeat eexists; simpl; eauto using mempty_left. apply add_correct; eauto. } { destruct (decode_ascii t env') as [ [ [? ?] ?] | ] eqn: ? ; simpl in *; try discriminate. destruct (decode_string sz b c) as [ [ [? ?] ?] | ] eqn: ? ; simpl in *; try discriminate; injections. eapply (proj2 (Ascii_decode_correct P_OK)) in Heqo; eauto; destruct Heqo; destruct_ex; intuition; subst; eapply IHsz in Heqo0; eauto; destruct Heqo0; destruct_ex; intuition; subst. simpl. eexists _, _; intuition eauto. computes_to_econstructor; eauto. computes_to_econstructor; eauto. rewrite mappend_assoc; reflexivity. } } Qed. Theorem decode_string_lt : forall len (lt_len : lt 0 len) (b3 : B) (cd0 : CacheDecode) (a : string) (b' : B) (cd' : CacheDecode), decode_string len b3 cd0 = Some (a, b', cd') -> lt_B b' b3. Proof. induction len; simpl; intros; try omega. destruct (decode_ascii b3 cd0) as [ [ [? ?] ?] | ] eqn: ? ; simpl in *; try discriminate. eapply ascii_decode_lt in Heqo. destruct (decode_string len b c) as [ [ [? ?] ?] | ] eqn: ? ; simpl in *; try discriminate. injections. inversion lt_len; subst; simpl in *. - injections; eauto. - eapply IHlen in Heqo0; eauto; unfold lt_B in *; omega. Qed. End String.
# distribution(f::F, args) where F = djltype(F)(args...)
# djltype(::typeof(normal)) = Distributions.Normal
# djltype(::typeof(betarv)) = Distributions.Beta
# djltype(::typeof(uniform)) = Distributions.Uniform
State Before: α : Type u_1 β : Type u_2 γ : Type ?u.130759 ι : Type ?u.130762 inst✝ : CompleteLattice α f g : Filter β p q : β → Prop u v : β → α h : ∀ (x : β), p x → q x a : α ha : a ∈ {a | ∀ᶠ (x : β) in f, q x → u x ≤ a} ⊢ ∀ (x : β), (q x → u x ≤ a) → p x → u x ≤ a State After: no goals Tactic: tauto
If $f$ is a real-valued function defined on the real line, then $f$ is integrable if and only if the function $x \mapsto f(t + cx)$ is integrable for every $t, c \in \mathbb{R}$ with $c \neq 0$.
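The equivalence rests on the affine change of variables $u = t + cx$; the $c \neq 0$ restriction added above is exactly what makes the substitution invertible:

$$\int_{\mathbb{R}} f(t + cx)\,\mathrm{d}x \;=\; \frac{1}{\lvert c\rvert} \int_{\mathbb{R}} f(u)\,\mathrm{d}u \qquad (u = t + cx,\; c \neq 0),$$

so one side is finite precisely when the other is.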
lemma degree_primitive_part [simp]: "degree (primitive_part p) = degree p"
{-# OPTIONS --safe #-} open import Definition.Typed.EqualityRelation module Definition.LogicalRelation.Properties.Reflexivity {{eqrel : EqRelSet}} where open import Definition.Untyped open import Definition.Typed open import Definition.LogicalRelation open import Tools.Product open import Tools.Empty import Tools.PropositionalEquality as PE import Data.Fin as Fin import Data.Nat as Nat -- Reflexivity of reducible types. reflEq : ∀ {l Γ A r} ([A] : Γ ⊩⟨ l ⟩ A ^ r) → Γ ⊩⟨ l ⟩ A ≡ A ^ r / [A] reflEq (Uᵣ′ _ _ _ _ l< PE.refl D) = red D reflEq (ℕᵣ D) = red D reflEq (Emptyᵣ D) = red D reflEq (ne′ K [[ ⊢A , ⊢B , D ]] neK K≡K) = ne₌ _ [[ ⊢A , ⊢B , D ]] neK K≡K reflEq (Πᵣ′ rF lF lG _ _ F G [[ ⊢A , ⊢B , D ]] ⊢F ⊢G A≡A [F] [G] G-ext) = Π₌ _ _ D A≡A (λ ρ ⊢Δ → reflEq ([F] ρ ⊢Δ)) (λ ρ ⊢Δ [a] → reflEq ([G] ρ ⊢Δ [a])) reflEq (∃ᵣ′ F G [[ ⊢A , ⊢B , D ]] ⊢F ⊢G A≡A [F] [G] G-ext) = ∃₌ _ _ D A≡A (λ ρ ⊢Δ → reflEq ([F] ρ ⊢Δ)) (λ ρ ⊢Δ [a] → reflEq ([G] ρ ⊢Δ [a])) reflEq {ι ¹} (emb X [A]) = reflEq [A] reflEq {∞} (emb X [A]) = reflEq [A] reflNatural-prop : ∀ {Γ n} → Natural-prop Γ n → [Natural]-prop Γ n n reflNatural-prop (sucᵣ (ℕₜ n d t≡t prop)) = sucᵣ (ℕₜ₌ n n d d t≡t (reflNatural-prop prop)) reflNatural-prop zeroᵣ = zeroᵣ reflNatural-prop (ne (neNfₜ neK ⊢k k≡k)) = ne (neNfₜ₌ neK neK k≡k) reflEmpty-prop : ∀ {Γ n l} → Empty-prop Γ n l → [Empty]-prop Γ n n l reflEmpty-prop (ne x) = ne x x -- Reflexivity of reducible terms. -- We proceed in a layered way because Agda does not understand our -- recursions are well founded reflEqTerm⁰ : ∀ {Γ A t r} ([A] : Γ ⊩⟨ ι ⁰ ⟩ A ^ r) → Γ ⊩⟨ ι ⁰ ⟩ t ∷ A ^ r / [A] → Γ ⊩⟨ ι ⁰ ⟩ t ≡ t ∷ A ^ r / [A] reflEqTerm⁰ (ℕᵣ D) (ℕₜ n [[ ⊢t , ⊢u , d ]] t≡t prop) = ℕₜ₌ n n [[ ⊢t , ⊢u , d ]] [[ ⊢t , ⊢u , d ]] t≡t (reflNatural-prop prop) reflEqTerm⁰ (Emptyᵣ D) (Emptyₜ (ne x)) = Emptyₜ₌ (ne x x) reflEqTerm⁰ {r = [ ! , l ]} (ne′ K D neK K≡K) (neₜ k d (neNfₜ neK₁ ⊢k k≡k)) = neₜ₌ k k d d (neNfₜ₌ neK₁ neK₁ k≡k) reflEqTerm⁰ {r = [ % , l ]} (ne′ K D neK K≡K) (neₜ d) = neₜ₌ d d reflEqTerm⁰ {r = [ ! , l ]} (Πᵣ′ rF lF lG _ _ F G D ⊢F ⊢G A≡A [F] [G] G-ext) (Πₜ f d funcF f≡f [f] [f]₁) = Πₜ₌ f f d d funcF funcF f≡f (Πₜ f d funcF f≡f [f] [f]₁) (Πₜ f d funcF f≡f [f] [f]₁) (λ ρ ⊢Δ [a] → [f] ρ ⊢Δ [a] [a] (reflEqTerm⁰ ([F] ρ ⊢Δ) [a])) reflEqTerm⁰ {r = [ % , l ]} (Πᵣ′ rF lF lG _ _ F G D ⊢F ⊢G A≡A [F] [G] G-ext) X = X , X reflEqTerm⁰ (∃ᵣ′ F G D ⊢F ⊢G A≡A [F] [G] G-ext) X = X , X reflEqTerm¹ : ∀ {Γ A t r} ([A] : Γ ⊩⟨ ι ¹ ⟩ A ^ r) → Γ ⊩⟨ ι ¹ ⟩ t ∷ A ^ r / [A] → Γ ⊩⟨ ι ¹ ⟩ t ≡ t ∷ A ^ r / [A] reflEqTerm¹ (Uᵣ (Uᵣ r ⁰ X PE.refl D)) (Uₜ A d typeA A≡A [A]) = Uₜ₌ (Uₜ A d typeA A≡A [A]) (Uₜ A d typeA A≡A [A]) A≡A (λ [ρ] ⊢Δ → reflEq ([A] [ρ] ⊢Δ)) reflEqTerm¹ (Uᵣ (Uᵣ r ¹ () PE.refl D)) (Uₜ A d typeA A≡A [A]) reflEqTerm¹ (ℕᵣ D) (ℕₜ n [[ ⊢t , ⊢u , d ]] t≡t prop) = ℕₜ₌ n n [[ ⊢t , ⊢u , d ]] [[ ⊢t , ⊢u , d ]] t≡t (reflNatural-prop prop) reflEqTerm¹ (Emptyᵣ D) (Emptyₜ (ne x)) = Emptyₜ₌ (ne x x) reflEqTerm¹ {r = [ ! , l ]} (ne′ K D neK K≡K) (neₜ k d (neNfₜ neK₁ ⊢k k≡k)) = neₜ₌ k k d d (neNfₜ₌ neK₁ neK₁ k≡k) reflEqTerm¹ {r = [ % , l ]} (ne′ K D neK K≡K) (neₜ d) = neₜ₌ d d reflEqTerm¹ {r = [ ! 
, l ]} (Πᵣ′ rF lF lG _ _ F G D ⊢F ⊢G A≡A [F] [G] G-ext) (Πₜ f d funcF f≡f [f] [f]₁) = Πₜ₌ f f d d funcF funcF f≡f (Πₜ f d funcF f≡f [f] [f]₁) (Πₜ f d funcF f≡f [f] [f]₁) (λ ρ ⊢Δ [a] → [f] ρ ⊢Δ [a] [a] (reflEqTerm¹ ([F] ρ ⊢Δ) [a])) reflEqTerm¹ {r = [ % , l ]} (Πᵣ′ rF lF lG _ _ F G D ⊢F ⊢G A≡A [F] [G] G-ext) X = X , X reflEqTerm¹ (∃ᵣ′ F G D ⊢F ⊢G A≡A [F] [G] G-ext) X = X , X reflEqTerm¹ (emb X [A]) = reflEqTerm⁰ [A] reflEqTerm∞ : ∀ {Γ A t r} ([A] : Γ ⊩⟨ ∞ ⟩ A ^ r) → Γ ⊩⟨ ∞ ⟩ t ∷ A ^ r / [A] → Γ ⊩⟨ ∞ ⟩ t ≡ t ∷ A ^ r / [A] reflEqTerm∞ (Uᵣ (Uᵣ r ⁰ X eq D)) (Uₜ A d typeA A≡A [A]) = Uₜ₌ (Uₜ A d typeA A≡A [A]) (Uₜ A d typeA A≡A [A]) A≡A (λ [ρ] ⊢Δ → reflEq ([A] [ρ] ⊢Δ)) reflEqTerm∞ (Uᵣ (Uᵣ r ¹ X eq D)) (Uₜ A d typeA A≡A [A]) = Uₜ₌ (Uₜ A d typeA A≡A [A]) (Uₜ A d typeA A≡A [A]) A≡A (λ [ρ] ⊢Δ → reflEq ([A] [ρ] ⊢Δ)) reflEqTerm∞ (ℕᵣ D) (ℕₜ n [[ ⊢t , ⊢u , d ]] t≡t prop) = ℕₜ₌ n n [[ ⊢t , ⊢u , d ]] [[ ⊢t , ⊢u , d ]] t≡t (reflNatural-prop prop) reflEqTerm∞ (Emptyᵣ D) (Emptyₜ (ne x)) = Emptyₜ₌ (ne x x) reflEqTerm∞ {r = [ ! , l ]} (ne′ K D neK K≡K) (neₜ k d (neNfₜ neK₁ ⊢k k≡k)) = neₜ₌ k k d d (neNfₜ₌ neK₁ neK₁ k≡k) reflEqTerm∞ {r = [ % , l ]} (ne′ K D neK K≡K) (neₜ d) = neₜ₌ d d reflEqTerm∞ {r = [ ! , l ]} (Πᵣ′ rF lF lG _ _ F G D ⊢F ⊢G A≡A [F] [G] G-ext) (Πₜ f d funcF f≡f [f] [f]₁) = Πₜ₌ f f d d funcF funcF f≡f (Πₜ f d funcF f≡f [f] [f]₁) (Πₜ f d funcF f≡f [f] [f]₁) (λ ρ ⊢Δ [a] → [f] ρ ⊢Δ [a] [a] (reflEqTerm∞ ([F] ρ ⊢Δ) [a])) reflEqTerm∞ {r = [ % , l ]} (Πᵣ′ rF lF lG F G _ _ D ⊢F ⊢G A≡A [F] [G] G-ext) X = X , X reflEqTerm∞ (∃ᵣ′ F G D ⊢F ⊢G A≡A [F] [G] G-ext) X = X , X reflEqTerm∞ (emb X [A]) = reflEqTerm¹ [A] reflEqTerm : ∀ {l Γ A t r} ([A] : Γ ⊩⟨ l ⟩ A ^ r) → Γ ⊩⟨ l ⟩ t ∷ A ^ r / [A] → Γ ⊩⟨ l ⟩ t ≡ t ∷ A ^ r / [A] reflEqTerm {l = ι ⁰} [A] [t] = reflEqTerm⁰ [A] [t] reflEqTerm {l = ι ¹} [A] [t] = reflEqTerm¹ [A] [t] reflEqTerm {l = ∞} [A] [t] = reflEqTerm∞ [A] [t]
theory Demo1
imports Main
begin

subsection {* @{text "?thesis"}, @{text this}, \isakeyword{then} *}

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume "A \<and> B"
  from this show "B \<and> A"
  proof
    assume "A" "B"
    show ?thesis ..
  qed
qed

subsection {* \isakeyword{with} *}

lemma "A \<and> B \<longrightarrow> B \<and> A"
proof
  assume ab: "A \<and> B"
  from ab have a: "A" ..
  from ab have b: "B" ..
  from b a show "B \<and> A" ..
qed

subsection{*Predicate calculus*}

text{* \isakeyword{fix} *}

lemma "\<forall>x. P x \<Longrightarrow> \<forall>x. P(f x)"
proof
  fix a
  assume "\<forall>x. P x"
  then show "P(f a)" ..
qed

lemma "\<exists>x. P(f x) \<Longrightarrow> \<exists>y. P y"
proof -
  assume "\<exists>x. P(f x)"
  then show ?thesis
  proof
    fix x
    assume "P(f x)"
    show ?thesis ..
  qed
qed

text{* \isakeyword{obtain} *}

lemma "\<exists>x. P(f x) \<Longrightarrow> \<exists>y. P y"
proof -
  assume "\<exists>x. P(f x)"
  then obtain x where "P(f x)" ..
  then show "\<exists>y. P y" ..
qed

end
module A.Issue1635 (A : Set₁) where

data Foo : Set where
  foo : Foo
inductive Formula
  | eqf  : Nat → Nat → Formula
  | impf : Formula → Formula → Formula

def Formula.denote : Formula → Prop
  | eqf n1 n2  => n1 = n2
  | impf f1 f2 => denote f1 → denote f2

theorem Formula.denote_eqf (n1 n2 : Nat) : denote (eqf n1 n2) = (n1 = n2) :=
  rfl

theorem Formula.denote_impf (f1 f2 : Formula) : denote (impf f1 f2) = (denote f1 → denote f2) :=
  rfl
classdef ColorSelector < wt.abstract.BaseWidget &... wt.mixin.Enableable & wt.mixin.FontStyled & wt.mixin.Tooltipable & ... wt.mixin.FieldColorable % A color selection control with browse button % Copyright 2020-2021 The MathWorks Inc. %% Public properties properties (AbortSet) % The current value shown Value (1,3) double {wt.validators.mustBeBetweenZeroAndOne} = [0 1 0] end %properties % These properties do not trigger the update method properties (AbortSet, UsedInUpdate = false) % Indicates whether to show the edit field ShowEditField (1,1) matlab.lang.OnOffSwitchState = true end %properties %% Events events (HasCallbackProperty, NotifyAccess = protected) % Triggered on value changed, has companion callback ValueChanged end %events %% Internal Properties properties ( Transient, NonCopyable, ... Access = {?wt.abstract.BaseWidget, ?wt.test.BaseWidgetTest} ) % Button ButtonControl (1,1) matlab.ui.control.Button % Edit control EditControl (1,1) matlab.ui.control.EditField end %properties %% Protected methods methods (Access = protected) function setup(obj) % Call superclass setup first to establish the grid [email protected](); % Set default size obj.Position(3:4) = [100 25]; % Configure Grid obj.Grid.ColumnWidth = {'1x',25}; obj.Grid.RowHeight = {'1x'}; % Create the standard edit control obj.EditControl = matlab.ui.control.EditField(... "Parent",obj.Grid,... "ValueChangedFcn",@(h,e)obj.onTextChanged(e)); % Create Button obj.ButtonControl = matlab.ui.control.Button(... "Parent",obj.Grid,... "Text","",... "ButtonPushedFcn",@(h,e)obj.onButtonPushed(e)); % Update the internal component lists obj.FontStyledComponents = [obj.EditControl]; obj.FieldColorableComponents = [obj.EditControl]; obj.EnableableComponents = [obj.EditControl, obj.ButtonControl]; obj.TooltipableComponents = [obj.EditControl, obj.ButtonControl]; end %function function update(obj) % Update the edit control text obj.EditControl.Value = mat2str(obj.Value,2); % Update the button color obj.ButtonControl.BackgroundColor = obj.Value; end %function function updateFieldVisibility(obj) % Is history being shown? If so, update history and items if obj.ShowEditField % Showing edit field if isempty(obj.EditControl.Parent) obj.ButtonControl.Layout.Column = 2; obj.EditControl.Parent = obj.Grid; obj.EditControl.Layout.Column = 1; obj.EditControl.Layout.Row = 1; end else % Hiding edit field if ~isempty(obj.EditControl.Parent) obj.EditControl.Parent = []; obj.ButtonControl.Layout.Column = [1 2]; end end %if end %function function onButtonPushed(obj,~) % Triggered on button pushed % Get prior value oldValue = obj.Value; % Prompt for a new color newColor = uisetcolor(oldValue); % Did user make a choice or cancel? if ~isequal(newColor,0) % Update the color obj.Value = newColor; end % Trigger event evtOut = wt.eventdata.ValueChangedData(obj.Value, oldValue); notify(obj,"ValueChanged",evtOut); end %function function onTextChanged(obj,evt) % Triggered on text interaction - subclass may override % Get prior value oldValue = obj.Value; % Trap errors try % Store new result obj.Value = str2num(evt.Value); %#ok<ST2NM> % Trigger event evtOut = wt.eventdata.ValueChangedData(obj.Value, oldValue); notify(obj,"ValueChanged",evtOut); catch % Restore original value obj.update(); end %try end %function end %methods %% Accessors methods function set.ShowEditField(obj,value) obj.ShowEditField = value; obj.updateFieldVisibility() end end % methods end % classdef
subroutine read_locinfo() ! read localization scales from text file (hybens_info) use kinds, only : r_kind,i_kind,r_single use params, only : nlevs,corrlengthnh,corrlengthtr,corrlengthsh,letkf_flag use enkf_obsmod, only: obloc, oblnp, corrlengthsq, lnsigl, nobstot, & obpress, obtype, nobs_conv, nobs_oz, oberrvar use kdtree2_module, only: kdtree2, kdtree2_create, kdtree2_destroy, & kdtree2_result, kdtree2_n_nearest use constants, only: zero, rearth use gridinfo, only: gridloc, logp use mpisetup logical lexist character(len=40) :: fname = 'hybens_info' real(r_kind) oblnp_indx(1) real(r_single), allocatable, dimension(:) :: & hlength,vlength,lnsigl1,corrlengthsq1 real(r_kind) logp_tmp(nlevs) type(kdtree2),pointer :: kdtree_grid type(kdtree2_result),dimension(:),allocatable :: sresults integer(i_kind) k, msig, iunit, n1, n2 ,ideln, nob, ierr real(r_kind) :: tmp iunit = 91 ! read in vertical profile of horizontal and vertical localization length ! scales, set values for each ob. ! First, check the status of input file inquire(file=trim(fname),exist=lexist) if ( lexist ) then allocate(hlength(nlevs),vlength(nlevs)) allocate(corrlengthsq1(nobstot),lnsigl1(nobstot)) open(iunit,file=trim(fname),form='formatted') rewind(iunit) read(iunit,100) msig if ( msig /= nlevs ) then write(6,*) 'READ_LOCINFO: ***ERROR*** error in ',trim(fname) write(6,*) 'READ_LOCINFO: levels do not match,msig[read in],nsig[defined] = ',msig,nlevs close(iunit) call stop2(123) endif do k=1,nlevs read(iunit,101) hlength(k),vlength(k),tmp,tmp hlength(k) = hlength(k)/0.388 vlength(k) = abs(vlength(k))/0.388 ! factor of 0.388 to convert from e-folding scale ! to distance Gaspari-Cohn function goes to zero. if (nproc .eq. 0) print *,'level=',k,'localization scales (horiz,vert)=',hlength(k),vlength(k) end do close(iunit) else write(6,*) 'READ_LOCINFO: ***ERROR*** INPUT FILE MISSING -- ',trim(fname) call stop2(124) end if 100 format(I4) 101 format(F8.1,3x,F5.1,2(3x,F8.4)) kdtree_grid => kdtree2_create(gridloc,sort=.false.,rearrange=.true.) allocate(sresults(1)) if (nobstot > numproc) then ideln = int(real(nobstot)/real(numproc)) n1 = 1 + nproc*ideln n2 = (nproc+1)*ideln if (nproc == numproc-1) n2 = nobstot else if(nproc < nobstot)then n1 = nproc+1 n2 = n1 else n1=1 n2=0 end if end if lnsigl1=zero corrlengthsq1=zero do nob=n1,n2 if (oberrvar(nob) .lt. 1.e20) then ! find horizontal grid point closest to this ob call kdtree2_n_nearest(tp=kdtree_grid,qv=obloc(:,nob),nn=1,results=sresults) ! find vertical level closest to ob pressure at that grid point. oblnp_indx(1) = oblnp(nob) if (oblnp_indx(1) .le. logp(sresults(1)%idx,1)) then oblnp_indx(1) = 1 else if (oblnp_indx(1) .ge. logp(sresults(1)%idx,nlevs)) then oblnp_indx(1) = nlevs else logp_tmp = logp(sresults(1)%idx,1:nlevs) call grdcrd(oblnp_indx,1,logp_tmp,nlevs,1) end if corrlengthsq1(nob) = (hlength(nint(oblnp_indx(1)))*1.e3_r_single/rearth)**2 lnsigl1(nob) = vlength(nint(oblnp_indx(1))) ! don't use computed value for ps vertical localization. !if (obtype(nob)(1:3) .eq. ' ps') lnsigl1(nob) = lnsigl(nob) ! for radiance obs, double vertical localization used. !if (nob > nobs_conv+nobs_oz) lnsigl(nob) = 2.*lnsigl(nob) !if (nproc .eq. 0 .and. obtype(nob)(1:3) .eq. ' t') then ! write(6,102) nob,trim(obtype(nob)),obpress(nob),oblnp_indx(1),& ! sqrt(corrlengthsq1(nob))*rearth/1000.,lnsigl1(nob) !endif !102 format(i7,1x,a20,1x,f6.1,1x,f5.2,1x,f6.1,1x,f4.2) else corrlengthsq1(nob) = corrlengthsq(nob) lnsigl1(nob) = lnsigl(nob) end if enddo if (nproc .eq. 0) close(iunit) ! 
distribute the results to all processors. call mpi_allreduce(lnsigl1,lnsigl,nobstot,mpi_real4,mpi_sum,mpi_comm_world,ierr) call mpi_allreduce(corrlengthsq1,corrlengthsq,nobstot,mpi_real4,mpi_sum,mpi_comm_world,ierr) call kdtree2_destroy(kdtree_grid) ! For LETKF, modify values of corrlengthnh,tr,sh for use in observation box ! calculation to be equal to maximum value for any level. if (letkf_flag) then corrlengthnh=maxval(hlength(1:nlevs))*1.e3_r_single/rearth corrlengthtr = corrlengthnh corrlengthsh = corrlengthnh endif deallocate(sresults,hlength,vlength,corrlengthsq1,lnsigl1) end subroutine read_locinfo
# # generate multi-day events from summarized GHCD data # - expects GHCD format input CSV files, named by station and year in 'tmp' subdirectory # (created by fileterData.py) # - expects seasonal baseline data for all stations in a single CSV file 'season_baseline.csv' in the 'out' subdirectory # (this is created by summarize.r) # - writes output temperature events CSV into 'out' subdirectory # D. Dorsettt 20-Jul-2019 library(tidyverse) library(lubridate) library(here) setwd("E:/WeatherData/out") stations <- c("USW00014764","USW00014739","USW00014758","USW00094789","USW00013739","USW00093721","USW00013750","USW00013748","USW00013782","USC00084366","USW00012849","USW00092811") # pull baseline data season_baseline <- read.csv("season_baseline.csv") seasons <- c("spring","summer","fall","winter") # this is the data frame we're building tempevents <- data.frame(matrix(ncol=6,nrow=0)) names(tempevents) = c("STATION","YEAR","SEASON","TYPE","EVENT","LABEL") # go back through daily temperature data looking for cold/heat waves based on station highest average max and lowest average min through the whole year for (i in 1:length(stations)) { station = stations[i] file_list <- list.files(path="E:/WeatherData/tmp", paste0(station,".*.csv")) statavg <- season_baseline %>% filter(STATION==station) maxtemp = max(statavg$TMAX_MEAN) mintemp = min(statavg$TMIN_MEAN) for (y in 1:length(file_list)) { if (!file.size(paste0("E:/WeatherData/tmp/",file_list[y])) == 0) { print(paste("Processing", file_list[y])) raw <- read.csv(paste0("E:/WeatherData/tmp/",file_list[y]), header=FALSE) %>% select(1,2,3,4) names(raw) <- c("station","date","tag","value") raw <- raw %>% filter(tag == "TMAX") # extract year, month and season, then pivot based on GHCD dataset tag hight_data <- raw %>% mutate(DATE=ymd(date),YEAR=year(DATE),MONTH=month(DATE),SEASON=case_when(MONTH>2 & MONTH<6 ~ "spring",MONTH>5 & MONTH<9 ~ "summer",MONTH>8 & MONTH<12 ~ "fall",TRUE ~ "winter")) %>% pivot_wider(names_from=tag, values_from=value) # convert temp from tenths degC to degF if ("TMAX" %in% colnames(hight_data)) { hight_data$TMAX <- ((hight_data$TMAX / 10.0) * (9.0 / 5.0)) + 32.0 } else { hight_data <- mutate(hight_data, TMAX=NA) } # mark HOT days as more than 9 deg above the average maximum and COLD days as more than 9 deg below the average minimum hight_data <- hight_data %>% mutate(HOT=TMAX > (maxtemp + 9), COLD=TMAX < (mintemp - 9)) # summarize by season for (ss in 1:length(seasons)) { season = seasons[ss] season_avgt <- hight_data %>% filter(SEASON==season) # run-length-encode to reduce to vectors of streaks streaks <- rle(season_avgt$HOT) drow = 1 if (length(streaks$lengths) > 0) { for (r in 1:length(streaks$values)) { # record streaks where HOT is TRUE and more than 3 days in length if (streaks$values[r] && streaks$lengths[r] > 3) { tempevents <- add_row(tempevents, STATION=station, YEAR=season_avgt$YEAR[drow], SEASON=season, TYPE="TMAX", EVENT="Heat Wave",LABEL=sprintf("%d days", streaks$lengths[r])) } drow = drow + streaks$lengths[r] } } streaks <- rle(season_avgt$COLD) drow = 1 if (length(streaks$lengths) > 0) { for (r in 1:length(streaks$values)) { # record streaks where COLD is TRUE and more than 3 days in length if (streaks$values[r] && streaks$lengths[r] > 3) { tempevents <- add_row(tempevents, STATION=station, YEAR=season_avgt$YEAR[drow], SEASON=season, TYPE="TMIN", EVENT="Cold Wave",LABEL=sprintf("%d days", streaks$lengths[r])) } drow = drow + streaks$lengths[r] } } } } } } # write 
CSV summary datafile write.csv(tempevents, "tempevents.csv", row.names=FALSE)
-- | -- Module : Jeopardy.Controller -- Description : MVC for our game -- Copyright : (c) Jonatan H Sundqvist, year -- License : MIT -- Maintainer : Jonatan H Sundqvist -- Stability : experimental|stable -- Portability : POSIX (not sure) -- -- Created date year -- TODO | - -- - -- SPEC | - -- - module Jeopardy.Controller where --------------------------------------------------------------------------------------------------- -- We'll need these --------------------------------------------------------------------------------------------------- import Graphics.UI.Gtk -- import Graphics.Rendering.Cairo (liftIO, fill) -- import Data.IORef -- import Data.Complex -- import Data.Maybe (listToMaybe, maybe) import Data.List (findIndex) import Text.Printf import qualified Southpaw.Picasso.Palette as Palette import qualified Southpaw.Interactive.Application as App import Jeopardy.Graphics -- TODO: Better name import Jeopardy.Core -- TODO: Better name import Jeopardy.Curator -- import Tiler -- --------------------------------------------------------------------------------------------------- -- Types --------------------------------------------------------------------------------------------------- -- | -- TODO: Input state (?) -- data App = App { _window :: Window, _canvas :: DrawingArea, _size :: (Int, Int), _state :: IORef AppState } -- data AppState = AppState { _game :: Game, _selected :: Maybe Int, _path :: [Complex Double] } -- --------------------------------------------------------------------------------------------------- -- Data --------------------------------------------------------------------------------------------------- fps = 30 :: Int --------------------------------------------------------------------------------------------------- -- Functions --------------------------------------------------------------------------------------------------- -- Events ----------------------------------------------------------------------------------------- -- | -- ondelete :: IO Bool ondelete = do -- TODO: Uhmmm... what? liftIO $ do mainQuit putStrLn "Goodbye!" return False -- | onmousemotion stateref = do (mx, my) <- eventCoordinates liftIO $ do let cursor = mx:+my -- let radius = 18 -- appstate@(AppState { _path=path }) <- readIORef stateref writeIORef stateref $ appstate { _selected=findIndex (within radius cursor) path } return False where within r p = (< r) . realPart . abs . subtract p -- | ondraw appstate = do -- renderGame $ _game appstate -- Testing an unrelated tiling function renderPathWithJoints Palette.chartreuse Palette.darkviolet 18 5 $ _path appstate assuming (_selected appstate) $ \ sel -> do renderCircle 22 $ _path appstate !! sel Palette.choose Palette.limegreen fill renderCircle 22 $ _path appstate !! opposite sel Palette.choose Palette.limegreen fill where assuming (Just a) f = f a assuming _ _ = return () opposite n = (n + 3) `mod` (length $ _path appstate) -- | onanimate :: DrawingArea -> IORef AppState -> IO Bool onanimate canvas stateref = do widgetQueueDraw canvas return True --------------------------------------------------------------------------------------------------- -- | createApp :: IO (App.App) createApp = App.createWindowWithCanvas 650 650 $ AppState { _game=createGame, _selected=Nothing, _path=[ (200:+200)+(70:+0)*(cos θ:+sin θ) | θ <- [0, τ/6..(τ*5/6)] ] } --------------------------------------------------------------------------------------------------- -- | -- TODO: Rename, move (?) 
mainGTK :: IO () mainGTK = do (App.App { App._window=window, App._canvas=canvas, App._size=size, App._state=stateref }) <- createApp timeoutAdd (onanimate canvas stateref) (1000 `div` fps) >> return () -- Events canvas `on` draw $ (liftIO $ readIORef stateref) >>= ondraw canvas `on` motionNotifyEvent $ onmousemotion stateref -- canvas `on` buttonPressEvent $ onbuttonpress worldref -- canvas `on` buttonReleaseEvent $ onbuttonreleased worldref -- window `on` configureEvent $ onresize window worldref window `on` deleteEvent $ ondelete -- window `on` keyPressEvent $ onkeypress worldref mainGUI
!=============================================================================== ! Copyright 2006-2017 Intel Corporation All Rights Reserved. ! ! The source code, information and material ("Material") contained herein is ! owned by Intel Corporation or its suppliers or licensors, and title to such ! Material remains with Intel Corporation or its suppliers or licensors. The ! Material contains proprietary information of Intel or its suppliers and ! licensors. The Material is protected by worldwide copyright laws and treaty ! provisions. No part of the Material may be used, copied, reproduced, ! modified, published, uploaded, posted, transmitted, distributed or disclosed ! in any way without Intel's prior express written permission. No license under ! any patent, copyright or other intellectual property rights in the Material ! is granted to or conferred upon you, either expressly, by implication, ! inducement, estoppel or otherwise. Any license under such intellectual ! property rights must be express and approved by Intel in writing. ! ! Unless otherwise agreed by Intel in writing, you may not remove or alter this ! notice or any other notice embedded in Materials by Intel or Intel's ! suppliers or licensors in any way. !=============================================================================== ! Content: ! Intel(R) Math Kernel Library (Intel(R) MKL) interface for TT routines !******************************************************************************* MODULE MKL_TT_TYPE ! Parameters definitions for the kind of the Trigonometric Transform INTEGER, PARAMETER :: MKL_SINE_TRANSFORM = 0 INTEGER, PARAMETER :: MKL_COSINE_TRANSFORM = 1 INTEGER, PARAMETER :: MKL_STAGGERED_COSINE_TRANSFORM = 2 INTEGER, PARAMETER :: MKL_STAGGERED_SINE_TRANSFORM = 3 INTEGER, PARAMETER :: MKL_STAGGERED2_COSINE_TRANSFORM = 4 INTEGER, PARAMETER :: MKL_STAGGERED2_SINE_TRANSFORM = 5 END MODULE MKL_TT_TYPE MODULE MKL_TRIG_TRANSFORMS USE MKL_TT_TYPE USE MKL_DFTI INTERFACE SUBROUTINE D_INIT_TRIG_TRANSFORM(n, tt_type, ipar,dpar, stat) USE MKL_DFT_TYPE !DEC$ ATTRIBUTES C, ALIAS: '_d_init_trig_transform' :: D_INIT_TRIG_TRANSFORM !MS$ATTRIBUTES REFERENCE :: n !MS$ATTRIBUTES REFERENCE :: tt_type !MS$ATTRIBUTES REFERENCE :: ipar !MS$ATTRIBUTES REFERENCE :: dpar !MS$ATTRIBUTES REFERENCE :: stat INTEGER, INTENT(IN) :: n, tt_type INTEGER, INTENT(INOUT) :: ipar(*) REAL(8), INTENT(INOUT) :: dpar(*) INTEGER, INTENT(OUT) :: stat END SUBROUTINE D_INIT_TRIG_TRANSFORM SUBROUTINE D_COMMIT_TRIG_TRANSFORM(f, handle, ipar,dpar, stat) USE MKL_DFT_TYPE !DEC$ ATTRIBUTES C, ALIAS: '_d_commit_trig_transform' :: D_COMMIT_TRIG_TRANSFORM !MS$ATTRIBUTES REFERENCE :: f !MS$ATTRIBUTES REFERENCE :: handle !MS$ATTRIBUTES REFERENCE :: ipar !MS$ATTRIBUTES REFERENCE :: dpar !MS$ATTRIBUTES REFERENCE :: stat REAL(8), INTENT(INOUT) :: f(*) TYPE(DFTI_DESCRIPTOR), POINTER :: handle INTEGER, INTENT(INOUT) :: ipar(*) REAL(8), INTENT(OUT) :: dpar(*) INTEGER, INTENT(OUT) :: stat END SUBROUTINE D_COMMIT_TRIG_TRANSFORM SUBROUTINE D_FORWARD_TRIG_TRANSFORM(f, handle, ipar,dpar, stat) USE MKL_DFT_TYPE !DEC$ ATTRIBUTES C, ALIAS: '_d_forward_trig_transform' :: D_FORWARD_TRIG_TRANSFORM !MS$ATTRIBUTES REFERENCE :: f !MS$ATTRIBUTES REFERENCE :: handle !MS$ATTRIBUTES REFERENCE :: ipar !MS$ATTRIBUTES REFERENCE :: dpar !MS$ATTRIBUTES REFERENCE :: stat REAL(8), INTENT(INOUT) :: f(*) TYPE(DFTI_DESCRIPTOR), POINTER :: handle INTEGER, INTENT(INOUT) :: ipar(*) REAL(8), INTENT(IN) :: dpar(*) INTEGER, INTENT(OUT) :: stat END SUBROUTINE D_FORWARD_TRIG_TRANSFORM SUBROUTINE 
D_BACKWARD_TRIG_TRANSFORM(f, handle, ipar,dpar, stat) USE MKL_DFT_TYPE !DEC$ ATTRIBUTES C, ALIAS: '_d_backward_trig_transform' :: D_BACKWARD_TRIG_TRANSFORM !MS$ATTRIBUTES REFERENCE :: f !MS$ATTRIBUTES REFERENCE :: handle !MS$ATTRIBUTES REFERENCE :: ipar !MS$ATTRIBUTES REFERENCE :: dpar !MS$ATTRIBUTES REFERENCE :: stat REAL(8), INTENT(INOUT) :: f(*) TYPE(DFTI_DESCRIPTOR), POINTER :: handle INTEGER, INTENT(INOUT) :: ipar(*) REAL(8), INTENT(IN) :: dpar(*) INTEGER, INTENT(OUT) :: stat END SUBROUTINE D_BACKWARD_TRIG_TRANSFORM SUBROUTINE S_INIT_TRIG_TRANSFORM(n, tt_type, ipar,spar, stat) USE MKL_DFT_TYPE !DEC$ ATTRIBUTES C, ALIAS: '_s_init_trig_transform' :: S_INIT_TRIG_TRANSFORM !MS$ATTRIBUTES REFERENCE :: n !MS$ATTRIBUTES REFERENCE :: tt_type !MS$ATTRIBUTES REFERENCE :: ipar !MS$ATTRIBUTES REFERENCE :: spar !MS$ATTRIBUTES REFERENCE :: stat INTEGER, INTENT(IN) :: n, tt_type INTEGER, INTENT(INOUT) :: ipar(*) REAL(4), INTENT(INOUT) :: spar(*) INTEGER, INTENT(OUT) :: stat END SUBROUTINE S_INIT_TRIG_TRANSFORM SUBROUTINE S_COMMIT_TRIG_TRANSFORM(f, handle, ipar,spar, stat) USE MKL_DFT_TYPE !DEC$ ATTRIBUTES C, ALIAS: '_s_commit_trig_transform' :: S_COMMIT_TRIG_TRANSFORM !MS$ATTRIBUTES REFERENCE :: f !MS$ATTRIBUTES REFERENCE :: handle !MS$ATTRIBUTES REFERENCE :: ipar !MS$ATTRIBUTES REFERENCE :: spar !MS$ATTRIBUTES REFERENCE :: stat REAL(4), INTENT(INOUT) :: f(*) TYPE(DFTI_DESCRIPTOR), POINTER :: handle INTEGER, INTENT(INOUT) :: ipar(*) REAL(4), INTENT(OUT) :: spar(*) INTEGER, INTENT(OUT) :: stat END SUBROUTINE S_COMMIT_TRIG_TRANSFORM SUBROUTINE S_FORWARD_TRIG_TRANSFORM(f, handle, ipar,spar, stat) USE MKL_DFT_TYPE !DEC$ ATTRIBUTES C, ALIAS: '_s_forward_trig_transform' :: S_FORWARD_TRIG_TRANSFORM !MS$ATTRIBUTES REFERENCE :: f !MS$ATTRIBUTES REFERENCE :: handle !MS$ATTRIBUTES REFERENCE :: ipar !MS$ATTRIBUTES REFERENCE :: spar !MS$ATTRIBUTES REFERENCE :: stat REAL(4), INTENT(INOUT) :: f(*) TYPE(DFTI_DESCRIPTOR), POINTER :: handle INTEGER, INTENT(INOUT) :: ipar(*) REAL(4), INTENT(IN) :: spar(*) INTEGER, INTENT(OUT) :: stat END SUBROUTINE S_FORWARD_TRIG_TRANSFORM SUBROUTINE S_BACKWARD_TRIG_TRANSFORM(f, handle, ipar,spar, stat) USE MKL_DFT_TYPE !DEC$ ATTRIBUTES C, ALIAS: '_s_backward_trig_transform' :: S_BACKWARD_TRIG_TRANSFORM !MS$ATTRIBUTES REFERENCE :: f !MS$ATTRIBUTES REFERENCE :: handle !MS$ATTRIBUTES REFERENCE :: ipar !MS$ATTRIBUTES REFERENCE :: spar !MS$ATTRIBUTES REFERENCE :: stat REAL(4), INTENT(INOUT) :: f(*) TYPE(DFTI_DESCRIPTOR), POINTER :: handle INTEGER, INTENT(INOUT) :: ipar(*) REAL(4), INTENT(IN) :: spar(*) INTEGER, INTENT(OUT) :: stat END SUBROUTINE S_BACKWARD_TRIG_TRANSFORM SUBROUTINE FREE_TRIG_TRANSFORM(handle, ipar,stat) USE MKL_DFT_TYPE !DEC$ ATTRIBUTES C, ALIAS: '_free_trig_transform' :: FREE_TRIG_TRANSFORM !MS$ATTRIBUTES REFERENCE :: handle !MS$ATTRIBUTES REFERENCE :: ipar !MS$ATTRIBUTES REFERENCE :: stat INTEGER, INTENT(INOUT) :: ipar(*) TYPE(DFTI_DESCRIPTOR), POINTER :: handle INTEGER, INTENT(OUT) :: stat END SUBROUTINE FREE_TRIG_TRANSFORM END INTERFACE END MODULE MKL_TRIG_TRANSFORMS
----------------------------------------------------------------------------- -- | -- Module : Numeric.LinearAlgebra.Packed.Herm -- Copyright : Copyright (c) 2010, Patrick Perry <[email protected]> -- License : BSD3 -- Maintainer : Patrick Perry <[email protected]> -- Stability : experimental -- -- Hermitian views of packed matrices. -- module Numeric.LinearAlgebra.Packed.Herm ( -- * Immutable interface -- ** Vector multiplication hermMulVector, hermMulVectorWithScale, addHermMulVectorWithScales, -- ** Updates hermRank1Update, hermRank2Update, -- * Mutable interface hermCreate, -- ** Vector multiplication hermMulVectorTo, hermMulVectorWithScaleTo, addHermMulVectorWithScalesM_, -- ** Updates hermRank1UpdateM_, hermRank2UpdateM_, ) where import Numeric.LinearAlgebra.Packed.Base
#define BOOST_TEST_MODULE Container #include <boost/test/included/unit_test.hpp> #include "../NativeContainer/NativeContainer.h" #include "Counter.h" BOOST_AUTO_TEST_CASE(container_controlled_instance) { auto container = std::make_shared<NativeContainer>(); container->RegisterType<CounterInterface, Counter, ContainerControlledManager>(); { auto simpleTest = container->Resolve<CounterInterface>(); simpleTest->Increment(); } { auto simpleTest = container->Resolve<CounterInterface>(); simpleTest->Increment(); } { auto simpleTest = container->Resolve<CounterInterface>(); BOOST_TEST(simpleTest->GetValue() == 2); } } BOOST_AUTO_TEST_CASE(container_per_resolve_instance) { auto container = std::make_shared<NativeContainer>(); container->RegisterType<CounterInterface, Counter, PerResolveManager>(); { auto simpleTest = container->Resolve<CounterInterface>(); simpleTest->Increment(); } { auto simpleTest = container->Resolve<CounterInterface>(); simpleTest->Increment(); } { auto simpleTest = container->Resolve<CounterInterface>(); BOOST_TEST(simpleTest->GetValue() == 0); } } BOOST_AUTO_TEST_CASE(container_externally_managed_instance_1) { auto container = std::make_shared<NativeContainer>(); container->RegisterType<CounterInterface, Counter, ExternallyControlledManager>(); { auto simpleTest = container->Resolve<CounterInterface>(); simpleTest->Increment(); } { auto simpleTest = container->Resolve<CounterInterface>(); simpleTest->Increment(); } { auto simpleTest = container->Resolve<CounterInterface>(); BOOST_TEST(simpleTest->GetValue() == 0); } } BOOST_AUTO_TEST_CASE(container_externally_managed_instance_2) { auto container = std::make_shared<NativeContainer>(); container->RegisterType<CounterInterface, Counter, ExternallyControlledManager>(); auto storage = container->Resolve<CounterInterface>(); storage->Increment(); { auto simpleTest = container->Resolve<CounterInterface>(); simpleTest->Increment(); } { auto simpleTest = container->Resolve<CounterInterface>(); simpleTest->Increment(); } { auto simpleTest = container->Resolve<CounterInterface>(); BOOST_TEST(simpleTest->GetValue() == 3); } }
Require Import core core_axioms fintype fintype_axioms. Import ScopedNotations. (* From Chapter9 Require Export stlc. *) Load stlc. Set Implicit Arguments. Unset Strict Implicit. Require Import Setoid Morphisms. (*** Rewriting laws that have to hold ***) (* one of the rewriting laws of lambda-sigma *) (* Goal forall m (s: tm (S m)), (ren_tm (var_zero .: shift) s) = s. *) (* Proof. *) (* (* can be proved by asimpl both in the original form and reformulated as a substitution *) *) (* intros m s. *) (* asimpl. *) (* reflexivity. *) (* Restart. *) (* intros m s. *) (* substify. *) (* asimpl. *) (* reflexivity. *) (* Qed. *) (*** Idea for normalization ***) (* I want to be able to say assert (H: s[sigma] = _) by (now asimpl) * so that it fills in the normal form in the evar. * With normal setoid rewriting this is not directly possible because setoid rewriting seems to behave differently on goals of the form * `s[sigma] = t[tau]` where it normalizes both sides than on goals of the form * `s[sigma] = ?E` where it fails to normalize the left side. * * A hack around this is the following tactic that asserts the trivial equality, then normalizes both sides and then builds another equality where we already know the normal form on the right hand side and proves it again with asimpl. * Of course it's slow since it uses asimpl twice. * Plus `asimpl in H` is naive and reverts H so asimpl is also applied to the rest of the goal which is not desirable and often defeats the purpose of the tactic. But we might be able to write a more performant `asimpl in H` tactic. *) Ltac normalize t := let H := fresh "H" in assert (H: t = t) by (reflexivity); asimpl in H; match goal with | [ H: ?t2 = ?t2 |- _ ] => clear H; let H := fresh "H" in assert (H: t = t2) by (now asimpl) end. (*** Testing out asimpl. ***) (* Ltac apply_subst_morphism := *) (* match goal with *) (* | [|- eq (subst_tm ?sigma ?s) (subst_tm ?tau ?t)] => *) (* apply subst_morphism; intros ? *) (* | [|- context[(subst_tm ?sigma ?s)]] => *) (* erewrite subst_morphism *) (* end. *) Inductive Foo {n} : tm n -> Type := FooC : forall s:tm n, Foo s. Goal forall {m n} (f : fin m -> tm n) (s : tm (S m)) (t : tm m), Foo (s[t[f] .: f]) -> Foo (s[t..][f]). Proof. intros * H. (* unfortunately it's not practicable to use the normalize tactic because apply is too simple (would have to use rewrite with the morphisms but that has its own set of problems with normalize). Idea: use the setoid rewriting version (or the funext version) to find out the normal form and then use the fast tactic to prove it If using the setoid rewrite tactic I would need some way of caching it otherwise it's useless *) assert (s[t..][f] = s[t[f] .: f]) as -> by (now asimpl). assumption. (* normalize (s[t..][f]). *) (* asimpl. *) (* apply_subst_morphism. *) (* 2: { *) (* intros ?. *) (* (* If we just have ?g x on the right hand side of the equality, we can always rewrite with rinstId_tm' (or instId_tm' but the other comes first) *) *) (* (* which would then constrain the evar to be ?g = ?s <id> *) *) (* (* That's the reason why I always got these <id> terms when I tried to normalize something with this tactic. *) *) (* rewrite rinstId_tm'. *) (* asimpl. *) (* } *) Abort. (* this substitution equation that appears for example in step_inst is pretty nice to test since it uses a lot of the lemmas. 
Experience shows that if asimpl works here it works for a lot of other lemmas of Chapter9 *) Lemma default_subst_lemma {m n} (f : fin m -> tm n) (s : tm (S m)) (t : tm m) : s[t..][f] = s[up_tm_tm f][(t[f])..]. Proof. assert (s[t..][f] = _) by (asimpl; reflexivity). evar (x : tm n). assert (s[t..][f] = x). { subst x. auto_unfold. rewrite compComp_tm. simple apply subst_morphism; intros ?. (* it does not make sense to do simple apply scons_comp' here since that would abort the normalization. simple apply only makes sense with the morphisms since they have a hypothesis *) erewrite scons_comp'. simple apply scons_morphism; intros ?. erewrite varL_tm''. fsimpl. reflexivity. } eta_reduce. reflexivity. } (* assert (s[t..][f] = s[t[f] .: f]) by (now asimpl). *) (* assert (s[up_tm_tm f][(t[f])..] = _) by (now asimpl). *) (* there are still too many differences *) now asimpl. (* rewrite renComp_tm. *) (* asimpl. *) (* apply subst_morphism. *) (* but how do I continue from here. Apply subst_morphism is also not promising because of the next lemma, s and t might be different and the equation still holds. *) (* Ok it appears I cannot construct an instantiation with two different terms (out of literals, else it's impossible anyways) that does not work. The reason seems to be that anything constructed from literals (variables, var_zero and shift) can be simplified by fsimpl. Then we unfold the definition of subst_tm/ren_tm and can argue about extensional equality of functions again. *) Restart. (* auto_unfold. *) (* asimpl'. *) (* simple apply subst_morphism; intros ?. *) (* asimpl'. *) (* simple apply scons_morphism; intros ?. *) (* asimpl'. *) (* (* asimpl. *) *) (* repeat unfold_funcomp. *) (* rewrite renComp_tm. *) (* cbn [subst_tm ren_tm]. *) (* fsimpl. *) (* rewrite instId_tm'. *) (* reflexivity. *) Abort. Lemma default_subst_lemma2 {m n} (f : fin m -> tm n) (s : tm (S m)) (t : tm m) : (var_tm var_zero)[t..][f] = subst_tm (@scons (tm n) (S n) t[f] (@scons (tm n) n t[f] var_tm)) (var_tm (shift var_zero)). Proof. now asimpl. Restart. auto_unfold. rewrite compComp_tm. cbn [subst_tm ren_tm]. fsimpl. reflexivity. Qed. Definition const {n} (a: fin (S n)) := @var_zero n. Lemma default_subst_lemma3 {n} (x: fin (S n)): @subst_tm (S n) (S n) (@const n >> @var_tm (S n)) (var_tm x) = @subst_tm (S (S n)) (S n) (var_tm x .: @const n >> var_tm) (var_tm (shift x)). Proof. asimpl. unfold const. reflexivity. Qed.
Formal statement is: lemma rcis_divide: "rcis r1 a / rcis r2 b = rcis (r1 / r2) (a - b)" Informal statement is: If $r_1$ and $r_2$ are real numbers and $a$ and $b$ are real numbers, then $\frac{r_1 e^{i a}}{r_2 e^{i b}} = \frac{r_1}{r_2} e^{i (a - b)}$.
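This follows directly from writing $\mathrm{rcis}\; r\; a = r e^{ia}$ and dividing the exponentials:

$$\frac{r_1 e^{ia}}{r_2 e^{ib}} = \frac{r_1}{r_2}\, e^{ia} e^{-ib} = \frac{r_1}{r_2}\, e^{i(a-b)}.$$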
theory Sepref_Misc imports Refine_Monadic.Refine_Monadic PO_Normalizer "List-Index.List_Index" Separation_Logic_Imperative_HOL.Sep_Main Named_Theorems_Rev "HOL-Eisbach.Eisbach" Separation_Logic_Imperative_HOL.Array_Blit begin hide_const (open) CONSTRAINT (* Additions for List_Index *) lemma index_of_last_distinct[simp]: "distinct l \<Longrightarrow> index l (last l) = length l - 1" apply (cases l rule: rev_cases) apply (auto simp: index_append) done lemma index_eqlen_conv[simp]: "index l x = length l \<longleftrightarrow> x\<notin>set l" by (auto simp: index_size_conv) subsection \<open>Iterated Curry and Uncurry\<close> text \<open>Uncurry0\<close> definition "uncurry0 c \<equiv> \<lambda>_::unit. c" definition curry0 :: "(unit \<Rightarrow> 'a) \<Rightarrow> 'a" where "curry0 f = f ()" lemma uncurry0_apply[simp]: "uncurry0 c x = c" by (simp add: uncurry0_def) lemma curry_uncurry0_id[simp]: "curry0 (uncurry0 f) = f" by (simp add: curry0_def) lemma uncurry_curry0_id[simp]: "uncurry0 (curry0 g) = g" by (auto simp: curry0_def) lemma param_uncurry0[param]: "(uncurry0,uncurry0) \<in> A \<rightarrow> (unit_rel\<rightarrow>A)" by auto text \<open>Abbreviations for higher-order uncurries\<close> abbreviation "uncurry2 f \<equiv> uncurry (uncurry f)" abbreviation "curry2 f \<equiv> curry (curry f)" abbreviation "uncurry3 f \<equiv> uncurry (uncurry2 f)" abbreviation "curry3 f \<equiv> curry (curry2 f)" abbreviation "uncurry4 f \<equiv> uncurry (uncurry3 f)" abbreviation "curry4 f \<equiv> curry (curry3 f)" abbreviation "uncurry5 f \<equiv> uncurry (uncurry4 f)" abbreviation "curry5 f \<equiv> curry (curry4 f)" abbreviation "uncurry6 f \<equiv> uncurry (uncurry5 f)" abbreviation "curry6 f \<equiv> curry (curry5 f)" abbreviation "uncurry7 f \<equiv> uncurry (uncurry6 f)" abbreviation "curry7 f \<equiv> curry (curry6 f)" abbreviation "uncurry8 f \<equiv> uncurry (uncurry7 f)" abbreviation "curry8 f \<equiv> curry (curry7 f)" abbreviation "uncurry9 f \<equiv> uncurry (uncurry8 f)" abbreviation "curry9 f \<equiv> curry (curry8 f)" lemma fold_partial_uncurry: "uncurry (\<lambda>(ps, cf). f ps cf) = uncurry2 f" by auto lemma curry_shl: "\<And>g f. (g \<equiv> curry f) \<equiv> (uncurry g \<equiv> f)" "\<And>g f. (g \<equiv> curry0 f) \<equiv> (uncurry0 g \<equiv> f)" by (atomize (full); auto)+ lemma curry_shr: "\<And>f g. (curry f \<equiv> g) \<equiv> (f \<equiv> uncurry g)" "\<And>f g. (curry0 f \<equiv> g) \<equiv> (f \<equiv> uncurry0 g)" by (atomize (full); auto)+ lemmas uncurry_shl = curry_shr[symmetric] lemmas uncurry_shr = curry_shl[symmetric] end
Stephen later became more involved in formal political parties, being elected as a local councillor and standing as a candidate in general elections. After moving to Bristol she became the first woman president of Bristol Trades Council. She was appointed MBE in 1977 and her life is commemorated by a blue plaque in Bristol.
State Before: d✝ d : ℤ hd : d ≤ 0 x y : ℤ√d h : Associated x y ⊢ norm x = norm y State After: case intro d✝ d : ℤ hd : d ≤ 0 x : ℤ√d u : (ℤ√d)ˣ ⊢ norm x = norm (x * ↑u) Tactic: obtain ⟨u, rfl⟩ := h State Before: case intro d✝ d : ℤ hd : d ≤ 0 x : ℤ√d u : (ℤ√d)ˣ ⊢ norm x = norm (x * ↑u) State After: no goals Tactic: rw [norm_mul, (norm_eq_one_iff' hd _).mpr u.isUnit, mul_one]
[STATEMENT] lemma homologous_rel_sum: assumes f: "finite {i \<in> I. f i \<noteq> 0}" and g: "finite {i \<in> I. g i \<noteq> 0}" and h: "\<And>i. i \<in> I \<Longrightarrow> homologous_rel p X S (f i) (g i)" shows "homologous_rel p X S (sum f I) (sum g I)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] proof (cases "finite I") [PROOF STATE] proof (state) goal (2 subgoals): 1. finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] case True [PROOF STATE] proof (state) this: finite I goal (2 subgoals): 1. finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] let ?L = "{i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}" [PROOF STATE] proof (state) goal (2 subgoals): 1. finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] have L: "finite ?L" "?L \<subseteq> I" [PROOF STATE] proof (prove) goal (1 subgoal): 1. finite ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) &&& {i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0} \<subseteq> I [PROOF STEP] using f g [PROOF STATE] proof (prove) using this: finite {i \<in> I. f i \<noteq> 0} finite {i \<in> I. g i \<noteq> 0} goal (1 subgoal): 1. finite ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) &&& {i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0} \<subseteq> I [PROOF STEP] by blast+ [PROOF STATE] proof (state) this: finite ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) {i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0} \<subseteq> I goal (2 subgoals): 1. finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] have "sum f I = sum f ?L" [PROOF STATE] proof (prove) goal (1 subgoal): 1. sum f I = sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) [PROOF STEP] by (rule comm_monoid_add_class.sum.mono_neutral_right [OF True]) auto [PROOF STATE] proof (state) this: sum f I = sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) goal (2 subgoals): 1. finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] moreover [PROOF STATE] proof (state) this: sum f I = sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) goal (2 subgoals): 1. finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] have "sum g I = sum g ?L" [PROOF STATE] proof (prove) goal (1 subgoal): 1. sum g I = sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) [PROOF STEP] by (rule comm_monoid_add_class.sum.mono_neutral_right [OF True]) auto [PROOF STATE] proof (state) this: sum g I = sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) goal (2 subgoals): 1. finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] moreover [PROOF STATE] proof (state) this: sum g I = sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. 
g i \<noteq> 0}) goal (2 subgoals): 1. finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] have *: "homologous_rel p X S (f i) (g i)" if "i \<in> ?L" for i [PROOF STATE] proof (prove) goal (1 subgoal): 1. homologous_rel p X S (f i) (g i) [PROOF STEP] using h that [PROOF STATE] proof (prove) using this: ?i \<in> I \<Longrightarrow> homologous_rel p X S (f ?i) (g ?i) i \<in> {i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0} goal (1 subgoal): 1. homologous_rel p X S (f i) (g i) [PROOF STEP] by auto [PROOF STATE] proof (state) this: ?i \<in> {i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0} \<Longrightarrow> homologous_rel p X S (f ?i) (g ?i) goal (2 subgoals): 1. finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] have "homologous_rel p X S (sum f ?L) (sum g ?L)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. homologous_rel p X S (sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) (sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) [PROOF STEP] using L [PROOF STATE] proof (prove) using this: finite ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) {i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0} \<subseteq> I goal (1 subgoal): 1. homologous_rel p X S (sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) (sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) [PROOF STEP] proof induction [PROOF STATE] proof (state) goal (2 subgoals): 1. {} \<subseteq> I \<Longrightarrow> homologous_rel p X S (sum f {}) (sum g {}) 2. \<And>x F. \<lbrakk>finite F; x \<notin> F; F \<subseteq> I \<Longrightarrow> homologous_rel p X S (sum f F) (sum g F); insert x F \<subseteq> I\<rbrakk> \<Longrightarrow> homologous_rel p X S (sum f (insert x F)) (sum g (insert x F)) [PROOF STEP] case (insert j J) [PROOF STATE] proof (state) this: finite J j \<notin> J J \<subseteq> I \<Longrightarrow> homologous_rel p X S (sum f J) (sum g J) insert j J \<subseteq> I goal (2 subgoals): 1. {} \<subseteq> I \<Longrightarrow> homologous_rel p X S (sum f {}) (sum g {}) 2. \<And>x F. \<lbrakk>finite F; x \<notin> F; F \<subseteq> I \<Longrightarrow> homologous_rel p X S (sum f F) (sum g F); insert x F \<subseteq> I\<rbrakk> \<Longrightarrow> homologous_rel p X S (sum f (insert x F)) (sum g (insert x F)) [PROOF STEP] then [PROOF STATE] proof (chain) picking this: finite J j \<notin> J J \<subseteq> I \<Longrightarrow> homologous_rel p X S (sum f J) (sum g J) insert j J \<subseteq> I [PROOF STEP] show ?case [PROOF STATE] proof (prove) using this: finite J j \<notin> J J \<subseteq> I \<Longrightarrow> homologous_rel p X S (sum f J) (sum g J) insert j J \<subseteq> I goal (1 subgoal): 1. homologous_rel p X S (sum f (insert j J)) (sum g (insert j J)) [PROOF STEP] by (simp add: h homologous_rel_add) [PROOF STATE] proof (state) this: homologous_rel p X S (sum f (insert j J)) (sum g (insert j J)) goal (1 subgoal): 1. {} \<subseteq> I \<Longrightarrow> homologous_rel p X S (sum f {}) (sum g {}) [PROOF STEP] qed auto [PROOF STATE] proof (state) this: homologous_rel p X S (sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) (sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) goal (2 subgoals): 1. 
finite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) 2. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] ultimately [PROOF STATE] proof (chain) picking this: sum f I = sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) sum g I = sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) homologous_rel p X S (sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) (sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) [PROOF STEP] show ?thesis [PROOF STATE] proof (prove) using this: sum f I = sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) sum g I = sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0}) homologous_rel p X S (sum f ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) (sum g ({i \<in> I. f i \<noteq> 0} \<union> {i \<in> I. g i \<noteq> 0})) goal (1 subgoal): 1. homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] by simp [PROOF STATE] proof (state) this: homologous_rel p X S (sum f I) (sum g I) goal (1 subgoal): 1. infinite I \<Longrightarrow> homologous_rel p X S (sum f I) (sum g I) [PROOF STEP] qed auto
[CONT] This card cannot side attack. [CONT] This card gets -1 level while on the stage. [CONT] This card gets +500 power for each of your other "Tomoeda" or "Magic" characters. [ACT] Brainstorm [(1) REST this card] Flip over 4 cards from the top of your deck, and put them into your waiting room. For each climax revealed among those cards, draw up to 1 card. [CONT] All of your other "Sakura: Brimming with Butterflies" get +1000 power. [ACT] [REST this card] Choose 1 of your characters, and that character gets +500 power until end of turn. [AUTO] When this card is placed on the stage from your hand, choose 1 of your other "Tomoeda" or "Magic" characters, and that character gets +1500 power until end of turn. [AUTO] When this card becomes REVERSE, if this card's battle opponent is level 0 or lower, you may REVERSE that character. [AUTO] When this card attacks, choose 1 of your characters, and that character gets +500 power until end of turn. [AUTO] At the beginning of your opponent's attack phase, you may move this card to an open position of your center stage.
# Data Space Report ## Pittsburgh Bridges Data Set Andy Warhol Bridge - Pittsburgh. Report created by Student Francesco Maria Chiarlo s253666, for A.A 2019/2020. **Abstract**: The aim of this report is to evaluate the effectiveness of distinct statistical learning approaches, in particular focusing on their characteristics as well as on their advantages and drawbacks when applied to a relatively small dataset such as the one employed within this report, that is the Pittsburgh Bridges dataset. **Key words**: Statistical Learning, Machine Learning, Bridge Design. ## TOC: * [Imports Section](#imports-section) * [Dataset's Attributes Description](#attributes-description) * [Data Preparation and Investigation](#data-preparation) * [Learning Models](#learning-models) * [Improvements and Conclusions](#improvements-and-conclusions) * [References](#references) ### Imports Section <a class="anchor" id="imports-section"></a> ```python from utils.all_imports import *; %matplotlib inline ``` ```python # Set seed for notebook repeatability np.random.seed(0) ``` ### Dataset's Attributes Description <a class="anchor" id="attributes-description"></a> The analyses that I aim to accomplish by means of the methods provided by both the Statistical Learning and Machine Learning fields concern the Pittsburgh Bridges dataset, and what follows is a brief overview of its main characteristics as well as basic information about this dataset. The Pittsburgh Bridges dataset is available from the web site known as the *"UCI Machine Learning Repository"*, one of the well-known web sites that host a large number of datasets, from different domains or fields, to be used for machine-learning research, many of which have been cited in peer-reviewed academic journals. In particular, the dataset I am going to treat and analyze, the Pittsburgh Bridges dataset, has been made freely available by the Western Pennsylvania Regional Data Center (WPRDC), which is a project led by the University Center for Social and Urban Research (UCSUR) at the University of Pittsburgh ("University") in collaboration with the City of Pittsburgh and the County of Allegheny in Pennsylvania. The WPRDC and the WPRDC Project are supported by a grant from the Richard King Mellon Foundation. To be more precise, according to the official and dedicated web page within the UCI Machine Learning site, the Pittsburgh Bridges dataset was created from the work of the following co-authors: - Yoram Reich & Steven J. Fenves from the Department of Civil Engineering and Engineering Design Research Center, Carnegie Mellon University, Pittsburgh, PA 15213 The Pittsburgh Bridges dataset consists of 108 distinct observations, and each data sample is made of 12 attributes or features, where some of them are considered to be continuous properties and others to be categorical or nominal properties. Those variables are the following: - **RIVER**: a nominal variable that can assume the following possible discrete values: A, M, O, where A stands for Allegheny river, M stands for Monongahela river and lastly O stands for Ohio river. - **LOCATION**: a nominal variable too, which assumes a positive integer value from 1 up to 52, used as a categorical attribute. - **ERECTED**: which might be treated either as a numerical or a categorical variable, depending on whether we want to aggregate ranges of values under a categorical quantity. 
What this means is that such attribute consists of dates ranging from 1818 up to 1986, but we may imagine aggregating these data within one of the suggested categories, that are CRAFTS, EMERGING, MATURE, MODERN (a sketch of such a binning follows this list). - **PURPOSE**: a categorical attribute that represents the reason why a particular bridge has been built, which means that this attribute represents what kind of vehicle can cross the bridge or whether the bridge has been made just for people. For this reason the allowed values for this attribute are the following: WALK, AQUEDUCT, RR, HIGHWAY. Three out of four are self-explanatory values, while the RR value, which might be tricky at first glance, just stands for railroad. - **LENGTH**: which represents the bridge's length, is a numerical attribute if we just look at the real number values that go from 804 up to 4558, but we can again decide to handle or arrange such values so that they can be grouped into ranges of values mapped into SHORT, MEDIUM, LONG, so that we can refer to a bridge's length by means of these new categorical values. - **LANES**: a categorical variable represented by the numerical values 1, 2, 4, 6, which indicate the number of distinct lanes that a bridge in Pittsburgh city may have. The larger the value, the wider the bridge. - **CLEAR-G**: specifies whether a vertical navigation clearance requirement was enforced in the design or not. - **T-OR-D**: a nominal attribute, in other words a categorical attribute, that can assume the values THROUGH, DECK. To be more precise, this attribute deals with structural elements of a bridge. In fact, a deck is the surface of a bridge, and this structural element of a bridge's superstructure may be constructed of concrete, steel, open grating, or wood. On the other hand, a through arch bridge, also known as a half-through arch bridge or a through-type arch bridge, is a bridge made from materials such as steel or reinforced concrete, in which the base of an arch structure is below the deck but the top rises above it. - **MATERIAL**: a categorical or nominal variable used to describe which is the main or core material used to build the bridge. This attribute can assume one of the following possible values: WOOD, IRON, STEEL. Furthermore, we expect to see a bit of correlation between the values assumed by the pair of T-OR-D and MATERIAL columns when looking just at them. - **SPAN**: a categorical or nominal value recorded by means of three possible values for each sample, that are SHORT, MEDIUM, LONG. This attribute, within the field of Structural Engineering, is the distance between two intermediate supports for a structure, e.g. a beam or a bridge. A span can be closed by a solid beam or by a rope. The first kind is used for bridges, the second one for power lines, overhead telecommunication lines, some types of antennas or for aerial tramways. - **REL-L**: a categorical or nominal variable that stands for the relative length of the main span of the bridge to the total crossing length; it can assume three possible values, that are S, S-F, F. - Lastly, **TYPE**, a categorical or nominal attribute which indicates what type of bridge each record represents, among the 6 possible distinct classes or types of bridges, that are: WOOD, SUSPEN, SIMPLE-T, ARCH, CANTILEV, CONT-T. 
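To make the ERECTED aggregation concrete, here is a minimal sketch of how such a binning could be done with pandas once the dataset is loaded (as in the cells below); note that the era boundary years used here are illustrative assumptions, since the report does not fix them:

```python
import pandas as pd

# Hypothetical era boundaries (assumptions, not taken from the dataset docs):
# CRAFTS up to 1870, EMERGING up to 1900, MATURE up to 1945, MODERN afterwards.
era_bins = [1817, 1870, 1900, 1945, 1990]
era_labels = ["CRAFTS", "EMERGING", "MATURE", "MODERN"]

def bin_erected(dataset: pd.DataFrame) -> pd.Series:
    """Map the numerical ERECTED year onto a categorical era label."""
    return pd.cut(dataset["ERECTED"].astype(int), bins=era_bins, labels=era_labels)

# Usage sketch: dataset["ERECTED_ERA"] = bin_erected(dataset)
```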
```python # Show TYPE of Bridges # ------------------------ # show_bridges_types_images() ``` ## Data Investigation <a class="anchor" id="data-preparation"></a> The aim of this chapter is to dig into the data that are available within the Pittsburgh Bridges dataset, in order to investigate in more detail the main high-level statistical quantities, such as the mean, median and standard deviation of each attribute, as well as to display the data distribution for each attribute by means of histogram plots. This phase allows us to decide which should be the best feature to select as the target variable, in other words the attribute that will represent the dependent variable, with the remaining attributes playing the role of predictors and independent variables. In order to investigate and explore our data we make use of the *Pandas library*. We recall that, in computer programming, Pandas is a software library written for the Python programming language for *data manipulation and analysis*. In particular, it offers data structures and operations for manipulating numerical tables and time series. It is free software, and an interesting and funny thing about this tool is that the name is derived from the term "panel data", an econometrics term for data sets that include observations over multiple time periods for the same individuals. We also note that, as the analysis proceeds, we will introduce other programming tools as well as libraries that enable us to fulfill our goals. Initially, once I have downloaded the dataset with the data samples about Pittsburgh bridges from the provided web page, we load the data by means of the functions made available by the pandas library. We notice that the overall set of data points amounts to 108 records or rows, which are sorted by the ERECTED attribute, meaning that they are sorted in increasing order from the oldest bridge, built in 1818, up to the most modern bridge, erected in 1986. 
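Since this ordering claim is easy to verify programmatically, here is a small sketch of a check one could run right after loading (assuming `dataset` is the DataFrame read in the cells below, with ERECTED already converted to integers):

```python
# Quick sanity check of the claimed ordering by construction year.
# Assumes `dataset` is the DataFrame loaded below and that ERECTED
# has already been cast to integers.
erected = dataset["ERECTED"].astype(int)
print("sorted oldest-to-newest:", erected.is_monotonic_increasing)
print("year range:", erected.min(), "-", erected.max())
```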
Then we display the first 5 rows to get an overview and have a first idea about what is inside the overall dataset, and the result we obtain by means of the head() function applied to the fetched dataset is equal to what follows: ### Read Input Data ```python # Some global script variables # --------------------------------------------------------------------------- # dataset_path, dataset_name, column_names, TARGET_COL = \ get_dataset_location() # Info Data to be fetched estimators_list, estimators_names = get_estimators() # Estimator to be trained # variables used for pass through arrays used to store results pos_gs = 0; pos_cv = 0 # Array used for storing graphs plots_names = list(map(lambda xi: f"{xi}_learning_curve.png", estimators_names)) pca_kernels_list = ['linear', 'poly', 'rbf', 'cosine', 'sigmoid'] cv_list = list(range(10, 1, -1)) ``` ```python # Parameters to be tested for Cross-Validation Approach # ----------------------------------------------------- param_grids = [] parmas_logreg = { 'penalty': ('l1', 'l2', 'elasticnet', None), 'solver': ('newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'), 'fit_intercept': (True, False), 'tol': (1e-4, 1e-3, 1e-2), 'class_weight': (None, 'balanced'), 'C': (10.0, 1.0, .1, .01, .001, .0001), # 'random_state': (0,), }; param_grids.append(parmas_logreg) parmas_knn_clf = { 'n_neighbors': (2,3,4,5,6,7,8,9,10), 'weights': ('uniform', 'distance'), 'metric': ('euclidean', 'minkowski', 'manhattan'), 'leaf_size': (5, 10, 15, 30), 'algorithm': ('ball_tree', 'kd_tree', 'brute'), }; param_grids.append(parmas_knn_clf) params_sgd_clf = { 'loss': ('log', 'modified_huber'), # ('hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron') 'penalty': ('l2', 'l1', 'elasticnet'), 'alpha': (1e-1, 1e-2, 1e-3, 1e-4), 'max_iter': (50, 100, 150, 200, 500, 1000, 1500, 2000, 2500), 'class_weight': (None, 'balanced'), 'learning_rate': ('optimal',), 'tol': (None, 1e-2, 1e-4, 1e-5, 1e-6), # 'random_state': (0,), }; param_grids.append(params_sgd_clf) kernel_type = 'svm-rbf-kernel' params_svm_clf = { # 'gamma': (1e-7, 1e-4, 1e-3, 1e-2, 0.1, 1.0, 10, 1e+2, 1e+3, 1e+5, 1e+7), 'gamma': (1e-5, 1e-3, 1e-2, 0.1, 1.0, 10, 1e+2, 1e+3, 1e+5), 'max_iter':(1e+2, 1e+3, 2 * 1e+3, 5 * 1e+3, 1e+4, 1.5 * 1e+3), 'degree': (1,2,4,8), 'coef0': (.001, .01, .1, 0.0, 1.0, 10.0), 'shrinking': (True, False), 'kernel': ['linear', 'poly', 'rbf', 'sigmoid',], 'class_weight': (None, 'balanced'), 'C': (1e-4, 1e-3, 1e-2, 0.1, 1.0, 10, 1e+2, 1e+3), 'probability': (True,), }; param_grids.append(params_svm_clf) parmas_tree = { 'splitter': ('best', 'random'), 'criterion':('gini', 'entropy'), 'max_features': (None, 'sqrt', 'log2'), 'max_depth': (None, 3, 5, 7, 10,), 'class_weight': (None, 'balanced'), }; param_grids.append(parmas_tree) parmas_random_forest = { 'n_estimators': (3, 5, 7, 10, 30, 50, 70, 100, 150, 200), 'criterion':('gini', 'entropy'), 'bootstrap': (True, False), 'min_samples_leaf': (1,2,3,4,5), 'max_features': (None, 'sqrt', 'log2'), 'max_depth': (None, 3, 5, 7, 10,), 'class_weight': (None, 'balanced', 'balanced_subsample'), }; param_grids.append(parmas_random_forest) # Some variables to perform different tasks # ----------------------------------------------------- N_CV, N_KERNEL, N_GS = 9, 5, 6; nrows = N_KERNEL // 2 if N_KERNEL % 2 == 0 else N_KERNEL // 2 + 1; ncols = 2; grid_size = [nrows, ncols] ``` ```python # READ INPUT DATASET # --------------------------------------------------------------------------- # dataset = pd.read_csv(os.path.join(dataset_path, 
dataset_name), names=column_names, index_col=0) ``` ```python # SHOW SOME STANDARD DATASET INFOS # --------------------------------------------------------------------------- # # print('Dataset shape: {}'.format(dataset.shape)); print(dataset.info()) ``` ```python # SHOWING FIRSTS N-ROWS AS THEY ARE STORED WITHIN DATASET # --------------------------------------------------------------------------- # # dataset.head(5) ``` What we can notice from the table above is that some attributes are characterized by the special character '?', which stands for a missing value, meaning there was no possibility to get the value for that attribute, such as for the LENGTH and SPAN attributes. Analyzing the dataset in more detail, we discover that there are up to 6 different attributes, mostly of categorical or nominal nature, namely CLEAR-G, T-OR-D, MATERIAL, SPAN, REL-L, and TYPE, that contain at least one row in which the attribute is set to the '?' value that stands, as we already know, for a missing value. Here, we can follow different strategies, depending on the level of complexity as well as accuracy we want to achieve for the models we are going to fit to the data after having correctly pre-processed them, regarding what to do with missing values. In fact, one can follow the simplest way and decide to simply discard those rows that contain at least one attribute with a missing value represented by the '?' symbol. Otherwise, one may also decide to follow a different strategy that aims at keeping those rows that have some missing values, by means of some kind of technique that allows us to establish a potential substitute value for the missing one. So, in this setting, that is our analysis, we start by just leaving out those rows that contain at least one attribute with a missing value; this choice leads us to reduce the size of our dataset from 108 records to 70 remaining samples, with a drop of 38 data examples, which may affect the final results, since we left out roughly 35% of the data because of missing values. ```python # INVESTIGATING DATASET IN ORDER TO DETECT NULL VALUES # --------------------------------------------------------------------------- # # print('Before preprocessing dataset and handling null values') result = dataset.isnull().values.any(); # print('There are any null values ? Response: {}'.format(result)) result = dataset.isnull().sum(); # print('Number of null values for each predictor:\n{}'.format(result)) ``` ```python # DISCOVERING VALUES WITHIN EACH PREDICTOR DOMAIN # --------------------------------------------------------------------------- # columns_2_avoid = ['ERECTED', 'LENGTH', 'LOCATION', 'LANES'] list_columns_2_fix = show_categorical_predictor_values(dataset, columns_2_avoid) ``` ```python # FIXING, UPDATING NULL VALUES CODED AS '?' 
SYMBOL # WITHIN EACH CATEGORICAL VARIABLE, IF DETECTED ANY # --------------------------------------------------------------------------- # # print('"Before" removing \'?\' rows, Dataset dim:', dataset.shape) for _, predictor in enumerate(list_columns_2_fix): dataset = dataset[dataset[predictor] != '?'] # print('"After" removing \'?\' rows, Dataset dim: ', dataset.shape); print('-' * 50) _ = show_categorical_predictor_values(dataset, columns_2_avoid) ``` ```python # INTERMEDIATE RESULT FOUND # --------------------------------------------------------------------------- # features_vs_values = preprocess_categorical_variables(dataset, columns_2_avoid); print(dataset.info()) ``` <class 'pandas.core.frame.DataFrame'> Index: 88 entries, E1 to E90 Data columns (total 12 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 RIVER 88 non-null int64 1 LOCATION 88 non-null object 2 ERECTED 88 non-null int64 3 PURPOSE 88 non-null int64 4 LENGTH 88 non-null object 5 LANES 88 non-null object 6 CLEAR-G 88 non-null int64 7 T-OR-D 88 non-null int64 8 MATERIAL 88 non-null int64 9 SPAN 88 non-null int64 10 REL-L 88 non-null int64 11 TYPE 88 non-null int64 dtypes: int64(9), object(3) memory usage: 8.9+ KB None ```python # dataset.head(5) ``` The next step is represented by the effort of mapping categorical variables into numerical variables, so that they are comparable with the already existing numerical or continuous variables; moreover, by mapping the categorical variables into numerical variables we enable ourselves to perform some kind of normalization, or just transformation, on the entire dataset, in order to let some machine learning algorithms work better or take advantage of normalized data within our pre-processed dataset. Furthermore, by first transforming the categorical attributes into a continuous version we are also able to calculate the *heatmap*, which is a very useful way of representing a correlation matrix calculated on the whole dataset. Moreover, we have displayed the data distribution for each attribute by means of a histogram representation, to gain some useful information about the number of occurrences of each possible value, in particular for those attributes that have a categorical nature. ```python # MAP NUMERICAL VALUES TO INTEGER VALUES # --------------------------------------------------------------------------- # # print('Before', dataset.shape) columns_2_map = ['ERECTED', 'LANES'] for _, predictor in enumerate(columns_2_map): dataset = dataset[dataset[predictor] != '?'] dataset[predictor] = np.array(list(map(lambda x: int(x), dataset[predictor].values))) # print('After', dataset.shape); print(dataset.info()) ``` ```python # dataset.head(5) ``` ```python # MAP NUMERICAL VALUES TO FLOAT VALUES # --------------------------------------------------------------------------- # # print('Before', dataset.shape) columns_2_map = ['LOCATION', 'LANES', 'LENGTH'] for _, predictor in enumerate(columns_2_map): dataset = dataset[dataset[predictor] != '?'] dataset[predictor] = np.array(list(map(lambda x: float(x), dataset[predictor].values))) ``` ```python result = dataset.isnull().values.any() # print('After handling null values\nThere are any null values ? 
result = dataset.isnull().sum()
# print('Number of null values for each predictor:\n{}'.format(result))
# dataset.head(5)
```

```python
# dataset.describe(include='all')
```

### Descriptive Statistics

After having performed the initial preprocessing phase on the Pittsburgh Bridges dataset, where we cleaned the dataset from missing values and properly coded all the features (the attributes and variables the dataset is made of) to reflect their own nature, whether categorical or numerical (so continuous), we go one step further and describe the feature properties by means of some useful and well-known tools coming from the area of Descriptive Statistics, a branch of Statistics considered as a whole. In particular we are going to exploit some of the instruments that make up the statistician's toolbox, such as histograms, pie charts and the like, describing their advantages as well as some of their drawbacks.

```python
# sns.pairplot(dataset, hue='T-OR-D', size=1.5)
```

__Histograms__: the main advantage of such a chart is that it can be employed to describe the frequencies with which single values, or subsets of distinct values within a range, occur in a given sample of observations, independently of whether the sample represents a part of an entire population of examples and measurements, or the population itself. We recall that we usually deal with subsets or samples randomly drawn from an entire population, which might be real or just hypothetical. In particular, the observations that histograms allow us to make on a sample of records and measurements are the following (see also the sketch right after this list):

- If the variable taken into account is continuous, we may decide to discretize the range of possible values into a number of subintervals, also referred to as bins, and observe how the data is distributed across the different subintervals.
- Continuing from above, the resulting histogram can suggest whether the sample has one or more peaks, describing which are the most frequent values or the most populated subintervals. If the histogram follows a bell-like shape, it also lets us spot whether the graph shows a heavier upper tail or lower tail, that is, a positive or, alternatively, a negative skew. In the former case the sample of data shows a greater probability of observing measurements from the upper side of the bell-shaped graph, in the latter from the lower side.
- Generally speaking, all these kinds of observations and analyses are well suited to variables and features whose values are continuous in nature, such as height, weight or, in our dataset, the LOCATION variable.
- If the variable under investigation is instead discrete or categorical in nature, the histogram is better called a bar graph, and is a suitable choice for describing the occurrences or frequencies of the different categories and classes, since sometimes there is no natural order among the values (such as colors), even though we might find a natural order in other cases, as for dress sizes.
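As a concrete, minimal sketch of such a bar graph (plain pandas plotting, not one of the project's helpers; it assumes the `dataset` DataFrame prepared above):

```python
# Minimal sketch: occurrences of a categorical attribute drawn as a bar
# graph, assuming `dataset` is the preprocessed DataFrame from above.
import matplotlib.pyplot as plt

counts = dataset['RIVER'].value_counts()   # frequency of each encoded category
counts.plot(kind='bar', rot=0)
plt.xlabel('RIVER (encoded)'); plt.ylabel('Frequency')
plt.show()
```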
What follows is a sequence of histograms created to describe some characteristics of the variables the dataset is made of, as well as to show how the frequency, i.e. the occurrence, of each value of a given attribute relates to the values assumed by the target variable we selected amongst the overall variables.

```python
columns_2_avoid = ['ERECTED', 'LENGTH', 'LOCATION']
show_frequency_distribution_predictor(dataset, predictor_name='RIVER',
    columns_2_avoid=columns_2_avoid, features_vs_values=features_vs_values,
    hue=TARGET_COL, verbose=1)
```

The histogram related to the frequency, in other words the occurrence, of the RIVER feature shows us that:

- Among the three main rivers that cross the city of Pittsburgh, that are the *Allegheny*, the *Monongahela*, and the *Ohio*, the one with the highest number of bridges is the Allegheny, followed by the Monongahela and finally the Ohio, which is also the river into which the former two converge.
- If we instead plot the occurrences of the RIVER feature against our target variable, the T-OR-D feature, we can see that, between the two binary values DECK and THROUGH, the second seems to be the most exploited floor system for building bridges between the opposite banks of the rivers. Furthermore, speaking about the bridges built across the Ohio river, THROUGH is the only structural technique adopted.
- What we can also say about the RIVER feature is that the Allegheny and the Monongahela show more or less the same number of bridges with a THROUGH surface, while for the DECK surface the Allegheny beats all the other rivers, and the Ohio does not figure among the rivers with DECK-like bridges at all.

```python
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
show_frequency_distribution_predictor(dataset, predictor_name='T-OR-D',
    columns_2_avoid=columns_2_avoid, features_vs_values=features_vs_values)
```

```python
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
show_frequency_distribution_predictor(dataset, predictor_name='CLEAR-G',
    columns_2_avoid=columns_2_avoid, features_vs_values=features_vs_values,
    hue=TARGET_COL)
```

Looking instead at the CLEAR-G feature, we can notice that the *vertical clearance for navigation* is granted for the majority of the bridges. When relating this feature to the T-OR-D target variable, we can see that the THROUGH technology is the most adopted amongst both the bridges that did and did not gain the vertical clearance for navigation; in particular, the THROUGH system is far more popular than the DECK surface system in both G and N bridges, recalling how important and widespread the THROUGH technique became, across time and space, in bridge construction.

```python
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
show_frequency_distribution_predictor(dataset, predictor_name='SPAN',
    columns_2_avoid=columns_2_avoid, features_vs_values=features_vs_values,
    hue=TARGET_COL)
```

The span is the distance between two intermediate supports of a structure, e.g. a beam or a bridge. A span can be closed by a solid beam or by a rope: the first kind is used for bridges, the second for power lines, overhead telecommunication lines, some types of antennas, or aerial tramways.
With such a definition kept in mind, what we can understand is the following:

- Looking at the histogram of the occurrence distribution of the bridge SPAN feature: since the three rivers are considered to be large rivers along most of the stretch that crosses the city of Pittsburgh, it is natural to observe that MEDIUM span samples are the most frequent examples, while SHORT span samples are the least frequent records, and LONG span samples range in between, getting closer to the MEDIUM span count.
- As usual, also here, analysing the T-OR-D feature we continue to observe that THROUGH bridges are the kind of bridges that collect the majority of samples, while DECK bridges only occur with a LONG or MEDIUM span, never with a SHORT one.
- Moreover, the ratio between DECK and THROUGH is more or less 1 to 5; that is, for every 5 THROUGH bridges we find one DECK bridge with either a LONG or a MEDIUM span.

```python
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
show_frequency_distribution_predictor(dataset, predictor_name='MATERIAL',
    columns_2_avoid=columns_2_avoid, features_vs_values=features_vs_values,
    hue=TARGET_COL)
```

From the histogram illustrated above, we can clearly understand that:

- STEEL is the most frequently exploited material for building bridges, due to its strength and its resistance to the corrosion caused by the surrounding environment. WOOD bridges are still present but far less frequent than STEEL ones; they nevertheless retain better properties than IRON bridges, which are the least frequent, since iron leads to heavier bridges, requires more extensive maintenance than steel, and has poorer elastic properties than wood.
- However, THROUGH bridges present instances built with all the materials available in the dataset, while DECK bridges exploit just steel.

```python
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
show_frequency_distribution_predictor(dataset, predictor_name='REL-L',
    columns_2_avoid=columns_2_avoid, features_vs_values=features_vs_values,
    hue=TARGET_COL)
```

We know that the REL-L property is the relative length of the main span with respect to the total crossing length. With this short notion about the feature in mind, what we can suggest by observing the first histogram depicted above is the following (a minimal sketch of this kind of chart follows the list):

- The FULL kind of bridge, shortly *F*, is the most frequent value of this feature among the Pittsburgh bridges; moreover, the THROUGH system is the bridge system most characterized by the FULL REL-L property.
- The SMALL kind of bridge, shortly *S*, is the second value by number of instances, and it is a feature shown only by THROUGH bridges; this means that we do not find bridges with such a property amongst the DECK bridges.
- Lastly, the intermediate solution represented by the SMALL-FULL property, shortly *S-F*, is more or less equally present in both types of bridges, DECK and THROUGH, speaking about the T-OR-D attribute.
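The plots in this section come from the project helper `show_frequency_distribution_predictor`; as a rough approximation of what such a chart contains, a plain seaborn count plot split by the target can be sketched as follows (it assumes the `dataset` DataFrame and the integer-coded columns from above; the helper itself may render things differently):

```python
# Minimal sketch, not the project's helper: occurrences of one predictor,
# split by the target variable T-OR-D, assuming `dataset` from above.
import seaborn as sns
import matplotlib.pyplot as plt

sns.countplot(x='MATERIAL', hue='T-OR-D', data=dataset)
plt.title('MATERIAL occurrences, split by T-OR-D')
plt.show()
```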
```python
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
show_frequency_distribution_predictor(dataset, predictor_name='TYPE',
    columns_2_avoid=columns_2_avoid, features_vs_values=features_vs_values,
    hue=TARGET_COL)
```

Lastly, we have to speak about the TYPE feature, an attribute that refers to the kind of architecture or structure used to give the bridge its final shape. Looking at the first picture above, that is the first histogram, what we can notice is that:

- SIMPLE-T is the most frequent kind of shape or structure adopted to build bridges amongst the Pittsburgh bridges, followed by ARCH bridges.
- However, starting from the ARCH bridges and going through the other remaining structural techniques, these values are more or less equally distributed, whereas SIMPLE-T shows by far the highest number of instances within the dataset.
- Furthermore, DECK bridges are characterized by just up to 4 out of the 7 possible values of the TYPE attribute, while THROUGH bridges show instances of all the possible kinds of architecture.

### Correlation Matrix Analysis

In the fields of statistics as well as statistical learning, where the latter partly derives from the former, a _correlation matrix_ is a table showing correlation coefficients between variables. Each cell in the table shows the correlation between two variables. A correlation matrix is used to summarize data, as an input to more advanced analyses, and as a diagnostic for such analyses. Key decisions to be made when creating a correlation matrix include: the choice of correlation statistic, the coding of the variables, the treatment of missing data, and the presentation. Typically, a correlation matrix is square, with the same variables shown in the rows and columns.

__Applications of a correlation matrix__: there are three broad reasons for computing a correlation matrix:

- To summarize a large amount of data where the goal is to see patterns. In our example above, the observable pattern is that all the variables highly correlate with each other.
- To feed other analyses. For example, people commonly use correlation matrices as inputs for exploratory factor analysis, confirmatory factor analysis, structural equation models, and linear regression when excluding missing values pairwise.
- As a diagnostic when checking other analyses. For example, with linear regression a high amount of correlation suggests that the linear regression estimates will be unreliable.

__Treatment of missing values__: the data we use to compute correlations often contain missing values, either because we did not collect them or because we do not know the responses. Various strategies exist for dealing with missing values when computing correlation matrices; a best practice is usually to use _multiple imputation_. However, people more commonly use _pairwise missing values_ (sometimes known as partial correlations), which involves computing each correlation using all the non-missing data for the two variables involved. Alternatively, some use _listwise deletion_, also known as case-wise deletion, which only uses observations with no missing data. Both pairwise and case-wise deletion assume that data is missing completely at random.
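A toy sketch of the two deletion strategies (synthetic data, nothing from the bridges dataset), showing that pandas' `corr` already applies pairwise deletion:

```python
# Pairwise vs case-wise deletion when computing a correlation matrix
# in the presence of missing values (toy example).
import numpy as np
import pandas as pd

toy = pd.DataFrame({'a': [1.0, 2.0, np.nan, 4.0, 5.0],
                    'b': [2.0, 1.0, 3.0, np.nan, 4.0],
                    'c': [1.0, 3.0, 2.0, 4.0, 5.0]})

pairwise = toy.corr()            # pandas drops NaNs pairwise, per column pair
casewise = toy.dropna().corr()   # listwise/case-wise: only complete rows survive
print(pairwise.round(2), casewise.round(2), sep='\n\n')
```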
```python
# corr_matrix = dataset.corr()
```

__Coding of the variables__: if you also have data from a survey, you'll need to decide how to code the data before computing the correlations. Changes in coding tend to have little effect, except when extreme.

__Presentation__: when presenting a correlation matrix, you'll need to consider various options, including:

- Whether to show the whole matrix, as above, or just the non-redundant part, as below (arguably the 1.00 values on the main diagonal should also be removed).
- How to format the numbers (for example, best practice is to remove the 0s before the decimal point and decimal-align the numbers, as above, but this can be difficult to do in most software).
- Whether to show statistical significance (e.g., by color-coding cells red).
- Whether to color-code the values according to the correlation statistic (as shown below).
- Rearranging the rows and columns to make patterns clearer.

This shows the correlations between the attributes used to describe the records, examples, and samples within the bridges dataset. The line of 1.00s going from the top left to the bottom right is the main diagonal, which shows that each variable always perfectly correlates with itself. The matrix is symmetrical, with the correlations shown above the main diagonal being a mirror image of those below it.

```python
# display_heatmap(corr_matrix)
```

This kind of presentation allows us, once we choose a row index and a column index, say the _ith_ row and the _jth_ column, to check locally the correlation coefficient assigned to the pair of distinct features, provided _i_ is strictly different from _j_. For instance, what we can suggest by observing the correlation matrix depicted just above, also exploiting some common properties of symmetric square matrices, is that:

- along the area spreading near the main diagonal, the resulting feature pairs seem to correlate either moderately positively or weakly positively;
- conversely, along the area spreading near the anti-diagonal, the resulting feature pairs seem to correlate either moderately negatively or weakly negatively.

Finally, as examples, exploiting the fact that we can directly access the correlation value computed through the formula of the correlation coefficient, we can say that:

- the pair made of the ERECTED and LANES features (3rd row, 6th column) seems to correlate moderately positively, with a value equal to 0.65. This is also reasonable: as the city of Pittsburgh grew in size, the need for more infrastructure and for buildings to work and live in led to wider bridges to manage the traffic from and to the city.
- On the other hand, still speaking about the ERECTED feature, when coupled with the TYPE feature (3rd row, 12th column) the pair is characterized by a negative correlation value, leading us to interpret it as moderately negatively correlated. The principal reason for such a behavior may be that, through the years, better building techniques and technologies have been employed to construct stronger bridges, abandoning the oldest techniques, which implied the exploitation of less technological materials, such as wood, that require more frequent maintenance.
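Such single coefficients can also be read off the matrix directly; a small sketch, assuming the commented `corr_matrix = dataset.corr()` line above has actually been run (the printed values are the ones discussed here and depend on the exact preprocessing steps):

```python
# Direct lookup of single correlation coefficients, assuming
# corr_matrix = dataset.corr() from the cell above.
print(corr_matrix.loc['ERECTED', 'LANES'])  # expected: moderate positive (~0.65)
print(corr_matrix.loc['ERECTED', 'TYPE'])   # expected: moderate negative
# Symmetry: corr_matrix.loc[i, j] == corr_matrix.loc[j, i] for any two features.
```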
### Pie chart as a continuation of Correlation Matrix Analysis

Within this section I am going to discuss and analyse the usefulness of pie-chart-like graphs for describing some features or behaviors of the correlation matrix values.

The first pie chart, followed by a related histogram, aims at explaining and depicting how the feature pairs are distributed among three main subintervals, named weak, moderate and strong, that label the kind of correlation the coefficient $p$ of a given pair represents. The intervals are the following (a sketch of this binning follows the charts below):

- weak: $|p| \leq .5$
- moderate: $.5 < |p| < .8$
- strong: $|p| \geq .8$

```python
# show_pie_hist_charts_abs_corr(corr_matrix, figsize=(2, 2), gridshape=None)
```

The insight we gain is that up to nearly 90.90% of the feature pairs are weakly correlated and just 9.09% are moderately correlated, without performing any finer distinction between positively and negatively correlated pairs within each group; we also notice that no pair is strongly correlated. Moreover, looking at the related histogram, illustrated just below the pie chart, we can clearly see that just 5 out of 12 features also show moderate correlation patterns, namely CLEAR-G, ERECTED, LANES, RIVER, and SPAN. Furthermore, we can end up saying that, among those 5 features, just ERECTED and SPAN show the larger number of pairs in which they moderately correlate; in the majority of cases the possible feature pairs seem, by and large, to correlate weakly.

The other two graphs, the first a pie chart and the second a histogram, are a kind of zoom-in of the two graphs illustrated in the paragraphs above. In particular, these two subsequent graphs aim at exploring more deeply the correlation coefficients, taking into account also the positiveness, or conversely the negativeness, of the correlation, and not just the information about its strength in absolute value.

```python
# show_pie_hist_charts_corr(corr_matrix, figsize=(10, 10), gridshape=None)
```

Analysing the two graphs, the pie chart allows us to see that the features seem to be more often weakly negatively correlated than weakly positively correlated; following from the prior pie chart, also here weak correlation dominates over the other kinds, or strengths, of correlation, that are the remaining moderate and strong ones. In particular, weak negative correlation rises up to nearly 50% and weak positive correlation reaches up to 40%; in other words, the latter is more or less 10 percentage points below the former.
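The binning behind such charts can be sketched as follows (a rough approximation assuming `corr_matrix` from the previous cells, not the project's `show_pie_hist_charts_*` helpers):

```python
# Share of feature pairs per correlation-strength class, using the
# thresholds defined above; each unordered pair is counted once.
import numpy as np
import pandas as pd

r = corr_matrix.abs().values
iu = np.triu_indices_from(r, k=1)        # upper triangle, diagonal excluded
strength = pd.cut(pd.Series(r[iu]), bins=[0, .5, .8, 1.0],
                  labels=['weak', 'moderate', 'strong'], include_lowest=True)
print((strength.value_counts(normalize=True) * 100).round(2))
```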
From the bar chart, obtained from a histogram partly related to the pie chart above, we can understand that, considering the same set of features that also show a bit of moderate correlation, that are CLEAR-G, ERECTED, LANES, RIVER, and SPAN, only three out of those five also show, somehow, a little moderate positive correlation with some features, while the others mostly show moderate positive correlations.

## Prepare data for Training Step

```python
# Make distinction between Target Variable and Predictors
# --------------------------------------------------------------------------- #
rescaledX, y, columns = prepare_data_for_train(dataset, target_col=TARGET_COL)
```

    Summary about Target Variable {target_col}
    --------------------------------------------------
    2    57
    1    13
    Name: T-OR-D, dtype: int64
    shape features matrix X, after normalizing: (70, 11)

```python
# sns.pairplot(dataset, hue=TARGET_COL, height=2.5);
```

### Principal Component Analysis

After having investigated the data points inside the dataset, I move on to another section of my report, where I explore the examples that make up the dataset using a particular technique from the field of statistical analysis: the so-called Principal Component Analysis. The major objective of this section is to understand whether it is possible to transform, by means of a linear transformation given by a mathematical calculation, the original data examples into a reprojected representation that allows me to retain the most useful information, to be later exploited at training time. So, let us dive a bit into what Principal Component Analysis is and which are its main concepts, pros and cons.

Firstly, we know that **Principal Component Analysis**, more shortly PCA, is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called *principal components*. This transformation is defined in such a way that:

- the first principal component has the largest possible variance (that is, it accounts for as much of the variability in the data as possible),
- and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components.

The resulting vectors, each being a linear combination of the variables and containing n observations, form an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables. PCA is mostly used as a tool in *exploratory data analysis* and for making predictive models; for those reasons I use such a technique here, before going through the different learning techniques for producing my models.

#### Several Different Implementations

From the theory and the field of research in statistics, we know that there exist several different implementations and ways of computing principal component analysis, each adopted technique having different performance as well as numerical stability. The three major derivations are:

- PCA by means of an iterative procedure that extracts the principal components one after the other, selecting each time the one that accounts for most of the variance along its own axis, within the remaining subspace to be derived.
- The second possible way of performing PCA is via the calculation of the *covariance matrix* of the attributes, that are our independent predictive variables used to represent the data points.
- Lastly, the technique known as *Singular Value Decomposition* (SVD), applied to the overall data matrix of our dataset.

Reading the scikit-learn documentation, I discovered that its PCA implementation uses the *LAPACK implementation* of the *full SVD*, or a *randomized truncated SVD* by the method of *Halko et al. 2009*, depending on the shape of the input data and the number of components to extract. Therefore I will mainly describe that way of deriving the method, while the others will be described more briefly and roughly.

#### PCA's Iterative Method

Going in order, as briefly outlined above, I start by describing PCA obtained by means of the iterative procedure that extracts one new principal component at a time, exploiting the data points at hand.

We begin by recalling that PCA is defined as an orthogonal linear transformation that maps the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. We suppose we deal with a data matrix X, with column-wise zero empirical mean, where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature.

From a mathematical point of view, the transformation is defined by a set of p-dimensional vectors of weights or coefficients $\mathbf{w}_{(k)} = (w_1, \dots, w_p)_{(k)}$ that map each row vector $\mathbf{x}_{(i)}$ of X to a new vector of principal component scores $\mathbf{t}_{(i)} = (t_1, \dots, t_l)_{(i)}$, given by:

$$t_{k(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)} \qquad \text{for} \quad i = 1, \dots, n \qquad k = 1, \dots, l$$

In this way all the individual variables $t_1, \dots, t_l$ of t, considered over the data set, successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector. More precisely, in order to maximize variance, the first component has to satisfy the following expression:

$$\mathbf{w}_{(1)} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}} \left\{ \sum_i \left( t_1 \right)_{(i)}^2 \right\} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}} \left\{ \sum_i \left( \mathbf{x}_{(i)} \cdot \mathbf{w} \right)^2 \right\}$$

With $\mathbf{w}_{(1)}$ found, the first principal component of a data vector $\mathbf{x}_{(i)}$ can then be given as a score $t_{1(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(1)}$ in the transformed coordinates, or as the corresponding vector in the original variables, $(\mathbf{x}_{(i)} \cdot \mathbf{w}_{(1)})\,\mathbf{w}_{(1)}$. The remaining components are computed as follows.
The kth component can be found by subtracting the first k − 1 principal components from X, as in the following expression:

$$\mathbf{\hat{X}}_k = \mathbf{X} - \sum_{s=1}^{k-1} \mathbf{X} \mathbf{w}_{(s)} \mathbf{w}_{(s)}^{\mathrm{T}}$$

and then finding the weight vector that extracts the maximum variance from this new data matrix:

$$\mathbf{w}_{(k)} = \underset{\Vert \mathbf{w} \Vert = 1}{\operatorname{arg\,max}} \left\{ \Vert \mathbf{\hat{X}}_k \mathbf{w} \Vert^2 \right\} = \operatorname{arg\,max} \left\{ \frac{\mathbf{w}^T \mathbf{\hat{X}}_k^T \mathbf{\hat{X}}_k \mathbf{w}}{\mathbf{w}^T \mathbf{w}} \right\}$$

It turns out that:

- from the formulas depicted above we get the remaining eigenvectors of $X^T X$, with the maximum values for the quantity in brackets given by their corresponding eigenvalues; thus the weight vectors are eigenvectors of $X^T X$.
- The kth principal component of a data vector $x_{(i)}$ can therefore be given as a score $t_{k(i)} = x_{(i)} \cdot w_{(k)}$ in the transformed coordinates, or as the corresponding vector in the space of the original variables, $(x_{(i)} \cdot w_{(k)})\, w_{(k)}$, where $w_{(k)}$ is the kth eigenvector of $X^T X$.
- The full principal components decomposition of X can therefore be given as $\mathbf{T} = \mathbf{X}\mathbf{W}$, where W is a p-by-p matrix of weights whose columns are the eigenvectors of $X^T X$.

#### Covariance Matrix for PCA analysis

PCA carried out via covariance matrix computation requires the calculation of the sample covariance matrix of the dataset, which satisfies $\mathbf{Q} \propto \mathbf{X}^T \mathbf{X} = \mathbf{W} \mathbf{\Lambda} \mathbf{W}^T$. The empirical covariance matrix between the principal components then becomes

$$\mathbf{W}^T \mathbf{Q} \mathbf{W} \propto \mathbf{W}^T \mathbf{W} \, \mathbf{\Lambda} \, \mathbf{W}^T \mathbf{W} = \mathbf{\Lambda}$$

#### Singular Value Decomposition for PCA analysis

Finally, the principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X, $\mathbf{X} = \mathbf{U} \mathbf{\Sigma} \mathbf{W}^T$, where more precisely:

- Σ is an n-by-p rectangular diagonal matrix of positive numbers $\sigma_{(k)}$, called the singular values of X;
- U is an n-by-n matrix whose columns are orthogonal unit vectors of length n, called the left singular vectors of X;
- W is a p-by-p matrix whose columns are orthogonal unit vectors of length p, called the right singular vectors of X.

Factorizing the matrix $X^T X$, it can be written as:

$$\begin{aligned} \mathbf{X}^T \mathbf{X} &= \mathbf{W} \mathbf{\Sigma}^T \mathbf{U}^T \mathbf{U} \mathbf{\Sigma} \mathbf{W}^T \\ &= \mathbf{W} \mathbf{\Sigma}^T \mathbf{\Sigma} \mathbf{W}^T \\ &= \mathbf{W} \mathbf{\hat{\Sigma}}^2 \mathbf{W}^T \end{aligned}$$

where we recall that $\mathbf{\hat{\Sigma}}$ is the square diagonal matrix with the singular values of X and the excess zeros chopped off, satisfying $\mathbf{\hat{\Sigma}}^2 = \mathbf{\Sigma}^T \mathbf{\Sigma}$.
Comparison with the eigenvector factorization of $X^T X$ establishes that the right singular vectors W of X are equivalent to the eigenvectors of $X^T X$, while the singular values $\sigma_{(k)}$ of X are equal to the square roots of the eigenvalues $\lambda_{(k)}$ of $X^T X$.

At this point we understand that, using the singular value decomposition, the score matrix T can be written as

$$\begin{align} \mathbf{T} & = \mathbf{X} \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^T \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma} \end{align}$$

so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T. Efficient algorithms exist to calculate the SVD of X, as in the scikit-learn package, without having to form the matrix $X^T X$, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix.

```python
show_table_pc_analysis(X=rescaledX)
```

    Cumulative varation explained(percentage) up to given number of pcs:

<div>
<style scoped>
    .dataframe tbody tr th:only-of-type {
        vertical-align: middle;
    }

    .dataframe tbody tr th {
        vertical-align: top;
    }

    .dataframe thead th {
        text-align: right;
    }
</style>
<table border="1" class="dataframe">
  <thead>
    <tr style="text-align: right;">
      <th></th>
      <th># PCS</th>
      <th>Cumulative Varation Explained (percentage)</th>
    </tr>
  </thead>
  <tbody>
    <tr><th>0</th><td>2</td><td>47.738342</td></tr>
    <tr><th>1</th><td>5</td><td>75.856460</td></tr>
    <tr><th>2</th><td>6</td><td>82.615768</td></tr>
    <tr><th>3</th><td>7</td><td>88.413903</td></tr>
    <tr><th>4</th><td>8</td><td>92.661938</td></tr>
    <tr><th>5</th><td>9</td><td>95.976841</td></tr>
    <tr><th>6</th><td>10</td><td>98.432807</td></tr>
  </tbody>
</table>
</div>

#### Major Pros & Cons of PCA

Briefly summarizing what was said above: among the pros, PCA yields uncorrelated components, can drastically reduce the dimensionality of the data while retaining most of the explained variance, and mitigates multicollinearity between the predictors; among the cons, it is sensitive to the relative scaling of the original variables, it only captures linear relationships, and the resulting components, being linear combinations of all the original features, are harder to interpret than the features themselves.

## Learning Process <a class="anchor" id="learning-models"></a>

Here in this section we are going to partly describe, and for the remaining part to test and evaluate, the performance of the various machine learning models that we selected and adopted to build learning models for supervised classification tasks. More precisely, we focus on a binary classification problem, since the target variable, the T-OR-D feature, one of the 12 features the dataset is made of and by which the roughly one hundred records are described, is a binary categorical feature that can assume the two values represented by the labels DECK and THROUGH. These describe, in two distinct manners, a property of each bridge within the dataset: the system used for constructing the bridge surface, commonly called deck, to let vehicles, trains or whatever else cross the rivers, which are three distinct rivers: A, M, O, where A stands for the Allegheny river, M for the Monongahela river and, lastly, O for the Ohio river.

Before describing the fine-tuning process applied to the different models, according to their own major properties, characteristics and features, we have decided and established to test each model's performance by looking at how well each of them does just exploiting its default settings, running a cross-validation protocol, in other words also referred to as policy, to check the accuracy level, as an instance, and some other metrics.
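As a minimal sketch of this baseline idea, assuming scikit-learn and the `rescaledX`, `y` arrays prepared earlier (this is not one of the project's own helpers), a single default-configured estimator can be scored with cross-validation as follows:

```python
# Baseline sketch: cross-validated accuracy of one estimator with its
# default hyper-parameters, before any fine tuning.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

clf = LogisticRegression()                     # default hyper-parameters
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(clf, rescaledX, y, cv=cv, scoring='accuracy')
print('accuracy per fold:', scores.round(3))
print('mean +/- std: {:.3f} +/- {:.3f}'.format(scores.mean(), scores.std()))
```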
To be more detailed, we follow the common machine learning workflow, which requires splitting the dataset, after having preprocessed it properly and in a manner suitable to meet the machine learning models' needs, into subsets commonly referred to as training set and test set. The former is exploited to build up an inference model, while the latter is used to check the model's performance, as well as its behavior, upon a held-out sample of instances never seen before, that is, examples the learning model was not provided with while learning its weights and selecting the right hyper-parameters to plug back into the model at the end of the training procedure. The model is later tested with the found weights as well as hyper-parameters and, if it meets our requirements in terms of performance values reached at test time, it will be ready for deployment.

Following the proposed machine learning scheme or workflow, the steps to carry out are the subsequent ones:

__(1) Split Dataset into Train and Test sets__: after having finished the preprocessing phase, we separate or divide the dataset into a training set and a test set, where usually the training set is bigger than the test set; in some cases the test set, for recording some kinds of performance measures, can even be made of just a single instance against which the model will be tested and checked.

__(2) Split Train set into a smaller Train set and a Validation set__: once we have made the first split, we hold out a part (the test set) for later checking, and we focus on the training set. Once we have selected a machine learning model amongst those we want to adopt for fitting a model and comparing the results obtained, we try to identify the best-fit parameters with which to configure it. To reach such a goal we can adopt different approaches to split the training set further into a validation set and a smaller train set, in order to emulate, before the test phase, a possible behavior of the trained model once we think it is ready for that following phase. There are several procedures for establishing how to divide the training set into the two new subsets, a validation set and a somewhat smaller train set, all of them connected or referred to a protocol called cross-validation, which, roughly speaking, consists in testing the model against a smaller portion of the training set in order to record and measure the performance of a model before saying we are ready to proceed with the test phase. Among the existing cross-validation procedures, we adopt, and so briefly describe, the following:

- **K-fold Cross-Validation**: a cross-validation protocol in which we split the training set into K folds, in other words K subsets, all of the same size, apart possibly from the last one, which holds the remaining samples. One at a time, each of the K folds is left out, and the model is first trained on the remaining K-1 subsets (folds) and then tested against the left-out fold to record its performance. After the model has been trained k times and the performance measures recorded, we can average the results and understand how well the model does on average. In other words, we can either take the mean value as the driving value to judge whether the model satisfies our constraints on the performance measures, or adopt the best result amongst the k runs as the hyper-parameter settings to use.
  This procedure is feasible and suitable if we do not care about the fact that, in classification tasks, the categories might be unbalanced in terms of number of instances; moreover, it can be adopted if we want to show a learning curve and some other performance graphics or schemes, such as confusion matrices and ROC or precision-recall curves. Lastly, usual values for K are 5, 10, and 20.
- **Leave One Out Cross-Validation**: a special case of K-fold cross-validation. Instead of adopting the usual values of 5, 10 or 20 for K as the number of subsets, we treat each single instance as a fold of its own. It is clear that this algorithm requires more time to be completed, and it does not allow us to show the graphics cited just above, since a single left-out instance does not provide enough data for a confusion matrix or for ROC and precision-recall curves.
- **Stratified Cross-Validation**: a good compromise when the dataset is still large enough to be exploited for training purposes, but we decide to split the training set into K folds such that each subset preserves the proportion of samples coming from the different classes. This operation turns out to be necessary, or even mandatory, when we detect that the dataset does not show more or less the same number of samples per class, in other words when the dataset is unbalanced with respect to the target attribute. While trying to mitigate the unbalanced-dataset issue in this way, we think, as well as hope, that this management lets us fit a model that will not be dominated by the most numerous class, but will still learn how to classify the samples coming from the other, less numerous classes, without too many misclassification errors. As with plain K-fold cross-validation, stratified cross-validation allows us to show the same graphics; the difference is that the folds are not randomly sampled from the original training set, but sampled with the same proportion per class, so that each fold contains the same class balance.

We try out all three of the described cross-validation techniques to measure how well the default settings of the different models are doing, to gain a baseline against which to compare the later results coming from the fine-tuning process, carried out by exploiting the grid-search technique for selecting the best combination of proposed values for each candidate machine learning technique.

__(3) Grid Search approach__: the technique adopted when, taking into account the machine learning algorithms one at a time, we select the best set of hyper-parameters. It consists in defining, for each model, a grid of possible values for the different hyper-parameters, which in some sense represent our degrees of freedom over some of the properties that characterize the different models. The grid values might be real numbers ranging within some interval, or string values used to trigger a certain feature of a model, combined with other related aspects of the machine learning algorithm of the given model. We recall that a standard grid search proceeds until all possible combinations of the provided values have been tested and the training runs with such settings have been carried out. Opposite to classic grid search there is another technique, called Random Grid Search, which instead lets us sample the hyper-parameters randomly within the ranges or intervals associated with each hyper-parameter; both are sketched just below.
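A short sketch contrasting the two search strategies, with an illustrative SVC grid rather than the project's actual `param_grids`, again assuming the `rescaledX`, `y` arrays from above (`loguniform` requires a reasonably recent SciPy):

```python
# Exhaustive grid search vs randomized search over hyper-parameters.
from scipy.stats import loguniform
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}   # 6 combinations, all tried
gs = GridSearchCV(SVC(), grid, cv=5).fit(rescaledX, y)

dist = {'C': loguniform(1e-2, 1e2), 'kernel': ['linear', 'rbf']}
rs = RandomizedSearchCV(SVC(), dist, n_iter=4, cv=5,      # only 4 sampled settings
                        random_state=42).fit(rescaledX, y)

print(gs.best_params_, round(gs.best_score_, 3))
print(rs.best_params_, round(rs.best_score_, 3))
```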
The randomized technique can potentially be less expensive, since we test a reduced number of combinations, but it might be sub-optimal, even if the results can still be acceptable and meaningful.

__(4) Metrics and model performance evaluation tools__: before deploying, we have to test our model against a test set, that is, the subset obtained when the overall dataset, after the preprocessing phase that turns the feature values into numbers, was divided into two distinct sets, generally of different size. Test set evaluation implies exploiting some metrics, such as Accuracy, but there exist several others, partly derived from the confusion matrix evaluation tool, such as Precision, Recall, F1-score, and the like. So what we understand is that we can make use of a bunch of metrics; but rather than using those metrics directly, we can explore a model's performance by means of more useful tools, such as the confusion matrix and the ROC curve, in order to better understand the model's behavior when fed with unobserved new samples, as well as how to set a threshold for determining when the model output suggests that a classified sample belongs to one class or the other. So, here we briefly describe the instruments we exploit to measure a model's performance, starting from the confusion matrix and moving ahead towards the ROC curve.

**Confusion Matrix**: in statistics, a confusion matrix is a grid or matrix of numbers which, in the simplest scenario, corresponding to a binary classification task, aims at showing how well a model did when applied to unknown or previously unseen data points and samples, in the following manner. Arbitrarily, we establish that along the rows we have the fraction of samples that the model has classified, or assigned, to a given class, so that the rows account for the *Predicted values*; vice versa, along the columns we have the total number of samples of each class, which all together resemble the so-called *Actual values*.

Such a table of numbers allows us to measure the fraction of correctly classified examples belonging to the Positive class, also referred to as *True Positives (TP)*, or to the Negative class, also named *True Negatives (TN)*; at the same time we can derive the fractions of wrongly classified Positive and Negative samples, respectively known as *False Positives (FP)* and *False Negatives (FN)*. It is clear, looking at the matrix diagonal, that the larger the values along the diagonal, the better the model was able to correctly identify the samples according to their own actual class.

From the four statistical measures depicted above, that are TP, TN, FP, and FN, other useful metrics have been derived over time. They can be exploited when the most used and well-known performance measure, accuracy, is not enough: for instance, because we want to analyze our model's behavior more deeply towards the optimization of some goal, or because we have to deal with datasets that are not balanced across the classes to be predicted, and so we want some other metrics to ensure the goodness of our solutions.

**ROC Curve**: for the reasons cited some lines earlier, other useful tools for ensuring the goodness of a solution have been developed by exploiting the four basic measures; among those tools we decide to adopt the following one, known as the ROC curve.
It is a curve, whose acronym stands for Receiver Operating Characteristic, largely employed in the field of *Decision Theory*; it aims at finding, suggesting or showing how a model's performance varies when we set different thresholds, in the simple scenario in which we are solving a binary classification problem. Such a curve shows on the x-axis the fraction of samples corresponding to the *False Positive Rate (FPR)* at the different values of the classification threshold used at inference time, and on the y-axis the fraction of samples corresponding to the *True Positive Rate (TPR)*, plotting a curve that originates at coordinates (0,0) and terminates at coordinates (1,1), varying in between according to the (FPR, TPR) pairs recorded at each given threshold.

We also report two reference curves: the curve related to the *Random Classifier*, which corresponds to a classifier that, for each instance, just randomly selects the predicted class, and the *Perfect Classifier*, which always classifies all the samples correctly. Our goal, in analysing such a graphic, is to identify the threshold value such that we are near the points on the curve that are not so far from the upper-left corner, so as to maximize the TPR as well as minimize the FPR.

Another useful quantity related to the ROC curve is the so-called Area Under the Curve (AUC), which suggests how much area under the ROC curve was accumulated while varying the threshold for classifying the test samples; in particular, the higher the value, the better the classifier does across the thresholds. Lastly, we notice that the Random Classifier accounts for an AUC equal to 0.5, while the Perfect Classifier for 1.0, so we aim at obtaining a value for the AUC that is at least in between, but approaches unity.

Lastly, before moving ahead with describing the machine learning models, we provide a brief description of another useful tool that can be exploited if necessary during our analyses:

**Learning Curve**: learning curves constitute a great tool to diagnose bias and variance in any supervised learning algorithm. They come in handy when we have to face the so-called **Bias-Variance Trade-Off**. In order to explain what that trade-off implies, we briefly state what follows:

- In supervised learning, we assume there is a real relationship between the feature(s) and the target, and we estimate this unknown relationship with a model. Provided the assumption is true, there really is a model, which we'll call $f$, which describes perfectly the relationship between features and target.
- In practice, $f$ is almost always completely unknown, and we try to estimate it with a model $\hat{f}$. We use a certain training set and get a certain $\hat{f}$. If we use a different training set, we are very likely to get a different $\hat{f}$. As we keep changing training sets, we get different outputs for $\hat{f}$. The amount by which $\hat{f}$ varies as we change training sets is called **variance**.
- For most real-life scenarios, however, the true relationship between features and target is complicated and far from linear. Simplifying assumptions give **bias** to a model. The more erroneous the assumptions with respect to the true relationship, the higher the bias, and vice-versa.
- In practice, however, we need to accept a trade-off.
We can't have both low bias and low variance, so we want to aim for something in the middle, knowing that:

\begin{equation}
\begin{cases}
Y = f(X) + \text{irreducible error} \\
f(X) = \hat{f}(X) + \text{reducible error} \\
Y = \hat{f}(X) + \text{reducible error} + \text{irreducible error}
\end{cases}
\end{equation}

__(5) Deployment__: the last step for building a machine learning system comprises the deployment of one or more trained machine learning models that fit and satisfy the constraints initially fixed before doing any kind of analysis. The main goal of deployment is to employ such statistical models for predicting, in other words making inference, against new data: unknown observations for which we do not know the target class values.

```python
try_comparing_various_online_solvers_by_kernel(
    X=rescaledX, y=y, kernels=None,
    heldout=[.75, .7, .65, .6, .55, .5, .45, .4, .35, .33, .3, .25, .2],
    rounds=20, n_classes=-1, n_components=9, estimators=estimators_list,
    cv=StratifiedKFold(2), verbose=0, show_fig=True, save_fig=False,
    stratified_flag=True, n_splits=3, random_state=42, gridshape=(3, 2),
    figsize=(15, 15), title="try comparing various online solvers by kernel",
    fig_name="try_comparing_various_online_solvers_by_kernel.png")
```

## Learning Models <a class="anchor" id="learning-models"></a>

### Learning Curve

We compute the learning curve for a certain number of machine learning methods, applied using the default configurations provided by the scikit-learn API, as illustrated in the online documentation. In particular, we have established to focus on just one kernel trick, instead of analysing the behaviors of all the different kernel modes of the kernel-PCA unsupervised learning technique: carrying out several trials with different kernel tricks, the results found seemed to be nearly the same, with some minor differences, so we select the first kernel trick as a representative for the model.

```python
X = rescaledX; n_components = 9
# learning_curves_by_components(
#     train_sizes=list(range(5, 50)),
#     figsize=(20,5)
# learning_curves_by_kernels(estimators_list[:], estimators_names[:], X, y,
#     train_sizes=np.linspace(.1, 1.0, 10), n_components=n_components,
#     pca_kernels_list=pca_kernels_list[0], verbose=0, by_pairs=True,
#     savefigs=True,
#     figs_dest=os.path.join('figures', 'learning_curve', f"Pcs_{n_components}"))
```

What we can learn by observing the bunch of graphics displayed just above is that not all the different machine learning techniques, applied to the dataset pre-processed by means of a specific kernel trick for kernel-PCA, show the same behavior; some plots even suggest that a machine learning algorithm may not be suitable, on such a small dataset, for exhibiting the well-known trend of a learning curve. To be more precise, we are going to speak roughly and briefly about each graphic, in the order in which they have been shown, recalling that we have adopted up to 9 principal components, which account for nearly 95% of the cumulative explained variance of the dataset at hand:

- The first general observation or thought is that, in all the pictures, the initial gap between the two curves is most of the time wide, large, in other words important: more or less about 20 percentage points.
- Speaking about the learning curves obtained from applying Gaussian Naive Bayes first, and K-Nearest Neighbors after, what we can say is that the curve related to the Training Score decreases by more or less 10 percentage points from the top, reaching 90%, while the Cross-Validation curve seems constant for the first 6 trials and then improves a little. For the former model, however, we see increases and decreases that do not allow us to fix a trend for measuring the gap between the two curves; for the latter model, the two curves seem to converge at a certain point, once the training set is more or less the overall dataset, which means we are exploiting most of the information.
- If we focus instead on the Logistic Regression technique and the SGD Classifier, we observe that, before a given number of trials, in both pictures the two curves, the Training and Cross-Validation score curves, seem to follow independent trends, increasing and decreasing independently; then the two curves start to follow a decreasing trend, so we can hypothesize that after those precise trials we may have reached the desired training size.
- The learning curve associated with the SVC classifier suggests that, looking at both the Training and Cross-Validation curves, it behaves more or less like the previous graphics; but while the Training curve, as it decreases and loses accuracy score, tightens its variance, the Cross-Validation curve keeps a wide variance, stretching it while the training set size increases, meaning that the model seems more and more unsure about the results.
- The last two graphics are the most problematic and questionable ones: while enlarging the training size we see neither improvement nor worsening in the Training curve, while the Cross-Validation curve seems, at the very beginning, to show the same behavior as the Training curve, but then, after a sequence of trials with increasing training size, we get an improvement in terms of accuracy scores. Since we normally expect a learning curve in which the Training and Cross-Validation curves respectively decrease and increase their performance, at least up to a point where we hope the gap between them will be as small as possible, we may think that these two graphics do not directly explain the real issues we face when dealing with the Decision Tree and Random Forest classifiers on such a small dataset.

### Cross Validation

We perform cross-validation for all the machine learning models to be fine-tuned; later, once available, the cross-validation results are plotted and considered as a baseline, since those results are computed when the models have been prepared and configured with their default hyper-parameter values.
```python
# Perform all Cross-Validations
# ----------------------------------------------------------------- #
# naive_bayes_classifier_grid_search(rescaledX, y)
plot_dest = os.path.join("figures", "n_comp_2_analysis", "cross_validation")
plots_names = list(map(lambda xi: f"{xi}_learning_curve.png", estimators_names))

n = len(estimators_list)
dfs_list, df_strfd = fit_all_by_n_components(
    estimators_list=estimators_list[:n], estimators_names=estimators_names[:n],
    X=X, y=y, random_state=0, test_size=.33, n_components=9,
    cv_list=cv_list[:N_CV], show_plots=False,
    pca_kernels_list=pca_kernels_list[:N_KERNEL-2], verbose=0,
    plot_dest=plot_dest)
show_df_with_mean_at_bottom(df_strfd)
# df_strfd.head(df_strfd.shape[0])
```

## Naive Bayes Classification

Naive Bayes models are a group of extremely fast and simple classification algorithms that are often suitable for very high-dimensional datasets. Because they are so fast and have so few tunable parameters, they end up being very useful as a quick-and-dirty baseline for a classification problem. Here I will provide an intuitive and brief explanation of how naive Bayes classifiers work, followed by their exploitation on my dataset.

I start by saying that naive Bayes classifiers are built on Bayesian classification methods. These rely on Bayes's theorem, which is an equation describing the relationship of conditional probabilities of statistical quantities. In Bayesian classification, we're interested in finding the probability of a label given some observed features, which we can write as $P(L \mid \text{features})$. Bayes's theorem tells us how to express this in terms of quantities we can compute more directly:

$$P(L \mid \text{features}) = \frac{P(\text{features} \mid L)\, P(L)}{P(\text{features})}$$

If we are trying to decide between two labels, and we call them $L_1$ and $L_2$, then one way to make this decision is to compute the ratio of the posterior probabilities for each label:

$$\frac{P(L_1 \mid \text{features})}{P(L_2 \mid \text{features})} = \frac{P(\text{features} \mid L_1)\, P(L_1)}{P(\text{features} \mid L_2)\, P(L_2)}$$

All we need now is some model by which we can compute $P(\text{features} \mid L_i)$ for each label. Such a model is called a generative model because it specifies the hypothetical random process that generates the data. Specifying this generative model for each label is the main piece of the training of such a Bayesian classifier. The general version of such a training step is a very difficult task, but we can make it simpler through the use of some simplifying assumptions about the form of this model.

This is where the "naive" in "naive Bayes" comes in: if we make very naive assumptions about the generative model for each label, we can find a rough approximation of the generative model for each class, and then proceed with the Bayesian classification. Different types of naive Bayes classifiers rest on different naive assumptions about the data, and we will examine a few of these in the following sections.

#### Gaussian Naive Bayes

Perhaps the easiest naive Bayes classifier to understand is Gaussian naive Bayes. In this classifier, the assumption is that data from each label is drawn from a simple Gaussian distribution. In fact, one extremely fast way to create a simple model is to assume that the data is described by a Gaussian distribution with no covariance between dimensions. This model can be fit by simply finding the mean and standard deviation of the points within each label, which is all you need to define such a distribution.
$$P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma^2_y}} \exp\left(-\frac{(x_i - \mu_y)^2}{2\sigma^2_y}\right)$$

The parameters $\sigma_{y}$ and $\mu_{y}$ are usually estimated via maximum likelihood.

```python
# GaussianNB
# -----------------------------------
show_df_with_mean_at_bottom(dfs_list[pos_cv])
# dfs_list[pos_cv].head(dfs_list[pos_cv].shape[0])
```

```python
plot_name = plots_names[pos_cv]
show_learning_curve(dfs_list[pos_cv], n=len(cv_list[:N_CV]),
    figsize=(15, 7), plot_dest=plot_dest, grid_size=grid_size, plot_name=plot_name)
```

#### When to Use Naive Bayes

Because naive Bayesian classifiers make such stringent assumptions about data, they will generally not perform as well as a more complicated model. That said, they have several advantages:

- They are extremely fast for both training and prediction
- They provide straightforward probabilistic prediction
- They are often very easily interpretable
- They have very few (if any) tunable parameters

These advantages mean a naive Bayesian classifier is often a good choice as an initial baseline classification. If it performs suitably, then congratulations: you have a very fast, very interpretable classifier for your problem. If it does not perform well, then you can begin exploring more sophisticated models, with some baseline knowledge of how well they should perform.

Naive Bayes classifiers tend to perform especially well in one of the following situations:

- When the naive assumptions actually match the data (very rare in practice)
- For very well-separated categories, when model complexity is less important
- For very high-dimensional data, when model complexity is less important

The last two points seem distinct, but they actually are related: as the dimension of a dataset grows, it is much less likely for any two points to be found close together (after all, they must be close in every single dimension to be close overall). This means that clusters in high dimensions tend to be more separated, on average, than clusters in low dimensions, assuming the new dimensions actually add information. For this reason, simplistic classifiers like naive Bayes tend to work as well as or better than more complicated classifiers as the dimensionality grows: once you have enough data, even a simple model can be very powerful.

## Logistic Regression

| Learning Technique | Type of Learner | Type of Learning | Classification | Regression |
| --- | --- | --- | --- | --- |
| *Logistic Regression* | *Linear Model* | *Supervised Learning* | *Supported* | *Not-Supported* |

Logistic regression is a linear model for classification rather than regression. It is also known in the literature as logit regression, maximum-entropy classification (MaxEnt) or the log-linear classifier. In this model, the probabilities describing the possible outcomes of a single trial are modeled using a logistic function. Logistic regression is implemented in LogisticRegression. This implementation can fit binary, One-vs-Rest, or multinomial logistic regression with optional $l_{1}$, $l_{2}$ or Elastic-Net regularization.
As an optimization problem, binary-class $l_{2}$-penalized logistic regression minimizes the following cost function:

- $\min_{w, c} \frac{1}{2}w^T w + C \sum_{i=1}^n \log(\exp(- y_i (X_i^T w + c)) + 1)$

Similarly, $l_{1}$-regularized logistic regression solves the following optimization problem:

- $\min_{w, c} \|w\|_1 + C \sum_{i=1}^n \log(\exp(- y_i (X_i^T w + c)) + 1)$

Elastic-Net regularization is a combination of $l_{1}$ and $l_{2}$, and minimizes the following cost function:

- $\min_{w, c} \frac{1 - \rho}{2}w^T w + \rho \|w\|_1 + C \sum_{i=1}^n \log(\exp(- y_i (X_i^T w + c)) + 1)$
- where $\rho$ controls the strength of $l_{1}$ regularization vs. $l_{2}$ regularization.

### Cross-Validation Results

```python
# LogisticRegression
# -----------------------------------
pos_cv = pos_cv + 1; show_df_with_mean_at_bottom(dfs_list[pos_cv])
# dfs_list[pos_cv].head(dfs_list[pos_cv].shape[0])
```

```python
plot_name = plots_names[pos_cv]
show_learning_curve(dfs_list[pos_cv], n=len(cv_list[:N_CV]), figsize=(15, 7), plot_dest=plot_dest, grid_size=grid_size, plot_name=plot_name)
```

### Grid-Search Results

```javascript
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) { return false; }
```

```python
plot_dest = os.path.join("figures", "n_comp_9_analysis", "grid_search"); X = copy.deepcopy(rescaledX)
df_gs, df_auc_gs, df_pvalue = grid_search_all_by_n_components( estimators_list=estimators_list[pos_gs+1], \
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1], \
    X=X, y=y, n_components=9, random_state=0, show_plots=False, show_errors=False, verbose=1, plot_dest=plot_dest, debug_var=False)
df_9, df_9_auc = df_gs, df_auc_gs
```

Looking at the results obtained running the *Logistic Regression Classifier* against our dataset, split into training set and test set and preprocessed with a different kernel trick for the *kernel-PCA* unsupervised method in each trial, we can state that, generally speaking, all the methods showed a very high *train accuracy score*, in most cases greater than *90%*. However, only one trial out of five, the one adopting the *Cosine trick*, was able to account for *74%* test accuracy, whereas in the other cases we did not reach a *test accuracy score* greater than *50%*. So we can conclude that the other models either overfit the *train set* and were unable to generalize to the *test set*, or that the imbalance of our dataset led to estimators able to correctly predict only one of the two classes; more specifically, the models seem to recognize *class 0* (*Deck Bridges*) better than *class 1* (*Through Bridges*). In other words, with an unbalanced dataset we would usually expect the most frequent, most numerous classes to be advantaged over the less numerous ones, but here, employing the Logistic Regression Classifier, we obtained models that classify the less numerous class more reliably and mispredict the more numerous one. More precisely:

- speaking about the __Linear kernel-PCA based Logistic Classifier__, we can notice that, with the default threshold of *.5*, the model reaches a poor test accuracy of just *41%* against a train accuracy of *97%*. The model indeed overfits the overall train set and tends to better predict the less numerous class, i.e. it has learned weight parameters suited to identifying class 0 samples.
Looking at the *recall and precision scores*, the model was really precise when predicting class 1 examples and was able to correctly predict labels for class 0, thus maximizing the recall of the negative class. But we cannot say it is also precise when predicting class 0, which means it wrongly infers the true label for the positive class. Lastly, the model obtained, respectively, high and low weighted-average precision and recall, so that the weighted *F1-score* was low as well. Speaking about the *ROC curve and AUC score*, we can understand that the model obtains an intermediate AUC score of *.64* relative to the random classifier, and that the relationship between *FPR and TPR* stays linear for most of the classification-threshold range.

- observing the __Polynomial kernel-PCA based Logistic Classifier__, we can notice that, with the default threshold of *.5*, the model reaches an even lower test accuracy of *32%* against a still high train accuracy of *91%*. So in this trial too the resulting model overfits the train set, and because of both lower accuracy scores we can state that the model wrongly predicts a larger number of samples from class 1. In fact, the model's precision for class 0 and recall for class 1 dropped relative to the previous trial, while the precision for class 1 and recall for class 0 remained the same: this model predicts samples from class 1 with high precision but with great uncertainty about class 0, even though most of the samples from that class were correctly labeled. Looking at the *ROC curve and AUC score*, we can observe that the best model found with this configuration performs only slightly better than the random classifier: its AUC score is just *.59*, and *TPR and FPR* grow linearly for most of the threshold range.

- reviewing the __Rbf kernel-PCA based Logistic Classifier__, we can state briefly that, as with the two previous models, the performance of this estimator is not satisfying; it behaves more or less like the first one reviewed. More precisely, the model obtained a slightly better test accuracy of *.47* and a weighted F1-score of *.5*, which allow for an AUC score of *.68*. However, this model also overfits the train set, with a train accuracy of *92%*, and is more able to correctly predict class 1 instances with high precision, while class 0 instances are predicted with more uncertainty, even though the recall for class 0 is high.

- the __Cosine kernel-PCA based Logistic Classifier__ turns out to be the best solution found by the grid-search algorithm for the Logistic Regression method, with the default classification threshold of *.5*. The rationale is that this trial retrieves a model that does not overfit the train set, since the test accuracy is *74%*, just nearly 20 percentage points below the train accuracy of *92%*. Moreover, we obtain high values for the *averaged precision, recall and F1-score metrics*, the latter being even greater than the test accuracy, reaching *77%*. However, this model, like the others, is less precise when predicting labels for class 0 than when inferring class 1 labels, mostly because the dataset is not balanced. So we remain more confident and precise when predicting class labels for class 1 examples.
Looking at the ROC curve and AUC score, we can say that this model's curve accounts for up to *77%* AUC and that the model works fine for many thresholds; in particular, we could lower the default threshold a little so as to improve the *TPR* while only slightly increasing the *FPR*.

- lastly, the __Sigmoid kernel-PCA based Logistic Classifier__, like the previous trials except the one with the Cosine trick, yields poor, lower performance due to overfitting. As for the other similarly low-performing models, it obtains, respectively, high and low weighted-average precision and recall scores, meaning that the few instances predicted as belonging to class 1 were predicted with high precision, while samples from class 0 were predicted with high uncertainty, even though most of the time the model correctly recognizes instances that indeed belong to class 0. The ROC curve and the AUC score of *62%* show that this run too leads to a model whose TPR and FPR grow linearly across most thresholds.

__Significance Analysis__: finally, looking at the graphics related to the test investigating the diagnostic power of the different fine-tuned models, and picking the best one for this test, we notice that, with the *significance level* $\alpha$ set to *0.05, i.e. a 5% chance of wrongly rejecting the Null-Hypothesis $H_{0}$*, no grid-search result from the training set was able to fall below this cut-off value of *5%*. Therefore, the different models are not significant enough to be adopted, with those hyper-parameters and weights, for describing the underlying model of the data.

#### Table Fine Tuned Hyper-Params (Logistic Regression)

```python
show_table_summary_grid_search(df_gs, df_auc_gs, df_pvalue)
```

Looking at the table displayed just above, which shows the values selected during grid search for the hyper-parameters in the different situations according to the fixed kernel trick for kernel-PCA, and referring to the first two columns of *Train and Test Accuracy*, we can recognize which trials lead to more overfit results (*Linear, Polynomial, Rbf, and Sigmoid tricks*) and which to less overfit solutions (*Cosine trick*). Speaking about the hyper-parameters, we can say what follows:

- speaking about the *hyper-param C*, the inverse of the regularization strength, where smaller values specify stronger regularization, we observe that, except in the Cosine kernel-trick case, all the adopted kernel-PCA tricks preferred a very low value of the *C parameter*, equal to *0.001*, which accounts for very strong regularization; but this choice did not lead to models with high generalization capability. The *Cosine-based kernel-PCA* model instead opted for the default value of this parameter.

- referring to the *class_weight parameter*, we know that it can be set to the balanced strategy, which uses the values of y to automatically adjust weights inversely proportional to the class frequencies in the input data, as *n_samples / (n_classes * np.bincount(y))*. We were surprised that all the methods with the worst performance chose the balanced strategy, while the best model was fine with the default strategy, which does not use a balanced mode.
- the *fit_intercept parameter* specifies whether a constant (a.k.a. bias or intercept) should be added to the decision function; it allows modeling a response different from zero even when the input sample is made mostly of zero components. We can see that in all cases the models obtained the best results with this option enabled, so the models are fitted taking into account an intercept weight as well, slightly increasing model complexity.

- the model's *penalty parameter* specifies the norm used in the penalization, among the following choices: *l1, l2, elasticnet*. In all the models the best choice was *l2 regularization*; this means that all the models opted for a kind of regularization that does not involve the *l1 norm* at all, so we avoid models whose weights may be driven to zero values, in other words sparse models.

- the model's *solver parameter* is the algorithm used in the optimization problem. It is curious to notice that almost all the models chose *sag*, except the Cosine-based kernel-PCA one, which adopted the *liblinear* solver. What we can understand is that, for all the overfitted models, the choice of the *sag* solver did not lead to significant results in terms of performance; we would correctly expect that for such a small dataset *liblinear* is the most suitable choice, and the best model found here is coherent with this suggestion from theory.

- lastly, looking at the *tol parameter*, which stands for the tolerance of the stopping criterion, we can clearly see that the first two models adopted a larger tolerance value while the last three preferred a lower one. So the first two methods, given the kernel tricks adopted for kernel-PCA, seem to work well with a tolerance that is not as small as for the last three; furthermore, the first two methods require less training time than the last three because of the larger tolerance set for convergence.

## Knn

| Learning Technique | Type of Learner | Type of Learning | Classification | Regression | Clustering |
| --- | --- | --- | --- | --- | --- |
| *K-Nearest Neighbor* | *Instance-based or Non-generalizing* | *Supervised and Unsupervised Learning* | *Supported* | *Supported* | *Supported* |

In *Pattern Recognition*, the *K-Nearest Neighbors Algorithm (k-NN)* is a __non-parametric method__ used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression:

- In *k-NN classification*, the output is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor.
- In *k-NN regression*, the output is the property value for the object. This value is the average of the values of the k nearest neighbors.

What follows is a brief explanation of k-NN: the training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.
In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point. A commonly used distance metric for continuous variables is the *Euclidean distance*.

As an example of k-NN classification, consider a test sample (a green dot, say) that should be classified either as a blue square or as a red triangle. If k = 3, it is assigned to the red triangles because there are 2 triangles and only 1 square among its three nearest neighbors. If k = 5, it is assigned to the blue squares (3 squares vs. 2 triangles among its five nearest neighbors).

__Choice of Nearest Neighbors Algorithm__: the optimal algorithm for a given dataset is a complicated choice, and depends on a number of factors, such as the number of samples *N* (i.e. n_samples) and the dimensionality *D* (i.e. n_features):

- *Brute force* query time grows as $O[D N]$.
- *Ball tree* query time grows as approximately $O[D \log(N)]$.
- *KD tree* query time changes with *D* in a way that is difficult to precisely characterise. For small *D* (less than 20 or so) the cost is approximately $O[D \log(N)]$, and the KD tree query can be very efficient. For larger *D*, the cost increases to nearly $O[D N]$, and the overhead due to the tree structure can lead to queries which are slower than brute force.

Therefore, we end up saying that, for small data sets (*N* less than 30 or so), $\log(N)$ is comparable to *N*, and brute-force algorithms can be more efficient than a tree-based approach. Both KDTree and BallTree address this by providing a leaf size parameter: this controls the number of samples at which a query switches to brute force, and allows both algorithms to approach the efficiency of a brute-force computation for small *N*.
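These trade-offs can be tried directly via the `algorithm` and `leaf_size` arguments of scikit-learn's KNeighborsClassifier; here is a minimal sketch on synthetic data (illustrative only, not the notebook's dataset):

```python
# Sketch: the same k-NN model fit with each neighbor-search algorithm.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X_demo, y_demo = make_classification(n_samples=500, n_features=8, random_state=0)

for algorithm in ("brute", "kd_tree", "ball_tree"):
    knn = KNeighborsClassifier(n_neighbors=5, algorithm=algorithm, leaf_size=30)
    knn.fit(X_demo, y_demo)
    # All three strategies return identical predictions; they differ only
    # in memory usage and query cost, as discussed above.
    print(algorithm, knn.score(X_demo, y_demo))
```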
### Cross-Validation Results

```python
# Knn
# -----------------------------------
pos_cv = pos_cv + 1; show_df_with_mean_at_bottom(dfs_list[pos_cv])
# dfs_list[pos_cv].head(dfs_list[pos_cv].shape[0])
```

```python
plot_name = plots_names[pos_cv]
show_learning_curve(dfs_list[pos_cv], n=len(cv_list[:N_CV]), figsize=(15, 7), plot_dest=plot_dest, grid_size=grid_size, plot_name=plot_name)
```

### Grid-Search Results

```python
pos_gs = pos_gs + 1; plot_dest = os.path.join("figures", "n_comp_9_analysis", "grid_search"); X = copy.deepcopy(rescaledX)
df_gs, df_auc_gs, df_pvalue = grid_search_all_by_n_components( estimators_list=estimators_list[pos_gs+1], \
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1], \
    X=X, y=y, n_components=9, random_state=0, show_plots=False, show_errors=False, verbose=1, plot_dest=plot_dest, debug_var=False)
df_9 = merge_dfs_by_common_columns(df_9, df_gs); df_9_auc = merge_dfs_by_common_columns(df_9_auc, df_auc_gs)
# df_9, df_9_auc = pd.concat([df_9, df_gs], axis=0), pd.concat([df_9_auc, df_auc_gs], axis=0)
```

Looking at the results obtained running the *Knn Classifier* against our dataset, split into training set and test set and preprocessed with a different kernel trick for the *kernel-PCA* unsupervised method in each trial, we can state, generally speaking, that this *statistical learning technique* leads to a sequence of results that are on average more than appreciable. The accuracy scores obtained at test time, compared against the corresponding train scores, allow us to understand that the resulting classifiers do not overfit the data; even when the training score was high, it was still lower than the accuracy scores obtained by the Logistic Regression models, which did overfit. Moreover, looking at the weighted values of *recall, precision, and F1-score*, we can claim that the Knn-based classifiers obtained good performance and, except for one trial with lower and worse results (the one with the *Sigmoid trick*), achieved remarkable results in all remaining cases. More precisely, we can say what follows:

- speaking about the __Linear kernel-PCA based Knn Classifier__, with the default classification threshold of *.5*, the model reaches an accuracy of *71%* at test time against *92%* at train time, while the AUC score reaches *76%*, with a ROC curve showing that for a first range of thresholds the *TPR* grows faster than the *FPR*, and only for larger thresholds is the trend reversed. Looking at the classification report, we can see that the model has high precision and recall for class 1, meaning the classifier is highly confident when predicting class 1 labels; it is less certain when predicting class 0 instances because the precision there is low, even though the model was able to predict correctly all the samples from class 0, leading to high recall.

- observing the __Polynomial kernel-PCA based Knn Estimator__, we can notice that, with the default threshold of *.5*, the model reaches an accuracy of *74%* at test time against *89%* at train time, while the AUC score reaches *77%*.
What we can immediately understand is that this second Knn model generalizes better, because it obtained a higher test accuracy which is also closer to the train accuracy; moreover, the model has slightly greater precision and recall for class 1, while the precision and recall for class 0 seem to be more or less the same.

- reviewing the __Rbf kernel-PCA based Knn Classifier__, we can notice that, with the default threshold of *.5*, the model reaches an accuracy of *79%* at test time against *95%* at train time, while the AUC score reaches *74%*. Even though this model, using the *Rbf kernel trick for kernel-PCA*, is the Knn classifier with the best accuracy, its AUC score is lower than in the first two analyzed trials, which we deemed acceptable. However, this method, configured with the hyper-params found by the grid-search algorithm, reveals a higher precision for class 0, meaning that the *Rbf kernel-PCA based Knn Classifier* is more precise than the previous models when classifying instances as belonging to class 0, while precision and recall for class 1 are more or less the same. This is the classifier we should select, since its higher precision values make it better at classifying new instances.

- looking at the __Cosine kernel-PCA based Knn Classifier__, we can notice that, with the default threshold of *.5*, the model reaches an accuracy of *59%* at test time against *92%* at train time, while the AUC score reaches *62%*. We can clearly see that this model is the worst solution among those trained with the *Knn classifier*: given the much lower test accuracy compared to the train accuracy, the classifier seems to have overfit the data. In particular, speaking about the precision and recall of class 1 from the classification report, the model seems mostly precise when predicting class 1 as the label for the instances to be classified, yet it misclassifies nearly half of the samples from class 1. Furthermore, the model does not obtain fine results for class 0 precision and recall either. This is the model we should avoid and not exploit.

- finally, referring to the __Sigmoid kernel-PCA based Knn Model__, we can notice that, with the default threshold of *.5*, the model reaches an accuracy of *65%* at test time against *92%* at train time, while the AUC score reaches *72%*. Its performance metrics, such as *precision, recall, and F1-score*, are more or less similar to those of the first Knn models, i.e. those exploiting the linear and polynomial tricks; however, this model misclassifies a larger number of class 1 instances, lowering both the precision for class 0 and the recall for class 1. Compared with the first three trials, this classifier is also not sufficiently fine to be accepted, so we can exclude it from our choice.
__Significance Analysis__: finally, looking at the graphics related to the test investigating the diagnostic power of the different fine-tuned models, and picking the best one for this test, we notice that, with the *significance level* $\alpha$ set to *0.05, i.e. a 5% chance of wrongly rejecting the Null-Hypothesis $H_{0}$*, we obtained the following results. Two classifiers out of five, the *Linear- and Poly-kernel-PCA based Knn Classifiers*, have a p-value that widely exceeds the significance level, so in those two cases rejecting the Null-Hypothesis would cause a *Type I error*. Looking at the *Cosine- and Sigmoid-kernel-PCA based Knn Classifiers*, we can draw the same conclusions. Only the *Rbf-kernel-PCA based Knn Classifier* obtained a p-value lower than the predefined cut-off of *5%*, so we select this method as the one that allows us to reject the Null-Hypothesis and adopt the classifier for describing the data behavior.

#### Table Fine Tuned Hyper-Params (Knn Classifier)

```python
show_table_summary_grid_search(df_gs, df_auc_gs, df_pvalue)
```

Looking at the table displayed just above, which shows the values selected during grid search for the hyper-parameters in the different situations according to the fixed kernel trick for kernel-PCA, and referring to the first two columns of *Train and Test Accuracy*, we can recognize which trials lead to more overfit results (*Cosine and Sigmoid tricks*) and which to less overfit solutions (*Linear, Polynomial, and Rbf tricks*). Speaking about the hyper-parameters, we can say what follows:

- looking at the *algorithm parameter*, which can be set to *brute, kd-tree, or ball-tree*, each representing a different strategy for implementing neighbor-based learning with pros and cons in terms of training time, memory usage, and query time, we can clearly see that the choice of kernel trick for kernel-PCA does not matter here, since all the trials selected the *ball-tree* strategy. It means that the grid-search algorithm, when forced to try all possible combinations, recorded this choice as the best hyper-param; it leads to building a relatively expensive data structure that integrates distance information to achieve better query performance. This should make us ask whether it is still a good choice or whether we should re-run the procedure excluding this algorithm; the answer depends on the expected number of queries to be solved. If it will be huge in the future, then the ball-tree algorithm was a good choice and a good parameter to include in the hyper-param grid; otherwise we could get rid of it.

- referring to the *leaf_size parameter*, we can notice that here too the choice of kernel trick for kernel-PCA does not affect the selected value. However, recalling that the leaf size hyper-param is used to control the tree-like structure of our solutions, we can understand that, since the value is pretty low, the resulting trees were allowed to grow toward maximum depth.
- speaking about the *distance parameter*, the best solution through the different trials was the *Euclidean distance*, which also corresponds to the default choice; furthermore, the choice of kernel trick, in the context of the other grid values, did not affect the choice of the *distance parameter*.

- the *n_neighbors parameter* is the one most affected and influenced by the choice of kernel trick for the kernel-PCA preprocessing method: three out of five trials (*Linear, Poly and Cosine tricks*) found 3 to be the best value, though only the first two obtained fine results with such a low number of neighbors. The best trial, the classifier characterized by the Rbf kernel trick, instead selected 7 as the best number of neighbors, meaning that this classifier requires a greater number of neighbors before estimating the class label, and also that the query time at inference will be longer.

- lastly, the *weights parameter* is involved when we want to assign a certain weight to the examples used during classification, where usually faraway points have less effect and nearby points grow in importance. The most frequent choice was the *distance strategy*, which assigns to each training sample involved in a classification a weight proportional to the inverse of its distance from the query point. Only the Sigmoid kernel-trick case adopted the default *uniform* strategy instead.

If we imagine building an *Ensemble Classifier* from the family of *Averaging Methods*, whose underlying principle is to build separate single classifiers and then average their predictions in a regression context, or adopt a majority-vote strategy in a classification context, we can claim that, among the proposed Knn classifiers, we could surely employ the classifiers found in the first three trials, because of their performance metrics and because Ensemble Methods such as the Bagging Classifier usually work fine exploiting an ensemble of independent, fine-tuned classifiers, differently from Boosting Methods, which are instead based on weak learners.

## Stochastic Gradient Descent

| Learning Technique | Type of Learner | Type of Learning | Classification | Regression | Clustering |
| --- | --- | --- | --- | --- | --- |
| *Stochastic Gradient Descent (SGD)* | *Linear Model* | *Supervised Learning* | *Supported* | *Supported* | *Not-Supported* |

Stochastic Gradient Descent (SGD) is a simple yet very efficient approach to fitting linear classifiers and regressors under convex loss functions such as (linear) Support Vector Machines and Logistic Regression. Even though SGD has been around in the machine learning community for a long time, it has received a considerable amount of attention only recently in the context of large-scale learning.

SGD has been successfully applied to large-scale and sparse machine learning problems often encountered in *text classification* and *natural language processing*. Given that the data is sparse, the classifiers in this module easily scale to problems with more than $10^5$ training examples and more than $10^5$ features.

__Mathematical formulation__: we describe here the mathematical details of the SGD procedure.
Given a set of training examples $(x_1, y_1), \ldots, (x_n, y_n)$ where $x_i \in \mathbf{R}^m$ and $y_i \in \mathbf{R}$ ($y_i \in \{-1, 1\}$ for classification), our goal is to learn a linear scoring function $f(x) = w^T x + b$ with model parameters $w \in \mathbf{R}^m$ and intercept $b \in \mathbf{R}$. In order to make predictions for binary classification, we simply look at the sign of $f(x)$. To find the model parameters, we minimize the regularized training error given by:

- $E(w,b) = \frac{1}{n}\sum_{i=1}^{n} L(y_i, f(x_i)) + \alpha R(w)$
- where **L** is a loss function that measures model (mis)fit and **R** is a regularization term (aka penalty) that penalizes model complexity; $\alpha > 0$ is a non-negative hyperparameter that controls the regularization strength.

Different choices for **L** entail different classifiers or regressors:

- Hinge (soft-margin): equivalent to Support Vector Classification: $L(y_i, f(x_i)) = \max(0, 1 - y_i f(x_i))$.
- Perceptron: $L(y_i, f(x_i)) = \max(0, - y_i f(x_i))$.
- Modified Huber: $L(y_i, f(x_i)) = \max(0, 1 - y_i f(x_i))^2$ if $y_i f(x_i) > -1$, otherwise $L(y_i, f(x_i)) = -4 y_i f(x_i)$.
- Log: equivalent to Logistic Regression: $L(y_i, f(x_i)) = \log(1 + \exp (-y_i f(x_i)))$.
- Least-Squares: Linear regression (Ridge or Lasso depending on **R**): $L(y_i, f(x_i)) = \frac{1}{2}(y_i - f(x_i))^2$.
- Huber: less sensitive to outliers than least-squares. It is equivalent to least-squares when $|y_i - f(x_i)| \leq \varepsilon$, and $L(y_i, f(x_i)) = \varepsilon |y_i - f(x_i)| - \frac{1}{2} \varepsilon^2$ otherwise.
- Epsilon-Insensitive: (soft-margin) equivalent to Support Vector Regression: $L(y_i, f(x_i)) = \max(0, |y_i - f(x_i)| - \varepsilon)$.

Finally, popular choices for the regularization term (the penalty parameter) include:

- L2 norm: $R(w) := \frac{1}{2} \sum_{j=1}^{m} w_j^2 = ||w||_2^2$
- L1 norm: $R(w) := \sum_{j=1}^{m} |w_j|$, which leads to sparse solutions.
- Elastic Net: $R(w) := \frac{\rho}{2} \sum_{j=1}^{m} w_j^2 + (1-\rho) \sum_{j=1}^{m} |w_j|$, a convex combination of L2 and L1, where $\rho$ is given by $1 - \text{l1\_ratio}$.

__Advantages and Drawbacks__: the advantages of Stochastic Gradient Descent are:

- Efficiency.
- Ease of implementation (lots of opportunities for code tuning).
- Complexity: the major advantage of SGD is its efficiency, which is basically linear in the number of training examples. If X is a matrix of size (n, p), training has a cost of $O(k n \bar p)$, where k is the number of iterations (epochs) and $\bar p$ is the average number of non-zero attributes per sample.

The disadvantages of Stochastic Gradient Descent include:

- SGD requires a number of hyperparameters such as the regularization parameter and the number of iterations.
- SGD is sensitive to feature scaling.
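A minimal sketch of how some of these loss functions map onto scikit-learn's SGDClassifier (synthetic data; the logistic loss is named "log_loss" in recent scikit-learn releases and "log" in older ones, so it is omitted here to keep the sketch version-agnostic):

```python
# Sketch: the same linear model trained under different convex losses.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

X_demo, y_demo = make_classification(n_samples=400, n_features=10, random_state=0)
X_demo = StandardScaler().fit_transform(X_demo)  # SGD is sensitive to feature scaling

for loss in ("hinge", "squared_hinge", "modified_huber", "perceptron"):
    clf = SGDClassifier(loss=loss, penalty="l2", alpha=1e-4,
                        max_iter=1000, random_state=0).fit(X_demo, y_demo)
    print(loss, clf.score(X_demo, y_demo))
```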
### Cross-Validation Results

```python
# SGDClassifier
# -----------------------------------
pos_cv = pos_cv + 1; show_df_with_mean_at_bottom(dfs_list[pos_cv])
# dfs_list[pos_cv].head(dfs_list[pos_cv].shape[0])
```

```python
plot_name = plots_names[pos_cv]
show_learning_curve(dfs_list[pos_cv], n=len(cv_list[:N_CV]), figsize=(15, 7), plot_dest=plot_dest, grid_size=grid_size, plot_name=plot_name)
```

### Grid-Search Results

```python
pos_gs = pos_gs + 1; plot_dest = os.path.join("figures", "n_comp_9_analysis", "grid_search"); X = copy.deepcopy(rescaledX)
df_gs, df_auc_gs, df_pvalue = grid_search_all_by_n_components( estimators_list=estimators_list[pos_gs+1], \
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1], \
    X=X, y=y, n_components=9, random_state=0, show_plots=False, show_errors=False, verbose=1, plot_dest=plot_dest, debug_var=False)
df_9 = merge_dfs_by_common_columns(df_9, df_gs); df_9_auc = merge_dfs_by_common_columns(df_9_auc, df_auc_gs)
# df_9, df_9_auc = pd.concat([df_9, df_gs], axis=0), pd.concat([df_9_auc, df_auc_gs], axis=0)
```

Looking at the results obtained running the *Sgd Classifier* against our dataset, split into training set and test set and preprocessed with a different kernel trick for the *kernel-PCA* unsupervised method in each trial, we can state, generally speaking, that the weighted values of *recall, precision, and F1-score* indicate good performance; except for the trials with lower and worse results discussed below, the remaining cases achieved remarkable results. More precisely, we can say what follows:

- speaking about the __Linear kernel-PCA based Sgd Classifier__, with the default classification threshold of *.5*, the model reaches an accuracy of *65%* at test time against *92%* at train time, while the AUC score reaches *79%*, with a ROC curve showing that the model at first increases its *TPR* without affecting the *FPR*; at a given point, however, the curve's trend turns so that the two scores grow linearly, with a slope lower than that of the random classifier, so that the FPR increases faster. The model is very precise when predicting class 1 instances, but it has a recall of just *54%*, so it misclassifies roughly half of the samples from class 1; this in turn drags down the precision for class 0, which is quite low, just *32%*, while the class 0 recall is very high. Since the test accuracy loses nearly 30 percentage points with respect to training, we can assume the model somewhat overfits the train data; we are not really encouraged to adopt it unless we decide to include it in an ensemble classifier, more of a boosting-like than a bagging-like one.

- observing the __Polynomial kernel-PCA based Sgd Estimator__, we can notice that, with the default threshold of *.5*, the model reaches an accuracy of *76%* at test time against *92%* at train time, while the AUC score reaches *73%*. It represents the best result obtained running the SGD-based training algorithm on our input dataset; in particular, it obtained high precision and high recall for class 1. In other words, this model is able to recognize and correctly classify most of the data examples whose true label is indeed class 1. However, even though the model has a high recall for class 0, since the dataset is unbalanced we cannot say the same for the precision of class 0.
So the model is somewhat uncertain when predicting class 0 as the label for new observations.

- reviewing the __Rbf kernel-PCA based Sgd Classifier__, we can notice that, with the default threshold of *.5*, the model reaches an accuracy of *82%* at test time against *92%* at train time, while the AUC score reaches only *57%*. This trial, along with the *Cosine kernel-PCA based Sgd Classifier*, is one of the two attempts that lead to the worst results, since the model overfits the data used at training time and also gained weights that tend to predict everything as a class 1 instance. The resulting scores tell us that the model is highly precise and has high recall for class 1, while conversely it has very low precision and recall for class 0. Since such a model performs just a little better than the random classifier, it could at most be adopted, together with other similar models, for building a voting classifier following a boosting-like policy.

- looking at the __Cosine kernel-PCA based Sgd Classifier__, we can notice that, with the default threshold of *.5*, the model reaches an accuracy of *32%* at test time against *95%* at train time, while the AUC score reaches just *59%*. Here the fine-tuned model obtained from the grid-search approach classifies with high precision only a few data examples from class 1, and even though we correctly classify all the instances from class 0, we also wrongly predict the class label for most of the instances whose true label is class 1. This means the model is highly uncertain when predicting class 0 as the output target label. Moreover, the model's ROC curve performs only slightly better than the random classifier, and we end up saying that this model gained weights and hyper-params that tend to predict unknown instances as belonging to class 0 most of the time. Nor can we say that switching the class labels would give a better result, since the ROC curve trend is just a little better than the random classifier.

- finally, referring to the __Sigmoid kernel-PCA based Sgd Model__, we can notice that, with the default threshold of *.5*, the model reaches an accuracy of *44%* at test time against *92%* at train time, while the AUC score reaches *66%*. This model behaves more or less like the model obtained from the first trial performed for the Sgd-based classifier, and like the first model it is somewhat worse than the best model found here for the Sgd technique, that is, the *Polynomial kernel-PCA based Sgd Classifier*.

__Significance Analysis__: finally, looking at the graphics related to the test investigating the diagnostic power of the different models we have fine-tuned for the *SGD Classifier*, and picking the best one for this test, we notice that, with the *significance level* $\alpha$ set to *0.05, i.e. a 5% chance of wrongly rejecting the Null-Hypothesis $H_{0}$*, we obtained the following results.
Adopting the SGD statistical learning technique for classification, fine-tuned as above with hyper-params that also depend on the kind of *kernel trick adopted for the kernel-PCA unsupervised technique*, we can claim that only two out of five trials lead to a *p-value* worse than the *selected significance level of 5%*, namely the *Linear- and Cosine-kernel-PCA based Sgd Classifiers*, so rejecting the *Null-Hypothesis* in those two cases would result in a *Type I error*. The remaining three cases, the *Poly-, Rbf- and Sigmoid-kernel-PCA based Sgd Classifiers*, obtained p-values in the range $[.9, 3]$ *in percentage points*, so we are satisfied with the results obtained in terms of significance scores; however, only the *Poly- and Rbf-kernel-PCA based Sgd Classifiers* really matter, or are worthwhile models, since they do not overfit too much and do not perform as badly at test time as the *Sigmoid-kernel-PCA based Sgd Classifier*.

#### Table Fine Tuned Hyper-Params (SGD Classifier)

```python
show_table_summary_grid_search(df_gs, df_auc_gs, df_pvalue)
```

Looking at the table displayed just above, which shows the values selected during grid search for the hyper-parameters in the different situations according to the fixed kernel trick for kernel-PCA, and referring to the first two columns of *Train and Test Accuracy*, we can recognize which trials lead to more overfit results (*Rbf trick*) and which to less overfit solutions (*Linear, Polynomial, Cosine, and Sigmoid tricks*). Speaking about the hyper-parameters, we can say what follows:

- looking at the __alpha hyper-parameter__, the constant that multiplies the regularization term (the higher the value, the stronger the regularization), which is also used to compute the learning rate when *learning_rate* is set to *'optimal'*, as it was here, we can notice that the final choice was more or less the same through the different trials, meaning that the adopted kernel trick for kernel-PCA did not appreciably affect this hyper-param: three cases out of five set it to *0.1*, and the remaining cases adopted *0.0001* and *0.001* for the Cosine- and Sigmoid-based *kernel-PCA* respectively. This also reminds us that, while training the classifiers, it was not necessary to force a strong regularization contribution to curb overfitting and the learning process, even if we know that the *Rbf kernel-PCA based Sgd Classifier* mostly overfits the train data and gained weights that encourage predicting all samples as belonging to class 1.

- reviewing the __class_weight hyper-param__, this parameter represents the weights associated with the classes. If not given, all classes are supposed to have weight one. The *"balanced" mode* uses the values of y to automatically adjust weights inversely proportional to the class frequencies in the input data, as __n_samples / (n_classes * np.bincount(y))__. In particular, we can notice that three out of five fine-tuned models selected *balanced weights*, namely the *Linear-, Sigmoid- and Cosine-kernel-PCA based Sgd Classifiers*, while the remaining models, the *Polynomial- and Rbf-kernel-PCA based Sgd Classifiers*, did better with uniform weights. So the choice of *kernel trick* affected the subsequent selection, at fine-tuning time, of the *class_weight hyper-param*.
What we can further notice is that the *Polynomial- and Rbf-kernel-PCA based Sgd Classifiers* adopted more or less the same values for their hyper-params, for instance for the penalty hyper-param; however, the Polynomial model got worse performance in terms of accuracy, but considering the other metrics simultaneously we can understand that the Poly model overfits less than the Rbf one and therefore performs better in general.

- speaking of the __learning_rate hyper-param__, since we forced this to be the unique available choice, it is reported just for completeness.

- the discussion about the __loss parameter__ is interesting: we know that the possible options are *'hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'*, where the *'log' loss* gives logistic regression, a probabilistic classifier; *'modified_huber'* is another smooth loss that brings tolerance to outliers as well as probability estimates; *'squared_hinge'* is like hinge but quadratically penalized; and *'perceptron'* is the linear loss used by the perceptron algorithm. Here we can clearly see that the choice of a particular kernel trick does not affect the subsequent choice of the loss function to be optimized: uniformly, all the models tended to prefer the *modified_huber* loss, allowing them to fit the data while accounting for the fact that this loss is less sensitive to outliers; recall, in fact, that the Huber loss function is used in robust statistics, M-estimation and additive modelling, and this loss is so called because it derives from the plain version normally exploited for regression problems.

- also referring to the __max iteration parameter__, we can easily say that the models evenly adopted a rather small number of iterations before stopping the learning procedure. This might also be because we work with a small dataset: a small set of data points tends to cause quick overfitting, and this might be the reason why, in order to avoid too much overfitting, the grid-search fine-tuning tended to prefer a tiny number of training iterations.

- __penalty parameter__: we recall here that it represents the regularization term to be used. It defaults to *'l2'*, the standard regularizer for linear SVM models, while *'l1'* and *'elasticnet'* might bring *sparsity* to the model (feature selection) not achievable with *'l2'*. For this hyper-param too, the choice of the *kernel trick* used for *kernel-PCA* affected the subsequent selection of the penalty contribution regularizing the learning task, as it did for the *class_weight hyper-param*. Here three out of five models, the *Linear-, Sigmoid- and Cosine-kernel-PCA based Sgd Classifiers*, adopted the *l1 norm* as the regularization term, so their weights tend to be more sparse, while the remaining *Polynomial- and Rbf-kernel-PCA based Sgd Classifiers* adopted the *l2 norm*. For the trials we have done, the models with the *l1 regularization term* seem to get worse performance; more precisely, the *Sigmoid- and Cosine-kernel-PCA based Sgd Classifiers* were even worse than the random classifier, while the *Linear-kernel-PCA based Sgd Classifier* was only slightly worse than the Polynomial one, so it does not overfit too much, and we can say it could be exploited in an ensemble method that follows a boosting policy.
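For reference, a hedged sketch of what a grid over these hyper-parameters could look like in plain GridSearchCV form; the notebook's actual grids live in `param_grids` and may differ:

```python
# Sketch: a possible SGDClassifier grid (illustrative only; not the grids
# actually used by grid_search_all_by_n_components in this notebook).
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

X_demo, y_demo = make_classification(n_samples=300, n_features=9, random_state=0)

param_grid = {
    "alpha": [1e-4, 1e-3, 1e-2, 1e-1],
    "penalty": ["l2", "l1", "elasticnet"],
    "loss": ["hinge", "modified_huber"],
    "class_weight": [None, "balanced"],
    "max_iter": [1000],
}
gs = GridSearchCV(SGDClassifier(random_state=0), param_grid, cv=3).fit(X_demo, y_demo)
print(gs.best_params_, gs.best_score_)
```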
If we imagine building an *Ensemble Classifier* from the family of *Averaging Methods*, whose underlying principle is to build separate single classifiers and then average their predictions in a regression context, or adopt a majority-vote strategy in a classification context, we can claim that, among the proposed *Sgd classifiers*, we could employ the classifiers found in all the trials except the *Rbf, Cosine and Sigmoid kernel-PCA based Sgd Classifiers*, since the first of these heavily overfits the data used at train time (most of the time it correctly predicted only samples from class 1 and misclassified instances from class 0), while the others assumed the opposite behavior. This choice is also justified by their performance metrics, and by the fact that Ensemble Methods such as the Bagging Classifier usually work fine exploiting an ensemble of independent, fine-tuned classifiers, differently from Boosting Methods, which are instead based on weak learners.

## Support Vector Machines Classifier

| Learning Technique | Type of Learner | Type of Learning | Classification | Regression | Clustering | Outlier Detection |
| --- | --- | --- | --- | --- | --- | --- |
| *Support Vector Machines (SVMs)* | *Discriminative Model* | *Supervised Learning* | *Supported* | *Supported* | *Not-Supported* | *Supported* |

Here, in this section, I'm going to exploit a machine learning technique known as Support Vector Machines in order to detect and select the best model I can produce through the usage of the data points contained within the dataset at hand. So let us discuss these kinds of classifiers a bit.

In machine learning, **support-vector machines**, shortly SVMs, are *supervised learning models* with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a *non-probabilistic binary linear classifier*. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on the side of the gap on which they fall.

More formally, a support-vector machine constructs a hyperplane or set of hyperplanes in a high-dimensional space, which can be used for classification or regression. Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data point of any class, the so-called *functional margin*, since in general the larger the margin, the lower the *generalization error* of the classifier.

#### Mathematical formulation of SVMs

Here, I'm going to describe the main mathematical properties and characteristics used to derive the SVM algorithm, as studied and proven by researchers. I start by recalling that a support-vector machine constructs a hyperplane or set of hyperplanes in a high- or infinite-dimensional space, which can be used for classification, regression or other tasks.
Intuitively, a good separation is achieved by the hyperplane that has the largest distance to the nearest training data points of any class (the so-called functional margin), since in general the larger the margin, the lower the generalization error of the classifier. For the derivation, suppose we are given a training dataset of *n* points of the form

\begin{align}
(\vec{x}_1, y_1), \ldots, (\vec{x}_n, y_n)
\end{align}

where the $y_{i}$ are either 1 or −1, each indicating the class to which the point $\vec{x}_{i}$ belongs, and each $\vec{x}_{i}$ is a *p-dimensional real vector*. We want to find the "maximum-margin hyperplane" that divides the group of points $\vec{x}_{i}$ for which $y_{i} = 1$ from the group of points for which $y_{i} = -1$, defined so that the distance between the hyperplane and the nearest point $\vec{x}_{i}$ from either group is maximized.

Any hyperplane can be written as the set of points $\vec{x}$ satisfying $\vec{w}\cdot{\vec{x}} - b = 0$, where $\vec{w}$ is the (not necessarily normalized) normal vector to the hyperplane. The parameter $\tfrac {b}{\|\vec{w}\|}$ determines the offset of the hyperplane from the origin along the normal vector $\vec{w}$.

Arrived so far, I have to distinguish between two distinct cases, which both depend on the nature of the data points that make up a given dataset. These two cases are called *Hard-Margin* and *Soft-Margin*, respectively.

The first case, the ***Hard-Margin*** one, happens only for really optimistic datasets. In fact, it is the case when the training data is linearly separable; hence, we can select two parallel hyperplanes that separate the two classes of data, so that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. With a normalized or standardized dataset, these hyperplanes can be described by the equations:

- $\vec{w}\cdot{\vec{x}} - b = 1$, that is, anything on or above this boundary is of one class, with label 1;
- $\vec{w}\cdot{\vec{x}} - b = -1$, that is, anything on or below this boundary is of the other class, with label -1.

We can notice that the distance between these two hyperplanes is ${\tfrac {2}{\|{\vec {w}}\|}}$, so to maximize the distance between the planes we want to minimize $\|{\vec {w}}\|$; the distance is computed using the point-to-plane distance equation. We also have to prevent data points from falling into the margin, so we add the following constraint: for each $i$,

- either ${\vec {w}}\cdot {\vec {x}}_{i}-b\geq 1$, if ${y_{i}=1}$;
- or ${\vec {w}}\cdot {\vec {x}}_{i}-b\leq -1$, if ${y_{i}=-1}$.

These constraints state that each data point must lie on the correct side of the margin. Collecting all the previous observations, we obtain the following optimization problem:

- minimize $\|\vec{w}\|$,
- subject to $y_{i}(\vec{w}\cdot{\vec{x}}_{i} - b) \geq 1$ for all $1 \leq i \leq n$.

The classifier we obtain is made from the ${\vec {w}}$ and ${b}$ that solve this problem, and the max-margin hyperplane is completely determined by those ${\vec {x}}_{i}$ that lie nearest to it. These $\vec{x}_{i}$ are called *support vectors*.
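In practice, hard-margin behavior can be approximated with a soft-margin implementation by making the penalty on margin violations very large; a minimal sketch with scikit-learn's SVC on separable synthetic blobs (in the soft-margin formulation introduced next, the C parameter plays roughly the role of $\tfrac{1}{2n\lambda}$):

```python
# Sketch: a very large C approximates the hard-margin SVM on separable data.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X_demo, y_demo = make_blobs(n_samples=100, centers=2, cluster_std=0.6, random_state=0)

hard_ish = SVC(kernel="linear", C=1e6).fit(X_demo, y_demo)  # almost no violations tolerated
soft = SVC(kernel="linear", C=0.1).fit(X_demo, y_demo)      # wider margin, violations tolerated
print("support vectors per class (C=1e6):", hard_ish.n_support_)
print("support vectors per class (C=0.1):", soft.n_support_)
```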
The other case, the ***Soft-Margin*** one, conversely happens when the training data is not linearly separable. To deal with such a situation, as well as to extend the SVM to cases in which the data are not linearly separable, we introduce the hinge loss function, that is,

$\max\left(0,\, 1 - y_{i}(\vec{w}\cdot{\vec{x}}_{i} - b)\right).$

Once we have introduced the new loss function, we move to the new optimization problem, which we aim at minimizing:

\begin{align}
{\displaystyle \left[{\frac {1}{n}}\sum _{i=1}^{n}\max \left(0,1-y_{i}({\vec {w}}\cdot {\vec {x}}_{i}-b)\right)\right]+\lambda \lVert {\vec {w}}\rVert ^{2},}
\end{align}

where the parameter $\lambda$ determines the trade-off between increasing the margin size and ensuring that the ${\vec {x}}_{i}$ lie on the correct side of the margin. Thus, for sufficiently small values of $\lambda$, the second term in the loss function becomes negligible; hence it behaves similarly to the hard-margin SVM if the input data are linearly classifiable, but it will still learn whether a classification rule is viable or not.

What we notice from the equations written just above is that we are dealing with a quadratic programming problem, whose solution is detailed below. We start by defining a *Primal Problem* as follows:

- For each $i \in \{1,\,\ldots ,\,n\}$ we introduce a variable ${\displaystyle \zeta _{i}=\max \left(0,1-y_{i}(w\cdot x_{i}-b)\right)}$. Note that ${\displaystyle \zeta _{i}}$ is the smallest nonnegative number satisfying ${\displaystyle y_{i}(w\cdot x_{i}-b)\geq 1-\zeta _{i}}$;
- we can then rewrite the optimization problem as follows: ${\displaystyle {\text{minimize }}{\frac {1}{n}}\sum _{i=1}^{n}\zeta _{i}+\lambda \|w\|^{2}}$, ${\displaystyle {\text{subject to }}y_{i}(w\cdot x_{i}-b)\geq 1-\zeta _{i}\,{\text{ and }}\,\zeta _{i}\geq 0,\,{\text{for all }}i.}$

However, by solving for the *Lagrangian dual* of the above problem, one obtains the simplified problem:

\begin{align}
{\displaystyle {\text{maximize}}\,\,f(c_{1}\ldots c_{n})=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(x_{i}\cdot x_{j})y_{j}c_{j},} \\
{\displaystyle {\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.}
\end{align}

- moreover, the variables $c_i$ are defined such that ${\displaystyle {\vec {w}}=\sum _{i=1}^{n}c_{i}y_{i}{\vec {x}}_{i}}$, where ${\displaystyle c_{i}=0}$ exactly when ${\displaystyle {\vec {x}}_{i}}$ lies on the correct side of the margin, and ${\displaystyle 0<c_{i}<(2n\lambda )^{-1}}$ when ${\vec {x}}_{i}$ lies on the margin's boundary. It follows that ${\displaystyle {\vec {w}}}$ can be written as a linear combination of the support vectors.

The offset ${\displaystyle b}$ can be recovered by finding an ${\vec {x}}_{i}$ on the margin's boundary and solving ${\displaystyle y_{i}({\vec {w}}\cdot {\vec {x}}_{i}-b)=1\iff b={\vec {w}}\cdot {\vec {x}}_{i}-y_{i}.}$

This is called the *dual problem*. Since the dual maximization problem is a quadratic function of the ${\displaystyle c_{i}}$ subject to linear constraints, it is efficiently solvable by quadratic programming algorithms. Lastly, I will discuss what, in the context of SVM classifiers, is called the ***Kernel Trick***.
Roughly speaking, a possible way of dealing with datasets that are not linearly separable, but that can become linearly separable within a higher-dimensional feature space, is to remap the original data points into such a higher-order feature space by means of some remapping function, and then solve the SVM optimization problem to find a linear classifier in the new, larger feature space. We then project the solution back to the original feature space, bearing in mind that in the old feature space the decision boundaries found will be non-linear, but still allow us to classify new examples. Usually, especially when dealing with large datasets or with datasets with a large set of features, this approach becomes computationally intensive, and unfeasible if we run out of memory. In other words, the procedure is constrained in time and space, and might become time-consuming or even unfeasible because of the large amount of memory required.

A reasonable alternative is represented by the usage of kernel functions, that is, functions which satisfy ${\displaystyle k({\vec {x}}_{i},{\vec {x}}_{j})=\varphi ({\vec {x}}_{i})\cdot \varphi ({\vec {x}}_{j})}$, where we recall that the classification vector ${\vec {w}}$ in the transformed space satisfies ${\displaystyle {\vec {w}}=\sum _{i=1}^{n}c_{i}y_{i}\varphi ({\vec {x}}_{i}),}$ where the ${\displaystyle c_{i}}$ are obtained by solving the optimization problem:

${\displaystyle {\begin{aligned}{\text{maximize}}\,\,f(c_{1}\ldots c_{n})&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}(\varphi ({\vec {x}}_{i})\cdot \varphi ({\vec {x}}_{j}))y_{j}c_{j}\\&=\sum _{i=1}^{n}c_{i}-{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}y_{i}c_{i}k({\vec {x}}_{i},{\vec {x}}_{j})y_{j}c_{j}\\\end{aligned}}}$

${\displaystyle {\text{subject to }}\sum _{i=1}^{n}c_{i}y_{i}=0,\,{\text{and }}0\leq c_{i}\leq {\frac {1}{2n\lambda }}\;{\text{for all }}i.}$

The coefficients ${\displaystyle c_{i}}$ can be solved for using quadratic programming, and we can find some index ${\displaystyle i}$ such that ${\displaystyle 0<c_{i}<(2n\lambda )^{-1}}$, so that ${\displaystyle \varphi ({\vec {x}}_{i})}$ lies on the boundary of the margin in the transformed space. We then solve, substituting the dot product between remapped data points with the kernel function applied to the same arguments:

${\displaystyle {\begin{aligned}b={\vec {w}}\cdot \varphi ({\vec {x}}_{i})-y_{i}&=\left[\sum _{j=1}^{n}c_{j}y_{j}\varphi ({\vec {x}}_{j})\cdot \varphi ({\vec {x}}_{i})\right]-y_{i}\\&=\left[\sum _{j=1}^{n}c_{j}y_{j}k({\vec {x}}_{j},{\vec {x}}_{i})\right]-y_{i}.\end{aligned}}}$

Finally, ${\displaystyle {\vec {z}}\mapsto \operatorname {sgn}({\vec {w}}\cdot \varphi ({\vec {z}})-b)=\operatorname {sgn} \left(\left[\sum _{i=1}^{n}c_{i}y_{i}k({\vec {x}}_{i},{\vec {z}})\right]-b\right).}$

What follows is a brief list of the most commonly used kernel functions. They should be fine-tuned, by means of either a grid-search or a random-search approach, identifying the best set of values to substitute into the chosen kernel function, where the choice depends on the dataset at hand:

- Polynomial (homogeneous): ${\displaystyle k({\vec {x_{i}}},{\vec {x_{j}}})=({\vec {x_{i}}}\cdot {\vec {x_{j}}})^{d}}$.
- Polynomial (inhomogeneous): ${\displaystyle k({\vec {x_{i}}},{\vec {x_{j}}})=({\vec {x_{i}}}\cdot {\vec {x_{j}}}+1)^{d}}$.
- Gaussian radial basis function: ${\displaystyle k({\vec {x_{i}}},{\vec {x_{j}}})=\exp \left(-\gamma \lVert {\vec {x_{i}}}-{\vec {x_{j}}}\rVert ^{2}\right)}$ with ${\displaystyle \gamma =1/(2\sigma ^{2})}$.
- Hyperbolic tangent: ${\displaystyle k({\vec {x_{i}}},{\vec {x_{j}}})=\tanh(\kappa {\vec {x_{i}}}\cdot {\vec {x_{j}}}+c)}$ for some (not every) ${\displaystyle \kappa >0}$ and ${\displaystyle c<0}$.

What follows is the application of the SVM classifier for learning a model that best fits the training data, in order to classify new instances in a reliable way, selecting the most promising trained model.

### Cross-Validation Result

```python
# SVMs Classifier
# -----------------------------------
pos_cv = pos_cv + 1; show_df_with_mean_at_bottom(dfs_list[pos_cv])
# dfs_list[pos_cv].head(dfs_list[pos_cv].shape[0])
```

```python
plot_name = plots_names[pos_cv]
show_learning_curve(dfs_list[pos_cv], n=len(cv_list[:N_CV]), figsize=(15, 7),
                    plot_dest=plot_dest, grid_size=grid_size, plot_name=plot_name)
```

### Grid-Search Result

```python
pos_gs = pos_gs + 1; plot_dest = os.path.join("figures", "n_comp_9_analysis", "grid_search"); X = copy.deepcopy(rescaledX)
df_gs, df_auc_gs, df_pvalue = grid_search_all_by_n_components(
    estimators_list=estimators_list[pos_gs+1],
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1],
    X=X, y=y, n_components=9, random_state=0, show_plots=False, show_errors=False,
    verbose=1, plot_dest=plot_dest, debug_var=False)
df_9 = merge_dfs_by_common_columns(df_9, df_gs); df_9_auc = merge_dfs_by_common_columns(df_9_auc, df_auc_gs)
# df_9, df_9_auc = pd.concat([df_9, df_gs], axis=0), pd.concat([df_9_auc, df_auc_gs], axis=0)
```

Generally speaking, looking at the results coming from the trials we carried out with the grid-search algorithm for fine-tuning the SVM classifiers fitted to the dataset at hand, we can claim that only three models out of five give us significant performance; one model, even though it corresponds to the one with the best AUC score, does not satisfy the main goal we have in mind, that is, correctly classifying the majority of instances from both classes; lastly, the remaining classifier, which corresponds to the poly-trick kernel-Pca based classifier, is the one with the worst performance in terms of measurement metrics. To be more precise, we can say what follows:
- looking at the __Linear-trick kernel-Pca based Svm Model__, we can state that it belongs to the set of three classifiers we consider to give satisfying results, since it reaches an AUC score equal to .74; however, discussing the ROC curve, we can observe that it starts with a higher slope and then ends with a lower slope in describing the relation between the Sensitivity and 1-Specificity scores. Moreover, the model obtains high values for both precision and recall for class 1, but the recall of class 0 is higher than the precision of the same class, so we get into trouble when classifying new instances as class 0, since we are less sure about the result.
- focusing on the __Poly-trick kernel-Pca based Svm Classifier__, this trial corresponds to the worst classifier gathered from the grid-search fine-tuning of this model, because the adopted setting leads to an AUC score lower than that of the random classifier; this model should therefore be discarded, even if it seems to achieve good performance when looking at the calculated classification report.
- speaking of the __Rbf-trick kernel-Pca based Svm Classifier__, here we can state that the final model leads to a very well performing ROC curve; in fact the AUC score is .73, one of the highest found, and the interval of thresholds before the TPR and FPR scores begin to increase linearly is very wide. However, the classifier with a default threshold of .5 does not seem to perform adequately, since it correctly classifies most of the samples from class 1 but wrongly predicts the class label for samples belonging to class 0, meaning it is a model with high recall but low precision for class 1 and, generally speaking, very poor performance on class 0 related metrics.
- also the __Cosine-trick kernel-Pca based Svm Classifier__ shows more or less the same issues as the previous classifier; in other words, the model performs well for class 1, with high recall and precision, but performs really badly in terms of precision and recall for class 0. Looking at the ROC curve and AUC score, the shape of the ROC curve shows a model which in the first part has a higher slope, describing how the TPR and FPR increase with the different thresholds, while in the second half the steepness of the slope reduces; however, the AUC score is not so good, just .67.
- Finally, discussing the __Sigmoid-trick kernel-Pca based Svm Classifier__, we can observe that with a default threshold of 0.5 the model is able to correctly classify all the instances belonging to class 0, leading to high recall for class 0 and high precision for class 1; however, the precision of class 0 and the recall of class 1 are conversely very low. This means that the model is very sure when classifying instances as belonging to class 1, but is almost unsure when facing an instance that actually belongs to class 0. Focusing on the ROC curve, the model has a tiny range of thresholds where it increases the Sensitivity without changing the 1-Specificity, but, at a given point, the remaining part of the ROC curve seems to follow a linear trend in the relation between the TPR and FPR fractions, leading the model to record an AUC score of .7.

__Significance Analysis__: finally, looking at the different graphics related to the test which aims at investigating the diagnostic power of the different models we have fine-tuned for the *Svm Classifier*, and picking the best one for such a test, we can note that, because of the *significance level* $\alpha$ set equal to *0.05, that is, a 5% chance of rejecting the Null-Hypothesis $H_{0}$*, we have obtained the following results.

#### Table Fine Tuned Hyper-Params (SVMs Classifier)

```python
show_table_summary_grid_search(df_gs, df_auc_gs, df_pvalue)
```

Looking at the table displayed just above, which refers to the hyper-parameter values identified by the grid-search procedure that attempts to fine-tune the Svm Classifier combined with the different possible kernel tricks for the kernel-Pca unsupervised learning technique, we can clearly see that the accuracy gained by the best models retrieved by the grid-search procedures corresponds to values near 90 percent. However, the choice of kernel trick importantly and widely affected the identification of proper values for hyper-parameters such as *C, gamma, and the kernel trick related to the Svm Classifier itself*.
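As a rough sketch of the kind of search summarized in the table (kernel-Pca followed by an SVC whose *C*, *gamma* and kernel are tuned by grid search), consider the following; the synthetic data and the grid values are placeholders, not this project's actual helpers or parameter grids:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Placeholder data standing in for the rescaled bridges features
X_toy, y_toy = make_classification(n_samples=120, n_features=11, random_state=0)

pipe = Pipeline([("kpca", KernelPCA(n_components=9, kernel="rbf")),
                 ("svc", SVC())])
param_grid = {"svc__C": [1e-4, 1e-2, 1, 10],
              "svc__gamma": ["scale", 1e-2, 1e-1, 1.0],
              "svc__kernel": ["linear", "poly", "rbf", "sigmoid"]}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy").fit(X_toy, y_toy)
print(search.best_params_, round(search.best_score_, 3))  # best C/gamma/kernel found
```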
More precisely, we can say that for the *C hyper-param* we go across a wide range of values, from 0.0001 up to 10, and, except for the Rbf and Cosine tricks adopted for kernel-Pca, there are no other models that share the same value of the hyper-param C. What we can further note is that, as the kernel trick adopted to perform kernel-Pca becomes fancier and more complicated from the viewpoint of its mathematical formulation, the value assumed by the *C hyper-param* seems to become greater and greater, where we recall that the *parameter C* controls the trade-off between errors of the SVM on training data and margin maximization. So, as we increase the complexity of the kernel trick for kernel-Pca, we observe that the resulting margins become harder and harder. Instead, if we think intuitively of the *gamma parameter* as the parameter which defines how far the influence of a single training example reaches, with low values meaning *'far'* and high values meaning *'close'*, then we can see a trend for which, as the margin becomes harder, the influence of the single example becomes closer. Finally, looking at the kernel tricks chosen by the different models, we observe that in most of the cases the polynomial kernel was the best choice; the sigmoid trick was adopted only when the kernel-Pca technique was performed exploiting the same kind of trick, and the linear trick was selected as the best choice in combination with a kernel-Pca that instead adopted an Rbf kernel.

#### Advantages and Drawbacks of SVMs

Finally, I conclude this section by providing a description of the major advantages and drawbacks of such a machine learning technique, as noticed by researchers who studied SVM properties. The advantages of support vector machines are:
- Effective in high dimensional spaces.
- Still effective in cases where the number of dimensions is greater than the number of samples.
- Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
- Versatile: different kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.

On the other hand, the disadvantages of support vector machines include:
- If the number of features is much greater than the number of samples, avoiding over-fitting when choosing the kernel function and the regularization term is crucial.
- SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation (see Scores and probabilities in the scikit-learn documentation).

## Decision Tree Models

| Learning Technique | Type of Learner | Type of Learning | Classification | Regression | Clustering | Outlier Detection |
| --- | --- | --- | --- | --- | --- | --- |
| *Decision Trees* | *Non-parametric Model* | *Supervised Learning* | *Supported* | *Supported* | *Not-Supported* | *Not-Supported* |

Decision Trees, DTs for short, are a *non-parametric supervised learning method* used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features. Their mathematical formulation is generally provided as follows: given training vectors $x_{i} \in R^{n}$, $i=1,\ldots,l$ and a label vector $y \in R^{l}$, a decision tree recursively partitions the space such that samples with the same labels are grouped together. Let the data at node $m$ be represented by $Q$.
For each candidate split $\theta = (j, t_{m})$ consisting of a feature $j$ and threshold $t_{m}$, partition the data into $Q_{left}(\theta)$ and $Q_{right}(\theta)$ subsets as:

\begin{align}\begin{aligned}Q_{left}(\theta) = \{(x, y) \mid x_j \leq t_m\}\\Q_{right}(\theta) = Q \setminus Q_{left}(\theta)\end{aligned}\end{align}

The impurity at $m$ is computed using an impurity function $H()$, the choice of which depends on the task being solved (classification or regression), like:

\begin{align} G(Q, \theta) = \frac{n_{left}}{N_m} H(Q_{left}(\theta)) + \frac{n_{right}}{N_m} H(Q_{right}(\theta)) \end{align}

Select the parameters that minimise the impurity: $\theta^* = \operatorname{argmin}_\theta G(Q, \theta)$. Recurse for subsets $Q_{left}(\theta^*)$ and $Q_{right}(\theta^*)$ until the maximum allowable depth is reached, $N_m < \text{min\_samples}$, or $N_m = 1$.

Speaking about the *Classification Criteria*, referring to the procedure used for learning, or fitting, a decision tree to the data, we can state what follows: if a target is a classification outcome taking on values $0,1,\ldots,K-1$, for node $m$, representing a region $R_{m}$ with $N_{m}$ observations, let $p_{mk} = \frac{1}{N_m} \sum_{x_i \in R_m} I(y_i = k)$ be the proportion of class $k$ observations in node $m$. Common measures of impurity are:
- Gini, specified as $H(X_m) = \sum_k p_{mk} (1 - p_{mk})$
- Entropy, defined as $H(X_m) = - \sum_k p_{mk} \log(p_{mk})$

where we recall that $X_{m}$ is the training data in node $m$.

### Cross-Validation Result

```python
pos_cv = pos_cv + 1; show_df_with_mean_at_bottom(dfs_list[pos_cv])
# dfs_list[pos_cv].head(dfs_list[pos_cv].shape[0])
```

```python
plot_name = plots_names[pos_cv]
show_learning_curve(dfs_list[pos_cv], n=len(cv_list[:N_CV]), figsize=(15, 7),
                    plot_dest=plot_dest, grid_size=grid_size, plot_name=plot_name)
```

### Grid-Search Result

```python
pos_gs = pos_gs + 1; plot_dest = os.path.join("figures", "n_comp_9_analysis", "grid_search"); X = copy.deepcopy(rescaledX)
df_gs, df_auc_gs, df_pvalue = grid_search_all_by_n_components(
    estimators_list=estimators_list[pos_gs+1],
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1],
    X=X, y=y, n_components=9, random_state=0, show_plots=False, show_errors=False,
    verbose=1, plot_dest=plot_dest, debug_var=False)
df_9 = merge_dfs_by_common_columns(df_9, df_gs); df_9_auc = merge_dfs_by_common_columns(df_9_auc, df_auc_gs)
# df_9, df_9_auc = pd.concat([df_9, df_gs], axis=0), pd.concat([df_9_auc, df_auc_gs], axis=0)
```

Looking at the results obtained by running the grid-search algorithm applied to the Decision Tree Classifier, we can roughly say that the different models obtain performances, with default classification thresholds, that are not as good as the results obtained by the previous models; moreover, the models do not show ROC curves, along with their corresponding AUC scores, that would let us say that such models can perform well during inference when varying the default threshold. In particular, we can say what follows:
- looking at the __Linear kernel-Pca based Decision Tree Classifier__, we notice that with the default threshold the model obtains high precision and recall for class 1 examples, meaning it is able to correctly classify most of the samples from class 1, and only a few examples from class 0 are mistaken as belonging to class 1.
However, speaking about class 0, we notice that we obtained a very low precision and a 50 percent recall, which means that the model with the default threshold misclassifies half of the samples from class 0, and we are not really sure that what we have classified as a class 0 instance really belongs to class 0. Finally, looking at the ROC curve, we can observe that even from the very beginning the Sensitivity and 1-Specificity grow linearly, with a slope value slightly bigger than the slope of the reference line of the Random Classifier; however, at a given point the slope changes, and as the thresholds approach higher values and the curve approaches the top, the slope decreases importantly. The AUC score accounts for 0.62.
- looking at the __Poly kernel-Pca based Decision Tree Model__, we can clearly see that such a classifier is not good enough to be exploited for further inference, since it accounts for an AUC score of just .58, and observing the ROC curve we can conclude that it goes only slightly better than the curve of the random classifier. Moreover, when we adopt the default threshold, the model seems to correctly classify most of the instances from class 0 but wrongly predicts the class for instances of the opposite category; in fact, it is characterized by low precision for class 0, and since we want to correctly predict labels for both categories, with such a classifier we are not able to satisfy that constraint.
- the classifier corresponding to the __Rbf kernel-Pca based Decision Tree__ is the model which leads to the worst performance, since its ROC curve is even worse than that of the random classifier and accounts for an AUC score lower than .5, more precisely just .44. So this result will be discarded, even if the model seems to correctly recognize samples from class 1 while wrongly predicting labels for the class 0 samples; again, we have to state that we are not able to meet the constraint of correctly classifying most of the data examples, as we would expect from a well-behaved classifier.
- referring to the __Cosine kernel-Pca based Decision Tree Classifier__, when adopting a default threshold we notice that, even if the model correctly classifies all samples from class 0, leading to high recall for that category, it also has a low precision for class 0, meaning that it confuses many samples from class 1; in fact, it is characterized by a low value of recall for class 1, although when it predicts a label equal to category one it is almost always sure about the choice. Thus, looking at the ROC curve and AUC score, we can note that the model is characterized by a first phase in which the Sensitivity and 1-Specificity grow following a line with a slope even lower than that of the Random Classifier, unlike the previous models, accounting for an AUC score equal to *45%*. We cannot even decide to switch the labels to try to train another classifier with such a new configuration, since the AUC does not suggest following this available and well-known strategy.
- Lastly, the __Sigmoid kernel-Pca based Decision Tree Classifier__, with a default classification threshold of .5, shows performance scores that are more or less analogous to those seen previously for the *Rbf and Cosine kernel-Pca based Decision Tree Models*. The only difference is that the model gets an AUC score much closer to that of the Random Classifier.
Again, also this model is able to predict with high recall the samples belonging to class 1, while it wrongly predicts class labels for those samples which come from class 0.

__Significance Analysis__: finally, looking at the different graphics related to the test which aims at investigating the diagnostic power of the different models we have fine-tuned for the *Decision Tree Classifier*, and picking the best one for such a test, we can note that, because of the *significance level* $\alpha$ set equal to *0.05, that is, a 5% chance of rejecting the Null-Hypothesis $H_{0}$*, we have obtained the following results. Adopting the Decision Tree statistical learning technique for classification, fine-tuned as above with hyper-params selected also depending on the kind of *kernel trick adopted for the kernel-Pca unsupervised technique*, we can claim that only two out of five trials lead to a *p-value* worse than the *selected significance level of 5%*, namely the *Rbf- and Cosine-kernel Pca based Decision Tree Classifiers*, so rejecting the *Null-Hypothesis* for those two cases would result in a *Type I Error*. The remaining three cases, that is, the *Linear-, Poly-, and Sigmoid-kernel Pca based Decision Tree Classifiers*, obtained a p-value in the range $[15, 90]$ *in percent points*. Although we have at least two out of five fine-tuned classifiers that seem to allow us to reject the *Null-Hypothesis*, which would justify the use of such fine-tuned models and the weights and hyper-parameter values we have discovered, we end up saying that there is none of the previous models that we would surely accept to employ for inference and classification tasks, due to their poor performance, roughly speaking. In other words, it seems that the *Decision Tree classification technique* does not work well with such a small, unbalanced dataset.

#### Table Fine Tuned Hyper-Params (Decision Trees Classifier)

```python
show_table_summary_grid_search(df_gs, df_auc_gs, df_pvalue)
```

Looking at the table shown above, obtained by running the grid-search algorithm applied to the Decision Tree Classifier, only three out of five classifiers get significant accuracy values, and those are the Decision Trees created by means of the *Linear, Cosine, and Sigmoid tricks adopted for kernel-Pca*, since they reach accuracies up to *71%, 74%, and 74%* respectively. However, the worst classifier turns out to be the *Polynomial based kernel-Pca Decision Tree*, with an accuracy on the test set that reaches just *41%*; lastly, the remaining classifier was slightly worse than the three best models, since it reached an accuracy score roughly 10 percent points lower than theirs, that is *62%*, which is however not enough to consider it a good enough classifier.

Looking at the hyper-parameter results shown in the summary table included just above for the Decision Tree classification algorithm, we can explain those results as follows:
- referring to the **class_weight hyper-param**, which represents the weights associated with classes: if we disable this setting, so that it is set to None, all classes are supposed to have weight one, while the *“balanced”* mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data, as n_samples / (n_classes * np.bincount(y)).
We can notice that the balanced weighting strategy for the training phase was chosen as the best value of the *class_weight param* by just two out of five resulting fine-tuned classifiers, namely the *Rbf and Sigmoid kernel-Pca based Decision Tree Classifiers*, while the remaining ones adopted a uniform strategy. However, discarding the worst classifier, this parameter was chosen to be balanced half of the time and uniform the remaining time. In other words, the choice of the kernel trick at preprocessing time later affected the choice, at grid-search training time, of the *class_weight hyper-param*.
- reviewing the **criterion decision trees' param**, which stands for the function used to measure the quality of a split, where the supported criteria are *“gini” for the Gini impurity and “entropy” for the information gain*: the most frequently chosen quality measure was *gini, for the Gini impurity*; in fact, the *Linear, Rbf, and Sigmoid kernel-Pca based Decision Tree Classifiers* selected this technique, while the remaining fine-tuned classifiers, the *Poly and Cosine kernel-Pca based Decision Tree Classifiers*, take advantage of *“entropy” for the information gain*. Again, the choice at preprocessing time of a specific kernel trick, among those available for the kernel-Pca unsupervised procedure, leads to a particular *criterion* adopted by the tree-based estimator when building the tree structure.
- looking at the **max_depth trees' param**, which stands for the maximum depth of the tree (if None, then nodes are expanded until all leaves are pure or until all leaves contain less than *min_samples_split samples*), we clearly see that, for such an unbalanced and small dataset, the best strategy while building the tree data structure was to keep growing the tree until all leaves are pure or until all leaves contain less than *min_samples_split samples*. In other words, the hyper-param was set to None, suggesting not to stop growing the tree at a given point, but rather to expand it as much as possible. Only for the *Cosine kernel-Pca based Decision Tree Classifier* was the best max_depth value set to a really small size, just three, which means that the data preprocessed via the Cosine kernel trick does not need too many attributes to be taken into account before arriving at a leaf node.
- also describing the **max_features hyper-parameter**, that is, the number of features to consider when looking for the best split (if *“sqrt”, then max_features=sqrt(n_features)*; if *“log2”, then max_features=log2(n_features)*; and if *None, then max_features=n_features*), we can notice that the initial choice of the kernel trick for preprocessing the data points within the dataset does not affect the final selection of the technique adopted for calculating the number of features to be considered when looking for the best split. Moreover, since we are dealing with a small dataset, unbalanced with respect to the class labels and also with a small number of features, we can reasonably understand that in most of the cases the classifiers get better performance scores just by considering a number of features equal to the features available after having performed the kernel-Pca algorithm. Only the *Rbf kernel-Pca based Decision Tree Classifier* selected the strategy corresponding to *“sqrt”, so max_features=sqrt(n_features)*, for deciding the features to be considered for the best split.
- lastly, when speaking about the strategy used to choose the split at each node, where the supported strategies are *“best” to choose the best split and “random” to choose the best random split*, we are referring to the **splitter hyper-parameter**. Here, for the trials we have carried out, what we can say about this hyper-param is that, since we are dealing with a small, unbalanced dataset with a not so large number of features (also after having preprocessed it and discarded some useless features), the diverse fine-tuned models in most cases adopted the best strategy, which requires more training time, while in just a single case, corresponding to the *Cosine kernel-Pca based Decision Tree Classifier*, the retrieved model seems to do better when adopting the random strategy. The choice of kernel-Pca with a specific kernel trick was decisive and affects the hyper-parameters selected for building the tree.

If we imagine building up an *Ensemble Classifier* from the family of *Averaging Methods*, whose underlying principle requires building separate, single classifiers and then averaging their predictions in a regression context, or adopting a majority-vote strategy in a classification context, we can claim that, among the proposed decision tree classifiers, we could surely employ the classifiers found from the __Linear, Rbf, Cosine and Sigmoid kernel-Pca based Decision Tree Classifiers__, because of their performance metrics and also because Ensemble Methods such as the Bagging Classifier usually work well by exploiting an ensemble of independent and fine-tuned classifiers, differently from Boosting Methods, which are instead based on weak learners.

#### Decision Tree's Advantages & Drawbacks

Some advantages of decision trees are:
- Simple to understand and to interpret. Trees can be visualised.
- Requires little data preparation. Other techniques often require data normalisation, dummy variables need to be created and blank values to be removed. Note however that this module does not support missing values.
- The cost of using the tree (i.e., predicting data) is logarithmic in the number of data points used to train the tree.
- Able to handle both numerical and categorical data. Other techniques are usually specialised in analysing datasets that have only one type of variable. See algorithms for more information.
- Able to handle multi-output problems.
- Uses a white box model. If a given situation is observable in a model, the explanation for the condition is easily explained by boolean logic. By contrast, in a black box model (e.g., in an artificial neural network), results may be more difficult to interpret.
- Possible to validate a model using statistical tests. That makes it possible to account for the reliability of the model.
- Performs well even if its assumptions are somewhat violated by the true model from which the data were generated.

The disadvantages of decision trees include:
- Decision-tree learners can create over-complex trees that do not generalise the data well. This is called overfitting. Mechanisms such as pruning (not currently supported), setting the minimum number of samples required at a leaf node or setting the maximum depth of the tree are necessary to avoid this problem.
- Decision trees can be unstable because small variations in the data might result in a completely different tree being generated. This problem is mitigated by using decision trees within an ensemble.
- The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristic algorithms such as the greedy algorithm, where locally optimal decisions are made at each node. Such algorithms cannot guarantee to return the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled with replacement.
- There are concepts that are hard to learn because decision trees do not express them easily, such as XOR, parity or multiplexer problems.
- Decision tree learners create biased trees if some classes dominate. It is therefore recommended to balance the dataset prior to fitting with the decision tree.

## Ensemble methods

The goal of ensemble methods is to combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator. Two families of ensemble methods are usually distinguished:
- In averaging methods, the driving principle is to build several estimators independently and then to average their predictions. On average, the combined estimator is usually better than any of the single base estimators because its variance is reduced. Some examples are Bagging methods and Forests of randomized trees, though more classifiers exist;
- Instead, in boosting methods, base estimators are built sequentially and one tries to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble. Some examples are AdaBoost and Gradient Tree Boosting, though more options exist.

## Random Forests

| Learning Technique | Type of Learner | Type of Learning | Classification | Regression | Ensemble Family |
| --- | --- | --- | --- | --- | --- |
| *RandomForest* | *Ensemble Method (Meta-Estimator)* | *Supervised Learning* | *Supported* | *Supported* | *Averaging Methods* |

The **sklearn.ensemble module** includes two averaging algorithms based on randomized decision trees: the RandomForest algorithm and the Extra-Trees method. Both algorithms are perturb-and-combine techniques, specifically designed for trees. This means a diverse set of classifiers is created by introducing randomness in the classifier construction. The prediction of the ensemble is given as the averaged prediction of the individual classifiers. In random forests (see the RandomForestClassifier and RandomForestRegressor classes), each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set.

The main parameters to adjust when using these methods are *n_estimators* and *max_features*. The former is the number of trees in the forest. The larger the better, but also the longer it will take to compute. In addition, note that results will stop getting significantly better beyond a critical number of trees. The latter is the size of the random subsets of features to consider when splitting a node. The lower it is, the greater the reduction of variance, but also the greater the increase in bias.
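As a small illustration of these two knobs (more trees reduce variance at a computational cost, while smaller feature subsets per split trade extra bias for lower variance), here is a minimal sketch on synthetic placeholder data rather than the project's own dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X_toy, y_toy = make_classification(n_samples=200, n_features=11, random_state=0)

for n_estimators in (10, 50, 100):        # more trees: lower variance, slower fit
    for max_features in ("sqrt", None):   # smaller subsets: less variance, more bias
        clf = RandomForestClassifier(n_estimators=n_estimators,
                                     max_features=max_features, random_state=0)
        score = cross_val_score(clf, X_toy, y_toy, cv=5).mean()
        print(n_estimators, max_features, round(score, 3))
```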
Empirically good default values are max_features=None (always considering all features instead of a random subset) for regression problems, and max_features="sqrt" (using a random subset of size sqrt(n_features)) for classification tasks, where n_features is the number of features in the data. The best parameter values should always be cross-validated. We note that the size of the model with the default parameters is $O(M \cdot N \cdot \log(N))$, where $M$ is the number of trees and $N$ is the number of samples.

### Cross-Validation Result

```python
pos_cv = pos_cv + 1; show_df_with_mean_at_bottom(dfs_list[pos_cv])
# dfs_list[pos_cv].head(dfs_list[pos_cv].shape[0])
```

```python
plot_name = plots_names[pos_cv]
show_learning_curve(dfs_list[pos_cv], figsize=(15, 7), n=len(cv_list[:N_CV]),
                    plot_dest=plot_dest, grid_size=grid_size, plot_name=plot_name)
```

### Grid-Search Result

```python
pos_gs = pos_gs + 1; plot_dest = os.path.join("figures", "n_comp_9_analysis", "grid_search"); X = copy.deepcopy(rescaledX)
df_gs, df_auc_gs, df_pvalue = grid_search_all_by_n_components(
    estimators_list=estimators_list[pos_gs+1],
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1],
    X=X, y=y, n_components=9, random_state=0, show_plots=False, show_errors=False,
    verbose=1, plot_dest=plot_dest, debug_var=False)
df_9 = merge_dfs_by_common_columns(df_9, df_gs); df_9_auc = merge_dfs_by_common_columns(df_9_auc, df_auc_gs)
# df_9, df_9_auc = pd.concat([df_9, df_gs], axis=0), pd.concat([df_9_auc, df_auc_gs], axis=0)
```

Speaking widely about the results collected when running the Random Forest Classifier algorithm, the ensemble method of choice here, by means of the grid-search approach, we can say that only two out of five models lead to classifiers which are able to satisfy our constraint of correctly classifying most of the samples coming from the different classes, that is class 0 and class 1. More precisely, we can say what follows:
- Looking at the __Linear kernel-Pca based Random Forest Classifier__, with a default threshold the model correctly classifies all instances from class 1 but wrongly predicts labels for instances from class 0, and so results in a model with high recall and low precision for class 1, and low recall as well as low precision for class 0. Moreover, when modifying the default threshold, we do not observe, looking at the ROC curve, an improvement in the classification capability of the model; this is also suggested by the fact that we do not obtain an AUC score higher than .5, which is the reference score of the Random Classifier. This means that such a model should be ignored and not employed.
- instead, looking at the __Poly kernel-Pca based Random Forest Classifier__, with a default threshold of .5 the model allows us to correctly classify most of the samples from class 1, but we correctly classify just half of the samples that instead belong to class 0; so the model has high recall and precision for class 1, but low precision and a recall of 50 percent for class 0.
Looking at the ROC curve, it shows us that, for a first range of thresholds, both the TPR and FPR metrics grow linearly with a slope higher than that of the Random Classifier's line; however, after a given point, the slope changes and decreases to a value lower than 0.5, meaning that in the first range, when changing the threshold, the model gains more TPR than FPR, while afterwards the FPR increases more significantly than the TPR, so we should not select thresholds that are too high. Overall, the model gets an AUC score of .61, which is not much greater than .5, so this model is not really well performing either.
- speaking about the __Rbf kernel-Pca based Random Forest Classifier__, with a default threshold of .5 it yields a model which reaches an AUC score of .78, the highest score among the Random Forest Classifiers built in this section. Moreover, it gives us a model which correctly classifies most of the class 1 instances, in fact we get high precision and high recall, so we also have a model that misclassifies few instances from class 0. Also, looking at the ROC curve, we can notice that for a very large set of thresholds the TPR metric grows faster than the FPR metric, and only for high thresholds does the model let the FPR grow much faster than the TPR; this trial therefore leads to a classifier that could rightly be selected for predicting the samples from the different classes, because it shows really good performance.
- referring to the __Cosine kernel-Pca based Random Forest Classifier__, we can quickly say that such a classifier is only slightly better than the one obtained with the linear trick for kernel-Pca; in fact, its AUC score is higher than the latter model's but only slightly higher than the Random Classifier's, so we end up saying that such a configuration does not lead to a model that we want to exploit for classification. The main reason is that we wrongly predicted the class label for half of the class 0 examples, and we also wrongly assigned labels to a huge number of class 1 samples, so that the model results in a low value of precision for class 0.
- Lastly, the __Sigmoid kernel-Pca based Random Forest Classifier__, with a default .5 threshold, leads to a model with very high precision and middling recall for class 1, while we obtain middling recall and low precision for class 0; so we correctly classify a little more than half of the samples from class 0, but even if the number of wrongly classified examples from class 1 is not huge, it is comparable to the number of correctly classified class 0 examples, so the precision goes down. However, the model yields a ROC curve that obtains a score of .67, the second best among the Random Forest Classifiers, even though the accuracy obtained by the model is only the third among them.

```python
show_table_summary_grid_search(df_gs, df_auc_gs, df_pvalue)
```

Looking at the table reported above for the Random Forest Classifiers, we can notice that three out of five classifiers, namely the linear, polynomial, and sigmoid-trick kernel-Pca based models, do not adopt a bootstrap approach, while the Rbf and Cosine trick kernel-Pca based Random Forests adopted such a strategy to get better results during inference. The gini index criterion was mostly preferred over entropy; in fact, the latter was adopted just by the sigmoid kernel-Pca based model.
Finally, the number of estimators adopted by the different models varies from model to model; in fact, in three out of five cases the number of estimators was lower than 10, while there are two cases in which we exploit 50 and even hundreds of estimators to be able to correctly classify the samples.

## Summary Results

### Summary Tables about Analyses done by means of different numbers of included Principal Components

```python
# df_9_, df_12_ = reshape_dfs_acc([df_9, df_12], num_col=N_KERNEL, n_cp_list=[9, 11])
# res = create_widget_list_df_vertical([df_9_, df_9_auc]); display.display(res)
# res = create_widget_list_df_vertical([df_12_, df_12_auc]); display.display(res)
```

### Summary Test

Here, in the following section, I am going to run a small experiment in which I test the different possible kinds of kernel tricks (in other words, techniques) available for the Principal Component Analysis (PCA for short) unsupervised statistical learning technique, in order to remap the original features into a new N-dimensional reference system by means of the kernel approach adopted during the computation. Once the new N-dimensional feature space is available and ready, I will experiment with a bunch of selected machine learning methods and procedures applied directly to the first two most informative principal components, also referred to as PCA1 and PCA2 respectively, in order to display a sequence of decision boundaries and contours retrieved after having run each method on the selected dataset, which has been divided into halves of the same size, with the same proportion of the two classes of the target variable.

What follows is the code related to the description given just above; the results are also available through several rows of images that represent the contours and decision boundaries obtained thanks to the several combinations of PCA kernel tricks and machine learning methods for fitting a classifier:

```python
kernel_pca = ['linear', 'poly', 'rbf', 'cosine', 'sigmoid']  # linear, poly, rbf, sigmoid, cosine, precomputed
scaler_techniques = ['StandardScaler', 'Normalize', 'MinMaxScaler']

# Trying only StandardScaler approach
err_list = classifier_comparison_by_pca_kernels(X, y, start_clf=0, stop_clf=10,
    scaler_technique=scaler_techniques[0], straitified_flag=True,
    kernels_pca_list=kernel_pca[:], figsize=(27, 9), by_pairs=False, singles=False,
    verbose=0, record_errors=True, avoid_func=False,)
```

Describing the several pictures we obtained through the combination of the kernel tricks available for the kernelPCA technique with the different supervised machine learning techniques for building classifiers and models, we can end up saying what follows. Looking at the first picture of each row of graphs, that is, the pictures showing just data points, without any kind of decision regions or decision boundaries, what we understand is that the data points, i.e. the data examples, group into different shapes according to the kind of kernel trick adopted for the kernelPCA method. In particular, we can immediately see that the two categories, where blue points stand for THROUGH-like bridges and red points for DECK-like bridges, are not equal in number: the blue points are the more numerous of the two. Moreover, the two categories do not seem to separate very well; the picture is crowded with both types of categories, which lie strictly close to one another.
More precisely, we can see that:
- using the *linear* kernel trick for the kernelPCA procedure, the data points seem to be widely spread along the vertical axis, and group mostly near the center of the picture;
- using the *poly* kernel trick, the data points are instead mostly clustered near the bottom-left corner, where they seem to form a straight line, with a few examples on the upper side and some fewer points on the right side of the same picture;
- using the *rbf* kernel trick, the data points seem to spread like those in the first picture, so around the middle of the area, but they are more tightly related, hence less spread along the horizontal axis;
- when exploiting the *cosine* kernel trick, the data points are widely spread and tend to reach the top of the picture;
- finally, when adopting the *sigmoid* kernel trick, we can see that the data points are mostly clustered in the center of the graph.

Speaking about the decision boundaries and decision regions of the selected machine learning methods fitted to the data, we can say the following:
- Looking at the **Nearest-Neighbor Method** graphs describing decision boundaries and decision regions, we notice that in the majority of cases the decision regions are prominent for the THROUGH-like bridges; sometimes the areas referring to DECK-like samples are surrounded by the decision regions of the other class, and the transition between the two decision boundaries is very sharp, not easily describable.
- Looking at the **Linear SVM Classifier**, and knowing that we are fitting a linear classifier to the data, it is clear that the expected decision regions follow a pattern made of several strips of shifting shades of color, from dark red to dark blue. More precisely, three out of five Linear SVM classifiers, in particular those fitted when the kernel trick for Kernel PCA was set to *'rbf', 'cosine', 'sigmoid'* respectively, one at a time, show more or less the same pattern, so this classification technique combined with these kernel tricks for KernelPCA seems to behave in more or less the same way. Instead, Linear SVM combined with the poly kernel trick leads the classifier, and the resulting decision boundaries, to follow a symmetric pattern with respect to the vertical axis. Finally, the first combination of KernelPca and Linear SVM technique, that is, linear kernel trick plus linear SVM, leads to a less aggressive, finer slope of the linear decision regions. We can end up saying that, in the majority of cases, the transition from one extreme of the shade of color to the other is smoother and more continuous than with the Nearest-Neighbor approach.
- Speaking about the **RBF kernel SVM** combined with a dataset preprocessed with the various kernel tricks for the kernelPca procedure, we can observe that the attempt at finding decision regions on one side advantages the more numerous class, the class of data points classified as THROUGH-like bridges, while it penalizes the other, which is confined to a smaller region. However, it seems that the classifier is able to correctly classify the data points of the less numerous class, while the data points of the other class are sometimes misclassified more frequently.
- Looking at the classifiers trained by means of the **Gaussian Process technique**, we can ascertain that the decision boundaries and decision regions seem to follow a straight pattern where the data points mix the most, while far from the bigger cluster of points coming from both categories the decision boundaries assume a higher order, so that they resemble smooth nonlinear curves. In particular, while in all other cases the blue region seems to occupy the left side of the graph, sometimes near the bottom and other times near the top-right, for the Gaussian Process technique combined with the sigmoid kernel trick for the kernelPca procedure we observe the opposite pattern.
- Even if the next three methods have different characteristics, they seem to provide, more or less, resulting decision boundaries and decision regions of a similar nature: regions obtained by dividing the available two-dimensional plane into subregions that correspond to square regions, or alternatively to irregular regions that do not correspond to some kind of curve but rather to a segmentation of the available area. These methods are, respectively, **Decision Trees, Random Forests, and AdaBoost**, where the two latter can be seen as an improvement of the Decision Tree, because they are often based on the decision tree classifier as the unit of the overall classifier, as Random Forests and AdaBoost are generally described. However, AdaBoost and Random Forests seem to behave in more or less the same way, in the sense that both show a predominance of regions and subregions linked to the THROUGH class, even if the transition from one region to the other is much smoother than the transitions of the Decision Tree based models.
- The **Naive Bayes Classifier**, when applied as a classification technique to the data points once preprocessed using, one at a time, all the suggested kernel tricks for the kernelPca method, leads to decision boundaries and regions that vary the most from one kernel trick to the other. In particular, using the first three proposed kernel tricks, that is 'linear', 'poly', and 'rbf', the decision regions connected to the DECK-like bridges are concentric with respect to the surrounding area, which is instead widely associated with the other class, the THROUGH-like bridges. More precisely, for the 'linear' kernel the resulting decision boundaries are wide and spread along the vertical axis, while for 'poly' and 'rbf' they tend to be narrower and located near the bottom of the graphic. Looking instead at the graphic that refers to the data points preprocessed by means of the cosine kernel trick, we notice that it seems to lead to an opposite, or symmetric, graphic with respect to the horizontal axis when compared with the graphic obtained by means of the linear kernel trick. Lastly, the sigmoid kernel trick leads to a graphic in which data points from the THROUGH class seem to be associated with the left and right sides of the picture, while the centered horizontal strip, from top to bottom, seems to be associated with data points from the DECK class; more precisely, the dark red areas are spotted mostly near either the top or the bottom areas.
- The last classifier proposed for this tiny and rough experiment is the one known as **Quadratic Discriminant Analysis**, or, more shortly, *QDA*.
The resulting graphics suggest that, by means of such a technique, the DECK class is the one of the two which most affects the model's capabilities, since the decision regions are mostly represented by shades of color that in the majority of cases range around red, enabling us to summarize that the DECK class, differently from the preceding models, will be the most frequently predicted class with respect to the other class, that is the THROUGH class.

Having performed the analyses discussed just above, employing graphics, and so a qualitative approach, for investigating some of the best-known and most exploited methods, we can summarize that, since we adopt just two PCs out of eleven possible components for predicting classes between DECK and THROUGH for the T-OR-D dependent variable (our predictive or target variable), it is really difficult to correctly classify the majority of the data samples, since the decision boundaries vary heavily from one method to the other, also due to the fact that we exploit little information and knowledge and cannot find patterns that lead to a more precise classification. We need to exploit more features to reach better performance at classification time and to find better decision boundaries that allow us to separate the data points without mixing them.

```python
plot_dest = os.path.join("figures", "n_comp_12_analysis", "grid_search"); X = rescaledX; pos = pos + 1
df_gs, df_auc_gs = grid_search_all_by_n_components(estimators_list=estimators_list[pos_gs+1],
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1],
    X=X, y=y, n_components=12, random_state=0, show_plots=False, show_errors=False,
    verbose=1, plot_dest=plot_dest, debug_var=False)
df_12, df_12_auc = df_gs, df_auc_gs
```

```python
create_widget_list_df([df_gs, df_auc_gs])  # print(df_gs); print(df_auc_gs)
```

```python
plot_dest = os.path.join("figures", "n_comp_12_analysis", "grid_search"); X = rescaledX; pos = pos + 1
df_gs, df_auc_gs = grid_search_all_by_n_components(estimators_list=estimators_list[pos_gs+1],
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1],
    X=X, y=y, n_components=12, random_state=0, show_plots=False, show_errors=False,
    verbose=1, plot_dest=plot_dest, debug_var=False)
df_12, df_12_auc = pd.concat([df_12, df_gs], axis=0), pd.concat([df_12_auc, df_auc_gs], axis=0)
```

```python
create_widget_list_df([df_gs, df_auc_gs])  # print(df_gs); print(df_auc_gs)
```

```python
plot_dest = os.path.join("figures", "n_comp_12_analysis", "grid_search"); X = rescaledX; pos = pos + 1
df_gs, df_auc_gs = grid_search_all_by_n_components(estimators_list=estimators_list[pos_gs+1],
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1],
    X=X, y=y, n_components=12, random_state=0, show_plots=False, show_errors=False,
    verbose=1, plot_dest=plot_dest, debug_var=False)
df_12, df_12_auc = pd.concat([df_12, df_gs], axis=0), pd.concat([df_12_auc, df_auc_gs], axis=0)
```

```python
create_widget_list_df([df_gs, df_auc_gs])  # print(df_gs); print(df_auc_gs)
```

```python
plot_dest = os.path.join("figures", "n_comp_12_analysis", "grid_search"); X = rescaledX; pos = pos + 1
df_gs, df_auc_gs = grid_search_all_by_n_components(estimators_list=estimators_list[pos_gs+1],
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1],
    X=X, y=y, n_components=12, random_state=0, show_plots=False, show_errors=False,
    verbose=1, plot_dest=plot_dest, debug_var=False)
df_12, df_12_auc = pd.concat([df_12, df_gs], axis=0), pd.concat([df_12_auc, df_auc_gs], axis=0)
```

```python
create_widget_list_df([df_gs, df_auc_gs])  # print(df_gs); print(df_auc_gs)
```

```python
plot_dest = os.path.join("figures", "n_comp_12_analysis", "grid_search"); X = rescaledX; pos = pos + 1
df_gs, df_auc_gs = grid_search_all_by_n_components(estimators_list=estimators_list[pos_gs+1],
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1],
    X=X, y=y, n_components=12, random_state=0, show_plots=False, show_errors=False,
    verbose=1, plot_dest=plot_dest, debug_var=False)
df_12, df_12_auc = pd.concat([df_12, df_gs], axis=0), pd.concat([df_12_auc, df_auc_gs], axis=0)
```

```python
create_widget_list_df([df_gs, df_auc_gs])  # print(df_gs); print(df_auc_gs)
```

```python
plot_dest = os.path.join("figures", "n_comp_12_analysis", "grid_search"); X = rescaledX; pos = pos + 1
df_gs, df_auc_gs = grid_search_all_by_n_components(estimators_list=estimators_list[pos_gs+1],
    param_grids=param_grids[pos_gs], estimators_names=estimators_names[pos_gs+1],
    X=X, y=y, n_components=12, random_state=0, show_plots=False, show_errors=False,
    verbose=1, plot_dest=plot_dest, debug_var=False)
df_12, df_12_auc = pd.concat([df_12, df_gs], axis=0), pd.concat([df_12_auc, df_auc_gs], axis=0)
```

```python
create_widget_list_df([df_gs, df_auc_gs])  # print(df_gs); print(df_auc_gs)
```

### Improvements and Conclusions <a class="anchor" id="Improvements-and-conclusions"></a>

Extensions that we can think of to better improve the analyses we can perform on such a relatively tiny dataset may include, for the preprocessing phase:
- Selecting different *Feature Extraction and Dimensionality Reduction Techniques* other than Pca or kernel-Pca, such as *linear discriminant analysis (LDA)* or *canonical correlation analysis (CCA)*, as a pre-processing step.

For the training phase, they may include:
- Selecting different *Ensemble Methods, investigating both Averaging based and Boosting based Statistical Learning Methods*.

For the diagnostic analyses after having performed the train and test phases, they may include:
- Using other measures, indicators and graphical plots, such as the *Total Operating Characteristic (TOC)*, since such a measure also characterizes diagnostic ability while revealing more information than the ROC. In fact, for each threshold, ROC reveals two ratios, TP/(TP + FN) and FP/(FP + TN); in other words, ROC reveals hits/(hits + misses) and false alarms/(false alarms + correct rejections). On the other hand, TOC shows the total information in the contingency table for each threshold. Lastly, the TOC method reveals all of the information that the ROC method provides, plus additional important information that ROC does not reveal, i.e. the size of every entry in the contingency table for each threshold.
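To make the TOC remark concrete, the sketch below tabulates the full contingency table at each threshold for made-up scores and labels; these are the four counts that TOC retains and that ROC compresses into two ratios:

```python
import numpy as np

y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])           # hypothetical labels
y_score = np.array([.9, .8, .7, .6, .55, .4, .3, .1])  # hypothetical scores

for t in np.unique(y_score):
    y_pred = (y_score >= t).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    # ROC keeps only TP/(TP+FN) and FP/(FP+TN); TOC keeps all four entries
    print(f"threshold={t:.2f}  TP={tp} FP={fp} FN={fn} TN={tn}")
```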
## References section <a class="anchor" id="references"></a>

### Main References
- Data Domain Information part:
    - (Deck) https://en.wikipedia.org/wiki/Deck_(bridge)
    - (Cantilever bridge) https://en.wikipedia.org/wiki/Cantilever_bridge
    - (Arch bridge) https://en.wikipedia.org/wiki/Deck_(bridge)
- Machine Learning part:
    - (Theory Book) https://jakevdp.github.io/PythonDataScienceHandbook/
    - (Feature Extraction: PCA) https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
    - (Linear Model: Logistic Regression) https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression
    - (Neighbor-based Learning: Knn) https://scikit-learn.org/stable/modules/neighbors.html
    - (Stochastic Learning: SGD Classifier) https://scikit-learn.org/stable/modules/sgd.html#sgd
    - (Discriminative Model: SVM) https://scikit-learn.org/stable/modules/svm.html
    - (Non-Parametric Learning: Decision Trees) https://scikit-learn.org/stable/modules/tree.html#tree
    - (Ensemble, Non-Parametric Learning: RandomForest) https://scikit-learn.org/stable/modules/ensemble.html#forest
- Metrics:
    - (F1-Accuracy-Precision-Recall) https://towardsdatascience.com/beyond-accuracy-precision-and-recall-3da06bea9f6c
- Statistics:
    - (Correlation and dependence) https://en.wikipedia.org/wiki/Correlation_and_dependence
    - (KDE) https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/
- Chart part:
    - (Seaborn Charts) https://acadgild.com/blog/data-visualization-using-matplotlib-and-seaborn
- Third Party Library:
    - (sklearn) https://scikit-learn.org/stable/index.html
    - (statsmodels) https://www.statsmodels.org/stable/index.html#

### Others References
- Plots:
    - (Python Plot) https://www.datacamp.com/community/tutorials/matplotlib-tutorial-python?utm_source=adwords_ppc&utm_campaignid=898687156&utm_adgroupid=48947256715&utm_device=c&utm_keyword=&utm_matchtype=b&utm_network=g&utm_adpostion=&utm_creative=255798340456&utm_targetid=aud-299261629574:dsa-473406587955&utm_loc_interest_ms=&utm_loc_physical_ms=1008025&gclid=Cj0KCQjw-_j1BRDkARIsAJcfmTFu4LAUDhRGK2D027PHiqIPSlxK3ud87Ek_lwOu8rt8A8YLrjFiHqsaAoLDEALw_wcB
- Markdown Math part:
    - (Math Symbols Latex) https://oeis.org/wiki/List_of_LaTeX_mathematical_symbols
    - (CheatSheet) https://www.ibm.com/support/knowledgecenter/SSHGWL_1.2.3/analyze-data/markd-jupyter.html
    - (Tutorial 1) https://share.cocalc.com/share/b4a30ed038ee41d868dad094193ac462ccd228e2/Homework%20/HW%201.2%20-%20Markdown%20and%20LaTeX%20Cheatsheet.ipynb?viewer=share
    - (Tutorial 2) https://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Typesetting%20Equations.html

```python

```
[STATEMENT] theorem weak_bisimilarity_implies_weak_equivalence: assumes "P \<approx>\<cdot> Q" shows "P \<equiv>\<cdot> Q" [PROOF STATE] proof (prove) goal (1 subgoal): 1. P \<equiv>\<cdot> Q [PROOF STEP] proof - [PROOF STATE] proof (state) goal (1 subgoal): 1. P \<equiv>\<cdot> Q [PROOF STEP] { [PROOF STATE] proof (state) goal (1 subgoal): 1. P \<equiv>\<cdot> Q [PROOF STEP] fix x :: "('idx, 'pred, 'act) formula" [PROOF STATE] proof (state) goal (1 subgoal): 1. P \<equiv>\<cdot> Q [PROOF STEP] assume "weak_formula x" [PROOF STATE] proof (state) this: weak_formula x goal (1 subgoal): 1. P \<equiv>\<cdot> Q [PROOF STEP] then [PROOF STATE] proof (chain) picking this: weak_formula x [PROOF STEP] have "\<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x \<longleftrightarrow> Q \<Turnstile> x" [PROOF STATE] proof (prove) using this: weak_formula x goal (1 subgoal): 1. \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x [PROOF STEP] proof (induct rule: weak_formula.induct) [PROOF STATE] proof (state) goal (4 subgoals): 1. \<And>xset P Q. \<lbrakk>finite (supp xset); \<And>x. x \<in> set_bset xset \<Longrightarrow> weak_formula x; \<And>x P Q. \<lbrakk>x \<in> set_bset xset; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> Conj xset = Q \<Turnstile> Conj xset 2. \<And>x P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> Formula.Not x = Q \<Turnstile> Formula.Not x 3. \<And>x \<alpha> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x 4. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] case (wf_Conj xset) [PROOF STATE] proof (state) this: finite (supp xset) ?x7 \<in> set_bset xset \<Longrightarrow> weak_formula ?x7 \<lbrakk>?x7 \<in> set_bset xset; ?P7 \<approx>\<cdot> ?Q7\<rbrakk> \<Longrightarrow> ?P7 \<Turnstile> ?x7 = ?Q7 \<Turnstile> ?x7 P \<approx>\<cdot> Q goal (4 subgoals): 1. \<And>xset P Q. \<lbrakk>finite (supp xset); \<And>x. x \<in> set_bset xset \<Longrightarrow> weak_formula x; \<And>x P Q. \<lbrakk>x \<in> set_bset xset; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> Conj xset = Q \<Turnstile> Conj xset 2. \<And>x P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> Formula.Not x = Q \<Turnstile> Formula.Not x 3. \<And>x \<alpha> P Q. \<lbrakk>weak_formula x; \<And>P Q. 
P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x 4. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] then [PROOF STATE] proof (chain) picking this: finite (supp xset) ?x7 \<in> set_bset xset \<Longrightarrow> weak_formula ?x7 \<lbrakk>?x7 \<in> set_bset xset; ?P7 \<approx>\<cdot> ?Q7\<rbrakk> \<Longrightarrow> ?P7 \<Turnstile> ?x7 = ?Q7 \<Turnstile> ?x7 P \<approx>\<cdot> Q [PROOF STEP] show ?case [PROOF STATE] proof (prove) using this: finite (supp xset) ?x7 \<in> set_bset xset \<Longrightarrow> weak_formula ?x7 \<lbrakk>?x7 \<in> set_bset xset; ?P7 \<approx>\<cdot> ?Q7\<rbrakk> \<Longrightarrow> ?P7 \<Turnstile> ?x7 = ?Q7 \<Turnstile> ?x7 P \<approx>\<cdot> Q goal (1 subgoal): 1. P \<Turnstile> Conj xset = Q \<Turnstile> Conj xset [PROOF STEP] by simp [PROOF STATE] proof (state) this: P \<Turnstile> Conj xset = Q \<Turnstile> Conj xset goal (3 subgoals): 1. \<And>x P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> Formula.Not x = Q \<Turnstile> Formula.Not x 2. \<And>x \<alpha> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x 3. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] next [PROOF STATE] proof (state) goal (3 subgoals): 1. \<And>x P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> Formula.Not x = Q \<Turnstile> Formula.Not x 2. \<And>x \<alpha> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x 3. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] case (wf_Not x) [PROOF STATE] proof (state) this: weak_formula x ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x P \<approx>\<cdot> Q goal (3 subgoals): 1. 
\<And>x P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> Formula.Not x = Q \<Turnstile> Formula.Not x 2. \<And>x \<alpha> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x 3. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] then [PROOF STATE] proof (chain) picking this: weak_formula x ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x P \<approx>\<cdot> Q [PROOF STEP] show ?case [PROOF STATE] proof (prove) using this: weak_formula x ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x P \<approx>\<cdot> Q goal (1 subgoal): 1. P \<Turnstile> Formula.Not x = Q \<Turnstile> Formula.Not x [PROOF STEP] by simp [PROOF STATE] proof (state) this: P \<Turnstile> Formula.Not x = Q \<Turnstile> Formula.Not x goal (2 subgoals): 1. \<And>x \<alpha> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x 2. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] next [PROOF STATE] proof (state) goal (2 subgoals): 1. \<And>x \<alpha> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x 2. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] case (wf_Act x \<alpha>) [PROOF STATE] proof (state) this: weak_formula x ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x P \<approx>\<cdot> Q goal (2 subgoals): 1. \<And>x \<alpha> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x 2. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. 
P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] then [PROOF STATE] proof (chain) picking this: weak_formula x ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x P \<approx>\<cdot> Q [PROOF STEP] show ?case [PROOF STATE] proof (prove) using this: weak_formula x ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x P \<approx>\<cdot> Q goal (1 subgoal): 1. P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x [PROOF STEP] by (metis weakly_bisimilar_symp weak_bisimilarity_implies_weak_equivalence_Act sympE) [PROOF STATE] proof (state) this: P \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x = Q \<Turnstile> \<langle>\<langle>\<alpha>\<rangle>\<rangle>x goal (1 subgoal): 1. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] next [PROOF STATE] proof (state) goal (1 subgoal): 1. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] case (wf_Pred x \<phi>) [PROOF STATE] proof (state) this: weak_formula x ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x P \<approx>\<cdot> Q goal (1 subgoal): 1. \<And>x \<phi> P Q. \<lbrakk>weak_formula x; \<And>P Q. P \<approx>\<cdot> Q \<Longrightarrow> P \<Turnstile> x = Q \<Turnstile> x; P \<approx>\<cdot> Q\<rbrakk> \<Longrightarrow> P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] then [PROOF STATE] proof (chain) picking this: weak_formula x ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x P \<approx>\<cdot> Q [PROOF STEP] show ?case [PROOF STATE] proof (prove) using this: weak_formula x ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x P \<approx>\<cdot> Q goal (1 subgoal): 1. P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) [PROOF STEP] by (metis weakly_bisimilar_symp weak_bisimilarity_implies_weak_equivalence_Pred sympE) [PROOF STATE] proof (state) this: P \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) = Q \<Turnstile> \<langle>\<langle>\<tau>\<rangle>\<rangle>Conj (binsert (Pred \<phi>) (bsingleton x)) goal: No subgoals! 
[PROOF STEP] qed [PROOF STATE] proof (state) this: ?P7 \<approx>\<cdot> ?Q7 \<Longrightarrow> ?P7 \<Turnstile> x = ?Q7 \<Turnstile> x goal (1 subgoal): 1. P \<equiv>\<cdot> Q [PROOF STEP] } [PROOF STATE] proof (state) this: \<lbrakk>weak_formula ?x10; ?P7 \<approx>\<cdot> ?Q7\<rbrakk> \<Longrightarrow> ?P7 \<Turnstile> ?x10 = ?Q7 \<Turnstile> ?x10 goal (1 subgoal): 1. P \<equiv>\<cdot> Q [PROOF STEP] with assms [PROOF STATE] proof (chain) picking this: P \<approx>\<cdot> Q \<lbrakk>weak_formula ?x10; ?P7 \<approx>\<cdot> ?Q7\<rbrakk> \<Longrightarrow> ?P7 \<Turnstile> ?x10 = ?Q7 \<Turnstile> ?x10 [PROOF STEP] show ?thesis [PROOF STATE] proof (prove) using this: P \<approx>\<cdot> Q \<lbrakk>weak_formula ?x10; ?P7 \<approx>\<cdot> ?Q7\<rbrakk> \<Longrightarrow> ?P7 \<Turnstile> ?x10 = ?Q7 \<Turnstile> ?x10 goal (1 subgoal): 1. P \<equiv>\<cdot> Q [PROOF STEP] unfolding weakly_logically_equivalent_def [PROOF STATE] proof (prove) using this: P \<approx>\<cdot> Q \<lbrakk>weak_formula ?x10; ?P7 \<approx>\<cdot> ?Q7\<rbrakk> \<Longrightarrow> ?P7 \<Turnstile> ?x10 = ?Q7 \<Turnstile> ?x10 goal (1 subgoal): 1. \<forall>x. weak_formula x \<longrightarrow> P \<Turnstile> x = Q \<Turnstile> x [PROOF STEP] by simp [PROOF STATE] proof (state) this: P \<equiv>\<cdot> Q goal: No subgoals! [PROOF STEP] qed
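The theorem proved above says that weakly bisimilar processes satisfy exactly the same weak formulas. As a rough finite illustration of the strong analogue (a purely expository sketch: a hand-built labelled transition system and plain Hennessy–Milner formulas, not the nominal, infinitely-branching development formalized here), one can check that two bisimilar states agree on formulas built from conjunction, negation, and action modalities:

```python
# Toy illustration only: on this finite LTS, the states "p" and "q" are
# bisimilar, so they satisfy the same Hennessy-Milner formulas.
# TRANS, sat, and the tuple encoding of formulas are expository assumptions.
TRANS = {("p", "a"): {"p1"}, ("q", "a"): {"q1"},
         ("p1", "b"): {"p"}, ("q1", "b"): {"q"}}

def sat(s, phi):
    kind = phi[0]
    if kind == "Conj":                     # Conj over a (finite) list here
        return all(sat(s, x) for x in phi[1])
    if kind == "Not":
        return not sat(s, phi[1])
    if kind == "Act":                      # <a>phi: some a-successor satisfies phi
        return any(sat(t, phi[2]) for t in TRANS.get((s, phi[1]), ()))
    raise ValueError(kind)

phi1 = ("Act", "a", ("Act", "b", ("Conj", [])))   # <a><b>true
phi2 = ("Not", ("Act", "b", ("Conj", [])))        # not <b>true
for phi in (phi1, phi2):
    assert sat("p", phi) == sat("q", phi)
```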
||| Implementing `Decidable.Order.Strict` for `Data.Nat.LT` module Data.Nat.Order.Strict import Decidable.Order import Decidable.Order.Strict import Decidable.Equality import Data.Nat import Data.Nat.Order export irreflexiveLTE : (a : Nat) -> Not (a `LT` a) irreflexiveLTE 0 z_lt_z impossible irreflexiveLTE (S a) (LTESucc a_lt_a) = irreflexiveLTE a a_lt_a export StrictPreorder Nat LT where irreflexive = irreflexiveLTE transitive a b c a_lt_b b_lt_c = transitive {po = LTE} (S a) b c a_lt_b (transitive {po = LTE} b (S b) c (lteSuccRight (reflexive b)) b_lt_c) public export decLT : (a, b : Nat) -> DecOrdering {lt = LT} a b decLT 0 0 = DecEQ Refl decLT 0 (S b) = DecLT (LTESucc LTEZero) decLT (S a) 0 = DecGT (LTESucc LTEZero) decLT (S a) (S b) = case decLT a b of DecLT a_lt_b => DecLT (LTESucc a_lt_b) DecEQ Refl => DecEQ Refl DecGT b_lt_a => DecGT (LTESucc b_lt_a) public export StrictOrdered Nat LT where order = decLT
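For readers less familiar with Idris, the decision procedure `decLT` above is a structural recursion that peels one successor off each argument until a base case decides the ordering, rebuilding the proof with `LTESucc` on the way back up. A rough Python analogue (purely expository; `dec_lt` and its string results are assumptions, not part of any library) looks like this:

```python
# Illustrative Python analogue of the structural recursion in decLT above.
# Naturals are modelled as non-negative ints (an assumption for exposition).
def dec_lt(a: int, b: int) -> str:
    if a == 0 and b == 0:
        return "EQ"                  # DecEQ Refl
    if a == 0:
        return "LT"                  # DecLT (LTESucc LTEZero)
    if b == 0:
        return "GT"                  # DecGT (LTESucc LTEZero)
    return dec_lt(a - 1, b - 1)      # LTESucc lifts the sub-result

assert dec_lt(2, 5) == "LT" and dec_lt(3, 3) == "EQ" and dec_lt(7, 1) == "GT"
```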
function rhs_to_args(ex::Expr) indsall = Union{Symbol,Expr}[] symorig, symgend, isconjs = Union{Symbol,Expr}[], Symbol[], Bool[] function exparse(x, isconj) if x isa Number x elseif @capture(x, -(rhs__)) y = exparse.(rhs, isconj) :(-($(y...))) elseif @capture(x, +(rhs__)) y = exparse.(rhs, isconj) :(+($(y...))) elseif @capture(x, *(rhs__)) y = exparse.(rhs, isconj) :(*($(y...))) elseif @capture(x, conj(rhs_)) y = exparse(rhs, !isconj) :(conj($y)) elseif @capture(x, sym_[ind__]) any(x -> isa(x, Integer), ind) && error("NCON style is unsupported") @gensym new push!(symorig, sym) push!(symgend, new) push!(isconjs, isconj) append!(indsall, ind) :($new[$(ind...)]) else @gensym new push!(symorig, x) push!(symgend, new) push!(isconjs, isconj) new end end exreplaced = exparse(ex, false) return exreplaced, symorig, symgend, isconjs, indsall end function make_only_product(ex::Expr, sym::Symbol) hassym(x) = if @capture(x, -(y__) | +(y__) | *(y__)) any(hassym.(y)) elseif @capture(x, conj(y_)) hassym(y) elseif @capture(x, $sym) || @capture(x, $sym[__]) true else false end return MacroTools.postwalk(ex) do x if @capture(x, -(y__)) @assert 1 ≤ length(y) ≤ 2 if length(y) == 1 x elseif hassym(first(y)) first(y) elseif hassym(last(y)) :(-$(last(y))) else x end elseif @capture(x, +(y__)) @assert 1 ≤ length(y) y = filter(hassym, y) @assert length(y) ≤ 1 length(y) == 1 ? first(y) : x else x end end end function make_scalar_first(ex::Expr) return MacroTools.postwalk(ex) do x if @capture(x, *(y__)) tensors = Expr[] notensors = Union{Expr,Symbol,Number}[] for z in y if @capture(z, _[__]) || @capture(z, conj(_[__])) push!(tensors, z) else push!(notensors, z) end end isempty(tensors) && return x if isempty(notensors) return x elseif length(notensors) == 1 return :(*($(notensors[1]), $(tensors...))) else return :(*(*($(notensors...)), $(tensors...))) end else return x end end end
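The Julia functions above walk an expression tree with MacroTools, flattening `+`, `-`, `*`, and `conj` nodes while collecting every tensor reference `sym[ind...]`, tracking conjugation, and replacing each symbol with a gensym. The same tree-walking idea can be sketched in Python with the standard `ast` module (an expository analogue only: it merely collects tensor names and their index lists, with none of the gensym or conjugation bookkeeping, and it assumes Python ≥ 3.9 for the `Subscript.slice` layout):

```python
# Expository analogue of the index-collection part of rhs_to_args above.
# Numeric (NCON-style) indices are simply ignored here rather than rejected.
import ast

def collect_tensors(src: str):
    """Return [(name, [indices...]), ...] for every Name[i, j, ...] in src."""
    tensors = []
    def visit(node):                             # depth-first, source order
        if isinstance(node, ast.Subscript) and isinstance(node.value, ast.Name):
            idx = node.slice                     # Python >= 3.9: the index expr
            elts = idx.elts if isinstance(idx, ast.Tuple) else [idx]
            tensors.append((node.value.id,
                            [e.id for e in elts if isinstance(e, ast.Name)]))
        for child in ast.iter_child_nodes(node):
            visit(child)
    visit(ast.parse(src, mode="eval"))
    return tensors

print(collect_tensors("A[i, j] * B[j, k] + C[i, k]"))
# [('A', ['i', 'j']), ('B', ['j', 'k']), ('C', ['i', 'k'])]
```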
section \<open> Relational Calculus Laws \<close> theory utp_rel_laws imports utp_rel utp_recursion utp_healthy begin section \<open> Conditional Laws \<close> lemma rcond_idem [simp]: "P \<lhd> b \<rhd> P = P" by pred_auto lemma rcond_sym: "P \<lhd> b \<rhd> Q = Q \<lhd> \<not>b \<rhd> P" by pred_auto lemma rcond_assoc: "(P \<lhd> b \<rhd> Q) \<lhd> c \<rhd> R = P \<lhd> b \<and> c \<rhd> (Q \<lhd> c \<rhd> R)" by pred_auto lemma rcond_distr: "P \<lhd> b \<rhd> (Q \<lhd> c \<rhd> R) = (P \<lhd> b \<rhd> Q) \<lhd> c \<rhd> (P \<lhd> b \<rhd> R)" by pred_auto lemma rcond_true [simp]: "P \<lhd> True \<rhd> Q = P" by pred_auto lemma rcond_false [simp]: "P \<lhd> False \<rhd> Q = Q" by pred_auto lemma rcond_reach [simp]: "P \<lhd> b \<rhd> (Q \<lhd> b \<rhd> R) = P \<lhd> b \<rhd> R" by pred_auto lemma rcond_disj [simp]: "P \<lhd> b \<rhd> (P \<lhd> c \<rhd> Q) = P \<lhd> b \<or> c \<rhd> Q" by pred_auto lemma rcomp_rcond_left_distr: "(P \<lhd> b \<rhd> Q) ;; R = (P ;; R) \<lhd> b \<rhd> (Q ;; R) " by (pred_auto) lemma rcond_seq_left_distr: "out\<alpha> \<sharp> b \<Longrightarrow> ((P \<triangleleft> b \<triangleright> Q) ;; R) = ((P ;; R) \<triangleleft> b \<triangleright> (Q ;; R))" by (pred_auto, blast) lemma rcond_seq_right_distr: "in\<alpha> \<sharp> b \<Longrightarrow> (P ;; (Q \<triangleleft> b \<triangleright> R)) = ((P ;; Q) \<triangleleft> b \<triangleright> (P ;; R))" by (pred_auto, blast) text \<open> Alternative expression of conditional using assumptions and choice \<close> lemma rcond_rassume_expand: "P \<lhd> b \<rhd> Q = (\<questiondown>b? ;; P) \<sqinter> (\<questiondown>(\<not> b)? ;; Q)" by pred_auto lemma rcond_mono [mono]: "\<lbrakk> (P\<^sub>1 :: 's pred) \<sqsubseteq> P\<^sub>2; Q\<^sub>1 \<sqsubseteq> Q\<^sub>2 \<rbrakk> \<Longrightarrow> (P\<^sub>1 \<triangleleft> b \<triangleright> Q\<^sub>1) \<sqsubseteq> (P\<^sub>2 \<triangleleft> b \<triangleright> Q\<^sub>2)" by pred_auto lemma rcond_refine: "(P \<sqsubseteq> (Q \<triangleleft> b \<triangleright> R)) = (P \<sqsubseteq> (b \<and> Q)\<^sub>e \<and> (P \<sqsubseteq> ((\<not>b \<and> R)\<^sub>e)))" by pred_auto section \<open> Precondition and Postcondition Laws \<close> theorem precond_equiv: "P = (P ;; true) \<longleftrightarrow> (out\<alpha> \<sharp> P)" by (pred_auto) theorem postcond_equiv: "P = (true ;; P) \<longleftrightarrow> (in\<alpha> \<sharp> P)" by (pred_auto) theorem precond_left_zero: assumes "out\<alpha> \<sharp> p" "p \<noteq> false" shows "(true ;; p) = true" by (pred_auto assms: assms) (*theorem feasibile_iff_true_right_zero: "P ;; true = true \<longleftrightarrow> (\<exists> out\<alpha> \<bullet> P)\<^sub>e" oops*) subsection \<open> Sequential Composition Laws \<close> lemma seqr_assoc: "(P ;; Q) ;; R = P ;; (Q ;; R)" by (pred_auto) lemma seqr_left_unit [simp]: "II ;; P = P" by (pred_auto) lemma seqr_right_unit [simp]: "P ;; II = P" by (pred_auto) lemma seqr_left_zero [simp]: "false ;; P = false" by pred_auto lemma seqr_right_zero [simp]: "P ;; false = false" by pred_auto lemma seqr_mono: "\<lbrakk> P\<^sub>1 \<sqsubseteq> P\<^sub>2; Q\<^sub>1 \<sqsubseteq> Q\<^sub>2 \<rbrakk> \<Longrightarrow> (P\<^sub>1 ;; Q\<^sub>1) \<sqsubseteq> (P\<^sub>2 ;; Q\<^sub>2)" by (pred_auto, blast) lemma mono_seqr [mono]: "\<lbrakk> mono P; mono Q \<rbrakk> \<Longrightarrow> mono (\<lambda> X. P X ;; Q X)" by (pred_auto add: mono_def, blast) lemma cond_seqr_mono [mono]: "mono (\<lambda>X. (P ;; X) \<lhd> b \<rhd> II)" by (pred_auto add: mono_def) lemma mono_seqr_tail: assumes "mono F" shows "mono (\<lambda> X. 
P ;; F(X))" by (pred_auto assms: assms add: mono_def) lemma seqr_liberate_left: "vwb_lens x \<Longrightarrow> ((P \\ $x\<^sup><) ;; Q) = ((P ;; Q) \\ $x\<^sup><)" by (pred_auto) lemma seqr_liberate_right: "vwb_lens x \<Longrightarrow> P ;; Q \\ $x\<^sup>> = (P ;; Q) \\ $x\<^sup>>" by pred_auto lemma seqr_or_distl: "((P \<or> Q) ;; R) = ((P ;; R) \<or> (Q ;; R))" by (pred_auto) lemma seqr_or_distr: "(P ;; (Q \<or> R)) = ((P ;; Q) \<or> (P ;; R))" by (pred_auto) lemma seqr_and_distr_ufunc: "Functional P \<Longrightarrow> (P ;; (Q \<and> R)) = ((P ;; Q) \<and> (P ;; R))" by (rel_auto, metis single_valuedD) lemma seqr_and_distl_uinj: "Injective R \<Longrightarrow> ((P \<and> Q) ;; R) = ((P ;; R) \<and> (Q ;; R))" by (rel_auto, auto simp add: injective_def) lemma seqr_unfold: "(P ;;\<^sub>h Q) = (\<exists> v. P\<lbrakk>\<guillemotleft>v\<guillemotright>/\<^bold>v\<^sup>>\<rbrakk> \<and> Q\<lbrakk>\<guillemotleft>v\<guillemotright>/\<^bold>v\<^sup><\<rbrakk>)\<^sub>e" by pred_auto lemma seqr_unfold_heterogeneous: "(P ;; Q) = (\<exists> v. (pre(P\<lbrakk>\<guillemotleft>v\<guillemotright>/\<^bold>v\<^sup>>\<rbrakk>))\<^sup>< \<and> (post(Q\<lbrakk>\<guillemotleft>v\<guillemotright>/\<^bold>v\<^sup><\<rbrakk>))\<^sup>>)\<^sub>e" by pred_auto lemma seqr_middle: "vwb_lens x \<Longrightarrow> P ;; Q = (\<Sqinter> v. P\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<^sup>>\<rbrakk> ;; Q\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<^sup><\<rbrakk>)" by (pred_auto, metis vwb_lens.put_eq) lemma seqr_left_one_point: assumes "vwb_lens x" shows "(P \<and> ($x\<^sup>> = \<guillemotleft>v\<guillemotright>)\<^sub>e) ;; Q = P\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<^sup>>\<rbrakk> ;; Q\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<^sup><\<rbrakk>" by (pred_auto assms: assms, metis vwb_lens_wb wb_lens.get_put) lemma seqr_right_one_point: assumes "vwb_lens x" shows "P ;; (($x\<^sup>< = \<guillemotleft>v\<guillemotright>)\<^sub>e \<and> Q) = P\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<^sup>>\<rbrakk> ;; Q\<lbrakk>\<guillemotleft>v\<guillemotright>/x\<^sup><\<rbrakk>" using assms by (pred_auto, metis vwb_lens_wb wb_lens.get_put) lemma seqr_left_one_point_true: assumes "vwb_lens x" shows "(P \<and> ($x\<^sup>>)\<^sub>e) ;; Q = P\<lbrakk>True/x\<^sup>>\<rbrakk> ;; Q\<lbrakk>True/x\<^sup><\<rbrakk>" using assms by (pred_auto, metis (full_types) vwb_lens_wb wb_lens.get_put) lemma seqr_left_one_point_false: assumes "vwb_lens x" shows "((P \<and> \<not>($x\<^sup>>)\<^sub>e) ;; Q) = (P\<lbrakk>False/x\<^sup>>\<rbrakk> ;; Q\<lbrakk>False/x\<^sup><\<rbrakk>)" using assms by (pred_auto, metis (full_types) vwb_lens_wb wb_lens.get_put) lemma seqr_right_one_point_true: assumes "vwb_lens x" shows "(P ;; (($x\<^sup><)\<^sub>e \<and> Q)) = (P\<lbrakk>True/x\<^sup>>\<rbrakk> ;; Q\<lbrakk>True/x\<^sup><\<rbrakk>)" using assms by (pred_auto, metis (full_types) vwb_lens_wb wb_lens.get_put) lemma seqr_right_one_point_false: assumes "vwb_lens x" shows "(P ;; (\<not>($x\<^sup><)\<^sub>e \<and> Q)) = (P\<lbrakk>False/x\<^sup>>\<rbrakk> ;; Q\<lbrakk>False/x\<^sup><\<rbrakk>)" using assms by (pred_auto, metis (full_types) vwb_lens_wb wb_lens.get_put) lemma seqr_insert_ident_left: assumes "vwb_lens x" "$x\<^sup>> \<sharp> P" "$x\<^sup>< \<sharp> Q" shows "((($x\<^sup>> = $x\<^sup><)\<^sub>e \<and> P) ;; Q) = (P ;; Q)" by (pred_auto assms: assms, meson vwb_lens_def wb_lens_weak weak_lens.put_get) lemma seqr_insert_ident_right: assumes "vwb_lens x" "$x\<^sup>> \<sharp> P" "$x\<^sup>< \<sharp> Q" shows "(P ;; (($x\<^sup>> = 
$x\<^sup><)\<^sub>e \<and> Q)) = (P ;; Q)" by (pred_auto assms: assms, metis (no_types, opaque_lifting) vwb_lens_def wb_lens_def weak_lens.put_get) lemma seq_var_ident_lift: assumes "vwb_lens x" "$x\<^sup>> \<sharp> P" "$x\<^sup>< \<sharp> Q" shows "((($x\<^sup>> = $x\<^sup><)\<^sub>e \<and> P) ;; (($x\<^sup>> = $x\<^sup><)\<^sub>e \<and> Q)) = (($x\<^sup>> = $x\<^sup><)\<^sub>e \<and> (P ;; Q))" by (pred_auto assms: assms, metis (no_types, lifting) vwb_lens_wb wb_lens_weak weak_lens.put_get) lemma seqr_bool_split: assumes "vwb_lens x" shows "P ;; Q = (P\<lbrakk>True/x\<^sup>>\<rbrakk> ;; Q\<lbrakk>True/x\<^sup><\<rbrakk> \<or> P\<lbrakk>False/x\<^sup>>\<rbrakk> ;; Q\<lbrakk>False/x\<^sup><\<rbrakk>)" using assms apply (subst seqr_middle[of x], simp_all) apply pred_auto apply (metis (full_types)) done lemma cond_inter_var_split: assumes "vwb_lens x" shows "(P \<triangleleft> $x\<^sup>> \<triangleright> Q) ;; R = (P\<lbrakk>True/x\<^sup>>\<rbrakk> ;; R\<lbrakk>True/x\<^sup><\<rbrakk> \<or> Q\<lbrakk>False/x\<^sup>>\<rbrakk> ;; R\<lbrakk>False/x\<^sup><\<rbrakk>)" proof - have "(P \<triangleleft> $x\<^sup>> \<triangleright> Q) ;; R = (((x\<^sup>>)\<^sub>e \<and> P) ;; R \<or> (\<not> (x\<^sup>>)\<^sub>e \<and> Q) ;; R)" by pred_auto also have "... = ((P \<and> (x\<^sup>>)\<^sub>e) ;; R \<or> (Q \<and> \<not>(x\<^sup>>)\<^sub>e) ;; R)" by (pred_auto) also have "... = (P\<lbrakk>True/x\<^sup>>\<rbrakk> ;; R\<lbrakk>True/x\<^sup><\<rbrakk> \<or> Q\<lbrakk>False/x\<^sup>>\<rbrakk> ;; R\<lbrakk>False/x\<^sup><\<rbrakk>)" apply (pred_auto add: seqr_left_one_point_true seqr_left_one_point_false assms) by (metis (full_types) assms vwb_lens_wb wb_lens.get_put)+ finally show ?thesis . qed theorem seqr_pre_transfer: "in\<alpha> \<sharp> q \<Longrightarrow> ((P \<and> q) ;; R) = (P ;; (q\<^sup>- \<and> R))" by pred_auto theorem seqr_pre_transfer': "((P \<and> (q\<^sup>>)\<^sub>e) ;; R) = (P ;; ((q\<^sup><)\<^sub>e \<and> R))" by (pred_auto) theorem seqr_post_out: "in\<alpha> \<sharp> r \<Longrightarrow> (P ;; (Q \<and> r)) = ((P ;; Q) \<and> r)" by (pred_auto) lemma seqr_post_var_out: shows "(P ;; (Q \<and> (x\<^sup>>)\<^sub>e)) = ((P ;; Q) \<and> (x\<^sup>>)\<^sub>e)" by (pred_auto) theorem seqr_post_transfer: "out\<alpha> \<sharp> q \<Longrightarrow> (P ;; (q \<and> R)) = ((P \<and> q\<^sup>-) ;; R)" by (pred_auto) lemma seqr_pre_out: "out\<alpha> \<sharp> p \<Longrightarrow> ((p \<and> Q) ;; R) = (p \<and> (Q ;; R))" by (pred_auto) lemma seqr_pre_var_out: shows "(((x\<^sup><)\<^sub>e \<and> P) ;; Q) = ((x\<^sup><)\<^sub>e \<and> (P ;; Q))" by (pred_auto) lemma seqr_true_lemma: "(P = (\<not> ((\<not> P) ;; true))) = (P = (P ;; true))" by (pred_auto) lemma seqr_to_conj: "\<lbrakk> out\<alpha> \<sharp> P; in\<alpha> \<sharp> Q \<rbrakk> \<Longrightarrow> (P ;; Q) = (P \<and> Q)" by (pred_auto; blast) lemma liberate_seq_unfold: "vwb_lens x \<Longrightarrow> $x \<sharp> Q \<Longrightarrow> (P \\ $x) ;; Q = (P ;; Q) \\ $x" apply (pred_auto) oops (* the following laws are like HOL.ex_simps but for lenses *) named_theorems "pred_ex_simps" lemma ex_lens_split_conj_unrest [pred_ex_simps]: assumes "$x \<sharp> Q" "mwb_lens x" shows "(\<exists> x \<Zspot> P \<and> Q) = ((\<exists> x \<Zspot> P) \<and> Q)" using assms by pred_auto lemma ex_lens_split_conj_unrest2 [pred_ex_simps]: assumes "$x \<sharp> P" "mwb_lens x" shows "(\<exists> x \<Zspot> P \<and> Q) = (P \<and> (\<exists> x \<Zspot> Q))" using assms by pred_auto lemma ex_lens_split_disj_unrest [pred_ex_simps]: assumes "$x \<sharp> Q" "mwb_lens x" shows 
"(\<exists> x \<Zspot> P \<or> Q) = ((\<exists> x \<Zspot> P) \<or> Q)" using assms by pred_auto lemma ex_lens_split_disj_unrest2 [pred_ex_simps]: assumes "$x \<sharp> P" "mwb_lens x" shows "(\<exists> x \<Zspot> P \<or> Q) = (P \<or> (\<exists> x \<Zspot> Q))" using assms by pred_auto lemma ex_lens_split_impl_unrest [pred_ex_simps]: assumes "$x \<sharp> Q" "mwb_lens x" shows "(\<exists> x \<Zspot> P \<longrightarrow> Q) = ((\<forall> x \<Zspot> P) \<longrightarrow> Q)" using assms by pred_auto lemma ex_lens_split_impl_unrest2 [pred_ex_simps]: assumes "$x \<sharp> P" "mwb_lens x" shows "(\<exists> x \<Zspot> P \<longrightarrow> Q) = (P \<longrightarrow> (\<exists> x \<Zspot> Q))" using assms by pred_auto (* lemma shEx_mem_lift_seq_1 [uquant_lift]: assumes "out\<alpha> \<sharp> A" shows "((\<^bold>\<exists> x \<in> A \<bullet> P x) ;; Q) = (\<^bold>\<exists> x \<in> A \<bullet> (P x ;; Q))" using assms by rel_blast lemma shEx_lift_seq_2 [uquant_lift]: "(P ;; (\<^bold>\<exists> x \<bullet> Q x)) = (\<^bold>\<exists> x \<bullet> (P ;; Q x))" by pred_auto lemma shEx_mem_lift_seq_2 [uquant_lift]: assumes "in\<alpha> \<sharp> A" shows "(P ;; (\<^bold>\<exists> x \<in> A \<bullet> Q x)) = (\<^bold>\<exists> x \<in> A \<bullet> (P ;; Q x))" using assms by rel_blast*) subsection \<open> Iterated Sequential Composition Laws \<close> lemma iter_seqr_nil [simp]: "(;; i : [] \<bullet> P(i)) = II" by (simp add: seqr_iter_def) lemma iter_seqr_cons [simp]: "(;; i : (x # xs) \<bullet> P(i)) = P(x) ;; (;; i : xs \<bullet> P(i))" by (simp add: seqr_iter_def) subsection \<open> Quantale Laws \<close> lemma seq_Sup_distl: "P ;; (\<Sqinter> A) = (\<Sqinter> Q\<in>A. P ;; Q)" by pred_auto lemma seq_Sup_distr: "(\<Sqinter> A) ;; Q = (\<Sqinter> P\<in>A. P ;; Q)" by pred_auto lemma seq_SUP_distl: "P ;; (\<Sqinter> Q\<in>A. F(Q)) = (\<Sqinter> Q\<in>A. P ;; F(Q))" by pred_auto lemma seq_SUP_distr: "(\<Sqinter> P\<in>A. F(P)) ;; Q = (\<Sqinter> P\<in>A. 
F(P) ;; Q)" by pred_auto subsection \<open> Skip Laws \<close> lemma cond_skip: "out\<alpha> \<sharp> b \<Longrightarrow> (b \<and> II) = (II \<and> b\<^sup>-)" by (pred_auto) lemma pre_skip_post: "((b\<^sup><)\<^sub>e \<and> II) = (II \<and> (b\<^sup>>)\<^sub>e)" by (pred_auto) lemma skip_var: fixes x :: "(bool \<Longrightarrow> '\<alpha>)" shows "(($x\<^sup><)\<^sub>e \<and> II) = (II \<and> ($x\<^sup>>)\<^sub>e)" by (pred_auto) (* text \<open>Liberate currently doesn't work on relations - it expects a lens of type 'a instead of 'a \<times> 'a\<close> lemma skip_r_unfold: "vwb_lens x \<Longrightarrow> II = (($x\<^sup>> = $x\<^sup><)\<^sub>e \<and> II \\ $x)" by (rel_simp, metis mwb_lens.put_put vwb_lens_mwb vwb_lens_wb wb_lens.get_put) *) lemma skip_r_alpha_eq: "II = (\<^bold>v\<^sup>< = \<^bold>v\<^sup>>)\<^sub>e" by (pred_auto) (* lemma skip_ra_unfold: "II\<^bsub>x;y\<^esub> = ($x\<acute> =\<^sub>u $x \<and> II\<^bsub>y\<^esub>)" by (pred_auto) lemma skip_res_as_ra: "\<lbrakk> vwb_lens y; x +\<^sub>L y \<approx>\<^sub>L 1\<^sub>L; x \<bowtie> y \<rbrakk> \<Longrightarrow> II\<restriction>\<^sub>\<alpha>x = II\<^bsub>y\<^esub>" apply (pred_auto) apply (metis (no_types, lifting) lens_indep_def) apply (metis vwb_lens.put_eq) done *) subsection \<open> Assignment Laws \<close> lemma assigns_skip: "\<langle>id\<rangle>\<^sub>a = II" by pred_auto lemma assigns_comp: "\<langle>\<sigma>\<rangle>\<^sub>a ;; \<langle>\<rho>\<rangle>\<^sub>a = \<langle>\<rho> \<circ>\<^sub>s \<sigma>\<rangle>\<^sub>a" by pred_auto lemma assigns_cond: "\<langle>\<sigma>\<rangle>\<^sub>a \<lhd> b \<rhd> \<langle>\<rho>\<rangle>\<^sub>a = \<langle>\<sigma> \<triangleleft> b \<triangleright> \<rho>\<rangle>\<^sub>a" by pred_auto text \<open>Extend the alphabet of a substitution\<close> lemma assigns_subst: "(subst_aext \<sigma> fst\<^sub>L) \<dagger> \<langle>\<rho>\<rangle>\<^sub>a = \<langle>\<rho> \<circ>\<^sub>s \<sigma>\<rangle>\<^sub>a" by pred_auto lemma assigns_r_comp: "(\<langle>\<sigma>\<rangle>\<^sub>a ;; P) = ((\<lambda> s. put\<^bsub>fst\<^sub>L\<^esub> s (\<sigma> (get\<^bsub>fst\<^sub>L\<^esub> s))) \<dagger> P)" by (pred_auto) lemma assigns_r_feasible: "(\<langle>\<sigma>\<rangle>\<^sub>a ;; true) = true" by (pred_auto) lemma assign_subst [usubst]: "\<lbrakk> vwb_lens x; vwb_lens y \<rbrakk> \<Longrightarrow> [x\<^sup>> \<leadsto> u\<^sup><] \<dagger> (y := v) = (y, x) := ([x \<leadsto> u] \<dagger> v, u)" apply pred_auto oops lemma assign_vacuous_skip: assumes "vwb_lens x" shows "(x := $x) = II" using assms by pred_auto text \<open> The following law shows the case for the above law when $x$ is only mainly-well behaved. We require that the state is one of those in which $x$ is well defined using and assumption. 
\<close> (* lemma assign_vacuous_assume: assumes "mwb_lens x" shows "[&\<^bold>v \<in> \<guillemotleft>\<S>\<^bsub>x\<^esub>\<guillemotright>]\<^sup>\<top> ;; (x := &x) = [&\<^bold>v \<in> \<guillemotleft>\<S>\<^bsub>x\<^esub>\<guillemotright>]\<^sup>\<top>" using assms by pred_auto *) lemma assign_simultaneous: assumes "vwb_lens y" "x \<bowtie> y" shows "(x,y) := (e, $y) = (x := e)" using assms by pred_auto lemma assigns_idem: "mwb_lens x \<Longrightarrow> (x,x) := (v,u) = (x := v)" by pred_auto lemma assigns_r_conv: "bij f \<Longrightarrow> \<langle>f\<rangle>\<^sub>a\<^sup>- = \<langle>inv f\<rangle>\<^sub>a" by (pred_auto, simp_all add: bij_is_inj bij_is_surj surj_f_inv_f) lemma assign_pred_transfer: assumes "$x\<^sup>< \<sharp> b" "out\<alpha> \<sharp> b" shows "(b \<and> x := v) = (x := v \<and> b\<^sup>-)" using assms apply pred_auto oops lemma assign_r_comp: "x := u ;; P = P\<lbrakk>u\<^sup></x\<^sup><\<rbrakk>" by pred_auto lemma assign_test: "mwb_lens x \<Longrightarrow> (x := \<guillemotleft>u\<guillemotright> ;; x := \<guillemotleft>v\<guillemotright>) = (x := \<guillemotleft>v\<guillemotright>)" by pred_auto lemma assign_twice: "\<lbrakk> mwb_lens x; $x \<sharp> f \<rbrakk> \<Longrightarrow> (x := e ;; x := f) = (x := f)" by pred_auto lemma assign_commute: assumes "x \<bowtie> y" "$x \<sharp> f" "$y \<sharp> e" "vwb_lens x" "vwb_lens y" shows "(x := e ;; y := f) = (y := f ;; x := e)" using assms by (pred_auto add: lens_indep_comm) lemma assign_cond: "(x := e ;; (P \<lhd> b \<rhd> Q)) = ((x := e ;; P) \<lhd> b\<lbrakk>e/x\<rbrakk> \<rhd> (x := e ;; Q))" by pred_auto lemma assign_rcond: "(x := e ;; (P \<lhd> b \<rhd> Q)) = ((x := e ;; P) \<lhd> (b\<lbrakk>e/x\<rbrakk>) \<rhd> (x := e ;; Q))" by (pred_auto) lemma assign_r_alt_def: fixes x :: "('a \<Longrightarrow> '\<alpha>)" shows "x := v = II\<lbrakk>v\<^sup></x\<^sup><\<rbrakk>" by (pred_auto) lemma assigns_r_func: "Functional \<langle>f\<rangle>\<^sub>a" unfolding Functional_def assigns_r_def single_valued_def pred_rel_def by simp lemma assigns_r_injective: "inj f \<Longrightarrow> Injective \<langle>f\<rangle>\<^sub>a" unfolding Injective_def pred_rel_def injective_def apply auto apply (metis Functional_def assigns_r_func pred_rel_def) apply (simp add: assigns_r_def injD) done (* lemma assigns_r_swap_uinj: "\<lbrakk> vwb_lens x; vwb_lens y; x \<bowtie> y \<rbrakk> \<Longrightarrow> (x,y) := (y,x)" by (metis assigns_r_uinj pr_var_def swap_usubst_inj) lemma assign_unfold: "vwb_lens x \<Longrightarrow> (x := v) = (x\<^sup>> = v\<^sup><)" apply (pred_auto, auto simp add: comp_def) using vwb_lens.put_eq by fastforce*) subsection \<open> Non-deterministic Assignment Laws \<close> (* lemma ndet_assign_comp: "x \<bowtie> y \<Longrightarrow> x := * ;; y := * = (x,y) := *" by (pred_auto add: lens_indep.lens_put_comm) lemma ndet_assign_assign: "\<lbrakk> vwb_lens x; $x \<sharp> e \<rbrakk> \<Longrightarrow> x := * ;; x := e = x := e" by pred_auto lemma ndet_assign_refine: "x := * \<sqsubseteq> x := e" by pred_auto *) subsection \<open> Converse Laws \<close> lemma convr_invol [simp]: "p\<^sup>-\<^sup>- = p" by pred_auto (* lemma lit_convr [simp]: "(\<guillemotleft>v\<guillemotright>)\<^sup>- = \<guillemotleft>v\<guillemotright>" by pred_auto lemma uivar_convr [simp]: fixes x :: "('a \<Longrightarrow> '\<alpha>)" shows "($x\<^sup><)\<^sup>- = $x\<^sup>>" by pred_auto lemma uovar_convr [simp]: fixes x :: "('a \<Longrightarrow> '\<alpha>)" shows "($x\<acute>)\<^sup>- = $x" by pred_auto lemma uop_convr [simp]: "(uop f u)\<^sup>- = uop f 
(u\<^sup>-)" by (pred_auto) lemma bop_convr [simp]: "(bop f u v)\<^sup>- = bop f (u\<^sup>-) (v\<^sup>-)" by (pred_auto) lemma eq_convr [simp]: "(p = q)\<^sup>- = (p\<^sup>- = q\<^sup>-)" by (pred_auto) *) lemma not_convr [simp]: "(\<not> p)\<^sup>- = (\<not> p\<^sup>-)" by (pred_auto) lemma disj_convr [simp]: "(p \<or> q)\<^sup>- = (q\<^sup>- \<or> p\<^sup>-)" by (pred_auto) lemma conj_convr [simp]: "(p \<and> q)\<^sup>- = (q\<^sup>- \<and> p\<^sup>-)" by (pred_auto) lemma seqr_convr [simp]: "(p ;; q)\<^sup>- = (q\<^sup>- ;; p\<^sup>-)" by (pred_auto) (* lemma pre_convr [simp]: "\<lceil>p\<rceil>\<^sub><\<^sup>- = \<lceil>p\<rceil>\<^sub>>" by (pred_auto) lemma post_convr [simp]: "\<lceil>p\<rceil>\<^sub>>\<^sup>- = \<lceil>p\<rceil>\<^sub><" by (pred_auto)*) subsection \<open> Assertion and Assumption Laws \<close> (* declare sublens_def [lens_defs del] lemma assume_false: "\<questiondown>false? = false" by (pred_auto) lemma assume_true: "\<questiondown>true? = II" by (pred_auto) lemma assume_seq: "\<questiondown>b? ;; \<questiondown>c? = \<questiondown>b \<and> c?" by (pred_auto) (* lemma assert_false: "{false}\<^sub>\<bottom> = true" by (pred_auto) lemma assert_true: "{true}\<^sub>\<bottom> = II" by (pred_auto) lemma assert_seq: "{b}\<^sub>\<bottom> ;; {c}\<^sub>\<bottom> = {(b \<and> c)}\<^sub>\<bottom>" by (pred_auto)*) subsection \<open> While Loop Laws \<close> theorem while_unfold: "while b do P od = ((P ;; while b do P od) \<lhd> b \<rhd> II)" proof - have m:"mono (\<lambda>X. (P ;; X) \<lhd> b \<rhd> II)" unfolding mono_def by (meson equalityE rcond_mono ref_by_def relcomp_mono) have "(while b do P od) = (\<nu> X \<bullet> (P ;; X) \<lhd> b \<rhd> II)" by (simp add: while_top_def) also have "... = ((P ;; (\<nu> X \<bullet> (P ;; X) \<lhd> b \<rhd> II)) \<lhd> b \<rhd> II)" by (subst lfp_unfold, simp_all add: m) also have "... = ((P ;; while b do P od) \<lhd> b \<rhd> II)" by (simp add: while_top_def) finally show ?thesis . qed theorem while_false: "while (false)\<^sub>e do P od = II" by (subst while_unfold, pred_auto) theorem while_true: "while (true)\<^sub>e do P od = false" apply (simp add: while_top_def alpha) apply (rule antisym) apply (rule lfp_lowerbound) apply (pred_auto)+ done theorem while_bot_unfold: "while\<^sub>\<bottom> b do P od = ((P ;; while\<^sub>\<bottom> b do P od) \<lhd> b \<rhd> II)" proof - have m:"mono (\<lambda>X. (P ;; X) \<lhd> b \<rhd> II)" unfolding mono_def by (meson equalityE rcond_mono ref_by_def relcomp_mono) have "(while\<^sub>\<bottom> b do P od) = (\<mu> X \<bullet> (P ;; X) \<lhd> b \<rhd> II)" by (simp add: while_bot_def) also have "... = ((P ;; (\<mu> X \<bullet> (P ;; X) \<lhd> b \<rhd> II)) \<lhd> b \<rhd> II)" by (subst gfp_unfold, simp_all add: m) also have "... = ((P ;; while\<^sub>\<bottom> b do P od) \<lhd> b \<rhd> II)" by (simp add: while_bot_def) finally show ?thesis . qed theorem while_bot_false: "while\<^sub>\<bottom> (false)\<^sub>e do P od = II" by (pred_auto add: while_bot_def gfp_const) theorem while_bot_true: "while\<^sub>\<bottom> (true)\<^sub>e do P od = (\<mu> X \<bullet> P ;; X)" by (pred_auto add: while_bot_def) text \<open> An infinite loop with a feasible body corresponds to a program error (non-termination). 
\<close> theorem while_infinite: "P ;; true = true \<Longrightarrow> while\<^sub>\<bottom> (true)\<^sub>e do P od = true" apply (rule antisym) apply (simp add: true_pred_def) apply (pred_auto add: gfp_upperbound while_bot_true) done subsection \<open> Algebraic Properties \<close> interpretation upred_semiring: semiring_1 where times = "(;;)" and one = Id and zero = false_pred and plus = "Lattices.sup" by (unfold_locales; pred_auto+) declare upred_semiring.power_Suc [simp del] text \<open> We introduce the power syntax derived from semirings \<close> text \<open> Set up transfer tactic for powers \<close> unbundle utp_lattice_syntax lemma Sup_power_expand: fixes P :: "nat \<Rightarrow> 'a::complete_lattice" shows "P(0) \<sqinter> (\<Sqinter>i. P(i+1)) = (\<Sqinter>i. P(i))" proof - have "UNIV = insert (0::nat) {1..}" by auto moreover have "(\<Sqinter>i. P(i)) = \<Sqinter> (P ` UNIV)" by (blast) moreover have "\<Sqinter> (P ` insert 0 {1..}) = P(0) \<sqinter> \<Sqinter> (P ` {1..})" by (simp) moreover have "\<Sqinter> (P ` {1..}) = (\<Sqinter>i. P(i+1))" sorry ultimately show ?thesis by simp qed *) (* lemma Sup_upto_Suc: "(\<Sqinter>i\<in>{0..Suc n}. P \<^bold>^ i) = (\<Sqinter>i\<in>{0..n}. P \<^bold>^ i) \<sqinter> P \<^bold>^ Suc n" proof - have "(\<Sqinter>i\<in>{0..Suc n}. P \<^bold>^ i) = (\<Sqinter>i\<in>insert (Suc n) {0..n}. P \<^bold>^ i)" by (simp add: atLeast0_atMost_Suc) also have "... = P \<^bold>^ Suc n \<sqinter> (\<Sqinter>i\<in>{0..n}. P \<^bold>^ i)" by (simp) finally show ?thesis by (simp add: Lattices.sup_commute) qed *) subsection \<open> Relational Power \<close> lemma upower_interp [rel]: "\<lbrakk>P \<^bold>^ i\<rbrakk>\<^sub>U = \<lbrakk>P\<rbrakk>\<^sub>U ^^ i" by (induct i arbitrary: P) ((auto; pred_auto add: pred_rel_def), simp add: rel_interp(1) upred_semiring.power_Suc2) lemma Sup_power_expand: fixes P :: "nat \<Rightarrow> 'a::complete_lattice" shows "P(0) \<sqinter> (\<Sqinter>i. P(i+1)) = (\<Sqinter>i. P(i))" proof - have "UNIV = insert (0::nat) {1..}" by auto moreover have "(\<Sqinter>i. P(i)) = \<Sqinter> (P ` UNIV)" by (blast) moreover have "\<Sqinter> (P ` insert 0 {1..}) = P(0) \<sqinter> \<Sqinter> (P ` {1..})" by (simp) moreover have "\<Sqinter> (P ` {1..}) = (\<Sqinter>i. P(i+1))" by (simp add: atLeast_Suc_greaterThan greaterThan_0 image_image) ultimately show ?thesis by (simp only:) qed lemma Sup_upto_Suc: "(\<Sqinter>i\<in>{0..Suc n}. P \<^bold>^ i) = (\<Sqinter>i\<in>{0..n}. P \<^bold>^ i) \<sqinter> P \<^bold>^ Suc n" proof - have "(\<Sqinter>i\<in>{0..Suc n}. P \<^bold>^ i) = (\<Sqinter>i\<in>insert (Suc n) {0..n}. P \<^bold>^ i)" by (simp add: atLeast0_atMost_Suc) also have "... = P \<^bold>^ Suc n \<sqinter> (\<Sqinter>i\<in>{0..n}. P \<^bold>^ i)" by (simp) finally show ?thesis by (simp add: Lattices.sup_commute) qed text \<open> The following two proofs are adapted from the AFP entry \href{https://www.isa-afp.org/entries/Kleene_Algebra.shtml}{Kleene Algebra}. See also~\cite{Armstrong2012,Armstrong2015}. 
\<close> thm rel_transfer thm inf_fun_def lemma upower_inductl: assumes "Q \<sqsubseteq> ((P ;; Q) \<sqinter> R)" shows "Q \<sqsubseteq> P \<^bold>^ n ;; R" proof (induct n) case 0 with assms show ?case by rel_simp next case (Suc n) with assms show ?case by (rel_transfer) (metis (no_types, lifting) O_assoc le_iff_sup relcomp_distrib relpow.simps(2) relpow_commute sup.coboundedI1) qed lemma upower_inductr: assumes "Q \<sqsubseteq> R \<sqinter> (Q ;; P)" shows "Q \<sqsubseteq> R ;; (P \<^bold>^ n)" proof (induct n) case 0 with assms show ?case by auto next case (Suc n) with assms show ?case by (rel_transfer) (metis (no_types, lifting) O_assoc le_supE relcomp_distrib2 relpow.simps(2) sup.order_iff) qed lemma SUP_atLeastAtMost_first: fixes P :: "nat \<Rightarrow> 'a::complete_lattice" assumes "m \<le> n" shows "(\<Sqinter>i\<in>{m..n}. P(i)) = P(m) \<sqinter> (\<Sqinter>i\<in>{Suc m..n}. P(i))" by (metis SUP_insert assms atLeastAtMost_insertL) lemma upower_seqr_iter: "P \<^bold>^ n = (;; Q : replicate n P \<bullet> Q)" by (induct n, simp_all add: power.power.power_Suc) subsection \<open> Omega \<close> (* definition uomega :: "'\<alpha> rel \<Rightarrow> '\<alpha> rel" ("_\<^sup>\<omega>" [999] 999) where "P\<^sup>\<omega> = (\<mu> X \<bullet> P ;; X)" subsection \<open> Relation Algebra Laws \<close> theorem seqr_disj_cancel: "((P\<^sup>- ;; (\<not>(P ;; Q))) \<or> (\<not>Q)) = (\<not>Q)" by (pred_auto) *) subsection \<open> Kleene Algebra Laws \<close> lemma ustar_rep_eq [rel]: "\<lbrakk>P\<^sup>\<star>\<rbrakk>\<^sub>U = \<lbrakk>P\<rbrakk>\<^sub>U\<^sup>*" proof have "((a, b) \<in> \<lbrakk>P\<rbrakk>\<^sub>U\<^sup>*) \<Longrightarrow> (a,b) \<in> \<lbrakk>P\<^sup>\<star>\<rbrakk>\<^sub>U" for a b apply (induct rule: rtrancl.induct) apply (simp_all add: pred_rel_def ustar_def) apply (metis (full_types) power.power.power_0 prod.simps(2) skip_def) by (metis (mono_tags, lifting) case_prodI upred_semiring.power_Suc2 utp_rel.seq_def) then show "\<lbrakk>P\<rbrakk>\<^sub>U\<^sup>* \<subseteq> \<lbrakk>P\<^sup>\<star>\<rbrakk>\<^sub>U" by auto next have "((a, b) \<in> \<lbrakk>P\<^sup>\<star>\<rbrakk>\<^sub>U) \<Longrightarrow> (a,b) \<in> \<lbrakk>P\<rbrakk>\<^sub>U\<^sup>*" for a b apply (simp add: ustar_def pred_rel_def) by (metis mem_Collect_eq pred_rel_def rtrancl_power upower_interp) then show "\<lbrakk>P\<^sup>\<star>\<rbrakk>\<^sub>U \<subseteq> \<lbrakk>P\<rbrakk>\<^sub>U\<^sup>*" by force qed theorem ustar_sub_unfoldl: "P\<^sup>\<star> \<sqsubseteq> II \<sqinter> (P;;P\<^sup>\<star>)" by rel_auto theorem ustar_inductl: assumes "Q \<sqsubseteq> R" "Q \<sqsubseteq> P ;; Q" shows "Q \<sqsubseteq> P\<^sup>\<star> ;; R" proof - have "P\<^sup>\<star> ;; R = (\<Sqinter> i. P \<^bold>^ i ;; R)" by (simp add: seq_SUP_distr ustar_def) also have "Q \<sqsubseteq> ..." by (simp add: assms ref_lattice.INF_greatest upower_inductl) finally show ?thesis . qed theorem ustar_inductr: assumes "Q \<sqsubseteq> R" "Q \<sqsubseteq> Q ;; P" shows "Q \<sqsubseteq> R ;; P\<^sup>\<star>" proof - have "R ;; P\<^sup>\<star> = (\<Sqinter> i. R ;; P \<^bold>^ i)" by (simp add: seq_SUP_distl ustar_def) also have "Q \<sqsubseteq> ..." by (simp add: assms ref_lattice.INF_greatest upower_inductr) finally show ?thesis . 
qed lemma ustar_refines_nu: "(\<nu> X \<bullet> (P ;; X) \<sqinter> II) \<sqsubseteq> P\<^sup>\<star>" proof (rule ustar_inductl[where R="II", simplified]) show "(\<nu> X \<bullet> (P ;; X) \<sqinter> II) \<sqsubseteq> II" by (simp add: ref_by_pred_is_leq, metis le_supE lfp_greatest) show "(\<nu> X \<bullet> (P ;; X) \<sqinter> II) \<sqsubseteq> P ;; (\<nu> X \<bullet> (P ;; X) \<sqinter> II)" (is "?lhs \<sqsubseteq> ?rhs") proof - have "?lhs = (P ;; (\<nu> X \<bullet> (P ;; X) \<sqinter> II)) \<sqinter> II" by (rule lfp_unfold, simp add: mono) also have "... \<sqsubseteq> ?rhs" by simp finally show ?thesis . qed qed lemma ustar_as_nu: "P\<^sup>\<star> = (\<nu> X \<bullet> (P ;; X) \<sqinter> II)" proof (rule ref_antisym) show "(\<nu> X \<bullet> (P ;; X) \<sqinter> II) \<sqsubseteq> P\<^sup>\<star>" by (simp add: ustar_refines_nu) show "P\<^sup>\<star> \<sqsubseteq> (\<nu> X \<bullet> (P ;; X) \<sqinter> II)" by (metis lfp_lowerbound pred_ref_iff_le sup_commute ustar_sub_unfoldl) qed lemma ustar_unfoldl: "P\<^sup>\<star> = II \<sqinter> (P ;; P\<^sup>\<star>)" by (rel_auto, meson converse_rtranclE) text \<open> While loop can be expressed using Kleene star \<close> (* lemma while_star_form: "while b do P od = (P \<lhd> b \<rhd> II)\<^sup>\<star> ;; \<questiondown>(\<not>b)?" proof - have 1: "Continuous (\<lambda>X. P ;; X \<lhd> b \<rhd> II)" by (pred_auto) have "while b do P od = (\<Sqinter>i. ((\<lambda>X. P ;; X \<lhd> b \<rhd> II) ^^ i) false)" by (simp add: "1" sup_continuous_Continuous sup_continuous_lfp while_top_def top_false) also have "... = ((\<lambda>X. P ;; X \<lhd> b \<rhd> II) ^^ 0) false \<sqinter> (\<Sqinter>i. ((\<lambda>X. P ;; X \<lhd> b \<rhd> II) ^^ (i+1)) false)" by (subst Sup_power_expand, simp) also have "... = (\<Sqinter>i. ((\<lambda>X. P ;; X \<lhd> b \<rhd> II) ^^ (i+1)) false)" by (simp) also have "... = (\<Sqinter>i. (P \<lhd> b \<rhd> II)\<^bold>^i ;; (false \<lhd> b \<rhd> II))" proof (rule SUP_cong, simp_all) fix i show "P ;; ((\<lambda>X. P ;; X \<lhd> b \<rhd> II) ^^ i) false \<lhd> b \<rhd> II = (P \<lhd> b \<rhd> II) \<^bold>^ i ;; (false \<lhd> b \<rhd> II)" proof (induct i) case 0 then show ?case by simp next case (Suc i) then show ?case apply (simp only: upred_semiring.power_Suc) qed qed also have "... = (\<Sqinter>i\<in>{0..} \<bullet> (P \<lhd> b \<rhd> II)\<^bold>^i ;; [(\<not>b)]\<^sup>\<top>)" by (pred_auto) also have "... = (P \<lhd> b \<rhd> II)\<^sup>\<star> ;; [(\<not>b)]\<^sup>\<top>" by (metis seq_UINF_distr ustar_def) finally show ?thesis . 
qed *) subsection \<open> Omega Algebra Laws \<close> (* lemma uomega_induct: "P ;; P\<^sup>\<omega> \<sqsubseteq> P\<^sup>\<omega>" by (metis gfp_unfold monoI ref_by_set_def relcomp_mono subset_refl uomega_def) subsection \<open> Refinement Laws \<close> lemma skip_r_refine: "(p \<Rightarrow> p) \<sqsubseteq> II" by pred_auto lemma conj_refine_left: "(Q \<Rightarrow> P) \<sqsubseteq> R \<Longrightarrow> P \<sqsubseteq> (Q \<and> R)" by (pred_auto) lemma pre_weak_rel: assumes "`p \<longrightarrow> I`" and "(I \<longrightarrow> q)\<^sub>e \<sqsubseteq> P" shows "(p \<longrightarrow> q)\<^sub>e \<sqsubseteq> P" using assms by(pred_auto) *) (* lemma cond_refine_rel: assumes "S \<sqsubseteq> (\<lceil>b\<rceil>\<^sub>< \<and> P)" "S \<sqsubseteq> (\<lceil>\<not>b\<rceil>\<^sub>< \<and> Q)" shows "S \<sqsubseteq> P \<lhd> b \<rhd> Q" by (metis aext_not assms(1) assms(2) cond_def lift_rcond_def utp_pred_laws.le_sup_iff) lemma seq_refine_pred: assumes "(\<lceil>b\<rceil>\<^sub>< \<Rightarrow> \<lceil>s\<rceil>\<^sub>>) \<sqsubseteq> P" and "(\<lceil>s\<rceil>\<^sub>< \<Rightarrow> \<lceil>c\<rceil>\<^sub>>) \<sqsubseteq> Q" shows "(\<lceil>b\<rceil>\<^sub>< \<Rightarrow> \<lceil>c\<rceil>\<^sub>>) \<sqsubseteq> (P ;; Q)" using assms by pred_auto lemma seq_refine_unrest: assumes "out\<alpha> \<sharp> b" "in\<alpha> \<sharp> c" assumes "(b \<Rightarrow> \<lceil>s\<rceil>\<^sub>>) \<sqsubseteq> P" and "(\<lceil>s\<rceil>\<^sub>< \<Rightarrow> c) \<sqsubseteq> Q" shows "(b \<Rightarrow> c) \<sqsubseteq> (P ;; Q)" using assms by rel_blast subsection \<open> Preain and Postge Laws \<close> named_theorems prepost lemma Pre_conv_Post [prepost]: "Pre(P\<^sup>-) = Post(P)" by (pred_auto) lemma Post_conv_Pre [prepost]: "Post(P\<^sup>-) = Pre(P)" by (pred_auto) lemma Pre_skip [prepost]: "Pre(II) = true" by (pred_auto) lemma Pre_assigns [prepost]: "Pre(\<langle>\<sigma>\<rangle>\<^sub>a) = true" by (pred_auto) lemma Pre_miracle [prepost]: "Pre(false) = false" by (pred_auto) lemma Pre_assume [prepost]: "Pre([b]\<^sup>\<top>) = b" by (pred_auto) lemma Pre_)seq: "Pre(P ;; Q) = Pre(P ;; [Pre(Q)]\<^sup>\<top>)" by (pred_auto) lemma Pre_disj [prepost]: "Pre(P \<or> Q) = (Pre(P) \<or> Pre(Q))" by (pred_auto) lemma Pre_inf [prepost]: "Pre(P \<sqinter> Q) = (Pre(P) \<or> Pre(Q))" by (pred_auto) text \<open> If P uses on the variables in @{term a} and @{term Q} does not refer to the variables of @{term "$a\<acute>"} then we can distribute. \<close> lemma Pre_conj_indep [prepost]: "\<lbrakk> {$a,$a\<acute>} \<natural> P; $a\<acute> \<sharp> Q; vwb_lens a \<rbrakk> \<Longrightarrow> Pre(P \<and> Q) = (Pre(P) \<and> Pre(Q))" by (pred_auto, metis lens_override_def lens_override_idem) lemma assume_Pre [prepost]: "[Pre(P)]\<^sup>\<top> ;; P = P" by (pred_auto) *) end
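Most of the laws in this theory have direct counterparts in the concrete model where a program is a binary relation on states and `;;` is relational composition. As a quick, informal sanity check (an expository sketch, entirely independent of the Isabelle/UTP mechanization), one can test `seqr_assoc` together with the skip-unit laws `seqr_left_unit` and `seqr_right_unit` on random finite relations:

```python
# Finite-relation sanity check for seqr_assoc, seqr_left_unit, seqr_right_unit.
# Programs are sets of (before, after) state pairs; this model is an expository
# assumption, not the Isabelle/UTP development above.
from itertools import product
import random

STATES = range(3)
PAIRS = list(product(STATES, STATES))

def seq(p, q):                        # P ;; Q, relational composition
    return {(a, c) for (a, b) in p for (b2, c) in q if b == b2}

II = {(s, s) for s in STATES}         # skip, the relational identity

random.seed(0)
sample = [set(random.sample(PAIRS, 4)) for _ in range(20)]
for P in sample:
    assert seq(II, P) == P == seq(P, II)        # seqr_left_unit / seqr_right_unit
    for Q in sample:
        for R in sample[:5]:
            assert seq(seq(P, Q), R) == seq(P, seq(Q, R))   # seqr_assoc
print("unit and associativity laws hold on the sampled relations")
```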
[STATEMENT] lemma gfun_at_nongposs [simp]: "p \<notin> gposs t \<Longrightarrow> gfun_at t p = None" [PROOF STATE] proof (prove) goal (1 subgoal): 1. p \<notin> gposs t \<Longrightarrow> gfun_at t p = None [PROOF STEP] using gfun_at_glabel[of "the \<circ> gfun_at t" "gdomain t" p, unfolded glabel_map_gterm_conv] [PROOF STATE] proof (prove) using this: gfun_at (map_gterm (the \<circ> Some) t) p = (if p \<in> gposs (gdomain t) then Some ((the \<circ> gfun_at t) p) else None) goal (1 subgoal): 1. p \<notin> gposs t \<Longrightarrow> gfun_at t p = None [PROOF STEP] by (simp add: comp_def option.map_ident)
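The lemma above states that `gfun_at` returns `None` at any position outside the set of positions of a ground term. The same behaviour is easy to see in a toy first-order term representation (a purely expository Python sketch; `fun_at` and the tuple encoding are assumptions, not the Isabelle formalization):

```python
# Toy version of the lemma above: looking up the symbol at a position outside
# a term's positions yields None. A term is (symbol, [subterms]); a position
# is a tuple of child indices, as in the gposs/gfun_at setting.
def fun_at(term, pos):
    if not pos:
        return term[0]                       # root symbol at the empty position
    i, rest = pos[0], pos[1:]
    return fun_at(term[1][i], rest) if i < len(term[1]) else None

t = ("f", [("a", []), ("g", [("b", [])])])
assert fun_at(t, (1, 0)) == "b"              # position inside the term
assert fun_at(t, (0, 5)) is None             # position outside gposs(t)
```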
/- Copyright (c) 2021 Rémy Degenne. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Rémy Degenne -/ import analysis.normed_space.dual import measure_theory.function.strongly_measurable import measure_theory.integral.set_integral /-! # From equality of integrals to equality of functions This file provides various statements of the general form "if two functions have the same integral on all sets, then they are equal almost everywhere". The different lemmas use various hypotheses on the class of functions, on the target space or on the possible finiteness of the measure. ## Main statements All results listed below apply to two functions `f, g`, together with two main hypotheses, * `f` and `g` are integrable on all measurable sets with finite measure, * for all measurable sets `s` with finite measure, `∫ x in s, f x ∂μ = ∫ x in s, g x ∂μ`. The conclusion is then `f =ᵐ[μ] g`. The main lemmas are: * `ae_eq_of_forall_set_integral_eq_of_sigma_finite`: case of a sigma-finite measure. * `ae_fin_strongly_measurable.ae_eq_of_forall_set_integral_eq`: for functions which are `ae_fin_strongly_measurable`. * `Lp.ae_eq_of_forall_set_integral_eq`: for elements of `Lp`, for `0 < p < ∞`. * `integrable.ae_eq_of_forall_set_integral_eq`: for integrable functions. For each of these results, we also provide a lemma about the equality of one function and 0. For example, `Lp.ae_eq_zero_of_forall_set_integral_eq_zero`. We also register the corresponding lemma for integrals of `ℝ≥0∞`-valued functions, in `ae_eq_of_forall_set_lintegral_eq_of_sigma_finite`. Generally useful lemmas which are not related to integrals: * `ae_eq_zero_of_forall_inner`: if for all constants `c`, `λ x, inner c (f x) =ᵐ[μ] 0` then `f =ᵐ[μ] 0`. * `ae_eq_zero_of_forall_dual`: if for all constants `c` in the dual space, `λ x, c (f x) =ᵐ[μ] 0` then `f =ᵐ[μ] 0`. -/ open measure_theory topological_space normed_space filter open_locale ennreal nnreal measure_theory namespace measure_theory section ae_eq_of_forall variables {α E 𝕜 : Type*} {m : measurable_space α} {μ : measure α} [is_R_or_C 𝕜] lemma ae_eq_zero_of_forall_inner [inner_product_space 𝕜 E] [second_countable_topology E] {f : α → E} (hf : ∀ c : E, (λ x, (inner c (f x) : 𝕜)) =ᵐ[μ] 0) : f =ᵐ[μ] 0 := begin let s := dense_seq E, have hs : dense_range s := dense_range_dense_seq E, have hf' : ∀ᵐ x ∂μ, ∀ n : ℕ, inner (s n) (f x) = (0 : 𝕜), from ae_all_iff.mpr (λ n, hf (s n)), refine hf'.mono (λ x hx, _), rw [pi.zero_apply, ← inner_self_eq_zero], have h_closed : is_closed {c : E | inner c (f x) = (0 : 𝕜)}, from is_closed_eq (continuous_id.inner continuous_const) continuous_const, exact @is_closed_property ℕ E _ s (λ c, inner c (f x) = (0 : 𝕜)) hs h_closed (λ n, hx n) _, end local notation `⟪`x`, `y`⟫` := y x variables (𝕜) lemma ae_eq_zero_of_forall_dual [normed_group E] [normed_space 𝕜 E] [second_countable_topology E] {f : α → E} (hf : ∀ c : dual 𝕜 E, (λ x, ⟪f x, c⟫) =ᵐ[μ] 0) : f =ᵐ[μ] 0 := begin let u := dense_seq E, have hu : dense_range u := dense_range_dense_seq _, have : ∀ n, ∃ g : E →L[𝕜] 𝕜, ∥g∥ ≤ 1 ∧ g (u n) = ∥u n∥ := λ n, exists_dual_vector'' 𝕜 (u n), choose s hs using this, have A : ∀ (a : E), (∀ n, ⟪a, s n⟫ = (0 : 𝕜)) → a = 0, { assume a ha, contrapose! 
ha, have a_pos : 0 < ∥a∥, by simp only [ha, norm_pos_iff, ne.def, not_false_iff], have a_mem : a ∈ closure (set.range u), by simp [hu.closure_range], obtain ⟨n, hn⟩ : ∃ (n : ℕ), dist a (u n) < ∥a∥ / 2 := metric.mem_closure_range_iff.1 a_mem (∥a∥/2) (half_pos a_pos), use n, have I : ∥a∥/2 < ∥u n∥, { have : ∥a∥ ≤ ∥u n∥ + ∥a - u n∥ := norm_le_insert' _ _, have : ∥a - u n∥ < ∥a∥/2, by rwa dist_eq_norm at hn, linarith }, assume h, apply lt_irrefl (∥s n (u n)∥), calc ∥s n (u n)∥ = ∥s n (u n - a)∥ : by simp only [h, sub_zero, continuous_linear_map.map_sub] ... ≤ 1 * ∥u n - a∥ : continuous_linear_map.le_of_op_norm_le _ (hs n).1 _ ... < ∥a∥ / 2 : by { rw [one_mul], rwa dist_eq_norm' at hn } ... < ∥u n∥ : I ... = ∥s n (u n)∥ : by rw [(hs n).2, is_R_or_C.norm_coe_norm] }, have hfs : ∀ n : ℕ, ∀ᵐ x ∂μ, ⟪f x, s n⟫ = (0 : 𝕜), from λ n, hf (s n), have hf' : ∀ᵐ x ∂μ, ∀ n : ℕ, ⟪f x, s n⟫ = (0 : 𝕜), by rwa ae_all_iff, exact hf'.mono (λ x hx, A (f x) hx), end variables {𝕜} end ae_eq_of_forall variables {α E : Type*} {m m0 : measurable_space α} {μ : measure α} {s t : set α} [normed_group E] [normed_space ℝ E] [measurable_space E] [borel_space E] [second_countable_topology E] [complete_space E] {p : ℝ≥0∞} section ae_eq_of_forall_set_integral_eq lemma ae_const_le_iff_forall_lt_measure_zero {β} [linear_order β] [topological_space β] [order_topology β] [first_countable_topology β] (f : α → β) (c : β) : (∀ᵐ x ∂μ, c ≤ f x) ↔ ∀ b < c, μ {x | f x ≤ b} = 0 := begin rw ae_iff, push_neg, split, { assume h b hb, exact measure_mono_null (λ y hy, (lt_of_le_of_lt hy hb : _)) h }, assume hc, by_cases h : ∀ b, c ≤ b, { have : {a : α | f a < c} = ∅, { apply set.eq_empty_iff_forall_not_mem.2 (λ x hx, _), exact (lt_irrefl _ (lt_of_lt_of_le hx (h (f x)))).elim }, simp [this] }, by_cases H : ¬ (is_lub (set.Iio c) c), { have : c ∈ upper_bounds (set.Iio c) := λ y hy, le_of_lt hy, obtain ⟨b, b_up, bc⟩ : ∃ (b : β), b ∈ upper_bounds (set.Iio c) ∧ b < c, by simpa [is_lub, is_least, this, lower_bounds] using H, exact measure_mono_null (λ x hx, b_up hx) (hc b bc) }, push_neg at H h, obtain ⟨u, u_mono, u_lt, u_lim, -⟩ : ∃ (u : ℕ → β), strict_mono u ∧ (∀ (n : ℕ), u n < c) ∧ tendsto u at_top (nhds c) ∧ ∀ (n : ℕ), u n ∈ set.Iio c := H.exists_seq_strict_mono_tendsto_of_not_mem (lt_irrefl c) h, have h_Union : {x | f x < c} = ⋃ (n : ℕ), {x | f x ≤ u n}, { ext1 x, simp_rw [set.mem_Union, set.mem_set_of_eq], split; intro h, { obtain ⟨n, hn⟩ := ((tendsto_order.1 u_lim).1 _ h).exists, exact ⟨n, hn.le⟩ }, { obtain ⟨n, hn⟩ := h, exact hn.trans_lt (u_lt _), }, }, rw [h_Union, measure_Union_null_iff], assume n, exact hc _ (u_lt n), end section ennreal open_locale topological_space lemma ae_le_of_forall_set_lintegral_le_of_sigma_finite [sigma_finite μ] {f g : α → ℝ≥0∞} (hf : measurable f) (hg : measurable g) (h : ∀ s, measurable_set s → μ s < ∞ → ∫⁻ x in s, f x ∂μ ≤ ∫⁻ x in s, g x ∂μ) : f ≤ᵐ[μ] g := begin have A : ∀ (ε N : ℝ≥0) (p : ℕ), 0 < ε → μ ({x | g x + ε ≤ f x ∧ g x ≤ N} ∩ spanning_sets μ p) = 0, { assume ε N p εpos, let s := {x | g x + ε ≤ f x ∧ g x ≤ N} ∩ spanning_sets μ p, have s_meas : measurable_set s, { have A : measurable_set {x | g x + ε ≤ f x} := measurable_set_le (hg.add measurable_const) hf, have B : measurable_set {x | g x ≤ N} := measurable_set_le hg measurable_const, exact (A.inter B).inter (measurable_spanning_sets μ p) }, have s_lt_top : μ s < ∞ := (measure_mono (set.inter_subset_right _ _)).trans_lt (measure_spanning_sets_lt_top μ p), have A : ∫⁻ x in s, g x ∂μ + ε * μ s ≤ ∫⁻ x in s, g x ∂μ + 0 := calc ∫⁻ x in s, g x ∂μ + ε * μ s = 
∫⁻ x in s, g x ∂μ + ∫⁻ x in s, ε ∂μ : by simp only [lintegral_const, set.univ_inter, measurable_set.univ, measure.restrict_apply] ... = ∫⁻ x in s, (g x + ε) ∂μ : (lintegral_add hg measurable_const).symm ... ≤ ∫⁻ x in s, f x ∂μ : set_lintegral_mono (hg.add measurable_const) hf (λ x hx, hx.1.1) ... ≤ ∫⁻ x in s, g x ∂μ + 0 : by { rw [add_zero], exact h s s_meas s_lt_top }, have B : ∫⁻ x in s, g x ∂μ ≠ ∞, { apply ne_of_lt, calc ∫⁻ x in s, g x ∂μ ≤ ∫⁻ x in s, N ∂μ : set_lintegral_mono hg measurable_const (λ x hx, hx.1.2) ... = N * μ s : by simp only [lintegral_const, set.univ_inter, measurable_set.univ, measure.restrict_apply] ... < ∞ : by simp only [lt_top_iff_ne_top, s_lt_top.ne, and_false, ennreal.coe_ne_top, with_top.mul_eq_top_iff, ne.def, not_false_iff, false_and, or_self] }, have : (ε : ℝ≥0∞) * μ s ≤ 0 := ennreal.le_of_add_le_add_left B A, simpa only [ennreal.coe_eq_zero, nonpos_iff_eq_zero, mul_eq_zero, εpos.ne', false_or] }, obtain ⟨u, u_mono, u_pos, u_lim⟩ : ∃ (u : ℕ → ℝ≥0), strict_anti u ∧ (∀ n, 0 < u n) ∧ tendsto u at_top (nhds 0) := exists_seq_strict_anti_tendsto (0 : ℝ≥0), let s := λ (n : ℕ), {x | g x + u n ≤ f x ∧ g x ≤ (n : ℝ≥0)} ∩ spanning_sets μ n, have μs : ∀ n, μ (s n) = 0 := λ n, A _ _ _ (u_pos n), have B : {x | f x ≤ g x}ᶜ ⊆ ⋃ n, s n, { assume x hx, simp at hx, have L1 : ∀ᶠ n in at_top, g x + u n ≤ f x, { have : tendsto (λ n, g x + u n) at_top (𝓝 (g x + (0 : ℝ≥0))) := tendsto_const_nhds.add (ennreal.tendsto_coe.2 u_lim), simp at this, exact eventually_le_of_tendsto_lt hx this }, have L2 : ∀ᶠ (n : ℕ) in (at_top : filter ℕ), g x ≤ (n : ℝ≥0), { have : tendsto (λ (n : ℕ), ((n : ℝ≥0) : ℝ≥0∞)) at_top (𝓝 ∞), { simp only [ennreal.coe_nat], exact ennreal.tendsto_nat_nhds_top }, exact eventually_ge_of_tendsto_gt (hx.trans_le le_top) this }, apply set.mem_Union.2, exact ((L1.and L2).and (eventually_mem_spanning_sets μ x)).exists }, refine le_antisymm _ bot_le, calc μ {x : α | (λ (x : α), f x ≤ g x) x}ᶜ ≤ μ (⋃ n, s n) : measure_mono B ... ≤ ∑' n, μ (s n) : measure_Union_le _ ... = 0 : by simp only [μs, tsum_zero] end lemma ae_eq_of_forall_set_lintegral_eq_of_sigma_finite [sigma_finite μ] {f g : α → ℝ≥0∞} (hf : measurable f) (hg : measurable g) (h : ∀ s, measurable_set s → μ s < ∞ → ∫⁻ x in s, f x ∂μ = ∫⁻ x in s, g x ∂μ) : f =ᵐ[μ] g := begin have A : f ≤ᵐ[μ] g := ae_le_of_forall_set_lintegral_le_of_sigma_finite hf hg (λ s hs h's, le_of_eq (h s hs h's)), have B : g ≤ᵐ[μ] f := ae_le_of_forall_set_lintegral_le_of_sigma_finite hg hf (λ s hs h's, ge_of_eq (h s hs h's)), filter_upwards [A, B], exact λ x, le_antisymm end end ennreal section real section real_finite_measure variables [is_finite_measure μ] {f : α → ℝ} /-- Don't use this lemma. Use `ae_nonneg_of_forall_set_integral_nonneg_of_finite_measure`. 
-/ lemma ae_nonneg_of_forall_set_integral_nonneg_of_finite_measure_of_measurable (hfm : measurable f) (hf : integrable f μ) (hf_zero : ∀ s, measurable_set s → 0 ≤ ∫ x in s, f x ∂μ) : 0 ≤ᵐ[μ] f := begin simp_rw [eventually_le, pi.zero_apply], rw ae_const_le_iff_forall_lt_measure_zero, intros b hb_neg, let s := {x | f x ≤ b}, have hs : measurable_set s, from measurable_set_le hfm measurable_const, have h_int_gt : ∫ x in s, f x ∂μ ≤ b * (μ s).to_real, { have h_const_le : ∫ x in s, f x ∂μ ≤ ∫ x in s, b ∂μ, { refine set_integral_mono_ae_restrict hf.integrable_on (integrable_on_const.mpr (or.inr (measure_lt_top μ s))) _, rw [eventually_le, ae_restrict_iff hs], exact eventually_of_forall (λ x hxs, hxs), }, rwa [set_integral_const, smul_eq_mul, mul_comm] at h_const_le, }, by_contra, refine (lt_self_iff_false (∫ x in s, f x ∂μ)).mp (h_int_gt.trans_lt _), refine (mul_neg_iff.mpr (or.inr ⟨hb_neg, _⟩)).trans_le _, swap, { simp_rw measure.restrict_restrict hs, exact hf_zero s hs, }, refine (ennreal.to_real_nonneg).lt_of_ne (λ h_eq, h _), cases (ennreal.to_real_eq_zero_iff _).mp h_eq.symm with hμs_eq_zero hμs_eq_top, { exact hμs_eq_zero, }, { exact absurd hμs_eq_top (measure_lt_top μ s).ne, }, end lemma ae_nonneg_of_forall_set_integral_nonneg_of_finite_measure (hf : integrable f μ) (hf_zero : ∀ s, measurable_set s → 0 ≤ ∫ x in s, f x ∂μ) : 0 ≤ᵐ[μ] f := begin rcases hf.1 with ⟨f', hf'_meas, hf_ae⟩, have hf'_integrable : integrable f' μ, from integrable.congr hf hf_ae, have hf'_zero : ∀ s, measurable_set s → 0 ≤ ∫ x in s, f' x ∂μ, { intros s hs, rw set_integral_congr_ae hs (hf_ae.mono (λ x hx hxs, hx.symm)), exact hf_zero s hs, }, exact (ae_nonneg_of_forall_set_integral_nonneg_of_finite_measure_of_measurable hf'_meas hf'_integrable hf'_zero).trans hf_ae.symm.le, end end real_finite_measure lemma ae_nonneg_restrict_of_forall_set_integral_nonneg_inter {f : α → ℝ} {t : set α} (hμt : μ t ≠ ∞) (hf : integrable_on f t μ) (hf_zero : ∀ s, measurable_set s → 0 ≤ ∫ x in (s ∩ t), f x ∂μ) : 0 ≤ᵐ[μ.restrict t] f := begin haveI : fact (μ t < ∞) := ⟨lt_top_iff_ne_top.mpr hμt⟩, refine ae_nonneg_of_forall_set_integral_nonneg_of_finite_measure hf (λ s hs, _), simp_rw measure.restrict_restrict hs, exact hf_zero s hs, end lemma ae_nonneg_of_forall_set_integral_nonneg_of_sigma_finite [sigma_finite μ] {f : α → ℝ} (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hf_zero : ∀ s, measurable_set s → μ s < ∞ → 0 ≤ ∫ x in s, f x ∂μ) : 0 ≤ᵐ[μ] f := begin apply ae_of_forall_measure_lt_top_ae_restrict, assume t t_meas t_lt_top, apply ae_nonneg_restrict_of_forall_set_integral_nonneg_inter t_lt_top.ne (hf_int_finite t t_meas t_lt_top), assume s s_meas, exact hf_zero _ (s_meas.inter t_meas) (lt_of_le_of_lt (measure_mono (set.inter_subset_right _ _)) t_lt_top) end lemma ae_fin_strongly_measurable.ae_nonneg_of_forall_set_integral_nonneg {f : α → ℝ} (hf : ae_fin_strongly_measurable f μ) (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hf_zero : ∀ s, measurable_set s → μ s < ∞ → 0 ≤ ∫ x in s, f x ∂μ) : 0 ≤ᵐ[μ] f := begin let t := hf.sigma_finite_set, suffices : 0 ≤ᵐ[μ.restrict t] f, from ae_of_ae_restrict_of_ae_restrict_compl this hf.ae_eq_zero_compl.symm.le, haveI : sigma_finite (μ.restrict t) := hf.sigma_finite_restrict, refine ae_nonneg_of_forall_set_integral_nonneg_of_sigma_finite (λ s hs hμts, _) (λ s hs hμts, _), { rw [integrable_on, measure.restrict_restrict hs], rw measure.restrict_apply hs at hμts, exact hf_int_finite (s ∩ t) (hs.inter hf.measurable_set) hμts, }, { rw 
measure.restrict_restrict hs, rw measure.restrict_apply hs at hμts, exact hf_zero (s ∩ t) (hs.inter hf.measurable_set) hμts, }, end lemma integrable.ae_nonneg_of_forall_set_integral_nonneg {f : α → ℝ} (hf : integrable f μ) (hf_zero : ∀ s, measurable_set s → μ s < ∞ → 0 ≤ ∫ x in s, f x ∂μ) : 0 ≤ᵐ[μ] f := ae_fin_strongly_measurable.ae_nonneg_of_forall_set_integral_nonneg hf.ae_fin_strongly_measurable (λ s hs hμs, hf.integrable_on) hf_zero lemma ae_nonneg_restrict_of_forall_set_integral_nonneg {f : α → ℝ} (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hf_zero : ∀ s, measurable_set s → μ s < ∞ → 0 ≤ ∫ x in s, f x ∂μ) {t : set α} (ht : measurable_set t) (hμt : μ t ≠ ∞) : 0 ≤ᵐ[μ.restrict t] f := begin refine ae_nonneg_restrict_of_forall_set_integral_nonneg_inter hμt (hf_int_finite t ht (lt_top_iff_ne_top.mpr hμt)) (λ s hs, _), refine (hf_zero (s ∩ t) (hs.inter ht) _), exact (measure_mono (set.inter_subset_right s t)).trans_lt (lt_top_iff_ne_top.mpr hμt), end lemma ae_eq_zero_restrict_of_forall_set_integral_eq_zero_real {f : α → ℝ} (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hf_zero : ∀ s, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = 0) {t : set α} (ht : measurable_set t) (hμt : μ t ≠ ∞) : f =ᵐ[μ.restrict t] 0 := begin suffices h_and : f ≤ᵐ[μ.restrict t] 0 ∧ 0 ≤ᵐ[μ.restrict t] f, from h_and.1.mp (h_and.2.mono (λ x hx1 hx2, le_antisymm hx2 hx1)), refine ⟨_, ae_nonneg_restrict_of_forall_set_integral_nonneg hf_int_finite (λ s hs hμs, (hf_zero s hs hμs).symm.le) ht hμt⟩, suffices h_neg : 0 ≤ᵐ[μ.restrict t] -f, { refine h_neg.mono (λ x hx, _), rw pi.neg_apply at hx, simpa using hx, }, refine ae_nonneg_restrict_of_forall_set_integral_nonneg (λ s hs hμs, (hf_int_finite s hs hμs).neg) (λ s hs hμs, _) ht hμt, simp_rw pi.neg_apply, rw [integral_neg, neg_nonneg], exact (hf_zero s hs hμs).le, end end real lemma ae_eq_zero_restrict_of_forall_set_integral_eq_zero {f : α → E} (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hf_zero : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = 0) {t : set α} (ht : measurable_set t) (hμt : μ t ≠ ∞) : f =ᵐ[μ.restrict t] 0 := begin refine ae_eq_zero_of_forall_dual ℝ (λ c, _), refine ae_eq_zero_restrict_of_forall_set_integral_eq_zero_real _ _ ht hμt, { assume s hs hμs, exact continuous_linear_map.integrable_comp c (hf_int_finite s hs hμs) }, { assume s hs hμs, rw [continuous_linear_map.integral_comp_comm c (hf_int_finite s hs hμs), hf_zero s hs hμs], exact continuous_linear_map.map_zero _ } end lemma ae_eq_restrict_of_forall_set_integral_eq {f g : α → E} (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hg_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on g s μ) (hfg_zero : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = ∫ x in s, g x ∂μ) {t : set α} (ht : measurable_set t) (hμt : μ t ≠ ∞) : f =ᵐ[μ.restrict t] g := begin rw ← sub_ae_eq_zero, have hfg' : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, (f - g) x ∂μ = 0, { intros s hs hμs, rw integral_sub' (hf_int_finite s hs hμs) (hg_int_finite s hs hμs), exact sub_eq_zero.mpr (hfg_zero s hs hμs), }, have hfg_int : ∀ s, measurable_set s → μ s < ∞ → integrable_on (f-g) s μ, from λ s hs hμs, (hf_int_finite s hs hμs).sub (hg_int_finite s hs hμs), exact ae_eq_zero_restrict_of_forall_set_integral_eq_zero hfg_int hfg' ht hμt, end lemma ae_eq_zero_of_forall_set_integral_eq_of_sigma_finite [sigma_finite μ] {f : α → E} (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) 
(hf_zero : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = 0) : f =ᵐ[μ] 0 := begin let S := spanning_sets μ, rw [← @measure.restrict_univ _ _ μ, ← Union_spanning_sets μ, eventually_eq, ae_iff, measure.restrict_apply' (measurable_set.Union (measurable_spanning_sets μ))], rw [set.inter_Union, measure_Union_null_iff], intro n, have h_meas_n : measurable_set (S n), from (measurable_spanning_sets μ n), have hμn : μ (S n) < ∞, from measure_spanning_sets_lt_top μ n, rw ← measure.restrict_apply' h_meas_n, exact ae_eq_zero_restrict_of_forall_set_integral_eq_zero hf_int_finite hf_zero h_meas_n hμn.ne, end lemma ae_eq_of_forall_set_integral_eq_of_sigma_finite [sigma_finite μ] {f g : α → E} (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hg_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on g s μ) (hfg_eq : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = ∫ x in s, g x ∂μ) : f =ᵐ[μ] g := begin rw ← sub_ae_eq_zero, have hfg : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, (f - g) x ∂μ = 0, { intros s hs hμs, rw [integral_sub' (hf_int_finite s hs hμs) (hg_int_finite s hs hμs), sub_eq_zero.mpr (hfg_eq s hs hμs)], }, have hfg_int : ∀ s, measurable_set s → μ s < ∞ → integrable_on (f-g) s μ, from λ s hs hμs, (hf_int_finite s hs hμs).sub (hg_int_finite s hs hμs), exact ae_eq_zero_of_forall_set_integral_eq_of_sigma_finite hfg_int hfg, end lemma ae_fin_strongly_measurable.ae_eq_zero_of_forall_set_integral_eq_zero {f : α → E} (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hf_zero : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = 0) (hf : ae_fin_strongly_measurable f μ) : f =ᵐ[μ] 0 := begin let t := hf.sigma_finite_set, suffices : f =ᵐ[μ.restrict t] 0, from ae_of_ae_restrict_of_ae_restrict_compl this hf.ae_eq_zero_compl, haveI : sigma_finite (μ.restrict t) := hf.sigma_finite_restrict, refine ae_eq_zero_of_forall_set_integral_eq_of_sigma_finite _ _, { intros s hs hμs, rw [integrable_on, measure.restrict_restrict hs], rw [measure.restrict_apply hs] at hμs, exact hf_int_finite _ (hs.inter hf.measurable_set) hμs, }, { intros s hs hμs, rw [measure.restrict_restrict hs], rw [measure.restrict_apply hs] at hμs, exact hf_zero _ (hs.inter hf.measurable_set) hμs, }, end lemma ae_fin_strongly_measurable.ae_eq_of_forall_set_integral_eq {f g : α → E} (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hg_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on g s μ) (hfg_eq : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = ∫ x in s, g x ∂μ) (hf : ae_fin_strongly_measurable f μ) (hg : ae_fin_strongly_measurable g μ) : f =ᵐ[μ] g := begin rw ← sub_ae_eq_zero, have hfg : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, (f - g) x ∂μ = 0, { intros s hs hμs, rw [integral_sub' (hf_int_finite s hs hμs) (hg_int_finite s hs hμs), sub_eq_zero.mpr (hfg_eq s hs hμs)], }, have hfg_int : ∀ s, measurable_set s → μ s < ∞ → integrable_on (f-g) s μ, from λ s hs hμs, (hf_int_finite s hs hμs).sub (hg_int_finite s hs hμs), exact (hf.sub hg).ae_eq_zero_of_forall_set_integral_eq_zero hfg_int hfg, end lemma Lp.ae_eq_zero_of_forall_set_integral_eq_zero (f : Lp E p μ) (hp_ne_zero : p ≠ 0) (hp_ne_top : p ≠ ∞) (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hf_zero : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = 0) : f =ᵐ[μ] 0 := ae_fin_strongly_measurable.ae_eq_zero_of_forall_set_integral_eq_zero hf_int_finite hf_zero (Lp.fin_strongly_measurable _ hp_ne_zero 
hp_ne_top).ae_fin_strongly_measurable lemma Lp.ae_eq_of_forall_set_integral_eq (f g : Lp E p μ) (hp_ne_zero : p ≠ 0) (hp_ne_top : p ≠ ∞) (hf_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on f s μ) (hg_int_finite : ∀ s, measurable_set s → μ s < ∞ → integrable_on g s μ) (hfg : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = ∫ x in s, g x ∂μ) : f =ᵐ[μ] g := ae_fin_strongly_measurable.ae_eq_of_forall_set_integral_eq hf_int_finite hg_int_finite hfg (Lp.fin_strongly_measurable _ hp_ne_zero hp_ne_top).ae_fin_strongly_measurable (Lp.fin_strongly_measurable _ hp_ne_zero hp_ne_top).ae_fin_strongly_measurable lemma ae_eq_zero_of_forall_set_integral_eq_of_fin_strongly_measurable_trim (hm : m ≤ m0) {f : α → E} (hf_int_finite : ∀ s, measurable_set[m] s → μ s < ∞ → integrable_on f s μ) (hf_zero : ∀ s : set α, measurable_set[m] s → μ s < ∞ → ∫ x in s, f x ∂μ = 0) (hf : fin_strongly_measurable f (μ.trim hm)) : f =ᵐ[μ] 0 := begin obtain ⟨t, ht_meas, htf_zero, htμ⟩ := hf.exists_set_sigma_finite, haveI : sigma_finite ((μ.restrict t).trim hm) := by rwa restrict_trim hm μ ht_meas at htμ, have htf_zero : f =ᵐ[μ.restrict tᶜ] 0, { rw [eventually_eq, ae_restrict_iff' (measurable_set.compl (hm _ ht_meas))], exact eventually_of_forall htf_zero, }, have hf_meas_m : @measurable _ _ m _ f, from hf.measurable, suffices : f =ᵐ[μ.restrict t] 0, from ae_of_ae_restrict_of_ae_restrict_compl this htf_zero, refine measure_eq_zero_of_trim_eq_zero hm _, refine ae_eq_zero_of_forall_set_integral_eq_of_sigma_finite _ _, { intros s hs hμs, rw [integrable_on, restrict_trim hm (μ.restrict t) hs, measure.restrict_restrict (hm s hs)], rw [← restrict_trim hm μ ht_meas, measure.restrict_apply hs, trim_measurable_set_eq hm (@measurable_set.inter _ m _ _ hs ht_meas)] at hμs, refine integrable.trim hm _ hf_meas_m, exact hf_int_finite _ (@measurable_set.inter _ m _ _ hs ht_meas) hμs, }, { intros s hs hμs, rw [restrict_trim hm (μ.restrict t) hs, measure.restrict_restrict (hm s hs)], rw [← restrict_trim hm μ ht_meas, measure.restrict_apply hs, trim_measurable_set_eq hm (@measurable_set.inter _ m _ _ hs ht_meas)] at hμs, rw ← integral_trim hm hf_meas_m, exact hf_zero _ (@measurable_set.inter _ m _ _ hs ht_meas) hμs, }, end lemma integrable.ae_eq_zero_of_forall_set_integral_eq_zero {f : α → E} (hf : integrable f μ) (hf_zero : ∀ s, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = 0) : f =ᵐ[μ] 0 := begin have hf_Lp : mem_ℒp f 1 μ, from mem_ℒp_one_iff_integrable.mpr hf, let f_Lp := hf_Lp.to_Lp f, have hf_f_Lp : f =ᵐ[μ] f_Lp, from (mem_ℒp.coe_fn_to_Lp hf_Lp).symm, refine hf_f_Lp.trans _, refine Lp.ae_eq_zero_of_forall_set_integral_eq_zero f_Lp one_ne_zero ennreal.coe_ne_top _ _, { exact λ s hs hμs, integrable.integrable_on (L1.integrable_coe_fn _), }, { intros s hs hμs, rw integral_congr_ae (ae_restrict_of_ae hf_f_Lp.symm), exact hf_zero s hs hμs, }, end lemma integrable.ae_eq_of_forall_set_integral_eq (f g : α → E) (hf : integrable f μ) (hg : integrable g μ) (hfg : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, f x ∂μ = ∫ x in s, g x ∂μ) : f =ᵐ[μ] g := begin rw ← sub_ae_eq_zero, have hfg' : ∀ s : set α, measurable_set s → μ s < ∞ → ∫ x in s, (f - g) x ∂μ = 0, { intros s hs hμs, rw integral_sub' hf.integrable_on hg.integrable_on, exact sub_eq_zero.mpr (hfg s hs hμs), }, exact integrable.ae_eq_zero_of_forall_set_integral_eq_zero (hf.sub hg) hfg', end end ae_eq_of_forall_set_integral_eq section lintegral lemma ae_measurable.ae_eq_of_forall_set_lintegral_eq {f g : α → ℝ≥0∞} (hf : ae_measurable f μ) (hg : ae_measurable g μ) 
(hfi : ∫⁻ x, f x ∂μ ≠ ∞) (hgi : ∫⁻ x, g x ∂μ ≠ ∞) (hfg : ∀ ⦃s⦄, measurable_set s → μ s < ∞ → ∫⁻ x in s, f x ∂μ = ∫⁻ x in s, g x ∂μ) : f =ᵐ[μ] g := begin refine ennreal.eventually_eq_of_to_real_eventually_eq (ae_lt_top' hf hfi).ne_of_lt (ae_lt_top' hg hgi).ne_of_lt (integrable.ae_eq_of_forall_set_integral_eq _ _ (integrable_to_real_of_lintegral_ne_top hf hfi) (integrable_to_real_of_lintegral_ne_top hg hgi) (λ s hs hs', _)), rw [integral_eq_lintegral_of_nonneg_ae, integral_eq_lintegral_of_nonneg_ae], { congr' 1, rw [lintegral_congr_ae (of_real_to_real_ae_eq _), lintegral_congr_ae (of_real_to_real_ae_eq _)], { exact hfg hs hs' }, { refine (ae_lt_top' hg.restrict (ne_of_lt (lt_of_le_of_lt _ hgi.lt_top))), exact @set_lintegral_univ α _ μ g ▸ lintegral_mono_set (set.subset_univ _) }, { refine (ae_lt_top' hf.restrict (ne_of_lt (lt_of_le_of_lt _ hfi.lt_top))), exact @set_lintegral_univ α _ μ f ▸ lintegral_mono_set (set.subset_univ _) } }, -- putting the proofs where they are used is extremely slow exacts [ae_of_all _ (λ x, ennreal.to_real_nonneg), hg.ennreal_to_real.restrict, ae_of_all _ (λ x, ennreal.to_real_nonneg), hf.ennreal_to_real.restrict] end end lintegral end measure_theory
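The Lean block above ends with its headline uniqueness results. Informally, the integrable version `integrable.ae_eq_of_forall_set_integral_eq` says that two functions agreeing in integral on every finite-measure measurable set agree almost everywhere:

$$
\Bigl(\forall s \text{ measurable},\ \mu(s) < \infty:\ \int_s f \,\mathrm{d}\mu = \int_s g \,\mathrm{d}\mu\Bigr) \;\Longrightarrow\; f = g \quad \mu\text{-a.e.}
$$

for integrable $f, g$; the σ-finite, `ae_fin_strongly_measurable`, and `Lp` variants proved above weaken global integrability to integrability on each finite-measure set.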
lemma diameter_compact_attained:
  assumes "compact S"
    and "S \<noteq> {}"
  shows "\<exists>x\<in>S. \<exists>y\<in>S. dist x y = diameter S"
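A small numerical illustration of the statement (separate from the Isabelle development): a finite point set is compact, so its diameter is realized by a concrete pair of points, which a brute-force search finds.

```python
import itertools
import math

# A finite set is compact, so the diameter sup is attained by some pair.
S = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Search all pairs for the maximum distance (the diameter of a finite set).
x, y = max(itertools.product(S, S), key=lambda pq: dist(*pq))
print(x, y, dist(x, y))  # a witnessing pair and the diameter (here sqrt(8))
```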
// Copyright (c) 2001-2011 Hartmut Kaiser // Copyright (c) 2009 Pavel Baranov // // Distributed under the Boost Software License, Version 1.0. (See accompanying // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) #include <boost/spirit/include/lex_lexertl.hpp> #include <boost/core/lightweight_test.hpp> #include <iostream> #include <string> using namespace boost::spirit; using namespace boost::spirit::lex; typedef const char * base_iterator; /////////////////////////////////////////////////////////////////////////////// // Token definition /////////////////////////////////////////////////////////////////////////////// template <typename Lexer> struct position_helper_tokens : lexer<Lexer> { position_helper_tokens() { // define tokens and associate them with the lexer eol = "\n"; any = "[^\n]+"; // associate tokens with the lexer this->self = eol | any ; } token_def<> any, eol; }; int main() { // read input from the given file std::string str ("test"); // token type typedef lexertl::token<base_iterator, lex::omit, boost::mpl::false_> token_type; // lexer type typedef lexertl::actor_lexer<token_type> lexer_type; // create the lexer object instance needed to invoke the lexical analysis position_helper_tokens<lexer_type> position_helper_lexer; // tokenize the given string, all generated tokens are discarded base_iterator first = str.c_str(); base_iterator last = &first[str.size()]; for(lexer_type::iterator_type i = position_helper_lexer.begin(first, last); i != position_helper_lexer.end() && (*i).is_valid(); i++ ) { } return boost::report_errors(); }
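For comparison only (not part of the Boost test suite), the two token classes of `position_helper_tokens` — `eol = "\n"` and `any = "[^\n]+"` — can be sketched with Python's `re`; the name `tokenize` is illustrative.

```python
import re

# Same token classes as the Spirit lexer above: EOL = "\n", ANY = "[^\n]+".
TOKEN_RE = re.compile(r"(?P<EOL>\n)|(?P<ANY>[^\n]+)")

def tokenize(text):
    """Yield (token_name, lexeme) pairs, mirroring `self = eol | any`."""
    for m in TOKEN_RE.finditer(text):
        yield m.lastgroup, m.group()

print(list(tokenize("test")))   # [('ANY', 'test')]
print(list(tokenize("a\nb")))   # [('ANY', 'a'), ('EOL', '\n'), ('ANY', 'b')]
```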
From trace_program_logic.fairness Require Import fairness resources. From trace_program_logic.prelude Require Import finitary quantifiers classical_instances. From stdpp Require Import finite. Section gmap. Context `{!EqDecision K, !Countable K}. Definition max_gmap (m: gmap K nat) : nat := map_fold (λ k v r, v `max` r) 0 m. Lemma max_gmap_spec m: map_Forall (λ _ v, v <= max_gmap m) m. Proof. induction m using map_ind; first done. apply map_Forall_insert =>//. rewrite /max_gmap map_fold_insert //. - split; first lia. intros ?? Hnotin. specialize (IHm _ _ Hnotin). simpl in IHm. unfold max_gmap in IHm. lia. - intros **. lia. Qed. End gmap. Section finitary. Context `{M: FairModel}. Context `{Λ: language}. Context `{EqDecision M}. Context `{EqDecision (locale Λ)}. Context `{HPI0: forall s x, ProofIrrel ((let '(s', ℓ) := x in M.(fmtrans) s ℓ s'): Prop) }. Variable model_finitary: forall s1, Finite { '(s2, ℓ) | M.(fmtrans) s1 ℓ s2 }. Definition enum_inner (s1: M): list (M * option M.(fmrole)) := map proj1_sig (@enum _ _ (model_finitary s1)). Lemma enum_inner_spec (s1 s2: M) ℓ: M.(fmtrans) s1 ℓ s2 -> (s2, ℓ) ∈ enum_inner s1. Proof. intros H. unfold enum_inner. rewrite elem_of_list_fmap. exists (exist _ (s2, ℓ) H). split =>//. apply elem_of_enum. Qed. Program Definition enumerate_next (δ1: aux_state (aux_fair_state M)) (oζ : olocale Λ) (c': cfg Λ): list (aux_state (aux_fair_state M) * @mlabel (fair_model (Λ := Λ) M)) := '(s2, ℓ) ← (δ1.(ls_under), None) :: enum_inner δ1.(ls_under); fs ← enum_gmap_bounded' (live_roles _ s2) (max_gmap δ1.(ls_fuel) `max` fuel_limit s2); ms ← enum_gmap_range_bounded' (live_roles _ s2) (locales_of_list c'.1); let ℓ' := match ℓ with | None => match oζ with Some ζ => Silent_step ζ | None => Config_step end | Some ℓ => match oζ with | None => Config_step | Some ζ => Take_step ℓ ζ end end in mret ({| ls_under := s2; ls_fuel := `fs; (* ls_fuel_dom := proj2_sig fs; *) (* TODO: why this does not work?*) ls_mapping := `ms ; ls_mapping_dom := proj2_sig ms |}, ℓ'). Next Obligation. intros ??????????. destruct fs as [??]. by simpl. Qed. Lemma valid_state_evolution_finitary_fairness φ: valid_state_evolution_finitary (aux_fair_state M) φ. Proof. intros extr auxtr c' oζ. eapply finite_smaller_card_nat. eapply (in_list_finite (enumerate_next (trace_last auxtr) oζ c')). intros [δ2 ℓ] [[Hlab [Htrans Hsmall]] _]. unfold enumerate_next. apply elem_of_list_bind. exists (δ2.(ls_under), match ℓ with Take_step l _ => Some l | _ => None end). split; last first. { destruct ℓ as [ρ tid' | |]. - inversion Htrans as [Htrans']. apply elem_of_cons; right. by apply enum_inner_spec. - apply elem_of_cons; left. f_equal. inversion Htrans as (?&?&?&?); done. - apply elem_of_cons; right. inversion Htrans as (?&?). by apply enum_inner_spec. } apply elem_of_list_bind. eexists (δ2.(ls_fuel) ↾ (ls_fuel_dom _)); split; last first. { eapply enum_gmap_bounded'_spec; split; first by apply ls_fuel_dom. intros ρ f Hsome. destruct ℓ as [ρ' tid' | |]. - destruct (decide (ρ = ρ')) as [-> | Hneq]. + inversion Htrans as [? Hbig]. destruct Hbig as (?&?&?&Hlim&?). rewrite Hsome /= in Hlim. assert (Hlive: ρ' ∈ live_roles _ δ2). { rewrite -ls_fuel_dom elem_of_dom. eauto. } specialize (Hlim Hlive). lia. + inversion Htrans as [? Hbig]. destruct Hbig as (?&?&Hleq&?&Hnew). destruct (decide (ρ ∈ live_roles _ (trace_last auxtr))) as [Hin|Hnotin]. * assert (Hok: oleq (ls_fuel δ2 !! ρ) (ls_fuel (trace_last auxtr) !! ρ)). { apply Hleq =>//. congruence. } rewrite Hsome in Hok. destruct (ls_fuel (trace_last auxtr) !! 
ρ) as [f'|] eqn:Heqn; last done. rewrite <-ls_fuel_dom, elem_of_dom in Hin. pose proof (max_gmap_spec _ _ _ Heqn). simpl in *. lia. * assert (Hok: oleq (ls_fuel δ2 !! ρ) (Some (fuel_limit δ2))). { apply Hnew. apply elem_of_dom_2 in Hsome. rewrite -ls_fuel_dom. set_solver. } rewrite Hsome in Hok. simpl in *. lia. - inversion Htrans as [? [? [Hleq Heq]]]. specialize (Hleq ρ). assert (Hok: oleq (ls_fuel δ2 !! ρ) (ls_fuel (trace_last auxtr) !! ρ)). { apply Hleq; last done. rewrite Heq -ls_fuel_dom elem_of_dom Hsome. by eauto. } rewrite Hsome in Hok. destruct (ls_fuel (trace_last auxtr) !! ρ) as [f'|] eqn:Heqn; last done. pose proof (max_gmap_spec _ _ _ Heqn). simpl in *. lia. - inversion Htrans as [? [? [Hleq Hnew]]]. specialize (Hleq ρ). destruct (decide (ρ ∈ live_roles _ (trace_last auxtr))). + assert (Hok: oleq (ls_fuel δ2 !! ρ) (ls_fuel (trace_last auxtr) !! ρ)). { apply Hleq; done. } rewrite Hsome in Hok. destruct (ls_fuel (trace_last auxtr) !! ρ) as [f'|] eqn:Heqn; last done. pose proof (max_gmap_spec _ _ _ Heqn). simpl in *. lia. + assert (Hok: oleq (ls_fuel δ2 !! ρ) (Some (fuel_limit δ2))). { apply Hnew. apply elem_of_dom_2 in Hsome. rewrite -ls_fuel_dom. set_solver. } rewrite Hsome in Hok. simpl in *. lia. } apply elem_of_list_bind. exists (δ2.(ls_mapping) ↾ (ls_mapping_dom _)); split; last first. { eapply enum_gmap_range_bounded'_spec; split; first by apply ls_mapping_dom. intros ρ' tid' Hsome. unfold tids_smaller in *. apply locales_of_list_from_locale_from. eauto. } rewrite elem_of_list_singleton; f_equal. - destruct δ2; simpl. f_equal; apply ProofIrrelevance. - destruct ℓ; simpl; destruct oζ =>//; by inversion Hlab. Unshelve. + intros ??. apply make_decision. + intros. apply make_proof_irrel. Qed. End finitary.
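`max_gmap` above folds `max` over the values of a finite map, and `max_gmap_spec` states that every value is bounded by the result. For intuition only (this is not part of the Coq development), the same computation over a Python `dict`:

```python
def max_gmap(m: dict) -> int:
    """Fold max over the values of a finite map, defaulting to 0 (as in map_fold ... 0)."""
    r = 0
    for v in m.values():
        r = max(v, r)
    return r

m = {"a": 3, "b": 7, "c": 1}
assert all(v <= max_gmap(m) for v in m.values())  # the max_gmap_spec invariant
assert max_gmap({}) == 0                          # empty-map base case
```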
*----------------------------------------------------------------------* subroutine set_op_dim2(ipass,mel,str_info,ngam) *----------------------------------------------------------------------* * * set up the dimension arrays for ME-list mel operator mel%op * * new version: general intermediates added * * the operator has a total Ms and IRREP, as given in "mel". * it is stored as * * Op(Cx,Ax,Cp,Ap,Ch,Ah,Cv,Av) * * where C are indices associated with creation strings and * A are indices associated with annihilation strings * in (p)article, (h)ole, and (v)alence spaces * * for intermediates: Cx,Ax,etc. may break down in several blocks * * in this routine we define the sequence in which the different blocks * of the operator are to be stored; with "different blocks" we mean * all the different cases of Ms and symmetry labels distributed among * the strings (Cp,Ch,Cv,Ap,Ah,Av) such that they add up to the * required total Ms and IRREP * * ipass = 1 set len_op_gmo, len_op_occ, * off_op_gmo, off_op_occ, len_op * and get dimension info for off_op_gmox * * ipass = 2 set off_op_gmox * * the len_op_gmox is only needed for operators, where more than * one class of spaces (I mean H/P/V) is occupied for C or for A * usual H->P excitation operators (e.g. T-operators of SR-CC) do * not fall into this group and ipass=2 may be skipped * * andreas, end of 2006 * *----------------------------------------------------------------------* implicit none include 'stdunit.h' include 'opdim.h' include 'hpvxseq.h' include 'def_filinf.h' include 'def_operator.h' include 'def_me_list.h' include 'def_graph.h' include 'def_strinf.h' include 'multd2h.h' integer, parameter :: & ntest = 000 integer, intent(in) :: & ipass, ngam type(strinf), intent(in) :: & str_info type(me_list), intent(inout) :: & mel logical :: & first, ms_fix, fix_success, skip integer :: & idxstr, idxstr_tot, idxdis, iblk, iblkoff, nexc, & msa_max, msc_max, njoined, nblk, & msa, msc, idxmsa, idxmsa2, igama, igamc, & nasub, ncsub, icmp, & did, iexc, igam, len_blk, ld_blk, idx, jdx, tot_c, tot_a, & idx_hpvx, hpvx, ii, maxidxms integer, pointer :: & hpvx_csub(:), hpvx_asub(:), & occ_csub(:), occ_asub(:), & graph_csub(:), graph_asub(:), & msdis_c(:), msdis_a(:), & idxmsdis_c(:), idxmsdis_a(:), & gamdis_c(:), gamdis_a(:), & len_str(:), lenstr_array(:,:,:), & mapca(:), diag_idx(:), diag_ca(:) integer, pointer :: & hpvx_occ(:,:,:), idx_graph(:,:,:), & ca_occ(:,:) type(graph), pointer :: & graphs(:) type(operator), pointer :: & op integer :: c & msd(ngastp,2), igamd(ngastp,2) & msd(ngastp,2,mel%op%njoined), igamd(ngastp,2,mel%op%njoined) logical, external :: & next_msgamdist2, nondia_blk, nondia_distr integer, external :: & msgmdid2, msgmdid, msa2idxms4op op => mel%op if (ntest.gt.5) then call write_title(lulog,wst_dbg_subr,'set_op_dim') write(lulog,*) ' ipass = ',ipass write(lulog,*) ' ME-list = ',trim(mel%label) write(lulog,*) ' operator = ',trim(op%name) write(lulog,*) ' IRREP = ',mel%gamt write(lulog,*) ' Ms = ',mel%mst end if idxstr = 0 idxstr_tot = 0 njoined = op%njoined nblk = op%n_occ_cls ms_fix = mel%fix_vertex_ms hpvx_occ => op%ihpvca_occ idx_graph => mel%idx_graph ca_occ => op%ica_occ graphs => str_info%g c dbg c print *,'set dim, fix =',ms_fix c dbg maxidxms = str_info%max_idxms allocate(lenstr_array(ngam,str_info%max_idxms,str_info%ngraph)) call set_lenstr_array(lenstr_array,ngam, & str_info%max_idxms,str_info) ! 
we better initialize some of the key arrays if (ipass.eq.1) then mel%len_op_occ(1:nblk) = 0 mel%off_op_occ(1:nblk) = 0 do iblk = 1, nblk mel%off_op_gmox(iblk)%maxd = 0 mel%len_op_gmo(iblk)%gam_ms = 0 mel%off_op_gmo(iblk)%gam_ms = 0 end do end if if (ipass.eq.2) then do iblk = 1, nblk mel%len_op_gmox(iblk)%d_gam_ms = 0 mel%off_op_gmox(iblk)%d_gam_ms = 0 mel%ld_op_gmox(iblk)%d_gam_ms = 0 end do end if ! loop over occupations (= blocks) occ_cls: do iblk = 1, nblk if (ntest.ge.100.and.op%formal_blk(iblk)) & write(lulog,*) 'skipping formal block (#',iblk,')' if (op%formal_blk(iblk)) cycle iblkoff = (iblk-1)*njoined if (ntest.ge.150) then write(lulog,*) 'class: ',iblk call wrt_occ_n(lulog,hpvx_occ(1,1,iblkoff+1),njoined) end if ! find the number sub-blocks for C and A call get_num_subblk(ncsub,nasub,hpvx_occ(1,1,iblkoff+1),njoined) ! HPVX-info, OCC-info, ! MS/IRREP-distributions, and string lengths allocate(hpvx_csub(ncsub),hpvx_asub(nasub), & occ_csub(ncsub), occ_asub(nasub), & graph_csub(ncsub), graph_asub(nasub), & msdis_c(ncsub), msdis_a(nasub), & idxmsdis_c(ncsub), idxmsdis_a(nasub), & gamdis_c(ncsub), gamdis_a(nasub), & len_str(ncsub+nasub)) msc_max = ca_occ(1,iblk) msa_max = ca_occ(2,iblk) nexc = min(ca_occ(1,iblk),ca_occ(2,iblk)) ! set HPVX and OCC info call condense_occ(occ_csub, occ_asub, & hpvx_csub,hpvx_asub, & hpvx_occ(1,1,iblkoff+1),njoined,hpvxblkseq) ! do the same for the graph info call condense_occ(graph_csub, graph_asub, & hpvx_csub,hpvx_asub, & idx_graph(1,1,iblkoff+1),njoined,hpvxblkseq) if (mel%diag_type.ne.0) then ! skip non-diagonal blocks allocate(mapca(ncsub),diag_idx(ncsub),diag_ca(ncsub)) if (nondia_blk(mapca,diag_idx,diag_ca, & hpvx_occ(1,1,iblkoff+1),njoined, & ncsub,nasub,mel%diag_type)) & msa_max = -msa_max - 1 ! will skip msa_loop end if c dbg c print *,'graph_csub: ',graph_csub(1:ncsub) c print *,'graph_asub: ',graph_asub(1:nasub) c print *,'hpvx_csub: ',hpvx_csub(1:ncsub) c print *,'hpvx_asub: ',hpvx_asub(1:nasub) c dbg ! set offsets mel%off_op_occ(iblk) = idxstr if (ipass.eq.1) then mel%off_op_gmox(iblk)%maxd = 0 else mel%off_op_gmox(iblk)% & d_gam_ms(1:mel%off_op_gmox(iblk)%maxd,1:ngam,1:nexc+1)=-1 mel%off_op_gmox(iblk)% & did(1:mel%off_op_gmox(iblk)%maxd,1:ngam,1:nexc+1) = 0 mel%off_op_gmox(iblk)% & ndis(1:ngam,1:nexc+1) = 0 end if ! loop over Ms of A-string (fixes Ms of C-string) idxmsa = 0 msa_loop: do msa = msa_max, -msa_max, -2 ! C <-> A means alpha <-> beta !! msc = msa + mel%mst ! to make this point clear: ! usually, we have mst=0 operators, which implies ! Ms(C) == Ms(A) if (abs(msc).gt.msc_max) cycle msa_loop idxmsa = idxmsa+1 c I think this test has been run often enough by now c let's save the time here c ! test indexing routine c idxmsa2 = msa2idxms4op(msa,mel%mst,msa_max,msc_max) c cc if (idxmsa.ne.idxmsa2) cc & call quit(1,'set_op_dim2','bug in msa2idxms4op!') c if (idxmsa.ne.idxmsa2) then c print *,'msa,mst,msa_max,msa_max: ', c & msa,mel%mst,msa_max,msc_max c print *,'idxmsa2, idxmsa: ',idxmsa2, idxmsa c call quit(1,'set_op_dim2','bug in msa2idxms4op !') c end if ! loop over IRREP of A-string (fixes IRREP of C-string) igama_loop: do igama = 1, ngam igamc = multd2h(igama,mel%gamt) ! store the current position in offset array if (ipass.eq.2) & mel%off_op_gmo(iblk)%gam_ms(igama,idxmsa) = idxstr ! now, to be general, we have to loop over all ! possible MS and IRREP distributions over X/H/P/V spaces ! for both C and A strings ! (for intermediates, njoined>1, these are further ! 
subdivided into substrings, so even more work to come) idxdis = 0 first = .true. distr_loop: do if (.not.next_msgamdist2(first, & msdis_c,msdis_a,gamdis_c,gamdis_a, & ncsub, nasub, & occ_csub,occ_asub, & msc,msa,igamc,igama,ngam, & ms_fix,fix_success)) & exit distr_loop c if(ms_fix.and..not.fix_success)cycle distr_loop c dbg c print *,'top of dist_loop' c dbg first = .false. ! avoid call c call ms2idxms(idxmsdis_c,msdis_c,occ_csub,ncsub) do ii = 1, ncsub idxmsdis_c(ii) = ishft(occ_csub(ii)-msdis_c(ii),-1)+1 end do c call ms2idxms(idxmsdis_a,msdis_a,occ_asub,nasub) do ii = 1, nasub idxmsdis_a(ii) = ishft(occ_asub(ii)-msdis_a(ii),-1)+1 end do if (mel%diag_type.ne.0) then ! skip non-diagonal distributions and those in which ! the diagonal index tuple has wrong symmetry/ms if (nondia_distr(mapca,diag_idx,diag_ca, & msdis_c,msdis_a,gamdis_c,gamdis_a, & ncsub,mel%msdiag,mel%gamdiag)) & cycle distr_loop end if if (ipass.eq.2) then call set_len_str2(len_str,ncsub,nasub, & lenstr_array,ngam,maxidxms, & graph_csub,idxmsdis_c,gamdis_c,hpvx_csub, & graph_asub,idxmsdis_a,gamdis_a,hpvx_asub, & hpvxseq,.false.) c dbg c write(lulog,*)'current dis:' c write(lulog,*) idxmsdis_c(1:ncsub) c write(lulog,*) gamdis_c(1:ncsub) c write(lulog,*) idxmsdis_a(1:nasub) c write(lulog,*) gamdis_a(1:nasub) c write(lulog,*)'graphs c:',graph_csub(1:ncsub) c write(lulog,*)'graphs a:',graph_asub(1:nasub) c write(lulog,*)'len_str: ',len_str(1:ncsub+nasub) c dbg ld_blk = 1 do icmp = 1, ncsub ld_blk = ld_blk*len_str(icmp) c dbg c print *,'icmp, len_str: ',icmp,len_str(icmp) c dbg end do len_blk = ld_blk do icmp = ncsub+1, ncsub+nasub len_blk = len_blk*len_str(icmp) c dbg c print *,'icmp, len_str: ',icmp,len_str(icmp) c dbg end do ! get actual leading dimension search_loop: do idx_hpvx = 1, ngastp hpvx = hpvxseq(idx_hpvx) do icmp = 1, ncsub if (hpvx_csub(icmp).eq.hpvx) then ld_blk = len_str(icmp) exit search_loop end if end do do icmp = 1, nasub if (hpvx_asub(icmp).eq.hpvx) then ld_blk = len_str(ncsub+icmp) exit search_loop end if end do end do search_loop if (len_blk.le.0) cycle distr_loop ! increment distribution index idxdis = idxdis+1 if (njoined.eq.1.and.ntest.ge.150.or. & (ms_fix.and.ipass.eq.2)) then if(.not.ms_fix) & write(lulog,*) 'current MS and IRREP distr:' call expand_occ(msd,idx_graph(1,1,iblkoff+1), & ncsub,nasub, & msdis_c,msdis_a, & hpvx_csub,hpvx_asub, & njoined) if(.not.ms_fix) & call wrt_occ_n(lulog,msd,njoined) call expand_occ(igamd,idx_graph(1,1,iblkoff+1), & ncsub,nasub, & gamdis_c,gamdis_a, & hpvx_csub,hpvx_asub, & njoined) if(.not.ms_fix) & call wrt_occ_n(lulog,igamd,njoined) if (njoined.eq.1) then did = msgmdid(hpvx_occ(1,1,iblkoff+1), & msd,igamd,ngam) write(lulog,*) 'DID old: ',did end if write(lulog,*) 'current idxdis = ',idxdis end if c dbg if(ms_fix)then do idx = 1, njoined tot_c = 0 tot_a = 0 do jdx = 1, ngastp tot_c = tot_c + msd(jdx,1,idx) tot_a = tot_a + msd(jdx,2,idx) enddo if(tot_c.ne.tot_a) cycle distr_loop enddo endif c dbg ! save current offset !if (ipass.eq.2) then mel%off_op_gmox(iblk)% & d_gam_ms(idxdis,igama,idxmsa)=idxstr ! get ID of current distr did = msgmdid2(occ_csub,idxmsdis_c,gamdis_c,ncsub, & occ_asub,idxmsdis_a,gamdis_a,nasub,ngam) c dbg c if(trim(mel%op%name).eq.'G_Z') c & print *,'did',did c dbg ! 
save ID of current distr mel%off_op_gmox(iblk)% & did(idxdis,igama,idxmsa) = did if (ntest.ge.150) then write(lulog,*) 'current did = ',did write(lulog,*) idxmsdis_c(1:ncsub) write(lulog,*) gamdis_c(1:ncsub) write(lulog,*) idxmsdis_a(1:nasub) write(lulog,*) gamdis_a(1:nasub) end if !end if ! increment string element index idxstr = idxstr+len_blk idxstr_tot = idxstr_tot+len_blk !if (ipass.eq.2) then mel%len_op_gmox(iblk)% & d_gam_ms(idxdis,igama,idxmsa) = len_blk mel%ld_op_gmox(iblk)% & d_gam_ms(idxdis,igama,idxmsa) = ld_blk !end if if (ntest.ge.150) then write(lulog,*) 'current block length: ',len_blk end if else ! ipass == 1 idxdis = idxdis+1 ! just increment end if end do distr_loop if (ipass.eq.2) & mel%len_op_gmo(iblk)%gam_ms(igama,idxmsa) = idxstr - & mel%off_op_gmo(iblk)%gam_ms(igama,idxmsa) if (ipass.eq.1) ! note: now maxd > actually needed maxd & mel%off_op_gmox(iblk)%maxd = & max(mel%off_op_gmox(iblk)%maxd,idxdis) if (ipass.eq.2) then mel%off_op_gmox(iblk)%ndis(igama,idxmsa) = idxdis end if end do igama_loop end do msa_loop if (mel%diag_type.ne.0) deallocate(mapca,diag_idx,diag_ca) if (ipass.eq.2) & mel%len_op_occ(iblk) = idxstr - mel%off_op_occ(iblk) deallocate(hpvx_csub,hpvx_asub, & occ_csub, occ_asub, & graph_csub, graph_asub, & msdis_c, msdis_a, & idxmsdis_c, idxmsdis_a, & gamdis_c, gamdis_a, & len_str) end do occ_cls if (ipass.eq.2) mel%len_op = idxstr_tot if (ntest.ge.100) then if (ipass.eq.2) then write(lulog,*) 'total number of operator elements: ', & mel%len_op write(lulog,*) 'length per occupation class:' call iwrtma(mel%len_op_occ,nblk,1,nblk,1) write(lulog,*) 'offsets per occupation class:' call iwrtma(mel%off_op_occ,nblk,1,nblk,1) write(lulog,*) 'info per occupation class, IRREP, MS:' do iblk = 1, nblk if (op%formal_blk(iblk)) cycle nexc = min(ca_occ(1,iblk), & ca_occ(2,iblk)) write(lulog,*) 'occ-class: ',iblk write(lulog,*) 'lengths:' call iwrtma(mel%len_op_gmo(iblk)%gam_ms, & ngam,nexc+1,ngam,nexc+1) write(lulog,*) 'offsets:' call iwrtma(mel%off_op_gmo(iblk)%gam_ms, & ngam,nexc+1,ngam,nexc+1) end do !else write(lulog,*) 'info per occupation class, DISTR, IRREP, MS:' write(lulog,*) 'offsets:' do iblk = 1, nblk if (op%formal_blk(iblk)) cycle nexc = min(ca_occ(1,iblk), & ca_occ(2,iblk)) write(lulog,*) 'occ-class: ',iblk iblkoff = (iblk-1)*njoined call wrt_occ_n(lulog,op%ihpvca_occ(1,1,iblkoff+1),njoined) do iexc = 1, nexc+1 do igam = 1, ngam if (mel%off_op_gmox(iblk)%ndis(igam,iexc).eq.0) cycle write(lulog,*) iexc,igam,' -> ', & mel%off_op_gmox(iblk)% & d_gam_ms(1:mel%off_op_gmox(iblk)% & ndis(igam,iexc),igam,iexc) end do end do end do write(lulog,*) 'distribution IDs:' do iblk = 1, nblk if (op%formal_blk(iblk)) cycle nexc = min(ca_occ(1,iblk), & ca_occ(2,iblk)) write(lulog,*) 'occ-class: ',iblk iblkoff = (iblk-1)*njoined call wrt_occ_n(lulog,op%ihpvca_occ(1,1,iblkoff+1),njoined) do iexc = 1, nexc+1 do igam = 1, ngam if (mel%off_op_gmox(iblk)%ndis(igam,iexc).eq.0) cycle write(lulog,*) iexc,igam,' -> ', & mel%off_op_gmox(iblk)% & did(1:mel%off_op_gmox(iblk)% & ndis(igam,iexc),igam,iexc) end do end do end do end if end if deallocate(lenstr_array) return end *----------------------------------------------------------------------*
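The `ipass = 1` / `ipass = 2` protocol in `set_op_dim2` is a standard two-pass layout computation: the first pass only sizes the index arrays (`maxd`, total lengths), the second writes the actual offsets and lengths. A schematic Python sketch of the pass-2 bookkeeping, with hypothetical block lengths standing in for the per-(Ms, IRREP, distribution) string-length products:

```python
def layout_offsets(block_lengths):
    """Assign each block an offset into one contiguous array (pass-2 bookkeeping)."""
    offsets, idxstr = [], 0
    for length in block_lengths:
        offsets.append(idxstr)  # off_op_... entry: position before this block
        idxstr += length        # len_op_... entry accumulated into the total
    return offsets, idxstr      # idxstr plays the role of len_op, the total count

# Hypothetical per-(Ms, IRREP, distribution) block lengths:
print(layout_offsets([6, 0, 12, 3]))  # ([0, 6, 6, 18], 21)
```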
fft(c(1,1,1,1,0,0,0,0))
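This R call computes the length-8 DFT of the boxcar sequence 1,1,1,1,0,0,0,0. The NumPy equivalent, with the (rounded) expected output in the comment:

```python
import numpy as np

x = np.array([1, 1, 1, 1, 0, 0, 0, 0])
X = np.fft.fft(x)
print(np.round(X, 4))
# [4.+0.j  1.-2.4142j  0.+0.j  1.-0.4142j  0.+0.j  1.+0.4142j  0.+0.j  1.+2.4142j]
```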
If $F$ is a finite set of bounded sets, then $\bigcup F$ is bounded.
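The proof idea is the usual finite-maximum argument: pick a bound for each member and take the largest,

$$
\forall S \in F\;\exists M_S:\; S \subseteq B(0, M_S) \quad\Longrightarrow\quad \bigcup F \subseteq B\bigl(0,\, \max_{S \in F} M_S\bigr),
$$

where the maximum exists because $F$ is finite (take the bound $0$ when $F = \varnothing$).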
theory Timed_Game_Certification_Impl2 imports Timed_Game_Certification_Impl begin text \<open>This theory tried to achieve the same goals as theory \<open>Timed_Game_Certification_Impl\<close> but turned out too complicated for our purposes. It is therefore incomplete and can be considered deprecated. We leave it for now to document the thought process.\<close> subsection \<open>More Simulation Theorems\<close> text \<open>These theorems might be useful, maybe they should be moved somewhere else.\<close> lemma (in Graphs.Simulation) Graph_InvariantI: assumes "Graphs.Graph_Invariant B I" shows "Graphs.Graph_Invariant A (\<lambda>a. \<exists>b. a \<sim> b \<and> I b)" by (smt (verit, ccfv_SIG) Graphs.Graph_Invariant_def A_B_step assms(1)) lemmas Graph_Invariant_simulationI = Simulation.Graph_InvariantI lemma (in Graphs.Simulation) Graph_Invariant_StartI: assumes "Graphs.Graph_Invariant_Start B b\<^sub>0 I" "a\<^sub>0 \<sim> b\<^sub>0" shows "Graphs.Graph_Invariant_Start A a\<^sub>0 (\<lambda>a. \<exists>b. a \<sim> b \<and> I b)" using assms unfolding Graphs.Graph_Invariant_Start_def Graphs.Graph_Invariant_Start_axioms_def by (blast intro: Graph_InvariantI) lemmas Graph_Invariant_Start_simulationI = Simulation.Graph_Invariant_StartI lemma (in Graphs.Graph_Invariant) replaceI: assumes "P = Q" shows "Graphs.Graph_Invariant E Q" using Graph_Invariant_axioms assms by simp lemma (in Graphs.Graph_Invariant_Start) replaceI: assumes "P = Q" shows "Graphs.Graph_Invariant_Start E s\<^sub>0 Q" using Graph_Invariant_Start_axioms assms by simp lemma (in Graphs.Simulation_Invariant) Graph_InvariantI: assumes "Graphs.Graph_Invariant B I" shows "Graphs.Graph_Invariant A (\<lambda>a. \<exists>b. PA a \<and> a \<sim> b \<and> I b \<and> PB b)" using assms by (smt (verit, ccfv_threshold) A_B_step A_invariant B_invariant Graphs.Graph_Invariant_def) lemma Simulation_Invariant_composition: assumes "Graphs.Simulation_Invariant A B sim1 PA PB" "Graphs.Simulation_Invariant B C sim2 PB PC" shows "Graphs.Simulation_Invariant A C (\<lambda> a c. \<exists> b. PB b \<and> sim1 a b \<and> sim2 b c) PA PC" proof - interpret A: Graphs.Simulation_Invariant A B sim1 PA PB by (rule assms(1)) interpret B: Graphs.Simulation_Invariant B C sim2 PB PC by (rule assms(2)) show ?thesis by (standard; (blast dest: A.A_B_step B.A_B_step)) qed lemma (in Graphs.Simulation) Simulation_Invariant: "Graphs.Simulation_Invariant A B sim (\<lambda>_. True) (\<lambda>_. True)" by unfold_locales (rule A_B_step) lemma Simulation_Invariant_sim_replace: assumes "Graphs.Simulation_Invariant A B sim PA PB" and "\<And> a b. PA a \<Longrightarrow> PB b \<Longrightarrow> sim a b \<longleftrightarrow> sim' a b" shows "Graphs.Simulation_Invariant A B sim' PA PB" proof - interpret Graphs.Simulation_Invariant A B sim PA PB by (rule assms(1)) from assms(2) show ?thesis by (unfold_locales; blast dest: A_B_step) qed subsection \<open>The Original More Layered Approach\<close> text \<open>To make it work properly, one would need to introduce an invariant already in the zone semantics.\<close> paragraph \<open>Zone Semantics\<close> locale Timed_Game_Automaton_Zones = Timed_Game_Automaton where A = A for A :: "('a, 'c, 't :: time, 's) ta" + fixes Strategy :: "'s \<times> ('c, 't) zone \<Rightarrow> (('c, 't) zone \<times> 'a move set) set" assumes strategy_Strategy: "a \<in> strategy (l, u) \<Longrightarrow> u \<in> Z \<Longrightarrow> \<exists>Z\<^sub>s M. 
(Z\<^sub>s, M) \<in> Strategy (l, Z) \<and> a \<in> M \<and> u \<in> Z\<^sub>s" begin text \<open>XXX: Choose maximal \<open>Z'\<close>\<close> \<^cancel>\<open>definition "Strategy \<equiv> \<lambda>(l, Z). {(Z', C) | Z' C. Z' \<subseteq> Z \<and> (\<forall>u \<in> Z'. strategy (l, u) = C)}"\<close> text \<open>Are strategies convex, i.e. is \<^term>\<open>Strategy(l, Z)\<close> always singleton?\<close> inductive step_z where action: "step_z (l, Z) (l', Z')" if "A \<turnstile> \<langle>l, Z\<^sub>s\<rangle> \<leadsto>\<^bsub>\<upharpoonleft>a\<^esub> \<langle>l', Z'\<rangle>" "controllable a" "(Z\<^sub>s, C) \<in> Strategy (l, Z)" "Act a \<in> C" | wait: "step_z (l, Z) (l', Z')" if "A \<turnstile> \<langle>l, Z\<^sub>s\<rangle> \<leadsto>\<^bsub>\<tau>\<^esub> \<langle>l',Z'\<rangle>" "(Z\<^sub>s, C) \<in> Strategy (l, Z)" "Wait \<in> C" | uncontrolled: "step_z (l, Z) (l', Z')" if "A \<turnstile> \<langle>l, Z\<rangle> \<leadsto>\<^bsub>\<upharpoonleft>a\<^esub> \<langle>l', Z'\<rangle>" "\<not> controllable a" lemma strategy_StrategyE: assumes "u \<in> Z" "a \<in> strategy (l, u)" obtains Z\<^sub>s M where "(Z\<^sub>s, M) \<in> Strategy (l, Z)" "a \<in> M" \<^cancel>\<open>"Z\<^sub>s \<subseteq> Z"\<close> "u \<in> Z\<^sub>s" using assms by atomize_elim (rule strategy_Strategy) sublocale zone_simulation: Graphs.Simulation step "\<lambda> (l, Z) (l', Z'). step_z (l, Z) (l', Z') \<and> Z' \<noteq> {}" "\<lambda> (l, u) (l', Z). l' = l \<and> u \<in> Z" apply standard apply clarsimp apply (erule step.cases; clarsimp; (erule (1) strategy_StrategyE)?) apply (auto intro: step_z.intros dest!: step_a_z_complete step_t_z_complete) done end paragraph \<open>Concrete Semantics To Zone Semantics\<close> context Timed_Safety_Game_Strat begin thm invariant_safe_fromI \<comment> \<open>We directly establish the invariant in the end, see below.\<close> lemma invariant_safe_fromI: assumes "Labeled_Graphs.Graph_Invariant_Start (\<lambda> (l, Z) a (l', Z'). step_z (l, Z) a (l', Z') \<and> Z' \<noteq> {}) (l\<^sub>0, Z\<^sub>0) I" "\<forall>l Z. I (l, Z) \<longrightarrow> from_R l Z \<inter> K = {}" "u\<^sub>0 \<in> Z\<^sub>0" shows "safe_from (l\<^sub>0, u\<^sub>0)" apply (rule invariant_safe_fromI[where I = "\<lambda>(l, u). \<exists>Z. I (l, Z) \<and> u \<in> Z"]) apply (rule Graph_Invariant_Start.replaceI) apply (rule zone_simulation.Graph_Invariant_StartI) using assms by auto end paragraph \<open>Zone Semantics To Implementation Semantics\<close> locale TGA_Start_Defs1 = \<^cancel>\<open>TA_Start_Defs where A = A for A :: "('a, nat, int, 's) ta" +\<close> TA_Impl_Precise where A = A for A :: "('a, nat, int, 's) ta" + fixes controllable :: "'a \<Rightarrow> bool" fixes strategy :: "('a, nat, real, 's) strategy" fixes S :: "'s \<times> int DBM' \<Rightarrow> (int DBM' \<times> 'a move set) set" fixes K :: "(nat, real, 's) ta_config set" assumes strategy_S: "a \<in> strategy (l, u) \<Longrightarrow> u \<in> [curry (conv_M M)]\<^bsub>v,n\<^esub> \<Longrightarrow> wf_state (l, M) \<Longrightarrow> \<exists>M\<^sub>s C. (M\<^sub>s, C) \<in> S (l, M) \<and> a \<in> C \<and> u \<in> [curry (conv_M M\<^sub>s)]\<^bsub>v,n\<^esub>" begin term "conv_A A \<turnstile> \<langle>l, Z\<rangle> \<leadsto> \<langle>l', Z'\<rangle>" sublocale sem: Timed_Game_Automaton "conv_A A" controllable strategy . definition "Strategy \<equiv> \<lambda>(l, Z). {([curry (conv_M M\<^sub>s)]\<^bsub>v,n\<^esub>, C) | M\<^sub>s C. \<exists>M. 
Z = [curry (conv_M M)]\<^bsub>v,n\<^esub> \<and> (M\<^sub>s, C) \<in> S (l, M)}" inductive step_z where action: "step_z (l, Z) (l', Z')" if "conv_A A \<turnstile> \<langle>l, Z\<^sub>s\<rangle> \<leadsto>\<^bsub>\<upharpoonleft>a\<^esub> \<langle>l', Z'\<rangle>" "controllable a" "(Z\<^sub>s, C) \<in> Strategy (l, Z)" "Act a \<in> C" | wait: "step_z (l, Z) (l', Z')" if "conv_A A \<turnstile> \<langle>l, Z\<^sub>s\<rangle> \<leadsto>\<^bsub>\<tau>\<^esub> \<langle>l',Z'\<rangle>" "(Z\<^sub>s, C) \<in> Strategy (l, Z)" "Wait \<in> C" | uncontrolled: "step_z (l, Z) (l', Z')" if "conv_A A \<turnstile> \<langle>l, Z\<rangle> \<leadsto>\<^bsub>\<upharpoonleft>a\<^esub> \<langle>l', Z'\<rangle>" "\<not> controllable a" lemma strategy_StrategyE: assumes "u \<in> Z" "a \<in> strategy (l, u)" "Z = [curry (conv_M M)]\<^bsub>v,n\<^esub>" "wf_state (l, M)" obtains Z\<^sub>s M where "(Z\<^sub>s, M) \<in> Strategy (l, Z)" "a \<in> M" \<^cancel>\<open>"Z\<^sub>s \<subseteq> Z"\<close> "u \<in> Z\<^sub>s" using assms unfolding Strategy_def by (blast dest!: strategy_S[where M = M]) sublocale sem: Timed_Game_Automaton_Zones controllable strategy "conv_A A" Strategy apply standard sorry inductive step_impl where action: "step_impl (l, M) (l', M')" if "op_precise.E_from_op_empty (l, M\<^sub>s) (\<upharpoonleft>a) (l', M')" "controllable a" "(M\<^sub>s, C) \<in> S (l, M)" "Act a \<in> C" | wait: "step_impl (l, M) (l', M')" if "op_precise.E_from_op_empty (l, M\<^sub>s) \<tau> (l', M')" "(M\<^sub>s, C) \<in> S (l, M)" "Wait \<in> C" | uncontrolled: "step_impl (l, M) (l', M')" if "op_precise.E_from_op_empty (l, M) (\<upharpoonleft>a) (l', M')" "\<not> controllable a" sublocale sem: Timed_Safety_Game_Strat "conv_A A" controllable K strategy . lemma S_dbm_equivE: assumes "(M\<^sub>s, C) \<in> S (l, M)" "M \<simeq> M'" obtains M\<^sub>s' where "(M\<^sub>s', C) \<in> S (l, M')" "M\<^sub>s \<simeq> M\<^sub>s'" sorry lemma strategy_SimulationE: assumes "(Z\<^sub>s, C) \<in> Strategy (l, [curry (conv_M M)]\<^bsub>v,n\<^esub>)" "wf_state (l, M)" obtains M\<^sub>s where "(M\<^sub>s, C) \<in> S (l, M)" "Z\<^sub>s = [curry (conv_M M\<^sub>s)]\<^bsub>v,n\<^esub>" apply atomize_elim using assms unfolding Strategy_def apply (auto 4 3 simp: dbm_equiv_def elim: S_dbm_equivE[where M' = M]) done lemma strategy_split_preserves_wf_state: assumes "(M\<^sub>s, C) \<in> S (l, M)" "wf_state (l, M)" shows "wf_state (l, M\<^sub>s)" sorry interpretation bisim_empty_zone: Bisimulation_Invariant "\<lambda>(l, Z) a (l', Z'). conv_A A \<turnstile> \<langle>l, Z\<rangle> \<leadsto>\<^bsub>a\<^esub> \<langle>l', Z'\<rangle> \<and> Z' \<noteq> {}" op_precise.E_from_op_empty "\<lambda>(l, Z) (l', D). l' = l \<and> [curry (conv_M D)]\<^bsub>v,n\<^esub> = Z" "\<lambda>_. True" wf_state by (rule op_precise.step_z_E_from_op_bisim_empty) sublocale zone_impl_simulation: Graphs.Simulation_Invariant "\<lambda> (l, Z) (l', Z'). sem.step_z (l, Z) (l', Z') \<and> Z' \<noteq> {}" step_impl "\<lambda>(l, Z) (l', D). l' = l \<and> [curry (conv_M D)]\<^bsub>v,n\<^esub> = Z" "\<lambda>_. True" wf_state proof (standard; (clarsimp simp del: One_nat_def)?) fix l l' :: 's and Z' :: "(nat \<Rightarrow> real) set" and M :: "int DBM'" assume "wf_state (l, M)" and sem_step: "sem.step_z (l, [curry (conv_M M)]\<^bsub>v,n\<^esub>) (l', Z')" and "Z' \<noteq> {}" from sem_step show "\<exists>b. 
local.step_impl (l, M) (l', b) \<and> [curry (conv_M b)]\<^bsub>v,n\<^esub> = Z'" proof cases case (action Z\<^sub>s a C) from strategy_SimulationE[OF action(3) \<open>wf_state _\<close>] obtain M\<^sub>s where "(M\<^sub>s, C) \<in> S (l, M)" "Z\<^sub>s = [curry (conv_M M\<^sub>s)]\<^bsub>v,n\<^esub>" . have "wf_state (l, M\<^sub>s)" using \<open>(M\<^sub>s, C) \<in> S (l, M)\<close> \<open>wf_state (l, M)\<close> by (rule strategy_split_preserves_wf_state) with bisim_empty_zone.A_B_step[of "(l, Z\<^sub>s)" "\<upharpoonleft>a" "(l', Z')" "(l, M\<^sub>s)", simplified] action(1) \<open>Z' \<noteq> {}\<close> \<open>Z\<^sub>s = _\<close> obtain M' where "op_precise.E_from_op_empty (l, M\<^sub>s) \<upharpoonleft>a (l', M')" "[curry (conv_M M')]\<^bsub>v,n\<^esub> = Z'" by metis with action(2,4) \<open>_\<in> S (l, M)\<close> show ?thesis by (inst_existentials M') (rule step_impl.intros; simp) next case (wait Z\<^sub>s C) from strategy_SimulationE[OF wait(2) \<open>wf_state _\<close>] obtain M\<^sub>s where "(M\<^sub>s, C) \<in> S (l, M)" "Z\<^sub>s = [curry (conv_M M\<^sub>s)]\<^bsub>v,n\<^esub>" . have "wf_state (l, M\<^sub>s)" using \<open>(M\<^sub>s, C) \<in> S (l, M)\<close> \<open>wf_state (l, M)\<close> by (rule strategy_split_preserves_wf_state) with bisim_empty_zone.A_B_step[of "(l, Z\<^sub>s)" \<tau> "(l', Z')" "(l, M\<^sub>s)", simplified] wait(1) \<open>Z' \<noteq> {}\<close> \<open>Z\<^sub>s = _\<close> obtain M' where "op_precise.E_from_op_empty (l, M\<^sub>s) \<tau> (l', M')" "[curry (conv_M M')]\<^bsub>v,n\<^esub> = Z'" by metis with wait(3) \<open>_\<in> S (l, M)\<close> show ?thesis by (inst_existentials M') (rule step_impl.intros; simp) next case (uncontrolled a) from bisim_empty_zone.A_B_step[of _ "\<upharpoonleft>a" "(l', Z')" "(l, M)", simplified] uncontrolled(1) \<open>Z' \<noteq> {}\<close> \<open>wf_state _\<close> obtain M' where "op_precise.E_from_op_empty (l, M) \<upharpoonleft>a (l', M')" "[curry (conv_M M')]\<^bsub>v,n\<^esub> = Z'" by auto with uncontrolled(2-) show ?thesis by (inst_existentials M') (rule step_impl.intros; simp) qed next fix l l' :: 's and M M' :: "int DBM'" assume "wf_state (l, M)" "step_impl (l, M) (l', M')" from this(2,1) show "wf_state (l', M')" by (cases; elim bisim_empty_zone.B_invariant[rotated] strategy_split_preserves_wf_state) qed sublocale sem_impl_simulation: Graphs.Simulation_Invariant sem.step step_impl "\<lambda>(l, u) (l', D). l' = l \<and> u \<in> [curry (conv_M D)]\<^bsub>v,n\<^esub>" "\<lambda>_. True" wf_state apply (rule Simulation_Invariant_sim_replace) apply (rule Simulation_Invariant_composition) apply (rule sem.zone_simulation.Simulation_Invariant) apply (rule zone_impl_simulation.Simulation_Invariant_axioms) apply auto done lemma invariant_safe_fromI: assumes "Graph_Invariant_Start step_impl (l\<^sub>0, Z\<^sub>0) I" "\<forall>l Z. I (l, Z) \<longrightarrow> from_R l ([curry (conv_M Z)]\<^bsub>v,n\<^esub>) \<inter> K = {}" "u\<^sub>0 \<in> [curry (conv_M Z\<^sub>0)]\<^bsub>v,n\<^esub>" "wf_state (l\<^sub>0, Z\<^sub>0)" shows "sem.safe_from (l\<^sub>0, u\<^sub>0)" proof - interpret inv: Graph_Invariant_Start step_impl "(l\<^sub>0, Z\<^sub>0)" I by (rule assms) show ?thesis unfolding sem.safe_from_alt_def using assms by (auto dest!: inv.invariant_reaches sem_impl_simulation.simulation_reaches[where b = "(l\<^sub>0, Z\<^sub>0)"]) qed lemma safe_fromI: "sem.safe_from (l\<^sub>0, u\<^sub>0)" if "(\<nexists>l' D'. 
step_impl\<^sup>*\<^sup>* (l\<^sub>0, D\<^sub>0) (l', D') \<and> from_R l' ([curry (conv_M D')]\<^bsub>v,n\<^esub>) \<inter> K \<noteq> {})" "u\<^sub>0 \<in> [curry (conv_M D\<^sub>0)]\<^bsub>v,n\<^esub>" "wf_dbm D\<^sub>0" unfolding sem.safe_from_alt_def using that by (auto simp: wf_state_def dest!: sem_impl_simulation.simulation_reaches[where b = "(l\<^sub>0, D\<^sub>0)"]) end end
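The workhorse of this theory is `Simulation_Invariant_composition`: two invariant-respecting simulations compose through the intermediate system,

$$
a \sim_{AC} c \;:\Longleftrightarrow\; \exists\, b.\; P_B\,b \;\wedge\; a \sim_{AB} b \;\wedge\; b \sim_{BC} c,
$$

which is exactly how `sem_impl_simulation` chains the concrete-to-zone simulation with the zone-to-DBM simulation, before `Simulation_Invariant_sim_replace` simplifies the composed relation to the direct membership relation.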
------------------------------------------------------------------------ -- The Agda standard library -- -- Indexed binary relations ------------------------------------------------------------------------ {-# OPTIONS --without-K --safe #-} module Relation.Binary.Indexed.Heterogeneous where open import Function open import Level using (suc; _⊔_) open import Relation.Binary using (_⇒_) open import Relation.Binary.PropositionalEquality.Core as P using (_≡_) ------------------------------------------------------------------------ -- Publically export core definitions open import Relation.Binary.Indexed.Heterogeneous.Core public ------------------------------------------------------------------------ -- Equivalences record IsIndexedEquivalence {i a ℓ} {I : Set i} (A : I → Set a) (_≈_ : IRel A ℓ) : Set (i ⊔ a ⊔ ℓ) where field refl : Reflexive A _≈_ sym : Symmetric A _≈_ trans : Transitive A _≈_ reflexive : ∀ {i} → _≡_ ⟨ _⇒_ ⟩ _≈_ {i} reflexive P.refl = refl record IndexedSetoid {i} (I : Set i) c ℓ : Set (suc (i ⊔ c ⊔ ℓ)) where infix 4 _≈_ field Carrier : I → Set c _≈_ : IRel Carrier ℓ isEquivalence : IsIndexedEquivalence Carrier _≈_ open IsIndexedEquivalence isEquivalence public ------------------------------------------------------------------------ -- Preorders record IsIndexedPreorder {i a ℓ₁ ℓ₂} {I : Set i} (A : I → Set a) (_≈_ : IRel A ℓ₁) (_∼_ : IRel A ℓ₂) : Set (i ⊔ a ⊔ ℓ₁ ⊔ ℓ₂) where field isEquivalence : IsIndexedEquivalence A _≈_ reflexive : ∀ {i j} → (_≈_ {i} {j}) ⟨ _⇒_ ⟩ _∼_ trans : Transitive A _∼_ module Eq = IsIndexedEquivalence isEquivalence refl : Reflexive A _∼_ refl = reflexive Eq.refl record IndexedPreorder {i} (I : Set i) c ℓ₁ ℓ₂ : Set (suc (i ⊔ c ⊔ ℓ₁ ⊔ ℓ₂)) where infix 4 _≈_ _∼_ field Carrier : I → Set c _≈_ : IRel Carrier ℓ₁ -- The underlying equality. _∼_ : IRel Carrier ℓ₂ -- The relation. isPreorder : IsIndexedPreorder Carrier _≈_ _∼_ open IsIndexedPreorder isPreorder public ------------------------------------------------------------------------ -- DEPRECATED NAMES ------------------------------------------------------------------------ -- Please use the new names as continuing support for the old names is -- not guaranteed. -- Version 0.17 REL = IREL {-# WARNING_ON_USAGE REL "Warning: REL was deprecated in v0.17. Please use IREL instead." #-} Rel = IRel {-# WARNING_ON_USAGE Rel "Warning: Rel was deprecated in v0.17. Please use IRel instead." #-} Setoid = IndexedSetoid {-# WARNING_ON_USAGE Setoid "Warning: Setoid was deprecated in v0.17. Please use IndexedSetoid instead." #-} IsEquivalence = IsIndexedEquivalence {-# WARNING_ON_USAGE IsEquivalence "Warning: IsEquivalence was deprecated in v0.17. Please use IsIndexedEquivalence instead." #-}
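For intuition, the heterogeneous indexed relation type from the `Core` module imported above relates elements living over possibly different indices,

$$
{\sim}\; :\; \forall\,\{i\;j\}.\; A\,i \to A\,j \to \mathrm{Set}\,\ell,
$$

and `IsIndexedEquivalence` packages reflexivity, symmetry, and transitivity for such a family, with `reflexive` derived from propositional equality `_≡_`.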
chapter \<open> Suppes' Theorem \label{sec:suppes-theorem} \<close> theory Suppes_Theorem imports Probability_Logic begin no_notation FuncSet.funcset (infixr "\<rightarrow>" 60) text \<open> An elementary completeness theorem for inequalities for probability logic is due to Patrick Suppes @{cite suppesProbabilisticInferenceConcept1966}. \<close> text \<open> A consequence of this Suppes' theorem is an elementary form of \<^emph>\<open>collapse\<close>, which asserts that inequalities for probabilities are logically equivalent to the more restricted class of \<^emph>\<open>Dirac measures\<close> as defined in \S\ref{sec:dirac-measures}. \<close> section \<open> Suppes' List Theorem \label{sec:suppes-theorem-for-lists} \<close> text \<open> We first establish Suppes' theorem for lists of propositions. This is done by establishing our first completeness theorem using \<^emph>\<open>Dirac measures\<close>. \<close> text \<open> First, we use the result from \S\ref{sec:basic-probability-inequality-results} that shows \<open>\<turnstile> \<phi> \<rightarrow> \<Squnion> \<Psi>\<close> implies \<open>\<P> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<P> \<psi>)\<close>. This can be understood as a \<^emph>\<open>soundness\<close> result. \<close> text \<open> To show completeness, assume \<open>\<not> \<turnstile> \<phi> \<rightarrow> \<Squnion> \<Psi>\<close>. From this obtain a maximally consistent \<^term>\<open>\<Omega>\<close> such that \<open>\<phi> \<rightarrow> \<Squnion> \<Psi> \<notin> \<Omega>\<close>. We then define \<^term>\<open>\<delta> \<chi> = (if (\<chi> \<in> \<Omega>) then 1 else 0)\<close> and show \<^term>\<open>\<delta>\<close> is a \<^emph>\<open>Dirac measure\<close> such that \<open>\<delta> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<delta> \<psi>)\<close>. \<close> lemma (in classical_logic) dirac_list_summation_completeness: "(\<forall> \<delta> \<in> dirac_measures. \<delta> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<delta> \<psi>)) = \<turnstile> \<phi> \<rightarrow> \<Squnion> \<Psi>" proof - { fix \<delta> :: "'a \<Rightarrow> real" assume "\<delta> \<in> dirac_measures" from this interpret probability_logic "(\<lambda> \<phi>. \<turnstile> \<phi>)" "(\<rightarrow>)" "\<bottom>" "\<delta>" unfolding dirac_measures_def by auto assume "\<turnstile> \<phi> \<rightarrow> \<Squnion> \<Psi>" hence "\<delta> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<delta> \<psi>)" using implication_list_summation_inequality by auto } moreover { assume "\<not> \<turnstile> \<phi> \<rightarrow> \<Squnion> \<Psi>" from this obtain \<Omega> where \<Omega>: "MCS \<Omega>" "\<phi> \<in> \<Omega>" "\<Squnion> \<Psi> \<notin> \<Omega>" by (meson insert_subset formula_consistent_def formula_maximal_consistency formula_maximally_consistent_extension formula_maximally_consistent_set_def_def set_deduction_base_theory set_deduction_reflection set_deduction_theorem) hence"\<forall> \<psi> \<in> set \<Psi>. \<psi> \<notin> \<Omega>" using arbitrary_disjunction_exclusion_MCS by blast define \<delta> where "\<delta> = (\<lambda> \<chi> . if \<chi>\<in>\<Omega> then (1 :: real) else 0)" from \<open>\<forall> \<psi> \<in> set \<Psi>. \<psi> \<notin> \<Omega>\<close> have "(\<Sum>\<psi>\<leftarrow>\<Psi>. \<delta> \<psi>) = 0" unfolding \<delta>_def by (induct \<Psi>, simp, simp) hence "\<not> \<delta> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<delta> \<psi>)" unfolding \<delta>_def by (simp add: \<Omega>(2)) hence "\<exists> \<delta> \<in> dirac_measures. 
\<not> (\<delta> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<delta> \<psi>))" unfolding \<delta>_def using \<Omega>(1) MCS_dirac_measure by auto } ultimately show ?thesis by blast qed theorem (in classical_logic) list_summation_completeness: "(\<forall> \<P> \<in> probabilities. \<P> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<P> \<psi>)) = \<turnstile> \<phi> \<rightarrow> \<Squnion> \<Psi>" (is "?lhs = ?rhs") proof assume ?lhs hence "\<forall> \<delta> \<in> dirac_measures. \<delta> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<delta> \<psi>)" unfolding dirac_measures_def probabilities_def by blast thus ?rhs using dirac_list_summation_completeness by blast next assume ?rhs show ?lhs proof fix \<P> :: "'a \<Rightarrow> real" assume "\<P> \<in> probabilities" from this interpret probability_logic "(\<lambda> \<phi>. \<turnstile> \<phi>)" "(\<rightarrow>)" \<bottom> \<P> unfolding probabilities_def by auto show "\<P> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<P> \<psi>)" using \<open>?rhs\<close> implication_list_summation_inequality by simp qed qed text \<open> The collapse theorem asserts that to prove an inequalities for all probabilities in probability logic, one only needs to consider the case of functions which take on values of 0 or 1. \<close> lemma (in classical_logic) suppes_collapse: "(\<forall> \<P> \<in> probabilities. \<P> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<P> \<psi>)) = (\<forall> \<delta> \<in> dirac_measures. \<delta> \<phi> \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. \<delta> \<psi>))" by (simp add: dirac_list_summation_completeness list_summation_completeness) lemma (in classical_logic) probability_member_neg: fixes \<P> assumes "\<P> \<in> probabilities" shows "\<P> (\<sim> \<phi>) = 1 - \<P> \<phi>" proof - from assms interpret probability_logic "(\<lambda> \<phi>. \<turnstile> \<phi>)" "(\<rightarrow>)" \<bottom> \<P> unfolding probabilities_def by auto show ?thesis by (simp add: complementation) qed text \<open> Suppes' theorem has a philosophical interpretation. It asserts that if \<^term>\<open>\<Psi> :\<turnstile> \<phi>\<close>, then our \<^emph>\<open>uncertainty\<close> in \<^term>\<open>\<phi>\<close> is bounded above by our uncertainty in \<^term>\<open>\<Psi>\<close>. Here the uncertainty in the proposition \<^term>\<open>\<phi>\<close> is \<open>1 - \<P> \<phi>\<close>. Our uncertainty in \<^term>\<open>\<Psi>\<close>, on the other hand, is \<open>\<Sum>\<psi>\<leftarrow>\<Psi>. 1 - \<P> \<psi>\<close>. \<close> theorem (in classical_logic) suppes_list_theorem: "\<Psi> :\<turnstile> \<phi> = (\<forall> \<P> \<in> probabilities. (\<Sum>\<psi>\<leftarrow>\<Psi>. 1 - \<P> \<psi>) \<ge> 1 - \<P> \<phi>)" proof - have "\<Psi> :\<turnstile> \<phi> = (\<forall> \<P> \<in> probabilities. (\<Sum>\<psi> \<leftarrow> \<^bold>\<sim> \<Psi>. \<P> \<psi>) \<ge> \<P> (\<sim> \<phi>))" using list_summation_completeness weak_biconditional_weaken contra_list_curry_uncurry list_deduction_def by blast moreover have "\<forall> \<P> \<in> probabilities. (\<Sum>\<psi> \<leftarrow> (\<^bold>\<sim> \<Psi>). \<P> \<psi>) = (\<Sum>\<psi> \<leftarrow> \<Psi>. \<P> (\<sim> \<psi>))" by (induct \<Psi>, auto) ultimately show ?thesis using probability_member_neg by (induct \<Psi>, simp+) qed section \<open> Suppes' Set Theorem \<close> text \<open> Suppes theorem also obtains for \<^emph>\<open>sets\<close>. \<close> lemma (in classical_logic) dirac_set_summation_completeness: "(\<forall> \<delta> \<in> dirac_measures. 
\<delta> \<phi> \<le> (\<Sum>\<psi>\<in> set \<Psi>. \<delta> \<psi>)) = \<turnstile> \<phi> \<rightarrow> \<Squnion> \<Psi>" by (metis dirac_list_summation_completeness modus_ponens arbitrary_disjunction_remdups biconditional_left_elimination biconditional_right_elimination hypothetical_syllogism sum.set_conv_list) theorem (in classical_logic) set_summation_completeness: "(\<forall> \<delta> \<in> probabilities. \<delta> \<phi> \<le> (\<Sum>\<psi>\<in> set \<Psi>. \<delta> \<psi>)) = \<turnstile> \<phi> \<rightarrow> \<Squnion> \<Psi>" by (metis dirac_list_summation_completeness dirac_set_summation_completeness list_summation_completeness sum.set_conv_list) lemma (in classical_logic) suppes_set_collapse: "(\<forall> \<P> \<in> probabilities. \<P> \<phi> \<le> (\<Sum>\<psi> \<in> set \<Psi>. \<P> \<psi>)) = (\<forall> \<delta> \<in> dirac_measures. \<delta> \<phi> \<le> (\<Sum>\<psi> \<in> set \<Psi>. \<delta> \<psi>))" by (simp add: dirac_set_summation_completeness set_summation_completeness) text \<open> In our formulation of logic, there is no reason that we cannot have \<open>\<sim> a = \<sim> b\<close> while \<^term>\<open>a \<noteq> b\<close>. As a consequence, the Suppes theorem for sets presented below differs from the one given in \S\ref{sec:suppes-theorem-for-lists}. \<close> theorem (in classical_logic) suppes_set_theorem: "\<Psi> :\<turnstile> \<phi> = (\<forall> \<P> \<in> probabilities. (\<Sum>\<psi> \<in> set (\<^bold>\<sim> \<Psi>). \<P> \<psi>) \<ge> 1 - \<P> \<phi>)" proof - have "\<Psi> :\<turnstile> \<phi> = (\<forall> \<P> \<in> probabilities. (\<Sum>\<psi> \<in> set (\<^bold>\<sim> \<Psi>). \<P> \<psi>) \<ge> \<P> (\<sim> \<phi>))" using contra_list_curry_uncurry list_deduction_def set_summation_completeness weak_biconditional_weaken by blast thus ?thesis using probability_member_neg by (induct \<Psi>, auto) qed section \<open> Converse Suppes' Theorem \<close> text \<open> A formulation of the converse of Suppes' theorem obtains for lists/sets of \<^emph>\<open>logically disjoint\<close> propositions. \<close> lemma (in probability_logic) exclusive_sum_list_identity: assumes "\<turnstile> \<Coprod> \<Phi>" shows "\<P> (\<Squnion> \<Phi>) = (\<Sum>\<phi>\<leftarrow>\<Phi>. \<P> \<phi>)" using assms proof (induct \<Phi>) case Nil then show ?case by (simp add: gaines_weatherson_antithesis) next case (Cons \<phi> \<Phi>) assume "\<turnstile> \<Coprod> (\<phi> # \<Phi>)" hence "\<turnstile> \<sim> (\<phi> \<sqinter> \<Squnion> \<Phi>)" "\<turnstile> \<Coprod> \<Phi>" by simp+ hence "\<P> (\<Squnion>(\<phi> # \<Phi>)) = \<P> \<phi> + \<P> (\<Squnion> \<Phi>)" "\<P> (\<Squnion> \<Phi>) = (\<Sum>\<phi>\<leftarrow>\<Phi>. \<P> \<phi>)" using Cons.hyps probability_additivity by auto hence "\<P> (\<Squnion>(\<phi> # \<Phi>)) = \<P> \<phi> + (\<Sum>\<phi>\<leftarrow>\<Phi>. \<P> \<phi>)" by auto thus ?case by simp qed lemma sum_list_monotone: fixes f :: "'a \<Rightarrow> real" assumes "\<forall> x. f x \<ge> 0" and "set \<Phi> \<subseteq> set \<Psi>" and "distinct \<Phi>" shows "(\<Sum>\<phi>\<leftarrow>\<Phi>. f \<phi>) \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. f \<psi>)" using assms proof - assume "\<forall> x. f x \<ge> 0" have "\<forall>\<Phi>. set \<Phi> \<subseteq> set \<Psi> \<longrightarrow> distinct \<Phi> \<longrightarrow> (\<Sum>\<phi>\<leftarrow>\<Phi>. f \<phi>) \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>.
f \<psi>)" proof (induct \<Psi>) case Nil then show ?case by simp next case (Cons \<psi> \<Psi>) { fix \<Phi> assume "set \<Phi> \<subseteq> set (\<psi> # \<Psi>)" and "distinct \<Phi>" have "(\<Sum>\<phi>\<leftarrow>\<Phi>. f \<phi>) \<le> (\<Sum>\<psi>'\<leftarrow>(\<psi> # \<Psi>). f \<psi>')" proof - { assume "\<psi> \<notin> set \<Phi>" with \<open>set \<Phi> \<subseteq> set (\<psi> # \<Psi>)\<close> have "set \<Phi> \<subseteq> set \<Psi>" by auto hence "(\<Sum>\<phi>\<leftarrow>\<Phi>. f \<phi>) \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. f \<psi>)" using Cons.hyps \<open>distinct \<Phi>\<close> by auto moreover have "f \<psi> \<ge> 0" using \<open>\<forall> x. f x \<ge> 0\<close> by metis ultimately have ?thesis by simp } moreover { assume "\<psi> \<in> set \<Phi>" hence "set \<Phi> = insert \<psi> (set (removeAll \<psi> \<Phi>))" by auto with \<open>set \<Phi> \<subseteq> set (\<psi> # \<Psi>)\<close> have "set (removeAll \<psi> \<Phi>) \<subseteq> set \<Psi>" by (metis insert_subset list.simps(15) set_removeAll subset_insert_iff) moreover from \<open>distinct \<Phi>\<close> have "distinct (removeAll \<psi> \<Phi>)" by (meson distinct_removeAll) ultimately have "(\<Sum>\<phi>\<leftarrow>(removeAll \<psi> \<Phi>). f \<phi>) \<le> (\<Sum>\<psi>\<leftarrow>\<Psi>. f \<psi>)" using Cons.hyps by simp moreover from \<open>\<psi> \<in> set \<Phi>\<close> \<open>distinct \<Phi>\<close> have "(\<Sum>\<phi>\<leftarrow>\<Phi>. f \<phi>) = f \<psi> + (\<Sum>\<phi>\<leftarrow>(removeAll \<psi> \<Phi>). f \<phi>)" using distinct_remove1_removeAll sum_list_map_remove1 by fastforce ultimately have ?thesis using \<open>\<forall> x. f x \<ge> 0\<close> by simp } ultimately show ?thesis by blast qed } thus ?case by blast qed moreover assume "set \<Phi> \<subseteq> set \<Psi>" and "distinct \<Phi>" ultimately show ?thesis by blast qed lemma count_remove_all_sum_list: fixes f :: "'a \<Rightarrow> real" shows "real (count_list xs x) * f x + (\<Sum>x'\<leftarrow>(removeAll x xs). f x') = (\<Sum>x\<leftarrow>xs. f x)" by (induct xs, simp, simp, metis combine_common_factor mult_cancel_right1) lemma (in classical_logic) dirac_exclusive_implication_completeness: "(\<forall> \<delta> \<in> dirac_measures. (\<Sum>\<phi>\<leftarrow>\<Phi>. \<delta> \<phi>) \<le> \<delta> \<psi>) = (\<turnstile> \<Coprod> \<Phi> \<and> \<turnstile> \<Squnion> \<Phi> \<rightarrow> \<psi>)" proof - { fix \<delta> assume "\<delta> \<in> dirac_measures" from this interpret probability_logic "(\<lambda> \<phi>. \<turnstile> \<phi>)" "(\<rightarrow>)" "\<bottom>" "\<delta>" unfolding dirac_measures_def by simp assume "\<turnstile> \<Coprod> \<Phi>" "\<turnstile> \<Squnion> \<Phi> \<rightarrow> \<psi>" hence "(\<Sum>\<phi>\<leftarrow>\<Phi>. \<delta> \<phi>) \<le> \<delta> \<psi>" using exclusive_sum_list_identity monotonicity by fastforce } moreover { assume "\<not> \<turnstile> \<Coprod> \<Phi>" hence "(\<exists> \<phi> \<in> set \<Phi>. \<exists> \<psi> \<in> set \<Phi>. \<phi> \<noteq> \<psi> \<and> \<not> \<turnstile> \<sim> (\<phi> \<sqinter> \<psi>)) \<or> (\<exists> \<phi> \<in> duplicates \<Phi>. \<not> \<turnstile> \<sim> \<phi>)" using exclusive_equivalence set_deduction_base_theory by blast hence "\<not> (\<forall> \<delta> \<in> dirac_measures. (\<Sum>\<phi>\<leftarrow>\<Phi>. \<delta> \<phi>) \<le> \<delta> \<psi>)" proof (elim disjE) assume "\<exists> \<phi> \<in> set \<Phi>. \<exists> \<chi> \<in> set \<Phi>. 
\<phi> \<noteq> \<chi> \<and> \<not> \<turnstile> \<sim> (\<phi> \<sqinter> \<chi>)" from this obtain \<phi> and \<chi> where \<phi>\<chi>_properties: "\<phi> \<in> set \<Phi>" "\<chi> \<in> set \<Phi>" "\<phi> \<noteq> \<chi>" "\<not> \<turnstile> \<sim> (\<phi> \<sqinter> \<chi>)" by blast from this obtain \<Omega> where \<Omega>: "MCS \<Omega>" "\<sim> (\<phi> \<sqinter> \<chi>) \<notin> \<Omega>" by (meson insert_subset formula_consistent_def formula_maximal_consistency formula_maximally_consistent_extension formula_maximally_consistent_set_def_def set_deduction_base_theory set_deduction_reflection set_deduction_theorem) let ?\<delta> = "\<lambda> \<chi>. if \<chi>\<in>\<Omega> then (1 :: real) else 0" from \<Omega> have "\<phi> \<in> \<Omega>" "\<chi> \<in> \<Omega>" by (metis formula_maximally_consistent_set_def_implication maximally_consistent_set_def conjunction_def negation_def)+ with \<phi>\<chi>_properties have "(\<Sum>\<phi>\<leftarrow>[\<phi>, \<chi>]. ?\<delta> \<phi>) = 2" "set [\<phi>, \<chi>] \<subseteq> set \<Phi>" "distinct [\<phi>, \<chi>]" "\<forall>\<phi>. ?\<delta> \<phi> \<ge> 0" by simp+ hence "(\<Sum>\<phi>\<leftarrow>\<Phi>. ?\<delta> \<phi>) \<ge> 2" using sum_list_monotone by metis hence "\<not> (\<Sum>\<phi>\<leftarrow>\<Phi>. ?\<delta> \<phi>) \<le> ?\<delta> (\<psi>)" by auto thus ?thesis using \<Omega>(1) MCS_dirac_measure by auto next assume "\<exists> \<phi> \<in> duplicates \<Phi>. \<not> \<turnstile> \<sim> \<phi>" from this obtain \<phi> where \<phi>: "\<phi> \<in> duplicates \<Phi>" "\<not> \<turnstile> \<sim> \<phi>" using exclusive_equivalence [where \<Gamma>="{}"] set_deduction_base_theory by blast from \<phi> obtain \<Omega> where \<Omega>: "MCS \<Omega>" "\<sim> \<phi> \<notin> \<Omega>" by (meson insert_subset formula_consistent_def formula_maximal_consistency formula_maximally_consistent_extension formula_maximally_consistent_set_def_def set_deduction_base_theory set_deduction_reflection set_deduction_theorem) hence "\<phi> \<in> \<Omega>" using negation_def by auto let ?\<delta> = "\<lambda> \<chi>. if \<chi>\<in>\<Omega> then (1 :: real) else 0" from \<phi> have "count_list \<Phi> \<phi> \<ge> 2" using duplicates_alt_def [where xs="\<Phi>"] by blast hence "real (count_list \<Phi> \<phi>) * ?\<delta> \<phi> \<ge> 2" using \<open>\<phi> \<in> \<Omega>\<close> by simp moreover { fix \<Psi> have "(\<Sum>\<phi>\<leftarrow>\<Psi>. ?\<delta> \<phi>) \<ge> 0" by (induct \<Psi>, simp, simp) } moreover have "(0::real) \<le> (\<Sum>a\<leftarrow>removeAll \<phi> \<Phi>. if a \<in> \<Omega> then 1 else 0)" using \<open>\<And>\<Psi>. 0 \<le> (\<Sum>\<phi>\<leftarrow>\<Psi>. if \<phi> \<in> \<Omega> then 1 else 0)\<close> by presburger ultimately have "real (count_list \<Phi> \<phi>) * ?\<delta> \<phi> + (\<Sum> \<phi> \<leftarrow> (removeAll \<phi> \<Phi>). ?\<delta> \<phi>) \<ge> 2" using \<open>2 \<le> real (count_list \<Phi> \<phi>) * (if \<phi> \<in> \<Omega> then 1 else 0)\<close> by linarith hence "(\<Sum>\<phi>\<leftarrow>\<Phi>. ?\<delta> \<phi>) \<ge> 2" by (metis count_remove_all_sum_list) hence "\<not> (\<Sum>\<phi>\<leftarrow>\<Phi>. 
?\<delta> \<phi>) \<le> ?\<delta> (\<psi>)" by auto thus ?thesis using \<Omega>(1) MCS_dirac_measure by auto qed } moreover { assume "\<not> \<turnstile> \<Squnion> \<Phi> \<rightarrow> \<psi>" from this obtain \<Omega> \<phi> where \<Omega>: "MCS \<Omega>" and \<psi>: "\<psi> \<notin> \<Omega>" and \<phi>: "\<phi> \<in> set \<Phi>" "\<phi> \<in> \<Omega>" by (meson insert_subset formula_consistent_def formula_maximal_consistency formula_maximally_consistent_extension formula_maximally_consistent_set_def_def arbitrary_disjunction_exclusion_MCS set_deduction_base_theory set_deduction_reflection set_deduction_theorem) let ?\<delta> = "\<lambda> \<chi>. if \<chi>\<in>\<Omega> then (1 :: real) else 0" from \<phi> have "(\<Sum>\<phi>\<leftarrow>\<Phi>. ?\<delta> \<phi>) \<ge> 1" proof (induct \<Phi>) case Nil then show ?case by simp next case (Cons \<phi>' \<Phi>) obtain f :: "real list \<Rightarrow> real" where f: "\<forall>rs. f rs \<in> set rs \<and> \<not> 0 \<le> f rs \<or> 0 \<le> sum_list rs" using sum_list_nonneg by moura moreover have "f (map ?\<delta> \<Phi>) \<notin> set (map ?\<delta> \<Phi>) \<or> 0 \<le> f (map ?\<delta> \<Phi>)" by fastforce ultimately show ?case by (simp, metis Cons.hyps Cons.prems(1) \<phi>(2) set_ConsD) qed hence "\<not> (\<Sum>\<phi>\<leftarrow>\<Phi>. ?\<delta> \<phi>) \<le> ?\<delta> (\<psi>)" using \<psi> by auto hence "\<not> (\<forall> \<delta> \<in> dirac_measures. (\<Sum>\<phi>\<leftarrow>\<Phi>. \<delta> \<phi>) \<le> \<delta> \<psi>)" using \<Omega>(1) MCS_dirac_measure by auto } ultimately show ?thesis by blast qed theorem (in classical_logic) exclusive_implication_completeness: "(\<forall> \<P> \<in> probabilities. (\<Sum>\<phi>\<leftarrow>\<Phi>. \<P> \<phi>) \<le> \<P> \<psi>) = (\<turnstile> \<Coprod> \<Phi> \<and> \<turnstile> \<Squnion> \<Phi> \<rightarrow> \<psi>)" (is "?lhs = ?rhs") proof assume ?lhs thus ?rhs by (meson dirac_exclusive_implication_completeness dirac_measures_subset subset_eq) next assume ?rhs show ?lhs proof fix \<P> :: "'a \<Rightarrow> real" assume "\<P> \<in> probabilities" from this interpret probability_logic "(\<lambda> \<phi>. \<turnstile> \<phi>)" "(\<rightarrow>)" \<bottom> \<P> unfolding probabilities_def by simp show "(\<Sum>\<phi>\<leftarrow>\<Phi>. \<P> \<phi>) \<le> \<P> \<psi>" using \<open>?rhs\<close> exclusive_sum_list_identity monotonicity by fastforce qed qed lemma (in classical_logic) dirac_inequality_completeness: "(\<forall> \<delta> \<in> dirac_measures. \<delta> \<phi> \<le> \<delta> \<psi>) = \<turnstile> \<phi> \<rightarrow> \<psi>" proof - have "\<turnstile> \<Coprod> [\<phi>]" by (simp add: conjunction_right_elimination negation_def) hence "(\<turnstile> \<Coprod> [\<phi>] \<and> \<turnstile> \<Squnion> [\<phi>] \<rightarrow> \<psi>) = \<turnstile> \<phi> \<rightarrow> \<psi>" by (metis arbitrary_disjunction.simps(1) arbitrary_disjunction.simps(2) disjunction_def implication_equivalence negation_def weak_biconditional_weaken) thus ?thesis using dirac_exclusive_implication_completeness [where \<Phi>="[\<phi>]"] by auto qed section \<open> Implication Inequality Completeness \<close> text \<open> The following theorem establishes the converse of \<open>\<turnstile> \<phi> \<rightarrow> \<psi> \<Longrightarrow> \<P> \<phi> \<le> \<P> \<psi>\<close>, which was proved in \S\ref{sec:prob-logic-alt-def}. \<close> theorem (in classical_logic) implication_inequality_completeness: "(\<forall> \<P> \<in> probabilities. 
\<P> \<phi> \<le> \<P> \<psi>) = \<turnstile> \<phi> \<rightarrow> \<psi>" proof - have "\<turnstile> \<Coprod> [\<phi>]" by (simp add: conjunction_right_elimination negation_def) hence "(\<turnstile> \<Coprod> [\<phi>] \<and> \<turnstile> \<Squnion> [\<phi>] \<rightarrow> \<psi>) = \<turnstile> \<phi> \<rightarrow> \<psi>" by (metis arbitrary_disjunction.simps(1) arbitrary_disjunction.simps(2) disjunction_def implication_equivalence negation_def weak_biconditional_weaken) thus ?thesis using exclusive_implication_completeness [where \<Phi>="[\<phi>]"] by simp qed section \<open> Characterizing Logical Exclusiveness In Probability Logic \<close> text \<open> Finally, we can say that \<open>\<P> (\<Squnion> \<Phi>) = (\<Sum>\<phi>\<leftarrow>\<Phi>. \<P> \<phi>)\<close> if and only if the propositions in \<^term>\<open>\<Phi>\<close> are mutually exclusive (i.e. \<open>\<turnstile> \<Coprod> \<Phi>\<close>). This result also obtains for sets. \<close> lemma (in classical_logic) dirac_exclusive_list_summation_completeness: "(\<forall> \<delta> \<in> dirac_measures. \<delta> (\<Squnion> \<Phi>) = (\<Sum>\<phi>\<leftarrow>\<Phi>. \<delta> \<phi>)) = \<turnstile> \<Coprod> \<Phi>" by (metis antisym_conv dirac_exclusive_implication_completeness dirac_list_summation_completeness trivial_implication) theorem (in classical_logic) exclusive_list_summation_completeness: "(\<forall> \<P> \<in> probabilities. \<P> (\<Squnion> \<Phi>) = (\<Sum>\<phi>\<leftarrow>\<Phi>. \<P> \<phi>)) = \<turnstile> \<Coprod> \<Phi>" by (metis antisym_conv exclusive_implication_completeness list_summation_completeness trivial_implication) lemma (in classical_logic) dirac_exclusive_set_summation_completeness: "(\<forall> \<delta> \<in> dirac_measures. \<delta> (\<Squnion> \<Phi>) = (\<Sum>\<phi> \<in> set \<Phi>. \<delta> \<phi>)) = \<turnstile> \<Coprod> (remdups \<Phi>)" by (metis (mono_tags) eq_iff dirac_exclusive_implication_completeness dirac_set_summation_completeness trivial_implication set_remdups sum.set_conv_list eq_iff dirac_exclusive_implication_completeness dirac_set_summation_completeness trivial_implication set_remdups sum.set_conv_list antisym_conv) theorem (in classical_logic) exclusive_set_summation_completeness: "(\<forall> \<P> \<in> probabilities. \<P> (\<Squnion> \<Phi>) = (\<Sum>\<phi> \<in> set \<Phi>. \<P> \<phi>)) = \<turnstile> \<Coprod> (remdups \<Phi>)" by (metis (mono_tags, opaque_lifting) antisym_conv exclusive_list_summation_completeness exclusive_implication_completeness implication_inequality_completeness set_summation_completeness order.refl sum.set_conv_list) lemma (in probability_logic) exclusive_list_set_inequality: assumes "\<turnstile> \<Coprod> \<Phi>" shows "(\<Sum>\<phi>\<leftarrow>\<Phi>. \<P> \<phi>) = (\<Sum>\<phi>\<in>set \<Phi>. \<P> \<phi>)" proof - have "distinct (remdups \<Phi>)" using distinct_remdups by auto hence "duplicates (remdups \<Phi>) = {}" by (induct "\<Phi>", simp+) moreover have "set (remdups \<Phi>) = set \<Phi>" by (induct "\<Phi>", simp, simp add: insert_absorb) moreover have "(\<forall>\<phi> \<in> duplicates \<Phi>. \<turnstile> \<sim> \<phi>) \<and> (\<forall> \<phi> \<in> set \<Phi>. \<forall> \<psi> \<in> set \<Phi>. (\<phi> \<noteq> \<psi>) \<longrightarrow> \<turnstile> \<sim> (\<phi> \<sqinter> \<psi>))" using assms exclusive_elimination1 exclusive_elimination2 set_deduction_base_theory by blast ultimately have "(\<forall>\<phi>\<in>duplicates (remdups \<Phi>). \<turnstile> \<sim> \<phi>) \<and> (\<forall> \<phi> \<in> set (remdups \<Phi>). 
\<forall> \<psi> \<in> set (remdups \<Phi>). (\<phi> \<noteq> \<psi>) \<longrightarrow> \<turnstile> \<sim> (\<phi> \<sqinter> \<psi>))" by auto hence "\<turnstile> \<Coprod> (remdups \<Phi>)" by (meson exclusive_equivalence set_deduction_base_theory) hence "(\<Sum>\<phi>\<in>set \<Phi>. \<P> \<phi>) = \<P> (\<Squnion> \<Phi>)" by (metis arbitrary_disjunction_remdups biconditional_equivalence exclusive_sum_list_identity sum.set_conv_list) moreover have "(\<Sum>\<phi>\<leftarrow>\<Phi>. \<P> \<phi>) = \<P> (\<Squnion> \<Phi>)" by (simp add: assms exclusive_sum_list_identity) ultimately show ?thesis by metis qed notation FuncSet.funcset (infixr "\<rightarrow>" 60) end
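The collapse results above invite a quick empirical check. Below is a small Python sketch (ours, entirely outside the Isabelle development; the formula encoding and helper names are invented for illustration): by Suppes' collapse, the inequality P(phi) <= sum of P(psi) holds for all probabilities exactly when it holds for all Dirac (0/1) measures, i.e. exactly when phi implies the disjunction of the psi.

```python
# Illustrative brute-force check of the Dirac-measure criterion (ours).
from itertools import product

def eval_formula(f, v):
    """Evaluate ('atom', name) / ('not', f) / ('or', f, g) under a 0/1 valuation v."""
    if f[0] == 'atom':
        return v[f[1]]
    if f[0] == 'not':
        return 1 - eval_formula(f[1], v)
    if f[0] == 'or':
        return max(eval_formula(f[1], v), eval_formula(f[2], v))
    raise ValueError(f[0])

def holds_for_all_dirac(phi, psis, atoms):
    """Check delta(phi) <= sum(delta(psi) for psi in psis) for every Dirac measure delta."""
    for bits in product((0, 1), repeat=len(atoms)):
        v = dict(zip(atoms, bits))
        if eval_formula(phi, v) > sum(eval_formula(p, v) for p in psis):
            return False
    return True

p, q = ('atom', 'p'), ('atom', 'q')
print(holds_for_all_dirac(p, [('or', p, q)], ['p', 'q']))  # True: p entails p | q
print(holds_for_all_dirac(p, [q], ['p', 'q']))             # False: p does not entail q
```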
SUBROUTINE clawpack5_qinit(meqn,mbc,mx,my,xlower,ylower, &
     dx,dy,q,maux,aux)
  IMPLICIT NONE

  INTEGER meqn,mbc,mx,my,maux
  DOUBLE PRECISION xlower,ylower,dx,dy
  DOUBLE PRECISION q(meqn,1-mbc:mx+mbc, 1-mbc:my+mbc)
  DOUBLE PRECISION aux(maux,1-mbc:mx+mbc, 1-mbc:my+mbc)

  INTEGER i,j
  DOUBLE PRECISION xi,yj

  DO i = 1-mbc,mx+mbc
     xi = xlower + (i-0.5d0)*dx
     DO j = 1-mbc,my+mbc
        yj = ylower + (j-0.5d0)*dy
        IF (xi.GT.0.1d0 .AND. xi.LT.0.6d0 .AND. &
             yj.GT.0.1d0 .AND. yj.LT.0.6d0) THEN
           q(1,i,j) = 1.d0
        ELSE
           q(1,i,j) = 0.1d0
        ENDIF
     ENDDO
  ENDDO

  RETURN
END SUBROUTINE clawpack5_qinit
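Read back in array terms, the loop fills cell-centered values (including ghost cells) with a square pulse: 1.0 inside (0.1, 0.6) x (0.1, 0.6) and 0.1 elsewhere. A small illustrative NumPy cross-check (ours, independent of Clawpack):

```python
# Hypothetical NumPy cross-check of the qinit logic above.
import numpy as np

def qinit(mx, my, mbc, xlower, ylower, dx, dy):
    i = np.arange(1 - mbc, mx + mbc + 1)
    j = np.arange(1 - mbc, my + mbc + 1)
    xi = xlower + (i - 0.5) * dx          # cell centers, including ghost cells
    yj = ylower + (j - 0.5) * dy
    X, Y = np.meshgrid(xi, yj, indexing='ij')
    inside = (X > 0.1) & (X < 0.6) & (Y > 0.1) & (Y < 0.6)
    return np.where(inside, 1.0, 0.1)

q = qinit(mx=50, my=50, mbc=2, xlower=0.0, ylower=0.0, dx=0.02, dy=0.02)
print(q.shape, q.min(), q.max())  # (54, 54) 0.1 1.0
```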
\documentclass[letter,11pt,oneside]{article}
%%% (occur "\\(\\\\[a-z]*section\\|appendix\\|input\\|\\<include\\>\\)")
%%\documentclass[11pt,twocolumn]{article}
%%\usepackage[inline]{asymptote} %% Inline asymptote diagrams
%%\usepackage{wglatex} %% Use this one and kill others.
\usepackage{color} %% colored letters {\color{red}{{text}}
\usepackage{fancyhdr} %% headers/footers
%%\usepackage{fancyvrb} %% headers/footers
\usepackage{datetime} %% pick up tex date time
\usepackage{lastpage} %% support page of ...lastpage
\usepackage{times} %% native times roman fonts
\usepackage{textcomp} %% trademark
\usepackage{amssymb,amsmath} %% greek alphabet
\usepackage{parskip} %% blank lines between paragraphs, no indent
\usepackage{shortvrb} %% short verb use for tables
\usepackage{lscape} %% landscape for tables.
\usepackage{longtable} %% permit tables to span pages wg-longtable
\usepackage{url} %% Make URLs uniform and links in PDFs
\usepackage{enumerate} %% Allow letters/decorations for enumerations
\usepackage{endnotes} %% Enhance footnotes/endnotes
\usepackage{listings} %% Typeset source-code listings
\pdfadjustspacing=1 %% force LaTeX-like character spacing
\usepackage{geometry} %% allow margins to be relaxed
%%\usepackage{wrapfig} %% permit wrapping figures.
%%\usepackage{subfigure} %% images side by side.
\geometry{margin=1in} %% Allow narrower margins etc.
\usepackage[T1]{fontenc} %% Better Verbatim Font.
\renewcommand*\ttdefault{txtt} %%
%%\usepackage{natbib} %% bibitems
%% include background image (wg-document-page-background)
\usepackage{graphicx} %% Include pictures into a document
%% (wg-texdoc-inserttikz)
\def\documentisdraft{NOTDRAFT} %% (wg-texdoc-isdraft)
%% (wg-texdoc-insert-fancy-headers)
%%\usepackage[bookmarks]{hyperref} %% Make hyperlinks within a PDF
%%\usepackage{makeidx} %% Make an index; uncomment following line
%%\makeindex %%.. yeah this one, too. index{key} in text
%%
\definecolor{verbcolor}{rgb}{0.6,0,0}
\definecolor{darkgreen}{rgb}{0,0.4,0}
\newcommand\debate[1]{\textcolor{darkgreen}{DEBATE: #1} \marginpar{\textcolor{red}{DEBATE} }}
\newcommand{\ltodo}[2]{\marginpar{\textcolor{red}{ACTION: #1}\endnote{#2}}}
\renewcommand{\thefigure}{\thesection-\arabic{figure}}
%%(wg-texdoc-adjust-paper-width)%%Begin User Definitions: Hint: ~/.latex.defs and latex.defs
%%End User Definitions:
%% (wg-texdoc-insert-hypersetup)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%% (wg-latex-pretty-title-page)
%% (wg-texdoc-titleblock)
\setcounter{section}{1}
\pagenumbering{arabic}
\ifx\documentisdraft\drafttest
\linenumbers %%%%%%%%%%%%% DRAFT
\fi

Hi,

\vskip 1cm
\begin{quote}{\centering {\Large
I am inviting you\\
yes you\\
to be a member of the SAS panel discussion on \\
``Remote Observatories for Small-Telescope Science''\\
taking place at the SAS Symposium in Ontario\\
Friday afternoon, 31 May 2019 \\
at 4:00 PM --- 'till'. \\
}}
\end{quote}
\vskip 1cm

We have placed the discussion last on the agenda; while planned for one hour,
it can be allowed to run long. Below is a list of the basic topics off the top
of my head; please add and delete as you see fit. I am approaching this panel
as a 'blank slate'.

If you wish to participate please let me know ASAP -- then take some time to
pull together and send me questions you feel are important to the discussion.
We plan to have a PowerPoint available. We will take questions primarily from
the audience.
If you have information that can fit into slides, I will create an overall PPT
presentation with hyperlinks so we can call upon the slides as needed at
various points during the discussion.

The implicit goal of the panel is to encourage an ongoing discussion that
promotes best practices of small-telescope remote operation and avoids a
one-prescription-fits-all answer. I will be the moderator. I will not be
involved in the answers but will serve to keep the discussion moving along. We
want to include hilarious mistakes to avoid and disasters to celebrate.
Experience is a good teacher. Somebody else's experience is a better teacher!

Sincerely,

Wayne Green +1.303.818.1290 \\
[email protected]

\subsection{Broad Initial Topics}

\vspace{-.15cm}
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item First off
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Experience Level
  \item Buy
  \item Build
  \item Interoperability
  \item INDI vs ASCOM
  \item Planetarium Software
  \item Planning
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item Software
    \item Web resources
    \item catalogs
    \item database
    \end{enumerate}
  \item Raw Target List
  \end{enumerate}
\end{enumerate}
%%%%%%%%%%%
\vspace{-.15cm}
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\setcounter{enumi}{1}
\item Health and Safety
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Battery burns the house down
  \item Lightning
  \item It's Dark!
  \item It's Cold!
  \item It's Dirty!
  \item It's Quiet
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item Cameras/audio
    \end{enumerate}
  \item Thermal Expansion
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item Cable Flexibility
    \item Heat/Cold loosening connections
    \item Electrical intermittents
    \item Grease thermal range
    \end{enumerate}
  \item It's Hot!
  \item Nearby human to fix/protect things
  \end{enumerate}
%%%%%%%%%%%
\end{enumerate}
\vspace{-.15cm}
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\setcounter{enumi}{2}
\item Telescopes - align/clean/thermal expansion
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item OTA
  \item Platescale
  \end{enumerate}
\end{enumerate}
\vspace{-.15cm}
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\setcounter{enumi}{3}
\item Mounts
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item German, Fork-Equatorial, Nasmyth
  \item Alignment (field rotation)
  \item Limit Switches
  \item Home-position feedback
  \item Encoders
  \item Weight/Balance/Inertia
  \end{enumerate}
\end{enumerate}
\vspace{-.15cm}
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\setcounter{enumi}{4}
\item Targets
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Acquisition
  \item Verify
  \item Guiding
  \end{enumerate}
\end{enumerate}
%%%%%%%%%%%
\vspace{-.15cm}
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\setcounter{enumi}{5}
\item Software
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Communications
  \end{enumerate}
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item WiFi
  \item EFI (Electronic Noise)
  \item Interface
    \begin{itemize} \addtolength{\itemsep}{-0.5\baselineskip}
    \item Teamviewer
    \item VNC
    \end{itemize}
  \item NTPD Time Sync
  \item File Export
    \begin{itemize} \addtolength{\itemsep}{-0.5\baselineskip}
    \item Preprocess with pipelines
    \item SSH/Putty
    \item SFTP
    \item Git
    \end{itemize}
  \end{enumerate}
\item Windows/Linux/Mac
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Commercial
  \item Custom
  \end{enumerate}
\item SoC
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Pi
  \item Arduino
  \item Distributed/Integrated
  \end{enumerate}
\item Mounts
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Control/Recovery
  \end{enumerate}
\item Controls:
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item open-loop
  \item closed-loop
  \end{enumerate}
\item Sensors:
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Usual:
  \end{enumerate}
  \begin{itemize} \addtolength{\itemsep}{-0.5\baselineskip}
  \item Zeros/darks/flats (not why but scheduling)
  \item Dome flats/twilight flats
  \end{itemize}
\item Cameras and Filters
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Focus
  \item Control/Recovery
  \end{enumerate}
\item Spectrographs
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Control/Recovery
  \item Comparison lamps
  \end{enumerate}
\item Dome vs Roll-off
\item Mechanical Topics
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Motors
  \item Grease
  \end{enumerate}
\item Power Quantity/Quality
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item UPS
  \item Standalone Batteries
  \end{enumerate}
\item Weather
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Forecasts
  \item Monitoring
  \item Panic
  \end{enumerate}
\end{enumerate}

\section{John Hoot}

John Hoot's additions; remote astronomy topics:

\vspace{-.15cm}
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item Periodic Maintenance
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item
  cleaning - optics, electronics, keyboards...
  \item lubrication
  \item Balance
  \item Desiccants
  \item UPS's
  \item Spares Inventory
  \item Anticipate
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item Failures
    \item Obsolescence
    \item Software Support Withdrawal
    \end{enumerate}
  \end{enumerate}
\item Infrastructure
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item Communication
  \item Phone
  \item Radio/Microwave
  \item Internet
  \item Ethernet
  \item Wifi
  \item Fiber
  \item Routers
  \item Security
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item VPNs
    \item SSH
    \item Firewalls
    \end{enumerate}
  \item DHCP-DNS Servers
  \item Communications Failure/Recovery
  \end{enumerate}
\item Power
  \vspace{-.15cm}
  \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
  \item On grid/Off Grid
  \item Batteries
  \item UPS's
  \item Remote outlet Switching
  \item Power Failure Management/Recovery
  \item Mounts
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item Plate Solutions
    \item Focus, Focus, Focus..
    \item Multiplexed Instruments on a single mount
    \end{enumerate}
  \item Control Paradigms
  \item Telepresence
  \item Scripting
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item Closed/Extensible
    \item Scripting extension languages
    \end{enumerate}
  \item Scheduling
  \item Shared/Private Use
  \item Remote Facility Locale
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item Nearby - minutes
    \item Distant - hours
    \item Very Distant - Days
    \end{enumerate}
  \item Distributed Control
  \item Construction
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item Planning
    \item Document!!!
    \item Toolchain Preservation
    \item Archiving:
    \end{enumerate}
    \vspace{-.15cm}
    \begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
    \item Never trust a cloud!!!
    \item Tools
    \item Software
    \item Systems
    \item Raw data
    \item Utilities
    \item Licenses/Keys
    \item Manuals
    \item Scripts
    \end{enumerate}
  \end{enumerate}
\end{enumerate}

%%\appendix
%%\renewcommand \thesection{\Alph{section}}
%% use a bibitem approach to the references publications etc.
%% (wg-bibitem)
%%\clearpage
%%\addcontentsline{toc}{section}{References}
%%\renewcommand*{\refname}{My Bibliography and References}
%%\bibliographystyle{plain} % bibliographystyle{apalike} and \usepackage{natbib}
%%\bibliography{MasterBib} % expects file "MasterBib.bib"
%%\clearpage
%%\addcontentsline{toc}{section}{Index}
%%\printindex %% www.cs.usask.ca/resources/tutorials/latex/notes/toc-index.pdf
%%\begin{thebibliography}{80}
%%\usepackage{natbib} %% bibitems
%%\end{thebibliography}
% /home/wayne/git/pre.SAS2019/PanelDiscussion.tex
%% (wg-texdoc-endnotes)
\end{document}

\begingroup
\fontsize{10pt}{10pt} \selectfont
%%\begin{Verbatim} [commandchars=\\\{\}]
\begin{verbatim}
Arne Henden   [email protected]
Tom Smith     [email protected]
Jerry Foote   [email protected]
Bob Denny     [email protected]
Larry Dingle  [email protected]
Stan Watson   [email protected]
Steve Bisque  [email protected]
HOOT John     [email protected]
\end{verbatim}
\endgroup
%% \end{Verbatim}

Tom Smith: Here are some things to maybe add:
- Telescope-optical train f/ratio
- Mount types, advantages and disadvantages
- What if my polar alignment isn't perfect, results in images
- WiFi or Ethernet based communications
- Flat & calibration lamps for good results
- Remote desktop, VNC, Anywhere PC, SSH, SFTP, Putty
- Limit switches on scopes and for observatory control and indication
- Two-way audio, maybe incorporated into existing webcams
- UPS
- Time sync methods
- Automatic processing pipelines
- Planning an observation, where to find information
- Sky catalogs and/or planetarium softwares
- Guiding, several methods
- Closed or open loop controls
- Is someone nearby to assist if things go wrong from remote operating positions
- Buy or build, experience level and time commitments
import Smt

theorem triv' : ∀ p : Bool, ∀ _ : p, p := by
  smt
  intro p
  simp_all
Formal statement is: lemma smallomegaD [dest]: assumes "f \<in> \<omega>[F](g)" shows "eventually (\<lambda>x. (norm (f x)) \<ge> c * (norm (g x))) F" Informal statement is: If $f \in \omega[F](g)$, then for every constant $c$ the norm of $f$ is eventually (along the filter $F$) at least $c$ times the norm of $g$; that is, $f$ eventually dominates each constant multiple of $g$.
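For intuition, the small-omega premise can be unfolded as follows (standard Landau-symbol reading; the worked example is ours, not part of the source statement):

```latex
\[
  f \in \omega[F](g)
  \quad\Longleftrightarrow\quad
  \forall c > 0.\ \text{eventually in } F:\ \lVert f(x)\rVert \ge c\,\lVert g(x)\rVert .
\]
% Example at the filter "x -> infinity": f(x) = x^2 lies in omega(x), since for
% any fixed c we have x^2 >= c x for all x >= c.
```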
from torch import optim
from ProcessData.Skeleton import Skeleton
from ProcessData.Utils import getX_full
import numpy as np
import torch
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader
import typing
from ModelCompatibilityRefinement_2.Database import DataBase
from ModelCompatibilityRefinement_2.ModelCompatibilityRefinement import ModelCompatibilityRefinement
from ModelCompatibilityRefinement_2.PhasingSize import PhasingSize
from ModelCompatibilityRefinement_2.Discriminator import Discriminator
from ModelCompatibilityRefinement_2.LossFunction import LossFunction
from ModelCompatibilityRefinement_2.Vizualise import motion_animation
from ModelCompatibilityRefinement_2.UpdateFeature import updateFeature, getExtremeties
from ProcessData.TrainingLoss import TrainingLoss
import Format


class TrainingConfig:
    def __init__(self, batch_size, epochs, learn_rate, learning_decay,
                 DiscriminatorPreTrainingEpochs, warmup):
        self.batch_size = batch_size
        self.epochs = epochs
        self.learn_rate = learn_rate
        self.learning_decay = learning_decay
        self.DiscriminatorPreTrainingEpochs = DiscriminatorPreTrainingEpochs
        self.warmup = warmup


def RunDiscriminator_pose(true_data: torch.Tensor, discriminator: Discriminator,
                          database: DataBase) -> torch.Tensor:
    true_disc = discriminator(true_data)
    return true_disc


def Train(training_config: TrainingConfig, error_config: TrainingLoss, runName: str,
          database: DataBase, database_validation: DataBase,
          modelDirectories: typing.List[str], visual=True):
    print('Starting Training of ModelCompatibilityRefinement...')
    writer = SummaryWriter(log_dir='runs/' + runName)

    if torch.cuda.is_available():
        print('Enabling CUDA...')
        device = torch.device("cuda:0")
        torch.cuda.empty_cache()
    else:
        device = torch.device("cpu")  # fix: define a device in the CPU case too

    print('Creating DB...')
    database_loader = DataLoader(database, shuffle=True, batch_size=training_config.batch_size,
                                 num_workers=(1 if torch.cuda.is_available() else 0))

    print('Creating NN & Trainer...')
    if torch.cuda.is_available():
        model_raw = ModelCompatibilityRefinement(modelDirectories[0], modelDirectories[1], modelDirectories[2],
                                                 database.featureDim, Format.latentDim, database.poseDim)
        model = model_raw.to(device)
        # NB: this branch scales the discriminator's sequence length by Format.deltaT;
        # the CPU branch below does not (kept as in the original).
        discriminator = Discriminator(database.poseDim,
                                      (database.sequenceLength - training_config.warmup) * Format.deltaT)
        discriminator = discriminator.to(device)
        phasing_model_raw = PhasingSize(database.featureDim, Format.latentDim)
        phasing_model = phasing_model_raw.to(device)
    else:
        model = ModelCompatibilityRefinement(modelDirectories[0], modelDirectories[1], modelDirectories[2],
                                             database.featureDim, Format.latentDim, database.poseDim)
        discriminator = Discriminator(database.poseDim, (database.sequenceLength - training_config.warmup))
        phasing_model = PhasingSize(database.featureDim, Format.latentDim)

    model.train()
    discriminator.train()

    params = list(model.HF.parameters()) + list(model.LF.parameters()) + list(phasing_model.parameters())
    optimizer_model = torch.optim.AdamW(params, lr=training_config.learn_rate)
    scheduler_model = torch.optim.lr_scheduler.LambdaLR(
        optimizer_model,
        lr_lambda=lambda ep: training_config.learn_rate * (training_config.learning_decay ** ep)
                             * (0.1 + np.cos(np.pi * ep / 10) ** 2) / 1.1)
    optimizer_disc = torch.optim.AdamW(discriminator.parameters(), lr=training_config.learn_rate)
    scheduler_disc = torch.optim.lr_scheduler.LambdaLR(
        optimizer_disc,
        lr_lambda=lambda ep: training_config.learn_rate * (training_config.learning_decay ** ep)
                             * (0.1 + np.cos(np.pi * ep / 10) ** 2) / 1.1)

    starter = 3
    print('Starting training...')
    for ep in range(training_config.epochs):
        # TRAINING LOOP:
        losses_sum_weighted_training = 0.0
        losses_training = [0.0] * error_config.length()
        model.train()
        discriminator.train()
        for i, data in enumerate(database_loader):
            # split the data
            features, true_poses = data
            if torch.cuda.is_available():
                features = features.cuda()
                true_poses = true_poses.cuda()

            # Initialisations
            Nbatch = features.size(0)
            frames = features.size(1)
            # fix: allocate recurrent state on the same device as the model
            h1 = torch.zeros(Nbatch, model.HF.latent_red, device=device)
            h2 = torch.zeros(Nbatch, model.HF.latent_red, device=device)
            h3 = torch.zeros(Nbatch, model.HF.latent_red, device=device)
            current_feature = features[:, 0]
            last_latent = torch.zeros(Nbatch, Format.latentDim, device=device)
            next_latent = torch.zeros(Nbatch, Format.latentDim, device=device)
            pose = model.Decoder(torch.zeros(Nbatch, Format.latentDim, device=device))
            lastFeet, lastHands = getExtremeties(pose, database)
            lastFeet = lastFeet.detach()
            lastHands = lastHands.detach()
            poses = []
            nextLFposes = []
            true_nextLFposes = []
            deltaT = Format.deltaT
            counterT = 0

            for f in range(frames):
                current_feature, lastFeet, lastHands = updateFeature(features[:, f], pose, database,
                                                                     lastFeet, lastHands)
                current_feature = features[:, f]
                if counterT == int(deltaT):
                    last_latent = next_latent
                    deltaT = phasing_model(current_feature, last_latent)
                    counterT = 0
                    next_latent = model.LF(current_feature, last_latent, deltaT)[0]
                    nextLFposes.append(model.Decoder(next_latent).unsqueeze(1))
                    true_nextLFposes.append(true_poses[:, f].unsqueeze(1))
                time = torch.cat((torch.ones(Nbatch, 1, device=device) * counterT / deltaT,
                                  torch.ones(Nbatch, 1, device=device) * deltaT), dim=1)
                currentLatent, h1, h2, h3 = model.HF.forward(current_feature, last_latent, next_latent,
                                                             time, h1, h2, h3)
                pose = model.Decoder(currentLatent)
                poses.append(pose.unsqueeze(1))
                counterT += 1

            poses = torch.cat(poses, dim=1)[:, training_config.warmup:]
            true_poses = true_poses[:, training_config.warmup:]
            nextLFposes = torch.cat(nextLFposes, dim=1)[:, 2:]
            true_nextLFposes = torch.cat(true_nextLFposes, dim=1)[:, 2:]

            # Discriminator
            discriminator.zero_grad()
            genLoss = torch.mean(torch.square(RunDiscriminator_pose(poses.detach(), discriminator, database)))
            discLoss = torch.mean(torch.square(RunDiscriminator_pose(true_poses, discriminator, database) - 1))
            sumLosses = (genLoss + discLoss) / 2
            if sumLosses >= 0.15:
                sumLosses.backward()
                optimizer_disc.step()
            print("DiscriminatorLosses: Generated: {} \t \t True: {}".format(float(genLoss.item()),
                                                                             float(discLoss.item())))

            # Generator
            model.zero_grad()
            if ep >= starter + 2:
                # note: computed under no_grad, so this term carries no gradient
                with torch.no_grad():
                    genLoss = torch.mean(torch.square(RunDiscriminator_pose(poses, discriminator, database) - 1))
            else:
                genLoss = 0
            losses_here = LossFunction(poses, true_poses, nextLFposes, true_nextLFposes, genLoss,
                                       database, frames - training_config.warmup)
            losses_here.applyWeights(error_config)

            # backward pass + optimisation
            totalLoss = sum(losses_here())
            totalLoss.backward()
            optimizer_model.step()
            print("Iteration {}: {}".format(i, float(totalLoss)))

            # update training_loss
            losses_sum_weighted_training = losses_sum_weighted_training + float(totalLoss) * (Nbatch / len(database))
            for j in range(len(losses_training)):
                losses_training[j] = losses_training[j] \
                    + losses_here.getUnweighted().makefloat()[j] * (Nbatch / len(database))

        # Add to tensorboard
        labels = error_config.getNames()
        writer.add_scalar("Refinement_Training_2/Total_Weighted", float(losses_sum_weighted_training), ep)
        writer.add_scalar("Refinement_Training_2/Total_Unweighted", float(sum(losses_training)), ep)
        for i in range(len(labels)):
            writer.add_scalar("Refinement_Training_2/" + labels[i], float(losses_training[i]), ep)
        if visual:
            motion_animation(getX_full(poses[0], database.poseDimDiv, Format.skeleton, Format.rotation),
                             Format.skeleton._parents, ep, folder="ModelCompatibilityRefinement_2/Anim/")

        # Step the scheduler
        if ep >= starter:
            scheduler_disc.step()
            scheduler_model.step()

    model.export_to_onnx("ONNX_networks_refined")
    return model
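For orientation, a hypothetical driver for `Train` is sketched below. The `TrainingConfig` fields follow the constructor above; the `DataBase`/`TrainingLoss` constructor calls and the directory paths are invented placeholders, not a documented API.

```python
# Hypothetical driver (paths and DataBase/TrainingLoss signatures are assumptions).
config = TrainingConfig(
    batch_size=32,
    epochs=50,
    learn_rate=1e-4,
    learning_decay=0.95,
    DiscriminatorPreTrainingEpochs=5,
    warmup=10,
)
error_config = TrainingLoss()                 # assumed default constructor
database = DataBase("data/train")             # assumed constructor signature
database_validation = DataBase("data/valid")  # assumed constructor signature

model = Train(config, error_config, "refinement_run_01",
              database, database_validation,
              modelDirectories=["onnx/decoder", "onnx/lf", "onnx/hf"],
              visual=False)
```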
section \<open>Sepref Tool\<close> theory Sepref_Tool imports Sepref_Translate Sepref_Definition Sepref_Combinator_Setup Sepref_Intf_Util begin text \<open>In this theory, we set up the sepref tool.\<close> subsection \<open>Sepref Method\<close> lemma ID_init: "\<lbrakk>ID a a' TYPE('T); hn_refine \<Gamma> c \<Gamma>' R a'\<rbrakk> \<Longrightarrow> hn_refine \<Gamma> c \<Gamma>' R a" by simp lemma TRANS_init: "\<lbrakk> hn_refine \<Gamma> c \<Gamma>' R a; CNV c c' \<rbrakk> \<Longrightarrow> hn_refine \<Gamma> c' \<Gamma>' R a" by simp lemma infer_post_triv: "P \<Longrightarrow>\<^sub>t P" by (rule entt_refl) ML \<open> structure Sepref = struct structure sepref_preproc_simps = Named_Thms ( val name = @{binding sepref_preproc} val description = "Sepref: Preprocessor simplifications" ) structure sepref_opt_simps = Named_Thms ( val name = @{binding sepref_opt_simps} val description = "Sepref: Post-Translation optimizations, phase 1" ) structure sepref_opt_simps2 = Named_Thms ( val name = @{binding sepref_opt_simps2} val description = "Sepref: Post-Translation optimizations, phase 2" ) fun cons_init_tac ctxt = Sepref_Frame.weaken_post_tac ctxt THEN' resolve_tac ctxt @{thms CONS_init} fun cons_solve_tac dbg ctxt = let val dbgSOLVED' = if dbg then I else SOLVED' in dbgSOLVED' ( resolve_tac ctxt @{thms infer_post_triv} ORELSE' Sepref_Translate.side_frame_tac ctxt ) end fun preproc_tac ctxt = let val ctxt = put_simpset HOL_basic_ss ctxt val ctxt = ctxt addsimps (sepref_preproc_simps.get ctxt) in Sepref_Rules.prepare_hfref_synth_tac ctxt THEN' Simplifier.simp_tac ctxt end fun id_tac ctxt = resolve_tac ctxt @{thms ID_init} THEN' CONVERSION Thm.eta_conversion THEN' DETERM o Id_Op.id_tac Id_Op.Normal ctxt fun id_init_tac ctxt = resolve_tac ctxt @{thms ID_init} THEN' CONVERSION Thm.eta_conversion THEN' Id_Op.id_tac Id_Op.Init ctxt fun id_step_tac ctxt = Id_Op.id_tac Id_Op.Step ctxt fun id_solve_tac ctxt = Id_Op.id_tac Id_Op.Solve ctxt (*fun id_param_tac ctxt = CONVERSION (Refine_Util.HOL_concl_conv (K (Sepref_Param.id_param_conv ctxt)) ctxt)*) fun monadify_tac ctxt = Sepref_Monadify.monadify_tac ctxt (*fun lin_ana_tac ctxt = Sepref_Lin_Ana.lin_ana_tac ctxt*) fun trans_tac ctxt = Sepref_Translate.trans_tac ctxt fun opt_tac ctxt = let val opt1_ss = put_simpset HOL_basic_ss ctxt addsimps sepref_opt_simps.get ctxt addsimprocs [@{simproc "HOL.let_simp"}] |> Simplifier.add_cong @{thm SP_cong} |> Simplifier.add_cong @{thm PR_CONST_cong} val unsp_ss = put_simpset HOL_basic_ss ctxt addsimps @{thms SP_def} val opt2_ss = put_simpset HOL_basic_ss ctxt addsimps sepref_opt_simps2.get ctxt addsimprocs [@{simproc "HOL.let_simp"}] in simp_tac opt1_ss THEN' simp_tac unsp_ss THEN' simp_tac opt2_ss THEN' simp_tac unsp_ss THEN' CONVERSION Thm.eta_conversion THEN' resolve_tac ctxt @{thms CNV_I} end fun sepref_tac dbg ctxt = (K Sepref_Constraints.ensure_slot_tac) THEN' Sepref_Basic.PHASES' [ ("preproc",preproc_tac,0), ("cons_init",cons_init_tac,2), ("id",id_tac,0), ("monadify",monadify_tac false,0), ("opt_init",fn ctxt => resolve_tac ctxt @{thms TRANS_init},1), ("trans",trans_tac,~1), ("opt",opt_tac,~1), ("cons_solve1",cons_solve_tac false,~1), ("cons_solve2",cons_solve_tac false,~1), ("constraints",fn ctxt => K (Sepref_Constraints.solve_constraint_slot ctxt THEN Sepref_Constraints.remove_slot_tac),~1) ] (Sepref_Basic.flag_phases_ctrl dbg) ctxt val setup = I #> sepref_preproc_simps.setup #> sepref_opt_simps.setup #> sepref_opt_simps2.setup end \<close> setup Sepref.setup method_setup sepref = \<open>Scan.succeed (fn ctxt 
=> SIMPLE_METHOD (DETERM (SOLVED' (IF_EXGOAL ( Sepref.sepref_tac false ctxt )) 1)))\<close> \<open>Automatic refinement to Imperative/HOL\<close> method_setup sepref_dbg_keep = \<open>Scan.succeed (fn ctxt => let (*val ctxt = Config.put Id_Op.cfg_id_debug true ctxt*) in SIMPLE_METHOD (IF_EXGOAL (Sepref.sepref_tac true ctxt) 1) end)\<close> \<open>Automatic refinement to Imperative/HOL, debug mode\<close> subsubsection \<open>Default Optimizer Setup\<close> lemma return_bind_eq_let: "do { x\<leftarrow>return v; f x } = do { let x=v; f x }" by simp lemmas [sepref_opt_simps] = return_bind_eq_let bind_return bind_bind id_def text \<open>We allow the synthesized function to contain tagged function applications. This is important to avoid higher-order unification problems when synthesizing generic algorithms, for example the to-list algorithm for foreach-loops.\<close> lemmas [sepref_opt_simps] = Autoref_Tagging.APP_def text \<open>Revert case-pulling done by monadify\<close> lemma case_prod_return_opt[sepref_opt_simps]: "case_prod (\<lambda>a b. return (f a b)) p = return (case_prod f p)" by (simp split: prod.split) lemma case_option_return_opt[sepref_opt_simps]: "case_option (return fn) (\<lambda>s. return (fs s)) v = return (case_option fn fs v)" by (simp split: option.split) lemma case_list_return[sepref_opt_simps]: "case_list (return fn) (\<lambda>x xs. return (fc x xs)) l = return (case_list fn fc l)" by (simp split: list.split) lemma if_return[sepref_opt_simps]: "If b (return t) (return e) = return (If b t e)" by simp text \<open>In some cases, pushing in the returns is more convenient\<close> lemma case_prod_opt2[sepref_opt_simps2]: "(\<lambda>x. return (case x of (a,b) \<Rightarrow> f a b)) = (\<lambda>(a,b). return (f a b))" by auto subsection \<open>Debugging Methods\<close> ML \<open> fun SIMPLE_METHOD_NOPARAM' tac = Scan.succeed (fn ctxt => SIMPLE_METHOD' (IF_EXGOAL (tac ctxt))) fun SIMPLE_METHOD_NOPARAM tac = Scan.succeed (fn ctxt => SIMPLE_METHOD (tac ctxt)) \<close> method_setup sepref_dbg_preproc = \<open>SIMPLE_METHOD_NOPARAM' (fn ctxt => K (Sepref_Constraints.ensure_slot_tac) THEN' Sepref.preproc_tac ctxt)\<close> \<open>Sepref debug: Preprocessing phase\<close> (*method_setup sepref_dbg_id_param = \<open>SIMPLE_METHOD_NOPARAM' Sepref.id_param_tac\<close> \<open>Sepref debug: Identify parameters phase\<close>*) method_setup sepref_dbg_cons_init = \<open>SIMPLE_METHOD_NOPARAM' Sepref.cons_init_tac\<close> \<open>Sepref debug: Initialize consequence reasoning\<close> method_setup sepref_dbg_id = \<open>SIMPLE_METHOD_NOPARAM' (Sepref.id_tac)\<close> \<open>Sepref debug: Identify operations phase\<close> method_setup sepref_dbg_id_keep = \<open>SIMPLE_METHOD_NOPARAM' (Config.put Id_Op.cfg_id_debug true #> Sepref.id_tac)\<close> \<open>Sepref debug: Identify operations phase. 
Debug mode, keep intermediate subgoals on failure.\<close> method_setup sepref_dbg_monadify = \<open>SIMPLE_METHOD_NOPARAM' (Sepref.monadify_tac false)\<close> \<open>Sepref debug: Monadify phase\<close> method_setup sepref_dbg_monadify_keep = \<open>SIMPLE_METHOD_NOPARAM' (Sepref.monadify_tac true)\<close> \<open>Sepref debug: Monadify phase\<close> method_setup sepref_dbg_monadify_arity = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Monadify.arity_tac)\<close> \<open>Sepref debug: Monadify phase: Arity phase\<close> method_setup sepref_dbg_monadify_comb = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Monadify.comb_tac)\<close> \<open>Sepref debug: Monadify phase: Comb phase\<close> method_setup sepref_dbg_monadify_check_EVAL = \<open>SIMPLE_METHOD_NOPARAM' (K (CONCL_COND' (not o Sepref_Monadify.contains_eval)))\<close> \<open>Sepref debug: Monadify phase: check_EVAL phase\<close> method_setup sepref_dbg_monadify_mark_params = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Monadify.mark_params_tac)\<close> \<open>Sepref debug: Monadify phase: mark_params phase\<close> method_setup sepref_dbg_monadify_dup = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Monadify.dup_tac)\<close> \<open>Sepref debug: Monadify phase: dup phase\<close> method_setup sepref_dbg_monadify_remove_pass = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Monadify.remove_pass_tac)\<close> \<open>Sepref debug: Monadify phase: remove_pass phase\<close> (*method_setup sepref_dbg_lin_ana = \<open>SIMPLE_METHOD_NOPARAM' (Sepref.lin_ana_tac true)\<close> \<open>Sepref debug: Linearity analysis phase\<close>*) method_setup sepref_dbg_opt_init = \<open>SIMPLE_METHOD_NOPARAM' (fn ctxt => resolve_tac ctxt @{thms TRANS_init})\<close> \<open>Sepref debug: Translation phase initialization\<close> method_setup sepref_dbg_trans = \<open>SIMPLE_METHOD_NOPARAM' Sepref.trans_tac\<close> \<open>Sepref debug: Translation phase\<close> method_setup sepref_dbg_opt = \<open>SIMPLE_METHOD_NOPARAM' (fn ctxt => Sepref.opt_tac ctxt THEN' CONVERSION Thm.eta_conversion THEN' TRY o resolve_tac ctxt @{thms CNV_I} )\<close> \<open>Sepref debug: Optimization phase\<close> method_setup sepref_dbg_cons_solve = \<open>SIMPLE_METHOD_NOPARAM' (Sepref.cons_solve_tac false)\<close> \<open>Sepref debug: Solve post-consequences\<close> method_setup sepref_dbg_cons_solve_keep = \<open>SIMPLE_METHOD_NOPARAM' (Sepref.cons_solve_tac true)\<close> \<open>Sepref debug: Solve post-consequences, keep intermediate results\<close> method_setup sepref_dbg_constraints = \<open>SIMPLE_METHOD_NOPARAM' (fn ctxt => IF_EXGOAL (K ( Sepref_Constraints.solve_constraint_slot ctxt THEN Sepref_Constraints.remove_slot_tac )))\<close> \<open>Sepref debug: Solve accumulated constraints\<close> (* apply sepref_dbg_preproc apply sepref_dbg_cons_init apply sepref_dbg_id apply sepref_dbg_monadify apply sepref_dbg_opt_init apply sepref_dbg_trans apply sepref_dbg_opt apply sepref_dbg_cons_solve apply sepref_dbg_cons_solve apply sepref_dbg_constraints *) method_setup sepref_dbg_id_init = \<open>SIMPLE_METHOD_NOPARAM' Sepref.id_init_tac\<close> \<open>Sepref debug: Initialize operation identification phase\<close> method_setup sepref_dbg_id_step = \<open>SIMPLE_METHOD_NOPARAM' Sepref.id_step_tac\<close> \<open>Sepref debug: Single step operation identification phase\<close> method_setup sepref_dbg_id_solve = \<open>SIMPLE_METHOD_NOPARAM' Sepref.id_solve_tac\<close> \<open>Sepref debug: Complete current operation identification goal\<close> method_setup sepref_dbg_trans_keep = \<open>SIMPLE_METHOD_NOPARAM' 
Sepref_Translate.trans_keep_tac\<close> \<open>Sepref debug: Translation phase, stop at failed subgoal\<close> method_setup sepref_dbg_trans_step = \<open>SIMPLE_METHOD_NOPARAM' Sepref_Translate.trans_step_tac\<close> \<open>Sepref debug: Translation step\<close> method_setup sepref_dbg_trans_step_keep = \<open>SIMPLE_METHOD_NOPARAM' Sepref_Translate.trans_step_keep_tac\<close> \<open>Sepref debug: Translation step, keep unsolved subgoals\<close> method_setup sepref_dbg_side = \<open>SIMPLE_METHOD_NOPARAM' (fn ctxt => REPEAT_ALL_NEW_FWD (Sepref_Translate.side_cond_dispatch_tac false (K no_tac) ctxt))\<close> method_setup sepref_dbg_side_unfold = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Translate.side_unfold_tac)\<close> method_setup sepref_dbg_side_keep = \<open>SIMPLE_METHOD_NOPARAM' (fn ctxt => REPEAT_ALL_NEW_FWD (Sepref_Translate.side_cond_dispatch_tac true (K no_tac) ctxt))\<close> method_setup sepref_dbg_prepare_frame = \<open>SIMPLE_METHOD_NOPARAM' Sepref_Frame.prepare_frame_tac\<close> \<open>Sepref debug: Prepare frame inference\<close> method_setup sepref_dbg_frame = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Frame.frame_tac (Sepref_Translate.side_fallback_tac))\<close> \<open>Sepref debug: Frame inference\<close> method_setup sepref_dbg_merge = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Frame.merge_tac (Sepref_Translate.side_fallback_tac))\<close> \<open>Sepref debug: Frame inference, merge\<close> method_setup sepref_dbg_frame_step = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Frame.frame_step_tac (Sepref_Translate.side_fallback_tac) false)\<close> \<open>Sepref debug: Frame inference, single-step\<close> method_setup sepref_dbg_frame_step_keep = \<open>SIMPLE_METHOD_NOPARAM' (Sepref_Frame.frame_step_tac (Sepref_Translate.side_fallback_tac) true)\<close> \<open>Sepref debug: Frame inference, single-step, keep partially solved side conditions\<close> subsection \<open>Utilities\<close> subsubsection \<open>Manual hfref-proofs\<close> method_setup sepref_to_hnr = \<open>SIMPLE_METHOD_NOPARAM' (fn ctxt => Sepref.preproc_tac ctxt THEN' Sepref_Frame.weaken_post_tac ctxt)\<close> \<open>Sepref: Convert to hnr-goal and weaken postcondition\<close> method_setup sepref_to_hoare = \<open> let fun sepref_to_hoare_tac ctxt = let val ss = put_simpset HOL_basic_ss ctxt addsimps @{thms hn_ctxt_def pure_def} in Sepref.preproc_tac ctxt THEN' Sepref_Frame.weaken_post_tac ctxt THEN' resolve_tac ctxt @{thms hn_refineI} THEN' asm_full_simp_tac ss end in SIMPLE_METHOD_NOPARAM' sepref_to_hoare_tac end \<close> \<open>Sepref: Convert to hoare-triple\<close> subsubsection \<open>Copying of Parameters\<close> lemma fold_COPY: "x = COPY x" by simp sepref_register COPY text \<open>Copy is treated as normal operator, and one can just declare rules for it! \<close> lemma hnr_pure_COPY[sepref_fr_rules]: "CONSTRAINT is_pure R \<Longrightarrow> (return, RETURN o COPY) \<in> R\<^sup>k \<rightarrow>\<^sub>a R" by (sep_auto simp: is_pure_conv pure_def intro!: hfrefI hn_refineI) subsubsection \<open>Short-Circuit Boolean Evaluation\<close> text \<open>Convert boolean operators to short-circuiting. When applied before monadify, this will generate a short-circuit execution.\<close> lemma short_circuit_conv: "(a \<and> b) \<longleftrightarrow> (if a then b else False)" "(a \<or> b) \<longleftrightarrow> (if a then True else b)" "(a\<longrightarrow>b) \<longleftrightarrow> (if a then b else True)" by auto subsubsection \<open>Eliminating higher-order\<close> (* TODO: Add similar rules for other cases! 
*) lemma ho_prod_move[sepref_preproc]: "case_prod (\<lambda>a b x. f x a b) = (\<lambda>p x. case_prod (f x) p)" by (auto intro!: ext) declare o_apply[sepref_preproc] subsubsection \<open>Precision Proofs\<close> text \<open> We provide a method that tries to extract equalities from an assumption of the form \<open>_ \<Turnstile> P1 * \<dots> * Pn \<and>\<^sub>A P1' * \<dots> * Pn'\<close>, if it find a precision rule for Pi and Pi'. The precision rules are extracted from the constraint rules. TODO: Extracting the precision rules from the constraint rules is not a clean solution. It might be better to collect precision rules separately, and feed them into the constraint solver. \<close> definition "prec_spec h \<Gamma> \<Gamma>' \<equiv> h \<Turnstile> \<Gamma> * true \<and>\<^sub>A \<Gamma>' * true" lemma prec_specI: "h \<Turnstile> \<Gamma> \<and>\<^sub>A \<Gamma>' \<Longrightarrow> prec_spec h \<Gamma> \<Gamma>'" unfolding prec_spec_def by (auto simp: mod_and_dist mod_star_trueI) lemma prec_split1_aux: "A*B*true \<Longrightarrow>\<^sub>A A*true" apply (fr_rot 2, fr_rot_rhs 1) apply (rule ent_star_mono) by simp_all lemma prec_split2_aux: "A*B*true \<Longrightarrow>\<^sub>A B*true" apply (fr_rot 1, fr_rot_rhs 1) apply (rule ent_star_mono) by simp_all lemma prec_spec_splitE: assumes "prec_spec h (A*B) (C*D)" obtains "prec_spec h A C" "prec_spec h B D" apply (thin_tac "\<lbrakk>_;_\<rbrakk> \<Longrightarrow> _") apply (rule that) using assms apply - unfolding prec_spec_def apply (erule entailsD[rotated]) apply (rule ent_conjI) apply (rule ent_conjE1) apply (rule prec_split1_aux) apply (rule ent_conjE2) apply (rule prec_split1_aux) apply (erule entailsD[rotated]) apply (rule ent_conjI) apply (rule ent_conjE1) apply (rule prec_split2_aux) apply (rule ent_conjE2) apply (rule prec_split2_aux) done lemma prec_specD: assumes "precise R" assumes "prec_spec h (R a p) (R a' p)" shows "a=a'" using assms unfolding precise_def prec_spec_def CONSTRAINT_def by blast ML \<open> fun prec_extract_eqs_tac ctxt = let fun is_precise thm = case Thm.concl_of thm of @{mpat "Trueprop (precise _)"} => true | _ => false val thms = Sepref_Constraints.get_constraint_rules ctxt @ Sepref_Constraints.get_safe_constraint_rules ctxt val thms = thms |> filter is_precise val thms = @{thms snga_prec sngr_prec} @ thms val thms = map (fn thm => thm RS @{thm prec_specD}) thms val thin_prec_spec_rls = @{thms thin_rl[Pure.of "prec_spec a b c" for a b c]} val tac = forward_tac ctxt @{thms prec_specI} THEN' REPEAT_ALL_NEW (ematch_tac ctxt @{thms prec_spec_splitE}) THEN' REPEAT o (dresolve_tac ctxt thms) THEN' REPEAT o (eresolve_tac ctxt thin_prec_spec_rls ) in tac end \<close> method_setup prec_extract_eqs = \<open>SIMPLE_METHOD_NOPARAM' prec_extract_eqs_tac\<close> \<open>Extract equalities from "_ |= _ & _" assumption, using precision rules\<close> subsubsection \<open>Combinator Rules\<close> lemma split_merge: "\<lbrakk>A \<or>\<^sub>A B \<Longrightarrow>\<^sub>t X; X \<or>\<^sub>A C \<Longrightarrow>\<^sub>t D\<rbrakk> \<Longrightarrow> (A \<or>\<^sub>A B \<or>\<^sub>A C \<Longrightarrow>\<^sub>t D)" proof - assume a1: "X \<or>\<^sub>A C \<Longrightarrow>\<^sub>t D" assume "A \<or>\<^sub>A B \<Longrightarrow>\<^sub>t X" then have "A \<or>\<^sub>A B \<Longrightarrow>\<^sub>A D * true" using a1 by (meson ent_disjI1_direct ent_frame_fwd enttD entt_def_true) then show ?thesis using a1 by (metis (no_types) Assertions.ent_disjI2 ent_disjE enttD enttI semigroup.assoc sup.semigroup_axioms) qed ML \<open> fun prep_comb_rule thm = let 
fun mrg t = case Logic.strip_assums_concl t of @{mpat "Trueprop (_ \<or>\<^sub>A _ \<or>\<^sub>A _ \<Longrightarrow>\<^sub>t _)"} => (@{thm split_merge},true) | @{mpat "Trueprop (hn_refine _ _ ?G _ _)"} => ( if not (is_Var (head_of G)) then (@{thm hn_refine_cons_post}, true) else (asm_rl,false) ) | _ => (asm_rl,false) val inst = Thm.prems_of thm |> map mrg in if exists snd inst then prep_comb_rule (thm OF (map fst inst)) else thm |> zero_var_indexes end \<close> attribute_setup sepref_prep_comb_rule = \<open>Scan.succeed (Thm.rule_attribute [] (K prep_comb_rule))\<close> \<open>Preprocess combinator rule: Split merge-rules and add missing frame rules\<close> end
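As a quick cross-check of the idea behind `short_circuit_conv` outside Isabelle: the lemma rewrites the Boolean connectives into conditionals so that, once monadification has run, the second operand is evaluated only when needed. A minimal Python sketch of the same rewriting (illustrative only; `conj`, `disj` and `impl` are hypothetical helper names). Written inline, Python's conditional expression also evaluates the second operand lazily.

```python
# Mirror of short_circuit_conv: each connective as an if-then-else.
def conj(a, b): return b if a else False   # a /\ b   ->  if a then b else False
def disj(a, b): return True if a else b    # a \/ b   ->  if a then True else b
def impl(a, b): return b if a else True    # a --> b  ->  if a then b else True

# Exhaustive check over the Booleans, matching the `by auto` proof.
for a in (False, True):
    for b in (False, True):
        assert conj(a, b) == (a and b)
        assert disj(a, b) == (a or b)
        assert impl(a, b) == ((not a) or b)
```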
\documentclass{beamer} % % Choose how your presentation looks. % % For more themes, color themes and font themes, see: % http://deic.uab.es/~iblanes/beamer_gallery/index_by_theme.html % \mode<presentation> { \usetheme{Madrid} % or try Darmstadt, Madrid, Warsaw, ... \usecolortheme{beaver} % or try albatross, beaver, crane, ... \usefonttheme{default} % or try serif, structurebold, ... \setbeamertemplate{navigation symbols}{} \setbeamertemplate{caption}[numbered] } \usepackage{pifont} \usepackage[bookmarks]{hyperref} \usepackage[backend=bibtex]{biblatex} \usepackage{braket} \addbibresource{bibliography.bib} \newtheorem*{remark}{Remark} \usepackage[english]{babel} \usepackage[utf8x]{inputenc} \usebackgroundtemplate{\includegraphics[width=\paperwidth]{Pic/Intro_pict.png}} \title[AMS project]{ Stationary states of opinion diffusion} \subtitle{Project for the exam: AMS (DSE)} \author{Paola Serra and Marzio De Corato } \date{\today} \begin{document} \begin{frame} \vspace{+4 cm} \titlepage \end{frame} \usebackgroundtemplate{ } % Uncomment these lines for an automatically generated outline. %\begin{frame}{Outline} %\setcounter{tocdepth}{1} %\begin{center} % \tableofcontents %\end{center} %\end{frame} \begin{frame}{} \begin{center} {\Huge Theoretical Framework} \end{center} \end{frame} \section{Statistical Mechanics} \begin{frame}{} \begin{center} {\Huge Statistical Mechanics} \end{center} \begin{center} \textit{“Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics.” States of Matter (1975), by David L. Goodstein} \end{center} \end{frame} \begin{frame}{Concepts of statistical mechanics: aim \cite{peliti2011statistical}} \begin{center} \textbf{Aim} To predict the macroscopic properties of systems on the basis of their microscopic structure. 
This connection between the two scales is established with statistical methods \end{center} \end{frame} \begin{frame}{Concepts of statistical mechanics: entropy \cite{peliti2011statistical}} \textbf{Central problem of thermodynamics:} characterize the actual state of equilibrium among all virtual states \\ \textbf{Entropy postulate}: there exists a function $S$ of the extensive variables $(X_{0},X_{1},\dots,X_{r})$, called entropy, that assumes its maximum value at the state of equilibrium among all virtual states and that possesses the following properties: \begin{itemize} \item Extensivity $S^{(1\cup2)}=S^{1}+S^{2}$ \item Convexity $S((1-\alpha)X^{1}+\alpha X^{2}) \geq (1-\alpha)S(X^{1})+\alpha S(X^{2})$ \item Monotonicity $\dfrac{\partial S}{\partial E}|_{X_{1}...X_{r}}=\frac{1}{T}>0$ \end{itemize} \textbf{The equilibrium state corresponds to the maximum entropy compatible with the constraints} \end{frame} \begin{frame}{Concepts of statistical mechanics: entropy \cite{peliti2011statistical}} \begin{itemize} \item \textbf{Fundamental postulate of statistical mechanics} $S=k_{b}\ln|\Gamma|$, where $S$ is the thermodynamic entropy, $k_{b}$ is the Boltzmann constant and $|\Gamma|$ is the volume in phase space \end{itemize} \begin{center} \includegraphics[width=0.7\textwidth]{Pic/Pendulum_phase_portrait_illustration.png} \end{center} \begin{center} Image taken from \cite{PBC} \end{center} \end{frame} \begin{frame}{Concepts of statistical mechanics: ensemble \cite{peliti2011statistical}} \begin{itemize} \item \textbf{Statistical ensemble}: a large number of virtual copies of a system; each of them is a possible state of the real system (epistemic probability). It is the formalization of a repeated experiment, proposed by Gibbs (empirical probability) \item \textbf{Microcanonical ensemble:} $p=1/W$, where $W$ is the number of microstates \item \textbf{Canonical ensemble:} $p=\frac{1}{Z}\exp\left(-\frac{E}{k_{b}T}\right)$, where $Z=\sum_{i}\exp\left(\dfrac{-E_{i}}{k_{b}T}\right)$ \end{itemize} \begin{equation} \left\langle A(x) \right\rangle=\dfrac{1}{Z}\int dx_{s}A(x_{s})\exp\left[ -\frac{H^{(S)}(x_{s})}{k_{b}T} \right] \end{equation} \begin{equation} Z=\int dx_{s}\exp\left[ -\frac{H^{S}(x_{s})}{k_{b}T} \right] \end{equation} \begin{equation} \left\langle H(x)^{2} \right\rangle-\left\langle H(x) \right\rangle^{2}=k_{b}T^{2}C \end{equation} \end{frame} \section{Ising model} \begin{frame}{} \begin{center} {\Huge Ising Model} \end{center} \end{frame} \begin{frame}{Ising model \cite{mackay2003information,slides}} \begin{itemize} \item An array of atoms that can take states $\pm 1$. The energy of the system is given by $E(\textbf{x},J,H)= - \left[\dfrac{1}{2}\sum_{m,n}J_{mn}x_{m}x_{n}+\sum_{n}Hx_{n} \right]$, where $J$ is the coupling constant between two neighbouring sites and $H$ is an external field. \item The probability of finding the system in the state $x$ is given by $p(x|\beta,J,H)=\frac{1}{Z(\beta,J,H)}\exp[-\beta E(x,J,H)]$ (canonical ensemble), where $\beta=1/k_{b}T$ and $Z(\beta,J,H)=\sum_{x}\exp\left[-\beta E(x,J,H)\right]$ \item It is useful to characterize the order level of a lattice (macroscopic) with the (spatial) correlation functions (whose inputs are microscopic quantities). 
In particular, for the Ising model, these are given by the following expression (with $H=0$) \end{itemize} \begin{equation*} g(m)=\frac{\left\langle \sigma_{i}\sigma_{i+m} \right\rangle - \left\langle \sigma_{i} \right\rangle \left\langle \sigma_{i+m} \right\rangle }{1- \left\langle \sigma_{i} \right\rangle \left\langle \sigma_{i+m} \right\rangle }=\left\langle \sigma_{i}\sigma_{i+m} \right\rangle \end{equation*} \end{frame} \section{Numerical simulation} \begin{frame}{} \begin{center} {\Huge Numerical simulations} \end{center} \begin{center} \textit{“Never make a calculation until you know the answer. Make an estimate before every calculation, try a simple physical argument (symmetry! invariance! conservation!) before every derivation, guess the answer to every paradox and puzzle. Courage: No one else needs to know what the guess is. Therefore make it quickly, by instinct. A right guess reinforces this instinct. A wrong guess brings the refreshment of surprise. In either case life as a spacetime expert, however long, is more fun!” John Archibald Wheeler } \end{center} \end{frame} \begin{frame}{Numerical simulation \cite{peliti2011statistical}} \begin{itemize} \item\textbf{Molecular dynamics:} the equations of motion are solved numerically. PROS: both the dynamical and the static properties of the system are explored \item\textbf{Monte Carlo:} a fictitious evolution process of the system is simulated in order to obtain the equilibrium distribution. PROS: 1) systems whose dynamics is not defined can also be explored; 2) a fictitious dynamics can be chosen in order to reach equilibrium faster \end{itemize} \end{frame} \begin{frame} {Monte Carlo method \cite{frenkel2001understanding}} \begin{center} \includegraphics[width=0.5\textwidth]{Pic/MonteCarloIntegrationCircle.png} \end{center} \begin{center} Image taken from \cite{PBC} \end{center} \end{frame} \begin{frame}{Monte Carlo method \cite{frenkel2001understanding}} \begin{itemize} \item Let us consider the generic integral $I=\int^{b}_{a}dx\;f(x)$ \item This can be recast in the form $I=\int^{b}_{a}dx\;w(x)\dfrac{f(x)}{w(x)}$ \item If $w(x)$ is the derivative of $u(x)$ (non-decreasing, non-negative) we have $I=\int^{1}_{0}du\;\dfrac{f[x(u)]}{w[x(u)]}$ \item If one considers $L$ random values of $u$ uniformly distributed in the interval $[0,1]$ we have $I\approx\dfrac{1}{L}\sum_{i=1}^{L}\dfrac{f[x(u_{i})]}{w[x(u_{i})]}$ \item The choice of $w$ is crucial, since $\sigma^{2}=\dfrac{1}{L}\left[\left\langle\left(\dfrac{f}{w}\right)^{2}\right\rangle- \left\langle\dfrac{f}{w}\right\rangle^{2}\right]$ \item Brute force: $f=10^{-260}$ and $\sigma=\dfrac{1}{Lf}$ \dots\ not a good idea \item PROBLEM: we do not know the form of the denominator (if we knew it, we would not need the Monte Carlo method) \end{itemize} \end{frame} \begin{frame} {Monte Carlo method \cite{peliti2011statistical}} \begin{center} \includegraphics[width=0.7\textwidth]{Pic/Pic_river.png} \end{center} \begin{center} Image taken from \cite{frenkel2001understanding} \end{center} \end{frame} \begin{frame}{Monte Carlo method: Metropolis idea \cite{frenkel2001understanding}} \begin{equation} \left\langle A \right\rangle=\dfrac{\int d\textbf{r}^{N}\exp\left[ -\beta E(\textbf{r}^{N}) \right]A(\textbf{r}^{N})}{\int d\textbf{r}^{N}\exp\left[ -\beta E(\textbf{r}^{N}) \right]} \end{equation} \begin{itemize} \item We have a ratio between two integrals, therefore what we need to sample is the ratio and not the integrals alone \item The probability density is $N(\textbf{r}^{N})=\exp\left[-\beta E(\textbf{r}^{N}) \right]/Z$ 
\item Metropolis idea: randomly generate points according to this last probability distribution. In this case we have $\left\langle A \right\rangle \approx 1/L\sum_{i=1}^{L}n_{i}A(\textbf{r}_{i}^{N})$ \end{itemize} \end{frame} \begin{frame}{Monte Carlo method: Metropolis idea \cite{frenkel2001understanding}} \begin{itemize} \item How are the points generated? With a Boltzmann-weighted Markov chain \item $\pi(old \rightarrow new) = \alpha (old \rightarrow new) \times acc(old \rightarrow new)$, where $\pi$ is the transition probability from the old state to the new state, $\alpha$ is the matrix element of the Markov chain and $acc$ is the acceptance probability \item Detailed balance condition at equilibrium: $N(old)\pi(old \rightarrow new)= N(new)\pi(new \rightarrow old)$ \item With a symmetric Markov transition matrix we have $N(old)\times acc(old\rightarrow new) = N(new) \times acc(new\rightarrow old)$ \item Therefore we have \end{itemize} \begin{equation} \frac{acc(old \rightarrow new)}{acc(new \rightarrow old)}=\dfrac{N(new)}{N(old)}=\exp\left[ -\beta \left( E(new)-E(old) \right) \right] \end{equation} \begin{itemize} \item THE $Z$ TERM IS NO LONGER PRESENT! We need only the difference between the two energies! \end{itemize} \end{frame} \begin{frame}{Monte Carlo method: Metropolis idea \cite{frenkel2001understanding}} \begin{equation*} acc(old \rightarrow new) = \begin{cases} N(new)/N(old) & N(new) < N(old) \\ 1 & N(new) \geq N(old) \end{cases} \end{equation*} \begin{itemize} \item Therefore the overall transition probabilities are given by \end{itemize} \begin{equation*} \begin{split} \pi(old \rightarrow new) &= \begin{cases} \alpha(old \rightarrow new) & N(new) \geq N(old) \\ \alpha(old \rightarrow new) \left[ N(new)/N(old) \right] & N(new) < N(old) \end{cases} \\ \pi(old \rightarrow old) & = 1-\sum_{new\neq old} \pi(old\rightarrow new) \end{split} \end{equation*} \begin{itemize} \item In practice, for each move a random number is generated from the uniform distribution on the interval $[0,1]$; since $acc(old \rightarrow new) = \exp\left[ -\beta \left( E(new)-E(old) \right)\right] < 1$, the move is accepted if the random number is lower than $acc(old \rightarrow new)$ \item The chain $\pi$ should be ergodic \end{itemize} \end{frame} \section{Goals and methods} \begin{frame} \begin{center} {\Huge Goals and methods} \end{center} \end{frame} \begin{frame}{Goals and methods} \begin{itemize} \item Reproduce the main results for a 2D antiferromagnetic lattice $(J=-1)$ with no external magnetic field $(H=0)$ with the Metropolis Monte Carlo scheme \item Once we have checked that the script provides the correct results, apply it to a ferromagnetic lattice $(J=+1)$. In this case the spins represent opinions and the sites people. The goal is to find the stationary states (at $T=0$ and $T\neq0$) \item Introduce into the lattice some blocks that never change their status. These islands represent groups that never change their minds and only spread their ideas.
(at $T=0$ and $T\neq0$) \end{itemize} \end{frame} \section{Results} \begin{frame} \begin{center} {\Huge Results} \end{center} \end{frame} \begin{frame}{Simulation features} \begin{itemize} \item 10x10 lattice \item Periodic boundary conditions $\rightarrow$ the topology of a torus (genus equal to 1) \item 6000 steps \end{itemize} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{Pic/PBC.png} \end{center} \begin{center} Image taken from \cite{PBC} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=0.7\textwidth]{Pic/Torus.png} \end{center} \begin{center} Image taken from \cite{PBC} \end{center} \end{column} \end{columns} \end{frame} \subsection{Antiferromagnetic} \begin{frame}{Antiferromagnetic J=-1, T=0} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_10_6000_T=0_ENERGY.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_10_6000_T=0_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Antiferromagnetic J=-1, T=0} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_10_6000_T=0_Magnetization.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_10_6000_T=0_CORRELATION.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Antiferromagnetic J=-1, T=1.5} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=1.5_ENERGY.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=1.5_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Antiferromagnetic J=-1, T=1.5} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=1.5_Magnetization.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=1.5_CORRELATION.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Antiferromagnetic J=-1, T=4} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=4_ENERGY.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=4_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Antiferromagnetic J=-1, T=4} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=4_Magnetization.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=4_CORRELATION.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Antiferromagnetic J=-1, T=8} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=8_ENERGY.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=8_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Antiferromagnetic J=-1, T=8} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=8_Magnetization.pdf} \end{center} \end{column} 
\begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_60_2500_T=8_CORRELATION.pdf} \end{center} \end{column} \end{columns} \end{frame} \subsection{Ferromagnetic} \begin{frame}{Ferromagnetic J=-1, T=30} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_10_6000_T=30_ENERGY.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_10_6000_T=30_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic J=-1, T=30} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_10_6000_T=30_Magnetization.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J-1_10_6000_T=30_CORRELATION.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic J=+1, T=0} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_6000_T=0_ENERGY.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_6000_T=0_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic J=+1, T=0} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_6000_T=0_Magnetization.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_6000_T=0_Coherence.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic J=+1, T=8} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_6000_T=8_ENERGY.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_6000_T=8_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic J=+1, T=8} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_6000_T=8_Magnetization.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_6000_T=8_Coherence.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic J=+1, T=16} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_2500_T=16_ENERGY.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_2500_T=16_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic J=+1, T=16} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_2500_T=16_Magnetization.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_2500_T=16_Coherence.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic with two centers T=0} \begin{center} \includegraphics[width=0.5\textwidth]{Pic/2CENTER.pdf} \end{center} \end{frame} \begin{frame}{Ferromagnetic with two centers T=0} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_10_6000_two_center_T=0_FINAL.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} 
\includegraphics[width=\textwidth]{Pic/J+1_10_6000_two_center_T=0_2_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic with two centers T=0, 20$\times$20} \begin{center} \includegraphics[width=0.5\textwidth]{Pic/LATTICE20x20TWOCENTER.pdf} \end{center} \end{frame} \begin{frame}{Ferromagnetic with two centers T=0, 20$\times$20} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_20_10000_T=0_FINAL.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_20_10000_T=0_FINAL_2.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic with two centers T=0.25, 20$\times$20} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_20_10000_T=0.25.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_20_10000_T=0.25_2.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic with two centers T=0, 60$\times$60} \begin{columns} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_60_10000_T=0_FINAL.pdf} \end{center} \end{column} \begin{column}{0.5\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_60_10000_T=0_2_FINAL.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}{Ferromagnetic with two centers $T>0$, 60$\times$60} \begin{columns} \begin{column}{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_60_10000_T=1_FINAL.pdf} \end{center} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_60_10000_T=1_FINAL_COHERENCE.pdf} \end{center} \end{column} \begin{column}{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_60_10000_T=4_FINAL.pdf} \end{center} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_60_10000_T=4_FINAL_COHERENCE.pdf} \end{center} \end{column} \begin{column}{0.3\textwidth} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_60_10000_T=8_FINAL.pdf} \end{center} \begin{center} \includegraphics[width=\textwidth]{Pic/J+1_60_10000_T=8_COHERENCE.pdf} \end{center} \end{column} \end{columns} \end{frame} \begin{frame}[t,allowframebreaks] \frametitle{Bibliography} \printbibliography \end{frame} \section{Supporting Info} \begin{frame}{Concepts of statistical mechanics: entropy \cite{peliti2011statistical}} \textbf{Central problem of thermodynamics:} characterize the actual state of equilibrium among all virtual states \\ \textbf{Entropy postulate}: there exists a function $S$ of the extensive variables $(X_{0},X_{1},\dots,X_{r})$, called entropy, that assumes its maximum value at the state of equilibrium among all virtual states and that possesses the following properties: \begin{itemize} \item Extensivity $S^{(1\cup2)}=S^{1}+S^{2}$ \item Convexity $S((1-\alpha)X^{1}+\alpha X^{2}) \geq (1-\alpha)S(X^{1})+\alpha S(X^{2})$ \item Monotonicity $\dfrac{\partial S}{\partial E}|_{X_{1}...X_{r}}=\frac{1}{T}>0$ \end{itemize} \textbf{The equilibrium state corresponds to the maximum entropy compatible with the constraints} \end{frame} \begin{frame}{Concepts of statistical mechanics: entropy \cite{peliti2011statistical}} \begin{itemize} \item \textbf{Fundamental postulate of statistical mechanics} $S=k_{b}\ln|\Gamma|$ \item where $S$ is the thermodynamic entropy, $k_{b}$ is the Boltzmann constant and $|\Gamma|$ is the volume in phase space \end{itemize} \begin{equation}
\begin{split} S(X_{0},\dots,X_{r})&=k_{b}\ln \int_{\Gamma}dx \\ &= k_{b}\ln\int dx\prod_{i=0}^{r}\left[\theta(X_{i}(x)-(X_{i}-\Delta X_{i}))\theta(X_{i}-X_{i}(x))\right] \end{split} \end{equation} \begin{itemize} \item $(X_{0},\dots,X_{r})$ are the extensive variables \item The $\theta$ functions ensure that the integrand is non-zero only in the interval $X_{i} - \Delta X_{i} \leq X_{i}(x) \leq X_{i}$ \end{itemize} \end{frame} \begin{frame}{Concepts of statistical mechanics: microcanonical ensemble} \begin{itemize} \item Let us focus on a particular (extensive) observable $A$ \end{itemize} \begin{equation} S(X;a)=k_{b}\ln\int_{\Gamma}dx\,\delta(A(x)-a) \end{equation} \begin{equation} S(X)=S(X;a^{*})\geq S(X;a) \end{equation} \begin{equation} \begin{split} \frac{|\Gamma (a)|}{|\Gamma |}&=\frac{1}{|\Gamma |}\int_{\Gamma}dx\,\delta(A(x)-a) \\ =& \exp\left\lbrace \frac{1}{k_{b}} \left[ S(X;a) - S(X;a^{*})\right] \right\rbrace \\ \simeq& \exp\left\lbrace \dfrac{1}{k_{b}} \left[ \dfrac{\partial^{2}S}{\partial A^{2}}\Big|_{a^{*}} (a-a^{*})^{2} \right] \right\rbrace \end{split} \end{equation} \begin{equation} a^{*}=\left\langle A(x) \right\rangle =\frac{1}{|\Gamma|}\int_{\Gamma}dx\, A(x) \end{equation} \end{frame} \begin{frame}{Concepts of statistical mechanics: canonical ensemble} \begin{equation} a^{*}=\frac{1}{|\Gamma|}\int_{\Gamma}dx_{s}dx_{R}\, A(x_{s}) \end{equation} \begin{equation} \left\langle A(x) \right\rangle=\frac{1}{|\Gamma|}\int dx_{s}dx_{R}\,A(x_{s})\delta(H(x)-E) \end{equation} \begin{equation} \left\langle A(x) \right\rangle=\frac{1}{|\Gamma|}\int dx_{s}dx_{R}\,A(x_{s})\delta(H^{S}(x_{s})+H^{R}(x_{R})-E) \end{equation} \begin{equation} \left\langle A(x) \right\rangle=\frac{1}{|\Gamma|}\int dx_{s}\,A(x_{s})\times\int dx_{R}\,\delta(H^{R}(x_{R})-(E-H^{(S)}(x_{s}))) \end{equation} \begin{equation} \int dx_{R}\,\delta(H^{R}(x_{R})-(E-H^{(S)}(x_{s})))\simeq \exp\left\lbrace\frac{1}{k_{b}} S^{R}(E-H^{S})\right\rbrace \end{equation} \end{frame} \begin{frame}{Concepts of statistical mechanics: canonical ensemble} \begin{equation} \exp\left\lbrace\frac{1}{k_{b}} S^{R}(E-H^{S})\right\rbrace \simeq \exp \left[ \frac{1}{k_{b}}S^{R}(E) \right] \exp\left[ -\frac{1}{k_{b}}\frac{\partial S^{(R)}}{\partial E}\Big|_{E} H^{(S)}(x_{s}) \right] \end{equation} \begin{equation} \left\langle A(x) \right\rangle=\dfrac{1}{Z}\int dx_{s}A(x_{s})\exp\left[ -\frac{H^{(S)}(x_{s})}{k_{b}T} \right] \end{equation} \begin{equation} Z=\int dx_{s}\exp\left[ -\frac{H^{S}(x_{s})}{k_{b}T} \right] \end{equation} \begin{equation} \left\langle A(x) \right\rangle=\dfrac{1}{Z}\int dE\int dx\,\delta(H(x)-E)A(x)\exp\left(-\dfrac{E}{k_{b}T}\right) \end{equation} \begin{equation} \left\langle A(x) \right\rangle=\dfrac{1}{Z}\int dE'\,a^{*}(E')\exp\left[ - \dfrac{E'-TS(E')}{k_{b}T}\right] \end{equation} \end{frame} \begin{frame}{Concepts of statistical mechanics: canonical ensemble \cite{peliti2011statistical}} \begin{equation} Z\simeq \exp\left[-\dfrac{E^{*}-TS(E^{*})}{k_{b}T} \right]=\exp\left( - \frac{F}{k_{b}T} \right) \end{equation} \begin{equation} \dfrac{\partial \ln Z(\beta)}{\partial \beta}=-\dfrac{1}{Z}\int dx\, H(x)\exp\left[ -\dfrac{H}{k_{b}T}\right]=-\left\langle H(x) \right\rangle=-E \end{equation} \begin{equation} \dfrac{\partial^{2} \ln Z(\beta)}{\partial \beta^{2}}=\left\langle H(x)^{2} \right\rangle-\left\langle H(x) \right\rangle^{2} \end{equation} \begin{equation} \left\langle H(x)^{2} \right\rangle-\left\langle H(x) \right\rangle^{2}=-\dfrac{\partial E}{\partial (1/k_{b}T)}=k_{b}T^{2}\dfrac{\partial E}{\partial T}=k_{b}T^{2}C \end{equation} In this way a statistical quantity, the variance, has been connected to a thermodynamic quantity: the heat capacity $C$. \end{frame} \begin{frame}{Monte Carlo method: Metropolis idea \cite{peliti2011statistical}} \begin{itemize} \item The true dynamics is replaced with a fictitious stochastic dynamics. The state at $t+1$ depends only on the state at $t$ $\rightarrow$ Markov chain \item The evolution of the probability is described by the \textbf{master equation} $\Delta p_{a}(t)=\sum'_{b\neq a}[W_{ab}p_{b}(t)-W_{ba}p_{a}(t)]$ \item The stationary state is given by $\sum'_{b\neq a}[W_{ab}p_{b}(t)-W_{ba}p_{a}(t)]=0\quad \forall a$ \item Detailed balance property: $W_{ab}W_{bc}W_{ca}=W_{ac}W_{cb}W_{ba}$ \item Therefore the stationary state is given by $W_{ab}p_{b}(t)-W_{ba}p_{a}(t)=0\quad \forall a,b$ \end{itemize} \end{frame} \begin{frame}{Monte Carlo method: Metropolis idea \cite{peliti2011statistical}} \begin{itemize} \item We want to sample $p^{eq}_{a}$ \item This can be done as long as $W_{ab}$ is ergodic and the detailed balance property holds \item The transition between any two arbitrary states can take place as long as one waits for a sufficient amount of time \end{itemize} \end{frame} \begin{frame}{Application \cite{peliti2011statistical}} \begin{itemize} \item $H(\sigma)=-\sum_{\left\langle ij \right\rangle}J\sigma_{i}\sigma_{j}-\sum_{i}h\sigma_{i}$ \item $P_{\sigma}=\frac{e^{-H(\sigma)/k_{b}T}}{Z}$, $Z=\sum_{\sigma}e^{-H(\sigma)/k_{b}T}$ \item The observables are calculated as $E=\left\langle H \right\rangle=\sum_{\sigma}H(\sigma)P_{\sigma}^{B} \quad M=\sum_{\sigma}\left(\sum_{i}\sigma_{i} \right)P_{\sigma}^{B}$ \item The Markov chain states should be the microstates $\sigma$, with $P_{\sigma}$ the stationary distribution \item Symmetric trial matrix: $\alpha_{\sigma\sigma'} = \alpha_{\sigma'\sigma}$; detailed balance then gives $\dfrac{W_{\sigma'\sigma}}{W_{\sigma\sigma'}}=\dfrac{P_{\sigma'}}{P_{\sigma}}=\exp\left[ - \dfrac{H(\sigma')-H(\sigma)}{k_{b}T}\right]$ \item The $Z$ term is no longer present! \end{itemize} \end{frame} \begin{frame}{Application \cite{peliti2011statistical}} \begin{itemize} \item $W_{\sigma'\sigma}= \begin{cases} \kappa & H(\sigma') < H(\sigma) \\ \kappa\exp\left\lbrace -\left[ H(\sigma')-H(\sigma) \right] / k_{b}T \right\rbrace & H(\sigma') \geq H(\sigma) \end{cases}$ \item $\left\langle A \right\rangle \neq \left[ A \right]_{T}=\dfrac{1}{T}\sum^{T_{0}+T}_{t=T_{0}}A_{\sigma(t)}$ in general \item $A(\sigma)=\left\langle A \right\rangle \left[1+O(N^{-1/2})\right] \forall \sigma \in \Gamma$ \item $S=0.5Nk_{b}\ln 2 \quad \dfrac{|\Gamma|}{2^{N}}\approx 2^{-0.5N}\quad (N=100: \approx 10^{-15})$ \item $\left\langle A \right\rangle \approx \left[ A \right]_{T}=\dfrac{1}{T}\sum^{T_{0}+T}_{t=T_{0}}A_{\sigma(t)}$ \item $\left\langle \Delta A_{T}^{2} \right\rangle \approx \dfrac{1}{T}\left(A_{\sigma(t)}-\left[A\right]_{T} \right)^{2}$ \item $\sigma_{t}$ and $\sigma_{t'}$ are independent if $|t-t'|$ is larger than a characteristic time $\tau_{0}$ \item $\left\langle \Delta A_{T}^{2} \right\rangle \approx \dfrac{\tau_{0}}{T}\left(A_{\sigma(t)}-\left[A\right]_{T} \right)^{2}$ \end{itemize} \end{frame} \begin{frame}{Monte Carlo integration \cite{peliti2011statistical}} \begin{itemize} \item We want to calculate an integral of the type $\left\langle A\right\rangle = \int^{1}_{0} dx\,A(x)\rho(x)$, where $\rho$ is the probability distribution. \item Evaluate the integrand at $N+1$ points uniformly arranged between 0 and 1: $\left\langle A\right\rangle \approx \frac{1}{N+1}\sum_{i=0}^{N}A(x_{i})\rho(x_{i})$ \item A better convergence is reached if the $x_{i}$ density is proportional to $\rho(x)$. In this case we have $\left\langle A\right\rangle \approx 1/N\sum_{i=1}^{N}A(\textbf{x}_{i})$. \end{itemize} \end{frame} \end{document}
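To make the two recipes above concrete, here are two minimal Python sketches in the spirit of the slides. They are illustrative only: parameter names and values (`L`, `J`, `T`, `sweeps`, the choice of `w`) are assumptions for the example, not taken from the project's actual script, and we set $k_{b}=1$.

First, importance sampling of $I=\int_0^1 f(x)\,dx$ with a weight $w$ whose inverse CDF is known. Because $f/w$ is constant here, the variance term $\left\langle (f/w)^2\right\rangle - \left\langle f/w\right\rangle^2$ vanishes, which is exactly why the choice of $w$ is crucial:

```python
import numpy as np

rng = np.random.default_rng(1)

a = 10.0
f = lambda x: np.exp(-a * x)                          # sharply peaked integrand
w = lambda x: a * np.exp(-a * x) / (1 - np.exp(-a))   # matched, normalized density on [0, 1]
x_of_u = lambda u: -np.log(1 - u * (1 - np.exp(-a))) / a  # inverse CDF of w

u = rng.random(100_000)                               # L uniform draws on [0, 1]
estimate = (f(x_of_u(u)) / w(x_of_u(u))).mean()       # (1/L) sum f[x(u_i)] / w[x(u_i)]
print(estimate, "vs exact:", (1 - np.exp(-a)) / a)
```

Second, the Metropolis scheme for the 2D Ising lattice with periodic boundary conditions (the torus topology of the slides): propose a single spin flip, accept it always if it lowers the energy, and with probability $\exp(-\Delta E/k_{b}T)$ otherwise:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_ising(L=10, J=1.0, T=1.5, sweeps=200):
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(sweeps * L * L):        # one sweep = L*L attempted flips
        i, j = rng.integers(L, size=2)
        # Sum over the four neighbours with periodic wrapping (torus).
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nb    # energy change of flipping spin (i, j)
        # Downhill and flat moves are always accepted; uphill ones with exp(-dE/T).
        if dE <= 0 or (T > 0 and rng.random() < np.exp(-dE / T)):
            spins[i, j] *= -1
    return spins

lattice = metropolis_ising()               # J = +1: ferromagnetic "opinions"
print("magnetization per site:", lattice.mean())
```

At $T=0$ only moves with $\Delta E \le 0$ are accepted, which is how the frozen stationary states of the ferromagnetic runs arise.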
# This module forms part of NewSLO and is `$include`d there RoundTrip := proc(e, t::t_type) local result; interface(screenwidth=9999, prettyprint=0, warnlevel=0, showassumed=0,quiet=true); kernelopts(assertlevel=0); result := eval(ToInert(Simplify(_passed)), _Inert_ATTRIBUTE=NULL); lprint(result) end proc; Simplify := proc(e, t::t_type, {ctx :: list := []}, $) subsindets(SimplifyKB(value(e), t, build_kb(ctx, "Simplify")), And({t_sum, t_product}, anyfunc(anything, anything=range)), e -> subsop(0 = `if`(e::t_sum, SumIE, ProductIE), applyop(`+`, [2,2,2], e, 1))) end proc; SimplifyKB_ := proc(e, t::t_type, kb::t_kb, $) local patterns, x, kb1, ex; if t :: HMeasure(anything) then %fromLO(%improve(%toLO(e), _ctx=kb), _ctx=kb); elif t :: HFunction(anything, anything) then patterns := htype_patterns(op(1,t)); if patterns :: Branches(Branch(PVar(name),anything)) then # Eta-expand the function type x := `if`(e::lam(name,anything,anything), op(1,e), op([1,1,1],patterns)); x, kb1 := genType(x, op(1,t), kb, e); ex := app(e,x); lam(x, op(1,t), SimplifyKB_(ex, op(2,t), kb1)) else # Eta-expand the function type and the sum-of-product argument-type x := `if`(e::lam(name,anything,anything), op(1,e), d); if depends([e,t,kb], x) then x := gensym(x) end if; ex := app(e,x); lam(x, op(1,t), 'case'(x, map(proc(branch) local eSubst, pSubst, p1, binds, y, kb1, i, pSubst1; eSubst, pSubst := pattern_match([x,e], x, op(1,branch)); p1 := subs(pSubst, op(1,branch)); binds := [pattern_binds(p1)]; kb1 := kb; pSubst1 := table(); for i from 1 to nops(binds) do y, kb1 := genType(op(i,binds), op([2,i],branch), kb1); pSubst1[op(i,binds)] := y; end do; pSubst1 := op(op(pSubst1)); Branch(subs(pSubst1, p1), SimplifyKB_(eval(eval(ex,eSubst),pSubst1), op(2,t), kb1)) end proc, patterns))) end if else %simplify_assuming(e, kb) end if end proc; SimplifyKB := proc(e, t::t_type, kb::t_kb, $) eval(SimplifyKB_(args), [%fromLO=fromLO,%improve=improve,%toLO=toLO,%simplify_assuming=simplify_assuming]); end proc; # Testing TestSimplify := proc(m, t, n::algebraic:=m, {verify:=simplify}) local s, r; # How to pass keyword argument {ctx::list:=[]} on to Simplify? s, r := selectremove(type, [_rest], 'identical(ctx)=anything'); CodeTools[Test](Simplify(m,t,op(s)), n, measure(verify), op(r)) end proc; TestHakaru := proc(m, n::{set(algebraic),algebraic}:=m, {simp:=improve, verify:=simplify, ctx::list:=[]}) local kb := build_kb(ctx, "TestHakaru"), ver := measure(verify); ver := `if`(n::set, 'member'(ver), ver); CodeTools[Test](fromLO(simp(toLO(m), _ctx=kb), _ctx=kb), n, ver, _rest) end proc; TestDisint := module() export ModuleApply := proc( M::{t_Hakaru, list(anything)}, #args to disint, or just 1st arg n::set({t_Hakaru, identical(NULL)}), #desired return ctx::t_kb_atoms:= [], #context: assumptions, "knowledge" TLim::{positive, identical(-1)}:= 80 #timelimit ) local disint_args, disint_var, expected := n; if M :: list then disint_args := [op(M),ctx]; else disint_var := gensym('t'); disint_args := [M,disint_var,ctx]; expected := subs(:-`t`=disint_var,expected); end if; try do_test(disint_args, copy(expected), TLim, _rest); catch "time expired": error "Time expired while running: disint(%1)", disint_args; end try; end proc; # This is necessary because CodeTools seems to forget the value # of our local variables (but only the `expected result' arg) # unless that arg is precisely a formal parameter, and we `copy' the # input to this function.
local do_test := proc(disint_args, expected, tlim) timelimit( tlim, CodeTools[Test] ( {disint(op(disint_args))} , expected , '`subset`(measure(simplify))' , _rest)); end proc; end module; TestEfficient := proc(e, t::t_type, t_kb := KB:-empty, {label::string := "Test" }) local done_, result, todo; todo := SimplifyKB_(args[1..3]); done_ := eval(todo, [%fromLO=fromLO,%improve=improve,%toLO=toLO,%simplify_assuming=simplify_assuming]); _Env_TestTools_Try_printf := false; result := TestTools[Try](label, Efficient(done_),true); if result = NULL then printf("%s passed.\n", label); else error sprintf("%s FAILED.\n" "The result of\n\t%a\n\tis not efficient.\n" , label, todo ); end if; end proc; # Test roughly for "efficient" Hakaru measure terms, # i.e., those we want simplification to produce. Efficient := proc(mm, $) local m, n; m := mm; if has(m, 'undefined') then return false; end if; while m :: '{lam(name, anything, anything), Context(anything, anything), case(anything, Branches(Branch(anything, anything))), And(specfunc(piecewise) ,{anyfunc(anything, anything, Msum()) ,anyfunc(anything, anything, anything, Msum())})}' do m := op(`if`(op(0,m)='lam',3,`if`(op(0,m)='case',[2,1,2],2)),m); end do; if m :: 'Weight(anything, anything)' then m := op(2,m) end if; if has(m, '{infinity, Lebesgue, int, Int, Beta, GAMMA}') then return false; end if; for n in `if`(m :: 'specfunc(Msum)', m, [m]) do if n :: 'Weight(anything, anything)' then n := op(2,n) end if; if has(subsindets(n, 'specfunc(Weight(anything, anything), Msum)', s -> `if`(Testzero(`+`(op(map2(op, 1, s))) - 1), map2(op, 2, s), s)), '{Msum, Weight}') then return false; end if; end do; return true; end proc; # Load a file in concrete Hakaru syntax (using the "momiji" command) # and return its term (in which Sum and Product are inert) and type. Concrete := proc(path::string, $) local cmd, res, dangerous_chars; cmd := FileTools:-AbsolutePath(path, (FileTools:-ParentDirectory@@2)(LibraryTools:-FindLibrary(Hakaru))); dangerous_chars := [ " ", "'", """", `if`(kernelopts(platform)="windows", [], ["\\"])[] ]; if ormap((c->StringTools:-Has(cmd,c)), dangerous_chars) then error "Dangerous characters in path: %1", cmd; end if; cmd := cat("momiji ", cmd); res := ssystem(cmd); if res :: [0, string] then parse(cat("use Hakaru in ", op(2,res), " end use")); else error "ssystem %1: %2", cmd, res; end if; end proc;
[STATEMENT] lemma group_reduced_homology_group [simp]: "group (reduced_homology_group p X)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. Group.group (reduced_homology_group p X) [PROOF STEP] by (simp add: reduced_homology_group_def group.group_subgroup_generated)
Describe Users/JanetY here.

20130916 12:49:38 Welcome to the Wiki, Janet. Using your real name has always been a concern on every Wiki I've been involved with. Front Page really isn't a proper place to discuss it, so I moved your comment to Identity/talk. There's also a page called Importance of using your RealName you might want to check into. Users/PeteB

20130916 13:20:55 And even some of us editing with nicknames are quite friendly and have plenty of get-to-know-you info on our editor profiles. Users/JabberWokky

20130916 13:42:51 Janet, please don't change Front Page again; it is not the proper page to hold a discussion. Users/PeteB

20130916 15:27:27 What's your real name then, Janet? ☺ Users/ConstantiaOomen

20130916 16:12:10 PeteB, are you the owner of the Davis Wiki? Who gets to decide what goes where? Doesn't this violate the spirit of the Wiki? Users/JanetY

Janet, there is no owner of the Wiki. This is a community-run Wiki and decisions are made by consensus. Did you look at the first comment on this page I left you? Please do so; there's some good info for you there. Users/PeteB

20130916 16:46:10 Actually, what I said is something I will very much stand behind. Feel free to call me to discuss this: (814) 8898845. You're insisting that a bunch of other people are acting in the wrong when they correct you. Either a bunch of people who have been using the wiki for many years have suddenly and irrationally decided to single you out and pick on you for no reason, or you're simply making an accidental mistake over and over again, and thus demanding that your way is the only way. That unilateral demand with no willingness to talk to other editors is rude, but (and I make this clear in my comment) it is either innocent ignorance or willful malice. I'm giving you the benefit of the doubt and believe that you are simply confused. Several others have moved on to assume you are simply being a jerk for the sake of being nasty. I'm happy to continue to consider this simply a terribly bad introduction because you are unfamiliar with the wiki. Again, if you'd like to talk, I'm extending an offer of a friendly chat to try to help you out: (814) 8898845. Users/JabberWokky

20130916 17:12:26 @Janet: Please give us your full name first. Mine is the one you see. Actually it is Constantia Maria Oomen. PS: do you want to see my ID? Here it is: http://constantiaoomen.com/media/uploaded/images/GreenCardCMO(2).jpg Users/ConstantiaOomen

20130916 18:01:54 Here is another ID: indulge yourself! Image(CMODL.jpg, left, 300, thumbnail) Users/ConstantiaOomen

20130916 18:58:33 This is that user that got banned a while ago. Same style of writing / confrontation. Thanks for taking the high road, JW. Users/StevenDaubert
(* Title: HOL/Imperative_HOL/ex/Imperative_Quicksort.thy Author: Lukas Bulwahn, TU Muenchen *) section \<open>An imperative implementation of Quicksort on arrays\<close> theory Imperative_Quicksort imports "~~/src/HOL/Imperative_HOL/Imperative_HOL" Subarray "~~/src/HOL/Library/Multiset" "~~/src/HOL/Library/Code_Target_Numeral" begin text \<open>We prove QuickSort correct in the Relational Calculus.\<close> definition swap :: "'a::heap array \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> unit Heap" where "swap arr i j = do { x \<leftarrow> Array.nth arr i; y \<leftarrow> Array.nth arr j; Array.upd i y arr; Array.upd j x arr; return () }" lemma effect_swapI [effect_intros]: assumes "i < Array.length h a" "j < Array.length h a" "x = Array.get h a ! i" "y = Array.get h a ! j" "h' = Array.update a j x (Array.update a i y h)" shows "effect (swap a i j) h h' r" unfolding swap_def using assms by (auto intro!: effect_intros) lemma swap_permutes: assumes "effect (swap a i j) h h' rs" shows "mset (Array.get h' a) = mset (Array.get h a)" using assms unfolding swap_def by (auto simp add: Array.length_def mset_swap dest: sym [of _ "h'"] elim!: effect_bindE effect_nthE effect_returnE effect_updE) function part1 :: "'a::{heap,ord} array \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> 'a \<Rightarrow> nat Heap" where "part1 a left right p = ( if (right \<le> left) then return right else do { v \<leftarrow> Array.nth a left; (if (v \<le> p) then (part1 a (left + 1) right p) else (do { swap a left right; part1 a left (right - 1) p })) })" by pat_completeness auto termination by (relation "measure (\<lambda>(_,l,r,_). r - l )") auto declare part1.simps[simp del] lemma part_permutes: assumes "effect (part1 a l r p) h h' rs" shows "mset (Array.get h' a) = mset (Array.get h a)" using assms proof (induct a l r p arbitrary: h h' rs rule:part1.induct) case (1 a l r p h h' rs) thus ?case unfolding part1.simps [of a l r p] by (elim effect_bindE effect_ifE effect_returnE effect_nthE) (auto simp add: swap_permutes) qed lemma part_returns_index_in_bounds: assumes "effect (part1 a l r p) h h' rs" assumes "l \<le> r" shows "l \<le> rs \<and> rs \<le> r" using assms proof (induct a l r p arbitrary: h h' rs rule:part1.induct) case (1 a l r p h h' rs) note cr = \<open>effect (part1 a l r p) h h' rs\<close> show ?case proof (cases "r \<le> l") case True (* Terminating case *) with cr \<open>l \<le> r\<close> show ?thesis unfolding part1.simps[of a l r p] by (elim effect_bindE effect_ifE effect_returnE effect_nthE) auto next case False (* recursive case *) note rec_condition = this let ?v = "Array.get h a ! 
l" show ?thesis proof (cases "?v \<le> p") case True with cr False have rec1: "effect (part1 a (l + 1) r p) h h' rs" unfolding part1.simps[of a l r p] by (elim effect_bindE effect_nthE effect_ifE effect_returnE) auto from rec_condition have "l + 1 \<le> r" by arith from 1(1)[OF rec_condition True rec1 \<open>l + 1 \<le> r\<close>] show ?thesis by simp next case False with rec_condition cr obtain h1 where swp: "effect (swap a l r) h h1 ()" and rec2: "effect (part1 a l (r - 1) p) h1 h' rs" unfolding part1.simps[of a l r p] by (elim effect_bindE effect_nthE effect_ifE effect_returnE) auto from rec_condition have "l \<le> r - 1" by arith from 1(2) [OF rec_condition False rec2 \<open>l \<le> r - 1\<close>] show ?thesis by fastforce qed qed qed lemma part_length_remains: assumes "effect (part1 a l r p) h h' rs" shows "Array.length h a = Array.length h' a" using assms proof (induct a l r p arbitrary: h h' rs rule:part1.induct) case (1 a l r p h h' rs) note cr = \<open>effect (part1 a l r p) h h' rs\<close> show ?case proof (cases "r \<le> l") case True (* Terminating case *) with cr show ?thesis unfolding part1.simps[of a l r p] by (elim effect_bindE effect_ifE effect_returnE effect_nthE) auto next case False (* recursive case *) with cr 1 show ?thesis unfolding part1.simps [of a l r p] swap_def by (auto elim!: effect_bindE effect_ifE effect_nthE effect_returnE effect_updE) fastforce qed qed lemma part_outer_remains: assumes "effect (part1 a l r p) h h' rs" shows "\<forall>i. i < l \<or> r < i \<longrightarrow> Array.get h (a::'a::{heap,linorder} array) ! i = Array.get h' a ! i" using assms proof (induct a l r p arbitrary: h h' rs rule:part1.induct) case (1 a l r p h h' rs) note cr = \<open>effect (part1 a l r p) h h' rs\<close> show ?case proof (cases "r \<le> l") case True (* Terminating case *) with cr show ?thesis unfolding part1.simps[of a l r p] by (elim effect_bindE effect_ifE effect_returnE effect_nthE) auto next case False (* recursive case *) note rec_condition = this let ?v = "Array.get h a ! l" show ?thesis proof (cases "?v \<le> p") case True with cr False have rec1: "effect (part1 a (l + 1) r p) h h' rs" unfolding part1.simps[of a l r p] by (elim effect_bindE effect_nthE effect_ifE effect_returnE) auto from 1(1)[OF rec_condition True rec1] show ?thesis by fastforce next case False with rec_condition cr obtain h1 where swp: "effect (swap a l r) h h1 ()" and rec2: "effect (part1 a l (r - 1) p) h1 h' rs" unfolding part1.simps[of a l r p] by (elim effect_bindE effect_nthE effect_ifE effect_returnE) auto from swp rec_condition have "\<forall>i. i < l \<or> r < i \<longrightarrow> Array.get h a ! i = Array.get h1 a ! i" unfolding swap_def by (elim effect_bindE effect_nthE effect_updE effect_returnE) auto with 1(2) [OF rec_condition False rec2] show ?thesis by fastforce qed qed qed lemma part_partitions: assumes "effect (part1 a l r p) h h' rs" shows "(\<forall>i. l \<le> i \<and> i < rs \<longrightarrow> Array.get h' (a::'a::{heap,linorder} array) ! i \<le> p) \<and> (\<forall>i. rs < i \<and> i \<le> r \<longrightarrow> Array.get h' a ! 
i \<ge> p)" using assms proof (induct a l r p arbitrary: h h' rs rule:part1.induct) case (1 a l r p h h' rs) note cr = \<open>effect (part1 a l r p) h h' rs\<close> show ?case proof (cases "r \<le> l") case True (* Terminating case *) with cr have "rs = r" unfolding part1.simps[of a l r p] by (elim effect_bindE effect_ifE effect_returnE effect_nthE) auto with True show ?thesis by auto next case False (* recursive case *) note lr = this let ?v = "Array.get h a ! l" show ?thesis proof (cases "?v \<le> p") case True with lr cr have rec1: "effect (part1 a (l + 1) r p) h h' rs" unfolding part1.simps[of a l r p] by (elim effect_bindE effect_nthE effect_ifE effect_returnE) auto from True part_outer_remains[OF rec1] have a_l: "Array.get h' a ! l \<le> p" by fastforce have "\<forall>i. (l \<le> i = (l = i \<or> Suc l \<le> i))" by arith with 1(1)[OF False True rec1] a_l show ?thesis by auto next case False with lr cr obtain h1 where swp: "effect (swap a l r) h h1 ()" and rec2: "effect (part1 a l (r - 1) p) h1 h' rs" unfolding part1.simps[of a l r p] by (elim effect_bindE effect_nthE effect_ifE effect_returnE) auto from swp False have "Array.get h1 a ! r \<ge> p" unfolding swap_def by (auto simp add: Array.length_def elim!: effect_bindE effect_nthE effect_updE effect_returnE) with part_outer_remains [OF rec2] lr have a_r: "Array.get h' a ! r \<ge> p" by fastforce have "\<forall>i. (i \<le> r = (i = r \<or> i \<le> r - 1))" by arith with 1(2)[OF lr False rec2] a_r show ?thesis by auto qed qed qed fun partition :: "'a::{heap,linorder} array \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> nat Heap" where "partition a left right = do { pivot \<leftarrow> Array.nth a right; middle \<leftarrow> part1 a left (right - 1) pivot; v \<leftarrow> Array.nth a middle; m \<leftarrow> return (if (v \<le> pivot) then (middle + 1) else middle); swap a m right; return m }" declare partition.simps[simp del] lemma partition_permutes: assumes "effect (partition a l r) h h' rs" shows "mset (Array.get h' a) = mset (Array.get h a)" proof - from assms part_permutes swap_permutes show ?thesis unfolding partition.simps by (elim effect_bindE effect_returnE effect_nthE effect_ifE effect_updE) fastforce qed lemma partition_length_remains: assumes "effect (partition a l r) h h' rs" shows "Array.length h a = Array.length h' a" proof - from assms part_length_remains show ?thesis unfolding partition.simps swap_def by (elim effect_bindE effect_returnE effect_nthE effect_ifE effect_updE) auto qed lemma partition_outer_remains: assumes "effect (partition a l r) h h' rs" assumes "l < r" shows "\<forall>i. i < l \<or> r < i \<longrightarrow> Array.get h (a::'a::{heap,linorder} array) ! i = Array.get h' a ! i" proof - from assms part_outer_remains part_returns_index_in_bounds show ?thesis unfolding partition.simps swap_def by (elim effect_bindE effect_returnE effect_nthE effect_ifE effect_updE) fastforce qed lemma partition_returns_index_in_bounds: assumes effect: "effect (partition a l r) h h' rs" assumes "l < r" shows "l \<le> rs \<and> rs \<le> r" proof - from effect obtain middle h'' p where part: "effect (part1 a l (r - 1) p) h h'' middle" and rs_equals: "rs = (if Array.get h'' a ! middle \<le> Array.get h a ! 
r then middle + 1 else middle)" unfolding partition.simps by (elim effect_bindE effect_returnE effect_nthE effect_ifE effect_updE) simp from \<open>l < r\<close> have "l \<le> r - 1" by arith from part_returns_index_in_bounds[OF part this] rs_equals \<open>l < r\<close> show ?thesis by auto qed lemma partition_partitions: assumes effect: "effect (partition a l r) h h' rs" assumes "l < r" shows "(\<forall>i. l \<le> i \<and> i < rs \<longrightarrow> Array.get h' (a::'a::{heap,linorder} array) ! i \<le> Array.get h' a ! rs) \<and> (\<forall>i. rs < i \<and> i \<le> r \<longrightarrow> Array.get h' a ! rs \<le> Array.get h' a ! i)" proof - let ?pivot = "Array.get h a ! r" from effect obtain middle h1 where part: "effect (part1 a l (r - 1) ?pivot) h h1 middle" and swap: "effect (swap a rs r) h1 h' ()" and rs_equals: "rs = (if Array.get h1 a ! middle \<le> ?pivot then middle + 1 else middle)" unfolding partition.simps by (elim effect_bindE effect_returnE effect_nthE effect_ifE effect_updE) simp from swap have h'_def: "h' = Array.update a r (Array.get h1 a ! rs) (Array.update a rs (Array.get h1 a ! r) h1)" unfolding swap_def by (elim effect_bindE effect_returnE effect_nthE effect_updE) simp from swap have in_bounds: "r < Array.length h1 a \<and> rs < Array.length h1 a" unfolding swap_def by (elim effect_bindE effect_returnE effect_nthE effect_updE) simp from swap have swap_length_remains: "Array.length h1 a = Array.length h' a" unfolding swap_def by (elim effect_bindE effect_returnE effect_nthE effect_updE) auto from \<open>l < r\<close> have "l \<le> r - 1" by simp note middle_in_bounds = part_returns_index_in_bounds[OF part this] from part_outer_remains[OF part] \<open>l < r\<close> have "Array.get h a ! r = Array.get h1 a ! r" by fastforce with swap have right_remains: "Array.get h a ! r = Array.get h' a ! rs" unfolding swap_def by (auto simp add: Array.length_def elim!: effect_bindE effect_returnE effect_nthE effect_updE) (cases "r = rs", auto) from part_partitions [OF part] show ?thesis proof (cases "Array.get h1 a ! middle \<le> ?pivot") case True with rs_equals have rs_equals: "rs = middle + 1" by simp { fix i assume i_is_left: "l \<le> i \<and> i < rs" with swap_length_remains in_bounds middle_in_bounds rs_equals \<open>l < r\<close> have i_props: "i < Array.length h' a" "i \<noteq> r" "i \<noteq> rs" by auto from i_is_left rs_equals have "l \<le> i \<and> i < middle \<or> i = middle" by arith with part_partitions[OF part] right_remains True have "Array.get h1 a ! i \<le> Array.get h' a ! rs" by fastforce with i_props h'_def in_bounds have "Array.get h' a ! i \<le> Array.get h' a ! rs" unfolding Array.update_def Array.length_def by simp } moreover { fix i assume "rs < i \<and> i \<le> r" hence "(rs < i \<and> i \<le> r - 1) \<or> (rs < i \<and> i = r)" by arith hence "Array.get h' a ! rs \<le> Array.get h' a ! i" proof assume i_is: "rs < i \<and> i \<le> r - 1" with swap_length_remains in_bounds middle_in_bounds rs_equals have i_props: "i < Array.length h' a" "i \<noteq> r" "i \<noteq> rs" by auto from part_partitions[OF part] rs_equals right_remains i_is have "Array.get h' a ! rs \<le> Array.get h1 a ! i" by fastforce with i_props h'_def show ?thesis by fastforce next assume i_is: "rs < i \<and> i = r" with rs_equals have "Suc middle \<noteq> r" by arith with middle_in_bounds \<open>l < r\<close> have "Suc middle \<le> r - 1" by arith with part_partitions[OF part] right_remains have "Array.get h' a ! rs \<le> Array.get h1 a ! 
(Suc middle)" by fastforce with i_is True rs_equals right_remains h'_def show ?thesis using in_bounds unfolding Array.update_def Array.length_def by auto qed } ultimately show ?thesis by auto next case False with rs_equals have rs_equals: "middle = rs" by simp { fix i assume i_is_left: "l \<le> i \<and> i < rs" with swap_length_remains in_bounds middle_in_bounds rs_equals have i_props: "i < Array.length h' a" "i \<noteq> r" "i \<noteq> rs" by auto from part_partitions[OF part] rs_equals right_remains i_is_left have "Array.get h1 a ! i \<le> Array.get h' a ! rs" by fastforce with i_props h'_def have "Array.get h' a ! i \<le> Array.get h' a ! rs" unfolding Array.update_def by simp } moreover { fix i assume "rs < i \<and> i \<le> r" hence "(rs < i \<and> i \<le> r - 1) \<or> i = r" by arith hence "Array.get h' a ! rs \<le> Array.get h' a ! i" proof assume i_is: "rs < i \<and> i \<le> r - 1" with swap_length_remains in_bounds middle_in_bounds rs_equals have i_props: "i < Array.length h' a" "i \<noteq> r" "i \<noteq> rs" by auto from part_partitions[OF part] rs_equals right_remains i_is have "Array.get h' a ! rs \<le> Array.get h1 a ! i" by fastforce with i_props h'_def show ?thesis by fastforce next assume i_is: "i = r" from i_is False rs_equals right_remains h'_def show ?thesis using in_bounds unfolding Array.update_def Array.length_def by auto qed } ultimately show ?thesis by auto qed qed function quicksort :: "'a::{heap,linorder} array \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> unit Heap" where "quicksort arr left right = (if (right > left) then do { pivotNewIndex \<leftarrow> partition arr left right; pivotNewIndex \<leftarrow> assert (\<lambda>x. left \<le> x \<and> x \<le> right) pivotNewIndex; quicksort arr left (pivotNewIndex - 1); quicksort arr (pivotNewIndex + 1) right } else return ())" by pat_completeness auto (* For termination, we must show that the pivotNewIndex is between left and right *) termination by (relation "measure (\<lambda>(a, l, r). (r - l))") auto declare quicksort.simps[simp del] lemma quicksort_permutes: assumes "effect (quicksort a l r) h h' rs" shows "mset (Array.get h' a) = mset (Array.get h a)" using assms proof (induct a l r arbitrary: h h' rs rule: quicksort.induct) case (1 a l r h h' rs) with partition_permutes show ?case unfolding quicksort.simps [of a l r] by (elim effect_ifE effect_bindE effect_assertE effect_returnE) auto qed lemma length_remains: assumes "effect (quicksort a l r) h h' rs" shows "Array.length h a = Array.length h' a" using assms proof (induct a l r arbitrary: h h' rs rule: quicksort.induct) case (1 a l r h h' rs) with partition_length_remains show ?case unfolding quicksort.simps [of a l r] by (elim effect_ifE effect_bindE effect_assertE effect_returnE) fastforce+ qed lemma quicksort_outer_remains: assumes "effect (quicksort a l r) h h' rs" shows "\<forall>i. i < l \<or> r < i \<longrightarrow> Array.get h (a::'a::{heap,linorder} array) ! i = Array.get h' a ! 
i" using assms proof (induct a l r arbitrary: h h' rs rule: quicksort.induct) case (1 a l r h h' rs) note cr = \<open>effect (quicksort a l r) h h' rs\<close> thus ?case proof (cases "r > l") case False with cr have "h' = h" unfolding quicksort.simps [of a l r] by (elim effect_ifE effect_returnE) auto thus ?thesis by simp next case True { fix h1 h2 p ret1 ret2 i assume part: "effect (partition a l r) h h1 p" assume qs1: "effect (quicksort a l (p - 1)) h1 h2 ret1" assume qs2: "effect (quicksort a (p + 1) r) h2 h' ret2" assume pivot: "l \<le> p \<and> p \<le> r" assume i_outer: "i < l \<or> r < i" from partition_outer_remains [OF part True] i_outer have 2: "Array.get h a !i = Array.get h1 a ! i" by fastforce moreover from 1(1) [OF True pivot qs1] pivot i_outer 2 have 3: "Array.get h1 a ! i = Array.get h2 a ! i" by auto moreover from qs2 1(2) [of p h2 h' ret2] True pivot i_outer 3 have "Array.get h2 a ! i = Array.get h' a ! i" by auto ultimately have "Array.get h a ! i= Array.get h' a ! i" by simp } with cr show ?thesis unfolding quicksort.simps [of a l r] by (elim effect_ifE effect_bindE effect_assertE effect_returnE) auto qed qed lemma quicksort_is_skip: assumes "effect (quicksort a l r) h h' rs" shows "r \<le> l \<longrightarrow> h = h'" using assms unfolding quicksort.simps [of a l r] by (elim effect_ifE effect_returnE) auto lemma quicksort_sorts: assumes "effect (quicksort a l r) h h' rs" assumes l_r_length: "l < Array.length h a" "r < Array.length h a" shows "sorted (subarray l (r + 1) a h')" using assms proof (induct a l r arbitrary: h h' rs rule: quicksort.induct) case (1 a l r h h' rs) note cr = \<open>effect (quicksort a l r) h h' rs\<close> thus ?case proof (cases "r > l") case False hence "l \<ge> r + 1 \<or> l = r" by arith with length_remains[OF cr] 1(5) show ?thesis by (auto simp add: subarray_Nil subarray_single) next case True { fix h1 h2 p assume part: "effect (partition a l r) h h1 p" assume qs1: "effect (quicksort a l (p - 1)) h1 h2 ()" assume qs2: "effect (quicksort a (p + 1) r) h2 h' ()" from partition_returns_index_in_bounds [OF part True] have pivot: "l\<le> p \<and> p \<le> r" . note length_remains = length_remains[OF qs2] length_remains[OF qs1] partition_length_remains[OF part] from quicksort_outer_remains [OF qs2] quicksort_outer_remains [OF qs1] pivot quicksort_is_skip[OF qs1] have pivot_unchanged: "Array.get h1 a ! p = Array.get h' a ! p" by (cases p, auto) (*-- First of all, by induction hypothesis both sublists are sorted. *) from 1(1)[OF True pivot qs1] length_remains pivot 1(5) have IH1: "sorted (subarray l p a h2)" by (cases p, auto simp add: subarray_Nil) from quicksort_outer_remains [OF qs2] length_remains have left_subarray_remains: "subarray l p a h2 = subarray l p a h'" by (simp add: subarray_eq_samelength_iff) with IH1 have IH1': "sorted (subarray l p a h')" by simp from 1(2)[OF True pivot qs2] pivot 1(5) length_remains have IH2: "sorted (subarray (p + 1) (r + 1) a h')" by (cases "Suc p \<le> r", auto simp add: subarray_Nil) (* -- Secondly, both sublists remain partitioned. *) from partition_partitions[OF part True] have part_conds1: "\<forall>j. j \<in> set (subarray l p a h1) \<longrightarrow> j \<le> Array.get h1 a ! p " and part_conds2: "\<forall>j. j \<in> set (subarray (p + 1) (r + 1) a h1) \<longrightarrow> Array.get h1 a ! 
p \<le> j" by (auto simp add: all_in_set_subarray_conv) from quicksort_outer_remains [OF qs1] quicksort_permutes [OF qs1] True length_remains 1(5) pivot mset_sublist [of l p "Array.get h1 a" "Array.get h2 a"] have multiset_partconds1: "mset (subarray l p a h2) = mset (subarray l p a h1)" unfolding Array.length_def subarray_def by (cases p, auto) with left_subarray_remains part_conds1 pivot_unchanged have part_conds2': "\<forall>j. j \<in> set (subarray l p a h') \<longrightarrow> j \<le> Array.get h' a ! p" by (simp, subst set_mset_mset[symmetric], simp) (* -- These steps are the analogous for the right sublist \<dots> *) from quicksort_outer_remains [OF qs1] length_remains have right_subarray_remains: "subarray (p + 1) (r + 1) a h1 = subarray (p + 1) (r + 1) a h2" by (auto simp add: subarray_eq_samelength_iff) from quicksort_outer_remains [OF qs2] quicksort_permutes [OF qs2] True length_remains 1(5) pivot mset_sublist [of "p + 1" "r + 1" "Array.get h2 a" "Array.get h' a"] have multiset_partconds2: "mset (subarray (p + 1) (r + 1) a h') = mset (subarray (p + 1) (r + 1) a h2)" unfolding Array.length_def subarray_def by auto with right_subarray_remains part_conds2 pivot_unchanged have part_conds1': "\<forall>j. j \<in> set (subarray (p + 1) (r + 1) a h') \<longrightarrow> Array.get h' a ! p \<le> j" by (simp, subst set_mset_mset[symmetric], simp) (* -- Thirdly and finally, we show that the array is sorted following from the facts above. *) from True pivot 1(5) length_remains have "subarray l (r + 1) a h' = subarray l p a h' @ [Array.get h' a ! p] @ subarray (p + 1) (r + 1) a h'" by (simp add: subarray_nth_array_Cons, cases "l < p") (auto simp add: subarray_append subarray_Nil) with IH1' IH2 part_conds1' part_conds2' pivot have ?thesis unfolding subarray_def apply (auto simp add: sorted_append sorted_Cons all_in_set_sublist'_conv) by (auto simp add: set_sublist' dest: order.trans [of _ "Array.get h' a ! p"]) } with True cr show ?thesis unfolding quicksort.simps [of a l r] by (elim effect_ifE effect_returnE effect_bindE effect_assertE) auto qed qed lemma quicksort_is_sort: assumes effect: "effect (quicksort a 0 (Array.length h a - 1)) h h' rs" shows "Array.get h' a = sort (Array.get h a)" proof (cases "Array.get h a = []") case True with quicksort_is_skip[OF effect] show ?thesis unfolding Array.length_def by simp next case False from quicksort_sorts [OF effect] False have "sorted (sublist' 0 (List.length (Array.get h a)) (Array.get h' a))" unfolding Array.length_def subarray_def by auto with length_remains[OF effect] have "sorted (Array.get h' a)" unfolding Array.length_def by simp with quicksort_permutes [OF effect] properties_for_sort show ?thesis by fastforce qed subsection \<open>No Errors in quicksort\<close> text \<open>We have proved that quicksort sorts (if no exceptions occur). We will now show that exceptions do not occur.\<close> lemma success_part1I: assumes "l < Array.length h a" "r < Array.length h a" shows "success (part1 a l r p) h" using assms proof (induct a l r p arbitrary: h rule: part1.induct) case (1 a l r p) thus ?case unfolding part1.simps [of a l r] apply (auto intro!: success_intros simp add: not_le) apply (auto intro!: effect_intros) done qed lemma success_bindI' [success_intros]: (*FIXME move*) assumes "success f h" assumes "\<And>h' r. effect f h h' r \<Longrightarrow> success (g r) h'" shows "success (f \<bind> g) h" using assms(1) proof (rule success_effectE) fix h' r assume *: "effect f h h' r" with assms(2) have "success (g r) h'" . 
with * show "success (f \<bind> g) h" by (rule success_bind_effectI) qed lemma success_partitionI: assumes "l < r" "l < Array.length h a" "r < Array.length h a" shows "success (partition a l r) h" using assms unfolding partition.simps swap_def apply (auto intro!: success_bindI' success_ifI success_returnI success_nthI success_updI success_part1I elim!: effect_bindE effect_updE effect_nthE effect_returnE simp add:) apply (frule part_length_remains) apply (frule part_returns_index_in_bounds) apply auto apply (frule part_length_remains) apply (frule part_returns_index_in_bounds) apply auto apply (frule part_length_remains) apply auto done lemma success_quicksortI: assumes "l < Array.length h a" "r < Array.length h a" shows "success (quicksort a l r) h" using assms proof (induct a l r arbitrary: h rule: quicksort.induct) case (1 a l ri h) thus ?case unfolding quicksort.simps [of a l ri] apply (auto intro!: success_ifI success_bindI' success_returnI success_nthI success_updI success_assertI success_partitionI) apply (frule partition_returns_index_in_bounds) apply auto apply (frule partition_returns_index_in_bounds) apply auto apply (auto elim!: effect_assertE dest!: partition_length_remains length_remains) apply (subgoal_tac "Suc r \<le> ri \<or> r = ri") apply (erule disjE) apply auto unfolding quicksort.simps [of a "Suc ri" ri] apply (auto intro!: success_ifI success_returnI) done qed subsection \<open>Example\<close> definition "qsort a = do { k \<leftarrow> Array.len a; quicksort a 0 (k - 1); return a }" code_reserved SML upto definition "example = do { a \<leftarrow> Array.of_list ([42, 2, 3, 5, 0, 1705, 8, 3, 15] :: nat list); qsort a }" ML_val \<open>@{code example} ()\<close> export_code qsort checking SML SML_imp OCaml? OCaml_imp? Haskell? Scala Scala_imp end
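# A minimal Python sketch of the imperative algorithm the Isabelle theory above
# verifies. This is an illustrative transcription, not extracted code: the exact
# part1/partition definitions are not shown in this excerpt, so the scheme below
# (pivot = rightmost element, part1 scanning inward with swaps) is an assumption
# inferred from the lemma names (part1.simps, partition.simps, swap_def) and the
# structure of the quicksort function.

def part1(a, left, right, p):
    """Scan inward from both ends, moving elements <= p left of elements > p."""
    while left < right:
        if a[left] <= p:
            left += 1
        else:
            a[left], a[right] = a[right], a[left]
            right -= 1
    return left

def partition(a, left, right):
    """Partition a[left..right] around the pivot a[right]; return the pivot index."""
    p = a[right]
    middle = part1(a, left, right - 1, p)
    m = middle + 1 if a[middle] <= p else middle
    a[m], a[right] = a[right], a[m]
    return m

def quicksort(a, left, right):
    """In-place quicksort of a[left..right], mirroring the Heap-monad program."""
    if right > left:
        p = partition(a, left, right)
        assert left <= p <= right  # the 'assert' step needed for termination
        quicksort(a, left, p - 1)
        quicksort(a, p + 1, right)

# The list from the export example above; the assertion mirrors quicksort_is_sort.
xs = [42, 2, 3, 5, 0, 1705, 8, 3, 15]
quicksort(xs, 0, len(xs) - 1)
assert xs == sorted([42, 2, 3, 5, 0, 1705, 8, 3, 15])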
(* SPDX-License-Identifier: GPL-2.0 *)

Require Import List.
Import ListNotations.
(* From RecordUpdate Require Import RecordSet. Import RecordSetNotations. *)
(* Require Import PeanoNat. *)
Require Import Psatz.

Require Import weakenedWDRF.Base weakenedWDRF.Promising weakenedWDRF.DRF weakenedWDRF.SC.

Parameter is_page_table_addr : Address -> Prop.

Definition PageTableLog := list Value.

Inductive rel_page_table_log : View -> (View -> option Promise) -> Address -> PageTableLog -> bool -> Prop :=
| PTL_EMPTY : forall lp addr,
    rel_page_table_log 0 lp addr [default] true
| PTL_WRITE_NEW : forall lp addr n val tid ptl
    (Hptl : rel_page_table_log n lp addr ptl true)
    (Hlp : lp (S n) = Some (WRITE tid val addr)),
    rel_page_table_log (S n) lp addr [val] false
| PTL_WRITE_OLD : forall lp addr n val tid ptl
    (Hptl : rel_page_table_log n lp addr ptl false)
    (Hlp : lp (S n) = Some (WRITE tid val addr)),
    rel_page_table_log (S n) lp addr (val :: ptl) false
| PTL_PULL : forall lp addr n tid ptl b
    (Hptl : rel_page_table_log n lp addr ptl b)
    (Hlp : lp (S n) = Some (PULL tid addr)),
    rel_page_table_log (S n) lp addr ptl true
| PTL_PUSH : forall lp addr n tid ptl b
    (Hptl : rel_page_table_log n lp addr ptl b)
    (Hlp : lp (S n) = Some (PUSH tid addr)),
    rel_page_table_log (S n) lp addr ptl b
| PTL_NONE : forall lp addr n ptl b
    (Hptl : rel_page_table_log n lp addr ptl b)
    (Hlp : lp (S n) = None),
    rel_page_table_log (S n) lp addr ptl b.

Definition transactional_page_table (lp : View -> option Promise) :=
  forall n addr ptl b
    (Haddr : is_page_table_addr addr)
    (Hptl : rel_page_table_log n lp addr ptl b),
  exists val, ptl = [val].

Theorem page_table_same_result :
  forall lp (Htrans : transactional_page_table lp) n addr ms ptl b
    (Haddr : is_page_table_addr addr)
    (Hms : rel_replay_mem n lp ms)
    (Hptl : rel_page_table_log n lp addr ptl b),
  ptl = [ms addr].
Proof.
  induction n; intros.
  - inversion Hms. inversion Hptl. reflexivity.
  - inversion Hms; inversion Hptl;
      try (try rewrite Hlp0 in Hwrite; try rewrite Hlp0 in Hnotwrite; discriminate).
    + rewrite Hwrite in Hlp0. inversion Hlp0.
      rewrite update_same. reflexivity.
    + rewrite Hwrite in Hlp0. inversion Hlp0.
      rewrite update_same.
      unfold transactional_page_table in *.
      edestruct Htrans. apply Haddr. apply Hptl.
      rewrite <- H6 in H. inversion H. reflexivity.
    + eapply IHn. easy. easy. apply Hptl0.
    + eapply IHn. easy. easy. apply Hptl0.
    + eapply IHn. easy. easy. apply Hptl0.
Qed.
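# A small executable Python sketch of the rel_page_table_log transitions in the
# Coq development above; an illustration, not extracted code. One assumption
# beyond the listed constructors: promises touching other addresses are treated
# as no-ops here, whereas the Coq relation simply has no rule for them.

DEFAULT = 0

def step(ptl, fresh, promise, addr):
    """One transition of the log (ptl, fresh) at addr under an optional promise."""
    if promise is None:
        return ptl, fresh                      # PTL_NONE
    kind, val, a = promise
    if a != addr:
        return ptl, fresh                      # assumption: other addresses ignored
    if kind == "WRITE":
        # PTL_WRITE_NEW restarts the log after a PULL; PTL_WRITE_OLD prepends.
        return ([val], False) if fresh else ([val] + ptl, False)
    if kind == "PULL":
        return ptl, True                       # PTL_PULL
    if kind == "PUSH":
        return ptl, fresh                      # PTL_PUSH
    raise ValueError(kind)

def replay(promises, addr):
    """Fold step over a promise sequence, starting from ([DEFAULT], True) as in PTL_EMPTY."""
    ptl, fresh = [DEFAULT], True
    for p in promises:
        ptl, fresh = step(ptl, fresh, p, addr)
    return ptl, fresh

# A 'transactional' trace: every WRITE happens while the log is fresh, so the
# log stays a singleton throughout, which is what page_table_same_result exploits.
trace = [("PULL", None, 7), ("WRITE", 1, 7), ("PULL", None, 7), ("WRITE", 2, 7)]
assert replay(trace, 7) == ([2], False)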
# Copyright 2019 IBM Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import unittest
from test import EnableSchemaValidation
from typing import Any

import jsonschema

import lale.lib.lale
import lale.lib.sklearn
import lale.type_checking
from lale.lib.lale import ConcatFeatures
from lale.lib.sklearn import (
    NMF,
    PCA,
    RFE,
    FunctionTransformer,
    LogisticRegression,
    MissingIndicator,
    Nystroem,
    TfidfVectorizer,
)


class TestFeaturePreprocessing(unittest.TestCase):
    def setUp(self):
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split

        data = load_iris()
        X, y = data.data, data.target
        self.X_train, self.X_test, self.y_train, self.y_test = train_test_split(X, y)


def create_function_test_feature_preprocessor(fproc_name):
    def test_feature_preprocessor(self):
        X_train, y_train = self.X_train, self.y_train
        import importlib

        module_name = ".".join(fproc_name.split(".")[0:-1])
        class_name = fproc_name.split(".")[-1]
        module = importlib.import_module(module_name)
        class_ = getattr(module, class_name)
        fproc = class_()

        from lale.lib.sklearn.one_hot_encoder import OneHotEncoder

        if isinstance(fproc, OneHotEncoder):  # type: ignore
            # fproc = OneHotEncoder(handle_unknown = 'ignore')
            # remove the hack when this is fixed
            fproc = PCA()

        # test_schemas_are_schemas
        lale.type_checking.validate_is_schema(fproc.input_schema_fit())
        lale.type_checking.validate_is_schema(fproc.input_schema_transform())
        lale.type_checking.validate_is_schema(fproc.output_schema_transform())
        lale.type_checking.validate_is_schema(fproc.hyperparam_schema())

        # test_init_fit_transform
        trained = fproc.fit(self.X_train, self.y_train)
        _ = trained.transform(self.X_test)

        # test_predict_on_trainable
        trained = fproc.fit(X_train, y_train)
        fproc.transform(X_train)

        # test_to_json
        fproc.to_json()

        # test_in_a_pipeline
        # This test assumes that the output of feature processing is
        # compatible with LogisticRegression
        from lale.lib.sklearn import LogisticRegression

        pipeline = fproc >> LogisticRegression()
        trained = pipeline.fit(self.X_train, self.y_train)
        _ = trained.predict(self.X_test)

        # Tune the pipeline with LR using Hyperopt
        from lale.lib.lale import Hyperopt

        hyperopt = Hyperopt(estimator=pipeline, max_evals=1, verbose=True, cv=3)
        trained = hyperopt.fit(self.X_train, self.y_train)
        _ = trained.predict(self.X_test)

    test_feature_preprocessor.__name__ = "test_{0}".format(fproc_name.split(".")[-1])
    return test_feature_preprocessor


feature_preprocessors = [
    "lale.lib.sklearn.PolynomialFeatures",
    "lale.lib.sklearn.PCA",
    "lale.lib.sklearn.Nystroem",
    "lale.lib.sklearn.Normalizer",
    "lale.lib.sklearn.MinMaxScaler",
    "lale.lib.sklearn.OneHotEncoder",
    "lale.lib.sklearn.SimpleImputer",
    "lale.lib.sklearn.StandardScaler",
    "lale.lib.sklearn.FeatureAgglomeration",
    "lale.lib.sklearn.RobustScaler",
    "lale.lib.sklearn.QuantileTransformer",
]
for fproc in feature_preprocessors:
    setattr(
        TestFeaturePreprocessing,
        "test_{0}".format(fproc.split(".")[-1]),
        create_function_test_feature_preprocessor(fproc),
    )


class TestNMF(unittest.TestCase):
    def test_init_fit_predict(self):
        import lale.datasets

        nmf = NMF()
        lr = LogisticRegression()
        trainable = nmf >> lr
        (train_X, train_y), (test_X, test_y) = lale.datasets.digits_df()
        trained = trainable.fit(train_X, train_y)
        _ = trained.predict(test_X)

    def test_not_random_state(self):
        with EnableSchemaValidation():
            with self.assertRaises(jsonschema.ValidationError):
                _ = NMF(random_state='"not RandomState"')


class TestFunctionTransformer(unittest.TestCase):
    def test_init_fit_predict(self):
        import numpy as np

        import lale.datasets

        ft = FunctionTransformer(func=np.log1p)
        lr = LogisticRegression()
        trainable = ft >> lr
        (train_X, train_y), (test_X, test_y) = lale.datasets.digits_df()
        trained = trainable.fit(train_X, train_y)
        _ = trained.predict(test_X)

    def test_not_callable(self):
        with EnableSchemaValidation():
            with self.assertRaises(jsonschema.ValidationError):
                _ = FunctionTransformer(func='"not callable"')


class TestMissingIndicator(unittest.TestCase):
    def test_init_fit_transform(self):
        import numpy as np

        X1 = np.array([[np.nan, 1, 3], [4, 0, np.nan], [8, 1, 0]])
        X2 = np.array([[5, 1, np.nan], [np.nan, 2, 3], [2, 4, 0]])
        trainable = MissingIndicator()
        trained = trainable.fit(X1)
        transformed = trained.transform(X2)
        expected = np.array([[False, True], [True, False], [False, False]])
        self.assertTrue((transformed == expected).all())


class TestRFE(unittest.TestCase):
    def test_init_fit_predict(self):
        import sklearn.datasets
        import sklearn.svm

        svm = lale.lib.sklearn.SVR(kernel="linear")
        rfe = RFE(estimator=svm, n_features_to_select=2)
        lr = LogisticRegression()
        trainable = rfe >> lr
        data = sklearn.datasets.load_iris()
        X, y = data.data, data.target
        trained = trainable.fit(X, y)
        _ = trained.predict(X)

    def test_init_fit_predict_sklearn(self):
        import sklearn.datasets
        import sklearn.svm

        svm = sklearn.svm.SVR(kernel="linear")
        rfe = RFE(estimator=svm, n_features_to_select=2)
        lr = LogisticRegression()
        trainable = rfe >> lr
        data = sklearn.datasets.load_iris()
        X, y = data.data, data.target
        trained = trainable.fit(X, y)
        _ = trained.predict(X)

    def test_not_operator(self):
        with EnableSchemaValidation():
            with self.assertRaises(jsonschema.ValidationError):
                _ = RFE(estimator='"not an operator"', n_features_to_select=2)

    def test_attrib_sklearn(self):
        import sklearn.datasets
        import sklearn.svm

        from lale.lib.sklearn import RFE, LogisticRegression

        svm = sklearn.svm.SVR(kernel="linear")
        rfe = RFE(estimator=svm, n_features_to_select=2)
        lr = LogisticRegression()
        trainable = rfe >> lr
        data = sklearn.datasets.load_iris()
        X, y = data.data, data.target
        trained = trainable.fit(X, y)
        _ = trained.predict(X)
        from lale.lib.lale import Hyperopt

        opt = Hyperopt(estimator=trainable, max_evals=2, verbose=True)
        opt.fit(X, y)

    def test_attrib(self):
        import sklearn.datasets

        from lale.lib.sklearn import RFE, LogisticRegression

        svm = lale.lib.sklearn.SVR(kernel="linear")
        rfe = RFE(estimator=svm, n_features_to_select=2)
        lr = LogisticRegression()
        trainable = rfe >> lr
        data = sklearn.datasets.load_iris()
        X, y = data.data, data.target
        trained = trainable.fit(X, y)
        _ = trained.predict(X)
        from lale.lib.lale import Hyperopt

        opt = Hyperopt(estimator=trainable, max_evals=2, verbose=True)
        opt.fit(X, y)


class TestOrdinalEncoder(unittest.TestCase):
    def setUp(self):
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split

        data = load_iris()
        X, y = data.data, data.target
        self.X_train, self.X_test, self.y_train, self.y_test = train_test_split(X, y)

    def test_with_hyperopt(self):
        from lale.lib.sklearn import OrdinalEncoder

        fproc = OrdinalEncoder()
        from lale.lib.sklearn import LogisticRegression

        pipeline = fproc >> LogisticRegression()

        # Tune the pipeline with LR using Hyperopt
        from lale.lib.lale import Hyperopt

        hyperopt = Hyperopt(estimator=pipeline, max_evals=1)
        trained = hyperopt.fit(self.X_train, self.y_train)
        _ = trained.predict(self.X_test)

    def test_inverse_transform(self):
        from lale.lib.sklearn import OneHotEncoder, OrdinalEncoder

        fproc_ohe = OneHotEncoder(handle_unknown="ignore")
        # test_init_fit_transform
        trained_ohe = fproc_ohe.fit(self.X_train, self.y_train)
        transformed_X = trained_ohe.transform(self.X_test)
        orig_X_ohe = trained_ohe._impl._wrapped_model.inverse_transform(transformed_X)

        fproc_oe = OrdinalEncoder(handle_unknown="ignore")
        # test_init_fit_transform
        trained_oe = fproc_oe.fit(self.X_train, self.y_train)
        transformed_X = trained_oe.transform(self.X_test)
        orig_X_oe = trained_oe._impl.inverse_transform(transformed_X)
        self.assertEqual(orig_X_ohe.all(), orig_X_oe.all())

    def test_handle_unknown_error(self):
        from lale.lib.sklearn import OrdinalEncoder

        fproc_oe = OrdinalEncoder(handle_unknown="error")
        # test_init_fit_transform
        trained_oe = fproc_oe.fit(self.X_train, self.y_train)
        with self.assertRaises(
            ValueError
        ):  # This is relying on the train_test_split, so may fail randomly
            _ = trained_oe.transform(self.X_test)

    def test_encode_unknown_with(self):
        from lale.lib.sklearn import OrdinalEncoder

        fproc_oe = OrdinalEncoder(handle_unknown="ignore", encode_unknown_with=1000)
        # test_init_fit_transform
        trained_oe = fproc_oe.fit(self.X_train, self.y_train)
        transformed_X = trained_oe.transform(self.X_test)
        # This is relying on the train_test_split, so may fail randomly
        self.assertTrue(1000 in transformed_X)
        # Testing that inverse_transform works even for encode_unknown_with=1000
        _ = trained_oe._impl.inverse_transform(transformed_X)


class TestConcatFeatures(unittest.TestCase):
    def test_hyperparam_defaults(self):
        _ = ConcatFeatures()

    def test_init_fit_predict(self):
        trainable_cf = ConcatFeatures()
        A = [[11, 12, 13], [21, 22, 23], [31, 32, 33]]
        B = [[14, 15], [24, 25], [34, 35]]
        trained_cf = trainable_cf.fit(X=[A, B])
        transformed: Any = trained_cf.transform([A, B])
        expected = [[11, 12, 13, 14, 15], [21, 22, 23, 24, 25], [31, 32, 33, 34, 35]]
        for i_sample in range(len(transformed)):
            for i_feature in range(len(transformed[i_sample])):
                self.assertEqual(
                    transformed[i_sample][i_feature], expected[i_sample][i_feature]
                )

    def test_comparison_with_scikit(self):
        import warnings

        warnings.filterwarnings("ignore")
        import sklearn.datasets
        import sklearn.utils

        from lale.helpers import cross_val_score
        from lale.lib.sklearn import PCA

        pca = PCA(n_components=3, random_state=42, svd_solver="arpack")
        nys = Nystroem(n_components=10, random_state=42)
        concat = ConcatFeatures()
        lr = LogisticRegression(random_state=42, C=0.1)
        trainable = (pca & nys) >> concat >> lr
        digits = sklearn.datasets.load_digits()
        X, y = sklearn.utils.shuffle(digits.data, digits.target, random_state=42)
        cv_results = cross_val_score(trainable, X, y)
        cv_results = ["{0:.1%}".format(score) for score in cv_results]

        from sklearn.decomposition import PCA as SklearnPCA
        from sklearn.kernel_approximation import Nystroem as SklearnNystroem
        from sklearn.linear_model import LogisticRegression as SklearnLR
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import FeatureUnion, make_pipeline

        union = FeatureUnion(
            [
                (
                    "pca",
                    SklearnPCA(n_components=3, random_state=42, svd_solver="arpack"),
                ),
                ("nys", SklearnNystroem(n_components=10, random_state=42)),
            ]
        )
        lr = SklearnLR(random_state=42, C=0.1)
        pipeline = make_pipeline(union, lr)
        scikit_cv_results = cross_val_score(pipeline, X, y, cv=5)
        scikit_cv_results = ["{0:.1%}".format(score) for score in scikit_cv_results]
        self.assertEqual(cv_results, scikit_cv_results)
        warnings.resetwarnings()

    def test_with_pandas(self):
        import warnings

        from lale.datasets import load_iris_df

        warnings.filterwarnings("ignore")
        pca = PCA(n_components=3)
        nys = Nystroem(n_components=10)
        concat = ConcatFeatures()
        lr = LogisticRegression(random_state=42, C=0.1)
        trainable = (pca & nys) >> concat >> lr
        (X_train, y_train), (X_test, y_test) = load_iris_df()
        trained = trainable.fit(X_train, y_train)
        _ = trained.predict(X_test)

    def test_concat_with_hyperopt(self):
        from lale.lib.lale import Hyperopt

        pca = PCA(n_components=3)
        nys = Nystroem(n_components=10)
        concat = ConcatFeatures()
        lr = LogisticRegression(random_state=42, C=0.1)
        trainable = (pca & nys) >> concat >> lr
        clf = Hyperopt(estimator=trainable, max_evals=2)
        from sklearn.datasets import load_iris

        iris_data = load_iris()
        clf.fit(iris_data.data, iris_data.target)
        clf.predict(iris_data.data)

    def test_concat_with_hyperopt2(self):
        from lale.lib.lale import Hyperopt
        from lale.operators import make_pipeline, make_union

        pca = PCA(n_components=3)
        nys = Nystroem(n_components=10)
        lr = LogisticRegression(random_state=42, C=0.1)
        trainable = make_pipeline(make_union(pca, nys), lr)
        clf = Hyperopt(estimator=trainable, max_evals=2)
        from sklearn.datasets import load_iris

        iris_data = load_iris()
        clf.fit(iris_data.data, iris_data.target)
        clf.predict(iris_data.data)


class TestTfidfVectorizer(unittest.TestCase):
    def test_more_hyperparam_values(self):
        with EnableSchemaValidation():
            with self.assertRaises(jsonschema.ValidationError):
                _ = TfidfVectorizer(
                    max_df=2.5, min_df=2, max_features=1000, stop_words="english"
                )
            with self.assertRaises(jsonschema.ValidationError):
                _ = TfidfVectorizer(
                    max_df=2,
                    min_df=2,
                    max_features=1000,
                    stop_words=["I", "we", "not", "this", "that"],
                    analyzer="char",
                )

    def test_non_null_tokenizer(self):
        # tokenize the doc and lemmatize its tokens
        def my_tokenizer():
            return "abc"

        with EnableSchemaValidation():
            with self.assertRaises(jsonschema.ValidationError):
                _ = TfidfVectorizer(
                    max_df=2,
                    min_df=2,
                    max_features=1000,
                    stop_words="english",
                    tokenizer=my_tokenizer,
                    analyzer="char",
                )
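# The tests above all rely on the same Lale combinator pattern. Here it is in
# isolation as a minimal sketch (only API calls already exercised by the test
# file; the dataset choice is incidental): `>>` pipes operators, `&` runs them
# in parallel, and ConcatFeatures merges the parallel outputs.

from sklearn.datasets import load_iris

from lale.lib.lale import ConcatFeatures, Hyperopt
from lale.lib.sklearn import PCA, LogisticRegression, Nystroem

X, y = load_iris(return_X_y=True)

# Two feature maps side by side, concatenated, then a classifier.
planned = (
    (PCA(n_components=3) & Nystroem(n_components=10))
    >> ConcatFeatures()
    >> LogisticRegression()
)

trained = planned.fit(X, y)
_ = trained.predict(X)

# The same planned pipeline can also be tuned end to end.
opt = Hyperopt(estimator=planned, max_evals=2)
_ = opt.fit(X, y)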
import categories.category
import categories.isomorphism
import categories.tactics
import categories.functor
import categories.ndefs

open categories
open categories.isomorphism
open categories.functor
open tactic

-- declaration of universes and variables
universes u v u₁ v₁
variables (C : Type u₁) [𝒞 : category.{u₁ v₁} C]
include 𝒞

-- 1a Show that identities in a category are unique
theorem uniq_id (X : C) (id' : X ⟶ X) :
  (∀ {A : C} (g : X ⟶ A), id' ≫ g = g) →
  (∀ {A : C} (g : A ⟶ X), g ≫ id' = g) →
  (id' = 𝟙X) :=
begin
  intros hl hr,
  transitivity,
  symmetry,
  exact category.right_identity_lemma C id',
  exact hl (𝟙X)
end

-- 1b Show that a morphism with both a left inverse and a right inverse is an isomorphism
theorem landr_id (X Y Z : C) (f : X ⟶ Y) :
  (∃ gl : Y ⟶ X, gl ≫ f = 𝟙Y) →
  (∃ gr : Y ⟶ X, f ≫ gr = 𝟙X) →
  (is_Isomorphism' f) :=
begin
  intros,
  cases (classical.indefinite_description _ a) with gl hl,
  cases (classical.indefinite_description _ a_1) with gr hr,
  apply nonempty.intro (⟨gr, hr,
    begin
      simp,
      symmetry,
      exact calc
        𝟙Y = gl ≫ f : eq.symm hl
        ... = gl ≫ 𝟙X ≫ f : by rw category.left_identity_lemma C f
        ... = (gl ≫ 𝟙X) ≫ f : by rw category.associativity_lemma
        ... = (gl ≫ (f ≫ gr)) ≫ f : by rw hr
        ... = ((gl ≫ f) ≫ gr) ≫ f : by rw category.associativity_lemma C gl f gr
        ... = (𝟙Y ≫ gr) ≫ f : by rw hl
        ... = gr ≫ f : by rw category.left_identity_lemma C gr
    end⟩ : is_Isomorphism f)
end

-- 1c Consider f : X ⟶ Y and g : Y ⟶ Z. Show that if two out of f, g and gf are
-- isomorphisms, then so is the third.
section Two_Out_Of_Three

variables (X Y Z : C)
variables (f : X ⟶ Y) (g : Y ⟶ Z)

theorem tootfirsec : is_Isomorphism' f → is_Isomorphism' g → is_Isomorphism' (f ≫ g) :=
begin
  intros hf hg,
  apply hf.elim,
  apply hg.elim,
  intros Ig If,
  exact nonempty.intro ⟨Ig.1 ≫ If.1,
    begin
      simp,
      exact calc
        f ≫ g ≫ Ig.1 ≫ If.1 = f ≫ (g ≫ Ig.1) ≫ If.1 : by rw category.associativity_lemma
        ... = f ≫ 𝟙Y ≫ If.1 : by rw is_Isomorphism.witness_1_lemma
        ... = f ≫ If.1 : by rw category.left_identity_lemma
        ... = 𝟙X : by rw is_Isomorphism.witness_1_lemma
    end,
    begin
      simp,
      exact calc
        Ig.1 ≫ If.1 ≫ f ≫ g = Ig.1 ≫ (If.1 ≫ f) ≫ g : by rw category.associativity_lemma
        ... = Ig.1 ≫ 𝟙Y ≫ g : by rw is_Isomorphism.witness_2_lemma
        ... = Ig.1 ≫ g : by rw category.left_identity_lemma
        ... = 𝟙Z : by rw is_Isomorphism.witness_2_lemma
    end⟩
end

theorem tootsecthi : is_Isomorphism' g → is_Isomorphism' (f ≫ g) → is_Isomorphism' f :=
begin
  intros hg hfg,
  apply hg.elim,
  apply hfg.elim,
  intros Ifg Ig,
  exact nonempty.intro ⟨g ≫ Ifg.1,
    begin
      simp,
      exact calc
        f ≫ g ≫ Ifg.1 = (f ≫ g) ≫ Ifg.1 : by rw category.associativity_lemma
        ... = 𝟙X : by rw is_Isomorphism.witness_1_lemma
    end,
    begin
      simp,
      exact calc
        g ≫ Ifg.1 ≫ f = (g ≫ Ifg.1 ≫ f) ≫ 𝟙Y : by rw category.right_identity_lemma
        ... = g ≫ (Ifg.1 ≫ f) ≫ 𝟙Y : by rw category.associativity_lemma
        ... = g ≫ Ifg.1 ≫ f ≫ 𝟙Y : by rw category.associativity_lemma
        ... = g ≫ Ifg.1 ≫ f ≫ g ≫ Ig.1 : by rw is_Isomorphism.witness_1_lemma
        ... = g ≫ (Ifg.1 ≫ ((f ≫ g) ≫ Ig.1)) : by rw category.associativity_lemma
        ... = g ≫ (Ifg.1 ≫ (f ≫ g)) ≫ Ig.1 : by rw (category.associativity_lemma C Ifg.1 (f ≫ g) Ig.1)
        ... = g ≫ 𝟙Z ≫ Ig.1 : by rw is_Isomorphism.witness_2_lemma
        ... = g ≫ Ig.1 : by rw category.left_identity_lemma
        ... = 𝟙Y : by rw is_Isomorphism.witness_1_lemma
    end⟩
end

theorem tootfirthi : is_Isomorphism' f → is_Isomorphism' (f ≫ g) → is_Isomorphism' g :=
begin
  intros hf hfg,
  apply hf.elim,
  apply hfg.elim,
  intros Ifg If,
  exact nonempty.intro ⟨Ifg.1 ≫ f,
    begin
      simp,
      exact calc
        g ≫ Ifg.1 ≫ f = 𝟙Y ≫ g ≫ Ifg.1 ≫ f : by rw category.left_identity_lemma
        ... = (If.1 ≫ f) ≫ g ≫ Ifg.1 ≫ f : by rw is_Isomorphism.witness_2_lemma
        ... = ((If.1 ≫ f) ≫ g) ≫ Ifg.1 ≫ f : by rw (category.associativity_lemma C (If.1 ≫ f) g (Ifg.1 ≫ f))
        ... = (If.1 ≫ (f ≫ g)) ≫ Ifg.1 ≫ f : by rw (category.associativity_lemma C If.1 f g)
        ... = If.1 ≫ (f ≫ g) ≫ Ifg.1 ≫ f : by rw category.associativity_lemma
        ... = If.1 ≫ ((f ≫ g) ≫ Ifg.1) ≫ f : by rw (category.associativity_lemma C (f ≫ g) Ifg.1 f)
        ... = If.1 ≫ 𝟙X ≫ f : by rw is_Isomorphism.witness_1_lemma
        ... = If.1 ≫ f : by rw category.left_identity_lemma
        ... = 𝟙Y : by rw is_Isomorphism.witness_2_lemma
    end,
    begin simp end⟩
end

end Two_Out_Of_Three

variables {D : Type u} [𝒟 : category.{u v} D]
include 𝒟

-- 1d Show functors preserve isomorphisms
theorem fun_id (F : C ↝ D) (X Y : C) (f : X ⟶ Y) :
  (is_Isomorphism' f) → (is_Isomorphism' (F &> f)) :=
begin
  intro hf,
  apply hf.elim,
  intro If,
  exact nonempty.intro
    /- ⟨F &> If.1,
      begin
        simp,
        exact calc
          (F &> f) ≫ (F &> If.1) = F &> (f ≫ If.1) : by rw Functor.functoriality_lemma
          ... = F &> 𝟙X : by rw is_Isomorphism.witness_1_lemma
          ... = 𝟙 (F +> X) : by rw Functor.identities
      end,
      begin
        simp,
        exact calc
          (F &> If.1) ≫ (F &> f) = F &> (If.1 ≫ f) : by rw Functor.functoriality_lemma
          ... = F &> 𝟙Y : by rw is_Isomorphism.witness_2_lemma
          ... = 𝟙 (F +> Y) : by rw Functor.identities
      end⟩ -/
    (isomorphism.is_Isomorphism_of_Isomorphism (F.onIsomorphisms ⟨f, If.1, by simp, by simp⟩))
end

-- 1e Show that if F : C ↝ D is full and faithful, and F &> f : F +> A ⟶ F +> B is
-- an isomorphism in 𝒟, then f : A ⟶ B is an isomorphism in 𝒞
theorem reflecting_isomorphisms (F : C ↝ D) (X Y : C) (f : X ⟶ Y) :
  is_Full_Functor F → is_Faithful_Functor F → is_Isomorphism' (F &> f) → is_Isomorphism' f :=
begin
  intros hfu hfa hFf,
  apply hFf.elim,
  intro IFf,
  cases (classical.indefinite_description _ (hfu IFf.1)) with g hg,
  apply nonempty.intro (⟨g,
    begin
      simp,
      exact hfa (calc
        F &> (f ≫ g) = (F &> f) ≫ (F &> g) : by rw Functor.functoriality_lemma
        ... = 𝟙(F +> X) : by rw [hg, is_Isomorphism.witness_1_lemma]
        ... = F &> (𝟙X) : by rw Functor.identities)
    end,
    begin
      simp,
      exact hfa (calc
        F &> (g ≫ f) = (F &> g) ≫ (F &> f) : by rw Functor.functoriality_lemma
        ... = 𝟙(F +> Y) : by rw [hg, is_Isomorphism.witness_2_lemma]
        ... = F &> (𝟙Y) : by rw Functor.identities)
    end⟩ : is_Isomorphism f)
end
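# A concrete two-out-of-three check in a familiar category (finite-dimensional
# vector spaces, morphisms as matrices); purely illustrative of `tootfirsec`
# above, not derived from the Lean file. With diagrammatic composition, f ≫ g
# corresponds to the matrix product g @ f, and the constructed inverse
# Ig.1 ≫ If.1 corresponds to inv(f) @ inv(g).

import numpy as np

f = np.array([[1.0, 2.0], [0.0, 1.0]])   # invertible: det = 1
g = np.array([[0.0, -1.0], [1.0, 0.0]])  # invertible: a rotation

gf = g @ f                                    # the composite f ≫ g
gf_inv = np.linalg.inv(f) @ np.linalg.inv(g)  # candidate inverse, reversed order

assert np.allclose(gf @ gf_inv, np.eye(2))
assert np.allclose(gf_inv @ gf, np.eye(2))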
# Pias says hello to training
print("Hello Git")
{-# OPTIONS --without-K --exact-split #-}

module quotient-groups where

import subgroups
open subgroups public

{- The left and right coset relation -}

left-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x y : type-Group G) → UU (l1 ⊔ l2)
left-coset-relation G H x =
  fib ((mul-Group G x) ∘ (incl-group-Subgroup G H))

right-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x y : type-Group G) → UU (l1 ⊔ l2)
right-coset-relation G H x =
  fib ((mul-Group' G x) ∘ (incl-group-Subgroup G H))

{- We show that the left coset relation is an equivalence relation -}

is-prop-left-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x y : type-Group G) → is-prop (left-coset-relation G H x y)
is-prop-left-coset-relation G H x =
  is-prop-map-is-emb
    ( (mul-Group G x) ∘ (incl-group-Subgroup G H))
    ( is-emb-comp'
      ( mul-Group G x)
      ( incl-group-Subgroup G H)
      ( is-emb-is-equiv (mul-Group G x) (is-equiv-mul-Group G x))
      ( is-emb-incl-group-Subgroup G H))

is-reflexive-left-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x : type-Group G) → left-coset-relation G H x x
is-reflexive-left-coset-relation G H x =
  pair
    ( unit-group-Subgroup G H)
    ( right-unit-law-Group G x)

is-symmetric-left-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x y : type-Group G) → left-coset-relation G H x y → left-coset-relation G H y x
is-symmetric-left-coset-relation G H x y (pair z p) =
  pair
    ( inv-group-Subgroup G H z)
    ( ap
      ( λ t →
        mul-Group G t
          ( incl-group-Subgroup G H
            ( inv-group-Subgroup G H z)))
      ( inv p) ∙
      ( ( is-associative-mul-Group G _ _ _) ∙
        ( ( ap (mul-Group G x) (right-inverse-law-Group G _)) ∙
          ( right-unit-law-Group G x))))

is-transitive-left-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x y z : type-Group G) → left-coset-relation G H x y →
  left-coset-relation G H y z → left-coset-relation G H x z
is-transitive-left-coset-relation G H x y z (pair h1 p1) (pair h2 p2) =
  pair
    ( mul-group-Subgroup G H h1 h2)
    ( ( inv (is-associative-mul-Group G _ _ _)) ∙
      ( ( ap (λ t → mul-Group G t (incl-group-Subgroup G H h2)) p1) ∙ p2))

{- We show that the right coset relation is an equivalence relation -}

is-prop-right-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x y : type-Group G) → is-prop (right-coset-relation G H x y)
is-prop-right-coset-relation G H x =
  is-prop-map-is-emb
    ( (mul-Group' G x) ∘ (incl-group-Subgroup G H))
    ( is-emb-comp'
      ( mul-Group' G x)
      ( incl-group-Subgroup G H)
      ( is-emb-is-equiv (mul-Group' G x) (is-equiv-mul-Group' G x))
      ( is-emb-incl-group-Subgroup G H))

is-reflexive-right-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x : type-Group G) → right-coset-relation G H x x
is-reflexive-right-coset-relation G H x =
  pair
    ( unit-group-Subgroup G H)
    ( left-unit-law-Group G x)

is-symmetric-right-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x y : type-Group G) → right-coset-relation G H x y → right-coset-relation G H y x
is-symmetric-right-coset-relation G H x y (pair z p) =
  pair
    ( inv-group-Subgroup G H z)
    ( ( ap
        ( mul-Group G (incl-group-Subgroup G H (inv-group-Subgroup G H z)))
        ( inv p)) ∙
      ( ( inv (is-associative-mul-Group G _ _ _)) ∙
        ( ( ap (λ t → mul-Group G t x) (left-inverse-law-Group G _)) ∙
          ( left-unit-law-Group G x))))

is-transitive-right-coset-relation :
  {l1 l2 : Level} (G : Group l1) (H : Subgroup l2 G) →
  (x y z : type-Group G) → right-coset-relation G H x y →
  right-coset-relation G H y z → right-coset-relation G H x z
is-transitive-right-coset-relation G H x y z (pair h1 p1) (pair h2 p2) =
  pair
    ( mul-group-Subgroup G H h2 h1)
    ( ( is-associative-mul-Group G _ _ _) ∙
      ( ( ap (mul-Group G (incl-group-Subgroup G H h2)) p1) ∙ p2))
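# A finite worked example of the left coset relation formalized above, written
# additively and purely for illustration: in Z/6 with subgroup H = {0, 3},
# x ~ y holds iff y = x + h for some h in H, i.e. y lies in the fiber of
# h ↦ x + h over y, matching left-coset-relation as a fib.

G = range(6)
H = [0, 3]

def left_coset_related(x, y):
    """y lies in the left coset x + H of Z/6."""
    return any((x + h) % 6 == y for h in H)

# The equivalence classes partition G into |G| / |H| = 3 cosets.
cosets = {tuple(sorted(y for y in G if left_coset_related(x, y))) for x in G}
assert cosets == {(0, 3), (1, 4), (2, 5)}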
{-# OPTIONS --no-pattern-matching #-}

id : {A : Set} (x : A) → A
id x = x

data Unit : Set where
  unit : Unit

fail : Unit → Set
fail unit = Unit

-- Expected error: Pattern matching is disabled