Datasets: AI4M
text (string lengths 0 to 3.34M)
If $I$ is a countable index set, $(A_i)_{i \in I}$ is a family of measurable sets, and $\sum_{i \in I'} \mu(A_i) \leq B$ for all finite subsets $I' \subseteq I$, then $\bigcup_{i \in I} A_i$ is measurable and $\mu(\bigcup_{i \in I} A_i) \leq B$.
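A short worked argument for the statement above, assuming $\mu$ is a countably additive, nonnegative measure on a $\sigma$-algebra that contains every $A_i$:

% Measurability: a sigma-algebra is closed under countable unions and I is countable,
% so \bigcup_{i \in I} A_i is measurable. For the bound:
\begin{align*}
\sum_{i \in I} \mu(A_i)
  &= \sup_{\substack{I' \subseteq I \\ I' \text{ finite}}} \; \sum_{i \in I'} \mu(A_i)
  \;\leq\; B
  && \text{(nonnegative terms: the series is the supremum of its partial sums)}\\
\mu\Bigl(\textstyle\bigcup_{i \in I} A_i\Bigr)
  &\leq \sum_{i \in I} \mu(A_i) \;\leq\; B
  && \text{(countable subadditivity).}
\end{align*}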
SUBROUTINE slDVTP (V, V0, XI, ETA, J) *+ * - - - - - - * D V T P * - - - - - - * * Given the direction cosines of a star and of the tangent point, * determine the star's tangent-plane coordinates. * * (double precision) * * Given: * V d(3) direction cosines of star * V0 d(3) direction cosines of tangent point * * Returned: * XI,ETA d tangent plane coordinates of star * J i status: 0 = OK * 1 = error, star too far from axis * 2 = error, antistar on tangent plane * 3 = error, antistar too far from axis * * Notes: * * 1 If vector V0 is not of unit length, or if vector V is of zero * length, the results will be wrong. * * 2 If V0 points at a pole, the returned XI,ETA will be based on the * arbitrary assumption that the RA of the tangent point is zero. * * 3 This routine is the Cartesian equivalent of the routine slDSTP. * * P.T.Wallace Starlink 27 November 1996 * * Copyright (C) 1996 Rutherford Appleton Laboratory * * License: * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program (see SLA_CONDITIONS); if not, write to the * Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, * Boston, MA 02110-1301 USA * * Copyright (C) 1995 Association of Universities for Research in Astronomy Inc. *- IMPLICIT NONE DOUBLE PRECISION V(3),V0(3),XI,ETA INTEGER J DOUBLE PRECISION X,Y,Z,X0,Y0,Z0,R2,R,W,D DOUBLE PRECISION TINY PARAMETER (TINY=1D-6) X=V(1) Y=V(2) Z=V(3) X0=V0(1) Y0=V0(2) Z0=V0(3) R2=X0*X0+Y0*Y0 R=SQRT(R2) IF (R.EQ.0D0) THEN R=1D-20 X0=R END IF W=X*X0+Y*Y0 D=W+Z*Z0 IF (D.GT.TINY) THEN J=0 ELSE IF (D.GE.0D0) THEN J=1 D=TINY ELSE IF (D.GT.-TINY) THEN J=2 D=-TINY ELSE J=3 END IF D=D*R XI=(Y*X0-X*Y0)/D ETA=(Z*R2-Z0*W)/D END
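For readers outside Fortran, the arithmetic of slDVTP translates directly into a few lines of NumPy. The sketch below mirrors the routine's math; the function name dvtp is illustrative, and the four status codes are collapsed into a single error check rather than reproduced.

import numpy as np

def dvtp(v, v0, tiny=1e-6):
    """Tangent-plane coordinates (xi, eta) of a star with direction cosines v,
    given the tangent-point direction cosines v0 (assumed to be a unit vector).
    Mirrors the arithmetic of slDVTP; error handling reduced to a ValueError."""
    x, y, z = v
    x0, y0, z0 = v0
    r2 = x0 * x0 + y0 * y0
    r = np.sqrt(r2)
    if r == 0.0:                # tangent point at a pole: adopt the RA = 0 convention
        r = 1e-20
        x0 = r
    w = x * x0 + y * y0
    d = w + z * z0              # cosine of the star / tangent-point separation
    if d <= tiny:
        raise ValueError("star too far from the tangent-plane axis")
    d = d * r
    xi = (y * x0 - x * y0) / d
    eta = (z * r2 - z0 * w) / d
    return xi, eta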
function [vfN, vfR, vrN, vrR, vnetN, vnetR, uL, uC, u0L, u0C, lnxL, lnxC] = projectOntoSubspace(A, vf, vr, vnet, u, u0, lnx,printLevel,rowBool,colBool) % Projects flux, net flux, potential and logarithmic concentration onto % their respective subspaces of A using projection matrices generated either % derived from SVD, or by using the Moore-Penrose pseudoinverse % % Optionally, a subset of the matrix A may be chosen by using A(rowBool,colBool) % but then only the true rows of u, u0, lnx, and true columns of vf,vr,vnet are % projected and the remaining rows and columns are not affected % % Let `M` denote the Moore-Penrose pseudoinverse of A and the subscripts are the following % `_R` row space, % `_N` nullspace, % `_C` column space, % `_L` left nullspace, % % Example for flux of net flux % % Let % % .. math:: % vf &= vf_R + vf_N \\ % vf_R &= M A vf = PR vf \\ % vf_N &= (I - M A) vf = PN vf % % Example for potential or logarithmic concentration % % Let % % .. math:: % u &= u_C + u_L \\ % u_C &= A M u = PC u \\ % u_L &= (I - A M) u = PL u % % USAGE: % % [vfN, vfR, vrN, vrR, vnetN, vnetR, uL, uC, u0L, u0C, lnxL, lnxC] = projectOntoSubspace(modelT, vf, vr, vnet, u, u0, lnx) % % INPUTS: % A `m x n` matrix % vf: `n x 1` - forward flux % vr: `n x 1` - reverse flux % vnet: `n x 1` - net flux % u: `m x 1` - chemical potential % u0: `m x 1` - standard chemical potential % lnx: `m x 1` - logarithmic concentration % OPTIONAL INPUTS % rowBool `m x 1` - boolean indicating the subset of rows of A % colBool 'n x 1' - boolean indicating the subset of cols of A % % OUTPUTS: % vfN: forward flux - nullspace % vfR: forward flux - row space % vrN: reverse flux - nullspace % vrR: reverse flux - row space % vnetN: net flux - nullspace % vnetR: net flux - row space % uL: chemical potential - left nullspace % uC: chemical potential - column space % lnxL: logarithmic concentration - left nullspace % lnxC: logarithmic concentration - column space if ~isempty(vf) if any((vnet(colBool)- vf +vr)>1e-12) %sanity check error('Net flux does not equal the difference between forward and reverse flux') end end if ~isempty(lnx) if any((u - u0 - lnx)>1e-12) error('Chemical potential does not equal standard chemical potential plus logarithmic conc.') end end if ~exist('printLevel','var') printLevel=0; end [m,n]=size(A); if ~exist('rowBool','var') rowBool=true(m,1); end if ~exist('colBool','var') colBool=true(n,1); end %generate fake outputs, or populate with unprojected vectors, part of which %will be overwritten with the projected vectos further below. 
if ~exist('u','var') uL=NaN*ones(m,1); uC=NaN*ones(m,1); else uL=u; uC=u; end if ~exist('u0','var') u0L=NaN*ones(m,1); u0C=NaN*ones(m,1); else u0L=u0; u0C=u0; end if ~exist('lnx','var') lnxL=NaN*ones(m,1); lnxC=NaN*ones(m,1); else lnxL=lnx; lnxC=lnx; end if ~exist('vf','var') vfN=NaN*ones(n,1); vfR=NaN*ones(n,1); else vfN=vf; vfR=vf; end if ~exist('vr','var') vrN=NaN*ones(n,1); vrR=NaN*ones(n,1); else vrN=vr; vrR=vr; end if ~exist('vnet','var') vnetN=NaN*ones(n,1); vnetR=NaN*ones(n,1); else vnetN=vnet; vnetR=vnet; end %generate projection matrices sub_space='all'; [PR,PN,PC,PL]=subspaceProjector(A(rowBool,colBool),printLevel,sub_space); %potential if ~isempty(u) %chemical potential uC(rowBool)=PC*u(rowBool); uL(rowBool)=PL*u(rowBool); end if ~isempty(u0) %standard potential u0L(rowBool)=PL*u0(rowBool); u0C(rowBool)=PC*u0(rowBool); end if ~isempty(lnx) %concentration lnxC(rowBool)=PC*lnx(rowBool); lnxL(rowBool)=PL*lnx(rowBool); end %flux if ~isempty(vnet) vnetN(colBool)=PN*vnet(colBool); vnetR(colBool)=PR*vnet(colBool); end if ~isempty(vf) vfN(colBool)=PN*vf(colBool); vfR(colBool)=PR*vf(colBool); end if ~isempty(vr) vrN(colBool)=PN*vr(colBool); vrR(colBool)=PR*vr(colBool); end
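The projections applied above reduce to four matrices built from the Moore-Penrose pseudoinverse, as the docstring's equations indicate. subspaceProjector itself is not shown in this excerpt, so the construction below is a minimal NumPy sketch of those equations (names are illustrative), not the actual implementation:

import numpy as np

def subspace_projectors(A):
    """Return (PR, PN, PC, PL): projectors onto the row space, null space,
    column space and left null space of A, via the Moore-Penrose pseudoinverse."""
    M = np.linalg.pinv(A)
    m, n = A.shape
    PR = M @ A                   # row space:        vR = M A v
    PN = np.eye(n) - PR          # null space:       vN = (I - M A) v
    PC = A @ M                   # column space:     uC = A M u
    PL = np.eye(m) - PC          # left null space:  uL = (I - A M) u
    return PR, PN, PC, PL

# usage: any flux vector v splits as v = PR @ v + PN @ v,
# and any potential vector u as u = PC @ u + PL @ u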
# (10) ######### writeLines("We move asum to the right hand side") ## ct <- makeCluster(cores) registerDoParallel(ct) asum<-5 writeLines("\nasum is scalar on the right hand side") bsum<-foreach(ijk=1:nt , .combine=rbind ,.export=(c('asum')),.verbose=doverbose) %dopar% { xsum<-asum*ijk } print(asum) print(bsum) asum=vector(mode="double",length=5) writeLines("\nasum is a vector of length 5 on the right hand side") for (ijk in 1:length(asum))asum[ijk]<-asum[ijk]+ijk bsum<-foreach(ijk=1:nt , .combine=rbind ,.export=(c('asum')),.verbose=doverbose) %dopar% { xsum<-asum*ijk } print(asum) print(bsum) asum=vector(mode="double",length=15) writeLines("\nasum is a vector of length 15 on the right hand side") writeLines("We add the pid for each task/iteration to the output") for (ijk in 1:length(asum))asum[ijk]<-asum[ijk]+ijk bsum<-foreach(ijk=1:nt , .combine=rbind,.verbose=doverbose) %dopar% { xsum<-asum+ijk*1000 pid<-Sys.getpid() c(pid,xsum) } print(asum) print(bsum) print(bsum[order(bsum[,1]),]) stopCluster(ct) #readline(prompt = "NEXT>")
%% Hough Lines Transform % An example using the Hough line detector. % % This program demonstrates line finding with the Hough transform. % We show how to use the OpenCV functions |cv.HoughLines| and |cv.HoughLinesP| % to detect lines in an image. % % Sources: % % * <https://github.com/opencv/opencv/blob/3.2.0/samples/cpp/houghlines.cpp> % * <https://github.com/opencv/opencv/blob/3.2.0/samples/python/houghlines.py> % * <https://docs.opencv.org/3.2.0/d9/db0/tutorial_hough_lines.html> % * <https://github.com/opencv/opencv/blob/3.2.0/samples/cpp/tutorial_code/ImgTrans/HoughLines_Demo.cpp> % * <https://docs.opencv.org/3.2.0/d6/d10/tutorial_py_houghlines.html> % %% Theory % % The explanation below belongs to the book *Learning OpenCV* by % Bradski and Kaehler. % % The Hough Line Transform is a transform used to detect straight lines. To % apply the Transform, first an edge detection pre-processing is desirable. % % As you know, a line in the image space can be expressed with two variables. % For example: % % * In the *Cartesian coordinate system:* Parameters: $(m,b)$. % * In the *Polar coordinate system:* Parameters: $(r,\theta)$ % % <<https://docs.opencv.org/3.2.0/Hough_Lines_Tutorial_Theory_0.jpg>> % % For Hough Transforms, we will express lines in the _Polar system_. Hence, a % line equation can be written as: % % $$y = \left ( -\frac{\cos \theta}{\sin \theta} \right ) x + % \left ( \frac{r}{\sin \theta} \right )$$ % % Arranging the terms: $r = x \cos \theta + y \sin \theta$ % % In general for each point $(x_{0}, y_{0})$, we can define the family of % lines that goes through that point as: % % $$r_{\theta} = x_{0} \cdot \cos \theta + y_{0} \cdot \sin \theta$$ % % Meaning that each pair $(r_{\theta},\theta)$ represents each line that % passes by $(x_{0}, y_{0})$. % % If for a given $(x_{0}, y_{0})$ we plot the family of lines that goes % through it, we get a sinusoid. For instance, for $x_{0} = 8$ and $y_{0} = 6$ % we get the following plot (in a plane $\theta$ - $r$): % % <<https://docs.opencv.org/3.2.0/Hough_Lines_Tutorial_Theory_1.jpg>> % % We consider only points such that $r > 0$ and $0< \theta < 2 \pi$. % % We can do the same operation above for all the points in an image. If the % curves of two different points intersect in the plane $\theta$ - $r$, that % means that both points belong to a same line. For instance, following with % the example above and drawing the plot for two more points: % $x_{1} = 4$, $y_{1} = 9$ and $x_{2} = 12$, $y_{2} = 3$, we get: % % <<https://docs.opencv.org/3.2.0/Hough_Lines_Tutorial_Theory_2.jpg>> % % The three plots intersect in one single point $(0.925, 9.6)$, these % coordinates are the parameters ($\theta, r$) or the line in which % $(x_{0}, y_{0})$, $(x_{1}, y_{1})$ and $(x_{2}, y_{2})$ lay. % % What does all the stuff above mean? It means that in general, a line can be % _detected_ by finding the number of intersections between curves. The more % curves intersecting means that the line represented by that intersection % have more points. In general, we can define a _threshold_ of the minimum % number of intersections needed to _detect_ a line. % % This is what the Hough Line Transform does. It keeps track of the % intersection between curves of every point in the image. If the number of % intersections is above some _threshold_, then it declares it as a line with % the parameters $(\theta, r_{\theta})$ of the intersection point. 
% %% Standard and Probabilistic Hough Line Transform % % OpenCV implements two kind of Hough Line Transforms: % % 1) *The Standard Hough Transform* % % * It consists in pretty much what we just explained in the previous section. % It gives you as result a vector of couples $(\theta, r_{\theta})$ % * In OpenCV it is implemented with the function |cv.HoughLines| % % 2) *The Probabilistic Hough Line Transform* % % * A more efficient implementation of the Hough Line Transform. It gives as % output the extremes of the detected lines $(x_{0}, y_{0}, x_{1}, y_{1})$ % * In OpenCV it is implemented with the function |cv.HoughLinesP| % %% Code % % This program: % % * Loads an image % * Applies either a _Standard Hough Line Transform_ or a % _Probabilistic Line Transform_. % * Display the original image and the detected line in two windows. % % You may observe that the number of lines detected vary while you change the % _threshold_. The explanation is sort of evident: If you establish a higher % threshold, fewer lines will be detected (since you will need more points to % declare a line detected). % %% % Input image if true fname = fullfile(mexopencv.root(), 'test', 'sudoku.jpg'); thresh = 200; threshP = 100; minlen = 100; else fname = fullfile(mexopencv.root(), 'test', 'pic1.png'); thresh = 85; threshP = 50; minlen = 50; end src = cv.imread(fname, 'Color',true); %% % Edge Detection gray = cv.cvtColor(src, 'RGB2GRAY'); edges = cv.Canny(gray, [50, 150], 'ApertureSize',3); imshow(edges), title('Edges') %% % HoughLines: Standard Hough Line Transform tic lines = cv.HoughLines(edges, 'Rho',1, 'Theta',pi/180, 'Threshold',thresh); toc %% % draw the lines, and display the result lines = cat(1, lines{:}); rho = lines(:,1); theta = lines(:,2); a = cos(theta); b = sin(theta); x0 = a.*rho; y0 = b.*rho; pt1 = round([x0 + 1000*(-b), y0 + 1000*(a)]); pt2 = round([x0 - 1000*(-b), y0 - 1000*(a)]); out = cv.line(src, pt1, pt2, ... 'Color',[0 255 0], 'Thickness',2, 'LineType','AA'); figure, imshow(out), title('Detected Lines') %% % HoughLinesP: Probabilistic Hough Line Transform tic linesP = cv.HoughLinesP(edges, 'Rho',1, 'Theta',pi/180, ... 'Threshold',threshP, 'MinLineLength',minlen, 'MaxLineGap',10); toc %% % draw the line segments, and display the result linesP = cat(1, linesP{:}); outP = cv.line(src, linesP(:,1:2), linesP(:,3:4), ... 'Color',[0 255 0], 'Thickness',2, 'LineType','AA'); figure, imshow(outP), title('Detected Line Segments')
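As a complement to the OpenCV calls above, the voting scheme described in the theory section can be written out directly. The following NumPy accumulator is a didactic sketch of the standard Hough transform, not the implementation behind cv.HoughLines:

import numpy as np

def hough_lines(edges, rho_res=1.0, theta_res=np.pi / 180, threshold=100):
    """edges: 2-D binary array of edge pixels. Returns (rho, theta) pairs whose
    accumulator cell received at least `threshold` votes."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.arange(0.0, np.pi, theta_res)
    rhos = np.arange(-diag, diag + 1, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # each edge point votes for every (rho, theta) line passing through it
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((r + diag) / rho_res).astype(int)
        acc[idx, np.arange(len(thetas))] += 1
    peaks = np.argwhere(acc >= threshold)
    return [(rhos[i], thetas[j]) for i, j in peaks]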
// ---------------------------------------------------------------------------- // - Open3D: www.open3d.org - // ---------------------------------------------------------------------------- // The MIT License (MIT) // // Copyright (c) 2018 www.open3d.org // // Permission is hereby granted, free of charge, to any person obtaining a copy // of this software and associated documentation files (the "Software"), to deal // in the Software without restriction, including without limitation the rights // to use, copy, modify, merge, publish, distribute, sublicense, and/or sell // copies of the Software, and to permit persons to whom the Software is // furnished to do so, subject to the following conditions: // // The above copyright notice and this permission notice shall be included in // all copies or substantial portions of the Software. // // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, // FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE // AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER // LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING // FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS // IN THE SOFTWARE. // ---------------------------------------------------------------------------- #include <algorithm> #include <iostream> #include <Eigen/Dense> #include <cmath> #include "Open3D/Camera/PinholeCameraIntrinsic.h" #include "Open3D/Geometry/Image.h" namespace open3d { namespace geometry { std::shared_ptr<Image> Image::CreateDepthToCameraDistanceMultiplierFloatImage( const camera::PinholeCameraIntrinsic &intrinsic) { auto fimage = std::make_shared<Image>(); fimage->Prepare(intrinsic.width_, intrinsic.height_, 1, 4); float ffl_inv[2] = { 1.0f / (float)intrinsic.GetFocalLength().first, 1.0f / (float)intrinsic.GetFocalLength().second, }; float fpp[2] = { (float)intrinsic.GetPrincipalPoint().first, (float)intrinsic.GetPrincipalPoint().second, }; std::vector<float> xx(intrinsic.width_); std::vector<float> yy(intrinsic.height_); for (int j = 0; j < intrinsic.width_; j++) { xx[j] = (j - fpp[0]) * ffl_inv[0]; } for (int i = 0; i < intrinsic.height_; i++) { yy[i] = (i - fpp[1]) * ffl_inv[1]; } for (int i = 0; i < intrinsic.height_; i++) { float *fp = (float *)(fimage->data_.data() + i * fimage->BytesPerLine()); for (int j = 0; j < intrinsic.width_; j++, fp++) { *fp = sqrtf(xx[j] * xx[j] + yy[i] * yy[i] + 1.0f); } } return fimage; } std::shared_ptr<Image> Image::CreateWeightImage( const camera::PinholeCameraIntrinsic &intrinsic) const { auto output = std::make_shared<Image>(); output->Prepare(intrinsic.width_, intrinsic.height_, 1, 4); auto focal_length = intrinsic.GetFocalLength(); auto principal_point = intrinsic.GetPrincipalPoint(); #ifdef _OPENMP #ifdef _WIN32 #pragma omp parallel for schedule(static) #else #pragma omp parallel for collapse(2) schedule(static) #endif #endif for (int i = 0; i < output->height_; i++) { for (int j = 0; j < output->width_; j++) { float *p = output->PointerAt<float>(j, i); float *ip = PointerAt<float>(j, i); double weight = 0.0; if (*ip > 0) { if(i > 0 && j > 0 && i < output->height_-1 && j < output->width_-1){ // computing normalized vertex double z = (double)(*ip); double x = (j - principal_point.first) * z / focal_length.first; double y = (i - principal_point.second) * z / focal_length.second; Eigen::Vector3d point = Eigen::Vector3d(x, y, z); Eigen::Vector3d v_norm 
= point.normalized(); //computing normalized normal float *dx1 = PointerAt<float>(j+1, i); float *dx2 = PointerAt<float>(j-1, i); float *dy1 = PointerAt<float>(j, i+1); float *dy2 = PointerAt<float>(j, i-1); double dzdx = (((double)*dx1 - (double)*dx2)/2.0)*1000.0; double dzdy = (((double)*dy1 - (double)*dy2)/2.0)*1000.0; Eigen::Vector3d normal = Eigen::Vector3d(-dzdx, -dzdy, 1.0); Eigen::Vector3d n_norm = normal.normalized(); // Eigen::Vector3d captureDir = Eigen::Vector3d(0, 0, 1.0); // double w1 = abs(captureDir.dot(n_norm)); // double w2 = abs(captureDir.dot(v_norm)); // weight = w1; // weight = w2; double w = abs(n_norm.dot(v_norm)); weight = w * w; // // Adding gaussian weight centering at principle point // double w = (double)output->width_/2.0; // double h = (double)output->height_/2.0; // double w_x = ((double)j - w)/w ; // double w_y = ((double)i - h)/h; // double d = sqrt(w_x * w_x + w_y * w_y); // weight = exp(-((d*d)/(2.0))); // assuming mu = 0 and sigma = 1.0 } else{ weight = 1.0f; // if this weight is set to 0, some of the points are missing in final integrated reconstruction. Therefore assigning it 1.0 } } *p = (float)weight; } } return output; } std::shared_ptr<Image> Image::CreateFloatImage( Image::ColorToIntensityConversionType type /* = WEIGHTED*/) const { auto fimage = std::make_shared<Image>(); if (IsEmpty()) { return fimage; } fimage->Prepare(width_, height_, 1, 4); for (int i = 0; i < height_ * width_; i++) { float *p = (float *)(fimage->data_.data() + i * 4); const uint8_t *pi = data_.data() + i * num_of_channels_ * bytes_per_channel_; if (num_of_channels_ == 1) { // grayscale image if (bytes_per_channel_ == 1) { *p = (float)(*pi) / 255.0f; } else if (bytes_per_channel_ == 2) { const uint16_t *pi16 = (const uint16_t *)pi; *p = (float)(*pi16); } else if (bytes_per_channel_ == 4) { const float *pf = (const float *)pi; *p = *pf; } } else if (num_of_channels_ == 3) { if (bytes_per_channel_ == 1) { if (type == Image::ColorToIntensityConversionType::Equal) { *p = ((float)(pi[0]) + (float)(pi[1]) + (float)(pi[2])) / 3.0f / 255.0f; } else if (type == Image::ColorToIntensityConversionType::Weighted) { *p = (0.2990f * (float)(pi[0]) + 0.5870f * (float)(pi[1]) + 0.1140f * (float)(pi[2])) / 255.0f; } } else if (bytes_per_channel_ == 2) { const uint16_t *pi16 = (const uint16_t *)pi; if (type == Image::ColorToIntensityConversionType::Equal) { *p = ((float)(pi16[0]) + (float)(pi16[1]) + (float)(pi16[2])) / 3.0f; } else if (type == Image::ColorToIntensityConversionType::Weighted) { *p = (0.2990f * (float)(pi16[0]) + 0.5870f * (float)(pi16[1]) + 0.1140f * (float)(pi16[2])); } } else if (bytes_per_channel_ == 4) { const float *pf = (const float *)pi; if (type == Image::ColorToIntensityConversionType::Equal) { *p = (pf[0] + pf[1] + pf[2]) / 3.0f; } else if (type == Image::ColorToIntensityConversionType::Weighted) { *p = (0.2990f * pf[0] + 0.5870f * pf[1] + 0.1140f * pf[2]); } } } } return fimage; } template <typename T> std::shared_ptr<Image> Image::CreateImageFromFloatImage() const { auto output = std::make_shared<Image>(); if (num_of_channels_ != 1 || bytes_per_channel_ != 4) { utility::LogError( "[CreateImageFromFloatImage] Unsupported image format."); } output->Prepare(width_, height_, num_of_channels_, sizeof(T)); const float *pi = (const float *)data_.data(); T *p = (T *)output->data_.data(); for (int i = 0; i < height_ * width_; i++, p++, pi++) { if (sizeof(T) == 1) *p = static_cast<T>(*pi * 255.0f); if (sizeof(T) == 2) *p = static_cast<T>(*pi); } return output; } template 
std::shared_ptr<Image> Image::CreateImageFromFloatImage<uint8_t>() const; template std::shared_ptr<Image> Image::CreateImageFromFloatImage<uint16_t>() const; ImagePyramid Image::CreatePyramid(size_t num_of_levels, bool with_gaussian_filter /*= true*/) const { std::vector<std::shared_ptr<Image>> pyramid_image; pyramid_image.clear(); if ((num_of_channels_ != 1) || (bytes_per_channel_ != 4)) { utility::LogError("[CreateImagePyramid] Unsupported image format."); } for (size_t i = 0; i < num_of_levels; i++) { if (i == 0) { std::shared_ptr<Image> input_copy_ptr = std::make_shared<Image>(); *input_copy_ptr = *this; pyramid_image.push_back(input_copy_ptr); } else { if (with_gaussian_filter) { // https://en.wikipedia.org/wiki/Pyramid_(image_processing) auto level_b = pyramid_image[i - 1]->Filter( Image::FilterType::Gaussian3); auto level_bd = level_b->Downsample(); pyramid_image.push_back(level_bd); } else { auto level_d = pyramid_image[i - 1]->Downsample(); pyramid_image.push_back(level_d); } } } return pyramid_image; } } // namespace geometry } // namespace open3d
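The weighting rule in CreateWeightImage (per-pixel weight $(n \cdot v)^2$ from central-difference normals) is easy to prototype outside C++. Below is a rough NumPy sketch under the same assumptions (float depth image, pinhole intrinsics fx, fy, cx, cy); the border handling is simplified relative to the C++ above, which assigns weight 1.0 to valid border pixels.

import numpy as np

def weight_image(depth, fx, fy, cx, cy):
    """Per-pixel weight = (normal . view_dir)^2, as in CreateWeightImage.
    depth: 2-D float array; non-positive depth means 'no measurement' and gets weight 0."""
    h, w = depth.shape
    j, i = np.meshgrid(np.arange(w), np.arange(h))
    # back-project pixels to normalized viewing directions
    x = (j - cx) * depth / fx
    y = (i - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1)
    v = pts / np.maximum(np.linalg.norm(pts, axis=-1, keepdims=True), 1e-12)
    # central-difference depth gradients -> un-normalized normal (-dz/dx, -dz/dy, 1);
    # the factor 1000 mirrors the scaling used in the C++ code
    dzdx = np.gradient(depth, axis=1) * 1000.0
    dzdy = np.gradient(depth, axis=0) * 1000.0
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)], axis=-1)
    n /= np.maximum(np.linalg.norm(n, axis=-1, keepdims=True), 1e-12)
    w_img = np.abs(np.sum(n * v, axis=-1)) ** 2
    w_img[depth <= 0] = 0.0
    return w_img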
import Revise import GLRenderer as GL import ThreeDP3 as T import Images as I import MiniGSG as S import Rotations as R import PoseComposition: Pose using Gen import MeshCatViz import MeshCatViz MeshCatViz.setup_visualizer() YCB_DIR = "/home/nishadg/mcs/ThreeDVision.jl/data/ycbv2" world_scaling_factor = 100.0 id_to_cloud, id_to_shift, id_to_box = T.load_ycbv_models_adjusted(YCB_DIR, world_scaling_factor); all_ids = sort(collect(keys(id_to_cloud))); IDX =400 @show T.get_ycb_scene_frame_id_from_idx(YCB_DIR,IDX) gt_poses, ids, gt_rgb_image, gt_depth_image, cam_pose, camera = T.load_ycbv_scene_adjusted( YCB_DIR, IDX, world_scaling_factor, id_to_shift ); img = I.colorview(I.Gray, gt_depth_image ./ maximum(gt_depth_image)) rgb_image = I.colorview(I.RGB, permutedims(Float64.(gt_rgb_image)./255.0, (3,1,2))) nominal_colors = [I.colorant"red",I.colorant"blue",I.colorant"green",I.colorant"yellow",I.colorant"white", I.colorant"black"]; diffs = cat([I.colordiff.(rgb_image, c) for c in nominal_colors]...,dims=3); argmaxs = map(x->x[3],argmin(diffs, dims=3))[:,:,1]; mod_image = copy(rgb_image) for (idx,c) in enumerate(nominal_colors) mod_image[argmaxs .== idx] .= c end mod_image import Plots as P P.heatmap # + resolution = 0.5 renderer = GL.setup_renderer(camera, GL.DepthMode()) for id in all_ids cloud = id_to_cloud[id] v,n,f = GL.mesh_from_voxelized_cloud(GL.voxelize(cloud, resolution), resolution) GL.load_object!(renderer, v, f) end depth_image = GL.gl_render( renderer, ids, gt_poses, T.IDENTITY_POSE) depth_image[depth_image .== 50000.0] .= 200.0 img = I.colorview(I.Gray, depth_image ./ maximum(depth_image)) # - renderer_color = GL.setup_renderer(camera, GL.RGBMode()) for id in all_ids cloud = id_to_cloud[id] v,n,f = GL.mesh_from_voxelized_cloud(GL.voxelize(cloud, resolution), resolution) GL.load_object!(renderer_color, v, n, f) end colors = I.distinguishable_colors(length(ids), I.colorant"green") # colors = [I.colorant"red", I.colorant"green", I.colorant"cyan"] colors = [ I.colorant"yellow", I.colorant"cyan", I.colorant"lightgreen", I.colorant"red", I.colorant"purple", I.colorant"orange" ] rgb_image, depth_image = GL.gl_render( renderer_color, ids, gt_poses, colors[1:length(ids)], T.IDENTITY_POSE) depth_image[depth_image .== 50000.0] .= 200.0 img = I.colorview(I.Gray, depth_image ./ maximum(depth_image)) I.colorview(I.RGBA, permutedims(rgb_image,(3,1,2))) gt_poses renderer_color = GL.setup_renderer(camera, GL.RGBMode()) for id in all_ids cloud = id_to_cloud[id] v,n,f = GL.mesh_from_voxelized_cloud(GL.voxelize(cloud, resolution), resolution) GL.load_object!(renderer_color, v, n, f) end colors = I.distinguishable_colors(length(ids), I.colorant"green") # colors = [I.colorant"red", I.colorant"green", I.colorant"cyan"] colors = [ I.colorant"yellow", I.colorant"cyan", I.colorant"lightgreen", I.colorant"red", I.colorant"purple", I.colorant"orange" ] rand_poses = [T.uniformPose(-20.0,20.0,-20.0,20.0,97.0,120.0) for _ in 1:length(gt_poses)] rgb_image, depth_image = GL.gl_render( renderer_color, ids, rand_poses, colors[1:length(ids)], T.IDENTITY_POSE) depth_image[depth_image .== 50000.0] .= 200.0 img = I.colorview(I.Gray, depth_image ./ maximum(depth_image)) I.colorview(I.RGBA, permutedims(rgb_image,(3,1,2))) # + renderer_texture = GL.setup_renderer(camera, GL.TextureMode()) obj_paths = T.load_ycb_model_obj_file_paths(YCB_DIR) texture_paths = T.load_ycb_model_texture_file_paths(YCB_DIR) for id in all_ids v,n,f,t = renderer_texture.gl_instance.load_obj_parameters( obj_paths[id] ) v = v * world_scaling_factor v 
.-= id_to_shift[id]' GL.load_object!(renderer_texture, v, n, f, t, texture_paths[id] ) end rgb_image, depth_image = GL.gl_render( renderer_texture, ids, gt_poses, T.IDENTITY_POSE) depth_image[depth_image .== 50000.0] .= 200.0 img = I.colorview(I.Gray, depth_image ./ maximum(depth_image)) I.colorview(I.RGBA, permutedims(rgb_image,(3,1,2))) # - focus_idx = 6 rgb_image, depth_image = GL.gl_render( renderer_texture, [ids[focus_idx]], [gt_poses[focus_idx]], T.IDENTITY_POSE) depth_image[depth_image .== 50000.0] .= 200.0 img = I.colorview(I.Gray, depth_image ./ maximum(depth_image)) I.colorview(I.RGBA, permutedims(rgb_image,(3,1,2))) idxs = [4,6] rgb_image, depth_image = GL.gl_render( renderer_texture, ids[idxs], gt_poses[idxs], T.IDENTITY_POSE) depth_image[depth_image .== 50000.0] .= 200.0 img = I.colorview(I.Gray, depth_image ./ maximum(depth_image)) I.colorview(I.RGBA, permutedims(rgb_image,(3,1,2))) focus_idx = 5 rgb_image, depth_image = GL.gl_render( renderer_texture, [ids[focus_idx]], [Pose([0.0, 50.0, 80.0], R.RotXYZ(pi/2, pi/2, 0.0))], Pose(zeros(3),R.RotX(-0.5))) depth_image[depth_image .== 50000.0] .= 200.0 img = I.colorview(I.Gray, depth_image ./ maximum(depth_image)) I.colorview(I.RGBA, permutedims(rgb_image,(3,1,2)))
If a subset $S$ of a metric space is sequentially compact, then it is compact.
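A compressed proof sketch in the metric-space setting, via total boundedness and the Lebesgue number lemma:

% Proof sketch (metric space): sequential compactness implies compactness.
\begin{enumerate}
  \item \emph{Total boundedness.} If $S$ were not totally bounded, some $\varepsilon>0$
        would admit a sequence $(x_n)$ in $S$ with $d(x_m,x_n)\ge\varepsilon$ for $m\ne n$,
        which has no convergent subsequence, a contradiction.
  \item \emph{Lebesgue number.} If an open cover $\mathcal{U}$ of $S$ had no Lebesgue
        number, pick $x_n\in S$ with $B(x_n,1/n)$ contained in no member of $\mathcal{U}$;
        a subsequential limit $x$ lies in some $U\in\mathcal{U}$ with $B(x,r)\subseteq U$,
        contradicting the choice of $x_n$ for large $n$.
  \item \emph{Finite subcover.} Cover $S$ by finitely many balls of radius $\delta/2$,
        where $\delta$ is a Lebesgue number of $\mathcal{U}$; each such ball lies inside
        some $U\in\mathcal{U}$, giving a finite subcover.
\end{enumerate}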
import numpy as np import scipy.stats as stats import matplotlib.pyplot as plt import scipy.misc as misc import scipy.special as special from skimage.util.shape import view_as_windows import itertools import libstempo class HMM: def __init__(self, toas, toa_errs, freqs, fdots, noise_cov, glitches, f_fiducial=0, fd_fiducial=0, fdd_fiducial=0): self.zs = np.diff(sorted(toas))*86400 self.noise_cov = noise_cov self.glitches = glitches self.num_timesteps = len(self.zs) self.freqs = freqs self.df = np.diff(freqs)[0] self.fdots = fdots self.dfd = np.diff(fdots)[0] self.f_fiducial = f_fiducial self.fd_fiducial = fd_fiducial self.fdd_fiducial = fdd_fiducial err_df = np.diff(freqs)[0]*self.zs err_dfd = 0.5*np.diff(fdots)[0]*self.zs**2 # err_sigma_toas = 5*np.sqrt((f_fiducial + np.mean(freqs))**2*(toa_errs[:-1]**2 + toa_errs[1:]**2)) err_sigma_toas = np.sqrt((f_fiducial + np.mean(freqs))**2*(toa_errs[:-1]**2 + toa_errs[1:]**2)) #err_df = 2*np.diff(freqs)[0]*self.zs #err_dfd = 2*0.5*np.diff(fdots)[0]*self.zs**2 #err_sigma_toas = 5*np.sqrt((f_fiducial + np.mean(freqs))**2*(toa_errs[:-1]**2 + toa_errs[1:]**2)) self.kappas = 1/4/np.pi**2/(err_df**2 + err_dfd**2 + err_sigma_toas**2) @staticmethod def from_tempo2(parfile, timfile, freqs, fdots, noise_cov, glitches, efac=1, equad=0, min_toa_gap=0, mjd_range=None): psr = libstempo.tempopulsar(parfile=parfile, timfile=timfile) psr_toas = psr.toas() try: phase_jumps = psr.flagvals('phaseJ').astype(np.float) psr_toas += phase_jumps/86400 except Exception as e: print(e) psr_toas = sorted(psr_toas) psr_toaerrs = [x for _,x in sorted(zip(psr_toas, psr.toaerrs))] toas = np.array([psr_toas[0]]) toaerrs = np.array([np.sqrt(efac**2*psr_toaerrs[0]**2 + equad**2)]) for i in range(1, len(psr.toaerrs)): if psr_toas[i] - toas[-1] >= min_toa_gap: toas = np.append(toas, psr_toas[i]) toaerrs = np.append(toaerrs, np.sqrt(efac**2*psr_toaerrs[i]**2 + equad**2)) #toas = psr.toas() #toaerrs = np.sqrt(efac**2*psr.toaerrs**2 + equad**2) print(toaerrs) if mjd_range is not None: filt = [toa >= mjd_range[0] and toa <= mjd_range[1] for toa in toas] toas = list(itertools.compress(toas,filt)) toaerrs = np.array(list(itertools.compress(toaerrs,filt))) pepoch = psr['PEPOCH'].val f_fiducial = psr['F0'].val + psr['F1'].val * (min(toas) - pepoch)*86400 + 0.5*psr['F2'].val * ((min(toas) - pepoch)*86400)**2 fd_fiducial = psr['F1'].val + psr['F2'].val * (min(toas) - pepoch)*86400 return HMM(toas, toaerrs*1e-6, freqs, fdots, noise_cov, glitches, f_fiducial, fd_fiducial, psr['F2'].val) def fokker_planck_pdf(self, freqs, fdots, z): cov_matrix = self.noise_cov(z) cov_matrix[0][0] /= self.df**2 cov_matrix[1][0] /= self.df*self.dfd cov_matrix[0][1] /= self.df*self.dfd cov_matrix[1][1] /= self.dfd**2 freq_size = len(freqs) fdot_size = len(fdots) grid_f, grid_fd = np.meshgrid(np.linspace(-(freq_size-1)/2, (freq_size-1)/2, freq_size), np.linspace(-(fdot_size-1)/2, (fdot_size-1)/2, fdot_size)) pos = np.empty(grid_f.shape + (2,)) pos[:,:,0] = grid_f pos[:,:,1] = grid_fd try: rand_var = stats.multivariate_normal(mean=[0,0], cov=cov_matrix) pdf = rand_var.logpdf(pos) return pdf - special.logsumexp(pdf) except Exception as e: pdf = np.ones(grid_f.shape)*-np.inf pdf[int((fdot_size-1)/2)][int((freq_size-1)/2)] = 0 return pdf def gen_trans_matrix_block(self, z): freq_size = int(np.max([3, np.min([len(self.freqs), (2*3 + 1)*np.sqrt(self.noise_cov(z)[0][0])/self.df])])) if freq_size % 2 == 0: # Pad freq_size to be odd so that we have a good center freq_size += 1 fdot_size = len(self.fdots) fs = 
np.linspace(-self.df*(freq_size-1)/2, self.df*(freq_size-1)/2, freq_size) fds = np.linspace(-self.dfd*(fdot_size-1), self.dfd*(fdot_size-1), fdot_size*2-1) return self.fokker_planck_pdf(fs[:, None], fds[:, None], z) def step(self, prev_loglikes, z, glitch, direction='fwd'): new_loglikes = np.zeros(np.shape(prev_loglikes)) if not glitch: trans_matrix_block = np.flipud(self.gen_trans_matrix_block(z)) window_shape = (len(self.fdots), np.shape(trans_matrix_block)[1]) new_loglikes_unsummed = np.zeros((prev_loglikes.shape[0]*trans_matrix_block.shape[1], prev_loglikes.shape[0], prev_loglikes.shape[1])) for fdot_idx in range(np.shape(prev_loglikes)[0]): trans_matrix_block_fdot_lower = -fdot_idx + len(self.fdots) - 1 trans_matrix_block_fdot_upper = -fdot_idx + 2*len(self.fdots) - 1 trans_matrix_block_selected_fdots = trans_matrix_block[trans_matrix_block_fdot_lower:trans_matrix_block_fdot_upper, :] trans_matrix_block_replicated = np.repeat(trans_matrix_block_selected_fdots[:,:, np.newaxis], np.shape(prev_loglikes)[1], axis=2) if direction == 'fwd': freq_idx_offset = int(np.rint(self.fdots[fdot_idx]*z/self.df)) elif direction == 'bwd': freq_idx_offset = int(np.rint(self.fdots[fdot_idx]*z/self.df)) * -1 else: raise ValueError('Invalid value for direction') rolled_loglikes = np.roll(prev_loglikes, freq_idx_offset, axis=1) #if freq_idx_offset < 0: # rolled_loglikes[:, freq_idx_offset:] = -np.inf #elif freq_idx_offset > 0: # rolled_loglikes[:, :(freq_idx_offset)] = -np.inf padding_width = int((window_shape[1] - 1)/2) rolled_loglikes = np.pad(rolled_loglikes, ((0,0), (padding_width, padding_width)), constant_values=(-np.inf, -np.inf), mode='constant') rolled_loglikes_win = np.moveaxis(view_as_windows(rolled_loglikes, window_shape).squeeze(), [0,1,2], [2,0,1]) #summed = special.logsumexp(rolled_loglikes_win + trans_matrix_block_replicated, axis=(0,1)) summed = rolled_loglikes_win + trans_matrix_block_replicated new_loglikes_unsummed[:, fdot_idx, :] = summed.reshape((trans_matrix_block_selected_fdots.size, prev_loglikes.shape[1])) #print(summed.shape) #summed = special.logsumexp(summed, axis=0) #print(summed.shape) new_loglikes = np.logaddexp.reduce(new_loglikes_unsummed, axis=0) for fdot_idx in range(np.shape(prev_loglikes)[0]): if direction == 'fwd': freq_idx_offset = int(np.rint(self.fdots[fdot_idx]*z/self.df)) elif direction == 'bwd': freq_idx_offset = int(np.rint(self.fdots[fdot_idx]*z/self.df)) * -1 else: raise ValueError('Invalid value for direction') if freq_idx_offset < 0: new_loglikes[fdot_idx, freq_idx_offset:] = -np.inf elif freq_idx_offset > 0: new_loglikes[fdot_idx, :(freq_idx_offset)] = -np.inf else: all_glitch_loglikes = np.ones((prev_loglikes.shape[0], prev_loglikes.shape[1], prev_loglikes.shape[0]*prev_loglikes.shape[1] + 1))*-np.inf all_glitch_loglikes[:,:,0] = prev_loglikes for i in range(prev_loglikes.size): fdot, f = np.unravel_index(i, prev_loglikes.shape) if direction == 'fwd': freq_idx_offset = int(np.rint(self.fdots[fdot]*z/self.df)) freq_min = np.clip(f + freq_idx_offset, 1, prev_loglikes.shape[1]) all_glitch_loglikes[:, freq_min:, i] = prev_loglikes[fdot, f] - np.log((prev_loglikes.shape[1]-freq_min)*prev_loglikes.shape[0]) elif direction == 'bwd': freq_idx_offset = int(np.rint(self.fdots[fdot]*z/self.df)) * -1 freq_max = np.clip(f + freq_idx_offset, 1, prev_loglikes.shape[1]) all_glitch_loglikes[:, :freq_max, i] = prev_loglikes[fdot, f] #- np.log((freq_max)*prev_loglikes.shape[0]) else: raise ValueError('Invalid value for direction') new_loglikes = 
special.logsumexp(all_glitch_loglikes, axis=2) #for fdot_idx in range(prev_loglikes.shape[0]): # for f_idx in range(1, prev_loglikes.shape[1]): # # for fdot_prev_idx in range(prev_loglikes.shape[0]): # if direction == 'fwd': # freq_idx_offset = int(np.rint(self.fdots[fdot_idx]*z/self.df)) # elif direction == 'bwd': # freq_idx_offset = int(np.rint(self.fdots[fdot_idx]*z/self.df)) * -1 # else: # raise ValueError('Invalid value for direction') # # freq_max = np.clip(f_idx + freq_idx_offset, 1, prev_loglikes.shape[1]) # new_loglikes[fdot_idx][f_idx] = special.logsumexp(prev_loglikes[:, :f_idx].flatten()) #- np.log((prev_loglikes.shape[0] + 1)*freq_max) return new_loglikes def obs_loglikes(self, z, freqs, fdots, kappa, running_f, running_fd): obs_loglikes = np.zeros((len(self.fdots), len(self.freqs))) fs, fds = np.meshgrid(freqs, fdots) phase_fiducial = 2*np.pi*(z*running_f - 0.5*running_fd*z**2) phases = 2*np.pi*(z*fs - 0.5*z**2*fds) + phase_fiducial log_besseli = np.log(special.i0(kappa.astype(np.float64))) if np.isinf(log_besseli): # For large argument, I_0(x) ~ exp(x)/sqrt(2pi*x) log_besseli = kappa - 0.5*np.log(2*np.pi*kappa) obs_loglikes = kappa*np.cos(phases) - log_besseli - np.log(2*np.pi) return obs_loglikes def gen_all_obs_loglikes(self): self.all_obs_loglikes = np.zeros((len(self.zs), len(self.fdots), len(self.freqs))) running_f = self.f_fiducial running_fd = self.fd_fiducial for n in range(len(self.zs)): running_f += self.fd_fiducial*self.zs[n] self.all_obs_loglikes[n, :, :] = self.obs_loglikes(self.zs[n], self.freqs, self.fdots, self.kappas[n], running_f, running_fd) def forward(self): self.forward_loglikes = np.zeros((len(self.zs)+1, len(self.fdots), len(self.freqs))) self.evidence = np.zeros(len(self.zs)) running_f = self.f_fiducial running_fd = self.fd_fiducial for n in range(1, len(self.zs)): self.forward_loglikes[n, :, :] = self.step(self.forward_loglikes[n-1,:,:], self.zs[n], n in self.glitches, direction='fwd') self.forward_loglikes[n,:,:] += self.all_obs_loglikes[n, :, :] self.evidence[n] = special.logsumexp(self.forward_loglikes[n,:,:].flatten()) def backward(self): self.backward_loglikes = np.zeros((len(self.zs)+1, len(self.fdots), len(self.freqs))) for n in range(len(self.zs)-1, 0, -1): new_loglikes = self.all_obs_loglikes[n,:,:] + self.backward_loglikes[n+1,:,:] self.backward_loglikes[n, :, :] = self.step(new_loglikes, self.zs[n], n in self.glitches, direction='bwd') def fw_bw(self): self.gen_all_obs_loglikes() self.forward() self.backward() self.combined_loglikes = self.forward_loglikes[:,:,:] + self.backward_loglikes[:,:,:] for n in range(len(self.zs)): self.combined_loglikes[n,:,:] -= special.logsumexp(self.combined_loglikes[n,:,:].flatten()) self.gen_path() g = np.zeros((len(self.zs), len(self.freqs))) for n in range(len(self.zs)): g[n, :] = self.forward_loglikes[n, 0, :] np.savetxt('g.dat', g) def gen_path(self): path = [] for n in range(1, len(self.zs)): (fd, f) = np.unravel_index(self.combined_loglikes[n,:,:].argmax(), self.combined_loglikes[n,:,:].shape) path.append((fd,f)) self.path = path def get_residuals(self): running_f = self.f_fiducial running_fd = self.fd_fiducial residuals = [] for n in range(0,len(self.zs)): running_f += running_fd*self.zs[n] phase_fiducial = running_f*self.zs[n] - 0.5*running_fd*self.zs[n]**2 phase = self.freqs[self.path[n][1]]*self.zs[n] - 0.5*self.fdots[self.path[n][0]]*self.zs[n]**2 + phase_fiducial residuals.append(phase - np.round(phase)) return residuals
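The forward(), backward() and fw_bw() methods above implement a log-domain HMM smoother on an (f, fdot) grid. For orientation, a generic log-space forward-backward pass over a small discrete HMM looks like the sketch below; the dense transition and emission matrices are placeholders, not the banded Fokker-Planck kernels and von Mises phase likelihoods used in the class:

import numpy as np
from scipy.special import logsumexp

def forward_backward(log_pi, log_A, log_B):
    """log_pi: (S,) initial log-probs; log_A: (S, S) transition log-probs
    (row = from, column = to); log_B: (T, S) per-step emission log-likelihoods.
    Returns the (T, S) posterior log-probabilities of each state at each step."""
    T, S = log_B.shape
    fwd = np.zeros((T, S))
    bwd = np.zeros((T, S))
    fwd[0] = log_pi + log_B[0]
    for t in range(1, T):
        # sum over previous states i of fwd[t-1, i] + log_A[i, j]
        fwd[t] = log_B[t] + logsumexp(fwd[t - 1][:, None] + log_A, axis=0)
    for t in range(T - 2, -1, -1):
        # sum over next states j of log_A[i, j] + log_B[t+1, j] + bwd[t+1, j]
        bwd[t] = logsumexp(log_A + log_B[t + 1] + bwd[t + 1], axis=1)
    post = fwd + bwd
    return post - logsumexp(post, axis=1, keepdims=True)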
/* $Id$ */ /********************************************************************* * Software License Agreement (BSD License) * * Copyright (c) 2010 Jack O'Quin, 2013 Séverin Lemaignan * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following * disclaimer in the documentation and/or other materials provided * with the distribution. * * Neither the name of the author nor other contributors may be * used to endorse or promote products derived from this software * without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN * ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE * POSSIBILITY OF SUCH DAMAGE. *********************************************************************/ #include <signal.h> #include <boost/thread.hpp> #include <ros/ros.h> #include <pluginlib/class_list_macros.h> #include <nodelet/nodelet.h> #include "naoqi_camera.h" /** @file @brief ROS driver nodelet for Aldebaran's NAO cameras. */ /** Nao camera driver nodelet implementation. */ class NaoqiCameraNodelet: public nodelet::Nodelet { public: NaoqiCameraNodelet(): running_(false) {} ~NaoqiCameraNodelet() { if (running_) { NODELET_INFO("shutting down driver thread"); running_ = false; deviceThread_->join(); NODELET_INFO("driver thread stopped"); } dvr_->shutdown(); } private: virtual void onInit(); virtual void devicePoll(); volatile bool running_; ///< device is running boost::shared_ptr<naoqicamera_driver::NaoqiCameraDriver> dvr_; boost::shared_ptr<boost::thread> deviceThread_; }; /** Nodelet initialization. * * @note MUST return immediately. */ void NaoqiCameraNodelet::onInit() { ros::NodeHandle priv_nh(getPrivateNodeHandle()); ros::NodeHandle node(getNodeHandle()); ros::NodeHandle camera_nh(node, "camera"); //TODO: allow for passing host/port of broker! int argc = 0; char* argv = ""; dvr_.reset(new naoqicamera_driver::NaoqiCameraDriver(argc, &argv, priv_nh, camera_nh)); dvr_->setup(); // spawn device thread running_ = true; deviceThread_ = boost::shared_ptr< boost::thread > (new boost::thread(boost::bind(&NaoqiCameraNodelet::devicePoll, this))); } /** Nodelet device poll thread main function. */ void NaoqiCameraNodelet::devicePoll() { while (running_) { dvr_->poll(); } } // Register this plugin with pluginlib. Names must match nodelet_velodyne.xml. // // parameters are: package, class name, class type, base class type PLUGINLIB_DECLARE_CLASS(naoqicamera, driver, NaoqiCameraNodelet, nodelet::Nodelet);
%% intesselation % Below is a demonstration of the features of the |intesselation| function %% clear; close all; clc; %% Syntax % |L=intesselation(X,TES,XI);| %% Description % UNDOCUMENTED %% Examples % %% % % <<gibbVerySmall.gif>> % % _*GIBBON*_ % <www.gibboncode.org> % % _Kevin Mattheus Moerman_, <[email protected]> %% % _*GIBBON footer text*_ % % License: <https://github.com/gibbonCode/GIBBON/blob/master/LICENSE> % % GIBBON: The Geometry and Image-based Bioengineering add-On. A toolbox for % image segmentation, image-based modeling, meshing, and finite element % analysis. % % Copyright (C) 2006-2022 Kevin Mattheus Moerman and the GIBBON contributors % % This program is free software: you can redistribute it and/or modify % it under the terms of the GNU General Public License as published by % the Free Software Foundation, either version 3 of the License, or % (at your option) any later version. % % This program is distributed in the hope that it will be useful, % but WITHOUT ANY WARRANTY; without even the implied warranty of % MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the % GNU General Public License for more details. % % You should have received a copy of the GNU General Public License % along with this program. If not, see <http://www.gnu.org/licenses/>.
Words such as people, police, and cattle are plural in themselves; one cannot say a people, a police, or a cattle, but one can say a person, a policeman, or a head of cattle. (2) Negative sentences: subject + does not / doesn't + base form of the verb + the rest of the sentence. For example: Henry doesn't have any brothers. (3) Yes/no questions: Does + subject + base form of the verb + the rest of the sentence? Affirmative answer: Yes, subject + does. Negative answer: No, subject + doesn't. For example: ―Does he work in the hospital? ―Yes, he does. / No, he doesn't. For example: When does Li Ming do his homework every day?
s1 = "12.125"; v1 = 12.125 s2 = "-0.125"; v2 = -0.125 s3 = "5" ; v3 = 5 @testset "parse $T" for T in (Double16, Double32, Double64) @test T(s1) === T(v1) @test T(s2) === T(v2) @test T(s3) === T(v3) end @testset "T_str" begin @test df64"12.125" === Double64(s1) @test df32"-0.125" === Double32(s2) @test df16"5" === Double16(s3) end
import torch, os, cv2 from model.model import parsingNet from utils.common import merge_config from utils.dist_utils import dist_print import scipy.special, tqdm import numpy as np import torchvision.transforms as transforms from data.dataset import LaneTestDataset from data.constant import culane_row_anchor, tusimple_row_anchor if __name__ == "__main__": torch.backends.cudnn.benchmark = True args, cfg = merge_config() dist_print('start testing...') assert cfg.backbone in ['18','34','50','101','152','50next','101next','50wide','101wide'] if cfg.dataset == 'CULane': cls_num_per_lane = 18 elif cfg.dataset == 'Tusimple': cls_num_per_lane = 56 else: raise NotImplementedError net = parsingNet(pretrained = False, backbone=cfg.backbone,cls_dim = (cfg.griding_num+1,cls_num_per_lane,4), use_aux=False).cuda() # we don't need auxiliary segmentation in testing state_dict = torch.load(cfg.test_model, map_location='cpu')['model'] compatible_state_dict = {} for k, v in state_dict.items(): if 'module.' in k: compatible_state_dict[k[7:]] = v else: compatible_state_dict[k] = v net.load_state_dict(compatible_state_dict, strict=False) net.eval() img_transforms = transforms.Compose([ transforms.Resize((288, 800)), transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)), ]) if cfg.dataset == 'CULane': splits = ['test0_normal.txt', 'test1_crowd.txt', 'test2_hlight.txt', 'test3_shadow.txt', 'test4_noline.txt', 'test5_arrow.txt', 'test6_curve.txt', 'test7_cross.txt', 'test8_night.txt'] datasets = [LaneTestDataset(cfg.data_root,os.path.join(cfg.data_root, 'list/test_split/'+split),img_transform = img_transforms) for split in splits] img_w, img_h = 1640, 590 row_anchor = culane_row_anchor elif cfg.dataset == 'Tusimple': splits = ['test.txt'] datasets = [LaneTestDataset(cfg.data_root,os.path.join(cfg.data_root, split),img_transform = img_transforms) for split in splits] img_w, img_h = 1280, 720 row_anchor = tusimple_row_anchor else: raise NotImplementedError for split, dataset in zip(splits, datasets): loader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle = False, num_workers=1) fourcc = cv2.VideoWriter_fourcc(*'MJPG') print(split[:-3]+'avi') vout = cv2.VideoWriter(split[:-3]+'avi', fourcc , 30.0, (img_w, img_h)) for i, data in enumerate(tqdm.tqdm(loader)): imgs, names = data imgs = imgs.cuda() with torch.no_grad(): out = net(imgs) col_sample = np.linspace(0, 800 - 1, cfg.griding_num) col_sample_w = col_sample[1] - col_sample[0] out_j = out[0].data.cpu().numpy() out_j = out_j[:, ::-1, :] prob = scipy.special.softmax(out_j[:-1, :, :], axis=0) ### # a = [0,1,2,3,4,5,6,7,8,9] # b = a[i:j] copies a[i] through a[j-1] into a new list object # b = a[1:3] gives b = [1,2] # if i is omitted it defaults to 0, so a[:3] is equivalent to a[0:3] # if j is omitted it defaults to len(a), so a[1:] is equivalent to a[1:10] # if both i and j are omitted, a[:] is simply a full copy of a # b = a[i:j:s] works the same for i and j, but s is the step, which defaults to 1, # so a[i:j:1] is equivalent to a[i:j] # when s<0, an omitted i defaults to -1
and an omitted j defaults to -len(a)-1, # so a[::-1] is equivalent to a[-1:-len(a)-1:-1], i.e. a copy running from the last element back to the first, which is why you see a reversed sequence. ### idx = np.arange(cfg.griding_num) + 1 idx = idx.reshape(-1, 1, 1) loc = np.sum(prob * idx, axis=0) out_j = np.argmax(out_j, axis=0) # index of the maximum element of out_j along axis 0 loc[out_j == cfg.griding_num] = 0 out_j = loc # import pdb; pdb.set_trace() vis = cv2.imread(os.path.join(cfg.data_root,names[0])) for i in range(out_j.shape[1]): if np.sum(out_j[:, i] != 0) > 2: for k in range(out_j.shape[0]): if out_j[k, i] > 0: ppp = (int(out_j[k, i] * col_sample_w * img_w / 800) - 1, int(img_h * (row_anchor[cls_num_per_lane-1-k]/288)) - 1 ) cv2.circle(vis,ppp,5,(0,255,0),-1) vout.write(vis) vout.release()
%\title{Example letter using the newlfm LaTeX package} % % See http://texblog.org/2013/11/11/latexs-alternative-letter-class-newlfm/ % and http://www.ctan.org/tex-archive/macros/latex/contrib/newlfm % for more information. % \documentclass[12pt]{article} \usepackage{blindtext, xfrac} \newcommand{\pointRaised}[2]{\medskip \hrule \noindent \textsl{{\fontseries{b} #1}: #2}} \newcommand{\reply}{\noindent \textbf{Reply}:\ } \begin{document} % Enclosed is our manuscript, titled `Tissue Enrichment Analysis for \emph{C.~elegans} genomics' to be considered for publication in BMC Bioinformatics. We have previously received two reviews and an editorial comment, which we address below. We have attached our detailed responses to the comments and reviews to our paper, titled `Tissue Enrichment Analysis for \emph{C.~elegans} Genomics'. We hope our answers are satisfactory. \section{Editor's Comment} \pointRaised{1}{My own comment is that you may want to do a second check in the literature for previous ontology-trimming methods. For example, Garrido et al. Different approaches to build brief ontologies (Volume 348 of the series Communications in Computer and Information Science pp 232-246, Springer, 2013) presents one such methodology. } \reply{We thank Dr. Setubal for pointing us towards the book chapter by Garrido \emph{et al}. We have included the citation in the text for completeness. Originally we had not included it because the book chapter in question, like much of the ontology pruning literature, explains methods for generation of a complete ontological structure that is a logical subset of a greater ontology. Our approach does not generate an ontological structure, but rather identifies terms in the ontology that are balanced with regards to specificity (tree depth) and evidence (gene annotations). Once the terms are selected, these terms are not rebuilt into a formal `brief ontology'. However, we recognize the similarity of our work with the Computer Science literature and have included this citation as suggested as a consequence. } \section{Review 1} \pointRaised{1}{Authors haven’t discussed how often the background database for the enrichment analysis will be updated. Every time, when new GO terms are added by the GO consortium does the trimming algorithm takes into account these terms and refine the method? } \reply{The background dataset for the enrichment analysis will be updated with every version of WormBase every two months. As part of a new WormBase release, the tissue ontology is annotated with new gene expression data, and our trimming pipeline is used to generate a new background dataset. Due to the tight integration of this tool with WormBase, we can guarantee that our database will continuously remain up to date, which is a drawback other popular tools very often suffer from. We have updated the text to reflect this by adding the following text at the end of the subsection \textbf{Filtering greatly reduces the number of nodes used for analysis}: ``Our trimming pipeline is applied as part of each new WormBase release. This ensures that the ontology database we are using remains up-to-date with regards to both addition or removal of specific terms as well as with regard to gene expression annotations.'' } \pointRaised{2}{I am skeptical about the usage of this tool as it is only confined and tested with \emph{C.~elegans} genome. 
Such tools should be always part of the big enrichment analysis tool in the form of plugins rather than as standalone.\\ For example, the recently published `FunRich', a functional enrichment analysis tools has an option of using custom database. Using this custom database, any species genome can be used as background database for functional enrichment analysis which also serves the purpose of identifying GO terms enriched in the input gene lists. } \reply{While we understand the concerns regarding the usage of standalone software, we would like to point out that the cell and tissue ontology referred to in the text has been developed independently of the GO. The ontology was developed and is maintained, annotated and extended independently by WormBase. Moreover, due to the species-specific nature of cell and tissue ontologies, at this moment we cannot allow users to input any species genome as a background database. To do this, we would require cell and tissue ontologies for every species in question (available only for some species); ortholog mapping between species (available) and ontology mapping between species (non-existent). While we do intend to expand TEA to major model systems in the future, we cannot follow the approach that PANTHER takes of identifying orthologs in a background set and using pre-existing GO annotations to identify enrichment. Doing this would yield uninterpretable results, for example if a researcher input a Drosophila background and enrichment set, he/she would at best receive a list of C. elegans tissues. To add tissue enrichment analysis for other species in the future, more cell and tissue ontologies must be developed. Other tissue ontologies do exist at this point in time, such as a Zebrafish anatomy ontology. } \pointRaised{3}{Majority of the figures is slightly distorted and needs to be replaced with high resolution clear images.} \reply{We have remade all of the figures using vector-formatting and saved each figure as a PDF for maximum resolution. We thank the reviewer for pointing this out.} \pointRaised{4}{Citing ‘FunRich’ tool mainly used for the functional enrichment analysis is worth mentioning in the ‘Background’ section of the manuscript. } \reply{We have cited `FunRich' and thank reviewer 1 for making us aware of this tool.} \pointRaised{5}{In the web GUI (http://www.wormbase.org/tools/enrichment/tea/tea.cgi), please include standard list of C.elegans gene names by default in the search box.} \reply{We have added example genes in the search box. } \pointRaised{6}{In the results table, include Hypergeometric p-values in addition to q-values.} \reply{We have added p-values to the results table. However, we feel that these values add no relevant information for the user, since the table only contains statistically significant results as assessed by a q-value cutoff of 0.1.} \section{Review 2} \pointRaised{1}{Introduction needs to be re-organized and partially re-written because it feels a bit strange that details about pruning and trimming are discussed in the last sentences. Instead, the authors need to outline the ideas behind the software and its implementation. A brief outline of what was achieved with the development of this software should also be included.} \reply{We have re-written the introduction to reflect this. At the end of our section titled \textbf{Background}, we have included the following paragraph: ``We have developed a tool that tests a user-provided list of genes for term enrichment using a nematode-specific tissue ontology. 
This ontology, which is not a module of Gene Ontology, is verbose. We trim our ontology using an algorithmic approach, outlined below, that reduces multiple hypothesis testing issues by limiting testing to terms that are well-annotated. The results are provided to the user in a GUI that includes a table of results and an automatically generated bar-chart. This software addresses a previously unmet need in the \emph{C.~elegans} community for a tool that reliably and specifically links gene expression with changes in specific cells, organs or tissues in the worm.'' } \pointRaised{2}{The sources of data need to be better described. This may be obvious for the worm researchers but may be harder to understand for those outside.} \reply{We would like to thank Reviewer 2 for pointing this out to us, as it had not ocurred to us that non-worm researchers may find the sources of data confusing. We have added the following paragraph explaining the sources of data more extensively in our section \textbf{Reducing term redundancy through a similarity metric}: ``For our tool, we employ a previously generated cell and tissue ontology for \emph{C.~elegans}[6], which is maintained and curated by WormBase.'' } \pointRaised{3}{The significance and the way to determine the parameter S in the `avg' vs `any' is pretty hard to understand. More illustrations and explanations need to be provided} \reply{We have expanded our verbal explanation of the similarity criterions to make it more approachable to readers. We did not include additional figures because we did not know how to graphically show the difference between a threshold based on an average score and one based on a supremum score. We added the following text to the subsection \textbf{Reducing term redundancy through a similarity metric}: ``Intuitively, a set of sisters can be considered very similar if they share most gene annotations. Within a given set of sisters, we can calculate a similarity score for a single node by counting the number of unique annotations it contains and dividing by the total number of unique annotations in the sister set. Having assigned to each sister a similarity score, we can identify the \textbf{average} similarity score for this set of sisters, and if this average value exceeds a threshold, all of the sisters are removed from the ontology. An alternative method is check whether \textbf{any} of the scores exceeds a predetermined threshold, and if so remove this sister set from the ontology. We referred to these two scoring criteria as `\textbf{avg}' and `\textbf{any}' respectively.'' } \pointRaised{4}{Hypergeometric tests. What was the benefit of setting alpha to 0.1 vs 0.05 or another value? Did you find this using some of the established datasets?} \reply{An alpha of 0.1 is standard for many applications that employ FDR. We didn’t fine-tune this parameter cutoff. We have added a sentence to explicitly point this out in the text, and we have also made it clear that this parameter can be tuned by users in the command-line, but not the web, version.} \end{document}
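The 'avg'/'any' trimming criterion quoted in the reply to point 3 can be stated in a few lines of code. The sketch below is hypothetical (annotation sets are plain Python sets of gene IDs; the function name and signature are invented for illustration) and is not the WormBase/TEA implementation:

def prune_sisters(sister_annotations, threshold, mode="avg"):
    """sister_annotations: dict mapping each sister term to its set of gene annotations.
    Returns True if the whole sister set should be removed from the ontology."""
    all_genes = set().union(*sister_annotations.values())
    if not all_genes:
        return False
    # per-term similarity: fraction of the sister set's unique annotations it carries
    scores = [len(genes) / len(all_genes) for genes in sister_annotations.values()]
    if mode == "avg":
        return sum(scores) / len(scores) >= threshold
    elif mode == "any":
        return max(scores) >= threshold
    raise ValueError("mode must be 'avg' or 'any'")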
library("arules") library(Matrix) library(data.table) library(Matrix) library(e1071) library(rPython) library(dplyr) library(tidyr) library(glue) require(ggplot2) require(reshape2) library(stringi) library(data.table) library(RNeo4j) library('RCurl') library('RJSONIO') library('plyr') library(jsonlite) library(microbenchmark) library(tictoc) fcboscript='/home/terminator1/Documents/adapt/FCA_JSON/explore/fcbo.py' fcboscript2='/home/terminator1/Documents/adapt/FCA_JSON/fcbo.py' #'/home/terminator2/Documents/Adapt_Project/Repository/experimental_fca_neo4j_branche/adapt/explore/fcbo.py' viewer = getOption("viewer") if(!is.null(viewer)){ viewer("http://localhost:8080") }else { utils::browseURL("http://localhost:8080") } neo4j = startGraph("http://localhost:8080/") #opts = list(timeout=2)) http://localhost:8080/query/generic/g.V().limit(10) browse(neo4j) query="g.V().hasLabel('AdmNetFlowObject').as('nf').values('remoteAddress').as('ip').select('nf').inE('predicateObject','predicateObject2').outV().values('eventType').as('type').select('ip','type')" #g.V().has('eventType').as('y').values('eventType').as('type').select('y').out().has('subjectType','SUBJECT_PROCESS').as('x').values('uuid').as('id').select('x').out().has('path').values('path').as('filepath').select('id','type','filepath').dedup()"#g.V().limit(10)" neo4JURL="http://localhost:8080/query/json/" query_result <- fromJSON(paste0(neo4JURL,query,sep="")) #itemJson=unlist(query_result[4]) #repo_df <- lapply(query_result, function(x) { # itemJson=unlist(x) # df <- data_frame(id = itemJson[1], # type = itemJson[2], # filepath = itemJson[3]) #}) %>% bind_rows() #query_resultdf<- lapply(query_result, function(x) { # x[sapply(x, is.null)] <- NA # unlist(x) #}) #query_resultdf=do.call("rbind", query_resultdf) #python.load(fcboscript) Listviews=c("ProcessEventExec", "ProcessNetFlow2", "ProcessEventExec", "ProcessByIPPort", "FileEventType", "FileEventExec", "FileEvent") myWorkingDirectory=getwd() AttributesDictionnary=paste0(myWorkingDirectory,"/contexts/AttributesDictionnary.txt",sep="") ObjectssDictionnary=paste0(myWorkingDirectory,"/contexts/ObjectsDictionnary.txt",sep="") #JsonSpecFile="/home/terminator2/Documents/Adapt_Project/Repository/experimental_fca/explore/csv/csvspec.json" JsonSpecFile2='/home/terminator1/Documents/adapt/FCA_JSON/contextSpecFiles/neo4jspec_FileEvent.json' ContextFile=paste0(myWorkingDirectory,"/contexts/Context.basenum",sep="") ShiftedContextFile=paste0(myWorkingDirectory,"/contexts/ShiftedContext.basenum",sep="") ContextFileRCF=paste0(myWorkingDirectory,"/contexts/Context.rcf",sep="") ShiftedContextFileRCF=paste0(myWorkingDirectory,"/contexts/ShiftedContext.rcf",sep="") pycommand = paste0("python3 ",fcboscript2,sep=" ") pycommand = paste0(pycommand,JsonSpecFile2,sep=" ") pycommand=paste0(pycommand,ObjectssDictionnary,sep=" ") pycommand=paste0(pycommand,AttributesDictionnary,sep=" ") pycommand=paste0(pycommand,ContextFile," ",ContextFileRCF,sep=" ") fcboresult=try(system(pycommand, wait=TRUE)) # shiftContext=paste0("./tool06_shiftContext.pl ",ContextFile, " 1 > ",ShiftedContextFile) #shiftContextRCF=paste0("./tool06_shiftContext.pl ",ShiftedContextFileRCF, " 1 > ",ShiftedContextFileRCF) try(system(shiftContext)) #rules <- apriori(sprsM, parameter = list(support = .001)) #inspect(head(sort(rules, by = "lift"), 3)) attributes <- read.table("~/Documents/R_Projects/AnomalyRulesMining/contexts/AttributesDictionnary.txt", quote="\"", comment.char="") objects<- 
read.table("~/Documents/R_Projects/AnomalyRulesMining/contexts/ObjectsDictionnary.txt", quote="\"", comment.char="") ArgItemsets=c(" -alg:apriori",#This is the absolute basic levelwise algorithm for finding FIs. It is efficient for sparse datasets only. " -alg:aclose",#A levelwise alg. to find FGs and their closures. Not too efficient since it needs to do lots of intersections for computing the corresponding FCIs. " -alg:aprioriclose",#An extension of Apriori that finds FIs and marks FCIs. The FCI-identification gives no overhead. " -alg:aprioriinverse",#Finds perfectly rare itemsets. An itemset is a PRI if all its subsets are rare (with the exception of the empty set). " -alg:apriorirare",#Finds minimal rare itemsets. An itemset is an mRI if it is rare and all its subsets are frequent. Here you must specify an extra option: " -alg:arima",#Arima calls Apriori-Rare to extract mRIs. Then, from this set it restores the family of non-zero rare itemsets. This process is memory extensive. " -alg:charm",#A very efficient depth-first alg. to extract FCIs. More efficient on dense datasets. " -alg:close",#A levelwise alg. to find FCIs. Note that it was the very first alg. that were implemented in the Coron System. " -alg:dcharm",#A diffset implementation of Charm. A very efficient depth-first alg. to extract FCIs. More efficient on dense datasets. It is like Charm but instead of tidsets it uses diffsets. As a result, it performs better and uses less memory. Its output is the same as Charm's. " -alg:eclat",#A very efficient depth-first alg. to extract FIs. " -alg:zart" #An extension of Pascal to produce FCI/FG pairs. Zart identifies FCIs and associates the FGs to their closures. ) #add an attribute to specify the database source Global_Result_Dataframe =data.frame("","","","" , "") names(Global_Result_Dataframe)=c("id", "type",#"I, ASS "algorithm", "minsup", "data") cpt=1 for (j in 1:length(ArgItemsets)){ MinSup=20 ChoiceAlg=ArgItemsets[j] Results_Frequent_Itemsets=c() for (i in 1:5){ MinSup=20*i result="" CoronOutPut=list() df.CoronOutPut2=data.frame() df.CoronOutPut=data.frame() opt=" " if(j==5)opt=" -nonzero " cmd=paste0(getwd(),"/coron-0.8/core01_coron.sh ", #ShiftedContextFile, ContextFileRCF, " ", MinSup,"% -names ", ChoiceAlg,opt,sep="") #>thisresults2.txt cat("\n ==================================================Itemsets Mining \n") cat("MinSup: ",MinSup,"\n") cat("Alg: ",ChoiceAlg,"\n") result=try(system(cmd, intern = TRUE, wait = TRUE)) #x=strsplit(as.character(result[length(result)]),":") back=0 if(j==11)back=10 NbFI=as.integer(unlist(strsplit(as.character(result[length(result)-back]),":"))[2]) cat(result,"\n") Results_Frequent_Itemsets=c(Results_Frequent_Itemsets,NbFI) CoronOutPut=as.list(result[11:length(result)-1]) df.CoronOutPut=as.data.frame(do.call(rbind, CoronOutPut)) df.CoronOutPut2=setNames(do.call(rbind.data.frame, CoronOutPut), "FCIs") Split <- strsplit(as.character(df.CoronOutPut2$FCIs), " (", fixed = TRUE) Intent <- sapply(Split, "[", 1) Intent = as.data.frame(do.call(rbind, as.list(Intent))) #Intent[ Intent == NA ] <- "NULL" Extent <- as.data.frame(do.call(rbind, as.list(sapply(Split, "[", 2)))) save(result,file=paste0("./Rdata/Results_Itemsets_",trimws(ChoiceAlg),"_Sup_",MinSup,".RData",sep="") ) save(df.CoronOutPut2,file=paste0("./Rdata/Results_Itemsets_DataFramce_",trimws(ChoiceAlg),"_Sup_",MinSup,".RData",sep="") ) save(Intent,file=paste0("./Rdata/Results_Itemsets_Antecedent_",trimws(ChoiceAlg),"_Sup_",MinSup,".RData",sep="") ) 
save(Extent,file=paste0("./Rdata/Results_Itemsets_Result_",trimws(ChoiceAlg),"_Sup_",MinSup,".RData",sep="") ) save(result,file=paste0("./Rdata/Results_Itemsets_",trimws(ChoiceAlg),"_Sup_",MinSup,".RData",sep="") ) rm(result,df.CoronOutPut,df.CoronOutPut2,Split,Intent,Extent) #Global_Result_Dataframe[nrow(Global_Result_Dataframe) + 1,] = list("ItemSetsMining",trimws(ChoiceAlg),MinSup,NbFI) #Create an empty data frame # de <- list("ItemSetsMining",trimws(ChoiceAlg),MinSup,NbFI) # Global_Result_Dataframe = rbind(Global_Result_Dataframe,de, stringsAsFactors=FALSE) de<-data.frame(as.character(cpt),"ItemSetsMining",trimws(ChoiceAlg),as.character(MinSup),as.character(NbFI)) names(de)<-names(Global_Result_Dataframe ) Global_Result_Dataframe <- rbind(Global_Result_Dataframe , de) rm(de) cpt=cpt+1 cat("==================================================\n") } } save(Results_Frequent_Itemsets,file=paste0("./Rdata/Results_Size_Itemsets_.RData",sep="") ) save(Global_Result_Dataframe,file=paste0("./Rdata/Global_Result_Dataframe.RData",sep="") ) write.csv(Global_Result_Dataframe,file=paste0("./Rdata/Global_Result_Dataframe.csv",sep="") ) ########################################### AssRules AssRulescmd=paste0(getwd(),"/coron-0.8/core02_assrulex.sh ",ContextFileRCF, " ", MinSup,"% ", MinConf,"% -names -alg:zart -rule:all -full -examples",sep="") #>thisresults2.txt AssRulesresult=try(system(AssRulescmd, intern = TRUE, wait = TRUE)) capture.output(AssRulesresult,file=paste0("./contexts/AssociationRulesWithFullDetails_Conf_",MinConf,"_Sup_",MinSup,".txt",sep="")) CoronOutPut=as.list(AssRulesresult) df.AssocRulesOutPut=as.data.frame(do.call(rbind, CoronOutPut)) save(df.AssocRulesOutPut,file="./contexts/AssociationRulesWithFullDetails.RData") #cat(AssRulesresult,"\n") ####################################################################################display time cpu options(max.print=10000000) listresult=list() Assruleslistresulttext=list() RareAssruleslistresulttext=list() TimeAssruleslistresulttext=list() x <- matrix(data=NA,byrow=FALSE,ncol = 6, nrow=6) #for (i in 1:6){for(j in 1:6){cat('\n i \n',i);cat('\n j \n',j); sx[i,j]=rnorm(1, mu, sigma)}} y <- matrix(data=NA,byrow=TRUE,ncol = 6, nrow=6) z <- matrix(data=NA,byrow=TRUE,ncol = 6, nrow=6) Conf=10 Sup=10 DisplayFull=FALSE for(i in 1:6){ MinSup=Sup*i df.AssocRulesOutPut=data.frame() df.RareAssocRulesOutPut=data.frame() for(j in 1:6){ MinConf=Conf*j cat('\n ##################sup \n',MinSup) cat('\n ###############conf \n',MinConf) #listresult=c(listresult,paste("Conf_",MinConf,"_Sup_",MinSup,sep="")) tic("FreqAsRules" ) # SoftAssRulescmd=paste0(getwd(),"/coron-0.8/core02_assrulex.sh ",ContextFileRCF, " ", MinSup,"% ", MinConf,"% -names -alg:zart -rule:all",sep="") #>thisresults2.txt SoftAssRulesresult=try(system(SoftAssRulescmd, intern = TRUE, wait = TRUE)) t=toc() if(DisplayFull)capture.output(SoftAssRulesresult,file=paste0("./contexts/AssociationRulesOnly_Conf_",MinConf,"_Sup_",MinSup,".txt",sep="")) CoronOutPut=as.list(SoftAssRulesresult) df.AssocRulesOutPut=as.data.frame(do.call(rbind, CoronOutPut)) NbRules= unlist(strsplit(as.character(df.AssocRulesOutPut$V1[length(df.AssocRulesOutPut$V1)]),":"))[2] cat('nb Association Rules',NbRules) #Assruleslistresulttext=c(Assruleslistresulttext,NbRules) x[i,j]=as.integer(gsub(",", '', NbRules, fixed = T)) cputime=t$toc-t$tic #TimeAssruleslistresulttext=c(TimeAssruleslistresulttext,as.character(cputime)) y[i,j]=as.double(cputime) if(DisplayFull){ AssRulescmd=paste0(getwd(),"/coron-0.8/core02_assrulex.sh 
",ContextFileRCF, " ", MinSup,"% ", MinConf,"% -names -alg:zart -rule:all -full -examples",sep="") #>thisresults2.txt AssRulesresult=try(system(AssRulescmd, intern = TRUE, wait = TRUE)) capture.output(AssRulesresult,file=paste0("./contexts/AssociationRulesWithFullDetails_Conf_",MinConf,"_Sup_",MinSup,".txt",sep="")) } #tic("RarAsRules" ) RareAssRulescmd=paste0(getwd(),"/coron-0.8/core02_assrulex.sh ",ContextFileRCF, " ", MinSup,"% ", MinConf,"% -names -alg:BtB -rule:rare ",sep="") RareAssRulesresult=try(system(RareAssRulescmd, intern = TRUE, wait = TRUE)) #toc() if(DisplayFull)capture.output(RareAssRulesresult,file=paste0("./contexts/RareAssociationRules_Conf_",MinConf,"_Sup_",MinSup,".txt",sep="")) CoronOutPut=as.list(RareAssRulesresult) df.RareAssocRulesOutPut=as.data.frame(do.call(rbind, CoronOutPut)) NbRulesrare= unlist(strsplit(as.character(df.RareAssocRulesOutPut$V1[length(df.RareAssocRulesOutPut$V1)]),":"))[2] cat('nb Association Rules',NbRulesrare) #RareAssruleslistresulttext=c(RareAssruleslistresulttext,NbRulesrare) z[i,j]=as.integer(gsub(",", '', NbRulesrare, fixed = T)) } } #Global_AssRules_Result_Dataframe =data.frame("","","","" , "","","","","" , "","","","","" , # "","","","","" , "","","","","" , "","","","","" , "","","","","" , "","") #de=as.data.frame(listresult) #names(de)<-names(Global_AssRules_Result_Dataframe ) #Global_AssRules_Result_Dataframe=rbind(Global_AssRules_Result_Dataframe,de) #de=as.data.frame(Assruleslistresulttext) #names(de)<-names(Global_AssRules_Result_Dataframe ) #Global_AssRules_Result_Dataframe=rbind(Global_AssRules_Result_Dataframe,de) #de=as.data.frame(TimeAssruleslistresulttext) #names(de)<-names(Global_AssRules_Result_Dataframe ) #Global_AssRules_Result_Dataframe=rbind(Global_AssRules_Result_Dataframe,de) #de=as.data.frame(RareAssruleslistresulttext) #names(de)<-names(Global_AssRules_Result_Dataframe ) #Global_AssRules_Result_Dataframe=rbind(Global_AssRules_Result_Dataframe,de) #write.csv(Global_AssRules_Result_Dataframe,file='./Global_AssRules_Result_Dataframe.csv') library(gplots) #Build the matrix data to look like a correlation matrix #x <- matrix(rnorm(64), nrow=8) xval <- formatC(x, format="f", digits=2) pal <- colorRampPalette(c(rgb(0.96,0.96,1), rgb(0.1,0.1,0.9)), space = "rgb") y_val <- formatC(y, format="f", digits=2) #Plot the matrix x_hm <- heatmap.2(x, Rowv=FALSE, Colv=FALSE, dendrogram="none", main="Support X Confidence Heatmap", xlab="Confidence (x10)", ylab="Support (x10)", col=pal, tracecol="#303030", trace="none", cellnote=x, notecol="black", notecex=0.8, keysize = 1.5, margins=c(5, 5)) y_hm<- heatmap.2(y, Rowv=FALSE, Colv=FALSE, dendrogram="none", main="Support X Confidence Heatmap", xlab="Confidence (x10)", ylab="Support (x10)", col=pal, tracecol="#303030", trace="none", cellnote=y_val, notecol="black", notecex=0.8, keysize = 1.5, margins=c(5, 5)) z_hm <- heatmap.2(z, Rowv=FALSE, Colv=FALSE, dendrogram="none", main="Support X Confidence Heatmap", xlab="Confidence (x10)", ylab="Support (x10)", col=pal, tracecol="#303030", trace="none", cellnote=z, notecol="black", notecex=0.8, keysize = 1.5, margins=c(5, 5)) ###################################################################### ########fair assoc rules #./core02_assrulex.sh [switches] <database> <min_supp> <min_conf> -alg:<alg> -rule:<rule> SoftAssRulescmd=paste0(getwd(),"/coron-0.8/core02_assrulex.sh ",ContextFileRCF, " ", MinSup,"% ", MinConf,"% -names -alg:zart -rule:all",sep="") #>thisresults2.txt start_time <- Sys.time() SoftAssRulesresult=try(system(SoftAssRulescmd, 
intern = TRUE, wait = TRUE)) end_time <- Sys.time() cat('proc time: ',end_time - start_time) capture.output(SoftAssRulesresult,file="./contexts/AssociationRulesOnly.txt") CoronOutPut=as.list(SoftAssRulesresult) df.AssocRulesOutPut=as.data.frame(do.call(rbind, CoronOutPut)) NbRules= unlist(strsplit(as.character(df.AssocRulesOutPut$V1[length(df.AssocRulesOutPut$V1)]),":"))[2] cat('nb Association Rules',NbRules) save(df.AssocRulesOutPut,file="./contexts/AssociationRulesOnly.RData") cat(SoftAssRulesresult,"\n") ####Rare RareAssRulescmd=paste0(getwd(),"/coron-0.8/core02_assrulex.sh ",ContextFileRCF, " ", MinSup,"% ", MinConf,"% -names -alg:BtB -rule:rare ",sep="") RareAssRulesresult=try(system(RareAssRulescmd, intern = TRUE, wait = TRUE)) #The rule is in the FF class, i.e. both sides of the rule are frequent (frequent itemset implies frequent itemset). #The rule is closed, i.e. the union of the left and right side forms a closed itemset. capture.output(RareAssRulesresult,file="./contexts/RareAssociationRules.txt") CoronOutPut=as.list(RareAssRulesresult) df.AssocRulesOutPut=as.data.frame(do.call(rbind, CoronOutPut)) save(df.AssocRulesOutPut,file="./contexts/RareAssociationRules.RData") NbRules= unlist(strsplit(as.character(df.AssocRulesOutPut$V1[length(df.AssocRulesOutPut$V1)]),":"))[2] cat('nb Association Rules',NbRules) cat(RareAssRulesresult,"\n") # data <- read.table(file="./contexts/AssociationRules2.txt", sep="\t", quote="", comment.char="") try(system("./core03_leco.sh ./contexts/Context.rcf 1 -names -order -alg:dtouch -method:snow -dot -null -uc", intern = TRUE, wait = TRUE)) try(system(" xdot ./graphviz/lattice.dot &")) pattern="g" try(system(" ./post01_filterRules.sh ./contexts/AssociationRules.txt \"EXECUTE\" -keep -left")) ######################################################### ARJsonOutPut=fromJSON('/home/terminator2/Documents/Adapt_Project/Repository/experimental_fca_json/adapt/explore/jsonoutput/implication.json') ARJsonOutPut2=fromJSON('/home/terminator2/Documents/Adapt_Project/Repository/experimental_fca_json/adapt/explore/jsonoutput/implication.json') for (i in 1:length(ARJsonOutPut)){ cat("processing Line \n",i) current_rule=ARJsonOutPut[[i]]$rules } #####display time cpu m <- microbenchmark("FreqAsRules" = { SoftAssRulescmd=paste0(getwd(),"/coron-0.8/core02_assrulex.sh ",ContextFileRCF, " ", MinSup,"% ", MinConf,"% -names -alg:zart -rule:all",sep="") #>thisresults2.txt SoftAssRulesresult=try(system(SoftAssRulescmd, intern = TRUE, wait = TRUE)) }, "RarAsRules" = { RareAssRulescmd=paste0(getwd(),"/coron-0.8/core02_assrulex.sh ",ContextFileRCF, " ", MinSup,"% ", MinConf,"% -names -alg:BtB -rule:rare ",sep="") RareAssRulesresult=try(system(RareAssRulescmd, intern = TRUE, wait = TRUE)) },times=20 ) library(ggplot2) autoplot(m) uq <- function(x) { fivenum(x)[4]} lq <- function(x) { fivenum(x)[2]} y_min <- 0 # min(by(m$time,m$expr,lq)) y_max <- max(by(m$time,m$expr,uq)) * 1.05 p <- ggplot(m,aes(x=expr,y=time)) + coord_cartesian(ylim = c( y_min , y_max )) p + stat_summary(fun.y=median,fun.ymin = lq, fun.ymax = uq, aes(fill=expr)) ########################################### display the lattice #./pre02_converter.sh ShiftedContextFile -of:./shifted.rcf #./core03_leco.sh sample/laszlo.rcf 1 -names -order -alg:dtouch -method:snow -dot -ext -dot:ext -null -uc #cd graphviz/ # ./compile_gif_leco.sh #./view_leco.sh ########################################### display hist(as.numeric(Global_Result_Dataframe$data)) # plot on same grid, each series colored differently -- # good if the series 
have same scale ggplot(Global_Result_Dataframe, aes(Global_Result_Dataframe$algorithm,Global_Result_Dataframe$data)) + geom_line(aes(colour = Global_Result_Dataframe$algorithm)) w.plot <- melt(Global_Result_Dataframe$data) p <- ggplot(aes(x=Global_Result_Dataframe$minsup, colour=variable), data=w.plot) p + geom_density() plot(density(as.numeric(Global_Result_Dataframe$data)), type = "n") #Extent[ Extent == NA ] <- "NULL" ########################################### Other ListIntent=as.list(Intent) d<-as.data.frame(z[!which(z==''),]) r <- with(z, which(z=='', arr.ind=TRUE)) newd <- z[-r, ] z=stri_list2matrix(CoronOutPut, byrow = TRUE) out <- strsplit(as.character(CoronOutPut),'()') z=do.call(rbind, out) ## v 1.9.6+ setDT(CoronOutPut)[, paste0("type", 1:2) := tstrsplit(CoronOutPut, "_and_")] before df.aree <- as.data.frame(t(as.data.frame(do.call(rbind, CoronOutPut)))) #x=bind_rows(lapply(CoronOutPut, as.data.frame.list)) #y=as.data.frame(data.table::transpose(CoronOutPut), col.names = names(CoronOutPut[[1]])) #write.csv(file="./x.csv",CoronOutPut,sep="(") dt <- data.table(person = c('Sam','Sam','Sam','Greg','Tom','Tom','Tom','Mary','Mary'), group = c('a','b','e','a','b','c','d','b','d')) # non-sparse, desirable output M <- as.matrix(table(dt)) M %*% t(M) # sparse, binary instead of integer rows <- sort(unique(dt$person)) cols <- sort(unique(dt$group)) dimnamesM <- list(person = rows, groups = cols) sprsM <- sparseMatrix(i = match(dt$person, rows), j = match(dt$group, cols), dimnames = dimnamesM)
open import Relation.Binary.Core module BBHeap.Drop {A : Set} (_≤_ : A → A → Set) (tot≤ : Total _≤_) (trans≤ : Transitive _≤_) where open import BBHeap _≤_ open import BBHeap.Compound _≤_ open import BBHeap.Equality _≤_ open import BBHeap.Equality.Properties _≤_ open import BBHeap.Subtyping.Properties _≤_ trans≤ open import BBHeap.Perfect _≤_ open import BBHeap.Properties _≤_ open import BBHeap.Push _≤_ tot≤ trans≤ open import Bound.Lower A open import Bound.Lower.Order _≤_ open import Bound.Lower.Order.Properties _≤_ trans≤ open import Data.Empty open import Data.Product renaming (_×_ to _∧_) open import Data.Sum renaming (_⊎_ to _∨_) open import Order.Total _≤_ tot≤ root : {b : Bound}{h : BBHeap b}(cₕ : Compound h) → A root (cl {x = x} _ _) = x root (cr {x = x} _ _) = x mutual drop : {b : Bound}{h : BBHeap b}(cₕ : Compound h) → BBHeap (val (root cₕ)) drop (cl b≤x l⋘r) = drop⋘ b≤x l⋘r drop (cr b≤x l⋙r) = drop⋙ b≤x l⋙r drop⋘ : {b : Bound}{x : A}{l r : BBHeap (val x)}(b≤x : LeB b (val x)) → l ⋘ r → BBHeap (val x) drop⋘ b≤x lf⋘ = leaf drop⋘ b≤x (ll⋘ {x = y₁} {x' = y₂} x≤y₁ x≤y₂ l₁⋘r₁ l₂⋘r₂ l₂≃r₂ r₁≃l₂) with tot≤ y₁ y₂ | lemma-drop⋘ (cl x≤y₁ l₁⋘r₁) (cl x≤y₂ l₂⋘r₂) (ll⋘ x≤y₁ x≤y₂ l₁⋘r₁ l₂⋘r₂ l₂≃r₂ r₁≃l₂) ... | inj₁ y₁≤y₂ | inj₁ (_ , l⋙dr) = let pl'≈l' = lemma-push⋘ (lexy y₁≤y₂) (lexy refl≤) l₁⋘r₁ ; l'≈l = ≈left (lexy refl≤) x≤y₁ l₁⋘r₁ l₁⋘r₁ refl≈ refl≈ ; pl'≈l = trans≈ pl'≈l' l'≈l ; pl'⋙dr = lemma≈⋙ pl'≈l l⋙dr ; pl'⋙dr' = subtyping⋙r (lexy y₁≤y₂) pl'⋙dr in right x≤y₁ pl'⋙dr' ... | inj₁ y₁≤y₂ | inj₂ dl⋘r = let r≈r' = ≈left x≤y₂ (lexy y₁≤y₂) l₂⋘r₂ l₂⋘r₂ refl≈ refl≈ ; dl⋘r' = lemma⋘≈ dl⋘r r≈r' in left x≤y₁ dl⋘r' ... | inj₂ y₂≤y₁ | inj₁ (_ , l⋙dr) = let l'≈l = ≈left (lexy y₂≤y₁) x≤y₁ l₁⋘r₁ l₁⋘r₁ refl≈ refl≈ ; l'⋙dr = lemma≈⋙ l'≈l l⋙dr in right x≤y₂ l'⋙dr ... | inj₂ y₂≤y₁ | inj₂ dl⋘r = let pr'≈r' = lemma-push⋘ (lexy y₂≤y₁) (lexy refl≤) l₂⋘r₂ ; r'≈pr' = sym≈ pr'≈r' ; r≈r' = ≈left x≤y₂ (lexy refl≤) l₂⋘r₂ l₂⋘r₂ refl≈ refl≈ ; r≈pr' = trans≈ r≈r' r'≈pr' ; dl⋘pr' = lemma⋘≈ dl⋘r r≈pr' ; dl'⋘pr' = subtyping⋘l (lexy y₂≤y₁) dl⋘pr' in left x≤y₂ dl'⋘pr' drop⋘ b≤x (lr⋘ {x = y₁} {x' = y₂} x≤y₁ x≤y₂ l₁⋙r₁ l₂⋘r₂ l₂≃r₂ l₁⋗l₂) with tot≤ y₁ y₂ | lemma-drop⋘ (cr x≤y₁ l₁⋙r₁) (cl x≤y₂ l₂⋘r₂) (lr⋘ x≤y₁ x≤y₂ l₁⋙r₁ l₂⋘r₂ l₂≃r₂ l₁⋗l₂) ... | _ | inj₁ (() , _) ... | inj₁ y₁≤y₂ | inj₂ dl⋘r = let r≈r' = ≈left x≤y₂ (lexy y₁≤y₂) l₂⋘r₂ l₂⋘r₂ refl≈ refl≈ ; dl⋘r' = lemma⋘≈ dl⋘r r≈r' in left x≤y₁ dl⋘r' ... | inj₂ y₂≤y₁ | inj₂ dl⋘r = let pr'≈r' = lemma-push⋘ (lexy y₂≤y₁) (lexy refl≤) l₂⋘r₂ ; r'≈pr' = sym≈ pr'≈r' ; r≈r' = ≈left x≤y₂ (lexy refl≤) l₂⋘r₂ l₂⋘r₂ refl≈ refl≈ ; r≈pr' = trans≈ r≈r' r'≈pr' ; dl⋘pr' = lemma⋘≈ dl⋘r r≈pr' ; dl'⋘pr' = subtyping⋘l (lexy y₂≤y₁) dl⋘pr' in left x≤y₂ dl'⋘pr' drop⋙ : {b : Bound}{x : A}{l r : BBHeap (val x)}(b≤x : LeB b (val x)) → l ⋙ r → BBHeap (val x) drop⋙ b≤x (⋙lf x≤y) = left x≤y lf⋘ drop⋙ b≤x (⋙rl {x = y₁} {x' = y₂} x≤y₁ x≤y₂ l₁⋘r₁ l₁≃r₁ l₂⋘r₂ l₁⋗r₂) with tot≤ y₁ y₂ | lemma-drop⋙ (cl x≤y₁ l₁⋘r₁) (cl x≤y₂ l₂⋘r₂) (⋙rl x≤y₁ x≤y₂ l₁⋘r₁ l₁≃r₁ l₂⋘r₂ l₁⋗r₂) ... | inj₁ y₁≤y₂ | inj₁ (_ , dl⋘r) = let r≈r' = ≈left x≤y₂ (lexy y₁≤y₂) l₂⋘r₂ l₂⋘r₂ refl≈ refl≈ ; dl⋘r' = lemma⋘≈ dl⋘r r≈r' in left x≤y₁ dl⋘r' ... | inj₁ y₁≤y₂ | inj₂ l⋙dr = let l'≈l = ≈left (lexy refl≤) x≤y₁ l₁⋘r₁ l₁⋘r₁ refl≈ refl≈ ; pl'≈l' = lemma-push⋘ (lexy y₁≤y₂) (lexy refl≤) l₁⋘r₁ ; pl'≈l = trans≈ pl'≈l' l'≈l ; pl'⋙dr = lemma≈⋙ pl'≈l l⋙dr ; pl'⋙dr' = subtyping⋙r (lexy y₁≤y₂) pl'⋙dr in right x≤y₁ pl'⋙dr' ... 
| inj₂ y₂≤y₁ | inj₁ (_ , dl⋘r) = let pr'≈r' = lemma-push⋘ (lexy y₂≤y₁) (lexy refl≤) l₂⋘r₂ ; r'≈r = ≈left (lexy refl≤) x≤y₂ l₂⋘r₂ l₂⋘r₂ refl≈ refl≈ ; pr'≈r = trans≈ pr'≈r' r'≈r ; r≈pr' = sym≈ pr'≈r ; dl⋘pr' = lemma⋘≈ dl⋘r r≈pr' ; dl'⋘pr' = subtyping⋘l (lexy y₂≤y₁) dl⋘pr' in left x≤y₂ dl'⋘pr' ... | inj₂ y₂≤y₁ | inj₂ l⋙dr = let l'≈l = ≈left (lexy y₂≤y₁) x≤y₁ l₁⋘r₁ l₁⋘r₁ refl≈ refl≈ ; l'⋙dr = lemma≈⋙ l'≈l l⋙dr in right x≤y₂ l'⋙dr drop⋙ b≤x (⋙rr {x = y₁} {x' = y₂} x≤y₁ x≤y₂ l₁⋘r₁ l₁≃r₁ l₂⋙r₂ l₁≃l₂) with tot≤ y₁ y₂ | lemma-drop⋙ (cl x≤y₁ l₁⋘r₁) (cr x≤y₂ l₂⋙r₂) (⋙rr x≤y₁ x≤y₂ l₁⋘r₁ l₁≃r₁ l₂⋙r₂ l₁≃l₂) ... | _ | inj₁ (() , _) ... | inj₁ y₁≤y₂ | inj₂ l⋙dr = let l'≈l = ≈left (lexy refl≤) x≤y₁ l₁⋘r₁ l₁⋘r₁ refl≈ refl≈ ; pl'≈l' = lemma-push⋘ (lexy y₁≤y₂) (lexy refl≤) l₁⋘r₁ ; pl'≈l = trans≈ pl'≈l' l'≈l ; pl'⋙dr = lemma≈⋙ pl'≈l l⋙dr ; pl'⋙dr' = subtyping⋙r (lexy y₁≤y₂) pl'⋙dr in right x≤y₁ pl'⋙dr' ... | inj₂ y₂≤y₁ | inj₂ l⋙dr = let l'≈l = ≈left (lexy y₂≤y₁) x≤y₁ l₁⋘r₁ l₁⋘r₁ refl≈ refl≈ ; l'⋙dr = lemma≈⋙ l'≈l l⋙dr in right x≤y₂ l'⋙dr lemma-drop⋘ : {b : Bound}{l r : BBHeap b}(cₗ : Compound l)(cᵣ : Compound r) → l ⋘ r → l ≃ r ∧ l ⋙ drop cᵣ ∨ drop cₗ ⋘ r lemma-drop⋘ (cl y≤y₁ lf⋘) (cl y≤y₂ lf⋘) (ll⋘ .y≤y₁ .y≤y₂ .lf⋘ .lf⋘ ≃lf ≃lf) = inj₁ (≃nd y≤y₁ y≤y₂ lf⋘ lf⋘ ≃lf ≃lf ≃lf , ⋙lf y≤y₁) lemma-drop⋘ (cl y≤y₁ (ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄)) (cl y≤y₂ lf⋘) (ll⋘ .y≤y₁ .y≤y₂ .(ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) .lf⋘ _ ()) lemma-drop⋘ (cl y≤y₁ (ll⋘ {x = y₃} {x' = y₄} y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄)) (cl y≤y₂ (ll⋘ {x = y₅} {x' = y₆} y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆)) (ll⋘ {x = y₁} .y≤y₁ .y≤y₂ .(ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) .(ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆) l₂≃r₂ r₁≃l₂) with tot≤ y₃ y₄ | tot≤ y₅ y₆ | lemma-drop⋘ (cl y₁≤y₃ l₃⋘r₃) (cl y₁≤y₄ l₄⋘r₄) (ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) | lemma-drop⋘ (cl y₂≤y₅ l₅⋘r₅) (cl y₂≤y₆ l₆⋘r₆) (ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆) ... | _ | inj₁ y₅≤y₆ | inj₁ (l₁≃r₁ , _) | inj₁ (_ , l₂⋙dr₂) = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; l₁≃l₂ = trans≃ l₁≃r₁ r₁≃l₂ ; pl₂'≈l₂' = lemma-push⋘ (lexy y₅≤y₆) (lexy refl≤) l₅⋘r₅ ; l₂'≈l₂ = ≈left (lexy refl≤) y₂≤y₅ l₅⋘r₅ l₅⋘r₅ refl≈ refl≈ ; pl₂'≈l₂ = trans≈ pl₂'≈l₂' l₂'≈l₂ ; pl₂'⋙dr₂ = lemma≈⋙ pl₂'≈l₂ l₂⋙dr₂ ; pl₂'⋙dr₂' = subtyping⋙r (lexy y₅≤y₆) pl₂'⋙dr₂ ; l₂≈pl₂' = sym≈ pl₂'≈l₂ ; l₁≃pl₂' = lemma≃≈ l₁≃l₂ l₂≈pl₂' in inj₁ (≃nd y≤y₁ y≤y₂ l₁⋘r₁ l₂⋘r₂ l₁≃r₁ l₂≃r₂ l₁≃l₂ , ⋙rr y≤y₁ y₂≤y₅ l₁⋘r₁ l₁≃r₁ pl₂'⋙dr₂' l₁≃pl₂') ... | _ | inj₂ y₆≤y₅ | inj₁ (l₁≃r₁ , _) | inj₁ (_ , l₂⋙dr₂) = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; l₁≃l₂ = trans≃ l₁≃r₁ r₁≃l₂ ; l₂'≈l₂ = ≈left (lexy y₆≤y₅) y₂≤y₅ l₅⋘r₅ l₅⋘r₅ refl≈ refl≈ ; l₂'⋙dr₂ = lemma≈⋙ l₂'≈l₂ l₂⋙dr₂ ; l₂≈l₂' = sym≈ l₂'≈l₂ ; l₁≃l₂' = lemma≃≈ l₁≃l₂ l₂≈l₂' in inj₁ (≃nd y≤y₁ y≤y₂ l₁⋘r₁ l₂⋘r₂ l₁≃r₁ l₂≃r₂ l₁≃l₂ , ⋙rr y≤y₁ y₂≤y₆ l₁⋘r₁ l₁≃r₁ l₂'⋙dr₂ l₁≃l₂') ... | inj₁ y₃≤y₄ | _ | inj₂ dl₁⋘r₁ | _ = let r₁≈r₁' = ≈left y₁≤y₄ (lexy y₃≤y₄) l₄⋘r₄ l₄⋘r₄ refl≈ refl≈ ; dl₁⋘r₁' = lemma⋘≈ dl₁⋘r₁ r₁≈r₁' ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; r₁'≈r₁ = sym≈ r₁≈r₁' ; r₁'≃l₂ = lemma≈≃ r₁'≈r₁ r₁≃l₂ in inj₂ (ll⋘ y₁≤y₃ y≤y₂ dl₁⋘r₁' l₂⋘r₂ l₂≃r₂ r₁'≃l₂) ... 
| inj₂ y₄≤y₃ | _ | inj₂ dl₁⋘r₁ | _ = let r₁≈r₁' = ≈left y₁≤y₄ (lexy refl≤) l₄⋘r₄ l₄⋘r₄ refl≈ refl≈ ; pr₁'≈r₁' = lemma-push⋘ (lexy y₄≤y₃) (lexy refl≤) l₄⋘r₄ ; dl₁⋘r₁' = lemma⋘≈ dl₁⋘r₁ r₁≈r₁' ; r₁'≈pr₁' = sym≈ pr₁'≈r₁' ; r₁≈pr₁' = trans≈ r₁≈r₁' r₁'≈pr₁' ; dl₁⋘pr₁' = lemma⋘≈ dl₁⋘r₁ r₁≈pr₁' ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; dl₁'⋘pr₁' = subtyping⋘l (lexy y₄≤y₃) dl₁⋘pr₁' ; r₁'≈r₁ = sym≈ r₁≈r₁' ; r₁'≃l₂ = lemma≈≃ r₁'≈r₁ r₁≃l₂ ; pr₁'≃l₂ = lemma≈≃ pr₁'≈r₁' r₁'≃l₂ in inj₂ (ll⋘ y₁≤y₄ y≤y₂ dl₁'⋘pr₁' l₂⋘r₂ l₂≃r₂ pr₁'≃l₂) ... | _ | _ | inj₁ (l₁≃r₁ , l₁⋙dr₁) | inj₂ dl₂⋘r₂ with lemma-drop-⊥ y₂≤y₅ l₅⋘r₅ (lemma-⋘-≃ dl₂⋘r₂ (sym≃ l₂≃r₂)) ... | () lemma-drop⋘ (cl y≤y₁ (ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄)) (cl y≤y₂ (lr⋘ y₂≤y₅ y₂≤y₆ l₅⋙r₅ l₆⋘r₆ l₆≃r₆ l₅⋗l₆)) (ll⋘ .y≤y₁ .y≤y₂ .(ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) .(lr⋘ y₂≤y₅ y₂≤y₆ l₅⋙r₅ l₆⋘r₆ l₆≃r₆ l₅⋗l₆) _ ()) lemma-drop⋘ (cl y≤y₁ (lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄)) (cl y≤y₂ lf⋘) (ll⋘ .y≤y₁ .y≤y₂ .(lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄) .lf⋘ _ ()) lemma-drop⋘ (cl y≤y₁ (lr⋘ {x = y₃} {x' = y₄} y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄)) (cl y≤y₂ (ll⋘ {x = y₅} {x' = y₆} y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆)) (ll⋘ .y≤y₁ .y≤y₂ .(lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄) .(ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆) l₂≃r₂ r₁≃l₂) with tot≤ y₃ y₄ | tot≤ y₅ y₆ | lemma-drop⋘ (cr y₁≤y₃ l₃⋙r₃) (cl y₁≤y₄ l₄⋘r₄) (lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄) | lemma-drop⋘ (cl y₂≤y₅ l₅⋘r₅) (cl y₂≤y₆ l₆⋘r₆) (ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆) ... | _ | inj₁ y₅≤y₆ | inj₁ (l₁≃r₁ , _) | inj₁ (_ , l₂⋙dr₂) = let l₁⋘r₁ = lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄ ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; l₁≃l₂ = trans≃ l₁≃r₁ r₁≃l₂ ; pl₂'≈l₂' = lemma-push⋘ (lexy y₅≤y₆) (lexy refl≤) l₅⋘r₅ ; l₂'≈l₂ = ≈left (lexy refl≤) y₂≤y₅ l₅⋘r₅ l₅⋘r₅ refl≈ refl≈ ; pl₂'≈l₂ = trans≈ pl₂'≈l₂' l₂'≈l₂ ; pl₂'⋙dr₂ = lemma≈⋙ pl₂'≈l₂ l₂⋙dr₂ ; pl₂'⋙dr₂' = subtyping⋙r (lexy y₅≤y₆) pl₂'⋙dr₂ ; l₂≈pl₂' = sym≈ pl₂'≈l₂ ; l₁≃pl₂' = lemma≃≈ l₁≃l₂ l₂≈pl₂' in inj₁ (≃nd y≤y₁ y≤y₂ l₁⋘r₁ l₂⋘r₂ l₁≃r₁ l₂≃r₂ l₁≃l₂ , ⋙rr y≤y₁ y₂≤y₅ l₁⋘r₁ l₁≃r₁ pl₂'⋙dr₂' l₁≃pl₂') ... | _ | inj₂ y₆≤y₅ | inj₁ (l₁≃r₁ , _) | inj₁ (_ , l₂⋙dr₂) = let l₁⋘r₁ = lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄ ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; l₁≃l₂ = trans≃ l₁≃r₁ r₁≃l₂ ; l₂'≈l₂ = ≈left (lexy y₆≤y₅) y₂≤y₅ l₅⋘r₅ l₅⋘r₅ refl≈ refl≈ ; l₂'⋙dr₂ = lemma≈⋙ l₂'≈l₂ l₂⋙dr₂ ; l₂≈l₂' = sym≈ l₂'≈l₂ ; l₁≃l₂' = lemma≃≈ l₁≃l₂ l₂≈l₂' in inj₁ (≃nd y≤y₁ y≤y₂ l₁⋘r₁ l₂⋘r₂ l₁≃r₁ l₂≃r₂ l₁≃l₂ , ⋙rr y≤y₁ y₂≤y₆ l₁⋘r₁ l₁≃r₁ l₂'⋙dr₂ l₁≃l₂') ... | inj₁ y₃≤y₄ | _ | inj₂ dl₁⋘r₁ | _ = let r₁≈r₁' = ≈left y₁≤y₄ (lexy y₃≤y₄) l₄⋘r₄ l₄⋘r₄ refl≈ refl≈ ; dl₁⋘r₁' = lemma⋘≈ dl₁⋘r₁ r₁≈r₁' ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; r₁'≈r₁ = sym≈ r₁≈r₁' ; r₁'≃l₂ = lemma≈≃ r₁'≈r₁ r₁≃l₂ in inj₂ (ll⋘ y₁≤y₃ y≤y₂ dl₁⋘r₁' l₂⋘r₂ l₂≃r₂ r₁'≃l₂) ... | inj₂ y₄≤y₃ | _ | inj₂ dl₁⋘r₁ | _ = let r₁≈r₁' = ≈left y₁≤y₄ (lexy refl≤) l₄⋘r₄ l₄⋘r₄ refl≈ refl≈ ; pr₁'≈r₁' = lemma-push⋘ (lexy y₄≤y₃) (lexy refl≤) l₄⋘r₄ ; dl₁⋘r₁' = lemma⋘≈ dl₁⋘r₁ r₁≈r₁' ; r₁'≈pr₁' = sym≈ pr₁'≈r₁' ; r₁≈pr₁' = trans≈ r₁≈r₁' r₁'≈pr₁' ; dl₁⋘pr₁' = lemma⋘≈ dl₁⋘r₁ r₁≈pr₁' ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; dl₁'⋘pr₁' = subtyping⋘l (lexy y₄≤y₃) dl₁⋘pr₁' ; r₁'≈r₁ = sym≈ r₁≈r₁' ; r₁'≃l₂ = lemma≈≃ r₁'≈r₁ r₁≃l₂ ; pr₁'≃l₂ = lemma≈≃ pr₁'≈r₁' r₁'≃l₂ in inj₂ (ll⋘ y₁≤y₄ y≤y₂ dl₁'⋘pr₁' l₂⋘r₂ l₂≃r₂ pr₁'≃l₂) ... | _ | _ | inj₁ (l₁≃r₁ , l₁⋙dr₁) | inj₂ dl₂⋘r₂ with lemma-drop-⊥ y₂≤y₅ l₅⋘r₅ (lemma-⋘-≃ dl₂⋘r₂ (sym≃ l₂≃r₂)) ... 
| () lemma-drop⋘ (cl y≤y₁ (lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄)) (cl y≤y₂ (lr⋘ y₂≤y₅ y₂≤y₆ l₅⋙r₅ l₆⋘r₆ l₆≃r₆ l₅⋗l₆)) (ll⋘ .y≤y₁ .y≤y₂ .(lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄) .(lr⋘ y₂≤y₅ y₂≤y₆ l₅⋙r₅ l₆⋘r₆ l₆≃r₆ l₅⋗l₆) () _) lemma-drop⋘ (cr y≤y₁ (⋙lf y₁≤y₃)) (cl y≤y₂ lf⋘) (lr⋘ .y≤y₁ .y≤y₂ .(⋙lf y₁≤y₃) .lf⋘ ≃lf (⋗lf .y₁≤y₃)) = inj₂ (ll⋘ y₁≤y₃ y≤y₂ lf⋘ lf⋘ ≃lf ≃lf) lemma-drop⋘ (cr y≤y₁ (⋙lf y₁≤y₃)) (cl y≤y₂ (ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆)) (lr⋘ .y≤y₁ .y≤y₂ .(⋙lf y₁≤y₃) .(ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆) _ (⋗nd .y₁≤y₃ .y₂≤y₅ .lf⋘ .l₅⋘r₅ _ _ ())) lemma-drop⋘ (cr y≤y₁ (⋙lf y₁≤y₃)) (cl y≤y₂ (lr⋘ y₂≤y₅ y₂≤y₆ l₅⋙r₅ l₆⋘r₆ l₆≃r₆ l₅⋗l₆)) (lr⋘ .y≤y₁ .y≤y₂ .(⋙lf y₁≤y₃) .(lr⋘ y₂≤y₅ y₂≤y₆ l₅⋙r₅ l₆⋘r₆ l₆≃r₆ l₅⋗l₆) () _) lemma-drop⋘ (cr y≤y₁ (⋙rl {x = y₃} {x' = y₄} y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₃≃r₃ l₄⋘r₄ l₃⋗r₄)) (cl y≤y₂ l₂⋘r₂) (lr⋘ .y≤y₁ .y≤y₂ .(⋙rl y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₃≃r₃ l₄⋘r₄ l₃⋗r₄) .l₂⋘r₂ l₂≃r₂ l₁⋗l₂) with tot≤ y₃ y₄ | lemma-drop⋙ (cl y₁≤y₃ l₃⋘r₃) (cl y₁≤y₄ l₄⋘r₄) (⋙rl y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₃≃r₃ l₄⋘r₄ l₃⋗r₄) ... | inj₁ y₃≤y₄ | inj₁ (l₁⋗r₁ , dl₁⋘r₁) = let r₁≈r₁' = ≈left y₁≤y₄ (lexy y₃≤y₄) l₄⋘r₄ l₄⋘r₄ refl≈ refl≈ ; dl₁⋘r₁' = lemma⋘≈ dl₁⋘r₁ r₁≈r₁' ; r₁'≈r₁ = sym≈ r₁≈r₁' ; r₁≃l₂ = lemma⋗⋗' l₁⋗r₁ l₁⋗l₂ ; r₁'≃l₂ = lemma≈≃ r₁'≈r₁ r₁≃l₂ in inj₂ (ll⋘ y₁≤y₃ y≤y₂ dl₁⋘r₁' l₂⋘r₂ l₂≃r₂ r₁'≃l₂) ... | inj₁ y₃≤y₄ | inj₂ l₁⋙dr₁ = let l₁'≈l₁ = ≈left (lexy refl≤) y₁≤y₃ l₃⋘r₃ l₃⋘r₃ refl≈ refl≈ ; pl₁'≈l₁' = lemma-push⋘ (lexy y₃≤y₄) (lexy refl≤) l₃⋘r₃ ; pl₁'≈l₁ = trans≈ pl₁'≈l₁' l₁'≈l₁ ; pl₁'⋙dr = lemma≈⋙ pl₁'≈l₁ l₁⋙dr₁ ; pl₁'⋙dr₁' = subtyping⋙r (lexy y₃≤y₄) pl₁'⋙dr ; pl₁'⋗l₂ = lemma≈⋗ pl₁'≈l₁ l₁⋗l₂ in inj₂ (lr⋘ y₁≤y₃ y≤y₂ pl₁'⋙dr₁' l₂⋘r₂ l₂≃r₂ pl₁'⋗l₂) ... | inj₂ y₄≤y₃ | inj₁ (l₁⋗r₁ , dl₁⋘r₁) = let r₁'≈r₁ = ≈left (lexy refl≤) y₁≤y₄ l₄⋘r₄ l₄⋘r₄ refl≈ refl≈ ; pr₁'≈r₁' = lemma-push⋘ (lexy y₄≤y₃) (lexy refl≤) l₄⋘r₄ ; r₁'≈pr₁' = sym≈ pr₁'≈r₁' ; pr₁'≈r₁ = trans≈ pr₁'≈r₁' r₁'≈r₁ ; r₁≈pr₁' = sym≈ pr₁'≈r₁ ; dl₁⋘pr₁' = lemma⋘≈ dl₁⋘r₁ r₁≈pr₁' ; dl₁'⋘pr₁' = subtyping⋘l (lexy y₄≤y₃) dl₁⋘pr₁' ; r₁≃l₂ = lemma⋗⋗' l₁⋗r₁ l₁⋗l₂ ; pr₁'≃l₂ = lemma≈≃ pr₁'≈r₁ r₁≃l₂ in inj₂ (ll⋘ y₁≤y₄ y≤y₂ dl₁'⋘pr₁' l₂⋘r₂ l₂≃r₂ pr₁'≃l₂) ... | inj₂ y₄≤y₃ | inj₂ l₁⋙dr₁ = let l₁'≈l₁ = ≈left (lexy y₄≤y₃) y₁≤y₃ l₃⋘r₃ l₃⋘r₃ refl≈ refl≈ ; l₁'⋙dr₁ = lemma≈⋙ l₁'≈l₁ l₁⋙dr₁ ; l₁'⋗l₂ = lemma≈⋗ l₁'≈l₁ l₁⋗l₂ in inj₂ (lr⋘ y₁≤y₄ y≤y₂ l₁'⋙dr₁ l₂⋘r₂ l₂≃r₂ l₁'⋗l₂) lemma-drop⋘ (cr y≤y₁ (⋙rr {x = y₃} {x' = y₄} y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₃≃r₃ l₄⋙r₄ l₃≃l₄)) (cl y≤y₂ l₂⋘r₂) (lr⋘ .y≤y₁ .y≤y₂ .(⋙rr y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₃≃r₃ l₄⋙r₄ l₃≃l₄) .l₂⋘r₂ l₂≃r₂ l₁⋗l₂) with tot≤ y₃ y₄ | lemma-drop⋙ (cl y₁≤y₃ l₃⋘r₃) (cr y₁≤y₄ l₄⋙r₄) (⋙rr y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₃≃r₃ l₄⋙r₄ l₃≃l₄) ... | _ | inj₁ (() , _) ... | inj₁ y₃≤y₄ | inj₂ l₁⋙dr₁ = let l₁'≈l₁ = ≈left (lexy refl≤) y₁≤y₃ l₃⋘r₃ l₃⋘r₃ refl≈ refl≈ ; pl₁'≈l₁' = lemma-push⋘ (lexy y₃≤y₄) (lexy refl≤) l₃⋘r₃ ; pl₁'≈l₁ = trans≈ pl₁'≈l₁' l₁'≈l₁ ; pl₁'⋙dr = lemma≈⋙ pl₁'≈l₁ l₁⋙dr₁ ; pl₁'⋙dr₁' = subtyping⋙r (lexy y₃≤y₄) pl₁'⋙dr ; pl₁'⋗l₂ = lemma≈⋗ pl₁'≈l₁ l₁⋗l₂ in inj₂ (lr⋘ y₁≤y₃ y≤y₂ pl₁'⋙dr₁' l₂⋘r₂ l₂≃r₂ pl₁'⋗l₂) ... 
| inj₂ y₄≤y₃ | inj₂ l₁⋙dr₁ = let l₁'≈l₁ = ≈left (lexy y₄≤y₃) y₁≤y₃ l₃⋘r₃ l₃⋘r₃ refl≈ refl≈ ; l₁'⋙dr₁ = lemma≈⋙ l₁'≈l₁ l₁⋙dr₁ ; l₁'⋗l₂ = lemma≈⋗ l₁'≈l₁ l₁⋗l₂ in inj₂ (lr⋘ y₁≤y₄ y≤y₂ l₁'⋙dr₁ l₂⋘r₂ l₂≃r₂ l₁'⋗l₂) lemma-drop⋙ : {b : Bound}{l r : BBHeap b}(cₗ : Compound l)(cᵣ : Compound r) → l ⋙ r → l ⋗ r ∧ drop cₗ ⋘ r ∨ l ⋙ drop cᵣ lemma-drop⋙ (cl y≤y₁ lf⋘) (cl y≤y₂ l₂⋘r₂) (⋙rl .y≤y₁ .y≤y₂ .lf⋘ _ .l₂⋘r₂ ()) lemma-drop⋙ (cl y≤y₁ (ll⋘ {x = y₃} {x' = y₄} y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄)) (cl y≤y₂ lf⋘) (⋙rl .y≤y₁ .y≤y₂ .(ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) l₁≃r₁ .lf⋘ l₁⋗r₂) with tot≤ y₃ y₄ | lemma-drop⋘ (cl y₁≤y₃ l₃⋘r₃) (cl y₁≤y₄ l₄⋘r₄) (ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) ... | inj₁ y₃≤y₄ | inj₁ (_ , l₁⋙dr₁) = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₁⋗l₂ = lemma⋗≃ l₁⋗r₂ ≃lf ; l₁'≈l₁ = ≈left (lexy refl≤) y₁≤y₃ l₃⋘r₃ l₃⋘r₃ refl≈ refl≈ ; pl₁'≈l₁' = lemma-push⋘ (lexy y₃≤y₄) (lexy refl≤) l₃⋘r₃ ; pl₁'≈l₁ = trans≈ pl₁'≈l₁' l₁'≈l₁ ; pl₁'⋙dr₁ = lemma≈⋙ pl₁'≈l₁ l₁⋙dr₁ ; pl₁'⋙dr₁' = subtyping⋙r (lexy y₃≤y₄) pl₁'⋙dr₁ ; pl₁'⋗l₂ = lemma≈⋗ pl₁'≈l₁ l₁⋗l₂ in inj₁ (⋗nd y≤y₁ y≤y₂ l₁⋘r₁ lf⋘ l₁≃r₁ ≃lf l₁⋗l₂ , lr⋘ y₁≤y₃ y≤y₂ pl₁'⋙dr₁' lf⋘ ≃lf pl₁'⋗l₂) ... | inj₂ y₄≤y₃ | inj₁ (_ , l₁⋙dr₁) = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₁⋗l₂ = lemma⋗≃ l₁⋗r₂ ≃lf ; l₁'≈l₁ = ≈left (lexy y₄≤y₃) y₁≤y₃ l₃⋘r₃ l₃⋘r₃ refl≈ refl≈ ; l₁'⋙dr₁ = lemma≈⋙ l₁'≈l₁ l₁⋙dr₁ ; l₁'⋗l₂ = lemma≈⋗ l₁'≈l₁ l₁⋗l₂ in inj₁ (⋗nd y≤y₁ y≤y₂ l₁⋘r₁ lf⋘ l₁≃r₁ ≃lf l₁⋗l₂ , lr⋘ y₁≤y₄ y≤y₂ l₁'⋙dr₁ lf⋘ ≃lf l₁'⋗l₂) ... | _ | inj₂ dl₁⋘r₁ with lemma-drop-⊥ y₁≤y₃ l₃⋘r₃ (lemma-⋘-≃ dl₁⋘r₁ (sym≃ l₁≃r₁)) ... | () lemma-drop⋙ (cl y≤y₁ (ll⋘ {x = y₃} {x' = y₄} y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄)) (cl y≤y₂ (ll⋘ {x = y₅} {x' = y₆} y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆)) (⋙rl .y≤y₁ .y≤y₂ .(ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) l₁≃r₁ .(ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆) l₁⋗r₂) with tot≤ y₃ y₄ | tot≤ y₅ y₆ | lemma-drop⋘ (cl y₂≤y₅ l₅⋘r₅) (cl y₂≤y₆ l₆⋘r₆) (ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆) | lemma-drop⋘ (cl y₁≤y₃ l₃⋘r₃) (cl y₁≤y₄ l₄⋘r₄) (ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) ... | inj₁ y₃≤y₄ | _ | inj₁ (l₂≃r₂ , _) | inj₁ (_ , l₁⋙dr₁) = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; r₂≃l₂ = sym≃ l₂≃r₂ ; l₁⋗l₂ = lemma⋗≃ l₁⋗r₂ r₂≃l₂ ; l₁'≈l₁ = ≈left (lexy refl≤) y₁≤y₃ l₃⋘r₃ l₃⋘r₃ refl≈ refl≈ ; pl₁'≈l₁' = lemma-push⋘ (lexy y₃≤y₄) (lexy refl≤) l₃⋘r₃ ; pl₁'≈l₁ = trans≈ pl₁'≈l₁' l₁'≈l₁ ; pl₁'⋙dr₁ = lemma≈⋙ pl₁'≈l₁ l₁⋙dr₁ ; pl₁'⋙dr₁' = subtyping⋙r (lexy y₃≤y₄) pl₁'⋙dr₁ ; pl₁'⋗l₂ = lemma≈⋗ pl₁'≈l₁ l₁⋗l₂ in inj₁ (⋗nd y≤y₁ y≤y₂ l₁⋘r₁ l₂⋘r₂ l₁≃r₁ l₂≃r₂ l₁⋗l₂ , lr⋘ y₁≤y₃ y≤y₂ pl₁'⋙dr₁' l₂⋘r₂ l₂≃r₂ pl₁'⋗l₂) ... | inj₂ y₄≤y₃ | _ | inj₁ (l₂≃r₂ , _) | inj₁ (_ , l₁⋙dr₁) = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₂⋘r₂ = ll⋘ y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₆⋘r₆ l₆≃r₆ r₅≃l₆ ; r₂≃l₂ = sym≃ l₂≃r₂ ; l₁⋗l₂ = lemma⋗≃ l₁⋗r₂ r₂≃l₂ ; l₁'≈l₁ = ≈left (lexy y₄≤y₃) y₁≤y₃ l₃⋘r₃ l₃⋘r₃ refl≈ refl≈ ; l₁'⋙dr₁ = lemma≈⋙ l₁'≈l₁ l₁⋙dr₁ ; l₁'⋗l₂ = lemma≈⋗ l₁'≈l₁ l₁⋗l₂ in inj₁ (⋗nd y≤y₁ y≤y₂ l₁⋘r₁ l₂⋘r₂ l₁≃r₁ l₂≃r₂ l₁⋗l₂ , lr⋘ y₁≤y₄ y≤y₂ l₁'⋙dr₁ l₂⋘r₂ l₂≃r₂ l₁'⋗l₂) ... | _ | inj₁ y₅≤y₆ | inj₂ dl₂⋘r₂ | _ = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; r₂≈r₂' = ≈left y₂≤y₆ (lexy y₅≤y₆) l₆⋘r₆ l₆⋘r₆ refl≈ refl≈ ; dl₂⋘r₂' = lemma⋘≈ dl₂⋘r₂ r₂≈r₂' ; l₁⋗r₂' = lemma⋗≈ l₁⋗r₂ r₂≈r₂' in inj₂ (⋙rl y≤y₁ y₂≤y₅ l₁⋘r₁ l₁≃r₁ dl₂⋘r₂' l₁⋗r₂') ... 
| _ | inj₂ y₆≤y₅ | inj₂ dl₂⋘r₂ | _ = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; r₂≈r₂' = ≈left y₂≤y₆ (lexy refl≤) l₆⋘r₆ l₆⋘r₆ refl≈ refl≈ ; pr₂'≈r₂' = lemma-push⋘ (lexy y₆≤y₅) (lexy refl≤) l₆⋘r₆ ; r₂'≈pr₂' = sym≈ pr₂'≈r₂' ; r₂≈pr₂' = trans≈ r₂≈r₂' r₂'≈pr₂'; dl₂⋘pr₂' = lemma⋘≈ dl₂⋘r₂ r₂≈pr₂' ; dl₂'⋘pr₂' = subtyping⋘l (lexy y₆≤y₅) dl₂⋘pr₂' ; l₁⋗pr₂' = lemma⋗≈ l₁⋗r₂ r₂≈pr₂' in inj₂ (⋙rl y≤y₁ y₂≤y₆ l₁⋘r₁ l₁≃r₁ dl₂'⋘pr₂' l₁⋗pr₂') ... | _ | _ | _ | inj₂ dl₁⋘r₁ with lemma-drop-⊥ y₁≤y₃ l₃⋘r₃ (lemma-⋘-≃ dl₁⋘r₁ (sym≃ l₁≃r₁)) ... | () lemma-drop⋙ (cl y≤y₁ (ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄)) (cl y≤y₂ (lr⋘ {x = y₅} {x' = y₆} y₂≤y₅ y₂≤y₆ l₅⋙r₅ l₆⋘r₆ l₆≃r₆ l₅⋗l₆)) (⋙rl .y≤y₁ .y≤y₂ .(ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) l₁≃r₁ .(lr⋘ y₂≤y₅ y₂≤y₆ l₅⋙r₅ l₆⋘r₆ l₆≃r₆ l₅⋗l₆) l₁⋗r₂) with tot≤ y₅ y₆ | lemma-drop⋘ (cr y₂≤y₅ l₅⋙r₅) (cl y₂≤y₆ l₆⋘r₆) (lr⋘ y₂≤y₅ y₂≤y₆ l₅⋙r₅ l₆⋘r₆ l₆≃r₆ l₅⋗l₆) ... | _ | inj₁ (() , _) ... | inj₁ y₅≤y₆ | inj₂ dl₂⋘r₂ = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; r₂≈r₂' = ≈left y₂≤y₆ (lexy y₅≤y₆) l₆⋘r₆ l₆⋘r₆ refl≈ refl≈ ; dl₂⋘r₂' = lemma⋘≈ dl₂⋘r₂ r₂≈r₂' ; l₁⋗r₂' = lemma⋗≈ l₁⋗r₂ r₂≈r₂' in inj₂ (⋙rl y≤y₁ y₂≤y₅ l₁⋘r₁ l₁≃r₁ dl₂⋘r₂' l₁⋗r₂') ... | inj₂ y₆≤y₅ | inj₂ dl₂⋘r₂ = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; r₂≈r₂' = ≈left y₂≤y₆ (lexy refl≤) l₆⋘r₆ l₆⋘r₆ refl≈ refl≈ ; pr₂'≈r₂' = lemma-push⋘ (lexy y₆≤y₅) (lexy refl≤) l₆⋘r₆ ; r₂'≈pr₂' = sym≈ pr₂'≈r₂' ; r₂≈pr₂' = trans≈ r₂≈r₂' r₂'≈pr₂'; dl₂⋘pr₂' = lemma⋘≈ dl₂⋘r₂ r₂≈pr₂' ; dl₂'⋘pr₂' = subtyping⋘l (lexy y₆≤y₅) dl₂⋘pr₂' ; l₁⋗pr₂' = lemma⋗≈ l₁⋗r₂ r₂≈pr₂' in inj₂ (⋙rl y≤y₁ y₂≤y₆ l₁⋘r₁ l₁≃r₁ dl₂'⋘pr₂' l₁⋗pr₂') lemma-drop⋙ (cl y≤y₁ (lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄)) (cl y≤y₂ l₂⋘r₂) (⋙rl .y≤y₁ .y≤y₂ .(lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄) () .l₂⋘r₂ _) lemma-drop⋙ _ _ (⋙rr _ _ lf⋘ _ (⋙lf _) ()) lemma-drop⋙ _ _ (⋙rr _ _ lf⋘ _ (⋙rl _ _ _ _ _ _) ()) lemma-drop⋙ _ _ (⋙rr _ _ lf⋘ _ (⋙rr _ _ _ _ _ _) ()) lemma-drop⋙ (cl y≤y₁ (ll⋘ y₁≤y₃ y₁≤y₄ lf⋘ l₄⋘r₄ l₄≃r₄ r₃≃l₄)) (cr y≤y₂ (⋙lf y₂≤y₅)) (⋙rr .y≤y₁ .y≤y₂ .(ll⋘ y₁≤y₃ y₁≤y₄ lf⋘ l₄⋘r₄ l₄≃r₄ r₃≃l₄) l₁≃r₁ .(⋙lf y₂≤y₅) (≃nd .y₁≤y₃ .y₂≤y₅ .lf⋘ .lf⋘ ≃lf ≃lf ≃lf)) = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ lf⋘ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₁⋗r₂ = ⋗lf y₁≤y₃ in inj₂ (⋙rl y≤y₁ y₂≤y₅ l₁⋘r₁ l₁≃r₁ lf⋘ l₁⋗r₂) lemma-drop⋙ _ _ (⋙rr _ _ (ll⋘ y₁≤y₃ _ (ll⋘ _ _ _ _ _ _) _ _ _) _ (⋙lf y₂≤y₅) (≃nd .y₁≤y₃ .y₂≤y₅ .(ll⋘ _ _ _ _ _ _) .lf⋘ _ ≃lf ())) lemma-drop⋙ _ _ (⋙rr _ _ (ll⋘ y₁≤y₃ _ (lr⋘ _ _ _ _ _ _) _ _ _) _ (⋙lf y₂≤y₅) (≃nd .y₁≤y₃ .y₂≤y₅ .(lr⋘ _ _ _ _ _ _) .lf⋘ _ ≃lf ())) lemma-drop⋙ (cl y≤y₁ (ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄)) (cr y≤y₂ (⋙rl {x = y₅} {x' = y₆} y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₅≃r₅ l₆⋘r₆ l₅⋗r₆)) (⋙rr .y≤y₁ .y≤y₂ .(ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) l₁≃r₁ .(⋙rl y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₅≃r₅ l₆⋘r₆ l₅⋗r₆) l₁≃l₂) with tot≤ y₅ y₆ | lemma-drop⋙ (cl y₂≤y₅ l₅⋘r₅) (cl y₂≤y₆ l₆⋘r₆) (⋙rl y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₅≃r₅ l₆⋘r₆ l₅⋗r₆) ... | inj₁ y₅≤y₆ | inj₁ (l₂⋗r₂ , dl₂⋘r₂) = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; r₂≈r₂' = ≈left y₂≤y₆ (lexy y₅≤y₆) l₆⋘r₆ l₆⋘r₆ refl≈ refl≈ ; dl₂⋘r₂' = lemma⋘≈ dl₂⋘r₂ r₂≈r₂' ; l₁⋗r₂ = lemma≃⋗ l₁≃l₂ l₂⋗r₂ ; l₁⋗r₂' = lemma⋗≈ l₁⋗r₂ r₂≈r₂' in inj₂ (⋙rl y≤y₁ y₂≤y₅ l₁⋘r₁ l₁≃r₁ dl₂⋘r₂' l₁⋗r₂') ... 
| inj₁ y₅≤y₆ | inj₂ l₂⋙dr₂ = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₂'≈l₂ = ≈left (lexy refl≤) y₂≤y₅ l₅⋘r₅ l₅⋘r₅ refl≈ refl≈ ; pl₂'≈l₂' = lemma-push⋘ (lexy y₅≤y₆) (lexy refl≤) l₅⋘r₅ ; pl₂'≈l₂ = trans≈ pl₂'≈l₂' l₂'≈l₂ ; pl₂'⋙dr₂ = lemma≈⋙ pl₂'≈l₂ l₂⋙dr₂ ; l₂≈pl₂' = sym≈ pl₂'≈l₂ ; pl₂'⋙dr₂' = subtyping⋙r (lexy y₅≤y₆) pl₂'⋙dr₂ ; l₁≃pl₂' = lemma≃≈ l₁≃l₂ l₂≈pl₂' in inj₂ (⋙rr y≤y₁ y₂≤y₅ l₁⋘r₁ l₁≃r₁ pl₂'⋙dr₂' l₁≃pl₂') ... | inj₂ y₆≤y₅ | inj₁ (l₂⋗r₂ , dl₂⋘r₂) = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; r₂'≈r₂ = ≈left (lexy refl≤) y₂≤y₆ l₆⋘r₆ l₆⋘r₆ refl≈ refl≈ ; pr₂'≈r₂' = lemma-push⋘ (lexy y₆≤y₅) (lexy refl≤) l₆⋘r₆ ; pr₂'≈r₂ = trans≈ pr₂'≈r₂' r₂'≈r₂ ; r₂≈pr₂' = sym≈ pr₂'≈r₂ ; dl₂⋘pr₂' = lemma⋘≈ dl₂⋘r₂ r₂≈pr₂' ; dl₂'⋘pr₂' = subtyping⋘l (lexy y₆≤y₅) dl₂⋘pr₂' ; l₁⋗r₂ = lemma≃⋗ l₁≃l₂ l₂⋗r₂ ; l₁⋗pr₂' = lemma⋗≈ l₁⋗r₂ r₂≈pr₂' in inj₂ (⋙rl y≤y₁ y₂≤y₆ l₁⋘r₁ l₁≃r₁ dl₂'⋘pr₂' l₁⋗pr₂') ... | inj₂ y₆≤y₅ | inj₂ l₂⋙dr₂ = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₂'≈l₂ = ≈left (lexy y₆≤y₅) y₂≤y₅ l₅⋘r₅ l₅⋘r₅ refl≈ refl≈ ; l₂'⋙dr₂ = lemma≈⋙ l₂'≈l₂ l₂⋙dr₂ ; l₂≈l₂' = sym≈ l₂'≈l₂ ; l₁≃l₂' = lemma≃≈ l₁≃l₂ l₂≈l₂' in inj₂ (⋙rr y≤y₁ y₂≤y₆ l₁⋘r₁ l₁≃r₁ l₂'⋙dr₂ l₁≃l₂') lemma-drop⋙ (cl y≤y₁ (ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄)) (cr y≤y₂ (⋙rr {x = y₅} {x' = y₆} y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₅≃r₅ l₆⋙r₆ l₅≃l₆)) (⋙rr .y≤y₁ .y≤y₂ .(ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄) l₁≃r₁ .(⋙rr y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₅≃r₅ l₆⋙r₆ l₅≃l₆) l₁≃l₂) with tot≤ y₅ y₆ | lemma-drop⋙ (cl y₂≤y₅ l₅⋘r₅) (cr y₂≤y₆ l₆⋙r₆) (⋙rr y₂≤y₅ y₂≤y₆ l₅⋘r₅ l₅≃r₅ l₆⋙r₆ l₅≃l₆) ... | _ | inj₁ (() , _) ... | inj₁ y₅≤y₆ | inj₂ l₂⋙dr₂ = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₂'≈l₂ = ≈left (lexy refl≤) y₂≤y₅ l₅⋘r₅ l₅⋘r₅ refl≈ refl≈ ; pl₂'≈l₂' = lemma-push⋘ (lexy y₅≤y₆) (lexy refl≤) l₅⋘r₅ ; pl₂'≈l₂ = trans≈ pl₂'≈l₂' l₂'≈l₂ ; pl₂'⋙dr₂ = lemma≈⋙ pl₂'≈l₂ l₂⋙dr₂ ; l₂≈pl₂' = sym≈ pl₂'≈l₂ ; pl₂'⋙dr₂' = subtyping⋙r (lexy y₅≤y₆) pl₂'⋙dr₂ ; l₁≃pl₂' = lemma≃≈ l₁≃l₂ l₂≈pl₂' in inj₂ (⋙rr y≤y₁ y₂≤y₅ l₁⋘r₁ l₁≃r₁ pl₂'⋙dr₂' l₁≃pl₂') ... | inj₂ y₆≤y₅ | inj₂ l₂⋙dr₂ = let l₁⋘r₁ = ll⋘ y₁≤y₃ y₁≤y₄ l₃⋘r₃ l₄⋘r₄ l₄≃r₄ r₃≃l₄ ; l₂'≈l₂ = ≈left (lexy y₆≤y₅) y₂≤y₅ l₅⋘r₅ l₅⋘r₅ refl≈ refl≈ ; l₂'⋙dr₂ = lemma≈⋙ l₂'≈l₂ l₂⋙dr₂ ; l₂≈l₂' = sym≈ l₂'≈l₂ ; l₁≃l₂' = lemma≃≈ l₁≃l₂ l₂≈l₂' in inj₂ (⋙rr y≤y₁ y₂≤y₆ l₁⋘r₁ l₁≃r₁ l₂'⋙dr₂ l₁≃l₂') lemma-drop⋙ (cl y≤y₁ (lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄)) (cr y≤y₂ l₂⋙r₂) (⋙rr .y≤y₁ .y≤y₂ .(lr⋘ y₁≤y₃ y₁≤y₄ l₃⋙r₃ l₄⋘r₄ l₄≃r₄ l₃⋗l₄) () .l₂⋙r₂ _) lemma-drop-⊥ : {b : Bound}{x : A}{l r : BBHeap (val x)}(b≤x : LeB b (val x))(l⋘r : l ⋘ r) → drop (cl b≤x l⋘r) ⋘ (left b≤x l⋘r) → ⊥ lemma-drop-⊥ _ lf⋘ () lemma-drop-⊥ b≤x (ll⋘ {x = y₁} {x' = y₂} x≤y₁ x≤y₂ l₁⋘r₁ l₂⋘r₂ l₂≃r₂ r₁≃l₂) dxlr⋘xlr with tot≤ y₁ y₂ | lemma-drop⋘ (cl x≤y₁ l₁⋘r₁) (cl x≤y₂ l₂⋘r₂) (ll⋘ x≤y₁ x≤y₂ l₁⋘r₁ l₂⋘r₂ l₂≃r₂ r₁≃l₂) | dxlr⋘xlr | lemma-perfect dxlr⋘xlr ... | inj₁ y₁≤y₂ | inj₁ (l≃r , l⋙dr) | _dxlr⋘xlr | _ = let l⋘r = ll⋘ x≤y₁ x≤y₂ l₁⋘r₁ l₂⋘r₂ l₂≃r₂ r₁≃l₂ ; pl'≈l' = lemma-push⋘ (lexy y₁≤y₂) (lexy refl≤) l₁⋘r₁ ; l'≈l = ≈left (lexy refl≤) x≤y₁ l₁⋘r₁ l₁⋘r₁ refl≈ refl≈ ; pl'≈l = trans≈ pl'≈l' l'≈l ; pl'⋙dr = lemma≈⋙ pl'≈l l⋙dr ; pl'⋙dr' = subtyping⋙r (lexy y₁≤y₂) pl'⋙dr ; r≃l = sym≃ l≃r ; l≃l = trans≃ l≃r r≃l ; pl'≃l = lemma≈≃ pl'≈l l≃l in lemma-⋘-⊥ x≤y₁ b≤x pl'⋙dr' l⋘r pl'≃l _dxlr⋘xlr ... 
| inj₂ y₂≤y₁ | inj₁ (l≃r , l⋙dr) | _dxlr⋘xlr | _ = let l⋘r = ll⋘ x≤y₁ x≤y₂ l₁⋘r₁ l₂⋘r₂ l₂≃r₂ r₁≃l₂ ; l'≈l = ≈left (lexy y₂≤y₁) x≤y₁ l₁⋘r₁ l₁⋘r₁ refl≈ refl≈ ; l'⋙dr = lemma≈⋙ l'≈l l⋙dr ; r≃l = sym≃ l≃r ; l≃l = trans≃ l≃r r≃l ; l'≃l = lemma≈≃ l'≈l l≃l in lemma-⋘-⊥ x≤y₂ b≤x l'⋙dr l⋘r l'≃l _dxlr⋘xlr ... | _ | inj₂ dl⋘r | _ | pnd .b≤x .(ll⋘ x≤y₁ x≤y₂ l₁⋘r₁ l₂⋘r₂ l₂≃r₂ r₁≃l₂) l≃r = let r≃l = sym≃ l≃r ; dl⋘l = lemma-⋘-≃ dl⋘r r≃l in lemma-drop-⊥ x≤y₁ l₁⋘r₁ dl⋘l lemma-drop-⊥ b≤x (lr⋘ x≤y₁ x≤y₂ l₁⋙r₁ l₂⋘r₂ l₂≃r₂ l₁⋗r₂) dxlr⋘xlr with lemma-perfect dxlr⋘xlr ... | pnd .b≤x .(lr⋘ x≤y₁ x≤y₂ l₁⋙r₁ l₂⋘r₂ l₂≃r₂ l₁⋗r₂) ()
# # Copyright (c) Facebook, Inc. and its affiliates. # # This source code is licensed under the MIT license found in the # LICENSE file in the root directory of this source tree. # import torch import torch.nn as nn # import rlstructures.logging as logging from rlstructures import DictTensor from rlstructures import RL_Agent import time import numpy as np class QAgent(RL_Agent): def __init__(self, model=None, n_actions=None): super().__init__() self.model = model self.n_actions = n_actions def update(self, sd): self.model.load_state_dict(sd) def initial_state(self, agent_info, B): return DictTensor({}) def __call__(self, state, observation, agent_info=None, history=None): B = observation.n_elems() agent_step = None q = self.model(observation["frame"]) qs, action = q.max(1) raction = torch.tensor( np.random.randint(low=0, high=self.n_actions, size=(action.size()[0])) ) epsilon = agent_info["epsilon"] r = torch.rand(action.size()[0]) mask = r.lt(epsilon).float() action = mask * raction + (1 - mask) * action action = action.long() agent_do = DictTensor({"action": action, "q": q}) return agent_do, DictTensor({}) class DQMLP(nn.Module): def __init__(self, n_observations, n_actions, n_hidden): super().__init__() self.linear = nn.Linear(n_observations, n_hidden) self.linear_adv = nn.Linear(n_hidden, n_actions) self.linear_value = nn.Linear(n_hidden, 1) self.n_actions = n_actions def forward_common(self, frame): z = torch.tanh(self.linear(frame)) return z def forward_value(self, z): return self.linear_value(z) def forward_advantage(self, z): adv = self.linear_adv(z) advm = adv.mean(1).unsqueeze(-1).repeat(1, self.n_actions) return adv - advm def forward(self, state): z = self.forward_common(state) v = self.forward_value(z) adv = self.forward_advantage(z) return v + adv
lemma (in ring_of_sets) positive_cong_eq: "(\<And>a. a \<in> M \<Longrightarrow> \<mu>' a = \<mu> a) \<Longrightarrow> positive M \<mu>' = positive M \<mu>"
using ConstrainedDynamics # Parameters ex = [1.;0.;0.] l1 = 1.0 x, y = .1, .1 b1 = Box(x, y, l1, l1, color = RGBA(1., 1., 0.)) vert11 = [0.;0.;l1 / 2] vert12 = -vert11 verts = [[vert11];[vert12]] # Initial orientation offset1 = pi / 4 offset2 = pi / 2 phi1 = pi / 8 q1 = Quaternion(RotX(phi1)) qoff1 = Quaternion(RotX(offset1)) qoff2 = Quaternion(RotX(offset2)) N = 10 # Links origin = Origin{Float64}() links = [Body(b1) for i = 1:4 * N] function fourbar(links, vertices, axis) j1 = EqualityConstraint(Revolute(links[1], links[2], vertices[1], vertices[2], axis)) j2 = EqualityConstraint(Revolute(links[2], links[3], vertices[3], vertices[2], axis), Cylindrical(links[2], links[4], vertices[2], vertices[2], axis)) j3 = EqualityConstraint(Revolute(links[4], links[5], vertices[3], vertices[2], axis)) j4 = EqualityConstraint(Revolute(links[3], links[5], vertices[3], vertices[3], axis)) return j1, j2, j3, j4 end function initfourbar!(mechanism, links, vertices, Δq1, Δq2) setPosition!(mechanism, links[1], links[2], p1 = vertices[1], p2 = vertices[2], Δq = Δq1) setPosition!(mechanism, links[2], links[3], p1 = vertices[3], p2 = vertices[2], Δq = inv(Δq2) * inv(Δq2)) setPosition!(mechanism, links[2], links[4], p1 = vertices[2], p2 = vertices[2], Δq = inv(Δq2) * inv(Δq2)) setPosition!(mechanism, links[4], links[5], p1 = vertices[3], p2 = vertices[2], Δq = Δq2 * Δq2) end # Constraints constraints = [fourbar([origin;links[1:4]], [[zeros(3)];verts], ex)...] for i = 2:N push!(constraints, fourbar(links[(i - 1) * 4:i * 4], [[vert12];verts], ex)...) end shapes = [b1] mech = Mechanism(origin, links, constraints, shapes = shapes) if N > 1 initfourbar!(mech, [origin;links[1:4]], [[zeros(3)];verts], q1 * qoff1, q1) else initfourbar!(mech, [origin;links[1:4]], [[zeros(3)];verts], q1 * qoff2, q1) end for i = 2:N - 1 initfourbar!(mech, links[(i - 1) * 4:i * 4], [[vert12];verts], q1 / q1, q1) end if N > 1 initfourbar!(mech, links[(N - 1) * 4:N * 4], [[vert12];verts], inv(qoff1) * qoff2, q1) end simulate!(mech,save = true) visualize!(mech)
from __future__ import absolute_import from __future__ import print_function from __future__ import division import numpy as np import torch from torch.nn import functional as F def compute_distance_matrix(input1, input2, metric='euclidean'): """A wrapper function for computing distance matrix. Args: input1 (torch.Tensor): 2-D feature matrix. input2 (torch.Tensor): 2-D feature matrix. metric (str, optional): "euclidean" or "cosine". Default is "euclidean". Returns: torch.Tensor: distance matrix. Examples:: >>> from torchreid import metrics >>> input1 = torch.rand(10, 2048) >>> input2 = torch.rand(100, 2048) >>> distmat = metrics.compute_distance_matrix(input1, input2) >>> distmat.size() # (10, 100) """ # check input assert isinstance(input1, torch.Tensor) assert isinstance(input2, torch.Tensor) assert input1.dim() == 2, 'Expected 2-D tensor, but got {}-D'.format(input1.dim()) assert input2.dim() == 2, 'Expected 2-D tensor, but got {}-D'.format(input2.dim()) assert input1.size(1) == input2.size(1) if metric == 'euclidean': distmat = euclidean_squared_distance(input1, input2) elif metric == 'cosine': distmat = cosine_distance(input1, input2) else: raise ValueError( 'Unknown distance metric: {}. ' 'Please choose either "euclidean" or "cosine"'.format(metric) ) return distmat def euclidean_squared_distance(input1, input2): """Computes euclidean squared distance. Args: input1 (torch.Tensor): 2-D feature matrix. input2 (torch.Tensor): 2-D feature matrix. Returns: torch.Tensor: distance matrix. """ m, n = input1.size(0), input2.size(0) distmat = torch.pow(input1, 2).sum(dim=1, keepdim=True).expand(m, n) + \ torch.pow(input2, 2).sum(dim=1, keepdim=True).expand(n, m).t() distmat.addmm_(1, -2, input1, input2.t()) return distmat def cosine_distance(input1, input2): """Computes cosine distance. Args: input1 (torch.Tensor): 2-D feature matrix. input2 (torch.Tensor): 2-D feature matrix. Returns: torch.Tensor: distance matrix. """ input1_normed = F.normalize(input1, p=2, dim=1) input2_normed = F.normalize(input2, p=2, dim=1) distmat = 1 - torch.mm(input1_normed, input2_normed.t()) return distmat
#ifndef OPENMC_TALLIES_FILTER_POLAR_H #define OPENMC_TALLIES_FILTER_POLAR_H #include <cmath> #include <vector> #include <gsl/gsl> #include "openmc/tallies/filter.h" namespace openmc { //============================================================================== //! Bins the incident neutron polar angle (relative to the global z-axis). //============================================================================== class PolarFilter : public Filter { public: //---------------------------------------------------------------------------- // Constructors, destructors ~PolarFilter() = default; //---------------------------------------------------------------------------- // Methods std::string type() const override {return "polar";} void from_xml(pugi::xml_node node) override; void get_all_bins(const Particle* p, int estimator, FilterMatch& match) const override; void to_statepoint(hid_t filter_group) const override; std::string text_label(int bin) const override; //---------------------------------------------------------------------------- // Accessors void set_bins(gsl::span<double> bins); private: //---------------------------------------------------------------------------- // Data members std::vector<double> bins_; }; } // namespace openmc #endif // OPENMC_TALLIES_FILTER_POLAR_H
using Random, SnpArrays, DataFrames, GLM using LinearAlgebra, Test, TraitSimulation using BenchmarkTools, Statistics Random.seed!(1234) function generateSPDmatrix(n) A = rand(n) m = 0.5 * (A * A') PDmat = m + (n * Diagonal(ones(n))) end function generateRandomVCM(n::Int64, p::Int64, d::Int64, m::Int64) # n-by-p design matrix X = randn(n, p) # p-by-d mean component regression coefficient for each trait B = hcat(ones(p, 1), rand(p)) V = ntuple(x -> zeros(n, n), m) for i = 1:m-1 copy!(V[i], generateSPDmatrix(n)) end copy!(V[end], Diagonal(ones(n))) # last covarianec matrix is identity # a tuple of m d-by-d variance component parameters Σ = ntuple(x -> zeros(d, d), m) for i in 1:m copy!(Σ[i], generateSPDmatrix(d)) end return(X, B, Σ, V) end import TraitSimulation: snparray_simulation n = 10 p = 2 d = 2 m = 2 df = DataFrame(x = repeat([0.0], n), y = repeat([1.0], n)) dist = Normal() link = IdentityLink() # test for correct mean formula formulas = ["x + 5y", "2 + log(y)"] # what happens when there is no variables and just a scalar formulas2 = ["25", "738"] @test unique(mean_formula(formulas2[1], df)[1]) == [25] evaluated_output = [repeat([5.0], n), repeat([2.0], n)] for i in eachindex(formulas) return(@test mean_formula(formulas[i], df)[1] == evaluated_output[i]) end X, B, Σ, V = generateRandomVCM(n, p, d, m) test_vcm1 = VCMTrait(X, B, @vc Σ[1] ⊗ V[1] + Σ[2] ⊗ V[2]) test_vcm1_equivalent = VCMTrait(X, B, [Σ...], [V...]) @test test_vcm1_equivalent.vc[1].V == V[1] @test typeof(test_vcm1.vc[1]) == VarianceComponent varcomp = @vc Σ[1] ⊗ V[1] + Σ[2] ⊗ V[2] varcomp_onevc = @vc Σ[1] ⊗ V[1] @test eltype(varcomp_onevc) == VarianceComponent test_vcm1 = VCMTrait(X, B, varcomp) # check if the structure is correct @test eltype(varcomp) == VarianceComponent # check if returns the appropriate decomposition of the VarianceComponent type @test vcobjtuple(varcomp)[1][1] == Σ[1] # test provided simulate coefficients function x = rand(n) @test eltype(TraitSimulation.simulate_effect_size(x)) == Float64 effectsizes = rand(n) our_names = ["sarah"; "janet"; "hua"; "eric"; "ken"; "jenny"; "ben"; "chris"; "juhyun"; "xinkai"] whats_my_mean_formula = TraitSimulation.FixedEffectTerms(effectsizes, our_names) data_frame_2 = DataFrame(ones(n, n), :auto) rename!(data_frame_2, Symbol.(our_names)) @test unique(mean_formula(whats_my_mean_formula, data_frame_2)[1])[1] == sum(effectsizes) @test_throws ErrorException TraitSimulation.__default_behavior(test_vcm1) test_vcm1_new = VCMTrait(X, B, @vc Σ[1] ⊗ V[1] + Σ[2] ⊗ V[2]) test_vcm1_equiv_new = VCMTrait(X, B, [Σ...], [V...]) ## nsim = 10 using Statistics Y_new = simulate(test_vcm1_new, nsim) Y_vecd = zeros(n*d, nsim) for i in 1:nsim Y_vecd[:, i] = vec(Y_new[i]) end simulated_mean = Statistics.mean(Y_vecd, dims = 2) Z_new = Y_vecd .- simulated_mean emp_cov = (Z_new * Z_new') * inv(nsim) true_mu = vec(test_vcm1_new.μ) true_Ω = zeros(n*d, n*d) for i = 1:m global true_Ω += kron(Σ[i], V[i]) end vs = diag(true_Ω) for i = 1:20 @test isapprox(simulated_mean[i], true_mu[i], atol=sqrt(vs[i] / nsim) * 8.0) end for i = 1:20, j = 1:20 @test isapprox(emp_cov[i,j], true_Ω[i,j], atol=sqrt(vs[i] * vs[j]) * 10.0 / sqrt(nsim)) end
{-# OPTIONS --without-K --safe #-} open import Categories.Category module Categories.Object.Initial {o ℓ e} (C : Category o ℓ e) where open import Level open import Relation.Binary.PropositionalEquality as ≡ using (_≡_) open Category C open import Categories.Morphism C using (Epi; _≅_) open import Categories.Morphism.IsoEquiv C using (_≃_; ⌞_⌟) open import Categories.Morphism.Reasoning C open HomReasoning record Initial : Set (o ⊔ ℓ ⊔ e) where field ⊥ : Obj ! : {A : Obj} → (⊥ ⇒ A) !-unique : ∀ {A} → (f : ⊥ ⇒ A) → ! ≈ f !-unique₂ : ∀ {A} → (f g : ⊥ ⇒ A) → f ≈ g !-unique₂ f g = begin f ≈˘⟨ !-unique f ⟩ ! ≈⟨ !-unique g ⟩ g ∎ where open HomReasoning ⊥-id : (f : ⊥ ⇒ ⊥) → f ≈ id ⊥-id f = !-unique₂ f id open Initial to-⊥-is-Epi : ∀ {A : Obj} {i : Initial} → (f : A ⇒ ⊥ i) → Epi f to-⊥-is-Epi {_} {i} _ = λ g h _ → !-unique₂ i g h up-to-iso : (i₁ i₂ : Initial) → ⊥ i₁ ≅ ⊥ i₂ up-to-iso i₁ i₂ = record { from = ! i₁ ; to = ! i₂ ; iso = record { isoˡ = ⊥-id i₁ _; isoʳ = ⊥-id i₂ _ } } transport-by-iso : (i : Initial) → ∀ {X} → ⊥ i ≅ X → Initial transport-by-iso i {X} i≅X = record { ⊥ = X ; ! = ! i ∘ to ; !-unique = λ h → begin ! i ∘ to ≈⟨ !-unique i (h ∘ from) ⟩∘⟨refl ⟩ (h ∘ from) ∘ to ≈⟨ cancelʳ isoʳ ⟩ h ∎ } where open _≅_ i≅X up-to-iso-unique : ∀ i i′ → (iso : ⊥ i ≅ ⊥ i′) → up-to-iso i i′ ≃ iso up-to-iso-unique i i′ iso = ⌞ !-unique i _ ⌟ up-to-iso-invˡ : ∀ {t X} {i : ⊥ t ≅ X} → up-to-iso t (transport-by-iso t i) ≃ i up-to-iso-invˡ {t} {i = i} = up-to-iso-unique t (transport-by-iso t i) i up-to-iso-invʳ : ∀ {t t′} → ⊥ (transport-by-iso t (up-to-iso t t′)) ≡ ⊥ t′ up-to-iso-invʳ {t} {t′} = ≡.refl
import math
import numpy as np
import matplotlib.pyplot as plt

# user enters the angle in degrees
degrees1 = float(input('enter the inclination angle of mirror 1 in degrees:'))
# if statement to interpret an acute angle
if degrees1 < 90:
    degrees1 = degrees1
else:
    degrees1 = 180 - degrees1
# convert to radians because of the trigonometric functions
x = math.radians(degrees1)
# matrix A - first transformation, corresponding to the first reflection
A = np.array([[math.cos(2*x), math.sin(2*x)], [math.sin(2*x), -1*math.cos(2*x)]])
# user enters the angle of mirror 2 in degrees
degrees2 = float(input('enter the inclination angle of mirror 2:'))
# if statement to always interpret an obtuse angle
if degrees2 > 90:
    degrees2 = degrees2
else:
    degrees2 = 180 - degrees2
# conversion
y = math.radians(degrees2)
# matrix B - second transformation, corresponding to the second reflection
B = np.array([[math.cos(2*y), math.sin(2*y)], [math.sin(2*y), -1*math.cos(2*y)]])
# multiply the matrices B x A to apply the transformations
C = B @ A
# variable a points in the x direction, always positive so that the vector is incident
a = float(input('x(+) direction component of the light-ray vector:'))
if a > 0:
    a = a
else:
    a = -a
# y component always negative so the ray is incident
b = float(input('y(-) direction component of the light-ray vector:'))
if b < 0:
    b = b
else:
    b = -b
# column matrix representing the direction of the light-ray vector incident
# on mirror 1
D = np.array([[a], [b]])
# matrix representing the direction of the last reflected vector
E = C @ D
h = E[0]   # element 11 of matrix E - x direction of the last reflected vector
f = E[-1]  # element 12 of matrix E - y direction of the last reflected vector
s = f'the direction of the last reflected vector is: ( {h} , {f} )'
print(s)
c = a*(math.cos(2*x)) + b*(math.sin(2*x))  # x direction of the second vector
d = a*(math.sin(2*x)) - b*(math.cos(2*x))  # y direction of the second vector
mod = (c*c + d*d)**0.5  # norm of vector 2
m1 = math.tan(y)
m2 = d/c  # slope of the line in the direction of vec2
# x coordinate of the intersection of the line toward v2 with the line of mirror 2
p1 = (m1*(-70))/(m2-m1)
# y coordinate of the intersection of the line toward v2 with the line of mirror 2
p2 = (m1*m2*(-70))/(m2-m1)
dist = (p1*p1 + p2*p2)**0.5  # distance from the origin to the intersection point
c = c/mod  # unit vector c
d = d/mod  # unit vector d
c = c*dist  # so that c touches the mirror with its tip
d = d*dist  # likewise, so that d touches it
n = f'the direction of the second reflected vector is: ( {c} , {d} )'
print(n)

def draw(vec1, vec2, vec3):  # to plot the 3 vectors
    '''
    Function that plots the 3 vectors.
    First, an array is created with all the parameters that will be used.
    Then, the content of this array is partially split into 4 variables
    used by the quiver function, which draws the vector images.
    Next, the plot is generated with the x and y axes and their respective limits.
    Finally, the mirror lines and the legend are drawn and the plot is displayed.
    '''
    array = np.array([[-6*a, -6*b, vec1[0], vec1[1]],
                      [0, 0, vec2[0], vec2[1]],
                      [vec2[0], vec2[1], vec3[0], vec3[1]]])
    X, Y, U, V = zip(*array)
    plt.figure()
    plt.ylabel('Y axis')
    plt.xlabel('X axis')
    ax = plt.gca()
    ax.quiver(X, Y, U, V, color='b', angles='xy', scale_units='xy', scale=1)
    ax.set_xlim([-400, 300])
    ax.set_ylim([-400, 400])
    x1 = np.arange(-400, 300, 1)
    plt.plot(x1, x1*(math.tan(x)), label='Mirror 1')
    x2 = np.arange(-400, 300, 1)
    plt.plot(x2, m1*(x2-70), label='Mirror 2')
    plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='lower left', ncol=2, mode='expand', borderaxespad=0.)
    plt.draw()
    plt.show()

# call and execute the function
draw([6*a, 6*b], [c, d], [19*h, 19*f])
<a href="https://colab.research.google.com/github/ewuerfel66/DS-Unit-1-Sprint-3-Statistical-Tests-and-Experiments/blob/master/EricWuerfel_LS_DS5_143_Assignment.ipynb" target="_parent"></a> # Lambda School Data Science Module 143 ## Introduction to Bayesian Inference !['Detector! What would the Bayesian statistician say if I asked him whether the--' [roll] 'I AM A NEUTRINO DETECTOR, NOT A LABYRINTH GUARD. SERIOUSLY, DID YOUR BRAIN FALL OUT?' [roll] '... yes.'](https://imgs.xkcd.com/comics/frequentists_vs_bayesians.png) *[XKCD 1132](https://www.xkcd.com/1132/)* ## Prepare - Bayes' Theorem and the Bayesian mindset Bayes' theorem possesses a near-mythical quality - a bit of math that somehow magically evaluates a situation. But this mythicalness has more to do with its reputation and advanced applications than the actual core of it - deriving it is actually remarkably straightforward. ### The Law of Total Probability By definition, the total probability of all outcomes (events) if some variable (event space) $A$ is 1. That is: $$P(A) = \sum_n P(A_n) = 1$$ The law of total probability takes this further, considering two variables ($A$ and $B$) and relating their marginal probabilities (their likelihoods considered independently, without reference to one another) and their conditional probabilities (their likelihoods considered jointly). A marginal probability is simply notated as e.g. $P(A)$, while a conditional probability is notated $P(A|B)$, which reads "probability of $A$ *given* $B$". The law of total probability states: $$P(A) = \sum_n P(A | B_n) P(B_n)$$ In words - the total probability of $A$ is equal to the sum of the conditional probability of $A$ on any given event $B_n$ times the probability of that event $B_n$, and summed over all possible events in $B$. ### The Law of Conditional Probability What's the probability of something conditioned on something else? To determine this we have to go back to set theory and think about the intersection of sets: The formula for actual calculation: $$P(A|B) = \frac{P(A \cap B)}{P(B)}$$ Think of the overall rectangle as the whole probability space, $A$ as the left circle, $B$ as the right circle, and their intersection as the red area. Try to visualize the ratio being described in the above formula, and how it is different from just the $P(A)$ (not conditioned on $B$). We can see how this relates back to the law of total probability - multiply both sides by $P(B)$ and you get $P(A|B)P(B) = P(A \cap B)$ - replaced back into the law of total probability we get $P(A) = \sum_n P(A \cap B_n)$. This may not seem like an improvement at first, but try to relate it back to the above picture - if you think of sets as physical objects, we're saying that the total probability of $A$ given $B$ is all the little pieces of it intersected with $B$, added together. The conditional probability is then just that again, but divided by the probability of $B$ itself happening in the first place. \begin{align} P(A|B) &= \frac{P(A \cap B)}{P(B)}\\ \Rightarrow P(A|B)P(B) &= P(A \cap B)\\ P(B|A) &= \frac{P(B \cap A)}{P(A)}\\ \Rightarrow P(B|A)P(A) &= P(B \cap A)\\ \Rightarrow P(A|B)P(B) &= P(B|A)P(A) \\ P(A \cap B) &= P(B \cap A)\\ P(A|B) &= \frac{P(B|A) \times P(A)}{P(B)} \end{align} ### Bayes Theorem Here is is, the seemingly magic tool: $$P(A|B) = \frac{P(B|A)P(A)}{P(B)}$$ In words - the probability of $A$ conditioned on $B$ is the probability of $B$ conditioned on $A$, times the probability of $A$ and divided by the probability of $B$. 
These unconditioned probabilities are referred to as "prior beliefs", and the conditioned probabilities as "updated." Why is this important? Scroll back up to the XKCD example - the Bayesian statistician draws a less absurd conclusion because their prior belief in the likelihood that the sun will go nova is extremely low. So, even when updated based on evidence from a detector that is $35/36 = 0.972$ accurate, the prior belief doesn't shift enough to change their overall opinion. There are many examples of Bayes' theorem - one less absurd example is to apply it to [breathalyzer tests](https://www.bayestheorem.net/breathalyzer-example/). You may think that a breathalyzer test that is 100% accurate for true positives (detecting somebody who is drunk) is pretty good, but what if it also has 8% false positives (indicating somebody is drunk when they're not)? And furthermore, the rate of drunk driving (and thus our prior belief) is 1/1000. What is the likelihood somebody really is drunk if they test positive? Some may guess it's 92% - the difference between the true positives and the false positives. But we have a prior belief of the background/true rate of drunk driving. Sounds like a job for Bayes' theorem! $$ \begin{aligned} P(Drunk | Positive) &= \frac{P(Positive | Drunk)P(Drunk)}{P(Positive)} \\ &= \frac{1 \times 0.001}{0.08} \\ &= 0.0125 \end{aligned} $$ Here $P(Positive)$ is approximated by the 8% false-positive rate, since drunk drivers are rare enough (1 in 1000) that they barely change the denominator. In other words, the likelihood that somebody is drunk given they tested positive with a breathalyzer in this situation is only 1.25% - probably much lower than you'd guess. This is why, in practice, it's important to have a repeated test to confirm (the probability of two false positives in a row is $0.08 * 0.08 = 0.0064$, much lower), and Bayes' theorem has been relevant in court cases where proper consideration of evidence was important. Source: <https://en.wikipedia.org/wiki/Bayes%27_theorem> ## Live Lecture - Deriving Bayes' Theorem, Calculating Bayesian Confidence Notice that $P(A|B)$ appears in the above laws - in Bayesian terms, this is the belief in $A$ updated for the evidence $B$. So all we need to do is solve for this term to derive Bayes' theorem. Let's do it together!
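As a quick sanity check on the arithmetic above, here is a minimal sketch (my own, not from the lecture) that reproduces the 1.25% figure and also shows the full law-of-total-probability denominator for comparison:

```
# Breathalyzer example. The lecture treats P(Positive) as the 8% false-positive
# rate; the full law-of-total-probability denominator is shown alongside.
p_drunk = 1 / 1000            # prior P(Drunk)
p_pos_given_drunk = 1.0       # true-positive rate
p_pos_given_sober = 0.08      # false-positive rate

# Shortcut used above: P(Positive) ~= 0.08
shortcut = p_pos_given_drunk * p_drunk / p_pos_given_sober
print(round(shortcut, 4))     # 0.0125

# Full denominator: P(Pos) = P(Pos|Drunk)P(Drunk) + P(Pos|Sober)P(Sober)
p_pos = p_pos_given_drunk * p_drunk + p_pos_given_sober * (1 - p_drunk)
print(round(p_pos_given_drunk * p_drunk / p_pos, 6))   # ≈ 0.012358
```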
``` # Activity 2 - Use SciPy to calculate Bayesian confidence intervals # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.bayes_mvs.html#scipy.stats.bayes_mvs ``` ``` from scipy import stats import numpy as np np.random.seed(seed=42) coinflips = np.random.binomial(n=1, p=.5, size=100) print(coinflips) ``` [0 1 1 1 0 0 0 1 1 1 0 1 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 1 1 0 1 0 0 1 1 1 0 0 1 0 0 0 0 1 0 1 0 1 1 0 1 1 1 1 1 1 0 0 0 0 0 0 1 0 0 1 0 1 0 1 1 0 0 1 1 1 1 0 0 0 1 1 0 0 0 0 1 1 1 0 0 1 1 1 1 0 1 0 0 0] ``` def confidence_interval(data, confidence=.95): n = len(data) mean = sum(data)/n data = np.array(data) stderr = stats.sem(data) interval = stderr * stats.t.ppf((1 + confidence) / 2.0, n-1) return (mean , mean-interval, mean+interval) ``` ``` confidence_interval(coinflips, confidence=.95) ``` (0.47, 0.3704689875017368, 0.5695310124982632) ``` mean_CI, _, _ = stats.bayes_mvs(coinflips, alpha=.95) mean_CI ``` Mean(statistic=0.47, minmax=(0.37046898750173674, 0.5695310124982632)) ``` ??stats.bayes_mvs ``` ``` coinflips_mean_dist, _, _ = stats.mvsdist(coinflips) coinflips_mean_dist ``` <scipy.stats._distn_infrastructure.rv_frozen at 0x7f7a8c320b38> ``` coinflips_mean_dist.rvs(1000) ``` array([0.47447628, 0.51541425, 0.54722018, 0.4589882 , 0.51501386, 0.53819192, 0.43382292, 0.53546659, 0.47026173, 0.44967562, 0.4621107 , 0.42691904, 0.37324325, 0.47531437, 0.46052277, 0.48711257, 0.52456771, 0.43332181, 0.49545882, 0.44671454, 0.47520117, 0.47047251, 0.41828918, 0.50159477, 0.42965501, 0.45273383, 0.48045849, 0.45342529, 0.48238344, 0.53966291, 0.48230241, 0.48073422, 0.48553525, 0.47962228, 0.41274185, 0.42892633, 0.5170948 , 0.42678096, 0.42249309, 0.51499109, 0.47059199, 0.39903942, 0.41790336, 0.46406817, 0.42232382, 0.42163269, 0.47848227, 0.48232842, 0.4731858 , 0.51077244, 0.3957508 , 0.48504646, 0.49014295, 0.53252732, 0.45495376, 0.47883978, 0.60393033, 0.4492549 , 0.44797902, 0.54782121, 0.43380002, 0.5760073 , 0.36941266, 0.44467418, 0.4939245 , 0.45278835, 0.55635162, 0.48695459, 0.39080983, 0.45948606, 0.2941779 , 0.35950718, 0.44805696, 0.4725126 , 0.42218381, 0.45985418, 0.47545393, 0.44317753, 0.46267013, 0.4458753 , 0.44204707, 0.51334913, 0.50914181, 0.49923748, 0.46895674, 0.43892798, 0.45984946, 0.44984632, 0.53560791, 0.45865723, 0.48646824, 0.55937503, 0.41464303, 0.50701457, 0.46934196, 0.37681534, 0.42748113, 0.49812825, 0.48278895, 0.4964763 , 0.3891381 , 0.43956744, 0.48413544, 0.45477873, 0.48725027, 0.49464113, 0.50575373, 0.47327346, 0.47520013, 0.58130199, 0.5845843 , 0.46478398, 0.4258629 , 0.52948199, 0.48513203, 0.49687534, 0.41137211, 0.46621924, 0.3914774 , 0.48360179, 0.38619449, 0.48277886, 0.47026304, 0.45226139, 0.47583911, 0.51800201, 0.48765985, 0.47519588, 0.56197092, 0.41764152, 0.49955199, 0.4476301 , 0.53072591, 0.51503605, 0.54521753, 0.51825987, 0.38392617, 0.46969675, 0.40735953, 0.41644585, 0.46704857, 0.44673322, 0.44172829, 0.39682358, 0.56863866, 0.49382431, 0.46425614, 0.43441607, 0.45352793, 0.43280667, 0.49838641, 0.42134069, 0.39030482, 0.46056071, 0.43477593, 0.48030697, 0.46963763, 0.58135074, 0.41707759, 0.54735952, 0.40234266, 0.44587394, 0.43824819, 0.34994202, 0.45715098, 0.48171551, 0.49707708, 0.56201387, 0.43796178, 0.48736057, 0.48396275, 0.4137432 , 0.43730294, 0.44127354, 0.49414193, 0.37391405, 0.48951459, 0.49203495, 0.48750347, 0.4535989 , 0.4826649 , 0.45727017, 0.35957717, 0.52627891, 0.48671508, 0.5146115 , 0.40126273, 0.49351532, 0.47899387, 0.41170621, 0.47372827, 0.45349404, 0.45541059, 0.44761163, 
0.50985422, 0.38946749, 0.38924167, 0.477608 , 0.47523283, 0.48057958, 0.55631265, 0.47918939, 0.41974198, 0.59314567, 0.46179892, 0.52111564, 0.39858206, 0.39293582, 0.45738699, 0.51094648, 0.55605523, 0.42063349, 0.4553239 , 0.47003479, 0.47070228, 0.46428309, 0.46828548, 0.55559626, 0.54327956, 0.48485723, 0.39503943, 0.45169487, 0.51312502, 0.43261878, 0.44449548, 0.45205734, 0.50467902, 0.55919291, 0.50052268, 0.39552378, 0.44554284, 0.54545754, 0.41285254, 0.37820216, 0.4433361 , 0.51902109, 0.45162443, 0.57347586, 0.47871392, 0.40561444, 0.48058706, 0.56598937, 0.48203328, 0.42126387, 0.368201 , 0.45272922, 0.43585457, 0.54199909, 0.42996167, 0.474737 , 0.44127776, 0.39061556, 0.46844006, 0.38929335, 0.49974341, 0.38804905, 0.46641358, 0.52312717, 0.49613505, 0.44815583, 0.49130684, 0.51080517, 0.41943377, 0.52715474, 0.51901749, 0.40173031, 0.48157307, 0.45698766, 0.54181905, 0.5128087 , 0.4738456 , 0.53469041, 0.58876563, 0.37350851, 0.44841936, 0.41531469, 0.46828303, 0.41863695, 0.52030773, 0.59197971, 0.47809192, 0.39139708, 0.43735205, 0.44473506, 0.54450722, 0.4877697 , 0.48142576, 0.4282081 , 0.43828492, 0.49536959, 0.46056192, 0.51769419, 0.44435832, 0.2833451 , 0.44709257, 0.39013597, 0.49752388, 0.48941684, 0.51950258, 0.43841402, 0.461676 , 0.4364845 , 0.47132422, 0.5159512 , 0.40504394, 0.54411978, 0.48126155, 0.53768622, 0.44783793, 0.45195711, 0.53732665, 0.48919172, 0.54916543, 0.38184422, 0.3839936 , 0.50047602, 0.4827814 , 0.45782355, 0.57051467, 0.51586565, 0.41297865, 0.49549503, 0.4867028 , 0.49218095, 0.47941133, 0.4179382 , 0.43990307, 0.43267506, 0.51435874, 0.45603811, 0.44264597, 0.5258102 , 0.42116497, 0.59109176, 0.45889992, 0.42601209, 0.41855971, 0.51763858, 0.53603004, 0.55891986, 0.51308977, 0.47539497, 0.57980186, 0.45166958, 0.4360487 , 0.4160565 , 0.46894016, 0.42544503, 0.4718965 , 0.44509759, 0.4553363 , 0.51417409, 0.40125374, 0.40141203, 0.52444062, 0.38433692, 0.53755945, 0.49124436, 0.44092107, 0.48664193, 0.49809931, 0.35939896, 0.45019818, 0.51452836, 0.44702996, 0.39014382, 0.4742493 , 0.45802077, 0.54117637, 0.50917065, 0.48864846, 0.45513837, 0.46638664, 0.46289285, 0.474597 , 0.47679289, 0.53272938, 0.4273865 , 0.53018322, 0.48459184, 0.46054965, 0.46864369, 0.47940797, 0.47963348, 0.50495819, 0.43197032, 0.46684607, 0.48552696, 0.45851019, 0.52062144, 0.45638092, 0.4765386 , 0.40863058, 0.42996211, 0.43454883, 0.47898572, 0.44026601, 0.47275271, 0.39097285, 0.58139265, 0.49820118, 0.45762952, 0.43127976, 0.42291755, 0.47822454, 0.54221029, 0.41974753, 0.42307496, 0.4404098 , 0.54071199, 0.47650072, 0.52908201, 0.43292955, 0.52911544, 0.40416927, 0.51208142, 0.43676583, 0.59252479, 0.50098008, 0.52513111, 0.43895871, 0.48582562, 0.43385598, 0.51551279, 0.49560729, 0.4116628 , 0.47181415, 0.44020566, 0.48571059, 0.40538225, 0.55172833, 0.47509918, 0.49899901, 0.42421471, 0.43601874, 0.44018693, 0.5304447 , 0.43289087, 0.476795 , 0.41250698, 0.38083118, 0.58788278, 0.46971184, 0.45125409, 0.47414778, 0.4974292 , 0.46069729, 0.42235771, 0.52285515, 0.59676334, 0.4705739 , 0.44988487, 0.47274685, 0.37493384, 0.42223226, 0.49987446, 0.46030573, 0.44077887, 0.43844871, 0.47083241, 0.49024836, 0.49153355, 0.40008594, 0.53218928, 0.43465945, 0.51603003, 0.39652748, 0.41985494, 0.53091204, 0.40977991, 0.46225922, 0.41771646, 0.43867606, 0.38712168, 0.58344414, 0.48316133, 0.47170139, 0.47396495, 0.45185247, 0.43308114, 0.53336288, 0.44655484, 0.52674401, 0.49790806, 0.45346429, 0.49966867, 0.43964157, 0.5347767 , 0.49514565, 0.49845113, 
0.40907362, 0.4988595 , 0.45864058, 0.40669431, 0.46175527, 0.5317036 , 0.50075453, 0.48638633, 0.49108861, 0.471713 , 0.48383151, 0.37494445, 0.50690883, 0.43971337, 0.45880774, 0.48454783, 0.41166892, 0.48265585, 0.43225349, 0.39086731, 0.50734673, 0.42186418, 0.48059622, 0.55935268, 0.39964071, 0.47968735, 0.44197047, 0.5523577 , 0.5194387 , 0.46967629, 0.46114995, 0.51547562, 0.41173477, 0.42714514, 0.54287129, 0.47917532, 0.52899054, 0.52902622, 0.55529675, 0.39260093, 0.47808929, 0.5227214 , 0.49686402, 0.41385472, 0.46877338, 0.51290447, 0.42081246, 0.48763814, 0.46488503, 0.48815416, 0.51874676, 0.44349542, 0.35529184, 0.48235864, 0.38829235, 0.41629837, 0.49353573, 0.42837918, 0.43078333, 0.51282674, 0.49055841, 0.48687382, 0.4024712 , 0.45031963, 0.49709223, 0.54003902, 0.43554303, 0.53183842, 0.486558 , 0.45249906, 0.51469574, 0.42098649, 0.45018556, 0.37915825, 0.55746338, 0.50905594, 0.49594724, 0.51327984, 0.4526535 , 0.48421933, 0.58224419, 0.47947599, 0.46611747, 0.52237733, 0.46120613, 0.47167891, 0.49850872, 0.4311296 , 0.47774032, 0.45230789, 0.35840294, 0.44659314, 0.51071187, 0.44069454, 0.55320876, 0.39988476, 0.49035529, 0.48985295, 0.44694677, 0.45049715, 0.51842605, 0.37342115, 0.49553783, 0.504753 , 0.49098663, 0.4218805 , 0.52620235, 0.4827884 , 0.44288146, 0.45916104, 0.49631062, 0.51646158, 0.48630302, 0.37307539, 0.41265663, 0.49024564, 0.46467903, 0.47432696, 0.47325263, 0.48613461, 0.51737977, 0.49745443, 0.43226223, 0.51386209, 0.54409309, 0.42166633, 0.45683158, 0.49113578, 0.47195372, 0.46461796, 0.43912749, 0.4570565 , 0.3981925 , 0.45969044, 0.45356353, 0.49012313, 0.46231133, 0.42623662, 0.52407443, 0.4489394 , 0.36793671, 0.50496954, 0.4459393 , 0.47762308, 0.45557782, 0.42430219, 0.46342973, 0.49607806, 0.42021132, 0.47986594, 0.43995321, 0.47310004, 0.46830237, 0.6095986 , 0.47867353, 0.50938602, 0.44119682, 0.41853036, 0.54135276, 0.3737122 , 0.54427806, 0.4251556 , 0.41348475, 0.41993261, 0.52989098, 0.462017 , 0.51346035, 0.56842082, 0.44612654, 0.4650062 , 0.46543262, 0.37686614, 0.50593036, 0.38350366, 0.41051578, 0.5477685 , 0.37572632, 0.40238182, 0.37546585, 0.46061846, 0.34000573, 0.48379551, 0.4102443 , 0.46841925, 0.48235662, 0.4521498 , 0.50212742, 0.46316433, 0.52688369, 0.39250788, 0.44273506, 0.60936845, 0.46729244, 0.48883352, 0.45995963, 0.52954227, 0.50744425, 0.5702215 , 0.4322026 , 0.52990493, 0.51626873, 0.4946539 , 0.5082119 , 0.49850001, 0.46857659, 0.37680806, 0.42922449, 0.4714559 , 0.47006439, 0.46103295, 0.38448095, 0.51598495, 0.51233212, 0.39171157, 0.47295778, 0.42799097, 0.31999544, 0.43777493, 0.51361593, 0.48083238, 0.49048985, 0.37754081, 0.44390605, 0.43851769, 0.45367766, 0.43004286, 0.39810176, 0.52425887, 0.5132496 , 0.46711766, 0.5371266 , 0.49789306, 0.47440018, 0.48044375, 0.46275003, 0.32760769, 0.43969128, 0.53361144, 0.50404316, 0.45660878, 0.39614646, 0.5306167 , 0.41652062, 0.47978152, 0.44229313, 0.38296985, 0.4576275 , 0.51705712, 0.46901214, 0.57001682, 0.50423767, 0.45819868, 0.47460827, 0.52497238, 0.47857488, 0.34748446, 0.46412874, 0.43491473, 0.47103418, 0.45914633, 0.4506799 , 0.48795458, 0.49316724, 0.41450339, 0.45860263, 0.48590433, 0.43353272, 0.47182887, 0.57180098, 0.51429135, 0.36982541, 0.45893858, 0.44927164, 0.47235794, 0.58265714, 0.478167 , 0.49140614, 0.46531855, 0.50984351, 0.4827639 , 0.45424265, 0.5015955 , 0.40968418, 0.49247972, 0.44791535, 0.43087735, 0.5079453 , 0.39380662, 0.38242163, 0.49299987, 0.41208436, 0.39335919, 0.45047663, 0.40227791, 0.55079414, 0.51004866, 
0.46107434, 0.44619307, 0.40856549, 0.45213558, 0.34076475, 0.44746926, 0.50151825, 0.47512069, 0.44447584, 0.51219988, 0.41074984, 0.52785383, 0.37876592, 0.51172916, 0.51014685, 0.5534993 , 0.4745541 , 0.49519006, 0.50658855, 0.51617094, 0.55167752, 0.52080632, 0.48118055, 0.4497149 , 0.43954218, 0.51988854, 0.46973126, 0.49375973, 0.45512846, 0.4670614 , 0.51303675, 0.56130338, 0.49572266, 0.41883276, 0.44433704, 0.48790926, 0.50805016, 0.47367689, 0.41275913, 0.53529189, 0.4393815 , 0.44798915, 0.47777408, 0.41248419, 0.44957019, 0.44111031, 0.47174419, 0.54963872, 0.37056181, 0.42624852, 0.42007032, 0.47428632, 0.44194326, 0.53917971, 0.51442597, 0.39569021, 0.52024419, 0.45939336, 0.51860329, 0.4722443 , 0.49892044, 0.45117057, 0.4687997 , 0.48571876, 0.44523495, 0.47080056, 0.40803152, 0.4873699 , 0.42852689, 0.5576894 , 0.44129667, 0.48988382, 0.47362904, 0.53799032, 0.43168666, 0.47733785, 0.42619853, 0.52326113, 0.40582344, 0.3752876 , 0.44395294, 0.43526222, 0.44753265, 0.4335338 , 0.50883482, 0.43585868, 0.41200332, 0.36602514, 0.49333628, 0.40624739, 0.45769445, 0.39957451, 0.51484301, 0.45243127, 0.49550451, 0.42045661, 0.51606437, 0.45627401, 0.45883254, 0.40159611, 0.39777387, 0.47548967, 0.37814115, 0.52078691, 0.33737182, 0.49376712, 0.42425788, 0.49313496, 0.51393986, 0.33733477, 0.61310296, 0.4179583 , 0.48252206, 0.48776153, 0.52774351, 0.48715976, 0.42955008, 0.45700497, 0.43991845, 0.45648164, 0.37957614, 0.39961823, 0.43406117, 0.53066173, 0.505644 , 0.48217836, 0.49081739, 0.50618318, 0.4919582 , 0.4350554 , 0.48444719, 0.49467042, 0.4789851 , 0.46491457, 0.42527415, 0.42989511, 0.47073809, 0.48158046, 0.49392888, 0.52054431, 0.47831854, 0.42700402, 0.49578621, 0.52062022, 0.43633741, 0.42671723, 0.48976181, 0.41265183, 0.45424771, 0.44743247, 0.50648504, 0.46491952, 0.46800249, 0.3828106 , 0.49856068, 0.51699582, 0.48166775, 0.56224234, 0.49789532, 0.46000952, 0.49959486, 0.46650966, 0.42187689, 0.47007628, 0.51639958, 0.49191647, 0.50020547, 0.51637026, 0.54369003, 0.42976058, 0.48321571, 0.47720863, 0.44630105, 0.42892523, 0.41553131, 0.46174644, 0.51717268, 0.48445115, 0.44363908, 0.486894 , 0.45906175, 0.43506012, 0.44476889, 0.38141848, 0.40464606, 0.44997479, 0.44733676, 0.45134756, 0.46831684, 0.53670241, 0.47772302, 0.40203076, 0.46568984, 0.39886807, 0.55712779, 0.45029969, 0.45676884, 0.55615739, 0.53303594, 0.45722586, 0.55022421, 0.48445879, 0.58295224, 0.3706536 , 0.48182352, 0.42183159, 0.44396719, 0.473292 , 0.53361495, 0.47621795, 0.44416008, 0.43392763, 0.42497657, 0.48451716]) ## Assignment - Code it up! Most of the above was pure math - now write Python code to reproduce the results! This is purposefully open ended - you'll have to think about how you should represent probabilities and events. You can and should look things up, and as a stretch goal - refactor your code into helpful reusable functions! Specific goals/targets: 1. Write a function `def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk)` that reproduces the example from lecture, and use it to calculate and visualize a range of situations 2. Explore `scipy.stats.bayes_mvs` - read its documentation, and experiment with it on data you've tested in other ways earlier this week 3. Create a visualization comparing the results of a Bayesian approach to a traditional/frequentist approach 4. 
In your own words, summarize the difference between Bayesian and Frequentist statistics. If you're unsure where to start, check out [this blog post on Bayes theorem with Python](https://dataconomy.com/2015/02/introduction-to-bayes-theorem-with-python/) - you could and should create something similar! Stretch goals: - Apply a Bayesian technique to a problem you previously worked on (in an assignment or project) from a frequentist (standard) perspective - Check out [PyMC3](https://docs.pymc.io/) (note this goes beyond hypothesis tests into modeling) - read the guides and work through some examples - Take PyMC3 further - see if you can build something with it! ### Imports ``` from scipy import stats import numpy as np import pandas as pd import matplotlib.pyplot as plt ``` ### Bayes' Theorem Definition ``` def prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk): return((prob_positive_drunk*prob_drunk_prior) / prob_positive) ``` ``` # Let's check it prob_drunk_prior = 1/1000 prob_positive = 8/100 prob_positive_drunk = 1 ``` ``` prob_drunk_given_positive(prob_drunk_prior, prob_positive, prob_positive_drunk) ``` 0.0125 ### Likelihood over Ranges of `prob_drunk_prior` and `prob_positive` ``` df = pd.DataFrame() ``` ``` drunk_list = list(range(1, 51)) positive_list = list(range(1, 10, 1)) drunk_list = [i/100 for i in drunk_list] positive_list = [i/100 for i in positive_list] ``` ``` drunk = [] positive = [] likelihood = [] ``` ``` for x in drunk_list: for y in positive_list: drunk.append(x) positive.append(y) likelihood.append(prob_drunk_given_positive(x, y, 1)) ``` ``` df['prob_drunk'] = drunk df['prob_positive'] = positive df['likelihood'] = likelihood ``` ``` fig = plt.figure(figsize=(10, 10)) ax = df.plot.scatter('prob_drunk', 'prob_positive', c='likelihood') # Title ax.text(x=-.05, y=.11, s="Likelihood the Breathalyzer is Correct", fontsize=14, fontweight='bold'); # Set x-axis label plt.xlabel(x=.5, y=-.1, xlabel="The proportion of citizens drunk at any given time", fontsize=12, fontweight="bold", labelpad=15); plt.xticks([.1, .2, .3, .4, .5], labels=['.1', '.2', '.3', '.4', '.5']); # Set y-axis label plt.ylabel(x=1, y=.5, ylabel="False Positive Rate", fontsize=12, fontweight="bold", labelpad=15); plt.yticks([0, .02, .04, .06, .08, .1]); ``` ### Recursive Definition ``` i = 1 df = pd.DataFrame() result = [] ind = [] post_list = [] ``` ``` def prob_drunk_given_positive_recur(prob_drunk_prior, prob_positive, prob_positive_drunk, n): global result global i post_prob = (prob_positive_drunk*prob_drunk_prior) / (prob_positive + prob_drunk_prior) ind.append(int(i)) post_list.append(post_prob) #print(i, post_prob) i += 1 while i < n: prob_drunk_given_positive_recur(post_prob, prob_positive, prob_positive_drunk, n) return(result) ``` ``` prob_drunk_given_positive_recur(prob_drunk_prior, prob_positive, prob_positive_drunk, 20); ``` ``` df['index'] = ind df['post_prob'] = post_list ``` ``` df.plot.scatter('index', 'post_prob') ``` Running the test over and over again will cause the posterior probability to converge to 1 - (False Positive Rate) as we would expect.
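For comparison, here is a minimal sketch (my own, not the formula used above) of repeated updating in which each posterior is fed back in as the next prior using the full law-of-total-probability denominator. With this update the posterior climbs toward 1 after a handful of positive tests, while the simplified denominator used above settles near 1 - (False Positive Rate) = 0.92, matching the statement above.

```
def update(prior, p_pos_given_drunk=1.0, p_pos_given_sober=0.08):
    # One Bayes update for a single positive test, with the full denominator.
    evidence = p_pos_given_drunk * prior + p_pos_given_sober * (1 - prior)
    return p_pos_given_drunk * prior / evidence

p = 1 / 1000
for n in range(1, 11):
    p = update(p)
    print(n, round(p, 6))
```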
``` df.head() ```

| | index | post_prob |
|---|---|---|
| 0 | 1 | 0.012346 |
| 1 | 2 | 0.133690 |
| 2 | 3 | 0.625626 |
| 3 | 4 | 0.886625 |

### Frequentist vs. Bayesian Statistics Frequentist and Bayesian statistics are two approaches to the same problems, built on fundamentally different philosophies. Frequentists are primarily concerned with the long-run frequency with which events happen, while Bayesians are concerned with quantifying our own uncertainty about events. Bayesian approaches therefore factor in our prior beliefs and observations more naturally than frequentist approaches, updating them as new evidence arrives. ## Resources - [Worked example of Bayes rule calculation](https://en.wikipedia.org/wiki/Bayes'_theorem#Examples) (helpful as it fully breaks out the denominator) - [Source code for mvsdist in scipy](https://github.com/scipy/scipy/blob/90534919e139d2a81c24bf08341734ff41a3db12/scipy/stats/morestats.py#L139) ``` def prob_drunk_given_positive_recur(prob_drunk_prior, prob_positive, prob_positive_drunk, n): global result post_prob = (prob_positive_drunk*prob_drunk_prior) / (prob_positive + prob_drunk_prior) global i i += 1 while i < n: result.append(prob_drunk_given_positive_recur(post_prob, prob_positive, prob_positive_drunk, n)) return(i, result) # This will give x, y. ``` ``` pd.DataFrame(columns=['x', 'y']) ```
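Assignment goal 3 above asks for a visualization comparing the two approaches. Here is a minimal sketch (my own, assuming the `coinflips` array and the `confidence_interval` helper defined earlier in this notebook) that plots the frequentist t-interval next to the interval returned by `scipy.stats.bayes_mvs`:

```
import matplotlib.pyplot as plt
from scipy import stats

# Frequentist 95% interval from the helper defined earlier in the notebook.
freq_mean, freq_lo, freq_hi = confidence_interval(coinflips, confidence=.95)
# Bayesian 95% interval for the mean from scipy (first element of the triple).
bayes_mean, (bayes_lo, bayes_hi) = stats.bayes_mvs(coinflips, alpha=.95)[0]

fig, ax = plt.subplots(figsize=(6, 3))
ax.errorbar([0], [freq_mean], yerr=[[freq_mean - freq_lo], [freq_hi - freq_mean]],
            fmt='o', capsize=5, label='Frequentist 95% CI')
ax.errorbar([1], [bayes_mean], yerr=[[bayes_mean - bayes_lo], [bayes_hi - bayes_mean]],
            fmt='o', capsize=5, label='Bayesian 95% interval')
ax.set_xticks([0, 1])
ax.set_xticklabels(['frequentist', 'bayesian'])
ax.legend()
plt.show()
```

For fair coinflips the two intervals come out nearly identical, which is itself a useful observation for the write-up.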
```python from IPython.display import Image from IPython.core.display import HTML from sympy import *; x,h,y,t = symbols("x h y t") Image(url= "https://i.imgur.com/8q79db3.png") ``` ```python # source https://nathancarter.github.io/how2data/site/how-to-find-the-critical-numbers-of-a-function-in-python-using-sympy/ f = x**6 + (6/x) d = diff(f) #solve(Eq(d,0)), f.subs(x,2.5), f.subs(x,6), ``` (246.540625000000, 46657) ```python p = plot(f,d,show = False, legend = True, )#xlim = (-0.4,0.4), ylim = (-110,110)) p[1].line_color = 'red' p.show() ``` ```python Image(url= "https://i.imgur.com/2wIXsU7.png") ```
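A small follow-up sketch (my own, building on the `f` and `d` defined above) that carries out the commented-out `solve` call and confirms the single real critical number:

```python
# Solve f'(x) = 6*x**5 - 6/x**2 = 0; clearing the denominator gives x**7 = 1,
# whose only real root is x = 1.
critical_numbers = [c for c in solve(Eq(d, 0), x) if c.is_real]
print(critical_numbers)   # expected: [1]
print(f.subs(x, 1))       # value of f at the critical number: 7
```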
The infinity norm (infnorm) of a vector is less than or equal to its Euclidean norm.
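A one-line proof sketch (my own addition, reading "norm" as the Euclidean norm): for a vector $x$,
$$\lVert x \rVert_\infty = \max_i \lvert x_i \rvert = \sqrt{\max_i x_i^2} \le \sqrt{\textstyle\sum_i x_i^2} = \lVert x \rVert_2.$$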
\documentclass{article} \usepackage{fancyhdr} \usepackage{extramarks} \usepackage{minted} \usepackage{color} \usepackage[english]{babel} % % Basic Document Settings % \topmargin=-0.45in \evensidemargin=0in \oddsidemargin=0in \textwidth=6.5in \textheight=9.0in \headsep=0.25in \linespread{1.1} \pagestyle{fancy} \lhead{\hmwkAuthorName} \chead{\hmwkClass\ (\hmwkClassInstructor): \hmwkTitle} \rhead{\firstxmark} \lfoot{\lastxmark} \cfoot{\thepage} \renewcommand\headrulewidth{0.4pt} \renewcommand\footrulewidth{0.4pt} \setlength\parindent{0pt} \setlength{\parskip}{1em} % % Minted Settings % \setminted{frame=lines} \setminted{linenos} \setminted{autogobble} % % Create Problem Sections % \newcommand{\enterProblemHeader}[1]{ \nobreak\extramarks{}{Problem \arabic{#1} continued on next page\ldots}\nobreak{} \nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{} } \newcommand{\exitProblemHeader}[1]{ \nobreak\extramarks{Problem \arabic{#1} (continued)}{Problem \arabic{#1} continued on next page\ldots}\nobreak{} \stepcounter{#1} \nobreak\extramarks{Problem \arabic{#1}}{}\nobreak{} } \setcounter{secnumdepth}{0} \newcounter{partCounter} \newcounter{homeworkProblemCounter} \setcounter{homeworkProblemCounter}{1} \nobreak\extramarks{Problem \arabic{homeworkProblemCounter}}{}\nobreak{} % % Homework Problem Environment % % This environment takes an optional argument. When given, it will adjust the % problem counter. This is useful for when the problems given for your % assignment aren't sequential. See the last 3 problems of this template for an % example. % \newenvironment{homeworkProblem}[1][-1]{ \ifnum#1>0 \setcounter{homeworkProblemCounter}{#1} \fi \section{Problem \arabic{homeworkProblemCounter}} \setcounter{partCounter}{1} \enterProblemHeader{homeworkProblemCounter} }{ \exitProblemHeader{homeworkProblemCounter} } % % Homework Details % - Title % - Due date % - Class % - Section/Time % - Instructor % - Author % \newcommand{\hmwkTitle}{Assignment\ \#3} \newcommand{\hmwkDueDate}{March 28, 2019} \newcommand{\hmwkClass}{CSE 6431} \newcommand{\hmwkClassInstructor}{Professor Qin} \newcommand{\hmwkAuthorName}{\textbf{Jeremy Grifski}} % % Title Page % \title{ \vspace{2in} \textmd{\textbf{\hmwkClass:\ \hmwkTitle}}\\ \normalsize\vspace{0.1in}\small{Due\ on\ \hmwkDueDate\ at 11:10am}\\ \vspace{0.1in}\large{\textit{\hmwkClassInstructor}} \vspace{3in} } \author{\hmwkAuthorName} \date{} \renewcommand{\part}[1]{\textbf{\large Part \Alph{partCounter}}\stepcounter{partCounter}\\} % Alias for the Solution section header \newcommand{\solution}{\textbf{\large Solution}} \begin{document} \maketitle \pagebreak \begin{homeworkProblem} \textbf{ In the class, we discussed a voting mechanism for reading and writing from replicated data. If $R$ is the number of messages exchanged for reading and $W$ is the number of messages exchanged for writing, then we know that $R + W$ is $O(N)$, where N is the number of sites on which data is replicated. Develop a scheme in which both read and write operations will require $O(sqrt(N))$ messages. Include arguments on why you think it will work correctly. } In order to accomplish $O(sqrt(N))$, we need to be able to accomplish reads and writes without sending anymore than $sqrt(N)$ messages. To do this, we need to divide the nodes into pools such that there are $sqrt(N)$ pools. For example, if there are 9 nodes, we'd need to form 3 pools of 3 nodes. For writes, we only need to check in with one node in each pool. 
That way, every pool has the latest copy of the data. Then, reads only have to check in with their local pool, since that pool is guaranteed to hold one copy of the latest written data. The following serves as an example of the pooling described above: \begin{itemize} \item Pool A: \{Node 1, Node 2, Node 3\} \item Pool B: \{Node 4, Node 5, Node 6\} \item Pool C: \{Node 7, Node 8, Node 9\} \end{itemize} If Node 1 wants to make a read, it checks in with itself, Node 2, and Node 3. If Node 1 wants to make a write, it checks in with itself, Node 4, and Node 7. \end{homeworkProblem} \pagebreak \begin{homeworkProblem} \textbf{ Show that when checkpoints are taken after every $K$ ($K > 1$) messages are sent, the recovery mechanism can suffer from the domino effect. Assume that a process takes a checkpoint immediately after sending the $K$th message, but before doing anything else. } As long as $K > 1$, the system can suffer the domino effect. Since messages are always sent between checkpoints, a rollback can always produce an orphan message (a message whose receipt is recorded but whose send has been rolled back). An orphan message forces another recovery, which can create yet another orphan message. Naturally, the domino effect follows. \end{homeworkProblem} \pagebreak \begin{homeworkProblem} \textbf{ This question is about replication in distributed systems. We are given a coding scheme, in which a file of size $F$ is broken into $n$ parts of size $F/m$, such that any $m$ of these parts are sufficient to reconstruct the file. Here, $n \geq m$. If we have $n$ sites in a distributed system, we can store one such part on each site. What potential advantages and disadvantages does this scheme have, as compared to the normal replication of files? Show how Gifford's voting algorithm needs to be modified. Clearly state the constraints on read and write quorum that are required. Give a brief argument as to how the correctness will be maintained. } In terms of advantages, the new scheme is quite scalable. If we want a more reliable system, we can increase the ratio of $n$ to $m$. In other words, we can tolerate significantly more faults because we only need $m$ parts of a file to reconstruct that file. Meanwhile, we have the option to reduce the ratio of $n$ to $m$ for the sake of speed. In other words, fewer sites are needed for data replication, so fewer read/write messages are needed overall. In addition, since we only need to acquire $m$ chunks of data at any given time, we can proceed as soon as we receive those $m$ chunks. On the flip side, it's possible that we grab only $m$ chunks of data and reconstruct a corrupt file. Without proper corruption detection, there's no way of knowing that the reconstructed file is corrupt. In addition, file reconstruction probably carries some overhead that the client has to worry about. In terms of Gifford's voting algorithm, we can relax the following conditions: \begin{itemize} \item $r + w > v$ \item $w > v/2$ \end{itemize} Due to the reduction in information needed to reconstruct a file, we don't need such strict requirements. We just need $r + w > m$ and $w > m/2$ so that the information needed for reconstruction can be obtained safely using Gifford's voting algorithm. We can verify that the new constraints are correct because they follow the same logic as the original algorithm. Here, we're assuming one site gets one vote. \end{homeworkProblem} \end{document}
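To make the counting in Problem 1 concrete, here is a minimal Python sketch (my own illustration, not part of the submitted solution) of the pooling scheme: writes contact one representative per pool, reads contact the reader's whole pool, and every read quorum intersects every write quorum, all with $\sqrt{N}$ messages each.

```python
import math

def make_pools(n):
    """Partition node ids 0..n-1 into sqrt(n) pools of sqrt(n) nodes (n assumed a perfect square)."""
    k = math.isqrt(n)
    assert k * k == n, "this sketch assumes N is a perfect square"
    return [list(range(i * k, (i + 1) * k)) for i in range(k)]

def write_quorum(pools):
    """A write contacts one representative node in every pool: sqrt(N) messages."""
    return {pool[0] for pool in pools}

def read_quorum(pools, node):
    """A read contacts every node in the reader's own pool: sqrt(N) messages."""
    return next(set(pool) for pool in pools if node in pool)

pools = make_pools(9)              # [[0,1,2], [3,4,5], [6,7,8]]
writers = write_quorum(pools)      # one node per pool now holds the latest data
for node in range(9):
    readers = read_quorum(pools, node)
    assert readers & writers       # every read sees at least one latest copy
    print(node, sorted(readers), "read msgs:", len(readers), "write msgs:", len(writers))
```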
/- Question: is weak induction enough to prove strong induction? Answer: yes! -/ import order.bounded_lattice -- For the has_bot / has_top notation classes. ---------------------------------------------------------------- section API -- The operations needed on the boolalg A. def A : Type := sorry instance A_has_bot : has_bot A := sorry instance A_has_top : has_top A := sorry instance A_has_subset : has_subset A := sorry instance A_has_inter : has_inter A := sorry instance A_has_union : has_union A := sorry def singlet : A → Prop := sorry -- The property of being a singleton. /- subset_singlet is the crucial property, and should be provable as a consequence of the fact that singletons have size 1 and there are no integers between 0 and 1. -/ lemma subset_singlet (X e : A) : (singlet e) → (X ⊆ e) → (X = ⊥ ∨ X = e) := sorry /- Decidable equality, which is true classically even without assuming that one of the subsets is contained in the other, but should also be provable directly by checking whether (size X) = (size Y). -/ lemma subset_dec_eq (X Y : A) : (X ⊆ Y) → (X = Y) ∨ (X ≠ Y) := sorry -- The lemmas needed about boolalg elements. Not that many! lemma bot_union (X : A) : ⊥ ∪ X = X := sorry lemma subset_bot (X : A) : (X ⊆ ⊥) → (X = ⊥) := sorry lemma inter_subset_right (X Y : A) : (X ∩ Y) ⊆ Y := sorry lemma inter_eq_left (X Y : A) : (X ⊆ Y) → (X ∩ Y = X) := sorry lemma inter_distrib_union_right (X Y Z : A) : X ∩ (Y ∪ Z) = (X ∩ Y) ∪ (X ∩ Z) := sorry end /-section-/ API ---------------------------------------------------------------- section weak_induction /- A formulation of weak induction, which crawls up the poset, singleton by singleton. Provable from the axiom that every nonempty subset of the boolalg contains a singleton. -/ lemma weak_induction (P : A → Prop) : (P ⊥) → -- Base case. (forall (e Y : A), (singlet e) → (P Y) → P (e ∪ Y)) → -- Induction step. (forall (Z : A), P Z) := sorry end /-section-/ weak_induction ---------------------------------------------------------------- section strong_induction -- (below P Y) says that property P is true everywhere strictly below Y. def below (P : A → Prop) (Y : A) : Prop := forall (X : A), (X ⊆ Y) → (X ≠ Y) → (P X) -- (augment P) says that (below P Y) can be upgraded to (P Y), for all Y. def augment (P : A → Prop) : Prop := forall (Y : A), (below P Y) → (P Y) -- The statement of strong induction, specialized to a single position in the boolalg. def strong_at (Y : A) : Prop := forall (P : A → Prop), (augment P) → (P Y) /- The crucial part of the proof that weak induction implies strong induction. Consider the subalg of elements below Y, and the subalg of elements below (e ∪ Y). Because of subset_singlet, we know that (subalg (e ∪ Y)) is covered by two copies of (subalg Y) : the elements of the form X ⊆ Y, and the elements of the form (e ∪ X) for X ⊆ Y. So, we package up the pair of propositions (P X) and (P (e ∪ X)) as a new proposition on (subalg Y), and call it (Q X). This will allow us to use strong induction at position Y, for Q, to prove that strong induction works at position (e ∪ Y), for P. -/ lemma pair_up (P : A → Prop) (e : A) : (singlet e) → let Q : (A → Prop) := fun Y, (P Y) ∧ P (e ∪ Y) in (augment P) → (augment Q) := fun h_singlet h_augment Y h_below, /- First use (augment P) to upgrade (below P Y) to (P Y). The fact that (below Q Y) implies (below P Y) is almost immediate. 
-/ let h_Y : (P Y) := h_augment Y (fun X h_ss h_ne, and.left (h_below X h_ss h_ne)) in and.intro h_Y /- Then use (augment P) to upgrade (below P (e ∪ Y)) to (P (e ∪ Y)). The fact that (below Q Y) implies (below P (e ∪ Y)) is *not* immediate. There are three cases to consider: 1. Elements of the form X ⊆ Y with X ≠ Y: (P X) is directly implied by (Q X). 2. The element Y: we proved above that (augment P) works to prove (P Y). 3. Elements of the form (e ∪ X) with X ⊆ Y and X ≠ Y: (P (e ∪ X)) is directly implied by (Q X). -/ (h_augment (e ∪ Y) (fun X h_ss h_ne, -- Break up X ⊆ (e ∪ Y) into the part under e and the part under Y. let h_union := calc X = X ∩ (e ∪ Y) : (inter_eq_left X (e ∪ Y) h_ss).symm ... = (X ∩ {e}) ∪ (X ∩ Y) : inter_distrib_union_right X e Y in or.elim (subset_dec_eq (X ∩ Y) Y (inter_subset_right X Y)) (or.elim (subset_singlet (X ∩ {e}) e h_singlet (inter_subset_right X e)) -- Case 2 described above: X = Y. (fun (h₁ : X ∩ {e} = ⊥) (h₂ : X ∩ Y = Y), @eq.rec A Y P h_Y X ( calc Y = X ∩ Y : h₂.symm ... = ⊥ ∪ (X ∩ Y) : (bot_union (X ∩ Y)).symm ... = (X ∩ {e}) ∪ (X ∩ Y) : by rw [h₁] ... = X : h_union.symm)) -- Case impossible, because X ≠ (e ∪ Y) (fun (h₁ : X ∩ {e} = e) (h₂ : X ∩ Y = Y), false.elim (h_ne ( calc X = (X ∩ {e}) ∪ (X ∩ Y) : h_union ... = e ∪ Y : by rw [h₁, h₂])))) (or.elim (subset_singlet (X ∩ {e}) e h_singlet (inter_subset_right X e)) -- Case 1 described above: X ⊂ Y. (fun (h₁ : X ∩ {e} = ⊥) (h₂ : X ∩ Y ≠ Y), let h₃ := calc X = (X ∩ {e}) ∪ (X ∩ Y) : h_union ... = ⊥ ∪ (X ∩ Y) : by rw [h₁] ... = X ∩ Y : bot_union (X ∩ Y) in and.left (h_below X (calc X = X ∩ Y : h₃ ... ⊆ Y : inter_subset_right X Y) (calc X = X ∩ Y : h₃ ... ≠ Y : h₂))) -- Case 3 described above: X = e ∪ X' with X' ⊂ Y. (fun (h₁ : X ∩ {e} = e) (h₂ : X ∩ Y ≠ Y), @eq.rec A (e ∪ (X ∩ Y)) P (and.right (h_below (X ∩ Y) (inter_subset_right X Y) h₂)) X ( calc e ∪ (X ∩ Y) = (X ∩ {e}) ∪ (X ∩ Y) : by rw [h₁] ... = X : h_union.symm))) )) /- Strong induction works at position ⊥, vacuously. -/ lemma strong_base : strong_at ⊥ := fun P aug, aug ⊥ (fun X h_ss h_ne, false.elim (h_ne (subset_bot X h_ss))) /- As explained above, strong induction at position Y implies strong induction at position (e ∪ Y). -/ lemma strong_step (e Y : A) : (singlet e) → (strong_at Y) → (strong_at (e ∪ Y)) := fun h_singlet h_strong P h_augment, let Q : (A → Prop) := fun Y, (P Y) ∧ P (e ∪ Y) in and.right (h_strong Q (pair_up P e h_singlet h_augment)) /- So weak induction implies strong induction at every position. -/ lemma strong_induction (P : A → Prop) : (augment P) → (forall (Z : A), P Z) := fun h_augment Z, weak_induction strong_at strong_base strong_step Z P h_augment end /-section-/ strong_induction
[STATEMENT] lemma eqOn_singl[simp]: "eqOn {p} env env1 \<longleftrightarrow> env p = env1 p" [PROOF STATE] proof (prove) goal (1 subgoal): 1. eqOn {p} env env1 = (env p = env1 p) [PROOF STEP] unfolding eqOn_def [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<forall>pa. pa \<in> {p} \<longrightarrow> env pa = env1 pa) = (env p = env1 p) [PROOF STEP] by auto
// Copyright Jean Pierre Cimalando 2019. // Distributed under the Boost Software License, Version 1.0. // (See accompanying file LICENSE.md or copy at // http://www.boost.org/LICENSE_1_0.txt) #pragma once #include <gsl/gsl> template <class Ch> bool string_starts_with(gsl::basic_string_span<const Ch> text, gsl::basic_string_span<const Ch> prefix); template <class Ch> bool string_ends_with(gsl::basic_string_span<const Ch> text, gsl::basic_string_span<const Ch> suffix); #include "strings.tcc"
Formal statement is: corollary\<^marker>\<open>tag unimportant\<close> contour_integral_uniform_limit_circlepath: assumes "\<forall>\<^sub>F n::'a in F. (f n) contour_integrable_on (circlepath z r)" and "uniform_limit (sphere z r) f l F" and "\<not> trivial_limit F" "0 < r" shows "l contour_integrable_on (circlepath z r)" "((\<lambda>n. contour_integral (circlepath z r) (f n)) \<longlongrightarrow> contour_integral (circlepath z r) l) F" Informal statement is: If $f_n$ is a sequence of functions that are all integrable over the circle of radius $r$ centered at $z$, and if $f_n$ converges uniformly to $f$ on the circle of radius $r$ centered at $z$, then $f$ is integrable over the circle of radius $r$ centered at $z$, and the integral of $f$ over the circle of radius $r$ centered at $z$ is the limit of the integrals of $f_n$ over the circle of radius $r$ centered at $z$.
```python import numpy as np import sympy as sp import numpy.linalg as la ``` ```python def finite_difference_nd(f_str, s_str, xvec, h, scheme=0): ''' scheme: 0 for forward, 1 for backward, 2 for central ''' formula = sp.sympify(f_str) symbols = sp.symbols(s_str) result = [] for i in range(len(xvec)): temp = formula # This kind of copy is enough temp_plus = formula temp_minus = formula for j in range(len(symbols)): temp = temp.subs(symbols[j], xvec[j]) if i == j: temp_plus = temp_plus.subs(symbols[j], xvec[j]+h) temp_minus = temp_minus.subs(symbols[j], xvec[j]-h) else: temp_plus = temp_plus.subs(symbols[j], xvec[j]) temp_minus = temp_minus.subs(symbols[j], xvec[j]) if scheme == 0: result.append(float(((temp_plus-temp)/h).evalf())) elif scheme == 1: result.append(float(((temp-temp_minus)/h).evalf())) else: result.append(float(((temp_plus-temp_minus)/h/2).evalf())) return result ``` ```python # @Before tol = 10**-7 ``` ```python # Test case backward finite difference method expected = np.array([1, 2.9, 2]) actual = np.array(finite_difference_nd('x*z+y**2*z+y', 'x y z', [1, 1, 1], 0.1, scheme=1)) assert la.norm(expected - actual) < tol ``` ```python # Test case forward finite difference method expected = np.array([1, 3.1, 2]) actual = finite_difference_nd('x*y*z+y*y+z', 'x y z', [1, 1, 1], 0.1, 0) assert la.norm(expected - actual) < tol ``` ```python # Workspace finite_difference_nd('x*y*z**2+x+1', 'x y z', [1, 1, 1], 0.1, 1) ``` [2.0000000000000018, 1.0000000000000009, 1.8999999999999995] ```python ```
/- Copyright (c) 2021 Yury Kudryashov. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Yury Kudryashov -/ import data.finset.option import data.pfun /-! # Image of a `finset α` under a partially defined function In this file we define `part.to_finset` and `finset.pimage`. We also prove some trivial lemmas about these definitions. ## Tags finite set, image, partial function -/ variables {α β : Type*} namespace part /-- Convert a `o : part α` with decidable `part.dom o` to `finset α`. -/ def to_finset (o : part α) [decidable o.dom] : finset α := o.to_option.to_finset @[simp] lemma mem_to_finset {o : part α} [decidable o.dom] {x : α} : x ∈ o.to_finset ↔ x ∈ o := by simp [to_finset] @[simp] theorem to_finset_none [decidable (none : part α).dom] : none.to_finset = (∅ : finset α) := by simp [to_finset] @[simp] theorem to_finset_some {a : α} [decidable (some a).dom] : (some a).to_finset = {a} := by simp [to_finset] @[simp] lemma coe_to_finset (o : part α) [decidable o.dom] : (o.to_finset : set α) = {x | x ∈ o} := set.ext $ λ x, mem_to_finset end part namespace finset variables [decidable_eq β] {f g : α →. β} [∀ x, decidable (f x).dom] [∀ x, decidable (g x).dom] {s t : finset α} {b : β} /-- Image of `s : finset α` under a partially defined function `f : α →. β`. -/ def pimage (f : α →. β) [∀ x, decidable (f x).dom] (s : finset α) : finset β := s.bUnion (λ x, (f x).to_finset) @[simp] lemma mem_pimage : b ∈ s.pimage f ↔ ∃ (a ∈ s), b ∈ f a := by simp [pimage] @[simp, norm_cast] lemma coe_pimage : (s.pimage f : set β) = f.image s := set.ext $ λ x, mem_pimage @[simp] lemma pimage_some (s : finset α) (f : α → β) [∀ x, decidable (part.some $ f x).dom] : s.pimage (λ x, part.some (f x)) = s.image f := by { ext, simp [eq_comm] } lemma pimage_congr (h₁ : s = t) (h₂ : ∀ x ∈ t, f x = g x) : s.pimage f = t.pimage g := by { subst s, ext y, simp [h₂] { contextual := tt } } /-- Rewrite `s.pimage f` in terms of `finset.filter`, `finset.attach`, and `finset.image`. -/ lemma pimage_eq_image_filter : s.pimage f = (filter (λ x, (f x).dom) s).attach.image (λ x, (f x).get (mem_filter.1 x.coe_prop).2) := by { ext x, simp [part.mem_eq, and.exists, -exists_prop] } lemma pimage_union [decidable_eq α] : (s ∪ t).pimage f = s.pimage f ∪ t.pimage f := coe_inj.1 $ by simp only [coe_pimage, pfun.image_union, coe_union] @[simp] lemma pimage_empty : pimage f ∅ = ∅ := by { ext, simp } lemma pimage_subset {t : finset β} : s.pimage f ⊆ t ↔ ∀ (x ∈ s) (y ∈ f x), y ∈ t := by simp [subset_iff, @forall_swap _ β] @[mono] lemma pimage_mono (h : s ⊆ t) : s.pimage f ⊆ t.pimage f := pimage_subset.2 $ λ x hx y hy, mem_pimage.2 ⟨x, h hx, hy⟩ lemma pimage_inter [decidable_eq α] : (s ∩ t).pimage f ⊆ s.pimage f ∩ t.pimage f := by simp only [← coe_subset, coe_pimage, coe_inter, pfun.image_inter] end finset
\documentclass[a4paper,12pt,oneside]{book} %-------------------------------Start of the Preable------------------------------------------------ \usepackage[english]{babel} \usepackage{tocloft} \renewcommand\cftchapafterpnum{\vskip10pt} \renewcommand\cftsecafterpnum{\vskip15pt} \usepackage{blindtext} %packagr for hyperlinks \usepackage{hyperref} \hypersetup{ colorlinks=true, linkcolor=black, filecolor=magenta, urlcolor=cyan, } \urlstyle{same} %use of package fancy header \usepackage{fancyhdr} \setlength\headheight{26pt} \fancyhf{} %\rhead{\includegraphics[width=1cm]{logo}} \lhead{\rightmark} \rhead{\includegraphics[width=1cm]{logo}} \fancyfoot[RE, RO]{\thepage} \fancyfoot[CE, CO]{\href{http://www.e-yantra.org}{www.e-yantra.org}} \pagestyle{fancy} %use of package for section title formatting \usepackage{titlesec} \titleformat{\chapter} {\Large\bfseries} % format {} % label {0pt} % sep {\huge} % before-code %use of package tcolorbox for colorful textbox \usepackage[most]{tcolorbox} \tcbset{colback=cyan!5!white,colframe=cyan!75!black,halign title = flush center} \newtcolorbox{mybox}[1]{colback=cyan!5!white, colframe=cyan!75!black,fonttitle=\bfseries, title=\textbf{\Large{#1}}} %use of package marginnote for notes in margin \usepackage{marginnote} %use of packgage watermark for pages %\usepackage{draftwatermark} %\SetWatermarkText{\includegraphics{logo}} \usepackage[scale=2,opacity=0.1,angle=0]{background} \backgroundsetup{ contents={\includegraphics{logo}} } %use of newcommand for keywords color \usepackage{xcolor} \newcommand{\keyword}[1]{\textcolor{red}{\textbf{#1}}} %package for inserting pictures \usepackage{graphicx} %package for highlighting \usepackage{color,soul} %new command for table \newcommand{\head}[1]{\textnormal{\textbf{#1}}} %----------------------End of the Preamble--------------------------------------- \begin{document} %---------------------Title Page------------------------------------------------ \begin{titlepage} \raggedright {\Large eYSIP2016\\[1cm]} {\Huge\scshape WEB MONITORING FOR GREENHOUSE \\[.1in]} \vfill {\underline{\large{Interns:}}} \\ \begin{quote} \large{Ankit Gala} \large{Email: [email protected]} \large{Mobile: 7208760344} \end{quote} \begin{quote} \large{Neel Rami} \large{Email: [email protected]} \large{Mobile: 9029585939} \end{quote} \vspace{0.5cm} {\underline{\textbf{Mentors:}}} \\ \begin{quote} \large{Jayant Solanki} \large{Email: [email protected]} \end{quote} \begin{flushright} {\large Duration of Internship: $ 21/05/2016-10/07/2016 $ \\} \end{flushright} {\itshape 2016, e-Yantra Publication} \end{titlepage} \tableofcontents %------------------------------------------------------------------------------- \chapter[Web Monitoring For Greenhouse]{Web Monitoring For Greenhouse} \section{Abstract} \hspace{7mm}This project aims at developing a web portal with the help of which various aspects of the greenhouse such as scheduling a task,switching irrigation valves,visualizing data collected from various devices,managing devices and displaying their status can be controlled and monitored by any user.\\ With the help of this web portal,anyone can remotely access various aspects of greenhouse and control it.This project aims at providing automation to greenhouse systems.\\ The ultimate goal of the project would be to eliminate the user from manually controlling various aspects of greenhouse and provide effective automation through the Internet-of-things approach. 
\newpage \section{Completion status} \begin{itemize} \item{Task Accomplished} \setlength\itemsep{0.2cm} \begin{enumerate} \item{Understanding the current back-end system of the Greenhouse.} \item{Study of Bower,AngularJS,Websocket} \item{Installation of required software.} \item{Creating Login and Signup pages.} \item{Creating an Admin page for managing users.} \item{Creating Device Management page.} \item{Designing a Device Status page with dynamic update.} \item{Controlling irrigation valves using Websocket.} \item{Understanding JavaScript based Charts APIs.} \item{Plotting charts for multiple data.} \item{ Designing a Scheduling Page.} \item{Understanding RTSP.} \item{Designing dashboard} \item{Code Documentation and Project Report.} \end{enumerate} \item{Task Uncompleted} \begin{enumerate} \item{Designing page for Live Video Feed.} \end{enumerate} \par In order to embed RTSP live video feed in webpage,plugins should be used.But the major problem with plugins is that they are browser and OS dependent.Also there is no specific Javascript library which can be used to embed live videos in webpage.So its difficult to embed RTSP live video feed in a webpage. \end{itemize} \newpage \section{Hardware parts} \hspace{7mm}No Hardware parts used. \section{Software used} \begin{enumerate} \item Linux Environment \begin{itemize} \item Setting up the System \begin{enumerate} \item Software used:Ubuntu OS \item Version:Ubuntu 15.04 \item \href{http://old-releases.ubuntu.com/releases/15.04/}{Ubuntu 15.04 Download Link} \end{enumerate} \item Setting up the Server \begin{enumerate} \item Software used: \begin{itemize} \item Apache \item PHP \end{itemize} \item Version: \begin{itemize} \item{Apache version-2.4.10} \item{PHP version-5.6.4} \end{itemize} \item Installation Commands: \begin{itemize} \item{Apache: \\ sudo apt-get update\\ sudo apt-get install apache2} \item {PHP:\\ sudo apt-get install php5 libapache2-mod-php5 php5-mcrypt} \end{itemize} \end{enumerate} \item Setting up the Database \begin{enumerate} \item Software used:MYSQL \item Installation command: \\sudo apt-get install mysql-server libapache2-mod-auth-mysql php5-mysql \end{enumerate} \item{Setting up package manager} \begin{enumerate} \item{Software used:Bower} \item{Note:Bower requires node, npm and git.} \item {Installation command: npm install -g bower} \end{enumerate} \item{Setting up essential libraries} \begin{enumerate} \item{Jquery} \begin{itemize} \item{Installation command: bower install jquery} \end{itemize} \item{Bootstrap} \begin{itemize} \item{Installation command: bower install bootstrap} \end{itemize} \item{AngularJS} \begin{itemize} \item{Installation command: bower install angular} \end{itemize} \item{Angular Material} \begin{itemize} \item{Installation command: bower install angular-material} \end{itemize} \item{amCharts} \begin{itemize} \item{Installation command: bower install amcharts3} \end{itemize} \item{angular-animate} \begin{itemize} \item{Installation command: bower install angular-animate} \end{itemize} \item{angular-aria} \begin{itemize} \item{Installation command: bower install angular-aria} \end{itemize} \item{angular-ui} \begin{itemize} \item{Installation command: bower install angular-bootstrap} \end{itemize} \item{angular-datatables} \begin{itemize} \item{Installation command: bower install angular-datatables} \end{itemize} \item{angular-pagination} \begin{itemize} \item{Installation command: bower install angular-utils-pagination} \end{itemize} \item{bootstrap-table} \begin{itemize} \item{Installation 
command: bower install bootstrap-table} \end{itemize} \item{DataTables} \begin{itemize} \item{Installation command: bower install } \end{itemize} \item{mmenu} \begin{itemize} \item{Installation command: bower install jquery-mmenu} \end{itemize} \end{enumerate} \end{itemize} \item Windows Environment \begin{itemize} \item Setting up the environment \begin{enumerate} \item Software used:Windows OS \item Version:Windows 7 Premium \item \href{https://www.microsoft.com/en-in/software-download/windows7}{Windows 7 Download Link} \end{enumerate} \item Setting up the Server \& Database \begin{enumerate} \item Software used:XAMPP \item Version:XAMPP version 3.2.2 \item \href{https://www.apachefriends.org/download.html}{XAMPP Download Link} \end{enumerate} \item Setting up the Editor \begin{enumerate} \item Software used:Sublime Text \item \href{https://www.sublimetext.com/}{Sublime Text Download Link} \end{enumerate} \item{Setting up package manager} \begin{enumerate} \item{Software used:Bower} \item{Note:Bower requires node, npm and git.} \item {Installation command: npm install -g bower} \item{Note:The above installation command should be wriiten in Git Bash.} \end{enumerate} \item{Setting up essential libraries} \begin{enumerate} \item{Jquery} \begin{itemize} \item{Installation command: bower install jquery} \end{itemize} \item{Bootstrap} \begin{itemize} \item{Installation command: bower install bootstrap} \end{itemize} \item{AngularJS} \begin{itemize} \item{Installation command: bower install angular} \end{itemize} \item{Angular Material} \begin{itemize} \item{Installation command: bower install angular-material} \end{itemize} \item{amCharts} \begin{itemize} \item{Installation command: bower install amcharts3} \end{itemize} \item{angular-animate} \begin{itemize} \item{Installation command: bower install angular-animate} \end{itemize} \item{angular-aria} \begin{itemize} \item{Installation command: bower install angular-aria} \end{itemize} \item{angular-ui} \begin{itemize} \item{Installation command: bower install angular-bootstrap} \end{itemize} \item{angular-datatables} \begin{itemize} \item{Installation command: bower install angular-datatables} \end{itemize} \item{angular-pagination} \begin{itemize} \item{Installation command: bower install angular-utils-pagination} \end{itemize} \item{bootstrap-table} \begin{itemize} \item{Installation command: bower install bootstrap-table} \end{itemize} \item{DataTables} \begin{itemize} \item{Installation command: bower install } \end{itemize} \item{mmenu} \begin{itemize} \item{Installation command: bower install jquery-mmenu} \end{itemize} \end{enumerate} \end{itemize} \section{Assembly of hardware} \hspace{7mm}No Hardware parts used \section{Software and Code} \hspace{7mm}\href{https://github.com/eYSIP-2016/eYSIP-2016-Web-Monitoring-For-Greenhouse}{Github link} for the repository of code.\\ {\underline{Code Explanation}}:\\ \begin{itemize} \setlength\itemsep{0.2cm} \item{Current Greenhouse Setup:} \begin{itemize} \item{Current Greenhouse has columns of troughs containing plants.} \item{Current hardware setup at the Greenhouse has two types of devices, one controls the irrigation valves and the other gets the temperature, humidity, moisture values.} \item{The first type controls 1-10 Irrigation valves at a time.} \item{The second type of device also known as Sensor nodes gathers the environment values.} \item{Irrigation valves and the sensor nodes are placed at every troughs and are placed in different groups.} \end{itemize} \item{Database Structure 
Explanation: } \begin{itemize} \item{devices table: Stores information related to a device such as name,deviceid,description,type etc. } \item{devicestatus table :Stores connectivity status of a device.} \item{feeds table: Stores data such as battery,temperature,moisture etc of a device.} \item{groups table: Stores various groups.} \item{security\_questions table: Stores security question and corresponding id.} \item{switches table: Stores status of a valve's switch.} \item{sensors table: Stores device types.} \item{tasks table: Stores various schedules created by user or machine.} \item{users table: Stores user login credentials.} \end{itemize} \newpage \item{Features:} \begin{enumerate} \setlength\itemsep{0.2cm} \item{Registration and Authentication} \begin{itemize} \setlength\itemsep{0.2cm} \item{Website supports two types of users.} \begin{enumerate} \item{Administrative User} \item{Normal User} \end{enumerate} \item{A user won't be able to access any page unless his/her account has been activated by any administrative user.} \item{If his/her account has been activated by the admin,he/she will be directed to the dashboard.} \item{Efforts have been made to implement a strong password policy.} \item{Password Reset procedure has been implemented using a security question.} \item{During the registration procedure,the user has to select a security question and answer that question.} \item{If a user forgets the password,then he/she has to answer the security question.} \item{If a user forgets his/her username or email or security question's answer,then he/she has to create a new account.} \item{The essential credentials such as password and security question's answer are encrypted an then stored in database.} \item{Accessible to both normal as well as administrative users.} \end{itemize} \vspace{0.3cm} \item{Managing Users} \begin{itemize} \setlength\itemsep{0.2cm} \item{Accessible only to administrative users.} \item{Four types of accounts.} \begin{enumerate} \item{Normal User Account} \item{Administrative User Account} \item{Pending Approval Account } \item{De-activated Account} \end{enumerate} \item{Once a user creates an account,his account will be in pending approval state.} \item{If his/her account is activated by any administrative user,then he/she would be able to access the web portal.} \item{Any administrative user can promote a normal user and make him an administrative user.} \item{Any administrative user can demote another administrative user into a normal user.} \item{Any administrative user can de-activate another administrative user's account or normal user's account.} \end{itemize} \vspace{0.3cm} \item{Device Management} \begin{itemize} \item{Accessible to both normal as well as administrative users.} \item{This page helps to manage various devices.} \item{This page displays basic information of a device such as its name,type,latitude,longitude etc.} \item{We can even edit information of any device such as editing its name,group,device type etc..} \item{The page provides facility for adding a new group and also editing an existing group.} \item{It also provides facility for adding a device type and also editing an existing device type.} \end{itemize} \item{Device Status} \begin{itemize} \item{Accessible to both normal as well as administrative users.} \item{The purpose of this page is for monitoring various devices.} \item{Various types of data such as primary battery and secondary battery for valves and battery value and moisture value for sensor nodes are being displayed 
on this page.} \item{Also connectivity status i.e whether the device is online or offline is being indicated on this page.} \item{The page also shows the time when the device went online or offline.} \item{For each valve,the respective switches and their status i.e whether the switch is open or close has also been demonstrated on the page.} \item{First the user has to select a particular group and the devices belonging to those groups and their respective data is displayed on the page.} \item{All the data such as battery value,connectivity status,switch status etc. is dynamically updated using Websockets.} \end{itemize} \item{Controlling Valves} \setlength\itemsep{0.2cm} \begin{itemize} \item{Accessible to both normal as well as administrative users.} \item{The page aims at controlling irrigation valves using the Websocket.} \item{Any user can easily switch on or switch off an individual switch of any valve.} \item{The user can also specify the duration for which a particular switch of a valve should be switched on.} \item{First,the user has to select appropriate group.} \item{If he wants to open a switch,he can also specify the duration and then simply open the switch.} \end{itemize} \item{Scheduling Page} \setlength\itemsep{0.2cm} \begin{itemize} \item{Accessible to both normal as well as administrative users.} \item{This page displays all the scheduled tasks for a particular group.} \item{The user can also add a schedule for any group with the help of this page.} \item{There are 3 types of schedules:} \setlength\itemsep{0.2cm} \begin{enumerate} \item{Period} \item{Duration} \item{Frequency} \end{enumerate} \item{Steps for adding a schedule: } \setlength\itemsep{0.2cm} \begin{enumerate} \item{Select appropriate group} \item{Select type of schedule} \item{If the selected type of schedule is period, then add start time and end time.} \item{If the selected type is duration,then add start time and the duration of the schedule.} \item{And if the scheduled task is frequency,then add start time,duration and the frequency.} \end{enumerate} \item{Even a schedule can be deleted or disabled.} \item{Later the disabled schedule can be enabled again.} \end{itemize} \item{Data Visualization} \begin{itemize} \item{Accessible to both normal as well as administrative users.} \item{This page helps the user to visualize several types of data such as battery,moisture,temperature etc in the form of line charts.} \item{ Data can also be plotted in real time using websockets and analysed.} \item{amCharts,an advanced javascript charting library is used to plot charts.} \item{The page has the flexibility to show data on daily,weekly,monthly and yearly basis.} \end{itemize} \item{Dashboard} \begin{itemize} \item{Accessible to both normal as well as administrative users.} \item{This page provides a summarized picture of all the above features.} \end{itemize} \item{Platform Support} \begin{itemize} \item{This web portal supports both desktop as well as mobile version.} \end{itemize} \end{enumerate} \end{itemize} \newpage \section{Use and Demo} \hspace{7mm}Few glimpses of the Website \begin{itemize} \item{Desktop Version} \vspace{2mm} Login Page \includegraphics[width=10cm]{logind.png} \vspace{4mm} Data Visualization\\ \includegraphics[width=13cm]{chartsd.png} \newpage \vspace{3mm} SignUp Page \includegraphics[width=10cm]{signupd.png} \newpage \vspace{3mm} Forgot Password Page \includegraphics[width=10cm]{fpd.png} \vspace{3mm} Device Status Page\\ \includegraphics[width=14cm]{devicestatus.png} \newpage \vspace{3mm} 
Device Management Page\\ \includegraphics[width=14cm]{devicemanagement1.png}\\ \vspace{3mm} Valve Control Page\\ \includegraphics[width=14cm]{valvecontrol1.png}\\ \newpage \vspace{3mm} Manage Users Page\\ \includegraphics[width=14cm]{manageusersd.png} \vspace{3mm} A Few Instances of the Dashboard\\ \includegraphics[width=14cm]{dashboard2.png} \vspace{3mm} \includegraphics[width=14cm]{dashboard1.png} \includegraphics[width=14cm]{dashboard3.png} \newpage \item{Mobile Version}\\ Login Page\\ \includegraphics[width=10cm]{loginm.jpg}\\ \newpage Data Visualization Page\\ \includegraphics[width=10cm]{chartsm.png}\\ \newpage Dashboard Page\\ \includegraphics[width=10cm]{dashbaordm2.jpeg} \newpage Scheduling Page\\ \includegraphics[width=10cm]{schedulingm.jpeg} \newpage Device Status Page\\ \includegraphics[width=10cm]{devicestatusm.jpeg} \newpage Valve Control Page\\ \includegraphics[width=10cm]{valvecontrolm.jpeg} \newpage Manage Users Page\\ \includegraphics[width=10cm]{manageusersm.jpeg} \newpage \end{itemize} \href{https://youtu.be/fZxgUEiySDY}{YouTube link} to the demonstration video \section{Future Work} \begin{itemize} \item{Large-Scale Purpose} \begin{itemize} \item{Integrating the web portal with a greenhouse system and an irrigation system.} \end{itemize} \item{Small-Scale Purpose} \begin{itemize} \item{Automated watering of plants in gardens and houses.} \end{itemize} \end{itemize} \section{Bug Report and Challenges} \begin{itemize} \item{Bugs} \begin{enumerate} \item{Small UI flaws may be present.} \item{The site is vulnerable to web attacks such as SQL injection.} \end{enumerate} \item{Challenges Faced} \begin{enumerate} \item{Designing a good UI.} \item{Implementing WebSockets.} \item{Working with charts.} \end{enumerate} \item{Failures} \begin{enumerate} \item{Embedding an RTSP live video feed in a webpage.} \end{enumerate} \end{itemize} \end{enumerate} \begin{thebibliography}{9} \bibitem{stackoverflow} Stack Overflow \bibitem{tutorialspoint} TutorialsPoint \bibitem{treehouse} TreeHouse \bibitem{coursera} Coursera \bibitem{udemy} Udemy \bibitem{github} GitHub \bibitem{headfirst} Head First PHP \& MySQL \end{thebibliography} \end{document}
Lebanon, Missouri, has long been home to an oft-traveled trail along the edge of the Ozarks, beginning when the Wyota and Osage Indians roamed the area. During the Civil War, the trail became known as the “Wire Road” because of the telegraph lines installed along it between St. Louis and Springfield. Then, in the late 1920s, Route 66 was born and roughly followed the same path the Indians had marked. Today the “trail” is called I-44.

The first white settler in the area was a man named Jesse Ballew in 1820, who built a log cabin on the east side of the Gasconade River. When Laclede County was formed in 1849, the settlement of Wyota, named for the area Indians, became the county seat. Later a highly respected minister requested the name be changed to Lebanon, after his hometown of Lebanon, Tennessee. Soon, a courthouse was erected on the town square of the newly formed county seat. Early settlers were mostly hunters and farmers from Tennessee, but word soon spread about the region, its rich farmland, plentiful game, rivers, and springs, and people from the east began to migrate to the new settlement. In the 1850s, The Academy was built, which offered higher education to the area’s students and soon became the center of the town’s cultural activity.

By the time the Civil War began, Lebanon was still a small, secluded settlement. Though Missouri declared itself a neutral state, its population was primarily from the South and therefore sympathized with the Confederate forces. During this time, Lebanon saw division among its people, even among families. In the 1860 election, Abraham Lincoln had received only one vote. The Lebanon people obviously did not consider themselves “neutral.” The town was occupied by troops for the entire length of the war. Except for six months in late 1861 when the Confederates were in control, the occupation was by Union troops. When the war ended, the town worked together to rebuild the community and officially incorporated in 1867.

When the railroad began its expansion west, the short-sighted town of Lebanon refused to provide land for a railroad depot. As a result, the railroad tracks were built one mile away from the existing settlement. Later the commercial area of the town moved closer to the railroad, and the original site became known as Old Town. Eventually, even the old town square disappeared. In 1882, the Lebanon Opera House opened and, along with other buildings in town, helped to establish the town as a popular place to gather for meetings.

Then a discovery was made in 1889 that helped to attract even more visitors to Lebanon. When a water well was dug for the community, residents discovered that the water had magnetic properties. The locals began to drink and bathe in the water, believing that it had healing properties. The magnetic water led to the building of the Gasconade Hotel, the grandest structure ever erected in Lebanon. Having the capacity to house up to 500 guests, the hotel also provided a ballroom, restaurant, reception rooms, and a bathhouse next to the magnetic well. Sure that people would flock from all over the country to partake of the healing waters, its builders were sorely disappointed when the hotel was not successful. Soon it became a sanatorium, but that was also short-lived. Next, it was used for community events, and there was talk of turning the beautiful structure into a college. However, just ten years after it was built, the Gasconade Hotel was totally destroyed by fire in October 1899.
Lebanon continued to thrive as a small community, catering to travelers along the edge of the Ozarks. But the town really saw a change when Route 66 was born in 1926. Lebanon, the largest town between Rolla and Springfield, Missouri, became a major stop along the Mother Road. The town quickly provided road services, and one of the first motels along the highway was Camp Joy, which opened in 1927 as a tent camp at a rate of 50 cents a night. Later, cottages and a combination gas station/grocery store were added. The Spears family ran Camp Joy in Lebanon for 44 years and even named one of their daughters after the business.

In 1931 Arthur T. Nelson built his 24-room hotel at the intersection of Route 66 and Missouri Highway 5. The Nelson Hotel and Dream Village soon became one of the best-known spots along Route 66 between Chicago and Los Angeles. Each room featured a private bath and kitchen facilities, renting for $2-$3 a night. Across the street from the hotel, Nelson built his “Dream Village,” so named because the layout appeared to him in a dream. Twelve units of native Ozark stone surrounded a courtyard which featured a very special fountain. In the evenings it became the centerpiece of a light and music show. Cars would be lined up for blocks to see it. Nelsonville, as it was called by the locals, passed into history when Route 66 became I-44.

Route 66 in Lebanon was also the site of some interesting restaurants. Perhaps the most unusual was Andy’s Street Car Grill. It was housed in an actual street car, brought in from St. Louis, and its featured dish was “Andy’s Famous Fried Domestic Rabbit.” Alas, Andy’s is long gone. Up until just recently, you could still get a great plate of home fixins from the Bell Restaurant; but, unfortunately, it too has closed.

Another Lebanon landmark along Route 66 is Wrink’s Market, which opened in June 1950 and continued to operate up until owner Glenn Wrinkle’s death in March 2005. This was a one-of-a-kind vintage market, where you would see not only groceries, but also collectibles, dry goods, and Route 66 memorabilia. However, the main attraction was always Glenn Wrinkle himself, who could astound the Route 66 traveler with his stories covering more than half a century along the Mother Road. The road lost yet another paragraph in its history when Mr. Wrinkle died and the family auctioned the contents of the store. His son briefly resurrected the market as a convenience store, but it closed in 2009.

Located in south-central Missouri on the edge of the Ozarks, Lebanon straddles Interstate 44. Today this small town of some 12,000 souls, though nestled among flowering trees, cool streams, and rolling hills, is growing quickly. Still, wonderful peeks of vintage America can be found among the thriving franchise-operated strip malls and hotels.
Require Export DACandMAC. Set Strict Implicit. Unset Implicit Arguments. Section Write. Variable s : SFSstate. (*********************************************************************) (* Some Useful Synonymous *) (*********************************************************************) Let t (e : ENTITY) (n : nat) (buf : ENTCONT) : SFSstate := mkSFS (entitySC s) (cap s) (secmat s) (write_entities e n buf). (*********************************************************************) (* write *) (*********************************************************************) (*This operation writes the first n BYTEs of buf to the entity *) (*represented by entity e. *) Inductive write (eSub : ENTITY) (eObj : ENTITY) (n : nat) (buf : ENTCONT) : SFSstate -> Prop := WriteOK : match fsecmat (secmat s) eSub with | None => False | Some y => set_In eObj (EntWritingList y) end -> write eSub eObj n buf (t eObj n buf). End Write.
module Control.Optics.Types import public Control.Applicative.Const import public Control.Monad.Identity import public Data.Contravariant import public Data.Functor.Tagged import public Data.Morphisms import public Data.Profunctor import public Data.Profunctor.Choice public export Simple : (Type -> Type -> Type -> Type -> Type) -> Type -> Type -> Type Simple p s a = p s s a a public export Optical : (p : Type -> Type -> Type) -> (q : Type -> Type -> Type) -> (f : Type -> Type) -> (s : Type) -> (t : Type) -> (a : Type) -> (b : Type) -> Type Optical p q f s t a b = p a (f b) -> q s (f t) public export Optic : (Type -> Type -> Type) -> (Type -> Type) -> Type -> Type -> Type -> Type -> Type Optic p = Optical p p public export LensLike : (Type -> Type) -> Type -> Type -> Type -> Type -> Type LensLike = Optic Morphism public export Lens : Type -> Type -> Type -> Type -> Type Lens s t a b = {f : Type -> Type} -> Functor f => LensLike f s t a b public export Setter : Type -> Type -> Type -> Type -> Type Setter s t a b = LensLike Identity s t a b public export Getter : Type -> Type -> Type Getter s a = {f : _} -> (Contravariant f, Functor f) => Simple (LensLike f) s a public export Getting : Type -> Type -> Type -> Type Getting r s a = Simple (LensLike (Const r)) s a public export Prism : Type -> Type -> Type -> Type -> Type Prism s t a b = {p, f : _} -> (Choice p, Applicative f) => Optic p f s t a b public export Review : Type -> Type -> Type -> Type -> Type Review s t a b = {p : _} -> (Choice p) => Optic p Identity s t a b public export AReview : Type -> Type -> Type AReview t b = Simple (Optic Tagged Identity) t b
[STATEMENT] lemma lt_imp_ex_count_lt: "M < N \<Longrightarrow> \<exists>y. count M y < count N y" [PROOF STATE] proof (prove) goal (1 subgoal): 1. M < N \<Longrightarrow> \<exists>y. count M y < count N y [PROOF STEP] by (meson less_eq_multiset\<^sub>H\<^sub>O less_le_not_le)
/* -*- c++ -*- */ /* * Copyright 2003,2010,2011,2013 Free Software Foundation, Inc. * * This file is part of GNU Radio * * GNU Radio is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 3, or (at your option) * any later version. * * GNU Radio is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with GNU Radio; see the file COPYING. If not, write to * the Free Software Foundation, Inc., 51 Franklin Street, * Boston, MA 02110-1301, USA. */ #ifdef HAVE_CONFIG_H #include "config.h" #endif #include "vmcircbuf_prefs.h" #include "vmcircbuf.h" #include <gnuradio/sys_paths.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> #include <string.h> #include <boost/filesystem/operations.hpp> #include <boost/filesystem/path.hpp> namespace fs = boost::filesystem; namespace gr { /* * The simplest thing that could possibly work: * the key is the filename; the value is the file contents. */ static std::string pathname(const char *key) { static fs::path path; path = fs::path(gr::appdata_path()) / ".gnuradio" / "prefs" / key; return path.string(); } static void ensure_dir_path() { fs::path path = fs::path(gr::appdata_path()) / ".gnuradio"; if(!fs::is_directory(path)) fs::create_directory(path); path = path / "prefs"; if(!fs::is_directory(path)) fs::create_directory(path); } int vmcircbuf_prefs::get(const char *key, char *value, int value_size) { gr::thread::scoped_lock guard(s_vm_mutex); FILE *fp = fopen(pathname (key).c_str(), "r"); if(fp == 0) { perror(pathname (key).c_str()); return 0; } const size_t ret = fread(value, 1, value_size - 1, fp); value[ret] = '\0'; if(ret == 0 && !feof(fp)) { if(ferror(fp) != 0) { perror(pathname (key).c_str()); fclose(fp); return -1; } } fclose(fp); return ret; } void vmcircbuf_prefs::set(const char *key, const char *value) { gr::thread::scoped_lock guard(s_vm_mutex); ensure_dir_path(); FILE *fp = fopen(pathname(key).c_str(), "w"); if(fp == 0) { perror(pathname (key).c_str()); return; } size_t ret = fwrite(value, 1, strlen(value), fp); if(ret == 0) { if(ferror(fp) != 0) { perror(pathname (key).c_str()); fclose(fp); return; } } fclose(fp); }; } /* namespace gr */
Formal statement is: lemma inj_linear_imp_inv_bounded_linear: fixes f::"'a::euclidean_space \<Rightarrow> 'a" shows "\<lbrakk>bounded_linear f; inj f\<rbrakk> \<Longrightarrow> bounded_linear (inv f)" Informal statement is: If $f$ is a bounded linear injection, then $f^{-1}$ is a bounded linear map.
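An informal justification, added here only as a sketch (it is not part of the formal text): on a finite-dimensional Euclidean space, a linear injection is surjective by rank--nullity, so $f$ is a linear bijection and $f^{-1}$ is again linear; and every linear map on a finite-dimensional normed space is bounded, i.e.
\[
\exists C > 0.\; \forall y.\; \lVert f^{-1}(y) \rVert \le C \lVert y \rVert,
\]
which is exactly the boundedness claimed for $f^{-1}$.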
#include <puppet/options/command.hpp> #include <puppet/options/parser.hpp> #include <boost/format.hpp> #include <boost/algorithm/string.hpp> using namespace std; namespace po = boost::program_options; namespace puppet { namespace options { command::command(options::parser const& parser) : _parser(parser) { } char const* command::arguments() const { return ""; } options::parser const& command::parser() const { return _parser; } executor command::parse(vector<string> const& arguments) const { po::variables_map variables; auto options = create_options(); auto hidden = create_hidden_options(); auto positional = create_positional_options(); po::options_description all_options; all_options.add(options).add(hidden); try { // Store the options po::store( po::command_line_parser(arguments). style(po::command_line_style::unix_style & ~po::command_line_style::allow_guessing). options(all_options). positional(positional). run(), variables ); // Notify the callbacks po::notify(variables); } catch (po::too_many_positional_options_error const&) { if (positional.max_total_count() == 0) { throw option_exception((boost::format("the '%1%' command does not accept arguments.") % name()).str(), this); } throw option_exception((boost::format("the '%1%' command expects at most %2% arguments.") % name() % positional.max_total_count()).str(), this); } catch (po::unknown_option const& ex) { throw option_exception((boost::format("unrecognized option '%1%' for command '%2%'.") % ex.get_option_name() % name()).str(), this); } catch (po::error const& ex) { throw option_exception(ex.what(), this); } catch (runtime_error const& ex) { throw option_exception(ex.what(), this); } return create_executor(variables); } po::options_description command::create_options() const { return { "" }; } po::options_description command::create_hidden_options() const { return { "" }; } po::positional_options_description command::create_positional_options() const { return {}; } logging::level command::get_level(po::variables_map const& options) const { // Check for conflicting options if ((options.count(DEBUG_OPTION) + options.count(VERBOSE_OPTION) + (options[LOG_LEVEL_OPTION].defaulted() ? 
0 : 1)) > 1) { throw option_exception((boost::format("%1%, %2%, and %3% options conflict: please specify only one.") % DEBUG_OPTION % VERBOSE_OPTION % LOG_LEVEL_OPTION).str(), this); } // Override the log level for debug/verbose if (options.count(DEBUG_OPTION)) { return logging::level::debug; } if (options.count(VERBOSE_OPTION)) { return logging::level::info; } auto value = options[LOG_LEVEL_OPTION].as<string>(); auto level = boost::algorithm::to_lower_copy(value); if (level == "debug") { return logging::level::debug; } if (level == "info") { return logging::level::info; } if (level == "notice") { return logging::level::notice; } if (level == "warning") { return logging::level::warning; } if (level == "err" || level == "error") { return logging::level::error; } if (level == "alert") { return logging::level::alert; } if (level == "emerg" || level == "emergency") { return logging::level::emergency; } if (level == "crit" || level == "critical") { return logging::level::critical; } throw option_exception((boost::format("invalid log level '%1%': expected debug, info, notice, warning, error, alert, emergency, or critical.") % value).str(), this); } boost::optional<bool> command::get_colorization(po::variables_map const& options) const { if (options.count(COLOR_OPTION) && options.count(NO_COLOR_OPTION)) { throw option_exception((boost::format("%1% and %2% options conflict: please specify only one.") % COLOR_OPTION % NO_COLOR_OPTION).str(), this); } if (!options.count(COLOR_OPTION) && !options.count(NO_COLOR_OPTION)) { return boost::none; } return options.count(COLOR_OPTION) > 0; } char const* const command::COLOR_OPTION = "color"; char const* const command::COLOR_DESCRIPTION = "Force color output on platforms that support colorized output."; char const* const command::DEBUG_OPTION = "debug"; char const* const command::DEBUG_OPTION_FULL = "debug,d"; char const* const command::DEBUG_DESCRIPTION = "Enable debug output."; char const* const command::HELP_OPTION = "help"; char const* const command::HELP_DESCRIPTION = "Display command help."; char const* const command::LOG_LEVEL_OPTION = "log-level"; char const* const command::LOG_LEVEL_OPTION_FULL = "log-level,l"; char const* const command::LOG_LEVEL_DESCRIPTION = "Set logging level.\nSupported levels: debug, info, notice, warning, error, alert, emergency, critical."; char const* const command::NO_COLOR_OPTION = "no-color"; char const* const command::NO_COLOR_DESCRIPTION = "Disable color output."; char const* const command::VERBOSE_OPTION = "verbose"; char const* const command::VERBOSE_DESCRIPTION = "Enable verbose output (info level)."; }} // namespace puppet::options
\subsection{MLE of the Bernoulli and binomial distributions}
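The body of this subsection is not present in the extract, so the following is only a minimal sketch of the standard derivation (the notation $\theta$, $k$, $n$ is introduced here for illustration). For $n$ i.i.d. Bernoulli($\theta$) observations with $k$ successes, the log-likelihood and its stationary point are
\begin{align*}
\ell(\theta) &= k \log \theta + (n-k) \log (1-\theta), \\
\ell'(\theta) &= \frac{k}{\theta} - \frac{n-k}{1-\theta} = 0
  \quad\Longrightarrow\quad \hat{\theta}_{\mathrm{MLE}} = \frac{k}{n}.
\end{align*}
The binomial likelihood differs only by the constant factor $\binom{n}{k}$, which does not depend on $\theta$, so it yields the same estimator.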
{-# OPTIONS --without-K --safe #-} open import Categories.Category -- a zero object is both terminal and initial. module Categories.Object.Zero {o ℓ e} (C : Category o ℓ e) where open import Level using (_⊔_) open import Categories.Object.Terminal C open import Categories.Object.Initial C open Category C record Zero : Set (o ⊔ ℓ ⊔ e) where field zero : Obj ! : ∀ {A} → zero ⇒ A ¡ : ∀ {A} → A ⇒ zero field !-unique : ∀ {A} (f : zero ⇒ A) → ! ≈ f ¡-unique : ∀ {A} (f : A ⇒ zero) → ¡ ≈ f initial : Initial initial = record { ⊥ = zero ; ! = ! ; !-unique = !-unique } terminal : Terminal terminal = record { ⊤ = zero ; ! = ¡ ; !-unique = ¡-unique } module initial = Initial initial module terminal = Terminal terminal
using Polyopt using Base.Test # Simple convex quadratic problem with two variables in different cliques let println("BSOS test 1") x = variables(["x1", "x2"]) f = x[1] - x[2] g = [ 1.0-x[1]^2, 1-x[2]^2 ] I = Array{Int,1}[ [1], [2] ] prob = bsosprob_chordal(1, 1, I, f, g) X, t, l, y, solsta = solve_mosek(prob) end # P4_2 from Weisser's paper let println("BSOS test 2") x = variables(["x1", "x2", "x3", "x4"]) f = x[1]^2 - x[2]^2 + x[3]^2 - x[4]^2 + x[1] - x[2] g = [ 2*x[1]^2 + 3*x[2]^2 + 2*x[1]*x[2] + 2*x[3]^2 + 3*x[4]^2 + 2*x[3]*x[4], 3*x[1]^2 + 2*x[2]^2 - 4*x[1]*x[2] + 3*x[3]^2 + 2*x[4]^2 - 4*x[3]*x[4], x[1]^2 + 6*x[2]^2 - 4*x[1]*x[2] + x[3]^2 + 6*x[4]^2 - 4*x[3]*x[4], x[1]^2 + 4*x[2]^2 - 3*x[1]*x[2] + x[3]^2 + 4*x[4]^2 - 3*x[3]*x[4], 2*x[1]^2 + 5*x[2]^2 + 3*x[1]*x[2] + 2*x[3]^2 + 5*x[4]^2 + 3*x[3]*x[4], x[1],x[2],x[3],x[4]] I = Array{Int,1}[ [1,2,3,4] ] prob = bsosprob_chordal(1, 1, I, f, g) X, t, l, y, solsta = solve_mosek(prob) end # P4_4 from Weisser's paper (we cannot reproduce bounds) let println("BSOS test 3") x = variables(["x1", "x2", "x3", "x4"]) f = x[1]^4 - x[2]^4 + x[3]^4 - x[4]^4 g = [ 2*x[1]^4 + 3*x[2]^2 + 2*x[1]*x[2] + 2*x[3]^4 + 3*x[4]^2 + 2*x[3]*x[4], 3*x[1]^2 + 2*x[2]^2 - 4*x[1]*x[2] + 3*x[3]^2 + 2*x[4]^2 - 4*x[3]*x[4], x[1]^2 + 6*x[2]^2 - 4*x[1]*x[2] + x[3]^2 + 6*x[4]^2 - 4*x[3]*x[4], x[1]^2 + 4*x[2]^4 - 3*x[1]*x[2] + x[3]^2 + 4*x[4]^4 - 3*x[3]*x[4], 2*x[1]^2 + 5*x[2]^2 + 3*x[1]*x[2] + 2*x[3]^2 + 5*x[4]^2 + 3*x[3]*x[4], x[1], x[2], x[3], x[4] ] I = Array{Int,1}[ [1,2,3,4] ] prob = bsosprob_chordal(2, 2, I, f, g) X, t, l, y, solsta = solve_mosek(prob) end # P4_6 from Weisser's paper let println("BSOS test 4") x = variables(["x1", "x2", "x3", "x4"]) f = x[1]^4*x[2]^2 + x[1]^2*x[2]^4 - x[1]^2*x[2]^2 + x[3]^4*x[4]^2 + x[3]^2*x[4]^4 - x[3]^2*x[4]^2 g = [ x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2, 3*x[1]^2 + 2*x[2]^2 - 4*x[1]*x[2] + 3*x[3]^2 + 2*x[4]^2 - 4*x[3]*x[4], x[1]^2 + 6*x[2]^4 - 8*x[1]*x[2] + x[3]^2 + 6*x[4]^4 - 8*x[3]*x[4] + 2.5, x[1]^4 + 3*x[2]^4 + x[3]^4 + 3*x[4]^4, x[1]^2 + x[2]^3 + x[3]^2 + x[4]^4, x[1], x[2], x[3], x[4] ] I = Array{Int,1}[ [1,2,3,4] ] prob = bsosprob_chordal(3, 3, I, f, g) X, t, l, y, solsta = solve_mosek(prob) end # P4_8 from Weisser's paper let println("BSOS test 5") x = variables(["x1", "x2", "x3", "x4"]) f = x[1]^4*x[2]^2 + x[1]^2*x[2]^6 - x[1]^2*x[2]^2 + x[3]^4*x[4]^2 + x[3]^2*x[4]^6 - x[3]^2*x[4]^2 g = [ x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2, 3*x[1]^2 + 2*x[2]^2 - 4*x[1]*x[2] + 3*x[3]^2 + 2*x[4]^2 - 4*x[3]*x[4], x[1]^2 + 6*x[2]^4 - 8*x[1]*x[2] + x[3]^2 + 6*x[4]^4 - 8*x[3]*x[4] + 2.5, x[1]^4 + 3*x[2]^4 + x[3]^4 + 3*x[4]^4, x[1]^2 + x[2]^3 + x[3]^2 + x[4]^4, x[1], x[2], x[3], x[4] ] I = Array{Int,1}[ [1,2,3,4] ] prob = bsosprob_chordal(3, 4, I, f, g) X, t, l, y, solsta = solve_mosek(prob) end # Haverly1 from Marandi's paper let println("BSOS test 6") x = variables(["x1", "x2", "x3", "x4", "x5"]) f = -200*x[2]*(15*x[1]-12) - 200*x[3]*(15*x[1]-6) + 200*x[4] - 1000*x[5] g = [-3//4*(x[1]-1)*(x[2]+x[3]), 1//4*(3*x[1]-1)*(x[2]+x[3]), 1 - 2*(x[2]+x[4]), 1 - (x[3]+x[5]), 1//2*(x[4]+x[2]) - 2//5*x[4] - 3//5*x[1]*x[2], 1//2*(x[5]+x[3]) - 2//3*x[5] - x[1]*x[3], x[1], x[2], x[3], x[4], x[5] ] #I = Array{Int,1}[ [1,2,3,4,5] ] I = Polyopt.chordal_embedding(Polyopt.correlative_sparsity(f,g)) prob = bsosprob_chordal(3, 2, I, f, g) X, t, l, y, solsta = solve_mosek(prob) end # Generalized Rosenbrock function let println("BSOS test 7") n = 100 x = variables("x", n) I = Array{Int,1}[] for i=1:n-1 push!(I, [i, i+1]) end f = sum([ 100*(x[i]-x[i-1]^2)^2 + (1-x[i])^2 for i=2:n 
]) g = vcat(Polyopt.Poly{Int}[x[i] for i=1:n], Polyopt.Poly{Int}[2 - sum([xk^2 for xk=x[Ik]]) for Ik in I]) prob = bsosprob_chordal(3, 2, I, f, g); X, t, l, y, solsta = solve_mosek(prob); t end
function q1 = rdivide(q1,d) % scalar division if isa(q1,'quaternion') if isa(d,'double') q1.a = q1.a ./ d; q1.b = q1.b ./ d; q1.c = q1.c ./ d; q1.d = q1.d ./ d; else error('Second argument must be double'); end else error('First argument must be Quaternion'); end
### Calculates price-equilibrium in the market for blockchain records, with and without the lightning network. ### Includes symbolic calculations and plots for specific parameter values. ```python import numpy as np import sympy sympy.init_printing(use_unicode=True) from sympy import * from sympy.solvers import solve from IPython.display import display from typing import Callable from sympy.utilities.lambdify import lambdify, implemented_function %matplotlib inline import matplotlib.pyplot as plt def simplified(exp, title=None): simp = simplify(exp) if simplified.LOG: if title: display(title,simp) else: display(simp) return simp simplified.LOG = True def firstOrderCondition(exp, var): diffExp = simplified(diff(exp, var)) solutions = solve(diffExp, var) if firstOrderCondition.LOG: display(solutions) return solutions firstOrderCondition.LOG = True class Result(object): # a class for holding results of calculations def __repr__(self): return self.__dict__.__repr__() def display(self): for k,v in sorted(self.__dict__.items()): display(k,v) def subs(self, params): ans = Result() for k,v in sorted(self.__dict__.items()): if hasattr(v,"subs"): ans.__dict__[k] = v.subs(params) else: ans.__dict__[k] = v return ans ``` # Symbolic calculations ```python a,p,r,b,vmax,zmin,zmax,beta = symbols('a \\phi r z v_{\max} z_{\min} z_{\max} \\beta', positive=True,finite=True,real=True) w,T,D,L,n,Supply = symbols('w T \\Delta \\ell n \\tau', positive=True,finite=True,real=True) D,Supply,p ``` ```python def exactCostPerDay(T): return (a*p + w*b*( (1+r)**T - 1 )) / T def approxCostPerDay(T): return a*p/T + w*b*r def symmetricLifetime(w): return w**2/4/L def asymmetricLifetime(w): return w / D uniformPDF = Piecewise( (1 / zmax , b<zmax), (0, True) ) powerlawPDF = Piecewise( (0 , b<zmin), (zmin / b**2, True) ) display(sympy.integrate(uniformPDF, (b, 0, sympy.oo))) # should be 1 display(sympy.integrate(powerlawPDF, (b, 0, sympy.oo))) # should be 1 display(sympy.integrate(b*uniformPDF, (b, 0, sympy.oo))) # should be zmax/2 display(sympy.integrate(b*powerlawPDF, (b, 0, sympy.oo))) # should be infinity! 
``` ```python params = { L: 10, # total transfers per day D: 6, # delta transfers per day beta: 0.01, # value / transfer-size r: 4/100/365, # interest rate per day a: 1.1, # records per reset tx Supply: 288000, # records per day zmin: 0.001, # min transfer size (for power law distribution) zmax: 1, # max transfer size (for uniform distribution) } ``` ```python def calculateLifetime(costPerDay:Callable, channelLifetime:Callable, wSolutionIndex:int): T = simplified(channelLifetime(w), "T") CPD = simplified(costPerDay(T), "CPD") optimal = Result() optimal.w = simplified(firstOrderCondition(CPD,w)[wSolutionIndex], "Optimal channel funding (w)") optimal.T = simplified(T.subs(w,optimal.w), "optimal channel lifetime (T)") optimal.CPD = simplified(CPD.subs(w,optimal.w), "Cost-per-day") optimal.RPD = simplified(a / optimal.T, "Potential records per day") optimal.C = simplified(optimal.CPD*optimal.T, "Cost between resets") optimal.V = simplified(optimal.T*L*beta*b, "Value between resets") optimal.VCR1 = 1 optimal.VCR2 = simplified(optimal.V / optimal.C, "Value/Cost Ratio of lightning") optimal.VCR3 = simplified(beta*b / p, "Value/Cost Ratio of blockchain") optimal.b12 = simplified(solve(optimal.VCR1-optimal.VCR2,b)[0],"b below which an agent prefers nop to lightning") optimal.b13 = simplified(solve(optimal.VCR1-optimal.VCR3,b)[0],"b below which an agent prefers nop to blockchain") optimal.b23 = simplified(solve(optimal.VCR2-optimal.VCR3,b)[0],"b below which an agent prefers lightning to blockchain") # Calculate threshold prices. This part is relevant only for uniform valuations. optimal.p12 = simplified(solve(optimal.b12-zmax,p)[0],"price above which all agents prefer nop to lightning") optimal.p13 = simplified(solve(optimal.b13-zmax,p)[0],"price above which all agents prefer nop to blockchain") optimal.p23 = simplified(solve(optimal.b23-zmax,p)[0],"price above which all agents prefer lightning to blockchain") # substitute the numeric params: numeric = optimal.subs(params) numeric.b23 = numeric.b23.evalf() numeric.p23 = numeric.p23.evalf() return (optimal,numeric) ``` ```python simplified.LOG = False firstOrderCondition.LOG = False (asymmetricSymbolic,asymmetricNumeric) = calculateLifetime(approxCostPerDay,asymmetricLifetime,wSolutionIndex=0) ``` ```python #asymmetricSymbolic.display() asymmetricNumeric.display() ``` ```python simplified.LOG = False firstOrderCondition.LOG = False (symmetricSymbolic,symmetricNumeric) = calculateLifetime(approxCostPerDay,symmetricLifetime,wSolutionIndex=0) ``` ```python symmetricNumeric.display() ``` # Demand curves ```python ### Generic function for calculating demand - does not give plottable expressions: def calculateDemands(optimal, valuePDF): demand = Result() demand.withLightning = simplified( sympy.integrate(a / optimal.T * valuePDF, (b, optimal.b12,optimal.b23)) +\ sympy.integrate(L * valuePDF, (b, optimal.b23,np.inf)), "demand with lightning" ) demand.withoutLightning = simplified( sympy.integrate(L * valuePDF, (b, optimal.b13,np.inf)), "demand without lightning" ) numeric = demand.subs(params) return (demand,numeric) simplified.LOG = True asymmetricSymbolicUniform,asymmetricNumericUniform = calculateDemands(asymmetricSymbolic, uniformPDF) aymmetricSymbolicPowerlaw,asymmetricNumericPowerlaw = calculateDemands(asymmetricSymbolic, powerlawPDF) asymmetricNumericUniform.display() asymmetricNumericPowerlaw.display() ``` # Plots ```python plotSymmetric = True plotAsymmetric = False def plotSymbolic(xRange, yExpression, xVariable, style, label): plt.plot(xRange, 
[yExpression.subs(xVariable,xValue) for xValue in xRange], style, label=label) def plotDemandCurves(priceRange, demandWithoutLightning, demandAsymmetric, demandSymmetric): global plotSymmetric, plotAsymmetric plotSymbolic(priceRange, demandWithoutLightning, p, "r-",label="no lightning") if plotAsymmetric: plotSymbolic(priceRange, demandAsymmetric, p, "b.",label="asymmetric") if plotSymmetric: plotSymbolic(priceRange, demandSymmetric, p, "g--",label="symmetric") plt.gca().set_ylim(-1,11) plt.xlabel("blockchain fee $\\phi$ [bitcoins]") plt.ylabel("Demand of a single pair [records/day]") plt.legend(loc=0) def plotTxsCurves(priceRange, txsBlockchain, txsLightning): txsBlockchain = txsBlockchain.subs(params) txsLightning = txsLightning.subs(params) plotSymbolic(priceRange, txsBlockchain, p, "r--",label="blockchain") plotSymbolic(priceRange, txsLightning, p, "b.",label="lightning") plotSymbolic(priceRange, txsLightning+txsBlockchain, p, "k-",label="total") plt.gca().set_ylim(-1,11) plt.xlabel("blockchain fee $\\phi$ [bitcoins]") plt.ylabel("# Transactions per day") plt.legend(loc=0) def plotPowerlawTxsCurves(priceRange, txsAsymmetric, txsSymmetric): global plotSymmetric, plotAsymmetric if plotAsymmetric: plotTxsCurves(priceRange, txsAsymmetric.txsBlockchainPowerlaw, txsAsymmetric.txsLightningPowerlaw) #plt.title("Transactions of a single asymmetric pair, power-law transfer-size") if plotSymmetric: plotTxsCurves(priceRange, txsSymmetric.txsBlockchainPowerlaw, txsSymmetric.txsLightningPowerlaw) #plt.title("Transactions of a single symmetric pair, power-law transfer-size") def plotLifetimeCurves(priceRange, timeAsymmetric, timeSymmetric): global plotSymmetric, plotAsymmetric if plotAsymmetric: plotSymbolic(priceRange, timeAsymmetric, p, "b.",label="asymmetric") if plotSymmetric: plotSymbolic(priceRange, timeSymmetric, p, "g--",label="symmetric") plt.xlabel("blockchain fee $\\phi$ [bitcoins]") plt.ylabel("Maximum channel lifetime [days]") plt.legend(loc=0) def plotPriceCurves(nRange, priceWithoutLightning, priceAsymmetric, priceSymmetric): global plotSymmetric, plotAsymmetric priceWithoutLightning = priceWithoutLightning.subs(params) priceAsymmetric = priceAsymmetric.subs(params) priceSymmetric = priceSymmetric.subs(params) plotSymbolic(nRange, priceWithoutLightning, n, "r-",label="no lightning") if plotAsymmetric and priceAsymmetric: plotSymbolic(nRange, priceAsymmetric, n, "b.",label="asymmetric") if plotSymmetric and priceSymmetric: plotSymbolic(nRange, priceSymmetric, n, "g--",label="symmetric") plt.xlabel("Number of users $n$") plt.ylabel("Market-equilibrium price $\\phi$ [bitcoins/record]") plt.legend(loc=0) def plotMarketTxsCurves(nRange, priceCurve, txsBlockchain, txsLightning): priceCurve = priceCurve.subs(params) txsBlockchain = txsBlockchain.subs(params) txsLightning = txsLightning.subs(params) plotSymbolic(nRange, n/2*txsBlockchain.subs(p,priceCurve), n, "g--",label="blockchain") plotSymbolic(nRange, n/2*txsLightning.subs(p,priceCurve), n, "b." 
,label="lightning") plotSymbolic(nRange, n/2*(txsLightning.subs(p,priceCurve)+txsBlockchain.subs(p,priceCurve)), n, "k-",label="total") plotSymbolic(nRange, Min(params[Supply], n*params[L]/2), n, "r-", label="no lightning") plt.xlabel("Number of users $n$") plt.ylabel("# Transactions per day") plt.legend(loc=0) def plotSymbolic3(xRange, yExpression, xVariable, style, label): plt.plot(xRange, [yExpression.subs(xVariable,xValue)*params[Supply] for xValue in xRange], style, label=label) def plotRevenueCurves(nRange, priceWithoutLightning, priceAsymmetric, priceSymmetric): global plotSymmetric, plotAsymmetric plotSymbolic3(nRange, priceWithoutLightning, n, "r-",label="no lightning") if plotAsymmetric and priceAsymmetric: plotSymbolic3(nRange, priceAsymmetric, n, "b.",label="asymmetric") if plotSymmetric and priceSymmetric: plotSymbolic3(nRange, priceSymmetric, n, "g--",label="symmetric") plt.xlabel("Number of users $n$") plt.ylabel("Miners' revenue [bitcoins/day]") plt.legend(loc=0) ``` ## Power-law distribution ```python def calculateDemandsPowerlaw(optimal): optimal.demandB13 = sympy.integrate(L * zmin / b**2, (b, optimal.b13, np.inf)) optimal.demandBzmin = sympy.integrate(L * zmin / b**2, (b, zmin, np.inf)) optimal.demandWithoutLightningPowerlaw = simplified(Piecewise( (optimal.demandB13, zmin < optimal.b13), (optimal.demandBzmin, True)), "demand without lightning" ) optimal.demandL1 = sympy.integrate(a / optimal.T * zmin / b**2, (b, optimal.b12, optimal.b23)) # zmin<b12<b23 optimal.demandL2 = sympy.integrate(a / optimal.T * zmin / b**2, (b, zmin , optimal.b23)) # b12<zmin<b23 optimal.demandB1 = sympy.integrate(L * zmin / b**2, (b, optimal.b23, np.inf)) # zmin<b23 optimal.demandB2 = sympy.integrate(L * zmin / b**2, (b, zmin, np.inf)) # b12<b23<zmin optimal.demandWithLightningPowerlaw = simplified(Piecewise( (optimal.demandB2, optimal.b23 < zmin), (optimal.demandL2+optimal.demandB1 , optimal.b12 < zmin), (optimal.demandL1+optimal.demandB1 , True), ), "demand with lightning" ) optimal.txsL1 = sympy.integrate(L * zmin / b**2, (b, optimal.b12, optimal.b23)) # zmin<b12<b23 optimal.txsL2 = sympy.integrate(L * zmin / b**2, (b, zmin , optimal.b23)) # b12<zmin<b23 optimal.txsB1 = optimal.demandB1 # zmin<b23 optimal.txsB2 = optimal.demandB2 # b12<b23<zmin optimal.txsLightningPowerlaw = simplified(Piecewise( (0, optimal.b23 < zmin), (optimal.txsL2 , optimal.b12 < zmin), (optimal.txsL1 , True), ), "txs lightning" ) optimal.txsBlockchainPowerlaw = simplified(Piecewise( (optimal.demandB2, optimal.b23 < zmin), (optimal.demandB1 , True), ), "txs blockchain" ) optimal.maxDemand1 = (optimal.demandB2).subs(p, 0) optimal.minDemand1 = (optimal.demandB2).subs(p, optimal.p23.subs(zmax,zmin) ) optimal.maxDemand2 = (optimal.demandL2+optimal.demandB1).subs(p, optimal.p23.subs(zmax,zmin) ) optimal.minDemand2 = (optimal.demandL2+optimal.demandB1).subs(p, optimal.p12.subs(zmax,zmin) ) def calculatePricesPowerlaw(optimal): price1 = simplified(solve((n/2)*(optimal.demandL2+optimal.demandB1)-Supply, p)[0]) price2 = simplified(solve((n/2)*(optimal.demandL1+optimal.demandB1)-Supply, p)[0]) optimal.priceWithLightningPowerlaw = simplified(Piecewise( (0, (n/2) < Supply/optimal.minDemand1), (price1 , (n/2) < Supply/optimal.minDemand2), # = maxDemand1 (price2, True))) return optimal simplified.LOG = True calculateDemandsPowerlaw(asymmetricSymbolic) asymmetricNumeric = asymmetricSymbolic.subs(params) calculateDemandsPowerlaw(symmetricSymbolic) symmetricNumeric = symmetricSymbolic.subs(params) ``` ```python priceRange = 
np.linspace(0,1e-6,100) plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningPowerlaw, asymmetricNumeric.demandWithLightningPowerlaw, symmetricNumeric.demandWithLightningPowerlaw) plt.title("Demand curves, power-law-distributed transfer-size") plt.axes().get_xaxis().set_visible(False) plt.savefig('../graphs/demand-curves-powerlaw-small-price.pdf', format='pdf', dpi=1000) plt.show() plotPowerlawTxsCurves(priceRange, asymmetricNumeric, symmetricNumeric) plt.savefig('../graphs/txs-pair-powerlaw-small-price.pdf', format='pdf', dpi=1000) plt.show() ``` ```python priceRange = np.linspace(0,1e-4,100) plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningPowerlaw, asymmetricNumeric.demandWithLightningPowerlaw, symmetricNumeric.demandWithLightningPowerlaw) plt.title("Demand curves, power-law-distributed transfer-size") plt.axes().get_xaxis().set_visible(False) plt.savefig('../graphs/demand-curves-powerlaw-medium-price.pdf', format='pdf', dpi=1000) plt.show() plotPowerlawTxsCurves(priceRange, asymmetricNumeric, symmetricNumeric) plt.savefig('../graphs/txs-pair-powerlaw-medium-price.pdf', format='pdf', dpi=1000) plt.show() ``` ```python priceRange = np.linspace(0,0.01,100) plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningPowerlaw, asymmetricNumeric.demandWithLightningPowerlaw, symmetricNumeric.demandWithLightningPowerlaw) plt.title("Demand curves, power-law-distributed transfer-size") plt.gca().set_ylim(-0.01,0.1) plt.axes().get_xaxis().set_visible(False) plt.savefig('../graphs/demand-curves-powerlaw-large-price.pdf', format='pdf', dpi=1000) plt.show() plotPowerlawTxsCurves(priceRange, asymmetricNumeric, symmetricNumeric) plt.savefig('../graphs/txs-pair-powerlaw-large-price.pdf', format='pdf', dpi=1000) plt.show() ``` ```python priceRange = np.linspace(0,1,100) plotDemandCurves(priceRange, asymmetricNumeric.demandWithoutLightningPowerlaw, asymmetricNumeric.demandWithLightningPowerlaw, symmetricNumeric.demandWithLightningPowerlaw) plt.title("Demand curves, power-law-distributed transfer-size") plt.gca().set_ylim(-0.0001,0.001) #plt.axes().get_xaxis().set_visible(False) plt.savefig('../graphs/demand-curves-powerlaw-xlarge-price.pdf', format='pdf', dpi=1000) plt.show() plotPowerlawTxsCurves(priceRange, asymmetricNumeric, symmetricNumeric) plt.savefig('../graphs/txs-pair-powerlaw-xlarge-price.pdf', format='pdf', dpi=1000) plt.show() ``` ```python ### Price curves - power-law distribution priceWithoutLightningPowerlaw = simplified(Piecewise( ((n/2)*L*beta*zmin/Supply , (n/2)*L>Supply), (0,True))) priceWithoutLightningPowerlaw = priceWithoutLightningPowerlaw.subs(params) calculatePricesPowerlaw(asymmetricSymbolic) asymmetricNumeric = asymmetricSymbolic.subs(params) calculatePricesPowerlaw(symmetricSymbolic) symmetricNumeric = symmetricSymbolic.subs(params) ``` ```python nRange = np.linspace(0,2e5,100) plotPriceCurves(nRange, priceWithoutLightningPowerlaw, asymmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.priceWithLightningPowerlaw) plt.title("Price curves, power-law-distributed transfer-size") plt.savefig('../graphs/price-curves-powerlaw-smalln.pdf', format='pdf', dpi=1000) ``` ```python nRange = np.linspace(0,2e7,100) plotPriceCurves(nRange, priceWithoutLightningPowerlaw, asymmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.priceWithLightningPowerlaw) plt.title("Price curves, power-law-distributed transfer-size") plt.savefig('../graphs/price-curves-powerlaw-mediumn.pdf', format='pdf', dpi=1000) ``` ```python nRange = 
np.linspace(0,2e8,100) plotPriceCurves(nRange, priceWithoutLightningPowerlaw, asymmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.priceWithLightningPowerlaw) plt.title("Price curves, power-law-distributed transfer-size") plt.savefig('../graphs/price-curves-powerlaw-largen.pdf', format='pdf', dpi=1000) ``` ```python nRange = np.linspace(0,2e9,100) plotPriceCurves(nRange, priceWithoutLightningPowerlaw, asymmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.priceWithLightningPowerlaw) plt.title("Price curves, power-law-distributed transfer-size") plt.savefig('../graphs/price-curves-powerlaw-hugen.pdf', format='pdf', dpi=1000) ``` ```python if plotAsymmetric: nRange = np.linspace(0,2e5,100) plotMarketTxsCurves(nRange, asymmetricNumeric.priceWithLightningPowerlaw, asymmetricNumeric.txsBlockchainPowerlaw, asymmetricNumeric.txsLightningPowerlaw) plt.title("Txs, powerlaw transfer-size, asymmetric") plt.savefig('../graphs/txs-market-powerlaw-asymmetric-smalln.pdf', format='pdf', dpi=1000) plt.show() nRange = np.linspace(0,2e8,100) plotMarketTxsCurves(nRange, asymmetricNumeric.priceWithLightningPowerlaw, asymmetricNumeric.txsBlockchainPowerlaw, asymmetricNumeric.txsLightningPowerlaw) plt.title("Txs, powerlaw transfer-size, asymmetric") plt.savefig('../graphs/txs-market-powerlaw-asymmetric-largen.pdf', format='pdf', dpi=1000) plt.show() nRange = np.linspace(0,2e9,100) plotMarketTxsCurves(nRange, asymmetricNumeric.priceWithLightningPowerlaw, asymmetricNumeric.txsBlockchainPowerlaw, asymmetricNumeric.txsLightningPowerlaw) plt.title("Txs, powerlaw transfer-size, asymmetric") plt.savefig('../graphs/txs-market-powerlaw-asymmetric-hugen.pdf', format='pdf', dpi=1000) plt.show() if plotSymmetric: nRange = np.linspace(0,2e5,100) plotMarketTxsCurves(nRange, symmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.txsBlockchainPowerlaw, symmetricNumeric.txsLightningPowerlaw) plt.title("Txs, powerlaw transfer-size, symmetric") plt.savefig('../graphs/txs-market-powerlaw-symmetric-smalln.pdf', format='pdf', dpi=1000) plt.show() nRange = np.linspace(0,2e8,100) plotMarketTxsCurves(nRange, symmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.txsBlockchainPowerlaw, symmetricNumeric.txsLightningPowerlaw) plt.title("Txs, powerlaw transfer-size, symmetric") plt.savefig('../graphs/txs-market-powerlaw-symmetric-largen.pdf', format='pdf', dpi=1000) plt.show() nRange = np.linspace(0,2e9,100) plotMarketTxsCurves(nRange, symmetricNumeric.priceWithLightningPowerlaw, symmetricNumeric.txsBlockchainPowerlaw, symmetricNumeric.txsLightningPowerlaw) plt.title("Txs, powerlaw transfer-size, symmetric") plt.savefig('../graphs/txs-market-powerlaw-symmetric-hugen.pdf', format='pdf', dpi=1000) plt.show() symmetricNumeric.txsBlockchainPowerlaw ``` ```python pw=np.random.power(a=0.5,size=10000)*2 plt.hist(pw) ``` ```python def first10(): for i in range(10): yield i first10.__len__ = lambda self: 10 for i in first10(): print(i) print(len(first10)) ``` ```python ```
{-|
Module      : ControlSystems.DynamicSystems.Conversions
Description : Converts continuous time systems to discrete time
Copyright   : (c) Ryan Orendorff, 2020
License     : BSD3
Stability   : experimental
-}
module ControlSystems.DynamicSystems.Conversions
  ( c2d
  ) where

import Numeric.LinearAlgebra

-- | Convert a continuous time linear differential system to a discrete form,
-- using state space models.
c2d :: Double          -- ^ The sampling time
    -> Matrix Double   -- ^ The state transition matrix, aka A
    -> Matrix Double   -- ^ The state input matrix, aka B
    -> (Matrix Double, Matrix Double) -- ^ The discretized A & B system
c2d t a b = (a_d, b_d)
  where
    -- First we have to extract the sizes of all the matrices at run time.
    (m_a, n_a) = (rows a, cols a)
    (_  , n_b) = (rows b, cols b)

    -- Then manually make the correct block. This must be a square matrix
    -- because the matrix exponential function (`expm`) expects to only have
    -- square inputs (given that it is a series expansion of matrix to higher
    -- powers). We need to form this matrix
    -- ⌈ A B ⌉
    -- ⌊ 0 0 ⌋
    -- The zero block needs n_b rows (not m_a) so that the whole block is a
    -- square matrix of size (n_a + n_b).
    block = a ||| b === konst 0 (n_b, n_a + n_b)

    -- Calculate the discrete time block matrix, which is the following
    -- ⌈ A_d B_d ⌉
    -- ⌊ 0   I   ⌋
    exp_block = expm (scale t block)

    -- Finally we can extract the correct submatrices assuming we pull out the
    -- right pieces.
    a_d = subMatrix (0, 0)   (m_a, n_a) exp_block
    b_d = subMatrix (0, n_a) (m_a, n_b) exp_block
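For comparison, here is a minimal Python sketch of the same zero-order-hold discretization trick, i.e. taking the matrix exponential of the square block [[A, B], [0, 0]] and reading A_d and B_d off the top rows. It assumes NumPy and SciPy are available; the example system at the bottom is illustrative and not part of the Haskell module above.

```python
import numpy as np
from scipy.linalg import expm

def c2d(t, a, b):
    """Discretize x' = A x + B u with sample time t via a block-matrix exponential."""
    n = a.shape[0]   # number of states (A is n x n)
    m = b.shape[1]   # number of inputs (B is n x m)
    # Square block [[A, B], [0, 0]] of size (n + m) x (n + m).
    block = np.block([[a, b],
                      [np.zeros((m, n)), np.zeros((m, m))]])
    # expm(block * t) equals [[A_d, B_d], [0, I]].
    exp_block = expm(block * t)
    a_d = exp_block[:n, :n]
    b_d = exp_block[:n, n:]
    return a_d, b_d

# Example: a double integrator sampled at 0.1 s (illustrative values).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Ad, Bd = c2d(0.1, A, B)
```

The bottom-right block of the exponential is the identity precisely because the bottom rows of the block matrix are zero, which is what lets the top rows decompose cleanly into A_d and B_d.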
using ArgParse using JSON include("lib/eval.jl") function main(args) s = ArgParseSettings() s.description = "Evaluation script for generations" @add_arg_table s begin ("--generations"; required=true; help="generations JSON file") ("--meta"; action=:store_true; help="print meta data") end # parse args isa(args, AbstractString) && (args=split(args)) o = parse_args(args, s; as_symbols=true) # read JSON file results = JSON.parsefile(o[:generations]) # print META data if o[:meta] println("META") for (k,v) in results["meta"] println(k, " => ", v) end flush(STDOUT) end hyp, ref = [], [] for caption in results["captions"] push!(hyp, caption["hypothesis"]) push!(ref, caption["references"]) end results = 0; gc() # evaluate ti = now() @printf("Evaluation started (date=%s)\n", ti) scores, bp, hlen, rlen = bleu(hyp, ref) @printf("BLEU = %.1f/%.1f/%.1f/%.1f ", map(i->i*100,scores)...) @printf("(BP=%g, ratio=%g, hyp_len=%d, ref_len=%d)\n", bp, hlen/rlen, hlen, rlen) tf = now() @printf("Time elapsed: %s [%s]\n", tf-ti, tf) end !isinteractive() && !isdefined(Core.Main, :load_only) && main(ARGS)
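For readers more comfortable in Python, the same evaluation loop can be sketched roughly as below, using NLTK's corpus_bleu as a stand-in for the bleu function defined in lib/eval.jl (the input path and whitespace tokenization are illustrative assumptions, and this reports a single cumulative BLEU-4 rather than the four per-n-gram scores printed above).

```python
import json
from nltk.translate.bleu_score import corpus_bleu

# Load the generations file produced by the captioning model (path is illustrative).
with open("generations.json") as f:
    results = json.load(f)

hypotheses = []   # one token list per generated caption
references = []   # one list of reference token lists per caption
for caption in results["captions"]:
    hypotheses.append(caption["hypothesis"].split())
    references.append([ref.split() for ref in caption["references"]])

# Cumulative 4-gram BLEU with uniform weights (brevity penalty included).
score = corpus_bleu(references, hypotheses)
print("BLEU = {:.1f}".format(score * 100))
```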
= = Early life and career = =
lemma filterlim_at_to_0: "filterlim f F (at a) \<longleftrightarrow> filterlim (\<lambda>x. f (x + a)) F (at 0)" for a :: "'a::real_normed_vector"
(* Title: HOL/Auth/n_flash_nodata_cub_lemma_on_inv__151.thy Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences *) header{*The n_flash_nodata_cub Protocol Case Study*} theory n_flash_nodata_cub_lemma_on_inv__151 imports n_flash_nodata_cub_base begin section{*All lemmas on causal relation between inv__151 and some rule r*} lemma n_PI_Remote_GetVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_Get src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_Get src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_PI_Remote_GetXVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_GetX src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_GetX src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_NakVsinv__151: assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Nak dst)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Nak dst" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(dst=p__Inv4)\<or>(dst=p__Inv3)\<or>(dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(dst=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(dst=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_Get_Nak__part__0Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_Get_Nak__part__1Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_Get_Nak__part__2Vsinv__151: assumes a1: "(\<exists> src. 
src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_Get_Get__part__0Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_Get_Get__part__1Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_Get_Put_HeadVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P3 s" apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P3 s" apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_Get_PutVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_Get_Put_DirtyVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Remote_Get_NakVsinv__151: assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4\<and>dst=p__Inv3)\<or>(src=p__Inv3\<and>dst=p__Inv4)\<or>(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)\<or>(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4\<and>dst=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>dst=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Remote_Get_PutVsinv__151: assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4\<and>dst=p__Inv3)\<or>(src=p__Inv3\<and>dst=p__Inv4)\<or>(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)\<or>(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4\<and>dst=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>dst=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_Nak__part__0Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_Nak__part__1Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_Nak__part__2Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_GetX__part__0Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_GetX__part__1Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_1Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_2Vsinv__151: assumes a1: "(\<exists> src. 
src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_3Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_4Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_5Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_6Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_7__part__0Vsinv__151: assumes a1: "(\<exists> src. 
src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac 
a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg 
(eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_7__part__1Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { 
assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by 
auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__0Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } 
moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const 
false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__1Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } 
moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const 
false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_8_HomeVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { 
assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by 
auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_8_Home_NODE_GetVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } 
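(* The single-parameter rules treated here (n_NI_Local_GetX_PutX_7_NODE_Get__part__1, n_NI_Local_GetX_PutX_8_Home, n_NI_Local_GetX_PutX_8_Home_NODE_Get) all follow the same three-way case analysis on src against p__Inv3 and p__Inv4: the case src = p__Inv4 is closed directly, while the other two cases are split over the eight directory-state disjuncts on ShrVld, ShrSet p__Inv4, HeadVld, HeadPtr and HomeHeadPtr. *)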
moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const 
false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_8Vsinv__151: assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4\<and>pp=p__Inv3)\<or>(src=p__Inv3\<and>pp=p__Inv4)\<or>(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)\<or>(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4\<and>pp=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>pp=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } 
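(* For the two-parameter rule n_NI_Local_GetX_PutX_8 the case analysis on (src, pp) against (p__Inv3, p__Inv4) is seven-way: the two cases with src = p__Inv4 are closed directly, and the remaining five are split over the same directory-state disjuncts, each closed by cut_tac followed by auto. *)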
moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const 
true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para 
(Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident 
''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field 
(Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar 
(Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_8_NODE_GetVsinv__151: assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4\<and>pp=p__Inv3)\<or>(src=p__Inv3\<and>pp=p__Inv4)\<or>(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)\<or>(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4\<and>pp=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>pp=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by 
auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) 
(Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar 
(Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field 
(Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field 
(Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn 
(IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>
    ((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))"
  by auto
  moreover {
    assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
    assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
    assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
    assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
    assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
    assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
    assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
    assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  ultimately have "invHoldForRule s f r (invariants N)" by satx
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed

lemma n_NI_Local_GetX_PutX_9__part__0Vsinv__151:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src)" and
a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src" apply fastforce done
from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
  assume b1: "(src=p__Inv4)"
  have "?P1 s"
  proof(cut_tac a1 a2 b1, auto) qed
  then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
  assume b1: "(src=p__Inv3)"
  have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>
    ((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>
    ((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>
    ((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>
    ((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>
    ((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>
    ((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>
    ((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))"
  by auto
  moreover {
    assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
    assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
    assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))"
    have "?P1 s"
    proof(cut_tac a1 a2 b1 c1, auto) qed
    then have "invHoldForRule s f r (invariants N)" by auto
  }
  moreover {
assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by 
auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_9__part__1Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { 
assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by 
auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_10_HomeVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { 
assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by 
auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_10Vsinv__151: assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4\<and>pp=p__Inv3)\<or>(src=p__Inv3\<and>pp=p__Inv4)\<or>(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)\<or>(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4\<and>pp=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>pp=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } 
moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const 
true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para 
(Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident 
''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field 
(Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar 
(Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4)))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadVld'')) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Local_GetX_PutX_11Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Remote_GetX_NakVsinv__151: assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4\<and>dst=p__Inv3)\<or>(src=p__Inv3\<and>dst=p__Inv4)\<or>(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)\<or>(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4\<and>dst=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>dst=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Remote_GetX_PutXVsinv__151: assumes a1: "(\<exists> src dst. 
src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4\<and>dst=p__Inv3)\<or>(src=p__Inv3\<and>dst=p__Inv4)\<or>(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)\<or>(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4\<and>dst=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>dst=p__Inv4)" have "?P3 s" apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''InvSet'') p__Inv4)) (Const true)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') p__Inv4) ''CacheState'')) (Const CACHE_E))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P3 s" apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''InvSet'') p__Inv4)) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const false))) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') dst) ''CacheState'')) (Const CACHE_E))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Remote_PutVsinv__151: assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Put dst)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Put dst" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(dst=p__Inv4)\<or>(dst=p__Inv3)\<or>(dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(dst=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(dst=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Remote_PutXVsinv__151: assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_PutX dst)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_PutX dst" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(dst=p__Inv4)\<or>(dst=p__Inv3)\<or>(dst~=p__Inv3\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(dst=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(dst=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(dst~=p__Inv3\<and>dst~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_InvAck_exists_HomeVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_InvAck_exists_Home src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_InvAck_exists_Home src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_InvAck_existsVsinv__151: assumes a1: "(\<exists> src pp. 
src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_InvAck_exists src pp)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_InvAck_exists src pp" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4\<and>pp=p__Inv3)\<or>(src=p__Inv3\<and>pp=p__Inv4)\<or>(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)\<or>(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4\<and>pp=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>pp=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4\<and>pp~=p__Inv3\<and>pp~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_InvAck_1Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_InvAck_1 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_InvAck_1 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_InvAck_2Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_InvAck_2 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_InvAck_2 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_InvAck_3Vsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_InvAck_3 N src)" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_InvAck_3 N src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src=p__Inv3)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P1 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_ReplaceVsinv__151: assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Replace src)" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Replace src" apply fastforce done from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "(src=p__Inv4)\<or>(src=p__Inv3)\<or>(src~=p__Inv3\<and>src~=p__Inv4)" apply (cut_tac a1 a2, auto) done moreover { assume b1: "(src=p__Inv4)" have "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) s))\<or>((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) s))" by auto moreover { assume c1: "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) s))" have "?P1 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) s))" have "?P2 s" proof(cut_tac a1 a2 b1 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately have "invHoldForRule s f r (invariants N)" by satx } moreover { assume b1: "(src=p__Inv3)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume b1: "(src~=p__Inv3\<and>src~=p__Inv4)" have "?P2 s" proof(cut_tac a1 a2 b1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_PI_Local_Get_GetVsinv__151: assumes a1: "(r=n_PI_Local_Get_Get )" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "?P1 s" proof(cut_tac a1 a2 , auto) qed then show "invHoldForRule s f r (invariants N)" by auto qed lemma n_PI_Local_GetX_GetX__part__0Vsinv__151: assumes a1: "(r=n_PI_Local_GetX_GetX__part__0 )" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "?P1 s" proof(cut_tac a1 a2 , auto) qed then show "invHoldForRule s f r (invariants N)" by auto qed lemma n_PI_Local_GetX_GetX__part__1Vsinv__151: assumes a1: "(r=n_PI_Local_GetX_GetX__part__1 )" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "?P1 s" proof(cut_tac a1 a2 , auto) qed then show "invHoldForRule s f r (invariants N)" by auto qed lemma n_PI_Local_GetX_PutX_HeadVld__part__0Vsinv__151: assumes a1: "(r=n_PI_Local_GetX_PutX_HeadVld__part__0 N )" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r 
(invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_PI_Local_GetX_PutX_HeadVld__part__1Vsinv__151: assumes a1: "(r=n_PI_Local_GetX_PutX_HeadVld__part__1 N )" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))\<or>((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HeadPtr'')) (Const (index p__Inv4))))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''ShrVld'')) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Para (Field 
(Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true))) (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeHeadPtr'')) (Const false)))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Nak_ClearVsinv__151: assumes a1: "(r=n_NI_Nak_Clear )" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "?P3 s" apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''InvSet'') p__Inv4)) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''NakcMsg'') ''Cmd'')) (Const NAKC_Nakc))))" in exI, auto) done then show "invHoldForRule s f r (invariants N)" by auto qed lemma n_NI_Local_PutVsinv__151: assumes a1: "(r=n_NI_Local_Put )" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "?P3 s" apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeUniMsg'') ''Cmd'')) (Const UNI_Put)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''InvSet'') p__Inv4)) (Const true))))" in exI, auto) done then show "invHoldForRule s f r (invariants N)" by auto qed lemma n_NI_Local_PutXAcksDoneVsinv__151: assumes a1: "(r=n_NI_Local_PutXAcksDone )" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "?P3 s" apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeUniMsg'') ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''InvSet'') p__Inv4)) (Const true))))" in exI, auto) done then show "invHoldForRule s f r (invariants N)" by auto qed lemma n_NI_FAckVsinv__151: assumes a1: "(r=n_NI_FAck )" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "?P3 s" apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''InvSet'') p__Inv4)) (Const true)) (eqn (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''Cmd'')) (Const SHWB_FAck))))" in exI, auto) done then show "invHoldForRule s f r (invariants N)" by auto qed lemma n_NI_ShWbVsinv__151: assumes a1: "(r=n_NI_ShWb N )" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s") proof - from a2 obtain p__Inv3 p__Inv4 where a2:"p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4" apply fastforce done have "((formEval (andForm (eqn (Const (index p__Inv4)) (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''Proc''))) (eqn (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''HomeProc'')) (Const false))) s))\<or>((formEval (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true)) s))\<or>((formEval (andForm (neg (eqn (Const (index p__Inv4)) (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''Proc'')))) (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true)))) s))\<or>((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''HomeProc'')) (Const false))) (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true)))) s))" by auto moreover { assume c1: "((formEval (andForm (eqn (Const (index p__Inv4)) (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''Proc''))) (eqn (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''HomeProc'')) (Const false))) s))" have "?P3 s" apply (cut_tac a1 a2 c1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''Cmd'')) (Const SHWB_ShWb))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true)) s))" have "?P3 s" apply (cut_tac a1 a2 c1, simp, rule_tac x="(neg (andForm (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv3) ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (Const (index p__Inv4)) (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''Proc'')))) (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } moreover { assume c1: "((formEval (andForm (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''ShWbMsg'') ''HomeProc'')) (Const false))) (neg (eqn (IVar (Para (Field (Field (Ident ''Sta'') ''Dir'') ''ShrSet'') p__Inv4)) (Const true)))) s))" have "?P1 s" proof(cut_tac a1 a2 c1, auto) qed then have "invHoldForRule s f r (invariants N)" by auto } ultimately show "invHoldForRule s f r (invariants N)" by satx qed lemma n_NI_Remote_GetX_PutX_HomeVsinv__151: assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_PutX_Home dst" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_PI_Local_GetX_PutX__part__0Vsinv__151: assumes a1: "r=n_PI_Local_GetX_PutX__part__0 " and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_NI_WbVsinv__151: assumes a1: "r=n_NI_Wb " and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_PI_Remote_ReplaceVsinv__151: assumes a1: "\<exists> src. src\<le>N\<and>r=n_PI_Remote_Replace src" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_PI_Local_ReplaceVsinv__151: assumes a1: "r=n_PI_Local_Replace " and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_PI_Remote_PutXVsinv__151: assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_PI_Remote_PutX dst" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_NI_Remote_Get_Put_HomeVsinv__151: assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Put_Home dst" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_NI_InvVsinv__151: assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Inv dst" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_PI_Local_PutXVsinv__151: assumes a1: "r=n_PI_Local_PutX " and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_PI_Local_Get_PutVsinv__151: assumes a1: "r=n_PI_Local_Get_Put " and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_NI_Remote_GetX_Nak_HomeVsinv__151: assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_Nak_Home dst" and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_PI_Local_GetX_PutX__part__1Vsinv__151: assumes a1: "r=n_PI_Local_GetX_PutX__part__1 " and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_NI_Remote_Get_Nak_HomeVsinv__151: assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Nak_Home dst" and a2: "(\<exists> p__Inv3 p__Inv4. 
p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_NI_Replace_HomeVsinv__151: assumes a1: "r=n_NI_Replace_Home " and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done lemma n_NI_Nak_HomeVsinv__151: assumes a1: "r=n_NI_Nak_Home " and a2: "(\<exists> p__Inv3 p__Inv4. p__Inv3\<le>N\<and>p__Inv4\<le>N\<and>p__Inv3~=p__Inv4\<and>f=inv__151 p__Inv3 p__Inv4)" shows "invHoldForRule s f r (invariants N)" apply (rule noEffectOnRule, cut_tac a1 a2, auto) done end
TELMISARTAN helps lower blood pressure to normal levels. It controls high blood pressure, but it is not a cure. High blood pressure can damage your kidneys, and may lead to a stroke or heart failure. Telmisartan helps prevent these things from happening.

Take this medicine by mouth with a glass of water. This medicine can be taken with or without food. Take your doses at regular intervals. Do not take your medicine more often than directed.

Tell your prescriber or health care professional about all other medicines you are taking, including nonprescription medicines, nutritional supplements, or herbal products. Also tell your prescriber or health care professional if you are a frequent user of drinks with caffeine or alcohol, if you smoke, or if you use illegal drugs. These may affect the way your medicine works. Check with your health care professional before stopping or starting any of your medicines.

Visit your doctor or health care professional for regular checks on your progress. Check your blood pressure as directed. Ask your doctor or health care professional what your blood pressure should be and when you should contact him or her. Call your doctor or health care professional if you notice an irregular or fast heart beat.

Women should inform their doctor if they wish to become pregnant or think they might be pregnant. There is a potential for serious side effects to an unborn child, particularly in the second or third trimester. Talk to your health care professional or pharmacist for more information.

Avoid salt substitutes unless you are told otherwise by your doctor or health care professional. Do not treat yourself for coughs, colds, or pain while you are taking this medicine without asking your doctor or health care professional for advice. Some ingredients may increase your blood pressure.

Store at room temperature between 15 and 30 degrees C (59 and 86 degrees F). Tablets should not be removed from the blisters until right before use. Throw away any unused medicine after the expiration date.
##########################################################################
# Try to match (and correct) consumers to Tree of Life
##########################################################################
rm(list=ls()) # clears workspace

newTaxa<-FALSE      # Are there new taxa in the database that need to be found in ToL?
matchNewTaxa<-TRUE  # Match only taxa not already previously matched
matchAllTaxa<-FALSE # Erase prior matches and start over...CAREFUL HERE!

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# install.packages(c("curl", "httr"))
library(devtools)
devtools::install_github("ropensci/rotl@fix-101")
# source("https://install-github.me/ropensci/rotl")
library(rotl) # Tree of Life (https://cran.r-project.org/web/packages/rotl/vignettes/how-to-use-rotl.html)
source('SatMeta-Functions.r')

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
source('SatMeta-DataPrep-GoogleImport.r')

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
load('../Data/SatMeta_DB.Rdata') # load saved 'DB'
dat<-DB

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
######################################################
# Problem data identified during phylogenetic analyses
######################################################
# Remove non-species-specific taxa (i.e. Genus only, or non-formal species)
dat[!grepl('_',dat$Consumer.identity),] # Almost all come from Arrington '02 fish database
dat<-dat[grepl('_',dat$Consumer.identity),]
dat<-dat[order(dat$Taxon.group,dat$Consumer.identity,dat$Citation,dat$Survey.ID),]

############################################
# Problem data identified during GAM-fitting
############################################
# Remove some high-leverage surveys
# dat<-dat[dat$Citation!='Christiansen_2012',] # Fishes perform much better in GAM w/out these!

########################################################################
# Problem taxa identified during prior attempts to match to Tree of Life
########################################################################
# INCERTAE_SEDIS_INHERITED
warning('9 INCERTAE_SEDIS_INHERITED removed manually. These species should be periodically checked in the ToL to see if their status has changed.')
rem<-c("Astyanax_bimaculatus","Astyanax_fasciatus","Astyanax_metae","Centropomus_pectinatus",
       "Chromis_chrysura","Jupiaba_abramoides","Markiana_geayi","Platycephalus_speculator","Psellogrammus_kennedyi")
dat[dat$Consumer.identity%in%rem,]
dat<-dat[dat$Consumer.identity%!in%rem,]

###################################
# Identify taxa in the Tree of Life
###################################
if(newTaxa){
  Spp<-unique(data.frame(Taxon.group=dat$Taxon.group,Consumer.identity=dat$Consumer.identity))
  taxa <- tnrs_match_names(Spp$Consumer.identity, context_name = "Animals")
  save(taxa,file='../Output/Phylo/Taxa.Rdata')
  system("say -v Vicki I have finished matching taxa to the tree of life.")
}

##################
# Resolve problems
##################
load(file='../Output/Phylo/Taxa.Rdata')

# foc_tax <- tnrs_match_names("Pterostichus_melanaria")
# foc_ins <- inspect(foc_tax,taxon_name = 'Feronia'); foc_ins
# taxonomy_taxon_info(foc_ins$ott_id[2])
# sort(taxonomy_subtree(foc_ins$ott_id[2])$tip_label)
# taxa[is.na(taxa[,1]),]

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# This next section of code is replicated in 'SatMeta-Analyses-PhyloCycling.R' and will
# need to be fixed here and there if changes occur.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# The following taxa have multiple matches (often synonyms) that need to be corrected
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# This has to be done manually (mostly using WORMS, ReptileBase, Fishbase)
multmatch<-taxa[taxa$number_matches>1,]
nrow(multmatch); nrow(multmatch)/nrow(taxa)
multmatch$chosen.ott_id<-multmatch$chosen.name<-NA

if(matchAllTaxa){write.csv(multmatch,'../Output/Phylo/Taxa_MultipleMatches.csv')}

if(matchNewTaxa){
  prior.multmatch<-read.csv('../Output/Phylo/Taxa_MultipleMatches.csv')
  for(i in 1:nrow(multmatch)){
    # If a prior match has been assigned....
    if(multmatch$search_string[i] %in% prior.multmatch$search_string){
      sel_ottid<-prior.multmatch$chosen.ott_id[which(prior.multmatch$search_string==multmatch$search_string[i])]
      if(!is.na(sel_ottid)){
        multmatch$chosen.ott_id[i]<-sel_ottid
        taxa<-update(taxa, ott_id=multmatch$ott_id[i], new_ott_id=multmatch$chosen.ott_id[i])
      }}
    # If no prior match has been assigned...
    if(multmatch$search_string[i] %!in% prior.multmatch$search_string | isTRUE(is.na(prior.multmatch$chosen.ott_id[which(prior.multmatch$search_string==multmatch$search_string[i])]))){
      insp<-inspect(taxa,taxon_name = multmatch$search_string[i])
      if(length(unique(insp$ott_id))==1){
        multmatch$chosen.ott_id[i]<-insp$ott_id[1]
      }
      if(length(unique(insp$ott_id))>1){ # sometimes multiple rows actually have the same ott_id, so skip these since it won't change anything
        inf<-array(NA,dim=c(40,nrow(insp)))
        colnames(inf)<-insp$ott_id
        for(r in 1:nrow(insp)){
          foc_ottid<-insp$ott_id[r]
          tlin<-tax_lineage(taxonomy_taxon_info(foc_ottid, include_lineage = TRUE))[[1]][2]
          inf[1:nrow(tlin),r]<-rev(tlin[,1])
        }
        inf<-inf[-which(apply(is.na(inf),1,sum)==ncol(inf)),]
        print(inf)
        print(insp)
        sel_tax<-as.numeric(readline(prompt="Which row (i.e. taxon) do you want to choose to replace the existing one? "))
        if(sel_tax>nrow(insp)|!is.numeric(sel_tax)){
          sel_tax<-as.numeric(readline(prompt="Try again. Which row (i.e. taxon) do you want to choose to replace the existing one? "))}
        multmatch$chosen.ott_id[i]<-insp$ott_id[sel_tax]
      }
      taxa<-update(taxa, ott_id=multmatch$ott_id[i], new_ott_id=multmatch$chosen.ott_id[i])
      print(paste(i,' of ',nrow(multmatch),' completed.'))
      yn<-readline(prompt="Next taxon? y/n ")
      if(yn!='y'){break}
    }
  }
  write.csv(multmatch,'../Output/Phylo/Taxa_MultipleMatches.csv')
  save(taxa,file='../Output/Phylo/Taxa_matched.Rdata')
  print('All done matching.')
}

# ~~~~~~~~~~~~~~~~~~~~~
# Inspect flagged taxa
# ~~~~~~~~~~~~~~~~~~~~~
# Uncertain taxonomic positions cause problems, so remove upstream in DataPrep script
unique(taxa$flags)
taxa[which(taxa$flags=='INCERTAE_SEDIS_INHERITED'),]

save(taxa,file='../Output/Phylo/Taxa_matched.Rdata')

###############################
# Set up full tree construction
###############################
tree<-tol_induced_subtree(taxa$ott_id,label_format = 'name')
save(tree,file='../Output/Phylo/Tree.Rdata')

##################################
# Merge in unique names and ottIDs
##################################
taxa$search_string<-firstup(taxa$search_string)
taxa$unique_name<-gsub(' ','_',taxa$unique_name)
dat<-merge(dat,taxa[,c('search_string','unique_name','ott_id')],by.x='Consumer.identity',by.y='search_string',all=TRUE)

################
# Hardwire fixes
################
dat$unique_name[dat$Consumer.identity=='Gadus_morhua']<-'Gadus_morhua'

#######################################
# Save dataset w/ corrected taxon names
#######################################
save(dat,file='../Data/SatMeta_Data.Rdata')

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
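For orientation, the interactive correction loop above reduces to a handful of rotl calls: tnrs_match_names() resolves names against the Open Tree taxonomy, inspect() lists the candidate matches for an ambiguous name, update() repoints that name at a chosen OTT id, and tol_induced_subtree() then returns the phylogeny. The sketch below is an editor-added illustration, not part of the original script; the species names and the choice of the first candidate row are placeholders only.

# Minimal sketch of the rotl workflow used above (illustrative; toy inputs, arbitrary candidate choice)
library(rotl)

toy_names <- c("Gadus morhua", "Salmo trutta")                      # placeholder species
toy_taxa  <- tnrs_match_names(toy_names, context_name = "Animals")  # approximate-match to Open Tree taxonomy

# If a name comes back with number_matches > 1, list its candidates and repoint the match
# (applied here to row 1 purely for illustration; the first candidate is an arbitrary choice):
amb      <- inspect(toy_taxa, taxon_name = toy_taxa$search_string[1])
toy_taxa <- update(toy_taxa, ott_id = toy_taxa$ott_id[1], new_ott_id = amb$ott_id[1])

# With every row pointing at the intended taxon, request the induced subtree:
toy_tree <- tol_induced_subtree(toy_taxa$ott_id, label_format = "name")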
[STATEMENT] lemma dim_b_col_A: "dim_vec c = dim_col A" [PROOF STATE] proof (prove) goal (1 subgoal): 1. dim_vec c = dim_col A [PROOF STEP] using abstract_LP_axioms abstract_LP_def carrier_matD(2) carrier_vecD [PROOF STATE] proof (prove) using this: abstract_LP A b c m n abstract_LP ?A ?b ?c ?m ?n \<equiv> ?b \<in> carrier_vec ?m \<and> ?c \<in> carrier_vec ?n \<and> ?A \<in> carrier_mat ?m ?n ?A \<in> carrier_mat ?nr ?nc \<Longrightarrow> dim_col ?A = ?nc ?v \<in> carrier_vec ?n \<Longrightarrow> dim_vec ?v = ?n goal (1 subgoal): 1. dim_vec c = dim_col A [PROOF STEP] by metis
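Spelled out, the facts in the `using` line combine directly: the locale assumption gives $c \in \mathrm{carrier\_vec}\; n$ and $A \in \mathrm{carrier\_mat}\; m\; n$, and the destruction rules `carrier_vecD` and `carrier_matD(2)` yield

$$c \in \mathrm{carrier\_vec}\; n \;\Longrightarrow\; \mathrm{dim\_vec}\; c = n, \qquad A \in \mathrm{carrier\_mat}\; m\; n \;\Longrightarrow\; \mathrm{dim\_col}\; A = n,$$

hence $\mathrm{dim\_vec}\; c = n = \mathrm{dim\_col}\; A$, which is exactly what the `metis` call assembles.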
Require Import Bool String List Arith.Peano_dec Lia. Require Import Lib.FMap Lib.Struct Lib.CommonTactics Lib.Indexer Lib.StringEq Lib.ListSupport. Require Import Kami.Syntax Kami.Semantics Kami.SemFacts Kami.RefinementFacts Kami.Renaming Kami.Wf. Require Import Kami.Specialize. Require Import FunctionalExtensionality. Require Import Compare_dec. Set Implicit Arguments. Set Asymmetric Patterns. Section Duplicate. Variable m: nat -> Modules. Fixpoint duplicate n := match n with | O => specializeMod (m O) O | S n' => ConcatMod (specializeMod (m n) n) (duplicate n') end. End Duplicate. Section DuplicateFacts. Variable m: nat -> Modules. Lemma duplicate_ModEquiv: forall ty1 ty2 n, (forall iv, ModEquiv ty1 ty2 (m iv)) -> ModEquiv ty1 ty2 (duplicate m n). Proof. induction n; simpl; intros; [apply specializeMod_ModEquiv; auto|]. apply ModEquiv_modular; auto. apply specializeMod_ModEquiv; auto. Qed. Lemma duplicate_validRegsModules: forall n, (forall iv, ValidRegsModules type (m iv)) -> ValidRegsModules type (duplicate m n). Proof. induction n; simpl; intros. - apply specializeMod_validRegsModules; auto. - split; auto. apply specializeMod_validRegsModules; auto. Qed. Lemma duplicate_dom_indexed: (forall iv, Specializable (m iv)) -> forall s n , In s (spDom (duplicate m n)) -> exists t i, s = t __ i /\ i < S n. Proof. induction n; simpl; intros. - pose proof (specializeMod_dom_indexed (H 0) _ _ H0); dest; subst. do 2 eexists; eauto. - apply spDom_in in H0; destruct H0. + pose proof (specializeMod_dom_indexed (H (S n)) _ _ H0); dest; subst. do 2 eexists; eauto. + specialize (IHn H0); dest; subst. do 2 eexists; eauto. Qed. Lemma duplicate_specializeMod_disj_regs: (forall iv, Specializable (m iv)) -> forall n ln iv, ln > n -> DisjList (namesOf (getRegInits (specializeMod (m iv) ln))) (namesOf (getRegInits (duplicate m n))). Proof. induction n; simpl; intros. - apply specializeMod_disj_regs_different_indices; auto; lia. - unfold namesOf in *. rewrite map_app. apply DisjList_comm, DisjList_app_4. + apply specializeMod_disj_regs_different_indices; auto; lia. + apply DisjList_comm, IHn; lia. Qed. Lemma duplicate_specializeMod_disj_defs: (forall iv, Specializable (m iv)) -> forall n ln iv, ln > n -> DisjList (getDefs (specializeMod (m iv) ln)) (getDefs (duplicate m n)). Proof. induction n; simpl; intros. - apply specializeMod_disj_defs_different_indices; auto; lia. - apply DisjList_comm. apply DisjList_SubList with (l1:= app (getDefs (specializeMod (m (S n)) (S n))) (getDefs (duplicate m n))). + unfold SubList; intros. apply getDefs_in in H1; destruct H1; apply in_or_app; auto. + apply DisjList_app_4. * apply specializeMod_disj_defs_different_indices; auto; lia. * apply DisjList_comm, IHn; lia. Qed. Lemma duplicate_specializeMod_disj_calls: (forall iv, Specializable (m iv)) -> forall n ln iv, ln > n -> DisjList (getCalls (specializeMod (m iv) ln)) (getCalls (duplicate m n)). Proof. induction n; simpl; intros. - apply specializeMod_disj_calls_different_indices; auto; lia. - apply DisjList_comm. apply DisjList_SubList with (l1:= app (getCalls (specializeMod (m (S n)) (S n))) (getCalls (duplicate m n))). + unfold SubList; intros. apply getCalls_in in H1; destruct H1; apply in_or_app; auto. + apply DisjList_app_4. * apply specializeMod_disj_calls_different_indices; auto; lia. * apply DisjList_comm, IHn; lia. Qed. 
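(* The next lemma lifts disjointness to duplicated families: if every pair of
   instances of m1 and m2 has disjoint register names, so do their n-fold
   duplicates. *)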
Lemma duplicate_disj_regs: forall m1 m2, (forall iv1, Specializable (m1 iv1)) -> (forall iv2, Specializable (m2 iv2)) -> (forall iv1 iv2, DisjList (namesOf (getRegInits (m1 iv1))) (namesOf (getRegInits (m2 iv2)))) -> forall n, DisjList (namesOf (getRegInits (duplicate m1 n))) (namesOf (getRegInits (duplicate m2 n))). Proof. induction n; simpl; intros. - apply specializeMod_disj_regs_2; auto. - unfold namesOf; do 2 rewrite map_app; apply DisjList_app_4. + apply DisjList_comm, DisjList_app_4. * apply DisjList_comm, specializeMod_disj_regs_2; auto. * clear IHn. assert (n < S n) by lia. generalize dependent (S n); intros. induction n; simpl; intros. { apply DisjList_comm, specializeMod_disj_regs_2; auto. } { rewrite map_app; apply DisjList_app_4. { apply DisjList_comm, specializeMod_disj_regs_2; auto. } { apply IHn; lia. } } + apply DisjList_comm, DisjList_app_4. * clear IHn. assert (n < S n) by lia. generalize dependent (S n); intros. induction n; simpl; intros. { apply DisjList_comm, specializeMod_disj_regs_2; auto. } { rewrite map_app; apply DisjList_comm, DisjList_app_4. { apply specializeMod_disj_regs_2; auto. } { apply DisjList_comm; auto. apply IHn; lia. } } * apply DisjList_comm, IHn. Qed. Lemma duplicate_noninteracting: (forall iv, Specializable (m iv)) -> forall n ln, ln > n -> forall iv, NonInteracting (specializeMod (m iv) ln) (duplicate m n). Proof. induction n; simpl; intros. - apply specializable_noninteracting_2; auto; lia. - unfold NonInteracting in *. assert (ln > n) by lia; specialize (IHn _ H1); clear H1; dest. split. + apply DisjList_comm. apply DisjList_SubList with (l1:= app (getCalls (specializeMod (m (S n)) (S n))) (getCalls (duplicate m n))). * unfold SubList; intros. apply getCalls_in in H1. apply in_or_app; auto. * apply DisjList_app_4. { pose proof (specializable_noninteracting_2 (H (S n)) (H iv)). apply H1; lia. } { specialize (IHn iv); dest. apply DisjList_comm; auto. } + apply DisjList_comm. apply DisjList_SubList with (l1:= app (getDefs (specializeMod (m (S n)) (S n))) (getDefs (duplicate m n))). * unfold SubList; intros. apply getDefs_in in H1. apply in_or_app; auto. * apply DisjList_app_4. { pose proof (specializable_noninteracting_2 (H (S n)) (H iv)). apply H1; lia. } { specialize (IHn iv); dest. apply DisjList_comm; auto. } Qed. Lemma duplicate_regs_NoDup: forall (Hsp: forall iv, Specializable (m iv)) n, (forall iv, NoDup (namesOf (getRegInits (m iv)))) -> NoDup (namesOf (getRegInits (duplicate m n))). Proof. induction n; simpl; intros; [apply specializeMod_regs_NoDup; auto|]. unfold namesOf in *; simpl in *. rewrite map_app; apply NoDup_DisjList; auto. - apply specializeMod_regs_NoDup, H; auto. - apply duplicate_specializeMod_disj_regs; auto. Qed. Lemma getRegInits_duplicate_nil: forall n, (forall iv, Specializable (m iv)) -> (forall iv, getRegInits (m iv) = nil) -> getRegInits (duplicate m n) = nil. Proof. intros. match goal with | [ |- ?P ] => assert (namesOf (getRegInits (duplicate m n)) = nil -> P) as Hm end. { intros; unfold namesOf in H1; eapply map_eq_nil; eauto. } apply Hm; clear Hm. induction n; simpl; intros. - rewrite specializeMod_regs; auto. rewrite H0; reflexivity. - rewrite namesOf_app. rewrite IHn; rewrite app_nil_r. rewrite specializeMod_regs; auto. rewrite H0; reflexivity. Qed. Lemma getDefs_duplicate_nil: (forall iv, Specializable (m iv)) -> (forall iv, getDefs (m iv) = nil) -> forall n, getDefs (duplicate m n) = nil. Proof. induction n; simpl; intros. - rewrite specializeMod_defs; auto. rewrite H0; reflexivity. - rewrite getDefs_app. 
rewrite IHn; rewrite app_nil_r. rewrite specializeMod_defs; auto. rewrite H0; reflexivity. Qed. Lemma getDefsBodies_duplicate_nil: (forall iv, Specializable (m iv)) -> (forall iv, getDefsBodies (m iv) = nil) -> forall n, getDefsBodies (duplicate m n) = nil. Proof. intros. assert (forall iv, getDefs (m iv) = nil) by (intros; unfold getDefs; rewrite H0; reflexivity). eapply getDefs_duplicate_nil with (n:= n) in H1; eauto. eapply map_eq_nil with (f:= @attrName _); eauto. Qed. Lemma getRules_duplicate_in: forall rn rb i, (forall iv, Specializable (m iv)) -> In (rn :: rb)%struct (getRules (m i)) -> forall n, i <= n -> In ((rn __ i) :: (fun ty => (Renaming.renameAction (specializer (m i) i) (rb ty))))%struct (getRules (duplicate m n)). Proof. induction n; simpl; intros. - inv H1; apply specializeMod_rules_in; auto. - inv H1; [|apply in_or_app; right; auto]. apply in_or_app; left. apply specializeMod_rules_in; auto. Qed. End DuplicateFacts. Section TwoModules1. Variables (m1 m2: Modules). Hypotheses (Hsp1: Specializable m1) (Hsp2: Specializable m2) (Hequiv1: ModEquiv type typeUT m1) (Hequiv2: ModEquiv type typeUT m2) (Hvr1: ValidRegsModules type m1) (Hvr2: ValidRegsModules type m2) (Hexts: SubList (getExtMeths m1) (getExtMeths m2)). Lemma specializer_equiv: forall {A} (m: M.t A), M.KeysSubset m (spDom m1) -> M.KeysSubset m (spDom m2) -> forall i, renameMap (specializer m1 i) m = renameMap (specializer m2 i) m. Proof. intros; do 2 (rewrite specializer_map; auto). Qed. Lemma specializeMod_defCallSub: forall i, DefCallSub m1 m2 -> DefCallSub (specializeMod m1 i) (specializeMod m2 i). Proof. unfold DefCallSub; intros; dest; split. - do 2 rewrite specializeMod_defs by assumption. apply SubList_map; auto. - do 2 rewrite specializeMod_calls by assumption. apply SubList_map; auto. Qed. Lemma specializer_two_comm: forall (m: MethsT), M.KeysSubset m (getExtMeths m1) -> forall i, m = renameMap (specializer m2 i) (renameMap (specializer m1 i) m). Proof. intros. replace (renameMap (specializer m1 i) m) with (renameMap (specializer m2 i) m). - rewrite renameMapFInvG; auto. + apply specializer_bijective. apply specializable_disj_dom_img; auto. + apply specializer_bijective. apply specializable_disj_dom_img; auto. - apply eq_sym, specializer_equiv. + eapply M.KeysSubset_SubList; eauto. pose proof (getExtMeths_meths m1). apply SubList_trans with (l2:= app (getDefs m1) (getCalls m1)); auto. apply SubList_app_3; [apply spDom_defs|apply spDom_calls]. + apply M.KeysSubset_SubList with (d2:= getExtMeths m2) in H; auto. eapply M.KeysSubset_SubList; eauto. pose proof (getExtMeths_meths m2). apply SubList_trans with (l2:= app (getDefs m2) (getCalls m2)); auto. apply SubList_app_3; [apply spDom_defs|apply spDom_calls]. Qed. End TwoModules1. Section DuplicateTwoModules1. Variables (m1 m2: nat -> Modules). Hypotheses (Hsp1: forall iv, Specializable (m1 iv)) (Hsp2: forall iv, Specializable (m2 iv)) (Hequiv1: forall iv, ModEquiv type typeUT (m1 iv)) (Hequiv2: forall iv, ModEquiv type typeUT (m2 iv)) (Hvr1: forall iv, ValidRegsModules type (m1 iv)) (Hvr2: forall iv, ValidRegsModules type (m2 iv)) (Hexts: forall iv1 iv2, SubList (getExtMeths (m1 iv1)) (getExtMeths (m2 iv2))). Lemma duplicate_defCallSub: forall n, (forall i, DefCallSub (m1 i) (m2 i)) -> DefCallSub (duplicate m1 n) (duplicate m2 n). Proof. induction n; simpl; intros. - apply specializeMod_defCallSub; auto. - apply DefCallSub_modular. + apply specializeMod_defCallSub; auto. + apply IHn; auto. Qed. 
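(* Trace refinement is preserved by duplication: if each instance of m1 refines
   the corresponding instance of m2, then the n-fold duplicate of m1 refines the
   n-fold duplicate of m2. *)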
Lemma duplicate_traceRefines: forall n, (forall i, traceRefines (liftToMap1 (@idElementwise _)) (m1 i) (m2 i)) -> traceRefines (liftToMap1 (@idElementwise _)) (duplicate m1 n) (duplicate m2 n). Proof. induction n; simpl; intros. - apply specialized_2 with (i:= O); auto. specialize (H 0). eapply traceRefines_label_map; eauto using H. clear - Hsp1 Hsp2 Hexts; unfold EquivalentLabelMap; intros. rewrite idElementwiseId; unfold id; simpl. unfold liftPRename; simpl. apply specializer_two_comm; auto. - apply traceRefines_modular_noninteracting; auto. + apply specializeMod_ModEquiv; auto. + apply specializeMod_ModEquiv; auto. + apply duplicate_ModEquiv; auto. + apply duplicate_ModEquiv; auto. + apply duplicate_specializeMod_disj_regs; auto. + apply duplicate_specializeMod_disj_regs; auto. + pose proof (duplicate_validRegsModules m1 (S n) Hvr1); auto. + pose proof (duplicate_validRegsModules m2 (S n) Hvr2); auto. + apply duplicate_specializeMod_disj_defs; auto. + eapply DisjList_comm, DisjList_SubList. * apply getIntCalls_getCalls. * apply DisjList_comm, duplicate_specializeMod_disj_calls; auto. + eapply DisjList_SubList. * apply getIntCalls_getCalls. * apply duplicate_specializeMod_disj_calls; auto. + apply duplicate_specializeMod_disj_defs; auto. + eapply DisjList_comm, DisjList_SubList. * apply getIntCalls_getCalls. * apply DisjList_comm, duplicate_specializeMod_disj_calls; auto. + eapply DisjList_SubList. * apply getIntCalls_getCalls. * apply duplicate_specializeMod_disj_calls; auto. + apply duplicate_noninteracting; auto. + apply duplicate_noninteracting; auto. + apply specialized_2 with (i:= S n); auto. specialize (H (S n)). eapply traceRefines_label_map; eauto using H. clear - Hsp1 Hsp2 Hexts; unfold EquivalentLabelMap; intros. rewrite idElementwiseId; unfold id; simpl. unfold liftPRename; simpl. apply specializer_two_comm; auto. Qed. End DuplicateTwoModules1. Section TwoModules2. Variables (m1 m2: Modules). Hypotheses (Hsp1: Specializable m1) (Hsp2: Specializable m2) (Hequiv1: ModEquiv type typeUT m1) (Hequiv2: ModEquiv type typeUT m2) (Hvr1: ValidRegsModules type m1) (Hvr2: ValidRegsModules type m2). Variable (ds: string). (* a single label to drop *) Hypothesis (Hexts: SubList (filter (fun s => negb (string_eq s ds)) (getExtMeths m1)) (getExtMeths m2)). Lemma specializeMod_traceRefines_drop: forall i, (m1 <<=[dropP ds] m2) -> (specializeMod m1 i <<=[dropI ds i] specializeMod m2 i). Proof. intros. apply specialized_2; auto. apply traceRefines_label_map with (p:= liftToMap1 (dropP ds)); auto. clear -Hsp1 Hsp2 Hexts. unfold EquivalentLabelMap; intros. unfold liftPRename. assert (renameMap (specializer m2 i) ((liftToMap1 (dropP ds)) m) = liftToMap1 (dropI ds i) (renameMap (specializer m1 i) m)). { rewrite specializer_map with (m:= m1); auto; [|eapply M.KeysSubset_SubList; eauto; apply spDom_getExtMeths]. rewrite specializer_map with (m:= m2); auto; [|apply M.KeysSubset_SubList with (d1:= getExtMeths m2); [|apply spDom_getExtMeths]; eapply M.KeysSubset_SubList; eauto; apply dropP_KeysSubset; auto]. clear; M.ext y. rewrite liftToMap1_find. remember (M.find y (renameMap (spf i) m)) as yiv; destruct yiv. - apply eq_sym, renameFind2' in Heqyiv; [|apply spf_onto]. dest; subst; rewrite <-renameMapFind; [|apply spf_onto]. rewrite liftToMap1_find, H0. unfold dropP, dropI. remember (string_eq x ds) as xds; destruct xds. + apply string_eq_dec_eq in Heqxds; subst. rewrite string_eq_true; reflexivity. + apply string_eq_dec_neq in Heqxds. 
remember (string_eq (spf i x) (ds __ i)) as xdsi; destruct xdsi; auto. apply string_eq_dec_eq, spf_onto in Heqxdsi. elim Heqxds; auto. - remember (M.find y (renameMap (spf i) (liftToMap1 (dropP ds) m))) as ypv; destruct ypv; auto. exfalso; apply eq_sym, renameFind2' in Heqypv; [|apply spf_onto]. dest; subst. rewrite <-renameMapFind in Heqyiv; [|apply spf_onto]. rewrite liftToMap1_find in H0. rewrite <-Heqyiv in H0; inv H0. } rewrite <-H0. rewrite <-specializer_two_comm with (m1:= m2) (m2:= m2) (i:= i); auto. - apply SubList_refl. - eapply M.KeysSubset_SubList; eauto. apply dropP_KeysSubset; auto. Qed. Lemma equivalentLabelMapElem_dropI_dropN: forall n t (Ht: t > n), EquivalentLabelMapElem (dropI ds t) (compLabelMaps (dropI ds t) (dropN ds n)) (getExtMeths (specializeMod m1 t)). Proof. unfold EquivalentLabelMapElem; intros. induction n. - simpl; unfold compLabelMaps, dropI. destruct (string_eq _ (ds __ t)). + destruct (string_eq _ (ds __ 0)); auto. + remember (string_eq _ _) as sv; destruct sv; auto. exfalso; apply string_eq_dec_eq in Heqsv; subst. apply spDom_getExtMeths in H. apply specializeMod_dom_indexed in H; auto; dest. apply withIndex_index_eq in H; dest; lia. - simpl; assert (t > n) by lia; specialize (IHn H0); clear H0. rewrite IHn; clear IHn. unfold dropI, compLabelMaps. remember (dropN ds n s v) as nv; destruct nv; auto. destruct (string_eq s (ds __ t)). + destruct (string_eq _ _); auto. + remember (string_eq _ _) as sn; destruct sn; auto. exfalso; apply string_eq_dec_eq in Heqsn; subst. apply spDom_getExtMeths in H. apply specializeMod_dom_indexed in H; auto; dest. apply withIndex_index_eq in H; dest; lia. Qed. End TwoModules2. Section DuplicateTwoModules2. Variables (m1 m2: nat -> Modules). Hypotheses (Hsp1: forall iv, Specializable (m1 iv)) (Hsp2: forall iv, Specializable (m2 iv)) (Hequiv1: forall iv, ModEquiv type typeUT (m1 iv)) (Hequiv2: forall iv, ModEquiv type typeUT (m2 iv)) (Hvr1: forall iv, ValidRegsModules type (m1 iv)) (Hvr2: forall iv, ValidRegsModules type (m2 iv)). Variable (ds: string). (* a single label to drop *) Hypothesis (Hexts: forall iv1 iv2, SubList (filter (fun s => negb (string_eq s ds)) (getExtMeths (m1 iv1))) (getExtMeths (m2 iv2))). Lemma equivalentLabelMapElem_dropN_dropI: forall n u (Ht: u > n), EquivalentLabelMapElem (dropN ds n) (compLabelMaps (dropI ds u) (dropN ds n)) (getExtMeths (duplicate m1 n)). Proof. induction n; unfold EquivalentLabelMapElem; intros. - simpl; unfold compLabelMaps, dropI. destruct (string_eq _ (ds __ 0)); auto. remember (string_eq _ _) as st; destruct st; auto. apply string_eq_dec_eq in Heqst; subst. simpl in H. apply spDom_getExtMeths in H. apply specializeMod_dom_indexed in H; auto; dest. apply withIndex_index_eq in H; dest; lia. - simpl; assert (u > n) by lia; specialize (IHn _ H0); clear H0. simpl in H. apply getExtMeths_in in H; destruct H. + clear IHn. unfold dropI, compLabelMaps. destruct (dropN ds n s v); auto. destruct (string_eq _ (ds __ (S n))); auto. remember (string_eq _ _) as st; destruct st; auto. exfalso; apply string_eq_dec_eq in Heqst; subst. apply spDom_getExtMeths in H. apply specializeMod_dom_indexed in H; auto; dest. apply withIndex_index_eq in H; dest; lia. + unfold compLabelMaps. rewrite IHn; clear IHn; auto. unfold compLabelMaps. destruct (dropN ds n s v); auto. destruct (dropI ds u s s0); auto. destruct (dropI ds (S n) s s1); auto. unfold dropI; remember (string_eq _ _) as st; destruct st; auto. exfalso; apply string_eq_dec_eq in Heqst; subst. apply spDom_getExtMeths in H. 
apply duplicate_dom_indexed in H; auto; dest. apply withIndex_index_eq in H; dest; lia. Qed. Lemma duplicate_traceRefines_drop: forall n, (forall i, (m1 i) <<=[dropP ds] (m2 i)) -> (duplicate m1 n <<=[dropN ds n] duplicate m2 n). Proof. induction n; simpl; intros. - apply specializeMod_traceRefines_drop; auto. - apply traceRefines_modular_noninteracting_p; auto. + apply specializeMod_ModEquiv; auto. + apply specializeMod_ModEquiv; auto. + apply duplicate_ModEquiv; auto. + apply duplicate_ModEquiv; auto. + apply duplicate_specializeMod_disj_regs; auto. + apply duplicate_specializeMod_disj_regs; auto. + pose proof (duplicate_validRegsModules m1 (S n) Hvr1); auto. + pose proof (duplicate_validRegsModules m2 (S n) Hvr2); auto. + apply duplicate_specializeMod_disj_defs; auto. + eapply DisjList_comm, DisjList_SubList. * apply getIntCalls_getCalls. * apply DisjList_comm, duplicate_specializeMod_disj_calls; auto. + eapply DisjList_SubList. * apply getIntCalls_getCalls. * apply duplicate_specializeMod_disj_calls; auto. + apply duplicate_specializeMod_disj_defs; auto. + eapply DisjList_comm, DisjList_SubList. * apply getIntCalls_getCalls. * apply DisjList_comm, duplicate_specializeMod_disj_calls; auto. + eapply DisjList_SubList. * apply getIntCalls_getCalls. * apply duplicate_specializeMod_disj_calls; auto. + split. * apply equivalentLabelMapElem_dropI_dropN; auto; lia. * apply equivalentLabelMapElem_dropN_dropI; auto; lia. + apply duplicate_noninteracting; auto. + apply duplicate_noninteracting; auto. + apply specializeMod_traceRefines_drop; auto. Qed. End DuplicateTwoModules2. Section DuplicateTwoModules3. Variables (m1 m2: nat -> Modules). Hypotheses (Hequiv1: forall iv ty, ModEquiv ty typeUT (m1 iv)) (Hequiv2: forall iv ty, ModEquiv ty typeUT (m2 iv)) (Hvr1: forall iv ty, ValidRegsModules ty (m1 iv)) (Hvr2: forall iv ty, ValidRegsModules ty (m2 iv)) (Hsp1: forall iv, Specializable (m1 iv)) (Hsp2: forall iv, Specializable (m2 iv)). Lemma duplicate_regs_ConcatMod_1: forall n, SubList (getRegInits (duplicate (fun i => (m1 i) ++ (m2 i))%kami n)) (getRegInits (duplicate m1 n ++ duplicate m2 n)%kami). Proof. Opaque specializeMod. induction n; intros. - unfold duplicate. rewrite specializeMod_concatMod; auto. apply SubList_refl. - simpl in *; apply SubList_app_3. + rewrite specializeMod_concatMod; auto. simpl; apply SubList_app_3. * do 2 apply SubList_app_1; apply SubList_refl. * apply SubList_app_2, SubList_app_1, SubList_refl. + unfold SubList in *; intros. specialize (IHn e H). apply in_app_or in IHn; destruct IHn. * apply in_or_app; left; apply in_or_app; auto. * apply in_or_app; right; apply in_or_app; auto. Transparent specializeMod. Qed. Lemma duplicate_regs_ConcatMod_2: forall n, SubList (getRegInits (duplicate m1 n ++ duplicate m2 n)%kami) (getRegInits (duplicate (fun i => m1 i ++ m2 i)%kami n)). Proof. Opaque specializeMod. induction n; intros. - unfold duplicate. rewrite specializeMod_concatMod; auto. apply SubList_refl. - simpl in *; apply SubList_app_3. + rewrite specializeMod_concatMod; auto. simpl; apply SubList_app_3. * do 2 apply SubList_app_1; apply SubList_refl. * apply SubList_app_2; apply SubList_app_4 in IHn; auto. + rewrite specializeMod_concatMod; auto. simpl; apply SubList_app_3. * apply SubList_app_1, SubList_app_2, SubList_refl. * apply SubList_app_2; apply SubList_app_5 in IHn; auto. Transparent specializeMod. Qed. 
Corollary duplicate_regs_ConcatMod: forall n, EquivList (getRegInits (duplicate m1 n ++ duplicate m2 n)%kami) (getRegInits (duplicate (fun i => m1 i ++ m2 i)%kami n)). Proof. intros; split. - apply duplicate_regs_ConcatMod_2. - apply duplicate_regs_ConcatMod_1. Qed. Lemma duplicate_regs_NoDup_2: (forall i, NoDup (namesOf (getRegInits (m1 i ++ m2 i)%kami))) -> forall n, NoDup (namesOf (getRegInits (duplicate m1 n ++ duplicate m2 n)%kami)). Proof. Opaque specializeMod. intros. pose proof H; apply duplicate_regs_NoDup with (n:= n) in H0. induction n; simpl; intros. - simpl in *; rewrite specializeMod_concatMod in H0; auto. - assert (NoDup (namesOf (getRegInits (duplicate (fun i => m1 i ++ m2 i)%kami n)))). { apply duplicate_regs_NoDup; auto. intros; apply specializable_concatMod; auto. } specialize (IHn H1); clear H1. unfold namesOf; repeat rewrite map_app. rewrite app_assoc. rewrite <-app_assoc with (l:= map (@attrName _) (getRegInits (specializeMod (m1 (S n)) (S n)))). rewrite <-app_assoc with (l:= map (@attrName _) (getRegInits (specializeMod (m1 (S n)) (S n)))). apply NoDup_app_comm_ext. rewrite app_assoc. rewrite app_assoc. rewrite <-app_assoc with (n:= map (@attrName _) (getRegInits (duplicate m2 n))). apply NoDup_DisjList. + specialize (H (S n)); apply specializeMod_regs_NoDup with (i:= S n) in H; [|apply specializable_concatMod; auto]. rewrite specializeMod_concatMod in H; auto. rewrite <-map_app; auto. + rewrite <-map_app; auto. + do 2 rewrite <-map_app. pose proof (duplicate_regs_ConcatMod_2 n). apply SubList_map with (f:= @attrName _) in H1. eapply DisjList_comm, DisjList_SubList; eauto. pose proof (specializeMod_concatMod (Hvr1 (S n)) (Hvr2 (S n)) (Hequiv1 (S n)) (Hequiv2 (S n)) (S n) (Hsp1 (S n)) (Hsp2 (S n))). change (getRegInits (specializeMod (m1 (S n)) (S n)) ++ getRegInits (specializeMod (m2 (S n)) (S n))) with (getRegInits (specializeMod (m1 (S n)) (S n) ++ (specializeMod (m2 (S n)) (S n)))%kami). rewrite <-H2. apply DisjList_comm. change (m1 (S n) ++ m2 (S n))%kami with ((fun i => (m1 i ++ m2 i)%kami) (S n)). apply duplicate_specializeMod_disj_regs; auto. intros; apply specializable_concatMod; auto. - intros; apply specializable_concatMod; auto. Qed. Lemma duplicate_rules_ConcatMod_1: forall n, SubList (getRules (duplicate (fun i => m1 i ++ m2 i)%kami n)) (getRules (duplicate m1 n ++ duplicate m2 n)%kami). Proof. Opaque specializeMod. induction n; intros. - unfold duplicate. rewrite specializeMod_concatMod; auto. apply SubList_refl. - simpl in *; apply SubList_app_3. + rewrite specializeMod_concatMod; auto. simpl; apply SubList_app_3. * do 2 apply SubList_app_1; apply SubList_refl. * apply SubList_app_2, SubList_app_1, SubList_refl. + unfold SubList in *; intros. specialize (IHn e H). apply in_app_or in IHn; destruct IHn. * apply in_or_app; left; apply in_or_app; auto. * apply in_or_app; right; apply in_or_app; auto. Transparent specializeMod. Qed. Lemma duplicate_rules_ConcatMod_2: forall n, SubList (getRules (duplicate m1 n ++ duplicate m2 n)%kami) (getRules (duplicate (fun i => m1 i ++ m2 i)%kami n)). Proof. Opaque specializeMod. induction n; intros. - unfold duplicate. rewrite specializeMod_concatMod; auto. apply SubList_refl. - simpl in *; apply SubList_app_3. + rewrite specializeMod_concatMod; auto. simpl; apply SubList_app_3. * do 2 apply SubList_app_1; apply SubList_refl. * apply SubList_app_2; apply SubList_app_4 in IHn; auto. + rewrite specializeMod_concatMod; auto. simpl; apply SubList_app_3. * apply SubList_app_1, SubList_app_2, SubList_refl. 
* apply SubList_app_2; apply SubList_app_5 in IHn; auto. Transparent specializeMod. Qed. Corollary duplicate_rules_ConcatMod: forall n, EquivList (getRules (duplicate m1 n ++ duplicate m2 n)%kami) (getRules (duplicate (fun i => m1 i ++ m2 i)%kami n)). Proof. intros; split. - apply duplicate_rules_ConcatMod_2. - apply duplicate_rules_ConcatMod_1. Qed. Lemma duplicate_defs_ConcatMod_1: forall n, SubList (getDefsBodies (duplicate (fun i => m1 i ++ m2 i)%kami n)) (getDefsBodies (duplicate m1 n ++ duplicate m2 n)%kami). Proof. Opaque specializeMod. induction n; intros. - unfold duplicate. rewrite specializeMod_concatMod; auto. apply SubList_refl. - simpl in *; apply SubList_app_3. + rewrite specializeMod_concatMod; auto. simpl; apply SubList_app_3. * do 2 apply SubList_app_1; apply SubList_refl. * apply SubList_app_2, SubList_app_1, SubList_refl. + unfold SubList in *; intros. specialize (IHn e H). apply in_app_or in IHn; destruct IHn. * apply in_or_app; left; apply in_or_app; auto. * apply in_or_app; right; apply in_or_app; auto. Transparent specializeMod. Qed. Lemma duplicate_defs_ConcatMod_2: forall n, SubList (getDefsBodies (duplicate m1 n ++ duplicate m2 n)%kami) (getDefsBodies (duplicate (fun i => m1 i ++ m2 i)%kami n)). Proof. Opaque specializeMod. induction n; intros. - unfold duplicate. rewrite specializeMod_concatMod; auto. apply SubList_refl. - simpl in *; apply SubList_app_3. + rewrite specializeMod_concatMod; auto. simpl; apply SubList_app_3. * do 2 apply SubList_app_1; apply SubList_refl. * apply SubList_app_2; apply SubList_app_4 in IHn; auto. + rewrite specializeMod_concatMod; auto. simpl; apply SubList_app_3. * apply SubList_app_1, SubList_app_2, SubList_refl. * apply SubList_app_2; apply SubList_app_5 in IHn; auto. Transparent specializeMod. Qed. Lemma duplicate_defs_ConcatMod: forall n, EquivList (getDefsBodies (duplicate m1 n ++ duplicate m2 n)%kami) (getDefsBodies (duplicate (fun i => m1 i ++ m2 i)%kami n)). Proof. intros; split. - apply duplicate_defs_ConcatMod_2. - apply duplicate_defs_ConcatMod_1. Qed. End DuplicateTwoModules3. Section DuplicateTwoModules4. Variables (m1 m2: nat -> Modules). Hypotheses (Hsp1: forall iv, Specializable (m1 iv)) (Hsp2: forall iv, Specializable (m2 iv)) (Hequiv1: forall iv ty, ModEquiv ty typeUT (m1 iv)) (Hequiv2: forall iv ty, ModEquiv ty typeUT (m2 iv)) (Hvr1: forall iv ty, ValidRegsModules ty (m1 iv)) (Hvr2: forall iv ty, ValidRegsModules ty (m2 iv)) (HnoDup: forall iv, NoDup (namesOf (getRegInits (m1 iv ++ m2 iv)%kami))). Lemma duplicate_concatMod_comm_1: forall n, duplicate (fun i => m1 i ++ m2 i)%kami n <<== ((duplicate m1 n) ++ (duplicate m2 n))%kami. Proof. intros; rewrite idElementwiseId. apply traceRefines_same_module_structure. - apply duplicate_regs_NoDup; auto. intros; apply specializable_concatMod; auto. - apply duplicate_regs_NoDup_2; auto. - split. + apply duplicate_regs_ConcatMod_1; auto. + apply duplicate_regs_ConcatMod_2; auto. - split. + apply duplicate_rules_ConcatMod_1; auto. + apply duplicate_rules_ConcatMod_2; auto. - split. + apply duplicate_defs_ConcatMod_1; auto. + apply duplicate_defs_ConcatMod_2; auto. Qed. Lemma duplicate_concatMod_comm_2: forall n, ((duplicate m1 n) ++ (duplicate m2 n))%kami <<== duplicate (fun i => m1 i ++ m2 i)%kami n. Proof. intros; rewrite idElementwiseId. apply traceRefines_same_module_structure. - apply duplicate_regs_NoDup_2; auto. - apply duplicate_regs_NoDup; auto. intros; apply specializable_concatMod; auto. - split. + apply duplicate_regs_ConcatMod_2; auto. 
+ apply duplicate_regs_ConcatMod_1; auto. - split. + apply duplicate_rules_ConcatMod_2; auto. + apply duplicate_rules_ConcatMod_1; auto. - split. + apply duplicate_defs_ConcatMod_2; auto. + apply duplicate_defs_ConcatMod_1; auto. Qed. End DuplicateTwoModules4. #[global] Hint Unfold specializeMod duplicate: ModuleDefs.
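(* Example unfolding of the fixpoint: for a module family m,
     duplicate m 2 = ConcatMod (specializeMod (m 2) 2)
                               (ConcatMod (specializeMod (m 1) 1) (specializeMod (m 0) 0)),
   i.e. one specialized copy of the module per index, from n down to 0. *)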
Formal statement is: lemma finite_ksimplexes: "finite {s. ksimplex p n s}" Informal statement is: For any parameters $p$ and $n$, the collection of ksimplexes $\{s \mid \mathrm{ksimplex}\ p\ n\ s\}$ is finite.
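A short counting argument makes the finiteness plausible (this sketch assumes the usual Kuhn-simplex setup, in which a ksimplex with parameters $p$ and $n$ is a finite set of admissible vertex functions whose first $n$ coordinates take values in $\{0,\dots,p\}$ and which are fixed elsewhere): there are at most $(p+1)^n$ admissible vertex functions, and every ksimplex is a subset of them, so

$$\#\{s \mid \mathrm{ksimplex}\ p\ n\ s\} \;\le\; 2^{(p+1)^n} < \infty.$$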
section \<open>Reference Monitors\<close> theory Reference_Monitor imports "../Noninterference/Noninterference" begin text \<open>We now consider a special class of automata with structured state: there is a type @{text "'var"} of variable names, and a state stores a value for each variable.\<close> subsection \<open>Definition\<close> text \<open>We formalize this with a function @{text contents}, which takes a state and a variable name, and returns the value stored under that variable name in the state.\<close> locale Structured_State = Automaton s0 step out for s0 :: "'state" and step :: "'state \<Rightarrow> 'act \<Rightarrow> 'state" and out :: "'state \<Rightarrow> 'act \<Rightarrow> 'out" + fixes contents :: "'state \<Rightarrow> 'var \<Rightarrow> 'val" subsection \<open>Verification\<close> text \<open>We implement a flow policy for an automaton with structured state by specifying which variables can be read and written in a domain: @{text "observe u"} gives the set of variables that may be read in the domain @{text u}, and @{text "alter u"} gives the set of variables that may be written by @{text u}. These functions have to be consistent with the flow policy in the following sense: \<^item> if information is allowed to flow from domain @{text u} to domain @{text v}, then all variables that may be read by @{text u} may also be read by @{text v} (condition @{text FP_impl1} below), and \<^item> if a variable may be written by @{text u} and read by @{text v}, then information must be allowed to flow from @{text u} to @{text v} (condition @{text FP_impl2} below.)\<close> locale FP_Implementation = Structured_State s0 step out contents + NI s0 step out FP dom for s0 :: "'state" and step :: "'state \<Rightarrow> 'act \<Rightarrow> 'state" and out :: "'state \<Rightarrow> 'act \<Rightarrow> 'out" and contents :: "'state \<Rightarrow> 'var \<Rightarrow> 'val" and FP :: "'dom rel" and dom :: "'act \<Rightarrow> 'dom" + fixes observe :: "'dom \<Rightarrow> 'var set" and alter :: "'dom \<Rightarrow> 'var set" assumes FP_impl1: "(u \<leadsto> v) \<Longrightarrow> observe u \<subseteq> observe v" and FP_impl2: "n \<in> alter u \<Longrightarrow> n \<in> observe v \<Longrightarrow> (u \<leadsto> v)" begin text \<open>We will prove the security of the automaton using the unwinding theorem. For this purpose, we define a view partitioning where two states are equivalent for a domain @{text u} iff they coincide on the variables that are observable for @{text u}.\<close> definition view :: "'dom \<Rightarrow> 'state rel" where "view u \<equiv> {(s, t). \<forall>n \<in> observe u. contents s n = contents t n}" abbreviation view'' :: "'state \<Rightarrow> 'dom \<Rightarrow> 'state \<Rightarrow> bool" ("_ \<sim>\<^bsub>_\<^esub> _") where "(s \<sim>\<^bsub>u\<^esub> t) \<equiv> (s, t) \<in> view u" lemma viewI: assumes "\<And>n. n \<in> observe u \<Longrightarrow> contents s n = contents t n" shows "s \<sim>\<^bsub>u\<^esub> t" using assms unfolding view_def by auto text \<open>This is indeed a view partitioning: for all domains @{text u}, the relation @{text "\<sim>\<^bsub>u\<^esub>"} is an equivalence relation.\<close> sublocale View_Partitioning s0 step out FP dom view proof show "\<forall>u. 
equiv UNIV (view u)" unfolding view_def by (auto intro!: equivI symI refl_onI transI) qed text \<open>As a helper lemma for the proof below, we note that, if information is allowed to flow from @{text u} to @{text v}, then two states are equivalent for @{text u} if they are equivalent for @{text v}, i.e.\ @{text "\<sim>\<^bsub>v\<^esub>"} is a @{emph \<open>finer\<close>} equivalence relation than @{text "\<sim>\<^bsub>u\<^esub>"}.\<close> lemma view_spec: assumes "u \<leadsto> v" and "s \<sim>\<^bsub>v\<^esub> t" shows "s \<sim>\<^bsub>u\<^esub> t" using assms FP_impl1 unfolding view_def by auto end text \<open>In order to prove the unwinding conditions, we require that the implementation of the flow policy satisfies three additional assumptions: \<^item> output consistency has to hold (RMA1), \<^item> if a visible action is performed on two equivalent states and changes the value of a variable in one of them, then it has to be changed to the same value in the other state (RMA2), and \<^item> if an action @{text a} changes the value of a variable, then that variable must be writable by the domain of @{text a} (RMA3).\<close> locale Reference_Monitor = FP_Implementation + assumes RMA1: "(s \<sim>\<^bsub>dom a\<^esub> t) \<Longrightarrow> out s a = out t a" and RMA2: "(s \<sim>\<^bsub>dom a\<^esub> t) \<Longrightarrow> (contents (step s a) n \<noteq> contents s n) \<or> (contents (step t a) n \<noteq> contents t n) \<Longrightarrow> contents (step s a) n = contents (step t a) n" and RMA3: "contents (step s a) n \<noteq> contents s n \<Longrightarrow> n \<in> alter (dom a)" begin text \<open>Together with the assumptions on the consistency of implementation and flow policy (@{text "FP_impl1"} and @{text "FP_impl2"}), this is sufficient to prove the unwinding conditions.\<close> theorem monitor_secure: "NI_secure" proof (intro unwinding_theorem) show "output_consistent" using RMA1 unfolding output_consistent_def by auto next show "step_consistent" unfolding step_consistent_def proof (intro allI impI, elim conjE) fix s t u a assume "(dom a) \<leadsto> u" and "s \<sim>\<^bsub>u\<^esub> t" then have "s \<sim>\<^bsub>dom a\<^esub> t" by (rule view_spec) show "(step s a) \<sim>\<^bsub>u\<^esub> (step t a)" proof (intro viewI) fix n assume n: "n \<in> observe u" show "contents (step s a) n = contents (step t a) n" proof (cases "contents (step s a) n = contents s n \<and> contents (step t a) n = contents t n") case True then show ?thesis using `s \<sim>\<^bsub>u\<^esub> t` n unfolding view_def by auto next case False then show ?thesis using `s \<sim>\<^bsub>dom a\<^esub> t` RMA2 by auto qed qed qed next show "locally_respects_FP" unfolding locally_respects_FP_def proof (intro allI impI) fix s u a assume noflow: "\<not>((dom a) \<leadsto> u)" then have not_alter_observe: "\<forall>n. n \<notin> alter (dom a) \<or> n \<notin> observe u" using FP_impl2 by auto show "s \<sim>\<^bsub>u\<^esub> (step s a)" proof (intro viewI) fix n assume "n \<in> observe u" with not_alter_observe have "n \<notin> alter (dom a)" by auto with RMA3 have "contents (step s a) n = contents s n" by auto then show "contents s n = contents (step s a) n" .. qed qed qed end end
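For a concrete feel for the two consistency conditions, one small configuration that satisfies them is the following: take two domains $L$ and $H$ with the reflexive policy $L \leadsto L$, $H \leadsto H$, $L \leadsto H$ (and no flow from $H$ to $L$), and two variables $x$ and $y$ with

$$\mathrm{observe}(L) = \{x\}, \quad \mathrm{observe}(H) = \{x, y\}, \quad \mathrm{alter}(L) = \{x\}, \quad \mathrm{alter}(H) = \{y\}.$$

Condition FP_impl1 holds: the only non-trivial flow is $L \leadsto H$, and $\mathrm{observe}(L) \subseteq \mathrm{observe}(H)$. Condition FP_impl2 holds as well, since the only triples with $n \in \mathrm{alter}(u) \cap \mathrm{observe}(v)$ are $(x, L, L)$, $(x, L, H)$ and $(y, H, H)$, each backed by an allowed flow. Had $\mathrm{alter}(H)$ contained $x$, the triple $(x, H, L)$ would demand $H \leadsto L$, which this policy forbids, so a monitor implementing it must not let $H$ write the low-observable variable $x$.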
[STATEMENT] lemma nth_default_take: "nth_default x (take n xs) m = (if m < n then nth_default x xs m else x)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. nth_default x (take n xs) m = (if m < n then nth_default x xs m else x) [PROOF STEP] by (auto simp add: nth_default_def add_ac)
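For a concrete instance, read nth_default as "the $m$-th element if $m$ is in range, otherwise the default", and take $xs = [a, b, c]$, default $x = d$ and $n = 2$, so $\mathrm{take}\ 2\ xs = [a, b]$:

$$\mathrm{nth\_default}\ d\ [a,b]\ 1 = b = \mathrm{nth\_default}\ d\ xs\ 1 \quad (1 < 2), \qquad \mathrm{nth\_default}\ d\ [a,b]\ 5 = d \quad (5 \ge 2).$$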
(*************************************************************************** * System-F pure with capabilities in a different universe * * (based on the F<: implementation in locally-nameless project) * ***************************************************************************) Set Implicit Arguments. Require Import LibLN. Implicit Types x : var. Implicit Types X : var. (* ********************************************************************** *) (** * Description of the Language *) (** Representation of pre-types *) Inductive typ : Set := | typ_bvar : nat -> typ | typ_fvar : bool -> var -> typ (* bool indicates which type universe *) | typ_base : typ | typ_eff : typ | typ_stoic : typ -> typ -> typ (* effect closeded term abstraction *) | typ_all : bool -> typ -> typ. (* Bool indicate which type universe *) (** Representation of pre-terms *) Inductive trm : Set := | trm_bvar : nat -> trm | trm_fvar : var -> trm | trm_abs : typ -> trm -> trm | trm_app : trm -> trm -> trm | trm_tabs : bool -> trm -> trm (* Bool indicates which type universe *) | trm_tapp : trm -> typ -> trm. (** Opening up a type binder occuring in a type *) Fixpoint open_tt_rec (K : nat) (U : typ) (T : typ) {struct T} : typ := match T with | typ_bvar J => If K = J then U else (typ_bvar J) | typ_fvar b X => typ_fvar b X | typ_base => typ_base | typ_eff => typ_eff | typ_stoic T1 T2 => typ_stoic (open_tt_rec K U T1) (open_tt_rec K U T2) | typ_all b T1 => typ_all b (open_tt_rec (S K) U T1) end. Definition open_tt T U := open_tt_rec 0 U T. (** Opening up a type binder occuring in a term *) Fixpoint open_te_rec (K : nat) (U : typ) (e : trm) {struct e} : trm := match e with | trm_bvar i => trm_bvar i | trm_fvar x => trm_fvar x | trm_abs V e1 => trm_abs (open_tt_rec K U V) (open_te_rec K U e1) | trm_app e1 e2 => trm_app (open_te_rec K U e1) (open_te_rec K U e2) | trm_tabs b e1 => trm_tabs b (open_te_rec (S K) U e1) | trm_tapp e1 V => trm_tapp (open_te_rec K U e1) (open_tt_rec K U V) end. Definition open_te t U := open_te_rec 0 U t. (** Opening up a term binder occuring in a term *) Fixpoint open_ee_rec (k : nat) (f : trm) (e : trm) {struct e} : trm := match e with | trm_bvar i => If k = i then f else (trm_bvar i) | trm_fvar x => trm_fvar x | trm_abs V e1 => trm_abs V (open_ee_rec (S k) f e1) | trm_app e1 e2 => trm_app (open_ee_rec k f e1) (open_ee_rec k f e2) | trm_tabs b e1 => trm_tabs b (open_ee_rec k f e1) | trm_tapp e1 V => trm_tapp (open_ee_rec k f e1) V end. Definition open_ee t u := open_ee_rec 0 u t. (** Notation for opening up binders with type or term variables *) Notation "T 'open_tt_var' b '\with' X" := (open_tt T (typ_fvar b X)) (at level 67). Notation "t 'open_te_var' b '\with' X" := (open_te t (typ_fvar b X)) (at level 67). Notation "t 'open_ee_var' x" := (open_ee t (trm_fvar x)) (at level 67). (** Types as locally closeded pre-types *) Inductive type : typ -> Prop := | type_var : forall b X, type (typ_fvar b X) | type_base: type typ_base | type_eff: type typ_eff | type_stoic : forall T1 T2, type T1 -> type T2 -> type (typ_stoic T1 T2) | type_all : forall L T2 b, (forall X, X \notin L -> type (T2 open_tt_var b \with X)) -> type (typ_all b T2). 
(** Terms as locally closeded pre-terms *) Inductive term : trm -> Prop := | term_var : forall x, term (trm_fvar x) | term_abs : forall L V e1, type V -> (forall x, x \notin L -> term (e1 open_ee_var x)) -> term (trm_abs V e1) | term_app : forall e1 e2, term e1 -> term e2 -> term (trm_app e1 e2) | term_tabs : forall L e1 b, (forall X, X \notin L -> term (e1 open_te_var b \with X)) -> term (trm_tabs b e1) | term_tapp : forall e1 V, term e1 -> type V -> term (trm_tapp e1 V). (** Environment is an associative list of bindings. *) (** Binding are either mapping type or term variables. [: X :] is a type variable asumption and [x ~: T] is a typing assumption *) Inductive bind : Set := | bind_tvar : bool -> bind (* bool indicates which type universe *) | bind_typ : typ -> bind. Notation "X \at b" := (X ~ bind_tvar b) (at level 23) : env_scope. Notation "x ~: T" := (x ~ bind_typ T) (at level 24, left associativity) : env_scope. Definition env := LibEnv.env bind. (** Well-formedness of a pre-type T in an environment E: all the type variables of T must be bound via [: T :] in E. This predicates implies that T is a type *) Inductive wft : env -> typ -> Prop := | wft_base: forall E, wft E typ_base | wft_eff: forall E, wft E typ_eff | wft_var : forall E X b, binds X (bind_tvar b) E -> wft E (typ_fvar b X) | wft_stoic : forall E T1 T2, wft E T1 -> wft E T2 -> wft E (typ_stoic T1 T2) | wft_all : forall L E T b, (forall X, X \notin L -> wft (E & (X \at b)) (T open_tt_var b \with X)) -> wft E (typ_all b T). (** A environment E is well-formed if it contains no duplicate bindings and if each type in it is well-formed with respect to the environment it is pushed on to. *) Inductive okt : env -> Prop := | okt_empty : okt empty | okt_tvar : forall E X b, okt E -> X # E -> okt (E & (X \at b)) | okt_typ : forall E x T, okt E -> wft E T -> x # E -> okt (E & x ~: T). (* closed rules *) Fixpoint closed_typ(t: typ) := match t with | typ_bvar _ => false (* impossible, ill-formed *) | typ_fvar b _ => b (* true - ordinary type, false - capability type *) | typ_base => true | typ_eff => false | typ_stoic U V => true (* pure lambda abstraction *) | typ_all _ T => true (* pure type abstraction *) end. Fixpoint pure(E: env) := match E with | nil => nil | cons (X, bind_tvar b) E' => cons (X, bind_tvar b) (pure E') | cons (x, bind_typ T) E' => if closed_typ T then cons (x, bind_typ T) (pure E') else pure E' end. (** Typing relation *) Inductive typing : env -> trm -> typ -> Prop := | typing_var : forall E x T, okt E -> binds x (bind_typ T) E -> typing E (trm_fvar x) T | typing_stoic: forall L E V e1 T1, okt E -> (forall x, x \notin L -> typing ((pure E) & x ~: V) (e1 open_ee_var x) T1) -> typing E (trm_abs V e1) (typ_stoic V T1) | typing_app : forall T1 E e1 e2 T2, typing E e1 (typ_stoic T1 T2) -> typing E e2 T1 -> typing E (trm_app e1 e2) T2 | typing_tabs : forall L E e1 T1 b, okt E -> (forall X, X \notin L -> typing ((pure E) & (X \at b)) (e1 open_te_var b \with X) (T1 open_tt_var b \with X)) -> typing E (trm_tabs b e1) (typ_all b T1) | typing_tapp : forall T1 E e1 T b, wft E T -> closed_typ T = b -> typing E e1 (typ_all b T1) -> typing E (trm_tapp e1 T) (open_tt T1 T). (** Values *) Inductive value : trm -> Prop := | value_abs : forall V e1, term (trm_abs V e1) -> value (trm_abs V e1) | value_tabs : forall e1 b, term (trm_tabs b e1) -> value (trm_tabs b e1) | value_var : forall x, value (trm_fvar x). 
(** One-step reduction *) Inductive red : trm -> trm -> Prop := | red_app_1 : forall e1 e1' e2, term e2 -> red e1 e1' -> red (trm_app e1 e2) (trm_app e1' e2) | red_app_2 : forall e1 e2 e2', value e1 -> red e2 e2' -> red (trm_app e1 e2) (trm_app e1 e2') | red_tapp : forall e1 e1' V, type V -> red e1 e1' -> red (trm_tapp e1 V) (trm_tapp e1' V) | red_abs : forall V e1 v2, term (trm_abs V e1) -> value v2 -> red (trm_app (trm_abs V e1) v2) (open_ee e1 v2) | red_tabs : forall e1 V b, term (trm_tabs b e1) -> type V -> red (trm_tapp (trm_tabs b e1) V) (open_te e1 V). (** Our goal is to prove preservation and progress *) Definition preservation := forall E e e' T, typing E e T -> red e e' -> typing E e' T. Definition progress := forall e T, typing empty e T -> value e \/ exists e', red e e'. (* inhabitable environment *) Inductive primitive: env -> Prop := | primitive_base: forall x y, primitive (x ~: typ_base & y ~: typ_eff) | primitive_tvar: forall X E b, primitive E -> primitive (E & (X \at b)) | primitive_typ: forall x X E b, primitive E -> primitive (E & x ~: (typ_fvar b X)). Inductive inhabitable: env -> Prop := | inhabitable_empty: inhabitable empty | inhabitable_tvar: forall X E b, inhabitable E -> inhabitable (E & (X \at b)) | inhabitable_typ: forall z t T E F, inhabitable E -> primitive F -> value t -> typing F t T -> inhabitable (E & z ~: T). (* capsafe types are not capability producing, i.e. capable of creating an instance of E *) Fixpoint degree_typ (T: typ) := match T with | typ_stoic T1 T2 => max (degree_typ T1) (degree_typ T2) | typ_all _ T => S (degree_typ T) | _ => O end. Fixpoint degree_trm (t: trm) := match t with | trm_abs _ t1 => degree_trm t1 | trm_app t1 t2 => max (degree_trm t1) (degree_trm t2) | trm_tabs _ t1 => S (degree_trm t1) | trm_tapp t1 _ => degree_trm t1 | _ => O end. Inductive capsafe: typ -> Prop := | capsafe_base: capsafe typ_base | capsafe_var: forall X, capsafe (typ_fvar true X) | capsafe_eff_any: forall S T, type T -> caprod S -> capsafe (typ_stoic S T) | capsafe_any_safe: forall S T, type S -> capsafe T -> capsafe (typ_stoic S T) | capsafe_all_true: forall T, type (typ_all true T) -> capsafe (open_tt T typ_base) -> capsafe (open_tt T (typ_stoic typ_base typ_eff)) -> capsafe (typ_all true T) | capsafe_all_false: forall T, type (typ_all false T) -> capsafe (open_tt T typ_eff) -> capsafe (typ_all false T) with caprod: typ -> Prop := | caprod_eff: caprod typ_eff | caprod_var: forall X, caprod (typ_fvar false X) | caprod_safe_eff: forall S T, capsafe S -> caprod T -> caprod (typ_stoic S T) | caprod_all_true: forall T, type (typ_all true T) -> (caprod (open_tt T typ_base) \/ caprod (open_tt T (typ_stoic typ_base typ_eff))) -> caprod (typ_all true T) | caprod_all_false: forall T, type (typ_all false T) -> caprod (open_tt T typ_eff) -> caprod (typ_all false T). (* capsafe environment *) Inductive healthy: env -> Prop := | healthy_empty: healthy empty | healthy_typ: forall x E T, capsafe T -> healthy E -> healthy (E & x ~: T) | healthy_tvar: forall X E b, healthy E -> healthy (E & ( X \at b)). Definition inhabitable_pure_healthy_statement := forall E, inhabitable E -> pure E = E -> healthy E. (* effect safety : it's impossible to construct a term of typ_eff in healthy environment *) Definition effect_safety := forall E, healthy E -> ~exists e, typing E e typ_eff. 
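(* Intuition: typ_eff is capability-producing (rule caprod_eff), while a healthy
   environment binds term variables only at capsafe types, so effect_safety states
   that no term of the capability type typ_eff is typable in such an environment. *)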
(* ********************************************************************** *) (** * Additional Definitions Used in the Proofs *) (** Computing free type variables in a type *) Fixpoint fv_tt (T : typ) {struct T} : vars := match T with | typ_bvar J => \{} | typ_base => \{} | typ_eff => \{} | typ_fvar _ X => \{X} | typ_stoic T1 T2 => (fv_tt T1) \u (fv_tt T2) | typ_all _ T1 => (fv_tt T1) end. (** Computing free type variables in a term *) Fixpoint fv_te (e : trm) {struct e} : vars := match e with | trm_bvar i => \{} | trm_fvar x => \{} | trm_abs V e1 => (fv_tt V) \u (fv_te e1) | trm_app e1 e2 => (fv_te e1) \u (fv_te e2) | trm_tabs _ e1 => (fv_te e1) | trm_tapp e1 V => (fv_tt V) \u (fv_te e1) end. (** Computing free term variables in a type *) Fixpoint fv_ee (e : trm) {struct e} : vars := match e with | trm_bvar i => \{} | trm_fvar x => \{x} | trm_abs V e1 => (fv_ee e1) | trm_app e1 e2 => (fv_ee e1) \u (fv_ee e2) | trm_tabs _ e1 => (fv_ee e1) | trm_tapp e1 V => (fv_ee e1) end. (** Substitution for free type variables in types. *) Fixpoint subst_tt (Z : var) (zb: bool) (U : typ) (T : typ) {struct T} : typ := match T with | typ_bvar J => typ_bvar J | typ_base => typ_base | typ_eff => typ_eff | typ_fvar b X => If X = Z /\ b = zb then U else (typ_fvar b X) | typ_stoic T1 T2 => typ_stoic (subst_tt Z zb U T1) (subst_tt Z zb U T2) | typ_all b T => typ_all b (subst_tt Z zb U T) end. (** Substitution for free type variables in terms. *) Fixpoint subst_te (Z : var) (zb: bool) (U : typ) (e : trm) {struct e} : trm := match e with | trm_bvar i => trm_bvar i | trm_fvar x => trm_fvar x | trm_abs V e1 => trm_abs (subst_tt Z zb U V) (subst_te Z zb U e1) | trm_app e1 e2 => trm_app (subst_te Z zb U e1) (subst_te Z zb U e2) | trm_tabs b e1 => trm_tabs b (subst_te Z zb U e1) | trm_tapp e1 V => trm_tapp (subst_te Z zb U e1) (subst_tt Z zb U V) end. (** Substitution for free term variables in terms. *) Fixpoint subst_ee (z : var) (u : trm) (e : trm) {struct e} : trm := match e with | trm_bvar i => trm_bvar i | trm_fvar x => If x = z then u else (trm_fvar x) | trm_abs V e1 => trm_abs V (subst_ee z u e1) | trm_app e1 e2 => trm_app (subst_ee z u e1) (subst_ee z u e2) | trm_tabs b e1 => trm_tabs b (subst_ee z u e1) | trm_tapp e1 V => trm_tapp (subst_ee z u e1) V end. (** Substitution for free type variables in environment. *) Definition subst_tb (Z : var) (zb: bool) (P : typ) (b : bind) : bind := match b with | bind_tvar b => bind_tvar b | bind_typ T => bind_typ (subst_tt Z zb P T) end. (* ********************************************************************** *) (** * Tactics *) (** Constructors as hints. *) Hint Constructors type term wft ok okt value red. Hint Resolve typing_var typing_app typing_tapp. (** Gathering free names already used in the proofs *) Ltac gather_vars := let A := gather_vars_with (fun x : vars => x) in let B := gather_vars_with (fun x : var => \{x}) in let C := gather_vars_with (fun x : trm => fv_te x) in let D := gather_vars_with (fun x : trm => fv_ee x) in let E := gather_vars_with (fun x : typ => fv_tt x) in let F := gather_vars_with (fun x : env => dom x) in constr:(A \u B \u C \u D \u E \u F). (** "pick_fresh x" tactic create a fresh variable with name x *) Ltac pick_fresh x := let L := gather_vars in (pick_fresh_gen L x). (** "apply_fresh T as x" is used to apply inductive rule which use an universal quantification over a cofinite set *) Tactic Notation "apply_fresh" constr(T) "as" ident(x) := apply_fresh_base T gather_vars x. 
Tactic Notation "apply_fresh" "*" constr(T) "as" ident(x) := apply_fresh T as x; autos*. (** These tactics help applying a lemma which conclusion mentions an environment (E & F) in the particular case when F is empty *) Ltac get_env := match goal with | |- wft ?E _ => E | |- typing ?E _ _ => E end. Tactic Notation "apply_empty_bis" tactic(get_env) constr(lemma) := let E := get_env in rewrite <- (concat_empty_r E); eapply lemma; try rewrite concat_empty_r. Tactic Notation "apply_empty" constr(F) := apply_empty_bis (get_env) F. Tactic Notation "apply_empty" "*" constr(F) := apply_empty F; autos*. (** Tactic to undo when Coq does too much simplification *) Ltac unsimpl_map_bind := match goal with |- context [ ?B (subst_tt ?Z ?b ?P ?U) ] => unsimpl ((subst_tb Z b P) (B U)) end. Tactic Notation "unsimpl_map_bind" "*" := unsimpl_map_bind; autos*. (* ********************************************************************** *) (** * Properties of Set *) (* ********************************************************************** *) Lemma notin_union_inv: forall x E F, x \notin (E \u F) -> x \notin E /\ x \notin F. Proof. intros. autos. Qed. Lemma union_empty_inv: forall (A:Type) (a b: fset A), a \u b = \{} -> a = \{} /\ b = \{}. Proof. intros. split. apply fset_extens. rewrite <- H. apply subset_union_weak_l. apply subset_empty_l. apply fset_extens. rewrite <- H. apply subset_union_weak_r. apply subset_empty_l. Qed. Lemma subset_trans: forall (T: Type) (a b c: fset T), a \c b -> b \c c -> a \c c. Proof. unfolds subset. autos. Qed. Lemma subset_strengthen: forall (T: Type) (a b: fset T) (x: T), a \c (b \u \{x}) -> x \notin a -> a \c b. Proof. unfolds subset. intros. forwards K: (H x0 H1). rewrite in_union in K. destruct* K. rewrite in_singleton in H2. subst. tryfalse. Qed. Lemma subset_union : forall (T: Type) (a b c: fset T), a \u b \c c -> a \c c /\ b \c c. Proof. intros. unfolds subset. split; intros x; specializes H x; rewrite in_union in H; auto. Qed. (* ********************************************************************** *) (** * Properties of Substitutions *) (* ********************************************************************** *) (** ** Properties of type substitution in type *) (** Substitution on indices is identity on well-formed terms. *) Lemma open_tt_rec_type_core : forall T j V U i, i <> j -> (open_tt_rec j V T) = open_tt_rec i U (open_tt_rec j V T) -> T = open_tt_rec i U T. Proof. induction T; introv Neq H; simpl in *; inversion H; f_equal*. case_nat*. case_nat*. Qed. Lemma open_tt_rec_type : forall T U, type T -> forall k, T = open_tt_rec k U T. Proof. induction 1; intros; simpl; f_equal*. unfolds open_tt. pick_fresh X. apply* (@open_tt_rec_type_core T2 0 (typ_fvar b X)). Qed. (** Substitution for a fresh name is identity. *) Lemma subst_tt_fresh : forall Z U T b, Z \notin fv_tt T -> subst_tt Z b U T = T. Proof. induction T; simpl; intros; f_equal*. case_if*. destruct H0. substs. false*. apply H. apply in_singleton_self. Qed. (** Substitution distributes on the open operation. *) Lemma subst_tt_open_tt_rec : forall T1 T2 X P b n, type P -> subst_tt X b P (open_tt_rec n T2 T1) = open_tt_rec n (subst_tt X b P T2) (subst_tt X b P T1). Proof. introv WP. generalize n. induction T1; intros k; simpls; f_equal*. case_nat*. case_if*. rewrite* <- open_tt_rec_type. Qed. Lemma subst_tt_open_tt : forall T1 T2 X P b, type P -> subst_tt X b P (open_tt T1 T2) = open_tt (subst_tt X b P T1) (subst_tt X b P T2). Proof. unfold open_tt. autos* subst_tt_open_tt_rec. Qed. 
(** Substitution and open_var for distinct names commute. *) Lemma subst_tt_open_tt_var : forall X Y U T b1 b2, Y <> X -> type U -> (subst_tt X b1 U T) open_tt_var b2 \with Y = subst_tt X b1 U (T open_tt_var b2 \with Y). Proof. introv Neq Wu. rewrite* subst_tt_open_tt. simpl. case_if*. Qed. (** Opening up a body t with a type u is the same as opening up the abstraction with a fresh name x and then substituting u for x. *) Lemma subst_tt_intro : forall X T2 U b, X \notin fv_tt T2 -> type U -> open_tt T2 U = subst_tt X b U (T2 open_tt_var b \with X). Proof. introv Fr Wu. rewrite* subst_tt_open_tt. rewrite* subst_tt_fresh. simpl. case_if*. Qed. (* ********************************************************************** *) (** ** Properties of type substitution in terms *) Lemma open_te_rec_term_core : forall e j u i P , open_ee_rec j u e = open_te_rec i P (open_ee_rec j u e) -> e = open_te_rec i P e. Proof. induction e; intros; simpl in *; inversion H; f_equal*; f_equal*. Qed. Lemma open_te_rec_type_core : forall e j Q i P, i <> j -> open_te_rec j Q e = open_te_rec i P (open_te_rec j Q e) -> e = open_te_rec i P e. Proof. induction e; intros; simpl in *; inversion H0; f_equal*; match goal with H: ?i <> ?j |- ?t = open_tt_rec ?i _ ?t => apply* (@open_tt_rec_type_core t j) end. Qed. Lemma open_te_rec_term : forall e U, term e -> forall k, e = open_te_rec k U e. Proof. intros e U WF. induction WF; intros; simpl; f_equal*; try solve [ apply* open_tt_rec_type ]. unfolds open_ee. pick_fresh x. apply* (@open_te_rec_term_core e1 0 (trm_fvar x)). unfolds open_te. pick_fresh X. apply* (@open_te_rec_type_core e1 0 (typ_fvar b X)). Qed. (** Substitution for a fresh name is identity. *) Lemma subst_te_fresh : forall X U e b, X \notin fv_te e -> subst_te X b U e = e. Proof. induction e; simpl; intros; f_equal*; autos* subst_tt_fresh. Qed. (** Substitution distributes on the open operation. *) Lemma subst_te_open_te : forall e T X U b, type U -> subst_te X b U (open_te e T) = open_te (subst_te X b U e) (subst_tt X b U T). Proof. intros. unfold open_te. generalize 0. induction e; intros; simpls; f_equal*; autos* subst_tt_open_tt_rec. Qed. (** Substitution and open_var for distinct names commute. *) Lemma subst_te_open_te_var : forall X Y U e b1 b2, Y <> X -> type U -> (subst_te X b1 U e) open_te_var b2 \with Y = subst_te X b1 U (e open_te_var b2 \with Y). Proof. introv Neq Wu. rewrite* subst_te_open_te. simpl. case_if*. Qed. (** Opening up a body t with a type u is the same as opening up the abstraction with a fresh name x and then substituting u for x. *) Lemma subst_te_intro : forall X U e b, X \notin fv_te e -> type U -> open_te e U = subst_te X b U (e open_te_var b \with X). Proof. introv Fr Wu. rewrite* subst_te_open_te. rewrite* subst_te_fresh. simpl. case_if*. Qed. (* ********************************************************************** *) (** ** Properties of term substitution in terms *) Lemma open_ee_rec_term_core : forall e j v u i, i <> j -> open_ee_rec j v e = open_ee_rec i u (open_ee_rec j v e) -> e = open_ee_rec i u e. Proof. induction e; introv Neq H; simpl in *; inversion H; f_equal*. case_nat*. case_nat*. Qed. Lemma open_ee_rec_type_core : forall e j V u i, open_te_rec j V e = open_ee_rec i u (open_te_rec j V e) -> e = open_ee_rec i u e. Proof. induction e; introv H; simpls; inversion H; f_equal*. Qed. Lemma open_ee_rec_term : forall u e, term e -> forall k, e = open_ee_rec k u e. Proof. induction 1; intros; simpl; f_equal*. unfolds open_ee. pick_fresh x. 
apply* (@open_ee_rec_term_core e1 0 (trm_fvar x)). unfolds open_te. pick_fresh X. apply* (@open_ee_rec_type_core e1 0 (typ_fvar b X)). Qed. (** Substitution for a fresh name is identity. *) Lemma subst_ee_fresh : forall x u e, x \notin fv_ee e -> subst_ee x u e = e. Proof. induction e; simpl; intros; f_equal*. case_var*. Qed. (** Substitution distributes on the open operation. *) Lemma subst_ee_open_ee : forall t1 t2 u x, term u -> subst_ee x u (open_ee t1 t2) = open_ee (subst_ee x u t1) (subst_ee x u t2). Proof. intros. unfold open_ee. generalize 0. induction t1; intros; simpls; f_equal*. case_nat*. case_var*. rewrite* <- open_ee_rec_term. Qed. (** Substitution and open_var for distinct names commute. *) Lemma subst_ee_open_ee_var : forall x y u e, y <> x -> term u -> (subst_ee x u e) open_ee_var y = subst_ee x u (e open_ee_var y). Proof. introv Neq Wu. rewrite* subst_ee_open_ee. simpl. case_var*. Qed. (** Opening up a body t with a type u is the same as opening up the abstraction with a fresh name x and then substituting u for x. *) Lemma subst_ee_intro : forall x u e, x \notin fv_ee e -> term u -> open_ee e u = subst_ee x u (e open_ee_var x). Proof. introv Fr Wu. rewrite* subst_ee_open_ee. rewrite* subst_ee_fresh. simpl. case_var*. Qed. (** Interactions between type substitutions in terms and opening with term variables in terms. *) Lemma subst_te_open_ee_var : forall Z P x e b, (subst_te Z b P e) open_ee_var x = subst_te Z b P (e open_ee_var x). Proof. introv. unfold open_ee. generalize 0. induction e; intros; simpl; f_equal*. case_nat*. Qed. (** Interactions between term substitutions in terms and opening with type variables in terms. *) Lemma subst_ee_open_te_var : forall z u e X b, term u -> (subst_ee z u e) open_te_var b \with X = subst_ee z u (e open_te_var b \with X). Proof. introv. unfold open_te. generalize 0. induction e; intros; simpl; f_equal*. case_var*. symmetry. autos* open_te_rec_term. Qed. (** Substitutions preserve local closedure. *) Lemma subst_tt_type : forall T Z P b, type T -> type P -> type (subst_tt Z b P T). Proof. induction 1; intros; simpl; auto. case_if*. apply_fresh* type_all as X. rewrite* subst_tt_open_tt_var. Qed. Lemma typ_all_open_tt_type: forall T U b, type (typ_all b T) -> type U -> type (open_tt T U). Proof. intros. inversions H. pick_fresh X. forwards~ : H2 X. rewrite* (@subst_tt_intro X T U b). apply* subst_tt_type. Qed. Lemma subst_te_term : forall e Z P b, term e -> type P -> term (subst_te Z b P e). Proof. lets: subst_tt_type. induction 1; intros; simpl; auto. apply_fresh* term_abs as x. rewrite* subst_te_open_ee_var. apply_fresh* term_tabs as x. rewrite* subst_te_open_te_var. Qed. Lemma subst_ee_term : forall e1 Z e2, term e1 -> term e2 -> term (subst_ee Z e2 e1). Proof. induction 1; intros; simpl; auto. case_var*. apply_fresh* term_abs as y. rewrite* subst_ee_open_ee_var. apply_fresh* term_tabs as Y. rewrite* subst_ee_open_te_var. Qed. Hint Resolve subst_tt_type typ_all_open_tt_type subst_te_term subst_ee_term. (* ********************************************************************** *) (** * Properties of well-formedness of a type in an environment *) (** If a type is well-formed in an environment then it is locally closeded. *) Lemma wft_type : forall E T, wft E T -> type T. Proof. induction 1; eauto. Qed. (** Through weakening *) Lemma wft_weaken : forall G T E F, wft (E & G) T -> ok (E & F & G) -> wft (E & F & G) T. Proof. intros. gen_eq K: (E & G). gen E F G. induction H; intros; subst; eauto. (* case: var *) apply wft_var. 
apply* binds_weaken. (* case arrow *) (* case: all *) apply_fresh* wft_all as Y. apply_ih_bind* H0. Qed. (** Through strengthening *) Lemma wft_strengthen : forall E F x U T, wft (E & x ~: U & F) T -> wft (E & F) T. Proof. intros. gen_eq G: (E & x ~: U & F). gen F. induction H; intros F EQ; subst; auto. apply* wft_var. destruct (binds_concat_inv H) as [?|[? ?]]. apply~ binds_concat_right. destruct (binds_push_inv H1) as [[? ?]|[? ?]]. subst. false. apply~ binds_concat_left. (* todo: binds_cases tactic *) apply_fresh* wft_all as Y. apply_ih_bind* H0. Qed. (** Through type substitution *) Lemma wft_subst_tb : forall F E Z P T b, wft (E & (Z \at b) & F) T -> wft E P -> ok (E & map (subst_tb Z b P) F) -> wft (E & map (subst_tb Z b P) F) (subst_tt Z b P T). Proof. introv WT WP. gen_eq G: (E & (Z \at b) & F). gen F. induction WT; intros F EQ Ok; subst; simpl subst_tt; auto. case_if*. apply_empty* wft_weaken. destruct (binds_concat_inv H) as [?|[? ?]]. apply wft_var. apply~ binds_concat_right. replace (bind_tvar b0) with ((subst_tb Z b P) (bind_tvar b0)) by reflexivity. apply~ binds_map. destruct (binds_push_inv H2) as [[? ?]|[? ?]]. subst. inversions H4. false~. applys wft_var. apply* binds_concat_left. apply_fresh* wft_all as Y. unsimpl ((subst_tb Z b P) (bind_tvar b0)). lets: wft_type. rewrite* subst_tt_open_tt_var. apply_ih_map_bind* H0. Qed. (** Through type reduction *) Lemma wft_open : forall E U T b, ok E -> wft E (typ_all b T) -> wft E U -> wft E (open_tt T U). Proof. introv Ok WA WU. inversions WA. pick_fresh X. autos* wft_type. rewrite* (@subst_tt_intro X T U b). lets K: (@wft_subst_tb empty). specializes_vars K. clean_empty K. apply* K. (* todo: apply empty ? *) Qed. (* ********************************************************************** *) (** * Relations between well-formed environment and types well-formed in environments *) (** If an environment is well-formed, then it does not contain duplicated keys. *) Lemma ok_from_okt : forall E, okt E -> ok E. Proof. induction 1; auto. Qed. Hint Extern 1 (ok _) => apply ok_from_okt. (** Extraction from a typing assumption in a well-formed environments *) Lemma wft_from_env_has_typ : forall x U E, okt E -> binds x (bind_typ U) E -> wft E U. Proof. induction E using env_ind; intros Ok B. false* binds_empty_inv. inversions Ok. false (empty_push_inv H0). destruct (eq_push_inv H) as [? [? ?]]. subst. clear H. destruct (binds_push_inv B) as [[? ?]|[? ?]]. subst. inversions H2. apply_empty* wft_weaken. destruct (eq_push_inv H) as [? [? ?]]. subst. clear H. destruct (binds_push_inv B) as [[? ?]|[? ?]]. subst. inversions H3. apply_empty* wft_weaken. apply_empty* wft_weaken. Qed. (** Extraction from a well-formed environment *) Lemma wft_from_okt_typ : forall x T E, okt (E & x ~: T) -> wft E T. Proof. intros. inversions* H. false (empty_push_inv H1). destruct (eq_push_inv H0) as [? [? ?]]. false. destruct (eq_push_inv H0) as [? [? ?]]. inversions~ H4. Qed. (** Automation *) Lemma wft_weaken_right : forall T E F, wft E T -> ok (E & F) -> wft (E & F) T. Proof. intros. apply_empty* wft_weaken. Qed. Hint Resolve wft_weaken_right. Hint Resolve wft_strengthen. Hint Resolve wft_from_okt_typ. Hint Immediate wft_from_env_has_typ. Hint Resolve wft_subst_tb. (* ********************************************************************** *) (** ** Properties of well-formedness of an environment *) (** Inversion lemma *) Lemma okt_push_inv : forall E X B, okt (E & X ~ B) -> exists T b, B = bind_tvar b \/ B = bind_typ T. Proof. introv O. inverts O. 
false* empty_push_inv. lets (?&?&?): (eq_push_inv H). subst. exists* (typ_fvar b X). lets (?&?&?): (eq_push_inv H). subst. exists* T false. Qed. Lemma okt_push_tvar_inv : forall E X b, okt (E & (X \at b)) -> okt E /\ X # E. Proof. introv O. inverts O. false* empty_push_inv. lets (?&M&?): (eq_push_inv H). subst. inverts~ M. lets (?&?&?): (eq_push_inv H). false. Qed. Lemma okt_push_typ_inv : forall E x T, okt (E & x ~: T) -> okt E /\ wft E T /\ x # E. Proof. introv O. inverts O. false* empty_push_inv. lets (?&?&?): (eq_push_inv H). false. lets (?&M&?): (eq_push_inv H). subst. inverts~ M. Qed. Lemma okt_push_typ_type : forall E x T, okt (E & x ~: T) -> type T. Proof. intros. applys wft_type. forwards*: okt_push_typ_inv. Qed. Hint Immediate okt_push_typ_type. (** Through strengthening *) Lemma okt_strengthen : forall x T (E F:env), okt (E & x ~: T & F) -> okt (E & F). Proof. introv O. induction F using env_ind. rewrite concat_empty_r in *. lets*: (okt_push_typ_inv O). rewrite concat_assoc in *. lets (U&b&[?|?]): okt_push_inv O; subst. applys~ okt_tvar. apply IHF. applys* okt_push_tvar_inv. apply ok_from_okt in O. lets (? & H): (ok_push_inv O). eauto. applys~ okt_typ. apply IHF. applys* okt_push_typ_inv. applys* wft_strengthen. apply ok_from_okt in O. lets (? & H): (ok_push_inv O). eauto. Qed. Lemma okt_weaken : forall E F, okt (E & F) -> okt E. Proof. induction F using env_ind; rew_env_concat; introv Okt. auto. lets(T & b & [H | H]): (okt_push_inv Okt); subst. apply IHF. lets*: okt_push_tvar_inv Okt. apply IHF. lets*: okt_push_typ_inv Okt. Qed. (** Through type substitution *) Lemma okt_subst_tb : forall Z P (E F:env) b, okt (E & (Z \at b) & F) -> wft E P -> okt (E & map (subst_tb Z b P) F). Proof. introv O W. induction F using env_ind. rewrite map_empty. rewrite concat_empty_r in *. lets*: (okt_push_tvar_inv O). rewrite map_push. rewrite concat_assoc in *. lets (U&b0&[?|?]): okt_push_inv O; subst. lets*: (okt_push_tvar_inv O). simpls*. lets*: (okt_push_typ_inv O). applys~ okt_typ; autos*. Qed. (** Automation *) Hint Resolve okt_subst_tb wft_weaken. Hint Immediate okt_strengthen. (* ********************************************************************** *) (** ** Environment is unchanged by substitution from a fresh name *) Lemma notin_fv_tt_open : forall Y X T b, X \notin fv_tt (T open_tt_var b \with Y) -> X \notin fv_tt T. Proof. introv. unfold open_tt. generalize 0. induction T; simpl; intros k Fr; auto. specializes IHT1 k. specializes IHT2 k. auto. apply* IHT. Qed. Lemma notin_fv_wf : forall E X T, wft E T -> X # E -> X \notin fv_tt T. Proof. induction 1; intros Fr; simpl; autos* notin_empty. rewrite notin_singleton. intro. subst. applys binds_fresh_inv H Fr. pick_fresh Y. apply* (@notin_fv_tt_open Y). Qed. Lemma map_subst_tb_id : forall G Z P b, okt G -> Z # G -> G = map (subst_tb Z b P) G. Proof. induction 1; intros Fr; autorewrite with rew_env_map; simpl. auto. rewrite <- IHokt. reflexivity. eauto. rewrite <- IHokt. rewrite* subst_tt_fresh. apply* notin_fv_wf. eauto. Qed. (* ********************************************************************** *) (** ** Regularity of relations *) (** The typing relation is restricted to well-formed objects. *) Lemma typing_regular : forall E e T, typing E e T -> okt E /\ term e. Proof. induction 1; auto. splits. pick_fresh y. specializes H1 y. destructs~ H1. apply_fresh* term_abs as y. pick_fresh y. specializes H1 y. destructs~ H1. forwards*: okt_push_typ_inv. specializes H1 y. destructs~ H1. splits*. splits*. apply term_tabs with L. intros. 
specializes H1 X. destructs~ H1. splits*. destructs IHtyping. apply* term_tapp. apply* wft_type. Qed. (** The value relation is restricted to well-formed objects. *) Lemma value_regular : forall t, value t -> term t. Proof. induction 1; autos*. Qed. (** The reduction relation is restricted to well-formed objects. *) Lemma red_regular : forall t t', red t t' -> term t /\ term t'. Proof. induction 1; split; autos* value_regular. inversions H. pick_fresh y. rewrite* (@subst_ee_intro y). inversions H. pick_fresh Y. rewrite* (@subst_te_intro Y V e1 b). Qed. (** Automation *) Hint Extern 1 (okt ?E) => match goal with | H: typing _ _ _ |- _ => apply (proj31 (typing_regular H)) end. Hint Extern 1 (wft ?E ?T) => match goal with | H: typing E _ T |- _ => apply (proj33 (typing_regular H)) end. Hint Extern 1 (type ?T) => let go E := apply (@wft_type E); auto in match goal with | H: typing ?E _ T |- _ => go E end. Hint Extern 1 (term ?e) => match goal with | H: typing _ ?e _ |- _ => apply (proj32 (typing_regular H)) | H: red ?e _ |- _ => apply (proj1 (red_regular H)) | H: red _ ?e |- _ => apply (proj2 (red_regular H)) end. (* ********************************************************************** *) (** * Properties of environment *) Lemma pure_closed: forall E x T, binds x (bind_typ T) (pure E) -> closed_typ T = true. Proof. intros. inductions E. simpls. rewrite <- empty_def in H. false* binds_empty_inv. destruct a. simpls. destruct b; rewrite cons_to_push in *. destruct (binds_push_inv H); destruct H0. false. apply* IHE. cases* (closed_typ t). destruct (binds_push_inv H); destruct H0. inversions* H1. apply* IHE. Qed. Lemma pure_dist: forall E F, pure (E & F) = pure E & pure F. Proof. rewrite concat_def. intros. gen E. induction F; intros E; autos. rewrite LibList.app_cons. destruct a. destruct b. simpl. rewrite LibList.app_cons. rewrite* <- IHF. simpl. destruct* (closed_typ t). rewrite LibList.app_cons. rewrite* <- IHF. Qed. Lemma pure_dom_subset : forall E, dom (pure E) \c dom E. Proof. intros. induction E. simpl. apply subset_refl. destruct a. destruct b. simpl. repeat(rewrite cons_to_push). repeat(rewrite dom_push). eapply subset_trans. eapply subset_union_2. eapply subset_refl. exact IHE. apply subset_refl. simpl. destruct* (closed_typ t). repeat(rewrite cons_to_push; rewrite dom_push). apply* subset_union_2. apply subset_refl. rewrite cons_to_push. rewrite dom_push. eapply subset_trans. exact IHE. apply subset_union_weak_r. Qed. Lemma pure_binds: forall E x v, ok E -> binds x v (pure E) -> binds x v E. Proof. intros. induction E. simpl in *. autos. destruct a. destruct b. simpl in *. rewrite cons_to_push in *. destruct (binds_push_inv H0). destruct H1. subst. apply binds_push_eq. destruct H1. apply* binds_push_neq. simpl in *. rewrite cons_to_push in *. destruct (closed_typ t). destruct (binds_push_inv H0). destruct H1. substs. apply* binds_push_eq. destruct H1. apply* binds_push_neq. rewrite <- concat_empty_r. apply binds_weaken; rewrite* concat_empty_r. Qed. Lemma pure_binds_reverse: forall E x b, binds x (bind_tvar b) E -> binds x (bind_tvar b) (pure E). Proof. intros. induction E. simpl in *. autos. destruct a. destruct b0. simpl in *. rewrite cons_to_push in *. destruct (binds_push_inv H). destruct H0. inversion H1. subst. apply binds_push_eq. destruct H0. apply* binds_push_neq. simpl in *. rewrite cons_to_push in *. destruct (binds_push_inv H). false H0. destruct H0. destruct* (closed_typ t). Qed. 
Lemma pure_binds_in: forall E x T, closed_typ T = true -> binds x (bind_typ T) E -> binds x (bind_typ T) (pure E). Proof. intros. induction E. (* nil *) rewrite <- empty_def in H0. destruct(binds_empty_inv H0). (* x::xs *) destruct a. destruct b. (* bind_tvar *) simpl. rewrite cons_to_push in *. destruct (binds_push_inv H0). destruct H1. inversion H2. destruct H1. apply* binds_push_neq. (* bind_typ *) simpls. rewrite cons_to_push in *. destruct (binds_push_inv H0). destruct H1. inversions H2. rewrite* H. destruct H1. destruct* (closed_typ t). Qed. Lemma pure_wft: forall E V, ok E -> wft (pure E) V -> wft E V. Proof. intros. remember (pure E) as G. gen E. induction H0; intros; substs; autos. apply wft_var. apply* pure_binds. apply_fresh* wft_all as Y. apply* H0. repeat(rewrite <- cons_to_push). autos. Qed. Lemma pure_wft_weaken: forall E F G V, ok (E & F & G) -> wft (E & (pure F) & G) V -> wft (E & F & G) V. Proof. intros. inductions H0; intros; subst; autos. apply wft_var. binds_cases H0. apply binds_concat_left; autos. apply* binds_concat_left_ok. apply binds_concat_left; autos. apply* binds_concat_right. apply* pure_binds. lets*: ok_concat_inv_r (ok_concat_inv_l H). apply binds_concat_right. auto. apply_fresh wft_all as Y. assert (HI: ok (E & F & (G & (Y \at b)))). rewrite concat_assoc. apply* ok_push. forwards~ HII: (H0 Y). apply HI. rewrite* concat_assoc. rewrite* <- concat_assoc. Qed. Lemma pure_wft_reverse: forall E V, wft E V -> wft (pure E) V. Proof. intros. inductions H; autos. apply wft_var. apply* pure_binds_reverse. apply_fresh* wft_all as Y. forwards~ HI: (H0 Y). rewrite pure_dist in HI. rewrite single_def in *. autos. Qed. Lemma pure_empty : pure empty = empty. Proof. rewrite empty_def. reflexivity. Qed. Lemma pure_single_true : forall x U, closed_typ U = true -> pure (x ~: U) = x ~: U. Proof. intros. replace (x ~: U) with (empty & x ~: U) by rewrite* concat_empty_l. rewrite <- cons_to_push. simpls. rewrite H. rewrite pure_empty. reflexivity. Qed. Lemma pure_single_tvar : forall X b, pure (X \at b) = X \at b. Proof. intros. replace (X \at b) with (empty & (X \at b)) by rewrite* concat_empty_l. rewrite <- cons_to_push. simpl. rewrite pure_empty. reflexivity. Qed. Lemma pure_single_false : forall x U, closed_typ U = false -> pure (x ~: U) = empty. Proof. intros. replace (x ~: U) with (empty & x ~: U) by rewrite* concat_empty_l. rewrite <- cons_to_push. simpls. rewrite H. rewrite pure_empty. reflexivity. Qed. Lemma pure_okt : forall E, okt E -> okt (pure E). Proof. intros. induction* E. destruct a. destruct b; simpl; rewrite cons_to_push in *. apply okt_tvar. apply IHE. lets*: okt_push_tvar_inv H. unfolds. lets(_ & HI): okt_push_tvar_inv H. autos* (pure_dom_subset E). destructs (okt_push_typ_inv H). destruct* (closed_typ t). apply okt_typ. apply* IHE. apply* pure_wft_reverse. lets: pure_dom_subset E. unfolds subset. unfolds notin. autos. Qed. Lemma pure_fresh: forall E Z, Z # E -> Z # pure(E). Proof. intros. intros H1. apply H. lets: pure_dom_subset E. autos*. Qed. Lemma closed_subst_tt: forall Z b P T, closed_typ P = b -> closed_typ (subst_tt Z b P T) = closed_typ T. Proof. intros. inductions T; autos*. simpl. cases_if*. destruct H0. substs*. Qed. Lemma pure_map : forall E Z P b, closed_typ P = b -> pure (map (subst_tb Z b P) E) = map (subst_tb Z b P) (pure E). Proof. intros. induction E. simpl. rewrite <- empty_def. rewrite map_empty. apply pure_empty. destruct a. destruct b0; simpl. repeat(rewrite cons_to_push, map_push). simpl. rewrite <- cons_to_push. simpl. 
rewrite cons_to_push. rewrite* IHE. repeat(rewrite cons_to_push). repeat(rewrite map_push). simpl. rewrite <- cons_to_push. cases_if*; simpl; rewrite* closed_subst_tt; rewrite* H0. rewrite cons_to_push, map_push. simpl. rewrite* IHE. Qed. Lemma pure_eq : forall E, pure (pure E) = pure E. Proof. intros. induction E; autos. destruct a. destruct b; autos. simpls. rewrite* IHE. simpls. remember (closed_typ t). symmetry in Heqb. destruct* b. simpls. rewrite* Heqb. rewrite* IHE. Qed. Hint Resolve closed_subst_tt closed_subst_tt. (* ********************************************************************** *) (** * Properties of Typing *) (* ********************************************************************** *) (** Weakening (5) *) Lemma typing_weakening : forall E F G e T, typing (E & G) e T -> okt (E & F & G) -> typing (E & F & G) e T. Proof. introv Typ. gen F. inductions Typ; introv Ok. apply* typing_var. apply* binds_weaken. apply_fresh* typing_stoic as x. repeat(rewrite pure_dist in *). rewrite <- concat_assoc. apply* H1. rewrite* concat_assoc. rewrite concat_assoc. repeat(rewrite <- pure_dist). apply okt_typ. apply* pure_okt. forwards~ K: (H0 x). lets(Hk & _): typing_regular K. lets: wft_from_okt_typ Hk. apply pure_wft_reverse. apply* wft_weaken. apply* pure_wft. rewrite* pure_dist. assert (Ha: x \notin dom E \u dom F \u dom G) by autos. intros HI. apply Ha. repeat(rewrite pure_dist in HI). repeat(rewrite dom_concat in HI). repeat(rewrite in_union in *). rewrite or_assoc in HI. branches HI. branch 1. lets*: pure_dom_subset E. branch 2. lets*: pure_dom_subset F. branch 3. lets*: pure_dom_subset G. apply* typing_app. apply_fresh* typing_tabs as X. repeat(rewrite pure_dist in *). rewrite <- concat_assoc. apply* H1. rewrite* concat_assoc. rewrite concat_assoc. repeat(rewrite <- pure_dist). apply okt_tvar. apply* pure_okt. forwards~ K: (H0 X). lets(Hk & _): typing_regular K. assert (Ha: X \notin dom E \u dom F \u dom G) by autos. intros HI. apply Ha. repeat(rewrite pure_dist in HI). repeat(rewrite dom_concat in HI). repeat(rewrite in_union in *). rewrite or_assoc in HI. branches HI. branch 1. lets*: pure_dom_subset E. branch 2. lets*: pure_dom_subset F. branch 3. lets*: pure_dom_subset G. apply* typing_tapp. Qed. Lemma typing_wft: forall E e T, typing E e T -> wft E T. Proof. intros. induction H. applys~ wft_from_env_has_typ x. apply wft_stoic. pick_fresh x. forwards~: (H0 x). lets(H3 & _): (typing_regular H2). lets*: wft_from_okt_typ H3. apply* pure_wft. pick_fresh x. forwards~: (H1 x). rewrite <- (@concat_empty_r bind (x ~: V) ) in H2. rewrite concat_assoc in H2. lets: wft_strengthen H2. rewrite concat_empty_r in H3. apply* pure_wft. inverts* IHtyping1. let L := gather_vars in (apply* (@wft_all L)). intros. forwards~: (H1 X). rewrite <- (@concat_empty_l bind E). apply pure_wft_weaken; rewrite* concat_empty_l. apply* wft_open. Qed. Lemma typing_weakening_env : forall E F G e T, typing (E & (pure F) & G) e T -> okt (E & F & G) -> typing (E & F & G) e T. Proof. intros. inductions H. apply* typing_var. binds_cases H0; autos. apply* binds_weaken. apply* binds_concat_left. apply binds_concat_right. apply* pure_binds. autos* ok_concat_inv_l ok_concat_inv_r ok_from_okt. apply_fresh typing_stoic as x. auto. repeat(rewrite pure_dist in *). rewrite pure_eq in *. apply_ih_bind* H1. rewrite* pure_eq. forwards~ : H0 x. apply* typing_app. apply_fresh typing_tabs as X; auto. repeat(rewrite pure_dist in *). rewrite pure_eq in *. apply_ih_bind* H1. rewrite* pure_eq. forwards~ : H0 X. 
apply typing_tapp with (b := closed_typ T); auto. apply* pure_wft_weaken. Qed. Lemma typing_strengthen_env: forall E u U, value u -> typing E u U -> closed_typ U = true -> typing (pure E) u U. Proof. intros. induction H0; simpls; inversion H1. apply typing_var. apply* pure_okt. apply* pure_binds_in. apply_fresh* typing_stoic as y. apply* pure_okt. rewrite* pure_eq. inversion H. apply_fresh* typing_tabs as y. apply* pure_okt. rewrite* pure_eq. inversion H. Qed. (************************************************************************ *) (** Preservation by Term Substitution (8) *) Lemma open_tt_fv_subset: forall k U T, fv_tt T \c fv_tt (open_tt_rec k U T). Proof. intros. gen k. induction T; intros; simpls; autos* subset_refl subset_empty_l subset_union_2. Qed. Lemma open_te_fv_subset: forall k U e, fv_te e \c fv_te (open_te_rec k U e). Proof. intros. gen k. induction e; intros; simpls; autos* subset_empty_l subset_union_2 open_tt_fv_subset. Qed. Lemma open_ee_fv_subset: forall k u e, fv_ee e \c fv_ee (open_ee_rec k u e). Proof. intros. gen k. induction e; intros; simpls; autos* subset_empty_l subset_refl subset_union_2. Qed. Lemma open_ee_te_fv_eq: forall k U e, fv_ee e = fv_ee (open_te_rec k U e). Proof. intros. gen k. induction e; intros; simpls; autos. rewrites (IHe1 k). rewrites (IHe2 k). reflexivity. Qed. Lemma open_te_ee_fv_subset: forall k u e, fv_te e \c fv_te (open_ee_rec k u e). Proof. intros. gen k. induction e; intros; simpls; autos* subset_empty_l subset_union_2 subset_refl. Qed. Lemma open_tt_tt_fv_subset: forall k T1 T2, fv_tt (open_tt_rec k T1 T2) \c fv_tt T1 \u fv_tt T2. Proof. intros. gen k. induction T2; intros; simpls; autos* union_comm subset_empty_l subset_union_weak_r. destruct (prop_degeneracy (k = n)). (* k = n*) apply is_True_inv in H2. rewrite* If_l. apply subset_union_weak_l. (* k != n*) apply is_False_inv in H2. rewrite* If_r. lets*: (subset_union_2 (IHT2_1 k) (IHT2_2 k)). rewrite union_assoc in H2. rewrite union_comm in H2. replace ((fv_tt T1 \u fv_tt T2_1) \u fv_tt T1) with (fv_tt T1 \u fv_tt T2_1) in H2 by (rewrite union_comm; rewrite <- union_assoc; rewrite* union_same). rewrite union_assoc. rewrite* union_comm. Qed. Lemma wft_fv_tt: forall E T, wft E T -> fv_tt T \c dom E. Proof. intros. induction H; simpls; autos* subset_empty_l. lets: get_some_inv (binds_get H). unfolds. intros. rewrite in_singleton in H2. rewrite* H2. replace (dom E) with (dom E \u dom E) by (autos* union_same). apply* subset_union_2. pick_fresh X. forwards~ HI: (H0 X). rewrite dom_concat in HI. rewrite dom_single in HI. assert (HII: fv_tt T \c dom E \u \{X}). apply subset_trans with (fv_tt (T open_tt_var b \with X)). autos* open_tt_fv_subset. autos. apply subset_strengthen with X; autos. Qed. Ltac solve_subsets := match goal with | [|- _ \u _ \c dom ?E ] => rewrite <- union_same; eapply subset_trans; apply* subset_union_2; apply pure_dom_subset | [|- _ \c dom ?E ] => eapply subset_trans; eauto; apply pure_dom_subset | [_: ?a \c ?E, _: ?b \c ?E |- ?a \u ?b \c ?E ] => rewrite <- union_same; apply* subset_union_2 end. Ltac splits_solve_subsets := splits*; try solve_subsets. Lemma typing_env_fv : forall E e T, typing E e T -> fv_te e \c dom E /\ fv_ee e \c dom E /\ fv_tt T \c dom E. Proof. intros. inductions H. (* var *) simpls. splits; try solve [autos* subset_empty_l wft_fv_tt]. forwards~ K: get_some_inv (binds_get H0). unfolds subset. intros. rewrite in_singleton in H1. subst*. (* abs closed *) simpl. pick_fresh x. forwards~ : H0 x. forwards~ : H1 x. destructs H3. 
rewrite dom_concat in *. rewrite dom_single in *. forwards~ : subset_strengthen (subset_trans (@open_te_ee_fv_subset 0 (trm_fvar x) e1) H3). forwards~ : subset_strengthen (subset_trans (@open_ee_fv_subset 0 (trm_fvar x) e1) H4). forwards~ : subset_strengthen H5. forwards~ : wft_fv_tt (wft_from_okt_typ (proj1 (typing_regular H2))). splits_solve_subsets. (* app *) destructs IHtyping1. simpls. destruct (subset_union H3). destructs IHtyping2. splits_solve_subsets. (* tabs closed *) simpl. pick_fresh X. forwards~ : H1 X. destructs H2. rewrite dom_concat in *. rewrite dom_single in *. forwards~ : subset_strengthen (subset_trans (@open_te_fv_subset 0 (typ_fvar b X) e1) H2). unfold open_te in H3. rewrite <- open_ee_te_fv_eq in H3. forwards~ : subset_strengthen H3. forwards~ : subset_strengthen (subset_trans (@open_tt_fv_subset 0 (typ_fvar b X) T1) H4). splits_solve_subsets. (* tapp *) destructs IHtyping. simpls. lets: wft_fv_tt H. splits_solve_subsets. eapply subset_trans. apply open_tt_tt_fv_subset. solve_subsets. Qed. Lemma typing_through_subst_ee : forall U E F x T e u, value u -> typing (E & x ~: U & F) e T -> typing E u U -> typing (E & F) (subst_ee x u e) T. Proof. introv Hv TypT TypU. inductions TypT; introv; simpl. case_var. binds_get H0. apply_empty* typing_weakening. binds_cases H0; apply* typing_var. apply_fresh* typing_stoic as y. destruct (typing_regular TypU). rewrite* subst_ee_open_ee_var. (* if U is closed, then use IH; else x is free in e1; *) repeat(rewrite pure_dist in *). remember (closed_typ U) as b. destruct b. (* closed_typ U = true *) symmetry in Heqb. rewrite* pure_single_true in H1. intros. rewrite <- concat_assoc. apply H1 with U; autos. rewrite* concat_assoc. apply* typing_strengthen_env. (* closed_typ U = false *) symmetry in Heqb. rewrite* pure_single_false in H0. rewrite concat_empty_r in H0. lets: ok_middle_inv (ok_from_okt H). forwards~ HI: H0 y. rewrite* subst_ee_fresh. destructs (typing_env_fv HI). unfolds notin. intros HII. assert (HIII: x \in dom (pure E & pure F & y ~: V)) by unfolds* subset. repeat(rewrite dom_concat in HIII). repeat(rewrite in_union in HIII). rewrite dom_single in HIII. rewrite or_assoc in HIII. destruct H4. branches HIII. apply H4. lets*: pure_dom_subset E. apply H8. lets*: pure_dom_subset F. rewrite in_singleton in H9. substs. apply* Fry. repeat(rewrite in_union). autos* in_singleton_self. apply* typing_app. apply_fresh* typing_tabs as Y. destruct (typing_regular TypU). rewrite* subst_ee_open_te_var. (* if U is closed, then use IH; else x is free in e1; *) repeat(rewrite pure_dist in *). remember (closed_typ U) as b'. destruct b'. (* closed_typ U = true *) symmetry in Heqb'. rewrite* pure_single_true in H1. intros. rewrite <- concat_assoc. apply H1 with U; autos. rewrite* concat_assoc. apply* typing_strengthen_env. (* closed_typ U = false *) symmetry in Heqb'. rewrite* pure_single_false in H0. rewrite concat_empty_r in H0. lets: ok_middle_inv (ok_from_okt H). forwards~ HI: H0 Y. rewrite* subst_ee_fresh. destructs (typing_env_fv HI). unfolds notin. intros HII. assert (HIII: x \in dom (pure E & pure F & (Y \at b))) by unfolds* subset. repeat(rewrite dom_concat in HIII). repeat(rewrite in_union in HIII). rewrite dom_single in HIII. rewrite or_assoc in HIII. destruct H4. branches HIII. apply H4. lets*: pure_dom_subset E. apply H8. lets*: pure_dom_subset F. rewrite in_singleton in H9. substs. apply* FrY. repeat(rewrite in_union). autos* in_singleton_self. apply* typing_tapp. Qed. 
(************************************************************************ *) (** Preservation by Type Substitution (11) *) Lemma typing_through_subst_te : forall E F Z e T P b, closed_typ P = b -> typing (E & (Z \at b) & F) e T -> wft E P -> typing (E & map (subst_tb Z b P) F) (subst_te Z b P e) (subst_tt Z b P T). Proof. introv Hcap Typ PsubQ. inductions Typ; introv; simpls subst_tt; simpls subst_te. apply* typing_var. rewrite* (@map_subst_tb_id E Z P (closed_typ P)). binds_cases H0; unsimpl_map_bind*. eauto using okt_weaken. apply_fresh* typing_stoic as y. repeat(rewrite pure_dist in *). rewrite pure_single_tvar in *. rewrite* subst_te_open_ee_var. rewrite* pure_map. unsimpl (subst_tb Z (closed_typ P) P (bind_typ V)). rewrite <- concat_assoc. rewrite <- map_push. apply* H1. rewrite* concat_assoc. apply* pure_wft_reverse. apply* typing_app. apply_fresh* typing_tabs as Y. repeat(rewrite pure_dist in *). rewrite pure_single_tvar in *. rewrite* subst_te_open_te_var; eauto using wft_type. rewrite* subst_tt_open_tt_var; eauto using wft_type. rewrite* pure_map. unsimpl (subst_tb Z (closed_typ P) P (bind_tvar b0)). rewrite <- concat_assoc. rewrite <- map_push. apply* H1. rewrite* concat_assoc. apply* pure_wft_reverse. rewrite* subst_tt_open_tt; eauto using wft_type. apply* typing_tapp. rewrite* closed_subst_tt. Qed. (* ********************************************************************** *) (** * Preservation *) (* ********************************************************************** *) Lemma preservation_result : preservation. Proof. introv Typ. gen e'. induction Typ; introv Red; try solve [ inversion Red ]. (* case: app *) inversions Red; try solve [ apply* typing_app ]. inversions Typ1. pick_fresh x. forwards~ K: (H7 x). rewrite* (@subst_ee_intro x). apply_empty typing_through_subst_ee; substs*. rewrite <- (@concat_empty_l bind _). rewrite concat_assoc. apply typing_weakening_env. rewrite* concat_empty_l. rewrite concat_empty_l. apply* okt_typ. autos* typing_wft. lets*: typing_regular Typ2. (* case: tapp *) inversions Red. try solve [ apply* typing_tapp ]. inversions Typ. pick_fresh X. forwards~ : H8 X. rewrite* (@subst_te_intro X T e0 (closed_typ T)). rewrite* (@subst_tt_intro X T1 T (closed_typ T)). asserts_rewrite (E = E & map (subst_tb X (closed_typ T) T) empty). rewrite map_empty. rewrite~ concat_empty_r. apply* typing_through_subst_te. rewrite concat_empty_r. rewrite <- (@concat_empty_l bind E). apply typing_weakening_env. rewrite* concat_empty_l. rewrite concat_empty_l. apply* okt_tvar. Qed. (* ********************************************************************** *) (** * Progress *) (* ********************************************************************** *) (** Canonical Forms (14) *) Lemma canonical_form_abs : forall t U1 U2, value t -> typing empty t (typ_stoic U1 U2) -> exists V, exists e1, t = trm_abs V e1. Proof. introv Val Typ. gen_eq E: (@empty bind). gen_eq T: (typ_stoic U1 U2). gen U1 U2. induction Typ; introv EQT EQE; try solve [ inversion Val | inversion EQT | eauto ]. subst. false* binds_empty_inv. Qed. Lemma canonical_form_tabs : forall t U1 b, value t -> typing empty t (typ_all b U1) -> exists e1, t = trm_tabs b e1. Proof. introv Val Typ. gen_eq E: (@empty bind). gen_eq T: (typ_all b U1). gen U1. induction Typ; introv EQT EQE; try solve [ inversion Val | inversion EQT | eauto ]. subst. false* binds_empty_inv. inversions EQT. exists* e1. Qed. Lemma progress_result : progress. Proof. introv Typ. gen_eq E: (@empty bind). lets Typ': Typ. 
induction Typ; intros EQ; subst; autos. (* case: var *) (* false* binds_empty_inv. *) (* case: abs closed *) left*. apply value_abs. lets*: typing_regular Typ'. (* case: app *) right. destruct* IHTyp1 as [Val1 | [e1' Rede1']]. destruct* IHTyp2 as [Val2 | [e2' Rede2']]. destruct (canonical_form_abs Val1 Typ1) as [S [e3 EQ]]. subst. exists* (open_ee e3 e2). apply* red_abs. lets*: typing_regular Typ1. exists* (trm_app e1' e2). apply* red_app_1. lets*: typing_regular Typ2. (* case: tabs_closed *) left*. apply* value_tabs. lets*: typing_regular Typ'. (* case: tapp *) right. destruct~ IHTyp as [Val1 | [e1' Rede1']]. destruct (canonical_form_tabs Val1 Typ) as [e EQ]. subst. exists* (open_te e T). apply* red_tabs. lets*: typing_regular Typ. autos* wft_type. exists (trm_tapp e1' T). apply* red_tapp. autos* wft_type. Qed. (* ********************************************************************** *) (** * effect safety *) (* ********************************************************************** *) (** * Properties of Healthy Evnironment *) Lemma degree_typ_parent_zero: forall S T, degree_typ (typ_stoic S T) = 0 -> degree_typ S = 0 /\ degree_typ T = 0. Proof. intros. simpl in H. destruct (degree_typ S); destruct (degree_typ T); eauto. simpl in H. inversion H. Qed. Lemma degree_typ_eq_open_tt_rec: forall T U k, degree_typ U = 0 -> degree_typ T = degree_typ (open_tt_rec k U T). Proof. intros. inductions T; unfolds open_tt_rec; simpls; try reflexivity; eauto. unfolds open_tt. unfolds open_tt_rec. cases_if*. Qed. Lemma degree_typ_eq_open_tt: forall T U, degree_typ U = 0 -> degree_typ T = degree_typ (open_tt T U). Proof. intros. unfolds open_tt. apply* degree_typ_eq_open_tt_rec. Qed. Lemma degree_trm_parent_zero: forall t1 t2, degree_trm (trm_app t1 t2) = 0 -> degree_trm t1 = 0 /\ degree_trm t2 = 0. Proof. intros. simpl in H. destruct (degree_trm t1); destruct (degree_trm t2); eauto. simpl in H. inversion H. Qed. Lemma degree_trm_eq_open_te_rec: forall t U k, degree_trm t = degree_trm (open_te_rec k U t). Proof. intros. inductions t; simpls; try reflexivity; eauto. Qed. Lemma degree_trm_eq_open_te: forall t U, degree_trm t = degree_trm (open_te t U). Proof. intros. unfolds open_te. apply* degree_trm_eq_open_te_rec. Qed. Lemma degree_trm_eq_open_ee_rec: forall t u k, degree_trm u = 0 -> degree_trm t = degree_trm (open_ee_rec k u t). Proof. intros. inductions t; simpls; try reflexivity; eauto. cases_if*. Qed. Lemma degree_trm_eq_open_ee: forall t u, degree_trm u = 0 -> degree_trm t = degree_trm (open_ee t u). Proof. intros. unfolds open_ee. apply* degree_trm_eq_open_ee_rec. Qed. Scheme capsafe_mut := Induction for capsafe Sort Prop with caprod_mut := Induction for caprod Sort Prop. Lemma capsafe_regular: forall T, capsafe T -> type T. apply (capsafe_mut (fun T safeT => type T ) (fun T prodT => type T ) ); eauto. Qed. Lemma caprod_regular: forall T, caprod T -> type T. apply (caprod_mut (fun T safeT => type T ) (fun T prodT => type T ) ); eauto. Qed. Hint Constructors capsafe caprod. Hint Immediate capsafe_regular caprod_regular. Lemma capsafe_closed_typ: forall T, capsafe T -> closed_typ T = true. Proof. intros. inductions H; try reflexivity; try false; autos. Qed. Lemma capsafe_not_caprod_0 : forall T, capsafe T -> degree_typ T = 0 -> ~ caprod T. apply (capsafe_mut (fun T safeT => degree_typ T = 0 -> ~ caprod T ) (fun T prodT => degree_typ T = 0 -> ~ capsafe T ) ); intros; intros Hc; inversions Hc; eauto; try solve [simpls; false*]; repeat destruct* (degree_typ_parent_zero S T). Qed. 
Lemma capsafe_not_caprod_k : forall T k, degree_typ T <= k -> capsafe T -> ~ caprod T. Proof. intros. gen T. inductions k; intros. lets: Le.le_n_0_eq H. apply* capsafe_not_caprod_0. inductions T; try solve [simpls; apply* IHk; apply le_0_n]. inversions H0. intros Hc. inversions Hc. simpl in H. destruct (classic (degree_typ T1 = S k)). forwards~ : IHT1. rewrite* H0. lets: Max.max_lub_l H. forwards~ : IHk T1. autos* PeanoNat.Nat.le_neq PeanoNat.Nat.lt_succ_r. intros Hc. inversions Hc. destruct (classic (degree_typ T2 = S k)). forwards~ : IHT2. rewrite* H0. lets: Max.max_lub_r H. forwards~ : IHk T2. autos* PeanoNat.Nat.le_neq PeanoNat.Nat.lt_succ_r. inversions H0. intros Hc. simpl in H. lets: le_S_n H. inversions Hc. destruct* H6. forwards~ : IHk (open_tt T typ_base). rewrite* <- degree_typ_eq_open_tt. forwards~ : IHk (open_tt T (typ_stoic typ_base typ_eff)). rewrite* <- degree_typ_eq_open_tt. intros Hc. simpl in H. lets: le_S_n H. inversions Hc. forwards~ : IHk (open_tt T typ_eff). rewrite* <- degree_typ_eq_open_tt. Qed. Lemma capsafe_not_caprod : forall T, capsafe T -> ~ caprod T. Proof. intros T. apply* capsafe_not_caprod_k. Qed. Lemma capsafe_caprod_classic_0: forall T, type T -> degree_typ T = 0 -> capsafe T \/ caprod T. Proof. intros T Ht Hd. inductions T; auto. inversions Ht. cases* b. inversions Ht. destruct (degree_typ_parent_zero T1 T2 Hd). destruct* (IHT1 H1); destruct* (IHT2 H2). simpl in Hd. inversions Hd. Qed. Lemma capsafe_caprod_classic_k: forall T k, type T -> degree_typ T <= k -> capsafe T \/ caprod T. Proof. intros. gen T. inductions k; intros. lets: Le.le_n_0_eq H0. apply* capsafe_caprod_classic_0. inductions T; try solve [inversion H]; eauto. cases* b. inversions H. simpl in H0. forwards~ : IHT1. apply (Max.max_lub_l _ _ _ H0). forwards~ : IHT2. apply (Max.max_lub_r _ _ _ H0). destruct* H; destruct* H1. simpls. lets: le_S_n H0. cases b. forwards~ : IHk (open_tt T typ_base). apply* typ_all_open_tt_type. rewrite* <- degree_typ_eq_open_tt. forwards~ : IHk (open_tt T (typ_stoic typ_base typ_eff)). apply* typ_all_open_tt_type. rewrite* <- degree_typ_eq_open_tt. destruct* H2; destruct* H3. forwards~ : IHk (open_tt T typ_eff). apply* typ_all_open_tt_type. rewrite* <- degree_typ_eq_open_tt. destruct* H2. Qed. Lemma capsafe_caprod_classic: forall T, type T -> capsafe T \/ caprod T. Proof. intros. apply* capsafe_caprod_classic_k. Qed. Lemma capsafe_decidable: forall T, type T -> capsafe T \/ ~ capsafe T. Proof. intros. destruct (capsafe_caprod_classic H). left*. right. intros Hc. lets*: capsafe_not_caprod Hc. Qed. Lemma not_capsafe_caprod : forall T, type T -> ~capsafe T -> caprod T. Proof. intros. destruct* (capsafe_caprod_classic H). Qed. Lemma healthy_env_closed: forall E, healthy E -> pure E = E. Proof. intros. inductions H. rewrite empty_def. reflexivity. rewrite <- cons_to_push. simpls. lets: capsafe_closed_typ H. cases_if. rewrite* IHhealthy. rewrite <- cons_to_push. simpls. rewrite* IHhealthy. Qed. Lemma healthy_env_capsafe : forall E S x, healthy E -> binds x (bind_typ S) E -> capsafe S. Proof. introv H Hb. inductions H. false* binds_empty_inv. destruct (binds_push_inv Hb). destruct H1. inversions* H2. destruct H1. autos. destruct (binds_push_inv Hb). destruct H0. inversions* H1. destruct H0. autos. Qed. Lemma subst_tt_type_type_0: forall P Q T Z b, degree_typ T = 0 -> type P -> type Q -> type (subst_tt Z b P T) -> type (subst_tt Z b Q T). Proof. intros. inductions T; try solve [simpls; eauto]; simpl. cases_if*. inversions H2. lets*: degree_typ_parent_zero H. simpls. 
inversion H. Qed. Lemma subst_tt_type_type_k: forall P Q T Z k b, degree_typ T <= k -> type P -> type Q -> type (subst_tt Z b P T) -> type (subst_tt Z b Q T). Proof. intros. gen T. inductions k; intros. (* k = 0*) lets: Le.le_n_0_eq H. forwards~ : subst_tt_type_type_0 H0 H1 H2. (* K > 0 *) inductions T; simpls; eauto. cases_if*. inversions H2. forwards~ : IHT1 H5. apply (Max.max_lub_l _ _ _ H). forwards~ : IHT2 H6. apply (Max.max_lub_r _ _ _ H). lets: le_S_n H. inversions H2. apply_fresh type_all as X. forwards~ : H5 X. unfolds open_tt. replace (typ_fvar b0 X) with (subst_tt Z b Q (typ_fvar b0 X)) by (rewrite* subst_tt_fresh; simpls; autos). replace (typ_fvar b0 X) with (subst_tt Z b P (typ_fvar b0 X)) in H2 by (rewrite* subst_tt_fresh; simpls; autos). rewrite <- subst_tt_open_tt_rec in *; auto. apply* IHk. rewrite* <- degree_typ_eq_open_tt_rec. Qed. Lemma subst_tt_type_type: forall P Q T Z b, type P -> type Q -> type (subst_tt Z b P T) -> type (subst_tt Z b Q T). Proof. intros. remember (degree_typ T) as k. eapply subst_tt_type_type_k. symmetry in Heqk. apply* PeanoNat.Nat.eq_le_incl. exact H. auto. auto. Qed. Hint Resolve capsafe_closed_typ healthy_env_capsafe not_capsafe_caprod subst_tt_type_type. Definition same_cap T1 T2 := (capsafe T1 /\ capsafe T2) \/ (caprod T1 /\ caprod T2). Definition same_as_cap T1 T2 := (capsafe T1 -> capsafe T2) /\ (caprod T1 -> caprod T2). Lemma same_cap_regular: forall T1 T2, same_cap T1 T2 -> type T1 /\ type T2. Proof. intros. destruct H; destruct H; split; autos* capsafe_regular caprod_regular. Qed. Lemma same_cap_subst_tt_0: forall T Z P Q b, degree_typ T = 0 -> same_cap P Q -> same_as_cap (subst_tt Z b P T) (subst_tt Z b Q T). Proof. intros. inductions T; unfolds; autos. simpls; splits; cases_if*; unfolds same_cap; intros. destruct* H0. false* (capsafe_not_caprod H2). destruct* H0. destruct H0. false* (capsafe_not_caprod H0). forwards~ : IHT1 Z H0. lets*: degree_typ_parent_zero H. forwards~ : IHT2 Z H0. lets*: degree_typ_parent_zero H. unfolds same_as_cap. destruct H1. destruct H2. destruct (same_cap_regular H0). splits; intros. inversions H5; simpl. apply capsafe_eff_any; autos* subst_tt_type_type. apply capsafe_any_safe; autos* subst_tt_type_type. inversions H5. apply* caprod_safe_eff. simpls. inversions H. Qed. Lemma same_cap_subst_tt_k: forall T Z P Q k b, degree_typ T <= k -> same_cap P Q -> same_as_cap (subst_tt Z b P T) (subst_tt Z b Q T). Proof. intros. gen T. inductions k; intros. lets: Le.le_n_0_eq H. apply* same_cap_subst_tt_0. inductions T; simpls; unfolds; autos. splits; cases_if*; unfolds same_cap; intros. destruct* H0. false* (capsafe_not_caprod H2). destruct* H0. destruct H0. false* (capsafe_not_caprod H0). forwards~ : IHT1. apply (Max.max_lub_l _ _ _ H). forwards~ : IHT2. apply (Max.max_lub_r _ _ _ H). destruct H1. destruct H2. destruct (same_cap_regular H0). splits; intros. inversions H7. apply capsafe_eff_any; autos* subst_tt_type_type. apply capsafe_any_safe; autos* subst_tt_type_type. inversions H7. apply* caprod_safe_eff. split; intros. inversions H1; destruct (same_cap_regular H0). apply capsafe_all_true. unsimpl (subst_tt Z b Q (typ_all true T)). autos* subst_tt_type_type. lets: le_S_n H. replace typ_base with (subst_tt Z b Q typ_base) by reflexivity. replace typ_base with (subst_tt Z b P typ_base) in H5 by reflexivity. unfolds open_tt. rewrite <- subst_tt_open_tt_rec in *; auto. apply* IHk. rewrite* <- degree_typ_eq_open_tt_rec. lets: le_S_n H. 
replace (typ_stoic typ_base typ_eff) with (subst_tt Z b Q (typ_stoic typ_base typ_eff)) by reflexivity. replace (typ_stoic typ_base typ_eff) with (subst_tt Z b P (typ_stoic typ_base typ_eff)) in H6 by reflexivity. unfolds open_tt. rewrite <- subst_tt_open_tt_rec in *; auto. apply* IHk. rewrite* <- degree_typ_eq_open_tt_rec. apply capsafe_all_false. unsimpl (subst_tt Z b Q (typ_all false T)). autos* subst_tt_type_type. lets: le_S_n H. replace typ_eff with (subst_tt Z b Q typ_eff) by reflexivity. replace typ_eff with (subst_tt Z b P typ_eff) in H5 by reflexivity. unfolds open_tt. rewrite <- subst_tt_open_tt_rec in *; auto. apply* IHk. rewrite* <- degree_typ_eq_open_tt_rec. inversions H1. destruct H5; destruct (same_cap_regular H0); apply caprod_all_true; unsimpl (subst_tt Z b Q (typ_all true T)); autos* subst_tt_type_type. lets: le_S_n H. left. replace typ_base with (subst_tt Z b Q typ_base) by reflexivity. replace typ_base with (subst_tt Z b P typ_base) in H1 by reflexivity. unfolds open_tt. rewrite <- subst_tt_open_tt_rec in *; auto. apply* IHk. rewrite* <- degree_typ_eq_open_tt_rec. lets: le_S_n H. right. replace (typ_stoic typ_base typ_eff) with (subst_tt Z b Q (typ_stoic typ_base typ_eff)) by reflexivity. replace (typ_stoic typ_base typ_eff) with (subst_tt Z b P (typ_stoic typ_base typ_eff)) in H1 by reflexivity. unfolds open_tt. rewrite <- subst_tt_open_tt_rec in *; auto. apply* IHk. rewrite* <- degree_typ_eq_open_tt_rec. destruct (same_cap_regular H0). apply caprod_all_false. unsimpl (subst_tt Z b Q (typ_all false T)). autos* subst_tt_type_type. lets: le_S_n H. replace typ_eff with (subst_tt Z b Q typ_eff) by reflexivity. replace typ_eff with (subst_tt Z b P typ_eff) in H5 by reflexivity. unfolds open_tt. rewrite <- subst_tt_open_tt_rec in *; auto. apply* IHk. rewrite* <- degree_typ_eq_open_tt_rec. Qed. Lemma same_cap_subst_tt: forall T Z P Q b, same_cap P Q -> same_as_cap (subst_tt Z b P T) (subst_tt Z b Q T). Proof. intros. apply* same_cap_subst_tt_k. Qed. Lemma capsafe_subst_tt_caprod: forall T Z P Q b, caprod P -> caprod Q -> capsafe (subst_tt Z b P T) -> capsafe (subst_tt Z b Q T). Proof. intros. forwards~ : same_cap_subst_tt T Z P Q b. unfolds*. destruct* H2. Qed. Lemma capsafe_subst_tt_capsafe: forall T Z P Q b, capsafe P -> capsafe Q -> capsafe (subst_tt Z b P T) -> capsafe (subst_tt Z b Q T). Proof. intros. forwards~ : same_cap_subst_tt T Z P Q. unfolds*. destruct* H2. Qed. Lemma capsafe_all_open_tt: forall T U b, type U -> closed_typ U = b -> capsafe (typ_all b T) -> capsafe (open_tt T U). Proof. intros. inversions H1. destruct (capsafe_decidable H). pick_fresh X. rewrite* (@subst_tt_intro X T U true). eapply capsafe_subst_tt_capsafe. apply capsafe_base. auto. rewrite* <- subst_tt_intro. pick_fresh X. rewrite* (@subst_tt_intro X T U true). eapply capsafe_subst_tt_caprod. applys~ caprod_safe_eff typ_base typ_eff. auto. rewrite* <- subst_tt_intro. destruct (capsafe_decidable H). lets: capsafe_closed_typ H0. rewrite H1 in H2. inversions H2. pick_fresh X. rewrite* (@subst_tt_intro X T U false). eapply capsafe_subst_tt_caprod. apply caprod_eff. auto. rewrite* <- subst_tt_intro. Qed. Lemma healthy_env_term_capsafe_0: forall E t T, degree_trm t = 0 -> healthy E -> typing E t T -> capsafe T. Proof. intros. inductions H1; intros; autos. apply* healthy_env_capsafe. pick_fresh x. forwards~ : H2 x. assert (HI: type V) by destruct* (typing_regular H4). destruct (capsafe_decidable HI). simpls. apply* capsafe_any_safe. apply* (H3 x). rewrite* <- degree_trm_eq_open_ee. 
rewrite* (healthy_env_closed H0). apply* healthy_typ. lets*: not_capsafe_caprod H5. apply* capsafe_eff_any. autos* wft_type typing_wft. forwards~ : IHtyping1. lets*: degree_trm_parent_zero H. forwards~ : IHtyping2. lets*: degree_trm_parent_zero H. inversions* H1. lets*: capsafe_not_caprod T1. simpl in H. inversions H. simpl in H. forwards~ : IHtyping. apply* capsafe_all_open_tt. apply* wft_type. subst*. Qed. Lemma healthy_env_term_capsafe_k: forall E t T k, degree_trm t <= k -> healthy E -> typing E t T -> capsafe T. Proof. intros. gen t E T. inductions k; intros. lets: Le.le_n_0_eq H. apply* healthy_env_term_capsafe_0. inductions H1; intros; autos. apply* healthy_env_capsafe. pick_fresh x. forwards~ : H2 x. assert (HI: type V) by destruct* (typing_regular H4). destruct (capsafe_decidable HI). simpls. apply* capsafe_any_safe. apply* (H3 x). rewrite* <- degree_trm_eq_open_ee. rewrite* (healthy_env_closed H0). apply* healthy_typ. lets*: not_capsafe_caprod H5. apply* capsafe_eff_any. autos* wft_type typing_wft. simpls. forwards~ : IHtyping1. apply (Max.max_lub_l _ _ _ H). forwards~ : IHtyping2. apply (Max.max_lub_r _ _ _ H). inversions* H1. lets*: capsafe_not_caprod T1. simpl in H. lets: le_S_n H. pick_fresh X. forwards~ : H2 X. rewrite (healthy_env_closed H0) in *. assert (typing E (trm_tabs b e1) (typ_all b T1)). apply* (@typing_tabs L). rewrite* (healthy_env_closed H0). cases b. assert (HI: typing E (open_te e1 typ_base) (open_tt T1 typ_base)). rewrite <- (@concat_empty_r bind E). rewrite* (@subst_te_intro X typ_base e1 true). rewrite* (@subst_tt_intro X T1 typ_base true). replace empty with (map (subst_tb X true typ_base) empty) by rewrite* map_empty. apply* typing_through_subst_te. rewrite* concat_empty_r. assert (HII: typing E (open_te e1 (typ_stoic typ_base typ_eff)) (open_tt T1 (typ_stoic typ_base typ_eff))). rewrite <- (@concat_empty_r bind E). rewrite* (@subst_te_intro X (typ_stoic typ_base typ_eff) e1 true). rewrite* (@subst_tt_intro X T1 (typ_stoic typ_base typ_eff) true). replace empty with (map (subst_tb X true (typ_stoic typ_base typ_eff)) empty) by rewrite* map_empty. apply* typing_through_subst_te. rewrite* concat_empty_r. forwards~ : IHk HI. rewrite* <- degree_trm_eq_open_te. forwards~ : IHk HII. rewrite* <- degree_trm_eq_open_te. apply* capsafe_all_true. autos* wft_type typing_wft. assert (HI: typing E (open_te e1 typ_eff) (open_tt T1 typ_eff)). rewrite <- (@concat_empty_r bind E). rewrite* (@subst_te_intro X typ_eff e1 false). rewrite* (@subst_tt_intro X T1 typ_eff false). replace empty with (map (subst_tb X false typ_eff) empty) by rewrite* map_empty. apply* typing_through_subst_te. rewrite* concat_empty_r. forwards~ : IHk HI. rewrite* <- degree_trm_eq_open_te. apply* capsafe_all_false. autos* wft_type typing_wft. simpl in H. forwards~ : IHtyping. apply* capsafe_all_open_tt. apply* wft_type. subst*. Qed. Lemma healthy_env_term_capsafe: forall E t T, healthy E -> typing E t T -> capsafe T. Proof. intros. apply* healthy_env_term_capsafe_k. Qed. Lemma effect_safety_result : effect_safety. Proof. intros E H He. destruct He. lets*: healthy_env_term_capsafe H0. inversions H1. Qed. (* This proof ensures that all inhabitable types are capsafe, thus justifies the definition of capsafe/caprod. This theorem assumes that all inhabitable types in the system can be inhabited by a value in the environment {x:B, y:E}. Note that variables are values in the system, thus B and E are inhabitable. If the term t is not a value, it should be able to take a step and preserves the type. 
*) Theorem primitive_capsafe: forall E x T, primitive E -> binds x (bind_typ T) E -> capsafe T \/ closed_typ T = false. Proof. introv Prim Bd. inductions Prim. destruct (binds_push_inv Bd) as [Inv | Inv]; destruct Inv. inversions H0. auto. destructs (binds_single_inv H0). inversions H2. auto. binds_cases Bd. auto. binds_cases Bd. auto. inversions EQ. cases* b. Qed. Theorem primitive_pure_healthy: forall E, primitive E -> healthy (pure E). Proof. introv Prim. inductions Prim. rewrite pure_dist, pure_single_true, pure_single_false, concat_empty_r; auto. rewrite <- concat_empty_l. apply* healthy_typ. apply healthy_empty. rewrite ?pure_dist, pure_single_tvar; auto. apply* healthy_tvar. rewrite ?pure_dist. cases* b. rewrite pure_single_true; auto. apply* healthy_typ. rewrite pure_single_false; auto. rewrite* concat_empty_r. Qed. Theorem inhabitable_capsafe: forall E t T, primitive E -> typing E t T -> value t -> capsafe T \/ closed_typ T = false. Proof. introv Prim Typ Val. inductions Typ; auto. apply* primitive_capsafe. pick_fresh z. forwards~ IH: H0 z. lets (Ok&_): (typing_regular IH). lets (_&Wf&_): (okt_push_typ_inv Ok). lets TypV: wft_type Wf. lets TypT1: wft_type (typing_wft IH). destruct (capsafe_decidable TypV) as [Case | Case]. (* capsafe V -> healthy E -> capsafe T1 *) forwards~ Hcap : healthy_env_term_capsafe IH. apply* healthy_typ. apply* primitive_pure_healthy. (* caprod V -> capsafe V -> T1 *) forwards~ Vcap : not_capsafe_caprod Case. inversion Val. pick_fresh X. forwards~ IH: H0 X. assert (Typ: typing E (trm_tabs b e1) (typ_all b T1)) by apply* typing_tabs. left. cases b. apply capsafe_all_true. apply (wft_type (typing_wft Typ)). rewrite <- concat_empty_r in IH at 1. forwards~ Typ1: typing_through_subst_te typ_base IH. rewrite map_empty, concat_empty_r in Typ1. forwards~ Safe1 : healthy_env_term_capsafe Typ1. rewrite <- concat_empty_l. rewrite concat_empty_l. apply* primitive_pure_healthy. rewrite* (@subst_tt_intro X T1 typ_base true). rewrite <- concat_empty_r in IH at 1. forwards~ Typ1: typing_through_subst_te (typ_stoic typ_base typ_eff) IH. rewrite map_empty, concat_empty_r in Typ1. forwards~ Safe1 : healthy_env_term_capsafe Typ1. rewrite <- concat_empty_l. rewrite concat_empty_l. apply* primitive_pure_healthy. rewrite* (@subst_tt_intro X T1 (typ_stoic typ_base typ_eff) true). apply capsafe_all_false. apply (wft_type (typing_wft Typ)). rewrite <- concat_empty_r in IH at 1. forwards~ Typ1: typing_through_subst_te typ_eff IH. rewrite map_empty, concat_empty_r in Typ1. forwards~ Safe1 : healthy_env_term_capsafe Typ1. rewrite <- concat_empty_l. rewrite concat_empty_l. apply* primitive_pure_healthy. rewrite* (@subst_tt_intro X T1 typ_eff false). inversion Val. Qed. Theorem inhabitable_pure_healthy: inhabitable_pure_healthy_statement. Proof. introv In Pure. inductions In. apply healthy_empty. apply* healthy_tvar. apply IHIn. rewrite <- ?cons_to_push in Pure. simpls. inversion Pure. rewrite* H0. assert (Closed: closed_typ T = true). applys~ pure_closed (E & z ~: T) z. rewrite Pure. apply binds_tail. forwards~ IH: inhabitable_capsafe H1. destruct IH. rewrite pure_dist, pure_single_true in Pure; auto. rewrite <- ?cons_to_push in Pure. inversion Pure. rewrite H4. apply* healthy_typ. substs. false*. Qed.
let d = SparseCat([:a, :b, :d], [0.4, 0.5, 0.1]) c = collect(weighted_iterator(d)) @test c == [:a=>0.4, :b=>0.5, :d=>0.1] @test pdf(d, :c) == 0.0 @test pdf(d, :a) == 0.4 @test mode(d) == :b @test sampletype(d) == Symbol @test sampletype(typeof(d)) == Symbol @inferred rand(Random.GLOBAL_RNG, d) dt = SparseCat((:a, :b, :d), (0.4, 0.5, 0.1)) c = collect(weighted_iterator(dt)) @test c == [:a=>0.4, :b=>0.5, :d=>0.1] @test pdf(dt, :c) == 0.0 @test pdf(dt, :a) == 0.4 @test mode(dt) == :b @test sampletype(dt) == Symbol @test sampletype(typeof(dt)) == Symbol @inferred rand(Random.GLOBAL_RNG, dt) rng = MersenneTwister(14) samples = Symbol[] N = 100_000 @time for i in 1:N push!(samples, rand(rng, d)) end @test isapprox(count(samples.==:a)/N, pdf(d,:a), atol=0.005) @test isapprox(count(samples.==:b)/N, pdf(d,:b), atol=0.005) @test isapprox(count(samples.==:c)/N, pdf(d,:c), atol=0.005) @test isapprox(count(samples.==:d)/N, pdf(d,:d), atol=0.005) @test_throws ErrorException rand(Random.GLOBAL_RNG, SparseCat([1], [0.0])) end
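The Julia tests above encode a simple statistical argument: after drawing many samples from the sparse categorical distribution, each value's empirical frequency should approximate its stated probability, and values outside the support should never appear. The sketch below repeats that check in plain Python for illustration only; it does not use or depend on the POMDPs.jl SparseCat API, and its tolerance is chosen more loosely than the Julia atol, so it is a sanity check rather than a reproduction of the test.

import random
from collections import Counter

# Values and weights mirror SparseCat([:a, :b, :d], [0.4, 0.5, 0.1]) from the tests above.
values = ["a", "b", "d"]
weights = [0.4, 0.5, 0.1]

N = 100_000
rng = random.Random(14)                        # fixed seed, as in the Julia tests
samples = rng.choices(values, weights=weights, k=N)
counts = Counter(samples)

for v, w in zip(values, weights):
    freq = counts[v] / N
    # looser tolerance than the Julia atol=0.005, purely for robustness of this sketch
    assert abs(freq - w) < 0.01, (v, freq, w)

# A value outside the support never occurs, matching pdf(d, :c) == 0.0 above.
assert counts["c"] == 0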
(* Copyright (c) 2017, ETH Zurich All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. *) (*<*) theory MipsTLBEquivalence imports Main Set MipsTLB MipsTLBPageTable MipsTLBReplacementHandler MipsTLBLarge begin (*>*) (* ========================================================================= *) section "Equivalence to Large TLB" (* ========================================================================= *) text "Next we show that for all valid TLBs the TLB with replacement handler behaves as if its " lemma TLBEquivalence : assumes inrange: "vpn < MIPSPT_EntriesMax" and inrange2: "as < ASIDMax" and cap: "capacity (tlb mpt) > 0" and valid: "MipsTLBPT_valid mpt" shows "MipsTLBPT_translate mpt as vpn = MIPSTLB_translate (MipsTLBLarge_create (pte mpt)) as vpn" proof - from valid have ptvalid: "MIPSPT_valid (pte mpt)" by(simp add:MipsTLBPT_valid_def) from ptvalid inrange inrange2 have X0: " MIPSTLB_translate (MipsTLBLarge_create (pte mpt)) as vpn = (if (v ((entry (pte mpt)) vpn as)) then {(pfn ((entry (pte mpt)) vpn as))} else {})" by(simp add:MipsTLBLarge_translate_is) from valid inrange inrange2 cap have X1: "MipsTLBPT_translate mpt as vpn = (if (v ((entry (pte mpt)) vpn as)) then {(pfn ((entry (pte mpt)) vpn as))} else {})" by(simp add:MipsTLBPT_translate_is) from X0 X1 show ?thesis by(auto) qed (* ========================================================================= *) section "Equivalence in decoding net nodes" (* ========================================================================= *) text "First we define a few helper functions that convert virtual addresses into (asid, vpn, offset)" definition addr2vpn :: "addr \<Rightarrow> VPN" where "addr2vpn a = ((fst a) mod VASize) div 4096" definition addr2asid :: "addr \<Rightarrow> ASID" where "addr2asid a = (fst a) div VASize" definition pfn2addr :: "PFN \<Rightarrow> addr \<Rightarrow> addr" where "pfn2addr p va = (p * 4096 + (fst va) mod 4096, {})" text "A virtual address is valid, if it's representable by the available bits i.e.. 8-bit ASID and 40-bit Address" definition AddrValid :: "addr \<Rightarrow> bool" where "AddrValid a = ((fst a) < VASize * ASIDMax)" text "If the address is valid, then extracting the ASID and VPN is within the well defined ranges." 
lemma AddrValid_implies_inrange : "AddrValid a \<Longrightarrow> addr2vpn a < MIPSPT_EntriesMax" by(auto simp:addr2vpn_def AddrValid_def VASize_def ASIDMax_def MIPSPT_EntriesMax_def) lemma AddrValid_implies_inrange2 : "AddrValid a \<Longrightarrow> addr2asid a < ASIDMax" by(auto simp:addr2asid_def AddrValid_def VASize_def ASIDMax_def MIPSPT_EntriesMax_def) (* ------------------------------------------------------------------------- *) subsection "Lifting methods" (* ------------------------------------------------------------------------- *) text "We construct decoding net nodes for both, the large TLB and the replacement handler by using their translate functions" definition MipsTLBPT_to_node :: "nodeid \<Rightarrow> MipsTLBPT \<Rightarrow> node" where "MipsTLBPT_to_node nid mpt = \<lparr> accept = {}, translate = (\<lambda>a. (if AddrValid a then (\<Union>x\<in> (MipsTLBPT_translate mpt (addr2asid a) (addr2vpn a)). {(nid, pfn2addr x a)} ) else {} )) \<rparr>" definition MIPSLARGE_to_node :: "nodeid \<Rightarrow> MIPSTLB \<Rightarrow> node" where "MIPSLARGE_to_node nid t = \<lparr> accept = {}, translate = (\<lambda>a. (if AddrValid a then (\<Union>x\<in> (MIPSTLB_translate t (addr2asid a) (addr2vpn a)). {(nid, pfn2addr x a)} ) else {} )) \<rparr>" (* ------------------------------------------------------------------------- *) subsection "Equivalence Proof of lifted nodes" (* ------------------------------------------------------------------------- *) text "We first define a lemma that shows that if the address is valid, then the set of translated addresses are the same. " lemma translate_function_equivalent : assumes cap: "capacity (tlb mpt) > 0" and valid: "MipsTLBPT_valid mpt" and avalid: "AddrValid a" shows "(\<Union>x\<in>MipsTLBPT_translate mpt (addr2asid a) (addr2vpn a). {(nid, pfn2addr x a)}) = (\<Union>x\<in>MIPSTLB_translate (MipsTLBLarge_create (pte mpt)) (addr2asid a) (addr2vpn a). {(nid, pfn2addr x a)})" proof - from avalid have X0: "addr2asid a < ASIDMax" by(auto simp:AddrValid_implies_inrange2) from avalid have X1 : "addr2vpn a < MIPSPT_EntriesMax" by(auto simp:AddrValid_implies_inrange) from X0 X1 cap valid show ?thesis by(auto simp: TLBEquivalence) qed text "Next, we use the lemma above to proof that the two nodes will be the same." lemma assumes cap: "capacity (tlb mpt) > 0" and valid: "MipsTLBPT_valid mpt" shows "MipsTLBPT_to_node nid mpt = MIPSLARGE_to_node nid (MipsTLBLarge_create (pte mpt))" proof - have X0: "MipsTLBPT_to_node nid mpt = \<lparr> accept = {}, translate = \<lambda>a. if AddrValid a then \<Union>x\<in>MipsTLBPT_translate mpt (addr2asid a) (addr2vpn a). {(nid, pfn2addr x a)} else {}\<rparr>" by(simp add:MipsTLBPT_to_node_def) have X1: "MIPSLARGE_to_node nid (MipsTLBLarge_create (pte mpt)) = \<lparr> accept = {}, translate = \<lambda>a. if AddrValid a then \<Union>x\<in>MIPSTLB_translate (MipsTLBLarge_create (pte mpt)) (addr2asid a) (addr2vpn a). {(nid, pfn2addr x a)} else {}\<rparr>" by(simp add:MIPSLARGE_to_node_def) from cap valid have X2: "\<And>a. (if AddrValid a then \<Union>x\<in>MipsTLBPT_translate mpt (addr2asid a) (addr2vpn a). {(nid, pfn2addr x a)} else {}) = (if AddrValid a then \<Union>x\<in>MIPSTLB_translate (MipsTLBLarge_create (pte mpt)) (addr2asid a) (addr2vpn a). {(nid, pfn2addr x a)} else {})" by(auto simp:translate_function_equivalent) from X0 cap valid have X3: "\<lparr>accept = {}, translate = \<lambda>a. if AddrValid a then \<Union>x\<in>MipsTLBPT_translate mpt (addr2asid a) (addr2vpn a). 
{(nid, pfn2addr x a)} else {} \<rparr> = \<lparr>accept = {}, translate = \<lambda>a. if AddrValid a then \<Union>x\<in>MIPSTLB_translate (MipsTLBLarge_create (pte mpt)) (addr2asid a) (addr2vpn a). {(nid, pfn2addr x a)} else {}\<rparr>" by(simp only:X2) from X0 X1 X2 show ?thesis by(auto) qed end
State Before: M : Type u_2 N : Type ?u.29826 A : Type ?u.29829 inst✝¹ : MulOneClass M s : Set M inst✝ : AddZeroClass A t : Set A S : Submonoid M ι : Sort u_1 p : ι → Submonoid M ⊢ (⨆ (i : ι), p i) = closure (⋃ (i : ι), ↑(p i)) State After: no goals Tactic: simp_rw [Submonoid.closure_iUnion, Submonoid.closure_eq]
function inspect_bug3141 % WALLTIME 00:10:00 % MEM 2gb % DEPENDENCY ft_defacemesh ft_defacevolume %% anatomical mri mri = ft_read_mri(dccnpath('/home/common/matlab/fieldtrip/data/ftp/test/ctf/Subject01.mri')); cfg = []; defaced = ft_defacevolume(cfg, mri); cfg = []; ft_sourceplot(cfg, defaced); %% head shape headshape = ft_read_headshape(dccnpath('/home/common/matlab/fieldtrip/data/ftp/test/ctf/Subject01.shape')); cfg = []; defaced = ft_defacemesh(cfg, headshape); figure ft_plot_mesh(defaced); %% 3D grid source model % this MATLAB file contains the variable sourcemodel load(dccnpath('/home/common/matlab/fieldtrip/template/sourcemodel/standard_sourcemodel3d4mm.mat')); cfg = []; defaced = ft_defacemesh(cfg, sourcemodel); figure ft_plot_mesh(defaced.pos(defaced.inside,:)); %% cortical sheet source model sourcemodel = ft_read_headshape(dccnpath('/home/common/matlab/fieldtrip/template/sourcemodel/cortex_8196.surf.gii')); cfg = []; defaced = ft_defacemesh(cfg, sourcemodel); figure ft_plot_mesh(defaced); camlight lighting phong
/- Copyright (c) 2022 Eric Wieser. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Eric Wieser -/ import algebra.star.basic import algebra.ring.prod import algebra.module.prod /-! # `star` on product types We put a `has_star` structure on product types that operates elementwise. -/ universes u v w variables {R : Type u} {S : Type v} namespace prod instance [has_star R] [has_star S] : has_star (R × S) := { star := λ x, (star x.1, star x.2) } @[simp] lemma fst_star [has_star R] [has_star S] (x : R × S) : (star x).1 = star x.1 := rfl @[simp] lemma snd_star [has_star R] [has_star S] (x : R × S) : (star x).2 = star x.2 := rfl lemma star_def [has_star R] [has_star S] (x : R × S) : star x = (star x.1, star x.2) := rfl instance [has_involutive_star R] [has_involutive_star S] : has_involutive_star (R × S) := { star_involutive := λ _, prod.ext (star_star _) (star_star _) } instance [semigroup R] [semigroup S] [star_semigroup R] [star_semigroup S] : star_semigroup (R × S) := { star_mul := λ _ _, prod.ext (star_mul _ _) (star_mul _ _) } instance [add_monoid R] [add_monoid S] [star_add_monoid R] [star_add_monoid S] : star_add_monoid (R × S) := { star_add := λ _ _, prod.ext (star_add _ _) (star_add _ _) } instance [non_unital_semiring R] [non_unital_semiring S] [star_ring R] [star_ring S] : star_ring (R × S) := { ..prod.star_add_monoid, ..(prod.star_semigroup : star_semigroup (R × S)) } instance {α : Type w} [has_smul α R] [has_smul α S] [has_star α] [has_star R] [has_star S] [star_module α R] [star_module α S] : star_module α (R × S) := { star_smul := λ r x, prod.ext (star_smul _ _) (star_smul _ _) } end prod @[simp] lemma units.embed_product_star [monoid R] [star_semigroup R] (u : Rˣ) : units.embed_product R (star u) = star (units.embed_product R u) := rfl
{-# OPTIONS --cubical --safe #-} module Analysis where open import Data.List using (List; []; _∷_; map) open import Relation.Binary.PropositionalEquality using (_≡_; refl) open import Harmony open import Music open import Note open import Pitch -- test of analysis accompF : List Pitch accompF = f 4 ∷ a 4 ∷ c 5 ∷ [] xx = pitchClassListToSet (map pitchToClass accompF) --yy = showPitchClassSet xx zz : xx ≡ IV-maj zz = refl
/- LoVe Exercise 1: Definitions and Lemma Statements -/ /- Replace the placeholders (e.g., `:= sorry`) with your solutions. -/ import .love01_definitions_and_lemma_statements_demo namespace LoVe /- Question 1: Fibonacci Numbers -/ /- 1.1. Define the function `fib` that computes the Fibonacci numbers. -/ def fib : ℕ → ℕ | 0 := 0 | 1 := 1 | (nat.succ (nat.succ n)) := fib n + fib (nat.succ n) -- (n + 2) and (n + 1) would also work /- 1.2. Check that your function works as expected. -/ #reduce fib 0 -- expected: 0 #reduce fib 1 -- expected: 1 #reduce fib 2 -- expected: 1 #reduce fib 3 -- expected: 2 #reduce fib 4 -- expected: 3 #reduce fib 5 -- expected: 5 #reduce fib 6 -- expected: 8 #reduce fib 7 -- expected: 13 #reduce fib 8 -- expected: 21 /- Question 2: Arithmetic Expressions -/ /- Consider the type `aexp` from the lecture. -/ #print aexp #check eval /- 2.1. Test that `eval` behaves as expected. Making sure to exercise each constructor at least once. You can use the following environment in your tests. What happens if you divide by zero? -/ def some_env : string → ℤ | "x" := 3 | "y" := 17 | _ := 201 #eval eval some_env (aexp.add (aexp.var "x") (aexp.var "y")) #eval eval some_env (aexp.sub (aexp.num 5) (aexp.var "y")) #eval eval some_env (aexp.mul (aexp.num 11) (aexp.var "z")) #eval eval some_env (aexp.div (aexp.num 2) (aexp.num 0)) /- 2.2. The following function simplifies arithmetic expressions involving addition. It simplifies `0 + e` and `e + 0` to `e`. Complete the definition so that it also simplifies expressions involving the other three binary operators. -/ def simplify : aexp → aexp | (aexp.add (aexp.num 0) e₂) := simplify e₂ | (aexp.add e₁ (aexp.num 0)) := simplify e₁ | (aexp.sub e₁ (aexp.num 0)) := simplify e₁ | (aexp.mul (aexp.num 0) e₂) := aexp.num 0 | (aexp.mul e₁ (aexp.num 0)) := aexp.num 0 | (aexp.mul (aexp.num 1) e₂) := simplify e₂ | (aexp.mul e₁ (aexp.num 1)) := simplify e₁ | (aexp.div (aexp.num 0) e₂) := aexp.num 0 | (aexp.div e₁ (aexp.num 0)) := aexp.num 0 | (aexp.div e₁ (aexp.num 1)) := simplify e₁ -- catch-all cases below | (aexp.num i) := aexp.num i | (aexp.var x) := aexp.var x | (aexp.add e₁ e₂) := aexp.add (simplify e₁) (simplify e₂) | (aexp.sub e₁ e₂) := aexp.sub (simplify e₁) (simplify e₂) | (aexp.mul e₁ e₂) := aexp.mul (simplify e₁) (simplify e₂) | (aexp.div e₁ e₂) := aexp.div (simplify e₁) (simplify e₂) /- 2.3. State the correctness lemma for `simplify`, namely that the simplified expression should have the same semantics, with respect to `eval`, as the original expression. -/ lemma simplify_correct (env : string → ℤ) (e : aexp) : eval env (simplify e) = eval env e := sorry /- Question 3: λ-Terms -/ /- We start by declaring three new opaque types. -/ constants α β γ : Type /- 3.1. Complete the following definitions, by replacing the `sorry` markers by terms of the expected type. Hint: You can use `_` as a placeholder while constructing a term. By hovering over `_`, you will see the current logical context. -/ def I : α → α := λa, a def K : α → β → α := λa b, a def C : (α → β → γ) → β → α → γ := λg b a, g a b def proj_1st : α → α → α := λx y, x -- please give a different answer than for `proj_1st` def proj_2nd : α → α → α := λx y, y def some_nonsense : (α → β → γ) → α → (α → γ) → β → γ := λg a f b, g a b /- 3.2. Show the typing derivation for your definition of `C` above. -/ /- Let Γ := g : α → β → γ, b : β, a : α. 
We have

    –––––––––––––––––– Var    –––––––––– Var
    Γ ⊢ g : α → β → γ          Γ ⊢ a : α
    –––––––––––––––––––––––––––––––––––– App    –––––––––– Var
    Γ ⊢ g a : β → γ                              Γ ⊢ b : β
    –––––––––––––––––––––––––––––––––––––––––––––––––––––– App
    Γ ⊢ g a b : γ
    –––––––––––––––––––––––––––––––––––––––––––––––––––––– Lam
    g : α → β → γ, b : β ⊢ (λa : α, g a b) : α → γ
    –––––––––––––––––––––––––––––––––––––––––––––––––––––– Lam
    g : α → β → γ ⊢ (λ(b : β) (a : α), g a b) : β → α → γ
    –––––––––––––––––––––––––––––––––––––––––––––––––––––– Lam
    ⊢ (λ(g : α → β → γ) (b : β) (a : α), g a b) : (α → β → γ) → β → α → γ
-/

end LoVe
\documentclass[11pt]{beamer} \usepackage{hyperref} \usepackage{color} \usepackage{amsmath} \usepackage{listings} \lstset{numbers=none,language=[ISO]C++,tabsize=4, frame=single, basicstyle=\small, showspaces=false,showstringspaces=false, showtabs=false, keywordstyle=\color{blue}\bfseries, commentstyle=\color{red}, } \usepackage{verbatim} \usepackage{fixltx2e} \usepackage{graphicx} \usepackage{longtable} \usepackage{float} \usepackage{wrapfig} \usepackage{soul} \usepackage{textcomp} \usepackage{marvosym} \usepackage{wasysym} \usepackage{latexsym} \usepackage{amssymb} \usepackage{hyperref} \tolerance=1000 \usepackage{minted} \providecommand{\alert}[1]{\textbf{#1}} \title{module2} \author{gar} \date{} \hypersetup{ pdfkeywords={}, pdfsubject={}, pdfcreator={Emacs Org-mode version 7.9.3f}} \begin{document} \maketitle \begin{frame} \frametitle{Outline} \setcounter{tocdepth}{3} \tableofcontents \end{frame} \section{Branching and Looping} \label{sec-1} \begin{frame}[fragile]\frametitle{Branching} \label{sec-1-1} \begin{itemize} \item When an algorithm makes a choice to do one of two or more things, it's called branching \item Different options are available to make choices \begin{itemize} \item \verb~if~ : to conditionally execute the statements in its block (the statements written between \verb~{ ... }~ after the \verb~if~ keyword) \item \verb~if-else~ : to make two-way decisions \item Cascaded \verb~if-else~ : for multi-way decision \item Nested \verb~if-else~ : branching within branching \item \verb~Switch~ : making multi-way decisions in another way \end{itemize} \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Two-way Selection: If} \label{sec-1-2} \begin{itemize} \item Required to make decisions \item E.g. To decide whether a person is senior citizen, we check the age \begin{minted}[]{C} if (age >= 60) printf("Person is a senior citizen"); \end{minted} \item In the example, we are printing the statement only when the expression is true \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Two-way Selection: If} \label{sec-1-3} \begin{itemize} \item Whenever we require more than one statement to be executed within the \verb~if~ block, we need the braces. Otherwise, we may leave out the braces \begin{minted}[]{C} if (age >= 60) { printf("Person is a senior citizen"); ticket_cost = 0.9*ticket_cost; /* Give a discount */ } \end{minted} \item If the braces were removed, everybody gets a discount and the ticket seller's profit would be reduced \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Two-way Selection: If-Else} \label{sec-1-4} \begin{itemize} \item The if-else statement tells what to do when an expression is true and what to do when it's false \item The syntax is \begin{minted}[]{C} if (expression) statement_1 else statement_2 \end{minted} \item E.g. 
To print whether a number is even or odd
\begin{minted}[]{C}
if (num%2 == 0)
  printf("number is even");
else
  printf("number is odd");
\end{minted}
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Multi-way decision: Else-If (Cascaded if-else)}
\label{sec-1-5}
\begin{itemize}
\item The construction of a multiway decision is written as
\begin{minted}[]{C}
if (expression1)
  statement1
else if (expression2)
  statement2
else if (expression3)
  statement3
else
  statementn
\end{minted}
\item As soon as one of the expressions holds true, the statements inside its body are executed and control goes out of the \verb~else-if~ chain
\item The \verb~else~ part serves as a default when none of the given expressions is true
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Multi-way decision: Else-If}
\label{sec-1-6}
\begin{itemize}
\item E.g. To display the grade obtained by the student in an exam
\begin{minted}[]{C}
if (marks >= 90)
  printf("A");
else if (marks >= 80 && marks < 90)
  printf("B");
else if (marks >= 70 && marks < 80)
  printf("C");
else if (marks >= 60 && marks < 70)
  printf("D");
else if (marks >= 50 && marks < 60)
  printf("E");
else
  printf("F");
\end{minted}
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Two-way Selection: Nested If-Else}
\label{sec-1-7}
\begin{itemize}
\item E.g. Displaying maximum of three numbers
\begin{minted}[]{C}
if (a > b) {
  if (a > c)
    max = a;
  else
    max = c;
}
else {
  if (b > c)
    max = b;
  else
    max = c;
}
\end{minted}
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Switch Statement}
\label{sec-1-8}
\begin{itemize}
\item This is another multi-way decision that tests whether an expression matches one of a number of constant integer values, and branches accordingly
\begin{minted}[]{C}
switch (expression) {
case constant1:
  statements
  break;
case constant2:
  statements
  break;
default:
  statements
}
\end{minted}
\item If the expression matches \verb~constant1~, the statements after that label are executed, and the \verb~break~ gets the execution flow out of the switch block
\end{itemize}
\end{frame}
\begin{frame}[fragile]\frametitle{Switch Statement -- Example 1}
\label{sec-1-9}
\begin{itemize}
\item A simple calculator: enter two numbers and an operator to perform the required arithmetic operation
\end{itemize}
\begin{minted}[]{C}
int a,b;
char op;
scanf("%d%d", &a, &b);
scanf(" %c", &op);  /* the leading space skips whitespace left over from the previous scanf */
switch (op) {
case '+': printf("%d\n", a+b);
  break;
case '-': printf("%d\n", a-b);
  break;
case '*': printf("%d\n", a*b);
  break;
case '/': printf("%f\n", (float)a/b);
  break;
default: printf("Enter an arithmetic operator\n");
}
\end{minted}
\end{frame}
\begin{frame}[fragile]\frametitle{Switch Statement -- Example 2}
\label{sec-1-10}
\begin{itemize}
\item Tell whether the entered character is a vowel or not
\end{itemize}
\begin{minted}[]{C}
char ch;
scanf("%c", &ch);
switch (ch) {
case 'a':
case 'e':
case 'i':
case 'o':
case 'u': printf("%c is a vowel\n", ch);
  break;
default: printf("%c is not a vowel\n", ch);
}
\end{minted}
\begin{itemize}
\item If upper case letters are required, they may be added with the corresponding \verb~case~ labels
\item If no statement is given after a \verb~case~ constant, execution carries on from the first statement it finds.
In this example, if `a' is entered, it executes the first printf statement and breaks out of the switch block \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Switch Statement -- Example 3} \label{sec-1-11} \begin{itemize} \item Rewriting the grading example using switch instead of else-if \end{itemize} \begin{minted}[]{C} switch (marks / 10) { case 9: printf("A\n"); break; case 8: printf("B\n"); break; case 7: printf("C\n"); break; case 6: printf("D\n"); break; case 5: printf("E\n"); break; default: printf("F\n"); } \end{minted} \end{frame} \begin{frame}[fragile]\frametitle{Ternary Operator ?:} \label{sec-1-12} \begin{itemize} \item ? : is called a ternary operator since it takes three expressions \item Syntax is \emph{expr$_1$} ? \emph{expr$_2$} : \emph{expr$_3$} \item If the expression \emph{expr$_1$} is true (non-zero), then \emph{expr$_2$} is evaluated. Otherwise, \emph{expr$_3$} is evaluated. \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Ternary Operator ?: -- Example} \label{sec-1-13} \begin{itemize} \item To compute the max of two numbers, these two code samples are equivalent \item Using if-else \begin{minted}[]{C} if (a>b) max = a; else max = b; \end{minted} \item Using ternary operator \begin{minted}[]{C} max = (a>b) ? a : b; \end{minted} \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Loops} \label{sec-1-14} \begin{itemize} \item Loops are required when we want to do certain repetetive tasks \item E.g. Printing the squares of first five numbers without loop would require us to write 5 statements \begin{minted}[]{C} main() { printf("%d\n", 1*1); printf("%d\n", 2*2); printf("%d\n", 3*3); printf("%d\n", 4*4); printf("%d\n", 5*5); } \end{minted} \item There are different kinds of loops available in C to print that in fewer lines (which is shown after the description of the syntax) \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Loops: while} \label{sec-1-15} \begin{minted}[]{C} while (expression) { statements } \end{minted} \begin{itemize} \item Using if and goto: \begin{minted}[]{C} loop1: if (expression) { statements goto loop1; } \end{minted} \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Loops: do--while} \label{sec-1-16} \begin{itemize} \item The do-while loop is written as: \begin{minted}[]{C} do { statements } while (expression); \end{minted} \item Note the semicolon after while. 
Missing that will cause a syntax error \item Equivalently, using if and goto: \begin{minted}[]{C} loop1: statements if (expression) goto loop1; \end{minted} \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Loops: for} \label{sec-1-17} \begin{itemize} \item for loop is written as: \begin{minted}[]{C} for (expr1 ; expr2 ; expr3) { statements } \end{minted} \item Can be written as an equivalent while loop \begin{minted}[]{C} expr1; while (expr2) { statements expr3 } \end{minted} \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{Loops (example) -- Squares of numbers} \label{sec-1-18} \begin{itemize} \item Continuing from the example, printing the squares can be done as follows (do-while) \begin{minted}[]{C} int i = 1; do { printf("%d\n", i*i); i++; } while(i<6); \end{minted} \item (while) \begin{minted}[]{C} int i = 1; while(i<6) { printf("%d\n", i*i); i++; } \end{minted} \item (for) \begin{minted}[]{C} int i; for (i=1; i<6; i++) printf("%d\n", i*i); \end{minted} \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{break and continue} \label{sec-1-19} \begin{itemize} \item \verb~break~ gets the control out of the current loop or switch block \item \verb~continue~ gets the control directly to the testing of the condition, and begins the next iteration if condition is satisfied \item E.g. Compute the sum of numbers only if positive numbers are entered \begin{minted}[]{C} int a, sum = 0; while (1) { scanf("%d", &a); if (a<0) break; sum += a; } \end{minted} \item Hence, if \verb~1 7 8 3 -4~ are entered, it calculates the sum of first 4 numbers and exits the loop \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{break and continue} \label{sec-1-20} \begin{itemize} \item E.g. Print the first five odd numbers \begin{minted}[]{C} int a=0; while (a<10) { a++; if (a%2 == 0) continue; printf("%d\n", a); } \end{minted} \item In the loop body, whenever \verb~a~ becomes even, \verb~continue~ statement is executed, which takes the flow of execution to check the condition \verb~a<10~ and then continues execution depending on the condition result \end{itemize} \end{frame} \begin{frame}[fragile]\frametitle{goto and labels} \label{sec-1-21} \begin{itemize} \item When there is a goto and a label, the statement next to the label gets executed. \item Usually, it's not preferred since it's difficult to read and maintain such code \item Used mainly to exit out of deeply nested loops \item E.g. \begin{minted}[]{C} for ( ... ) for ( ... ) for ( ... ) if (solution_found) goto found; found: /* print the solution */ \end{minted} \item \verb~break~ can terminate only one loop, \verb~goto~ helps to terminate all the outer loops as well \end{itemize} \end{frame} \end{document}
#' watershed: Tools for Watershed Delineation #' #' This is a simplified set of tools for deriving a river network from digital elevation data. #' The package relies on GRASS GIS (v. 7.4 or 7.6) to do the heavy lifting. An installation of #' GRASS and the rgrass7 package is required for watershed to function. Note that this is meant #' to be a simplified workflow for watershed delineation using only R code; for more options #' users can use rgrass7 to access GRASS GIS functions directly. #' #' @section Key functions: #' #' * [delineate()] Produce a stream map from a digital elevation model #' * [pixel_topology()] Construct river network topologies #' * [reach_topology()] Construct river network topologies #' * [catchment()] Compute catchment areas #' #' @section Datasets: #' * [kamp_dem] An example digital elevation model for the Kamp river in Austria #' * [kamp_q] Measured discharge in the Kamp river #' #' @docType package #' @name watershed_package NULL #' Digital elevation model for the Kamp River catchment in Austria #' #' @format A [raster::raster()] #' \describe{ #' \item{value}{Elevation of each pixel, in m} #' } "kamp_dem" #' Measured discharge in the Kamp river #' #' @format A simple features (`sf`) point dataset #' \describe{ #' \item{discharge}{Discharge in m^3/s} #' } "kamp_q"
\subsection{Existence of an infinite number of prime numbers}

\subsubsection{Existence of an infinite number of prime numbers}

Suppose there were only a finite number of primes, and call the set of all primes \(P=\{p_1,\dots,p_k\}\). We identify a new natural number \(a\) by taking the product of the existing primes and adding \(1\):

\(a=1+\prod_{p\in P} p\)

From the fundamental theorem of arithmetic we know every number greater than \(1\) is either prime or a product of primes. If \(a\) were not prime, it would be divisible by some existing prime \(p_j\), giving a whole number \(m\):

\(m=\dfrac{\prod_{i=1}^{k} p_i +1}{p_j}\)

\(m=\dfrac{p_j \prod_{i\ne j} p_i +1}{p_j}\)

\(m=\prod_{i\ne j} p_i +\dfrac{1}{p_j}\)

As the last expression is not a whole number, \(a\) is not divisible by any prime in \(P\), so \(a\) must itself be prime. But \(a\) is larger than every element of \(P\), contradicting the assumption that \(P\) contained all the primes. The same argument works for any finite set of primes, so there are an infinite number of primes.
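A quick numerical check (illustrative values, not part of the original argument) shows why the conclusion is phrased through divisibility rather than by claiming \(a\) is always prime for an arbitrary finite set of primes:

\(P=\{2,3,5,7\}\): \(a=2\cdot3\cdot5\cdot7+1=211\), which is itself prime.

\(P=\{2,3,5,7,11,13\}\): \(a=30030+1=30031=59\cdot509\), which is composite, but both of its prime factors lie outside \(P\).

Either way, a prime not in the original set is obtained.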
module Data.Optics import public Data.Optics.Lens import public Data.Optics.Iso import public Data.Optics.Prism import public Data.Optics.Optional import Control.Category -- -- Conversions -- -- Iso isoAsPrism: (Iso s a) -> (Prism s a) isoAsPrism (MkIso to from) = MkPrism (\s => Just (to s)) (\a => from a) isoAsLens: (Iso s a) -> (Lens s a) isoAsLens (MkIso to from) = MkLens (to) (\s => from) isoAsOptional: (Iso s a) -> (Optional s a) isoAsOptional (MkIso to from) = MkOptional (\s => Just(to s)) (\s,a => from a) -- Lens lensAsOptional: (Lens s a ) -> (Optional s a) lensAsOptional (MkLens get set) = MkOptional (\s => Just (get s)) (set) -- Prism prismAsOptional: Prism s a -> Optional s a prismAsOptional (MkPrism to from) = MkOptional (to) (\s => from) -- -- Compositions -- infixr 9 <:+ ||| the composition of a prism with a lens is an optional (<:+) : Prism a b -> Lens s a -> Optional s b (<:+) prism lens = let prismOptional = prismAsOptional prism lensOptional = lensAsOptional lens in prismOptional ?:+ lensOptional infixr 9 +:> (+:>) : Lens s a -> Prism a b -> Optional s b (+:>) = flip (<:+) -- -- instances -- ||| An Iso forms a Category instance Category Iso where id = MkIso id id (.) = (-:+) ||| A Lens forms a Category instance Category Lens where id = MkLens (id) (\a => id) (.) = (#:+) ||| An Optonal forms a Category instance Category Optional where id = MkOptional Just (\a => id) (.) = (?:+) ||| A Prism forms a Category instance Category Prism where id = MkPrism Just id (.) = (<:+)
import order.atoms import data.finite.basic import order.hom.complete_lattice universes u v variables {α : Type u} {β : Type v} section upper_lower variables [preorder α] {x y z a b: α} @[reducible] def lower_closure (s : set α) : set α := {x | ∃ y ∈ s, x ≤ y} @[reducible] def upper_closure (s : set α) : set α := {x | ∃ y ∈ s, y ≤ x} -- lemma lower_closure_preimage_invo [has_involution α] (s : set α) : -- lower_closure (invo ⁻¹' s) = invo ⁻¹' (upper_closure s) := -- begin -- ext x, simp only [set.mem_set_of_eq, set.mem_preimage, exists_prop], -- exact ⟨λ ⟨y,hy,hxy⟩, ⟨yᵒ, hy, invo_le_iff.mpr hxy⟩, -- λ ⟨y,hy,hxy⟩, ⟨yᵒ, (@invo_invo _ _ _ y).symm ▸ hy, le_invo_comm.mpr hxy⟩⟩, -- end -- lemma lower_closure_image_invo [has_involution α] (s : set α) : -- lower_closure (invo '' s) = invo '' (upper_closure s) := -- by rw [image_invo_eq_preimage_invo, lower_closure_preimage_invo, image_invo_eq_preimage_invo] -- lemma upper_closure_image_invo [has_involution α] (s : set α) : -- upper_closure (invo '' s) = invo '' (lower_closure s) := -- by {nth_rewrite 1 ←(@invo_invo_image _ s), rw [lower_closure_image_invo, invo_invo_image]} -- lemma upper_closure_preimage_invo [has_involution α] (s : set α): -- upper_closure (invo ⁻¹' s) = invo ⁻¹' (lower_closure s) := -- by rw [←image_invo_eq_preimage_invo, ←image_invo_eq_preimage_invo, upper_closure_image_invo] -- lemma set.Icc_dual''' (x y : α) : @set.Icc αᵒᵈ _ x y = @set.Icc α _ y x := -- set.dual_Icc end upper_lower section covby variables [partial_order α] {x y z a b : α} lemma covby.eq_of_le_of_lt (hab : a ⋖ b) (hax : a ≤ x) (hxb : x < b) : a = x := by_contra (λ h, hab.2 (hax.lt_of_ne h) hxb) lemma covby.eq_of_lt_of_le (hab : a ⋖ b) (hax : a < x) (hxb : x ≤ b) : x = b := by_contra (λ h, hab.2 hax (hxb.lt_of_ne h)) lemma covby.eq_or_of_le_of_le (hab : a ⋖ b) (hax : a ≤ x) (hxb : x ≤ b) : x = a ∨ x = b := begin obtain ⟨rfl, hax⟩ := em (a = x), exact or.inl rfl, exact or.inr ((hab.eq_of_lt_of_le (hax.lt_of_ne h)) hxb), end lemma wcovby.covby_or_eq (hab : a ⩿ b) : a ⋖ b ∨ a = b := wcovby_iff_covby_or_eq.mp hab end covby section lattice variables [lattice α] {x y z a b: α} lemma inf_le_inf_of_inf_le (h : a ⊓ x ≤ b) : a ⊓ x ≤ b ⊓ x := le_inf h inf_le_right lemma sup_le_sup_of_le_sup (h : a ≤ b ⊔ x) : a ⊔ x ≤ b ⊔ x := sup_le h le_sup_right lemma inf_eq_inf_of_le_of_le (h1 : a ⊓ x ≤ b) (h2 : b ⊓ x ≤ a) : a ⊓ x = b ⊓ x := (le_inf h1 inf_le_right).antisymm (le_inf h2 inf_le_right) lemma sup_eq_sup_of_le_of_le (h1 : a ≤ b ⊔ x) (h2 : b ≤ a ⊔ x) : a ⊔ x = b ⊔ x := (sup_le h1 le_sup_right).antisymm (sup_le h2 le_sup_right) end lattice section modular variables [lattice α] [is_modular_lattice α] {x y z a b: α} lemma eq_of_le_of_inf_le_of_sup_le' (hxy : x ≤ y) (hinf : y ⊓ z ≤ x) (hsup : y ≤ x ⊔ z) : x = y := eq_of_le_of_inf_le_of_sup_le hxy (le_inf hinf inf_le_right) (sup_le hsup le_sup_right) lemma inf_coatom_wcovby [order_top α] (x : α) (ha : is_coatom a) : x ⊓ a ⩿ x := begin by_cases hxa : x ≤ a, { rw inf_eq_left.mpr hxa, exact rfl.wcovby}, refine covby.wcovby ⟨inf_le_left.lt_of_ne (λ h, hxa (inf_eq_left.mp h)), λ y hxy hyx, hyx.ne _⟩, refine @eq_of_le_of_inf_le_of_sup_le' _ _ _ _ _ a hyx.le hxy.le _, rw ha.2 (y ⊔ a) (le_sup_right.lt_of_ne (λ hay, _)), exact le_top, rw [eq_comm, sup_eq_right] at hay, exact (lt_of_lt_of_le hxy (le_inf hyx.le hay)).ne rfl, end lemma sup_atom_wcovby [order_bot α] (x : α) (ha : is_atom a) : x ⩿ x ⊔ a := (@inf_coatom_wcovby αᵒᵈ _ _ _ _ _ ha).to_dual lemma sup_atom_covby_of_not_le [order_bot α] {x a : α} (ha : is_atom a) (hx : ¬ a ≤ x) : x ⋖ 
x ⊔ a := (sup_atom_wcovby x ha).covby_of_ne (λ h, hx (sup_eq_left.mp h.symm)) lemma inf_coatom_covby_of_not_le [order_top α] {x a : α} (ha : is_coatom a) (hx : ¬ x ≤ a) : x ⊓ a ⋖ x := (@sup_atom_covby_of_not_le αᵒᵈ _ _ _ x a ha hx).to_dual end modular section atoms variables [complete_lattice α] [is_atomistic α] {x y z a b : α} lemma le_of_forall_atom_le (h : ∀ a, is_atom a → a ≤ x → a ≤ y) : x ≤ y := by {obtain ⟨sx,rfl,hsx⟩ := eq_Sup_atoms x, exact Sup_le (λ b hb, h b (hsx b hb) (le_Sup hb))} lemma le_iff_forall_atom_le : x ≤ y ↔ (∀ a, is_atom a → a ≤ x → a ≤ y) := ⟨λ hxy a ha hax, hax.trans hxy, le_of_forall_atom_le⟩ lemma eq_of_atom_le_iff_atom_le (h : ∀ a, is_atom a → (a ≤ x ↔ a ≤ y)) : x = y := (le_of_forall_atom_le (λ a ha, (h a ha).mp)).antisymm (le_of_forall_atom_le (λ a ha, (h a ha).mpr)) lemma exists_atom_of_not_le (hxy : ¬ (x ≤ y)) : ∃ a, is_atom a ∧ a ≤ x ∧ ¬ (a ≤ y) := by_contra (λ h, hxy (le_of_forall_atom_le (by {push_neg at h, exact h}))) lemma exists_atom_of_lt (hxy : x < y) : ∃ a, is_atom a ∧ a ≤ y ∧ ¬ (a ≤ x) := exists_atom_of_not_le (not_le_of_lt hxy) lemma exists_atom_le_of_ne_bot (hx : x ≠ ⊥) : ∃ a, is_atom a ∧ a ≤ x := by {obtain ⟨a,ha,hax,-⟩ := exists_atom_of_lt (bot_le.lt_of_ne' hx), exact ⟨a,ha,hax⟩} lemma covby.exists_atom_sup (hxy : x ⋖ y) : ∃ a, is_atom a ∧ y = x ⊔ a := begin obtain ⟨a,ha,hxa,hay⟩ := exists_atom_of_lt hxy.lt, exact ⟨a, ha, (hxy.eq_of_lt_of_le (le_sup_left.lt_of_not_le (by simpa)) (sup_le hxy.le hxa)).symm⟩, end lemma exists_sup_atom_of_inf_coatom_of_ne_bot [is_modular_lattice α] {x a : α} (hx : x ≠ ⊥) (ha : is_coatom a) : ∃ b, is_atom b ∧ x = (x ⊓ a) ⊔ b := begin obtain ⟨b,hb,hbx⟩ := exists_atom_le_of_ne_bot hx, exact or.elim (inf_coatom_wcovby x ha).covby_or_eq (λ h, h.exists_atom_sup) (λ h, ⟨b, hb, by rw [h, sup_eq_left.mpr hbx]⟩), end end atoms section coatoms variables [complete_lattice α] [is_coatomistic α] {x y z a : α} lemma le_of_le_forall_coatom (h : ∀ a, is_coatom a → y ≤ a → x ≤ a) : x ≤ y := @le_of_forall_atom_le αᵒᵈ _ _ _ _ h lemma le_iff_le_forall_coatom : x ≤ y ↔ (∀ a, is_coatom a → y ≤ a → x ≤ a) := @le_iff_forall_atom_le αᵒᵈ_ _ _ _ lemma eq_of_le_coatom_iff_le_coatom (h : ∀ a, is_coatom a → (x ≤ a ↔ y ≤ a)) : x = y := @eq_of_atom_le_iff_atom_le αᵒᵈ _ _ _ _ h lemma exists_coatom_of_not_le (hxy : ¬ (x ≤ y)) : ∃ a, is_coatom a ∧ y ≤ a ∧ ¬ (x ≤ a) := @exists_atom_of_not_le αᵒᵈ _ _ _ _ hxy lemma exists_coatom_of_lt (hxy : x < y) : ∃ a, is_coatom a ∧ x ≤ a ∧ ¬ (y ≤ a) := @exists_atom_of_lt αᵒᵈ _ _ _ _ hxy lemma exists_le_coatom_of_ne_top (hx : x ≠ ⊤) : ∃ b, is_coatom b ∧ x ≤ b := @exists_atom_le_of_ne_bot αᵒᵈ _ _ _ hx lemma covby.exists_coatom_inf (hxy : x ⋖ y): ∃ a, is_coatom a ∧ x = y ⊓ a := @covby.exists_atom_sup αᵒᵈ _ _ _ _ hxy.to_dual lemma exists_inf_coatom_of_sup_atom_of_ne_top [is_modular_lattice α] {x a : α} (hx : x ≠ ⊤) (ha : is_atom a): ∃ b, is_coatom b ∧ x = (x ⊔ a) ⊓ b := @exists_sup_atom_of_inf_coatom_of_ne_bot αᵒᵈ _ _ _ _ _ hx ha end coatoms section finite variables [finite α] instance : finite αᵒᵈ := (infer_instance : finite α) lemma finite.exists_maximal' [nonempty α] [preorder α] : ∃ x : α, ∀ y, ¬ (x < y) := begin haveI := fintype.of_finite α, exact (finset.univ.exists_maximal finset.univ_nonempty).imp (λ a h y hay, (exists.elim h (λ _ h', h' _ (finset.mem_univ _) hay))), end lemma finite.exists_minimal' [nonempty α] [preorder α] : ∃ x : α, ∀ y, ¬ (y < x) := @finite.exists_maximal' αᵒᵈ _ _ _ lemma set.finite.exists_maximal_mem' [preorder α] {s : set α} (hs : s.nonempty) : ∃ x ∈ s, ∀ y, y ∈ s → ¬ (x < y) := begin 
obtain ⟨⟨x,hx⟩,h⟩ := @finite.exists_maximal' s _ hs.to_subtype _, exact ⟨x,hx,λ y hy, λ hlt, h ⟨y,hy⟩ (subtype.mk_lt_mk.mpr hlt)⟩, end lemma set.finite.exists_minimal_mem' [preorder α] {s : set α} (hs : s.nonempty) : ∃ x ∈ s, ∀ y, y ∈ s → ¬ (y < x) := @set.finite.exists_maximal_mem' αᵒᵈ _ _ _ hs lemma set.finite.exists_maximal_mem [partial_order α] {s : set α} (hs : s.nonempty) : ∃ x ∈ s, ∀ y, y ∈ s → x ≤ y → x = y := (set.finite.exists_maximal_mem' hs).imp (λ x h, h.imp (λ hxs hx y hys hxy, eq_of_le_of_not_lt hxy (hx _ hys))) lemma set.finite.exists_minimal_mem [partial_order α] {s : set α} (hs : s.nonempty) : ∃ x ∈ s, ∀ y, y ∈ s → y ≤ x → y = x := (set.finite.exists_minimal_mem' hs).imp (λ x h, h.imp (λ hxs hx y hys hxy, eq_of_le_of_not_lt hxy (hx _ hys))) end finite section complete variables [complete_lattice α] {a : α} {S T : set α} {f : α → α} lemma Sup_image_sup_left_eq_sup_Sup_of_nonempty (hS : S.nonempty) : Sup ((λ x, a ⊔ x) '' S) = a ⊔ (Sup S) := let ⟨x,hx⟩ := hS in (Sup_le (by {rintro _ ⟨y,hy,rfl⟩, refine sup_le_sup_left (le_Sup hy) _, })).antisymm (sup_le (le_sup_left.trans (le_Sup ((set.mem_image _ _ _).mpr ⟨x,hx,rfl⟩))) (Sup_le_Sup_of_forall_exists_le (λ y hy, ⟨a ⊔ y, ⟨⟨y,hy,rfl⟩,le_sup_right⟩⟩))) lemma Sup_image_sup_right_eq_Sup_sup_of_nonempty (hS : S.nonempty) : Sup ((λ x, x ⊔ a) '' S) = (Sup S) ⊔ a := by {rw [sup_comm, ←Sup_image_sup_left_eq_sup_Sup_of_nonempty hS], simp_rw [sup_comm]} lemma Inf_image_inf_right_eq_Inf_inf_of_nonempty (hS : S.nonempty) : Inf ((λ x, x ⊓ a) '' S) = (Inf S) ⊓ a := @Sup_image_sup_right_eq_Sup_sup_of_nonempty αᵒᵈ _ _ _ hS lemma Inf_image_inf_left_eq_inf_Inf_of_nonempty (hS : S.nonempty) : Inf ((λ x, a ⊓ x) '' S) = a ⊓ (Inf S) := @Sup_image_sup_left_eq_sup_Sup_of_nonempty αᵒᵈ _ _ _ hS -- these are already in mathlib : supr_sup etc . Maybe PR the below? 
lemma bsupr_eq_supr_subtype : (⨆ (x : α) (H : x ∈ S), f x) = ⨆ (y : ↥S), f y := supr_subtype' lemma bsupr_sup (hS : S.nonempty) : (⨆ (x ∈ S), (f x ⊔ a)) = (⨆ (x ∈ S), f x) ⊔ a := by {simpa [supr_subtype'] using (@supr_sup _ _ _ hS.to_subtype _ _).symm} lemma sup_bsupr (hS : S.nonempty) : (⨆ (x ∈ S), (a ⊔ f x)) = a ⊔ (⨆ (x ∈ S), f x) := by {rw [sup_comm, ←bsupr_sup hS], simp_rw [sup_comm]} lemma binf_inf (hS : S.nonempty) : (⨅ (x ∈ S), (f x ⊓ a)) = (⨅ (x ∈ S), f x) ⊓ a := @bsupr_sup αᵒᵈ _ _ _ _ hS lemma inf_binf (hS : S.nonempty) : (⨅ (x ∈ S), (a ⊓ f x)) = a ⊓ (⨅ (x ∈ S), f x) := @sup_bsupr αᵒᵈ _ _ _ _ hS lemma Inf_diff_singleton_inf_of_mem_eq (ha : a ∈ S) : (Inf (S \ {a})) ⊓ a = Inf S := begin nth_rewrite 1 ←(Inf_singleton : Inf {a} = a), rw [←Inf_union, set.diff_union_of_subset (set.singleton_subset_iff.mpr ha)], end lemma Sup_diff_singleton_sup_of_mem_eq (ha : a ∈ S) : (Sup (S \ {a})) ⊔ a = Sup S := @Inf_diff_singleton_inf_of_mem_eq αᵒᵈ _ _ _ ha lemma supr_bool_eq' {f : bool → α} (i : bool) : (⨆ j, f j) = f i ⊔ f (!i) := by {rw supr_bool_eq, cases i; simp [sup_comm]} lemma infi_bool_eq' {f : bool → α} (i : bool) : (⨅ j, f j) = f i ⊓ f (!i) := @supr_bool_eq' αᵒᵈ _ _ _ end complete section intervals open set subtype instance [complete_lattice α] {a : α} : complete_lattice (Iic a) := { Sup := λ S, ⟨_, (Sup_le (λ _ ⟨⟨_,hb⟩,_,rfl⟩, hb) : complete_lattice.Sup (coe '' S) ≤ a)⟩, Inf := λ S, ⟨_, @inf_le_left _ _ a (complete_lattice.Inf (coe '' S))⟩, le_Sup := λ _ _ h, coe_le_coe.mp (le_Sup (mem_image_of_mem _ h)), Sup_le := λ _ _ h, coe_le_coe.mp (Sup_le (by {rintros y ⟨z,p,rfl⟩, exact coe_le_coe.mpr (h z p)})), Inf_le := λ _ x h, coe_le_coe.mp (inf_le_of_right_le (Inf_le ⟨x,h,rfl⟩)), le_Inf := λ _ x h, coe_le_coe.mp (le_inf x.2 (le_Inf (by {rintros _ ⟨z,p,rfl⟩, exact coe_le_coe.mpr (h z p)}))), .. (infer_instance : lattice (set.Iic a)), .. (infer_instance : bounded_order (set.Iic a)) } instance [complete_lattice α] {a : α} : complete_lattice (Ici a) := { Sup := λ S, ⟨_, (@le_sup_left _ _ a (complete_lattice.Sup (coe '' S)))⟩, Inf := λ S, ⟨complete_lattice.Inf (coe '' S), (le_Inf (λ _ ⟨⟨_,hb⟩,_,rfl⟩, hb))⟩, Inf_le := λ _ _ h, coe_le_coe.mp (Inf_le (mem_image_of_mem _ h)), le_Inf := λ _ _ h, coe_le_coe.mp (le_Inf (by {rintros _ ⟨z,p,rfl⟩, exact coe_le_coe.mpr (h z p)})), le_Sup := λ _ x h, coe_le_coe.mp (le_sup_of_le_right (le_Sup ⟨x,h,rfl⟩)), Sup_le := λ _ x h, coe_le_coe.mp (sup_le x.2 (Sup_le (by {rintros _ ⟨z,p,rfl⟩, exact coe_le_coe.mpr (h z p)}))), .. (infer_instance : lattice (set.Ici a)), .. (infer_instance : bounded_order (set.Ici a)) } @[reducible] def Icc_complete_lattice [complete_lattice α] {a b : α} (hab : a ≤ b) : complete_lattice (Icc a b) := { Sup := λ S, ⟨a ⊔ Sup (coe '' S), ⟨le_sup_left, sup_le hab (Sup_le (by {rintros _ ⟨⟨_,h⟩,_,rfl⟩, exact h.2}))⟩ ⟩, Inf := λ S, ⟨b ⊓ Inf (coe '' S), ⟨le_inf hab (le_Inf (by {rintros _ ⟨⟨_,h⟩,_,rfl⟩, exact h.1})), inf_le_left⟩⟩, Inf_le := λ _ x h, coe_le_coe.mp (inf_le_of_right_le (Inf_le (⟨x,h,rfl⟩))), le_Inf := λ _ x h, coe_le_coe.mp (le_inf x.2.2 (le_Inf (by {rintros _ ⟨z,p,rfl⟩, exact coe_le_coe.mpr (h z p)}))), le_Sup := λ _ x h, coe_le_coe.mp (le_sup_of_le_right (le_Sup ⟨x,h,rfl⟩)), Sup_le := λ _ x h, (sup_le x.2.1 (Sup_le (by {rintros _ ⟨z,p,rfl⟩, exact coe_le_coe.mpr (h z p)}))), .. (infer_instance : lattice (set.Icc a b)), .. 
(Icc.bounded_order hab) } @[simp] lemma set.Iic.coe_Sup [complete_lattice α] {a : α} {S : set (Iic a)} : ((Sup S : Iic a) : α) = Sup ((coe : Iic a → α) '' S) := rfl @[simp] lemma set.Iic.coe_Inf [complete_lattice α] {a : α} {S : set (Iic a)} : ((Inf S : Iic a) : α) = a ⊓ Inf ((coe : Iic a → α) '' S) := rfl lemma set.Iic.coe_Inf_nonempty_eq [complete_lattice α] {a : α} {S : set (Iic a)} (hS : S.nonempty) : ((Inf S : Iic a) : α) = Inf ((coe : Iic a → α) '' S) := by { rw [set.Iic.coe_Inf, inf_eq_right], exact exists.elim hS (λ ⟨x,hxa⟩ hx, le_trans (Inf_le (⟨⟨x,hxa⟩,hx,rfl⟩)) hxa)} @[simp] lemma set.Iic.coe_sup [semilattice_sup α] {a : α} {x y : Iic a} : (↑(x ⊔ y) : α) = (↑x ⊔ ↑y) := rfl @[simp] lemma set.Iic.coe_inf [semilattice_inf α] {a : α} {x y : Iic a} : (↑(x ⊓ y) : α) = (↑x ⊓ ↑y) := rfl @[simp] lemma set.Ici.coe_inf [semilattice_inf α] {a : α} {x y : Iic a} : (↑(x ⊓ y) : α) = (↑x ⊓ ↑y) := rfl @[simp] lemma set.Ici.coe_sup [semilattice_sup α] {a : α} {x y : Iic a} : (↑(x ⊔ y) : α) = (↑x ⊔ ↑y) := rfl @[simp] lemma set.Icc.coe_inf [semilattice_inf α] {a : α} {x y : Iic a} : (↑(x ⊓ y) : α) = (↑x ⊓ ↑y) := rfl @[simp] lemma set.Icc.coe_sup [semilattice_sup α] {a : α} {x y : Iic a} : (↑(x ⊔ y) : α) = (↑x ⊔ ↑y) := rfl @[simp] lemma set.Ioc.coe_inf [semilattice_inf α] {a : α} {x y : Iic a} : (↑(x ⊓ y) : α) = (↑x ⊓ ↑y) := rfl @[simp] lemma set.Ioc.coe_sup [semilattice_sup α] {a : α} {x y : Iic a} : (↑(x ⊔ y) : α) = (↑x ⊔ ↑y) := rfl @[simp] lemma set.Ico.coe_inf [semilattice_inf α] {a : α} {x y : Iic a} : (↑(x ⊓ y) : α) = (↑x ⊓ ↑y) := rfl @[simp] lemma set.Ico.coe_sup [semilattice_sup α] {a : α} {x y : Iic a} : (↑(x ⊔ y) : α) = (↑x ⊔ ↑y) := rfl -- etc etc... PR this? -- @[simp, norm_cast] lemma coe_inf [semilattice_inf α] {P : α → Prop} -- (Psup : ∀⦃x y⦄, P x → P y → P (x ⊓ y)) (x y : subtype P) : -- (@has_inf.inf _ (@semilattice_inf.to_has_inf _ (subtype.semilattice_inf Psup)) x y : α) -- = x ⊓ y := rfl @[simp, norm_cast] lemma subtype.coe_sup [semilattice_sup α] {P : α → Prop} (Psup : ∀⦃x y⦄, P x → P y → P (x ⊔ y)) (x y : subtype P) : (@has_sup.sup _ (@semilattice_sup.to_has_sup _ (subtype.semilattice_sup Psup)) x y : α) = x ⊔ y := rfl @[simp, norm_cast] lemma subtype.coe_inf [semilattice_inf α] {P : α → Prop} (Pinf : ∀⦃x y⦄, P x → P y → P (x ⊓ y)) (x y : subtype P) : (@has_inf.inf _ (@semilattice_inf.to_has_inf _ (subtype.semilattice_inf Pinf)) x y : α) = x ⊓ y := rfl @[simp] lemma subtype.mk_sup [semilattice_sup α] {P : α → Prop} (Psup : ∀⦃x y⦄, P x → P y → P (x ⊔ y)) {x y : α} (hx : P x) (hy : P y) : (@has_sup.sup _ (@semilattice_sup.to_has_sup _ (subtype.semilattice_sup Psup)) ⟨x,hx⟩ ⟨y,hy⟩) = ⟨x ⊔ y, Psup hx hy⟩ := rfl @[simp] lemma subtype.mk_inf [semilattice_inf α] {P : α → Prop} (Pinf : ∀⦃x y⦄, P x → P y → P (x ⊓ y)) {x y : α} (hx : P x) (hy : P y) : (@has_inf.inf _ (@semilattice_inf.to_has_inf _ (subtype.semilattice_inf Pinf)) ⟨x,hx⟩ ⟨y,hy⟩) = ⟨x ⊓ y, Pinf hx hy⟩ := rfl end intervals
#ifndef I3SPRNGRANDOMSERVICE_H #define I3SPRNGRANDOMSERVICE_H #include "phys-services/I3RandomService.h" #include <gsl/gsl_randist.h> #include <gsl/gsl_rng.h> #include <gsl/gsl_test.h> #include <string> /** * copyright (C) 2004 * the icecube collaboration * $Id: I3SPRNGRandomService.h 127790 2015-01-14 04:03:06Z olivas $ * * @brief SPRNG Implementation of the I3RandomService interface. * This implementation uses a combination of SPRNG and GSL to generate * statistically independent streams of pseudo-random number distributions. * See gsl-sprng.h for more details. * * NB : It's important that you use the same seed for different jobs. Set * nstreams to the number of jobs and use a different streamnum for each job. * Otherwise you'll get correlations between the RNG streams. I know this is * counterintuitive, but this is how SPRNG works. * * The code for this class is based on John Pretz's implementation of * I3GSLRandomService. * * @version $Revision: 127790 $ * @date $Date: 2015-01-13 21:03:06 -0700 (Tue, 13 Jan 2015) $ * @author juancarlos * * @todo Add ability to save state of rng after run is complete * SPRNG has the functions: * * int pack_sprng(char *bytes); // returns size of bytes * void unpack_sprng(char bytes[MAX_PACKED_LENGTH]); * * which can be used to save and retrieve the state of an rng */ class I3SPRNGRandomService : public I3RandomService{ public: /** * default constructor */ I3SPRNGRandomService(); /** * constructors */ I3SPRNGRandomService(int seed, int nstreams, int streamnum, std::string instatefile=std::string(), std::string outstatefile=std::string()); /** * destructor */ virtual ~I3SPRNGRandomService(); /** * Binomial distribution */ virtual int Binomial(int ntot, double prob); // As with John Pretz's GSL implementation, I have left this out for now. /* virtual double BreitWigner(double mean = 0, double gamma = 1)=0; */ /** * Exponential distribution */ virtual double Exp(double tau); /** * Uniform int distribution with range [0,imax) */ virtual unsigned int Integer(unsigned int imax); /** * int Poisson distribution */ virtual int Poisson(double mean); /** * double Poisson distribution */ virtual double PoissonD(double mean); /** * double uniform distribution with range (0,x1) */ virtual double Uniform(double x1 = 1); /** * double uniform distribution with range (x1,x2) */ virtual double Uniform(double x1, double x2); /** * double Gaussian distribution given mean and StdD */ virtual double Gaus(double mean, double stddev); virtual I3FrameObjectPtr GetState() const; virtual void RestoreState(I3FrameObjectConstPtr state); private: // private copy constructors and assignment I3SPRNGRandomService(const I3SPRNGRandomService& ); I3SPRNGRandomService operator=(const I3SPRNGRandomService& ); gsl_rng* rng_; std::string instatefile_; std::string outstatefile_; int seed_; int streamnum_; int nstreams_; SET_LOGGER("I3SPRNGRandomService"); }; I3_POINTER_TYPEDEFS(I3SPRNGRandomService); #endif // I3SPRNGRANDOMSERVICE_H
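A minimal usage sketch based only on the constructor and methods declared above. The include path, the concrete seed/nstreams/streamnum values, and the build environment (the IceCube phys-services library has to be linked) are assumptions made for illustration, not taken from this header.

#include <iostream>
#include "phys-services/I3SPRNGRandomService.h"  // assumed install path, mirroring the header's own includes

int main() {
  // Hypothetical production setup: every job uses the same seed, nstreams equals
  // the total number of jobs, and each job passes its own stream number.
  const int seed      = 42;
  const int nstreams  = 100;
  const int streamnum = 7;   // 0 <= streamnum < nstreams for this particular job

  // Each job gets a statistically independent SPRNG stream.
  I3SPRNGRandomService rng(seed, nstreams, streamnum);

  double u       = rng.Uniform();       // uniform in (0, 1)
  double g       = rng.Gaus(0.0, 1.0);  // standard normal
  unsigned int k = rng.Integer(10);     // uniform integer in [0, 10)
  int n          = rng.Poisson(3.5);    // Poisson-distributed count

  std::cout << u << " " << g << " " << k << " " << n << std::endl;
  return 0;
}

Keeping the seed fixed across jobs while varying only streamnum follows the note in the header: SPRNG derives the independent streams from the (seed, nstreams, streamnum) triple, so changing the seed per job would reintroduce correlations.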
start with a collection containing only the empty subset
for each element in the set:
    for each subset constructed so far:
        new subset = (subset + element)
        add new subset to the collection
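A runnable C++ sketch of the same subset-doubling idea; the concrete element values and the int element type are illustrative assumptions, not part of the original snippet.

#include <iostream>
#include <vector>

int main() {
    std::vector<int> set = {1, 2, 3};              // example input set (assumed values)
    std::vector<std::vector<int>> subsets = {{}};  // start with only the empty subset

    for (int element : set) {                      // for each element in the set
        std::size_t existing = subsets.size();
        for (std::size_t i = 0; i < existing; ++i) {   // for each subset constructed so far
            std::vector<int> extended = subsets[i];    // new subset = (subset + element)
            extended.push_back(element);
            subsets.push_back(extended);               // keep it alongside the old subsets
        }
    }

    for (const auto& s : subsets) {                // prints all 2^n subsets
        std::cout << "{ ";
        for (int x : s) std::cout << x << ' ';
        std::cout << "}\n";
    }
    return 0;
}

Capturing subsets.size() before the inner loop is what keeps each pass from re-extending the subsets it has just added.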
import OpenCL; const cl = OpenCL using Cartesian macro at_proc(p, ex) quote remotecall( $p, ()->eval(Main,$(Expr(:quote,ex)))) end end function has64Bit(device) amd = "cl_amd_fp64" khr = "cl_khr_fp64" ext = cl.info(device, :extensions) return (khr in ext) || (amd in ext) end function countOCL() cpu_count = 0 gpu_count = 0 for d in cl.devices(:cpu) if has64Bit(d) cpu_count = 1 # We currently only support one device of each sort. end end for d in cl.devices(:gpu) if has64Bit(d) gpu_count = 1 # We currently only support one device of each sort. end end return {:gpu => gpu_count, :cpu => cpu_count} end function determineWorkload() cpus = CPU_CORES count = countOCL() CPU_OCL = count[:cpu] >= 1 ncpu_workers = if CPU_OCL iceil((CPU_CORES - count[:gpu] - 1) / 3) # If we have opencl support on the cpu we really don't want to spawn more then one process per 3 cores. else (CPU_CORES - count[:gpu] - 1) # Native Julia workers end ngpu_workers = count[:gpu] return (ncpu_workers, ngpu_workers, CPU_OCL) end function determineCapabilities(cluster :: Bool = false, allow32Bit = false, forceJuliaImpl = false) if cluster if CPU_OCL println("Running simulation on CPU with OpenCL") devs = filter(has64Bit, cl.devices(:cpu)) ctx = cl.Context(devs) queue = cl.CmdQueue(ctx) return (true, true, ctx, queue) elseif GPU_OCL println("Running simulation on GPU with OpenCL") devs = filter(has64Bit, cl.devices(:gpu)) ctx = cl.Context(devs) queue = cl.CmdQueue(ctx) return (true, true, ctx, queue) else println("Running simulation on CPU") return (true, false, nothing, nothing) end else try if forceJuliaImpl error("forced usage of Julia implementation.") end if any(map(has64Bit, cl.devices())) devs = filter(has64Bit, cl.devices()) ctx = cl.Context([first(devs)]) queue = cl.CmdQueue(ctx) return (true, true, ctx, queue) else warn("No OpenCL device with Float64 support found!") if allow32Bit warn("Searching for device with Float32 support.") device, ctx, queue = cl.create_compute_context() return (false, true, ctx, queue) else throw(Exception()) end end catch e println("Got exception: $e") warn("OpenCL is not supported falling back to Julia computation") return (true, false, nothing, nothing) end end end # TODO: find a better way to handle arbitrary dimensions. 
function createDisturbance(name :: Symbol, ranges) dim = length(ranges) if dim == 1 createDisturbance1(name, ranges) elseif dim == 2 createDisturbance2(name, ranges) elseif dim == 3 createDisturbance3(name, ranges) elseif dim == 4 createDisturbance4(name, ranges) else error("Can't handle dim $dim") end end function createDisturbance4(name :: Symbol, ranges) vals = Dict[] @nloops 4 i d -> ranges[d] begin args = @ntuple 4 i push!(vals, {100.0 => [name, args...]}) end return vals end function createDisturbance3(name :: Symbol, ranges) vals = Dict[] @nloops 3 i d -> ranges[d] begin args = @ntuple 3 i push!(vals, {100.0 => [name, args...]}) end return vals end function createDisturbance2(name :: Symbol, ranges) vals = Dict[] @nloops 2 i d -> ranges[d] begin args = @ntuple 2 i push!(vals, {100.0 => [name, args...]}) end return vals end function createDisturbance1(name :: Symbol, range) vals = Dict[] for i in range push!(vals, {100.0 => [name, i]}) end return vals end function torange(c :: Dict) min = c["min"] max = c["max"] step = c["step"] return min:step:max end function torange(c :: Real) return c end function writedlm_row(io::IO, row, dlm = ',') pb = PipeBuffer() state = start(row) while !done(row, state) (x, state) = next(row, state) Base.writedlm_cell(pb, x, dlm) done(row, state) ? write(pb,'\n') : print(pb,dlm) end (nb_available(pb) > (16*1024)) && write(io, takebuf_array(pb)) write(io, takebuf_array(pb)) nothing end function apply_punch_down!(A, x0, y0, a, b) d1, d2 = size(A) @assert x0 in 1:d1 @assert y0 in 1:d2 #dist(x, y) = sqrt((x - x0)^2 + (y-y0)^2) #dist(x, y) = abs(x-x0) + abs(y-y0) dist(x, y) = floor(sqrt((x - x0)^2 + (y-y0)^2)) #punch(d) = 1- sech(1/b * d) ^ a punch(x)=a*exp(-x^2/(2*b^2))+1 for j in 1:d2 for i in 1:d1 d = dist(i, j) A[i,j] *= punch(d) end end end
/- Copyright © 2021 Scott Morrison. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Scott Morrison, Shing Tak Lam -/ import topology.algebra.ordered.proj_Icc import topology.continuous_function.basic /-! # Bundled continuous maps into orders, with order-compatible topology -/ variables {α : Type*} {β : Type*} {γ : Type*} variables [topological_space α] [topological_space β] [topological_space γ] namespace continuous_map section variables [linear_ordered_add_comm_group β] [order_topology β] /-- The pointwise absolute value of a continuous function as a continuous function. -/ def abs (f : C(α, β)) : C(α, β) := { to_fun := λ x, |f x|, } @[priority 100] -- see Note [lower instance priority] instance : has_abs C(α, β) := ⟨λf, abs f⟩ @[simp] lemma abs_apply (f : C(α, β)) (x : α) : |f| x = |f x| := rfl end /-! We now set up the partial order and lattice structure (given by pointwise min and max) on continuous functions. -/ section lattice instance partial_order [partial_order β] : partial_order C(α, β) := partial_order.lift (λ f, f.to_fun) (by tidy) lemma le_def [partial_order β] {f g : C(α, β)} : f ≤ g ↔ ∀ a, f a ≤ g a := pi.le_def lemma lt_def [partial_order β] {f g : C(α, β)} : f < g ↔ (∀ a, f a ≤ g a) ∧ (∃ a, f a < g a) := pi.lt_def instance has_sup [linear_order β] [order_closed_topology β] : has_sup C(α, β) := { sup := λ f g, { to_fun := λ a, max (f a) (g a), } } @[simp, norm_cast] lemma sup_coe [linear_order β] [order_closed_topology β] (f g : C(α, β)) : ((f ⊔ g : C(α, β)) : α → β) = (f ⊔ g : α → β) := rfl @[simp] lemma sup_apply [linear_order β] [order_closed_topology β] (f g : C(α, β)) (a : α) : (f ⊔ g) a = max (f a) (g a) := rfl instance [linear_order β] [order_closed_topology β] : semilattice_sup C(α, β) := { le_sup_left := λ f g, le_def.mpr (by simp [le_refl]), le_sup_right := λ f g, le_def.mpr (by simp [le_refl]), sup_le := λ f₁ f₂ g w₁ w₂, le_def.mpr (λ a, by simp [le_def.mp w₁ a, le_def.mp w₂ a]), ..continuous_map.partial_order, ..continuous_map.has_sup, } instance has_inf [linear_order β] [order_closed_topology β] : has_inf C(α, β) := { inf := λ f g, { to_fun := λ a, min (f a) (g a), } } @[simp, norm_cast] lemma inf_coe [linear_order β] [order_closed_topology β] (f g : C(α, β)) : ((f ⊓ g : C(α, β)) : α → β) = (f ⊓ g : α → β) := rfl @[simp] lemma inf_apply [linear_order β] [order_closed_topology β] (f g : C(α, β)) (a : α) : (f ⊓ g) a = min (f a) (g a) := rfl instance [linear_order β] [order_closed_topology β] : semilattice_inf C(α, β) := { inf_le_left := λ f g, le_def.mpr (by simp [le_refl]), inf_le_right := λ f g, le_def.mpr (by simp [le_refl]), le_inf := λ f₁ f₂ g w₁ w₂, le_def.mpr (λ a, by simp [le_def.mp w₁ a, le_def.mp w₂ a]), ..continuous_map.partial_order, ..continuous_map.has_inf, } instance [linear_order β] [order_closed_topology β] : lattice C(α, β) := { ..continuous_map.semilattice_inf, ..continuous_map.semilattice_sup } -- TODO transfer this lattice structure to `bounded_continuous_function` section sup' variables [linear_order γ] [order_closed_topology γ] lemma sup'_apply {ι : Type*} {s : finset ι} (H : s.nonempty) (f : ι → C(β, γ)) (b : β) : s.sup' H f b = s.sup' H (λ a, f a b) := finset.comp_sup'_eq_sup'_comp H (λ f : C(β, γ), f b) (λ i j, rfl) @[simp, norm_cast] lemma sup'_coe {ι : Type*} {s : finset ι} (H : s.nonempty) (f : ι → C(β, γ)) : ((s.sup' H f : C(β, γ)) : ι → β) = s.sup' H (λ a, (f a : β → γ)) := by { ext, simp [sup'_apply], } end sup' section inf' variables [linear_order γ] [order_closed_topology γ] lemma 
inf'_apply {ι : Type*} {s : finset ι} (H : s.nonempty) (f : ι → C(β, γ)) (b : β) : s.inf' H f b = s.inf' H (λ a, f a b) := @sup'_apply _ (order_dual γ) _ _ _ _ _ _ H f b @[simp, norm_cast] lemma inf'_coe {ι : Type*} {s : finset ι} (H : s.nonempty) (f : ι → C(β, γ)) : ((s.inf' H f : C(β, γ)) : ι → β) = s.inf' H (λ a, (f a : β → γ)) := @sup'_coe _ (order_dual γ) _ _ _ _ _ _ H f end inf' end lattice section extend variables [linear_order α] [order_topology α] {a b : α} (h : a ≤ b) /-- Extend a continuous function `f : C(set.Icc a b, β)` to a function `f : C(α, β)`. -/ def Icc_extend (f : C(set.Icc a b, β)) : C(α, β) := ⟨set.Icc_extend h f⟩ @[simp] lemma coe_Icc_extend (f : C(set.Icc a b, β)) : ((Icc_extend h f : C(α, β)) : α → β) = set.Icc_extend h f := rfl end extend end continuous_map
If you live in Farnam and need to find a fast and reliable internet service provider that is trustworthy, then you will be pleased to know that HughesNet has a variety of exciting options available right here in Farnam! While there may be a lot of different companies that can provide people in Farnam, Nebraska and the neighboring communities with internet service, none of them can compare with the level of service, options, and reliability that HughesNet offers. HughesNet brings cutting edge satellite technology to residents of Farnam, making DSL and cable services an inferior choice in Farnam, NE.With high capacity satellites that are now equipped with Gen 4 SmartTechnologies, HughesNet has the ability to provide residents of Nebraska with satellite internet that is faster than ever. HughesNet has a stellar reputation for quality, reliability, and service, it’s no wonder HughesNet is the #1 satellite internet service provider in the Unites States. Nebraska has tens of thousands of residents in rural areas that rely on the internet for entertainment in the home as well as for business. Traditionally, cable and DSL companies have not offered high speed internet connections to rural residents simply because the infrastructure was not in place. Now, people throughout Nebraska, including Farnam, that live in rural areas and need high speed internet that is fast and reliable have an option with HughesNet as their satellite internet provider. HughesNet has established themselves as being one of the premier internet service providers in the area of Farnam, offering these services to those who live in areas that would ordinarily have very limited internet service provider options. More and more, folks who live in rural areas and even those who live in more populated town centers, are choosing satellite internet over other providers. HughesNet satellite internet provides Farnam residents flexibility, and the ability to receive high speed internet service no matter how far out in the state of Nebraska they may live. HughesNet is faster, stronger and more reliable. Advances in internet technology have rendered DSL and cable internet virtually obsolete. Though they were the go-to choice in the past, their infrastructure and technology have limited them so they can no longer offer the same lightning fast speeds and reliability to residents of Nebraska that HughesNet satellite internet can provide. The Gen4 technology that HughesNet uses to provide their millions of subscribers with high speed internet service including those in rural Nebraska is sophisticated and extremely reliable, even in Farnam, so downtime is virtually eliminated, giving customers dependability and value. HughesNet offers several affordable plans and packages that include high speed internet service, home phone service, and more throughout Nebraska. This allows HughesNet customers to save time and money while still getting exactly what they need online for your home, your business, or both! Anyone who likes to play video games online, stream movies, view photos, or enjoy sites like Facebook, Twitter, or Pinterest will need to choose an internet service provider that has reliable high speed. Popular sites require fast connections that can load images, text, and sound quickly allowing you to engage in real time. Nothing is more frustrating than waiting on the video to catch up with the sound, or missing important parts of the conversation when you video chat and there is a glitch. 
With HughesNet’s Gen4 technology, you can experience lightning fast internet and stay connected all the time. Never miss a moment of your favorite game, movie, or conversation due to buffering or glitches. HughesNet offers several different internet deals in Farnam, Nebraska so there are many options for everyone to choose from for your home or for your business in the Henry County area. As a company, HughesNet understands that everyone has different needs or demands from when it comes to their internet in Farnam, NE. That is why HughesNet of Nebraska gives customers a choice as to which data level and price works alongside their needs. The various affordable plans and packages that HughesNet has to choose from are competitively priced and full of features so that customers like you have the ability to enjoy all that high speed internet offers while getting the best price in Farnam, and other rural areas of Henry County, Nebraska. There are many other DSL and cable companies in Farnam, NEand the surrounding communities however, none of them can offer the same level of super fast internet service at an amazing price, as HughesNet. With features like unlimited data, 24/7 technical support, and free standard installation, not to mention the advanced Gen4 technology, it’s no wonder that HughesNet is the #1 satellite internet provider in America. When searching for the perfect home internet service provider, customers in Farnam, NE are looking for reliability, plan and package options, high speed, and of course, cheap prices. Now, Farnam residents can have the best of both worlds when it comes to high speed satellite internet and budget friendly plans. HughesNet Gen4 brings the speed you need to search the internet, stream media and play games. Watch movies with no buffering, play games with no lag time, and update social media instantly with the lightning fast speed of Gen4. Look no further than HughesNet in Farnam, NE for dependable high speed internet service. Living in rural Henry County, NE has gotten a little easier with the expansion of satellite internet technology. As long as you have a clear view of the southern sky and a computer, you are able to have the best rated satellite internet in the U.S. Why wait? Call today and get connected with HughesNet in Farnam, NE. In Farnam, Nebraska and beyond, satellite internet has become the premier option for families and businesses who need to be connected. Virtually every aspect of your life in NE is touched by the internet…from business to school, shopping and entertaining, residents of Farnam are finding it increasingly necessary to have a reliable, high speed internet connection. Thankfully, HughesNet Gen 4 has come to the rescue! Satellite internet has become incredibly popular in recent years, primarily because the technology has surpassed that of DSL, cable, and dialup in Farnam. HughesNet uses only state of the art technology to provide its customers with reliable internet service, and with millions of customers all over the country they are quickly becoming America’s favorite ISP. Rural internet access is one of HughesNet’s top priorities, and this is reflected in the number of cities throughout the state of Nebraska as well as the United States they have begun to cover. If you are a resident of Farnam and are interested in HughesNet’s internet services, simply log on to our site and search your zipcode. 
From there, you will be able to determine which coverage area within Farnam or Henry County where you live, then a qualified professional will help you get connected! Why Choose high speed internet HughesNet in Farnam Nebraska? Blazing fast wireless internet speeds of up to 15 Mbps in Farnam, which allows you to download music in a flash, watch videos without buffering, check your favorite media-rich websites, and so much more. If you're used to dial-up and DSL, HughesNet's internet speeds are up to 100x faster than dial-up, and up to 12x faster than DSL. HughesNet, makes this speed a reality along with affordable plans in Farnam that are flexible for the needs of your home or business. Some users may not use as much data as others, and that's why HughesNet offers different packages that include data allotments. Get as much, or as little, data as you need, including promotional periods that can double your allotment, “free” download periods that don't count toward your cap, and more, all at very affordable prices and easy installation anywhere! If your Farnam home or business is in a clear view of the sky, then you can connect to our satellite communications system, you can also add phone or tv service. Unlike other broadband internet solutions, HughesNet's satellite-powered network beams information directly to and from your home, wirelessly and fast anywhere in Farnam, NE and beyond. That means that you won't have to worry about geographic restrictions that used to stand between your home and great internet connectivity. Outstanding customer service for Farnam, Nebraska, including technicians that can quickly handle your installation and maintenance whenever you need it. By signing up now, you could have your satellite internet up and running in as little as two days after your order. Our team of customer service representatives and technicians are here to help.
# Import dataset
dataframe = read.csv('BikeData.csv')

# Get all the daily cyclists
daily_cyclists = subset(dataframe, cyc_freq == 'Daily')

# Count the number of daily cyclists
nrow(daily_cyclists)
# 47

# Count the number of male and female daily cyclists
table(daily_cyclists$gender)
# M: 38
# F: 9

# Mean age of daily cyclists
colMeans(daily_cyclists['age'])
# Age: 33.65

# Mean age of daily female cyclists
colMeans(subset(daily_cyclists, gender == 'F')['age'])

# Mean age of daily male cyclists
colMeans(subset(daily_cyclists, gender == 'M')['age'])
\section{Function Contracts} \label{sect:functions} Next we turn to the specification of functions. We'll take the example from the previous section, and pull the computation of the minimum of two numbers out into a separate function: \vccInput[linerange={begin-}]{c/3.1.min2.c} (The listing above presents both the source code and the output of VCC, typeset in different fonts, and the actual file name of the example is replaced with \vcc{/*`testcase`*/}.) VCC failed to prove our assertion, even though it's easy to see that it always holds. This is because verification in VCC is \Def{modular}: VCC doesn't look inside the body of a function (such as the definition of \vcc{min()}) when understanding the effect of a call to the function (such as the call from \vcc{main()}); all it knows about the effect of calling \vcc{min()} is that the call satisfies the specification of \vcc{min()}. Since the correctness of \vcc{main()} clearly depends on what \vcc{min()} does, we need to specify \vcc{min()} in order to verify \vcc{main()}. The specification of a function is sometimes called its \Def{contract}, because it gives obligations on both the function and its callers. It is provided by four types of annotations: \begin{itemize} \item A requirement on the caller (sometimes called a \Def{precondition} of the function) takes the form \vcc{_(requires E)}, where \vcc{E} is an expression. It says that callers of the function promise that \vcc{E} will hold on function entry. \item A requirement on the function (sometimes called a \Def{postcondition} of the function) takes the form \vcc{_(ensures E)}, where \vcc{E} is an expression. It says that the function promises that \vcc{E} holds just before control exits the function. \item The third type of contract annotation, a \Def{writes clause}, is described in the next section. In this example, the lack of writes clauses says that \vcc{min()} has no side effects that are visible to its caller. \item The last type of contract annotation, a \Def{termination clause}, is described in \secref{termination}. For now, we won't bother proving that our functions terminate. \end{itemize} For example, we can provide a suitable specification for \vcc{min()} as follows: \vccInput[linerange={min-endmin,out-}]{c/3.2.min3.c} \noindent (Note that the specification of the function comes after the header and before the function body; you can also put specifications on function declarations (\eg in header files).) The precondition \vcc{_(requires \true)} of \vcc{min()} doesn't really say anything (since \vcc{\true} holds in every state), but is included to emphasize that the function can be called from any state and with arbitrary parameter values. The postcondition states that the value returned from \vcc{min()} is no bigger than either of the inputs. Note that \vcc{\true} and \vcc{\result} are spelled with a backslash to avoid name clashes with C identifiers.% \footnote{ All VCC keywords start with a backslash; this contrasts with annotation tags (like \vcc{requires}), which are only used at the beginning of an annotation and therefore cannot be confused with C identifiers (and thus you are still free to have, \eg a function called \lstinline{requires} or \lstinline{assert}).} VCC uses function specifications as follows.
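Before walking through how VCC uses these specifications, here is a minimal hedged sketch of an annotated \vcc{min()} together with a caller, in the spirit of the listing referenced above (which is pulled in from an external file and not reproduced here); the parameter names, the body, and the caller are illustrative assumptions rather than the tutorial's own code.
\begin{VCC}
#include <vcc.h>

// Hypothetical sketch: a contract in the requires/ensures style described above.
int min(int a, int b)
  _(requires \true)
  _(ensures \result <= a && \result <= b)
{
  return (a <= b) ? a : b;
}

int main(void)
{
  int x = min(3, 7);
  _(assert x <= 3)   // provable from the postcondition of min() alone
  return 0;
}
\end{VCC}
The point of the caller is that its assertion can be discharged from the postcondition of \vcc{min()} alone, without VCC ever looking at the body of \vcc{min()}.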
When verifying the body of a function, VCC implicitly assumes each precondition of the function on function entry, and implicitly asserts each postcondition of the function (with \vcc{\result} bound to the return value and each parameter bound to its value on function entry) just before the function returns. For every call to the function, VCC replaces the call with an assertion of the preconditions of the function, sets the return value to an arbitrary value, and finally assumes each postcondition of the function. For example, VCC translates the program above roughly as follows: \vccInput[linerange={begin-end}]{c/3.3.min_assert.c} Note that the assumptions above are ``harmless'', that is, in a fully verified program they will never be violated, as each follows from the assertion that precedes it in an execution% \footnote{ A more detailed explanation of why this translation is sound is given in \secref{soundness}. }% . For example, the assumption generated by a precondition could fail only if the assertion generated from that same precondition fails before it. \begin{note} \notehd{Why modular verification?} Modular verification brings several benefits. First, it allows verification to more easily scale to large programs. Second, by providing a precise interface between caller and callee, it allows you to modify the implementation of a function like \vcc{min()} without having to worry about breaking the verifications of functions that call it (as long as you don't change the specification of \vcc{min()}). This is especially important because these callers normally aren't in scope, and the person maintaining \vcc{min()} might not even know about them (e.g., if \vcc{min()} is in a library). Third, you can verify a function like \vcc{main()} even if the implementation of \vcc{min()} is unavailable (\eg if it hasn't been written yet). \end{note} \subsection*{Exercises} \begin{enumerate} \item What is the effect of giving a function the specification \vcc{_(requires \false)}? How does it affect verification of the function itself? What about its callers? Can you think of a good reason to use such a specification? \item Can you see anything wrong with the above specification of \vcc{min()}? Can you give a simpler implementation than the one presented? Is this specification strong enough to be useful? If not, how might it be strengthened to make it more useful? \item Specify a function that returns the (\vcc{int}) square root of its (\vcc{int}) argument. (You can try writing an implementation for the function, but won't be able to verify it until you've learned about loop invariants.) \item Can you think of useful functions in a real program that might legitimately guarantee only the trivial postcondition \vcc{_(ensures \true)}? \end{enumerate} \subsection*{Solutions} \begin{enumerate} \item Any function with a spec that includes \vcc{_(requires \false)} should verify. However, a call to such a function will only verify if the call itself is dead code (and VCC can prove it is dead code). Putting \vcc{_(requires \false)} on a function is one way to document that nothing has been proven about it, and that it should not be called. \item The spec guarantees that \vcc{min} returns a result that is small enough, but nothing prevents it from always returning \vcc{INT_MIN}. This might be strong enough for some applications, but for most, you probably want the additional postcondition \vcc{_(ensures \result == a || \result == b)}.
\item \begin{VCC} int sqrt(int x) _(requires x >= 0) _(ensures \result * \result <= x && (\result + 1) * (\result + 1) > x) ; \end{VCC} \item Many important system functions have no nontrivial guaranteed postcondition, save those that come from the omission of writes clauses. For example, a system call that tries to collect garbage might very well have an empty specification. \end{enumerate} \subsection{Reading and Writing Memory} \label{sect:writes} When programming in C, it is important to distinguish two kinds of memory access. \Def{Sequential} access, which is the default, is appropriate when interference from other threads (or the outside world) is not an issue, \eg when accessing unshared memory. Sequential accesses can be safely optimized by the compiler by breaking them up into multiple operations, caching reads, reordering operations, and so on. \Def{Atomic} access is required when the access might race with other threads, \ie a write to memory that is concurrently read or written, or a read of memory that is concurrently written. Atomic access is typically indicated in C by accessing memory through a volatile type (or through atomic compiler intrinsics). We consider only sequential access for now; we consider atomic access in \secref{concurrency}. To access a memory object pointed to by a pointer \vcc{p}, \vcc{p} must point to a valid chunk of memory% \footnote{VCC actually enforces a stronger condition, that the memory is ``typed'' according to \vcc{p}. }% . (For example, on typical hardware, its virtual address must be suitably aligned, must be mapped to existing physical memory, and so on.) In addition, to safely access memory sequentially, the memory must not be concurrently written by other threads (including hardware and devices). Most of the time\footnote{ It is also possible to read memory sequentially if it is a nonvolatile field of an object that is known to be closed, even if it is not owned by the thread; this allows multiple threads to sequentially access shared read-only memory. }, this is because the memory object is ``part of'' something that is ``owned'' by the thread (concepts that will be discussed later); we express this with the predicate \vcc{\thread_local(p)}\footnote{Note that thread locality only makes sense in the context of a particular thread, and so cannot appear, for example, in type invariants.}. VCC asserts this before any sequential memory access to (the memory pointed to by) \vcc{p}. To write sequentially through \vcc{p}, you need to know \vcc{\thread_local(p)}, and that no other thread is trying to read (sequentially or concurrently) through \vcc{p} at the same time\footnote{ You also need to know that no object invariants depend on \vcc{*p}; this is why object invariants are in effect only for closed objects, and only (parts of) open objects are mutable. }. We write this as \vcc{\mutable(p)}. Like thread-locality, mutability makes sense only in the context of a particular thread. %%Writing via pointers other than \vcc{p} cannot make \vcc{p} non-mutable. \begin{note} If VCC doesn't know why an object is thread local, then it has a hard time proving that the object stays thread local after an operation with side effects (\eg a function call). Thus, in preconditions you will sometimes want to use \vcc{\mutable(p)} instead of \vcc{\thread_local(p)}. The precise definitions of mutability and thread locality are given in \secref{ownership}, where we also describe another form of guaranteeing thread locality through so-called ownership domains.
\end{note} The \vcc{NULL} pointer, pointers outside bounds of arrays, the memory protected by the operating system, or outside the address space are never thread local (and thus also never mutable nor writable). There is one further restriction on sequential writes, motivated by the desire to limit the possible side effects of a function call to a specific set of mutable objects. We could do this by adding a postcondition that all other parts of the state are unmodified, but VCC provides some sugar to make specification (and reasoning) about such properties more convenient and efficient. The idea is that when you call a function, you give it permission to write certain objects, but not others; \vcc{\writable(p)} expresses the condition that the function has the right to write to \vcc{p}. Thus, when writing through \vcc{p}, VCC asserts \vcc{\mutable(p) && \writable(p)}. While mutability is a thread-level concept, writability is a property of a particular instance of an executing function. (That is, just because something is writable for you doesn't mean it will be writable for a function you call.) Therefore, you can't express that a function needs ``permission'' to write \vcc{p} by \vcc{_(requires \writable(p))}, because preconditions are evaluated in the context of the caller. Instead, you specify that a function needs writability of \vcc{p} at function entry with the annotation \vcc{_(writes p)}, called a ``writes clause''. When you call a function, VCC assumes that all of the objects listed in the writes clauses are writable on function entry. Moreover, if an object becomes mutable (for a thread) after entry to a function call, it is considered writable within that call (as long as it remains mutable). %Moved this up -E %Access to thread local memory will never crash the program. %The memory that is not thread-local is covered in %\secref{concurrency}. Let's have a look at an example: \vccInput[linerange={begin-}]{c/3.4.rw.c} \noindent The function \vcc{write_wrong} fails because \vcc{p} is only mutable, and not writable. In \vcc{read_wrong} VCC complains that it does not know anything about \vcc{p} (maybe it's \vcc{NULL}, who knows), in particular it doesn't know it's thread-local. \vcc{read2} is fine because \vcc{\mutable} is stronger than \vcc{\thread_local}. Finally, in \vcc{test_them} the first three assertions succeed because if something is not listed in the writes clause of the called function it cannot change. The last assertion fails, because \vcc{write()} listed \vcc{&a} in its writes clause. %\noindent Without the \vcc{\thread_local()} annotation you would %get the following: %\vccInput[linerange={begin-}]{c/01_read_wrong.c} % the concept of validity (typed pointers) is gone from VCC3 --MM %For each memory access within a program, VCC checks that the access is accessing a %\Def{valid} memory object. Validity implies that the object address %points to memory that is actually in the address space of the program %(i.e., it has been allocated, either on the stack or on the heap, and %has not been freed). (Validity in VCC additionally requires that the %access is appropriately typed; this aspect is %described in more detail in \secref{type-safety}). Intuitively, the clause \vcc{_(writes p, q)} says that, of the memory objects that are thread-local to the caller before the call, the function is going to modify only the object pointed to by \vcc{p} and the object pointed to by \vcc{q}. %\todo{I think we should get rid of this footnote. 
For writes claiuses, the ownership domains %are logically irrelevant; what matters is when the object became mutable. %\footnote{ %And their ``ownership domains'', %but until \secref{ownership} we consider these to be empty. %} In other words, it is roughly equivalent to a postcondition that ensures that all other objects thread-local to the caller prior to the call remain unchanged. A function can have multiple writes clauses, and implicitly combines them into a single set. If a function spec contains no writes clauses, it is equivalent to specifying a writes clause with empty set of pointers. % Do we have essentially the same paragraph later? -E %More precisely, an object is \vcc{\writable} if it is \vcc{\mutable} and %it is either listed in a writes clause of the function, %or it became \vcc{\mutable} sometime after the function was entered; the %latter condition guarantees that either \vcc{p} was listed in the %writes clause or was not thread-local in the caller when the call to %the function was made. %In particular, formal function parameters and local variables are %writable as long as they are in scope and have not been explicitly %wrapped (\secref{invariants}) or reinterpreted to a %different type (\secref{reint}). VCC asserts %\vcc{\writable(p)} on each attempt to write to \vcc{*p}, as well as on %each call to a function that lists \vcc{p} in a writes clause. Here is a simple example of a function that visibly reads and writes memory; it simply copies data from one location to another. \vccInput[linerange={begin-}]{c/3.5.copy1.c} In the postcondition the expression \vcc{\old(E)} returns the value the expression \vcc{E} had on function entry. Thus, our postcondition states that the new value of \vcc{*to} equals the value \vcc{*from} had on call entry. VCC translates the function call \vcc{copy(&x,&y)} approximately as follows: \begin{VCC} _(assert \thread_local(&x)) _(assert \mutable(&y)) // record the value of x int _old_x = x; // havoc the written variables havoc(y); // assume the postcondition _(assume y == _old_x) \end{VCC} %% Shouldn't \array_range be allowed for arrays of non-primitives also? % Similar to \vcc{\thread_local_array}, %\vcc{\mutable_array(ar,sz)} is defined as %\vcc{\forall unsigned i; i < sz ==> \mutable(&ar[i])}. % %The expression \vcc{\thread_local_array(ar, sz)} is %syntactic sugar for \vcc{\forall unsigned i; i < sz ==> \thread_local(&ar[i])}. % %Programs that manipulate arrays often write to multiple array %locations. Writes clauses actually allow sets of pointers, rather than %individual pointers. We'll introduce sets in full generality later, but %note one special case: the expression \vcc{\array_range(ar, len)} %denotes the set of pointers \vcc@{&ar[0], &ar[1], ..., &ar[len-1]}@. %Thus, a writes clause of the form %\vcc{_(writes \array_range(ar,len))} %allows writing to all elements of the array. \subsubsection{Local Variables} Unlike most block-structured languages, C allows you to take the addresses of local variables (with the \vcc{&} operator). If you take the address of a local, nothing prevents you from storing that address in a data structure, and trying to dereference the address after the lifetime of the variable has ended. (The result is not pretty.) Even if you are careful about its lifetime, once you take the address of a variable, you have to worry that writing through some seemingly unrelated pointer might change the value of the variable, which is a pain. 
Because of this, VCC distinguishes between local variables whose addresses are never taken and those whose addresses are taken; the former are called \Def{purely local} variables. A purely local variable is much, much easier to reason about; you know that its value can be changed only by an update through its name. (In particular, it cannot be modified by function calls, by assignments to other variables, or by assignments through pointers.) Purely local variables are always considered thread-local (so there is no thread-locality check when reading them) and writable (so you never have to mention them in writes clauses of loops or blocks). Also, if you have a loop that doesn't modify a purely local variable in scope, VCC will automatically infer that the value of that variable is not changed in the loop. So you should definitely keep variables purely local whenever possible. A local variable that is not purely local is treated as if it was allocated on the heap when its lifetime starts (but without the possibility of allocation failure), and is freed when the lifetime ends. The treatment of pure locality sometimes results in the strange phenomenon that changing some code near the end of a function body can cause verification of something earlier in the function body to fail. This is because if you take the address of a local near the bottom, it is treated as impure for the whole function body. The simplest workaround in such cases is just to introduce a new local for the bottom part of the function\footnote{This won't affect the final binary produced by a decent optimizing compiler.}. \subsection{Arrays} \label{sect:arrays} Array accesses are a kind of pointer access. Thus, before allowing you to read an element of an array VCC checks if it's thread-local. Usually you want to specify that all elements of an array are thread-local, which is done using the expression \vcc{\thread_local_array(ar, sz)}. It is essentially syntactic sugar for \vcc{\forall unsigned i; i < sz ==> \thread_local(&ar[i])}. The annotation \vcc{\mutable_array()} is analogous. To specify that an array is writable use: \begin{VCC} _(writes \array_range(ar, sz)) \end{VCC} which is roughly syntactic sugar for: \begin{VCC} _(writes &ar[0], &ar[1], ..., &ar[sz-1]) \end{VCC} For example, the function below is a recursive implementation of the C standard library \vcc{memcpy()} function: \vccInput[linerange={begin-end}]{c/3.6.copy_array.c} It requires that array \vcc{src} is thread-local, \vcc{dst} is writable, and they do not overlap. It ensures that, at all indices, \vcc{dst} has the value of \vcc{src}. The next section presents a more conventional implementation using a loop. \subsection{Termination} \label{sect:termination} A function terminates if it is guaranteed to return to its caller. You can specify termination for simple functions (like the ones we've seen so far) by simply adding to the specification \vcc{_(decreases 0)}; this will do the job as long as your functions are not recursive. For functions that are recursive (or which look potentially recursive to VCC, because of potential callbacks from functions whose bodies are hidden from VCC), the termination clause of a function gives a measure that decreases for each call that might start a chain of calls back to the function. For example, to verify the termination of \vcc{my_memcpy} above, you need only add to its specification the additional annotation \vcc{_(decreases len)}. (A hedged sketch of such a fully annotated routine is shown just below.)
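Since the \vcc{my_memcpy} listing is likewise included from an external file, the following rough sketch shows how its contract could be assembled from the annotations introduced in this section; the way the non-overlap requirement is phrased (plain pointer arithmetic) and the recursive body are assumptions of this sketch, not necessarily what the tutorial's file contains.
\begin{VCC}
#include <vcc.h>

void my_memcpy(unsigned char *dst, unsigned char *src, unsigned len)
  _(requires \thread_local_array(src, len))
  _(requires src + len <= dst || dst + len <= src)  // one way to state that the arrays do not overlap
  _(writes \array_range(dst, len))
  _(ensures \forall unsigned i; i < len ==> dst[i] == \old(src[i]))
  _(decreases len)
{
  if (len > 0) {
    dst[len - 1] = src[len - 1];
    my_memcpy(dst, src, len - 1);  // recursive call on a strictly smaller measure
  }
}
\end{VCC}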
This annotation defines a ``measure'' on calls to \vcc{my_memcpy} (namely, the value passed as the last parameter). VCC checks termination by checking that (1) all loops in the body terminate (\secref{loopTermination}), and (2) for every function call within the body of \vcc{my_memcpy} that is potentially the first of a chain of calls leading to a call back to \vcc{my_memcpy}, the called function has a \vcc{_(decreases)} specification and the measure of the call to that function is strictly less than the measure of the calling function instance. It is usually a good idea to prove termination for sequential code when you can\footnote{ You should also consider doing it for your concurrent code, but here VCC is much more limited in its capabilities. The reason for this is that proving termination for a racy function (e.g., one that has to compete for locks) typically depends on fairness assumptions (e.g., that a function trying to grab a spinlock will eventually get lucky and get it, if the spinlock is infinitely often free) and/or global termination measures (e.g., to make sure that other threads will release spinlocks once they acquire them). VCC does not currently support either of these.}. More details on termination measures can be found in the VCC manual. \subsection{Pure functions} \label{sect:pureFunctions} A \Def{pure function} is one that has no side effects on the program state. In VCC, pure functions are not allowed to allocate memory, and can write only to local variables. Only pure functions can be called within VCC annotations; such functions are required to terminate\footnote{ This is to guarantee that there is indeed a mathematical function satisfying the specification of the function. }. The function \vcc{min()} above is an example of a function that can be declared to be pure; this is done by adding the modifier \vcc{_(pure)} to the beginning of the function specification, \eg \begin{VCC} _(pure) min(int x, int y) ... \end{VCC} Being pure is a stronger condition than simply having an empty writes clause. This is because a writes clause has only to mention those side effects that might cause the caller to lose information (\ie knowledge) about the state, and as we have seen, VCC takes advantage of the kind of information callers maintain to limit the kinds of side effects that have to be reported. A pure function that you don't want to be executed can be defined using the \vcc{_(def)} tag, which is essentially a pure ghost function (one that can be used only in specifications) that is inlined, and uses the following streamlined syntax: \vccInput[linerange={beginsp-endsp}]{c/3.7.issorted.c} \noindent A partial spec for a sorting routine could look like the following:% \footnote{We will take care of the input being a permutation of the output in \secref{ghosts}.} \vccInput[linerange={beginso-endso}]{c/3.7.issorted.c} \subsection{Contracts on Blocks} Sometimes, a large function will contain an inner block that implements some simple functionality, but you don't want to refactor it into a separate function (\eg because you don't want to bother with having to pass in a bunch of parameters, or because you want to verify code without rewriting it). VCC lets you conduct your verification as if you had done so, by putting a function-like specification on the block. This is done by simply writing function specifications preceding the block, \eg \begin{VCC} ... x = 5; _(requires x == 5) _(writes &x) _(ensures x == 6) { x++; } ...
\end{VCC} VCC translates this by (internally) refactoring the block into a function, the parameters of which are the variables from the surrounding scope that are mentioned within the block (or the block specifications), so blocks with contracts cannot have statements that transfer control outside of the block. The advantages of this translation is that within the block, VCC can ignore what it knows about the preceding context, and following the block, VCC can ``forget'' what it knew inside the block (other than what escapes through the \vcc{ensures} clauses); in each case, this results in less distracting irrelevant detail for the theorem prover. Sometimes, you don't care about information flowing into the block, but only care about the information flowing out of the block. In this case, you can use the precondition \vcc{_(requires \full_context())}, which tells VCC to verify the block using all of the information about what came before, but using the postconditions and writes clauses to hide information about what went on inside the block to the code that follows the block. %% \vccInput[linerange={swap-partition}]{c/04_partition.c} %% \vccInput[linerange={foo-}]{c/01_swap1.c} %% Because global variables (like \vcc{z}) might be visible to callers %% of \vcc{copy()}, \vcc{copy()} needs to report that they might change. %% Note that the writes clause lists pointers to memory locations, not lvalues %% (\ie \vcc{_(writes &x, &y, &z)} and not \vcc{_(writes x, y, z)}). %% Note also that because \vcc{&z} aliases neither \vcc{&x} nor \vcc{&y}, VCC can %% deduce that \vcc{swap(&x, &y)} does not change \vcc{z}. %% Had we left out the writes clause from the specification of %% \vcc{swap()}, VCC would report several errors: %% \vccInput[linerange={out-}]{c/01_swap2.c} %% \noindent %% Whenever a memory object is read, VCC asserts that it is %% thread-local; whenever a memory object is written or is %% mentioned in the writes clause of a function call, VCC asserts that it %% is writable. Try removing each of the function annotations in %% this example (one at a time) to see what error messages you get. %% For example, the assert/assume translation of \vcc{swap()} is% %% \footnote{ %% VCC does not generate the read and writes checks for the local variable %% \vcc{tmp}. Because \vcc{tmp} is a local that never has its address %% taken, it can be thought of as remaining in a register, where it is %% guaranteed to remain writable. %% }: %% \vccInput[linerange={swap-}]{c/01_swap3.c} %% \noindent %% As we can see, one effect of \vcc{writes p, q} is the implicit %% precondition \vcc{requires \writable(p) && \writable(q)}. Such precondition %% needs to be checked in \vcc{foo()}, at the place where it calls \vcc{swap()}. %% In other words, the called function can write at most what the caller can write. %% In particular if we forget to list \vcc{&x} in the writes %% clause of \vcc{foo()} we would get an error when it tries to call a %% function that possibly writes \vcc{&x}: %% \vccInput[linerange={out-}]{c/01_swap4.c} %% You might wonder why cannot we just have %% \vcc{requires \writable(p)} and instead have the specialized writes clause. %% The reason is that the writes clause also specifies that nothing %% outside of the writes clause will be changed. %% This is why we can prove that \vcc{y} is still \vcc{42} after the call %% to \vcc{boundedIncr(&x)}. 
%A predicate \vcc{\mutable(p)} states that the object pointed to by \vcc{p} %is allocated, ``belongs'' to the current thread, and is in a ``phase of life'' %that allows for modification. %We will get into details all of these later. %For now we just need to know that in order to be able to write to \vcc{*p} %one needs to know that \vcc{p} was listed in the writes clause \emph{and} \vcc{\mutable(p)}. %For example, if we remove the \vcc{_(writes ...)} clause from the %\vcc{boundedIncr()} we get the following output:
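The commented-out material above refers to \vcc{swap()} and \vcc{boundedIncr()} examples whose listings are not part of this text. As a stand-in, here is a small hedged sketch of a function with a writes clause together with a caller that relies on the framing guarantee described in this section; the names, bodies, and bounds are illustrative assumptions.
\begin{VCC}
#include <vcc.h>

void boundedIncr(int *p)
  _(requires *p < 100)
  _(writes p)
  _(ensures *p == \old(*p) + 1)
{
  *p = *p + 1;
}

void test(void)
{
  int x = 0, y = 42;
  boundedIncr(&x);
  _(assert x == 1)    // follows from the postcondition of boundedIncr()
  _(assert y == 42)   // follows from framing: &y is not in the writes clause of boundedIncr()
}
\end{VCC}
Listing \vcc{p} in the writes clause is both what makes the assignment to \vcc{*p} legal inside \vcc{boundedIncr()} and what lets the caller conclude that nothing outside that set, such as \vcc{y}, has changed.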
[STATEMENT]
lemma orbit_eq:
  assumes "s \<in> S"
  shows "\<gamma>\<^sup>\<phi> s = {\<phi> t s| t. t \<in> T}"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. \<gamma>\<^sup>\<phi> s = {\<phi> t s |t. t \<in> T}
[PROOF STEP]
apply(unfold orbit_def, subst g_orbital_collapses)
[PROOF STATE]
proof (prove)
goal (5 subgoals):
 1. s \<in> S
 2. is_interval T
 3. T \<subseteq> T
 4. 0 \<in> T
 5. {\<phi> t s |t. t \<in> T \<and> (\<forall>\<tau>\<in>down T t. True)} = {\<phi> t s |t. t \<in> T}
[PROOF STEP]
by (simp_all add: assms init_time interval_time)
(* Copyright (C) 2021 Susi Lehtola This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. *) (* type: gga_exc *) (* prefix: gga_c_lypr_params *params; assert(p->params != NULL); params = (gga_c_lypr_params * )(p->params); *) $include "gga_c_lyp.mpl" lypr_eta := rr -> -2/(3*sqrt(Pi))*params_a_m2*params_a_omega * exp(-params_a_m2^2*params_a_omega^2*rr^2): lypr_t7 := (rr, z, xt, xs0, xs1) -> -rr * (1 - z^2)/4 * ( + 7/6*(xt^2 - lyp_aux6*(xs0^2*opz_pow_n(z,8/3) + xs1^2*opz_pow_n(-z,8/3))) + (1 + (1 + z)/6)*xs0^2*lyp_aux6*opz_pow_n( z, 8/3) + (1 + (1 - z)/6)*xs1^2*lyp_aux6*opz_pow_n(-z, 8/3) ): (* This functional is very similar to gga_c_lyp. One adds the two erfc and the extra term proportinal to eta *) f_lypr_rr := (rr, z, xt, xs0, xs1) -> params_a_a*( + erfc(params_a_m1*params_a_omega*rr)*lyp_t1(rr, z) + erfc(params_a_m2*params_a_omega*rr)*lyp_omega(rr)*( + lyp_t2(rr, z, xt) + lyp_t3(z) + lyp_t4(rr, z, xs0, xs1) + lyp_t5(rr, z, xs0, xs1) + lyp_t6(z, xs0, xs1) ) + lyp_omega(rr)*lypr_eta(rr)*lypr_t7(rr, z, xt, xs0, xs1) ): (* rr = rs/RS_FACTOR is equal to n_total(rs)^(-1/3) *) f_lypr := (rs, z, xt, xs0, xs1) -> f_lypr_rr(rs/RS_FACTOR, z, xt, xs0, xs1): f := (rs, z, xt, xs0, xs1) -> f_lypr(rs, z, xt, xs0, xs1):
#ifndef _SIMPLE_CFG_GRAMMAR_HPP_ #define _SIMPLE_CFG_GRAMMAR_HPP_ #include <boost/config/warning_disable.hpp> #include <boost/spirit/include/qi.hpp> #include "SimpleCFGStructure.hpp" namespace cfg { namespace qi = boost::spirit::qi; namespace standard_wide = boost::spirit::standard_wide; template <typename Iterator> struct simple_cfg_grammar : qi::grammar<Iterator, config_file(), standard_wide::space_type> { simple_cfg_grammar() : simple_cfg_grammar::base_type(file) { using qi::lexeme; using qi::eol; using qi::omit; using qi::raw; using qi::skip; using qi::lit; using standard_wide::char_; using standard_wide::alnum; using standard_wide::alpha; using standard_wide::space; using standard_wide::blank; comments %= lexeme[omit[+lit("//")] >> *(char_ - eol) >> eol] ; name %= lexeme[(alpha | char_('_')) >> *(alnum | char_('_'))] ; type %= raw[-standard_wide::string("unsigned") >> name >> -char_('*')] ; parameter %= type ; attribute_prototype %= type >> name >> lit("::") >> name ; function_prototype %= type >> name >> lit("::") >> name >> '(' >> -(parameter % ',') >> ')' ; identifier %= '[' >> (function_prototype | attribute_prototype | lexeme[+(alnum | char_('_') | char_('-'))]) >> ']' ; branch %= +char_('-') | name ; element %= branch >> *identifier ; path %= ((char_('.') | element) % '/') ; value %= raw[+(char_ - ';')] >> ';' ; text %= "${" >> lexeme[+(char_ - lit("}$"))] >> "}$" ; content %= (text | value) ; property %= path >> '=' >> content ; line %= (property | path) >> eol ; file %= *(skip(blank)[omit[comments] | line | eol]) ; } qi::rule<Iterator, config_file(), standard_wide::space_type> file; qi::rule<Iterator, config_line(), standard_wide::blank_type> line; qi::rule<Iterator, config_property(), standard_wide::blank_type> property; qi::rule<Iterator, config_path(), standard_wide::blank_type> path; qi::rule<Iterator, config_content(), standard_wide::blank_type> content; qi::rule<Iterator, config_value(), standard_wide::blank_type> value; qi::rule<Iterator, config_text(), standard_wide::blank_type> text; qi::rule<Iterator, config_element(), standard_wide::blank_type> element; qi::rule<Iterator, config_branch(), standard_wide::blank_type> branch; qi::rule<Iterator, config_identifier(),standard_wide::blank_type> identifier; qi::rule<Iterator, config_attribute(), standard_wide::blank_type> attribute_prototype; qi::rule<Iterator, config_function(), standard_wide::blank_type> function_prototype; qi::rule<Iterator, fct_parameter(), standard_wide::blank_type> parameter; qi::rule<Iterator, std::wstring(), standard_wide::blank_type> type; qi::rule<Iterator, std::wstring(), standard_wide::blank_type> name; qi::rule<Iterator, std::vector<std::wstring>(), standard_wide::blank_type> comments; }; } #endif // _SIMPLE_CFG_GRAMMAR_HPP_
# Gradient descent ## Simple linear regression In a previous notebook, we solved the problem of simple linear regression - finding a straight line that best fits a data set with two variables. In that case, we were able to find the exact solution. In this notebook, we'll use a common technique to approximate that solution. Why would we want to approximate a solution when we can easily find an exact solution? We don't - it's just that the technique we discuss here can also be used in situations where we can't find an exact solution or don't want to try, for whatever reason. The approximation technique is called gradient descent. ### Preliminaries ```python # numpy efficiently deals with numerical multi-dimensional arrays. import numpy as np # matplotlib is a plotting library, and pyplot is its easy-to-use module. import matplotlib.pyplot as pl # This just sets the default plot size to be bigger. pl.rcParams['figure.figsize'] = (16.0, 8.0) ``` ### Simple linear regression model In simple linear regression, we have some data points $(x_i, y_i)$, and we decide that they belong to a straight line with a little bit of error involved. Straight lines in two dimensions are of the form $y = mx + c$, and to fit a line to our data points we must find appropriate values for $m$ and $c$. Numpy has a function called `polyfit` that finds such values for us. ```python w = np.arange(1.0, 16.0, 1.0) d = 5.0 * w + 10.0 + np.random.normal(0.0, 5.0, w.size) m, c = np.polyfit(w, d, 1) print("Best fit is m = %f and c = %f" % (m, c)) # Plot the best fit line. pl.plot(w, d, 'k.', label='Original data') pl.plot(w, m * w + c, 'b-', label='Best fit: $%0.1f x + %0.1f$' % (m,c)) pl.legend() pl.show() ``` ### Gradient descent In gradient descent, we select a random guess of a parameter and iteratively improve that guess. For instance, we might pick $1.0$ as our initial guess for $m$ and then create a `for` loop to iteratively improve the value of $m$. The way we improve $m$ is to first take the partial derivative of our cost function with respect to $m$. ### Cost function Recall that our cost function for simple linear regression is: $$ Cost(m, c) = \sum_i (y_i - mx_i - c)^2 $$ ### Calculate the partial derivatives We calculate the partial derivative of $Cost$ with respect to $m$ while treating $c$ as a constant. Note that the $x_i$ and $y_i$ values are all just constants. We'll also calculate the partial derivative with respect to $c$ here. $$ \begin{align} Cost(m, c) &= \sum_i (y_i - mx_i - c)^2 \\[1cm] \frac{\partial Cost}{\partial m} &= \sum 2(y_i - m x_i -c)(-x_i) \\ &= -2 \sum x_i (y_i - m x_i -c) \\[0.5cm] \frac{\partial Cost}{\partial c} & = \sum 2(y_i - m x_i -c)(-1) \\ & = -2 \sum (y_i - m x_i -c) \\ \end{align} $$ ### Code the partial derivatives Once we've calculated the partial derivatives, we'll code them up in python. Here we create two functions, each taking four parameters. The first two parameters are arrays with our $x_i$ and $y_i$ data set values. The second two are our current guesses for $m$ and $c$. ```python def grad_m(x, y, m, c): return -2.0 * np.sum(x * (y - m * x - c)) ``` ```python def grad_c(x, y, m , c): return -2.0 * np.sum(y - m * x - c) ``` ### Iterate Now we can run our gradient descent algorithm. For $m$, we keep replacing its value with $m - \eta grad\_m(x, y, m, c)$ until it doesn't change. For $c$, we keep replacing its value with $c - \eta grad\_c(x, y, m, c)$ until it doesn't change. What is $\eta$? 
It is called the learning rate and we set it to a small value relative to the data points. You can see on each iteration, $m$ and $c$ are getting closer to their true values. ```python eta = 0.0001 m, c = 1.0, 1.0 delta = 0.0000001 mold, cold = m - 1.0, c - 1.0 i = 0 while abs(mold - m) > delta and abs(cold - c) > delta: mold, cold = m, c m = mold - eta * grad_m(w, d, mold, cold) c = cold - eta * grad_c(w, d, mold, cold) i = i + 1 if i % 1000 == 0: print("m: %20.16f c: %20.16f" % (m, c)) ``` m: 5.7149506777702488 c: 4.5518695120877757 m: 5.5629397868829846 c: 6.1183980049248472 m: 5.4852588608786546 c: 6.9189286986499470 m: 5.4455621902673004 c: 7.3280175818245992 m: 5.4252763139437015 c: 7.5370710455442040 m: 5.4149097826482908 c: 7.6439019874021357 m: 5.4096122559659570 c: 7.6984949605832087 m: 5.4069051027099357 c: 7.7263931768575951 m: 5.4055216875445105 c: 7.7406497821666767 m: 5.4048147318094859 c: 7.7479352226744274 m: 5.4044534617990276 c: 7.7516582438453998 m: 5.4042688448364773 c: 7.7535607899748742 ## Newton's method for square roots Newton's method for square roots is a method for approximating the square root of a number $x$. We begin with an initial guess $z_0$ of the square root - it doesn't have to be particularly good. We then apply the following calculation repeatedly, to calculate $z_1$, then $z_2$, and so on: $$ z_{i+1} = z_i - \frac{z_i^2 - x}{2z_i} $$ ### Coding the calculation We can create a function that calculates the next value of $z$ based on the current value of $z$ as follows. ```python def next_z(x, z): return z - (z**2 - x) / (2 * z) ``` ### Calculating the square root of $x$ Suppose we want to calculate the square root of $x$. We start with a random guess for the square root, $z_0$. We then apply the `next_z` function repeatedly until the value of $z$ stops changing. Let's create a function to do this. We'll include the next_z function inside the `newtsqrt` function to make it one all-inclusive package. ```python def newtsqrt(x): next_z = lambda x, z: z - (z**2 - x) / (2 * z) z = 2.0 n = next_z(x, z) while z != n: z, n = n, next_z(x, n) print(z) return z newtsqrt(11) ``` 3.75 3.341666666666667 3.316718620116376 3.3166247916826186 3.3166247903554 3.3166247903554 ### Comparison with the standard library We can compare our square root method return value to the value calculated by Python's `math` standard library package. It has a `sqrt` function. ```python import math math.sqrt(11) ``` 3.3166247903554 ### Being careful Due to the complexities of floating point numbers, the `nextsqrt` function could get into an infinite loop. For instance, calculating the square root of 10 gives an infinite loop. ```python # Uncommenting will result in infinite loop. # newtsqrt(10) ``` To counteract this problem, the condition of the loop is better written as: ```python abs(z - n) > 0.001 ``` ## Gradient descent for square roots Newton's method for square roots is efficient, but we can also use gradient descent to approximate the square root of a real number $x$. Here, we use the following cost function. $$ Cost(z \mid x) = (x - z^2)^2 $$ ### Example value Let's use it to calculate the square root of 20, i.e. $x = 20$. Then the cost function is: $$ Cost(z \mid x=20) = (20 - z^2)^2 $$ Our goal is to find the $z$ that minimises this. ### Plotting the cost function Let's plot the cost function. Given that we know the best $z$ will be between $4$ and $5$, we'll let $z$ range over 0 to 10. 
```python i = np.linspace(0.0, 10.0, 1000) j = (20.0 - i**2)**2 pl.plot(i, j, 'k-', label='$(20-z^2)^2$') pl.legend() pl.show() ``` ### The derivative Looks like there's a low point at about $4.5$. Let's take the derivative of the cost function with respect to $z$. $$ \begin{align} Cost(z) &= ( 20.0 - z^2 )^2 \\ \Rightarrow \frac{\partial Cost}{\partial z} &= 2(20.0 - z^2)(-2z) \\ &= 4z(z^2 - 20) \\ &= 4z^3 - 80z \end{align} $$ The derivative tells us what the slope of the tangent to the curve is at any point on the cost function. What does that mean? It means that if we pick a value of $z$, e.g. $8.0$, the derivative tells us that a line going through the point $(8.0, (20.0 - (8.0)^2)^2)$ with the slope $4(8.0)^3 - 80(8.0)$ perfectly touches the graph above. Let's plot that. When you simplify, the point $(8.0, (20.0 - (8.0)^2)^2)$ becomes $(8,1936)$. The slope is $4(8.0)^3 - 80(8.0)$ which when simplified becomes $1408$. So, the claim is that the line with slope $1408$ going through the point $(8,1936)$ touches the graph. To calculate the equation of the line, we'll use $(y - y_1) = m(x - x_1)$: $$ y - 1936 = 1408(x - 8) \\ \Rightarrow y = 1408x - 11264 + 1936 \\ \Rightarrow y = 1408x - 9328 $$ Let's plot that line and the cost function together. ```python i = np.linspace(0.0, 10.0, 1000) j = (20.0 - i**2)**2 k = 1408 * i - 9328 pl.plot(i, j, 'k-', label='$(20-z^2)^2$') pl.plot(i, k, 'b-', label='$1408z - 9328$') pl.legend() pl.show() ``` ### Why do we care about the slope? It's a bit hard to see, but the blue line is perfectly touching the curve. We care about this because the slope of the blue line tells us in which way to change $z$ in order to make the cost less. If we increase $z$ the cost goes up. If we decrease it the cost goes down. ### Gradient descent Let's use the gradient descent algorithm to calculate the best $z$. We'll start with the guess $z=8$, and then use the derivative to move $z$ ever so slightly in the direction that decreases the cost. By ever so slightly, we mean $0.001$ times the slope: $$ \begin{align} z_{i+1} &= z_i - \eta \frac{\partial Cost}{\partial z} \\ &= z_i - (0.001) (4 z_i^3 - 80 z_i) \end{align} $$ So, for our initial guess $z_0 = 8.0$ we get: $$ \begin{align} z_1 &= 8.0 - (0.001) (4 (8.0)^3 - 80 (8.0)) \\ &= 8.0 - 1.408 = 6.592 \end{align} $$ Let's code it up.
```python def next_z(z, x, eta=0.001): return z - eta * (4.0 * z**3 - 80 * z) def sqrt_grad_desc(x, z, verbose=False): while abs(z - next_z(z, x)) > 0.001: if verbose: print("Current: %14.8f\tNext: %14.8f" % (z, next_z(z, x))) z = next_z(z, x) return z ans =sqrt_grad_desc(20.0, 8.0, True) print("Square root:", ans, "\tSquared:", ans**2) ``` Current: 8.00000000 Next: 6.59200000 Current: 6.59200000 Next: 5.97355269 Current: 5.97355269 Next: 5.59881186 Current: 5.59881186 Next: 5.34469983 Current: 5.34469983 Next: 5.16157297 Current: 5.16157297 Next: 5.02444369 Current: 5.02444369 Next: 4.91903017 Current: 4.91903017 Next: 4.83645229 Current: 4.83645229 Next: 4.77084541 Current: 4.77084541 Next: 4.71815685 Current: 4.71815685 Next: 4.67548576 Current: 4.67548576 Next: 4.64069702 Current: 4.64069702 Next: 4.61218330 Current: 4.61218330 Next: 4.58871218 Current: 4.58871218 Next: 4.56932433 Current: 4.56932433 Next: 4.55326362 Current: 4.55326362 Next: 4.53992784 Current: 4.53992784 Next: 4.52883326 Current: 4.52883326 Next: 4.51958845 Current: 4.51958845 Next: 4.51187478 Current: 4.51187478 Next: 4.50543157 Current: 4.50543157 Next: 4.50004463 Current: 4.50004463 Next: 4.49553736 Current: 4.49553736 Next: 4.49176369 Current: 4.49176369 Next: 4.48860255 Current: 4.48860255 Next: 4.48595333 Current: 4.48595333 Next: 4.48373229 Current: 4.48373229 Next: 4.48186965 Current: 4.48186965 Next: 4.48030717 Current: 4.48030717 Next: 4.47899619 Current: 4.47899619 Next: 4.47789603 Square root: 4.477896027970839 Squared: 20.05155283731702 ### A question Let's try some other initial guesses: 4.0, 1.0 and -1.0. Can you explain the square root returned with -1.0? ```python print("With initial guess %6.2f: %10.6f" % (4.0, sqrt_grad_desc(20.0, 4.0, False))) print("With initial guess %6.2f: %10.6f" % (1.0, sqrt_grad_desc(20.0, 1.0, False))) print("With initial guess %6.2f: %10.6f" % (-1.0, sqrt_grad_desc(20.0, -1.0, False))) ``` With initial guess 4.00: 4.465951 With initial guess 1.00: 4.465963 With initial guess -1.00: -4.465963 ### End
function options = pswarmset(varargin) %PSWARMSET Create or alter the options for Optimization with PSwarm % % options = pswarmset('param1',value1,'param2',value2,...) creates an % PSWARM options structure with the parameters 'param' set to their % corresponding values in 'value'. Parameters not specified will be set to % the PSWARM default. % % options = pswarmset(oldopts,'param1',value1,...) creates a copy of the old % options 'oldopts' and then fills in (or writes over) the parameters % specified by 'param' and 'value'. % % options = pswarmset() creates an options structure with all fields set to % PSWARMSET defaults. % % pswarmset() prints a list of all possible fields and their function. % Copyright (C) 2012 Jonathan Currie (I2C2) % Print out possible values of properties. if ((nargin == 0) && (nargout == 0)) printfields(); return end %Names and Defaults Names = {'swarm_size';'vectorized';'mu';'nu';'iweight';'fweight';'delta';'idelta';'ddelta'}; Defaults = {42;0;0.5;0.5;0.9;0.4;Inf;2.0;0.5}; %Enter and check user args try options = opticheckset(Names,Defaults,@checkfield,varargin{:}); catch ME throw(ME); end function checkfield(field,value) %Check a field contains correct data type switch lower(field) %Scalar double case {'mu','nu','iweight','fweight','delta','idelta','ddelta'} err = opticheckval.checkScalarDbl(value,field); %Scalar 0/1 case 'vectorized' err = opticheckval.checkScalar01(value,field); %Non-zero scalar double case 'swarm_size' err = opticheckval.checkScalarIntGrtZ(value,field); otherwise err = sprintf('Unrecognized parameter name ''%s''.', field); end if(~isempty(err)), throw(err); end function printfields() %Print out fields with defaults fprintf(' swarm_size: [ Swarm Size {42} ] \n'); fprintf(' vectorized: [ Objective function is vectorized {0}, 1 ] \n'); fprintf(' mu: [ Cognitial Parameter {0.5} ] \n'); fprintf(' nu: [ Social Parameter {0.5} ] \n'); fprintf(' iweight: [ Initial Weight {0.9} ] \n'); fprintf(' fweight: [ Final Weight {0.4} ] \n'); fprintf(' delta: [ Initial Delta {Inf} ] \n'); fprintf(' idelta: [ Increase Delta {2.0} ] \n'); fprintf(' ddelta: [ Decrease Delta {0.5} ] \n'); fprintf('\n');
(* * Copyright 2014, General Dynamics C4 Systems * * SPDX-License-Identifier: GPL-2.0-only *) theory PSpaceStorable_H imports Structures_H KernelStateData_H "Lib.DataMap" begin context begin interpretation Arch . requalify_types arch_kernel_object_type requalify_consts archTypeOf end lemma UserData_singleton [simp]: "(v = UserData) = True" "(UserData = v) = True" by (cases v, simp)+ lemma UserDataDevice_singleton [simp]: "(v = UserDataDevice) = True" "(UserDataDevice = v) = True" by (cases v, simp)+ datatype kernel_object_type = EndpointT | NotificationT | CTET | TCBT | UserDataT | UserDataDeviceT | KernelDataT | ArchT arch_kernel_object_type primrec koTypeOf :: "kernel_object \<Rightarrow> kernel_object_type" where "koTypeOf (KOEndpoint e) = EndpointT" | "koTypeOf (KONotification e) = NotificationT" | "koTypeOf (KOCTE e) = CTET" | "koTypeOf (KOTCB e) = TCBT" | "koTypeOf (KOUserData) = UserDataT" | "koTypeOf (KOUserDataDevice) = UserDataDeviceT" | "koTypeOf (KOKernelData) = KernelDataT" | "koTypeOf (KOArch e) = ArchT (archTypeOf e)" definition typeError :: "unit list \<Rightarrow> kernel_object \<Rightarrow> 'a kernel" where "typeError t1 t2 \<equiv> fail" definition alignError :: "nat \<Rightarrow> 'a kernel" where "alignError n \<equiv> fail" definition alignCheck :: "machine_word \<Rightarrow> nat \<Rightarrow> unit kernel" where "alignCheck x n \<equiv> unless ((x && mask n) = 0) $ alignError n" definition magnitudeCheck :: "machine_word \<Rightarrow> machine_word option \<Rightarrow> nat \<Rightarrow> unit kernel" where "magnitudeCheck x y n \<equiv> case y of None \<Rightarrow> return () | Some z \<Rightarrow> when (z - x < 1 << n) fail" class pre_storable = fixes injectKO :: "'a \<Rightarrow> kernel_object" fixes projectKO_opt :: "kernel_object \<Rightarrow> 'a option" fixes koType :: "'a itself \<Rightarrow> kernel_object_type" assumes project_inject: "(projectKO_opt ko = Some v) = (injectKO v = ko)" assumes project_koType: "(\<exists>v. projectKO_opt ko = Some (v::'a)) = (koTypeOf ko = koType TYPE('a))" begin definition projectKO :: "kernel_object \<Rightarrow> 'a kernel" where "projectKO e \<equiv> case projectKO_opt e of None \<Rightarrow> fail | Some k \<Rightarrow> return k" definition objBits :: "'a \<Rightarrow> nat" where "objBits v \<equiv> objBitsKO (injectKO v)" definition loadObject_default :: "machine_word \<Rightarrow> machine_word \<Rightarrow> machine_word option \<Rightarrow> kernel_object \<Rightarrow> 'a kernel" where "loadObject_default ptr ptr' next obj \<equiv> do assert (ptr = ptr'); val \<leftarrow> projectKO obj; alignCheck ptr (objBits val); magnitudeCheck ptr next (objBits val); return val od" definition updateObject_default :: "'a \<Rightarrow> kernel_object \<Rightarrow> machine_word \<Rightarrow> machine_word \<Rightarrow> machine_word option \<Rightarrow> kernel_object kernel" where "updateObject_default val oldObj ptr ptr' next \<equiv> do assert (ptr = ptr'); (_ :: 'a) \<leftarrow> projectKO oldObj; alignCheck ptr (objBits val); magnitudeCheck ptr next (objBits val); return (injectKO val) od" end class pspace_storable = pre_storable + fixes makeObject :: 'a \<comment>\<open> `loadObject` is only used in the generic definition of `getObject`. It describes how to extract a value of type `'a` from memory. If `(obj, _) \<in> loadObjext p before after ko` within `getObject`, then: - @{term "p :: machine_word"} is the addres that we want to read an instance of `'a` from. 
- @{term "before :: machine_word"} is the address of the nearest object at or before `p`. - @{term "after :: machine_word option"} is the address of the nearest object after `p`, if any (for checking overlap). - @{term "ko :: kernel_object"} is the object currently at `before`. - @{term "obj :: 'a"} is the value extracted from `ko`. Graphically, the "memory" looks like this: before p after |-------|--+-----+-----|---| | +~~+ <---+---------- The span of obj, the object we want to extract. +~~~~~~~~~~~~~~~~+ <-------- The span of ko, the existing object that spans obj. +~~~+ The span of whatever object comes after obj. We don't care about this beyond making sure it doesn't overlap with ko. In almost every case, the object in memory (ko) is the same type of object as the one being loaded (obj). For example, for a reply object our parameters look like this: p, before |-----------| +~~~~~~~~~~~+ <- The span of two objects: - ko, the existing object (which should be a reply object). - obj, the object that we want to load from memory. This will just be ko projected through @{term projectKO}. In these simple cases, @{term loadObject_default} is a good specification for how to load an instance of `'a` from memory. The only interesting case is when we're loading a CTE, which might be inside a TCB. Then memory looks like this: before p |-------|--+-----+ | +~~+ <---+---- The span of obj, i.e. the CTE which we're reading from | | memory. +~~~~~~~~~~~~~~~~+ <-- The span of ko, i.e. the TCB surrounding and containing obj. In this case, the process for extracting the CTE from the surrounding TCB is more involved. See `loadObject_cte` in `ObjectInstances_H`. \<close> fixes loadObject :: "machine_word \<Rightarrow> machine_word \<Rightarrow> machine_word option \<Rightarrow> kernel_object \<Rightarrow> 'a kernel" \<comment>\<open> `updateObject` is only used in the generic definition of `setObject`, but it shows up in a few lemma statements as well. It describes how to update the kernel object contents of memory depending on what's already in that memory. If `(ko', _) \<in> updateObject v ko p before after s` within `setObject`, then: - @{term "v :: 'a"} is the new object you want to write at pointer @{term "p :: machine_word"}. - @{term "before :: machine_word"} is the address of the nearest object at or before `p`. - @{term "ko :: kernel_object"} is the object currently at `before`. - @{term "after :: machine_word option"} should be the address of the nearest object after `p`, if any (for checking overlap). - The returned value @{term "ko' :: kernel_object"} is the old object `ko`, updated as required by `v`. This value gets inserted by `setObject` into memory at the address `before`. Graphically, the "memory" looks like this: before p after |-------|--+-----+-----|---| | +~~+ <---+---------- The span of v, the object we want to insert. +~~~~~~~~~~~~~~~~+ <-------- The span of ko, the existing object that spans v. This is also the span of ko', which will be what gets put into memory after the update. +~~~+ The span of whatever object comes after ko. We don't care about this beyond making sure it doesn't overlap with ko before or after it gets updated with v. In almost every case, the object in memory (ko) is the same type of object as the one being inserted (v). For example, for a reply object our parameters look like this: p, before |-----------| +~~~~~~~~~~~+ <- The span of three objects: - v, the new reply object we want to insert. - ko, the existing object (which should be a reply object). 
- ko', the new object (which should be a reply object if the previous one was). In these simple cases, @{term updateObject_default} is a good specification for how to update the existing kernel object. The only interesting case is when we're updating a CTE, which might be inside a TCB. Then memory looks like this: before p |-------|--+-----+ | +~~+ <---+---- The span of v, i.e. the CTE which we're inserting into | | memory. +~~~~~~~~~~~~~~~~+ <-- The span of ko, i.e. the TCB surrounding and containing v. This is also the span of ko', which is "just" a copy of ko with the relevant CTE updated. In this case, the process for updating the surrounding TCB is more involved. See `updateObject_cte` in `ObjectInstances_H`. \<close> fixes updateObject :: "'a \<Rightarrow> kernel_object \<Rightarrow> machine_word \<Rightarrow> machine_word \<Rightarrow> machine_word option \<Rightarrow> kernel_object kernel" \<comment>\<open> If updating an object succeeds, then the type of the updated object (ko') should be the same as the original object (ko). \<close> assumes updateObject_type: "(ko', s') \<in> fst (updateObject v ko p p' p'' s) \<Longrightarrow> koTypeOf ko' = koTypeOf ko" end
C C $Id: gdsg.f,v 1.9 2008-07-27 00:20:57 haley Exp $ C C Copyright (C) 2000 C University Corporation for Atmospheric Research C All Rights Reserved C C The use of this Software is governed by a License Agreement. C SUBROUTINE GDSG(SGNA) C C DELETE SEGMENT C INTEGER EDSG PARAMETER (EDSG=59) C include 'gkscom.h' C INTEGER SGNA CHARACTER*80 CSNAME C C This subroutine is here solely as support for the SPPS GFLASn C entries. Full segmentation is not a part of the NCAR GKS C package at this time. The NCAR package is non-standard to the C extent that certain segmentation functions are supported, but C not all level 1 functions are supported. This subroutine should C be considered a user entry point only by way of the GFLASn C calls--it should never be called directly be the user. C C Check if GKS is in the proper state. C CALL GZCKST(7,EDSG,IER) IF (IER .NE. 0) RETURN C C Check that the segment name is valid. C IF (SGNA.LT.0 .OR. SGNA.GT.99) THEN ERS = 1 CALL GERHND(120,EDSG,ERF) ERS = 0 RETURN ENDIF C C Check if the segment exists. C DO 200 I=1,NUMSEG IF (SEGS(I) .EQ. SGNA) GO TO 210 200 CONTINUE ERS = 1 CALL GERHND(122,EDSG,ERF) ERS = 0 RETURN 210 CONTINUE C C Check if the segment is open. C IF (SGNA .EQ. CURSEG) THEN ERS = 1 CALL GERHND(125,EDSG,ERF) ERS = 0 RETURN ENDIF C C Remove the segment name from the list of segment names in use C and readjust the associated segment description arrays. C IF (NUMSEG .GT. 0) THEN DO 201 I=1,NUMSEG IF (SEGS(I) .EQ. SGNA) THEN C CSNAME = ' ' CSNAME = SEGNAM(I) IP1 = I+1 DO 202 J=IP1,NUMSEG SEGS(J-1) = SEGS(J) SEGNAM(J-1) = SEGNAM(J) SEGLEN(J-1) = SEGLEN(J) DO 205 IR=1,2 DO 206 JC=1,3 SEGT(J-1,IR,JC) = SEGT(J,IR,JC) 206 CONTINUE 205 CONTINUE 202 CONTINUE SEGS(NUMSEG) = 0 SEGNAM(NUMSEG) = ' ' SEGLEN(NUMSEG) = 0 DO 203 IR=1,2 DO 204 JC=1,3 SEGT(NUMSEG,IR,JC) = 0. 204 CONTINUE 203 CONTINUE SEGT(NUMSEG,1,1) = 1. SEGT(NUMSEG,2,2) = 1. NUMSEG = NUMSEG-1 GO TO 10 ENDIF 201 CONTINUE ENDIF 10 CONTINUE C C Make the interface call. C FCODE = 79 CALL GZROI(0) CONT = 0 STRL1 = 80 STRL2 = 80 STR = CSNAME CALL GZTOWK IF (RERR.NE.0) THEN ERS = 1 CALL GERHND(RERR,EDSG,ERF) ERS = 0 ENDIF C RETURN END
If the distance between $x$ and $y$ is greater than $r + s$, then the closed balls of radius $r$ and $s$ centered at $x$ and $y$, respectively, are disjoint.
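A one-line justification of the statement above (my own informal sketch of the standard triangle-inequality argument, not taken from any particular formalisation):

\[
z \in \overline{B}(x,r) \cap \overline{B}(y,s) \;\Longrightarrow\; d(x,y) \le d(x,z) + d(z,y) \le r + s,
\]

which contradicts $d(x,y) > r + s$; hence the two closed balls have empty intersection.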
""" struct SymSparseMatrixCSR{Bi,T,Ti<:Integer} <: AbstractSparseMatrix{T,Ti} uppertrian :: SparseMatrixCSR{Bi,T,Ti} end Matrix type for storing symmetric sparse matrices in the Compressed Sparse Row format with `Bi`-based indexing (typically 0 or 1). Only the upper triangle is stored (including the non zero diagonal entries), which is represented by a `SparseMatrixCSR`. The standard way of constructing a `SymSparseMatrixCSR` is through the [`symsparsecsr`](@ref) function. """ struct SymSparseMatrixCSR{Bi,T,Ti<:Integer} <: AbstractSparseMatrix{T,Ti} uppertrian :: SparseMatrixCSR{Bi,T,Ti} end """ symsparsecsr(args...;symmetrize::Bool=false) symsparsecsr(::Val{Bi},args...;symmetrize::Bool=false) where Bi Create a `SymSparseMatrixCSR` with `Bi`-based indexing (1 by default) from the same `args...` as one constructs a `SparseMatrixCSC` with the [`sparse`](@ref) function. If `symmetrize == false` (the default) the given arguments should only describe the upper triangle of the matrix (including non zero diagonal values). If `symmetrize == true` a non symmetric input is accepted and it will be symmetrized in-place (i.e., changing the input arguments). """ function symsparsecsr(::Val{Bi},I,J,V,args...;symmetrize::Bool=false) where Bi if symmetrize Tv = eltype(V) α = Tv(0.5) for k in 1:length(I) r = I[k] c = J[k] if r > c I[k] = c J[k] = r end if r != c V[k] = α*V[k] end end end SymSparseMatrixCSR(sparsecsr(Val(Bi),I,J,V,args...)) end symsparsecsr(args...;kwargs...) = symsparsecsr(Val(1),args...;kwargs...) size(A::SymSparseMatrixCSR) = size(A.uppertrian) IndexStyle(::Type{<:SymSparseMatrixCSR}) = IndexCartesian() function getindex(A::SymSparseMatrixCSR, x::Integer, y::Integer) getindex(A.uppertrian,min(x,y),max(x,y)) end function setindex!(A::SymSparseMatrixCSR, v, x::Integer, y::Integer) setindex!(A.uppertrian,v,min(x,y),max(x,y)) end getrowptr(S::SymSparseMatrixCSR) = getrowptr(S.uppertrian) getnzval(S::SymSparseMatrixCSR) = getnzval(S.uppertrian) getcolval(S::SymSparseMatrixCSR) = getcolval(S.uppertrian) """ getBi(S::SymSparseMatrixCSR{Bi}) where {Bi} Return `Bi`. """ getBi(S::SymSparseMatrixCSR{Bi}) where {Bi} = Bi """ getoffset(S::SymSparseMatrixCSR{Bi}) where {Bi} Return `1-Bi`. Useful to convert from 1-based to `Bi`-based indexing (by subtracting the offset). """ getoffset(S::SymSparseMatrixCSR{Bi}) where Bi = getoffset(Bi) """ issparse(S::SymSparseMatrixCSR) Returns `true`. """ issparse(S::SymSparseMatrixCSR) = true """ nnz(S::SymSparseMatrixCSR) Returns the number of stored elements in a sparse array, which correspond to the nonzero entries in the upper triangle and diagonal. """ nnz(S::SymSparseMatrixCSR) = nnz(S.uppertrian) """ nonzeros(S::SymSparseMatrixCSR) Return a vector (1-based) of the structural nonzero values in sparse array S. This includes zeros that are explicitly stored in the sparse array, which correspond to the nonzero entries in the upper triangle and diagonal. The returned vector points directly to the internal nonzero storage of S, and any modifications to the returned vector will mutate S as well. """ nonzeros(S::SymSparseMatrixCSR) = nonzeros(S.uppertrian) """ colvals(S::SparseMatrixCSR) Return a vector of the col indices of `S`. The stored values are indexes to arrays with `Bi`-based indexing, but the `colvals(S)` array itself is a standard 1-based Julia `Vector`. Any modifications to the returned vector will mutate S as well. Providing access to how the col indices are stored internally can be useful in conjunction with iterating over structural nonzero values. 
See also [`nonzeros`](@ref) and [`nzrange`](@ref). """ colvals(S::SymSparseMatrixCSR) = colvals(S.uppertrian) """ nzrange(S::SymSparseMatrixCSR, row::Integer) Return the range of indices to the structural nonzero values of a sparse matrix row section being in the diagonal or upper triangle. The returned range of indices is always 1-based even for `Bi != 1`. """ nzrange(S::SymSparseMatrixCSR, row::Integer) = nzrange(S.uppertrian, row) """ findnz(S::SymSparseMatrixCSR) Return a tuple `(I, J, V)` where `I` and `J` are the row and column 1-based indices of the stored ("structurally non-zero in diagonal + upper trianle") values in sparse matrix A, and V is a vector of the values. The returned vectors are newly allocated and are unrelated to the internal storage of matrix `S`. """ findnz(S::SymSparseMatrixCSR) = findnz(S.uppertrian) """ count(pred, S::SymSparseMatrixCSR) count(S::SymSparseMatrixCSR) Count the number of elements in `nonzeros(S)` for which predicate `pred` returns `true`. If `pred` not given, it counts the number of `true` values. """ count(pred, S::SymSparseMatrixCSR) = count(pred, S.uppertrian) count(S::SymSparseMatrixCSR) = count(i->true, S) function LinearAlgebra.fillstored!(a::SymSparseMatrixCSR,v) LinearAlgebra.fillstored!(a.uppertrian,v) a end function LinearAlgebra.rmul!(a::SymSparseMatrixCSR,v::Number) LinearAlgebra.rmul!(a.uppertrian,v) a end function mul!(y::AbstractVector,A::SymSparseMatrixCSR,v::AbstractVector, α::Number, β::Number) A.uppertrian.n == size(v, 1) || throw(DimensionMismatch()) A.uppertrian.m == size(y, 1) || throw(DimensionMismatch()) if β != 1 β != 0 ? rmul!(y, β) : fill!(y, zero(eltype(y))) end o = getoffset(A) for row = 1:size(y, 1) @inbounds for nz in nzrange(A,row) col = A.uppertrian.colval[nz]+o y[row] += A.uppertrian.nzval[nz]*v[col]*α row != col && (y[col] += A.uppertrian.nzval[nz]*v[row]*α) end end return y end function mul!(y::AbstractVector,A::SymSparseMatrixCSR,v::AbstractVector) A.uppertrian.n == size(v, 1) || throw(DimensionMismatch()) A.uppertrian.m == size(y, 1) || throw(DimensionMismatch()) fill!(y,zero(eltype(y))) o = getoffset(A) for row = 1:size(y, 1) @inbounds for nz in nzrange(A,row) col = A.uppertrian.colval[nz]+o y[row] += A.uppertrian.nzval[nz]*v[col] row != col && (y[col] += A.uppertrian.nzval[nz]*v[row]) end end return y end *(A::SymSparseMatrixCSR, v::Vector) = (y = similar(v,size(A,1));mul!(y,A,v)) function show(io::IO, ::MIME"text/plain", S::SymSparseMatrixCSR) xnnz = nnz(S) print(io, S.uppertrian.m, "×", S.uppertrian.n, " ", typeof(S), " with ", xnnz, " stored ", xnnz == 1 ? "entry" : "entries") if xnnz != 0 print(io, ":") show(IOContext(io, :typeinfo => eltype(S)), S) end end show(io::IO, S::SymSparseMatrixCSR) = show(io, S.uppertrian) Base.convert(::Type{T},a::T) where T<:SymSparseMatrixCSR = a function Base.convert( ::Type{SymSparseMatrixCSR{Bi,Tv,Ti}},a::SymSparseMatrixCSR) where {Bi,Tv,Ti} utrian = convert(SparseMatrixCSR{Bi,Tv,Ti},a.uppertrian) SymSparseMatrixCSR(utrian) end
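To make the storage convention concrete, here is a small Python/SciPy sketch of the symmetric matrix-vector product that `mul!` implements above: only the upper triangle (including the diagonal) is stored, and each off-diagonal entry is mirrored below the diagonal during the product. The helper name `sym_upper_matvec` and the use of `scipy.sparse.csr_matrix` are my own choices for illustration, not part of this package.

```python
import numpy as np
from scipy.sparse import csr_matrix

def sym_upper_matvec(upper: csr_matrix, v: np.ndarray) -> np.ndarray:
    """y = A @ v where only the upper triangle (incl. diagonal) of the
    symmetric matrix A is stored in `upper`, mirroring the mul! loop above."""
    y = np.zeros(upper.shape[0], dtype=np.result_type(upper.dtype, v.dtype))
    indptr, indices, data = upper.indptr, upper.indices, upper.data
    for row in range(upper.shape[0]):
        for nz in range(indptr[row], indptr[row + 1]):
            col = indices[nz]
            y[row] += data[nz] * v[col]   # upper-triangle contribution
            if row != col:                # mirror it below the diagonal
                y[col] += data[nz] * v[row]
    return y

# quick check against the corresponding dense symmetric matrix
A_upper = csr_matrix(np.triu([[4.0, 1.0, 0.0],
                              [0.0, 3.0, 2.0],
                              [0.0, 0.0, 5.0]]))
A_full = A_upper.toarray() + np.triu(A_upper.toarray(), 1).T
v = np.array([1.0, 2.0, 3.0])
assert np.allclose(sym_upper_matvec(A_upper, v), A_full @ v)
```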
[STATEMENT]
lemma incseq_Suc_iff: "incseq f \<longleftrightarrow> (\<forall>n. f n \<le> f (Suc n))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
 1. incseq f = (\<forall>n. f n \<le> f (Suc n))
[PROOF STEP]
by (auto intro: incseq_SucI dest: incseq_SucD)
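Informally, the equivalence above says monotonicity can be checked one step at a time (my own sketch, not the Isabelle proof): the forward direction is immediate since $n \le \operatorname{Suc} n$, and for the reverse direction, given $\forall n.\ f(n) \le f(n+1)$,

\[
n \le m \;\Longrightarrow\; f(n) \le f(n+1) \le \cdots \le f(m)
\]

by induction on $m - n$.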
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal33.

Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.

Lemma conj3eqsynthconj1 : forall (lv0 : natural) (lv1 : natural),
  (@eq natural (mult lv0 lv1) (mult lv0 lv0)).
Admitted.

QuickChick conj3eqsynthconj1.
function y=remove_CP(x,Ncp,Noff)
% Remove cyclic prefix
%   x    : received samples with the cyclic prefix at the start of each row
%   Ncp  : cyclic prefix length in samples
%   Noff : optional timing offset; the removal window starts Noff samples
%          inside the cyclic prefix (defaults to 0)
if nargin<3
    Noff=0;
end
y=x(:,Ncp+1-Noff:end-Noff);
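For comparison, a small NumPy sketch of the same slicing; the function name `remove_cp` and the round-trip test are illustrative additions, not part of the original MATLAB code.

```python
import numpy as np

def remove_cp(x: np.ndarray, ncp: int, noff: int = 0) -> np.ndarray:
    """NumPy equivalent of the MATLAB slicing x(:, Ncp+1-Noff : end-Noff):
    drop the cyclic prefix, optionally starting `noff` samples inside it."""
    return x[:, ncp - noff : x.shape[1] - noff]

# round-trip check: prepend a CP, then strip it again
rng = np.random.default_rng(0)
sym = rng.standard_normal((2, 64))                        # 2 streams, 64-sample symbols
ncp = 16
with_cp = np.concatenate([sym[:, -ncp:], sym], axis=1)    # CP = copy of the last ncp samples
assert np.allclose(remove_cp(with_cp, ncp), sym)
```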
If $A$ is a countable set, then the Lebesgue measure of $A$ is zero.
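A short informal argument for the statement above, using countable subadditivity and the fact that singletons are Lebesgue-null (my sketch, not the formal proof). Writing $A = \{a_1, a_2, \dots\}$,

\[
\mu(A) \;=\; \mu\Big(\bigcup_{i} \{a_i\}\Big) \;\le\; \sum_{i} \mu(\{a_i\}) \;=\; \sum_{i} 0 \;=\; 0 .
\]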
{-# OPTIONS --prop --rewriting #-} open import Calf.CostMonoid open import Data.Nat using (ℕ) open import Examples.Sorting.Comparable module Examples.Sorting.Core (costMonoid : CostMonoid) (fromℕ : ℕ → CostMonoid.ℂ costMonoid) (M : Comparable costMonoid fromℕ) where open Comparable M open import Calf costMonoid open import Calf.Types.List open import Relation.Nullary open import Relation.Nullary.Negation open import Relation.Binary open import Relation.Binary.PropositionalEquality as Eq using (_≡_; refl; module ≡-Reasoning) open import Data.Product using (_×_; _,_; ∃; proj₁; proj₂) open import Data.Sum using (inj₁; inj₂) open import Data.Nat as Nat using (ℕ; zero; suc; z≤n; s≤s; _+_; _*_; _^_; ⌊_/2⌋; ⌈_/2⌉) import Data.Nat.Properties as N open import Data.List.Properties using (++-assoc; length-++) public open import Data.List.Relation.Binary.Permutation.Propositional public open import Data.List.Relation.Binary.Permutation.Propositional.Properties using (↭-length; ¬x∷xs↭[]; All-resp-↭; Any-resp-↭; drop-∷; ++-identityʳ) renaming (++-comm to ++-comm-↭; ++⁺ˡ to ++⁺ˡ-↭; ++⁺ʳ to ++⁺ʳ-↭; ++⁺ to ++⁺-↭) public open import Data.List.Relation.Unary.All using (All; []; _∷_; map; lookup) public open import Data.List.Relation.Unary.All.Properties as AllP using () renaming (++⁺ to ++⁺-All) public open import Data.List.Relation.Unary.Any using (Any; here; there) _≥_ : val A → val A → Set x ≥ y = y ≤ x _≰_ : val A → val A → Set x ≰ y = ¬ x ≤ y ≰⇒≥ : _≰_ ⇒ _≥_ ≰⇒≥ {x} {y} h with ≤-total x y ... | inj₁ h₁ = contradiction h₁ h ... | inj₂ h₂ = h₂ _≤*_ : val A → val (list A) → Set _≤*_ x = All (x ≤_) ≤-≤* : ∀ {x₁ x₂ l} → x₁ ≤ x₂ → x₂ ≤* l → x₁ ≤* l ≤-≤* x₁≤x₂ = map (≤-trans x₁≤x₂) data Sorted : val (list A) → Set where [] : Sorted [] _∷_ : ∀ {y ys} → y ≤* ys → Sorted ys → Sorted (y ∷ ys) short-sorted : {l : val (list A)} → length l Nat.≤ 1 → Sorted l short-sorted {[]} _ = [] short-sorted {_ ∷ []} _ = [] ∷ [] short-sorted {_ ∷ _ ∷ _} (s≤s ()) unique-sorted : ∀ {l'₁ l'₂} → Sorted l'₁ → Sorted l'₂ → l'₁ ↭ l'₂ → l'₁ ≡ l'₂ unique-sorted [] [] ↭ = refl unique-sorted [] (h₂ ∷ sorted₂) ↭ = contradiction (↭-sym ↭) ¬x∷xs↭[] unique-sorted (h₁ ∷ sorted₁) [] ↭ = contradiction (↭) ¬x∷xs↭[] unique-sorted (h₁ ∷ sorted₁) (h₂ ∷ sorted₂) ↭ with ≤-antisym (lookup (≤-refl ∷ h₁) (Any-resp-↭ (↭-sym ↭) (here refl))) (lookup (≤-refl ∷ h₂) (Any-resp-↭ (↭) (here refl))) ... 
| refl = Eq.cong (_ ∷_) (unique-sorted sorted₁ sorted₂ (drop-∷ ↭)) join-sorted : ∀ {l₁ mid l₂} → Sorted l₁ → Sorted l₂ → All (_≤ mid) l₁ → All (mid ≤_) l₂ → Sorted (l₁ ++ [ mid ] ++ l₂) join-sorted [] sorted₂ all₁ all₂ = all₂ ∷ sorted₂ join-sorted (h ∷ sorted₁) sorted₂ (h' ∷ all₁) all₂ = ++⁺-All h (h' ∷ ≤-≤* h' all₂) ∷ (join-sorted sorted₁ sorted₂ all₁ all₂) ++⁻ˡ : ∀ xs {ys} → Sorted (xs ++ ys) → Sorted xs ++⁻ˡ [] sorted = [] ++⁻ˡ (x ∷ xs) (h ∷ sorted) = AllP.++⁻ˡ xs h ∷ (++⁻ˡ xs sorted) ++⁻ʳ : ∀ xs {ys} → Sorted (xs ++ ys) → Sorted ys ++⁻ʳ [] sorted = sorted ++⁻ʳ (x ∷ xs) (h ∷ sorted) = ++⁻ʳ xs sorted split-sorted₁ : ∀ xs {x} → Sorted (xs ∷ʳ x) → All (_≤ x) xs split-sorted₁ [] sorted = [] split-sorted₁ (x ∷ xs) (h ∷ sorted) = proj₂ (AllP.∷ʳ⁻ h) ∷ split-sorted₁ xs sorted uncons₁ : ∀ {x xs} → Sorted (x ∷ xs) → x ≤* xs uncons₁ (h ∷ sorted) = h uncons₂ : ∀ {x xs} → Sorted (x ∷ xs) → Sorted xs uncons₂ (h ∷ sorted) = sorted SortedOf : val (list A) → val (list A) → Set SortedOf l l' = l ↭ l' × Sorted l' SortResult : cmp (Π (list A) λ _ → F (list A)) → val (list A) → Set SortResult sort l = ◯ (∃ λ l' → sort l ≡ ret l' × SortedOf l l') IsSort : cmp (Π (list A) λ _ → F (list A)) → Set IsSort sort = ∀ l → SortResult sort l IsSort⇒≡ : ∀ sort₁ → IsSort sort₁ → ∀ sort₂ → IsSort sort₂ → ◯ (sort₁ ≡ sort₂) IsSort⇒≡ sort₁ correct₁ sort₂ correct₂ u = funext λ l → let (l'₁ , ≡₁ , ↭₁ , sorted₁) = correct₁ l u in let (l'₂ , ≡₂ , ↭₂ , sorted₂) = correct₂ l u in begin sort₁ l ≡⟨ ≡₁ ⟩ ret l'₁ ≡⟨ Eq.cong ret (unique-sorted sorted₁ sorted₂ (trans (↭-sym ↭₁) ↭₂)) ⟩ ret l'₂ ≡˘⟨ ≡₂ ⟩ sort₂ l ∎ where open ≡-Reasoning
!*********************************************************************** ! * SUBROUTINE STRSUM ! * ! Generates the first part of sms92.sum (on stream 24). * ! * ! Call(s) to: [LIB92] CALEN, CONVRT, WGHTD5. * ! * ! Written by Farid A. Parpia Last revision: 28 Dec 1992 * ! * !*********************************************************************** !...Translated by Pacific-Sierra Research 77to90 4.3E 14:07:11 1/ 3/07 !...Modified by Charlotte Froese Fischer ! Gediminas Gaigalas 11/02/17 !----------------------------------------------- ! M o d u l e s !----------------------------------------------- USE vast_kind_param, ONLY: DOUBLE USE parameter_def, ONLY: NNNW USE decide_C USE def_C USE eigv_C USE iccu_C USE grid_C USE npar_C USE npot_C USE nsmdat_C USE orb_C USE prnt_C USE syma_C USe wave_C !----------------------------------------------- ! I n t e r f a c e B l o c k s !----------------------------------------------- USE convrt_I USE engout_I USE wghtd5_I IMPLICIT NONE !----------------------------------------------- ! L o c a l V a r i a b l e s !----------------------------------------------- INTEGER :: LENTH, I CHARACTER :: RECORD*256, CDATA*26 !----------------------------------------------- ! ! Get the date and time of day; make this information the ! header of the summary file ! ! ! Write out the basic dimensions of the electron cloud ! WRITE (24, *) CALL CONVRT (NELEC, RECORD, LENTH) WRITE (24, *) 'There are '//RECORD(1:LENTH)//' electrons in the cloud' CALL CONVRT (NCF, RECORD, LENTH) WRITE (24, *) ' in '//RECORD(1:LENTH)//' relativistic CSFs' CALL CONVRT (NW, RECORD, LENTH) WRITE (24, *) ' based on '//RECORD(1:LENTH)//' relativistic subshells.' ! ! If the CSFs are not treated uniformly, write out an ! informative message ! IF (LFORDR) THEN WRITE (24, *) CALL CONVRT (ICCUT(1), RECORD, LENTH) WRITE (24, *) ' CSFs 1--'//RECORD(1:LENTH)//' constitute'//& ' the zero-order space;' ENDIF ! ! Write out the nuclear parameters ! WRITE (24, *) WRITE (24, 300) Z IF (NPARM == 2) THEN WRITE (24, *) 'Fermi nucleus:' WRITE (24, 301) PARM(1), PARM(2) CALL CONVRT (NNUC, RECORD, LENTH) WRITE (24, *) ' there are '//RECORD(1:LENTH)//& ' tabulation points in the nucleus.' ELSE WRITE (24, *) ' point nucleus.' ENDIF ! ! Write out the physical effects specifications ! WRITE (24, *) WRITE (24, 305) C ! ! Write out the parameters of the radial grid ! WRITE (24, *) IF (HP == 0.0D00) THEN WRITE (24, 306) RNT, H, N ELSE WRITE (24, 307) RNT, H, HP, N ENDIF WRITE (24, 308) R(1), R(2), R(N) ! ! Write out the orbital properties ! WRITE (24, *) WRITE (24, *) 'Subshell radial wavefunction summary:' WRITE (24, *) WRITE (24, 309) WRITE (24, *) DO I = 1, NW WRITE (24, 310) NP(I), NH(I), E(I), PZ(I), GAMA(I), PF(2,I), QF(2,I), & MF(I) END DO ! ! Write the list of eigenpair indices ! WRITE (24, *) CALL ENGOUT (EAV, EVAL, IATJPO, IASPAR, IVEC, NVEC, 3) CALL WGHTD5 ! RETURN ! 
300 FORMAT('The atomic number is ',1F14.10,';') 301 FORMAT(' c =',1P,1D19.12,' Bohr radii,'/,' a =',1D19.12,' Bohr radii;') 305 FORMAT('Speed of light = ',1P,D19.12,' atomic units.') 306 FORMAT('Radial grid: R(I) = RNT*(exp((I-1)*H)-1),',' I = 1, ..., N;'/,/,& ' RNT = ',1P,D19.12,' Bohr radii;'/,' H = ',D19.12,' Bohr radii;'/& ,' N = ',1I4,';') 307 FORMAT('Radial grid: ln(R(I)/RNT+1)+(H/HP)*R(I) = (I-1)*H,',& ' I = 1, ..., N;'/,/,' RNT = ',1P,D19.12,' Bohr radii;'/,' H = ',D& 19.12,' Bohr radii;'/,' HP = ',D19.12,' Bohr radii;'/,' N = ',1I4& ,';') 308 FORMAT(' R(1) = ',1P,1D19.12,' Bohr radii;'/,' R(2) = ',1D19.12,& ' Bohr radii;'/,' R(N) = ',1D19.12,' Bohr radii.') 309 FORMAT(' Subshell',11X,'e',20X,'p0',18X,'gamma',19X,'P(2)',18X,'Q(2)',10X& ,'MTP') 310 FORMAT(3X,1I2,1A2,1X,1P,5(3X,1D19.12),3X,1I3) RETURN ! END SUBROUTINE STRSUM
# pylint: disable=unsubscriptable-object, too-many-function-args, not-callable, unexpected-keyword-arg, no-value-for-parameter, too-many-boolean-expressions """ Module for data representation translation methods """ # TODO: # - right now we distinguish on histogramming/lookup for scalars (normal) or array, which means that instead # of just a single value per e.g. histogram bin, there can be an array of values # This should be made more general that one function can handle everything...since now we have several # functions doing similar things. not very pretty from __future__ import absolute_import, print_function, division from copy import deepcopy import numpy as np from numba import guvectorize, SmartArray, cuda from pisa import FTYPE, TARGET from pisa.core.binning import OneDimBinning, MultiDimBinning from pisa.utils.comparisons import recursiveEquality from pisa.utils.log import logging, set_verbosity from pisa.utils.numba_tools import myjit, WHERE from pisa.utils import vectorizer __all__ = [ 'resample', 'histogram', 'lookup', 'find_index', 'find_index_unsafe', 'find_index_cuda', 'test_histogram', 'test_find_index', ] FX = 'f4' if FTYPE == np.float32 else 'f8' # --------- resampling ------------ def resample(weights, old_sample, old_binning, new_sample, new_binning): """Resample binned data with a given binning into any arbitrary `new_binning` Parameters ---------- weights : SmartArray old_sample : list of SmartArrays old_binning : PISA MultiDimBinning new_sample : list of SmartArrays new_binning : PISA MultiDimBinning Returns ------- new_hist_vals """ if old_binning.names != new_binning.names: raise ValueError(f'cannot translate betwen {old_binning} and {new_binning}') # This is a two step process: first histogram the weights into the new binning # and keep the flat_hist_counts hist_func = histogram_gpu if TARGET == 'cuda' else histogram_np flat_hist = hist_func(old_sample, weights, new_binning, apply_weights=True) flat_hist_counts = hist_func(old_sample, weights, new_binning, apply_weights=False) vectorizer.itruediv(flat_hist_counts, out=flat_hist) # now do the inverse, a lookup of hist vals at `new_sample` points new_hist_vals = lookup(new_sample, weights, old_binning) # Now, for bin we have 1 or less counts, take the lookedup value instead: vectorizer.replace_where_counts_gt( vals=flat_hist, counts=flat_hist_counts, min_count=1, out=new_hist_vals, ) return new_hist_vals # --------- histogramming methods --------------- def histogram(sample, weights, binning, averaged): """Histogram `sample` points, weighting by `weights`, according to `binning`. Parameters ---------- sample : list of SmartArrays weights : SmartArray binning : PISA MultiDimBinning averaged : bool If True, the histogram entries are averages of the numbers that end up in a given bin. 
This for example must be used when oscillation probabilities are translated, otherwise we end up with probability*count per bin """ hist_func = histogram_gpu if TARGET == 'cuda' else histogram_np flat_hist = hist_func(sample, weights, binning, apply_weights=True) if averaged: flat_hist_counts = hist_func(sample, weights, binning, apply_weights=False) vectorizer.itruediv(flat_hist_counts, out=flat_hist) return flat_hist def histogram_gpu(sample, weights, binning, apply_weights=True): # pylint: disable=missing-docstring binning = MultiDimBinning(binning) # TODO: make for d > 3 if binning.num_dims in [2, 3]: bin_edges = [edges.magnitude for edges in binning.bin_edges] if len(weights.shape) > 1: # so we have arrays flat_hist = SmartArray( np.zeros(shape=(binning.size, weights.shape[1]), dtype=FTYPE) ) arrays = True else: flat_hist = SmartArray(np.zeros(binning.size, dtype=FTYPE)) arrays = False size = weights.shape[0] d_bin_edges_x = cuda.to_device(bin_edges[0]) d_bin_edges_y = cuda.to_device(bin_edges[1]) if binning.num_dims == 2: if arrays: histogram_2d_kernel_arrays[(size + 511) // 512, 512]( sample[0].get('gpu'), sample[1].get('gpu'), flat_hist, d_bin_edges_x, d_bin_edges_y, weights.get('gpu'), apply_weights, ) else: histogram_2d_kernel[(size + 511) // 512, 512]( sample[0].get('gpu'), sample[1].get('gpu'), flat_hist, d_bin_edges_x, d_bin_edges_y, weights.get('gpu'), apply_weights, ) elif binning.num_dims == 3: d_bin_edges_z = cuda.to_device(bin_edges[2]) if arrays: histogram_3d_kernel_arrays[(size + 511) // 512, 512]( sample[0].get('gpu'), sample[1].get('gpu'), sample[2].get('gpu'), flat_hist, d_bin_edges_x, d_bin_edges_y, d_bin_edges_z, weights.get('gpu'), apply_weights, ) else: histogram_3d_kernel[(size + 511) // 512, 512]( sample[0].get('gpu'), sample[1].get('gpu'), sample[2].get('gpu'), flat_hist, d_bin_edges_x, d_bin_edges_y, d_bin_edges_z, weights.get('gpu'), apply_weights, ) return flat_hist else: raise NotImplementedError( 'Dimensionality other than 2 or 3 not supported on the GPU' ) histogram_gpu.__doc__ = histogram.__doc__ def histogram_np(sample, weights, binning, apply_weights=True): # pylint: disable=missing-docstring """helper function for numpy historams""" binning = MultiDimBinning(binning) bin_edges = [edges.magnitude for edges in binning.bin_edges] sample = [s.get('host') for s in sample] weights = weights.get('host') if weights.ndim == 2: # that means it's 1-dim data instead of scalars hists = [] for i in range(weights.shape[1]): w = weights[:, i] if apply_weights else None hist, _ = np.histogramdd(sample=sample, weights=w, bins=bin_edges) hists.append(hist.ravel()) flat_hist = np.stack(hists, axis=1) else: w = weights if apply_weights else None hist, _ = np.histogramdd(sample=sample, weights=w, bins=bin_edges) flat_hist = hist.ravel() return SmartArray(flat_hist.astype(FTYPE)) # TODO: can we do just n-dimensional? And scalars or arbitrary array shapes? # TODO: optimize using shared memory @cuda.jit def histogram_2d_kernel( sample_x, sample_y, flat_hist, bin_edges_x, bin_edges_y, weights, apply_weights, ): i = cuda.grid(1) if i < sample_x.size: if ( sample_x[i] >= bin_edges_x[0] and sample_x[i] <= bin_edges_x[-1] and sample_y[i] >= bin_edges_y[0] and sample_y[i] <= bin_edges_y[-1] ): idx_x = find_index_unsafe(sample_x[i], bin_edges_x) idx_y = find_index_unsafe(sample_y[i], bin_edges_y) idx = idx_x * (bin_edges_y.size - 1) + idx_y if apply_weights: cuda.atomic.add(flat_hist, idx, weights[i]) else: cuda.atomic.add(flat_hist, idx, 1.) 
# else: outside of binning or nan; nothing to do @cuda.jit def histogram_2d_kernel_arrays( sample_x, sample_y, flat_hist, bin_edges_x, bin_edges_y, weights, apply_weights, ): i = cuda.grid(1) if i < sample_x.size: if ( sample_x[i] >= bin_edges_x[0] and sample_x[i] <= bin_edges_x[-1] and sample_y[i] >= bin_edges_y[0] and sample_y[i] <= bin_edges_y[-1] ): idx_x = find_index_unsafe(sample_x[i], bin_edges_x) idx_y = find_index_unsafe(sample_y[i], bin_edges_y) idx = idx_x * (bin_edges_y.size - 1) + idx_y for j in range(flat_hist.shape[1]): if apply_weights: cuda.atomic.add(flat_hist, (idx, j), weights[i, j]) else: cuda.atomic.add(flat_hist, (idx, j), 1.) # else: outside of binning or nan; nothing to do @cuda.jit def histogram_3d_kernel( sample_x, sample_y, sample_z, flat_hist, bin_edges_x, bin_edges_y, bin_edges_z, weights, apply_weights, ): i = cuda.grid(1) if i < sample_x.size: if ( sample_x[i] >= bin_edges_x[0] and sample_x[i] <= bin_edges_x[-1] and sample_y[i] >= bin_edges_y[0] and sample_y[i] <= bin_edges_y[-1] and sample_z[i] >= bin_edges_z[0] and sample_z[i] <= bin_edges_z[-1] ): idx_x = find_index_unsafe(sample_x[i], bin_edges_x) idx_y = find_index_unsafe(sample_y[i], bin_edges_y) idx_z = find_index_unsafe(sample_z[i], bin_edges_z) idx = ( idx_x * (bin_edges_y.size - 1) * (bin_edges_z.size - 1) + idx_y * (bin_edges_z.size - 1) + idx_z ) if apply_weights: cuda.atomic.add(flat_hist, idx, weights[i]) else: cuda.atomic.add(flat_hist, idx, 1.) # else: outside of binning or nan; nothing to do @cuda.jit def histogram_3d_kernel_arrays( sample_x, sample_y, sample_z, flat_hist, bin_edges_x, bin_edges_y, bin_edges_z, weights, apply_weights, ): i = cuda.grid(1) if i < sample_x.size: if ( sample_x[i] >= bin_edges_x[0] and sample_x[i] <= bin_edges_x[-1] and sample_y[i] >= bin_edges_y[0] and sample_y[i] <= bin_edges_y[-1] and sample_z[i] >= bin_edges_z[0] and sample_z[i] <= bin_edges_z[-1] ): idx_x = find_index_unsafe(sample_x[i], bin_edges_x) idx_y = find_index_unsafe(sample_y[i], bin_edges_y) idx_z = find_index_unsafe(sample_z[i], bin_edges_z) idx = ( idx_x * (bin_edges_y.size - 1) * (bin_edges_z.size - 1) + idx_y * (bin_edges_z.size - 1) + idx_z ) for j in range(flat_hist.shape[1]): if apply_weights: cuda.atomic.add(flat_hist, (idx, j), weights[i, j]) else: cuda.atomic.add(flat_hist, (idx, j), 1.) # else: outside of binning or nan; nothing to do # ---------- Lookup methods --------------- def lookup(sample, flat_hist, binning): """The inverse of histograming: Extract the histogram values at `sample` points. Parameters ---------- sample : num_dims list of length-num_samples SmartArrays Points at which to find histogram's values flat_hist : SmartArray Histogram values binning : num_dims MultiDimBinning Histogram's binning Returns ------- hist_vals : len-num_samples SmartArray Notes ----- Handles up to 3D. 
""" assert binning.num_dims <= 3, 'can only do up to 3D at the moment' bin_edges = [edges.magnitude for edges in binning.bin_edges] # TODO: directly return smart array if flat_hist.ndim == 1: #print 'looking up 1D' hist_vals = SmartArray(np.zeros_like(sample[0])) if binning.num_dims == 1: lookup_vectorized_1d( sample[0].get(WHERE), flat_hist.get(WHERE), bin_edges[0], out=hist_vals.get(WHERE), ) elif binning.num_dims == 2: lookup_vectorized_2d( sample[0].get(WHERE), sample[1].get(WHERE), flat_hist.get(WHERE), bin_edges[0], bin_edges[1], out=hist_vals.get(WHERE), ) elif binning.num_dims == 3: lookup_vectorized_3d( sample[0].get(WHERE), sample[1].get(WHERE), sample[2].get(WHERE), flat_hist.get(WHERE), bin_edges[0], bin_edges[1], bin_edges[2], out=hist_vals.get(WHERE), ) elif flat_hist.ndim == 2: #print 'looking up ND' hist_vals = SmartArray( np.zeros((sample[0].size, flat_hist.shape[1]), dtype=FTYPE) ) if binning.num_dims == 1: lookup_vectorized_1d_arrays( sample[0].get(WHERE), flat_hist.get(WHERE), bin_edges[0], out=hist_vals.get(WHERE), ) if binning.num_dims == 2: lookup_vectorized_2d_arrays( sample[0].get(WHERE), sample[1].get(WHERE), flat_hist.get(WHERE), bin_edges[0], bin_edges[1], out=hist_vals.get(WHERE), ) elif binning.num_dims == 3: lookup_vectorized_3d_arrays( sample[0].get(WHERE), sample[1].get(WHERE), sample[2].get(WHERE), flat_hist.get(WHERE), bin_edges[0], bin_edges[1], bin_edges[2], out=hist_vals.get(WHERE), ) else: raise NotImplementedError() hist_vals.mark_changed(WHERE) return hist_vals @myjit def find_index(val, bin_edges): """Find index in binning for `val`. If `val` is below binning range or is nan, return -1; if `val` is above binning range, return num_bins. Edge inclusivity/exclusivity is defined as .. :: [ bin 0 ) [ bin 1 ) ... [ bin num_bins-1 ] Using these indices to produce histograms should yield identical results (ignoring underflow and overflow, which `find_index` has) that are equivalent to those produced by ``numpy.histogramdd``. Parameters ---------- val : scalar Value for which to find bin index bin_edges : 1d numpy ndarray of 2 or more scalars Must be monotonically increasing, and all bins are assumed to be adjacent Returns ------- bin_idx : int in [-1, num_bins] -1 is returned for underflow or if `val` is nan. `num_bins` is returned for overflow. Otherwise, for bin_edges[0] <= `val` <= bin_edges[-1], 0 <= `bin_idx` <= num_bins - 1 """ # TODO: support fast computation for lin and log binnings? num_edges = len(bin_edges) num_bins = num_edges - 1 assert num_bins >= 1, 'bin_edges must define at least one bin' underflow_idx = -1 overflow_idx = num_bins if val >= bin_edges[0]: if val <= bin_edges[-1]: bin_idx = find_index_unsafe(val, bin_edges) # Paranoia: In case of unforseen numerical issues, force clipping of # returned bin index to [0, num_bins - 1] (any `val` outside of binning # is already handled, so this should be valid) bin_idx = min(max(0, bin_idx), num_bins - 1) else: bin_idx = overflow_idx else: # either value is below first bin or is NaN bin_idx = underflow_idx return bin_idx @myjit def find_index_unsafe(val, bin_edges): """Find bin index of `val` within binning defined by `bin_edges`. Validity of `val` and `bin_edges` is not checked. 
Parameters ---------- val : scalar Assumed to be within range of `bin_edges` (including lower and upper bin edges) bin_edges : array Returns ------- index See also -------- find_index : includes bounds checking and handling of special cases """ # Initialize to point to left-most edge left_edge_idx = 0 # Initialize to point to right-most edge right_edge_idx = len(bin_edges) - 1 while left_edge_idx < right_edge_idx: # See where value falls w.r.t. an edge ~midway between left and right edges # ``>> 1``: integer division by 2 (i.e., divide w/ truncation) test_edge_idx = (left_edge_idx + right_edge_idx) >> 1 # ``>=``: bin left edges are inclusive if val >= bin_edges[test_edge_idx]: left_edge_idx = test_edge_idx + 1 else: right_edge_idx = test_edge_idx # break condition of while loop is that left_edge_idx points to the # right edge of the bin that `val` is inside of; that is one more than # that _bin's_ index return left_edge_idx - 1 @cuda.jit def find_index_cuda(val, bin_edges, out): """CUDA wrapper of `find_index` kernel e.g. for running tests on GPU Parameters ---------- val : array bin_edges : array out : array of same size as `val` Results are stored to `out` """ i = cuda.grid(1) if i < val.size: out[i] = find_index(val[i], bin_edges) @guvectorize( [f'({FX}[:], {FX}[:], {FX}[:], {FX}[:])'], '(), (j), (k) -> ()', target=TARGET, ) def lookup_vectorized_1d( sample, flat_hist, bin_edges, weights, ): """Vectorized gufunc to perform the lookup""" x = sample[0] if (bin_edges[0] <= x <= bin_edges[-1]): idx = find_index_unsafe(x, bin_edges) weights[0] = flat_hist[idx] else: # outside of binning or nan weights[0] = 0. @guvectorize( [f'({FX}[:], {FX}[:, :], {FX}[:], {FX}[:])'], '(), (j, d), (k) -> (d)', target=TARGET, ) def lookup_vectorized_1d_arrays( sample, flat_hist, bin_edges, weights, ): """Vectorized gufunc to perform the lookup""" x = sample[0] if (bin_edges[0] <= x <= bin_edges[-1]): idx = find_index_unsafe(x, bin_edges) for i in range(weights.size): weights[i] = flat_hist[idx, i] else: # outside of binning or nan for i in range(weights.size): weights[i] = 0. @guvectorize( [f'({FX}[:], {FX}[:], {FX}[:], {FX}[:], {FX}[:], {FX}[:])'], '(), (), (j), (k), (l) -> ()', target=TARGET, ) def lookup_vectorized_2d( sample_x, sample_y, flat_hist, bin_edges_x, bin_edges_y, weights, ): """Vectorized gufunc to perform the lookup""" x = sample_x[0] y = sample_y[0] if ( x >= bin_edges_x[0] and x <= bin_edges_x[-1] and y >= bin_edges_y[0] and y <= bin_edges_y[-1] ): idx_x = find_index_unsafe(x, bin_edges_x) idx_y = find_index_unsafe(y, bin_edges_y) idx = idx_x * (len(bin_edges_y) - 1) + idx_y weights[0] = flat_hist[idx] else: # outside of binning or nan weights[0] = 0. @guvectorize( [f'({FX}[:], {FX}[:], {FX}[:, :], {FX}[:], {FX}[:], {FX}[:])'], '(), (), (j, d), (k), (l) -> (d)', target=TARGET, ) def lookup_vectorized_2d_arrays( sample_x, sample_y, flat_hist, bin_edges_x, bin_edges_y, weights, ): """Vectorized gufunc to perform the lookup while flat hist and weights have both a second dimension """ x = sample_x[0] y = sample_y[0] if ( x >= bin_edges_x[0] and x <= bin_edges_x[-1] and y >= bin_edges_y[0] and y <= bin_edges_y[-1] ): idx_x = find_index_unsafe(x, bin_edges_x) idx_y = find_index_unsafe(y, bin_edges_y) idx = idx_x * (len(bin_edges_y) - 1) + idx_y for i in range(weights.size): weights[i] = flat_hist[idx, i] else: # outside of binning or nan for i in range(weights.size): weights[i] = 0. 
@guvectorize( [f'({FX}[:], {FX}[:], {FX}[:], {FX}[:], {FX}[:], {FX}[:], {FX}[:], {FX}[:])'], '(), (), (), (j), (k), (l), (m) -> ()', target=TARGET, ) def lookup_vectorized_3d( sample_x, sample_y, sample_z, flat_hist, bin_edges_x, bin_edges_y, bin_edges_z, weights, ): """Vectorized gufunc to perform the lookup""" x = sample_x[0] y = sample_y[0] z = sample_z[0] if ( x >= bin_edges_x[0] and x <= bin_edges_x[-1] and y >= bin_edges_y[0] and y <= bin_edges_y[-1] and z >= bin_edges_z[0] and z <= bin_edges_z[-1] ): idx_x = find_index_unsafe(x, bin_edges_x) idx_y = find_index_unsafe(y, bin_edges_y) idx_z = find_index_unsafe(z, bin_edges_z) idx = (idx_x * (len(bin_edges_y) - 1) + idx_y) * (len(bin_edges_z) - 1) + idx_z weights[0] = flat_hist[idx] else: # outside of binning or nan weights[0] = 0. @guvectorize( [f'({FX}[:], {FX}[:], {FX}[:], {FX}[:, :], {FX}[:], {FX}[:], {FX}[:], {FX}[:])'], '(), (), (), (j, d), (k), (l), (m) -> (d)', target=TARGET, ) def lookup_vectorized_3d_arrays( sample_x, sample_y, sample_z, flat_hist, bin_edges_x, bin_edges_y, bin_edges_z, weights, ): """Vectorized gufunc to perform the lookup while flat hist and weights have both a second dimension""" x = sample_x[0] y = sample_y[0] z = sample_z[0] if ( x >= bin_edges_x[0] and x <= bin_edges_x[-1] and y >= bin_edges_y[0] and y <= bin_edges_y[-1] and z >= bin_edges_z[0] and z <= bin_edges_z[-1] ): idx_x = find_index_unsafe(x, bin_edges_x) idx_y = find_index_unsafe(y, bin_edges_y) idx_z = find_index_unsafe(z, bin_edges_z) idx = (idx_x * (len(bin_edges_y) - 1) + idx_y) * (len(bin_edges_z) - 1) + idx_z for i in range(weights.size): weights[i] = flat_hist[idx, i] else: # outside of binning or nan for i in range(weights.size): weights[i] = 0. def test_histogram(): """Unit tests for `histogram` function. Correctness is defined as matching the histogram produced by numpy.histogramdd. """ all_num_bins = [2, 3, 4] n_evts = 10000 rand = np.random.RandomState(seed=0) weights = SmartArray(rand.rand(n_evts).astype(FTYPE)) binning = [] sample = [] for num_dims, num_bins in enumerate(all_num_bins, start=1): binning.append( OneDimBinning( name=f'dim{num_dims - 1}', num_bins=num_bins, is_lin=True, domain=[0, num_bins], ) ) sample.append( SmartArray(rand.rand(n_evts).astype(FTYPE) * num_bins) ) if TARGET == "cuda" and num_dims == 1: continue bin_edges = [b.edge_magnitudes for b in binning] test = histogram(sample, weights, binning, averaged=False).get() ref, _ = np.histogramdd(sample=sample, bins=bin_edges, weights=weights) ref = ref.astype(FTYPE).ravel() assert recursiveEquality(test, ref), f'\ntest:\n{test}\n\nref:\n{ref}' test_avg = histogram(sample, weights, binning, averaged=True).get() ref_counts, _ = np.histogramdd(sample=sample, bins=bin_edges, weights=None) ref_counts = ref_counts.astype(FTYPE).ravel() ref_avg = (ref / ref_counts).astype(FTYPE) assert recursiveEquality(test_avg, ref_avg), \ f'\ntest_avg:\n{test_avg}\n\nref_avg:\n{ref_avg}' logging.info('<< PASS : test_histogram >>') def test_find_index(): """Unit tests for `find_index` function. Correctness is defined as producing the same histogram as numpy.histogramdd by using the output of `find_index` (ignoring underflow and overflow values). Additionally, -1 should be returned if a value is below the range (underflow) or is nan, and num_bins should be returned for a value above the range (overflow). 
""" # Negative, positive, integer, non-integer, binary-unrepresentable (0.1) edges basic_bin_edges = [-1, -0.5, -0.1, 0, 0.1, 0.5, 1, 2, 3, 4] failures = 0 for basic_bin_edges in [ # Negative, positive, integer, non-integer, binary-unrepresentable (0.1) edges [-1, -0.5, -0.1, 0, 0.1, 0.5, 1, 2, 3, 4], # A single infinite bin: [-np.inf, np.inf] [], # Half-infinite bins (lower or upper edge) & [-inf, .1, +inf] [0.1], # Single bin with finite edges & +/-inf-edge(s)-added variants [-0.1, 0.1], ]: # Bin edges from above, w/ and w/o +/-inf as left and/or right edges for le, re in [ (None, None), (-np.inf, None), (None, np.inf), (-np.inf, np.inf) ]: bin_edges = deepcopy(basic_bin_edges) if le is not None: bin_edges = [le] + bin_edges if re is not None: bin_edges = bin_edges + [re] if len(bin_edges) < 2: continue logging.debug('bin_edges being tested: %s', bin_edges) bin_edges = SmartArray(np.array(bin_edges, dtype=FTYPE)) num_bins = len(bin_edges) - 1 underflow_idx = -1 overflow_idx = num_bins # # Construct test values to try out # non_finite_vals = [-np.inf, +np.inf, np.nan] # Values within bins (i.e., not on edges) inbin_vals = [] for idx in range(len(bin_edges) - 1): lower_be = bin_edges[idx] upper_be = bin_edges[idx + 1] if np.isfinite(lower_be): if np.isfinite(upper_be): inbin_val = (lower_be + upper_be) / 2 else: inbin_val = lower_be + 10.5 else: if np.isfinite(upper_be): inbin_val = upper_be - 10.5 else: inbin_val = 10.5 inbin_vals.append(inbin_val) # Values above/below bin edges by one unit of floating point # accuracy eps = np.finfo(FTYPE).eps # pylint: disable=no-member below_edges_vals = [FTYPE((1 - eps)*be) for be in bin_edges] above_edges_vals = [FTYPE((1 + eps)*be) for be in bin_edges] test_vals = np.concatenate( [ non_finite_vals, bin_edges, inbin_vals, below_edges_vals, above_edges_vals, ] ) logging.trace('test_vals = %s', test_vals) # # Run tests # for val in test_vals: val = FTYPE(val) np_histvals, _ = np.histogramdd([val], np.atleast_2d(bin_edges)) nonzero_indices = np.nonzero(np_histvals)[0] # select first & only dim if np.isnan(val): assert len(nonzero_indices) == 0, str(len(nonzero_indices)) expected_idx = underflow_idx elif val < bin_edges[0]: assert len(nonzero_indices) == 0, str(len(nonzero_indices)) expected_idx = underflow_idx elif val > bin_edges[-1]: assert len(nonzero_indices) == 0, str(len(nonzero_indices)) expected_idx = overflow_idx else: assert len(nonzero_indices) == 1, str(len(nonzero_indices)) expected_idx = nonzero_indices[0] if TARGET == 'cpu': found_idx = find_index(val, bin_edges) elif TARGET == 'cuda': found_idx_ary = SmartArray(np.zeros(1, dtype=np.int)) find_index_cuda( SmartArray(np.array([val], dtype=FTYPE)).get(WHERE), bin_edges.get(WHERE), found_idx_ary.get(WHERE), ) found_idx_ary.mark_changed(WHERE) found_idx = found_idx_ary.get()[0] else: raise NotImplementedError(f"TARGET='{TARGET}'") if found_idx != expected_idx: failures += 1 msg = 'val={}, edges={}: Expected idx={}, found idx={}'.format( val, bin_edges.get(), expected_idx, found_idx ) logging.error(msg) assert failures == 0, f"{failures} failures, inspect ERROR messages above for info" logging.info('<< PASS : test_find_index >>') if __name__ == '__main__': set_verbosity(1) test_find_index() test_histogram()
[GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 s : Set E x✝¹ : E x✝ : x✝¹ ∈ s ⊢ ‖↑(↑B x✝¹) 0‖ ≤ 1 [PROOFSTEP] simp only [map_zero, norm_zero, zero_le_one] [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 s : Set E ⊢ polar B s = ⋂ (x : E) (_ : x ∈ s), {y | ‖↑(↑B x) y‖ ≤ 1} [PROOFSTEP] ext [GOAL] case h 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 s : Set E x✝ : F ⊢ x✝ ∈ polar B s ↔ x✝ ∈ ⋂ (x : E) (_ : x ∈ s), {y | ‖↑(↑B x) y‖ ≤ 1} [PROOFSTEP] simp only [polar_mem_iff, Set.mem_iInter, Set.mem_setOf_eq] [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 ⊢ polar B {0} = Set.univ [PROOFSTEP] refine' Set.eq_univ_iff_forall.mpr fun y x hx => _ [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 y : F x : E hx : x ∈ {0} ⊢ ‖↑(↑B x) y‖ ≤ 1 [PROOFSTEP] rw [Set.mem_singleton_iff.mp hx, map_zero, LinearMap.zero_apply, norm_zero] [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 y : F x : E hx : x ∈ {0} ⊢ 0 ≤ 1 [PROOFSTEP] exact zero_le_one [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 s : Set E x : E hx : x ∈ s y : F hy : y ∈ polar B s ⊢ ‖↑(↑(flip B) y) x‖ ≤ 1 [PROOFSTEP] rw [B.flip_apply] [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 s : Set E x : E hx : x ∈ s y : F hy : y ∈ polar B s ⊢ ‖↑(↑B x) y‖ ≤ 1 [PROOFSTEP] exact hy x hx [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 s : Set E ⊢ IsClosed (polar B s) [PROOFSTEP] rw [polar_eq_iInter] [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 s : Set E ⊢ IsClosed (⋂ (x : E) (_ : x ∈ s), {y | ‖↑(↑B x) y‖ ≤ 1}) [PROOFSTEP] refine' isClosed_iInter fun x => isClosed_iInter fun _ => _ [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NormedCommRing 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 s : Set E x : E x✝ : x ∈ s ⊢ IsClosed {y | ‖↑(↑B x) y‖ ≤ 1} [PROOFSTEP] exact isClosed_le (WeakBilin.eval_continuous B.flip x).norm continuous_const [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 h : SeparatingRight B ⊢ polar B Set.univ = {0} [PROOFSTEP] rw [Set.eq_singleton_iff_unique_mem] [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : 
NontriviallyNormedField 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 h : SeparatingRight B ⊢ 0 ∈ polar B Set.univ ∧ ∀ (x : F), x ∈ polar B Set.univ → x = 0 [PROOFSTEP] refine' ⟨by simp only [zero_mem_polar], fun y hy => h _ fun x => _⟩ [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 h : SeparatingRight B ⊢ 0 ∈ polar B Set.univ [PROOFSTEP] simp only [zero_mem_polar] [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 h : SeparatingRight B y : F hy : y ∈ polar B Set.univ x : E ⊢ ↑(↑B x) y = 0 [PROOFSTEP] refine' norm_le_zero_iff.mp (le_of_forall_le_of_dense fun ε hε => _) [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 h : SeparatingRight B y : F hy : y ∈ polar B Set.univ x : E ε : ℝ hε : 0 < ε ⊢ ‖↑(↑B x) y‖ ≤ ε [PROOFSTEP] rcases NormedField.exists_norm_lt 𝕜 hε with ⟨c, hc, hcε⟩ [GOAL] case intro.intro 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 h : SeparatingRight B y : F hy : y ∈ polar B Set.univ x : E ε : ℝ hε : 0 < ε c : 𝕜 hc : 0 < ‖c‖ hcε : ‖c‖ < ε ⊢ ‖↑(↑B x) y‖ ≤ ε [PROOFSTEP] calc ‖B x y‖ = ‖c‖ * ‖B (c⁻¹ • x) y‖ := by rw [B.map_smul, LinearMap.smul_apply, Algebra.id.smul_eq_mul, norm_mul, norm_inv, mul_inv_cancel_left₀ hc.ne'] _ ≤ ε * 1 := by gcongr; exact hy _ trivial _ = ε := mul_one _ [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 h : SeparatingRight B y : F hy : y ∈ polar B Set.univ x : E ε : ℝ hε : 0 < ε c : 𝕜 hc : 0 < ‖c‖ hcε : ‖c‖ < ε ⊢ ‖↑(↑B x) y‖ = ‖c‖ * ‖↑(↑B (c⁻¹ • x)) y‖ [PROOFSTEP] rw [B.map_smul, LinearMap.smul_apply, Algebra.id.smul_eq_mul, norm_mul, norm_inv, mul_inv_cancel_left₀ hc.ne'] [GOAL] 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 h : SeparatingRight B y : F hy : y ∈ polar B Set.univ x : E ε : ℝ hε : 0 < ε c : 𝕜 hc : 0 < ‖c‖ hcε : ‖c‖ < ε ⊢ ‖c‖ * ‖↑(↑B (c⁻¹ • x)) y‖ ≤ ε * 1 [PROOFSTEP] gcongr [GOAL] case h₂ 𝕜 : Type u_1 E : Type u_2 F : Type u_3 inst✝⁴ : NontriviallyNormedField 𝕜 inst✝³ : AddCommMonoid E inst✝² : AddCommMonoid F inst✝¹ : Module 𝕜 E inst✝ : Module 𝕜 F B : E →ₗ[𝕜] F →ₗ[𝕜] 𝕜 h : SeparatingRight B y : F hy : y ∈ polar B Set.univ x : E ε : ℝ hε : 0 < ε c : 𝕜 hc : 0 < ‖c‖ hcε : ‖c‖ < ε ⊢ ‖↑(↑B (c⁻¹ • x)) y‖ ≤ 1 [PROOFSTEP] exact hy _ trivial
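Read off the proof states above, the set-level content is the following (an informal restatement, not quoted from Mathlib):

\[
\operatorname{polar} B\, s \;=\; \{\, y \in F \mid \forall x \in s,\ \|B\,x\,y\| \le 1 \,\}
\;=\; \bigcap_{x \in s} \{\, y \mid \|B\,x\,y\| \le 1 \,\},
\]

so $0$ always belongs to the polar, the polar is closed as an intersection of closed sublevel sets of continuous (weak-topology) maps, and when $B$ is right-separating the polar of the whole space collapses to $\{0\}$.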
Formal statement is: lemma convex_halfspace_gt: "convex {x. inner a x > b}"
Informal statement is: The set of points $x$ such that $a \cdot x > b$ is convex.
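A one-line check of the convexity claim (my informal sketch, not the Isabelle proof): for $x, y$ in the halfspace and $t \in [0,1]$,

\[
\langle a,\ t x + (1-t) y \rangle \;=\; t \langle a, x\rangle + (1-t)\langle a, y\rangle \;>\; t b + (1-t) b \;=\; b,
\]

so every convex combination of points in the set stays in the set.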
#redirect Davis Healthcare Center
This is some documentation. More documentation. > module Main > > import Data.Vect > > data Cat = Cas | Luna | Sherlock > > f : (cat: Cat) -> String > > getName : (cat: Cat) -> String > getName Cas = "Cas" > getName Luna = "Luna" > getName Sherlock = "Sherlock" An intermission. > plusTwo : (n: Nat) -> Nat > plusTwo n = ?plusTwo_rhs > > g : (n: Nat) -> (b: Bool) -> String > g n b = ?g_rhs > > num : Nat > num = ?n_rhs > > append : Vect n a -> Vect m a -> Vect (n + m) a Some closing thoughts.