Providing individuals with opportunities to excel is important to Infinity Law Group, and sponsoring the 2016 3rd Annual Asian Heritage Law School Scholarship is at the heart of this belief. We are happy to announce Keren Salim as our 2016 award recipient. She has demonstrated the hard work and dedication needed to pursue a legal education.
Keren Salim is a South Asian Muslim American from North Carolina. She attended Salem College from 2010 through 2014, earning her Bachelor of Arts degree. For the past two years she has completed a fellowship in North Carolina, traveling across the state to engage with nonprofit community organizations working on systemic issues in the following focus areas: social justice and equity, community economic development, public education, environment, and democracy/civic engagement. She plans to continue her education at Northeastern University School of Law in Boston, Massachusetts.
Infinity Law Group LLC, along with divorce attorney Gabriel Cheong, supports individuals looking to further their education in this highly demanding industry. The Asian Heritage Law School Scholarship aims to support students of Asian heritage, providing a $1,000 scholarship every fall semester to those pursuing a legal degree. The goal of the scholarship is to encourage more Asian Americans to pursue a degree in this field.
The scholarship is available to students who are attending or planning to attend law school. Applicants must be U.S. citizens or otherwise authorized to work in the country and must be 1L, 2L, or 3L students in the fall of the application year. They must have a minimum cumulative undergraduate GPA of 3.0 and at least one parent of Asian ancestry. |
module Main

StringOrInt : Bool -> Type
StringOrInt x = case x of
                     True => Int
                     False => String

getStringOrInt : (x : Bool) -> StringOrInt x
getStringOrInt x = case x of
                        True => 42
                        False => "Forty two"
valToString : (x : Bool) -> StringOrInt x -> String
valToString x val = case x of
                         True => ?xtrueType
                         False => val
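-- Note (added, a sketch only; the hole above is left in place): one
-- conventional way to complete this function is to pattern-match on the
-- Bool in the clauses instead of a case block, e.g.
--   valToString True  val = cast val   -- here val : Int, cast renders it
--   valToString False val = val        -- here val is already a String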
main : IO ()
main = putStrLn "Hello, World!"
|
[STATEMENT]
lemma adm_subseteq[simp]:
assumes "cont f"
shows "adm (\<lambda>a. f a \<subseteq> S)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. adm (\<lambda>a. f a \<subseteq> S)
[PROOF STEP]
by (rule admI)(auto simp add: cont2contlubE[OF assms] lub_set) |
# Libraries
library(htmlwidgets)
library(dplyr)
library(ggplot2)
library(dygraphs)
library(htmltools)
library(widgetframe)
library(icesSAG)
library(plotly)
library(shiny)
library(shinythemes)
library(glue)
library(sf)
library(leaflet)
library(fisheryO)
library(DT)
library(tidyverse)
library(icesVocab)
library(tm)
library(shinyWidgets)
library(shinyjs)
library(reshape2)
library(scales)
library(ggradar)
library(icesFO)
library(icesTAF)
# required if using most recent version of sf
sf::sf_use_s2(FALSE)
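# (note: since sf 1.0.0 the s2 spherical geometry engine is on by default;
# disabling it here falls back to planar GEOS operations, presumably because
# the shapefile utilities sourced below expect that behaviour)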
################################
# sources
source("Shiny/utilities_load_shapefiles.r")
source("Shiny/utilities_shiny_formatting.r")
source("Shiny/utilities_plotting.r")
source("Shiny/utilities_mapping.r")
source("Shiny/utilities_sag_data.r")
source("Shiny/utilities_shiny_Input.r")
source("Shiny/utilities_SID_data.r")
source("Shiny/utilities_catch_scenarios.r")
## If this code is run for the first time and the SAG data is not present on the local machine
## the following line will download the last 5 years of SAG data (summary and ref points).
## This process will take several minutes but, once the data is in the local folder,
## the app will run much faster.
if (!file.exists("SAG_ 2021/SAG_summary.csv")) {
source("Shiny/update_SAG_data.r")
}
# ui and server
# source("Shiny/ui_05052021.r")
# source("Shiny/server_18052021.r")
### run app
# shinyApp(server = server, ui = ui)
### runApp function (Colin's way of running the app, which shows the PNG images in the www folder)
runApp("temp")
|
module Boltzmann
export BernoulliRBM,
GRBM,
fit,
transform,
generate,
components
include("rbm.jl")
end
|
import numpy as np
import pandas as pd
from astropy import units
from astropy.cosmology import FLRW, default_cosmology
from scipy.integrate import quad
from scipy.interpolate import InterpolatedUnivariateSpline
from scipy.spatial import cKDTree, minkowski_distance
class FastSeparation2Angle(object):
_is_comoving = True
"""
FastSeparation2Angle(zmin=0.001, zmax=100.0, nlogsamples=100, units=False)
Fast conversion from transverse comoving separation to angles.
This class offers two methods to compute the separation angle between two
points on the sky corresponding to a given transverse comoving distance
in kpc at a given redshift. Next to the exact calculation using the
astropy.cosmology module, it offers approximations using a cubic spline fit
to values precomputed from a given cosmological model.
Parameters
----------
zmin : positive float
Minimum redshift at which data for the spline fit is computed.
zmax : positive float
Maximum redshift at which data for the spline fit is computed.
nlogsamples : positive integer
Number of logarithmically spaced sampling points to which the spline
is fitted.
"""
# use the default cosmology of astropy
cosmology = default_cosmology.get()
def __init__(self, zmin=0.001, zmax=100.0, nlogsamples=100, units=False):
if zmax <= zmin:
raise ValueError("zmax must be larger than zmin")
# compute a logarithmically spaced redshift sampling
self.z_log_min = np.log10(zmin)
self.z_log_max = np.log10(zmax)
self.nlogsamples = nlogsamples
self.z_log_samples = np.logspace(
self.z_log_min, self.z_log_max, self.nlogsamples)
# fit the cubic spline
self._fit_spline()
def set_comoving(self):
self._is_comoving = True
self._fit_spline()
def set_physical(self):
self._is_comoving = False
self._fit_spline()
def set_cosmology(self, cosmology):
"""
Set the cosmological model used for the separation-to-angle conversion.
Parameters
----------
cosmology : astropy.cosmology.core.FLRW subclass or string
class that provides methods to compute the number of arcseconds
per kpc (comoving or proper), or a string specifying a predefined
cosmology in astropy.
Examples
--------
>>> get_angle = FastSeparation2Angle()
>>> get_angle.set_cosmology("WMAP7")
>>> get_angle.cosmology
FlatLambdaCDM(name="WMAP7", H0=70.4 km / (Mpc s), Om0=0.272,
Tcmb0=2.725 K, Neff=3.04, m_nu=[0. 0. 0.] eV, Ob0=0.0455)
"""
if type(cosmology) is str:
self.cosmology = \
default_cosmology.get_cosmology_from_string(cosmology)
elif not issubclass(type(cosmology), FLRW):
raise TypeError(
"cosmology must be subclass of type %s" % str(FLRW))
else:
self.cosmology = cosmology
self._fit_spline()
def _fit_spline(self):
"""
Update the internal cubic spline fit to samples of the exact evaluation
of the separation angle on the sky between two points corresponding to
1 kpc transverse separation at a given redshift.
"""
# use the redshift samples computed at instantiation to fit the cubic spline
self.spline = InterpolatedUnivariateSpline(
self.z_log_samples, self.exact(self.z_log_samples, 1.0), k=3)
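# (the spline only needs to be fitted for a 1 kpc separation: exact() is
#  linear in scale_kpc, so fast() below simply rescales the 1 kpc result)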
def exact(self, z, scale_kpc):
"""
Compute the separation angle on the sky between two points
corresponding to a transverse comoving separation at a given redshift.
Parameters
----------
z : array_like
The redshift at which the separation angle is computed.
scale_kpc : positive float
The transverse comoving separation in kpc.
Returns
-------
results : array_like or astropy.units.quantity.Quantity
The separation angle in degrees. If self.units is true, the result
is given with an astropy unit
"""
# convert to an astropy unit object
r_kpc = scale_kpc * units.kpc
# compute the separation angle in arcseconds
if self._is_comoving:
arcsec = self.cosmology.arcsec_per_kpc_comoving(z) * r_kpc
else:
arcsec = self.cosmology.arcsec_per_kpc_proper(z) * r_kpc
return arcsec.to(units.deg).value
def fast(self, z, scale_kpc):
"""
Fast computation of the separation angle on the sky between two points
corresponding to a transverse comoving separation at a given redshift.
Uses a spline fit to the exact result to improve performance at the cost
of accuracy.
Parameters
----------
z : array_like
The redshift at which the separation angle is computed.
scale_kpc : float
The projected separation in kpc.
Returns
-------
results : float or astropy.units.quantity.Quantity
The separation angle in degrees. If self.units is true, the result
is given with an astropy unit
Notes
-----
Accuracy loss is usually negligible, but evaluation can be several
tens of times faster than self.exact.
"""
# evaluate the spline fit that gives the angular size of 1 kpc comoving
# at the given redshift and scale it to the input scale.
return self.spline(z) * scale_kpc
class SphericalKDTree(object):
"""
SphericalKDTree(RA, DEC, leafsize=16)
A binary search tree based on scipy.spatial.cKDTree that works with
celestial coordinates. Provides methods to find pairs within angular
apertures (ball) and annuli (shell). Data is internally represented on a
unit-sphere in three dimensions (x, y, z).
Parameters
----------
RA : array_like
List of right ascensions in degrees.
DEC : array_like
List of declinations in degrees.
leafsize : int
The number of points at which the algorithm switches over to
brute-force.
"""
def __init__(self, RA, DEC, leafsize=16):
# convert angular coordinates to 3D points on unit sphere
pos_sphere = self._position_sky2sphere(RA, DEC)
self.tree = cKDTree(pos_sphere, leafsize)
@staticmethod
def _position_sky2sphere(RA, DEC):
"""
Maps celestial coordinates onto a unit-sphere in three dimensions
(x, y, z).
Parameters
----------
RA : float or array_like
Single or list of right ascensions in degrees.
DEC : float or array_like
Single or list of declinations in degrees.
Returns
-------
pos_sphere : array_like
Data points (x, y, z) representing input points on the unit-sphere,
shape of output is (3,) for a single input point or (N, 3) for a
set of N input points.
"""
ras_rad = np.deg2rad(RA)
decs_rad = np.deg2rad(DEC)
try:
pos_sphere = np.empty((len(RA), 3))
except TypeError:
pos_sphere = np.empty((1, 3))
cos_decs = np.cos(decs_rad)
pos_sphere[:, 0] = np.cos(ras_rad) * cos_decs
pos_sphere[:, 1] = np.sin(ras_rad) * cos_decs
pos_sphere[:, 2] = np.sin(decs_rad)
return np.squeeze(pos_sphere)
@staticmethod
def _distance_sky2sphere(dist_sky):
"""
Converts angular separation in celestial coordinates to the
Euclidean distance in (x, y, z) space.
Parameters
----------
dist_sky : float or array_like
Single or list of separations in celestial coordinates.
Returns
-------
dist_sphere : float or array_like
Celestial separation converted to (x, y, z) Euclidean distance.
"""
dist_sky_rad = np.deg2rad(dist_sky)
dist_sphere = np.sqrt(2.0 - 2.0 * np.cos(dist_sky_rad))
return dist_sphere
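# (sqrt(2 - 2*cos(theta)) is the chord length between two unit vectors
#  separated by the angle theta, i.e. 2*sin(theta/2); _distance_sphere2sky
#  below inverts this relation)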
@staticmethod
def _distance_sphere2sky(dist_sphere):
"""
Converts Euclidean distance in (x, y, z) space to angular separation in
celestial coordinates.
Parameters
----------
dist_sphere : float or array_like
Single or list of Euclidean distances in (x, y, z) space.
Returns
-------
dist_sky : float or array_like
Euclidean distance converted to celestial angular separation.
"""
dist_sky_rad = np.arccos(1.0 - dist_sphere**2 / 2.0)
dist_sky = np.rad2deg(dist_sky_rad)
return dist_sky
def query_radius(self, RA, DEC, r):
"""
Find all data points within an angular aperture r around a reference
point with coordinates (RA, DEC), obeying the spherical geometry.
Parameters
----------
RA : float
Right ascension of the reference point in degrees.
DEC : float
Declination of the reference point in degrees.
r : float
Maximum separation of data points from the reference point.
Returns
-------
idx : array_like
Positional indices of matching data points in the search tree data
with separation < r.
dist : array_like
Angular separation of matching data points from reference point.
"""
point_sphere = self._position_sky2sphere(RA, DEC)
# find all points that lie within r
r_sphere = self._distance_sky2sphere(r)
idx = self.tree.query_ball_point(point_sphere, r_sphere)
# compute pair separation
dist_sphere = minkowski_distance(self.tree.data[idx], point_sphere)
dist = self._distance_sphere2sky(dist_sphere)
return idx, dist
def query_shell(self, RA, DEC, rmin, rmax):
"""
Find all data points within an angular annulus rmin <= r < rmax around
a reference point with coordinates (RA, DEC), obeying the spherical
geometry.
Parameters
----------
RA : float
Right ascension of the reference point in degrees.
DEC : float
Declination of the reference point in degrees.
rmin : float
Minimum separation of data points from the reference point.
rmax : float
Maximum separation of data points from the reference point.
Returns
-------
idx : array_like
Positional indices of matching data points in the search tree data
with rmin <= separation < rmax.
dist : array_like
Angular separation of matching data points from reference point.
"""
# find all points that lie within rmax
idx, dist = self.query_radius(RA, DEC, rmax)
# keep only pairs with separation >= rmin (i.e. remove those with r < rmin)
dist_mask = dist >= rmin
idx = np.compress(dist_mask, idx)
dist = np.compress(dist_mask, dist)
return idx, dist
def count_pairs(
group_reference, group_other, rlimits, comoving=False,
cosmology=None, inv_distance_weight=True):
"""
Count pairs between a reference and an unknown data catalogue with a
constant physical or comoving separation r_min <= r < r_max using k-nearest
neighbour search. Individual object weights from both catalogues and an
inverse distance weight can be included.
Parameters
----------
group_reference : tuple (as returned by pandas.DataFrame.groupby)
Reference object catalogue around which the other catalogue is queried,
must contain a pandas.DataFrame with keys 'RA' (right ascension), 'DEC'
(declination), 'z' (redshift) and optionally 'weights' (object weights)
group_other : tuple (as returned by pandas.DataFrame.groupby)
Other catalogue from which pairs are selected using a k-nearest
neighbour tree, must contain a pandas.DataFrame with keys 'RA' (right
ascension), 'DEC' (declination) and optionally 'weights' (object
weights)
rlimits : tuple
Tuple of minimum and maximum projected comoving/physical distance used
to select object pairs.
comoving : bool
Whether the rlimits are comoving or physical projected distances.
cosmology : astropy.cosmology
An astropy cosmology instance used for distance calculations.
inv_distance_weight : bool
Whether or not to use the inverse distance of two partners as
additional weight for the pair.
Returns
-------
pair_counts : pandas.DataFrame
DataFrame with reference catalogue indices and sum of pair weights
associated with each reference object.
"""
# unpack the pandas groupby tuples (region index, DataFrame)
region_idx, data_reference = group_reference
region_idx, data_other = group_other
try:
weights_other = data_other.weights.to_numpy()
except Exception: # default to unity weight
weights_other = np.ones(len(data_other))
# initialize fast angular diameter distance calculator
get_angle = FastSeparation2Angle()
get_angle.set_cosmology(cosmology)
if comoving:
get_angle.set_comoving()
else:
get_angle.set_physical()
# compute annuli
ang_min = get_angle.fast(data_reference.z, rlimits[0])
ang_max = ang_min * rlimits[1] / rlimits[0]
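# (valid because fast() scales linearly with the projected separation, so
#  ang_max can be obtained from ang_min without a second spline evaluation)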
# compute pair counts
if len(data_other) > 1:
pairs = np.empty(len(data_reference))
tree = SphericalKDTree(data_other.RA, data_other.DEC)
for n, (row_idx, item) in enumerate(data_reference.iterrows()):
# query the unknown tree between ang_min and ang_max
idx, distance = tree.query_shell(
item.RA, item.DEC, ang_min[n], ang_max[n])
# compute pair count including optional weights
if len(idx) > 0:
weight = weights_other[idx]
if inv_distance_weight:
count = np.sum(weight / distance)
# We need to normalise the weights. If all reference
# objects were at the same redshift within each bin, this
# would simply divide out in the estimator. Otherwise the
# weight evolves over the width of the reference bin.
norm = np.log(ang_max[n] / ang_min[n]) # relative norm
pairs[n] = count / norm
else:
pairs[n] = np.sum(weight)
if "weights" in item: # reference weight
pairs[n] *= item.weights
else: # fallback for an empty slice
pairs[n] = 0.0
else: # fallback for an empty group
pairs = np.zeros(len(data_reference))
# indices are needed to map the counts back to the correct reference object
pair_counts = pd.DataFrame({"pairs": pairs}, index=data_reference.index)
return pair_counts
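# --------------------------------------------------------------------------
# Hedged usage sketch (not part of the original module): it only exercises the
# classes defined above; the redshift, coordinates and annulus radii below are
# made-up illustration values.
if __name__ == "__main__":
    # angle subtended by 100 kpc (comoving) at z = 0.3: exact vs. spline fit
    get_angle = FastSeparation2Angle()
    print(get_angle.exact(0.3, 100.0), get_angle.fast(0.3, 100.0))
    # neighbours of (RA, DEC) = (150.0, 2.0) deg within a 0.1-0.5 deg annulus
    tree = SphericalKDTree(
        RA=np.array([150.05, 150.2, 151.0]),
        DEC=np.array([2.02, 1.9, 2.5]))
    idx, dist = tree.query_shell(150.0, 2.0, rmin=0.1, rmax=0.5)
    print(idx, dist)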
|
import geopy.distance as gpd
import networkx as nx
from python_files.Utility import sc_dir_path, ar_dir_path, normalize
def is_node_in_graph(node, node_attrs, nodes_list):
"""
Checks if node can be found inside node_list.
:param node: The node to search
:param node_attrs: the dictionary of node's attributes
:param nodes_list: list of nodes expressed as tuples (name, {attributes})
:return: True if node is found among the nodes of node_list, False otherwise
"""
found = False
for node2 in nodes_list:
if normalize(node_attrs['label']).lower() == normalize(node2[1]['label']).lower() and \
normalize(node_attrs['country']).lower() == normalize(node2[1]['country']).lower():
found = True
break
return found
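# build_travels_graph (summary of the code below): start from the air-routes
# graph and, for every sister city that has no airport node of its own, find
# the geodesically nearest airport and connect the city to it with an edge of
# weight 1; the merged graph is then written to 'travels_routes.gexf'.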
def build_travels_graph(air_routes_graph, sister_cities_graph):
travels_graph = air_routes_graph
ar_nodes = air_routes_graph.nodes.data()
length = len(sister_cities_graph.nodes)
iter = 0
for city, attrs in sister_cities_graph.copy().nodes(True):
iter += 1
print("ITERAZIONE ", iter, " di ", length, " con", attrs['label'])
if not is_node_in_graph(city, attrs, ar_nodes):
# Searching for the nearest airport city
min_dist = float("inf")
nearest_airport = None
current_pos = (attrs['lat'], attrs['lon']) # (latitude, longitude)
for airport, attrs_airport in air_routes_graph.copy().nodes(True):
airport_pos = (attrs_airport['lat'], attrs_airport['lon'])
# Compute the geodesic distance on the WGS-84 ellipsoid
dist = gpd.distance(current_pos, airport_pos).km
if dist < min_dist:
min_dist = dist
nearest_airport = airport
travels_graph.add_node(city, **attrs)
travels_graph.add_edge(city, nearest_airport, weight=1)
print("nodo aggiunto")
nx.write_gexf(travels_graph, ar_dir_path + r'\travels_routes.gexf')
# nx.write_gexf(travels_graph,
# r'C:\Users\MARANGONI\IdeaProjects\ComparisonBetweenNetworks\data\airline_routes_data\travels_routes.gexf')
# sister_cities_graph = nx.readwrite.read_gexf(
# r'C:\Users\MARANGONI\IdeaProjects\ComparisonBetweenNetworks\data\sister_cities_data\sister_cities.gexf')
# air_routes_graph = nx.readwrite.read_gexf(
# r'C:\Users\MARANGONI\IdeaProjects\ComparisonBetweenNetworks\data\airline_routes_data\reduced_routes.gexf')
air_routes_graph = nx.readwrite.read_gexf(ar_dir_path + r'\reduced_routes.gexf')
sister_cities_graph = nx.readwrite.read_gexf(sc_dir_path + r'\sister_cities.gexf')
build_travels_graph(air_routes_graph, sister_cities_graph)
|
{-
True
0
False
False
4
42
42
True
False
[33, 42, 42, 42, 42]
[42, 42, 33, 42, 42]
[42, 42, 42, 42, 33]
[33, 42, 42, 42]
[42, 42, 33, 42]
[42, 42, 42, 33]
False
True
-}
import Data.Vector
main : IO ()
main = do
  let e = the (Vector Int) empty
  printLn (null e)
  printLn (length e)
  printLn (elem 42 e)
  let a = replicate 4 (the Int 42)
  printLn (null a)
  printLn (length a)
  printLn (a !! 0)
  printLn (a !! 3)
  printLn (elem 42 a)
  printLn (elem 33 a)
  printLn (unsafeInsertAt 0 33 a)
  printLn (unsafeInsertAt 2 33 a)
  printLn (unsafeInsertAt (length a) 33 a)
  printLn (unsafeReplaceAt 0 33 a)
  printLn (unsafeReplaceAt 2 33 a)
  printLn (maybe a (\index => unsafeReplaceAt index 33 a) (lastIndex a))
  printLn (elem 33 a)
  printLn (elem 33 (singleton 33))
-- Local Variables:
-- idris-load-packages: ("cil")
-- End:
|
%% trigradient
% Below is a demonstration of the features of the |trigradient| function
%%
clear; close all; clc;
%% Syntax
% |[ux,uy]=trigradient(TRI,V,INT);|
%% Description
% UNDOCUMENTED
%% Examples
%
%%
%
% <<gibbVerySmall.gif>>
%
% _*GIBBON*_
% <www.gibboncode.org>
%
% _Kevin Mattheus Moerman_, <[email protected]>
%%
% _*GIBBON footer text*_
%
% License: <https://github.com/gibbonCode/GIBBON/blob/master/LICENSE>
%
% GIBBON: The Geometry and Image-based Bioengineering add-On. A toolbox for
% image segmentation, image-based modeling, meshing, and finite element
% analysis.
%
% Copyright (C) 2006-2022 Kevin Mattheus Moerman and the GIBBON contributors
%
% This program is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation, either version 3 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program. If not, see <http://www.gnu.org/licenses/>.
|
import Data.Vect
import Data.Fin
Vect_ext : (v : Vect n a) -> (w : Vect n a) ->
           ((i : Fin n) -> index i v = index i w) -> v = w
Weird : (v: Vect n a) -> v = v
Weird v = Vect_ext ?hole0 ?hole1 ?hole2
f : Bool -> Nat
f True = 0
f True = ?help
f False = 1
|
(* Title: HOL/Auth/n_flash_nodata_cub_lemma_on_inv__94.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The n_flash_nodata_cub Protocol Case Study*}
theory n_flash_nodata_cub_lemma_on_inv__94 imports n_flash_nodata_cub_base
begin
section{*All lemmas on causal relation between inv__94 and some rule r*}
lemma n_PI_Remote_GetVsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_Get src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_Get src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Remote_GetXVsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_GetX src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_GetX src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_NakVsinv__94:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Nak dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Nak dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__0Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__1Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__2Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Get__part__0Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Get__part__1Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Put_HeadVsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_PutVsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Put_DirtyVsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_NakVsinv__94:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_Nak_HomeVsinv__94:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Nak_Home dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Get_Nak_Home dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_PutVsinv__94:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_Put_HomeVsinv__94:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Put_Home dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Get_Put_Home dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_GetX)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') p__Inv4) ''CacheState'')) (Const CACHE_E))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeUniMsg'') ''Cmd'')) (Const UNI_Get)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_GetX))) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''HomeProc'')) (Const false))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__0Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__1Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__2Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_GetX__part__0Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_GetX__part__1Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeUniMsg'') ''Cmd'')) (Const UNI_Put)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const false))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_1Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_2Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_3Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_4Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_5Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_6Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7__part__0Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7__part__1Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__0Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__1Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_HomeVsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_Home_NODE_GetVsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8Vsinv__94:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp~=p__Inv4)\<or>(src~=p__Inv4\<and>pp=p__Inv4)\<or>(src~=p__Inv4\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_NODE_GetVsinv__94:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp~=p__Inv4)\<or>(src~=p__Inv4\<and>pp=p__Inv4)\<or>(src~=p__Inv4\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_9__part__0Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_9__part__1Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_10_HomeVsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_10Vsinv__94:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp~=p__Inv4)\<or>(src~=p__Inv4\<and>pp=p__Inv4)\<or>(src~=p__Inv4\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_11Vsinv__94:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_NakVsinv__94:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_Nak_HomeVsinv__94:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_Nak_Home dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_GetX_Nak_Home dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_PutXVsinv__94:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_PutX_HomeVsinv__94:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_PutX_Home dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_GetX_PutX_Home dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_PutVsinv__94:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Put dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Put dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_PutXVsinv__94:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_PutX dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_PutX dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Local_Get_GetVsinv__94:
assumes a1: "(r=n_PI_Local_Get_Get )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_GetX__part__0Vsinv__94:
assumes a1: "(r=n_PI_Local_GetX_GetX__part__0 )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_GetX__part__1Vsinv__94:
assumes a1: "(r=n_PI_Local_GetX_GetX__part__1 )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Nak_HomeVsinv__94:
assumes a1: "(r=n_NI_Nak_Home )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Local_PutVsinv__94:
assumes a1: "(r=n_NI_Local_Put )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Local_PutXAcksDoneVsinv__94:
assumes a1: "(r=n_NI_Local_PutXAcksDone )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__94 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_PutX__part__0Vsinv__94:
assumes a1: "r=n_PI_Local_GetX_PutX__part__0 " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_WbVsinv__94:
assumes a1: "r=n_NI_Wb " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_3Vsinv__94:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_3 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_1Vsinv__94:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_1 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_ReplaceVsinv__94:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_PI_Remote_Replace src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_ReplaceVsinv__94:
assumes a1: "r=n_PI_Local_Replace " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_existsVsinv__94:
assumes a1: "\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_InvAck_exists src pp" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_PutXVsinv__94:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_PI_Remote_PutX dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvVsinv__94:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Inv dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_PutXVsinv__94:
assumes a1: "r=n_PI_Local_PutX " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_Get_PutVsinv__94:
assumes a1: "r=n_PI_Local_Get_Put " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ShWbVsinv__94:
assumes a1: "r=n_NI_ShWb N " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX_HeadVld__part__0Vsinv__94:
assumes a1: "r=n_PI_Local_GetX_PutX_HeadVld__part__0 N " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ReplaceVsinv__94:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Replace src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX__part__1Vsinv__94:
assumes a1: "r=n_PI_Local_GetX_PutX__part__1 " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_exists_HomeVsinv__94:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_exists_Home src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Replace_HomeVsinv__94:
assumes a1: "r=n_NI_Replace_Home " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Nak_ClearVsinv__94:
assumes a1: "r=n_NI_Nak_Clear " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_2Vsinv__94:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_2 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX_HeadVld__part__1Vsinv__94:
assumes a1: "r=n_PI_Local_GetX_PutX_HeadVld__part__1 N " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_FAckVsinv__94:
assumes a1: "r=n_NI_FAck " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__94 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
end
|
"""
get_dataBath()
Returns the input data from the bath as an array of shape {nbBath,4}.
This array is stored in the order {gamma,m,a0,Mbar=M(<a0)}.
It is later used to get nbBath and to fill in tabgammaBath,tabmBath,taba0Bath,tabMbarBath,tabNbarBath.
"""
function get_dataBath()
dataBath, headerBath = readdlm(INPUTBATH,header=true) # Reading the data file with the header
return dataBath # Output the entire data
end
##################################################
"""
dataBath
Input data from the Bath.
Use `get_dataBath()` to fill it in.
"""
const dataBath = get_dataBath()
"""
nbBath
Number of bath populations.
Initialize `dataBath` with `get_dataBath()` first before looking at this value.
"""
const nbBath = size(dataBath)[1]
"""
gBath(gamma)
Dimensionless function g(gamma), where gamma is the cusp of the family.
Used to make the translation between N(a) in SMA-space and N(<a) in radius space.
"""
function gBath(gamma::Float64)
return 2.0^(-gamma)*sqrt(pi)*(SpecialFunctions.gamma(1.0+gamma))/(SpecialFunctions.gamma(gamma-0.5)) # Output of g(gamma)
end
##################################################
# Function that returns the normalisation prefactor
# Nbar in SMA-space, for a given set {gamma,m,a0,Mbar}
##################################################
"""
get_NbarBath(gamma,m,a0,Mbar)
Returns the normalization prefactor NBar in SMA-space, for a given set {gamma,m,a0,Mbar}.
The prefactor Nbar is such that `N(a)=Nbar*(a/a0)^(2-gamma)`.
# Arguments
- `gamma::Float64`: Cusp of the family.
- `m ::Float64`: Individual masses of the family.
- `a0 ::Float64`: Scale radius of the family.
- `Mbar ::Float64`: Enclosed mass within a radius a0 of the family.
"""
function get_NbarBath(gamma::Float64,m::Float64,a0::Float64,Mbar::Float64)
return (3.0-gamma)*gBath(gamma)*Mbar/(a0*m) # Value of Nbar such that N(a)=Nbar*(a/a0)^(2-gamma) in SMA-space
end
##################################################
"""
tabgammaBath
Array containing the cusp index for all the families.
"""
const tabgammaBath = MVector{nbBath,Float64}([dataBath[iBath,1] for iBath=1:nbBath])
"""
tabmBath
Array containing the individual masses for all the families.
"""
const tabmBath = MVector{nbBath,Float64}([dataBath[iBath,2] for iBath=1:nbBath])
"""
taba0Bath
Array containing the scale radius `a0` for all the families.
"""
const taba0Bath = MVector{nbBath,Float64}([dataBath[iBath,3] for iBath=1:nbBath])
"""
tabMbarBath
Array containing the enclosed masses, `Mbar=M(<a0)` for all the families.
"""
const tabMbarBath = MVector{nbBath,Float64}([dataBath[iBath,4] for iBath=1:nbBath])
"""
tabNbarBath
Array containing the prefactors `Nbar` for all the families.
"""
const tabNbarBath = MVector{nbBath,Float64}([get_NbarBath(tabgammaBath[iBath],tabmBath[iBath],taba0Bath[iBath],tabMbarBath[iBath]) for iBath=1:nbBath])
##################################################
"""
MenclosedBath(iBath,a)
Returns the enclosed mass `M(<a)` for the bath population indexed by iBath.
Equal to `M(<a)=Mbar*(a/a0)^(3-gamma)`.
"""
function MenclosedBath(iBath::Int64,a::Float64)
gamma = tabgammaBath[iBath] # Cusp index of the current bath population
a0 = taba0Bath[iBath] # Scale radius of the current bath population
Mbar = tabMbarBath[iBath] # Enclosed mass M(<a0) for the current bath population
return Mbar*(a/a0)^(3.0-gamma) # Output
end
"""
NBath(iBath,a)
Returns the distribution function in SMA-space of `N(a)` for the bath component indexed by iBath.
Equal to `N(a)=Nbar*(a/a0)^(2-gamma)`.
"""
function NBath(iBath::Int64,a::Float64)
gamma = tabgammaBath[iBath] # Cusp index of the current bath population
    Nbar  = tabNbarBath[iBath] # Normalisation prefactor in SMA-space of the current bath population
    a0    = taba0Bath[iBath] # Scale radius of the current bath population
    #####
return Nbar*(a/a0)^(2.0-gamma) # Output
end
"""
NBathTot(a)
Returns the mass^2-weighted total distribution function in SMA-space, i.e. the sum over the bath components of `m^2*N(a)`.
"""
function NBathTot(a::Float64)
res = 0.0
for iBath=1:nbBath # Loop over the bath components
m = tabmBath[iBath]^(2) # Individual mass^2 of the current bath component
res += m*NBath(iBath,a) # Contribution from the bath component
end
return res
end
"""
fjBath(a,j)
Returns the conditional PDF fj(j|a).
Normalized so that integrating `fj(j|a)` over `[jlc(a),1]` yields 1.
Proportional to `2j` in the range `[jlc(a),1]`.
# Remark:
- Independent of the family of the bath.
- Should add a test to ensure that one has `jlc<j<1.0`.
"""
function fjBath(a::Float64,j::Float64)
return (2.0*j)/(1.0-(jlc(a))^(2)) # Output
end
"""
dfjBathdj(a,j)
Returns the derivative with respect to `j` of the conditional PDF fj(j|a).
Here `fj(j|a)` is normalized so that integrating it over `[jlc(a),1]` yields 1.
# Remark:
- Independent of the family of the bath.
- Should add a test to ensure that one has `jlc<j<1.0`.
"""
function dfjBathdj(a::Float64,j::Float64)
return (2.0)/(1.0-(jlc(a))^(2))
end
"""
FtotBath(a,j)
Wrapper over the distribution functions that appears as an integrand in the DRR coefficients DRRJJ and DRRJ.
"""
function FtotBath(a::Float64,j::Float64)
res = 0.0 # Initialising the result
#####
for iBath=1:nbBath # Loop over the bath components
m = tabmBath[iBath] # Individual mass of the current bath component
res += m^(2)*NBath(iBath,a) # Contribution from the bath component
end
res *= fjBath(a,j)
#####
return res # Output
end
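##################################################
"""
    printBathSummary(a,j)
Illustrative helper, not part of the original file: it prints the enclosed mass,
the SMA-space distribution and the DRR integrand of every bath family at a
given semimajor axis `a` and reduced angular momentum `j`. It assumes that
`dataBath` has been filled from INPUTBATH and that `jlc(a)` is defined
elsewhere in the code base.
"""
function printBathSummary(a::Float64,j::Float64)
    for iBath=1:nbBath # Loop over the bath components
        println("Family ",iBath,": M(<a) = ",MenclosedBath(iBath,a),", N(a) = ",NBath(iBath,a))
    end
    println("Mass^2-weighted total N(a): ",NBathTot(a))
    println("DRR integrand Ftot(a,j):    ",FtotBath(a,j))
end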
|
lemma sup_measure_F_mono: "finite I \<Longrightarrow> J \<subseteq> I \<Longrightarrow> sup_measure.F id J \<le> sup_measure.F id I" |
lemma mpoly_base_conv: fixes x :: "'a::comm_ring_1" shows "0 = poly 0 x" "c = poly [:c:] x" "x = poly [:0,1:] x" |
module Test.Int16
import Data.Prim.Int16
import Data.SOP
import Hedgehog
import Test.RingLaws
allInt16 : Gen Int16
allInt16 = int16 (linear (-0x8000) 0x7fff)
prop_ltMax : Property
prop_ltMax = property $ do
b8 <- forAll allInt16
(b8 <= MaxInt16) === True
prop_ltMin : Property
prop_ltMin = property $ do
b8 <- forAll allInt16
(b8 >= MinInt16) === True
prop_comp : Property
prop_comp = property $ do
[m,n] <- forAll $ np [allInt16, allInt16]
toOrdering (comp m n) === compare m n
export
props : Group
props = MkGroup "Int16" $
[ ("prop_ltMax", prop_ltMax)
, ("prop_ltMin", prop_ltMin)
, ("prop_comp", prop_comp)
] ++ ringProps allInt16
|
At the 2012 Consultative Group meeting of the Global Facility for Disaster Reduction and Recovery (GFDRR), the Haitian delegation shared a "bottom-up" approach to disaster reduction and management based on community integration and sustainable development with a group of experts from approximately 38 nations.
|
For $b > 1$, the function $-\log_b(x)$ is convex on the interval $(0, \infty)$. |
[GOAL]
ι : Type u_1
α : Type u_2
inst✝ : Zero α
f✝ : ι →₀ α
i✝ : ι
a : α
f : ι →₀ α
i : ι
⊢ i ∈ f.support ↔ (fun i => {↑f i}) i ≠ 0
[PROOFSTEP]
rw [← not_iff_not, not_mem_support_iff, not_ne_iff]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝ : Zero α
f✝ : ι →₀ α
i✝ : ι
a : α
f : ι →₀ α
i : ι
⊢ ↑f i = 0 ↔ (fun i => {↑f i}) i = 0
[PROOFSTEP]
exact singleton_injective.eq_iff.symm
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : Zero α
inst✝¹ : PartialOrder α
inst✝ : LocallyFiniteOrder α
f✝ g✝ : ι →₀ α
i✝ : ι
a : α
f g : ι →₀ α
i : ι
⊢ i ∈ f.support ∪ g.support ↔ (fun i => Icc (↑f i) (↑g i)) i ≠ 0
[PROOFSTEP]
rw [mem_union, ← not_iff_not, not_or, not_mem_support_iff, not_mem_support_iff, not_ne_iff]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : Zero α
inst✝¹ : PartialOrder α
inst✝ : LocallyFiniteOrder α
f✝ g✝ : ι →₀ α
i✝ : ι
a : α
f g : ι →₀ α
i : ι
⊢ ↑f i = 0 ∧ ↑g i = 0 ↔ (fun i => Icc (↑f i) (↑g i)) i = 0
[PROOFSTEP]
exact Icc_eq_singleton_iff.symm
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : PartialOrder α
inst✝¹ : Zero α
inst✝ : LocallyFiniteOrder α
f✝ g✝ f g x : ι →₀ α
⊢ x ∈ (fun f g => Finset.finsupp (f.support ∪ g.support) ↑(rangeIcc f g)) f g ↔ f ≤ x ∧ x ≤ g
[PROOFSTEP]
refine' (mem_finsupp_iff_of_support_subset <| Finset.subset_of_eq <| rangeIcc_support _ _).trans _
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : PartialOrder α
inst✝¹ : Zero α
inst✝ : LocallyFiniteOrder α
f✝ g✝ f g x : ι →₀ α
⊢ (∀ (i : ι), ↑x i ∈ ↑(rangeIcc f g) i) ↔ f ≤ x ∧ x ≤ g
[PROOFSTEP]
simp_rw [mem_rangeIcc_apply_iff]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : PartialOrder α
inst✝¹ : Zero α
inst✝ : LocallyFiniteOrder α
f✝ g✝ f g x : ι →₀ α
⊢ (∀ (i : ι), ↑f i ≤ ↑x i ∧ ↑x i ≤ ↑g i) ↔ f ≤ x ∧ x ≤ g
[PROOFSTEP]
exact forall_and
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : PartialOrder α
inst✝¹ : Zero α
inst✝ : LocallyFiniteOrder α
f g : ι →₀ α
⊢ card (Icc f g) = ∏ i in f.support ∪ g.support, card (Icc (↑f i) (↑g i))
[PROOFSTEP]
simp_rw [Icc_eq, card_finsupp, coe_rangeIcc]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : PartialOrder α
inst✝¹ : Zero α
inst✝ : LocallyFiniteOrder α
f g : ι →₀ α
⊢ card (Ico f g) = ∏ i in f.support ∪ g.support, card (Icc (↑f i) (↑g i)) - 1
[PROOFSTEP]
rw [card_Ico_eq_card_Icc_sub_one, card_Icc]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : PartialOrder α
inst✝¹ : Zero α
inst✝ : LocallyFiniteOrder α
f g : ι →₀ α
⊢ card (Ioc f g) = ∏ i in f.support ∪ g.support, card (Icc (↑f i) (↑g i)) - 1
[PROOFSTEP]
rw [card_Ioc_eq_card_Icc_sub_one, card_Icc]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : PartialOrder α
inst✝¹ : Zero α
inst✝ : LocallyFiniteOrder α
f g : ι →₀ α
⊢ card (Ioo f g) = ∏ i in f.support ∪ g.support, card (Icc (↑f i) (↑g i)) - 2
[PROOFSTEP]
rw [card_Ioo_eq_card_Icc_sub_two, card_Icc]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : Lattice α
inst✝¹ : Zero α
inst✝ : LocallyFiniteOrder α
f g : ι →₀ α
⊢ card (uIcc f g) = ∏ i in f.support ∪ g.support, card (uIcc (↑f i) (↑g i))
[PROOFSTEP]
rw [← support_inf_union_support_sup]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝² : Lattice α
inst✝¹ : Zero α
inst✝ : LocallyFiniteOrder α
f g : ι →₀ α
⊢ card (uIcc f g) = ∏ i in (f ⊓ g).support ∪ (f ⊔ g).support, card (uIcc (↑f i) (↑g i))
[PROOFSTEP]
exact card_Icc (_ : ι →₀ α) _
[GOAL]
ι : Type u_1
α : Type u_2
inst✝¹ : CanonicallyOrderedAddMonoid α
inst✝ : LocallyFiniteOrder α
f : ι →₀ α
⊢ card (Iic f) = ∏ i in f.support, card (Iic (↑f i))
[PROOFSTEP]
classical simp_rw [Iic_eq_Icc, card_Icc, Finsupp.bot_eq_zero, support_zero, empty_union, zero_apply, bot_eq_zero]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝¹ : CanonicallyOrderedAddMonoid α
inst✝ : LocallyFiniteOrder α
f : ι →₀ α
⊢ card (Iic f) = ∏ i in f.support, card (Iic (↑f i))
[PROOFSTEP]
simp_rw [Iic_eq_Icc, card_Icc, Finsupp.bot_eq_zero, support_zero, empty_union, zero_apply, bot_eq_zero]
[GOAL]
ι : Type u_1
α : Type u_2
inst✝¹ : CanonicallyOrderedAddMonoid α
inst✝ : LocallyFiniteOrder α
f : ι →₀ α
⊢ card (Iio f) = ∏ i in f.support, card (Iic (↑f i)) - 1
[PROOFSTEP]
rw [card_Iio_eq_card_Iic_sub_one, card_Iic]
|
State Before: α : Type u_1
β : Type ?u.92677
γ : Type ?u.92680
s t : Set α
inst✝¹ : Fintype α
inst✝ : Fintype ↑s
⊢ toFinset s = Finset.univ ↔ s = univ State After: no goals Tactic: rw [← coe_inj, coe_toFinset, coe_univ] |
module Addition.AbsorptionLemmas
import Data.Vect
import Common.Util
import Common.Interfaces
import Specifications.DiscreteOrderedGroup
import Proofs.GroupTheory
import Addition.Carry
import public Addition.Digit
%default total
%access export
||| Express the constraint that the output is in the allowed digit
||| range. The output range is [-v, v] before carry absorption, and
||| [-u, u] after.
public export
data Ranges : Binrel s -> (s -> s) -> s -> s -> s -> Vect k s -> Type
where MkRanges :
InSymRange leq neg v pending ->
(digits : Vect k (Digit leq neg u)) ->
Ranges leq neg u v pending (map Digit.val digits)
||| To do: prove this.
||| Need to add the assumption that One is positive.
absorbCarry : Ringops s => DiscreteOrderedGroupSpec {s} (+) Zero Ng leq One ->
InSymRange leq Ng (u + Ng One) x ->
(c : Carry) ->
InSymRange leq Ng u (value c + x)
rangeLemma : Ringops s => DiscreteOrderedGroupSpec {s} (+) Zero Ng leq One ->
Ranges leq Ng u (u + Ng One) oldPending outputs ->
InSymRange leq Ng (u + Ng One) newPending ->
(c : Carry) ->
Ranges leq Ng u (u + Ng One) newPending ((value c + oldPending) :: outputs)
rangeLemma {oldPending} spec (MkRanges old digits) prf c =
let output = value c + oldPending
digit = MkDigit output (absorbCarry spec old c)
in MkRanges prf (digit :: digits)
|
/* roots/demo1.c
*
* Copyright (C) 1996, 1997, 1998, 1999, 2000, 2007 Reid Priedhorsky, Brian Gough
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 3 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_roots.h>
#include "demof.h"
#include "demof.c"
int
main ()
{
int status;
int iterations = 0, max_iterations = 100;
gsl_root_fdfsolver *s;
double x0, x = 5.0, r_expected = sqrt (5.0);
gsl_function_fdf FDF;
struct quadratic_params params = {1.0, 0.0, -5.0};
FDF.f = &quadratic;
FDF.df = &quadratic_deriv;
FDF.fdf = &quadratic_fdf;
  FDF.params = &params;
s = gsl_root_fdfsolver_alloc (gsl_root_fdfsolver_newton);
gsl_root_fdfsolver_set (s, &FDF, x);
printf ("using %s method\n", gsl_root_fdfsolver_name (s));
printf ("%-5s %10s %10s %10s %10s\n",
"iter", "root", "actual", "err", "err(est)");
do
{
iterations++;
status = gsl_root_fdfsolver_iterate (s);
x0 = x;
x = gsl_root_fdfsolver_root (s);
status = gsl_root_test_delta (x, x0, 0, 0.001);
if (status == GSL_SUCCESS)
printf ("Converged:\n");
printf ("%5d %10.7f %10.7f %+10.7f %10.7f\n",
iterations, x, r_expected, x - r_expected, x - x0);
}
while (status == GSL_CONTINUE && iterations < max_iterations);
  gsl_root_fdfsolver_free (s);
  return status;
}
|
module Control.Monad.Reader.Interface
import Control.Monad.Maybe
import Control.Monad.Error.Either
import Control.Monad.Reader.Reader
import Control.Monad.State.State
import Control.Monad.RWS.CPS
import Control.Monad.Trans
import Control.Monad.Writer.CPS
%default total
||| A computation which runs in a static context and produces an output
public export
interface Monad m => MonadReader stateType m | m where
||| Get the context
ask : m stateType
||| `local f c` runs the computation `c` in an environment modified by `f`.
local : (stateType -> stateType) -> m a -> m a
||| Evaluate a function in the context held by this computation
public export
asks : MonadReader stateType m => (stateType -> a) -> m a
asks f = map f ask
--------------------------------------------------------------------------------
-- Implementations
--------------------------------------------------------------------------------
public export %inline
Monad m => MonadReader stateType (ReaderT stateType m) where
ask = MkReaderT (\st => pure st)
local f (MkReaderT action) = MkReaderT (action . f)
public export %inline
Monad m => MonadReader r (RWST r w s m) where
ask = MkRWST $ \r,s,w => pure (r,s,w)
local f m = MkRWST $ \r,s,w => unRWST m (f r) s w
public export %inline
MonadReader r m => MonadReader r (EitherT e m) where
ask = lift ask
local = mapEitherT . local
public export %inline
MonadReader r m => MonadReader r (MaybeT m) where
ask = lift ask
local = mapMaybeT . local
public export %inline
MonadReader r m => MonadReader r (StateT s m) where
ask = lift ask
local = mapStateT . local
public export %inline
MonadReader r m => MonadReader r (WriterT w m) where
ask = lift ask
-- this differs from the implementation in the mtl package
-- which uses mapWriterT. However, it seems strange that
-- this should require a Monoid instance to further
-- accumulate values, while the implementation of
-- MonadReader for RWST does no such thing.
local f (MkWriterT m) = MkWriterT $ \w => local f (m w)
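-- A minimal usage sketch (not part of the original module). The `Config`
-- record and its `name` field are hypothetical, introduced only to show how
-- `ask`, `asks` and `local` combine:
--
--   record Config where
--     constructor MkConfig
--     name : String
--
--   greeting : MonadReader Config m => m String
--   greeting = asks (\c => "Hello, " ++ c.name)
--
--   politeGreeting : MonadReader Config m => m String
--   politeGreeting = local (\c => MkConfig ("dear " ++ c.name)) greeting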
|
------------------------------------------------------------------------
-- Integers
------------------------------------------------------------------------
{-# OPTIONS --without-K --safe #-}
open import Equality
module Integer {e⁺} (eq : ∀ {a p} → Equality-with-J a p e⁺) where
open Derived-definitions-and-properties eq
open import Prelude as P hiding (suc) renaming (_+_ to _⊕_; _*_ to _⊛_)
open import Bijection eq using (_↔_)
open import Equivalence eq as Eq using (_≃_)
open import Function-universe eq hiding (_∘_)
open import Group eq using (Group; Abelian)
open import H-level eq
open import H-level.Closure eq
import Nat eq as Nat
open import Integer.Basics eq public
private
variable
i j : ℤ
------------------------------------------------------------------------
-- Some lemmas
-- The sum of i and - i is zero.
+-right-inverse : ∀ i → i + - i ≡ + 0
+-right-inverse = λ where
(+ zero) → refl _
(+ P.suc n) → lemma n
-[1+ n ] → lemma n
where
lemma : ∀ n → + P.suc n +-[1+ n ] ≡ + 0
lemma n
with P.suc n Nat.<= n | T[<=]↔≤ {m = P.suc n} {n = n}
… | false | _ = cong (+_) $ Nat.∸≡0 n
… | true | eq = ⊥-elim $ Nat.<-irreflexive (_↔_.to eq _)
-- The sum of - i and i is zero.
+-left-inverse : ∀ i → - i + i ≡ + 0
+-left-inverse i =
- i + i ≡⟨ +-comm (- i) ⟩
i + - i ≡⟨ +-right-inverse i ⟩∎
+ 0 ∎
-- + 0 is a left identity for addition.
+-left-identity : + 0 + i ≡ i
+-left-identity {i = + _} = refl _
+-left-identity {i = -[1+ _ ]} = refl _
-- + 0 is a right identity for addition.
+-right-identity : i + + 0 ≡ i
+-right-identity {i = + _} = cong (+_) Nat.+-right-identity
+-right-identity {i = -[1+ _ ]} = refl _
-- Addition is associative.
+-assoc : ∀ i {j k} → i + (j + k) ≡ (i + j) + k
+-assoc = λ where
(+ m) {j = + _} {k = + _} →
cong (+_) $ Nat.+-assoc m
-[1+ m ] {j = -[1+ n ]} {k = -[1+ o ]} →
cong (-[1+_] ∘ P.suc)
(m ⊕ P.suc (n ⊕ o) ≡⟨ sym $ Nat.suc+≡+suc m ⟩
P.suc m ⊕ (n ⊕ o) ≡⟨ Nat.+-assoc (P.suc m) ⟩∎
(P.suc m ⊕ n) ⊕ o ∎)
-[1+ m ] {j = + n} {k = + o} →
sym $ ++-[1+]++ _ n _
(+ m) {j = -[1+ n ]} {k = -[1+ o ]} →
sym $ ++-[1+]+-[1+] m _ _
(+ m) {j = + n} {k = -[1+ o ]} →
lemma₁ m _ _
(+ m) {j = -[1+ n ]} {k = + o} →
lemma₂ m _ _
-[1+ m ] {j = -[1+ n ]} {k = + o} →
lemma₃ _ o _
-[1+ m ] {j = + n} {k = -[1+ o ]} →
lemma₄ _ n _
where
++-[1+]++ : ∀ m n o → + n +-[1+ m ] + + o ≡ (+ (n ⊕ o) +-[1+ m ])
++-[1+]++ m zero o = refl _
++-[1+]++ zero (P.suc n) o = refl _
++-[1+]++ (P.suc m) (P.suc n) o = ++-[1+]++ m n o
++-[1+]+-[1+] :
∀ m n o → + m +-[1+ n ] + -[1+ o ] ≡ (+ m +-[1+ 1 ⊕ n ⊕ o ])
++-[1+]+-[1+] zero n o = refl _
++-[1+]+-[1+] (P.suc m) zero o = refl _
++-[1+]+-[1+] (P.suc m) (P.suc n) o = ++-[1+]+-[1+] m n o
lemma₁ : ∀ m n o → + m + (+ n +-[1+ o ]) ≡ (+ m ⊕ n +-[1+ o ])
lemma₁ m n o =
+ m + (+ n +-[1+ o ]) ≡⟨ +-comm (+ m) {j = + n +-[1+ o ]} ⟩
(+ n +-[1+ o ]) + + m ≡⟨ ++-[1+]++ _ n _ ⟩
(+ n ⊕ m +-[1+ o ]) ≡⟨ cong +_+-[1+ o ] $ Nat.+-comm n ⟩∎
(+ m ⊕ n +-[1+ o ]) ∎
lemma₂ : ∀ m n o → + m + (+ n +-[1+ o ]) ≡ (+ m +-[1+ o ]) + + n
lemma₂ m n o =
+ m + (+ n +-[1+ o ]) ≡⟨ lemma₁ m n o ⟩
(+ m ⊕ n +-[1+ o ]) ≡⟨ sym $ ++-[1+]++ _ m _ ⟩∎
(+ m +-[1+ o ]) + + n ∎
lemma₃ :
∀ m n o → -[1+ m ] + (+ n +-[1+ o ]) ≡ (+ n +-[1+ 1 ⊕ m ⊕ o ])
lemma₃ m n o =
-[1+ m ] + (+ n +-[1+ o ]) ≡⟨ +-comm -[1+ m ] {j = + n +-[1+ o ]} ⟩
(+ n +-[1+ o ]) + -[1+ m ] ≡⟨ ++-[1+]+-[1+] n o m ⟩
(+ n +-[1+ 1 ⊕ o ⊕ m ]) ≡⟨ cong + n +-[1+_] $ cong (1 ⊕_) $ Nat.+-comm o ⟩∎
(+ n +-[1+ 1 ⊕ m ⊕ o ]) ∎
lemma₄ :
∀ m n o → -[1+ m ] + (+ n +-[1+ o ]) ≡ (+ n +-[1+ m ]) + -[1+ o ]
lemma₄ m n o =
-[1+ m ] + (+ n +-[1+ o ]) ≡⟨ lemma₃ m n o ⟩
(+ n +-[1+ 1 ⊕ m ⊕ o ]) ≡⟨ sym $ ++-[1+]+-[1+] n _ _ ⟩∎
(+ n +-[1+ m ]) + -[1+ o ] ∎
------------------------------------------------------------------------
-- Successor and predecessor
-- The successor function.
suc : ℤ → ℤ
suc (+ n) = + P.suc n
suc -[1+ zero ] = + zero
suc -[1+ P.suc n ] = -[1+ n ]
-- The successor function adds one to its input.
suc≡1+ : ∀ i → suc i ≡ + 1 + i
suc≡1+ (+ _) = refl _
suc≡1+ -[1+ zero ] = refl _
suc≡1+ -[1+ P.suc _ ] = refl _
-- The predecessor function.
pred : ℤ → ℤ
pred (+ zero) = -[1+ zero ]
pred (+ P.suc n) = + n
pred -[1+ n ] = -[1+ P.suc n ]
-- The predecessor function adds minus one to its input.
pred≡-1+ : ∀ i → pred i ≡ -[ 1 ] + i
pred≡-1+ (+ zero) = refl _
pred≡-1+ (+ P.suc _) = refl _
pred≡-1+ -[1+ _ ] = refl _
-- An equivalence between ℤ and ℤ corresponding to the successor
-- function.
successor : ℤ ≃ ℤ
successor = Eq.↔→≃ suc pred suc-pred pred-suc
where
suc-pred : ∀ i → suc (pred i) ≡ i
suc-pred (+ zero) = refl _
suc-pred (+ P.suc _) = refl _
suc-pred -[1+ _ ] = refl _
pred-suc : ∀ i → pred (suc i) ≡ i
pred-suc (+ _) = refl _
pred-suc -[1+ zero ] = refl _
pred-suc -[1+ P.suc _ ] = refl _
------------------------------------------------------------------------
-- Positive, negative
-- The property of being positive.
Positive : ℤ → Type
Positive (+ zero) = ⊥
Positive (+ P.suc _) = ⊤
Positive -[1+ _ ] = ⊥
-- Positive is propositional.
Positive-propositional : Is-proposition (Positive i)
Positive-propositional {i = + zero} = ⊥-propositional
Positive-propositional {i = + P.suc _} = mono₁ 0 ⊤-contractible
Positive-propositional {i = -[1+ _ ]} = ⊥-propositional
-- The property of being negative.
Negative : ℤ → Type
Negative (+ _) = ⊥
Negative -[1+ _ ] = ⊤
-- Negative is propositional.
Negative-propositional : Is-proposition (Negative i)
Negative-propositional {i = + _} = ⊥-propositional
Negative-propositional {i = -[1+ _ ]} = mono₁ 0 ⊤-contractible
-- No integer is both positive and negative.
¬+- : Positive i → Negative i → ⊥₀
¬+- {i = + _} _ neg = neg
¬+- {i = -[1+ _ ]} pos _ = pos
-- No integer is both positive and equal to zero.
¬+0 : Positive i → i ≡ + 0 → ⊥₀
¬+0 {i = + zero} pos _ = pos
¬+0 {i = + P.suc _} _ ≡0 = Nat.0≢+ $ sym $ +-cancellative ≡0
¬+0 {i = -[1+ _ ]} pos _ = pos
-- No integer is both negative and equal to zero.
¬-0 : Negative i → i ≡ + 0 → ⊥₀
¬-0 {i = + _} neg _ = neg
¬-0 {i = -[1+ _ ]} _ ≡0 = +≢-[1+] $ sym ≡0
-- One can decide if an integer is negative, zero or positive.
-⊎0⊎+ : ∀ i → Negative i ⊎ i ≡ + 0 ⊎ Positive i
-⊎0⊎+ (+ zero) = inj₂ (inj₁ (refl _))
-⊎0⊎+ (+ P.suc _) = inj₂ (inj₂ _)
-⊎0⊎+ -[1+ _ ] = inj₁ _
-- If i and j are positive, then i + j is positive.
>0→>0→+>0 : ∀ i j → Positive i → Positive j → Positive (i + j)
>0→>0→+>0 (+ P.suc _) (+ P.suc _) _ _ = _
-- If i and j are negative, then i + j is negative.
<0→<0→+<0 : ∀ i j → Negative i → Negative j → Negative (i + j)
<0→<0→+<0 -[1+ _ ] -[1+ _ ] _ _ = _
------------------------------------------------------------------------
-- The group of integers
-- The group of integers.
ℤ-group : Group lzero
ℤ-group .Group.Carrier = ℤ
ℤ-group .Group.Carrier-is-set = ℤ-set
ℤ-group .Group._∘_ = _+_
ℤ-group .Group.id = + 0
ℤ-group .Group._⁻¹ = -_
ℤ-group .Group.left-identity _ = +-left-identity
ℤ-group .Group.right-identity _ = +-right-identity
ℤ-group .Group.assoc i _ _ = +-assoc i
ℤ-group .Group.right-inverse = +-right-inverse
ℤ-group .Group.left-inverse = +-left-inverse
-- The group of integers is abelian.
ℤ-abelian : Abelian ℤ-group
ℤ-abelian i _ = +-comm i
private
module ℤG = Group ℤ-group
open ℤG public
using ()
renaming (_^+_ to infixl 7 _*+_;
_^_ to infixl 7 _*_)
-- + 1 is a left identity for multiplication.
*-left-identity : ∀ i → + 1 * i ≡ i
*-left-identity = lemma
where
+lemma : ∀ n → + 1 *+ n ≡ + n
+lemma zero = refl _
+lemma (P.suc n) =
+ 1 + (+ 1) *+ n ≡⟨ cong (λ i → + 1 + i) $ +lemma n ⟩
+ 1 + + n ≡⟨⟩
+ P.suc n ∎
-lemma : ∀ n → -[ 1 ] *+ n ≡ -[ n ]
-lemma zero = refl _
-lemma (P.suc zero) = refl _
-lemma (P.suc (P.suc n)) =
-[ 1 ] + -[ 1 ] *+ P.suc n ≡⟨ cong (λ i → -[ 1 ] + i) $ -lemma (P.suc n) ⟩
-[ 1 ] + -[ P.suc n ] ≡⟨⟩
-[ P.suc (P.suc n) ] ∎
lemma : ∀ i → + 1 * i ≡ i
lemma (+ n) = +lemma n
lemma -[1+ n ] = -lemma (P.suc n)
-- _*+ n distributes over addition.
*+-distrib-+ : ∀ n → (i + j) *+ n ≡ i *+ n + j *+ n
*+-distrib-+ {i = i} n = ℤG.∘^+≡^+∘^+ (+-comm i) n
-- If a positive number is multiplied by a positive number, then
-- the result is positive.
>0→*+suc> : ∀ i m → Positive i → Positive (i *+ P.suc m)
>0→*+suc> i zero =
Positive i ↝⟨ subst Positive (sym $ ℤG.right-identity i) ⟩
Positive (i + + 0) ↔⟨⟩
Positive (i *+ 1) □
>0→*+suc> i (P.suc m) =
Positive i ↝⟨ (λ p → p , >0→*+suc> i m p) ⟩
Positive i × Positive (i *+ P.suc m) ↝⟨ uncurry (>0→>0→+>0 i (i *+ P.suc m)) ⟩
Positive (i + i *+ P.suc m) ↔⟨⟩
Positive (i *+ P.suc (P.suc m)) □
-- If a negative number is multiplied by a positive number, then
-- the result is negative.
<0→*+suc<0 : ∀ i m → Negative i → Negative (i *+ P.suc m)
<0→*+suc<0 i zero =
Negative i ↝⟨ subst Negative (sym $ ℤG.right-identity i) ⟩
Negative (i + + 0) ↔⟨⟩
Negative (i *+ 1) □
<0→*+suc<0 i (P.suc m) =
Negative i ↝⟨ (λ p → p , <0→*+suc<0 i m p) ⟩
Negative i × Negative (i *+ P.suc m) ↝⟨ uncurry (<0→<0→+<0 i (i *+ P.suc m)) ⟩
Negative (i + i *+ P.suc m) ↔⟨⟩
Negative (i *+ P.suc (P.suc m)) □
------------------------------------------------------------------------
-- Integer division by two
-- Division by two, rounded downwards.
⌊_/2⌋ : ℤ → ℤ
⌊ + n /2⌋ = + Nat.⌊ n /2⌋
⌊ -[1+ n ] /2⌋ = -[ Nat.⌈ P.suc n /2⌉ ]
-- A kind of distributivity property for ⌊_/2⌋ and _+_.
⌊+*+2/2⌋≡ : ∀ i {j} → ⌊ i + j *+ 2 /2⌋ ≡ ⌊ i /2⌋ + j
⌊+*+2/2⌋≡ = λ where
(+ m) {j = + n} → cong +_
(Nat.⌊ m ⊕ 2 ⊛ n /2⌋ ≡⟨ cong Nat.⌊_/2⌋ $ Nat.+-comm m ⟩
Nat.⌊ 2 ⊛ n ⊕ m /2⌋ ≡⟨ Nat.⌊2*+/2⌋≡ n ⟩
n ⊕ Nat.⌊ m /2⌋ ≡⟨ Nat.+-comm n ⟩∎
Nat.⌊ m /2⌋ ⊕ n ∎)
-[1+ zero ] {j = -[1+ n ]} → cong -[1+_]
(Nat.⌈ P.suc (n ⊕ n) /2⌉ ≡⟨ cong (Nat.⌈_/2⌉ ∘ P.suc) $ ⊕-lemma n ⟩
Nat.⌈ 1 ⊕ 2 ⊛ n /2⌉ ≡⟨ cong Nat.⌈_/2⌉ $ Nat.+-comm 1 ⟩
Nat.⌈ 2 ⊛ n ⊕ 1 /2⌉ ≡⟨ Nat.⌈2*+/2⌉≡ n ⟩
n ⊕ Nat.⌈ 1 /2⌉ ≡⟨ Nat.+-comm n ⟩∎
P.suc n ∎)
-[1+ P.suc m ] {j = -[1+ n ]} → cong -[1+_]
(Nat.⌈ P.suc m ⊕ P.suc (n ⊕ n) /2⌉ ≡⟨ cong (Nat.⌈_/2⌉ ∘ P.suc) $ sym $ Nat.suc+≡+suc m ⟩
P.suc Nat.⌈ m ⊕ (n ⊕ n) /2⌉ ≡⟨ cong (P.suc ∘ Nat.⌈_/2⌉ ∘ (m ⊕_)) $ ⊕-lemma n ⟩
P.suc Nat.⌈ m ⊕ 2 ⊛ n /2⌉ ≡⟨ cong (P.suc ∘ Nat.⌈_/2⌉) $ Nat.+-comm m ⟩
P.suc Nat.⌈ 2 ⊛ n ⊕ m /2⌉ ≡⟨ cong P.suc $ Nat.⌈2*+/2⌉≡ n ⟩
P.suc (n ⊕ Nat.⌈ m /2⌉) ≡⟨ cong P.suc $ Nat.+-comm n ⟩∎
P.suc (Nat.⌈ m /2⌉ ⊕ n) ∎)
(+ m) {j = -[1+ n ]} →
⌊ + m +-[1+ P.suc (n ⊕ n) ] /2⌋ ≡⟨ cong (λ n → ⌊ + m +-[1+ P.suc n ] /2⌋) $ ⊕-lemma n ⟩
⌊ + m +-[1+ P.suc (2 ⊛ n) ] /2⌋ ≡⟨ lemma₁ m n ⟩∎
+ Nat.⌊ m /2⌋ +-[1+ n ] ∎
-[1+ 0 ] {j = + 0} →
-[ 1 ] ≡⟨⟩
-[ 1 ] ∎
-[1+ 0 ] {j = + P.suc n} → cong +_
(Nat.⌊ n ⊕ P.suc (n ⊕ 0) /2⌋ ≡⟨ cong Nat.⌊_/2⌋ $ sym $ Nat.suc+≡+suc n ⟩
Nat.⌊ 1 ⊕ 2 ⊛ n /2⌋ ≡⟨ Nat.⌊1+2*/2⌋≡ n ⟩∎
n ∎)
-[1+ P.suc m ] {j = + n} →
⌊ + 2 ⊛ n +-[1+ P.suc m ] /2⌋ ≡⟨ lemma₂ m n ⟩∎
+ n +-[1+ Nat.⌈ m /2⌉ ] ∎
where
⊕-lemma : ∀ n → n ⊕ n ≡ 2 ⊛ n
⊕-lemma n = cong (n ⊕_) $ sym Nat.+-right-identity
lemma₁ :
∀ m n →
⌊ + m +-[1+ P.suc (2 ⊛ n) ] /2⌋ ≡
+ Nat.⌊ m /2⌋ +-[1+ n ]
lemma₁ zero n = cong -[1+_]
(Nat.⌈ 2 ⊛ n /2⌉ ≡⟨ Nat.⌈2*/2⌉≡ n ⟩∎
n ∎)
lemma₁ (P.suc zero) n =
-[ Nat.⌈ 1 ⊕ 2 ⊛ n /2⌉ ] ≡⟨ cong -[_] $ Nat.⌈1+2*/2⌉≡ n ⟩∎
-[1+ n ] ∎
lemma₁ (P.suc (P.suc m)) zero =
⌊ + m /2⌋ ≡⟨⟩
+ Nat.⌊ m /2⌋ ∎
lemma₁ (P.suc (P.suc m)) (P.suc n) =
⌊ + m +-[1+ n ⊕ P.suc (n ⊕ 0) ] /2⌋ ≡⟨ cong (⌊_/2⌋ ∘ + m +-[1+_]) $ sym $ Nat.suc+≡+suc n ⟩
⌊ + m +-[1+ P.suc (2 ⊛ n) ] /2⌋ ≡⟨ lemma₁ m n ⟩∎
+ Nat.⌊ m /2⌋ +-[1+ n ] ∎
mutual
lemma₂ :
∀ m n →
⌊ + 2 ⊛ n +-[1+ P.suc m ] /2⌋ ≡
+ n +-[1+ Nat.⌈ m /2⌉ ]
lemma₂ m zero =
⌊ -[1+ P.suc m ] /2⌋ ≡⟨⟩
-[1+ Nat.⌈ m /2⌉ ] ∎
lemma₂ m (P.suc n) =
⌊ + n ⊕ P.suc (n ⊕ 0) +-[1+ m ] /2⌋ ≡⟨ cong (⌊_/2⌋ ∘ +_+-[1+ m ]) $ sym $ Nat.suc+≡+suc n ⟩
⌊ + P.suc (2 ⊛ n) +-[1+ m ] /2⌋ ≡⟨ lemma₃ m n ⟩∎
+ P.suc n +-[1+ Nat.⌈ m /2⌉ ] ∎
lemma₃ :
∀ m n →
⌊ + P.suc (2 ⊛ n) +-[1+ m ] /2⌋ ≡
+ P.suc n +-[1+ Nat.⌈ m /2⌉ ]
lemma₃ zero n = cong +_
(Nat.⌊ 2 ⊛ n /2⌋ ≡⟨ Nat.⌊2*/2⌋≡ n ⟩∎
n ∎)
lemma₃ (P.suc zero) zero =
-[ 1 ] ≡⟨⟩
-[ 1 ] ∎
lemma₃ (P.suc zero) (P.suc n) = cong +_
(Nat.⌊ n ⊕ P.suc (n ⊕ zero) /2⌋ ≡⟨ cong Nat.⌊_/2⌋ $ sym $ Nat.suc+≡+suc n ⟩
Nat.⌊ 1 ⊕ 2 ⊛ n /2⌋ ≡⟨ Nat.⌊1+2*/2⌋≡ n ⟩∎
n ∎)
lemma₃ (P.suc (P.suc m)) n =
⌊ + 2 ⊛ n +-[1+ P.suc m ] /2⌋ ≡⟨ lemma₂ m n ⟩∎
+ n +-[1+ Nat.⌈ m /2⌉ ] ∎
-- If you double and then halve an integer, then you get back what you
-- started with.
⌊*+2/2⌋≡ : ∀ i → ⌊ i *+ 2 /2⌋ ≡ i
⌊*+2/2⌋≡ i =
⌊ i *+ 2 /2⌋ ≡⟨ cong ⌊_/2⌋ $ sym $ +-left-identity {i = i *+ 2} ⟩
⌊ + 0 + i *+ 2 /2⌋ ≡⟨ ⌊+*+2/2⌋≡ (+ 0) {j = i} ⟩
⌊ + 0 /2⌋ + i ≡⟨⟩
+ 0 + i ≡⟨ +-left-identity ⟩∎
i ∎
|
lemma SUP_Lim: fixes X :: "nat \<Rightarrow> 'a::{complete_linorder,linorder_topology}" assumes inc: "incseq X" and l: "X \<longlonglongrightarrow> l" shows "(SUP n. X n) = l" |
module Http.Error
import public Network.Socket
%access public export
data HttpError : Type where
HttpSocketError : SocketError -> HttpError
HttpParseError : String -> HttpError
implementation Show HttpError where
show (HttpSocketError err) = show err
show (HttpParseError err) = err
|
lemma lipschitz_onI: "L-lipschitz_on X f" if "\<And>x y. x \<in> X \<Longrightarrow> y \<in> X \<Longrightarrow> dist (f x) (f y) \<le> L * dist x y" "0 \<le> L" |
State Before: α : Type u_2
β : Type u_1
γ : Type ?u.73169
op : β → β → β
hc : IsCommutative β op
ha : IsAssociative β op
f : α → β
b : β
s✝ : Finset α
a : α
inst✝² : LinearOrder β
c : β
inst✝¹ : Add β
inst✝ : CovariantClass β β (Function.swap fun x x_1 => x + x_1) fun x x_1 => x ≤ x_1
n : WithBot β
s : Finset α
⊢ fold max ⊥ (fun x => ↑(f x) + n) s = fold max ⊥ (WithBot.some ∘ f) s + n State After: no goals Tactic: classical
induction' s using Finset.induction_on with a s _ ih <;> simp [*, max_add_add_right] State Before: α : Type u_2
β : Type u_1
γ : Type ?u.73169
op : β → β → β
hc : IsCommutative β op
ha : IsAssociative β op
f : α → β
b : β
s✝ : Finset α
a : α
inst✝² : LinearOrder β
c : β
inst✝¹ : Add β
inst✝ : CovariantClass β β (Function.swap fun x x_1 => x + x_1) fun x x_1 => x ≤ x_1
n : WithBot β
s : Finset α
⊢ fold max ⊥ (fun x => ↑(f x) + n) s = fold max ⊥ (WithBot.some ∘ f) s + n State After: no goals Tactic: induction' s using Finset.induction_on with a s _ ih <;> simp [*, max_add_add_right] |
ncsub <- function(x)
{
n <- length(x)
a <- seq_len(n)
seqlist <- list()
for(i in 2:(n-1))
{
seqs <- combn(a, i) # Get all subseqs
ok <- apply(seqs, 2, function(x) any(diff(x)!=1)) # Find noncts ones
newseqs <- unlist(apply(seqs[,ok], 2, function(x) list(x)), recursive=FALSE) # Convert matrix to list of its columns
seqlist <- c(seqlist, newseqs) # Append to existing list
}
lapply(seqlist, function(index) x[index])
}
# Example usage
ncsub(1:4)
ncsub(letters[1:5])
|
module SixDof
using LinearAlgebra, DifferentialEquations, Distances
using ...Utils, ...Types
include("aerodynamic_coeff.jl")
export trajectorySixDof!, QEfinderSixDof!, iniCondSixDof
function sixdof(du,u,p,t)
calibre,m,Ixx,It, ω_bar, g₀, Rz, target,dist = p
λ0 = u[1]
λ1 = u[2]
λ2 = u[3]
λ3 = u[4]
U = u[5]
V = u[6]
W = u[7]
P = u[8]
Q = u[9]
R = u[10]
X₁ = u[11]
X₂ = u[12]
X₃ = u[13]
V_bar = [U, V, W]
X_bar = [X₁, X₂, X₃]
α = atan(W/U)
β = asin(V/norm(V_bar))
αt = asin(sqrt((sin(α)*cos(β))^2+(sin(β))^2))
δ = sqrt((sin(α)*cos(β))^2+(sin(β))^2)
mach = Utils.machNumber(norm(V_bar),-u[13])
CD0 = CD0_inter(mach)
CDδ2 = CDδ2_inter(mach)
CD = CD0 + CDδ2 * δ^2
CLα0 = Cl_α0_inter(mach)
CLα2 = Cl_α2_inter(mach)
CLα = CLα0 + CLα2 * δ^2
Cx0 = CD0
Cx2 = CDδ2
Cna = CLα#CD + cos(αt)*CLα
Cypa = CN_pα_inter(mach,(rad2deg(αt))^2)
Cldd = 0.0
Clp = Cl_p_inter(mach)
CMα0 = CM_α0_inter(mach)
CMα2 = CM_α2_inter(mach)
Cma = CMα0 +CMα2 * δ^2
Cmq = CM_q_plus_CM_α_inter(mach)
Cnpa = CM_pα_inter(mach,(rad2deg(αt))^2)
Ωx = 2*R*(λ1*λ3-λ0*λ2)/(λ0^2-λ1^2-λ2^2+λ3^2)
ρ = Utils.density(-u[13])
S = pi*(calibre^2)/4
g = Utils.gravity(-u[13])
gx = 2*(λ1*λ3-λ0*λ2)*g
gy = 0.0
gz = (λ0^2-λ1^2-λ2^2+λ3^2)*g
FGx = - g₀*X₁/Rz
FGy = - g₀*X₂/Rz
FGz = g₀*(1+2*X₃/Rz)
FG = [gx, gy, gz]
FAx = -1/2*ρ*(norm(V_bar))^2*S*(Cx0 + Cx2*(V^2+W^2)/(norm(V_bar))^2)#Cx
FAy = -1/2*ρ*(norm(V_bar))^2*S*(CLα*V/norm(V_bar))#(Cna * V/norm(V_bar)
FAz = -1/2*ρ*(norm(V_bar))^2*S*(CLα*W/norm(V_bar))#(Cna * W/norm(V_bar) #Cz
FMx =0.0
FMy = 1/2*ρ*(norm(V_bar))^2*S* (P*calibre/(2*norm(V_bar))*Cypa*W/(norm(V_bar)))#Cy
FMz = -1/2*ρ*(norm(V_bar))^2*S* (P*calibre/(2*norm(V_bar))*Cypa*V/(norm(V_bar)))
T = T_MatrixQ(λ0, λ1, λ2, λ3)
v_bar = T*V_bar
FC = -2*cross(ω_bar,v_bar)
FCp = transpose(T)*FC
FGp = transpose(T)*FG
FGp[2] = 0.0
FCx = FCp[1]
FCy = FCp[2]
FCz = FCp[3]
Fx = FAx +FMx + FCx
Fy = FAy + FMy + FCy
Fz = FAz + FMz + FCz
L = 1/2*ρ*(norm(V_bar))^2* S *calibre*(Cldd + P*calibre/(2*norm(V_bar))*Clp)#Cl
M = 1/2*ρ*(norm(V_bar))^2* S *calibre*( Cma*W/norm(V_bar)+Q*calibre/(2*norm(V_bar))*Cmq+P*calibre/(2*norm(V_bar))*Cnpa*V/norm(V_bar))#Cm
N = 1/2*ρ*(norm(V_bar))^2* S *calibre*(-Cma*V/norm(V_bar)+R*calibre/(2*norm(V_bar))*Cmq+P*calibre/(2*norm(V_bar))*Cnpa*W/norm(V_bar))#Cq
ϵw = norm([λ0,λ1,λ2,λ3])-1
du[1] = -1/2*(Q*λ2+R*λ3/(λ0^2-λ1^2-λ2^2+λ3^2))-ϵw*λ0 #λ0p
du[2] = -1/2*(Q*λ3+R*λ2/(λ0^2-λ1^2-λ2^2+λ3^2))-ϵw*λ1 #λ1p
du[3] = 1/2*(Q*λ0+R*λ1/(λ0^2-λ1^2-λ2^2+λ3^2))-ϵw*λ2 #λ2p
du[4] = 1/2*(Q*λ1+R*λ0/(λ0^2-λ1^2-λ2^2+λ3^2))-ϵw*λ3 #λ3p
du[5] = FG[1]+Fx/m-Q*W+R*V #Up
du[6] = FG[2]+Fy/m-R*U+Ωx*W # Vp
du[7] = FG[3]+Fz/m-Ωx*V+Q*U # Wp
du[8] = L/Ixx #Pp
du[9] = 1/It * (M-Ixx*R*P+It*R*Ωx)#Qp
du[10] = 1/It * (N+Ixx*Q*P-It*Q*Ωx)#Rp
du[11] = v_bar[1] #x
du[12] = v_bar[2] #y
du[13] = v_bar[3] #z
end
function condition(u,t,integrator) # Event when event_f(u,t) == 0
integrator.p[8][1]-u[11]
end
function timeFuze(u,t,integrator)
close=(euclidean([u[11],u[12],u[13]],integrator.p[8])-integrator.p[9])-1e-6
return close
end
miss(u,t,integrator) = u[13]
affect_miss!(integrator) = terminate!(integrator)
affect!(integrator) = terminate!(integrator)
affect_tf!(integrator) = terminate!(integrator)
function iniCondSixDof(θ::Float64, ψ::Float64, u₀::Float64, gun::Canon, calibre::Float64)
p₀ = 2*pi*u₀/(gun.tc*calibre)
Φ = 0.0
λ0, λ1, λ2, λ3 = euler2quartenions(θ, ψ, Φ)
Q = 0.0
R = 0.0
u₀_bar = [u₀*cos(θ)*cos(ψ), u₀*cos(θ)*sin(ψ), -u₀*sin(θ) ]
T_euler = T_matrixE(θ,ψ,Φ)
u₀_bar_m = transpose(T_euler)*u₀_bar
X₀_bar = [gun.lw*cos(θ)*cos(ψ), gun.lw*cos(θ)*sin(ψ),- (gun.X2w + gun.lw *sin(θ))]
u0 = [λ0, λ1, λ2, λ3, u₀_bar_m[1], u₀_bar_m[2], u₀_bar_m[3], p₀,Q, R, X₀_bar[1], X₀_bar[2], X₀_bar[3]]
return u0
end
function QEfinderSixDof!(drone::Target2D, proj::Projectile2D, u₀::Float64, g₀::Float64, gun::Canon, w_bar::Array{Float64,1},lat::Float64)
epsilonAz = 1e6
epsilonQE = 1e6
precisie = 0.001
ddoel = euclidean(proj.position, drone.position)
tdoel = sqrt(drone.position[1]^2+drone.position[2]^2)/u₀
#QE = asind(((-drone.position[3]) - (-proj.position[3]) + g₀ /2 *tdoel^2)*tdoel/u₀)
QE = (((-drone.position[3]) - (-proj.position[3]) + g₀ /2 *tdoel^2)*tdoel/u₀)
AZ = 0.0
tspan = (0.0,1000.0)
Rz = 6.356766*1e6 #m
Ω = 7.292115*1e-5 #rad/s
ω_bar = [Ω*cosd(lat)*cosd(AZ), Ω*sind(lat), -Ω*cosd(lat)*sind(AZ)]
p = [proj.calibre,proj.mass,proj.inertia[1],proj.inertia[2], ω_bar,g₀, Rz,drone.position,0.0]
while abs(epsilonAz)>precisie || abs(epsilonQE)>precisie
u0 = iniCondSixDof(deg2rad(QE), deg2rad(AZ), u₀, gun, proj.calibre)
#global proj = projectile(u0[1],u0[2],u0[3],u0[4],u0[5],u0[6],0.0)
#proj = projectile(u0[1],u0[2],u0[3],u0[4],u0[5],u0[6],0.0)
proj.position = [u0[11],u0[12],u0[13]]
proj.velocity = [u0[5],u0[6],u0[7]]
proj.tof = 0.0
trajectorySixDof!(u0, tspan, p, proj, drone)
#global epsilonQE = proj.y - drone.y
epsilonQE = (-proj.position[3]) - (-drone.position[3])
#println("epsilonQE", " ", epsilonQE)
#QE = QE0 + (accuracy)/(range_/QE0)
#global epsilonAz = (sqrt((proj.z-drone.z)^2+(proj.x-drone.x)^2)*sign(atan(proj.z)/proj.x)-atan(drone.z/drone.x))
epsilonAz = (sqrt((proj.position[2]-drone.position[2])^2+(proj.position[1]-drone.position[1])^2)*sign(atan(proj.position[2])/proj.position[1])-atan(drone.position[2]/drone.position[1]))
#println("epsilonAz", " ", epsilonAz)
#global AZ = AZ - epsilonAz/ddoel
AZ = AZ - epsilonAz/ddoel
#global QE = QE - epsilonQE/ddoel
QE = QE - epsilonQE/ddoel
#trajectory!(u0, tspan, p, proj, drone)
#calcRange = euclidean([0.0,0.0,0.0], [proj.x,proj.y, proj.z])
#global accuracy = range_ - calcRange
#println("AZ", " ", AZ)
#println("QE", " ", QE)
end
return QE,AZ
end
function trajectorySixDof!(u0::Array{Float64,1}, tspan::Tuple{Float64,Float64}, p, proj::AbstractPenetrator, target::Target2D)
prob = ODEProblem(sixdof,u0,tspan, p)
cb = ContinuousCallback(condition,affect!)
cb_tf = ContinuousCallback(timeFuze,affect!)
cb_miss = ContinuousCallback(miss,affect!)
#cb_tf=DiscreteCallback(timeFuze,affect_tf!)
cbs = CallbackSet(cb,cb_tf,cb_miss)
sol = solve(prob,Tsit5(),callback=cbs, reltol=1e-8, abstol=1e-8)
proj.position[1] = sol.u[end][11]
proj.position[2] = sol.u[end][12]
proj.position[3] = sol.u[end][13]
proj.velocity[1] = sol.u[end][5]
proj.velocity[2] = sol.u[end][6]
proj.velocity[3] = sol.u[end][7]
proj.tof = sol.t[end]
proj.spin = sol.u[end][8]
end
end
|
-- Exercises
-- #3
variable p : Prop
example : ¬(p ↔ ¬p) :=
assume h : p ↔ ¬p,
show false, from
have h1 : p → ¬p := h.elim_left,
have h2 : ¬p → p := h.elim_right,
have hnp : ¬p := (λ hp : p, (h1 hp) hp),
have hp : p := h2 hnp,
absurd hp hnp
|
Barefoot Resort Dye Course. Live Pricing.
This course, designed by architect Pete Dye, certainly doesn't stray from his reputation for building memorable and challenging courses. The semi-private course is filled with pitfalls for wayward shots, native grasses, and striking elevation changes. Its setting along the white sands of the Carolina Bays makes it a great pick for your golf vacation in Myrtle Beach. |
import cv2
import numpy as np
from bopflow.models.utils import DOutput
def draw_outputs(img, detections: [DOutput]):
box_color = (255, 0, 0)
box_width = 2
wh = np.flip(img.shape[0:2])
for result in detections:
x1y1 = (result.box.x1y1 * wh).astype(np.int32)
x2y2 = (result.box.x2y2 * wh).astype(np.int32)
img = cv2.rectangle(
img=img,
pt1=tuple(x1y1),
pt2=tuple(x2y2),
color=box_color,
thickness=box_width,
)
text = "label: {}| score: {}".format(
result.label.number, result.confidence_score
)
text_coordinates = (x1y1[0] + 5, x2y2[1] - 5)
img = cv2.putText(
img=img,
text=text,
org=text_coordinates,
fontFace=cv2.FONT_HERSHEY_SIMPLEX,
fontScale=0.5,
color=box_color,
thickness=1,
)
return img
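

# Minimal usage sketch (not part of the original module). The `detector`
# argument is a hypothetical object whose `detect` method returns a list of
# DOutput; any detector producing that type will work here.
def annotate_image(image_path, detector, output_path="annotated.jpg"):
    img = cv2.imread(image_path)  # BGR image as loaded by OpenCV
    detections = detector.detect(img)  # hypothetical call returning [DOutput]
    annotated = draw_outputs(img, detections)
    cv2.imwrite(output_path, annotated)
    return annotated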
|
function F = lininterp1f(X,Y,XI,Ydefault)
disp('For fast linear interpolation, download lininterp1f from Mathworks')
disp('file exchange (File ID: #8627) or use alternative interpolator')
disp('http://www.mathworks.com/matlabcentral/fileexchange/8627-fast-linear-interpolation')
disp(' ')
disp('Place lininterp1f in CORE\utilities') |
My early passion was being a teacher and a coach, and eventually I became a school administrator. Following that career, I went directly into sales with a company that furnished new and existing schools. It was during this career that I realized how fulfilling it is to help people acquire what they need and want.
I have watched and admired my sister's real estate career as it has grown into her own thriving business. Trina and I have many qualities in common: we love people, we are overachievers, and we value our integrity. I can think of nothing more exciting than working for a person I have loved and respected for her whole life.
I have lived in the Longview area for over 32 years, the past 16 years on Lake Cherokee. The Lake is an awesome place to live, and it will be an added pleasure to help clients buy and sell property here.
Whether you are looking to buy or sell a property anywhere in the East Texas area, it would be my pleasure to help you. Call 903.918.3074. |
(* *********************************************************************)
(* *)
(* The Compcert verified compiler *)
(* *)
(* Xavier Leroy, INRIA Paris-Rocquencourt *)
(* *)
(* Copyright Institut National de Recherche en Informatique et en *)
(* Automatique. All rights reserved. This file is distributed *)
(* under the terms of the INRIA Non-Commercial License Agreement. *)
(* *)
(* *********************************************************************)
(** Correctness proof for IA32 generation: auxiliary results. *)
Require Import Coqlib.
Require Import AST.
Require Import Errors.
Require Import Integers.
Require Import Floats.
Require Import Values.
Require Import Memory.
Require Import Globalenvs.
Require Import Op.
Require Import Locations.
Require Import Mach.
Require Import Asm.
Require Import Asmgen.
Require Import Asmgenproof0.
Require Import Conventions.
Open Local Scope error_monad_scope.
(** * Correspondence between Mach registers and IA32 registers *)
Lemma agree_nextinstr_nf:
forall ms sp rs,
agree ms sp rs -> agree ms sp (nextinstr_nf rs).
Proof.
intros. unfold nextinstr_nf. apply agree_nextinstr.
apply agree_undef_nondata_regs. auto.
intro. simpl. ElimOrEq; auto.
Qed.
(** Useful properties of the PC register. *)
Lemma nextinstr_nf_inv:
forall r rs,
match r with PC => False | CR _ => False | _ => True end ->
(nextinstr_nf rs)#r = rs#r.
Proof.
intros. unfold nextinstr_nf. rewrite nextinstr_inv.
simpl. repeat rewrite Pregmap.gso; auto;
red; intro; subst; contradiction.
red; intro; subst; contradiction.
Qed.
Lemma nextinstr_nf_inv1:
forall r rs,
data_preg r = true -> (nextinstr_nf rs)#r = rs#r.
Proof.
intros. apply nextinstr_nf_inv. destruct r; auto || discriminate.
Qed.
Lemma nextinstr_nf_set_preg:
forall rs m v,
(nextinstr_nf (rs#(preg_of m) <- v))#PC = Val.add rs#PC Vone.
Proof.
intros. unfold nextinstr_nf.
transitivity (nextinstr (rs#(preg_of m) <- v) PC). auto.
apply nextinstr_set_preg.
Qed.
(** Useful simplification tactic *)
Ltac Simplif :=
match goal with
| [ |- nextinstr_nf _ _ = _ ] =>
((rewrite nextinstr_nf_inv by auto with asmgen)
|| (rewrite nextinstr_nf_inv1 by auto with asmgen)); auto
| [ |- nextinstr _ _ = _ ] =>
((rewrite nextinstr_inv by auto with asmgen)
|| (rewrite nextinstr_inv1 by auto with asmgen)); auto
| [ |- Pregmap.get ?x (Pregmap.set ?x _ _) = _ ] =>
rewrite Pregmap.gss; auto
| [ |- Pregmap.set ?x _ _ ?x = _ ] =>
rewrite Pregmap.gss; auto
| [ |- Pregmap.get _ (Pregmap.set _ _ _) = _ ] =>
rewrite Pregmap.gso by (auto with asmgen); auto
| [ |- Pregmap.set _ _ _ _ = _ ] =>
rewrite Pregmap.gso by (auto with asmgen); auto
end.
Ltac Simplifs := repeat Simplif.
(** * Correctness of IA32 constructor functions *)
Section CONSTRUCTORS.
Variable ge: genv.
Variable fn: function.
(** Smart constructor for moves. *)
Lemma mk_mov_correct:
forall rd rs k c rs1 m,
mk_mov rd rs k = OK c ->
exists rs2,
exec_straight ge fn c rs1 m k rs2 m
/\ rs2#rd = rs1#rs
/\ forall r, data_preg r = true -> r <> rd -> rs2#r = rs1#r.
Proof.
unfold mk_mov; intros.
destruct rd; try (monadInv H); destruct rs; monadInv H.
(* mov *)
econstructor. split. apply exec_straight_one. simpl. eauto. auto.
split. Simplifs. intros; Simplifs.
(* movd *)
econstructor. split. apply exec_straight_one. simpl. eauto. auto.
split. Simplifs. intros; Simplifs.
Qed.
(** Properties of division *)
Remark divs_mods_exist:
forall v1 v2,
match Val.divs v1 v2, Val.mods v1 v2 with
| Some _, Some _ => True
| None, None => True
| _, _ => False
end.
Proof.
intros. unfold Val.divs, Val.mods. destruct v1; auto. destruct v2; auto.
destruct (Int.eq i0 Int.zero || Int.eq i (Int.repr Int.min_signed) && Int.eq i0 Int.mone); auto.
Qed.
Remark divu_modu_exist:
forall v1 v2,
match Val.divu v1 v2, Val.modu v1 v2 with
| Some _, Some _ => True
| None, None => True
| _, _ => False
end.
Proof.
intros. unfold Val.divu, Val.modu. destruct v1; auto. destruct v2; auto.
destruct (Int.eq i0 Int.zero); auto.
Qed.
(** Smart constructor for [shrx] *)
Lemma mk_shrximm_correct:
forall n k c (rs1: regset) v m,
mk_shrximm n k = OK c ->
Val.shrx (rs1#EAX) (Vint n) = Some v ->
exists rs2,
exec_straight ge fn c rs1 m k rs2 m
/\ rs2#EAX = v
/\ forall r, data_preg r = true -> r <> EAX -> r <> ECX -> rs2#r = rs1#r.
Proof.
unfold mk_shrximm; intros. inv H.
exploit Val.shrx_shr; eauto. intros [x [y [A [B C]]]].
inversion B; clear B; subst y; subst v; clear H0.
set (tnm1 := Int.sub (Int.shl Int.one n) Int.one).
set (x' := Int.add x tnm1).
set (rs2 := nextinstr (compare_ints (Vint x) (Vint Int.zero) rs1 m)).
set (rs3 := nextinstr (rs2#ECX <- (Vint x'))).
set (rs4 := nextinstr (if Int.lt x Int.zero then rs3#EAX <- (Vint x') else rs3)).
set (rs5 := nextinstr_nf (rs4#EAX <- (Val.shr rs4#EAX (Vint n)))).
assert (rs3#EAX = Vint x). unfold rs3. Simplifs.
assert (rs3#ECX = Vint x'). unfold rs3. Simplifs.
exists rs5. split.
apply exec_straight_step with rs2 m. simpl. rewrite A. simpl. rewrite Int.and_idem. auto. auto.
apply exec_straight_step with rs3 m. simpl.
change (rs2 EAX) with (rs1 EAX). rewrite A. simpl.
rewrite (Int.add_commut Int.zero tnm1). rewrite Int.add_zero. auto. auto.
apply exec_straight_step with rs4 m. simpl.
rewrite Int.lt_sub_overflow. unfold rs4. destruct (Int.lt x Int.zero); simpl; auto.
unfold rs4. destruct (Int.lt x Int.zero); simpl; auto.
apply exec_straight_one. auto. auto.
split. unfold rs5. Simplifs. unfold rs4. rewrite nextinstr_inv; auto with asmgen.
destruct (Int.lt x Int.zero). rewrite Pregmap.gss. rewrite A; auto. rewrite A; rewrite H; auto.
intros. unfold rs5. Simplifs. unfold rs4. Simplifs.
transitivity (rs3#r). destruct (Int.lt x Int.zero). Simplifs. auto.
unfold rs3. Simplifs. unfold rs2. Simplifs.
unfold compare_ints. Simplifs.
Qed.
(** Smart constructor for integer conversions *)
Lemma mk_intconv_correct:
forall mk sem rd rs k c rs1 m,
mk_intconv mk rd rs k = OK c ->
(forall c rd rs r m,
exec_instr ge c (mk rd rs) r m = Next (nextinstr (r#rd <- (sem r#rs))) m) ->
exists rs2,
exec_straight ge fn c rs1 m k rs2 m
/\ rs2#rd = sem rs1#rs
/\ forall r, data_preg r = true -> r <> rd -> r <> EAX -> rs2#r = rs1#r.
Proof.
unfold mk_intconv; intros. destruct (low_ireg rs); monadInv H.
econstructor. split. apply exec_straight_one. rewrite H0. eauto. auto.
split. Simplifs. intros. Simplifs.
econstructor. split. eapply exec_straight_two.
simpl. eauto. apply H0. auto. auto.
split. Simplifs. intros. Simplifs.
Qed.
(** Smart constructor for small stores *)
Lemma addressing_mentions_correct:
forall a r (rs1 rs2: regset),
(forall (r': ireg), r' <> r -> rs1 r' = rs2 r') ->
addressing_mentions a r = false ->
eval_addrmode ge a rs1 = eval_addrmode ge a rs2.
Proof.
intros until rs2; intro AG. unfold addressing_mentions, eval_addrmode.
destruct a. intros. destruct (orb_false_elim _ _ H). unfold proj_sumbool in *.
decEq. destruct base; auto. apply AG. destruct (ireg_eq r i); congruence.
decEq. destruct ofs as [[r' sc] | ]; auto. rewrite AG; auto. destruct (ireg_eq r r'); congruence.
Qed.
Lemma mk_smallstore_correct:
forall chunk sto addr r k c rs1 m1 m2,
mk_smallstore sto addr r k = OK c ->
Mem.storev chunk m1 (eval_addrmode ge addr rs1) (rs1 r) = Some m2 ->
(forall c r addr rs m,
exec_instr ge c (sto addr r) rs m = exec_store ge chunk m addr rs r nil) ->
exists rs2,
exec_straight ge fn c rs1 m1 k rs2 m2
/\ forall r, data_preg r = true -> r <> EAX /\ r <> ECX -> rs2#r = rs1#r.
Proof.
unfold mk_smallstore; intros.
remember (low_ireg r) as low. destruct low.
(* low reg *)
monadInv H. econstructor; split. apply exec_straight_one. rewrite H1.
unfold exec_store. rewrite H0. eauto. auto.
intros; Simplifs.
(* high reg *)
remember (addressing_mentions addr EAX) as mentions. destruct mentions; monadInv H.
(* EAX is mentioned. *)
assert (r <> ECX). red; intros; subst r; discriminate.
set (rs2 := nextinstr (rs1#ECX <- (eval_addrmode ge addr rs1))).
set (rs3 := nextinstr (rs2#EAX <- (rs1 r))).
econstructor; split.
apply exec_straight_three with rs2 m1 rs3 m1.
simpl. auto.
simpl. replace (rs2 r) with (rs1 r). auto. symmetry. unfold rs2; Simplifs.
rewrite H1. unfold exec_store. simpl. rewrite Int.add_zero.
change (rs3 EAX) with (rs1 r).
change (rs3 ECX) with (eval_addrmode ge addr rs1).
replace (Val.add (eval_addrmode ge addr rs1) (Vint Int.zero))
with (eval_addrmode ge addr rs1).
rewrite H0. eauto.
destruct (eval_addrmode ge addr rs1); simpl in H0; try discriminate.
simpl. rewrite Int.add_zero; auto.
auto. auto. auto.
intros. destruct H3. Simplifs. unfold rs3; Simplifs. unfold rs2; Simplifs.
(* EAX is not mentioned *)
set (rs2 := nextinstr (rs1#EAX <- (rs1 r))).
econstructor; split.
apply exec_straight_two with rs2 m1.
simpl. auto.
rewrite H1. unfold exec_store.
rewrite (addressing_mentions_correct addr EAX rs2 rs1); auto.
change (rs2 EAX) with (rs1 r). rewrite H0. eauto.
intros. unfold rs2; Simplifs.
auto. auto.
intros. destruct H2. simpl. Simplifs. unfold rs2; Simplifs.
Qed.
(** Accessing slots in the stack frame *)
Lemma loadind_correct:
forall (base: ireg) ofs ty dst k (rs: regset) c m v,
loadind base ofs ty dst k = OK c ->
Mem.loadv (chunk_of_type ty) m (Val.add rs#base (Vint ofs)) = Some v ->
exists rs',
exec_straight ge fn c rs m k rs' m
/\ rs'#(preg_of dst) = v
/\ forall r, data_preg r = true -> r <> preg_of dst -> rs'#r = rs#r.
Proof.
unfold loadind; intros.
set (addr := Addrmode (Some base) None (inl (ident * int) ofs)) in *.
assert (eval_addrmode ge addr rs = Val.add rs#base (Vint ofs)).
unfold addr. simpl. rewrite Int.add_commut; rewrite Int.add_zero; auto.
exists (nextinstr_nf (rs#(preg_of dst) <- v)); split.
- destruct ty; try discriminate; destruct (preg_of dst); inv H; simpl in H0;
apply exec_straight_one; auto; simpl; unfold exec_load; rewrite H1, H0; auto.
- intuition Simplifs.
Qed.
Lemma storeind_correct:
forall (base: ireg) ofs ty src k (rs: regset) c m m',
storeind src base ofs ty k = OK c ->
Mem.storev (chunk_of_type ty) m (Val.add rs#base (Vint ofs)) (rs#(preg_of src)) = Some m' ->
exists rs',
exec_straight ge fn c rs m k rs' m'
/\ forall r, data_preg r = true -> preg_notin r (destroyed_by_setstack ty) -> rs'#r = rs#r.
Proof.
Local Transparent destroyed_by_setstack.
unfold storeind; intros.
set (addr := Addrmode (Some base) None (inl (ident * int) ofs)) in *.
assert (eval_addrmode ge addr rs = Val.add rs#base (Vint ofs)).
unfold addr. simpl. rewrite Int.add_commut; rewrite Int.add_zero; auto.
destruct ty; try discriminate; destruct (preg_of src); inv H; simpl in H0;
(econstructor; split;
[apply exec_straight_one; [simpl; unfold exec_store; rewrite H1, H0; eauto|auto]
|simpl; intros; unfold undef_regs; repeat Simplifs]).
Qed.
(** Translation of addressing modes *)
Lemma transl_addressing_mode_correct:
forall addr args am (rs: regset) v,
transl_addressing addr args = OK am ->
eval_addressing ge (rs ESP) addr (List.map rs (List.map preg_of args)) = Some v ->
Val.lessdef v (eval_addrmode ge am rs).
Proof.
assert (A: forall n, Int.add Int.zero n = n).
intros. rewrite Int.add_commut. apply Int.add_zero.
assert (B: forall n i, (if Int.eq i Int.one then Vint n else Vint (Int.mul n i)) = Vint (Int.mul n i)).
intros. predSpec Int.eq Int.eq_spec i Int.one.
subst i. rewrite Int.mul_one. auto. auto.
assert (C: forall v i,
Val.lessdef (Val.mul v (Vint i))
(if Int.eq i Int.one then v else Val.mul v (Vint i))).
intros. predSpec Int.eq Int.eq_spec i Int.one.
subst i. destruct v; simpl; auto. rewrite Int.mul_one; auto.
destruct v; simpl; auto.
unfold transl_addressing; intros.
destruct addr; repeat (destruct args; try discriminate); simpl in H0; inv H0.
(* indexed *)
monadInv H. rewrite (ireg_of_eq _ _ EQ). simpl. rewrite A; auto.
(* indexed2 *)
monadInv H. rewrite (ireg_of_eq _ _ EQ); rewrite (ireg_of_eq _ _ EQ1). simpl.
rewrite Val.add_assoc; auto.
(* scaled *)
monadInv H. rewrite (ireg_of_eq _ _ EQ). unfold eval_addrmode.
rewrite Val.add_permut. simpl. rewrite A. apply Val.add_lessdef; auto.
(* indexed2scaled *)
monadInv H. rewrite (ireg_of_eq _ _ EQ); rewrite (ireg_of_eq _ _ EQ1); simpl.
apply Val.add_lessdef; auto. apply Val.add_lessdef; auto.
(* global *)
inv H. simpl. unfold Genv.symbol_address.
destruct (Genv.find_symbol ge i); simpl; auto. repeat rewrite Int.add_zero. auto.
(* based *)
monadInv H. rewrite (ireg_of_eq _ _ EQ). simpl.
unfold Genv.symbol_address. destruct (Genv.find_symbol ge i); simpl; auto.
rewrite Int.add_zero. rewrite Val.add_commut. auto.
(* basedscaled *)
monadInv H. rewrite (ireg_of_eq _ _ EQ). unfold eval_addrmode.
rewrite (Val.add_commut Vzero). rewrite Val.add_assoc. rewrite Val.add_permut.
apply Val.add_lessdef; auto. destruct (rs x); simpl; auto. rewrite B. simpl.
rewrite Int.add_zero. auto.
(* instack *)
inv H; simpl. rewrite A; auto.
Qed.
(** Processor conditions and comparisons *)
Lemma compare_ints_spec:
forall rs v1 v2 m,
let rs' := nextinstr (compare_ints v1 v2 rs m) in
rs'#ZF = Val.cmpu (Mem.valid_pointer m) Ceq v1 v2
/\ rs'#CF = Val.cmpu (Mem.valid_pointer m) Clt v1 v2
/\ rs'#SF = Val.negative (Val.sub v1 v2)
/\ rs'#OF = Val.sub_overflow v1 v2
/\ (forall r, data_preg r = true -> rs'#r = rs#r).
Proof.
intros. unfold rs'; unfold compare_ints.
split. auto.
split. auto.
split. auto.
split. auto.
intros. Simplifs.
Qed.
Lemma int_signed_eq:
forall x y, Int.eq x y = zeq (Int.signed x) (Int.signed y).
Proof.
intros. unfold Int.eq. unfold proj_sumbool.
destruct (zeq (Int.unsigned x) (Int.unsigned y));
destruct (zeq (Int.signed x) (Int.signed y)); auto.
elim n. unfold Int.signed. rewrite e; auto.
elim n. apply Int.eqm_small_eq; auto with ints.
eapply Int.eqm_trans. apply Int.eqm_sym. apply Int.eqm_signed_unsigned.
rewrite e. apply Int.eqm_signed_unsigned.
Qed.
Lemma int_not_lt:
forall x y, negb (Int.lt y x) = (Int.lt x y || Int.eq x y).
Proof.
intros. unfold Int.lt. rewrite int_signed_eq. unfold proj_sumbool.
destruct (zlt (Int.signed y) (Int.signed x)).
rewrite zlt_false. rewrite zeq_false. auto. omega. omega.
destruct (zeq (Int.signed x) (Int.signed y)).
rewrite zlt_false. auto. omega.
rewrite zlt_true. auto. omega.
Qed.
Lemma int_lt_not:
forall x y, Int.lt y x = negb (Int.lt x y) && negb (Int.eq x y).
Proof.
intros. rewrite <- negb_orb. rewrite <- int_not_lt. rewrite negb_involutive. auto.
Qed.
Lemma int_not_ltu:
forall x y, negb (Int.ltu y x) = (Int.ltu x y || Int.eq x y).
Proof.
intros. unfold Int.ltu, Int.eq.
destruct (zlt (Int.unsigned y) (Int.unsigned x)).
rewrite zlt_false. rewrite zeq_false. auto. omega. omega.
destruct (zeq (Int.unsigned x) (Int.unsigned y)).
rewrite zlt_false. auto. omega.
rewrite zlt_true. auto. omega.
Qed.
Lemma int_ltu_not:
forall x y, Int.ltu y x = negb (Int.ltu x y) && negb (Int.eq x y).
Proof.
intros. rewrite <- negb_orb. rewrite <- int_not_ltu. rewrite negb_involutive. auto.
Qed.
Lemma testcond_for_signed_comparison_correct:
forall c v1 v2 rs m b,
Val.cmp_bool c v1 v2 = Some b ->
eval_testcond (testcond_for_signed_comparison c)
(nextinstr (compare_ints v1 v2 rs m)) = Some b.
Proof.
intros. generalize (compare_ints_spec rs v1 v2 m).
set (rs' := nextinstr (compare_ints v1 v2 rs m)).
intros [A [B [C [D E]]]].
destruct v1; destruct v2; simpl in H; inv H.
unfold eval_testcond. rewrite A; rewrite B; rewrite C; rewrite D.
simpl. unfold Val.cmp, Val.cmpu.
rewrite Int.lt_sub_overflow.
destruct c; simpl.
destruct (Int.eq i i0); auto.
destruct (Int.eq i i0); auto.
destruct (Int.lt i i0); auto.
rewrite int_not_lt. destruct (Int.lt i i0); simpl; destruct (Int.eq i i0); auto.
rewrite (int_lt_not i i0). destruct (Int.lt i i0); destruct (Int.eq i i0); reflexivity.
destruct (Int.lt i i0); reflexivity.
Qed.
Lemma testcond_for_unsigned_comparison_correct:
forall c v1 v2 rs m b,
Val.cmpu_bool (Mem.valid_pointer m) c v1 v2 = Some b ->
eval_testcond (testcond_for_unsigned_comparison c)
(nextinstr (compare_ints v1 v2 rs m)) = Some b.
Proof.
intros. generalize (compare_ints_spec rs v1 v2 m).
set (rs' := nextinstr (compare_ints v1 v2 rs m)).
intros [A [B [C [D E]]]].
unfold eval_testcond. rewrite A; rewrite B. unfold Val.cmpu, Val.cmp.
destruct v1; destruct v2; simpl in H; inv H.
(* int int *)
destruct c; simpl; auto.
destruct (Int.eq i i0); reflexivity.
destruct (Int.eq i i0); auto.
destruct (Int.ltu i i0); auto.
rewrite int_not_ltu. destruct (Int.ltu i i0); simpl; destruct (Int.eq i i0); auto.
rewrite (int_ltu_not i i0). destruct (Int.ltu i i0); destruct (Int.eq i i0); reflexivity.
destruct (Int.ltu i i0); reflexivity.
(* int ptr *)
destruct (Int.eq i Int.zero) eqn:?; try discriminate.
destruct c; simpl in *; inv H1.
rewrite Heqb1; reflexivity.
rewrite Heqb1; reflexivity.
(* ptr int *)
destruct (Int.eq i0 Int.zero) eqn:?; try discriminate.
destruct c; simpl in *; inv H1.
rewrite Heqb1; reflexivity.
rewrite Heqb1; reflexivity.
(* ptr ptr *)
simpl.
fold (Mem.weak_valid_pointer m b0 (Int.unsigned i)) in *.
fold (Mem.weak_valid_pointer m b1 (Int.unsigned i0)) in *.
destruct (eq_block b0 b1).
destruct (Mem.weak_valid_pointer m b0 (Int.unsigned i) &&
Mem.weak_valid_pointer m b1 (Int.unsigned i0)); inversion H1.
destruct c; simpl; auto.
destruct (Int.eq i i0); reflexivity.
destruct (Int.eq i i0); auto.
destruct (Int.ltu i i0); auto.
rewrite int_not_ltu. destruct (Int.ltu i i0); simpl; destruct (Int.eq i i0); auto.
rewrite (int_ltu_not i i0). destruct (Int.ltu i i0); destruct (Int.eq i i0); reflexivity.
destruct (Int.ltu i i0); reflexivity.
destruct (Mem.valid_pointer m b0 (Int.unsigned i) &&
Mem.valid_pointer m b1 (Int.unsigned i0)); try discriminate.
destruct c; simpl in *; inv H1; reflexivity.
Qed.
Lemma compare_floats_spec:
forall rs n1 n2,
let rs' := nextinstr (compare_floats (Vfloat n1) (Vfloat n2) rs) in
rs'#ZF = Val.of_bool (negb (Float.cmp Cne n1 n2))
/\ rs'#CF = Val.of_bool (negb (Float.cmp Cge n1 n2))
/\ rs'#PF = Val.of_bool (negb (Float.cmp Ceq n1 n2 || Float.cmp Clt n1 n2 || Float.cmp Cgt n1 n2))
/\ (forall r, data_preg r = true -> rs'#r = rs#r).
Proof.
intros. unfold rs'; unfold compare_floats.
split. auto.
split. auto.
split. auto.
intros. Simplifs.
Qed.
Lemma compare_floats32_spec:
forall rs n1 n2,
let rs' := nextinstr (compare_floats32 (Vsingle n1) (Vsingle n2) rs) in
rs'#ZF = Val.of_bool (negb (Float32.cmp Cne n1 n2))
/\ rs'#CF = Val.of_bool (negb (Float32.cmp Cge n1 n2))
/\ rs'#PF = Val.of_bool (negb (Float32.cmp Ceq n1 n2 || Float32.cmp Clt n1 n2 || Float32.cmp Cgt n1 n2))
/\ (forall r, data_preg r = true -> rs'#r = rs#r).
Proof.
intros. unfold rs'; unfold compare_floats32.
split. auto.
split. auto.
split. auto.
intros. Simplifs.
Qed.
Definition eval_extcond (xc: extcond) (rs: regset) : option bool :=
match xc with
| Cond_base c =>
eval_testcond c rs
| Cond_and c1 c2 =>
match eval_testcond c1 rs, eval_testcond c2 rs with
| Some b1, Some b2 => Some (b1 && b2)
| _, _ => None
end
| Cond_or c1 c2 =>
match eval_testcond c1 rs, eval_testcond c2 rs with
| Some b1, Some b2 => Some (b1 || b2)
| _, _ => None
end
end.
Definition swap_floats {A: Type} (c: comparison) (n1 n2: A) : A :=
match c with
| Clt | Cle => n2
| Ceq | Cne | Cgt | Cge => n1
end.
Lemma testcond_for_float_comparison_correct:
forall c n1 n2 rs,
eval_extcond (testcond_for_condition (Ccompf c))
(nextinstr (compare_floats (Vfloat (swap_floats c n1 n2))
(Vfloat (swap_floats c n2 n1)) rs)) =
Some(Float.cmp c n1 n2).
Proof.
intros.
generalize (compare_floats_spec rs (swap_floats c n1 n2) (swap_floats c n2 n1)).
set (rs' := nextinstr (compare_floats (Vfloat (swap_floats c n1 n2))
(Vfloat (swap_floats c n2 n1)) rs)).
intros [A [B [C D]]].
unfold eval_extcond, eval_testcond. rewrite A; rewrite B; rewrite C.
destruct c; simpl.
(* eq *)
rewrite Float.cmp_ne_eq.
caseEq (Float.cmp Ceq n1 n2); intros.
auto.
simpl. destruct (Float.cmp Clt n1 n2 || Float.cmp Cgt n1 n2); auto.
(* ne *)
rewrite Float.cmp_ne_eq.
caseEq (Float.cmp Ceq n1 n2); intros.
auto.
simpl. destruct (Float.cmp Clt n1 n2 || Float.cmp Cgt n1 n2); auto.
(* lt *)
rewrite <- (Float.cmp_swap Cge n1 n2).
rewrite <- (Float.cmp_swap Cne n1 n2).
simpl.
rewrite Float.cmp_ne_eq. rewrite Float.cmp_le_lt_eq.
caseEq (Float.cmp Clt n1 n2); intros; simpl.
caseEq (Float.cmp Ceq n1 n2); intros; simpl.
elimtype False. eapply Float.cmp_lt_eq_false; eauto.
auto.
destruct (Float.cmp Ceq n1 n2); auto.
(* le *)
rewrite <- (Float.cmp_swap Cge n1 n2). simpl.
destruct (Float.cmp Cle n1 n2); auto.
(* gt *)
rewrite Float.cmp_ne_eq. rewrite Float.cmp_ge_gt_eq.
caseEq (Float.cmp Cgt n1 n2); intros; simpl.
caseEq (Float.cmp Ceq n1 n2); intros; simpl.
elimtype False. eapply Float.cmp_gt_eq_false; eauto.
auto.
destruct (Float.cmp Ceq n1 n2); auto.
(* ge *)
destruct (Float.cmp Cge n1 n2); auto.
Qed.
Lemma testcond_for_neg_float_comparison_correct:
forall c n1 n2 rs,
eval_extcond (testcond_for_condition (Cnotcompf c))
(nextinstr (compare_floats (Vfloat (swap_floats c n1 n2))
(Vfloat (swap_floats c n2 n1)) rs)) =
Some(negb(Float.cmp c n1 n2)).
Proof.
intros.
generalize (compare_floats_spec rs (swap_floats c n1 n2) (swap_floats c n2 n1)).
set (rs' := nextinstr (compare_floats (Vfloat (swap_floats c n1 n2))
(Vfloat (swap_floats c n2 n1)) rs)).
intros [A [B [C D]]].
unfold eval_extcond, eval_testcond. rewrite A; rewrite B; rewrite C.
destruct c; simpl.
(* eq *)
rewrite Float.cmp_ne_eq.
caseEq (Float.cmp Ceq n1 n2); intros.
auto.
simpl. destruct (Float.cmp Clt n1 n2 || Float.cmp Cgt n1 n2); auto.
(* ne *)
rewrite Float.cmp_ne_eq.
caseEq (Float.cmp Ceq n1 n2); intros.
auto.
simpl. destruct (Float.cmp Clt n1 n2 || Float.cmp Cgt n1 n2); auto.
(* lt *)
rewrite <- (Float.cmp_swap Cge n1 n2).
rewrite <- (Float.cmp_swap Cne n1 n2).
simpl.
rewrite Float.cmp_ne_eq. rewrite Float.cmp_le_lt_eq.
caseEq (Float.cmp Clt n1 n2); intros; simpl.
caseEq (Float.cmp Ceq n1 n2); intros; simpl.
elimtype False. eapply Float.cmp_lt_eq_false; eauto.
auto.
destruct (Float.cmp Ceq n1 n2); auto.
(* le *)
rewrite <- (Float.cmp_swap Cge n1 n2). simpl.
destruct (Float.cmp Cle n1 n2); auto.
(* gt *)
rewrite Float.cmp_ne_eq. rewrite Float.cmp_ge_gt_eq.
caseEq (Float.cmp Cgt n1 n2); intros; simpl.
caseEq (Float.cmp Ceq n1 n2); intros; simpl.
elimtype False. eapply Float.cmp_gt_eq_false; eauto.
auto.
destruct (Float.cmp Ceq n1 n2); auto.
(* ge *)
destruct (Float.cmp Cge n1 n2); auto.
Qed.
Lemma testcond_for_float32_comparison_correct:
forall c n1 n2 rs,
eval_extcond (testcond_for_condition (Ccompfs c))
(nextinstr (compare_floats32 (Vsingle (swap_floats c n1 n2))
(Vsingle (swap_floats c n2 n1)) rs)) =
Some(Float32.cmp c n1 n2).
Proof.
intros.
generalize (compare_floats32_spec rs (swap_floats c n1 n2) (swap_floats c n2 n1)).
set (rs' := nextinstr (compare_floats32 (Vsingle (swap_floats c n1 n2))
(Vsingle (swap_floats c n2 n1)) rs)).
intros [A [B [C D]]].
unfold eval_extcond, eval_testcond. rewrite A; rewrite B; rewrite C.
destruct c; simpl.
(* eq *)
rewrite Float32.cmp_ne_eq.
caseEq (Float32.cmp Ceq n1 n2); intros.
auto.
simpl. destruct (Float32.cmp Clt n1 n2 || Float32.cmp Cgt n1 n2); auto.
(* ne *)
rewrite Float32.cmp_ne_eq.
caseEq (Float32.cmp Ceq n1 n2); intros.
auto.
simpl. destruct (Float32.cmp Clt n1 n2 || Float32.cmp Cgt n1 n2); auto.
(* lt *)
rewrite <- (Float32.cmp_swap Cge n1 n2).
rewrite <- (Float32.cmp_swap Cne n1 n2).
simpl.
rewrite Float32.cmp_ne_eq. rewrite Float32.cmp_le_lt_eq.
caseEq (Float32.cmp Clt n1 n2); intros; simpl.
caseEq (Float32.cmp Ceq n1 n2); intros; simpl.
elimtype False. eapply Float32.cmp_lt_eq_false; eauto.
auto.
destruct (Float32.cmp Ceq n1 n2); auto.
(* le *)
rewrite <- (Float32.cmp_swap Cge n1 n2). simpl.
destruct (Float32.cmp Cle n1 n2); auto.
(* gt *)
rewrite Float32.cmp_ne_eq. rewrite Float32.cmp_ge_gt_eq.
caseEq (Float32.cmp Cgt n1 n2); intros; simpl.
caseEq (Float32.cmp Ceq n1 n2); intros; simpl.
elimtype False. eapply Float32.cmp_gt_eq_false; eauto.
auto.
destruct (Float32.cmp Ceq n1 n2); auto.
(* ge *)
destruct (Float32.cmp Cge n1 n2); auto.
Qed.
Lemma testcond_for_neg_float32_comparison_correct:
forall c n1 n2 rs,
eval_extcond (testcond_for_condition (Cnotcompfs c))
(nextinstr (compare_floats32 (Vsingle (swap_floats c n1 n2))
(Vsingle (swap_floats c n2 n1)) rs)) =
Some(negb(Float32.cmp c n1 n2)).
Proof.
intros.
generalize (compare_floats32_spec rs (swap_floats c n1 n2) (swap_floats c n2 n1)).
set (rs' := nextinstr (compare_floats32 (Vsingle (swap_floats c n1 n2))
(Vsingle (swap_floats c n2 n1)) rs)).
intros [A [B [C D]]].
unfold eval_extcond, eval_testcond. rewrite A; rewrite B; rewrite C.
destruct c; simpl.
(* eq *)
rewrite Float32.cmp_ne_eq.
caseEq (Float32.cmp Ceq n1 n2); intros.
auto.
simpl. destruct (Float32.cmp Clt n1 n2 || Float32.cmp Cgt n1 n2); auto.
(* ne *)
rewrite Float32.cmp_ne_eq.
caseEq (Float32.cmp Ceq n1 n2); intros.
auto.
simpl. destruct (Float32.cmp Clt n1 n2 || Float32.cmp Cgt n1 n2); auto.
(* lt *)
rewrite <- (Float32.cmp_swap Cge n1 n2).
rewrite <- (Float32.cmp_swap Cne n1 n2).
simpl.
rewrite Float32.cmp_ne_eq. rewrite Float32.cmp_le_lt_eq.
caseEq (Float32.cmp Clt n1 n2); intros; simpl.
caseEq (Float32.cmp Ceq n1 n2); intros; simpl.
elimtype False. eapply Float32.cmp_lt_eq_false; eauto.
auto.
destruct (Float32.cmp Ceq n1 n2); auto.
(* le *)
rewrite <- (Float32.cmp_swap Cge n1 n2). simpl.
destruct (Float32.cmp Cle n1 n2); auto.
(* gt *)
rewrite Float32.cmp_ne_eq. rewrite Float32.cmp_ge_gt_eq.
caseEq (Float32.cmp Cgt n1 n2); intros; simpl.
caseEq (Float32.cmp Ceq n1 n2); intros; simpl.
elimtype False. eapply Float32.cmp_gt_eq_false; eauto.
auto.
destruct (Float32.cmp Ceq n1 n2); auto.
(* ge *)
destruct (Float32.cmp Cge n1 n2); auto.
Qed.
Remark swap_floats_commut:
forall (A B: Type) c (f: A -> B) x y, swap_floats c (f x) (f y) = f (swap_floats c x y).
Proof.
intros. destruct c; auto.
Qed.
Remark compare_floats_inv:
forall vx vy rs r,
r <> CR ZF -> r <> CR CF -> r <> CR PF -> r <> CR SF -> r <> CR OF ->
compare_floats vx vy rs r = rs r.
Proof.
intros.
assert (DFL: undef_regs (CR ZF :: CR CF :: CR PF :: CR SF :: CR OF :: nil) rs r = rs r).
simpl. Simplifs.
unfold compare_floats; destruct vx; destruct vy; auto. Simplifs.
Qed.
Remark compare_floats32_inv:
forall vx vy rs r,
r <> CR ZF -> r <> CR CF -> r <> CR PF -> r <> CR SF -> r <> CR OF ->
compare_floats32 vx vy rs r = rs r.
Proof.
intros.
assert (DFL: undef_regs (CR ZF :: CR CF :: CR PF :: CR SF :: CR OF :: nil) rs r = rs r).
simpl. Simplifs.
unfold compare_floats32; destruct vx; destruct vy; auto. Simplifs.
Qed.
Lemma transl_cond_correct:
forall cond args k c rs m,
transl_cond cond args k = OK c ->
exists rs',
exec_straight ge fn c rs m k rs' m
/\ match eval_condition cond (map rs (map preg_of args)) m with
| None => True
| Some b => eval_extcond (testcond_for_condition cond) rs' = Some b
end
/\ forall r, data_preg r = true -> rs'#r = rs r.
Proof.
unfold transl_cond; intros.
destruct cond; repeat (destruct args; try discriminate); monadInv H.
(* comp *)
simpl. rewrite (ireg_of_eq _ _ EQ). rewrite (ireg_of_eq _ _ EQ1).
econstructor. split. apply exec_straight_one. simpl. eauto. auto.
split. destruct (Val.cmp_bool c0 (rs x) (rs x0)) eqn:?; auto.
eapply testcond_for_signed_comparison_correct; eauto.
intros. unfold compare_ints. Simplifs.
(* compu *)
simpl. rewrite (ireg_of_eq _ _ EQ). rewrite (ireg_of_eq _ _ EQ1).
econstructor. split. apply exec_straight_one. simpl. eauto. auto.
split. destruct (Val.cmpu_bool (Mem.valid_pointer m) c0 (rs x) (rs x0)) eqn:?; auto.
eapply testcond_for_unsigned_comparison_correct; eauto.
intros. unfold compare_ints. Simplifs.
(* compimm *)
simpl. rewrite (ireg_of_eq _ _ EQ). destruct (Int.eq_dec i Int.zero).
econstructor; split. apply exec_straight_one. simpl; eauto. auto.
split. destruct (rs x); simpl; auto. subst. rewrite Int.and_idem.
eapply testcond_for_signed_comparison_correct; eauto.
intros. unfold compare_ints. Simplifs.
econstructor; split. apply exec_straight_one. simpl; eauto. auto.
split. destruct (Val.cmp_bool c0 (rs x) (Vint i)) eqn:?; auto.
eapply testcond_for_signed_comparison_correct; eauto.
intros. unfold compare_ints. Simplifs.
(* compuimm *)
simpl. rewrite (ireg_of_eq _ _ EQ).
econstructor. split. apply exec_straight_one. simpl. eauto. auto.
split. destruct (Val.cmpu_bool (Mem.valid_pointer m) c0 (rs x) (Vint i)) eqn:?; auto.
eapply testcond_for_unsigned_comparison_correct; eauto.
intros. unfold compare_ints. Simplifs.
(* compf *)
simpl. rewrite (freg_of_eq _ _ EQ). rewrite (freg_of_eq _ _ EQ1).
exists (nextinstr (compare_floats (swap_floats c0 (rs x) (rs x0)) (swap_floats c0 (rs x0) (rs x)) rs)).
split. apply exec_straight_one.
destruct c0; simpl; auto.
unfold nextinstr. rewrite Pregmap.gss. rewrite compare_floats_inv; auto with asmgen.
split. destruct (rs x); destruct (rs x0); simpl; auto.
repeat rewrite swap_floats_commut. apply testcond_for_float_comparison_correct.
intros. Simplifs. apply compare_floats_inv; auto with asmgen.
(* notcompf *)
simpl. rewrite (freg_of_eq _ _ EQ). rewrite (freg_of_eq _ _ EQ1).
exists (nextinstr (compare_floats (swap_floats c0 (rs x) (rs x0)) (swap_floats c0 (rs x0) (rs x)) rs)).
split. apply exec_straight_one.
destruct c0; simpl; auto.
unfold nextinstr. rewrite Pregmap.gss. rewrite compare_floats_inv; auto with asmgen.
split. destruct (rs x); destruct (rs x0); simpl; auto.
repeat rewrite swap_floats_commut. apply testcond_for_neg_float_comparison_correct.
intros. Simplifs. apply compare_floats_inv; auto with asmgen.
(* compfs *)
simpl. rewrite (freg_of_eq _ _ EQ). rewrite (freg_of_eq _ _ EQ1).
exists (nextinstr (compare_floats32 (swap_floats c0 (rs x) (rs x0)) (swap_floats c0 (rs x0) (rs x)) rs)).
split. apply exec_straight_one.
destruct c0; simpl; auto.
unfold nextinstr. rewrite Pregmap.gss. rewrite compare_floats32_inv; auto with asmgen.
split. destruct (rs x); destruct (rs x0); simpl; auto.
repeat rewrite swap_floats_commut. apply testcond_for_float32_comparison_correct.
intros. Simplifs. apply compare_floats32_inv; auto with asmgen.
(* notcompfs *)
simpl. rewrite (freg_of_eq _ _ EQ). rewrite (freg_of_eq _ _ EQ1).
exists (nextinstr (compare_floats32 (swap_floats c0 (rs x) (rs x0)) (swap_floats c0 (rs x0) (rs x)) rs)).
split. apply exec_straight_one.
destruct c0; simpl; auto.
unfold nextinstr. rewrite Pregmap.gss. rewrite compare_floats32_inv; auto with asmgen.
split. destruct (rs x); destruct (rs x0); simpl; auto.
repeat rewrite swap_floats_commut. apply testcond_for_neg_float32_comparison_correct.
intros. Simplifs. apply compare_floats32_inv; auto with asmgen.
(* maskzero *)
simpl. rewrite (ireg_of_eq _ _ EQ).
econstructor. split. apply exec_straight_one. simpl; eauto. auto.
split. destruct (rs x); simpl; auto.
generalize (compare_ints_spec rs (Vint (Int.and i0 i)) Vzero m).
intros [A B]. rewrite A. unfold Val.cmpu; simpl. destruct (Int.eq (Int.and i0 i) Int.zero); auto.
intros. unfold compare_ints. Simplifs.
(* masknotzero *)
simpl. rewrite (ireg_of_eq _ _ EQ).
econstructor. split. apply exec_straight_one. simpl; eauto. auto.
split. destruct (rs x); simpl; auto.
generalize (compare_ints_spec rs (Vint (Int.and i0 i)) Vzero m).
intros [A B]. rewrite A. unfold Val.cmpu; simpl. destruct (Int.eq (Int.and i0 i) Int.zero); auto.
intros. unfold compare_ints. Simplifs.
Qed.
Remark eval_testcond_nextinstr:
forall c rs, eval_testcond c (nextinstr rs) = eval_testcond c rs.
Proof.
intros. unfold eval_testcond. repeat rewrite nextinstr_inv; auto with asmgen.
Qed.
Remark eval_testcond_set_ireg:
forall c rs r v, eval_testcond c (rs#(IR r) <- v) = eval_testcond c rs.
Proof.
intros. unfold eval_testcond. repeat rewrite Pregmap.gso; auto with asmgen.
Qed.
Lemma mk_setcc_base_correct:
forall cond rd k rs1 m,
exists rs2,
exec_straight ge fn (mk_setcc_base cond rd k) rs1 m k rs2 m
/\ rs2#rd = Val.of_optbool(eval_extcond cond rs1)
/\ forall r, data_preg r = true -> r <> EAX /\ r <> ECX -> r <> rd -> rs2#r = rs1#r.
Proof.
intros. destruct cond; simpl in *.
- (* base *)
econstructor; split.
apply exec_straight_one. simpl; eauto. auto.
split. Simplifs. intros; Simplifs.
- (* or *)
assert (Val.of_optbool
match eval_testcond c1 rs1 with
| Some b1 =>
match eval_testcond c2 rs1 with
| Some b2 => Some (b1 || b2)
| None => None
end
| None => None
end =
Val.or (Val.of_optbool (eval_testcond c1 rs1)) (Val.of_optbool (eval_testcond c2 rs1))).
destruct (eval_testcond c1 rs1). destruct (eval_testcond c2 rs1).
destruct b; destruct b0; auto.
destruct b; auto.
auto.
rewrite H; clear H.
destruct (ireg_eq rd EAX).
subst rd. econstructor; split.
eapply exec_straight_three.
simpl; eauto.
simpl. rewrite eval_testcond_nextinstr. repeat rewrite eval_testcond_set_ireg. eauto.
simpl; eauto.
auto. auto. auto.
intuition Simplifs.
econstructor; split.
eapply exec_straight_three.
simpl; eauto.
simpl. rewrite eval_testcond_nextinstr. repeat rewrite eval_testcond_set_ireg. eauto.
simpl. eauto.
auto. auto. auto.
split. Simplifs. rewrite Val.or_commut. decEq; Simplifs.
intros. destruct H0; Simplifs.
- (* and *)
assert (Val.of_optbool
match eval_testcond c1 rs1 with
| Some b1 =>
match eval_testcond c2 rs1 with
| Some b2 => Some (b1 && b2)
| None => None
end
| None => None
end =
Val.and (Val.of_optbool (eval_testcond c1 rs1)) (Val.of_optbool (eval_testcond c2 rs1))).
{
destruct (eval_testcond c1 rs1). destruct (eval_testcond c2 rs1).
destruct b; destruct b0; auto.
destruct b; auto.
auto.
}
rewrite H; clear H.
destruct (ireg_eq rd EAX).
subst rd. econstructor; split.
eapply exec_straight_three.
simpl; eauto.
simpl. rewrite eval_testcond_nextinstr. repeat rewrite eval_testcond_set_ireg. eauto.
simpl; eauto.
auto. auto. auto.
intuition Simplifs.
econstructor; split.
eapply exec_straight_three.
simpl; eauto.
simpl. rewrite eval_testcond_nextinstr. repeat rewrite eval_testcond_set_ireg. eauto.
simpl. eauto.
auto. auto. auto.
split. Simplifs. rewrite Val.and_commut. decEq; Simplifs.
intros. destruct H0; Simplifs.
Qed.
Lemma mk_setcc_correct:
forall cond rd k rs1 m,
exists rs2,
exec_straight ge fn (mk_setcc cond rd k) rs1 m k rs2 m
/\ rs2#rd = Val.of_optbool(eval_extcond cond rs1)
/\ forall r, data_preg r = true -> r <> EAX /\ r <> ECX -> r <> rd -> rs2#r = rs1#r.
Proof.
intros. unfold mk_setcc. destruct (low_ireg rd).
- apply mk_setcc_base_correct.
- exploit mk_setcc_base_correct. intros [rs2 [A [B C]]].
econstructor; split. eapply exec_straight_trans. eexact A. apply exec_straight_one.
simpl. eauto. simpl. auto.
intuition Simplifs.
Qed.
(** Translation of arithmetic operations. *)
Ltac ArgsInv :=
match goal with
| [ H: Error _ = OK _ |- _ ] => discriminate
| [ H: match ?args with nil => _ | _ :: _ => _ end = OK _ |- _ ] => destruct args; ArgsInv
| [ H: bind _ _ = OK _ |- _ ] => monadInv H; ArgsInv
| [ H: match _ with left _ => _ | right _ => assertion_failed end = OK _ |- _ ] => monadInv H; ArgsInv
| [ H: match _ with true => _ | false => assertion_failed end = OK _ |- _ ] => monadInv H; ArgsInv
| [ H: ireg_of _ = OK _ |- _ ] => simpl in *; rewrite (ireg_of_eq _ _ H) in *; clear H; ArgsInv
| [ H: freg_of _ = OK _ |- _ ] => simpl in *; rewrite (freg_of_eq _ _ H) in *; clear H; ArgsInv
| _ => idtac
end.
Ltac TranslOp :=
econstructor; split;
[ apply exec_straight_one; [ simpl; eauto | auto ]
| split; [ Simplifs | intros; Simplifs ]].
Lemma transl_op_correct:
forall op args res k c (rs: regset) m v,
transl_op op args res k = OK c ->
eval_operation ge (rs#ESP) op (map rs (map preg_of args)) m = Some v ->
exists rs',
exec_straight ge fn c rs m k rs' m
/\ Val.lessdef v rs'#(preg_of res)
/\ forall r, data_preg r = true -> r <> preg_of res -> preg_notin r (destroyed_by_op op) -> rs' r = rs r.
Proof.
Transparent destroyed_by_op.
intros until v; intros TR EV.
assert (SAME:
(exists rs',
exec_straight ge fn c rs m k rs' m
/\ rs'#(preg_of res) = v
/\ forall r, data_preg r = true -> r <> preg_of res -> preg_notin r (destroyed_by_op op) -> rs' r = rs r) ->
exists rs',
exec_straight ge fn c rs m k rs' m
/\ Val.lessdef v rs'#(preg_of res)
/\ forall r, data_preg r = true -> r <> preg_of res -> preg_notin r (destroyed_by_op op) -> rs' r = rs r).
{
intros [rs' [A [B C]]]. subst v. exists rs'; auto.
}
destruct op; simpl in TR; ArgsInv; simpl in EV; try (inv EV); try (apply SAME; TranslOp; fail).
(* move *)
exploit mk_mov_correct; eauto. intros [rs2 [A [B C]]].
apply SAME. exists rs2. eauto.
(* intconst *)
apply SAME. destruct (Int.eq_dec i Int.zero). subst i. TranslOp. TranslOp.
(* floatconst *)
apply SAME. destruct (Float.eq_dec f Float.zero). subst f. TranslOp. TranslOp.
(* singleconst *)
apply SAME. destruct (Float32.eq_dec f Float32.zero). subst f. TranslOp. TranslOp.
(* cast8signed *)
apply SAME. eapply mk_intconv_correct; eauto.
(* cast8unsigned *)
apply SAME. eapply mk_intconv_correct; eauto.
(* cast16signed *)
apply SAME. eapply mk_intconv_correct; eauto.
(* cast16unsigned *)
apply SAME. eapply mk_intconv_correct; eauto.
(* mulhs *)
apply SAME. TranslOp. destruct H1. Simplifs.
(* mulhu *)
apply SAME. TranslOp. destruct H1. Simplifs.
(* div *)
apply SAME.
specialize (divs_mods_exist (rs EAX) (rs ECX)). rewrite H0.
destruct (Val.mods (rs EAX) (rs ECX)) as [vr|] eqn:?; intros; try contradiction.
TranslOp. change (rs#EDX<-Vundef ECX) with (rs#ECX). rewrite H0; rewrite Heqo. eauto.
auto. auto.
simpl in H3. destruct H3; Simplifs.
(* divu *)
apply SAME.
specialize (divu_modu_exist (rs EAX) (rs ECX)). rewrite H0.
destruct (Val.modu (rs EAX) (rs ECX)) as [vr|] eqn:?; intros; try contradiction.
TranslOp. change (rs#EDX<-Vundef ECX) with (rs#ECX). rewrite H0; rewrite Heqo. eauto.
auto. auto.
simpl in H3. destruct H3; Simplifs.
(* mod *)
apply SAME.
specialize (divs_mods_exist (rs EAX) (rs ECX)). rewrite H0.
destruct (Val.divs (rs EAX) (rs ECX)) as [vr|] eqn:?; intros; try contradiction.
TranslOp. change (rs#EDX<-Vundef ECX) with (rs#ECX). rewrite H0; rewrite Heqo. eauto.
auto. auto.
simpl in H3. destruct H3; Simplifs.
(* modu *)
apply SAME.
specialize (divu_modu_exist (rs EAX) (rs ECX)). rewrite H0.
destruct (Val.divu (rs EAX) (rs ECX)) as [vr|] eqn:?; intros; try contradiction.
TranslOp. change (rs#EDX<-Vundef ECX) with (rs#ECX). rewrite H0; rewrite Heqo. eauto.
auto. auto.
simpl in H3. destruct H3; Simplifs.
(* shrximm *)
apply SAME. eapply mk_shrximm_correct; eauto.
(* lea *)
exploit transl_addressing_mode_correct; eauto. intros EA.
TranslOp. rewrite nextinstr_inv; auto with asmgen. rewrite Pregmap.gss; auto.
(* intoffloat *)
apply SAME. TranslOp. rewrite H0; auto.
(* floatofint *)
apply SAME. TranslOp. rewrite H0; auto.
(* intofsingle *)
apply SAME. TranslOp. rewrite H0; auto.
(* singleofint *)
apply SAME. TranslOp. rewrite H0; auto.
(* condition *)
exploit transl_cond_correct; eauto. intros [rs2 [P [Q R]]].
exploit mk_setcc_correct; eauto. intros [rs3 [S [T U]]].
exists rs3.
split. eapply exec_straight_trans. eexact P. eexact S.
split. rewrite T. destruct (eval_condition c0 rs ## (preg_of ## args) m).
rewrite Q. auto.
simpl; auto.
intros. transitivity (rs2 r); auto.
Qed.
(** Translation of memory loads. *)
Lemma transl_load_correct:
forall chunk addr args dest k c (rs: regset) m a v,
transl_load chunk addr args dest k = OK c ->
eval_addressing ge (rs#ESP) addr (map rs (map preg_of args)) = Some a ->
Mem.loadv chunk m a = Some v ->
exists rs',
exec_straight ge fn c rs m k rs' m
/\ rs'#(preg_of dest) = v
/\ forall r, data_preg r = true -> r <> preg_of dest -> rs'#r = rs#r.
Proof.
unfold transl_load; intros. monadInv H.
exploit transl_addressing_mode_correct; eauto. intro EA.
assert (EA': eval_addrmode ge x rs = a). destruct a; simpl in H1; try discriminate; inv EA; auto.
set (rs2 := nextinstr_nf (rs#(preg_of dest) <- v)).
assert (exec_load ge chunk m x rs (preg_of dest) = Next rs2 m).
unfold exec_load. rewrite EA'. rewrite H1. auto.
assert (rs2 PC = Val.add (rs PC) Vone).
transitivity (Val.add ((rs#(preg_of dest) <- v) PC) Vone).
auto. decEq. apply Pregmap.gso; auto with asmgen.
exists rs2. split.
destruct chunk; ArgsInv; apply exec_straight_one; auto.
split. unfold rs2. rewrite nextinstr_nf_inv1. Simplifs. apply preg_of_data.
intros. unfold rs2. Simplifs.
Qed.
Lemma transl_store_correct:
forall chunk addr args src k c (rs: regset) m a m',
transl_store chunk addr args src k = OK c ->
eval_addressing ge (rs#ESP) addr (map rs (map preg_of args)) = Some a ->
Mem.storev chunk m a (rs (preg_of src)) = Some m' ->
exists rs',
exec_straight ge fn c rs m k rs' m'
/\ forall r, data_preg r = true -> preg_notin r (destroyed_by_store chunk addr) -> rs'#r = rs#r.
Proof.
unfold transl_store; intros. monadInv H.
exploit transl_addressing_mode_correct; eauto. intro EA.
assert (EA': eval_addrmode ge x rs = a). destruct a; simpl in H1; try discriminate; inv EA; auto.
rewrite <- EA' in H1. destruct chunk; ArgsInv.
(* int8signed *)
eapply mk_smallstore_correct; eauto.
intros. simpl. unfold exec_store.
destruct (eval_addrmode ge addr0 rs0); simpl; auto. rewrite Mem.store_signed_unsigned_8; auto.
(* int8unsigned *)
eapply mk_smallstore_correct; eauto.
(* int16signed *)
econstructor; split.
apply exec_straight_one. simpl. unfold exec_store.
replace (Mem.storev Mint16unsigned m (eval_addrmode ge x rs) (rs x0))
with (Mem.storev Mint16signed m (eval_addrmode ge x rs) (rs x0)).
rewrite H1. eauto.
destruct (eval_addrmode ge x rs); simpl; auto. rewrite Mem.store_signed_unsigned_16; auto.
auto.
intros. Simplifs.
(* int16unsigned *)
econstructor; split.
apply exec_straight_one. simpl. unfold exec_store. rewrite H1. eauto. auto.
intros. Simplifs.
(* int32 *)
econstructor; split.
apply exec_straight_one. simpl. unfold exec_store. rewrite H1. eauto. auto.
intros. Simplifs.
(* float32 *)
econstructor; split.
apply exec_straight_one. simpl. unfold exec_store. rewrite H1. eauto. auto.
intros. Transparent destroyed_by_store. simpl in H2. simpl. Simplifs.
(* float64 *)
econstructor; split.
apply exec_straight_one. simpl. unfold exec_store. rewrite H1. eauto. auto.
intros. Simplifs.
Qed.
End CONSTRUCTORS.
|
theory Myhill_Nerode
imports Tree_Automata Ground_Ctxt
begin
subsection \<open>Myhill Nerode characterization for regular tree languages\<close>
lemma ground_ctxt_apply_pres_der:
assumes "ta_der \<A> (term_of_gterm s) = ta_der \<A> (term_of_gterm t)"
shows "ta_der \<A> (term_of_gterm C\<langle>s\<rangle>\<^sub>G) = ta_der \<A> (term_of_gterm C\<langle>t\<rangle>\<^sub>G)" using assms
by (induct C) (auto, (metis append_Cons_nth_not_middle nth_append_length)+)
locale myhill_nerode =
fixes \<F> \<L> assumes term_subset: "\<L> \<subseteq> \<T>\<^sub>G \<F>"
begin
definition myhill ("_ \<equiv>\<^sub>\<L> _") where
"myhill s t \<equiv> s \<in> \<T>\<^sub>G \<F> \<and> t \<in> \<T>\<^sub>G \<F> \<and> (\<forall> C. C\<langle>s\<rangle>\<^sub>G \<in> \<L> \<and> C\<langle>t\<rangle>\<^sub>G \<in> \<L> \<or> C\<langle>s\<rangle>\<^sub>G \<notin> \<L> \<and> C\<langle>t\<rangle>\<^sub>G \<notin> \<L>)"
lemma myhill_sound: "s \<equiv>\<^sub>\<L> t \<Longrightarrow> s \<in> \<T>\<^sub>G \<F>" "s \<equiv>\<^sub>\<L> t \<Longrightarrow> t \<in> \<T>\<^sub>G \<F>"
unfolding myhill_def by auto
lemma myhill_refl [simp]: "s \<in> \<T>\<^sub>G \<F> \<Longrightarrow> s \<equiv>\<^sub>\<L> s"
unfolding myhill_def by auto
lemma myhill_symmetric: "s \<equiv>\<^sub>\<L> t \<Longrightarrow> t \<equiv>\<^sub>\<L> s"
unfolding myhill_def by auto
lemma myhill_trans [trans]:
"s \<equiv>\<^sub>\<L> t \<Longrightarrow> t \<equiv>\<^sub>\<L> u \<Longrightarrow> s \<equiv>\<^sub>\<L> u"
unfolding myhill_def by auto
abbreviation myhill_r ("MN\<^sub>\<L>") where
"myhill_r \<equiv> {(s, t) | s t. s \<equiv>\<^sub>\<L> t}"
lemma myhill_equiv:
"equiv (\<T>\<^sub>G \<F>) MN\<^sub>\<L>"
apply (intro equivI) apply (auto simp: myhill_sound myhill_symmetric sym_def trans_def refl_on_def)
using myhill_trans by blast
lemma rtl_der_image_on_myhill_inj:
assumes "gta_lang Q\<^sub>f \<A> = \<L>"
shows "inj_on (\<lambda> X. gta_der \<A> ` X) (\<T>\<^sub>G \<F> // MN\<^sub>\<L>)" (is "inj_on ?D ?R")
proof -
{fix S T assume eq_rel: "S \<in> ?R" "T \<in> ?R" "?D S = ?D T"
have "\<And> s t. s \<in> S \<Longrightarrow> t \<in> T \<Longrightarrow> s \<equiv>\<^sub>\<L> t"
proof -
fix s t assume mem: "s \<in> S" "t \<in> T"
then obtain t' where res: "t' \<in> T" "gta_der \<A> s = gta_der \<A> t'" using eq_rel(3)
by (metis image_iff)
from res(1) mem have "s \<in> \<T>\<^sub>G \<F>" "t \<in> \<T>\<^sub>G \<F>" "t' \<in> \<T>\<^sub>G \<F>" using eq_rel(1, 2)
using in_quotient_imp_subset myhill_equiv by blast+
then have "s \<equiv>\<^sub>\<L> t'" using assms res ground_ctxt_apply_pres_der[of \<A> s]
by (auto simp: myhill_def gta_der_def simp flip: ctxt_of_gctxt_apply
elim!: gta_langE intro: gta_langI)
moreover have "t' \<equiv>\<^sub>\<L> t" using quotient_eq_iff[OF myhill_equiv eq_rel(2) eq_rel(2) res(1) mem(2)]
by simp
ultimately show "s \<equiv>\<^sub>\<L> t" using myhill_trans by blast
qed
then have "\<And> s t. s \<in> S \<Longrightarrow> t \<in> T \<Longrightarrow> (s, t) \<in> MN\<^sub>\<L>" by blast
then have "S = T" using quotient_eq_iff[OF myhill_equiv eq_rel(1, 2)]
using eq_rel(3) by fastforce}
then show inj: "inj_on ?D ?R" by (meson inj_onI)
qed
lemma rtl_implies_finite_indexed_myhill_relation:
assumes "gta_lang Q\<^sub>f \<A> = \<L>"
shows "finite (\<T>\<^sub>G \<F> // MN\<^sub>\<L>)" (is "finite ?R")
proof -
let ?D = "\<lambda> X. gta_der \<A> ` X"
have image: "?D ` ?R \<subseteq> Pow (fset (fPow (\<Q> \<A>)))" unfolding gta_der_def
by (meson PowI fPowI ground_ta_der_states ground_term_of_gterm image_subsetI notin_fset)
then have "finite (Pow (fset (fPow (\<Q> \<A>))))" by simp
then have "finite (?D ` ?R)" using finite_subset[OF image] by fastforce
then show ?thesis using finite_image_iff[OF rtl_der_image_on_myhill_inj[OF assms]]
by blast
qed
end
end |
\chapter{Matrices}
\section{Properties}
\subsection{Dimension}
The dimension
\footnote{Not to be confused with the dimension of a vector space, see \ref{dimension}} is the number of rows \(a\) and columns \(b\) of a matrix \(A\)
\begin{equation}
\dim{A} = a \times b
\end{equation}
Denoted as:
\begin{align*}
A^{a \times b}
\end{align*}
\begin{example}
\begin{equation*}
\dim{\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix}} = 2 \times 3
\end{equation*}
\end{example}
\begin{example}[Linearly dependent rows/columns]
\begin{equation*}
\dim{\begin{bmatrix}
1 & 2 \\
2 & 4
\end{bmatrix}}= 2 \times 2
\end{equation*}
\end{example}
\begin{matlab}
\apilink{size}{https://www.mathworks.com/help/matlab/ref/size.html}
\begin{lstlisting}
A = [1,2,3; 1,2,3]
size(A)
ans =
     2     3
\end{lstlisting}
\end{matlab}
\subsection{Rank (Rang)} \label{rank}
\subsubsection{Rowspace, columnspace}
The columnspace \( C(A) \) of a matrix is the span of its column vectors. \\
The rowspace is defined as the span of its row vectors. It is denoted as \( C(A^T) \).\\
The dimensions of the columnspace and the rowspace are always equal.
\begin{example}
\begin{align*}
A & = \begin{bmatrix}
1 & 2 & 4 \\ 1 & 2 & 4
\end{bmatrix} \\
C(A) & = \setb{
\begin{bmatrix}
c \\ c
\end{bmatrix}}{c \in \mathbb{R}} \\
C(A^T) & = \setb{
\begin{bmatrix}
c \\ 2c \\ 4c
\end{bmatrix}}{c \in \mathbb{R}} \\
\dim C(A) & = \dim C(A^T) = 1 \\
\end{align*}
\end{example}
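As a quick numerical illustration (a sketch, not part of the text above), MATLAB's built-in \texttt{orth} returns an orthonormal basis of the column space, and applying it to \(A^T\) gives one for the row space:
\begin{matlab}
\begin{lstlisting}
A = [1, 2, 4; 1, 2, 4];
orth(A)     % orthonormal basis of C(A):   a multiple of [1; 1]
orth(A')    % orthonormal basis of C(A^T): a multiple of [1; 2; 4]
\end{lstlisting}
\end{matlab}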
\subsubsection{Rank}
The rank of a matrix \(A\) is the maximal number of linearly independent columns
(or, equivalently, the number of linearly independent rows). Equally, the rank
of a matrix \(A\) is the dimension of its columnspace (or rowspace):
\begin{equation}
\rank A = \dim C(A) = \dim C(A^T)
\end{equation}
\begin{example}
\begin{equation*}
\rank \begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix} = 2
\end{equation*}
\end{example}
\begin{example}[Both rows are linearly dependent]
\begin{equation*}
\rank \begin{bmatrix}
1 & 2 & 3 \\
2 & 4 & 6
\end{bmatrix} = 1 \\
\end{equation*}
\end{example}
\begin{example}[Only a matrix containing zeroes has a rank of 0]
\begin{equation*}
\rank \begin{bmatrix}
0 & 0 & 0 \\
0 & 0 & 0
\end{bmatrix} = 0
\end{equation*}
\end{example}
\begin{example}
Both columns are linearly independent, some rows are linearly dependent.
\begin{equation*}
\rank \begin{bmatrix}
1 & 2 \\ 2 & 4 \\ 5 & 7
\end{bmatrix} = 2
\end{equation*}
\end{example}
\begin{matlab}
\apilink{rank}{https://www.mathworks.com/help/matlab/ref/rank.html}
\begin{lstlisting}
A = [1,2,3; 1,2,3]
rank(A)
ans = 1
\end{lstlisting}
\end{matlab}
\subsection{Trace (Spur)}
The trace of a square matrix \( A \) is the sum of all its main diagonal elements.
\begin{equation}
tr(A) = \sum_{i=1}^{n} a_{ii}
\end{equation}
\begin{example}
\begin{align*}
A & = \begin{bmatrix}
1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9
\end{bmatrix} \\
tr(A) & = 1 + 5 + 9 = 15
\end{align*}
\end{example}
\begin{matlab}
\apilink{trace}{https://www.mathworks.com/help/matlab/ref/double.trace.html}
\begin{lstlisting}
>> A = [1,2,3;4,5,6;7,8,9]
>> trace(A)
ans =
15
\end{lstlisting}
\end{matlab}
\subsection{Minor, Cofactors}\label{minor}
\subsubsection{Submatrix}
A submatrix \(S_{ij}\) of a matrix \(A\) is the matrix obtained by deleting the \(i\)th row and the \(j\)th column.
\begin{example}
\begin{align*}
A & = \begin{bmatrix}
1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9
\end{bmatrix} \\
S_{12} & = \begin{bmatrix}
4 & 6 \\ 7 & 9
\end{bmatrix}
\end{align*}
\end{example}
\subsubsection{Minor}
A minor \(M_{ij}\) of a matrix \(A\) is the determinant of the submatrix \(S_{ij}\).
\subsubsection{Cofactors}
A cofactor \( C_{ij} \) is obtained by multiplying the minor \( M_{ij} \) by \( (-1)^{i + j} \). The cofactor Matrix \(C \) is given by:
\begin{equation}
C = \begin{bmatrix}
C_{11} & C_{12} & \dots & C_{1i} \\
C_{21} & C_{22} & \dots & C_{2i} \\
\vdots & \vdots & \ddots \\
C_{j1} & C_{j2} & & C_{ij} \\
\end{bmatrix} = \begin{bmatrix}
M_{11} & -M_{12} & \dots & (-1)^{i + 1}M_{1i} \\
-M_{21} & M_{22} & \dots & (-1)^{i + 2} M_{2i} \\
\vdots & \vdots & \ddots \\
(-1)^{1+ j}M_{j1} & (-1)^{2 + j} M_{j2} & & (-1)^{i + j} M_{ij} \\
\end{bmatrix}
\end{equation}
\begin{example}\label{minor_example}
\begin{align*}
A = \begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9 \\
\end{bmatrix} \\
\end{align*}
\begin{align*}
M_{11} & = det(\begin{bmatrix}
5 & 6 \\
8 & 9 \\
\end{bmatrix}) = -3 &
M_{12} & = det(\begin{bmatrix}
4 & 6 \\
7 & 9 \\
\end{bmatrix}) = -6 &
M_{13} & = det(\begin{bmatrix}
4 & 5 \\
7 & 8 \\
\end{bmatrix}) = -3 \\
M_{21} & = det(\begin{bmatrix}
2 & 3 \\
8 & 9 \\
\end{bmatrix}) = -6 &
M_{22} & = det(\begin{bmatrix}
1 & 3 \\
7 & 9 \\
\end{bmatrix}) = -12 &
M_{23} & = det(\begin{bmatrix}
1 & 2 \\
7 & 8 \\
\end{bmatrix}) = -6 \\
M_{31} & = det(\begin{bmatrix}
2 & 3 \\
5 & 6 \\
\end{bmatrix}) = -3 &
M_{32} & = det(\begin{bmatrix}
1 & 3 \\
4 & 6 \\
\end{bmatrix}) = -6 &
M_{33} & = det(\begin{bmatrix}
1 & 2 \\
4 & 5 \\
\end{bmatrix}) = -3
\end{align*}
\begin{equation*}
C = \begin{bmatrix}
M_{11} & -M_{12} & M_{13} \\
-M_{21} & M_{22} & -M_{23} \\
M_{31} & -M_{32} & M_{33} \\
\end{bmatrix} = \begin{bmatrix}
-3 & 6 & -3 \\
6 & -12 & 6 \\
-3 & 6 & -3 \\
\end{bmatrix}
\end{equation*}
\end{example}
\begin{example}
\begin{equation*}
A = \begin{bmatrix}
1 & 2 \\ 3 & 4
\end{bmatrix}
\end{equation*}
\begin{align*}
M_{11} & = 4 & M_{12} & = 3 \\
M_{21} & = 2 & M_{22} & = 1
\end{align*}
\begin{equation*}
C = \begin{bmatrix}
M_{11} & -M_{12} \\
-M_{21} & M_{22}
\end{bmatrix} = \begin{bmatrix}
4 & -3 \\ -2 & 1
\end{bmatrix}
\end{equation*}
\end{example}
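The cofactor matrix can also be computed numerically. The following MATLAB sketch (illustrative only, not a library routine) rebuilds \(C\) for the \(3 \times 3\) example above by deleting one row and one column at a time:
\begin{matlab}
\begin{lstlisting}
A = [1, 2, 3; 4, 5, 6; 7, 8, 9];
n = size(A, 1);
C = zeros(n);
for i = 1:n
    for j = 1:n
        S = A; S(i, :) = []; S(:, j) = [];   % submatrix S_ij
        C(i, j) = (-1)^(i + j) * det(S);     % cofactor C_ij
    end
end
C   % [-3 6 -3; 6 -12 6; -3 6 -3]
\end{lstlisting}
\end{matlab}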
\subsection{Determinant}
\subsubsection{2x2 Matrix}
For 2x2 Matrix the formula is as given:
\begin{align*}
\determinant{\begin{bmatrix}
x_1 & x_2 \\
x_3 & x_4 \\
\end{bmatrix}} = x_1 \cdot x_4 - x_2 \cdot x_3
\end{align*}
\begin{example}
\begin{align*}
\determinant{
\begin{bmatrix}
3 & 7 \\ -5 & 11
\end{bmatrix}
} = 3 \cdot 11 - 7 \cdot (-5) = 68
\end{align*}
\end{example}
\subsubsection{3x3 Matrix}
The determinant of a 3x3 Matrix can be calculated using its minors.
\begin{align*}
\determinant{\begin{bmatrix}
x_1 & x_2 & x_3 \\
x_4 & x_5 & x_6 \\
x_7 & x_8 & x_9 \\
\end{bmatrix}} & = x_1 \cdot
\determinant{\begin{bmatrix}
x_5 & x_6 \\
x_8 & x_9 \\
\end{bmatrix}}- x_2 \cdot
\determinant{\begin{bmatrix}
x_4 & x_6 \\
x_7 & x_9 \\
\end{bmatrix}}+ x_3 \cdot \determinant{\begin{bmatrix}
x_4 & x_5 \\
x_7 & x_8 \\
\end{bmatrix}} \\
& = x_1 (x_5x_9 - x_6 x_8) - x_2 (x_4 x_9 - x_6 x_7) + x_3 (x_4 x_8 - x_5 x_7) \\
& = x_1 x_5 x_9 + x_2 x_6 x_7 + x_3 x_4 x_8 - x_3 x_5 x_7 - x_2 x_4 x_9 - x_1 x_6 x_8
\end{align*}
For higher order matrices you can apply this method recursively.
\begin{example}
Minors were calculated in previous example.
\begin{align*}
\determinant{\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9 \\
\end{bmatrix}} & = 1 \cdot(-3) - 2 \cdot(-6) + 3 \cdot(-3) = 0 \\
\end{align*}
\end{example}
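The recursive cofactor expansion can also be written down directly. The following MATLAB function is an illustrative sketch only; in practice the built-in det should be used, since this recursion is very slow for large matrices:
\begin{matlab}
\begin{lstlisting}
function d = cofactor_det(A)
% Determinant by recursive cofactor expansion along the first row
% (illustrative sketch; use the built-in det in practice).
n = size(A, 1);
if n == 1
    d = A(1, 1);
    return
end
d = 0;
for j = 1:n
    S = A(2:end, [1:j-1, j+1:n]);            % delete row 1 and column j
    d = d + (-1)^(1 + j) * A(1, j) * cofactor_det(S);
end
end
\end{lstlisting}
\end{matlab}
For the \(3 \times 3\) example above this returns 0, in agreement with the built-in det.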
\subsubsection{Triangular matrix}
\begin{align*}
D = \begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1n} \\
0 & x_{22} & & \\
\vdots & & \ddots \\
0 & 0 & \cdots & x_{nn} \\
\end{bmatrix} \qquad \det(D) & = x_{11} \cdot x_{22} \cdots x_{nn} = \prod_{i=1}^{n} x_{ii}
\end{align*}
\subsubsection{Singular matrix}
Singular matrices are matrices with \( \det = 0 \).
Singular matrices have rows and/or columns that are not linearly independent.
\begin{example}
\begin{align*}
A & = \begin{bmatrix}
1 & 2 \\ -2 & -4
\end{bmatrix} \\
\determinant{A} & = 1 \cdot (-4) - (-2) \cdot 2 = 0
\end{align*}
\end{example}
\begin{matlab}
\apilink{det}{https://www.mathworks.com/help/matlab/ref/det.html}
\begin{lstlisting}
>> a =[[3,7];[4,12]]
>> det(a)
ans = 8
\end{lstlisting}
\end{matlab}
\subsection{Eigenvalues, Eigenvectors}\label{eigen}
An eigenvector \( v \) of a square matrix \(A\) is a nonzero vector that changes at most by a scalar factor when the linear transformation defined by \(A\) is applied to it. The corresponding eigenvalue $\lambda$ is the factor by which the eigenvector is scaled.
\begin{equation} \label{eigenexpression}
A\cdot v = \lambda \cdot v
\end{equation}
\begin{example}
\begin{align*}
A & = \begin{bmatrix}
5 & 1 \\ 0 & 3
\end{bmatrix} \\
v & = \begin{bmatrix}
1 \\ 0
\end{bmatrix}, \lambda = 5 \\
A \cdot v & = \begin{bmatrix}
5 & 1 \\ 0 & 3
\end{bmatrix} \cdot \begin{bmatrix}
1 \\ 0
\end{bmatrix} = \begin{bmatrix}
5 \\ 0
\end{bmatrix}
\end{align*}
\end{example}
\subsubsection{Characteristic polynomial}
The expression \ref{eigenexpression} can be written as:
\begin{align}
A\cdot v = \lambda \cdot I \cdot v \tag*{Multiplying with the identity matrix} \\
A \cdot v - \lambda \cdot I \cdot v = 0 \\
(A - \lambda \cdot I) \cdot v = 0
\end{align}
Since \(v\) by definition cannot be the zero vector, the matrix \( (A - \lambda \cdot I) \) must be singular, i.e. its determinant must be zero.
\begin{align*}
\determinant{ A - \lambda \cdot I } = 0 \\
\determinant{
\begin{bmatrix}
a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} - \lambda & & \\
\vdots & & \ddots & \\
a_{n1} & & & a_{nn} - \lambda
\end{bmatrix}
} & = 0
\end{align*}
The characteristic polynomial \( p_{A}\) of a matrix \(A\) is defined as:
\begin{equation}
p_A(t) = \determinant{A - t I}
\end{equation}
If \(A\) is a square matrix with \( \dim(A) = n \times n \), then \(p_A(t)\) has degree \(n\). With the convention \(\determinant{t I - A}\) the characteristic polynomial is always monic (the leading coefficient is 1); \(\determinant{A - t I}\) differs from it only by the factor \((-1)^n\).
\begin{example}\label{eigenexample}
\begin{align*}
A & = \begin{bmatrix}
5 & 7 \\ 11 & 3
\end{bmatrix} \\
p_A(t) & = \determinant{\begin{bmatrix}
5 - t & 7 \\ 11 & 3 -t
\end{bmatrix}} = (5 - t) \cdot (3 - t) - 7 \cdot 11 = t^2 - 8t - 62
\end{align*}
Note for a \(2 \times 2 \) matrix \( p_A(t) \) is always:
\begin{equation}
p_A(t) = t^2 - \trace(A)\, t + \determinant{A}
\end{equation}
\end{example}
\begin{matlab}
\apilink{charpoly}{https://www.mathworks.com/help/symbolic/sym.charpoly.html}
\begin{lstlisting}
>> charpoly([5, 7 ; 11, 3])
ans =
1 -8 -62
\end{lstlisting}
\end{matlab}
If \(A\) is plugged into \(p_A(t)\), the result is the zero matrix (Cayley–Hamilton theorem). Writing \(b_1, \dots, b_n\) for the non-leading coefficients of \(p_A\):
\begin{equation}
p_A(A) = A^n + b_1 A^{n-1} + \cdots + b_{n-1} A + b_n I = 0
\end{equation}
\begin{example}
From previous example.
\begin{gather*}
p_A(t) = t^2 - 8t - 62 \\
p_A(A) = \begin{bmatrix}
5 & 7 \\ 11 & 3
\end{bmatrix}^2 - 8\begin{bmatrix}
5 & 7 \\ 11 & 3
\end{bmatrix} - 62 I \\
= \begin{bmatrix}
102 & 56 \\ 88 & 86
\end{bmatrix} - \begin{bmatrix}
40 & 56 \\ 88 & 24
\end{bmatrix} - \begin{bmatrix}
62 & 0 \\ 0 & 62
\end{bmatrix} = \begin{bmatrix}
0 & 0 \\ 0 & 0
\end{bmatrix}
\end{gather*}
\end{example}
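The same computation can be repeated numerically. The following sketch uses the built-in functions poly (coefficients of the characteristic polynomial) and polyvalm (evaluation of a polynomial at a matrix); up to rounding the result is the zero matrix:
\begin{matlab}
\begin{lstlisting}
A = [5, 7; 11, 3];
p = poly(A)       % [1 -8 -62], the coefficients of p_A(t)
polyvalm(p, A)    % A^2 - 8A - 62I, approximately zeros(2)
\end{lstlisting}
\end{matlab}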
\subsubsection{Characteristic equation}
The roots of the characteristic polynomial are the eigenvalues \(\lambda_i \) of \(A\). The expression
\begin{equation}
p_A(t) = 0\\
\end{equation}
is called the characteristic equation. The characteristic polynomial can be written as:
\begin{equation}
p_A(t) = (t - \lambda_1)(t - \lambda_2) \cdots (t - \lambda_n)
\end{equation}
\begin{example}\label{eigenexample2}
\begin{align*}
A & = \begin{bmatrix}
3 & 7 \\ 2 & 5
\end{bmatrix} \\ \\
p_A(t) & = \det(A - t I) = \determinant{\begin{bmatrix}
3 - t & 7 \\2 & 5 - t
\end{bmatrix}} = t^2 - 8 t + 1 \\
\end{align*}
We get the eigenvalues by setting \(p_A(\lambda) = 0\) and solving for \( \lambda \)
\begin{align*}
\lambda^2 - 8 \lambda + 1 = 0 \\
\lambda_{12} = -\frac{-8}{2} \pm \sqrt{\left( \frac{-8}{2}\right)^2 -1} \\
\lambda_1 = 4 - \sqrt{15}, \lambda_2 = 4 + \sqrt{15} \\
\end{align*}
\end{example}
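Numerically the eigenvalues are just the roots of the characteristic polynomial. A quick check (a sketch, not part of the derivation) with the built-in roots:
\begin{matlab}
\begin{lstlisting}
lambda = roots([1, -8, 1])   % roots of t^2 - 8t + 1
% lambda ~ 7.8730 and 0.1270, i.e. 4 + sqrt(15) and 4 - sqrt(15)
\end{lstlisting}
\end{matlab}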
\subsubsection{Arithmetic multiplicity}
A matrix can have multiple eigenvalues $\lambda_i$ with the same value.
The characteristic polynomial can be written as:
\begin{align*}
p_A(t) = (t-\lambda_1)(t-\lambda_2) \dots (t-\lambda_n)
\end{align*}
The arithmetic multiplicity $\mu_A(\lambda_i)$ is the highest power of $(t - \lambda_i)$ that divides $p_A(t)$ (simply put, the number of times the eigenvalue appears in the factorization).
\begin{example}
\begin{align*}
A = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 \\
\end{bmatrix}
\end{align*}
A has 4 distinct eigenvalues 1, 2, 3, 4; counted with multiplicity they are $\lambda_{1}, \dots, \lambda_{10}$.\\
The characteristic polynomial can be expressed by using only distinct eigenvalues:
\begin{align*}
p_A(t) = (t -1)(t-2)^2(t-3)^3(t-4)^4
\end{align*}
\end{example}
For example $\mu_A(4) = 4$, because $(t - 4)^4$ is the highest power of $(t - 4)$ that divides $p_A(t)$.
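The arithmetic multiplicities can also be counted numerically by collecting equal eigenvalues. The following MATLAB sketch assumes the eigenvalues come out exact enough that equal values compare equal (for general matrices one would round first):
\begin{matlab}
\begin{lstlisting}
A = diag([1, 2, 2, 3, 3, 3, 4, 4, 4, 4]);
lambda = eig(A);
[vals, ~, idx] = unique(lambda);
mu = accumarray(idx, 1);
[vals, mu]   % distinct eigenvalues 1 2 3 4 with multiplicities 1 2 3 4
\end{lstlisting}
\end{matlab}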
\subsubsection{Eigenvectors, eigenspace}
To find the eigenvectors associated with an eigenvalue \(\lambda_i\) we need to find the kernel of the following linear map:
\begin{align*}
L: x \mapsto (A - \lambda_i \cdot I) x \\
\epsilon_i = \ker L
\end{align*}
Since the kernel of a linear transformation forms a vector space, \( \epsilon_i \) is a vector space, called the \textbf{eigenspace}. So the following properties are satisfied:
\begin{align*}
v_1, v_2 \in \epsilon_i, c \in \mathbb{F} \\
v_1 + v_2 \in \epsilon_i \\
c \cdot v_1 \in \epsilon_i \\
\end{align*}
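These closure properties can be checked numerically for the matrix from the earlier example (a quick sketch, not a proof):
\begin{matlab}
\begin{lstlisting}
A = [5, 1; 0, 3];        % eigenvalue 5 with eigenvector [1; 0]
v = [1; 0];  w = 3*v;    % a scalar multiple stays in the eigenspace
norm(A*(v + w) - 5*(v + w))   % ~ 0, so v + w is again an eigenvector for 5
\end{lstlisting}
\end{matlab}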
\begin{example}
Continuing example \ref{eigenexample2}.
\begin{align*}
A & = \begin{bmatrix}
3 & 7 \\ 2 & 5
\end{bmatrix} \\ \\
\lambda_1 = 4 - \sqrt{15}, \lambda_2 = 4 + \sqrt{15} \\
\end{align*}
\(\lambda_1\):
\begin{align*}
\begin{bmatrix}
3 - (4 - \sqrt{15}) & 7 \\ 2 & 5 - ( 4 - \sqrt{15})
\end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}= \begin{bmatrix}
0 \\ 0
\end{bmatrix} \\
\begin{bmatrix}
-1 + \sqrt{15} & 7 \\ 2 & 1 + \sqrt{15}
\end{bmatrix}\begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}= \begin{bmatrix}
0 \\ 0
\end{bmatrix} \\
\end{align*}
We can eliminate row II by subtracting row I multiplied by $ \frac{2}{-1 + \sqrt{15}} $: \\
\begin{align*}
\begin{bmatrix}
-1 + \sqrt{15} & 7 \\ 2 & 1 + \sqrt{15}
\end{bmatrix} \rightarrow
\begin{bmatrix}
-1 + \sqrt{15} & 7 \\ 2 - 2 & (1 + \sqrt{15})- \left(\frac{14}{-1 + \sqrt{15}} \right)
\end{bmatrix} = \begin{bmatrix}
-1 + \sqrt{15} & 7 \\ 0 & 0
\end{bmatrix}
\end{align*}
Since the last row was eliminated, we see that $\operatorname{rank}(A- \lambda I)$ is 1. This means $x_1$ or $x_2$ can be chosen freely.
Keep in mind we are interested only in the 'form' of the eigenvector, because an eigenvector of $A$ multiplied by a scalar is still an eigenvector of $A$.
\begin{align*}
0 & = (-1 + \sqrt{15})x + 7y \\
y & = \frac{(1 - \sqrt{15})x}{7}
\end{align*}
We can eliminate the fraction by setting $x=7$.
\begin{align*}
x & = 7 \\
y & = \frac{(1 - \sqrt{15})7}{7} = 1 - \sqrt{15} \\
v_1 & = \begin{bmatrix}
7 \\ 1 - \sqrt{15}
\end{bmatrix}
\end{align*}
Same for $\lambda_2$:
\begin{align*}
\begin{bmatrix}
3 - (4 + \sqrt{15}) & 7 \\
2 & 5 - (4 + \sqrt{15})
\end{bmatrix} \begin{bmatrix}
x_1 \\ x_2
\end{bmatrix} = \begin{bmatrix}
0 \\ 0
\end{bmatrix}
\end{align*}
\begin{align*}
\begin{bmatrix}
-1 - \sqrt{15} & 7 \\
2 & 1 - \sqrt{15}
\end{bmatrix} \begin{bmatrix}
x_1 \\ x_2
\end{bmatrix} = \begin{bmatrix}
0 \\ 0
\end{bmatrix}
\end{align*}
We can eliminate row II by subtracting row I multiplied by $ \frac{2}{-1 - \sqrt{15}} $: \\
\begin{align*}
\begin{bmatrix}
-1 - \sqrt{15} & 7 \\
0 & 0
\end{bmatrix} \begin{bmatrix}
x_1 \\ x_2
\end{bmatrix} = \begin{bmatrix}
0 \\ 0
\end{bmatrix} \\
\end{align*}
Solving for $x_1$, $x_2$:
\begin{align*}
(-1-\sqrt{15}) x_1 + 7 x_2 & = 0 \\
x_2 & = \frac{(1 + \sqrt{15})x_1}{7} \\
x_1 & = 7 \text{ (chosen)} \\
x_2 & = 1 + \sqrt{15} \\
v_2 & = \begin{bmatrix}
7 \\ 1 + \sqrt{15}
\end{bmatrix}
\end{align*}
\end{example}
\begin{matlab}
\apilink{eig}{https://www.mathworks.com/help/matlab/ref/eig.html}
\begin{lstlisting}
>> [a, d] = eig([3, 7; 11, 3])
a =
0.6236 -0.6236
0.7817 0.7817
d =
11.7750 0
0 -5.7750
\end{lstlisting}
\end{matlab}
\subsubsection{Geometric multiplicity}
The geometric multiplicity \(\gamma_A \) of an eigenvalue is the dimension of the associated eigenspace.
\begin{equation}
\gamma_A(\lambda_i) = \dim \ker\left(A - I \lambda_i \right)
\end{equation}
The geometric multiplicity of an eigenvalue can't be larger than the arithmetic multiplicity.
\begin{equation}
\gamma_A(\lambda_i) \leq \mu_A(\lambda_i)
\end{equation}
For a square matrix \(A^{n \times n}\) with \(m\) distinct eigenvalues (and a characteristic polynomial that splits into linear factors) it holds:
\begin{equation}
\sum_{i=1}^{m} \mu_A(\lambda_i) = n, \qquad \sum_{i=1}^{m} \gamma_A(\lambda_i) \leq n
\end{equation}
\begin{example}
\begin{align*}
A = \begin{bmatrix}
2 & 0 & 0 & 0 \\ 0 & 0 & 0 &0 \\ 0 & 0 &0 &0 \\ 0 & 0 & 1 & 0
\end{bmatrix} \\
p_A(t) = t^3(t- 2) \\
\lambda_1 = 2, \lambda_2 = 0 \\
\mu_A(2) = 1, \mu_A(0) = 3, \\
\epsilon_1 = \span \{ \begin{bmatrix}
2 \\ 0 \\ 0 \\ 0
\end{bmatrix} \} \\
\epsilon_2 = \span \{ \begin{bmatrix}
0 \\ 1 \\ 0 \\ 0
\end{bmatrix}, \begin{bmatrix}
0 \\ 0 \\ 0 \\ 1
\end{bmatrix}\} \\
\gamma_A(2) = \dim \epsilon_1 = 1 \\
\gamma_A(0) = \dim \epsilon_2 = 2
\end{align*}
\end{example}
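Numerically, the geometric multiplicity is the dimension of \(\ker(A - \lambda_i I)\), which can be checked with \texttt{null} (a sketch):
\begin{matlab}
\apilink{null}{https://www.mathworks.com/help/matlab/ref/null.html}
\begin{lstlisting}
>> A = [2 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 1 0];
>> size(null(A - 2*eye(4)), 2)   % gamma_A(2)
ans =
     1
>> size(null(A - 0*eye(4)), 2)   % gamma_A(0)
ans =
     2
\end{lstlisting}
\end{matlab}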
\subsection{Similarity}\label{similiarity}
Two square matrices $A$ and $B$ are similar if and only if there exists an invertible $n \times n$ matrix $P$ such that:
\begin{gather}
A = P^{-1}BP \\
B = PAP^{-1}
\end{gather}
It is denoted as
\begin{equation*}
A \tilde{=} B
\end{equation*}
$P$ is also called the change of basis matrix.
Similar matrices have the same:
\begin{itemize}
\item Characteristic polynomial
\item Eigenvalues (but not eigenvectors)
\item Determinant
\item Trace
\end{itemize}
Similarity is an equivalence relation
\begin{itemize}
\item A is similar to A
\item If A is similar to B, then B is similar to A.
\item If A is similar to B and B is similar to C, then A is similar to C.
\end{itemize}
\begin{example}
\begin{align*}
B = \begin{bmatrix}
2 & 3 \\ 0 & 4
\end{bmatrix}, A = \begin{bmatrix}
3 & 4 \\ \frac{1}{4} & 3
\end{bmatrix}, P = \begin{bmatrix}
3 & 0 \\ 1 & 4
\end{bmatrix}, P^{-1} = \begin{bmatrix}
\frac{1}{3} & 0 \\ -\frac{1}{12} & \frac{1}{4}
\end{bmatrix}
\end{align*}
\begin{align*}
P^{-1}BP = \begin{bmatrix}
\frac{1}{3} & 0 \\ -\frac{1}{12} & \frac{1}{4}
\end{bmatrix}
\begin{bmatrix}
2 & 3 \\ 0 & 4
\end{bmatrix}
\begin{bmatrix}
3 & 0 \\ 1 & 4
\end{bmatrix} = \begin{bmatrix}
\frac{2}{3} & 1 \\ -\frac{1}{6} & \frac{3}{4}
\end{bmatrix}\begin{bmatrix}
3 & 0 \\ 1 & 4
\end{bmatrix} = \begin{bmatrix}
3 & 4 \\ \frac{1}{4} & 3
\end{bmatrix}
\end{align*}
\begin{align*}
\det(B) & = 2 \cdot 4 - 3 \cdot 0 = 8 \\
\det(A) & = 3 \cdot 3 - 4 \cdot \frac{1}{4} = 8 \\
tr(A) & = 3 + 3 = 6 \\
tr(B) & = 2 +4 = 6 \\
p_B(t) & = (2 - t)(4 - t) - 3 \cdot 0 = t^2 - 6t + 8 \\
p_A(t) & = (3 - t)(3 - t) - 4 \cdot \frac{1}{4} = t^2 - 6t + 8 \\
\end{align*}
\end{example}
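These invariants are easy to verify numerically for the matrices above (a sketch):
\begin{matlab}
\begin{lstlisting}
>> B = [2 3; 0 4]; P = [3 0; 1 4];
>> A = P\B*P;                     % A = inv(P)*B*P
>> trace(A) - trace(B)            % ~0
>> det(A) - det(B)                % ~0
>> sort(eig(A)) - sort(eig(B))    % ~[0; 0]
\end{lstlisting}
\end{matlab}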
\subsection{Defective matrices}
If there is one eigenvalue \(\lambda_i\) with \(\mu_A(\lambda_i) \neq \gamma_A(\lambda_i) \) then the corresponding matrix \(A\) is defective:
\begin{itemize}
\item The matrix has fewer than \(n\) linearly independent eigenvectors
\item The sum of the dimensions of the eigenspaces is less than \( n\)
\end{itemize}
The eigenvalue is called defective eigenvalue.
\begin{example}
\begin{gather*}
A = \begin{bmatrix}
1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 4
\end{bmatrix}\\
\lambda_1 = 1, \lambda_2 = 4\\
v_1 = \begin{bmatrix}
1 \\ 0 \\ 0
\end{bmatrix}, v_2 = \begin{bmatrix}
\frac{1}{9} \\ \frac{1}{3} \\ 1
\end{bmatrix}\\
\mu_A(\lambda_1) = 2 \neq \gamma_A(\lambda_1) = 1 \\
\end{gather*}
\end{example}
Defective matrices can't be diagonalized.
\subsection{Generalized eigenvectors}
Let \(L : V \rightarrow V\) with a defective transformation matrix \(A\). A generalized eigenvector \(w\) of a defective eigenvalue is the solution of:
\begin{equation}
\begin{split}
\left(A - \lambda I \right)^m w = 0\\
\left(A - \lambda I \right)^{m-1} w \neq 0 \\
m > 1, m \in \mathbb{N}\\
\end{split}
\end{equation}
\(m\) is called the rank of the generalized eigenvector.
\subsubsection{Jordan chain}
Let \(v\) be an ordinary eigenvector of \(A\):
\begin{align*}
(A - \lambda I)v = 0 \\
(A - \lambda I)w_1 = v \\
(A - \lambda I)w_2 = w_1 \\
(A - \lambda I)w_3 = w_2 \\
\vdots \\
(A - \lambda I)w_n = w_{n-1} \\
\end{align*}
\begin{example}
\begin{align*}
A = \begin{bmatrix}
3 & 2 & 0 \\ 0 & 3 & 4 \\ 0 & 0 & 3
\end{bmatrix}
\end{align*}
The matrix has a single eigenvalue \(\lambda = 3\) with \(\mu_A(3) = 3\) but only one linearly independent eigenvector \(v\).
\begin{align*}
A - I \lambda = \begin{bmatrix}
0 & 2 & 0 \\ 0 & 0 & 4 \\ 0 & 0 & 0
\end{bmatrix} \\
\begin{bmatrix}
0 & 2 & 0 \\ 0 & 0 & 4 \\ 0 & 0 & 0
\end{bmatrix} v = \begin{bmatrix}
0 \\ 0 \\ 0
\end{bmatrix}, v = \begin{bmatrix}
1 \\ 0 \\ 0
\end{bmatrix} \\
\begin{bmatrix}
0 & 2 & 0 \\ 0 & 0 & 4 \\ 0 & 0 & 0
\end{bmatrix} w_1 = \begin{bmatrix}
1 \\ 0 \\ 0
\end{bmatrix}, w_1 = \begin{bmatrix}
0 \\ \frac{1}{2} \\ 0
\end{bmatrix}
\end{align*}
\end{example}
\subsection{Shift matrix}
A shift matrix is a square matrix that has 1 on its superdiagonal and 0 elsewhere.
\begin{equation*}
S = \begin{bmatrix}
0 & 1 & & & \\
& 0 & 1 & & \\
& & \ddots & \ddots & \\
& & & 0 & 1 \\
& & & & 0 \\
\end{bmatrix}
\end{equation*}
When another matrix \(A\) is multiplied by \(S\) from the right, the columns of \(A\) are shifted by one to the right.
\begin{align*}
A & = \begin{bmatrix}
c_1 & c_2 & \cdots & c_{n -1} & c_n
\end{bmatrix} \\
A \cdot S & = \begin{bmatrix}
0 & c_1 & \cdots & c_{n -2} & c_{n - 1}
\end{bmatrix}
\end{align*}
\begin{example}
\begin{gather*}
\begin{bmatrix}
1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9
\end{bmatrix} \cdot \begin{bmatrix}
0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0
\end{bmatrix} = \begin{bmatrix}
0 & 1 & 2 \\ 0 & 4 & 5 \\ 0 & 7 & 8
\end{bmatrix}
\end{gather*}
\end{example}
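In MATLAB a shift matrix can be built with \texttt{diag} by placing ones on the first superdiagonal (a sketch):
\begin{matlab}
\apilink{diag}{https://www.mathworks.com/help/matlab/ref/diag.html}
\begin{lstlisting}
>> S = diag(ones(2,1), 1)    % 3x3 shift matrix
S =
     0     1     0
     0     0     1
     0     0     0
>> A = [1 2 3; 4 5 6; 7 8 9];
>> A*S                       % columns of A shifted one to the right
ans =
     0     1     2
     0     4     5
     0     7     8
\end{lstlisting}
\end{matlab}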
\subsection{Nilpotent matrix}
A matrix is nilpotent of degree k if
\begin{gather*}
A^i \neq 0 \\
A^k = 0 \\
0 \leq i < k
\end{gather*}
\begin{example}
Let \(A\) be a matrix containing only zeros except on its superdiagonal. \(A\) is nilpotent of degree \( k +1 \), where \(k\) is the number of nonzero elements. An example
would be the shift matrix:
\begin{align*}
S^{1} = \begin{bmatrix}
0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0
\end{bmatrix} \\
S^{2} = \begin{bmatrix}
0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0
\end{bmatrix} \\
S^{3} = \begin{bmatrix}
0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0
\end{bmatrix}
\end{align*}
\end{example}
\begin{example}
Only the zero matrix is nilpotent with degree 1.
\end{example}
\begin{example}
A diagonal matrix (other than the zero matrix) is never nilpotent.
\begin{align*}
D = \begin{bmatrix}
2 & 0 \\ 0 & 4
\end{bmatrix} \\
D^n = \begin{bmatrix}
2^{n} & 0 \\ 0 & 4^{n}
\end{bmatrix}
\end{align*}
\end{example}
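The degree of nilpotency can be checked by raising the matrix to successive powers (a sketch):
\begin{matlab}
\begin{lstlisting}
>> S = diag(ones(2,1), 1);
>> nnz(S^2)   % one nonzero entry left
ans =
     1
>> nnz(S^3)   % zero matrix, so S is nilpotent of degree 3
ans =
     0
\end{lstlisting}
\end{matlab}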
\subsection{Jordan normal form}
\subsubsection{Jordan Block}
A Jordan block is a square matrix with the same value \(\lambda\) for each element on its main diagonal and 1 on its superdiagonal. The other elements are 0.
\begin{equation*}
B_{\lambda} = \begin{bmatrix}
\lambda & 1 & & & \\
& \lambda & 1 & & \\
& & \ddots & \ddots & \\
& & & \lambda & 1 \\
& & & & \lambda \\
\end{bmatrix}
\end{equation*}
\subsubsection{Jordan box}
Let \(\lambda\) be an eigenvalue of \(A\) with \(\mu_A(\lambda) = n\) and \( \gamma_A(\lambda) = m \).
A Jordan box is the direct sum of
\begin{equation}
J_{\lambda} = D_{\lambda} \oplus B_{\lambda}
\end{equation}
where:
\begin{gather*}
\dim J_{\lambda} = n \\
\dim D_{\lambda} = m - 1 \\
\dim B_{\lambda} = n - m + 1 \\
\end{gather*}
\begin{example}
\begin{gather*}
\mu(\lambda) = 4, \gamma(\lambda) = 2 \\
D_{\lambda} = \begin{bmatrix}
\lambda
\end{bmatrix}
B_{\lambda} = \begin{bmatrix}
\lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda
\end{bmatrix} \\
J_{\lambda} = \begin{bmatrix}
\lambda & 0 & 0 & 0 \\
0 & \lambda & 1 & 0 \\
0 & 0 & \lambda & 1 \\
0 & 0 & 0 & \lambda \\
\end{bmatrix}
\end{gather*}
\end{example}
\begin{example}
\begin{gather*}
\mu(\lambda) = 4, \gamma(\lambda) = 3 \\
D_{\lambda} = \begin{bmatrix}
\lambda & 0 \\ 0 & \lambda
\end{bmatrix}
B_{\lambda} = \begin{bmatrix}
\lambda & 1 \\ 0 & \lambda
\end{bmatrix} \\
J_{\lambda} = \begin{bmatrix}
\lambda & 0 & 0 & 0 \\
0 & \lambda & 0 & 0 \\
0 & 0 & \lambda & 1 \\
0 & 0 & 0 & \lambda \\
\end{bmatrix}
\end{gather*}
\end{example}
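If the Symbolic Math Toolbox is available, the Jordan normal form can be computed with \texttt{jordan} (a sketch; the toolbox requirement is an assumption of this example):
\begin{matlab}
\begin{lstlisting}
>> A = [3 2 0; 0 3 4; 0 0 3];
>> [V, J] = jordan(A);   % requires the Symbolic Math Toolbox
>> J                     % a single 3x3 Jordan block for lambda = 3
J =
     3     1     0
     0     3     1
     0     0     3
\end{lstlisting}
\end{matlab}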
\section{Operations}
\subsection{Transposing}
Transpose of a matrix \( A \) is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix A by producing another matrix, often denoted by $A^T$.
\begin{example}
\begin{align*}
\begin{bmatrix}
1 & 2 & 3
\end{bmatrix}^T = \begin{bmatrix}
1 \\
2 \\
3 \\
\end{bmatrix}
\end{align*}
\end{example}
\begin{example}
\begin{align*}
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9 \\
\end{bmatrix}^T = \begin{bmatrix}
1 & 4 & 7 \\
2 & 5 & 8 \\
3 & 6 & 9 \\
\end{bmatrix}
\end{align*}
\end{example}
Notice the diagonal elements do not get swapped by transposing. So for any diagonal matrix $D$ it holds that $D=D^T$.
\begin{matlab}
\apilink{transpose}{https://www.mathworks.com/help/matlab/ref/transpose.html}
\begin{lstlisting}
A = [1,2,3;4,5,6]
transpose(A)
ans =
1 4
2 5
3 6
\end{lstlisting}
\end{matlab}
\subsection{Direct sum}
The direct sum of two matrices \(A^{a \times b}\) and \(B^{c \times d}\) is defined as
\begin{equation}
\begin{split}
C = A \oplus B = \begin{bmatrix}
A & N_1 \\ N_2 & B
\end{bmatrix} \\
\dim C = (a + c) \times (b + d)
\end{split}
\end{equation}
\(N_1\) and \(N_2\) are zero matrices with dimensions \(a \times d\) and \(c \times b\), respectively.
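MATLAB provides this construction directly as \texttt{blkdiag}:
\begin{matlab}
\apilink{blkdiag}{https://www.mathworks.com/help/matlab/ref/blkdiag.html}
\begin{lstlisting}
>> A = [1 2; 3 4];
>> B = [5 6 7];
>> blkdiag(A, B)   % A oplus B, size (2+1) x (2+3)
ans =
     1     2     0     0     0
     3     4     0     0     0
     0     0     5     6     7
\end{lstlisting}
\end{matlab}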
\subsection{Diagonalisation}
A matrix $A$ is diagonalizable if $A$ is similar (see \ref{similiarity}) to a diagonal matrix $D$.
\begin{equation*}
D = U^{-1}AU \\
\end{equation*}
\subsubsection{Eigendecomposition}
A matrix can be diagonalized using its eigenvalues and eigenvectors.
$D$ is a diagonal matrix containing the eigenvalues $\lambda_i$ on its main diagonal.
The basis vectors of the eigenspaces $\epsilon_i$ together form a basis called the \textbf{eigenbasis} (when the arithmetic multiplicity of an eigenvalue is 1, $\epsilon_i$ is spanned by a single eigenvector).
So the change of basis matrix $U$ has these eigenvectors as its columns.
\begin{equation}
A = U D U^{-1} \\
\end{equation}
\begin{equation}
U = \begin{bmatrix}
v_1 & v_2 \cdots v_n
\end{bmatrix}
\end{equation}
\begin{example}
\begin{equation*}
A = \begin{bmatrix}
3 & 0 & 1 \\ 0 & 2 & 0 \\ 5 & 0 & -1
\end{bmatrix}
\end{equation*}
The eigenvalues $\lambda_i$ and eigenvectors $v_i$ are:
\begin{gather*}
\lambda_1 = 4, v_1 = \begin{bmatrix}
1 \\ 0 \\ 1
\end{bmatrix} \\
\lambda_2 = -2, v_2 = \begin{bmatrix}
-\frac{1}{5} \\ 0 \\ 1
\end{bmatrix} \\
\lambda_3 = 2, v_3 = \begin{bmatrix}
0 \\ 1 \\ 0
\end{bmatrix} \\
\end{gather*}
We can construct U and D. Keep in mind that the order of the eigenvalues in the diagonal of $D$ must match the order of the eigenvector columns in $U$ (and $U^{-1}$).
\begin{align*}
D & = \begin{bmatrix}
4 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 2
\end{bmatrix} \\
U & = \begin{bmatrix}
1 & -\frac{1}{5} & 0 \\
0 & 0 & 1 \\
1 & 1 & 0 \\
\end{bmatrix} \\
U^{-1} & = \begin{bmatrix}
\frac{5}{6} & 0 & \frac{1}{6} \\
- \frac{5}{6} & 0 & \frac{5}{6} \\
0 & 1 & 0
\end{bmatrix}
\end{align*}
The diagonalized A is:
\begin{equation*}
A = \begin{bmatrix}
1 & -\frac{1}{5} & 0 \\
0 & 0 & 1 \\
1 & 1 & 0 \\
\end{bmatrix} \begin{bmatrix}
4 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 2
\end{bmatrix}
\begin{bmatrix}
\frac{5}{6} & 0 & \frac{1}{6} \\
- \frac{5}{6} & 0 & \frac{5}{6} \\
0 & 1 & 0
\end{bmatrix} \end{equation*}
\begin{matlab}
\begin{lstlisting}
>> a = [3,0,1;0,2,0;5,0,-1]
>> [v, d] = eig(a)
v =
0.7071 -0.1961 0
0 0 1.0000
0.7071 0.9806 0
d =
4 0 0
0 -2 0
0 0 2
>> v*d*inv(v)
ans =
3.0000 0 1.0000
0 2.0000 0
5.0000 0 -1.0000
\end{lstlisting}
\end{matlab}
\end{example}
\subsection{Raising a matrix to the nth power using diagonalisation}
Using the definition of matrix multiplication, squaring a matrix once takes \(O(n^3)\) computation steps. Raising a matrix to the \(m\)th power would therefore take \(O(m \cdot n^3)\) steps.
For any diagonal matrix it holds:
\begin{equation}
D^{m} = \begin{bmatrix}
x_{11}^{m} & & & \\
& x_{22}^{m} & & \\
& & \ddots & \\
& & & x_{nn}^{m}
\end{bmatrix}
\end{equation}
Using diagonalisation a more efficient calculation can be achieved:
\begin{gather*}
A = UDU^{-1} \\
A^2 = A \cdot A = UD \cancel{U^{-1} U} DU^{-1} = UD^2U^{-1} \\
A^3 = A^2 \cdot A = UD^2 \cancel{U^{-1} U} D U^{-1} = U D^3 U^{-1} \\
\vdots \\
A^{n} = U D^n U^{-1}
\end{gather*}
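A quick numerical check of this identity (a sketch):
\begin{matlab}
\begin{lstlisting}
>> A = [3 0 1; 0 2 0; 5 0 -1];
>> [U, D] = eig(A);
>> norm(A^5 - U*D^5/U)   % close to zero (round-off only); /U means *inv(U)
\end{lstlisting}
\end{matlab}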
\subsection{Matrix exponential}\label{sec:matrixexponent}
The Taylor series of the exponential function is given as:
\begin{equation}
e^{x} = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} \cdots
\end{equation}
Using this definition you can define the matrix exponential:
\begin{equation}
e^{A} = \sum_{n=0}^{\infty} \frac{A^n}{n!} = I + A + \frac{A^2}{2} + \frac{A^3}{6} \cdots
\end{equation}
\subsubsection{Diagonal Case}
\begin{align*}
e^{D} & = I +
\begin{bmatrix}
x_{11} & & & \\
& x_{22} & & \\
& & \ddots & \\
& & & x_{nn}
\end{bmatrix} + \frac{1}{2}
\begin{bmatrix}
x_{11}^{2} & & & \\
& x_{22}^{2} & & \\
& & \ddots & \\
& & & x_{nn}^{2}
\end{bmatrix} + \frac{1}{6}
\begin{bmatrix}
x_{11}^{3} & & & \\
& x_{22}^{3} & & \\
& & \ddots & \\
& & & x_{nn}^{3}
\end{bmatrix} + \frac{1}{24}
\begin{bmatrix}
x_{11}^{4} & & & \\
& x_{22}^{4} & & \\
& & \ddots & \\
& & & x_{nn}^{4}
\end{bmatrix} + \cdots \\
& = I + \begin{bmatrix}
x_{11} & & & \\
& x_{22} & & \\
& & \ddots & \\
& & & x_{nn}
\end{bmatrix} +
\begin{bmatrix}
\frac{x_{11}^{2}}{2} & & & \\
& \frac{x_{22}^{2}}{2} & & \\
& & \ddots & \\
& & & \frac{x_{nn}^{2}}{2}
\end{bmatrix} +
\begin{bmatrix}
\frac{x_{11}^{3}}{6} & & & \\
& \frac{x_{22}^{3}}{6} & & \\
& & \ddots & \\
& & & \frac{x_{nn}^{3}}{6}
\end{bmatrix} +
\begin{bmatrix}
\frac{ x_{11}^{4}}{24} & & & \\
& \frac{x_{22}^{4}}{24} & & \\
& & \ddots & \\
& & & \frac{x_{nn}^{4}}{24}
\end{bmatrix} + \cdots \\
& = \begin{bmatrix}
\sum_{m=0}^{\infty} \frac{x_{11}^m}{m!} & & & \\
& \sum_{m=0}^{\infty} \frac{x_{22}^m}{m!} & & \\
& & \ddots & \\
& & & \sum_{m=0}^{\infty} \frac{x_{nn}^m}{m!}
\end{bmatrix} = \begin{bmatrix}
e^{x_{11}} & & & \\
& e^{x_{22} } & & \\
& & \ddots & \\
& & & e^{x_{nn}}
\end{bmatrix} \\
\end{align*}
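Numerically this means that \texttt{expm} of a diagonal matrix agrees with the elementwise exponential of its diagonal (a sketch):
\begin{matlab}
\begin{lstlisting}
>> D = diag([1 2 3]);
>> norm(expm(D) - diag(exp([1 2 3])))   % ~0
\end{lstlisting}
\end{matlab}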
\subsubsection{Diagonalizable case}
If \(A\) is diagonalizable with \(UDU^{-1}\) then:
\begin{align*}
e^{A} & = \sum_{n=0}^{\infty} \frac{U D^n U^{-1}}{n!} = U I U^{-1} + U D U^{-1} + \frac{U D^2 U^{-1}}{2} + \frac{U D^3 U^{-1}}{6} \cdots \\
& = U \left(\sum_{n=0}^{\infty} \frac{D^n}{n!}\right)U^{-1} = U e^D U^{-1} \\
& = U \begin{bmatrix}
e^{\lambda_1} & & & \\
& e^{\lambda_2 } & & \\
& & \ddots & \\
& & & e^{\lambda_n}
\end{bmatrix} U^{-1} \\
\end{align*}
\begin{example}
\begin{gather*}
A = \begin{bmatrix}
3 & -4 \\ -5 & -5 \\
\end{bmatrix} \\
\lambda_1 = -7, v_1 = \begin{bmatrix}
\frac{2}{5} \\ 1
\end{bmatrix} \\
\lambda_2 = 5, v_2 = \begin{bmatrix}
-2 \\ 1
\end{bmatrix} \\
e^A = \begin{bmatrix}
\frac{2}{5} & -2 \\ 1 & 1
\end{bmatrix}
\begin{bmatrix}
e^{-7} & 0 \\0 & e^5
\end{bmatrix}
\begin{bmatrix} \frac{5}{12} & \frac{5}{6} \\ \frac{-5}{12} & \frac{1}{6} \end{bmatrix}
\end{gather*}
\end{example}
\begin{matlab}
\apilink{expm}{https://www.mathworks.com/help/matlab/ref/expm.html}
\begin{lstlisting}
>> a = [3, -4; -5, -5]
>> expm(a)
ans =
123.6778 -49.4707
-61.8384 24.7363
\end{lstlisting}
\end{matlab}
\subsubsection{Commuting matrices}
If two matrices \(A\) and \(B\) commute (\(AB = BA\)) then
\begin{equation}
e^{A + B} = e^{A}e^{B}
\end{equation}
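A quick numerical check (a sketch; a multiple of the identity commutes with every matrix, while the second pair does not commute):
\begin{matlab}
\begin{lstlisting}
>> A = [1 2; 3 4]; B = 3*eye(2);          % A*B == B*A
>> norm(expm(A + B) - expm(A)*expm(B))    % ~0
>> C = [0 1; 0 0];                        % A*C ~= C*A
>> norm(expm(A + C) - expm(A)*expm(C))    % generally not ~0
\end{lstlisting}
\end{matlab}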
\subsubsection{Jordan block}
A Jordan block \(B_{\lambda}\) of size \(m\) can be separated into:
\begin{equation}
B_{\lambda} = D_{\lambda} + S
\end{equation}
where \(S\) is the shift matrix (the shift matrix is nilpotent of degree \(m\)).
\begin{align}
e^{B_\lambda} = e^{D_{\lambda} + S} = e^{D_{\lambda}} \cdot e^{S} \tag{Note \(D_{\lambda} \) and S commute}
\end{align}
\begin{align*}
& = e^{D_{\lambda}} \cdot e^{S} = e^{D_{\lambda}} \left( \sum_{n=0}^{m - 1} \frac{S^n}{n!} \right) = e^{D_{\lambda}} \left( I + S + \frac{S^2}{2} + \frac{S^3}{6} + \cdots \right) \\
& = e^{D_{\lambda}} \left(I + \begin{bmatrix} 0 & 1 & 0 & 0 & \cdots \\ 0 & 0 & 1 & 0 & \cdots \\ 0 & 0 & 0 & 1 & \cdots \\ \vdots & & & \end{bmatrix} +
\frac{1}{2} \begin{bmatrix} 0 & 0 & 1 & 0 & \cdots \\ 0 & 0 & 0 & 1 & \cdots \\ 0 & 0 & 0 & 0 & \cdots \\ \vdots & & & \end{bmatrix} + \cdots +
\frac{1}{(m-1)!} \begin{bmatrix} 0 & \cdots & 0 & 1 \\ 0 & \cdots & & 0 \\ \vdots & & & \end{bmatrix} \right) \\
& = e^{D_{\lambda}} \begin{bmatrix}
1 & 1 & \frac{1}{2} & \frac{1}{6} & \cdots & \frac{1}{(m -1)!} \\
0 & 1 & 1 & \frac{1}{2} & \cdots & \frac{1}{(m -2)!} \\
0 & 0 & 1 & 1 & \cdots & \frac{1}{(m -3)!} \\
\vdots & & &
\end{bmatrix} = \begin{bmatrix}
e^{\lambda} & e^{\lambda} & \frac{1}{2} e^{\lambda} & \frac{1}{6} e^{\lambda} & \cdots & \frac{1}{(m-1)!} e^{\lambda} \\
0 & e^{\lambda} & e^{\lambda} & \frac{1}{2} e^{\lambda} & \cdots & \frac{1}{(m-2)!} e^{\lambda} \\
0 & 0 & e^{\lambda} & e^{\lambda} & \cdots & \frac{1}{(m-3)!} e^{\lambda} \\
\vdots & & &
\end{bmatrix}
\end{align*}
\begin{example}
\begin{align*}
B_2 = \begin{bmatrix}
2 & 1 & 0 & 0 \\
0 & 2 & 1 & 0 \\
0 & 0 & 2 & 1 \\
0 & 0 & 0 & 2 \\
\end{bmatrix} \\
e^{B_2} = \begin{bmatrix}
e^{2} & e^{2} & \frac{1}{2} e^{2} & \frac{1}{6} e^{2} \\
0 & e^{2} & e^{2} & \frac{1}{2} e^{2} \\
0 & 0 & e^{2} & e^{2} \\
0 & 0 & 0 & e^{2} \\
\end{bmatrix}
\end{align*}
\end{example} |
function v=dlyapsq(a,b)
% Solves the discrete Lyapunov equation AV'VA' - V'V +BB' =0
% V is upper triangular with real non-negative diagonal entries
% this is equivalent to v=chol(dlyap(a,b*b')) but better conditioned numerically
% Copyright (C) Mike Brookes 2002
% Version: $Id: dlyapsq.m,v 1.4 2007/05/04 07:01:38 dmb Exp $
%
% VOICEBOX is a MATLAB toolbox for speech processing.
% Home page: http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This program is free software; you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published by
% the Free Software Foundation; either version 2 of the License, or
% (at your option) any later version.
%
% This program is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You can obtain a copy of the GNU General Public License from
% http://www.gnu.org/copyleft/gpl.html or by writing to
% Free Software Foundation, Inc.,675 Mass Ave, Cambridge, MA 02139, USA.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[q,s]=schur(a');
[q,s]=rsf2csf(q,s);
[qd,r]=qr(b'*q,0);
% save r for testing
r0=r;
[m,n]=size(r);
u=zeros(n,n);
if m==1
for i=1:n-1
in=i+1:n;
si=s(i,i);
aa=sqrt(1-si*si');
u(i,i)=r(1)/aa;
u(i,in)=(u(i,i)*si'*s(i,in)+aa*r(2:end))/(eye(n-i)-si'*s(in,in));
r=aa*(u(i,i)*s(i,in)+u(i,in)*s(in,in))-si*r(2:end);
end
u(n,n)=r/sqrt(1-s(n,n)*s(n,n)');
else
w=zeros(m,1); w(m)=1;
em=eye(m);
for i=1:n-m
in=i+1:n;
si=s(i,i);
aa=sqrt(1-si*si');
u(i,i)=r(1,1)/aa;
u(i,in)=(u(i,i)*si'*s(i,in)+aa*r(1,2:end))/(eye(n-i)-si'*s(in,in));
vv=aa*(u(i,i)*s(i,in)+u(i,in)*s(in,in))-si*r(1,2:end);
rr=zeros(m,n-i);
rr(1:m-1,:)=r(2:end,2:end);
[qq,r]=qrupdate(em,rr,w,vv');
end
for i=n-m+1:n-1
in=i+1:n;
si=s(i,i);
aa=sqrt(1-si*si');
u(i,i)=r(1,1)/aa;
u(i,in)=(u(i,i)*si'*s(i,in)+aa*r(1,2:end))/(eye(n-i)-si'*s(in,in));
vv=aa*(u(i,i)*s(i,in)+u(i,in)*s(in,in))-si*r(1,2:end);
rr=zeros(n-i+1,n-i);
rr(1:n-i,:)=r(2:end,2:end);
[qq,rr]=qrupdate(eye(n-i+1),rr,w(m-n+i:end),vv');
r=rr(1:n-i,:);
end
u(n,n)=r/sqrt(1-s(n,n)*s(n,n)');
end
v=triu(qr(u*q'));
dv=diag(v);
ix=dv~=0;
v(ix,:)=diag(abs(dv(ix))./dv(ix))*v(ix,:);
if isreal(a) & isreal(b)
v=real(v);
end
|
module Replica.Option.Types
import Data.List
import public Data.List1
import public Data.List.AtIndex
import Data.Maybe
import public Data.OpenUnion
import Replica.Help
import public Replica.Other.Free
%default total
export
prefixLongOption : String -> String
prefixLongOption = ("--" <+>)
export
prefixShortOption : Char -> String
prefixShortOption x = pack ['-',x]
public export
record Value a where
constructor MkValue
name : String
parser : String -> Maybe a
export
Functor Value where
map func x = MkValue x.name (map func . x.parser)
public export
record Mod a where
constructor MkMod
longNames : List1 String
shortNames : List Char
param : Either a (Value a)
description : String
export
Functor Mod where
map func x = MkMod x.longNames x.shortNames
(bimap func (map func) x.param)
x.description
public export
record Option b a where
constructor MkOption
mods : List1 (Mod a)
defaultValue : a
setter : a -> b -> Either String b
export
embedOption : (c -> b) -> (b -> c -> c) -> Option b a -> Option c a
embedOption f g x = MkOption x.mods x.defaultValue (embed f g x.setter)
where
embed : (c -> b) -> (b -> c -> c) -> (a -> b -> Either String b) -> a -> c -> Either String c
embed unwrap wrap set p w = flip wrap w <$> set p (unwrap w)
namespace Param
public export
record Param b a where
constructor MkParam
name : String
parser : String -> Maybe a
setter : a -> b -> Either String b
export
embedParam : (c -> b) -> (b -> c -> c) -> Param b a -> Param c a
embedParam f g x = MkParam x.name x.parser (embed f g x.setter)
where
embed : (c -> b) -> (b -> c -> c) -> (a -> b -> Either String b) -> a -> c -> Either String c
embed unwrap wrap set p w = flip wrap w <$> set p (unwrap w)
namespace Parts
public export
Part : Type -> Type -> Type
Part b a = Union (\p => p b a) [Param, Option]
export
embedPart : (c -> b) -> (b -> c -> c) -> Part b a -> Part c a
embedPart get set x = let
Left x1 = decomp x
| Right v => inj $ embedParam get set v
v = decomp0 x1
in inj $ embedOption get set v
public export
OptParse : Type -> Type -> Type
OptParse = Ap . Part
export
embed : (c -> b) -> (b -> c -> c) -> OptParse b a -> OptParse c a
embed get set (Pure x) = Pure x
embed get set (MkAp x y) = MkAp (embedPart get set x) $ embed get set y
namespace Parser
Parser : (a : Type) -> Type
Parser a = List String -> Maybe (List String, a)
modParser : Mod a -> Parser a
modParser m [] = Nothing
modParser m (x::xs) = let
validOption = map prefixLongOption (forget m.longNames)
++ map prefixShortOption m.shortNames
in do
guard $ x `elem` validOption
let Right v = m.param
| Left r => pure (xs, r)
case xs of
[] => Nothing
(y::ys) => MkPair ys <$> v.parser y
optionParser : Option b a -> Parser (b -> Either String b)
optionParser x xs = map x.setter <$> choiceMap (flip modParser xs) x.mods
partParser : Part b a -> Parser (b -> Either String b)
partParser x xs = let
Left x1 = decomp x
| Right v => case xs of
[y] => MkPair [] . v.setter <$> v.parser y
_ => Nothing
in optionParser (decomp0 x1) xs
public export
data ParseResult a
= InvalidOption (List1 String)
| InvalidMix String -- reason
| Done a
export
Functor ParseResult where
map func (InvalidOption xs) = InvalidOption xs
map func (InvalidMix x) = InvalidMix x
map func (Done x) = Done (func x)
export
parse : a ->
OptParse a b ->
List String ->
ParseResult a
parse acc o [] = Done acc
parse acc o (x::xs) = let
Just (xs', f) = runApM (\p => partParser p (x::xs)) o
| Nothing => InvalidOption (x:::xs)
in either
InvalidMix
(\y => parse y o $ assert_smaller (x::xs) xs')
(f acc)
namespace Default
defaultOption : Option b a -> Maybe a
defaultOption = Just . defaultValue
defaultParam : Param b a -> Maybe a
defaultParam = const Nothing
export
defaultPart : Part b a -> Maybe a
defaultPart x = let
Left x1 = decomp x
| Right v => defaultParam v
v = decomp0 x1
in defaultOption v
namespace Help
optionName : (long : List String) -> (short : List Char) ->
(param : Either a (Value b)) -> String
optionName long short param =
either (flip const) (\v => (++ " \{v.name}")) param $
(concat $ intersperse ", " $
map ("--" ++) long ++ map (\c => pack ['-',c]) short)
modHelp : Mod a -> Help
modHelp x = MkHelp (optionName (forget x.longNames) x.shortNames x.param)
Nothing x.description [] Nothing
export
partHelp : Part b a -> List Help
partHelp x = let
Left x1 = decomp x
| Right v => []
v = decomp0 x1
in map modHelp $ forget v.mods
export
commandHelp :
(name : String) -> (description : String) ->
(options : OptParse b c) ->
(param : Maybe String) -> Help
commandHelp name description options param = MkHelp
name
(Just "replica \{name} [OPTIONS]\{paramExt param}")
description
( catMaybes
[ map (MkPair "Options") $
toList1' $ reverse $ runApM (\p => partHelp p) options
])
Nothing
where
paramExt : Maybe String -> String
paramExt = maybe "" (" "<+>)
|
# so GRAPE methods use n_slices and so they can store that with themselves
# but dCRAB can be either continuous or pw defined and we can choose what to do
# we will start with something piecewise since we have the integrators written for that
using Parameters
abstract type Basis end
# think about what defines a basis and the fourier basis in particular
# its always going to have the form A1 * sin(omega1 * t + phi1) + A2 * cos(omega1 * t + phi2)
Base.@kwdef mutable struct Fourier2{F,C} <: Basis
frequencies::F = [0.0]
coefficients::C = [(0.0, 0.0)]
end
Base.@kwdef mutable struct Sigmoid{F} <: Basis
random_stuff::F
end
# this is only true for the fourier basis, but the point is, we could define this for every basis individually
# that gives us a method to have a unified interface
function evaluate_basis!(basis::Fourier2, out, time_axis)
# might also want to return the lambda sometimes?
# lam = get_lambda(basis)
lam = function (t)
res = 0.0
@unpack frequencies, coefficients = basis
# lets imagine we store tuples
n_freq = length(frequencies)
for i = 1:n_freq
coeffs_i = coefficients[i]
res =
res +
coeffs_i[1] * sin(frequencies[i] * t) +
coeffs_i[2] * cos(frequencies[i] * t)
end
return res
end
out .= lam.(time_axis)
return
end
# now lets set up an example using the bases
my_fourier_components = Fourier2([1.0, 2.0], [(1.0, 2.0), (3.0, 4.0)])
my_fourier_components.frequencies[2] = rand()
append!(my_fourier_components.coefficients, [(4.0, 5.0)])
my_fourier_components.coefficients
# using Plots
# using BenchmarkTools
time_axis = 0:0.001:5.0
hold = similar(time_axis)
evaluate_basis!(my_fourier_components, hold, time_axis)
using Plots
plot(hold)
# @benchmark evaluate_basis($my_fourier_components, $time_axis)
# @code_warntype evaluate_basis(my_fourier_components, time_axis)
# @benchmark evaluate_basis($my_fourier_components, $hold, $time_axis)
Base.@kwdef mutable struct dCRAB{ITG,B,FD,IIP,OPTS,MI,SI}
# characteristic of the integrator
integrator_type::ITG # or if you want to do it continuously we have another method!
    basis::B = Fourier2() # store bases (assuming the Fourier2 basis defined above is the one meant here)
freq_dist::FD = rand # distribution of frequencies
isinplace::IIP = true
optim_options::OPTS = Optim.Options()
max_iters::MI = 100
num_SI::SI = 5 # number of superiterations you want to do
end
# separate struct that defines the different solvers that are possible
prob = Problem(
B = [Sx, Sy],
A = Sz,
Xi = ρinit,
Xt = ρfin,
T = 1.0,
n_controls = 2,
guess = rand(2, 10),
sys_type = StateTransfer(),
)
# sol = solve(prob, GRAPE(n_slices = 10, isinplace = true))
# we need to get a functional f(x) which maps a sampled array to a figure of merit
# the array x will be the previous Fourier alongside the
function _get_functional(prob)
@unpack B, A, Xi, Xt, T, n_controls, guess, sys_type = prob
D = size(A, 1)
u0 = typeof(A)(I(D))
topt = function (x)
U = pw_evolve(A, B, x, n_controls, T / 10, 10, u0)
ev = U * Xi * U'
return C1(Xt, ev)
end
return topt
end
func = _get_functional(prob)
algg = dCRAB(n_timeslices = 10)
@unpack n_timeslices, basis, freq_dist, isinplace, optim_options, max_iters, num_SI = algg
time_axis = collect(range(0.0, prob.T, length = n_timeslices))
# for n_si = 1:num_SI
n_si = 1
# draw a frequency
ω = freq_dist()
# now from the basis we need to know that we have two coefficients per frequency, which we draw at random
init_coeffs = Tuple(rand(2))
init_phases = Tuple(rand(2))
# now the point is that these are our parameters in the optimisation
# so we can update the basis function I guess, since we know that we're indexing into it
# at n_si every time
basis.frequencies[n_si] = ω
basis.coefficients[n_si] = init_coeffs
basis.phases[n_si] = init_phases
test_store = zeros(n_timeslices)
evaluate_basis!(basis, test_store, time_axis)
func(test_store)
#end
struct testnew3{T,K}
name::T
age::K
end
mytest = testnew3("Alastair", 11)
mytest
using Parameters
@unpack name, age = mytest
function getname(test::testnew3)
    @show "hi world " * test.name
end
getname(testnew3("Alastair", 11))
|
{-# OPTIONS --cubical --no-exact-split --safe #-}
module Cubical.Data.NatPlusOne.Properties where
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Isomorphism
open import Cubical.Data.Nat
open import Cubical.Data.NatPlusOne.Base
1+Path : ℕ ≡ ℕ₊₁
1+Path = isoToPath (iso 1+_ -1+_ (λ _ → refl) (λ _ → refl))
|
Theorem Disjunctive_syllogism : forall P Q : Prop, (P \/ Q) -> ~P -> Q.
Proof.
intros.
destruct H.
destruct H0.
apply H.
apply H.
Qed. |
// Copyright 2019 Milan Vukov. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "gazebo_server/link.h"
#include <Eigen/Geometry>
#include <gazebo/physics/Link.hh>
namespace gazebo_server {
using Eigen::Matrix3d;
using Eigen::Quaterniond;
using Eigen::Vector3d;
void Link::GetWorldPose(Vector3d* world_p_link, Matrix3d* world_r_link) const {
assert(world_p_link != nullptr);
assert(world_r_link != nullptr);
const auto world_t_link = link_->WorldPose();
*world_p_link = {world_t_link.Pos().X(), world_t_link.Pos().Y(),
world_t_link.Pos().Z()};
*world_r_link = Quaterniond(world_t_link.Rot().W(), world_t_link.Rot().X(),
world_t_link.Rot().Y(), world_t_link.Rot().Z())
.toRotationMatrix();
}
Vector3d Link::GetWorldLinearVel() const {
const auto ret = link_->WorldLinearVel();
return Vector3d(ret.X(), ret.Y(), ret.Z());
}
Vector3d Link::GetWorldAngularVel() const {
const auto ret = link_->WorldAngularVel();
return Vector3d(ret.X(), ret.Y(), ret.Z());
}
Vector3d Link::GetWorldLinearAccel() const {
const auto ret = link_->WorldLinearAccel();
return Vector3d(ret.X(), ret.Y(), ret.Z());
}
Vector3d Link::GetWorldAngularAccel() const {
const auto ret = link_->WorldAngularAccel();
return Vector3d(ret.X(), ret.Y(), ret.Z());
}
Vector3d Link::GetRelativeLinearVel() const {
const auto ret = link_->RelativeLinearVel();
return Vector3d(ret.X(), ret.Y(), ret.Z());
}
Vector3d Link::GetRelativeLinearAccel() const {
const auto ret = link_->RelativeLinearAccel();
return Vector3d(ret.X(), ret.Y(), ret.Z());
}
Vector3d Link::GetRelativeAngularVel() const {
const auto ret = link_->RelativeAngularVel();
return Vector3d(ret.X(), ret.Y(), ret.Z());
}
Vector3d Link::GetRelativeAngularAccel() const {
const auto ret = link_->RelativeAngularAccel();
return Vector3d(ret.X(), ret.Y(), ret.Z());
}
} // namespace gazebo_server
|
(* Title: HOL/Auth/flash_data_cub_lemma_on_inv__126.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The flash_data_cub Protocol Case Study*}
theory flash_data_cub_lemma_on_inv__126 imports flash_data_cub_base
begin
section{*All lemmas on causal relation between inv__126 and some rule r*}
lemma n_PI_Remote_GetVsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_Get src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_Get src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Local_Get_PutVsinv__126:
assumes a1: "(r=n_PI_Local_Get_Put )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_Get))) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''HomeProc'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Remote_GetXVsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_GetX src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_GetX src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Local_GetX_PutX_HeadVld__part__0Vsinv__126:
assumes a1: "(r=n_PI_Local_GetX_PutX_HeadVld__part__0 N )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_Get))) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''HomeProc'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_PutX_HeadVld__part__1Vsinv__126:
assumes a1: "(r=n_PI_Local_GetX_PutX_HeadVld__part__1 N )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_Get))) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''HomeProc'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_PutX__part__0Vsinv__126:
assumes a1: "(r=n_PI_Local_GetX_PutX__part__0 )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_Get))) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''HomeProc'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_PutX__part__1Vsinv__126:
assumes a1: "(r=n_PI_Local_GetX_PutX__part__1 )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const false)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_Get))) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''HomeProc'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_PutXVsinv__126:
assumes a1: "(r=n_PI_Local_PutX )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const true)) s))\<or>((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const true))) s))" by auto
moreover {
assume c1: "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const true)) s))"
have "?P2 s"
proof(cut_tac a1 a2 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume c1: "((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const true))) s))"
have "?P1 s"
proof(cut_tac a1 a2 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Local_ReplaceVsinv__126:
assumes a1: "(r=n_PI_Local_Replace )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_NakVsinv__126:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Nak dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Nak dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__0Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__1Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__2Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Get__part__0Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Get__part__1Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Put_HeadVsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_PutVsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Put_DirtyVsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_NakVsinv__126:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_PutVsinv__126:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__0Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__1Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__2Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_GetX__part__0Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_GetX__part__1Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_1Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_2Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_3Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_4Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_5Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_6Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7__part__0Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7__part__1Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__0Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__1Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_HomeVsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_Home_NODE_GetVsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8Vsinv__126:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp~=p__Inv4)\<or>(src~=p__Inv4\<and>pp=p__Inv4)\<or>(src~=p__Inv4\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_NODE_GetVsinv__126:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp~=p__Inv4)\<or>(src~=p__Inv4\<and>pp=p__Inv4)\<or>(src~=p__Inv4\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_9__part__0Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_9__part__1Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_10_HomeVsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_10Vsinv__126:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp~=p__Inv4)\<or>(src~=p__Inv4\<and>pp=p__Inv4)\<or>(src~=p__Inv4\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_11Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_NakVsinv__126:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_PutXVsinv__126:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_PutVsinv__126:
assumes a1: "(r=n_NI_Local_Put )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeUniMsg'') ''Cmd'')) (Const UNI_Put)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_Get))) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''HomeProc'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Remote_PutVsinv__126:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Put dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Put dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_PutXAcksDoneVsinv__126:
assumes a1: "(r=n_NI_Local_PutXAcksDone )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeUniMsg'') ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_Get))) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''HomeProc'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Remote_PutXVsinv__126:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_PutX dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_PutX dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_InvAck_1Vsinv__126:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_InvAck_1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_InvAck_1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__126 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_PutX_HomeVsinv__126:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_PutX_Home dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_WbVsinv__126:
assumes a1: "r=n_NI_Wb " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_StoreVsinv__126:
assumes a1: "\<exists> src data. src\<le>N\<and>data\<le>N\<and>r=n_Store src data" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_3Vsinv__126:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_3 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_GetX__part__1Vsinv__126:
assumes a1: "r=n_PI_Local_GetX_GetX__part__1 " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_GetX__part__0Vsinv__126:
assumes a1: "r=n_PI_Local_GetX_GetX__part__0 " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_ReplaceVsinv__126:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_PI_Remote_Replace src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_Store_HomeVsinv__126:
assumes a1: "\<exists> data. data\<le>N\<and>r=n_Store_Home data" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_existsVsinv__126:
assumes a1: "\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_InvAck_exists src pp" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_PutXVsinv__126:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_PI_Remote_PutX dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_Get_Put_HomeVsinv__126:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Put_Home dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvVsinv__126:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Inv dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ShWbVsinv__126:
assumes a1: "r=n_NI_ShWb N " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ReplaceVsinv__126:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Replace src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_GetX_Nak_HomeVsinv__126:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_Nak_Home dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_Get_Nak_HomeVsinv__126:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Nak_Home dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_exists_HomeVsinv__126:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_exists_Home src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Replace_HomeVsinv__126:
assumes a1: "r=n_NI_Replace_Home " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Nak_ClearVsinv__126:
assumes a1: "r=n_NI_Nak_Clear " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_Get_GetVsinv__126:
assumes a1: "r=n_PI_Local_Get_Get " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Nak_HomeVsinv__126:
assumes a1: "r=n_NI_Nak_Home " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_2Vsinv__126:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_2 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_FAckVsinv__126:
assumes a1: "r=n_NI_FAck " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__126 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
end
|
Rick Gonzales is a Davis resident who cofounded the Mexican American Concilio of Yolo County with his father to advocate for social justice. In recent years the Gonzales family and the Concilio have held an annual fall dinner to raise scholarship money for high school students attending college. He has been active in many other social justice causes in Davis and Yolo County.
Gonzales grew up in Woodland and was a school teacher and vice principal in the Sacramento schools before retiring. He is the 2012 recipient of the Covell Trophy for Davis Citizen of the Year.
Source: Hudson, Jeff; 2012 Nov. 16, Rick Gonzales named Citizen of the Year, Davis Enterprise, p. A1.
|
# christina lu
# build_features.py
import gensim
from gensim import corpora, models
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS
from nltk.stem import WordNetLemmatizer, SnowballStemmer
from nltk.stem.porter import *
import numpy as np
import pandas as pd
import json
data_path = '../../../summer/'
# FEATURE TYPE: FOLLOWING -----------------------------------------------------
# given list of 2000 most followed users in following_vector_file
# create feature vector of size 2000, 1 if followed and 0 if not
following_vector_file = '../features/base_following_vector.csv'
following_data_file = data_path + 'following_negative.json'
# gets base following vector from file
def get_top_vector(filename):
df = pd.read_csv(filename)
return df[df.columns[0]].tolist()
# build following features and return df
def build_following_features():
filename = following_data_file
vec = get_top_vector(following_vector_file)
data_vecs = []
# iterate through json file containing raw following data
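    # NOTE (assumption about the input format): the hand-rolled parser below expects
    # pretty-printed JSON with each object's '{' and '}' on their own lines and the
    # fields on the lines in between; compact or JSON-lines input would not be parsed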
with open(filename, 'r') as json_file:
obj = ''
append = False
for line in json_file:
# indicates we have one json object
if '}' in line:
obj += '}'
append = False
user = json.loads(obj)
obj = ''
user_vec = [int(user['user_id'])]
friends = set(user['friend_ids'])
for id in vec:
if id in friends:
user_vec.append(1)
else:
user_vec.append(0)
data_vecs.append(user_vec)
if append:
obj += line
if '{' in line:
append = True
obj += '{'
column_names = ['user_id'] + vec
df = pd.DataFrame(data_vecs, columns=column_names)
return df
# FEATURE TYPE: SIGNAL TWEET --------------------------------------------------
# file produced by combine_tweet_files and get_english_tweets, must be all english
tweet_file = data_path + 'tweets_negative_final.csv'
topic_model = '../features/topic_model/topics_25.model'
signal_tweet_outname = './signal_tweet_negative_final.csv'
# select top signal tweet for each user according to topic model
np.random.seed(2020)
stemmer = SnowballStemmer("english")
def lemmatize_stemming(text):
return stemmer.stem(WordNetLemmatizer().lemmatize(text, pos='v'))
def preprocess(text):
result = []
for token in gensim.utils.simple_preprocess(text):
if token not in gensim.parsing.preprocessing.STOPWORDS and len(token) > 3:
result.append(lemmatize_stemming(token))
return result
# apply topic model to all tweets
# add column to df of topic 12 probability
def apply_topic_model(df):
lda = gensim.models.LdaModel.load(topic_model)
word_dict = gensim.corpora.dictionary.Dictionary.load(topic_model + '.id2word')
# get topic distributions for each
proc_tweets = df['tweet'].astype(str).map(preprocess)
corpus = [word_dict.doc2bow(doc) for doc in proc_tweets]
preds = lda.get_document_topics(corpus)
# pull out pred for topic 12
topic_pred = []
for pred in preds:
prob = 0
for tup in pred:
if tup[0] == 12:
prob = tup[1]
topic_pred.append(prob)
df['topic_prob'] = topic_pred
return df
# select top tweet under topic 12 for each user
# return df of user_id and top tweet text
def select_top_tweet(save=False):
data = pd.read_csv(tweet_file)
df = apply_topic_model(data)
new_df = df.sort_values('topic_prob').drop_duplicates(['user_id'], keep='last')
if save:
new_df.to_csv(signal_tweet_outname, index=False)
return new_df
# bert encode (note: the encoding itself is not implemented here; this helper only
# selects each user's top signal tweet from an arbitrary tweet file)
def build_tweet_features(filename):
    df = pd.read_csv(filename)
    df = apply_topic_model(df)
    # keep each user's single highest-probability tweet for topic 12
    return df.sort_values('topic_prob').drop_duplicates(['user_id'], keep='last')
select_top_tweet(True)
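# Example usage sketch (not part of the original pipeline; the output path below is
# illustrative only): the following-feature builder defined above can be run and saved
# in the same way, e.g.
#   following_df = build_following_features()
#   following_df.to_csv('./following_features_negative.csv', index=False)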
|
##
## Initialisation variables, functions, etc.
##
library(tidyverse)
library(readxl)
library(data.table)
library(igraph)
library(futile.logger)
flog.info("Preparing data...")
##
## directories for loading/saving stuff
##
data.dir = "../../data/2016"
data.out.dir = "./data"
plots.dir = "./plots"
results.dir = "./results"
models.dir = "./models"
# create the directories if they don't exist
if (!dir.exists(data.out.dir)) # data out
dir.create(data.out.dir)
if (!dir.exists(plots.dir)) # plots
dir.create(plots.dir)
if (!dir.exists(results.dir)) # results
dir.create(results.dir)
if (!dir.exists(models.dir)) # models
dir.create(models.dir)
##
## data processing parameters
##
license.owners.only = T # whether or not to output only license owners (siida shares)
keep.30C = F # whether or not to keep people in district 30C (FALSE = remove them)
# if `keep.all.herders` has been defined outside this file and set to TRUE, the people in
# `people.to.remove` (defined below) are *not* removed; otherwise they are removed
if (exists("keep.all.herders") && isTRUE(keep.all.herders)) { keep.30C = T } else { keep.30C = F }
############################################################################
## Import all data
##
flog.info("Importing data files...")
herders = read_excel(file.path(data.dir, "Herders_anon.xlsx"), sheet = 1)
herd.sizes = read_excel(file.path(data.dir, "Herders_anon.xlsx"), sheet = "Reindeer - from NRK")
survey = read_excel(file.path(data.dir, "Surveys.xlsx"), sheet = "Survey") # interview data
net.maps = read_excel(file.path(data.dir, "Net-maps.xlsx"), sheet = "Net-maps - individuals")
sibs = read_excel(file.path(data.dir, "Surveys.xlsx"), sheet = "Siblings")
kids = read_excel(file.path(data.dir, "Surveys.xlsx"), sheet = "Children")
# remove person/people from district 30C (no net-maps or games data for them)
if (!keep.30C)
{
people.to.remove = subset(herders, Distrikt=="30C")$HerderID
herders = subset(herders, !HerderID %in% people.to.remove)
survey = subset(survey, !HerderID %in% people.to.remove)
net.maps = subset(net.maps, !Ego %in% people.to.remove | !Alter %in% people.to.remove) # this shouldn't cause any changes
sibs = subset(sibs, !SibID %in% people.to.remove)
rm(people.to.remove)
}
############################################################################
## Sort out variables
##
flog.info("Formatting data...")
##
## main herders table
##
setnames(herders, c("License?", "Interviewed?"), c("SiidaShareYN", "Interviewed."))
# drop columns to preserve anonymity
herders$Etternavn = NULL
herders$Kode = NULL
##
## some useful lists
##
siida.share.ids = subset(herders, SiidaShareYN==1)$HerderID # IDs for herders with siida shares in the district
interviewee_list = survey$HerderID # people we interviewed (list everyone, including unlicensed herders but *not* people in district 30C)
netmappers_list = unique(net.maps$Interviewee) # people who drew net-maps
##
## should we keep siida shares only?
##
if (license.owners.only)
{
herders = subset(herders, SiidaShareYN==1)
survey = subset(survey, HerderID %in% siida.share.ids)
net.maps = subset(net.maps, Ego %in% siida.share.ids & Alter %in% siida.share.ids)
}
##
## interview data
##
# copy each participant's siida ID into survey table
survey = survey %>% left_join(
select(herders, HerderID, SiidaID), by="HerderID"
)
# calculate age at the time of fieldwork
# survey$Age = 2016 - survey$BirthYear
# convert amount of subsistence from herd into ordinal scale
scale_subsistence = c("None", "Almost none", "Less than half", "About half", "More than half", "Almost all", "All")
survey$SubsistenceSpring = ordered(survey$SubsistenceSpring, levels=scale_subsistence)
survey$SubsistenceSummer = ordered(survey$SubsistenceSummer, levels=scale_subsistence)
survey$SubsistenceAutumn = ordered(survey$SubsistenceAutumn, levels=scale_subsistence)
survey$SubsistenceWinter = ordered(survey$SubsistenceWinter, levels=scale_subsistence)
# add up subsistence scores (first convert to numbers)
survey$SubsistenceTotal = as.integer(survey$SubsistenceSpring) + as.integer(survey$SubsistenceSummer) +
as.integer(survey$SubsistenceAutumn) + as.integer(survey$SubsistenceWinter)
# copy total subsistence score into herders table
herders = herders %>%
left_join( select(survey, HerderID, SubsistenceTotal), by="HerderID" )
##
## net-maps
##
net.maps = filter(net.maps, Type != "Influence") # remove influence because it wasn't a useful measure in the end
# move gifts and kin into separate dataframes
gifts = net.maps %>%
filter(Type == "Gift") %>%
dplyr::select(Ego, Alter, Value) %>%
mutate(Value = as.numeric(as.character(Value))) %>% # convert the value of the gifts into numeric
rename(GiftSize = Value)
kin = net.maps %>%
filter(Type == "Kin") %>%
dplyr::select(Ego, Alter, r) %>%
na.omit(.) %>% # de-dupe and remove NAs
distinct()
# NB: `kin` based on the net-maps doesn't include all kin we know about - they'll be added when we start accounting for sibs and kids
# and this dataframe will get overwritten by a more complete edge list when we construct the social networks
# keep only advice, help and shared items in `net.maps`
net.maps = filter(net.maps, Type %in% c("Advice", "Help", "Items"))
# make dataframe containing outbound connections only
net.maps.o = filter(net.maps, Alter!=Interviewee) %>% dplyr::select(Ego, Alter, Type)
## in this case, in-degree will be a measure of help received, as reported by others
## and out-degree will be the amount of self-reported help given to others
#... inbound connections only
net.maps.i = filter(net.maps, Alter==Interviewee) %>% dplyr::select(Ego, Alter, Type)
## in this case, in-degree will be a self-reported measure of help received
## and out-degree will be the amount of help given, as reported by others
# sort out factors to remove missing levels
net.maps$Type = factor(net.maps$Type)
net.maps.o$Type = factor(net.maps.o$Type)
net.maps.i$Type = factor(net.maps.i$Type)
##
## clean up
##
rm(scale_subsistence)
############################################################################
## Get herd size data
##
# centre and scale (z-score) the 2012 herd size and rename the variable
herd.sizes$num.reindeer.z = scale(herd.sizes$`Rein 2012`, scale = T)
setnames(herd.sizes, "Rein 2012", "num.reindeer")
herders = merge(herders,
subset(herd.sizes, select=c(HerderID, num.reindeer, num.reindeer.z)),
by="HerderID", all.x=T)
############################################################################
## Calculate complete set of siblings for each person (if they have them)
##
# assign ID numbers to sibs (continuing on from participant IDs)
## first, swap 'HerderID' and 'SibID' column names
names(sibs)[names(sibs)=="HerderID"] = "SibID2"
names(sibs)[names(sibs)=="SibID"] = "HerderID"
names(sibs)[names(sibs)=="SibID2"] = "SibID"
sibs = subset(sibs, SibName != "(deceased)" | is.na(SibName)) # remove anyone deceased (but explicitly keep people we don't know names for)
## assign a new ID number to each sib based on their 'tmpName' (a unique string based on sex, birth year and no. kids)
## (this is the best we can do to not generate duplicate IDs where sibs without ID numbers are related to more than one interviewee -- e.g. in the case of herders 23 and 24)
largest_id = max(herders$HerderID) + 1
tmpNames = unique( subset(sibs, nchar(tmpName) > 5)$tmpName ) # make sure these names have more than five characters (i.e. carry the most information)
sib.ids = data.frame(tmpName = tmpNames, tmpID = seq(from=largest_id, to=(largest_id + length(tmpNames) - 1)))
## assign a new ID number to each sib based on their tmpNames...
sibs = merge(sibs, sib.ids, by="tmpName", all.x=T)
#... if we didn't assign an ID based on 'tmpName', give them a new, sequential ID
largest_id = max(sib.ids$tmpID) + 1
sibs$tmpID[ is.na(sibs$tmpID) ] = seq(from=largest_id, to=(largest_id + length(sibs$tmpID[ is.na(sibs$tmpID) ]) - 1))
##... and only use the generated ID if we haven't already linked the sib to someone in our original list of herders
sibs$HerderID = ifelse(is.na(sibs$HerderID), sibs$tmpID, sibs$HerderID)
sibs = as.data.table( subset(sibs, select=c(SibID, HerderID, Sex, BirthYear, NumSons, NumDaughters)) )
# each sibling 'HerderID' related to the person we interviewed 'SibID' is also sib with the others related to the interviewee
# get all combinations of sib relationships
# code adapted from: http://stackoverflow.com/a/30312324
sibs2 = left_join(
dplyr::select(sibs, SibID, HerderID1 = HerderID),
dplyr::select(sibs, SibID, HerderID2 = HerderID),
by = "SibID"
) %>%
filter(HerderID1 != HerderID2) %>%
dplyr::select(SibID=HerderID1, HerderID=HerderID2)
sibs2 = as.data.table(sibs2)
setkey(sibs2, SibID, HerderID)
sibs2 = unique(sibs2)
# merge in covariates for the alter in the sib relationship (in this case, 'HerderID')
sibs.s = unique( subset(sibs, HerderID %in% sibs2$HerderID, select=-c(SibID)) ) # covariates to merge
setkey(sibs.s, HerderID)
setkey(sibs2, HerderID)
sibs2 = sibs.s[sibs2]
# de-dupe
setkey(sibs2, SibID, HerderID)
sibs2 = unique(sibs2)
setcolorder(sibs2, c("SibID", "HerderID", "Sex", "BirthYear", "NumSons", "NumDaughters")) # make sure columns in right order before binding
# now need to add the people we interviewed (currently: sibs$SibID) as the alter in the sib relationship
## first, swap ego/alter columns
sibs3 = copy(sibs)
sibs3 = sibs3 %>% dplyr::select(SibID=HerderID, HerderID=SibID)
## get covars from survey table
sibs.s = as.data.table( subset(survey, select=c("HerderID", "Sex", "BirthYear", "NumSons", "NumDaughters")) )
## merge
setkey(sibs3, HerderID)
setkey(sibs.s, HerderID)
sibs3 = sibs.s[sibs3]
setcolorder(sibs3, c("SibID", "HerderID", "Sex", "BirthYear", "NumSons", "NumDaughters")) # make sure columns in right order before binding
# merge all sib combinations
sibs = rbindlist(list(sibs, sibs2, sibs3)) # append newly expanded list of sibs to main list of sibs
# de-dupe
setkey(sibs, SibID, HerderID)
sibs = unique(sibs)
rm(sibs.s, sibs2, sibs3, sib.ids, largest_id, tmpNames)
############################################################################
## Assign new ID numbers to kids if they don't already have them
##
# assign ID numbers to kids (continuing on from participant IDs)
# set participant as parent for their kids and assign ID numbers to kids
names(kids)[names(kids)=="HerderID"] = "ParentID"
names(kids)[names(kids)=="ChildID"] = "HerderID"
largest_id = max(sibs$HerderID) + 1
kids$tmpID = seq(from=largest_id, to=(largest_id + nrow(kids) - 1))
##... but only use the new ID if we haven't already linked the child to someone in our original list of herders
kids$HerderID = ifelse(is.na(kids$HerderID), kids$tmpID, kids$HerderID)
############################################################################
## Make relatedness matrix (as edge list)
##
flog.info("Making relatedness matrix...")
sibs_sub = subset(sibs, select=c("HerderID", "SibID"))
kids_sub = subset(kids, select=c("HerderID", "ParentID"))
names(sibs_sub) = c("Ego", "Alter")
names(kids_sub) = c("Ego", "Alter")
herders.r = rbind(sibs_sub, kids_sub)
herders.r$r = 0.5 # relatedness of sibs and kids
# add kin from netmaps
herders.r = rbind(herders.r, kin)
# convert to a data table and de-dupe
herders.r = as.data.table(herders.r)
setkey(herders.r, Ego, Alter)
herders.r = na.omit(herders.r) # remove incomplete
herders.r = unique(herders.r) # remove duplicates
herders.r = subset(herders.r, Ego!=Alter) # (just in case)
##
## some dyads only have one entry - but each ego-alter pair should also appear as alter-ego (the relatedness matrix should be symmetric) - so add the missing dyads
##
# first, assign a dyad ID to each entry (use a separator so that, e.g., ID pairs 1 & 234 and 12 & 34 cannot collide)
herders.r[, DyadID := ifelse(Ego < Alter, paste(Ego, Alter, sep="_"), paste(Alter, Ego, sep="_"))]
# DyadID stays a character key; it is only used for counting and matching dyads below
# count no. times each dyad appears
herders.r.sum = herders.r %>% group_by(DyadID) %>% summarise(n_appearances=length(DyadID))
single.dyads = herders.r.sum$DyadID[ herders.r.sum$n_appearances<2 ] # list of ego-alter pairs without corresponding alter-ego (i.e. appears only once)
herders.r.new = herders.r[DyadID %in% single.dyads] # get entries that only appear once
setnames(herders.r.new, c("Ego", "Alter"), c("Alter", "Ego")) # swap ego and alter column names
setcolorder(herders.r.new, names(herders.r)) # swap column order
herders.r = rbind(herders.r, herders.r.new) # append new dyads onto end of relatedness table
#### DEBUG - are any dyads repeated? ##
#herders.r.sum = ddply(herders.r, .(DyadID), summarise, n_appearances=length(DyadID))
#herders.r.sum[ herders.r.sum$n_appearances>2, ] # do any dyads appear more than twice (too many entries)
####
# remove DyadID - no longer needed
herders.r$DyadID = NULL
# clean up
rm(sibs_sub, kids_sub, largest_id, herders.r.sum, herders.r.new, single.dyads)
############################################################################
## Load dyadic data (or create if file doesn't exist)
##
flog.info("Creating dyadic data...")
source("create dyadic data.r")
############################################################################
## Create social networks
##
flog.info("Creating social networks...")
##
## First, a bit of data wrangling
##
# rename some columns for compatibility with Gephi
setnames(net.maps, "Type", "LinkType")
setnames(net.maps.o, "Type", "LinkType")
setnames(net.maps.i, "Type", "LinkType")
# make dummy variable in `herders` identifying people who were named in the net-maps
# (these are the subset we should be doing SNA with)
herders$NamedInNetmap = ifelse( herders$HerderID %in% unique(c(net.maps$Ego, net.maps$Alter)), 1, 0 )
# keep only one entry for each Ego-Alter pair, rather than separate entries for each help type
# (this would have the same effect as using igraph::simplify())
coop.net = unique(subset(net.maps, select=c(Ego, Alter)))
coop.net.o = unique(subset(net.maps.o, select=c(Ego, Alter)))
coop.net.i = unique(subset(net.maps.i, select=c(Ego, Alter)))
# if we don't know someone's siida membership, assign them to something arbitrary
# this has no effect if we're only working with siida shares, since we know every license owner's siida
herders$SiidaID[ is.na(herders$SiidaID) ] = max(herders$SiidaID, na.rm=T) + 1
##
## create networks
##
# siida membership
siida.mem = subset( herders.wide, SameSiida==1, select=c(Ego, Alter) )
g.siida = graph.data.frame(siida.mem, vertices=herders, directed=T)
rm(siida.mem)
# relatives
kin = subset( herders.wide, r>0, select=c(Ego, Alter, r) ) # this overwrites the previous `kin` df that was created from the net maps (and, thus, was incomplete)
kin$Weight = kin$r
g.kin = graph.data.frame(kin, vertices=herders, directed=T)
# gifts (the `gifts` dataframe was created directly from the netmap, towards the beginning of this file)
gifts$Weight = gifts$GiftSize
g.gifts = graph.data.frame(gifts, vertices=herders, directed=T)
##
## net-maps with interviewees' outbound connections only
##
# cooperation network
g.netmap.o = graph.data.frame(coop.net.o, vertices=herders, directed=T)
# make a cooperation network containing only people we interviewed and people they named
# g.netmap.o.sub = graph.data.frame(coop.net.o,
# vertices = subset(herders, NamedInNetmap==1),
# directed = T)
##
## net-maps with interviewees' inbound connections only
##
# cooperation network
g.netmap.i = graph.data.frame(coop.net.i, vertices=herders, directed=T)
# make a cooperation network containing only people we interviewed and people they named
# g.netmap.i.sub = graph.data.frame(coop.net.i,
# vertices = subset(herders, NamedInNetmap==1),
# directed = T)
##
## net-maps with everything
##
g.all = graph.data.frame(coop.net, vertices = herders, directed = T)
# g.all.sub = graph.data.frame(coop.net, vertices = subset(herders, NamedInNetmap==1), directed = T)
##
## make graphs of each type of net-map (self-reported outbound connections only)
##
net.maps.advice = subset(net.maps.o, LinkType=="Advice", select=c(Ego, Alter))
net.maps.help = subset(net.maps.o, LinkType=="Help", select=c(Ego, Alter))
net.maps.sharing = subset(net.maps.o, LinkType=="Items", select=c(Ego, Alter))
g.advice = simplify( graph.data.frame( net.maps.advice, vertices=herders, directed=T) )
g.help = simplify( graph.data.frame( net.maps.help, vertices=herders, directed=T) )
g.sharing = simplify( graph.data.frame( net.maps.sharing, vertices=herders, directed=T) )
# don't need these anymore
rm(coop.net.i, coop.net.o)
rm(net.maps.i, net.maps.o)
#######################################################################################
## Calculate centrality measures (in/out degrees)
##
flog.info("Calculating network statistics...")
## there are four measures of degree in the cooperative network:
##
## - cooperation given from ego to alter, as reported by ego (`herders$CoopGiven.SelfReport` which is out-degree in `g.netmap.o`)
## - cooperation given from ego to alter, as reported by alter (`herders$CoopGiven.OtherReport` which is out-degree in `g.netmap.i`)
## - cooperation received by ego from alter, as reported by ego (`herders$CoopReceived.SelfReport` which is in-degree in `g.netmap.i`)
## - cooperation received by ego from alter, as reported by alter (`herders$CoopReceived.OthersReport` which is in-degree in `g.netmap.o`)
##
# cooperation from outbound links (in-degree = cooperation received, reported by others; out-degree = cooperation given, reported by self)
deg.in = degree(g.netmap.o, mode="in")
herders$CoopReceived.OthersReport = deg.in[ as.character(herders$HerderID) ]
deg.out = degree(g.netmap.o, mode="out")
herders$CoopGiven.SelfReport = deg.out[ as.character(herders$HerderID) ]
# cooperation from inbound links (in-degree = cooperation received, reported by self; out-degree = cooperation given, reported by others)
deg.in = degree(g.netmap.i, mode="in")
herders$CoopReceived.SelfReport = deg.in[ as.character(herders$HerderID) ]
deg.out = degree(g.netmap.i, mode="out")
herders$CoopGiven.OtherReport = deg.out[ as.character(herders$HerderID) ]
# cooperation from complete network
deg.in = degree(g.all, mode="in")
herders$coop.deg.in = deg.in[ as.character(herders$HerderID) ]
deg.out = degree(g.all, mode="out")
herders$coop.deg.out = deg.out[ as.character(herders$HerderID) ]
# no. gifts received
deg.in = degree(g.gifts, mode="in")
herders$NumGifts = deg.in[ as.character(herders$HerderID) ]
herders$gifts.bin = ifelse(herders$NumGifts > 0, 1, 0) # also create a binary variable for whether/not they received a gift
# no. gifts given
deg.out = degree(g.gifts, mode="out")
herders$NumGiftsGiven = deg.out[ as.character(herders$HerderID) ]
rm(deg.in, deg.out)
#######################################################################################
## tidy up and save data
##
# keep only subset of herder variables used in these analyses
herders = herders %>%
select(HerderID, SiidaID, Interviewed., SiidaShareYN, NamedInNetmap, num.reindeer, SubsistenceTotal,
NumGifts, CoopGiven.OtherReport, CoopGiven.SelfReport, coop.deg.out)
net.maps = net.maps %>%
select(Interviewee:LinkType)
# clean up environment
rm(license.owners.only, keep.30C)
rm(sibs, kids, herd.sizes, herders.r, kin)
rm(survey)
# save processed data
write_csv(herders, file.path(data.out.dir, "herders.csv"))
write_csv(herders.wide, file.path(data.out.dir, "herders-dyadic-wide.csv"))
write_csv(gifts, file.path(data.out.dir, "gifts.csv"))
write_csv(net.maps, file.path(data.out.dir, "net-maps.csv"))
flog.info("Finished preparing data")
|
!----------------------Subroutine--------------------------------------!
!
SUBROUTINE SFXLB( NSL )
!
!-------------------------Disclaimer-----------------------------------!
!
! This material was prepared as an account of work sponsored by
! an agency of the United States Government. Neither the
! United States Government nor the United States Department of
! Energy, nor Battelle, nor any of their employees, makes any
! warranty, express or implied, or assumes any legal liability or
! responsibility for the accuracy, completeness, or usefulness
! of any information, apparatus, product, software or process
! disclosed, or represents that its use would not infringe
! privately owned rights.
!
!----------------------Acknowledgement---------------------------------!
!
! This software and its documentation were produced with Government
! support under Contract Number DE-AC06-76RLO-1830 awarded by the
!     United States Department of Energy. The Government retains a paid-up
! non-exclusive, irrevocable worldwide license to reproduce,
! prepare derivative works, perform publicly and display publicly
! by or for the Government, including the right to distribute to
! other Government contractors.
!
!---------------------Copyright Notices--------------------------------!
!
! Copyright Battelle Memorial Institute, 1996
! All Rights Reserved.
!
!----------------------Description-------------------------------------!
!
! Compute solute aqueous-phase fluxes on boundary surfaces.
!
!----------------------Authors-----------------------------------------!
!
! Written by MD White, Battelle, PNL, January, 1995.
! Last Modified by MD White, Battelle, PNL, February 11, 1999.
! $Id: sfxlb.F 1080 2017-03-14 16:22:02Z d3c002 $
!
!----------------------Fortran 90 Modules------------------------------!
!
USE GLB_PAR
USE TRNSPT
USE SOLTN
USE REACT
USE PORMED
USE GRID
USE FLUXP
USE FDVP
USE CONST
USE BCVP
USE BCV
!
!----------------------Implicit Double Precision-----------------------!
!
IMPLICIT REAL*8 (A-H,O-Z)
IMPLICIT INTEGER (I-N)
!
!----------------------Parameter Statements----------------------------!
!
!
!----------------------Common Blocks-----------------------------------!
!
!
!----------------------Type Declarations-------------------------------!
!
REAL*8 BCX(LSPBC+1)
!
!----------------------Executable Lines--------------------------------!
!
ISUB_LOG = ISUB_LOG+1
SUB_LOG(ISUB_LOG) = '/SFXLB'
IF( INDEX(SVN_ID(187)(1:1),'$').EQ.0 ) SVN_ID(187) =
& '$Id: sfxlb.F 1080 2017-03-14 16:22:02Z d3c002 $'
!
!--- Loop over number of specified boundary conditions ---
!
NBCT = MIN( NSL+LUK,NSOLU+LUK+1 )
DO 200 NB = 1,NBC
TMZ = TM
MB = IBCIN(NB)
IF( IBCC(NB).EQ.1 ) TMZ = MOD( TM,BC(1,IBCM(NB),MB) )
IF( TMZ.LE.BC(1,1,MB) ) GOTO 200
IF( IBCM(NB).GT.1 .AND. TMZ.GT.BC(1,IBCM(NB),MB) ) GOTO 200
IF( IBCM(NB).EQ.1 ) THEN
!
!--- Solute transport ---
!
IF( NSL.LE.NSOLU ) THEN
BCX(1) = BC(NSL+LBCU,1,MB)
!
!--- Reactive species transport ---
!
ELSE
BCX(1) = 0.D+0
DO 10 NSPX = 1,IBCSP(1,NB)
MX = NSOLU+LBCU+NSPX
BCX(NSPX+1) = BC(MX,1,MB)
10 CONTINUE
ENDIF
ELSE
DO 100 M = 2,IBCM(NB)
IF( TMZ.LE.BC(1,M,MB) ) THEN
DTBC = MIN( BC(1,M,MB)-TM,DT )
TFBC = (TM-5.D-1*DTBC-BC(1,M-1,MB))/
& (BC(1,M,MB)-BC(1,M-1,MB))
!
!--- Solute transport ---
!
IF( NSL.LE.NSOLU ) THEN
BCX(1) = BC(NSL+LBCU,M-1,MB) +
& TFBC*(BC(NSL+LBCU,M,MB)-BC(NSL+LBCU,M-1,MB))
IF( IBCT(NBCT,NB).EQ.12 ) BCX(1) = CBO(NB,NSL)
!
!--- Reactive species transport ---
!
ELSE
BCX(1) = 0.D+0
DO 20 NSPX = 1,IBCSP(1,NB)
MX = NSOLU+LBCU+NSPX
BCX(NSPX+1) = BC(MX,M-1,MB) +
& TFBC*(BC(MX,M,MB)-BC(MX,M-1,MB))
IF( IBCT(NBCT,NB).EQ.12 ) BCX(NSPX) = CBO(NB,NSL)
20 CONTINUE
ENDIF
GOTO 110
ENDIF
100 CONTINUE
GOTO 200
ENDIF
110 CONTINUE
N = IBCN(NB)
MF = 1
IZN = IZ(N)
MP = IXP(N)
I = ID(N)
J = JD(N)
K = KD(N)
!
!--- Compute adjacent node phase fractions ---
!
SVLP = SL(2,N)*PORD(2,N)
FCLP = 0.D+0
IF( SVLP.GT.SMALL ) FCLP = YL(N,NSL)/SVLP
!
!--- Solute transport only, skip calculations for reactive
! species transport ---
!
IF( NSL.LE.NSOLU ) THEN
!
!--- Compute boundary phase fractions ---
!
IF( IPCL(NSL).EQ.2 ) THEN
SVSB = RHOS(IZN)*PCSL(1,IZN,NSL)*(1.D+0-PORTB(2,NB))*
& SLB(2,NB)
ELSE
SVSB = RHOS(IZN)*PCSL(1,IZN,NSL)*(1.D+0-PORTB(2,NB))
ENDIF
SVLB = SLB(2,NB)*PORDB(2,NB)
SVGB = SGB(2,NB)*PORDB(2,NB)
SVNB = SNB(2,NB)*PORDB(2,NB)
!
!--- Constant gas-aqueous partition coefficient ---
!
IF( IPCGL(NSL).EQ.0 ) THEN
PCGLX = PCGL(1,NSL)
!
!--- Temperature dependent gas-aqueous partition coefficient ---
!
ELSEIF( IPCGL(NSL).EQ.1 ) THEN
TK = TB(2,NB)+TABS
PCGLX = EXP( PCGL(1,NSL) + PCGL(2,NSL)/TK
& + PCGL(3,NSL)*LOG(TK) + PCGL(4,NSL)*TK
& + PCGL(5,NSL)*TK**2 )
!
!--- Water-vapor equilibrium gas-aqueous partition coefficient ---
!
ELSEIF( IPCGL(NSL).EQ.2 ) THEN
PCGLX = RHOG(2,N)*XGW(2,N)/(RHOL(2,N)*XLW(2,N))
ENDIF
PCGLX = MAX( PCGLX,1.D-20 )
PCGLX = MIN( PCGLX,1.D+20 )
!
!--- Phase-volumetric concentration ratios ---
!
FCLB = 1.D+0/(SVSB + SVLB + SVNB/PCLN(1,NSL)
& + SVGB*PCGLX)
FCGB = 1.D+0/((SVSB + SVLB + SVNB)/PCGLX + SVGB)
FCNB = 1.D+0/((SVSB + SVLB + SVGB*PCGLX)*PCLN(1,NSL) + SVNB)
!
!--- Phase mole fractions ---
!
YLB(NB,NSL) = SVLB*FCLB
YGB(NB,NSL) = SVGB*FCGB
YNB(NB,NSL) = SVNB*FCNB
!
!--- Convert boundary concentrations ---
!
IF( IBCT(NBCT,NB).EQ.8 .OR. IBCT(NBCT,NB).EQ.14
& .OR. IBCT(NBCT,NB).EQ.23 ) THEN
BCX(1) = BCX(1)/(FCLB+SMALL)
ELSEIF( IBCT(NBCT,NB).EQ.9 .OR. IBCT(NBCT,NB).EQ.15 ) THEN
BCX(1) = BCX(1)/(FCGB+SMALL)
ELSEIF( IBCT(NBCT,NB).EQ.10 .OR. IBCT(NBCT,NB).EQ.16 ) THEN
BCX(1) = BCX(1)/(FCNB+SMALL)
ENDIF
ELSE
!
!--- Skip for initial condition type boundary condition ---
!
IF( IBCT(NBCT,NB).NE.12 ) THEN
!
!--- Convert species concentrations to total-component
! concentrations ---
!
IF( NSL.LE.NSOLU+NEQC ) THEN
NEQ = NSL-NSOLU
DO 130 NSP = 1,IEQ_C(1,NEQ)
DO 120 NSPX = 1,IBCSP(1,NB)
IF( IBCSP(NSPX+1,NB).EQ.IEQ_C(NSP+1,NEQ) ) THEN
BCX(1) = BCX(1) + EQ_C(NSP,NEQ)*BCX(NSPX+1)
ENDIF
120 CONTINUE
130 CONTINUE
!
!--- Convert species concentrations to total-kinetic
! concentrations ---
!
ELSEIF( NSL.LE.NSOLU+NEQC+NEQK ) THEN
NEQ = NSL-NSOLU-NEQC
DO 150 NSP = 1,IEQ_K(1,NEQ)
DO 140 NSPX = 1,IBCSP(1,NB)
IF( IBCSP(NSPX+1,NB).EQ.IEQ_K(NSP+1,NEQ) ) THEN
BCX(1) = BCX(1) + EQ_K(NSP,NEQ)*BCX(NSPX+1)
ENDIF
140 CONTINUE
150 CONTINUE
ENDIF
ENDIF
SVLB = SLB(2,NB)*PORDB(2,NB)
YLB(NB,NSL) = 1.D+0
FCLB = 0.D+0
IF( SVLB.GT.SMALL ) FCLB = YLB(NB,NSL)/SVLB
!
!--- Convert boundary phase concentrations to
! volumetric concentrations ---
!
IF( IBCT(NBCT,NB).EQ.8 .OR. IBCT(NBCT,NB).EQ.14
& .OR. IBCT(NBCT,NB).EQ.23 ) THEN
BCX(1) = BCX(1)*SVLB
ENDIF
ENDIF
!
!--- Diffusion coefficients at nodes adjacent to boundaries ---
!
IF( IEDL(NSL).EQ.1 ) THEN
TCOR = (T(2,N)+TABS)/TSPRF
SDFLP = SMDL(NSL)*TCOR*(VISRL/VISL(2,N))
DLP = TORL(2,N)*SL(2,N)*PORD(2,N)*SDFLP
ELSEIF( IEDL(NSL).EQ.2 ) THEN
DLP = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& EXP(SL(2,N)*PORD(2,N)*SDCL(3,IZN,NSL))
ELSEIF( IEDL(NSL).EQ.3 ) THEN
DLP = TORL(2,N)*SL(2,N)*PORD(2,N)*SMDL(NSL)
ELSEIF( IEDL(NSL).EQ.4 ) THEN
DLP = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& (SL(2,N)*PORD(2,N))**SDCL(3,IZN,NSL)
ENDIF
!
!--- Bottom boundary ---
!
IF( IBCD(NB).EQ.-3 ) THEN
NPZ = NSZ(N)
!
!--- Hydraulic dispersion
!
IF( IDISP.EQ.1 ) THEN
CALL ADVBB( PORD(2,N),PORDB(2,NB),SL(2,N),SLB(2,NB),
& UL,VL,WL,ULBX,VLBX,WLBX,N,MF )
CALL SHDP( WLBX,ULBX,VLBX,DISPL(IZN),DISPT(IZN),DPLB )
! ULBX = (0.5D+0*(UL(1,NSX(N))+UL(1,NSX(N)+1)))**2
! VLBX = (0.5D+0*(VL(1,NSY(N))+VL(1,NSY(N)+IFLD)))**2
! WLBX = (WL(1,NPZ))**2
! ZLB = SQRT(ULBX + VLBX + WLBX)
! DPLB = (DISPL(IZN)*WLBX + DISPT(IZN)*(ULBX+VLBX))/
! & (ZLB+SMALL)
DPLB = DPLB*SMDEF(IZN,NSL)
ELSE
DPLB = 0.D+0
ENDIF
!
!--- Dirichlet ---
!
IF( IBCT(NBCT,NB).EQ.1 .OR. IBCT(NBCT,NB).EQ.8 .OR.
& IBCT(NBCT,NB).EQ.9 .OR. IBCT(NBCT,NB).EQ.10 .OR.
& IBCT(NBCT,NB).EQ.12 ) THEN
IF( IEDL(NSL).EQ.1 ) THEN
TCOR = (TB(2,NB)+TABS)/TSPRF
SDFLB = SMDL(NSL)*TCOR*(VISRL/VISLB(2,NB))
DLB = TORLB(2,NB)*SVLB*SDFLB
ELSEIF( IEDL(NSL).EQ.2 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& EXP(SVLB*SDCL(3,IZN,NSL))
ELSEIF( IEDL(NSL).EQ.3 ) THEN
DLB = TORLB(2,NB)*SVLB*SMDL(NSL)
ELSEIF( IEDL(NSL).EQ.4 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& SVLB**SDCL(3,IZN,NSL)
ENDIF
INDX = 16
DLZ = DIFMN(DLB,DLP,DZGF(N),DZGF(N),WL(1,NPZ),INDX)
DLZ = (DLZ+DPLB)/(5.D-1*DZGF(N))
IF( ISLC(1).GE.1 .AND. KFLD.GT.1 ) THEN
AB = DLZ
AP = DLZ
ELSE
AB = MAX( WL(1,NPZ),ZERO ) +
& DLZ*MAX((ONE-(TENTH*ABS(WL(1,NPZ))/(DLZ+SMALL)))**5,ZERO)
AP = MAX( -WL(1,NPZ),ZERO ) +
& DLZ*MAX((ONE-(TENTH*ABS(WL(1,NPZ))/(DLZ+SMALL)))**5,ZERO)
ENDIF
WC(NPZ,NSL) = WC(NPZ,NSL)+(BCX(1)*AB*FCLB-C(N,NSL)*AP*FCLP)
!
!--- Outflow ---
!
ELSEIF( IBCT(NBCT,NB).EQ.7 .OR.
& ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (WL(1,NPZ)/EPSL.LT.-EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. KFLD.GT.1 ) THEN
AP = 0.D+0
ELSE
AP = MAX( -WL(1,NPZ),ZERO )
ENDIF
WC(NPZ,NSL) = WC(NPZ,NSL) - C(N,NSL)*AP*FCLP
!
!--- Inflow ---
!
ELSEIF( IBCT(NBCT,NB).GE.13 .AND. IBCT(NBCT,NB).LE.16
& .OR. ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (WL(1,NPZ)/EPSL.GT.EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. KFLD.GT.1 ) THEN
AB = 0.D+0
ELSE
AB = MAX( WL(1,NPZ),ZERO )
ENDIF
WC(NPZ,NSL) = WC(NPZ,NSL) + BCX(1)*AB*FCLB
ENDIF
!
!--- South boundary ---
!
ELSEIF( IBCD(NB).EQ.-2 ) THEN
NPY = NSY(N)
!
!--- Hydraulic dispersion
!
IF( IDISP.EQ.1 ) THEN
CALL ADVSB( PORD(2,N),PORDB(2,NB),SL(2,N),SLB(2,NB),
& UL,VL,WL,ULSX,VLSX,WLSX,N,MF )
CALL SHDP( VLSX,WLSX,ULSX,DISPL(IZN),DISPT(IZN),DPLS )
! ULSX = (0.5D+0*(UL(1,NSX(N))+UL(1,NSX(N)+1)))**2
! VLSX = VL(1,NPY)**2
! WLSX = (0.5D+0*(WL(1,NSZ(N))+WL(1,NSZ(N)+IJFLD)))**2
! ZLS = SQRT(ULSX + VLSX + WLSX)
! DPLS = (DISPL(IZN)*VLSX + DISPT(IZN)*(ULSX+WLSX))/
! & (ZLS+SMALL)
DPLS = DPLS*SMDEF(IZN,NSL)
ELSE
DPLS = 0.D+0
ENDIF
!
!--- Dirichlet ---
!
IF( IBCT(NBCT,NB).EQ.1 .OR. IBCT(NBCT,NB).EQ.8 .OR.
& IBCT(NBCT,NB).EQ.9 .OR. IBCT(NBCT,NB).EQ.10 .OR.
& IBCT(NBCT,NB).EQ.12 ) THEN
IF( IEDL(NSL).EQ.1 ) THEN
TCOR = (TB(2,NB)+TABS)/TSPRF
SDFLB = SMDL(NSL)*TCOR*(VISRL/VISLB(2,NB))
DLB = TORLB(2,NB)*SVLB*SDFLB
ELSEIF( IEDL(NSL).EQ.2 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& EXP(SVLB*SDCL(3,IZN,NSL))
ELSEIF( IEDL(NSL).EQ.3 ) THEN
DLB = TORLB(2,NB)*SVLB*SMDL(NSL)
ELSEIF( IEDL(NSL).EQ.4 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& SVLB**SDCL(3,IZN,NSL)
ENDIF
INDX = 16
DLY = DIFMN(DLB,DLP,DYGF(N),DYGF(N),VL(1,NPY),INDX)
DLY = (DLY+DPLS)/RP(I)/(5.D-1*DYGF(N))
IF( ISLC(1).GE.1 .AND. JFLD.GT.1 ) THEN
AS = DLY
AP = DLY
ELSE
AS = MAX( VL(1,NPY),ZERO ) +
& DLY*MAX((ONE-(TENTH*ABS(VL(1,NPY))/(DLY+SMALL)))**5,ZERO)
AP = MAX( -VL(1,NPY),ZERO ) +
& DLY*MAX((ONE-(TENTH*ABS(VL(1,NPY))/(DLY+SMALL)))**5,ZERO)
ENDIF
VC(NPY,NSL) = VC(NPY,NSL)+(BCX(1)*AS*FCLB-C(N,NSL)*AP*FCLP)
!
!--- Outflow ---
!
ELSEIF( IBCT(NBCT,NB).EQ.7 .OR.
& ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (VL(1,NPY)/EPSL.LT.-EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. JFLD.GT.1 ) THEN
AP = 0.D+0
ELSE
AP = MAX( -VL(1,NPY),ZERO )
ENDIF
VC(NPY,NSL) = VC(NPY,NSL) - C(N,NSL)*AP*FCLP
!
!--- Inflow ---
!
ELSEIF( IBCT(NBCT,NB).GE.13 .AND. IBCT(NBCT,NB).LE.16
& .OR. ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (VL(1,NPY)/EPSL.GT.EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. JFLD.GT.1 ) THEN
AS = 0.D+0
ELSE
AS = MAX( VL(1,NPY),ZERO )
ENDIF
VC(NPY,NSL) = VC(NPY,NSL) - BCX(1)*AS*FCLB
ENDIF
!
!--- West boundary ---
!
ELSEIF( IBCD(NB).EQ.-1 ) THEN
NPX = NSX(N)
!
!--- Hydraulic dispersion
!
IF( IDISP.EQ.1 ) THEN
CALL ADVWB( PORD(2,N),PORDB(2,NB),SL(2,N),SLB(2,NB),
& UL,VL,WL,ULX,VLX,WLX,N,MF )
CALL SHDP( ULX,VLX,WLX,DISPL(IZN),DISPT(IZN),DPLW )
! ULX = UL(1,NPX)**2
! VLX = (0.5D+0*(VL(1,NSY(N))+VL(1,NSY(N)+IFLD)))**2
! WLX = (0.5D+0*(WL(1,NSZ(N))+WL(1,NSZ(N)+IJFLD)))**2
! ZLW = SQRT(ULX + VLX + WLX)
! DPLW = (DISPL(IZN)*ULX + DISPT(IZN)*(WLX+VLX))/
! & (ZLW+SMALL)
DPLW = DPLW*SMDEF(IZN,NSL)
ELSE
DPLW = 0.D+0
ENDIF
!
!--- Dirichlet ---
!
IF( IBCT(NBCT,NB).EQ.1 .OR. IBCT(NBCT,NB).EQ.8 .OR.
& IBCT(NBCT,NB).EQ.9 .OR. IBCT(NBCT,NB).EQ.10 .OR.
& IBCT(NBCT,NB).EQ.12 ) THEN
IF( IEDL(NSL).EQ.1 ) THEN
TCOR = (TB(2,NB)+TABS)/TSPRF
SDFLB = SMDL(NSL)*TCOR*(VISRL/VISLB(2,NB))
DLB = TORLB(2,NB)*SVLB*SDFLB
ELSEIF( IEDL(NSL).EQ.2 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& EXP(SVLB*SDCL(3,IZN,NSL))
ELSEIF( IEDL(NSL).EQ.3 ) THEN
DLB = TORLB(2,NB)*SVLB*SMDL(NSL)
ELSEIF( IEDL(NSL).EQ.4 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& SVLB**SDCL(3,IZN,NSL)
ENDIF
INDX = 16
DLX = DIFMN(DLB,DLP,DXGF(N),DXGF(N),UL(1,NPX),INDX)
DLX = (DLX+DPLW)/(5.D-1*DXGF(N))
IF( ISLC(1).GE.1 .AND. IFLD.GT.1 ) THEN
AW = DLX
AP = DLX
ELSE
AW = MAX( UL(1,NPX),ZERO ) +
& DLX*MAX((ONE-(TENTH*ABS(UL(1,NPX))/(DLX+SMALL)))**5,ZERO)
AP = MAX( -UL(1,NPX),ZERO ) +
& DLX*MAX((ONE-(TENTH*ABS(UL(1,NPX))/(DLX+SMALL)))**5,ZERO)
ENDIF
UC(NPX,NSL) = UC(NPX,NSL)+(BCX(1)*AW*FCLB-C(N,NSL)*AP*FCLP)
!
!--- Outflow ---
!
ELSEIF( IBCT(NBCT,NB).EQ.7 .OR.
& ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (UL(1,NPX)/EPSL.LT.-EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. IFLD.GT.1 ) THEN
AP = 0.D+0
ELSE
AP = MAX( -UL(1,NPX),ZERO )
ENDIF
UC(NPX,NSL) = UC(NPX,NSL) - C(N,NSL)*AP*FCLP
!
!--- Inflow ---
!
ELSEIF( IBCT(NBCT,NB).GE.13 .AND. IBCT(NBCT,NB).LE.16
& .OR. ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (UL(1,NPX)/EPSL.GT.EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. IFLD.GT.1 ) THEN
AW = 0.D+0
ELSE
AW = MAX( UL(1,NPX),ZERO )
ENDIF
UC(NPX,NSL) = UC(NPX,NSL) + BCX(1)*AW*FCLB
ENDIF
!
!--- East boundary ---
!
ELSEIF( IBCD(NB).EQ.1 ) THEN
NQX = NSX(N)+1
!
!--- Hydraulic dispersion
!
IF( IDISP.EQ.1 ) THEN
CALL ADVEB( PORD(2,N),PORDB(2,NB),SL(2,N),SLB(2,NB),
& UL,VL,WL,ULEX,VLEX,WLEX,N,MF )
CALL SHDP( ULEX,VLEX,WLEX,DISPL(IZN),DISPT(IZN),DPLE )
! ULEX = UL(1,NQX)**2
! VLEX = (0.5D+0*(VL(1,NSY(N))+VL(1,NSY(N)+IFLD)))**2
! WLEX = (0.5D+0*(WL(1,NSZ(N))+WL(1,NSZ(N)+IJFLD)))**2
! ZLE = SQRT(ULEX + VLEX + WLEX)
! DPLE = (DISPL(IZN)*ULEX + DISPT(IZN)*(WLEX+VLEX))/
! & (ZLE+SMALL)
DPLE = DPLE*SMDEF(IZN,NSL)
ELSE
DPLE = 0.D+0
ENDIF
!
!--- Dirichlet ---
!
IF( IBCT(NBCT,NB).EQ.1 .OR. IBCT(NBCT,NB).EQ.8 .OR.
& IBCT(NBCT,NB).EQ.9 .OR. IBCT(NBCT,NB).EQ.10 .OR.
& IBCT(NBCT,NB).EQ.12 ) THEN
TCOR = (TB(2,NB)+TABS)/TSPRF
SDFLB = SMDL(NSL)*TCOR*(VISRL/VISLB(2,NB))
IF( IEDL(NSL).EQ.1 ) THEN
TCOR = (TB(2,NB)+TABS)/TSPRF
SDFLB = SMDL(NSL)*TCOR*(VISRL/VISLB(2,NB))
DLB = TORLB(2,NB)*SVLB*SDFLB
ELSEIF( IEDL(NSL).EQ.2 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& EXP(SVLB*SDCL(3,IZN,NSL))
ELSEIF( IEDL(NSL).EQ.3 ) THEN
DLB = TORLB(2,NB)*SVLB*SMDL(NSL)
ELSEIF( IEDL(NSL).EQ.4 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& SVLB**SDCL(3,IZN,NSL)
ENDIF
INDX = 16
DLX = DIFMN(DLP,DLB,DXGF(N),DXGF(N),UL(1,NQX),INDX)
DLX = (DLX+DPLE)/(5.D-1*DXGF(N))
IF( ISLC(1).GE.1 .AND. IFLD.GT.1 ) THEN
AE = DLX
AP = DLX
ELSE
AE = MAX( -UL(1,NQX),ZERO ) +
& DLX*MAX((ONE-(TENTH*ABS(UL(1,NQX))/(DLX+SMALL)))**5,ZERO)
AP = MAX( UL(1,NQX),ZERO ) +
& DLX*MAX((ONE-(TENTH*ABS(UL(1,NQX))/(DLX+SMALL)))**5,ZERO)
ENDIF
UC(NQX,NSL) = UC(NQX,NSL)+(C(N,NSL)*AP*FCLP-BCX(1)*AE*FCLB)
!
!--- Outflow ---
!
ELSEIF( IBCT(NBCT,NB).EQ.7 .OR.
& ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (UL(1,NQX)/EPSL.GT.EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. IFLD.GT.1 ) THEN
AP = 0.D+0
ELSE
AP = MAX( UL(1,NQX),ZERO )
ENDIF
UC(NQX,NSL) = UC(NQX,NSL) + C(N,NSL)*AP*FCLP
!
!--- Inflow ---
!
ELSEIF( IBCT(NBCT,NB).GE.13 .AND. IBCT(NBCT,NB).LE.16
& .OR. ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (UL(1,NQX)/EPSL.LT.-EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. IFLD.GT.1 ) THEN
AE = 0.D+0
ELSE
AE = MAX( -UL(1,NQX),ZERO )
ENDIF
UC(NQX,NSL) = UC(NQX,NSL) - BCX(1)*AE*FCLB
ENDIF
!
!--- North boundary ---
!
ELSEIF( IBCD(NB).EQ.2 ) THEN
NQY = NSY(N)+IFLD
!
!--- Hydraulic dispersion
!
IF( IDISP.EQ.1 ) THEN
CALL ADVNB( PORD(2,N),PORDB(2,NB),SL(2,N),SLB(2,NB),
& UL,VL,WL,ULNX,VLNX,WLNX,N,MF )
CALL SHDP( VLNX,WLNX,ULNX,DISPL(IZN),DISPT(IZN),DPLN )
! ULNX = (0.5D+0*(UL(1,NSX(N))+UL(1,NSX(N)+1)))**2
! VLNX = VL(1,NQY)**2
! WLNX = (0.5D+0*(WL(1,NSZ(N))+WL(1,NSZ(N)+IJFLD)))**2
! ZLN = SQRT(ULNX + VLNX + WLNX)
! DPLN = (DISPL(IZN)*VLNX + DISPT(IZN)*(ULNX+WLNX))/
! & (ZLN+SMALL)
DPLN = DPLN*SMDEF(IZN,NSL)
ELSE
DPLN = 0.D+0
ENDIF
!
!--- Dirichlet ---
!
IF( IBCT(NBCT,NB).EQ.1 .OR. IBCT(NBCT,NB).EQ.8 .OR.
& IBCT(NBCT,NB).EQ.9 .OR. IBCT(NBCT,NB).EQ.10 .OR.
& IBCT(NBCT,NB).EQ.12 ) THEN
TCOR = (TB(2,NB)+TABS)/TSPRF
SDFLB = SMDL(NSL)*TCOR*(VISRL/VISLB(2,NB))
IF( IEDL(NSL).EQ.1 ) THEN
TCOR = (TB(2,NB)+TABS)/TSPRF
SDFLB = SMDL(NSL)*TCOR*(VISRL/VISLB(2,NB))
DLB = TORLB(2,NB)*SVLB*SDFLB
ELSEIF( IEDL(NSL).EQ.2 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& EXP(SVLB*SDCL(3,IZN,NSL))
ELSEIF( IEDL(NSL).EQ.3 ) THEN
DLB = TORLB(2,NB)*SVLB*SMDL(NSL)
ELSEIF( IEDL(NSL).EQ.4 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& SVLB**SDCL(3,IZN,NSL)
ENDIF
INDX = 16
DLY = DIFMN(DLP,DLB,DYGF(N),DYGF(N),VL(1,NQY),INDX)
DLY = (DLY+DPLN)/RP(I)/(5.D-1*DYGF(N))
IF( ISLC(1).GE.1 .AND. JFLD.GT.1 ) THEN
AN = DLY
AP = DLY
ELSE
AN = MAX( -VL(1,NQY),ZERO ) +
& DLY*MAX((ONE-(TENTH*ABS(VL(1,NQY))/(DLY+SMALL)))**5,ZERO)
AP = MAX( VL(1,NQY),ZERO ) +
& DLY*MAX((ONE-(TENTH*ABS(VL(1,NQY))/(DLY+SMALL)))**5,ZERO)
ENDIF
VC(NQY,NSL) = VC(NQY,NSL)+(C(N,NSL)*AP*FCLP-BCX(1)*AN*FCLB)
!
!--- Outflow ---
!
ELSEIF( IBCT(NBCT,NB).EQ.7 .OR.
& ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (VL(1,NQY)/EPSL.GT.EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. JFLD.GT.1 ) THEN
AP = 0.D+0
ELSE
AP = MAX( VL(1,NQY),ZERO )
ENDIF
VC(NQY,NSL) = VC(NQY,NSL) + C(N,NSL)*AP*FCLP
!
!--- Inflow ---
!
ELSEIF( IBCT(NBCT,NB).GE.13 .AND. IBCT(NBCT,NB).LE.16
& .OR. ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (VL(1,NQY)/EPSL.LT.-EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. JFLD.GT.1 ) THEN
AN = 0.D+0
ELSE
AN = MAX( -VL(1,NQY),ZERO )
ENDIF
VC(NQY,NSL) = VC(NQY,NSL) - BCX(1)*AN*FCLB
ENDIF
!
!--- Top boundary
!
ELSEIF( IBCD(NB).EQ.3 ) THEN
NQZ = NSZ(N)+IJFLD
!
!--- Hydraulic dispersion
!
IF( IDISP.EQ.1 ) THEN
CALL ADVTB( PORD(2,N),PORDB(2,NB),SL(2,N),SLB(2,NB),
& UL,VL,WL,ULTX,VLTX,WLTX,N,MF )
CALL SHDP( WLTX,ULTX,VLTX,DISPL(IZN),DISPT(IZN),DPLT )
! ULTX = (0.5D+0*(UL(1,NSX(N))+UL(1,NSX(N)+1)))**2
! VLTX = (0.5D+0*(VL(1,NSY(N))+VL(1,NSY(N)+IFLD)))**2
! WLTX = (WL(1,NQZ))**2
! ZLT = SQRT(ULTX + VLTX + WLTX)
! DPLT = (DISPL(IZN)*WLTX + DISPT(IZN)*(ULTX+VLTX))/
! & (ZLT+SMALL)
DPLT = DPLT*SMDEF(IZN,NSL)
ELSE
DPLT = 0.D+0
ENDIF
!
!--- Dirichlet ---
!
IF( IBCT(NBCT,NB).EQ.1 .OR. IBCT(NBCT,NB).EQ.8 .OR.
& IBCT(NBCT,NB).EQ.9 .OR. IBCT(NBCT,NB).EQ.10 .OR.
& IBCT(NBCT,NB).EQ.12 ) THEN
TCOR = (TB(2,NB)+TABS)/TSPRF
SDFLB = SMDL(NSL)*TCOR*(VISRL/VISLB(2,NB))
IF( IEDL(NSL).EQ.1 ) THEN
TCOR = (TB(2,NB)+TABS)/TSPRF
SDFLB = SMDL(NSL)*TCOR*(VISRL/VISLB(2,NB))
DLB = TORLB(2,NB)*SVLB*SDFLB
ELSEIF( IEDL(NSL).EQ.2 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& EXP(SVLB*SDCL(3,IZN,NSL))
ELSEIF( IEDL(NSL).EQ.3 ) THEN
DLB = TORLB(2,NB)*SVLB*SMDL(NSL)
ELSEIF( IEDL(NSL).EQ.4 ) THEN
DLB = SDCL(1,IZN,NSL)*SDCL(2,IZN,NSL)*
& SVLB**SDCL(3,IZN,NSL)
ENDIF
INDX = 16
DLZ = DIFMN(DLP,DLB,DZGF(N),DZGF(N),WL(1,NQZ),INDX)
DLZ = (DLZ+DPLT)/(5.D-1*DZGF(N))
IF( ISLC(1).GE.1 .AND. KFLD.GT.1 ) THEN
AT = DLZ
AP = DLZ
ELSE
AT = MAX( -WL(1,NQZ),ZERO ) +
& DLZ*MAX((ONE-(TENTH*ABS(WL(1,NQZ))/(DLZ+SMALL)))**5,ZERO)
AP = MAX( WL(1,NQZ),ZERO ) +
& DLZ*MAX((ONE-(TENTH*ABS(WL(1,NQZ))/(DLZ+SMALL)))**5,ZERO)
ENDIF
WC(NQZ,NSL) = WC(NQZ,NSL)+(C(N,NSL)*AP*FCLP-BCX(1)*AT*FCLB)
!
!--- Outflow ---
!
ELSEIF( IBCT(NBCT,NB).EQ.7 .OR.
& ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (WL(1,NQZ)/EPSL.GT.EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. KFLD.GT.1 ) THEN
AP = 0.D+0
ELSE
AP = MAX( WL(1,NQZ),ZERO )
ENDIF
WC(NQZ,NSL) = WC(NQZ,NSL) + C(N,NSL)*AP*FCLP
!
!--- Inflow ---
!
ELSEIF( IBCT(NBCT,NB).GE.13 .AND. IBCT(NBCT,NB).LE.16
& .OR. ((IBCT(NBCT,NB).EQ.19 .OR. IBCT(NBCT,NB).EQ.23)
& .AND. (WL(1,NQZ)/EPSL.LT.-EPSL)) ) THEN
IF( ISLC(1).GE.1 .AND. KFLD.GT.1 ) THEN
AT = 0.D+0
ELSE
AT = MAX( -WL(1,NQZ),ZERO )
ENDIF
WC(NQZ,NSL) = WC(NQZ,NSL) - BCX(1)*AT*FCLB
ENDIF
ENDIF
200 CONTINUE
!
!--- End of SFXLB group ---
!
ISUB_LOG = ISUB_LOG-1
RETURN
END
|
The latest album from the three-time Eurovision Song Contest winner was released in September in Denmark, Sweden and Norway and made it to the charts in all three countries. At the end of January, Irishman in America will be released in Germany, Austria and Switzerland.
Irishman in America was recorded in Denmark, Germany, Ireland and the USA and was first released in the three Scandinavian countries, where Johnny Logan is quite a big name and always makes it to the charts. With this album, however, he didn't manage to reach the top positions he is used to. The album spent four weeks on the Danish album Top40, where it peaked at #10. In Norway it peaked at #20 with five weeks on the Top40. In Sweden it had only three weeks on the album Top60 and peaked at #24. With the coming release in Germany, Austria and Switzerland he gets another chance to make it to the absolute top.
A few things are different on this album. Johnny Logan's three sons, Adam (29), Fionn (24) and Jack (16), all sing the chorus on the title song Irishman in America. Johnny Logan himself worked as producer on the whole album, which is a first for him.
|
Couples Counselling in Fort Saskatchewan, AB: Therapy That Works.
Great Couples Counselling in Fort Saskatchewan, AB. Therapy to heal relationships.
Licensed therapists for relationships and marriages in Fort Saskatchewan, Alberta. Discounts available (see profiles).
You Have Found The Best Couples Counsellors in Fort Saskatchewan, AB. Restore Your Relationship.
Thank you for visiting our Alberta search of licensed therapists for couples in Fort Saskatchewan who specialize in helping relationships and marriages heal and overcome division and hurt. Relationships are hard. Unless you are actively working towards coming together, by default you are drifting apart. It takes work. A licensed therapist is an expert at helping to untangle the mess that can develop when things go wrong. Find honest and effective couples counselling in Fort Saskatchewan and renew your relationship. |
Require Import compcert.lib.Coqlib.
Require Import compcert.lib.Maps.
Require Import compcert.lib.Integers.
Require Import compcert.lib.Axioms.
Require Import compcert.common.Events.
Require Import compcert.common.Memory.
Require Import compcert.common.Values.
Require Import compcert.common.AST.
Require Import compcert.common.Globalenvs.
Require Import sepcomp.mem_lemmas.
Require Import sepcomp.semantics.
Require Import sepcomp.semantics_lemmas.
Require Import sepcomp.wholeprog_simulations.
Require Import sepcomp.closed_safety.
Require Import sepcomp.effect_semantics.
Import Wholeprog_sim.
Arguments match_state : default implicits.
Arguments core_halted : default implicits.
Arguments core_data : default implicits.
Arguments core_ord : default implicits.
Arguments core_ord_wf : default implicits.
Arguments core_diagram : default implicits.
(** * Safety and semantics preservation *)
Section safety_preservation_lemmas.
Context {G TG C D M TM Z data : Type}
{source : @CoreSemantics G C M}
{target : @CoreSemantics TG D TM}
{geS : G}
{geT : TG}
{ge_inv : G -> TG -> Prop}
{init_inv : meminj -> G -> list val -> M -> TG -> list val -> TM -> Prop}
{halt_inv : meminj (*structured_injections.SM_Injection*) ->
G -> val -> M -> TG -> val -> TM -> Prop}
(main : val)
(sim : Wholeprog_sim source target geS geT main ge_inv init_inv halt_inv)
(c : C)
(d : D)
(m : M)
(tm: TM)
(TGT_DET : corestep_fun target)
(source_safe : forall n, safeN source geS n c m).
Definition my_P := fun (x: core_data sim) =>
forall j (c : C) (d : D) (m : M) (tm : TM),
(forall n : nat, safeN source geS n c m) ->
match_state sim x j c m d tm ->
(exists rv : val, halted source c = Some rv) \/
(exists (cd' : core_data sim) j' (c' : C) (m' : M),
corestep_plus source geS c m c' m' /\
((exists (d' : D) (tm' : TM),
corestep_plus target geT d tm d' tm' /\
match_state sim cd' j' c' m' d' tm') \/
(exists rv : val,
halted source c' = Some rv
/\ match_state sim cd' j' c' m' d tm))).
Lemma corestep_ord:
forall cd j,
match_state sim cd j c m d tm ->
(exists rv, halted source c = Some rv) \/
(exists cd' j' c' m',
corestep_plus source geS c m c' m'
/\ ((exists d' tm', corestep_plus target geT d tm d' tm'
/\ match_state sim cd' j' c' m' d' tm')
\/ (exists rv, halted source c' = Some rv
/\ match_state sim cd' j' c' m' d tm))).
Proof.
intros.
revert j c d m tm source_safe H.
assert (my_well_founded_induction
: (forall x, (forall y, core_ord sim y x -> my_P y) -> my_P x) ->
forall a, my_P a).
{ apply well_founded_induction; auto. apply (core_ord_wf sim). }
unfold my_P in my_well_founded_induction.
apply my_well_founded_induction; auto.
intros.
case_eq (halted source c).
intros.
solve[left; exists v; auto].
intros HALTED_NONE.
right.
generalize H0 as SAFE; intro.
specialize (H0 (S O)); simpl in H0.
rewrite HALTED_NONE in H0.
destruct H0 as [H0 HALL].
destruct H0 as [c2 [m2 STEP]].
generalize STEP as STEP'; intro.
eapply core_diagram in STEP; eauto.
destruct STEP as [d2 [tm2 [cd2 [j2 [? H2]]]]].
destruct H2 as [H2|H2]. exists cd2, j2, c2, m2. split; auto.
exists O; simpl; exists c2, m2; split; auto.
solve[left; exists d2, tm2; split; auto].
destruct H2 as [H2 ORD].
specialize (H _ ORD j2 c2 d2 m2 tm2).
assert (SAFE': forall n, safeN source geS n c2 m2).
solve[intros n; eapply safe_corestep_forward; eauto].
specialize (H SAFE' H0).
destruct H2 as [n H2].
destruct n. inv H2.
destruct H.
destruct H as [rv HALTED].
exists cd2, j2, c2, m2.
split; auto.
solve[exists O; simpl; exists c2, m2; split; auto].
solve[right; exists rv; split; auto].
destruct H as [cd' [j' [c' [m' [STEPN H]]]]].
destruct H as [H|H].
destruct H as [d' [tm' [TSTEP' MATCH']]].
exists cd', j', c', m'.
split; auto.
destruct STEPN as [n STEPN].
exists (S n).
simpl.
exists c2, m2.
split; auto.
solve[left; exists d', tm'; split; auto].
destruct H as [rv [HALT MATCH']].
exists cd', j', c', m'.
split; auto.
destruct STEPN as [n STEPN].
exists (S n).
simpl.
exists c2, m2.
split; auto.
solve[right; exists rv; split; auto].
exists cd2, j2, c2, m2.
split; auto.
solve[exists O; simpl; exists c2, m2; split; auto].
left.
exists d2, tm2.
split; auto.
exists n; auto.
Qed.
Definition halt_match c d :=
exists rv trv,
halted source c = Some rv
/\ halted target d = Some trv.
Lemma corestep_ord':
forall cd j,
match_state sim cd j c m d tm ->
halt_match c d
\/ (exists cd' j' c' m',
corestep_plus source geS c m c' m'
/\ ((match_state sim cd' j' c' m' d tm /\ halt_match c' d)
\/ (exists d' tm',
corestep_plus target geT d tm d' tm'
/\ match_state sim cd' j' c' m' d' tm'))).
Proof.
intros.
generalize H as MATCH; intro.
apply corestep_ord in H.
destruct H.
{
destruct H as [rv HALT].
left.
unfold halt_match.
generalize HALT as HALT'; intro.
apply (core_halted sim cd j c m d tm) in HALT; auto.
destruct HALT as [j' [rv' [INJ HALT]]].
exists rv, rv'.
split; auto.
}
{
destruct H as [cd' [j' [c' [m' [STEPN ?]]]]].
destruct H as [H|H].
destruct H as [d' [tm' [TSTEPN MATCH']]].
right.
exists cd', j', c', m'.
split; auto.
right.
exists d', tm'.
solve[split; auto].
destruct H as [rv [HALT MATCH']].
right.
exists cd', j', c', m'.
split; auto.
left.
split; auto.
unfold halt_match.
generalize HALT as HALT'; intro.
apply (core_halted sim cd' j' c' m' d tm) in HALT; auto.
destruct HALT as [j'' [rv' [INJ HALT]]].
exists rv, rv'.
split; auto.
}
Qed.
End safety_preservation_lemmas.
Lemma corestepN_splits_lt
{G C M} (csem : CoreSemantics G C M) (ge : G)
c m c' m' c'' m'' n1 n2 :
corestep_fun csem ->
corestepN csem ge (S n1) c m c' m' ->
corestepN csem ge n2 c m c'' m'' ->
(n1 < n2)%nat ->
exists a b,
(a > O)%nat
/\ (b=0 -> S n1=n2)%nat
/\ n2 = plus a b
/\ corestepN csem ge a c m c' m'
/\ corestepN csem ge b c' m' c'' m''.
Proof.
intros FN H1 H2 LT.
revert c m n1 H1 H2 LT.
induction n2; intros.
destruct n1; try inv LT.
destruct n1.
destruct H1 as [c2' [m2' [STEP STEPN]]].
inv STEPN.
exists (S O), n2.
split; try omega.
split; try omega.
destruct H2 as [c2'' [m2'' [STEP' STEPN']]].
destruct (FN _ _ _ _ _ _ _ STEP STEP').
subst c2'' m2''.
split; auto.
split; auto.
exists c',m'.
split; simpl; auto.
assert (n1 < n2)%nat by omega.
destruct H1 as [c2 [m2 [STEP STEPN]]].
destruct H2 as [c2' [m2' [STEP' STEPN']]].
assert (c2'=c2 /\ m2=m2') as [? ?].
{ destruct (FN _ _ _ _ _ _ _ STEP STEP').
subst c2 m2; split; auto. }
subst c2' m2; auto.
destruct (IHn2 c2 m2' n1); auto.
destruct H0 as [n1' [H0 [H1 [H2 [H3 H4]]]]].
exists (S x), n1'.
split; auto.
split. omega.
split; auto. omega.
split; auto.
exists c2,m2'.
split; auto.
Qed.
(** ** Equitermination *)
Definition terminates {G C M} (csem : CoreSemantics G C M)
(ge : G) (c : C) (m : M) :=
exists c' m', corestep_star csem ge c m c' m'
/\ exists v, halted csem c' = Some v.
Section termination_preservation.
Context {G TG C D M TM Z data : Type}
{source : @CoreSemantics G C M}
{target : @CoreSemantics TG D TM}
{geS : G}
{geT : TG}
{ge_inv : G -> TG -> Prop}
{init_inv : meminj -> G -> list val -> M -> TG -> list val -> TM -> Prop}
{halt_inv : meminj (*structured_injections.SM_Injection *)->
G -> val -> M -> TG -> val -> TM -> Prop}
(main : val)
(sim : Wholeprog_sim source target geS geT main ge_inv init_inv halt_inv).
Lemma termination_preservation:
forall cd c m d tm j c' m' rv1,
match_state sim cd j c m d tm ->
corestep_star source geS c m c' m' ->
halted source c' = Some rv1 ->
terminates target geT d tm.
Proof.
intros.
destruct H0 as [n H0].
revert cd j c m d tm H H0.
induction n; intros.
simpl in H0. symmetry in H0; inv H0.
cut (@halt_match G _ C D _ _ source target c d). intro.
unfold halt_match in H0.
destruct H0 as [rv [trv [? ?]]].
exists d, tm; split; auto.
solve[exists O; simpl; auto].
solve[exists trv; auto].
generalize H1 as H1'; intro.
eapply core_halted in H1; eauto.
destruct H1 as [? [rv2 [? ?]]].
exists rv1, rv2; split; auto.
simpl in H0.
destruct H0 as [c2 [m2 [STEP STEPN]]].
generalize STEP as STEP'; intro.
apply corestep_not_halted in STEP.
eapply core_diagram in STEP'; eauto.
destruct STEP' as [? [? [cd' [j' [MATCH ?]]]]].
clear H.
destruct H0 as [X|[X Y]].
eapply IHn in MATCH; eauto.
unfold terminates in MATCH|-*.
destruct MATCH as [x' [tm' [Y [v W]]]].
exists x', tm'; split; eauto.
eapply corestep_star_trans; eauto.
solve[eapply corestep_plus_star; eauto].
eapply IHn in MATCH; eauto.
unfold terminates in MATCH|-*.
destruct MATCH as [x' [tm' [U [v W]]]].
exists x', tm'; split; eauto.
solve[eapply corestep_star_trans; eauto].
Qed.
End termination_preservation.
Section equitermination.
Context {G TG C D M TM Z data : Type}
{source : @CoreSemantics G C M}
{target : @CoreSemantics TG D TM}
{geS : G}
{geT : TG}
{ge_inv : G -> TG -> Prop}
{init_inv : meminj -> G -> list val -> M -> TG -> list val -> TM -> Prop}
{halt_inv : meminj (*structured_injections.SM_Injection*) ->
G -> val -> M -> TG -> val -> TM -> Prop}
(main : val)
(sim : Wholeprog_sim source target geS geT main ge_inv init_inv halt_inv)
(TGT_DET : corestep_fun target).
Lemma termination_reflection:
forall n c m d tm cd j d' tm' hv'
(source_safe : forall n, safeN source geS n c m),
match_state sim cd j c m d tm ->
corestepN target geT n d tm d' tm' ->
halted target d' = Some hv' ->
terminates source geS c m.
Proof.
set (my_P := fun (n : nat) =>
forall (c : C) (m : M) (d : D) (tm : TM)
(cd : core_data sim) (j : meminj (*structured_injections.SM_Injection*))
(d' : D) (tm' : TM) (hv' : val),
(forall n0 : nat, safeN source geS n0 c m) ->
match_state sim cd j c m d tm ->
corestepN target geT n d tm d' tm' ->
halted target d' = Some hv' -> terminates source geS c m).
apply (@well_founded_induction _ _ lt_wf my_P); auto.
unfold my_P; clear my_P; intros n IH.
intros c m d tm cd j d' tm' hv' safe MATCH TSTEPN HLT2.
apply (corestep_ord' main sim c d m tm) in MATCH; auto.
destruct MATCH as [HLT|H].
destruct HLT as [rv [_ [HLT _]]]; exists c,m. split.
exists O; simpl; auto. solve[exists rv; auto].
destruct H as [cd' [j' [c2 [m2 [STEPN H]]]]]. destruct H as [[H H2]|H].
destruct H2 as [rv [_ [HLT _]]].
destruct STEPN as [n2 STEPN].
exists c2,m2. split; auto. exists (S n2); simpl; auto. solve[exists rv; auto].
destruct H as [d2 [tm2 [[n2 TSTEPN2] MTCH2]]].
destruct (lt_dec n2 n) as [pf|pf].
{ assert (TSTEPN': exists n2', (n2' < n)%nat
/\ corestepN target geT n2' d2 tm2 d' tm').
{ destruct (corestepN_splits_lt target geT d tm d2 tm2 d' tm' n2 n TGT_DET
TSTEPN2 TSTEPN pf)
as [a [b [alt [neq [X [Y U]]]]]].
exists b; split; auto. omega. }
destruct TSTEPN' as [n2' [ltpf TSTEPN']].
assert (safe': forall n, safeN source geS n c2 m2).
{ destruct STEPN as [q STEPN].
intros n0; eapply safe_corestepN_forward; eauto. }
destruct (IH n2' ltpf c2 m2 d2 tm2 cd' j' d' tm' hv' safe' MTCH2 TSTEPN' HLT2)
as [c0 [m0 [[q STEPN1] [hv0 HLT1]]]].
destruct STEPN as [q' STEPN]. exists c0,m0; split; auto.
eapply corestep_star_trans. exists (S q'); eauto. exists q; auto.
exists hv0; auto. }
{ assert (lt: (n < S n2)%nat) by omega. clear pf.
destruct n. inv TSTEPN. simpl in TSTEPN2.
destruct TSTEPN2 as [? [? [X _]]]; apply corestep_not_halted in X.
rewrite X in HLT2; congruence.
destruct (corestepN_splits_lt target geT d tm d' tm' d2 tm2 n (S n2) TGT_DET
TSTEPN TSTEPN2)
as [a [b [alt [neq [X [Y W]]]]]]. omega.
destruct b. rewrite neq in lt; auto. omega.
simpl in W. destruct W as [? [? [W _]]].
apply corestep_not_halted in W; rewrite W in HLT2; congruence. }
Qed.
Lemma equitermination:
forall cd c m d tm j
(source_safe : forall n, safeN source geS n c m),
match_state sim cd j c m d tm ->
(terminates source geS c m <-> terminates target geT d tm).
Proof.
intros; split; intros [? [? [A [? B]]]].
eapply termination_preservation; eauto.
destruct A as [n A]; eapply termination_reflection; eauto.
Qed.
End equitermination.
|
Formal statement is: lemma pairwise_orthogonal_imp_finite: fixes S :: "'a::euclidean_space set" assumes "pairwise orthogonal S" shows "finite S" Informal statement is: In a Euclidean space, any set of pairwise orthogonal vectors is finite. |
(* The Call-by-Value Lambda Calculus *)
theory Lt
imports "../../Nominal2"
begin
atom_decl name
nominal_datatype lt =
Var name ("_~" [150] 149)
| App lt lt (infixl "$$" 100)
| Lam x::"name" t::"lt" binds x in t
nominal_function
subst :: "lt \<Rightarrow> name \<Rightarrow> lt \<Rightarrow> lt" ("_ [_ ::= _]" [90, 90, 90] 90)
where
"(Var x)[y ::= s] = (if x = y then s else (Var x))"
| "(App t1 t2)[y ::= s] = App (t1[y ::= s]) (t2[y ::= s])"
| "atom x \<sharp> (y, s) \<Longrightarrow> (Lam x t)[y ::= s] = Lam x (t[y ::= s])"
unfolding eqvt_def subst_graph_aux_def
apply (simp)
apply(rule TrueI)
using [[simproc del: alpha_lst]]
apply(auto simp add: lt.distinct lt.eq_iff)
apply(rule_tac y="a" and c="(aa, b)" in lt.strong_exhaust)
apply blast
apply(simp_all add: fresh_star_def fresh_Pair_elim)
apply (erule_tac c="(ya,sa)" in Abs_lst1_fcb2)
apply(simp add: Abs_fresh_iff)
apply(simp add: fresh_star_def fresh_Pair)
apply(simp add: eqvt_at_def)
apply(simp add: perm_supp_eq fresh_star_Pair)
apply(simp add: eqvt_at_def)
apply(simp add: perm_supp_eq fresh_star_Pair)
done
nominal_termination (eqvt) by lexicographic_order
lemma forget[simp]:
shows "atom x \<sharp> M \<Longrightarrow> M[x ::= s] = M"
by (nominal_induct M avoiding: x s rule: lt.strong_induct)
(auto simp add: lt.fresh fresh_at_base)
nominal_function
isValue:: "lt => bool"
where
"isValue (Var x) = True"
| "isValue (Lam y N) = True"
| "isValue (A $$ B) = False"
unfolding eqvt_def isValue_graph_aux_def
by (auto)
(erule lt.exhaust, auto)
nominal_termination (eqvt)
by (relation "measure size") (simp_all)
inductive
eval :: "[lt, lt] \<Rightarrow> bool" (" _ \<longrightarrow>\<^isub>\<beta> _" [80,80] 80)
where
evbeta: "\<lbrakk>atom x \<sharp> V; isValue V\<rbrakk> \<Longrightarrow> ((Lam x M) $$ V) \<longrightarrow>\<^isub>\<beta> (M[x ::= V])"
| ev1: "\<lbrakk>isValue V; M \<longrightarrow>\<^isub>\<beta> M' \<rbrakk> \<Longrightarrow> (V $$ M) \<longrightarrow>\<^isub>\<beta> (V $$ M')"
| ev2: "M \<longrightarrow>\<^isub>\<beta> M' \<Longrightarrow> (M $$ N) \<longrightarrow>\<^isub>\<beta> (M' $$ N)"
equivariance eval
nominal_inductive eval
done
(*lemmas [simp] = lt.supp(2)*)
lemma closedev1: assumes "s \<longrightarrow>\<^isub>\<beta> t"
shows "supp t <= supp s"
using assms
by (induct, auto simp add: lt.supp)
lemma [simp]: "~ ((Lam x M) \<longrightarrow>\<^isub>\<beta> N)"
by (rule, erule eval.cases, simp_all)
lemma [simp]: assumes "M \<longrightarrow>\<^isub>\<beta> N" shows "~ isValue M"
using assms
by (cases, auto)
inductive
eval_star :: "[lt, lt] \<Rightarrow> bool" (" _ \<longrightarrow>\<^isub>\<beta>\<^sup>* _" [80,80] 80)
where
evs1: "M \<longrightarrow>\<^isub>\<beta>\<^sup>* M"
| evs2: "\<lbrakk>M \<longrightarrow>\<^isub>\<beta> M'; M' \<longrightarrow>\<^isub>\<beta>\<^sup>* M'' \<rbrakk> \<Longrightarrow> M \<longrightarrow>\<^isub>\<beta>\<^sup>* M''"
lemma eval_evs: assumes *: "M \<longrightarrow>\<^isub>\<beta> M'" shows "M \<longrightarrow>\<^isub>\<beta>\<^sup>* M'"
by (rule evs2, rule *, rule evs1)
lemma eval_trans[trans]:
assumes "M1 \<longrightarrow>\<^isub>\<beta>\<^sup>* M2"
and "M2 \<longrightarrow>\<^isub>\<beta>\<^sup>* M3"
shows "M1 \<longrightarrow>\<^isub>\<beta>\<^sup>* M3"
using assms
by (induct, auto intro: evs2)
lemma evs3[rule_format]: assumes "M1 \<longrightarrow>\<^isub>\<beta>\<^sup>* M2"
shows "M2 \<longrightarrow>\<^isub>\<beta> M3 \<longrightarrow> M1 \<longrightarrow>\<^isub>\<beta>\<^sup>* M3"
using assms
by (induct, auto intro: eval_evs evs2)
equivariance eval_star
lemma evbeta':
fixes x :: name
assumes "isValue V" and "atom x\<sharp>V" and "N = (M[x ::= V])"
shows "((Lam x M) $$ V) \<longrightarrow>\<^isub>\<beta>\<^sup>* N"
using assms by simp (rule evs2, rule evbeta, simp_all add: evs1)
end
|
from SharedProcessors.const import SI_TRIALS, LINE_WIDTH, SUB_NAMES, FONT_SIZE, FONT_DICT, TRIAL_NAMES
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.lines as lines
from sklearn.metrics import mean_squared_error
def format_plot():
mpl.rcParams['hatch.linewidth'] = LINE_WIDTH # previous svg hatch linewidth
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.xaxis.set_tick_params(width=LINE_WIDTH)
ax.yaxis.set_tick_params(width=LINE_WIDTH)
ax.spines['left'].set_linewidth(LINE_WIDTH)
ax.spines['bottom'].set_linewidth(LINE_WIDTH)
def format_errorbar_cap(caplines):
for i_cap in range(1):
caplines[i_cap].set_marker('_')
caplines[i_cap].set_markersize(25)
caplines[i_cap].set_markeredgewidth(LINE_WIDTH)
def save_fig(name, dpi=600):
plt.savefig('exports/' + name + '.png', dpi=dpi)
def load_step_data(result_date, test_name):
all_df = pd.read_csv('result_conclusion/{}/step_result/main{}.csv'.format(result_date, test_name))
si_true, si_pred = {}, {}
for sub_name in SUB_NAMES:
si_true_sub, si_pred_sub = [], []
sub_id = SUB_NAMES.index(sub_name)
sub_df = all_df[all_df['subject id'] == sub_id]
for i_trial in list(set(sub_df['trial id'])):
trial_df = sub_df[sub_df['trial id'] == i_trial]
si_true_sub.append(trial_df['true SI'].values)
si_pred_sub.append(trial_df['predicted SI'].values)
si_true[sub_name], si_pred[sub_name] = si_true_sub, si_pred_sub
return si_true, si_pred
def metric_sub_mean(result_date, test_name, metric_fun):
si_true, si_pred = load_step_data(result_date, test_name)
metric_sub = []
for sub_name in list(si_true.keys()):
si_true_sub, si_pred_sub = si_true[sub_name], si_pred[sub_name]
metric_trial = [metric_fun(si_true_trial, si_pred_trial) for si_true_trial, si_pred_trial in zip(si_true_sub, si_pred_sub)]
metric_sub.append(np.mean(metric_trial))
return metric_sub
def rmse_fun(true, pred):
    # root mean square error between the true and predicted SI values
    return np.sqrt(mean_squared_error(true, pred))
def cohen_d(x, y):
    # Cohen's d effect size, using the pooled standard deviation of the two samples
    return (np.mean(x) - np.mean(y)) / np.sqrt((np.std(x, ddof=1) ** 2 + np.std(y, ddof=1) ** 2) / 2.0)
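# Minimal usage sketch (not part of the original pipeline): report a per-subject
# RMSE with the helpers above. The result date '200101' and test name '_test'
# below are hypothetical placeholders for whatever folders actually exist under
# result_conclusion/.
if __name__ == '__main__':
    rmse_per_sub = metric_sub_mean('200101', '_test', rmse_fun)
    print('mean SI RMSE across subjects: {:.3f} +/- {:.3f}'.format(
        np.mean(rmse_per_sub), np.std(rmse_per_sub)))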
|
from feature.feature import EmptyFeature
import numpy as np
class Close(EmptyFeature):
def __init__(self, raw_data_manager, history_lengh=None):
super().__init__(1, raw_data_manager, history_lengh=history_lengh)
    def compute(self, data_dict):
        # pass the raw close prices through unchanged as a single feature column
        close = data_dict.get('close')
        return np.array(close, dtype=object)
|
\thispagestyle{empty}
\chapter*{Acknowledgments}
%Supervisors
I would like to thank my first supervisor and institute director Prof. Frank Kirchner for giving me the opportunity to work on this interesting topic at the DFKI Robotics Innovation Center. Then, I would like to thank my second supervisor Dr. Tobias Bruckmann for providing his continuous guidance and constructive feedback during this thesis.
%Mentors
I would like to express my deepest gratitude to my main mentor and team leader Shivesh for his guidance, the numerous inspiring discussions we had throughout the course of this work, and his infectious enthusiasm for everything related to dynamics and many more surrounding topics. Special thanks also goes to Carlos and Olivier who supported me with many ideas and critical but constructive questions related to numerical optimization and humanoid robotics, respectively.
%Special thanks to my work colleagues Vinzenz Bargsten, Heiner Peters and Daniel Harnack for the fruitful collaboration.
%Friends, Family, monique
Finally, I also want to thank all my friends I met along the way, in the Lower Rhine area, Duisburg/Essen, Dortmund, %M{\"u}nchen,
Oberpfaffenhofen, Bremen and elsewhere. You have each made this journey an unforgettable experience. Last but not least, I want to thank my family (especially my mother) and my girlfriend Monique (one of the bravest people I have ever met) for being my supportive stronghold; I am forever grateful to have you in my life.
\bigskip
\begin{flushright}
{\Large Julian} \\
%Julian \\
\bigskip
September 18, 2020
\end{flushright}
|
C Fortran version
C use-def chains with if/else
C     no dependence between the if and else cases has to be computed
PROGRAM IF01F
INTEGER R, A
R = -1
IF (RAND(0) .LE. 1) THEN
R = 1
ELSE
R = 0
END IF
A = R
RETURN
END
C***********************************************************************
FUNCTION RAND(N)
C***********************************************************************
IMPLICIT DOUBLE PRECISION(A-H,O-Z)
COMMON /RANDNO/ R3(127),R1,I2
SAVE S, T, RMC
SAVE IW
DATA S, T, RMC /0.0D0, 1.0D0, 1.0D0/
DATA IW /-1/
C
IF(R1.LT.1.0D0.AND.N.EQ.0) GO TO 60
IF(IW.GT.0) GO TO 30
10 IW=IW+1
T=T*0.5D0
R1=S
S=S+T
IF(S.GT.R1.AND.S.LT.1.0D0) GO TO 10
IKT=(IW-1)/12
IC=IW-12*IKT
ID=2**(13-IC)
DO 20 I=1,IC
20 RMC=0.5D0*RMC
RM=0.015625D0*0.015625D0
30 I2=127
IR=MOD(ABS(N),8190)+1
40 R1=0.0D0
DO 50 I=1,IKT
IR=MOD(17*IR,8191)
50 R1=(R1+REAL(IR/2))*RM
IR=MOD(17*IR,8191)
R1=(R1+REAL(IR/ID))*RMC
R3(I2)=R1
I2=I2-1
IF(I2.GT.0) GO TO 40
60 IF(I2.EQ.0) I2=127
T=R1+R3(I2)
IF(T.GE.1.0D0) T=(R1-0.5D0)+(R3(I2)-0.5D0)
R1=T
R3(I2)=R1
I2=I2-1
RAND=R1
RETURN
END
|
The light cruiser Komintern collided with her in May 1932, shortly after her commissioning, and badly damaged her bow. The bow was extensively rebuilt, which increased her overall length by over 11 metres (36 ft). In 1933 she made port visits in Turkey, Greece and Italy.
|
[STATEMENT]
lemma connecting_path_walk: "connecting_path u v xs \<Longrightarrow> connecting_walk u v xs"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. connecting_path u v xs \<Longrightarrow> connecting_walk u v xs
[PROOF STEP]
unfolding connecting_path_def connecting_walk_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. is_gen_path xs \<and> hd xs = u \<and> last xs = v \<Longrightarrow> is_walk xs \<and> hd xs = u \<and> last xs = v
[PROOF STEP]
using is_gen_path_def
[PROOF STATE]
proof (prove)
using this:
is_gen_path ?p \<equiv> is_walk ?p \<and> (distinct (tl ?p) \<and> hd ?p = last ?p \<or> distinct ?p)
goal (1 subgoal):
1. is_gen_path xs \<and> hd xs = u \<and> last xs = v \<Longrightarrow> is_walk xs \<and> hd xs = u \<and> last xs = v
[PROOF STEP]
by auto |
{-# OPTIONS --universe-polymorphism #-}
open import Categories.Category
module Categories.Object.SubobjectClassifier {o ℓ e} (C : Category o ℓ e) where
open Category C
open import Level
open import Categories.Object.Terminal
open import Categories.Morphisms
open import Categories.Pullback
record SubobjectClassifier : Set (o ⊔ ℓ ⊔ e) where
field
Ω : Obj
χ : ∀ {U X} → (j : U ⇒ X) → (X ⇒ Ω)
terminal : Terminal C
open Terminal terminal
field
⊤⇒Ω : ⊤ ⇒ Ω
.j-pullback : ∀ {U X} → (j : U ⇒ X) → Mono C j → Pullback C ⊤⇒Ω (χ j)
.χ-unique : ∀ {U X} → (j : U ⇒ X) → (χ′ : X ⇒ Ω) → Mono C j → Pullback C ⊤⇒Ω χ′ → χ′ ≡ χ j
|
The species has a cosmopolitan distribution except for arctic, alpine and cold temperate regions; it is common in temperate and tropical regions of the world. It has been collected in Africa, Asia, Australia, Europe, North America, and South America.
|
using MPI
using Oceananigans.BoundaryConditions: fill_halo_regions!
using Oceananigans.Distributed: index2rank, east_halo, west_halo, north_halo, south_halo, top_halo, bottom_halo
# Right now just testing with 4 ranks!
comm = MPI.COMM_WORLD
mpi_ranks = MPI.Comm_size(comm)
@assert mpi_ranks == 4
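# This file is meant to be launched under MPI with exactly 4 ranks, for example
# with something along the lines of `mpiexec -n 4 julia --project <this file>`
# (the exact launcher and project flags are an assumption, not taken from this file).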
#####
##### Multi architectures and rank connectivity
#####
function test_triply_periodic_rank_connectivity_with_411_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(4, 1, 1))
local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
@test local_rank == index2rank(arch.local_index..., arch.ranks...)
connectivity = arch.connectivity
# No communication in y and z.
@test isnothing(connectivity.south)
@test isnothing(connectivity.north)
@test isnothing(connectivity.top)
@test isnothing(connectivity.bottom)
# +---+---+---+---+
# | 0 | 1 | 2 | 3 |
# +---+---+---+---+
if local_rank == 0
@test connectivity.east == 1
@test connectivity.west == 3
elseif local_rank == 1
@test connectivity.east == 2
@test connectivity.west == 0
elseif local_rank == 2
@test connectivity.east == 3
@test connectivity.west == 1
elseif local_rank == 3
@test connectivity.east == 0
@test connectivity.west == 2
end
return nothing
end
function test_triply_periodic_rank_connectivity_with_141_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(1, 4, 1))
local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
@test local_rank == index2rank(arch.local_index..., arch.ranks...)
connectivity = arch.connectivity
# No communication in x and z.
@test isnothing(connectivity.east)
@test isnothing(connectivity.west)
@test isnothing(connectivity.top)
@test isnothing(connectivity.bottom)
# +---+
# | 3 |
# +---+
# | 2 |
# +---+
# | 1 |
# +---+
# | 0 |
# +---+
if local_rank == 0
@test connectivity.north == 1
@test connectivity.south == 3
elseif local_rank == 1
@test connectivity.north == 2
@test connectivity.south == 0
elseif local_rank == 2
@test connectivity.north == 3
@test connectivity.south == 1
elseif local_rank == 3
@test connectivity.north == 0
@test connectivity.south == 2
end
return nothing
end
function test_triply_periodic_rank_connectivity_with_114_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(1, 1, 4))
local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
@test local_rank == index2rank(arch.local_index..., arch.ranks...)
connectivity = arch.connectivity
# No communication in x and y.
@test isnothing(connectivity.east)
@test isnothing(connectivity.west)
@test isnothing(connectivity.north)
@test isnothing(connectivity.south)
# /---/
# / 3 /
# /---/
# /---/
# / 2 /
# /---/
# /---/
# / 1 /
# /---/
# /---/
# / 0 /
# /---/
if local_rank == 0
@test connectivity.top == 1
@test connectivity.bottom == 3
elseif local_rank == 1
@test connectivity.top == 2
@test connectivity.bottom == 0
elseif local_rank == 2
@test connectivity.top == 3
@test connectivity.bottom == 1
elseif local_rank == 3
@test connectivity.top == 0
@test connectivity.bottom == 2
end
return nothing
end
function test_triply_periodic_rank_connectivity_with_221_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(2, 2, 1))
local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
@test local_rank == index2rank(arch.local_index..., arch.ranks...)
connectivity = arch.connectivity
# No communication in z.
@test isnothing(connectivity.top)
@test isnothing(connectivity.bottom)
# +---+---+
# | 0 | 2 |
# +---+---+
# | 1 | 3 |
# +---+---+
if local_rank == 0
@test connectivity.east == 2
@test connectivity.west == 2
@test connectivity.north == 1
@test connectivity.south == 1
elseif local_rank == 1
@test connectivity.east == 3
@test connectivity.west == 3
@test connectivity.north == 0
@test connectivity.south == 0
elseif local_rank == 2
@test connectivity.east == 0
@test connectivity.west == 0
@test connectivity.north == 3
@test connectivity.south == 3
elseif local_rank == 3
@test connectivity.east == 1
@test connectivity.west == 1
@test connectivity.north == 2
@test connectivity.south == 2
end
return nothing
end
#####
##### Local grids for distributed models
#####
function test_triply_periodic_local_grid_with_411_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(4, 1, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
local_grid = model.grid
nx, ny, nz = size(local_grid)
@test local_grid.xF[1] == 0.25*local_rank
@test local_grid.xF[nx+1] == 0.25*(local_rank+1)
@test local_grid.yF[1] == 0
@test local_grid.yF[ny+1] == 2
@test local_grid.zF[1] == -3
@test local_grid.zF[nz+1] == 0
return nothing
end
function test_triply_periodic_local_grid_with_141_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(1, 4, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
local_grid = model.grid
nx, ny, nz = size(local_grid)
@test local_grid.xF[1] == 0
@test local_grid.xF[nx+1] == 1
@test local_grid.yF[1] == 0.5*local_rank
@test local_grid.yF[ny+1] == 0.5*(local_rank+1)
@test local_grid.zF[1] == -3
@test local_grid.zF[nz+1] == 0
return nothing
end
function test_triply_periodic_local_grid_with_114_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(1, 1, 4))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
local_rank = MPI.Comm_rank(MPI.COMM_WORLD)
local_grid = model.grid
nx, ny, nz = size(local_grid)
@test local_grid.xF[1] == 0
@test local_grid.xF[nx+1] == 1
@test local_grid.yF[1] == 0
@test local_grid.yF[ny+1] == 2
@test local_grid.zF[1] == -3 + 0.75*local_rank
@test local_grid.zF[nz+1] == -3 + 0.75*(local_rank+1)
return nothing
end
function test_triply_periodic_local_grid_with_221_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(2, 2, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
i, j, k = arch.local_index
local_grid = model.grid
nx, ny, nz = size(local_grid)
@test local_grid.xF[1] == 0.5*(i-1)
@test local_grid.xF[nx+1] == 0.5*i
@test local_grid.yF[1] == j-1
@test local_grid.yF[ny+1] == j
@test local_grid.zF[1] == -3
@test local_grid.zF[nz+1] == 0
return nothing
end
#####
##### Injection of halo communication BCs
#####
function test_triply_periodic_bc_injection_with_411_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(4, 1, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
for field in merge(fields(model), model.pressures)
fbcs = field.boundary_conditions
@test fbcs.east isa HaloCommunicationBC
@test fbcs.west isa HaloCommunicationBC
@test !isa(fbcs.north, HaloCommunicationBC)
@test !isa(fbcs.south, HaloCommunicationBC)
@test !isa(fbcs.top, HaloCommunicationBC)
@test !isa(fbcs.bottom, HaloCommunicationBC)
end
end
function test_triply_periodic_bc_injection_with_141_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(1, 4, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
for field in merge(fields(model), model.pressures)
fbcs = field.boundary_conditions
@test !isa(fbcs.east, HaloCommunicationBC)
@test !isa(fbcs.west, HaloCommunicationBC)
@test fbcs.north isa HaloCommunicationBC
@test fbcs.south isa HaloCommunicationBC
@test !isa(fbcs.top, HaloCommunicationBC)
@test !isa(fbcs.bottom, HaloCommunicationBC)
end
end
function test_triply_periodic_bc_injection_with_114_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(1, 1, 4))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
for field in merge(fields(model), model.pressures)
fbcs = field.boundary_conditions
@test !isa(fbcs.east, HaloCommunicationBC)
@test !isa(fbcs.west, HaloCommunicationBC)
@test !isa(fbcs.north, HaloCommunicationBC)
@test !isa(fbcs.south, HaloCommunicationBC)
@test fbcs.top isa HaloCommunicationBC
@test fbcs.bottom isa HaloCommunicationBC
end
end
function test_triply_periodic_bc_injection_with_221_ranks()
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(2, 2, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
for field in merge(fields(model), model.pressures)
fbcs = field.boundary_conditions
@test fbcs.east isa HaloCommunicationBC
@test fbcs.west isa HaloCommunicationBC
@test fbcs.north isa HaloCommunicationBC
@test fbcs.south isa HaloCommunicationBC
@test !isa(fbcs.top, HaloCommunicationBC)
@test !isa(fbcs.bottom, HaloCommunicationBC)
end
end
#####
##### Halo communication
#####
function test_triply_periodic_halo_communication_with_411_ranks(halo)
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(16, 6, 4), extent=(1, 2, 3), halo=halo)
arch = MultiCPU(grid=full_grid, ranks=(4, 1, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
for field in merge(fields(model), model.pressures)
interior(field) .= arch.local_rank
fill_halo_regions!(field, arch)
@test all(east_halo(field, include_corners=false) .== arch.connectivity.east)
@test all(west_halo(field, include_corners=false) .== arch.connectivity.west)
@test all(interior(field) .== arch.local_rank)
@test all(north_halo(field, include_corners=false) .== arch.local_rank)
@test all(south_halo(field, include_corners=false) .== arch.local_rank)
@test all(top_halo(field, include_corners=false) .== arch.local_rank)
@test all(bottom_halo(field, include_corners=false) .== arch.local_rank)
end
return nothing
end
function test_triply_periodic_halo_communication_with_141_ranks(halo)
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(4, 16, 4), extent=(1, 2, 3), halo=halo)
arch = MultiCPU(grid=full_grid, ranks=(1, 4, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
for field in merge(fields(model), model.pressures)
interior(field) .= arch.local_rank
fill_halo_regions!(field, arch)
@test all(north_halo(field, include_corners=false) .== arch.connectivity.north)
@test all(south_halo(field, include_corners=false) .== arch.connectivity.south)
@test all(interior(field) .== arch.local_rank)
@test all(east_halo(field, include_corners=false) .== arch.local_rank)
@test all(west_halo(field, include_corners=false) .== arch.local_rank)
@test all(top_halo(field, include_corners=false) .== arch.local_rank)
@test all(bottom_halo(field, include_corners=false) .== arch.local_rank)
end
return nothing
end
function test_triply_periodic_halo_communication_with_114_ranks(halo)
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(4, 4, 16), extent=(1, 2, 3), halo=halo)
arch = MultiCPU(grid=full_grid, ranks=(1, 1, 4))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
for field in merge(fields(model), model.pressures)
interior(field) .= arch.local_rank
fill_halo_regions!(field, arch)
@test all(top_halo(field, include_corners=false) .== arch.connectivity.top)
@test all(bottom_halo(field, include_corners=false) .== arch.connectivity.bottom)
@test all(interior(field) .== arch.local_rank)
@test all(east_halo(field, include_corners=false) .== arch.local_rank)
@test all(west_halo(field, include_corners=false) .== arch.local_rank)
@test all(north_halo(field, include_corners=false) .== arch.local_rank)
@test all(south_halo(field, include_corners=false) .== arch.local_rank)
end
return nothing
end
function test_triply_periodic_halo_communication_with_221_ranks(halo)
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 3), extent=(1, 2, 3), halo=halo)
arch = MultiCPU(grid=full_grid, ranks=(2, 2, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid, pressure_solver=nothing)
for field in merge(fields(model), model.pressures)
interior(field) .= arch.local_rank
fill_halo_regions!(field, arch)
@test all(east_halo(field, include_corners=false) .== arch.connectivity.east)
@test all(west_halo(field, include_corners=false) .== arch.connectivity.west)
@test all(north_halo(field, include_corners=false) .== arch.connectivity.north)
@test all(south_halo(field, include_corners=false) .== arch.connectivity.south)
@test all(interior(field) .== arch.local_rank)
@test all(top_halo(field, include_corners=false) .== arch.local_rank)
@test all(bottom_halo(field, include_corners=false) .== arch.local_rank)
end
return nothing
end
#####
##### Run tests!
#####
@testset "Distributed MPI Oceananigans" begin
@info "Testing distributed MPI Oceananigans..."
@testset "Multi architectures rank connectivity" begin
@info " Testing multi architecture rank connectivity..."
test_triply_periodic_rank_connectivity_with_411_ranks()
test_triply_periodic_rank_connectivity_with_141_ranks()
test_triply_periodic_rank_connectivity_with_114_ranks()
test_triply_periodic_rank_connectivity_with_221_ranks()
end
@testset "Local grids for distributed models" begin
@info " Testing local grids for distributed models..."
test_triply_periodic_local_grid_with_411_ranks()
test_triply_periodic_local_grid_with_141_ranks()
test_triply_periodic_local_grid_with_114_ranks()
test_triply_periodic_local_grid_with_221_ranks()
end
@testset "Injection of halo communication BCs" begin
@info " Testing injection of halo communication BCs..."
test_triply_periodic_bc_injection_with_411_ranks()
test_triply_periodic_bc_injection_with_141_ranks()
test_triply_periodic_bc_injection_with_114_ranks()
test_triply_periodic_bc_injection_with_221_ranks()
end
@testset "Halo communication" begin
@info " Testing halo communication..."
for H in 1:3
test_triply_periodic_halo_communication_with_411_ranks((H, H, H))
test_triply_periodic_halo_communication_with_141_ranks((H, H, H))
test_triply_periodic_halo_communication_with_114_ranks((H, H, H))
test_triply_periodic_halo_communication_with_221_ranks((H, H, H))
end
end
@testset "Time stepping IncompressibleModel" begin
topo = (Periodic, Periodic, Periodic)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 8), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(1, 4, 1))
model = DistributedIncompressibleModel(architecture=arch, grid=full_grid)
time_step!(model, 1)
@test model isa IncompressibleModel
@test model.clock.time ≈ 1
simulation = Simulation(model, Δt=1, stop_iteration=2)
run!(simulation)
@test model isa IncompressibleModel
@test model.clock.time ≈ 2
end
@testset "Time stepping ShallowWaterModel" begin
topo = (Periodic, Periodic, Bounded)
full_grid = RegularRectilinearGrid(topology=topo, size=(8, 8, 1), extent=(1, 2, 3))
arch = MultiCPU(grid=full_grid, ranks=(1, 4, 1))
model = DistributedShallowWaterModel(architecture=arch, grid=full_grid, gravitational_acceleration=1)
set!(model, h=model.grid.Lz)
time_step!(model, 1)
@test model isa ShallowWaterModel
@test model.clock.time ≈ 1
simulation = Simulation(model, Δt=1, stop_iteration=2)
run!(simulation)
@test model isa ShallowWaterModel
@test model.clock.time ≈ 2
end
end
|
get.fishery.stats.by.region = function( Reg="cfaall", y=NULL ) {
landings = landings.db()
if (is.null(y)) y = sort(unique(landings$yr) )
out = data.frame( yr=y )
if (Reg=="cfaall") region = sort( unique(landings$cfa) ) # all data
if (Reg=="cfanorth") region = c("cfa20", "cfa21", "cfa22", "cfanorth", "north")
if (Reg=="cfasouth") region = c("cfa23", "cfa24", "cfasouth", "cfaslope")
if (Reg=="cfa4x") region = "cfa4x"
lnd = landings[ which(landings$cfa %in% region) ,]
l = aggregate( lnd$landings, list(yr=lnd$yr), function(x) sum(x, na.rm=T))
names(l) = c("yr", "landings")
out = merge(out, l, by="yr", all.x=T, all.y=F, sort=T)
lnd$cpue_direct = lnd$landings / lnd$effort
lnd$cpue_direct[ which( lnd$cpue_direct > (650*0.454)) ] = NA # same rule as in landings.db -- 650lbs/trap is a reasonable upper limit
    cpue = aggregate( lnd$cpue_direct, list(yr=lnd$yr), function(x) mean(x, na.rm=T))
names(cpue) = c("yr", "cpue")
out = merge (out, cpue, by="yr", all.x=T, all.y=F, sort=T)
out$effort = out$landings / out$cpue ## estimate effort level as direct estimates are underestimates (due to improper logbook records)
rownames(out) = out$yr
return(out)
}
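# Example usage (hypothetical call; requires the landings.db() data to be available):
#   north = get.fishery.stats.by.region(Reg="cfanorth")
#   head(north)   # yr, landings, cpue, effort by year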
|
Formal statement is: lemma distr_cong_AE: assumes 1: "M = K" "sets N = sets L" and 2: "(AE x in M. f x = g x)" and "f \<in> measurable M N" and "g \<in> measurable K L" shows "distr M N f = distr K L g" Informal statement is: If the measures $M$ and $K$ are equal, the target spaces $N$ and $L$ have the same measurable sets, and $f$ and $g$ are measurable functions that agree almost everywhere on $M$, then the pushforward measures $\operatorname{distr}\, M\, N\, f$ and $\operatorname{distr}\, K\, L\, g$ are equal. |
(* Title: HOL/Auth/n_flash_lemma_on_inv__7.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The n_flash Protocol Case Study*}
theory n_flash_lemma_on_inv__7 imports n_flash_base
begin
section{*All lemmas on causal relation between inv__7 and some rule r*}
lemma n_PI_Remote_PutXVsinv__7:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_PI_Remote_PutX dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_PI_Remote_PutX dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Remote_ReplaceVsinv__7:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_Replace src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_Replace src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_Nak_HomeVsinv__7:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Nak_Home dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Get_Nak_Home dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_PutVsinv__7:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_Put_HomeVsinv__7:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Put_Home dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Get_Put_Home dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_Nak_HomeVsinv__7:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_Nak_Home dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_GetX_Nak_Home dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_PutXVsinv__7:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_PutX_HomeVsinv__7:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_PutX_Home dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_GetX_PutX_Home dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') p__Inv4) ''CacheState'')) (Const CACHE_E)) (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') dst) ''CacheState'')) (Const CACHE_E))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_PutVsinv__7:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Put dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Put dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "((formEval (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') p__Inv4) ''InvMarked'')) (Const true)) s))\<or>((formEval (neg (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') p__Inv4) ''InvMarked'')) (Const true))) s))" by auto
moreover {
assume c1: "((formEval (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') p__Inv4) ''InvMarked'')) (Const true)) s))"
have "?P1 s"
proof(cut_tac a1 a2 b1 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume c1: "((formEval (neg (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') p__Inv4) ''InvMarked'')) (Const true))) s))"
have "?P1 s"
proof(cut_tac a1 a2 b1 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately have "invHoldForRule s f r (invariants N)" by satx
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_PutXVsinv__7:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_PutX dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_PutX dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeUniMsg'') ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_InvVsinv__7:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Inv dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Inv dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Local_Get_GetVsinv__7:
assumes a1: "(r=n_PI_Local_Get_Get )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_GetX__part__0Vsinv__7:
assumes a1: "(r=n_PI_Local_GetX_GetX__part__0 )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_GetX__part__1Vsinv__7:
assumes a1: "(r=n_PI_Local_GetX_GetX__part__1 )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Nak_HomeVsinv__7:
assumes a1: "(r=n_NI_Nak_Home )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Local_PutVsinv__7:
assumes a1: "(r=n_NI_Local_Put )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Local_PutXAcksDoneVsinv__7:
assumes a1: "(r=n_NI_Local_PutXAcksDone )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__7 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Local_Get_Get__part__1Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_GetVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_PI_Remote_Get src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_9__part__0Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX__part__0Vsinv__7:
assumes a1: "r=n_PI_Local_GetX_PutX__part__0 " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_WbVsinv__7:
assumes a1: "r=n_NI_Wb " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_StoreVsinv__7:
assumes a1: "\<exists> src data. src\<le>N\<and>data\<le>N\<and>r=n_Store src data" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_5Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_GetX__part__1Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_3Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_3 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_8_Home_NODE_GetVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__1Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_1Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_1 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_Store_HomeVsinv__7:
assumes a1: "\<exists> data. data\<le>N\<and>r=n_Store_Home data" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_3Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_ReplaceVsinv__7:
assumes a1: "r=n_PI_Local_Replace " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_Nak__part__1Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_Get_Nak__part__1Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_Get_Get__part__0Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_existsVsinv__7:
assumes a1: "\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_InvAck_exists src pp" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_Nak__part__2Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_Get_Put_HeadVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_PutXVsinv__7:
assumes a1: "r=n_PI_Local_PutX " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_Get_Nak__part__2Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_GetX__part__0Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_6Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_Get_PutVsinv__7:
assumes a1: "r=n_PI_Local_Get_Put " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ShWbVsinv__7:
assumes a1: "r=n_NI_ShWb N " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX_HeadVld__part__0Vsinv__7:
assumes a1: "r=n_PI_Local_GetX_PutX_HeadVld__part__0 N " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_11Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ReplaceVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Replace src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_1Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_8_HomeVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_7__part__0Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_GetX_NakVsinv__7:
assumes a1: "\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_NakVsinv__7:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Nak dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_9__part__1Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_GetXVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_PI_Remote_GetX src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX__part__1Vsinv__7:
assumes a1: "r=n_PI_Local_GetX_PutX__part__1 " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_10Vsinv__7:
assumes a1: "\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_8Vsinv__7:
assumes a1: "\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_Get_PutVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_8_NODE_GetVsinv__7:
assumes a1: "\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_2Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_Nak__part__0Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_exists_HomeVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_exists_Home src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Replace_HomeVsinv__7:
assumes a1: "r=n_NI_Replace_Home " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_7__part__1Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_Get_NakVsinv__7:
assumes a1: "\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Nak_ClearVsinv__7:
assumes a1: "r=n_NI_Nak_Clear " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_Get_Put_DirtyVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_Get_Nak__part__0Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_10_HomeVsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_2Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_2 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_PutX_HeadVld__part__1Vsinv__7:
assumes a1: "r=n_PI_Local_GetX_PutX_HeadVld__part__1 N " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_FAckVsinv__7:
assumes a1: "r=n_NI_FAck " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_4Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__0Vsinv__7:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__7 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
end
|
cppflags_string = function()
{
install_path = "include"
if (nchar(.Platform$r_arch) > 0)
path = file.path(install_path, .Platform$r_arch)
else
path = install_path
fmlh_include_dir_rel = system.file(path, package="fmlh")
fmlh_include_dir = tools::file_path_as_absolute(fmlh_include_dir_rel)
flags = paste0("-I", fmlh_include_dir)
flags
}
cppflags = function()
{
flags = cppflags_string()
cat(flags)
invisible()
}
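# Example usage (hypothetical caller; assumes cppflags()/cppflags_string() are exported
# from the installed "fmlh" package), e.g. from a shell or a Makevars file:
#   Rscript -e "fmlh::cppflags()"
# which prints something like "-I/path/to/fmlh/include" for use as a compiler flag.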
|
-- ----------------
-- Prove that
-- s ∩ t = t ∩ s
-- ----------------
import data.set.basic
open set
variable {α : Type}
variables s t u : set α
example : s ∩ t = t ∩ s :=
sorry
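-- One possible solution, kept separate so the exercise above retains its `sorry`;
-- the library lemma `set.inter_comm` also closes the goal in one step.
example : s ∩ t = t ∩ s :=
begin
  ext x,
  split,
  { intro h, exact ⟨h.2, h.1⟩ },
  { intro h, exact ⟨h.2, h.1⟩ },
end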
|
[STATEMENT]
lemma bind_spmf_to_nat_on:
"bind_spmf (map_spmf (to_nat_on (set_spmf p)) p) (\<lambda>n. f (from_nat_into (set_spmf p) n)) = bind_spmf p f"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. map_spmf (to_nat_on (set_spmf p)) p \<bind> (\<lambda>n. f (from_nat_into (set_spmf p) n)) = p \<bind> f
[PROOF STEP]
by(simp add: bind_map_spmf cong: bind_spmf_cong) |
{-# OPTIONS --without-K --safe #-}
open import Relation.Binary.Core
module Morphism.Structures where
open import Algebra
import Algebra.Morphism.Definitions as MorphismDefinitions
open import Level using (Level; _⊔_)
import Function.Definitions as FunctionDefinitions
open import Relation.Binary.Morphism.Structures
open import Algebra.Morphism.Structures
private
variable
a b ℓ₁ ℓ₂ : Level
|
# Initialization
```python
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import scqubits as qubit
import scqubits.utils.plotting as plot
import numpy as np
```
# Transmon qubit
The transmon qubit and the Cooper pair box are described by the Hamiltonian
\begin{equation}
H_\text{CPB}=4E_\text{C}(\hat{n}-n_g)^2+\frac{1}{2}E_\text{J}\sum_n(|n\rangle\langle n+1|+\text{h.c.}),
\end{equation}
expressed in the charge basis. Here, $E_C$ is the charging energy, $E_J$ the Josephson energy, and $n_g$ the offset charge. Internal representation of the Hamiltonian proceeds via the charge basis with charge-number cutoff specified by `ncut`, which must be chosen sufficiently large for convergence.
An instance of the transmon qubit is initialized as follows:
```python
CPB = qubit.Transmon(
EJ=30.02,
EC=1.2,
ng=0.3,
ncut=31
)
```
The energy eigenvalues of the transmon Hamiltonian for the given set of model parameters are obtained by calling the `eigenvals()` method. The optional parameter `evals_count` specifies the sought number of eigenenergies.
```python
CPB.eigenvals(evals_count=12)
```
array([-21.84381856, -6.17518551, 8.01366695, 20.04897106,
30.54312385, 38.7071573 , 54.55482909, 67.49323244,
90.05182723, 107.1140667 , 135.67852225, 156.68219246])
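Since `ncut` must be chosen large enough for convergence, a quick sanity check is to compare the low-lying eigenvalues for a few cutoff values (a minimal sketch using only the calls shown in this notebook; the exact numbers depend on the parameters above):

```python
for ncut in [10, 20, 31, 40]:
    CPB.ncut = ncut                              # charge-number cutoff
    print(ncut, CPB.eigenvals(evals_count=3))    # low-lying energies should stabilize
CPB.ncut = 31  # restore the cutoff used above
```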
To plot eigenenergies as a function of one of the qubit parameters (`EJ`, `EC`, or `ng`), we generate an array of values for the desired parameter and call the method `plot_evals_vs_paramvals`:
```python
ng_list = np.linspace(-2, 2, 220)
CPB.plot_evals_vs_paramvals('ng', ng_list, evals_count=6, subtract_ground=False);
```
HBox(children=(FloatProgress(value=0.0, max=220.0), HTML(value='')))
```python
CPB.ng = 0.3
ej_vals = CPB.EJ * np.cos(np.linspace(-np.pi/2, np.pi/2, 40))
CPB.plot_evals_vs_paramvals('EJ', ej_vals, evals_count=4, subtract_ground=False);
```
HBox(children=(FloatProgress(value=0.0, max=40.0), HTML(value='')))
```python
CPB.plot_n_wavefunction(esys=None, which=1, mode='real');
```
```python
CPB.plot_wavefunction(esys=None, which=[0,2,6], mode='real');
```
```python
CPB.plot_phi_wavefunction(esys=None, which=1, mode='abs_sqr');
```
### Charge matrix elements
```python
CPB.EJ = 30
CPB.ncut = 80
nmat = CPB.matrixelement_table('n_operator', evals_count=10)
plot.print_matrix(abs(nmat));
```
```python
CPB.plot_matrixelements('n_operator', evals_count=10);
```
```python
fig, ax = CPB.plot_matelem_vs_paramvals('n_operator', 'ng', ng_list, select_elems=4, filename='./data/test');
```
HBox(children=(FloatProgress(value=0.0, max=220.0), HTML(value='')))
```python
```
|
import tactic.basic
-- Lean version 3.45.0
@[derive decidable_eq] inductive prop : Type
| and : prop → prop → prop
| true : prop
| impl : prop → prop → prop
| other : ℕ → prop
abbreviation context : Type := list prop
inductive entails₁ : context → prop → Prop
| refl₁ {Γ A} : entails₁ (A :: Γ) A
| trans {Γ A C} : entails₁ (A :: Γ) C → entails₁ Γ A → entails₁ Γ C
| weak {Γ A C} : entails₁ Γ C → entails₁ (A :: Γ) C
| contr {Γ A C} : entails₁ (A :: A :: Γ) C → entails₁ (A :: Γ) C
| exch {Γ A B C} : entails₁ (B :: A :: Γ) C → entails₁ (A :: B :: Γ) C
| and_intro {Γ A B} : entails₁ Γ A → entails₁ Γ B → entails₁ Γ (prop.and A B)
| and_elim₁ {Γ A B} : entails₁ Γ (prop.and A B) → entails₁ Γ A
| and_elim₂ {Γ A B} : entails₁ Γ (prop.and A B) → entails₁ Γ B
| true_intro {Γ} : entails₁ Γ prop.true
| impl_intro {Γ A B} : entails₁ (A :: Γ) B → entails₁ Γ (prop.impl A B)
| impl_elim {Γ A B} : entails₁ Γ (prop.impl A B) → entails₁ Γ A → entails₁ Γ B
inductive entails₂ : context → prop → Prop
| refl₂ {Γ A} : list.mem A Γ → entails₂ Γ A
| and_intro {Γ A B} : entails₂ Γ A → entails₂ Γ B → entails₂ Γ (prop.and A B)
| and_elim₁ {Γ A B} : entails₂ Γ (prop.and A B) → entails₂ Γ A
| and_elim₂ {Γ A B} : entails₂ Γ (prop.and A B) → entails₂ Γ B
| true_intro {Γ} : entails₂ Γ prop.true
| impl_intro {Γ A B} : entails₂ (A :: Γ) B → entails₂ Γ (prop.impl A B)
| impl_elim {Γ A B} : entails₂ Γ (prop.impl A B) → entails₂ Γ A → entails₂ Γ B
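-- `entails₁` makes the structural rules (weakening, contraction, exchange and a cut-like
-- `trans`) explicit, while `entails₂` absorbs them into the membership-based `refl₂` rule;
-- the lemmas below recover each system's rules inside the other, culminating in
-- `entails₁_iff_entails₂`.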
lemma entails₁.refl₂ {Γ A} : list.mem A Γ → entails₁ Γ A :=
begin
intro h,
induction Γ with B Γ ih,
{ cases h },
{ by_cases h' : A = B,
{ subst h',
exact entails₁.refl₁ },
{ replace h : list.mem A Γ,
{ cases h, exact false.elim (h' h), exact h },
specialize ih h, clear h h',
exact entails₁.weak ih } }
end
lemma entails₂.refl₁ {Γ A} : entails₂ (A :: Γ) A :=
begin
exact entails₂.refl₂ (list.mem_cons_self A Γ)
end
lemma aux {Γ₁ Γ₂ C} (hΓ : ∀ (A : prop), list.mem A Γ₁ → list.mem A Γ₂) :
entails₂ Γ₁ C → entails₂ Γ₂ C :=
begin
intro h,
induction h with Γ₁ C h Γ₁ A B h₁ h₂ ih₁ ih₂ Γ₁ A B h ih Γ₁ A B h ih Γ₁ Γ₁ A B h ih Γ₁ A B h₁ h₂ ih₁ ih₂ generalizing Γ₂,
any_goals { specialize ih₁ hΓ, specialize ih₂ hΓ },
{ exact entails₂.refl₂ (hΓ C h) },
{ exact entails₂.and_intro ih₁ ih₂ },
{ exact entails₂.and_elim₁ (ih hΓ) },
{ exact entails₂.and_elim₂ (ih hΓ) },
{ exact entails₂.true_intro },
{ replace hΓ : ∀ B, list.mem B (A :: Γ₁) → list.mem B (A :: Γ₂),
{ intro B,
by_cases h : B = A,
{ subst h,
intro h',
exact or.inl rfl },
{ intro h',
cases h', exact false.elim (h h'),
exact or.inr (hΓ B h') } },
exact entails₂.impl_intro (ih hΓ) },
{ exact entails₂.impl_elim ih₁ ih₂ }
end
lemma entails₂.exch {Γ A B C} :
entails₂ (B :: A :: Γ) C → entails₂ (A :: B :: Γ) C :=
begin
apply aux; clear C,
intro C,
by_cases h₁ : C = A,
{ subst h₁,
intro h,
exact or.inl rfl },
{ by_cases h₂ : C = B,
{ subst h₂,
intro h,
exact or.inr (or.inl rfl) },
{ intro h,
cases h, exact false.elim (h₂ h),
cases h, exact false.elim (h₁ h),
exact or.inr (or.inr h) } }
end
lemma entails₂.weak {Γ A C} : entails₂ Γ C → entails₂ (A :: Γ) C :=
begin
intro h,
induction h with Γ C h Γ B C h₁ h₂ ih₁ ih₂ Γ C B h ih Γ B C h ih Γ Γ B C h ih Γ B C h₁ h₂ ih₁ ih₂,
{ exact entails₂.refl₂ (or.inr h) },
{ exact entails₂.and_intro ih₁ ih₂ },
{ exact entails₂.and_elim₁ ih },
{ exact entails₂.and_elim₂ ih },
{ exact entails₂.true_intro },
{ exact entails₂.impl_intro (entails₂.exch ih) },
{ exact entails₂.impl_elim ih₁ ih₂ }
end
lemma entails₂.trans {Γ A C} :
entails₂ (A :: Γ) C → entails₂ Γ A → entails₂ Γ C :=
begin
intros H₁ H₂,
have hΓ : ∀ (B : prop), list.mem B (A :: Γ) → B = A ∨ list.mem B Γ,
{ intros B h, exact h },
induction H₁ with Γ' C h Γ' A' B h₁ h₂ ih₁ ih₂ Γ' A' B h ih Γ' A' B h ih Γ' Γ' A' B h ih Γ' A' B h₁ h₂ ih₁ ih₂ generalizing Γ,
any_goals { specialize ih₁ H₂ hΓ, specialize ih₂ H₂ hΓ },
{ by_cases h' : C = A,
{ subst h',
exact H₂ },
{ replace hΓ : list.mem C Γ,
{ specialize hΓ C h,
cases hΓ,
{ exact false.elim (h' hΓ) },
{ exact hΓ } },
exact entails₂.refl₂ hΓ } },
{ exact entails₂.and_intro ih₁ ih₂ },
{ exact entails₂.and_elim₁ (ih H₂ hΓ) },
{ exact entails₂.and_elim₂ (ih H₂ hΓ) },
{ exact entails₂.true_intro },
{ replace H₂ : entails₂ (A' :: Γ) A := entails₂.weak H₂,
replace hΓ : ∀ (B : prop), list.mem B (A' :: Γ') → B = A ∨ list.mem B (A' :: Γ),
{ clear_dependent B,
intros B h,
cases h,
{ subst h,
exact or.inr (or.inl rfl) },
{ specialize hΓ B h,
cases hΓ,
{ subst hΓ,
exact or.inl rfl },
{ exact or.inr (or.inr hΓ) } } },
exact entails₂.impl_intro (ih H₂ hΓ) },
{ exact entails₂.impl_elim ih₁ ih₂ }
end
lemma entails₂.contr {Γ A C} :
entails₂ (A :: A :: Γ) C → entails₂ (A :: Γ) C :=
begin
apply aux; clear C,
intro B,
by_cases h : B = A,
{ subst h,
intro h',
exact or.inl rfl },
{ intro h',
cases h', exact false.elim (h h'),
cases h', exact false.elim (h h'),
exact or.inr h' }
end
theorem entails₁_iff_entails₂ {Γ C} :
entails₁ Γ C ↔ entails₂ Γ C :=
begin
split; intro h,
{ induction h with Γ A Γ A C h₁ h₂ ih₁ ih₂ Γ A C h ih Γ A C h ih Γ A B C h ih Γ A B h₁ h₂ ih₁ ih₂ Γ A B h ih Γ A B h ih Γ Γ A B h ih Γ A B h₁ h₂ ih₁ ih₂,
{ exact entails₂.refl₁ },
{ exact entails₂.trans ih₁ ih₂ },
{ exact entails₂.weak ih },
{ exact entails₂.contr ih },
{ exact entails₂.exch ih },
{ exact entails₂.and_intro ih₁ ih₂ },
{ exact entails₂.and_elim₁ ih },
{ exact entails₂.and_elim₂ ih },
{ exact entails₂.true_intro },
{ exact entails₂.impl_intro ih },
{ exact entails₂.impl_elim ih₁ ih₂ } },
{ induction h with Γ C h Γ A B h₁ h₂ ih₁ ih₂ Γ A B h ih Γ A B h ih Γ Γ A B h ih Γ A B h₁ h₂ ih₁ ih₂,
{ exact entails₁.refl₂ h },
{ exact entails₁.and_intro ih₁ ih₂ },
{ exact entails₁.and_elim₁ ih },
{ exact entails₁.and_elim₂ ih },
{ exact entails₁.true_intro },
{ exact entails₁.impl_intro ih },
{ exact entails₁.impl_elim ih₁ ih₂ } }
end
#print axioms entails₁_iff_entails₂ -- no axioms
|
[STATEMENT]
lemma nn_cond_exp_mono:
assumes "AE x in M. f x \<le> g x"
and [measurable]: "f \<in> borel_measurable M" "g \<in> borel_measurable M"
shows "AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
define h where "h = (\<lambda>x. g x - f x)"
[PROOF STATE]
proof (state)
this:
h = (\<lambda>x. g x - f x)
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
have [measurable]: "h \<in> borel_measurable M"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. h \<in> borel_measurable M
[PROOF STEP]
unfolding h_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<lambda>x. g x - f x) \<in> borel_measurable M
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
h \<in> borel_measurable M
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
have *: "AE x in M. g x = f x + h x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. AE x in M. g x = f x + h x
[PROOF STEP]
unfolding h_def
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. AE x in M. g x = f x + (g x - f x)
[PROOF STEP]
using assms(1)
[PROOF STATE]
proof (prove)
using this:
AE x in M. f x \<le> g x
goal (1 subgoal):
1. AE x in M. g x = f x + (g x - f x)
[PROOF STEP]
by (auto simp: ennreal_ineq_diff_add)
[PROOF STATE]
proof (state)
this:
AE x in M. g x = f x + h x
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
have "AE x in M. nn_cond_exp M F g x = nn_cond_exp M F (\<lambda>x. f x + h x) x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F g x = nn_cond_exp M F (\<lambda>x. f x + h x) x
[PROOF STEP]
by (rule nn_cond_exp_cong) (auto simp add: * assms)
[PROOF STATE]
proof (state)
this:
AE x in M. nn_cond_exp M F g x = nn_cond_exp M F (\<lambda>x. f x + h x) x
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
AE x in M. nn_cond_exp M F g x = nn_cond_exp M F (\<lambda>x. f x + h x) x
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
have "AE x in M. nn_cond_exp M F f x + nn_cond_exp M F h x = nn_cond_exp M F (\<lambda>x. f x + h x) x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x + nn_cond_exp M F h x = nn_cond_exp M F (\<lambda>x. f x + h x) x
[PROOF STEP]
by (rule nn_cond_exp_sum) (auto simp add: assms)
[PROOF STATE]
proof (state)
this:
AE x in M. nn_cond_exp M F f x + nn_cond_exp M F h x = nn_cond_exp M F (\<lambda>x. f x + h x) x
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
AE x in M. nn_cond_exp M F g x = nn_cond_exp M F (\<lambda>x. f x + h x) x
AE x in M. nn_cond_exp M F f x + nn_cond_exp M F h x = nn_cond_exp M F (\<lambda>x. f x + h x) x
[PROOF STEP]
have "AE x in M. nn_cond_exp M F g x = nn_cond_exp M F f x + nn_cond_exp M F h x"
[PROOF STATE]
proof (prove)
using this:
AE x in M. nn_cond_exp M F g x = nn_cond_exp M F (\<lambda>x. f x + h x) x
AE x in M. nn_cond_exp M F f x + nn_cond_exp M F h x = nn_cond_exp M F (\<lambda>x. f x + h x) x
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F g x = nn_cond_exp M F f x + nn_cond_exp M F h x
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
AE x in M. nn_cond_exp M F g x = nn_cond_exp M F f x + nn_cond_exp M F h x
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
AE x in M. nn_cond_exp M F g x = nn_cond_exp M F f x + nn_cond_exp M F h x
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
AE x in M. nn_cond_exp M F g x = nn_cond_exp M F f x + nn_cond_exp M F h x
goal (1 subgoal):
1. AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
[PROOF STEP]
by force
[PROOF STATE]
proof (state)
this:
AE x in M. nn_cond_exp M F f x \<le> nn_cond_exp M F g x
goal:
No subgoals!
[PROOF STEP]
qed |
If $i$ and $j$ are less than $n$, then the functions $upd_i$ and $upd_j$ are equal if and only if $i = j$. |
FUNCTION:NAME
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
dt_test_ioctl:sdt-test-ioctl-file
dt_test_ioctl:sdt-test-ioctl-file 0
exit_group:entry
-- @@stderr --
dtrace: script 'test/unittest/sdt/tst.translation-elided-arg.d' matched 3 probes
|
State Before: R : Type u_1
A : Type u_3
B : Type u_2
inst✝⁵ : CommRing R
inst✝⁴ : CommRing A
inst✝³ : CommRing B
inst✝² : Algebra R B
inst✝¹ : Algebra A B
inst✝ : IsIntegralClosure A R B
h : optParam (IsIntegral R 0) (_ : IsIntegral R 0)
⊢ ↑(algebraMap A B) (mk' A 0 h) = ↑(algebraMap A B) 0
State After: no goals
Tactic: rw [algebraMap_mk', RingHom.map_zero] |
using Documenter
module PSCOPFDocs
using Documenter
format = Documenter.HTML(
prettyurls = false,
)
pages = Any[
"Home" => "index.md",
"Introduction" => Any[
"Introduction" => "0_intro/1_introduction.md",
"Glossaire" => "0_intro/2_glossaire.md",
],
"Description" => Any[
"Architecture et Design" => "1_description/1_architecture.md",
"Sequence" => "1_description/2_sequence.md",
],
"Modèles" => Any[
"Notations" => "2_modeles/0_notations.md",
"Modèles du marché" => Any[
"Marché de L'Energie avant FO" => Any[
"Le Problème" => "2_modeles/1_marche/1_marche_de_energie_avant_fo/1_problem.md",
"Les Variables et Les Contraintes" => "2_modeles/1_marche/1_marche_de_energie_avant_fo/2_vars_and_cstrs.md",
"L'objectif" => "2_modeles/1_marche/1_marche_de_energie_avant_fo/3_objective.md"
],
"Marché de L'Energie à la FO" => Any[],
],
"Modèles du TSO" => Any[
"TSO avant la FO" => Any[
"Le Problème" => "2_modeles/2_tso/1_tso_avant_fo/1_problem.md",
"Les Variables et les Contraintes" => "2_modeles/2_tso/1_tso_avant_fo/2_vars_and_cstrs.md",
"L'Objectif" => "2_modeles/2_tso/1_tso_avant_fo/3_objective.md",
],
"Mode 1 (ANCIEN)" => Any[
"Problem Description" => "2_modeles/2_tso/1_mode_1/1_problem.md",
"Variables" => "2_modeles/2_tso/1_mode_1/2_variables.md",
"Constraints" => "2_modeles/2_tso/1_mode_1/3_constraints.md",
"Objective" => "2_modeles/2_tso/1_mode_1/4_objective.md",
],
],
],
"Library" => Any[
"PSCOPF" => "lib/pscopf.md",
],
]
end # PSCOPFDocs
#TODO : PTDF docs go somewhere else
include(joinpath(@__DIR__, "..", "src", "PSCOPF.jl"))
include(joinpath(@__DIR__, "..", "src", "PTDF.jl"))
makedocs(modules = [PSCOPF, PTDF],
sitename="PSCOPF",
format = PSCOPFDocs.format,
pages = PSCOPFDocs.pages,
)
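# Typical local build (assuming this file lives at docs/make.jl in the repository):
#   julia --project=docs docs/make.jl
# Documenter then writes the rendered HTML into a build/ directory next to this file.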
|
(* Author: Gertrud Bauer
*)
header{* Plane Graph Enumeration *}
theory Plane
imports Enumerator FaceDivision RTranCl
begin
definition maxGon :: "nat \<Rightarrow> nat" where
"maxGon p \<equiv> p+3"
declare maxGon_def [simp]
definition duplicateEdge :: "graph \<Rightarrow> face \<Rightarrow> vertex \<Rightarrow> vertex \<Rightarrow> bool" where
"duplicateEdge g f a b \<equiv>
2 \<le> directedLength f a b \<and> 2 \<le> directedLength f b a \<and> b \<in> set (neighbors g a)"
primrec containsUnacceptableEdgeSnd ::
"(nat \<Rightarrow> nat \<Rightarrow> bool) \<Rightarrow> nat \<Rightarrow> nat list \<Rightarrow> bool" where
"containsUnacceptableEdgeSnd N v [] = False" |
"containsUnacceptableEdgeSnd N v (w#ws) =
(case ws of [] \<Rightarrow> False
| (w'#ws') \<Rightarrow> if v < w \<and> w < w' \<and> N w w' then True
else containsUnacceptableEdgeSnd N w ws)"
primrec containsUnacceptableEdge :: "(nat \<Rightarrow> nat \<Rightarrow> bool) \<Rightarrow> nat list \<Rightarrow> bool" where
"containsUnacceptableEdge N [] = False" |
"containsUnacceptableEdge N (v#vs) =
(case vs of [] \<Rightarrow> False
| (w#ws) \<Rightarrow> if v < w \<and> N v w then True
else containsUnacceptableEdgeSnd N v vs)"
definition containsDuplicateEdge :: "graph \<Rightarrow> face \<Rightarrow> vertex \<Rightarrow> nat list \<Rightarrow> bool" where
"containsDuplicateEdge g f v is \<equiv>
containsUnacceptableEdge (\<lambda>i j. duplicateEdge g f (f\<^bsup>i\<^esup>\<bullet>v) (f\<^bsup>j\<^esup>\<bullet>v)) is"
definition containsDuplicateEdge' :: "graph \<Rightarrow> face \<Rightarrow> vertex \<Rightarrow> nat list \<Rightarrow> bool" where
"containsDuplicateEdge' g f v is \<equiv>
2 \<le> |is| \<and>
((\<exists>k < |is| - 2. let i0 = is!k; i1 = is!(k+1); i2 = is!(k+2) in
(duplicateEdge g f (f\<^bsup>i1 \<^esup>\<bullet>v) (f\<^bsup>i2 \<^esup>\<bullet>v)) \<and> (i0 < i1) \<and> (i1 < i2))
\<or> (let i0 = is!0; i1 = is!1 in
(duplicateEdge g f (f\<^bsup>i0 \<^esup>\<bullet>v) (f\<^bsup>i1 \<^esup>\<bullet>v)) \<and> (i0 < i1)))"
definition generatePolygon :: "nat \<Rightarrow> vertex \<Rightarrow> face \<Rightarrow> graph \<Rightarrow> graph list" where
"generatePolygon n v f g \<equiv>
let enumeration = enumerator n |vertices f|;
enumeration = [is \<leftarrow> enumeration. \<not> containsDuplicateEdge g f v is];
vertexLists = [indexToVertexList f v is. is \<leftarrow> enumeration] in
[subdivFace g f vs. vs \<leftarrow> vertexLists]"
definition next_plane0 :: "nat \<Rightarrow> graph \<Rightarrow> graph list" ("next'_plane0\<^bsub>_\<^esub>") where
"next_plane0\<^bsub>p\<^esub> g \<equiv>
if final g then []
else \<Squnion>\<^bsub>f\<in>nonFinals g\<^esub> \<Squnion>\<^bsub>v\<in>vertices f\<^esub> \<Squnion>\<^bsub>i\<in>[3..<Suc(maxGon p)]\<^esub> generatePolygon i v f g"
definition Seed :: "nat \<Rightarrow> graph" ("Seed\<^bsub>_\<^esub>") where
"Seed\<^bsub>p\<^esub> \<equiv> graph(maxGon p)"
lemma Seed_not_final[iff]: "\<not> final (Seed p)"
by(simp add:Seed_def graph_def finalGraph_def nonFinals_def)
definition PlaneGraphs0 :: "graph set" where
"PlaneGraphs0 \<equiv> \<Union>p. {g. Seed\<^bsub>p\<^esub> [next_plane0\<^bsub>p\<^esub>]\<rightarrow>* g \<and> final g}"
end
|
#!/usr/bin/env Rscript
#   Plot genes of interest in the reduced dimension embeddings of a SCE
# Combiz Khozoie <[email protected]>
# ____________________________________________________________________________
# Initialization ####
## ............................................................................
## Load packages ####
library(argparse)
library(scFlow)
## ............................................................................
## Parse command-line arguments ####
# create parser object
parser <- ArgumentParser()
# specify options
required <- parser$add_argument_group("Required", "required arguments")
optional <- parser$add_argument_group("Optional", "required arguments")
required$add_argument(
"--sce_path",
help = "-path to the SingleCellExperiment",
metavar = "dir",
required = TRUE
)
required$add_argument(
"--reddim_genes_yml",
  help = "path to the YAML file with genes of interest",
metavar = "dir",
required = TRUE
)
required$add_argument(
"--reduction_methods",
help = "reduced dimension embedding(s) to use for plots",
metavar = "UMAP",
required = TRUE
)
required$add_argument(
"--reddimplot_pointsize",
default = 0.1,
type = "double",
required = TRUE,
help = "Point size for reduced dimension plots",
metavar = "N"
)
required$add_argument(
"--reddimplot_alpha",
default = 0.2,
type = "double",
required = TRUE,
help = "Alpha value for reduced dimension plots",
metavar = "N"
)
### . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ..
### Pre-process args ####
args <- parser$parse_args()
args$reduction_methods <- strsplit(args$reduction_methods, ",")[[1]]
options("scflow_reddimplot_pointsize" = args$reddimplot_pointsize)
options("scflow_reddimplot_alpha" = args$reddimplot_alpha)
## ............................................................................
## Start ####
sce <- read_sce(args$sce_path)
gene_l <- yaml::read_yaml(args$reddim_genes_yml)
valid_reddims <- SingleCellExperiment::reducedDimNames(sce)
assertthat::assert_that(
all(args$reduction_methods %in% valid_reddims),
msg = sprintf("Valid reddims are: %s", paste0(valid_reddims, collapse = ",")))
for (reddim in args$reduction_methods) {
for (l_name in names(gene_l)) {
folder_path <- file.path(getwd(), "reddim_gene_plots", reddim, l_name)
R.utils::mkdirs(folder_path)
for (gene in gene_l[[l_name]]) {
print(gene)
if (gene %in% SummarizedExperiment::rowData(sce)$gene) {
p <- plot_reduced_dim_gene(
sce,
reduced_dim = reddim,
gene = gene
)
png(file.path(folder_path, paste0(gene, ".png")),
width = 170, height = 170, units = "mm", res = 600)
print(p)
dev.off()
} else {
warning(print(sprintf("Gene %s not found.", gene)))
}
}
}
}
## ............................................................................
## Save Outputs ####
# Save SingleCellExperiment
## ............................................................................
## Clean up ####
# Clear biomart cache
|
The second part of Season 2 brings the announcement of the sex of Robyn and Kody's baby and the Browns' struggle to adjust to life in Las Vegas. The episodes following the Season 2 hiatus focus largely on Robyn's pregnancy and the kids' adjustment to their new lives. The abrupt move to Las Vegas brings about behavioral problems in some of the older kids, which is also discussed largely in the second half of Season 2. During these episodes the Browns also explore possible businesses that the five of them (Kody and the sister wives) can run together. Several episodes after the hiatus discuss specific topics such as jealousy among the sister wives, especially regarding courting a new wife, how the parents combat the influence of Las Vegas on their children, and how the Browns are preparing the older children for college. Mona Riekki is back in this season and is working with the family on finding a permanent home in Vegas. In the finale, Robyn gives birth to baby Solomon on October 27, 2011, and the possibility of Meri having more children once again resurfaces.
|
Formal statement is: lemma open_contains_ball: "open S \<longleftrightarrow> (\<forall>x\<in>S. \<exists>e>0. ball x e \<subseteq> S)" Informal statement is: A set $S$ is open if and only if for every $x \in S$, there exists an $e > 0$ such that the ball of radius $e$ centered at $x$ is contained in $S$. |
(*
Axioms v4
*)
Require Import Arith.
(* Helper lemmas *)
Lemma less_or_equal (n m : nat):
n <= m <-> (n < m \/ n = m).
Proof.
split.
intros.
induction H.
right.
reflexivity.
left.
assert (m < S m).
auto.
firstorder.
intros.
destruct H as [H1 | H2].
induction H1.
auto.
auto.
induction H2.
auto.
Qed.
(* Oracle function: Description -> nat *)
Parameter O : nat -> nat.
(* Oracle equivalence relation: Description x Description -> bool *)
Parameter R : nat -> nat -> Prop.
Axiom reflexive: forall r : nat, R r r.
Axiom symmetric: forall r s : nat, R r s -> R s r.
Axiom transitive: forall r s t : nat, R r s -> R s t -> R r t.
(* Nescience function: Description -> Nescience *)
Parameter N : nat -> nat.
(* Axioms *)
(* Axiom: non-negativity of nescience *)
Axiom non_negativity: forall d : nat, N(d) >= 0.
(* Axiom of surfeit *)
Axiom surfeit: forall s t : nat, R s t /\ O s <= O t /\ Nat.log2 s < Nat.log2 t -> N s < N t.
(* Axiom of inaccuracy *)
Axiom inaccuracy: forall s t : nat, R s t /\ O s < O t /\ Nat.log2 s <= Nat.log2 t -> N s < N t.
(* Axiom of equality *)
Axiom equality: forall s t : nat, R s t /\ O s = O t /\ Nat.log2 s = Nat.log2 t -> N s = N t.
(* Axiom perfect_knowledge *)
Axiom perfect_knowledge: forall s : nat, N s = 0 <-> ( O s = 0 )
/\ ( ~ exists t : nat, s <> t /\ R s t /\ O t = 0 /\ Nat.log2 t < Nat.log2 s ).
(* zero unknown *)
Axiom zero_unknown: forall s : nat, exists t : nat, R s t /\ N t = 0.
Lemma zero_inaccuracy (d : nat) :
N d = 0 -> O d = 0.
Proof.
intros.
apply perfect_knowledge.
assumption.
Qed.
(* l(s) < l(t) & O(s) < O(t) => N(s) < N(t) *)
Lemma property_ll:
forall s t : nat, R s t -> Nat.log2 s < Nat.log2 t -> O s < O t -> N s < N t.
Proof.
intros s t H1 H2 H3.
apply surfeit.
split.
assumption.
split.
apply less_or_equal.
left.
apply H3.
apply H2.
Qed.
(* l(s) < l(t) & O(s) = O(t) => N(s) < N(t) *)
Lemma property_le:
forall s t : nat, R s t -> Nat.log2 s < Nat.log2 t -> O s = O t -> N s < N t.
Proof.
intros s t H1 H2 H3.
apply surfeit.
split.
assumption.
split.
apply less_or_equal.
right.
apply H3.
apply H2.
Qed.
(* l(s) = l(t) & O(s) < O(t) => N(s) < N(t) *)
Lemma property_el:
forall s t : nat, R s t -> Nat.log2 s = Nat.log2 t -> O s < O t -> N s < N t.
Proof.
intros s t H1 H2 H3.
apply inaccuracy.
split.
assumption.
split.
assumption.
apply less_or_equal.
right.
apply H2.
Qed.
(* l(s) = l(t) & O(s) = O(t) => N(s) = N(t) *)
(* axiom equality *)
(* l(s) = l(t) & O(s) > O(t) => N(s) > N(t) *)
Lemma property_eg:
forall s t : nat, R s t -> Nat.log2 s = Nat.log2 t -> O s > O t -> N s > N t.
Proof.
intros s t H1 H2 H3.
apply property_el.
apply symmetric.
assumption.
auto.
auto.
Qed.
(* l(s) > l(t) & O(s) = O(t) => N(s) > N(t) *)
Lemma property_ge:
forall s t : nat, R s t -> Nat.log2 s > Nat.log2 t -> O s = O t -> N s > N t.
Proof.
intros s t H1 H2 H3.
apply property_le.
apply symmetric.
assumption.
auto.
auto.
Qed.
(* l(s) > l(t) & O(s) > O(t) => N(s) > N(t) *)
Lemma property_gg:
forall s t : nat, R s t -> Nat.log2 s > Nat.log2 t -> O s > O t -> N s > N t.
Proof.
intros s t H1 H2 H3.
apply property_ll.
apply symmetric.
assumption.
auto.
auto.
Qed.
(* l(s) < l(t) & O(s) > O(t) => unknown *)
(* l(s) > l(t) & O(s) < O(t) => unknown *)
|
!
! forward substitution example
!
program fs03
integer n, i
parameter (n=100)
real a(n), x
do i=1, n
x = a(i)*a(i)
a(i) = x + x
enddo
x = a(4)
a(i-3) = x + x
a(i-2) = x - x
print *, a(5)
end
|
! Test module
! module is module-stmt
! [ specification-part ]
! [ module-subprogram-part ]
! end-module-stmt
!
! module-stmt is MODULE module-name
!
! end-module-stmt is END [ MODULE [ module-name ] ]
!
! module-subprogram-part is contains-stmt
! module-subprogram
! [ module-subprogram ] ...
!
! module-subprogram is function-subprogram
! or subroutine-subprogram
!
! Not tested here: specification-part, function-subprogram, and
! subroutine-subprogram.
! None of the optional parts included
MODULE a
END
! Include the optional MODULE in end-module-stmt
MODULE b
END MODULE
! Include optional MODULE and module-name in end-module-stmt.
MODULE c
END MODULE c
! Include an optional specification-part
MODULE d
Integer i
END MODULE d
! Include an optional module-subprogram-part
MODULE e
CONTAINS
subroutine sub()
END subroutine sub
Function foo()
foo = 13
END FUNCTION foo
END MODULE e
! Include an optional separate-module-subprogram
MODULE f
CONTAINS
MODULE procedure mp
end PROCEDURE mp
END MODULE f
! Include all optional parts
MODULE g
integer i
contains
subroutine sub()
END subroutine sub
FUNCTION foo()
foo = 13
END FUNCTION foo
module PROCEDURE mp
END procedure mp
END MODULE g
|
subroutine akherm2(x,nx,y,ny,fherm,nf2,ilinx,iliny,ier)
C
C create a data set for Hermite interpolation, based on Akima's method
C [Hiroshi Akima, Communications of the ACM, Jan 1974, Vol. 17 No. 1]
C
C input:
C
      integer nx,ny,nf2                 ! array dimensions
real x(nx) ! x coordinate array
real y(ny) ! y coordinate array
real fherm(0:3,nf2,ny) ! data/Hermite array
C
C fherm(0,i,j) = function value f at x(i),y(j) **on input**
C
C fherm(1,i,j) = derivative df/dx at x(i),y(j) **on output**
C fherm(2,i,j) = derivative df/dy at x(i),y(j) **on output**
C fherm(3,i,j) = derivative d2f/dxdy at x(i),y(j) **on output**
C
C addl output:
C ilinx=1 if x axis is evenly spaced
C iliny=1 if y axis is evenly spaced
C ier=0 if no error:
C x, y must both be strict ascending
C nf2.ge.nx is required.
C
C a default boundary condition is used, based on divided differences
C in the edge region. For more control of BC, use akherm2p...
C
call akherm2p(x,nx,y,ny,fherm,nf2,ilinx,iliny,0,0,ier)
C
return
end
C----------------------------
subroutine akherm2p(x,nx,y,ny,fherm,nf2,ilinx,iliny,ipx,ipy,ier)
C
C create a data set for Hermite interpolation, based on Akima's method
C [Hiroshi Akima, Communications of the ACM, Jan 1974, Vol. 17 No. 1]
C
C with independently settable boundary condition options:
C ipx or ipy =0 -- default, boundary conditions from divided diffs
C ipx or ipy =1 -- periodic boundary condition
C ipx or ipy =2 -- user supplied df/dx or df/dy
C input:
C
      integer nx,ny,nf2                 ! array dimensions
real x(nx) ! x coordinate array
real y(ny) ! y coordinate array
real fherm(0:3,nf2,ny) ! data/Hermite array
C
integer ipx ! =1 if df/dx periodic in x
integer ipy ! =1 if df/dy periodic in y
C
C fherm(0,1:nx,1:ny) supplied by user; this routine computes the
C rest of the elements, but note:
C
C if ipx=2: fherm(1,1,1:ny) & fherm(1,nx,1:ny) = INPUT df/dx BCs
C if ipy=2: fherm(2,1:nx,1) & fherm(2,1:nx,ny) = INPUT df/dy BCs
C
C on output, at all grid points (i,j) covering (1:nx,1:ny):
C fherm1(1,i,j) -- df/dx at the grid point
C fherm1(2,i,j) -- df/dy at the grid point
C fherm1(3,i,j) -- d2f/dxdy at the grid point
C
C---------------------
C local...
C
real wx(2),wy(2),e(2,2)
c
real, dimension(:,:), allocatable :: ftmp
c
real xx(0:nx+1)
real yy(0:ny+1)
c
c---------------------
c
c error checks
c
ier=0
c
call splinck(x,nx,ilinx,1.0e-3,ierx)
if(ierx.ne.0) ier=ier+1
c
if(ierx.ne.0) then
write(6,'('' ?akherm2: x axis not strict ascending'')')
endif
c
call splinck(y,ny,iliny,1.0e-3,iery)
if(iery.ne.0) ier=ier+1
c
if(iery.ne.0) then
write(6,'('' ?akherm2: y axis not strict ascending'')')
endif
c
if(nf2.lt.nx) then
ier=ier+1
write(6,*) '?akherm2: fherm array dimension too small.'
endif
C
ierbc=0
call ibc_ck(ipx,'akherm2','X Bdy Cond',0,2,ierbc)
ier=ier+ierbc
C
ierbc=0
call ibc_ck(ipy,'akherm2','Y Bdy Cond',0,2,ierbc)
ier=ier+ierbc
C
if(ier.ne.0) return
C
C---------------------------------------
C
C get a temporary array for f -- will extend out by 1 zone in
C each direction so that numerical derivative evaluation can be
C done without a lot of special case logic near the edges...
C
allocate(ftmp(0:nx+1,0:ny+1))
C
do iy=1,ny
do ix=1,nx
ftmp(ix,iy)=fherm(0,ix,iy)
enddo
enddo
C
C also create expanded axes grids...
C
xx(1:nx)=x
yy(1:ny)=y
xx(0)=2*x(1)-x(2)
xx(nx+1)=2*x(nx)-x(nx-1)
yy(0)=2*y(1)-y(2)
yy(ny+1)=2*y(ny)-y(ny-1)
C
C---------------------------------------
C
C handle boundary conditions and create rows of extrapolated points
C in ftmp. first do ftmp(0,1:ny), ftmp(nx+1,1:ny),
C then ftmp(1:nx,0), ftmp(1:nx,ny+1),
c then ... fill in the corners
c
c also, for ipx.le.1 fill in the bdy fherm(1,*) values;
c for ipy.le.1 fill in the bdy fherm(2,*) values
c
c x bc's
c
do iy=1,ny
c
cxp=(ftmp(2,iy)-ftmp(1,iy))/(xx(2)-xx(1))
cxm=(ftmp(nx,iy)-ftmp(nx-1,iy))/(xx(nx)-xx(nx-1))
c
if(ipx.eq.1) then
c
c periodic BC
c
if(nx.gt.2) then
cxpp=(ftmp(3,iy)-ftmp(2,iy))/(xx(3)-xx(2))
cxmm=(ftmp(nx-1,iy)-ftmp(nx-2,iy))/(xx(nx-1)-xx(nx-2))
c
call akherm0(cxmm,cxm,cxp,cxpp,wx,fherm(1,1,iy))
fherm(1,nx,iy)=fherm(1,1,iy)
else
fherm(1,1,iy) = cxp ! =cxm, nx=2
fherm(1,nx,iy) = fherm(1,1,iy)
endif
c
cxtrap0=cxm
cxtrapn=cxp
c
else if(ipx.eq.0) then
C
C default BC -- standard numeric extrapolation
C
if(nx.gt.2) then
cxpp=(ftmp(3,iy)-ftmp(2,iy))/(xx(3)-xx(2))
fherm(1,1,iy)=1.5*cxp-0.5*cxpp
C
cxmm=(ftmp(nx-1,iy)-ftmp(nx-2,iy))/(xx(nx-1)-xx(nx-2))
fherm(1,nx,iy)=1.5*cxm-0.5*cxmm
c
else
fherm(1,1,iy) = cxp ! =cxm, nx=2
fherm(1,nx,iy) = fherm(1,1,iy)
endif
C
C extrapolate to slope to ghost points just past bdy...
C
cxtrap0=2.0*fherm(1,1,iy)-cxp
cxtrapn=2.0*fherm(1,nx,iy)-cxm
C
else
C
C BC supplied by user. Also use this for extrapolation...
C
cxtrap0=2.0*fherm(1,1,iy)-cxp
cxtrapn=2.0*fherm(1,nx,iy)-cxm
C
endif
C
ftmp(0,iy)=ftmp(1,iy)-cxtrap0*(xx(1)-xx(0))
ftmp(nx+1,iy)=ftmp(nx,iy)+cxtrapn*(xx(nx+1)-xx(nx))
C
enddo
c
c y bc's
c
do ix=1,nx
c
cyp=(ftmp(ix,2)-ftmp(ix,1))/(yy(2)-yy(1))
cym=(ftmp(ix,ny)-ftmp(ix,ny-1))/(yy(ny)-yy(ny-1))
c
if(ipy.eq.1) then
c
c periodic BC
c
if(ny.gt.2) then
cypp=(ftmp(ix,3)-ftmp(ix,2))/(yy(3)-yy(2))
cymm=(ftmp(ix,ny-1)-ftmp(ix,ny-2))/(yy(ny-1)-yy(ny-2))
c
call akherm0(cymm,cym,cyp,cypp,wy,fherm(2,ix,1))
fherm(2,ix,ny)=fherm(2,ix,1)
c
else
fherm(2,ix,1) = cyp ! =cym, ny=2
fherm(2,ix,ny)=fherm(2,ix,1)
endif
c
cytrap0=cym
cytrapn=cyp
c
else if(ipy.eq.0) then
C
C default BC -- standard numeric extrapolation
C
if(ny.gt.2) then
cypp=(ftmp(ix,3)-ftmp(ix,2))/(yy(3)-yy(2))
fherm(2,ix,1)=1.5*cyp-0.5*cypp
C
cymm=(ftmp(ix,ny-1)-ftmp(ix,ny-2))/(yy(ny-1)-yy(ny-2))
fherm(2,ix,ny)=1.5*cym-0.5*cymm
c
else
fherm(2,ix,1) = cyp ! =cym, ny=2
fherm(2,ix,ny)=fherm(2,ix,1)
endif
C
C extrapolate to slope to ghost points just past bdy...
C
cytrap0=2.0*fherm(2,ix,1)-cyp
cytrapn=2.0*fherm(2,ix,ny)-cym
C
else
C
C BC supplied by user. Also use this for extrapolation...
C
cytrap0=2.0*fherm(2,ix,1)-cyp
cytrapn=2.0*fherm(2,ix,ny)-cym
C
endif
C
ftmp(ix,0)=ftmp(ix,1)-cytrap0*(yy(1)-yy(0))
ftmp(ix,ny+1)=ftmp(ix,ny)+cytrapn*(yy(ny+1)-yy(ny))
C
enddo
C
C and do something for the corners...
C
do ix=0,1
do iy=0,1
icx=ix*(nx+1)
icy=iy*(ny+1)
incx=1-2*ix
incy=1-2*iy
ix1=icx+incx
iy1=icy+incy
ftmp(icx,icy)=ftmp(icx,iy1)+ftmp(ix1,icy)-ftmp(ix1,iy1)
C
enddo
enddo
C
C----------------------------------------------------------------
C OK, now ready to compute all the interior coefficients and the
C rest of the edge coefficients as well...
C
do iy=1,ny
iym2=iy
iym1=iym2-1
c
iymm2=iy-1
iymm1=iymm2-1
c
iyp2=iy+1
iyp1=iyp2-1
c
iypp2=iy+2
iypp1=iypp2-1
c
do ix=1,nx
c
c x div. diffs in vicinity
c
ixm2=ix
ixm1=ixm2-1
c
ixmm2=ix-1
ixmm1=ixmm2-1
c
iflagx=0
cxm=(ftmp(ixm2,iy)-ftmp(ixm1,iy))/(xx(ixm2)-xx(ixm1))
if(ix.gt.1) then
cxmm=(ftmp(ixmm2,iy)-ftmp(ixmm1,iy))/
> (xx(ixmm2)-xx(ixmm1))
else
if(ipx.eq.1) then
cxmm=(ftmp(nx-1,iy)-ftmp(nx-2,iy))/
> (xx(nx-1)-xx(nx-2))
else
iflagx=1
endif
endif
c
ixp2=ix+1
ixp1=ixp2-1
c
ixpp2=ix+2
ixpp1=ixpp2-1
c
cxp=(ftmp(ixp2,iy)-ftmp(ixp1,iy))/(xx(ixp2)-xx(ixp1))
if(ix.lt.nx) then
cxpp=(ftmp(ixpp2,iy)-ftmp(ixpp1,iy))/
> (xx(ixpp2)-xx(ixpp1))
else
if(ipx.eq.1) then
cxpp=(ftmp(3,iy)-ftmp(2,iy))/(xx(3)-xx(2))
else
cxpp=cxp+(cxm-cxmm)
endif
endif
c
if(iflagx.eq.1) then
cxmm=cxm+(cxp-cxpp)
endif
c
c Akima weightings + df/dx for interior pts
c
call akherm0(cxmm,cxm,cxp,cxpp,wx,zansr)
if((ix.gt.1).and.(ix.lt.nx)) fherm(1,ix,iy)=zansr
c
c y div. diffs in vicinity
c
iflagy=0
cym=(ftmp(ix,iym2)-ftmp(ix,iym1))/(yy(iym2)-yy(iym1))
if(iy.gt.1) then
cymm=(ftmp(ix,iymm2)-ftmp(ix,iymm1))/
> (yy(iymm2)-yy(iymm1))
else
if(ipy.eq.1) then
cymm=(ftmp(ix,ny-1)-ftmp(ix,ny-2))/
> (yy(ny-1)-yy(ny-2))
else
iflagy=1
endif
endif
c
cyp=(ftmp(ix,iyp2)-ftmp(ix,iyp1))/(yy(iyp2)-yy(iyp1))
if(iy.lt.ny) then
cypp=(ftmp(ix,iypp2)-ftmp(ix,iypp1))/
> (yy(iypp2)-yy(iypp1))
else
if(ipy.eq.1) then
cypp=(ftmp(ix,3)-ftmp(ix,2))/(yy(3)-yy(2))
else
cypp=cyp+(cym-cymm)
endif
endif
c
if(iflagy.eq.1) then
cymm=cym+(cyp-cypp)
endif
c
c Akima weightings + df/dy for interior pts
c
call akherm0(cymm,cym,cyp,cypp,wy,zansr)
if((iy.gt.1).and.(iy.lt.ny)) fherm(2,ix,iy)=zansr
c
c cross derivatives (2nd order divided differences)
c
cxm2=(ftmp(ixm2,iym1)-ftmp(ixm1,iym1))/
> (xx(ixm2)-xx(ixm1))
e(1,1)=(cxm-cxm2)/(yy(iym2)-yy(iym1))
c
cxm2=(ftmp(ixm2,iyp2)-ftmp(ixm1,iyp2))/
> (xx(ixm2)-xx(ixm1))
e(1,2)=(cxm2-cxm)/(yy(iyp2)-yy(iyp1))
c
cxp2=(ftmp(ixp2,iym1)-ftmp(ixp1,iym1))/
> (xx(ixp2)-xx(ixp1))
e(2,1)=(cxp-cxp2)/(yy(iym2)-yy(iym1))
c
cxp2=(ftmp(ixp2,iyp2)-ftmp(ixp1,iyp2))/
> (xx(ixp2)-xx(ixp1))
e(2,2)=(cxp2-cxp)/(yy(iyp2)-yy(iyp1))
c
c the values
c
fherm(3,ix,iy)=(wx(1)*(wy(1)*e(1,1)+wy(2)*e(1,2))+
> wx(2)*(wy(1)*e(2,1)+wy(2)*e(2,2)))/
> ((wx(1)+wx(2))*(wy(1)+wy(2)))
c
enddo
enddo
C
deallocate (ftmp)
return
end
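C----------------------------
C  usage sketch -- NOT part of the original pspline source.
C  a minimal, hypothetical driver showing the akherm2 calling
C  sequence; grid sizes and the test function f(x,y)=x*y are
C  illustrative only.
C
      subroutine akdemo2
      integer nx,ny
      parameter (nx=11,ny=11)
      real x(nx),y(ny),fherm(0:3,nx,ny)
      integer i,j,ilinx,iliny,ier
C
C  build evenly spaced, strictly ascending grids on [0,1]
C
      do i=1,nx
         x(i)=real(i-1)/real(nx-1)
      enddo
      do j=1,ny
         y(j)=real(j-1)/real(ny-1)
      enddo
C
C  supply only the function values fherm(0,*,*); akherm2 fills in
C  the df/dx, df/dy and d2f/dxdy slots.
C
      do j=1,ny
         do i=1,nx
            fherm(0,i,j)=x(i)*y(j)
         enddo
      enddo
C
      call akherm2(x,nx,y,ny,fherm,nx,ilinx,iliny,ier)
      if(ier.ne.0) write(6,*) ' ?akdemo2:  akherm2 ier=',ier
C
      return
      end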
|
# TODO: fix and test smoothing
module Simulations2DSymmetric
using ..Curlcurls
using ..Dispersions
using ..Domains
using ..Laplacians
using ..Lattices
using ..Points
using ..SelfEnergies
using ..Shapes
using ..VectorFields
# using LinearAlgebra
using SparseArrays
using Statistics
import ..AbstractDomain
import ..Symmetric, ..Unsymmetric
import ..Simulation
import ..smooth_dielectric!
import ..smooth_pump!
import ..which_domains
import ..simulation_dielectric!
import ..simulation_pump!
function Simulation(
ω₀::Real,
lattice_domains::Tuple{LatticeDomain{2,Symmetric,Cartesian}},
nondispersive_domains::NTuple{L,NondispersiveDomain{2}},
dispersive_domains::NTuple{M,DispersiveDomain{2}};
k₃₀::Real=0
) where {L,M}
lattice_domain = lattice_domains[1] # there is only one lattice by construction, and it's Cartesian
smoothed = Ref(false)
lattice = lattice_domain.lattice
indices = lattice_domain.indices
imin, imax = extrema(map(ld->ld[1],indices))
jmin, jmax = extrema(map(ld->ld[2],indices))
x = [lattice[i,0] for i ∈ imin:imax]
y = [lattice[0,j] for j ∈ jmin:jmax]
lattice_domain_indices = fill(1,length(lattice_domain.x))
nondispersive_domain_indices = which_domains(nondispersive_domains,lattice_domain.x)
dispersive_domain_indices = which_domains(dispersive_domains,lattice_domain.x)
# populate dielectric
ε = Vector{ComplexF64}(undef,length(lattice_domain.x))
if isempty(nondispersive_domains)
for i ∈ eachindex(ε) ε[i] = lattice_domain.ε end
else
dielectrics = map(n -> getfield(n,:dielectric), nondispersive_domains)
for i ∈ eachindex(ε)
d = nondispersive_domain_indices[i]
ε[i] = iszero(d) ? lattice_domain.ε : dielectrics[d](lattice_domain.x[i])
end
end
# populate pump and dispersive susceptability
F = Vector{Float64}(undef,length(lattice_domain.x))
χ = Vector{AbstractDispersion}(undef,length(lattice_domain.x))
Fs = Vector{Vector{Float64}}(undef,length(dispersive_domains))
for i ∈ eachindex(Fs) Fs[i] = zeros(Float64,length(lattice_domain.x)) end
if isempty(dispersive_domains)
for i ∈ eachindex(F) F[i] = 0 end
for i ∈ eachindex(F) χ[i] = NoDispersion() end
else
pumps = map(n -> n.pump, dispersive_domains)
χs = map(n -> n.χ, dispersive_domains)
for i ∈ eachindex(F)
d = dispersive_domain_indices[i]
if iszero(d)
F[i] = 0
χ[i] = NoDispersion()
else
F[i] = pumps[d](lattice_domain.x[i])
χ[i] = χs[d]
Fs[d][i] = pumps[d](lattice_domain.x[i])
end
end
end
# generate boundary layers
σx, _ = boundary_layer(lattice_domain, x)
_, σy = boundary_layer(lattice_domain, y)
αx, αy = 1 .+ 1im*σx/sqrt(ω₀^2-k₃₀^2), 1 .+ 1im*σy/sqrt(ω₀^2-k₃₀^2)
# generate half sites and boundary layers
x_half, y_half = _generate_half_xy(lattice_domain, x)
σx_half, _ = boundary_layer(lattice_domain, x_half)
_, σy_half = boundary_layer(lattice_domain, y_half)
αx_half, αy_half = 1 .+ 1im*σx_half/sqrt(ω₀^2-k₃₀^2), 1 .+ 1im*σy_half/sqrt(ω₀^2-k₃₀^2)
laplacian = Laplacian{Symmetric}(lattice_domain.lattice, αx, αy, αx_half, αy_half)
# curlcurl = Curlcurl{Symmetric}(lattice_domain.lattice,α[1],α_half[1],nnm,nnp,indices,interior,surface)
Σ = SelfEnergy{Symmetric}(lattice_domain, αx_half, αy_half)
return Simulation{2,Symmetric,ComplexF64,typeof(lattice_domains),typeof(nondispersive_domains),typeof(dispersive_domains),typeof(Σ)}(
lattice_domains,
nondispersive_domains,
dispersive_domains,
lattice_domain.x,
lattice_domain_indices,
nondispersive_domain_indices,
dispersive_domain_indices,
ε,
F,
χ,
Fs,
laplacian,
# curlcurl,
Σ,
(αx, αy),
(σx, σy),
(x_half, y_half),
(σx_half, σy_half),
ω₀,
NaN,
NaN,
k₃₀,
smoothed)
end
################################################################################
# SIMULATION{2} building utilities
# used in building Simulation{2}
function _generate_half_xy(lattice_domain::LatticeDomain{2,Symmetric,Cartesian}, x::Vector{TP}) where TP<:Point{2}
lattice = lattice_domain.lattice
indices = lattice_domain.indices
imin, imax = extrema(map(ld->ld[1],indices))
jmin, jmax = extrema(map(ld->ld[2],indices))
half_x = [lattice[i,0] for i ∈ ((imin:(imax+1)).-1/2)]
half_y = [lattice[0,j] for j ∈ ((jmin:(jmax+1)).-1/2)]
return half_x, half_y
end
@inline function boundary_layer(domain::LatticeDomain{2,Symmetric,Cartesian}, x::Vector{Point{2,Cartesian}})
σx = domain.boundary.bls[1].(x) .+ domain.boundary.bls[2].(x)
σy = domain.boundary.bls[3].(x) .+ domain.boundary.bls[4].(x)
return σx, σy
end
################################################################################
# 2-d smoothing functions
function smooth_dielectric!(sim::Simulation{2,Symmetric}, num_sub_pixel::Integer=NUM_SUBPIXELS)
indices = sim.nondispersive_domain_indices
lattice = sim.lattice
X = Matrix{Point{2,Cartesian}}(undef,num_sub_pixel,num_sub_pixel)
E = Vector{ComplexF64}(undef,num_sub_pixel)
r = LinRange(-.5,.5,num_sub_pixel)
for i ∈ eachindex(indices)
if 1<i<length(indices)
idx, = Tuple(sim.lattice_domain.indices[i])
if !(indices[i] == indices[i-1] == indices[i+1])
for j ∈ eachindex(r) X[j] = lattice[idx+r[j]] end
simulation_dielectric!(E,sim,X)
sim.ε[i] = mean(E)
end
else
# placeholder for periodic lattice domains
end
end
return nothing
end
function smooth_pump!(sim::Simulation{2}, num_sub_pixel::Integer = NUM_SUBPIXELS)
indices = sim.dispersive_domain_indices
lattice = sim.lattice
X = Vector{Point{1}}(undef,num_sub_pixel)
F = Vector{ComplexF64}(undef,num_sub_pixel)
r = LinRange(-.5,.5,num_sub_pixel)
for i ∈ eachindex(indices)
if 1<i<length(indices)
idx, = Tuple(sim.lattice_domain.indices[i])
if !(indices[i] == indices[i-1] == indices[i+1])
for j ∈ eachindex(r) X[j] = lattice[idx+r[j]] end
simulation_pump!(F,sim,X)
sim.F[i] = mean(F)
end
end
end
return nothing
end
################################################################################
# extras
function Base.getproperty(sim::Simulation{2}, sym::Symbol)
if sym==:lattice_domain
return getfield(sim,:lattice_domains)[1]
elseif sym==:latticesize
indices = getfield(sim,:lattice_domains)[1].indices
imin, imax = extrema(map(ld->ld[1],indices))
jmin, jmax = extrema(map(ld->ld[2],indices))
return imax-imin+1, jmax-jmin+1
elseif Base.sym_in(sym,(:k10,:k1₀))
return getfield(sim,:k₁₀)
elseif Base.sym_in(sym,(:k20,:k2₀))
return getfield(sim,:k₂₀)
elseif Base.sym_in(sym,(:k30,:k3₀))
return getfield(sim,:k₃₀)
elseif Base.sym_in(sym,(:shape,:boundary,:lattice))
return getproperty(getfield(sim,:lattice_domains)[1],sym)
elseif Base.sym_in(sym,propertynames(getproperty(getfield(sim,:lattice_domains)[1],:lattice)))
return getproperty(getfield(sim,:lattice_domains)[1],sym)
elseif Base.sym_in(sym,(:lat,:lattice,:Lat,:Lattice))
return getfield(sim,:lattice_domains)[1].lattice
else
return getfield(sim,sym)
end
end
function Base.propertynames(sim::Simulation{2,Symmetric}, private=false)
if private
return fieldnames(Simulation)
else
return (:shape, :boundary, :lattice, :lattice_domain, :lattice_domains, :nondispersive_domains, :dispersive_domains, :ε, :F, :x, :χ, :ω₀, :k1₀, :k2₀, :k3₀, :lattice, propertynames(sim.lattice)...)
end
end
################################################################################
# Pretty Printing
import ...PRINTED_COLOR_GOOD
import ...PRINTED_COLOR_WARN
import ...PRINTED_COLOR_DARK
function Base.show(io::IO,sim::Simulation{2,CLASS}) where CLASS
if sim.smoothed[]
printstyled(io,"Smoothed ",color=PRINTED_COLOR_GOOD)
else
printstyled(io,"Unsmoothed ",color=PRINTED_COLOR_WARN)
end
print(io,"2D ")
CLASS<:Symmetric ? print(io,"Symmetric") : nothing
CLASS<:Unsymmetric ? print(io,"Unsymmetric") : nothing
printstyled(io," Simulation\n",color=PRINTED_COLOR_DARK)
print(io,"\tn sites: ")
printstyled(io,length(sim),"\n",color=:light_cyan)
println(io,"\t====================")
println(IOContext(io,:tabbed2=>true),sim.boundary)
println(io)
println(IOContext(io,:tabbed2=>true),sim.lattice)
println(io)
domains = (sim.nondispersive_domains...,sim.dispersive_domains...)
for d ∈ eachindex(domains)
print(io,"\t\tDomain ", d, " ")
print(IOContext(io,:tabbed2=>true),domains[d])
d < length(domains) ? println(io) : nothing
end
end
end
using .Simulations2DSymmetric
|
/* movstat/snacc.c
*
* Moving window S_n accumulator
*
* Copyright (C) 2018 Patrick Alken
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 3 of the License, or (at
* your option) any later version.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <config.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_movstat.h>
#include <gsl/gsl_sort.h>
#include <gsl/gsl_statistics.h>
typedef double snacc_type_t;
typedef snacc_type_t ringbuf_type_t;
#include "ringbuf.c"
typedef struct
{
snacc_type_t *window; /* linear array for current window */
snacc_type_t *work; /* workspace */
ringbuf *rbuf; /* ring buffer storing current window */
} snacc_state_t;
static size_t
snacc_size(const size_t n)
{
size_t size = 0;
size += sizeof(snacc_state_t);
size += 2 * n * sizeof(snacc_type_t);
size += ringbuf_size(n);
return size;
}
static int
snacc_init(const size_t n, void * vstate)
{
snacc_state_t * state = (snacc_state_t *) vstate;
state->window = (snacc_type_t *) ((unsigned char *) vstate + sizeof(snacc_state_t));
state->work = (snacc_type_t *) ((unsigned char *) state->window + n * sizeof(snacc_type_t));
state->rbuf = (ringbuf *) ((unsigned char *) state->work + n * sizeof(snacc_type_t));
ringbuf_init(n, state->rbuf);
return GSL_SUCCESS;
}
static int
snacc_insert(const snacc_type_t x, void * vstate)
{
snacc_state_t * state = (snacc_state_t *) vstate;
/* add new element to ring buffer */
ringbuf_insert(x, state->rbuf);
return GSL_SUCCESS;
}
static int
snacc_delete(void * vstate)
{
snacc_state_t * state = (snacc_state_t *) vstate;
if (!ringbuf_is_empty(state->rbuf))
ringbuf_pop_back(state->rbuf);
return GSL_SUCCESS;
}
/* FIXME XXX: this is inefficient - could be improved by maintaining a sorted ring buffer */
static int
snacc_get(void * params, snacc_type_t * result, const void * vstate)
{
const snacc_state_t * state = (const snacc_state_t *) vstate;
size_t n = ringbuf_copy(state->window, state->rbuf);
(void) params;
gsl_sort(state->window, 1, n);
*result = gsl_stats_Sn_from_sorted_data(state->window, 1, n, state->work);
return GSL_SUCCESS;
}
static const gsl_movstat_accum sn_accum_type =
{
snacc_size,
snacc_init,
snacc_insert,
snacc_delete,
snacc_get
};
const gsl_movstat_accum *gsl_movstat_accum_Sn = &sn_accum_type;
|
lemma pseudo_divmod: assumes g: "g \<noteq> 0" and *: "pseudo_divmod f g = (q,r)" shows "smult (coeff g (degree g) ^ (Suc (degree f) - degree g)) f = g * q + r" (is ?A) and "r = 0 \<or> degree r < degree g" (is ?B) |
module Structure.Categorical.Multi where
open import Data
open import Data.Boolean
open import Data.Tuple using (_⨯_ ; _,_)
open import Data.Tuple.Raiseᵣ
import Data.Tuple.Raiseᵣ.Functions as Raise
open import Data.Tuple.RaiseTypeᵣ
import Data.Tuple.RaiseTypeᵣ.Functions as RaiseType
open import Function.Multi
open import Function.Multi.Functions
open import Functional using (_→ᶠ_)
import Functional.Dependent as Fn
open import Lang.Instance
import Lvl
open import Logic
open import Logic.Predicate
open import Logic.Propositional
open import Numeral.Natural
open import Numeral.Natural.Oper.Comparisons
open import Structure.Setoid
import Structure.Categorical.Names as Names
import Structure.Operator.Names as Names
import Structure.Relator.Names as Names
open import Structure.Relator.Properties
open import Syntax.Function
open import Type
open import Type.Properties.Singleton
private variable ℓ ℓₒ ℓₘ ℓₘ₁ ℓₘ₂ ℓₑ ℓₑ₁ ℓₑ₂ : Lvl.Level
private variable Obj Obj₁ Obj₂ : Type{ℓₒ}
module Morphism where
module _
(_⟶_ : Obj → Obj → Stmt{ℓₘ})
⦃ morphism-equiv : ∀{x y} → Equiv{ℓₑ}(x ⟶ y) ⦄
where
{-
-- Examples:
-- MorphismChain(3) T x y z
-- = compose(1) ((x ⟶ y) ,_) (MorphismChain(2) T y) z
-- = (x ⟶ y) , (MorphismChain(2) T y z)
-- = (x ⟶ y) , (y ⟶ z) , T(z)
--
-- MorphismChain(4) T x y z w
-- = compose(2) ((x ⟶ y) ,_) (MorphismChain(3) T y) z w
-- = (x ⟶ y) , (MorphismChain(3) T y z w)
-- = (x ⟶ y) , (y ⟶ z) , (z ⟶ w) , T(w)
MorphismChain : (n : ℕ) → (Obj → Type{ℓₘ}) → (RaiseType.repeat (𝐒(n)) Obj) ⇉ Types(Raise.repeat (𝐒(n)) ℓₘ)
MorphismChain 0 T x = T(x)
MorphismChain 1 T x y = (x ⟶ y) , T(y)
MorphismChain (𝐒(𝐒(n))) T x y = compose(𝐒(n)) ((x ⟶ y) ,_) (MorphismChain(𝐒(n)) T y)
-- Examples:
-- MorphismMapping(2) x y
-- = MorphismChain(2)(x ⟶_) x y
-- = (x ⟶ y) → (x ⟶ y)
--
-- MorphismMapping(3) x y z
-- = MorphismChain(3)(x ⟶_) x y z
-- = (x ⟶ y) → (y ⟶ z) → (x ⟶ z)
--
-- MorphismMapping(4) x y z w
-- = MorphismChain(4)(x ⟶_) x y z w
-- = (x ⟶ y) → (y ⟶ z) → (z ⟶ w) → (x ⟶ w)
MorphismMapping : (n : ℕ) → (RaiseType.repeat n Obj) ⇉ Type{ℓₘ}
MorphismMapping(0) = Unit
MorphismMapping(1) = {!!}
MorphismMapping(𝐒(𝐒 n)) = {!!}
--MorphismMapping(𝐒 n) = curry(n) {!Fs ↦ (uncurry(n) (MorphismChain(n)) Fs ⇉ (? → ?))!}
{-
MorphismMapping(1) x = ? -- MorphismChain(1)(x ⟶_) x
MorphismMapping(𝐒(𝐒(n))) x = compose(𝐒(n)) (RaiseType.reduceᵣ{𝐒(n)}{\ℓ₁ ℓ₂ → ℓ₁ Lvl.⊔ ℓ₂} _→ᶠ_) (MorphismChain(𝐒(𝐒(n))) (x ⟶_) x)
-- MorphismChain(𝐒(𝐒(n)))(x ⟶_) x-}
-}
MorphismFlippedChain : (n : ℕ) → (RaiseType.repeat (𝐒(n)) Obj) ⇉ Type{ℓₘ}
MorphismFlippedChain 𝟎 x = x ⟶ x
MorphismFlippedChain (𝐒(n)) x = Out(𝐒(n)) (x ⟶_) x where
Out : (n : ℕ) → (Obj → Type{ℓₘ}) → (RaiseType.repeat (𝐒(n)) Obj) ⇉ Type{ℓₘ}
Out 0 T x = T(x)
Out 1 T x y = (x ⟶ y) → T(y)
Out (𝐒(𝐒(n))) T x y = Out(𝐒(n)) (z ↦ ((x ⟶ y) → T(z))) y
module _
(_⟶₁_ : Obj₁ → Obj₁ → Stmt{ℓₘ₁})
⦃ morphism-equiv₁ : ∀{x y} → Equiv{ℓₑ₁}(x ⟶₁ y) ⦄
(_⟶₂_ : Obj₂ → Obj₂ → Stmt{ℓₘ₂})
⦃ morphism-equiv₂ : ∀{x y} → Equiv{ℓₑ₂}(x ⟶₂ y) ⦄
where
private open module Equiv₂{x}{y} = Equiv(morphism-equiv₂{x}{y}) using () renaming (_≡_ to _≡₂_)
module _
{F : Obj₁ → Obj₂}
where
-- Definition of the relation between a function and an operation that says:
-- The function preserves the operation.
-- Often used when defining homomorphisms.
-- Examples:
-- Preserving(0) (map)(G₁)(G₂)
-- = ∀{x} → (map ∘₀ G₁ ≡ G₂ on₀ map)
-- = ∀{x} → (map(G₁) ≡ G₂(f))
-- Preserving(1) (map)(G₁)(G₂)
-- = ∀{x y}{f : x ⟶ y} → ((map ∘₁ G₁)(f) ≡ (G₂ on₁ map)(f))
-- = ∀{x y}{f : x ⟶ y} → (map(G₁(f)) ≡ G₂(map(f)))
-- Preserving(2) (map)(G₁)(G₂)
-- = ∀{x y z}{f₁ : y ⟶ z}{f₂ : x ⟶ y} → ((map ∘₂ G₁)(f₁)(f₂) ≡ (G₂ on₂ map)(f₁)(f₂))
-- = ∀{x y z}{f₁ : y ⟶ z}{f₂ : x ⟶ y} → (map(G₁ f₁ f₂) ≡ G₂ (map(f₁)) (map(f₂)))
-- Preserving(3) (map)(G₁)(G₂)
-- = ∀{f₁ f₂ f₃} → ((map ∘₃ G₁)(f₁)(f₂)(f₃) ≡ (G₂ on₃ map)(f₁)(f₂)(f₃))
-- = ∀{f₁ f₂ f₃} → (map(G₁ f₁ f₂ f₃) ≡ G₂ (map(f₁)) (map(f₂)) (map(f₃)))
Preserving : (n : ℕ) → (map : ∀{x y} → (x ⟶₁ y) → (F(x) ⟶₂ F(y))) → (quantifier₊(𝐒(n))(∀ₗ) (MorphismFlippedChain(_⟶₁_)(n))) → (quantifier₊(𝐒(n))(∀ₗ) (MorphismFlippedChain(_⟶₂_)(n))) → (RaiseType.repeat (𝐒(n)) Obj₁) ⇉ Stmt{if(n ≤? 0) then (ℓₑ₂) else (ℓₘ₁ Lvl.⊔ ℓₑ₂)}
Preserving 0 map G₁ G₂ x = (map{x}(G₁) ≡ G₂)
Preserving 1 map G₁ G₂ x y = ∀{f : x ⟶₁ y} → map (G₁(f)) ≡ G₂(map f)
Preserving (𝐒(𝐒(n))) map G₁ G₂ x y = {!Preserving(𝐒(n)) map ? ? y!}
-- ∀{f} → (G₁(f)) (G₂(map f))
--Preserving 2 map G₁ G₂ x y z = ∀{f₁ : y ⟶₁ z}{f₂ : x ⟶₁ y} → (map(G₁(f₁)(f₂)) ≡ G₂(map f₁)(map f₂))
-- Preserving (𝐒(𝐒(𝐒(n)))) map G₁ G₂ x y z = {!∀{f} → Preserving(𝐒(𝐒(n))) map !}
--test (P ↦ (∀{f} → P f)) (f ↦ Preserving (𝐒(𝐒(n))) map {!G₁!} {!G₂!} y z) where
-- test : ((P : {!!}) → TYPE ({!!} Lvl.⊔ {!!})) → ((f : {!!} ⟶₁ {!!}) → RaiseType.repeat (𝐒 n) Obj₁ ⇉ Type) → (RaiseType.repeat (𝐒 n) Obj₁ ⇉ Type)
-- compose(𝐒(n)) (P ↦ (∀{f} → P f)) ({!f ↦ Preserving (𝐒(𝐒(n))) map {!G₁!} {!G₂!} {!!} {!!}!})
-- compose(𝐒(n)) {!!} {!!}
-- ∀{f : x ⟶₁ y} →
-- Preserving(𝐒(𝐒(n))) map (\{a b} → {!G₁{a}{x}{b}!}) \{a b} → {!G₂{a}{F x}{b}!}
{-
-- Preserving 3 map G₁ G₂ a x y z
-- = ∀{f : a ⟶₁ x} → Preserving 2 map (G₁(f)) (G₂(map f)) x y z
-- = ∀{f : a ⟶₁ x}{f₁ : x ⟶₁ y}{f₂ : y ⟶₁ z} → (map(G₁(f)(f₁)(f₂)) ≡ G₂(map f)(map f₁)(map f₂))
-- ∀{x y}{f : x ⟶₁ y} → Preserving(𝐒(𝐒(n))) (map) (G₁(f)) (G₂(map(f)))
{- Preserving(𝟎) (f)(g₁)(g₂) = (f(g₁) ≡ g₂)
Preserving(𝐒(𝟎)) (f)(g₁)(g₂) = (∀{x} → f(g₁(x)) ≡ g₂(f(x)))
Preserving(𝐒(𝐒(n))) (f)(g₁)(g₂) = (∀{x} → Preserving(𝐒(n)) (f) (g₁(x)) (g₂(f(x))))
-- ∀{x y z : Objₗ}{f : y ⟶ z}{g : x ⟶ y} → (map(f ∘ g) ≡ map(f) ∘ map(g))
-}
-}
|
The first-move advantage in chess is the inherent advantage of the player (White) who makes the first move in chess. Chess players and theorists generally agree that White begins the game with some advantage. Since 1851, compiled statistics support this view; White consistently wins slightly more often than Black, usually scoring between 52 and 56 percent. White's winning percentage is about the same for tournament games between humans and games between computers. However, White's advantage is less significant in blitz games and games between novices.
|
(** Useful types for HTTP interactions. *)
Require Import Coq.Lists.List.
Require Import FunctionNinjas.All.
Require Import ListString.All.
Import ListNotations.
(** A map from strings to values. *)
Module LStringMap.
(** Naive implementation of a map with an association list. *)
Definition t (A : Type) := list (LString.t * A).
(** Try to find the value of a key. *)
Fixpoint find {A : Type} (map : t A) (key : LString.t) : option A :=
match map with
| [] => None
| (key', val) :: map =>
if LString.eqb key key' then
Some val
else
find map key
end.
End LStringMap.
(** The arguments as given at the end of an URL. For example:
example.com/index.html?arg1=v11,v12&arg2=v2 *)
Module Arguments.
(** A map of keys to argument lists (the most common case is a list of one
value). *)
Definition t := LStringMap.t (list LString.t).
(** Try to find the values of an argument. *)
Definition find (args : t) (key : LString.t) : option (list LString.t) :=
LStringMap.find args key.
End Arguments.
(** Cookies, given by the client or set by the server. *)
Module Cookies.
(** A cookie is a key and a value. *)
Definition t := LStringMap.t LString.t.
(** Try to find the value of a cookie. *)
Fixpoint find (cookies : t) (key : LString.t) : option LString.t :=
LStringMap.find cookies key.
End Cookies.
|
\section{Issues}
|
header{* Classes and properties of graphs *}
theory Ugraph_Properties
imports
Ugraph_Lemmas
"../Girth_Chromatic/Girth_Chromatic"
begin
text{* A ``graph property'' is a set of graphs which is closed under isomorphism. *}
type_synonym ugraph_class = "ugraph set"
definition ugraph_property :: "ugraph_class \<Rightarrow> bool" where
"ugraph_property C \<equiv> \<forall>G \<in> C. \<forall>G'. G \<simeq> G' \<longrightarrow> G' \<in> C"
abbreviation prob_in_class :: "(nat \<Rightarrow> real) \<Rightarrow> ugraph_class \<Rightarrow> nat \<Rightarrow> real" where
"prob_in_class p c n \<equiv> probGn p n (\<lambda>es. edge_space.edge_ugraph n es \<in> c)"
text{* From now on, we consider random graphs not with fixed edge probabilities but rather with a
probability function depending on the number of vertices. Such a function is called a ``threshold''
for a graph property iff
\begin{itemize}
\item for asymptotically \emph{larger} probability functions, the probability that a random graph
is an element of that class tends to \emph{$1$} (``$1$-statement''), and
\item for asymptotically \emph{smaller} probability functions, the probability that a random graph
is an element of that class tends to \emph{$0$} (``$0$-statement'').
\end{itemize} *}
definition is_threshold :: "ugraph_class \<Rightarrow> (nat \<Rightarrow> real) \<Rightarrow> bool" where
"is_threshold c t \<equiv> ugraph_property c \<and> (\<forall>p. nonzero_prob_fun p \<longrightarrow>
(p \<lless> t \<longrightarrow> prob_in_class p c ----> 0) \<and>
(t \<lless> p \<longrightarrow> prob_in_class p c ----> 1))"
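text{* To illustrate the definition with a classical example that is not formalized
here: the function $n \mapsto 1/n$ is a threshold for the property of containing a
triangle. For probability functions asymptotically smaller than $1/n$, almost no random
graph contains a triangle, whereas for asymptotically larger ones, almost every random
graph does. *}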
lemma is_thresholdI[intro]:
assumes "ugraph_property c"
assumes "\<And>p. \<lbrakk> nonzero_prob_fun p; p \<lless> t \<rbrakk> \<Longrightarrow> prob_in_class p c ----> 0"
assumes "\<And>p. \<lbrakk> nonzero_prob_fun p; t \<lless> p \<rbrakk> \<Longrightarrow> prob_in_class p c ----> 1"
shows "is_threshold c t"
using assms unfolding is_threshold_def by blast
end
|
"First ed. of this translation 1902; second impression, 1910."
xxix, 387 pages 24 cm.
by Friedrich Nietzsche; tr. by Johanna Volz.
|
[STATEMENT]
lemma
mtf2_forward_effect4': "q \<in> set xs \<Longrightarrow> distinct xs \<Longrightarrow> index xs x < index xs q - n
\<Longrightarrow> index (mtf2 n q xs) (xs!index xs x) = index xs (xs!index xs x) \<and> index (mtf2 n q xs) (xs!index xs x) < index xs q - n"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>q \<in> set xs; distinct xs; index xs x < index xs q - n\<rbrakk> \<Longrightarrow> index (mtf2 n q xs) (xs ! index xs x) = index xs (xs ! index xs x) \<and> index (mtf2 n q xs) (xs ! index xs x) < index xs q - n
[PROOF STEP]
using mtf2_forward_effect4[where xs=xs and i="index xs x"]
[PROOF STATE]
proof (prove)
using this:
\<lbrakk>?q \<in> set xs; distinct xs; index xs x < index xs ?q - ?n\<rbrakk> \<Longrightarrow> index (mtf2 ?n ?q xs) (xs ! index xs x) = index xs (xs ! index xs x) \<and> index (mtf2 ?n ?q xs) (xs ! index xs x) < index xs ?q - ?n
goal (1 subgoal):
1. \<lbrakk>q \<in> set xs; distinct xs; index xs x < index xs q - n\<rbrakk> \<Longrightarrow> index (mtf2 n q xs) (xs ! index xs x) = index xs (xs ! index xs x) \<and> index (mtf2 n q xs) (xs ! index xs x) < index xs q - n
[PROOF STEP]
by fast |
In April 1984, "West End Girls" was released, becoming a club hit in Los Angeles and San Francisco, and a minor dance hit in Belgium and France, but was only available in the United Kingdom as a 12" import. In March 1985, after long negotiations, Pet Shop Boys cut their contractual ties with Orlando, and hired manager Tom Watkins, who signed them with EMI. They re-recorded "West End Girls" with producer Stephen Hague, and re-released the song in late 1985, topping the charts in both the UK and the U.S.
|
Formal statement is: lemma space_empty: "space M = {} \<Longrightarrow> M = count_space {}" Informal statement is: If the underlying set of a measure space is empty, then the measure space is the empty measure space. |
chapter \<open>Definition\<close>
theory %invisible Definition
imports Main
begin
text \<open>\label{chap:definition}\<close>
text \<open>In stepwise refinement~\cite{DijkstraConstructive,WirthRefinement},
a program is derived from a specification
via a sequence of intermediate specifications.\<close>
text \<open>Pop-refinement (where `pop' stands for `predicates over programs')
is an approach to stepwise refinement,
carried out inside an interactive theorem prover
(e.g.\ Isabelle/HOL, HOL4, Coq, PVS, ACL2)
as follows:
\begin{enumerate}
\item
Formalize the syntax and semantics
of (the needed subset of) the target programming language (and libraries),
as a deep embedding.
\item
Specify the requirements
by defining a predicate over programs
that characterizes the possible implementations.
\item
Refine the specification stepwise
by defining monotonically decreasing predicates over programs
(decreasing with respect to inclusion, i.e.\ logical implication),
according to decisions that narrow down the possible implementations.
\item
Conclude the derivation
with a predicate that characterizes a unique program in explicit syntactic form,
from which the program text is readily obtained.
\end{enumerate}\<close>
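text \<open>As a purely illustrative sketch, not part of this development: the requirement
specification in step 2 could be a predicate such as
@{text "spec\<^sub>0 p \<equiv> wfp p \<and> (\<forall>x. exec p x = f x)"},
saying that an acceptable program @{text "p"} is well-formed and computes a given
function @{text "f"}. Here @{text "wfp"}, @{text "exec"}, and @{text "f"} are
hypothetical placeholders for notions that a concrete formalization of the target
language and the requirements would provide.\<close>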
end %invisible
|
(** * Univalent Basics. Vladimir Voevodsky. Feb. 2010 - Sep. 2011. Port to coq trunk (8.4-8.5) in March 2014.
This file contains results which form a basis of the univalent approach and which do not require the use of universes as types. Fixpoints with values in a universe are used only once in the definition [ isofhlevel ]. Many results in this file do not require any axioms. The first axiom we use is [ funextempty ] which is the functional extensionality axiom for functions with values in the empty type. Closer to the end of the file we use general functional extensionality [ funextfunax ] asserting that two homotopic functions are equal. Since [ funextfunax ] itself is not an "axiom" in our sense i.e. its type is not of h-level 1 we show that it is logically equivalent to a real axiom [ funcontr ] which asserts that the space of sections of a family with contractible fibers is contractible.
*)
(** ** Preambule *)
(** Settings *)
Unset Automatic Introduction. (* This line has to be removed for the file to compile with Coq8.2 *)
(** Imports *)
Add LoadPath "../../".
Require Export Foundations.Generalities.uuu.
(** Universe structure *)
Definition UU := Type .
(* end of "Preambule". *)
(** ** Some standard constructions not using identity types (paths) *)
(** *** Canonical functions from [ empty ] and to [ unit ] *)
Definition fromempty { X : UU } : empty -> X.
Proof. intros X H. destruct H. Defined.
Definition tounit { X : UU } : X -> unit := fun x : X => tt .
(** *** Functions from [ unit ] corresponding to terms *)
Definition termfun { X : UU } ( x : X ) : unit -> X := fun t : unit => x .
(** *** Identity functions and function composition *)
Definition idfun ( T : UU ) := fun t : T => t .
Definition funcomp { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) := fun x : X => g ( f x ) .
(** *** Iteration of an endomorphism *)
Fixpoint iteration { T : UU } ( f : T -> T ) ( n : nat ) : T -> T := match n with
O => idfun T |
S m => funcomp ( iteration f m ) f
end .
(** *** Basic constructions related to the adjoint evaluation function [ X -> ( ( X -> Y ) -> Y ) ] *)
Definition adjev { X Y : UU } ( x : X ) ( f : X -> Y ) : Y := f x.
Definition adjev2 { X Y : UU } ( phi : ( ( X -> Y ) -> Y ) -> Y ) : X -> Y := (fun x : X => phi ( fun f : X -> Y => f x ) ) .
(** *** Pairwise direct products *)
Definition dirprod ( X Y : UU ) := total2 ( fun x : X => Y ) .
Definition dirprodpair { X Y : UU } := tpair ( fun x : X => Y ) .
Definition dirprodadj { X Y Z : UU } ( f : dirprod X Y -> Z ) : X -> Y -> Z := fun x : X => fun y : Y => f ( dirprodpair x y ) .
Definition dirprodf { X Y X' Y' : UU } ( f : X -> Y ) ( f' : X' -> Y' ) ( xx' : dirprod X X' ) : dirprod Y Y' := dirprodpair ( f ( pr1 xx') ) ( f' ( pr2 xx' ) ) .
Definition ddualand { X Y P : UU } (xp : ( X -> P ) -> P ) ( yp : ( Y -> P ) -> P ) : ( dirprod X Y -> P ) -> P.
Proof. intros X Y P xp yp X0 . set ( int1 := fun ypp : ( ( Y -> P ) -> P ) => fun x : X => yp ( fun y : Y => X0 ( dirprodpair x y) ) ) . apply ( xp ( int1 yp ) ) . Defined .
(** *** Negation and double negation *)
Definition neg ( X : UU ) : UU := X -> empty.
Definition negf { X Y : UU } ( f : X -> Y ) : neg Y -> neg X := fun phi : Y -> empty => fun x : X => phi ( f x ) .
Definition dneg ( X : UU ) : UU := ( X -> empty ) -> empty .
Definition dnegf { X Y : UU } ( f : X -> Y ) : dneg X -> dneg Y := negf ( negf f ) .
Definition todneg ( X : UU ) : X -> dneg X := adjev .
Definition dnegnegtoneg { X : UU } : dneg ( neg X ) -> neg X := adjev2 .
Lemma dneganddnegl1 { X Y : UU } ( dnx : dneg X ) ( dny : dneg Y ) : neg ( X -> neg Y ) .
Proof. intros. intro X2. assert ( X3 : dneg X -> neg Y ) . apply ( fun xx : dneg X => dnegnegtoneg ( dnegf X2 xx ) ) . apply ( dny ( X3 dnx ) ) . Defined.
Definition dneganddnegimpldneg { X Y : UU } ( dnx : dneg X ) ( dny : dneg Y ) : dneg ( dirprod X Y ) := ddualand dnx dny.
(** *** Logical equivalence *)
Definition logeq ( X Y : UU ) := dirprod ( X -> Y ) ( Y -> X ) .
Notation " X <-> Y " := ( logeq X Y ) : type_scope .
Definition logeqnegs { X Y : UU } ( l : X <-> Y ) : ( neg X ) <-> ( neg Y ) := dirprodpair ( negf ( pr2 l ) ) ( negf ( pr1 l ) ) .
(* end of "Some standard constructions not using idenity types (paths)". *)
(** ** Operations on [ paths ] *)
(** *** Composition of paths and inverse paths *)
Definition pathscomp0 { X : UU } { a b c : X } ( e1 : paths a b ) ( e2 : paths b c ) : paths a c .
Proof. intros. destruct e1. apply e2 . Defined.
Hint Resolve @pathscomp0 : pathshints .
Definition pathscomp0rid { X : UU } { a b : X } ( e1 : paths a b ) : paths ( pathscomp0 e1 ( idpath b ) ) e1 .
Proof. intros. destruct e1. simpl. apply idpath. Defined.
(** Note that we do no need [ pathscomp0lid ] since the corresponding two terms are convertible to each other due to our definition of [ pathscomp0 ] . If we defined it by destructing [ e2 ] and applying [ e1 ] then [ pathsinv0rid ] would be trivial but [ pathsinv0lid ] would require a proof. Similarly we do not need a lemma to connect [ pathsinv0 ( idpath _ ) ] to [ idpath ] *)
Definition pathsinv0 { X : UU } { a b : X } ( e : paths a b ) : paths b a .
Proof. intros. destruct e. apply idpath. Defined.
Hint Resolve @pathsinv0 : pathshints .
Definition pathsinv0l { X : UU } { a b : X } ( e : paths a b ) : paths ( pathscomp0 ( pathsinv0 e ) e ) ( idpath _ ) .
Proof. intros. destruct e. apply idpath. Defined.
Definition pathsinv0r { X : UU } { a b : X } ( e : paths a b ) : paths ( pathscomp0 e ( pathsinv0 e ) ) ( idpath _ ) .
Proof. intros. destruct e. apply idpath. Defined.
Definition pathsinv0inv0 { X : UU } { x x' : X } ( e : paths x x' ) : paths ( pathsinv0 ( pathsinv0 e ) ) e .
Proof. intros. destruct e. apply idpath. Defined.
(** *** Direct product of paths *)
Definition pathsdirprod { X Y : UU } { x1 x2 : X } { y1 y2 : Y } ( ex : paths x1 x2 ) ( ey : paths y1 y2 ) : paths ( dirprodpair x1 y1 ) ( dirprodpair x2 y2 ) .
Proof . intros . destruct ex . destruct ey . apply idpath . Defined .
(** *** The function [ maponpaths ] between paths types defined by a function between ambient types and its behavior relative to [ pathscomp0 ] and [ pathsinv0 ] *)
Definition maponpaths { T1 T2 : UU } ( f : T1 -> T2 ) { t1 t2 : T1 } ( e: paths t1 t2 ) : paths ( f t1 ) ( f t2 ) .
Proof. intros . destruct e . apply idpath. Defined.
Definition maponpathscomp0 { X Y : UU } { x1 x2 x3 : X } ( f : X -> Y ) ( e1 : paths x1 x2 ) ( e2 : paths x2 x3 ) : paths ( maponpaths f ( pathscomp0 e1 e2 ) ) ( pathscomp0 ( maponpaths f e1 ) ( maponpaths f e2 ) ) .
Proof. intros. destruct e1. destruct e2. simpl. apply idpath. Defined.
Definition maponpathsinv0 { X Y : UU } ( f : X -> Y ) { x1 x2 : X } ( e : paths x1 x2 ) : paths ( maponpaths f ( pathsinv0 e ) ) ( pathsinv0 ( maponpaths f e ) ) .
Proof. intros . destruct e . apply idpath . Defined .
(** *** [ maponpaths ] for the identity functions and compositions of functions *)
Lemma maponpathsidfun { X : UU } { x x' : X } ( e : paths x x' ) : paths ( maponpaths ( idfun X ) e ) e .
Proof. intros. destruct e. apply idpath . Defined.
Lemma maponpathscomp { X Y Z : UU } { x x' : X } ( f : X -> Y ) ( g : Y -> Z ) ( e : paths x x' ) : paths ( maponpaths g ( maponpaths f e ) ) ( maponpaths ( funcomp f g ) e) .
Proof. intros. destruct e. apply idpath. Defined.
(** The following four statements show that [ maponpaths ] defined by a function f which is homotopic to the identity is "surjective". It is later used to show that the maponpaths defined by a function which is a weak equivalence is itself a weak equivalence. *)
Definition maponpathshomidinv { X : UU } (f:X -> X) ( h: forall x:X, paths (f x) x) ( x x' : X ) : paths (f x) (f x') -> paths x x' := (fun e: paths (f x) (f x') => pathscomp0 (pathsinv0 (h x)) (pathscomp0 e (h x'))).
Lemma maponpathshomid1 { X : UU } (f:X -> X) (h: forall x:X, paths (f x) x) { x x' : X } (e:paths x x'): paths (maponpaths f e) (pathscomp0 (h x) (pathscomp0 e (pathsinv0 (h x')))).
Proof. intros. destruct e. change (pathscomp0 (idpath x) (pathsinv0 (h x))) with (pathsinv0 (h x)). assert (ee: paths (maponpaths f (idpath x)) (idpath (f x))). apply idpath .
assert (eee: paths (idpath (f x)) (pathscomp0 (h x) (pathsinv0 (h x)))). apply (pathsinv0 (pathsinv0r (h x))). apply (pathscomp0 ee eee). Defined.
Lemma maponpathshomid12 { X : UU } { x x' fx fx' : X } (e:paths fx fx') (hx:paths fx x) (hx':paths fx' x') : paths (pathscomp0 hx (pathscomp0 (pathscomp0 (pathsinv0 hx) (pathscomp0 e hx')) (pathsinv0 hx'))) e.
Proof. intros. destruct hx. destruct hx'. destruct e. simpl. apply idpath. Defined.
Lemma maponpathshomid2 { X : UU } (f:X->X) (h: forall x:X, paths (f x) x) ( x x' : X ) (e:paths (f x) (f x')) : paths (maponpaths f (maponpathshomidinv f h _ _ e)) e.
Proof. intros. assert (ee: paths (pathscomp0 (h x) (pathscomp0 (pathscomp0 (pathsinv0 (h x)) (pathscomp0 e (h x'))) (pathsinv0 (h x')))) e). apply (maponpathshomid12 e (h x) (h x')). assert (eee: paths (maponpaths f (pathscomp0 (pathsinv0 (h x)) (pathscomp0 e (h x')))) (pathscomp0 (h x) (pathscomp0 (pathscomp0 (pathsinv0 (h x)) (pathscomp0 e (h x'))) (pathsinv0 (h x'))))). apply maponpathshomid1. apply (pathscomp0 eee ee). Defined.
(** Here we consider the behavior of maponpaths in the case of a projection [ p ] with a section [ s ]. *)
Definition pathssec1 { X Y : UU } ( s : X -> Y ) ( p : Y -> X ) ( eps : forall x:X , paths ( p ( s x ) ) x ) ( x : X ) ( y : Y ) ( e : paths (s x) y ) : paths x (p y) := pathscomp0 ( pathsinv0 ( eps x ) ) ( maponpaths p e ) .
Definition pathssec2 { X Y : UU } ( s : X -> Y ) ( p : Y -> X ) ( eps : forall x : X , paths ( p ( s x ) ) x ) ( x x' : X ) ( e : paths ( s x ) ( s x' ) ) : paths x x'.
Proof. intros . set ( e' := pathssec1 s p eps _ _ e ) . apply ( pathscomp0 e' ( eps x' ) ) . Defined .
Definition pathssec2id { X Y : UU } ( s : X -> Y ) ( p : Y -> X ) ( eps : forall x : X , paths ( p ( s x ) ) x ) ( x : X ) : paths ( pathssec2 s p eps _ _ ( idpath ( s x ) ) ) ( idpath x ) .
Proof. intros. unfold pathssec2. unfold pathssec1. simpl. assert (e: paths (pathscomp0 (pathsinv0 (eps x)) (idpath (p (s x)))) (pathsinv0 (eps x))). apply pathscomp0rid. assert (ee: paths
(pathscomp0 (pathscomp0 (pathsinv0 (eps x)) (idpath (p (s x)))) (eps x))
(pathscomp0 (pathsinv0 (eps x)) (eps x))).
apply (maponpaths (fun e0: _ => pathscomp0 e0 (eps x)) e). assert (eee: paths (pathscomp0 (pathsinv0 (eps x)) (eps x)) (idpath x)). apply (pathsinv0l (eps x)). apply (pathscomp0 ee eee). Defined.
Definition pathssec3 { X Y : UU } (s:X-> Y) (p:Y->X) (eps: forall x:X, paths (p (s x)) x) { x x' : X } ( e : paths x x' ) : paths (pathssec2 s p eps _ _ (maponpaths s e)) e.
Proof. intros. destruct e. simpl. unfold pathssec2. unfold pathssec1. simpl. apply pathssec2id. Defined.
(* end of "Operations on [ paths ]". *)
(** ** Fibrations and paths *)
Definition tppr { T : UU } { P : T -> UU } ( x : total2 P ) : paths x ( tpair _ (pr1 x) (pr2 x) ) .
Proof. intros. destruct x. apply idpath. Defined.
Definition constr1 { X : UU } ( P : X -> UU ) { x x' : X } ( e : paths x x' ) : total2 (fun f: P x -> P x' => ( total2 ( fun ee : forall p : P x, paths (tpair _ x p) (tpair _ x' ( f p ) ) => forall pp : P x, paths (maponpaths ( @pr1 _ _ ) ( ee pp ) ) e ) ) ) .
Proof. intros. destruct e. split with ( idfun ( P x ) ). simpl. split with (fun p : P x => idpath _ ) . unfold maponpaths. simpl. apply (fun pp : P x => idpath _ ) . Defined.
Definition transportf { X : UU } ( P : X -> UU ) { x x' : X } ( e : paths x x' ) : P x -> P x' := pr1 ( constr1 P e ) .
Definition transportb { X : UU } ( P : X -> UU ) { x x' : X } ( e : paths x x' ) : P x' -> P x := transportf P ( pathsinv0 e ) .
Lemma functtransportf { X Y : UU } ( f : X -> Y ) ( P : Y -> UU ) { x x' : X } ( e : paths x x' ) ( p : P ( f x ) ) : paths ( transportf ( fun x => P ( f x ) ) e p ) ( transportf P ( maponpaths f e ) p ) .
Proof. intros. destruct e. apply idpath. Defined.
(** ** First homotopy notions *)
(** *** Homotopy between functions *)
Definition homot { X Y : UU } ( f g : X -> Y ) := forall x : X , paths ( f x ) ( g x ) .
(** *** Contractibility, homotopy fibers etc. *)
(** Contractible types. *)
Definition iscontr (T:UU) : UU := total2 (fun cntr:T => forall t:T, paths t cntr).
Definition iscontrpair { T : UU } := tpair (fun cntr:T => forall t:T, paths t cntr).
Definition iscontrpr1 { T : UU } := @pr1 T ( fun cntr:T => forall t:T, paths t cntr ) .
Lemma iscontrretract { X Y : UU } ( p : X -> Y ) ( s : Y -> X ) ( eps : forall y : Y, paths ( p ( s y ) ) y ) ( is : iscontr X ) : iscontr Y.
Proof . intros . destruct is as [ x fe ] . set ( y := p x ) . split with y . intro y' . apply ( pathscomp0 ( pathsinv0 ( eps y' ) ) ( maponpaths p ( fe ( s y' ) ) ) ) . Defined .
Lemma proofirrelevancecontr { X : UU }(is: iscontr X) ( x x' : X ): paths x x'.
Proof. intros. unfold iscontr in is. destruct is as [ t x0 ]. set (e:= x0 x). set (e':= pathsinv0 (x0 x')). apply (pathscomp0 e e'). Defined.
(** Coconuses - spaces of paths which begin or end at a given point. *)
Definition coconustot ( T : UU ) ( t : T ) := total2 (fun t':T => paths t' t).
Definition coconustotpair ( T : UU ) { t t' : T } (e: paths t' t) : coconustot T t := tpair (fun t':T => paths t' t) t' e.
Definition coconustotpr1 ( T : UU ) ( t : T ) := @pr1 _ (fun t':T => paths t' t) .
Lemma connectedcoconustot { T : UU } { t : T } ( c1 c2 : coconustot T t ) : paths c1 c2.
Proof. intros. destruct c1 as [ x0 x ]. destruct x. destruct c2 as [ x1 x ]. destruct x. apply idpath. Defined.
Lemma iscontrcoconustot ( T : UU ) (t:T) : iscontr (coconustot T t).
Proof. intros. unfold iscontr. set (t0:= tpair (fun t':T => paths t' t) t (idpath t)). split with t0. intros. apply connectedcoconustot. Defined.
Definition coconusfromt ( T : UU ) (t:T) := total2 (fun t':T => paths t t').
Definition coconusfromtpair ( T : UU ) { t t' : T } (e: paths t t') : coconusfromt T t := tpair (fun t':T => paths t t') t' e.
Definition coconusfromtpr1 ( T : UU ) ( t : T ) := @pr1 _ (fun t':T => paths t t') .
Lemma connectedcoconusfromt { T : UU } { t : T } ( e1 e2 : coconusfromt T t ) : paths e1 e2.
Proof. intros. destruct e1 as [x0 x]. destruct x. destruct e2 as [ x1 x ]. destruct x. apply idpath. Defined.
Lemma iscontrcoconusfromt ( T : UU ) (t:T) : iscontr (coconusfromt T t).
Proof. intros. unfold iscontr. set (t0:= tpair (fun t':T => paths t t') t (idpath t)). split with t0. intros. apply connectedcoconusfromt. Defined.
(** Pathsspace of a type. *)
Definition pathsspace (T:UU) := total2 (fun t:T => coconusfromt T t).
Definition pathsspacetriple ( T : UU ) { t1 t2 : T } (e: paths t1 t2): pathsspace T := tpair _ t1 (coconusfromtpair T e).
Definition deltap ( T : UU ) : T -> pathsspace T := (fun t:T => pathsspacetriple T (idpath t)).
Definition pathsspace' ( T : UU ) := total2 (fun xy : dirprod T T => (match xy with tpair _ x y => paths x y end)).
(** Homotopy fibers. *)
Definition hfiber { X Y : UU } (f:X -> Y) (y:Y) : UU := total2 (fun pointover:X => paths (f pointover) y).
Definition hfiberpair { X Y : UU } (f:X -> Y) { y : Y } ( x : X ) ( e : paths ( f x ) y ) := tpair (fun pointover:X => paths (f pointover) y) x e .
Definition hfiberpr1 { X Y : UU } ( f : X -> Y ) ( y : Y ) := @pr1 _ (fun pointover:X => paths (f pointover) y) .
(** Paths in homotopy fibers. *)
Lemma hfibertriangle1 { X Y : UU } (f:X -> Y) { y : Y } { xe1 xe2: hfiber f y } (e: paths xe1 xe2): paths (pr2 xe1) (pathscomp0 (maponpaths f (maponpaths ( @pr1 _ _ ) e)) (pr2 xe2)).
Proof. intros. destruct e. simpl. apply idpath. Defined.
Lemma hfibertriangle1inv0 { X Y : UU } (f:X -> Y) { y : Y } { xe1 xe2: hfiber f y } (e: paths xe1 xe2) : paths ( pathscomp0 ( maponpaths f ( pathsinv0 ( maponpaths ( @pr1 _ _ ) e ) ) ) ( pr2 xe1 ) ) ( pr2 xe2 ) .
Proof . intros . destruct e . apply idpath . Defined .
Lemma hfibertriangle2 { X Y : UU } (f:X -> Y) { y : Y } (xe1 xe2: hfiber f y) (ee: paths (pr1 xe1) (pr1 xe2))(eee: paths (pr2 xe1) (pathscomp0 (maponpaths f ee) (pr2 xe2))): paths xe1 xe2.
Proof. intros. destruct xe1 as [ t e1 ]. destruct xe2. simpl in eee. simpl in ee. destruct ee. simpl in eee. apply (maponpaths (fun e: paths (f t) y => hfiberpair f t e) eee). Defined.
(** Coconus of a function - the total space of the family of h-fibers. *)
Definition coconusf { X Y : UU } (f: X -> Y):= total2 (fun y:_ => hfiber f y).
Definition fromcoconusf { X Y : UU } (f: X -> Y) : coconusf f -> X := fun yxe:_ => pr1 (pr2 yxe).
Definition tococonusf { X Y:UU } (f: X -> Y) : X -> coconusf f := fun x:_ => tpair _ (f x) (hfiberpair f x (idpath _ ) ).
(** Total spaces of families and homotopies *)
Definition famhomotfun { X : UU } { P Q : X -> UU } ( h : homot P Q ) ( xp : total2 P ) : total2 Q .
Proof . intros. destruct xp as [ x p ] . split with x . destruct ( h x ) . apply p . Defined.
Definition famhomothomothomot { X : UU } { P Q : X -> UU } ( h1 h2 : homot P Q ) ( H : forall x : X , paths ( h1 x ) ( h2 x ) ) : homot ( famhomotfun h1 ) ( famhomotfun h2 ) .
Proof . intros . intro xp . destruct xp as [x p] . simpl . apply ( maponpaths ( fun q => tpair Q x q ) ) . destruct ( H x ) . apply idpath . Defined.
(** ** Weak equivalences *)
(** *** Basics *)
Definition isweq { X Y : UU } ( f : X -> Y) : UU := forall y:Y, iscontr (hfiber f y) .
Lemma idisweq (T:UU) : isweq (fun t:T => t).
Proof. intros.
unfold isweq.
intro y .
assert (y0: hfiber (fun t : T => t) y). apply (tpair (fun pointover:T => paths ((fun t:T => t) pointover) y) y (idpath y)).
split with y0. intro t.
destruct y0 as [x0 e0]. destruct t as [x1 e1]. destruct e0. destruct e1. apply idpath. Defined.
Definition weq ( X Y : UU ) : UU := total2 (fun f:X->Y => isweq f) .
Definition pr1weq ( X Y : UU):= @pr1 _ _ : weq X Y -> (X -> Y).
Coercion pr1weq : weq >-> Funclass.
Definition weqpair { X Y : UU } (f:X-> Y) (is: isweq f) : weq X Y := tpair (fun f:X->Y => isweq f) f is.
Definition idweq (X:UU) : weq X X := tpair (fun f:X->X => isweq f) (fun x:X => x) ( idisweq X ) .
Definition isweqtoempty { X : UU } (f : X -> empty ) : isweq f.
Proof. intros. intro y. apply (fromempty y). Defined.
Definition weqtoempty { X : UU } ( f : X -> empty ) := weqpair _ ( isweqtoempty f ) .
Lemma isweqtoempty2 { X Y : UU } ( f : X -> Y ) ( is : neg Y ) : isweq f .
Proof. intros . intro y . destruct ( is y ) . Defined .
Definition weqtoempty2 { X Y : UU } ( f : X -> Y ) ( is : neg Y ) := weqpair _ ( isweqtoempty2 f is ) .
Definition invmap { X Y : UU } ( w : weq X Y ) : Y -> X .
Proof. intros X Y w y . apply (pr1 (pr1 ( pr2 w y ))). Defined.
(** We now define different homotopies and maps between the paths spaces corresponding to a weak equivalence. What may look like unnecessary complexity in the definition of [ weqgf ] is due to the fact that the "naive" definition, that of [ weqgf00 ], needs to be corrected in order for lemma [ weqfgf ] to hold. *)
Definition homotweqinvweq { T1 T2 : UU } ( w : weq T1 T2 ) : forall t2:T2, paths ( w ( invmap w t2 ) ) t2.
Proof. intros. unfold invmap. simpl. apply (pr2 (pr1 ( pr2 w t2 ) ) ) . Defined.
Definition homotinvweqweq0 { X Y : UU } ( w : weq X Y ) ( x : X ) : paths x ( invmap w ( w x ) ) .
Proof. intros. set (isfx:= ( pr2 w ( w x ) ) ). set (pr1fx:= @pr1 X (fun x':X => paths ( w x' ) ( w x ))).
set (xe1:= (hfiberpair w x (idpath ( w x)))). apply (maponpaths pr1fx (pr2 isfx xe1)). Defined.
Definition homotinvweqweq { X Y : UU } ( w : weq X Y ) ( x : X ) : paths (invmap w ( w x ) ) x := pathsinv0 (homotinvweqweq0 w x).
Lemma diaglemma2 { X Y : UU } (f:X -> Y) { x x':X } (e1: paths x x')(e2: paths (f x') (f x)) (ee: paths (idpath (f x)) (pathscomp0 (maponpaths f e1) e2)): paths (maponpaths f (pathsinv0 e1)) e2.
Proof. intros. destruct e1. simpl. simpl in ee. assumption. Defined.
Definition homotweqinvweqweq { X Y : UU } ( w : weq X Y ) ( x : X ) : paths (maponpaths w (homotinvweqweq w x)) (homotweqinvweq w ( w x)).
Proof. intros. set (xe1:= hfiberpair w x (idpath (w x))). set (isfx:= ( pr2 w ) (w x)). set (xe2:= pr1 isfx). set (e:= pr2 isfx xe1). set (ee:=hfibertriangle1 w e). simpl in ee.
apply (diaglemma2 w (homotinvweqweq0 w x) ( homotweqinvweq w ( w x ) ) ee ). Defined.
Definition invmaponpathsweq { X Y : UU } ( w : weq X Y ) ( x x' : X ) : paths (w x) (w x') -> paths x x':= pathssec2 w (invmap w ) (homotinvweqweq w ) _ _ .
Definition invmaponpathsweqid { X Y : UU } ( w : weq X Y ) ( x : X ) : paths (invmaponpathsweq w _ _ (idpath (w x))) (idpath x):= pathssec2id w (invmap w ) (homotinvweqweq w ) x.
Definition pathsweq1 { X Y : UU } ( w : weq X Y ) ( x : X ) ( y : Y ) : paths (w x) y -> paths x (invmap w y) := pathssec1 w (invmap w ) (homotinvweqweq w ) _ _ .
Definition pathsweq1' { X Y : UU } ( w : weq X Y ) ( x : X ) ( y : Y ) : paths x (invmap w y) -> paths ( w x ) y := fun e:_ => pathscomp0 (maponpaths w e) (homotweqinvweq w y).
Definition pathsweq3 { X Y : UU } ( w : weq X Y ) { x x' : X } ( e : paths x x' ) : paths (invmaponpathsweq w x x' (maponpaths w e)) e:= pathssec3 w (invmap w ) (homotinvweqweq w ) _ .
Definition pathsweq4 { X Y : UU } ( w : weq X Y ) ( x x' : X ) ( e : paths ( w x ) ( w x' )) : paths (maponpaths w (invmaponpathsweq w x x' e)) e.
Proof. intros. destruct w as [ f is1 ] . set ( w := weqpair f is1 ) . set (g:=invmap w ). set (gf:= fun x:X => (g (f x))). set (ee:= maponpaths g e). set (eee:= maponpathshomidinv gf (homotinvweqweq w ) x x' ee ).
assert (e1: paths (maponpaths f eee) e).
assert (e2: paths (maponpaths g (maponpaths f eee)) (maponpaths g e)).
assert (e3: paths (maponpaths g (maponpaths f eee)) (maponpaths gf eee)). apply maponpathscomp.
assert (e4: paths (maponpaths gf eee) ee). apply maponpathshomid2. apply (pathscomp0 e3 e4).
set (s:= @maponpaths _ _ g (f x) (f x')). set (p:= @pathssec2 _ _ g f (homotweqinvweq w ) (f x) (f x')). set (eps:= @pathssec3 _ _ g f (homotweqinvweq w ) (f x) (f x')). apply (pathssec2 s p eps _ _ e2 ).
assert (e5: paths (maponpaths f (invmaponpathsweq w x x' e)) (maponpaths f (invmaponpathsweq w x x' (maponpaths f eee)))). apply (pathsinv0 (maponpaths (fun e0: paths (f x) (f x') => (maponpaths f (invmaponpathsweq w x x' e0))) e1)).
assert (X0: paths (invmaponpathsweq w x x' (maponpaths f eee)) eee). apply (pathsweq3 w ).
assert (e6: paths (maponpaths f (invmaponpathsweq w x x' (maponpaths f eee))) (maponpaths f eee)). apply (maponpaths (fun eee0: paths x x' => maponpaths f eee0) X0). set (e7:= pathscomp0 e5 e6). set (pathscomp0 e7 e1).
assumption. Defined.
(** *** Weak equivalences between contractible types (other implications are proved below) *)
Lemma iscontrweqb { X Y : UU } ( w : weq X Y ) ( is : iscontr Y ) : iscontr X.
Proof. intros . apply ( iscontrretract (invmap w ) w (homotinvweqweq w ) is ). Defined.
(** *** Functions between fibers defined by a path on the base are weak equivalences *)
Lemma isweqtransportf { X : UU } (P:X -> UU) { x x' : X } (e:paths x x'): isweq (transportf P e).
Proof. intros. destruct e. apply idisweq. Defined.
Lemma isweqtransportb { X : UU } (P:X -> UU) { x x' : X } (e:paths x x'): isweq (transportb P e).
Proof. intros. apply (isweqtransportf _ (pathsinv0 e)). Defined.
(** *** [ unit ] and contractibility *)
(** [ unit ] is contractible (recall that [ tt ] is the name of the canonical term of the type [ unit ]). *)
Lemma unitl0: paths tt tt -> coconustot _ tt.
Proof. intros X. apply (coconustotpair _ X). Defined.
Lemma unitl1: coconustot _ tt -> paths tt tt.
Proof. intro X. destruct X as [ x t ]. destruct x. assumption. Defined.
Lemma unitl2: forall e: paths tt tt, paths (unitl1 (unitl0 e)) e.
Proof. intros. unfold unitl0. simpl. apply idpath. Defined.
Lemma unitl3: forall e:paths tt tt, paths e (idpath tt).
Proof. intros.
assert (e0: paths (unitl0 (idpath tt)) (unitl0 e)). eapply connectedcoconustot.
assert (e1:paths (unitl1 (unitl0 (idpath tt))) (unitl1 (unitl0 e))). apply (maponpaths unitl1 e0).
assert (e2: paths (unitl1 (unitl0 e)) e). eapply unitl2.
assert (e3: paths (unitl1 (unitl0 (idpath tt))) (idpath tt)). eapply unitl2.
destruct e1. clear e0. destruct e2. assumption. Defined.
Theorem iscontrunit: iscontr (unit).
Proof. assert (pp:forall x:unit, paths x tt). intros. destruct x. apply (idpath _).
apply (tpair (fun cntr:unit => forall t:unit, paths t cntr) tt pp). Defined.
(** [ paths ] in [ unit ] are contractible. *)
Theorem iscontrpathsinunit ( x x' : unit ) : iscontr ( paths x x' ) .
Proof. intros . assert (c:paths x x'). destruct x. destruct x'. apply idpath.
assert (X: forall g:paths x x', paths g c). intro. assert (e:paths c c). apply idpath. destruct c. destruct x. apply unitl3. apply (iscontrpair c X). Defined.
(** A type [ T : UU ] is contractible if and only if the canonical map [ T -> unit ] is a weak equivalence. *)
Lemma ifcontrthenunitl0 ( e1 e2 : paths tt tt ) : paths e1 e2.
Proof. intros. assert (e3: paths e1 (idpath tt) ). apply unitl3.
assert (e4: paths e2 (idpath tt)). apply unitl3. destruct e3. destruct e4. apply idpath. Defined.
Lemma isweqcontrtounit { T : UU } (is : iscontr T) : (isweq (fun t:T => tt)).
Proof. intros T X. unfold isweq. intro y. destruct y.
assert (c: hfiber (fun x:T => tt) tt). destruct X as [ t x0 ]. eapply (hfiberpair _ t (idpath tt)).
assert (e: forall d: (hfiber (fun x:T => tt) tt), paths d c). intros. destruct c as [ t x] . destruct d as [ t0 x0 ].
assert (e': paths x x0). apply ifcontrthenunitl0 .
assert (e'': paths t t0). destruct X as [t1 x1 ].
assert (e''': paths t t1). apply x1.
assert (e'''': paths t0 t1). apply x1.
destruct e''''. assumption.
destruct e''. destruct e'. apply idpath. apply (iscontrpair c e). Defined.
Definition weqcontrtounit { T : UU } ( is : iscontr T ) := weqpair _ ( isweqcontrtounit is ) .
Theorem iscontrifweqtounit { X : UU } ( w : weq X unit ) : iscontr X.
Proof. intros X X0. apply (iscontrweqb X0 ). apply iscontrunit. Defined.
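(** A small sketch of our own ( the name [ examplewequnitunit ] is hypothetical ) : since [ unit ] is contractible , [ weqcontrtounit ] specializes to a weak equivalence of [ unit ] with itself . *)
Definition examplewequnitunit : weq unit unit := weqcontrtounit iscontrunit .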
(** *** A homotopy equivalence is a weak equivalence *)
Definition hfibersgftog { X Y Z : UU } (f:X -> Y) (g: Y -> Z) (z:Z) ( xe : hfiber (fun x:X => g(f x)) z ) : hfiber g z := hfiberpair g ( f ( pr1 xe ) ) ( pr2 xe ) .
Lemma constr2 { X Y : UU } (f:X -> Y)(g: Y-> X) (efg: forall y:Y, paths (f(g y)) y) ( x0 : X) ( z0 : hfiber g x0 ) : total2 (fun z': hfiber (fun x:X => g (f x)) x0 => paths z0 (hfibersgftog f g x0 z')).
Proof. intros. destruct z0 as [ y e ].
assert (eint: paths y (f x0 )). assert (e0: paths (f(g y)) y). apply efg. assert (e1: paths (f(g y)) (f x0 )). apply (maponpaths f e). destruct e1. apply pathsinv0. assumption.
set (int1:=constr1 (fun y:Y => paths (g y) x0 ) eint). destruct int1 as [ t x ].
set (int2:=hfiberpair (fun x0 : X => g (f x0)) x0 (t e)). split with int2. apply x. Defined.
Lemma iscontrhfiberl1 { X Y : UU } (f:X -> Y) (g: Y-> X) (efg: forall y:Y, paths (f(g y)) y) (x0 : X): iscontr (hfiber (fun x:X => g (f x)) x0 ) ->iscontr (hfiber g x0).
Proof. intros X Y f g efg x0 X0. set (X1:= hfiber (fun x:X => g(f x)) x0 ). set (Y1:= hfiber g x0 ). set (f1:= hfibersgftog f g x0 ). set (g1:= fun z0:_ => pr1 (constr2 f g efg x0 z0)).
set (efg1:= (fun y1:Y1 => pathsinv0 ( pr2 (constr2 f g efg x0 y1 ) ) ) ) . simpl in efg1. apply ( iscontrretract f1 g1 efg1). assumption. Defined.
Lemma iscontrhfiberl2 { X Y : UU } ( f1 f2 : X-> Y) (h: forall x:X, paths (f2 x) (f1 x)) (y:Y): iscontr (hfiber f2 y) -> iscontr (hfiber f1 y).
Proof. intros X Y f1 f2 h y X0.
set (f:= (fun z:(hfiber f1 y) =>
match z with
(tpair _ x e) => hfiberpair f2 x (pathscomp0 (h x) e)
end)).
set (g:= (fun z:(hfiber f2 y) =>
match z with
(tpair _ x e) => hfiberpair f1 x (pathscomp0 (pathsinv0 (h x)) e)
end)).
assert (egf: forall z:(hfiber f1 y), paths (g (f z)) z). intros. destruct z as [ x e ]. simpl . apply ( hfibertriangle2 _ (hfiberpair f1 x (pathscomp0 (pathsinv0 (h x)) (pathscomp0 (h x) e))) ( hfiberpair f1 x e ) ( idpath x ) ) . simpl . destruct e . destruct ( h x ) . apply idpath .
apply ( iscontrretract g f egf X0). Defined.
Corollary isweqhomot { X Y : UU } ( f1 f2 : X-> Y ) (h: forall x:X, paths (f1 x) (f2 x)): isweq f1 -> isweq f2.
Proof. intros X Y f1 f2 h X0. unfold isweq. intro y. set (Y0:= X0 y). apply (iscontrhfiberl2 f2 f1 h). assumption. Defined.
Theorem gradth { X Y : UU } (f:X->Y) (g:Y->X) (egf: forall x:X, paths (g (f x)) x) (efg: forall y:Y, paths (f (g y)) y ): isweq f.
Proof. intros. unfold isweq. intro z.
assert (iscontr (hfiber (fun y:Y => (f (g y))) z)).
assert (efg': forall y:Y, paths y (f (g y))). intros. set (e1:= efg y). apply pathsinv0. assumption.
apply (iscontrhfiberl2 (fun y:Y => (f (g y))) (fun y:Y => y) efg' z (idisweq Y z)).
apply (iscontrhfiberl1 g f egf z). assumption.
Defined.
Definition weqgradth { X Y : UU } (f:X->Y) (g:Y->X) (egf: forall x:X, paths (g (f x)) x) (efg: forall y:Y, paths (f (g y)) y ) : weq X Y := weqpair _ ( gradth _ _ egf efg ) .
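(** The following illustrative example is ours ( the names [ exampleboolneg ] and [ isweqexampleboolneg ] are hypothetical ) : a function which is its own inverse , such as boolean negation , is a weak equivalence by [ gradth ] . *)
Definition exampleboolneg : bool -> bool := fun x : bool => match x with true => false | false => true end .
Lemma isweqexampleboolneg : isweq exampleboolneg .
Proof. apply ( gradth exampleboolneg exampleboolneg ) . intro x . destruct x . apply idpath . apply idpath . intro x . destruct x . apply idpath . apply idpath . Defined.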
(** *** Some basic weak equivalences *)
Corollary isweqinvmap { X Y : UU } ( w : weq X Y ) : isweq (invmap w ).
Proof. intros. set (invf:= invmap w ). assert (efinvf: forall y:Y, paths ( w (invf y)) y). apply homotweqinvweq.
assert (einvff: forall x:X, paths (invf ( w x)) x). apply homotinvweqweq. apply ( gradth _ _ efinvf einvff ) . Defined.
Definition invweq { X Y : UU } ( w : weq X Y ) : weq Y X := weqpair (invmap w ) (isweqinvmap w ).
Corollary invinv { X Y :UU } ( w : weq X Y ) ( x : X ) : paths ( invweq ( invweq w ) x) (w x).
Proof. intros. unfold invweq . unfold invmap . simpl . apply idpath . Defined .
Corollary iscontrweqf { X Y : UU } ( w : weq X Y ) : iscontr X -> iscontr Y.
Proof. intros X Y w X0 . apply (iscontrweqb ( invweq w ) ). assumption. Defined.
(** The standard weak equivalence from [ unit ] to a contractible type *)
Definition wequnittocontr { X : UU } ( is : iscontr X ) : weq unit X .
Proof . intros . set ( f := fun t : unit => pr1 is ) . set ( g := fun x : X => tt ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a )) a ) . intro . destruct a . apply idpath .
assert ( efg : forall a : _ , paths ( f ( g a ) ) a ) . intro . simpl . apply ( pathsinv0 ( pr2 is a ) ) .
apply ( gradth _ _ egf efg ) . Defined .
(** A weak equivalence between types defines weak equivalences on the corresponding [ paths ] types. *)
Corollary isweqmaponpaths { X Y : UU } ( w : weq X Y ) ( x x' : X ) : isweq (@maponpaths _ _ w x x').
Proof. intros. apply (gradth (@maponpaths _ _ w x x') (@invmaponpathsweq _ _ w x x') (@pathsweq3 _ _ w x x') (@pathsweq4 _ _ w x x')). Defined.
Definition weqonpaths { X Y : UU } ( w : weq X Y ) ( x x' : X ) := weqpair _ ( isweqmaponpaths w x x' ) .
Corollary isweqpathsinv0 { X : UU } (x x':X): isweq (@pathsinv0 _ x x').
Proof. intros. apply (gradth (@pathsinv0 _ x x') (@pathsinv0 _ x' x) (@pathsinv0inv0 _ _ _ ) (@pathsinv0inv0 _ _ _ )). Defined.
Definition weqpathsinv0 { X : UU } ( x x' : X ) := weqpair _ ( isweqpathsinv0 x x' ) .
Corollary isweqpathscomp0r { X : UU } (x : X ) { x' x'' : X } (e': paths x' x''): isweq (fun e:paths x x' => pathscomp0 e e').
Proof. intros. set (f:= fun e:paths x x' => pathscomp0 e e'). set (g:= fun e'': paths x x'' => pathscomp0 e'' (pathsinv0 e')).
assert (egf: forall e:_ , paths (g (f e)) e). intro. destruct e. simpl. destruct e'. simpl. apply idpath.
assert (efg: forall e'':_, paths (f (g e'')) e''). intro. destruct e''. simpl. destruct e'. simpl. apply idpath.
apply (gradth f g egf efg). Defined.
Corollary isweqtococonusf { X Y : UU } (f:X-> Y): isweq ( tococonusf f) .
Proof . intros. set (ff:= fromcoconusf f). set (gg:= tococonusf f).
assert (egf: forall yxe:_, paths (gg (ff yxe)) yxe). intro. destruct yxe as [t x]. destruct x as [ x e ]. unfold gg. unfold tococonusf. unfold ff. unfold fromcoconusf. simpl. destruct e. apply idpath.
assert (efg: forall x:_, paths (ff (gg x)) x). intro. apply idpath.
apply (gradth _ _ efg egf ). Defined.
Definition weqtococonusf { X Y : UU } ( f : X -> Y ) : weq X ( coconusf f ) := weqpair _ ( isweqtococonusf f ) .
Corollary isweqfromcoconusf { X Y : UU } (f:X-> Y): isweq (fromcoconusf f).
Proof. intros. set (ff:= fromcoconusf f). set (gg:= tococonusf f).
assert (egf: forall yxe:_, paths (gg (ff yxe)) yxe). intro. destruct yxe as [t x]. destruct x as [ x e ]. unfold gg. unfold tococonusf. unfold ff. unfold fromcoconusf. simpl. destruct e. apply idpath.
assert (efg: forall x:_, paths (ff (gg x)) x). intro. apply idpath.
apply (gradth _ _ egf efg). Defined.
Definition weqfromcoconusf { X Y : UU } ( f : X -> Y ) : weq ( coconusf f ) X := weqpair _ ( isweqfromcoconusf f ) .
Corollary isweqdeltap (T:UU) : isweq (deltap T).
Proof. intros. set (ff:=deltap T). set (gg:= fun z:pathsspace T => pr1 z).
assert (egf: forall t:T, paths (gg (ff t)) t). intro. apply idpath.
assert (efg: forall tte: pathsspace T, paths (ff (gg tte)) tte). intro. destruct tte as [ t x ]. destruct x as [ x0 e ]. destruct e. apply idpath.
apply (gradth _ _ egf efg). Defined.
Corollary isweqpr1pr1 (T:UU) : isweq (fun a: pathsspace' T => (pr1 (pr1 a))).
Proof. intros. set (f:= (fun a:_ => (pr1 (pr1 a))): pathsspace' T -> T). set (g:= (fun t:T => tpair _ (dirprodpair t t) (idpath t)): T -> pathsspace' T).
assert (efg: forall t:T, paths (f (g t)) t). intro. apply idpath.
assert (egf: forall a: pathsspace' T, paths (g (f a)) a). intro. destruct a as [ t x ]. destruct t. destruct x. simpl. apply idpath.
apply (gradth _ _ egf efg). Defined.
Lemma hfibershomotftog { X Y : UU } ( f g : X -> Y ) ( h : forall x : X , paths ( f x ) ( g x ) ) ( y : Y ) : hfiber f y -> hfiber g y .
Proof. intros X Y f g h y xe . destruct xe as [ x e ] . split with x . apply ( pathscomp0 ( pathsinv0 ( h x ) ) e ) . Defined .
Lemma hfibershomotgtof { X Y : UU } ( f g : X -> Y ) ( h : forall x : X , paths ( f x ) ( g x ) ) ( y : Y ) : hfiber g y -> hfiber f y .
Proof. intros X Y f g h y xe . destruct xe as [ x e ] . split with x . apply ( pathscomp0 ( h x ) e ) . Defined .
Theorem weqhfibershomot { X Y : UU } ( f g : X -> Y ) ( h : forall x : X , paths ( f x ) ( g x ) ) ( y : Y ) : weq ( hfiber f y ) ( hfiber g y ) .
Proof . intros . set ( ff := hfibershomotftog f g h y ) . set ( gg := hfibershomotgtof f g h y ) . split with ff .
assert ( effgg : forall xe : _ , paths ( ff ( gg xe ) ) xe ) . intro . destruct xe as [ x e ] . simpl .
assert ( eee: paths ( pathscomp0 (pathsinv0 (h x)) (pathscomp0 (h x) e) ) (pathscomp0 (maponpaths g ( idpath x ) ) e ) ) . simpl . destruct e . destruct ( h x ) . simpl . apply idpath .
set ( xe1 := hfiberpair g x ( pathscomp0 (pathsinv0 (h x)) (pathscomp0 (h x) e) ) ) . set ( xe2 := hfiberpair g x e ) . apply ( hfibertriangle2 g xe1 xe2 ( idpath x ) eee ) .
assert ( eggff : forall xe : _ , paths ( gg ( ff xe ) ) xe ) . intro . destruct xe as [ x e ] . simpl .
assert ( eee: paths ( pathscomp0 (h x) (pathscomp0 (pathsinv0 (h x)) e) ) (pathscomp0 (maponpaths f ( idpath x ) ) e ) ) . simpl . destruct e . destruct ( h x ) . simpl . apply idpath .
set ( xe1 := hfiberpair f x ( pathscomp0 (h x) (pathscomp0 (pathsinv0 (h x)) e) ) ) . set ( xe2 := hfiberpair f x e ) . apply ( hfibertriangle2 f xe1 xe2 ( idpath x ) eee ) .
apply ( gradth _ _ eggff effgg ) . Defined .
(** *** The 2-out-of-3 property of weak equivalences.
Theorems showing that if any two of the three functions f, g, gf are weak equivalences then so is the third. *)
Theorem twooutof3a { X Y Z : UU } (f:X->Y) (g:Y->Z) (isgf: isweq (fun x:X => g (f x))) (isg: isweq g) : isweq f.
Proof. intros. set ( gw := weqpair g isg ) . set ( gfw := weqpair _ isgf ) . set (invg:= invmap gw ). set (invgf:= invmap gfw ). set (invf := (fun y:Y => invgf (g y))).
assert (efinvf: forall y:Y, paths (f (invf y)) y). intro. assert (int1: paths (g (f (invf y))) (g y)). unfold invf. apply (homotweqinvweq gfw ( g y ) ). apply (invmaponpathsweq gw _ _ int1).
assert (einvff: forall x: X, paths (invf (f x)) x). intro. unfold invf. apply (homotinvweqweq gfw x).
apply (gradth f invf einvff efinvf). Defined.
Corollary isweqcontrcontr { X Y : UU } (f:X -> Y) (isx: iscontr X) (isy: iscontr Y): isweq f.
Proof. intros. set (py:= (fun y:Y => tt)). apply (twooutof3a f py (isweqcontrtounit isx) (isweqcontrtounit isy)). Defined.
Definition weqcontrcontr { X Y : UU } ( isx : iscontr X) (isy: iscontr Y) := weqpair _ ( isweqcontrcontr ( fun x : X => pr1 isy ) isx isy ) .
Theorem twooutof3b { X Y Z : UU } (f:X->Y) (g:Y->Z) (isf: isweq f) (isgf: isweq (fun x:X => g(f x))) : isweq g.
Proof. intros. set ( wf := weqpair f isf ) . set ( wgf := weqpair _ isgf ) . set (invf:= invmap wf ). set (invgf:= invmap wgf ). set (invg := (fun z:Z => f ( invgf z))). set (gf:= fun x:X => (g (f x))).
assert (eginvg: forall z:Z, paths (g (invg z)) z). intro. apply (homotweqinvweq wgf z).
assert (einvgg: forall y:Y, paths (invg (g y)) y). intro. assert (isinvf: isweq invf). apply isweqinvmap. assert (isinvgf: isweq invgf). apply isweqinvmap. assert (int1: paths (g y) (gf (invf y))). apply (maponpaths g (pathsinv0 (homotweqinvweq wf y))). assert (int2: paths (gf (invgf (g y))) (gf (invf y))). assert (int3: paths (gf (invgf (g y))) (g y)). apply (homotweqinvweq wgf ). destruct int1. assumption. assert (int4: paths (invgf (g y)) (invf y)). apply (invmaponpathsweq wgf ). assumption. assert (int5:paths (invf (f (invgf (g y)))) (invgf (g y))). apply (homotinvweqweq wf ). assert (int6: paths (invf (f (invgf (g (y))))) (invf y)). destruct int4. assumption. apply (invmaponpathsweq ( weqpair invf isinvf ) ). assumption. apply (gradth g invg einvgg eginvg). Defined.
Lemma isweql3 { X Y : UU } (f:X-> Y) (g:Y->X) (egf: forall x:X, paths (g (f x)) x): isweq f -> isweq g.
Proof. intros X Y f g egf X0. set (gf:= fun x:X => g (f x)). assert (int1: isweq gf). apply (isweqhomot (fun x:X => x) gf (fun x:X => (pathsinv0 (egf x)))). apply idisweq. apply (twooutof3b f g X0 int1). Defined.
Theorem twooutof3c { X Y Z : UU } (f:X->Y) (g:Y->Z) (isf: isweq f) (isg: isweq g) : isweq (fun x:X => g(f x)).
Proof. intros. set ( wf := weqpair f isf ) . set ( wg := weqpair _ isg ) . set (gf:= fun x:X => g (f x)). set (invf:= invmap wf ). set (invg:= invmap wg ). set (invgf:= fun z:Z => invf (invg z)). assert (egfinvgf: forall x:X, paths (invgf (gf x)) x). unfold gf. unfold invgf. intro x. assert (int1: paths (invf (invg (g (f x)))) (invf (f x))). apply (maponpaths invf (homotinvweqweq wg (f x))). assert (int2: paths (invf (f x)) x). apply homotinvweqweq. destruct int1. assumption.
assert (einvgfgf: forall z:Z, paths (gf (invgf z)) z). unfold gf. unfold invgf. intro z. assert (int1: paths (g (f (invf (invg z)))) (g (invg z))). apply (maponpaths g (homotweqinvweq wf (invg z))). assert (int2: paths (g (invg z)) z). apply (homotweqinvweq wg z). destruct int1. assumption. apply (gradth gf invgf egfinvgf einvgfgf). Defined.
Definition weqcomp { X Y Z : UU } (w1 : weq X Y) (w2 : weq Y Z) : (weq X Z) := weqpair (fun x:X => (pr1 w2 (pr1 w1 x))) (twooutof3c _ _ (pr2 w1) (pr2 w2)).
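(** A quick sketch of our own ( the name [ exampleweqcompinvweq ] is hypothetical ) : composing a weak equivalence with its inverse yields a weak equivalence of the source with itself . *)
Definition exampleweqcompinvweq { X Y : UU } ( w : weq X Y ) : weq X X := weqcomp w ( invweq w ) .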
(** *** Associativity of [ total2 ] *)
Lemma total2asstor { X : UU } ( P : X -> UU ) ( Q : total2 P -> UU ) : total2 Q -> total2 ( fun x : X => total2 ( fun p : P x => Q ( tpair P x p ) ) ) .
Proof. intros X P Q xpq . destruct xpq as [ xp q ] . destruct xp as [ x p ] . split with x . split with p . assumption . Defined .
Lemma total2asstol { X : UU } ( P : X -> UU ) ( Q : total2 P -> UU ) : total2 ( fun x : X => total2 ( fun p : P x => Q ( tpair P x p ) ) ) -> total2 Q .
Proof. intros X P Q xpq . destruct xpq as [ x pq ] . destruct pq as [ p q ] . split with ( tpair P x p ) . assumption . Defined .
Theorem weqtotal2asstor { X : UU } ( P : X -> UU ) ( Q : total2 P -> UU ) : weq ( total2 Q ) ( total2 ( fun x : X => total2 ( fun p : P x => Q ( tpair P x p ) ) ) ).
Proof. intros . set ( f := total2asstor P Q ) . set ( g:= total2asstol P Q ) . split with f .
assert ( egf : forall xpq : _ , paths ( g ( f xpq ) ) xpq ) . intro . destruct xpq as [ xp q ] . destruct xp as [ x p ] . apply idpath .
assert ( efg : forall xpq : _ , paths ( f ( g xpq ) ) xpq ) . intro . destruct xpq as [ x pq ] . destruct pq as [ p q ] . apply idpath .
apply ( gradth _ _ egf efg ) . Defined.
Definition weqtotal2asstol { X : UU } ( P : X -> UU ) ( Q : total2 P -> UU ) : weq ( total2 ( fun x : X => total2 ( fun p : P x => Q ( tpair P x p ) ) ) ) ( total2 Q ) := invweq ( weqtotal2asstor P Q ) .
(** *** Associativity and commutativity of [ dirprod ] *)
Definition weqdirprodasstor ( X Y Z : UU ) : weq ( dirprod ( dirprod X Y ) Z ) ( dirprod X ( dirprod Y Z ) ) .
Proof . intros . apply weqtotal2asstor . Defined .
Definition weqdirprodasstol ( X Y Z : UU ) : weq ( dirprod X ( dirprod Y Z ) ) ( dirprod ( dirprod X Y ) Z ) := invweq ( weqdirprodasstor X Y Z ) .
Definition weqdirprodcomm ( X Y : UU ) : weq ( dirprod X Y ) ( dirprod Y X ) .
Proof. intros . set ( f := fun xy : dirprod X Y => dirprodpair ( pr2 xy ) ( pr1 xy ) ) . set ( g := fun yx : dirprod Y X => dirprodpair ( pr2 yx ) ( pr1 yx ) ) .
assert ( egf : forall xy : _ , paths ( g ( f xy ) ) xy ) . intro . destruct xy . apply idpath .
assert ( efg : forall yx : _ , paths ( f ( g yx ) ) yx ) . intro . destruct yx . apply idpath .
split with f . apply ( gradth _ _ egf efg ) . Defined .
(** *** Coproducts and direct products *)
Definition rdistrtocoprod ( X Y Z : UU ): dirprod X (coprod Y Z) -> coprod (dirprod X Y) (dirprod X Z).
Proof. intros X Y Z X0. destruct X0 as [ t x ]. destruct x as [ y | z ] . apply (ii1 (dirprodpair t y)). apply (ii2 (dirprodpair t z)). Defined.
Definition rdistrtoprod (X Y Z:UU): coprod (dirprod X Y) (dirprod X Z) -> dirprod X (coprod Y Z).
Proof. intros X Y Z X0. destruct X0 as [ d | d ]. destruct d as [ t x ]. apply (dirprodpair t (ii1 x)). destruct d as [ t x ]. apply (dirprodpair t (ii2 x)). Defined.
Theorem isweqrdistrtoprod (X Y Z:UU): isweq (rdistrtoprod X Y Z).
Proof. intros. set (f:= rdistrtoprod X Y Z). set (g:= rdistrtocoprod X Y Z).
assert (egf: forall a:_, paths (g (f a)) a). intro. destruct a as [ d | d ] . destruct d. apply idpath. destruct d. apply idpath.
assert (efg: forall a:_, paths (f (g a)) a). intro. destruct a as [ t x ]. destruct x. apply idpath. apply idpath.
apply (gradth f g egf efg). Defined.
Definition weqrdistrtoprod (X Y Z: UU):= weqpair _ (isweqrdistrtoprod X Y Z).
Corollary isweqrdistrtocoprod (X Y Z:UU): isweq (rdistrtocoprod X Y Z).
Proof. intros. apply (isweqinvmap ( weqrdistrtoprod X Y Z ) ) . Defined.
Definition weqrdistrtocoprod (X Y Z: UU):= weqpair _ (isweqrdistrtocoprod X Y Z).
(** *** Total space of a family over a coproduct *)
Definition fromtotal2overcoprod { X Y : UU } ( P : coprod X Y -> UU ) ( xyp : total2 P ) : coprod ( total2 ( fun x : X => P ( ii1 x ) ) ) ( total2 ( fun y : Y => P ( ii2 y ) ) ) .
Proof. intros . set ( PX := fun x : X => P ( ii1 x ) ) . set ( PY := fun y : Y => P ( ii2 y ) ) . destruct xyp as [ xy p ] . destruct xy as [ x | y ] . apply ( ii1 ( tpair PX x p ) ) . apply ( ii2 ( tpair PY y p ) ) . Defined .
Definition tototal2overcoprod { X Y : UU } ( P : coprod X Y -> UU ) ( xpyp : coprod ( total2 ( fun x : X => P ( ii1 x ) ) ) ( total2 ( fun y : Y => P ( ii2 y ) ) ) ) : total2 P .
Proof . intros . destruct xpyp as [ xp | yp ] . destruct xp as [ x p ] . apply ( tpair P ( ii1 x ) p ) . destruct yp as [ y p ] . apply ( tpair P ( ii2 y ) p ) . Defined .
Theorem weqtotal2overcoprod { X Y : UU } ( P : coprod X Y -> UU ) : weq ( total2 P ) ( coprod ( total2 ( fun x : X => P ( ii1 x ) ) ) ( total2 ( fun y : Y => P ( ii2 y ) ) ) ) .
Proof. intros . set ( f := fromtotal2overcoprod P ) . set ( g := tototal2overcoprod P ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro a . destruct a as [ xy p ] . destruct xy as [ x | y ] . simpl . apply idpath . simpl . apply idpath .
assert ( efg : forall a : _ , paths ( f ( g a ) ) a ) . intro a . destruct a as [ xp | yp ] . destruct xp as [ x p ] . simpl . apply idpath . destruct yp as [ y p ] . apply idpath .
apply ( gradth _ _ egf efg ) . Defined .
(** *** Weak equivalences and pairwise direct products *)
Theorem isweqdirprodf { X Y X' Y' : UU } ( w : weq X Y )( w' : weq X' Y' ) : isweq (dirprodf w w' ).
Proof. intros. set ( f := dirprodf w w' ) . set ( g := dirprodf ( invweq w ) ( invweq w' ) ) .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro a . destruct a as [ x x' ] . simpl . apply pathsdirprod . apply ( homotinvweqweq w x ) . apply ( homotinvweqweq w' x' ) .
assert ( efg : forall a : _ , paths ( f ( g a ) ) a ) . intro a . destruct a as [ x x' ] . simpl . apply pathsdirprod . apply ( homotweqinvweq w x ) . apply ( homotweqinvweq w' x' ) .
apply ( gradth _ _ egf efg ) . Defined .
Definition weqdirprodf { X Y X' Y' : UU } ( w : weq X Y ) ( w' : weq X' Y' ) := weqpair _ ( isweqdirprodf w w' ) .
Definition weqtodirprodwithunit (X:UU): weq X (dirprod X unit).
Proof. intros. set (f:=fun x:X => dirprodpair x tt). split with f. set (g:= fun xu:dirprod X unit => pr1 xu).
assert (egf: forall x:X, paths (g (f x)) x). intro. apply idpath.
assert (efg: forall xu:_, paths (f (g xu)) xu). intro. destruct xu as [ t x ]. destruct x. apply idpath.
apply (gradth f g egf efg). Defined.
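(** A small sketch of our own ( the name [ exampleweqtodirprodwithunitl ] is hypothetical ) : combining [ weqtodirprodwithunit ] with [ weqdirprodcomm ] gives the analogous weak equivalence with the unit factor on the left . *)
Definition exampleweqtodirprodwithunitl ( X : UU ) : weq X ( dirprod unit X ) := weqcomp ( weqtodirprodwithunit X ) ( weqdirprodcomm X unit ) .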
(** *** Basics on pairwise coproducts (disjoint unions) *)
(** In the current version [ coprod ] is a notation, introduced in uuu.v, for the [ sum ] of types defined in Coq.Init. *)
Definition sumofmaps {X Y Z:UU}(fx: X -> Z)(fy: Y -> Z): (coprod X Y) -> Z := fun xy:_ => match xy with ii1 x => fx x | ii2 y => fy y end.
Definition boolascoprod: weq (coprod unit unit) bool.
Proof. set (f:= fun xx: coprod unit unit => match xx with ii1 t => true | ii2 t => false end). split with f.
set (g:= fun t:bool => match t with true => ii1 tt | false => ii2 tt end).
assert (egf: forall xx:_, paths (g (f xx)) xx). intro xx . destruct xx as [ u | u ] . destruct u. apply idpath. destruct u. apply idpath.
assert (efg: forall t:_, paths (f (g t)) t). destruct t. apply idpath. apply idpath.
apply (gradth f g egf efg). Defined.
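(** A toy computation of our own ( the name [ examplepathsboolascoprod ] is hypothetical ) : the weak equivalence [ boolascoprod ] sends the left copy of [ tt ] to [ true ] . *)
Lemma examplepathsboolascoprod : paths ( boolascoprod ( ii1 tt ) ) true .
Proof. apply idpath. Defined.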
Definition coprodasstor (X Y Z:UU): coprod (coprod X Y) Z -> coprod X (coprod Y Z).
Proof. intros X Y Z X0. destruct X0 as [ c | z ] . destruct c as [ x | y ] . apply (ii1 x). apply (ii2 (ii1 y)). apply (ii2 (ii2 z)). Defined.
Definition coprodasstol (X Y Z: UU): coprod X (coprod Y Z) -> coprod (coprod X Y) Z.
Proof. intros X Y Z X0. destruct X0 as [ x | c ] . apply (ii1 (ii1 x)). destruct c as [ y | z ] . apply (ii1 (ii2 y)). apply (ii2 z). Defined.
Theorem isweqcoprodasstor (X Y Z:UU): isweq (coprodasstor X Y Z).
Proof. intros. set (f:= coprodasstor X Y Z). set (g:= coprodasstol X Y Z).
assert (egf: forall xyz:_, paths (g (f xyz)) xyz). intro xyz. destruct xyz as [ c | z ] . destruct c. apply idpath. apply idpath. apply idpath.
assert (efg: forall xyz:_, paths (f (g xyz)) xyz). intro xyz. destruct xyz as [ x | c ] . apply idpath. destruct c. apply idpath. apply idpath.
apply (gradth f g egf efg). Defined.
Definition weqcoprodasstor ( X Y Z : UU ) := weqpair _ ( isweqcoprodasstor X Y Z ) .
Corollary isweqcoprodasstol (X Y Z:UU): isweq (coprodasstol X Y Z).
Proof. intros. apply (isweqinvmap ( weqcoprodasstor X Y Z) ). Defined.
Definition weqcoprodasstol (X Y Z:UU):= weqpair _ (isweqcoprodasstol X Y Z).
Definition coprodcomm (X Y:UU): coprod X Y -> coprod Y X := fun xy:_ => match xy with ii1 x => ii2 x | ii2 y => ii1 y end.
Theorem isweqcoprodcomm (X Y:UU): isweq (coprodcomm X Y).
Proof. intros. set (f:= coprodcomm X Y). set (g:= coprodcomm Y X).
assert (egf: forall xy:_, paths (g (f xy)) xy). intro. destruct xy. apply idpath. apply idpath.
assert (efg: forall yx:_, paths (f (g yx)) yx). intro. destruct yx. apply idpath. apply idpath.
apply (gradth f g egf efg). Defined.
Definition weqcoprodcomm (X Y:UU):= weqpair _ (isweqcoprodcomm X Y).
Theorem isweqii1withneg (X : UU) { Y : UU } (nf:Y -> empty): isweq (@ii1 X Y).
Proof. intros. set (f:= @ii1 X Y). set (g:= fun xy:coprod X Y => match xy with ii1 x => x | ii2 y => fromempty (nf y) end).
assert (egf: forall x:X, paths (g (f x)) x). intro. apply idpath.
assert (efg: forall xy: coprod X Y, paths (f (g xy)) xy). intro. destruct xy as [ x | y ] . apply idpath. apply (fromempty (nf y)).
apply (gradth f g egf efg). Defined.
Definition weqii1withneg ( X : UU ) { Y : UU } ( nf : neg Y ) := weqpair _ ( isweqii1withneg X nf ) .
Theorem isweqii2withneg { X : UU } ( Y : UU ) (nf : X -> empty): isweq (@ii2 X Y).
Proof. intros. set (f:= @ii2 X Y). set (g:= fun xy:coprod X Y => match xy with ii1 x => fromempty (nf x) | ii2 y => y end).
assert (egf: forall y : Y, paths (g (f y)) y). intro. apply idpath.
assert (efg: forall xy: coprod X Y, paths (f (g xy)) xy). intro. destruct xy as [ x | y ] . apply (fromempty (nf x)). apply idpath.
apply (gradth f g egf efg). Defined.
Definition weqii2withneg { X : UU } ( Y : UU ) ( nf : neg X ) := weqpair _ ( isweqii2withneg Y nf ) .
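(** A small sketch of our own ( the name [ exampleweqcoprodwithempty ] is hypothetical ) : since [ empty ] has no terms , [ weqii1withneg ] specializes to a weak equivalence between [ X ] and [ coprod X empty ] . *)
Definition exampleweqcoprodwithempty ( X : UU ) : weq X ( coprod X empty ) := weqii1withneg X ( fun e : empty => e ) .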
Definition coprodf { X Y X' Y' : UU } (f: X -> X')(g: Y-> Y'): coprod X Y -> coprod X' Y' := fun xy: coprod X Y =>
match xy with
ii1 x => ii1 (f x)|
ii2 y => ii2 (g y)
end.
Definition homotcoprodfcomp { X X' Y Y' Z Z' : UU } ( f : X -> Y ) ( f' : X' -> Y' ) ( g : Y -> Z ) ( g' : Y' -> Z' ) : homot ( funcomp ( coprodf f f' ) ( coprodf g g' ) ) ( coprodf ( funcomp f g ) ( funcomp f' g' ) ) .
Proof. intros . intro xx' . destruct xx' as [ x | x' ] . apply idpath . apply idpath . Defined .
Definition homotcoprodfhomot { X X' Y Y' } ( f g : X -> Y ) ( f' g' : X' -> Y' ) ( h : homot f g ) ( h' : homot f' g' ) : homot ( coprodf f f') ( coprodf g g') := fun xx' : _ => match xx' with ( ii1 x ) => maponpaths ( @ii1 _ _ ) ( h x ) | ( ii2 x' ) => maponpaths ( @ii2 _ _ ) ( h' x' ) end .
Theorem isweqcoprodf { X Y X' Y' : UU } ( w : weq X X' )( w' : weq Y Y' ) : isweq (coprodf w w' ).
Proof. intros. set (finv:= invmap w ). set (ginv:= invmap w' ). set (ff:=coprodf w w' ). set (gg:=coprodf finv ginv).
assert (egf: forall xy: coprod X Y, paths (gg (ff xy)) xy). intro. destruct xy as [ x | y ] . simpl. apply (maponpaths (@ii1 X Y) (homotinvweqweq w x)). apply (maponpaths (@ii2 X Y) (homotinvweqweq w' y)).
assert (efg: forall xy': coprod X' Y', paths (ff (gg xy')) xy'). intro. destruct xy' as [ x | y ] . simpl. apply (maponpaths (@ii1 X' Y') (homotweqinvweq w x)). apply (maponpaths (@ii2 X' Y') (homotweqinvweq w' y)).
apply (gradth ff gg egf efg). Defined.
Definition weqcoprodf { X Y X' Y' : UU } (w1: weq X Y)(w2: weq X' Y') : weq (coprod X X') (coprod Y Y') := weqpair _ ( isweqcoprodf w1 w2 ) .
Lemma negpathsii1ii2 { X Y : UU } (x:X)(y:Y): neg (paths (ii1 x) (ii2 y)).
Proof. intros. unfold neg. intro X0. set (dist:= fun xy: coprod X Y => match xy with ii1 x => unit | ii2 y => empty end). apply (transportf dist X0 tt). Defined.
Lemma negpathsii2ii1 { X Y : UU } (x:X)(y:Y): neg (paths (ii2 y) (ii1 x)).
Proof. intros. unfold neg. intro X0. set (dist:= fun xy: coprod X Y => match xy with ii1 x => empty | ii2 y => unit end). apply (transportf dist X0 tt). Defined.
(** *** Fibrations with only one non-empty fiber.
Theorem saying that if a fibration has only one non-empty fiber then the total space is weakly equivalent to this fiber. *)
Theorem onefiber { X : UU } (P:X -> UU)(x:X)(c: forall x':X, coprod (paths x x') (P x' -> empty)) : isweq (fun p: P x => tpair P x p).
Proof. intros.
set (f:= fun p: P x => tpair _ x p).
set (cx := c x).
set (cnew:= fun x':X =>
match cx with
ii1 x0 =>
match c x' with
ii1 ee => ii1 (pathscomp0 (pathsinv0 x0) ee)|
ii2 phi => ii2 phi
end |
ii2 phi => c x'
end).
set (g:= fun pp: total2 P =>
match (cnew (pr1 pp)) with
ii1 e => transportb P e (pr2 pp) |
ii2 phi => fromempty (phi (pr2 pp))
end).
assert (efg: forall pp: total2 P, paths (f (g pp)) pp). intro. destruct pp as [ t x0 ]. set (cnewt:= cnew t). unfold g. unfold f. simpl. change (cnew t) with cnewt. destruct cnewt as [ x1 | y ]. apply (pathsinv0 (pr1 (pr2 (constr1 P (pathsinv0 x1))) x0)). destruct (y x0).
set (cnewx:= cnew x).
assert (e1: paths (cnew x) cnewx). apply idpath.
unfold cnew in cnewx. change (c x) with cx in cnewx.
destruct cx as [ x0 | e0 ].
assert (e: paths (cnewx) (ii1 (idpath x))). apply (maponpaths (@ii1 (paths x x) (P x -> empty)) (pathsinv0l x0)).
assert (egf: forall p: P x, paths (g (f p)) p). intro. simpl in g. unfold g. unfold f. simpl.
set (ff:= fun cc:coprod (paths x x) (P x -> empty) =>
match cc with
| ii1 e0 => transportb P e0 p
| ii2 phi => fromempty (phi p)
end).
assert (ee: paths (ff (cnewx)) (ff (@ii1 (paths x x) (P x -> empty) (idpath x)))). apply (maponpaths ff e).
assert (eee: paths (ff (@ii1 (paths x x) (P x -> empty) (idpath x))) p). apply idpath. fold (ff (cnew x)).
assert (e2: paths (ff (cnew x)) (ff cnewx)). apply (maponpaths ff e1).
apply (pathscomp0 (pathscomp0 e2 ee) eee).
apply (gradth f g egf efg).
unfold isweq. intro y0. destruct (e0 (g y0)). Defined.
(** *** Pairwise coproducts as dependent sums of families over [ bool ] *)
Fixpoint coprodtobool { X Y : UU } ( xy : coprod X Y ) : bool :=
match xy with
ii1 x => true|
ii2 y => false
end.
Definition boolsumfun (X Y:UU) : bool -> UU := fun t:_ =>
match t with
true => X|
false => Y
end.
Definition coprodtoboolsum ( X Y : UU ) : coprod X Y -> total2 (boolsumfun X Y) := fun xy : _ =>
match xy with
ii1 x => tpair (boolsumfun X Y) true x|
ii2 y => tpair (boolsumfun X Y) false y
end .
Definition boolsumtocoprod (X Y:UU): (total2 (boolsumfun X Y)) -> coprod X Y := (fun xy:_ =>
match xy with
tpair _ true x => ii1 x|
tpair _ false y => ii2 y
end).
Theorem isweqcoprodtoboolsum (X Y:UU): isweq (coprodtoboolsum X Y).
Proof. intros. set (f:= coprodtoboolsum X Y). set (g:= boolsumtocoprod X Y).
assert (egf: forall xy: coprod X Y , paths (g (f xy)) xy). destruct xy. apply idpath. apply idpath.
assert (efg: forall xy: total2 (boolsumfun X Y), paths (f (g xy)) xy). intro. destruct xy as [ t x ]. destruct t. apply idpath. apply idpath. apply (gradth f g egf efg). Defined.
Definition weqcoprodtoboolsum ( X Y : UU ) := weqpair _ ( isweqcoprodtoboolsum X Y ) .
Corollary isweqboolsumtocoprod (X Y:UU): isweq (boolsumtocoprod X Y ).
Proof. intros. apply (isweqinvmap ( weqcoprodtoboolsum X Y ) ) . Defined.
Definition weqboolsumtocoprod ( X Y : UU ) := weqpair _ ( isweqboolsumtocoprod X Y ) .
(** *** Splitting of [ X ] into a coproduct defined by a function [ X -> coprod Y Z ] *)
Definition weqcoprodsplit { X Y Z : UU } ( f : X -> coprod Y Z ) : weq X ( coprod ( total2 ( fun y : Y => hfiber f ( ii1 y ) ) ) ( total2 ( fun z : Z => hfiber f ( ii2 z ) ) ) ) .
Proof . intros . set ( w1 := weqtococonusf f ) . set ( w2 := weqtotal2overcoprod ( fun yz : coprod Y Z => hfiber f yz ) ) . apply ( weqcomp w1 w2 ) . Defined .
(** *** Some properties of [ bool ] *)
Definition boolchoice ( x : bool ) : coprod ( paths x true ) ( paths x false ) .
Proof. intro . destruct x . apply ( ii1 ( idpath _ ) ) . apply ( ii2 ( idpath _ ) ) . Defined .
Definition curry : bool -> UU := fun x : bool =>
match x with
false => empty|
true => unit
end.
Theorem nopathstruetofalse: paths true false -> empty.
Proof. intro X. apply (transportf curry X tt). Defined.
Corollary nopathsfalsetotrue: paths false true -> empty.
Proof. intro X. apply (transportb curry X tt). Defined.
Definition truetonegfalse ( x : bool ) : paths x true -> neg ( paths x false ) .
Proof . intros x e . rewrite e . unfold neg . apply nopathstruetofalse . Defined .
Definition falsetonegtrue ( x : bool ) : paths x false -> neg ( paths x true ) .
Proof . intros x e . rewrite e . unfold neg . apply nopathsfalsetotrue . Defined .
Definition negtruetofalse (x : bool ) : neg ( paths x true ) -> paths x false .
Proof. intros x ne. destruct (boolchoice x) as [t | f]. destruct (ne t). apply f. Defined.
Definition negfalsetotrue ( x : bool ) : neg ( paths x false ) -> paths x true .
Proof. intros x ne . destruct (boolchoice x) as [t | f]. apply t . destruct (ne f) . Defined.
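(** An illustrative sketch of our own ( the name [ exampledecidablepathstrue ] is hypothetical ) : [ boolchoice ] together with [ falsetonegtrue ] decides any equation [ paths x true ] . *)
Definition exampledecidablepathstrue ( x : bool ) : coprod ( paths x true ) ( neg ( paths x true ) ) .
Proof. intro x . destruct ( boolchoice x ) as [ t | f ] . apply ( ii1 t ) . apply ( ii2 ( falsetonegtrue x f ) ) . Defined.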
(** ** Basics about fibration sequences. *)
(** *** Fibration sequences and their first "left shifts".
The group of constructions related to fibration sequences forms one of the most important computational toolboxes of homotopy theory.
Given a pair of functions [ ( f : X -> Y ) ( g : Y -> Z ) ] and a point [ z : Z ] , the structure of a complex on such a triple is a homotopy from the composition [ funcomp f g ] to the constant function [ X -> Z ] corresponding to [ z ] , i.e. a term [ ez : forall x:X, paths ( g ( f x ) ) z ]. Specifying such a structure is essentially equivalent to specifying a structure of the form [ ezmap : X -> hfiber g z ]. The mapping in one direction is given in the definition of [ ezmap ] below. The mapping in the other direction is given by [ f := fun x : X => pr1 ( ezmap x ) ] and [ ez := fun x : X => pr2 ( ezmap x ) ].
A complex is called a fibration sequence if [ ezmap ] is a weak equivalence. Correspondingly, the structure of a fibration sequence on [ f g z ] is a pair [ ( ez , is ) ] where [ is : isweq ( ezmap f g z ez ) ]. For a fibration sequence [ f g z fs ] where [ fs : fibseqstr f g z ] and any [ y : Y ] there is defined a function [ d1 : paths ( g y ) z -> X ] and a structure of a fibration sequence [ fibseq1 ] on the triple [ d1 f y ]. This new fibration sequence is called the derived fibration sequence of the original one.
The first function of the second derived of [ f g z fs ] corresponding to [ ( y : Y ) ( x : X ) ] is of the form [ paths ( f x ) y -> paths ( g y ) z ] and it is homotopic to the function defined by [ e => pathscomp0 ( maponpaths g ( pathsinv0 e) ) ( ez x ) ]. The first function of the third derived of [ f g z fs ] corresponding to [ ( y : Y ) ( x : X ) ( e : paths ( g y ) z ) ] is of the form [ paths ( d1 f g z fs y e ) x -> paths ( f x ) y ]. Therefore, the third derived of a sequence based on [ X Y Z ] is based entirely on the path types of [ X ], [ Y ] and [ Z ]. When this construction is applied to types of finite h-level (see below) and combined with the fact that the h-level of a path type is strictly lower than the h-level of the ambient type, it leads to the possibility of building proofs about types by induction on h-level.
There are three important special cases in which fibration sequences arise:
( pr1 - case ) The fibration sequence [ fibseqpr1 P z ] defined by a family [ P : Z -> UU ] and a term [ z : Z ]. It is based on the sequence of functions [ ( tpair P z : P z -> total2 P ) ( pr1 : total2 P -> Z ) ]. The corresponding [ ezmap ] is defined by an obvious rule and the fact that it is a weak equivalence is proved in [ isweqezmappr1 ].
( g - case ) The fibration sequence [ fibseqg g z ] defined by a function [ g : Y -> Z ] and a term [ z : Z ]. It is based on the sequence of functions [ ( hfiberpr1 : hfiber g z -> Y ) ( g : Y -> Z ) ] and the corresponding [ ezmap ] is the function which takes a term [ ye : hfiber g z ] to [ hfiberpair g ( pr1 ye ) ( pr2 ye ) ]. If we had eta-conversion for dependent sums it would be the identity function. Since we do not have this conversion in Coq, this function is only homotopic to the identity function ( by [ tppr ] ), which is sufficient to ensure that it is a weak equivalence. The first derived of [ fibseqg g z ] corresponding to [ y : Y ] coincides with [ fibseqpr1 ( fun y' : Y => paths ( g y' ) z ) y ].
( hf -case ) The fibration sequence of homotopy fibers defined for any pair of functions [ ( f : X -> Y ) ( g : Y -> Z ) ] and any terms [ ( z : Z ) ( ye : hfiber g z ) ]. It is based on functions [ hfiberftogf : hfiber f ( pr1 ye ) -> hfiber ( funcomp f g ) z ] and [ hfibergftog : hfiber ( funcomp f g ) z -> hfiber g z ] which are defined below.
*)
(** The structure of a complex on a composable pair of functions [ ( f : X -> Y ) ( g : Y -> Z ) ] relative to a term [ z : Z ]. *)
Definition complxstr { X Y Z : UU } (f:X -> Y) (g:Y->Z) ( z : Z ) := forall x:X, paths (g (f x)) z .
(** The structure of a fibration sequence on a complex. *)
Definition ezmap { X Y Z : UU } (f:X -> Y) (g:Y->Z) ( z : Z ) (ez : complxstr f g z ) : X -> hfiber g z := fun x:X => hfiberpair g (f x) (ez x).
Definition isfibseq { X Y Z : UU } (f:X -> Y) (g:Y->Z) ( z : Z ) (ez : complxstr f g z ) := isweq (ezmap f g z ez).
Definition fibseqstr { X Y Z : UU } (f:X -> Y) (g:Y->Z) ( z : Z ) := total2 ( fun ez : complxstr f g z => isfibseq f g z ez ) .
Definition fibseqstrpair { X Y Z : UU } (f:X -> Y) (g:Y->Z) ( z : Z ) := tpair ( fun ez : complxstr f g z => isfibseq f g z ez ) .
Definition fibseqstrtocomplxstr { X Y Z : UU } (f:X -> Y) (g:Y->Z) ( z : Z ) : fibseqstr f g z -> complxstr f g z := @pr1 _ ( fun ez : complxstr f g z => isfibseq f g z ez ) .
Coercion fibseqstrtocomplxstr : fibseqstr >-> complxstr .
Definition ezweq { X Y Z : UU } (f:X -> Y) (g:Y->Z) ( z : Z ) ( fs : fibseqstr f g z ) : weq X ( hfiber g z ) := weqpair _ ( pr2 fs ) .
(** Construction of the derived fibration sequence. *)
Definition d1 { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( fs : fibseqstr f g z ) ( y : Y ) : paths ( g y ) z -> X := fun e : _ => invmap ( ezweq f g z fs ) ( hfiberpair g y e ) .
Definition ezmap1 { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( fs : fibseqstr f g z ) ( y : Y ) ( e : paths ( g y ) z ) : hfiber f y .
Proof . intros . split with ( d1 f g z fs y e ) . unfold d1 . change ( f ( invmap (ezweq f g z fs) (hfiberpair g y e) ) ) with ( hfiberpr1 _ _ ( ezweq f g z fs ( invmap (ezweq f g z fs) (hfiberpair g y e) ) ) ) . apply ( maponpaths ( hfiberpr1 g z ) ( homotweqinvweq ( ezweq f g z fs ) (hfiberpair g y e) ) ) . Defined .
Definition invezmap1 { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( ez : complxstr f g z ) ( y : Y ) : hfiber f y -> paths (g y) z :=
fun xe: hfiber f y =>
match xe with
tpair _ x e => pathscomp0 (maponpaths g ( pathsinv0 e ) ) ( ez x )
end.
Theorem isweqezmap1 { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( fs : fibseqstr f g z ) ( y : Y ) : isweq ( ezmap1 f g z fs y ) .
Proof . intros . set ( ff := ezmap1 f g z fs y ) . set ( gg := invezmap1 f g z ( pr1 fs ) y ) .
assert ( egf : forall e : _ , paths ( gg ( ff e ) ) e ) . intro . simpl . apply ( hfibertriangle1inv0 g (homotweqinvweq (ezweq f g z fs) (hfiberpair g y e)) ) .
assert ( efg : forall xe : _ , paths ( ff ( gg xe ) ) xe ) . intro . destruct xe as [ x e ] . destruct e . simpl . unfold ff . unfold ezmap1 . unfold d1 . change (hfiberpair g (f x) ( pr1 fs x) ) with ( ezmap f g z fs x ) . apply ( hfibertriangle2 f ( hfiberpair f ( invmap (ezweq f g z fs) (ezmap f g z fs x) ) _ ) ( hfiberpair f x ( idpath _ ) ) ( homotinvweqweq ( ezweq f g z fs ) x ) ) . simpl . set ( e1 := pathsinv0 ( pathscomp0rid (maponpaths f (homotinvweqweq (ezweq f g z fs) x) ) ) ) . assert ( e2 : paths (maponpaths (hfiberpr1 g z) (homotweqinvweq (ezweq f g z fs) ( ( ezmap f g z fs ) x))) (maponpaths f (homotinvweqweq (ezweq f g z fs) x)) ) . set ( e3 := maponpaths ( fun e : _ => maponpaths ( hfiberpr1 g z ) e ) ( pathsinv0 ( homotweqinvweqweq ( ezweq f g z fs ) x ) ) ) . simpl in e3 . set ( e4 := maponpathscomp (ezmap f g z (pr1 fs)) (hfiberpr1 g z) (homotinvweqweq (ezweq f g z fs) x) ) . simpl in e4 . apply ( pathscomp0 e3 e4 ) . apply ( pathscomp0 e2 e1 ) .
apply ( gradth _ _ egf efg ) . Defined .
Definition ezweq1 { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( fs : fibseqstr f g z ) ( y : Y ) := weqpair _ ( isweqezmap1 f g z fs y ) .
Definition fibseq1 { X Y Z : UU } (f:X -> Y) (g:Y->Z) (z:Z) ( fs : fibseqstr f g z )(y:Y) : fibseqstr ( d1 f g z fs y) f y := fibseqstrpair _ _ _ _ ( isweqezmap1 f g z fs y ) .
(** Explicit description of the first map in the second derived sequence. *)
Definition d2 { X Y Z : UU } (f:X -> Y) (g:Y->Z) (z:Z) ( fs : fibseqstr f g z ) (y:Y) (x:X) ( e : paths (f x) y ) : paths (g y) z := pathscomp0 ( maponpaths g ( pathsinv0 e ) ) ( ( pr1 fs ) x ) .
Definition ezweq2 { X Y Z : UU } (f:X -> Y) (g:Y->Z) (z:Z) ( fs : fibseqstr f g z ) (y:Y) (x:X) : weq ( paths (f x) y ) ( hfiber (d1 f g z fs y) x ) := ezweq1 (d1 f g z fs y) f y ( fibseq1 f g z fs y ) x.
Definition fibseq2 { X Y Z : UU } (f:X -> Y) (g:Y->Z) (z:Z) ( fs : fibseqstr f g z ) (y:Y) (x:X) : fibseqstr ( d2 f g z fs y x ) ( d1 f g z fs y ) x := fibseqstrpair _ _ _ _ ( isweqezmap1 (d1 f g z fs y) f y ( fibseq1 f g z fs y ) x ) .
(** *** Fibration sequences based on [ ( tpair P z : P z -> total2 P ) ( pr1 : total2 P -> Z ) ] ( the "pr1-case" ) *)
(** Construction of the fibration sequence. *)
Definition ezmappr1 { Z : UU } ( P : Z -> UU ) ( z : Z ) : P z -> hfiber ( @pr1 Z P ) z := fun p : P z => tpair _ ( tpair _ z p ) ( idpath z ).
Definition invezmappr1 { Z : UU } ( P : Z -> UU) ( z : Z ) : hfiber ( @pr1 Z P ) z -> P z := fun te : hfiber ( @pr1 Z P ) z =>
match te with
tpair _ t e => transportf P e ( pr2 t )
end.
Definition isweqezmappr1 { Z : UU } ( P : Z -> UU ) ( z : Z ) : isweq ( ezmappr1 P z ).
Proof. intros.
assert ( egf : forall x: P z , paths (invezmappr1 _ z ((ezmappr1 P z ) x)) x). intro. unfold ezmappr1. unfold invezmappr1. simpl. apply idpath.
assert ( efg : forall x: hfiber (@pr1 Z P) z , paths (ezmappr1 _ z (invezmappr1 P z x)) x). intros. destruct x as [ x t0 ]. destruct t0. simpl in x. simpl. destruct x. simpl. unfold transportf. unfold ezmappr1. apply idpath.
apply (gradth _ _ egf efg ). Defined.
Definition ezweqpr1 { Z : UU } ( P : Z -> UU ) ( z : Z ) := weqpair _ ( isweqezmappr1 P z ) .
Lemma isfibseqpr1 { Z : UU } ( P : Z -> UU ) ( z : Z ) : isfibseq (fun p : P z => tpair _ z p) ( @pr1 Z P ) z (fun p: P z => idpath z ).
Proof. intros. unfold isfibseq. unfold ezmap. apply isweqezmappr1. Defined.
Definition fibseqpr1 { Z : UU } ( P : Z -> UU ) ( z : Z ) : fibseqstr (fun p : P z => tpair _ z p) ( @pr1 Z P ) z := fibseqstrpair _ _ _ _ ( isfibseqpr1 P z ) .
(** The main weak equivalence defined by the first derived of [ fibseqpr1 ]. *)
Definition ezweq1pr1 { Z : UU } ( P : Z -> UU ) ( z : Z ) ( zp : total2 P ) : weq ( paths ( pr1 zp) z ) ( hfiber ( tpair P z ) zp ) := ezweq1 _ _ z ( fibseqpr1 P z ) zp .
(** *** Fibration sequences based on [ ( hfiberpr1 : hfiber g z -> Y ) ( g : Y -> Z ) ] (the "g-case") *)
Theorem isfibseqg { Y Z : UU } (g:Y -> Z) (z:Z) : isfibseq (hfiberpr1 g z) g z (fun ye: _ => pr2 ye).
Proof. intros. assert (Y0:forall ye': hfiber g z, paths ye' (ezmap (hfiberpr1 g z) g z (fun ye: _ => pr2 ye) ye')). intro. apply tppr. apply (isweqhomot _ _ Y0 (idisweq _ )). Defined.
Definition ezweqg { Y Z : UU } (g:Y -> Z) (z:Z) := weqpair _ ( isfibseqg g z ) .
Definition fibseqg { Y Z : UU } (g:Y -> Z) (z:Z) : fibseqstr (hfiberpr1 g z) g z := fibseqstrpair _ _ _ _ ( isfibseqg g z ) .
(** The first derived of [ fibseqg ]. *)
Definition d1g { Y Z : UU} ( g : Y -> Z ) ( z : Z ) ( y : Y ) : paths ( g y ) z -> hfiber g z := hfiberpair g y .
(** note that [ d1g ] coincides with [ d1 _ _ _ ( fibseqg g z ) ] which makes the following two definitions possible. *)
Definition ezweq1g { Y Z : UU } (g:Y -> Z) (z:Z) (y:Y) : weq (paths (g y) z) (hfiber (hfiberpr1 g z) y) := weqpair _ (isweqezmap1 (hfiberpr1 g z) g z ( fibseqg g z ) y) .
Definition fibseq1g { Y Z : UU } (g:Y -> Z) (z:Z) ( y : Y) : fibseqstr (d1g g z y ) ( hfiberpr1 g z ) y := fibseqstrpair _ _ _ _ (isweqezmap1 (hfiberpr1 g z) g z ( fibseqg g z ) y) .
(** The second derived of [ fibseqg ]. *)
Definition d2g { Y Z : UU } (g:Y -> Z) { z : Z } ( y : Y ) ( ye' : hfiber g z ) ( e: paths (pr1 ye') y ) : paths (g y) z := pathscomp0 ( maponpaths g ( pathsinv0 e ) ) ( pr2 ye' ) .
(** note that [ d2g ] coincides with [ d2 _ _ _ ( fibseqg g z ) ] which makes the following two definitions possible. *)
Definition ezweq2g { Y Z : UU } (g:Y -> Z) { z : Z } ( y : Y ) ( ye' : hfiber g z ) : weq (paths (pr1 ye') y) (hfiber ( hfiberpair g y ) ye') := ezweq2 _ _ _ ( fibseqg g z ) _ _ .
Definition fibseq2g { Y Z : UU } (g:Y -> Z) { z : Z } ( y : Y ) ( ye' : hfiber g z ) : fibseqstr ( d2g g y ye' ) ( hfiberpair g y ) ye' := fibseq2 _ _ _ ( fibseqg g z ) _ _ .
(** The third derived of [ fibseqg ] and an explicit description of the corresponding first map. *)
Definition d3g { Y Z : UU } (g:Y -> Z) { z : Z } ( y : Y ) ( ye' : hfiber g z ) ( e : paths ( g y ) z ) : paths ( hfiberpair g y e ) ye' -> paths ( pr1 ye' ) y := d2 (d1g g z y) (hfiberpr1 g z) y ( fibseq1g g z y ) ye' e .
Lemma homotd3g { Y Z : UU } ( g : Y -> Z ) { z : Z } ( y : Y ) ( ye' : hfiber g z ) ( e : paths ( g y ) z ) ( ee : paths ( hfiberpair g y e) ye' ) : paths (d3g g y ye' e ee) ( maponpaths ( @pr1 _ _ ) ( pathsinv0 ee ) ) .
Proof. intros. unfold d3g . unfold d2 . simpl . apply pathscomp0rid. Defined .
Definition ezweq3g { Y Z : UU } (g:Y -> Z) { z : Z } ( y : Y ) ( ye' : hfiber g z ) ( e : paths ( g y ) z ) := ezweq2 (d1g g z y) (hfiberpr1 g z) y ( fibseq1g g z y ) ye' e .
Definition fibseq3g { Y Z : UU } (g:Y -> Z) { z : Z } ( y : Y ) ( ye' : hfiber g z ) ( e : paths ( g y ) z ) := fibseq2 (d1g g z y) (hfiberpr1 g z) y ( fibseq1g g z y ) ye' e .
(** *** Fibration sequence of h-fibers defined by a composable pair of functions (the "hf-case")
We construct a fibration sequence based on [ ( hfibersftogf f g z ye : hfiber f ( pr1 ye ) -> hfiber gf z ) ( hfibersgftog f g z : hfiber gf z -> hfiber g z ) ]. *)
Definition hfibersftogf { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( ye : hfiber g z ) ( xe : hfiber f ( pr1 ye ) ) : hfiber ( funcomp f g ) z .
Proof . intros . split with ( pr1 xe ) . apply ( pathscomp0 ( maponpaths g ( pr2 xe ) ) ( pr2 ye ) ) . Defined .
Definition ezmaphf { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( ye : hfiber g z ) ( xe : hfiber f ( pr1 ye ) ) : hfiber ( hfibersgftog f g z ) ye .
Proof . intros . split with ( hfibersftogf f g z ye xe ) . simpl . apply ( hfibertriangle2 g (hfiberpair g (f (pr1 xe)) (pathscomp0 (maponpaths g (pr2 xe)) ( pr2 ye ) )) ye ( pr2 xe ) ) . simpl . apply idpath . Defined .
Definition invezmaphf { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( ye : hfiber g z ) ( xee' : hfiber ( hfibersgftog f g z ) ye ) : hfiber f ( pr1 ye ) .
Proof . intros . split with ( pr1 ( pr1 xee' ) ) . apply ( maponpaths ( hfiberpr1 _ _ ) ( pr2 xee' ) ) . Defined .
Definition ffgg { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( ye : hfiber g z ) ( xee' : hfiber ( hfibersgftog f g z ) ye ) : hfiber ( hfibersgftog f g z ) ye .
Proof . intros . destruct ye as [ y e ] . destruct e . unfold hfibersgftog . unfold hfibersgftog in xee' . destruct xee' as [ xe e' ] . destruct xe as [ x e ] . simpl in e' . split with ( hfiberpair ( funcomp f g ) x ( pathscomp0 ( maponpaths g (maponpaths (hfiberpr1 g (g y)) e') ) ( idpath (g y ))) ) . simpl . apply ( hfibertriangle2 _ (hfiberpair g (f x) (( pathscomp0 ( maponpaths g (maponpaths (hfiberpr1 g (g y)) e') ) ( idpath (g y ))))) ( hfiberpair g y ( idpath _ ) ) ( maponpaths ( hfiberpr1 _ _ ) e' ) ( idpath _ ) ) . Defined .
Definition homotffggid { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( ye : hfiber g z ) ( xee' : hfiber ( hfibersgftog f g z ) ye ) : paths ( ffgg f g z ye xee' ) xee' .
Proof . intros . destruct ye as [ y e ] . destruct e . destruct xee' as [ xe e' ] . destruct e' . destruct xe as [ x e ] . destruct e . simpl . apply idpath . Defined .
Theorem isweqezmaphf { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( ye : hfiber g z ) : isweq ( ezmaphf f g z ye ) .
Proof . intros . set ( ff := ezmaphf f g z ye ) . set ( gg := invezmaphf f g z ye ) .
assert ( egf : forall xe : _ , paths ( gg ( ff xe ) ) xe ) . destruct ye as [ y e ] . destruct e . intro xe . apply ( hfibertriangle2 f ( gg ( ff xe ) ) xe ( idpath ( pr1 xe ) ) ) . destruct xe as [ x ex ] . simpl in ex . destruct ( ex ) . simpl . apply idpath .
assert ( efg : forall xee' : _ , paths ( ff ( gg xee' ) ) xee' ) . destruct ye as [ y e ] . destruct e . intro xee' .
assert ( hint : paths ( ff ( gg xee' ) ) ( ffgg f g ( g y ) ( hfiberpair g y ( idpath _ ) ) xee' ) ) . destruct xee' as [ xe e' ] . destruct xe as [ x e ] . apply idpath .
apply ( pathscomp0 hint ( homotffggid _ _ _ _ xee' ) ) .
apply ( gradth _ _ egf efg ) . Defined .
Definition ezweqhf { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( ye : hfiber g z ) : weq ( hfiber f ( pr1 ye ) ) ( hfiber ( hfibersgftog f g z ) ye ) := weqpair _ ( isweqezmaphf f g z ye ) .
Definition fibseqhf { X Y Z : UU } (f:X -> Y)(g: Y -> Z)(z:Z)(ye: hfiber g z) : fibseqstr (hfibersftogf f g z ye) (hfibersgftog f g z) ye := fibseqstrpair _ _ _ _ ( isweqezmaphf f g z ye ) .
Definition isweqinvezmaphf { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( ye : hfiber g z ) : isweq ( invezmaphf f g z ye ) := pr2 ( invweq ( ezweqhf f g z ye ) ) .
Corollary weqhfibersgwtog { X Y Z : UU } ( w : weq X Y ) ( g : Y -> Z ) ( z : Z ) : weq ( hfiber ( funcomp w g ) z ) ( hfiber g z ) .
Proof. intros . split with ( hfibersgftog w g z ) . intro ye . apply ( iscontrweqf ( ezweqhf w g z ye ) ( ( pr2 w ) ( pr1 ye ) ) ) . Defined .
(** ** Fiber-wise weak equivalences.
Theorems saying that a fiber-wise morphism between total spaces is a weak equivalence if and only if all the morphisms between the fibers are weak equivalences. *)
Definition totalfun { X : UU } ( P Q : X -> UU ) (f: forall x:X, P x -> Q x) := (fun z: total2 P => tpair Q (pr1 z) (f (pr1 z) (pr2 z))).
Theorem isweqtotaltofib { X : UU } ( P Q : X -> UU) (f: forall x:X, P x -> Q x):
isweq (totalfun _ _ f) -> forall x:X, isweq (f x).
Proof. intros X P Q f X0 x. set (totp:= total2 P). set (totq := total2 Q). set (totf:= (totalfun _ _ f)). set (pip:= fun z: totp => pr1 z). set (piq:= fun z: totq => pr1 z).
set (hfx:= hfibersgftog totf piq x). simpl in hfx.
assert (H: isweq hfx). unfold isweq. intro y.
set (int:= invezmaphf totf piq x y).
assert (X1:isweq int). apply (isweqinvezmaphf totf piq x y). destruct y as [ t e ].
assert (is1: iscontr (hfiber totf t)). apply (X0 t). apply (iscontrweqb ( weqpair int X1 ) is1).
set (ip:= ezmappr1 P x). set (iq:= ezmappr1 Q x). set (h:= fun p: P x => hfx (ip p)).
assert (is2: isweq h). apply (twooutof3c ip hfx (isweqezmappr1 P x) H). set (h':= fun p: P x => iq ((f x) p)).
assert (ee: forall p:P x, paths (h p) (h' p)). intro. apply idpath.
assert (X2:isweq h'). apply (isweqhomot h h' ee is2).
apply (twooutof3a (f x) iq X2).
apply (isweqezmappr1 Q x). Defined.
Definition weqtotaltofib { X : UU } ( P Q : X -> UU ) ( f : forall x : X , P x -> Q x ) ( is : isweq ( totalfun _ _ f ) ) ( x : X ) : weq ( P x ) ( Q x ) := weqpair _ ( isweqtotaltofib P Q f is x ) .
Theorem isweqfibtototal { X : UU } ( P Q : X -> UU) (f: forall x:X, weq ( P x ) ( Q x ) ) : isweq (totalfun _ _ f).
Proof. intros X P Q f . set (fpq:= totalfun P Q f). set (pr1p:= fun z: total2 P => pr1 z). set (pr1q:= fun z: total2 Q => pr1 z). unfold isweq. intro xq. set (x:= pr1q xq). set (xqe:= hfiberpair pr1q xq (idpath _)). set (hfpqx:= hfibersgftog fpq pr1q x).
assert (isint: iscontr (hfiber hfpqx xqe)).
assert (isint1: isweq hfpqx). set (ipx:= ezmappr1 P x). set (iqx:= ezmappr1 Q x). set (diag:= fun p:P x => (iqx ((f x) p))).
assert (is2: isweq diag). apply (twooutof3c (f x) iqx (pr2 ( f x) ) (isweqezmappr1 Q x)). apply (twooutof3b ipx hfpqx (isweqezmappr1 P x) is2). unfold isweq in isint1. apply (isint1 xqe).
set (intmap:= invezmaphf fpq pr1q x xqe). apply (iscontrweqf ( weqpair intmap (isweqinvezmaphf fpq pr1q x xqe) ) isint).
Defined.
Definition weqfibtototal { X : UU } ( P Q : X -> UU) (f: forall x:X, weq ( P x ) ( Q x ) ) := weqpair _ ( isweqfibtototal P Q f ) .
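(** A degenerate sketch of our own ( the name [ exampleweqfibtototalid ] is hypothetical ) : applying [ weqfibtototal ] to the family of identity weak equivalences gives a weak equivalence of a total space with itself . *)
Definition exampleweqfibtototalid { X : UU } ( P : X -> UU ) : weq ( total2 P ) ( total2 P ) := weqfibtototal P P ( fun x : X => idweq ( P x ) ) .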
(** ** Homotopy fibers of the function [ fpmap : total2 ( fun x => P ( f x ) ) -> total2 P ].
Given [ X Y ] in [ UU ], [ P : Y -> UU ] and [ f : X -> Y ] we get a function [ fpmap : total2 ( fun x => P ( f x ) ) -> total2 P ]. The main theorem of this section asserts that the homotopy fiber of [ fpmap ] over [ yp : total2 P ] is naturally weakly equivalent to the homotopy fiber of [ f ] over [ pr1 yp ]. In particular, if [ f ] is a weak equivalence then so is [ fpmap ]. *)
Definition fpmap { X Y : UU } (f: X -> Y) ( P:Y-> UU) : total2 ( fun x => P ( f x ) ) -> total2 P :=
(fun z:total2 (fun x:X => P (f x)) => tpair P (f (pr1 z)) (pr2 z)).
Definition hffpmap2 { X Y : UU } (f: X -> Y) (P:Y-> UU): total2 ( fun x => P ( f x ) ) -> total2 (fun u:total2 P => hfiber f (pr1 u)).
Proof. intros X Y f P X0. set (u:= fpmap f P X0). split with u. set (x:= pr1 X0). split with x. simpl. apply idpath. Defined.
Definition hfiberfpmap { X Y : UU } (f:X -> Y)(P:Y-> UU)(yp: total2 P): hfiber (fpmap f P) yp -> hfiber f (pr1 yp).
Proof. intros X Y f P yp X0. set (int1:= hfibersgftog (hffpmap2 f P) (fun u: (total2 (fun u:total2 P => hfiber f (pr1 u))) => (pr1 u)) yp). set (phi:= invezmappr1 (fun u:total2 P => hfiber f (pr1 u)) yp). apply (phi (int1 X0)). Defined.
Lemma centralfiber { X : UU } (P:X -> UU)(x:X): isweq (fun p: P x => tpair (fun u: coconusfromt X x => P ( pr1 u)) (coconusfromtpair X (idpath x)) p).
Proof. intros. set (f:= fun p: P x => tpair (fun u: coconusfromt X x => P(pr1 u)) (coconusfromtpair X (idpath x)) p). set (g:= fun z: total2 (fun u: coconusfromt X x => P ( pr1 u)) => transportf P (pathsinv0 (pr2 (pr1 z))) (pr2 z)).
assert (efg: forall z: total2 (fun u: coconusfromt X x => P ( pr1 u)), paths (f (g z)) z). intro. destruct z as [ t x0 ]. destruct t as [t x1 ]. simpl. destruct x1. simpl. apply idpath.
assert (egf: forall p: P x , paths (g (f p)) p). intro. apply idpath.
apply (gradth f g egf efg). Defined.
Lemma isweqhff { X Y : UU } (f: X -> Y)(P:Y-> UU): isweq (hffpmap2 f P).
Proof. intros. set (int:= total2 (fun x:X => total2 (fun u: coconusfromt Y (f x) => P (pr1 u)))). set (intpair:= tpair (fun x:X => total2 (fun u: coconusfromt Y (f x) => P (pr1 u)))). set (toint:= fun z: (total2 (fun u : total2 P => hfiber f (pr1 u))) => intpair (pr1 (pr2 z)) (tpair (fun u: coconusfromt Y (f (pr1 (pr2 z))) => P (pr1 u)) (coconusfromtpair _ (pr2 (pr2 z))) (pr2 (pr1 z)))). set (fromint:= fun z: int => tpair (fun u:total2 P => hfiber f (pr1 u)) (tpair P (pr1 (pr1 (pr2 z))) (pr2 (pr2 z))) (hfiberpair f (pr1 z) (pr2 (pr1 (pr2 z))))). assert (fromto: forall u:(total2 (fun u : total2 P => hfiber f (pr1 u))), paths (fromint (toint u)) u). simpl in toint. simpl in fromint. simpl. intro u. destruct u as [ t x ]. destruct x. destruct t as [ p0 p1 ] . simpl. unfold toint. unfold fromint. simpl. apply idpath. assert (tofrom: forall u:int, paths (toint (fromint u)) u). intro. destruct u as [ t x ]. destruct x as [ t0 x ]. destruct t0. simpl in x. simpl. unfold fromint. unfold toint. simpl. apply idpath. assert (is: isweq toint). apply (gradth toint fromint fromto tofrom). clear tofrom. clear fromto. clear fromint.
set (h:= fun u: total2 (fun x:X => P (f x)) => toint ((hffpmap2 f P) u)). simpl in h.
assert (l1: forall x:X, isweq (fun p: P (f x) => tpair (fun u: coconusfromt _ (f x) => P (pr1 u)) (coconusfromtpair _ (idpath (f x))) p)). intro. apply (centralfiber P (f x)).
assert (X0:isweq h). apply (isweqfibtototal (fun x:X => P (f x)) (fun x:X => total2 (fun u: coconusfromt _ (f x) => P (pr1 u))) (fun x:X => weqpair _ ( l1 x ) ) ).
apply (twooutof3a (hffpmap2 f P) toint X0 is). Defined.
Theorem isweqhfiberfp { X Y : UU } (f:X -> Y)(P:Y-> UU)(yp: total2 P): isweq (hfiberfpmap f P yp).
Proof. intros. set (int1:= hfibersgftog (hffpmap2 f P) (fun u: (total2 (fun u:total2 P => hfiber f (pr1 u))) => (pr1 u)) yp). assert (is1: isweq int1). simpl in int1 . apply ( pr2 ( weqhfibersgwtog ( weqpair _ ( isweqhff f P ) ) (fun u : total2 (fun u : total2 P => hfiber f (pr1 u)) => pr1 u) yp ) ) . set (phi:= invezmappr1 (fun u:total2 P => hfiber f (pr1 u)) yp). assert (is2: isweq phi). apply ( pr2 ( invweq ( ezweqpr1 (fun u:total2 P => hfiber f (pr1 u)) yp ) ) ) . apply (twooutof3c int1 phi is1 is2). Defined.
Corollary isweqfpmap { X Y : UU } ( w : weq X Y )(P:Y-> UU) : isweq (fpmap w P).
Proof. intros. unfold isweq. intro y. set (h:=hfiberfpmap w P y).
assert (X1:isweq h). apply isweqhfiberfp.
assert (is: iscontr (hfiber w (pr1 y))). apply ( pr2 w ). apply (iscontrweqb ( weqpair h X1 ) is). Defined.
Definition weqfp { X Y : UU } ( w : weq X Y )(P:Y-> UU) := weqpair _ ( isweqfpmap w P ) .
(** *** Total spaces of families over a contractible base *)
Definition fromtotal2overunit ( P : unit -> UU ) ( tp : total2 P ) : P tt .
Proof . intros . destruct tp as [ t p ] . destruct t . apply p . Defined .
Definition tototal2overunit ( P : unit -> UU ) ( p : P tt ) : total2 P := tpair P tt p .
Theorem weqtotal2overunit ( P : unit -> UU ) : weq ( total2 P ) ( P tt ) .
Proof. intro . set ( f := fromtotal2overunit P ) . set ( g := tototal2overunit P ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro a . destruct a as [ t p ] . destruct t . apply idpath .
assert ( efg : forall a : _ , paths ( f ( g a ) ) a ) . intro a . apply idpath .
apply ( gradth _ _ egf efg ) . Defined .
(** ** The maps between total spaces of families given by a map between the bases of the families and maps between the corresponding members of the families *)
Definition bandfmap { X Y : UU }(f: X -> Y) ( P : X -> UU)(Q: Y -> UU)(fm: forall x:X, P x -> (Q (f x))): total2 P -> total2 Q:= fun xp:_ =>
match xp with
tpair _ x p => tpair Q (f x) (fm x p)
end.
Theorem isweqbandfmap { X Y : UU } (w : weq X Y ) (P:X -> UU)(Q: Y -> UU)( fw : forall x:X, weq ( P x) (Q (w x))) : isweq (bandfmap _ P Q fw).
Proof. intros. set (f1:= totalfun P _ fw). set (is1:= isweqfibtototal P (fun x:X => Q (w x)) fw ). set (f2:= fpmap w Q). set (is2:= isweqfpmap w Q ).
assert (h: forall xp: total2 P, paths (f2 (f1 xp)) (bandfmap w P Q fw xp)). intro. destruct xp. apply idpath. apply (isweqhomot _ _ h (twooutof3c f1 f2 is1 is2)). Defined.
Definition weqbandf { X Y : UU } (w : weq X Y ) (P:X -> UU)(Q: Y -> UU)( fw : forall x:X, weq ( P x) (Q (w x))) := weqpair _ ( isweqbandfmap w P Q fw ) .
(** ** Homotopy fiber squares *)
(** *** Homotopy commutative squares *)
Definition commsqstr { X X' Y Z : UU } ( g' : Z -> X' ) ( f' : X' -> Y ) ( g : Z -> X ) ( f : X -> Y ) := forall ( z : Z ) , paths ( f' ( g' z ) ) ( f ( g z ) ) .
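(** A degenerate added example, included only as an illustration of the definition: any map [ g ] fits into a homotopy commutative square with two copies of the identity function, the required structure being the constant family of identity paths. *)

Lemma commsqstridfunexample { Z Y : UU } ( g : Z -> Y ) : commsqstr g ( idfun Y ) g ( idfun Y ) .
Proof . intros . intro z . apply idpath . Defined .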
Definition hfibersgtof' { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( h : commsqstr g' f' g f ) ( x : X ) ( ze : hfiber g x ) : hfiber f' ( f x ) .
Proof. intros . destruct ze as [ z e ] . split with ( g' z ) . apply ( pathscomp0 ( h z ) ( maponpaths f e ) ) . Defined .
Definition hfibersg'tof { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( h : commsqstr g' f' g f ) ( x' : X' ) ( ze : hfiber g' x' ) : hfiber f ( f' x' ) .
Proof. intros . destruct ze as [ z e ] . split with ( g z ) . apply ( pathscomp0 ( pathsinv0 ( h z ) ) ( maponpaths f' e ) ) . Defined .
Definition transposcommsqstr { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) : commsqstr g' f' g f -> commsqstr g f g' f' := fun h : _ => fun z : Z => ( pathsinv0 ( h z ) ) .
(** *** Short complexes and homotopy commutative squares *)
Lemma complxstrtocommsqstr { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( h : complxstr f g z ) : commsqstr f g ( fun x : X => tt ) ( fun t : unit => z ) .
Proof. intros . assumption . Defined .
Lemma commsqstrtocomplxstr { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( h : commsqstr f g ( fun x : X => tt ) ( fun t : unit => z ) ) : complxstr f g z .
Proof. intros . assumption . Defined .
(** *** Homotopy fiber products *)
Definition hfp {X X' Y:UU} (f:X -> Y) (f':X' -> Y):= total2 (fun xx' : dirprod X X' => paths ( f' ( pr2 xx' ) ) ( f ( pr1 xx' ) ) ) .
Definition hfpg {X X' Y:UU} (f:X -> Y) (f':X' -> Y) : hfp f f' -> X := fun xx'e => ( pr1 ( pr1 xx'e ) ) .
Definition hfpg' {X X' Y:UU} (f:X -> Y) (f':X' -> Y) : hfp f f' -> X' := fun xx'e => ( pr2 ( pr1 xx'e ) ) .
Definition commsqZtohfp { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( h : commsqstr g' f' g f ) : Z -> hfp f f' := fun z : _ => tpair _ ( dirprodpair ( g z ) ( g' z ) ) ( h z ) .
Definition commsqZtohfphomot { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( h : commsqstr g' f' g f ) : forall z : Z , paths ( hfpg _ _ ( commsqZtohfp _ _ _ _ h z ) ) ( g z ) := fun z : _ => idpath _ .
Definition commsqZtohfphomot' { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( h : commsqstr g' f' g f ) : forall z : Z , paths ( hfpg' _ _ ( commsqZtohfp _ _ _ _ h z ) ) ( g' z ) := fun z : _ => idpath _ .
Definition hfpoverX {X X' Y:UU} (f:X -> Y) (f':X' -> Y) := total2 (fun x : X => hfiber f' ( f x ) ) .
Definition hfpoverX' {X X' Y:UU} (f:X -> Y) (f':X' -> Y) := total2 (fun x' : X' => hfiber f (f' x' ) ) .
Definition weqhfptohfpoverX {X X' Y:UU} (f:X -> Y) (f':X' -> Y) : weq ( hfp f f' ) ( hfpoverX f f' ) .
Proof. intros . apply ( weqtotal2asstor ( fun x : X => X' ) ( fun xx' : dirprod X X' => paths ( f' ( pr2 xx' ) ) ( f ( pr1 xx' ) ) ) ) . Defined .
Definition weqhfptohfpoverX' {X X' Y:UU} (f:X -> Y) (f':X' -> Y) : weq ( hfp f f' ) ( hfpoverX' f f' ) .
Proof. intros . set ( w1 := weqfp ( weqdirprodcomm X X' ) ( fun xx' : dirprod X' X => paths ( f' ( pr1 xx' ) ) ( f ( pr2 xx' ) ) ) ) . simpl in w1 .
set ( w2 := weqfibtototal ( fun x'x : dirprod X' X => paths ( f' ( pr1 x'x ) ) ( f ( pr2 x'x ) ) ) ( fun x'x : dirprod X' X => paths ( f ( pr2 x'x ) ) ( f' ( pr1 x'x ) ) ) ( fun x'x : _ => weqpathsinv0 ( f' ( pr1 x'x ) ) ( f ( pr2 x'x ) ) ) ) . set ( w3 := weqtotal2asstor ( fun x' : X' => X ) ( fun x'x : dirprod X' X => paths ( f ( pr2 x'x ) ) ( f' ( pr1 x'x ) ) ) ) . simpl in w3 . apply ( weqcomp ( weqcomp w1 w2 ) w3 ) . Defined .
Lemma weqhfpcomm { X X' Y : UU } ( f : X -> Y ) ( f' : X' -> Y ) : weq ( hfp f f' ) ( hfp f' f ) .
Proof . intros . set ( w1 := weqfp ( weqdirprodcomm X X' ) ( fun xx' : dirprod X' X => paths ( f' ( pr1 xx' ) ) ( f ( pr2 xx' ) ) ) ) . simpl in w1 . set ( w2 := weqfibtototal ( fun x'x : dirprod X' X => paths ( f' ( pr1 x'x ) ) ( f ( pr2 x'x ) ) ) ( fun x'x : dirprod X' X => paths ( f ( pr2 x'x ) ) ( f' ( pr1 x'x ) ) ) ( fun x'x : _ => weqpathsinv0 ( f' ( pr1 x'x ) ) ( f ( pr2 x'x ) ) ) ) . apply ( weqcomp w1 w2 ) . Defined .
Definition commhfp {X X' Y:UU} (f:X -> Y) (f':X' -> Y) : commsqstr ( hfpg' f f' ) f' ( hfpg f f' ) f := fun xx'e : hfp f f' => pr2 xx'e .
(** *** Homotopy fiber products and homotopy fibers *)
Definition hfibertohfp { X Y : UU } ( f : X -> Y ) ( y : Y ) ( xe : hfiber f y ) : hfp ( fun t : unit => y ) f := tpair ( fun tx : dirprod unit X => paths ( f ( pr2 tx ) ) y ) ( dirprodpair tt ( pr1 xe ) ) ( pr2 xe ) .
Definition hfptohfiber { X Y : UU } ( f : X -> Y ) ( y : Y ) ( hf : hfp ( fun t : unit => y ) f ) : hfiber f y := hfiberpair f ( pr2 ( pr1 hf ) ) ( pr2 hf ) .
Lemma weqhfibertohfp { X Y : UU } ( f : X -> Y ) ( y : Y ) : weq ( hfiber f y ) ( hfp ( fun t : unit => y ) f ) .
Proof . intros . set ( ff := hfibertohfp f y ) . set ( gg := hfptohfiber f y ) . split with ff .
assert ( egf : forall xe : _ , paths ( gg ( ff xe ) ) xe ) . intro . destruct xe . apply idpath .
assert ( efg : forall hf : _ , paths ( ff ( gg hf ) ) hf ) . intro . destruct hf as [ tx e ] . destruct tx as [ t x ] . destruct t . apply idpath .
apply ( gradth _ _ egf efg ) . Defined .
(** *** Homotopy fiber squares *)
Definition ishfsq { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( h : commsqstr g' f' g f ) := isweq ( commsqZtohfp f f' g g' h ) .
Definition hfsqstr { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) := total2 ( fun h : commsqstr g' f' g f => isweq ( commsqZtohfp f f' g g' h ) ) .
Definition hfsqstrpair { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) := tpair ( fun h : commsqstr g' f' g f => isweq ( commsqZtohfp f f' g g' h ) ) .
Definition hfsqstrtocommsqstr { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) : hfsqstr f f' g g' -> commsqstr g' f' g f := @pr1 _ ( fun h : commsqstr g' f' g f => isweq ( commsqZtohfp f f' g g' h ) ) .
Coercion hfsqstrtocommsqstr : hfsqstr >-> commsqstr .
Definition weqZtohfp { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( hf : hfsqstr f f' g g' ) : weq Z ( hfp f f' ) := weqpair _ ( pr2 hf ) .
Lemma isweqhfibersgtof' { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( hf : hfsqstr f f' g g' ) ( x : X ) : isweq ( hfibersgtof' f f' g g' hf x ) .
Proof. intros . set ( is := pr2 hf ) . set ( h := pr1 hf ) .
set ( a := weqtococonusf g ) . set ( c := weqpair _ is ) . set ( d := weqhfptohfpoverX f f' ) . set ( b0 := totalfun _ _ ( hfibersgtof' f f' g g' h ) ) .
assert ( h1 : forall z : Z , paths ( d ( c z ) ) ( b0 ( a z ) ) ) . intro . simpl . unfold b0 . unfold a . unfold weqtococonusf . unfold tococonusf . simpl . unfold totalfun . simpl . assert ( e : paths ( h z ) ( pathscomp0 (h z) (idpath (f (g z))) ) ) . apply ( pathsinv0 ( pathscomp0rid _ ) ) . destruct e . apply idpath .
assert ( is1 : isweq ( fun z : _ => b0 ( a z ) ) ) . apply ( isweqhomot _ _ h1 ) . apply ( twooutof3c _ _ ( pr2 c ) ( pr2 d ) ) .
assert ( is2 : isweq b0 ) . apply ( twooutof3b _ _ ( pr2 a ) is1 ) . apply ( isweqtotaltofib _ _ _ is2 x ) . Defined .
Definition weqhfibersgtof' { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( hf : hfsqstr f f' g g' ) ( x : X ) := weqpair _ ( isweqhfibersgtof' _ _ _ _ hf x ) .
Lemma ishfsqweqhfibersgtof' { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( h : commsqstr g' f' g f ) ( is : forall x : X , isweq ( hfibersgtof' f f' g g' h x ) ) : hfsqstr f f' g g' .
Proof . intros . split with h .
set ( a := weqtococonusf g ) . set ( c0 := commsqZtohfp f f' g g' h ) . set ( d := weqhfptohfpoverX f f' ) . set ( b := weqfibtototal _ _ ( fun x : X => weqpair _ ( is x ) ) ) .
assert ( h1 : forall z : Z , paths ( d ( c0 z ) ) ( b ( a z ) ) ) . intro . simpl . unfold b . unfold a . unfold weqtococonusf . unfold tococonusf . simpl . unfold totalfun . simpl . assert ( e : paths ( h z ) ( pathscomp0 (h z) (idpath (f (g z))) ) ) . apply ( pathsinv0 ( pathscomp0rid _ ) ) . destruct e . apply idpath .
assert ( is1 : isweq ( fun z : _ => d ( c0 z ) ) ) . apply ( isweqhomot _ _ ( fun z : Z => ( pathsinv0 ( h1 z ) ) ) ) . apply ( twooutof3c _ _ ( pr2 a ) ( pr2 b ) ) .
apply ( twooutof3a _ _ is1 ( pr2 d ) ) . Defined .
Lemma isweqhfibersg'tof { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( hf : hfsqstr f f' g g' ) ( x' : X' ) : isweq ( hfibersg'tof f f' g g' hf x' ) .
Proof. intros . set ( is := pr2 hf ) . set ( h := pr1 hf ) .
set ( a' := weqtococonusf g' ) . set ( c' := weqpair _ is ) . set ( d' := weqhfptohfpoverX' f f' ) . set ( b0' := totalfun _ _ ( hfibersg'tof f f' g g' h ) ) .
assert ( h1 : forall z : Z , paths ( d' ( c' z ) ) ( b0' ( a' z ) ) ) . intro . unfold b0' . unfold a' . unfold weqtococonusf . unfold tococonusf . unfold totalfun . simpl . assert ( e : paths ( pathsinv0 ( h z ) ) ( pathscomp0 ( pathsinv0 (h z) ) (idpath (f' (g' z))) ) ) . apply ( pathsinv0 ( pathscomp0rid _ ) ) . destruct e . apply idpath .
assert ( is1 : isweq ( fun z : _ => b0' ( a' z ) ) ) . apply ( isweqhomot _ _ h1 ) . apply ( twooutof3c _ _ ( pr2 c' ) ( pr2 d' ) ) .
assert ( is2 : isweq b0' ) . apply ( twooutof3b _ _ ( pr2 a' ) is1 ) . apply ( isweqtotaltofib _ _ _ is2 x' ) . Defined .
Definition weqhfibersg'tof { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( hf : hfsqstr f f' g g' ) ( x' : X' ) := weqpair _ ( isweqhfibersg'tof _ _ _ _ hf x' ) .
Lemma ishfsqweqhfibersg'tof { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( h : commsqstr g' f' g f ) ( is : forall x' : X' , isweq ( hfibersg'tof f f' g g' h x' ) ) : hfsqstr f f' g g' .
Proof . intros . split with h .
set ( a' := weqtococonusf g' ) . set ( c0' := commsqZtohfp f f' g g' h ) . set ( d' := weqhfptohfpoverX' f f' ) . set ( b' := weqfibtototal _ _ ( fun x' : X' => weqpair _ ( is x' ) ) ) .
assert ( h1 : forall z : Z , paths ( d' ( c0' z ) ) ( b' ( a' z ) ) ) . intro . simpl . unfold b' . unfold a' . unfold weqtococonusf . unfold tococonusf . unfold totalfun . simpl . assert ( e : paths ( pathsinv0 ( h z ) ) ( pathscomp0 ( pathsinv0 (h z) ) (idpath (f' (g' z))) ) ) . apply ( pathsinv0 ( pathscomp0rid _ ) ) . destruct e . apply idpath .
assert ( is1 : isweq ( fun z : _ => d' ( c0' z ) ) ) . apply ( isweqhomot _ _ ( fun z : Z => ( pathsinv0 ( h1 z ) ) ) ) . apply ( twooutof3c _ _ ( pr2 a' ) ( pr2 b' ) ) .
apply ( twooutof3a _ _ is1 ( pr2 d' ) ) . Defined .
Theorem transposhfpsqstr { X X' Y Z : UU } ( f : X -> Y ) ( f' : X' -> Y ) ( g : Z -> X ) ( g' : Z -> X' ) ( hf : hfsqstr f f' g g' ) : hfsqstr f' f g' g .
Proof . intros . set ( is := pr2 hf ) . set ( h := pr1 hf ) . set ( th := transposcommsqstr f f' g g' h ) . split with th .
set ( w1 := weqhfpcomm f f' ) . assert ( h1 : forall z : Z , paths ( w1 ( commsqZtohfp f f' g g' h z ) ) ( commsqZtohfp f' f g' g th z ) ) . intro . unfold commsqZtohfp . simpl . unfold fpmap . unfold totalfun . simpl . apply idpath . apply ( isweqhomot _ _ h1 ) . apply ( twooutof3c _ _ is ( pr2 w1 ) ) . Defined .
(** *** Fiber sequences and homotopy fiber squares *)
Theorem fibseqstrtohfsqstr { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( hf : fibseqstr f g z ) : hfsqstr ( fun t : unit => z ) g ( fun x : X => tt ) f .
Proof . intros . split with ( pr1 hf ) . set ( ff := ezweq f g z hf ) . set ( ggff := commsqZtohfp ( fun t : unit => z ) g ( fun x : X => tt ) f ( pr1 hf ) ) . set ( gg := weqhfibertohfp g z ) .
apply ( pr2 ( weqcomp ff gg ) ) . Defined .
Theorem hfsqstrtofibseqstr { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( hf : hfsqstr ( fun t : unit => z ) g ( fun x : X => tt ) f ) : fibseqstr f g z .
Proof . intros . split with ( pr1 hf ) . set ( ff := ezmap f g z ( pr1 hf ) ) . set ( ggff := weqZtohfp ( fun t : unit => z ) g ( fun x : X => tt ) f hf ) . set ( gg := weqhfibertohfp g z ) .
apply ( twooutof3a ff gg ( pr2 ggff ) ( pr2 gg ) ) . Defined .
(** ** Basics about h-levels *)
(** *** h-levels of types *)
Fixpoint isofhlevel (n:nat) (X:UU): UU:=
match n with
O => iscontr X |
S m => forall x:X, forall x':X, (isofhlevel m (paths x x'))
end.
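(** An added sanity check, not needed below: at level [ O ] the fixpoint above literally computes to [ iscontr ] , so the two notions agree definitionally. *)

Lemma isofhlevelOtoiscontrexample ( X : UU ) ( is : isofhlevel O X ) : iscontr X .
Proof . intros . assumption . Defined .

Lemma iscontrtoisofhlevelOexample ( X : UU ) ( is : iscontr X ) : isofhlevel O X .
Proof . intros . assumption . Defined .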
Theorem hlevelretract (n:nat) { X Y : UU } ( p : X -> Y ) ( s : Y -> X ) ( eps : forall y : Y , paths ( p ( s y ) ) y ) : isofhlevel n X -> isofhlevel n Y .
Proof. intro. induction n as [ | n IHn ]. intros X Y p s eps X0. unfold isofhlevel. apply ( iscontrretract p s eps X0).
unfold isofhlevel. intros X Y p s eps X0 x x'. unfold isofhlevel in X0. assert (is: isofhlevel n (paths (s x) (s x'))). apply X0. set (s':= @maponpaths _ _ s x x'). set (p':= pathssec2 s p eps x x'). set (eps':= @pathssec3 _ _ s p eps x x' ). simpl. apply (IHn _ _ p' s' eps' is). Defined.
Corollary isofhlevelweqf (n:nat) { X Y : UU } ( f : weq X Y ) : isofhlevel n X -> isofhlevel n Y .
Proof. intros n X Y f X0. apply (hlevelretract n f (invmap f ) (homotweqinvweq f )). assumption. Defined.
Corollary isofhlevelweqb (n:nat) { X Y : UU } ( f : weq X Y ) : isofhlevel n Y -> isofhlevel n X .
Proof. intros n X Y f X0 . apply (hlevelretract n (invmap f ) f (homotinvweqweq f )). assumption. Defined.
Lemma isofhlevelsn ( n : nat ) { X : UU } ( f : X -> isofhlevel ( S n ) X ) : isofhlevel ( S n ) X.
Proof. intros . simpl . intros x x' . apply ( f x x x'). Defined.
Lemma isofhlevelssn (n:nat) { X : UU } ( is : forall x:X, isofhlevel (S n) (paths x x)) : isofhlevel (S (S n)) X.
Proof. intros . simpl. intros x x'. change ( forall ( x0 x'0 : paths x x' ), isofhlevel n ( paths x0 x'0 ) ) with ( isofhlevel (S n) (paths x x') ).
assert ( X1 : paths x x' -> isofhlevel (S n) (paths x x') ) . intro X2. destruct X2. apply ( is x ). apply ( isofhlevelsn n X1 ). Defined.
(** *** h-levels of functions *)
Definition isofhlevelf ( n : nat ) { X Y : UU } ( f : X -> Y ) : UU := forall y:Y, isofhlevel n (hfiber f y).
Theorem isofhlevelfhomot ( n : nat ) { X Y : UU }(f f':X -> Y)(h: forall x:X, paths (f x) (f' x)): isofhlevelf n f -> isofhlevelf n f'.
Proof. intros n X Y f f' h X0. unfold isofhlevelf. intro y . apply ( isofhlevelweqf n ( weqhfibershomot f f' h y ) ( X0 y )) . Defined .
Theorem isofhlevelfpmap ( n : nat ) { X Y : UU } ( f : X -> Y ) ( Q : Y -> UU ) : isofhlevelf n f -> isofhlevelf n ( fpmap f Q ) .
Proof. intros n X Y f Q X0. unfold isofhlevelf. unfold isofhlevelf in X0. intro y . set (yy:= pr1 y). set ( g := hfiberfpmap f Q y). set (is:= isweqhfiberfp f Q y). set (isy:= X0 yy). apply (isofhlevelweqb n ( weqpair g is ) isy). Defined.
Theorem isofhlevelfffromZ ( n : nat ) { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( fs : fibseqstr f g z ) ( isz : isofhlevel ( S n ) Z ) : isofhlevelf n f .
Proof. intros . intro y . assert ( w : weq ( hfiber f y ) ( paths ( g y ) z ) ) . apply ( invweq ( ezweq1 f g z fs y ) ) . apply ( isofhlevelweqb n w ( isz (g y ) z ) ) . Defined.
Theorem isofhlevelXfromg ( n : nat ) { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( fs : fibseqstr f g z ) : isofhlevelf n g -> isofhlevel n X .
Proof. intros n X Y Z f g z fs isf . assert ( w : weq X ( hfiber g z ) ) . apply ( weqpair _ ( pr2 fs ) ) . apply ( isofhlevelweqb n w ( isf z ) ) . Defined .
Theorem isofhlevelffromXY ( n : nat ) { X Y : UU } ( f : X -> Y ) : isofhlevel n X -> isofhlevel (S n) Y -> isofhlevelf n f.
Proof. intro. induction n as [ | n IHn ] . intros X Y f X0 X1.
assert (is1: isofhlevel O Y). split with ( f ( pr1 X0 ) ) . intro t . unfold isofhlevel in X1 . set ( is := X1 t ( f ( pr1 X0 ) ) ) . apply ( pr1 is ).
apply (isweqcontrcontr f X0 is1).
intros X Y f X0 X1. unfold isofhlevelf. simpl.
assert (is1: forall x' x:X, isofhlevel n (paths x' x)). simpl in X0. assumption.
assert (is2: forall y' y:Y, isofhlevel (S n) (paths y' y)). simpl in X1. simpl. assumption.
assert (is3: forall (y:Y)(x:X)(xe': hfiber f y), isofhlevelf n (d2g f x xe')). intros. apply (IHn _ _ (d2g f x xe') (is1 (pr1 xe') x) (is2 (f x) y)).
assert (is4: forall (y:Y)(x:X)(xe': hfiber f y)(e: paths (f x) y), isofhlevel n (paths (hfiberpair f x e) xe')). intros.
apply (isofhlevelweqb n ( ezweq3g f x xe' e) (is3 y x xe' e)).
intros y xe xe' . destruct xe as [ t x ]. apply (is4 y t xe' x). Defined.
Theorem isofhlevelXfromfY ( n : nat ) { X Y : UU } ( f : X -> Y ) : isofhlevelf n f -> isofhlevel n Y -> isofhlevel n X.
Proof. intro. induction n as [ | n IHn ] . intros X Y f X0 X1. apply (iscontrweqb ( weqpair f X0 ) X1). intros X Y f X0 X1. simpl.
assert (is1: forall (y:Y)(xe xe': hfiber f y), isofhlevel n (paths xe xe')). intros. apply (X0 y).
assert (is2: forall (y:Y)(x:X)(xe': hfiber f y), isofhlevelf n (d2g f x xe')). intros. unfold isofhlevel. intro y0.
apply (isofhlevelweqf n ( ezweq3g f x xe' y0 ) (is1 y (hfiberpair f x y0) xe')).
assert (is3: forall (y' y : Y), isofhlevel n (paths y' y)). simpl in X1. assumption.
intros x' x .
set (y:= f x'). set (e':= idpath y). set (xe':= hfiberpair f x' e').
apply (IHn _ _ (d2g f x xe') (is2 y x xe') (is3 (f x) y)). Defined.
Theorem isofhlevelffib ( n : nat ) { X : UU } ( P : X -> UU ) ( x : X ) ( is : forall x':X, isofhlevel n (paths x' x) ) : isofhlevelf n ( tpair P x ) .
Proof . intros . unfold isofhlevelf . intro xp . apply (isofhlevelweqf n ( ezweq1pr1 P x xp) ( is ( pr1 xp ) ) ) . Defined .
Theorem isofhlevelfhfiberpr1y ( n : nat ) { X Y : UU } ( f : X -> Y ) ( y : Y ) ( is : forall y':Y, isofhlevel n (paths y' y) ) : isofhlevelf n ( hfiberpr1 f y).
Proof. intros . unfold isofhlevelf. intro x. apply (isofhlevelweqf n ( ezweq1g f y x ) ( is ( f x ) ) ) . Defined.
Theorem isofhlevelfsnfib (n:nat) { X : UU } (P:X -> UU)(x:X) ( is : isofhlevel (S n) (paths x x) ) : isofhlevelf (S n) ( tpair P x ).
Proof. intros . unfold isofhlevelf. intro xp. apply (isofhlevelweqf (S n) ( ezweq1pr1 P x xp ) ). apply isofhlevelsn . intro X1 . destruct X1 . assumption . Defined .
Theorem isofhlevelfsnhfiberpr1 ( n : nat ) { X Y : UU } (f : X -> Y ) ( y : Y ) ( is : isofhlevel (S n) (paths y y) ) : isofhlevelf (S n) (hfiberpr1 f y).
Proof. intros . unfold isofhlevelf. intro x. apply (isofhlevelweqf (S n) ( ezweq1g f y x ) ). apply isofhlevelsn. intro X1. destruct X1. assumption. Defined .
Corollary isofhlevelfhfiberpr1 ( n : nat ) { X Y : UU } ( f : X -> Y ) ( y : Y ) ( is : isofhlevel (S n) Y ) : isofhlevelf n ( hfiberpr1 f y ) .
Proof. intros. apply isofhlevelfhfiberpr1y. intro y' . apply (is y' y). Defined.
Theorem isofhlevelff ( n : nat ) { X Y Z : UU } (f : X -> Y ) ( g : Y -> Z ) : isofhlevelf n (fun x : X => g ( f x) ) -> isofhlevelf (S n) g -> isofhlevelf n f.
Proof. intros n X Y Z f g X0 X1. unfold isofhlevelf. intro y . set (ye:= hfiberpair g y (idpath (g y))).
apply (isofhlevelweqb n ( ezweqhf f g (g y) ye ) (isofhlevelffromXY n _ (X0 (g y)) (X1 (g y)) ye)). Defined.
Theorem isofhlevelfgf ( n : nat ) { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) : isofhlevelf n f -> isofhlevelf n g -> isofhlevelf n (fun x:X => g(f x)).
Proof. intros n X Y Z f g X0 X1. unfold isofhlevelf. intro z.
assert (is1: isofhlevelf n (hfibersgftog f g z)). unfold isofhlevelf. intro ye. apply (isofhlevelweqf n ( ezweqhf f g z ye ) (X0 (pr1 ye))).
assert (is2: isofhlevel n (hfiber g z)). apply (X1 z).
apply (isofhlevelXfromfY n _ is1 is2). Defined.
Theorem isofhlevelfgwtog (n:nat ) { X Y Z : UU } ( w : weq X Y ) ( g : Y -> Z ) ( is : isofhlevelf n (fun x : X => g ( w x ) ) ) : isofhlevelf n g .
Proof. intros . intro z . assert ( is' : isweq ( hfibersgftog w g z ) ) . intro ye . apply ( iscontrweqf ( ezweqhf w g z ye ) ( pr2 w ( pr1 ye ) ) ) . apply ( isofhlevelweqf _ ( weqpair _ is' ) ( is _ ) ) . Defined .
Theorem isofhlevelfgtogw (n:nat ) { X Y Z : UU } ( w : weq X Y ) ( g : Y -> Z ) ( is : isofhlevelf n g ) : isofhlevelf n (fun x : X => g ( w x ) ) .
Proof. intros . intro z . assert ( is' : isweq ( hfibersgftog w g z ) ) . intro ye . apply ( iscontrweqf ( ezweqhf w g z ye ) ( pr2 w ( pr1 ye ) ) ) . apply ( isofhlevelweqb _ ( weqpair _ is' ) ( is _ ) ) . Defined .
Corollary isofhlevelfhomot2 (n:nat) { X X' Y : UU } (f:X -> Y)(f':X' -> Y)(w : weq X X' )(h:forall x:X, paths (f x) (f' (w x))) : isofhlevelf n f -> isofhlevelf n f'.
Proof. intros n X X' Y f f' w h X0. assert (X1: isofhlevelf n (fun x:X => f' (w x))). apply (isofhlevelfhomot n _ _ h X0).
apply (isofhlevelfgwtog n w f' X1). Defined.
Theorem isofhlevelfonpaths (n:nat) { X Y : UU }(f:X -> Y)(x x':X): isofhlevelf (S n) f -> isofhlevelf n (@maponpaths _ _ f x x').
Proof. intros n X Y f x x' X0.
set (y:= f x'). set (xe':= hfiberpair f x' (idpath _ )).
assert (is1: isofhlevelf n (d2g f x xe')). unfold isofhlevelf. intro y0 . apply (isofhlevelweqf n ( ezweq3g f x xe' y0 ) (X0 y (hfiberpair f x y0) xe')).
assert (h: forall ee:paths x' x, paths (d2g f x xe' ee) (maponpaths f (pathsinv0 ee))). intro.
assert (e0: paths (pathscomp0 (maponpaths f (pathsinv0 ee)) (idpath _ )) (maponpaths f (pathsinv0 ee)) ). destruct ee. simpl. apply idpath. apply (e0). apply (isofhlevelfhomot2 n _ _ ( weqpair (@pathsinv0 _ x' x ) (isweqpathsinv0 _ _ ) ) h is1) . Defined.
Theorem isofhlevelfsn (n:nat) { X Y : UU } (f:X -> Y): (forall x x':X, isofhlevelf n (@maponpaths _ _ f x x')) -> isofhlevelf (S n) f.
Proof. intros n X Y f X0. unfold isofhlevelf. intro y . simpl. intros x x' . destruct x as [ x e ]. destruct x' as [ x' e' ]. destruct e' . set (xe':= hfiberpair f x' ( idpath _ ) ). set (xe:= hfiberpair f x e). set (d3:= d2g f x xe'). simpl in d3.
assert (is1: isofhlevelf n (d2g f x xe')).
assert (h: forall ee: paths x' x, paths (maponpaths f (pathsinv0 ee)) (d2g f x xe' ee)). intro. unfold d2g. simpl . apply ( pathsinv0 ( pathscomp0rid _ ) ) .
assert (is2: isofhlevelf n (fun ee: paths x' x => maponpaths f (pathsinv0 ee))). apply (isofhlevelfgtogw n ( weqpair _ (isweqpathsinv0 _ _ ) ) (@maponpaths _ _ f x x') (X0 x x')).
apply (isofhlevelfhomot n _ _ h is2).
apply (isofhlevelweqb n ( ezweq3g f x xe' e ) (is1 e)). Defined.
Theorem isofhlevelfssn (n:nat) { X Y : UU } (f:X -> Y): (forall x:X, isofhlevelf (S n) (@maponpaths _ _ f x x)) -> isofhlevelf (S (S n)) f.
Proof. intros n X Y f X0. unfold isofhlevelf. intro y .
assert (forall xe0: hfiber f y, isofhlevel (S n) (paths xe0 xe0)). intro. destruct xe0 as [ x e ]. destruct e . set (e':= idpath ( f x ) ). set (xe':= hfiberpair f x e'). set (xe:= hfiberpair f x e' ). set (d3:= d2g f x xe'). simpl in d3.
assert (is1: isofhlevelf (S n) (d2g f x xe')).
assert (h: forall ee: paths x x, paths (maponpaths f (pathsinv0 ee)) (d2g f x xe' ee)). intro. unfold d2g . simpl . apply ( pathsinv0 ( pathscomp0rid _ ) ) .
assert (is2: isofhlevelf (S n) (fun ee: paths x x => maponpaths f (pathsinv0 ee))). apply (isofhlevelfgtogw ( S n ) ( weqpair _ (isweqpathsinv0 _ _ ) ) (@maponpaths _ _ f x x) ( X0 x )) .
apply (isofhlevelfhomot (S n) _ _ h is2).
apply (isofhlevelweqb (S n) ( ezweq3g f x xe' e' ) (is1 e')).
apply (isofhlevelssn). assumption. Defined.
(** ** h-levels of [ pr1 ], fiber inclusions, fibers, total spaces and bases of fibrations *)
(** *** h-levels of [ pr1 ] *)
Theorem isofhlevelfpr1 (n:nat) { X : UU } (P:X -> UU)(is: forall x:X, isofhlevel n (P x)) : isofhlevelf n (@pr1 X P).
Proof. intros. unfold isofhlevelf. intro x . apply (isofhlevelweqf n ( ezweqpr1 _ x) (is x)). Defined.
Lemma isweqpr1 { Z : UU } ( P : Z -> UU ) ( is1 : forall z : Z, iscontr ( P z ) ) : isweq ( @pr1 Z P ) .
Proof. intros. unfold isweq. intro y. set (isy:= is1 y). apply (iscontrweqf ( ezweqpr1 P y)) . assumption. Defined.
Definition weqpr1 { Z : UU } ( P : Z -> UU ) ( is : forall z : Z , iscontr ( P z ) ) : weq ( total2 P ) Z := weqpair _ ( isweqpr1 P is ) .
(** *** h-level of the total space [ total2 ] *)
Theorem isofhleveltotal2 ( n : nat ) { X : UU } ( P : X -> UU ) ( is1 : isofhlevel n X )( is2 : forall x:X, isofhlevel n (P x) ) : isofhlevel n (total2 P).
Proof. intros. apply (isofhlevelXfromfY n (@pr1 _ _ )). apply isofhlevelfpr1. assumption. assumption. Defined.
Corollary isofhleveldirprod ( n : nat ) ( X Y : UU ) ( is1 : isofhlevel n X ) ( is2 : isofhlevel n Y ) : isofhlevel n (dirprod X Y).
Proof. intros. apply isofhleveltotal2. assumption. intro. assumption. Defined.
(** ** Propositions, inclusions and sets *)
(** *** Basics about types of h-level 1 - "propositions" *)
Definition isaprop := isofhlevel (S O) .
Notation isapropunit := iscontrpathsinunit .
Notation isapropdirprod := ( isofhleveldirprod 1 ) .
Lemma isapropifcontr { X : UU } ( is : iscontr X ) : isaprop X .
Proof. intros . set (f:= fun x:X => tt). assert (isw : isweq f). apply isweqcontrtounit. assumption. apply (isofhlevelweqb (S O) ( weqpair f isw ) ). intros x x' . apply iscontrpathsinunit. Defined.
Coercion isapropifcontr : iscontr >-> isaprop .
Theorem hlevelntosn ( n : nat ) ( T : UU ) ( is : isofhlevel n T ) : isofhlevel (S n) T.
Proof. intro. induction n as [ | n IHn ] . intro. apply isapropifcontr. intro. intro X. change (forall t1 t2:T, isofhlevel (S n) (paths t1 t2)). intros t1 t2 . change (forall t1 t2 : T, isofhlevel n (paths t1 t2)) in X. set (XX := X t1 t2). apply (IHn _ XX). Defined.
Corollary isofhlevelcontr (n:nat) { X : UU } ( is : iscontr X ) : isofhlevel n X.
Proof. intro. induction n as [ | n IHn ] . intros X X0 . assumption.
intros X X0. simpl. intros x x' . assert (is: iscontr (paths x x')). apply (isapropifcontr X0 x x'). apply (IHn _ is). Defined.
Lemma isofhlevelfweq ( n : nat ) { X Y : UU } ( f : weq X Y ) : isofhlevelf n f .
Proof. intros n X Y f . unfold isofhlevelf. intro y . apply ( isofhlevelcontr n ). apply ( pr2 f ). Defined.
Corollary isweqfinfibseq { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( fs : fibseqstr f g z ) ( isz : iscontr Z ) : isweq f .
Proof. intros . apply ( isofhlevelfffromZ 0 f g z fs ( isapropifcontr isz ) ) . Defined .
Corollary weqhfibertocontr { X Y : UU } ( f : X -> Y ) ( y : Y ) ( is : iscontr Y ) : weq ( hfiber f y ) X .
Proof. intros . split with ( hfiberpr1 f y ) . apply ( isofhlevelfhfiberpr1 0 f y ( hlevelntosn 0 _ is ) ) . Defined.
Corollary weqhfibertounit ( X : UU ) : weq ( hfiber ( fun x : X => tt ) tt ) X .
Proof. intro . apply ( weqhfibertocontr _ tt iscontrunit ) . Defined.
Corollary isofhleveltofun ( n : nat ) ( X : UU ) : isofhlevel n X -> isofhlevelf n ( fun x : X => tt ) .
Proof. intros n X is . intro t . destruct t . apply ( isofhlevelweqb n ( weqhfibertounit X ) is ) . Defined .
Corollary isofhlevelfromfun ( n : nat ) ( X : UU ) : isofhlevelf n ( fun x : X => tt ) -> isofhlevel n X .
Proof. intros n X is . apply ( isofhlevelweqf n ( weqhfibertounit X ) ( is tt ) ) . Defined .
Lemma isofhlevelsnprop (n:nat) { X : UU } ( is : isaprop X ) : isofhlevel (S n) X.
Proof. intros n X X0. simpl. unfold isaprop in X0. simpl in X0. intros x x' . apply isofhlevelcontr. apply (X0 x x'). Defined.
Lemma iscontraprop1 { X : UU } ( is : isaprop X ) ( x : X ) : iscontr X .
Proof. intros . unfold iscontr. split with x . intro t . unfold isofhlevel in is . set (is' := is t x ). apply ( pr1 is' ).
Defined.
Lemma iscontraprop1inv { X : UU } ( f : X -> iscontr X ) : isaprop X .
Proof. intros X X0. assert ( H : X -> isofhlevel (S O) X). intro X1. apply (hlevelntosn O _ ( X0 X1 ) ) . apply ( isofhlevelsn O H ) . Defined.
Lemma proofirrelevance ( X : UU ) ( is : isaprop X ) : forall x x' : X , paths x x' .
Proof. intros . unfold isaprop in is . unfold isofhlevel in is . apply ( pr1 ( is x x' ) ). Defined.
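(** An added usage example: combining [ proofirrelevance ] with [ isapropifcontr ] and the contractibility lemma [ iscontrunit ] established earlier in the file identifies any two elements of [ unit ] . *)

Lemma proofirrelevanceunitexample ( x x' : unit ) : paths x x' .
Proof . intros . apply ( proofirrelevance unit ( isapropifcontr iscontrunit ) x x' ) . Defined .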
Lemma invproofirrelevance ( X : UU ) ( ee : forall x x' : X , paths x x' ) : isaprop X.
Proof. intros . unfold isaprop. unfold isofhlevel . intro x .
assert ( is1 : iscontr X ). split with x. intro t . apply ( ee t x). assert ( is2 : isaprop X). apply isapropifcontr. assumption.
unfold isaprop in is2. unfold isofhlevel in is2. apply (is2 x). Defined.
Lemma isweqimplimpl { X Y : UU } ( f : X -> Y ) ( g : Y -> X ) ( isx : isaprop X ) ( isy : isaprop Y ) : isweq f.
Proof. intros.
assert (isx0: forall x:X, paths (g (f x)) x). intro. apply proofirrelevance . apply isx .
assert (isy0 : forall y : Y, paths (f (g y)) y). intro. apply proofirrelevance . apply isy .
apply (gradth f g isx0 isy0). Defined.
Definition weqimplimpl { X Y : UU } ( f : X -> Y ) ( g : Y -> X ) ( isx : isaprop X ) ( isy : isaprop Y ) := weqpair _ ( isweqimplimpl f g isx isy ) .
Theorem isapropempty: isaprop empty.
Proof. unfold isaprop. unfold isofhlevel. intros x x' . destruct x. Defined.
Theorem isapropifnegtrue { X : UU } ( a : X -> empty ) : isaprop X .
Proof . intros . set ( w := weqpair _ ( isweqtoempty a ) ) . apply ( isofhlevelweqb 1 w isapropempty ) . Defined .
(** *** Functional extensionality for functions to the empty type *)
Axiom funextempty : forall ( X : UU ) ( f g : X -> empty ) , paths f g .
(** *** More results on propositions *)
Theorem isapropneg (X:UU): isaprop (X -> empty).
Proof. intro. apply invproofirrelevance . intros x x' . apply ( funextempty X x x' ) . Defined .
(** See also [ isapropneg2 ] *)
Corollary isapropdneg (X:UU): isaprop (dneg X).
Proof. intro. apply (isapropneg (neg X)). Defined.
Definition isaninvprop (X:UU) := isweq (todneg X).
Definition invimpl (X:UU) (is: isaninvprop X) : (dneg X) -> X:= invmap ( weqpair (todneg X) is ) .
Lemma isapropaninvprop (X:UU): isaninvprop X -> isaprop X.
Proof. intros X X0.
apply (isofhlevelweqb (S O) ( weqpair (todneg X) X0 ) (isapropdneg X)). Defined.
Theorem isaninvpropneg (X:UU): isaninvprop (neg X).
Proof. intros.
set (f:= todneg (neg X)). set (g:= negf (todneg X)). set (is1:= isapropneg X). set (is2:= isapropneg (dneg X)). apply (isweqimplimpl f g is1 is2). Defined.
Theorem isapropdec (X:UU): (isaprop X) -> (isaprop (coprod X (X-> empty))).
Proof. intros X X0.
assert (X1: forall (x x': X), paths x x'). apply (proofirrelevance _ X0).
assert (X2: forall (x x': coprod X (X -> empty)), paths x x'). intros.
destruct x as [ x0 | y0 ]. destruct x' as [ x | y ]. apply (maponpaths (fun x:X => ii1 x) (X1 x0 x)).
apply (fromempty (y x0)).
destruct x' as [ x | y ]. apply (fromempty (y0 x)).
assert (e: paths y0 y). apply (proofirrelevance _ (isapropneg X) y0 y). apply (maponpaths (fun f: X -> empty => ii2 f) e).
apply (invproofirrelevance _ X2). Defined.
(** *** Inclusions - functions of h-level 1 *)
Definition isincl { X Y : UU } (f : X -> Y ) := isofhlevelf 1 f .
Definition incl ( X Y : UU ) := total2 ( fun f : X -> Y => isincl f ) .
Definition inclpair { X Y : UU } ( f : X -> Y ) ( is : isincl f ) : incl X Y := tpair _ f is .
Definition pr1incl ( X Y : UU ) : incl X Y -> ( X -> Y ) := @pr1 _ _ .
Coercion pr1incl : incl >-> Funclass .
Lemma isinclweq ( X Y : UU ) ( f : X -> Y ) : isweq f -> isincl f .
Proof . intros X Y f is . apply ( isofhlevelfweq 1 ( weqpair _ is ) ) . Defined .
Coercion isinclweq : isweq >-> isincl .
Lemma isofhlevelfsnincl (n:nat) { X Y : UU } (f:X -> Y)(is: isincl f): isofhlevelf (S n) f.
Proof. intros. unfold isofhlevelf. intro y . apply isofhlevelsnprop. apply (is y). Defined.
Definition weqtoincl ( X Y : UU ) : weq X Y -> incl X Y := fun w => inclpair ( pr1 w ) ( pr2 w ) .
Coercion weqtoincl : weq >-> incl .
Lemma isinclcomp { X Y Z : UU } ( f : incl X Y ) ( g : incl Y Z ) : isincl ( funcomp ( pr1 f ) ( pr1 g ) ) .
Proof . intros . apply ( isofhlevelfgf 1 f g ( pr2 f ) ( pr2 g ) ) . Defined .
Definition inclcomp { X Y Z : UU } ( f : incl X Y ) ( g : incl Y Z ) : incl X Z := inclpair ( funcomp ( pr1 f ) ( pr1 g ) ) ( isinclcomp f g ) .
Lemma isincltwooutof3a { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( isg : isincl g ) ( isgf : isincl ( funcomp f g ) ) : isincl f .
Proof . intros . apply ( isofhlevelff 1 f g isgf ) . apply ( isofhlevelfsnincl 1 g isg ) . Defined .
Lemma isinclgwtog { X Y Z : UU } ( w : weq X Y ) ( g : Y -> Z ) ( is : isincl ( funcomp w g ) ) : isincl g .
Proof . intros . apply ( isofhlevelfgwtog 1 w g is ) . Defined .
Lemma isinclgtogw { X Y Z : UU } ( w : weq X Y ) ( g : Y -> Z ) ( is : isincl g ) : isincl ( funcomp w g ) .
Proof . intros . apply ( isofhlevelfgtogw 1 w g is ) . Defined .
Lemma isinclhomot { X Y : UU } ( f g : X -> Y ) ( h : homot f g ) ( isf : isincl f ) : isincl g .
Proof . intros . apply ( isofhlevelfhomot ( S O ) f g h isf ) . Defined .
Definition isofhlevelsninclb (n:nat) { X Y : UU } (f:X -> Y)(is: isincl f) : isofhlevel (S n) Y -> isofhlevel (S n) X:= isofhlevelXfromfY (S n) f (isofhlevelfsnincl n f is).
Definition isapropinclb { X Y : UU } ( f : X -> Y ) ( isf : isincl f ) : isaprop Y -> isaprop X := isofhlevelXfromfY 1 _ isf .
Lemma iscontrhfiberofincl { X Y : UU } (f:X -> Y): isincl f -> (forall x:X, iscontr (hfiber f (f x))).
Proof. intros X Y f X0 x. unfold isofhlevelf in X0. set (isy:= X0 (f x)). apply (iscontraprop1 isy (hfiberpair f _ (idpath (f x)))). Defined.
Lemma isweqonpathsincl { X Y : UU } (f:X -> Y) (is: isincl f)(x x':X): isweq (@maponpaths _ _ f x x').
Proof. intros. apply (isofhlevelfonpaths O f x x' is). Defined.
Definition weqonpathsincl { X Y : UU } (f:X -> Y) (is: isincl f)(x x':X) := weqpair _ ( isweqonpathsincl f is x x' ) .
Definition invmaponpathsincl { X Y : UU } (f:X -> Y) (is: isincl f)(x x':X): paths (f x) (f x') -> paths x x':= invmap ( weqonpathsincl f is x x') .
Lemma isinclweqonpaths { X Y : UU } (f:X -> Y): (forall x x':X, isweq (@maponpaths _ _ f x x')) -> isincl f.
Proof. intros X Y f X0. apply (isofhlevelfsn O f X0). Defined.
Definition isinclpr1 { X : UU } (P:X -> UU)(is: forall x:X, isaprop (P x)): isincl (@pr1 X P):= isofhlevelfpr1 (S O) P is.
Theorem samehfibers { X Y Z : UU } (f: X -> Y) (g: Y -> Z) (is1: isincl g) ( y: Y): weq ( hfiber f y ) ( hfiber ( fun x => g ( f x ) ) ( g y ) ) .
Proof. intros. split with (@hfibersftogf _ _ _ f g (g y) (hfiberpair g y (idpath _ ))) .
set (z:= g y). set (ye:= hfiberpair g y (idpath _ )). unfold isweq. intro xe.
set (is3:= isweqezmap1 _ _ _ ( fibseqhf f g z ye ) xe).
assert (w1: weq (paths (hfibersgftog f g z xe) ye) (hfiber (hfibersftogf f g z ye) xe)). split with (ezmap (d1 (hfibersftogf f g z ye) (hfibersgftog f g z) ye ( fibseqhf f g z ye ) xe) (hfibersftogf f g z ye) xe ( fibseq1 (hfibersftogf f g z ye) (hfibersgftog f g z) ye ( fibseqhf f g z ye ) xe) ). apply is3. apply (iscontrweqf w1 ).
assert (is4: iscontr (hfiber g z)). apply iscontrhfiberofincl. assumption.
apply ( isapropifcontr is4 ). Defined.
(** *** Basics about types of h-level 2 - "sets" *)
Definition isaset ( X : UU ) : UU := forall x x' : X , isaprop ( paths x x' ) .
(* Definition isaset := isofhlevel 2 . *)
Notation isasetdirprod := ( isofhleveldirprod 2 ) .
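(** An added check substantiating the commented-out alternative definition above: a type of h-level [ 2 ] is a set in the sense of the definition given here; the proof is by [ assumption ] since the two notions agree definitionally. *)

Lemma isofhleveltwotoisasetexample ( X : UU ) ( is : isofhlevel 2 X ) : isaset X .
Proof . intros . assumption . Defined .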
Lemma isasetunit : isaset unit .
Proof . apply ( isofhlevelcontr 2 iscontrunit ) . Defined .
Lemma isasetempty : isaset empty .
Proof. apply ( isofhlevelsnprop 1 isapropempty ) . Defined .
Lemma isasetifcontr { X : UU } ( is : iscontr X ) : isaset X .
Proof . intros . apply ( isofhlevelcontr 2 is ) . Defined .
Lemma isasetaprop { X : UU } ( is : isaprop X ) : isaset X .
Proof . intros . apply ( isofhlevelsnprop 1 is ) . Defined .
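(** An added example: the type [ X -> empty ] , i.e. [ neg X ] , is a proposition by [ isapropneg ] and hence in particular a set. *)

Lemma isasetnegexample ( X : UU ) : isaset ( X -> empty ) .
Proof . intros . apply ( isasetaprop ( isapropneg X ) ) . Defined .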
(** The following lemma asserts "uniqueness of identity proofs" (uip) for sets. *)
Lemma uip { X : UU } ( is : isaset X ) { x x' : X } ( e e' : paths x x' ) : paths e e' .
Proof. intros . apply ( proofirrelevance _ ( is x x' ) e e' ) . Defined .
(** For the theorem about the coproduct of two sets see [ isasetcoprod ] below. *)
Lemma isofhlevelssnset (n:nat) ( X : UU ) ( is : isaset X ) : isofhlevel ( S (S n) ) X.
Proof. intros n X X0. simpl. unfold isaset in X0. intros x x' . apply isofhlevelsnprop. set ( int := X0 x x'). assumption . Defined.
Lemma isasetifiscontrloops (X:UU): (forall x:X, iscontr (paths x x)) -> isaset X.
Proof. intros X X0. unfold isaset. unfold isofhlevel. intros x x' x0 x0' . destruct x0. set (is:= X0 x). apply isapropifcontr. assumption. Defined.
Lemma iscontrloopsifisaset (X:UU): (isaset X) -> (forall x:X, iscontr (paths x x)).
Proof. intros X X0 x. unfold isaset in X0. unfold isofhlevel in X0. change (forall (x x' : X) (x0 x'0 : paths x x'), iscontr (paths x0 x'0)) with (forall (x x':X), isaprop (paths x x')) in X0. apply (iscontraprop1 (X0 x x) (idpath x)). Defined.
(** A monic subtype of a set is a set. *)
Theorem isasetsubset { X Y : UU } (f: X -> Y) (is1: isaset Y) (is2: isincl f): isaset X.
Proof. intros. apply (isofhlevelsninclb (S O) f is2). apply is1. Defined.
(** For a map into a set, the projection from any of its homotopy fibers to the domain is an inclusion. *)
Theorem isinclfromhfiber { X Y : UU } (f: X -> Y) (is : isaset Y) ( y: Y ) : @isincl (hfiber f y) X ( @pr1 _ _ ).
Proof. intros. apply isofhlevelfhfiberpr1. assumption. Defined.
(** A criterion for a function between sets to be an inclusion. *)
Theorem isinclbetweensets { X Y : UU } ( f : X -> Y ) ( isx : isaset X ) ( isy : isaset Y ) ( inj : forall x x' : X , ( paths ( f x ) ( f x' ) -> paths x x' ) ) : isincl f .
Proof. intros . apply isinclweqonpaths . intros x x' . apply ( isweqimplimpl ( @maponpaths _ _ f x x' ) ( inj x x' ) ( isx x x' ) ( isy ( f x ) ( f x' ) ) ) . Defined .
(** A map from [ unit ] to a set is an inclusion. *)
Theorem isinclfromunit { X : UU } ( f : unit -> X ) ( is : isaset X ) : isincl f .
Proof. intros . apply ( isinclbetweensets f ( isofhlevelcontr 2 ( iscontrunit ) ) is ) . intros . destruct x . destruct x' . apply idpath . Defined .
(** ** Isolated points and types with decidable equality. *)
(** *** Basic results on complements to a point *)
Definition compl ( X : UU ) ( x : X ):= total2 (fun x':X => neg (paths x x' ) ) .
Definition complpair ( X : UU ) ( x : X ) := tpair (fun x':X => neg (paths x x' ) ) .
Definition pr1compl ( X : UU ) ( x : X ) := @pr1 _ (fun x':X => neg (paths x x' ) ) .
Lemma isinclpr1compl ( X : UU ) ( x : X ) : isincl ( pr1compl X x ) .
Proof. intros . apply ( isinclpr1 _ ( fun x' : X => isapropneg _ ) ) . Defined.
Definition recompl ( X : UU ) (x:X): coprod (compl X x) unit -> X := fun u:_ =>
match u with
ii1 x0 => pr1 x0|
ii2 t => x
end.
Definition maponcomplincl { X Y : UU } (f:X -> Y)(is: isincl f)(x:X): compl X x -> compl Y (f x):= fun x0':_ =>
match x0' with
tpair _ x' neqx => tpair _ (f x') (negf (invmaponpathsincl _ is x x' ) neqx)
end.
Definition maponcomplweq { X Y : UU } (f : weq X Y ) (x:X):= maponcomplincl f (isofhlevelfweq (S O) f ) x.
Theorem isweqmaponcompl { X Y : UU } ( f : weq X Y ) (x:X): isweq (maponcomplweq f x).
Proof. intros. set (is1:= isofhlevelfweq (S O) f). set (map1:= totalfun (fun x':X => neg (paths x x' )) (fun x':X => neg (paths (f x) (f x'))) (fun x':X => negf (invmaponpathsincl _ is1 x x' ))). set (map2:= fpmap f (fun y:Y => neg (paths (f x) y ))).
assert (is2: forall x':X, isweq (negf (invmaponpathsincl _ is1 x x'))). intro.
set (invimpll:= (negf (@maponpaths _ _ f x x'))). apply (isweqimplimpl (negf (invmaponpathsincl _ is1 x x')) (negf (@maponpaths _ _ f x x')) (isapropneg _) (isapropneg _)).
assert (is3: isweq map1). unfold map1 . apply ( isweqfibtototal _ _ (fun x':X => weqpair _ ( is2 x' )) ) .
assert (is4: isweq map2). apply (isweqfpmap f (fun y:Y => neg (paths (f x) y )) ).
assert (h: forall x0':_, paths (map2 (map1 x0')) (maponcomplweq f x x0')). intro. simpl. destruct x0'. simpl. apply idpath.
apply (isweqhomot _ _ h (twooutof3c _ _ is3 is4)).
Defined.
Definition weqoncompl { X Y : UU } (w: weq X Y) ( x : X ) : weq (compl X x) (compl Y (pr1 w x)):= weqpair _ (isweqmaponcompl w x).
Definition homotweqoncomplcomp { X Y Z : UU } ( f : weq X Y ) ( g : weq Y Z ) ( x : X ) : homot ( weqcomp ( weqoncompl f x ) ( weqoncompl g ( f x ) ) ) ( weqoncompl ( weqcomp f g ) x ) .
Proof . intros . intro x' . destruct x' as [ x' nexx' ] . apply ( invmaponpathsincl _ ( isinclpr1compl Z _ ) _ _ ) . simpl . apply idpath . Defined .
(** *** Basic results on types with an isolated point. *)
Definition isisolated (X:UU)(x:X):= forall x':X, coprod (paths x x' ) (paths x x' -> empty).
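(** An added concrete example: the point [ true ] is isolated in [ bool ] , using the lemma [ nopathstruetofalse ] established earlier in the file. *)

Lemma isisolatedtrueboolexample : isisolated bool true .
Proof . unfold isisolated . intro x' . destruct x' . apply ( ii1 ( idpath true ) ) . apply ( ii2 nopathstruetofalse ) . Defined .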
Definition isolated ( T : UU ) := total2 ( fun t : T => isisolated T t ) .
Definition isolatedpair ( T : UU ) := tpair ( fun t : T => isisolated T t ) .
Definition pr1isolated ( T : UU ) := fun x : isolated T => pr1 x .
Theorem isaproppathsfromisolated ( X : UU ) ( x : X ) ( is : isisolated X x ) : forall x' : X, isaprop ( paths x x' ) .
Proof. intros . apply iscontraprop1inv . intro e . destruct e .
set (f:= fun e: paths x x => coconusfromtpair _ e).
assert (is' : isweq f). apply (onefiber (fun x':X => paths x x' ) x (fun x':X => is x' )).
assert (is2: iscontr (coconusfromt _ x)). apply iscontrcoconusfromt.
apply (iscontrweqb ( weqpair f is' ) ). assumption. Defined.
Theorem isaproppathstoisolated ( X : UU ) ( x : X ) ( is : isisolated X x ) : forall x' : X, isaprop ( paths x' x ) .
Proof . intros . apply ( isofhlevelweqf 1 ( weqpathsinv0 x x' ) ( isaproppathsfromisolated X x is x' ) ) . Defined .
Lemma isisolatedweqf { X Y : UU } ( f : weq X Y ) (x:X) (is2: isisolated _ x) : isisolated _ (f x).
Proof. intros. unfold isisolated. intro y. set (g:=invmap f ). set (x':= g y). destruct (is2 x') as [ x0 | y0 ]. apply (ii1 (pathsweq1' f x y x0) ).
assert (phi: paths y (f x) -> empty).
assert (psi: (paths (g y) x -> empty) -> (paths y (f x) -> empty)). intros X0 X1. apply (X0 (pathsinv0 (pathsweq1 f x y (pathsinv0 X1)))). apply (psi ( ( negf ( @pathsinv0 _ _ _ ) ) y0) ) . apply (ii2 ( negf ( @pathsinv0 _ _ _ ) phi ) ). Defined.
Theorem isisolatedinclb { X Y : UU } ( f : X -> Y ) ( is : isincl f ) ( x : X ) ( is0 : isisolated _ ( f x ) ) : isisolated _ x .
Proof. intros . unfold isisolated . intro x' . set ( a := is0 ( f x' ) ) . destruct a as [ a1 | a2 ] . apply ( ii1 ( invmaponpathsincl f is _ _ a1 ) ) . apply ( ii2 ( ( negf ( @maponpaths _ _ f _ _ ) ) a2 ) ) . Defined.
Lemma disjointl1 (X:UU): isisolated (coprod X unit) (ii2 tt).
Proof. intros. unfold isisolated. intros x' . destruct x' as [ x | u ] . apply (ii2 (negpathsii2ii1 x tt )). destruct u. apply (ii1 (idpath _ )). Defined.
(** *** Weak equivalence [ weqrecompl ] between the coproduct of the complement to an isolated point with [ unit ] and the original type *)
Definition invrecompl (X:UU)(x:X)(is: isisolated X x): X -> coprod (compl X x) unit:=
fun x':X => match (is x') with
ii1 e => ii2 tt|
ii2 phi => ii1 (complpair _ _ x' phi)
end.
Theorem isweqrecompl (X:UU)(x:X)(is:isisolated X x): isweq (recompl _ x).
Proof. intros. set (f:= recompl _ x). set (g:= invrecompl X x is). unfold invrecompl in g. simpl in g.
assert (efg: forall x':X, paths (f (g x')) x'). intro. destruct (is x') as [ x0 | e ]. destruct x0. unfold f. unfold g. simpl. unfold recompl. simpl. destruct (is x) as [ x0 | e ] . simpl. apply idpath. destruct (e (idpath x)). unfold f. unfold g. simpl. unfold recompl. simpl. destruct (is x') as [ x0 | e0 ]. destruct (e x0). simpl. apply idpath.
assert (egf: forall u: coprod (compl X x) unit, paths (g (f u)) u). unfold isisolated in is. intro. destruct (is (f u)) as [ p | e ] . destruct u as [ c | u]. simpl. destruct c as [ t x0 ]. simpl in p. destruct (x0 p).
destruct u.
assert (e1: paths (g (f (ii2 tt))) (g x)). apply (maponpaths g p).
assert (e2: paths (g x) (ii2 tt)). unfold g. destruct (is x) as [ i | e ]. apply idpath. destruct (e (idpath x)). apply (pathscomp0 e1 e2). destruct u. simpl. destruct c as [ t x0 ]. simpl. unfold isisolated in is. unfold g. destruct (is t) as [ p | e0 ] . destruct (x0 p). simpl in g.
unfold f. unfold recompl. simpl in e.
assert (ee: paths e0 x0). apply (proofirrelevance _ (isapropneg (paths x t))). destruct ee. apply idpath.
unfold f. unfold g. simpl. destruct u. destruct (is x). apply idpath. destruct (e (idpath x)).
apply (gradth f g egf efg). Defined.
Definition weqrecompl ( X : UU ) ( x : X ) ( is : isisolated _ x ) : weq ( coprod ( compl X x ) unit ) X := weqpair _ ( isweqrecompl X x is ) .
(** *** Theorem saying that [ recompl ] commutes up to homotopy with [ maponcomplweq ] *)
Theorem homotrecomplnat { X Y : UU } ( w : weq X Y ) ( x : X ) : forall a : coprod ( compl X x ) unit , paths ( recompl Y ( w x ) ( coprodf ( maponcomplweq w x ) ( fun x: unit => x ) a ) ) ( w ( recompl X x a ) ) .
Proof . intros . destruct a as [ ane | t ] . destruct ane as [ a ne ] . simpl . apply idpath . destruct t . simpl . apply idpath . Defined .
(** *** Recomplement on functions *)
Definition recomplf { X Y : UU } ( x : X ) ( y : Y ) ( isx : isisolated X x ) ( f : compl X x -> compl Y y ) := funcomp ( funcomp ( invmap ( weqrecompl X x isx ) ) ( coprodf f ( idfun unit ) ) ) ( recompl Y y ) .
Definition weqrecomplf { X Y : UU } ( x : X ) ( y : Y ) ( isx : isisolated X x ) ( isy : isisolated Y y ) ( w : weq ( compl X x ) ( compl Y y ) ) := weqcomp ( weqcomp ( invweq ( weqrecompl X x isx ) ) ( weqcoprodf w ( idweq unit ) ) ) ( weqrecompl Y y isy ) .
Definition homotrecomplfhomot { X Y : UU } ( x : X ) ( y : Y ) ( isx : isisolated X x ) ( f f' : compl X x -> compl Y y ) ( h : homot f f' ) : homot ( recomplf x y isx f ) ( recomplf x y isx f') .
Proof . intros. intro a . unfold recomplf . apply ( maponpaths ( recompl Y y ) ( homotcoprodfhomot _ _ _ _ h ( fun t : unit => idpath t ) (invmap (weqrecompl X x isx) a) ) ) . Defined .
Lemma pathsrecomplfxtoy { X Y : UU } ( x : X ) ( y : Y ) ( isx : isisolated X x ) ( f : compl X x -> compl Y y ) : paths ( recomplf x y isx f x ) y .
Proof . intros . unfold recomplf . unfold weqrecompl . unfold invmap . simpl . unfold invrecompl . unfold funcomp . destruct ( isx x ) as [ i1 | i2 ] . simpl . apply idpath . destruct ( i2 ( idpath _ ) ) . Defined .
Definition homotrecomplfcomp { X Y Z : UU } ( x : X ) ( y : Y ) ( z : Z ) ( isx : isisolated X x ) ( isy : isisolated Y y ) ( f : compl X x -> compl Y y ) ( g : compl Y y -> compl Z z ) : homot ( funcomp ( recomplf x y isx f ) ( recomplf y z isy g ) ) ( recomplf x z isx ( funcomp f g ) ) .
Proof . intros. intro x' . unfold recomplf . set ( e := homotinvweqweq ( weqrecompl Y y isy ) (coprodf f ( idfun unit) (invmap ( weqrecompl X x isx ) x')) ) . unfold funcomp . simpl in e . simpl . rewrite e . set ( e' := homotcoprodfcomp f ( idfun unit ) g ( idfun unit ) (invmap (weqrecompl X x isx) x') ) . unfold funcomp in e' . rewrite e' . apply idpath . Defined .
Definition homotrecomplfidfun { X : UU } ( x : X ) ( isx : isisolated X x ) : homot ( recomplf x x isx ( idfun ( compl X x ) ) ) ( idfun _ ) .
Proof . intros . intro x' . unfold recomplf . unfold weqrecompl . unfold invmap . simpl . unfold invrecompl . unfold funcomp. destruct ( isx x' ) as [ e | ne ] . simpl . apply e . simpl . apply idpath . Defined .
Lemma ishomotinclrecomplf { X Y : UU } ( x : X ) ( y : Y ) ( isx : isisolated X x ) ( f : compl X x -> compl Y y ) ( x'n : compl X x ) ( y'n : compl Y y ) ( e : paths ( recomplf x y isx f ( pr1 x'n ) ) ( pr1 y'n ) ) : paths ( f x'n ) y'n .
Proof . intros . destruct x'n as [ x' nexx' ] . destruct y'n as [ y' neyy' ] . simpl in e . apply ( invmaponpathsincl _ ( isinclpr1compl _ _ ) ) . simpl . rewrite ( pathsinv0 e ) . unfold recomplf. unfold invmap . unfold coprodf . simpl . unfold funcomp . unfold invrecompl . destruct ( isx x' ) as [ exx' | nexx'' ] . destruct ( nexx' exx' ) . simpl . assert ( ee : paths nexx' nexx'' ) . apply ( proofirrelevance _ ( isapropneg _ ) ) . rewrite ee . apply idpath . Defined .
(** *** Standard weak equivalence between [ compl T t1 ] and [ compl T t2 ] for isolated [ t1 t2 ] *)
Definition funtranspos0 { T : UU } ( t1 t2 : T ) ( is2 : isisolated T t2 ) ( x :compl T t1 ) : compl T t2 := match ( is2 ( pr1 x ) ) with
ii1 e => match ( is2 t1 ) with ii1 e' => fromempty ( pr2 x ( pathscomp0 ( pathsinv0 e' ) e ) ) | ii2 ne' => complpair T t2 t1 ne' end |
ii2 ne => complpair T t2 ( pr1 x ) ne end .
Definition homottranspos0t2t1t1t2 { T : UU } ( t1 t2 : T ) ( is1 : isisolated T t1 ) ( is2 : isisolated T t2 ) : homot ( funcomp ( funtranspos0 t1 t2 is2 ) ( funtranspos0 t2 t1 is1 ) ) ( idfun _ ) .
Proof. intros. intro x . unfold funtranspos0 . unfold funcomp . destruct x as [ t net1 ] . simpl . destruct ( is2 t ) as [ et2 | net2 ] . destruct ( is2 t1 ) as [ et2t1 | net2t1 ] . destruct (net1 (pathscomp0 (pathsinv0 et2t1) et2)) . simpl . destruct ( is1 t1 ) as [ e | ne ] . destruct ( is1 t2 ) as [ et1t2 | net1t2 ] . destruct (net2t1 (pathscomp0 (pathsinv0 et1t2) e)) . apply ( invmaponpathsincl _ ( isinclpr1compl _ _ ) _ _ ) . simpl . apply et2 . destruct ( ne ( idpath _ ) ) . simpl . destruct ( is1 t ) as [ et1t | net1t ] . destruct ( net1 et1t ) . apply ( invmaponpathsincl _ ( isinclpr1compl _ _ ) _ _ ) . simpl . apply idpath . Defined .
Definition weqtranspos0 { T : UU } ( t1 t2 : T ) ( is1 : isisolated T t1 ) ( is2 : isisolated T t2 ) : weq ( compl T t1 ) ( compl T t2 ) .
Proof . intros . set ( f := funtranspos0 t1 t2 is2 ) . set ( g := funtranspos0 t2 t1 is1 ) . split with f .
assert ( egf : forall x : _ , paths ( g ( f x ) ) x ) . intro x . apply ( homottranspos0t2t1t1t2 t1 t2 is1 is2 ) .
assert ( efg : forall x : _ , paths ( f ( g x ) ) x ) . intro x . apply ( homottranspos0t2t1t1t2 t2 t1 is2 is1 ) .
apply ( gradth _ _ egf efg ) . Defined .
(** *** Transposition of two isolated points *)
Definition funtranspos { T : UU } ( t1 t2 : isolated T ) : T -> T := recomplf ( pr1 t1 ) ( pr1 t2 ) ( pr2 t1 ) ( funtranspos0 ( pr1 t1 ) ( pr1 t2 ) ( pr2 t2 ) ) .
Definition homottranspost2t1t1t2 { T : UU } ( t1 t2 : T ) ( is1 : isisolated T t1 ) ( is2 : isisolated T t2 ) : homot ( funcomp ( funtranspos ( tpair _ t1 is1 ) ( tpair _ t2 is2 ) ) ( funtranspos ( tpair _ t2 is2 ) ( tpair _ t1 is1 ) ) ) ( idfun _ ) .
Proof. intros. intro t . unfold funtranspos . rewrite ( homotrecomplfcomp t1 t2 t1 is1 is2 _ _ t ) . set ( e:= homotrecomplfhomot t1 t1 is1 _ ( idfun _ ) ( homottranspos0t2t1t1t2 t1 t2 is1 is2 ) t ) . set ( e' := homotrecomplfidfun t1 is1 t ) . apply ( pathscomp0 e e' ) . Defined .
Theorem weqtranspos { T : UU } ( t1 t2 : T ) ( is1 : isisolated T t1 ) ( is2 : isisolated T t2 ) : weq T T .
Proof . intros . set ( f := funtranspos ( tpair _ t1 is1) ( tpair _ t2 is2 ) ) . set ( g := funtranspos ( tpair _ t2 is2 ) ( tpair _ t1 is1 ) ) . split with f .
assert ( egf : forall t : T , paths ( g ( f t ) ) t ) . intro . apply homottranspost2t1t1t2 .
assert ( efg : forall t : T , paths ( f ( g t ) ) t ) . intro . apply homottranspost2t1t1t2 .
apply ( gradth _ _ egf efg ) . Defined .
Lemma pathsfuntransposoft1 { T : UU } ( t1 t2 : T ) ( is1 : isisolated T t1 ) ( is2 : isisolated T t2 ) : paths ( funtranspos ( tpair _ t1 is1 ) ( tpair _ t2 is2 ) t1 ) t2 .
Proof . intros . unfold funtranspos . rewrite ( pathsrecomplfxtoy t1 t2 is1 _ ) . apply idpath . Defined .
Lemma pathsfuntransposoft2 { T : UU } ( t1 t2 : T ) ( is1 : isisolated T t1 ) ( is2 : isisolated T t2 ) : paths ( funtranspos ( tpair _ t1 is1 ) ( tpair _ t2 is2 ) t2 ) t1 .
Proof . intros . unfold funtranspos . simpl . unfold funtranspos0 . unfold recomplf . unfold funcomp . unfold coprodf . unfold invmap . unfold weqrecompl . unfold recompl . simpl . unfold invrecompl . destruct ( is1 t2 ) as [ et1t2 | net1t2 ] . apply ( pathsinv0 et1t2 ) . simpl . destruct ( is2 t2 ) as [ et2t2 | net2t2 ] . destruct ( is2 t1 ) as [ et2t1 | net2t1 ] . destruct (net1t2 (pathscomp0 (pathsinv0 et2t1) et2t2) ). simpl . apply idpath . destruct ( net2t2 ( idpath _ ) ) . Defined .
Lemma pathsfuntransposofnet1t2 { T : UU } ( t1 t2 : T ) ( is1 : isisolated T t1 ) ( is2 : isisolated T t2 ) ( t : T ) ( net1t : neg ( paths t1 t ) ) ( net2t : neg ( paths t2 t ) ) : paths ( funtranspos ( tpair _ t1 is1 ) ( tpair _ t2 is2 ) t ) t .
Proof . intros . unfold funtranspos . simpl . unfold funtranspos0 . unfold recomplf . unfold funcomp . unfold coprodf . unfold invmap . unfold weqrecompl . unfold recompl . simpl . unfold invrecompl . destruct ( is1 t ) as [ et1t | net1t' ] . destruct ( net1t et1t ) . simpl . destruct ( is2 t ) as [ et2t | net2t' ] . destruct ( net2t et2t ) . simpl . apply idpath . Defined .
Lemma homotfuntranspos2 { T : UU } ( t1 t2 : T ) ( is1 : isisolated T t1 ) ( is2 : isisolated T t2 ) : homot ( funcomp ( funtranspos ( tpair _ t1 is1 ) ( tpair _ t2 is2 ) ) ( funtranspos ( tpair _ t1 is1 ) ( tpair _ t2 is2 ) ) ) ( idfun _ ) .
Proof . intros . intro t . unfold funcomp . unfold idfun .
destruct ( is1 t ) as [ et1t | net1t ] . rewrite ( pathsinv0 et1t ) . rewrite ( pathsfuntransposoft1 _ _ ) . rewrite ( pathsfuntransposoft2 _ _ ) . apply idpath .
destruct ( is2 t ) as [ et2t | net2t ] . rewrite ( pathsinv0 et2t ) . rewrite ( pathsfuntransposoft2 _ _ ) . rewrite ( pathsfuntransposoft1 _ _ ) . apply idpath .
rewrite ( pathsfuntransposofnet1t2 _ _ _ _ _ net1t net2t ) . rewrite ( pathsfuntransposofnet1t2 _ _ _ _ _ net1t net2t ) . apply idpath . Defined .
(** *** Types with decidable equality *)
Definition isdeceq (X:UU) : UU := forall (x x':X), coprod (paths x x' ) (paths x x' -> empty).
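(** The following is an illustrative sketch added here; the lemma name [ isdecequnit_example ] is ours and not part of the original development. It shows the definition at work on the one-element type [ unit ]: decidable equality is established by case analysis on both arguments, exactly as in the proof of [ isdeceqbool ] below. *)
Lemma isdecequnit_example : isdeceq unit .
Proof. unfold isdeceq . intros x x' . destruct x . destruct x' . apply ( ii1 ( idpath tt ) ) . Defined.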
Lemma isdeceqweqf { X Y : UU } ( w : weq X Y ) ( is : isdeceq X ) : isdeceq Y .
Proof. intros . intros y y' . set ( w' := weqonpaths ( invweq w ) y y' ) . set ( int := is ( ( invweq w ) y ) ( ( invweq w ) y' ) ) . destruct int as [ i | ni ] . apply ( ii1 ( ( invweq w' ) i ) ) . apply ( ii2 ( ( negf w' ) ni ) ) . Defined .
Lemma isdeceqweqb { X Y : UU } ( w : weq X Y ) ( is : isdeceq Y ) : isdeceq X .
Proof . intros . apply ( isdeceqweqf ( invweq w ) is ) . Defined .
Theorem isdeceqinclb { X Y : UU } ( f : X -> Y ) ( is : isdeceq Y ) ( is' : isincl f ) : isdeceq X .
Proof. intros . intros x x' . set ( w := weqonpathsincl f is' x x' ) . set ( int := is ( f x ) ( f x' ) ) . destruct int as [ i | ni ] . apply ( ii1 ( ( invweq w ) i ) ) . apply ( ii2 ( ( negf w ) ni ) ) . Defined .
Lemma isdeceqifisaprop ( X : UU ) : isaprop X -> isdeceq X .
Proof. intros X is . intros x x' . apply ( ii1 ( proofirrelevance _ is x x' ) ) . Defined .
Theorem isasetifdeceq (X:UU): isdeceq X -> isaset X.
Proof. intro X . intro is. intros x x' . apply ( isaproppathsfromisolated X x ( is x ) ) . Defined .
Definition booleq { X : UU } ( is : isdeceq X ) ( x x' : X ) : bool .
Proof . intros . destruct ( is x x' ) . apply true . apply false . Defined .
Lemma eqfromdnegeq (X:UU)(is: isdeceq X)(x x':X): dneg ( paths x x' ) -> paths x x'.
Proof. intros X is x x' X0. destruct ( is x x' ) . assumption . destruct ( X0 e ) . Defined .
(** *** [ bool ] is a [ deceq ] type and a set *)
Theorem isdeceqbool: isdeceq bool.
Proof. unfold isdeceq. intros x' x . destruct x. destruct x'. apply (ii1 (idpath true)). apply (ii2 nopathsfalsetotrue). destruct x'. apply (ii2 nopathstruetofalse). apply (ii1 (idpath false)). Defined.
Theorem isasetbool: isaset bool.
Proof. apply (isasetifdeceq _ isdeceqbool). Defined.
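(** Illustrative sketch (our addition; the lemma name is not from the original file): because the proofs above end with [ Defined ] and are therefore transparent, [ booleq ] applied to [ isdeceqbool ] computes, e.g. to [ false ] on the arguments [ true ] and [ false ]. *)
Lemma booleqbool_example : paths ( booleq isdeceqbool true false ) false .
Proof. apply idpath . Defined.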
(** *** Splitting of [ X ] into a coproduct defined by a function [ X -> bool ] *)
Definition subsetsplit { X : UU } ( f : X -> bool ) ( x : X ) : coprod ( hfiber f true ) ( hfiber f false ) .
Proof . intros . destruct ( boolchoice ( f x ) ) as [ a | b ] . apply ( ii1 ( hfiberpair f x a ) ) . apply ( ii2 ( hfiberpair f x b ) ) . Defined .
Definition subsetsplitinv { X : UU } ( f : X -> bool ) ( ab : coprod (hfiber f true) (hfiber f false) ) : X := match ab with ii1 xt => pr1 xt | ii2 xf => pr1 xf end.
Theorem weqsubsetsplit { X : UU } ( f : X -> bool ) : weq X (coprod ( hfiber f true) ( hfiber f false) ) .
Proof . intros . set ( ff := subsetsplit f ) . set ( gg := subsetsplitinv f ) . split with ff .
assert ( egf : forall a : _ , paths ( gg ( ff a ) ) a ) . intros . unfold ff . unfold subsetsplit . destruct ( boolchoice ( f a ) ) as [ et | ef ] . simpl . apply idpath . simpl . apply idpath .
assert ( efg : forall a : _ , paths ( ff ( gg a ) ) a ) . intros . destruct a as [ et | ef ] . destruct et as [ x et' ] . simpl . unfold ff . unfold subsetsplit . destruct ( boolchoice ( f x ) ) as [ e1 | e2 ] . apply ( maponpaths ( @ii1 _ _ ) ) . apply ( maponpaths ( hfiberpair f x ) ) . apply uip . apply isasetbool . destruct ( nopathstruetofalse ( pathscomp0 ( pathsinv0 et' ) e2 ) ) . destruct ef as [ x et' ] . simpl . unfold ff . unfold subsetsplit . destruct ( boolchoice ( f x ) ) as [ e1 | e2 ] . destruct ( nopathsfalsetotrue ( pathscomp0 ( pathsinv0 et' ) e1 ) ) . apply ( maponpaths ( @ii2 _ _ ) ) . apply ( maponpaths ( hfiberpair f x ) ) . apply uip . apply isasetbool .
apply ( gradth _ _ egf efg ) . Defined .
(** ** Semi-boolean hfiber of functions over isolated points *)
Definition eqbx ( X : UU ) ( x : X ) ( is : isisolated X x ) : X -> bool .
Proof. intros X x is x' . destruct ( is x' ) . apply true . apply false . Defined .
Lemma iscontrhfibereqbx ( X : UU ) ( x : X ) ( is : isisolated X x ) : iscontr ( hfiber ( eqbx X x is ) true ) .
Proof. intros . assert ( b : paths ( eqbx X x is x ) true ) . unfold eqbx . destruct ( is x ) . apply idpath . destruct ( e ( idpath _ ) ) . set ( i := hfiberpair ( eqbx X x is ) x b ) . split with i .
unfold eqbx . destruct ( boolchoice ( eqbx X x is x ) ) as [ b' | nb' ] . intro t . destruct t as [ x' e ] . assert ( e' : paths x' x ) . destruct ( is x' ) as [ ee | nee ] . apply ( pathsinv0 ee ) . destruct ( nopathsfalsetotrue e ) . apply ( invmaponpathsincl _ ( isinclfromhfiber ( eqbx X x is ) isasetbool true ) ( hfiberpair _ x' e ) i e' ) . destruct ( nopathstruetofalse ( pathscomp0 ( pathsinv0 b ) nb' ) ) . Defined .
Definition bhfiber { X Y : UU } ( f : X -> Y ) ( y : Y ) ( is : isisolated Y y ) := hfiber ( fun x : X => eqbx Y y is ( f x ) ) true .
Lemma weqhfibertobhfiber { X Y : UU } ( f : X -> Y ) ( y : Y ) ( is : isisolated Y y ) : weq ( hfiber f y ) ( bhfiber f y is ) .
Proof . intros . set ( g := eqbx Y y is ) . set ( ye := pr1 ( iscontrhfibereqbx Y y is ) ) . split with ( hfibersftogf f g true ye ) . apply ( isofhlevelfffromZ 0 _ _ ye ( fibseqhf f g true ye ) ) . apply ( isapropifcontr ) . apply ( iscontrhfibereqbx _ y is ) . Defined .
(** *** h-fibers of [ ii1 ] and [ ii2 ] *)
Theorem isinclii1 (X Y:UU): isincl (@ii1 X Y).
Proof. intros. set (f:= @ii1 X Y). set (g:= coprodtoboolsum X Y). set (gf:= fun x:X => (g (f x))). set (gf':= fun x:X => tpair (boolsumfun X Y) true x).
assert (h: forall x:X , paths (gf' x) (gf x)). intro. apply idpath.
assert (is1: isofhlevelf (S O) gf'). apply (isofhlevelfsnfib O (boolsumfun X Y) true (isasetbool true true)).
assert (is2: isofhlevelf (S O) gf). apply (isofhlevelfhomot (S O) gf' gf h is1).
apply (isofhlevelff (S O) _ _ is2 (isofhlevelfweq (S (S O) ) (weqcoprodtoboolsum X Y))). Defined.
Corollary iscontrhfiberii1x ( X Y : UU ) ( x : X ) : iscontr ( hfiber ( @ii1 X Y ) ( ii1 x ) ) .
Proof. intros . set ( xe1 := hfiberpair ( @ii1 _ _ ) x ( idpath ( @ii1 X Y x ) ) ) . apply ( iscontraprop1 ( isinclii1 X Y ( ii1 x ) ) xe1 ) . Defined .
Corollary neghfiberii1y ( X Y : UU ) ( y : Y ) : neg ( hfiber ( @ii1 X Y ) ( ii2 y ) ) .
Proof. intros . intro xe . destruct xe as [ x e ] . apply ( negpathsii1ii2 _ _ e ) . Defined.
Theorem isinclii2 (X Y:UU): isincl (@ii2 X Y).
Proof. intros. set (f:= @ii2 X Y). set (g:= coprodtoboolsum X Y). set (gf:= fun y:Y => (g (f y))). set (gf':= fun y:Y => tpair (boolsumfun X Y) false y).
assert (h: forall y:Y , paths (gf' y) (gf y)). intro. apply idpath.
assert (is1: isofhlevelf (S O) gf'). apply (isofhlevelfsnfib O (boolsumfun X Y) false (isasetbool false false)).
assert (is2: isofhlevelf (S O) gf). apply (isofhlevelfhomot (S O) gf' gf h is1).
apply (isofhlevelff (S O) _ _ is2 (isofhlevelfweq (S (S O)) ( weqcoprodtoboolsum X Y))). Defined.
Corollary iscontrhfiberii2y ( X Y : UU ) ( y : Y ) : iscontr ( hfiber ( @ii2 X Y ) ( ii2 y ) ) .
Proof. intros . set ( xe1 := hfiberpair ( @ii2 _ _ ) y ( idpath ( @ii2 X Y y ) ) ) . apply ( iscontraprop1 ( isinclii2 X Y ( ii2 y ) ) xe1 ) . Defined .
Corollary neghfiberii2x ( X Y : UU ) ( x : X ) : neg ( hfiber ( @ii2 X Y ) ( ii1 x ) ) .
Proof. intros . intro ye . destruct ye as [ y e ] . apply ( negpathsii2ii1 _ _ e ) . Defined.
Lemma negintersectii1ii2 { X Y : UU } (z: coprod X Y): hfiber (@ii1 X Y) z -> hfiber (@ii2 _ _) z -> empty.
Proof. intros X Y z X0 X1. destruct X0 as [ t x ]. destruct X1 as [ t0 x0 ].
set (e:= pathscomp0 x (pathsinv0 x0)). apply (negpathsii1ii2 _ _ e). Defined.
(** *** [ ii1 ] and [ ii2 ] map isolated points to isolated points *)
Lemma isolatedtoisolatedii1 (X Y:UU)(x:X)(is:isisolated _ x): isisolated ( coprod X Y ) (ii1 x).
Proof. intros. unfold isisolated . intro x' . destruct x' as [ x0 | y ] . destruct (is x0) as [ p | e ] . apply (ii1 (maponpaths (@ii1 X Y) p)). apply (ii2 (negf (invmaponpathsincl (@ii1 X Y) (isinclii1 X Y) _ _ ) e)). apply (ii2 (negpathsii1ii2 x y)). Defined.
Lemma isolatedtoisolatedii2 (X Y:UU)(y:Y)(is:isisolated _ y): isisolated ( coprod X Y ) (ii2 y).
Proof. intros. intro x' . destruct x' as [ x | y0 ] . apply (ii2 (negpathsii2ii1 x y)). destruct (is y0) as [ p | e ] . apply (ii1 (maponpaths (@ii2 X Y) p)). apply (ii2 (negf (invmaponpathsincl (@ii2 X Y) (isinclii2 X Y) _ _ ) e)). Defined.
(** *** h-fibers of [ coprodf ] of two functions *)
Theorem weqhfibercoprodf1 { X Y X' Y' : UU } (f: X -> X')(g:Y -> Y')(x':X'): weq (hfiber f x') (hfiber (coprodf f g) (ii1 x')).
Proof. intros. set ( ix := @ii1 X Y ) . set ( ix' := @ii1 X' Y' ) . set ( fpg := coprodf f g ) . set ( fpgix := fun x : X => ( fpg ( ix x ) ) ) .
assert ( w1 : weq ( hfiber f x' ) ( hfiber fpgix ( ix' x' ) ) ) . apply ( samehfibers f ix' ( isinclii1 _ _ ) x' ) .
assert ( w2 : weq ( hfiber fpgix ( ix' x' ) ) ( hfiber fpg ( ix' x' ) ) ) . split with (hfibersgftog ix fpg ( ix' x' ) ) . unfold isweq. intro y .
set (u:= invezmaphf ix fpg ( ix' x' ) y).
assert (is: isweq u). apply isweqinvezmaphf.
apply (iscontrweqb ( weqpair u is ) ) . destruct y as [ xy e ] . destruct xy as [ x0 | y0 ] . simpl . apply iscontrhfiberofincl . apply ( isinclii1 X Y ) . apply ( fromempty ( ( negpathsii2ii1 x' ( g y0 ) ) e ) ) .
apply ( weqcomp w1 w2 ) .
Defined.
Theorem weqhfibercoprodf2 { X Y X' Y' : UU } (f: X -> X')(g:Y -> Y')(y':Y'): weq (hfiber g y') (hfiber (coprodf f g) (ii2 y')).
Proof. intros. set ( iy := @ii2 X Y ) . set ( iy' := @ii2 X' Y' ) . set ( fpg := coprodf f g ) . set ( fpgiy := fun y : Y => ( fpg ( iy y ) ) ) .
assert ( w1 : weq ( hfiber g y' ) ( hfiber fpgiy ( iy' y' ) ) ) . apply ( samehfibers g iy' ( isinclii2 _ _ ) y' ) .
assert ( w2 : weq ( hfiber fpgiy ( iy' y' ) ) ( hfiber fpg ( iy' y' ) ) ) . split with (hfibersgftog iy fpg ( iy' y' ) ) . unfold isweq. intro y .
set (u:= invezmaphf iy fpg ( iy' y' ) y).
assert (is: isweq u). apply isweqinvezmaphf.
apply (iscontrweqb ( weqpair u is ) ) . destruct y as [ xy e ] . destruct xy as [ x0 | y0 ] . simpl . apply ( fromempty ( ( negpathsii1ii2 ( f x0 ) y' ) e ) ) . simpl. apply iscontrhfiberofincl . apply ( isinclii2 X Y ) .
apply ( weqcomp w1 w2 ) .
Defined.
(** *** Theorem saying that coproduct of two functions of h-level n is of h-level n *)
Theorem isofhlevelfcoprodf (n:nat) { X Y Z T : UU } (f : X -> Z ) ( g : Y -> T )( is1 : isofhlevelf n f ) ( is2 : isofhlevelf n g ) : isofhlevelf n (coprodf f g).
Proof. intros. unfold isofhlevelf . intro y . destruct y as [ z | t ] . apply (isofhlevelweqf n (weqhfibercoprodf1 f g z) ). apply ( is1 z ) . apply (isofhlevelweqf n (weqhfibercoprodf2 f g t )). apply ( is2 t ) . Defined.
(** *** Theorems about h-levels of coproducts and their component types *)
Theorem isofhlevelsnsummand1 ( n : nat ) ( X Y : UU ) : isofhlevel ( S n ) ( coprod X Y ) -> isofhlevel ( S n ) X .
Proof. intros n X Y is . apply ( isofhlevelXfromfY ( S n ) ( @ii1 X Y ) ( isofhlevelfsnincl n _ ( isinclii1 _ _ ) ) is ) . Defined.
Theorem isofhlevelsnsummand2 ( n : nat ) ( X Y : UU ) : isofhlevel ( S n ) ( coprod X Y ) -> isofhlevel ( S n ) Y .
Proof. intros n X Y is . apply ( isofhlevelXfromfY ( S n ) ( @ii2 X Y ) ( isofhlevelfsnincl n _ ( isinclii2 _ _ ) ) is ) . Defined.
Theorem isofhlevelssncoprod ( n : nat ) ( X Y : UU ) ( isx : isofhlevel ( S ( S n ) ) X ) ( isy : isofhlevel ( S ( S n ) ) Y ) : isofhlevel ( S ( S n ) ) ( coprod X Y ) .
Proof. intros . apply isofhlevelfromfun . set ( f := coprodf ( fun x : X => tt ) ( fun y : Y => tt ) ) . assert ( is1 : isofhlevelf ( S ( S n ) ) f ) . apply ( isofhlevelfcoprodf ( S ( S n ) ) _ _ ( isofhleveltofun _ X isx ) ( isofhleveltofun _ Y isy ) ) . assert ( is2 : isofhlevel ( S ( S n ) ) ( coprod unit unit ) ) . apply ( isofhlevelweqb ( S ( S n ) ) boolascoprod ( isofhlevelssnset n _ ( isasetbool ) ) ) . apply ( isofhlevelfgf ( S ( S n ) ) _ _ is1 ( isofhleveltofun _ _ is2 ) ) . Defined .
Lemma isasetcoprod ( X Y : UU ) ( isx : isaset X ) ( isy : isaset Y ) : isaset ( coprod X Y ) .
Proof. intros . apply ( isofhlevelssncoprod 0 _ _ isx isy ) . Defined .
(** *** h-fibers of the sum of two functions [ sumofmaps f g ] *)
Lemma coprodofhfiberstohfiber { X Y Z : UU } ( f : X -> Z ) ( g : Y -> Z ) ( z : Z ) : coprod ( hfiber f z ) ( hfiber g z ) -> hfiber ( sumofmaps f g ) z .
Proof. intros X Y Z f g z hfg . destruct hfg as [ hf | hg ] . destruct hf as [ x fe ] . split with ( ii1 x ) . simpl . assumption . destruct hg as [ y ge ] . split with ( ii2 y ) . simpl . assumption .
Defined.
Lemma hfibertocoprodofhfibers { X Y Z : UU } ( f : X -> Z ) ( g : Y -> Z ) ( z : Z ) : hfiber ( sumofmaps f g ) z -> coprod ( hfiber f z ) ( hfiber g z ) .
Proof. intros X Y Z f g z hsfg . destruct hsfg as [ xy e ] . destruct xy as [ x | y ] . simpl in e . apply ( ii1 ( hfiberpair _ x e ) ) . simpl in e . apply ( ii2 ( hfiberpair _ y e ) ) . Defined .
Theorem weqhfibersofsumofmaps { X Y Z : UU } ( f : X -> Z ) ( g : Y -> Z ) ( z : Z ) : weq ( coprod ( hfiber f z ) ( hfiber g z ) ) ( hfiber ( sumofmaps f g ) z ) .
Proof. intros . set ( ff := coprodofhfiberstohfiber f g z ) . set ( gg := hfibertocoprodofhfibers f g z ) . split with ff .
assert ( effgg : forall hsfg : _ , paths ( ff ( gg hsfg ) ) hsfg ) . intro . destruct hsfg as [ xy e ] . destruct xy as [ x | y ] . simpl . apply idpath . simpl . apply idpath .
assert ( eggff : forall hfg : _ , paths ( gg ( ff hfg ) ) hfg ) . intro . destruct hfg as [ hf | hg ] . destruct hf as [ x fe ] . simpl . apply idpath . destruct hg as [ y ge ] . simpl . apply idpath .
apply ( gradth _ _ eggff effgg ) . Defined .
(** *** Theorem saying that the sum of two functions of h-level ( S ( S n ) ) is of h-level ( S ( S n ) ) *)
Theorem isofhlevelfssnsumofmaps ( n : nat ) { X Y Z : UU } ( f : X -> Z ) ( g : Y -> Z ) ( isf : isofhlevelf ( S ( S n ) ) f ) ( isg : isofhlevelf ( S ( S n ) ) g ) : isofhlevelf ( S ( S n ) ) ( sumofmaps f g ) .
Proof . intros . intro z . set ( w := weqhfibersofsumofmaps f g z ) . set ( is := isofhlevelssncoprod n _ _ ( isf z ) ( isg z ) ) . apply ( isofhlevelweqf _ w is ) . Defined .
(** *** Theorem saying that the sum of two functions of h-level n with non-intersecting images is of h-level n *)
Lemma noil1 { X Y Z : UU } ( f : X -> Z ) ( g : Y -> Z ) ( noi : forall ( x : X ) ( y : Y ) , neg ( paths ( f x ) ( g y ) ) ) ( z : Z ) : hfiber f z -> hfiber g z -> empty .
Proof. intros X Y Z f g noi z hfz hgz . destruct hfz as [ x fe ] . destruct hgz as [ y ge ] . apply ( noi x y ( pathscomp0 fe ( pathsinv0 ge ) ) ) . Defined .
Lemma weqhfibernoi1 { X Y Z : UU } ( f : X -> Z ) ( g : Y -> Z ) ( noi : forall ( x : X ) ( y : Y ) , neg ( paths ( f x ) ( g y ) ) ) ( z : Z ) ( xe : hfiber f z ) : weq ( hfiber ( sumofmaps f g ) z ) ( hfiber f z ) .
Proof. intros . set ( w1 := invweq ( weqhfibersofsumofmaps f g z ) ) . assert ( a : neg ( hfiber g z ) ) . intro ye . apply ( noil1 f g noi z xe ye ) . set ( w2 := invweq ( weqii1withneg ( hfiber f z ) a ) ) . apply ( weqcomp w1 w2 ) . Defined .
Lemma weqhfibernoi2 { X Y Z : UU } ( f : X -> Z ) ( g : Y -> Z ) ( noi : forall ( x : X ) ( y : Y ) , neg ( paths ( f x ) ( g y ) ) ) ( z : Z ) ( ye : hfiber g z ) : weq ( hfiber ( sumofmaps f g ) z ) ( hfiber g z ) .
Proof. intros . set ( w1 := invweq ( weqhfibersofsumofmaps f g z ) ) . assert ( a : neg ( hfiber f z ) ) . intro xe . apply ( noil1 f g noi z xe ye ) . set ( w2 := invweq ( weqii2withneg ( hfiber g z ) a ) ) . apply ( weqcomp w1 w2 ) . Defined .
Theorem isofhlevelfsumofmapsnoi ( n : nat ) { X Y Z : UU } ( f : X -> Z ) ( g : Y -> Z ) ( isf : isofhlevelf n f ) ( isg : isofhlevelf n g ) ( noi : forall ( x : X ) ( y : Y ) , neg ( paths ( f x ) ( g y ) ) ) : isofhlevelf n ( sumofmaps f g ) .
Proof. intros . intro z . destruct n as [ | n ] . set ( zinx := invweq ( weqpair _ isf ) z ) . set ( ziny := invweq ( weqpair _ isg ) z ) . assert ( ex : paths ( f zinx ) z ) . apply ( homotweqinvweq ( weqpair _ isf ) z ) . assert ( ey : paths ( g ziny ) z ) . apply ( homotweqinvweq ( weqpair _ isg ) z ) . destruct ( ( noi zinx ziny ) ( pathscomp0 ex ( pathsinv0 ey ) ) ) .
apply isofhlevelsn . intro hfgz . destruct ( ( invweq ( weqhfibersofsumofmaps f g z ) hfgz ) ) as [ xe | ye ] . apply ( isofhlevelweqb _ ( weqhfibernoi1 f g noi z xe ) ( isf z ) ) . apply ( isofhlevelweqb _ ( weqhfibernoi2 f g noi z ye ) ( isg z ) ) . Defined .
(** *** Coproducts and complements *)
Definition tocompltoii1x (X Y:UU)(x:X): coprod (compl X x) Y -> compl (coprod X Y) (ii1 x).
Proof. intros X Y x X0. destruct X0 as [ c | y ] . split with (ii1 (pr1 c)).
assert (e: neg(paths x (pr1 c) )). apply (pr2 c). apply (negf (invmaponpathsincl ( @ii1 _ _ ) (isinclii1 X Y) _ _) e).
split with (ii2 y). apply (negf (pathsinv0 ) (negpathsii2ii1 x y)). Defined.
Definition fromcompltoii1x (X Y:UU)(x:X): compl (coprod X Y) (ii1 x) -> coprod (compl X x) Y.
Proof. intros X Y x X0. destruct X0 as [ t x0 ]. destruct t as [ x1 | y ].
assert (ne: neg (paths x x1 )). apply (negf (maponpaths ( @ii1 _ _ ) ) x0). apply (ii1 (complpair _ _ x1 ne )). apply (ii2 y). Defined.
Theorem isweqtocompltoii1x (X Y:UU)(x:X): isweq (tocompltoii1x X Y x).
Proof. intros. set (f:= tocompltoii1x X Y x). set (g:= fromcompltoii1x X Y x).
assert (egf:forall nexy:_ , paths (g (f nexy)) nexy). intro. destruct nexy as [ c | y ]. destruct c as [ t x0 ]. simpl.
assert (e: paths (negf (maponpaths (@ii1 X Y)) (negf (invmaponpathsincl (@ii1 X Y) (isinclii1 X Y) x t) x0)) x0). apply (isapropneg (paths x t) ).
apply (maponpaths (fun ee: neg (paths x t ) => ii1 (complpair X x t ee)) e). apply idpath.
assert (efg: forall neii1x:_, paths (f (g neii1x)) neii1x). intro. destruct neii1x as [ t x0 ]. destruct t as [ x1 | y ]. simpl.
assert (e: paths (negf (invmaponpathsincl (@ii1 X Y) (isinclii1 X Y) x x1 ) (negf (maponpaths (@ii1 X Y) ) x0)) x0). apply (isapropneg (paths _ _ ) ).
apply (maponpaths (fun ee: (neg (paths (ii1 x) (ii1 x1))) => (complpair _ _ (ii1 x1) ee)) e). simpl.
assert (e: paths (negf pathsinv0 (negpathsii2ii1 x y)) x0). apply (isapropneg (paths _ _ ) ).
apply (maponpaths (fun ee: (neg (paths (ii1 x) (ii2 y) )) => (complpair _ _ (ii2 y) ee)) e).
apply (gradth f g egf efg). Defined.
Definition tocompltoii2y (X Y:UU)(y:Y): coprod X (compl Y y) -> compl (coprod X Y) (ii2 y).
Proof. intros X Y y X0. destruct X0 as [ x | c ]. split with (ii1 x). apply (negpathsii2ii1 x y ).
split with (ii2 (pr1 c)). assert (e: neg(paths y (pr1 c) )). apply (pr2 c). apply (negf (invmaponpathsincl ( @ii2 _ _ ) (isinclii2 X Y) _ _ ) e).
Defined.
Definition fromcompltoii2y (X Y:UU)(y:Y): compl (coprod X Y) (ii2 y) -> coprod X (compl Y y).
Proof. intros X Y y X0. destruct X0 as [ t x ]. destruct t as [ x0 | y0 ]. apply (ii1 x0).
assert (ne: neg (paths y y0 )). apply (negf (maponpaths ( @ii2 _ _ ) ) x). apply (ii2 (complpair _ _ y0 ne)). Defined.
Theorem isweqtocompltoii2y (X Y:UU)(y:Y): isweq (tocompltoii2y X Y y).
Proof. intros. set (f:= tocompltoii2y X Y y). set (g:= fromcompltoii2y X Y y).
assert (egf:forall nexy:_ , paths (g (f nexy)) nexy). intro. destruct nexy as [ x | c ].
apply idpath. destruct c as [ t x ]. simpl.
assert (e: paths (negf (maponpaths (@ii2 X Y) ) (negf (invmaponpathsincl (@ii2 X Y) (isinclii2 X Y) y t) x)) x). apply (isapropneg (paths y t ) ).
apply (maponpaths (fun ee: neg ( paths y t ) => ii2 (complpair _ y t ee)) e).
assert (efg: forall neii2x:_, paths (f (g neii2x)) neii2x). intro. destruct neii2x as [ t x ]. destruct t as [ x0 | y0 ]. simpl.
assert (e: paths (negpathsii2ii1 x0 y) x). apply (isapropneg (paths _ _ ) ).
apply (maponpaths (fun ee: (neg (paths (ii2 y) (ii1 x0) )) => (complpair _ _ (ii1 x0) ee)) e). simpl.
assert (e: paths (negf (invmaponpathsincl _ (isinclii2 X Y) y y0 ) (negf (maponpaths (@ii2 X Y) ) x)) x). apply (isapropneg (paths _ _ ) ).
apply (maponpaths (fun ee: (neg (paths (ii2 y) (ii2 y0) )) => (complpair _ _ (ii2 y0) ee)) e).
apply (gradth f g egf efg). Defined.
Definition tocompltodisjoint (X:UU): X -> compl (coprod X unit) (ii2 tt) := fun x:_ => complpair _ _ (ii1 x) (negpathsii2ii1 x tt).
Definition fromcompltodisjoint (X:UU): compl (coprod X unit) (ii2 tt) -> X.
Proof. intros X X0. destruct X0 as [ t x ]. destruct t as [ x0 | u ] . assumption. destruct u. apply (fromempty (x (idpath (ii2 tt)))). Defined.
Lemma isweqtocompltodisjoint (X:UU): isweq (tocompltodisjoint X).
Proof. intros. set (ff:= tocompltodisjoint X). set (gg:= fromcompltodisjoint X).
assert (egf: forall x:X, paths (gg (ff x)) x). intro. apply idpath.
assert (efg: forall xx:_, paths (ff (gg xx)) xx). intro. destruct xx as [ t x ]. destruct t as [ x0 | u ] . simpl. unfold ff. unfold tocompltodisjoint. simpl. assert (ee: paths (negpathsii2ii1 x0 tt) x). apply (proofirrelevance _ (isapropneg _) ). destruct ee. apply idpath. destruct u. simpl. apply (fromempty (x (idpath _))). apply (gradth ff gg egf efg). Defined.
Definition weqtocompltodisjoint ( X : UU ) := weqpair _ ( isweqtocompltodisjoint X ) .
Corollary isweqfromcompltodisjoint (X:UU): isweq (fromcompltodisjoint X).
Proof. intros. apply (isweqinvmap ( weqtocompltodisjoint X ) ). Defined.
(** ** Decidable propositions and decidable inclusions *)
(** *** Decidable propositions [ isdecprop ] *)
Definition isdecprop ( X : UU ) := iscontr ( coprod X ( neg X ) ) .
Lemma isdecproptoisaprop ( X : UU ) ( is : isdecprop X ) : isaprop X .
Proof. intros X is . apply ( isofhlevelsnsummand1 0 _ _ ( isapropifcontr is ) ) . Defined .
Coercion isdecproptoisaprop : isdecprop >-> isaprop .
Lemma isdecpropif ( X : UU ) : isaprop X -> ( coprod X ( neg X ) ) -> isdecprop X .
Proof. intros X is a . assert ( is1 : isaprop ( coprod X ( neg X ) ) ) . apply isapropdec . assumption . apply ( iscontraprop1 is1 a ) . Defined.
Lemma isdecpropfromiscontr { X : UU } ( is : iscontr X ) : isdecprop X .
Proof. intros . apply ( isdecpropif _ ( is ) ( ii1 ( pr1 is ) ) ) . Defined.
Lemma isdecpropempty : isdecprop empty .
Proof. apply ( isdecpropif _ isapropempty ( ii2 ( fun a : empty => a ) ) ) . Defined.
Lemma isdecpropweqf { X Y : UU } ( w : weq X Y ) ( is : isdecprop X ) : isdecprop Y .
Proof. intros . apply isdecpropif . apply ( isofhlevelweqf 1 w ( isdecproptoisaprop _ is ) ) . destruct ( pr1 is ) as [ x | nx ] . apply ( ii1 ( w x ) ) . apply ( ii2 ( negf ( invweq w ) nx ) ) . Defined .
Lemma isdecpropweqb { X Y : UU } ( w : weq X Y ) ( is : isdecprop Y ) : isdecprop X .
Proof. intros . apply isdecpropif . apply ( isofhlevelweqb 1 w ( isdecproptoisaprop _ is ) ) . destruct ( pr1 is ) as [ y | ny ] . apply ( ii1 ( invweq w y ) ) . apply ( ii2 ( ( negf w ) ny ) ) . Defined .
Lemma isdecproplogeqf { X Y : UU } ( isx : isdecprop X ) ( isy : isaprop Y ) ( lg : X <-> Y ) : isdecprop Y .
Proof . intros. set ( w := weqimplimpl ( pr1 lg ) ( pr2 lg ) isx isy ) . apply ( isdecpropweqf w isx ) . Defined .
Lemma isdecproplogeqb { X Y : UU } ( isx : isaprop X ) ( isy : isdecprop Y ) ( lg : X <-> Y ) : isdecprop X .
Proof . intros. set ( w := weqimplimpl ( pr1 lg ) ( pr2 lg ) isx isy ) . apply ( isdecpropweqb w isy ) . Defined .
Lemma isdecpropfromneg { X : UU } ( ne : neg X ) : isdecprop X .
Proof. intros . apply ( isdecpropweqb ( weqtoempty ne ) isdecpropempty ) . Defined .
Lemma isdecproppaths { X : UU } ( is : isdeceq X ) ( x x' : X ) : isdecprop ( paths x x' ) .
Proof. intros . apply ( isdecpropif _ ( isasetifdeceq _ is x x' ) ( is x x' ) ) . Defined .
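(** Illustrative sketch (our addition; the lemma name is not from the original file): combining [ isdecproppaths ] with [ isdeceqbool ] shows that path types between booleans are decidable propositions. *)
Lemma isdecproppathsbool_example : isdecprop ( paths true false ) .
Proof. apply ( isdecproppaths isdeceqbool true false ) . Defined.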
Lemma isdeceqif { X : UU } ( is : forall x x' : X , isdecprop ( paths x x' ) ) : isdeceq X .
Proof . intros . intros x x' . apply ( pr1 ( is x x' ) ) . Defined .
Lemma isaninv1 (X:UU): isdecprop X -> isaninvprop X.
Proof. intros X is1. unfold isaninvprop. set (is2:= pr1 is1). simpl in is2.
assert (adjevinv: dneg X -> X). intro X0. destruct is2 as [ a | b ]. assumption. destruct (X0 b).
assert (is3: isaprop (dneg X)). apply (isapropneg (X -> empty)). apply (isweqimplimpl (todneg X) adjevinv is1 is3). Defined.
Theorem isdecpropfibseq1 { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( fs : fibseqstr f g z ) : isdecprop X -> isaprop Z -> isdecprop Y .
Proof . intros X Y Z f g z fs isx isz . assert ( isc : iscontr Z ) . apply ( iscontraprop1 isz z ) . assert ( isweq f ) . apply ( isweqfinfibseq f g z fs isc ) . apply ( isdecpropweqf ( weqpair _ X0 ) isx ) . Defined .
Theorem isdecpropfibseq0 { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( z : Z ) ( fs : fibseqstr f g z ) : isdecprop Y -> isdeceq Z -> isdecprop X .
Proof . intros X Y Z f g z fs isy isz . assert ( isg : isofhlevelf 1 g ) . apply ( isofhlevelffromXY 1 g ( isdecproptoisaprop _ isy ) ( isasetifdeceq _ isz ) ) .
assert ( isp : isaprop X ) . apply ( isofhlevelXfromg 1 f g z fs isg ) .
destruct ( pr1 isy ) as [ y | ny ] . apply ( isdecpropfibseq1 _ _ y ( fibseq1 f g z fs y ) ( isdecproppaths isz ( g y ) z ) ( isdecproptoisaprop _ isy ) ) .
apply ( isdecpropif _ isp ( ii2 ( negf f ny ) ) ) . Defined.
Theorem isdecpropdirprod { X Y : UU } ( isx : isdecprop X ) ( isy : isdecprop Y ) : isdecprop ( dirprod X Y ) .
Proof. intros . assert ( isp : isaprop ( dirprod X Y ) ) . apply ( isofhleveldirprod 1 _ _ ( isdecproptoisaprop _ isx ) ( isdecproptoisaprop _ isy ) ) . destruct ( pr1 isx ) as [ x | nx ] . destruct ( pr1 isy ) as [ y | ny ] . apply ( isdecpropif _ isp ( ii1 ( dirprodpair x y ) ) ) . assert ( nxy : neg ( dirprod X Y ) ) . intro xy . destruct xy as [ x0 y0 ] . apply ( ny y0 ) . apply ( isdecpropif _ isp ( ii2 nxy ) ) . assert ( nxy : neg ( dirprod X Y ) ) . intro xy . destruct xy as [ x0 y0 ] . apply ( nx x0 ) . apply ( isdecpropif _ isp ( ii2 nxy ) ) . Defined.
Lemma fromneganddecx { X Y : UU } ( isx : isdecprop X ) ( nf : neg ( dirprod X Y ) ) : coprod ( neg X ) ( neg Y ) .
Proof . intros . destruct ( pr1 isx ) as [ x | nx ] . set ( ny := negf ( fun y : Y => dirprodpair x y ) nf ) . apply ( ii2 ny ) . apply ( ii1 nx ) . Defined .
Lemma fromneganddecy { X Y : UU } ( isy : isdecprop Y ) ( nf : neg ( dirprod X Y ) ) : coprod ( neg X ) ( neg Y ) .
Proof . intros . destruct ( pr1 isy ) as [ y | ny ] . set ( nx := negf ( fun x : X => dirprodpair x y ) nf ) . apply ( ii1 nx ) . apply ( ii2 ny ) . Defined .
(** *** Paths to and from an isolated point form a decidable proposition *)
Lemma isdecproppathsfromisolated ( X : UU ) ( x : X ) ( is : isisolated X x ) ( x' : X ) : isdecprop ( paths x x' ) .
Proof. intros . apply isdecpropif . apply isaproppathsfromisolated . assumption . apply ( is x' ) . Defined .
Lemma isdecproppathstoisolated ( X : UU ) ( x : X ) ( is : isisolated X x ) ( x' : X ) : isdecprop ( paths x' x ) .
Proof . intros . apply ( isdecpropweqf ( weqpathsinv0 x x' ) ( isdecproppathsfromisolated X x is x' ) ) . Defined .
(** *** Decidable inclusions *)
Definition isdecincl {X Y:UU} (f :X -> Y) := forall y:Y, isdecprop ( hfiber f y ).
Lemma isdecincltoisincl { X Y : UU } ( f : X -> Y ) : isdecincl f -> isincl f .
Proof. intros X Y f is . intro y . apply ( isdecproptoisaprop _ ( is y ) ) . Defined.
Coercion isdecincltoisincl : isdecincl >-> isincl .
Lemma isdecinclfromisweq { X Y : UU } ( f : X -> Y ) : isweq f -> isdecincl f .
Proof. intros X Y f iswf . intro y . apply ( isdecpropfromiscontr ( iswf y ) ) . Defined .
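(** Illustrative sketch (our addition; the lemma name is not from the original file): the identity function is a decidable inclusion, as a special case of [ isdecinclfromisweq ] applied to [ idisweq ]. *)
Lemma isdecinclidfun_example ( X : UU ) : isdecincl ( fun x : X => x ) .
Proof. intro X . apply ( isdecinclfromisweq _ ( idisweq X ) ) . Defined.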
Lemma isdecpropfromdecincl { X Y : UU } ( f : X -> Y ) : isdecincl f -> isdecprop Y -> isdecprop X .
Proof. intros X Y f isf isy . destruct ( pr1 isy ) as [ y | n ] . assert ( w : weq ( hfiber f y ) X ) . apply ( weqhfibertocontr f y ( iscontraprop1 ( isdecproptoisaprop _ isy ) y ) ) . apply ( isdecpropweqf w ( isf y ) ) . apply isdecpropif . apply ( isapropinclb _ isf isy ) . apply ( ii2 ( negf f n ) ) . Defined .
Lemma isdecinclii1 (X Y: UU): isdecincl ( @ii1 X Y ) .
Proof. intros. intro y . destruct y as [ x | y ] . apply ( isdecpropif _ ( isinclii1 X Y ( ii1 x ) ) ( ii1 (hfiberpair (@ii1 _ _ ) x (idpath _ )) ) ) .
apply ( isdecpropif _ ( isinclii1 X Y ( ii2 y ) ) ( ii2 ( neghfiberii1y X Y y ) ) ) . Defined.
Lemma isdecinclii2 (X Y: UU): isdecincl ( @ii2 X Y ) .
Proof. intros. intro y . destruct y as [ x | y ] . apply ( isdecpropif _ ( isinclii2 X Y ( ii1 x ) ) ( ii2 ( neghfiberii2x X Y x ) ) ) .
apply ( isdecpropif _ ( isinclii2 X Y ( ii2 y ) ) ( ii1 (hfiberpair (@ii2 _ _ ) y (idpath _ )) ) ) . Defined.
Lemma isdecinclpr1 { X : UU } ( P : X -> UU ) ( is : forall x : X , isdecprop ( P x ) ) : isdecincl ( @pr1 _ P ) .
Proof . intros . intro x . assert ( w : weq ( P x ) ( hfiber (@pr1 _ P ) x ) ) . apply ezweqpr1 . apply ( isdecpropweqf w ( is x ) ) . Defined .
Theorem isdecinclhomot { X Y : UU } ( f g : X -> Y ) ( h : forall x : X , paths ( f x ) ( g x ) ) ( is : isdecincl f ) : isdecincl g .
Proof. intros . intro y . apply ( isdecpropweqf ( weqhfibershomot f g h y ) ( is y ) ) . Defined .
Theorem isdecinclcomp { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( isf : isdecincl f ) ( isg : isdecincl g ) : isdecincl ( fun x : X => g ( f x ) ) .
Proof. intros. intro z . set ( gf := fun x : X => g ( f x ) ) . assert ( wy : forall ye : hfiber g z , weq ( hfiber f ( pr1 ye ) ) ( hfiber ( hfibersgftog f g z ) ye ) ) . apply ezweqhf .
assert ( ww : forall y : Y , weq ( hfiber f y ) ( hfiber gf ( g y ) ) ) . intro . apply ( samehfibers f g ) . apply ( isdecincltoisincl _ isg ) .
destruct ( pr1 ( isg z ) ) as [ ye | nye ] . destruct ye as [ y e ] . destruct e . apply ( isdecpropweqf ( ww y ) ( isf y ) ) . assert ( wz : weq ( hfiber gf z ) ( hfiber g z ) ) . split with ( hfibersgftog f g z ) . intro ye . destruct ( nye ye ) . apply ( isdecpropweqb wz ( isg z ) ) . Defined .
(** The conditions of the following theorem can be weakened by assuming only that the h-fibers of g satisfy [ isdeceq ] i.e. are "sets with decidable equality". *)
Theorem isdecinclf { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( isg : isincl g ) ( isgf : isdecincl ( fun x : X => g ( f x ) ) ) : isdecincl f .
Proof. intros . intro y . set ( gf := fun x : _ => g ( f x ) ) . assert ( ww : weq ( hfiber f y ) ( hfiber gf ( g y ) ) ) . apply ( samehfibers f g ) . assumption . apply ( isdecpropweqb ww ( isgf ( g y ) ) ) . Defined .
(** *)
Theorem isdecinclg { X Y Z : UU } ( f : X -> Y ) ( g : Y -> Z ) ( isf : isweq f ) ( isgf : isdecincl ( fun x : X => g ( f x ) ) ) : isdecincl g .
Proof. intros . intro z . set ( gf := fun x : X => g ( f x ) ) . assert ( w : weq ( hfiber gf z ) ( hfiber g z ) ) . split with ( hfibersgftog f g z ) . intro ye . assert ( ww : weq ( hfiber f ( pr1 ye ) ) ( hfiber ( hfibersgftog f g z ) ye ) ) . apply ezweqhf . apply ( iscontrweqf ww ( isf ( pr1 ye ) ) ) . apply ( isdecpropweqf w ( isgf z ) ) . Defined .
(** *** Decidable inclusions and isolated points *)
Theorem isisolateddecinclf { X Y : UU } ( f : X -> Y ) ( x : X ) : isdecincl f -> isisolated X x -> isisolated Y ( f x ) .
Proof . intros X Y f x isf isx . assert ( is' : forall y : Y , isdecincl ( d1g f y x ) ) . intro y . intro xe . set ( w := ezweq2g f x xe ) . apply ( isdecpropweqf w ( isdecproppathstoisolated X x isx _ ) ) . assert ( is'' : forall y : Y , isdecprop ( paths ( f x ) y ) ) . intro . apply ( isdecpropfromdecincl _ ( is' y ) ( isf y ) ) . intro y' . apply ( pr1 ( is'' y' ) ) . Defined .
(** *** Decidable inclusions and coprojections *)
Definition negimage { X Y : UU } ( f : X -> Y ) := total2 ( fun y : Y => neg ( hfiber f y ) ) .
Definition negimagepair { X Y : UU } ( f : X -> Y ) := tpair ( fun y : Y => neg ( hfiber f y ) ) .
Lemma isinclfromcoprodwithnegimage { X Y : UU } ( f : X -> Y ) ( is : isincl f ) : isincl ( sumofmaps f ( @pr1 _ ( fun y : Y => neg ( hfiber f y ) ) ) ) .
Proof . intros . assert ( noi : forall ( x : X ) ( nx : negimage f ) , neg ( paths ( f x ) ( pr1 nx ) ) ) . intros x nx e . destruct nx as [ y nhf ] . simpl in e . apply ( nhf ( hfiberpair _ x e ) ) . assert ( is' : isincl ( @pr1 _ ( fun y : Y => neg ( hfiber f y ) ) ) ) . apply isinclpr1 . intro y . apply isapropneg . apply ( isofhlevelfsumofmapsnoi 1 f _ is is' noi ) . Defined .
Definition iscoproj { X Y : UU } ( f : X -> Y ) := isweq ( sumofmaps f ( @pr1 _ ( fun y : Y => neg ( hfiber f y ) ) ) ) .
Definition weqcoproj { X Y : UU } ( f : X -> Y ) ( is : iscoproj f ) : weq ( coprod X ( negimage f ) ) Y := weqpair _ is .
Theorem iscoprojfromisdecincl { X Y : UU } ( f : X -> Y ) ( is : isdecincl f ) : iscoproj f .
Proof. intros . set ( p := sumofmaps f ( @pr1 _ ( fun y : Y => neg ( hfiber f y ) ) ) ) . assert ( is' : isincl p ) . apply isinclfromcoprodwithnegimage . apply ( isdecincltoisincl _ is ) . unfold iscoproj . intro y . destruct ( pr1 ( is y ) ) as [ h | nh ] . destruct h as [ x e ] . destruct e . change ( f x ) with ( p ( ii1 x ) ) . apply iscontrhfiberofincl . assumption . change y with ( p ( ii2 ( negimagepair _ y nh ) ) ) . apply iscontrhfiberofincl . assumption . Defined .
Theorem isdecinclfromiscoproj { X Y : UU } ( f : X -> Y ) ( is : iscoproj f ) : isdecincl f .
Proof . intros . set ( g := ( sumofmaps f ( @pr1 _ ( fun y : Y => neg ( hfiber f y ) ) ) ) ) . set ( f' := fun x : X => g ( ii1 x ) ) . assert ( is' : isdecincl f' ) . apply ( isdecinclcomp _ _ ( isdecinclii1 _ _ ) ( isdecinclfromisweq _ is ) ) . assumption . Defined .
(** ** Results using the full form of the functional extensionality axioms.
Summary: We consider two axioms which address functional extensionality. The first one is etacorrection, which compensates for the absence of eta-reduction in Coq 8.3. Eta-reduction is expected to be included as a basic property of the language in Coq 8.4, which will make this axiom and the related lemmas unnecessary. The second axiom [ funcontr ] is functional extensionality for dependent functions, formulated as the condition that the space of sections of a family with contractible fibers is contractible.
Note : some of the results above this point in the code use a very limited form of functional extensionality . See [ funextempty ] .
*)
(** *** Axioms and their basic corollaries *)
(** etacorrection *)
Axiom etacorrection: forall T:UU, forall P:T -> UU, forall f: (forall t:T, P t), paths f (fun t:T => f t).
Lemma isweqetacorrection { T : UU } (P:T -> UU): isweq (fun f: forall t:T, P t => (fun t:T => f t)).
Proof. intros. apply (isweqhomot (fun f: forall t:T, P t => f) (fun f: forall t:T, P t => (fun t:T => f t)) (fun f: forall t:T, P t => etacorrection _ P f) (idisweq _)). Defined.
Definition weqeta { T : UU } (P:T -> UU) := weqpair _ ( isweqetacorrection P ) .
Lemma etacorrectiononpaths { T : UU } (P:T->UU)(s1 s2 :forall t:T, P t) : paths (fun t:T => s1 t) (fun t:T => s2 t)-> paths s1 s2.
Proof. intros T P s1 s2 X. set (ew := weqeta P). apply (invmaponpathsweq ew s1 s2 X). Defined.
Definition etacor { X Y : UU } (f:X -> Y) : paths f (fun x:X => f x) := etacorrection _ (fun T:X => Y) f.
Lemma etacoronpaths { X Y : UU } (f1 f2 : X->Y) : paths (fun x:X => f1 x) (fun x:X => f2 x) -> paths f1 f2.
Proof. intros X Y f1 f2 X0. set (ec:= weqeta (fun x:X => Y) ). apply (invmaponpathsweq ec f1 f2 X0). Defined.
(** Dependent functions and sections up to homotopy I *)
Definition toforallpaths { T : UU } (P:T -> UU) (f g :forall t:T, P t) : (paths f g) -> (forall t:T, paths (f t) (g t)).
Proof. intros T P f g X t. destruct X. apply (idpath (f t)). Defined.
Definition sectohfiber { X : UU } (P:X -> UU): (forall x:X, P x) -> (hfiber (fun f:_ => fun x:_ => pr1 (f x)) (fun x:X => x)) := (fun a : forall x:X, P x => tpair _ (fun x:_ => tpair _ x (a x)) (idpath (fun x:X => x))).
Definition hfibertosec { X : UU } (P:X -> UU): (hfiber (fun f:_ => fun x:_ => pr1 (f x)) (fun x:X => x)) -> (forall x:X, P x):= fun se:_ => fun x:X => match se as se' return P x with tpair _ s e => (transportf P (toforallpaths (fun x:X => X) (fun x:X => pr1 (s x)) (fun x:X => x) e x) (pr2 (s x))) end.
Definition sectohfibertosec { X : UU } (P:X -> UU): forall a: forall x:X, P x, paths (hfibertosec _ (sectohfiber _ a)) a := fun a:_ => (pathsinv0 (etacorrection _ _ a)).
(** *** Deduction of functional extensionality for dependent functions (sections) from functional extensionality of usual functions *)
Axiom funextfunax : forall (X Y:UU)(f g:X->Y), (forall x:X, paths (f x) (g x)) -> (paths f g).
Lemma isweqlcompwithweq { X X' : UU} (w: weq X X') (Y:UU) : isweq (fun a:X'->Y => (fun x:X => a (w x))).
Proof. intros. set (f:= (fun a:X'->Y => (fun x:X => a (w x)))). set (g := fun b:X-> Y => fun x':X' => b ( invweq w x')).
set (egf:= (fun a:X'->Y => funextfunax X' Y (fun x':X' => (g (f a)) x') a (fun x': X' => maponpaths a (homotweqinvweq w x')))).
set (efg:= (fun a:X->Y => funextfunax X Y (fun x:X => (f (g a)) x) a (fun x: X => maponpaths a (homotinvweqweq w x)))).
apply (gradth f g egf efg). Defined.
Lemma isweqrcompwithweq { Y Y':UU } (w: weq Y Y')(X:UU): isweq (fun a:X->Y => (fun x:X => w (a x))).
Proof. intros. set (f:= (fun a:X->Y => (fun x:X => w (a x)))). set (g := fun a':X-> Y' => fun x:X => (invweq w (a' x))).
set (egf:= (fun a:X->Y => funextfunax X Y (fun x:X => (g (f a)) x) a (fun x: X => (homotinvweqweq w (a x))))).
set (efg:= (fun a':X->Y' => funextfunax X Y' (fun x:X => (f (g a')) x) a' (fun x: X => (homotweqinvweq w (a' x))))).
apply (gradth f g egf efg). Defined.
Theorem funcontr { X : UU } (P:X -> UU) : (forall x:X, iscontr (P x)) -> iscontr (forall x:X, P x).
Proof. intros X P X0 . set (T1 := forall x:X, P x). set (T2 := (hfiber (fun f: (X -> total2 P) => fun x: X => pr1 (f x)) (fun x:X => x))). assert (is1:isweq (@pr1 X P)). apply isweqpr1. assumption. set (w1:= weqpair (@pr1 X P) is1).
assert (X1:iscontr T2). apply (isweqrcompwithweq w1 X (fun x:X => x)).
apply ( iscontrretract _ _ (sectohfibertosec P ) X1). Defined.
Corollary funcontrtwice { X : UU } (P: X-> X -> UU)(is: forall (x x':X), iscontr (P x x')): iscontr (forall (x x':X), P x x').
Proof. intros.
assert (is1: forall x:X, iscontr (forall x':X, P x x')). intro. apply (funcontr _ (is x)). apply (funcontr _ is1). Defined.
(** Proof of the fact that the map [ toforallpaths ] from [paths s1 s2] to [forall t:T, paths (s1 t) (s2 t)] is a weak equivalence - a strong form
of functional extensionality for sections of general families. The proof uses only [funcontr], which is an axiom, i.e. its type satisfies [ isaprop ]. *)
Lemma funextweql1 { T : UU } (P:T -> UU)(g: forall t:T, P t): iscontr (total2 (fun f:forall t:T, P t => forall t:T, paths (f t) (g t))).
Proof. intros. set (X:= forall t:T, coconustot _ (g t)). assert (is1: iscontr X). apply (funcontr (fun t:T => coconustot _ (g t)) (fun t:T => iscontrcoconustot _ (g t))). set (Y:= total2 (fun f:forall t:T, P t => forall t:T, paths (f t) (g t))). set (p:= fun z: X => tpair (fun f:forall t:T, P t => forall t:T, paths (f t) (g t)) (fun t:T => pr1 (z t)) (fun t:T => pr2 (z t))). set (s:= fun u:Y => (fun t:T => coconustotpair _ ((pr2 u) t))). set (etap:= fun u: Y => tpair (fun f:forall t:T, P t => forall t:T, paths (f t) (g t)) (fun t:T => ((pr1 u) t)) (pr2 u)).
assert (eps: forall u:Y, paths (p (s u)) (etap u)). intro. destruct u as [ t x ]. unfold p. unfold s. unfold etap. simpl. assert (ex: paths x (fun t0:T => x t0)). apply etacorrection. destruct ex. apply idpath.
assert (eetap: forall u:Y, paths (etap u) u). intro. unfold etap. destruct u as [t x ]. simpl.
set (ff:= fun fe: (total2 (fun f : forall t0 : T, P t0 => forall t0 : T, paths (f t0) (g t0))) => tpair (fun f : forall t0 : T, P t0 => forall t0 : T, paths (f t0) (g t0)) (fun t0:T => (pr1 fe) t0) (pr2 fe)).
assert (isweqff: isweq ff). apply (isweqfpmap ( weqeta P ) (fun f: forall t:T, P t => forall t:T, paths (f t) (g t)) ).
assert (ee: forall fe: (total2 (fun f : forall t0 : T, P t0 => forall t0 : T, paths (f t0) (g t0))), paths (ff (ff fe)) (ff fe)). intro. apply idpath. assert (eee: forall fe: (total2 (fun f : forall t0 : T, P t0 => forall t0 : T, paths (f t0) (g t0))), paths (ff fe) fe). intro. apply (invmaponpathsweq ( weqpair ff isweqff ) _ _ (ee fe)).
apply (eee (tpair _ t x)). assert (eps0: forall u: Y, paths (p (s u)) u). intro. apply (pathscomp0 (eps u) (eetap u)).
apply ( iscontrretract p s eps0). assumption. Defined.
Theorem isweqtoforallpaths { T : UU } (P:T -> UU)( f g: forall t:T, P t) : isweq (toforallpaths P f g).
Proof. intros. set (tmap:= fun ff: total2 (fun f0: forall t:T, P t => paths f0 g) => tpair (fun f0:forall t:T, P t => forall t:T, paths (f0 t) (g t)) (pr1 ff) (toforallpaths P (pr1 ff) g (pr2 ff))). assert (is1: iscontr (total2 (fun f0: forall t:T, P t => paths f0 g))). apply (iscontrcoconustot _ g). assert (is2:iscontr (total2 (fun f0:forall t:T, P t => forall t:T, paths (f0 t) (g t)))). apply funextweql1.
assert (X: isweq tmap). apply (isweqcontrcontr tmap is1 is2). apply (isweqtotaltofib (fun f0: forall t:T, P t => paths f0 g) (fun f0:forall t:T, P t => forall t:T, paths (f0 t) (g t)) (fun f0:forall t:T, P t => (toforallpaths P f0 g)) X f). Defined.
Theorem weqtoforallpaths { T : UU } (P:T -> UU)(f g : forall t:T, P t) : weq (paths f g) (forall t:T, paths (f t) (g t)) .
Proof. intros. split with (toforallpaths P f g). apply isweqtoforallpaths. Defined.
Definition funextsec { T : UU } (P: T-> UU) (s1 s2 : forall t:T, P t) : (forall t:T, paths (s1 t) (s2 t)) -> paths s1 s2 := invmap (weqtoforallpaths _ s1 s2) .
Definition funextfun { X Y:UU } (f g:X->Y) : (forall x:X, paths (f x) (g x)) -> (paths f g):= funextsec (fun x:X => Y) f g.
(** I do not know at the moment whether [funextfun] is equal (homotopic) to [funextfunax]. It is advisable in all cases to use [funextfun] or, equivalently, [funextsec], since it can be produced from [funcontr] and therefore is well defined up to a canonical equivalence. In addition, it is a homotopy inverse of [toforallpaths], which may or may not be true for [funextfunax]. *)
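(** Illustrative sketch (our addition; the lemma name is not from the original file): [ funextfun ] identifies two functions on [ bool ] which are pointwise equal but not definitionally equal without case analysis. *)
Lemma funextfun_example : paths ( fun x : bool => booleq isdeceqbool x x ) ( fun x : bool => true ) .
Proof. apply funextfun . intro x . destruct x . apply idpath . apply idpath . Defined.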
Theorem isweqfunextsec { T : UU } (P:T -> UU)(f g : forall t:T, P t) : isweq (funextsec P f g).
Proof. intros. apply (isweqinvmap ( weqtoforallpaths _ f g ) ). Defined.
Definition weqfunextsec { T : UU } (P:T -> UU)(f g : forall t:T, P t) : weq (forall t:T, paths (f t) (g t)) (paths f g) := weqpair _ ( isweqfunextsec P f g ) .
(** ** Sections of "double fibration" [(P: T -> UU)(PP: forall t:T, P t -> UU)] and pairs of sections *)
(** *** General case *)
Definition totaltoforall { X : UU } (P : X -> UU ) ( PP : forall x:X, P x -> UU ) : total2 (fun s0: forall x:X, P x => forall x:X, PP x (s0 x)) -> forall x:X, total2 (PP x).
Proof. intros X P PP X0 x. destruct X0 as [ t x0 ]. split with (t x). apply (x0 x). Defined.
Definition foralltototal { X : UU } ( P : X -> UU ) ( PP : forall x:X, P x -> UU ): (forall x:X, total2 (PP x)) -> total2 (fun s0: forall x:X, P x => forall x:X, PP x (s0 x)).
Proof. intros X P PP X0. split with (fun x:X => pr1 (X0 x)). apply (fun x:X => pr2 (X0 x)). Defined.
Lemma lemmaeta1 { X : UU } (P:X->UU) (Q:(forall x:X, P x) -> UU)(s0: forall x:X, P x)(q: Q (fun x:X => (s0 x))): paths (tpair (fun s: (forall x:X, P x) => Q (fun x:X => (s x))) s0 q) (tpair (fun s: (forall x:X, P x) => Q (fun x:X => (s x))) (fun x:X => (s0 x)) q).
Proof. intros. set (ff:= fun tp:total2 (fun s: (forall x:X, P x) => Q (fun x:X => (s x))) => tpair _ (fun x:X => pr1 tp x) (pr2 tp)). assert (X0 : isweq ff). apply (isweqfpmap ( weqeta P ) Q ).
assert (ee: paths (ff (tpair (fun s : forall x : X, P x => Q (fun x : X => s x)) s0 q)) (ff (tpair (fun s : forall x : X, P x => Q (fun x : X => s x)) (fun x : X => s0 x) q))). apply idpath.
apply (invmaponpathsweq ( weqpair ff X0 ) _ _ ee). Defined.
Definition totaltoforalltototal { X : UU } ( P : X -> UU ) ( PP : forall x:X, P x -> UU )( ss : total2 (fun s0: forall x:X, P x => forall x:X, PP x (s0 x)) ): paths (foralltototal _ _ (totaltoforall _ _ ss)) ss.
Proof. intros. destruct ss as [ t x ]. unfold foralltototal. unfold totaltoforall. simpl. set (et:= fun x:X => t x).
assert (paths (tpair (fun s0 : forall x0 : X, P x0 => forall x0 : X, PP x0 (s0 x0)) t x) (tpair (fun s0 : forall x0 : X, P x0 => forall x0 : X, PP x0 (s0 x0)) et x)). apply (lemmaeta1 P (fun s: forall x:X, P x => forall x:X, PP x (s x)) t x).
assert (ee: paths (tpair (fun s0 : forall x0 : X, P x0 => forall x0 : X, PP x0 (s0 x0)) et x) (tpair (fun s0 : forall x0 : X, P x0 => forall x0 : X, PP x0 (s0 x0)) et (fun x0 : X => x x0))).
assert (eee: paths x (fun x0:X => x x0)). apply etacorrection. destruct eee. apply idpath. destruct ee. apply pathsinv0. assumption. Defined.
Definition foralltototaltoforall { X : UU } ( P : X -> UU ) ( PP : forall x:X, P x -> UU ) ( ss : forall x:X, total2 (PP x)): paths (totaltoforall _ _ (foralltototal _ _ ss)) ss.
Proof. intros. unfold foralltototal. unfold totaltoforall. simpl. assert (ee: forall x:X, paths (tpair (PP x) (pr1 (ss x)) (pr2 (ss x))) (ss x)). intro. apply (pathsinv0 (tppr (ss x))). apply (funextsec). assumption. Defined.
Theorem isweqforalltototal { X : UU } ( P : X -> UU ) ( PP : forall x:X, P x -> UU ) : isweq (foralltototal P PP).
Proof. intros. apply (gradth (foralltototal P PP) (totaltoforall P PP) (foralltototaltoforall P PP) (totaltoforalltototal P PP)). Defined.
Theorem isweqtotaltoforall { X : UU } (P:X->UU)(PP:forall x:X, P x -> UU): isweq (totaltoforall P PP).
Proof. intros. apply (gradth (totaltoforall P PP) (foralltototal P PP) (totaltoforalltototal P PP) (foralltototaltoforall P PP)). Defined.
Definition weqforalltototal { X : UU } ( P : X -> UU ) ( PP : forall x:X, P x -> UU ) := weqpair _ ( isweqforalltototal P PP ) .
Definition weqtotaltoforall { X : UU } ( P : X -> UU ) ( PP : forall x:X, P x -> UU ) := invweq ( weqforalltototal P PP ) .
(** *** Functions to a dependent sum (to a [ total2 ]) *)
Definition weqfuntototaltototal ( X : UU ) { Y : UU } ( Q : Y -> UU ) : weq ( X -> total2 Q ) ( total2 ( fun f : X -> Y => forall x : X , Q ( f x ) ) ) := weqforalltototal ( fun x : X => Y ) ( fun x : X => Q ) .
(** *** Functions to direct product *)
(** Note: we give direct proofs for this special case. *)
Definition funtoprodtoprod { X Y Z : UU } ( f : X -> dirprod Y Z ) : dirprod ( X -> Y ) ( X -> Z ) := dirprodpair ( fun x : X => pr1 ( f x ) ) ( fun x : X => ( pr2 ( f x ) ) ) .
Definition prodtofuntoprod { X Y Z : UU } ( fg : dirprod ( X -> Y ) ( X -> Z ) ) : X -> dirprod Y Z := match fg with tpair _ f g => fun x : X => dirprodpair ( f x ) ( g x ) end .
Theorem weqfuntoprodtoprod ( X Y Z : UU ) : weq ( X -> dirprod Y Z ) ( dirprod ( X -> Y ) ( X -> Z ) ) .
Proof. intros. set ( f := @funtoprodtoprod X Y Z ) . set ( g := @prodtofuntoprod X Y Z ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro a . apply funextfun . intro x . simpl . apply pathsinv0 . apply tppr .
assert ( efg : forall a : _ , paths ( f ( g a ) ) a ) . intro a . destruct a as [ fy fz ] . apply pathsdirprod . simpl . apply pathsinv0 . apply etacorrection . simpl . apply pathsinv0 . apply etacorrection .
apply ( gradth _ _ egf efg ) . Defined .
(** ** Homotopy fibers of the map [forall x:X, P x -> forall x:X, Q x] *)
(** *** General case *)
Definition maponsec { X:UU } (P Q : X -> UU) (f: forall x:X, P x -> Q x): (forall x:X, P x) -> (forall x:X, Q x) :=
fun s: forall x:X, P x => (fun x:X => (f x) (s x)).
Definition maponsec1 { X Y : UU } (P:Y -> UU)(f:X-> Y): (forall y:Y, P y) -> (forall x:X, P (f x)) := fun sy: forall y:Y, P y => (fun x:X => sy (f x)).
Definition hfibertoforall { X : UU } (P Q : X -> UU) (f: forall x:X, P x -> Q x)(s: forall x:X, Q x): hfiber (@maponsec _ _ _ f) s -> forall x:X, hfiber (f x) (s x).
Proof. intro. intro. intro. intro. intro. unfold hfiber.
set (map1:= totalfun (fun pointover : forall x : X, P x =>
paths (fun x : X => f x (pointover x)) s) (fun pointover : forall x : X, P x =>
forall x:X, paths ((f x) (pointover x)) (s x)) (fun pointover: forall x:X, P x => toforallpaths _ (fun x : X => f x (pointover x)) s )).
set (map2 := totaltoforall P (fun x:X => (fun pointover : P x => paths (f x pointover) (s x)))).
set (themap := fun a:_ => map2 (map1 a)). assumption. Defined.
Definition foralltohfiber { X : UU } ( P Q : X -> UU) (f: forall x:X, P x -> Q x)(s: forall x:X, Q x): (forall x:X, hfiber (f x) (s x)) -> hfiber (maponsec _ _ f) s.
Proof. intro. intro. intro. intro. intro. unfold hfiber.
set (map2inv := foralltototal P (fun x:X => (fun pointover : P x => paths (f x pointover) (s x)))).
set (map1inv := totalfun (fun pointover : forall x : X, P x =>
forall x:X, paths ((f x) (pointover x)) (s x)) (fun pointover : forall x : X, P x =>
paths (fun x : X => f x (pointover x)) s) (fun pointover: forall x:X, P x => funextsec _ (fun x : X => f x (pointover x)) s)).
set (themap := fun a:_=> map1inv (map2inv a)). assumption. Defined.
Theorem isweqhfibertoforall { X : UU } (P Q :X -> UU) (f: forall x:X, P x -> Q x)(s: forall x:X, Q x): isweq (hfibertoforall _ _ f s).
Proof. intro. intro. intro. intro. intro.
set (map1:= totalfun (fun pointover : forall x : X, P x =>
paths (fun x : X => f x (pointover x)) s) (fun pointover : forall x : X, P x =>
forall x:X, paths ((f x) (pointover x)) (s x)) (fun pointover: forall x:X, P x => toforallpaths _ (fun x : X => f x (pointover x)) s)).
set (map2 := totaltoforall P (fun x:X => (fun pointover : P x => paths (f x pointover) (s x)))).
assert (is1: isweq map1). apply (isweqfibtototal _ _ (fun pointover: forall x:X, P x => weqtoforallpaths _ (fun x : X => f x (pointover x)) s )).
assert (is2: isweq map2). apply isweqtotaltoforall.
apply (twooutof3c map1 map2 is1 is2). Defined.
Definition weqhfibertoforall { X : UU } (P Q :X -> UU) (f: forall x:X, P x -> Q x)(s: forall x:X, Q x) := weqpair _ ( isweqhfibertoforall P Q f s ) .
Theorem isweqforalltohfiber { X : UU } (P Q : X -> UU) (f: forall x:X, P x -> Q x)(s: forall x:X, Q x): isweq (foralltohfiber _ _ f s).
Proof. intro. intro. intro. intro. intro.
set (map2inv := foralltototal P (fun x:X => (fun pointover : P x => paths (f x pointover) (s x)))).
assert (is2: isweq map2inv). apply (isweqforalltototal P (fun x:X => (fun pointover : P x => paths (f x pointover) (s x)))).
set (map1inv := totalfun (fun pointover : forall x : X, P x =>
forall x:X, paths ((f x) (pointover x)) (s x)) (fun pointover : forall x : X, P x =>
paths (fun x : X => f x (pointover x)) s) (fun pointover: forall x:X, P x => funextsec _ (fun x : X => f x (pointover x)) s)).
assert (is1: isweq map1inv).
(* ??? at this point Coq 8.4 (actually trunk towards 8.5) hangs if the next command is
apply (isweqfibtototal _ _ (fun pointover: forall x:X, P x => weqfunextsec _ (fun x : X => f x (pointover x)) s ) ).
and the -no-sharing option is not turned on. It also hangs on
exact (isweqfibtototal (fun pointover : forall x : X, P x =>
forall x : X, paths (f x (pointover x)) (s x)) (fun pointover : forall x : X, P x =>
paths (fun x : X => f x (pointover x)) s) (fun pointover: forall x:X, P x => weqfunextsec Q (fun x : X => f x (pointover x)) s ) ).
for at least 2 hrs. After adding "Opaque funextsec ." the "exact" command goes through in < 1 sec and so does the "apply". If "Transparent funextsec." is added after the "apply", the compilation hangs on "Defined".
*)
Opaque funextsec . apply (isweqfibtototal _ _ (fun pointover: forall x:X, P x => weqfunextsec _ (fun x : X => f x (pointover x)) s ) ).
apply (twooutof3c map2inv map1inv is2 is1). Defined.
Transparent funextsec.
Definition weqforalltohfiber { X : UU } (P Q : X -> UU) (f: forall x:X, P x -> Q x)(s: forall x:X, Q x) := weqpair _ ( isweqforalltohfiber P Q f s ) .
(** *** The weak equivalence between section spaces (dependent products) defined by a family of weak equivalences [ weq ( P x ) ( Q x ) ] *)
Corollary isweqmaponsec { X : UU } (P Q : X-> UU) (f: forall x:X, weq ( P x ) ( Q x) ) : isweq (maponsec _ _ f).
Proof. intros. unfold isweq. intro y.
assert (is1: iscontr (forall x:X, hfiber (f x) (y x))). assert (is2: forall x:X, iscontr (hfiber (f x) (y x))). intro x. apply ( ( pr2 ( f x ) ) (y x)). apply funcontr. assumption.
apply (iscontrweqb (weqhfibertoforall P Q f y) is1 ). Defined.
Definition weqonseqfibers { X : UU } (P Q : X-> UU) (f: forall x:X, weq ( P x ) ( Q x )) := weqpair _ ( isweqmaponsec P Q f ) .
(** *** Composition of functions with a weak equivalence on the right *)
Definition weqffun ( X : UU ) { Y Z : UU } ( w : weq Y Z ) : weq ( X -> Y ) ( X -> Z ) := weqonseqfibers _ _ ( fun x : X => w ) .
(** ** The map between section spaces (dependent products) defined by the map between the bases [ f: Y -> X ] *)
(** *** General case *)
Definition maponsec1l0 { X : UU } (P:X -> UU)(f:X-> X)(h: forall x:X, paths (f x) x)(s: forall x:X, P x): (forall x:X, P x) := (fun x:X => transportf P (h x) (s (f x))).
Lemma maponsec1l1 { X : UU } (P:X -> UU)(x:X)(s:forall x:X, P x): paths (maponsec1l0 P (fun x:X => x) (fun x:X => idpath x) s x) (s x).
Proof. intros. unfold maponsec1l0. apply idpath. Defined.
Lemma maponsec1l2 { X : UU } (P:X -> UU)(f:X-> X)(h: forall x:X, paths (f x) x)(s: forall x:X, P x)(x:X): paths (maponsec1l0 P f h s x) (s x).
Proof. intros.
set (map:= fun ff: total2 (fun f0:X->X => forall x:X, paths (f0 x) x) => maponsec1l0 P (pr1 ff) (pr2 ff) s x).
assert (is1: iscontr (total2 (fun f0:X->X => forall x:X, paths (f0 x) x))). apply funextweql1. assert (e: paths (tpair (fun f0:X->X => forall x:X, paths (f0 x) x) f h) (tpair (fun f0:X->X => forall x:X, paths (f0 x) x) (fun x0:X => x0) (fun x0:X => idpath x0))). apply proofirrelevancecontr. assumption. apply (maponpaths map e). Defined.
Theorem isweqmaponsec1 { X Y : UU } (P:Y -> UU)(f: weq X Y ) : isweq (maponsec1 P f).
Proof. intros.
set (map:= maponsec1 P f).
set (invf:= invmap f). set (e1:= homotweqinvweq f). set (e2:= homotinvweqweq f ).
set (im1:= fun sx: forall x:X, P (f x) => (fun y:Y => sx (invf y))).
set (im2:= fun sy': forall y:Y, P (f (invf y)) => (fun y:Y => transportf _ (homotweqinvweq f y) (sy' y))).
set (invmapp := (fun sx: forall x:X, P (f x) => im2 (im1 sx))).
assert (efg0: forall sx: (forall x:X, P (f x)), forall x:X, paths ((map (invmapp sx)) x) (sx x)). intro. intro. unfold map. unfold invmapp. unfold im1. unfold im2. unfold maponsec1. simpl. fold invf. set (ee:=e2 x). fold invf in ee.
set (e3x:= fun x0:X => invmaponpathsweq f (invf (f x0)) x0 (homotweqinvweq f (f x0))). set (e3:=e3x x). assert (e4: paths (homotweqinvweq f (f x)) (maponpaths f e3)). apply (pathsinv0 (pathsweq4 f (invf (f x)) x _)).
assert (e5:paths (transportf P (homotweqinvweq f (f x)) (sx (invf (f x)))) (transportf P (maponpaths f e3) (sx (invf (f x))))). apply (maponpaths (fun e40:_ => (transportf P e40 (sx (invf (f x))))) e4).
assert (e6: paths (transportf P (maponpaths f e3) (sx (invf (f x)))) (transportf (fun x:X => P (f x)) e3 (sx (invf (f x))))). apply (pathsinv0 (functtransportf f P e3 (sx (invf (f x))))).
set (ff:= fun x:X => invf (f x)).
assert (e7: paths (transportf (fun x : X => P (f x)) e3 (sx (invf (f x)))) (sx x)). apply (maponsec1l2 (fun x:X => P (f x)) ff e3x sx x). apply (pathscomp0 (pathscomp0 e5 e6) e7).
assert (efg: forall sx: (forall x:X, P (f x)), paths (map (invmapp sx)) sx). intro. apply (funextsec _ _ _ (efg0 sx)).
assert (egf0: forall sy: (forall y:Y, P y), forall y:Y, paths ((invmapp (map sy)) y) (sy y)). intros. unfold invmapp. unfold map. unfold im1. unfold im2. unfold maponsec1.
set (ff:= fun y:Y => f (invf y)). fold invf. apply (maponsec1l2 P ff ( homotweqinvweq f ) sy y).
assert (egf: forall sy: (forall y:Y, P y), paths (invmapp (map sy)) sy). intro. apply (funextsec _ _ _ (egf0 sy)).
apply (gradth map invmapp egf efg). Defined.
Definition weqonsecbase { X Y : UU } ( P : Y -> UU ) ( f : weq X Y ) := weqpair _ ( isweqmaponsec1 P f ) .
(** *** Composition of functions with a weak equivalence on the left *)
Definition weqbfun { X Y : UU } ( Z : UU ) ( w : weq X Y ) : weq ( Y -> Z ) ( X -> Z ) := weqonsecbase _ w .
(** ** Sections of families over an empty type and over coproducts *)
(** *** General case *)
Definition iscontrsecoverempty ( P : empty -> UU ) : iscontr ( forall x : empty , P x ) .
Proof . intro . split with ( fun x : empty => fromempty x ) . intro t . apply funextsec . intro t0 . destruct t0 . Defined .
Definition iscontrsecoverempty2 { X : UU } ( P : X -> UU ) ( is : neg X ) : iscontr ( forall x : X , P x ) .
Proof . intros . set ( w := weqtoempty is ) . set ( w' := weqonsecbase P ( invweq w ) ) . apply ( iscontrweqb w' ( iscontrsecoverempty _ ) ) . Defined .
Definition secovercoprodtoprod { X Y : UU } ( P : coprod X Y -> UU ) ( a: forall xy : coprod X Y , P xy ) : dirprod ( forall x : X , P ( ii1 x ) ) ( forall y : Y , P ( ii2 y ) ) := dirprodpair ( fun x : X => a ( ii1 x ) ) ( fun y : Y => a ( ii2 y ) ) .
Definition prodtosecovercoprod { X Y : UU } ( P : coprod X Y -> UU ) ( a : dirprod ( forall x : X , P ( ii1 x ) ) ( forall y : Y , P ( ii2 y ) ) ) : forall xy : coprod X Y , P xy .
Proof . intros . destruct xy as [ x | y ] . apply ( pr1 a x ) . apply ( pr2 a y ) . Defined .
Definition weqsecovercoprodtoprod { X Y : UU } ( P : coprod X Y -> UU ) : weq ( forall xy : coprod X Y , P xy ) ( dirprod ( forall x : X , P ( ii1 x ) ) ( forall y : Y , P ( ii2 y ) ) ) .
Proof . intros . set ( f := secovercoprodtoprod P ) . set ( g := prodtosecovercoprod P ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro . apply funextsec . intro t . destruct t as [ x | y ] . apply idpath . apply idpath .
assert ( efg : forall a : _ , paths ( f ( g a ) ) a ) . intro . destruct a as [ ax ay ] . apply ( pathsdirprod ) . apply funextsec . intro x . apply idpath . apply funextsec . intro y . apply idpath .
apply ( gradth _ _ egf efg ) . Defined .
(** *** Functions from the empty type *)
Theorem iscontrfunfromempty ( X : UU ) : iscontr ( empty -> X ) .
Proof . intro . split with fromempty . intro t . apply funextfun . intro x . destruct x . Defined .
Theorem iscontrfunfromempty2 ( X : UU ) { Y : UU } ( is : neg Y ) : iscontr ( Y -> X ) .
Proof. intros . set ( w := weqtoempty is ) . set ( w' := weqbfun X ( invweq w ) ) . apply ( iscontrweqb w' ( iscontrfunfromempty X ) ) . Defined .
(** *** Functions from a coproduct *)
Definition funfromcoprodtoprod { X Y Z : UU } ( f : coprod X Y -> Z ) : dirprod ( X -> Z ) ( Y -> Z ) := dirprodpair ( fun x : X => f ( ii1 x ) ) ( fun y : Y => f ( ii2 y ) ) .
Definition prodtofunfromcoprod { X Y Z : UU } ( fg : dirprod ( X -> Z ) ( Y -> Z ) ) : coprod X Y -> Z := match fg with tpair _ f g => sumofmaps f g end .
Theorem weqfunfromcoprodtoprod ( X Y Z : UU ) : weq ( coprod X Y -> Z ) ( dirprod ( X -> Z ) ( Y -> Z ) ) .
Proof. intros . set ( f := @funfromcoprodtoprod X Y Z ) . set ( g := @prodtofunfromcoprod X Y Z ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro a . apply funextfun . intro xy . destruct xy as [ x | y ] . apply idpath . apply idpath .
assert ( efg : forall a : _ , paths ( f ( g a ) ) a ) . intro a . destruct a as [ fx fy ] . simpl . apply pathsdirprod . simpl . apply pathsinv0 . apply etacorrection . simpl . apply pathsinv0 . apply etacorrection .
apply ( gradth _ _ egf efg ) . Defined .
(** ** Sections of families over contractible types and over [ total2 ] (over dependent sums) *)
(** *** General case *)
Definition tosecoverunit ( P : unit -> UU ) ( p : P tt ) : forall t : unit , P t .
Proof . intros . destruct t . apply p . Defined .
Definition weqsecoverunit ( P : unit -> UU ) : weq ( forall t : unit , P t ) ( P tt ) .
Proof . intro. set ( f := fun a : forall t : unit , P t => a tt ) . set ( g := tosecoverunit P ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro . apply funextsec . intro t . destruct t . apply idpath .
assert ( efg : forall a : _ , paths ( f ( g a ) ) a ) . intros . apply idpath .
apply ( gradth _ _ egf efg ) . Defined .
Definition weqsecovercontr { X : UU } ( P : X -> UU ) ( is : iscontr X ) : weq ( forall x : X , P x ) ( P ( pr1 is ) ) .
Proof . intros . set ( w1 := weqonsecbase P ( wequnittocontr is ) ) . apply ( weqcomp w1 ( weqsecoverunit _ ) ) . Defined .
Definition tosecovertotal2 { X : UU } ( P : X -> UU ) ( Q : total2 P -> UU ) ( a : forall x : X , forall p : P x , Q ( tpair _ x p ) ) : forall xp : total2 P , Q xp .
Proof . intros . destruct xp as [ x p ] . apply ( a x p ) . Defined .
Definition weqsecovertotal2 { X : UU } ( P : X -> UU ) ( Q : total2 P -> UU ) : weq ( forall xp : total2 P , Q xp ) ( forall x : X , forall p : P x , Q ( tpair _ x p ) ) .
Proof . intros . set ( f := fun a : forall xp : total2 P , Q xp => fun x : X => fun p : P x => a ( tpair _ x p ) ) . set ( g := tosecovertotal2 P Q ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro . apply funextsec . intro xp . destruct xp as [ x p ] . apply idpath .
assert ( efg : forall a : _ , paths ( f ( g a ) ) a ) . intro . apply funextsec . intro x . apply funextsec . intro p . apply idpath .
apply ( gradth _ _ egf efg ) . Defined .
(** *** Functions from [ unit ] and from contractible types *)
Definition weqfunfromunit ( X : UU ) : weq ( unit -> X ) X := weqsecoverunit _ .
Definition weqfunfromcontr { X : UU } ( Y : UU ) ( is : iscontr X ) : weq ( X -> Y ) Y := weqsecovercontr _ is .
(** *** Functions from [ total2 ] *)
Definition weqfunfromtotal2 { X : UU } ( P : X -> UU ) ( Y : UU ) : weq ( total2 P -> Y ) ( forall x : X , P x -> Y ) := weqsecovertotal2 P _ .
(** *** Functions from direct product *)
Definition weqfunfromdirprod ( X X' Y : UU ) : weq ( dirprod X X' -> Y ) ( forall x : X , X' -> Y ) := weqsecovertotal2 _ _ .
(** ** Theorem saying that if each member of a family is of h-level n then the space of sections of the family is of h-level n. *)
(** *** General case *)
Theorem impred (n:nat) { T : UU } (P:T -> UU): (forall t:T, isofhlevel n (P t)) -> (isofhlevel n (forall t:T, P t)).
Proof. intro. induction n as [ | n IHn ] . intros T P X. apply (funcontr P X). intros T P X. unfold isofhlevel in X. unfold isofhlevel. intros x x' .
assert (is: forall t:T, isofhlevel n (paths (x t) (x' t))). intro. apply (X t (x t) (x' t)).
assert (is2: isofhlevel n (forall t:T, paths (x t) (x' t))). apply (IHn _ (fun t0:T => paths (x t0) (x' t0)) is).
set (u:=toforallpaths P x x'). assert (is3:isweq u). apply isweqtoforallpaths. set (v:= invmap ( weqpair u is3) ). assert (is4: isweq v). apply isweqinvmap. apply (isofhlevelweqf n ( weqpair v is4 )). assumption. Defined.
Corollary impredtwice (n:nat) { T T' : UU } (P:T -> T' -> UU): (forall (t:T)(t':T'), isofhlevel n (P t t')) -> (isofhlevel n (forall (t:T)(t':T'), P t t')).
Proof. intros n T T' P X. assert (is1: forall t:T, isofhlevel n (forall t':T', P t t')). intro. apply (impred n _ (X t)). apply (impred n _ is1). Defined.
Corollary impredfun (n:nat)(X Y:UU)(is: isofhlevel n Y) : isofhlevel n (X -> Y).
Proof. intros. apply (impred n (fun x:_ => Y) (fun x:X => is)). Defined.
Theorem impredtech1 (n:nat)(X Y: UU) : (X -> isofhlevel n Y) -> isofhlevel n (X -> Y).
Proof. intro. induction n as [ | n IHn ] . intros X Y X0. simpl. split with (fun x:X => pr1 (X0 x)). intro t .
assert (s1: forall x:X, paths (t x) (pr1 (X0 x))). intro. apply proofirrelevancecontr. apply (X0 x).
apply funextsec. assumption.
intros X Y X0. simpl. assert (X1: X -> isofhlevel (S n) (X -> Y)). intro X1 . apply impred. assumption. intros x x' .
assert (s1: isofhlevel n (forall xx:X, paths (x xx) (x' xx))). apply impred. intro t . apply (X0 t).
assert (w: weq (forall xx:X, paths (x xx) (x' xx)) (paths x x')). apply (weqfunextsec _ x x' ). apply (isofhlevelweqf n w s1). Defined.
(** *** Functions to a contractible type *)
Theorem iscontrfuntounit ( X : UU ) : iscontr ( X -> unit ) .
Proof . intro . split with ( fun x : X => tt ) . intro f . apply funextfun . intro x . destruct ( f x ) . apply idpath . Defined .
Theorem iscontrfuntocontr ( X : UU ) { Y : UU } ( is : iscontr Y ) : iscontr ( X -> Y ) .
Proof . intros . set ( w := weqcontrtounit is ) . set ( w' := weqffun X w ) . apply ( iscontrweqb w' ( iscontrfuntounit X ) ) . Defined .
(** *** Functions to a proposition *)
Lemma isapropimpl ( X Y : UU ) ( isy : isaprop Y ) : isaprop ( X -> Y ) .
Proof. intros. apply impred. intro. assumption. Defined.
(** *** Functions to an empty type (generalization of [ isapropneg ]) *)
Theorem isapropneg2 ( X : UU ) { Y : UU } ( is : neg Y ) : isaprop ( X -> Y ) .
Proof . intros . apply impred . intro . apply ( isapropifnegtrue is ) . Defined .
(** ** Theorems saying that [ iscontr T ], [ isweq f ] etc. are of h-level 1 *)
Theorem iscontriscontr { X : UU } ( is : iscontr X ) : iscontr ( iscontr X ).
Proof. intros X X0 .
assert (is0: forall (x x':X), paths x x'). apply proofirrelevancecontr. assumption.
assert (is1: forall cntr:X, iscontr (forall x:X, paths x cntr)). intro.
assert (is2: forall x:X, iscontr (paths x cntr)).
assert (is2: isaprop X). apply isapropifcontr. assumption.
unfold isaprop in is2. unfold isofhlevel in is2. intro x . apply (is2 x cntr).
apply funcontr. assumption.
set (f:= @pr1 X (fun cntr:X => forall x:X, paths x cntr)).
assert (X1:isweq f). apply isweqpr1. assumption. change (total2 (fun cntr : X => forall x : X, paths x cntr)) with (iscontr X) in X1. apply (iscontrweqb ( weqpair f X1 ) ) . assumption. Defined.
Theorem isapropiscontr (T:UU): isaprop (iscontr T).
Proof. intros. unfold isaprop. unfold isofhlevel. intros x x' . assert (is: iscontr(iscontr T)). apply iscontriscontr. apply x. assert (is2: isaprop (iscontr T)). apply ( isapropifcontr is ) . apply (is2 x x'). Defined.
Theorem isapropisweq { X Y : UU } (f:X-> Y) : isaprop (isweq f).
Proof. intros. unfold isweq. apply (impred (S O) (fun y:Y => iscontr (hfiber f y)) (fun y:Y => isapropiscontr (hfiber f y))). Defined.
Theorem isapropisisolated ( X : UU ) ( x : X ) : isaprop ( isisolated X x ) .
Proof. intros . apply isofhlevelsn . intro is . apply impred . intro x' . apply ( isapropdec _ ( isaproppathsfromisolated X x is x' ) ) . Defined .
Theorem isapropisdeceq (X:UU): isaprop (isdeceq X).
Proof. intro. apply ( isofhlevelsn 0 ) . intro is . unfold isdeceq. apply impred . intro x . apply ( isapropisisolated X x ) . Defined .
Definition isapropisdecprop ( X : UU ) : isaprop ( isdecprop X ) := isapropiscontr ( coprod X ( neg X ) ) .
Theorem isapropisofhlevel (n:nat)(X:UU): isaprop (isofhlevel n X).
Proof. intro. unfold isofhlevel. induction n as [ | n IHn ] . apply isapropiscontr. intro X .
assert (X0: forall (x x':X), isaprop ((fix isofhlevel (n0 : nat) (X0 : UU) {struct n0} : UU :=
match n0 with
| O => iscontr X0
| S m => forall x0 x'0 : X0, isofhlevel m (paths x0 x'0)
end) n (paths x x'))). intros. apply (IHn (paths x x')).
assert (is1:
(forall x:X, isaprop (forall x' : X,
(fix isofhlevel (n0 : nat) (X1 : UU) {struct n0} : UU :=
match n0 with
| O => iscontr X1
| S m => forall x0 x'0 : X1, isofhlevel m (paths x0 x'0)
end) n (paths x x')))). intro. apply (impred ( S O ) _ (X0 x)). apply (impred (S O) _ is1). Defined.
Corollary isapropisaprop (X:UU) : isaprop (isaprop X).
Proof. intro. apply (isapropisofhlevel (S O)). Defined.
Corollary isapropisaset (X:UU): isaprop (isaset X).
Proof. intro. apply (isapropisofhlevel (S (S O))). Defined.
Theorem isapropisofhlevelf ( n : nat ) { X Y : UU } ( f : X -> Y ) : isaprop ( isofhlevelf n f ) .
Proof . intros . unfold isofhlevelf . apply impred . intro y . apply isapropisofhlevel . Defined .
Definition isapropisincl { X Y : UU } ( f : X -> Y ) := isapropisofhlevelf 1 f .
(** ** Theorems saying that various [ pr1 ] maps are inclusions *)
Theorem isinclpr1weq ( X Y : UU ) : isincl ( @pr1 _ ( fun f : X -> Y => isweq f ) ) .
Proof. intros . apply isinclpr1 . intro f. apply isapropisweq . Defined .
Theorem isinclpr1isolated ( T : UU ) : isincl ( pr1isolated T ) .
Proof . intro . apply ( isinclpr1 _ ( fun t : T => isapropisisolated T t ) ) . Defined .
(** ** Various weak equivalences between spaces of weak equivalences *)
(** *** Composition with a weak equivalence is a weak equivalence on weak equivalences *)
Theorem weqfweq ( X : UU ) { Y Z : UU } ( w : weq Y Z ) : weq ( weq X Y ) ( weq X Z ) .
Proof. intros . set ( f := fun a : weq X Y => weqcomp a w ) . set ( g := fun b : weq X Z => weqcomp b ( invweq w ) ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro a . apply ( invmaponpathsincl _ ( isinclpr1weq _ _ ) ) . apply funextfun . intro x . apply ( homotinvweqweq w ( a x ) ) .
assert ( efg : forall b : _ , paths ( f ( g b ) ) b ) . intro b . apply ( invmaponpathsincl _ ( isinclpr1weq _ _ ) ) . apply funextfun . intro x . apply ( homotweqinvweq w ( b x ) ) .
apply ( gradth _ _ egf efg ) . Defined .
Theorem weqbweq { X Y : UU } ( Z : UU ) ( w : weq X Y ) : weq ( weq Y Z ) ( weq X Z ) .
Proof. intros . set ( f := fun a : weq Y Z => weqcomp w a ) . set ( g := fun b : weq X Z => weqcomp ( invweq w ) b ) . split with f .
assert ( egf : forall a : _ , paths ( g ( f a ) ) a ) . intro a . apply ( invmaponpathsincl _ ( isinclpr1weq _ _ ) ) . apply funextfun . intro y . apply ( maponpaths a ( homotweqinvweq w y ) ) .
assert ( efg : forall b : _ , paths ( f ( g b ) ) b ) . intro b . apply ( invmaponpathsincl _ ( isinclpr1weq _ _ ) ) . apply funextfun . intro x . apply ( maponpaths b ( homotinvweqweq w x ) ) .
apply ( gradth _ _ egf efg ) . Defined .
(** *** Inversion of weak equivalences as a weak equivalence *)
(** Comment : note that the full form of [ funextfun ] is only used in the proof of this theorem in the form of [ isapropisweq ]. The rest of the proof can be completed using eta-conversion . *)
Theorem weqinvweq ( X Y : UU ) : weq ( weq X Y ) ( weq Y X ) .
Proof . intros . set ( f := fun w : weq X Y => invweq w ) . set ( g := fun w : weq Y X => invweq w ) . split with f .
assert ( egf : forall w : _ , paths ( g ( f w ) ) w ) . intro . apply ( invmaponpathsincl _ ( isinclpr1weq _ _ ) ) . apply funextfun . intro x . unfold f. unfold g . unfold invweq . simpl . unfold invmap . simpl . apply idpath .
assert ( efg : forall w : _ , paths ( f ( g w ) ) w ) . intro . apply ( invmaponpathsincl _ ( isinclpr1weq _ _ ) ) . apply funextfun . intro x . unfold f. unfold g . unfold invweq . simpl . unfold invmap . simpl . apply idpath .
apply ( gradth _ _ egf efg ) . Defined .
(** ** h-levels of spaces of weak equivalences *)
(** *** Weak equivalences to and from types of h-level ( S n ) *)
Theorem isofhlevelsnweqtohlevelsn ( n : nat ) ( X Y : UU ) ( is : isofhlevel ( S n ) Y ) : isofhlevel ( S n ) ( weq X Y ) .
Proof . intros . apply ( isofhlevelsninclb n _ ( isinclpr1weq _ _ ) ) . apply impred . intro . apply is . Defined .
Theorem isofhlevelsnweqfromhlevelsn ( n : nat ) ( X Y : UU ) ( is : isofhlevel ( S n ) Y ) : isofhlevel ( S n ) ( weq Y X ) .
Proof. intros . apply ( isofhlevelweqf ( S n ) ( weqinvweq X Y ) ( isofhlevelsnweqtohlevelsn n X Y is ) ) . Defined .
(** *** Weak equivalences to and from contractible types *)
Theorem isapropweqtocontr ( X : UU ) { Y : UU } ( is : iscontr Y ) : isaprop ( weq X Y ) .
Proof . intros . apply ( isofhlevelsnweqtohlevelsn 0 _ _ ( isapropifcontr is ) ) . Defined .
Theorem isapropweqfromcontr ( X : UU ) { Y : UU } ( is : iscontr Y ) : isaprop ( weq Y X ) .
Proof. intros . apply ( isofhlevelsnweqfromhlevelsn 0 X _ ( isapropifcontr is ) ) . Defined .
(** *** Weak equivalences to and from propositions *)
Theorem isapropweqtoprop ( X Y : UU ) ( is : isaprop Y ) : isaprop ( weq X Y ) .
Proof . intros . apply ( isofhlevelsnweqtohlevelsn 0 _ _ is ) . Defined .
Theorem isapropweqfromprop ( X Y : UU )( is : isaprop Y ) : isaprop ( weq Y X ) .
Proof. intros . apply ( isofhlevelsnweqfromhlevelsn 0 X _ is ) . Defined .
(** *** Weak equivalences to and from sets *)
Theorem isasetweqtoset ( X Y : UU ) ( is : isaset Y ) : isaset ( weq X Y ) .
Proof . intros . apply ( isofhlevelsnweqtohlevelsn 1 _ _ is ) . Defined .
Theorem isasetweqfromset ( X Y : UU )( is : isaset Y ) : isaset ( weq Y X ) .
Proof. intros . apply ( isofhlevelsnweqfromhlevelsn 1 X _ is ) . Defined .
(** *** Weak equivalences to an empty type *)
Theorem isapropweqtoempty ( X : UU ) : isaprop ( weq X empty ) .
Proof . intro . apply ( isofhlevelsnweqtohlevelsn 0 _ _ ( isapropempty ) ) . Defined .
Theorem isapropweqtoempty2 ( X : UU ) { Y : UU } ( is : neg Y ) : isaprop ( weq X Y ) .
Proof. intros . apply ( isofhlevelsnweqtohlevelsn 0 _ _ ( isapropifnegtrue is ) ) . Defined .
(** *** Weak equivalences from an empty type *)
Theorem isapropweqfromempty ( X : UU ) : isaprop ( weq empty X ) .
Proof . intro . apply ( isofhlevelsnweqfromhlevelsn 0 X _ ( isapropempty ) ) . Defined .
Theorem isapropweqfromempty2 ( X : UU ) { Y : UU } ( is : neg Y ) : isaprop ( weq Y X ) .
Proof. intros . apply ( isofhlevelsnweqfromhlevelsn 0 X _ ( isapropifnegtrue is ) ) . Defined .
(** *** Weak equivalences to and from [ unit ] *)
Theorem isapropweqtounit ( X : UU ) : isaprop ( weq X unit ) .
Proof . intro . apply ( isofhlevelsnweqtohlevelsn 0 _ _ ( isapropunit ) ) . Defined .
Theorem isapropweqfromunit ( X : UU ) : isaprop ( weq unit X ) .
Proof. intros . apply ( isofhlevelsnweqfromhlevelsn 0 X _ ( isapropunit ) ) . Defined .
(** ** Weak auto-equivalences of a type with an isolated point *)
Definition cutonweq { T : UU } ( t : T ) ( is : isisolated T t ) ( w : weq T T ) : dirprod ( isolated T ) ( weq ( compl T t ) ( compl T t ) ) := dirprodpair ( isolatedpair T ( w t ) ( isisolatedweqf w t is ) ) ( weqcomp ( weqoncompl w t ) ( weqtranspos0 ( w t ) t ( isisolatedweqf w t is ) is ) ) .
Definition invcutonweq { T : UU } ( t : T ) ( is : isisolated T t ) ( t'w : dirprod ( isolated T ) ( weq ( compl T t ) ( compl T t ) ) ) : weq T T := weqcomp ( weqrecomplf t t is is ( pr2 t'w ) ) ( weqtranspos t ( pr1 ( pr1 t'w ) ) is ( pr2 ( pr1 t'w ) ) ) .
Lemma pathsinvcuntonweqoft { T : UU } ( t : T ) ( is : isisolated T t ) ( t'w : dirprod ( isolated T ) ( weq ( compl T t ) ( compl T t ) ) ) : paths ( invcutonweq t is t'w t ) ( pr1 ( pr1 t'w ) ) .
Proof. intros . unfold invcutonweq . simpl . unfold recompl . unfold coprodf . unfold invmap . simpl . unfold invrecompl . destruct ( is t ) as [ ett | nett ] . apply pathsfuntransposoft1 . destruct ( nett ( idpath _ ) ) . Defined .
Definition weqcutonweq ( T : UU ) ( t : T ) ( is : isisolated T t ) : weq ( weq T T ) ( dirprod ( isolated T ) ( weq ( compl T t ) ( compl T t ) ) ) .
Proof . intros . set ( f := cutonweq t is ) . set ( g := invcutonweq t is ) . split with f .
assert ( egf : forall w : _ , paths ( g ( f w ) ) w ) . intro w . apply ( invmaponpathsincl _ ( isinclpr1weq _ _ ) _ _ ) . apply funextfun . intro t' . simpl . unfold invmap . simpl . unfold coprodf . unfold invrecompl . destruct ( is t' ) as [ ett' | nett' ] . simpl . rewrite ( pathsinv0 ett' ) . apply pathsfuntransposoft1 . simpl . unfold funtranspos0 . simpl . destruct ( is ( w t ) ) as [ etwt | netwt ] . destruct ( is ( w t' ) ) as [ etwt' | netwt' ] . destruct (negf (invmaponpathsincl w (isofhlevelfweq 1 w) t t') nett' (pathscomp0 (pathsinv0 etwt) etwt')) . simpl . assert ( newtt'' := netwt' ) . rewrite etwt in netwt' . apply ( pathsfuntransposofnet1t2 t ( w t ) is _ ( w t' ) newtt'' netwt' ) . simpl . destruct ( is ( w t' ) ) as [ etwt' | netwt' ] . simpl . change ( w t' ) with ( pr1 w t' ) in etwt' . rewrite ( pathsinv0 etwt' ). apply ( pathsfuntransposoft2 t ( w t ) is _ ) . simpl . assert ( ne : neg ( paths ( w t ) ( w t' ) ) ) . apply ( negf ( invmaponpathsweq w _ _ ) nett' ) . apply ( pathsfuntransposofnet1t2 t ( w t ) is _ ( w t' ) netwt' ne ) .
assert ( efg : forall xw : _ , paths ( f ( g xw ) ) xw ) . intro . destruct xw as [ x w ] . destruct x as [ t' is' ] . simpl in w . apply pathsdirprod .
apply ( invmaponpathsincl _ ( isinclpr1isolated _ ) ) . simpl . unfold recompl . unfold coprodf . unfold invmap . simpl . unfold invrecompl . destruct ( is t ) as [ ett | nett ] . apply pathsfuntransposoft1 . destruct ( nett ( idpath _ ) ) .
simpl . apply ( invmaponpathsincl _ ( isinclpr1weq _ _ ) _ _ ) . apply funextfun . intro x . destruct x as [ x netx ] . unfold g . unfold invcutonweq . simpl .
set ( int := funtranspos ( tpair _ t is ) ( tpair _ t' is' ) (recompl T t (coprodf w (fun x0 : unit => x0) (invmap (weqrecompl T t is) t))) ) .
assert ( eee : paths int t' ) . unfold int . unfold recompl . unfold coprodf . unfold invmap . simpl . unfold invrecompl . destruct ( is t ) as [ ett | nett ] . apply ( pathsfuntransposoft1 ) . destruct ( nett ( idpath _ ) ) .
assert ( isint : isisolated _ int ) . rewrite eee . apply is' .
apply ( ishomotinclrecomplf _ _ isint ( funtranspos0 _ _ _ ) _ _ ) . simpl . change ( recomplf int t isint (funtranspos0 int t is) ) with ( funtranspos ( tpair _ int isint ) ( tpair _ t is ) ) .
assert ( ee : paths ( tpair _ int isint) ( tpair _ t' is' ) ) . apply ( invmaponpathsincl _ ( isinclpr1isolated _ ) _ _ ) . simpl . apply eee .
rewrite ee . set ( e := homottranspost2t1t1t2 t t' is is' (recompl T t (coprodf w (fun x0 : unit => x0) (invmap (weqrecompl T t is) x))) ) . unfold funcomp in e . unfold idfun in e . rewrite e . unfold recompl . unfold coprodf . unfold invmap . simpl . unfold invrecompl . destruct ( is x ) as [ etx | netx' ] . destruct ( netx etx ) . apply ( maponpaths ( @pr1 _ _ ) ) . apply ( maponpaths w ) . apply ( invmaponpathsincl _ ( isinclpr1compl _ _ ) _ _ ) . simpl . apply idpath .
apply ( gradth _ _ egf efg ) . Defined .
(* Coprojections i.e. functions which are weakly equivalent to functions of the form ii1: X -> coprod X Y
Definition locsplit (X:UU)(Y:UU)(f:X -> Y):= forall y:Y, coprod (hfiber f y) (hfiber f y -> empty).
Definition dnegimage (X:UU)(Y:UU)(f:X -> Y):= total2 Y (fun y:Y => dneg(hfiber f y)).
Definition dnegimageincl (X Y:UU)(f:X -> Y):= pr1 Y (fun y:Y => dneg(hfiber f y)).
Definition xtodnegimage (X:UU)(Y:UU)(f:X -> Y): X -> dnegimage f:= fun x:X => tpair (f x) ((todneg _) (hfiberpair f (f x) x (idpath (f x)))).
Definition locsplitsec (X:UU)(Y:UU)(f:X->Y)(ls: locsplit f): dnegimage f -> X := fun u: _ =>
match u with
tpair y psi =>
match (ls y) with
ii1 z => pr1 z|
ii2 phi => fromempty (psi phi)
end
end.
Definition locsplitsecissec (X Y:UU)(f:X->Y)(ls: locsplit f)(u:dnegimage f): paths (xtodnegimage f (locsplitsec f ls u)) u.
Proof. intros. set (p:= xtodnegimage f). set (s:= locsplitsec f ls).
assert (paths (pr1 (p (s u))) (pr1 u)). unfold p. unfold xtodnegimage. unfold s. unfold locsplitsec. simpl. induction u. set (lst:= ls t). induction lst. simpl. apply (pr2 x0). induction (x y).
assert (is: isofhlevelf (S O) (dnegimageincl f)). apply (isofhlevelfpr1 (S O) (fun y:Y => isapropdneg (hfiber f y))).
assert (isw: isweq (maponpaths (dnegimageincl f) (p (s u)) u)). apply (isofhlevelfonpaths O _ is).
apply (invmap _ isw X0). Defined.
Definition negimage (X:UU)(Y:UU)(f:X -> Y):= total2 Y (fun y:Y => neg(hfiber f y)).
Definition negimageincl (X Y:UU)(f:X -> Y):= pr1 Y (fun y:Y => neg(hfiber f y)).
Definition imsum (X:UU)(Y:UU)(f:X -> Y): coprod (dnegimage f) (negimage f) -> Y:= fun u:_ =>
match u with
ii1 z => pr1 z|
ii2 z => pr1 z
end.
*)
|
```python
%pylab inline
import pandas as pd
import numpy as np
from __future__ import division
import itertools
import matplotlib.pyplot as plt
import seaborn as sns
import logging
logger = logging.getLogger()
```
Populating the interactive namespace from numpy and matplotlib
9 Recommendation Systems
=============
two broad groups:
1. Content-based systems
focus on the properties of items.
2. Collaborative filtering systems
focus on the relationship between users and items.
### 9.1 A Model for Recommendation Systems
#### The Utility Matrix
records the preferences given by users for certain items.
```python
# Example 9.1
M = pd.DataFrame(index=['A', 'B', 'C', 'D'], columns=['HP1', 'HP2', 'HP3', 'TW', 'SW1', 'SW2', 'SW3'])
M.loc['A', ['HP1', 'TW', 'SW1']] = [4, 5, 1]
M.iloc[1, 0:3] = [5, 5, 4]
M.iloc[2, 3:-1] = [2, 4, 5]
M.iloc[3, [1, -1]] = [3, 3]
M_9_1 = M
M_9_1
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>HP1</th>
<th>HP2</th>
<th>HP3</th>
<th>TW</th>
<th>SW1</th>
<th>SW2</th>
<th>SW3</th>
</tr>
</thead>
<tbody>
<tr>
<th>A</th>
<td>4</td>
<td>NaN</td>
<td>NaN</td>
<td>5</td>
<td>1</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>B</th>
<td>5</td>
<td>5</td>
<td>4</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>C</th>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>2</td>
<td>4</td>
<td>5</td>
<td>NaN</td>
</tr>
<tr>
<th>D</th>
<td>NaN</td>
<td>3</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
In practice, the matrix would be even **sparser**, with the typical user rating only a tiny fraction of all available items.
The **goal** of a recommendation system is to **predict the blanks** in the utility matrix.
+ slight variations in many applications:
- it is not necessary to predict every blank entry; it is enough to discover some entries in each row that are likely to be high.
- it is not necessary to find all items with the highest expected ratings; a large subset of those suffices.
#### The Long Tail
physical institutions | online institutions
---- | -----
provide only the most popular items | provide the entire range of items
the long tail forces online institutions to recommend items to individual users:
1. It is not possible to present all available items to the user.
2. Neither can we expect users to have heard of each of the items they might like.
#### Applications of Recommendation Systems
1. Product Recommendations
2. Movie Recommendations
3. News Articles
#### Populating the Utility Matrix
how to discover the value users place on items:
1. We can ask users to rate items.
cons: users are unwilling to do this, so the sample is biased toward the small fraction of people who respond.
2. We can make inferences from users' behavior.
eg: items purchased/viewed/rated.
### 9.2 Content-Based Recommendations
#### 9.2.1 Item Profiles
a record representing important characteristics of items.
##### Discovering Features
1. for Documents
idea: identify the words that characterize the topic of a document.
namely, we expect a set of words to express the subject or main ideas of the document.
1. eliminate stop words.
2. compute the TF.IDF score for each remaining word in the document.
3. take as the features of a document the $n$ words with the highest TF.IDF scores (a small sketch follows after this list).
to measure the similarity of two documents, the distance measures we could use are:
1. Jaccard distance
2. cosine distance
cosine distance of vectors is not affected by components in which both vectors have 0.
2. for Images
invite users to tag the items.
cons: users are unwilling to do this $\implies$ there are not enough tags (bias).
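A minimal TF.IDF and cosine-distance sketch for the document case above; the toy documents and stop-word set are illustrative assumptions, not data from the book.

```python
from collections import Counter

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "cats and dogs are pets"]          # hypothetical toy corpus
stop_words = {"the", "on", "and", "are"}

def tf_idf(docs, stop_words):
    """Standard TF.IDF: TF = f_ij / max_k f_kj, IDF = log2(N / n_i)."""
    tokenized = [[w for w in d.split() if w not in stop_words] for d in docs]
    df = Counter(w for doc in tokenized for w in set(doc))   # document frequency
    n_docs = len(docs)
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        max_tf = float(max(tf.values()))
        scores.append({w: (tf[w] / max_tf) * np.log2(float(n_docs) / df[w]) for w in tf})
    return scores

scores = tf_idf(docs, stop_words)
vocab = sorted(set(w for s in scores for w in s))
V = np.array([[s.get(w, 0.0) for w in vocab] for s in scores])   # document-by-word matrix

def cosine_distance(u, v):
    return 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_distance(V[0], V[1]))   # docs 0 and 1 share "sat" -> distance < 1
print(cosine_distance(V[0], V[2]))   # no shared words -> distance 1
```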
##### generalize feature vector
1. feature is discrete. $\to$ boolean value.
2. feature is numerical. $\to$ normalization.
#### 9.2.5 User Profiles
create vectors with the same components as the item profiles to describe the user's preferences.
They can be derived from the utility matrix and the item profiles:
1. normalize the utility matrix ($[-1,1]$ for cosine distance).
2. value in the user profile = utility value $\times$ the corresponding item vectors, averaged over the items the user rated.
```python
# example 9.4
users_name = ['U', 'V']
items_name = ['F{}'.format(x) for x in range(4)]
features_name = ['Julia Roberts', 'others']
# utility matrix
M_uti = pd.DataFrame([
[3, 4, 5, 0],
[6, 2, 3, 5]
],
index=users_name,
columns=items_name
)
M_uti
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>F0</th>
<th>F1</th>
<th>F2</th>
<th>F3</th>
</tr>
</thead>
<tbody>
<tr>
<th>U</th>
<td>3</td>
<td>4</td>
<td>5</td>
<td>0</td>
</tr>
<tr>
<th>V</th>
<td>6</td>
<td>2</td>
<td>3</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
```python
# item profile
M_item = pd.DataFrame(index=items_name, columns=features_name)
M_item.loc[:, features_name[0]] = 1
M_item = M_item.fillna(value=0)
M_item
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Julia Roberts</th>
<th>others</th>
</tr>
</thead>
<tbody>
<tr>
<th>F0</th>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>F1</th>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>F2</th>
<td>1</td>
<td>0</td>
</tr>
<tr>
<th>F3</th>
<td>1</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
```python
M_uti.apply(lambda x: x - np.mean(x), axis=1)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>F0</th>
<th>F1</th>
<th>F2</th>
<th>F3</th>
</tr>
</thead>
<tbody>
<tr>
<th>U</th>
<td>0</td>
<td>1</td>
<td>2</td>
<td>-3</td>
</tr>
<tr>
<th>V</th>
<td>2</td>
<td>-2</td>
<td>-1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
```python
M_user = M_uti.fillna(value=0).dot(M_item) / 4 #average = sum/len
M_user
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>Julia Roberts</th>
<th>others</th>
</tr>
</thead>
<tbody>
<tr>
<th>U</th>
<td>3</td>
<td>0</td>
</tr>
<tr>
<th>V</th>
<td>4</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
#### 9.2.6 Recommending Items to Users Based on Content
1. to estimate:
$$M_{utility}[user, item] = \operatorname{cosineSimilarity}(M_{user}[user,:], M_{item}[item,:])$$
the more similar the user and item profiles, the higher the priority to recommend (see the sketch after this list).
2. classification algorithms:
Recommend or Not (machine learning), e.g. one decision tree per user:
one classifier per user $\to$ takes too long to construct.
used only for relatively small problem sizes.
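A minimal sketch of option 1, scoring every (user, item) pair with the `M_user` and `M_item` frames from Example 9.4 above (both have features as columns); the `cosine_similarity` helper is my own, not the book's code.

```python
def cosine_similarity(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# score every (user, item) pair by the similarity of their profiles
M_est = pd.DataFrame(
    [[cosine_similarity(M_user.loc[u], M_item.loc[i]) for i in M_item.index]
     for u in M_user.index],
    index=M_user.index, columns=M_item.index)
M_est   # rank each user's row and recommend the highest-scoring unseen items
```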
```python
# exercise 9.2.1
raw_data = [
[3.06, 2.68, 2.92],
[500, 320, 640],
[6, 4, 6]
]
M_item = pd.DataFrame(raw_data, index=['Processor Speed', 'Disk Size', 'Main-Memory Size'], columns=['A', 'B', 'C'])
# items: A, B, C; features: Processor Speed, Disk Size, ...
M_item
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<th>Processor Speed</th>
<td>3.06</td>
<td>2.68</td>
<td>2.92</td>
</tr>
<tr>
<th>Disk Size</th>
<td>500.00</td>
<td>320.00</td>
<td>640.00</td>
</tr>
<tr>
<th>Main-Memory Size</th>
<td>6.00</td>
<td>4.00</td>
<td>6.00</td>
</tr>
</tbody>
</table>
</div>
```python
# exercise 9.2.1
# (d)
M_item.apply(lambda x: x / np.mean(x), axis=1)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<th>Processor Speed</th>
<td>1.060046</td>
<td>0.928406</td>
<td>1.011547</td>
</tr>
<tr>
<th>Disk Size</th>
<td>1.027397</td>
<td>0.657534</td>
<td>1.315068</td>
</tr>
<tr>
<th>Main-Memory Size</th>
<td>1.125000</td>
<td>0.750000</td>
<td>1.125000</td>
</tr>
</tbody>
</table>
</div>
```python
# exercise 9.2.2
# (a)
M_item.apply(lambda x: x - np.mean(x), axis=1)
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<th>Processor Speed</th>
<td>0.173333</td>
<td>-0.206667</td>
<td>0.033333</td>
</tr>
<tr>
<th>Disk Size</th>
<td>13.333333</td>
<td>-166.666667</td>
<td>153.333333</td>
</tr>
<tr>
<th>Main-Memory Size</th>
<td>0.666667</td>
<td>-1.333333</td>
<td>0.666667</td>
</tr>
</tbody>
</table>
</div>
```python
# exercise 9.2.3
M_uti = pd.DataFrame([[4, 2, 5]], index=['user'], columns=['A', 'B', 'C'])
M_uti
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<th>user</th>
<td>4</td>
<td>2</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
```python
# (a)
M_uti_nor = M_uti.apply(lambda x: x - np.mean(x), axis=1)
M_uti_nor
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<th>user</th>
<td>0.333333</td>
<td>-1.666667</td>
<td>1.333333</td>
</tr>
</tbody>
</table>
</div>
```python
# (b)
M_user = M_item.dot(M_uti_nor.T) / 3
M_user
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>user</th>
</tr>
</thead>
<tbody>
<tr>
<th>Processor Speed</th>
<td>0.148889</td>
</tr>
<tr>
<th>Disk Size</th>
<td>162.222222</td>
</tr>
<tr>
<th>Main-Memory Size</th>
<td>1.111111</td>
</tr>
</tbody>
</table>
</div>
```python
logger.setLevel('WARN')
def create_user_profile(utility_matrix, item_profile):
"""Create user profile by combining utility matrix with item profile in 9.2.5 ."""
assert np.array_equal(utility_matrix.columns, item_profile.columns), \
"utility matrix should keep same columns name with item profile."
logger.info('utility_matrix: \n{}\n'.format(utility_matrix))
M_uti_notnull = np.ones(utility_matrix.shape)
M_uti_notnull[utility_matrix.isnull().values] = 0
logger.info('utility_matrix_isnull: \n{}\n'.format(M_uti_notnull))
logger.info('utility_matrix: \n{}\n'.format(item_profile))
M_item_notnull = np.ones(item_profile.shape)
M_item_notnull[item_profile.isnull().values] = 0
logger.info('utility_matrix_isnull: \n{}\n'.format(M_item_notnull))
utility_matrix = utility_matrix.fillna(value=0)
item_profile = item_profile.fillna(value=0)
M_user = item_profile.dot(utility_matrix.T).values / np.dot(M_item_notnull, M_uti_notnull.T)
M_user[np.isinf(M_user)] = np.nan # solve: divide zero
logger.info('M_user: \n{}\n'.format(M_user))
return pd.DataFrame(M_user, index=item_profile.index, columns=utility_matrix.index)
M_uti = pd.DataFrame([[4, 2, 5], [1, np.nan, 3]], index=['userA', 'userB'], columns=['A', 'B', 'C'])
M_uti_nor = M_uti.apply(lambda x: x - np.mean(x), axis=1)
print('utility matrix: \n{}\n'.format(M_uti_nor))
print('item profile: \n{}\n'.format(M_item))
create_user_profile(M_uti_nor, M_item)
```
utility matrix:
A B C
userA 0.333333 -1.666667 1.333333
userB -1.000000 NaN 1.000000
item profile:
A B C
Processor Speed 3.06 2.68 2.92
Disk Size 500.00 320.00 640.00
Main-Memory Size 6.00 4.00 6.00
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>userA</th>
<th>userB</th>
</tr>
</thead>
<tbody>
<tr>
<th>Processor Speed</th>
<td>0.148889</td>
<td>-0.07</td>
</tr>
<tr>
<th>Disk Size</th>
<td>162.222222</td>
<td>70.00</td>
</tr>
<tr>
<th>Main-Memory Size</th>
<td>1.111111</td>
<td>0.00</td>
</tr>
</tbody>
</table>
</div>
### 9.3 Collaborative Filtering
identifying similar users and recommending what similar users like.
#### refine data
1. rounding the data
eg: ratings of 3, 4 and 5 become 1; lower ratings become blank.
2. normalizing ratings
subtracting from each rating the average rating of that user.
```python
# Fig 9.4
M_9_1
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>HP1</th>
<th>HP2</th>
<th>HP3</th>
<th>TW</th>
<th>SW1</th>
<th>SW2</th>
<th>SW3</th>
</tr>
</thead>
<tbody>
<tr>
<th>A</th>
<td>4</td>
<td>NaN</td>
<td>NaN</td>
<td>5</td>
<td>1</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>B</th>
<td>5</td>
<td>5</td>
<td>4</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>C</th>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>2</td>
<td>4</td>
<td>5</td>
<td>NaN</td>
</tr>
<tr>
<th>D</th>
<td>NaN</td>
<td>3</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
```python
# rounding the data
M_round = M_9_1.copy()
M_round[M_9_1 <= 2] = np.nan
M_round[M_9_1 > 2] = 1
M_round
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>HP1</th>
<th>HP2</th>
<th>HP3</th>
<th>TW</th>
<th>SW1</th>
<th>SW2</th>
<th>SW3</th>
</tr>
</thead>
<tbody>
<tr>
<th>A</th>
<td>1</td>
<td>NaN</td>
<td>NaN</td>
<td>1</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>B</th>
<td>1</td>
<td>1</td>
<td>1</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>C</th>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>1</td>
<td>1</td>
<td>NaN</td>
</tr>
<tr>
<th>D</th>
<td>NaN</td>
<td>1</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
```python
# normalizing ratings
M_norm = M_9_1.apply(lambda x: x - np.mean(x), axis=1)
M_norm
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>HP1</th>
<th>HP2</th>
<th>HP3</th>
<th>TW</th>
<th>SW1</th>
<th>SW2</th>
<th>SW3</th>
</tr>
</thead>
<tbody>
<tr>
<th>A</th>
<td>0.666667</td>
<td>NaN</td>
<td>NaN</td>
<td>1.666667</td>
<td>-2.333333</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>B</th>
<td>0.333333</td>
<td>0.333333</td>
<td>-0.666667</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<th>C</th>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>-1.666667</td>
<td>0.333333</td>
<td>1.333333</td>
<td>NaN</td>
</tr>
<tr>
<th>D</th>
<td>NaN</td>
<td>0.000000</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
#### 9.3.2 The Duality of Similarity
1. We can use information about users to recommend items, whereas even if we find pairs of similar items, it takes an **additional step** in order to recommend items to users.
+ find the $n$ most similar users $\to$ predict the rating of item $I$ for user $U$.
normalize the utility matrix first, then add $U$'s own average back:
\begin{align}
M[U,I] &= \operatorname{Ave}(M[U,:]) + \operatorname{Ave}_{V \in \operatorname{sim}_n(U)}\bigl(M[V,I] - \operatorname{Ave}(M[V,:])\bigr)
\end{align}
+ find the $m$ most similar items $\to$ predict the rating of item $I$ for user $U$.
$$M[U,I] = \operatorname{Ave}\bigl(M[U, \operatorname{sim}_m(I)]\bigr)$$
+ in order to recommend items to user $U$, we need to estimate all or most of the entries in $M[U,:]$ (a minimal user-user prediction sketch follows after this list).
**tradeoff**:
1. user-user: find similar users once, then directly get predicted values for all potential items.
item-item: find similar items, but we still need to **compute the estimate for every item one by one (the additional step)** to fill $M[U,:]$.
2. item-item similarity often provides more **reliable** information due to the simplicity of items (genre).
+ **precompute** preferred items for each user.
the utility matrix evolves slowly $\implies$ we can recompute infrequently and assume it remains fixed between recomputations.
2. Items tend to be classifiable in simple terms (eg: genre), whereas the individuals are complex.
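A minimal user-user prediction sketch following the first formula above, applied to the Fig 9.4 matrix `M_9_1` and its normalized form `M_norm` computed above; the helper function and the choice of $n=2$ are my own assumptions, not the book's code.

```python
def predict_user_user(M_raw, M_norm, user, item, n=2):
    """Predict M[user, item]: the user's own average plus the average normalized
    rating of the n most similar users (cosine similarity) who rated `item`."""
    filled = M_norm.fillna(value=0).astype(float)
    u = filled.loc[user].values
    sims = {}
    for other in M_raw.index:
        if other == user or pd.isnull(M_raw.loc[other, item]):
            continue                          # keep only other users who rated `item`
        v = filled.loc[other].values
        sims[other] = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    nearest = sorted(sims, key=sims.get, reverse=True)[:n]
    if not nearest:
        return np.nan
    return M_raw.loc[user].mean() + M_norm.loc[nearest, item].astype(float).mean()

predict_user_user(M_9_1.astype(float), M_norm, 'A', 'SW2')
```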
#### 9.3.3 Clustering Users and Items
A hierarchical approach is preferred (a small sketch follows after the prediction rule below):
1. leave many clusters unmerged at first.
2. cluster items, and average the corresponding values in the utility matrix.
3. cluster users, and average as well.
4. repeat several times if we like.
Predict $M[U,I]$:
1. $U \in C$, and $I \in D$.
2. predict:
\begin{equation}
M[U,I] = \begin{cases}
M_{revised}[C,D] & \quad \text{if it exists,} \\
\text{estimate using similar users/items} & \quad \text{otherwise}
\end{cases}
\end{equation}
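A minimal, self-contained sketch of one clustering round on the Fig 9.4 matrix `M_9_1`: hierarchically cluster items and users, then average the known ratings inside each (user cluster, item cluster) cell to get $M_{revised}$. The cluster counts and the cosine/average-linkage choices are illustrative assumptions.

```python
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_labels(vectors, n_clusters):
    """Hierarchically cluster the rows of `vectors` and cut the tree into n_clusters groups."""
    Z = linkage(vectors, method='average', metric='cosine')
    return fcluster(Z, t=n_clusters, criterion='maxclust')

M_filled = M_9_1.astype(float).fillna(value=0)        # blanks -> 0, only for clustering
item_cluster = cluster_labels(M_filled.T.values, 4)   # group the 7 movies into 4 clusters
user_cluster = cluster_labels(M_filled.values, 2)     # group the 4 users into 2 clusters

# average the original (non-blank) ratings inside each cluster pair
M_revised = (M_9_1.astype(float)
             .groupby(user_cluster).mean()            # average users within a cluster
             .T.groupby(item_cluster).mean().T)       # then average items within a cluster
M_revised
```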
```python
# Fig 9.8
raw_data = [
[4, 5, np.nan, 5, 1, np.nan, 3, 2],
[np.nan, 3, 4, 3, 1, 2, 1, np.nan],
[2, np.nan, 1, 3, np.nan, 4, 5, 3]
]
import string
M_uti = pd.DataFrame(raw_data, index=list(string.ascii_uppercase[:3]), columns=list(string.ascii_lowercase[:8]))
M_uti
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
<th>e</th>
<th>f</th>
<th>g</th>
<th>h</th>
</tr>
</thead>
<tbody>
<tr>
<th>A</th>
<td>4</td>
<td>5</td>
<td>NaN</td>
<td>5</td>
<td>1</td>
<td>NaN</td>
<td>3</td>
<td>2</td>
</tr>
<tr>
<th>B</th>
<td>NaN</td>
<td>3</td>
<td>4</td>
<td>3</td>
<td>1</td>
<td>2</td>
<td>1</td>
<td>NaN</td>
</tr>
<tr>
<th>C</th>
<td>2</td>
<td>NaN</td>
<td>1</td>
<td>3</td>
<td>NaN</td>
<td>4</td>
<td>5</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
```python
logger.setLevel('WARN')
# exercise 9.3.1
from scipy.spatial.distance import jaccard, cosine
from itertools import combinations
def calc_distance_among_matrix(M, func_dis):
for c in list(combinations(M.index, 2)):
logger.info('c: {}'.format(c))
u, v = M.loc[c[0]], M.loc[c[1]]
logger.info('\n u:{},\n v:{}\n'.format(u.values,v.values))
print('{} {}: {}'.format(c, func_dis.__name__, func_dis(u,v)))
# (a)
calc_distance_among_matrix(M_uti.notnull(), jaccard)
```
('A', 'B') jaccard: 0.5
('A', 'C') jaccard: 0.5
('B', 'C') jaccard: 0.5
```python
# (b)
calc_distance_among_matrix(M_uti.fillna(value=0), cosine)
```
('A', 'B') cosine: 0.398959235991
('A', 'C') cosine: 0.385081306188
('B', 'C') cosine: 0.486129880223
```python
# (c)
M_tmp = M_uti.copy()
M_tmp[M_uti < 3] = 0
M_tmp[M_uti >= 3] = 1
calc_distance_among_matrix(M_tmp, jaccard)
```
('A', 'B') jaccard: 0.714285714286
('A', 'C') jaccard: 0.75
('B', 'C') jaccard: 0.875
```python
# (d)
calc_distance_among_matrix(M_tmp.fillna(value=0), cosine)
```
('A', 'B') cosine: 0.42264973081
('A', 'C') cosine: 0.5
('B', 'C') cosine: 0.711324865405
```python
# (e)
M_uti_nor = M_uti.apply(lambda x: x - np.mean(x), axis=1)
M_uti_nor
```
<div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
<th>e</th>
<th>f</th>
<th>g</th>
<th>h</th>
</tr>
</thead>
<tbody>
<tr>
<th>A</th>
<td>0.666667</td>
<td>1.666667</td>
<td>NaN</td>
<td>1.666667</td>
<td>-2.333333</td>
<td>NaN</td>
<td>-0.333333</td>
<td>-1.333333</td>
</tr>
<tr>
<th>B</th>
<td>NaN</td>
<td>0.666667</td>
<td>1.666667</td>
<td>0.666667</td>
<td>-1.333333</td>
<td>-0.333333</td>
<td>-1.333333</td>
<td>NaN</td>
</tr>
<tr>
<th>C</th>
<td>-1.000000</td>
<td>NaN</td>
<td>-2.000000</td>
<td>0.000000</td>
<td>NaN</td>
<td>1.000000</td>
<td>2.000000</td>
<td>0.000000</td>
</tr>
</tbody>
</table>
</div>
```python
# (f)
calc_distance_among_matrix(M_uti_nor.fillna(value=0), cosine)
```
('A', 'B') cosine: 0.415693452532
('A', 'C') cosine: 1.11547005384
('B', 'C') cosine: 1.73957399695
```python
# exercise 9.3.2
#todo
```
### 9.4 Dimensionality Reduction
UV-decomposition: $$M = U \times V$$
measure: RMSE (root-mean-square error). A minimal gradient-descent sketch is given at the end of this section.
#### Building a Complete UV-Decomposition Algorithm
1. Preprocessing of the matrix $M$.
**normalization**:
1. subtract the average rating of user $i$, then the average rating of item $j$.
2. subtract in the other order: item first, then user.
3. subtract half the average of the item and half the average of the user.
2. Initializing $U$ and $V$.
choice: give the elements of $UV$ the average of the nonblank elements of $M$
$\implies$ each element of $U$ and $V$ should be $\sqrt{a/d}$,
where $a$ is the average nonblank element of $M$ and $d$ is the length of the short sides of $U$ and $V$.
to improve the chance that the local minimum we reach is the global minimum:
1. vary the initial values of $U$ and $V$:
perturb the value $\sqrt{a/d}$ randomly.
2. vary the way we seek the optimum.
3. Performing the Optimization.
different optimization path:
choose a permutation of the elements and follow that order for every round.
Gradient Descent $\to$ stochastic gradient descent.
4. Converging to a Minimum.
track the amount of improvement in the RMSE obtained.
stop condition:
1. stop when that improvement in one round falls below a threshold.
2. stop when the maximum improvement during a round is below a threshold.
##### Avoiding Overfitting
solutions:
1. only move the value of a component a fraction of the way from its current value toward its optimized value.
2. Stop before the process has converged.
3. Take several different $UV$ decompositions, and average their predictions.
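A minimal UV-decomposition sketch; it uses plain full-matrix gradient descent on the known entries rather than the book's element-at-a-time update formulas, and the rank $d$, learning rate, and step count are illustrative assumptions.

```python
def uv_decompose(M, d=2, steps=2000, lr=0.01):
    """Factor the non-blank part of M into U (n x d) and V (d x m), minimising RMSE."""
    M = np.asarray(M, dtype=float)
    mask = ~np.isnan(M)                            # only non-blank entries contribute
    a = np.nanmean(M)
    U = np.full((M.shape[0], d), np.sqrt(a / d))   # initialise so that UV ~ average of M
    V = np.full((d, M.shape[1]), np.sqrt(a / d))
    U += 0.01 * np.random.randn(*U.shape)          # small perturbation to break symmetry
    V += 0.01 * np.random.randn(*V.shape)
    for _ in range(steps):
        E = np.where(mask, M - U.dot(V), 0.0)      # error on known entries only
        U, V = U + lr * E.dot(V.T), V + lr * U.T.dot(E)
    E = np.where(mask, M - U.dot(V), 0.0)
    rmse = np.sqrt((E ** 2).sum() / mask.sum())
    return U, V, rmse

U, V, rmse = uv_decompose(M_9_1.values, d=2)
print('RMSE on known entries: {:.4f}'.format(rmse))
pd.DataFrame(U.dot(V), index=M_9_1.index, columns=M_9_1.columns).round(2)  # filled-in estimates
```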
```python
# exercise 9.4.6
#todo
```
### 9.5 The Netflix Challenge
some facts:
1. CineMatch was not a very good algorithm.
2. The UV-decomposition algorithm gave a 7\% improvement over CineMatch when coupled with normalization and a few other tricks.
3. Combining different algorithms is a preferred strategy.
4. Genre and other information from IMDB turned out not to be useful.
5. Time of rating turned out to be useful: upward or downward slope with time.
todo: read the papers introduced in the chapter.
|
\subsection{Standardising file types}
|