# Fin Loading Analysis
This notebook is meant to estimate the loading we may expect on the LV3 fins. Currently, it assumes a constant lifting pressure on the fins. That assumption is used to find the ratio of bending to shear at the root of the fin. With that information, we can find the distance from the root at which a point load should be applied in order to simulate the real loading at the root of the fin. This is needed to test the strength of whatever design is used to attach the fins to LV3.
I would have liked (and still would like) to find more detailed information on the kinds of loading we can expect on the LV3 fins. (We need at least an estimate of the average pressure to figure out our factor of safety.) [This random course worksheet from MIT][MITbending] suggests that it's reasonable to assume a constant lifting pressure on wings, so I'm assuming a constant lifting pressure on the fins. However, I still don't know what *magnitude* to expect for that pressure.
[MITbending]: https://ocw.mit.edu/courses/aeronautics-and-astronautics/16-01-unified-engineering-i-ii-iii-iv-fall-2005-spring-2006/systems-labs-06/spl10.pdf
## Nomenclature
variable | math symbol | name/description | units
---|---|---|---
y | $y$ | span-wise coordinate (distance from the root) | inches
q | $q$ | beam loading (shear force per length) | $lb_f/in$
c | $c$ | chord (distance from leading to trailing edge at some $y$) | inches
F | $F$ | shear force | pounds
M | $M$ | bending moment | $lb_f in$
cr | $C_r$ | exposed root chord | inches
ct | $C_t$ | tip chord | inches
bst | $b^*$ | exposed span (a single fin is $b^*/2$ tall) | inches
kq | $k_q$ | $q$ fudge factor (average lifting pressure) | $lb_f/in^2$
kF | $k_F$ | $F$ integration constant (root shear) | $lb_f$
kM | $k_M$ | $M$ integration constant (root bending) | $lb_f in$
## Governing equations
* $q=k_q c$ constant lifting pressure
* $F = \int q \, dy + k_F$
* BC1: $F=0$ when $y=b^*/2$ (free tip, Euler bending)
* $M = \int F \, dy + k_M$
* BC2: $M=0$ when $y=b^*/2$ (free tip, Euler bending)
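As a quick hand check on the SymPy results below (same constant-pressure assumption, with the chord tapering linearly from $C_r$ at the root to $C_t$ at the tip), the root-reaction magnitudes work out to

\begin{align}
\lvert F_\mathrm{root} \rvert &= k_q \, \frac{b^*}{2} \, \frac{C_r + C_t}{2} && \text{(pressure times exposed area of one fin)} \\
\lvert M_\mathrm{root} \rvert &= k_q \left(\frac{b^*}{2}\right)^{2} \frac{C_r + 2 C_t}{6} \\
\frac{\lvert M_\mathrm{root} \rvert}{\lvert F_\mathrm{root} \rvert} &= \frac{b^*}{2} \cdot \frac{C_r + 2 C_t}{3 \, (C_r + C_t)} && \text{(spanwise centroid of the trapezoidal load)}
\end{align}

That last ratio is the spanwise location of the center of pressure, which is where the equivalent point load belongs.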
## Calculation
### setup environment and parameters
```python
from sympy import *
init_printing()
%matplotlib inline
y, q, c, F, M, cr, ct, bst, kq, kF, kM = symbols('y q c F M cr ct bst kq kF kM')
c = cr + y*(ct - cr)/(bst/2) # chord tapers linearly from cr at the root (y=0) to ct at the tip (y=bst/2)
q = kq*c # constant lifting pressure
LV3parms = {cr: 18, ct: 5, kq: 1, bst: 6.42*2} # in, in, lbf/in^2, in; parameters for LV3 fins
```
### find shear
```python
F = integrate(q, y) + kF
kF = solve(F.subs({y: bst/2}), kF)[0] # BC: F=0 when y=bst/2
F = integrate(q, y) + kF # plug back into F
simplify(F)
```
```python
plot(F.subs(LV3parms), (y, 0, LV3parms[bst]/2))
```
```python
root_shear = lambdify(y, F.subs(LV3parms))(0)
root_shear
```
### find bending
```python
M = integrate(F, y) + kM
kM = solve(M.subs({y: bst/2}), kM)[0] # BC: M=0 when y=bst/2
M = integrate(F, y) + kM # plug back into M
simplify(M)
```
```python
plot(M.subs(LV3parms), (y, 0, LV3parms[bst]/2))
```
```python
root_bending = lambdify(y, M.subs(LV3parms))(0)
root_bending
```
### find point load location
When testing the lateral strength of the fins, we will apply a point load to an aluminum beam attached to a module. If we vary the location of that point load, we vary the ratio of shear to bending at the root of that beam. That ratio needs to match that of the real LV3 fins in order for the test to be meaningful.
For a point load $F$ applied at span $y_*$:
$M=y_* F$.
So, we get the location of the point load by taking the ratio of the root bending and root shear:
```python
root_bending/root_shear
```
So, the point load should be applied about 2.6 inches from the root (the magnitude of the bending/shear ratio, which is the spanwise center of pressure of the fin).
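As a minimal cross-check (reusing the SymPy symbols and `LV3parms` defined above), the same number should fall out of the closed-form spanwise centroid of the trapezoidal load derived in the governing-equations section:
```python
# Spanwise centroid of the trapezoidal lift distribution, measured from the root.
# Its value should match the magnitude of root_bending/root_shear computed above.
centroid = (bst/2)*(cr + 2*ct)/(3*(cr + ct))
centroid.subs(LV3parms)  # approximately 2.6 in for the LV3 parameters
```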
|
lemma continuous_on_divide[continuous_intros]: fixes f :: "'a::topological_space \<Rightarrow> 'b::real_normed_field" assumes "continuous_on s f" "continuous_on s g" and "\<forall>x\<in>s. g x \<noteq> 0" shows "continuous_on s (\<lambda>x. (f x) / (g x))" |
#Functions for building, initiating models and submodels.
function build_gwas()
return Model(0,0, 0,0, 0, 0, 0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0)
end
function initiate_model(model, interval, setsize, outputfilename, printchain, type)
model.nlayer1 = 1
model.nlayer2 = 0
model.losoInterval = Float64(interval)
model.layerlimit = 0.75 * model.losoInterval
model.inlayer1 = true
model.setsize = setsize
model.notcomplete = true
model.outputfilename = outputfilename
model.printchain = printchain
model.submodelcount = 0
model.futures = Future[]
#
try
model.m = size(model.snpsOld.binarygenotypes, 2)
model.maxbp = parse(Float64, model.snpsOld.markerInfo[model.m,2])
catch
println("Model already initiated. Please build model again.")
end
#
model.phenoOrder2, model.genoOrder = get_orders(model.phenotypes.IID, model.snpsOld.IID) # keep phenoOrder and phenoOrder2 separate as will have to use both when reading new adjusted phenotypes...
get_phenotype_intersect!(model.phenotypes, model.phenoOrder2)
model.weightOrder = model.phenoOrder2 # since pheno and weights have already been sorted/ordered. Weights won't need to be ordered again
get_weight_intersect!(model.weights, model.weightOrder)
model.snps = get_genotype_intersect_bin(model.snpsOld, model.genoOrder)
model.snpsOld = nothing
model.n = size(model.phenotypes.IID,1)
model.k = size(model.phenotypes.trait,2)
model.maxlayerlimit = get_maxlayerlimit(model)
model.type = type
end
function inititate_submodel!(submodel)
submodel.rng = MersenneTwister()
submodel.K = get_1loci_k()
submodel.output_stats = zeros(Float64,submodel.m, 10)
if submodel.type == "additive"
submodel.output_stats = zeros(Float64, submodel.m, 6)
submodel.K = 0
end
submodel.yty = (submodel.phenotype.trait .* submodel.weights.rinverse)'submodel.phenotype.trait #(y .* rinv)'y' where rinv is nx1 vector - faster than y'Rinv*y, where Rinv is a diag matrix. Same allocations
end
# read directory of yloso phenotype files and find largest, calculate max layer limit
function get_maxlayerlimit(model)
phenotypedir = dirname(model.phenotypesprefix)
phenotypefiles = readdir(phenotypedir)
maxn = 1
maxlayer = 1
phenotypefiles = phenotypefiles[occursin.(basename(model.phenotypesprefix), phenotypefiles)]
for file in phenotypefiles
n,layer = get_n_and_layer(file)
if n > maxn
maxn = n
maxlayer = layer
elseif n == maxn
if layer > maxlayer
maxlayer = layer
maxn = n
end
end
end
maxlayerlimit = maxn * 10000000
if maxlayer == 2
maxlayerlimit = maxlayerlimit + 5000000
end
return maxlayerlimit
end
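# Filenames are assumed to follow "<prefix>.layer<L>.n<N>.<ext>" (e.g. a hypothetical "pheno.layer2.n5.txt"),
# so n and layer are recovered from the two trailing dot-separated tokens.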
function get_n_and_layer(x)
sub = x[1:findlast(isequal('.'),x)-1]
n = parse(Int64, split(sub[(findlast(isequal('.'),sub)+1):end], "n")[2])
sub2 = sub[1:findlast(isequal('.'),sub)-1]
layer = parse(Int64, split(sub2[(findlast(isequal('.'),sub2)+1):end], "layer")[2])
return n, layer
end |
[STATEMENT]
lemma add_diff_assoc_eint: "z \<le> y \<Longrightarrow> x + (y - z) = x + y - (z::eint)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. z \<le> y \<Longrightarrow> x + (y - z) = x + y - z
[PROOF STEP]
by(cases x)(auto simp add: diff_eint_def split: eint.split) |
Require Import Crypto.Specific.Framework.RawCurveParameters.
Require Import Crypto.Util.LetIn.
(***
Modulus : 2^336 - 3
Base: 32
***)
Definition curve : CurveParameters :=
{|
sz := 11%nat;
base := 32;
bitwidth := 32;
s := 2^336;
c := [(1, 3)];
carry_chains := None;
a24 := None;
coef_div_modulus := None;
goldilocks := None;
karatsuba := None;
montgomery := true;
freeze := Some false;
ladderstep := false;
mul_code := None;
square_code := None;
upper_bound_of_exponent_loose := None;
upper_bound_of_exponent_tight := None;
allowable_bit_widths := None;
freeze_extra_allowable_bit_widths := None;
modinv_fuel := None
|}.
Ltac extra_prove_mul_eq _ := idtac.
Ltac extra_prove_square_eq _ := idtac.
|
/-
Copyright (c) 2017 Mario Carneiro. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro
-/
import order.rel_iso.basic
import logic.embedding.set
/-!
# Interactions between relation homomorphisms and sets
> THIS FILE IS SYNCHRONIZED WITH MATHLIB4.
> Any changes to this file require a corresponding PR to mathlib4.
It is likely that there are better homes for many of these statements,
in files further down the import graph.
-/
open function
universes u v w
variables {α β γ δ : Type*}
{r : α → α → Prop} {s : β → β → Prop} {t : γ → γ → Prop} {u : δ → δ → Prop}
namespace rel_hom_class
variables {F : Type*}
lemma map_inf [semilattice_inf α] [linear_order β]
[rel_hom_class F ((<) : β → β → Prop) ((<) : α → α → Prop)]
(a : F) (m n : β) : a (m ⊓ n) = a m ⊓ a n :=
(strict_mono.monotone $ λ x y, map_rel a).map_inf m n
lemma map_sup [semilattice_sup α] [linear_order β]
[rel_hom_class F ((>) : β → β → Prop) ((>) : α → α → Prop)]
(a : F) (m n : β) : a (m ⊔ n) = a m ⊔ a n :=
@map_inf αᵒᵈ βᵒᵈ _ _ _ _ _ _ _
end rel_hom_class
namespace rel_iso
end rel_iso
/-- `subrel r p` is the inherited relation on a subset. -/
def subrel (r : α → α → Prop) (p : set α) : p → p → Prop :=
(coe : p → α) ⁻¹'o r
@[simp] theorem subrel_val (r : α → α → Prop) (p : set α)
{a b} : subrel r p a b ↔ r a.1 b.1 := iff.rfl
namespace subrel
/-- The relation embedding from the inherited relation on a subset. -/
protected def rel_embedding (r : α → α → Prop) (p : set α) :
subrel r p ↪r r := ⟨embedding.subtype _, λ a b, iff.rfl⟩
@[simp] theorem rel_embedding_apply (r : α → α → Prop) (p a) :
subrel.rel_embedding r p a = a.1 := rfl
instance (r : α → α → Prop) [is_well_order α r] (p : set α) : is_well_order p (subrel r p) :=
rel_embedding.is_well_order (subrel.rel_embedding r p)
instance (r : α → α → Prop) [is_refl α r] (p : set α) : is_refl p (subrel r p) :=
⟨λ x, @is_refl.refl α r _ x⟩
instance (r : α → α → Prop) [is_symm α r] (p : set α) : is_symm p (subrel r p) :=
⟨λ x y, @is_symm.symm α r _ x y⟩
instance (r : α → α → Prop) [is_trans α r] (p : set α) : is_trans p (subrel r p) :=
⟨λ x y z, @is_trans.trans α r _ x y z⟩
instance (r : α → α → Prop) [is_irrefl α r] (p : set α) : is_irrefl p (subrel r p) :=
⟨λ x, @is_irrefl.irrefl α r _ x⟩
end subrel
/-- Restrict the codomain of a relation embedding. -/
def rel_embedding.cod_restrict (p : set β) (f : r ↪r s) (H : ∀ a, f a ∈ p) : r ↪r subrel s p :=
⟨f.to_embedding.cod_restrict p H, λ _ _, f.map_rel_iff'⟩
@[simp] theorem rel_embedding.cod_restrict_apply (p) (f : r ↪r s) (H a) :
rel_embedding.cod_restrict p f H a = ⟨f a, H a⟩ := rfl
|
lemma eventually_nhds_in_nhd: "x \<in> interior s \<Longrightarrow> eventually (\<lambda>y. y \<in> s) (nhds x)" |
Require Import SfLib Ty.
(** simple syntax definition for the calculus **)
Inductive term : Type :=
| tm_false : term
| tm_true : term
| tm_if : term -> term -> term -> term
| tm_var : id -> term
| tm_app : term -> term -> term
| tm_abs : id -> ty -> term -> term
| tm_trust : term -> term
| tm_distrust : term -> term
| tm_check : term -> term.
Tactic Notation "term_cases" tactic(first) ident(c) :=
first ; [Case_aux c "tm_false" | Case_aux c "tm_true" | Case_aux c "tm_if" |
Case_aux c "tm_var" | Case_aux c "tm_app" | Case_aux c "tm_abs" |
Case_aux c "tm_trust" | Case_aux c "tm_distrust" | Case_aux c "tm_check"].
|
/* -----------------------------------------------------------------------------
* Copyright 2021 Jonathan Haigh
* SPDX-License-Identifier: MIT
* ---------------------------------------------------------------------------*/
#ifndef SQ_INCLUDE_GUARD_system_linux_SqParamSchemaImpl_h_
#define SQ_INCLUDE_GUARD_system_linux_SqParamSchemaImpl_h_
#include "core/typeutil.h"
#include "system/SqParamSchema.gen.h"
#include "system/schema.h"
#include <gsl/gsl>
namespace sq::system::linux {
class SqParamSchemaImpl : public SqParamSchema<SqParamSchemaImpl> {
public:
explicit SqParamSchemaImpl(const ParamSchema &param_schema);
SQ_ND Result get_name() const;
SQ_ND Result get_doc() const;
SQ_ND Result get_index() const;
SQ_ND Result get_type() const;
SQ_ND Result get_required() const;
SQ_ND Result get_default_value() const;
SQ_ND Result get_default_value_doc() const;
SQ_ND Primitive to_primitive() const override;
private:
gsl::not_null<const ParamSchema *> param_schema_;
};
} // namespace sq::system::linux
#endif // SQ_INCLUDE_GUARD_system_linux_SqParamSchemaImpl_h_
|
lemma hol_pal_lem3: assumes S: "convex S" "open S" and abc: "a \<in> S" "b \<in> S" "c \<in> S" and "d \<noteq> 0" and lek: "d \<bullet> a \<le> k" and holf1: "f holomorphic_on {z. z \<in> S \<and> d \<bullet> z < k}" and holf2: "f holomorphic_on {z. z \<in> S \<and> k < d \<bullet> z}" and contf: "continuous_on S f" shows "contour_integral (linepath a b) f + contour_integral (linepath b c) f + contour_integral (linepath c a) f = 0" |
/-
Copyright (c) 2021 Chris Hughes. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Chris Hughes
-/
import group_theory.group_action.basic
import group_theory.subgroup.basic
/-!
# Conjugation action of a group on itself
This file defines the conjugation action of a group on itself. See also `mul_aut.conj` for
the definition of conjugation as a homomorphism into the automorphism group.
## Main definitions
A type alias `conj_act G` is introduced for a group `G`. The group `conj_act G` acts on `G`
by conjugation. The group `conj_act G` also acts on any normal subgroup of `G` by conjugation.
## Implementation Notes
The scalar action defined in this file can also be written using `mul_aut.conj g • h`. This
has the advantage of not using the type alias `conj_act`, but the downside of this approach
is that some theorems about the group actions will not apply, since
`mul_aut.conj g • h` describes an action of `mul_aut G` on `G`, and not an action of `G`.
-/
variables (G : Type*)
/-- A type alias for a group `G`. `conj_act G` acts on `G` by conjugation -/
def conj_act : Type* := G
namespace conj_act
open mul_action subgroup
variable {G}
instance : Π [group G], group (conj_act G) := id
instance : Π [div_inv_monoid G], div_inv_monoid (conj_act G) := id
instance : Π [group_with_zero G], group_with_zero (conj_act G) := id
instance : Π [fintype G], fintype (conj_act G) := id
@[simp] lemma card [fintype G] : fintype.card (conj_act G) = fintype.card G := rfl
section div_inv_monoid
variable [div_inv_monoid G]
instance : inhabited (conj_act G) := ⟨1⟩
/-- Reinterpret `g : conj_act G` as an element of `G`. -/
def of_conj_act : conj_act G ≃* G := ⟨id, id, λ _, rfl, λ _, rfl, λ _ _, rfl⟩
/-- Reinterpret `g : G` as an element of `conj_act G`. -/
def to_conj_act : G ≃* conj_act G := of_conj_act.symm
/-- A recursor for `conj_act`, for use as `induction x using conj_act.rec` when `x : conj_act G`. -/
protected def rec {C : conj_act G → Sort*} (h : Π g, C (to_conj_act g)) : Π g, C g := h
@[simp] lemma of_mul_symm_eq : (@of_conj_act G _).symm = to_conj_act := rfl
@[simp] lemma to_mul_symm_eq : (@to_conj_act G _).symm = of_conj_act := rfl
@[simp] lemma to_conj_act_of_conj_act (x : conj_act G) : to_conj_act (of_conj_act x) = x := rfl
@[simp] lemma of_conj_act_to_conj_act (x : G) : of_conj_act (to_conj_act x) = x := rfl
@[simp] lemma of_conj_act_one : of_conj_act (1 : conj_act G) = 1 := rfl
@[simp] lemma to_conj_act_one : to_conj_act (1 : G) = 1 := rfl
@[simp] lemma of_conj_act_inv (x : conj_act G) : of_conj_act (x⁻¹) = (of_conj_act x)⁻¹ := rfl
@[simp] lemma to_conj_act_inv (x : G) : to_conj_act (x⁻¹) = (to_conj_act x)⁻¹ := rfl
@[simp] lemma of_conj_act_mul (x y : conj_act G) :
of_conj_act (x * y) = of_conj_act x * of_conj_act y := rfl
@[simp] lemma to_conj_act_mul (x y : G) : to_conj_act (x * y) =
to_conj_act x * to_conj_act y := rfl
instance : has_scalar (conj_act G) G :=
{ smul := λ g h, of_conj_act g * h * (of_conj_act g)⁻¹ }
lemma smul_def (g : conj_act G) (h : G) : g • h = of_conj_act g * h * (of_conj_act g)⁻¹ := rfl
@[simp] lemma «forall» (p : conj_act G → Prop) :
(∀ (x : conj_act G), p x) ↔ ∀ x : G, p (to_conj_act x) := iff.rfl
end div_inv_monoid
section group_with_zero
variable [group_with_zero G]
@[simp] lemma of_conj_act_zero : of_conj_act (0 : conj_act G) = 0 := rfl
@[simp] lemma to_conj_act_zero : to_conj_act (0 : G) = 0 := rfl
instance : mul_action (conj_act G) G :=
{ smul := (•),
one_smul := by simp [smul_def],
mul_smul := by simp [smul_def, mul_assoc, mul_inv_rev₀] }
end group_with_zero
variables [group G]
instance : mul_distrib_mul_action (conj_act G) G :=
{ smul := (•),
smul_mul := by simp [smul_def, mul_assoc],
smul_one := by simp [smul_def],
one_smul := by simp [smul_def],
mul_smul := by simp [smul_def, mul_assoc] }
lemma smul_eq_mul_aut_conj (g : conj_act G) (h : G) : g • h = mul_aut.conj (of_conj_act g) h := rfl
/-- The set of fixed points of the conjugation action of `G` on itself is the center of `G`. -/
lemma fixed_points_eq_center : fixed_points (conj_act G) G = center G :=
begin
ext x,
simp [mem_center_iff, smul_def, mul_inv_eq_iff_eq_mul]
end
/-- As normal subgroups are closed under conjugation, they inherit the conjugation action
of the underlying group. -/
instance subgroup.conj_action {H : subgroup G} [hH : H.normal] :
has_scalar (conj_act G) H :=
⟨λ g h, ⟨g • h, hH.conj_mem h.1 h.2 (of_conj_act g)⟩⟩
lemma subgroup.coe_conj_smul {H : subgroup G} [hH : H.normal] (g : conj_act G) (h : H) :
↑(g • h) = g • (h : G) := rfl
instance subgroup.conj_mul_distrib_mul_action {H : subgroup G} [hH : H.normal] :
mul_distrib_mul_action (conj_act G) H :=
(subtype.coe_injective).mul_distrib_mul_action H.subtype subgroup.coe_conj_smul
/-- Group conjugation on a normal subgroup. Analogous to `mul_aut.conj`. -/
def _root_.mul_aut.conj_normal {H : subgroup G} [hH : H.normal] : G →* mul_aut H :=
(mul_distrib_mul_action.to_mul_aut (conj_act G) H).comp to_conj_act.to_monoid_hom
@[simp] lemma _root_.mul_aut.conj_normal_apply {H : subgroup G} [H.normal] (g : G) (h : H) :
↑(mul_aut.conj_normal g h) = g * h * g⁻¹ := rfl
@[simp] lemma _root_.mul_aut.conj_normal_symm_apply {H : subgroup G} [H.normal] (g : G) (h : H) :
↑((mul_aut.conj_normal g).symm h) = g⁻¹ * h * g :=
by { change _ * (_)⁻¹⁻¹ = _, rw inv_inv, refl }
@[simp] lemma _root_.mul_aut.conj_normal_inv_apply {H : subgroup G} [H.normal] (g : G) (h : H) :
↑((mul_aut.conj_normal g)⁻¹ h) = g⁻¹ * h * g :=
mul_aut.conj_normal_symm_apply g h
lemma _root_.mul_aut.conj_normal_coe {H : subgroup G} [H.normal] {h : H} :
mul_aut.conj_normal ↑h = mul_aut.conj h :=
mul_equiv.ext (λ x, rfl)
instance normal_of_characteristic_of_normal {H : subgroup G} [hH : H.normal]
{K : subgroup H} [h : K.characteristic] : (K.map H.subtype).normal :=
⟨λ a ha b, by
{ obtain ⟨a, ha, rfl⟩ := ha,
exact K.apply_coe_mem_map H.subtype
⟨_, ((set_like.ext_iff.mp (h.fixed (mul_aut.conj_normal b)) a).mpr ha)⟩ }⟩
end conj_act
|
Please see below for any positions we currently have available.
To work in a diligent, efficient and conscientious manner undertaking building cleaning activities.
A basic disclosure from Disclosure Scotland is required for this post. |
function r = uminus(c)
%tstoolbox/@core/uminus
% Syntax:
% * r = uminus(c)
%
% Input Arguments:
% * c - core object
%
% negate time series
%
% Copyright 1997-2001 DPI Goettingen, License http://www.physik3.gwdg.de/tstool/gpl.txt
r = core(-data(c));
|
The $j$th derivative of $(w - z)^n$ is $\frac{n!}{(n-j)!}(w - z)^{n - j}$ (for $j \le n$). |
{-# OPTIONS --without-K --rewriting #-}
open import HoTT
open import homotopy.FunctionOver
open import groups.ProductRepr
open import cohomology.Theory
open import cohomology.WedgeCofiber
{- For the cohomology group of a suspension ΣX, the group inverse has the
- explicit form Cⁿ(flip-susp) : Cⁿ(ΣX) → Cⁿ(ΣX).
-}
module cohomology.InverseInSusp {i} (CT : CohomologyTheory i)
(n : ℤ) {X : Ptd i} where
open CohomologyTheory CT
open import cohomology.Functor CT
open import cohomology.BaseIndependence CT
open import cohomology.Wedge CT
private
module CW = CWedge n (⊙Susp X) (⊙Susp X)
module Subtract = SuspRec {C = fst (⊙Susp X ⊙∨ ⊙Susp X)}
(winl south)
(winr south)
(λ x → ap winl (! (merid x)) ∙ wglue ∙ ap winr (merid x))
subtract = Subtract.f
⊙subtract : ⊙Susp X ⊙→ ⊙Susp X ⊙∨ ⊙Susp X
⊙subtract = (subtract , ! (ap winl (merid (pt X))))
projl-subtract : ∀ σ → projl _ _ (subtract σ) == Susp-flip σ
projl-subtract = Susp-elim idp idp $
↓-='-from-square ∘ vert-degen-square ∘ λ x →
ap-∘ (projl _ _) subtract (merid x)
∙ ap (ap (projl _ _)) (Subtract.merid-β x)
∙ ap-∙ (projl _ _) (ap winl (! (merid x))) (wglue ∙ ap winr (merid x))
∙ ((∘-ap (projl _ _) winl (! (merid x))
∙ ap-idf _)
∙2 (ap-∙ (projl _ _) wglue (ap winr (merid x))
∙ (Projl.glue-β _ _
∙2 (∘-ap (projl _ _) winr (merid x) ∙ ap-cst _ _))))
∙ ∙-unit-r _
∙ ! (FlipSusp.merid-β x)
projr-subtract : ∀ σ → projr _ _ (subtract σ) == σ
projr-subtract = Susp-elim idp idp $
↓-∘=idf-in' (projr _ _) subtract ∘ λ x →
ap (ap (projr _ _)) (Subtract.merid-β x)
∙ ap-∙ (projr _ _) (ap winl (! (merid x))) (wglue ∙ ap winr (merid x))
∙ ((∘-ap (projr _ _) winl (! (merid x)) ∙ ap-cst _ _)
∙2 (ap-∙ (projr _ _) wglue (ap winr (merid x))
∙ (Projr.glue-β _ _
∙2 (∘-ap (projr _ _) winr (merid x) ∙ ap-idf _))))
fold-subtract : ∀ σ → fold (subtract σ) == south
fold-subtract = Susp-elim idp idp $
↓-app=cst-in ∘ ! ∘ λ x →
∙-unit-r _
∙ ap-∘ fold subtract (merid x)
∙ ap (ap fold) (Subtract.merid-β x)
∙ ap-∙ fold (ap winl (! (merid x))) (wglue ∙ ap winr (merid x))
∙ ((∘-ap fold winl (! (merid x)) ∙ ap-idf _)
∙2 (ap-∙ fold wglue (ap winr (merid x))
∙ (Fold.glue-β
∙2 (∘-ap fold winr (merid x) ∙ ap-idf _))))
∙ !-inv-l (merid x)
cancel :
×ᴳ-fanin (C-is-abelian n _) (CF-hom n (⊙Susp-flip X)) (idhom _) ∘ᴳ ×ᴳ-diag
== cst-hom
cancel =
ap2 (λ φ ψ → ×ᴳ-fanin (C-is-abelian n _) φ ψ ∘ᴳ ×ᴳ-diag)
(! (CF-λ= n projl-subtract))
(! (CF-ident n) ∙ ! (CF-λ= n projr-subtract))
∙ transport (λ {(G , φ , ψ) → φ ∘ᴳ ψ == cst-hom})
(pair= (CW.path) $ ↓-×-in
(CW.Wedge-in-over ⊙subtract)
(CW.⊙Wedge-rec-over (⊙idf _) (⊙idf _)
▹ ap2 ×ᴳ-fanout (CF-ident n) (CF-ident n)))
(! (CF-comp n ⊙fold ⊙subtract)
∙ CF-λ= n (λ σ → fold-subtract σ ∙ ! (merid (pt X)))
∙ CF-cst n)
C-Susp-flip-is-inv :
CF-hom n (⊙Susp-flip X) == inv-hom (C n (⊙Susp X)) (C-is-abelian _ _)
C-Susp-flip-is-inv = group-hom= $ λ= λ g →
! (Group.inv-unique-l (C n (⊙Susp X)) _ g (app= (ap GroupHom.f cancel) g))
|
ned's blog - How to reinstall MOH Airborne and keep your Stats and Medals.
How to reinstall MOH Airborne and keep your Stats and Medals.
Saving your Stats and Medals in Medal of Honor Airborne.
The default location of this file is as follows..
Simply save this file and add it to the 'Saved' folder after you have reinstalled (and patched!) the game. |
[Open in Colab](https://colab.research.google.com/github/bundickm/Study-Guides/blob/master/Unit_1_Sprint_3_Linear_Algebra_Study_Guide.ipynb)
This study guide should reinforce and provide practice for all of the concepts you have seen in the past week. There are a mix of written questions and coding exercises, both are equally important to prepare you for the sprint challenge as well as to be able to speak on these topics comfortably in interviews and on the job.
If you get stuck or are unsure of something, remember the 20 minute rule. If that doesn't help, then research a solution with Google and Stack Overflow. Only once you have exhausted these methods should you turn to your Team Lead - they won't be there on your SC or during an interview. That being said, don't hesitate to ask for help if you truly are stuck.
Have fun studying!
# Resources
[Numpy Linear Algebra Documentation](https://docs.scipy.org/doc/numpy-1.15.1/reference/routines.linalg.html)
[LaTex Cheat Sheet](https://wch.github.io/latexsheet/latexsheet.pdf)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
```
# Vectors
Define the following terms in your own words, do not simply copy and paste a definition found elsewhere but reword it to be understandable and memorable to you. *Double click the markdown to add your definitions*
**Vector:** `Your Answer Here`
**Scalar:** `Your Answer Here`
**Norm:** `Your Answer Here`
**Dot Product:** `Your Answer Here`
**Cross Product:** `Your Answer Here`
**Unit Vector:** `Your Answer Here`
**Span:** `Your Answer Here`
**Basis:** `Your Answer Here`
Use the following vectors to answer the questions below.
\begin{align}
\vec{a} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}
\vec{b} = \begin{bmatrix} 4 \\ -6 \end{bmatrix}
\vec{c} = \begin{bmatrix} 6 \\ 4 \end{bmatrix}
\end{align}
Graph all three vectors.
- Make sure to color each vector for visual clarity
- Label your x and y axis and title the graph (this should be habit any time you graph)
```
```
Find:
- $\vec{a} \cdot \vec{b}$
- $\vec{a} \cdot \vec{c}$
- $\vec{b} \cdot \vec{c}$
```
```
Find:
- $\vec{a} \times \vec{b}$
- $\vec{a} \times \vec{c}$
- $\vec{b} \times \vec{c}$
```
```
Find:
- $||\vec{a}||$
- $||\vec{b}||$
- $||\vec{c}||$
```
```
Using the vectors and calculations above, answer the following questions.
1. What vectors are orthogonal to each other? How do we know that? `Your Answer Here`
2. What is the longest vector? How do we know that? `Your Answer Here`
3. Which vectors are linearly independent? How do we know that? `Your Answer Here`
4. Which vector is formed from multiplying one of the other vectors by a scalar of 2? `Your Answer Here`
Using $\vec{a}$, transform it into a unit vector.
```
```
Using $\vec{a}$ and LaTex, describe $\vec{a}$ as a linear combination of scalars and unit vectors.
*Double click the markdown and add your answer here.*
# Matrices
Define the following terms in your own words, do not simply copy and paste a definition found elsewhere but reword it to be understandable and memorable to you. *Double click the markdown to add your definitions*
**Matrix:** `Your Answer Here`
**Dimensionality:** `Your Answer Here`
**Transpose:** `Your Answer Here`
**Determinant:** `Your Answer Here`
**Inverse:** `Your Answer Here`
**Rank:** `Your Answer Here`
**Row-Echelon Form:** `Your Answer Here`
**Identity Matrix:** `Your Answer Here`
Use the matrices below to answer the following questions.
\begin{align}
A = \begin{bmatrix}
1 & 4\\
2 & 5\\
3 & 6
\end{bmatrix}
\
B = \begin{bmatrix}
1 & 2 & 3\\
4 & 5 & 6
\end{bmatrix}
\
C = \begin{bmatrix}
1 & 2 & 3 & 4\\
5 & 3 & 7 & 9\\
3 & 7 & 7 & 7\\
2 & 4 & 6 & 8\\
\end{bmatrix}
\end{align}
What are the dimensions of each matrix?
- $A$: `Your Answer Here`
- $B$: `Your Answer Here`
- $C$: `Your Answer Here`
Do matrix multiplication with all valid combinations of $A$, $B$, and $C$.
- Are there any combinations that cannot be multiplied? Why? `Your Answer Here`
```
# Matrix Multiplication Here
```
Find $T$ of each matrix.
```
```
Find the determinant of $C$
```
```
What causes a determinant of 0?
`Your Answer Here`
Find the rank of $C$ (You may use a function to save time)
```
```
What is Gaussian Elimination? What are the three row operations you can perform?
`Your Answer Here`
Given:
\begin{align}
D = \begin{bmatrix}
1 & 3\\
2 & 5
\end{bmatrix}
\end{align}
Find $D^{-1}$
```
```
Find $D^{-1}D$. What is the name of the resulting matrix?
```
```
# Statistics
Define the following terms in your own words, do not simply copy and paste a definition found elsewhere but reword it to be understandable and memorable to you. *Double click the markdown to add your definitions*
**Variance:** `Your Answer Here`
**Covariance:** `Your Answer Here`
**Correlation Coefficient:** `Your Answer Here`
Given the data below, answer the following questions.
```
tweetle_beatles = [100, 657, 42, 1001, 501]
puddle_battles = [5, 0, 3, 10, 7]
```
Find the mean, variance, and standard deviation for `tweetle_beatles` and `puddle_battles`.
```
```
Find the variance-covariance matrix of `tweetle_beatles` and `puddle_battles`.
```
```
Find the correlation coefficient of `tweetle_beatles` and `puddle_battles`.
```
```
Can we compare the variance of `tweetle_beatles` and `puddle_battles`? Why or why not?
`Your Answer Here`
What is the relationship between Variance and Standard Deviation?
`Your Answer Here`
# Dimensionality Reduction
Define the following terms in your own words, do not simply copy and paste a definition found elsewhere but reword it to be understandable and memorable to you. *Double click the markdown to add your definitions*
**Projection:** `Your Answers Here`
**Principle Component Analysis:** `Your Answer Here`
**Eigenvalue:** `Your Answer Here`
**Eigenvector:** `Your Answer Here`
**Scree Plot:** `Your Answer Here`
Line $L$ is formed by all vectors that can be created by scaling vector $v$.
\begin{align}
\vec{v} = \begin{bmatrix} 1 \\ .5 \end{bmatrix}
\vec{w} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}
\end{align}
Find $proj_{L}(\vec{w})$
```
```
Graph your projected vector to check your work
```
```
What is the Curse of Dimensionality?
`Your Answer Here`
Use the dataset below to perform PCA with the numeric features.
```
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
df = pd.read_csv(url, names=['sepal length','sepal width','petal length','petal width','target'])
df.head()
```
| | sepal length | sepal width | petal length | petal width | target |
|---|---|---|---|---|---|
| 0 | 5.1 | 3.5 | 1.4 | 0.2 | Iris-setosa |
| 1 | 4.9 | 3.0 | 1.4 | 0.2 | Iris-setosa |
| 2 | 4.7 | 3.2 | 1.3 | 0.2 | Iris-setosa |
| 3 | 4.6 | 3.1 | 1.5 | 0.2 | Iris-setosa |
| 4 | 5.0 | 3.6 | 1.4 | 0.2 | Iris-setosa |
Standardize your data
```
```
Use PCA to project to 2 dimensions
```
```
Graph your 2 principal components against each other.
```
```
Give 2 examples of when you would use PCA. What would the trade-offs be?
1. `Your Answer Here`
2. `Your Answer Here`
If you were calculating PCA by hand, what are the steps you would need to take?
`Your Answer Here`
# Clustering
Define the following terms in your own words, do not simply copy and paste a definition found elsewhere but reword it to be understandable and memorable to you. *Double click the markdown to add your definitions*
**Supervised Learning:** `Your Answer Here`
**Unsupervised Learning:** `Your Answer Here`
**Reinforcement Learning:** `Your Answer Here`
**Clustering:** `Your Answer Here`
**Centroids:** `Your Answer Here`
**No Free Lunch:** `Your Answer Here`
Use the iris dataframe above, choosing two of the numeric columns, to perform clustering on. Don't perform the clustering on the PCA'd dataset.
Plot the two features against each other
```
```
There are three ways to decide how many clusters we should have. The easiest is that we know how many to expect due to outside knowledge or the data is labeled (`target` has three labels in it, thus we know there should be 3 clusters). What are the other two ways to figure out how many clusters there should be?
1. `Your Answer Here`
2. `Your Answer Here`
Create an elbow plot to determine K.
```
```
Perform clustering with K clusters.
```
```
Plot your clusters with the centroids distinct and each cluster of points a different color.
```
```
|
\chapter{Requirements}
Requirements can be gathered through interviews with customers, users and stakeholders, through observation, questionnaires and prototyping, or even from an existing application.
In this project requirements are obtained mainly from the current implementation of the software (\fref{sec:reengineering}) and from interviews with the developer of the old version.
Some features, either deprecated or with low priority, are left out of the requirements list due to time constraints.
\section{Functional Requirements}
\subsection*{Ready-made Solutions}
Since this is very specific software, there are no off-the-shelf solutions available that meet the requirements of the project.
However, there are libraries and frameworks that can be reused:
\begin{enumerate}
\item JavaScript frameworks
\begin{itemize}
\item \ac{MVC}: AngularJS, Spine, Ember.js, Knockout.js, Sprout, Google Closure
\item Oriented to Web Components: AngularJS, Polymer
\item BackBone.js
\item Testing: Jasmine, qUnit, phantomJS
\end{itemize}
\item JavaScript libraries
\begin{itemize}
\item jQuery and plug-ins: Bigvideo.js, jQuery-ui, jquery-mobile, jquery-kinetic
\item Hammer.js, jGestures, iScroll, swipe.js
\end{itemize}
\item \ac{CSS} Tools: LessCSS, SASS
\end{enumerate}
\subsection*{Constraints}
Certain constraints were defined before the beginning of the project:
\begin{enumerate}
\item Technology: the project has to be implemented using modern web technologies (\ac{HTML5}, JavaScript, \ac{CSS}).
\item Legal: it has to be ready to be released as open source.
\end{enumerate}
\subsection*{Functional Requirements}
\begin{enumerate}
\item \textbf{Render \ac{UI} Components with \ac{HTML}}. \ac{UI} components, currently defined with \ac{XML}, have to be eventually transformed to \ac{HTML} so that a browser can render them.
\begin{enumerate}
\item Support for \textbf{multiple languages} via the parameter \texttt{lang}.
\item \textbf{Components}
\begin{enumerate}
\item \textbf{Basic components} \texttt{screen}, \texttt{subscreen}, \texttt{textarea}, \texttt{img}, \texttt{button}, \texttt{video}, \texttt{showreel}, \texttt{swf}, \texttt{layout}, \texttt{qr}. Exclude: \texttt{group}, \texttt{group\_button}.
\item \textbf{Theme components}.
\item \textbf{Properties inline or in tags}. \texttt{x}, \texttt{y}, \texttt{width}, \texttt{height}, \texttt{scrolly}, \texttt{scrollx}, \texttt{loop}, \texttt{loopy}, \texttt{loopx}, \texttt{src}, \texttt{style}, \texttt{caption}.
\item \textbf{Action} properties \texttt{onclick}, \texttt{onload}, \texttt{action}, \texttt{param}.
\end{enumerate}
\end{enumerate}
\item \textbf{Themes}. The current version has a default theme, a set of \ac{UI} components that extend the basic ones and provide a certain look and feel. All theme components are eventually implemented with a basic component, e.g. a \texttt{back-button}, a \texttt{home-button}, a \texttt{base-button}, etc.
\item Internal \textbf{navigation} (subscreens). Subscreens are containers whose content is determined by the \ac{URI}.
\item \textbf{Settings} management. There are three configurations to load: generic settings (default language, paths...), application specific settings (current language, current theme...) and structure of the application (a graph that defines an $id$ and a \ac{URI} for each node (screen)).
\item Basic support for \textbf{Entities}.
\end{enumerate}
\section{Non-Functional Requirements}
\begin{enumerate}
\item \textbf{Target browser}: QtWebKit (Qt 5.0.x) for the first phase, Google Chrome 28 for the second.
\item \textbf{Extensibility}: The software has to be extensible. The design has to allow adding new features in the future.
\item \textbf{Robustness}: the software has to be reliable and has to be delivered with a comprehensive test suite. It should be compatible with Jenkins and continuous integration.
\item \textbf{Hardware}: the software has to perform smoothly on Reem H3 and on a multitouch screen.
\item \textbf{\ac{UX}}: the time to change to another screen in the rendered content application should be less than 0.5s. Media contents (images, videos) have to be ready in less than 1s after a screen change. Rendered components should be aesthetically pleasing.
\item \textbf{Interoperability}: it has to interoperate at least with:
\begin{itemize}
\item \textbf{Backend}: there is an \ac{API} in a Django backend that provides settings and serves the contents
\item \textbf{RobotBehaviour}: the Qt program that runs in the robot and governs the behaviour. Sometimes the touchscreen displays Qt windows, sometimes it displays the content applications.
\end{itemize}
\end{enumerate} |
State Before: C : Type u
inst✝³ : Category C
inst✝² : HasStrongEpiMonoFactorisations C
X Y : C
f : X ⟶ Y
I' : C
e : X ⟶ I'
m : I' ⟶ Y
comm : e ≫ m = f
inst✝¹ : StrongEpi e
inst✝ : Mono m
⊢ (isoStrongEpiMono e m comm).hom ≫ ι f = m State After: C : Type u
inst✝³ : Category C
inst✝² : HasStrongEpiMonoFactorisations C
X Y : C
f : X ⟶ Y
I' : C
e : X ⟶ I'
m : I' ⟶ Y
comm : e ≫ m = f
inst✝¹ : StrongEpi e
inst✝ : Mono m
⊢ IsImage.lift (StrongEpiMonoFactorisation.toMonoIsImage (StrongEpiMonoFactorisation.mk (MonoFactorisation.mk I' m e)))
(Image.monoFactorisation f) ≫
ι f =
m Tactic: dsimp [isoStrongEpiMono] State Before: C : Type u
inst✝³ : Category C
inst✝² : HasStrongEpiMonoFactorisations C
X Y : C
f : X ⟶ Y
I' : C
e : X ⟶ I'
m : I' ⟶ Y
comm : e ≫ m = f
inst✝¹ : StrongEpi e
inst✝ : Mono m
⊢ IsImage.lift (StrongEpiMonoFactorisation.toMonoIsImage (StrongEpiMonoFactorisation.mk (MonoFactorisation.mk I' m e)))
(Image.monoFactorisation f) ≫
ι f =
m State After: no goals Tactic: apply IsImage.lift_fac |
lemma open_cbox_convex: fixes x :: "'a::euclidean_space" assumes x: "x \<in> box a b" and y: "y \<in> cbox a b" and e: "0 < e" "e \<le> 1" shows "(e *\<^sub>R x + (1 - e) *\<^sub>R y) \<in> box a b" |
module Text.Markdown.Definition
-- Inline ----------------------------------------------------------
data Literal = MkLiteral String
data InlineTag = Str
| SoftBreak
| LineBreak
| Code
| RawHtml
| Entity
| Emph
| Strong
| Link
| Image
mutual
data Inline = NullInline | MkInline Inline'
record Linkable : Type where
MkLinkable : (label : Inline) ->
(url : String) ->
(title : String) ->
Linkable
data Content = NullContent
| MkLiteralContent Literal
| MkInlineContent Inline
| MkLinkableContent Linkable
record Inline' : Type where
MkInline' : (tag : InlineTag) ->
(content : Content ) ->
(next : Inline ) ->
Inline'
-- Block -----------------------------------------------------------
data BlockTag = Document
| BlockQuote
| GenericList
| GenericListItem
| FencedCode
| IndentedCode
| HtmlBlock
| Paragraph
| AtxHeader
| SetExtHeader
| HRule
| ReferenceDef
data ListType = Bullet
| Ordered
data Delimiter = Period
| Parens
record ListData : Type where
MkListData : (listType : ListType ) ->
(markerOffset : Int ) ->
(padding : Int ) ->
(start : Int ) ->
(delimiter : Delimiter) ->
(bulletChar : Char ) ->
(tight : Bool ) ->
ListData
record FencedCodeData : Type where
MkFencedCodeData : (fenceLength : Int ) ->
(fenceOffset : Int ) ->
(fenceChar : Char ) ->
(info : String) ->
FencedCodeData
data HeaderLevel = MkHeaderLevel Int
-- data RefMap --FIXME!
data Attributes = NullAttributes
| MkListDataAttributes ListData
| MkFencedCodeDataAttributes FencedCodeData
| MkHeaderLevelAttributes HeaderLevel
-- | MkRefMapAttributes RefMap --FIXME!
mutual
data Block = NullBlock | MkBlock Block'
record Block' : Type where
MkBlock' : (tag : BlockTag ) ->
(startLine : Int ) ->
(startColumn : Int ) ->
(endLine : Int ) ->
(open : Bool ) ->
(lastLineBlank : Bool ) ->
(children : Block ) ->
(stringContent : String ) ->
(inlineContent : Inline ) ->
(attributes : Attributes) ->
(next : Block ) ->
Block'
-- Markdown --------------------------------------------------------
record Meta : Type where
MkMeta : (source : String) ->
Meta
record Markdown : Type where
MkMarkdown : (meta : Meta ) ->
(blocks : List Block) ->
Markdown
-- Show instance ---------------------------------------------------
instance Show Block where
show NullBlock = show "NullBlock"
show (MkBlock block') = show "MkBlock" --FIXME!
instance Show Markdown where
show (MkMarkdown meta blocks) = show $ source meta
|
------------------------------------------------------------------------
-- The Agda standard library
--
-- Results concerning the excluded middle axiom.
------------------------------------------------------------------------
{-# OPTIONS --without-K --safe #-}
module Axiom.ExcludedMiddle where
open import Level
open import Relation.Nullary
------------------------------------------------------------------------
-- Definition
-- The classical statement of excluded middle says that every
-- statement/set is decidable (i.e. it either holds or it doesn't hold).
ExcludedMiddle : (ℓ : Level) → Set (suc ℓ)
ExcludedMiddle ℓ = {P : Set ℓ} → Dec P
|
{-# OPTIONS --copatterns --sized-types #-}
-- {-# OPTIONS -v tc.size.solve:60 #-}
open import Common.Size
open import Common.Prelude
open import Common.Product
-- Sized streams via head/tail.
record Stream {i : Size} (A : Set) : Set where
coinductive
constructor delay
field
force : ∀ {j : Size< i} → A × Stream {j} A
open Stream public
_∷ˢ_ : ∀{i A} → A → Stream {i} A → Stream {↑ i} A
force (a ∷ˢ s) = a , s
-- Prepending a list to a stream.
_++ˢ_ : ∀ {i A} → List A → Stream {i} A → Stream {i} A
(a ∷ as) ++ˢ s = a ∷ˢ (as ++ˢ s)
[] ++ˢ s = s
|
General has opened an investigation into the FISA abuses by top members and agents of the D.O.J. and F.B.I. You will, of course, recall that, according to the Nunes memo, a small group of top D.O.J. and F.B.I. officials, among other alleged abuses, used the "salacious and unverified" dirty Steele dossier to obtain FISA warrants to spy on the Trump campaign. If substantiated, this would be the first reported time in American history that one candidate (and top administration officials) have spied on the opposing party's candidate during an election year.
As a former federal prosecutor, I cannot imagine a larger, more serious white collar crime! Here's hoping that justice will be served and that such abuses never occur again. Here's also hoping that the guilty are brought to justice! |
Formal statement is: lemma higher_deriv_uminus: assumes "f holomorphic_on S" "open S" and z: "z \<in> S" shows "(deriv ^^ n) (\<lambda>w. -(f w)) z = - ((deriv ^^ n) f z)" Informal statement is: If $f$ is holomorphic on an open set $S$, then the $n$th derivative of $-f$ is $-f^{(n)}$. |
Formal statement is: lemma compact_Int [intro]: fixes s t :: "'a :: t2_space set" assumes "compact s" and "compact t" shows "compact (s \<inter> t)" Informal statement is: The intersection of two compact sets is compact. |
/-
Copyright (c) 2018 Mario Carneiro. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro
-/
import data.vector
import data.list.nodup
import data.list.of_fn
import control.applicative
/-!
# Additional theorems and definitions about the `vector` type
This file introduces the infix notation `::ᵥ` for `vector.cons`.
-/
universes u
variables {n : ℕ}
namespace vector
variables {α : Type*}
infixr `::ᵥ`:67 := vector.cons
attribute [simp] head_cons tail_cons
instance [inhabited α] : inhabited (vector α n) :=
⟨of_fn (λ _, default α)⟩
theorem to_list_injective : function.injective (@to_list α n) :=
subtype.val_injective
/-- Two `v w : vector α n` are equal iff they are equal at every single index. -/
@[ext] theorem ext : ∀ {v w : vector α n}
(h : ∀ m : fin n, vector.nth v m = vector.nth w m), v = w
| ⟨v, hv⟩ ⟨w, hw⟩ h := subtype.eq (list.ext_le (by rw [hv, hw])
(λ m hm hn, h ⟨m, hv ▸ hm⟩))
/-- The empty `vector` is a `subsingleton`. -/
instance zero_subsingleton : subsingleton (vector α 0) :=
⟨λ _ _, vector.ext (λ m, fin.elim0 m)⟩
@[simp] theorem cons_val (a : α) : ∀ (v : vector α n), (a ::ᵥ v).val = a :: v.val
| ⟨_, _⟩ := rfl
@[simp] theorem cons_head (a : α) : ∀ (v : vector α n), (a ::ᵥ v).head = a
| ⟨_, _⟩ := rfl
@[simp] theorem cons_tail (a : α) : ∀ (v : vector α n), (a ::ᵥ v).tail = v
| ⟨_, _⟩ := rfl
@[simp] theorem to_list_of_fn : ∀ {n} (f : fin n → α), to_list (of_fn f) = list.of_fn f
| 0 f := rfl
| (n+1) f := by rw [of_fn, list.of_fn_succ, to_list_cons, to_list_of_fn]
@[simp] theorem mk_to_list :
∀ (v : vector α n) h, (⟨to_list v, h⟩ : vector α n) = v
| ⟨l, h₁⟩ h₂ := rfl
@[simp]
lemma length_coe (v : vector α n) :
((coe : { l : list α // l.length = n } → list α) v).length = n :=
v.2
@[simp] lemma to_list_map {β : Type*} (v : vector α n) (f : α → β) : (v.map f).to_list =
v.to_list.map f := by cases v; refl
theorem nth_eq_nth_le : ∀ (v : vector α n) (i),
nth v i = v.to_list.nth_le i.1 (by rw to_list_length; exact i.2)
| ⟨l, h⟩ i := rfl
@[simp]
lemma nth_repeat (a : α) (i : fin n) :
(vector.repeat a n).nth i = a :=
by apply list.nth_le_repeat
@[simp] lemma nth_map {β : Type*} (v : vector α n) (f : α → β) (i : fin n) :
(v.map f).nth i = f (v.nth i) :=
by simp [nth_eq_nth_le]
@[simp] theorem nth_of_fn {n} (f : fin n → α) (i) : nth (of_fn f) i = f i :=
by rw [nth_eq_nth_le, ← list.nth_le_of_fn f];
congr; apply to_list_of_fn
@[simp] theorem of_fn_nth (v : vector α n) : of_fn (nth v) = v :=
begin
rcases v with ⟨l, rfl⟩,
apply to_list_injective,
change nth ⟨l, eq.refl _⟩ with λ i, nth ⟨l, rfl⟩ i,
simpa only [to_list_of_fn] using list.of_fn_nth_le _
end
/-- The natural equivalence between length-`n` vectors and functions from `fin n`. -/
def _root_.equiv.vector_equiv_fin (α : Type*) (n : ℕ) : vector α n ≃ (fin n → α) :=
⟨vector.nth, vector.of_fn, vector.of_fn_nth, λ f, funext $ vector.nth_of_fn f⟩
theorem nth_tail (x : vector α n) (i) :
x.tail.nth i = x.nth ⟨i.1 + 1, lt_tsub_iff_right.mp i.2⟩ :=
by { rcases x with ⟨_|_, h⟩; refl, }
@[simp]
theorem nth_tail_succ : ∀ (v : vector α n.succ) (i : fin n),
nth (tail v) i = nth v i.succ
| ⟨a::l, e⟩ ⟨i, h⟩ := by simp [nth_eq_nth_le]; refl
@[simp] theorem tail_val : ∀ (v : vector α n.succ), v.tail.val = v.val.tail
| ⟨a::l, e⟩ := rfl
/-- The `tail` of a `nil` vector is `nil`. -/
@[simp] lemma tail_nil : (@nil α).tail = nil := rfl
/-- The `tail` of a vector made up of one element is `nil`. -/
@[simp] lemma singleton_tail (v : vector α 1) : v.tail = vector.nil :=
by simp only [←cons_head_tail, eq_iff_true_of_subsingleton]
@[simp] theorem tail_of_fn {n : ℕ} (f : fin n.succ → α) :
tail (of_fn f) = of_fn (λ i, f i.succ) :=
(of_fn_nth _).symm.trans $ by { congr, funext i, cases i, simp, }
/-- The list that makes up a `vector` made up of a single element,
retrieved via `to_list`, is equal to the list of that single element. -/
@[simp] lemma to_list_singleton (v : vector α 1) : v.to_list = [v.head] :=
begin
rw ←v.cons_head_tail,
simp only [to_list_cons, to_list_nil, cons_head, eq_self_iff_true,
and_self, singleton_tail]
end
/-- Mapping under `id` does not change a vector. -/
@[simp] lemma map_id {n : ℕ} (v : vector α n) : vector.map id v = v :=
vector.eq _ _ (by simp only [list.map_id, vector.to_list_map])
lemma mem_iff_nth {a : α} {v : vector α n} : a ∈ v.to_list ↔ ∃ i, v.nth i = a :=
by simp only [list.mem_iff_nth_le, fin.exists_iff, vector.nth_eq_nth_le];
exact ⟨λ ⟨i, hi, h⟩, ⟨i, by rwa to_list_length at hi, h⟩,
λ ⟨i, hi, h⟩, ⟨i, by rwa to_list_length, h⟩⟩
lemma nodup_iff_nth_inj {v : vector α n} : v.to_list.nodup ↔ function.injective v.nth :=
begin
cases v with l hl,
subst hl,
simp only [list.nodup_iff_nth_le_inj],
split,
{ intros h i j hij,
cases i, cases j, ext, apply h, simpa },
{ intros h i j hi hj hij,
have := @h ⟨i, hi⟩ ⟨j, hj⟩, simp [nth_eq_nth_le] at *, tauto }
end
@[simp] lemma nth_mem (i : fin n) (v : vector α n) : v.nth i ∈ v.to_list :=
by rw [nth_eq_nth_le]; exact list.nth_le_mem _ _ _
theorem head'_to_list : ∀ (v : vector α n.succ),
(to_list v).head' = some (head v)
| ⟨a::l, e⟩ := rfl
/-- Reverse a vector. -/
def reverse (v : vector α n) : vector α n :=
⟨v.to_list.reverse, by simp⟩
/-- The `list` of a vector after a `reverse`, retrieved by `to_list` is equal
to the `list.reverse` after retrieving a vector's `to_list`. -/
lemma to_list_reverse {v : vector α n} : v.reverse.to_list = v.to_list.reverse := rfl
@[simp]
lemma reverse_reverse {v : vector α n} : v.reverse.reverse = v :=
by { cases v, simp [vector.reverse], }
@[simp] theorem nth_zero : ∀ (v : vector α n.succ), nth v 0 = head v
| ⟨a::l, e⟩ := rfl
@[simp] theorem head_of_fn
{n : ℕ} (f : fin n.succ → α) : head (of_fn f) = f 0 :=
by rw [← nth_zero, nth_of_fn]
@[simp] theorem nth_cons_zero
(a : α) (v : vector α n) : nth (a ::ᵥ v) 0 = a :=
by simp [nth_zero]
/-- Accessing the `nth` element of a vector made up
of one element `x : α` is `x` itself. -/
@[simp] lemma nth_cons_nil {ix : fin 1}
(x : α) : nth (x ::ᵥ nil) ix = x :=
by convert nth_cons_zero x nil
@[simp] theorem nth_cons_succ
(a : α) (v : vector α n) (i : fin n) : nth (a ::ᵥ v) i.succ = nth v i :=
by rw [← nth_tail_succ, tail_cons]
/-- The last element of a `vector`, given that the vector is at least one element. -/
def last (v : vector α (n + 1)) : α := v.nth (fin.last n)
/-- The last element of a `vector`, given that the vector is at least one element. -/
lemma last_def {v : vector α (n + 1)} : v.last = v.nth (fin.last n) := rfl
/-- The `last` element of a vector is the `head` of the `reverse` vector. -/
lemma reverse_nth_zero {v : vector α (n + 1)} : v.reverse.head = v.last :=
begin
have : 0 = v.to_list.length - 1 - n,
{ simp only [nat.add_succ_sub_one, add_zero, to_list_length, tsub_self,
list.length_reverse] },
rw [←nth_zero, last_def, nth_eq_nth_le, nth_eq_nth_le],
simp_rw [to_list_reverse, fin.val_eq_coe, fin.coe_last, fin.coe_zero, this],
rw list.nth_le_reverse,
end
section scan
variables {β : Type*}
variables (f : β → α → β) (b : β)
variables (v : vector α n)
/--
Construct a `vector β (n + 1)` from a `vector α n` by scanning `f : β → α → β`
from the "left", that is, from 0 to `fin.last n`, using `b : β` as the starting value.
-/
def scanl : vector β (n + 1) :=
⟨list.scanl f b v.to_list, by rw [list.length_scanl, to_list_length]⟩
/-- Providing an empty vector to `scanl` gives the starting value `b : β`. -/
@[simp] lemma scanl_nil : scanl f b nil = b ::ᵥ nil := rfl
/--
The recursive step of `scanl` splits a vector `x ::ᵥ v : vector α (n + 1)`
into the provided starting value `b : β` and the recursed `scanl`
`f b x : β` as the starting value.
This lemma is the `cons` version of `scanl_nth`.
-/
@[simp] lemma scanl_cons (x : α) : scanl f b (x ::ᵥ v) = b ::ᵥ scanl f (f b x) v :=
by simpa only [scanl, to_list_cons]
/--
The underlying `list` of a `vector` after a `scanl` is the `list.scanl`
of the underlying `list` of the original `vector`.
-/
@[simp] lemma scanl_val : ∀ {v : vector α n}, (scanl f b v).val = list.scanl f b v.val
| ⟨l, hl⟩ := rfl
/--
The `to_list` of a `vector` after a `scanl` is the `list.scanl`
of the `to_list` of the original `vector`.
-/
@[simp] lemma to_list_scanl : (scanl f b v).to_list = list.scanl f b v.to_list := rfl
/--
The recursive step of `scanl` splits a vector made up of a single element
`x ::ᵥ nil : vector α 1` into a `vector` of the provided starting value `b : β`
and the mapped `f b x : β` as the last value.
-/
@[simp] lemma scanl_singleton (v : vector α 1) : scanl f b v = b ::ᵥ f b v.head ::ᵥ nil :=
begin
rw [←cons_head_tail v],
simp only [scanl_cons, scanl_nil, cons_head, singleton_tail]
end
/--
The first element of `scanl` of a vector `v : vector α n`,
retrieved via `head`, is the starting value `b : β`.
-/
@[simp] lemma scanl_head : (scanl f b v).head = b :=
begin
cases n,
{ have : v = nil := by simp only [eq_iff_true_of_subsingleton],
simp only [this, scanl_nil, cons_head] },
{ rw ←cons_head_tail v,
simp only [←nth_zero, nth_eq_nth_le, to_list_scanl,
to_list_cons, list.scanl, fin.val_zero', list.nth_le] }
end
/--
For an index `i : fin n`, the `nth` element of `scanl` of a
vector `v : vector α n` at `i.succ`, is equal to the application
function `f : β → α → β` of the `i.cast_succ` element of
`scanl f b v` and `nth v i`.
This lemma is the `nth` version of `scanl_cons`.
-/
@[simp] lemma scanl_nth (i : fin n) :
(scanl f b v).nth i.succ = f ((scanl f b v).nth i.cast_succ) (v.nth i) :=
begin
cases n,
{ exact fin_zero_elim i },
induction n with n hn generalizing b,
{ have i0 : i = 0 := by simp only [eq_iff_true_of_subsingleton],
simpa only [scanl_singleton, i0, nth_zero] },
{ rw [←cons_head_tail v, scanl_cons, nth_cons_succ],
refine fin.cases _ _ i,
{ simp only [nth_zero, scanl_head, fin.cast_succ_zero, cons_head] },
{ intro i',
simp only [hn, fin.cast_succ_fin_succ, nth_cons_succ] } }
end
end scan
/-- Monadic analog of `vector.of_fn`.
Given a monadic function on `fin n`, return a `vector α n` inside the monad. -/
def m_of_fn {m} [monad m] {α : Type u} : ∀ {n}, (fin n → m α) → m (vector α n)
| 0 f := pure nil
| (n+1) f := do a ← f 0, v ← m_of_fn (λi, f i.succ), pure (a ::ᵥ v)
theorem m_of_fn_pure {m} [monad m] [is_lawful_monad m] {α} :
∀ {n} (f : fin n → α), @m_of_fn m _ _ _ (λ i, pure (f i)) = pure (of_fn f)
| 0 f := rfl
| (n+1) f := by simp [m_of_fn, @m_of_fn_pure n, of_fn]
/-- Apply a monadic function to each component of a vector,
returning a vector inside the monad. -/
def mmap {m} [monad m] {α} {β : Type u} (f : α → m β) :
∀ {n}, vector α n → m (vector β n)
| 0 xs := pure nil
| (n+1) xs := do h' ← f xs.head, t' ← @mmap n xs.tail, pure (h' ::ᵥ t')
@[simp] theorem mmap_nil {m} [monad m] {α β} (f : α → m β) :
mmap f nil = pure nil := rfl
@[simp] theorem mmap_cons {m} [monad m] {α β} (f : α → m β) (a) :
∀ {n} (v : vector α n), mmap f (a ::ᵥ v) =
do h' ← f a, t' ← mmap f v, pure (h' ::ᵥ t')
| _ ⟨l, rfl⟩ := rfl
/-- Define `C v` by induction on `v : vector α n`.
This function has two arguments: `h_nil` handles the base case on `C nil`,
and `h_cons` defines the inductive step using `∀ x : α, C w → C (x ::ᵥ w)`. -/
@[elab_as_eliminator] def induction_on {C : Π {n : ℕ}, vector α n → Sort*}
(v : vector α n)
(h_nil : C nil)
(h_cons : ∀ {n : ℕ} {x : α} {w : vector α n}, C w → C (x ::ᵥ w)) :
C v :=
begin
induction n with n ih generalizing v,
{ rcases v with ⟨_|⟨-,-⟩,-|-⟩,
exact h_nil, },
{ rcases v with ⟨_|⟨a,v⟩,_⟩,
cases v_property,
apply @h_cons n _ ⟨v, (add_left_inj 1).mp v_property⟩,
apply ih, }
end
variables {β γ : Type*}
/-- Define `C v w` by induction on a pair of vectors `v : vector α n` and `w : vector β n`. -/
@[elab_as_eliminator] def induction_on₂ {C : Π {n}, vector α n → vector β n → Sort*}
(v : vector α n) (w : vector β n)
(h_nil : C nil nil)
(h_cons : ∀ {n a b} {x : vector α n} {y}, C x y → C (a ::ᵥ x) (b ::ᵥ y)) : C v w :=
begin
induction n with n ih generalizing v w,
{ rcases v with ⟨_|⟨-,-⟩,-|-⟩, rcases w with ⟨_|⟨-,-⟩,-|-⟩,
exact h_nil, },
{ rcases v with ⟨_|⟨a,v⟩,_⟩,
cases v_property,
rcases w with ⟨_|⟨b,w⟩,_⟩,
cases w_property,
apply @h_cons n _ _ ⟨v, (add_left_inj 1).mp v_property⟩ ⟨w, (add_left_inj 1).mp w_property⟩,
apply ih, }
end
/-- Define `C u v w` by induction on a triplet of vectors
`u : vector α n`, `v : vector β n`, and `w : vector γ b`. -/
@[elab_as_eliminator] def induction_on₃ {C : Π {n}, vector α n → vector β n → vector γ n → Sort*}
(u : vector α n) (v : vector β n) (w : vector γ n)
(h_nil : C nil nil nil)
(h_cons : ∀ {n a b c} {x : vector α n} {y z}, C x y z → C (a ::ᵥ x) (b ::ᵥ y) (c ::ᵥ z)) :
C u v w :=
begin
induction n with n ih generalizing u v w,
{ rcases u with ⟨_|⟨-,-⟩,-|-⟩, rcases v with ⟨_|⟨-,-⟩,-|-⟩, rcases w with ⟨_|⟨-,-⟩,-|-⟩,
exact h_nil, },
{ rcases u with ⟨_|⟨a,u⟩,_⟩,
cases u_property,
rcases v with ⟨_|⟨b,v⟩,_⟩,
cases v_property,
rcases w with ⟨_|⟨c,w⟩,_⟩,
cases w_property,
apply @h_cons n _ _ _ ⟨u, (add_left_inj 1).mp u_property⟩ ⟨v, (add_left_inj 1).mp v_property⟩
⟨w, (add_left_inj 1).mp w_property⟩,
apply ih, }
end
/-- Cast a vector to an array. -/
def to_array : vector α n → array n α
| ⟨xs, h⟩ := cast (by rw h) xs.to_array
section insert_nth
variable {a : α}
/-- `v.insert_nth a i` inserts `a` into the vector `v` at position `i`
(and shifting later components to the right). -/
def insert_nth (a : α) (i : fin (n+1)) (v : vector α n) : vector α (n+1) :=
⟨v.1.insert_nth i a,
begin
rw [list.length_insert_nth, v.2],
rw [v.2, ← nat.succ_le_succ_iff],
exact i.2
end⟩
lemma insert_nth_val {i : fin (n+1)} {v : vector α n} :
(v.insert_nth a i).val = v.val.insert_nth i.1 a :=
rfl
@[simp] lemma remove_nth_val {i : fin n} :
∀{v : vector α n}, (remove_nth i v).val = v.val.remove_nth i
| ⟨l, hl⟩ := rfl
lemma remove_nth_insert_nth {v : vector α n} {i : fin (n+1)} :
remove_nth i (insert_nth a i v) = v :=
subtype.eq $ list.remove_nth_insert_nth i.1 v.1
lemma remove_nth_insert_nth' {v : vector α (n+1)} :
∀{i : fin (n+1)} {j : fin (n+2)},
remove_nth (j.succ_above i) (insert_nth a j v) = insert_nth a (i.pred_above j) (remove_nth i v)
| ⟨i, hi⟩ ⟨j, hj⟩ :=
begin
dsimp [insert_nth, remove_nth, fin.succ_above, fin.pred_above],
simp only [subtype.mk_eq_mk],
split_ifs,
{ convert (list.insert_nth_remove_nth_of_ge i (j-1) _ _ _).symm,
{ convert (nat.succ_pred_eq_of_pos _).symm, exact lt_of_le_of_lt (zero_le _) h, },
{ apply remove_nth_val, },
{ convert hi, exact v.2, },
{ exact nat.le_pred_of_lt h, }, },
{ convert (list.insert_nth_remove_nth_of_le i j _ _ _).symm,
{ apply remove_nth_val, },
{ convert hi, exact v.2, },
{ simpa using h, }, }
end
lemma insert_nth_comm (a b : α) (i j : fin (n+1)) (h : i ≤ j) :
∀(v : vector α n),
(v.insert_nth a i).insert_nth b j.succ = (v.insert_nth b j).insert_nth a i.cast_succ
| ⟨l, hl⟩ :=
begin
refine subtype.eq _,
simp only [insert_nth_val, fin.coe_succ, fin.cast_succ, fin.val_eq_coe, fin.coe_cast_add],
apply list.insert_nth_comm,
{ assumption },
{ rw hl, exact nat.le_of_succ_le_succ j.2 }
end
end insert_nth
section update_nth
/-- `update_nth v n a` replaces the `n`th element of `v` with `a` -/
def update_nth (v : vector α n) (i : fin n) (a : α) : vector α n :=
⟨v.1.update_nth i.1 a, by rw [list.update_nth_length, v.2]⟩
@[simp] lemma to_list_update_nth (v : vector α n) (i : fin n) (a : α) :
(v.update_nth i a).to_list = v.to_list.update_nth i a :=
rfl
@[simp] lemma nth_update_nth_same (v : vector α n) (i : fin n) (a : α) :
(v.update_nth i a).nth i = a :=
by cases v; cases i; simp [vector.update_nth, vector.nth_eq_nth_le]
lemma nth_update_nth_of_ne {v : vector α n} {i j : fin n} (h : i ≠ j) (a : α) :
(v.update_nth i a).nth j = v.nth j :=
by cases v; cases i; cases j; simp [vector.update_nth, vector.nth_eq_nth_le,
list.nth_le_update_nth_of_ne (fin.vne_of_ne h)]
lemma nth_update_nth_eq_if {v : vector α n} {i j : fin n} (a : α) :
(v.update_nth i a).nth j = if i = j then a else v.nth j :=
by split_ifs; try {simp *}; try {rw nth_update_nth_of_ne}; assumption
@[to_additive]
lemma prod_update_nth [monoid α] (v : vector α n) (i : fin n) (a : α) :
(v.update_nth i a).to_list.prod =
(v.take i).to_list.prod * a * (v.drop (i + 1)).to_list.prod :=
begin
refine (list.prod_update_nth v.to_list i a).trans _,
have : ↑i < v.to_list.length := lt_of_lt_of_le i.2 (le_of_eq v.2.symm),
simp [this],
end
@[to_additive]
lemma prod_update_nth' [comm_group α] (v : vector α n) (i : fin n) (a : α) :
(v.update_nth i a).to_list.prod =
v.to_list.prod * (v.nth i)⁻¹ * a :=
begin
refine (list.prod_update_nth' v.to_list i a).trans _,
have : ↑i < v.to_list.length := lt_of_lt_of_le i.2 (le_of_eq v.2.symm),
simp [this, nth_eq_nth_le, mul_assoc],
end
end update_nth
end vector
namespace vector
section traverse
variables {F G : Type u → Type u}
variables [applicative F] [applicative G]
open applicative functor
open list (cons) nat
private def traverse_aux {α β : Type u} (f : α → F β) :
Π (x : list α), F (vector β x.length)
| [] := pure vector.nil
| (x::xs) := vector.cons <$> f x <*> traverse_aux xs
/-- Apply an applicative function to each component of a vector. -/
protected def traverse {α β : Type u} (f : α → F β) : vector α n → F (vector β n)
| ⟨v, Hv⟩ := cast (by rw Hv) $ traverse_aux f v
section
variables {α β : Type u}
@[simp] protected lemma traverse_def
(f : α → F β) (x : α) : ∀ (xs : vector α n),
(x ::ᵥ xs).traverse f = cons <$> f x <*> xs.traverse f :=
by rintro ⟨xs, rfl⟩; refl
protected lemma id_traverse : ∀ (x : vector α n), x.traverse id.mk = x :=
begin
rintro ⟨x, rfl⟩, dsimp [vector.traverse, cast],
induction x with x xs IH, {refl},
simp! [IH], refl
end
end
open function
variables [is_lawful_applicative F] [is_lawful_applicative G]
variables {α β γ : Type u}
-- We need to turn off the linter here as
-- the `is_lawful_traversable` instance below expects a particular signature.
@[nolint unused_arguments]
protected lemma comp_traverse (f : β → F γ) (g : α → G β) : ∀ (x : vector α n),
vector.traverse (comp.mk ∘ functor.map f ∘ g) x =
comp.mk (vector.traverse f <$> vector.traverse g x) :=
by rintro ⟨x, rfl⟩; dsimp [vector.traverse, cast];
induction x with x xs; simp! [cast, *] with functor_norm;
[refl, simp [(∘)]]
protected lemma traverse_eq_map_id {α β} (f : α → β) : ∀ (x : vector α n),
x.traverse (id.mk ∘ f) = id.mk (map f x) :=
by rintro ⟨x, rfl⟩; simp!;
induction x; simp! * with functor_norm; refl
variable (η : applicative_transformation F G)
protected lemma naturality {α β : Type*}
(f : α → F β) : ∀ (x : vector α n),
η (x.traverse f) = x.traverse (@η _ ∘ f) :=
by rintro ⟨x, rfl⟩; simp! [cast];
induction x with x xs IH; simp! * with functor_norm
end traverse
instance : traversable.{u} (flip vector n) :=
{ traverse := @vector.traverse n,
map := λ α β, @vector.map.{u u} α β n }
instance : is_lawful_traversable.{u} (flip vector n) :=
{ id_traverse := @vector.id_traverse n,
comp_traverse := @vector.comp_traverse n,
traverse_eq_map_id := @vector.traverse_eq_map_id n,
naturality := @vector.naturality n,
id_map := by intros; cases x; simp! [(<$>)],
comp_map := by intros; cases x; simp! [(<$>)] }
end vector
|
########################################################
##### Author: Diego Valle Jones
##### Website: www.diegovalle.net
##### Date Created: Sun Mar 28 11:14:38 2010
########################################################
#season and trend decomposition of the monthly murder rates in Mexico
source("library/utilities.r")
source("timelines/constants.r")
plotReg <- function(df){
hom.ts <- ts(df$rate, start=1990, freq = 12)
trend = time(hom.ts)
ndays <- strptime(df$date, format = "%Y-%m-%d")$mday
reg <- glm(rate ~ trend + factor(month) + ndays, data = df)
reg2 <- glm(rate ~ trend + factor(year) + factor(month) +
ndays, data = df)
reg3 <- glm(rate ~ trend + ndays, data = df)
print(anova(reg, reg2))
print(summary(reg))
print(summary(reg2))
print(summary(reg3))
df$fitted <- unlist(reg$fitted.values)
df$fitted <- fitted(reg)
print(ggplot(df, aes(as.Date(date), rate)) +
geom_line() +
geom_line(aes(as.Date(date), fitted,
legend = FALSE), color = "blue") +
scale_x_date(major ="year") +
opts(title = reg$call))
}
plotTrend <- function(df, ban){
start.dw <- op.mich
end.dw <- as.Date("2008-12-31")
print(ggplot(df, aes(as.Date(date), rate)) +
geom_rect(xmin = as.numeric(start.dw), xmax = as.numeric(end.dw),
ymin=0, ymax=Inf, alpha = .01, fill = "pink") +
geom_line(size = .2) +
geom_line(aes(as.Date(date), trend), color = "blue") +
geom_vline(aes(xintercept = as.Date("2004-09-13")), color = "gray",
linetype = 2) +
scale_x_date() +
xlab("") + ylab("Annualized Homicide Rate") +
opts(title = "Monthly Homicide Rate and Trend (the Gray Line is the Assault Weapon Ban Expiration Date)") +
annotate("text", x = as.numeric(start.dw), y = 16.9,
label = "Drug War", hjust =-.2))
}
plotSeasonal <- function(df){
months <- factor(format(as.Date(df$date), "%b"))[1:12]
print(ggplot(df[1:12,], aes(1:12, seasonal), group = 1) +
geom_line() +
scale_x_continuous(breaks = 1:12,
labels = months) +
xlab("") +
opts(title = "Seasonal Component of the Homicide Rate") +
geom_hline(yintercept=0, color = "gray70"))
}
hom <- read.csv(bzfile("timelines/data/county-month-gue-oax.csv.bz2"))
hom <- cleanHom(hom)
hom <- addMonths(hom)
#hom <- subset(hom, Year.of.Murder >= 1994)
#I can't see any clear-cut patterns at the state level
ggplot(hom, aes(y = Total.Murders, x = Month.of.Murder,
group = Year.of.Murder, color = Year.of.Murder)) +
geom_line() +
facet_wrap(~ County, scales = "free_y")
#Now I can see them
print(ggplot(hom, aes(as.Date(Date), Total.Murders)) +
geom_line() +
scale_x_date() +
facet_wrap(~ County, scales = "free_y") +
opts(title = "Monthly Number of Homicides"))
dev.print(png, "trends/output/st-murders.png", width = 960, height = 600)
#Now only since the start of the Drug War
print(ggplot(subset(hom, as.Date(Date) >= as.Date("2006/12/01")),
aes(as.Date(Date), Total.Murders)) +
geom_line() +
scale_x_date() +
facet_wrap(~ County, scales = "free_y")+
opts(title = "Monthly Number of Homicides Since the Start of the Drug War"))
dev.print(png, "trends/output/st-drug-war-murders.png", width = 960, height = 600)
#Let's see what Chiapas looked like around the 1997 Acteal massacre
print(ggplot(subset(hom, as.Date(Date) <= as.Date("1998/06/01") &
as.Date(Date) >= as.Date("1997/01/01") &
County == "Chiapas"),
aes(as.Date(Date), Total.Murders)) +
geom_line() +
scale_x_date() +
facet_wrap(~ County, scales = "free_y")+
opts(title = "Monthly Number of Homicides Since the Start of the Drug War"))
#STL decomposition with loess
pop <- monthlyPop()
homrate <- addHom(hom, pop)
homrate <- addTrend(homrate)
homrate <- subset(homrate, year >= 1994)
plotReg(homrate)
dev.print(png, "trends/output/regression.png", width = 800,
height = 600)
Cairo(file = "trends/output/trend.png", width = 960, height=600)
plotTrend(homrate)
dev.off()
plotSeasonal(homrate)
dev.print(png, "trends/output/seasonal.png", width = 450, height = 300)
########################################################
#Bunch of crappy tests
########################################################
#See if the residuals are normal
hom.ts <- ts(homrate$rate, start=1990, freq = 12)
plot(stl(hom.ts, "per"))
dhom <- diff(hom.ts)
plot(dhom)
shapiro.test(dhom)
hist(dhom)
#12 month lag
lag.plot(dhom, 40)
#Arima
#fit.ar <- arima(hom.ts,order=c(1,1,1))
#tsdiag(fit.ar)
#Box.test(fit.ar$residuals)
#plot(hom.ts, xlim=c(1990,2010), ylim=c(5,19), type = "l")
#hom.pred <- predict(fit.ar, n.ahead = 12)
#lines(hom.pred$pred, col="red")
#lines(hom.pred$pred + 2 * hom.pred$se, col="red", lty=3)
#lines(hom.pred$pred - 2 * hom.pred$se, col="red", lty=3)
|
Formal statement is: lemma sets_vimage_algebra_cong: "sets M = sets N \<Longrightarrow> sets (vimage_algebra X f M) = sets (vimage_algebra X f N)" Informal statement is: If two measure spaces have the same sets, then the vimage algebras of the two measure spaces have the same sets. |
A legend relates to both these panels. Once, Parvati was annoyed with Shiva. At this moment, Ravana, who was passing by Mount Kailash, found it as an obstruction to his movement. Upset, Ravana shook it vigorously and as a result, Parvati got scared and hugged Shiva. Enraged by Ravana's arrogance, Shiva stamped down on Ravana, who sang praises of Shiva to free him of his misery and turned into an ardent devotee of Shiva. Another version states that Shiva was pleased with Ravana for restoring Parvati's composure and blessed him.
|
# 4. Quantum computation
Quantum computation is a new way of doing computation which differs from the classical one. Classical computations can be implemented in several ways; the most successful one today is the circuit model of computation, which is how the elements of modern computers are designed. In the circuit model, a computation takes a string of bits as input, performs certain operations on them, and gives a new string of bits as output. In the current paradigm, these operations are logical operations that follow Boolean logic. It was proved that being able to carry out a limited set of operations (namely the "NOT" and "AND" gates) is enough to implement any operation (addition, multiplication, division, ...) as a combination of operations from this set. Such a fundamental set of gates is called an "elementary set of gates" and is all that is required to do any computation. Similarly to classical computation, quantum computation can be done using the circuit model. In this case, bits are replaced by qubits, and logic gates must be substituted with quantum gates which operate on qubits while keeping their special quantum properties intact. Quantum circuits must be reversible due to the reversibility inherent in the laws of quantum mechanics.
A reversible circuit allows you to run the computation backwards and retrieve the inputs given the outputs. Classical computation can also be implemented using reversible gates, but there are disadvantages with regard to circuit size and complexity. Thus, modern computers are built with "irreversible" logic (meaning it is impossible to run the computation backwards; see the truth table of the "AND" gate, for example), and this is the reason why they generate heat! In order to have reversible quantum gates, one must implement these gates using what is called a "unitary operation", that is, an operation which preserves the sum of the probabilities of seeing each of the measurable values of the qubits. Although the probability of seeing any single outcome can change, the probabilities will always add up to one.
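To make the norm-preservation property concrete, here is a minimal numpy sketch (the library choice and the variable names are illustrative assumptions, not taken from the text): it builds a random unitary from a QR decomposition, applies it to a normalized two-component state, and checks that the total probability is still one and that the operation can be undone.
```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random 2x2 unitary matrix from the QR decomposition of a
# random complex matrix (Q is unitary by construction).
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)

# A normalized single-qubit state: amplitudes of |0> and |1>.
psi = np.array([0.6, 0.8j])
print(np.sum(np.abs(psi) ** 2))            # 1.0, total probability before

psi_out = U @ psi                          # apply the reversible (unitary) gate
print(np.sum(np.abs(psi_out) ** 2))        # 1.0, total probability after

# Reversibility: the conjugate transpose of U undoes the operation.
print(np.allclose(U.conj().T @ psi_out, psi))   # True
```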
## 4.1 The qubit
In quantum computation the objects of the computation are quantum objects called qubits. Similarly to bits, when qubits are measured they can only take two values: $\lvert 0 \rangle$, $\lvert 1 \rangle $.
The brackets around the numbers indicate that these are quantum objects (see Dirac's notation).
In the linear algebra representation, the state of a qubit is a vector in a two-dimensional Hilbert space. One possible basis for this space is the so-called computational basis, which is formed by the eigenvectors of the Pauli Z matrix (more details below): the $\lvert 0 \rangle$ and $\lvert 1 \rangle$ states. In matrix form, they can be written as
$$
\lvert 0 \rangle =
\begin{pmatrix}
1 \\
0
\end{pmatrix}
$$
$$
\lvert 1 \rangle =
\begin{pmatrix}
0 \\
1
\end{pmatrix}
$$
A generic vector $\lvert \psi \rangle $ in the Hilbert space can then be constructed as a linear combination of the basis vectors
$$
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle =
\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix}
$$
where $\alpha$ and $\beta$ are two complex numbers.
### Superposition
Unlike the regular bits stored in a computer, which take either the value $"0"$ or $"1"$, during a computation a qubit can be in a state $\lvert \psi \rangle$ which is a superposition of $\lvert 0 \rangle$ and $\lvert 1 \rangle$:
\begin{equation}
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle ,
\end{equation}
where $\alpha$ and $\beta$ are related to the probability of obtaining the corresponding outcome $\lvert 0 \rangle$ or $\lvert 1 \rangle$ when the qubit is measured to learn its value.
\begin{eqnarray}
\text{P}(\text{qubit state} = 0) = \lvert \alpha \rvert^{2} \\ \notag
\text{P}(\text{qubit state} = 1) = \lvert \beta \rvert^{2}
\end{eqnarray}
This means that the value of the qubit is not determined until it is measured, a counter-intuitive property of quantum mechanical objects. A qubit in a superposition of different states behaves as if it possesses properties of all the states in the superposition. However, when measured, the qubit will be found in one of the states of the superposition, with a probability given by the squared modulus of the coefficient of the corresponding state.
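As a small numerical illustration (a sketch assuming numpy; the particular amplitudes are made up for the example), the measurement probabilities follow directly from the squared moduli of the amplitudes:
```python
import numpy as np

# Example amplitudes for |psi> = alpha|0> + beta|1>.
alpha = 1 / np.sqrt(3)
beta = 1j * np.sqrt(2 / 3)

p0 = abs(alpha) ** 2               # probability of measuring 0
p1 = abs(beta) ** 2                # probability of measuring 1
print(p0, p1, p0 + p1)             # ~0.333, ~0.667, 1.0

# Simulating many measurements: each run yields 0 or 1 at random
# with these probabilities.
rng = np.random.default_rng(1)
outcomes = rng.choice([0, 1], size=10_000, p=[p0, p1])
print(outcomes.mean())             # close to p1
```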
### Multi-qubit state
In quantum computation, one is generally interested in doing operations on a qubit register which contains a collection of qubits. To denote the state of an $n$ qubit register, one of the following equivalent notations is used:
\begin{equation}
\lvert 0 \rangle_1 \otimes \lvert 1 \rangle_2 \otimes ... \otimes \lvert 0 \rangle_n \equiv \lvert 0, 1, ..., 0 \rangle \equiv \lvert 01...0 \rangle
\end{equation}
Here each of the zeros or ones corresponds to the state of one of the qubits in the register, and the index labels the qubit's number. The linear algebra meaning of all these notations (although less and less explicit going from left to right) is that the Hilbert space containing the state of the multi-qubit system is the tensor product $\otimes$ of the Hilbert spaces of the single qubits, and the state of the system is a Dirac ket vector in this tensor product space. That is, the state of the system is a tensor product of the states of the single qubits. So, if the Hilbert space of one qubit has dimension $2$, the Hilbert space of an $n$-qubit register has dimension $2^n$.
As an example, let us consider the case of $n=2$ qubits. The matrix form of the basis of the 4-dimensional Hilbert space is given by the tensor product of the basis vectors of each of the single qubit spaces.
From the definition of tensor product between vectors given in Chapter 2, we have
$$
\lvert 00 \rangle =
\begin{pmatrix}
1 \\
0
\end{pmatrix} \otimes
\begin{pmatrix}
1 \\
0
\end{pmatrix} =
\begin{pmatrix}
1 \cdot 1 \\
1 \cdot 0 \\
0 \cdot 1 \\
0 \cdot 0
\end{pmatrix} =
\begin{pmatrix}
1 \\
0 \\
0 \\
0
\end{pmatrix}
$$
$$
\lvert 01 \rangle =
\begin{pmatrix}
1 \\
0
\end{pmatrix} \otimes
\begin{pmatrix}
0 \\
1
\end{pmatrix} =
\begin{pmatrix}
1 \cdot 0 \\
1 \cdot 1 \\
0 \cdot 0 \\
0 \cdot 1
\end{pmatrix} =
\begin{pmatrix}
0 \\
1 \\
0 \\
0
\end{pmatrix}
$$
$$
\lvert 10 \rangle =
\begin{pmatrix}
0 \\
1
\end{pmatrix} \otimes
\begin{pmatrix}
1 \\
0
\end{pmatrix} =
\begin{pmatrix}
0 \cdot 1 \\
0 \cdot 0 \\
1 \cdot 1 \\
1 \cdot 0
\end{pmatrix} =
\begin{pmatrix}
0 \\
0 \\
1 \\
0
\end{pmatrix}
$$
$$
\lvert 11 \rangle =
\begin{pmatrix}
0 \\
1
\end{pmatrix} \otimes
\begin{pmatrix}
0 \\
1
\end{pmatrix} =
\begin{pmatrix}
0 \cdot 0 \\
0 \cdot 1 \\
1 \cdot 0 \\
1 \cdot 1
\end{pmatrix} =
\begin{pmatrix}
0 \\
0 \\
0 \\
1
\end{pmatrix}
$$
The extension to $n$ qubits is then straightforward, although tedious.
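The same basis vectors can be obtained numerically with the Kronecker product; the short numpy sketch below (library choice and names are assumptions for illustration) reproduces the two-qubit basis and shows the $2^n$ growth of the dimension:
```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Two-qubit computational basis via the tensor (Kronecker) product.
print(np.kron(ket0, ket0))   # |00> = [1, 0, 0, 0]
print(np.kron(ket0, ket1))   # |01> = [0, 1, 0, 0]
print(np.kron(ket1, ket0))   # |10> = [0, 0, 1, 0]
print(np.kron(ket1, ket1))   # |11> = [0, 0, 0, 1]

# For n qubits the dimension grows as 2**n, e.g. n = 3:
ket010 = np.kron(np.kron(ket0, ket1), ket0)
print(ket010.shape)          # (8,)
```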
### Entanglement
Another interesting property of qubits, which departs from the classical world, is that they can be entangled. If qubits are entangled with each other, the value taken by each of them is strictly related to the value taken by the others. The correlations between the values of entangled qubits are so strong that they cannot be described by classical probability theory. This is one of the most peculiar features of quantum mechanics.
In a way, entangled qubits lose their identity as individual objects, as their properties now depend on the properties of the other qubits with which they are entangled. It is not possible to separate an entangled system into independent parts; it must be treated as a new unit (in quantum mechanical terms, the state of the whole system is not separable: it cannot be factorized as a product of the states of the individual systems).
To see the difference between an entangled state and a non-entangled state, consider the following two possibilities for a two-qubit state:
<ol>
<li> $\frac{1}{\sqrt{2}} \left( \lvert 0 \rangle_1 \otimes \lvert 0 \rangle_2 \right) + \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle_1 \otimes \lvert 1 \rangle_2 \right) $ </li>
    <li> $\frac{1}{\sqrt{2}} \left( \lvert 0 \rangle_1 \otimes \lvert 0 \rangle_2 \right) + \frac{1}{\sqrt{2}} \left( \lvert 1 \rangle_1 \otimes \lvert 1 \rangle_2 \right)$ </li>
</ol>
The state shown in 1. can be manipulated so that it will look like the tensor product of the state of the first qubit and the state of the second qubit. In fact, we can rewrite 1. as: $ \lvert 0 \rangle_1 \otimes \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle_2 + \lvert 1 \rangle_2 \right) $. Thus, we can say that the first qubit is in the state $\lvert 0 \rangle_1$ and the second qubit is in an equal superposition of $\lvert 0 \rangle_2$ and $\lvert 1 \rangle_2$.
The same procedure cannot be done on the two-qubit state shown in 2. which cannot be manipulated so that the two-qubit state looks like a tensor product of the state of the first qubit and the state of the second qubit. Therefore, one cannot say anything about the state of a single qubit in state 2. but one has always to talk about the joint state of the two qubits. If one of them has value $"0"$ the other one will have value $"0"$ as well, and if one qubit has value $"1"$, the other qubit will have value $"1"$ too. This purely quantum correlation between the states of a system is what is called quantum entanglement.
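A convenient numerical way to see this difference is to reshape the four amplitudes of a two-qubit state into a $2 \times 2$ matrix and count its non-zero singular values (the Schmidt rank): a product state has rank 1, an entangled state has rank 2. The sketch below uses numpy and is an illustration of this standard linear-algebra check, not a construction taken from the text.
```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# State 1: |0> (x) (|0> + |1>)/sqrt(2), a product (separable) state.
state1 = (np.kron(ket0, ket0) + np.kron(ket0, ket1)) / np.sqrt(2)
# State 2: (|00> + |11>)/sqrt(2), an entangled state.
state2 = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

def schmidt_rank(state):
    # Reshape the amplitudes c_ij of |ij> into a 2x2 matrix and count
    # the non-negligible singular values.
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > 1e-12))

print(schmidt_rank(state1))   # 1 -> separable
print(schmidt_rank(state2))   # 2 -> entangled
```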
## 4.2 Quantum gates
Quantum gates implement operations on qubits by keeping intact their quantum properties. They correspond to operators acting on the qubit register. Furthermore, they allow some of those properties to arise, for example by putting qubits in a superposition of states or entangling them with each other.
It was shown that, similarly to the classical case, it is only necessary to be able to perform a finite set of quantum gates on the qubits: any possible operation on them can be implemented by combining gates from this particular set. Such a set of quantum gates is called a "universal set of quantum gates".
Let's see some quantum gates:
### 4.2.1 One-qubit gates
These gates act on one of the qubits in the qubit register, leaving all other qubits unaffected.
#### Identity gate
The identity gate I leaves the state of a qubit unchanged. In matrix form this is just the identity matrix in a two-dimensional vector space
$$\text{1. Identity gate.}$$
$$
\hat{I} =
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
$$
and its action on the state of a qubit is
$$
\hat{I} \lvert 0 \rangle =
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
1 \\
0
\end{pmatrix} =
\begin{pmatrix}
1 \\
0
\end{pmatrix} = \lvert 0 \rangle
$$
$$
\hat{I} \lvert 1 \rangle =
\begin{pmatrix}
1 & 0\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
0 \\
1
\end{pmatrix} =
\begin{pmatrix}
0 \\
1
\end{pmatrix} = \lvert 1 \rangle
$$
#### Pauli X gate
The X gate flips the state of a qubit from $\lvert 0 \rangle \rightarrow \lvert 1 \rangle $ and $\lvert 1 \rangle \rightarrow \lvert 0 \rangle $. It is the quantum analog of the NOT gate and it is sometimes referred to as the "bit-flip gate". When considering the Bloch sphere representation of a qubit, the X gate corresponds to a $\pi$ rotation around the x-axis (up to a global phase).
$$\text{2. X gate.}$$
In matrix representation, the X gate is
$$
\hat{X} =
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}
$$
Its action on a qubit is
$$
\hat{X} \lvert 0 \rangle =
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}
\begin{pmatrix}
1 \\
0
\end{pmatrix} =
\begin{pmatrix}
0 \\
1
\end{pmatrix} = \lvert 1 \rangle
$$
$$
\hat{X} \lvert 1 \rangle =
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}
\begin{pmatrix}
0 \\
1
\end{pmatrix} =
\begin{pmatrix}
1 \\
0
\end{pmatrix} = \lvert 0 \rangle
$$
#### Pauli Z gate
The Z gate flips the phase of a qubit if it is in the $\lvert 1 \rangle$ state. This corresponds to a $\pi$ rotation around the z-axis.
$$\text{3. Z gate.}$$
In matrix representation, the Z gate is
$$
\hat{Z} =
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}
$$
The effect on the state of a qubit is
$$
\hat{Z} \lvert 0 \rangle =
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}
\begin{pmatrix}
1 \\
0
\end{pmatrix} =
\begin{pmatrix}
1 \\
0
\end{pmatrix} = \lvert 0 \rangle
$$
$$
\hat{Z} \lvert 1 \rangle =
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}
\begin{pmatrix}
0 \\
1
\end{pmatrix} =
\begin{pmatrix}
0 \\
-1
\end{pmatrix} = - \lvert 1 \rangle
$$
#### Pauli Y gate
The Y gate flips both the state of a qubit and its phase, and is sometimes called the "bit- and phase-flip gate".
$$\text{4. Y gate.}$$
The definition of the Y gate in matrix form is
$$
\hat{Y} =
\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix}
$$
The effect of the Y gate on the state of a qubit is
$$
\hat{Y} \lvert 0 \rangle =
\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix}
\begin{pmatrix}
1 \\
0
\end{pmatrix} =
\begin{pmatrix}
0 \\
i
\end{pmatrix} = i \lvert 1 \rangle
$$
$$
\hat{Y} \lvert 1 \rangle =
\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix}
\begin{pmatrix}
0 \\
1
\end{pmatrix} =
\begin{pmatrix}
-i \\
0
\end{pmatrix} = -i \lvert 0 \rangle
$$
As suggested by its matrix form, the Y gate is not an independent gate but a combination of the X and Z gates (up to a phase factor). In particular, let us consider
$$
i\hat{X}\hat{Z} =
i \begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix} =
i \begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix} =
\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix} = Y
$$
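This identity is easy to verify numerically; the following numpy sketch (an illustration, with the matrices typed in from the definitions above) checks it and reproduces the gate actions on the basis states:
```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# Check the identity Y = i X Z derived above.
print(np.allclose(1j * X @ Z, Y))     # True

# Actions on the computational basis states.
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
print(X @ ket0)    # [0, 1]  =  |1>
print(Z @ ket1)    # [0, -1] = -|1>
print(Y @ ket0)    # [0, 1j] = i|1>
```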
#### Hadamard gate
The Hadamard gate H puts a qubit in an equal superposition of $\lvert 0 \rangle$ and $\lvert 1 \rangle$.
$$\text{5. Hadamard gate.}$$
Its matrix representation is
$$
\hat{H} =
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\\
1 & -1
\end{pmatrix}
$$
Acting with the Hadamard gate on a qubit gives
$$
\hat{H} \lvert 0 \rangle =
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\\
1 & -1
\end{pmatrix}
\begin{pmatrix}
1 \\
0
\end{pmatrix} =
\frac{1}{\sqrt{2}} \begin{pmatrix}
1 \\
1
\end{pmatrix} = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle + \lvert 1 \rangle \right)
$$
$$
\hat{H} \lvert 1 \rangle =
\frac{1}{\sqrt{2}}
\begin{pmatrix}
1 & 1\\
1 & -1
\end{pmatrix}
\begin{pmatrix}
0 \\
1
\end{pmatrix} =
\frac{1}{\sqrt{2}} \begin{pmatrix}
1 \\
-1
\end{pmatrix} = \frac{1}{\sqrt{2}} \left( \lvert 0 \rangle - \lvert 1 \rangle \right)
$$
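The equal superposition produced by the Hadamard gate can be checked with a few lines of numpy (a sketch; the matrix is copied from the definition above):
```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

plus = H @ ket0     # (|0> + |1>)/sqrt(2)
minus = H @ ket1    # (|0> - |1>)/sqrt(2)
print(plus, minus)

# Measuring H|0> gives 0 or 1 with equal probability.
print(np.abs(plus) ** 2)                 # [0.5, 0.5]

# H is its own inverse: applying it twice restores the original state.
print(np.allclose(H @ H, np.eye(2)))     # True
```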
#### Rotations
Rotations around an axis ($R_x, R_y, R_z$) rotate the qubit state $\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle$ by changing the coefficients $\alpha, \beta$ in a way that depends on the angle of rotation $\theta$.
$$\text{6. Rotation gate around the $x$, $y$ and $z$ axis.}$$
In matrix form
$$
\hat{R}_x(\theta) =
\begin{pmatrix}
\cos(\theta/2) & -i\sin(\theta/2)\\
-i\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix}
$$
$$
\hat{R}_y(\theta) =
\begin{pmatrix}
\cos(\theta/2) & -\sin(\theta/2)\\
\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix}
$$
$$
\hat{R}_z(\theta) =
\begin{pmatrix}
1 & 0 \\
0 & e^{i \theta}
\end{pmatrix}
$$
Their action is
$$ \hat{R}_x(\theta) \lvert \psi \rangle =
\begin{pmatrix}
\cos(\theta/2) & -i\sin(\theta/2)\\
-i\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix}
\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix} =
\begin{pmatrix}
\cos(\theta/2) \alpha -i\sin(\theta/2) \beta\\
-i\sin(\theta/2) \alpha + \cos(\theta/2) \beta
\end{pmatrix} = \left( \cos(\theta/2) \alpha -i\sin(\theta/2) \beta \right) \lvert 0 \rangle + \left( -i\sin(\theta/2) \alpha + \cos(\theta/2) \beta \right) \lvert 1 \rangle$$
$$ \hat{R}_y(\theta) \lvert \psi \rangle =
\begin{pmatrix}
\cos(\theta/2) & -\sin(\theta/2)\\
\sin(\theta/2) & \cos(\theta/2)
\end{pmatrix}
\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix} =
\begin{pmatrix}
\cos(\theta/2) \alpha - \sin(\theta/2) \beta\\
\sin(\theta/2) \alpha + \cos(\theta/2) \beta
\end{pmatrix} = \left( \cos(\theta/2) \alpha - \sin(\theta/2) \beta \right) \lvert 0 \rangle + \left( \sin(\theta/2) \alpha + \cos(\theta/2) \beta \right) \lvert 1 \rangle$$
$$ \hat{R}_z(\theta) \lvert \psi \rangle =
\begin{pmatrix}
1 & 0 \\
0 & e^{i \theta}
\end{pmatrix}
\begin{pmatrix}
\alpha \\
\beta
\end{pmatrix} =
\begin{pmatrix}
\alpha \\
e^{i \theta} \beta
\end{pmatrix} = \alpha \lvert 0 \rangle + e^{i \theta} \beta \lvert 1 \rangle $$
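The rotation gates can be written as small functions of the angle $\theta$; the numpy sketch below (an illustration using the matrices above) applies them to $\lvert 0 \rangle$ and checks that they are unitary:
```python
import numpy as np

def Rx(theta):
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

def Ry(theta):
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2), np.cos(theta / 2)]])

def Rz(theta):
    return np.array([[1, 0],
                     [0, np.exp(1j * theta)]], dtype=complex)

ket0 = np.array([1, 0])
theta = np.pi / 3

# Rx sends |0> to cos(theta/2)|0> - i sin(theta/2)|1>.
print(Rx(theta) @ ket0)

# A pi rotation about the x-axis flips |0> to |1> up to a global
# phase (-i), consistent with the X gate discussed above.
print(Rx(np.pi) @ ket0)                  # [~0, -1j]

# All three rotations are unitary.
for R in (Rx(theta), Ry(theta), Rz(theta)):
    print(np.allclose(R.conj().T @ R, np.eye(2)))   # True
```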
### 4.2.2 Multi-qubit gates
$$\text{7. Two single-qubit gates. $X$ on the first qubit and $I$ on the second qubit.}$$
If we are dealing with a qubit register that contains more than a single qubit, the operators on the whole register can be obtained by taking the tensor product of the operators acting on each qubit. To avoid lengthy calculations, we work out a few examples for a two-qubit register, whose basis vectors were shown above.
As an example, the matrix representation of the X gate acting on the first qubit (with the identity acting on the second) is
$$
\hat{X} \otimes \hat{I} =
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix} \otimes
\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix} =
\begin{pmatrix}
0 \cdot 1 & 0 \cdot 0 & 1 \cdot 1 & 1 \cdot 0 \\
0 \cdot 0 & 0 \cdot 1 & 1 \cdot 0 & 1 \cdot 1 \\
1 \cdot 1 & 1 \cdot 0 & 0 \cdot 1 & 0 \cdot 0 \\
1 \cdot 0 & 1 \cdot 1 & 0 \cdot 0 & 0 \cdot 1
\end{pmatrix} =
\begin{pmatrix}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0
\end{pmatrix}
$$
The action of this operator on a two-qubit register will be
$$
\left( \hat{X} \otimes \hat{I} \right) \lvert 00 \rangle =
\begin{pmatrix}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
1 \\
0\\
0\\
0
\end{pmatrix} =
\begin{pmatrix}
0 \\
0\\
1\\
0
\end{pmatrix} =
\lvert 10 \rangle
$$
Exactly as expected. If we now want to combine different types of single-qubit gates acting on the two-qubit register, we only need to calculate the tensor product of these operators to find their matrix representation.
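The sketch below (numpy assumed; names chosen for the example) reproduces the $\hat{X} \otimes \hat{I}$ calculation with `np.kron` and shows how two different single-qubit gates are combined in the same way:
```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# X on the first qubit, identity on the second.
XI = np.kron(X, I)
ket00 = np.array([1, 0, 0, 0])
print(XI @ ket00)        # [0, 0, 1, 0] = |10>, as computed above

# Combining different single-qubit gates works the same way,
# e.g. Hadamard on the first qubit and X on the second.
HX = np.kron(H, X)
print(HX @ ket00)        # (|01> + |11>)/sqrt(2)
```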
### 4.2.3 Two-qubit gates
#### Control-NOT gate
The best-known two-qubit gate is the controlled-NOT or CNOT (CX) gate. The CX gate flips the state of one qubit (called the $target$) conditionally on the state of another qubit (called the $control$).
$$\text{8. CNOT gate. The first qubit is the control and the second is the target.}$$
The matrix representation of the CX gate is the following
$$
\hat{C}X_{12} =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix}
$$
Let us see the effect of acting with the CX gate on a two-qubit register. In the matrix form shown above, the first qubit will be the control qubit and the second qubit will be the target qubit.
If the control qubit is in the state $\lvert 0 \rangle$, nothing is done to the target qubit. If the control qubit is in state $\lvert 1 \rangle$, the X gate (bit-flip) is applied to the target qubit.
$$
\hat{C}X_{12} \lvert 00 \rangle =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix}
\begin{pmatrix}
1 \\
0 \\
0 \\
0
\end{pmatrix} =
\begin{pmatrix}
1 \\
0 \\
0 \\
0
\end{pmatrix} = \lvert 00 \rangle
$$
$$
\hat{C}X_{12} \lvert 01 \rangle =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix}
\begin{pmatrix}
0 \\
1 \\
0 \\
0
\end{pmatrix} =
\begin{pmatrix}
0 \\
1 \\
0 \\
0
\end{pmatrix} = \lvert 01 \rangle
$$
$$
\hat{C}X_{12} \lvert 10 \rangle =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix}
\begin{pmatrix}
0 \\
0 \\
1 \\
0
\end{pmatrix} =
\begin{pmatrix}
0 \\
0 \\
0 \\
1
\end{pmatrix} = \lvert 11 \rangle
$$
$$
\hat{C}X_{12} \lvert 11 \rangle =
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{pmatrix}
\begin{pmatrix}
0 \\
0 \\
0 \\
1
\end{pmatrix} =
\begin{pmatrix}
0 \\
0 \\
1 \\
0
\end{pmatrix} = \lvert 10 \rangle
$$
There is also another way to write the action of the CNOT gate in Dirac's notation:
$$\hat{C}X_{12} \lvert x, y \rangle = \lvert x, x \oplus y \rangle $$
where $x,y= \{ 0,1 \}$. So that:
$$\hat{C}X_{12} \lvert 0, 0 \rangle = \lvert 0, 0 \oplus 0 \rangle = \lvert 0, 0 \rangle $$
$$\hat{C}X_{12} \lvert 0, 1 \rangle = \lvert 0, 0 \oplus 1 \rangle = \lvert 0, 1 \rangle $$
$$\hat{C}X_{12} \lvert 1, 0 \rangle = \lvert 1, 1 \oplus 0 \rangle = \lvert 1, 1 \rangle $$
$$\hat{C}X_{12} \lvert 1, 1 \rangle = \lvert 1, 1 \oplus 1 \rangle = \lvert 1, 0 \rangle $$
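The truth table above is easy to check numerically, and combining the CNOT with a Hadamard on the control qubit produces the entangled state from the entanglement section. This is a minimal numpy sketch for illustration:
```python
import numpy as np

CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

# Check the truth table |x, y> -> |x, x XOR y>: CX permutes |10> and |11>.
basis = np.eye(4)                 # columns are |00>, |01>, |10>, |11>
print(CX @ basis)

# Hadamard on the control qubit followed by CNOT creates the Bell state
# (|00> + |11>)/sqrt(2), i.e. the entangled state discussed earlier.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
ket00 = np.array([1, 0, 0, 0])
bell = CX @ (np.kron(H, I) @ ket00)
print(bell)                       # [0.707, 0, 0, 0.707]
```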
## Problems
<ol>
<li>
Consider the generic state of a qubit $\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle $. Give the values of $\alpha$ and $\beta$ (normalized to $\lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1$) to represent the following ket-vectors
<ol>
<li>
$ \lvert 0 \rangle $
</li>
<li>
$ \lvert 1 \rangle $
</li>
<li>
equal superposition of $\lvert 0 \rangle $ and $\lvert 1 \rangle$
</li>
</ol>
</li>
<li>
Find the basis for the Hilbert space of three qubits (it has dimension 8) using the tensor product of the computational basis of the Hilbert space of a qubit.
</li>
<li>
Given a qubit in the state $\lvert \psi \rangle = \frac{\sqrt{2}}{\sqrt{6}} \lvert 0 \rangle + \frac{\sqrt{4}}{\sqrt{6}} \lvert 1 \rangle$, Calculate:
<ol>
<li>
$ \hat{X} \lvert \psi \rangle $
</li>
<li>
$\hat{Z} \lvert \psi \rangle$
</li>
<li>
$\hat{X} \lvert \psi \rangle$
</li>
<li>
$\hat{Y} \lvert \psi \rangle$
</li>
<li>
$\hat{H} \lvert \psi \rangle$
</li>
</ol>
</li>
<li>
Calculate the following multi-qubit operators in matrix form
<ol>
<li>
$ X \otimes X $
</li>
<li>
$ H \otimes H $
</li>
<li>
$ H \otimes Z $
</li>
</ol>
</li>
<li>
Given two qubits in the state $\lvert \psi \rangle = \frac{\sqrt{2}}{\sqrt{6}} \lvert 00 \rangle + \frac{\sqrt{2}}{\sqrt{6}} \lvert 01 \rangle + \frac{\sqrt{i}}{\sqrt{6}} \lvert 10 \rangle + \frac{\sqrt{1}}{\sqrt{6}} \lvert 11 \rangle$, calculate $\hat{C}X_{12} \lvert \psi \rangle$, where the first qubit is the control qubit and the second qubit is the target qubit.
</li>
</ol>
## References
[1] R. Feynman, Simulating Physics with Computers, International Journal of Theoretical Physics, Vol. 21, nos. 6/7, pp. 467-488 (1982).
[2] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
[3] A. Barenco et al., Phys. Rev. A 52, 3457 (1995).
|
theory Dom_Fst
imports "../Dominant" "../../Lifter/Lifter_Instances"
begin
locale dominant2_fst = dominant2
sublocale dominant2_fst \<subseteq> out : dominant2 "fst_l l1" t1 "fst_l l2" t2 X
proof
fix a1 a2
fix b :: "'c * ('g :: {Pord, Pord_Weakb})"
fix x
assume Xin : "x \<in> X"
then obtain b'1 b'2 where B' : "b = (b'1, b'2)"
by(cases b; auto simp add: fst_l_S_def)
then show "LUpd (fst_l l2) (t2 x) a2 b <[
LUpd (fst_l l1) (t1 x) a1 b"
using dominant_leq[OF Xin]
by(auto simp add: fst_l_def prod_pleq leq_refl)
qed
lemma (in dominant2_fst) ax :
shows "dominant2 (fst_l l1) t1 (fst_l l2) t2 X"
using out.dominant2_axioms
by auto
end |
||| Copyright 2016 Google Inc.
|||
||| Licensed under the Apache License, Version 2.0 (the "License");
||| you may not use this file except in compliance with the License.
||| You may obtain a copy of the License at
|||
||| http://www.apache.org/licenses/LICENSE-2.0
|||
||| Unless required by applicable law or agreed to in writing, software
||| distributed under the License is distributed on an "AS IS" BASIS,
||| WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
||| See the License for the specific language governing permissions and
||| limitations under the License.
module Test.Protobuf
import Test.UnitTest
import Test.Utils
import Protobuf.Core
%access export
allTests : IO ()
allTests = runTests (MkTestFixture "Protobuf" [
MkTestCase "GetName" (assertEq (getName John) "John Doe"),
MkTestCase "GetId" (assertEq (getId John) 1234),
MkTestCase "GetEmail" (assertEq (getEmail John) (Just "[email protected]"))
])
|
res = 3
version="v2.4"
AR = TRUE
draw = FALSE
ndraws = 10
nknots = 120
one_time=FALSE
lambda_fixed = TRUE
bchron = FALSE
# stat model flags
decomp = TRUE
bt = TRUE
mpp = TRUE
save_plots = TRUE
bacon = TRUE
# draw = TRUE
add_varves = TRUE
constrain = FALSE
# how far to extrapolate past last geochron anchor
nbeyond = 5000
age_model = 'bacon'
suff_dat = '12taxa_mid_comp_ALL_v0.3'
run_pl_Ka_Kgamma = list(suff_fit = 'cal_pl_Ka_Kgamma_EPs_ALL_v0.4c1',
suff_dat = suff_dat,
kernel = 'pl',
one_a = FALSE,
one_b = TRUE,
one_gamma = FALSE,
EPs = TRUE)
runs = list(run_pl_Ka_Kgamma)
# for (run in runs){
# for (res in grids){
# if (draw){
# for (dr in 1:ndraws){
# source('r/pred_build_data.r')
# }
# }
# }
# }
if (draw){
for (dr in 1:ndraws){
source('r/pred_build_data.r')
}
} else {
dr = 1
source('r/pred_build_data.r')
}
|
module DaggerGPU
using Dagger, MemPool, Requires, Adapt
using Distributed
using KernelAbstractions
import Dagger: Chunk
macro gpuproc(PROC, T)
quote
# Assume that we can run anything
Dagger.iscompatible_func(proc::$PROC, opts, f) = true
Dagger.iscompatible_arg(proc::$PROC, opts, x) = true
# CPUs shouldn't process our array type
Dagger.iscompatible_arg(proc::Dagger.ThreadProc, opts, x::$T) = false
# Adapt to/from the appropriate type
function Dagger.move(from_proc::OSProc, to_proc::$PROC, x::Chunk)
from_pid = from_proc.pid
to_pid = Dagger.get_parent(to_proc).pid
@assert myid() == to_pid
adapt($T, remotecall_fetch(from_pid, x) do x
poolget(x.handle)
end)
end
function Dagger.move(from_proc::$PROC, to_proc::OSProc, x::Chunk)
from_pid = Dagger.get_parent(from_proc).pid
to_pid = to_proc.pid
@assert myid() == to_pid
remotecall_fetch(from_pid, x) do x
adapt(Array, poolget(x.handle))
end
end
function Dagger.move(from_proc::OSProc, to_proc::$PROC, x)
adapt($T, x)
end
function Dagger.move(from_proc::$PROC, to_proc::OSProc, x)
adapt(Array, x)
end
end
end
processor(kind::Symbol) = processor(Val(kind))
processor(::Val) = Dagger.ThreadProc
cancompute(kind::Symbol) = cancompute(Val(kind))
cancompute(::Val) = false
kernel_backend() = kernel_backend(Dagger.Sch.thunk_processor())
kernel_backend(::Dagger.ThreadProc) = CPU()
function __init__()
@require CUDA="052768ef-5323-5732-b1bb-66c8b64840ba" begin
include("cu.jl")
end
@require AMDGPU="21141c5a-9bdb-4563-92ae-f87d6854732e" begin
include("roc.jl")
end
end
end
|
Dr. Stoner’s, a local hand crafted Whiskey & Vodka Producer, debuted at Archibald’s on September 28. The taste is unmistakeable as soon as you take the first sip.
As a fan of hand-crafted spirits, Dr. Craig Stoner began to investigate the possibilities of using special herbal combinations to make completely unique taste sensations. It was not long before his interest and curiosity turned into a passion to make spirits of such unique and fine quality that he would want to put his name, his picture, and his signature on every bottle. Working with only the finest natural ingredients, he tirelessly attempted to get the right balance of floral, pungent, spicy, fruity, herbaceous, smoky, and unique flavors and aromas. Countless combinations of herbs and spirits were prepared and tested. Only when the combinations were just right did Dr. Stoner give his stamp of approval.
Come by Archibald’s and ask for Dr. Stoner’s Vodka or Whiskey and taste what everyone’s talking about. At Archibald’s, call it “Sip, Sip, Pass”.
(* Title: HOL/Auth/n_flash_lemma_on_inv__5.thy
Author: Yongjian Li and Kaiqiang Duan, State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
Copyright 2016 State Key Lab of Computer Science, Institute of Software, Chinese Academy of Sciences
*)
header{*The n_flash Protocol Case Study*}
theory n_flash_lemma_on_inv__5 imports n_flash_base
begin
section{*All lemmas on causal relation between inv__5 and some rule r*}
lemma n_PI_Remote_GetVsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_Get src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_Get src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Remote_GetXVsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_PI_Remote_GetX src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_PI_Remote_GetX src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_NakVsinv__5:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Nak dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Nak dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__0Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__1Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Nak__part__2Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Nak__part__2 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Get__part__0Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Get__part__1Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Get__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Put_HeadVsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Head N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_PutVsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_Get_Put_DirtyVsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_Get_Put_Dirty src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_NakVsinv__5:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Nak src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_Get_PutVsinv__5:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_Get_Put src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__0Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__1Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_Nak__part__2Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_Nak__part__2 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_GetX__part__0Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__0 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_GetX__part__1Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_GetX__part__1 src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_1Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_2Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_2 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_3Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_3 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_4Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_4 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_5Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_5 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_6Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_6 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7__part__0Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__0 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7__part__1Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7__part__1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__0Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__0 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_7_NODE_Get__part__1Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_7_NODE_Get__part__1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_HomeVsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_Home_NODE_GetVsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_8_Home_NODE_Get N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8Vsinv__5:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8 N src pp" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp~=p__Inv4)\<or>(src~=p__Inv4\<and>pp=p__Inv4)\<or>(src~=p__Inv4\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_8_NODE_GetVsinv__5:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_8_NODE_Get N src pp" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp~=p__Inv4)\<or>(src~=p__Inv4\<and>pp=p__Inv4)\<or>(src~=p__Inv4\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_9__part__0Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__0 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''CacheState'')) (Const CACHE_E))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_9__part__1Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_9__part__1 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''CacheState'')) (Const CACHE_E))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_10_HomeVsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_10_Home N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''HomeShrSet'')) (Const true)))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_10Vsinv__5:
assumes a1: "(\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src pp where a1:"src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_Local_GetX_PutX_10 N src pp" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>pp~=p__Inv4)\<or>(src~=p__Inv4\<and>pp=p__Inv4)\<or>(src~=p__Inv4\<and>pp~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>pp~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false)) (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''CacheState'')) (Const CACHE_E))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>pp~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_GetX_PutX_11Vsinv__5:
assumes a1: "(\<exists> src. src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src where a1:"src\<le>N\<and>r=n_NI_Local_GetX_PutX_11 N src" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4)\<or>(src~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_NakVsinv__5:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_Nak src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_GetX_PutXVsinv__5:
assumes a1: "(\<exists> src dst. src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain src dst where a1:"src\<le>N\<and>dst\<le>N\<and>src~=dst\<and>r=n_NI_Remote_GetX_PutX src dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(src=p__Inv4\<and>dst~=p__Inv4)\<or>(src~=p__Inv4\<and>dst=p__Inv4)\<or>(src~=p__Inv4\<and>dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(src=p__Inv4\<and>dst~=p__Inv4)"
have "?P3 s"
apply (cut_tac a1 a2 b1, simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''Proc'') dst) ''CacheState'')) (Const CACHE_E)) (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''CacheState'')) (Const CACHE_E))))" in exI, auto) done
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(src~=p__Inv4\<and>dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_PutVsinv__5:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Put dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_Put dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Remote_PutXVsinv__5:
assumes a1: "(\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_PutX dst)" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a1 obtain dst where a1:"dst\<le>N\<and>r=n_NI_Remote_PutX dst" apply fastforce done
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "(dst=p__Inv4)\<or>(dst~=p__Inv4)" apply (cut_tac a1 a2, auto) done
moreover {
assume b1: "(dst=p__Inv4)"
have "?P1 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume b1: "(dst~=p__Inv4)"
have "?P2 s"
proof(cut_tac a1 a2 b1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Local_Get_PutVsinv__5:
assumes a1: "(r=n_PI_Local_Get_Put )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''InvMarked'')) (Const true)) s))\<or>((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''InvMarked'')) (Const true))) s))" by auto
moreover {
assume c1: "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''InvMarked'')) (Const true)) s))"
have "?P1 s"
proof(cut_tac a1 a2 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume c1: "((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''InvMarked'')) (Const true))) s))"
have "?P1 s"
proof(cut_tac a1 a2 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Local_GetX_PutX_HeadVld__part__0Vsinv__5:
assumes a1: "(r=n_PI_Local_GetX_PutX_HeadVld__part__0 N )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_PutX_HeadVld__part__1Vsinv__5:
assumes a1: "(r=n_PI_Local_GetX_PutX_HeadVld__part__1 N )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_PutX__part__0Vsinv__5:
assumes a1: "(r=n_PI_Local_GetX_PutX__part__0 )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_GetX_PutX__part__1Vsinv__5:
assumes a1: "(r=n_PI_Local_GetX_PutX__part__1 )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Dirty'')) (Const false))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_PI_Local_PutXVsinv__5:
assumes a1: "(r=n_PI_Local_PutX )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const true)) s))\<or>((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const true))) s))" by auto
moreover {
assume c1: "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const true)) s))"
have "?P1 s"
proof(cut_tac a1 a2 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume c1: "((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''Dir'') ''Pending'')) (Const true))) s))"
have "?P1 s"
proof(cut_tac a1 a2 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_PI_Local_ReplaceVsinv__5:
assumes a1: "(r=n_PI_Local_Replace )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "?P1 s"
proof(cut_tac a1 a2 , auto) qed
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Local_PutVsinv__5:
assumes a1: "(r=n_NI_Local_Put )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''InvMarked'')) (Const true)) s))\<or>((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''InvMarked'')) (Const true))) s))" by auto
moreover {
assume c1: "((formEval (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''InvMarked'')) (Const true)) s))"
have "?P1 s"
proof(cut_tac a1 a2 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
moreover {
assume c1: "((formEval (neg (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeProc'') ''InvMarked'')) (Const true))) s))"
have "?P1 s"
proof(cut_tac a1 a2 c1, auto) qed
then have "invHoldForRule s f r (invariants N)" by auto
}
ultimately show "invHoldForRule s f r (invariants N)" by satx
qed
lemma n_NI_Local_PutXAcksDoneVsinv__5:
assumes a1: "(r=n_NI_Local_PutXAcksDone )" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)" (is "?P1 s \<or> ?P2 s \<or> ?P3 s")
proof -
from a2 obtain p__Inv4 where a2:"p__Inv4\<le>N\<and>f=inv__5 p__Inv4" apply fastforce done
have "?P3 s"
apply (cut_tac a1 a2 , simp, rule_tac x="(neg (andForm (eqn (IVar (Field (Para (Field (Ident ''Sta'') ''UniMsg'') p__Inv4) ''Cmd'')) (Const UNI_PutX)) (eqn (IVar (Field (Field (Ident ''Sta'') ''HomeUniMsg'') ''Cmd'')) (Const UNI_PutX))))" in exI, auto) done
then show "invHoldForRule s f r (invariants N)" by auto
qed
lemma n_NI_Remote_GetX_PutX_HomeVsinv__5:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_PutX_Home dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_WbVsinv__5:
assumes a1: "r=n_NI_Wb " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_StoreVsinv__5:
assumes a1: "\<exists> src data. src\<le>N\<and>data\<le>N\<and>r=n_Store src data" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_3Vsinv__5:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_3 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_1Vsinv__5:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_1 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_GetX__part__1Vsinv__5:
assumes a1: "r=n_PI_Local_GetX_GetX__part__1 " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_GetX_GetX__part__0Vsinv__5:
assumes a1: "r=n_PI_Local_GetX_GetX__part__0 " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_ReplaceVsinv__5:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_PI_Remote_Replace src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_Store_HomeVsinv__5:
assumes a1: "\<exists> data. data\<le>N\<and>r=n_Store_Home data" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_existsVsinv__5:
assumes a1: "\<exists> src pp. src\<le>N\<and>pp\<le>N\<and>src~=pp\<and>r=n_NI_InvAck_exists src pp" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Remote_PutXVsinv__5:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_PI_Remote_PutX dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_Get_Put_HomeVsinv__5:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Put_Home dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvVsinv__5:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Inv dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ShWbVsinv__5:
assumes a1: "r=n_NI_ShWb N " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_ReplaceVsinv__5:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_Replace src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_GetX_Nak_HomeVsinv__5:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_GetX_Nak_Home dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Remote_Get_Nak_HomeVsinv__5:
assumes a1: "\<exists> dst. dst\<le>N\<and>r=n_NI_Remote_Get_Nak_Home dst" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_exists_HomeVsinv__5:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_exists_Home src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Replace_HomeVsinv__5:
assumes a1: "r=n_NI_Replace_Home " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Nak_ClearVsinv__5:
assumes a1: "r=n_NI_Nak_Clear " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_PI_Local_Get_GetVsinv__5:
assumes a1: "r=n_PI_Local_Get_Get " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_Nak_HomeVsinv__5:
assumes a1: "r=n_NI_Nak_Home " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_InvAck_2Vsinv__5:
assumes a1: "\<exists> src. src\<le>N\<and>r=n_NI_InvAck_2 N src" and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
lemma n_NI_FAckVsinv__5:
assumes a1: "r=n_NI_FAck " and
a2: "(\<exists> p__Inv4. p__Inv4\<le>N\<and>f=inv__5 p__Inv4)"
shows "invHoldForRule s f r (invariants N)"
apply (rule noEffectOnRule, cut_tac a1 a2, auto) done
end
|
function [EToE,EToF]= tiConnect2D(EToV)
% function [EToE,EToF]= tiConnect2D(EToV)
% Purpose: triangle face connect algorithm due to Toby Isaac
Nfaces=3;
K = size(EToV,1);
Nnodes = max(max(EToV));
% create list of all faces 1, then 2, & 3
fnodes = [EToV(:,[1,2]);EToV(:,[2,3]);EToV(:,[3,1])];
fnodes = sort(fnodes,2)-1;
% set up default element-to-element and element-to-face connectivity
EToE= (1:K)'*ones(1,Nfaces); EToF= ones(K,1)*(1:Nfaces);
% uniquely number each set of three faces by their node numbers
id = fnodes(:,1)*Nnodes + fnodes(:,2)+1;
spNodeToNode=[id, (1:Nfaces*K)', EToE(:), EToF(:)];
% Now we sort by global face number.
sorted=sortrows(spNodeToNode,1);
% find matches in the sorted face list
[indices,dummy]=find( sorted(1:(end-1),1)==sorted(2:end,1) );
% make links reflexive
matchL = [sorted(indices,:) ;sorted(indices+1,:)];
matchR = [sorted(indices+1,:) ;sorted(indices,:)];
% insert matches
EToE(matchL(:,2)) = matchR(:,3); EToF(matchL(:,2)) = matchR(:,4);
return;
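% Example (a small sketch, not from the original code): for two triangles
% sharing the edge 2-3, EToV = [1 2 3; 2 4 3], the call
% [EToE,EToF] = tiConnect2D(EToV)
% gives EToE = [1 2 1; 2 2 1] and EToF = [1 3 3; 1 2 2], i.e. face 2 of
% element 1 matches face 3 of element 2, while unmatched (boundary) faces
% keep the default and point back to the element itself.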
|
! Triangle, pentagonal, and hexagonal numbers
! are generated by the following
! formulae:
! Triangle Tn=n(n+1)/2 1, 3, 6, 10, 15, ...
! Pentagonal Pn=n(3n-1)/2 1, 5, 12, 22, 35, ...
! Hexagonal Hn=n(2n-1) 1, 6, 15, 28, 45, ...
! It can be verified that:
! T[285] = P[165] = H[143] = 40755.
! Find the next triangle number that
! is also pentagonal and hexagonal.
! Project Euler: 45
! Answer: 1533776805
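! Sanity check of the stated identity: T(285) = 285*286/2 = 40755,
! P(165) = 165*(3*165-1)/2 = 165*247 = 40755, and
! H(143) = 143*(2*143-1) = 143*285 = 40755.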
program main
use euler
implicit none
integer (int64), parameter :: START = 40755;
integer (int64) :: n, tri;
n = START;
do while(.TRUE.)
tri = n;
call triangle(tri);
if(pentagonal(tri) .AND. hexagonal(tri)) then
exit
endif
call inc(n, 1_int64);
end do
call printint(tri);
contains
pure subroutine triangle(n)
! Note: this computes n*(n-1)/2, i.e. T(n-1) rather than the T(n) = n(n+1)/2
! quoted in the header; since n is only a running counter starting well below
! the answer's index, the shifted index does not affect the result.
integer (int64), intent(inout) :: n;
n = (n ** 2 - n) / 2;
end subroutine triangle
! Utilise the fact that we can test whether a number x is pentagonal by
! checking that 24x + 1 is a perfect square and, restricting to the
! non-generalised solutions (n >= 1), that sqrt(24x + 1) mod 6 == 5.
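! Worked example: for x = 40755, 24x + 1 = 978121 = 989**2 and
! mod(989, 6) == 5, so 40755 is pentagonal; indeed (989 + 1)/6 = 165,
! matching P(165) = 40755 above.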
pure function pentagonal(x)
integer (int64), intent(in) :: x;
integer (int64) :: tmp;
logical :: pentagonal;
tmp = 24 * x + 1;
pentagonal = (perfect_square(tmp) .AND. &
mod(int(sqrt(real(tmp))), 6) == 5)
end function pentagonal
pure function hexagonal(x)
integer (int64), intent(in) :: x;
logical :: hexagonal;
real :: tmp;
tmp = (sqrt(real(8 * x + 1)) + 1) / 4;
hexagonal = is_natural(tmp)
end function hexagonal
end program main
|
-- Solutions to ExerciseSession2
{-# OPTIONS --cubical #-}
module SolutionsSession2 where
open import Part1
open import Part2
open import ExerciseSession1
-- Exercise 1
JEq : {x : A} (P : (z : A) → x ≡ z → Type ℓ'')
(d : P x refl) → J P d refl ≡ d
JEq P p d = transportRefl p d
-- Exercise 2
isContr→isProp : isContr A → isProp A
isContr→isProp (x , p) a b = sym (p a) ∙ p b
-- Exercise 3
isProp→isProp' : isProp A → isProp' A
isProp→isProp' p x y = p x y , isProp→isSet p _ _ (p x y)
-- Exercise 4
isContr→isContr≡ : isContr A → (x y : A) → isContr (x ≡ y)
isContr→isContr≡ h = isProp→isProp' (isContr→isProp h)
-- Exercise 5
fromPathP : {A : I → Type ℓ} {x : A i0} {y : A i1}
→ PathP A x y
→ transport (λ i → A i) x ≡ y
fromPathP {A = A} p i = transp (λ j → A (i ∨ j)) i (p i)
-- The converse is harder to prove so we give it:
toPathP : {A : I → Type ℓ} {x : A i0} {y : A i1}
→ transport (λ i → A i) x ≡ y
→ PathP A x y
toPathP {A = A} {x = x} p i =
hcomp (λ j → λ { (i = i0) → x
; (i = i1) → p j })
(transp (λ j → A (i ∧ j)) (~ i) x)
-- Exercise 6
Σ≡Prop : {B : A → Type ℓ'} {u v : Σ A B} (h : (x : A) → isProp (B x))
→ (p : fst u ≡ fst v) → u ≡ v
Σ≡Prop {B = B} {u = u} {v = v} h p =
ΣPathP (p , toPathP (h _ (transport (λ i → B (p i)) (snd u)) (snd v)))
-- Exercise 7 (thanks Loïc for the slick proof!)
isPropIsContr : isProp (isContr A)
isPropIsContr (c0 , h0) (c1 , h1) j =
h0 c1 j , λ y i → hcomp (λ k → λ { (i = i0) → h0 (h0 c1 j) k;
(i = i1) → h0 y k;
(j = i0) → h0 (h0 y i) k;
(j = i1) → h0 (h1 y i) k}) c0
-- Exercises about Part 3:
-- Exercise 8 (a bit longer, but very good):
open import Cubical.Data.Nat
open import Cubical.Data.Int hiding (addEq ; subEq)
-- Compose sucPathInt with itself n times. Transporting along this
-- will be addition, transporting with it backwards will be subtraction.
-- a) Define a path "addEq n" by composing sucPathInt with itself n
-- times.
addEq : ℕ → Int ≡ Int
addEq zero = refl
addEq (suc n) = (addEq n) ∙ sucPathInt
-- b) Define another path "subEq n" by composing "sym sucPathInt" with
-- itself n times.
subEq : ℕ → Int ≡ Int
subEq zero = refl
subEq (suc n) = (subEq n) ∙ sym sucPathInt
-- c) Define addition on integers by pattern-matching and transporting
-- along addEq/subEq appropriately.
_+Int_ : Int → Int → Int
m +Int pos n = transport (addEq n) m
m +Int negsuc n = transport (subEq (suc n)) m
-- d) Do some concrete computations using _+Int_ (this would not work
-- in HoTT as the transport would be stuck!)
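-- For instance (a sketch, not part of the original solutions), one would
-- expect the following to hold by refl, since transport along sucPathInt
-- computes on Int literals in cubical Agda:
--
-- _ : pos 2 +Int pos 3 ≡ pos 5
-- _ = refl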
-- Exercise 9: prove that hSet is not an hSet
open import Cubical.Data.Bool renaming (notEq to notPath)
open import Cubical.Data.Empty
-- Just define hSets of level 0 for simplicity
hSet : Type₁
hSet = Σ[ A ∈ Type₀ ] isSet A
-- Bool is an hSet
BoolSet : hSet
BoolSet = Bool , isSetBool
notPath≢refl : (notPath ≡ refl) → ⊥
notPath≢refl e = true≢false (transport (λ i → transport (e i) true ≡ false) refl)
¬isSet-hSet : isSet hSet → ⊥
¬isSet-hSet h = notPath≢refl (cong (cong fst) (h BoolSet BoolSet p refl))
where
p : BoolSet ≡ BoolSet
p = Σ≡Prop (λ A → isPropIsSet {A = A}) notPath
-- Exercise 10: equivalence between FinData and Fin
-- Thanks to Elies for the PR with the code. On the development
-- version of the library there is now:
--
-- open import Cubical.Data.Fin using (FinData≡Fin)
|
-- ----------------------------------------------------------- [ DataTypes.idr ]
-- Module : Chapter.DataTypes
-- Description : Definitions from Chapter 4 of Edwin Brady's book,
-- "Type-Driven Development with Idris."
-- --------------------------------------------------------------------- [ EOH ]
module Chapter.DataTypes
%access export
-- ------------------------------------------------------- [ 4.1.2 Union Types ]
public export
data Shape = Triangle Double Double
| Rectangle Double Double
| Circle Double
%name Shape shape, shape1, shape2
namespace Shape
||| Calculate the area of a shape.
area : Shape -> Double
area (Triangle base height) = 0.5 * base * height
area (Rectangle width height) = width * height
area (Circle radius) = pi * radius * radius
-- --------------------------------------------------- [ 4.1.3 Recursive Types ]
public export
data Picture = ||| A primitive shape.
Primitive Shape
| ||| A combination of two other pictures.
Combine Picture Picture
| ||| A picture rotated through an angle.
Rotate Double Picture
| ||| A picture translated to a different location.
Translate Double Double Picture
%name Picture pic, pic1, pic2
namespace Picture
area : Picture -> Double
area (Primitive shape) = area shape
area (Combine pic pic1) = area pic + area pic1
area (Rotate _ pic) = area pic
area (Translate _ _ pic) = area pic
-- ------------------------------------------------ [ 4.1.4 Generic Data Types ]
||| A binary search tree.
public export
data Tree elem = ||| A tree with no data.
Empty
| ||| A node with a left subtree, a value, and a right subtree.
Node (Tree elem) elem (Tree elem)
%name Tree tree, tree1
||| Insert a value into a binary search tree.
||| @ x a value to insert
||| @ tree a binary search tree to insert into
insert : Ord elem => (x : elem) -> (tree : Tree elem) -> Tree elem
insert x Empty = Node Empty x Empty
insert x orig@(Node left val right)
= case compare x val of
LT => Node (insert x left) val right
EQ => orig
GT => Node left val (insert x right)
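-- For example (hypothetical REPL session, not part of the chapter code):
--
-- λΠ> insert 3 (insert 5 (insert 1 Empty))
-- Node Empty 1 (Node (Node Empty 3 Empty) 5 Empty) : Tree Integer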
-- --------------------------------------------------------------------- [ EOF ]
|
= = = Biographies and studies = = =
|
-- Adder.idr
-- Demonstrate functions with variable number of arguments by an
-- adder function
AdderType : (numargs : Nat) -> Type -> Type
AdderType Z numType = numType
AdderType (S k) numType = (next : numType) -> AdderType k numType
||| An adder function for variable number of arguments
adder : Num numType => (numargs : Nat) -> (acc : numType) -> AdderType numargs numType
adder Z acc = acc
adder (S k) acc = \next => adder k (next + acc)
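-- For example (hypothetical REPL session): adder 3 0 expects three more
-- numeric arguments, so
--
-- λΠ> the Integer (adder 3 0 1 2 3)
-- 6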
|
import ggg
t : Int
t = yyy
|
proposition homotopic_paths_linv: assumes "path p" "path_image p \<subseteq> s" shows "homotopic_paths s (reversepath p +++ p) (linepath (pathfinish p) (pathfinish p))" |
[STATEMENT]
lemma size_ok_size_new_small: "size_ok (States dir big small) \<Longrightarrow> 0 < size_new small"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. size_ok (States dir big small) \<Longrightarrow> 0 < size_new small
[PROOF STEP]
by auto |
(* Title: JinjaThreads/MM/JMM_Typesafe.thy
Author: Andreas Lochbihler
*)
header {* \isaheader{Type-safety proof for the Java memory model} *}
theory JMM_Typesafe
imports
JMM_Framework
begin
text {*
Create a dynamic list @{text "heap_independent"} of theorems for replacing
heap-dependent constants by heap-independent ones.
*}
ML {*
structure Heap_Independent_Rules = Named_Thms
(
val name = @{binding heap_independent}
val description = "Simplification rules for heap-independent constants"
)
*}
setup {* Heap_Independent_Rules.setup *}
locale heap_base' =
h!: heap_base
addr2thread_id thread_id2addr
spurious_wakeups
empty_heap allocate "\<lambda>_. typeof_addr" heap_read heap_write
for addr2thread_id :: "('addr :: addr) \<Rightarrow> 'thread_id"
and thread_id2addr :: "'thread_id \<Rightarrow> 'addr"
and spurious_wakeups :: bool
and empty_heap :: "'heap"
and allocate :: "'heap \<Rightarrow> htype \<Rightarrow> ('heap \<times> 'addr) set"
and typeof_addr :: "'addr \<rightharpoonup> htype"
and heap_read :: "'heap \<Rightarrow> 'addr \<Rightarrow> addr_loc \<Rightarrow> 'addr val \<Rightarrow> bool"
and heap_write :: "'heap \<Rightarrow> 'addr \<Rightarrow> addr_loc \<Rightarrow> 'addr val \<Rightarrow> 'heap \<Rightarrow> bool"
begin
definition typeof_h :: "'addr val \<Rightarrow> ty option"
where "typeof_h = h.typeof_h undefined"
lemma typeof_h_conv_typeof_h [heap_independent, iff]: "h.typeof_h h = typeof_h"
by(rule ext)(case_tac x, simp_all add: typeof_h_def)
lemmas typeof_h_simps [simp] = h.typeof_h.simps [unfolded heap_independent]
definition cname_of :: "'addr \<Rightarrow> cname"
where "cname_of = h.cname_of undefined"
lemma cname_of_conv_cname_of [heap_independent, iff]: "h.cname_of h = cname_of"
by(simp add: cname_of_def h.cname_of_def[abs_def])
definition addr_loc_type :: "'m prog \<Rightarrow> 'addr \<Rightarrow> addr_loc \<Rightarrow> ty \<Rightarrow> bool"
where "addr_loc_type P = h.addr_loc_type P undefined"
notation addr_loc_type ("_ \<turnstile> _@_ : _" [50, 50, 50, 50] 51)
lemma addr_loc_type_conv_addr_loc_type [heap_independent, iff]:
"h.addr_loc_type P h = addr_loc_type P"
by(simp add: addr_loc_type_def h.addr_loc_type_def)
lemmas addr_loc_type_cases [cases pred: addr_loc_type] =
h.addr_loc_type.cases[unfolded heap_independent]
lemmas addr_loc_type_intros = h.addr_loc_type.intros[unfolded heap_independent]
definition typeof_addr_loc :: "'m prog \<Rightarrow> 'addr \<Rightarrow> addr_loc \<Rightarrow> ty"
where "typeof_addr_loc P = h.typeof_addr_loc P undefined"
lemma typeof_addr_loc_conv_typeof_addr_loc [heap_independent, iff]:
"h.typeof_addr_loc P h = typeof_addr_loc P"
by(simp add: typeof_addr_loc_def h.typeof_addr_loc_def[abs_def])
definition conf :: "'a prog \<Rightarrow> 'addr val \<Rightarrow> ty \<Rightarrow> bool"
where "conf P \<equiv> h.conf P undefined"
notation conf ("_ \<turnstile> _ :\<le> _" [51,51,51] 50)
lemma conf_conv_conf [heap_independent, iff]: "h.conf P h = conf P"
by(simp add: conf_def heap_base.conf_def[abs_def])
lemmas defval_conf [simp] = h.defval_conf[unfolded heap_independent]
definition lconf :: "'m prog \<Rightarrow> (vname \<rightharpoonup> 'addr val) \<Rightarrow> (vname \<rightharpoonup> ty) \<Rightarrow> bool"
where "lconf P = h.lconf P undefined"
notation lconf ("_ \<turnstile> _ '(:\<le>') _" [51,51,51] 50)
lemma lconf_conv_lconf [heap_independent, iff]: "h.lconf P h = lconf P"
by(simp add: lconf_def h.lconf_def[abs_def])
definition confs :: "'m prog \<Rightarrow> 'addr val list \<Rightarrow> ty list \<Rightarrow> bool"
where "confs P = h.confs P undefined"
notation confs ("_ \<turnstile> _ [:\<le>] _" [51,51,51] 50)
lemma confs_conv_confs [heap_independent, iff]: "h.confs P h = confs P"
by(simp add: confs_def)
definition tconf :: "'m prog \<Rightarrow> 'thread_id \<Rightarrow> bool"
where "tconf P = h.tconf P undefined"
notation tconf ("_ \<turnstile> _ \<surd>t" [51,51] 50)
definition vs_conf :: "'m prog \<Rightarrow> ('addr \<times> addr_loc \<Rightarrow> 'addr val set) \<Rightarrow> bool"
where "vs_conf P = h.vs_conf P undefined"
lemma vs_conf_conv_vs_conf [heap_independent, iff]: "h.vs_conf P h = vs_conf P"
by(simp add: vs_conf_def h.vs_conf_def[abs_def])
lemmas vs_confI = h.vs_confI[unfolded heap_independent]
lemmas vs_confD = h.vs_confD[unfolded heap_independent]
text {*
Use non-speculativity to express that only type-correct values are read.
*}
primrec vs_type_all :: "'m prog \<Rightarrow> 'addr \<times> addr_loc \<Rightarrow> 'addr val set"
where "vs_type_all P (ad, al) = {v. \<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T}"
lemma vs_conf_vs_type_all [simp]: "vs_conf P (vs_type_all P)"
by(rule h.vs_confI[unfolded heap_independent])(simp)
lemma w_addrs_vs_type_all: "w_addrs (vs_type_all P) \<subseteq> dom typeof_addr"
by(auto simp add: w_addrs_def h.conf_def[unfolded heap_independent])
lemma w_addrs_vs_type_all_in_vs_type_all:
"(\<Union>ad \<in> w_addrs (vs_type_all P). {(ad, al)|al. \<exists>T. P \<turnstile> ad@al : T}) \<subseteq> {adal. vs_type_all P adal \<noteq> {}}"
by(auto simp add: w_addrs_def vs_type_all_def intro: defval_conf)
declare vs_type_all.simps [simp del]
lemmas vs_conf_insert_iff = h.vs_conf_insert_iff[unfolded heap_independent]
end
locale heap' =
h!: heap
addr2thread_id thread_id2addr
spurious_wakeups
empty_heap allocate "\<lambda>_. typeof_addr" heap_read heap_write
P
for addr2thread_id :: "('addr :: addr) \<Rightarrow> 'thread_id"
and thread_id2addr :: "'thread_id \<Rightarrow> 'addr"
and spurious_wakeups :: bool
and empty_heap :: "'heap"
and allocate :: "'heap \<Rightarrow> htype \<Rightarrow> ('heap \<times> 'addr) set"
and typeof_addr :: "'addr \<rightharpoonup> htype"
and heap_read :: "'heap \<Rightarrow> 'addr \<Rightarrow> addr_loc \<Rightarrow> 'addr val \<Rightarrow> bool"
and heap_write :: "'heap \<Rightarrow> 'addr \<Rightarrow> addr_loc \<Rightarrow> 'addr val \<Rightarrow> 'heap \<Rightarrow> bool"
and P :: "'m prog"
sublocale heap' < heap_base' .
context heap' begin
lemma vs_conf_w_value_WriteMemD:
"\<lbrakk> vs_conf P (w_value P vs ob); ob = NormalAction (WriteMem ad al v) \<rbrakk>
\<Longrightarrow> \<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
by(auto elim: vs_confD)
lemma vs_conf_w_values_WriteMemD:
"\<lbrakk> vs_conf P (w_values P vs obs); NormalAction (WriteMem ad al v) \<in> set obs \<rbrakk>
\<Longrightarrow> \<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
apply(induct obs arbitrary: vs)
apply(auto 4 3 elim: vs_confD intro: w_values_mono[THEN subsetD])
done
lemma w_values_vs_type_all_start_heap_obs:
assumes wf: "wf_syscls P"
shows "w_values P (vs_type_all P) (map snd (lift_start_obs h.start_tid h.start_heap_obs)) = vs_type_all P"
(is "?lhs = ?rhs")
proof(rule antisym, rule le_funI, rule subsetI)
fix adal v
assume v: "v \<in> ?lhs adal"
obtain ad al where adal: "adal = (ad, al)" by(cases adal)
show "v \<in> ?rhs adal"
proof(rule ccontr)
assume v': "\<not> ?thesis"
from in_w_valuesD[OF v[unfolded adal] this[unfolded adal]]
obtain obs' wa obs''
where eq: "map snd (lift_start_obs h.start_tid h.start_heap_obs) = obs' @ wa # obs''"
and "write": "is_write_action wa"
and loc: "(ad, al) \<in> action_loc_aux P wa"
and vwa: "value_written_aux P wa al = v"
by blast+
from "write" show False
proof cases
case (WriteMem ad' al' v')
with vwa loc eq have "WriteMem ad al v \<in> set h.start_heap_obs"
by(auto simp add: map_eq_append_conv Cons_eq_append_conv lift_start_obs_def)
from h.start_heap_write_typeable[OF this] v' adal
show ?thesis by(auto simp add: vs_type_all_def)
next
case (NewHeapElem ad' hT)
with vwa loc eq have "NewHeapElem ad hT \<in> set h.start_heap_obs"
by(auto simp add: map_eq_append_conv Cons_eq_append_conv lift_start_obs_def)
hence "typeof_addr ad = \<lfloor>hT\<rfloor>"
by(rule h.NewHeapElem_start_heap_obsD[OF wf])
with v' adal loc vwa NewHeapElem show ?thesis
by(auto simp add: vs_type_all_def intro: addr_loc_type_intros h.addr_loc_default_conf[unfolded heap_independent])
qed
qed
qed(rule w_values_greater)
end
lemma lprefix_lappend2I: "lprefix xs ys \<Longrightarrow> lprefix xs (lappend ys zs)"
by(auto simp add: lappend_assoc lprefix_conv_lappend)
locale known_addrs_typing' =
h!: known_addrs_typing
addr2thread_id thread_id2addr
spurious_wakeups
empty_heap allocate "\<lambda>_. typeof_addr" heap_read heap_write
allocated known_addrs
final r wfx
P
for addr2thread_id :: "('addr :: addr) \<Rightarrow> 'thread_id"
and thread_id2addr :: "'thread_id \<Rightarrow> 'addr"
and spurious_wakeups :: bool
and empty_heap :: "'heap"
and allocate :: "'heap \<Rightarrow> htype \<Rightarrow> ('heap \<times> 'addr) set"
and typeof_addr :: "'addr \<rightharpoonup> htype"
and heap_read :: "'heap \<Rightarrow> 'addr \<Rightarrow> addr_loc \<Rightarrow> 'addr val \<Rightarrow> bool"
and heap_write :: "'heap \<Rightarrow> 'addr \<Rightarrow> addr_loc \<Rightarrow> 'addr val \<Rightarrow> 'heap \<Rightarrow> bool"
and allocated :: "'heap \<Rightarrow> 'addr set"
and known_addrs :: "'thread_id \<Rightarrow> 'x \<Rightarrow> 'addr set"
and final :: "'x \<Rightarrow> bool"
and r :: "('addr, 'thread_id, 'x, 'heap, 'addr, ('addr, 'thread_id) obs_event) semantics" ("_ \<turnstile> _ -_\<rightarrow> _" [50,0,0,50] 80)
and wfx :: "'thread_id \<Rightarrow> 'x \<Rightarrow> 'heap \<Rightarrow> bool"
and P :: "'md prog"
+
assumes NewHeapElem_typed: -- {* Should this be moved to known\_addrs\_typing? *}
"\<lbrakk> t \<turnstile> (x, h) -ta\<rightarrow> (x', h'); NewHeapElem ad CTn \<in> set \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>; typeof_addr ad \<noteq> None \<rbrakk>
\<Longrightarrow> typeof_addr ad = \<lfloor>CTn\<rfloor>"
sublocale known_addrs_typing' < heap' by unfold_locales
context known_addrs_typing' begin
lemma known_addrs_typeable_in_vs_type_all:
"h.if.known_addrs_state s \<subseteq> dom typeof_addr
\<Longrightarrow> (\<Union>a \<in> h.if.known_addrs_state s. {(a, al)|al. \<exists>T. P \<turnstile> a@al : T}) \<subseteq> {adal. vs_type_all P adal \<noteq> {}}"
by(auto 4 4 dest: subsetD simp add: vs_type_all.simps intro: defval_conf)
lemma if_NewHeapElem_typed:
"\<lbrakk> t \<turnstile> xh -ta\<rightarrow>i x'h'; NormalAction (NewHeapElem ad CTn) \<in> set \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>; typeof_addr ad \<noteq> None \<rbrakk>
\<Longrightarrow> typeof_addr ad = \<lfloor>CTn\<rfloor>"
by(cases rule: h.mthr.init_fin.cases)(auto dest: NewHeapElem_typed)
lemma if_redT_NewHeapElem_typed:
"\<lbrakk> h.mthr.if.redT s (t, ta) s'; NormalAction (NewHeapElem ad CTn) \<in> set \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>; typeof_addr ad \<noteq> None \<rbrakk>
\<Longrightarrow> typeof_addr ad = \<lfloor>CTn\<rfloor>"
by(cases rule: h.mthr.if.redT.cases)(auto dest: if_NewHeapElem_typed)
lemma non_speculative_written_value_typeable:
assumes wfx_start: "ts_ok wfx (thr (h.start_state f P C M vs)) h.start_heap"
and wfP: "wf_syscls P"
and E: "E \<in> h.\<E>_start f P C M vs status"
and "write": "w \<in> write_actions E"
and adal: "(ad, al) \<in> action_loc P E w"
and ns: "non_speculative P (vs_type_all P) (lmap snd (ltake (enat w) E))"
shows "\<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> value_written P E w (ad, al) :\<le> T"
proof -
let ?start_state = "init_fin_lift_state status (h.start_state f P C M vs)"
and ?start_obs = "lift_start_obs h.start_tid h.start_heap_obs"
and ?v = "value_written P E w (ad, al)"
from "write" have iwa: "is_write_action (action_obs E w)" by cases
from E obtain E' where E': "E = lappend (llist_of ?start_obs) E'"
and \<E>: "E' \<in> h.mthr.if.\<E> ?start_state" by blast
from \<E> obtain E'' where E'': "E' = lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) E'')"
and Runs: "h.mthr.if.mthr.Runs ?start_state E''"
by-(rule h.mthr.if.\<E>.cases[OF \<E>])
have wfx': "ts_ok (init_fin_lift wfx) (thr ?start_state) (shr ?start_state)"
using wfx_start by(simp add: h.shr_start_state)
from ns E'
have ns: "non_speculative P (vs_type_all P) (lmap snd (ldropn (length (lift_start_obs h.start_tid h.start_heap_obs)) (ltake (enat w) E)))"
by(subst (asm) lappend_ltake_ldrop[where n="enat (length (lift_start_obs h.start_tid h.start_heap_obs))", symmetric])(simp add: non_speculative_lappend min_def ltake_lappend1 w_values_vs_type_all_start_heap_obs[OF wfP] ldrop_enat split: split_if_asm)
show ?thesis
proof(cases "w < length ?start_obs")
case True
hence in_start: "action_obs E w \<in> set (map snd ?start_obs)"
unfolding in_set_conv_nth E' by(simp add: lnth_lappend action_obs_def map_nth exI[where x="w"])
from iwa show ?thesis
proof(cases)
case (WriteMem ad' al' v')
with adal have "ad' = ad" "al' = al" "?v = v'" by(simp_all add: value_written.simps)
with WriteMem in_start have "WriteMem ad al ?v \<in> set h.start_heap_obs" by auto
thus ?thesis by(rule h.start_heap_write_typeable[unfolded heap_independent])
next
case (NewHeapElem ad' CTn)
with adal have [simp]: "ad' = ad" by auto
with NewHeapElem in_start have "NewHeapElem ad CTn \<in> set h.start_heap_obs" by auto
with wfP have "typeof_addr ad = \<lfloor>CTn\<rfloor>" by(rule h.NewHeapElem_start_heap_obsD)
with adal NewHeapElem show ?thesis
by(cases al)(auto simp add: value_written.simps intro: addr_loc_type_intros h.addr_loc_default_conf[unfolded heap_independent])
qed
next
case False
def w' == "w - length ?start_obs"
from "write" False w'_def have w'_len: "enat w' < llength E'"
by(cases "llength E'")(auto simp add: actions_def E' elim: write_actions.cases)
with Runs obtain m_w n_w t_w ta_w
where E'_w: "lnth E' w' = (t_w, \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub> ! n_w)"
and n_w: "n_w < length \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub>"
and m_w: "enat m_w < llength E''"
and w_sum: "w' = (\<Sum>i<m_w. length \<lbrace>snd (lnth E'' i)\<rbrace>\<^bsub>o\<^esub>) + n_w"
and E''_m_w: "lnth E'' m_w = (t_w, ta_w)"
unfolding E'' by(rule h.mthr.if.actions_\<E>E_aux)
from E'_w have obs_w: "action_obs E w = \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub> ! n_w"
using False E' w'_def by(simp add: action_obs_def lnth_lappend)
let ?E'' = "ldropn (Suc m_w) E''"
let ?m_E'' = "ltake (enat m_w) E''"
have E'_unfold: "E'' = lappend ?m_E'' (LCons (lnth E'' m_w) ?E'')"
unfolding ldropn_Suc_conv_ldropn[OF m_w] by simp
hence "h.mthr.if.mthr.Runs ?start_state (lappend ?m_E'' (LCons (lnth E'' m_w) ?E''))"
using Runs by simp
then obtain \<sigma>' where \<sigma>_\<sigma>': "h.mthr.if.mthr.Trsys ?start_state (list_of ?m_E'') \<sigma>'"
and Runs': "h.mthr.if.mthr.Runs \<sigma>' (LCons (lnth E'' m_w) ?E'')"
by(rule h.mthr.if.mthr.Runs_lappendE) simp
from Runs' obtain \<sigma>''' where red_w: "h.mthr.if.redT \<sigma>' (t_w, ta_w) \<sigma>'''"
and Runs'': "h.mthr.if.mthr.Runs \<sigma>''' ?E''"
unfolding E''_m_w by cases
let ?EE'' = "lmap snd (lappend (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) ?m_E'')) (llist_of (map (Pair t_w) (take (n_w + 1) \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub>))))"
have len_EE'': "llength ?EE'' = enat (w' + 1)" using n_w m_w
apply(simp add: w_sum)
apply(subst llength_lconcat_lfinite_conv_sum)
apply(simp_all add: split_beta plus_enat_simps(1)[symmetric] add_Suc_right[symmetric] del: plus_enat_simps(1) add_Suc_right)
apply(subst setsum_hom[symmetric, where f=enat])
apply(simp_all add: zero_enat_def min_def le_Suc_eq)
apply(rule setsum.cong)
apply(auto simp add: lnth_ltake less_trans[where y="enat m_w"])
done
have prefix: "lprefix ?EE'' (lmap snd E')" unfolding E''
by(subst (2) E'_unfold)(rule lmap_lprefix, clarsimp simp add: lmap_lappend_distrib E''_m_w lprefix_lappend2I[OF lprefix_llist_ofI[OF exI[where x="map (Pair t_w) (drop (n_w + 1) \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub>)"]]] map_append[symmetric])
from iwa False have iwa': "is_write_action (action_obs E' w')" by(simp add: E' action_obs_def lnth_lappend w'_def)
from ns False
have "non_speculative P (vs_type_all P) (lmap snd (ltake (enat w') E'))"
by(simp add: E' ltake_lappend lmap_lappend_distrib non_speculative_lappend ldropn_lappend2 w'_def)
with iwa'
have "non_speculative P (vs_type_all P) (lappend (lmap snd (ltake (enat w') E')) (LCons (action_obs E' w') LNil))"
by cases(simp_all add: non_speculative_lappend)
also have "lappend (lmap snd (ltake (enat w') E')) (LCons (action_obs E' w') LNil) = lmap snd (ltake (enat (w' + 1)) E')"
using w'_len by(simp add: ltake_Suc_conv_snoc_lnth lmap_lappend_distrib action_obs_def)
also {
have "lprefix (lmap snd (ltake (enat (w' + 1)) E')) (lmap snd E')" by(rule lmap_lprefix) simp
with prefix have "lprefix ?EE'' (lmap snd (ltake (enat (w' + 1)) E')) \<or>
lprefix (lmap snd (ltake (enat (w' + 1)) E')) ?EE''"
by(rule lprefix_down_linear)
moreover have "llength (lmap snd (ltake (enat (w' + 1)) E')) = enat (w' + 1)"
using w'_len by(cases "llength E'") simp_all
ultimately have "lmap snd (ltake (enat (w' + 1)) E') = ?EE''"
using len_EE'' by(auto dest: lprefix_llength_eq_imp_eq) }
finally
have ns1: "non_speculative P (vs_type_all P) (llist_of (concat (map (\<lambda>(t, ta). \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>) (list_of ?m_E''))))"
and ns2: "non_speculative P (w_values P (vs_type_all P) (map snd (list_of (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) ?m_E''))))) (llist_of (take (Suc n_w) \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub>))"
by(simp_all add: lmap_lappend_distrib non_speculative_lappend split_beta lconcat_llist_of[symmetric] lmap_lconcat llist.map_comp o_def split_def list_of_lmap[symmetric] del: list_of_lmap)
have "vs_conf P (vs_type_all P)" by simp
with \<sigma>_\<sigma>' wfx' ns1
have wfx': "ts_ok (init_fin_lift wfx) (thr \<sigma>') (shr \<sigma>')"
and vs_conf: "vs_conf P (w_values P (vs_type_all P) (concat (map (\<lambda>(t, ta). \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>) (list_of ?m_E''))))"
by(rule h.if_RedT_non_speculative_invar[unfolded h.mthr.if.RedT_def heap_independent])+
have "concat (map (\<lambda>(t, ta). \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>) (list_of ?m_E'')) = map snd (list_of (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) ?m_E'')))"
by(simp add: split_def lmap_lconcat llist.map_comp o_def list_of_lconcat map_concat)
with vs_conf have "vs_conf P (w_values P (vs_type_all P) \<dots>)" by simp
with red_w wfx' ns2
have vs_conf': "vs_conf P (w_values P (w_values P (vs_type_all P) (map snd (list_of (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) ?m_E''))))) (take (Suc n_w) \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub>))"
(is "vs_conf _ ?vs'")
by(rule h.if_redT_non_speculative_vs_conf[unfolded heap_independent])
from len_EE'' have "enat w' < llength ?EE''" by simp
from w'_len have "lnth ?EE'' w' = action_obs E' w'"
using lprefix_lnthD[OF prefix `enat w' < llength ?EE''`] by(simp add: action_obs_def)
hence "\<dots> \<in> lset ?EE''" using `enat w' < llength ?EE''` unfolding lset_conv_lnth by(auto intro!: exI)
also have "\<dots> \<subseteq> set (map snd (list_of (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) ?m_E''))) @ take (Suc n_w) \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub>)"
by(auto 4 4 intro: rev_image_eqI rev_bexI simp add: split_beta lset_lconcat_lfinite dest: lset_lappend[THEN subsetD])
also have "action_obs E' w' = action_obs E w"
using False by(simp add: E' w'_def lnth_lappend action_obs_def)
also note obs_w_in_set = calculation and calculation = nothing
from iwa have "?v \<in> w_values P (vs_type_all P) (map snd (list_of (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) ?m_E''))) @ take (Suc n_w) \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub>) (ad, al)"
proof(cases)
case (WriteMem ad' al' v')
with adal have "ad' = ad" "al' = al" "?v = v'" by(simp_all add: value_written.simps)
with obs_w_in_set WriteMem show ?thesis
by -(rule w_values_WriteMemD, simp)
next
case (NewHeapElem ad' CTn)
with adal have [simp]: "ad' = ad" and v: "?v = addr_loc_default P CTn al"
by(auto simp add: value_written.simps)
with obs_w_in_set NewHeapElem adal show ?thesis
by(unfold v)(rule w_values_new_actionD, simp_all)
qed
hence "?v \<in> ?vs' (ad, al)" by simp
with vs_conf' show "\<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> ?v :\<le> T"
by(rule h.vs_confD[unfolded heap_independent])
qed
qed
lemma hb_read_value_typeable:
assumes wfx_start: "ts_ok wfx (thr (h.start_state f P C M vs)) h.start_heap"
(is "ts_ok wfx (thr ?start_state) _")
and wfP: "wf_syscls P"
and E: "E \<in> h.\<E>_start f P C M vs status"
and wf: "P \<turnstile> (E, ws) \<surd>"
and races: "\<And>a ad al v. \<lbrakk> enat a < llength E; action_obs E a = NormalAction (ReadMem ad al v); \<not> P,E \<turnstile> ws a \<le>hb a \<rbrakk>
\<Longrightarrow> \<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
and r: "enat a < llength E"
and read: "action_obs E a = NormalAction (ReadMem ad al v)"
shows "\<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
using r read
proof(induction a arbitrary: ad al v rule: less_induct)
case (less a)
note r = `enat a < llength E`
and read = `action_obs E a = NormalAction (ReadMem ad al v)`
show ?case
proof(cases "P,E \<turnstile> ws a \<le>hb a")
case False with r read show ?thesis by(rule races)
next
case True
note hb = this
hence ao: "E \<turnstile> ws a \<le>a a" by(rule happens_before_into_action_order)
from wf have ws: "is_write_seen P E ws" by(rule wf_exec_is_write_seenD)
from r have "a \<in> actions E" by(simp add: actions_def)
hence "a \<in> read_actions E" using read ..
from is_write_seenD[OF ws this read]
have "write": "ws a \<in> write_actions E"
and adal_w: "(ad, al) \<in> action_loc P E (ws a)"
and written: "value_written P E (ws a) (ad, al) = v" by simp_all
from "write" have iwa: "is_write_action (action_obs E (ws a))" by cases
let ?start_state = "init_fin_lift_state status (h.start_state f P C M vs)"
and ?start_obs = "lift_start_obs h.start_tid h.start_heap_obs"
show ?thesis
proof(cases "ws a < a")
case True
let ?EE'' = "lmap snd (ltake (enat (ws a)) E)"
have "non_speculative P (vs_type_all P) ?EE''"
proof(rule non_speculative_nthI)
fix i ad' al' v'
assume i: "enat i < llength ?EE''"
and nth_i: "lnth ?EE'' i = NormalAction (ReadMem ad' al' v')"
from i have "i < ws a" by simp
hence i': "i < a" using True by(simp)
moreover
with r have "enat i < llength E" by(metis enat_ord_code(2) order_less_trans)
moreover
with nth_i i `i < ws a`
have "action_obs E i = NormalAction (ReadMem ad' al' v')"
by(simp add: action_obs_def lnth_ltake ac_simps)
ultimately have "\<exists>T. P \<turnstile> ad'@al' : T \<and> P \<turnstile> v' :\<le> T" by(rule less.IH)
hence "v' \<in> vs_type_all P (ad', al')" by(simp add: vs_type_all.simps)
thus "v' \<in> w_values P (vs_type_all P) (list_of (ltake (enat i) ?EE'')) (ad', al')"
by(rule w_values_mono[THEN subsetD])
qed
with wfx_start wfP E "write" adal_w
show ?thesis unfolding written[symmetric] by(rule non_speculative_written_value_typeable)
next
case False
from E obtain E' where E': "E = lappend (llist_of ?start_obs) E'"
and \<E>: "E' \<in> h.mthr.if.\<E> ?start_state" by blast
from \<E> obtain E'' where E'': "E' = lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) E'')"
and Runs: "h.mthr.if.mthr.Runs ?start_state E''"
by-(rule h.mthr.if.\<E>.cases[OF \<E>])
have wfx': "ts_ok (init_fin_lift wfx) (thr ?start_state) (shr ?start_state)"
using wfx_start by(simp add: h.shr_start_state)
have a_start: "\<not> a < length ?start_obs"
proof
assume "a < length ?start_obs"
with read have "NormalAction (ReadMem ad al v) \<in> snd ` set ?start_obs"
unfolding set_map[symmetric] in_set_conv_nth
by(auto simp add: E' lnth_lappend action_obs_def)
hence "ReadMem ad al v \<in> set h.start_heap_obs" by auto
thus False by(simp add: h.start_heap_obs_not_Read)
qed
hence ws_a_not_le: "\<not> ws a < length ?start_obs" using False by simp
def w == "ws a - length ?start_obs"
from "write" ws_a_not_le w_def
have "enat w < llength (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) E''))"
by(cases "llength (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) E''))")(auto simp add: actions_def E' E'' elim: write_actions.cases)
with Runs obtain m_w n_w t_w ta_w
where E'_w: "lnth E' w = (t_w, \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub> ! n_w)"
and n_w: "n_w < length \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub>"
and m_w: "enat m_w < llength E''"
and w_sum: "w = (\<Sum>i<m_w. length \<lbrace>snd (lnth E'' i)\<rbrace>\<^bsub>o\<^esub>) + n_w"
and E''_m_w: "lnth E'' m_w = (t_w, ta_w)"
unfolding E'' by(rule h.mthr.if.actions_\<E>E_aux)
from E'_w have obs_w: "action_obs E (ws a) = \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub> ! n_w"
using ws_a_not_le E' w_def by(simp add: action_obs_def lnth_lappend)
let ?E'' = "ldropn (Suc m_w) E''"
let ?m_E'' = "ltake (enat m_w) E''"
have E'_unfold: "E'' = lappend ?m_E'' (LCons (lnth E'' m_w) ?E'')"
unfolding ldropn_Suc_conv_ldropn[OF m_w] by simp
hence "h.mthr.if.mthr.Runs ?start_state (lappend ?m_E'' (LCons (lnth E'' m_w) ?E''))"
using Runs by simp
then obtain \<sigma>' where \<sigma>_\<sigma>': "h.mthr.if.mthr.Trsys ?start_state (list_of ?m_E'') \<sigma>'"
and Runs': "h.mthr.if.mthr.Runs \<sigma>' (LCons (lnth E'' m_w) ?E'')"
by(rule h.mthr.if.mthr.Runs_lappendE) simp
from Runs' obtain \<sigma>''' where red_w: "h.mthr.if.redT \<sigma>' (t_w, ta_w) \<sigma>'''"
and Runs'': "h.mthr.if.mthr.Runs \<sigma>''' ?E''"
unfolding E''_m_w by cases
from "write" `a \<in> read_actions E` have "ws a \<noteq> a" by(auto dest: read_actions_not_write_actions)
with False have "ws a > a" by simp
with ao have new: "is_new_action (action_obs E (ws a))"
by(simp add: action_order_def split: split_if_asm)
then obtain CTn where obs_w': "action_obs E (ws a) = NormalAction (NewHeapElem ad CTn)"
using adal_w by cases auto
def a' == "a - length ?start_obs"
with False w_def
have "enat a' < llength (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) E''))"
by(simp add: le_less_trans[OF _ `enat w < llength (lconcat (lmap (\<lambda>(t, ta). llist_of (map (Pair t) \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>)) E''))`])
with Runs obtain m_a n_a t_a ta_a
where E'_a: "lnth E' a' = (t_a, \<lbrace>ta_a\<rbrace>\<^bsub>o\<^esub> ! n_a)"
and n_a: "n_a < length \<lbrace>ta_a\<rbrace>\<^bsub>o\<^esub>"
and m_a: "enat m_a < llength E''"
and a_sum: "a' = (\<Sum>i<m_a. length \<lbrace>snd (lnth E'' i)\<rbrace>\<^bsub>o\<^esub>) + n_a"
and E''_m_a: "lnth E'' m_a = (t_a, ta_a)"
unfolding E'' by(rule h.mthr.if.actions_\<E>E_aux)
from a_start E'_a read have obs_a: "\<lbrace>ta_a\<rbrace>\<^bsub>o\<^esub> ! n_a = NormalAction (ReadMem ad al v)"
using E' w_def by(simp add: action_obs_def lnth_lappend a'_def)
let ?E'' = "ldropn (Suc m_a) E''"
let ?m_E'' = "ltake (enat m_a) E''"
have E'_unfold: "E'' = lappend ?m_E'' (LCons (lnth E'' m_a) ?E'')"
unfolding ldropn_Suc_conv_ldropn[OF m_a] by simp
hence "h.mthr.if.mthr.Runs ?start_state (lappend ?m_E'' (LCons (lnth E'' m_a) ?E''))"
using Runs by simp
then obtain \<sigma>'' where \<sigma>_\<sigma>'': "h.mthr.if.mthr.Trsys ?start_state (list_of ?m_E'') \<sigma>''"
and Runs'': "h.mthr.if.mthr.Runs \<sigma>'' (LCons (lnth E'' m_a) ?E'')"
by(rule h.mthr.if.mthr.Runs_lappendE) simp
from Runs'' obtain \<sigma>''' where red_a: "h.mthr.if.redT \<sigma>'' (t_a, ta_a) \<sigma>'''"
and Runs'': "h.mthr.if.mthr.Runs \<sigma>''' ?E''"
unfolding E''_m_a by cases
let ?EE'' = "llist_of (concat (map (\<lambda>(t, ta). \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>) (list_of ?m_E'')))"
from m_a have "enat m_a \<le> llength E''" by simp
hence len_EE'': "llength ?EE'' = enat (a' - n_a)"
by(simp add: a_sum length_concat listsum_setsum_nth atLeast0LessThan length_list_of_conv_the_enat min_def split_beta lnth_ltake)
have prefix: "lprefix ?EE'' (lmap snd E')" unfolding E''
by(subst (2) E'_unfold)(simp add: lmap_lappend_distrib lmap_lconcat llist.map_comp o_def split_def lconcat_llist_of[symmetric] lmap_llist_of[symmetric] lprefix_lappend2I del: lmap_llist_of)
have ns: "non_speculative P (vs_type_all P) ?EE''"
proof(rule non_speculative_nthI)
fix i ad' al' v'
assume i: "enat i < llength ?EE''"
and lnth_i: "lnth ?EE'' i = NormalAction (ReadMem ad' al' v')"
and "non_speculative P (vs_type_all P) (ltake (enat i) ?EE'')"
let ?i = "i + length ?start_obs"
from i len_EE'' have "i < a'" by simp
hence i': "?i < a" by(simp add: a'_def)
moreover
hence "enat ?i < llength E" using `enat a < llength E` by(simp add: less_trans[where y="enat a"])
moreover have "enat i < llength E'" using i
by -(rule less_le_trans[OF _ lprefix_llength_le[OF prefix], simplified], simp)
from lprefix_lnthD[OF prefix i] lnth_i
have "lnth (lmap snd E') i = NormalAction (ReadMem ad' al' v')" by simp
hence "action_obs E ?i = NormalAction (ReadMem ad' al' v')" using `enat i < llength E'`
by(simp add: E' action_obs_def lnth_lappend E'')
ultimately have "\<exists>T. P \<turnstile> ad'@al' : T \<and> P \<turnstile> v' :\<le> T" by(rule less.IH)
hence "v' \<in> vs_type_all P (ad', al')" by(simp add: vs_type_all.simps)
thus "v' \<in> w_values P (vs_type_all P) (list_of (ltake (enat i) ?EE'')) (ad', al')"
by(rule w_values_mono[THEN subsetD])
qed
have "vs_conf P (vs_type_all P)" by simp
with \<sigma>_\<sigma>'' wfx' ns
have wfx'': "ts_ok (init_fin_lift wfx) (thr \<sigma>'') (shr \<sigma>'')"
and vs'': "vs_conf P (w_values P (vs_type_all P) (concat (map (\<lambda>(t, ta). \<lbrace>ta\<rbrace>\<^bsub>o\<^esub>) (list_of ?m_E''))))"
by(rule h.if_RedT_non_speculative_invar[unfolded heap_independent h.mthr.if.RedT_def])+
note red_w moreover
from n_w obs_w obs_w' have "NormalAction (NewHeapElem ad CTn) \<in> set \<lbrace>ta_w\<rbrace>\<^bsub>o\<^esub>"
unfolding in_set_conv_nth by auto
moreover
have ta_a_read: "NormalAction (ReadMem ad al v) \<in> set \<lbrace>ta_a\<rbrace>\<^bsub>o\<^esub>"
using n_a obs_a unfolding in_set_conv_nth by blast
from red_a have "\<exists>T. P \<turnstile> ad@al : T"
proof(cases)
case (redT_normal x x' h')
from wfx'' `thr \<sigma>'' t_a = \<lfloor>(x, no_wait_locks)\<rfloor>`
have "init_fin_lift wfx t_a x (shr \<sigma>'')" by(rule ts_okD)
with `t_a \<turnstile> (x, shr \<sigma>'') -ta_a\<rightarrow>i (x', h')`
show ?thesis using ta_a_read
by(rule h.init_fin_red_read_typeable[unfolded heap_independent])
next
case redT_acquire thus ?thesis using n_a obs_a ta_a_read by auto
qed
hence "typeof_addr ad \<noteq> None" by(auto elim: addr_loc_type_cases)
ultimately have "typeof_addr ad = \<lfloor>CTn\<rfloor>" by(rule if_redT_NewHeapElem_typed)
with written adal_w obs_w' show ?thesis
by(cases al)(auto simp add: value_written.simps intro: addr_loc_type_intros h.addr_loc_default_conf[unfolded heap_independent])
qed
qed
qed
theorem
assumes wfx_start: "ts_ok wfx (thr (h.start_state f P C M vs)) h.start_heap"
and wfP: "wf_syscls P"
and justified: "P \<turnstile> (E, ws) weakly_justified_by J"
and J: "range (justifying_exec \<circ> J) \<subseteq> h.\<E>_start f P C M vs status"
shows read_value_typeable_justifying:
"\<lbrakk> 0 < n; enat a < llength (justifying_exec (J n));
action_obs (justifying_exec (J n)) a = NormalAction (ReadMem ad al v) \<rbrakk>
\<Longrightarrow> \<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
and read_value_typeable_justified:
"\<lbrakk> E \<in> h.\<E>_start f P C M vs status; P \<turnstile> (E, ws) \<surd>;
enat a < llength E; action_obs E a = NormalAction (ReadMem ad al v) \<rbrakk>
\<Longrightarrow> \<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
proof -
let ?E = "\<lambda>n. justifying_exec (J n)"
and ?\<phi> = "\<lambda>n. action_translation (J n)"
and ?C = "\<lambda>n. committed (J n)"
and ?ws = "\<lambda>n. justifying_ws (J n)"
let ?\<E> = "h.\<E>_start f P C M vs status"
and ?start_obs = "lift_start_obs h.start_tid h.start_heap_obs"
{ fix a n
assume "enat a < llength (justifying_exec (J n))"
and "action_obs (justifying_exec (J n)) a = NormalAction (ReadMem ad al v)"
and "n > 0"
thus "\<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
proof(induction n arbitrary: a ad al v)
case 0 thus ?case by simp
next
case (Suc n')
def n': n \<equiv> "Suc n'"
with Suc have n: "0 < n" and a: "enat a < llength (?E n)"
and a_obs: "action_obs (?E n) a = NormalAction (ReadMem ad al v)"
by simp_all
have wf_n: "P \<turnstile> (?E n, ?ws n) \<surd>"
using justified by(simp add: justification_well_formed_def)
from J have E: "?E n \<in> ?\<E>"
and E': "?E n' \<in> ?\<E>" by auto
from a a_obs wfx_start wfP E wf_n show ?case
proof(rule hb_read_value_typeable[rotated -2])
fix a' ad' al' v'
assume a': "enat a' < llength (?E n)"
and a'_obs: "action_obs (?E n) a' = NormalAction (ReadMem ad' al' v')"
and nhb: "\<not> P,?E n \<turnstile> ?ws n a' \<le>hb a'"
from a' have "a' \<in> actions (?E n)" by(simp add: actions_def)
hence read_a': "a' \<in> read_actions (?E n)" using a'_obs ..
with justified nhb have committed': "?\<phi> n a' \<in> ?\<phi> n' ` ?C n'"
unfolding is_weakly_justified_by.simps n' uncommitted_reads_see_hb_def by blast
from justified have wfa_n: "wf_action_translation E (J n)"
and wfa_n': "wf_action_translation E (J n')" by(simp_all add: wf_action_translations_def)
hence inj_n: "inj_on (?\<phi> n) (actions (?E n))"
and inj_n': "inj_on (?\<phi> n') (actions (?E n'))"
by(blast dest: wf_action_translation_on_inj_onD)+
from justified have C_n: "?C n \<subseteq> actions (?E n)"
and C_n': "?C n' \<subseteq> actions (?E n')"
and wf_n': "P \<turnstile> (?E n', ?ws n') \<surd>"
by(simp_all add: committed_subset_actions_def justification_well_formed_def)
from justified have "?\<phi> n' ` ?C n' \<subseteq> ?\<phi> n ` ?C n"
unfolding n' by(simp add: is_commit_sequence_def)
with n' committed' have "?\<phi> n a' \<in> ?\<phi> n ` ?C n" by auto
with inj_n C_n have committed: "a' \<in> ?C n"
using `a' \<in> actions (?E n)` by(auto dest: inj_onD)
with justified read_a' have ws_committed: "ws (?\<phi> n a') \<in> ?\<phi> n ` ?C n"
by(rule weakly_justified_write_seen_hb_read_committed)
from wf_n have ws_n: "is_write_seen P (?E n) (?ws n)" by(rule wf_exec_is_write_seenD)
from is_write_seenD[OF this read_a' a'_obs]
have ws_write: "?ws n a' \<in> write_actions (?E n)"
and adal: "(ad', al') \<in> action_loc P (?E n) (?ws n a')"
and written: "value_written P (?E n) (?ws n a') (ad', al') = v'" by simp_all
def a'' \<equiv> "inv_into (actions (?E n')) (?\<phi> n') (?\<phi> n a')"
from C_n' n committed' have "?\<phi> n a' \<in> ?\<phi> n' ` actions (?E n')" by auto
hence a'': "?\<phi> n' a'' = ?\<phi> n a'"
and a''_action: "a'' \<in> actions (?E n')" using inj_n' committed' n
by(simp_all add: a''_def f_inv_into_f inv_into_into)
hence committed'': "a'' \<in> ?C n'" using committed' n inj_n' C_n' by(fastforce dest: inj_onD)
from committed committed'' wfa_n wfa_n' a'' have "action_obs (?E n') a'' \<approx> action_obs (?E n) a'"
by(auto dest!: wf_action_translation_on_actionD intro: sim_action_trans sim_action_sym)
with a'_obs committed'' C_n' have read_a'': "a'' \<in> read_actions (?E n')"
by(auto intro: read_actions.intros)
then obtain ad'' al'' v''
where a''_obs: "action_obs (?E n') a'' = NormalAction (ReadMem ad'' al'' v'')" by cases
from committed'' have "n' > 0" using justified
by(cases n')(simp_all add: is_commit_sequence_def)
then obtain n'' where n'': "n' = Suc n''" by(cases n') simp_all
from justified have wfa_n'': "wf_action_translation E (J n'')" by(simp add: wf_action_translations_def)
hence inj_n'': "inj_on (?\<phi> n'') (actions (?E n''))" by(blast dest: wf_action_translation_on_inj_onD)+
from justified have C_n'': "?C n'' \<subseteq> actions (?E n'')" by(simp add: committed_subset_actions_def)
from justified committed' committed'' n' read_a' read_a'' n
have "?\<phi> n (?ws n (inv_into (actions (?E n)) (?\<phi> n) (?\<phi> n' a''))) = ws (?\<phi> n' a'')"
by(simp add: write_seen_committed_def)
hence "?\<phi> n (?ws n a') = ws (?\<phi> n a')" using inj_n `a' \<in> actions (?E n)` by(simp add: a'')
from ws_committed obtain w where w: "ws (?\<phi> n a') = ?\<phi> n w"
and committed_w: "w \<in> ?C n" by blast
from committed_w C_n have "w \<in> actions (?E n)" by blast
hence w_def: "w = ?ws n a'" using `?\<phi> n (?ws n a') = ws (?\<phi> n a')` inj_n ws_write
unfolding w by(auto dest: inj_onD)
have committed_ws: "?ws n a' \<in> ?C n" using committed_w by(simp add: w_def)
with wfa_n have sim_ws: "action_obs (?E n) (?ws n a') \<approx> action_obs E (?\<phi> n (?ws n a'))"
by(blast dest: wf_action_translation_on_actionD)
with adal have adal_E: "(ad', al') \<in> action_loc P E (?\<phi> n (?ws n a'))"
by(simp add: action_loc_aux_sim_action)
have "\<exists>w \<in> write_actions (?E n'). (ad', al') \<in> action_loc P (?E n') w \<and> value_written P (?E n') w (ad', al') = v'"
proof(cases "?\<phi> n' a'' \<in> ?\<phi> n'' ` ?C n''")
case True
then obtain a''' where a''': "?\<phi> n'' a''' = ?\<phi> n' a''"
and committed''': "a''' \<in> ?C n''" by auto
from committed''' C_n'' have a'''_action: "a''' \<in> actions (?E n'')" by auto
from committed'' committed''' wfa_n' wfa_n'' a''' have "action_obs (?E n'') a''' \<approx> action_obs (?E n') a''"
by(auto dest!: wf_action_translation_on_actionD intro: sim_action_trans sim_action_sym)
with read_a'' committed''' C_n'' have read_a''': "a''' \<in> read_actions (?E n'')"
by cases(auto intro: read_actions.intros)
hence "?\<phi> n' (?ws n' (inv_into (actions (?E n')) (?\<phi> n') (?\<phi> n'' a'''))) = ws (?\<phi> n'' a''')"
using justified committed'''
unfolding is_weakly_justified_by.simps n'' Let_def write_seen_committed_def by blast
also have "inv_into (actions (?E n')) (?\<phi> n') (?\<phi> n'' a''') = a''"
using a''' inj_n' a''_action by(simp)
also note a''' also note a''
finally have "ws (?\<phi> n a') = ?\<phi> n' (?ws n' a'')" ..
with `?\<phi> n (?ws n a') = ws (?\<phi> n a')`[symmetric]
have eq_ws: "?\<phi> n' (?ws n' a'') = ?\<phi> n (?ws n a')" by simp
from wf_n'[THEN wf_exec_is_write_seenD, THEN is_write_seenD, OF read_a'' a''_obs]
have ws_write': "?ws n' a'' \<in> write_actions (?E n')" by simp
from justified read_a'' committed''
have "ws (?\<phi> n' a'') \<in> ?\<phi> n' ` ?C n'" by(rule weakly_justified_write_seen_hb_read_committed)
then obtain w' where w': "ws (?\<phi> n' a'') = ?\<phi> n' w'"
and committed_w': "w' \<in> ?C n'" by blast
from committed_w' C_n' have "w' \<in> actions (?E n')" by blast
hence w'_def: "w' = ?ws n' a''" using `?\<phi> n' (?ws n' a'') = ws (?\<phi> n a')` inj_n' ws_write'
unfolding w' a''[symmetric] by(auto dest: inj_onD)
with committed_w' have committed_ws'': "?ws n' a'' \<in> committed (J n')" by simp
with committed_ws wfa_n wfa_n' eq_ws
have "action_obs (?E n') (?ws n' a'') \<approx> action_obs (?E n) (?ws n a')"
by(auto dest!: wf_action_translation_on_actionD intro: sim_action_trans sim_action_sym)
hence adal_eq: "action_loc P (?E n') (?ws n' a'') = action_loc P (?E n) (?ws n a')"
by(simp add: action_loc_aux_sim_action)
with adal have adal': "(ad', al') \<in> action_loc P (?E n') (?ws n' a'')" by(simp add: action_loc_aux_sim_action)
from committed_ws'' have "?ws n' a'' \<in> actions (?E n')" using C_n' by blast
with ws_write `action_obs (?E n') (?ws n' a'') \<approx> action_obs (?E n) (?ws n a')`
have ws_write'': "?ws n' a'' \<in> write_actions (?E n')"
by(cases)(auto intro: write_actions.intros simp add: sim_action_is_write_action_eq)
from wfa_n' committed_ws''
have sim_ws': "action_obs (?E n') (?ws n' a'') \<approx> action_obs E (?\<phi> n' (?ws n' a''))"
by(blast dest: wf_action_translation_on_actionD)
with adal' have adal'_E: "(ad', al') \<in> action_loc P E (?\<phi> n' (?ws n' a''))"
by(simp add: action_loc_aux_sim_action)
from justified committed_ws ws_write adal_E
have "value_written P (?E n) (?ws n a') (ad', al') = value_written P E (?\<phi> n (?ws n a')) (ad', al')"
unfolding is_weakly_justified_by.simps Let_def value_written_committed_def by blast
also note eq_ws[symmetric]
also from justified committed_ws'' ws_write'' adal'_E
have "value_written P E (?\<phi> n' (?ws n' a'')) (ad', al') = value_written P (?E n') (?ws n' a'') (ad', al')"
unfolding is_weakly_justified_by.simps Let_def value_written_committed_def by(blast dest: sym)
finally show ?thesis using written ws_write'' adal' by auto
next
case False
with justified read_a'' committed''
have "ws (?\<phi> n' a'') \<in> ?\<phi> n'' ` ?C n''"
unfolding is_weakly_justified_by.simps Let_def n'' committed_reads_see_committed_writes_weak_def by blast
with a'' obtain w where w: "?\<phi> n'' w = ws (?\<phi> n a')"
and committed_w: "w \<in> ?C n''" by auto
from justified have "?\<phi> n'' ` ?C n'' \<subseteq> ?\<phi> n' ` ?C n'" by(simp add: is_commit_sequence_def n'')
with committed_w w[symmetric] have "ws (?\<phi> n a') \<in> ?\<phi> n' ` ?C n'" by(auto)
then obtain w' where w': "ws (?\<phi> n a') = ?\<phi> n' w'" and committed_w': "w' \<in> ?C n'" by blast
from wfa_n' committed_w' have "action_obs (?E n') w' \<approx> action_obs E (?\<phi> n' w')"
by(blast dest: wf_action_translation_on_actionD)
from this[folded w', folded `?\<phi> n (?ws n a') = ws (?\<phi> n a')`] sim_ws[symmetric]
have sim_w': "action_obs (?E n') w' \<approx> action_obs (?E n) (?ws n a')" by(rule sim_action_trans)
with ws_write committed_w' C_n' have write_w': "w' \<in> write_actions (?E n')"
by(cases)(auto intro!: write_actions.intros simp add: sim_action_is_write_action_eq)
hence "value_written P (?E n') w' (ad', al') = value_written P E (?\<phi> n' w') (ad', al')"
using adal_E committed_w' justified
unfolding `?\<phi> n (?ws n a') = ws (?\<phi> n a')` w' is_weakly_justified_by.simps Let_def value_written_committed_def by blast
also note w'[symmetric]
also note `?\<phi> n (?ws n a') = ws (?\<phi> n a')`[symmetric]
also have "value_written P E (?\<phi> n (?ws n a')) (ad', al') = value_written P (?E n) (?ws n a') (ad', al')"
using justified committed_ws ws_write adal_E
unfolding is_weakly_justified_by.simps Let_def value_written_committed_def by(blast dest: sym)
also have "(ad', al') \<in> action_loc P (?E n') w'" using sim_w' adal by(simp add: action_loc_aux_sim_action)
ultimately show ?thesis using written write_w' by auto
qed
then obtain w where w: "w \<in> write_actions (?E n')"
and adal: "(ad', al') \<in> action_loc P (?E n') w"
and written: "value_written P (?E n') w (ad', al') = v'" by blast
from w have w_len: "enat w < llength (?E n')"
by(cases)(simp add: actions_def)
let ?EE'' = "lmap snd (ltake (enat w) (?E n'))"
have "non_speculative P (vs_type_all P) ?EE''"
proof(rule non_speculative_nthI)
fix i ad al v
assume i: "enat i < llength ?EE''"
and i_nth: "lnth ?EE'' i = NormalAction (ReadMem ad al v)"
and ns: "non_speculative P (vs_type_all P) (ltake (enat i) ?EE'')"
from i w_len have "i < w" by(simp add: min_def not_le split: split_if_asm)
with w_len have "enat i < llength (?E n')" by(simp add: less_trans[where y="enat w"])
moreover
from i_nth i `i < w` w_len
have "action_obs (?E n') i = NormalAction (ReadMem ad al v)"
by(simp add: action_obs_def ac_simps less_trans[where y="enat w"] lnth_ltake)
moreover from n'' have "0 < n'" by simp
ultimately have "\<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T" by(rule Suc.IH)
hence "v \<in> vs_type_all P (ad, al)" by(simp add: vs_type_all.simps)
thus "v \<in> w_values P (vs_type_all P) (list_of (ltake (enat i) ?EE'')) (ad, al)"
by(rule w_values_mono[THEN subsetD])
qed
with wfx_start wfP E' w adal
show "\<exists>T. P \<turnstile> ad'@al' : T \<and> P \<turnstile> v' :\<le> T"
unfolding written[symmetric] by(rule non_speculative_written_value_typeable)
qed
qed
}
note justifying = this
assume a: "enat a < llength E"
and read: "action_obs E a = NormalAction (ReadMem ad al v)"
and E: "E \<in> h.\<E>_start f P C M vs status"
and wf: "P \<turnstile> (E, ws) \<surd>"
from a have action: "a \<in> actions E" by(auto simp add: actions_def action_obs_def)
with justified obtain n a' where a': "a = ?\<phi> n a'"
and committed': "a' \<in> ?C n" by(auto simp add: is_commit_sequence_def)
from justified have C_n: "?C n \<subseteq> actions (?E n)"
and C_Sn: "?C (Suc n) \<subseteq> actions (?E (Suc n))"
and wf_tr: "wf_action_translation E (J n)"
and wf_tr': "wf_action_translation E (J (Suc n))"
by(auto simp add: committed_subset_actions_def wf_action_translations_def)
from C_n committed' have action': "a' \<in> actions (?E n)" by blast
from wf_tr committed' a'
have "action_tid E a = action_tid (?E n) a'" "action_obs E a \<approx> action_obs (?E n) a'"
by(auto simp add: wf_action_translation_on_def intro: sim_action_sym)
with read obtain v'
where "action_obs (?E n) a' = NormalAction (ReadMem ad al v')"
by(clarsimp simp add: action_obs_def)
with action' have read': "a' \<in> read_actions (?E n)" ..
from justified have "?\<phi> n ` ?C n \<subseteq> ?\<phi> (Suc n) ` ?C (Suc n)"
by(simp add: is_commit_sequence_def)
with committed' a' have "a \<in> \<dots>" by auto
then obtain a'' where a'': "a = ?\<phi> (Suc n) a''"
and committed'': "a'' \<in> ?C (Suc n)" by auto
from committed'' C_Sn have action'': "a'' \<in> actions (?E (Suc n))" by blast
with wf_tr' have "a'' = inv_into (actions (?E (Suc n))) (?\<phi> (Suc n)) a"
by(simp add: a'' wf_action_translation_on_def)
with justified read' committed' a' have ws_a: "ws a = ?\<phi> (Suc n) (?ws (Suc n) a'')"
by(simp add: write_seen_committed_def)
from wf_tr' committed'' a''
have "action_tid E a = action_tid (?E (Suc n)) a''"
and "action_obs E a \<approx> action_obs (?E (Suc n)) a''"
by(auto simp add: wf_action_translation_on_def intro: sim_action_sym)
with read obtain v''
where a_obs'': "action_obs (?E (Suc n)) a'' = NormalAction (ReadMem ad al v'')"
by(clarsimp simp add: action_obs_def)
with action'' have read'': "a'' \<in> read_actions (?E (Suc n))"
by(auto intro: read_actions.intros simp add: action_obs_def)
have "a \<in> read_actions E" "action_obs E a = NormalAction (ReadMem ad al v)"
using action read by(auto intro: read_actions.intros simp add: action_obs_def read)
from is_write_seenD[OF wf_exec_is_write_seenD[OF wf] this]
have v_eq: "v = value_written P E (ws a) (ad, al)"
and adal: "(ad, al) \<in> action_loc P E (ws a)" by simp_all
from justified have "P \<turnstile> (?E (Suc n), ?ws (Suc n)) \<surd>" by(simp add: justification_well_formed_def)
from is_write_seenD[OF wf_exec_is_write_seenD[OF this] read'' a_obs'']
have write'': "?ws (Suc n) a'' \<in> write_actions (?E (Suc n))"
and written'': "value_written P (?E (Suc n)) (?ws (Suc n) a'') (ad, al) = v''"
by simp_all
from justified read'' committed''
have "ws (?\<phi> (Suc n) a'') \<in> ?\<phi> (Suc n) ` ?C (Suc n)"
by(rule weakly_justified_write_seen_hb_read_committed)
then obtain w where w: "ws (?\<phi> (Suc n) a'') = ?\<phi> (Suc n) w"
and committed_w: "w \<in> ?C (Suc n)" by blast
with C_Sn have "w \<in> actions (?E (Suc n))" by blast
moreover have "ws (?\<phi> (Suc n) a'') = ?\<phi> (Suc n) (?ws (Suc n) a'')"
using ws_a a'' by simp
ultimately have w_def: "w = ?ws (Suc n) a''"
using wf_action_translation_on_inj_onD[OF wf_tr'] write''
unfolding w by(auto dest: inj_onD)
with committed_w have "?ws (Suc n) a'' \<in> ?C (Suc n)" by simp
hence "value_written P E (ws a) (ad, al) = value_written P (?E (Suc n)) (?ws (Suc n) a'') (ad, al)"
using adal justified write'' by(simp add: value_written_committed_def ws_a)
with v_eq written'' have "v = v''" by simp
from read'' have "enat a'' < llength (?E (Suc n))" by(cases)(simp add: actions_def)
thus "\<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
by(rule justifying)(simp_all add: a_obs'' `v = v''`)
qed
corollary weakly_legal_read_value_typeable:
assumes wfx_start: "ts_ok wfx (thr (h.start_state f P C M vs)) h.start_heap"
and wfP: "wf_syscls P"
and legal: "weakly_legal_execution P (h.\<E>_start f P C M vs status) (E, ws)"
and a: "enat a < llength E"
and read: "action_obs E a = NormalAction (ReadMem ad al v)"
shows "\<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
proof -
from legal obtain J
where "P \<turnstile> (E, ws) weakly_justified_by J"
and "range (justifying_exec \<circ> J) \<subseteq> h.\<E>_start f P C M vs status"
and "E \<in> h.\<E>_start f P C M vs status"
and "P \<turnstile> (E, ws) \<surd>" by(rule legal_executionE)
with wfx_start wfP show ?thesis using a read by(rule read_value_typeable_justified)
qed
corollary legal_read_value_typeable:
"\<lbrakk> ts_ok wfx (thr (h.start_state f P C M vs)) h.start_heap; wf_syscls P;
legal_execution P (h.\<E>_start f P C M vs status) (E, ws);
enat a < llength E; action_obs E a = NormalAction (ReadMem ad al v) \<rbrakk>
\<Longrightarrow> \<exists>T. P \<turnstile> ad@al : T \<and> P \<turnstile> v :\<le> T"
by(erule (1) weakly_legal_read_value_typeable)(rule legal_imp_weakly_legal_execution)
end
end
|
import superimport
import numpy as np
from scipy.stats import norm
#from scipy.optimize import minimize
import scipy.optimize
def expected_improvement(X, X_sample, Y_sample, surrogate,
improvement_thresh=0.01, trust_incumbent=False,
greedy=False):
'''
Computes the expected improvement (EI) at points X, based on existing
samples X_sample and Y_sample, using a probabilistic surrogate model.
Args:
X: Points at which EI shall be computed (m x d).
X_sample: Sample locations (n x d).
Y_sample: Sample values (n x 1).
surrogate: a model whose predict(X, return_std=True) returns (mu, sigma).
improvement_thresh: exploitation-exploration trade-off parameter (xi).
trust_incumbent: if True, use the best observed Y_sample as the incumbent;
otherwise re-evaluate the surrogate mean at X_sample.
greedy: currently unused.
Returns:
Expected improvements at points X.
'''
#X = np.atleast_2d(X)
mu, sigma = surrogate.predict(X, return_std=True)
# Make sigma have same shape as mu
#sigma = sigma.reshape(-1, X_sample.shape[1])
sigma = np.reshape(sigma, np.shape(mu))
if trust_incumbent:
current_best = np.max(Y_sample)
else:
mu_sample = surrogate.predict(X_sample)
current_best = np.max(mu_sample)
with np.errstate(divide='warn'):
imp = mu - current_best - improvement_thresh
Z = imp / sigma
ei = imp * norm.cdf(Z) + sigma * norm.pdf(Z)
#ei[sigma == 0.0] = 0.0
ei[sigma < 1e-4] = 0.0
return ei
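# Hedged sanity check of the EI formula above, with illustrative numbers only:
# for mu = 1.2, sigma = 0.5, current_best = 1.0, improvement_thresh = 0.01:
#   imp = 1.2 - 1.0 - 0.01 = 0.19
#   Z   = 0.19 / 0.5 = 0.38
#   ei  = 0.19 * norm.cdf(0.38) + 0.5 * norm.pdf(0.38)
#       = 0.19 * 0.648 + 0.5 * 0.371 ~ 0.31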
# bounds: D x 2 array, where D = number of parameter dimensions
# bounds[:,0] are lower bounds, bounds[:,1] are upper bounds
class MultiRestartGradientOptimizer:
def __init__(self, dim, bounds=None, n_restarts=1, method='L-BFGS-B',
callback=None):
self.bounds = bounds
self.n_restarts = n_restarts
self.method = method
self.dim = dim
def maximize(self, objective):
#neg_obj = lambda x: -objective(x)
neg_obj = lambda x: -objective(x.reshape(-1, 1))
min_val = np.inf
best_x = None
candidates = np.random.uniform(self.bounds[:, 0], self.bounds[:, 1],
size=(self.n_restarts, self.dim))
for x0 in candidates:
res = scipy.optimize.minimize(neg_obj, x0=x0, bounds=self.bounds,
method=self.method)
if res.fun < min_val:
min_val = res.fun
best_x = res.x
return best_x.reshape(-1, 1)
class BayesianOptimizer:
def __init__(self, X_init, Y_init, surrogate,
acq_fn=expected_improvement, acq_solver=None,
n_iter=None, callback=None):
self.X_sample = X_init
self.Y_sample = Y_init
self.surrogate = surrogate
self.surrogate.fit(self.X_sample, self.Y_sample)
self.acq_fn = acq_fn
self.acq_solver = acq_solver
self.n_iter = n_iter
self.callback = callback
# Make sure you "pay" for the initial random guesses
self.val_history = Y_init
self.current_best_val = np.max(self.val_history)
best_ndx = np.argmax(self.val_history)
self.current_best_arg = X_init[best_ndx]
def propose(self):
def objective(x):
y = self.acq_fn(x, self.X_sample, self.Y_sample, self.surrogate)
if np.size(y)==1:
y = y[0] # convert to scalar
return y
x_next = self.acq_solver.maximize(objective)
return x_next
def update(self, x, y):
X = np.atleast_2d(x)
self.X_sample = np.append(self.X_sample, X, axis=0)
self.Y_sample = np.append(self.Y_sample, y)
self.surrogate.fit(self.X_sample, self.Y_sample)
if y > self.current_best_val:
self.current_best_arg = x
self.current_best_val = y
self.val_history = np.append(self.val_history, y)
def maximize(self, objective):
for i in range(self.n_iter):
X_next = self.propose()
Y_next = objective(X_next)
#print("BO iter {}, xnext={}, ynext={:0.3f}".format(i, X_next, Y_next))
self.update(X_next, Y_next)
if self.callback is not None:
self.callback(X_next, Y_next, i)
return self.current_best_arg
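# --- Usage sketch ---
# A minimal, hedged example of how expected_improvement, MultiRestartGradientOptimizer
# and BayesianOptimizer fit together, assuming scikit-learn's GaussianProcessRegressor
# as the surrogate model. The toy objective, bounds and initial design are illustrative only.
if __name__ == "__main__":
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def toy_objective(x):
        # Simple 1-D function to maximize; x may arrive as a (1,) or (1, 1) array.
        x = np.ravel(x)[0]
        return -np.sin(3.0 * x) - x ** 2 + 0.7 * x

    bounds = np.array([[-1.0, 2.0]])    # one parameter dimension
    X_init = np.array([[-0.9], [1.1]])  # two initial design points
    Y_init = np.array([toy_objective(x) for x in X_init])

    surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6)
    solver = MultiRestartGradientOptimizer(dim=1, bounds=bounds, n_restarts=25)
    bo = BayesianOptimizer(X_init, Y_init, surrogate,
                           acq_fn=expected_improvement,
                           acq_solver=solver, n_iter=10)
    x_best = bo.maximize(toy_objective)
    print("best x:", x_best, "best value:", bo.current_best_val)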
|
*...................................................................
subroutine ODS_CheckR ( ncid, varid, nval, values, ierr )
implicit NONE
!-------------------------------------------------------------------------
! NASA/GSFC, Data Assimilation Office, Code 910.3, GEOS/DAS !
!-------------------------------------------------------------------------
!
! !ROUTINE: ODS_CheckR
!
! !DESCRIPTION:
! This routine checks native floating point numbers to
! determine whether all values are within the required range
! as specified by the NetCDF file attributes. If a missing
! value is also defined, then any value equal to the missing
! value is not considered to be out of range. If a value is
! determined to be out of range then ierr is given a value of
! NCEInVal as defined in the header file, netcdf.inc
!
! !INTERFACE:
! ODS_CheckR ( ncid, varid, nval, values, ierr )
!
! !INPUT PARAMETERS:
integer ncid ! NetCDF file id
integer varid ! NetCDF variable id
integer nval ! Number of values to be checked
real values ( nval ) ! Values to be checked
!
! !OUTPUT PARAMETER:
integer ierr ! Returned error (status) code
!
! !SEE ALSO:
! ODS_CheckI ( The parallel routine for integers )
!
! netcdf.inc, a header file, for defining NetCDF library
! parameters
! ods_stdio.h, a header file, for defining standard input/output
! unit numbers
! ods_worksp.h, a header file, for defining hardwired constants
! and defining global variables and setting up data
! structures for work space
!
! !LIBRARIES ACCESSED:
! NetCDF
!
! !REVISION HISTORY:
! 16May96 C. Redder Original version
! 16Feb2000 R. Todling Rename stdio.h to ods_stdio.h
! 02Mar2005 D. Dee More informative error message
!
!-------------------------------------------------------------------------
include 'netcdf.inc'
include 'ods_stdio.h'
include 'ods_worksp.h'
* variables defining the allowed range of each value
* ----------------------------------------------------
real valid_max ! maximum value
real valid_min ! minimum value
real valid_range ( 2 ) ! valid range of values
! The first element contains
! the minimum value and the
! second contains the maximum
! value.
real RMin ! temp for minimum value
real RMax ! temp for maximum value
* variables defining the missing value
* ------------------------------------
real missing_val ! code for missing value
real missing_max ! max for missing_val to
! account for round off
! error
real missing_min ! min for missing_val to
! account for round off
! error
* variables containing information about the NetCDF variable
* and its attributes
* ----------------------------------------------------------
character VarNam * ( MaxNCNam ) ! name of variable
integer NC_VarType ! type of NetCDF variable
integer NVDims ! number of NetCDF dimensions
integer VDims ( MaxVDims ) ! NetCDF variable dimensions
integer NVAtts ! number of variable attributes
integer AttLen ! length of an attribute
* Other variables
* ---------------
integer ival ! index variable
real val ! temporary storage for values
integer ncopts ! NetCDF error handling options
* Get information about the variable
* ----------------------------------
call ncvinq ( ncid, varid,
. VarNam, NC_VarType,
. NVDims, VDims,
. NVAtts, ierr )
if ( ierr .ne. NCNoErr ) return
* Default error code
* ------------------
ierr = NCNoErr
* Save current options of error handling and turn off
* error messages and set errors to be non-fatal.
* ---------------------------------------------------
call ncgopt ( ncopts )
call ncpopt ( 0 )
* Get valid range
* ---------------
call ODS_NCAGTR ( ncid, varid, 'valid_range',
. AttLen, valid_range, ierr )
if ( ierr .ne. NCNoErr .and.
. ierr .ne. NCENoAtt ) then
write ( stderr, 901 ) 'valid_range'
return
end if
* If the attribute, valid_range, exists then ...
* ----------------------------------------------
if ( ierr .eq. NCNoErr ) then
* Check to determine if the maximum and minimum
* values stored in valid_range are in reverse order
* -------------------------------------------------
if ( valid_range ( 2 ) .lt. valid_range ( 1 ) ) then
valid_min = valid_range ( 2 )
valid_max = valid_range ( 1 )
valid_range ( 1 ) = valid_min
valid_range ( 2 ) = valid_max
end if
else
* ----
* Set defaults for the range of valid values to the
* minimum and maximum values for native floating
* point numbers
* -------------------------------------------------
valid_range ( 1 ) = R_Min
valid_range ( 2 ) = R_Max
* Get the attribute, valid_min if it exists
* -----------------------------------------
call ODS_NCAGTR ( ncid, varid, 'valid_min',
. AttLen, RMin, ierr )
if ( ierr .ne. NCNoErr .and.
. ierr .ne. NCENoAtt ) then
write ( stderr, 901 ) 'valid_min'
return
end if
if ( ierr .eq. NCNoErr ) valid_range ( 1 ) = RMin
* Get the attribute, valid_max if it exists
* -----------------------------------------
call ODS_NCAGTR ( ncid, varid, 'valid_max',
. AttLen, RMax, ierr )
if ( ierr .ne. NCNoErr .and.
. ierr .ne. NCENoAtt ) then
write ( stderr, 901 ) 'valid_max'
return
end if
if ( ierr .eq. NCNoErr ) valid_range ( 2 ) = RMax
end if
* Set the valid max and min and account
* for machine round-off error
* -------------------------------------
valid_min = valid_range ( 1 ) * ( 1.0 - R_Error )
if ( valid_min .lt. 0.0 )
. valid_min = valid_range ( 1 ) * ( 1.0 + R_Error )
valid_max = valid_range ( 2 ) * ( 1.0 + R_Error )
if ( valid_max .lt. 0.0 )
. valid_max = valid_range ( 2 ) * ( 1.0 - R_Error )
* Get the attribute, missing_value
* --------------------------------
call ODS_NCAGTR ( ncid, varid, 'missing_value',
. AttLen, missing_val, ierr )
if ( ierr .ne. NCNoErr .and.
. ierr .ne. NCENoAtt ) then
write ( stderr, 901 ) 'missing_value'
return
end if
* Set defaults for missing values so that if the attribute
* specifying the missing value is absent, then the values
* will have no effect on the results of the check.
* -------------------------------------------------------
missing_min = R_Max
missing_max = R_Min
* Set the value denoting missing data as specified by NetCDF
* file attribute if it exists. Account for machine round-off
* error by setting the maximum and minimum for missing_val
* -----------------------------------------------------------
if ( ierr .eq. NCNoErr ) then
missing_min = missing_val * ( 1.0 - R_Error )
missing_max = missing_val * ( 1.0 + R_Error )
* In case the missing value is negative ...
* -----------------------------------------
if ( missing_val .lt. 0 ) then
missing_min = missing_val * ( 1.0 + R_Error )
missing_max = missing_val * ( 1.0 - R_Error )
end if
end if
* Return error handling option to their previous values
* -----------------------------------------------------
call ncpopt ( ncopts )
* Set default for returned error code
* -----------------------------------
ierr = NCNoErr
* Check the values to determine
* if the values are within the user-specified range
* -------------------------------------------------
do 10, ival = 1, nval
val = values ( ival )
if ( val .lt. valid_min .or.
. val .gt. valid_max ) then
* A value can fail the check only if the value
* is not missing
* ---------------------------------------------
if ( val .lt. missing_min .or.
. val .gt. missing_max ) ierr = NCEInVal
end if
10 continue
* Return with an error message if any value is out of range
* ---------------------------------------------------------
if ( ierr .ne. NCNoErr ) then
write ( stderr, 902 ) VarNam, val
return
end if
return
* ------
901 format ( /, ' ODS_CheckR: Error in extracting an attribute. ',
. /, ' Attribute name is ', a )
902 format ( /, ' ODS_CheckR: Value to be written is out of ',
. /, ' the range as specified by the',
. /, ' NetCDF file attributes. ',
. /, ' Variable name is ', a,
. /, ' Value is ', e12.4 )
end
|
/*
* traffic_sim.hpp
*
* Copyright (c) 2015 Masatoshi Hanai
*
* This software is released under MIT License.
* See LICENSE.
*
*/
#ifndef TRAFFICSIM_TRAFFIC_SIM_HPP_
#define TRAFFICSIM_TRAFFIC_SIM_HPP_
#include <fstream>
#include <string>
#include <vector>
#include <boost/algorithm/string.hpp>
#include <boost/serialization/serialization.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/make_shared.hpp>
#include <glog/logging.h>
#include "scalesim/util.hpp"
/**
* The traffic model is based on the following paper:
* http://kalper.net/kp/publications/docs/rcveh-pads08.pdf
*/
static std::string TRAFFIC_PARTITION_PATH;
static std::string TRAFFIC_MAP_PATH;
static std::string SCENARIO_PATH;
class traffic_sim: public scalesim::application {
public:
/* an event represents a vehicle */
class Event: public scalesim::sim_event {
friend class traffic_sim;
private:
long vehicle_id;
long arrival_time; /* second */
long departure_time; /* second */
long source_;
std::vector<long> destinations_; /* lp id */
mutable scalesim::sim_event_base base_;
public:
Event(): vehicle_id(-1), arrival_time(-1), departure_time(-1), source_(-1) {};
virtual ~Event(){};
Event(long vehicle_id_,
long arrival_time_,
long departure_time_,
long source_,
std::vector<long> destinations):
vehicle_id(vehicle_id_),
arrival_time(arrival_time_),
departure_time(departure_time_),
source_(source_) {
for (auto it = destinations.begin(); it != destinations.end(); ++it) {
destinations_.push_back(*it);
}
};
Event(const Event& event) {
vehicle_id = event.vehicle_id;
arrival_time = event.arrival_time;
departure_time = event.departure_time;
source_ = event.source_;
for (auto it = event.destinations_.begin();
it != event.destinations_.end(); ++it) {
destinations_.push_back(*it);
}
base_ = event.base_;
};
public:
scalesim::sim_event_base* base() const { return &base_; };
long id() const { return vehicle_id; };
long source() const { return source_; };
long destination() const { return destinations_[0]; };
bool end() const { return (destinations_.size() == 1); };
long receive_time() const { return arrival_time; };
long send_time() const { return departure_time; };
int size() const {
return sizeof(*this) + sizeof(long)*destinations_.size();
};
friend class boost::serialization::access;
private:
template<class Archive>
void serialize(Archive& ar, unsigned int version) {
ar & vehicle_id;
ar & arrival_time;
ar & departure_time;
ar & source_;
ar & destinations_;
ar & base_;
}
}; /* class event */
/* a state represents a cross point with outgoing road */
class State : public scalesim::sim_state {
friend class traffic_sim;
private:
long id_;
std::vector<long> destinations_;
std::vector<long> speed_limit_; /* km per hour */
std::vector<int> num_lanes_;
std::vector<long> road_length_; /* m */
public:
State(): id_(-1) {};
virtual ~State() {};
State(long id, std::vector<long> destinations,
std::vector<long> speed_limit,
std::vector<int> num_lanes,
std::vector<long> road_length) {
id_ = id;
destinations_ = destinations;
speed_limit_ = speed_limit;
num_lanes_ = num_lanes;
road_length_ = road_length;
};
long id() const { return id_; }
int size() const {
return sizeof(*this) +
sizeof(long) * (destinations_.size() +
speed_limit_.size() +
num_lanes_.size() +
road_length_.size());
}
void out_put() const {
std::cout << "state id: " << id_ << std::endl;
};
friend class boost::serialization::access;
private:
template<class Archive>
void serialize(Archive& ar, unsigned int version) {
ar & id_;
ar & destinations_;
ar & speed_limit_;
ar & num_lanes_;
ar & road_length_;
}
}; /* class State */
public:
/*
* Return finish time of the simulation
*/
static long finish_time();
/*
* Initialization function for the application.
* It is invoked before all other initialization functions.
*/
void init();
/*
* Initialization function for the partition and its index.
* Partition format:
* - type: boost::shared_ptr<std::vector<long> >
* - value: the Nth entry is the rank number assigned to ID N
* Index format:
* - type: boost::shared_ptr<boost::unordered_multimap<long, long> >
* - key: rank number
* - value: IDs in this rank
*/
std::pair<parti_ptr, parti_indx_ptr> init_partition_index(int rank_size);
/*
* Initialization function for events.
* Initialized events are shuffled to their starting points after this function,
* so they can be read in any convenient way,
* e.g. by splitting the input with the modulo operator on rank and rank_size.
*/
void init_events(ev_vec<traffic_sim>& ret,
const int rank,
const int rank_size);
/*
* Initialization function for states.
* Initialized states are NOT shuffled after this function,
* so only the states assigned to this rank (per the partition) must be initialized here.
*/
void init_states_in_this_rank(st_vec<traffic_sim>& new_state,
const int rank,
const int rank_size,
parti_ptr partition);
/*
* Initialization function for what_if events.
* Initialized events are shuffled to their starting points after this function,
* so they can be read in any convenient way.
*/
void init_what_if(
std::vector<boost::shared_ptr<const scalesim::what_if<traffic_sim> > >& ret,
const int rank,
const int rank_size);
/*
* Event handling function.
* The arguments (receive_event, state) are the previous values in the simulation.
* The return value (optional<pair<vector<ev_ptr>, st_ptr> >) should contain the
* new events and state derived from the arguments and your simulation model.
*
* If no new event or state is generated, return an empty optional.
*/
boost::optional<std::pair<std::vector<ev_ptr<traffic_sim> >, st_ptr<traffic_sim> > >
event_handler(ev_ptr<traffic_sim> receive_event, st_ptr<traffic_sim> state);
};
class traffic_reader {
private:
traffic_reader(const traffic_reader&);
void operator=(const traffic_reader&);
private:
traffic_reader() {};
virtual ~traffic_reader() {};
public:
static void graph_read(parti_ptr ret_partition,
parti_indx_ptr ret_partition_index,
const std::string& file_path);
static void road_read(st_vec<traffic_sim>& ret,
int rank,
int rank_size,
parti_ptr partition,
const std::string& file_path);
static void trip_read(ev_vec<traffic_sim>& ret,
int rank,
int rank_size,
const std::string& file_path);
static void what_if_read(std::vector<boost::shared_ptr<
const scalesim::what_if<traffic_sim> > >& ret,
const int rank,
const int rank_size,
const std::string& file_path);
}; /* class traffic_reader */
long traffic_sim::finish_time() {
// return 500;
// return 3600; /* 1 hours */
return 10800; /* 3 hours */
// return 21600; /* 6 hours */
// return 86400; /* 24 hours */
// return std::numeric_limits<long>::max();
};
void traffic_sim::init() {};
std::pair<parti_ptr, parti_indx_ptr> traffic_sim::init_partition_index(int rank_size) {
auto partition_ = boost::make_shared<std::vector<long> >(std::vector<long>());
auto index_ = boost::make_shared<boost::unordered_multimap<long, long> >(
boost::unordered_multimap<long, long>());
traffic_reader::graph_read(partition_, index_, TRAFFIC_PARTITION_PATH);
return std::pair<parti_ptr, parti_indx_ptr>(partition_, index_);
};
void traffic_sim::init_events(ev_vec<traffic_sim>& ret,
const int rank,
const int rank_size) {
traffic_reader::trip_read(ret, rank, rank_size, SCENARIO_PATH);
};
void traffic_sim::init_states_in_this_rank(st_vec<traffic_sim>& new_state,
const int rank,
const int rank_size,
parti_ptr partition) {
traffic_reader::road_read(new_state, rank, rank_size, partition, TRAFFIC_MAP_PATH);
};
void traffic_sim::init_what_if(
std::vector<boost::shared_ptr<const scalesim::what_if<traffic_sim> > >& ret,
const int rank,
const int rank_size) {
traffic_reader::what_if_read(ret, rank, rank_size, SCENARIO_PATH);
};
boost::optional<std::pair<std::vector<ev_ptr<traffic_sim> >, st_ptr<traffic_sim> > >
traffic_sim::event_handler(ev_ptr<traffic_sim> receive_event, st_ptr<traffic_sim> state) {
/* define destinations */
std::vector<long> new_destinations_;
auto tracks_it_ = receive_event->destinations_.begin();
++tracks_it_;
if (tracks_it_ == receive_event->destinations_.end()) {
return boost::optional<std::pair<std::vector<ev_ptr<traffic_sim> >, st_ptr<traffic_sim> > >();
}
while (tracks_it_ != receive_event->destinations_.end()) {
new_destinations_.push_back(*tracks_it_);
++tracks_it_;
}
if (new_destinations_.empty()) {
return boost::optional<std::pair<std::vector<ev_ptr<traffic_sim> >, st_ptr<traffic_sim> > >();
}
/* calculate reaching time to next junction */
auto it = std::find(state->destinations_.begin(),
state->destinations_.end(),
new_destinations_.front());
if (it == state->destinations_.end()) {
return boost::optional<std::pair<std::vector<ev_ptr<traffic_sim> >, st_ptr<traffic_sim> > >();
}
DLOG_ASSERT(it != state->destinations_.end())
<< " No road from: " << state->id() << " to " << new_destinations_.front()
<< "\n See vehicle: " << receive_event->id() << "'s tracks";
long new_arrival_time = 0;
int i = it - state->destinations_.begin();
if (state->speed_limit_[i] != 0) {
new_arrival_time
= state->road_length_[i] * 60 * 60 / state->speed_limit_[i] / 1000
+ receive_event->arrival_time + 5;
} else {
new_arrival_time = state->road_length_[i] * 60 * 60
+ receive_event->arrival_time;
}
long new_source_ = receive_event->destinations_.front();
long new_departure_time = receive_event->arrival_time;
std::vector<ev_ptr<traffic_sim> > new_event;
new_event.push_back(boost::make_shared<traffic_sim::Event>(
traffic_sim::Event(receive_event->vehicle_id,
new_arrival_time,
new_departure_time,
new_source_,
new_destinations_)));
st_ptr<traffic_sim> new_state = boost::make_shared<traffic_sim::State>(
traffic_sim::State(state->id_,
state->destinations_,
state->speed_limit_,
state->num_lanes_,
state->road_length_));
return boost::optional<std::pair<std::vector<ev_ptr<traffic_sim> >, st_ptr<traffic_sim> > > (
std::pair<std::vector<ev_ptr<traffic_sim> >, st_ptr<traffic_sim> >(new_event, new_state));
};
void traffic_reader::graph_read(parti_ptr ret_partition,
parti_indx_ptr ret_partition_index,
const std::string& file_path) {
std::ifstream ifstream(file_path.c_str());
if (ifstream.fail()) {
ifstream.close();
return;
}
std::string line;
long id = 0;
while (getline(ifstream, line)) {
long partitionNum = atol(line.c_str());
ret_partition_index->insert(std::pair<long, long>(partitionNum, id));
ret_partition->push_back(partitionNum);
id++;
}
ifstream.close();
};
void traffic_reader::road_read(st_vec<traffic_sim>& ret,
int rank,
int rank_size,
parti_ptr partition,
const std::string& file_path) {
std::ifstream ifstream(file_path.c_str());
if (ifstream.fail()) {
ifstream.close();
return;
}
std::string line;
long id = 0;
while (getline(ifstream, line)) {
if ((*partition)[id]%rank_size == rank) {
std::vector<std::string> cp;
boost::split(cp, line, boost::is_any_of(";"));
long state_id;
std::vector<long> destinations;
std::vector<long> speed_limit;
std::vector<int> num_lanes;
std::vector<long> road_length;
for (auto road_it = cp.begin(); road_it != cp.end(); ++road_it) {
std::vector<std::string> road;
boost::split(road, *road_it, boost::is_any_of(","));
state_id = atol(road[2].c_str());
destinations.push_back(atol(road[3].c_str()));
speed_limit.push_back(atol(road[4].c_str()));
num_lanes.push_back(atol(road[6].c_str()));
if (atol(road[5].c_str()) != 0) {
road_length.push_back(atol(road[5].c_str()));
} else {
road_length.push_back((1 + atol(road[5].c_str())));
}
}
ret.push_back(
boost::make_shared<state<traffic_sim> >(
state<traffic_sim>(state_id,
destinations,
speed_limit,
num_lanes,
road_length)));
}
id++;
} /* while (getline(ifstream, line)) */
ifstream.close();
return;
}; /* road_read() */
void traffic_reader::trip_read(ev_vec<traffic_sim>& ret,
int rank,
int rank_size,
const std::string& file_path) {
std::ifstream ifstream(file_path.c_str());
if (ifstream.fail()) {
ifstream.close();
return;
}
long id = 0;
std::string line;
while (getline(ifstream, line)) {
if (id % rank_size == rank) {
std::vector<std::string> ev_str_;
boost::split(ev_str_, line, boost::is_any_of(","));
long vehicle_id = atol(ev_str_[1].c_str());
long arrival_time = atol(ev_str_[3].c_str());
long departure_time = atol(ev_str_[3].c_str());
// int track_counter = 0;
int track_length = ev_str_.size() - 4;
std::vector<long> tracks_;
if (track_length == 0) {
LOG(WARNING) << "Trip for vehicle " << vehicle_id << " has an empty track.";
}
for (int i = 0; i < track_length; ++i) {
tracks_.push_back(atol(ev_str_[i + 4].c_str()));
}
ret.push_back(
boost::make_shared<event<traffic_sim> >(
event<traffic_sim>(vehicle_id,
arrival_time,
departure_time,
-1,
tracks_)));
}
++id;
} /* while (getline(ifstream, line)) */
ifstream.close();
return;
}; /* trip_read() */
void traffic_reader::what_if_read(std::vector<boost::shared_ptr<const scalesim::what_if<traffic_sim> > >& ret,
const int rank,
const int rank_size,
const std::string& file_path) {
/* Read file */
std::ifstream ifstream_(file_path);
if (ifstream_.fail()) {
ifstream_.close();
LOG(INFO) << " Opening file " << file_path << " fails. Check file.";
exit(1);
}
/* Read lines */
long i = 0;
std::string line;
while (std::getline(ifstream_, line)) {
if (i % rank_size != rank) {
++i;
continue;
}
if (line.compare(0, 2, "SC") == 0) {
/*
* Query type is state change (SC).
       * The input format is as follows.
* SC,{lp id},{time};{lp format};...;..;..
* ex)
* SC,10,555;R,0,10,1,10,100,2;R,1,10,1,10,200,1;R,3,10,3,10,200,100
*/
std::vector<std::string> vec_;
boost::split(vec_, line, boost::is_any_of(";"));
/* read lp id & time */
std::vector<std::string> id_time_;
boost::split(id_time_, vec_[0], boost::is_any_of(","));
long lp_id = atol(id_time_[1].c_str());
long time_ = atol(id_time_[2].c_str());
/* read roads */
std::vector<long> destinations;
std::vector<long> speed_limit;
std::vector<int> num_lanes;
std::vector<long> rd_length;
auto road_it = vec_.begin();
++road_it;
for (;road_it != vec_.end(); ++road_it) {
std::vector<std::string> road;
boost::split(road, *road_it, boost::is_any_of(","));
destinations.push_back(atol(road[3].c_str()));
speed_limit.push_back(atol(road[4].c_str()));
num_lanes.push_back(atol(road[6].c_str()));
if (atol(road[5].c_str()) != 0) {
rd_length.push_back(atol(road[5].c_str()));
        } else {
          rd_length.push_back(1); /* a zero length is replaced by 1 */
        }
}
/* initiate what_if query */
boost::shared_ptr<scalesim::what_if<traffic_sim> > wh_if_
= boost::make_shared<scalesim::what_if<traffic_sim> >(
scalesim::what_if<traffic_sim>(lp_id,time_,
state<traffic_sim>(lp_id,
destinations,
speed_limit,
num_lanes,
rd_length)));
/* push back this what if query */
ret.push_back(wh_if_);
} else if (line.compare(0, 2, "AE") == 0) {
/*
* Query type is add event (AE).
       * The input format is as follows.
* AE,{event id},0,{Deptime},{1st cp},{2nd cp},...
* ex)
* AE,0,0,1,0,1,2,3,4,5
*/
std::vector<std::string> vec_;
boost::split(vec_, line, boost::is_any_of(","));
/* read lp id and time */
long lp_id_ = atol(vec_[4].c_str());
long time_ = atol(vec_[3].c_str());
/* read adding event */
long vehicle_id_ = atol(vec_[1].c_str());
long arrival_time_ = atol(vec_[3].c_str());
long departure_time_ = atol(vec_[3].c_str());
int length_ = vec_.size() - 4;
std::vector<long> tracks_;
for (int i = 0; i < length_; ++i) {
tracks_.push_back(atol(vec_[i + 4].c_str()));
}
/* initiate adding event */
boost::shared_ptr<scalesim::what_if<traffic_sim> > wh_if_
= boost::make_shared<scalesim::what_if<traffic_sim> >(
scalesim::what_if<traffic_sim>(lp_id_, time_,
event<traffic_sim>(vehicle_id_,
arrival_time_,
departure_time_,
-1, /* source id */
tracks_)));
/* push back to return */
ret.push_back(wh_if_);
} else if (line.compare(0,2,"DE") == 0) {
      /*
       * Query type is delete event (DE). The remove event (RE) format handled
       * in the next branch is documented here as well.
       * The input formats are as follows.
       * DE, {lp id}, {time}, {event id}
       * RE, {event id}, {source id}, {send time}, {destination id}, {receive time}
       * ex)
       * DE,999,10,555
       * RE,2,2,94,3,135
       */
std::vector<std::string> vec_;
boost::split(vec_, line, boost::is_any_of(","));
/* read lp id, time, event id */
long lp_id_ = atol(vec_[1].c_str());
long time_ = atol(vec_[2].c_str());
long ev_id_ = atol(vec_[3].c_str());
/* initiate what-if query */
boost::shared_ptr<scalesim::what_if<traffic_sim> > wh_if_
= boost::make_shared<scalesim::what_if<traffic_sim> >(
scalesim::what_if<traffic_sim>(lp_id_, time_, ev_id_));
ret.push_back(wh_if_);
} else if (line.compare(0,2,"RE") == 0) {
std::vector<std::string> vec_;
boost::split(vec_, line, boost::is_any_of(","));
/* read lp id, time, event id */
long lp_id_ = atol(vec_[4].c_str());
long time_ = atol(vec_[5].c_str());
long ev_id_ = atol(vec_[1].c_str());
/* initiate what-if query */
boost::shared_ptr<scalesim::what_if<traffic_sim> > wh_if_
= boost::make_shared<scalesim::what_if<traffic_sim> >(
scalesim::what_if<traffic_sim>(lp_id_, time_, ev_id_));
ret.push_back(wh_if_);
}
++i;
} /* while (std::getline(ifstream_, line)) */
}; /* what_if_read() */
#endif /* TRAFFICSIM_TRAFFIC_SIM_HPP_ */
|
(* Title: Partial Semigroups
Author: Brijesh Dongol, Victor Gomes, Ian J Hayes, Georg Struth
Maintainer: Victor Gomes <[email protected]>
Georg Struth <[email protected]>
*)
section \<open>Partial Semigroups\<close>
theory Partial_Semigroups
imports Main
begin
notation times (infixl "\<cdot>" 70)
and times (infixl "\<oplus>" 70)
subsection \<open>Partial Semigroups\<close>
text \<open>In this context, partiality is modelled by a definedness constraint $D$ instead of a bottom element,
which would make the algebra total. This is common practice in mathematics.\<close>
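text \<open>For intuition (this example is not used in the development): finite partial functions,
or heaps, form a partial abelian semigroup when D h1 h2 requires the domains of h1 and h2 to be
disjoint and h1 \<cdot> h2 is their union; this is the resource model of separation logic mentioned
further below.\<close>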
class partial_times = times +
fixes D :: "'a \<Rightarrow> 'a \<Rightarrow> bool"
text \<open>The definedness constraints for associativity state that the right-hand side of the associativity
law is defined if and only if the left-hand side is, and that in this case both sides are equal. This and
slightly different constraints can be found in the literature.\<close>
class partial_semigroup = partial_times +
assumes add_assocD: "D y z \<and> D x (y \<cdot> z) \<longleftrightarrow> D x y \<and> D (x \<cdot> y) z"
and add_assoc: "D x y \<and> D (x \<cdot> y) z \<Longrightarrow> (x \<cdot> y) \<cdot> z = x \<cdot> (y \<cdot> z)"
text \<open>Every semigroup is a partial semigroup.\<close>
sublocale semigroup_mult \<subseteq> sg: partial_semigroup _ "\<lambda>x y. True"
by standard (simp_all add: mult_assoc)
context partial_semigroup
begin
text \<open>The following abbreviation is useful for sublocale statements.\<close>
abbreviation (input) "R x y z \<equiv> D y z \<and> x = y \<cdot> z"
lemma add_assocD_var1: "D y z \<and> D x (y \<cdot> z) \<Longrightarrow> D x y \<and> D (x \<cdot> y) z"
by (simp add: add_assocD)
lemma add_assocD_var2: "D x y \<and> D (x \<cdot> y) z \<Longrightarrow> D y z \<and> D x (y \<cdot> z)"
by (simp add: add_assocD)
lemma add_assoc_var: "D y z \<and> D x (y \<cdot> z) \<Longrightarrow> (x \<cdot> y) \<cdot> z = x \<cdot> (y \<cdot> z)"
by (simp add: add_assoc add_assocD)
subsection \<open>Green's Preorders and Green's Relations\<close>
text \<open>We define the standard Green's preorders and Green's relations. They are usually defined on monoids.
On (partial) semigroups, we only obtain transitive relations.\<close>
definition gR_rel :: "'a \<Rightarrow> 'a \<Rightarrow> bool" (infix "\<preceq>\<^sub>R" 50) where
"x \<preceq>\<^sub>R y = (\<exists>z. D x z \<and> x \<cdot> z = y)"
definition strict_gR_rel :: "'a \<Rightarrow> 'a \<Rightarrow> bool" (infix "\<prec>\<^sub>R" 50) where
"x \<prec>\<^sub>R y = (x \<preceq>\<^sub>R y \<and> \<not> y \<preceq>\<^sub>R x)"
definition gL_rel :: "'a \<Rightarrow> 'a \<Rightarrow> bool" (infix "\<preceq>\<^sub>L" 50) where
"x \<preceq>\<^sub>L y = (\<exists>z. D z x \<and> z \<cdot> x = y)"
definition strict_gL_rel :: "'a \<Rightarrow> 'a \<Rightarrow> bool" (infix "\<prec>\<^sub>L" 50) where
"x \<prec>\<^sub>L y = (x \<preceq>\<^sub>L y \<and> \<not> y \<preceq>\<^sub>L x)"
definition gH_rel :: "'a \<Rightarrow> 'a \<Rightarrow> bool" (infix "\<preceq>\<^sub>H" 50) where
"x \<preceq>\<^sub>H y = (x \<preceq>\<^sub>L y \<and> x \<preceq>\<^sub>R y)"
definition gJ_rel :: "'a \<Rightarrow> 'a \<Rightarrow> bool" (infix "\<preceq>\<^sub>J" 50) where
"x \<preceq>\<^sub>J y = (\<exists>v w. D v x \<and> D (v \<cdot> x) w \<and> (v \<cdot> x) \<cdot> w = y)"
definition "gR x y = (x \<preceq>\<^sub>R y \<and> y \<preceq>\<^sub>R x)"
definition "gL x y = (x \<preceq>\<^sub>L y \<and> y \<preceq>\<^sub>L x)"
definition "gH x y = (x \<preceq>\<^sub>H y \<and> y \<preceq>\<^sub>H x)"
definition "gJ x y = (x \<preceq>\<^sub>J y \<and> y \<preceq>\<^sub>J x)"
definition gR_downset :: "'a \<Rightarrow> 'a set" ("_\<down>" [100]100) where
"x\<down> \<equiv> {y. y \<preceq>\<^sub>R x}"
text \<open>The following counterexample rules out reflexivity.\<close>
lemma "x \<preceq>\<^sub>R x" (* nitpick [expect=genuine] *)
oops
lemma gR_rel_trans: "x \<preceq>\<^sub>R y \<Longrightarrow> y \<preceq>\<^sub>R z \<Longrightarrow> x \<preceq>\<^sub>R z"
by (metis gR_rel_def add_assoc add_assocD_var2)
lemma gL_rel_trans: "x \<preceq>\<^sub>L y \<Longrightarrow> y \<preceq>\<^sub>L z \<Longrightarrow> x \<preceq>\<^sub>L z"
by (metis gL_rel_def add_assocD_var1 add_assoc_var)
lemma gR_add_isol: "D z y \<Longrightarrow> x \<preceq>\<^sub>R y \<Longrightarrow> z \<cdot> x \<preceq>\<^sub>R z \<cdot> y"
apply (simp add: gR_rel_def)
using add_assocD_var1 add_assoc_var by blast
lemma gL_add_isor: "D y z \<Longrightarrow> x \<preceq>\<^sub>L y \<Longrightarrow> x \<cdot> z \<preceq>\<^sub>L y \<cdot> z"
apply (simp add: gL_rel_def)
by (metis add_assoc add_assocD_var2)
definition annil :: "'a \<Rightarrow> bool" where
"annil x = (\<forall>y. D x y \<and> x \<cdot> y = x)"
definition annir :: "'a \<Rightarrow> bool" where
"annir x = (\<forall>y. D y x \<and> y \<cdot> x = x)"
end
subsection \<open>Morphisms\<close>
definition ps_morphism :: "('a::partial_semigroup \<Rightarrow> 'b::partial_semigroup) \<Rightarrow> bool" where
"ps_morphism f = (\<forall>x y. D x y \<longrightarrow> D (f x) (f y) \<and> f (x \<cdot> y) = (f x) \<cdot> (f y))"
definition strong_ps_morphism :: "('a::partial_semigroup \<Rightarrow> 'b::partial_semigroup) \<Rightarrow> bool" where
"strong_ps_morphism f = (ps_morphism f \<and> (\<forall>x y. D (f x) (f y) \<longrightarrow> D x y))"
subsection \<open>Locally Finite Partial Semigroups\<close>
text \<open>In locally finite partial semigroups, elements can only be split in finitely many ways.\<close>
class locally_finite_partial_semigroup = partial_semigroup +
assumes loc_fin: "finite (x\<down>)"
subsection \<open>Cancellative Partial Semigroups\<close>
class cancellative_partial_semigroup = partial_semigroup +
assumes add_cancl: "D z x \<Longrightarrow> D z y \<Longrightarrow> z \<cdot> x = z \<cdot> y \<Longrightarrow> x = y"
and add_cancr: "D x z \<Longrightarrow> D y z \<Longrightarrow> x \<cdot> z = y \<cdot> z \<Longrightarrow> x = y"
begin
lemma unique_resl: "D x z \<Longrightarrow> D x z' \<Longrightarrow> x \<cdot> z = y \<Longrightarrow> x \<cdot> z' = y \<Longrightarrow> z = z'"
by (simp add: add_cancl)
lemma unique_resr: "D z x \<Longrightarrow> D z' x \<Longrightarrow> z \<cdot> x = y \<Longrightarrow> z' \<cdot> x = y \<Longrightarrow> z = z'"
by (simp add: add_cancr)
lemma gR_rel_mult: "D x y \<Longrightarrow> x \<preceq>\<^sub>R x \<cdot> y"
using gR_rel_def by force
lemma gL_rel_mult: "D x y \<Longrightarrow> y \<preceq>\<^sub>L x \<cdot> y"
using gL_rel_def by force
text \<open>By cancellation, the element z is uniquely determined for each pair (x, y), provided it exists.
In both cases, z is therefore a function of x and y; it is a quotient or residual of y by x.\<close>
lemma quotr_unique: "x \<preceq>\<^sub>R y \<Longrightarrow> (\<exists>!z. D x z \<and> y = x \<cdot> z)"
using gR_rel_def add_cancl by force
lemma quotl_unique: "x \<preceq>\<^sub>L y \<Longrightarrow> (\<exists>!z. D z x \<and> y = z \<cdot> x)"
using gL_rel_def unique_resr by force
definition "rquot y x = (THE z. D x z \<and> x \<cdot> z = y)"
definition "lquot y x = (THE z. D z x \<and> z \<cdot> x = y)"
lemma rquot_prop: "D x z \<and> y = x \<cdot> z \<Longrightarrow> z = rquot y x"
by (metis (mono_tags, lifting) rquot_def the_equality unique_resl)
lemma rquot_mult: "x \<preceq>\<^sub>R y \<Longrightarrow> z = rquot y x \<Longrightarrow> x \<cdot> z = y"
using gR_rel_def rquot_prop by force
lemma rquot_D: "x \<preceq>\<^sub>R y \<Longrightarrow> z = rquot y x \<Longrightarrow> D x z"
using gR_rel_def rquot_prop by force
lemma add_rquot: "x \<preceq>\<^sub>R y \<Longrightarrow> (D x z \<and> x \<oplus> z = y \<longleftrightarrow> z = rquot y x)"
using gR_rel_def rquot_prop by fastforce
lemma add_canc1: "D x y \<Longrightarrow> rquot (x \<cdot> y) x = y"
using rquot_prop by simp
lemma add_canc2: "x \<preceq>\<^sub>R y \<Longrightarrow> x \<cdot> (rquot y x) = y"
using gR_rel_def add_canc1 by force
lemma add_canc2_prop: "x \<preceq>\<^sub>R y \<Longrightarrow> rquot y x \<preceq>\<^sub>L y"
using gL_rel_mult rquot_D rquot_mult by fastforce
text \<open>The next set of lemmas establishes standard Galois connections for cancellative partial semigroups.\<close>
lemma gR_galois_imp1: "D x z \<Longrightarrow> x \<cdot> z \<preceq>\<^sub>R y \<Longrightarrow> z \<preceq>\<^sub>R rquot y x"
by (metis gR_rel_def add_assoc add_assocD_var2 rquot_prop)
lemma gR_galois_imp21: "x \<preceq>\<^sub>R y \<Longrightarrow> z \<preceq>\<^sub>R rquot y x \<Longrightarrow> x \<cdot> z \<preceq>\<^sub>R y"
using gR_add_isol rquot_D rquot_mult by fastforce
lemma gR_galois_imp22: "x \<preceq>\<^sub>R y \<Longrightarrow> z \<preceq>\<^sub>R rquot y x \<Longrightarrow> D x z"
using gR_rel_def add_assocD add_canc1 by fastforce
lemma gR_galois: "x \<preceq>\<^sub>R y \<Longrightarrow> (D x z \<and> x \<cdot> z \<preceq>\<^sub>R y \<longleftrightarrow> z \<preceq>\<^sub>R rquot y x)"
using gR_galois_imp1 gR_galois_imp21 gR_galois_imp22 by blast
lemma gR_rel_defined: "x \<preceq>\<^sub>R y \<Longrightarrow> D x (rquot y x)"
by (simp add: rquot_D)
lemma ex_add_galois: "D x z \<Longrightarrow> (\<exists>y. x \<cdot> z = y \<longleftrightarrow> rquot y x = z)"
using add_canc1 by force
end
subsection \<open>Partial Monoids\<close>
text \<open>We allow partial monoids with multiple units. This is similar to and inspired by small categories.\<close>
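text \<open>For intuition: in a small category, composition is defined precisely when domains and
codomains match, and the identity arrows play the role of the units in E; distinct objects
contribute distinct units.\<close>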
class partial_monoid = partial_semigroup +
fixes E :: "'a set"
assumes unitl_ex: "\<exists>e \<in> E. D e x \<and> e \<cdot> x = x"
and unitr_ex: "\<exists>e \<in> E. D x e \<and> x \<cdot> e = x"
and units_eq: "e1 \<in> E \<Longrightarrow> e2 \<in> E \<Longrightarrow> D e1 e2 \<Longrightarrow> e1 = e2"
text \<open>Every monoid is a partial monoid.\<close>
sublocale monoid_mult \<subseteq> mon: partial_monoid _ "\<lambda>x y. True" "{1}"
by (standard; simp_all)
context partial_monoid
begin
lemma units_eq_var: "e1 \<in> E \<Longrightarrow> e2 \<in> E \<Longrightarrow> e1 \<noteq> e2 \<Longrightarrow> \<not> D e1 e2"
using units_eq by force
text \<open>In partial monoids, the Green's preorders are reflexive, hence genuine preorders, but they need not be partial orders.\<close>
sublocale gR: preorder gR_rel strict_gR_rel
apply standard
apply (simp add: strict_gR_rel_def)
using gR_rel_def unitr_ex apply force
using gR_rel_trans by blast
sublocale gL: preorder gL_rel strict_gL_rel
apply standard
apply (simp add: strict_gL_rel_def)
using gL_rel_def unitl_ex apply force
using gL_rel_trans by blast
lemma "x \<preceq>\<^sub>R y \<Longrightarrow> y \<preceq>\<^sub>R x \<Longrightarrow> x = y" (* nitpick [expect=genuine] *)
oops
lemma "annil x \<Longrightarrow> annil y \<Longrightarrow> x = y" (* nitpick [expext=genuine] *)
oops
lemma "annir x \<Longrightarrow> annir y \<Longrightarrow> x = y" (* nitpick [expect=genuine] *)
oops
end
text \<open>Next we define partial monoid morphisms.\<close>
definition pm_morphism :: "('a::partial_monoid \<Rightarrow> 'b::partial_monoid) \<Rightarrow> bool" where
"pm_morphism f = (ps_morphism f \<and> (\<forall>e. e \<in> E \<longrightarrow> (f e) \<in> E))"
definition strong_pm_morphism :: "('a::partial_monoid \<Rightarrow> 'b::partial_monoid) \<Rightarrow> bool" where
"strong_pm_morphism f = (pm_morphism f \<and> (\<forall>e. (f e) \<in> E \<longrightarrow> e \<in> E))"
text \<open>Partial Monoids with a single unit form a special case.\<close>
class partial_monoid_one = partial_semigroup + one +
assumes oneDl: "D x 1"
and oneDr: "D 1 x"
and oner: "x \<cdot> 1 = x"
and onel: "1 \<cdot> x = x"
begin
sublocale pmo: partial_monoid _ _ "{1}"
by standard (simp_all add: oneDr onel oneDl oner)
end
subsection \<open>Cancellative Partial Monoids\<close>
class cancellative_partial_monoid = cancellative_partial_semigroup + partial_monoid
begin
lemma canc_unitr: "D x e \<Longrightarrow> x \<cdot> e = x \<Longrightarrow> e \<in> E"
by (metis add_cancl unitr_ex)
lemma canc_unitl: "D e x \<Longrightarrow> e \<cdot> x = x \<Longrightarrow> e \<in> E"
by (metis add_cancr unitl_ex)
end
subsection \<open>Positive Partial Monoids\<close>
class positive_partial_monoid = partial_monoid +
assumes posl: "D x y \<Longrightarrow> x \<cdot> y \<in> E \<Longrightarrow> x \<in> E"
and posr: "D x y \<Longrightarrow> x \<cdot> y \<in> E \<Longrightarrow> y \<in> E"
begin
lemma pos_unitl: "D x y \<Longrightarrow> e \<in> E \<Longrightarrow> x \<cdot> y = e \<Longrightarrow> x = e"
by (metis posl posr unitr_ex units_eq_var)
lemma pos_unitr: "D x y \<Longrightarrow> e \<in> E \<Longrightarrow> x \<cdot> y = e \<Longrightarrow> y = e"
by (metis posl posr unitr_ex units_eq_var)
end
subsection \<open>Positive Cancellative Partial Monoids\<close>
class positive_cancellative_partial_monoid = positive_partial_monoid + cancellative_partial_monoid
begin
text \<open>In positive cancellative partial monoids, the Green's preorders are partial orders.\<close>
sublocale pcpmR: order gR_rel strict_gR_rel
apply standard
apply (clarsimp simp: gR_rel_def)
by (metis canc_unitr add_assoc add_assocD_var2 pos_unitl)
sublocale pcpmL: order gL_rel strict_gL_rel
apply standard
apply (clarsimp simp: gL_rel_def)
by (metis canc_unitl add_assoc add_assocD_var1 pos_unitr)
end
subsection \<open>From Partial Abelian Semigroups to Partial Abelian Monoids\<close>
text \<open>Next we define partial abelian semigroups. These are interesting, e.g., for the foundations
of quantum mechanics and as resource monoids in separation logic.\<close>
class pas = partial_semigroup +
assumes add_comm: "D x y \<Longrightarrow> D y x \<and> x \<oplus> y = y \<oplus> x"
begin
lemma D_comm: "D x y \<longleftrightarrow> D y x"
by (auto simp add: add_comm)
lemma annilr: "annil x = annir x"
by (metis annil_def annir_def add_comm)
lemma anni_unique: "annil x \<Longrightarrow> annil y \<Longrightarrow> x = y"
by (metis annilr annil_def annir_def)
end
text \<open>The following classes collect families of partial abelian semigroups and monoids.\<close>
class locally_finite_pas = pas + locally_finite_partial_semigroup
class pam = pas + partial_monoid
class cancellative_pam = pam + cancellative_partial_semigroup
class positive_pam = pam + positive_partial_monoid
class positive_cancellative_pam = positive_pam + cancellative_pam
class generalised_effect_algebra = pas + partial_monoid_one
class cancellative_pam_one = cancellative_pam + partial_monoid_one
class positive_cancellative_pam_one = positive_cancellative_pam + cancellative_pam_one
context cancellative_pam_one
begin
lemma E_eq_one: "E = {1}"
by (metis oneDr oner unitl_ex units_eq singleton_iff subsetI subset_antisym)
lemma one_in_E: "1 \<in> E"
by (simp add: E_eq_one)
end
subsection \<open>Alternative Definitions\<close>
text \<open>PAS's can be axiomatised more compactly as follows.\<close>
class pas_alt = partial_times +
assumes pas_alt_assoc: "D x y \<and> D (x \<oplus> y) z \<Longrightarrow> D y z \<and> D x (y \<oplus> z) \<and> (x \<oplus> y) \<oplus> z = x \<oplus> (y \<oplus> z)"
and pas_alt_comm: "D x y \<Longrightarrow> D y x \<and> x \<oplus> y = y \<oplus> x"
sublocale pas_alt \<subseteq> palt: pas
apply standard
using pas_alt_assoc pas_alt_comm by blast+
text \<open>Positive PAM's can be axiomatised more compactly as well.\<close>
class pam_pos_alt = pam +
assumes pos_alt: "D x y \<Longrightarrow> e \<in> E \<Longrightarrow> x \<oplus> y = e \<Longrightarrow> x = e"
sublocale pam_pos_alt \<subseteq> ppalt: positive_pam
apply standard
using pos_alt apply force
using add_comm pos_alt by fastforce
subsection \<open>Product Constructions\<close>
text \<open>We consider two kinds of product construction. The first one combines partial semigroups with sets,
the second one combines partial semigroups with partial semigroups. The first one is interesting for
separation logic. Semidirect product constructions are considered later.\<close>
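text \<open>Concretely, in the first construction below, (a, x) \<cdot> (b, y) is defined iff a = b and
D x y, in which case it equals (a, x \<cdot> y); the first component is carried along unchanged,
as in the (store, heap) pairs of separation logic.\<close>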
instantiation prod :: (type, partial_semigroup) partial_semigroup
begin
definition "D_prod x y = (fst x = fst y \<and> D (snd x) (snd y))"
for x y :: "'a \<times> 'b"
definition times_prod :: "'a \<times> 'b \<Rightarrow> 'a \<times> 'b \<Rightarrow> 'a \<times> 'b" where
"times_prod x y = (fst x, snd x \<cdot> snd y)"
instance
apply (standard, simp_all add: D_prod_def times_prod_def)
using partial_semigroup_class.add_assocD apply force
by (simp add: partial_semigroup_class.add_assoc)
end
instantiation prod :: (type, partial_monoid) partial_monoid
begin
definition E_prod :: "('a \<times> 'b) set" where
"E_prod = {x. snd x \<in> E}"
instance
apply (standard, simp_all add: D_prod_def times_prod_def E_prod_def)
using partial_monoid_class.unitl_ex apply fastforce
using partial_monoid_class.unitr_ex apply fastforce
by (simp add: partial_monoid_class.units_eq prod_eq_iff)
end
instance prod :: (type, pas) pas
apply (standard, simp add: D_prod_def times_prod_def)
using pas_class.add_comm by force
lemma prod_div1: "(x1::'a, y1::'b::pas) \<preceq>\<^sub>R (x2::'a, y2::'b::pas) \<Longrightarrow> x1 = x2"
by (force simp: partial_semigroup_class.gR_rel_def times_prod_def)
lemma prod_div2: "(x1, y1) \<preceq>\<^sub>R (x2, y2) \<Longrightarrow> y1 \<preceq>\<^sub>R y2"
by (force simp: partial_semigroup_class.gR_rel_def D_prod_def times_prod_def)
lemma prod_div_eq: "(x1, y1) \<preceq>\<^sub>R (x2, y2) \<longleftrightarrow> x1 = x2 \<and> y1 \<preceq>\<^sub>R y2"
by (force simp: partial_semigroup_class.gR_rel_def D_prod_def times_prod_def)
instance prod :: (type, pam) pam
by standard
instance prod :: (type, cancellative_pam) cancellative_pam
by (standard, auto simp: D_prod_def times_prod_def add_cancr add_cancl)
lemma prod_res_eq: "(x1, y1) \<preceq>\<^sub>R (x2::'a,y2::'b::cancellative_pam)
\<Longrightarrow> rquot (x2, y2) (x1, y1) = (x1, rquot y2 y1)"
apply (clarsimp simp: partial_semigroup_class.gR_rel_def D_prod_def times_prod_def rquot_def)
apply (rule theI2 conjI)
apply force
using add_cancl apply force
by (rule the_equality, auto simp: add_cancl)
instance prod :: (type, positive_pam) positive_pam
apply (standard, simp_all add: E_prod_def D_prod_def times_prod_def)
using positive_partial_monoid_class.posl apply blast
using positive_partial_monoid_class.posr by blast
instance prod :: (type, positive_cancellative_pam) positive_cancellative_pam ..
instance prod :: (type, locally_finite_pas) locally_finite_pas
proof (standard, case_tac x, clarsimp)
fix s :: 'a and x :: 'b
have "finite (x\<down>)"
by (simp add: loc_fin)
hence "finite {y. \<exists>z. D y z \<and> y \<oplus> z = x}"
by (simp add: partial_semigroup_class.gR_downset_def partial_semigroup_class.gR_rel_def)
hence "finite {(s, y)| y. \<exists>z. D y z \<and> y \<oplus> z = x}"
by (drule_tac f="\<lambda>y. (s, y)" in finite_image_set)
moreover have "{y. \<exists>z1 z2. D y (z1, z2) \<and> y \<oplus> (z1, z2) = (s, x)}
\<subseteq> {(s, y)| y. \<exists>z. D y z \<and> y \<oplus> z = x}"
by (auto simp: D_prod_def times_prod_def)
ultimately have "finite {y. \<exists>z1 z2. D y (z1, z2) \<and> y \<oplus> (z1, z2) = (s, x)}"
by (auto intro: finite_subset)
thus "finite ((s, x)\<down>)"
by (simp add: partial_semigroup_class.gR_downset_def partial_semigroup_class.gR_rel_def)
qed
text \<open>Next we consider products of two partial semigroups.\<close>
definition ps_prod_D :: "'a :: partial_semigroup \<times> 'b :: partial_semigroup \<Rightarrow> 'a \<times> 'b \<Rightarrow> bool"
where "ps_prod_D x y \<equiv> D (fst x) (fst y) \<and> D (snd x) (snd y)"
definition ps_prod_times :: "'a :: partial_semigroup \<times> 'b :: partial_semigroup \<Rightarrow> 'a \<times> 'b \<Rightarrow> 'a \<times> 'b"
where "ps_prod_times x y = (fst x \<cdot> fst y, snd x \<cdot> snd y)"
interpretation ps_prod: partial_semigroup ps_prod_times ps_prod_D
apply (standard, simp_all add: ps_prod_D_def ps_prod_times_def)
apply (meson partial_semigroup_class.add_assocD)
by (simp add: partial_semigroup_class.add_assoc)
interpretation pas_prod: pas ps_prod_times "ps_prod_D :: 'a :: pas \<times> 'b :: pas \<Rightarrow> 'a \<times> 'b \<Rightarrow> bool"
by (standard, clarsimp simp: ps_prod_D_def ps_prod_times_def pas_class.add_comm)
definition pm_prod_E :: "('a :: partial_monoid \<times> 'b :: partial_monoid) set" where
"pm_prod_E = {x. fst x \<in> E \<and> snd x \<in> E}"
interpretation pm_prod: partial_monoid ps_prod_times ps_prod_D pm_prod_E
apply standard
apply (simp_all add: ps_prod_times_def ps_prod_D_def pm_prod_E_def)
apply (metis partial_monoid_class.unitl_ex prod.collapse)
apply (metis partial_monoid_class.unitr_ex prod.collapse)
by (simp add: partial_monoid_class.units_eq prod.expand)
interpretation pam_prod: pam ps_prod_times ps_prod_D "pm_prod_E :: ('a :: pam \<times> 'a :: pam) set" ..
subsection \<open>Partial Semigroup Actions and Semidirect Products\<close>
text \<open>(Semi)group actions are a standard mathematical construction. We generalise them to partial
semigroups and monoids and use them to define semidirect products of partial semigroups. A generalisation
to wreath products might be added in the future.\<close>
text \<open>First we define the (left) action of a partial semigroup on a set. A right action could be defined in a similar way,
but we do not pursue this at the moment.\<close>
locale partial_sg_laction =
fixes Dla :: "'a::partial_semigroup \<Rightarrow> 'b \<Rightarrow> bool"
and act :: "'a::partial_semigroup \<Rightarrow> 'b \<Rightarrow> 'b" ("\<alpha>")
assumes act_assocD: "D x y \<and> Dla (x \<cdot> y) p \<longleftrightarrow> Dla y p \<and> Dla x (\<alpha> y p)"
and act_assoc: "D x y \<and> Dla (x \<cdot> y) p \<Longrightarrow> \<alpha> (x \<cdot> y) p = \<alpha> x (\<alpha> y p)"
text \<open>Next we define the action of a partial semigroup on another partial semigroup.
In the tradition of semigroup theory we use addition as a non-commutative operation for the second semigroup.\<close>
locale partial_sg_sg_laction = partial_sg_laction +
assumes act_distribD: "D (p::'b::partial_semigroup) q \<and> Dla (x::'a::partial_semigroup) (p \<oplus> q) \<longleftrightarrow> Dla x p \<and> Dla x q \<and> D (\<alpha> x p) (\<alpha> x q)"
and act_distrib: "D p q \<and> Dla x (p \<oplus> q) \<Longrightarrow> \<alpha> x (p \<oplus> q) = (\<alpha> x p) \<oplus> (\<alpha> x q)"
begin
text \<open>Next we define the semidirect product as a partial operation and show that the semidirect
product of two partial semigroups forms a partial semigroup.\<close>
definition sd_D :: "('a \<times> 'b) \<Rightarrow> ('a \<times> 'b) \<Rightarrow> bool" where
"sd_D x y \<equiv> D (fst x) (fst y) \<and> Dla (fst x) (snd y) \<and> D (snd x) (\<alpha> (fst x) (snd y))"
definition sd_prod :: "('a \<times> 'b) \<Rightarrow> ('a \<times> 'b) \<Rightarrow> ('a \<times> 'b)" where
"sd_prod x y = ((fst x) \<cdot> (fst y), (snd x) \<oplus> (\<alpha> (fst x) (snd y)))"
sublocale dp_semigroup: partial_semigroup sd_prod sd_D
apply unfold_locales
apply (simp_all add: sd_prod_def sd_D_def)
apply (clarsimp, metis act_assoc act_assocD act_distrib act_distribD add_assocD)
by (clarsimp, metis act_assoc act_assocD act_distrib act_distribD add_assoc add_assocD)
end
text \<open>Finally we define the semigroup action for two partial monoids and show that the semidirect product of two partial monoids
is a partial monoid.\<close>
locale partial_mon_sg_laction = partial_sg_sg_laction Dla
for Dla :: "'a::partial_monoid \<Rightarrow> 'b::partial_semigroup \<Rightarrow> bool" +
assumes act_unitl: "e \<in> E \<Longrightarrow> Dla e p \<and> \<alpha> e p = p"
locale partial_mon_mon_laction = partial_mon_sg_laction _ Dla
for Dla :: "'a::partial_monoid \<Rightarrow> 'b::partial_monoid \<Rightarrow> bool" +
assumes act_annir: "e \<in> Ea \<Longrightarrow> Dla x e \<and> \<alpha> x e = e"
begin
definition sd_E :: "('a \<times> 'b) set" where
"sd_E = {x. fst x \<in> E \<and> snd x \<in> E}"
sublocale dp_semigroup : partial_monoid sd_prod sd_D sd_E
apply unfold_locales
apply (simp_all add: sd_prod_def sd_D_def sd_E_def)
apply (metis act_annir eq_fst_iff eq_snd_iff mem_Collect_eq partial_monoid_class.unitl_ex)
apply (metis act_annir eq_fst_iff eq_snd_iff partial_monoid_class.unitr_ex)
by (metis act_annir partial_monoid_class.units_eq prod_eqI)
end
end
|
@testset "TD_OPs" begin
#test transform-domain operators (linear operators)
# 2D discrete gradient
n1=9
n2=6
h1=0.99
h2=1.123
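# Assumed from the reshaping and equality checks below: "TV" returns the z- and
# x-derivative operators stacked as [D_z; D_x], while "D_x"/"D_z" return the
# individual first-difference operators scaled by 1/h.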
D2D = get_discrete_Grad(n1,n2,h1,h2,"TV")
D2x = get_discrete_Grad(n1,n2,h1,h2,"D_x")
D2z = get_discrete_Grad(n1,n2,h1,h2,"D_z")
x=zeros(n1,n2) #test on a 'cross' image
x[:,3].=1.0
x[4,:].=1.0
a1 = D2x*vec(x); a1=reshape(a1,n1-1,n2)
a2 = D2z*vec(x); a2=reshape(a2,n1,n2-1)
a3 = D2D*vec(x); a3a=a3[1:(n2-1)*n1]; a3b=a3[1+(n2-1)*n1:end];
a3a = reshape(a3a,n1,n2-1)
a3b = reshape(a3b,n1-1,n2)
#this test depends on the values of h1 and h2, as well as the type of derivative
@test a1==diff(x, dims=1)./h1
@test a2==diff(x, dims=2)./h2
#some more general tests:
@test count(!iszero, a1[:,3])==0
for i in [1 2 4 5 6]
@test a1[:,1]==a1[:,i]
end
#some more general tests:
@test count(!iszero, a2[4,:])==0
for i in [1 2 3 5 6 7 8 9]
@test a2[1,:]==a2[i,:]
end
@test a3a==a2
@test a3b==a1
# 3D discrete gradient
n1=4
n2=6
n3=5
h1=0.99
h2=1.123
h3=1.0
D3D = get_discrete_Grad(n1,n2,n3,h1,h2,h3,"TV")
D3x = get_discrete_Grad(n1,n2,n3,h1,h2,h3,"D_x")
D3y = get_discrete_Grad(n1,n2,n3,h1,h2,h3,"D_y")
D3z = get_discrete_Grad(n1,n2,n3,h1,h2,h3,"D_z")
x=zeros(n1,n2,n3) #test on a 'cross' image
x[2,:,:].=1.0
x[:,4,:].=1.0
x[:,:,3].=1.0
a1 = D3x*vec(x); a1=reshape(a1,n1-1,n2,n3)
a2 = D3y*vec(x); a2=reshape(a2,n1,n2-1,n3)
a3 = D3z*vec(x); a3=reshape(a3,n1,n2,n3-1)
for i=1:n2
@test a1[:,i,:]==diff(x[:,i,:], dims=1)./h1
end
for i=1:n3
@test a1[:,:,i]==diff(x[:,:,i], dims=1)./h1
end
for i=1:n1
@test a2[i,:,:]==diff(x[i,:,:], dims=1)./h2
end
for i=1:n3
@test a2[:,:,i]==diff(x[:,:,i], dims=2)./h2
end
for i=1:n1
@test a3[i,:,:]==diff(x[i,:,:], dims=2)./h3
end
for i=1:n2
@test a3[:,i,:]==diff(x[:,i,:], dims=2)./h3
end
end
|
[STATEMENT]
lemma suntil_as_until: "(\<phi> suntil \<psi>) \<omega> = ((\<phi> until \<psi>) \<omega> \<and> ev \<psi> \<omega>)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<phi> suntil \<psi>) \<omega> = ((\<phi> until \<psi>) \<omega> \<and> ev \<psi> \<omega>)
[PROOF STEP]
using ev_suntil suntil_implies_until until_ev_suntil
[PROOF STATE]
proof (prove)
using this:
(?\<phi> suntil ?\<psi>) ?\<omega> \<Longrightarrow> ev ?\<psi> ?\<omega>
(?\<phi> suntil ?\<psi>) ?\<omega> \<Longrightarrow> (?\<phi> until ?\<psi>) ?\<omega>
\<lbrakk>(?\<phi> until ?\<psi>) ?\<omega>; ev ?\<psi> ?\<omega>\<rbrakk> \<Longrightarrow> (?\<phi> suntil ?\<psi>) ?\<omega>
goal (1 subgoal):
1. (\<phi> suntil \<psi>) \<omega> = ((\<phi> until \<psi>) \<omega> \<and> ev \<psi> \<omega>)
[PROOF STEP]
by blast
|
Require Import VST.floyd.proofauto.
Require Import mmap0.
Instance CompSpecs : compspecs. make_compspecs prog. Defined.
Definition Vprog : varspecs. mk_varspecs prog. Defined.
Definition placeholder_spec :=
DECLARE _placeholder
WITH u: unit
PRE [ ]
PROP (False) PARAMS() GLOBALS() SEP()
POST [ tint ]
PROP() LOCAL() SEP().
Definition ispecs := [placeholder_spec].
|
[STATEMENT]
lemma is_lifting_id: "is_lifting (\<lambda>x. x) basis basis"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. is_lifting (\<lambda>x. x) basis basis
[PROOF STEP]
by (simp add: is_lifting_def)
|
C Tests flagging of deleted features in Fortran 95
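C The deleted features exercised below include the ASSIGN statement,
C assigned GOTO, assigned FORMAT specifiers, a real DO control variable,
C and Hollerith (nH) edit descriptors in FORMAT statements.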
integer i,j,label,format
real x
assign 5 to label
assign 90 to format
j = 1
5 continue
do 10 x=1,5
print format,x,j*x**2
10 continue
j = j+1
if( j .eq. 2 ) goto label
i = j*2
assign 91 to format
write(*,format) i,j
write(*,91) i,j
90 format(1x,2f6.0)
91 format(1x,2hi=,i5,3h j=,i5)
end
|
-- -------------------------------------------------------------- [ ReSkin.idr ]
-- Module : ReSkin.idr
-- Copyright : (c) Jan de Muijnck-Hughes
-- License : see LICENSE
-- --------------------------------------------------------------------- [ EOH ]
||| Example DSML through reSkinning the GRL.
module GRL.Test.DSML.PML
import public GRL.Common
import public GRL.IR
import public GRL.Model
import public GRL.Builder
import public GRL.Pretty
%access public export
-- ---------------------------------------------------------------- [ DSML Def ]
data ETy = PaperTy | SecTy | AuthTy | RevTy | BibTy | AbsTy
data PTy = ElemTy ETy | SLinkTy | ALinkTy
data ValidAction : ETy -> ETy -> Type where
AS : ValidAction AuthTy SecTy
RS : ValidAction RevTy SecTy
AB : ValidAction AuthTy BibTy
RB : ValidAction RevTy BibTy
AA : ValidAction AuthTy AbsTy
RA : ValidAction RevTy AbsTy
data ValidPElem : ETy -> Type where
SP : ValidPElem SecTy
BP : ValidPElem BibTy
AP : ValidPElem AbsTy
data PML : PTy -> GTy -> Type where
MkPaper : String -> PML (ElemTy PaperTy) ELEM
MkSect : String -> PML (ElemTy SecTy) ELEM
MkBib : PML (ElemTy BibTy) ELEM
MkAbs : PML (ElemTy AbsTy) ELEM
MkAuth : String -> SValue -> PML (ElemTy AuthTy) ELEM
MkRev : String -> SValue -> PML (ElemTy RevTy) ELEM
AddElem : PML (ElemTy PaperTy) ELEM
-> PML (ElemTy x) ELEM
-> {auto prf : ValidPElem x}
-> PML SLinkTy STRUCT
AddAction : PML (ElemTy x) ELEM
-> PML (ElemTy y) ELEM
-> {auto prf : ValidAction x y}
-> PML ALinkTy INTENT
GRL (\x => PML ty x) where
mkElem (MkPaper t) = Elem GOALty t Nothing
mkElem (MkSect t) = Elem GOALty t Nothing
mkElem (MkBib) = Elem GOALty "Bibliography" Nothing
mkElem (MkAbs) = Elem GOALty "Abstract" Nothing
mkElem (MkAuth t s) = Elem TASKty ("Authoring " ++ t) (Just s)
mkElem (MkRev t s) = Elem TASKty ("Reviewing " ++ t) (Just s)
mkIntent (AddAction x y) = ILink IMPACTSty MAKES (mkElem x) (mkElem y)
mkStruct (AddElem x y) = SLink ANDty (mkElem x) [(mkElem y)]
-- ---------------------------------------------------------------- [ MkPretty ]
syntax [a] "==>" [b] = AddAction a b
syntax [a] "&=" [b] = AddElem a b
PAPER : Type
PAPER = PML (ElemTy PaperTy) ELEM
SECT : Type
SECT = PML (ElemTy SecTy) ELEM
BIB : Type
BIB = PML (ElemTy BibTy) ELEM
ABSTRACT : Type
ABSTRACT = PML (ElemTy AbsTy) ELEM
WRITING : Type
WRITING = PML (ElemTy AuthTy) ELEM
REVIEW : Type
REVIEW = PML (ElemTy RevTy) ELEM
-- --------------------------------------------------------------------- [ EOF ]
|
/-
Copyright (c) 2020 Yury Kudryashov. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author: Yury Kudryashov
-/
import Mathlib.PrePort
import Mathlib.Lean3Lib.init.default
import Mathlib.analysis.normed_space.banach
import Mathlib.analysis.normed_space.finite_dimension
import Mathlib.PostPort
universes u_1 u_2 u_3 u_4
namespace Mathlib
/-!
# Complemented subspaces of normed vector spaces
A submodule `p` of a topological module `E` over `R` is called *complemented* if there exists
a continuous linear projection `f : E →ₗ[R] p` such that `∀ x : p, f x = x`. We prove that for
a closed subspace of a normed space this condition is equivalent to the existence of a closed
subspace `q` such that `p ⊓ q = ⊥`, `p ⊔ q = ⊤`. We also prove that a subspace of finite codimension
is always a complemented subspace.
## Tags
complemented subspace, normed vector space
-/
namespace continuous_linear_map
theorem ker_closed_complemented_of_finite_dimensional_range {𝕜 : Type u_1}
[nondiscrete_normed_field 𝕜] {E : Type u_2} [normed_group E] [normed_space 𝕜 E] {F : Type u_3}
[normed_group F] [normed_space 𝕜 F] [complete_space 𝕜] (f : continuous_linear_map 𝕜 E F)
[finite_dimensional 𝕜 ↥(range f)] : submodule.closed_complemented (ker f) :=
sorry
/-- If `f : E →L[R] F` and `g : E →L[R] G` are two surjective linear maps and
their kernels are complement of each other, then `x ↦ (f x, g x)` defines
a linear equivalence `E ≃L[R] F × G`. -/
def equiv_prod_of_surjective_of_is_compl {𝕜 : Type u_1} [nondiscrete_normed_field 𝕜] {E : Type u_2}
[normed_group E] [normed_space 𝕜 E] {F : Type u_3} [normed_group F] [normed_space 𝕜 F]
{G : Type u_4} [normed_group G] [normed_space 𝕜 G] [complete_space E] [complete_space (F × G)]
(f : continuous_linear_map 𝕜 E F) (g : continuous_linear_map 𝕜 E G) (hf : range f = ⊤)
(hg : range g = ⊤) (hfg : is_compl (ker f) (ker g)) : continuous_linear_equiv 𝕜 E (F × G) :=
linear_equiv.to_continuous_linear_equiv_of_continuous
(linear_map.equiv_prod_of_surjective_of_is_compl (↑f) (↑g) hf hg hfg) sorry
@[simp] theorem coe_equiv_prod_of_surjective_of_is_compl {𝕜 : Type u_1} [nondiscrete_normed_field 𝕜]
{E : Type u_2} [normed_group E] [normed_space 𝕜 E] {F : Type u_3} [normed_group F]
[normed_space 𝕜 F] {G : Type u_4} [normed_group G] [normed_space 𝕜 G] [complete_space E]
[complete_space (F × G)] {f : continuous_linear_map 𝕜 E F} {g : continuous_linear_map 𝕜 E G}
(hf : range f = ⊤) (hg : range g = ⊤) (hfg : is_compl (ker f) (ker g)) :
↑(equiv_prod_of_surjective_of_is_compl f g hf hg hfg) = ↑(continuous_linear_map.prod f g) :=
rfl
@[simp] theorem equiv_prod_of_surjective_of_is_compl_to_linear_equiv {𝕜 : Type u_1}
[nondiscrete_normed_field 𝕜] {E : Type u_2} [normed_group E] [normed_space 𝕜 E] {F : Type u_3}
[normed_group F] [normed_space 𝕜 F] {G : Type u_4} [normed_group G] [normed_space 𝕜 G]
[complete_space E] [complete_space (F × G)] {f : continuous_linear_map 𝕜 E F}
{g : continuous_linear_map 𝕜 E G} (hf : range f = ⊤) (hg : range g = ⊤)
(hfg : is_compl (ker f) (ker g)) :
continuous_linear_equiv.to_linear_equiv (equiv_prod_of_surjective_of_is_compl f g hf hg hfg) =
linear_map.equiv_prod_of_surjective_of_is_compl (↑f) (↑g) hf hg hfg :=
rfl
@[simp] theorem equiv_prod_of_surjective_of_is_compl_apply {𝕜 : Type u_1}
[nondiscrete_normed_field 𝕜] {E : Type u_2} [normed_group E] [normed_space 𝕜 E] {F : Type u_3}
[normed_group F] [normed_space 𝕜 F] {G : Type u_4} [normed_group G] [normed_space 𝕜 G]
[complete_space E] [complete_space (F × G)] {f : continuous_linear_map 𝕜 E F}
{g : continuous_linear_map 𝕜 E G} (hf : range f = ⊤) (hg : range g = ⊤)
(hfg : is_compl (ker f) (ker g)) (x : E) :
coe_fn (equiv_prod_of_surjective_of_is_compl f g hf hg hfg) x = (coe_fn f x, coe_fn g x) :=
rfl
end continuous_linear_map
namespace subspace
/-- If `q` is a closed complement of a closed subspace `p`, then `p × q` is continuously
isomorphic to `E`. -/
def prod_equiv_of_closed_compl {𝕜 : Type u_1} [nondiscrete_normed_field 𝕜] {E : Type u_2}
[normed_group E] [normed_space 𝕜 E] [complete_space E] (p : subspace 𝕜 E) (q : subspace 𝕜 E)
(h : is_compl p q) (hp : is_closed ↑p) (hq : is_closed ↑q) :
continuous_linear_equiv 𝕜 (↥p × ↥q) E :=
linear_equiv.to_continuous_linear_equiv_of_continuous (submodule.prod_equiv_of_is_compl p q h)
sorry
/-- Projection to a closed submodule along a closed complement. -/
def linear_proj_of_closed_compl {𝕜 : Type u_1} [nondiscrete_normed_field 𝕜] {E : Type u_2}
[normed_group E] [normed_space 𝕜 E] [complete_space E] (p : subspace 𝕜 E) (q : subspace 𝕜 E)
(h : is_compl p q) (hp : is_closed ↑p) (hq : is_closed ↑q) : continuous_linear_map 𝕜 E ↥p :=
continuous_linear_map.comp (continuous_linear_map.fst 𝕜 ↥p ↥q)
↑(continuous_linear_equiv.symm (prod_equiv_of_closed_compl p q h hp hq))
@[simp] theorem coe_prod_equiv_of_closed_compl {𝕜 : Type u_1} [nondiscrete_normed_field 𝕜]
{E : Type u_2} [normed_group E] [normed_space 𝕜 E] [complete_space E] {p : subspace 𝕜 E}
{q : subspace 𝕜 E} (h : is_compl p q) (hp : is_closed ↑p) (hq : is_closed ↑q) :
⇑(prod_equiv_of_closed_compl p q h hp hq) = ⇑(submodule.prod_equiv_of_is_compl p q h) :=
rfl
@[simp] theorem coe_prod_equiv_of_closed_compl_symm {𝕜 : Type u_1} [nondiscrete_normed_field 𝕜]
{E : Type u_2} [normed_group E] [normed_space 𝕜 E] [complete_space E] {p : subspace 𝕜 E}
{q : subspace 𝕜 E} (h : is_compl p q) (hp : is_closed ↑p) (hq : is_closed ↑q) :
⇑(continuous_linear_equiv.symm (prod_equiv_of_closed_compl p q h hp hq)) =
⇑(linear_equiv.symm (submodule.prod_equiv_of_is_compl p q h)) :=
rfl
@[simp] theorem coe_continuous_linear_proj_of_closed_compl {𝕜 : Type u_1}
[nondiscrete_normed_field 𝕜] {E : Type u_2} [normed_group E] [normed_space 𝕜 E]
[complete_space E] {p : subspace 𝕜 E} {q : subspace 𝕜 E} (h : is_compl p q) (hp : is_closed ↑p)
(hq : is_closed ↑q) :
↑(linear_proj_of_closed_compl p q h hp hq) = submodule.linear_proj_of_is_compl p q h :=
rfl
@[simp] theorem coe_continuous_linear_proj_of_closed_compl' {𝕜 : Type u_1}
[nondiscrete_normed_field 𝕜] {E : Type u_2} [normed_group E] [normed_space 𝕜 E]
[complete_space E] {p : subspace 𝕜 E} {q : subspace 𝕜 E} (h : is_compl p q) (hp : is_closed ↑p)
(hq : is_closed ↑q) :
⇑(linear_proj_of_closed_compl p q h hp hq) = ⇑(submodule.linear_proj_of_is_compl p q h) :=
rfl
theorem closed_complemented_of_closed_compl {𝕜 : Type u_1} [nondiscrete_normed_field 𝕜]
{E : Type u_2} [normed_group E] [normed_space 𝕜 E] [complete_space E] {p : subspace 𝕜 E}
{q : subspace 𝕜 E} (h : is_compl p q) (hp : is_closed ↑p) (hq : is_closed ↑q) :
submodule.closed_complemented p :=
Exists.intro (linear_proj_of_closed_compl p q h hp hq)
(submodule.linear_proj_of_is_compl_apply_left h)
theorem closed_complemented_iff_has_closed_compl {𝕜 : Type u_1} [nondiscrete_normed_field 𝕜]
{E : Type u_2} [normed_group E] [normed_space 𝕜 E] [complete_space E] {p : subspace 𝕜 E} :
submodule.closed_complemented p ↔
is_closed ↑p ∧ ∃ (q : subspace 𝕜 E), ∃ (hq : is_closed ↑q), is_compl p q :=
sorry
theorem closed_complemented_of_quotient_finite_dimensional {𝕜 : Type u_1}
[nondiscrete_normed_field 𝕜] {E : Type u_2} [normed_group E] [normed_space 𝕜 E]
[complete_space E] {p : subspace 𝕜 E} [complete_space 𝕜]
[finite_dimensional 𝕜 (submodule.quotient p)] (hp : is_closed ↑p) :
submodule.closed_complemented p :=
sorry
end Mathlib
|
import autoarray as aa
import autoarray.plot as aplt
import numpy as np
grid = aa.grid.uniform(shape_2d=(11, 11), pixel_scales=1.0)
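# The two calls below suggest that `lines` accepts either a single list of
# coordinate pairs (drawn as one line) or a list of such lists (one line per
# inner list); this is inferred from usage here, not from the autoarray docs.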
aplt.grid(grid=grid, lines=[(1.0, 1.0), (2.0, 2.0)])
aplt.grid(grid=grid, lines=[[(1.0, 1.0), (2.0, 2.0)], [(2.0, 4.0), (5.0, 6.0)]])
|
import basic
/-
In this file, we formalize a proof of Arrow's Impossibility Theorem.
We closely follow the first proof in this 2005 paper by John Geanakoplos:
* https://link.springer.com/article/10.1007/s00199-004-0556-7
Arrow's Impossibility Theorem has been formalized in other languages before.
Freek Wiedijk wrote a formalization in Mizar:
* https://link.springer.com/article/10.1007/s12046-009-0005-1
Tobias Nipkow wrote a formalization in Isabelle/HOL:
* https://link.springer.com/article/10.1007/s10817-009-9147-4
At times, our formalization is a close translation of Nipkow's proof in HOL.
In particular, our definition of functions `maketop`, `makebot`, and `makeabove` are
inspired by his strategy for manipulating preferences. However, Nipkow defines preference orders
in a completely different way from our `basic` file.
-/
open relation vector finset
-- We think of social states as type `σ` and individuals as type `ι`
variables {σ ι : Type} {x y x' y' a b : σ} {r r' : σ → σ → Prop} {X : finset σ}
/-! ### Some basic definitions and lemmas -/
/-- A social state `b` is *strictly worst* of a finite set of social states `X` with respect to
a ranking `r` if `b` is ranked strictly lower than every other `a ∈ X`. -/
def is_strictly_worst (b : σ) (r : σ → σ → Prop) (X : finset σ) : Prop :=
∀ a ∈ X, a ≠ b → P r a b
/-- A social state `b` is *strictly best* of a finite set of social states `X` with respect to
a ranking `r` if `b` is ranked strictly higher than every other `a ∈ X`. -/
def is_strictly_best (b : σ) (r : σ → σ → Prop) (X : finset σ) : Prop :=
∀ a ∈ X, a ≠ b → P r b a
/-- A social state `b` is *extremal* with respect to a finite set of social states `X`
and a ranking `r` if `b` is either strictly worst or strictly best of `X`. -/
def is_extremal (b : σ) (r : σ → σ → Prop) (X : finset σ) : Prop :=
is_strictly_worst b r X ∨ is_strictly_best b r X
lemma not_strictly_worst : ¬is_strictly_worst b r X ↔ ∃ a (h : a ∈ X) (h : a ≠ b), ¬P r a b :=
by simp only [is_strictly_worst, not_forall]
lemma not_strictly_best : ¬is_strictly_best b r X ↔ ∃ a (h : a ∈ X) (h : a ≠ b), ¬P r b a :=
by simp only [is_strictly_best, not_forall]
lemma not_extremal : ¬is_extremal b r X ↔
(∃ a (h : a ∈ X) (h : a ≠ b), ¬P r a b) ∧ (∃ c (h : c ∈ X) (h : c ≠ b), ¬P r b c) :=
by simp only [is_extremal, not_or_distrib, not_strictly_worst, not_strictly_best]
lemma not_extremal' (hr : total r) (h : ¬ is_extremal b r X) : -- maybe make an `iff`? maybe combine with `exists_of_not_extremal`? -Ben
∃ a c ∈ X, a ≠ b ∧ c ≠ b ∧ r a b ∧ r b c :=
let ⟨⟨c, hc, hcb, hPc⟩, ⟨a, ha, hab, hPa⟩⟩ := not_extremal.mp h in
⟨a, c, ha, hc, hab, hcb, R_of_nP_total hr hPa, R_of_nP_total hr hPc⟩
lemma is_strictly_best.not_strictly_worst (htop : is_strictly_best b r X) (h : ∃ a ∈ X, a ≠ b) :
¬is_strictly_worst b r X :=
let ⟨a, a_in, hab⟩ := h in not_strictly_worst.mpr ⟨a, a_in, hab, nP_of_reverseP (htop a a_in hab)⟩
lemma is_strictly_best.not_strictly_worst' (htop : is_strictly_best b r X) (hX : 2 ≤ X.card) (hb : b ∈ X) :
¬is_strictly_worst b r X :=
htop.not_strictly_worst $ exists_second_distinct_mem hX hb
lemma is_strictly_worst.not_strictly_best (hbot : is_strictly_worst b r X) (h : ∃ a ∈ X, a ≠ b) :
¬is_strictly_best b r X :=
let ⟨a, a_in, hab⟩ := h in not_strictly_best.mpr ⟨a, a_in, hab, nP_of_reverseP (hbot a a_in hab)⟩
lemma is_strictly_worst.not_strictly_best' (hbot : is_strictly_worst b r X) (hX : 2 ≤ X.card) (hb : b ∈ X) :
¬is_strictly_best b r X :=
hbot.not_strictly_best $ exists_second_distinct_mem hX hb
lemma is_extremal.is_strictly_best (hextr : is_extremal b r X) (not_strictly_worst : ¬is_strictly_worst b r X) :
is_strictly_best b r X :=
hextr.resolve_left not_strictly_worst
lemma is_extremal.is_strictly_worst (hextr : is_extremal b r X) (not_strictly_best : ¬is_strictly_best b r X) :
is_strictly_worst b r X :=
hextr.resolve_right not_strictly_best
lemma is_strictly_worst.is_extremal (hbot : is_strictly_worst b r X) : is_extremal b r X :=
or.inl hbot
lemma is_strictly_best.is_extremal (hbot : is_strictly_best b r X) : is_extremal b r X :=
or.inr hbot
/-! ### "Make" functions -/
local attribute [instance] classical.prop_decidable
/-- Given an arbitrary preference order `r` and a social state `b`,
`maketop r b` updates `r` so that `b` is now ranked strictly higher
than any other social state.
The definition also contains a proof that this new relation is a `pref_order σ`. -/
def maketop (r : pref_order σ) (b : σ) : pref_order σ :=
begin
use λ x y, if x = b then true else if y = b then false else r x y,
{ intro x,
split_ifs,
{ trivial },
{ exact r.refl x } },
{ intros x y, simp only,
split_ifs with hx _ hy,
work_on_goal 3 { exact r.total x y },
all_goals { simp only [or_true, true_or] } },
{ intros x y z, simp only,
split_ifs with hx _ _ hy _ hz; intros hxy hyz,
work_on_goal 6 { exact r.trans hxy hyz },
all_goals { trivial } },
end
/-- Given an arbitrary preference order `r` and a social state `b`,
`makebot r b` updates `r` so that every other social state is now ranked
strictly higher than `b`.
The definition also contains a proof that this new relation is a `pref_order σ`. -/
def makebot (r : pref_order σ) (b : σ) : pref_order σ :=
begin
use λ x y, if y = b then true else if x = b then false else r x y,
{ intro x,
split_ifs,
{ trivial },
{ exact r.refl x } },
{ intros x y, simp only,
split_ifs with hx _ hy,
work_on_goal 3 { exact r.total x y },
all_goals { simp only [or_true, true_or] } },
{ intros x y z, simp only,
split_ifs with hx _ _ hy _ hz; intros hxy hyz,
work_on_goal 6 { exact r.trans hxy hyz },
all_goals { trivial } },
end
/-- Given an arbitrary preference order `r` and two social states `a` and `b`,
`makeabove r a b` updates `r` so that:
(1) `b` is strictly higher than `a` and any other social state `y` where `r a y`
(2) any other social state that is strictly higher than `a` is strictly higher than `b`.
Intuitively, we have moved `b` just above `a` in the ordering.
The definition also contains a proof that this new relation is a `pref_order σ`. -/
def makeabove (r : pref_order σ) (a b : σ) : pref_order σ :=
begin
use λ x y, if x = b then if y = b then true else if r a y then true else false
else if y = b then if r a x then false else true else r x y,
{ intro x,
split_ifs,
{ trivial },
{ exact r.refl x } },
{ intros x y, simp only,
split_ifs,
work_on_goal 5 { exact r.total x y },
all_goals { simp only [or_true, true_or] } },
{ intros x y z, simp only,
split_ifs with hx hy _ _ hay hz haz _ _ hy hax _ _ hz haz hz hay; intros hxy hyz,
any_goals { trivial },
{ exact haz (r.trans hay hyz) },
{ exact r.trans (pref_order.reverse hax) haz },
{ exact hay (r.trans h hxy) },
{ exact r.trans hxy hyz } },
end
lemma maketop_noteq (r : pref_order σ) {a b c : σ} (ha : a ≠ b) (hc : c ≠ b) :
(maketop r b a c ↔ r a c) ∧ (maketop r b c a ↔ r c a) :=
begin
simp only [maketop, if_false_left_eq_and, if_true_left_eq_or],
refine ⟨⟨_, λ h, or.inr ⟨hc, h⟩⟩, ⟨_, λ h, or.inr ⟨ha, h⟩⟩⟩; rintro (rfl | ⟨-, h⟩),
exacts [absurd rfl ha, h, absurd rfl hc, h],
end
lemma maketop_noteq' (r : pref_order σ) {a b c : σ} (ha : a ≠ b) (hc : c ≠ b) :
(P (maketop r b) a c ↔ P r a c) ∧ (P (maketop r b) c a ↔ P r c a) :=
let h := maketop_noteq r ha hc in P_iff_of_iff h.1 h.2
lemma makebot_noteq (r : pref_order σ) {a b c : σ} (ha : a ≠ b) (hc : c ≠ b) :
(makebot r b a c ↔ r a c) ∧ (makebot r b c a ↔ r c a) :=
begin
simp only [makebot, if_false_left_eq_and, if_true_left_eq_or],
refine ⟨⟨_, λ h, or.inr ⟨ha, h⟩⟩, ⟨_, λ h, or.inr ⟨hc, h⟩⟩⟩; rintro (rfl | ⟨-, h⟩),
exacts [absurd rfl hc, h, absurd rfl ha, h],
end
lemma makebot_noteq' (r : pref_order σ) {a b c : σ} (ha : a ≠ b) (hc : c ≠ b) :
(P (makebot r b) a c ↔ P r a c) ∧ (P (makebot r b) c a ↔ P r c a) :=
let h := makebot_noteq r ha hc in P_iff_of_iff h.1 h.2
lemma makeabove_noteq (r : pref_order σ) (a : σ) {b c d : σ} (hc : c ≠ b) (hd : d ≠ b) :
(makeabove r a b c d ↔ r c d) ∧ (makeabove r a b d c ↔ r d c) :=
by simp [makeabove, ← pref_order.eq_coe, hc, hd]
lemma makeabove_noteq' (r : pref_order σ) (a : σ) {b c d : σ} (hc : c ≠ b) (hd : d ≠ b) :
(P (makeabove r a b) c d ↔ P r c d) ∧ (P (makeabove r a b) d c ↔ P r d c) :=
by simp [makeabove, P, ← pref_order.eq_coe, hc, hd]
lemma is_strictly_best_maketop (b : σ) (r : pref_order σ) (X : finset σ) :
is_strictly_best b (maketop r b) X :=
by simp [maketop, is_strictly_best, P, ← pref_order.eq_coe]
lemma is_strictly_worst_makebot (b : σ) (r : pref_order σ) (X : finset σ) :
is_strictly_worst b (makebot r b) X :=
by simp [is_strictly_worst, makebot, P, ← pref_order.eq_coe]
lemma makeabove_above {a b : σ} (r : pref_order σ) (ha : a ≠ b):
P (makeabove r a b) b a :=
by simpa [P, makeabove, ← pref_order.eq_coe, not_or_distrib, ha] using r.refl a
lemma makeabove_above' {a b c : σ} {r : pref_order σ} (hc : c ≠ b) (hr : r a c) :
P (makeabove r a b) b c :=
by simpa [P, makeabove, ← pref_order.eq_coe, hc]
lemma makeabove_below {a b c : σ} {r : pref_order σ} (hc : c ≠ b) (hr : ¬r a c) :
P (makeabove r a b) c b :=
by simpa [P, makeabove, ← pref_order.eq_coe, not_or_distrib, hc]
/-! ### Properties -/
/-- A social welfare function satisfies the Weak Pareto criterion if, for any two social states
`x` and `y`, every individual ranking `x` higher than `y` implies that society ranks `x` higher
than `y`. -/
def weak_pareto (f : (ι → pref_order σ) → pref_order σ) (X : finset σ) : Prop :=
∀ (x y ∈ X) (R : ι → pref_order σ), (∀ i : ι, P (R i) x y) → P (f R) x y
/-- Suppose that for any two social states `x` and `y`, every individual's ordering of `x` and `y`
remains unchanged between two orderings `P₁` and `P₂`. We say that a social welfare function is
*independent of irrelevant alternatives* if society's ordering of `x` and `y` also remains
unchanged between `P₁` and `P₂`. -/
def ind_of_irr_alts (f : (ι → pref_order σ) → pref_order σ) (X : finset σ) : Prop :=
∀ (R R' : ι → pref_order σ) (x y ∈ X),
(∀ i : ι, same_order' (R i) (R' i) x y x y) → same_order' (f R) (f R') x y x y
/-- A social welfare function is a *dictatorship* if there exists an individual who possesses the power
to determine society's ordering of any two social states. -/
def is_dictatorship (f : (ι → pref_order σ) → pref_order σ) (X : finset σ) : Prop :=
∃ i : ι, ∀ (x y ∈ X) (R : ι → pref_order σ), P (R i) x y → P (f R) x y
/-- An individual `i` is *pivotal* with respect to a social welfare function and a social state `b`
if there exist preference orderings `R` and `R'` such that:
(1) all individuals except for `i` rank all social states exactly the same in both orderings
(2) all individuals place `b` in an extremal position in both rankings
(3) `i` ranks `b` bottom of their rankings in `R`, but top of their rankings in `R'`
(4) society ranks `b` bottom of its rankings in `R`, but top of its rankings in `R'` -/
def is_pivotal (f : (ι → pref_order σ) → pref_order σ) (X : finset σ) (i : ι) (b : σ) : Prop :=
∃ (R R' : ι → pref_order σ),
(∀ j : ι, j ≠ i → ∀ x y ∈ X, R j = R' j) ∧
(∀ i : ι, is_extremal b (R i) X) ∧ (∀ i : ι, is_extremal b (R' i) X) ∧
(is_strictly_worst b (R i) X) ∧ (is_strictly_best b (R' i) X) ∧
(is_strictly_worst b (f R) X) ∧ (is_strictly_best b (f R') X)
/-- A social welfare function has a *pivot* with respect to a social state `b` if there exists an
individual who is pivotal with respect to that function and `b`. -/
def has_pivot (f : (ι → pref_order σ) → pref_order σ) (X : finset σ) (b : σ): Prop :=
∃ i, is_pivotal f X i b
/-- An individual is a dictator over all social states in a given set *except* `b`
if they are a dictator over every pair of distinct alternatives not equal to `b`. -/
def is_dictator_except (f : (ι → pref_order σ) → pref_order σ)
(X : finset σ) (i : ι) (b : σ) : Prop :=
∀ a c ∈ X, a ≠ b → c ≠ b → ∀ R : ι → pref_order σ, P (R i) c a → P (f R) c a
variables {R : ι → pref_order σ} {f : (ι → pref_order σ) → pref_order σ}
/-! ### Auxiliary lemmas -/
/-- If every individual ranks a social state `b` at the top of its rankings, then society must also
rank `b` at the top of its rankings. -/
theorem is_strictly_best_of_forall_is_strictly_best (b_in : b ∈ X) (hwp : weak_pareto f X)
(htop : ∀ i, is_strictly_best b (R i) X) :
is_strictly_best b (f R) X :=
λ a a_in hab, hwp b a b_in a_in R $ λ i, htop i a a_in hab
/-- If every individual ranks a social state `b` at the bottom of its rankings, then society must
also rank `b` at the bottom of its rankings. -/
theorem is_strictly_worst_of_forall_is_strictly_worst (b_in : b ∈ X) (hwp : weak_pareto f X)
(hbot : ∀ i, is_strictly_worst b (R i) X) :
is_strictly_worst b (f R) X :=
λ a a_in hab, hwp a b a_in b_in R $ λ i, hbot i a a_in hab
lemma exists_of_not_extremal (hX : 3 ≤ X.card) (hb : b ∈ X) (h : ¬ is_extremal b (f R) X) : -- it may be worth generalizing this; see `not_extremal'` above - Ben
∃ a c ∈ X, a ≠ b ∧ c ≠ b ∧ a ≠ c ∧ f R a b ∧ f R b c :=
begin
obtain ⟨a, c, ha, hc, hab, hcb, hfa, hfc⟩ := not_extremal' (f R).total h,
obtain hac | rfl := ne_or_eq a c, { exact ⟨a, c, ha, hc, hab, hcb, hac, hfa, hfc⟩ },
obtain ⟨d, hd, hda, hdb⟩ := exists_third_distinct_mem hX ha hb hab,
obtain hfd | hfd := (f R).total d b,
{ exact ⟨d, a, hd, hc, hdb, hcb, hda, hfd, hfc⟩ },
{ exact ⟨a, d, ha, hd, hab, hdb, hda.symm, hfa, hfd⟩ },
end
/-! ### The Proof Begins -/
/- Geanakoplos (2005) calls this step the *Extremal Lemma*. If every individual
places alternative `b` in an extremal position (at the very top or bottom of her rankings),
then society must also place alternative `b` in an extremal position. -/
lemma first_step (hwp : weak_pareto f X) (hind : ind_of_irr_alts f X)
(hX : 3 ≤ X.card) (hb : b ∈ X) (hextr : ∀ i, is_extremal b (R i) X) :
is_extremal b (f R) X :=
begin
by_contra hnot,
obtain ⟨a, c, ha, hc, hab, hcb, hac, hfa, hfb⟩ := exists_of_not_extremal hX hb hnot,
have H1 := λ {j} h, makeabove_below hcb.symm ((hextr j).is_strictly_best h a ha hab).2,
have H2 := λ {j} h, makeabove_above' hcb.symm ((hextr j).is_strictly_worst h a ha hab).1,
refine (hwp c a hc ha (λ j, makeabove (R j) a c) (λ j, makeabove_above (R j) hac)).2
((f _).trans (((same_order_iff_same_order' (f R).total (f _).total).2 -- wouldn't it be better just to do everything using `same_order'`? -Ben
(hind R _ a b ha hb (λ j, _))).1.1.1 hfa)
(((same_order_iff_same_order' (f R).total (f _).total).2
(hind R _ b c hb hc (λ j, ⟨⟨λ h, H1 (not_strictly_worst.mpr ⟨c, hc, hcb, nP_of_reverseP h⟩), _⟩,
⟨λ h, H2 (not_strictly_best.mpr ⟨c, hc, hcb, nP_of_reverseP h⟩), _⟩⟩))).1.1.1 hfb)),
{ simp only [same_order', makeabove_noteq' _ a hcb.symm hac, iff_self, and_self] },
all_goals { rintro ⟨-, h⟩, contrapose! h },
{ exact (H2 (not_strictly_best.mpr ⟨c, hc, hcb, h⟩)).1 },
{ exact (H1 (not_strictly_worst.mpr ⟨c, hc, hcb, h⟩)).1 },
end
/-- We define relation `r₂`, a `pref_order` we will use in `second_step`. -/
def r₂ (b : σ) : pref_order σ :=
begin
use λ x y, if y = b then true else if x = b then false else true,
{ intro x, split_ifs; trivial },
{ intros x y, simp only, split_ifs; simp only [true_or, or_true] },
{ intros x y z, simp only, split_ifs; simp only [forall_true_left, forall_false_left] },
end
/- This is an auxiliary lemma used in the `second_step`. -/
lemma second_step_aux [fintype ι] (hwp : weak_pareto f X) (hind : ind_of_irr_alts f X)
(hX : 2 < X.card) (b_in : b ∈ X) {D' : finset ι} :
∀ {R : ι → pref_order σ}, D' = {i ∈ univ | is_strictly_worst b (R i) X} →
(∀ i, is_extremal b (R i) X) → is_strictly_worst b (f R) X → has_pivot f X b :=
begin
refine finset.induction_on D'
(λ R h hextr hbot, absurd (is_strictly_best_of_forall_is_strictly_best b_in hwp (λ j, (hextr j).is_strictly_best _))
(hbot.not_strictly_best (exists_second_distinct_mem hX.le b_in)))
(λ i D hi IH R h_insert hextr hbot, _),
{ simpa using eq_empty_iff_forall_not_mem.mp h.symm j },
{ let R' := λ j, (ite (j = i) (maketop (R j) b) (R j)),
have hextr' : ∀ j, is_extremal b (R' j) X,
{ intro j, simp only [R'],
split_ifs,
{ exact (is_strictly_best_maketop b (R j) X).is_extremal },
{ exact hextr j } },
by_cases hR' : is_strictly_best b (f R') X,
{ refine ⟨i, R, R', λ j hj x y _ _, _, hextr, hextr', _, _, hbot, hR'⟩,
{ simp only [R', if_neg hj] },
{ have : i ∈ {j ∈ univ | is_strictly_worst b (R j) X}, { rw ← h_insert, exact mem_insert_self i D },
simpa },
{ simp only [R', is_strictly_best_maketop, if_pos] } },
{ refine IH (ext (λ j, _)) hextr' ((first_step hwp hind hX b_in hextr').is_strictly_worst hR'),
simp only [true_and, sep_def, mem_filter, mem_univ, R'],
split; intro hj,
{ have hji : j ≠ i, { rintro rfl, exact hi hj },
have : j ∈ {i ∈ univ | is_strictly_worst b ⇑(R i) X}, { convert mem_insert_of_mem hj, rw h_insert },
simpa [hji] },
{ have hji : j ≠ i,
{ rintro rfl,
obtain ⟨a, a_in, hab⟩ := exists_second_distinct_mem hX.le b_in,
simp only [if_pos] at hj,
exact (is_strictly_best_maketop b (R j) X a a_in hab).2 (hj a a_in hab).1 },
rw [← erase_insert hi, h_insert],
simpa [hji] using hj } } },
end
/- In his second step, Geanakoplos shows that for any social state `b`,
there exists an individual who is pivotal over that social state. -/
lemma second_step [fintype ι] (hwp : weak_pareto f X) (hind : ind_of_irr_alts f X)
(hX : 3 ≤ X.card) (b) (b_in : b ∈ X) :
has_pivot f X b :=
have hbot : is_strictly_worst b (r₂ b) X, by simp [is_strictly_worst, r₂, P, ← pref_order.eq_coe],
second_step_aux hwp hind hX b_in rfl (λ i, hbot.is_extremal) $
is_strictly_worst_of_forall_is_strictly_worst b_in hwp $ λ i, hbot
/- Step 3 states that if an individual `is_pivotal` over some alternative `b`, they
are also a dictator over every pair of alternatives not equal to `b`. -/
lemma third_step (hind : ind_of_irr_alts f X)
(b_in : b ∈ X) {i : ι} (i_piv : is_pivotal f X i b) :
is_dictator_except f X i b :=
begin
rintros a c a_in c_in hab hcb Q ⟨-, h⟩,
obtain ⟨R, R', i_piv⟩ := i_piv,
let Q' := λ j, if j = i then makeabove (Q j) a b
else if is_strictly_worst b (R j) X then makebot (Q j) b else maketop (Q j) b,
have Q'bot : ∀ j ≠ i, is_strictly_worst b (R j) X → Q' j = makebot (Q j) b :=
λ j hj hbot, by simp only [Q', if_neg hj, if_pos hbot],
have Q'top : ∀ j ≠ i, ¬is_strictly_worst b (R j) X → Q' j = maketop (Q j) b :=
λ j hj hbot, by simp only [Q', if_neg hj, if_neg hbot],
have Q'above : Q' i = makeabove (Q i) a b := by simp [Q'],
have hQ' : ∀ j, same_order' (Q j) (Q' j) c a c a,
{ intro j,
suffices : ∀ d ≠ b, ∀ e ≠ b, same_order' (Q j) (Q' j) e d e d, from this a hab c hcb,
intros d hdb e heb,
simp only [Q', same_order'],
split_ifs; simp [makeabove_noteq', makebot_noteq', maketop_noteq', hdb, heb] },
rw (hind Q Q' c a c_in a_in hQ').1,
refine P_trans (f Q').trans ((hind R Q' c b c_in b_in _).1.1 (i_piv.2.2.2.2.2.1 c c_in hcb))
((hind R' Q' b a b_in a_in _).1.1 (i_piv.2.2.2.2.2.2 a a_in hab));
intro j; split; split; intro H; rcases eq_or_ne j i with rfl | hj,
{ convert makeabove_below hcb h },
{ convert is_strictly_worst_makebot b (Q j) X c c_in hcb,
apply Q'bot j hj,
unfold is_strictly_worst,
by_contra hbot, push_neg at hbot,
rcases hbot with ⟨d, d_in, hdb, H'⟩,
cases i_piv.2.1 j with hbot htop,
{ exact H' (hbot d d_in hdb) },
{ exact H.2 (htop c c_in hcb).1 } },
{ exact i_piv.2.2.2.1 c c_in hcb },
{ by_contra H',
apply nP_of_reverseP ((is_strictly_best_maketop b (Q j) X) c c_in hcb),
rwa ← Q'top j hj,
exact not_strictly_worst.mpr ⟨c, c_in, hcb, H'⟩ },
{ exact absurd (i_piv.2.2.2.1 c c_in hcb).1 H.2 },
{ convert is_strictly_best_maketop b (Q j) X c c_in hcb,
exact Q'top j hj (λ hbot, H.2 (hbot c c_in hcb).1) },
{ apply absurd (makeabove_below hcb h).1,
convert ← H.2 },
{ by_contra H',
apply absurd (is_strictly_worst_makebot b (Q j) X c c_in hcb).1,
convert ← H.2,
exact Q'bot j hj ((i_piv.2.1 j).is_strictly_worst (not_strictly_best.mpr ⟨c, c_in, hcb, H'⟩)) },
{ convert makeabove_above (Q j) hab },
{ convert is_strictly_best_maketop b (Q j) X a a_in hab,
apply Q'top j hj,
rw i_piv.1 j hj a b a_in b_in,
exact not_strictly_worst.mpr ⟨a, a_in, hab, nP_of_reverseP H⟩ },
{ exact i_piv.2.2.2.2.1 a a_in hab },
{ rw ← i_piv.1 j hj a b a_in b_in,
refine ((i_piv.2.1 j).is_strictly_best (λ hbot, nP_of_reverseP H _)) a a_in hab,
convert is_strictly_worst_makebot b (Q j) X a a_in hab,
exact Q'bot j hj hbot },
{ exact absurd (i_piv.2.2.2.2.1 a a_in hab).1 H.2 },
{ convert is_strictly_worst_makebot b (Q j) X a a_in hab,
apply Q'bot j hj,
rw i_piv.1 j hj a b a_in b_in,
exact (i_piv.2.2.1 j).is_strictly_worst (not_strictly_best.mpr ⟨a, a_in, hab, nP_of_reverseP H⟩) },
{ apply absurd (makeabove_above (Q j) hab).1,
convert ← H.2 },
{ rw ← i_piv.1 j hj a b a_in b_in,
suffices : is_strictly_worst b (R j) X, from this a a_in hab,
by_contra hbot,
apply absurd (is_strictly_best_maketop b (Q j) X a a_in hab).1,
convert ← H.2,
exact Q'top j hj hbot },
end
/- Step 4 states that if an individual is a dictator over every pair of social states
except for `b`, they are also a dictator over all pairs (including `b`). -/
lemma fourth_step (hind : ind_of_irr_alts f X)
(hX : 3 ≤ X.card) (hpiv : ∀ b ∈ X, has_pivot f X b) :
is_dictatorship f X :=
begin
obtain ⟨b, hb⟩ := (card_pos.1 (zero_lt_two.trans hX)).bex,
obtain ⟨i, i_piv⟩ := hpiv b hb,
have h : ∀ a ∈ X, a ≠ b → ∀ Rᵢ : ι → pref_order σ,
(P (Rᵢ i) a b → P (f Rᵢ) a b) ∧ (P (Rᵢ i) b a → P (f Rᵢ) b a), -- is there perhaps a better way to state this? -Ben
{ intros a ha hab Rᵢ,
obtain ⟨c, hc, hca, hcb⟩ := exists_third_distinct_mem hX ha hb hab,
obtain ⟨hac, hbc⟩ := ⟨hca.symm, hcb.symm⟩,
obtain ⟨j, j_piv⟩ := hpiv c hc,
obtain hdict := third_step hind hc j_piv,
obtain rfl : j = i,
{ by_contra hji,
obtain ⟨R, R', hso, hextr, -, -, -, hbot, htop⟩ := i_piv,
refine (htop a ha hab).2 (hdict b a hb ha hbc hac R' _).1,
rw ← hso j hji a b ha hb,
by_contra hnot,
exact (hdict a b ha hb hac hbc R ((hextr j).is_strictly_best
(not_strictly_worst.mpr ⟨a, ha, hab, hnot⟩) a ha hab)).2 (hbot a ha hab).1 },
split; apply hdict; assumption },
refine ⟨i, λ x y hx hy Rᵢ hRᵢ, _⟩,
rcases eq_or_ne b x with rfl | hbx; rcases eq_or_ne b y with rfl | hby,
{ exact (false_of_P_self hRᵢ).elim },
{ exact (h y hy hby.symm Rᵢ).2 hRᵢ },
{ exact (h x hx hbx.symm Rᵢ).1 hRᵢ },
{ exact third_step hind hb i_piv y x hy hx hby.symm hbx.symm Rᵢ hRᵢ },
end
/-- Arrow's Impossibility Theorem: Any social welfare function involving at least three social
states that satisfies WP and IoIA is necessarily a dictatorship. -/
theorem arrow [fintype ι] (hwp : weak_pareto f X) (hind : ind_of_irr_alts f X) (hX : 3 ≤ X.card) :
is_dictatorship f X :=
fourth_step hind hX $ second_step hwp hind hX
function [F,X] = spm_fp_display_density(M,x)
% Quiver plot of flow and equilibrium density
% FORMAT [F,X] = spm_fp_display_density(M,x)
%
% M - model specifying flow; M(1).f;
% x - cell array of domain or support
%
% F - flow
% X - evaluation points
%__________________________________________________________________________
% Copyright (C) 2005-2013 Wellcome Trust Centre for Neuroimaging
% Karl Friston
% $Id: spm_fp_display_density.m 5219 2013-01-29 17:07:07Z spm $
% evaluation points and equilibria
%--------------------------------------------------------------------------
n = length(x);
[M0,q0,X,x,F] = spm_fp(M,x);
% flow fields
%--------------------------------------------------------------------------
for i = 1:n
f(i,:) = F(i,:)/max(eps + abs(F(i,:)));
end
% flow and density
%==========================================================================
f = f';
% eliminate first state if 3-D
%--------------------------------------------------------------------------
if n == 3
q = q0;
q = squeeze(sum(q,1));
q = squeeze(sum(q,1));
[m,j] = max(q);
q0 = squeeze(sum(q0,3));
k = find(X(:,3) == x{3}(j));
X = X(k,[1 2]);
f = f(k,[1 2]);
x = x([1 2]);
end
% thin out arrows for quiver
%--------------------------------------------------------------------------
k = 1;
for i = 1:2
nx = length(x{i});
d = fix(nx/16);
k = kron(k,kron(ones(1,nx/d),sparse(1,1,1,1,d)));
end
k = find(k);
% flow and density
%--------------------------------------------------------------------------
imagesc(x{1},x{2},1 - q0'), hold on
quiver(X(k,1),X(k,2),f(k,1),f(k,2),'r'), hold off
axis square xy
drawnow
You'll need a few packages for this to work. You can install everything you need by running these two commands:
conda install astropy numpy h5py matplotlib tqdm
pip install astro-gala pyia
```python
import sys
from os import path
import warnings
# Third-party
from astropy.table import Table
from astropy.io import fits
import astropy.coordinates as coord
import astropy.units as u
import h5py
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from tqdm import tqdm
import gala.dynamics as gd
import gala.integrate as gi
import gala.potential as gp
from gala.units import galactic
from pyia import GaiaData
```
/Users/adrian/anaconda/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
We'll use a simple model for the Milky Way potential that is implemented in `Gala`, with components for the Galactic disk, bulge, nucleus, and dark matter halo:
```python
mw = gp.MilkyWayPotential()
H = gp.Hamiltonian(mw)
print(mw.keys())
```
odict_keys(['disk', 'bulge', 'nucleus', 'halo'])
We'll need to transform observed positions and velocities to the Galactocentric rest frame. To do that, we have to make assumptions about the solar velocity and position. We'll assume the Sun is at:
$$
\begin{align}
\boldsymbol{r}_{\odot} &= (-8, 0, 0)~{\rm kpc}\\
\boldsymbol{v}_{\odot} &= (11.1, 232.24, 7.25)~{\rm km\,s^{-1}}
\end{align}
$$
(but feel free to play with the definitions if you prefer other values).
```python
rsun = 8 * u.kpc
vsun = [11.1, 232.24, 7.25] * u.km/u.s
```
```python
gc_frame = coord.Galactocentric(galcen_distance=rsun,
galcen_v_sun=coord.CartesianDifferential(*vsun))
```
We next need to load some Gaia data. For now, I'll load a mock (simulated) dataset with DR2-like uncertainties for all stars within 200 pc of the Sun:
```python
g = GaiaData('/Users/adrian/data/GaiaDR2-mock/Gaia-DR2-mock-200pc.fits')
```
We can use this object to get an Astropy sky coordinate object, which has the sky positions, distance (parallax), proper motions, and radial velocity automatically filled:
```python
c = g.skycoord
c
```
<SkyCoord (ICRS): (ra, dec, distance) in (deg, deg, pc)
[(315.07577633, 35.40392509, 197.99079895),
(314.88167156, 35.3907023 , 123.34000397),
(314.86214545, 35.39305834, 148.10754395), ...,
( 45.02060181, -35.32089102, 187.71482849),
( 44.93473399, -35.33031997, 172.88237 ),
( 44.94769253, -35.28921706, 129.5793457 )]
(pm_ra_cosdec, pm_dec, radial_velocity) in (mas / yr, mas / yr, km / s)
[( -8.97676, -44.3531 , -18.882 ), ( 20.8935 , -68.4191 , -11.6318 ),
( 15.4162 , -1.7871 , -6.852 ), ...,
(-58.3823 , -29.8226 , -5.77041), ( -0.30213, 10.4613 , 21.0896 ),
( 86.0874 , -4.13801, 26.4174 )]>
**Note: not all Gaia DR2 stars will have radial velocities, so you may have to filter out those stars here**
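For example, a minimal filtering sketch could look like the cell below. It assumes the `radial_velocity` column is present on the `GaiaData` object, that missing radial velocities show up as NaN, and that `GaiaData` supports boolean-mask indexing; adjust the column name and mask to whatever your actual table contains.

```python
# Hypothetical filter: keep only stars with a measured radial velocity.
# Assumes missing RVs are NaN and that GaiaData supports boolean indexing.
rv_mask = np.isfinite(g.radial_velocity)
g = g[rv_mask]
c = g.skycoord  # re-derive the coordinate object from the filtered sample
```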
Next, we will transform these heliocentric positions/velocities into Galactocentric values, then pass them to a Gala class that handles computing dynamical quantities from the Galactocentric Cartesian phase-space positions:
```python
w = gd.PhaseSpacePosition(c.transform_to(gc_frame).cartesian)
```
With this object, we can do things like compute the angular momentum:
```python
L = w.angular_momentum()
```
```python
L.shape
```
(3, 2799632)
Or, given a model for the gravitational potential, compute the energy of the stellar orbits:
```python
E = H.energy(w)
```
Let's now plot the energy vs. $z$-component of the angular momentum (index 2):
```python
L_unit = u.km/u.s * u.kpc
E_unit = u.km/u.s * u.kpc/u.Myr
fig, ax = plt.subplots(1, 1, figsize=(6, 5))
ax.plot(L[2].to(L_unit).value,
E.to(E_unit).value,
linestyle='none', marker=',', alpha=0.2)
ax.set_xlim(-3000, -500)
ax.set_ylim(-160, -100)
ax.set_xlabel('$L_z$ [{0:latex_inline}]'.format(L_unit))
ax.set_ylabel('$E$ [{0:latex_inline}]'.format(E_unit))
fig.tight_layout()
```
That's a lot of points! Let's instead make a histogram so we can look at the log-density of points:
```python
E_grid = np.linspace(-160, -100, 128)
Lz_grid = np.linspace(-3000, -500, 128)
# use a distinct name so we don't shadow the Hamiltonian object H defined above
H2d, xedg, yedg = np.histogram2d(L[2].to(L_unit).value, E.to(E_unit).value,
                                 bins=(Lz_grid, E_grid))
```
```python
norm = mpl.colors.LogNorm(vmin=1e-1, vmax=1E5)
fig, ax = plt.subplots(1, 1, figsize=(6, 5))
ax.pcolormesh(xedg, yedg, H2d.T, norm=norm, cmap='Blues')
ax.set_xlabel('$L_z$ [{0:latex_inline}]'.format(L_unit))
ax.set_ylabel('$E$ [{0:latex_inline}]'.format(E_unit))
fig.tight_layout()
```
That's a very smooth distribution! The real Galaxy probably won't look like that...but we'll see!
We can also compute other quantities for these stars, like the actions. These are other integrals of motion that are useful because they are adiabatically invariant, and because they are conserved in a static potential (unlike the angular momentum components which can vary if the orbit is not planar and the potential is non-spherical). The problem is that, except in very simple potential models, computing the actions has to be done numerically. There are many algorithms out there for estimating actions (see papers by Jason Sanders, James Binney, Jo Bovy). Here, with Gala, we'll use a method that requires numerically integrating orbits in order to compute the actions. This makes it quite slow to run for millions of stars, but we can at least run for a subset of stars as a demo. In practice, you can parallelize this or run on batches (subsets) of stars.
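As a rough sketch of the batching idea (not part of the workflow below, still serial, and therefore still slow for millions of stars), one could loop over chunks of the phase-space positions like this; the chunk size is an arbitrary placeholder, and the 2 Myr timestep is the value we settle on in the next section:

```python
# Hypothetical batching loop: compute actions for all stars in fixed-size chunks.
# Each chunk could also be dispatched to a separate process for parallelism.
batch_size = 1024
action_list = []
with warnings.catch_warnings(record=True):
    warnings.simplefilter("ignore")
    for start in range(0, w.shape[0], batch_size):
        w_batch = w[start:start + batch_size]
        batch_orbits = mw.integrate_orbit(w_batch, dt=2*u.Myr, t1=0*u.Gyr, t2=5*u.Gyr,
                                          Integrator=gi.DOPRI853Integrator)
        for orbit in batch_orbits.orbit_gen():
            res = gd.actionangle.find_actions(orbit, N_max=8)
            action_list.append(res['actions'])
all_actions_batched = u.Quantity(action_list)
```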
### How do we choose an integration timestep?
One thing we have to choose when numerically estimating the actions is the timestep and length of orbit integration. We'll set the length to 5 Gyr (~20 complete orbits of a sun-like orbit around the Galaxy), and vary the timestep to see if the value of the actions converges:
```python
all_actions = []
dts = [0.1, 0.2, 0.4, 0.8, 1., 2., 4, 8] * u.Myr
with warnings.catch_warnings(record=True):
warnings.simplefilter("ignore")
for dt in tqdm(dts):
orbit = mw.integrate_orbit(w[0], dt=dt, t1=0*u.Gyr, t2=5*u.Gyr,
Integrator=gi.DOPRI853Integrator)
res = gd.actionangle.find_actions(orbit, N_max=8)
all_actions.append(res['actions'])
```
100%|██████████| 8/8 [00:15<00:00, 1.95s/it]
```python
act = u.Quantity(all_actions)
plt.figure(figsize=(6, 5))
for k in range(3):
plt.plot(dts[1:], np.abs((act[1:, k]-act[0, k])/act[0, k]),
label='$J_{0}$'.format(k+1))
plt.xscale('log')
plt.yscale('log')
plt.xlabel('timestep [Myr]')
plt.ylabel('fractional error')
plt.legend(loc='best', fontsize=16)
plt.tight_layout()
```
From this, it looks like we can set the timestep to 2 Myr and only suffer a fractional error of $10^{-5}$. How long does computing the actions for one orbit take?
```python
%%time
orbit = mw.integrate_orbit(w[0], dt=2*u.Myr, t1=0*u.Gyr, t2=5*u.Gyr,
Integrator=gi.DOPRI853Integrator)
res = gd.actionangle.find_actions(orbit, N_max=8)
```
CPU times: user 594 ms, sys: 28.5 ms, total: 623 ms
Wall time: 429 ms
/Users/adrian/projects/gala/build/lib.macosx-10.7-x86_64-3.6/gala/dynamics/actionangle.py:502: UserWarning: More unknowns than equations!
warnings.warn("More unknowns than equations!")
~0.5 seconds! Let's run on a subset (128) of the orbits:
```python
some_w = w[:128]
orbits = mw.integrate_orbit(some_w, dt=1*u.Myr, t1=0*u.Gyr, t2=4*u.Gyr,
Integrator=gi.DOPRI853Integrator)
with warnings.catch_warnings(record=True):
warnings.simplefilter("ignore")
all_actions = []
for orbit in tqdm(orbits.orbit_gen(), total=some_w.shape[0]):
res = gd.actionangle.find_actions(orbit, N_max=8)
all_actions.append(res['actions'])
all_actions = u.Quantity(all_actions)
```
100%|██████████| 128/128 [01:20<00:00, 1.59it/s]
This is a simulated data set, and I don't remember what kind of age-action relations were put in, but we generally expect the vertical action, $J_z$, to increase with stellar age. Let's see if that's the case for these simulated stars:
```python
plt.figure(figsize=(6, 5))
plt.scatter(g.age[:some_w.shape[0]],
all_actions[:, 2].to(L_unit))
plt.xscale('log')
plt.yscale('log')
plt.xlim(1E-2, 20)
plt.ylim(1E-4, 1e2)
plt.xlabel('age [Gyr]')
plt.ylabel('$J_z$ [{0:latex_inline}]'.format(L_unit))
```
Older stars definitely tend to have larger values of vertical action! Another way to look at this is by looking at the maximum height a star reaches above the plane, $|z_{\rm max}|$:
```python
plt.figure(figsize=(6, 5))
plt.scatter(g.age[:some_w.shape[0]],
orbits.zmax(approximate=True).to(u.pc).value)
plt.xscale('log')
plt.yscale('log')
plt.xlim(1E-2, 20)
plt.ylim(10, 2e3)
plt.xlabel('age [Gyr]')
plt.ylabel(r'$\left|z_{\rm max}\right|$ ' + '[{0:latex_inline}]'.format(u.pc))
```
lemma path_image_part_circlepath': "path_image (part_circlepath z r s t) = (\<lambda>x. z + r * cis x) ` closed_segment s t"
\chapter{Evaluation}
Evaluation of the Laha framework involves deploying reference Laha-compliant DSNs, validating the data collected from the reference implementations, and then comparing and contrasting various metrics for each of the proposed goals. Metrics will be collected during a set of experiments for each of the Laha reference implementations in early 2019.
The following sections describe my plans for deployment of reference implementations, data validation, evaluating the main goals of the Laha framework, and evaluating the tertiary goals of the Laha framework.
\section{Deploy Laha reference implementations on test sites}
In Q4 2018, 10 to 20 Laha-compliant OPQBoxes will be deployed on the University of Hawaii at Manoa's power microgrid. Using a provided blueprint of the microgrid as a guide and collaborating with the Office of Energy Management, these sensors will be placed strategically with the hopes of observing PQ signals on the same line, PQ signals generated from intermittent renewables, local PQ signals, global PQ signals, and PQ signals near sensitive lab electronics. Many of these sensors will be co-located with industry standard PQ monitoring systems. The industry standard sensors provide both ground truth and a means of comparison between a Laha-designed network and a non-Laha-designed network.
In Q4 2018, 20 to 30 Laha-compliant Lokahi sensors will be deployed near and around the Infrasound Laboratory in Kailua-Kona on the Big Island of Hawaii. These sensors will be placed strategically around a calibrated infrasound source. The sensors will be placed with the assistance of Dr. Milton Garces to ensure that I can target sensors at different distances by tuning the amplitude and frequencies of the infrasound signal. In this way, I know which devices should or should not have received the signal.
\section{Validate data collected by Laha deployment}
Beginning in Q1 2019, I will begin validating the data collected from both the OPQ network and the Lokahi network.
Data will be validated in the OPQ network by comparing detected and classified signals against industry standard meters that are co-located with our sensors. Data validation will be an autonomous process that validates signals and trends seen in both the industry sensors and the OPQ sensors. Data validation will provide metrics for signals and trends that the reference sensors observed but OPQ sensors did not (false negatives) as well as signals that the OPQ sensors observed and the reference sensors did not (false positives). Specifically, I will be looking to compare long term trends (voltage, frequency, and THD readings over a time period of days) as well as more transient signals of interest (i.e. voltage sags/swells, frequency variations, excessive THD, and outages).
Data from the Lokahi network will be validated against industry standard infrasound sensors. I also control the amplitude and frequency of the signals generated from the calibrated infrasound source and can use geophysical equations to predict which sensors should or should not have seen an infrasonic signal. Data validation is autonomous for this network as well. Similar to the OPQ network, I will be collecting metrics on false positives and false negatives as compared to the reference sensors.
Data validation for both networks will continue for all data collection until the end of the project.
\section{Use Laha deployments to evaluate the main goals of the framework}
The Laha deployments for both OPQ and Lokahi will be used to evaluate each of the main goals this framework claims to provide: first, that Laha is a generally useful framework representation for DSNs; second, that Laha provides the ability to turn primitive sensor data into actionable data and insights; and third, that Laha's tiered management of sensor data provides maximum bounds for storage requirements and graceful degradation of DSN performance.
Each deployment requires different techniques for performing evaluation.
In the OPQ deployment, OPQBoxes are deployed and co-located with industry standard, calibrated reference sensors. Each of these sensors costs thousands of dollars to obtain and install, collects all the data all the time, and can only be connected to the power main as it enters a building. These sensors provide a means for verifying signals received or not received by OPQ, as well as confirming long term trend data. I have been provided access to these sensors and stored data via the Office of Energy Management at UH Manoa. The data is accessible via an HTTP API. The Office of Energy Management at UH Manoa has also provided the full schematics for the UH power grid. This will be used as a ground truth for topology estimates and distributed signal analysis. OPQBoxes are placed in strategic locations on the UH Manoa campus specifically in order to evaluate the distributed nature of PQ signals. For example, OPQBoxes are placed on the same electrical lines as well as separate electrical lines to observe how PQ signals travel through an electrical grid.
In the Lokahi deployment, I have the opportunity to generate infrasound signals using a calibrated infrasound source \cite{park2009rotary}. The source can be tuned to produce infrasound at configurable frequencies and amplitudes. The source works by attaching a variable-pitch propeller to an electric motor that can be driven by a waveform generator. The source can generate signals that can be observed at large standoff distances, over tens of kilometers. Similar to the OPQ deployment, sensors within the Lokahi deployment will be co-located with industry standard, calibrated infrasound sensors. These sensors can provide a metric of signals that were correctly observed, incorrectly observed, or not observed at all by the Lokahi deployment. Further, infrasound itself is characterized quite well by various geophysical equations. These equations can be used to predict if sensors deployed in the Lokahi deployment are likely to observe generated infrasound signals.
Evaluation of the main goals of the framework is provided in the following sections.
\subsection{Evaluation of the Generality of this Framework}
I claim that the Laha framework is useful and general enough to be applied to DSNs in different domains. To test this, I will design, develop, and deploy two DSNs. The first, OPQ, measures distributed PQ signals on the electrical grid. The second, Lokahi, observes infrasound signals traveling through the atmosphere.
To evaluate the generality of the Laha design, I will provide metrics for whether or not each deployment is able to fulfill the goals of the given network.
I expect the PQ network, OPQ, to be able to detect and classify common PQ issues. I expect OPQ to observe voltage dips, voltage swells, frequency dips, frequency swells, transients, and high levels of THD. A count of these signals will be kept and compared against industry standard PQ meters co-located with each sensor. By comparing these signals to the ground truth, we will be able to tabulate a number of false positives and false negatives. In order to be considered effective, OPQ should be able to classify each of these common PQ signals and collect a set of each while maintaining a low number of false positives and false negatives as compared to the industry standard sensors. In general, a negative result here would be not being able to detect PQ signals of a specific type or having a high number of false positives or false negatives.
Further, another stated goal of OPQ is to detect and classify distributed PQ incidents. That is, PQ signals that are observed by more than one sensor in situations where OPQ sensors are not co-located. First, I will evaluate if OPQ is capable of detecting distributed PQ signals. I expect OPQ to at least observe one distributed signal during the test deployment, but would not be surprised to see many. By working with the Office for Energy Management at UH Manoa, I will use a list of known PQ source events along with signals collected by OPQ and the industry standard sensors to provide a list of false positives and false negatives for the number of distributed PQ incidents observed by OPQ.
I expect the infrasound network, Lokahi, to be able to securely detect and report on infrasound incidents from a large collection of heterogeneous smartphone based infrasound sensors. This network prioritizes availability and security even in the face of network issues or no network at all. I claim that Laha is a useful framework for a DSN such as this and will evaluate if Laha is able to meet the goals of this network.
To evaluate the effectiveness of Laha as implemented by Lokahi, I will deploy 50 heterogeneous Lokahi smartphone sensors at predetermined distances from a calibrated infrasound source. I will then use the calibrated infrasound source to generate infrasound signals of different amplitudes and frequencies. While signals are being generated, I will disable network access for the sensors to simulate real life network drop outs of sensors. I will disable the networks for time periods of 1 minute, 30 minutes, and 1 hour.
I will then, for each sensor, calculate the number of false positives and false negatives for detections of infrasound signals. In order for Laha to be a useful framework for Lokahi, Lokahi must demonstrate that not only can it detect infrasound signals at different frequencies and amplitudes, but it must also do this while maintaining a low number of false positives or false negatives.
Further, as availability is a major priority of this network, network outages must be handled without signal loss. To evaluate this goal, I will measure the number of false negatives (or missed signals) due to Laha's data management and its interplay with network outages. I would expect that if Lokahi implements the framework correctly, we should not see a rise in false negatives. A negative result would be an increase in false negatives.
Finally, backed by the metrics for both deployments, I will provide a critical discussion on what types of DSNs Laha is well suited for and what types of DSNs Laha is not well suited for. This will include a discussion on which parts of the Laha design are useful or a detriment to a given goal of the DSN.
The following sections continue to discuss the evaluation strategies required to show that Laha is a generally useful representation for a DSN.
\subsection{Evaluation of Converting Primitive Data into Actionable Insights}
An important goal of any DSN is to convert primitive sensor data into actionable insights. This is generally accomplished by adding some kind of context associated with the data such as classifications of a signal or linking the data with other data by comparing similarities in time, space, or other physical features.
I claim that Laha's use of Actors acting on and moving data between levels in the Laha hierarchy provides a useful and generic approach to systematically adding context to data as it moves through the framework. Laha is designed with a specific number of levels where data within each level shares the same type. In each deployment, I will evaluate the usefulness of each level with regards to adding context to the data.
An early approach to organizing data for contextualization is the Data Grid project\cite{chervenak2000data}, which proposed two services needed for building higher level abstractions: storage systems and metadata management. This framework provided the context on top of data needed to easily build replication services for the data, which was important since one of the major goals of this framework was data availability and policy management. Data Grid also maintains data uniformity and does not allow complex schemas. Data Grid does not provide a mechanism for discarding noisy data. Laha differs from Data Grid by providing support for complex metadata schemas, focusing on data reduction strategies, and providing more support for deriving context. A more recent paper from Wu et al.\cite{wu2014data} presents the HACE framework, which is designed for applying context to Big Data by making integration with other data sources and data fusion first class members of the framework. This paper also examines algorithms for mining complex and dynamic data, such as those generated from sensor networks. Laha differs from HACE by using a tiered approach to manage data volume while still hopefully generating actionable insights.
In both deployments, I will evaluate the number of false negatives for incident classification. Each level in the framework is responsible for not only adding context, but deciding if data should be moved upward through the levels, adding more context along the way, or discarding data because a level does not think the data is ``interesting". I will keep track of the number of false negatives and which level was responsible for discarding the data with the signal. Using this approach, I will evaluate the effectiveness of each level to determine which levels correctly identify signals and which levels do not correctly identify signals, thus discarding the data.
In order to be useful, I expect each level to add context to the data while maintaining a low level of false negatives.
Using these metrics, I will provide a discussion on which domains a leveled approach may work well for versus which domains a leveled approach might not provide useful benefits.
I claim that Laha is able to provide even more context and actionable insights by implementing a level called Phenomena. Phenomena utilize predictive analytics to provide context and actionable insights over the sensor domain. First, I will evaluate if Phenomena take place in practice for both of the Laha deployments.
To evaluate Phenomena in the OPQ network, OPQ must observe a cyclical incident such as voltage swells occurring every afternoon due to solar output or an electric motor turning on at the same time every day. Once a cyclical incident is observed, OPQ must correctly create predictive Phenomena that predict the same incident happening in the future. Assuming predictive Phenomena are created, I will measure the number of false positives and false negatives on whether the predictions were correct or not. A positive result would show that not only is OPQ capable of making predictive Phenomena, but also that a high percentage (> 50\%) of the predictions are correct.
Evaluation of predictive Phenomena in the Lokahi infrasound network will follow a similar strategy. However, since I can control the infrasound source, I can actually run an experiment that creates cyclical and non-cyclical signals. I will then test Lokahi's ability to not only create predictive Phenomena, but also show that the predictions are accurate, that is, greater than 50\% of them are correct.
A negative result would be that either of the networks is not able to create predictive Phenomena, or that there is a large number of false positives or false negatives (combining for less than 50\% prediction accuracy).
Adding context to classified Incidents is the act of providing a statistical likelihood of the underlying cause of the Incident. This includes things like showing that a voltage sag is caused by turning on the dryer every day at 2PM, or identifying an infrasound signal as a repetitive flight pattern near an airport. Context is provided by sources external to the DSN (such as users) or by performing data fusion with other correlating data sets.
Evaluating contextualized events consists of setting up experiments where I assign context for a specific set of signals and resulting Incidents, then testing to see if Phenomena are able to correctly apply context to Incidents when the same signals are generated again. I will record the number of false positives and false negatives for assigning context to Incidents.
A positive result would be to see the correct context applied to Incidents more than half of the time. That is, I expect context to be applied correctly to more than 50\% of Incidents for which context has been previously defined.
I expect to see contextualization work better in DSNs where signals provide more measures for discrimination. For example, PQ networks contain many different types of classified PQ signals, but there is only a small subset of causes attributed to each type of PQ signal classification. This decreases Laha's search space and in theory should make it easier to provide context.
\subsection{Evaluation of Tiered Management of Big Data}\label{eval-big-data}
The goal of tiered management of Big Data is to add a mechanism that provides a maximum bound on the storage requirements of sensor data at each level in the Laha hierarchy while simultaneously reducing sensor noise as Laha Actors move ``interesting" data upwards. This in turn should decrease the number of false positives since forwarded data is more likely to include signals of interest and less likely to be sensor noise.
Other approaches to Big Data management include compression\cite{tang2004compression} or storage systems where the goal is to have a distributed file system and move data close to where it is being processed, such as the Hadoop Distributed File System\cite{warrier2007much}. Other systems such as NiFi\cite{hughes2016survey} provide a nice interface for ingestion and movement of data between Big Data tools while also providing data provenance, but do not go far enough in focusing on data reduction and graceful degradation. Carney et al.\cite{carney2002monitoring} discuss how monitoring applications require management and clean up of stale sensor data.
It's not yet known if I will see a decrease in false negatives. On the one hand, it's possible that Laha will throw away data that did contain signals of interest. In this case, detection or classification Actors will not observe the signals because the data has been discarded leading to increased false negatives. On the other hand, by reducing false positives and increasing the signal-to-noise ratio as data moves upward, Phenomena has a better chance of optimizing triggering, detection, and classification which may in turn inform Laha to save data that would have been previously thrown away. In this way, it's possible that Laha will reduce false negatives.
I will evaluate the number of false positives and false negatives in detections, classifications, and Phenomena compared against industry standard reference sensors. A positive outcome for this metric would be a reduction in both false positives and false negatives compared to an approach that does not use tiered data management. A negative result would be an increase in either false positives or false negatives.
During the acquisition and curating of data, metrics will be collected and stored about how much data is saved (in bytes) versus how much data is discarded at each level within the Laha data hierarchy. These numbers will be compared against data storage as if the OPQ and Lokahi frameworks were to take a ``store everything" approach. Evaluation metrics provided will include percentage of data storage saved per data hierarchy level as well as an estimate of overall decrease in data storage requirements for the entire DSN. A positive result from these metrics would show significant reduction in storage requirements for each level in the framework compared against a ``store everything" approach and other state-of-the-art data storage solutions.
I will also provide metrics on ``continuous storage pressure", which is a measure of the average amount of data storage required at each level given the current state of the network. That is, since each of the lower levels of the framework assigns a TTL to the data within its collection, the collection will exhibit a constant data pressure during sensor data collection. For example, at the lowest level, the IML collects raw data from all sensors all the time. Given the sample rate per sensor, the size per sample, the number of sensors, and a known TTL for this level, I can estimate the maximum bound on the data storage that the IML requires. Similar estimation strategies can be developed for the higher levels of the framework. I will compute the statistical error between the predicted storage pressure and the actual storage pressure recorded during the experiments. A positive outcome would show strong correlation between the predicted storage pressure and the actual storage pressure. A negative outcome would show weak correlation between the predicted and actual values.
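To make the IML estimate concrete, a back-of-the-envelope bound (using illustrative symbols and numbers that are not measurements from either deployment) is
\[
S_{\mathrm{IML}} \;\le\; N_{\mathrm{sensors}} \cdot r \cdot s \cdot \mathrm{TTL}_{\mathrm{IML}},
\]
where $N_{\mathrm{sensors}}$ is the number of sensors, $r$ is the per-sensor sample rate, $s$ is the size of a single sample in bytes, and $\mathrm{TTL}_{\mathrm{IML}}$ is the time-to-live assigned to data at the IML. For example, 20 sensors sampling at 12,000 samples per second with 2 bytes per sample and a 15 minute TTL would be bounded by $20 \cdot 12000 \cdot 2 \cdot 900 \approx 432$~MB of raw data at any given time.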
Finally, I will provide an evaluation that weighs the results of all three metrics against each other. For example, if I see positive results for data storage reduction and negative results for false positives, do the benefits of the data storage reduction outweigh the negatives of increased false positives?
I would expect that DSNs that have a lower signal-to-noise ratio will see greater benefits from tiered data management than DSNs that already have a decent signal-to-noise ratio.
\section{Evaluation of Tertiary Goals}
In order to achieve the main goals of this framework, I claim that either all or a subset of the following tertiary goals must be fulfilled: optimization of triggering, detection, classification, sensor energy usage, and bandwidth; predictive analytics; and the ability to derive models of the underlying sensing field topology.
To evaluate these tertiary goals, I will select and implement DSN optimization techniques from current literature. I will then compare and contrast the usefulness of different techniques and discuss how each of these techniques perform in the different sensor domains.
Finally, I will discuss how each of these tertiary goals makes progress towards the overall goals of this framework.
\subsection{Evaluation of Adaptive Optimizations for Triggering}
Triggering is the act of observing a feature extracted data stream for interesting features and triggering sensors to provide raw data for a requested time window for higher level analysis. Adaptively optimizing triggering is a way to tune triggering algorithms and parameters with the aim of decreasing false positives and false negatives. In this context, a false positive is triggering on a data stream that does not contain a signal of interest and a false negative is not triggering on a data stream that does contain a signal of interest.
Adaptive triggering is only useful in networks that utilize triggering. Specifically, this technique cannot be applied to DSNs that take a ``collect everything all the time" approach.
Triggering can also have significant impacts on overall sensor power requirements and DSN bandwidth requirements. Many of the triggering optimization algorithms present in the literature exist to minimize sensor energy requirements and bandwidth requirements. This is addressed in great detail in the literature review by Anastasi et al. \cite{anastasi_energy_2009}. This is accomplished by reducing communications between sensor nodes and the sink. It's argued in \cite{pottie2000wireless} that transmitting a single bit of information from a sensor costs approximately the same as running 1000 operations on that sensor. However, there is some contention on this topic, as \cite{alippi_adaptive_2010} argues that in some modern sensors computational requirements can equal or eclipse those of sensor communication.
Even if a DSN utilizes triggering, it's not clear that adaptive triggering even takes place. The first question I will evaluate is, does adaptive optimization of triggering take place at all given the domain of the DSN? That is, does the nature of the underlying sensor field contribute to optimization of triggering? I will compare if and how optimizations take place in the two reference networks for the domains of PQ and infrasound.
In order to evaluate triggering efficiency within our Laha deployments, Laha will only adaptively modify triggering for half of the devices in the OPQ deployment. In the Lokahi deployment, I will run the same experiment twice. The first run will not optimize triggering and the second run will optimize triggering.
Once the experiments have been run, I will first determine if optimization of triggering has occurred, and if it did, compare the number of false negatives and false positives against the runs that did not use optimized triggering or where optimization did not occur.
I hope to show that a side effect of Laha's optimized triggering is reduced bandwidth and sensor energy requirements. To this end, I will calculate metrics for total data sent and received at the sink node of each network for each device in the network. A positive result would show decreased bandwidth usage for devices that utilize optimized triggering. A negative result would show similar or more bandwidth usage for devices that utilize optimized triggering.
I further hope to show that another benefit of Laha's optimized triggering is reduced sensor energy requirements. The evaluation for this metric will occur with the Lokahi network, where sensors can be dependent on batteries. I will run two experiments. For each experiment, all sensors will be charged to a battery level of 100\%. In the first experiment, I will not utilize optimized triggering. In the second experiment, I will utilize optimized triggering. In both experiments, I will measure the final battery level after the experiment and also measure how quickly the battery depletes for each sensor. This is possible because data in the Lokahi network contains timestamped entries with battery levels.
\subsection{Evaluation of Adaptive Optimizations for Detection and Classifications}
Detections occur when triggering observes something ``interesting" in the feature-extracted data stream. A Detection is a contiguous window of raw sensor data that was requested by triggering and that may or may not contain signals of interest. Optimizing Detections involves tuning and trimming the detection window sizes to increase the signal-to-noise ratio of each window. Fine-grained features are then computed by Detection Actors and moved to the Incidents level, where classification of signals takes place. Optimizing classifications for Incidents involves tuning parameter sets for the underlying classification algorithms.
Predictive and Locality Phenomena as well as topology optimizations will be used to provide optimizations to the Detections and Incidents levels.
Evaluation of adaptive optimizations for detection and classification within the Laha network will be conducted differently for each Laha deployment.
In the Lokahi deployment, I will control the production of infrasound signals using the available infrasound source. I will run two experiments, where the amplitudes and frequencies of the signals are the same and the locations of the devices remain invariant. In the first experiment, Laha will not use optimized detection or classification provided by Phenomena. In the second experiment, Laha will use optimized detection and classification techniques provided by Phenomena.
With known frequencies and amplitudes of the infrasound signals, I can compare the rate of detections and classifications between the optimized and unoptimized experimental runs. I expect to see a greater number of and more accurate detections and classifications from the optimized experiment.
In the OPQ deployment, I will compare the same metrics as the Lokahi deployment, but instead of controlling the source signal, I will co-locate OPQBoxes. In each pair of co-located OPQBoxes, one will be analyzed using Phenomena optimized detection and classification algorithms and the other will be analyzed using unoptimized detection and classification algorithms.
I will collect and evaluate the number of false positives and false negatives for Incidents generated with optimization and without optimization. A positive outcome would include a decrease in either false positives, false negatives, or both. A negative result would be an increase in either or both false positives or false negatives.
I will also calculate the signal-to-noise ratio in Detections to determine if optimization of detections is working. A positive outcome would be an increase in the signal-to-noise ratio, and a negative outcome would be a similar or decreased signal-to-noise ratio.
\subsection{Evaluation of Model of Underlying Sensor Field Topology}
Laha should be able to build a model of the underlying sensing field topology. This is not the topology of the physical layout of the sensors (this is generally already known a priori or by collecting location information), but rather the topology by which signals travel. For example, in a PQ network the topology is the physical power grid and switches that PQ signals travel through. In an infrasound network, the topology is the atmosphere through which sound waves travel. Laha aims to build a statistical model of the distances between sensors according to the topology of the sensing field by observing recurrent incidents over time. This can perhaps shed some light on understanding the topology of a sensing field without knowing anything about it beforehand.
Much of the literature on topology management is written to decrease sensor energy requirements by exploiting the density of sensors within a sensing field topology. For example, the ASCENT\cite{cerpa2004ascent} framework provides adaptive self configuring sensors that exploit topology denseness to decrease sensor energy usage. Several other frameworks have been designed with the same goal of reducing energy usage by exploiting topology\cite{schurgers2002stem},\cite{schurgers2002topology}.
To evaluate the model of the sensing field topology, I will take a different approach for each Laha deployment. In both deployments, the sensing field topology is known beforehand to provide a ground truth. I will then compare Laha's computed signal distance between sensors to the actual signal distance between sensors as provided by the ground truths.
In the Lokahi deployment, sensors will be strategically placed at different distances from an infrasound source. Some sensors will be close to each other geographically, but separated by terrain that infrasound signals will not easily travel through. By moving the infrasound source, I can expect to see infrasound signals arriving or not arriving at the sensors depending on the source and direction of the signal along with the physical features of the land. By performing multiple experiments, I hope to evaluate the model of the physical environment topology that Laha has built. I will compare Laha's model to the known topology and provide a statistical error analysis.
In the OPQ deployment, sensors will be strategically placed on like and unlike electrical lines to observe how distributed PQ signals move through a power grid. In this deployment, Laha will build a topology model that doesn't show physical geographic distance between sensors, but instead will build a model of the electrical distance between sensors. This data will be evaluated by comparing the electrical distances found by the Laha model to the actual UH power grid as referenced by the schematic provided by the Office of Energy Management at UH Manoa. A statistical error analysis of the differences between electrical distances between the model and the schematic will be provided as an evaluation metric.
A positive outcome would be to show that there is high correlation between the Laha signal distances and the ground truth distances. A negative outcome would show low correlation.
Assuming high correlation and a statistical model of the sensing field, I would like to evaluate if Laha is able to use this information to optimize triggering, classification, or predictive analytics. In order to evaluate this, I will collect the number of false positives and false negatives at all levels in the Laha hierarchy while optimizing from topology and without optimizing from topology. I expect to see fewer false positives and fewer false negatives when utilizing topology optimizations. A negative result would be a larger number of false positives or false negatives.
I expect to only see results in networks where signals travel fast enough to create a statistical difference between arrival times at the various sensors. In sensing fields where signals travel slowly and uniformly (i.e. a temperature collection DSN), it may be more difficult or impossible to actually determine the sensing field topology.
[STATEMENT]
lemma less_eq_refl:
fixes x :: domainNameDept
shows "x \<le> y \<Longrightarrow> y \<le> z \<Longrightarrow> x \<le> z"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>x \<le> y; y \<le> z\<rbrakk> \<Longrightarrow> x \<le> z
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<lbrakk>x \<le> y; y \<le> z\<rbrakk> \<Longrightarrow> x \<le> z
[PROOF STEP]
have "x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z
[PROOF STEP]
(*induction over prems and conclusion*)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z
[PROOF STEP]
proof(induction z arbitrary:x y)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. \<And>x1 z x y. (\<And>x y. x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z) \<Longrightarrow> x \<le> y \<longrightarrow> y \<le> x1 -- z \<longrightarrow> x \<le> x1 -- z
2. \<And>x y. x \<le> y \<longrightarrow> y \<le> Leaf \<longrightarrow> x \<le> Leaf
[PROOF STEP]
case Leaf
[PROOF STATE]
proof (state)
this:
goal (2 subgoals):
1. \<And>x1 z x y. (\<And>x y. x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z) \<Longrightarrow> x \<le> y \<longrightarrow> y \<le> x1 -- z \<longrightarrow> x \<le> x1 -- z
2. \<And>x y. x \<le> y \<longrightarrow> y \<le> Leaf \<longrightarrow> x \<le> Leaf
[PROOF STEP]
have "x \<le> Leaf"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x \<le> Leaf
[PROOF STEP]
using Leaf_Top
[PROOF STATE]
proof (prove)
using this:
?a \<le> Leaf
goal (1 subgoal):
1. x \<le> Leaf
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
x \<le> Leaf
goal (2 subgoals):
1. \<And>x1 z x y. (\<And>x y. x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z) \<Longrightarrow> x \<le> y \<longrightarrow> y \<le> x1 -- z \<longrightarrow> x \<le> x1 -- z
2. \<And>x y. x \<le> y \<longrightarrow> y \<le> Leaf \<longrightarrow> x \<le> Leaf
[PROOF STEP]
thus ?case
[PROOF STATE]
proof (prove)
using this:
x \<le> Leaf
goal (1 subgoal):
1. x \<le> y \<longrightarrow> y \<le> Leaf \<longrightarrow> x \<le> Leaf
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
x \<le> y \<longrightarrow> y \<le> Leaf \<longrightarrow> x \<le> Leaf
goal (1 subgoal):
1. \<And>x1 z x y. (\<And>x y. x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z) \<Longrightarrow> x \<le> y \<longrightarrow> y \<le> x1 -- z \<longrightarrow> x \<le> x1 -- z
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>x1 z x y. (\<And>x y. x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z) \<Longrightarrow> x \<le> y \<longrightarrow> y \<le> x1 -- z \<longrightarrow> x \<le> x1 -- z
[PROOF STEP]
case (Dept zn zns)
[PROOF STATE]
proof (state)
this:
?x \<le> ?y \<longrightarrow> ?y \<le> zns \<longrightarrow> ?x \<le> zns
goal (1 subgoal):
1. \<And>x1 z x y. (\<And>x y. x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z) \<Longrightarrow> x \<le> y \<longrightarrow> y \<le> x1 -- z \<longrightarrow> x \<le> x1 -- z
[PROOF STEP]
show ?case
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. x \<le> y \<longrightarrow> y \<le> zn -- zns \<longrightarrow> x \<le> zn -- zns
[PROOF STEP]
proof(clarify)
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<lbrakk>x \<le> y; y \<le> zn -- zns\<rbrakk> \<Longrightarrow> x \<le> zn -- zns
[PROOF STEP]
assume a1: "x \<le> y" and a2: "y \<le> zn--zns"
[PROOF STATE]
proof (state)
this:
x \<le> y
y \<le> zn -- zns
goal (1 subgoal):
1. \<lbrakk>x \<le> y; y \<le> zn -- zns\<rbrakk> \<Longrightarrow> x \<le> zn -- zns
[PROOF STEP]
from unfold_dmain_leq[OF a2]
[PROOF STATE]
proof (chain)
picking this:
\<exists>yns. y = zn -- yns \<and> yns \<le> zns
[PROOF STEP]
obtain yns where y1: "y = zn--yns" and y2: "yns \<le> zns"
[PROOF STATE]
proof (prove)
using this:
\<exists>yns. y = zn -- yns \<and> yns \<le> zns
goal (1 subgoal):
1. (\<And>yns. \<lbrakk>y = zn -- yns; yns \<le> zns\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
y = zn -- yns
yns \<le> zns
goal (1 subgoal):
1. \<lbrakk>x \<le> y; y \<le> zn -- zns\<rbrakk> \<Longrightarrow> x \<le> zn -- zns
[PROOF STEP]
from unfold_dmain_leq this a1
[PROOF STATE]
proof (chain)
picking this:
?y \<le> ?zn -- ?zns \<Longrightarrow> \<exists>yns. ?y = ?zn -- yns \<and> yns \<le> ?zns
y = zn -- yns
yns \<le> zns
x \<le> y
[PROOF STEP]
obtain xns where x1: "x = zn -- xns" and x2: "xns \<le> yns"
[PROOF STATE]
proof (prove)
using this:
?y \<le> ?zn -- ?zns \<Longrightarrow> \<exists>yns. ?y = ?zn -- yns \<and> yns \<le> ?zns
y = zn -- yns
yns \<le> zns
x \<le> y
goal (1 subgoal):
1. (\<And>xns. \<lbrakk>x = zn -- xns; xns \<le> yns\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
x = zn -- xns
xns \<le> yns
goal (1 subgoal):
1. \<lbrakk>x \<le> y; y \<le> zn -- zns\<rbrakk> \<Longrightarrow> x \<le> zn -- zns
[PROOF STEP]
from Dept y2 x2
[PROOF STATE]
proof (chain)
picking this:
?x \<le> ?y \<longrightarrow> ?y \<le> zns \<longrightarrow> ?x \<le> zns
yns \<le> zns
xns \<le> yns
[PROOF STEP]
have "xns \<le> zns"
[PROOF STATE]
proof (prove)
using this:
?x \<le> ?y \<longrightarrow> ?y \<le> zns \<longrightarrow> ?x \<le> zns
yns \<le> zns
xns \<le> yns
goal (1 subgoal):
1. xns \<le> zns
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
xns \<le> zns
goal (1 subgoal):
1. \<lbrakk>x \<le> y; y \<le> zn -- zns\<rbrakk> \<Longrightarrow> x \<le> zn -- zns
[PROOF STEP]
from this x1
[PROOF STATE]
proof (chain)
picking this:
xns \<le> zns
x = zn -- xns
[PROOF STEP]
show "x \<le> zn--zns"
[PROOF STATE]
proof (prove)
using this:
xns \<le> zns
x = zn -- xns
goal (1 subgoal):
1. x \<le> zn -- zns
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
x \<le> zn -- zns
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
x \<le> y \<longrightarrow> y \<le> zn -- zns \<longrightarrow> x \<le> zn -- zns
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z
goal (1 subgoal):
1. \<lbrakk>x \<le> y; y \<le> z\<rbrakk> \<Longrightarrow> x \<le> z
[PROOF STEP]
thus "x \<le> y \<Longrightarrow> y \<le> z \<Longrightarrow> x \<le> z"
[PROOF STATE]
proof (prove)
using this:
x \<le> y \<longrightarrow> y \<le> z \<longrightarrow> x \<le> z
goal (1 subgoal):
1. \<lbrakk>x \<le> y; y \<le> z\<rbrakk> \<Longrightarrow> x \<le> z
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
\<lbrakk>x \<le> y; y \<le> z\<rbrakk> \<Longrightarrow> x \<le> z
goal:
No subgoals!
[PROOF STEP]
qed |
%Author : Athi Narayanan S
%M.E, Embedded Systems,
%K.S.R College of Engineering
%Erode, Tamil Nadu, India.
%http://sites.google.com/site/athisnarayanan/
function q5=mat_dec(data1,blkx)
% MAT_DEC  Decompose the matrix data1 into non-overlapping blkx-by-blkx blocks.
%   q5(s,k,i,j) holds element (s,k) of the block in block-row i, block-column j,
%   i.e. q5(:,:,i,j) = data1((i-1)*blkx+1:i*blkx, (j-1)*blkx+1:j*blkx).
[m,n]=size(data1);
r3=m/blkx; c3=n/blkx;   % number of block rows and block columns
q4=0; q1=0;             % running row and column offsets of the current block
for i=1:r3
    for j=1:c3
        for s=1:blkx
            for k=1:blkx
                p3=s+q4;    % row index into data1
                q2=k+q1;    % column index into data1
                q5(s,k,i,j)=data1(p3,q2);
            end
        end
        q1=q1+blkx;         % advance to the next block column
    end
    q4=q4+blkx; q1=0;       % next block row, reset the column offset
end
|
% $Id: gltools.tex,v 2.16 2003/03/28 16:47:48 fuhrmann Exp $
\documentclass[a4paper]{article}
\pagestyle{headings}
\usepackage{html}
\usepackage{epsfig}
\usepackage{verbatim}
\newcommand{\href}[2]{\htmladdnormallinkfoot{#1}{#2}}
\newcommand{\xref}[1]{{\tt #1} (\ref{func:#1})}
\newcommand{\func}[1]{\subsubsection{#1}\label{func:#1}}
\setlength{\parindent}{0pt}
\sloppy
\title{gltools - an OpenGL based on-line graphics toolbox}
\author{J\"urgen Fuhrmann \and Hartmut Langmach}
\date{$Date: 2003/03/28 16:47:48 $ $Revision: 2.16 $}
\begin{document}
\maketitle
{\em gltools} is a collection of OpenGL rendering utilities.
It consists of three layers:
\begin{description}
\item[glwin:] Interface to the windowing system.
\item[glrnd:] Management of an orthogonal rendering volume.
\item[glmesh:] Rendering utilities for functions on simplicial meshes
including 3D isosurfaces and plane sections.
\end{description}
Further, parallel to the system-dependent glwin module,
there is the system-independent gleps module,
which organizes the vector PostScript dump.
For more information, see also the
\htmladdnormallinkfoot{gltools homepage}{http://www.wias-berlin.de/~gltools}.
\section{Introduction}
\subsection{What is the intention of {\em gltools} ?}
You have a program and want to make some pictures with it. You are developing
3D code and are unable to find the errors in your data arrays. You are
looking for a replacement for GKS, which no one seems to have anymore.
Many graphics packages, such as the
\htmladdnormallinkfoot{GLUT toolkit}{http://www.sgi.com/Technology/openGL/glut.html}
(which has nearly the same intentions as {\em gltools}),
\htmladdnormallinkfoot{AVS}{http://www.avs.com}
and \htmladdnormallinkfoot{GRAPE}{http://www.mathematik.uni-freiburg.de/Grape/grape.html}
provide a perfect environment.
But for them, one has to register existing code as a callback and/or
one has to translate the existing data structures into those of the
graphics package. This may be fairly time consuming and difficult.
Also, one wants to control the graphics package via the existing code,
not vice versa. This is where the {\em gltools} package comes in.
It should enable one to get easy access to the OpenGL world
{\em on-line}
from one's {\em own} data structures within {\em existing code}.
It has been tested with OpenGL on SGI Irix 6.x and Compaq Tru64 UNIX
as well as with the
\htmladdnormallinkfoot{Mesa package}{http://www.ssec.wisc.edu/~brianp/Mesa.html}
of Brian Paul. It {\em should}
compile with any ANSI-C compiler.
\subsection{What is not the intention of {\em gltools} ?}
It is intended to keep this package compact.
There are two main reasons for this:
\begin{itemize}
\item the limited time of the authors
\item the ease of use of {\em gltools} for the programmer.
\end{itemize}
So, for now there are no plans to incorporate menu control into the
package. An easy way (and the one most preferred by the author) to provide
menus would be a callback to a scripting language like {\em tcl}, which
could be equipped with a menu system; but this makes sense only when
the whole code is embedded in such a language.
\subsection{Sample code}
\paragraph{glwexample-appctrl.c} contains a simple test
program with a GL window in application control mode.
\paragraph{glwexample-evctrl.c} contains a simple test
program with a GL window in event control mode.
\paragraph{glrexample.c} contains a simple test
program with a GL renderer.
\paragraph{glview.c} contains a sample program which can be used to render
function data on rectangular meshes. It is used as a test program
for {\em glrnd} and {\em glmesh}.
\subsection{To Do}
\begin{itemize}
\item vector field rendering
\item handle arbitrary clipping planes -- this is
only a question of a clever keyboard and mouse interface,
{\em glmesh} already does everything.
\item relate shown values in the title to real values.
\item control rotation axes : keys X,Y,Z
\end{itemize}
\input glwin.h-tex
\input glrnd.h-tex
\input glmesh.h-tex
\input gleps.h-tex
\end{document}
% $Log: gltools.tex,v $
% Revision 2.16 2003/03/28 16:47:48 fuhrmann
% pdelib1.15_alpha1
%
% Revision 2.15 2003/03/28 11:20:25 fuhrmann
% pdelib2.0_alpha1
%
% Revision 2.14 1999/12/21 17:18:55 fuhrmann
% doc update for gltools-2-3
%
% Revision 2.13 1998/12/18 14:35:21 fuhrmann
% Distribution stuff
%
% Revision 2.12 1997/12/12 12:45:38 fuhrmann
% new doc mechanism
%
% Revision 2.11 1997/11/27 19:03:09 fuhrmann
% glwRecord stuff, PAL-Format, tex-file for keys
%
% Revision 2.10 1997/11/25 10:12:28 fuhrmann
% slightly changed input statement
%
% Revision 2.9 1997/11/24 17:45:16 fuhrmann
% introduction removed
%
% Revision 2.8 1997/10/27 14:39:46 fuhrmann
% doc stuff
%
% Revision 2.7 1997/05/20 15:59:54 fuhrmann
% *** empty log message ***
%
% Revision 2.6 1997/05/19 18:09:31 fuhrmann
% func,xref
%
% Revision 2.5 1997/05/19 15:46:53 fuhrmann
% .h-tex include files
%
% Revision 2.4 1997/04/04 12:25:52 fuhrmann
% rcs style, include *.tex
%
% Revision 2.3 1996/11/06 20:39:01 fuhrmann
% documentation improved
%
% Revision 2.2 1996/11/06 18:28:34 fuhrmann
% input files as .h-tex
%
% Revision 2.1 1996/09/23 17:05:13 fuhrmann
% switched to xdoc
%
% Revision 2.0 1996/02/15 19:57:10 fuhrmann
% First meta-stable distribution
%
% Revision 1.2 1996/02/15 14:17:45 fuhrmann
% glwin & glrnd better documented
%
% Revision 1.1 1995/10/20 15:45:38 fuhrmann
% Initial revision
%
|
(*
* Copyright 2020, Data61, CSIRO (ABN 41 687 119 230)
*
* SPDX-License-Identifier: BSD-2-Clause
*)
theory Alloc_Simp
imports
"AutoCorres.AutoCorres"
"Sep_Algebra.Separation_Algebra"
"Sep_Algebra.Sep_Algebra_L4v"
"Hoare_Sep_Tactics.Hoare_Sep_Tactics"
begin
external_file "alloc_simp.c"
(* Parse the input file. *)
install_C_file "alloc_simp.c"
(* Abstract the input file. *)
autocorres "alloc_simp.c"
(* Bodies of translated functions. *)
thm alloc_simp.align_up'_def
thm alloc_simp.alloc'_def
thm alloc_simp.add_mem_pool'_def
thm alloc_simp.init_allocator'_def
record my_state =
heap_w32 :: "word32 ptr \<Rightarrow> word32 option"
(* is_valid_w32 :: "word32 ptr \<Rightarrow> bool"
*)
term "a :: my_state"
term "a :: 'a my_state_ext"
instantiation my_state_ext :: (sep_algebra) sep_algebra
begin
definition "zero_my_state_ext \<equiv> \<lparr> heap_w32 = \<lambda>p. None, \<dots> = 0 \<rparr> "
definition "sep_disj_my_state_ext
(a :: 'a my_state_ext) (b :: 'a my_state_ext) \<equiv>
(\<forall>p. (heap_w32 a) p = None \<or> (heap_w32 b) p = None) \<and> (more a ## more b)"
definition "plus_my_state_ext (a :: 'a my_state_ext) (b :: 'a my_state_ext) \<equiv>
\<lparr> heap_w32 = \<lambda>p. if (heap_w32 a p = None) then (heap_w32 b p) else (heap_w32 a p) , \<dots> = more a + more b \<rparr>"
instance
apply default
apply (unfold zero_my_state_ext_def sep_disj_my_state_ext_def plus_my_state_ext_def)
apply (clarsimp)
apply (clarsimp simp: sep_disj_commute)
apply (metis Some_helper)
apply (clarsimp)
apply (rule my_state.equality)
apply (clarsimp)
defer
apply (clarsimp)
apply (auto)[1]
apply (metis)
apply (auto intro: sep_add_commute)[1]
apply (auto intro: sep_add_assoc)[1]
apply (auto dest: sep_disj_addD)[1]
apply (auto)[1]
apply (metis option.distinct(1))
apply (auto elim: sep_disj_addI1)[1]
apply (auto)[1]
done
end
definition
"set_val a v =
modify (my_state.heap_w32_update (\<lambda>s. s(a := Some v)))"
definition
"get_val p = gets (\<lambda>s. the (my_state.heap_w32 s p))"
lemma set_val_wp: "\<lbrace>\<lambda>s. ((\<lambda>s. heap_w32 s a = Some any) \<and>* R) s\<rbrace>
set_val a x
\<lbrace>\<lambda>_ s. ((\<lambda>s. heap_w32 s a = Some x) \<and>* R) s\<rbrace>"
apply(clarsimp simp: set_val_def sep_conj_def)
apply(rule_tac x="my_state.heap_w32_update (\<lambda>s. s(a \<mapsto> x)) xa " in exI)
apply(rule_tac x=y in exI)
apply (clarsimp)
apply (safe)
apply (clarsimp simp: sep_disj_my_state_ext_def)
apply (erule_tac x=a in allE)
apply (clarsimp)
apply(clarsimp simp: plus_my_state_ext_def)
apply (rule ext)
apply (clarsimp)
done
lemma get_val_wp: "\<lbrace>\<lambda>s. ((\<lambda>s. heap_w32 s a = Some x) \<and>* R) s\<rbrace>
get_val a
\<lbrace>\<lambda>rv. ((\<lambda>s. heap_w32 s a = Some x) \<and>* R) and K (rv = x) \<rbrace>"
apply(clarsimp simp: get_val_def sep_conj_def)
apply(rule conjI)
apply(rule_tac x=xa in exI)
apply(rule_tac x=y in exI)
apply(simp)
apply(clarsimp simp: plus_my_state_ext_def)
done
lemma fixes a :: "word32 ptr" and b :: "word32 ptr"
shows "\<lbrace> \<lambda>s. heap_w32 s a = Some x
\<and> heap_w32 s b = Some y\<rbrace>
my_swap a b
\<lbrace>\<lambda>r s. heap_w32 s a = Some y \<and> heap_w32 s b = Some x \<rbrace>!"
sorry
end
|
lemma vector_eq_dot_span: assumes "x \<in> span B" "y \<in> span B" and i: "\<And>i. i \<in> B \<Longrightarrow> i \<bullet> x = i \<bullet> y" shows "x = y" |
# Copyright (c) 2018-2021, Carnegie Mellon University
# See LICENSE for details
# Declaration of a sorter on one dimension within tuples, with n inputs in total.
# n is the total number of inputs, so there are n/2 tuples.
# w should be at least 2.
Class(SortVecBase, BaseMat, rec(
abbrevs := [()-> []],
new := (self) >> SPL( WithBases(self, rec()) ).setDims(),
dims := self >> [ 4, 4 ],
isReal := True,
sums := self >> self,
rChildren := self >> [],
rSetChild := rSetChildFields(),
toAMat := self >> Error("not supported"),
transpose := self >> self,
));
Class(SortVecConfigBase_w2, BaseMat, rec(
abbrevs := [(a)-> [a] , (b)-> [b]],
new := (self, a, b) >> SPL( WithBases(self, rec(dimensions:=[2,2], a := a, b:=b))),
print := (self, i, is) >> Print(self.name, "(", self.a, ",", self.b, ")"),
dims := self >> [ 2, 2 ],
isReal := True,
sums := self >> self,
rChildren := self >> [self.a, self.b],
rSetChild := rSetChildFields("a","b"),
toAMat := self >> Error("not supported"),
transpose := self >> self,
));
# Declaration of SortVecConfigBase, a configurable 4x4 sorter.
Class(SortVecConfigBase, BaseMat, rec(
abbrevs := [(a)-> [a]],
new := (self, a) >> SPL( WithBases(self, rec(dimensions:=[4,4], a := a))),
print := (self, i, is) >> Print(self.name, "(", self.a, ")"),
dims := self >> [ 4, 4 ],
isReal := True,
sums := self >> self,
rChildren := self >> [self.a],
rSetChild := rSetChildFields("a"),
toAMat := self >> Error("not supported"),
transpose := self >> self,
));
Class(SortVecBase_w2, BaseMat, rec(
abbrevs := [(a)-> [a]],
new := (self, a) >> SPL( WithBases(self, rec(dimensions:=[2,2], a := a))),
print := (self, i, is) >> Print(self.name, "(", self.a, ")"),
dims := self >> [ 2, 2 ],
isReal := True,
sums := self >> self,
rChildren := self >> [self.a],
rSetChild := rSetChildFields("a"),
toAMat := self >> Error("not supported"),
transpose := self >> self,
));
HDLCodegen.SortVecBase_w2 := (self, o, y, x, opts) >>
let(
t0 := TempVar(x.t.t),
t1 := TempVar(x.t.t),
t3 := TempVar(x.t.t),
t5 := TempVar(x.t.t),
t6 := TempVar(x.t.t),
t20 := TempVar(x.t.t),
t21 := TempVar(x.t.t),
t23 := TempVar(x.t.t),
t25 := TempVar(x.t.t),
t26 := TempVar(x.t.t),
chain(
assign(t3, imod(o.a, 2)),
regassign(t0, cond(t3, t0, nth(x,0))),
assign(t5, cond(leq(t0, nth(x,0)), nth(x,0), t0)),
assign(t6, cond(leq(t0, nth(x,0)), t0, nth(x,0))),
regassign(t1, cond(t3, t5, t1)),
assign(nth(y,0), cond(t3, t6, t1)),
regassign(t20, cond(t3, t20, nth(x,1))),
assign(t25, cond(leq(t0, nth(x,0)), nth(x,1), t20)),
assign(t26, cond(leq(t0, nth(x,0)), t20, nth(x,1))),
regassign(t21, cond(t3, t25, t21)),
assign(nth(y,1), cond(t3, t26, t21))
)
);
HDLCodegen.SortVecConfigBase_w2 := (self, o, y, x, opts) >>
let(
t0 := TempVar(x.t.t),
t1 := TempVar(x.t.t),
t3 := TempVar(x.t.t),
t5 := TempVar(x.t.t),
t6 := TempVar(x.t.t),
t7 := TempVar(x.t.t),
t8 := TempVar(x.t.t),
t9 := TempVar(x.t.t),
t10 := TempVar(x.t.t),
t2 := TempVar(x.t.t),
t20 := TempVar(x.t.t),
t21 := TempVar(x.t.t),
t23 := TempVar(x.t.t),
t25 := TempVar(x.t.t),
t26 := TempVar(x.t.t),
chain(
assign(t3, imod(o.a, 2)),
regassign(t0, cond(t3, t0, nth(x,0))),
assign(t7, eq(o.b,0)),
assign(t8, eq(o.b,1)),
assign(t2, leq(t0, nth(x,0))),
assign(t9, cond(t2, t0, nth(x,0))),
assign(t10, cond(t2, nth(x,0), t0)),
assign(t5, cond(t7, nth(x,0) , t8, t9, t10)),
assign(t6, cond(t7, t0 , t8, t10, t9)),
regassign(t1, cond(t3, t5, t1)),
assign(nth(y,0), cond(t3, t6, t1)),
regassign(t20, cond(t3, t20, nth(x,1))),
assign(t25, cond(leq(t0, nth(x,0)), nth(x,1), t20)),
assign(t26, cond(leq(t0, nth(x,0)), t20, nth(x,1))),
regassign(t21, cond(t3, t25, t21)),
assign(nth(y,1), cond(t3, t26, t21))
)
);
HDLCodegen.SortVecBase := (self, o, y, x, opts) >>
chain(
assign(nth(y,0), cond(leq(nth(x,0), nth(x,2)), nth(x,0), nth(x,2))),
assign(nth(y,2), cond(leq(nth(x,0), nth(x,2)), nth(x,2), nth(x,0))),
assign(nth(y,1), cond(leq(nth(x,0), nth(x,2)), nth(x,1), nth(x,3))),
assign(nth(y,3), cond(leq(nth(x,0), nth(x,2)), nth(x,3), nth(x,1)))
);
HDLCodegen.SortVecConfigBase := (self, o, y, x, opts) >>
let(
t0 := TempVar(x.t.t),
t1 := TempVar(x.t.t),
t2 := TempVar(x.t.t),
t3 := TempVar(x.t.t),
chain(
assign(t2, nth(x,0)),
assign(t3, nth(x,2)),
assign(t0, cond(leq(t2, t3), t2, t3)),
assign(t1, cond(leq(t2, t3), t3, t2)),
assign(nth(y,0), cond(eq(o.a,0), t2, eq(o.a,1), t1, t0)),
assign(nth(y,2), cond(eq(o.a,0), t3, eq(o.a,1), t0, t1)),
assign(nth(y,1), cond(leq(nth(x,0), nth(x,2)), nth(x,1), nth(x,3))),
assign(nth(y,3), cond(leq(nth(x,0), nth(x,2)), nth(x,3), nth(x,1)))
)
);
Class(SortVec, TaggedNonTerminal, rec(
abbrevs := [
(n) -> Checked(IsPosIntSym(n), [_unwrap(n)]),
],
hashAs := self >> ObjId(self)(self.params[1]).withTags(self.getTags()),
dims := self >> [ self.params[1], self.params[1] ],
terminate := self >> Error("not supported"), # we could probably support this
));
NewRulesFor(SortVec, rec(
Sort_Stream_Vec := rec(
info := "Streaming sorting network",
applicable := nt -> Length(nt.params) = 1 and IsTwoPower(nt.params[1]),
children := (self, nt) >> let(
tag_w_tmp := nt.tags[1],
tag_w := tag_w_tmp.bs,
t := Log2Int(nt.params[1]),
p := Ind(2^t),
#get_bb := w -> TTensorI(SortVecBase(), 2^(t-1), APar, APar),
get_bb := w -> Cond(w=2, TTensorInd(SortVecBase_w2(p), p, APar, APar),TTensorI(SortVecBase(), 2^(t-1), APar, APar)),
[[ TCompose(
[TCompose(List([1..t-1], i ->
TCompose([
#TTensorI(SortBase(), 2^(t-1), APar, APar),
get_bb(tag_w),
TCompose(List([2..(t-i+1)], j ->
TCompose([
# The one below *should* work, but it causes some rewriting problems.
#TPrm(Tensor(Tensor(I(2^(t-j)),(Tensor(I(2), L(2^(j-1), 2^(j-2))) * L(2^j,2))),I(2))),
# So, I'm going to pull the first I() out of the tensor product. This
# changes it from:
# TPrm(I x (IxL)*L x I)
# to:
# I x (TPrm(IxL)*L x I)
TTensorI(TPrm(Tensor(Tensor(I(2), L(2^(j-1), 2^(j-2))) * L(2^j,2), I(2))), 2^(t-j), APar, APar),
# This one doesn't work because it has nested TPrms and it has TTensorI in a TPrm.
#TPrm(Tensor(TTensorI(TPrm(Tensor(I(2), L(2^(j-1), 2^(j-2))) * L(2^j,2)), 2^(t-j), APar, APar),I(2))),
get_bb(tag_w)
#TTensorI(SortBase(), 2^(t-1), APar, APar)
])
)),
# TPrm(Tensor(TTensorI(TPrm(L(2^(t-i+1), 2^(t-i)) * SortIJPerm(2^(t-i+1))), 2^(i-1), APar, APar),I(2)))
# Like above, re-ordering these terms
# TPrm(Tensor(Tensor(I(2^(i-1)), L(2^(t-i+1), 2^(t-i)) * SortIJPerm(2^(t-i+1) ) ) ,I(2)))
TTensorI(TPrm(Tensor(L(2^(t-i+1), 2^(t-i)) * SortIJPerm(2^(t-i+1)), I(2))), 2^(i-1), APar, APar)
])
)),
get_bb(tag_w)]
#TTensorI(SortBase(), 2^(t-1), APar, APar)]
).withTags(nt.getTags())
]]
),
apply := (nt, c, cnt) -> c[1],
),
Sort_Stream4_Vec := rec(
info := "",
depth := 1,
applicable := (self, nt) >> Length(nt.params) = 1 and IsTwoPower(nt.params[1]) and
let (d := self.depth, t := Log2Int(nt.params[1]), IsInt(t*t/self.depth) and
(IsInt(d/t) or IsInt(t/d))),
children := (self, nt) >> let(
t := Log2Int(nt.params[1]),
d := self.depth,
d1 := cond(leq(d,t), d, t).ev(),
d2 := cond(leq(d,t), 1, d/t).ev(),
k := Ind(2^(t-1)),
j := Ind(t),
l := Ind(t),
n := Ind(t/d2),
v := Ind(t/d1),
d_tmp := cond(leq(d,t), t/d, 0).ev(),
d2_tmp := cond(leq(t,d), d/t, 0).ev(),
s_tmp := ((t*t)/d),
m2 := Ind(d_tmp),
s2 := Ind(s_tmp),
v2 := Ind(d2_tmp),
n2 := Ind(d),
n3 := Ind(d),
l2 := Ind(t),
tag_w_tmp := nt.tags[1],
tag_w := tag_w_tmp.bs,
p := Ind(2^t),
c1 := (lp, jp) >> lt((t-1), (lp+jp)),
z := (lp, jp) >> (t-1)-(lp+jp),
z_w1 := (lp, jp) >> (t-1)-(lp+jp)+1,
c2 := (lp, jp) >> logic_and(eq(bit_sel(k, z(lp, jp)), 1), neq(lp, 0)),
c2_w1 := (lp, jp) >> logic_and(eq(bit_sel(p, z_w1(lp, jp)), 1), neq(lp, 0)),
access_f := (lp, jp) >> cond(c1(lp, jp), 0, c2(lp, jp), 1, 2),
access_f_w1 := (lp, jp) >> cond(c1(lp, jp), 0, c2_w1(lp, jp), 1, 2),
get_bb := (lp,jp) -> Cond(tag_w=2, TTensorInd(SortVecConfigBase_w2(p,access_f_w1(lp,jp)), p, APar, APar),TTensorInd(SortVecConfigBase(access_f(lp, jp)), k, APar, APar)),
stage := (lp, jp) >> TCompose([
get_bb(lp, jp),
#TTensorInd(SortConfigBase(access_f(lp, jp)), k, APar, APar),
TPrm(Tensor(L(2^t, 2^(t-1)),I(2)))
]),
full_stage := np_1 >> TCompose(List([0..t-1], m_1 -> TCompose(List([0..t-1], j_1 -> stage(m_1, j_1))))),
full_stage1 := np >> TCompose(List([0..d2-1], m -> TCompose(List([0..t-1], j -> stage(d2*np+m, j))))),
full_stage2 := vp >> TCompose(List([0..d1-1], s -> stage(l, vp+s))),
full_stage1b := np2 >> TICompose(m2,d_tmp, TICompose(j,t, stage((t/d)*np2+m2, j))),
# Old: problem is that it's assuming the l2 above, which is an unassigned iterator.
# There is also a problem with the vp2+s2 parameter: you need to multiply vp2 by the number of iterations.
# full_stage2b := (vp2) >> TICompose(s2,s_tmp, stage(l2, vp2+s2)),
full_stage2b := (vp2, l3) >> TICompose(s2,s_tmp, stage(l3, vp2*s_tmp+s2)),
[[ Cond(d=t*t, full_stage(0).withTags(nt.getTags()),
d<t, TCompose(List([0..d-1], n2 -> full_stage1b(n2))).withTags(nt.getTags()),
d=t, TCompose(List([0..d-1], n3 -> full_stage1b(n3))).withTags(nt.getTags()),
d>t, TCompose(List([0..t-1], l3 -> TCompose(List([0..d2_tmp-1], v2 -> full_stage2b(v2, l3))))).withTags(nt.getTags())) # the problem seems to be the outermost TCompose; it works if it is a TICompose
#d>t, TICompose(n, t/d2, full_stage1(n)).withTags(nt.getTags())) # old variant, kept because the new one does not work yet
]]
),
#
apply := (nt, c, cnt) -> c[1],
),
));
|
```python
# import Python libraries
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import sympy as sym
from sympy.plotting import plot
import pandas as pd
from IPython.display import display
from IPython.core.display import Math
```
```python
# time elbow_flexion BIClong BICshort BRA
r_ef = np.loadtxt('./../data/r_elbowflexors.mot', skiprows=7)
f_ef = np.loadtxt('./../data/f_elbowflexors.mot', skiprows=7)
```
```python
m_ef = r_ef*1  # copy of the moment-arm array, reused to hold the torques
m_ef[:, 2:] = r_ef[:, 2:]*f_ef[:, 2:]  # maximum torque = moment arm x maximum force
```
```python
labels = ['Biceps long head', 'Biceps short head', 'Brachialis']
fig, ax = plt.subplots(nrows=1, ncols=3, sharex=True, figsize=(10, 4))
ax[0].plot(r_ef[:, 1], r_ef[:, 2:])
#ax[0].set_xlabel('Elbow angle $(\,^o)$')
ax[0].set_title('Moment arm (m)')
ax[1].plot(f_ef[:, 1], f_ef[:, 2:])
ax[1].set_xlabel('Elbow angle $(\,^o)$', fontsize=16)
ax[1].set_title('Maximum force (N)')
ax[2].plot(m_ef[:, 1], m_ef[:, 2:])
#ax[2].set_xlabel('Elbow angle $(\,^o)$')
ax[2].set_title('Maximum torque (Nm)')
ax[2].legend(labels, loc='best', framealpha=.5)
ax[2].set_xlim(np.min(r_ef[:, 1]), np.max(r_ef[:, 1]))
plt.tight_layout()
plt.show()
```
```python
a_ef = np.array([624.3, 435.56, 987.26])/50  # PCSA [cm2] = maximum isometric force [N] / specific tension (50 N/cm2)
print(a_ef)
```
[ 12.486 8.7112 19.7452]
```python
from scipy.optimize import minimize
```
```python
def cf_f1(x):
"""Cost function: sum of forces."""
return x[0] + x[1] + x[2]
def cf_f2(x):
"""Cost function: sum of forces squared."""
return x[0]**2 + x[1]**2 + x[2]**2
def cf_fpcsa2(x, a):
"""Cost function: sum of squared muscle stresses."""
return (x[0]/a[0])**2 + (x[1]/a[1])**2 + (x[2]/a[2])**2
def cf_fmmax3(x, m):
"""Cost function: sum of cubic forces normalized by moments."""
return (x[0]/m[0])**3 + (x[1]/m[1])**3 + (x[2]/m[2])**3
```
```python
def cf_f1d(x):
"""Derivative of cost function: sum of forces."""
dfdx0 = 1
dfdx1 = 1
dfdx2 = 1
return np.array([dfdx0, dfdx1, dfdx2])
def cf_f2d(x):
"""Derivative of cost function: sum of forces squared."""
dfdx0 = 2*x[0]
dfdx1 = 2*x[1]
dfdx2 = 2*x[2]
return np.array([dfdx0, dfdx1, dfdx2])
def cf_fpcsa2d(x, a):
"""Derivative of cost function: sum of squared muscle stresses."""
dfdx0 = 2*x[0]/a[0]**2
dfdx1 = 2*x[1]/a[1]**2
dfdx2 = 2*x[2]/a[2]**2
return np.array([dfdx0, dfdx1, dfdx2])
def cf_fmmax3d(x, m):
"""Derivative of cost function: sum of cubic forces normalized by moments."""
dfdx0 = 3*x[0]**2/m[0]**3
dfdx1 = 3*x[1]**2/m[1]**3
dfdx2 = 3*x[2]**2/m[2]**3
return np.array([dfdx0, dfdx1, dfdx2])
```
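As a quick sanity check on the analytical gradients above, the following cell (a minimal sketch added for illustration; `num_grad` and `x_chk` are ad-hoc names, not part of the original analysis) compares `cf_f2d` against a central finite-difference approximation. The same pattern works for the other cost functions and their derivatives.
```python
def num_grad(fun, x, h=1e-6):
    """Central finite-difference approximation of the gradient of fun at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[i] = h
        g[i] = (fun(x + dx) - fun(x - dx))/(2*h)
    return g

x_chk = np.array([100.0, 50.0, 200.0])  # arbitrary test point
print('analytical:', cf_f2d(x_chk))
print('numerical :', num_grad(cf_f2, x_chk))
```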
```python
M = 20 # desired torque at the elbow
iang = 69 # which will give the closest value to 90 degrees
r = r_ef[iang, 2:]
f0 = f_ef[iang, 2:]
a = a_ef
m = m_ef[iang, 2:]
x0 = f_ef[iang, 2:]/10 # far from the correct answer for the sum of torques
print('M =', M)
print('x0 =', x0)
print('r * x0 =', np.sum(r*x0))
```
M = 20
x0 = [ 57.51311369 36.29974032 89.6470056 ]
r * x0 = 6.62200444607
```python
bnds = ((0, f0[0]), (0, f0[1]), (0, f0[2]))
```
```python
# use this in combination with the parameter bounds:
cons = ({'type': 'eq',
'fun' : lambda x, r, f0, M: np.array([r[0]*x[0] + r[1]*x[1] + r[2]*x[2] - M]),
'jac' : lambda x, r, f0, M: np.array([r[0], r[1], r[2]]), 'args': (r, f0, M)})
```
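For reference, this single equality constraint can be passed to `minimize` together with the `bnds` bounds defined above; a sketch of such a call (not one of the original cells; the result name `f2r_b` is arbitrary) would be:
```python
f2r_b = minimize(fun=cf_f2, x0=x0, args=(), jac=cf_f2d,
                 bounds=bnds, constraints=cons, method='SLSQP',
                 options={'disp': True})
```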
```python
# to enter everything as constraints:
cons = ({'type': 'eq',
'fun' : lambda x, r, f0, M: np.array([r[0]*x[0] + r[1]*x[1] + r[2]*x[2] - M]),
'jac' : lambda x, r, f0, M: np.array([r[0], r[1], r[2]]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[0]-x[0],
'jac' : lambda x, r, f0, M: np.array([-1, 0, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[1]-x[1],
'jac' : lambda x, r, f0, M: np.array([0, -1, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: f0[2]-x[2],
'jac' : lambda x, r, f0, M: np.array([0, 0, -1]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: x[0],
'jac' : lambda x, r, f0, M: np.array([1, 0, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: x[1],
'jac' : lambda x, r, f0, M: np.array([0, 1, 0]), 'args': (r, f0, M)},
{'type': 'ineq', 'fun' : lambda x, r, f0, M: x[2],
'jac' : lambda x, r, f0, M: np.array([0, 0, 1]), 'args': (r, f0, M)})
```
```python
f1r = minimize(fun=cf_f1, x0=x0, args=(), jac=cf_f1d,
constraints=cons, method='SLSQP',
options={'disp': True})
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 409.5926601
Iterations: 7
Function evaluations: 7
Gradient evaluations: 7
```python
f2r = minimize(fun=cf_f2, x0=x0, args=(), jac=cf_f2d,
constraints=cons, method='SLSQP',
options={'disp': True})
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 75657.3847913
Iterations: 5
Function evaluations: 6
Gradient evaluations: 5
```python
fpcsa2r = minimize(fun=cf_fpcsa2, x0=x0, args=(a,), jac=cf_fpcsa2d,
constraints=cons, method='SLSQP',
options={'disp': True})
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 529.96397777
Iterations: 11
Function evaluations: 11
Gradient evaluations: 11
```python
fmmax3r = minimize(fun=cf_fmmax3, x0=x0, args=(m,), jac=cf_fmmax3d,
constraints=cons, method='SLSQP',
options={'disp': True})
```
Optimization terminated successfully. (Exit mode 0)
Current function value: 1075.13889317
Iterations: 12
Function evaluations: 12
Gradient evaluations: 12
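Before tabulating the results, it can be reassuring to confirm that each solution respects the torque equality and the force bounds. The short check below is a sketch added for illustration (the loop and label names are ad hoc), reusing the `r`, `M`, and `f0` values defined earlier; a small tolerance absorbs the solver's numerical slack.
```python
tol = 1e-6
for name, res in zip(['sum F', 'sum F^2', 'sum (F/pcsa)^2', 'sum (F/Mmax)^3'],
                     [f1r, f2r, fpcsa2r, fmmax3r]):
    torque_err = np.sum(r*res.x) - M
    within_bounds = np.all((res.x >= -tol) & (res.x <= f0 + tol))
    print(f'{name}: torque error = {torque_err:.2e}, within bounds = {within_bounds}')
```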
```python
dat = np.vstack((np.around(r*100,1), np.around(a,1), np.around(f0,0), np.around(m,1)))
opt = np.around(np.vstack((f1r.x, f2r.x, fpcsa2r.x, fmmax3r.x)), 1)
er = ['-', '-', '-', '-',
np.sum(r*f1r.x)-M, np.sum(r*f2r.x)-M, np.sum(r*fpcsa2r.x)-M, np.sum(r*fmmax3r.x)-M]
data = np.vstack((np.vstack((dat, opt)).T, er)).T
rows = ['$\text{Moment arm}\;[cm]$', '$pcsa\;[cm^2]$', '$F_{max}\;[N]$', '$M_{max}\;[Nm]$',
'$\sum F_i$', '$\sum F_i^2$', '$\sum(F_i/pcsa_i)^2$', '$\sum(F_i/M_{max,i})^3$']
cols = ['Biceps long head', 'Biceps short head', 'Brachialis', 'Error in M']
df = pd.DataFrame(data, index=rows, columns=cols)
print('\nComparison of different cost functions for solving the distribution problem')
df
```
Comparison of different cost functions for solving the distribution problem

| | Biceps long head | Biceps short head | Brachialis | Error in M |
|---|---|---|---|---|
| $\text{Moment arm}\;[cm]$ | 4.9 | 4.9 | 2.3 | - |
| $pcsa\;[cm^2]$ | 12.5 | 8.7 | 19.7 | - |
| $F_{max}\;[N]$ | 575.0 | 363.0 | 896.0 | - |
| $M_{max}\;[Nm]$ | 28.1 | 17.7 | 20.4 | - |
| $\sum F_i$ | 215.4 | 194.2 | 0.0 | 0.0 |
| $\sum F_i^2$ | 184.7 | 184.7 | 86.1 | 0.0 |
| $\sum(F_i/pcsa_i)^2$ | 201.7 | 98.2 | 235.2 | 0.0 |
| $\sum(F_i/M_{max,i})^3$ | 241.1 | 120.9 | 102.0 | -3.5527136788e-15 |
|
inductive day : Type
| monday
| tuesday
| wednesday
| thursday
| friday
| saturday
| sunday
def next_weekday : day -> day
| day.monday := day.tuesday
| day.tuesday := day.wednesday
| day.wednesday := day.thursday
| day.thursday := day.friday
| day.friday := day.saturday
| day.saturday := day.sunday
| day.sunday := day.monday
example : next_weekday (next_weekday day.saturday) = day.monday := rfl
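-- A second check in the same style (added as a sketch; it should also be closed
-- by `rfl`, assuming the definitions above compile as written):
example : next_weekday day.friday = day.saturday := rfl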
|
[STATEMENT]
lemma tan_mono_le_eq:
"-(pi/2) < x \<Longrightarrow> x < pi/2 \<Longrightarrow> -(pi/2) < y \<Longrightarrow> y < pi/2 \<Longrightarrow> tan x \<le> tan y \<longleftrightarrow> x \<le> y"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>- (pi / 2) < x; x < pi / 2; - (pi / 2) < y; y < pi / 2\<rbrakk> \<Longrightarrow> (tan x \<le> tan y) = (x \<le> y)
[PROOF STEP]
by (meson tan_mono_le not_le tan_monotone) |
Describe Users/dpb here.
Welcome to the Wiki, dpb! All of the pages on this Wiki are contributed by users just like you. If you feel that something is inaccurate, you are more than welcome to fix it yourself. In the event that your experience differs from others' experiences, it'd be perfect to note that experiences vary when it comes to noise. The Comments bar is a handy tool, but it sometimes creates the (incorrect) perception that content above the bar is somehow different from that appearing below it. Even if the initial page text was written by the apartment's management, though, that's just not the case. Wiki business pages are about the business, not for the business. Meaning that the experiences and perspectives of you and I are every bit as important as those of management/owners. Users/TomGarberson
|
Some days just feel dry.
and sand is the only scenery.
or grassy fields of green.
that come from a different source.
The source that leads me back to Him.
I say that You are my God.
Your presence is my promise.
against the onslaught of the enemy.
I cling; You hold securely.
All the peace I need. |
If $f$ and $g$ are continuous functions on a set $S$ and $g(x) \neq 0$ for all $x \in S$, then the function $f/g$ is continuous on $S$. |
== Composition and lyrical interpretation ==
|
[GOAL]
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝ : TopologicalSpace β
f g : α → β
hf : AEStronglyMeasurable' m f μ
hfg : f =ᵐ[μ] g
⊢ AEStronglyMeasurable' m g μ
[PROOFSTEP]
obtain ⟨f', hf'_meas, hff'⟩ := hf
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝ : TopologicalSpace β
f g : α → β
hfg : f =ᵐ[μ] g
f' : α → β
hf'_meas : StronglyMeasurable f'
hff' : f =ᵐ[μ] f'
⊢ AEStronglyMeasurable' m g μ
[PROOFSTEP]
exact ⟨f', hf'_meas, hfg.symm.trans hff'⟩
[GOAL]
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝ : TopologicalSpace β
f g : α → β
m' : MeasurableSpace α
hf : AEStronglyMeasurable' m f μ
hm : m ≤ m'
⊢ AEStronglyMeasurable' m' f μ
[PROOFSTEP]
obtain ⟨f', hf'_meas, hff'⟩ := hf
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝ : TopologicalSpace β
f g : α → β
m' : MeasurableSpace α
hm : m ≤ m'
f' : α → β
hf'_meas : StronglyMeasurable f'
hff' : f =ᵐ[μ] f'
⊢ AEStronglyMeasurable' m' f μ
[PROOFSTEP]
exact ⟨f', hf'_meas.mono hm, hff'⟩
[GOAL]
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f g : α → β
inst✝¹ : Add β
inst✝ : ContinuousAdd β
hf : AEStronglyMeasurable' m f μ
hg : AEStronglyMeasurable' m g μ
⊢ AEStronglyMeasurable' m (f + g) μ
[PROOFSTEP]
rcases hf with ⟨f', h_f'_meas, hff'⟩
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f g : α → β
inst✝¹ : Add β
inst✝ : ContinuousAdd β
hg : AEStronglyMeasurable' m g μ
f' : α → β
h_f'_meas : StronglyMeasurable f'
hff' : f =ᵐ[μ] f'
⊢ AEStronglyMeasurable' m (f + g) μ
[PROOFSTEP]
rcases hg with ⟨g', h_g'_meas, hgg'⟩
[GOAL]
case intro.intro.intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f g : α → β
inst✝¹ : Add β
inst✝ : ContinuousAdd β
f' : α → β
h_f'_meas : StronglyMeasurable f'
hff' : f =ᵐ[μ] f'
g' : α → β
h_g'_meas : StronglyMeasurable g'
hgg' : g =ᵐ[μ] g'
⊢ AEStronglyMeasurable' m (f + g) μ
[PROOFSTEP]
exact ⟨f' + g', h_f'_meas.add h_g'_meas, hff'.add hgg'⟩
[GOAL]
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f✝ g : α → β
inst✝¹ : AddGroup β
inst✝ : TopologicalAddGroup β
f : α → β
hfm : AEStronglyMeasurable' m f μ
⊢ AEStronglyMeasurable' m (-f) μ
[PROOFSTEP]
rcases hfm with ⟨f', hf'_meas, hf_ae⟩
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f✝ g : α → β
inst✝¹ : AddGroup β
inst✝ : TopologicalAddGroup β
f f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
⊢ AEStronglyMeasurable' m (-f) μ
[PROOFSTEP]
refine' ⟨-f', hf'_meas.neg, hf_ae.mono fun x hx => _⟩
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f✝ g : α → β
inst✝¹ : AddGroup β
inst✝ : TopologicalAddGroup β
f f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
x : α
hx : f x = f' x
⊢ (-f) x = (-f') x
[PROOFSTEP]
simp_rw [Pi.neg_apply]
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f✝ g : α → β
inst✝¹ : AddGroup β
inst✝ : TopologicalAddGroup β
f f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
x : α
hx : f x = f' x
⊢ -f x = -f' x
[PROOFSTEP]
rw [hx]
[GOAL]
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f✝ g✝ : α → β
inst✝¹ : AddGroup β
inst✝ : TopologicalAddGroup β
f g : α → β
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
⊢ AEStronglyMeasurable' m (f - g) μ
[PROOFSTEP]
rcases hfm with ⟨f', hf'_meas, hf_ae⟩
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f✝ g✝ : α → β
inst✝¹ : AddGroup β
inst✝ : TopologicalAddGroup β
f g : α → β
hgm : AEStronglyMeasurable' m g μ
f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
⊢ AEStronglyMeasurable' m (f - g) μ
[PROOFSTEP]
rcases hgm with ⟨g', hg'_meas, hg_ae⟩
[GOAL]
case intro.intro.intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f✝ g✝ : α → β
inst✝¹ : AddGroup β
inst✝ : TopologicalAddGroup β
f g f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
g' : α → β
hg'_meas : StronglyMeasurable g'
hg_ae : g =ᵐ[μ] g'
⊢ AEStronglyMeasurable' m (f - g) μ
[PROOFSTEP]
refine' ⟨f' - g', hf'_meas.sub hg'_meas, hf_ae.mp (hg_ae.mono fun x hx1 hx2 => _)⟩
[GOAL]
case intro.intro.intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f✝ g✝ : α → β
inst✝¹ : AddGroup β
inst✝ : TopologicalAddGroup β
f g f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
g' : α → β
hg'_meas : StronglyMeasurable g'
hg_ae : g =ᵐ[μ] g'
x : α
hx1 : g x = g' x
hx2 : f x = f' x
⊢ (f - g) x = (f' - g') x
[PROOFSTEP]
simp_rw [Pi.sub_apply]
[GOAL]
case intro.intro.intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f✝ g✝ : α → β
inst✝¹ : AddGroup β
inst✝ : TopologicalAddGroup β
f g f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
g' : α → β
hg'_meas : StronglyMeasurable g'
hg_ae : g =ᵐ[μ] g'
x : α
hx1 : g x = g' x
hx2 : f x = f' x
⊢ f x - g x = f' x - g' x
[PROOFSTEP]
rw [hx1, hx2]
[GOAL]
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f g : α → β
inst✝¹ : SMul 𝕜 β
inst✝ : ContinuousConstSMul 𝕜 β
c : 𝕜
hf : AEStronglyMeasurable' m f μ
⊢ AEStronglyMeasurable' m (c • f) μ
[PROOFSTEP]
rcases hf with ⟨f', h_f'_meas, hff'⟩
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f g : α → β
inst✝¹ : SMul 𝕜 β
inst✝ : ContinuousConstSMul 𝕜 β
c : 𝕜
f' : α → β
h_f'_meas : StronglyMeasurable f'
hff' : f =ᵐ[μ] f'
⊢ AEStronglyMeasurable' m (c • f) μ
[PROOFSTEP]
refine' ⟨c • f', h_f'_meas.const_smul c, _⟩
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝² : TopologicalSpace β
f g : α → β
inst✝¹ : SMul 𝕜 β
inst✝ : ContinuousConstSMul 𝕜 β
c : 𝕜
f' : α → β
h_f'_meas : StronglyMeasurable f'
hff' : f =ᵐ[μ] f'
⊢ c • f =ᵐ[μ] c • f'
[PROOFSTEP]
exact EventuallyEq.fun_comp hff' fun x => c • x
[GOAL]
α : Type u_1
β✝ : Type u_2
𝕜✝ : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝³ : TopologicalSpace β✝
f✝ g : α → β✝
𝕜 : Type u_4
β : Type u_5
inst✝² : IsROrC 𝕜
inst✝¹ : NormedAddCommGroup β
inst✝ : InnerProductSpace 𝕜 β
f : α → β
hfm : AEStronglyMeasurable' m f μ
c : β
⊢ AEStronglyMeasurable' m (fun x => inner c (f x)) μ
[PROOFSTEP]
rcases hfm with ⟨f', hf'_meas, hf_ae⟩
[GOAL]
case intro.intro
α : Type u_1
β✝ : Type u_2
𝕜✝ : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝³ : TopologicalSpace β✝
f✝ g : α → β✝
𝕜 : Type u_4
β : Type u_5
inst✝² : IsROrC 𝕜
inst✝¹ : NormedAddCommGroup β
inst✝ : InnerProductSpace 𝕜 β
f : α → β
c : β
f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
⊢ AEStronglyMeasurable' m (fun x => inner c (f x)) μ
[PROOFSTEP]
refine' ⟨fun x => (inner c (f' x) : 𝕜), (@stronglyMeasurable_const _ _ m _ c).inner hf'_meas, hf_ae.mono fun x hx => _⟩
[GOAL]
case intro.intro
α : Type u_1
β✝ : Type u_2
𝕜✝ : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝³ : TopologicalSpace β✝
f✝ g : α → β✝
𝕜 : Type u_4
β : Type u_5
inst✝² : IsROrC 𝕜
inst✝¹ : NormedAddCommGroup β
inst✝ : InnerProductSpace 𝕜 β
f : α → β
c : β
f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
x : α
hx : f x = f' x
⊢ (fun x => inner c (f x)) x = (fun x => inner c (f' x)) x
[PROOFSTEP]
dsimp only
[GOAL]
case intro.intro
α : Type u_1
β✝ : Type u_2
𝕜✝ : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝³ : TopologicalSpace β✝
f✝ g : α → β✝
𝕜 : Type u_4
β : Type u_5
inst✝² : IsROrC 𝕜
inst✝¹ : NormedAddCommGroup β
inst✝ : InnerProductSpace 𝕜 β
f : α → β
c : β
f' : α → β
hf'_meas : StronglyMeasurable f'
hf_ae : f =ᵐ[μ] f'
x : α
hx : f x = f' x
⊢ inner c (f x) = inner c (f' x)
[PROOFSTEP]
rw [hx]
[GOAL]
α : Type u_1
β : Type u_2
𝕜 : Type u_3
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace β
f✝ g✝ : α → β
γ : Type u_4
inst✝ : TopologicalSpace γ
f : α → β
g : β → γ
hg : Continuous g
hf : AEStronglyMeasurable' m f μ
x : α
hx : f x = mk f hf x
⊢ (g ∘ f) x = (fun x => g (mk f hf x)) x
[PROOFSTEP]
rw [Function.comp_apply, hx]
[GOAL]
α : Type u_1
β : Type u_2
m m0 m0' : MeasurableSpace α
inst✝ : TopologicalSpace β
hm0 : m0 ≤ m0'
μ : Measure α
f : α → β
hf : AEStronglyMeasurable' m f (Measure.trim μ hm0)
⊢ AEStronglyMeasurable' m f μ
[PROOFSTEP]
obtain ⟨g, hg_meas, hfg⟩ := hf
[GOAL]
case intro.intro
α : Type u_1
β : Type u_2
m m0 m0' : MeasurableSpace α
inst✝ : TopologicalSpace β
hm0 : m0 ≤ m0'
μ : Measure α
f g : α → β
hg_meas : StronglyMeasurable g
hfg : f =ᵐ[Measure.trim μ hm0] g
⊢ AEStronglyMeasurable' m f μ
[PROOFSTEP]
exact ⟨g, hg_meas, ae_eq_of_ae_eq_trim hfg⟩
[GOAL]
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
⊢ AEStronglyMeasurable' m₂ f μ
[PROOFSTEP]
have h_ind_eq : s.indicator (hf.mk f) =ᵐ[μ] f :=
by
refine' Filter.EventuallyEq.trans _ (indicator_ae_eq_of_restrict_compl_ae_eq_zero (hm _ hs_m) hf_zero)
filter_upwards [hf.ae_eq_mk] with x hx
by_cases hxs : x ∈ s
· simp [hxs, hx]
· simp [hxs]
[GOAL]
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
⊢ Set.indicator s (mk f hf) =ᵐ[μ] f
[PROOFSTEP]
refine' Filter.EventuallyEq.trans _ (indicator_ae_eq_of_restrict_compl_ae_eq_zero (hm _ hs_m) hf_zero)
[GOAL]
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
⊢ Set.indicator s (mk f hf) =ᵐ[μ] Set.indicator s f
[PROOFSTEP]
filter_upwards [hf.ae_eq_mk] with x hx
[GOAL]
case h
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
x : α
hx : f x = mk f hf x
⊢ Set.indicator s (mk f hf) x = Set.indicator s f x
[PROOFSTEP]
by_cases hxs : x ∈ s
[GOAL]
case pos
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
x : α
hx : f x = mk f hf x
hxs : x ∈ s
⊢ Set.indicator s (mk f hf) x = Set.indicator s f x
[PROOFSTEP]
simp [hxs, hx]
[GOAL]
case neg
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
x : α
hx : f x = mk f hf x
hxs : ¬x ∈ s
⊢ Set.indicator s (mk f hf) x = Set.indicator s f x
[PROOFSTEP]
simp [hxs]
[GOAL]
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
h_ind_eq : Set.indicator s (mk f hf) =ᵐ[μ] f
⊢ AEStronglyMeasurable' m₂ f μ
[PROOFSTEP]
suffices : StronglyMeasurable[m₂] (s.indicator (hf.mk f))
[GOAL]
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
h_ind_eq : Set.indicator s (mk f hf) =ᵐ[μ] f
this : StronglyMeasurable (Set.indicator s (mk f hf))
⊢ AEStronglyMeasurable' m₂ f μ
case this
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
h_ind_eq : Set.indicator s (mk f hf) =ᵐ[μ] f
⊢ StronglyMeasurable (Set.indicator s (mk f hf))
[PROOFSTEP]
exact AEStronglyMeasurable'.congr this.aeStronglyMeasurable' h_ind_eq
[GOAL]
case this
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
h_ind_eq : Set.indicator s (mk f hf) =ᵐ[μ] f
⊢ StronglyMeasurable (Set.indicator s (mk f hf))
[PROOFSTEP]
have hf_ind : StronglyMeasurable[m] (s.indicator (hf.mk f)) := hf.stronglyMeasurable_mk.indicator hs_m
[GOAL]
case this
α : Type u_1
E : Type u_2
m m₂ m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : TopologicalSpace E
inst✝ : Zero E
hm : m ≤ m0
s : Set α
f : α → E
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) → MeasurableSet (s ∩ t)
hf : AEStronglyMeasurable' m f μ
hf_zero : f =ᵐ[Measure.restrict μ sᶜ] 0
h_ind_eq : Set.indicator s (mk f hf) =ᵐ[μ] f
hf_ind : StronglyMeasurable (Set.indicator s (mk f hf))
⊢ StronglyMeasurable (Set.indicator s (mk f hf))
[PROOFSTEP]
exact hf_ind.stronglyMeasurable_of_measurableSpace_le_on hs_m hs fun x hxs => Set.indicator_of_not_mem hxs _
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
f : { x // x ∈ Lp F p }
⊢ f ∈ lpMeasSubgroup F m p μ ↔ AEStronglyMeasurable' m (↑↑f) μ
[PROOFSTEP]
rw [← AddSubgroup.mem_carrier, lpMeasSubgroup, Set.mem_setOf_eq]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
f : { x // x ∈ Lp F p }
⊢ f ∈ lpMeas F 𝕜 m p μ ↔ AEStronglyMeasurable' m (↑↑f) μ
[PROOFSTEP]
rw [← SetLike.mem_coe, ← Submodule.mem_carrier, lpMeas, Set.mem_setOf_eq]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
⊢ Memℒp (Exists.choose (_ : AEStronglyMeasurable' m (↑↑f) μ)) p
[PROOFSTEP]
have hf : AEStronglyMeasurable' m f μ := mem_lpMeasSubgroup_iff_aeStronglyMeasurable'.mp hf_meas
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
hf : AEStronglyMeasurable' m (↑↑f) μ
⊢ Memℒp (Exists.choose (_ : AEStronglyMeasurable' m (↑↑f) μ)) p
[PROOFSTEP]
let g := hf.choose
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
hf : AEStronglyMeasurable' m (↑↑f) μ
g : α → F := Exists.choose hf
⊢ Memℒp (Exists.choose (_ : AEStronglyMeasurable' m (↑↑f) μ)) p
[PROOFSTEP]
obtain ⟨hg, hfg⟩ := hf.choose_spec
[GOAL]
case intro
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
hf : AEStronglyMeasurable' m (↑↑f) μ
g : α → F := Exists.choose hf
hg : StronglyMeasurable (Exists.choose hf)
hfg : ↑↑f =ᵐ[μ] Exists.choose hf
⊢ Memℒp (Exists.choose (_ : AEStronglyMeasurable' m (↑↑f) μ)) p
[PROOFSTEP]
change Memℒp g p (μ.trim hm)
[GOAL]
case intro
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
hf : AEStronglyMeasurable' m (↑↑f) μ
g : α → F := Exists.choose hf
hg : StronglyMeasurable (Exists.choose hf)
hfg : ↑↑f =ᵐ[μ] Exists.choose hf
⊢ Memℒp g p
[PROOFSTEP]
refine' ⟨hg.aestronglyMeasurable, _⟩
[GOAL]
case intro
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
hf : AEStronglyMeasurable' m (↑↑f) μ
g : α → F := Exists.choose hf
hg : StronglyMeasurable (Exists.choose hf)
hfg : ↑↑f =ᵐ[μ] Exists.choose hf
⊢ snorm g p (Measure.trim μ hm) < ⊤
[PROOFSTEP]
have h_snorm_fg : snorm g p (μ.trim hm) = snorm f p μ :=
by
rw [snorm_trim hm hg]
exact snorm_congr_ae hfg.symm
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
hf : AEStronglyMeasurable' m (↑↑f) μ
g : α → F := Exists.choose hf
hg : StronglyMeasurable (Exists.choose hf)
hfg : ↑↑f =ᵐ[μ] Exists.choose hf
⊢ snorm g p (Measure.trim μ hm) = snorm (↑↑f) p μ
[PROOFSTEP]
rw [snorm_trim hm hg]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
hf : AEStronglyMeasurable' m (↑↑f) μ
g : α → F := Exists.choose hf
hg : StronglyMeasurable (Exists.choose hf)
hfg : ↑↑f =ᵐ[μ] Exists.choose hf
⊢ snorm (Exists.choose hf) p μ = snorm (↑↑f) p μ
[PROOFSTEP]
exact snorm_congr_ae hfg.symm
[GOAL]
case intro
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
hf : AEStronglyMeasurable' m (↑↑f) μ
g : α → F := Exists.choose hf
hg : StronglyMeasurable (Exists.choose hf)
hfg : ↑↑f =ᵐ[μ] Exists.choose hf
h_snorm_fg : snorm g p (Measure.trim μ hm) = snorm (↑↑f) p μ
⊢ snorm g p (Measure.trim μ hm) < ⊤
[PROOFSTEP]
rw [h_snorm_fg]
[GOAL]
case intro
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_meas : f ∈ lpMeasSubgroup F m p μ
hf : AEStronglyMeasurable' m (↑↑f) μ
g : α → F := Exists.choose hf
hg : StronglyMeasurable (Exists.choose hf)
hfg : ↑↑f =ᵐ[μ] Exists.choose hf
h_snorm_fg : snorm g p (Measure.trim μ hm) = snorm (↑↑f) p μ
⊢ snorm (↑↑f) p μ < ⊤
[PROOFSTEP]
exact Lp.snorm_lt_top f
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
⊢ Memℒp.toLp ↑↑f (_ : Memℒp (↑↑f) p) ∈ lpMeasSubgroup F m p μ
[PROOFSTEP]
let hf_mem_ℒp := memℒp_of_memℒp_trim hm (Lp.memℒp f)
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_mem_ℒp : Memℒp (↑↑f) p := memℒp_of_memℒp_trim hm (Lp.memℒp f)
⊢ Memℒp.toLp ↑↑f (_ : Memℒp (↑↑f) p) ∈ lpMeasSubgroup F m p μ
[PROOFSTEP]
rw [mem_lpMeasSubgroup_iff_aeStronglyMeasurable']
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_mem_ℒp : Memℒp (↑↑f) p := memℒp_of_memℒp_trim hm (Lp.memℒp f)
⊢ AEStronglyMeasurable' m (↑↑(Memℒp.toLp ↑↑f (_ : Memℒp (↑↑f) p))) μ
[PROOFSTEP]
refine' AEStronglyMeasurable'.congr _ (Memℒp.coeFn_toLp hf_mem_ℒp).symm
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_mem_ℒp : Memℒp (↑↑f) p := memℒp_of_memℒp_trim hm (Lp.memℒp f)
⊢ AEStronglyMeasurable' m (↑↑f) μ
[PROOFSTEP]
refine' aeStronglyMeasurable'_of_aeStronglyMeasurable'_trim hm _
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
hf_mem_ℒp : Memℒp (↑↑f) p := memℒp_of_memℒp_trim hm (Lp.memℒp f)
⊢ AEStronglyMeasurable' m (↑↑f) (Measure.trim μ hm)
[PROOFSTEP]
exact Lp.aestronglyMeasurable f
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
⊢ Function.RightInverse (lpTrimToLpMeasSubgroup F p μ hm) (lpMeasSubgroupToLpTrim F p μ hm)
[PROOFSTEP]
intro f
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
⊢ lpMeasSubgroupToLpTrim F p μ hm (lpTrimToLpMeasSubgroup F p μ hm f) = f
[PROOFSTEP]
ext1
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
⊢ ↑↑(lpMeasSubgroupToLpTrim F p μ hm (lpTrimToLpMeasSubgroup F p μ hm f)) =ᵐ[Measure.trim μ hm] ↑↑f
[PROOFSTEP]
refine' ae_eq_trim_of_stronglyMeasurable hm (Lp.stronglyMeasurable _) (Lp.stronglyMeasurable _) _
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ Lp F p }
⊢ ↑↑(lpMeasSubgroupToLpTrim F p μ hm (lpTrimToLpMeasSubgroup F p μ hm f)) =ᵐ[μ] ↑↑f
[PROOFSTEP]
exact (lpMeasSubgroupToLpTrim_ae_eq hm _).trans (lpTrimToLpMeasSubgroup_ae_eq hm _)
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
⊢ Function.LeftInverse (lpTrimToLpMeasSubgroup F p μ hm) (lpMeasSubgroupToLpTrim F p μ hm)
[PROOFSTEP]
intro f
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ lpTrimToLpMeasSubgroup F p μ hm (lpMeasSubgroupToLpTrim F p μ hm f) = f
[PROOFSTEP]
ext1
[GOAL]
case a
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑(lpTrimToLpMeasSubgroup F p μ hm (lpMeasSubgroupToLpTrim F p μ hm f)) = ↑f
[PROOFSTEP]
ext1
[GOAL]
case a.h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑↑(lpTrimToLpMeasSubgroup F p μ hm (lpMeasSubgroupToLpTrim F p μ hm f)) =ᵐ[μ] ↑↑↑f
[PROOFSTEP]
rw [← lpMeasSubgroup_coe]
[GOAL]
case a.h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑↑(lpTrimToLpMeasSubgroup F p μ hm (lpMeasSubgroupToLpTrim F p μ hm f)) =ᵐ[μ] ↑↑↑f
[PROOFSTEP]
exact (lpTrimToLpMeasSubgroup_ae_eq hm _).trans (lpMeasSubgroupToLpTrim_ae_eq hm _)
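-- The companion direction, again transcribed from the trace above (same ambient `variable`
-- context as the right-inverse sketch; only the lemma name is assumed).
theorem lpMeasSubgroupToLpTrim_left_inv (hm : m ≤ m0) :
    Function.LeftInverse (lpTrimToLpMeasSubgroup F p μ hm) (lpMeasSubgroupToLpTrim F p μ hm) := by
  intro f
  ext1
  ext1
  rw [← lpMeasSubgroup_coe]
  exact (lpTrimToLpMeasSubgroup_ae_eq hm _).trans (lpMeasSubgroupToLpTrim_ae_eq hm _)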
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ lpMeasSubgroupToLpTrim F p μ hm (f + g) = lpMeasSubgroupToLpTrim F p μ hm f + lpMeasSubgroupToLpTrim F p μ hm g
[PROOFSTEP]
ext1
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑(lpMeasSubgroupToLpTrim F p μ hm (f + g)) =ᵐ[Measure.trim μ hm]
↑↑(lpMeasSubgroupToLpTrim F p μ hm f + lpMeasSubgroupToLpTrim F p μ hm g)
[PROOFSTEP]
refine' EventuallyEq.trans _ (Lp.coeFn_add _ _).symm
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑(lpMeasSubgroupToLpTrim F p μ hm (f + g)) =ᵐ[Measure.trim μ hm]
↑↑(lpMeasSubgroupToLpTrim F p μ hm f) + ↑↑(lpMeasSubgroupToLpTrim F p μ hm g)
[PROOFSTEP]
refine' ae_eq_trim_of_stronglyMeasurable hm (Lp.stronglyMeasurable _) _ _
[GOAL]
case h.refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ StronglyMeasurable (↑↑(lpMeasSubgroupToLpTrim F p μ hm f) + ↑↑(lpMeasSubgroupToLpTrim F p μ hm g))
[PROOFSTEP]
exact (Lp.stronglyMeasurable _).add (Lp.stronglyMeasurable _)
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑(lpMeasSubgroupToLpTrim F p μ hm (f + g)) =ᵐ[μ]
↑↑(lpMeasSubgroupToLpTrim F p μ hm f) + ↑↑(lpMeasSubgroupToLpTrim F p μ hm g)
[PROOFSTEP]
refine' (lpMeasSubgroupToLpTrim_ae_eq hm _).trans _
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑↑(f + g) =ᵐ[μ] ↑↑(lpMeasSubgroupToLpTrim F p μ hm f) + ↑↑(lpMeasSubgroupToLpTrim F p μ hm g)
[PROOFSTEP]
refine'
EventuallyEq.trans _
(EventuallyEq.add (lpMeasSubgroupToLpTrim_ae_eq hm f).symm (lpMeasSubgroupToLpTrim_ae_eq hm g).symm)
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑↑(f + g) =ᵐ[μ] fun x => ↑↑↑f x + ↑↑↑g x
[PROOFSTEP]
refine' (Lp.coeFn_add _ _).trans _
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑↑f + ↑↑↑g =ᵐ[μ] fun x => ↑↑↑f x + ↑↑↑g x
[PROOFSTEP]
simp_rw [lpMeasSubgroup_coe]
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑↑f + ↑↑↑g =ᵐ[μ] fun x => ↑↑↑f x + ↑↑↑g x
[PROOFSTEP]
exact eventually_of_forall fun x => by rfl
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
x : α
⊢ (↑↑↑f + ↑↑↑g) x = (fun x => ↑↑↑f x + ↑↑↑g x) x
[PROOFSTEP]
rfl
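-- Reconstruction of the additivity proof traced above. The name `lpMeasSubgroupToLpTrim_add`
-- is the one invoked by the subtraction step later in this file; the statement and tactics
-- are transcribed from the trace. After `ae_eq_trim_of_stronglyMeasurable` there are two
-- goals (measurability, then the a.e. equality); the flat script closes them in order.
theorem lpMeasSubgroupToLpTrim_add (hm : m ≤ m0) (f g : lpMeasSubgroup F m p μ) :
    lpMeasSubgroupToLpTrim F p μ hm (f + g) =
      lpMeasSubgroupToLpTrim F p μ hm f + lpMeasSubgroupToLpTrim F p μ hm g := by
  ext1
  refine' EventuallyEq.trans _ (Lp.coeFn_add _ _).symm
  refine' ae_eq_trim_of_stronglyMeasurable hm (Lp.stronglyMeasurable _) _ _
  exact (Lp.stronglyMeasurable _).add (Lp.stronglyMeasurable _)
  refine' (lpMeasSubgroupToLpTrim_ae_eq hm _).trans _
  refine'
    EventuallyEq.trans _
      (EventuallyEq.add (lpMeasSubgroupToLpTrim_ae_eq hm f).symm (lpMeasSubgroupToLpTrim_ae_eq hm g).symm)
  refine' (Lp.coeFn_add _ _).trans _
  simp_rw [lpMeasSubgroup_coe]
  exact eventually_of_forall fun x => by rfl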
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ lpMeasSubgroupToLpTrim F p μ hm (-f) = -lpMeasSubgroupToLpTrim F p μ hm f
[PROOFSTEP]
ext1
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑(lpMeasSubgroupToLpTrim F p μ hm (-f)) =ᵐ[Measure.trim μ hm] ↑↑(-lpMeasSubgroupToLpTrim F p μ hm f)
[PROOFSTEP]
refine' EventuallyEq.trans _ (Lp.coeFn_neg _).symm
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑(lpMeasSubgroupToLpTrim F p μ hm (-f)) =ᵐ[Measure.trim μ hm] -↑↑(lpMeasSubgroupToLpTrim F p μ hm f)
[PROOFSTEP]
refine' ae_eq_trim_of_stronglyMeasurable hm (Lp.stronglyMeasurable _) _ _
[GOAL]
case h.refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ StronglyMeasurable (-↑↑(lpMeasSubgroupToLpTrim F p μ hm f))
[PROOFSTEP]
exact @StronglyMeasurable.neg _ _ _ m _ _ _ (Lp.stronglyMeasurable _)
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑(lpMeasSubgroupToLpTrim F p μ hm (-f)) =ᵐ[μ] -↑↑(lpMeasSubgroupToLpTrim F p μ hm f)
[PROOFSTEP]
refine' (lpMeasSubgroupToLpTrim_ae_eq hm _).trans _
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑↑(-f) =ᵐ[μ] -↑↑(lpMeasSubgroupToLpTrim F p μ hm f)
[PROOFSTEP]
refine' EventuallyEq.trans _ (EventuallyEq.neg (lpMeasSubgroupToLpTrim_ae_eq hm f).symm)
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ↑↑↑(-f) =ᵐ[μ] fun x => -↑↑↑f x
[PROOFSTEP]
refine' (Lp.coeFn_neg _).trans _
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ -↑↑↑f =ᵐ[μ] fun x => -↑↑↑f x
[PROOFSTEP]
simp_rw [lpMeasSubgroup_coe]
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ -↑↑↑f =ᵐ[μ] fun x => -↑↑↑f x
[PROOFSTEP]
exact eventually_of_forall fun x => by rfl
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
x : α
⊢ (-↑↑↑f) x = (fun x => -↑↑↑f x) x
[PROOFSTEP]
rfl
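-- The negation analogue, following the same pattern as the additivity sketch above. The
-- name is the one used by the later subtraction rewrite; tactics are transcribed verbatim.
theorem lpMeasSubgroupToLpTrim_neg (hm : m ≤ m0) (f : lpMeasSubgroup F m p μ) :
    lpMeasSubgroupToLpTrim F p μ hm (-f) = -lpMeasSubgroupToLpTrim F p μ hm f := by
  ext1
  refine' EventuallyEq.trans _ (Lp.coeFn_neg _).symm
  refine' ae_eq_trim_of_stronglyMeasurable hm (Lp.stronglyMeasurable _) _ _
  exact @StronglyMeasurable.neg _ _ _ m _ _ _ (Lp.stronglyMeasurable _)
  refine' (lpMeasSubgroupToLpTrim_ae_eq hm _).trans _
  refine' EventuallyEq.trans _ (EventuallyEq.neg (lpMeasSubgroupToLpTrim_ae_eq hm f).symm)
  refine' (Lp.coeFn_neg _).trans _
  simp_rw [lpMeasSubgroup_coe]
  exact eventually_of_forall fun x => by rfl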
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ lpMeasSubgroupToLpTrim F p μ hm (f - g) = lpMeasSubgroupToLpTrim F p μ hm f - lpMeasSubgroupToLpTrim F p μ hm g
[PROOFSTEP]
rw [sub_eq_add_neg, sub_eq_add_neg, lpMeasSubgroupToLpTrim_add, lpMeasSubgroupToLpTrim_neg]
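-- With additivity and negation available, compatibility with subtraction is a one-line
-- rewrite, exactly as in the trace. The name `lpMeasSubgroupToLpTrim_sub` reappears in the
-- distance lemma further below, so it is taken from this file rather than assumed.
theorem lpMeasSubgroupToLpTrim_sub (hm : m ≤ m0) (f g : lpMeasSubgroup F m p μ) :
    lpMeasSubgroupToLpTrim F p μ hm (f - g) =
      lpMeasSubgroupToLpTrim F p μ hm f - lpMeasSubgroupToLpTrim F p μ hm g := by
  rw [sub_eq_add_neg, sub_eq_add_neg, lpMeasSubgroupToLpTrim_add, lpMeasSubgroupToLpTrim_neg]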
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
c : 𝕜
f : { x // x ∈ lpMeas F 𝕜 m p μ }
⊢ lpMeasToLpTrim F 𝕜 p μ hm (c • f) = c • lpMeasToLpTrim F 𝕜 p μ hm f
[PROOFSTEP]
ext1
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
c : 𝕜
f : { x // x ∈ lpMeas F 𝕜 m p μ }
⊢ ↑↑(lpMeasToLpTrim F 𝕜 p μ hm (c • f)) =ᵐ[Measure.trim μ hm] ↑↑(c • lpMeasToLpTrim F 𝕜 p μ hm f)
[PROOFSTEP]
refine' EventuallyEq.trans _ (Lp.coeFn_smul _ _).symm
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
c : 𝕜
f : { x // x ∈ lpMeas F 𝕜 m p μ }
⊢ ↑↑(lpMeasToLpTrim F 𝕜 p μ hm (c • f)) =ᵐ[Measure.trim μ hm] c • ↑↑(lpMeasToLpTrim F 𝕜 p μ hm f)
[PROOFSTEP]
refine' ae_eq_trim_of_stronglyMeasurable hm (Lp.stronglyMeasurable _) _ _
[GOAL]
case h.refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
c : 𝕜
f : { x // x ∈ lpMeas F 𝕜 m p μ }
⊢ StronglyMeasurable (c • ↑↑(lpMeasToLpTrim F 𝕜 p μ hm f))
[PROOFSTEP]
exact (Lp.stronglyMeasurable _).const_smul c
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
c : 𝕜
f : { x // x ∈ lpMeas F 𝕜 m p μ }
⊢ ↑↑(lpMeasToLpTrim F 𝕜 p μ hm (c • f)) =ᵐ[μ] c • ↑↑(lpMeasToLpTrim F 𝕜 p μ hm f)
[PROOFSTEP]
refine' (lpMeasToLpTrim_ae_eq hm _).trans _
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
c : 𝕜
f : { x // x ∈ lpMeas F 𝕜 m p μ }
⊢ ↑↑↑(c • f) =ᵐ[μ] c • ↑↑(lpMeasToLpTrim F 𝕜 p μ hm f)
[PROOFSTEP]
refine' (Lp.coeFn_smul _ _).trans _
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
c : 𝕜
f : { x // x ∈ lpMeas F 𝕜 m p μ }
⊢ c • ↑↑↑f =ᵐ[μ] c • ↑↑(lpMeasToLpTrim F 𝕜 p μ hm f)
[PROOFSTEP]
refine' (lpMeasToLpTrim_ae_eq hm f).mono fun x hx => _
[GOAL]
case h.refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
c : 𝕜
f : { x // x ∈ lpMeas F 𝕜 m p μ }
x : α
hx : ↑↑(lpMeasToLpTrim F 𝕜 p μ hm f) x = ↑↑↑f x
⊢ (c • ↑↑↑f) x = (c • ↑↑(lpMeasToLpTrim F 𝕜 p μ hm f)) x
[PROOFSTEP]
rw [Pi.smul_apply, Pi.smul_apply, hx]
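-- Scalar multiplication is preserved by the `lpMeas` (submodule) version of the map; this
-- is the 𝕜-linear refinement of the subgroup statements above. Reconstructed from the
-- trace; the lemma name is an assumption.
theorem lpMeasToLpTrim_smul (hm : m ≤ m0) (c : 𝕜) (f : lpMeas F 𝕜 m p μ) :
    lpMeasToLpTrim F 𝕜 p μ hm (c • f) = c • lpMeasToLpTrim F 𝕜 p μ hm f := by
  ext1
  refine' EventuallyEq.trans _ (Lp.coeFn_smul _ _).symm
  refine' ae_eq_trim_of_stronglyMeasurable hm (Lp.stronglyMeasurable _) _ _
  exact (Lp.stronglyMeasurable _).const_smul c
  refine' (lpMeasToLpTrim_ae_eq hm _).trans _
  refine' (Lp.coeFn_smul _ _).trans _
  refine' (lpMeasToLpTrim_ae_eq hm f).mono fun x hx => _
  rw [Pi.smul_apply, Pi.smul_apply, hx]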
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hp : Fact (1 ≤ p)
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ‖lpMeasSubgroupToLpTrim F p μ hm f‖ = ‖f‖
[PROOFSTEP]
rw [Lp.norm_def, snorm_trim hm (Lp.stronglyMeasurable _), snorm_congr_ae (lpMeasSubgroupToLpTrim_ae_eq hm _),
lpMeasSubgroup_coe, ← Lp.norm_def]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hp : Fact (1 ≤ p)
hm : m ≤ m0
f : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ ‖↑f‖ = ‖f‖
[PROOFSTEP]
congr
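-- For 1 ≤ p the map is norm-preserving: `snorm` is unchanged by trimming on strongly
-- measurable representatives. Transcribed from the trace; the name
-- `lpMeasSubgroupToLpTrim_norm_map` is the one used in the distance lemma just below.
theorem lpMeasSubgroupToLpTrim_norm_map [hp : Fact (1 ≤ p)] (hm : m ≤ m0)
    (f : lpMeasSubgroup F m p μ) :
    ‖lpMeasSubgroupToLpTrim F p μ hm f‖ = ‖f‖ := by
  rw [Lp.norm_def, snorm_trim hm (Lp.stronglyMeasurable _),
    snorm_congr_ae (lpMeasSubgroupToLpTrim_ae_eq hm _), lpMeasSubgroup_coe, ← Lp.norm_def]
  congr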
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹⁰ : IsROrC 𝕜
inst✝⁹ : NormedAddCommGroup E'
inst✝⁸ : InnerProductSpace 𝕜 E'
inst✝⁷ : CompleteSpace E'
inst✝⁶ : NormedSpace ℝ E'
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 F
inst✝³ : NormedAddCommGroup F'
inst✝² : NormedSpace 𝕜 F'
inst✝¹ : NormedSpace ℝ F'
inst✝ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hp : Fact (1 ≤ p)
hm : m ≤ m0
f g : { x // x ∈ lpMeasSubgroup F m p μ }
⊢ dist (lpMeasSubgroupToLpTrim F p μ hm f) (lpMeasSubgroupToLpTrim F p μ hm g) = dist f g
[PROOFSTEP]
rw [dist_eq_norm, ← lpMeasSubgroupToLpTrim_sub, lpMeasSubgroupToLpTrim_norm_map, dist_eq_norm]
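-- Norm preservation together with compatibility with subtraction makes the map an isometry
-- for the distance; this is what later lets completeness be transferred. Transcribed from
-- the trace; the lemma name is assumed.
theorem lpMeasSubgroupToLpTrim_dist_map [hp : Fact (1 ≤ p)] (hm : m ≤ m0)
    (f g : lpMeasSubgroup F m p μ) :
    dist (lpMeasSubgroupToLpTrim F p μ hm f) (lpMeasSubgroupToLpTrim F p μ hm g) = dist f g := by
  rw [dist_eq_norm, ← lpMeasSubgroupToLpTrim_sub, lpMeasSubgroupToLpTrim_norm_map, dist_eq_norm]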
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : Fact (m ≤ m0)
inst✝ : CompleteSpace F
hp : Fact (1 ≤ p)
⊢ CompleteSpace { x // x ∈ lpMeasSubgroup F m p μ }
[PROOFSTEP]
rw [(lpMeasSubgroupToLpTrimIso F p μ hm.elim).completeSpace_iff]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : Fact (m ≤ m0)
inst✝ : CompleteSpace F
hp : Fact (1 ≤ p)
⊢ CompleteSpace { x // x ∈ Lp F p }
[PROOFSTEP]
infer_instance
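-- Completeness of `lpMeasSubgroup` is transferred through the isometric equivalence with
-- the trimmed `Lp` space (`lpMeasSubgroupToLpTrimIso`, as named in the `rw` above). A
-- sketch of the instance, transcribed from the trace:
instance [hm : Fact (m ≤ m0)] [CompleteSpace F] [hp : Fact (1 ≤ p)] :
    CompleteSpace (lpMeasSubgroup F m p μ) := by
  rw [(lpMeasSubgroupToLpTrimIso F p μ hm.elim).completeSpace_iff]
  infer_instance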
-- For now just no-lint this; Lean 4's tree-based logging will make this easier to debug.
-- One possible change might be to generalize `𝕜` from `IsROrC` to `NormedField`, as this
-- result may well hold there.
-- Porting note: removed @[nolint fails_quickly]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : Fact (m ≤ m0)
inst✝ : CompleteSpace F
hp : Fact (1 ≤ p)
⊢ CompleteSpace { x // x ∈ lpMeas F 𝕜 m p μ }
[PROOFSTEP]
rw [(lpMeasSubgroupToLpMeasIso F 𝕜 p μ).symm.completeSpace_iff]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hm : Fact (m ≤ m0)
inst✝ : CompleteSpace F
hp : Fact (1 ≤ p)
⊢ CompleteSpace { x // x ∈ lpMeasSubgroup F m p μ }
[PROOFSTEP]
infer_instance
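-- The same argument, routed through `lpMeasSubgroupToLpMeasIso`, gives completeness of the
-- submodule version (sketch transcribed from the trace):
instance [hm : Fact (m ≤ m0)] [CompleteSpace F] [hp : Fact (1 ≤ p)] :
    CompleteSpace (lpMeas F 𝕜 m p μ) := by
  rw [(lpMeasSubgroupToLpMeasIso F 𝕜 p μ).symm.completeSpace_iff]
  infer_instance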
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hp : Fact (1 ≤ p)
inst✝ : CompleteSpace F
hm : m ≤ m0
⊢ IsComplete {f | AEStronglyMeasurable' m (↑↑f) μ}
[PROOFSTEP]
rw [← completeSpace_coe_iff_isComplete]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hp : Fact (1 ≤ p)
inst✝ : CompleteSpace F
hm : m ≤ m0
⊢ CompleteSpace ↑{f | AEStronglyMeasurable' m (↑↑f) μ}
[PROOFSTEP]
haveI : Fact (m ≤ m0) := ⟨hm⟩
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hp : Fact (1 ≤ p)
inst✝ : CompleteSpace F
hm : m ≤ m0
this : Fact (m ≤ m0)
⊢ CompleteSpace ↑{f | AEStronglyMeasurable' m (↑↑f) μ}
[PROOFSTEP]
change CompleteSpace (lpMeasSubgroup F m p μ)
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
ι : Type u_6
m m0 : MeasurableSpace α
μ : Measure α
hp : Fact (1 ≤ p)
inst✝ : CompleteSpace F
hm : m ≤ m0
this : Fact (m ≤ m0)
⊢ CompleteSpace { x // x ∈ lpMeasSubgroup F m p μ }
[PROOFSTEP]
infer_instance
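-- Restated at the level of sets: the subset of `Lp F p μ` of functions with an
-- `m`-strongly-measurable representative is complete. The name is assumed; the `Fact`
-- bundling and the `change` to `lpMeasSubgroup` are exactly as in the trace.
theorem isComplete_aeStronglyMeasurable' [hp : Fact (1 ≤ p)] [CompleteSpace F] (hm : m ≤ m0) :
    IsComplete {f : Lp F p μ | AEStronglyMeasurable' m f μ} := by
  rw [← completeSpace_coe_iff_isComplete]
  haveI : Fact (m ≤ m0) := ⟨hm⟩
  change CompleteSpace (lpMeasSubgroup F m p μ)
  infer_instance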
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
m m0 : MeasurableSpace α
μ✝ : Measure α
one_le_p : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
s : Set α
μ : Measure α
hs : MeasurableSet s
hμs : ↑↑(Measure.trim μ hm) s ≠ ⊤
c : F
⊢ ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (indicatorConstLp p hs hμs c)) =
indicatorConstLp p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c
[PROOFSTEP]
ext1
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
m m0 : MeasurableSpace α
μ✝ : Measure α
one_le_p : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
s : Set α
μ : Measure α
hs : MeasurableSet s
hμs : ↑↑(Measure.trim μ hm) s ≠ ⊤
c : F
⊢ ↑↑↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (indicatorConstLp p hs hμs c)) =ᵐ[μ]
↑↑(indicatorConstLp p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
[PROOFSTEP]
rw [← lpMeas_coe]
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
m m0 : MeasurableSpace α
μ✝ : Measure α
one_le_p : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
s : Set α
μ : Measure α
hs : MeasurableSet s
hμs : ↑↑(Measure.trim μ hm) s ≠ ⊤
c : F
⊢ ↑↑↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (indicatorConstLp p hs hμs c)) =ᵐ[μ]
↑↑(indicatorConstLp p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
[PROOFSTEP]
change lpTrimToLpMeas F ℝ p μ hm (indicatorConstLp p hs hμs c) =ᵐ[μ] (indicatorConstLp p _ _ c : α → F)
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
m m0 : MeasurableSpace α
μ✝ : Measure α
one_le_p : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
s : Set α
μ : Measure α
hs : MeasurableSet s
hμs : ↑↑(Measure.trim μ hm) s ≠ ⊤
c : F
⊢ ↑↑↑(lpTrimToLpMeas F ℝ p μ hm (indicatorConstLp p hs hμs c)) =ᵐ[μ]
↑↑(indicatorConstLp p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
[PROOFSTEP]
refine' (lpTrimToLpMeas_ae_eq hm _).trans _
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
m m0 : MeasurableSpace α
μ✝ : Measure α
one_le_p : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
s : Set α
μ : Measure α
hs : MeasurableSet s
hμs : ↑↑(Measure.trim μ hm) s ≠ ⊤
c : F
⊢ ↑↑(indicatorConstLp p hs hμs c) =ᵐ[μ] ↑↑(indicatorConstLp p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
[PROOFSTEP]
exact (ae_eq_of_ae_eq_trim indicatorConstLp_coeFn).trans indicatorConstLp_coeFn.symm
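-- How the linear isometry equivalence `lpMeasToLpTrimLie` interacts with indicators: its
-- inverse sends the indicator in the trimmed `Lp` space to the corresponding indicator
-- over `μ`. The name is the one used later in the induction proof; the statement below is
-- a reconstruction, and in particular the two side-condition proof terms on the right-hand
-- side (`hm s hs` and the `≠ ⊤` bound obtained via `le_trim`) are one possible way to
-- discharge the obligations the goal printer elides as `_`.
theorem lpMeasToLpTrimLie_symm_indicator [one_le_p : Fact (1 ≤ p)] [NormedSpace ℝ F]
    {hm : m ≤ m0} {s : Set α} {μ : Measure α} (hs : MeasurableSet[m] s)
    (hμs : μ.trim hm s ≠ ∞) (c : F) :
    ((lpMeasToLpTrimLie F ℝ p μ hm).symm (indicatorConstLp p hs hμs c) : Lp F p μ) =
      indicatorConstLp p (hm s hs) ((le_trim hm).trans_lt hμs.lt_top).ne c := by
  ext1
  rw [← lpMeas_coe]
  change lpTrimToLpMeas F ℝ p μ hm (indicatorConstLp p hs hμs c) =ᵐ[μ] (indicatorConstLp p _ _ c : α → F)
  refine' (lpTrimToLpMeas_ae_eq hm _).trans _
  exact (ae_eq_of_ae_eq_trim indicatorConstLp_coeFn).trans indicatorConstLp_coeFn.symm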
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
one_le_p : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
f : α → F
hf : Memℒp f p
⊢ ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf)) = Memℒp.toLp f (_ : Memℒp f p)
[PROOFSTEP]
ext1
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
one_le_p : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
f : α → F
hf : Memℒp f p
⊢ ↑↑↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf)) =ᵐ[μ]
↑↑(Memℒp.toLp f (_ : Memℒp f p))
[PROOFSTEP]
rw [← lpMeas_coe]
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
one_le_p : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
f : α → F
hf : Memℒp f p
⊢ ↑↑↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf)) =ᵐ[μ]
↑↑(Memℒp.toLp f (_ : Memℒp f p))
[PROOFSTEP]
refine' (lpTrimToLpMeas_ae_eq hm _).trans _
[GOAL]
case h
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹¹ : IsROrC 𝕜
inst✝¹⁰ : NormedAddCommGroup E'
inst✝⁹ : InnerProductSpace 𝕜 E'
inst✝⁸ : CompleteSpace E'
inst✝⁷ : NormedSpace ℝ E'
inst✝⁶ : NormedAddCommGroup F
inst✝⁵ : NormedSpace 𝕜 F
inst✝⁴ : NormedAddCommGroup F'
inst✝³ : NormedSpace 𝕜 F'
inst✝² : NormedSpace ℝ F'
inst✝¹ : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
one_le_p : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
f : α → F
hf : Memℒp f p
⊢ ↑↑(Memℒp.toLp f hf) =ᵐ[μ] ↑↑(Memℒp.toLp f (_ : Memℒp f p))
[PROOFSTEP]
exact (ae_eq_of_ae_eq_trim (Memℒp.coeFn_toLp hf)).trans (Memℒp.coeFn_toLp _).symm
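-- The inverse of `lpMeasToLpTrimLie` also commutes with `Memℒp.toLp`: this is exactly the
-- auxiliary fact `h_eq` invoked (as `lpMeasToLpTrimLie_symm_toLp`) in the induction proof
-- that follows. Sketch transcribed from the trace:
theorem lpMeasToLpTrimLie_symm_toLp [one_le_p : Fact (1 ≤ p)] [NormedSpace ℝ F] (hm : m ≤ m0)
    (f : α → F) (hf : Memℒp f p (μ.trim hm)) :
    ((lpMeasToLpTrimLie F ℝ p μ hm).symm (hf.toLp f) : Lp F p μ) =
      (memℒp_of_memℒp_trim hm hf).toLp f := by
  ext1
  rw [← lpMeas_coe]
  refine' (lpTrimToLpMeas_ae_eq hm _).trans _
  exact (ae_eq_of_ae_eq_trim (Memℒp.coeFn_toLp hf)).trans (Memℒp.coeFn_toLp _).symm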
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
⊢ ∀ (f : { x // x ∈ Lp F p }), AEStronglyMeasurable' m (↑↑f) μ → P f
[PROOFSTEP]
intro f hf
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
⊢ P f
[PROOFSTEP]
let f' := (⟨f, hf⟩ : lpMeas F ℝ m p μ)
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
⊢ P f
[PROOFSTEP]
let g := lpMeasToLpTrimLie F ℝ p μ hm f'
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
⊢ P f
[PROOFSTEP]
have hfg : f' = (lpMeasToLpTrimLie F ℝ p μ hm).symm g := by simp only [LinearIsometryEquiv.symm_apply_apply]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
⊢ f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
[PROOFSTEP]
simp only [LinearIsometryEquiv.symm_apply_apply]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
⊢ P f
[PROOFSTEP]
change P ↑f'
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
⊢ P ↑f'
[PROOFSTEP]
rw [hfg]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
⊢ P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g)
[PROOFSTEP]
refine' @Lp.induction α F m _ p (μ.trim hm) _ hp_ne_top (fun g => P ((lpMeasToLpTrimLie F ℝ p μ hm).symm g)) _ _ _ g
[GOAL]
case refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
⊢ ∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑(Measure.trim μ hm) s < ⊤),
(fun g => P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g))
↑(simpleFunc.indicatorConst p hs (_ : ↑↑(Measure.trim μ hm) s ≠ ⊤) c)
[PROOFSTEP]
intro b t ht hμt
[GOAL]
case refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
b : F
t : Set α
ht : MeasurableSet t
hμt : ↑↑(Measure.trim μ hm) t < ⊤
⊢ P
↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm))
↑(simpleFunc.indicatorConst p ht (_ : ↑↑(Measure.trim μ hm) t ≠ ⊤) b))
[PROOFSTEP]
rw [@Lp.simpleFunc.coe_indicatorConst _ _ m, lpMeasToLpTrimLie_symm_indicator ht hμt.ne b]
[GOAL]
case refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
b : F
t : Set α
ht : MeasurableSet t
hμt : ↑↑(Measure.trim μ hm) t < ⊤
⊢ P (indicatorConstLp p (_ : MeasurableSet t) (_ : ↑↑μ t ≠ ⊤) b)
[PROOFSTEP]
have hμt' : μ t < ∞ := (le_trim hm).trans_lt hμt
[GOAL]
case refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
b : F
t : Set α
ht : MeasurableSet t
hμt : ↑↑(Measure.trim μ hm) t < ⊤
hμt' : ↑↑μ t < ⊤
⊢ P (indicatorConstLp p (_ : MeasurableSet t) (_ : ↑↑μ t ≠ ⊤) b)
[PROOFSTEP]
specialize h_ind b ht hμt'
[GOAL]
case refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
b : F
t : Set α
ht : MeasurableSet t
hμt : ↑↑(Measure.trim μ hm) t < ⊤
hμt' : ↑↑μ t < ⊤
h_ind : P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet t) (_ : ↑↑μ t ≠ ⊤) b)
⊢ P (indicatorConstLp p (_ : MeasurableSet t) (_ : ↑↑μ t ≠ ⊤) b)
[PROOFSTEP]
rwa [Lp.simpleFunc.coe_indicatorConst] at h_ind
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
⊢ ∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
Disjoint (Function.support f) (Function.support g) →
(fun g => P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g)) (Memℒp.toLp f hf) →
(fun g => P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g)) (Memℒp.toLp g hg) →
(fun g => P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g))
(Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
intro f g hf hg h_disj hfP hgP
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f✝, property := hf✝ }
g✝ : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g✝
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf))
hgP : P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp g hg))
⊢ P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf + Memℒp.toLp g hg))
[PROOFSTEP]
rw [LinearIsometryEquiv.map_add]
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f✝, property := hf✝ }
g✝ : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g✝
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf))
hgP : P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp g hg))
⊢ P
↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf) +
↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp g hg))
[PROOFSTEP]
push_cast
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f✝, property := hf✝ }
g✝ : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g✝
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf))
hgP : P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp g hg))
⊢ P
(↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf)) +
↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp g hg)))
[PROOFSTEP]
have h_eq :
∀ (f : α → F) (hf : Memℒp f p (μ.trim hm)),
((lpMeasToLpTrimLie F ℝ p μ hm).symm (Memℒp.toLp f hf) : Lp F p μ) = (memℒp_of_memℒp_trim hm hf).toLp f :=
lpMeasToLpTrimLie_symm_toLp hm
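-- `h_eq` identifies the pullback along the isometry of `Memℒp.toLp` (taken w.r.t. `μ.trim hm`)
-- with `Memℒp.toLp` for the untrimmed measure, via `memℒp_of_memℒp_trim`.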
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f✝, property := hf✝ }
g✝ : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g✝
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf))
hgP : P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp g hg))
h_eq :
∀ (f : α → F) (hf : Memℒp f p),
↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf)) = Memℒp.toLp f (_ : Memℒp f p)
⊢ P
(↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf)) +
↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp g hg)))
[PROOFSTEP]
rw [h_eq f hf] at hfP ⊢
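-- Rewriting with `h_eq` replaces the coerced image of `f` by a plain `Memℒp.toLp f _` in both
-- `hfP` and the goal; the next step does the same for `g`.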
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f✝, property := hf✝ }
g✝ : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g✝
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P (Memℒp.toLp f (_ : Memℒp f p))
hgP : P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp g hg))
h_eq :
∀ (f : α → F) (hf : Memℒp f p),
↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf)) = Memℒp.toLp f (_ : Memℒp f p)
⊢ P (Memℒp.toLp f (_ : Memℒp f p) + ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp g hg)))
[PROOFSTEP]
rw [h_eq g hg] at hgP ⊢
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f✝, property := hf✝ }
g✝ : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g✝
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P (Memℒp.toLp f (_ : Memℒp f p))
hgP : P (Memℒp.toLp g (_ : Memℒp g p))
h_eq :
∀ (f : α → F) (hf : Memℒp f p),
↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) (Memℒp.toLp f hf)) = Memℒp.toLp f (_ : Memℒp f p)
⊢ P (Memℒp.toLp f (_ : Memℒp f p) + Memℒp.toLp g (_ : Memℒp g p))
[PROOFSTEP]
exact
h_add (memℒp_of_memℒp_trim hm hf) (memℒp_of_memℒp_trim hm hg)
(aeStronglyMeasurable'_of_aeStronglyMeasurable'_trim hm hf.aestronglyMeasurable)
(aeStronglyMeasurable'_of_aeStronglyMeasurable'_trim hm hg.aestronglyMeasurable) h_disj hfP hgP
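-- The goal is now exactly an instance of `h_add`, applied to the untrimmed `Memℒp` facts, with
-- the measurability hypotheses transferred from the trimmed measure.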
[GOAL]
case refine'_3
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
⊢ IsClosed {f | (fun g => P ↑(↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g)) f}
[PROOFSTEP]
change IsClosed ((lpMeasToLpTrimLie F ℝ p μ hm).symm ⁻¹' {g : lpMeas F ℝ m p μ | P ↑g})
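-- Restate the set as the preimage of `{g | P ↑g}` under the inverse isometry; the two
-- descriptions are definitionally equal, so `change` succeeds.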
[GOAL]
case refine'_3
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
f' : { x // x ∈ lpMeas F ℝ m p μ } := { val := f, property := hf }
g : { x // x ∈ Lp F p } := ↑(lpMeasToLpTrimLie F ℝ p μ hm) f'
hfg : f' = ↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) g
⊢ IsClosed (↑(LinearIsometryEquiv.symm (lpMeasToLpTrimLie F ℝ p μ hm)) ⁻¹' {g | P ↑g})
[PROOFSTEP]
exact IsClosed.preimage (LinearIsometryEquiv.continuous _) h_closed
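-- Preimages of closed sets under continuous maps are closed, and a `LinearIsometryEquiv` is
-- continuous.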
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
⊢ ∀ (f : { x // x ∈ Lp F p }), AEStronglyMeasurable' m (↑↑f) μ → P f
[PROOFSTEP]
intro f hf
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
⊢ P f
[PROOFSTEP]
suffices h_add_ae :
∀ ⦃f g⦄,
∀ hf : Memℒp f p μ,
∀ hg : Memℒp g p μ,
∀ _ : AEStronglyMeasurable' m f μ,
∀ _ : AEStronglyMeasurable' m g μ,
Disjoint (Function.support f) (Function.support g) →
P (hf.toLp f) →
P (hg.toLp g) →
              P (hf.toLp f + hg.toLp g)
-- Porting note: `P` should be an explicit argument to `Lp.induction_stronglyMeasurable_aux`, but
-- it isn't?
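-- `h_add_ae` weakens `h_add`: the strong measurability assumptions on `f` and `g` are relaxed to
-- `AEStronglyMeasurable' m`, which is the form used by the auxiliary induction lemma below.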
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
h_add_ae :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
⊢ P f
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
⊢ ∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
exact Lp.induction_stronglyMeasurable_aux hm hp_ne_top h_ind h_add_ae h_closed f hf
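-- With `h_add_ae` available, the a.e.-measurable version of the induction principle closes the
-- main goal; it remains to prove `h_add_ae` itself.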
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f : { x // x ∈ Lp F p }
hf : AEStronglyMeasurable' m (↑↑f) μ
⊢ ∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
AEStronglyMeasurable' m f μ →
AEStronglyMeasurable' m g μ →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
intro f g hf hg hfm hgm h_disj hPf hPg
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
let s_f : Set α := Function.support (hfm.mk f)
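-- `s_f` is the support of the `m`-strongly measurable representative of `f`; the next steps
-- record that it is measurable and agrees a.e. with `Function.support f`.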
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hs_f : MeasurableSet[m] s_f := hfm.stronglyMeasurable_mk.measurableSet_support
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hs_f_eq : s_f =ᵐ[μ] Function.support f := hfm.ae_eq_mk.symm.support
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
let s_g : Set α := Function.support (hgm.mk g)
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hs_g : MeasurableSet[m] s_g := hgm.stronglyMeasurable_mk.measurableSet_support
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hs_g_eq : s_g =ᵐ[μ] Function.support g := hgm.ae_eq_mk.symm.support
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have h_inter_empty : (s_f ∩ s_g : Set α) =ᵐ[μ] (∅ : Set α) :=
by
refine' (hs_f_eq.inter hs_g_eq).trans _
suffices Function.support f ∩ Function.support g = ∅ by rw [this]
exact Set.disjoint_iff_inter_eq_empty.mp h_disj
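-- Because `f` and `g` have disjoint supports, the supports of their representatives intersect
-- only in a `μ`-null set.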
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
⊢ s_f ∩ s_g =ᵐ[μ] ∅
[PROOFSTEP]
refine' (hs_f_eq.inter hs_g_eq).trans _
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
⊢ Function.support f ∩ Function.support g =ᵐ[μ] ∅
[PROOFSTEP]
suffices Function.support f ∩ Function.support g = ∅ by rw [this]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
this : Function.support f ∩ Function.support g = ∅
⊢ Function.support f ∩ Function.support g =ᵐ[μ] ∅
[PROOFSTEP]
rw [this]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
⊢ Function.support f ∩ Function.support g = ∅
[PROOFSTEP]
exact Set.disjoint_iff_inter_eq_empty.mp h_disj
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
let f' := (s_f \ s_g).indicator (hfm.mk f)
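-- Cut the representative of `f` down to `s_f \ s_g`: this changes `f` only on a null set but
-- makes its support literally disjoint from the set used for `g` below.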
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hff' : f =ᵐ[μ] f' :=
by
have : s_f \ s_g =ᵐ[μ] s_f := by
rw [← Set.diff_inter_self_eq_diff, Set.inter_comm]
refine' ((ae_eq_refl s_f).diff h_inter_empty).trans _
rw [Set.diff_empty]
refine' ((indicator_ae_eq_of_ae_eq_set this).trans _).symm
rw [Set.indicator_support]
exact hfm.ae_eq_mk.symm
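-- Hence `f` agrees a.e. with `f'`: dropping `s_g` removes only a null set from `s_f`, and the
-- indicator of the representative over its own support is the representative itself.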
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
⊢ f =ᵐ[μ] f'
[PROOFSTEP]
have : s_f \ s_g =ᵐ[μ] s_f := by
rw [← Set.diff_inter_self_eq_diff, Set.inter_comm]
refine' ((ae_eq_refl s_f).diff h_inter_empty).trans _
rw [Set.diff_empty]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
⊢ s_f \ s_g =ᵐ[μ] s_f
[PROOFSTEP]
rw [← Set.diff_inter_self_eq_diff, Set.inter_comm]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
⊢ s_f \ (s_f ∩ s_g) =ᵐ[μ] s_f
[PROOFSTEP]
refine' ((ae_eq_refl s_f).diff h_inter_empty).trans _
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
⊢ s_f \ ∅ =ᵐ[μ] s_f
[PROOFSTEP]
rw [Set.diff_empty]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
this : s_f \ s_g =ᵐ[μ] s_f
⊢ f =ᵐ[μ] f'
[PROOFSTEP]
refine' ((indicator_ae_eq_of_ae_eq_set this).trans _).symm
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
this : s_f \ s_g =ᵐ[μ] s_f
⊢ Set.indicator s_f (AEStronglyMeasurable'.mk f hfm) =ᵐ[μ] f
[PROOFSTEP]
rw [Set.indicator_support]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
this : s_f \ s_g =ᵐ[μ] s_f
⊢ AEStronglyMeasurable'.mk f hfm =ᵐ[μ] f
[PROOFSTEP]
exact hfm.ae_eq_mk.symm
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hf'_meas : StronglyMeasurable[m] f' := hfm.stronglyMeasurable_mk.indicator (hs_f.diff hs_g)
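-- `f'` is strongly measurable w.r.t. `m`, being the indicator of the `m`-measurable set
-- `s_f \ s_g` applied to a strongly measurable function.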
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hf'_Lp : Memℒp f' p μ := hf.ae_eq hff'
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
let g' := (s_g \ s_f).indicator (hgm.mk g)
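-- The symmetric construction for `g`: restrict its representative to `s_g \ s_f`.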
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hgg' : g =ᵐ[μ] g' :=
by
have : s_g \ s_f =ᵐ[μ] s_g := by
rw [← Set.diff_inter_self_eq_diff]
refine' ((ae_eq_refl s_g).diff h_inter_empty).trans _
rw [Set.diff_empty]
refine' ((indicator_ae_eq_of_ae_eq_set this).trans _).symm
rw [Set.indicator_support]
exact hgm.ae_eq_mk.symm
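-- As for `f`, the restriction changes `g` only on a null set, so `g =ᵐ[μ] g'`.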
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
⊢ g =ᵐ[μ] g'
[PROOFSTEP]
have : s_g \ s_f =ᵐ[μ] s_g := by
rw [← Set.diff_inter_self_eq_diff]
refine' ((ae_eq_refl s_g).diff h_inter_empty).trans _
rw [Set.diff_empty]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
⊢ s_g \ s_f =ᵐ[μ] s_g
[PROOFSTEP]
rw [← Set.diff_inter_self_eq_diff]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
⊢ s_g \ (s_f ∩ s_g) =ᵐ[μ] s_g
[PROOFSTEP]
refine' ((ae_eq_refl s_g).diff h_inter_empty).trans _
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
⊢ s_g \ ∅ =ᵐ[μ] s_g
[PROOFSTEP]
rw [Set.diff_empty]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
this : s_g \ s_f =ᵐ[μ] s_g
⊢ g =ᵐ[μ] g'
[PROOFSTEP]
refine' ((indicator_ae_eq_of_ae_eq_set this).trans _).symm
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
this : s_g \ s_f =ᵐ[μ] s_g
⊢ Set.indicator s_g (AEStronglyMeasurable'.mk g hgm) =ᵐ[μ] g
[PROOFSTEP]
rw [Set.indicator_support]
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
this : s_g \ s_f =ᵐ[μ] s_g
⊢ AEStronglyMeasurable'.mk g hgm =ᵐ[μ] g
[PROOFSTEP]
exact hgm.ae_eq_mk.symm
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
hgg' : g =ᵐ[μ] g'
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hg'_meas : StronglyMeasurable[m] g' := hgm.stronglyMeasurable_mk.indicator (hs_g.diff hs_f)
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
hgg' : g =ᵐ[μ] g'
hg'_meas : StronglyMeasurable g'
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have hg'_Lp : Memℒp g' p μ := hg.ae_eq hgg'
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
hgg' : g =ᵐ[μ] g'
hg'_meas : StronglyMeasurable g'
hg'_Lp : Memℒp g' p
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
have h_disj : Disjoint (Function.support f') (Function.support g') :=
haveI : Disjoint (s_f \ s_g) (s_g \ s_f) := disjoint_sdiff_sdiff
this.mono Set.support_indicator_subset Set.support_indicator_subset
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj✝ : Disjoint (Function.support f) (Function.support g)
hPf : P (Memℒp.toLp f hf)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
hgg' : g =ᵐ[μ] g'
hg'_meas : StronglyMeasurable g'
hg'_Lp : Memℒp g' p
h_disj : Disjoint (Function.support f') (Function.support g')
⊢ P (Memℒp.toLp f hf + Memℒp.toLp g hg)
[PROOFSTEP]
rw [← Memℒp.toLp_congr hf'_Lp hf hff'.symm] at hPf ⊢
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj✝ : Disjoint (Function.support f) (Function.support g)
hPg : P (Memℒp.toLp g hg)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
hPf : P (Memℒp.toLp f' hf'_Lp)
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
hgg' : g =ᵐ[μ] g'
hg'_meas : StronglyMeasurable g'
hg'_Lp : Memℒp g' p
h_disj : Disjoint (Function.support f') (Function.support g')
⊢ P (Memℒp.toLp f' hf'_Lp + Memℒp.toLp g hg)
[PROOFSTEP]
rw [← Memℒp.toLp_congr hg'_Lp hg hgg'.symm] at hPg ⊢
[GOAL]
case h_add_ae
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : { x // x ∈ Lp F p } → Prop
h_ind :
∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
P ↑(simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
h_add :
∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
P (Memℒp.toLp f hf) → P (Memℒp.toLp g hg) → P (Memℒp.toLp f hf + Memℒp.toLp g hg)
h_closed : IsClosed {f | P ↑f}
f✝ : { x // x ∈ Lp F p }
hf✝ : AEStronglyMeasurable' m (↑↑f✝) μ
f g : α → F
hf : Memℒp f p
hg : Memℒp g p
hfm : AEStronglyMeasurable' m f μ
hgm : AEStronglyMeasurable' m g μ
h_disj✝ : Disjoint (Function.support f) (Function.support g)
s_f : Set α := Function.support (AEStronglyMeasurable'.mk f hfm)
hs_f : MeasurableSet s_f
hs_f_eq : s_f =ᵐ[μ] Function.support f
s_g : Set α := Function.support (AEStronglyMeasurable'.mk g hgm)
hs_g : MeasurableSet s_g
hs_g_eq : s_g =ᵐ[μ] Function.support g
h_inter_empty : s_f ∩ s_g =ᵐ[μ] ∅
f' : α → F := Set.indicator (s_f \ s_g) (AEStronglyMeasurable'.mk f hfm)
hff' : f =ᵐ[μ] f'
hf'_meas : StronglyMeasurable f'
hf'_Lp : Memℒp f' p
hPf : P (Memℒp.toLp f' hf'_Lp)
g' : α → F := Set.indicator (s_g \ s_f) (AEStronglyMeasurable'.mk g hgm)
hgg' : g =ᵐ[μ] g'
hg'_meas : StronglyMeasurable g'
hg'_Lp : Memℒp g' p
hPg : P (Memℒp.toLp g' hg'_Lp)
h_disj : Disjoint (Function.support f') (Function.support g')
⊢ P (Memℒp.toLp f' hf'_Lp + Memℒp.toLp g' hg'_Lp)
[PROOFSTEP]
exact h_add hf'_Lp hg'_Lp hf'_meas hg'_meas h_disj hPf hPg
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
⊢ ∀ ⦃f : α → F⦄, Memℒp f p → AEStronglyMeasurable' m f μ → P f
[PROOFSTEP]
intro f hf hfm
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
⊢ P f
[PROOFSTEP]
let f_Lp := hf.toLp f
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
f_Lp : { x // x ∈ Lp F p } := toLp f hf
⊢ P f
[PROOFSTEP]
have hfm_Lp : AEStronglyMeasurable' m f_Lp μ := hfm.congr hf.coeFn_toLp.symm
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
f_Lp : { x // x ∈ Lp F p } := toLp f hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
⊢ P f
[PROOFSTEP]
refine' h_ae hf.coeFn_toLp (Lp.memℒp _) _
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
f_Lp : { x // x ∈ Lp F p } := toLp f hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
⊢ P ↑↑(toLp f hf)
[PROOFSTEP]
change P f_Lp
[GOAL]
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
f_Lp : { x // x ∈ Lp F p } := toLp f hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
⊢ P ↑↑f_Lp
[PROOFSTEP]
refine' Lp.induction_stronglyMeasurable hm hp_ne_top (P := fun f => P f) _ _ h_closed f_Lp hfm_Lp
[GOAL]
case refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
f_Lp : { x // x ∈ Lp F p } := toLp f hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
⊢ ∀ (c : F) {s : Set α} (hs : MeasurableSet s) (hμs : ↑↑μ s < ⊤),
(fun f => P ↑↑f) ↑(Lp.simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
[PROOFSTEP]
intro c s hs hμs
[GOAL]
case refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
f_Lp : { x // x ∈ Lp F p } := toLp f hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
c : F
s : Set α
hs : MeasurableSet s
hμs : ↑↑μ s < ⊤
⊢ P ↑↑↑(Lp.simpleFunc.indicatorConst p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
[PROOFSTEP]
rw [Lp.simpleFunc.coe_indicatorConst]
[GOAL]
case refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
f_Lp : { x // x ∈ Lp F p } := toLp f hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
c : F
s : Set α
hs : MeasurableSet s
hμs : ↑↑μ s < ⊤
⊢ P ↑↑(indicatorConstLp p (_ : MeasurableSet s) (_ : ↑↑μ s ≠ ⊤) c)
[PROOFSTEP]
refine' h_ae indicatorConstLp_coeFn.symm _ (h_ind c hs hμs)
[GOAL]
case refine'_1
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
f_Lp : { x // x ∈ Lp F p } := toLp f hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
c : F
s : Set α
hs : MeasurableSet s
hμs : ↑↑μ s < ⊤
⊢ Memℒp (Set.indicator s fun x => c) p
[PROOFSTEP]
exact memℒp_indicator_const p (hm s hs) c (Or.inr hμs.ne)
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f : α → F
hf : Memℒp f p
hfm : AEStronglyMeasurable' m f μ
f_Lp : { x // x ∈ Lp F p } := toLp f hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
⊢ ∀ ⦃f g : α → F⦄ (hf : Memℒp f p) (hg : Memℒp g p),
StronglyMeasurable f →
StronglyMeasurable g →
Disjoint (Function.support f) (Function.support g) →
(fun f => P ↑↑f) (toLp f hf) → (fun f => P ↑↑f) (toLp g hg) → (fun f => P ↑↑f) (toLp f hf + toLp g hg)
[PROOFSTEP]
intro f g hf_mem hg_mem hfm hgm h_disj hfP hgP
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f✝ : α → F
hf : Memℒp f✝ p
hfm✝ : AEStronglyMeasurable' m f✝ μ
f_Lp : { x // x ∈ Lp F p } := toLp f✝ hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
f g : α → F
hf_mem : Memℒp f p
hg_mem : Memℒp g p
hfm : StronglyMeasurable f
hgm : StronglyMeasurable g
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P ↑↑(toLp f hf_mem)
hgP : P ↑↑(toLp g hg_mem)
⊢ P ↑↑(toLp f hf_mem + toLp g hg_mem)
[PROOFSTEP]
have hfP' : P f := h_ae hf_mem.coeFn_toLp (Lp.memℒp _) hfP
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f✝ : α → F
hf : Memℒp f✝ p
hfm✝ : AEStronglyMeasurable' m f✝ μ
f_Lp : { x // x ∈ Lp F p } := toLp f✝ hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
f g : α → F
hf_mem : Memℒp f p
hg_mem : Memℒp g p
hfm : StronglyMeasurable f
hgm : StronglyMeasurable g
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P ↑↑(toLp f hf_mem)
hgP : P ↑↑(toLp g hg_mem)
hfP' : P f
⊢ P ↑↑(toLp f hf_mem + toLp g hg_mem)
[PROOFSTEP]
have hgP' : P g := h_ae hg_mem.coeFn_toLp (Lp.memℒp _) hgP
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_add :
∀ ⦃f g : α → F⦄,
Disjoint (Function.support f) (Function.support g) →
Memℒp f p → Memℒp g p → StronglyMeasurable f → StronglyMeasurable g → P f → P g → P (f + g)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f✝ : α → F
hf : Memℒp f✝ p
hfm✝ : AEStronglyMeasurable' m f✝ μ
f_Lp : { x // x ∈ Lp F p } := toLp f✝ hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
f g : α → F
hf_mem : Memℒp f p
hg_mem : Memℒp g p
hfm : StronglyMeasurable f
hgm : StronglyMeasurable g
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P ↑↑(toLp f hf_mem)
hgP : P ↑↑(toLp g hg_mem)
hfP' : P f
hgP' : P g
⊢ P ↑↑(toLp f hf_mem + toLp g hg_mem)
[PROOFSTEP]
specialize h_add h_disj hf_mem hg_mem hfm hgm hfP' hgP'
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f✝ : α → F
hf : Memℒp f✝ p
hfm✝ : AEStronglyMeasurable' m f✝ μ
f_Lp : { x // x ∈ Lp F p } := toLp f✝ hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
f g : α → F
hf_mem : Memℒp f p
hg_mem : Memℒp g p
hfm : StronglyMeasurable f
hgm : StronglyMeasurable g
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P ↑↑(toLp f hf_mem)
hgP : P ↑↑(toLp g hg_mem)
hfP' : P f
hgP' : P g
h_add : P (f + g)
⊢ P ↑↑(toLp f hf_mem + toLp g hg_mem)
[PROOFSTEP]
refine' h_ae _ (hf_mem.add hg_mem) h_add
[GOAL]
case refine'_2
α : Type u_1
E' : Type u_2
F : Type u_3
F' : Type u_4
𝕜 : Type u_5
p : ℝ≥0∞
inst✝¹² : IsROrC 𝕜
inst✝¹¹ : NormedAddCommGroup E'
inst✝¹⁰ : InnerProductSpace 𝕜 E'
inst✝⁹ : CompleteSpace E'
inst✝⁸ : NormedSpace ℝ E'
inst✝⁷ : NormedAddCommGroup F
inst✝⁶ : NormedSpace 𝕜 F
inst✝⁵ : NormedAddCommGroup F'
inst✝⁴ : NormedSpace 𝕜 F'
inst✝³ : NormedSpace ℝ F'
inst✝² : CompleteSpace F'
m m0 : MeasurableSpace α
μ : Measure α
inst✝¹ : Fact (1 ≤ p)
inst✝ : NormedSpace ℝ F
hm : m ≤ m0
hp_ne_top : p ≠ ⊤
P : (α → F) → Prop
h_ind : ∀ (c : F) ⦃s : Set α⦄, MeasurableSet s → ↑↑μ s < ⊤ → P (Set.indicator s fun x => c)
h_closed : IsClosed {f | P ↑↑↑f}
h_ae : ∀ ⦃f g : α → F⦄, f =ᵐ[μ] g → Memℒp f p → P f → P g
f✝ : α → F
hf : Memℒp f✝ p
hfm✝ : AEStronglyMeasurable' m f✝ μ
f_Lp : { x // x ∈ Lp F p } := toLp f✝ hf
hfm_Lp : AEStronglyMeasurable' m (↑↑f_Lp) μ
f g : α → F
hf_mem : Memℒp f p
hg_mem : Memℒp g p
hfm : StronglyMeasurable f
hgm : StronglyMeasurable g
h_disj : Disjoint (Function.support f) (Function.support g)
hfP : P ↑↑(toLp f hf_mem)
hgP : P ↑↑(toLp g hg_mem)
hfP' : P f
hgP' : P g
h_add : P (f + g)
⊢ f + g =ᵐ[μ] ↑↑(toLp f hf_mem + toLp g hg_mem)
[PROOFSTEP]
exact (hf_mem.coeFn_toLp.symm.add hg_mem.coeFn_toLp.symm).trans (Lp.coeFn_add _ _).symm
|
\name{circos.track}
\alias{circos.track}
\title{
Create plotting regions for a whole track
}
\description{
Create plotting regions for a whole track
}
\usage{
circos.track(...)
}
\arguments{
\item{...}{pass to \code{\link{circos.trackPlotRegion}}}
}
\details{
Shortcut function of \code{\link{circos.trackPlotRegion}}.
}
\examples{
# There is no example
NULL
}
|
The Dark Lair Of Paul Lanzi
Right... what better place to screw up Phil's new wiki than to start with my own page?
I'm a somewhat/kinda recent graduate from the Computer Science Department of the College of Letters and Science at the University of California at Davis. I am no longer a resident, having moved to SF in August '05. After working for a small science-based startup, I now work at a major biotech company near SF. I generally hike a lot and take photos while doing so, a hobby that has sort of morphed into more than a hobby. I don't like seafood.
But you already knew all that.
What you don't know is that in my next life, I plan on being a wombat.
And now we dance.
Want to stalk me more? Ok!
You can reach me via email at [email protected]
Wiki wiki work
Comments(Your Chance to Object) Knock yourselves out...
07/23/2004 01:19:48 AM Paul, if you'd like to work on the CSS, I suggest getting the Web Developer extension for Firefox, which allows you to edit the CSS as you view it, then save and upload it. The things we still want to finalize are the fonts and colors. They look nice, but still the fonts are borrowed from the old theme, and if there's something better we'd like to see it. You can also edit the formbutton style, which will be applied to all the buttons (right now it's applied only to the buttons on the edit page). That extension makes it very easy to edit stuff and see how it looks immediately. Users/MikeIvanov
07/28/2004 12:29:58 AM Yep. Have had the Web Developer extension installed for some time. It certainly rocks! I'm going to brainstorm on some color schemes soon and run them by you guys. Users/PaulLanzi
20040728 01:21:29 That would be very cool. I have made the logo transparent, so you can actually change the banner colors to be something off-black. Also, Philip did an awesome job on the Comments macro, and if you want to put "Your chance to object" in there, you can use Comments(Your chance to object). Events Board and a new Davis Map are in the works! Users/MikeIvanov
08/04/2004 11:34:00 PM thanks! Users/PaulLanzi
20041021 23:22:14 What symbols do we need to add to the allowed symbols list? There are a couple that would cause trouble, and I need to check to see which they are. Users/PhilipNeustrom
20041021 23:30:25 probably need both ? and an apostrophe... but it occurs to me that those might be the precise symbols that would cause problems... Users/PaulLanzi
20041027 01:23:13 Hey Paul, to add points you go into Edit for the page. There's an Edit Map button. It's all documented in Davis Wiki Guide. Users/PhilipNeustrom
20041027 01:55:53 (Was it unclear in the guide or did you not reread it? Is the Edit area the wrong area for that?) Users/PhilipNeustrom
|
(*
Copyright (C) 2017 M.A.L. Marques
2019 Susi Lehtola
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*)
(* type: gga_exc *)
(* prefix:
gga_c_pbe_vwn_params *params;
assert(p->params != NULL);
params = (gga_c_pbe_vwn_params * )(p->params);
*)
$include "lda_c_vwn.mpl"
$ifdef gga_c_pbe_params
params_a_beta := 0.06672455060314922:
params_a_gamma := (1 - log(2))/Pi^2:
params_a_BB := 1:
$endif
mgamma := params_a_gamma:
mbeta := (rs, t) -> params_a_beta:
BB := params_a_BB:
tp := (rs, z, xt) -> tt(rs, z, xt):
(* Equation (8) *)
A := (rs, z, t) ->
mbeta(rs, t)/(mgamma*(exp(-f_vwn(rs, z)/(mgamma*mphi(z)^3)) - 1)):
(* Equation (7) *)
f1 := (rs, z, t) -> t^2 + BB*A(rs, z, t)*t^4:
f2 := (rs, z, t) -> mbeta(rs, t)*f1(rs, z, t)/(mgamma*(1 + A(rs, z, t)*f1(rs, z, t))):
fH := (rs, z, t) -> mgamma*mphi(z)^3*log(1 + f2(rs, z, t)):
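(* total correlation energy per particle: the LDA term (VWN here) plus the gradient correction H defined above *)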
f_pbe := (rs, z, xt, xs0, xs1) ->
f_vwn(rs, z) + fH(rs, z, tp(rs, z, xt)):
f := (rs, z, xt, xs0, xs1) -> f_pbe(rs, z, xt, xs0, xs1):
|
// ======================================================================
/*!
* \brief woml::Elevation
*/
// ======================================================================
#include "Elevation.h"
#include <boost/optional.hpp>
#include <cassert>
namespace woml
{
// ----------------------------------------------------------------------
/*!
* \brief Default constructor
*/
// ----------------------------------------------------------------------
Elevation::Elevation() : itsBounded(false), itsValue(), itsRange() {}
// ----------------------------------------------------------------------
/*!
* \brief Constructor
*/
// ----------------------------------------------------------------------
Elevation::Elevation(const boost::optional<NumericalSingleValueMeasure>& theValue)
: itsBounded(false), itsValue(theValue), itsRange()
{
}
// ----------------------------------------------------------------------
/*!
* \brief Constructor
*/
// ----------------------------------------------------------------------
Elevation::Elevation(const boost::optional<NumericalValueRangeMeasure>& theRange)
: itsBounded(theRange ? true : false), itsValue(), itsRange(theRange)
{
}
// ----------------------------------------------------------------------
/*!
* \brief True if elevation is bounded
*/
// ----------------------------------------------------------------------
bool Elevation::bounded() const { return itsBounded; }
// ----------------------------------------------------------------------
/*!
* \brief Value accessor
*/
// ----------------------------------------------------------------------
const boost::optional<NumericalSingleValueMeasure>& Elevation::value() const
{
assert(itsValue);
return itsValue;
}
// ----------------------------------------------------------------------
/*!
* \brief Lower limit accessor
*/
// ----------------------------------------------------------------------
boost::optional<NumericalSingleValueMeasure> Elevation::lowerLimit() const
{
assert(itsBounded);
boost::optional<NumericalSingleValueMeasure> limit;
if (itsRange) limit = itsRange->lowerLimit();
return limit;
}
// ----------------------------------------------------------------------
/*!
* \brief Upper limit accessor
*/
// ----------------------------------------------------------------------
boost::optional<NumericalSingleValueMeasure> Elevation::upperLimit() const
{
assert(itsBounded);
boost::optional<NumericalSingleValueMeasure> limit;
if (itsRange) limit = itsRange->upperLimit();
return limit;
}
} // namespace woml
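// ----------------------------------------------------------------------
/*!
 * \brief Illustrative usage (sketch only, not part of the library)
 *
 * How the range measure itself is built is left open here (its API is not
 * shown in this file); the sketch only shows how a bounded Elevation would
 * typically be queried:
 *
 *   boost::optional<woml::NumericalValueRangeMeasure> range = ...;  // obtained elsewhere
 *   woml::Elevation elevation(range);
 *   if (elevation.bounded())
 *   {
 *     boost::optional<woml::NumericalSingleValueMeasure> lo = elevation.lowerLimit();
 *     boost::optional<woml::NumericalSingleValueMeasure> hi = elevation.upperLimit();
 *   }
 */
// ----------------------------------------------------------------------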
|
Formal statement is: lemma borel_measurable_if_D: fixes f :: "'a::euclidean_space \<Rightarrow> 'b::euclidean_space" assumes "(\<lambda>x. if x \<in> S then f x else 0) \<in> borel_measurable lebesgue" shows "f \<in> borel_measurable (lebesgue_on S)" Informal statement is: If the function that agrees with $f$ on $S$ and is $0$ outside $S$ is Lebesgue measurable on $\mathbb{R}^n$, then $f$ is measurable with respect to the Lebesgue measure restricted to $S$. |
= = Background and release = =
|
If $f$ and $g$ are holomorphic functions on an open connected set $S$, and if $f$ has no zeros in $S$, then the number of zeros of $f + g$ in $S$ is equal to the number of zeros of $f$ in $S$. |
module ncepgfs_ghg
!$$$ module documentation block
!
! module ncepgfs_ghg
! prgmmr: hou date: 2010-3-31
!
! abstract: reading ncep gfs/cfs greenhouse gases and/or other
! types of radiatively active rare gas distributions.
!
! program history log:
! 2010-03-31 hou
! 2010-04-26 kistler - simplified co2 input file for ico2=2
! 2011-08-09 yang - add data interpolation in temporal and spatial space in the subdomains
! 2011-01-26 yang - add ch4,n2o, and co
!
! subroutines included:
! sub read_gfsco2 - read ncep gfs historical/prescribed co2
! sub read_ch4n2oco - read prescribed ch4, n2o, and co fields
! and interpolate them into analysis date and grid
!
!
! units used for radiatively active gases:
! co2 : volume mixing ratio (ppm)
! n2o : volume mixing ratio (ppm)
! ch4 : volume mixing ratio (ppm)
! o2 : volume mixing ratio (ppm)
! co : volume mixing ratio (ppm)
! cfc11 : volume mixing ratio (ppm)
! cfc12 : volume mixing ratio (ppm)
! cfc22 : volume mixing ratio (ppm)
! ccl4 : volume mixing ratio (ppm)
! cfc113: volume mixing ratio (ppm)
!
! attributes:
! language: f90
! machine: ibm
!
!$$$ end document block
use kinds, only : r_kind, i_kind
use constants, only : pi, rad2deg, one
implicit none
private
! --- default parameter constants
integer(i_kind), parameter :: nmxlon_def = 24 ! default input co2 data lon points
integer(i_kind), parameter :: nmxlat_def = 12 ! default input co2 data lat points
integer(i_kind), parameter :: minyear = 1957 ! earliest year 2-d co2 data available
real(r_kind), parameter :: resco2_def = 15.0_r_kind ! horiz res in degree
real(r_kind), parameter :: prsco2 = 78.8_r_kind ! pres lim for 2-d co2 (cb)
! --- parameter constants for gas volume mixing ratios in ppm (1.e-6 p/p)
! --- values are based on ESRL or default values in CRTM
real(r_kind), parameter :: co2vmr_def = 390.0_r_kind
real(r_kind), parameter :: ch4vmr_def = 1.808_r_kind
real(r_kind), parameter :: n2ovmr_def = 0.324_r_kind
real(r_kind), parameter :: covmr_def = 0.170_r_kind
public:: co2vmr_def
public:: ch4vmr_def
public:: n2ovmr_def
public:: covmr_def
! real(r_kind), parameter :: n2ovmr_def = 0.31_r_kind
! real(r_kind), parameter :: o2vmr_def = 209.0e3_r_kind
! real(r_kind), parameter :: covmr_def = 1.50e-2_r_kind
! real(r_kind), parameter :: f11vmr_def = 3.520e-4_r_kind
! real(r_kind), parameter :: f12vmr_def = 6.358e-4_r_kind
! real(r_kind), parameter :: f22vmr_def = 1.500e-4_r_kind
! real(r_kind), parameter :: cl4vmr_def = 1.397e-4_r_kind
! real(r_kind), parameter :: f113vmr_def= 8.2000e-5_r_kind
! --- co2 2-d monthly data and global mean from observed data
real(r_kind), save :: co2_glb = co2vmr_def
real(r_kind) :: julday ! Used in calculating default value with
! annual trend
integer(i_kind), save :: kyrsav = 0 ! year of data saved
integer(i_kind), save :: kmonsav = 0 ! month of data saved
integer(i_kind), save :: nmxlon = nmxlon_def
integer(i_kind), save :: nmxlat = nmxlat_def
integer(i_kind), save :: resco2 = resco2_def
! --- public invokable subroutines
public read_gfsco2 ! read ncep gfs historical/prescribed co2
public read_ch4n2oco ! read prescribed ch4,n2o,and co fields
! and interpolate them into grid for analysis date
! ---
! ===========
contains
! ===========
subroutine read_gfsco2 &
! --- inputs:
( iyear, month, idd, ico2,xlats,lat2, lon2, nlev, mype, &
! --- outputs:
atmco2 )
!$$$ subprogram documentation block
!
! subprogram: read_gfsco2 read from gfs historical co2 data
! set, convert to model grid
!
! prgmmr: hou date: 2010-03-31
!
! abstract: set up greenhouse gas co2 profile by reading global
! historical co2 monthly 2-d data files, convert to grid
!
! program history log:
! 2003-05-xx hou created the original gfs version
! 2010-03-31 hou modified for gsi application
! 2010-04-26 kistler simplified co2 input file for ico2=2
! 2011-03-15 yang modify to use a time-dependent monthly zonal mean co2 input
! 2011-12-12 yang modify interpolation loop order
!
! input argument list:
! iyear - integer, year for the requested data
! month - integer, month of the year
! idd - integer, day of the month
! ico2 - integer
! =0: use prescribed global mean co2 value
! =1: use observed co2 yearly global annual mean value
! =2: use prescribed monthly mean co2 field (3d)
! xlats(lat2)- real, grid latitude in radians
! lat2 - integer, number of latitude points in subdomain
! lon2 - integer, number of longitude points in subdomain
! nlev - integer, number of vertical layers
! mype - integer, mpi task id
!
! output argument list:
! atmco2(lat2,lon2,nlev)
! - real, co2 volume mixing ratio in ppm
!
!$$$
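! --- example call (illustrative only; the latitude and output arrays are
! whatever the caller uses for its own subdomain):
!
! call read_gfsco2 ( iyear, month, idd, ico2, xlats, lat2, lon2, nlev, &
! mype, atmco2 )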
! --- declare passed variables - input:
integer(i_kind), intent(in ) :: iyear
integer(i_kind), intent(in ) :: month
integer(i_kind), intent(in ) :: idd
integer(i_kind), intent(in ) :: ico2
integer(i_kind), intent(in ) :: lat2
integer(i_kind), intent(in ) :: lon2
integer(i_kind), intent(in ) :: nlev
integer(i_kind), intent(in ) :: mype
real(r_kind), dimension(:), intent(in ) :: xlats
! --- declare passed variables - output:
real(r_kind), dimension(:,:,:), intent(out ) :: atmco2
! --- declare local variables:
real(r_kind), allocatable, dimension(:) :: xlatsdeg
real(r_kind), allocatable, dimension(:,:,:) :: co2_Tintrp
real(r_kind), allocatable, dimension(:,:,:) :: co2_sav1
real(r_kind), allocatable, dimension(:,:,:) :: co2_sav2
! --- latitudes in degree of input co2 data
real(r_kind), allocatable, dimension(:) :: rlats_co2
real(r_kind) :: co2diff
real(r_kind) :: co2rate
real(r_kind) :: co2g1
real(r_kind) :: dydn, dyup, dyall
integer(i_kind) :: iyr, imo, ierr
integer(i_kind) :: i, j, k, ires
integer(i_kind) :: ii
integer(i_kind) :: luco2 = 43 ! data file unit, may be as an input param
integer(i_kind) :: ndpm(12)
integer(i_kind) :: ndmax
logical :: file_exist
character(len=8) :: cform = '(24f7.2)' ! data format (nmxlon*f7.2)
Character(len=20) :: cfile = 'global_co2_data.txt'
data ndpm /31, 28, 31, 30, 31, 30, &
31, 31, 30, 31, 30, 31/
! --- ... begin
allocate(xlatsdeg(lat2))
if ( ico2 < 0 .or. ico2 > 2 ) then
write(6,*) ' ERROR!!! ICO2 value out of range, ICO2 =',ico2
write(6,*) ' Stopped in subprogram read_gfsco2'
call stop2(332)
endif
if ( ico2 == 0 ) then
! --- ... use prescribed global mean co2 data based on date
julday = (1461 * (iyear + 4800 + (month -14)/12))/4 + &
(367 * (month -2 -12 * ((month -14)/12)))/12 - &
(3 * ((iyear + 4900 + (month - 14)/12)/100))/4 + idd - 32075
co2_glb = 0.00602410_r_kind * (julday - 2455563.0_r_kind) + 389.5_r_kind
do k = 1, nlev
do j = 1, lon2
do i = 1, lat2
atmco2(i,j,k) = co2_glb
enddo
enddo
enddo
if ( mype == 0 ) then
write(6,*) ' - Using prescribed co2 global mean value=',co2_glb,&
'for Julian Day',julday
endif
return
! --- ... auto select co2 data table for required month and year
else if ( ico2 == 1 ) then
if ( mype == 0 ) then
write(6,*) ' ico2 == 1 not valid '
write(6,*) ' *** Stopped in subroutine read_gfsco2 !!'
endif
call stop2(332)
else if ( ico2 == 2 ) then
! --- ... set up input data file name
if ( mype == 0 ) then
write(6,*) ' - Using Co2 Data file ',cfile
endif
! --- ... check to see if the requested co2 data file exists
inquire (file=cfile, exist=file_exist)
if ( .not. file_exist ) then
if ( mype == 0 ) then
write(6,*) ' Can not find co2 data source file'
write(6,*) ' *** Stopped in subroutine read_gfsco2 !!'
endif
call stop2(332)
endif ! end if_file_exist_block
! --- ... read in co2 2-d data for the requested month
open (luco2,file=cfile,form='formatted',status='old',iostat=ierr)
if (ierr /= 0) then
if ( mype == 0 ) then
write(6,*) ' error opening file = '//cfile
write(6,*) ' *** Stopped in subroutine read_gfsco2 !!'
endif
call stop2(332)
endif
rewind luco2
read (luco2, 36,iostat=ierr) iyr, nmxlon, nmxlat, ires, co2g1
36 format(i4,t25,2i4,t58,i3,t99,f7.2)
if (ierr /= 0) then
if ( mype == 0 ) then
write(6,*) ' error reading file = '//cfile
write(6,*) ' *** Stopped in subroutine read_gfsco2 !!'
endif
call stop2(332)
endif
resco2 = ires
co2_glb = co2g1
if ( mype == 0 ) then
write(6,*) ' Opened co2 data file: ',cfile
write(6,*) ' YEAR=',iyr,' NLON, NLAT=',nmxlon,nmxlat, &
' RES=',ires,' Global annual mean=',co2_glb
endif
if ( .not. allocated(rlats_co2) ) then
allocate ( rlats_co2(nmxlat) )
endif
if ( .not. allocated(co2_sav1) ) then
allocate ( co2_sav1(nmxlon,nmxlat,nlev) )
endif
if ( .not. allocated(co2_sav2) ) then
allocate ( co2_sav2(nmxlon,nmxlat,nlev) )
endif
if ( .not. allocated(co2_Tintrp) ) then
allocate ( co2_Tintrp(nmxlon,nmxlat,nlev) )
endif
! --- ...
! --- ... rlats: latitudes array of input co2 data (in degree)
read (luco2,37) (rlats_co2(j), j=1,nmxlat)
37 format (12f7.2)
! --- ... set up input data format
write(cform(2:3),'(i2)') nmxlon
! --- ... put data to grid and vertical layers
! --- ... convert xlats into degree
do i = 1, lat2
xlatsdeg(i)=xlats(i)*rad2deg
enddo
! --- ... read 3-d data starting from January of the requested year (or climatological January)
do imo = 1, month
do k = 1, nlev
do j=1,nmxlat
read (luco2,cform) (co2_sav1(i,j,k), i=1,nmxlon)
enddo
enddo
enddo
! --- ... save the next month data for interpolation
do k = 1, nlev
do j=1, nmxlat
read (luco2,cform) (co2_sav2(i,j,k), i=1,nmxlon)
enddo
enddo
! Linearly interpolate monthly means to values for the analysis day
ndmax=ndpm(month)
! For leap year February: ndmax=29
if (month ==2 ) then
if( mod(iyear,4) == 0 .and. iyear >= 1900) ndmax= 29
endif
do k=1,nlev
do j=1,nmxlat
do i=1,nmxlon
co2diff= co2_sav2(i,j,k)-co2_sav1(i,j,k)
co2rate= co2diff/ndmax
co2_Tintrp(i,j,k)= co2_sav1(i,j,k)+ co2rate*float(idd-1)
enddo
enddo
enddo
i=nmxlon/2+1
j=nmxlat/2+1
if ( mype == 0 ) then
write(6,*) 'ncep_ghg: CO2 data ', &
'used for year, month,i,j:',iyear,month,i,j
do k=1,nlev
write(6,*) ' Level = ',k,' CO2 = ',co2_Tintrp(i,j,k)
enddo
endif
! Interpolate the co2_Tintrp into a subdomain's grid
do i = 1, lat2
! --- ... If a subdomain latitude falls outside the coverage of the input co2
! --- ... (or exactly at its boundary), atmco2(i,j,k) is assigned the value of
! --- ... the nearest point of the input co2
if (xlatsdeg(i) < rlats_co2(1)) then
do k = 1, nlev
do j=1,lon2
atmco2(i,j,k)= co2_Tintrp(1,1,k)
enddo
enddo
endif
if (xlatsdeg(i) >= rlats_co2(nmxlat)) then
do k = 1, nlev
do j=1,lon2
atmco2(i,j,k)= co2_Tintrp(1,nmxlat,k)
enddo
enddo
endif
ii_loop:do ii = 1, nmxlat-1
if (xlatsdeg(i) >= rlats_co2(ii) .and. xlatsdeg(i) < rlats_co2(ii+1)) then
dydn= xlatsdeg(i) - rlats_co2(ii)
dyup= rlats_co2(ii+1)-xlatsdeg(i)
dyall=rlats_co2(ii+1)-rlats_co2(ii)
dydn=dydn /dyall
dyup=1.0-dydn
do k=1,nlev
do j=1,lon2
atmco2(i,j,k)= dydn*co2_Tintrp(1,ii+1,k)+ dyup*co2_Tintrp(1,ii,k)
enddo
enddo
endif
enddo ii_loop ! end loop for ii
enddo ! end loop for i-lat2
close (luco2 )
if (allocated(xlatsdeg)) deallocate (xlatsdeg)
if (allocated(rlats_co2)) deallocate (rlats_co2)
if (allocated(co2_sav1)) deallocate (co2_sav1)
if (allocated(co2_sav2)) deallocate (co2_sav2)
if (allocated(co2_Tintrp)) deallocate (co2_Tintrp)
endif ! end if_ico2_block
return
end subroutine read_gfsco2
subroutine read_ch4n2oco &
! --- inputs:
(iyear,month,idd,char_ghg,xlats,lat2, lon2, nlev, mype, &
! --- outputs:
atmghg )
!$$$ subprogram documentation block
!
! subprogram: read_ch4n2oco read and interpolate prescribed CH4,N2O,and CO fields
!
! prgmmr: Yang date: 2012-01-24
!
! abstract: read prescribed CH4,N2O, and CO, either climate monthly means or monthly means.
! Do linear interpolation in both time and space.
!
! program history log:
! 2012-01-25 yang Initial code following read_gfsco2
!
! input argument list:
! iyear - integer, year for the requested data
! month - integer, month of the year
! idd - integer, day of the month
! char_ghg - character
! =ch4: use prescribed CH4 data set
! =n2o: use prescribed N2O data set
! =co1: use prescribed CO data set. Use 'co1' to distinguish from 'co' used by GMAO
! xlats(lat2)- real, grid latitude in radians
! lat2 - integer, number of latitude points in subdomain
! lon2 - integer, number of longitude points in subdomain
! nlev - integer, number of vertical layers
! mype - integer, mpi task id
!
! output argument list:
! atmghg(lat2,lon2,nlev)
! - real, ch4,or n2o, or co, volume mixing ratio in ppm
!
!$$$
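! --- illustrative (hypothetical) call, here for methane; the literal date
! --- values are assumptions for illustration only:
! ---   call read_ch4n2oco(2012, 1, 25, 'ch4', xlats, lat2, lon2, nlev, mype, atmghg)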
! --- declare passed variables - input:
integer(i_kind), intent(in ) :: iyear
integer(i_kind), intent(in ) :: month
integer(i_kind), intent(in ) :: idd
integer(i_kind), intent(in ) :: lat2
integer(i_kind), intent(in ) :: lon2
integer(i_kind), intent(in ) :: nlev
integer(i_kind), intent(in ) :: mype
character(len=3), intent(in ) :: char_ghg
real(r_kind), dimension(:), intent(in ) :: xlats
! --- declare passed variables - output:
real(r_kind), dimension(:,:,:), intent(out ) :: atmghg
! --- declare local variables:
real(r_kind), allocatable, dimension(:) :: xlatsdeg
real(r_kind), allocatable, dimension(:,:,:) :: ghg_Tintrp
real(r_kind), allocatable, dimension(:,:,:) :: ghg_sav1
real(r_kind), allocatable, dimension(:,:,:) :: ghg_sav2
! --- latitudes in degree of input ghg data
real(r_kind), allocatable, dimension(:) :: rlats_ghg
real(r_kind) :: ghgdiff
real(r_kind) :: ghgrate
real(r_kind) :: dydn, dyup, dyall
integer(i_kind) :: iyr, imo, ierr
integer(i_kind) :: i, j, k
integer(i_kind) :: nmaxlon,nmaxlat
integer(i_kind) :: ii
integer(i_kind) :: lughg = 43 ! data file unit, could be made an input parameter
integer(i_kind) :: ndpm(12)
integer(i_kind) :: ndmax
logical :: file_exist
Character(len=20) :: cfile = ''
Character(len=18) :: title1
Character(len=30) :: title2
data ndpm /31_i_kind, 28_i_kind, 31_i_kind, 30_i_kind, 31_i_kind, 30_i_kind, &
31_i_kind, 31_i_kind, 30_i_kind, 31_i_kind, 30_i_kind, 31_i_kind/
! --- ... begin
allocate(xlatsdeg(lat2))
! determine the filename
cfile=trim(char_ghg)//'globaldata.txt'
if (mype == 0 ) then
write(6,*) ' - Using prescribed ghg data file char_ghg==', char_ghg,cfile
endif
! --- ... check to see if the requested ghg data file exists
inquire (file=cfile, exist=file_exist)
if ( .not. file_exist ) then
if ( mype == 0 ) then
write(6,*) ' Can not find ',trim(char_ghg),' data source file'
write(6,*) ' *** Stopped in subroutine read_ch4n2oco !!'
endif
call stop2(332)
endif ! end if_file_exist_block
! --- ... read in 2-d (Y-Z) data for the requested month
open (lughg,file=cfile,form='formatted',status='old',iostat=ierr)
if (ierr /= 0) then
if ( mype == 0 ) then
write(6,*) ' error opening file = '//cfile
write(6,*) ' *** Stopped in subroutine read_ch4n2oco !!'
endif
call stop2(332)
endif
rewind lughg
read (lughg, 36,iostat=ierr) iyr,title1, nmaxlon, nmaxlat, title2
36 format(i8,4x,a18,2i3,a30)
if (ierr /= 0) then
if ( mype == 0 ) then
write(6,*) ' error reading file = '//cfile
write(6,*) ' *** Stopped in subroutine read_ch4n2oco !!'
endif
call stop2(332)
endif
if ( mype == 0 ) then
write(6,*) ' Opened ghg data file: ',cfile
write(6,*) ' YEAR=',iyr,' NLON, NLAT=',nmaxlon,nmaxlat
endif
if ( .not. allocated(rlats_ghg) ) then
allocate ( rlats_ghg(nmaxlat) )
endif
if ( .not. allocated(ghg_sav1) ) then
allocate ( ghg_sav1(nmaxlon,nmaxlat,nlev) )
endif
if ( .not. allocated(ghg_sav2) ) then
allocate ( ghg_sav2(nmaxlon,nmaxlat,nlev) )
endif
if ( .not. allocated(ghg_Tintrp) ) then
allocate ( ghg_Tintrp(nmaxlon,nmaxlat,nlev) )
endif
! --- ... rlats: latitudes array of input ghg data (in degree)
read (lughg,37) (rlats_ghg(j), j=1,nmaxlat)
37 format (7(13f7.2,/))
! --- ... convert xlats into degree
do i = 1, lat2
xlatsdeg(i)=xlats(i)*rad2deg
enddo
! --- ... read 2-d data field starting from January of the year
do imo = 1, month
do k = 1, nlev
read (lughg,37) (ghg_sav1(1,j,k), j=1,nmaxlat)
enddo
enddo
! --- ... save the next month data for interpolation
do k = 1, nlev
read (lughg,37) (ghg_sav2(1,j,k), j=1,nmaxlat)
enddo
if ( mype == 0 ) then
write(6,*) ' CHECK: at Month+1 CH4 data ', &
'used for year, month, level:',iyear,month,nlev
write(6,37) ghg_sav2(1,:,64)
endif
! Linearly interpolate two monthly means into values for the analysis day
ndmax=ndpm(month)
! For leap year February: ndmax=29
if (month ==2 ) then
if( mod(iyear,4) == 0_i_kind .and. iyear >= 1900_i_kind) ndmax= 29
endif
do k=1,nlev
do j=1,nmaxlat
!for 2d input data: nmaxlon=1, and ghg_sav1 (or 2) has the dimension (1,j,k)
do i=1,nmaxlon
ghgdiff= ghg_sav2(1,j,k)-ghg_sav1(1,j,k)
ghgrate= ghgdiff/ndmax
ghg_Tintrp(1,j,k)= ghg_sav1(1,j,k)+ ghgrate*float(idd-1)
enddo
enddo
enddo
! Interpolate ghg_Tintrp into a subdomain's grid
do i = 1, lat2
! --- ... If a subdomain latitude falls outside the coverage of the input ghg
! --- ... (or exactly at its boundary), atmghg(i,j,k) is assigned the value of
! --- ... the nearest point of the input ghg
if (xlatsdeg(i) < rlats_ghg(1)) then
do k = 1, nlev
do j=1,lon2
atmghg(i,j,k)= ghg_Tintrp(1,1,k)
enddo
enddo
endif
if (xlatsdeg(i) >= rlats_ghg(nmaxlat)) then
do k = 1, nlev
do j=1,lon2
atmghg(i,j,k)= ghg_Tintrp(1,nmaxlat,k)
enddo
enddo
endif
ii_loop: do ii = 1, nmaxlat-1
if (xlatsdeg(i) >= rlats_ghg(ii) .and. xlatsdeg(i) < rlats_ghg(ii+1)) then
dydn= xlatsdeg(i) - rlats_ghg(ii)
dyup= rlats_ghg(ii+1)-xlatsdeg(i)
dyall=rlats_ghg(ii+1)-rlats_ghg(ii)
dydn=dydn /dyall
dyup=1.0-dydn
do k=1,nlev
do j=1,lon2
atmghg(i,j,k)= dydn*ghg_Tintrp(1,ii+1,k)+ dyup*ghg_Tintrp(1,ii,k)
enddo
enddo
endif
enddo ii_loop ! end loop for ii
enddo ! end loop for i-lat2
close (lughg )
if (allocated(xlatsdeg)) deallocate (xlatsdeg)
if (allocated(rlats_ghg)) deallocate (rlats_ghg)
if (allocated(ghg_sav1)) deallocate (ghg_sav1)
if (allocated(ghg_sav2)) deallocate (ghg_sav2)
if (allocated(ghg_Tintrp)) deallocate (ghg_Tintrp)
return
end subroutine read_ch4n2oco
end module ncepgfs_ghg
|
\name{add_rect_track}
\alias{add_rect_track}
\title{
add rectangles to a new or existing track
}
\description{
add rectangles to a new or existing track
}
\usage{
add_rect_track(gr, h1, h2, gp = gpar(), ...)
}
\arguments{
\item{gr}{genomic regions, it can be a data frame or a \code{\link[GenomicRanges:GRanges-class]{GRanges}} object}
\item{h1}{y-positions of one edge (top or bottom) of the rectangles}
\item{h2}{y-positions of the other edge of the rectangles; each rectangle spans from \code{h1} to \code{h2}}
\item{gp}{graphic settings, should be specified by \code{\link[grid]{gpar}}.}
\item{...}{other arguments passed to \code{\link{add_track}}}
}
\value{
No value is returned.
}
\author{
Zuguang Gu <[email protected]>
}
\seealso{
\code{\link{add_heatmap_track}}, \code{\link{add_track}}
}
\examples{
require(circlize)
bed = generateRandomBed(200)
col_fun = colorRamp2(c(-1, 0, 1), c("green", "black", "red"))
gtrellis_layout(track_ylim = range(bed[[4]]), nrow = 3, byrow = FALSE)
add_rect_track(bed, h1 = bed[[4]], h2 = 0,
gp = gpar(col = NA, fill = col_fun(bed[[4]])))
}
|
/-
Copyright (c) 2018 Scott Morrison. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Scott Morrison, Bhavik Mehta
-/
import category_theory.const
import category_theory.discrete_category
/-!
# The category `discrete punit`
We define `star : C ⥤ discrete punit` sending everything to `punit.star`,
show that any two functors to `discrete punit` are naturally isomorphic,
and construct the equivalence `(discrete punit ⥤ C) ≌ C`.
-/
universes v u -- morphism levels before object levels. See note [category_theory universes].
namespace category_theory
namespace functor
variables (C : Type u) [category.{v} C]
/-- The constant functor sending everything to `punit.star`. -/
def star : C ⥤ discrete punit :=
(functor.const _).obj punit.star
variable {C}
/-- Any two functors to `discrete punit` are isomorphic. -/
def punit_ext (F G : C ⥤ discrete punit) : F ≅ G :=
nat_iso.of_components (λ _, eq_to_iso dec_trivial) (λ _ _ _, dec_trivial)
/--
Any two functors to `discrete punit` are *equal*.
You probably want to use `punit_ext` instead of this.
-/
lemma punit_ext' (F G : C ⥤ discrete punit) : F = G :=
functor.ext (λ _, dec_trivial) (λ _ _ _, dec_trivial)
/-- The functor from `discrete punit` sending everything to the given object. -/
abbreviation from_punit (X : C) : discrete punit.{v+1} ⥤ C :=
(functor.const _).obj X
/-- Functors from `discrete punit` are equivalent to the category itself. -/
@[simps]
def equiv : (discrete punit ⥤ C) ≌ C :=
{ functor :=
{ obj := λ F, F.obj punit.star,
map := λ F G θ, θ.app punit.star },
inverse := functor.const _,
unit_iso :=
begin
apply nat_iso.of_components _ _,
intro X,
apply discrete.nat_iso,
rintro ⟨⟩,
apply iso.refl _,
intros,
ext ⟨⟩,
simp,
end,
counit_iso :=
begin
refine nat_iso.of_components iso.refl _,
intros X Y f,
dsimp, simp, -- See note [dsimp, simp].
end }
end functor
end category_theory
|
(*
Copyright (C) 2017 M.A.L. Marques
Copyright (C) 2018 Susi Lehtola
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*)
(* type: gga_exc *)
(* prefix:
gga_x_lsrpbe_params *params;
assert(p->params != NULL);
params = (gga_x_lsrpbe_params * )(p->params);
*)
lsrpbe_f0 := s -> 1 + params_a_kappa * (
1 - exp(-params_a_mu*s^2/params_a_kappa)
) - (params_a_kappa+1)*(1 - exp(-params_a_alpha*s^2)):
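(* In conventional notation the enhancement factor defined above reads
   F(s) = 1 + kappa*(1 - exp(-mu*s^2/kappa)) - (kappa + 1)*(1 - exp(-alpha*s^2)),
   with s = X2S*x the reduced gradient; this merely restates lsrpbe_f0 and
   introduces no new parameters. *)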
lsrpbe_f := x -> lsrpbe_f0(X2S*x):
f := (rs, z, xt, xs0, xs1) -> gga_exchange(lsrpbe_f, rs, z, xs0, xs1):
|
{-# OPTIONS --without-K --safe #-}
module Algebra.Linear.Structures where
open import Algebra.Structures.Field public
open import Algebra.Linear.Structures.VectorSpace public
|
As things have become mighty crafty of late I thought I'd share with you some of my current works and projects in progress.
I've been working on quite a few things already this month, some are stocking up on staple pieces for my Etsy shop and lots are brand new things I've never tried before!
The first thing I wanted to show you is my first ever baby jumper. Not long ago a lovely customer of mine complimented some of my nursery decor, suggesting that she would love to see more gender neutral things on the market, and ever since I've been on a wild adventure to test my skills and incorporate a gender neutral collection into my shop.
This jumper is my first ever attempt at making tiny-person-sized clothes and I'm super pleased with how it came out. I love the colours, I like the pattern, and it's so so uber soft. Now I just need to seam it all together and you'll be able to find it in the shop very soon!
I've also been working on a ton of different pompom crafts. It's amazing how much you can do with pompoms and I'm obsessed with my new pompom makers (you might remember I wrote about these in a post last week), so I can't wait to share all my creations with you.
For now I'm going to keep them a secret, but see if you can guess - some of the clues are in the picture! Let me know in the comments if you think you've figured it out!
I'm also expanding my Celtic knot jewellery collection to include an assortment of new knots, links and colours. Although I think these look a bit of a shocker in the photos (thank you, bright sunlight!) I really think they are going to turn out great.
Last on my list of current projects is a collection of bunting. I think knitted bunting can run a high risk of looking super outdated, but I have a few tricks up my sleeve to really make these special.
So that's everything I have to show you this week.
I'd really like to start doing tutorials on here, so if there's anything you've seen, or anything you can think of, that you've always wanted to know how to make, let me know in the comments and I'd love to start a simple DIY series.
!========================================================================
!
! S P E C F E M 2 D Version 7 . 0
! --------------------------------
!
! Main historical authors: Dimitri Komatitsch and Jeroen Tromp
! Princeton University, USA
! and CNRS / University of Marseille, France
! (there are currently many more authors!)
! (c) Princeton University and CNRS / University of Marseille, April 2014
!
! This software is a computer program whose purpose is to solve
! the two-dimensional viscoelastic anisotropic or poroelastic wave equation
! using a spectral-element method (SEM).
!
! This software is governed by the CeCILL license under French law and
! abiding by the rules of distribution of free software. You can use,
! modify and/or redistribute the software under the terms of the CeCILL
! license as circulated by CEA, CNRS and Inria at the following URL
! "http://www.cecill.info".
!
! As a counterpart to the access to the source code and rights to copy,
! modify and redistribute granted by the license, users are provided only
! with a limited warranty and the software's author, the holder of the
! economic rights, and the successive licensors have only limited
! liability.
!
! In this respect, the user's attention is drawn to the risks associated
! with loading, using, modifying and/or developing or reproducing the
! software by the user in light of its specific status of free software,
! that may mean that it is complicated to manipulate, and that also
! therefore means that it is reserved for developers and experienced
! professionals having in-depth computer knowledge. Users are therefore
! encouraged to load and test the software's suitability as regards their
! requirements in conditions enabling the security of their systems and/or
! data to be ensured and, more generally, to use and operate it in the
! same conditions as regards security.
!
! The full text of the license is available in file "LICENSE".
!
!========================================================================
! for poroelastic solver: update memory variables with fourth-order Runge-Kutta time scheme for attenuation
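! The update below supports three time schemes selected by time_stepping_scheme:
! (1) a Newmark-based update using the precomputed coefficients
!     alphaval/betaval/gammaval,
! (2) a low-dissipation low-dispersion Runge-Kutta (LDDRK) update, and
! (3) a classical 4-stage Runge-Kutta update whose stage forces are combined
!     with the usual weights 1/6 * (k1 + 2*k2 + 2*k3 + k4).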
subroutine compute_attenuation_poro_fluid_part()
use specfem_par, only: nspec,poroelastic,poroelastcoef,kmato,permeability,ibool,viscox_loc,viscoz_loc, &
velocw_poroelastic,time_stepping_scheme,deltat,i_stage,stage_time_scheme, &
rx_viscous,rz_viscous,viscox,viscoz, &
rx_viscous_force_RK,rx_viscous_initial_rk,rz_viscous_force_RK,rz_viscous_initial_rk, &
rx_viscous_LDDRK,rz_viscous_LDDRK,alpha_LDDRK,beta_LDDRK, &
alphaval,betaval,gammaval,theta_e,theta_s,thetainv
implicit none
include "constants.h"
! local variables
integer :: i,j,ispec,iglob
double precision :: etal_f,permlxx,permlxz,permlzz,detk,invpermlxx,invpermlxz,invpermlzz, &
Sn,Snp1,weight_rk
double precision, dimension(3) :: bl_unrelaxed_elastic
! loop over spectral elements
do ispec = 1,nspec
if( poroelastic(ispec) .and. poroelastcoef(2,2,kmato(ispec)) > 0.d0 ) then
etal_f = poroelastcoef(2,2,kmato(ispec))
permlxx = permeability(1,kmato(ispec))
permlxz = permeability(2,kmato(ispec))
permlzz = permeability(3,kmato(ispec))
! compute the inverse of the permeability matrix k
detk = permlxx * permlzz - permlxz * permlxz
if( detk /= ZERO ) then
invpermlxx = permlzz/detk
invpermlxz = -permlxz/detk
invpermlzz = permlxx/detk
else
stop 'Permeability matrix is not invertible'
endif
! relaxed viscous coef
bl_unrelaxed_elastic(1) = etal_f*invpermlxx
bl_unrelaxed_elastic(2) = etal_f*invpermlxz
bl_unrelaxed_elastic(3) = etal_f*invpermlzz
do j=1,NGLLZ
do i=1,NGLLX
iglob = ibool(i,j,ispec)
viscox_loc(i,j) = velocw_poroelastic(1,iglob) * bl_unrelaxed_elastic(1) + &
velocw_poroelastic(2,iglob) * bl_unrelaxed_elastic(2)
viscoz_loc(i,j) = velocw_poroelastic(1,iglob) * bl_unrelaxed_elastic(2) + &
velocw_poroelastic(2,iglob) * bl_unrelaxed_elastic(3)
if( time_stepping_scheme == 1 ) then
! evolution rx_viscous
Sn = - (1.d0 - theta_e/theta_s)/theta_s*viscox(i,j,ispec)
Snp1 = - (1.d0 - theta_e/theta_s)/theta_s*viscox_loc(i,j)
rx_viscous(i,j,ispec) = alphaval * rx_viscous(i,j,ispec) + betaval * Sn + gammaval * Snp1
! evolution rz_viscous
Sn = - (1.d0 - theta_e/theta_s)/theta_s*viscoz(i,j,ispec)
Snp1 = - (1.d0 - theta_e/theta_s)/theta_s*viscoz_loc(i,j)
rz_viscous(i,j,ispec) = alphaval * rz_viscous(i,j,ispec) + betaval * Sn + gammaval * Snp1
endif
if( time_stepping_scheme == 2 ) then
Sn = - (1.d0 - theta_e/theta_s)/theta_s*viscox(i,j,ispec)
rx_viscous_LDDRK(i,j,ispec) = alpha_LDDRK(i_stage) * rx_viscous_LDDRK(i,j,ispec) + &
deltat * (Sn + thetainv * rx_viscous(i,j,ispec))
rx_viscous(i,j,ispec)= rx_viscous(i,j,ispec)+beta_LDDRK(i_stage) * rx_viscous_LDDRK(i,j,ispec)
Sn = - (1.d0 - theta_e/theta_s)/theta_s*viscoz(i,j,ispec)
rz_viscous_LDDRK(i,j,ispec)= alpha_LDDRK(i_stage) * rz_viscous_LDDRK(i,j,ispec)+&
deltat * (Sn + thetainv * rz_viscous(i,j,ispec))
rz_viscous(i,j,ispec)= rz_viscous(i,j,ispec)+beta_LDDRK(i_stage) * rz_viscous_LDDRK(i,j,ispec)
endif
if(time_stepping_scheme == 3) then
Sn = - (1.d0 - theta_e/theta_s)/theta_s*viscox(i,j,ispec)
rx_viscous_force_RK(i,j,ispec,i_stage) = deltat * (Sn + thetainv * rx_viscous(i,j,ispec))
if( i_stage==1 .or. i_stage==2 .or. i_stage==3 ) then
if( i_stage == 1 )weight_rk = 0.5d0
if( i_stage == 2 )weight_rk = 0.5d0
if( i_stage == 3 )weight_rk = 1.0d0
if( i_stage==1 ) then
rx_viscous_initial_rk(i,j,ispec) = rx_viscous(i,j,ispec)
endif
rx_viscous(i,j,ispec) = rx_viscous_initial_rk(i,j,ispec) + &
weight_rk * rx_viscous_force_RK(i,j,ispec,i_stage)
else if( i_stage==4 ) then
rx_viscous(i,j,ispec) = rx_viscous_initial_rk(i,j,ispec) + &
1.0d0 / 6.0d0 * (rx_viscous_force_RK(i,j,ispec,1) + &
2.0d0 * rx_viscous_force_RK(i,j,ispec,2) + &
2.0d0 * rx_viscous_force_RK(i,j,ispec,3) + &
rx_viscous_force_RK(i,j,ispec,4))
endif
Sn = - (1.d0 - theta_e/theta_s)/theta_s*viscoz(i,j,ispec)
rz_viscous_force_RK(i,j,ispec,i_stage) = deltat * (Sn + thetainv * rz_viscous(i,j,ispec))
if( i_stage==1 .or. i_stage==2 .or. i_stage==3 ) then
if( i_stage == 1 )weight_rk = 0.5d0
if( i_stage == 2 )weight_rk = 0.5d0
if( i_stage == 3 )weight_rk = 1.0d0
if( i_stage==1 ) then
rz_viscous_initial_rk(i,j,ispec) = rz_viscous(i,j,ispec)
endif
rz_viscous(i,j,ispec) = rz_viscous_initial_rk(i,j,ispec) + &
weight_rk * rz_viscous_force_RK(i,j,ispec,i_stage)
else if(i_stage==4) then
rz_viscous(i,j,ispec) = rz_viscous_initial_rk(i,j,ispec) + &
1.0d0 / 6.0d0 * (rz_viscous_force_RK(i,j,ispec,1) + &
2.0d0 * rz_viscous_force_RK(i,j,ispec,2) + &
2.0d0 * rz_viscous_force_RK(i,j,ispec,3) + &
rz_viscous_force_RK(i,j,ispec,4))
endif
endif
enddo
enddo
if( stage_time_scheme == 1 ) then
! save visco for Runge-Kutta scheme when used together with Newmark
viscox(:,:,ispec) = viscox_loc(:,:)
viscoz(:,:,ispec) = viscoz_loc(:,:)
endif
endif ! end of poroelastic element loop
enddo ! end of spectral element loop
end subroutine compute_attenuation_poro_fluid_part
|
module Gspider.Types.RestrictedCharString
import Data.So
%access private
||| Returns true if the given list of characters `str` contains only characters specified in `chars`.
|||
||| @chars the list of permitted characters
||| @str the string to check
madeOf' : (chars : List Char) -> (str : List Char) -> Bool
madeOf' chars [] = True
madeOf' chars (x :: xs) = elem x chars && madeOf' chars xs
||| Returns true if the given string `str` contains only characters specified in `chars`.
|||
||| @chars the list of permitted characters
||| @str the string to check
export
madeOf : (chars : List Char) -> (str : String) -> Bool
madeOf chars str = madeOf' chars (unpack str)
||| Strings that are restricted to only a specific set of characters.
|||
||| @allowed the list of characters allowed in the string
public export
data RestrictedCharString : (allowed : List Char) -> Type where
||| Constructs a restricted character set string with the specified value.
|||
||| @val the value of the string
MkRestrictedCharString : (val : String) ->
{auto prf : So (madeOf allowed val)} ->
RestrictedCharString allowed
||| Attempts to restrict a string to contain only a specific set of characters.
|||
||| @chars the list of characters allowed in the string
||| @str the string to attempt to restrict
export
restrictStr : (chars : List Char) -> (str : String) -> Maybe (RestrictedCharString chars)
restrictStr chars str = case choose (madeOf chars str) of
Left _ => Just (MkRestrictedCharString str)
Right _ => Nothing
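-- A minimal usage sketch (the binding name and character list below are
-- assumptions for illustration, not part of this module):
--
--   binaryOnly : Maybe (RestrictedCharString ['0', '1'])
--   binaryOnly = restrictStr ['0', '1'] "0110"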
||| Converts a list of strings to a list of strings with a restricted character set, where possible.
|||
||| @chars the set of characters to use to restrict strings
||| @strs the list of strings to convert
export
convertToRestricted : (chars : List Char) -> (strs : List String) -> List (RestrictedCharString chars)
convertToRestricted chars strs = catMaybes (map (restrictStr chars) strs)
||| Converts a string restricted to containing only a specific set of characters to an unrestricted string.
|||
||| @str the string to unrestrict
export
unrestrictStr : (str : RestrictedCharString _) -> String
unrestrictStr (MkRestrictedCharString str) = str
||| Converts a list of strings containing only a specific set of characters to a list of unrestricted strings.
|||
||| @strs the list of strings to convert
export
convertFromRestricted : (strs : List (RestrictedCharString _)) -> List String
convertFromRestricted strs = map unrestrictStr strs
||| Equality for restricted character set strings.
|||
||| @x some restricted character set string
||| @y some restricted character set string
equal : (x : RestrictedCharString s) -> (y : RestrictedCharString s) -> Bool
equal (MkRestrictedCharString u) (MkRestrictedCharString v) = u == v
||| Inequality for restricted character set strings.
|||
||| @x some restricted character set string
||| @y some restricted character set string
notEqual : (x : RestrictedCharString s) -> (y : RestrictedCharString s) -> Bool
notEqual (MkRestrictedCharString u) (MkRestrictedCharString v) = u /= v
||| Implement equality for restricted character set strings.
public export
Eq (RestrictedCharString s) where
(==) = equal
(/=) = notEqual
||| Comparison for restricted character set strings.
|||
||| @x some restricted character set string
||| @y some restricted character set string
compare' : (x : RestrictedCharString s) -> (y : RestrictedCharString s) -> Ordering
compare' (MkRestrictedCharString u) (MkRestrictedCharString v) = compare u v
||| Implement orderability for restricted character set strings.
public export
Ord (RestrictedCharString s) where
compare x y = compare' x y
|
(* ** Construction of standard models *)
From Undecidability.FOL Require Import Syntax.Facts Semantics.Tarski.FullFacts ZF.
From Undecidability.FOL.Sets Require Import ZF binZF.
From Undecidability.FOL.Sets.Models Require Import Aczel Aczel_CE Aczel_TD.
From Coq Require Import Lia.
(* ** ZF-Models *)
Declare Scope sem.
Open Scope sem.
Arguments Vector.nil {_}, _.
Arguments Vector.cons {_} _ {_} _, _ _ _ _.
Notation "x ∈ y" := (@i_atom _ _ _ _ elem (Vector.cons x (Vector.cons y Vector.nil))) (at level 35) : sem.
Notation "x ≡ y" := (@i_atom _ _ _ _ equal (Vector.cons x (Vector.cons y Vector.nil))) (at level 35) : sem.
Notation "x ⊆ y" := (forall z, z ∈ x -> z ∈ y) (at level 34) : sem.
Notation "∅" := (@i_func ZF_func_sig ZF_pred_sig _ _ ZFSignature.eset Vector.nil) : sem.
Notation "'ω'" := (@i_func ZF_func_sig _ _ _ ZFSignature.om Vector.nil) : sem.
Notation "{ x ; y }" := (@i_func ZF_func_sig _ _ _ ZFSignature.pair (Vector.cons x (Vector.cons y Vector.nil))) (at level 31) : sem.
Notation "⋃ x" := (@i_func ZF_func_sig _ _ _ ZFSignature.union (Vector.cons x Vector.nil)) (at level 32) : sem.
Notation "'PP' x" := (@i_func ZF_func_sig _ _ _ ZFSignature.power (Vector.cons x Vector.nil)) (at level 31) : sem.
Notation "x ∪ y" := (⋃ {x; y}) (at level 32) : sem.
Notation "'σ' x" := (x ∪ {x; x}) (at level 32) : sem.
(* ** Internal axioms *)
Section ZF.
Context { V : Type }.
Context { M : interp V }.
Hypothesis M_ZF : forall rho, rho ⊫ ZF'.
Hypothesis VIEQ : extensional M.
Lemma M_ext x y :
x ⊆ y -> y ⊆ x -> x = y.
Proof using VIEQ M_ZF.
rewrite <- VIEQ. apply (@M_ZF (fun _ => ∅) ax_ext). cbn; tauto.
Qed.
Lemma M_eset x :
~ x ∈ ∅.
Proof using VIEQ M_ZF.
refine (@M_ZF (fun _ => ∅) ax_eset _ x). cbn; tauto.
Qed.
Lemma M_pair x y z :
x ∈ {y; z} <-> x = y \/ x = z.
Proof using VIEQ M_ZF.
rewrite <- !VIEQ. apply (@M_ZF (fun _ => ∅) ax_pair). cbn; tauto.
Qed.
Lemma M_union x y :
x ∈ ⋃ y <-> exists z, z ∈ y /\ x ∈ z.
Proof using M_ZF.
apply (@M_ZF (fun _ => ∅) ax_union). cbn; tauto.
Qed.
Lemma M_power x y :
x ∈ PP y <-> x ⊆ y.
Proof using M_ZF.
apply (@M_ZF (fun _ => ∅) ax_power). cbn; tauto.
Qed.
Definition M_inductive x :=
∅ ∈ x /\ forall y, y ∈ x -> σ y ∈ x.
Lemma M_om1 :
M_inductive ω.
Proof using M_ZF.
apply (@M_ZF (fun _ => ∅) ax_om1). cbn; tauto.
Qed.
Lemma M_om2 x :
M_inductive x -> ω ⊆ x.
Proof using M_ZF.
apply (@M_ZF (fun _ => ∅) ax_om2). cbn; tauto.
Qed.
Definition agrees_fun phi (P : V -> Prop) :=
forall x rho, P x <-> (x.:rho) ⊨ phi.
Definition def_pred (P : V -> Prop) :=
exists phi rho, forall d, P d <-> (d.:rho) ⊨ phi.
Lemma M_sep P x :
(forall phi rho, rho ⊨ ax_sep phi) -> def_pred P -> exists y, forall z, z ∈ y <-> z ∈ x /\ P z.
Proof.
cbn. intros H [phi [rho Hp]].
destruct (H phi rho x) as [y H']; clear H.
exists y. intros z. specialize (H' z). setoid_rewrite sat_comp in H'.
rewrite (sat_ext _ _ (xi:=z.:rho)) in H'; try now intros [].
firstorder.
Qed.
Definition M_is_rep R x y :=
forall v, v ∈ y <-> exists u, u ∈ x /\ R u v.
Lemma is_rep_unique R x y y' :
M_is_rep R x y -> M_is_rep R x y' -> y = y'.
Proof using VIEQ M_ZF.
intros H1 H2. apply M_ext; intros v.
- intros H % H1. now apply H2.
- intros H % H2. now apply H1.
Qed.
Definition functional (R : V -> V -> Prop) :=
forall x y y', R x y -> R x y' -> y = y'.
Definition def_rel (R : V -> V -> Prop) :=
exists phi rho, forall x y, R x y <-> (x.:y.:rho) ⊨ phi.
Lemma M_rep R x :
(forall phi rho, rho ⊨ ax_rep phi) -> def_rel R -> functional R -> exists y, M_is_rep R x y.
Proof using VIEQ.
intros H1 [phi [rho Hp]]. intros H2.
cbn in H1. specialize (H1 phi rho). destruct H1 with x as [y Hy].
- intros a b b'. setoid_rewrite sat_comp.
erewrite sat_ext. rewrite <- (Hp a b). 2: now intros [|[]].
erewrite sat_ext. rewrite <- (Hp a b'). 2: now intros [|[]].
rewrite VIEQ. apply H2.
- exists y. intros v. split.
+ intros [u[U1 U2]] % Hy. exists u. split; trivial.
setoid_rewrite sat_comp in U2. rewrite sat_ext in U2. rewrite (Hp u v). apply U2. now intros [|[]]; cbn.
+ intros [u[U1 U2]]. apply Hy. exists u. split; trivial.
setoid_rewrite sat_comp. rewrite sat_ext. rewrite <- (Hp u v). apply U2. now intros [|[]]; cbn.
Qed.
(* ** Basic ZF *)
Definition M_sing x :=
{x; x}.
Definition M_opair x y := ({{x; x}; {x; y}}).
Lemma binunion_el x y z :
x ∈ y ∪ z <-> x ∈ y \/ x ∈ z.
Proof using VIEQ M_ZF.
split.
- intros [u [H1 H2]] % M_union.
apply M_pair in H1 as [->| ->]; auto.
- intros [H|H].
+ apply M_union. exists y. rewrite M_pair. auto.
+ apply M_union. exists z. rewrite M_pair. auto.
Qed.
Lemma sing_el x y :
x ∈ M_sing y <-> x = y.
Proof using VIEQ M_ZF.
split.
- now intros [H|H] % M_pair.
- intros ->. apply M_pair. now left.
Qed.
Lemma M_pair1 x y :
x ∈ {x; y}.
Proof using VIEQ M_ZF.
apply M_pair. now left.
Qed.
Lemma M_pair2 x y :
y ∈ {x; y}.
Proof using VIEQ M_ZF.
apply M_pair. now right.
Qed.
Lemma sing_pair x y z :
{x; x} = {y; z} -> x = y /\ x = z.
Proof using VIEQ M_ZF.
intros He. split.
- assert (H : y ∈ {y; z}) by apply M_pair1.
rewrite <- He in H. apply M_pair in H. intuition.
- assert (H : z ∈ {y; z}) by apply M_pair2.
rewrite <- He in H. apply M_pair in H. intuition.
Qed.
Lemma opair_inj1 x x' y y' :
M_opair x y = M_opair x' y' -> x = x'.
Proof using VIEQ M_ZF.
intros He. assert (H : {x; x} ∈ M_opair x y) by apply M_pair1.
rewrite He in H. apply M_pair in H as [H|H]; apply (sing_pair H).
Qed.
Lemma opair_inj2 x x' y y' :
M_opair x y = M_opair x' y' -> y = y'.
Proof using VIEQ M_ZF.
intros He. assert (y = x' \/ y = y') as [->| ->]; trivial.
- assert (H : {x; y} ∈ M_opair x y) by apply M_pair2.
rewrite He in H. apply M_pair in H as [H|H].
+ symmetry in H. apply sing_pair in H. intuition.
+ assert (H' : y ∈ {x; y}) by apply M_pair2.
rewrite H in H'. now apply M_pair in H'.
- assert (x = x') as -> by now apply opair_inj1 in He.
assert (H : {x'; y'} ∈ M_opair x' y') by apply M_pair2.
rewrite <- He in H. apply M_pair in H as [H|H]; apply (sing_pair (eq_sym H)).
Qed.
Lemma opair_inj x x' y y' :
M_opair x y = M_opair x' y' -> x = x' /\ y = y'.
Proof using VIEQ M_ZF.
intros H. split.
- eapply opair_inj1; eassumption.
- eapply opair_inj2; eassumption.
Qed.
Lemma sigma_el x y :
x ∈ σ y <-> x ∈ y \/ x = y.
Proof using VIEQ M_ZF.
split.
- intros [H|H] % binunion_el; auto.
apply sing_el in H. now right.
- intros [H| ->]; apply binunion_el; auto.
right. now apply sing_el.
Qed.
Lemma sigma_eq x :
x ∈ σ x.
Proof using VIEQ M_ZF.
apply sigma_el. now right.
Qed.
Lemma sigma_sub x :
x ⊆ σ x.
Proof using VIEQ M_ZF.
intros y H. apply sigma_el. now left.
Qed.
Lemma binunion_eset x :
x = ∅ ∪ x.
Proof using VIEQ M_ZF.
apply M_ext.
- intros y H. apply binunion_el. now right.
- intros y [H|H] % binunion_el.
+ now apply M_eset in H.
+ assumption.
Qed.
Lemma pair_com x y :
{x; y} = {y; x}.
Proof using VIEQ M_ZF.
apply M_ext; intros z [->| ->] % M_pair; apply M_pair; auto.
Qed.
Lemma binunion_com x y :
x ∪ y = y ∪ x.
Proof using VIEQ M_ZF.
now rewrite pair_com.
Qed.
Lemma binunionl a x y :
a ∈ x -> a ∈ x ∪ y.
Proof using VIEQ M_ZF.
intros H. apply binunion_el. now left.
Qed.
Lemma binunionr a x y :
a ∈ y -> a ∈ x ∪ y.
Proof using VIEQ M_ZF.
intros H. apply binunion_el. now right.
Qed.
Hint Resolve binunionl binunionr : core.
Lemma binunion_assoc x y z :
(x ∪ y) ∪ z = x ∪ (y ∪ z).
Proof using VIEQ M_ZF.
apply M_ext; intros a [H|H] % binunion_el; eauto.
- apply binunion_el in H as [H|H]; eauto.
- apply binunion_el in H as [H|H]; eauto.
Qed.
(* ** Numerals *)
Fixpoint numeral n :=
match n with
| O => ∅
| S n => σ (numeral n)
end.
Lemma numeral_omega n :
numeral n ∈ ω.
Proof using M_ZF.
induction n; cbn; now apply M_om1.
Qed.
Definition trans x :=
forall y, y ∈ x -> y ⊆ x.
Lemma numeral_trans n :
trans (numeral n).
Proof using VIEQ M_ZF.
induction n; cbn.
- intros x H. now apply M_eset in H.
- intros x [H| ->] % sigma_el; try apply sigma_sub.
apply IHn in H. intuition eauto using sigma_sub.
Qed.
Lemma numeral_wf n :
~ numeral n ∈ numeral n.
Proof using VIEQ M_ZF.
induction n.
- apply M_eset.
- intros [H|H] % sigma_el; fold numeral in *.
+ apply IHn. eapply numeral_trans; eauto. apply sigma_eq.
+ assert (numeral n ∈ numeral (S n)) by apply sigma_eq.
now rewrite H in H0.
Qed.
Lemma numeral_lt k l :
k < l -> numeral k ∈ numeral l.
Proof using VIEQ M_ZF.
induction 1; cbn; apply sigma_el; auto.
Qed.
Lemma numeral_inj k l :
numeral k = numeral l -> k = l.
Proof using VIEQ M_ZF.
intros Hk. assert (k = l \/ k < l \/ l < k) as [H|[H|H]] by lia; trivial.
all: apply numeral_lt in H; rewrite Hk in H; now apply numeral_wf in H.
Qed.
Definition standard :=
forall x, x ∈ ω -> exists n, x ≡ numeral n.
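(* A model is standard when every internal natural number, i.e. every element
   of ω, is internally equal to the interpretation of some numeral. *)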
End ZF.
Arguments standard {_} _.
(* *** Extensional model of ZF using CE and TD *)
Section ZFM.
Context { Delta : extensional_normaliser }.
Instance SET_interp : interp SET.
Proof.
split; intros [].
- intros _. exact Sempty.
- intros v. exact (Supair (Vector.hd v) (Vector.hd (Vector.tl v))).
- intros v. exact (Sunion (Vector.hd v)).
- intros v. exact (Spower (Vector.hd v)).
- intros _. exact Som.
- intros v. exact (IN (Vector.hd v) (Vector.hd (Vector.tl v))).
- intros v. exact (Vector.hd v = Vector.hd (Vector.tl v)).
Defined.
Lemma SET_ext :
extensional SET_interp.
Proof.
intros x y. reflexivity.
Qed.
Lemma Anumeral_numeral n :
NS (Anumeral n) = numeral n.
Proof.
induction n; trivial.
cbn [numeral]. rewrite <- IHn.
apply Aeq_NS_eq. cbn -[Anumeral].
now rewrite <- !CR1.
Qed.
Lemma SET_standard :
standard SET_interp.
Proof.
intros x. cbn. destruct x. unfold Som, NS, IN. cbn.
rewrite <- (CR1 Aom). intros [n Hn]. exists n.
rewrite <- Anumeral_numeral. now apply Aeq_p1_NS_eq.
Qed.
Lemma SET_ZF' rho :
rho ⊫ ZF'.
Proof.
intros phi [<-|[<-|[<-|[<-|[<-|[<-|[<-|[]]]]]]]]; cbn.
- intros X Y H1 H2. now apply set_ext.
- apply emptyE.
- intros X Y Z. split; apply upairAx.
- intros X Y. split; apply unionAx.
- intros X Y. split; apply powerAx.
- apply omAx1.
- apply omAx2.
Qed.
Lemma SET_sep phi rho :
rho ⊨ ax_sep phi.
Proof.
intros x. cbn.
exists (Ssep (fun y => (y .: rho) ⊨ phi) x).
intros y. rewrite sepAx.
split; intros [H1 H2]; split; trivial.
- setoid_rewrite sat_comp. eapply sat_ext; try apply H2. now intros [].
- setoid_rewrite sat_comp in H2. eapply sat_ext; try apply H2. now intros [].
Qed.
Lemma SET_rep phi rho :
rho ⊨ ax_rep phi.
Proof.
intros H x. cbn.
exists (Srep (fun u v => (u .: (v .: rho)) ⊨ phi) x).
intros y. rewrite repAx.
- split; intros [z[H1 H2]]; exists z; split; trivial.
+ setoid_rewrite sat_comp. eapply sat_ext; try apply H2. now intros [|[]].
+ setoid_rewrite sat_comp in H2. eapply sat_ext; try apply H2. now intros [|[]].
- intros a b b' H1 H2. apply (H a b b'); fold sat; eapply sat_comp, sat_ext.
2: apply H1. 3: apply H2. all: intros [|[]]; reflexivity.
Qed.
End ZFM.
Definition TD :=
exists delta : (Acz -> Prop) -> Acz, forall P, (exists s : Acz, forall t : Acz, P t <-> Aeq t s) -> P (delta P).
Lemma TD_CE_normaliser :
CE -> TD -> inhabited extensional_normaliser.
Proof.
intros ce [delta H]. split. unshelve econstructor.
- exact delta.
- exact H.
- intros P P' HP. now rewrite (ce _ _ HP).
- intros s H1 H2. apply Aczel_CE.PI, ce.
Qed.
Lemma normaliser_model :
CE -> TD -> exists V (M : interp V), extensional M /\ standard M /\ forall rho psi, ZF psi -> rho ⊨ psi.
Proof.
intros H1 H2. assert (inhabited extensional_normaliser) as [H] by now apply TD_CE_normaliser.
exists SET, SET_interp. split; try apply SET_ext.
split; try apply SET_standard. intros rho psi [].
- now apply SET_ZF'.
- apply SET_sep.
- apply SET_rep.
Qed.
Lemma normaliser_model_eq :
CE -> TD -> exists V (M : interp V), extensional M /\ standard M /\ forall rho psi, ZFeq psi -> rho ⊨ psi.
Proof.
intros H1 H2. assert (inhabited extensional_normaliser) as [H] by now apply TD_CE_normaliser.
exists SET, SET_interp. split; try apply SET_ext.
split; try apply SET_standard. intros rho psi [].
- destruct H0 as [<-|[<-|[<-|[<-|H0]]]]; cbn; try congruence. now apply SET_ZF'.
- apply SET_sep.
- apply SET_rep.
Qed.
(* *** Extensional model of Z using CE *)
Section ZM.
Hypothesis ce : CE.
Instance SET_interp' : interp SET'.
Proof using ce.
split; intros [].
- intros _. exact empty.
- intros v. exact (upair ce (Vector.hd v) (Vector.hd (Vector.tl v))).
- intros v. exact (union ce (Vector.hd v)).
- intros v. exact (power ce (Vector.hd v)).
- intros _. exact om.
- intros v. exact (ele (Vector.hd v) (Vector.hd (Vector.tl v))).
- intros v. exact (Vector.hd v = Vector.hd (Vector.tl v)).
Defined.
Lemma SET_ext' :
extensional SET_interp'.
Proof.
intros x y. reflexivity.
Qed.
Lemma Anumeral_numeral' n :
classof (Anumeral n) = numeral n.
Proof.
induction n.
- reflexivity.
- cbn [Anumeral]. rewrite <- (succ_eq ce).
rewrite IHn. reflexivity.
Qed.
Lemma SET_standard' :
standard SET_interp'.
Proof.
intros x. cbn. destruct (classof_ex ce x) as [s ->].
intros [n Hn] % classof_ele. exists n.
rewrite <- Anumeral_numeral'. now apply classof_eq.
Qed.
Lemma SET'_ZF' rho :
rho ⊫ ZF'.
Proof.
intros phi [<-|[<-|[<-|[<-|[<-|[<-|[<-|[]]]]]]]]; cbn.
- intros X Y H1 H2. now apply Aczel_CE.set_ext.
- now apply Aczel_CE.emptyE.
- intros X Y Z. split; apply Aczel_CE.upairAx.
- intros X Y. split; apply Aczel_CE.unionAx.
- intros X Y. split; apply Aczel_CE.powerAx.
- apply om_Ax1.
- apply om_Ax2.
Qed.
Lemma SET_sep' phi rho :
rho ⊨ ax_sep phi.
Proof.
intros x. cbn.
exists (sep ce (fun y => (y .: rho) ⊨ phi) x).
intros y. rewrite Aczel_CE.sepAx.
split; intros [H1 H2]; split; trivial.
- setoid_rewrite sat_comp. eapply sat_ext; try apply H2. now intros [].
- setoid_rewrite sat_comp in H2. eapply sat_ext; try apply H2. now intros [].
Qed.
End ZM.
Lemma extensionality_model :
CE -> exists V (M : interp V), extensional M /\ standard M /\ forall rho phi, Z phi -> rho ⊨ phi.
Proof.
intros ce. exists SET', (SET_interp' ce). split; try apply SET_ext'.
split; try apply SET_standard'. intros rho phi [].
- now apply SET'_ZF'.
- apply SET_sep'.
Qed.
Lemma extensionality_model_eq :
CE -> exists V (M : interp V), extensional M /\ standard M /\ forall rho phi, Zeq phi -> rho ⊨ phi.
Proof.
intros ce. exists SET', (SET_interp' ce). split; try apply SET_ext'.
split; try apply SET_standard'. intros rho phi [].
- destruct H as [<-|[<-|[<-|[<-|H0]]]]; cbn; try congruence.
now intros x x' y y' -> ->. now apply SET'_ZF'.
- apply SET_sep'.
Qed.
(* *** Intensional model of Z' without assumptions *)
Section IM.
Instance Acz_interp : interp Acz.
Proof.
split; intros [].
- intros _. exact AEmpty.
- intros v. exact (Aupair (Vector.hd v) (Vector.hd (Vector.tl v))).
- intros v. exact (Aunion (Vector.hd v)).
- intros v. exact (Apower (Vector.hd v)).
- intros _. exact Aom.
- intros v. exact (Ain (Vector.hd v) (Vector.hd (Vector.tl v))).
- intros v. exact (Aeq (Vector.hd v) (Vector.hd (Vector.tl v))).
Defined.
Lemma Acz_standard :
standard Acz_interp.
Proof.
intros s [n Hn]. cbn in Hn. exists n. apply Hn.
Qed.
Lemma Acz_ZFeq' rho :
rho ⊫ ZFeq'.
Proof.
intros phi [<-|[<-|[<-|[<-|[<-|[<-|[<-|[<-|[<-|[<-|[<-|[]]]]]]]]]]]]; cbn.
- apply Aeq_ref.
- apply Aeq_sym.
- apply Aeq_tra.
- intros s t s' t' -> ->. tauto.
- apply Aeq_ext.
- apply AEmptyAx.
- intros X Y Z. apply AupairAx.
- intros X Y. apply AunionAx.
- intros X Y. apply ApowerAx.
- apply AomAx1.
- apply AomAx2.
Qed.
End IM.
Lemma intensional_model :
exists V (M : interp V), standard M /\ forall rho, rho ⊫ ZFeq'.
Proof.
exists Acz, Acz_interp. split; try apply Acz_standard. apply Acz_ZFeq'.
Qed.
|
[STATEMENT]
lemma cindex_polyE_rec:
fixes p q::"real poly"
assumes "a < b" "coprime p q"
shows "cindex_polyE a b q p = cross_alt q p a b/2 + cindex_polyE a b (- (p mod q)) q"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. cindex_polyE a b q p = real_of_int (cross_alt q p a b) / 2 + cindex_polyE a b (- (p mod q)) q
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. cindex_polyE a b q p = real_of_int (cross_alt q p a b) / 2 + cindex_polyE a b (- (p mod q)) q
[PROOF STEP]
note cindex_polyE_inverse_add_cross[OF assms]
[PROOF STATE]
proof (state)
this:
cindex_polyE a b q p + cindex_polyE a b p q = real_of_int (cross_alt p q a b) / 2
goal (1 subgoal):
1. cindex_polyE a b q p = real_of_int (cross_alt q p a b) / 2 + cindex_polyE a b (- (p mod q)) q
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
cindex_polyE a b q p + cindex_polyE a b p q = real_of_int (cross_alt p q a b) / 2
goal (1 subgoal):
1. cindex_polyE a b q p = real_of_int (cross_alt q p a b) / 2 + cindex_polyE a b (- (p mod q)) q
[PROOF STEP]
have "cindex_polyE a b (- (p mod q)) q = - cindex_polyE a b p q"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. cindex_polyE a b (- (p mod q)) q = - cindex_polyE a b p q
[PROOF STEP]
using cindex_polyE_mod cindex_polyE_smult_1[of a b "-1"]
[PROOF STATE]
proof (prove)
using this:
cindex_polyE ?a ?b ?q ?p = cindex_polyE ?a ?b (?q mod ?p) ?p
cindex_polyE a b (smult (- 1) ?q) ?p = sgn (- 1) * cindex_polyE a b ?q ?p
goal (1 subgoal):
1. cindex_polyE a b (- (p mod q)) q = - cindex_polyE a b p q
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
cindex_polyE a b (- (p mod q)) q = - cindex_polyE a b p q
goal (1 subgoal):
1. cindex_polyE a b q p = real_of_int (cross_alt q p a b) / 2 + cindex_polyE a b (- (p mod q)) q
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
cindex_polyE a b q p + cindex_polyE a b p q = real_of_int (cross_alt p q a b) / 2
cindex_polyE a b (- (p mod q)) q = - cindex_polyE a b p q
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
cindex_polyE a b q p + cindex_polyE a b p q = real_of_int (cross_alt p q a b) / 2
cindex_polyE a b (- (p mod q)) q = - cindex_polyE a b p q
goal (1 subgoal):
1. cindex_polyE a b q p = real_of_int (cross_alt q p a b) / 2 + cindex_polyE a b (- (p mod q)) q
[PROOF STEP]
by (auto simp add:field_simps cross_alt_poly_commute)
[PROOF STATE]
proof (state)
this:
cindex_polyE a b q p = real_of_int (cross_alt q p a b) / 2 + cindex_polyE a b (- (p mod q)) q
goal:
No subgoals!
[PROOF STEP]
qed |
module LibFaceDetection
using Colors
using FixedPointNumbers
using GeometryBasics
using libfacedetection_jll
# Write your package code here.
include("cwrapper.jl")
export detect_faces
end
|
[STATEMENT]
lemma "filterlim (\<lambda>x::real. x / (ln (x powr (ln x powr (ln 2 / ln x))))) at_top at_top"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. LIM x at_top. x / ln (x powr ln x powr (ln 2 / ln x)) :> at_top
[PROOF STEP]
by real_asymp |
-- Idris2
import System.Concurrency
||| Test basic lock/acquire and unlock/release functionality from child thread
main : IO ()
main =
do m <- makeMutex
t <- fork $ do mutexAcquire m
putStrLn "Child acquired mutex"
mutexRelease m
putStrLn "Child released mutex"
threadWait t
|
lemma scaleR_2: fixes x :: "'a::real_vector" shows "scaleR 2 x = x + x" |