Recent events have amply confirmed worries about the threat to Christians in North Africa and the Middle East following the recent regime changes.
"Who will save my beloved Syria?" This was the impassioned plea by the Jesuit Bishop Antoine Audo, president of Caritas Syria, published in the Caritas blog on June 21. He lamented the situation of ongoing violence, economic crisis and instability.
A short time later came news of the killing of Father François Mourad, a Syrian hermit who was a guest at the Franciscan monastery of St Anthony of Padua in Al-Ghassaniyah, Syria. According to a June 24 report by Asia News, it is uncertain whether he was killed by a stray bullet or deliberately by the Islamic fighters who had attacked the monastery.
His death followed a vigil held last Saturday for the two archbishops kidnapped in Syria in April, whose fate is still unknown. The Patriarch of Antioch, John Yazigi, led around 300 people in the vigil near the Lebanese city of Tripoli to call for the release of Greek Orthodox Archbishop Paul Yazigi, and Syrian Orthodox Archbishop Yohanna Ibrahim, according to a report by Reuters on June 22.
This is taking place while NATO countries are moving toward supplying the insurgent forces in Syria with arms. At the same time, according to a June 22 front-page (A1) report in the New York Times, there is strong evidence that arms from Libya are being sent to rebels in Syria.
"The flow is an important source of weapons for the uprising and a case of bloody turnabout, as the inheritors of one strongman's arsenal use them in the fight against another," the article noted.
Meanwhile, Christians are caught in the midst of this conflict and their fate does not seem to be of concern to those who are supplying arms.
The situation in Egypt is also proving difficult for Christians. In recent months the New York Times has published several articles referring to attacks on Christians.
"The leader of the Coptic Orthodox Church accused President Mohamed Morsi's government on Tuesday of 'delinquency' and 'misjudgments' for failing to prevent sectarian street-fighting that escalated into an attack on the church's main cathedral after a funeral mass over the weekend, leaving at least six Christians dead," was the opening paragraph of an April 9 report by the New York Times.
Amnesty International has also entered the debate, with a June 11 statement criticizing the increase in blasphemy cases.
The press release referred to a couple of recent court decisions, where Coptic Christians were convicted of blasphemy. It also said that there have been "numerous recent reports of others accused and convicted of blasphemy in Egypt."
"Bloggers and media professionals whose ideas are 'deemed offensive' as well as Coptic Christians – particularly in Upper Egypt – make up the majority of those targeted," according to Amnesty International.
Human Rights Watch, another prominent rights organization, had previously expressed concern over the Arab Spring in its World Report 2013, published in February.
In his introduction to the report Kenneth Roth, the organization's executive director, commented: "Two years into the Arab Spring, euphoria seems a thing of the past."
There are fears that the biggest winners of the uprising, the Islamists, will limit human rights, he added.
He also said there are doubts about the role of Sharia law in Egypt and what this will mean for human rights, including religious freedom.
Regarding Libya, Roth observed that there is a weak government, incapable of ensuring that human rights are respected. In part this is due to local factors but he also blamed the NATO powers for having simply declared victory and then left, instead of helping to build institutions after the overthrow of Gaddafi.
Another report, this one by the Pew Forum on Religion and Public Life, has concluded that restrictions on religion have continued to increase. The June 20 report "Arab Spring Adds to Global Restrictions on Religion" noted that the already high levels of restriction on religious freedom have risen further.
As well as restrictions imposed by governments, the level of social hostility has increased, the Pew Forum commented.
The Middle East is certainly not the only region where religious liberty is under threat, the report noted. In fact, in the period 2007-2011 the number of countries with high levels of restrictions rose from 10 to 20.
Christians continued to be the group with the highest number of reports of harassment or intimidation, in 105 countries. Nevertheless, North Africa and the Middle East stood out as the area with the highest levels of both government restrictions and social hostility.
Overthrowing nasty dictators and authoritarian regimes seems an attractive path to take and is also popular with public opinion. The consequences of such actions are, as we are seeing now, not always so attractive. |
lemma (in first_countable_topology) first_countable_basis_Int_stableE: obtains \<A> where "countable \<A>" "\<And>A. A \<in> \<A> \<Longrightarrow> x \<in> A" "\<And>A. A \<in> \<A> \<Longrightarrow> open A" "\<And>S. open S \<Longrightarrow> x \<in> S \<Longrightarrow> (\<exists>A\<in>\<A>. A \<subseteq> S)" "\<And>A B. A \<in> \<A> \<Longrightarrow> B \<in> \<A> \<Longrightarrow> A \<inter> B \<in> \<A>" |
c
c Copyright (C) 1998-2001 Ljubomir Milanovic & Horst Wagner
c This file is part of the g2 library
c
c This library is free software; you can redistribute it and/or
c modify it under the terms of the GNU Lesser General Public
c License as published by the Free Software Foundation; either
c version 2.1 of the License, or (at your option) any later version.
c
c This library is distributed in the hope that it will be useful,
c but WITHOUT ANY WARRANTY; without even the implied warranty of
c MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
c Lesser General Public License for more details.
c
c You should have received a copy of the GNU Lesser General Public
c License along with this library; if not, write to the Free Software
c Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
c
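c
c Simple g2 demo: open a virtual device, attach an X11 window and a
c PostScript file to it, then draw to all attached devices at once.
c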
real demo_f
real a, b
real d, d1, d2
real color
d=g2_open_vd()
write (6,*) d
d1=g2_open_X11(100.0, 100.0)
write (6,*) d1
d2=g2_open_PS('demo_f.ps', 4.0, 1.0)
write (6,*) d2
call g2_attach(d, d1)
call g2_attach(d, d2)
call g2_plot(d, 50.0, 50.0)
call g2_arc(d, 50.0, 50.0, 30.0, 20.0, 45.0, 180.0)
color=g2_ink(d1, 1.0, 0.0, 0.0)
call g2_pen(d1, color)
write (6,*) color
call g2_string(d1, 15.0, 75.0, 'TEST (Window)')
color=g2_ink(d2, 0.0, 1.0, 0.0)
call g2_pen(d2, color)
write (6,*) color
call g2_string(d2, 15.0, 75.0, 'TEST (File)')
call g2_pen(d, 1.0)
call g2_circle(d, 20.0, 20.0, 10.0)
call g2_string(d, 20.0, 20.0, 'All devices!')
call g2_flush(d)
call g2_close(d2)
read (*,*) a
stop
end
|
{-# OPTIONS --without-K --exact-split --allow-unsolved-metas #-}
module 16-sets where
import 15-number-theory
open 15-number-theory public
|
function omega = angle(f,ori,varargin)
% angle fibre to orientation or fibre to fibre
%
% Syntax
%
% omega = angle(f,ori) % angle orientation to fibre
% omega = angle(f1,f2) % angle fibre to fibre
%
% Input
% f, f1, f2 - @fibre
% ori - @orientation
%
% Output
% omega - double
%
% See also
% orientation/angle
if isa(ori,'orientation')
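% rotate the specimen direction f.r into the crystal frame and compare it with f.h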
omega = angle(ori .\ f.r,f.h,varargin{:});
else
omega = max(angle(f,orientation(ori),varargin{:}));
% in the non symmetric case we have also
%omega = min(angle(f.h,ori.h) + angle(f.r,ori.r), angle(f.h,-ori.h) + angle(f.r,-ori.r));
end
end
|
(*
* Copyright 2014, NICTA
*
* This software may be distributed and modified according to the terms of
* the BSD 2-Clause license. Note that NO WARRANTY is provided.
* See "LICENSE_BSD2.txt" for details.
*
* @TAG(NICTA_BSD)
*)
theory SimpStrategy
imports "~~/src/HOL/Main"
begin
text {*
Support for defining alternative simplifier strategies for some parts of terms
or some premises of rewrite rules. The "name" parameter to the simp_strategy
constant is used to identify which simplification strategy should be used on
this term. Note that, although names have type nat, it is safe for them to all
be defined as 0. The important thing is that the simplifier doesn't know they're
equal.
*}
definition
simp_strategy :: "nat \<Rightarrow> ('a :: {}) \<Rightarrow> 'a"
where
"simp_strategy name x \<equiv> x"
text {*
This congruence rule forbids the simplifier from simplifying the arguments of
simp_strategy normally.
*}
lemma simp_strategy_cong[cong]:
"simp_strategy name x = simp_strategy name x"
by simp
text {*
This strategy, or rather lack thereof, can be used to forbid simplification.
*}
definition
NoSimp :: nat
where "NoSimp = 0"
text {*
This strategy indicates that a boolean subterm should be simplified only by
using explicit assumptions of the simpset.
*}
definition
ByAssum :: nat
where "ByAssum = 0"
lemma Eq_TrueI_ByAssum:
"P \<Longrightarrow> simp_strategy ByAssum P \<equiv> True"
by (simp add: simp_strategy_def)
simproc_setup simp_strategy_ByAssum ("simp_strategy ByAssum b") =
{* K (fn ss => fn ct => let
val b = Thm.dest_arg ct
val t = Thm.instantiate ([],[((("P",0),@{typ bool}), b)])
@{thm Eq_TrueI_ByAssum}
val prems = Raw_Simplifier.prems_of ss
in get_first (try (fn p => p RS t)) prems end) *}
lemma ByAssum:
"simp_strategy ByAssum P \<Longrightarrow> P"
by (simp add: simp_strategy_def)
lemma simp_ByAssum_test:
"P \<Longrightarrow> simp_strategy ByAssum P"
by simp
text {*
Generic framework for instantiating a simproc which simplifies within a
simp_strategy with a given simpset.
The boolean determines whether simp_strategy Name True should be rewritten
to True.
*}
lemma simp_strategy_Eq_True:
"simp_strategy name True \<equiv> True"
by (simp add: simp_strategy_def)
ML {*
fun simp_strategy_True_conv ct = case Thm.term_of ct of
(Const (@{const_name simp_strategy}, _) $ _ $ @{term True})
=> Thm.instantiate ([], [((("name",0), @{typ nat}), Thm.dest_arg1 ct)])
@{thm simp_strategy_Eq_True}
| _ => Conv.all_conv ct
fun new_simp_strategy thy (name : term) ss rewr_True =
let
val ctxt = Proof_Context.init_global thy;
val ss = Simplifier.make_simproc ctxt ("simp_strategy_" ^ fst (dest_Const name))
{lhss = [@{term simp_strategy} $ name $ @{term x}],
proc = (fn _ => fn ctxt' => fn ct =>
ct
|> (Conv.arg_conv (Simplifier.rewrite (put_simpset ss ctxt'))
then_conv (if rewr_True then simp_strategy_True_conv
else Conv.all_conv))
|> (fn c => if Thm.is_reflexive c then NONE else SOME c))
}
in
ss
end
*}
end
|
"""
PaduaTransforms
an implementation of the Padua transform and its inverse via the fast Fourier transform.
"""
module PaduaTransforms
using StaticArrays: SVector
using FFTW
using LinearAlgebra: rmul!
export getpaduanum, getdegree, nextpaduanum
export getpaduapoints
export PaduaTransformPlan, paduatransform!
export InvPaduaTransformPlan, invpaduatransform!
## Number of Padua Points and Degree ##
"""
getpaduanum(n)
calculates number of Padua points needed to approximate a function using Chebyshev polynomials
up to total degree `n`. This number is equal to the number of coefficients. The formula is
```math
N = (n + 1) ⋅ (n + 2) ÷ 2
```
# Examples
```jldoctest
julia> getpaduanum(13)
105
```
"""
getpaduanum(degree) = (degree + 1) * (degree + 2) ÷ 2
"""
getdegree(N)
calculates total degree, given the number of coefficients or Padua points `N`.
Throws an error if `N` is not a possible number of Padua points. The formula is
```math
n = \\frac{\\sqrt{1 + 8N} - 3}{2}
```
# Examples
```jldoctest
julia> getdegree(105)
13
julia> getdegree(104)
ERROR: ArgumentError: number of Padua points or coeffs must be (n + 1) * (n + 2) ÷ 2
[...]
```
"""
function getdegree(N)
d = (sqrt(1 + 8N) - 3) / 2
isinteger(d) ? Int(d) : throw(ArgumentError(
"number of Padua points or coeffs must be (n + 1) * (n + 2) ÷ 2"))
end
"""
nextpaduanum(N)
get next valid number of Padua points ≥ `N`.
# Examples
```jldoctest
julia> nextpaduanum(104)
105
```
"""
function nextpaduanum(N)
d = Int(cld(sqrt(1 + 8N) - 3, 2))
getpaduanum(d)
end
## Padua Points ##
"""
paduapoint(T::Type, j::Integer, i::Integer, n::Integer)
returns the Padua point ``z_{ij}``, where
```math
z_{ij} = (\\cos{\\frac{jπ}{n}}, \\cos{\\frac{iπ}{n+1}})
```
Note that only points with ``i-j`` even are actually Padua points.
Check with [`ispadua`](@ref).
# Examples
```jldoctest
julia> [PaduaTransforms.paduapoint(Float32, x, y, 1) for y in 0:1+1, x in 0:1]
3×2 Matrix{Tuple{Float32, Float32}}:
(1.0, 1.0) (-1.0, 1.0)
(1.0, 0.0) (-1.0, 0.0)
(1.0, -1.0) (-1.0, -1.0)
```
"""
function paduapoint(::Type{T}, j, i, n) where T
x = cospi(T(j) / T(n))
y = cospi(T(i) / T(n + 1))
return x, y
end
"""
ispadua(i, j)
returns whether the [`paduapoint`](@ref) at position `(i, j)` is a Padua point.
# Examples
```jldoctest
julia> pointornothing(i, j, n) = PaduaTransforms.ispadua(i, j) ? PaduaTransforms.paduapoint(Float64, j, i, n) : nothing
pointornothing (generic function with 1 method)
julia> [pointornothing(y, x, 2) for y in 0:3, x in 0:2]
4×3 Matrix{Union{Nothing, Tuple{Float64, Float64}}}:
(1.0, 1.0) nothing (-1.0, 1.0)
nothing (0.0, 0.5) nothing
(1.0, -0.5) nothing (-1.0, -0.5)
nothing (0.0, -1.0) nothing
```
"""
ispadua(i, j) = iseven(i - j)
"""
getpaduapoints([T=Float64,] n)
returns the Padua points
```math
\\textrm{Pad}_n = \\{(\\cos{\\frac{jπ}{n}}, \\cos{\\frac{iπ}{n + 1}}) \\; | \\;
0 ≤ j ≤ n, \\; 0 ≤ i ≤ n + 1, \\; i - j \\; \\textrm{even} \\}
```
where each row is a point
# Examples
```jldoctest
julia> getpaduapoints(Float32, 1)
3×2 Matrix{Float32}:
1.0 1.0
1.0 -1.0
-1.0 0.0
```
"""
function getpaduapoints(::Type{T}, n) where T
out = Matrix{T}(undef, getpaduanum(n), 2)
i = 1
for x in 0:n
for y in 0:n+1
if ispadua(x, y)
out[i, 1:2] .= paduapoint(T, x, y, n)
i += 1
end
end
end
return out
end
getpaduapoints(n) = getpaduapoints(Float64, n)
"""
getpaduapoints(f::Function, [T=Float64,] n)
evaluates the function `f` on the Padua points for degree `n`. If `f` returns a single value,
`getpaduapoints` returns a `Vector{T}`, else if `f` returns a tuple or other iterable
`getpaduapoints` returns a `Matrix{T}` where each row represents `f` applied to a Padua point.
# Examples
```jldoctest
julia> getpaduapoints(Float64, 2) do x, y; 3x - y, y^2; end
6×2 Matrix{Float64}:
2.0 1.0
3.5 0.25
-0.5 0.25
1.0 1.0
-4.0 1.0
-2.5 0.25
```
"""
function getpaduapoints(f::Function, ::Type{T}, n) where T
D = length(f(zero(T), zero(T)))
if D == 1
out = Vector{T}(undef, getpaduanum(n))
else
out = Matrix{T}(undef, getpaduanum(n), D)
end
# Function barrier to alleviate issues from type instability
_fillpoints!(out, f, T, n)
return out
end
getpaduapoints(f::Function, n) = getpaduapoints(f, Float64, n)
function _fillpoints!(out::AbstractMatrix, f, ::Type{T}, n) where T
i = 1
for x in 0:n
for y in 0:n+1
if ispadua(x, y)
v = paduapoint(T, x, y, n)
out[i, :] .= f(v[1], v[2])
i += 1
end
end
end
out
end
function _fillpoints!(out::AbstractVector, f, ::Type{T}, n) where T
i = 1
for x in 0:n
for y in 0:n+1
if ispadua(x, y)
v = paduapoint(T, x, y, n)
out[i] = f(v[1], v[2])
i += 1
end
end
end
out
end
## Padua Transform ##
struct PaduaTransformPlan{T, P}
degree::Int
vals::Matrix{T}
dctplan::P
end
"""
PaduaTransformPlan{T}(n::Integer)
create plan to compute coefficients of Chebyshev polynomials in 2D up to total degree `n`
using the Padua transform.
"""
function PaduaTransformPlan{T}(degree::Integer) where T
vals = Matrix{T}(undef, degree + 2, degree + 1)
plan = FFTW.plan_r2r!(vals, FFTW.REDFT00)
PaduaTransformPlan{T, typeof(plan)}(degree, vals, plan)
end
"""
weight!(mat::AbstractMatrix, degree::Integer)
weight Fourier coefficients to obtain Chebyshev coefficients as part of a [`paduatransform!`](@ref).
The weighting factor applied to the coefficients is
```math
w = \\frac{1}{n(n+1)} ⋅ \\begin{cases}
\\frac{1}{2} & \\textrm{if on vertex} \\\\
1 & \\textrm{if on edge} \\\\
2 & \\textrm{if in interior} \\\\
\\end{cases}
```
# Examples
```julia-repl
julia> weight!(ones(4+2, 4+1), 4)
6×5 Matrix{Float64}:
0.025 0.05 0.05 0.05 0.025
0.05 0.1 0.1 0.1 0.05
0.05 0.1 0.1 0.1 0.05
0.05 0.1 0.1 0.1 0.05
0.05 0.1 0.1 0.1 0.05
0.025 0.05 0.05 0.05 0.025
```
"""
function weight!(mat::AbstractMatrix{T}, degree::Integer) where T
rmul!(mat, T(2 / ( degree * (degree + 1) )))
rmul!(@view(mat[1, :]), T(0.5))
rmul!(@view(mat[end, :]), T(0.5))
rmul!(@view(mat[:, 1]), T(0.5))
rmul!(@view(mat[:, end]), T(0.5))
mat
end
"""
tovalsmat!(mat::Matrix, from::AbstractVector, degree::Integer)
write values of function evaluated at Padua points from `from` to matrix `mat`.
# Examples
```jldoctest
julia> PaduaTransforms.tovalsmat!(ones(3 + 2, 3 + 1), 1:getpaduanum(3), 3)
5×4 Matrix{Float64}:
1.0 0.0 6.0 0.0
0.0 4.0 0.0 9.0
2.0 0.0 7.0 0.0
0.0 5.0 0.0 10.0
3.0 0.0 8.0 0.0
julia> PaduaTransforms.tovalsmat!(ones(2 + 2, 2 + 1), 1:getpaduanum(2), 2)
4×3 Matrix{Float64}:
1.0 0.0 5.0
0.0 3.0 0.0
2.0 0.0 6.0
0.0 4.0 0.0
```
"""
function tovalsmat!(mat::Matrix{T}, from::AbstractVector, degree::Integer) where T
axes(from, 1) == 1:getpaduanum(degree) || error()
size(mat) == (degree + 2, degree + 1) || error()
if isodd(degree)
# x 0
# 0 x
# x 0
@inbounds for i in 1:length(from)
mat[2i - 1] = from[i]
mat[2i] = zero(T)
end
else
@assert iseven(degree)
# x 0 x
# 0 x 0
# x 0 x
# 0 x 0
valspercol = (degree + 2) ÷ 2
# odd columns (j is column index)
for j in 1:2:degree + 1, i in 1:valspercol
k = (j - 1) * valspercol + i
@inbounds mat[2i - 1, j] = from[k]
@inbounds mat[2i, j] = zero(T)
end
# even columns
for j in 2:2:degree + 1, i in 1:valspercol
k = (j - 1) * valspercol + i
@inbounds mat[2i - 1, j] = zero(T)
@inbounds mat[2i, j] = from[k]
end
end
mat
end
"""
fromcoeffsmat!(to::AbstractVector, mat::Matrix, degree::Integer, ::Val{lex})
write Chebyshev coefficients from `mat` into vector `to`. `lex::Bool` determines whether
coefficients should be written in lexicographical order or not. The lower right triangle does
not get written into `to`. These would represent greater polynomial degrees than `degree`.
If `lex` is `Val(true)` the coefficients correspond to the following basis polynomials
```math
T_0(x) T_0(y), T_1(x) T_0(y), T_0(x) T_1(y), T_2(x) T_0(y), T_1(x) T_1(y), T_0(x) T_2(y), ...
```
else if `lex` is `Val(false)` they correspond to
```math
T_0(x) T_0(y), T_0(x) T_1(y), T_1(x) T_0(y), T_0(x) T_2(y), T_1(x) T_1(y), T_2(x) T_0(y), ...
```
# Examples
```julia-repl
julia> mat = [(x, y) for y in 0:2+1, x in 0:2]
4×3 Matrix{Tuple{Int64, Int64}}:
(0, 0) (1, 0) (2, 0)
(0, 1) (1, 1) (2, 1)
(0, 2) (1, 2) (2, 2)
(0, 3) (1, 3) (2, 3)
julia> to1 = similar(mat, getpaduanum(2)); to2 = similar(mat, getpaduanum(2));
julia> fromcoeffsmat!(to1, mat, 2, Val(true))
6-element Vector{Tuple{Int64, Int64}}:
(0, 0)
(1, 0)
(0, 1)
(2, 0)
(1, 1)
(0, 2)
julia> fromcoeffsmat!(to2, mat, 2, Val(false))
6-element Vector{Tuple{Int64, Int64}}:
(0, 0)
(0, 1)
(1, 0)
(0, 2)
(1, 1)
(2, 0)
```
"""
function fromcoeffsmat!(to::AbstractVector, mat::Matrix, degree::Integer, ::Val{false})
length(to) == getpaduanum(degree) || error()
axes(mat) == (1:(degree + 2), 1:(degree + 1)) || error()
n = firstindex(to)
for d in 1:degree + 1
for ix in 1:d
iy = d - ix + 1
@assert ix + iy == d + 1 "ix and iy must lie on d-th diagonal"
to[n] = mat[iy, ix]
n += 1
end
end
to
end
function fromcoeffsmat!(to::AbstractVector, mat::Matrix, degree::Integer, ::Val{true})
length(to) == getpaduanum(degree) || error()
size(mat) == (degree + 2, degree + 1) || error()
n = firstindex(to)
for d in 1:degree + 1
for iy in 1:d
ix = d - iy + 1
@assert ix + iy == d + 1 "ix and iy must lie on d-th diagonal"
to[n] = mat[iy, ix]
n += 1
end
end
to
end
"""
fromcoeffsmat!(to::AbstractMatrix, mat::Matrix, degree::Integer)
copy Chebyshev coefficients from `mat` to `to` without copying coefficients corresponding to
total degree greater than `degree`.
# Examples
```jldoctest
julia> PaduaTransforms.fromcoeffsmat!(zeros(4, 4), reshape(1:20, 5, 4), 3)
4×4 Matrix{Float64}:
1.0 6.0 11.0 16.0
2.0 7.0 12.0 0.0
3.0 8.0 0.0 0.0
4.0 0.0 0.0 0.0
```
"""
function fromcoeffsmat!(to::AbstractMatrix, mat::AbstractMatrix, degree::Integer)
axes(to) == (1:degree + 1, 1:degree + 1) || error()
axes(mat) == (1:degree + 2, 1:degree + 1) || error()
for j in 1:degree + 1
for i in 1:degree + 2 - j
@inbounds to[i, j] = mat[i, j]
end
end
to
end
"""
paduatransform!(out, P::PaduaTransformPlan, vals[, lex])
obtain coefficients of Chebyshev polynomials on 2D via the Padua transform, given values
`vals` evaluated at the Padua points. Coefficients will be written into `out`, which should
either be a matrix or a vector.
If `out` is a matrix, make sure that all entries in the lower right triangle are zero, as
these will not get overwritten.
`lex` determines the order in which coefficients are written into `out` if `out` is a vector.
See [`fromcoeffsmat!`](@ref) for details.
# Examples
```jldoctest
julia> plan = PaduaTransformPlan{Float64}(3);
julia> f(x, y) = 3 + 4x + 5 * y * (2x^2 - 1)
f (generic function with 1 method)
julia> vals = getpaduapoints(f, 3)
10-element Vector{Float64}:
12.0
7.0
2.0
3.232233047033631
6.767766952966369
-1.5000000000000004
1.0000000000000004
3.5000000000000013
2.5355339059327378
-4.535533905932738
julia> paduatransform!(zeros(4, 4), plan, vals)
4×4 Matrix{Float64}:
3.0 4.0 0.0 1.4803e-16
-5.92119e-16 -1.4803e-16 5.0 0.0
0.0 0.0 0.0 0.0
-2.96059e-16 0.0 0.0 0.0
julia> paduatransform!(zeros(getpaduanum(3)), plan, vals, Val(true))
10-element Vector{Float64}:
3.0
4.0
-5.921189464667501e-16
0.0
-1.4802973661668753e-16
0.0
1.4802973661668753e-16
5.0
0.0
-2.9605947323337506e-16
julia> paduatransform!(zeros(getpaduanum(3)), plan, vals, Val(false))
10-element Vector{Float64}:
3.0
-5.921189464667501e-16
4.0
0.0
-1.4802973661668753e-16
0.0
-2.9605947323337506e-16
0.0
5.0
1.4802973661668753e-16
```
"""
function paduatransform!(P::PaduaTransformPlan)
coeffs = P.dctplan * P.vals
weight!(coeffs, P.degree)
coeffs
end
function paduatransform!(out, P::PaduaTransformPlan, vals, args...)
tovalsmat!(P.vals, vals, P.degree)
coeffs = paduatransform!(P)
fromcoeffsmat!(out, coeffs, P.degree, args...)
end
"""
paduatransform!(out::AbstractArray{<:Any, 3}, P::PaduaTransformPlan, vals::AbstractMatrix, args...)
transforms each column in `vals` and writes the resulting coefficients in a slice of `out`.
"""
function paduatransform!(out::AbstractArray{<:Any, 3}, P::PaduaTransformPlan, vals::AbstractMatrix, args...)
axes(out, 3) == axes(vals, 2)|| error()
@views for i in axes(out, 3)
paduatransform!(out[:, :, i], P, vals[:, i], args...)
end
out
end
"""
paduatransform!(out::Array{<:Any, 3}, P::PaduaTransformPlan, vals::AbstractVector{<:AbstractVector{T}}, args...)
transforms vector of vectors to Chebyshev coefficients and writes the resulting coefficients
in a slice of `out`. Each vector in the vector of vectors represents a point. Each slice of
`out` represents a transform of one dimension.
"""
function paduatransform!(out::Array{<:Any, 3}, P::PaduaTransformPlan, vals::AbstractVector{<:AbstractVector{T}}, args...) where T
# Here, each column is a point and each row represents one dimension
r = reinterpret(reshape, T, vals)
axes(out, 3) == axes(r, 1) || error()
@views for i in axes(out, 3)
paduatransform!(out[:, :, i], P, r[i, :], args...)
end
out
end
## Inverse Padua Transform ##
struct InvPaduaTransformPlan{T, P}
degree::Int
coeffs::Matrix{T}
dctplan::P
end
"""
InvPaduaTransformPlan{T}(n::Integer)
create plan to compute values on Padua points, given coefficients of Chebyshev polynomials
up to total degree `n`.
"""
function InvPaduaTransformPlan{T}(degree::Integer) where T
coeffs = Matrix{T}(undef, degree + 2, degree + 1)
iplan = FFTW.plan_r2r!(coeffs, FFTW.REDFT00)
InvPaduaTransformPlan{T, typeof(iplan)}(degree, coeffs, iplan)
end
"""
tocoeffsmat!(mat::AbstractMatrix, coeffs::AbstractMatrix)
writes coefficients in `coeffs` into matrix `mat` for the [`invpaduatransform!`](@ref).
# Examples
```jldoctest
julia> PaduaTransforms.tocoeffsmat!(zeros(5, 4), reshape(1:16, 4, 4))
5×4 Matrix{Float64}:
1.0 5.0 9.0 13.0
2.0 6.0 10.0 14.0
3.0 7.0 11.0 15.0
4.0 8.0 12.0 16.0
0.0 0.0 0.0 0.0
```
"""
function tocoeffsmat!(mat::AbstractMatrix{T}, coeffs::AbstractMatrix) where T
mat[1:end-1, :] = coeffs
mat[ end, :] .= zero(T)
mat
end
"""
invweight!(coeffs::AbstractMatrix)
weight Chebyshev coefficients before the Fourier transform for the [`invpaduatransform!`](@ref),
using the weighting
```math
w = \\begin{cases}
1 & \\textrm{if on vertex} \\\\
\\frac{1}{2} & \\textrm{if on edge} \\\\
\\frac{1}{4} & \\textrm{if in interior} \\\\
\\end{cases}
```
# Examples
```jldoctest
julia> PaduaTransforms.invweight!(ones(5, 5))
5×5 Matrix{Float64}:
1.0 0.5 0.5 0.5 1.0
0.5 0.25 0.25 0.25 0.5
0.5 0.25 0.25 0.25 0.5
0.5 0.25 0.25 0.25 0.5
1.0 0.5 0.5 0.5 1.0
```
"""
function invweight!(coeffs::AbstractMatrix{T}) where T
rmul!(@view(coeffs[:,2:end-1]), T(0.5))
rmul!(@view(coeffs[2:end-1, :]), T(0.5))
coeffs
end
"""
fromvalsmat!(to::AbstractVector, mat::AbstractMatrix, n::Integer)
write values from `mat` into the vector `to` after an [`invpaduatransform!`](@ref) of total
degree `n`.
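# Examples
A small illustrative sketch (values chosen to mirror the `tovalsmat!` layout above):
```julia-repl
julia> PaduaTransforms.fromvalsmat!(zeros(getpaduanum(2)), [1. 0 5; 0 3 0; 2 0 6; 0 4 0], 2)
6-element Vector{Float64}:
1.0
2.0
3.0
4.0
5.0
6.0
```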
"""
function fromvalsmat!(to::AbstractVector, mat::AbstractMatrix, degree::Integer)
axes(to, 1) == 1:getpaduanum(degree) || error()
axes(mat) == (1:degree + 2, 1:degree + 1) || error()
if isodd(degree)
# x 0
# 0 x
# x 0
@inbounds for i in 1:length(to)
to[i] = mat[2i - 1]
end
else
@assert iseven(degree)
# x 0 x
# 0 x 0
# x 0 x
# 0 x 0
valspercol = (degree + 2) ÷ 2
# odd columns (j is column index)
for j in 1:2:degree + 1, i in 1:valspercol
k = (j - 1) * valspercol + i
@inbounds to[k] = mat[2i - 1, j]
end
# even columns
for j in 2:2:degree + 1, i in 1:valspercol
k = (j - 1) * valspercol + i
@inbounds to[k] = mat[2i, j]
end
end
to
end
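# core inverse step: weight the coefficients in place, then apply the in-place 2D DCT-I (FFTW.REDFT00) plan to IP.coeffs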
function invpaduatransform!(IP::InvPaduaTransformPlan)
invweight!(IP.coeffs)
IP.dctplan * IP.coeffs
IP.coeffs
end
"""
invpaduatransform!(vals::AbstractVector, IP::InvPaduaTransformPlan, coeffs::AbstractMatrix)
evaluates the polynomial defined by the coefficients of Chebyshev polynomials `coeffs` on the
Padua points using the inverse transform plan `IP` and writes the resulting values into `vals`.
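# Examples
A round-trip sketch (illustrative only; it reuses the polynomial from the `paduatransform!` example and suppresses exact floating-point output):
```julia-repl
julia> plan = PaduaTransformPlan{Float64}(3); iplan = InvPaduaTransformPlan{Float64}(3);

julia> vals = getpaduapoints((x, y) -> 3 + 4x + 5 * y * (2x^2 - 1), 3);

julia> coeffs = paduatransform!(zeros(4, 4), plan, vals);

julia> invpaduatransform!(similar(vals), iplan, coeffs) ≈ vals
true
```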
"""
function invpaduatransform!(vals::AbstractVector, IP::InvPaduaTransformPlan, coeffs::AbstractMatrix)
tocoeffsmat!(IP.coeffs, coeffs)
invpaduatransform!(IP)
fromvalsmat!(vals, IP.coeffs, IP.degree)
end
function invpaduatransform!(vals::AbstractMatrix, IP::InvPaduaTransformPlan, coeffs::AbstractArray{<:Any, 3})
axes(coeffs, 3) == axes(vals, 2) || error()
@views for i in axes(coeffs, 3)
invpaduatransform!(vals[:, i], IP, coeffs[:, :, i])
end
vals
end
end # Padua
|
Men who became uncooperative with the CPS system and were unable to adjust to the church-managed camps were reassigned to a few camps managed by the Selective Service System. These camps tended to be the least productive and most difficult to administer. Men who felt compelled to protest the restrictions of the conscription law attempted to disrupt the program through the use of various techniques, including the initiation of work slowdowns and labor strikes. Routine rule breaking frustrated camp directors. The most difficult cases were given to the federal court system and the men imprisoned.
|
{- Byzantine Fault Tolerant Consensus Verification in Agda, version 0.9.
Copyright (c) 2021, Oracle and/or its affiliates.
Licensed under the Universal Permissive License v 1.0 as shown at https://opensource.oracle.com/licenses/upl
-}
open import LibraBFT.Base.Types
import LibraBFT.Impl.Crypto.Crypto.Hash as Hash
open import LibraBFT.ImplShared.Consensus.Types
open import Optics.All
open import Util.KVMap as Map
open import Util.PKCS
open import Util.Prelude
module LibraBFT.Impl.Consensus.ConsensusTypes.BlockData where
------------------------------------------------------------------------------
newGenesis : {-Instant →-} QuorumCert → BlockData
------------------------------------------------------------------------------
newGenesisFromLedgerInfo : LedgerInfo → Either ErrLog BlockData
newGenesisFromLedgerInfo li =
if not (li ^∙ liEndsEpoch)
then Left fakeErr -- ["BlockData", "newGenesisFromLedgerInfo", "liNextEpochState == Nothing"]
else
let ancestor = BlockInfo∙new
(li ^∙ liEpoch)
{-Round-} 0
Hash.valueZero
(li ^∙ liTransactionAccumulatorHash)
(li ^∙ liVersion)
--(li ^∙ liTimestamp)
nothing
genesisQuorumCert = QuorumCert∙new
(VoteData∙new ancestor ancestor)
(LedgerInfoWithSignatures∙new
(LedgerInfo∙new ancestor Hash.valueZero) Map.empty)
in pure $ newGenesis {-(li ^∙ liTimestamp)-} genesisQuorumCert
newGenesis {-timestamp-} qc = BlockData∙new
(qc ^∙ qcCertifiedBlock ∙ biEpoch + 1)
{-Round-} 0
--timestamp
qc
Genesis
newNil : Round → QuorumCert → BlockData
newNil r qc = BlockData∙new
(qc ^∙ qcCertifiedBlock ∙ biEpoch)
r
--(qc ^∙ qcCertifiedBlock ∙ biTimestamp)
qc
NilBlock
newProposal : TX → Author → Round → {-Instant →-} QuorumCert → BlockData
newProposal payload author round {-timestamp-} quorumCert = BlockData∙new
(quorumCert ^∙ qcCertifiedBlock ∙ biEpoch) round {-timestamp-} quorumCert (Proposal payload author)
isGenesisBlock : BlockData → Bool
isGenesisBlock bd = bd ^∙ bdBlockType == Genesis
isNilBlock : BlockData → Bool
isNilBlock bd = bd ^∙ bdBlockType == NilBlock
|
||| Utility types and functions for automatically deriving
||| interface instances. So far, this module does not provide
||| deriving functions for existing interfaces. See
||| Doc.Generic4 for examples, how this could be done
||| using the functionality provided here.
module Language.Reflection.Derive
import Decidable.Equality
import public Language.Reflection.Syntax
import public Language.Reflection.Types
%language ElabReflection
||| Utility type for deriving interface implementations
||| automatically. See implementations of `Eq'` and `Ord'`
||| in Doc.Generic4 as examples, how this can be done.
public export
record DeriveUtil where
constructor MkDeriveUtil
||| The underlying type info containing the list and names
||| of data constructors plus their arguments as well as
||| the data type's name and type arguments.
typeInfo : ParamTypeInfo
||| Fully applied data type, i.e. `var "Either" .$ var "a" .$ var "b"`
appliedType : TTImp
||| The names of type parameters
paramNames : List Name
||| Types of constructor arguments where at least one
||| type parameter makes an appearance. These are the
||| `tpe` fields of `ExplicitArg` where `hasParam`
||| is set to true and `isRecursive` is set
||| to false. See the documentation of `ExplicitArg`
||| when this is the case
argTypesWithParams : List TTImp
||| Creates a deriving utility from information about
||| a (possibly) parameterized type.
export
genericUtil : ParamTypeInfo -> DeriveUtil
genericUtil ti = let pNames = map fst $ params ti
appTpe = appNames (name ti) pNames
twps = calcArgTypesWithParams ti
in MkDeriveUtil ti appTpe pNames twps
||| Generates the name of an interface's implementation function
export
implName : DeriveUtil -> String -> Name
implName g interfaceName = UN $ "impl" ++ interfaceName
++ nameStr g.typeInfo.name
||| Syntax tree and additional info about the
||| implementation function of an interface.
|||
||| With 'implementation function', we mean the following:
||| When deriving an interface implementation, the elaborator
||| creates a function returning the corresponding record value.
||| Values of this record should provide both the full type
||| and implementation of this function as `TTImp` values.
|||
||| ```idris example
||| public export
||| implEqEither : {0 a : _} -> {0 b : _} -> Eq a => Eq b => Eq (Either a b)
||| implEqEither = ?impl
||| ```
public export
record InterfaceImpl where
constructor MkInterfaceImpl
||| The interface's name, for instance "Eq" or "Ord".
||| This is used to generate the name of the
||| implementation function.
interfaceName : String
||| Visibility of the implementation function.
visibility : Visibility
||| Options of the implementation function.
options : List FnOpt
||| Actual implementation of the implementation function.
||| This will be the right hand side of the sole pattern clause
||| in the function definition.
|||
||| As an example, assume there is a `genEq` function used
||| as an implementation for `(==)` for data types with
||| some kind of `Generic` instance (see the tutorial on
||| Generics for more information about this). An implementation
||| for interface `Eq` could then look like this:
|||
||| ```idris example
||| impl = var (singleCon "Eq") .$ `(genEq) .$ `(\a,b => not (a == b))
||| ```
impl : TTImp
||| Full type of the implementation function, including
||| implicit arguments (type parameters), which have to be part
||| of the `TTImp`.
|||
||| See also `implementationType`, a utility function to create this
||| kind of function types for type classes with a single parameter
||| of type `Type`.
|||
||| Example:
|||
||| ```idris example
||| `({0 a: _} -> {0 b : _} -> Eq a => Eq b => Eq (Either a b))
||| ```
type : TTImp
-- pair of type and implementation
private
implDecl : DeriveUtil -> (DeriveUtil -> InterfaceImpl) -> (Decl,Decl)
implDecl g f = let (MkInterfaceImpl iname vis opts impl type) = f g
function = implName g iname
in ( interfaceHintOpts vis opts function type
, def function [var function .= impl] )
||| Generates a list of pairs of declarations for the
||| implementations of the interfaces specified.
|||
||| The first elements of the pairs are type declarations, while
||| the second elements are the actual implementations.
|||
||| This separation of type declaration and implementation
||| allows us to first declare all types before declaring
||| the actual implementations. This is essential in the
||| implementation of `deriveMutual`.
export
deriveDecls : Name -> List (DeriveUtil -> InterfaceImpl) -> Elab $ List (Decl,Decl)
deriveDecls name fs = mkDecls <$> getParamInfo' name
where mkDecls : ParamTypeInfo -> List (Decl,Decl)
mkDecls pi = let g = genericUtil pi
in map (implDecl g) fs
||| Given a name of a data type plus a list of interfaces, tries
||| to implement these interfaces automatically using
||| elaborator reflection.
|||
||| Again, see Doc.Generic4 for a tutorial and examples how
||| to use this.
export
derive : Name -> List (DeriveUtil -> InterfaceImpl) -> Elab ()
derive name fs = do decls <- deriveDecls name fs
-- Declare types first. Then declare implementations.
declare $ map fst decls
declare $ map snd decls
||| Allows the derivation of mutually dependent interface
||| implementations by first defining type declarations before
||| declaring implementations.
|||
||| Note: There is no need to call this from within a `mutual` block.
export
deriveMutual : List (Name, List (DeriveUtil -> InterfaceImpl)) -> Elab()
deriveMutual pairs = do declss <- traverse (uncurry deriveDecls) pairs
-- Declare types first. Then declare implementations.
traverse_ (declare . map fst) declss
traverse_ (declare . map snd) declss
||| Given a `TTImp` representing an interface, generates
||| the type of the implementation function with all type
||| parameters applied and auto implicits specified.
|||
||| Example: Given the `DeriveUtil` info of `Either`, this
||| will generate the following type for input `` `(Eq) ``:
|||
||| ```idris example
||| {0 a : _} -> {0 b : _} -> Eq a => Eq b => Eq (Either a b)
||| ```
|||
||| Note: This function is only to be used with single-parameter
||| type classes, whose type parameters are of type `Type`.
export
implementationType : (iface : TTImp) -> DeriveUtil -> TTImp
implementationType iface (MkDeriveUtil _ appTp names argTypesWithParams) =
let appIface = iface .$ appTp
autoArgs = piAllAuto appIface $ map (iface .$) argTypesWithParams
in piAllImplicit autoArgs names
--------------------------------------------------------------------------------
-- Interface Factories
--------------------------------------------------------------------------------
||| Creates an `Eq` value from the passed implementation functions
||| for (==) and (/=).
public export %inline
mkEq' : (eq : a -> a -> Bool) -> (neq : a -> a -> Bool) -> Eq a
mkEq' = %runElab check (var $ singleCon "Eq")
||| Like `mkEq'` but generates (/=) from the passed `eq` function.
public export %inline
mkEq : (eq : a -> a -> Bool) -> Eq a
mkEq eq = mkEq' eq (\a,b => not $ eq a b)
||| Creates an `Ord` value from the passed implementation functions
||| for `compare`, `(<)`, `(>)`, `(<=)`, `(>=)`, `min`, `max`.
public export %inline
mkOrd' : Eq a
=> (compare : a -> a -> Ordering)
-> (lt : a -> a -> Bool)
-> (gt : a -> a -> Bool)
-> (leq : a -> a -> Bool)
-> (geq : a -> a -> Bool)
-> (min : a -> a -> a)
-> (max : a -> a -> a)
-> Ord a
mkOrd' = %runElab check (var $ singleCon "Ord")
||| Creates an `Ord` value from the passed comparison function
||| using default implementations based on `comp` for all
||| other functions.
public export
mkOrd : Eq a => (comp : a -> a -> Ordering) -> Ord a
mkOrd comp = mkOrd' comp
(\a,b => comp a b == LT)
(\a,b => comp a b == GT)
(\a,b => comp a b /= GT)
(\a,b => comp a b /= LT)
(\a,b => if comp a b == GT then a else b)
(\a,b => if comp a b == LT then a else b)
||| Creates a `Num` value from the passed functions.
public export %inline
mkNum : (plus : a -> a -> a)
-> (times : a -> a -> a)
-> (fromInt : Integer -> a)
-> Num a
mkNum = %runElab check (var $ singleCon "Num")
||| Creates a `Neg` value from the passed functions.
public export %inline
mkNeg : Num a
=> (negate : a -> a)
-> (minus : a -> a -> a)
-> Neg a
mkNeg = %runElab check (var $ singleCon "Neg")
||| Creates an `Abs` value from the passed function
||| and `Num` instance.
public export
mkAbs : Num a => (abs : a -> a) -> Abs a
mkAbs = %runElab check (var $ singleCon "Abs")
||| Creates a `Fractional` value from the passed functions
||| and `Num` instance.
public export %inline
mkFractional : Num a => (div : a -> a -> a) -> (recip : a -> a) -> Fractional a
mkFractional = %runElab check (var $ singleCon "Fractional")
||| Creates an `Integral` value from the passed functions.
public export %inline
mkIntegral : Num a => (div : a -> a -> a) -> (mod : a -> a -> a) -> Integral a
mkIntegral = %runElab check (var $ singleCon "Integral")
||| Creates a `Show` value from the passed functions.
public export %inline
mkShow' : (show : a -> String) -> (showPrec : Prec -> a -> String) -> Show a
mkShow' = %runElab check (var $ singleCon "Show")
||| Creates a `Show` value from the passed `show` functions.
public export %inline
mkShow : (show : a -> String) -> Show a
mkShow show = mkShow' show (\_ => show)
||| Creates a `Show` value from the passed `showPrec` functions.
public export %inline
mkShowPrec : (showPrec : Prec -> a -> String) -> Show a
mkShowPrec showPrec = mkShow' (showPrec Open) showPrec
||| Creates an `Uninhabited` value from the passed function.
public export %inline
mkUninhabited : (uninhabited : a -> Void) -> Uninhabited a
mkUninhabited = %runElab check (var $ singleCon "Uninhabited")
||| Creates a `Semigroup` value from the passed function.
public export %inline
mkSemigroup : (mappend : a -> a -> a) -> Semigroup a
mkSemigroup = %runElab check (var $ singleCon "Semigroup")
||| Creates a `Monoid` value from the passed neutral value.
public export %inline
mkMonoid : Semigroup a => (neutral : a) -> Monoid a
mkMonoid = %runElab check (var $ singleCon "Monoid")
||| Creates a `DecEq` value from the passed implementation function
||| for `decEq`
public export %inline
mkDecEq : (decEq : (x1 : a) -> (x2 : a) -> Dec (x1 = x2)) -> DecEq a
mkDecEq = %runElab check (var $ singleCon "DecEq")
|
(* Default settings (from HsToCoq.Coq.Preamble) *)
Generalizable All Variables.
Unset Implicit Arguments.
Set Maximal Implicit Insertion.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Require Coq.Program.Tactics.
Require Coq.Program.Wf.
(* Converted imports: *)
Require Coq.Program.Basics.
Require Data.Foldable.
Require Data.Functor.
Require Data.Functor.Classes.
Require Data.SemigroupInternal.
Require Data.Traversable.
Require GHC.Base.
Require GHC.Num.
Import Data.Functor.Notations.
Import GHC.Base.Notations.
Import GHC.Num.Notations.
(* Converted type declarations: *)
Inductive Compose (f : Type -> Type) (g : Type -> Type) (a : Type) : Type
:= | Mk_Compose (getCompose : f (g a)) : Compose f g a.
Arguments Mk_Compose {_} {_} {_} _.
Definition getCompose {f : Type -> Type} {g : Type -> Type} {a : Type} (arg_0__
: Compose f g a) :=
let 'Mk_Compose getCompose := arg_0__ in
getCompose.
(* Converted value declarations: *)
(* Skipping all instances of class `Data.Data.Data', including
`Data.Functor.Compose.Data__Compose' *)
(* Skipping all instances of class `GHC.Generics.Generic', including
`Data.Functor.Compose.Generic__Compose' *)
(* Skipping all instances of class `GHC.Generics.Generic1', including
`Data.Functor.Compose.Generic1__Compose__5' *)
Local Definition Eq1__Compose_liftEq {inst_f} {inst_g}
`{Data.Functor.Classes.Eq1 inst_f} `{Data.Functor.Classes.Eq1 inst_g}
: forall {a} {b},
(a -> b -> bool) ->
(Compose inst_f inst_g) a -> (Compose inst_f inst_g) b -> bool :=
fun {a} {b} =>
fun arg_0__ arg_1__ arg_2__ =>
match arg_0__, arg_1__, arg_2__ with
| eq, Mk_Compose x, Mk_Compose y =>
Data.Functor.Classes.liftEq (Data.Functor.Classes.liftEq eq) x y
end.
Program Instance Eq1__Compose {f} {g} `{Data.Functor.Classes.Eq1 f}
`{Data.Functor.Classes.Eq1 g}
: Data.Functor.Classes.Eq1 (Compose f g) :=
fun _ k__ =>
k__ {| Data.Functor.Classes.liftEq__ := fun {a} {b} => Eq1__Compose_liftEq |}.
Local Definition Ord1__Compose_liftCompare {inst_f} {inst_g}
`{Data.Functor.Classes.Ord1 inst_f} `{Data.Functor.Classes.Ord1 inst_g}
: forall {a} {b},
(a -> b -> comparison) ->
(Compose inst_f inst_g) a -> (Compose inst_f inst_g) b -> comparison :=
fun {a} {b} =>
fun arg_0__ arg_1__ arg_2__ =>
match arg_0__, arg_1__, arg_2__ with
| comp, Mk_Compose x, Mk_Compose y =>
Data.Functor.Classes.liftCompare (Data.Functor.Classes.liftCompare comp) x y
end.
Program Instance Ord1__Compose {f} {g} `{Data.Functor.Classes.Ord1 f}
`{Data.Functor.Classes.Ord1 g}
: Data.Functor.Classes.Ord1 (Compose f g) :=
fun _ k__ =>
k__ {| Data.Functor.Classes.liftCompare__ := fun {a} {b} =>
Ord1__Compose_liftCompare |}.
(* Skipping all instances of class `Data.Functor.Classes.Read1', including
`Data.Functor.Compose.Read1__Compose' *)
(* Skipping all instances of class `Data.Functor.Classes.Show1', including
`Data.Functor.Compose.Show1__Compose' *)
Local Definition Eq___Compose_op_zeze__ {inst_f} {inst_g} {inst_a}
`{Data.Functor.Classes.Eq1 inst_f} `{Data.Functor.Classes.Eq1 inst_g}
`{GHC.Base.Eq_ inst_a}
: (Compose inst_f inst_g inst_a) -> (Compose inst_f inst_g inst_a) -> bool :=
Data.Functor.Classes.eq1.
Local Definition Eq___Compose_op_zsze__ {inst_f} {inst_g} {inst_a}
`{Data.Functor.Classes.Eq1 inst_f} `{Data.Functor.Classes.Eq1 inst_g}
`{GHC.Base.Eq_ inst_a}
: (Compose inst_f inst_g inst_a) -> (Compose inst_f inst_g inst_a) -> bool :=
fun x y => negb (Eq___Compose_op_zeze__ x y).
Program Instance Eq___Compose {f} {g} {a} `{Data.Functor.Classes.Eq1 f}
`{Data.Functor.Classes.Eq1 g} `{GHC.Base.Eq_ a}
: GHC.Base.Eq_ (Compose f g a) :=
fun _ k__ =>
k__ {| GHC.Base.op_zeze____ := Eq___Compose_op_zeze__ ;
GHC.Base.op_zsze____ := Eq___Compose_op_zsze__ |}.
Local Definition Ord__Compose_compare {inst_f} {inst_g} {inst_a}
`{Data.Functor.Classes.Ord1 inst_f} `{Data.Functor.Classes.Ord1 inst_g}
`{GHC.Base.Ord inst_a}
: (Compose inst_f inst_g inst_a) ->
(Compose inst_f inst_g inst_a) -> comparison :=
Data.Functor.Classes.compare1.
Local Definition Ord__Compose_op_zl__ {inst_f} {inst_g} {inst_a}
`{Data.Functor.Classes.Ord1 inst_f} `{Data.Functor.Classes.Ord1 inst_g}
`{GHC.Base.Ord inst_a}
: (Compose inst_f inst_g inst_a) -> (Compose inst_f inst_g inst_a) -> bool :=
fun x y => Ord__Compose_compare x y GHC.Base.== Lt.
Local Definition Ord__Compose_op_zlze__ {inst_f} {inst_g} {inst_a}
`{Data.Functor.Classes.Ord1 inst_f} `{Data.Functor.Classes.Ord1 inst_g}
`{GHC.Base.Ord inst_a}
: (Compose inst_f inst_g inst_a) -> (Compose inst_f inst_g inst_a) -> bool :=
fun x y => Ord__Compose_compare x y GHC.Base./= Gt.
Local Definition Ord__Compose_op_zg__ {inst_f} {inst_g} {inst_a}
`{Data.Functor.Classes.Ord1 inst_f} `{Data.Functor.Classes.Ord1 inst_g}
`{GHC.Base.Ord inst_a}
: (Compose inst_f inst_g inst_a) -> (Compose inst_f inst_g inst_a) -> bool :=
fun x y => Ord__Compose_compare x y GHC.Base.== Gt.
Local Definition Ord__Compose_op_zgze__ {inst_f} {inst_g} {inst_a}
`{Data.Functor.Classes.Ord1 inst_f} `{Data.Functor.Classes.Ord1 inst_g}
`{GHC.Base.Ord inst_a}
: (Compose inst_f inst_g inst_a) -> (Compose inst_f inst_g inst_a) -> bool :=
fun x y => Ord__Compose_compare x y GHC.Base./= Lt.
Local Definition Ord__Compose_max {inst_f} {inst_g} {inst_a}
`{Data.Functor.Classes.Ord1 inst_f} `{Data.Functor.Classes.Ord1 inst_g}
`{GHC.Base.Ord inst_a}
: (Compose inst_f inst_g inst_a) ->
(Compose inst_f inst_g inst_a) -> (Compose inst_f inst_g inst_a) :=
fun x y => if Ord__Compose_op_zlze__ x y : bool then y else x.
Local Definition Ord__Compose_min {inst_f} {inst_g} {inst_a}
`{Data.Functor.Classes.Ord1 inst_f} `{Data.Functor.Classes.Ord1 inst_g}
`{GHC.Base.Ord inst_a}
: (Compose inst_f inst_g inst_a) ->
(Compose inst_f inst_g inst_a) -> (Compose inst_f inst_g inst_a) :=
fun x y => if Ord__Compose_op_zlze__ x y : bool then x else y.
Program Instance Ord__Compose {f} {g} {a} `{Data.Functor.Classes.Ord1 f}
`{Data.Functor.Classes.Ord1 g} `{GHC.Base.Ord a}
: GHC.Base.Ord (Compose f g a) :=
fun _ k__ =>
k__ {| GHC.Base.op_zl____ := Ord__Compose_op_zl__ ;
GHC.Base.op_zlze____ := Ord__Compose_op_zlze__ ;
GHC.Base.op_zg____ := Ord__Compose_op_zg__ ;
GHC.Base.op_zgze____ := Ord__Compose_op_zgze__ ;
GHC.Base.compare__ := Ord__Compose_compare ;
GHC.Base.max__ := Ord__Compose_max ;
GHC.Base.min__ := Ord__Compose_min |}.
(* Skipping all instances of class `GHC.Read.Read', including
`Data.Functor.Compose.Read__Compose' *)
(* Skipping all instances of class `GHC.Show.Show', including
`Data.Functor.Compose.Show__Compose' *)
Local Definition Functor__Compose_fmap {inst_f} {inst_g} `{GHC.Base.Functor
inst_f} `{GHC.Base.Functor inst_g}
: forall {a} {b},
(a -> b) -> (Compose inst_f inst_g) a -> (Compose inst_f inst_g) b :=
fun {a} {b} =>
fun arg_0__ arg_1__ =>
match arg_0__, arg_1__ with
| f, Mk_Compose x => Mk_Compose (GHC.Base.fmap (GHC.Base.fmap f) x)
end.
Local Definition Functor__Compose_op_zlzd__ {inst_f} {inst_g} `{GHC.Base.Functor
inst_f} `{GHC.Base.Functor inst_g}
: forall {a} {b},
a -> (Compose inst_f inst_g) b -> (Compose inst_f inst_g) a :=
fun {a} {b} => Functor__Compose_fmap GHC.Base.∘ GHC.Base.const.
Program Instance Functor__Compose {f} {g} `{GHC.Base.Functor f}
`{GHC.Base.Functor g}
: GHC.Base.Functor (Compose f g) :=
fun _ k__ =>
k__ {| GHC.Base.fmap__ := fun {a} {b} => Functor__Compose_fmap ;
GHC.Base.op_zlzd____ := fun {a} {b} => Functor__Compose_op_zlzd__ |}.
Local Definition Foldable__Compose_foldMap {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {m} {a},
forall `{GHC.Base.Monoid m}, (a -> m) -> (Compose inst_f inst_g) a -> m :=
fun {m} {a} `{GHC.Base.Monoid m} =>
fun arg_0__ arg_1__ =>
match arg_0__, arg_1__ with
| f, Mk_Compose t => Data.Foldable.foldMap (Data.Foldable.foldMap f) t
end.
Local Definition Foldable__Compose_fold {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {m}, forall `{GHC.Base.Monoid m}, (Compose inst_f inst_g) m -> m :=
fun {m} `{GHC.Base.Monoid m} => Foldable__Compose_foldMap GHC.Base.id.
Local Definition Foldable__Compose_foldl {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {b} {a}, (b -> a -> b) -> b -> (Compose inst_f inst_g) a -> b :=
fun {b} {a} =>
fun f z t =>
Data.SemigroupInternal.appEndo (Data.SemigroupInternal.getDual
(Foldable__Compose_foldMap (Data.SemigroupInternal.Mk_Dual GHC.Base.∘
(Data.SemigroupInternal.Mk_Endo GHC.Base.∘
GHC.Base.flip f)) t)) z.
Local Definition Foldable__Compose_foldr {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {a} {b}, (a -> b -> b) -> b -> (Compose inst_f inst_g) a -> b :=
fun {a} {b} =>
fun f z t =>
Data.SemigroupInternal.appEndo (Foldable__Compose_foldMap
(Coq.Program.Basics.compose Data.SemigroupInternal.Mk_Endo f) t) z.
Local Definition Foldable__Compose_foldl' {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {b} {a}, (b -> a -> b) -> b -> (Compose inst_f inst_g) a -> b :=
fun {b} {a} =>
fun f z0 xs =>
let f' := fun x k z => k (f z x) in
Foldable__Compose_foldr f' GHC.Base.id xs z0.
Local Definition Foldable__Compose_foldr' {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {a} {b}, (a -> b -> b) -> b -> (Compose inst_f inst_g) a -> b :=
fun {a} {b} =>
fun f z0 xs =>
let f' := fun k x z => k (f x z) in
Foldable__Compose_foldl f' GHC.Base.id xs z0.
Local Definition Foldable__Compose_length {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {a}, (Compose inst_f inst_g) a -> GHC.Num.Int :=
fun {a} =>
Foldable__Compose_foldl' (fun arg_0__ arg_1__ =>
match arg_0__, arg_1__ with
| c, _ => c GHC.Num.+ #1
end) #0.
Local Definition Foldable__Compose_null {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {a}, (Compose inst_f inst_g) a -> bool :=
fun {a} => Foldable__Compose_foldr (fun arg_0__ arg_1__ => false) true.
Local Definition Foldable__Compose_product {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {a}, forall `{GHC.Num.Num a}, (Compose inst_f inst_g) a -> a :=
fun {a} `{GHC.Num.Num a} =>
Coq.Program.Basics.compose Data.SemigroupInternal.getProduct
(Foldable__Compose_foldMap Data.SemigroupInternal.Mk_Product).
Local Definition Foldable__Compose_sum {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {a}, forall `{GHC.Num.Num a}, (Compose inst_f inst_g) a -> a :=
fun {a} `{GHC.Num.Num a} =>
Coq.Program.Basics.compose Data.SemigroupInternal.getSum
(Foldable__Compose_foldMap Data.SemigroupInternal.Mk_Sum).
Local Definition Foldable__Compose_toList {inst_f} {inst_g}
`{Data.Foldable.Foldable inst_f} `{Data.Foldable.Foldable inst_g}
: forall {a}, (Compose inst_f inst_g) a -> list a :=
fun {a} =>
fun t => GHC.Base.build' (fun _ => (fun c n => Foldable__Compose_foldr c n t)).
Program Instance Foldable__Compose {f} {g} `{Data.Foldable.Foldable f}
`{Data.Foldable.Foldable g}
: Data.Foldable.Foldable (Compose f g) :=
fun _ k__ =>
k__ {| Data.Foldable.fold__ := fun {m} `{GHC.Base.Monoid m} =>
Foldable__Compose_fold ;
Data.Foldable.foldMap__ := fun {m} {a} `{GHC.Base.Monoid m} =>
Foldable__Compose_foldMap ;
Data.Foldable.foldl__ := fun {b} {a} => Foldable__Compose_foldl ;
Data.Foldable.foldl'__ := fun {b} {a} => Foldable__Compose_foldl' ;
Data.Foldable.foldr__ := fun {a} {b} => Foldable__Compose_foldr ;
Data.Foldable.foldr'__ := fun {a} {b} => Foldable__Compose_foldr' ;
Data.Foldable.length__ := fun {a} => Foldable__Compose_length ;
Data.Foldable.null__ := fun {a} => Foldable__Compose_null ;
Data.Foldable.product__ := fun {a} `{GHC.Num.Num a} =>
Foldable__Compose_product ;
Data.Foldable.sum__ := fun {a} `{GHC.Num.Num a} => Foldable__Compose_sum ;
Data.Foldable.toList__ := fun {a} => Foldable__Compose_toList |}.
Local Definition Traversable__Compose_traverse {inst_f} {inst_g}
`{Data.Traversable.Traversable inst_f} `{Data.Traversable.Traversable inst_g}
: forall {f} {a} {b},
forall `{GHC.Base.Applicative f},
(a -> f b) -> (Compose inst_f inst_g) a -> f ((Compose inst_f inst_g) b) :=
fun {f} {a} {b} `{GHC.Base.Applicative f} =>
fun arg_0__ arg_1__ =>
match arg_0__, arg_1__ with
| f, Mk_Compose t =>
Mk_Compose Data.Functor.<$>
Data.Traversable.traverse (Data.Traversable.traverse f) t
end.
Local Definition Traversable__Compose_mapM {inst_f} {inst_g}
`{Data.Traversable.Traversable inst_f} `{Data.Traversable.Traversable inst_g}
: forall {m} {a} {b},
forall `{GHC.Base.Monad m},
(a -> m b) -> (Compose inst_f inst_g) a -> m ((Compose inst_f inst_g) b) :=
fun {m} {a} {b} `{GHC.Base.Monad m} => Traversable__Compose_traverse.
Local Definition Traversable__Compose_sequenceA {inst_f} {inst_g}
`{Data.Traversable.Traversable inst_f} `{Data.Traversable.Traversable inst_g}
: forall {f} {a},
forall `{GHC.Base.Applicative f},
(Compose inst_f inst_g) (f a) -> f ((Compose inst_f inst_g) a) :=
fun {f} {a} `{GHC.Base.Applicative f} =>
Traversable__Compose_traverse GHC.Base.id.
Local Definition Traversable__Compose_sequence {inst_f} {inst_g}
`{Data.Traversable.Traversable inst_f} `{Data.Traversable.Traversable inst_g}
: forall {m} {a},
forall `{GHC.Base.Monad m},
(Compose inst_f inst_g) (m a) -> m ((Compose inst_f inst_g) a) :=
fun {m} {a} `{GHC.Base.Monad m} => Traversable__Compose_sequenceA.
Program Instance Traversable__Compose {f} {g} `{Data.Traversable.Traversable f}
`{Data.Traversable.Traversable g}
: Data.Traversable.Traversable (Compose f g) :=
fun _ k__ =>
k__ {| Data.Traversable.mapM__ := fun {m} {a} {b} `{GHC.Base.Monad m} =>
Traversable__Compose_mapM ;
Data.Traversable.sequence__ := fun {m} {a} `{GHC.Base.Monad m} =>
Traversable__Compose_sequence ;
Data.Traversable.sequenceA__ := fun {f} {a} `{GHC.Base.Applicative f} =>
Traversable__Compose_sequenceA ;
Data.Traversable.traverse__ := fun {f} {a} {b} `{GHC.Base.Applicative f} =>
Traversable__Compose_traverse |}.
Local Definition Applicative__Compose_liftA2 {inst_f} {inst_g}
`{GHC.Base.Applicative inst_f} `{GHC.Base.Applicative inst_g}
: forall {a} {b} {c},
(a -> b -> c) ->
(Compose inst_f inst_g) a ->
(Compose inst_f inst_g) b -> (Compose inst_f inst_g) c :=
fun {a} {b} {c} =>
fun arg_0__ arg_1__ arg_2__ =>
match arg_0__, arg_1__, arg_2__ with
| f, Mk_Compose x, Mk_Compose y =>
Mk_Compose (GHC.Base.liftA2 (GHC.Base.liftA2 f) x y)
end.
Local Definition Applicative__Compose_op_zlztzg__ {inst_f} {inst_g}
`{GHC.Base.Applicative inst_f} `{GHC.Base.Applicative inst_g}
: forall {a} {b},
(Compose inst_f inst_g) (a -> b) ->
(Compose inst_f inst_g) a -> (Compose inst_f inst_g) b :=
fun {a} {b} =>
fun arg_0__ arg_1__ =>
match arg_0__, arg_1__ with
| Mk_Compose f, Mk_Compose x => Mk_Compose (GHC.Base.liftA2 _GHC.Base.<*>_ f x)
end.
Local Definition Applicative__Compose_op_ztzg__ {inst_f} {inst_g}
`{GHC.Base.Applicative inst_f} `{GHC.Base.Applicative inst_g}
: forall {a} {b},
(Compose inst_f inst_g) a ->
(Compose inst_f inst_g) b -> (Compose inst_f inst_g) b :=
fun {a} {b} =>
fun a1 a2 => Applicative__Compose_op_zlztzg__ (GHC.Base.id GHC.Base.<$ a1) a2.
Local Definition Applicative__Compose_pure {inst_f} {inst_g}
`{GHC.Base.Applicative inst_f} `{GHC.Base.Applicative inst_g}
: forall {a}, a -> (Compose inst_f inst_g) a :=
fun {a} => fun x => Mk_Compose (GHC.Base.pure (GHC.Base.pure x)).
Program Instance Applicative__Compose {f} {g} `{GHC.Base.Applicative f}
`{GHC.Base.Applicative g}
: GHC.Base.Applicative (Compose f g) :=
fun _ k__ =>
k__ {| GHC.Base.liftA2__ := fun {a} {b} {c} => Applicative__Compose_liftA2 ;
GHC.Base.op_zlztzg____ := fun {a} {b} => Applicative__Compose_op_zlztzg__ ;
GHC.Base.op_ztzg____ := fun {a} {b} => Applicative__Compose_op_ztzg__ ;
GHC.Base.pure__ := fun {a} => Applicative__Compose_pure |}.
(* Skipping all instances of class `GHC.Base.Alternative', including
`Data.Functor.Compose.Alternative__Compose' *)
(* External variables:
Gt Lt Type bool comparison false list negb true Coq.Program.Basics.compose
Data.Foldable.Foldable Data.Foldable.foldMap Data.Foldable.foldMap__
Data.Foldable.fold__ Data.Foldable.foldl'__ Data.Foldable.foldl__
Data.Foldable.foldr'__ Data.Foldable.foldr__ Data.Foldable.length__
Data.Foldable.null__ Data.Foldable.product__ Data.Foldable.sum__
Data.Foldable.toList__ Data.Functor.op_zlzdzg__ Data.Functor.Classes.Eq1
Data.Functor.Classes.Ord1 Data.Functor.Classes.compare1 Data.Functor.Classes.eq1
Data.Functor.Classes.liftCompare Data.Functor.Classes.liftCompare__
Data.Functor.Classes.liftEq Data.Functor.Classes.liftEq__
Data.SemigroupInternal.Mk_Dual Data.SemigroupInternal.Mk_Endo
Data.SemigroupInternal.Mk_Product Data.SemigroupInternal.Mk_Sum
Data.SemigroupInternal.appEndo Data.SemigroupInternal.getDual
Data.SemigroupInternal.getProduct Data.SemigroupInternal.getSum
Data.Traversable.Traversable Data.Traversable.mapM__
Data.Traversable.sequenceA__ Data.Traversable.sequence__
Data.Traversable.traverse Data.Traversable.traverse__ GHC.Base.Applicative
GHC.Base.Eq_ GHC.Base.Functor GHC.Base.Monad GHC.Base.Monoid GHC.Base.Ord
GHC.Base.build' GHC.Base.compare__ GHC.Base.const GHC.Base.flip GHC.Base.fmap
GHC.Base.fmap__ GHC.Base.id GHC.Base.liftA2 GHC.Base.liftA2__ GHC.Base.max__
GHC.Base.min__ GHC.Base.op_z2218U__ GHC.Base.op_zeze__ GHC.Base.op_zeze____
GHC.Base.op_zg____ GHC.Base.op_zgze____ GHC.Base.op_zl____ GHC.Base.op_zlzd__
GHC.Base.op_zlzd____ GHC.Base.op_zlze____ GHC.Base.op_zlztzg__
GHC.Base.op_zlztzg____ GHC.Base.op_zsze__ GHC.Base.op_zsze____
GHC.Base.op_ztzg____ GHC.Base.pure GHC.Base.pure__ GHC.Num.Int GHC.Num.Num
GHC.Num.fromInteger GHC.Num.op_zp__
*)
|
module Brainfeck.ST
import Control.ST
import Data.Fin
import Data.Fuel
import Data.Vect as V
import System
import Brainfeck.Lex
import Brainfeck.Parse
import Brainfeck.Type
import Brainfeck.VM as VM
%default total
export
interface CharIO (m : Type -> Type) where
getChar : STrans m Char res (const res)
putChar : Char -> STrans m () res (const res)
info : String -> STrans m () res (const res)
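-- A possible implementation over IO (illustrative sketch, not part of this
-- module; it assumes the Prelude getChar/putChar/putStrLn and Control.ST's lift):
-- CharIO IO where
--   getChar   = lift getChar
--   putChar c = lift (putChar c)
--   info s    = lift (putStrLn s)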
export
VMST : Nat -> Nat -> Nat -> Type
VMST l r i = State (VMState l r i)
readChar : CharIO io => (vm : Var) -> ST io () [vm ::: VMST l r i]
readChar vm = do c <- getChar
update vm (inputChar c)
outputChar : CharIO io => (vm : Var) -> ST io () [vm ::: VMST l r i]
outputChar vmVar = do
vm <- read vmVar
let cell = outputChar vm
putChar cell
updateVM : (VMState l r i -> VMState l r i)
-> (vm : Var)
-> ST id () [vm ::: VMST l r i]
updateVM f vmVar = update vmVar f
increment : (vm : Var) -> ST id () [vm ::: VMST l r i]
increment = updateVM VM.increment
decrement : (vm : Var) -> ST id () [vm ::: VMST l r i]
decrement = updateVM VM.decrement
jumpBack : (vm : Var) -> ST id () [vm ::: VMST l r (S i)]
jumpBack = updateVM VM.jumpBack
jumpForward : (vm : Var) -> ST id () [vm ::: VMST l r (S i)]
jumpForward = updateVM VM.jumpForward
data StepResult : Type where
StepInfo : String -> (l : Nat) -> (r : Nat) -> (i : Nat) -> StepResult
StepSuccess : (l : Nat) -> (r : Nat) -> (i : Nat) -> StepResult
ResultST : Type
ResultST = State StepResult
data AlwaysSucceeds : Type where
STrivial : (l : Nat) -> (r : Nat) -> AlwaysSucceeds
AlwaysST : Type
AlwaysST = State StepResult
shiftLeft : CharIO io
=> (vm : Var)
-> ST io StepResult [vm ::: VMST l r i :->
(\res => case res of
(StepInfo e l r i) => VMST l r i
(StepSuccess l' r' i) => VMST l' r' i)]
shiftLeft {l = Z} {r} {i} _ = do
let msg = "Cell index is 0. Unable to leftshift."
info msg
pure $ StepInfo msg Z r i
shiftLeft {l = (S k)} {r} {i} vm = update vm (VM.shiftLeft) >>= \_ => pure (StepSuccess k (S r) i)
shiftRight : {l : Nat} -> {r : Nat} -> {auto p : IsSucc (l + r)}
-> (vm : Var)
-> ST id AlwaysSucceeds [ vm ::: VMST l r i :->
(\res => case res of
(STrivial l' r') => VMST l' r' i) ]
shiftRight {l = (S k)} {r = Z} vmVar =
update vmVar (VM.shiftRight . grow) >>= \_ => pure (STrivial (S (S k)) k)
where
growProof : (vm : VMState llen (0 + (rlen + 0)) i) -> VMState llen rlen i
growProof {rlen} vm = rewrite plusCommutative 0 rlen in vm
grow : VMState (S k) 0 i -> VMState (S k) (S k) i
grow vm = growProof (growVM vm)
shiftRight {l} {r = (S k)} vm = update vm VM.shiftRight >>= \_ => pure (STrivial (S l) k)
stepSuccess : {l : Nat} -> {r : Nat} -> {i : Nat} -> StepResult
stepSuccess {l} {r} {i} = StepSuccess l r i
step : CharIO io => {auto p : IsSucc (l + r) }
-> (vm : Var)
-> ST io StepResult [ vm ::: VMST l r (S i) :->
(\res => case res of
(StepInfo e l r i) => VMST l r i
(StepSuccess l' r' i) => VMST l' r' i) ]
step {l} {r} {i} vmVar = do
vm <- read vmVar
case instruction vm of
OLeft => do vm' <- shiftLeft vmVar
case vm' of
(StepInfo e l r i) => pure $ StepInfo e l r i
(StepSuccess l' r' i) => pure $ StepSuccess l' r' i
ORight => shiftRight vmVar >>= \(STrivial l' r') => pure (StepSuccess l' r' (S i))
OInc => increment vmVar >>= \_ => pure stepSuccess
ODec => decrement vmVar >>= \_ => pure stepSuccess
OOut => outputChar vmVar >>= \_ => pure stepSuccess
OIn => readChar vmVar >>= \_ => pure stepSuccess
OJumpZero _ => jumpForward vmVar >>= \_ => pure stepSuccess
OJumpNZero _ => jumpBack vmVar >>= \_ => pure stepSuccess
runLoop : CharIO io => {auto p : IsSucc (l + r) }
-> Fuel -> (vm : Var)
-> ST io () [ remove vm (VMST l r (S i)) ]
runLoop Dry vmVar = delete vmVar
runLoop (More f) vmVar = do
res <- step vmVar
case res of
(StepInfo _ _ _ (S k)) => info "Aborting" >>= \_ => delete vmVar
(StepInfo _ _ _ Z ) => info "Ended up in an undefined state (missing all instructions)" >>= \_ => delete vmVar
(StepSuccess _ _ Z ) => do info "Ended up in an undefined state (missing all instructions) after successful step"
delete vmVar
(StepSuccess tapeL tapeR (S k)) => do
case isItSucc (tapeL + tapeR) of
No _ => info "Somehow the tape was deleted. Aborting." >>= \_ => delete vmVar
Yes prf => do
vm <- read vmVar
let pc' = FS (pc vm)
case strengthen pc' of
(Left l) => delete vmVar -- end of program
(Right r) => do
update vmVar (record { pc = r })
runLoop f vmVar
printTokens : CharIO io => Bool -> Tokens n -> ST io () []
printTokens False _ = pure ()
printTokens True xs = do
info "Lexed Tokens: "
let strs = foldr (++) "" . V.intersperse ", " $ map (tokenToS . snd) xs
info strs
info ""
pure ()
printParse : CharIO io => Bool -> Instructions n -> ST io () []
printParse False _ = pure ()
printParse True xs = do
info "Parsed Operations: "
let strs = foldr (++) "" . V.intersperse ", " $ map operationToS xs
info strs
info ""
pure ()
export
runProgram : CharIO io => (printLex : Bool) -> (printParse : Bool)
-> Fuel -> (progText : String) -> ST io () []
runProgram plex pparse fuel progText =
case lex progText of
(Z ** _ ) => info "Nothing to do. Bye"
(S n ** ts) => do
printTokens plex ts
case parse ts of
Left (MkParseError loc s) =>
info $ "Error at " ++ locToS loc ++ " " ++ s
Right (Z ** _) => info "Empty parse"
Right (S n ** ops) => do
printParse pparse ops
let vm = initVM ops
case isItSucc InitialVMSize of
(No _) => info "This was compiled with an invalid InitialVMSize! See ya."
(Yes prf) => do v <- new vm
runLoop {p = prf} {l = 0} {r = InitialVMSize} fuel v
|
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE ImportQualifiedPost #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE ViewPatterns #-}
{-# OPTIONS_GHC -Wwarn #-}
module Data.CDF
( Centile(..)
, renderCentile
, briefCentiles
, stdCentiles
, nEquicentiles
, CDFError (..)
, CDF(..)
, CDFIx (..)
, KnownCDF (..)
, liftCDFVal
, unliftCDFVal
, centilesCDF
, filterCDF
, zeroCDF
, projectCDF
, projectCDF'
, indexCDF
, DirectCDF
, cdf
, mapToCDF
, Divisible (..)
, Combine (..)
, stdCombine1
, stdCombine2
, CDF2
, collapseCDFs
, cdf2OfCDFs
--
, module Data.SOP.Strict
) where
import Prelude (String, (!!), error, head, show)
import Cardano.Prelude hiding (head, show)
import Data.Aeson (FromJSON(..), ToJSON(..))
import Data.SOP.Strict
import Data.Time.Clock (NominalDiffTime)
import Data.Vector qualified as Vec
import Statistics.Sample qualified as Stat
import Text.Printf (printf)
import Ouroboros.Consensus.Util.Time (secondsToNominalDiffTime)
-- | Centile specifier: a fractional in range of [0; 1].
newtype Centile =
Centile { unCentile :: Double }
deriving (Eq, Generic, FromJSON, ToJSON, Show)
deriving anyclass NFData
renderCentile :: Int -> Centile -> String
renderCentile width = \case
Centile x -> printf ("%0."<>show (width-2)<>"f") x
briefCentiles :: [Centile]
briefCentiles =
[ Centile 0.5, Centile 0.9, Centile 1.0 ]
stdCentiles :: [Centile]
stdCentiles =
[ Centile 0.01, Centile 0.05
, Centile 0.1, Centile 0.2, Centile 0.3, Centile 0.4
, Centile 0.5, Centile 0.6
, Centile 0.7, Centile 0.75
, Centile 0.8, Centile 0.85, Centile 0.875
, Centile 0.9, Centile 0.925, Centile 0.95, Centile 0.97, Centile 0.98, Centile 0.99
, Centile 0.995, Centile 0.997, Centile 0.998, Centile 0.999
, Centile 0.9995, Centile 0.9997, Centile 0.9998, Centile 0.9999
]
-- | Given a N-large population, produce centiles for each element, except for min and max.
-- We don't need min and max, because CDF already has range.
nEquicentiles :: Int -> [Centile]
nEquicentiles n =
if reindices == indices
then Centile <$> centiles
else error $ printf "centilesForN: reindices for %d: %s, indices: %s" n (show reindices) (show indices)
where
reindices = centiles <&> runCentile n
centiles = [ step * (fromIntegral i + 0.5) | i <- indices ]
indices = if n > 2
then [1 .. n - 2] -- ignore first and last indices, standing for min and max.
else []
step :: Double
step = 1.0 / fromIntegral n
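-- A worked example (not from the original module): for a population of size 4,
-- the first and last elements stand for min and max, so only indices [1, 2]
-- receive centiles, with step 0.25:
--   nEquicentiles 4 == [Centile 0.375, Centile 0.625]
-- and the consistency check passes because runCentile 4 0.375 == 1 and
-- runCentile 4 0.625 == 2.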
-- | Given a centile of N-large population, produce index of the population element referred by centile.
{-# INLINE runCentile #-}
runCentile :: Int -> Double -> Int
runCentile n centile = floor (fromIntegral n * centile)
& min (n - 1)
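-- For instance (illustrative): runCentile 10 0.5 == 5, while runCentile 10 1.0
-- is clamped to 9 by the `min (n - 1)` guard.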
{-# INLINE vecCentile #-}
vecCentile :: Vec.Vector a -> Int -> Centile -> a
vecCentile vec n (Centile c) = vec Vec.! runCentile n c
--
-- * Parametric CDF (cumulative distribution function)
--
data CDF p a =
CDF
{ cdfSize :: Int
, cdfAverage :: Double
, cdfStddev :: Double
, cdfRange :: (a, a)
, cdfSamples :: [(Centile, p a)]
}
deriving (Eq, Functor, Generic, Show)
deriving anyclass NFData
instance (FromJSON (p a), FromJSON a) => FromJSON (CDF p a)
instance ( ToJSON (p a), ToJSON a) => ToJSON (CDF p a)
-- * Singletons
--
data CDFIx p where
CDFI :: CDFIx I
CDF2 :: CDFIx (CDF I)
class KnownCDF a where
cdfIx :: CDFIx a
instance KnownCDF I where cdfIx = CDFI
instance KnownCDF (CDF I) where cdfIx = CDF2
centilesCDF :: CDF p a -> [Centile]
centilesCDF = fmap fst . cdfSamples
zeroCDF :: (Num a) => CDF p a
zeroCDF =
CDF
{ cdfSize = 0
, cdfAverage = 0
, cdfStddev = 0
, cdfRange = (0, 0)
, cdfSamples = mempty
}
filterCDF :: ((Centile, p a) -> Bool) -> CDF p a -> CDF p a
filterCDF f d =
d { cdfSamples = cdfSamples d & filter f }
indexCDF :: Int -> CDF p a -> p a
indexCDF i d = snd $ cdfSamples d !! i
projectCDF :: Centile -> CDF p a -> Maybe (p a)
projectCDF p = fmap snd . find ((== p) . fst) . cdfSamples
projectCDF' :: String -> Centile -> CDF p a -> p a
projectCDF' desc p =
maybe (error er) snd . find ((== p) . fst) . cdfSamples
where
er = printf "Missing centile %f in %s" (show $ unCentile p) desc
--
-- * Trivial instantiation: samples are value points
--
type DirectCDF a = CDF I a
liftCDFVal :: Real a => a -> CDFIx p -> p a
liftCDFVal x = \case
CDFI -> I x
CDF2 -> CDF { cdfSize = 1
, cdfAverage = fromRational (toRational x)
, cdfStddev = 0
, cdfRange = (x, x)
, cdfSamples = []
, .. }
unliftCDFVal :: Divisible a => CDFIx p -> p a -> a
unliftCDFVal CDFI (I x) = x
unliftCDFVal CDF2 CDF{cdfRange = (mi, ma)} = (mi + ma) `divide` 2
cdf :: (Real a, KnownCDF p) => [Centile] -> [a] -> CDF p a
cdf centiles (sort -> sorted) =
CDF
{ cdfSize = size
, cdfAverage = Stat.mean doubleVec
, cdfStddev = Stat.stdDev doubleVec
, cdfRange = (mini, maxi)
, cdfSamples =
( (Centile 0, liftCDFVal mini ix) :) .
(<> [(Centile 1.0, liftCDFVal maxi ix) ]) $
centiles <&>
\spec ->
let sample = if size == 0 then 0
else vecCentile vec size spec
in (,) spec (liftCDFVal sample ix)
}
where ix = cdfIx
vec = Vec.fromList sorted
size = length vec
doubleVec = fromRational . toRational <$> vec
(,) mini maxi =
if size == 0
then (0, 0)
else (vec Vec.! 0, Vec.last vec)
mapToCDF :: Real a => (b -> a) -> [Centile] -> [b] -> DirectCDF a
mapToCDF f pspecs xs = cdf pspecs (f <$> xs)
type CDF2 a = CDF (CDF I) a
data CDFError
= CDFIncoherentSamplingLengths [Int]
| CDFIncoherentSamplingCentiles [[Centile]]
| CDFEmptyDataset
deriving Show
-- | Avoiding `Fractional`
class Real a => Divisible a where
divide :: a -> Double -> a
instance Divisible Double where
divide = (/)
instance Divisible Int where
divide x by = round $ fromIntegral x / by
instance Divisible Integer where
divide x by = round $ fromIntegral x / by
instance Divisible Word32 where
divide x by = round $ fromIntegral x / by
instance Divisible Word64 where
divide x by = round $ fromIntegral x / by
instance Divisible NominalDiffTime where
divide x by = x / secondsToNominalDiffTime by
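-- Illustrative behaviour (not part of the original module):
--   (7 :: Int)    `divide` 3.0 == 2     -- rounds the Double quotient
--   (1 :: Double) `divide` 4.0 == 0.25  -- plain division for fractional types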
-- * Combining population stats
data Combine p a
= Combine
{ cWeightedAverages :: !([(Int, Double)] -> Double)
, cStddevs :: !([Double] -> Double)
, cRanges :: !([(a, a)] -> (a, a))
, cWeightedSamples :: !([(Int, a)] -> a)
, cCDF :: !([p a] -> Either CDFError (CDF I a))
}
stdCombine1 :: forall a. (Divisible a) => [Centile] -> Combine I a
stdCombine1 cs =
Combine
{ cWeightedAverages = weightedAverage
, cRanges = outerRange
, cStddevs = maximum -- it's an approximation
, cWeightedSamples = weightedAverage
, cCDF = Right . cdf cs . fmap unI
}
where
weightedAverage :: forall b. (Divisible b) => [(Int, b)] -> b
weightedAverage xs = (`divide` (fromIntegral . sum $ fst <$> xs)) . sum $
xs <&> \(size, avg) -> fromIntegral size * avg
outerRange xs = (,) (minimum $ fst <$> xs)
(maximum $ snd <$> xs)
stdCombine2 :: Divisible a => [Centile] -> Combine (CDF I) a
stdCombine2 cs =
let c@Combine{..} = stdCombine1 cs in
Combine
{ cCDF = collapseCDFs c
, ..
}
-- | Collapse: Given a ([Value] -> CDF I) function, and a list of (CDF I), produce a (CDF I).
--
collapseCDFs :: forall a. Combine I a -> [CDF I a] -> Either CDFError (CDF I a)
collapseCDFs _ [] = Left CDFEmptyDataset
collapseCDFs Combine{..} xs = do
unless (all (head lengths ==) lengths) $
Left $ CDFIncoherentSamplingLengths lengths
unless (null incoherent) $
Left $ CDFIncoherentSamplingCentiles (fmap fst <$> incoherent)
pure CDF
{ cdfSize = sum sizes
, cdfAverage = xs <&> cdfAverage & cWeightedAverages . zip sizes
, cdfRange = xs <&> cdfRange & cRanges
, cdfStddev = xs <&> cdfStddev & cStddevs
, cdfSamples = coherent <&>
fmap (I . cWeightedSamples . zip sizes . fmap unI)
}
where
sizes = xs <&> cdfSize
samples = xs <&> cdfSamples
lengths = length <$> samples
centileOrdered :: [[(Centile, I a)]] -- Each sublist must (checked) have the same Centile.
centileOrdered = transpose samples
coherent :: [(Centile, [I a])]
(incoherent, coherent) = partitionEithers $ centileOrdered <&>
\case
[] -> error "cdfOfCDFs: empty list of centiles, hands down."
xxs@((c, _):(fmap fst -> cs)) -> if any (/= c) cs
then Left xxs
else Right (c, snd <$> xxs)
-- | Polymorphic, but practically speaking, intended for either:
-- 1. given a ([I] -> CDF I) function, and a list of (CDF I), produce a CDF (CDF I), or
-- 2. given a ([CDF I] -> CDF I) function, and a list of (CDF (CDF I)), produce a CDF (CDF I)
cdf2OfCDFs :: forall a p. Combine p a -> [CDF p a] -> Either CDFError (CDF (CDF I) a)
cdf2OfCDFs _ [] = Left CDFEmptyDataset
cdf2OfCDFs Combine{..} xs = do
unless (all (head lengths ==) lengths) $
Left $ CDFIncoherentSamplingLengths lengths
unless (null incoherent) $
Left $ CDFIncoherentSamplingCentiles (fmap fst <$> incoherent)
cdfSamples <- mapM sequence -- ..to Either CDFError [(Centile, CDF I a)]
(coherent <&> fmap cCDF :: [(Centile, Either CDFError (CDF I a))])
pure CDF
{ cdfSize = sum sizes
, cdfAverage = xs <&> cdfAverage & cWeightedAverages . zip sizes
, cdfRange = xs <&> cdfRange & cRanges
, cdfStddev = xs <&> cdfStddev & cStddevs
, cdfSamples = cdfSamples
}
where
sizes = xs <&> cdfSize
samples = xs <&> cdfSamples
lengths = length <$> samples
centileOrdered :: [[(Centile, p a)]]
centileOrdered = transpose samples
(incoherent, coherent) = partitionEithers $ centileOrdered <&>
\case
[] -> error "cdfOfCDFs: empty list of centiles, hands down."
xxs@((c, _):(fmap fst -> cs)) -> if any (/= c) cs
then Left xxs
else Right (c, snd <$> xxs)
|
By July 28, the division was still embroiled in this fight and the 766th bypassed it and moved toward <unk> on the left flank of the city. However, the 766th had suffered significant setbacks at Yongdok, with substantial losses due to American and British naval artillery fire. Once it arrived in the area, it met heavier resistance from South Korean police and militia operating in armored vehicles. With air support, they offered the heaviest resistance the unit had faced thus far. With the support of only one of the 5th Division's regiments, the 766th was unable to sustain its advance, and had to pull back by the 29th. Movement from the ROK Capital Division prevented the 766th Regiment from infiltrating further into the mountains. ROK cavalry and civilian police then began isolated counteroffensives against the 766th. These forces included special counter-guerrilla units targeting the 766th and countering its tactics. South Korean troops halted the advance of the North Koreans again around the end of the month thanks to increased reinforcements and support closer to the Pusan Perimeter logistics network.
|
# The tic-tac-toe matrix has already been defined for you
ttt <- matrix(c("O", NA, "X", NA, "O", NA, "X", "O", "X"), nrow = 3, ncol = 3)
# define the double for loop
for (i in 1:nrow(ttt)) {
for (j in 1:ncol(ttt)) {
print(paste("On row",i,"and column",j,"board contains",ttt[i,j]))
}
} |
The history of Toniná continued after most other Classic Maya cities had fallen, perhaps aided by the site's relative isolation. Ruler 10 is associated with a monument dating to 904 in the Terminal Classic, and a monument dating to 909 bears the last known Long Count date, although the name of the king has not survived. Ceramic fragments indicate that occupation at the site continued for another century or more.
|
When about-to-be parents rush to the hospital for delivery, full of joy and terror, they almost never imagine that when they leave, their newborn will not be with them. They're unprepared for the heartbreak of an empty lap. But an infant born too sick or too soon to go home may remain behind in the neonatal intensive care unit for a few days or for months on end. This week, for example, Kamryn Ruth Abbott celebrated her third month of life at Memorial Hermann Memorial City Medical Center's NICU.
module _ where
open import Agda.Builtin.List
open import Agda.Builtin.Reflection
open import Agda.Builtin.Unit
record Id (A : Set) : Set where
field id : A → A
open Id {{...}}
postulate
T : {A : Set} → A → Set
cong : ∀ {A B : Set} (f : A → B) {x} → T x → T (f x)
X : Set
lem : (x : X) → T x
instance
IdX : Id X
IdX .id n = n
macro
follows-from : Term → Term → TC ⊤
follows-from prf hole = do
typeError (termErr prf ∷ [])
loops : (x : X) → T x
loops x =
follows-from (cong id (lem x))
|
!
! -------------------------------------------------------------
! B I S E C T
! -------------------------------------------------------------
!
SUBROUTINE BISECT(N, EPS1, D, E, E2, LB, UB, MM, M, W, IND, IERR, RV4, &
RV5)
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
!...Translated by Pacific-Sierra Research 77to90 4.3E 21:52:17 11/14/01
!...Switches:
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
INTEGER , INTENT(IN) :: N
INTEGER , INTENT(IN) :: MM
INTEGER , INTENT(INOUT) :: M
INTEGER , INTENT(OUT) :: IERR
REAL(DOUBLE) , INTENT(INOUT) :: EPS1
REAL(DOUBLE) , INTENT(INOUT) :: LB
REAL(DOUBLE) , INTENT(INOUT) :: UB
INTEGER , INTENT(INOUT) :: IND(MM)
REAL(DOUBLE) , INTENT(IN) :: D(N)
REAL(DOUBLE) , INTENT(IN) :: E(N)
REAL(DOUBLE) , INTENT(INOUT) :: E2(N)
REAL(DOUBLE) , INTENT(INOUT) :: W(MM)
REAL(DOUBLE) , INTENT(INOUT) :: RV4(N)
REAL(DOUBLE) , INTENT(INOUT) :: RV5(N)
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
INTEGER :: I, J, K, L, P, Q, R, S, II, M1, M2, TAG, ISTURM
REAL(DOUBLE) :: U, V, T1, T2, XU, X0, X1, MACHEP
!-----------------------------------------------
!
!
! THIS SUBROUTINE IS A TRANSLATION OF THE BISECTION TECHNIQUE
! IN THE ALGOL PROCEDURE TRISTURM BY PETERS AND WILKINSON.
! HANDBOOK FOR AUTO. COMP., VOL.II-LINEAR ALGEBRA, 418-439(1971).
!
! THIS SUBROUTINE FINDS THOSE EIGENVALUES OF A TRIDIAGONAL
! SYMMETRIC MATRIX WHICH LIE IN A SPECIFIED INTERVAL,
! USING BISECTION.
!
! ON INPUT:
!
! N IS THE ORDER OF THE MATRIX;
!
! EPS1 IS AN ABSOLUTE ERROR TOLERANCE FOR THE COMPUTED
! EIGENVALUES. IF THE INPUT EPS1 IS NON-POSITIVE,
! IT IS RESET FOR EACH SUBMATRIX TO A DEFAULT VALUE,
! NAMELY, MINUS THE PRODUCT OF THE RELATIVE MACHINE
! PRECISION AND THE 1-NORM OF THE SUBMATRIX;
!
! D CONTAINS THE DIAGONAL ELEMENTS OF THE INPUT MATRIX;
!
! E CONTAINS THE SUBDIAGONAL ELEMENTS OF THE INPUT MATRIX
! IN ITS LAST N-1 POSITIONS. E(1) IS ARBITRARY;
!
! E2 CONTAINS THE SQUARES OF THE CORRESPONDING ELEMENTS OF E.
! E2(1) IS ARBITRARY;
!
! LB AND UB DEFINE THE INTERVAL TO BE SEARCHED FOR EIGENVALUES.
! IF LB IS NOT LESS THAN UB, NO EIGENVALUES WILL BE FOUND;
!
! MM SHOULD BE SET TO AN UPPER BOUND FOR THE NUMBER OF
! EIGENVALUES IN THE INTERVAL. WARNING: IF MORE THAN
! MM EIGENVALUES ARE DETERMINED TO LIE IN THE INTERVAL,
! AN ERROR RETURN IS MADE WITH NO EIGENVALUES FOUND.
!
! ON OUTPUT:
!
! EPS1 IS UNALTERED UNLESS IT HAS BEEN RESET TO ITS
! (LAST) DEFAULT VALUE;
!
! D AND E ARE UNALTERED;
!
! ELEMENTS OF E2, CORRESPONDING TO ELEMENTS OF E REGARDED
! AS NEGLIGIBLE, HAVE BEEN REPLACED BY ZERO CAUSING THE
! MATRIX TO SPLIT INTO A DIRECT SUM OF SUBMATRICES.
! E2(1) IS ALSO SET TO ZERO;
!
! M IS THE NUMBER OF EIGENVALUES DETERMINED TO LIE IN (LB,UB);
!
! W CONTAINS THE M EIGENVALUES IN ASCENDING ORDER;
!
! IND CONTAINS IN ITS FIRST M POSITIONS THE SUBMATRIX INDICES
! ASSOCIATED WITH THE CORRESPONDING EIGENVALUES IN W --
! 1 FOR EIGENVALUES BELONGING TO THE FIRST SUBMATRIX FROM
! THE TOP, 2 FOR THOSE BELONGING TO THE SECOND SUBMATRIX, ETC.;
!
! IERR IS SET TO
! ZERO FOR NORMAL RETURN,
! 3*N+1 IF M EXCEEDS MM;
!
! RV4 AND RV5 ARE TEMPORARY STORAGE ARRAYS.
!
! THE ALGOL PROCEDURE STURMCNT CONTAINED IN TRISTURM
! APPEARS IN BISECT IN-LINE.
!
! NOTE THAT SUBROUTINE TQL1 OR IMTQL1 IS GENERALLY FASTER THAN
! BISECT, IF MORE THAN N/4 EIGENVALUES ARE TO BE FOUND.
!
! QUESTIONS AND COMMENTS SHOULD BE DIRECTED TO B. S. GARBOW,
! APPLIED MATHEMATICS DIVISION, ARGONNE NATIONAL LABORATORY
!
! ------------------------------------------------------------------
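!
!     EXAMPLE (ILLUSTRATIVE SKETCH, NOT PART OF THE ORIGINAL ROUTINE):
!     FOR N = 3, D = (2,2,2), E = (*,-1,-1), E2 = (*,1,1), LB = 0.0D0,
!     UB = 4.0D0, MM = 3 AND EPS1 = 0.0D0, THE CALL
!         CALL BISECT(N, EPS1, D, E, E2, LB, UB, MM, M, W, IND, IERR, RV4, RV5)
!     RETURNS IERR = 0, M = 3 AND W = (2-SQRT(2), 2, 2+SQRT(2)) IN
!     ASCENDING ORDER.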
!
! :::::::::: MACHEP IS A MACHINE DEPENDENT PARAMETER SPECIFYING
! THE RELATIVE PRECISION OF FLOATING POINT ARITHMETIC.
! MACHEP = 16.0D0**(-13) FOR LONG FORM ARITHMETIC
! ON S360 ::::::::::
DATA MACHEP/ 1.D-12/
!
IERR = 0
TAG = 0
T1 = LB
T2 = UB
! :::::::::: LOOK FOR SMALL SUB-DIAGONAL ENTRIES ::::::::::
DO I = 1, N
IF (I == 1) GO TO 20
IF (DABS(E(I)) > MACHEP*(DABS(D(I))+DABS(D(I-1)))) CYCLE
20 CONTINUE
E2(I) = 0.0D0
END DO
! :::::::::: DETERMINE THE NUMBER OF EIGENVALUES
! IN THE INTERVAL ::::::::::
P = 1
Q = N
X1 = UB
ISTURM = 1
GO TO 320
60 CONTINUE
M = S
X1 = LB
ISTURM = 2
GO TO 320
80 CONTINUE
M = M - S
IF (M > MM) GO TO 980
Q = 0
R = 0
! :::::::::: ESTABLISH AND PROCESS NEXT SUBMATRIX, REFINING
! INTERVAL BY THE GERSCHGORIN BOUNDS ::::::::::
100 CONTINUE
IF (R == M) GO TO 1001
TAG = TAG + 1
P = Q + 1
XU = D(P)
X0 = D(P)
U = 0.0D0
!
DO Q = P, N
X1 = U
U = 0.0D0
V = 0.0D0
IF (Q /= N) THEN
U = DABS(E(Q+1))
V = E2(Q+1)
ENDIF
XU = DMIN1(D(Q)-(X1+U),XU)
X0 = DMAX1(D(Q)+(X1+U),X0)
IF (V /= 0.0D0) CYCLE
EXIT
END DO
!
X1 = DMAX1(DABS(XU),DABS(X0))*MACHEP
IF (EPS1 <= 0.0D0) EPS1 = -X1
IF (P == Q) THEN
! :::::::::: CHECK FOR ISOLATED ROOT WITHIN INTERVAL ::::::::::
IF (T1>D(P) .OR. D(P)>=T2) GO TO 940
M1 = P
M2 = P
RV5(P) = D(P)
GO TO 900
ENDIF
X1 = X1*DFLOAT(Q - P + 1)
LB = DMAX1(T1,XU - X1)
UB = DMIN1(T2,X0 + X1)
X1 = LB
ISTURM = 3
GO TO 320
200 CONTINUE
M1 = S + 1
X1 = UB
ISTURM = 4
GO TO 320
220 CONTINUE
M2 = S
IF (M1 > M2) GO TO 940
! :::::::::: FIND ROOTS BY BISECTION ::::::::::
X0 = UB
ISTURM = 5
!
RV5(M1:M2) = UB
RV4(M1:M2) = LB
! :::::::::: LOOP FOR K-TH EIGENVALUE
! FOR K=M2 STEP -1 UNTIL M1 DO --
! (-DO- NOT USED TO LEGALIZE COMPUTED-GO-TO) ::::::::::
K = M2
250 CONTINUE
XU = LB
! :::::::::: FOR I=K STEP -1 UNTIL M1 DO -- ::::::::::
DO II = M1, K
I = M1 + K - II
IF (XU >= RV4(I)) CYCLE
XU = RV4(I)
EXIT
END DO
!
X0 = MIN(RV5(K),X0)
! :::::::::: NEXT BISECTION STEP ::::::::::
300 CONTINUE
X1 = (XU + X0)*0.5D0
IF (X0 - XU <= 2.0D0*MACHEP*(DABS(XU) + DABS(X0)) + DABS(EPS1)) GO TO 420
! :::::::::: IN-LINE PROCEDURE FOR STURM SEQUENCE ::::::::::
320 CONTINUE
S = P - 1
U = 1.0D0
!
DO I = P, Q
IF (U == 0.0D0) THEN
V = DABS(E(I))/MACHEP
ELSE
V = E2(I)/U
ENDIF
U = D(I) - X1 - V
IF (U >= 0.0D0) CYCLE
S = S + 1
END DO
!
GO TO (60,80,200,220,360) ISTURM
! :::::::::: REFINE INTERVALS ::::::::::
360 CONTINUE
IF (S >= K) GO TO 400
XU = X1
IF (S >= M1) GO TO 380
RV4(M1) = X1
GO TO 300
380 CONTINUE
RV4(S+1) = X1
RV5(S) = MIN(X1,RV5(S))
GO TO 300
400 CONTINUE
X0 = X1
GO TO 300
! :::::::::: K-TH EIGENVALUE FOUND ::::::::::
420 CONTINUE
RV5(K) = X1
K = K - 1
IF (K >= M1) GO TO 250
! :::::::::: ORDER EIGENVALUES TAGGED WITH THEIR
! SUBMATRIX ASSOCIATIONS ::::::::::
900 CONTINUE
S = R
R = R + M2 - M1 + 1
J = 1
K = M1
!
DO L = 1, R
IF (J <= S) THEN
IF (K > M2) EXIT
IF (RV5(K) >= W(L)) GO TO 915
!
W(L+S-J+1:L+1:(-1)) = W(L+S-J:L:(-1))
IND(L+S-J+1:L+1:(-1)) = IND(L+S-J:L:(-1))
ENDIF
!
W(L) = RV5(K)
IND(L) = TAG
K = K + 1
CYCLE
915 CONTINUE
J = J + 1
END DO
!
940 CONTINUE
IF (Q < N) GO TO 100
GO TO 1001
! :::::::::: SET ERROR -- UNDERESTIMATE OF NUMBER OF
! EIGENVALUES IN INTERVAL ::::::::::
980 CONTINUE
IERR = 3*N + 1
1001 CONTINUE
LB = T1
UB = T2
RETURN
! :::::::::: LAST CARD OF BISECT ::::::::::
END SUBROUTINE BISECT
|
[GOAL]
p q r : ℕ+
⊢ sumInv {p, q, r} = (↑↑p)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹
[PROOFSTEP]
simp only [sumInv, add_zero, insert_eq_cons, add_assoc, map_cons, sum_cons, map_singleton, sum_singleton]
[GOAL]
pqr : Multiset ℕ+
⊢ Admissible pqr → 1 < sumInv pqr
[PROOFSTEP]
rw [Admissible]
[GOAL]
pqr : Multiset ℕ+
⊢ (∃ q r, A' q r = pqr) ∨ (∃ r, D' r = pqr) ∨ E' 3 = pqr ∨ E' 4 = pqr ∨ E' 5 = pqr → 1 < sumInv pqr
[PROOFSTEP]
rintro (⟨p', q', H⟩ | ⟨n, H⟩ | H | H | H)
[GOAL]
case inl.intro.intro
pqr : Multiset ℕ+
p' q' : ℕ+
H : A' p' q' = pqr
⊢ 1 < sumInv pqr
[PROOFSTEP]
rw [← H, A', sumInv_pqr, add_assoc]
[GOAL]
case inl.intro.intro
pqr : Multiset ℕ+
p' q' : ℕ+
H : A' p' q' = pqr
⊢ 1 < (↑↑1)⁻¹ + ((↑↑p')⁻¹ + (↑↑q')⁻¹)
[PROOFSTEP]
simp only [lt_add_iff_pos_right, PNat.one_coe, inv_one, Nat.cast_one]
[GOAL]
case inl.intro.intro
pqr : Multiset ℕ+
p' q' : ℕ+
H : A' p' q' = pqr
⊢ 0 < (↑↑p')⁻¹ + (↑↑q')⁻¹
[PROOFSTEP]
apply add_pos
[GOAL]
case inl.intro.intro.ha
pqr : Multiset ℕ+
p' q' : ℕ+
H : A' p' q' = pqr
⊢ 0 < (↑↑p')⁻¹
[PROOFSTEP]
simp only [PNat.pos, Nat.cast_pos, inv_pos]
[GOAL]
case inl.intro.intro.hb
pqr : Multiset ℕ+
p' q' : ℕ+
H : A' p' q' = pqr
⊢ 0 < (↑↑q')⁻¹
[PROOFSTEP]
simp only [PNat.pos, Nat.cast_pos, inv_pos]
[GOAL]
case inr.inl.intro
pqr : Multiset ℕ+
n : ℕ+
H : D' n = pqr
⊢ 1 < sumInv pqr
[PROOFSTEP]
rw [← H, D', sumInv_pqr]
[GOAL]
case inr.inl.intro
pqr : Multiset ℕ+
n : ℕ+
H : D' n = pqr
⊢ 1 < (↑↑2)⁻¹ + (↑↑2)⁻¹ + (↑↑n)⁻¹
[PROOFSTEP]
conv_rhs => simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
n : ℕ+
H : D' n = pqr
| (↑↑2)⁻¹ + (↑↑2)⁻¹ + (↑↑n)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
n : ℕ+
H : D' n = pqr
| (↑↑2)⁻¹ + (↑↑2)⁻¹ + (↑↑n)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
n : ℕ+
H : D' n = pqr
| (↑↑2)⁻¹ + (↑↑2)⁻¹ + (↑↑n)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
case inr.inl.intro
pqr : Multiset ℕ+
n : ℕ+
H : D' n = pqr
⊢ 1 < (↑(1 + 1))⁻¹ + (↑(1 + 1))⁻¹ + (↑↑n)⁻¹
[PROOFSTEP]
norm_num
[GOAL]
case inr.inr.inl
pqr : Multiset ℕ+
H : E' 3 = pqr
⊢ 1 < sumInv pqr
case inr.inr.inr.inl
pqr : Multiset ℕ+
H : E' 4 = pqr
⊢ 1 < sumInv pqr
case inr.inr.inr.inr
pqr : Multiset ℕ+
H : E' 5 = pqr
⊢ 1 < sumInv pqr
[PROOFSTEP]
all_goals
rw [← H, E', sumInv_pqr]
conv_rhs => simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
case inr.inr.inl
pqr : Multiset ℕ+
H : E' 3 = pqr
⊢ 1 < sumInv pqr
[PROOFSTEP]
rw [← H, E', sumInv_pqr]
[GOAL]
case inr.inr.inl
pqr : Multiset ℕ+
H : E' 3 = pqr
⊢ 1 < (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑3)⁻¹
[PROOFSTEP]
conv_rhs => simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
H : E' 3 = pqr
| (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑3)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
H : E' 3 = pqr
| (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑3)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
H : E' 3 = pqr
| (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑3)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
case inr.inr.inr.inl
pqr : Multiset ℕ+
H : E' 4 = pqr
⊢ 1 < sumInv pqr
[PROOFSTEP]
rw [← H, E', sumInv_pqr]
[GOAL]
case inr.inr.inr.inl
pqr : Multiset ℕ+
H : E' 4 = pqr
⊢ 1 < (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑4)⁻¹
[PROOFSTEP]
conv_rhs => simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
H : E' 4 = pqr
| (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑4)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
H : E' 4 = pqr
| (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑4)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
H : E' 4 = pqr
| (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑4)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
case inr.inr.inr.inr
pqr : Multiset ℕ+
H : E' 5 = pqr
⊢ 1 < sumInv pqr
[PROOFSTEP]
rw [← H, E', sumInv_pqr]
[GOAL]
case inr.inr.inr.inr
pqr : Multiset ℕ+
H : E' 5 = pqr
⊢ 1 < (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑5)⁻¹
[PROOFSTEP]
conv_rhs => simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
H : E' 5 = pqr
| (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑5)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
H : E' 5 = pqr
| (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑5)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
pqr : Multiset ℕ+
H : E' 5 = pqr
| (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑5)⁻¹
[PROOFSTEP]
simp only [OfNat.ofNat, PNat.mk_coe]
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
⊢ p < 3
[PROOFSTEP]
have h3 : (0 : ℚ) < 3 := by norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
⊢ 0 < 3
[PROOFSTEP]
norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
h3 : 0 < 3
⊢ p < 3
[PROOFSTEP]
contrapose! H
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
⊢ sumInv {p, q, r} ≤ 1
[PROOFSTEP]
rw [sumInv_pqr]
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
⊢ (↑↑p)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
have h3q := H.trans hpq
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
⊢ (↑↑p)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
have h3r := h3q.trans hqr
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
⊢ (↑↑p)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
have hp : (p : ℚ)⁻¹ ≤ 3⁻¹ := by
rw [inv_le_inv _ h3]
assumption_mod_cast
norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
⊢ (↑↑p)⁻¹ ≤ 3⁻¹
[PROOFSTEP]
rw [inv_le_inv _ h3]
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
⊢ 3 ≤ ↑↑p
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
⊢ 0 < ↑↑p
[PROOFSTEP]
assumption_mod_cast
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
⊢ 0 < ↑↑p
[PROOFSTEP]
norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
⊢ (↑↑p)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
have hq : (q : ℚ)⁻¹ ≤ 3⁻¹ := by
rw [inv_le_inv _ h3]
assumption_mod_cast
norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
⊢ (↑↑q)⁻¹ ≤ 3⁻¹
[PROOFSTEP]
rw [inv_le_inv _ h3]
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
⊢ 3 ≤ ↑↑q
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
⊢ 0 < ↑↑q
[PROOFSTEP]
assumption_mod_cast
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
⊢ 0 < ↑↑q
[PROOFSTEP]
norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
hq : (↑↑q)⁻¹ ≤ 3⁻¹
⊢ (↑↑p)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
have hr : (r : ℚ)⁻¹ ≤ 3⁻¹ := by
rw [inv_le_inv _ h3]
assumption_mod_cast
norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
hq : (↑↑q)⁻¹ ≤ 3⁻¹
⊢ (↑↑r)⁻¹ ≤ 3⁻¹
[PROOFSTEP]
rw [inv_le_inv _ h3]
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
hq : (↑↑q)⁻¹ ≤ 3⁻¹
⊢ 3 ≤ ↑↑r
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
hq : (↑↑q)⁻¹ ≤ 3⁻¹
⊢ 0 < ↑↑r
[PROOFSTEP]
assumption_mod_cast
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
hq : (↑↑q)⁻¹ ≤ 3⁻¹
⊢ 0 < ↑↑r
[PROOFSTEP]
norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
hq : (↑↑q)⁻¹ ≤ 3⁻¹
hr : (↑↑r)⁻¹ ≤ 3⁻¹
⊢ (↑↑p)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
calc
(p : ℚ)⁻¹ + (q : ℚ)⁻¹ + (r : ℚ)⁻¹ ≤ 3⁻¹ + 3⁻¹ + 3⁻¹ := add_le_add (add_le_add hp hq) hr
_ = 1 := by norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
h3 : 0 < 3
H : 3 ≤ p
h3q : 3 ≤ q
h3r : 3 ≤ r
hp : (↑↑p)⁻¹ ≤ 3⁻¹
hq : (↑↑q)⁻¹ ≤ 3⁻¹
hr : (↑↑r)⁻¹ ≤ 3⁻¹
⊢ 3⁻¹ + 3⁻¹ + 3⁻¹ = 1
[PROOFSTEP]
norm_num
[GOAL]
q r : ℕ+
hqr : q ≤ r
H : 1 < sumInv {2, q, r}
⊢ q < 4
[PROOFSTEP]
have h4 : (0 : ℚ) < 4 := by norm_num
[GOAL]
q r : ℕ+
hqr : q ≤ r
H : 1 < sumInv {2, q, r}
⊢ 0 < 4
[PROOFSTEP]
norm_num
[GOAL]
q r : ℕ+
hqr : q ≤ r
H : 1 < sumInv {2, q, r}
h4 : 0 < 4
⊢ q < 4
[PROOFSTEP]
contrapose! H
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
⊢ sumInv {2, q, r} ≤ 1
[PROOFSTEP]
rw [sumInv_pqr]
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
⊢ (↑↑2)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
have h4r := H.trans hqr
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
⊢ (↑↑2)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
have hq : (q : ℚ)⁻¹ ≤ 4⁻¹ := by
rw [inv_le_inv _ h4]
assumption_mod_cast
norm_num
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
⊢ (↑↑q)⁻¹ ≤ 4⁻¹
[PROOFSTEP]
rw [inv_le_inv _ h4]
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
⊢ 4 ≤ ↑↑q
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
⊢ 0 < ↑↑q
[PROOFSTEP]
assumption_mod_cast
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
⊢ 0 < ↑↑q
[PROOFSTEP]
norm_num
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
hq : (↑↑q)⁻¹ ≤ 4⁻¹
⊢ (↑↑2)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
have hr : (r : ℚ)⁻¹ ≤ 4⁻¹ := by
rw [inv_le_inv _ h4]
assumption_mod_cast
norm_num
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
hq : (↑↑q)⁻¹ ≤ 4⁻¹
⊢ (↑↑r)⁻¹ ≤ 4⁻¹
[PROOFSTEP]
rw [inv_le_inv _ h4]
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
hq : (↑↑q)⁻¹ ≤ 4⁻¹
⊢ 4 ≤ ↑↑r
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
hq : (↑↑q)⁻¹ ≤ 4⁻¹
⊢ 0 < ↑↑r
[PROOFSTEP]
assumption_mod_cast
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
hq : (↑↑q)⁻¹ ≤ 4⁻¹
⊢ 0 < ↑↑r
[PROOFSTEP]
norm_num
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
hq : (↑↑q)⁻¹ ≤ 4⁻¹
hr : (↑↑r)⁻¹ ≤ 4⁻¹
⊢ (↑↑2)⁻¹ + (↑↑q)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
calc
(2⁻¹ + (q : ℚ)⁻¹ + (r : ℚ)⁻¹) ≤ 2⁻¹ + 4⁻¹ + 4⁻¹ := add_le_add (add_le_add le_rfl hq) hr
_ = 1 := by norm_num
[GOAL]
q r : ℕ+
hqr : q ≤ r
h4 : 0 < 4
H : 4 ≤ q
h4r : 4 ≤ r
hq : (↑↑q)⁻¹ ≤ 4⁻¹
hr : (↑↑r)⁻¹ ≤ 4⁻¹
⊢ 2⁻¹ + 4⁻¹ + 4⁻¹ = 1
[PROOFSTEP]
norm_num
[GOAL]
r : ℕ+
H : 1 < sumInv {2, 3, r}
⊢ r < 6
[PROOFSTEP]
have h6 : (0 : ℚ) < 6 := by norm_num
[GOAL]
r : ℕ+
H : 1 < sumInv {2, 3, r}
⊢ 0 < 6
[PROOFSTEP]
norm_num
[GOAL]
r : ℕ+
H : 1 < sumInv {2, 3, r}
h6 : 0 < 6
⊢ r < 6
[PROOFSTEP]
contrapose! H
[GOAL]
r : ℕ+
h6 : 0 < 6
H : 6 ≤ r
⊢ sumInv {2, 3, r} ≤ 1
[PROOFSTEP]
rw [sumInv_pqr]
[GOAL]
r : ℕ+
h6 : 0 < 6
H : 6 ≤ r
⊢ (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
have hr : (r : ℚ)⁻¹ ≤ 6⁻¹ := by
rw [inv_le_inv _ h6]
assumption_mod_cast
norm_num
[GOAL]
r : ℕ+
h6 : 0 < 6
H : 6 ≤ r
⊢ (↑↑r)⁻¹ ≤ 6⁻¹
[PROOFSTEP]
rw [inv_le_inv _ h6]
[GOAL]
r : ℕ+
h6 : 0 < 6
H : 6 ≤ r
⊢ 6 ≤ ↑↑r
r : ℕ+
h6 : 0 < 6
H : 6 ≤ r
⊢ 0 < ↑↑r
[PROOFSTEP]
assumption_mod_cast
[GOAL]
r : ℕ+
h6 : 0 < 6
H : 6 ≤ r
⊢ 0 < ↑↑r
[PROOFSTEP]
norm_num
[GOAL]
r : ℕ+
h6 : 0 < 6
H : 6 ≤ r
hr : (↑↑r)⁻¹ ≤ 6⁻¹
⊢ (↑↑2)⁻¹ + (↑↑3)⁻¹ + (↑↑r)⁻¹ ≤ 1
[PROOFSTEP]
calc
(2⁻¹ + 3⁻¹ + (r : ℚ)⁻¹ : ℚ) ≤ 2⁻¹ + 3⁻¹ + 6⁻¹ := add_le_add (add_le_add le_rfl le_rfl) hr
_ = 1 := by norm_num
[GOAL]
r : ℕ+
h6 : 0 < 6
H : 6 ≤ r
hr : (↑↑r)⁻¹ ≤ 6⁻¹
⊢ 2⁻¹ + 3⁻¹ + 6⁻¹ = 1
[PROOFSTEP]
norm_num
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
⊢ Admissible {p, q, r}
[PROOFSTEP]
have hp3 : p < 3 := lt_three hpq hqr H
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
hp3 : p < 3
⊢ Admissible {p, q, r}
[PROOFSTEP]
replace hp3 := Finset.mem_Iio.mpr hp3
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
hp3 : p ∈ Finset.Iio 3
⊢ Admissible {p, q, r}
[PROOFSTEP]
conv at hp3 => change p ∈ ({1, 2} : Multiset ℕ+)
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
hp3 : p ∈ Finset.Iio 3
| p ∈ Finset.Iio 3
[PROOFSTEP]
change p ∈ ({1, 2} : Multiset ℕ+)
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
hp3 : p ∈ Finset.Iio 3
| p ∈ Finset.Iio 3
[PROOFSTEP]
change p ∈ ({1, 2} : Multiset ℕ+)
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
hp3 : p ∈ Finset.Iio 3
| p ∈ Finset.Iio 3
[PROOFSTEP]
change p ∈ ({1, 2} : Multiset ℕ+)
[GOAL]
p q r : ℕ+
hpq : p ≤ q
hqr : q ≤ r
H : 1 < sumInv {p, q, r}
hp3 : p ∈ {1, 2}
⊢ Admissible {p, q, r}
[PROOFSTEP]
fin_cases hp3
[GOAL]
case head
q r : ℕ+
hqr : q ≤ r
hpq : 1 ≤ q
H : 1 < sumInv {1, q, r}
⊢ Admissible {1, q, r}
[PROOFSTEP]
exact admissible_A' q r
[GOAL]
case tail.head
q r : ℕ+
hqr : q ≤ r
hpq : 2 ≤ q
H : 1 < sumInv {2, q, r}
⊢ Admissible {2, q, r}
[PROOFSTEP]
have hq4 : q < 4 := lt_four hqr H
[GOAL]
case tail.head
q r : ℕ+
hqr : q ≤ r
hpq : 2 ≤ q
H : 1 < sumInv {2, q, r}
hq4 : q < 4
⊢ Admissible {2, q, r}
[PROOFSTEP]
replace hq4 := Finset.mem_Ico.mpr ⟨hpq, hq4⟩
[GOAL]
case tail.head
q r : ℕ+
hqr : q ≤ r
hpq : 2 ≤ q
H : 1 < sumInv {2, q, r}
hq4 : q ∈ Finset.Ico 2 4
⊢ Admissible {2, q, r}
[PROOFSTEP]
clear hpq
[GOAL]
case tail.head
q r : ℕ+
hqr : q ≤ r
H : 1 < sumInv {2, q, r}
hq4 : q ∈ Finset.Ico 2 4
⊢ Admissible {2, q, r}
[PROOFSTEP]
conv at hq4 => change q ∈ ({2, 3} : Multiset ℕ+)
[GOAL]
q r : ℕ+
hqr : q ≤ r
H : 1 < sumInv {2, q, r}
hq4 : q ∈ Finset.Ico 2 4
| q ∈ Finset.Ico 2 4
[PROOFSTEP]
change q ∈ ({2, 3} : Multiset ℕ+)
[GOAL]
q r : ℕ+
hqr : q ≤ r
H : 1 < sumInv {2, q, r}
hq4 : q ∈ Finset.Ico 2 4
| q ∈ Finset.Ico 2 4
[PROOFSTEP]
change q ∈ ({2, 3} : Multiset ℕ+)
[GOAL]
q r : ℕ+
hqr : q ≤ r
H : 1 < sumInv {2, q, r}
hq4 : q ∈ Finset.Ico 2 4
| q ∈ Finset.Ico 2 4
[PROOFSTEP]
change q ∈ ({2, 3} : Multiset ℕ+)
[GOAL]
case tail.head
q r : ℕ+
hqr : q ≤ r
H : 1 < sumInv {2, q, r}
hq4 : q ∈ {2, 3}
⊢ Admissible {2, q, r}
[PROOFSTEP]
fin_cases hq4
[GOAL]
case tail.head.head
r : ℕ+
hqr : 2 ≤ r
H : 1 < sumInv {2, 2, r}
⊢ Admissible {2, 2, r}
[PROOFSTEP]
exact admissible_D' r
[GOAL]
case tail.head.tail.head
r : ℕ+
hqr : 3 ≤ r
H : 1 < sumInv {2, 3, r}
⊢ Admissible {2, 3, r}
[PROOFSTEP]
have hr6 : r < 6 := lt_six H
[GOAL]
case tail.head.tail.head
r : ℕ+
hqr : 3 ≤ r
H : 1 < sumInv {2, 3, r}
hr6 : r < 6
⊢ Admissible {2, 3, r}
[PROOFSTEP]
replace hr6 := Finset.mem_Ico.mpr ⟨hqr, hr6⟩
[GOAL]
case tail.head.tail.head
r : ℕ+
hqr : 3 ≤ r
H : 1 < sumInv {2, 3, r}
hr6 : r ∈ Finset.Ico 3 6
⊢ Admissible {2, 3, r}
[PROOFSTEP]
clear hqr
[GOAL]
case tail.head.tail.head
r : ℕ+
H : 1 < sumInv {2, 3, r}
hr6 : r ∈ Finset.Ico 3 6
⊢ Admissible {2, 3, r}
[PROOFSTEP]
conv at hr6 => change r ∈ ({3, 4, 5} : Multiset ℕ+)
[GOAL]
r : ℕ+
H : 1 < sumInv {2, 3, r}
hr6 : r ∈ Finset.Ico 3 6
| r ∈ Finset.Ico 3 6
[PROOFSTEP]
change r ∈ ({3, 4, 5} : Multiset ℕ+)
[GOAL]
r : ℕ+
H : 1 < sumInv {2, 3, r}
hr6 : r ∈ Finset.Ico 3 6
| r ∈ Finset.Ico 3 6
[PROOFSTEP]
change r ∈ ({3, 4, 5} : Multiset ℕ+)
[GOAL]
r : ℕ+
H : 1 < sumInv {2, 3, r}
hr6 : r ∈ Finset.Ico 3 6
| r ∈ Finset.Ico 3 6
[PROOFSTEP]
change r ∈ ({3, 4, 5} : Multiset ℕ+)
[GOAL]
case tail.head.tail.head
r : ℕ+
H : 1 < sumInv {2, 3, r}
hr6 : r ∈ {3, 4, 5}
⊢ Admissible {2, 3, r}
[PROOFSTEP]
fin_cases hr6
[GOAL]
case tail.head.tail.head.head
H : 1 < sumInv {2, 3, 3}
⊢ Admissible {2, 3, 3}
[PROOFSTEP]
exact admissible_E6
[GOAL]
case tail.head.tail.head.tail.head
H : 1 < sumInv {2, 3, 4}
⊢ Admissible {2, 3, 4}
[PROOFSTEP]
exact admissible_E7
[GOAL]
case tail.head.tail.head.tail.tail.head
H : 1 < sumInv {2, 3, 5}
⊢ Admissible {2, 3, 5}
[PROOFSTEP]
exact admissible_E8
[GOAL]
p q r : ℕ+
hs : List.Sorted (fun x x_1 => x ≤ x_1) [p, q, r]
x✝ : List.length [p, q, r] = 3
H : 1 < sumInv ↑[p, q, r]
⊢ Admissible ↑[p, q, r]
[PROOFSTEP]
obtain ⟨⟨hpq, -⟩, hqr⟩ : (p ≤ q ∧ p ≤ r) ∧ q ≤ r
[GOAL]
p q r : ℕ+
hs : List.Sorted (fun x x_1 => x ≤ x_1) [p, q, r]
x✝ : List.length [p, q, r] = 3
H : 1 < sumInv ↑[p, q, r]
⊢ (p ≤ q ∧ p ≤ r) ∧ q ≤ r
case intro.intro
p q r : ℕ+
hs : List.Sorted (fun x x_1 => x ≤ x_1) [p, q, r]
x✝ : List.length [p, q, r] = 3
H : 1 < sumInv ↑[p, q, r]
hqr : q ≤ r
hpq : p ≤ q
⊢ Admissible ↑[p, q, r]
[PROOFSTEP]
simpa using hs
[GOAL]
case intro.intro
p q r : ℕ+
hs : List.Sorted (fun x x_1 => x ≤ x_1) [p, q, r]
x✝ : List.length [p, q, r] = 3
H : 1 < sumInv ↑[p, q, r]
hqr : q ≤ r
hpq : p ≤ q
⊢ Admissible ↑[p, q, r]
[PROOFSTEP]
exact admissible_of_one_lt_sumInv_aux' hpq hqr H
[GOAL]
p q r : ℕ+
H : 1 < sumInv {p, q, r}
⊢ Admissible {p, q, r}
[PROOFSTEP]
simp only [Admissible]
[GOAL]
p q r : ℕ+
H : 1 < sumInv {p, q, r}
⊢ (∃ q_1 r_1, A' q_1 r_1 = {p, q, r}) ∨
(∃ r_1, D' r_1 = {p, q, r}) ∨ E' 3 = {p, q, r} ∨ E' 4 = {p, q, r} ∨ E' 5 = {p, q, r}
[PROOFSTEP]
let S := sort ((· ≤ ·) : ℕ+ → ℕ+ → Prop) { p, q, r }
[GOAL]
p q r : ℕ+
H : 1 < sumInv {p, q, r}
S : List ℕ+ := sort (fun x x_1 => x ≤ x_1) {p, q, r}
⊢ (∃ q_1 r_1, A' q_1 r_1 = {p, q, r}) ∨
(∃ r_1, D' r_1 = {p, q, r}) ∨ E' 3 = {p, q, r} ∨ E' 4 = {p, q, r} ∨ E' 5 = {p, q, r}
[PROOFSTEP]
have hS : S.Sorted (· ≤ ·) := sort_sorted _ _
[GOAL]
p q r : ℕ+
H : 1 < sumInv {p, q, r}
S : List ℕ+ := sort (fun x x_1 => x ≤ x_1) {p, q, r}
hS : List.Sorted (fun x x_1 => x ≤ x_1) S
⊢ (∃ q_1 r_1, A' q_1 r_1 = {p, q, r}) ∨
(∃ r_1, D' r_1 = {p, q, r}) ∨ E' 3 = {p, q, r} ∨ E' 4 = {p, q, r} ∨ E' 5 = {p, q, r}
[PROOFSTEP]
have hpqr : ({ p, q, r } : Multiset ℕ+) = S := (sort_eq LE.le { p, q, r }).symm
[GOAL]
p q r : ℕ+
H : 1 < sumInv {p, q, r}
S : List ℕ+ := sort (fun x x_1 => x ≤ x_1) {p, q, r}
hS : List.Sorted (fun x x_1 => x ≤ x_1) S
hpqr : {p, q, r} = ↑S
⊢ (∃ q_1 r_1, A' q_1 r_1 = {p, q, r}) ∨
(∃ r_1, D' r_1 = {p, q, r}) ∨ E' 3 = {p, q, r} ∨ E' 4 = {p, q, r} ∨ E' 5 = {p, q, r}
[PROOFSTEP]
rw [hpqr]
[GOAL]
p q r : ℕ+
H : 1 < sumInv {p, q, r}
S : List ℕ+ := sort (fun x x_1 => x ≤ x_1) {p, q, r}
hS : List.Sorted (fun x x_1 => x ≤ x_1) S
hpqr : {p, q, r} = ↑S
⊢ (∃ q r, A' q r = ↑S) ∨ (∃ r, D' r = ↑S) ∨ E' 3 = ↑S ∨ E' 4 = ↑S ∨ E' 5 = ↑S
[PROOFSTEP]
rw [hpqr] at H
[GOAL]
p q r : ℕ+
S : List ℕ+ := sort (fun x x_1 => x ≤ x_1) {p, q, r}
H : 1 < sumInv ↑S
hS : List.Sorted (fun x x_1 => x ≤ x_1) S
hpqr : {p, q, r} = ↑S
⊢ (∃ q r, A' q r = ↑S) ∨ (∃ r, D' r = ↑S) ∨ E' 3 = ↑S ∨ E' 4 = ↑S ∨ E' 5 = ↑S
[PROOFSTEP]
apply admissible_of_one_lt_sumInv_aux hS _ H
[GOAL]
p q r : ℕ+
S : List ℕ+ := sort (fun x x_1 => x ≤ x_1) {p, q, r}
H : 1 < sumInv ↑S
hS : List.Sorted (fun x x_1 => x ≤ x_1) S
hpqr : {p, q, r} = ↑S
⊢ List.length S = 3
[PROOFSTEP]
simp only [ge_iff_le, insert_eq_cons, length_sort, card_cons, card_singleton]
|
function result = saveImageFile(arg1, arg2, arg3);
% saveImageFile(IMGDIR, fname, res);
% saveImageFile(IMGFULLFILEPATH, res);
% res=200 for spectrograms
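% Example (illustrative; assumes a spectrogram figure is currently open):
%   saveImageFile(fullfile('plots','spectrograms'), 'myspec', 200);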
global paths PARAMS; % we need to know the value of PARAMS.mode
debug.printfunctionstack('>');
result = 0;
switch nargin
case 2,
[IMGDIR, fname] = fileparts(arg1);
outpath = arg1;
res = arg2;
case 3,
[IMGDIR, fname] = deal(arg1, arg2);
outpath = fullfile(arg1, sprintf('%s.png',arg2));
res = arg3;
otherwise,
debug.print_debug(0, 'could not save image file - wrong number of arguments')
return;
end
if ~exist(IMGDIR,'dir')
mkdir(IMGDIR);
end
try
% Save the image file
print(gcf, '-dpng', sprintf('-r%d',res), outpath );
% Lock the output directory if in archive mode - it means we do not want to delete these files ever
if strcmp(PARAMS.mode,'archive') % compare the mode value itself, not the literal string 'PARAMS.mode'
system(sprintf('touch %s/lock',IMGDIR));
end
% Did our image file actually get saved?
if exist(outpath, 'file')
debug.print_debug(1, sprintf('%s: Saved image file %s',datestr(utnow),outpath) );
result = 1;
else
debug.print_debug(1, sprintf('%s: Image file %s was not created',datestr(utnow),outpath));
end
catch
debug.print_debug(0, sprintf('%s: Could not save the image file %s',datestr(utnow),outpath));
end
%print_debug(sprintf('< %s',mfilename),2);
debug.printfunctionstack('<');
|
using MeshArrays, MITgcmTools, OceanStateEstimation
using CSV, DataFrames, Statistics, Plots
#using FortranFiles,
import Plots: heatmap
"""
heatmap(x::MeshArray; args...)
Apply heatmap to each subdomain in a MeshArray
"""
function heatmap(x::MeshArray; args...)
n=x.grid.nFaces
p=()
for i=1:n; p=(p...,heatmap(x[i]; args...)); end
plot(p...)
end
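# Illustrative usage (sketch; assumes the grid has been loaded, e.g. via
# Γ=GridLoad(GridSpec("LatLonCap",MeshArrays.GRID_LLC90)), so Γ.Depth is a MeshArray):
#   heatmap(Γ.Depth; clims=(0.0,6000.0), colorbar=false)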
#Convert Velocity (m/s) to transport (m^3/s)
function convert_velocities(U::MeshArray,V::MeshArray,G::NamedTuple)
for i in eachindex(U)
tmp1=U[i]; tmp1[(!isfinite).(tmp1)] .= 0.0
tmp1=V[i]; tmp1[(!isfinite).(tmp1)] .= 0.0
U[i]=G.DRF[i[2]]*U[i].*G.DYG[i[1]]
V[i]=G.DRF[i[2]]*V[i].*G.DXG[i[1]]
end
return U,V
end
##
"""
trsp_read(myspec::String,mypath::String)
Function that reads files that were generated by `trsp_prep`
"""
function trsp_read(myspec::String,mypath::String)
γ=GridSpec(myspec,mypath)
TrspX=γ.read(mypath*"TrspX.bin",MeshArray(γ,Float32))
TrspY=γ.read(mypath*"TrspY.bin",MeshArray(γ,Float32))
TauX=γ.read(mypath*"TauX.bin",MeshArray(γ,Float32))
TauY=γ.read(mypath*"TauY.bin",MeshArray(γ,Float32))
SSH=γ.read(mypath*"SSH.bin",MeshArray(γ,Float32))
return TrspX, TrspY, TauX, TauY, SSH
end
"""
trsp_prep(γ,Γ,dirOut)
Function that generates small binary files (2D) from large netcdf ones (4D).
```
using FortranFiles, MeshArrays
!isdir("nctiles_climatology") ? error("missing files") : nothing
include(joinpath(dirname(pathof(MeshArrays)),"gcmfaces_nctiles.jl"))
(TrspX, TrspY, TauX, TauY, SSH)=trsp_prep(γ,Γ,MeshArrays.GRID_LLC90);
```
"""
function trsp_prep(γ::gcmgrid,Γ::NamedTuple,dirOut::String="")
#wind stress
fileName="nctiles_climatology/oceTAUX/oceTAUX"
oceTAUX=read_nctiles(fileName,"oceTAUX",γ)
fileName="nctiles_climatology/oceTAUY/oceTAUY"
oceTAUY=read_nctiles(fileName,"oceTAUY",γ)
oceTAUX=mask(oceTAUX,0.0)
oceTAUY=mask(oceTAUY,0.0)
#sea surface height anomaly
fileName="nctiles_climatology/ETAN/ETAN"
ETAN=read_nctiles(fileName,"ETAN",γ)
fileName="nctiles_climatology/sIceLoad/sIceLoad"
sIceLoad=read_nctiles(fileName,"sIceLoad",γ)
rhoconst=1029.0
myssh=(ETAN+sIceLoad./rhoconst)
myssh=mask(myssh,0.0)
#seawater transports
fileName="nctiles_climatology/UVELMASS/UVELMASS"
U=read_nctiles(fileName,"UVELMASS",γ)
fileName="nctiles_climatology/VVELMASS/VVELMASS"
V=read_nctiles(fileName,"VVELMASS",γ)
U=mask(U,0.0)
V=mask(V,0.0)
#time averaging and vertical integration
TrspX=similar(Γ.DXC)
TrspY=similar(Γ.DYC)
TauX=similar(Γ.DXC)
TauY=similar(Γ.DYC)
SSH=similar(Γ.XC)
for i=1:γ.nFaces
tmpX=mean(U.f[i],dims=4)
tmpY=mean(V.f[i],dims=4)
for k=1:length(Γ.RC)
tmpX[:,:,k]=tmpX[:,:,k].*Γ.DYG.f[i]
tmpX[:,:,k]=tmpX[:,:,k].*Γ.DRF[k]
tmpY[:,:,k]=tmpY[:,:,k].*Γ.DXG.f[i]
tmpY[:,:,k]=tmpY[:,:,k].*Γ.DRF[k]
end
TrspX.f[i]=dropdims(sum(tmpX,dims=3),dims=(3,4))
TrspY.f[i]=dropdims(sum(tmpY,dims=3),dims=(3,4))
TauX.f[i]=dropdims(mean(oceTAUX.f[i],dims=3),dims=3)
TauY.f[i]=dropdims(mean(oceTAUY.f[i],dims=3),dims=3)
SSH.f[i]=dropdims(mean(myssh.f[i],dims=3),dims=3)
end
if !isempty(dirOut)
write_bin(TrspX,dirOut*"TrspX.bin")
write_bin(TrspY,dirOut*"TrspY.bin")
write_bin(TauX,dirOut*"TauX.bin")
write_bin(TauY,dirOut*"TauY.bin")
write_bin(SSH,dirOut*"SSH.bin")
end
return TrspX, TrspY, TauX, TauY, SSH
end
"""
write_bin(inFLD,filOut)
Function that writes a `MeshArray` to a binary file using `FortranFiles`.
"""
function write_bin(inFLD::MeshArray,filOut::String)
recl=prod(inFLD.grid.ioSize)*4
tmp=Float32.(convert2gcmfaces(inFLD))
println("saving to file: "*filOut)
f = FortranFile(filOut,"w",access="direct",recl=recl,convert="big-endian")
write(f,rec=1,tmp)
close(f)
end
##
"""
rotate_uv(uv,γ)
1. Convert to `Sv` units and mask out land
2. Interpolate `x/y` transport to grid cell center
3. Convert to `Eastward/Northward` transport
4. Display Subdomain Arrays (optional)
"""
function rotate_uv(uv::Dict,G::NamedTuple)
u=1e-6 .*uv["U"]; v=1e-6 .*uv["V"];
u[findall(G.hFacW[:,1].==0)].=NaN
v[findall(G.hFacS[:,1].==0)].=NaN;
nanmean(x) = mean(filter(!isnan,x))
nanmean(x,y) = mapslices(nanmean,x,dims=y)
(u,v)=exch_UV(u,v); uC=similar(u); vC=similar(v)
for iF=1:u.grid.nFaces
tmp1=u[iF][1:end-1,:]; tmp2=u[iF][2:end,:]
uC[iF]=reshape(nanmean([tmp1[:] tmp2[:]],2),size(tmp1))
tmp1=v[iF][:,1:end-1]; tmp2=v[iF][:,2:end]
vC[iF]=reshape(nanmean([tmp1[:] tmp2[:]],2),size(tmp1))
end
cs=G.AngleCS
sn=G.AngleSN
u=uC.*cs-vC.*sn
v=uC.*sn+vC.*cs;
return u,v,uC,vC
end
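# Example (sketch, reusing the transports produced by `trsp_prep`/`trsp_read`):
#   u,v,uC,vC = rotate_uv(Dict("U"=>TrspX,"V"=>TrspY),Γ)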
"""
interp_uv(u,v)
Interpolate `u` and `v` to a regular longitude/latitude grid via the sparse
interpolation matrix from `read_SPM`, returning the interpolated fields plus
longitude and latitude vectors.
"""
function interp_uv(u,v)
mypath=MeshArrays.GRID_LLC90
SPM,lon,lat=read_SPM(mypath) #interpolation matrix (sparse)
uI=MatrixInterp(write(u),SPM,size(lon)) #interpolation itself
vI=MatrixInterp(write(v),SPM,size(lon)); #interpolation itself
return transpose(uI),transpose(vI),vec(lon[:,1]),vec(lat[1,:])
end
|
%% Copyright (C) 2009-2011, Gostai S.A.S.
%%
%% This software is provided "as is" without warranty of any kind,
%% either expressed or implied, including but not limited to the
%% implied warranties of fitness for a particular purpose.
%%
%% See the LICENSE file for more information.
\section{Code}
Functions written in \us.
\subsection{Prototypes}
\begin{refObjects}
\item[Comparable]
\item[Executable]
\end{refObjects}
\subsection{Construction}
The keywords \lstinline|function| and \lstinline|closure| build Code
instances.
\begin{urbiassert}
function(){}.protos[0] === 'package'.lang.getSlotValue("Code");
closure (){}.protos[0] === 'package'.lang.getSlotValue("Code");
\end{urbiassert}
\subsection{Slots}
\begin{urbiscriptapi}
\item['=='](<that>)%
Whether \this and \var{that} are the same source code (actually checks
that both have the same \refSlot{asString}), and same closed values.
Closures and functions are different, even if the body is the same.
\begin{urbiassert}
function () { 1 } == function () { 1 };
function () { 1 } != closure () { 1 };
closure () { 1 } != function () { 1 };
closure () { 1 } == closure () { 1 };
\end{urbiassert}
No form of equivalence is applied on the body, it must be the same.
\begin{urbiassert}
function () { 1 + 1 } == function () { 1 + 1 };
function () { 1 + 2 } != function () { 2 + 1 };
\end{urbiassert}
Arguments do matter, even if in practice the functions are the same.
\begin{urbiassert}
function (var ignored) {} != function () {};
function (var x) { x } != function (y) { y };
\end{urbiassert}
A lazy function cannot be equal to a strict one.
\begin{urbiassert}
function () { 1 } != function { 1 };
\end{urbiassert}
If the functions capture different variables, they are different.
\begin{urbiscript}
{
var x;
function Object.capture_x() { x };
function Object.capture_x_again () { x };
{
var x;
function Object.capture_another_x() { x };
}
}|;
assert
{
getSlotValue("capture_x") == getSlotValue("capture_x_again");
getSlotValue("capture_x") != getSlotValue("capture_another_x");
};
\end{urbiscript}
If the functions capture different targets, they are different.
\begin{urbiscript}
class Foo
{
function makeFunction() { function () {} };
function makeClosure() { closure () {} };
}|;
class Bar
{
function makeFunction() { function () {} };
function makeClosure() { closure () {} };
}|;
assert
{
Foo.makeFunction() == Bar.makeFunction();
Foo.makeClosure() != Bar.makeClosure();
};
\end{urbiscript}
\item[apply](<args>)%
Invoke the routine, with all the arguments. The target, \this, will be
set to \lstinline|\var{args}[0]| and the remaining arguments with be given
as arguments.
\begin{urbiassert}
function (x, y) { x+y }.apply([nil, 10, 20]) == 30;
function () { this }.apply([123]) == 123;
// There is Object.apply.
1.apply([this]) == 1;
\end{urbiassert}
\begin{urbiscript}
function () {}.apply([]);
[00000001:error] !!! apply: argument list must begin with `this'
function () {}.apply([1, 2]);
[00000002:error] !!! apply: expected 0 argument, given 1
\end{urbiscript}
\item[asString]
Conversion to \refObject{String}.
\begin{urbiassert}
closure () { 1 }.asString() == "closure () { 1 }";
function () { 1 }.asString() == "function () { 1 }";
\end{urbiassert}
\item[bodyString]
Conversion to \refObject{String} of the routine body.
\begin{urbiassert}
closure () { 1 }.bodyString() == "1";
function () { 1 }.bodyString() == "1";
\end{urbiassert}
\item[spawn](<clear>)%
Run \this, with fresh tags if \var{clear} is true, otherwise under the
control of the current tags. Return the spawn \refObject{Job}. This is
an internal function, instead, use \lstinline|detach| and
\lstinline|disown|.
\begin{urbiscript}
var jobs = []|;
var res = []|;
for (var i : [0, 1, 2])
{
jobs << closure () { res << i; res << i }.spawn(true) |
if (i == 2)
break
}|
jobs;
[00009120] [Job<shell_7>, Job<shell_8>, Job<shell_9>]
// Wait for the jobs to be done.
jobs.each (function (var j) { j.waitForTermination });
assert (res == [0, 1, 0, 2, 1, 2]);
\end{urbiscript}
\begin{urbiscript}
jobs = []|;
res = []|;
for (var i : [0, 1, 2])
{
jobs << closure () { res << i; res << i }.spawn(false) |
if (i == 2)
break
}|
jobs;
[00009120] [Job<shell_10>, Job<shell_11>, Job<shell_12>]
// Give some time to get the output of the detached expressions.
sleep(100ms);
assert (res == [0, 1, 0]);
\end{urbiscript}
\end{urbiscriptapi}
%%% Local Variables:
%%% coding: utf-8
%%% mode: latex
%%% TeX-master: "../urbi-sdk"
%%% ispell-dictionary: "american"
%%% ispell-personal-dictionary: "../urbi.dict"
%%% fill-column: 76
%%% End:
|
#!/usr/bin/env julia
using Plots
using DelimitedFiles # provides readdlm on Julia >= 0.7
pyplot()
julia1 = readdlm("julia_times-1.dat")
julia4 = readdlm("julia_times-4.dat")
python = readdlm("python_times.dat")
plot(xaxis = (:log,), yaxis = (:log,),
xlabel = "Datapoints", ylabel = "Time (seconds)",
size=(900, 600))
plot!(julia1[:,1], julia1[:,2], linewidth = 2, marker = (:auto,), color = :blue, lab = "LombScargle.jl - single thread")
plot!(julia4[:,1], julia4[:,2], linewidth = 2, marker = (:auto,), color = :orange, lab = "LombScargle.jl - 4 threads")
plot!(python[:,1], python[:,2], linewidth = 2, marker = (:auto,), color = :green, lab = "Astropy")
savefig("benchmarks.svg")
savefig("benchmarks.png")
|
lemma is_nth_power_nat_code [code]: "is_nth_power_nat n m = (if n = 0 then m = 1 else if m = 0 then n > 0 else if n = 1 then True else (\<exists>k\<in>{1..m}. k ^ n = m))" |
import plotly.graph_objs as go
import pandas as pd
import numpy as np
import plotly
import plotly.plotly as py  # legacy Plotly Cloud API, needed by py.image.save_as below
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
class PlotBuilder3D:
def __init__(self):
pass
def set_title(self, title):
self.title = title
def set_width(self, width):
self.width = width
def set_height(self, height):
self.height = height
def set_yscale(self, yscale):
self.yscale = yscale
# ----------------------------
def set_xaxis(self, xaxis):
self.xaxis = xaxis
def set_yaxis(self, yaxis):
self.yaxis = yaxis
def set_zaxis(self, zaxis):
self.zaxis = zaxis
# ----------------------------
def prepare(self, z_csv, x_csv, y_csv, t_coeff=1, online=True, path=".", filename="wt2", to_file=""):
print("Making plot...")
# Z----------------------------------------------
z_data = pd.read_csv(z_csv, header=None)
# Z----------------------------------------------
# X----------------------------------------------
x = pd.read_csv(x_csv, keep_default_na=False)
x_header = list(x)[0]
# x["x"] = list(x["x"])
x_ticktext = list(x['x'])
x_tickvals = list(x['vals'])
# x_tickvals = np.linspace(
# list(x["x"])[0], list(x["x"])[-1], 10)
# x_ticktext = np.linspace(
# list(x["vals"])[0], list(x["vals"])[-1], 10)
# x_ticktext = np.round(x_ticktext, 2)
# print(list(x["x"])[-1])
for i in range(len(x_ticktext)):
x_ticktext[i] = x_ticktext[i]
x_ticktext[i] = str(x_ticktext[i])
print('x_ticktext:', x_ticktext)
print('x_tickvals:', x_tickvals)
# X----------------------------------------------
# Y----------------------------------------------
y = pd.read_csv(y_csv, keep_default_na=False)
y_header = list(y)[0]
# print(list(y["y"]))
# exit(0)
# y["y"] = list(y["y"])
# y["vals"] = list(y["vals"])
# y_tickvals = np.linspace(
# list(y["y"])[0], list(y["y"])[-1], 10)
# y_ticktext = np.linspace(
# list(y["vals"])[0], list(y["vals"])[-1], 10)
# y_ticktext = np.round(y_ticktext, 2)
y_ticktext = list(y["y"])
y_tickvals = list(y["vals"])
y_tickvals = np.array(y_tickvals) / t_coeff
print('y_ticktext:', y_ticktext)
print('y_tickvals:', y_tickvals)
# Y----------------------------------------------
data = [
go.Surface(
showlegend=False,
showscale=False,
lighting=dict(diffuse=0.5, specular=.2, fresnel=0.2),
z=z_data.values,
colorscale="Portland",
)
]
scale = int(y_ticktext[-1])
layout = go.Layout(
# plot_bgcolor="#000000",
# pap_bgcolor="#000000",
title=self.title,
titlefont=dict(
# family="Courier New, monospace",
# family='Open Sans, sans-serif',
family='Lato',
size=14,
color="#222"),
# margin=go.Margin(
# l=0,
# r=0,
# b=0,
# t=35,
# pad=50,
# ),
xaxis=dict(
# linecolor="black",
# linewidth=2,
# autotick=False,
# dtick=1,
ticks='outside',
tickfont=dict(
# size=20,
size=200,
),
),
yaxis=dict(
# tickangle=45,
title="y Axis",
titlefont=dict(
family="Courier New, monospace",
# family='Old Standard TT, serif',
size=40,
# size=14,
color="#FFFFFF"),
# autotick=False,
# dtick=1,
ticks='outside',
# tickangle=90,
tickfont=dict(
# size=20,
size=200,
),
),
# zaxis=dict(
# tickangle=90
# ),
autosize=False,
# autosize=True,
width=self.width,
height=self.height,
plot_bgcolor="#AAA",
# paper_bgcolor="#AAA",
scene=go.Scene(
camera=dict(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0.2),
eye=dict(x=3.75, y=3.75, z=3.75)
),
aspectratio={"x": 1, "y": self.yscale * \
y_ticktext[-1], "z": 1},
xaxis={
"title": self.xaxis,
"showgrid": False,
"showline": False,
# "showline":True,
# "ticks": "outside",
# "showticklabels": True,
# "linewidth": 1,
# "tickvals": list(range(len(x_tickvals))),
# "ticktext": list(range(len(x_tickvals))),
"tickvals": x_tickvals,
"ticktext": x_ticktext,
'titlefont': dict(
size=18,
),
'tickfont': dict(
size=14,
),
'autorange': True,
# "tickangle": 45,
# "linecolor": "black",
# "linewidth": 2,
},
yaxis={
'autorange': True,
"title": self.yaxis+"\t\t\t\t.",
"ticktext": y_ticktext[::2],
"tickvals": y_tickvals[::2],
# "linecolor": "black",
"linewidth": 1,
'titlefont': dict(
size=18,
),
'tickfont': dict(
size=14,
)
},
zaxis={
'autorange': True,
"range": [0, 1],
"title": self.zaxis,
# 'dtick': -20,
# "tickangle": 45,
# "linecolor": "black",
"linewidth": 1,
'titlefont': dict(
size=18,
),
'tickfont': dict(
size=14,
)
# "transform": {"rotate": '0'}
},
),
showlegend=False
)
self.fig = go.Figure(data=data, layout=layout)
if to_file:
py.image.save_as(self.fig, filename=to_file)
return
# fig["layout"].update(scene=dict(aspectmode="data"))
# online=False
# if online:
# py.iplot(fig, filename=filename)
# # plotly.offline.init_notebook_mode()
# # plotly.offline.iplot(fig, filename="wt.html")
# # plotly.
# # py.offline.iplot(fig, filename="wt")
# else:
# # plotly.offline.init_notebook_mode(connected=True)
# # plotly.offline.init_notebook_mode()
# plotly.offline.plot(fig, filename=path + filename + ".html")
# # plotly.offline.iplot(fig, filename=path + filename + ".html")
return
def iplot(self, z_csv, x_csv, y_csv, t_coeff=1, online=True, path=".", filename="wt2", to_file=""):
self.prepare(z_csv, x_csv, y_csv, t_coeff,
online, path, filename, to_file)
plotly.offline.init_notebook_mode(connected=True)
plotly.offline.iplot(self.fig)
def plot(self, z_csv, x_csv, y_csv, t_coeff=1, online=True, path=".", filename="wt2", to_file=""):
self.prepare(z_csv, x_csv, y_csv, t_coeff,
online, path, filename, to_file)
plotly.offline.plot(self.fig, filename=path + filename + ".html")
# ---------------------------------------------------------------------------------------------------------------------
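# A minimal usage sketch (hypothetical file names; the CSVs must follow the layout
# prepare() expects: z_csv holds the surface values, while x_csv/y_csv provide the
# "x"/"vals" and "y"/"vals" columns used for the tick labels).
if __name__ == "__main__":
    builder = PlotBuilder3D()
    builder.set_title("Wavefunction evolution")
    builder.set_width(1200)
    builder.set_height(800)
    builder.set_yscale(1.0)
    builder.set_xaxis("x")
    builder.set_yaxis("t")
    builder.set_zaxis("amplitude")
    # Writes ./wt2.html via plotly's offline renderer.
    builder.plot("z.csv", "x.csv", "y.csv", t_coeff=1, path="./", filename="wt2")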
|
[STATEMENT]
theorem compliant_stateful_ACS_static_valid':
"all_security_requirements_fulfilled M \<lparr> nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T> \<rparr>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. all_security_requirements_fulfilled M \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. all_security_requirements_fulfilled M \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
from validReqs
[PROOF STATE]
proof (chain)
picking this:
valid_reqs M
[PROOF STEP]
have valid_ReqsIFS: "valid_reqs (get_IFS M)"
[PROOF STATE]
proof (prove)
using this:
valid_reqs M
goal (1 subgoal):
1. valid_reqs (get_IFS M)
[PROOF STEP]
by(simp add: get_IFS_def valid_reqs_def)
\<comment> \<open>show that it holds for IFS, by monotonicity, since it holds for the larger edge set\<close>
[PROOF STATE]
proof (state)
this:
valid_reqs (get_IFS M)
goal (1 subgoal):
1. all_security_requirements_fulfilled M \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
from all_security_requirements_fulfilled_mono[OF valid_ReqsIFS _ valid_stateful_policy compliant_stateful_IFS[unfolded stateful_policy_to_network_graph_def]]
[PROOF STATE]
proof (chain)
picking this:
?E' \<subseteq> all_flows \<T> \<Longrightarrow> all_security_requirements_fulfilled (get_IFS M) \<lparr>nodes = hosts \<T>, edges = ?E'\<rparr>
[PROOF STEP]
have
goalIFS: "all_security_requirements_fulfilled (get_IFS M) \<lparr> nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T> \<rparr>"
[PROOF STATE]
proof (prove)
using this:
?E' \<subseteq> all_flows \<T> \<Longrightarrow> all_security_requirements_fulfilled (get_IFS M) \<lparr>nodes = hosts \<T>, edges = ?E'\<rparr>
goal (1 subgoal):
1. all_security_requirements_fulfilled (get_IFS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
by(simp add: all_flows_def)
[PROOF STATE]
proof (state)
this:
all_security_requirements_fulfilled (get_IFS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
goal (1 subgoal):
1. all_security_requirements_fulfilled M \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
from wf_stateful_policy.E_state_fix[OF stateful_policy_wf]
[PROOF STATE]
proof (chain)
picking this:
flows_state \<T> \<subseteq> flows_fix \<T>
[PROOF STEP]
have "flows_fix \<T> \<union> flows_state \<T> = flows_fix \<T>"
[PROOF STATE]
proof (prove)
using this:
flows_state \<T> \<subseteq> flows_fix \<T>
goal (1 subgoal):
1. flows_fix \<T> \<union> flows_state \<T> = flows_fix \<T>
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
flows_fix \<T> \<union> flows_state \<T> = flows_fix \<T>
goal (1 subgoal):
1. all_security_requirements_fulfilled M \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
from this compliant_stateful_ACS_static_valid
[PROOF STATE]
proof (chain)
picking this:
flows_fix \<T> \<union> flows_state \<T> = flows_fix \<T>
all_security_requirements_fulfilled (get_ACS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T>\<rparr>
[PROOF STEP]
have goalACS:
"all_security_requirements_fulfilled (get_ACS M) \<lparr> nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T> \<rparr>"
[PROOF STATE]
proof (prove)
using this:
flows_fix \<T> \<union> flows_state \<T> = flows_fix \<T>
all_security_requirements_fulfilled (get_ACS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T>\<rparr>
goal (1 subgoal):
1. all_security_requirements_fulfilled (get_ACS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
by simp
\<comment> \<open>ACS and IFS together form M, we know it holds for ACS\<close>
[PROOF STATE]
proof (state)
this:
all_security_requirements_fulfilled (get_ACS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
goal (1 subgoal):
1. all_security_requirements_fulfilled M \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
from goalACS goalIFS
[PROOF STATE]
proof (chain)
picking this:
all_security_requirements_fulfilled (get_ACS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
all_security_requirements_fulfilled (get_IFS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
all_security_requirements_fulfilled (get_ACS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
all_security_requirements_fulfilled (get_IFS M) \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
goal (1 subgoal):
1. all_security_requirements_fulfilled M \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
apply(simp add: all_security_requirements_fulfilled_def get_IFS_def get_ACS_def)
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>\<forall>m. m \<in> set M \<and> \<not> c_isIFS m \<longrightarrow> c_sinvar m \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>; \<forall>m. m \<in> set M \<and> c_isIFS m \<longrightarrow> c_sinvar m \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>\<rbrakk> \<Longrightarrow> \<forall>m\<in>set M. c_sinvar m \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
[PROOF STEP]
by fastforce
[PROOF STATE]
proof (state)
this:
all_security_requirements_fulfilled M \<lparr>nodes = hosts \<T>, edges = flows_fix \<T> \<union> flows_state \<T>\<rparr>
goal:
No subgoals!
[PROOF STEP]
qed |
/**
*
* @file core_zsyssq.c
*
* PLASMA core_blas kernel
* PLASMA is a software package provided by Univ. of Tennessee,
* Univ. of California Berkeley and Univ. of Colorado Denver
*
* @version 2.8.0
* @author Mathieu Faverge
* @date 2010-11-15
* @precisions normal z -> c d s
*
**/
#include <math.h>
#include <lapacke.h>
#include "common.h"
#define COMPLEX
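/* lassq-style update: maintain the pair (scale, sumsq) so that
 * scale^2 * sumsq accumulates __nb * __value^2 without overflow or
 * underflow, rescaling whenever a larger magnitude __value appears. */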
#define UPDATE( __nb, __value ) \
if (__value != 0. ){ \
if ( *scale < __value ) { \
*sumsq = __nb + (*sumsq) * ( *scale / __value ) * ( *scale / __value ); \
*scale = __value; \
} else { \
*sumsq = *sumsq + __nb * ( __value / *scale ) * ( __value / *scale ); \
} \
}
/*****************************************************************************
*
* @ingroup dplasma_cores_complex64
*
* CORE_zsyssq returns the values scl and ssq such that
*
* ( scl**2 )*ssq = sum( A( i, j )**2 ) + ( scale**2 )*sumsq,
* i,j
*
 *     with i from 0 to N-1 and j from 0 to N-1. The value of sumsq is
* assumed to be at least unity and the value of ssq will then satisfy
*
* 1.0 .le. ssq .le. ( sumsq + 2*n*n ).
*
* scale is assumed to be non-negative and scl returns the value
*
* scl = max( scale, abs( real( A( i, j ) ) ), abs( aimag( A( i, j ) ) ) ),
* i,j
*
* scale and sumsq must be supplied in SCALE and SUMSQ respectively.
* SCALE and SUMSQ are overwritten by scl and ssq respectively.
*
* The routine makes only one pass through the tile triangular part of the
* symmetric tile A defined by uplo.
* See also LAPACK zlassq.f
*
*******************************************************************************
*
* @param[in] uplo
* Specifies whether the upper or lower triangular part of
* the symmetric matrix A is to be referenced as follows:
* = PlasmaLower: Only the lower triangular part of the
* symmetric matrix A is to be referenced.
* = PlasmaUpper: Only the upper triangular part of the
* symmetric matrix A is to be referenced.
*
* @param[in] N
* The number of columns and rows in the tile A.
*
* @param[in] A
* The N-by-N matrix on which to compute the norm.
*
* @param[in] LDA
* The leading dimension of the tile A. LDA >= max(1,N).
*
* @param[in,out] scale
* On entry, the value scale in the equation above.
* On exit, scale is overwritten with the value scl.
*
* @param[in,out] sumsq
* On entry, the value sumsq in the equation above.
* On exit, SUMSQ is overwritten with the value ssq.
*
*******************************************************************************
*
* @return
* \retval PLASMA_SUCCESS successful exit
* \retval -k, the k-th argument had an illegal value
*
*/
#if defined(PLASMA_HAVE_WEAK)
#pragma weak CORE_zsyssq = PCORE_zsyssq
#define CORE_zsyssq PCORE_zsyssq
#endif
int CORE_zsyssq(PLASMA_enum uplo, int N,
const PLASMA_Complex64_t *A, int LDA,
double *scale, double *sumsq)
{
int i, j;
double tmp;
double *ptr;
if ( uplo == PlasmaUpper ) {
for(j=0; j<N; j++) {
ptr = (double*) ( A + j * LDA );
for(i=0; i<j; i++, ptr++) {
tmp = fabs(*ptr);
UPDATE( 2., tmp );
#ifdef COMPLEX
ptr++;
tmp = fabs(*ptr);
UPDATE( 2., tmp );
#endif
}
/* Diagonal */
tmp = fabs(*ptr);
UPDATE( 1., tmp );
#ifdef COMPLEX
ptr++;
tmp = fabs(*ptr);
UPDATE( 1., tmp );
#endif
}
} else {
for(j=0; j<N; j++) {
ptr = (double*) ( A + j * LDA + j);
/* Diagonal */
tmp = fabs(*ptr);
UPDATE( 1., tmp );
ptr++;
#ifdef COMPLEX
tmp = fabs(*ptr);
UPDATE( 1., tmp );
ptr++;
#endif
for(i=j+1; i<N; i++, ptr++) {
tmp = fabs(*ptr);
UPDATE( 2., tmp );
#ifdef COMPLEX
ptr++;
tmp = fabs(*ptr);
UPDATE( 2., tmp );
#endif
}
}
}
return PLASMA_SUCCESS;
}
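/*
 * A minimal driver sketch (illustrative only; not part of the original PLASMA
 * sources). It assumes PLASMA_Complex64_t is the C99 double complex type, as
 * in standard C builds of PLASMA, and follows the LAPACK zlassq convention of
 * starting from scale = 0, sumsq = 1; the accumulated norm is then
 * scale * sqrt(sumsq).
 */
#ifdef CORE_ZSYSSQ_DEMO
#include <stdio.h>
#include <complex.h>
int main(void) {
    /* 2x2 symmetric tile, column-major, LDA = 2 */
    PLASMA_Complex64_t A[4] = { 1.0, 2.0 + 1.0*I, 2.0 + 1.0*I, 3.0 };
    double scale = 0.0, sumsq = 1.0;
    CORE_zsyssq(PlasmaLower, 2, A, 2, &scale, &sumsq);
    printf("scl = %f, ssq = %f, norm ~ %f\n", scale, sumsq, scale * sqrt(sumsq));
    return 0;
}
#endif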
|
## Not run:
## Example: download the font file of WenQuanYi Micro Hei,
## add it to SWF device, and use it to draw text in swf().
## WenQuanYi Micro Hei is an open source and high quality
## Chinese (and CJKV) font.
library(sysfonts)  ## font.paths(), font.add() and font.families() below come from sysfonts
wd = setwd(tempdir())
ft.url = "http://sourceforge.net/projects/wqy/files/wqy-microhei"
ft.url = paste(ft.url, "0.2.0-beta/wqy-microhei-0.2.0-beta.tar.gz",
sep = "/")
download.file(ft.url, basename(ft.url))
## Extract and add the directory to search path
untar(basename(ft.url), compressed = "gzip")
font.paths("wqy-microhei")
## Register this font file and assign the family name "wqy"
## Other font faces will be the same with regular by default
font.add("wqy", regular = "wqy-microhei.ttc")
## A more concise way to add font is to give the path directly,
## without calling font.paths()
# font.add("wqy", "wqy-microhei/wqy-microhei.ttc")
## List available font families
font.families()
if(require(R2SWF))
{
## Now it shows that we can use the family "wqy" in swf()
swf("testfont.swf")
## Select font family globally
op = par(family = "serif", font.lab = 2)
## Inline selecting font
plot(1, type = "n")
text(1, 1, intToUtf8(c(20013, 25991)), family = "wqy", font = 1, cex = 2)
dev.off()
swf2html("testfont.swf")
}
setwd(wd)
## End(Not run) |
/-
Copyright (c) 2020 Bhavik Mehta. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Bhavik Mehta
-/
import category_theory.limits.preserves.shapes.binary_products
import category_theory.limits.preserves.shapes.products
import category_theory.limits.shapes.binary_products
import category_theory.limits.shapes.finite_products
import category_theory.pempty
import logic.equiv.fin
/-!
# Constructing finite products from binary products and terminal.
If a category has binary products and a terminal object then it has finite products.
If a functor preserves binary products and the terminal object then it preserves finite products.
# TODO
Provide the dual results.
Show the analogous results for functors which reflect or create (co)limits.
-/
universes v u u'
noncomputable theory
open category_theory category_theory.category category_theory.limits
namespace category_theory
variables {J : Type v} [small_category J]
variables {C : Type u} [category.{v} C]
variables {D : Type u'} [category.{v} D]
/--
Given `n+1` objects of `C`, a fan for the last `n` with point `c₁.X` and a binary fan on `c₁.X` and
`f 0`, we can build a fan for all `n+1`.
In `extend_fan_is_limit` we show that if the two given fans are limits, then this fan is also a
limit.
-/
@[simps {rhs_md := semireducible}]
def extend_fan {n : ℕ} {f : ulift (fin (n+1)) → C}
(c₁ : fan (λ (i : ulift (fin n)), f ⟨i.down.succ⟩))
(c₂ : binary_fan (f ⟨0⟩) c₁.X) :
fan f :=
fan.mk c₂.X
begin
rintro ⟨i⟩,
revert i,
refine fin.cases _ _,
{ apply c₂.fst },
{ intro i,
apply c₂.snd ≫ c₁.π.app (ulift.up i) },
end
/--
Show that if the two given fans in `extend_fan` are limits, then the constructed fan is also a
limit.
-/
def extend_fan_is_limit {n : ℕ} (f : ulift (fin (n+1)) → C)
{c₁ : fan (λ (i : ulift (fin n)), f ⟨i.down.succ⟩)} {c₂ : binary_fan (f ⟨0⟩) c₁.X}
(t₁ : is_limit c₁) (t₂ : is_limit c₂) :
is_limit (extend_fan c₁ c₂) :=
{ lift := λ s,
begin
apply (binary_fan.is_limit.lift' t₂ (s.π.app ⟨0⟩) _).1,
apply t₁.lift ⟨_, discrete.nat_trans (λ i, s.π.app ⟨i.down.succ⟩)⟩
end,
fac' := λ s,
begin
rintro ⟨j⟩,
apply fin.induction_on j,
{ apply (binary_fan.is_limit.lift' t₂ _ _).2.1 },
{ rintro i -,
dsimp only [extend_fan_π_app],
rw [fin.cases_succ, ← assoc, (binary_fan.is_limit.lift' t₂ _ _).2.2, t₁.fac],
refl }
end,
uniq' := λ s m w,
begin
apply binary_fan.is_limit.hom_ext t₂,
{ rw (binary_fan.is_limit.lift' t₂ _ _).2.1,
apply w ⟨0⟩ },
{ rw (binary_fan.is_limit.lift' t₂ _ _).2.2,
apply t₁.uniq ⟨_, _⟩,
rintro ⟨j⟩,
rw assoc,
dsimp only [discrete.nat_trans_app],
rw ← w ⟨j.succ⟩,
dsimp only [extend_fan_π_app],
rw fin.cases_succ }
end }
section
variables [has_binary_products.{v} C] [has_terminal C]
/--
If `C` has a terminal object and binary products, then it has a product for objects indexed by
`ulift (fin n)`.
This is a helper lemma for `has_finite_products_of_has_binary_and_terminal`, which is more general
than this.
-/
private lemma has_product_ulift_fin :
Π (n : ℕ) (f : ulift.{v} (fin n) → C), has_product f
| 0 := λ f,
begin
letI : has_limits_of_shape (discrete (ulift.{v} (fin 0))) C :=
has_limits_of_shape_of_equivalence
(discrete.equivalence.{v} (equiv.ulift.trans fin_zero_equiv').symm),
apply_instance,
end
| (n+1) := λ f,
begin
haveI := has_product_ulift_fin n,
apply has_limit.mk ⟨_, extend_fan_is_limit f (limit.is_limit.{v} _) (limit.is_limit _)⟩,
end
/--
If `C` has a terminal object and binary products, then it has limits of shape
`discrete (ulift (fin n))` for any `n : ℕ`.
This is a helper lemma for `has_finite_products_of_has_binary_and_terminal`, which is more general
than this.
-/
private lemma has_limits_of_shape_ulift_fin (n : ℕ) :
has_limits_of_shape (discrete (ulift.{v} (fin n))) C :=
{ has_limit := λ K,
begin
letI := has_product_ulift_fin n K.obj,
let : discrete.functor K.obj ≅ K := discrete.nat_iso (λ i, iso.refl _),
apply has_limit_of_iso this,
end }
/-- If `C` has a terminal object and binary products, then it has finite products. -/
lemma has_finite_products_of_has_binary_and_terminal : has_finite_products C :=
⟨λ J 𝒥₁ 𝒥₂, begin
resetI,
let e := fintype.equiv_fin J,
apply has_limits_of_shape_of_equivalence (discrete.equivalence (e.trans equiv.ulift.symm)).symm,
refine has_limits_of_shape_ulift_fin (fintype.card J),
end⟩
end
section preserves
variables (F : C ⥤ D)
variables [preserves_limits_of_shape (discrete.{v} walking_pair) F]
variables [preserves_limits_of_shape (discrete.{v} pempty) F]
variables [has_finite_products.{v} C]
/--
If `F` preserves the terminal object and binary products, then it preserves products indexed by
`ulift (fin n)` for any `n`.
-/
noncomputable def preserves_fin_of_preserves_binary_and_terminal :
Π (n : ℕ) (f : ulift.{v} (fin n) → C), preserves_limit (discrete.functor f) F
| 0 := λ f,
begin
letI : preserves_limits_of_shape (discrete (ulift (fin 0))) F :=
preserves_limits_of_shape_of_equiv.{v v}
(discrete.equivalence (equiv.ulift.trans fin_zero_equiv').symm) _,
apply_instance,
end
| (n+1) :=
begin
haveI := preserves_fin_of_preserves_binary_and_terminal n,
intro f,
refine preserves_limit_of_preserves_limit_cone
(extend_fan_is_limit f (limit.is_limit.{v} _) (limit.is_limit _)) _,
apply (is_limit_map_cone_fan_mk_equiv _ _ _).symm _,
let := extend_fan_is_limit (λ i, F.obj (f i))
(is_limit_of_has_product_of_preserves_limit F _)
(is_limit_of_has_binary_product_of_preserves_limit F _ _),
refine is_limit.of_iso_limit this _,
apply cones.ext _ _,
apply iso.refl _,
rintro ⟨j⟩,
apply fin.induction_on j,
{ apply (category.id_comp _).symm },
{ rintro i -,
dsimp only [extend_fan_π_app, iso.refl_hom, fan.mk_π_app],
rw [fin.cases_succ, fin.cases_succ],
change F.map _ ≫ _ = 𝟙 _ ≫ _,
rw [id_comp, ←F.map_comp],
refl }
end
/--
If `F` preserves the terminal object and binary products, then it preserves limits of shape
`discrete (ulift (fin n))`.
-/
def preserves_ulift_fin_of_preserves_binary_and_terminal (n : ℕ) :
preserves_limits_of_shape (discrete (ulift (fin n))) F :=
{ preserves_limit := λ K,
begin
let : discrete.functor K.obj ≅ K := discrete.nat_iso (λ i, iso.refl _),
haveI := preserves_fin_of_preserves_binary_and_terminal F n K.obj,
apply preserves_limit_of_iso_diagram F this,
end }
/-- If `F` preserves the terminal object and binary products then it preserves finite products. -/
def preserves_finite_products_of_preserves_binary_and_terminal
(J : Type v) [fintype J] :
preserves_limits_of_shape.{v} (discrete J) F :=
begin
classical,
let e := fintype.equiv_fin J,
haveI := preserves_ulift_fin_of_preserves_binary_and_terminal F (fintype.card J),
apply preserves_limits_of_shape_of_equiv.{v v}
(discrete.equivalence (e.trans equiv.ulift.symm)).symm,
end
end preserves
/--
Given `n+1` objects of `C`, a cofan for the last `n` with point `c₁.X`
and a binary cofan on `c₁.X` and `f 0`, we can build a cofan for all `n+1`.
In `extend_cofan_is_colimit` we show that if the two given cofans are colimits,
then this cofan is also a colimit.
-/
@[simps {rhs_md := semireducible}]
def extend_cofan {n : ℕ} {f : ulift (fin (n+1)) → C}
(c₁ : cofan (λ (i : ulift (fin n)), f ⟨i.down.succ⟩))
(c₂ : binary_cofan (f ⟨0⟩) c₁.X) :
cofan f :=
cofan.mk c₂.X
begin
rintro ⟨i⟩,
revert i,
refine fin.cases _ _,
{ apply c₂.inl },
{ intro i,
apply c₁.ι.app (ulift.up i) ≫ c₂.inr },
end
/--
Show that if the two given cofans in `extend_cofan` are colimits,
then the constructed cofan is also a colimit.
-/
def extend_cofan_is_colimit {n : ℕ} (f : ulift (fin (n+1)) → C)
{c₁ : cofan (λ (i : ulift (fin n)), f ⟨i.down.succ⟩)} {c₂ : binary_cofan (f ⟨0⟩) c₁.X}
(t₁ : is_colimit c₁) (t₂ : is_colimit c₂) :
is_colimit (extend_cofan c₁ c₂) :=
{ desc := λ s,
begin
apply (binary_cofan.is_colimit.desc' t₂ (s.ι.app ⟨0⟩) _).1,
apply t₁.desc ⟨_, discrete.nat_trans (λ i, s.ι.app ⟨i.down.succ⟩)⟩
end,
fac' := λ s,
begin
rintro ⟨j⟩,
apply fin.induction_on j,
{ apply (binary_cofan.is_colimit.desc' t₂ _ _).2.1 },
{ rintro i -,
dsimp only [extend_cofan_ι_app],
rw [fin.cases_succ, assoc, (binary_cofan.is_colimit.desc' t₂ _ _).2.2, t₁.fac],
refl }
end,
uniq' := λ s m w,
begin
apply binary_cofan.is_colimit.hom_ext t₂,
{ rw (binary_cofan.is_colimit.desc' t₂ _ _).2.1,
apply w ⟨0⟩ },
{ rw (binary_cofan.is_colimit.desc' t₂ _ _).2.2,
apply t₁.uniq ⟨_, _⟩,
rintro ⟨j⟩,
dsimp only [discrete.nat_trans_app],
rw ← w ⟨j.succ⟩,
dsimp only [extend_cofan_ι_app],
rw [fin.cases_succ, assoc], }
end }
section
variables [has_binary_coproducts.{v} C] [has_initial C]
/--
If `C` has an initial object and binary coproducts, then it has a coproduct for objects indexed by
`ulift (fin n)`.
This is a helper lemma for `has_finite_coproducts_of_has_binary_and_terminal`, which is more general
than this.
-/
private lemma has_coproduct_ulift_fin :
Π (n : ℕ) (f : ulift.{v} (fin n) → C), has_coproduct f
| 0 := λ f,
begin
letI : has_colimits_of_shape (discrete (ulift.{v} (fin 0))) C :=
has_colimits_of_shape_of_equivalence
(discrete.equivalence.{v} (equiv.ulift.trans fin_zero_equiv').symm),
apply_instance,
end
| (n+1) := λ f,
begin
haveI := has_coproduct_ulift_fin n,
apply has_colimit.mk
⟨_, extend_cofan_is_colimit f (colimit.is_colimit.{v} _) (colimit.is_colimit _)⟩,
end
/--
If `C` has an initial object and binary coproducts, then it has colimits of shape
`discrete (ulift (fin n))` for any `n : ℕ`.
This is a helper lemma for `has_finite_coproducts_of_has_binary_and_terminal`, which is more general
than this.
-/
private lemma has_colimits_of_shape_ulift_fin (n : ℕ) :
has_colimits_of_shape (discrete (ulift.{v} (fin n))) C :=
{ has_colimit := λ K,
begin
letI := has_coproduct_ulift_fin n K.obj,
let : K ≅ discrete.functor K.obj := discrete.nat_iso (λ i, iso.refl _),
apply has_colimit_of_iso this,
end }
/-- If `C` has an initial object and binary coproducts, then it has finite coproducts. -/
lemma has_finite_coproducts_of_has_binary_and_terminal : has_finite_coproducts C :=
⟨λ J 𝒥₁ 𝒥₂, begin
resetI,
let e := fintype.equiv_fin J,
apply has_colimits_of_shape_of_equivalence (discrete.equivalence (e.trans equiv.ulift.symm)).symm,
refine has_colimits_of_shape_ulift_fin (fintype.card J),
end⟩
end
section preserves
variables (F : C ⥤ D)
variables [preserves_colimits_of_shape (discrete.{v} walking_pair) F]
variables [preserves_colimits_of_shape (discrete.{v} pempty) F]
variables [has_finite_coproducts.{v} C]
/--
If `F` preserves the initial object and binary coproducts, then it preserves coproducts indexed by
`ulift (fin n)` for any `n`.
-/
noncomputable def preserves_fin_of_preserves_binary_and_initial :
Π (n : ℕ) (f : ulift.{v} (fin n) → C), preserves_colimit (discrete.functor f) F
| 0 := λ f,
begin
letI : preserves_colimits_of_shape (discrete (ulift (fin 0))) F :=
preserves_colimits_of_shape_of_equiv.{v v}
(discrete.equivalence (equiv.ulift.trans fin_zero_equiv').symm) _,
apply_instance,
end
| (n+1) :=
begin
haveI := preserves_fin_of_preserves_binary_and_initial n,
intro f,
refine preserves_colimit_of_preserves_colimit_cocone
(extend_cofan_is_colimit f (colimit.is_colimit.{v} _) (colimit.is_colimit _)) _,
apply (is_colimit_map_cocone_cofan_mk_equiv _ _ _).symm _,
let := extend_cofan_is_colimit (λ i, F.obj (f i))
(is_colimit_of_has_coproduct_of_preserves_colimit F _)
(is_colimit_of_has_binary_coproduct_of_preserves_colimit F _ _),
refine is_colimit.of_iso_colimit this _,
apply cocones.ext _ _,
apply iso.refl _,
rintro ⟨j⟩,
apply fin.induction_on j,
{ apply category.comp_id },
{ rintro i -,
dsimp only [extend_cofan_ι_app, iso.refl_hom, cofan.mk_ι_app],
rw [fin.cases_succ, fin.cases_succ],
erw [comp_id, ←F.map_comp],
refl, }
end
/--
If `F` preserves the initial object and binary coproducts, then it preserves colimits of shape
`discrete (ulift (fin n))`.
-/
def preserves_ulift_fin_of_preserves_binary_and_initial (n : ℕ) :
preserves_colimits_of_shape (discrete (ulift (fin n))) F :=
{ preserves_colimit := λ K,
begin
let : discrete.functor K.obj ≅ K := discrete.nat_iso (λ i, iso.refl _),
haveI := preserves_fin_of_preserves_binary_and_initial F n K.obj,
apply preserves_colimit_of_iso_diagram F this,
end }
/-- If `F` preserves the initial object and binary coproducts then it preserves finite coproducts. -/
def preserves_finite_coproducts_of_preserves_binary_and_initial
(J : Type v) [fintype J] :
preserves_colimits_of_shape.{v} (discrete J) F :=
begin
classical,
let e := fintype.equiv_fin J,
haveI := preserves_ulift_fin_of_preserves_binary_and_initial F (fintype.card J),
apply preserves_colimits_of_shape_of_equiv.{v v}
(discrete.equivalence (e.trans equiv.ulift.symm)).symm,
end
end preserves
end category_theory
|
module Main
import Effects
import IdrisWeb.CGI.Cgi
import IdrisWeb.Session.Session
import IdrisWeb.Session.SessionUtils
import IdrisWeb.DB.SQLite.SQLiteNew
ThreadID : Type
ThreadID = Int
DB_NAME : String
DB_NAME = "/tmp/messageboard.db"
UserID : Type
UserID = Int
USERID_VAR : String
USERID_VAR = "user_id"
----------
-- Handler info
----------
handleRegisterForm : Maybe String -> Maybe String -> FormHandler [CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionUninitialised),
SQLITE ()
]
handlePost : Maybe Int -> Maybe String -> FormHandler [CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionUninitialised),
SQLITE ()
]
handleNewThread : Maybe String -> Maybe String -> FormHandler [CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionUninitialised),
SQLITE ()
]
handleLoginForm : Maybe String -> Maybe String -> FormHandler [CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionUninitialised),
SQLITE ()
]
handlers : HandlerList
handlers = [(([FormString, FormString], [CgiEffect, SessionEffect, SqliteEffect]) ** (handleRegisterForm, "handleRegisterForm")),
(([FormString, FormString], [CgiEffect, SessionEffect, SqliteEffect]) ** (handleLoginForm, "handleLoginForm")),
(([FormString, FormString], [CgiEffect, SessionEffect, SqliteEffect]) ** (handleNewThread, "handleNewThread")),
(([FormInt, FormString], [CgiEffect, SessionEffect, SqliteEffect]) ** (handlePost, "handlePost"))]
-- Template system would be nice...
htmlPreamble : String
htmlPreamble = "<html><head><title>IdrisWeb Message Board</title></head><body>"
htmlPostamble : String
htmlPostamble = "</body></html>"
notLoggedIn : EffM IO [CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionInitialised),
SQLITE ()]
[CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionUninitialised),
SQLITE ()] ()
notLoggedIn = do output htmlPreamble
output "<h1>Error</h1><br />You must be logged in to do that!"
output htmlPostamble
discardSession
outputWithPreamble : String -> Eff IO [CGI (InitialisedCGI TaskRunning)] ()
outputWithPreamble txt = do output htmlPreamble
output txt
output htmlPostamble
-----------
-- Post Creation
-----------
postInsert : Int -> Int -> String -> Eff IO [SQLITE ()] Bool
postInsert uid thread_id content = do
conn_res <- openDB DB_NAME
if_valid then do
let sql = "INSERT INTO `Posts` (`UserID`, `ThreadID`, `Content`) VALUES (?, ?, ?)"
ps_res <- prepareStatement sql
if_valid then do
bindInt 1 uid
bindInt 2 thread_id
bindText 3 content
bind_res <- finishBind
if_valid then do
executeStatement
finalise
closeDB
return True
else do
cleanupBindFail
return False
else do
cleanupPSFail
return False
else
return False
addPostToDB : Int -> String -> SessionData -> EffM IO [CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionInitialised),
SQLITE ()]
[CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionUninitialised),
SQLITE ()] ()
addPostToDB thread_id content sd = do
-- TODO: would be nice to abstract this out
case lookup USERID_VAR sd of
Just (SInt uid) => do insert_res <- postInsert uid thread_id content
if insert_res then do
-- TODO: redirection would be nice
outputWithPreamble "Post successful"
discardSession
return ()
else do
outputWithPreamble "There was an error adding the post to the database."
discardSession
return ()
Nothing => do notLoggedIn
return ()
handlePost (Just thread_id) (Just content) = do withSession (addPostToDB thread_id content) notLoggedIn
pure ()
handlePost _ _ = do outputWithPreamble "<h1>Error</h1><br />There was an error processing your post."
pure ()
newPostForm : Int -> UserForm
newPostForm thread_id = do
addHidden FormInt thread_id
addTextBox "Post Content" FormString Nothing
useEffects [CgiEffect, SessionEffect, SqliteEffect]
addSubmit handlePost handlers
showNewPostForm : Int -> CGIProg [SESSION (SessionRes SessionUninitialised), SQLITE ()] ()
showNewPostForm thread_id = do
output htmlPreamble
output "<h2>Create new post</h2>"
addForm (newPostForm thread_id)
output htmlPostamble
-----------
-- Thread Creation
-----------
threadInsert : Int -> String -> String -> Eff IO [SQLITE ()] (Maybe QueryError)
threadInsert uid title content = do
let query = "INSERT INTO `Threads` (`UserID`, `Title`) VALUES (?, ?)"
insert_res <- executeInsert DB_NAME query [(1, DBInt uid), (2, DBText title)]
case insert_res of
Left err => return (Just err)
Right thread_id => do
post_res <- postInsert uid thread_id content
if post_res then return Nothing else return $ Just (ExecError "post")
addNewThread : String -> String -> SessionData -> EffM IO [CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionInitialised),
SQLITE ()]
[CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionUninitialised),
SQLITE ()] ()
addNewThread title content sd = do
case lookup USERID_VAR sd of
Just (SInt uid) =>
do insert_res <- threadInsert uid title content
case insert_res of
Just err => do
output $ "There was an error adding the thread to the database: " ++ show err
discardSession
return ()
Nothing => do
output "Thread added successfully"
discardSession
return ()
Nothing => do notLoggedIn
return ()
-- Create a new thread, given the title and content
handleNewThread (Just title) (Just content) = do withSession (addNewThread title content) notLoggedIn
pure ()
handleNewThread _ _ = do outputWithPreamble "<h1>Error</h1><br />There was an error posting your thread."
pure ()
newThreadForm : UserForm
newThreadForm = do
addTextBox "Title" FormString Nothing
addTextBox "Post Content" FormString Nothing -- password field would be good
useEffects [CgiEffect, SessionEffect, SqliteEffect]
addSubmit handleNewThread handlers
showNewThreadForm : CGIProg [SESSION (SessionRes SessionUninitialised), SQLITE ()] ()
showNewThreadForm = do output htmlPreamble
output "<h1>New Thread</h1>"
addForm newThreadForm
output htmlPostamble
-----------
-- Registration
-----------
insertUser : String -> String -> Eff IO [SQLITE ()] (Either QueryError Int)
insertUser name pwd = executeInsert DB_NAME query bind_vals
where query = "INSERT INTO `Users` (`Username`, `Password`) VALUES (?, ?)"
bind_vals = [(1, DBText name), (2, DBText pwd)]
userExists' : EffM IO [SQLITE (Either (SQLiteExecuting InvalidRow) (SQLiteExecuting ValidRow))]
[SQLITE ()] Bool
userExists' =
if_valid then do
finaliseValid
closeDB
return True
else do
finaliseInvalid
closeDB
return False
userExists : String -> Eff IO [SQLITE ()] (Either QueryError Bool)
userExists username = do
conn_res <- openDB DB_NAME
if_valid then do
let sql = "SELECT * FROM `Users` WHERE `Username` = ?"
ps_res <- prepareStatement sql
if_valid then do
bindText 1 username
bind_res <- finishBind
if_valid then do
executeStatement
res <- userExists'
return $ Right res
else do
let be = getBindError bind_res
cleanupBindFail
return $ Left be
else do
cleanupPSFail
return $ Left (getQueryError ps_res)
else
return $ Left (getQueryError conn_res)
handleRegisterForm (Just name) (Just pwd) = do
user_exists_res <- userExists name
case user_exists_res of
Left err => do outputWithPreamble "Error checking for user existence"
pure ()
Right user_exists =>
if (not user_exists) then do
insert_res <- insertUser name pwd
case insert_res of
Left err => do outputWithPreamble ("Error inserting new user" ++ (show err))
pure ()
Right insert_res => do outputWithPreamble "User created successfully!"
pure ()
else do outputWithPreamble "This user already exists; please pick another name!"
pure ()
handleRegisterForm _ _ = do outputWithPreamble "Error processing form input data."
pure ()
registerForm : UserForm
registerForm = do
addTextBox "Username" FormString Nothing
addTextBox "Password" FormString Nothing -- password field would be good
useEffects [CgiEffect, SessionEffect, SqliteEffect]
addSubmit handleRegisterForm handlers
showRegisterForm : CGIProg [SESSION (SessionRes SessionUninitialised), SQLITE ()] ()
showRegisterForm = do output htmlPreamble
output "<h1>Create a new account</h1>"
addForm registerForm
output htmlPostamble
-----------
-- Login
-----------
alreadyLoggedIn : SessionData ->
EffM IO [CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionInitialised),
SQLITE ()]
[CGI (InitialisedCGI TaskRunning),
SESSION (SessionRes SessionUninitialised),
SQLITE ()] ()
alreadyLoggedIn _ = do outputWithPreamble "<h1>Error</h1><br />You appear to already be logged in!"
discardSession
-- If the credentials match, return an ID
-- Maybe consolidate the Maybe UserID into the Either, or possibly keep them
-- distinct to encapsulate the system error vs auth failure
authUser' : EffM IO [SQLITE (Either (SQLiteExecuting InvalidRow)
(SQLiteExecuting ValidRow))]
[SQLITE ()]
(Either QueryError (Maybe UserID))
authUser' =
if_valid then do
user_id <- getColumnInt 0
finaliseValid
closeDB
return $ Right (Just user_id)
else do
finaliseInvalid
closeDB
return $ Right Nothing
authUser : String -> String -> Eff IO [SQLITE ()] (Either QueryError (Maybe UserID))
authUser username password = do
conn_res <- openDB DB_NAME
if_valid then do
let sql = "SELECT `UserID` FROM `Users` WHERE `Username` = ? AND `Password` = ?"
ps_res <- prepareStatement sql
if_valid then do
bindText 1 username
bindText 2 password
bind_res <- finishBind
if_valid then do
executeStatement
authUser'
else do
let be = getBindError bind_res
cleanupBindFail
return $ Left be
else do
cleanupPSFail
return $ Left (getQueryError ps_res)
else
return $ Left (getQueryError conn_res)
setSession : UserID -> Eff IO [CGI (InitialisedCGI TaskRunning), SESSION (SessionRes SessionUninitialised), SQLITE ()] Bool
setSession user_id = do
create_res <- createSession [(USERID_VAR, SInt user_id)]
sess_res <- setSessionCookie
db_res <- writeSessionToDB
return (sess_res && db_res)
handleLoginForm (Just name) (Just pwd) = do
auth_res <- authUser name pwd
case auth_res of
Right (Just uid) => do
set_sess_res <- setSession uid
if set_sess_res then do
output $ "Welcome, " ++ name
return ()
else do
output "Could not set session"
return ()
Right Nothing => do
output "Invalid username or password"
return ()
Left err => do
output $ "Error: " ++ (show err)
return ()
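-- Catch-all for missing or malformed form input, mirroring the other handlers
-- (added here for totality; an assumption, not part of the original handler set).
handleLoginForm _ _ = do outputWithPreamble "Error processing form input data."
                         pure ()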
loginForm : UserForm
loginForm = do
addTextBox "Username" FormString Nothing
addTextBox "Password" FormString Nothing -- password field would be good
useEffects [CgiEffect, SessionEffect, SqliteEffect]
addSubmit handleLoginForm handlers
showLoginForm : CGIProg [SESSION (SessionRes SessionUninitialised), SQLITE ()] ()
showLoginForm = do output htmlPreamble
output "<h1>Log in</h1>"
addForm loginForm
output "</html>"
-----------
-- Post / Thread Display
-----------
collectPostResults : Eff IO [SQLITE (SQLiteExecuting ValidRow)] (List DBVal) -- (List (String, String))
collectPostResults = do name <- getColumnText 0
content <- getColumnText 1
pure [DBText name, DBText content]
-- Gets the posts
getPosts : Int -> Eff IO [SQLITE ()] (Either QueryError ResultSet)
getPosts thread_id =
executeSelect DB_NAME query bind_vals collectPostResults
where query = "SELECT `Username`, `Content` FROM `Posts` NATURAL JOIN `Users` WHERE `ThreadID` = ?"
bind_vals = [(1, DBInt thread_id)]
collectThreadResults : Eff IO [SQLITE (SQLiteExecuting ValidRow)] (List DBVal)
collectThreadResults = do thread_id <- getColumnInt 0
title <- getColumnText 1
uid <- getColumnInt 2
username <- getColumnText 3
pure [DBInt thread_id, DBText title, DBInt uid, DBText username]
-- Returns (Title, Thread starter ID, Thread starter name)
getThreads : Eff IO [SQLITE ()] (Either QueryError ResultSet)
getThreads = executeSelect DB_NAME query [] collectThreadResults
where query = "SELECT `ThreadID`, `Title`, `UserID`, `Username` FROM `Threads` NATURAL JOIN `Users`"
traversePosts : ResultSet -> Eff IO [CGI (InitialisedCGI TaskRunning)] ()
traversePosts [] = pure ()
traversePosts (x :: xs) = do traverseRow x
traversePosts xs
where traverseRow : List DBVal -> Eff IO [CGI (InitialisedCGI TaskRunning)] ()
traverseRow ((DBText name)::(DBText content)::[]) = output $ "<tr><td>" ++ name ++ "</td><td>" ++ content ++ "</td></tr>"
traverseRow _ = pure () -- invalid row, discard
printPosts : ThreadID -> CGIProg [SQLITE ()] ()
printPosts thread_id = do
post_res <- getPosts thread_id
case post_res of
Left err => do output $ "Could not retrieve posts, error: " ++ (show err)
return ()
Right posts => do output "<table>"
traversePosts posts
output "</table>"
output $ "<a href=\"?action=newpost&thread_id=" ++ (show thread_id) ++ "\">New post</a><br />"
return ()
traverseThreads : ResultSet -> Eff IO [CGI (InitialisedCGI TaskRunning)] ()
traverseThreads [] = pure ()
traverseThreads (x::xs) = do traverseRow x
traverseThreads xs
where traverseRow : List DBVal -> Eff IO [CGI (InitialisedCGI TaskRunning)] ()
traverseRow ((DBInt thread_id)::(DBText title)::(DBInt user_id)::(DBText username)::[]) =
(output $ "<tr><td><a href=\"?action=showthread&thread_id=" ++
(show thread_id) ++ "\">" ++ title ++ "</a></td><td>" ++ username ++ "</td></tr>")
traverseRow _ = pure ()
printThreads : CGIProg [SQLITE ()] ()
printThreads = do
thread_res <- getThreads
case thread_res of
Left err => do output $ "Could not retrieve threads, error: " ++ (show err)
return ()
Right threads => do output htmlPreamble
output "<table><tr><th>Title</th><th>Author</th></tr>"
traverseThreads threads
output "</table><br />"
output "<a href=\"?action=newthread\">Create a new thread</a><br />"
output "<a href=\"?action=register\">Register</a><br />"
output "<a href=\"?action=login\">Log In</a><br />"
output htmlPostamble
return ()
-----------
-- Request handling
-----------
handleNonFormRequest : Maybe String -> Maybe Int -> CGIProg [SESSION (SessionRes SessionUninitialised), SQLITE ()] ()
handleNonFormRequest (Just "newthread") Nothing = showNewThreadForm
handleNonFormRequest (Just "newpost") (Just thread_id) = showNewPostForm thread_id
handleNonFormRequest (Just "showthread") (Just thread_id) = printPosts thread_id
handleNonFormRequest (Just "register") Nothing = showRegisterForm
handleNonFormRequest (Just "login") Nothing = showLoginForm
handleNonFormRequest Nothing _ = printThreads
-- Hacky, probably best to use the parser
strToInt : String -> Int
strToInt s = cast s
handleRequest : CGIProg [SESSION (SessionRes SessionUninitialised), SQLITE ()] ()
handleRequest = do handler_set <- isHandlerSet
if handler_set then do
handleForm handlers
return ()
else do
action <- queryGetVar "action"
thread_id <- queryGetVar "thread_id"
handleNonFormRequest action (map strToInt thread_id)
main : IO ()
main = do runCGI [initCGIState, InvalidSession, ()] handleRequest
pure ()
|
#include "spinodal_spect.h"
#include <gsl/gsl_rng.h>
// Generate array of random numbers between 0 and 1
double *rand_ZeroToOne(int Nx, int Ny, int seed, double *random_ZeroToOne_array) {
const gsl_rng_type *T;
gsl_rng *r;
int i,
NxNy = Nx * Ny;
gsl_rng_env_setup();
T = gsl_rng_default;
r = gsl_rng_alloc(T);
gsl_rng_set(r, seed);
// Fill the array with uniform random values in [0, 1)
for (i = 0; i < NxNy; i++) {
random_ZeroToOne_array[i] = gsl_rng_uniform(r);
}
gsl_rng_free(r);
return random_ZeroToOne_array;
} |
[STATEMENT]
lemma fds_shift_1 [simp]: "fds_shift a 1 = 1"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. fds_shift a 1 = 1
[PROOF STEP]
by (rule fds_eqI) (simp add: fds_shift_def one_fds_def) |
State Before: α : Type u_2
β✝ : Type ?u.39374
γ : Type ?u.39377
ι : Type ?u.39380
inst✝³ : Countable ι
f✝ g : α → β✝
inst✝² : TopologicalSpace β✝
β : Type u_1
f : α → β
inst✝¹ : NormedAddCommGroup β
inst✝ : NormedSpace ℝ β
m m0 : MeasurableSpace α
μ : Measure α
hf : StronglyMeasurable f
c : ℝ
hf_bound : ∀ᵐ (x : α) ∂μ, ‖f x‖ ≤ c
⊢ ∀ᵐ (x : α) ∂μ, Tendsto (fun n => ↑(approxBounded hf c n) x) atTop (𝓝 (f x))
State After: no goals
Tactic: filter_upwards [hf_bound] with x hfx using tendsto_approxBounded_of_norm_le hf hfx
[STATEMENT]
lemma ceiling_add_le: "\<lceil>x + y\<rceil> \<le> \<lceil>x\<rceil> + \<lceil>y\<rceil>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lceil>x + y\<rceil> \<le> \<lceil>x\<rceil> + \<lceil>y\<rceil>
[PROOF STEP]
by (simp only: ceiling_le_iff of_int_add add_mono le_of_int_ceiling) |
lemma complex_Im_of_nat [simp]: "Im (of_nat n) = 0" |
Incidentally, Turner's favorite sport is soccer, not football. He is also interested in foreign cultures and expressed regret at being unable to spend a semester abroad because of college football. Turner said that, depending on the outcome of his football career, he would like to attend the 2010 World Cup in South Africa. He also got to meet his childhood idol David Beckham at the 2010 World Cup in South Africa.
|
/-
Copyright (c) 2021 Oliver Nash. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Oliver Nash
! This file was ported from Lean 3 source module algebra.lie.cartan_matrix
! leanprover-community/mathlib commit 65ec59902eb17e4ab7da8d7e3d0bd9774d1b8b99
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathbin.Algebra.Lie.Free
import Mathbin.Algebra.Lie.Quotient
import Mathbin.Data.Matrix.Notation
/-!
# Lie algebras from Cartan matrices
Split semi-simple Lie algebras are uniquely determined by their Cartan matrix. Indeed, if `A` is
an `l × l` Cartan matrix, the corresponding Lie algebra may be obtained as the Lie algebra on
`3l` generators: $H_1, H_2, \ldots H_l, E_1, E_2, \ldots, E_l, F_1, F_2, \ldots, F_l$
subject to the following relations:
$$
\begin{align}
[H_i, H_j] &= 0\\
[E_i, F_i] &= H_i\\
[E_i, F_j] &= 0 \quad\text{if $i \ne j$}\\
[H_i, E_j] &= A_{ij}E_j\\
[H_i, F_j] &= -A_{ij}F_j\\
ad(E_i)^{1 - A_{ij}}(E_j) &= 0 \quad\text{if $i \ne j$}\\
ad(F_i)^{1 - A_{ij}}(F_j) &= 0 \quad\text{if $i \ne j$}\\
\end{align}
$$
In this file we provide the above construction. It is defined for any square matrix of integers but
the results for non-Cartan matrices should be regarded as junk.
Recall that a Cartan matrix is a square matrix of integers `A` such that:
* For diagonal values we have: `A i i = 2`.
* For off-diagonal values (`i ≠ j`) we have: `A i j ∈ {-3, -2, -1, 0}`.
* `A i j = 0 ↔ A j i = 0`.
* There exists a diagonal matrix `D` over ℝ such that `D ⬝ A ⬝ D⁻¹` is symmetric positive definite.
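For example, the Cartan matrix of type A₂ satisfies all of these conditions (shown here purely as
an illustration; it is not defined in this file):
$$
A = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}.
$$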
## Alternative construction
This construction is sometimes performed within the free unital associative algebra
`free_algebra R X`, rather than within the free Lie algebra `free_lie_algebra R X`, as we do here.
However the difference is illusory since the construction stays inside the Lie subalgebra of
`free_algebra R X` generated by `X`, and this is naturally isomorphic to `free_lie_algebra R X`
(though the proof of this seems to require Poincaré–Birkhoff–Witt).
## Definitions of exceptional Lie algebras
This file also contains the Cartan matrices of the exceptional Lie algebras. By using these in the
above construction, it thus provides definitions of the exceptional Lie algebras. These definitions
make sense over any commutative ring. When the ring is ℝ, these are the split real forms of the
exceptional semisimple Lie algebras.
## References
* [N. Bourbaki, *Lie Groups and Lie Algebras, Chapters 4--6*](bourbaki1968) plates V -- IX,
pages 275--290
* [N. Bourbaki, *Lie Groups and Lie Algebras, Chapters 7--9*](bourbaki1975b) chapter VIII, §4.3
* [J.P. Serre, *Complex Semisimple Lie Algebras*](serre1965) chapter VI, appendix
## Main definitions
* `matrix.to_lie_algebra`
* `cartan_matrix.E₆`
* `cartan_matrix.E₇`
* `cartan_matrix.E₈`
* `cartan_matrix.F₄`
* `cartan_matrix.G₂`
* `lie_algebra.e₆`
* `lie_algebra.e₇`
* `lie_algebra.e₈`
* `lie_algebra.f₄`
* `lie_algebra.g₂`
## Tags
lie algebra, semi-simple, cartan matrix
-/
universe u v w
noncomputable section
variable (R : Type u) {B : Type v} [CommRing R] [DecidableEq B] [Fintype B]
variable (A : Matrix B B ℤ)
namespace CartanMatrix
variable (B)
/-- The generators of the free Lie algebra from which we construct the Lie algebra of a Cartan
matrix as a quotient. -/
inductive Generators
| H : B → generators
| E : B → generators
| F : B → generators
#align cartan_matrix.generators CartanMatrix.Generators
instance [Inhabited B] : Inhabited (Generators B) :=
⟨Generators.H default⟩
variable {B}
namespace Relations
open Function
-- mathport name: exprH
local notation "H" => FreeLieAlgebra.of R ∘ Generators.H
-- mathport name: exprE
local notation "E" => FreeLieAlgebra.of R ∘ Generators.E
-- mathport name: exprF
local notation "F" => FreeLieAlgebra.of R ∘ Generators.F
-- mathport name: exprad
local notation "ad" => LieAlgebra.ad R (FreeLieAlgebra R (Generators B))
/-- The terms corresponding to the `⁅H, H⁆`-relations. -/
def hH : B × B → FreeLieAlgebra R (Generators B) :=
uncurry fun i j => ⁅H i, H j⁆
#align cartan_matrix.relations.HH CartanMatrix.Relations.hH
/-- The terms corresponding to the `⁅E, F⁆`-relations. -/
def eF : B × B → FreeLieAlgebra R (Generators B) :=
uncurry fun i j => if i = j then ⁅E i, F i⁆ - H i else ⁅E i, F j⁆
#align cartan_matrix.relations.EF CartanMatrix.Relations.eF
/-- The terms corresponding to the `⁅H, E⁆`-relations. -/
def hE : B × B → FreeLieAlgebra R (Generators B) :=
uncurry fun i j => ⁅H i, E j⁆ - A i j • E j
#align cartan_matrix.relations.HE CartanMatrix.Relations.hE
/-- The terms corresponding to the `⁅H, F⁆`-relations. -/
def hF : B × B → FreeLieAlgebra R (Generators B) :=
uncurry fun i j => ⁅H i, F j⁆ + A i j • F j
#align cartan_matrix.relations.HF CartanMatrix.Relations.hF
/-- The terms corresponding to the `ad E`-relations.
Note that we use `int.to_nat` so that we can take the power and that we do not bother
restricting to the case `i ≠ j` since these relations are zero anyway. We also defensively
ensure this with `ad_E_of_eq_eq_zero`. -/
def adE : B × B → FreeLieAlgebra R (Generators B) :=
uncurry fun i j => ad (E i) ^ (-A i j).toNat <| ⁅E i, E j⁆
#align cartan_matrix.relations.ad_E CartanMatrix.Relations.adE
/-- The terms corresponding to the `ad F`-relations.
See also `ad_E` docstring. -/
def adF : B × B → FreeLieAlgebra R (Generators B) :=
uncurry fun i j => ad (F i) ^ (-A i j).toNat <| ⁅F i, F j⁆
#align cartan_matrix.relations.ad_F CartanMatrix.Relations.adF
private theorem ad_E_of_eq_eq_zero (i : B) (h : A i i = 2) : adE R A ⟨i, i⟩ = 0 :=
by
have h' : (-2 : ℤ).toNat = 0 := by rfl
simp [ad_E, h, h']
#align cartan_matrix.relations.ad_E_of_eq_eq_zero cartan_matrix.relations.ad_E_of_eq_eq_zero
private theorem ad_F_of_eq_eq_zero (i : B) (h : A i i = 2) : adF R A ⟨i, i⟩ = 0 :=
by
have h' : (-2 : ℤ).toNat = 0 := by rfl
simp [ad_F, h, h']
#align cartan_matrix.relations.ad_F_of_eq_eq_zero cartan_matrix.relations.ad_F_of_eq_eq_zero
/-- The union of all the relations as a subset of the free Lie algebra. -/
def toSet : Set (FreeLieAlgebra R (Generators B)) :=
(Set.range <| hH R) ∪ (Set.range <| eF R) ∪ (Set.range <| hE R A) ∪ (Set.range <| hF R A) ∪
(Set.range <| adE R A) ∪
(Set.range <| adF R A)
#align cartan_matrix.relations.to_set CartanMatrix.Relations.toSet
/-- The ideal of the free Lie algebra generated by the relations. -/
def toIdeal : LieIdeal R (FreeLieAlgebra R (Generators B)) :=
LieSubmodule.lieSpan R _ <| toSet R A
#align cartan_matrix.relations.to_ideal CartanMatrix.Relations.toIdeal
end Relations
end CartanMatrix
/- ./././Mathport/Syntax/Translate/Command.lean:42:9: unsupported derive handler lie_algebra[lie_algebra] R -/
/-- The Lie algebra corresponding to a Cartan matrix.
Note that it is defined for any matrix of integers. Its value for non-Cartan matrices should be
regarded as junk. -/
def Matrix.ToLieAlgebra :=
FreeLieAlgebra R _ ⧸ CartanMatrix.Relations.toIdeal R A deriving Inhabited, LieRing,
«./././Mathport/Syntax/Translate/Command.lean:42:9: unsupported derive handler lie_algebra[lie_algebra] R»
#align matrix.to_lie_algebra Matrix.ToLieAlgebra
namespace CartanMatrix
/- ./././Mathport/Syntax/Translate/Expr.lean:207:4: warning: unsupported notation `«expr!![ » -/
/- ./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation -/
/-- The Cartan matrix of type e₆. See [bourbaki1968] plate V, page 277.
The corresponding Dynkin diagram is:
```
o
|
o --- o --- o --- o --- o
```
-/
def e₆ : Matrix (Fin 6) (Fin 6) ℤ :=
«expr!![ »
"./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation"
#align cartan_matrix.E₆ CartanMatrix.e₆
/- ./././Mathport/Syntax/Translate/Expr.lean:207:4: warning: unsupported notation `«expr!![ » -/
/- ./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation -/
/-- The Cartan matrix of type e₇. See [bourbaki1968] plate VI, page 281.
The corresponding Dynkin diagram is:
```
o
|
o --- o --- o --- o --- o --- o
```
-/
def e₇ : Matrix (Fin 7) (Fin 7) ℤ :=
«expr!![ »
"./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation"
#align cartan_matrix.E₇ CartanMatrix.e₇
/- ./././Mathport/Syntax/Translate/Expr.lean:207:4: warning: unsupported notation `«expr!![ » -/
/- ./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation -/
/-- The Cartan matrix of type e₈. See [bourbaki1968] plate VII, page 285.
The corresponding Dynkin diagram is:
```
o
|
o --- o --- o --- o --- o --- o --- o
```
-/
def e₈ : Matrix (Fin 8) (Fin 8) ℤ :=
«expr!![ »
"./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation"
#align cartan_matrix.E₈ CartanMatrix.e₈
/- ./././Mathport/Syntax/Translate/Expr.lean:207:4: warning: unsupported notation `«expr!![ » -/
/- ./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation -/
/-- The Cartan matrix of type f₄. See [bourbaki1968] plate VIII, page 288.
The corresponding Dynkin diagram is:
```
o --- o =>= o --- o
```
-/
def f₄ : Matrix (Fin 4) (Fin 4) ℤ :=
«expr!![ »
"./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation"
#align cartan_matrix.F₄ CartanMatrix.f₄
/- ./././Mathport/Syntax/Translate/Expr.lean:207:4: warning: unsupported notation `«expr!![ » -/
/- ./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation -/
/-- The Cartan matrix of type g₂. See [bourbaki1968] plate IX, page 290.
The corresponding Dynkin diagram is:
```
o ≡>≡ o
```
Actually we are using the transpose of Bourbaki's matrix. This is to make this matrix consistent
with `cartan_matrix.F₄`, in the sense that all non-zero values below the diagonal are -1. -/
def g₂ : Matrix (Fin 2) (Fin 2) ℤ :=
«expr!![ »
"./././Mathport/Syntax/Translate/Expr.lean:387:14: unsupported user notation matrix.notation"
#align cartan_matrix.G₂ CartanMatrix.g₂
end CartanMatrix
namespace LieAlgebra
/-- The exceptional split Lie algebra of type e₆. -/
abbrev E₆ :=
CartanMatrix.e₆.ToLieAlgebra R
#align lie_algebra.e₆ LieAlgebra.E₆
/-- The exceptional split Lie algebra of type e₇. -/
abbrev E₇ :=
CartanMatrix.e₇.ToLieAlgebra R
#align lie_algebra.e₇ LieAlgebra.E₇
/-- The exceptional split Lie algebra of type e₈. -/
abbrev E₈ :=
CartanMatrix.e₈.ToLieAlgebra R
#align lie_algebra.e₈ LieAlgebra.E₈
/-- The exceptional split Lie algebra of type f₄. -/
abbrev F₄ :=
CartanMatrix.f₄.ToLieAlgebra R
#align lie_algebra.f₄ LieAlgebra.F₄
/-- The exceptional split Lie algebra of type g₂. -/
abbrev G₂ :=
CartanMatrix.g₂.ToLieAlgebra R
#align lie_algebra.g₂ LieAlgebra.G₂
end LieAlgebra
|
// @file kn2row_conv.c
//
// \date Created on: Sep 23, 2017
// \author Gopalakrishna Hegde
//
// Description:
//
//
//
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <cblas.h>
#include "common_types.h"
#include "data_reshape.h"
#include "utils.h"
//
// col_shift : +ve --> shift left overlap mat , -ve --> shift right overlap mat
// or shift left base mat and keep overlap mat as it is.
//
//
// row_shift : +ve (coeff is down the center coeff) --> shift up overlap mat ,
// -ve --> shift down overlap mat or shift up the base mat.
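//
// Example (equal-sized 4x4 matrices, row_shift = +1, col_shift = 0): rows
// 1..3 of overlap_mat are added to rows 0..2 of base_mat, i.e. the overlap
// matrix is shifted up by one row before being accumulated.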
void MatrixShiftAdd(float *base_mat,
int base_no_rows, int base_no_cols,
float *overlap_mat,
int ov_no_rows, int ov_no_cols,
int row_shift, int col_shift) {
if (row_shift == 0 && col_shift == 0 && (base_no_rows == ov_no_rows) &&
(base_no_cols == ov_no_cols)) {
// normal matrix add
cblas_saxpy(base_no_rows * base_no_cols, 1.0, overlap_mat, 1, base_mat, 1);
return;
}
int rows_to_add, cols_to_add;
int base_row_start, base_col_start;
int ov_row_start, ov_col_start;
// without padding case
if (ov_no_rows > base_no_rows) {
rows_to_add = base_no_rows;
cols_to_add = base_no_cols;
base_row_start = 0;
base_col_start = 0;
ov_row_start = row_shift < 0? -row_shift : 0;
ov_col_start = col_shift < 0? -col_shift : 0;
} else {
rows_to_add = ov_no_rows - abs(row_shift);
cols_to_add = ov_no_cols - abs(col_shift);
ov_col_start = col_shift > 0? col_shift : 0;
ov_row_start = row_shift > 0? row_shift : 0;
base_row_start = row_shift < 0? -row_shift : 0;
base_col_start = col_shift < 0? -col_shift : 0;
}
for (int r = 0; r < rows_to_add; ++r) {
int base_mat_offset = (r + base_row_start) * base_no_cols + base_col_start;
int overlap_mat_offset = (r + ov_row_start) * ov_no_cols + ov_col_start;
cblas_saxpy(cols_to_add, 1.0, overlap_mat + overlap_mat_offset, 1,
base_mat + base_mat_offset, 1);
}
}
/* Ker2Row convolution implementations.
*
* Assumptions:
* 1. in_data is in NCHW format.
* 2. filters are in MCKK format where M is the no of output maps.
* 3. Stride will always be 1.
* 4. pad will be zero or kernel_size / 2
*
* Output will be in NCHW format.
*/
bool Kn2RowConvLayer(const float *in_data, const float *filters,
const float *bias, TensorDim in_dim,
TensorDim filt_dim, int stride, int pad, int group,
float *output) {
// Currently we have limited support.
assert(group == 1);
assert((pad == 0) || (pad == filt_dim.w / 2));
assert(in_dim.n == 1);
assert(filt_dim.h == filt_dim.w);
assert(stride == 1);
// Output dimensions.
TensorDim out_dim;
out_dim.w = (in_dim.w + (pad + pad) - filt_dim.w) / stride + 1;
out_dim.h = (in_dim.h + (pad + pad) - filt_dim.h) / stride + 1;
out_dim.c = filt_dim.n;
out_dim.n = in_dim.n;
// Re-arrange filters in the k x k x no_out_maps x no_in_maps.
// We can avoid this if the filters are already reshaped in this format.
float *kkmc_filters = malloc(filt_dim.n * filt_dim.c * filt_dim.h *
filt_dim.w * sizeof(float));
NCHW2HWNC(filters, filt_dim.n, filt_dim.c, filt_dim.h, filt_dim.w,
kkmc_filters);
// Just for convenience
int H = in_dim.h;
int W = in_dim.w;
float alpha = 1.0;
float beta = 0.0;
// We need separate buffer because GEMM output will have width = H*W even
// if there is no padding (pad = 0).
float *gemm_output = malloc(out_dim.c * H * W * sizeof(float));
// Prefill output buffer with bias if present else set to zero.
if (bias) {
for (int m = 0; m < out_dim.c; ++m) {
for (int a = 0; a < out_dim.h * out_dim.w; ++a) {
output[m * out_dim.h * out_dim.w + a] = bias[m];
}
// For batch size > 1
for (int b = 1; b < out_dim.n; ++b) {
memcpy(output + b * out_dim.c * out_dim.h * out_dim.w,
output, out_dim.c * out_dim.h * out_dim.w * sizeof(float));
}
}
} else {
memset(output, 0, out_dim.n * out_dim.c * out_dim.h * out_dim.w *
sizeof(float));
}
for (int kr = 0; kr < filt_dim.h; kr++) {
int row_shift = kr - filt_dim.h / 2;
for (int kc = 0; kc < filt_dim.w; kc++) {
int group_no = kr * filt_dim.w + kc;
int col_shift = kc - filt_dim.w / 2;
// Matrix dimensions - A -> mxk B -> kxn C --> mxn
int m = filt_dim.n;
int k = filt_dim.c;
int n = in_dim.h * in_dim.w;
// This is just 1x1 convolution
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
m, n, k, alpha, kkmc_filters + group_no * m * k,
k, in_data, n, beta, gemm_output, n);
// Slide the resulting matrix which has contribution from one of the
// KxK kernel coefficients and add to the output.
for (int omap = 0; omap < filt_dim.n; omap++) {
MatrixShiftAdd(output + omap * out_dim.h * out_dim.w,
out_dim.h, out_dim.w,
gemm_output + omap * H * W,
H, W, row_shift, col_shift);
}
}
}
free(kkmc_filters);
free(gemm_output);
return true;
}
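/*
 * Minimal usage sketch (illustrative only): the tensor sizes, the zero-filled
 * buffers, and the absence of a bias below are assumptions for the example,
 * not requirements of this file. It drives Kn2RowConvLayer for a single 3x3,
 * stride-1, same-padded convolution producing 4 output maps from one 8x8,
 * single-channel input.
 */
static void Kn2RowConvExample(void) {
  TensorDim in_dim = { .n = 1, .c = 1, .h = 8, .w = 8 };
  TensorDim filt_dim = { .n = 4, .c = 1, .h = 3, .w = 3 };
  float *in_data = calloc(in_dim.n * in_dim.c * in_dim.h * in_dim.w,
                          sizeof(float));
  float *filters = calloc(filt_dim.n * filt_dim.c * filt_dim.h * filt_dim.w,
                          sizeof(float));
  // With pad = kernel / 2 and stride 1, the output map keeps the input size.
  float *output = calloc(filt_dim.n * in_dim.h * in_dim.w, sizeof(float));
  Kn2RowConvLayer(in_data, filters, /*bias=*/NULL, in_dim, filt_dim,
                  /*stride=*/1, /*pad=*/filt_dim.w / 2, /*group=*/1, output);
  free(in_data);
  free(filters);
  free(output);
}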
|
%
% Locations.tex
%
% Aleph Objects Operations Manual
%
% Copyright (C) 2014, 2015 Aleph Objects, Inc.
%
% This document is licensed under the Creative Commons Attribution 4.0
% International Public License (CC BY-SA 4.0) by Aleph Objects, Inc.
%
\section{Aleph Mountain}
\section{Fulfillment}
\subsection{Retail}
\begin{itemize}
\item Loveland, Colorado, USA
\end{itemize}
\subsection{Amazon}
\begin{itemize}
\item USA
\end{itemize}
\subsection{Shipwire}
\begin{itemize}
\item Chicago, Illinois, USA
\item Philadelphia, Pennsylvania, USA
\item Los Angeles, California, USA
\item Toronto, Canada
\item London, United Kingdom
\end{itemize}
\subsection{Resellers}
\begin{itemize}
\item Builders
\item Drop Ship
\end{itemize}
\section{Contract Manufacturers}
\section{Customer}
\section{Employee}
\section{Historical}
\begin{itemize}
\item 2011 Redstone Canyon, Colorado, USA
\item 2011 Fort Collins, Colorado, USA
\item 2011-2014 AOHQ, Loveland, Colorado, USA
\end{itemize}
|
[STATEMENT]
lemma allocAsk_o_sysOfClient_eq: "allocAsk o sysOfClient = allocAsk o snd "
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. allocAsk \<circ> sysOfClient = allocAsk \<circ> snd
[PROOF STEP]
apply record_auto
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done |
integer a,b,g,q
parameter(a=-1,g=0,q=+1,b=2)
|
#
# Example of a medium-scale graphene calculation. Only suitable for running
# on a cluster or machine with large memory.
#src tags: long
#
using DFTK
kgrid = [12, 12, 4]
Tsmear = 0.0009500431544769484
Ecut = 15
lattice = [4.659533614391621 -2.3297668071958104 0.0;
0.0 4.035274479829987 0.0;
0.0 0.0 15.117809010356462]
C = ElementPsp(:C, psp=load_psp("hgh/pbe/c-q4"))
atoms = [C => [[0.0, 0.0, 0.0], [0.33333333333, 0.66666666667, 0.0]]]
model = model_DFT(lattice, atoms, [:gga_x_pbe, :gga_c_pbe];
temperature=Tsmear, smearing=Smearing.Gaussian())
basis = PlaneWaveBasis(model, Ecut, kgrid=kgrid)
# Run SCF
n_bands = 6
scfres = self_consistent_field(basis; n_bands=n_bands)
# Print obtained energies
println()
display(scfres.energies)
|
section propositional
variables P Q R : Prop
------------------------------------------------
-- Double negation propositions:
------------------------------------------------
theorem doubleneg_intro :
P → ¬¬P :=
begin
intros p np,
contradiction,
end
theorem doubleneg_elim :
¬¬P → P :=
begin
intro np,
by_contradiction hboom,
contradiction,
end
theorem doubleneg_law :
¬¬P ↔ P :=
begin
split,
exact doubleneg_elim P,
exact doubleneg_intro P,
end
------------------------------------------------
-- Commutativity of ∨,∧:
------------------------------------------------
theorem disj_comm :
(P ∨ Q) → (Q ∨ P) :=
begin
intro pq,
cases pq with hp hq,
right,
exact hp,
left,
exact hq,
end
theorem conj_comm :
(P ∧ Q) → (Q ∧ P) :=
begin
intros hpq,
cases hpq with hp hq,
split,
exact hq,
exact hp,
end
------------------------------------------------
-- Interdefinability of →,∨:
------------------------------------------------
theorem impl_as_disj_converse :
(¬P ∨ Q) → (P → Q) :=
begin
intro npq,
intro p,
cases npq with hnp hq,
contradiction,
exact hq
end
theorem disj_as_impl :
(P ∨ Q) → (¬P → Q) :=
begin
intro pq,
intro np,
cases pq with hp hq,
contradiction,
exact hq,
end
------------------------------------------------
-- Contraposition propositions:
------------------------------------------------
theorem impl_as_contrapositive :
(P → Q) → (¬Q → ¬P) :=
begin
intro pq,
intro nq,
intro p,
have hq : Q := pq p,
contradiction,
end
theorem impl_as_contrapositive_converse :
(¬Q → ¬P) → (P → Q) :=
begin
intro hnqp,
intro p,
by_contradiction,
have np := hnqp h,
contradiction,
end
theorem contrapositive_law :
(P → Q) ↔ (¬Q → ¬P) :=
begin
split,
exact impl_as_contrapositive P Q,
exact impl_as_contrapositive_converse P Q,
end
------------------------------------------------
-- The irrefutability of LEM:
------------------------------------------------
theorem lem_irrefutable :
¬¬(P∨¬P) :=
begin
intro n_p_or_np,
have h : P ∨ ¬P,
right,
intro np,
have h2 : P ∨ ¬P,
left,
exact np,
contradiction,
contradiction,
end
------------------------------------------------
-- Peirce's law
------------------------------------------------
theorem peirce_law_weak :
((P → Q) → P) → ¬¬P :=
begin
intro p_q_p,
intro np,
by_cases h : (P → Q),
have h2 : P := p_q_p h,
contradiction,
have h2 : (P → Q),
intro p,
contradiction,
have h3 : P := p_q_p h2,
contradiction,
end
------------------------------------------------
-- Interdefinability of ∨,∧:
------------------------------------------------
theorem disj_as_negconj :
P∨Q → ¬(¬P∧¬Q) :=
begin
intro p_or_q,
intro np_or_nq,
cases np_or_nq with hnp hnq,
cases p_or_q with hp hq,
contradiction,
contradiction,
end
theorem conj_as_negdisj :
P∧Q → ¬(¬P∨¬Q) :=
begin
intro p_and_q,
intro np_or_nq,
cases p_and_q with hp hq,
cases np_or_nq with hnp hnq,
contradiction,
contradiction,
end
------------------------------------------------
-- De Morgan's laws for ∨,∧:
------------------------------------------------
theorem demorgan_disj :
¬(P∨Q) → (¬P ∧ ¬Q) :=
begin
intro n_p_or_q,
split,
intro p,
have h2 : P ∨ Q,
left,
exact p,
contradiction,
intro q,
have h2 : P ∨ Q,
right,
exact q,
contradiction,
end
theorem demorgan_disj_converse :
(¬P ∧ ¬Q) → ¬(P∨Q) :=
begin
intro np_and_nq,
intro p_or_q,
cases np_and_nq with hnp hnq,
cases p_or_q with hp hq,
contradiction,
contradiction,
end
theorem demorgan_conj :
¬(P∧Q) → (¬Q ∨ ¬P) :=
begin
intro p_and_q,
by_cases q:Q,
right,
intro p,
have hpq: P∧Q,
split,
exact p,
exact q,
contradiction,
left,
exact q,
end
theorem demorgan_conj_converse :
(¬Q ∨ ¬P) → ¬(P∧Q) :=
begin
intro nq_or_np,
intro p_and_q,
cases p_and_q with p q,
cases nq_or_np with hnq hnp,
contradiction,
contradiction,
end
-- try again without magic
theorem demorgan_conj_law :
¬(P∧Q) ↔ (¬Q ∨ ¬P) :=
begin
split,
exact demorgan_conj P Q,
exact demorgan_conj_converse P Q,
end
theorem demorgan_disj_law :
¬(P∨Q) ↔ (¬P ∧ ¬Q) :=
begin
split,
exact demorgan_disj P Q,
exact demorgan_disj_converse P Q,
end
------------------------------------------------
-- Distributivity of ∨,∧:
------------------------------------------------
theorem distr_conj_disj :
P∧(Q∨R) → (P∧Q)∨(P∧R) :=
begin
intro p_and_q_or_r,
cases p_and_q_or_r with p q_or_r,
cases q_or_r with q r,
left,
split,
exact p,
exact q,
right,
split,
exact p,
exact r,
end
theorem distr_conj_disj_converse :
(P∧Q)∨(P∧R) → P∧(Q∨R) :=
begin
intro pq_pr,
cases pq_pr with p_and_q p_and_r,
cases p_and_q with p q,
split,
exact p,
left,
exact q,
cases p_and_r with p r,
split,
exact p,
right,
exact r,
end
theorem distr_disj_conj :
P∨(Q∧R) → (P∨Q)∧(P∨R) :=
begin
intro p_or_qr,
cases p_or_qr with p qr,
split,
left,
exact p,
left,
exact p,
cases qr with q r,
split,
right,
exact q,
right,
exact r,
end
theorem distr_disj_conj_converse :
(P∨Q)∧(P∨R) → P∨(Q∧R) :=
begin
intro pq_and_pr,
cases pq_and_pr with pq pr,
cases pq with p q,
left,
exact p,
cases pr with p r,
left,
exact p,
right,
split,
exact q,
exact r,
end
------------------------------------------------
-- Currying
------------------------------------------------
theorem curry_prop :
((P∧Q)→R) → (P→(Q→R)) :=
begin
intro pq_r,
intro p,
intro q,
have h : P ∧ Q,
split,
exact p,
exact q,
have r : R := pq_r h,
exact r,
end
theorem uncurry_prop :
(P→(Q→R)) → ((P∧Q)→R) :=
begin
intro p_q_r,
intro pq,
cases pq with p q,
have q_r : Q → R := p_q_r p,
have r : R := q_r q,
exact r,
end
------------------------------------------------
-- Reflexivity of →:
------------------------------------------------
theorem impl_refl :
P → P :=
begin
intro p,
exact p,
end
------------------------------------------------
-- Weakening and contraction:
------------------------------------------------
theorem weaken_disj_right :
P → (P∨Q) :=
begin
intro p,
left,
exact p,
end
theorem weaken_disj_left :
Q → (P∨Q) :=
begin
intro q,
right,
exact q,
end
theorem weaken_conj_right :
(P∧Q) → P :=
begin
intro pq,
cases pq with p q,
exact p,
end
theorem weaken_conj_left :
(P∧Q) → Q :=
begin
intro pq,
cases pq with p q,
exact q,
end
theorem conj_idempot :
(P∧P) ↔ P :=
begin
split,
intro pp,
cases pp with p p,
exact p,
intro pp,
split,
exact pp,
exact pp,
end
theorem disj_idempot :
(P∨P) ↔ P :=
begin
split,
intro pp,
cases pp with p p,
repeat {exact p},
intro pp,
left,
exact pp,
end
end propositional
----------------------------------------------------------------
section predicate
variable U : Type
variables P Q : U -> Prop
------------------------------------------------
-- De Morgan's laws for ∃,∀:
------------------------------------------------
theorem demorgan_exists :
¬(∃x, P x) → (∀x, ¬P x) :=
begin
intro nex_px,
intro x,
intro n_px,
have h : ∃x, P x,
existsi x,
exact n_px,
contradiction,
end
theorem demorgan_exists_converse :
(∀x, ¬P x) → ¬(∃x, P x) :=
begin
intro px_npx,
intro ex_px,
cases ex_px with x px,
have h : ¬P x := px_npx x,
contradiction,
end
theorem demorgan_forall :
¬(∀x, P x) → (∃x, ¬P x) :=
begin
intro pu_px,
by_contradiction ne,
apply pu_px,
intro x,
by_contradiction npx,
apply ne,
existsi x,
exact npx,
end
theorem demorgan_forall_converse :
(∃x, ¬P x) → ¬(∀x, P x) :=
begin
intro e_npx,
intro p_px,
cases e_npx with x npx,
have h : P x := p_px x,
contradiction,
end
theorem demorgan_forall_law :
¬(∀x, P x) ↔ (∃x, ¬P x) :=
begin
split,
exact demorgan_forall U P,
exact demorgan_forall_converse U P,
end
theorem demorgan_exists_law :
¬(∃x, P x) ↔ (∀x, ¬P x) :=
begin
split,
exact demorgan_exists U P,
exact demorgan_exists_converse U P,
end
------------------------------------------------
-- Interdefinability of ∃,∀:
------------------------------------------------
theorem exists_as_neg_forall :
(∃x, P x) → ¬(∀x, ¬P x) :=
begin
intro e_px,
intro p_npx,
cases e_px with x px,
have h: ¬P x := p_npx x,
contradiction,
end
theorem forall_as_neg_exists :
(∀x, P x) → ¬(∃x, ¬P x) :=
begin
intro p_px,
intro e_npx,
cases e_npx with x npx,
have h: P x := p_px x,
contradiction,
end
theorem forall_as_neg_exists_converse :
¬(∃x, ¬P x) → (∀x, P x) :=
begin
intros n_e_npx x,
by_contradiction npx,
apply n_e_npx,
existsi x,
exact npx,
end
theorem exists_as_neg_forall_converse :
¬(∀x, ¬P x) → (∃x, P x) :=
begin
rw contrapositive_law,
rw doubleneg_law,
exact demorgan_exists U P,
end
theorem forall_as_neg_exists_law :
(∀x, P x) ↔ ¬(∃x, ¬P x) :=
begin
split,
exact forall_as_neg_exists U P,
exact forall_as_neg_exists_converse U P,
end
theorem exists_as_neg_forall_law :
(∃x, P x) ↔ ¬(∀x, ¬P x) :=
begin
split,
exact exists_as_neg_forall U P,
exact exists_as_neg_forall_converse U P,
end
------------------------------------------------
-- Distributivity of quantifiers:
------------------------------------------------
theorem exists_conj_as_conj_exists :
(∃x, P x ∧ Q x) → (∃x, P x) ∧ (∃x, Q x) :=
begin
intro e_pxq,
cases e_pxq with x pxq,
cases pxq with px qx,
split,
existsi x,
exact px,
existsi x,
exact qx,
end
theorem exists_disj_as_disj_exists :
(∃x, P x ∨ Q x) → (∃x, P x) ∨ (∃x, Q x) :=
begin
intro e_px_qx,
cases e_px_qx with x q,
cases q with px qx,
left,
existsi x,
exact px,
right,
existsi x,
exact qx,
end
theorem exists_disj_as_disj_exists_converse :
(∃x, P x) ∨ (∃x, Q x) → (∃x, P x ∨ Q x) :=
begin
intro epx_eqx,
cases epx_eqx with epx eqx,
cases epx with x px,
existsi x,
left,
exact px,
cases eqx with x qx,
existsi x,
right,
exact qx,
end
theorem forall_conj_as_conj_forall :
(∀x, P x ∧ Q x) → (∀x, P x) ∧ (∀x, Q x) :=
begin
intro p_px_qx,
split,
intro x,
have h : P x ∧ Q x := p_px_qx x,
cases h with px qx,
exact px,
intro x,
have h: P x ∧ Q x := p_px_qx x,
cases h with px qx,
exact qx,
end
theorem forall_conj_as_conj_forall_converse :
(∀x, P x) ∧ (∀x, Q x) → (∀x, P x ∧ Q x) :=
begin
intro p_px_qx,
intro x,
cases p_px_qx with px qx,
split,
have h : P x := px x,
exact h,
have h : Q x := qx x,
exact h
end
theorem forall_disj_as_disj_forall_converse :
(∀x, P x) ∨ (∀x, Q x) → (∀x, P x ∨ Q x) :=
begin
intro p_px_qx,
intro x,
cases p_px_qx with px qx,
have h : P x := px x,
left,
exact h,
have h : Q x := qx x,
right,
exact h,
end
/- NOT THEOREMS --------------------------------
theorem forall_disj_as_disj_forall :
(∀x, P x ∨ Q x) → (∀x, P x) ∨ (∀x, Q x) :=
begin
end
theorem exists_conj_as_conj_exists_converse :
(∃x, P x) ∧ (∃x, Q x) → (∃x, P x ∧ Q x) :=
begin
end
---------------------------------------------- -/
end predicate
|
(*<*)
theory Restrict_Frees_Impl
imports
Restrict_Bounds_Impl
Restrict_Frees
begin
(*>*)
section \<open>Refining the Non-Deterministic @{term simplification.split} Function\<close>
definition "fixfree_impl \<Q> = map (apsnd set) (filter (\<lambda>(Q, _ :: (nat \<times> nat) list). \<exists>x \<in> fv Q. gen_impl x Q = [])
(sorted_list_of_set ((apsnd sorted_list_of_set) ` \<Q>)))"
definition "nongens_impl Q = filter (\<lambda>x. gen_impl x Q = []) (sorted_list_of_set (fv Q))"
lemma set_nongens_impl: "set (nongens_impl Q) = nongens Q"
by (auto simp: nongens_def nongens_impl_def set_gen_impl simp flip: List.set_empty)
lemma set_fixfree_impl: "finite \<Q> \<Longrightarrow> \<forall>(_, Qeq) \<in> \<Q>. finite Qeq \<Longrightarrow> set (fixfree_impl \<Q>) = fixfree \<Q>"
by (fastforce simp: fixfree_def nongens_def fixfree_impl_def set_gen_impl image_iff apsnd_def map_prod_def
simp flip: List.set_empty split: prod.splits intro: exI[of _ "sorted_list_of_set _"])
lemma fixfree_empty_iff: "finite \<Q> \<Longrightarrow> \<forall>(_, Qeq) \<in> \<Q>. finite Qeq \<Longrightarrow> fixfree \<Q> \<noteq> {} \<longleftrightarrow> fixfree_impl \<Q> \<noteq> []"
by (auto simp: set_fixfree_impl dest: arg_cong[of _ _ set] simp flip: List.set_empty)
definition "inf_impl \<Q>fin Q =
map (apsnd set) (filter (\<lambda>(Qfix, xys). disjointvars Qfix (set xys) \<noteq> {} \<or> fv Qfix \<union> Field (set xys) \<noteq> fv Q)
(sorted_list_of_set ((apsnd sorted_list_of_set) ` \<Q>fin)))"
lemma set_inf_impl: "finite \<Q>fin \<Longrightarrow> \<forall>(_, Qeq) \<in> \<Q>fin. finite Qeq \<Longrightarrow> set (inf_impl \<Q>fin Q) = inf \<Q>fin Q"
by (fastforce simp: inf_def inf_impl_def image_iff)
lemma inf_empty_iff: "finite \<Q>fin \<Longrightarrow> \<forall>(_, Qeq) \<in> \<Q>fin. finite Qeq \<Longrightarrow> inf \<Q>fin Q \<noteq> {} \<longleftrightarrow> inf_impl \<Q>fin Q \<noteq> []"
by (auto simp: set_inf_impl dest: arg_cong[of _ _ set] simp flip: List.set_empty)
definition (in simplification) split_impl :: "('a :: {infinite, linorder}, 'b :: linorder) fmla \<Rightarrow> (('a, 'b) fmla \<times> ('a, 'b) fmla) nres" where
"split_impl Q = do {
Q' \<leftarrow> rb_impl Q;
\<Q>pair \<leftarrow> WHILE
(\<lambda>(\<Q>fin, _). fixfree_impl \<Q>fin \<noteq> []) (\<lambda>(\<Q>fin, \<Q>inf). do {
(Qfix, Qeq) \<leftarrow> RETURN (hd (fixfree_impl \<Q>fin));
x \<leftarrow> RETURN (hd (nongens_impl Qfix));
G \<leftarrow> RETURN (hd (cov_impl x Qfix));
let \<Q>fin = \<Q>fin - {(Qfix, Qeq)} \<union>
{(simp (Conj Qfix (DISJ (qps G))), Qeq)} \<union>
(\<Union>y \<in> eqs x G. {(cp (Qfix[x \<^bold>\<rightarrow> y]), Qeq \<union> {(x,y)})});
let \<Q>inf = \<Q>inf \<union> {cp (Qfix \<^bold>\<bottom> x)};
RETURN (\<Q>fin, \<Q>inf)})
({(Q', {})}, {});
\<Q>pair \<leftarrow> WHILE
(\<lambda>(\<Q>fin, _). inf_impl \<Q>fin Q \<noteq> []) (\<lambda>(\<Q>fin, \<Q>inf). do {
Qpair \<leftarrow> RETURN (hd (inf_impl \<Q>fin Q));
let \<Q>fin = \<Q>fin - {Qpair};
let \<Q>inf = \<Q>inf \<union> {CONJ Qpair};
RETURN (\<Q>fin, \<Q>inf)})
\<Q>pair;
let (Qfin, Qinf) = assemble \<Q>pair;
Qinf \<leftarrow> rb_impl Qinf;
RETURN (Qfin, Qinf)}"
lemma (in simplification) split_INV2_imp_split_INV1: "split_INV2 Q \<Q>pair \<Longrightarrow> split_INV1 Q \<Q>pair"
unfolding split_INV1_def split_INV2_def wf_state_def sr_def by auto
lemma hd_fixfree_impl_props:
assumes "finite \<Q>" "\<forall>(_, Qeq) \<in> \<Q>. finite Qeq" "fixfree_impl \<Q> \<noteq> []"
shows "hd (fixfree_impl \<Q>) \<in> \<Q>" "nongens (fst (hd (fixfree_impl \<Q>))) \<noteq> {}"
proof -
from hd_in_set[of "fixfree_impl \<Q>"] assms(3) have "hd (fixfree_impl \<Q>) \<in> set (fixfree_impl \<Q>)"
by blast
then have "hd (fixfree_impl \<Q>) \<in> fixfree \<Q>"
by (auto simp: set_fixfree_impl assms(1,2))
then show "hd (fixfree_impl \<Q>) \<in> \<Q>" "nongens (fst (hd (fixfree_impl \<Q>))) \<noteq> {}"
unfolding fixfree_def by auto
qed
lemma (in simplification) split_impl_refines_split: "split_impl Q \<le> split Q"
apply (unfold split_def split_impl_def Let_def)
supply rb_impl_refines_rb[refine_mono]
apply refine_mono
apply (rule order_trans[OF WHILE_le_WHILEI[where I="split_INV1 Q"]])
apply (rule order_trans[OF WHILEI_le_WHILEIT])
apply (rule WHILEIT_refine[OF _ _ _ refine_IdI, THEN refine_IdD])
apply (simp_all only: pair_in_Id_conv split: prod.splits) [4]
apply (intro allI impI, hypsubst_thin)
apply (subst fixfree_empty_iff; auto simp: split_INV1_def wf_state_def)
apply (intro allI impI, simp only: prod.inject, elim conjE, hypsubst_thin)
apply refine_mono
apply (subst set_fixfree_impl[symmetric]; auto simp: split_INV1_def wf_state_def intro!: hd_in_set)
apply clarsimp
subgoal for Q' \<Q>fin \<Q>inf Qfix Qeq Qfix' Qeq'
using hd_fixfree_impl_props(2)[of \<Q>fin]
by (force simp: split_INV1_def wf_state_def set_nongens_impl[symmetric] dest!: sym[of "(Qfix', _)"] intro!: hd_in_set)
apply clarsimp
subgoal for Q' \<Q>fin \<Q>inf Qfix Qeq Qfix' Qeq'
apply (intro RETURN_rule cov_impl_cov hd_in_set rrb_cov_impl)
using hd_fixfree_impl_props(1)[of \<Q>fin]
by (force simp: split_INV1_def wf_state_def dest!: sym[of "(Qfix', _)"])
apply (rule order_trans[OF WHILE_le_WHILEI[where I="split_INV1 Q"]])
apply (rule order_trans[OF WHILEI_le_WHILEIT])
apply (rule WHILEIT_refine[OF _ _ _ refine_IdI, THEN refine_IdD])
apply (simp_all only: pair_in_Id_conv split_INV2_imp_split_INV1 split: prod.splits) [4]
apply (intro allI impI, simp only: prod.inject, elim conjE, hypsubst_thin)
apply (subst inf_empty_iff; auto simp: split_INV2_def wf_state_def)
apply (intro allI impI, simp only: prod.inject, elim conjE, hypsubst_thin)
apply refine_mono
apply (subst set_inf_impl[symmetric]; auto simp: split_INV2_def wf_state_def intro!: hd_in_set)
done
definition (in simplification) split_impl_det :: "('a :: {infinite, linorder}, 'b :: linorder) fmla \<Rightarrow> (('a, 'b) fmla \<times> ('a, 'b) fmla) dres" where
"split_impl_det Q = do {
Q' \<leftarrow> rb_impl_det Q;
\<Q>pair \<leftarrow> dWHILE
(\<lambda>(\<Q>fin, _). fixfree_impl \<Q>fin \<noteq> []) (\<lambda>(\<Q>fin, \<Q>inf). do {
(Qfix, Qeq) \<leftarrow> dRETURN (hd (fixfree_impl \<Q>fin));
x \<leftarrow> dRETURN (hd (nongens_impl Qfix));
G \<leftarrow> dRETURN (hd (cov_impl x Qfix));
let \<Q>fin = \<Q>fin - {(Qfix, Qeq)} \<union>
{(simp (Conj Qfix (DISJ (qps G))), Qeq)} \<union>
(\<Union>y \<in> eqs x G. {(cp (Qfix[x \<^bold>\<rightarrow> y]), Qeq \<union> {(x,y)})});
let \<Q>inf = \<Q>inf \<union> {cp (Qfix \<^bold>\<bottom> x)};
dRETURN (\<Q>fin, \<Q>inf)})
({(Q', {})}, {});
\<Q>pair \<leftarrow> dWHILE
(\<lambda>(\<Q>fin, _). inf_impl \<Q>fin Q \<noteq> []) (\<lambda>(\<Q>fin, \<Q>inf). do {
Qpair \<leftarrow> dRETURN (hd (inf_impl \<Q>fin Q));
let \<Q>fin = \<Q>fin - {Qpair};
let \<Q>inf = \<Q>inf \<union> {CONJ Qpair};
dRETURN (\<Q>fin, \<Q>inf)})
\<Q>pair;
let (Qfin, Qinf) = assemble \<Q>pair;
Qinf \<leftarrow> rb_impl_det Qinf;
dRETURN (Qfin, Qinf)}"
lemma (in simplification) split_impl_det_refines_split_impl: "nres_of (split_impl_det Q) \<le> split_impl Q"
unfolding split_impl_def split_impl_det_def Let_def
by (refine_transfer rb_impl_det_refines_rb_impl)
lemmas (in simplification) SPLIT_correct =
split_impl_det_refines_split_impl[THEN order_trans, OF
split_impl_refines_split[THEN order_trans, OF
split_correct]]
(*<*)
end
(*>*) |
<a href="https://colab.research.google.com/github/ol8vil/thingsboard/blob/master/GAAX.ipynb" target="_parent"></a>
```
import math
from sympy import *
init_printing(use_unicode=True)
```
```
PR1 = 20*10**6 # reservoir pressure for 1st well
PR2 = 20*10**6 # reservoir pressure for 2nd well
PRw1 = 18*10**6 # downhole pressure for 1st well
PRw2 = 18*10**6 # downhole pressure for 2nd well
R1 = 300 # contour supplement radius for 1st well
R2 = 300 # contour supplement radius for 2nd well
Rw1 = 0.12 # 1st well’s radius
Rw2 = 0.12 # 2nd well’s radius
b1 = 140 # vertical distance from 1st well to origin of the coordinates in the intersection area
b2 = 140 # vertical distance from 2nd well to origin of the coordinates in the intersection area
h1 = 4 # thickness of the reservoir layer
h2 = 4 # thickness of the reservoir layer
k = 0.63*10**-2 # permeability coefficient
μ = 1*10**-2
r1z1 = 57.4 # radius on edge of zone 1 for well 1
r1z2 = 262 # radius on edge of zone 2 for well 1
μz1 = 103.4*10**-4 # viscosity on the border of the 1st zone
μz2 = 141*10**-4 # viscosity on the border of the 2nd zone
μz3 = 206.9*10**-4 # viscosity on the border of the 3rd zone (contour supplement)
P1z1 = PR1-((PR1-PRw1) * math.log(R1/r1z1))/math.log(R1/Rw1)# pressure on edge of zone 1 for well 1
P1z2 = PR1-((PR1-PRw1) * math.log(R1/r1z2))/math.log(R1/Rw1)# pressure on edge of zone 2 for well 1
P2z1 = P1z1 # pressure on edge of zone 1 for well 2
P2z2 = P1z2 # pressure on edge of zone 2 for well 2
r2z1 = R2 * (R2/Rw2)**(-(PR2-P2z1)/(PR2-PRw2)) # radius on edge of zone 1 for well 2
r2z2 = R2 * (R2/Rw2)**(-(PR2-P2z2)/(PR2-PRw2)) # radius on edge of zone 2 for well 2
```
```
# Calculate, for each well, the ratio of the pressure drop to the natural log
# of (contour supplement radius / well radius)
DP1 = (PR1 - PRw1) / log(R1 / Rw1)
DP2 = (PR2 - PRw2) / log(R2 / Rw2)
```
```
# Define the radius from the 1st and 2nd wells to an arbitrary point (x, y)
# in the intersection area
x, y = symbols('x y')
r1 = sqrt(x**2 + (y+b1)**2)
r2 = sqrt(x**2 + (y-b2)**2)
```
```
# Define cos(θ), the cosine of the angle between the two pressure vectors
w = (r1**2 +r2**2 - (b1+b2)**2) / (2*r1*r2)
w
```
```
# Define the net pressure function combining both pressure vectors
P = sqrt(DP1**2 * log((R1/r1)**2)+DP2**2*log((R2/r2)**2)+2*DP1*DP2*log(R2/r2)*log(R1/r1)*w)
P
```
```
xintersection = (math.sqrt(-(b1+b2)**4+2*R1**2*(b1+b2)**2+2*R2**2*(b1+b2)**2-(R1**2-R2**2)**2))/(2*sqrt(b1**2+2*b1*b2+b2**2))
yintersection = (-b1**2+b2**2+R1**2-R2**2)/(2*(b1+b2))
print(xintersection)
print(yintersection)
```
265.329983228432
0.0
```
yP0 = solve(DP1*log(R1/r1.subs(x, 0))-DP2*log(R2/r2.subs(x, 0)),y) # -4.89639*10**-15
yP0
```
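As a quick sanity check of the net-pressure expression `P` (the evaluation point below is chosen purely for illustration), it can be evaluated at the origin, midway between the two wells, where `r1 = b1` and `r2 = b2`:
```
# Evaluate the net pressure drop at the midpoint between the wells (x = 0, y = 0);
# .evalf() turns the exact symbolic result into a floating-point number
P.subs({x: 0, y: 0}).evalf()
```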
|
#pragma once
#include <voilk/protocol/authority.hpp>
#include <fc/variant.hpp>
#include <boost/container/flat_set.hpp>
#include <string>
#include <vector>
namespace voilk { namespace protocol {
struct get_required_auth_visitor
{
typedef void result_type;
flat_set< account_name_type >& active;
flat_set< account_name_type >& owner;
flat_set< account_name_type >& posting;
std::vector< authority >& other;
get_required_auth_visitor(
flat_set< account_name_type >& a,
flat_set< account_name_type >& own,
flat_set< account_name_type >& post,
std::vector< authority >& oth )
: active( a ), owner( own ), posting( post ), other( oth ) {}
template< typename ...Ts >
void operator()( const fc::static_variant< Ts... >& v )
{
v.visit( *this );
}
template< typename T >
void operator()( const T& v )const
{
v.get_required_active_authorities( active );
v.get_required_owner_authorities( owner );
v.get_required_posting_authorities( posting );
v.get_required_authorities( other );
}
};
} } // voilk::protocol
//
// Place VOILK_DECLARE_OPERATION_TYPE in a .hpp file to declare
// functions related to your operation type
//
#define VOILK_DECLARE_OPERATION_TYPE( OperationType ) \
\
namespace voilk { namespace protocol { \
\
void operation_validate( const OperationType& o ); \
void operation_get_required_authorities( const OperationType& op, \
flat_set< account_name_type >& active, \
flat_set< account_name_type >& owner, \
flat_set< account_name_type >& posting, \
vector< authority >& other ); \
\
} } /* voilk::protocol */
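//
// Example (illustrative; my_op is a hypothetical operation type, not something
// declared in this header): a header that defines an operation would then
// declare its helpers with
//
//    VOILK_DECLARE_OPERATION_TYPE( my_op )
//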
|
If $f$ is the identity function on the coefficients of $p$, then $p = f(p)$. |
Since 2015 Villa 's shirt sponsors have been <unk> . Previous commercial sponsors have been Davenports ( 1982 – 83 ) , Mita ( 1983 – 93 ) , Müller ( 1993 – 95 ) , AST Computer ( 1995 – 98 ) , LDV ( 1998 – 2000 ) , NTL ( 2000 – 02 ) , Rover ( 2002 – 04 ) , DWS Investments ( 2004 – 06 ) , <unk> ( 2006 – 08 ) , <unk> ( 2010 – 11 ) , Genting Casinos ( 2011 – 13 ) , <unk> ( 2013 – 2015 ) , and Intuit <unk> ( 2015 – ) . Since 2016 , kit has been manufactured by Under Armour . Previous manufacturers have been Umbro ( 1972 – 81 , 1990 – 93 ) , le Coq Sportif ( 1981 – 83 ) , Henson ( 1983 – 87 ) , Hummel ( 1987 – 90 , 2004 – 07 ) , Asics ( 1993 – 95 ) , Reebok ( 1995 – 2000 ) , Diadora ( 2000 – 04 ) , Nike ( 2007 – 12 ) and Macron ( 2012 – 16 ) .
|
# -*- coding: utf-8 -*-
import sys
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy import sparse
import cg
sys.path.append('..\\3rd_exercise')
import multigrid as mg
def rho_func(x):
return np.sin(x) * np.exp(-x**2)
N = 512
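# Multigrid setup: levels = log2(N) - 1 grid levels; the +1e-2 below guards
# against floating-point round-off in math.log before truncation to int.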
gridkwds = dict(rho_func=rho_func,
N=N, xmin=-5, xmax=5,
levels=int(math.log(N, 2)+1e-2)-1)
rho = rho_func(np.linspace(-5, 5, N))
L = sparse.diags([1, -2, 1], [-1, 0, 1], (N, N), format='csc')
def compare(imax_cg, imax_mg):
x_cg = np.arange(imax_cg)
# one mg iteration contains 2*levels single iterations of the solver
N = 2*gridkwds['levels']*imax_mg
x_mg = np.arange(0, N, 2*gridkwds['levels'])
err_cg = cg.cg(L, rho, imax=imax_cg)[2]
err_jacobi = mg.err(solver='jacobi', imax=imax_mg, **gridkwds)
err_omegajac = mg.err(solver='omega_jacobi', imax=imax_mg, **gridkwds)
err_gaussseidel = mg.err(solver='gauss_seidel', imax=imax_mg, **gridkwds)
err_redblack = mg.err(solver='red_black', imax=imax_mg, **gridkwds)
ax = plt.gca()
ax.semilogy(x_cg, err_cg, label='cg')
ax.semilogy(x_mg, err_jacobi, label='mg - jacobi')
ax.semilogy(x_mg, err_omegajac, label='mg - omega-jacobi')
ax.semilogy(x_mg, err_gaussseidel, label='mg - gauss-seidel')
ax.semilogy(x_mg, err_redblack, label='mg - red-black')
ax.set_title('comparison - Multigrid and Conjugate Gradient')
ax.legend()
compare(500, 30)
plt.show()
|
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE CPP #-}
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DeriveTraversable #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE NoImplicitPrelude #-}
{-# LANGUAGE Rank2Types #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE StandaloneDeriving #-}
-----------------------------------------------------------------------------
-- |
-- A class for semirings (types with two associative binary operations, one of which is also commutative, and two respective identities), with various general-purpose instances.
--
-----------------------------------------------------------------------------
module Data.Semiring
( -- * Semiring typeclass
Semiring(..)
, (+)
, (*)
, (^)
, foldMapP
, foldMapT
, sum
, product
, sum'
, product'
, isZero
, isOne
-- * Types
, Add(..)
, Mul(..)
, WrappedNum(..)
, Mod2(..)
#if defined(VERSION_containers)
, IntSetOf(..)
, IntMapOf(..)
#endif
-- * Ring typeclass
, Ring(..)
, fromInteger
, fromIntegral
, minus
, (-)
) where
import Control.Applicative (Applicative(..), Const(..), liftA2)
import Data.Bits (Bits)
import Data.Bool (Bool(..), (||), (&&), otherwise)
import Data.Coerce (Coercible, coerce)
import Data.Complex (Complex(..))
import Data.Eq (Eq(..))
import Data.Fixed (Fixed, HasResolution)
import Data.Foldable (Foldable(foldMap))
import qualified Data.Foldable as Foldable
import Data.Function ((.), const, id)
#if defined(VERSION_unordered_containers) || defined(VERSION_containers)
import Data.Function (flip)
#endif
import Data.Functor (Functor(..))
#if MIN_VERSION_base(4,12,0)
import Data.Functor.Contravariant (Predicate(..), Equivalence(..), Op(..))
#endif
import Data.Functor.Identity (Identity(..))
#if defined(VERSION_unordered_containers)
import Data.Hashable (Hashable)
import Data.HashMap.Strict (HashMap)
import qualified Data.HashMap.Strict as HashMap
import Data.HashSet (HashSet)
import qualified Data.HashSet as HashSet
#endif
import Data.Int (Int, Int8, Int16, Int32, Int64)
import Data.Maybe (Maybe(..))
#if MIN_VERSION_base(4,12,0)
import Data.Monoid (Ap(..))
#endif
#if defined(VERSION_containers)
#if MIN_VERSION_base(4,7,0)
import Data.IntMap (IntMap)
import qualified Data.IntMap as IntMap
import Data.IntSet (IntSet)
import qualified Data.IntSet as IntSet
#endif
import Data.Map (Map)
import qualified Data.Map as Map
#endif
import Data.Monoid (Monoid(..), Dual(..))
import Data.Ord (Ord((<)), (>=))
import Data.Ord (Down(..))
import Data.Proxy (Proxy(..))
import Data.Ratio (Ratio, Rational, (%))
import Data.Semigroup.Compat (Semigroup(..))
#if defined(VERSION_containers)
import Data.Set (Set)
import qualified Data.Set as Set
#endif
import Data.Traversable (Traversable)
import Data.Typeable (Typeable)
import Data.Word (Word, Word8, Word16, Word32, Word64)
import Foreign.C.Types
(CChar, CClock, CDouble, CFloat, CInt,
CIntMax, CIntPtr, CLLong, CLong,
CPtrdiff, CSChar, CSUSeconds, CShort,
CSigAtomic, CSize, CTime, CUChar, CUInt,
CUIntMax, CUIntPtr, CULLong, CULong,
CUSeconds, CUShort, CWchar)
import Foreign.Ptr (IntPtr, WordPtr)
import Foreign.Storable (Storable)
import GHC.Enum (Enum, Bounded)
import GHC.Err (error)
import GHC.Float (Float, Double)
import GHC.Generics (Generic,Generic1)
import GHC.IO (IO)
import GHC.Integer (Integer)
import qualified GHC.Num as Num
import GHC.Read (Read)
import GHC.Real (Integral, Fractional, Real, RealFrac)
import qualified GHC.Real as Real
import GHC.Show (Show)
import Numeric.Natural (Natural)
#ifdef mingw32_HOST_OS
#define HOST_OS_WINDOWS 1
#else
#define HOST_OS_WINDOWS 0
#endif
#if !HOST_OS_WINDOWS
import System.Posix.Types
(CCc, CDev, CGid, CIno, CMode, CNlink,
COff, CPid, CRLim, CSpeed, CSsize,
CTcflag, CUid, Fd)
#endif
infixl 7 *, `times`
infixl 6 +, `plus`, -, `minus`
infixr 8 ^
{--------------------------------------------------------------------
Helpers
--------------------------------------------------------------------}
-- | Raise a number to a non-negative integral power.
-- If the power is negative, this will call 'error'.
{-# SPECIALISE [1] (^) ::
Integer -> Integer -> Integer,
Integer -> Int -> Integer,
Int -> Int -> Int #-}
{-# INLINABLE [1] (^) #-} -- See note [Inlining (^)]
(^) :: (Semiring a, Integral b) => a -> b -> a
x ^ y
| y < 0 = error "Data.Semiring.^: negative power"
| y == 0 = one
| otherwise = getMul (stimes y (Mul x))
{- Note [Inlining (^)]
~~~~~~~~~~~~~~~~~~~
The INLINABLE pragma allows (^) to be specialised at its call sites.
If it is called repeatedly at the same type, that can make a huge
difference, because of those constants which can be repeatedly
calculated.
Currently the fromInteger calls are not floated because we get
\d1 d2 x y -> blah
after the gentle round of simplification.
-}
{- Rules for powers with known small exponent
see Trac #5237
For small exponents, (^) is inefficient compared to manually
expanding the multiplication tree.
Here, rules for the most common exponent types are given.
The range of exponents for which rules are given is quite
arbitrary and kept small to not unduly increase the number of rules.
It might be desirable to have corresponding rules also for
exponents of other types (e.g., Word), but it's doubtful they
would fire, since the exponents of other types tend to get
floated out before the rule has a chance to fire. (Why?)
Note: Trying to save multiplication by sharing the square for
exponents 4 and 5 does not save time, indeed, for Double, it is
up to twice slower, so the rules contain flat sequences of
multiplications.
-}
{-# RULES
"^0/Int" forall x. x ^ (0 :: Int) = one
"^1/Int" forall x. x ^ (1 :: Int) = let u = x in u
"^2/Int" forall x. x ^ (2 :: Int) = let u = x in u*u
"^3/Int" forall x. x ^ (3 :: Int) = let u = x in u*u*u
"^4/Int" forall x. x ^ (4 :: Int) = let u = x in u*u*u*u
"^5/Int" forall x. x ^ (5 :: Int) = let u = x in u*u*u*u*u
"^0/Integer" forall x. x ^ (0 :: Integer) = one
"^1/Integer" forall x. x ^ (1 :: Integer) = let u = x in u
"^2/Integer" forall x. x ^ (2 :: Integer) = let u = x in u*u
"^3/Integer" forall x. x ^ (3 :: Integer) = let u = x in u*u*u
"^4/Integer" forall x. x ^ (4 :: Integer) = let u = x in u*u*u*u
"^5/Integer" forall x. x ^ (5 :: Integer) = let u = x in u*u*u*u*u
#-}
-- | Infix shorthand for 'plus'.
(+) :: Semiring a => a -> a -> a
(+) = plus
{-# INLINE (+) #-}
-- | Infix shorthand for 'times'.
(*) :: Semiring a => a -> a -> a
(*) = times
{-# INLINE (*) #-}
-- | Infix shorthand for 'minus'.
(-) :: Ring a => a -> a -> a
(-) = minus
{-# INLINE (-) #-}
-- | Map each element of the structure to a semiring, and combine the results
-- using 'plus'.
foldMapP :: (Foldable t, Semiring s) => (a -> s) -> t a -> s
foldMapP f = Foldable.foldr (plus . f) zero
{-# INLINE foldMapP #-}
-- | Map each element of the structure to a semiring, and combine the results
-- using 'times'.
foldMapT :: (Foldable t, Semiring s) => (a -> s) -> t a -> s
foldMapT f = Foldable.foldr (times . f) one
{-# INLINE foldMapT #-}
infixr 9 #.
(#.) :: Coercible b c => (b -> c) -> (a -> b) -> a -> c
(#.) _ = coerce
-- | The 'sum' function computes the additive sum of the elements in a structure.
-- This function is lazy. For a strict version, see 'sum''.
sum :: (Foldable t, Semiring a) => t a -> a
sum = getAdd #. foldMap Add
{-# INLINE sum #-}
-- | The 'product' function computes the product of the elements in a structure.
-- This function is lazy. For a strict version, see 'product''.
product :: (Foldable t, Semiring a) => t a -> a
product = getMul #. foldMap Mul
{-# INLINE product #-}
-- | The 'sum'' function computes the additive sum of the elements in a structure.
-- This function is strict. For a lazy version, see 'sum'.
sum' :: (Foldable t, Semiring a) => t a -> a
sum' = Foldable.foldl' plus zero
{-# INLINE sum' #-}
-- | The 'product'' function computes the product of the elements in a structure.
-- This function is strict. For a lazy version, see 'product'.
product' :: (Foldable t, Semiring a) => t a -> a
product' = Foldable.foldl' times one
{-# INLINE product' #-}
-- | Monoid under 'plus'. Analogous to 'Data.Monoid.Sum', but
-- uses the 'Semiring' constraint rather than 'Num.Num'.
newtype Add a = Add { getAdd :: a }
deriving
( Bounded
, Enum
, Eq
, Foldable
, Fractional
, Functor
, Generic
, Generic1
, Num.Num
, Ord
, Read
, Real
, RealFrac
, Show
, Storable
, Traversable
, Typeable
)
instance Semiring a => Semigroup (Add a) where
Add a <> Add b = Add (a + b)
stimes n (Add a) = Add (fromNatural (Real.fromIntegral n) * a)
{-# INLINE (<>) #-}
instance Semiring a => Monoid (Add a) where
mempty = Add zero
mappend = (<>)
{-# INLINE mempty #-}
{-# INLINE mappend #-}
-- | This is an internal type, used solely for the
-- default implementation of 'fromNatural'.
newtype Add' a = Add' { getAdd' :: a }
instance Semiring a => Semigroup (Add' a) where
Add' a <> Add' b = Add' (a + b)
-- | Monoid under 'times'. Analogous to 'Data.Monoid.Product', but
-- uses the 'Semiring' constraint rather than 'Num.Num'.
newtype Mul a = Mul { getMul :: a }
deriving
( Bounded
, Enum
, Eq
, Foldable
, Fractional
, Functor
, Generic
, Generic1
, Num.Num
, Ord
, Read
, Real
, RealFrac
, Show
, Storable
, Traversable
, Typeable
)
instance Semiring a => Semigroup (Mul a) where
Mul a <> Mul b = Mul (a * b)
{-# INLINE (<>) #-}
instance Semiring a => Monoid (Mul a) where
mempty = Mul one
mappend = (<>)
{-# INLINE mempty #-}
{-# INLINE mappend #-}
-- | Provide Semiring and Ring for an arbitrary 'Num.Num'. It is useful with GHC 8.6+'s DerivingVia extension.
newtype WrappedNum a = WrapNum { unwrapNum :: a }
deriving
( Bounded
, Enum
, Eq
, Foldable
, Fractional
, Functor
, Generic
, Generic1
, Num.Num
, Ord
, Read
, Real
, RealFrac
, Show
, Storable
, Traversable
, Typeable
, Bits
)
instance Num.Num a => Semiring (WrappedNum a) where
plus = (Num.+)
zero = 0
times = (Num.*)
one = 1
fromNatural = Real.fromIntegral
instance Num.Num a => Ring (WrappedNum a) where
negate = Num.negate
-- | 'Mod2' represents the integers mod 2.
--
-- It is useful in computing <https://en.wikipedia.org/wiki/Zhegalkin_polynomial Zhegalkin polynomials>.
newtype Mod2 = Mod2 { getMod2 :: Bool }
deriving
( Bounded
, Enum
, Eq
, Ord
, Read
, Show
, Generic
)
instance Semiring Mod2 where
-- we inline the definition of 'xor'
-- on Bools, since the instance did not exist until
-- base-4.7.0.
plus (Mod2 x) (Mod2 y) = Mod2 (x /= y)
times (Mod2 x) (Mod2 y) = Mod2 (x && y)
zero = Mod2 False
one = Mod2 True
instance Ring Mod2 where
negate = id
{-# INLINE negate #-}
{--------------------------------------------------------------------
Classes
--------------------------------------------------------------------}
-- | The class of semirings (types with two binary
-- operations and two respective identities). One
-- can think of a semiring as two monoids of the same
-- underlying type, with the first being commutative.
-- In the documentation, you will often see the first
-- monoid being referred to as @additive@, and the second
-- monoid being referred to as @multiplicative@, a typical
-- convention when talking about semirings.
--
-- For any type R with a 'Num.Num'
-- instance, the additive monoid is (R, 'Prelude.+', 0)
-- and the multiplicative monoid is (R, 'Prelude.*', 1).
--
-- For 'Prelude.Bool', the additive monoid is ('Prelude.Bool', 'Prelude.||', 'Prelude.False')
-- and the multiplicative monoid is ('Prelude.Bool', 'Prelude.&&', 'Prelude.True').
--
-- Instances should satisfy the following laws:
--
-- [/additive left identity/]
-- @'zero' '+' x = x@
-- [/additive right identity/]
-- @x '+' 'zero' = x@
-- [/additive associativity/]
-- @x '+' (y '+' z) = (x '+' y) '+' z@
-- [/additive commutativity/]
-- @x '+' y = y '+' x@
-- [/multiplicative left identity/]
-- @'one' '*' x = x@
-- [/multiplicative right identity/]
-- @x '*' 'one' = x@
-- [/multiplicative associativity/]
-- @x '*' (y '*' z) = (x '*' y) '*' z@
-- [/left-distributivity of '*' over '+'/]
-- @x '*' (y '+' z) = (x '*' y) '+' (x '*' z)@
-- [/right-distributivity of '*' over '+'/]
-- @(x '+' y) '*' z = (x '*' z) '+' (y '*' z)@
-- [/annihilation/]
-- @'zero' '*' x = x '*' 'zero' = 'zero'@
class Semiring a where
#if __GLASGOW_HASKELL__ >= 708
{-# MINIMAL plus, times, (zero, one | fromNatural) #-}
#endif
plus :: a -> a -> a -- ^ Commutative Operation
zero :: a -- ^ Commutative Unit
zero = fromNatural 0
times :: a -> a -> a -- ^ Associative Operation
one :: a -- ^ Associative Unit
one = fromNatural 1
fromNatural :: Natural -> a -- ^ Homomorphism of additive semigroups
fromNatural 0 = zero
fromNatural n = getAdd' (stimes n (Add' one))
-- | The class of semirings with an additive inverse.
--
-- @'negate' a '+' a = 'zero'@
class Semiring a => Ring a where
#if __GLASGOW_HASKELL__ >= 708
{-# MINIMAL negate #-}
#endif
negate :: a -> a
-- | Subtract two 'Ring' values. For any type @R@ with
-- a 'Num.Num' instance, this is the same as '(Prelude.-)'.
--
-- @x `minus` y = x '+' 'negate' y@
minus :: Ring a => a -> a -> a
minus x y = x + negate y
{-# INLINE minus #-}
-- | Convert from integer to ring.
--
-- When @{-#@ @LANGUAGE RebindableSyntax #-}@ is enabled,
-- this function is used for desugaring integer literals.
-- This may be used to facilitate transition from 'Num.Num' to 'Ring':
-- no need to replace 0 and 1 with 'one' and 'zero'
-- or to cast numeric literals.
fromInteger :: Ring a => Integer -> a
fromInteger x
| x >= 0 = fromNatural (Num.fromInteger x)
| otherwise = negate (fromNatural (Num.fromInteger (Num.negate x)))
{-# INLINE fromInteger #-}
-- | Convert from integral to ring.
fromIntegral :: (Integral a, Ring b) => a -> b
fromIntegral x
| x >= 0 = fromNatural (Real.fromIntegral x)
| otherwise = negate (fromNatural (Real.fromIntegral (Num.negate x)))
{-# INLINE fromIntegral #-}
{--------------------------------------------------------------------
Instances (base)
--------------------------------------------------------------------}
instance Semiring b => Semiring (a -> b) where
plus f g = \x -> f x `plus` g x
zero = const zero
times f g = \x -> f x `times` g x
one = const one
fromNatural = const . fromNatural
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
instance Ring b => Ring (a -> b) where
negate f x = negate (f x)
{-# INLINE negate #-}
instance Semiring () where
plus _ _ = ()
zero = ()
times _ _ = ()
one = ()
fromNatural _ = ()
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
instance Ring () where
negate _ = ()
{-# INLINE negate #-}
instance Semiring (Proxy a) where
plus _ _ = Proxy
zero = Proxy
times _ _ = Proxy
one = Proxy
fromNatural _ = Proxy
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
instance Semiring Bool where
plus = (||)
zero = False
times = (&&)
one = True
fromNatural 0 = False
fromNatural _ = True
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
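-- A small usage note (illustrative): with the 'Bool' instance above, 'sum'
-- acts as disjunction and 'product' as conjunction over a container, e.g.
-- @'sum' [False, True, False]@ is 'True' while @'product' [True, True, False]@
-- is 'False'.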
instance Semiring a => Semiring (Maybe a) where
zero = Nothing
one = Just one
plus Nothing y = y
plus x Nothing = x
plus (Just x) (Just y) = Just (plus x y)
times Nothing _ = Nothing
times _ Nothing = Nothing
times (Just x) (Just y) = Just (times x y)
fromNatural 0 = Nothing
fromNatural n = Just (fromNatural n)
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
instance Semiring a => Semiring (IO a) where
zero = pure zero
one = pure one
plus = liftA2 plus
times = liftA2 times
fromNatural = pure . fromNatural
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
instance Ring a => Ring (IO a) where
negate = fmap negate
{-# INLINE negate #-}
instance Semiring a => Semiring (Dual a) where
zero = Dual zero
Dual x `plus` Dual y = Dual (y `plus` x)
one = Dual one
Dual x `times` Dual y = Dual (y `times` x)
fromNatural = Dual . fromNatural
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
instance Ring a => Ring (Dual a) where
negate (Dual x) = Dual (negate x)
{-# INLINE negate #-}
instance Semiring a => Semiring (Const a b) where
zero = Const zero
one = Const one
plus (Const x) (Const y) = Const (x `plus` y)
times (Const x) (Const y) = Const (x `times` y)
fromNatural = Const . fromNatural
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
instance Ring a => Ring (Const a b) where
negate (Const x) = Const (negate x)
{-# INLINE negate #-}
-- | This instance can suffer due to floating point arithmetic.
instance Ring a => Semiring (Complex a) where
zero = zero :+ zero
one = one :+ zero
plus (x :+ y) (x' :+ y') = plus x x' :+ plus y y'
times (x :+ y) (x' :+ y')
= (x * x' - (y * y')) :+ (x * y' + y * x')
fromNatural n = fromNatural n :+ zero
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
instance Ring a => Ring (Complex a) where
negate (x :+ y) = negate x :+ negate y
{-# INLINE negate #-}
#if MIN_VERSION_base(4,12,0)
instance (Semiring a, Applicative f) => Semiring (Ap f a) where
zero = pure zero
one = pure one
plus = liftA2 plus
times = liftA2 times
fromNatural = pure . fromNatural
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
instance (Ring a, Applicative f) => Ring (Ap f a) where
negate = fmap negate
{-# INLINE negate #-}
#endif
#if MIN_VERSION_base(4,12,0)
deriving instance Semiring (Predicate a)
deriving instance Semiring a => Semiring (Equivalence a)
deriving instance Semiring a => Semiring (Op a b)
deriving instance Ring a => Ring (Op a b)
#endif
#define deriveSemiring(ty) \
instance Semiring (ty) where { \
zero = 0 \
; one = 1 \
; plus x y = (Num.+) x y \
; times x y = (Num.*) x y \
; fromNatural = Real.fromIntegral \
; {-# INLINE zero #-} \
; {-# INLINE one #-} \
; {-# INLINE plus #-} \
; {-# INLINE times #-} \
; {-# INLINE fromNatural #-} \
}
deriveSemiring(Int)
deriveSemiring(Int8)
deriveSemiring(Int16)
deriveSemiring(Int32)
deriveSemiring(Int64)
deriveSemiring(Integer)
deriveSemiring(Word)
deriveSemiring(Word8)
deriveSemiring(Word16)
deriveSemiring(Word32)
deriveSemiring(Word64)
deriveSemiring(Float)
deriveSemiring(Double)
deriveSemiring(CUIntMax)
deriveSemiring(CIntMax)
deriveSemiring(CUIntPtr)
deriveSemiring(CIntPtr)
deriveSemiring(CSUSeconds)
deriveSemiring(CUSeconds)
deriveSemiring(CTime)
deriveSemiring(CClock)
deriveSemiring(CSigAtomic)
deriveSemiring(CWchar)
deriveSemiring(CSize)
deriveSemiring(CPtrdiff)
deriveSemiring(CDouble)
deriveSemiring(CFloat)
deriveSemiring(CULLong)
deriveSemiring(CLLong)
deriveSemiring(CULong)
deriveSemiring(CLong)
deriveSemiring(CUInt)
deriveSemiring(CInt)
deriveSemiring(CUShort)
deriveSemiring(CShort)
deriveSemiring(CUChar)
deriveSemiring(CSChar)
deriveSemiring(CChar)
deriveSemiring(IntPtr)
deriveSemiring(WordPtr)
#if !HOST_OS_WINDOWS
deriveSemiring(CCc)
deriveSemiring(CDev)
deriveSemiring(CGid)
deriveSemiring(CIno)
deriveSemiring(CMode)
deriveSemiring(CNlink)
deriveSemiring(COff)
deriveSemiring(CPid)
deriveSemiring(CRLim)
deriveSemiring(CSpeed)
deriveSemiring(CSsize)
deriveSemiring(CTcflag)
deriveSemiring(CUid)
deriveSemiring(Fd)
#endif
deriveSemiring(Natural)
instance Integral a => Semiring (Ratio a) where
{-# SPECIALIZE instance Semiring Rational #-}
zero = 0 % 1
one = 1 % 1
plus = (Num.+)
times = (Num.*)
fromNatural n = Real.fromIntegral n % 1
{-# INLINE zero #-}
{-# INLINE one #-}
{-# INLINE plus #-}
{-# INLINE times #-}
{-# INLINE fromNatural #-}
deriving instance Semiring a => Semiring (Identity a)
deriving instance Semiring a => Semiring (Down a)
instance HasResolution a => Semiring (Fixed a) where
zero = 0
one = 1
plus = (Num.+)
times = (Num.*)
fromNatural = Real.fromIntegral
{-# INLINE zero #-}
{-# INLINE one #-}
{-# INLINE plus #-}
{-# INLINE times #-}
{-# INLINE fromNatural #-}
#define deriveRing(ty) \
instance Ring (ty) where { \
negate = Num.negate \
; {-# INLINE negate #-} \
}
deriveRing(Int)
deriveRing(Int8)
deriveRing(Int16)
deriveRing(Int32)
deriveRing(Int64)
deriveRing(Integer)
deriveRing(Word)
deriveRing(Word8)
deriveRing(Word16)
deriveRing(Word32)
deriveRing(Word64)
deriveRing(Float)
deriveRing(Double)
deriveRing(CUIntMax)
deriveRing(CIntMax)
deriveRing(CUIntPtr)
deriveRing(CIntPtr)
deriveRing(CSUSeconds)
deriveRing(CUSeconds)
deriveRing(CTime)
deriveRing(CClock)
deriveRing(CSigAtomic)
deriveRing(CWchar)
deriveRing(CSize)
deriveRing(CPtrdiff)
deriveRing(CDouble)
deriveRing(CFloat)
deriveRing(CULLong)
deriveRing(CLLong)
deriveRing(CULong)
deriveRing(CLong)
deriveRing(CUInt)
deriveRing(CInt)
deriveRing(CUShort)
deriveRing(CShort)
deriveRing(CUChar)
deriveRing(CSChar)
deriveRing(CChar)
deriveRing(IntPtr)
deriveRing(WordPtr)
#if !HOST_OS_WINDOWS
deriveRing(CCc)
deriveRing(CDev)
deriveRing(CGid)
deriveRing(CIno)
deriveRing(CMode)
deriveRing(CNlink)
deriveRing(COff)
deriveRing(CPid)
deriveRing(CRLim)
deriveRing(CSpeed)
deriveRing(CSsize)
deriveRing(CTcflag)
deriveRing(CUid)
deriveRing(Fd)
#endif
instance Integral a => Ring (Ratio a) where
negate = Num.negate
{-# INLINE negate #-}
deriving instance Ring a => Ring (Down a)
deriving instance Ring a => Ring (Identity a)
instance HasResolution a => Ring (Fixed a) where
negate = Num.negate
{-# INLINE negate #-}
{--------------------------------------------------------------------
Instances (containers)
--------------------------------------------------------------------}
#if defined(VERSION_containers)
-- | The multiplication laws are satisfied for
-- any underlying 'Monoid', so we require a
-- 'Monoid' constraint instead of a 'Semiring'
-- constraint since 'times' can use
-- the context of either.
instance (Ord a, Monoid a) => Semiring (Set a) where
zero = Set.empty
one = Set.singleton mempty
plus = Set.union
times xs ys = Foldable.foldMap (flip Set.map ys . mappend) xs
fromNatural 0 = zero
fromNatural _ = one
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
-- | Wrapper to mimic 'Set' ('Data.Semigroup.Sum' 'Int'),
-- 'Set' ('Data.Semigroup.Product' 'Int'), etc.,
-- while having a more efficient underlying representation.
newtype IntSetOf a = IntSetOf { getIntSet :: IntSet }
deriving
( Eq
, Generic
, Generic1
, Ord
, Read
, Show
, Typeable
, Semigroup
, Monoid
)
instance (Coercible Int a, Monoid a) => Semiring (IntSetOf a) where
zero = coerce IntSet.empty
one = coerce IntSet.singleton (mempty :: a)
plus = coerce IntSet.union
xs `times` ys
= coerce IntSet.fromList
[ mappend k l
| k :: a <- coerce IntSet.toList xs
, l :: a <- coerce IntSet.toList ys
]
fromNatural 0 = zero
fromNatural _ = one
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
-- | The multiplication laws are satisfied for
-- any underlying 'Monoid' as the key type,
-- so we require a 'Monoid' constraint instead of
-- a 'Semiring' constraint since 'times' can use
-- the context of either.
instance (Ord k, Monoid k, Semiring v) => Semiring (Map k v) where
zero = Map.empty
one = Map.singleton mempty one
plus = Map.unionWith (+)
xs `times` ys
= Map.fromListWith (+)
[ (mappend k l, v * u)
| (k,v) <- Map.toList xs
, (l,u) <- Map.toList ys
]
fromNatural 0 = zero
fromNatural n = Map.singleton mempty (fromNatural n)
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
-- | Wrapper to mimic 'Map' ('Data.Semigroup.Sum' 'Int') v,
-- 'Map' ('Data.Semigroup.Product' 'Int') v, etc.,
-- while having a more efficient underlying representation.
newtype IntMapOf k v = IntMapOf { getIntMap :: IntMap v }
deriving
( Eq
, Generic
, Generic1
, Ord
, Read
, Show
, Typeable
, Semigroup
, Monoid
)
instance (Coercible Int k, Monoid k, Semiring v) => Semiring (IntMapOf k v) where
zero = coerce (IntMap.empty :: IntMap v)
one = coerce (IntMap.singleton :: Int -> v -> IntMap v) (mempty :: k) (one :: v)
plus = coerce (IntMap.unionWith (+) :: IntMap v -> IntMap v -> IntMap v)
xs `times` ys
= coerce (IntMap.fromListWith (+) :: [(Int, v)] -> IntMap v)
[ (mappend k l, v * u)
| (k :: k, v :: v) <- coerce (IntMap.toList :: IntMap v -> [(Int, v)]) xs
, (l :: k, u :: v) <- coerce (IntMap.toList :: IntMap v -> [(Int, v)]) ys
]
fromNatural 0 = zero
fromNatural n = coerce (IntMap.singleton :: Int -> v -> IntMap v) (mempty :: k) (fromNatural n :: v)
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
#endif
{--------------------------------------------------------------------
Instances (unordered-containers)
--------------------------------------------------------------------}
#if defined(VERSION_unordered_containers)
-- | The multiplication laws are satisfied for
-- any underlying 'Monoid', so we require a
-- 'Monoid' constraint instead of a 'Semiring'
-- constraint since 'times' can use
-- the context of either.
instance (Eq a, Hashable a, Monoid a) => Semiring (HashSet a) where
zero = HashSet.empty
one = HashSet.singleton mempty
plus = HashSet.union
times xs ys = Foldable.foldMap (flip HashSet.map ys . mappend) xs
fromNatural 0 = zero
fromNatural _ = one
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
-- | The multiplication laws are satisfied for
-- any underlying 'Monoid' as the key type,
-- so we require a 'Monoid' constraint instead of
-- a 'Semiring' constraint since 'times' can use
-- the context of either.
instance (Eq k, Hashable k, Monoid k, Semiring v) => Semiring (HashMap k v) where
zero = HashMap.empty
one = HashMap.singleton mempty one
plus = HashMap.unionWith (+)
xs `times` ys
= HashMap.fromListWith (+)
[ (mappend k l, v * u)
| (k,v) <- HashMap.toList xs
, (l,u) <- HashMap.toList ys
]
fromNatural 0 = zero
fromNatural n = HashMap.singleton mempty (fromNatural n)
{-# INLINE plus #-}
{-# INLINE zero #-}
{-# INLINE times #-}
{-# INLINE one #-}
{-# INLINE fromNatural #-}
#endif
-- | Is the value 'zero'?
isZero :: (Eq a, Semiring a) => a -> Bool
isZero x = x == zero
{-# INLINEABLE isZero #-}
-- | Is the value 'one'?
isOne :: (Eq a, Semiring a) => a -> Bool
isOne x = x == one
{-# INLINEABLE isOne #-}
|
theory Short_Theory_13_2
imports "HOL-IMP.BExp"
begin
datatype
com = SKIP
| Assign vname aexp ("_ ::= _" [1000, 61] 61)
| Seq com com ("_;;/ _" [60, 61] 60)
| If bexp com com ("(IF _/ THEN _/ ELSE _)" [0, 0, 61] 61)
| Or com com ("_/ OR _" [60, 61] 61)
| While bexp com ("(WHILE _/ DO _)" [0, 61] 61)
text \<open>acom is the type of annotated commands (wrt. a type of annotation)\<close>
datatype 'a acom =
SKIP 'a ("SKIP {_}" 61) |
Assign vname aexp 'a ("(_ ::= _/ {_})" [1000, 61, 0] 61) |
Seq "('a acom)" "('a acom)" ("_;;//_" [60, 61] 60) |
If bexp 'a "'a acom" 'a "'a acom" 'a
("(IF _/ THEN ({_}/ _)/ ELSE ({_}/ _)//{_})" [0, 0, 0, 61, 0, 0] 61) |
Or "'a acom" "'a acom" 'a
("_ OR// _//{_}" [60, 61, 0] 60) |
While 'a bexp 'a "'a acom" 'a
("({_}//WHILE _//DO ({_}//_)//{_})" [0, 0, 0, 61, 0] 61)
notation com.SKIP ("SKIP")
text \<open>strip maps acoms back to the original commands\<close>
text_raw\<open>\snip{stripdef}{1}{1}{%\<close>
fun strip :: "'a acom \<Rightarrow> com" where
"strip (SKIP {P}) = SKIP" |
"strip (x ::= e {P}) = x ::= e" |
"strip (C\<^sub>1;;C\<^sub>2) = strip C\<^sub>1;; strip C\<^sub>2" |
"strip (IF b THEN {P\<^sub>1} C\<^sub>1 ELSE {P\<^sub>2} C\<^sub>2 {P}) =
IF b THEN strip C\<^sub>1 ELSE strip C\<^sub>2" |
"strip (C\<^sub>1 OR C\<^sub>2 {P}) = strip C\<^sub>1 OR strip C\<^sub>2" |
"strip ({I} WHILE b DO {P} C {Q}) = WHILE b DO strip C"
text_raw\<open>}%endsnip\<close>
text \<open>asize counts the number of annotations that a com admits\<close>
text_raw\<open>\snip{asizedef}{1}{1}{%\<close>
fun asize :: "com \<Rightarrow> nat" where
"asize SKIP = 1" |
"asize (x ::= e) = 1" |
"asize (C\<^sub>1;;C\<^sub>2) = asize C\<^sub>1 + asize C\<^sub>2" |
"asize (IF b THEN C\<^sub>1 ELSE C\<^sub>2) = asize C\<^sub>1 + asize C\<^sub>2 + 3" |
"asize (C\<^sub>1 OR C\<^sub>2) = asize C\<^sub>1 + asize C\<^sub>2 + 1" |
"asize (WHILE b DO C) = asize C + 3"
text_raw\<open>}%endsnip\<close>
text \<open>shift eats the first n elements of a sequence\<close>
text_raw\<open>\snip{annotatedef}{1}{1}{%\<close>
definition shift :: "(nat \<Rightarrow> 'a) \<Rightarrow> nat \<Rightarrow> nat \<Rightarrow> 'a" where
"shift f n = (\<lambda>p. f(p+n))"
text \<open>Defined in terms of shift, annotate annotates a command c with a sequence of annotations\<close>
fun annotate :: "(nat \<Rightarrow> 'a) \<Rightarrow> com \<Rightarrow> 'a acom" where
"annotate f SKIP = SKIP {f 0}" |
"annotate f (x ::= e) = x ::= e {f 0}" |
"annotate f (c\<^sub>1;;c\<^sub>2) = annotate f c\<^sub>1;; annotate (shift f (asize c\<^sub>1)) c\<^sub>2" |
"annotate f (IF b THEN c\<^sub>1 ELSE c\<^sub>2) =
IF b THEN {f 0} annotate (shift f 1) c\<^sub>1
ELSE {f(asize c\<^sub>1 + 1)} annotate (shift f (asize c\<^sub>1 + 2)) c\<^sub>2
{f(asize c\<^sub>1 + asize c\<^sub>2 + 2)}" |
"annotate f (c\<^sub>1 OR c\<^sub>2) =
annotate f c\<^sub>1 OR annotate (shift f (asize c\<^sub>1)) c\<^sub>2 {f (asize c\<^sub>1 + asize c\<^sub>2)}" |
"annotate f (WHILE b DO c) =
{f 0} WHILE b DO {f 1} annotate (shift f 2) c {f(asize c + 2)}"
text_raw\<open>}%endsnip\<close>
text \<open>annos collects a command's annotations into a list\<close>
text_raw\<open>\snip{annosdef}{1}{1}{%\<close>
fun annos :: "'a acom \<Rightarrow> 'a list" where
"annos (SKIP {P}) = [P]" |
"annos (x ::= e {P}) = [P]" |
"annos (C\<^sub>1;;C\<^sub>2) = annos C\<^sub>1 @ annos C\<^sub>2" |
"annos (IF b THEN {P\<^sub>1} C\<^sub>1 ELSE {P\<^sub>2} C\<^sub>2 {Q}) =
P\<^sub>1 # annos C\<^sub>1 @ P\<^sub>2 # annos C\<^sub>2 @ [Q]" |
"annos (C\<^sub>1 OR C\<^sub>2 {P}) = annos C\<^sub>1 @ annos C\<^sub>2 @ [P]" |
"annos ({I} WHILE b DO {P} C {Q}) = I # P # annos C @ [Q]"
text_raw\<open>}%endsnip\<close>
text \<open>anno retrieves the pth annotation of a command, by first collecting its annotations and then
indexing into the pth list element\<close>
definition anno :: "'a acom \<Rightarrow> nat \<Rightarrow> 'a" where
"anno C p = annos C ! p"
text \<open>post retrieves the final annotation of a command, i.e. the last element of its annotation list\<close>
definition post :: "'a acom \<Rightarrow>'a" where
"post C = last(annos C)"
text \<open>map_acom maps the annotations of an acom\<close>
text_raw\<open>\snip{mapacomdef}{1}{2}{%\<close>
fun map_acom :: "('a \<Rightarrow> 'b) \<Rightarrow> 'a acom \<Rightarrow> 'b acom" where
"map_acom f (SKIP {P}) = SKIP {f P}" |
"map_acom f (x ::= e {P}) = x ::= e {f P}" |
"map_acom f (C\<^sub>1;;C\<^sub>2) = map_acom f C\<^sub>1;; map_acom f C\<^sub>2" |
"map_acom f (IF b THEN {P\<^sub>1} C\<^sub>1 ELSE {P\<^sub>2} C\<^sub>2 {Q}) =
IF b THEN {f P\<^sub>1} map_acom f C\<^sub>1 ELSE {f P\<^sub>2} map_acom f C\<^sub>2
{f Q}" |
"map_acom f (C\<^sub>1 OR C\<^sub>2 {P}) = map_acom f C\<^sub>1 OR map_acom f C\<^sub>2 {f P}" |
"map_acom f ({I} WHILE b DO {P} C {Q}) =
{f I} WHILE b DO {f P} map_acom f C {f Q}"
text_raw\<open>}%endsnip\<close>
text \<open>the list of annotations for any command is always nonempty\<close>
lemma annos_ne: "annos C \<noteq> []"
by(induction C) auto
text \<open>stripping an annotated command recovers the original command\<close>
lemma strip_annotate[simp]: "strip(annotate f c) = c"
by(induction c arbitrary: f) auto
text \<open>annotating a com yields exactly as many annotations as asize says it admits\<close>
lemma length_annos_annotate[simp]: "length (annos (annotate f c)) = asize c"
by(induction c arbitrary: f) auto
text \<open>the length of an acom's annotation list equals the number of annotations its underlying com admits\<close>
lemma size_annos: "size(annos C) = asize(strip C)"
by(induction C)(auto)
text \<open>if two acoms have the same underlying com, then they have the same number of annotations\<close>
lemma size_annos_same: "strip C1 = strip C2 \<Longrightarrow> size(annos C1) = size(annos C2)"
apply(induct C2 arbitrary: C1)
apply(case_tac C1, simp_all)+
done
lemmas size_annos_same2 = eqTrueI[OF size_annos_same]
text \<open>dually, for p within range, the pth annotation of annotate f c is exactly f p\<close>
lemma anno_annotate[simp]: "p < asize c \<Longrightarrow> anno (annotate f c) p = f p"
proof (induction c arbitrary: f p)
case SKIP
then show ?case by (auto simp: anno_def)
next
case (Assign x1 x2)
then show ?case by (auto simp: anno_def)
next
case (Seq c1 c2)
then show ?case by (auto simp: anno_def nth_append shift_def)
next
case (If x1 c1 c2)
then show ?case
by (auto simp: anno_def nth_append nth_Cons shift_def split: nat.split,
metis add_Suc_right add_diff_inverse add.commute,
rule_tac f=f in arg_cong,
arith)
next
case (Or c1 c2)
then show ?case
by (auto simp: anno_def nth_append shift_def,
rule_tac f=f in arg_cong,
arith)
next
case (While x1 c)
then show ?case
by (auto simp: anno_def nth_append nth_Cons shift_def
split: nat.split, rule_tac f=f in arg_cong,
arith)
qed
text \<open>two acoms are equal iff they have the same underlying command and same list of annotations\<close>
text \<open>The proof is by structural induction on acom, together with list lemmas and the annos lemmas above\<close>
lemma eq_acom_iff_strip_annos:
"C1 = C2 \<longleftrightarrow> strip C1 = strip C2 \<and> annos C1 = annos C2"
apply(induction C1 arbitrary: C2)
apply(case_tac C2, auto simp: size_annos_same2)+
done
text \<open>two acoms are equal iff they have the same underlying command and agree on every annotation pointwise\<close>
lemma eq_acom_iff_strip_anno:
"C1=C2 \<longleftrightarrow> strip C1 = strip C2 \<and> (\<forall>p<size(annos C1). anno C1 p = anno C2 p)"
by(auto simp add: eq_acom_iff_strip_annos anno_def
list_eq_iff_nth_eq size_annos_same2)
text \<open>post commutes with map_acom: the final annotation of map_acom f C is f applied to the final annotation of C\<close>
lemma post_map_acom[simp]: "post(map_acom f C) = f(post C)"
by (induction C) (auto simp: post_def last_append annos_ne)
text \<open>the underlying command is unchanged by map_acom\<close>
lemma strip_map_acom[simp]: "strip (map_acom f C) = strip C"
by (induction C) auto
text \<open>anno commutes with map_acom: the pth annotation of map_acom f C is f applied to the pth annotation of C\<close>
lemma anno_map_acom: "p < size(annos C) \<Longrightarrow> anno (map_acom f C) p = f(anno C p)"
apply(induction C arbitrary: p)
apply(auto simp: anno_def nth_append nth_Cons' size_annos)
done
text \<open>inversion lemmas for strip: the shape of C is determined by the shape of strip C\<close>
lemma strip_eq_SKIP:
"strip C = SKIP \<longleftrightarrow> (\<exists>P. C = SKIP {P})"
by (cases C) simp_all
lemma strip_eq_Assign:
"strip C = x::=e \<longleftrightarrow> (\<exists>P. C = x::=e {P})"
by (cases C) simp_all
lemma strip_eq_Seq:
"strip C = c1;;c2 \<longleftrightarrow> (\<exists>C1 C2. C = C1;;C2 & strip C1 = c1 & strip C2 = c2)"
by (cases C) simp_all
lemma strip_eq_If:
"strip C = IF b THEN c1 ELSE c2 \<longleftrightarrow>
(\<exists>P1 P2 C1 C2 Q. C = IF b THEN {P1} C1 ELSE {P2} C2 {Q} & strip C1 = c1 & strip C2 = c2)"
by (cases C) simp_all
lemma strip_eq_Or:
"strip C = c1 OR c2 \<longleftrightarrow>
(\<exists>C1 C2 P. C = C1 OR C2 {P} & strip C1 = c1 & strip C2 = c2)"
by (cases C) simp_all
lemma strip_eq_While:
"strip C = WHILE b DO c1 \<longleftrightarrow>
(\<exists>I P C1 Q. C = {I} WHILE b DO {P} C1 {Q} & strip C1 = c1)"
by (cases C) simp_all
text \<open>shifting a constant sequence leaves it unchanged, so annotating a command with a constant sequence yields an acom whose set of annotations is the singleton {a}\<close>
lemma set_annos_anno[simp]: "set (annos (annotate (\<lambda>p. a) c)) = {a}"
by(induction c) simp_all
text \<open>the final annotation of an acom is an element of its set of annotations\<close>
lemma post_in_annos: "post C \<in> set(annos C)"
by(auto simp: post_def annos_ne)
text \<open>the final annotation of C is the annotation at the last index of C's annotation list\<close>
lemma post_anno_asize: "post C = anno C (size(annos C) - 1)"
by(simp add: post_def last_conv_nth[OF annos_ne] anno_def)
notation
sup (infixl "\<squnion>" 65) and
inf (infixl "\<sqinter>" 70) and
bot ("\<bottom>") and
top ("\<top>")
context
fixes f :: "vname \<Rightarrow> aexp \<Rightarrow> 'a \<Rightarrow> 'a::sup"
fixes g :: "bexp \<Rightarrow> 'a \<Rightarrow> 'a"
begin
fun Step :: "'a \<Rightarrow> 'a acom \<Rightarrow> 'a acom" where
"Step S (SKIP {Q}) = (SKIP {S})" |
"Step S (x ::= e {Q}) =
x ::= e {f x e S}" |
"Step S (C1;; C2) = Step S C1;; Step (post C1) C2" |
"Step S (IF b THEN {P1} C1 ELSE {P2} C2 {Q}) =
IF b THEN {g b S} Step P1 C1 ELSE {g (Not b) S} Step P2 C2
{post C1 \<squnion> post C2}" |
"Step S (C1 OR C2 {P}) =
Step S C1 OR Step S C2
{post C1 \<squnion> post C2}" |
"Step S ({I} WHILE b DO {P} C {Q}) =
{S \<squnion> post C} WHILE b DO {g b I} Step P C {g (Not b) I}"
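text \<open>Sanity check: Step only updates annotations and never changes the underlying command\<close>
lemma "strip (Step S C) = strip C"
by (induction C arbitrary: S) auto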
end
end
\cleardoublepage
\chapter{Results and Conclusion}
\label{chap:5}
\section{Heading Level 1}
\subsection{Heading Level 2}
\subsubsection{Heading Level 3}
\paragraph{Paragraph Level 1}
\subparagraph{Paragraph Level 2}
% TTeMPS Toolbox.
% Michael Steinlechner, 2013-2016
% Questions and contact: [email protected]
% BSD 2-clause license, see LICENSE.txt
function [eta, B1,B3] = precond_laplace_overlapJacobi( L, xi, xL, xR, G, B1, B3 )
% L is a cell of operators
r = xi.rank;
n = xi.size;
d = xi.order;
% If B1 and B3 are not given as arguments, we need to precalculate them
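% (Interpretation inferred from the recursions below: B1{idx} is the rl x rl matrix of
% the operator parts L{1},...,L{idx-1} projected onto the left-orthogonal cores of xL,
% and B3{idx} is the rr x rr matrix of L{idx+1},...,L{d} projected onto the
% right-orthogonal cores of xR.)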
if nargin < 7
% % if applying L is expensive (not just tridiag), one can store all
% % applications with xL and compute the ones for xR with G.
% % You need to first store LUl
% LUl = cell(d,1);
% for idx = 1:d
% LUl{idx} = tensorprod_ttemps( xL.U{idx}, L{idx}, 2 );
% end
% % and then change to LUr in the loop for B3 below
% % if idx+1==d
% % LUr = tensorprod_ttemps( LUl{idx+1}, G{idx}, 1, true);
% % else
% % LUr = tensorprod_ttemps( tensorprod_ttemps( LUl{idx+1}, G{idx+1}', 3), G{idx}, 1, true);
% % end
B1 = cell(d,1);
B1{1} = 0;
for idx = 2:d
LUl = tensorprod_ttemps( xL.U{idx-1}, L{idx-1}, 2 );
if idx>2
TT = tensorprod_ttemps( xL.U{idx-1}, B1{idx-1}, 1 );
else
TT = 0;
end
B1{idx} = unfold(xL.U{idx-1},'left')'*unfold(TT + LUl,'left');
end
B3 = cell(d,1);
for idx = d-1:-1:1
LUr = tensorprod_ttemps( xR.U{idx+1}, L{idx+1}, 2 );
if idx<d-1
TT = tensorprod_ttemps( xR.U{idx+1}, B3{idx+1}, 3 );
else
TT = 0;
end
B3{idx} = unfold(xR.U{idx+1},'right')*unfold(TT + LUr,'right')';
end
B3{d} = 0;
end
eta = xi;
xi = tangent_to_TTeMPS( xi );
% 1. STEP: Project right hand side
% below is hard-coded version of
% for ii=1:d
% eta_partial_ii = TTeMPS_partial_project_overlap( xL, xR, xi, ii);
% Y{ii} = eta_partial_ii.dU{ii};
% end
% TODO, it seems that the left and right cell arrays consist of a lot of
% identities and zeros.
Y = cell(1,d);
% precompute inner products
left = innerprod( xL, xi, 'LR', d-1, true );
right = innerprod( xR, xi, 'RL', 2, true );
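% (left{i} and right{i} hold partial inner products of xi with xL from the left and
% with xR from the right; Y{idx} below is then the projection of the right-hand side
% xi onto the idx-th tangent-space core.)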
% contract to first core
Y{1} = tensorprod_ttemps( xi.U{1}, right{2}, 3 );
% contract to middle cores
for idx = 2:d-1
res = tensorprod_ttemps( xi.U{idx}, left{idx-1}, 1 );
Y{idx} = tensorprod_ttemps( res, right{idx+1}, 3 );
end
% contract to last core
Y{d} = tensorprod_ttemps( xi.U{d}, left{d-1}, 1 );
% 2. STEP: Solve ALS systems:
% B1 and B3 were precalculated before
for idx = 1:d
rl = r(idx);
rr = r(idx+1);
B2 = L{idx};
% Solve via the diagonalization trick
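% (The local operator couples B2 = L{idx} on the physical index with B1{idx} and
% B3{idx} on the left/right rank indices. Diagonalizing B1 and B3 decouples the
% system into rl*rr independent shifted solves (B2 + E(i)*I) z = rhs(:,i).)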
[V1,E1] = eig(B1{idx}); [V3,E3] = eig(B3{idx});
V = kron(V3,V1);
EE = diag(E1)*ones(1,rr) + ones(rl,1)*diag(E3)'; E = EE(:);
rhs = matricize( Y{idx}, 2 ) * V;
Z = zeros(size(rhs));
for i=1:length(E)
Z(:,i) = (B2 + E(i)*speye(n(idx))) \ rhs(:,i);
end
eta.dU{idx} = tensorize( Z*V', 2, [rl, n(idx), rr] );
end
eta = TTeMPS_tangent_orth( xL, xR, eta ); % todo? Can we improve efficiency since eta is not a generic TTeMPS but shares the same x.U as xL and xR
end
(* Title: HOL/Algebra/Divisibility.thy
Author: Clemens Ballarin
Author: Stephan Hohe
*)
section {* Divisibility in monoids and rings *}
theory Divisibility
imports "~~/src/HOL/Library/Permutation" Coset Group
begin
section {* Factorial Monoids *}
subsection {* Monoids with Cancellation Law *}
locale monoid_cancel = monoid +
assumes l_cancel:
"\<lbrakk>c \<otimes> a = c \<otimes> b; a \<in> carrier G; b \<in> carrier G; c \<in> carrier G\<rbrakk> \<Longrightarrow> a = b"
and r_cancel:
"\<lbrakk>a \<otimes> c = b \<otimes> c; a \<in> carrier G; b \<in> carrier G; c \<in> carrier G\<rbrakk> \<Longrightarrow> a = b"
lemma (in monoid) monoid_cancelI:
assumes l_cancel:
"\<And>a b c. \<lbrakk>c \<otimes> a = c \<otimes> b; a \<in> carrier G; b \<in> carrier G; c \<in> carrier G\<rbrakk> \<Longrightarrow> a = b"
and r_cancel:
"\<And>a b c. \<lbrakk>a \<otimes> c = b \<otimes> c; a \<in> carrier G; b \<in> carrier G; c \<in> carrier G\<rbrakk> \<Longrightarrow> a = b"
shows "monoid_cancel G"
by default fact+
lemma (in monoid_cancel) is_monoid_cancel:
"monoid_cancel G"
..
sublocale group \<subseteq> monoid_cancel
by default simp_all
locale comm_monoid_cancel = monoid_cancel + comm_monoid
lemma comm_monoid_cancelI:
fixes G (structure)
assumes "comm_monoid G"
assumes cancel:
"\<And>a b c. \<lbrakk>a \<otimes> c = b \<otimes> c; a \<in> carrier G; b \<in> carrier G; c \<in> carrier G\<rbrakk> \<Longrightarrow> a = b"
shows "comm_monoid_cancel G"
proof -
interpret comm_monoid G by fact
show "comm_monoid_cancel G"
by unfold_locales (metis assms(2) m_ac(2))+
qed
lemma (in comm_monoid_cancel) is_comm_monoid_cancel:
"comm_monoid_cancel G"
by intro_locales
sublocale comm_group \<subseteq> comm_monoid_cancel
..
subsection {* Products of Units in Monoids *}
lemma (in monoid) Units_m_closed[simp, intro]:
assumes h1unit: "h1 \<in> Units G" and h2unit: "h2 \<in> Units G"
shows "h1 \<otimes> h2 \<in> Units G"
unfolding Units_def
using assms
by auto (metis Units_inv_closed Units_l_inv Units_m_closed Units_r_inv)
lemma (in monoid) prod_unit_l:
assumes abunit[simp]: "a \<otimes> b \<in> Units G" and aunit[simp]: "a \<in> Units G"
and carr[simp]: "a \<in> carrier G" "b \<in> carrier G"
shows "b \<in> Units G"
proof -
have c: "inv (a \<otimes> b) \<otimes> a \<in> carrier G" by simp
have "(inv (a \<otimes> b) \<otimes> a) \<otimes> b = inv (a \<otimes> b) \<otimes> (a \<otimes> b)" by (simp add: m_assoc)
also have "\<dots> = \<one>" by simp
finally have li: "(inv (a \<otimes> b) \<otimes> a) \<otimes> b = \<one>" .
have "\<one> = inv a \<otimes> a" by (simp add: Units_l_inv[symmetric])
also have "\<dots> = inv a \<otimes> \<one> \<otimes> a" by simp
also have "\<dots> = inv a \<otimes> ((a \<otimes> b) \<otimes> inv (a \<otimes> b)) \<otimes> a"
by (simp add: Units_r_inv[OF abunit, symmetric] del: Units_r_inv)
also have "\<dots> = ((inv a \<otimes> a) \<otimes> b) \<otimes> inv (a \<otimes> b) \<otimes> a"
by (simp add: m_assoc del: Units_l_inv)
also have "\<dots> = b \<otimes> inv (a \<otimes> b) \<otimes> a" by simp
also have "\<dots> = b \<otimes> (inv (a \<otimes> b) \<otimes> a)" by (simp add: m_assoc)
finally have ri: "b \<otimes> (inv (a \<otimes> b) \<otimes> a) = \<one> " by simp
from c li ri
show "b \<in> Units G" by (simp add: Units_def, fast)
qed
lemma (in monoid) prod_unit_r:
assumes abunit[simp]: "a \<otimes> b \<in> Units G" and bunit[simp]: "b \<in> Units G"
and carr[simp]: "a \<in> carrier G" "b \<in> carrier G"
shows "a \<in> Units G"
proof -
have c: "b \<otimes> inv (a \<otimes> b) \<in> carrier G" by simp
have "a \<otimes> (b \<otimes> inv (a \<otimes> b)) = (a \<otimes> b) \<otimes> inv (a \<otimes> b)"
by (simp add: m_assoc del: Units_r_inv)
also have "\<dots> = \<one>" by simp
finally have li: "a \<otimes> (b \<otimes> inv (a \<otimes> b)) = \<one>" .
have "\<one> = b \<otimes> inv b" by (simp add: Units_r_inv[symmetric])
also have "\<dots> = b \<otimes> \<one> \<otimes> inv b" by simp
also have "\<dots> = b \<otimes> (inv (a \<otimes> b) \<otimes> (a \<otimes> b)) \<otimes> inv b"
by (simp add: Units_l_inv[OF abunit, symmetric] del: Units_l_inv)
also have "\<dots> = (b \<otimes> inv (a \<otimes> b) \<otimes> a) \<otimes> (b \<otimes> inv b)"
by (simp add: m_assoc del: Units_l_inv)
also have "\<dots> = b \<otimes> inv (a \<otimes> b) \<otimes> a" by simp
finally have ri: "(b \<otimes> inv (a \<otimes> b)) \<otimes> a = \<one> " by simp
from c li ri
show "a \<in> Units G" by (simp add: Units_def, fast)
qed
lemma (in comm_monoid) unit_factor:
assumes abunit: "a \<otimes> b \<in> Units G"
and [simp]: "a \<in> carrier G" "b \<in> carrier G"
shows "a \<in> Units G"
using abunit[simplified Units_def]
proof clarsimp
fix i
assume [simp]: "i \<in> carrier G"
and li: "i \<otimes> (a \<otimes> b) = \<one>"
and ri: "a \<otimes> b \<otimes> i = \<one>"
have carr': "b \<otimes> i \<in> carrier G" by simp
have "(b \<otimes> i) \<otimes> a = (i \<otimes> b) \<otimes> a" by (simp add: m_comm)
also have "\<dots> = i \<otimes> (b \<otimes> a)" by (simp add: m_assoc)
also have "\<dots> = i \<otimes> (a \<otimes> b)" by (simp add: m_comm)
also note li
finally have li': "(b \<otimes> i) \<otimes> a = \<one>" .
have "a \<otimes> (b \<otimes> i) = a \<otimes> b \<otimes> i" by (simp add: m_assoc)
also note ri
finally have ri': "a \<otimes> (b \<otimes> i) = \<one>" .
from carr' li' ri'
show "a \<in> Units G" by (simp add: Units_def, fast)
qed
subsection {* Divisibility and Association *}
subsubsection {* Function definitions *}
definition
factor :: "[_, 'a, 'a] \<Rightarrow> bool" (infix "divides\<index>" 65)
where "a divides\<^bsub>G\<^esub> b \<longleftrightarrow> (\<exists>c\<in>carrier G. b = a \<otimes>\<^bsub>G\<^esub> c)"
definition
associated :: "[_, 'a, 'a] => bool" (infix "\<sim>\<index>" 55)
where "a \<sim>\<^bsub>G\<^esub> b \<longleftrightarrow> a divides\<^bsub>G\<^esub> b \<and> b divides\<^bsub>G\<^esub> a"
abbreviation
"division_rel G == \<lparr>carrier = carrier G, eq = op \<sim>\<^bsub>G\<^esub>, le = op divides\<^bsub>G\<^esub>\<rparr>"
definition
properfactor :: "[_, 'a, 'a] \<Rightarrow> bool"
where "properfactor G a b \<longleftrightarrow> a divides\<^bsub>G\<^esub> b \<and> \<not>(b divides\<^bsub>G\<^esub> a)"
definition
irreducible :: "[_, 'a] \<Rightarrow> bool"
where "irreducible G a \<longleftrightarrow> a \<notin> Units G \<and> (\<forall>b\<in>carrier G. properfactor G b a \<longrightarrow> b \<in> Units G)"
definition
prime :: "[_, 'a] \<Rightarrow> bool" where
"prime G p \<longleftrightarrow>
p \<notin> Units G \<and>
(\<forall>a\<in>carrier G. \<forall>b\<in>carrier G. p divides\<^bsub>G\<^esub> (a \<otimes>\<^bsub>G\<^esub> b) \<longrightarrow> p divides\<^bsub>G\<^esub> a \<or> p divides\<^bsub>G\<^esub> b)"
subsubsection {* Divisibility *}
lemma dividesI:
fixes G (structure)
assumes carr: "c \<in> carrier G"
and p: "b = a \<otimes> c"
shows "a divides b"
unfolding factor_def
using assms by fast
lemma dividesI' [intro]:
fixes G (structure)
assumes p: "b = a \<otimes> c"
and carr: "c \<in> carrier G"
shows "a divides b"
using assms
by (fast intro: dividesI)
lemma dividesD:
fixes G (structure)
assumes "a divides b"
shows "\<exists>c\<in>carrier G. b = a \<otimes> c"
using assms
unfolding factor_def
by fast
lemma dividesE [elim]:
fixes G (structure)
assumes d: "a divides b"
and elim: "\<And>c. \<lbrakk>b = a \<otimes> c; c \<in> carrier G\<rbrakk> \<Longrightarrow> P"
shows "P"
proof -
from dividesD[OF d]
obtain c
where "c\<in>carrier G"
and "b = a \<otimes> c"
by auto
thus "P" by (elim elim)
qed
lemma (in monoid) divides_refl[simp, intro!]:
assumes carr: "a \<in> carrier G"
shows "a divides a"
apply (intro dividesI[of "\<one>"])
apply (simp, simp add: carr)
done
lemma (in monoid) divides_trans [trans]:
assumes dvds: "a divides b" "b divides c"
and acarr: "a \<in> carrier G"
shows "a divides c"
using dvds[THEN dividesD]
by (blast intro: dividesI m_assoc acarr)
lemma (in monoid) divides_mult_lI [intro]:
assumes ab: "a divides b"
and carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "(c \<otimes> a) divides (c \<otimes> b)"
using ab
apply (elim dividesE, simp add: m_assoc[symmetric] carr)
apply (fast intro: dividesI)
done
lemma (in monoid_cancel) divides_mult_l [simp]:
assumes carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "(c \<otimes> a) divides (c \<otimes> b) = a divides b"
apply safe
apply (elim dividesE, intro dividesI, assumption)
apply (rule l_cancel[of c])
apply (simp add: m_assoc carr)+
apply (fast intro: carr)
done
lemma (in comm_monoid) divides_mult_rI [intro]:
assumes ab: "a divides b"
and carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "(a \<otimes> c) divides (b \<otimes> c)"
using carr ab
apply (simp add: m_comm[of a c] m_comm[of b c])
apply (rule divides_mult_lI, assumption+)
done
lemma (in comm_monoid_cancel) divides_mult_r [simp]:
assumes carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "(a \<otimes> c) divides (b \<otimes> c) = a divides b"
using carr
by (simp add: m_comm[of a c] m_comm[of b c])
lemma (in monoid) divides_prod_r:
assumes ab: "a divides b"
and carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "a divides (b \<otimes> c)"
using ab carr
by (fast intro: m_assoc)
lemma (in comm_monoid) divides_prod_l:
assumes carr[intro]: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
and ab: "a divides b"
shows "a divides (c \<otimes> b)"
using ab carr
apply (simp add: m_comm[of c b])
apply (fast intro: divides_prod_r)
done
lemma (in monoid) unit_divides:
assumes uunit: "u \<in> Units G"
and acarr: "a \<in> carrier G"
shows "u divides a"
proof (intro dividesI[of "(inv u) \<otimes> a"], fast intro: uunit acarr)
from uunit acarr
have xcarr: "inv u \<otimes> a \<in> carrier G" by fast
from uunit acarr
have "u \<otimes> (inv u \<otimes> a) = (u \<otimes> inv u) \<otimes> a" by (fast intro: m_assoc[symmetric])
also have "\<dots> = \<one> \<otimes> a" by (simp add: Units_r_inv[OF uunit])
also from acarr
have "\<dots> = a" by simp
finally
show "a = u \<otimes> (inv u \<otimes> a)" ..
qed
lemma (in comm_monoid) divides_unit:
assumes udvd: "a divides u"
and carr: "a \<in> carrier G" "u \<in> Units G"
shows "a \<in> Units G"
using udvd carr
by (blast intro: unit_factor)
lemma (in comm_monoid) Unit_eq_dividesone:
assumes ucarr: "u \<in> carrier G"
shows "u \<in> Units G = u divides \<one>"
using ucarr
by (fast dest: divides_unit intro: unit_divides)
subsubsection {* Association *}
lemma associatedI:
fixes G (structure)
assumes "a divides b" "b divides a"
shows "a \<sim> b"
using assms
by (simp add: associated_def)
lemma (in monoid) associatedI2:
assumes uunit[simp]: "u \<in> Units G"
and a: "a = b \<otimes> u"
and bcarr[simp]: "b \<in> carrier G"
shows "a \<sim> b"
using uunit bcarr
unfolding a
apply (intro associatedI)
apply (rule dividesI[of "inv u"], simp)
apply (simp add: m_assoc Units_closed)
apply fast
done
lemma (in monoid) associatedI2':
assumes a: "a = b \<otimes> u"
and uunit: "u \<in> Units G"
and bcarr: "b \<in> carrier G"
shows "a \<sim> b"
using assms by (intro associatedI2)
lemma associatedD:
fixes G (structure)
assumes "a \<sim> b"
shows "a divides b"
using assms by (simp add: associated_def)
lemma (in monoid_cancel) associatedD2:
assumes assoc: "a \<sim> b"
and carr: "a \<in> carrier G" "b \<in> carrier G"
shows "\<exists>u\<in>Units G. a = b \<otimes> u"
using assoc
unfolding associated_def
proof clarify
assume "b divides a"
hence "\<exists>u\<in>carrier G. a = b \<otimes> u" by (rule dividesD)
from this obtain u
where ucarr: "u \<in> carrier G" and a: "a = b \<otimes> u"
by auto
assume "a divides b"
hence "\<exists>u'\<in>carrier G. b = a \<otimes> u'" by (rule dividesD)
from this obtain u'
where u'carr: "u' \<in> carrier G" and b: "b = a \<otimes> u'"
by auto
note carr = carr ucarr u'carr
from carr
have "a \<otimes> \<one> = a" by simp
also have "\<dots> = b \<otimes> u" by (simp add: a)
also have "\<dots> = a \<otimes> u' \<otimes> u" by (simp add: b)
also from carr
have "\<dots> = a \<otimes> (u' \<otimes> u)" by (simp add: m_assoc)
finally
have "a \<otimes> \<one> = a \<otimes> (u' \<otimes> u)" .
with carr
have u1: "\<one> = u' \<otimes> u" by (fast dest: l_cancel)
from carr
have "b \<otimes> \<one> = b" by simp
also have "\<dots> = a \<otimes> u'" by (simp add: b)
also have "\<dots> = b \<otimes> u \<otimes> u'" by (simp add: a)
also from carr
have "\<dots> = b \<otimes> (u \<otimes> u')" by (simp add: m_assoc)
finally
have "b \<otimes> \<one> = b \<otimes> (u \<otimes> u')" .
with carr
have u2: "\<one> = u \<otimes> u'" by (fast dest: l_cancel)
from u'carr u1[symmetric] u2[symmetric]
have "\<exists>u'\<in>carrier G. u' \<otimes> u = \<one> \<and> u \<otimes> u' = \<one>" by fast
hence "u \<in> Units G" by (simp add: Units_def ucarr)
from ucarr this a
show "\<exists>u\<in>Units G. a = b \<otimes> u" by fast
qed
lemma associatedE:
fixes G (structure)
assumes assoc: "a \<sim> b"
and e: "\<lbrakk>a divides b; b divides a\<rbrakk> \<Longrightarrow> P"
shows "P"
proof -
from assoc
have "a divides b" "b divides a"
by (simp add: associated_def)+
thus "P" by (elim e)
qed
lemma (in monoid_cancel) associatedE2:
assumes assoc: "a \<sim> b"
and e: "\<And>u. \<lbrakk>a = b \<otimes> u; u \<in> Units G\<rbrakk> \<Longrightarrow> P"
and carr: "a \<in> carrier G" "b \<in> carrier G"
shows "P"
proof -
from assoc and carr
have "\<exists>u\<in>Units G. a = b \<otimes> u" by (rule associatedD2)
from this obtain u
where "u \<in> Units G" "a = b \<otimes> u"
by auto
thus "P" by (elim e)
qed
lemma (in monoid) associated_refl [simp, intro!]:
assumes "a \<in> carrier G"
shows "a \<sim> a"
using assms
by (fast intro: associatedI)
lemma (in monoid) associated_sym [sym]:
assumes "a \<sim> b"
and "a \<in> carrier G" "b \<in> carrier G"
shows "b \<sim> a"
using assms
by (iprover intro: associatedI elim: associatedE)
lemma (in monoid) associated_trans [trans]:
assumes "a \<sim> b" "b \<sim> c"
and "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "a \<sim> c"
using assms
by (iprover intro: associatedI divides_trans elim: associatedE)
lemma (in monoid) division_equiv [intro, simp]:
"equivalence (division_rel G)"
apply unfold_locales
apply simp_all
apply (metis associated_def)
apply (iprover intro: associated_trans)
done
subsubsection {* Division and associativity *}
lemma divides_antisym:
fixes G (structure)
assumes "a divides b" "b divides a"
and "a \<in> carrier G" "b \<in> carrier G"
shows "a \<sim> b"
using assms
by (fast intro: associatedI)
lemma (in monoid) divides_cong_l [trans]:
assumes xx': "x \<sim> x'"
and xdvdy: "x' divides y"
and carr [simp]: "x \<in> carrier G" "x' \<in> carrier G" "y \<in> carrier G"
shows "x divides y"
proof -
from xx'
have "x divides x'" by (simp add: associatedD)
also note xdvdy
finally
show "x divides y" by simp
qed
lemma (in monoid) divides_cong_r [trans]:
assumes xdvdy: "x divides y"
and yy': "y \<sim> y'"
and carr[simp]: "x \<in> carrier G" "y \<in> carrier G" "y' \<in> carrier G"
shows "x divides y'"
proof -
note xdvdy
also from yy'
have "y divides y'" by (simp add: associatedD)
finally
show "x divides y'" by simp
qed
lemma (in monoid) division_weak_partial_order [simp, intro!]:
"weak_partial_order (division_rel G)"
apply unfold_locales
apply simp_all
apply (simp add: associated_sym)
apply (blast intro: associated_trans)
apply (simp add: divides_antisym)
apply (blast intro: divides_trans)
apply (blast intro: divides_cong_l divides_cong_r associated_sym)
done
subsubsection {* Multiplication and associativity *}
lemma (in monoid_cancel) mult_cong_r:
assumes "b \<sim> b'"
and carr: "a \<in> carrier G" "b \<in> carrier G" "b' \<in> carrier G"
shows "a \<otimes> b \<sim> a \<otimes> b'"
using assms
apply (elim associatedE2, intro associatedI2)
apply (auto intro: m_assoc[symmetric])
done
lemma (in comm_monoid_cancel) mult_cong_l:
assumes "a \<sim> a'"
and carr: "a \<in> carrier G" "a' \<in> carrier G" "b \<in> carrier G"
shows "a \<otimes> b \<sim> a' \<otimes> b"
using assms
apply (elim associatedE2, intro associatedI2)
apply assumption
apply (simp add: m_assoc Units_closed)
apply (simp add: m_comm Units_closed)
apply simp+
done
lemma (in monoid_cancel) assoc_l_cancel:
assumes carr: "a \<in> carrier G" "b \<in> carrier G" "b' \<in> carrier G"
and "a \<otimes> b \<sim> a \<otimes> b'"
shows "b \<sim> b'"
using assms
apply (elim associatedE2, intro associatedI2)
apply assumption
apply (rule l_cancel[of a])
apply (simp add: m_assoc Units_closed)
apply fast+
done
lemma (in comm_monoid_cancel) assoc_r_cancel:
assumes "a \<otimes> b \<sim> a' \<otimes> b"
and carr: "a \<in> carrier G" "a' \<in> carrier G" "b \<in> carrier G"
shows "a \<sim> a'"
using assms
apply (elim associatedE2, intro associatedI2)
apply assumption
apply (rule r_cancel[of a b])
apply (metis Units_closed assms(3) assms(4) m_ac)
apply fast+
done
subsubsection {* Units *}
lemma (in monoid_cancel) assoc_unit_l [trans]:
assumes asc: "a \<sim> b" and bunit: "b \<in> Units G"
and carr: "a \<in> carrier G"
shows "a \<in> Units G"
using assms
by (fast elim: associatedE2)
lemma (in monoid_cancel) assoc_unit_r [trans]:
assumes aunit: "a \<in> Units G" and asc: "a \<sim> b"
and bcarr: "b \<in> carrier G"
shows "b \<in> Units G"
using aunit bcarr associated_sym[OF asc]
by (blast intro: assoc_unit_l)
lemma (in comm_monoid) Units_cong:
assumes aunit: "a \<in> Units G" and asc: "a \<sim> b"
and bcarr: "b \<in> carrier G"
shows "b \<in> Units G"
using assms
by (blast intro: divides_unit elim: associatedE)
lemma (in monoid) Units_assoc:
assumes units: "a \<in> Units G" "b \<in> Units G"
shows "a \<sim> b"
using units
by (fast intro: associatedI unit_divides)
lemma (in monoid) Units_are_ones:
"Units G {.=}\<^bsub>(division_rel G)\<^esub> {\<one>}"
apply (simp add: set_eq_def elem_def, rule, simp_all)
proof clarsimp
fix a
assume aunit: "a \<in> Units G"
show "a \<sim> \<one>"
apply (rule associatedI)
apply (fast intro: dividesI[of "inv a"] aunit Units_r_inv[symmetric])
apply (fast intro: dividesI[of "a"] l_one[symmetric] Units_closed[OF aunit])
done
next
have "\<one> \<in> Units G" by simp
moreover have "\<one> \<sim> \<one>" by simp
ultimately show "\<exists>a \<in> Units G. \<one> \<sim> a" by fast
qed
lemma (in comm_monoid) Units_Lower:
"Units G = Lower (division_rel G) (carrier G)"
apply (simp add: Units_def Lower_def)
apply (rule, rule)
apply clarsimp
apply (rule unit_divides)
apply (unfold Units_def, fast)
apply assumption
apply clarsimp
apply (metis Unit_eq_dividesone Units_r_inv_ex m_ac(2) one_closed)
done
subsubsection {* Proper factors *}
lemma properfactorI:
fixes G (structure)
assumes "a divides b"
and "\<not>(b divides a)"
shows "properfactor G a b"
using assms
unfolding properfactor_def
by simp
lemma properfactorI2:
fixes G (structure)
assumes advdb: "a divides b"
and neq: "\<not>(a \<sim> b)"
shows "properfactor G a b"
apply (rule properfactorI, rule advdb)
proof (rule ccontr, simp)
assume "b divides a"
with advdb have "a \<sim> b" by (rule associatedI)
with neq show "False" by fast
qed
lemma (in comm_monoid_cancel) properfactorI3:
assumes p: "p = a \<otimes> b"
and nunit: "b \<notin> Units G"
and carr: "a \<in> carrier G" "b \<in> carrier G" "p \<in> carrier G"
shows "properfactor G a p"
unfolding p
using carr
apply (intro properfactorI, fast)
proof (clarsimp, elim dividesE)
fix c
assume ccarr: "c \<in> carrier G"
note [simp] = carr ccarr
have "a \<otimes> \<one> = a" by simp
also assume "a = a \<otimes> b \<otimes> c"
also have "\<dots> = a \<otimes> (b \<otimes> c)" by (simp add: m_assoc)
finally have "a \<otimes> \<one> = a \<otimes> (b \<otimes> c)" .
hence rinv: "\<one> = b \<otimes> c" by (intro l_cancel[of "a" "\<one>" "b \<otimes> c"], simp+)
also have "\<dots> = c \<otimes> b" by (simp add: m_comm)
finally have linv: "\<one> = c \<otimes> b" .
from ccarr linv[symmetric] rinv[symmetric]
have "b \<in> Units G" unfolding Units_def by fastforce
with nunit
show "False" ..
qed
lemma properfactorE:
fixes G (structure)
assumes pf: "properfactor G a b"
and r: "\<lbrakk>a divides b; \<not>(b divides a)\<rbrakk> \<Longrightarrow> P"
shows "P"
using pf
unfolding properfactor_def
by (fast intro: r)
lemma properfactorE2:
fixes G (structure)
assumes pf: "properfactor G a b"
and elim: "\<lbrakk>a divides b; \<not>(a \<sim> b)\<rbrakk> \<Longrightarrow> P"
shows "P"
using pf
unfolding properfactor_def
by (fast elim: elim associatedE)
lemma (in monoid) properfactor_unitE:
assumes uunit: "u \<in> Units G"
and pf: "properfactor G a u"
and acarr: "a \<in> carrier G"
shows "P"
using pf unit_divides[OF uunit acarr]
by (fast elim: properfactorE)
lemma (in monoid) properfactor_divides:
assumes pf: "properfactor G a b"
shows "a divides b"
using pf
by (elim properfactorE)
lemma (in monoid) properfactor_trans1 [trans]:
assumes dvds: "a divides b" "properfactor G b c"
and carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "properfactor G a c"
using dvds carr
apply (elim properfactorE, intro properfactorI)
apply (iprover intro: divides_trans)+
done
lemma (in monoid) properfactor_trans2 [trans]:
assumes dvds: "properfactor G a b" "b divides c"
and carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "properfactor G a c"
using dvds carr
apply (elim properfactorE, intro properfactorI)
apply (iprover intro: divides_trans)+
done
lemma properfactor_lless:
fixes G (structure)
shows "properfactor G = lless (division_rel G)"
apply (rule ext) apply (rule ext) apply rule
apply (fastforce elim: properfactorE2 intro: weak_llessI)
apply (fastforce elim: weak_llessE intro: properfactorI2)
done
lemma (in monoid) properfactor_cong_l [trans]:
assumes x'x: "x' \<sim> x"
and pf: "properfactor G x y"
and carr: "x \<in> carrier G" "x' \<in> carrier G" "y \<in> carrier G"
shows "properfactor G x' y"
using pf
unfolding properfactor_lless
proof -
interpret weak_partial_order "division_rel G" ..
from x'x
have "x' .=\<^bsub>division_rel G\<^esub> x" by simp
also assume "x \<sqsubset>\<^bsub>division_rel G\<^esub> y"
finally
show "x' \<sqsubset>\<^bsub>division_rel G\<^esub> y" by (simp add: carr)
qed
lemma (in monoid) properfactor_cong_r [trans]:
assumes pf: "properfactor G x y"
and yy': "y \<sim> y'"
and carr: "x \<in> carrier G" "y \<in> carrier G" "y' \<in> carrier G"
shows "properfactor G x y'"
using pf
unfolding properfactor_lless
proof -
interpret weak_partial_order "division_rel G" ..
assume "x \<sqsubset>\<^bsub>division_rel G\<^esub> y"
also from yy'
have "y .=\<^bsub>division_rel G\<^esub> y'" by simp
finally
show "x \<sqsubset>\<^bsub>division_rel G\<^esub> y'" by (simp add: carr)
qed
lemma (in monoid_cancel) properfactor_mult_lI [intro]:
assumes ab: "properfactor G a b"
and carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "properfactor G (c \<otimes> a) (c \<otimes> b)"
using ab carr
by (fastforce elim: properfactorE intro: properfactorI)
lemma (in monoid_cancel) properfactor_mult_l [simp]:
assumes carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "properfactor G (c \<otimes> a) (c \<otimes> b) = properfactor G a b"
using carr
by (fastforce elim: properfactorE intro: properfactorI)
lemma (in comm_monoid_cancel) properfactor_mult_rI [intro]:
assumes ab: "properfactor G a b"
and carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "properfactor G (a \<otimes> c) (b \<otimes> c)"
using ab carr
by (fastforce elim: properfactorE intro: properfactorI)
lemma (in comm_monoid_cancel) properfactor_mult_r [simp]:
assumes carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "properfactor G (a \<otimes> c) (b \<otimes> c) = properfactor G a b"
using carr
by (fastforce elim: properfactorE intro: properfactorI)
lemma (in monoid) properfactor_prod_r:
assumes ab: "properfactor G a b"
and carr[simp]: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "properfactor G a (b \<otimes> c)"
by (intro properfactor_trans2[OF ab] divides_prod_r, simp+)
lemma (in comm_monoid) properfactor_prod_l:
assumes ab: "properfactor G a b"
and carr[simp]: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "properfactor G a (c \<otimes> b)"
by (intro properfactor_trans2[OF ab] divides_prod_l, simp+)
subsection {* Irreducible Elements and Primes *}
subsubsection {* Irreducible elements *}
lemma irreducibleI:
fixes G (structure)
assumes "a \<notin> Units G"
and "\<And>b. \<lbrakk>b \<in> carrier G; properfactor G b a\<rbrakk> \<Longrightarrow> b \<in> Units G"
shows "irreducible G a"
using assms
unfolding irreducible_def
by blast
lemma irreducibleE:
fixes G (structure)
assumes irr: "irreducible G a"
and elim: "\<lbrakk>a \<notin> Units G; \<forall>b. b \<in> carrier G \<and> properfactor G b a \<longrightarrow> b \<in> Units G\<rbrakk> \<Longrightarrow> P"
shows "P"
using assms
unfolding irreducible_def
by blast
lemma irreducibleD:
fixes G (structure)
assumes irr: "irreducible G a"
and pf: "properfactor G b a"
and bcarr: "b \<in> carrier G"
shows "b \<in> Units G"
using assms
by (fast elim: irreducibleE)
lemma (in monoid_cancel) irreducible_cong [trans]:
assumes irred: "irreducible G a"
and aa': "a \<sim> a'"
and carr[simp]: "a \<in> carrier G" "a' \<in> carrier G"
shows "irreducible G a'"
using assms
apply (elim irreducibleE, intro irreducibleI)
apply simp_all
apply (metis assms(2) assms(3) assoc_unit_l)
apply (metis assms(2) assms(3) assms(4) associated_sym properfactor_cong_r)
done
lemma (in monoid) irreducible_prod_rI:
assumes airr: "irreducible G a"
and bunit: "b \<in> Units G"
and carr[simp]: "a \<in> carrier G" "b \<in> carrier G"
shows "irreducible G (a \<otimes> b)"
using airr carr bunit
apply (elim irreducibleE, intro irreducibleI, clarify)
apply (subgoal_tac "a \<in> Units G", simp)
apply (intro prod_unit_r[of a b] carr bunit, assumption)
apply (metis assms associatedI2 m_closed properfactor_cong_r)
done
lemma (in comm_monoid) irreducible_prod_lI:
assumes birr: "irreducible G b"
and aunit: "a \<in> Units G"
and carr [simp]: "a \<in> carrier G" "b \<in> carrier G"
shows "irreducible G (a \<otimes> b)"
apply (subst m_comm, simp+)
apply (intro irreducible_prod_rI assms)
done
lemma (in comm_monoid_cancel) irreducible_prodE [elim]:
assumes irr: "irreducible G (a \<otimes> b)"
and carr[simp]: "a \<in> carrier G" "b \<in> carrier G"
and e1: "\<lbrakk>irreducible G a; b \<in> Units G\<rbrakk> \<Longrightarrow> P"
and e2: "\<lbrakk>a \<in> Units G; irreducible G b\<rbrakk> \<Longrightarrow> P"
shows "P"
using irr
proof (elim irreducibleE)
assume abnunit: "a \<otimes> b \<notin> Units G"
and isunit[rule_format]: "\<forall>ba. ba \<in> carrier G \<and> properfactor G ba (a \<otimes> b) \<longrightarrow> ba \<in> Units G"
show "P"
proof (cases "a \<in> Units G")
assume aunit: "a \<in> Units G"
have "irreducible G b"
apply (rule irreducibleI)
proof (rule ccontr, simp)
assume "b \<in> Units G"
with aunit have "(a \<otimes> b) \<in> Units G" by fast
with abnunit show "False" ..
next
fix c
assume ccarr: "c \<in> carrier G"
and "properfactor G c b"
hence "properfactor G c (a \<otimes> b)" by (simp add: properfactor_prod_l[of c b a])
from ccarr this show "c \<in> Units G" by (fast intro: isunit)
qed
from aunit this show "P" by (rule e2)
next
assume anunit: "a \<notin> Units G"
with carr have "properfactor G b (b \<otimes> a)" by (fast intro: properfactorI3)
hence bf: "properfactor G b (a \<otimes> b)" by (subst m_comm[of a b], simp+)
hence bunit: "b \<in> Units G" by (intro isunit, simp)
have "irreducible G a"
apply (rule irreducibleI)
proof (rule ccontr, simp)
assume "a \<in> Units G"
with bunit have "(a \<otimes> b) \<in> Units G" by fast
with abnunit show "False" ..
next
fix c
assume ccarr: "c \<in> carrier G"
and "properfactor G c a"
hence "properfactor G c (a \<otimes> b)" by (simp add: properfactor_prod_r[of c a b])
from ccarr this show "c \<in> Units G" by (fast intro: isunit)
qed
from this bunit show "P" by (rule e1)
qed
qed
subsubsection {* Prime elements *}
lemma primeI:
fixes G (structure)
assumes "p \<notin> Units G"
and "\<And>a b. \<lbrakk>a \<in> carrier G; b \<in> carrier G; p divides (a \<otimes> b)\<rbrakk> \<Longrightarrow> p divides a \<or> p divides b"
shows "prime G p"
using assms
unfolding prime_def
by blast
lemma primeE:
fixes G (structure)
assumes pprime: "prime G p"
and e: "\<lbrakk>p \<notin> Units G; \<forall>a\<in>carrier G. \<forall>b\<in>carrier G.
p divides a \<otimes> b \<longrightarrow> p divides a \<or> p divides b\<rbrakk> \<Longrightarrow> P"
shows "P"
using pprime
unfolding prime_def
by (blast dest: e)
lemma (in comm_monoid_cancel) prime_divides:
assumes carr: "a \<in> carrier G" "b \<in> carrier G"
and pprime: "prime G p"
and pdvd: "p divides a \<otimes> b"
shows "p divides a \<or> p divides b"
using assms
by (blast elim: primeE)
lemma (in monoid_cancel) prime_cong [trans]:
assumes pprime: "prime G p"
and pp': "p \<sim> p'"
and carr[simp]: "p \<in> carrier G" "p' \<in> carrier G"
shows "prime G p'"
using pprime
apply (elim primeE, intro primeI)
apply (metis assms(2) assms(3) assoc_unit_l)
apply (metis assms(2) assms(3) assms(4) associated_sym divides_cong_l m_closed)
done
subsection {* Factorization and Factorial Monoids *}
subsubsection {* Function definitions *}
definition
factors :: "[_, 'a list, 'a] \<Rightarrow> bool"
where "factors G fs a \<longleftrightarrow> (\<forall>x \<in> (set fs). irreducible G x) \<and> foldr (op \<otimes>\<^bsub>G\<^esub>) fs \<one>\<^bsub>G\<^esub> = a"
definition
wfactors ::"[_, 'a list, 'a] \<Rightarrow> bool"
where "wfactors G fs a \<longleftrightarrow> (\<forall>x \<in> (set fs). irreducible G x) \<and> foldr (op \<otimes>\<^bsub>G\<^esub>) fs \<one>\<^bsub>G\<^esub> \<sim>\<^bsub>G\<^esub> a"
abbreviation
list_assoc :: "('a,_) monoid_scheme \<Rightarrow> 'a list \<Rightarrow> 'a list \<Rightarrow> bool" (infix "[\<sim>]\<index>" 44)
where "list_assoc G == list_all2 (op \<sim>\<^bsub>G\<^esub>)"
definition
essentially_equal :: "[_, 'a list, 'a list] \<Rightarrow> bool"
where "essentially_equal G fs1 fs2 \<longleftrightarrow> (\<exists>fs1'. fs1 <~~> fs1' \<and> fs1' [\<sim>]\<^bsub>G\<^esub> fs2)"
locale factorial_monoid = comm_monoid_cancel +
assumes factors_exist:
"\<lbrakk>a \<in> carrier G; a \<notin> Units G\<rbrakk> \<Longrightarrow> \<exists>fs. set fs \<subseteq> carrier G \<and> factors G fs a"
and factors_unique:
"\<lbrakk>factors G fs a; factors G fs' a; a \<in> carrier G; a \<notin> Units G;
set fs \<subseteq> carrier G; set fs' \<subseteq> carrier G\<rbrakk> \<Longrightarrow> essentially_equal G fs fs'"
subsubsection {* Comparing lists of elements *}
text {* Association on lists *}
lemma (in monoid) listassoc_refl [simp, intro]:
assumes "set as \<subseteq> carrier G"
shows "as [\<sim>] as"
using assms
by (induct as) simp+
lemma (in monoid) listassoc_sym [sym]:
assumes "as [\<sim>] bs"
and "set as \<subseteq> carrier G" and "set bs \<subseteq> carrier G"
shows "bs [\<sim>] as"
using assms
proof (induct as arbitrary: bs, simp)
case Cons
thus ?case
apply (induct bs, simp)
apply clarsimp
apply (iprover intro: associated_sym)
done
qed
lemma (in monoid) listassoc_trans [trans]:
assumes "as [\<sim>] bs" and "bs [\<sim>] cs"
and "set as \<subseteq> carrier G" and "set bs \<subseteq> carrier G" and "set cs \<subseteq> carrier G"
shows "as [\<sim>] cs"
using assms
apply (simp add: list_all2_conv_all_nth set_conv_nth, safe)
apply (rule associated_trans)
apply (subgoal_tac "as ! i \<sim> bs ! i", assumption)
apply (simp, simp)
apply blast+
done
lemma (in monoid_cancel) irrlist_listassoc_cong:
assumes "\<forall>a\<in>set as. irreducible G a"
and "as [\<sim>] bs"
and "set as \<subseteq> carrier G" and "set bs \<subseteq> carrier G"
shows "\<forall>a\<in>set bs. irreducible G a"
using assms
apply (clarsimp simp add: list_all2_conv_all_nth set_conv_nth)
apply (blast intro: irreducible_cong)
done
text {* Permutations *}
lemma perm_map [intro]:
assumes p: "a <~~> b"
shows "map f a <~~> map f b"
using p
by induct auto
lemma perm_map_switch:
assumes m: "map f a = map f b" and p: "b <~~> c"
shows "\<exists>d. a <~~> d \<and> map f d = map f c"
using p m
by (induct arbitrary: a) (simp, force, force, blast)
lemma (in monoid) perm_assoc_switch:
assumes a:"as [\<sim>] bs" and p: "bs <~~> cs"
shows "\<exists>bs'. as <~~> bs' \<and> bs' [\<sim>] cs"
using p a
apply (induct bs cs arbitrary: as, simp)
apply (clarsimp simp add: list_all2_Cons2, blast)
apply (clarsimp simp add: list_all2_Cons2)
apply blast
apply blast
done
lemma (in monoid) perm_assoc_switch_r:
assumes p: "as <~~> bs" and a:"bs [\<sim>] cs"
shows "\<exists>bs'. as [\<sim>] bs' \<and> bs' <~~> cs"
using p a
apply (induct as bs arbitrary: cs, simp)
apply (clarsimp simp add: list_all2_Cons1, blast)
apply (clarsimp simp add: list_all2_Cons1)
apply blast
apply blast
done
declare perm_sym [sym]
lemma perm_setP:
assumes perm: "as <~~> bs"
and as: "P (set as)"
shows "P (set bs)"
proof -
from perm
have "multiset_of as = multiset_of bs"
by (simp add: multiset_of_eq_perm)
hence "set as = set bs" by (rule multiset_of_eq_setD)
with as
show "P (set bs)" by simp
qed
lemmas (in monoid) perm_closed =
perm_setP[of _ _ "\<lambda>as. as \<subseteq> carrier G"]
lemmas (in monoid) irrlist_perm_cong =
perm_setP[of _ _ "\<lambda>as. \<forall>a\<in>as. irreducible G a"]
text {* Essentially equal factorizations *}
lemma (in monoid) essentially_equalI:
assumes ex: "fs1 <~~> fs1'" "fs1' [\<sim>] fs2"
shows "essentially_equal G fs1 fs2"
using ex
unfolding essentially_equal_def
by fast
lemma (in monoid) essentially_equalE:
assumes ee: "essentially_equal G fs1 fs2"
and e: "\<And>fs1'. \<lbrakk>fs1 <~~> fs1'; fs1' [\<sim>] fs2\<rbrakk> \<Longrightarrow> P"
shows "P"
using ee
unfolding essentially_equal_def
by (fast intro: e)
lemma (in monoid) ee_refl [simp,intro]:
assumes carr: "set as \<subseteq> carrier G"
shows "essentially_equal G as as"
using carr
by (fast intro: essentially_equalI)
lemma (in monoid) ee_sym [sym]:
assumes ee: "essentially_equal G as bs"
and carr: "set as \<subseteq> carrier G" "set bs \<subseteq> carrier G"
shows "essentially_equal G bs as"
using ee
proof (elim essentially_equalE)
fix fs
assume "as <~~> fs" "fs [\<sim>] bs"
hence "\<exists>fs'. as [\<sim>] fs' \<and> fs' <~~> bs" by (rule perm_assoc_switch_r)
from this obtain fs'
where a: "as [\<sim>] fs'" and p: "fs' <~~> bs"
by auto
from p have "bs <~~> fs'" by (rule perm_sym)
with a[symmetric] carr
show ?thesis
by (iprover intro: essentially_equalI perm_closed)
qed
lemma (in monoid) ee_trans [trans]:
assumes ab: "essentially_equal G as bs" and bc: "essentially_equal G bs cs"
and ascarr: "set as \<subseteq> carrier G"
and bscarr: "set bs \<subseteq> carrier G"
and cscarr: "set cs \<subseteq> carrier G"
shows "essentially_equal G as cs"
using ab bc
proof (elim essentially_equalE)
fix abs bcs
assume "abs [\<sim>] bs" and pb: "bs <~~> bcs"
hence "\<exists>bs'. abs <~~> bs' \<and> bs' [\<sim>] bcs" by (rule perm_assoc_switch)
from this obtain bs'
where p: "abs <~~> bs'" and a: "bs' [\<sim>] bcs"
by auto
assume "as <~~> abs"
with p
have pp: "as <~~> bs'" by fast
from pp ascarr have c1: "set bs' \<subseteq> carrier G" by (rule perm_closed)
from pb bscarr have c2: "set bcs \<subseteq> carrier G" by (rule perm_closed)
note a
also assume "bcs [\<sim>] cs"
finally (listassoc_trans) have"bs' [\<sim>] cs" by (simp add: c1 c2 cscarr)
with pp
show ?thesis
by (rule essentially_equalI)
qed
subsubsection {* Properties of lists of elements *}
text {* Multiplication of factors in a list *}
lemma (in monoid) multlist_closed [simp, intro]:
assumes ascarr: "set fs \<subseteq> carrier G"
shows "foldr (op \<otimes>) fs \<one> \<in> carrier G"
by (insert ascarr, induct fs, simp+)
lemma (in comm_monoid) multlist_dividesI (*[intro]*):
assumes "f \<in> set fs" and "f \<in> carrier G" and "set fs \<subseteq> carrier G"
shows "f divides (foldr (op \<otimes>) fs \<one>)"
using assms
apply (induct fs)
apply simp
apply (case_tac "f = a", simp)
apply (fast intro: dividesI)
apply clarsimp
apply (metis assms(2) divides_prod_l multlist_closed)
done
lemma (in comm_monoid_cancel) multlist_listassoc_cong:
assumes "fs [\<sim>] fs'"
and "set fs \<subseteq> carrier G" and "set fs' \<subseteq> carrier G"
shows "foldr (op \<otimes>) fs \<one> \<sim> foldr (op \<otimes>) fs' \<one>"
using assms
proof (induct fs arbitrary: fs', simp)
case (Cons a as fs')
thus ?case
apply (induct fs', simp)
proof clarsimp
fix b bs
assume "a \<sim> b"
and acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G"
and ascarr: "set as \<subseteq> carrier G"
hence p: "a \<otimes> foldr op \<otimes> as \<one> \<sim> b \<otimes> foldr op \<otimes> as \<one>"
by (fast intro: mult_cong_l)
also
assume "as [\<sim>] bs"
and bscarr: "set bs \<subseteq> carrier G"
and "\<And>fs'. \<lbrakk>as [\<sim>] fs'; set fs' \<subseteq> carrier G\<rbrakk> \<Longrightarrow> foldr op \<otimes> as \<one> \<sim> foldr op \<otimes> fs' \<one>"
hence "foldr op \<otimes> as \<one> \<sim> foldr op \<otimes> bs \<one>" by simp
with ascarr bscarr bcarr
have "b \<otimes> foldr op \<otimes> as \<one> \<sim> b \<otimes> foldr op \<otimes> bs \<one>"
by (fast intro: mult_cong_r)
finally
show "a \<otimes> foldr op \<otimes> as \<one> \<sim> b \<otimes> foldr op \<otimes> bs \<one>"
by (simp add: ascarr bscarr acarr bcarr)
qed
qed
lemma (in comm_monoid) multlist_perm_cong:
assumes prm: "as <~~> bs"
and ascarr: "set as \<subseteq> carrier G"
shows "foldr (op \<otimes>) as \<one> = foldr (op \<otimes>) bs \<one>"
using prm ascarr
apply (induct, simp, clarsimp simp add: m_ac, clarsimp)
proof clarsimp
fix xs ys zs
assume "xs <~~> ys" "set xs \<subseteq> carrier G"
hence "set ys \<subseteq> carrier G" by (rule perm_closed)
moreover assume "set ys \<subseteq> carrier G \<Longrightarrow> foldr op \<otimes> ys \<one> = foldr op \<otimes> zs \<one>"
ultimately show "foldr op \<otimes> ys \<one> = foldr op \<otimes> zs \<one>" by simp
qed
lemma (in comm_monoid_cancel) multlist_ee_cong:
assumes "essentially_equal G fs fs'"
and "set fs \<subseteq> carrier G" and "set fs' \<subseteq> carrier G"
shows "foldr (op \<otimes>) fs \<one> \<sim> foldr (op \<otimes>) fs' \<one>"
using assms
apply (elim essentially_equalE)
apply (simp add: multlist_perm_cong multlist_listassoc_cong perm_closed)
done
subsubsection {* Factorization in irreducible elements *}
lemma wfactorsI:
fixes G (structure)
assumes "\<forall>f\<in>set fs. irreducible G f"
and "foldr (op \<otimes>) fs \<one> \<sim> a"
shows "wfactors G fs a"
using assms
unfolding wfactors_def
by simp
lemma wfactorsE:
fixes G (structure)
assumes wf: "wfactors G fs a"
and e: "\<lbrakk>\<forall>f\<in>set fs. irreducible G f; foldr (op \<otimes>) fs \<one> \<sim> a\<rbrakk> \<Longrightarrow> P"
shows "P"
using wf
unfolding wfactors_def
by (fast dest: e)
lemma (in monoid) factorsI:
assumes "\<forall>f\<in>set fs. irreducible G f"
and "foldr (op \<otimes>) fs \<one> = a"
shows "factors G fs a"
using assms
unfolding factors_def
by simp
lemma factorsE:
fixes G (structure)
assumes f: "factors G fs a"
and e: "\<lbrakk>\<forall>f\<in>set fs. irreducible G f; foldr (op \<otimes>) fs \<one> = a\<rbrakk> \<Longrightarrow> P"
shows "P"
using f
unfolding factors_def
by (simp add: e)
lemma (in monoid) factors_wfactors:
assumes "factors G as a" and "set as \<subseteq> carrier G"
shows "wfactors G as a"
using assms
by (blast elim: factorsE intro: wfactorsI)
lemma (in monoid) wfactors_factors:
assumes "wfactors G as a" and "set as \<subseteq> carrier G"
shows "\<exists>a'. factors G as a' \<and> a' \<sim> a"
using assms
by (blast elim: wfactorsE intro: factorsI)
lemma (in monoid) factors_closed [dest]:
assumes "factors G fs a" and "set fs \<subseteq> carrier G"
shows "a \<in> carrier G"
using assms
by (elim factorsE, clarsimp)
lemma (in monoid) nunit_factors:
assumes anunit: "a \<notin> Units G"
and fs: "factors G as a"
shows "length as > 0"
proof -
from anunit Units_one_closed have "a \<noteq> \<one>" by auto
with fs show ?thesis by (auto elim: factorsE)
qed
lemma (in monoid) unit_wfactors [simp]:
assumes aunit: "a \<in> Units G"
shows "wfactors G [] a"
using aunit
by (intro wfactorsI) (simp, simp add: Units_assoc)
lemma (in comm_monoid_cancel) unit_wfactors_empty:
assumes aunit: "a \<in> Units G"
and wf: "wfactors G fs a"
and carr[simp]: "set fs \<subseteq> carrier G"
shows "fs = []"
proof (rule ccontr, cases fs, simp)
fix f fs'
assume fs: "fs = f # fs'"
from carr
have fcarr[simp]: "f \<in> carrier G"
and carr'[simp]: "set fs' \<subseteq> carrier G"
by (simp add: fs)+
from fs wf
have "irreducible G f" by (simp add: wfactors_def)
hence fnunit: "f \<notin> Units G" by (fast elim: irreducibleE)
from fs wf
have a: "f \<otimes> foldr (op \<otimes>) fs' \<one> \<sim> a" by (simp add: wfactors_def)
note aunit
also from fs wf
have a: "f \<otimes> foldr (op \<otimes>) fs' \<one> \<sim> a" by (simp add: wfactors_def)
have "a \<sim> f \<otimes> foldr (op \<otimes>) fs' \<one>"
by (simp add: Units_closed[OF aunit] a[symmetric])
finally
have "f \<otimes> foldr (op \<otimes>) fs' \<one> \<in> Units G" by simp
hence "f \<in> Units G" by (intro unit_factor[of f], simp+)
with fnunit show "False" by simp
qed
text {* Comparing wfactors *}
lemma (in comm_monoid_cancel) wfactors_listassoc_cong_l:
assumes fact: "wfactors G fs a"
and asc: "fs [\<sim>] fs'"
and carr: "a \<in> carrier G" "set fs \<subseteq> carrier G" "set fs' \<subseteq> carrier G"
shows "wfactors G fs' a"
using fact
apply (elim wfactorsE, intro wfactorsI)
apply (metis assms(2) assms(4) assms(5) irrlist_listassoc_cong)
proof -
from asc[symmetric]
have "foldr op \<otimes> fs' \<one> \<sim> foldr op \<otimes> fs \<one>"
by (simp add: multlist_listassoc_cong carr)
also assume "foldr op \<otimes> fs \<one> \<sim> a"
finally
show "foldr op \<otimes> fs' \<one> \<sim> a" by (simp add: carr)
qed
lemma (in comm_monoid) wfactors_perm_cong_l:
assumes "wfactors G fs a"
and "fs <~~> fs'"
and "set fs \<subseteq> carrier G"
shows "wfactors G fs' a"
using assms
apply (elim wfactorsE, intro wfactorsI)
apply (rule irrlist_perm_cong, assumption+)
apply (simp add: multlist_perm_cong[symmetric])
done
lemma (in comm_monoid_cancel) wfactors_ee_cong_l [trans]:
assumes ee: "essentially_equal G as bs"
and bfs: "wfactors G bs b"
and carr: "b \<in> carrier G" "set as \<subseteq> carrier G" "set bs \<subseteq> carrier G"
shows "wfactors G as b"
using ee
proof (elim essentially_equalE)
fix fs
assume prm: "as <~~> fs"
with carr
have fscarr: "set fs \<subseteq> carrier G" by (simp add: perm_closed)
note bfs
also assume [symmetric]: "fs [\<sim>] bs"
also (wfactors_listassoc_cong_l)
note prm[symmetric]
finally (wfactors_perm_cong_l)
show "wfactors G as b" by (simp add: carr fscarr)
qed
lemma (in monoid) wfactors_cong_r [trans]:
assumes fac: "wfactors G fs a" and aa': "a \<sim> a'"
and carr[simp]: "a \<in> carrier G" "a' \<in> carrier G" "set fs \<subseteq> carrier G"
shows "wfactors G fs a'"
using fac
proof (elim wfactorsE, intro wfactorsI)
assume "foldr op \<otimes> fs \<one> \<sim> a" also note aa'
finally show "foldr op \<otimes> fs \<one> \<sim> a'" by simp
qed
subsubsection {* Essentially equal factorizations *}
lemma (in comm_monoid_cancel) unitfactor_ee:
assumes uunit: "u \<in> Units G"
and carr: "set as \<subseteq> carrier G"
shows "essentially_equal G (as[0 := (as!0 \<otimes> u)]) as" (is "essentially_equal G ?as' as")
using assms
apply (intro essentially_equalI[of _ ?as'], simp)
apply (cases as, simp)
apply (clarsimp, fast intro: associatedI2[of u])
done
lemma (in comm_monoid_cancel) factors_cong_unit:
assumes uunit: "u \<in> Units G" and anunit: "a \<notin> Units G"
and afs: "factors G as a"
and ascarr: "set as \<subseteq> carrier G"
shows "factors G (as[0 := (as!0 \<otimes> u)]) (a \<otimes> u)" (is "factors G ?as' ?a'")
using assms
apply (elim factorsE, clarify)
apply (cases as)
apply (simp add: nunit_factors)
apply clarsimp
apply (elim factorsE, intro factorsI)
apply (clarsimp, fast intro: irreducible_prod_rI)
apply (simp add: m_ac Units_closed)
done
lemma (in comm_monoid) perm_wfactorsD:
assumes prm: "as <~~> bs"
and afs: "wfactors G as a" and bfs: "wfactors G bs b"
and [simp]: "a \<in> carrier G" "b \<in> carrier G"
and ascarr[simp]: "set as \<subseteq> carrier G"
shows "a \<sim> b"
using afs bfs
proof (elim wfactorsE)
from prm have [simp]: "set bs \<subseteq> carrier G" by (simp add: perm_closed)
assume "foldr op \<otimes> as \<one> \<sim> a"
hence "a \<sim> foldr op \<otimes> as \<one>" by (rule associated_sym, simp+)
also from prm
have "foldr op \<otimes> as \<one> = foldr op \<otimes> bs \<one>" by (rule multlist_perm_cong, simp)
also assume "foldr op \<otimes> bs \<one> \<sim> b"
finally
show "a \<sim> b" by simp
qed
lemma (in comm_monoid_cancel) listassoc_wfactorsD:
assumes assoc: "as [\<sim>] bs"
and afs: "wfactors G as a" and bfs: "wfactors G bs b"
and [simp]: "a \<in> carrier G" "b \<in> carrier G"
and [simp]: "set as \<subseteq> carrier G" "set bs \<subseteq> carrier G"
shows "a \<sim> b"
using afs bfs
proof (elim wfactorsE)
assume "foldr op \<otimes> as \<one> \<sim> a"
hence "a \<sim> foldr op \<otimes> as \<one>" by (rule associated_sym, simp+)
also from assoc
have "foldr op \<otimes> as \<one> \<sim> foldr op \<otimes> bs \<one>" by (rule multlist_listassoc_cong, simp+)
also assume "foldr op \<otimes> bs \<one> \<sim> b"
finally
show "a \<sim> b" by simp
qed
lemma (in comm_monoid_cancel) ee_wfactorsD:
assumes ee: "essentially_equal G as bs"
and afs: "wfactors G as a" and bfs: "wfactors G bs b"
and [simp]: "a \<in> carrier G" "b \<in> carrier G"
and ascarr[simp]: "set as \<subseteq> carrier G" and bscarr[simp]: "set bs \<subseteq> carrier G"
shows "a \<sim> b"
using ee
proof (elim essentially_equalE)
fix fs
assume prm: "as <~~> fs"
hence as'carr[simp]: "set fs \<subseteq> carrier G" by (simp add: perm_closed)
from afs prm
have afs': "wfactors G fs a" by (rule wfactors_perm_cong_l, simp)
assume "fs [\<sim>] bs"
from this afs' bfs
show "a \<sim> b" by (rule listassoc_wfactorsD, simp+)
qed
lemma (in comm_monoid_cancel) ee_factorsD:
assumes ee: "essentially_equal G as bs"
and afs: "factors G as a" and bfs:"factors G bs b"
and "set as \<subseteq> carrier G" "set bs \<subseteq> carrier G"
shows "a \<sim> b"
using assms
by (blast intro: factors_wfactors dest: ee_wfactorsD)
lemma (in factorial_monoid) ee_factorsI:
assumes ab: "a \<sim> b"
and afs: "factors G as a" and anunit: "a \<notin> Units G"
and bfs: "factors G bs b" and bnunit: "b \<notin> Units G"
and ascarr: "set as \<subseteq> carrier G" and bscarr: "set bs \<subseteq> carrier G"
shows "essentially_equal G as bs"
proof -
note carr[simp] = factors_closed[OF afs ascarr] ascarr[THEN subsetD]
factors_closed[OF bfs bscarr] bscarr[THEN subsetD]
from ab carr
have "\<exists>u\<in>Units G. a = b \<otimes> u" by (fast elim: associatedE2)
from this obtain u
where uunit: "u \<in> Units G"
and a: "a = b \<otimes> u" by auto
from uunit bscarr
have ee: "essentially_equal G (bs[0 := (bs!0 \<otimes> u)]) bs"
(is "essentially_equal G ?bs' bs")
by (rule unitfactor_ee)
from bscarr uunit
have bs'carr: "set ?bs' \<subseteq> carrier G"
by (cases bs) (simp add: Units_closed)+
from uunit bnunit bfs bscarr
have fac: "factors G ?bs' (b \<otimes> u)"
by (rule factors_cong_unit)
from afs fac[simplified a[symmetric]] ascarr bs'carr anunit
have "essentially_equal G as ?bs'"
by (blast intro: factors_unique)
also note ee
finally
show "essentially_equal G as bs" by (simp add: ascarr bscarr bs'carr)
qed
lemma (in factorial_monoid) ee_wfactorsI:
assumes asc: "a \<sim> b"
and asf: "wfactors G as a" and bsf: "wfactors G bs b"
and acarr[simp]: "a \<in> carrier G" and bcarr[simp]: "b \<in> carrier G"
and ascarr[simp]: "set as \<subseteq> carrier G" and bscarr[simp]: "set bs \<subseteq> carrier G"
shows "essentially_equal G as bs"
using assms
proof (cases "a \<in> Units G")
assume aunit: "a \<in> Units G"
also note asc
finally have bunit: "b \<in> Units G" by simp
from aunit asf ascarr
have e: "as = []" by (rule unit_wfactors_empty)
from bunit bsf bscarr
have e': "bs = []" by (rule unit_wfactors_empty)
have "essentially_equal G [] []"
by (fast intro: essentially_equalI)
thus ?thesis by (simp add: e e')
next
assume anunit: "a \<notin> Units G"
have bnunit: "b \<notin> Units G"
proof clarify
assume "b \<in> Units G"
also note asc[symmetric]
finally have "a \<in> Units G" by simp
with anunit
show "False" ..
qed
have "\<exists>a'. factors G as a' \<and> a' \<sim> a" by (rule wfactors_factors[OF asf ascarr])
from this obtain a'
where fa': "factors G as a'"
and a': "a' \<sim> a"
by auto
from fa' ascarr
have a'carr[simp]: "a' \<in> carrier G" by fast
have a'nunit: "a' \<notin> Units G"
proof (clarify)
assume "a' \<in> Units G"
also note a'
finally have "a \<in> Units G" by simp
with anunit
show "False" ..
qed
have "\<exists>b'. factors G bs b' \<and> b' \<sim> b" by (rule wfactors_factors[OF bsf bscarr])
from this obtain b'
where fb': "factors G bs b'"
and b': "b' \<sim> b"
by auto
from fb' bscarr
have b'carr[simp]: "b' \<in> carrier G" by fast
have b'nunit: "b' \<notin> Units G"
proof (clarify)
assume "b' \<in> Units G"
also note b'
finally have "b \<in> Units G" by simp
with bnunit
show "False" ..
qed
note a'
also note asc
also note b'[symmetric]
finally
have "a' \<sim> b'" by simp
from this fa' a'nunit fb' b'nunit ascarr bscarr
show "essentially_equal G as bs"
by (rule ee_factorsI)
qed
lemma (in factorial_monoid) ee_wfactors:
assumes asf: "wfactors G as a"
and bsf: "wfactors G bs b"
and acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G"
and ascarr: "set as \<subseteq> carrier G" and bscarr: "set bs \<subseteq> carrier G"
shows asc: "a \<sim> b = essentially_equal G as bs"
using assms
by (fast intro: ee_wfactorsI ee_wfactorsD)
lemma (in factorial_monoid) wfactors_exist [intro, simp]:
assumes acarr[simp]: "a \<in> carrier G"
shows "\<exists>fs. set fs \<subseteq> carrier G \<and> wfactors G fs a"
proof (cases "a \<in> Units G")
assume "a \<in> Units G"
hence "wfactors G [] a" by (rule unit_wfactors)
thus ?thesis by (intro exI) force
next
assume "a \<notin> Units G"
hence "\<exists>fs. set fs \<subseteq> carrier G \<and> factors G fs a" by (intro factors_exist acarr)
from this obtain fs
where fscarr: "set fs \<subseteq> carrier G"
and f: "factors G fs a"
by auto
from f have "wfactors G fs a" by (rule factors_wfactors) fact
from fscarr this
show ?thesis by fast
qed
lemma (in monoid) wfactors_prod_exists [intro, simp]:
assumes "\<forall>a \<in> set as. irreducible G a" and "set as \<subseteq> carrier G"
shows "\<exists>a. a \<in> carrier G \<and> wfactors G as a"
unfolding wfactors_def
using assms
by blast
lemma (in factorial_monoid) wfactors_unique:
assumes "wfactors G fs a" and "wfactors G fs' a"
and "a \<in> carrier G"
and "set fs \<subseteq> carrier G" and "set fs' \<subseteq> carrier G"
shows "essentially_equal G fs fs'"
using assms
by (fast intro: ee_wfactorsI[of a a])
lemma (in monoid) factors_mult_single:
assumes "irreducible G a" and "factors G fb b" and "a \<in> carrier G"
shows "factors G (a # fb) (a \<otimes> b)"
using assms
unfolding factors_def
by simp
lemma (in monoid_cancel) wfactors_mult_single:
assumes f: "irreducible G a" "wfactors G fb b"
"a \<in> carrier G" "b \<in> carrier G" "set fb \<subseteq> carrier G"
shows "wfactors G (a # fb) (a \<otimes> b)"
using assms
unfolding wfactors_def
by (simp add: mult_cong_r)
lemma (in monoid) factors_mult:
assumes factors: "factors G fa a" "factors G fb b"
and ascarr: "set fa \<subseteq> carrier G" and bscarr:"set fb \<subseteq> carrier G"
shows "factors G (fa @ fb) (a \<otimes> b)"
using assms
unfolding factors_def
apply (safe, force)
apply hypsubst_thin
apply (induct fa)
apply simp
apply (simp add: m_assoc)
done
lemma (in comm_monoid_cancel) wfactors_mult [intro]:
assumes asf: "wfactors G as a" and bsf:"wfactors G bs b"
and acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G"
and ascarr: "set as \<subseteq> carrier G" and bscarr:"set bs \<subseteq> carrier G"
shows "wfactors G (as @ bs) (a \<otimes> b)"
apply (insert wfactors_factors[OF asf ascarr])
apply (insert wfactors_factors[OF bsf bscarr])
proof (clarsimp)
fix a' b'
assume asf': "factors G as a'" and a'a: "a' \<sim> a"
and bsf': "factors G bs b'" and b'b: "b' \<sim> b"
from asf' have a'carr: "a' \<in> carrier G" by (rule factors_closed) fact
from bsf' have b'carr: "b' \<in> carrier G" by (rule factors_closed) fact
note carr = acarr bcarr a'carr b'carr ascarr bscarr
from asf' bsf'
have "factors G (as @ bs) (a' \<otimes> b')" by (rule factors_mult) fact+
with carr
have abf': "wfactors G (as @ bs) (a' \<otimes> b')" by (intro factors_wfactors) simp+
also from b'b carr
have trb: "a' \<otimes> b' \<sim> a' \<otimes> b" by (intro mult_cong_r)
also from a'a carr
have tra: "a' \<otimes> b \<sim> a \<otimes> b" by (intro mult_cong_l)
finally
show "wfactors G (as @ bs) (a \<otimes> b)"
by (simp add: carr)
qed
lemma (in comm_monoid) factors_dividesI:
assumes "factors G fs a" and "f \<in> set fs"
and "set fs \<subseteq> carrier G"
shows "f divides a"
using assms
by (fast elim: factorsE intro: multlist_dividesI)
lemma (in comm_monoid) wfactors_dividesI:
assumes p: "wfactors G fs a"
and fscarr: "set fs \<subseteq> carrier G" and acarr: "a \<in> carrier G"
and f: "f \<in> set fs"
shows "f divides a"
apply (insert wfactors_factors[OF p fscarr], clarsimp)
proof -
fix a'
assume fsa': "factors G fs a'"
and a'a: "a' \<sim> a"
with fscarr
have a'carr: "a' \<in> carrier G" by (simp add: factors_closed)
from fsa' fscarr f
have "f divides a'" by (fast intro: factors_dividesI)
also note a'a
finally
show "f divides a" by (simp add: f fscarr[THEN subsetD] acarr a'carr)
qed
subsubsection {* Factorial monoids and wfactors *}
lemma (in comm_monoid_cancel) factorial_monoidI:
assumes wfactors_exists:
"\<And>a. a \<in> carrier G \<Longrightarrow> \<exists>fs. set fs \<subseteq> carrier G \<and> wfactors G fs a"
and wfactors_unique:
"\<And>a fs fs'. \<lbrakk>a \<in> carrier G; set fs \<subseteq> carrier G; set fs' \<subseteq> carrier G;
wfactors G fs a; wfactors G fs' a\<rbrakk> \<Longrightarrow> essentially_equal G fs fs'"
shows "factorial_monoid G"
proof
fix a
assume acarr: "a \<in> carrier G" and anunit: "a \<notin> Units G"
from wfactors_exists[OF acarr]
obtain as
where ascarr: "set as \<subseteq> carrier G"
and afs: "wfactors G as a"
by auto
from afs ascarr
have "\<exists>a'. factors G as a' \<and> a' \<sim> a" by (rule wfactors_factors)
from this obtain a'
where afs': "factors G as a'"
and a'a: "a' \<sim> a"
by auto
from afs' ascarr
have a'carr: "a' \<in> carrier G" by fast
have a'nunit: "a' \<notin> Units G"
proof clarify
assume "a' \<in> Units G"
also note a'a
finally have "a \<in> Units G" by (simp add: acarr)
with anunit
show "False" ..
qed
from a'carr acarr a'a
have "\<exists>u. u \<in> Units G \<and> a' = a \<otimes> u" by (blast elim: associatedE2)
from this obtain u
where uunit: "u \<in> Units G"
and a': "a' = a \<otimes> u"
by auto
note [simp] = acarr Units_closed[OF uunit] Units_inv_closed[OF uunit]
have "a = a \<otimes> \<one>" by simp
also have "\<dots> = a \<otimes> (u \<otimes> inv u)" by (simp add: uunit)
also have "\<dots> = a' \<otimes> inv u" by (simp add: m_assoc[symmetric] a'[symmetric])
finally
have a: "a = a' \<otimes> inv u" .
from ascarr uunit
have cr: "set (as[0:=(as!0 \<otimes> inv u)]) \<subseteq> carrier G"
by (cases as, clarsimp+)
from afs' uunit a'nunit acarr ascarr
have "factors G (as[0:=(as!0 \<otimes> inv u)]) a"
by (simp add: a factors_cong_unit)
with cr
show "\<exists>fs. set fs \<subseteq> carrier G \<and> factors G fs a" by fast
qed (blast intro: factors_wfactors wfactors_unique)
subsection {* Factorizations as Multisets *}
text {* Representing a factorization by the multiset of the association classes of its factors gives access to useful operations such as intersection and difference. *}
(* FIXME: use class_of x instead of closure_of {x} *)
abbreviation
"assocs G x == eq_closure_of (division_rel G) {x}"
definition
"fmset G as = multiset_of (map (\<lambda>a. assocs G a) as)"
text {* Helper lemmas *}
lemma (in monoid) assocs_repr_independence:
assumes "y \<in> assocs G x"
and "x \<in> carrier G"
shows "assocs G x = assocs G y"
using assms
apply safe
apply (elim closure_ofE2, intro closure_ofI2[of _ _ y])
apply (clarsimp, iprover intro: associated_trans associated_sym, simp+)
apply (elim closure_ofE2, intro closure_ofI2[of _ _ x])
apply (clarsimp, iprover intro: associated_trans, simp+)
done
lemma (in monoid) assocs_self:
assumes "x \<in> carrier G"
shows "x \<in> assocs G x"
using assms
by (fastforce intro: closure_ofI2)
lemma (in monoid) assocs_repr_independenceD:
assumes repr: "assocs G x = assocs G y"
and ycarr: "y \<in> carrier G"
shows "y \<in> assocs G x"
unfolding repr
using ycarr
by (intro assocs_self)
lemma (in comm_monoid) assocs_assoc:
assumes "a \<in> assocs G b"
and "b \<in> carrier G"
shows "a \<sim> b"
using assms
by (elim closure_ofE2, simp)
lemmas (in comm_monoid) assocs_eqD =
assocs_repr_independenceD[THEN assocs_assoc]
subsubsection {* Comparing multisets *}
lemma (in monoid) fmset_perm_cong:
assumes prm: "as <~~> bs"
shows "fmset G as = fmset G bs"
using perm_map[OF prm]
by (simp add: multiset_of_eq_perm fmset_def)
lemma (in comm_monoid_cancel) eqc_listassoc_cong:
assumes "as [\<sim>] bs"
and "set as \<subseteq> carrier G" and "set bs \<subseteq> carrier G"
shows "map (assocs G) as = map (assocs G) bs"
using assms
apply (induct as arbitrary: bs, simp)
apply (clarsimp simp add: Cons_eq_map_conv list_all2_Cons1, safe)
apply (clarsimp elim!: closure_ofE2) defer 1
apply (clarsimp elim!: closure_ofE2) defer 1
proof -
fix a x z
assume carr[simp]: "a \<in> carrier G" "x \<in> carrier G" "z \<in> carrier G"
assume "x \<sim> a"
also assume "a \<sim> z"
finally have "x \<sim> z" by simp
with carr
show "x \<in> assocs G z"
by (intro closure_ofI2) simp+
next
fix a x z
assume carr[simp]: "a \<in> carrier G" "x \<in> carrier G" "z \<in> carrier G"
assume "x \<sim> z"
also assume [symmetric]: "a \<sim> z"
finally have "x \<sim> a" by simp
with carr
show "x \<in> assocs G a"
by (intro closure_ofI2) simp+
qed
lemma (in comm_monoid_cancel) fmset_listassoc_cong:
assumes "as [\<sim>] bs"
and "set as \<subseteq> carrier G" and "set bs \<subseteq> carrier G"
shows "fmset G as = fmset G bs"
using assms
unfolding fmset_def
by (simp add: eqc_listassoc_cong)
lemma (in comm_monoid_cancel) ee_fmset:
assumes ee: "essentially_equal G as bs"
and ascarr: "set as \<subseteq> carrier G" and bscarr: "set bs \<subseteq> carrier G"
shows "fmset G as = fmset G bs"
using ee
proof (elim essentially_equalE)
fix as'
assume prm: "as <~~> as'"
from prm ascarr
have as'carr: "set as' \<subseteq> carrier G" by (rule perm_closed)
from prm
have "fmset G as = fmset G as'" by (rule fmset_perm_cong)
also assume "as' [\<sim>] bs"
with as'carr bscarr
have "fmset G as' = fmset G bs" by (simp add: fmset_listassoc_cong)
finally
show "fmset G as = fmset G bs" .
qed
lemma (in monoid_cancel) fmset_ee__hlp_induct:
assumes prm: "cas <~~> cbs"
and cdef: "cas = map (assocs G) as" "cbs = map (assocs G) bs"
shows "\<forall>as bs. (cas <~~> cbs \<and> cas = map (assocs G) as \<and>
cbs = map (assocs G) bs) \<longrightarrow> (\<exists>as'. as <~~> as' \<and> map (assocs G) as' = cbs)"
apply (rule perm.induct[of cas cbs], rule prm)
apply safe apply simp_all
apply (simp add: map_eq_Cons_conv, blast)
apply force
proof -
fix ys as bs
assume p1: "map (assocs G) as <~~> ys"
and r1[rule_format]:
"\<forall>asa bs. map (assocs G) as = map (assocs G) asa \<and>
ys = map (assocs G) bs
\<longrightarrow> (\<exists>as'. asa <~~> as' \<and> map (assocs G) as' = map (assocs G) bs)"
and p2: "ys <~~> map (assocs G) bs"
and r2[rule_format]:
"\<forall>as bsa. ys = map (assocs G) as \<and>
map (assocs G) bs = map (assocs G) bsa
\<longrightarrow> (\<exists>as'. as <~~> as' \<and> map (assocs G) as' = map (assocs G) bsa)"
and p3: "map (assocs G) as <~~> map (assocs G) bs"
from p1
have "multiset_of (map (assocs G) as) = multiset_of ys"
by (simp add: multiset_of_eq_perm)
hence setys: "set (map (assocs G) as) = set ys" by (rule multiset_of_eq_setD)
have "set (map (assocs G) as) = { assocs G x | x. x \<in> set as}" by clarsimp fast
with setys have "set ys \<subseteq> { assocs G x | x. x \<in> set as}" by simp
hence "\<exists>yy. ys = map (assocs G) yy"
apply (induct ys, simp, clarsimp)
proof -
fix yy x
show "\<exists>yya. (assocs G x) # map (assocs G) yy =
map (assocs G) yya"
by (rule exI[of _ "x#yy"], simp)
qed
from this obtain yy
where ys: "ys = map (assocs G) yy"
by auto
from p1 ys
have "\<exists>as'. as <~~> as' \<and> map (assocs G) as' = map (assocs G) yy"
by (intro r1, simp)
from this obtain as'
where asas': "as <~~> as'"
and as'yy: "map (assocs G) as' = map (assocs G) yy"
by auto
from p2 ys
have "\<exists>as'. yy <~~> as' \<and> map (assocs G) as' = map (assocs G) bs"
by (intro r2, simp)
from this obtain as''
where yyas'': "yy <~~> as''"
and as''bs: "map (assocs G) as'' = map (assocs G) bs"
by auto
from as'yy and yyas''
have "\<exists>cs. as' <~~> cs \<and> map (assocs G) cs = map (assocs G) as''"
by (rule perm_map_switch)
from this obtain cs
where as'cs: "as' <~~> cs"
and csas'': "map (assocs G) cs = map (assocs G) as''"
by auto
from asas' and as'cs
have ascs: "as <~~> cs" by fast
from csas'' and as''bs
have "map (assocs G) cs = map (assocs G) bs" by simp
from ascs and this
show "\<exists>as'. as <~~> as' \<and> map (assocs G) as' = map (assocs G) bs" by fast
qed
lemma (in comm_monoid_cancel) fmset_ee:
assumes mset: "fmset G as = fmset G bs"
and ascarr: "set as \<subseteq> carrier G" and bscarr: "set bs \<subseteq> carrier G"
shows "essentially_equal G as bs"
proof -
from mset
have mpp: "map (assocs G) as <~~> map (assocs G) bs"
by (simp add: fmset_def multiset_of_eq_perm)
have "\<exists>cas. cas = map (assocs G) as" by simp
from this obtain cas where cas: "cas = map (assocs G) as" by simp
have "\<exists>cbs. cbs = map (assocs G) bs" by simp
from this obtain cbs where cbs: "cbs = map (assocs G) bs" by simp
from cas cbs mpp
have [rule_format]:
"\<forall>as bs. (cas <~~> cbs \<and> cas = map (assocs G) as \<and>
cbs = map (assocs G) bs)
\<longrightarrow> (\<exists>as'. as <~~> as' \<and> map (assocs G) as' = cbs)"
by (intro fmset_ee__hlp_induct, simp+)
with mpp cas cbs
have "\<exists>as'. as <~~> as' \<and> map (assocs G) as' = map (assocs G) bs"
by simp
from this obtain as'
where tp: "as <~~> as'"
and tm: "map (assocs G) as' = map (assocs G) bs"
by auto
from tm have lene: "length as' = length bs" by (rule map_eq_imp_length_eq)
from tp have "set as = set as'" by (simp add: multiset_of_eq_perm multiset_of_eq_setD)
with ascarr
have as'carr: "set as' \<subseteq> carrier G" by simp
from tm as'carr[THEN subsetD] bscarr[THEN subsetD]
have "as' [\<sim>] bs"
by (induct as' arbitrary: bs) (simp, fastforce dest: assocs_eqD[THEN associated_sym])
from tp and this
show "essentially_equal G as bs" by (fast intro: essentially_equalI)
qed
lemma (in comm_monoid_cancel) ee_is_fmset:
assumes "set as \<subseteq> carrier G" and "set bs \<subseteq> carrier G"
shows "essentially_equal G as bs = (fmset G as = fmset G bs)"
using assms
by (fast intro: ee_fmset fmset_ee)
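text {* Informal illustration (not part of the formal development): over the
  nonzero integers the lists [2, 3] and [3, -2] are essentially equal (permute,
  then compensate by the unit -1, as 2 and -2 are associated), and they induce
  the same multiset of association classes.  The lemma above states that this
  correspondence is exact. *}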
subsubsection {* Interpreting multisets as factorizations *}
lemma (in monoid) mset_fmsetEx:
assumes elems: "\<And>X. X \<in> set_of Cs \<Longrightarrow> \<exists>x. P x \<and> X = assocs G x"
shows "\<exists>cs. (\<forall>c \<in> set cs. P c) \<and> fmset G cs = Cs"
proof -
have "\<exists>Cs'. Cs = multiset_of Cs'"
by (rule surjE[OF surj_multiset_of], fast)
from this obtain Cs'
where Cs: "Cs = multiset_of Cs'"
by auto
have "\<exists>cs. (\<forall>c \<in> set cs. P c) \<and> multiset_of (map (assocs G) cs) = Cs"
using elems
unfolding Cs
apply (induct Cs', simp)
apply clarsimp
apply (subgoal_tac "\<exists>cs. (\<forall>x\<in>set cs. P x) \<and>
multiset_of (map (assocs G) cs) = multiset_of Cs'")
proof clarsimp
fix a Cs' cs
assume ih: "\<And>X. X = a \<or> X \<in> set Cs' \<Longrightarrow> \<exists>x. P x \<and> X = assocs G x"
and csP: "\<forall>x\<in>set cs. P x"
and mset: "multiset_of (map (assocs G) cs) = multiset_of Cs'"
from ih
have "\<exists>x. P x \<and> a = assocs G x" by fast
from this obtain c
where cP: "P c"
and a: "a = assocs G c"
by auto
from cP csP
have tP: "\<forall>x\<in>set (c#cs). P x" by simp
from mset a
have "multiset_of (map (assocs G) (c#cs)) = multiset_of Cs' + {#a#}" by simp
from tP this
show "\<exists>cs. (\<forall>x\<in>set cs. P x) \<and>
multiset_of (map (assocs G) cs) =
multiset_of Cs' + {#a#}" by fast
qed simp
thus ?thesis by (simp add: fmset_def)
qed
lemma (in monoid) mset_wfactorsEx:
assumes elems: "\<And>X. X \<in> set_of Cs
\<Longrightarrow> \<exists>x. (x \<in> carrier G \<and> irreducible G x) \<and> X = assocs G x"
shows "\<exists>c cs. c \<in> carrier G \<and> set cs \<subseteq> carrier G \<and> wfactors G cs c \<and> fmset G cs = Cs"
proof -
have "\<exists>cs. (\<forall>c\<in>set cs. c \<in> carrier G \<and> irreducible G c) \<and> fmset G cs = Cs"
by (intro mset_fmsetEx, rule elems)
from this obtain cs
where p[rule_format]: "\<forall>c\<in>set cs. c \<in> carrier G \<and> irreducible G c"
and Cs[symmetric]: "fmset G cs = Cs"
by auto
from p
have cscarr: "set cs \<subseteq> carrier G" by fast
from p
have "\<exists>c. c \<in> carrier G \<and> wfactors G cs c"
by (intro wfactors_prod_exists) fast+
from this obtain c
where ccarr: "c \<in> carrier G"
and cfs: "wfactors G cs c"
by auto
with cscarr Cs
show ?thesis by fast
qed
subsubsection {* Multiplication on multisets *}
lemma (in factorial_monoid) mult_wfactors_fmset:
assumes afs: "wfactors G as a" and bfs: "wfactors G bs b" and cfs: "wfactors G cs (a \<otimes> b)"
and carr: "a \<in> carrier G" "b \<in> carrier G"
"set as \<subseteq> carrier G" "set bs \<subseteq> carrier G" "set cs \<subseteq> carrier G"
shows "fmset G cs = fmset G as + fmset G bs"
proof -
from assms
have "wfactors G (as @ bs) (a \<otimes> b)" by (intro wfactors_mult)
with carr cfs
have "essentially_equal G cs (as@bs)" by (intro ee_wfactorsI[of "a\<otimes>b" "a\<otimes>b"], simp+)
with carr
have "fmset G cs = fmset G (as@bs)" by (intro ee_fmset, simp+)
also have "fmset G (as@bs) = fmset G as + fmset G bs" by (simp add: fmset_def)
finally show "fmset G cs = fmset G as + fmset G bs" .
qed
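text {* Informal illustration (not part of the formal development): with
  12 = 2 * 2 * 3, 10 = 2 * 5 and 120 = 12 * 10 = 2 * 2 * 2 * 3 * 5, the factor
  multisets add up exactly as the lemma predicts:
  {2, 2, 3} + {2, 5} = {2, 2, 2, 3, 5}. *}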
lemma (in factorial_monoid) mult_factors_fmset:
assumes afs: "factors G as a" and bfs: "factors G bs b" and cfs: "factors G cs (a \<otimes> b)"
and "set as \<subseteq> carrier G" "set bs \<subseteq> carrier G" "set cs \<subseteq> carrier G"
shows "fmset G cs = fmset G as + fmset G bs"
using assms
by (blast intro: factors_wfactors mult_wfactors_fmset)
lemma (in comm_monoid_cancel) fmset_wfactors_mult:
assumes mset: "fmset G cs = fmset G as + fmset G bs"
and carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
"set as \<subseteq> carrier G" "set bs \<subseteq> carrier G" "set cs \<subseteq> carrier G"
and fs: "wfactors G as a" "wfactors G bs b" "wfactors G cs c"
shows "c \<sim> a \<otimes> b"
proof -
from carr fs
have m: "wfactors G (as @ bs) (a \<otimes> b)" by (intro wfactors_mult)
from mset
have "fmset G cs = fmset G (as@bs)" by (simp add: fmset_def)
then have "essentially_equal G cs (as@bs)" by (rule fmset_ee) (simp add: carr)+
then show "c \<sim> a \<otimes> b" by (rule ee_wfactorsD[of "cs" "as@bs"]) (simp add: assms m)+
qed
subsubsection {* Divisibility on multisets *}
lemma (in factorial_monoid) divides_fmsubset:
assumes ab: "a divides b"
and afs: "wfactors G as a" and bfs: "wfactors G bs b"
and carr: "a \<in> carrier G" "b \<in> carrier G" "set as \<subseteq> carrier G" "set bs \<subseteq> carrier G"
shows "fmset G as \<le> fmset G bs"
using ab
proof (elim dividesE)
fix c
assume ccarr: "c \<in> carrier G"
hence "\<exists>cs. set cs \<subseteq> carrier G \<and> wfactors G cs c" by (rule wfactors_exist)
from this obtain cs
where cscarr: "set cs \<subseteq> carrier G"
and cfs: "wfactors G cs c" by auto
note carr = carr ccarr cscarr
assume "b = a \<otimes> c"
with afs bfs cfs carr
have "fmset G bs = fmset G as + fmset G cs"
by (intro mult_wfactors_fmset[OF afs cfs]) simp+
thus ?thesis by simp
qed
lemma (in comm_monoid_cancel) fmsubset_divides:
assumes msubset: "fmset G as \<le> fmset G bs"
and afs: "wfactors G as a" and bfs: "wfactors G bs b"
and acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G"
and ascarr: "set as \<subseteq> carrier G" and bscarr: "set bs \<subseteq> carrier G"
shows "a divides b"
proof -
from afs have airr: "\<forall>a \<in> set as. irreducible G a" by (fast elim: wfactorsE)
from bfs have birr: "\<forall>b \<in> set bs. irreducible G b" by (fast elim: wfactorsE)
have "\<exists>c cs. c \<in> carrier G \<and> set cs \<subseteq> carrier G \<and> wfactors G cs c \<and> fmset G cs = fmset G bs - fmset G as"
proof (intro mset_wfactorsEx, simp)
fix X
assume "count (fmset G as) X < count (fmset G bs) X"
hence "0 < count (fmset G bs) X" by simp
hence "X \<in> set_of (fmset G bs)" by simp
hence "X \<in> set (map (assocs G) bs)" by (simp add: fmset_def)
hence "\<exists>x. x \<in> set bs \<and> X = assocs G x" by (induct bs) auto
from this obtain x
where xbs: "x \<in> set bs"
and X: "X = assocs G x"
by auto
with bscarr have xcarr: "x \<in> carrier G" by fast
from xbs birr have xirr: "irreducible G x" by simp
from xcarr and xirr and X
show "\<exists>x. x \<in> carrier G \<and> irreducible G x \<and> X = assocs G x" by fast
qed
from this obtain c cs
where ccarr: "c \<in> carrier G"
and cscarr: "set cs \<subseteq> carrier G"
and csf: "wfactors G cs c"
and csmset: "fmset G cs = fmset G bs - fmset G as" by auto
from csmset msubset
have "fmset G bs = fmset G as + fmset G cs"
by (simp add: multiset_eq_iff mset_le_def)
hence basc: "b \<sim> a \<otimes> c"
by (rule fmset_wfactors_mult) fact+
thus ?thesis
proof (elim associatedE2)
fix u
assume "u \<in> Units G" "b = a \<otimes> c \<otimes> u"
with acarr ccarr
show "a divides b" by (fast intro: dividesI[of "c \<otimes> u"] m_assoc)
qed (simp add: acarr bcarr ccarr)+
qed
lemma (in factorial_monoid) divides_as_fmsubset:
assumes "wfactors G as a" and "wfactors G bs b"
and "a \<in> carrier G" and "b \<in> carrier G"
and "set as \<subseteq> carrier G" and "set bs \<subseteq> carrier G"
shows "a divides b = (fmset G as \<le> fmset G bs)"
using assms
by (blast intro: divides_fmsubset fmsubset_divides)
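text {* Informal illustration (not part of the formal development): 12 divides
  60, and correspondingly the factor multiset {2, 2, 3} of 12 is a submultiset
  of the factor multiset {2, 2, 3, 5} of 60. *}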
text {* Proper factors on multisets *}
lemma (in factorial_monoid) fmset_properfactor:
assumes asubb: "fmset G as \<le> fmset G bs"
and anb: "fmset G as \<noteq> fmset G bs"
and "wfactors G as a" and "wfactors G bs b"
and "a \<in> carrier G" and "b \<in> carrier G"
and "set as \<subseteq> carrier G" and "set bs \<subseteq> carrier G"
shows "properfactor G a b"
apply (rule properfactorI)
apply (rule fmsubset_divides[of as bs], fact+)
proof
assume "b divides a"
hence "fmset G bs \<le> fmset G as"
by (rule divides_fmsubset) fact+
with asubb
have "fmset G as = fmset G bs" by (rule order_antisym)
with anb
show "False" ..
qed
lemma (in factorial_monoid) properfactor_fmset:
assumes pf: "properfactor G a b"
and "wfactors G as a" and "wfactors G bs b"
and "a \<in> carrier G" and "b \<in> carrier G"
and "set as \<subseteq> carrier G" and "set bs \<subseteq> carrier G"
shows "fmset G as \<le> fmset G bs \<and> fmset G as \<noteq> fmset G bs"
using pf
apply (elim properfactorE)
apply rule
apply (intro divides_fmsubset, assumption)
apply (rule assms)+
apply (metis assms divides_fmsubset fmsubset_divides)
done
subsection {* Irreducible Elements are Prime *}
lemma (in factorial_monoid) irreducible_is_prime:
assumes pirr: "irreducible G p"
and pcarr: "p \<in> carrier G"
shows "prime G p"
using pirr
proof (elim irreducibleE, intro primeI)
fix a b
assume acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G"
and pdvdab: "p divides (a \<otimes> b)"
and pnunit: "p \<notin> Units G"
assume irreduc[rule_format]:
"\<forall>b. b \<in> carrier G \<and> properfactor G b p \<longrightarrow> b \<in> Units G"
from pdvdab
have "\<exists>c\<in>carrier G. a \<otimes> b = p \<otimes> c" by (rule dividesD)
from this obtain c
where ccarr: "c \<in> carrier G"
and abpc: "a \<otimes> b = p \<otimes> c"
by auto
from acarr have "\<exists>fs. set fs \<subseteq> carrier G \<and> wfactors G fs a" by (rule wfactors_exist)
from this obtain as where ascarr: "set as \<subseteq> carrier G" and afs: "wfactors G as a" by auto
from bcarr have "\<exists>fs. set fs \<subseteq> carrier G \<and> wfactors G fs b" by (rule wfactors_exist)
from this obtain bs where bscarr: "set bs \<subseteq> carrier G" and bfs: "wfactors G bs b" by auto
from ccarr have "\<exists>fs. set fs \<subseteq> carrier G \<and> wfactors G fs c" by (rule wfactors_exist)
from this obtain cs where cscarr: "set cs \<subseteq> carrier G" and cfs: "wfactors G cs c" by auto
note carr[simp] = pcarr acarr bcarr ccarr ascarr bscarr cscarr
from afs and bfs
have abfs: "wfactors G (as @ bs) (a \<otimes> b)" by (rule wfactors_mult) fact+
from pirr cfs
have pcfs: "wfactors G (p # cs) (p \<otimes> c)" by (rule wfactors_mult_single) fact+
with abpc
have abfs': "wfactors G (p # cs) (a \<otimes> b)" by simp
from abfs' abfs
have "essentially_equal G (p # cs) (as @ bs)"
by (rule wfactors_unique) simp+
hence "\<exists>ds. p # cs <~~> ds \<and> ds [\<sim>] (as @ bs)"
by (fast elim: essentially_equalE)
from this obtain ds
where "p # cs <~~> ds"
and dsassoc: "ds [\<sim>] (as @ bs)"
by auto
then have "p \<in> set ds"
by (simp add: perm_set_eq[symmetric])
with dsassoc
have "\<exists>p'. p' \<in> set (as@bs) \<and> p \<sim> p'"
unfolding list_all2_conv_all_nth set_conv_nth
by force
from this obtain p'
where "p' \<in> set (as@bs)"
and pp': "p \<sim> p'"
by auto
hence "p' \<in> set as \<or> p' \<in> set bs" by simp
moreover
{
assume p'elem: "p' \<in> set as"
with ascarr have [simp]: "p' \<in> carrier G" by fast
note pp'
also from afs
have "p' divides a" by (rule wfactors_dividesI) fact+
finally
have "p divides a" by simp
}
moreover
{
assume p'elem: "p' \<in> set bs"
with bscarr have [simp]: "p' \<in> carrier G" by fast
note pp'
also from bfs
have "p' divides b" by (rule wfactors_dividesI) fact+
finally
have "p divides b" by simp
}
ultimately
show "p divides a \<or> p divides b" by fast
qed
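text {* Proof idea of @{text irreducible_is_prime}: write a * b = p * c, pick
  factorizations of a, b and c, and compare the two factorizations p # cs and
  as @ bs of a * b.  By uniqueness of factorizations they are essentially
  equal, so p is associated to some element of as @ bs and therefore divides
  a or divides b. *}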
--"A version using @{const factors}, more complicated"
lemma (in factorial_monoid) factors_irreducible_is_prime:
assumes pirr: "irreducible G p"
and pcarr: "p \<in> carrier G"
shows "prime G p"
using pirr
apply (elim irreducibleE, intro primeI)
apply assumption
proof -
fix a b
assume acarr: "a \<in> carrier G"
and bcarr: "b \<in> carrier G"
and pdvdab: "p divides (a \<otimes> b)"
assume irreduc[rule_format]:
"\<forall>b. b \<in> carrier G \<and> properfactor G b p \<longrightarrow> b \<in> Units G"
from pdvdab
have "\<exists>c\<in>carrier G. a \<otimes> b = p \<otimes> c" by (rule dividesD)
from this obtain c
where ccarr: "c \<in> carrier G"
and abpc: "a \<otimes> b = p \<otimes> c"
by auto
note [simp] = pcarr acarr bcarr ccarr
show "p divides a \<or> p divides b"
proof (cases "a \<in> Units G")
assume aunit: "a \<in> Units G"
note pdvdab
also have "a \<otimes> b = b \<otimes> a" by (simp add: m_comm)
also from aunit
have bab: "b \<otimes> a \<sim> b"
by (intro associatedI2[of "a"], simp+)
finally
have "p divides b" by simp
thus "p divides a \<or> p divides b" ..
next
assume anunit: "a \<notin> Units G"
show "p divides a \<or> p divides b"
proof (cases "b \<in> Units G")
assume bunit: "b \<in> Units G"
note pdvdab
also from bunit
have baa: "a \<otimes> b \<sim> a"
by (intro associatedI2[of "b"], simp+)
finally
have "p divides a" by simp
thus "p divides a \<or> p divides b" ..
next
assume bnunit: "b \<notin> Units G"
have cnunit: "c \<notin> Units G"
proof (rule ccontr, simp)
assume cunit: "c \<in> Units G"
from bnunit
have "properfactor G a (a \<otimes> b)"
by (intro properfactorI3[of _ _ b], simp+)
also note abpc
also from cunit
have "p \<otimes> c \<sim> p"
by (intro associatedI2[of c], simp+)
finally
have "properfactor G a p" by simp
with acarr
have "a \<in> Units G" by (fast intro: irreduc)
with anunit
show "False" ..
qed
have abnunit: "a \<otimes> b \<notin> Units G"
proof clarsimp
assume abunit: "a \<otimes> b \<in> Units G"
hence "a \<in> Units G" by (rule unit_factor) fact+
with anunit
show "False" ..
qed
from acarr anunit have "\<exists>fs. set fs \<subseteq> carrier G \<and> factors G fs a" by (rule factors_exist)
then obtain as where ascarr: "set as \<subseteq> carrier G" and afac: "factors G as a" by auto
from bcarr bnunit have "\<exists>fs. set fs \<subseteq> carrier G \<and> factors G fs b" by (rule factors_exist)
then obtain bs where bscarr: "set bs \<subseteq> carrier G" and bfac: "factors G bs b" by auto
from ccarr cnunit have "\<exists>fs. set fs \<subseteq> carrier G \<and> factors G fs c" by (rule factors_exist)
then obtain cs where cscarr: "set cs \<subseteq> carrier G" and cfac: "factors G cs c" by auto
note [simp] = ascarr bscarr cscarr
from afac and bfac
have abfac: "factors G (as @ bs) (a \<otimes> b)" by (rule factors_mult) fact+
from pirr cfac
have pcfac: "factors G (p # cs) (p \<otimes> c)" by (rule factors_mult_single) fact+
with abpc
have abfac': "factors G (p # cs) (a \<otimes> b)" by simp
from abfac' abfac
have "essentially_equal G (p # cs) (as @ bs)"
by (rule factors_unique) (fact | simp)+
hence "\<exists>ds. p # cs <~~> ds \<and> ds [\<sim>] (as @ bs)"
by (fast elim: essentially_equalE)
from this obtain ds
where "p # cs <~~> ds"
and dsassoc: "ds [\<sim>] (as @ bs)"
by auto
then have "p \<in> set ds"
by (simp add: perm_set_eq[symmetric])
with dsassoc
have "\<exists>p'. p' \<in> set (as@bs) \<and> p \<sim> p'"
unfolding list_all2_conv_all_nth set_conv_nth
by force
from this obtain p'
where "p' \<in> set (as@bs)"
and pp': "p \<sim> p'" by auto
hence "p' \<in> set as \<or> p' \<in> set bs" by simp
moreover
{
assume p'elem: "p' \<in> set as"
with ascarr have [simp]: "p' \<in> carrier G" by fast
note pp'
also from afac p'elem
have "p' divides a" by (rule factors_dividesI) fact+
finally
have "p divides a" by simp
}
moreover
{
assume p'elem: "p' \<in> set bs"
with bscarr have [simp]: "p' \<in> carrier G" by fast
note pp'
also from bfac
have "p' divides b" by (rule factors_dividesI) fact+
finally have "p divides b" by simp
}
ultimately
show "p divides a \<or> p divides b" by fast
qed
qed
qed
subsection {* Greatest Common Divisors and Lowest Common Multiples *}
subsubsection {* Definitions *}
definition
isgcd :: "[('a,_) monoid_scheme, 'a, 'a, 'a] \<Rightarrow> bool" ("(_ gcdof\<index> _ _)" [81,81,81] 80)
where "x gcdof\<^bsub>G\<^esub> a b \<longleftrightarrow> x divides\<^bsub>G\<^esub> a \<and> x divides\<^bsub>G\<^esub> b \<and>
(\<forall>y\<in>carrier G. (y divides\<^bsub>G\<^esub> a \<and> y divides\<^bsub>G\<^esub> b \<longrightarrow> y divides\<^bsub>G\<^esub> x))"
definition
islcm :: "[_, 'a, 'a, 'a] \<Rightarrow> bool" ("(_ lcmof\<index> _ _)" [81,81,81] 80)
where "x lcmof\<^bsub>G\<^esub> a b \<longleftrightarrow> a divides\<^bsub>G\<^esub> x \<and> b divides\<^bsub>G\<^esub> x \<and>
(\<forall>y\<in>carrier G. (a divides\<^bsub>G\<^esub> y \<and> b divides\<^bsub>G\<^esub> y \<longrightarrow> x divides\<^bsub>G\<^esub> y))"
definition
somegcd :: "('a,_) monoid_scheme \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> 'a"
where "somegcd G a b = (SOME x. x \<in> carrier G \<and> x gcdof\<^bsub>G\<^esub> a b)"
definition
somelcm :: "('a,_) monoid_scheme \<Rightarrow> 'a \<Rightarrow> 'a \<Rightarrow> 'a"
where "somelcm G a b = (SOME x. x \<in> carrier G \<and> x lcmof\<^bsub>G\<^esub> a b)"
definition
"SomeGcd G A = inf (division_rel G) A"
locale gcd_condition_monoid = comm_monoid_cancel +
assumes gcdof_exists:
"\<lbrakk>a \<in> carrier G; b \<in> carrier G\<rbrakk> \<Longrightarrow> \<exists>c. c \<in> carrier G \<and> c gcdof a b"
locale primeness_condition_monoid = comm_monoid_cancel +
assumes irreducible_prime:
"\<lbrakk>a \<in> carrier G; irreducible G a\<rbrakk> \<Longrightarrow> prime G a"
locale divisor_chain_condition_monoid = comm_monoid_cancel +
assumes division_wellfounded:
"wf {(x, y). x \<in> carrier G \<and> y \<in> carrier G \<and> properfactor G x y}"
subsubsection {* Connections to \texttt{Lattice.thy} *}
lemma gcdof_greatestLower:
fixes G (structure)
assumes carr[simp]: "a \<in> carrier G" "b \<in> carrier G"
shows "(x \<in> carrier G \<and> x gcdof a b) =
greatest (division_rel G) x (Lower (division_rel G) {a, b})"
unfolding isgcd_def greatest_def Lower_def elem_def
by auto
lemma lcmof_leastUpper:
fixes G (structure)
assumes carr[simp]: "a \<in> carrier G" "b \<in> carrier G"
shows "(x \<in> carrier G \<and> x lcmof a b) =
least (division_rel G) x (Upper (division_rel G) {a, b})"
unfolding islcm_def least_def Upper_def elem_def
by auto
lemma somegcd_meet:
fixes G (structure)
assumes carr: "a \<in> carrier G" "b \<in> carrier G"
shows "somegcd G a b = meet (division_rel G) a b"
unfolding somegcd_def meet_def inf_def
by (simp add: gcdof_greatestLower[OF carr])
lemma (in monoid) isgcd_divides_l:
assumes "a divides b"
and "a \<in> carrier G" "b \<in> carrier G"
shows "a gcdof a b"
using assms
unfolding isgcd_def
by fast
lemma (in monoid) isgcd_divides_r:
assumes "b divides a"
and "a \<in> carrier G" "b \<in> carrier G"
shows "b gcdof a b"
using assms
unfolding isgcd_def
by fast
subsubsection {* Existence of gcd and lcm *}
lemma (in factorial_monoid) gcdof_exists:
assumes acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G"
shows "\<exists>c. c \<in> carrier G \<and> c gcdof a b"
proof -
from acarr have "\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as a" by (rule wfactors_exist)
from this obtain as
where ascarr: "set as \<subseteq> carrier G"
and afs: "wfactors G as a"
by auto
from afs have airr: "\<forall>a \<in> set as. irreducible G a" by (fast elim: wfactorsE)
from bcarr have "\<exists>bs. set bs \<subseteq> carrier G \<and> wfactors G bs b" by (rule wfactors_exist)
from this obtain bs
where bscarr: "set bs \<subseteq> carrier G"
and bfs: "wfactors G bs b"
by auto
from bfs have birr: "\<forall>b \<in> set bs. irreducible G b" by (fast elim: wfactorsE)
have "\<exists>c cs. c \<in> carrier G \<and> set cs \<subseteq> carrier G \<and> wfactors G cs c \<and>
fmset G cs = fmset G as #\<inter> fmset G bs"
proof (intro mset_wfactorsEx)
fix X
assume "X \<in> set_of (fmset G as #\<inter> fmset G bs)"
hence "X \<in> set_of (fmset G as)" by (simp add: multiset_inter_def)
hence "X \<in> set (map (assocs G) as)" by (simp add: fmset_def)
hence "\<exists>x. X = assocs G x \<and> x \<in> set as" by (induct as) auto
from this obtain x
where X: "X = assocs G x"
and xas: "x \<in> set as"
by auto
with ascarr have xcarr: "x \<in> carrier G" by fast
from xas airr have xirr: "irreducible G x" by simp
from xcarr and xirr and X
show "\<exists>x. (x \<in> carrier G \<and> irreducible G x) \<and> X = assocs G x" by fast
qed
from this obtain c cs
where ccarr: "c \<in> carrier G"
and cscarr: "set cs \<subseteq> carrier G"
and csirr: "wfactors G cs c"
and csmset: "fmset G cs = fmset G as #\<inter> fmset G bs" by auto
have "c gcdof a b"
proof (simp add: isgcd_def, safe)
from csmset
have "fmset G cs \<le> fmset G as"
by (simp add: multiset_inter_def mset_le_def)
thus "c divides a" by (rule fmsubset_divides) fact+
next
from csmset
have "fmset G cs \<le> fmset G bs"
by (simp add: multiset_inter_def mset_le_def, force)
thus "c divides b" by (rule fmsubset_divides) fact+
next
fix y
assume ycarr: "y \<in> carrier G"
hence "\<exists>ys. set ys \<subseteq> carrier G \<and> wfactors G ys y" by (rule wfactors_exist)
from this obtain ys
where yscarr: "set ys \<subseteq> carrier G"
and yfs: "wfactors G ys y"
by auto
assume "y divides a"
hence ya: "fmset G ys \<le> fmset G as" by (rule divides_fmsubset) fact+
assume "y divides b"
hence yb: "fmset G ys \<le> fmset G bs" by (rule divides_fmsubset) fact+
from ya yb csmset
have "fmset G ys \<le> fmset G cs" by (simp add: mset_le_def)
thus "y divides c" by (rule fmsubset_divides) fact+
qed
with ccarr
show "\<exists>c. c \<in> carrier G \<and> c gcdof a b" by fast
qed
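text {* Informal illustration (not part of the formal development): for 12 and
  18 in the positive integers the factor multisets are {2, 2, 3} and {2, 3, 3};
  their multiset intersection is {2, 3}, whose product 6 is the gcd, exactly as
  in the construction above. *}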
lemma (in factorial_monoid) lcmof_exists:
assumes acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G"
shows "\<exists>c. c \<in> carrier G \<and> c lcmof a b"
proof -
from acarr have "\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as a" by (rule wfactors_exist)
from this obtain as
where ascarr: "set as \<subseteq> carrier G"
and afs: "wfactors G as a"
by auto
from afs have airr: "\<forall>a \<in> set as. irreducible G a" by (fast elim: wfactorsE)
from bcarr have "\<exists>bs. set bs \<subseteq> carrier G \<and> wfactors G bs b" by (rule wfactors_exist)
from this obtain bs
where bscarr: "set bs \<subseteq> carrier G"
and bfs: "wfactors G bs b"
by auto
from bfs have birr: "\<forall>b \<in> set bs. irreducible G b" by (fast elim: wfactorsE)
have "\<exists>c cs. c \<in> carrier G \<and> set cs \<subseteq> carrier G \<and> wfactors G cs c \<and>
fmset G cs = (fmset G as - fmset G bs) + fmset G bs"
proof (intro mset_wfactorsEx)
fix X
assume "X \<in> set_of ((fmset G as - fmset G bs) + fmset G bs)"
hence "X \<in> set_of (fmset G as) \<or> X \<in> set_of (fmset G bs)"
by (cases "X :# fmset G bs", simp, simp)
moreover
{
assume "X \<in> set_of (fmset G as)"
hence "X \<in> set (map (assocs G) as)" by (simp add: fmset_def)
hence "\<exists>x. x \<in> set as \<and> X = assocs G x" by (induct as) auto
from this obtain x
where xas: "x \<in> set as"
and X: "X = assocs G x" by auto
with ascarr have xcarr: "x \<in> carrier G" by fast
from xas airr have xirr: "irreducible G x" by simp
from xcarr and xirr and X
have "\<exists>x. (x \<in> carrier G \<and> irreducible G x) \<and> X = assocs G x" by fast
}
moreover
{
assume "X \<in> set_of (fmset G bs)"
hence "X \<in> set (map (assocs G) bs)" by (simp add: fmset_def)
hence "\<exists>x. x \<in> set bs \<and> X = assocs G x" by (induct as) auto
from this obtain x
where xbs: "x \<in> set bs"
and X: "X = assocs G x" by auto
with bscarr have xcarr: "x \<in> carrier G" by fast
from xbs birr have xirr: "irreducible G x" by simp
from xcarr and xirr and X
have "\<exists>x. (x \<in> carrier G \<and> irreducible G x) \<and> X = assocs G x" by fast
}
ultimately
show "\<exists>x. (x \<in> carrier G \<and> irreducible G x) \<and> X = assocs G x" by fast
qed
from this obtain c cs
where ccarr: "c \<in> carrier G"
and cscarr: "set cs \<subseteq> carrier G"
and csirr: "wfactors G cs c"
and csmset: "fmset G cs = fmset G as - fmset G bs + fmset G bs" by auto
have "c lcmof a b"
proof (simp add: islcm_def, safe)
from csmset have "fmset G as \<le> fmset G cs" by (simp add: mset_le_def, force)
thus "a divides c" by (rule fmsubset_divides) fact+
next
from csmset have "fmset G bs \<le> fmset G cs" by (simp add: mset_le_def)
thus "b divides c" by (rule fmsubset_divides) fact+
next
fix y
assume ycarr: "y \<in> carrier G"
hence "\<exists>ys. set ys \<subseteq> carrier G \<and> wfactors G ys y" by (rule wfactors_exist)
from this obtain ys
where yscarr: "set ys \<subseteq> carrier G"
and yfs: "wfactors G ys y"
by auto
assume "a divides y"
hence ya: "fmset G as \<le> fmset G ys" by (rule divides_fmsubset) fact+
assume "b divides y"
hence yb: "fmset G bs \<le> fmset G ys" by (rule divides_fmsubset) fact+
from ya yb csmset
have "fmset G cs \<le> fmset G ys"
apply (simp add: mset_le_def, clarify)
apply (case_tac "count (fmset G as) a < count (fmset G bs) a")
apply simp
apply simp
done
thus "c divides y" by (rule fmsubset_divides) fact+
qed
with ccarr
show "\<exists>c. c \<in> carrier G \<and> c lcmof a b" by fast
qed
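text {* Informal illustration (not part of the formal development): continuing
  the example of 12 and 18, the construction forms
  ({2, 2, 3} - {2, 3, 3}) + {2, 3, 3} = {2} + {2, 3, 3} = {2, 2, 3, 3},
  whose product 36 is the lcm. *}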
subsection {* Conditions for Factoriality *}
subsubsection {* Gcd condition *}
lemma (in gcd_condition_monoid) division_weak_lower_semilattice [simp]:
shows "weak_lower_semilattice (division_rel G)"
proof -
interpret weak_partial_order "division_rel G" ..
show ?thesis
apply (unfold_locales, simp_all)
proof -
fix x y
assume carr: "x \<in> carrier G" "y \<in> carrier G"
hence "\<exists>z. z \<in> carrier G \<and> z gcdof x y" by (rule gcdof_exists)
from this obtain z
where zcarr: "z \<in> carrier G"
and isgcd: "z gcdof x y"
by auto
with carr
have "greatest (division_rel G) z (Lower (division_rel G) {x, y})"
by (subst gcdof_greatestLower[symmetric], simp+)
thus "\<exists>z. greatest (division_rel G) z (Lower (division_rel G) {x, y})" by fast
qed
qed
lemma (in gcd_condition_monoid) gcdof_cong_l:
assumes a'a: "a' \<sim> a"
and agcd: "a gcdof b c"
and a'carr: "a' \<in> carrier G" and carr': "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "a' gcdof b c"
proof -
note carr = a'carr carr'
interpret weak_lower_semilattice "division_rel G" by simp
have "a' \<in> carrier G \<and> a' gcdof b c"
apply (simp add: gcdof_greatestLower carr')
apply (subst greatest_Lower_cong_l[of _ a])
apply (simp add: a'a)
apply (simp add: carr)
apply (simp add: carr)
apply (simp add: carr)
apply (simp add: gcdof_greatestLower[symmetric] agcd carr)
done
thus ?thesis ..
qed
lemma (in gcd_condition_monoid) gcd_closed [simp]:
assumes carr: "a \<in> carrier G" "b \<in> carrier G"
shows "somegcd G a b \<in> carrier G"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show ?thesis
apply (simp add: somegcd_meet[OF carr])
apply (rule meet_closed[simplified], fact+)
done
qed
lemma (in gcd_condition_monoid) gcd_isgcd:
assumes carr: "a \<in> carrier G" "b \<in> carrier G"
shows "(somegcd G a b) gcdof a b"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
from carr
have "somegcd G a b \<in> carrier G \<and> (somegcd G a b) gcdof a b"
apply (subst gcdof_greatestLower, simp, simp)
apply (simp add: somegcd_meet[OF carr] meet_def)
apply (rule inf_of_two_greatest[simplified], assumption+)
done
thus "(somegcd G a b) gcdof a b" by simp
qed
lemma (in gcd_condition_monoid) gcd_exists:
assumes carr: "a \<in> carrier G" "b \<in> carrier G"
shows "\<exists>x\<in>carrier G. x = somegcd G a b"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show ?thesis
by (metis carr(1) carr(2) gcd_closed)
qed
lemma (in gcd_condition_monoid) gcd_divides_l:
assumes carr: "a \<in> carrier G" "b \<in> carrier G"
shows "(somegcd G a b) divides a"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show ?thesis
by (metis carr(1) carr(2) gcd_isgcd isgcd_def)
qed
lemma (in gcd_condition_monoid) gcd_divides_r:
assumes carr: "a \<in> carrier G" "b \<in> carrier G"
shows "(somegcd G a b) divides b"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show ?thesis
by (metis carr gcd_isgcd isgcd_def)
qed
lemma (in gcd_condition_monoid) gcd_divides:
assumes sub: "z divides x" "z divides y"
and L: "x \<in> carrier G" "y \<in> carrier G" "z \<in> carrier G"
shows "z divides (somegcd G x y)"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show ?thesis
by (metis gcd_isgcd isgcd_def assms)
qed
lemma (in gcd_condition_monoid) gcd_cong_l:
assumes xx': "x \<sim> x'"
and carr: "x \<in> carrier G" "x' \<in> carrier G" "y \<in> carrier G"
shows "somegcd G x y \<sim> somegcd G x' y"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show ?thesis
apply (simp add: somegcd_meet carr)
apply (rule meet_cong_l[simplified], fact+)
done
qed
lemma (in gcd_condition_monoid) gcd_cong_r:
assumes carr: "x \<in> carrier G" "y \<in> carrier G" "y' \<in> carrier G"
and yy': "y \<sim> y'"
shows "somegcd G x y \<sim> somegcd G x y'"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show ?thesis
apply (simp add: somegcd_meet carr)
apply (rule meet_cong_r[simplified], fact+)
done
qed
(*
lemma (in gcd_condition_monoid) asc_cong_gcd_l [intro]:
assumes carr: "b \<in> carrier G"
shows "asc_cong (\<lambda>a. somegcd G a b)"
using carr
unfolding CONG_def
by clarsimp (blast intro: gcd_cong_l)
lemma (in gcd_condition_monoid) asc_cong_gcd_r [intro]:
assumes carr: "a \<in> carrier G"
shows "asc_cong (\<lambda>b. somegcd G a b)"
using carr
unfolding CONG_def
by clarsimp (blast intro: gcd_cong_r)
lemmas (in gcd_condition_monoid) asc_cong_gcd_split [simp] =
assoc_split[OF _ asc_cong_gcd_l] assoc_split[OF _ asc_cong_gcd_r]
*)
lemma (in gcd_condition_monoid) gcdI:
assumes dvd: "a divides b" "a divides c"
and others: "\<forall>y\<in>carrier G. y divides b \<and> y divides c \<longrightarrow> y divides a"
and acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G" and ccarr: "c \<in> carrier G"
shows "a \<sim> somegcd G b c"
apply (simp add: somegcd_def)
apply (rule someI2_ex)
apply (rule exI[of _ a], simp add: isgcd_def)
apply (simp add: assms)
apply (simp add: isgcd_def assms, clarify)
apply (insert assms, blast intro: associatedI)
done
lemma (in gcd_condition_monoid) gcdI2:
assumes "a gcdof b c"
and "a \<in> carrier G" and bcarr: "b \<in> carrier G" and ccarr: "c \<in> carrier G"
shows "a \<sim> somegcd G b c"
using assms
unfolding isgcd_def
by (blast intro: gcdI)
lemma (in gcd_condition_monoid) SomeGcd_ex:
assumes "finite A" "A \<subseteq> carrier G" "A \<noteq> {}"
shows "\<exists>x\<in> carrier G. x = SomeGcd G A"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show ?thesis
apply (simp add: SomeGcd_def)
apply (rule finite_inf_closed[simplified], fact+)
done
qed
lemma (in gcd_condition_monoid) gcd_assoc:
assumes carr: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "somegcd G (somegcd G a b) c \<sim> somegcd G a (somegcd G b c)"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show ?thesis
apply (subst (2 3) somegcd_meet, (simp add: carr)+)
apply (simp add: somegcd_meet carr)
apply (rule weak_meet_assoc[simplified], fact+)
done
qed
lemma (in gcd_condition_monoid) gcd_mult:
assumes acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G" and ccarr: "c \<in> carrier G"
shows "c \<otimes> somegcd G a b \<sim> somegcd G (c \<otimes> a) (c \<otimes> b)"
proof - (* following Jacobson, Basic Algebra, p.140 *)
let ?d = "somegcd G a b"
let ?e = "somegcd G (c \<otimes> a) (c \<otimes> b)"
note carr[simp] = acarr bcarr ccarr
have dcarr: "?d \<in> carrier G" by simp
have ecarr: "?e \<in> carrier G" by simp
note carr = carr dcarr ecarr
have "?d divides a" by (simp add: gcd_divides_l)
hence cd'ca: "c \<otimes> ?d divides (c \<otimes> a)" by (simp add: divides_mult_lI)
have "?d divides b" by (simp add: gcd_divides_r)
hence cd'cb: "c \<otimes> ?d divides (c \<otimes> b)" by (simp add: divides_mult_lI)
from cd'ca cd'cb
have cd'e: "c \<otimes> ?d divides ?e"
by (rule gcd_divides) simp+
hence "\<exists>u. u \<in> carrier G \<and> ?e = c \<otimes> ?d \<otimes> u"
by (elim dividesE, fast)
from this obtain u
where ucarr[simp]: "u \<in> carrier G"
and e_cdu: "?e = c \<otimes> ?d \<otimes> u"
by auto
note carr = carr ucarr
have "?e divides c \<otimes> a" by (rule gcd_divides_l) simp+
hence "\<exists>x. x \<in> carrier G \<and> c \<otimes> a = ?e \<otimes> x"
by (elim dividesE, fast)
from this obtain x
where xcarr: "x \<in> carrier G"
and ca_ex: "c \<otimes> a = ?e \<otimes> x"
by auto
with e_cdu
have ca_cdux: "c \<otimes> a = c \<otimes> ?d \<otimes> u \<otimes> x" by simp
from ca_cdux xcarr
have "c \<otimes> a = c \<otimes> (?d \<otimes> u \<otimes> x)" by (simp add: m_assoc)
then have "a = ?d \<otimes> u \<otimes> x" by (rule l_cancel[of c a]) (simp add: xcarr)+
hence du'a: "?d \<otimes> u divides a" by (rule dividesI[OF xcarr])
have "?e divides c \<otimes> b" by (intro gcd_divides_r, simp+)
hence "\<exists>x. x \<in> carrier G \<and> c \<otimes> b = ?e \<otimes> x"
by (elim dividesE, fast)
from this obtain x
where xcarr: "x \<in> carrier G"
and cb_ex: "c \<otimes> b = ?e \<otimes> x"
by auto
with e_cdu
have cb_cdux: "c \<otimes> b = c \<otimes> ?d \<otimes> u \<otimes> x" by simp
from cb_cdux xcarr
have "c \<otimes> b = c \<otimes> (?d \<otimes> u \<otimes> x)" by (simp add: m_assoc)
with xcarr
have "b = ?d \<otimes> u \<otimes> x" by (intro l_cancel[of c b], simp+)
hence du'b: "?d \<otimes> u divides b" by (intro dividesI[OF xcarr])
from du'a du'b carr
have du'd: "?d \<otimes> u divides ?d"
by (intro gcd_divides, simp+)
hence uunit: "u \<in> Units G"
proof (elim dividesE)
fix v
assume vcarr[simp]: "v \<in> carrier G"
assume d: "?d = ?d \<otimes> u \<otimes> v"
have "?d \<otimes> \<one> = ?d \<otimes> u \<otimes> v" by simp fact
also have "?d \<otimes> u \<otimes> v = ?d \<otimes> (u \<otimes> v)" by (simp add: m_assoc)
finally have "?d \<otimes> \<one> = ?d \<otimes> (u \<otimes> v)" .
hence i2: "\<one> = u \<otimes> v" by (rule l_cancel) simp+
hence i1: "\<one> = v \<otimes> u" by (simp add: m_comm)
from vcarr i1[symmetric] i2[symmetric]
show "u \<in> Units G"
by (unfold Units_def, simp, fast)
qed
from e_cdu uunit
have "somegcd G (c \<otimes> a) (c \<otimes> b) \<sim> c \<otimes> somegcd G a b"
by (intro associatedI2[of u], simp+)
from this[symmetric]
show "c \<otimes> somegcd G a b \<sim> somegcd G (c \<otimes> a) (c \<otimes> b)" by simp
qed
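text {* Informal illustration (not part of the formal development): in the
  positive integers, gcd 4 6 = 2 and gcd (3 * 4) (3 * 6) = gcd 12 18 = 6 =
  3 * 2, matching the associatedness claimed by the lemma. *}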
lemma (in monoid) assoc_subst:
assumes ab: "a \<sim> b"
and cP: "ALL a b. a : carrier G & b : carrier G & a \<sim> b
--> f a : carrier G & f b : carrier G & f a \<sim> f b"
and carr: "a \<in> carrier G" "b \<in> carrier G"
shows "f a \<sim> f b"
using assms by auto
lemma (in gcd_condition_monoid) relprime_mult:
assumes abrelprime: "somegcd G a b \<sim> \<one>" and acrelprime: "somegcd G a c \<sim> \<one>"
and carr[simp]: "a \<in> carrier G" "b \<in> carrier G" "c \<in> carrier G"
shows "somegcd G a (b \<otimes> c) \<sim> \<one>"
proof -
have "c = c \<otimes> \<one>" by simp
also from abrelprime[symmetric]
have "\<dots> \<sim> c \<otimes> somegcd G a b"
by (rule assoc_subst) (simp add: mult_cong_r)+
also have "\<dots> \<sim> somegcd G (c \<otimes> a) (c \<otimes> b)" by (rule gcd_mult) fact+
finally
have c: "c \<sim> somegcd G (c \<otimes> a) (c \<otimes> b)" by simp
from carr
have a: "a \<sim> somegcd G a (c \<otimes> a)"
by (fast intro: gcdI divides_prod_l)
have "somegcd G a (b \<otimes> c) \<sim> somegcd G a (c \<otimes> b)" by (simp add: m_comm)
also from a
have "\<dots> \<sim> somegcd G (somegcd G a (c \<otimes> a)) (c \<otimes> b)"
by (rule assoc_subst) (simp add: gcd_cong_l)+
also from gcd_assoc
have "\<dots> \<sim> somegcd G a (somegcd G (c \<otimes> a) (c \<otimes> b))"
by (rule assoc_subst) simp+
also from c[symmetric]
have "\<dots> \<sim> somegcd G a c"
by (rule assoc_subst) (simp add: gcd_cong_r)+
also note acrelprime
finally
show "somegcd G a (b \<otimes> c) \<sim> \<one>" by simp
qed
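text {* Informal illustration (not part of the formal development): with a = 9,
  b = 4 and c = 5 in the positive integers, gcd 9 4 = 1 and gcd 9 5 = 1, and
  indeed gcd 9 (4 * 5) = gcd 9 20 = 1. *}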
lemma (in gcd_condition_monoid) primeness_condition:
"primeness_condition_monoid G"
apply unfold_locales
apply (rule primeI)
apply (elim irreducibleE, assumption)
proof -
fix p a b
assume pcarr: "p \<in> carrier G" and acarr: "a \<in> carrier G" and bcarr: "b \<in> carrier G"
and pirr: "irreducible G p"
and pdvdab: "p divides a \<otimes> b"
from pirr
have pnunit: "p \<notin> Units G"
and r[rule_format]: "\<forall>b. b \<in> carrier G \<and> properfactor G b p \<longrightarrow> b \<in> Units G"
by - (fast elim: irreducibleE)+
show "p divides a \<or> p divides b"
proof (rule ccontr, clarsimp)
assume npdvda: "\<not> p divides a"
with pcarr acarr
have "\<one> \<sim> somegcd G p a"
apply (intro gcdI, simp, simp, simp)
apply (fast intro: unit_divides)
apply (fast intro: unit_divides)
apply (clarsimp simp add: Unit_eq_dividesone[symmetric])
apply (rule r, rule, assumption)
apply (rule properfactorI, assumption)
proof (rule ccontr, simp)
fix y
assume ycarr: "y \<in> carrier G"
assume "p divides y"
also assume "y divides a"
finally
have "p divides a" by (simp add: pcarr ycarr acarr)
with npdvda
show "False" ..
qed simp+
with pcarr acarr
have pa: "somegcd G p a \<sim> \<one>" by (fast intro: associated_sym[of "\<one>"] gcd_closed)
assume npdvdb: "\<not> p divides b"
with pcarr bcarr
have "\<one> \<sim> somegcd G p b"
apply (intro gcdI, simp, simp, simp)
apply (fast intro: unit_divides)
apply (fast intro: unit_divides)
apply (clarsimp simp add: Unit_eq_dividesone[symmetric])
apply (rule r, rule, assumption)
apply (rule properfactorI, assumption)
proof (rule ccontr, simp)
fix y
assume ycarr: "y \<in> carrier G"
assume "p divides y"
also assume "y divides b"
finally have "p divides b" by (simp add: pcarr ycarr bcarr)
with npdvdb
show "False" ..
qed simp+
with pcarr bcarr
have pb: "somegcd G p b \<sim> \<one>" by (fast intro: associated_sym[of "\<one>"] gcd_closed)
from pcarr acarr bcarr pdvdab
have "p gcdof p (a \<otimes> b)" by (fast intro: isgcd_divides_l)
with pcarr acarr bcarr
have "p \<sim> somegcd G p (a \<otimes> b)" by (fast intro: gcdI2)
also from pa pb pcarr acarr bcarr
have "somegcd G p (a \<otimes> b) \<sim> \<one>" by (rule relprime_mult)
finally have "p \<sim> \<one>" by (simp add: pcarr acarr bcarr)
with pcarr
have "p \<in> Units G" by (fast intro: assoc_unit_l)
with pnunit
show "False" ..
qed
qed
sublocale gcd_condition_monoid \<subseteq> primeness_condition_monoid
by (rule primeness_condition)
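text {* The sublocale declaration registers every @{text gcd_condition_monoid}
  as a @{text primeness_condition_monoid}, so its assumption
  @{text irreducible_prime} becomes available in every gcd monoid. *}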
subsubsection {* Divisor chain condition *}
lemma (in divisor_chain_condition_monoid) wfactors_exist:
assumes acarr: "a \<in> carrier G"
shows "\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as a"
proof -
have r[rule_format]: "a \<in> carrier G \<longrightarrow> (\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as a)"
apply (rule wf_induct[OF division_wellfounded])
proof -
fix x
assume ih: "\<forall>y. (y, x) \<in> {(x, y). x \<in> carrier G \<and> y \<in> carrier G \<and> properfactor G x y}
\<longrightarrow> y \<in> carrier G \<longrightarrow> (\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as y)"
show "x \<in> carrier G \<longrightarrow> (\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as x)"
apply clarify
apply (cases "x \<in> Units G")
apply (rule exI[of _ "[]"], simp)
apply (cases "irreducible G x")
apply (rule exI[of _ "[x]"], simp add: wfactors_def)
proof -
assume xcarr: "x \<in> carrier G"
and xnunit: "x \<notin> Units G"
and xnirr: "\<not> irreducible G x"
hence "\<exists>y. y \<in> carrier G \<and> properfactor G y x \<and> y \<notin> Units G"
apply - apply (rule ccontr, simp)
apply (subgoal_tac "irreducible G x", simp)
apply (rule irreducibleI, simp, simp)
done
from this obtain y
where ycarr: "y \<in> carrier G"
and ynunit: "y \<notin> Units G"
and pfyx: "properfactor G y x"
by auto
have ih':
"\<And>y. \<lbrakk>y \<in> carrier G; properfactor G y x\<rbrakk>
\<Longrightarrow> \<exists>as. set as \<subseteq> carrier G \<and> wfactors G as y"
by (rule ih[rule_format, simplified]) (simp add: xcarr)+
from ycarr pfyx
have "\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as y"
by (rule ih')
from this obtain ys
where yscarr: "set ys \<subseteq> carrier G"
and yfs: "wfactors G ys y"
by auto
from pfyx
have "y divides x"
and nyx: "\<not> y \<sim> x"
by - (fast elim: properfactorE2)+
hence "\<exists>z. z \<in> carrier G \<and> x = y \<otimes> z"
by fast
from this obtain z
where zcarr: "z \<in> carrier G"
and x: "x = y \<otimes> z"
by auto
from zcarr ycarr
have "properfactor G z x"
apply (subst x)
apply (intro properfactorI3[of _ _ y])
apply (simp add: m_comm)
apply (simp add: ynunit)+
done
with zcarr
have "\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as z"
by (rule ih')
from this obtain zs
where zscarr: "set zs \<subseteq> carrier G"
and zfs: "wfactors G zs z"
by auto
from yscarr zscarr
have xscarr: "set (ys@zs) \<subseteq> carrier G" by simp
from yfs zfs ycarr zcarr yscarr zscarr
have "wfactors G (ys@zs) (y\<otimes>z)" by (rule wfactors_mult)
hence "wfactors G (ys@zs) x" by (simp add: x)
from xscarr this
show "\<exists>xs. set xs \<subseteq> carrier G \<and> wfactors G xs x" by fast
qed
qed
from acarr
show ?thesis by (rule r)
qed
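text {* Proof idea of the lemma above: well-founded induction over the proper
  factor relation.  A unit has the empty factorization, an irreducible element
  is factored by the singleton list containing itself, and any remaining
  element x splits as the product of two proper factors y and z, so the
  factorizations obtained from the induction hypothesis can be concatenated. *}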
subsubsection {* Primeness condition *}
lemma (in comm_monoid_cancel) multlist_prime_pos:
assumes carr: "a \<in> carrier G" "set as \<subseteq> carrier G"
and aprime: "prime G a"
and "a divides (foldr (op \<otimes>) as \<one>)"
shows "\<exists>i<length as. a divides (as!i)"
proof -
have r[rule_format]:
"set as \<subseteq> carrier G \<and> a divides (foldr (op \<otimes>) as \<one>)
\<longrightarrow> (\<exists>i. i < length as \<and> a divides (as!i))"
apply (induct as)
apply clarsimp defer 1
apply clarsimp defer 1
proof -
assume "a divides \<one>"
with carr
have "a \<in> Units G"
by (fast intro: divides_unit[of a \<one>])
with aprime
show "False" by (elim primeE, simp)
next
fix aa as
assume ih[rule_format]: "a divides foldr op \<otimes> as \<one> \<longrightarrow> (\<exists>i<length as. a divides as ! i)"
and carr': "aa \<in> carrier G" "set as \<subseteq> carrier G"
and "a divides aa \<otimes> foldr op \<otimes> as \<one>"
with carr aprime
have "a divides aa \<or> a divides foldr op \<otimes> as \<one>"
by (intro prime_divides) simp+
moreover {
assume "a divides aa"
hence p1: "a divides (aa#as)!0" by simp
have "0 < Suc (length as)" by simp
with p1 have "\<exists>i<Suc (length as). a divides (aa # as) ! i" by fast
}
moreover {
assume "a divides foldr op \<otimes> as \<one>"
hence "\<exists>i. i < length as \<and> a divides as ! i" by (rule ih)
from this obtain i where "a divides as ! i" and len: "i < length as" by auto
hence p1: "a divides (aa#as) ! (Suc i)" by simp
from len have "Suc i < Suc (length as)" by simp
with p1 have "\<exists>i<Suc (length as). a divides (aa # as) ! i" by force
}
ultimately
show "\<exists>i<Suc (length as). a divides (aa # as) ! i" by fast
qed
from assms
show ?thesis
by (intro r, safe)
qed
lemma (in primeness_condition_monoid) wfactors_unique__hlp_induct:
"\<forall>a as'. a \<in> carrier G \<and> set as \<subseteq> carrier G \<and> set as' \<subseteq> carrier G \<and>
wfactors G as a \<and> wfactors G as' a \<longrightarrow> essentially_equal G as as'"
proof (induct as)
case Nil show ?case apply auto
proof -
fix a as'
assume a: "a \<in> carrier G"
assume "wfactors G [] a"
then obtain "\<one> \<sim> a" by (auto elim: wfactorsE)
with a have "a \<in> Units G" by (auto intro: assoc_unit_r)
moreover assume "wfactors G as' a"
moreover assume "set as' \<subseteq> carrier G"
ultimately have "as' = []" by (rule unit_wfactors_empty)
then show "essentially_equal G [] as'" by simp
qed
next
case (Cons ah as) then show ?case apply clarsimp
proof -
fix a as'
assume ih [rule_format]:
"\<forall>a as'. a \<in> carrier G \<and> set as' \<subseteq> carrier G \<and> wfactors G as a \<and>
wfactors G as' a \<longrightarrow> essentially_equal G as as'"
and acarr: "a \<in> carrier G" and ahcarr: "ah \<in> carrier G"
and ascarr: "set as \<subseteq> carrier G" and as'carr: "set as' \<subseteq> carrier G"
and afs: "wfactors G (ah # as) a"
and afs': "wfactors G as' a"
hence ahdvda: "ah divides a"
by (intro wfactors_dividesI[of "ah#as" "a"], simp+)
hence "\<exists>a'\<in> carrier G. a = ah \<otimes> a'" by fast
from this obtain a'
where a'carr: "a' \<in> carrier G"
and a: "a = ah \<otimes> a'"
by auto
have a'fs: "wfactors G as a'"
apply (rule wfactorsE[OF afs], rule wfactorsI, simp)
apply (simp add: a, insert ascarr a'carr)
apply (intro assoc_l_cancel[of ah _ a'] multlist_closed ahcarr, assumption+)
done
from afs have ahirr: "irreducible G ah" by (elim wfactorsE, simp)
with ascarr have ahprime: "prime G ah" by (intro irreducible_prime ahcarr)
note carr [simp] = acarr ahcarr ascarr as'carr a'carr
note ahdvda
also from afs'
have "a divides (foldr (op \<otimes>) as' \<one>)"
by (elim wfactorsE associatedE, simp)
finally have "ah divides (foldr (op \<otimes>) as' \<one>)" by simp
with ahprime
have "\<exists>i<length as'. ah divides as'!i"
by (intro multlist_prime_pos, simp+)
from this obtain i
where len: "i<length as'" and ahdvd: "ah divides as'!i"
by auto
from afs' carr have irrasi: "irreducible G (as'!i)"
by (fast intro: nth_mem[OF len] elim: wfactorsE)
from len carr
have asicarr[simp]: "as'!i \<in> carrier G" by (unfold set_conv_nth, force)
note carr = carr asicarr
from ahdvd have "\<exists>x \<in> carrier G. as'!i = ah \<otimes> x" by fast
from this obtain x where "x \<in> carrier G" and asi: "as'!i = ah \<otimes> x" by auto
with carr irrasi[simplified asi]
have asiah: "as'!i \<sim> ah" apply -
apply (elim irreducible_prodE[of "ah" "x"], assumption+)
apply (rule associatedI2[of x], assumption+)
apply (rule irreducibleE[OF ahirr], simp)
done
note setparts = set_take_subset[of i as'] set_drop_subset[of "Suc i" as']
note partscarr [simp] = setparts[THEN subset_trans[OF _ as'carr]]
note carr = carr partscarr
have "\<exists>aa_1. aa_1 \<in> carrier G \<and> wfactors G (take i as') aa_1"
apply (intro wfactors_prod_exists)
using setparts afs' by (fast elim: wfactorsE, simp)
from this obtain aa_1
where aa1carr: "aa_1 \<in> carrier G"
and aa1fs: "wfactors G (take i as') aa_1"
by auto
have "\<exists>aa_2. aa_2 \<in> carrier G \<and> wfactors G (drop (Suc i) as') aa_2"
apply (intro wfactors_prod_exists)
using setparts afs' by (fast elim: wfactorsE, simp)
from this obtain aa_2
where aa2carr: "aa_2 \<in> carrier G"
and aa2fs: "wfactors G (drop (Suc i) as') aa_2"
by auto
note carr = carr aa1carr[simp] aa2carr[simp]
from aa1fs aa2fs
have v1: "wfactors G (take i as' @ drop (Suc i) as') (aa_1 \<otimes> aa_2)"
by (intro wfactors_mult, simp+)
hence v1': "wfactors G (as'!i # take i as' @ drop (Suc i) as') (as'!i \<otimes> (aa_1 \<otimes> aa_2))"
apply (intro wfactors_mult_single)
using setparts afs'
by (fast intro: nth_mem[OF len] elim: wfactorsE, simp+)
from aa2carr carr aa1fs aa2fs
have "wfactors G (as'!i # drop (Suc i) as') (as'!i \<otimes> aa_2)"
by (metis irrasi wfactors_mult_single)
with len carr aa1carr aa2carr aa1fs
have v2: "wfactors G (take i as' @ as'!i # drop (Suc i) as') (aa_1 \<otimes> (as'!i \<otimes> aa_2))"
apply (intro wfactors_mult)
apply fast
apply (simp, (fast intro: nth_mem[OF len])?)+
done
from len
have as': "as' = (take i as' @ as'!i # drop (Suc i) as')"
by (simp add: Cons_nth_drop_Suc)
with carr
have eer: "essentially_equal G (take i as' @ as'!i # drop (Suc i) as') as'"
by simp
with v2 afs' carr aa1carr aa2carr nth_mem[OF len]
have "aa_1 \<otimes> (as'!i \<otimes> aa_2) \<sim> a"
by (metis as' ee_wfactorsD m_closed)
then
have t1: "as'!i \<otimes> (aa_1 \<otimes> aa_2) \<sim> a"
by (metis aa1carr aa2carr asicarr m_lcomm)
from carr asiah
have "ah \<otimes> (aa_1 \<otimes> aa_2) \<sim> as'!i \<otimes> (aa_1 \<otimes> aa_2)"
by (metis associated_sym m_closed mult_cong_l)
also note t1
finally
have "ah \<otimes> (aa_1 \<otimes> aa_2) \<sim> a" by simp
with carr aa1carr aa2carr a'carr nth_mem[OF len]
have a': "aa_1 \<otimes> aa_2 \<sim> a'"
by (simp add: a, fast intro: assoc_l_cancel[of ah _ a'])
note v1
also note a'
finally have "wfactors G (take i as' @ drop (Suc i) as') a'" by simp
from a'fs this carr
have "essentially_equal G as (take i as' @ drop (Suc i) as')"
by (intro ih[of a']) simp
hence ee1: "essentially_equal G (ah # as) (ah # take i as' @ drop (Suc i) as')"
apply (elim essentially_equalE) apply (fastforce intro: essentially_equalI)
done
from carr
have ee2: "essentially_equal G (ah # take i as' @ drop (Suc i) as')
(as' ! i # take i as' @ drop (Suc i) as')"
proof (intro essentially_equalI)
show "ah # take i as' @ drop (Suc i) as' <~~> ah # take i as' @ drop (Suc i) as'"
by simp
next
show "ah # take i as' @ drop (Suc i) as' [\<sim>] as' ! i # take i as' @ drop (Suc i) as'"
apply (simp add: list_all2_append)
apply (simp add: asiah[symmetric])
done
qed
note ee1
also note ee2
also have "essentially_equal G (as' ! i # take i as' @ drop (Suc i) as')
(take i as' @ as' ! i # drop (Suc i) as')"
apply (intro essentially_equalI)
apply (subgoal_tac "as' ! i # take i as' @ drop (Suc i) as' <~~>
take i as' @ as' ! i # drop (Suc i) as'")
apply simp
apply (rule perm_append_Cons)
apply simp
done
finally
have "essentially_equal G (ah # as) (take i as' @ as' ! i # drop (Suc i) as')" by simp
then show "essentially_equal G (ah # as) as'" by (subst as', assumption)
qed
qed
lemma (in primeness_condition_monoid) wfactors_unique:
assumes "wfactors G as a" "wfactors G as' a"
and "a \<in> carrier G" "set as \<subseteq> carrier G" "set as' \<subseteq> carrier G"
shows "essentially_equal G as as'"
apply (rule wfactors_unique__hlp_induct[rule_format, of a])
apply (simp add: assms)
done
subsubsection {* Application to factorial monoids *}
text {* Number of factors for wellfoundedness *}
definition
factorcount :: "_ \<Rightarrow> 'a \<Rightarrow> nat" where
"factorcount G a =
(THE c. (ALL as. set as \<subseteq> carrier G \<and> wfactors G as a \<longrightarrow> c = length as))"
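text {* The number of factors of an element is well defined: any two of its
  factorisations are essentially equal, and essentially equal lists have the
  same length, as the following lemmas show.  Below, factorcount serves as a
  measure witnessing wellfoundedness of the proper-factor relation. *}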
lemma (in monoid) ee_length:
assumes ee: "essentially_equal G as bs"
shows "length as = length bs"
apply (rule essentially_equalE[OF ee])
apply (metis list_all2_conv_all_nth perm_length)
done
lemma (in factorial_monoid) factorcount_exists:
assumes carr[simp]: "a \<in> carrier G"
shows "EX c. ALL as. set as \<subseteq> carrier G \<and> wfactors G as a \<longrightarrow> c = length as"
proof -
have "\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as a" by (intro wfactors_exist, simp)
from this obtain as
where ascarr[simp]: "set as \<subseteq> carrier G"
and afs: "wfactors G as a"
by (auto simp del: carr)
have "ALL as'. set as' \<subseteq> carrier G \<and> wfactors G as' a \<longrightarrow> length as = length as'"
by (metis afs ascarr assms ee_length wfactors_unique)
thus "EX c. ALL as'. set as' \<subseteq> carrier G \<and> wfactors G as' a \<longrightarrow> c = length as'" ..
qed
lemma (in factorial_monoid) factorcount_unique:
assumes afs: "wfactors G as a"
and acarr[simp]: "a \<in> carrier G" and ascarr[simp]: "set as \<subseteq> carrier G"
shows "factorcount G a = length as"
proof -
have "EX ac. ALL as. set as \<subseteq> carrier G \<and> wfactors G as a \<longrightarrow> ac = length as" by (rule factorcount_exists, simp)
from this obtain ac where
alen: "ALL as. set as \<subseteq> carrier G \<and> wfactors G as a \<longrightarrow> ac = length as"
by auto
have ac: "ac = factorcount G a"
apply (simp add: factorcount_def)
apply (rule theI2)
apply (rule alen)
apply (metis afs alen ascarr)+
done
from ascarr afs have "ac = length as" by (iprover intro: alen[rule_format])
with ac show ?thesis by simp
qed
lemma (in factorial_monoid) divides_fcount:
assumes dvd: "a divides b"
and acarr: "a \<in> carrier G" and bcarr:"b \<in> carrier G"
shows "factorcount G a <= factorcount G b"
apply (rule dividesE[OF dvd])
proof -
fix c
from assms
have "\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as a" by fast
from this obtain as
where ascarr: "set as \<subseteq> carrier G"
and afs: "wfactors G as a"
by auto
with acarr have fca: "factorcount G a = length as" by (intro factorcount_unique)
assume ccarr: "c \<in> carrier G"
hence "\<exists>cs. set cs \<subseteq> carrier G \<and> wfactors G cs c" by fast
from this obtain cs
where cscarr: "set cs \<subseteq> carrier G"
and cfs: "wfactors G cs c"
by auto
note [simp] = acarr bcarr ccarr ascarr cscarr
assume b: "b = a \<otimes> c"
from afs cfs
have "wfactors G (as@cs) (a \<otimes> c)" by (intro wfactors_mult, simp+)
with b have "wfactors G (as@cs) b" by simp
hence "factorcount G b = length (as@cs)" by (intro factorcount_unique, simp+)
hence "factorcount G b = length as + length cs" by simp
with fca show ?thesis by simp
qed
lemma (in factorial_monoid) associated_fcount:
assumes acarr: "a \<in> carrier G" and bcarr:"b \<in> carrier G"
and asc: "a \<sim> b"
shows "factorcount G a = factorcount G b"
apply (rule associatedE[OF asc])
apply (drule divides_fcount[OF _ acarr bcarr])
apply (drule divides_fcount[OF _ bcarr acarr])
apply simp
done
lemma (in factorial_monoid) properfactor_fcount:
assumes acarr: "a \<in> carrier G" and bcarr:"b \<in> carrier G"
and pf: "properfactor G a b"
shows "factorcount G a < factorcount G b"
apply (rule properfactorE[OF pf], elim dividesE)
proof -
fix c
from assms
have "\<exists>as. set as \<subseteq> carrier G \<and> wfactors G as a" by fast
from this obtain as
where ascarr: "set as \<subseteq> carrier G"
and afs: "wfactors G as a"
by auto
with acarr have fca: "factorcount G a = length as" by (intro factorcount_unique)
assume ccarr: "c \<in> carrier G"
hence "\<exists>cs. set cs \<subseteq> carrier G \<and> wfactors G cs c" by fast
from this obtain cs
where cscarr: "set cs \<subseteq> carrier G"
and cfs: "wfactors G cs c"
by auto
assume b: "b = a \<otimes> c"
have "wfactors G (as@cs) (a \<otimes> c)" by (rule wfactors_mult) fact+
with b
have "wfactors G (as@cs) b" by simp
with ascarr cscarr bcarr
have "factorcount G b = length (as@cs)" by (simp add: factorcount_unique)
hence fcb: "factorcount G b = length as + length cs" by simp
assume nbdvda: "\<not> b divides a"
have "c \<notin> Units G"
proof (rule ccontr, simp)
assume cunit:"c \<in> Units G"
have "b \<otimes> inv c = a \<otimes> c \<otimes> inv c" by (simp add: b)
also from ccarr acarr cunit
have "\<dots> = a \<otimes> (c \<otimes> inv c)" by (fast intro: m_assoc)
also from ccarr cunit
have "\<dots> = a \<otimes> \<one>" by simp
also from acarr
have "\<dots> = a" by simp
finally have "a = b \<otimes> inv c" by simp
with ccarr cunit
have "b divides a" by (fast intro: dividesI[of "inv c"])
with nbdvda show False by simp
qed
with cfs have "length cs > 0"
apply -
apply (rule ccontr, simp)
apply (metis Units_one_closed ccarr cscarr l_one one_closed properfactorI3 properfactor_fmset unit_wfactors)
done
with fca fcb show ?thesis by simp
qed
sublocale factorial_monoid \<subseteq> divisor_chain_condition_monoid
apply unfold_locales
apply (rule wfUNIVI)
apply (rule measure_induct[of "factorcount G"])
apply simp
apply (metis properfactor_fcount)
done
sublocale factorial_monoid \<subseteq> primeness_condition_monoid
by default (rule irreducible_is_prime)
lemma (in factorial_monoid) primeness_condition:
shows "primeness_condition_monoid G"
..
lemma (in factorial_monoid) gcd_condition [simp]:
shows "gcd_condition_monoid G"
by default (rule gcdof_exists)
sublocale factorial_monoid \<subseteq> gcd_condition_monoid
by default (rule gcdof_exists)
lemma (in factorial_monoid) division_weak_lattice [simp]:
shows "weak_lattice (division_rel G)"
proof -
interpret weak_lower_semilattice "division_rel G" by simp
show "weak_lattice (division_rel G)"
apply (unfold_locales, simp_all)
proof -
fix x y
assume carr: "x \<in> carrier G" "y \<in> carrier G"
hence "\<exists>z. z \<in> carrier G \<and> z lcmof x y" by (rule lcmof_exists)
from this obtain z
where zcarr: "z \<in> carrier G"
and isgcd: "z lcmof x y"
by auto
with carr
have "least (division_rel G) z (Upper (division_rel G) {x, y})"
by (simp add: lcmof_leastUpper[symmetric])
thus "\<exists>z. least (division_rel G) z (Upper (division_rel G) {x, y})" by fast
qed
qed
subsection {* Factoriality Theorems *}
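text {* The two theorems below (Jacobson, theorems 2.21 and 2.22) characterise
  factorial monoids in terms of the divisor chain condition combined with
  either the primeness condition or the gcd condition.  Both directions are
  assembled from the sublocale relations established above. *}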
theorem factorial_condition_one: (* Jacobson theorem 2.21 *)
shows "(divisor_chain_condition_monoid G \<and> primeness_condition_monoid G) =
factorial_monoid G"
apply rule
proof clarify
assume dcc: "divisor_chain_condition_monoid G"
and pc: "primeness_condition_monoid G"
interpret divisor_chain_condition_monoid "G" by (rule dcc)
interpret primeness_condition_monoid "G" by (rule pc)
show "factorial_monoid G"
by (fast intro: factorial_monoidI wfactors_exist wfactors_unique)
next
assume fm: "factorial_monoid G"
interpret factorial_monoid "G" by (rule fm)
show "divisor_chain_condition_monoid G \<and> primeness_condition_monoid G"
by rule unfold_locales
qed
theorem factorial_condition_two: (* Jacobson theorem 2.22 *)
shows "(divisor_chain_condition_monoid G \<and> gcd_condition_monoid G) = factorial_monoid G"
apply rule
proof clarify
assume dcc: "divisor_chain_condition_monoid G"
and gc: "gcd_condition_monoid G"
interpret divisor_chain_condition_monoid "G" by (rule dcc)
interpret gcd_condition_monoid "G" by (rule gc)
show "factorial_monoid G"
by (simp add: factorial_condition_one[symmetric], rule, unfold_locales)
next
assume fm: "factorial_monoid G"
interpret factorial_monoid "G" by (rule fm)
show "divisor_chain_condition_monoid G \<and> gcd_condition_monoid G"
by rule unfold_locales
qed
end
lemma order: "p \<noteq> 0 \<Longrightarrow> [:-a, 1:] ^ order a p dvd p \<and> \<not> [:-a, 1:] ^ Suc (order a p) dvd p"
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R θ : ℝ
⊢ circleMap c R (θ + 2 * π) = circleMap c R θ
[PROOFSTEP]
simp [circleMap, add_mul, exp_periodic _]
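-- A sketch of the computation above, assuming Mathlib's `circleMap c R θ = c + R * exp (θ * I)`:
--   circleMap c R (θ + 2 * π) = c + R * exp ((θ + 2 * π) * I)
--                             = c + R * exp (θ * I) * exp (2 * π * I)
--                             = c + R * exp (θ * I) = circleMap c R θ,
-- since `exp` has period `2 * π * I` (`exp_periodic`); `add_mul` distributes `(θ + 2 * π) * I`.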
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R θ : ℝ
⊢ circleMap c R θ - c = circleMap 0 R θ
[PROOFSTEP]
simp [circleMap]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
R θ : ℝ
⊢ ↑Complex.abs (circleMap 0 R θ) = |R|
[PROOFSTEP]
simp [circleMap]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R θ : ℝ
⊢ circleMap c R θ ∈ sphere c |R|
[PROOFSTEP]
simp
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
hR : 0 ≤ R
θ : ℝ
⊢ circleMap c R θ ∈ sphere c R
[PROOFSTEP]
simpa only [_root_.abs_of_nonneg hR] using circleMap_mem_sphere' c R θ
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R θ : ℝ
⊢ ¬circleMap c R θ ∈ ball c R
[PROOFSTEP]
simp [dist_eq, le_abs_self]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
⊢ range (circleMap c R) = c +ᵥ R • range fun θ => exp (↑θ * I)
[PROOFSTEP]
simp only [← image_vadd, ← image_smul, ← range_comp, vadd_eq_add, circleMap, (· ∘ ·), real_smul]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
⊢ (c +ᵥ R • range fun θ => exp (↑θ * I)) = sphere c |R|
[PROOFSTEP]
rw [Complex.range_exp_mul_I, smul_sphere R 0 zero_le_one]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
⊢ c +ᵥ sphere (R • 0) (‖R‖ * 1) = sphere c |R|
[PROOFSTEP]
simp
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
⊢ circleMap c R '' Ioc 0 (2 * π) = sphere c |R|
[PROOFSTEP]
rw [← range_circleMap, ← (periodic_circleMap c R).image_Ioc Real.two_pi_pos 0, zero_add]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R θ : ℝ
⊢ circleMap c R θ = c ↔ R = 0
[PROOFSTEP]
simp [circleMap, exp_ne_zero]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R θ : ℝ
⊢ HasDerivAt (circleMap c R) (circleMap 0 R θ * I) θ
[PROOFSTEP]
simpa only [mul_assoc, one_mul, ofRealClm_apply, circleMap, ofReal_one, zero_add] using
(((ofRealClm.hasDerivAt (x := θ)).mul_const I).cexp.const_mul (R : ℂ)).const_add c
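-- A sketch of the derivative computation above (same assumed definition of `circleMap`):
--   d/dθ (c + R * exp (θ * I)) = R * (exp (θ * I) * I) = circleMap 0 R θ * I,
-- obtained via the chain rule from the derivative `I` of `θ ↦ (θ : ℂ) * I`.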
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R θ : ℝ
⊢ deriv (circleMap c R) θ = 0 ↔ R = 0
[PROOFSTEP]
simp [I_ne_zero]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R θ : ℝ
⊢ ↑‖deriv (circleMap c R) θ‖₊ ≤ ↑(↑Real.nnabs R)
[PROOFSTEP]
simp
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
R : ℝ
z w : ℂ
hw : w ∈ ball z R
⊢ Continuous fun θ => (circleMap z R θ - w)⁻¹
[PROOFSTEP]
have : ∀ θ, circleMap z R θ - w ≠ 0 := by
simp_rw [sub_ne_zero]
exact fun θ => circleMap_ne_mem_ball hw θ
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
R : ℝ
z w : ℂ
hw : w ∈ ball z R
⊢ ∀ (θ : ℝ), circleMap z R θ - w ≠ 0
[PROOFSTEP]
simp_rw [sub_ne_zero]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
R : ℝ
z w : ℂ
hw : w ∈ ball z R
⊢ ∀ (θ : ℝ), circleMap z R θ ≠ w
[PROOFSTEP]
exact fun θ => circleMap_ne_mem_ball hw θ
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
R : ℝ
z w : ℂ
hw : w ∈ ball z R
this : ∀ (θ : ℝ), circleMap z R θ - w ≠ 0
⊢ Continuous fun θ => (circleMap z R θ - w)⁻¹
[PROOFSTEP]
exact Continuous.inv₀ (by continuity) this
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
R : ℝ
z w : ℂ
hw : w ∈ ball z R
this : ∀ (θ : ℝ), circleMap z R θ - w ≠ 0
⊢ Continuous fun θ => circleMap z R θ - w
[PROOFSTEP]
continuity
[GOAL]
E : Type u_1
inst✝¹ : NormedAddCommGroup E
f g : ℂ → E
c : ℂ
R : ℝ
inst✝ : NormedSpace ℂ E
hf : CircleIntegrable f c R
⊢ IntervalIntegrable (fun θ => deriv (circleMap c R) θ • f (circleMap c R θ)) volume 0 (2 * π)
[PROOFSTEP]
simp only [CircleIntegrable, deriv_circleMap, intervalIntegrable_iff] at *
[GOAL]
E : Type u_1
inst✝¹ : NormedAddCommGroup E
f g : ℂ → E
c : ℂ
R : ℝ
inst✝ : NormedSpace ℂ E
hf : IntegrableOn (fun θ => f (circleMap c R θ)) (Ι 0 (2 * π))
⊢ IntegrableOn (fun θ => (circleMap 0 R θ * I) • f (circleMap c R θ)) (Ι 0 (2 * π))
[PROOFSTEP]
refine' (hf.norm.const_mul |R|).mono' _ _
[GOAL]
case refine'_1
E : Type u_1
inst✝¹ : NormedAddCommGroup E
f g : ℂ → E
c : ℂ
R : ℝ
inst✝ : NormedSpace ℂ E
hf : IntegrableOn (fun θ => f (circleMap c R θ)) (Ι 0 (2 * π))
⊢ AEStronglyMeasurable (fun θ => (circleMap 0 R θ * I) • f (circleMap c R θ)) (Measure.restrict volume (Ι 0 (2 * π)))
[PROOFSTEP]
exact ((continuous_circleMap _ _).aestronglyMeasurable.mul_const I).smul hf.aestronglyMeasurable
[GOAL]
case refine'_2
E : Type u_1
inst✝¹ : NormedAddCommGroup E
f g : ℂ → E
c : ℂ
R : ℝ
inst✝ : NormedSpace ℂ E
hf : IntegrableOn (fun θ => f (circleMap c R θ)) (Ι 0 (2 * π))
⊢ ∀ᵐ (a : ℝ) ∂Measure.restrict volume (Ι 0 (2 * π)),
‖(circleMap 0 R a * I) • f (circleMap c R a)‖ ≤ |R| * ‖f (circleMap c R a)‖
[PROOFSTEP]
simp [norm_smul]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
f : ℂ → E
c : ℂ
⊢ CircleIntegrable f c 0
[PROOFSTEP]
simp [CircleIntegrable]
[GOAL]
E : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedSpace ℂ E
f : ℂ → E
c : ℂ
R : ℝ
⊢ CircleIntegrable f c R ↔ IntervalIntegrable (fun θ => deriv (circleMap c R) θ • f (circleMap c R θ)) volume 0 (2 * π)
[PROOFSTEP]
by_cases h₀ : R = 0
[GOAL]
case pos
E : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedSpace ℂ E
f : ℂ → E
c : ℂ
R : ℝ
h₀ : R = 0
⊢ CircleIntegrable f c R ↔ IntervalIntegrable (fun θ => deriv (circleMap c R) θ • f (circleMap c R θ)) volume 0 (2 * π)
[PROOFSTEP]
simp [h₀, const]
[GOAL]
case neg
E : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedSpace ℂ E
f : ℂ → E
c : ℂ
R : ℝ
h₀ : ¬R = 0
⊢ CircleIntegrable f c R ↔ IntervalIntegrable (fun θ => deriv (circleMap c R) θ • f (circleMap c R θ)) volume 0 (2 * π)
[PROOFSTEP]
refine' ⟨fun h => h.out, fun h => _⟩
[GOAL]
case neg
E : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedSpace ℂ E
f : ℂ → E
c : ℂ
R : ℝ
h₀ : ¬R = 0
h : IntervalIntegrable (fun θ => deriv (circleMap c R) θ • f (circleMap c R θ)) volume 0 (2 * π)
⊢ CircleIntegrable f c R
[PROOFSTEP]
simp only [CircleIntegrable, intervalIntegrable_iff, deriv_circleMap] at h ⊢
[GOAL]
case neg
E : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedSpace ℂ E
f : ℂ → E
c : ℂ
R : ℝ
h₀ : ¬R = 0
h : IntegrableOn (fun θ => (circleMap 0 R θ * I) • f (circleMap c R θ)) (Ι 0 (2 * π))
⊢ IntegrableOn (fun θ => f (circleMap c R θ)) (Ι 0 (2 * π))
[PROOFSTEP]
refine' (h.norm.const_mul |R|⁻¹).mono' _ _
[GOAL]
case neg.refine'_1
E : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedSpace ℂ E
f : ℂ → E
c : ℂ
R : ℝ
h₀ : ¬R = 0
h : IntegrableOn (fun θ => (circleMap 0 R θ * I) • f (circleMap c R θ)) (Ι 0 (2 * π))
⊢ AEStronglyMeasurable (fun θ => f (circleMap c R θ)) (Measure.restrict volume (Ι 0 (2 * π)))
[PROOFSTEP]
have H : ∀ {θ}, circleMap 0 R θ * I ≠ 0 := fun {θ} => by simp [h₀, I_ne_zero]
[GOAL]
E : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedSpace ℂ E
f : ℂ → E
c : ℂ
R : ℝ
h₀ : ¬R = 0
h : IntegrableOn (fun θ => (circleMap 0 R θ * I) • f (circleMap c R θ)) (Ι 0 (2 * π))
θ : ℝ
⊢ circleMap 0 R θ * I ≠ 0
[PROOFSTEP]
simp [h₀, I_ne_zero]
[GOAL]
case neg.refine'_1
E : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedSpace ℂ E
f : ℂ → E
c : ℂ
R : ℝ
h₀ : ¬R = 0
h : IntegrableOn (fun θ => (circleMap 0 R θ * I) • f (circleMap c R θ)) (Ι 0 (2 * π))
H : ∀ {θ : ℝ}, circleMap 0 R θ * I ≠ 0
⊢ AEStronglyMeasurable (fun θ => f (circleMap c R θ)) (Measure.restrict volume (Ι 0 (2 * π)))
[PROOFSTEP]
simpa only [inv_smul_smul₀ H] using
((continuous_circleMap 0 R).aestronglyMeasurable.mul_const I).aemeasurable.inv.aestronglyMeasurable.smul
h.aestronglyMeasurable
[GOAL]
case neg.refine'_2
E : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedSpace ℂ E
f : ℂ → E
c : ℂ
R : ℝ
h₀ : ¬R = 0
h : IntegrableOn (fun θ => (circleMap 0 R θ * I) • f (circleMap c R θ)) (Ι 0 (2 * π))
⊢ ∀ᵐ (a : ℝ) ∂Measure.restrict volume (Ι 0 (2 * π)),
‖f (circleMap c R a)‖ ≤ |R|⁻¹ * ‖(circleMap 0 R a * I) • f (circleMap c R a)‖
[PROOFSTEP]
simp [norm_smul, h₀]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
n : ℤ
⊢ CircleIntegrable (fun z => (z - w) ^ n) c R ↔ R = 0 ∨ 0 ≤ n ∨ ¬w ∈ sphere c |R|
[PROOFSTEP]
constructor
[GOAL]
case mp
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
n : ℤ
⊢ CircleIntegrable (fun z => (z - w) ^ n) c R → R = 0 ∨ 0 ≤ n ∨ ¬w ∈ sphere c |R|
[PROOFSTEP]
intro h
[GOAL]
case mp
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
n : ℤ
h : CircleIntegrable (fun z => (z - w) ^ n) c R
⊢ R = 0 ∨ 0 ≤ n ∨ ¬w ∈ sphere c |R|
[PROOFSTEP]
contrapose! h
[GOAL]
case mp
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
n : ℤ
h : R ≠ 0 ∧ n < 0 ∧ w ∈ sphere c |R|
⊢ ¬CircleIntegrable (fun z => (z - w) ^ n) c R
[PROOFSTEP]
rcases h with ⟨hR, hn, hw⟩
[GOAL]
case mp.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
hw : w ∈ sphere c |R|
⊢ ¬CircleIntegrable (fun z => (z - w) ^ n) c R
[PROOFSTEP]
simp only [circleIntegrable_iff R, deriv_circleMap]
[GOAL]
case mp.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
hw : w ∈ sphere c |R|
⊢ ¬IntervalIntegrable (fun θ => (circleMap 0 R θ * I) • (circleMap c R θ - w) ^ n) volume 0 (2 * π)
[PROOFSTEP]
rw [← image_circleMap_Ioc] at hw
[GOAL]
case mp.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
hw : w ∈ circleMap c R '' Ioc 0 (2 * π)
⊢ ¬IntervalIntegrable (fun θ => (circleMap 0 R θ * I) • (circleMap c R θ - w) ^ n) volume 0 (2 * π)
[PROOFSTEP]
rcases hw with ⟨θ, hθ, rfl⟩
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ Ioc 0 (2 * π)
⊢ ¬IntervalIntegrable (fun θ_1 => (circleMap 0 R θ_1 * I) • (circleMap c R θ_1 - circleMap c R θ) ^ n) volume 0 (2 * π)
[PROOFSTEP]
replace hθ : θ ∈ [[0, 2 * π]]
[GOAL]
case hθ
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ Ioc 0 (2 * π)
⊢ θ ∈ [[0, 2 * π]]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
⊢ ¬IntervalIntegrable (fun θ_1 => (circleMap 0 R θ_1 * I) • (circleMap c R θ_1 - circleMap c R θ) ^ n) volume 0 (2 * π)
[PROOFSTEP]
exact Icc_subset_uIcc (Ioc_subset_Icc_self hθ)
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
⊢ ¬IntervalIntegrable (fun θ_1 => (circleMap 0 R θ_1 * I) • (circleMap c R θ_1 - circleMap c R θ) ^ n) volume 0 (2 * π)
[PROOFSTEP]
refine' not_intervalIntegrable_of_sub_inv_isBigO_punctured _ Real.two_pi_pos.ne hθ
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
⊢ (fun x => (x - θ)⁻¹) =O[𝓝[{θ}ᶜ] θ] fun θ_1 => (circleMap 0 R θ_1 * I) • (circleMap c R θ_1 - circleMap c R θ) ^ n
[PROOFSTEP]
set f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
⊢ (fun x => (x - θ)⁻¹) =O[𝓝[{θ}ᶜ] θ] fun θ_1 => (circleMap 0 R θ_1 * I) • (circleMap c R θ_1 - circleMap c R θ) ^ n
[PROOFSTEP]
have : ∀ᶠ θ' in 𝓝[≠] θ, f θ' ∈ ball (0 : ℂ) 1 \ {0} :=
by
suffices : ∀ᶠ z in 𝓝[≠] circleMap c R θ, z - circleMap c R θ ∈ ball (0 : ℂ) 1 \ {0}
exact
((differentiable_circleMap c R θ).hasDerivAt.tendsto_punctured_nhds (deriv_circleMap_ne_zero hR)).eventually this
filter_upwards [self_mem_nhdsWithin, mem_nhdsWithin_of_mem_nhds (ball_mem_nhds _ zero_lt_one)]
simp_all [dist_eq, sub_eq_zero]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
⊢ ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
[PROOFSTEP]
suffices : ∀ᶠ z in 𝓝[≠] circleMap c R θ, z - circleMap c R θ ∈ ball (0 : ℂ) 1 \ {0}
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this : ∀ᶠ (z : ℂ) in 𝓝[{circleMap c R θ}ᶜ] circleMap c R θ, z - circleMap c R θ ∈ ball 0 1 \ {0}
⊢ ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
case this
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
⊢ ∀ᶠ (z : ℂ) in 𝓝[{circleMap c R θ}ᶜ] circleMap c R θ, z - circleMap c R θ ∈ ball 0 1 \ {0}
[PROOFSTEP]
exact ((differentiable_circleMap c R θ).hasDerivAt.tendsto_punctured_nhds (deriv_circleMap_ne_zero hR)).eventually this
[GOAL]
case this
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
⊢ ∀ᶠ (z : ℂ) in 𝓝[{circleMap c R θ}ᶜ] circleMap c R θ, z - circleMap c R θ ∈ ball 0 1 \ {0}
[PROOFSTEP]
filter_upwards [self_mem_nhdsWithin, mem_nhdsWithin_of_mem_nhds (ball_mem_nhds _ zero_lt_one)]
[GOAL]
case h
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
⊢ ∀ (a : ℂ), a ∈ {circleMap c R θ}ᶜ → a ∈ ball (circleMap c R θ) 1 → a - circleMap c R θ ∈ ball 0 1 \ {0}
[PROOFSTEP]
simp_all [dist_eq, sub_eq_zero]
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
⊢ (fun x => (x - θ)⁻¹) =O[𝓝[{θ}ᶜ] θ] fun θ_1 => (circleMap 0 R θ_1 * I) • (circleMap c R θ_1 - circleMap c R θ) ^ n
[PROOFSTEP]
refine'
(((hasDerivAt_circleMap c R θ).isBigO_sub.mono inf_le_left).inv_rev (this.mono fun θ' h₁ h₂ => absurd h₂ h₁.2)).trans
_
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
⊢ (fun x => (circleMap c R x - circleMap c R θ)⁻¹) =O[𝓝 θ ⊓ 𝓟 {θ}ᶜ] fun θ_1 =>
(circleMap 0 R θ_1 * I) • (circleMap c R θ_1 - circleMap c R θ) ^ n
[PROOFSTEP]
refine' IsBigO.of_bound |R|⁻¹ (this.mono fun θ' hθ' => _)
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
θ' : ℝ
hθ' : f θ' ∈ ball 0 1 \ {0}
⊢ ‖(circleMap c R θ' - circleMap c R θ)⁻¹‖ ≤ |R|⁻¹ * ‖(circleMap 0 R θ' * I) • (circleMap c R θ' - circleMap c R θ) ^ n‖
[PROOFSTEP]
set x := abs (f θ')
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
θ' : ℝ
hθ' : f θ' ∈ ball 0 1 \ {0}
x : ℝ := ↑Complex.abs (f θ')
⊢ ‖(circleMap c R θ' - circleMap c R θ)⁻¹‖ ≤ |R|⁻¹ * ‖(circleMap 0 R θ' * I) • (circleMap c R θ' - circleMap c R θ) ^ n‖
[PROOFSTEP]
suffices x⁻¹ ≤ x ^ n by
simpa only [inv_mul_cancel_left₀, abs_eq_zero.not.2 hR, norm_eq_abs, map_inv₀, Algebra.id.smul_eq_mul, map_mul,
abs_circleMap_zero, abs_I, mul_one, abs_zpow, Ne.def, not_false_iff] using this
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this✝ : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
θ' : ℝ
hθ' : f θ' ∈ ball 0 1 \ {0}
x : ℝ := ↑Complex.abs (f θ')
this : x⁻¹ ≤ x ^ n
⊢ ‖(circleMap c R θ' - circleMap c R θ)⁻¹‖ ≤ |R|⁻¹ * ‖(circleMap 0 R θ' * I) • (circleMap c R θ' - circleMap c R θ) ^ n‖
[PROOFSTEP]
simpa only [inv_mul_cancel_left₀, abs_eq_zero.not.2 hR, norm_eq_abs, map_inv₀, Algebra.id.smul_eq_mul, map_mul,
abs_circleMap_zero, abs_I, mul_one, abs_zpow, Ne.def, not_false_iff] using this
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
θ' : ℝ
hθ' : f θ' ∈ ball 0 1 \ {0}
x : ℝ := ↑Complex.abs (f θ')
⊢ x⁻¹ ≤ x ^ n
[PROOFSTEP]
have : x ∈ Ioo (0 : ℝ) 1 := by simpa [and_comm] using hθ'
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
θ' : ℝ
hθ' : f θ' ∈ ball 0 1 \ {0}
x : ℝ := ↑Complex.abs (f θ')
⊢ x ∈ Ioo 0 1
[PROOFSTEP]
simpa [and_comm] using hθ'
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this✝ : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
θ' : ℝ
hθ' : f θ' ∈ ball 0 1 \ {0}
x : ℝ := ↑Complex.abs (f θ')
this : x ∈ Ioo 0 1
⊢ x⁻¹ ≤ x ^ n
[PROOFSTEP]
rw [← zpow_neg_one]
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this✝ : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
θ' : ℝ
hθ' : f θ' ∈ ball 0 1 \ {0}
x : ℝ := ↑Complex.abs (f θ')
this : x ∈ Ioo 0 1
⊢ x ^ (-1) ≤ x ^ n
[PROOFSTEP]
refine' (zpow_strictAnti this.1 this.2).le_iff_le.2 (Int.lt_add_one_iff.1 _)
[GOAL]
case mp.intro.intro.intro.intro
E : Type u_1
inst✝ : NormedAddCommGroup E
c : ℂ
R : ℝ
n : ℤ
hR : R ≠ 0
hn : n < 0
θ : ℝ
hθ : θ ∈ [[0, 2 * π]]
f : ℝ → ℂ := fun θ' => circleMap c R θ' - circleMap c R θ
this✝ : ∀ᶠ (θ' : ℝ) in 𝓝[{θ}ᶜ] θ, f θ' ∈ ball 0 1 \ {0}
θ' : ℝ
hθ' : f θ' ∈ ball 0 1 \ {0}
x : ℝ := ↑Complex.abs (f θ')
this : x ∈ Ioo 0 1
⊢ n < -1 + 1
[PROOFSTEP]
exact hn
[GOAL]
case mpr
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
n : ℤ
⊢ R = 0 ∨ 0 ≤ n ∨ ¬w ∈ sphere c |R| → CircleIntegrable (fun z => (z - w) ^ n) c R
[PROOFSTEP]
rintro (rfl | H)
[GOAL]
case mpr.inl
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
n : ℤ
⊢ CircleIntegrable (fun z => (z - w) ^ n) c 0
case mpr.inr
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
n : ℤ
H : 0 ≤ n ∨ ¬w ∈ sphere c |R|
⊢ CircleIntegrable (fun z => (z - w) ^ n) c R
[PROOFSTEP]
exacts [circleIntegrable_zero_radius,
((continuousOn_id.sub continuousOn_const).zpow₀ _ fun z hz =>
H.symm.imp_left fun (hw : w ∉ sphere c |R|) => sub_ne_zero.2 <| ne_of_mem_of_not_mem hz hw).circleIntegrable']
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
⊢ CircleIntegrable (fun z => (z - w)⁻¹) c R ↔ R = 0 ∨ ¬w ∈ sphere c |R|
[PROOFSTEP]
simp only [← zpow_neg_one, circleIntegrable_sub_zpow_iff]
[GOAL]
E : Type u_1
inst✝ : NormedAddCommGroup E
c w : ℂ
R : ℝ
⊢ R = 0 ∨ False ∨ ¬w ∈ sphere c |R| ↔ R = 0 ∨ ¬w ∈ sphere c |R|
[PROOFSTEP]
norm_num
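-- Here `n = -1`, so the `0 ≤ n` disjunct of `circleIntegrable_sub_zpow_iff` has reduced to `False`,
-- and `norm_num` discharges the remaining propositional equivalence.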
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
⊢ (∮ (z : ℂ) in C(c, R), f z) = ∫ (θ : ℝ) in Icc 0 (2 * π), deriv (circleMap c R) θ • f (circleMap c R θ)
[PROOFSTEP]
rw [circleIntegral, intervalIntegral.integral_of_le Real.two_pi_pos.le, Measure.restrict_congr_set Ioc_ae_eq_Icc]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
⊢ (∮ (z : ℂ) in C(c, 0), f z) = 0
[PROOFSTEP]
simp [circleIntegral, const]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f g : ℂ → E
c : ℂ
R : ℝ
hR : 0 ≤ R
h : EqOn f g (sphere c R)
θ : ℝ
x✝ : θ ∈ [[0, 2 * π]]
⊢ deriv (circleMap c R) θ • (fun z => f z) (circleMap c R θ) =
deriv (circleMap c R) θ • (fun z => g z) (circleMap c R θ)
[PROOFSTEP]
simp only [h (circleMap_mem_sphere _ hR _)]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c w : ℂ
R : ℝ
⊢ (∮ (z : ℂ) in C(c, R), (z - w)⁻¹ • (z - w) • f z) = ∮ (z : ℂ) in C(c, R), f z
[PROOFSTEP]
rcases eq_or_ne R 0 with (rfl | hR)
[GOAL]
case inl
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c w : ℂ
⊢ (∮ (z : ℂ) in C(c, 0), (z - w)⁻¹ • (z - w) • f z) = ∮ (z : ℂ) in C(c, 0), f z
[PROOFSTEP]
simp only [integral_radius_zero]
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c w : ℂ
R : ℝ
hR : R ≠ 0
⊢ (∮ (z : ℂ) in C(c, R), (z - w)⁻¹ • (z - w) • f z) = ∮ (z : ℂ) in C(c, R), f z
[PROOFSTEP]
have : (circleMap c R ⁻¹' { w }).Countable := (countable_singleton _).preimage_circleMap c hR
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c w : ℂ
R : ℝ
hR : R ≠ 0
this : Set.Countable (circleMap c R ⁻¹' {w})
⊢ (∮ (z : ℂ) in C(c, R), (z - w)⁻¹ • (z - w) • f z) = ∮ (z : ℂ) in C(c, R), f z
[PROOFSTEP]
refine' intervalIntegral.integral_congr_ae ((this.ae_not_mem _).mono fun θ hθ _' => _)
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c w : ℂ
R : ℝ
hR : R ≠ 0
this : Set.Countable (circleMap c R ⁻¹' {w})
θ : ℝ
hθ : ¬θ ∈ circleMap c R ⁻¹' {w}
_' : θ ∈ Ι 0 (2 * π)
⊢ deriv (circleMap c R) θ • (fun z => (z - w)⁻¹ • (z - w) • f z) (circleMap c R θ) =
deriv (circleMap c R) θ • (fun z => f z) (circleMap c R θ)
[PROOFSTEP]
change circleMap c R θ ≠ w at hθ
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c w : ℂ
R : ℝ
hR : R ≠ 0
this : Set.Countable (circleMap c R ⁻¹' {w})
θ : ℝ
_' : θ ∈ Ι 0 (2 * π)
hθ : circleMap c R θ ≠ w
⊢ deriv (circleMap c R) θ • (fun z => (z - w)⁻¹ • (z - w) • f z) (circleMap c R θ) =
deriv (circleMap c R) θ • (fun z => f z) (circleMap c R θ)
[PROOFSTEP]
simp only [inv_smul_smul₀ (sub_ne_zero.2 <| hθ)]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f g : ℂ → E
c : ℂ
R : ℝ
hf : CircleIntegrable f c R
hg : CircleIntegrable g c R
⊢ (∮ (z : ℂ) in C(c, R), f z - g z) = (∮ (z : ℂ) in C(c, R), f z) - ∮ (z : ℂ) in C(c, R), g z
[PROOFSTEP]
simp only [circleIntegral, smul_sub, intervalIntegral.integral_sub hf.out hg.out]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hf : ∀ (z : ℂ), z ∈ sphere c |R| → ‖f z‖ ≤ C
θ : ℝ
x✝ : θ ∈ Ι 0 (2 * π)
⊢ ‖deriv (circleMap c R) θ • f (circleMap c R θ)‖ = |R| * ‖f (circleMap c R θ)‖
[PROOFSTEP]
simp [norm_smul]
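-- `norm_smul` splits the scalar action, and `‖deriv (circleMap c R) θ‖ = ‖circleMap 0 R θ * I‖ = |R|`
-- by `abs_circleMap_zero` and `abs_I`.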
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hf : ∀ (z : ℂ), z ∈ sphere c |R| → ‖f z‖ ≤ C
⊢ |R| * C * |2 * π - 0| = 2 * π * |R| * C
[PROOFSTEP]
rw [sub_zero, _root_.abs_of_pos Real.two_pi_pos]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hf : ∀ (z : ℂ), z ∈ sphere c |R| → ‖f z‖ ≤ C
⊢ |R| * C * (2 * π) = 2 * π * |R| * C
[PROOFSTEP]
ac_rfl
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 ≤ R
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
this : |R| = R
⊢ ∀ (z : ℂ), z ∈ sphere c |R| → ‖f z‖ ≤ C
[PROOFSTEP]
rwa [this]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 ≤ R
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
this : |R| = R
⊢ 2 * π * |R| * C = 2 * π * R * C
[PROOFSTEP]
rw [this]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 ≤ R
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
⊢ ‖(2 * ↑π * I)⁻¹ • ∮ (z : ℂ) in C(c, R), f z‖ ≤ R * C
[PROOFSTEP]
have : ‖(2 * π * I : ℂ)⁻¹‖ = (2 * π)⁻¹ := by simp [Real.pi_pos.le]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 ≤ R
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
⊢ ‖(2 * ↑π * I)⁻¹‖ = (2 * π)⁻¹
[PROOFSTEP]
simp [Real.pi_pos.le]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 ≤ R
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
this : ‖(2 * ↑π * I)⁻¹‖ = (2 * π)⁻¹
⊢ ‖(2 * ↑π * I)⁻¹ • ∮ (z : ℂ) in C(c, R), f z‖ ≤ R * C
[PROOFSTEP]
rw [norm_smul, this, ← div_eq_inv_mul, div_le_iff Real.two_pi_pos, mul_comm (R * C), ← mul_assoc]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 ≤ R
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
this : ‖(2 * ↑π * I)⁻¹‖ = (2 * π)⁻¹
⊢ ‖∮ (z : ℂ) in C(c, R), f z‖ ≤ 2 * π * R * C
[PROOFSTEP]
exact norm_integral_le_of_norm_le_const hR hf
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
hlt : ∃ z, z ∈ sphere c R ∧ ‖f z‖ < C
⊢ ‖∮ (z : ℂ) in C(c, R), f z‖ < 2 * π * R * C
[PROOFSTEP]
rw [← _root_.abs_of_pos hR, ← image_circleMap_Ioc] at hlt
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
hlt : ∃ z, z ∈ circleMap c R '' Ioc 0 (2 * π) ∧ ‖f z‖ < C
⊢ ‖∮ (z : ℂ) in C(c, R), f z‖ < 2 * π * R * C
[PROOFSTEP]
rcases hlt with ⟨_, ⟨θ₀, hmem, rfl⟩, hlt⟩
[GOAL]
case intro.intro.intro.intro
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
θ₀ : ℝ
hmem : θ₀ ∈ Ioc 0 (2 * π)
hlt : ‖f (circleMap c R θ₀)‖ < C
⊢ ‖∮ (z : ℂ) in C(c, R), f z‖ < 2 * π * R * C
[PROOFSTEP]
calc
‖∮ z in C(c, R), f z‖ ≤ ∫ θ in (0)..2 * π, ‖deriv (circleMap c R) θ • f (circleMap c R θ)‖ :=
intervalIntegral.norm_integral_le_integral_norm Real.two_pi_pos.le
_ < ∫ _ in (0)..2 * π, R * C :=
by
simp only [norm_smul, deriv_circleMap, norm_eq_abs, map_mul, abs_I, mul_one, abs_circleMap_zero, abs_of_pos hR]
refine'
intervalIntegral.integral_lt_integral_of_continuousOn_of_le_of_exists_lt Real.two_pi_pos _ continuousOn_const
(fun θ _ => _) ⟨θ₀, Ioc_subset_Icc_self hmem, _⟩
·
exact
continuousOn_const.mul
(hc.comp (continuous_circleMap _ _).continuousOn fun θ _ => circleMap_mem_sphere _ hR.le _).norm
· exact mul_le_mul_of_nonneg_left (hf _ <| circleMap_mem_sphere _ hR.le _) hR.le
· exact (mul_lt_mul_left hR).2 hlt
_ = 2 * π * R * C := by simp [mul_assoc]; ring
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
θ₀ : ℝ
hmem : θ₀ ∈ Ioc 0 (2 * π)
hlt : ‖f (circleMap c R θ₀)‖ < C
⊢ ∫ (θ : ℝ) in 0 ..2 * π, ‖deriv (circleMap c R) θ • f (circleMap c R θ)‖ < ∫ (x : ℝ) in 0 ..2 * π, R * C
[PROOFSTEP]
simp only [norm_smul, deriv_circleMap, norm_eq_abs, map_mul, abs_I, mul_one, abs_circleMap_zero, abs_of_pos hR]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
θ₀ : ℝ
hmem : θ₀ ∈ Ioc 0 (2 * π)
hlt : ‖f (circleMap c R θ₀)‖ < C
⊢ ∫ (θ : ℝ) in 0 ..2 * π, R * ‖f (circleMap c R θ)‖ < ∫ (x : ℝ) in 0 ..2 * π, R * C
[PROOFSTEP]
refine'
intervalIntegral.integral_lt_integral_of_continuousOn_of_le_of_exists_lt Real.two_pi_pos _ continuousOn_const
(fun θ _ => _) ⟨θ₀, Ioc_subset_Icc_self hmem, _⟩
[GOAL]
case refine'_1
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
θ₀ : ℝ
hmem : θ₀ ∈ Ioc 0 (2 * π)
hlt : ‖f (circleMap c R θ₀)‖ < C
⊢ ContinuousOn (fun θ => R * ‖f (circleMap c R θ)‖) (Icc 0 (2 * π))
[PROOFSTEP]
exact
continuousOn_const.mul
(hc.comp (continuous_circleMap _ _).continuousOn fun θ _ => circleMap_mem_sphere _ hR.le _).norm
[GOAL]
case refine'_2
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
θ₀ : ℝ
hmem : θ₀ ∈ Ioc 0 (2 * π)
hlt : ‖f (circleMap c R θ₀)‖ < C
θ : ℝ
x✝ : θ ∈ Ioc 0 (2 * π)
⊢ R * ‖f (circleMap c R θ)‖ ≤ R * C
[PROOFSTEP]
exact mul_le_mul_of_nonneg_left (hf _ <| circleMap_mem_sphere _ hR.le _) hR.le
[GOAL]
case refine'_3
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
θ₀ : ℝ
hmem : θ₀ ∈ Ioc 0 (2 * π)
hlt : ‖f (circleMap c R θ₀)‖ < C
⊢ R * ‖f (circleMap c R θ₀)‖ < R * C
[PROOFSTEP]
exact (mul_lt_mul_left hR).2 hlt
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
θ₀ : ℝ
hmem : θ₀ ∈ Ioc 0 (2 * π)
hlt : ‖f (circleMap c R θ₀)‖ < C
⊢ ∫ (x : ℝ) in 0 ..2 * π, R * C = 2 * π * R * C
[PROOFSTEP]
simp [mul_assoc]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R C : ℝ
hR : 0 < R
hc : ContinuousOn f (sphere c R)
hf : ∀ (z : ℂ), z ∈ sphere c R → ‖f z‖ ≤ C
θ₀ : ℝ
hmem : θ₀ ∈ Ioc 0 (2 * π)
hlt : ‖f (circleMap c R θ₀)‖ < C
⊢ R * (2 * (π * C)) = 2 * (π * (R * C))
[PROOFSTEP]
ring
[GOAL]
E : Type u_1
inst✝⁵ : NormedAddCommGroup E
inst✝⁴ : NormedSpace ℂ E
inst✝³ : CompleteSpace E
𝕜 : Type u_2
inst✝² : IsROrC 𝕜
inst✝¹ : NormedSpace 𝕜 E
inst✝ : SMulCommClass 𝕜 ℂ E
a : 𝕜
f : ℂ → E
c : ℂ
R : ℝ
⊢ (∮ (z : ℂ) in C(c, R), a • f z) = a • ∮ (z : ℂ) in C(c, R), f z
[PROOFSTEP]
simp only [circleIntegral, ← smul_comm a (_ : ℂ) (_ : E), intervalIntegral.integral_smul]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → ℂ
a : E
c : ℂ
R : ℝ
⊢ (∮ (z : ℂ) in C(c, R), f z • a) = (∮ (z : ℂ) in C(c, R), f z) • a
[PROOFSTEP]
simp only [circleIntegral, intervalIntegral.integral_smul_const, ← smul_assoc]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c : ℂ
R : ℝ
hR : R ≠ 0
⊢ (∮ (z : ℂ) in C(c, R), (z - c)⁻¹) = 2 * ↑π * I
[PROOFSTEP]
simp [circleIntegral, ← div_eq_mul_inv, mul_div_cancel_left _ (circleMap_ne_center hR),
  -- porting note: `simp` didn't need a hint to apply `integral_const` here
  intervalIntegral.integral_const I]
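-- Unfolding `circleIntegral`, the integrand above is
--   deriv (circleMap c R) θ • (circleMap c R θ - c)⁻¹ = (circleMap 0 R θ * I) / circleMap 0 R θ = I
-- (the denominator is nonzero since `R ≠ 0`), so the integral is `∫ θ in 0..2 * π, I = 2 * π * I`.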
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f f' : ℂ → E
c : ℂ
R : ℝ
h : ∀ (z : ℂ), z ∈ sphere c |R| → HasDerivWithinAt f (f' z) (sphere c |R|) z
⊢ (∮ (z : ℂ) in C(c, R), f' z) = 0
[PROOFSTEP]
by_cases hi : CircleIntegrable f' c R
[GOAL]
case pos
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f f' : ℂ → E
c : ℂ
R : ℝ
h : ∀ (z : ℂ), z ∈ sphere c |R| → HasDerivWithinAt f (f' z) (sphere c |R|) z
hi : CircleIntegrable f' c R
⊢ (∮ (z : ℂ) in C(c, R), f' z) = 0
[PROOFSTEP]
rw [← sub_eq_zero.2 ((periodic_circleMap c R).comp f).eq]
[GOAL]
case pos
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f f' : ℂ → E
c : ℂ
R : ℝ
h : ∀ (z : ℂ), z ∈ sphere c |R| → HasDerivWithinAt f (f' z) (sphere c |R|) z
hi : CircleIntegrable f' c R
⊢ (∮ (z : ℂ) in C(c, R), f' z) = (f ∘ circleMap c R) (2 * π) - (f ∘ circleMap c R) 0
[PROOFSTEP]
refine' intervalIntegral.integral_eq_sub_of_hasDerivAt (fun θ _ => _) hi.out
[GOAL]
case pos
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f f' : ℂ → E
c : ℂ
R : ℝ
h : ∀ (z : ℂ), z ∈ sphere c |R| → HasDerivWithinAt f (f' z) (sphere c |R|) z
hi : CircleIntegrable f' c R
θ : ℝ
x✝ : θ ∈ [[0, 2 * π]]
⊢ HasDerivAt (f ∘ circleMap c R) (deriv (circleMap c R) θ • (fun z => f' z) (circleMap c R θ)) θ
[PROOFSTEP]
exact
(h _ (circleMap_mem_sphere' _ _ _)).scomp_hasDerivAt θ (differentiable_circleMap _ _ _).hasDerivAt
(circleMap_mem_sphere' _ _)
[GOAL]
case neg
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f f' : ℂ → E
c : ℂ
R : ℝ
h : ∀ (z : ℂ), z ∈ sphere c |R| → HasDerivWithinAt f (f' z) (sphere c |R|) z
hi : ¬CircleIntegrable f' c R
⊢ (∮ (z : ℂ) in C(c, R), f' z) = 0
[PROOFSTEP]
exact integral_undef hi
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
c w : ℂ
R : ℝ
hn : n < 0
hw : w ∈ sphere c |R|
⊢ (∮ (z : ℂ) in C(c, R), (z - w) ^ n) = 0
[PROOFSTEP]
rcases eq_or_ne R 0 with (rfl | h0)
[GOAL]
case inl
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
c w : ℂ
hn : n < 0
hw : w ∈ sphere c |0|
⊢ (∮ (z : ℂ) in C(c, 0), (z - w) ^ n) = 0
[PROOFSTEP]
apply integral_radius_zero
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
c w : ℂ
R : ℝ
hn : n < 0
hw : w ∈ sphere c |R|
h0 : R ≠ 0
⊢ (∮ (z : ℂ) in C(c, R), (z - w) ^ n) = 0
[PROOFSTEP]
apply integral_undef
[GOAL]
case inr.hf
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
c w : ℂ
R : ℝ
hn : n < 0
hw : w ∈ sphere c |R|
h0 : R ≠ 0
⊢ ¬CircleIntegrable (fun z => (z - w) ^ n) c R
[PROOFSTEP]
simpa [circleIntegrable_sub_zpow_iff, *, not_or]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
⊢ (∮ (z : ℂ) in C(c, R), (z - w) ^ n) = 0
[PROOFSTEP]
rcases em (w ∈ sphere c |R| ∧ n < -1) with (⟨hw, hn⟩ | H)
[GOAL]
case inl.intro
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn✝ : n ≠ -1
c w : ℂ
R : ℝ
hw : w ∈ sphere c |R|
hn : n < -1
⊢ (∮ (z : ℂ) in C(c, R), (z - w) ^ n) = 0
[PROOFSTEP]
exact integral_sub_zpow_of_undef (hn.trans (by decide)) hw
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn✝ : n ≠ -1
c w : ℂ
R : ℝ
hw : w ∈ sphere c |R|
hn : n < -1
⊢ -1 < 0
[PROOFSTEP]
decide
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : ¬(w ∈ sphere c |R| ∧ n < -1)
⊢ (∮ (z : ℂ) in C(c, R), (z - w) ^ n) = 0
[PROOFSTEP]
push_neg at H
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
⊢ (∮ (z : ℂ) in C(c, R), (z - w) ^ n) = 0
[PROOFSTEP]
have hd : ∀ z, z ≠ w ∨ -1 ≤ n → HasDerivAt (fun z => (z - w) ^ (n + 1) / (n + 1)) ((z - w) ^ n) z :=
by
intro z hne
convert ((hasDerivAt_zpow (n + 1) _ (hne.imp _ _)).comp z ((hasDerivAt_id z).sub_const w)).div_const _ using 1
· have hn' : (n + 1 : ℂ) ≠ 0 := by rwa [Ne, ← eq_neg_iff_add_eq_zero, ← Int.cast_one, ← Int.cast_neg, Int.cast_inj]
simp [mul_assoc, mul_div_cancel_left _ hn']
exacts [sub_ne_zero.2, neg_le_iff_add_nonneg.1]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
⊢ ∀ (z : ℂ), z ≠ w ∨ -1 ≤ n → HasDerivAt (fun z => (z - w) ^ (n + 1) / (↑n + 1)) ((z - w) ^ n) z
[PROOFSTEP]
intro z hne
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
z : ℂ
hne : z ≠ w ∨ -1 ≤ n
⊢ HasDerivAt (fun z => (z - w) ^ (n + 1) / (↑n + 1)) ((z - w) ^ n) z
[PROOFSTEP]
convert ((hasDerivAt_zpow (n + 1) _ (hne.imp _ _)).comp z ((hasDerivAt_id z).sub_const w)).div_const _ using 1
[GOAL]
case h.e'_7
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
z : ℂ
hne : z ≠ w ∨ -1 ≤ n
⊢ (z - w) ^ n = ↑(n + 1) * (id z - w) ^ (n + 1 - 1) * 1 / (↑n + 1)
[PROOFSTEP]
have hn' : (n + 1 : ℂ) ≠ 0 := by rwa [Ne, ← eq_neg_iff_add_eq_zero, ← Int.cast_one, ← Int.cast_neg, Int.cast_inj]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
z : ℂ
hne : z ≠ w ∨ -1 ≤ n
⊢ ↑n + 1 ≠ 0
[PROOFSTEP]
rwa [Ne, ← eq_neg_iff_add_eq_zero, ← Int.cast_one, ← Int.cast_neg, Int.cast_inj]
[GOAL]
case h.e'_7
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
z : ℂ
hne : z ≠ w ∨ -1 ≤ n
hn' : ↑n + 1 ≠ 0
⊢ (z - w) ^ n = ↑(n + 1) * (id z - w) ^ (n + 1 - 1) * 1 / (↑n + 1)
[PROOFSTEP]
simp [mul_assoc, mul_div_cancel_left _ hn']
[GOAL]
case convert_1
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
z : ℂ
hne : z ≠ w ∨ -1 ≤ n
⊢ z ≠ w → id z - w ≠ 0
case convert_2
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
z : ℂ
hne : z ≠ w ∨ -1 ≤ n
⊢ -1 ≤ n → 0 ≤ n + 1
[PROOFSTEP]
exacts [sub_ne_zero.2, neg_le_iff_add_nonneg.1]
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
hd : ∀ (z : ℂ), z ≠ w ∨ -1 ≤ n → HasDerivAt (fun z => (z - w) ^ (n + 1) / (↑n + 1)) ((z - w) ^ n) z
⊢ (∮ (z : ℂ) in C(c, R), (z - w) ^ n) = 0
[PROOFSTEP]
refine' integral_eq_zero_of_hasDerivWithinAt' fun z hz => (hd z _).hasDerivWithinAt
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
n : ℤ
hn : n ≠ -1
c w : ℂ
R : ℝ
H : w ∈ sphere c |R| → -1 ≤ n
hd : ∀ (z : ℂ), z ≠ w ∨ -1 ≤ n → HasDerivAt (fun z => (z - w) ^ (n + 1) / (↑n + 1)) ((z - w) ^ n) z
z : ℂ
hz : z ∈ sphere c |R|
⊢ z ≠ w ∨ -1 ≤ n
[PROOFSTEP]
exact (ne_or_eq z w).imp_right fun (h : z = w) => H <| h ▸ hz
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
n : ℕ
w : ℂ
⊢ (↑(cauchyPowerSeries f c R n) fun x => w) = (2 * ↑π * I)⁻¹ • ∮ (z : ℂ) in C(c, R), (w / (z - c)) ^ n • (z - c)⁻¹ • f z
[PROOFSTEP]
simp only [cauchyPowerSeries, ContinuousMultilinearMap.mkPiField_apply, Fin.prod_const, div_eq_mul_inv, mul_pow,
mul_smul, circleIntegral.integral_smul]
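-- Assuming Mathlib's definition, `cauchyPowerSeries f c R n` is `mkPiField` applied to
-- `(2 * π * I)⁻¹ • ∮ z in C(c, R), (z - c)⁻¹ ^ n • (z - c)⁻¹ • f z`, so evaluating it at the
-- constant family `fun _ => w` multiplies that vector by `w ^ n` (via `Fin.prod_const`).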
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
n : ℕ
w : ℂ
⊢ (w ^ n • (2 * ↑π * I)⁻¹ • ∮ (z : ℂ) in C(c, R), (z - c)⁻¹ ^ n • (z - c)⁻¹ • f z) =
(2 * ↑π * I)⁻¹ • w ^ n • ∮ (z : ℂ) in C(c, R), (z - c)⁻¹ ^ n • (z - c)⁻¹ • f z
[PROOFSTEP]
rw [← smul_comm (w ^ n)]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
n : ℕ
⊢ ‖cauchyPowerSeries f c R n‖ = (2 * π)⁻¹ * ‖∮ (z : ℂ) in C(c, R), (z - c)⁻¹ ^ n • (z - c)⁻¹ • f z‖
[PROOFSTEP]
simp [cauchyPowerSeries, norm_smul, Real.pi_pos.le]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
n : ℕ
⊢ 0 ≤ (2 * π)⁻¹
[PROOFSTEP]
simp [Real.pi_pos.le]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
n : ℕ
⊢ (2 * π)⁻¹ *
∫ (θ : ℝ) in 0 ..2 * π,
‖deriv (circleMap c R) θ • (circleMap c R θ - c)⁻¹ ^ n • (circleMap c R θ - c)⁻¹ • f (circleMap c R θ)‖ =
(2 * π)⁻¹ * (|R|⁻¹ ^ n * (|R| * (|R|⁻¹ * ∫ (x : ℝ) in 0 ..2 * π, ‖f (circleMap c R x)‖)))
[PROOFSTEP]
simp [norm_smul, mul_left_comm |R|]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
n : ℕ
⊢ (2 * π)⁻¹ * (|R|⁻¹ ^ n * (|R| * (|R|⁻¹ * ∫ (x : ℝ) in 0 ..2 * π, ‖f (circleMap c R x)‖))) ≤
((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c R θ)‖) * |R|⁻¹ ^ n
[PROOFSTEP]
rcases eq_or_ne R 0 with (rfl | hR)
[GOAL]
case inl
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
n : ℕ
⊢ (2 * π)⁻¹ * (|0|⁻¹ ^ n * (|0| * (|0|⁻¹ * ∫ (x : ℝ) in 0 ..2 * π, ‖f (circleMap c 0 x)‖))) ≤
((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c 0 θ)‖) * |0|⁻¹ ^ n
[PROOFSTEP]
cases n
[GOAL]
case inl.zero
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
⊢ (2 * π)⁻¹ * (|0|⁻¹ ^ Nat.zero * (|0| * (|0|⁻¹ * ∫ (x : ℝ) in 0 ..2 * π, ‖f (circleMap c 0 x)‖))) ≤
((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c 0 θ)‖) * |0|⁻¹ ^ Nat.zero
[PROOFSTEP]
simp [-mul_inv_rev]
[GOAL]
case inl.succ
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
n✝ : ℕ
⊢ (2 * π)⁻¹ * (|0|⁻¹ ^ Nat.succ n✝ * (|0| * (|0|⁻¹ * ∫ (x : ℝ) in 0 ..2 * π, ‖f (circleMap c 0 x)‖))) ≤
((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c 0 θ)‖) * |0|⁻¹ ^ Nat.succ n✝
[PROOFSTEP]
simp [-mul_inv_rev]
[GOAL]
case inl.zero
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
⊢ 0 ≤ (2 * π)⁻¹ * (2 * π * ‖f c‖)
[PROOFSTEP]
rw [← mul_assoc, inv_mul_cancel (Real.two_pi_pos.ne.symm), one_mul]
[GOAL]
case inl.zero
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
⊢ 0 ≤ ‖f c‖
[PROOFSTEP]
apply norm_nonneg
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
n : ℕ
hR : R ≠ 0
⊢ (2 * π)⁻¹ * (|R|⁻¹ ^ n * (|R| * (|R|⁻¹ * ∫ (x : ℝ) in 0 ..2 * π, ‖f (circleMap c R x)‖))) ≤
((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c R θ)‖) * |R|⁻¹ ^ n
[PROOFSTEP]
rw [mul_inv_cancel_left₀, mul_assoc, mul_comm (|R|⁻¹ ^ n)]
[GOAL]
case inr.h
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
n : ℕ
hR : R ≠ 0
⊢ |R| ≠ 0
[PROOFSTEP]
rwa [Ne.def, _root_.abs_eq_zero]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
⊢ ↑R ≤ FormalMultilinearSeries.radius (cauchyPowerSeries f c ↑R)
[PROOFSTEP]
refine'
(cauchyPowerSeries f c R).le_radius_of_bound ((2 * π)⁻¹ * ∫ θ : ℝ in (0)..2 * π, ‖f (circleMap c R θ)‖) fun n => _
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
n : ℕ
⊢ ‖cauchyPowerSeries f c (↑R) n‖ * ↑R ^ n ≤ (2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖
[PROOFSTEP]
refine' (mul_le_mul_of_nonneg_right (norm_cauchyPowerSeries_le _ _ _ _) (pow_nonneg R.coe_nonneg _)).trans _
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
n : ℕ
⊢ ((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖) * |↑R|⁻¹ ^ n * ↑R ^ n ≤
(2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖
[PROOFSTEP]
rw [_root_.abs_of_nonneg R.coe_nonneg]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
n : ℕ
⊢ ((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖) * (↑R)⁻¹ ^ n * ↑R ^ n ≤
(2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖
[PROOFSTEP]
cases' eq_or_ne (R ^ n : ℝ) 0 with hR hR
[GOAL]
case inl
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
n : ℕ
hR : ↑(R ^ n) = 0
⊢ ((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖) * (↑R)⁻¹ ^ n * ↑R ^ n ≤
(2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖
[PROOFSTEP]
rw_mod_cast [hR, mul_zero]
[GOAL]
case inl
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
n : ℕ
hR : R ^ n = 0
⊢ 0 ≤ (2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖
[PROOFSTEP]
exact
mul_nonneg (inv_nonneg.2 Real.two_pi_pos.le)
(intervalIntegral.integral_nonneg Real.two_pi_pos.le fun _ _ => norm_nonneg _)
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
n : ℕ
hR : ↑(R ^ n) ≠ 0
⊢ ((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖) * (↑R)⁻¹ ^ n * ↑R ^ n ≤
(2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖
[PROOFSTEP]
rw [inv_pow]
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
n : ℕ
hR : ↑(R ^ n) ≠ 0
⊢ ((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖) * (↑R ^ n)⁻¹ * ↑R ^ n ≤
(2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖
[PROOFSTEP]
have : (R : ℝ) ^ n ≠ 0 := by norm_cast at hR ⊢
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
n : ℕ
hR : ↑(R ^ n) ≠ 0
⊢ ↑R ^ n ≠ 0
[PROOFSTEP]
norm_cast at hR ⊢
[GOAL]
case inr
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
n : ℕ
hR : ↑(R ^ n) ≠ 0
this : ↑R ^ n ≠ 0
⊢ ((2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖) * (↑R ^ n)⁻¹ * ↑R ^ n ≤
(2 * π)⁻¹ * ∫ (θ : ℝ) in 0 ..2 * π, ‖f (circleMap c (↑R) θ)‖
[PROOFSTEP]
rw [inv_mul_cancel_right₀ this]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
⊢ HasSum (fun n => ∮ (z : ℂ) in C(c, R), (w / (z - c)) ^ n • (z - c)⁻¹ • f z)
(∮ (z : ℂ) in C(c, R), (z - (c + w))⁻¹ • f z)
[PROOFSTEP]
have hR : 0 < R := (Complex.abs.nonneg w).trans_lt hw
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
⊢ HasSum (fun n => ∮ (z : ℂ) in C(c, R), (w / (z - c)) ^ n • (z - c)⁻¹ • f z)
(∮ (z : ℂ) in C(c, R), (z - (c + w))⁻¹ • f z)
[PROOFSTEP]
have hwR : abs w / R ∈ Ico (0 : ℝ) 1 := ⟨div_nonneg (Complex.abs.nonneg w) hR.le, (div_lt_one hR).2 hw⟩
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
⊢ HasSum (fun n => ∮ (z : ℂ) in C(c, R), (w / (z - c)) ^ n • (z - c)⁻¹ • f z)
(∮ (z : ℂ) in C(c, R), (z - (c + w))⁻¹ • f z)
[PROOFSTEP]
refine'
intervalIntegral.hasSum_integral_of_dominated_convergence (fun n θ => ‖f (circleMap c R θ)‖ * (abs w / R) ^ n)
(fun n => _) (fun n => _) _ _ _
[GOAL]
case refine'_1
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
n : ℕ
⊢ AEStronglyMeasurable
(fun θ => deriv (circleMap c R) θ • (fun z => (w / (z - c)) ^ n • (z - c)⁻¹ • f z) (circleMap c R θ))
(Measure.restrict volume (Ι 0 (2 * π)))
[PROOFSTEP]
simp only [deriv_circleMap]
[GOAL]
case refine'_1
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
n : ℕ
⊢ AEStronglyMeasurable
(fun θ => (circleMap 0 R θ * I) • (w / (circleMap c R θ - c)) ^ n • (circleMap c R θ - c)⁻¹ • f (circleMap c R θ))
(Measure.restrict volume (Ι 0 (2 * π)))
[PROOFSTEP]
apply_rules [AEStronglyMeasurable.smul, hf.def.1]
[GOAL]
case refine'_1.hf
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
n : ℕ
⊢ AEStronglyMeasurable (fun x => circleMap 0 R x * I) (Measure.restrict volume (Ι 0 (2 * π)))
[PROOFSTEP]
apply Measurable.aestronglyMeasurable
[GOAL]
case refine'_1.hg.hf
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
n : ℕ
⊢ AEStronglyMeasurable (fun x => (w / (circleMap c R x - c)) ^ n) (Measure.restrict volume (Ι 0 (2 * π)))
[PROOFSTEP]
apply Measurable.aestronglyMeasurable
[GOAL]
case refine'_1.hg.hg.hf
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
n : ℕ
⊢ AEStronglyMeasurable (fun x => (circleMap c R x - c)⁻¹) (Measure.restrict volume (Ι 0 (2 * π)))
[PROOFSTEP]
apply Measurable.aestronglyMeasurable
[GOAL]
case refine'_1.hf.hf
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
n : ℕ
⊢ Measurable fun x => circleMap 0 R x * I
[PROOFSTEP]
exact (measurable_circleMap 0 R).mul_const I
[GOAL]
case refine'_1.hg.hf.hf
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
n : ℕ
⊢ Measurable fun x => (w / (circleMap c R x - c)) ^ n
[PROOFSTEP]
exact (((measurable_circleMap c R).sub measurable_const).const_div w).pow measurable_const
[GOAL]
case refine'_1.hg.hg.hf.hf
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
n : ℕ
⊢ Measurable fun x => (circleMap c R x - c)⁻¹
[PROOFSTEP]
exact ((measurable_circleMap c R).sub measurable_const).inv
[GOAL]
case refine'_2
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
n : ℕ
⊢ ∀ᵐ (t : ℝ),
t ∈ Ι 0 (2 * π) →
‖deriv (circleMap c R) t • (fun z => (w / (z - c)) ^ n • (z - c)⁻¹ • f z) (circleMap c R t)‖ ≤
(fun n θ => ‖f (circleMap c R θ)‖ * (↑Complex.abs w / R) ^ n) n t
[PROOFSTEP]
simp [norm_smul, abs_of_pos hR, mul_left_comm R, inv_mul_cancel_left₀ hR.ne', mul_comm ‖_‖]
[GOAL]
case refine'_3
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
⊢ ∀ᵐ (t : ℝ), t ∈ Ι 0 (2 * π) → Summable fun n => (fun n θ => ‖f (circleMap c R θ)‖ * (↑Complex.abs w / R) ^ n) n t
[PROOFSTEP]
exact eventually_of_forall fun _ _ => (summable_geometric_of_lt_1 hwR.1 hwR.2).mul_left _
[GOAL]
case refine'_4
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
⊢ IntervalIntegrable (fun t => ∑' (n : ℕ), (fun n θ => ‖f (circleMap c R θ)‖ * (↑Complex.abs w / R) ^ n) n t) volume 0
(2 * π)
[PROOFSTEP]
simpa only [tsum_mul_left, tsum_geometric_of_lt_1 hwR.1 hwR.2] using hf.norm.mul_continuousOn continuousOn_const
[GOAL]
case refine'_5
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
⊢ ∀ᵐ (t : ℝ),
t ∈ Ι 0 (2 * π) →
HasSum (fun n => deriv (circleMap c R) t • (fun z => (w / (z - c)) ^ n • (z - c)⁻¹ • f z) (circleMap c R t))
(deriv (circleMap c R) t • (fun z => (z - (c + w))⁻¹ • f z) (circleMap c R t))
[PROOFSTEP]
refine' eventually_of_forall fun θ _ => HasSum.const_smul _ _
[GOAL]
case refine'_5
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
θ : ℝ
x✝ : θ ∈ Ι 0 (2 * π)
⊢ HasSum (fun n => (fun z => (w / (z - c)) ^ n • (z - c)⁻¹ • f z) (circleMap c R θ))
((fun z => (z - (c + w))⁻¹ • f z) (circleMap c R θ))
[PROOFSTEP]
simp only [smul_smul]
[GOAL]
case refine'_5
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
θ : ℝ
x✝ : θ ∈ Ι 0 (2 * π)
⊢ HasSum (fun n => ((w / (circleMap c R θ - c)) ^ n * (circleMap c R θ - c)⁻¹) • f (circleMap c R θ))
((circleMap c R θ - (c + w))⁻¹ • f (circleMap c R θ))
[PROOFSTEP]
refine' HasSum.smul_const _ _
[GOAL]
case refine'_5
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
θ : ℝ
x✝ : θ ∈ Ι 0 (2 * π)
⊢ HasSum (fun n => (w / (circleMap c R θ - c)) ^ n * (circleMap c R θ - c)⁻¹) (circleMap c R θ - (c + w))⁻¹
[PROOFSTEP]
have : ‖w / (circleMap c R θ - c)‖ < 1 := by simpa [abs_of_pos hR] using hwR.2
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
θ : ℝ
x✝ : θ ∈ Ι 0 (2 * π)
⊢ ‖w / (circleMap c R θ - c)‖ < 1
[PROOFSTEP]
simpa [abs_of_pos hR] using hwR.2
[GOAL]
case refine'_5
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
θ : ℝ
x✝ : θ ∈ Ι 0 (2 * π)
this : ‖w / (circleMap c R θ - c)‖ < 1
⊢ HasSum (fun n => (w / (circleMap c R θ - c)) ^ n * (circleMap c R θ - c)⁻¹) (circleMap c R θ - (c + w))⁻¹
[PROOFSTEP]
convert (hasSum_geometric_of_norm_lt_1 this).mul_right _ using 1
[GOAL]
case h.e'_6
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
hR : 0 < R
hwR : ↑Complex.abs w / R ∈ Ico 0 1
θ : ℝ
x✝ : θ ∈ Ι 0 (2 * π)
this : ‖w / (circleMap c R θ - c)‖ < 1
⊢ (circleMap c R θ - (c + w))⁻¹ = (1 - w / (circleMap c R θ - c))⁻¹ * (circleMap c R θ - c)⁻¹
[PROOFSTEP]
simp [← sub_sub, ← mul_inv, sub_mul, div_mul_cancel _ (circleMap_ne_center hR.ne')]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
⊢ HasSum (fun n => ↑(cauchyPowerSeries f c R n) fun x => w)
((2 * ↑π * I)⁻¹ • ∮ (z : ℂ) in C(c, R), (z - (c + w))⁻¹ • f z)
[PROOFSTEP]
simp only [cauchyPowerSeries_apply]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ
w : ℂ
hf : CircleIntegrable f c R
hw : ↑Complex.abs w < R
⊢ HasSum (fun n => (2 * ↑π * I)⁻¹ • ∮ (z : ℂ) in C(c, R), (w / (z - c)) ^ n • (z - c)⁻¹ • f z)
((2 * ↑π * I)⁻¹ • ∮ (z : ℂ) in C(c, R), (z - (c + w))⁻¹ • f z)
[PROOFSTEP]
exact (hasSum_two_pi_I_cauchyPowerSeries_integral hf hw).const_smul _
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
hf : CircleIntegrable f c ↑R
hR : 0 < R
y✝ : ℂ
hy : y✝ ∈ EMetric.ball 0 ↑R
⊢ HasSum (fun n => ↑(cauchyPowerSeries f c (↑R) n) fun x => y✝)
((2 * ↑π * I)⁻¹ • ∮ (z : ℂ) in C(c, ↑R), (z - (c + y✝))⁻¹ • f z)
[PROOFSTEP]
refine' hasSum_cauchyPowerSeries_integral hf _
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
hf : CircleIntegrable f c ↑R
hR : 0 < R
y✝ : ℂ
hy : y✝ ∈ EMetric.ball 0 ↑R
⊢ ↑Complex.abs y✝ < ↑R
[PROOFSTEP]
rw [← norm_eq_abs, ← coe_nnnorm, NNReal.coe_lt_coe, ← ENNReal.coe_lt_coe]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
f : ℂ → E
c : ℂ
R : ℝ≥0
hf : CircleIntegrable f c ↑R
hR : 0 < R
y✝ : ℂ
hy : y✝ ∈ EMetric.ball 0 ↑R
⊢ ↑‖y✝‖₊ < ↑R
[PROOFSTEP]
exact mem_emetric_ball_zero_iff.1 hy
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
⊢ (∮ (z : ℂ) in C(c, R), (z - w)⁻¹) = 2 * ↑π * I
[PROOFSTEP]
have hR : 0 < R := dist_nonneg.trans_lt hw
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
⊢ (∮ (z : ℂ) in C(c, R), (z - w)⁻¹) = 2 * ↑π * I
[PROOFSTEP]
suffices H : HasSum (fun n : ℕ => ∮ z in C(c, R), ((w - c) / (z - c)) ^ n * (z - c)⁻¹) (2 * π * I)
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : HasSum (fun n => ∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ n * (z - c)⁻¹) (2 * ↑π * I)
⊢ (∮ (z : ℂ) in C(c, R), (z - w)⁻¹) = 2 * ↑π * I
[PROOFSTEP]
have A : CircleIntegrable (fun _ => (1 : ℂ)) c R := continuousOn_const.circleIntegrable'
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : HasSum (fun n => ∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ n * (z - c)⁻¹) (2 * ↑π * I)
A : CircleIntegrable (fun x => 1) c R
⊢ (∮ (z : ℂ) in C(c, R), (z - w)⁻¹) = 2 * ↑π * I
[PROOFSTEP]
refine' (H.unique _).symm
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : HasSum (fun n => ∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ n * (z - c)⁻¹) (2 * ↑π * I)
A : CircleIntegrable (fun x => 1) c R
⊢ HasSum (fun n => ∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ n * (z - c)⁻¹) (∮ (z : ℂ) in C(c, R), (z - w)⁻¹)
[PROOFSTEP]
simpa only [smul_eq_mul, mul_one, add_sub_cancel'_right] using hasSum_two_pi_I_cauchyPowerSeries_integral A hw
[GOAL]
case H
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
⊢ HasSum (fun n => ∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ n * (z - c)⁻¹) (2 * ↑π * I)
[PROOFSTEP]
have H : ∀ n : ℕ, n ≠ 0 → (∮ z in C(c, R), (z - c) ^ (-n - 1 : ℤ)) = 0 := by
refine' fun n hn => integral_sub_zpow_of_ne _ _ _ _; simpa
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
⊢ ∀ (n : ℕ), n ≠ 0 → (∮ (z : ℂ) in C(c, R), (z - c) ^ (-↑n - 1)) = 0
[PROOFSTEP]
refine' fun n hn => integral_sub_zpow_of_ne _ _ _ _
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
n : ℕ
hn : n ≠ 0
⊢ -↑n - 1 ≠ -1
[PROOFSTEP]
simpa
[GOAL]
case H
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : ∀ (n : ℕ), n ≠ 0 → (∮ (z : ℂ) in C(c, R), (z - c) ^ (-↑n - 1)) = 0
⊢ HasSum (fun n => ∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ n * (z - c)⁻¹) (2 * ↑π * I)
[PROOFSTEP]
have : (∮ z in C(c, R), ((w - c) / (z - c)) ^ 0 * (z - c)⁻¹) = 2 * π * I := by simp [hR.ne']
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : ∀ (n : ℕ), n ≠ 0 → (∮ (z : ℂ) in C(c, R), (z - c) ^ (-↑n - 1)) = 0
⊢ (∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ 0 * (z - c)⁻¹) = 2 * ↑π * I
[PROOFSTEP]
simp [hR.ne']
[GOAL]
case H
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : ∀ (n : ℕ), n ≠ 0 → (∮ (z : ℂ) in C(c, R), (z - c) ^ (-↑n - 1)) = 0
this : (∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ 0 * (z - c)⁻¹) = 2 * ↑π * I
⊢ HasSum (fun n => ∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ n * (z - c)⁻¹) (2 * ↑π * I)
[PROOFSTEP]
refine' this ▸ hasSum_single _ fun n hn => _
[GOAL]
case H
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : ∀ (n : ℕ), n ≠ 0 → (∮ (z : ℂ) in C(c, R), (z - c) ^ (-↑n - 1)) = 0
this : (∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ 0 * (z - c)⁻¹) = 2 * ↑π * I
n : ℕ
hn : n ≠ 0
⊢ (∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ n * (z - c)⁻¹) = 0
[PROOFSTEP]
simp only [div_eq_mul_inv, mul_pow, integral_const_mul, mul_assoc]
[GOAL]
case H
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : ∀ (n : ℕ), n ≠ 0 → (∮ (z : ℂ) in C(c, R), (z - c) ^ (-↑n - 1)) = 0
this : (∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ 0 * (z - c)⁻¹) = 2 * ↑π * I
n : ℕ
hn : n ≠ 0
⊢ ((w - c) ^ n * ∮ (z : ℂ) in C(c, R), (z - c)⁻¹ ^ n * (z - c)⁻¹) = 0
[PROOFSTEP]
rw [(integral_congr hR.le fun z hz => _).trans (H n hn), mul_zero]
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : ∀ (n : ℕ), n ≠ 0 → (∮ (z : ℂ) in C(c, R), (z - c) ^ (-↑n - 1)) = 0
this : (∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ 0 * (z - c)⁻¹) = 2 * ↑π * I
n : ℕ
hn : n ≠ 0
⊢ ∀ (z : ℂ), z ∈ sphere c R → (z - c)⁻¹ ^ n * (z - c)⁻¹ = (z - c) ^ (-↑n - 1)
[PROOFSTEP]
intro z _
[GOAL]
E : Type u_1
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℂ E
inst✝ : CompleteSpace E
c w : ℂ
R : ℝ
hw : w ∈ ball c R
hR : 0 < R
H : ∀ (n : ℕ), n ≠ 0 → (∮ (z : ℂ) in C(c, R), (z - c) ^ (-↑n - 1)) = 0
this : (∮ (z : ℂ) in C(c, R), ((w - c) / (z - c)) ^ 0 * (z - c)⁻¹) = 2 * ↑π * I
n : ℕ
hn : n ≠ 0
z : ℂ
hz✝ : z ∈ sphere c R
⊢ (z - c)⁻¹ ^ n * (z - c)⁻¹ = (z - c) ^ (-↑n - 1)
[PROOFSTEP]
rw [← pow_succ', ← zpow_ofNat, inv_zpow, ← zpow_neg, Int.ofNat_succ, neg_add, sub_eq_add_neg _ (1 : ℤ)]
|
/-
In Lean, false is a proposition that
is false in the sense that there is
no proof of it. It's an uninhabited
type.
-/
#check false
/-
inductive false : Prop
-/
/-
There's no introduction rule for false
as there are no proofs of it at all.
-/
/-
The elimination rule for false is very
important.
-/
theorem false_elim' : ∀ (P : Prop), false → P := -- false elimination
λ P f,
match f with
end
theorem false_imp_anything : ∀ (P : Prop), false → P :=
λ P f, false.elim f
-- Universal specialization
lemma false_imp_false : false → false := false_imp_anything false
lemma false_imp_true : false → true := false_imp_anything true
-- trick question
lemma true_imp_false : true → false := λ t, sorry -- stuck: there is no proof of false to give
/-
As expected from propositional logic
true → true is true
true → false is false
false → true is true
false → false is true
-/
|
/-
Copyright (c) 2022 Kevin Buzzard. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Author : Kevin Buzzard
-/
import tactic -- imports all the Lean tactics
import data.real.basic -- imports the real numbers
import solutions.section02reals.sheet3 -- import the definition of `tends_to` from a previous sheet
-- to get a proof from a type class
-- apply_instance tactic should figure it out
-- so can write have : linear_order ℝ := {apply_instance,},
-- you can maybe do this one now
theorem tends_to_neg {a : ℕ → ℝ} {t : ℝ} (ha : tends_to a t) :
tends_to (λ n, - a n) (-t) :=
begin
rw tends_to at *,
intros ε hε,
specialize ha ε hε, -- try to do linear rewriting?
cases ha with B ha,
use B,
intros n hB,
specialize ha n hB,
-- or simp, rw abs_sub_comm, exact ha,
rw [←abs_neg] at ha,
rw ←neg_add',
exact ha,
end
/-
`tends_to_add` is quite a challenge. In a few weeks' time I'll
show you a two-line proof using filters, but right now
as you're still learning I think it's important that you
try and suffer and struggle through the first principles proof.
BIG piece of advice: write down a complete maths proof first,
with all the details there. Then, once you know the maths
proof, try translating it into Lean. Note that a bunch
of the results we proved in sheet 4 will be helpful.
-/
/-- If `a(n)` tends to `t` and `b(n)` tends to `u` then `a(n) + b(n)`
tends to `t + u`. -/
theorem tends_to_add {a b : ℕ → ℝ} {t u : ℝ}
(ha : tends_to a t) (hb : tends_to b u) :
tends_to (λ n, a n + b n) (t + u) :=
begin
rw tends_to at *,
intros e he,
specialize ha (e/2) (half_pos he),
specialize hb (e/2) (half_pos he),
cases ha with k ha,
cases hb with j hb,
use (max k j),
intros n hn,
specialize ha n (le_of_max_le_left hn),
specialize hb n (le_of_max_le_right hn),
have h3 : a n - t + (b n - u) = a n + b n - (t + u), -- how to get rid of this?
{ ring, },
rw [←h3, ←(add_halves e)],
exact lt_of_le_of_lt (abs_add (a n - t) (b n - u)) (add_lt_add ha hb),
end
-- what is simp_rw -- rw but deeper
/-- If `a(n)` tends to t and `b(n)` tends to `u` then `a(n) - b(n)`
tends to `t - u`. -/
theorem tends_to_sub {a b : ℕ → ℝ} {t u : ℝ}
(ha : tends_to a t) (hb : tends_to b u) :
tends_to (λ n, a n - b n) (t - u) :=
begin
exact tends_to_add ha (tends_to_neg hb),
end
|
[STATEMENT]
lemma Cong\<^sub>0_subst_left:
assumes "t \<approx>\<^sub>0 t'" and "t \<frown> u"
shows "t' \<frown> u" and "t \\ u \<approx>\<^sub>0 t' \\ u"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. t' \<frown> u &&& t \ u \<approx>\<^sub>0 t' \ u
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. t' \<frown> u
2. t \ u \<approx>\<^sub>0 t' \ u
[PROOF STEP]
have 1: "t \<frown> u \<and> t \<frown> t' \<and> u \\ t \<frown> t' \\ t"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. t \<frown> u \<and> t \<frown> t' \<and> u \ t \<frown> t' \ t
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
t \<approx>\<^sub>0 t'
t \<frown> u
goal (1 subgoal):
1. t \<frown> u \<and> t \<frown> t' \<and> u \ t \<frown> t' \ t
[PROOF STEP]
by (metis Resid_along_normal_preserves_Cong\<^sub>0 Cong\<^sub>0_imp_con Cong\<^sub>0_reflexive R.con_sym
R.null_is_zero(2) R.arr_resid_iff_con R.sources_resid R.conI)
[PROOF STATE]
proof (state)
this:
t \<frown> u \<and> t \<frown> t' \<and> u \ t \<frown> t' \ t
goal (2 subgoals):
1. t' \<frown> u
2. t \ u \<approx>\<^sub>0 t' \ u
[PROOF STEP]
hence 2: "t' \<frown> u \<and> u \\ t \<frown> t' \\ t \<and>
(t \\ u) \\ (t' \\ u) = (t \\ t') \\ (u \\ t') \<and>
(t' \\ u) \\ (t \\ u) = (t' \\ t) \\ (u \\ t)"
[PROOF STATE]
proof (prove)
using this:
t \<frown> u \<and> t \<frown> t' \<and> u \ t \<frown> t' \ t
goal (1 subgoal):
1. t' \<frown> u \<and> u \ t \<frown> t' \ t \<and> (t \ u) \ (t' \ u) = (t \ t') \ (u \ t') \<and> (t' \ u) \ (t \ u) = (t' \ t) \ (u \ t)
[PROOF STEP]
by (meson R.con_sym R.cube R.resid_reflects_con)
[PROOF STATE]
proof (state)
this:
t' \<frown> u \<and> u \ t \<frown> t' \ t \<and> (t \ u) \ (t' \ u) = (t \ t') \ (u \ t') \<and> (t' \ u) \ (t \ u) = (t' \ t) \ (u \ t)
goal (2 subgoals):
1. t' \<frown> u
2. t \ u \<approx>\<^sub>0 t' \ u
[PROOF STEP]
show "t' \<frown> u"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. t' \<frown> u
[PROOF STEP]
using 2
[PROOF STATE]
proof (prove)
using this:
t' \<frown> u \<and> u \ t \<frown> t' \ t \<and> (t \ u) \ (t' \ u) = (t \ t') \ (u \ t') \<and> (t' \ u) \ (t \ u) = (t' \ t) \ (u \ t)
goal (1 subgoal):
1. t' \<frown> u
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
t' \<frown> u
goal (1 subgoal):
1. t \ u \<approx>\<^sub>0 t' \ u
[PROOF STEP]
show "t \\ u \<approx>\<^sub>0 t' \\ u"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. t \ u \<approx>\<^sub>0 t' \ u
[PROOF STEP]
using assms 1 2
[PROOF STATE]
proof (prove)
using this:
t \<approx>\<^sub>0 t'
t \<frown> u
t \<frown> u \<and> t \<frown> t' \<and> u \ t \<frown> t' \ t
t' \<frown> u \<and> u \ t \<frown> t' \ t \<and> (t \ u) \ (t' \ u) = (t \ t') \ (u \ t') \<and> (t' \ u) \ (t \ u) = (t' \ t) \ (u \ t)
goal (1 subgoal):
1. t \ u \<approx>\<^sub>0 t' \ u
[PROOF STEP]
by (metis R.arr_resid_iff_con R.con_imp_coinitial R.cube forward_stable)
[PROOF STATE]
proof (state)
this:
t \ u \<approx>\<^sub>0 t' \ u
goal:
No subgoals!
[PROOF STEP]
qed
|
Require Import List String Ensembles Arith
Computation.Core
ADT.ADTSig ADT.Core
Common.ilist ADTNotation.StringBound
ADTNotation.BuildADT ADTNotation.BuildADTSig
QueryStructure.QueryStructureSchema QueryStructure.QueryStructure.
Local Obligation Tactic := intuition.
Program Definition EmptyRelation (sch : Schema) : Relation sch :=
Build_Relation sch (fun T : @IndexedTuple (schemaHeading sch) => False) _.
Next Obligation.
destruct (schemaConstraints sch); intuition.
Qed.
Fixpoint Build_EmptyRelations (schemas : list NamedSchema) :
ilist (fun ns : NamedSchema => Relation (relSchema ns))
schemas :=
match schemas with
| [] => inil _
| sch :: schemas' =>
icons _ (EmptyRelation (relSchema sch)) (Build_EmptyRelations schemas')
end.
Lemma Build_EmptyRelation_IsEmpty qsSchema :
forall idx,
ith_Bounded relName (Build_EmptyRelations qsSchema) idx
= EmptyRelation _.
Proof.
intro.
eapply (ith_Bounded_ind (B' := fun _ => unit)
_
(fun As idx il a b b' => b = EmptyRelation (relSchema a))
idx (Build_EmptyRelations qsSchema) tt).
destruct idx as [idx [n nth_n] ]; simpl in *; subst.
revert qsSchema nth_n;
induction n; destruct qsSchema; simpl; eauto;
intros; icons_invert; simpl; auto.
unfold Some_Dep_Option; simpl; eapply IHn.
Qed.
Program Definition QSEmptySpec (qsSchema : QueryStructureSchema) :
QueryStructure qsSchema :=
{| rels := Build_EmptyRelations (qschemaSchemas qsSchema) |}.
Next Obligation.
rewrite Build_EmptyRelation_IsEmpty in *; simpl in *;
destruct (BuildQueryStructureConstraints qsSchema idx idx');
intuition.
Qed.
Notation "'empty'" :=
(ret (QSEmptySpec qsSchemaHint))
(at level 80) : QuerySpec_scope.
|
universe u v
structure InjectiveFunction (α : Type u) (β : Type v) where
fn : α → β
inj : ∀ a b, fn a = fn b → a = b
def add1 : InjectiveFunction Nat Nat where
fn a := a + 1
inj a b h := by injection h; assumption
instance : CoeFun (InjectiveFunction α β) (fun _ => α → β) where
coe s := s.fn
#eval add1 10
def mapAdd1 (xs : List Nat) : List Nat :=
xs.map add1
#eval mapAdd1 [1, 2]
def foo : InjectiveFunction Bool (Nat → Nat) where
fn
| true, a => a + 1
| false, a => a
inj a b h := by
cases a
cases b; rfl; injection (congrFun h 0)
cases b; injection (congrFun h 0); rfl
theorem ex1 (x : Nat) : foo true x = x + 1 :=
rfl
theorem ex2 (x : Nat) : foo false x = x :=
rfl
#eval foo true 10
#eval foo false 20
#eval [1, 2, 3].map (foo true)
|
function sammi(model, parser, data, secondaries, options)
% Visualize the given model, set of reactions, and/or data using SAMMI.
% Documentation at: https://sammim.readthedocs.io/en/latest/index.html
%
% Citation: Schultz, A., & Akbani, R. (2019). SAMMI: A Semi-Automated
% Tool for the Visualization of Metabolic Networks. Bioinformatics.
%
% USAGE:
% sammi(model,parser,data,secondaries,options)
%
% INPUT:
% model: COBRA model to be visualized
%
% OPTIONAL INPUTS:
% parser: How the model is to be parsed. There are four possible
% options for this parameter. Default empty array.
% *empty array: If this parameter is an empty array, all reactions
% in the model will be loaded in a single map. Not advisable for
% large maps.
% *string: If this parameter is a character array there are two
% options. Either the parameter defines the path to a SAMMI map (JSON
% file downloaded from a previous instance of SAMMI), in which case
% the given map will be used, or the parameter defines a field in the
% model struct, in which case this field will be used to parse the
% model into subgraphs.
% *cell array: If this parameter is a cell array, it should be a cell
% array of strings containing reaction IDs. Only these reactions will
% be included in a single SAMMI map.
% *struct: If this parameter is a struct of length n, the model will be
% parsed into n subgraphs. Each element of the struct should contain
% two fields plus an additional optional one:
% name: Name of the subgraph.
% rxns: Reactions to be included in the subgraph.
% flux: Optional field. Data to be mapped as reaction color.
% data: Data to be mapped onto the model. Struct of length n. Defaults
% to an empty array where no data is mapped. Each element of the struct should
% contain two fields:
% type: A cell array of two strings. The first string should be
% either 'rxns', 'mets', or 'links' indicating which type of data
% is to be mapped. The second string should be either 'color' or
% 'size', indicating how the data is to be mapped. 'links' only
% works with 'size', since link color is the same as that of
% the reaction it is associated with.
% data: a table object. VariableNames will be translated into
% condition names, and RowNames should be reaction IDs for 'rxns'
% and 'links' data, and metabolite IDs for 'mets' data. NaN values
% will not be mapped.
% secondaries: Cell array of strings of regular expressions. All
% metabolites, in all subgraphs, matching any of the regular expressions
% will be shelved. Defaults to an empty array where no metabolites are
% shelved.
% options: Struct with the following fields:
% htmlName: Name of the html file to be written and opened for the
% visualization. Defaults to 'index_load'. Change this option to
% write to a different html file that will not be overwritten by the
% default option.
% load: Load the html file in a new tab upon writing the file.
% Default to true. If you would not like a new tab to open, set
% this parameter to false and refresh a previously opened window. To
% open a new window without re-running SAMMI use the openSammi
% function.
% jscode: String. Defaults to empty string. Additional JavaScript
% code to run after loading the map. Can be any code to modify the
% loaded map.
%
% OUTPUT:
% No MATLAB output. Opens a browser window with the SAMMI visualization.
%
% EXAMPLES:
% %1 Open model in single map
% sammi(model)
%
% %2 Open model as multiple subgraphs divided by subSystems
% sammi(model,'subSystems')
%
% %3 Open model as multiple subgraphs divided by subSystems, load two
% %conditions with randomly generated data, and shelve hydrogen, water,
% %and O2 upon loading.
% rxntbl = array2table(randn(length(model.rxns),2),...
% 'VariableNames', {'condition1','condition2'},...
% 'RowNames', model.rxns);
% data(1).type = {'rxns' 'color'};
% data(1).data = rxntbl;
% data(2).type = {'rxns' 'size'};
% data(2).data = rxntbl;
% secondaries = {'^h\[.\]$','^h2o\[.\]$','^o2\[.\]$'};
% sammi(model,'subSystems',data,secondaries)
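%
% %4 Open the model as two custom subgraphs defined by a struct parser.
% %(Illustrative sketch: the subgraph names and reaction IDs below are
% %assumed placeholders, not taken from any particular model.)
% dat(1).name = 'Glycolysis';
% dat(1).rxns = {'HEX1';'PFK';'PYK'};
% dat(2).name = 'TCA cycle';
% dat(2).rxns = {'CS';'ACONTa';'ICDHyr'};
% sammi(model, dat)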
if nargin < 2
parser = [];
end
if nargin < 3
data = [];
end
if nargin < 4
secondaries = [];
end
if nargin < 5 || ~isfield(options,'htmlName')
options.htmlName = 'index_load.html';
elseif isempty(regexp(options.htmlName,'\.html$'))
options.htmlName = [options.htmlName '.html'];
end
if nargin < 5 || ~isfield(options,'load')
options.load = true;
end
if nargin < 5 || ~isfield(options,'jscode')
options.jscode = '';
end
%Read in index
sfolder = regexprep(which('sammi'),'sammi.m$','');
html = fileread([sfolder 'index.html']);
%Define options
if isstruct(parser)
jsonstr = structParse(model,parser);
elseif ischar(parser) && exist(parser,'file') == 2 && ~isempty(regexp(parser,'\.json$','ONCE'))
%Read map
jsonstr = fileread(parser);
jsonstr = strrep(jsonstr,'\','\\');
%Add graph
jsonstr = strcat('e = ',jsonstr,';\nreceivedTextSammi(JSON.stringify(e));');
elseif ischar(parser) && isfield(model,parser)
ss = unique(model.(parser));
if length(model.(parser)) == length(model.rxns)
for i = 1:length(ss)
dat(i).name = ss{i};
dat(i).rxns = model.rxns(ismember(model.(parser),ss{i}));
end
else
for i = 1:length(ss)
dat(i).name = ss{i};
dat(i).rxns = model.rxns(sum(model.S(ismember(model.(parser),ss{i}),:)) ~= 0);
end
end
jsonstr = structParse(model,dat);
elseif iscell(parser) || isempty(parser)
if iscell(parser)
%Keep only reactions we want
model = removeRxns(model,model.rxns(~ismember(model.rxns,parser)));
end
%Convert model to sammi JSON string
jsonstr = makeSAMMIJson(model);
%Add graph
jsonstr = strcat('e = ',jsonstr,';\nreceivedJSONwrapper(e)');
end
%Add data
for i = 1:length(data)
if isequal(data(i).type{1},'rxns')
if isequal(data(i).type{2},'color')
datastring = makeSAMMIdataString(data(i).data);
jsonstr = strcat(jsonstr,';\ndat = ',datastring,...
';\nreceivedTextFlux(dat)');
elseif isequal(data(i).type{2},'size')
datastring = makeSAMMIdataString(data(i).data);
jsonstr = strcat(jsonstr,';\ndat = ',datastring,...
';\nreceivedTextSizeRxn(dat)');
end
end
if isequal(data(i).type{1},'mets')
if isequal(data(i).type{2},'color')
datastring = makeSAMMIdataString(data(i).data);
jsonstr = strcat(jsonstr,';\ndat = ',datastring,...
';\nreceivedTextConcentration(dat)');
elseif isequal(data(i).type{2},'size')
datastring = makeSAMMIdataString(data(i).data);
jsonstr = strcat(jsonstr,';\ndat = ',datastring,...
';\nreceivedTextSizeMet(dat)');
end
end
if isequal(data(i).type{1},'links')
if isequal(data(i).type{2},'size')
datastring = makeSAMMIdataString(data(i).data);
jsonstr = strcat(jsonstr,';\ndat = ',datastring,...
';\nreceivedTextWidth(dat)');
end
end
end
%Shelve secondaries
if ~isempty(secondaries)
secondaries = strrep(secondaries,'\','\\\\');
jsonstr = strcat(jsonstr,';\nshelveList("(?:',strjoin(secondaries,')|(?:'),')");');
end
%Add last bit of code
jsonstr = strcat(jsonstr,';',options.jscode);
%Replace in html
html = strrep(html,'//MATLAB_CODE_HERE//',jsonstr);
%Account for special characters
html = strrep(html,'%','%%');
%Write to file
fid = fopen([sfolder options.htmlName],'w');
fprintf(fid,html);
fclose(fid);
%Open window
if options.load
web([sfolder options.htmlName],'-browser')
end
end
function jsonstr = structParse(model,parser)
%Get only reactions we are using
rx = {};
for i = 1:length(parser); rx = unique(cat(1,rx,parser(i).rxns)); end
model = removeRxns(model,model.rxns(~ismember(model.rxns,rx)));
%Convert model to sammi JSON file
jsonstr = makeSAMMIJson(model);
%Add graph
jsonstr = strcat('graph = ',jsonstr);
%Make conversion vector
convvec = makeSAMMIparseVector(parser);
%Add parsing line
jsonstr = strcat(jsonstr,';\ne = ',convvec,';\nfilterWrapper(e)');
end
|
% file : bkjz91.tex 26 July 9.30
\thispagestyle{empty} % empty head
\markboth{}{26 July 91} % empty foot
\begin{center} {\large\bf Zebra Reference Manual} \vspace{8mm} \end{center}
\begin{center} {\LARGE\bf book JZ91} \vspace{2mm} \end{center}
\begin{center} {\Large\bf Processor support} \vspace{4mm} \end{center}
\begin{center}
{\large\bf Zebra version 3.67 \vspace{2mm} \\
July 1991 \vspace{2mm} \\
J. Zoll}
\end{center}
\vspace*{20pt}
\begin{description}
\in{30mm}
\item[Chapter 1] Basic calling sequences
\begin{itemize}
\in{30mm}
\item[1.1] JZIN/JZOUT - simplest use
\item[1.2] JZIN - processor entry, general use
\item[1.3] JZINIT - initialize the JZ91 package
\item[1.4] JZTELL - count processor conditions
\item[1.5] JZZERO - zero the down call bank
\item[1.6] JZROOT - reset to processor level zero
\item[1.7] JZEND - print processor usage statistics
\item[1.8] Titles JZAN - processor constants
\item[1.9] Titles JZFL - processor flags
\end{itemize}
\item[Chapter 2] Extra features
\begin{itemize}
\in{30mm}
\item[2.1] JZIN - extra features
\item[2.2] JZINIT - extra features
\item[2.3] JZSETF - set processor flag by program
\item[2.4] JZLOG - processor logging
\item[2.5] JZWIND - unwind the processor stack
\item[2.6] JZTRAC - print processor trace-back
\item[2.7] Receiving the working space
\item[2.8] Note on processor timing
\item[2.9] Off-line initialization of a processor
\end{itemize}
\item[Appendix] JZ91 data structure and bank descriptions
\end{description}
\newpage
\vspace*{40pt}
{\large\bf Acknowledgement}
This package is derived from the HYDRA package JQ81:
A. Norton, J. Zoll, HYDRA Topical Manual, book JQ81, CERN Program Library.
\cleardoublepage % continue on next odd page
\markboth{Principles}{Principles}
\vspace*{20pt}
\lile{8mm}
\begin{center}
{\large\bf{Principles}}\\
\end{center}
% \smark{Principles}
\lspa
\vspace*{2pt}
{\large\bf Purpose}
The MZ package of ZEBRA helps the user to organize his data.
The purpose of the present JZ package is to assist him
in structuring his program.
It allows to formalize the concept of 'program module'
beyond the mere subroutine
and it provides the back-up service for these modules.
It is at the design stage of a program,
rather than later,
that the advantage of the JZ package will be most strongly felt,
since it provides a frame-work for the design;
again just like with the data structures of ZEBRA.
The program we are talking about will be designed as
a collection of modules called 'processors'.
The art consists in designing processors with interfaces
as simple and logical as possible,
and entirely documentable.
A given processor has a given task
which formalizes into a transformation of the input data structure
or rather sub-structure.
The result may be a modification of the input structure,
or a new output structure,
or just printed output and the like.
The processor is controlled by what is essentially a parameter list.
Normally this list contains pointers to the sub-structure
the processor is to work with.
Since links have to be held on relocatable memory
the parameter list is passed in a special purpose bank,
the 'call bank',
containing reference links and data words.
This call bank is filled with the input parameters
by the higher level code which calls the processor.
The processor takes them from there and also places back
into the same call bank any output parameters,
in particular links to the output data structure,
if any has been lifted.
Clearly the content of the call bank is part of
the specification of the processor.
A processor may call other processors.
This is not to say that a good design should aim
at having processors at several levels.
On the contrary, the fewer levels one can do with,
the better, of course.
Also, one should not write trivial processors
where simple subroutines will do.
By convention, every processor is entitled to have
the ZEBRA working space near the beginning of Q
freely to itself.
As a result a processor calling another processor
loses at that moment the content of its working space.
Its dimensions are saved by JZ,
and they are automatically restored when control
comes back to the calling processor,
ie. it does not have to call MZWORK again.
As an extra facility, JZ91 may be asked to save and restore
also the contents of the first so many links
and of the first so many data words of the working space.
\newpage
{\large\bf JZ91 Services}
JZ91 provides the following services
to application software organized into processors:
--- handling of 'call banks' serving to transmit parametric
information of the link and data types
between processors at levels n-1, n, and n+1.
For the processor at call-depth n the 'up' call-bank,
pointed to by the system link LQUP,
assures the communication with the higher level at
call-depth n-1;
and the 'down' call-bank,
pointed to by LQDW,
communicates to the lower level at call-depth n+1.
Call banks of equal size are pre-lifted,
one for each level of a definite number of levels,
they stay permanently in memory.
--- handling of 'processor constants',
being part of the environmental data for each processor,
fixed during the run.
If a processor needs any constants at all,
it may initialize them itself,
this then being the default initialization.
By using titles, loaded with TZINIT described in book TZ,
this default can be over-ruled.
The system link LQAN gives Access to these Numbers thus :
\bva
IQ(LQAN) number of constants
Q(LQAN+1) first constant
. . .
\end{verbatim}
--- handling of 'processor conditions' which may be signalled from
any processor with CALL JZTELL (J),
J being a small integer normally from 1 to 10.
This provides for simple counters over the whole run
grouped by processors.
--- handling of statistics of processor usage,
like number of times entered and time spent.
The number of times entered is accessible to
the processor in IQ(LQSV+2).
--- saving the size of the working space,
on special request also the contents,
on down-call to the next processor
and restoring it on up-return.
--- handling of 'processor flags' for test runs during
program development.
The flags may be used to drive debug operations of
a processor without having to recompile it.
The flags for a given processor are defined by the user
on titles JZFL and they
are copied on entry to the processor into the vector JQFLAG,
ready for inspection by the code in the processor;
non-initialized flags are set to zero.
This is only available with the program-development
version of JZ91;
the production version presets all flags to zero
and leaves them thus for the whole run.
This 'environment' information is held in
the bank of 'support variables',
one bank for each processor,
which is permanently in store as part of the JZ91 data structure.
Communication between the processors and JZ91 is via :
\bva
COMMON /JZUC/LQJZ,LQUP,LQDW,LQSV,LQAN, JQLEV,JQFLAG(10)
\end{verbatim}
JZ91 operates in and for one store only,
which must be the user's main store,
normally the primary store.
Links in the call bank can point only into this store.
\lile{-8mm}
\chapter{Basic calling sequences}
\section{JZIN/JZOUT - simplest use}
\smark{JZIN/JZOUT - basic}
Processor AA transfers control to processor BB with
a simple Fortran
\hspace{.2mm}
CALL BB,
having readied the contents of its down call-bank
at LQDW:
\bvb
. . .
LQ(LQDW-1) = load parameters of the link type
LQ(LQDW-2) =
. . .
IQ(LQDW+1) = load parameters of the data type
IQ(LQDW+2) =
. . .
CALL BB transfer control
. . .
\end{verbatim}
\lspa
In the simplest case the processor BB does
not call itself another processor,
does not have processor constants,
and does not use processor flags.
It would then look like this :
\bvb
SUBROUTINE BB
+CDE, Q. this is supposed to declare the store and also /JZUC/
+, links, data, last
CALL JZIN ('BB ',0,0,0)
CALL MZWORK (0,data(1),last,0)
processor body
CALL JZOUT ('BB ')
RETURN
END
\end{verbatim} \lspa
By calling JZIN the processor causes switching of the environment,
gaining access to its own data,
in particular to its call-bank via the system link LQUP,
thus LQ(LQUP-1) is its first link parameter.
The inverse switching is done by JZOUT.
The processor name has to be given to JZOUT explicitely.
This handshake is a check against forgotten calls.
The call to MZWORK must come after the call to JZIN
because JZIN saves the working space parameters of AA,
and hence they must still be intact.
For efficiency, JZIN and JZOUT, and also other routines,
expect to receive the processor name IAM with 4 characters exactly,
with blank-fill if necessary.
\newpage
\section{JZIN - processor entry, general use}
\smark{JZIN - general}
Processor AA transfers control to processor BB with
a simple Fortran
\hspace{.2mm}
CALL BB,
having readied the contents of its down call-bank
at LQDW.
To trigger swopping of the processor environment,
the first executable statement in the processor BB should be
\Subr{CALL JZIN (IAM, IFDOWN, NAN, 0)}
\bvb
with IAM processor ID, a text string of 4 characters exactly
of type CHARACTER*4
IFDOWN flag indicating whether this processor does or does not
call other processors : = 0 no / = 1 yes
NAN number of processor constants used
0 zero; non-zero gives access to the extra features
described in para. 2.1
\end{verbatim}
\lspa
JZIN saves the environment of the upper processor
and then sets up the environment of the
new processor.
If this does not yet exist,
it calls the internal service routine JZLIFT to create
the bank of support variables,
digesting the titles for this processor, if any.
JZIN returns the initialisation status thus :
\bvb
IQUEST(1) = -ve : just initialized without JZAN title
0 : just initialized with JZAN title
+ve : normal running
\end{verbatim}
\lspa
Thus a processor using processor constants can check
this condition like in this {\bf example} :
\bvc
SUBROUTINE BB
+CDE, Q. declaring the store and /JZUC/
CHARACTER IAM*4
PARAMETER (IAM = 'BB ')
CALL JZIN (IAM,0,3,0)
IF (IQUEST(1)) 11, 17, 21
C-- Initialize constants if and only if not set from title
11 Q(LQAN+1) = .0025
Q(LQAN+2) = .3
C-- Complete initialization calculating derived constants
17 Q(LQAN+3) = .5 * SIN (Q(LQAN+2))
21 CALL MZWORK (...)
... processor body
CALL JZOUT (IAM)
RETURN
END
\end{verbatim}
\lspa
The 3rd and the 4th parameter to JZIN are looked at only on first
contact for each processor.
\newpage
Note that over-ruling with the JZAN title only works
if the processor is programmed to handle it.
This is done in this example where statement 11
is reached only if there is no title.
JZIN readies for the new processor these links :
\bvb
COMMON /JZUC/LQJZ,LQUP,LQDW,LQSV,LQAN, JQLEV,JQFLAG(10)
LQJZ the header bank supporting the JZ91 data-structure
LQUP the upper call bank
LQDW the down call bank, if needed, else = zero.
LQSV the bank of support variables;
IQ(LQSV+1) contains system information,
IQ(LQSV+2) is 1 for first entry, 2 for second, etc.
LQAN the processor constants inside the SV bank :
IQ(LQAN) number of constants
Q(LQAN+1) first constant
Q(LQAN+2) second constant
. . .
and also :
JQLEV is the current call depth level, = zero for the root
JQFLAG(10) receives a copy of the flags for this processor.
\end{verbatim}
\lspa
Note that the data in this common /JZUC/ must not be changed
by the user, JQFLAG excepted.
JZIN goes to ZFATAL if IFDOWN is non-zero and
the lowest possible call-depth has been reached.
There are always 10 flag words.
Words not explicitly initialized with a JZFL title
for a given processor are always zero.
This feature is only available with the program development version,
in the production version of ZEBRA the flags are
all initialized to zero and never change,
JZFL titles are dropped by JZINIT and are otherwise ignored.
If you need the working space also in the initialization
part of the processor, look out :
you cannot place the CALL MZWORK before the
CALL JZIN,
nor immediately after, because MZWORK destroys IQUEST.
\newpage
\section{JZINIT - initialize the JZ91 package}
\smark{JZINIT}
The highest processor level,
at call depth zero, is called the 'root'.
The MAIN program is necessarily at this level.
The root level is handled as a processor,
with the ID given in IAMR to JZINIT.
This is used to associate the titles JZAN and JZFL, if any.
The root gets 10 processor constants and 10 JZTELL counters,
unless the extra features of para.~2.2 are used.
Before using the JZ91 package one has to initialize it with
\Subr{CALL JZINIT (IXSTOR, CHIAMR, CHOPT, MAXLEV, NLCALL, NDCALL, 0)}
\bvb
IXSTOR the index of the processing store,
(or the index of any division in this store)
may be zero if the primary store is used
CHIAMR the processor ID of the root,
type CHARACTER*4, string of 4 characters
CHOPT character string of options :
T timing selected
Q quiet, no log output
E error messages only
MAXLEV maximum call-depth number,
eg. =1 if only the root calls processors
NLCALL maximum number of links in all call banks
NDCALL maximum number of data words in all call banks
0 zero; non-zero gives access to the extra features
described in para. 2.2
\end{verbatim}
\lspa
JZINIT will create the long-range division JZ91 in the store
signalled by IXSTOR for holding the JZ91 data structure,
which contains all JZ data, like the call banks, the SV banks, etc.
This store must be the store where the user does his processing;
the links LQJZ,LQUP,... will be declared by JZINIT to be a link-area
for this store.
Links in call banks can only point into this store.
Titles JZAN and JZFL, if any, must have been read into the
title-structure of this same store before JZINIT is called,
because it will re-format or re-link them for use.
All call banks are pre-lifted by JZINIT,
all of the same maximum size as specified by NLCALL and NDCALL,
one call bank for each of the MAXLEV levels.
They are permanent banks,
being continuously re-used.
Accounting the execution time of the processors individually
is an option which could be expensive in real time
on some computers.
JZINIT returns IQUEST(1) just like JZIN.
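Purely as an illustration (the sizes shown are assumed values to be
adapted to the application; the primary store is used and no options
are selected), a MAIN program acting as the root might
initialize JZ91 with :
\bvb
      CALL JZINIT (0, 'MAIN', ' ', 4, 8, 20, 0)
\end{verbatim}
\lspa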
\newpage
\section{JZTELL - count processor conditions}
\smark{JZTELL}
To signal condition J in the current processor one may
\Subr{CALL JZTELL (J)}
which bumps the counter J=1,2,...,NCD contained
in the support variables.
The first or the last counter is bumped for
underflow or overflow in J.
NCD, the number of counters available in the SV bank,
is normally 10.
If more are needed the extra features of JZIN have
to be used,
as explained in para.~2.1.
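Purely as an illustration (the conditions tested and the counter
numbers are assumed, not prescribed by JZ91),
a processor might count two of its own conditions thus :
\bvb
      IF (CHI2 .GT. CHIMAX)  CALL JZTELL (2)     bad fit
      IF (NHITS .LT. 3)      CALL JZTELL (3)     too few hits
\end{verbatim}
\lspa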
\section{JZZERO - zero the down call bank}
\smark{JZZERO}
When filling the down call bank for the next processor to be
called it is safer to clear the unused part of this bank to zero,
with
\Subr{CALL JZZERO (NL, ND)}
\bvb
with NL leave the links 1 to NL untouched and reset links
NL+1 to the end to zero;
ND same for the data words, words 1 to ND untouched.
\end{verbatim}
\lspa
Note that JZIN does already clear the new down call bank to zero.
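As a sketch (the names LTRAK, LVERT, NEV, IFLAG are assumed here
purely for illustration), a caller loading 2 links and 2 data words
might clear the remainder before the down call thus :
\bvb
      LQ(LQDW-1) = LTRAK
      LQ(LQDW-2) = LVERT
      IQ(LQDW+1) = NEV
      IQ(LQDW+2) = IFLAG
      CALL JZZERO (2,2)       clear links 3,... and data words 3,... to zero
      CALL BB
\end{verbatim}
\lspa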
\section{JZROOT - reset processor level to root}
\smark{JZROOT}
If recovery to 'next event' is done with transfer to QNEXT
(see book MZ, para.~3.04),
QNEXT should reset the processor level to 'root' with :
\Subr{CALL JZROOT}
No harm is done by calling JZROOT on first entry to QNEXT,
when the level is already zero.
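A minimal sketch of such a QNEXT (purely illustrative, the recovery
actions themselves are application dependent) :
\bvb
      SUBROUTINE QNEXT
      CALL JZROOT              back to call-depth zero
      . . .                    recovery actions for the next event
      RETURN
      END
\end{verbatim}
\lspa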
\section{JZEND - print processor usage statistics}
\smark{JZEND}
To get this printed on the log file, one calls from ZEND :
\Subr{CALL JZEND}
The apparent number of calls to the root reflects the
number of times that JZROOT did actually have to unwind
the processor stack,
except for the initial entry with JZINIT.
\newpage
\section{Titles JZAN - processor constants}
\smark{Titles JZAN constants}
See TZINIT in book TZ for input of titles into
the dynamic store.
For each processor whose constants are to be initialized
via the titles,
thus over-ruling the default in the processor itself,
one title should be given :
\bvb
word 1 processor ID in A4 format
2 constant 1
3 constant 2
... ...
n+1 constant n
\end{verbatim}
\lspa
The number of constants given should agree with
the number given as NAN to JZIN.
A discrepancy causes a diagnostic message.
If some constants are derived by the processor from other
constants,
as in the example of para.~1.2,
their places have to be kept open by giving dummy zeros.
If several titles are given for the same processor,
the first title coming in the title input stream is taken,
later ones are dropped.
\bvc
Example :
*DO JZAN -E11 -C21/72 #. Constants for central detector decoding
MAIN :CDRC
GLOBAL T0 0.
DE/DX SCALE 4.
MAX BASE 15.
S-SAMPL LENGTH 8.
A-COEFF 0.89
MIN P-LENGTH 2.
T-SLEW CONSTANT 1000.
AV INV RAW E1 0.00585
DEDXOFFSET 68.0
T-COR FOR S/W -35.0
*DO JZAN -C11/72
:V0FI
DPPMAX 30.0 #. maximum DELTA P / P
Y2TMAX 6.0
ETAMAX 99.0
AMBFLG 1.0 #. do not remove ambiguities
DISVMN 0.0
DISZMX 0.3
STDIMP 1.0
PENALT 0.1
*DO JZAN -E11 -C21/72
:SERC
DUMMY (RUN NO.) 0.
TBIN TRAK+BGR 2.
CDIN TRAK+DIGIT+BGR 3.
FDIN NOT CALLED 0.
DUMMY 6* 0.
\end{verbatim}
\lspa
\newpage
\section{Titles JZFL - processor flags}
\smark{Titles JZFL flags}
Zero, one or several titles may be given,
each containing flags for one or several processors,
given as one data group for one processor which looks thus :
\bvb
word a processor ID in A4 format
a+1 flag word 1
a+2 flag word 2
. . . (n=0 is possible,
it blocks later settings)
a+n flag word n
a+n+1 END - the termination literal,
may be omitted for the last group.
Any JZFL title looks then like this :
word 1 first word of first group of n1 words
ie. n1-2 flags
n1+1 first word of second group
. . .
\end{verbatim}
\lspa
If several flag settings are given for the same processor,
the first one is taken and further ones are dropped.
\bvb
Examples :
*DO JZFL
:IMRE 0 0 0 0 0 1 0 0 :END
:VEFI 99 5 0 0 0 1 1 0 :END
:V0FI 0 0 0 0 0 0 0 0 :END
:TMER 0 0 0 0 0 0 0 :END
:VEMO 0 0 0 0 0 1 :END
:XCAL 0 :END
:TFIT #B1110110001 :END
For single-group titles it is more economic to omit
the end terminator (no re-formatting needed) :
*DO JZFL
:VEMO 0 0 0 0 0 1
*DO JZFL
:XCAL 0
*DO JZFL
:TFIT #B1110110001
\end{verbatim}
\lspa
\chapter{Extra features}
\section{JZIN - extra features}
\smark{JZIN - extra features}
These are requested by giving a LIST as the fourth parameter
to JZIN rather than zero.
The first word of LIST must indicate the length of the list.
Each further word selects the features described:
\bvb
LIST(2) = NCD the number of JZTELL counters to be provided for the processor
(the default is 10)
LIST(3) = NLS the number of working space links and
LIST(4) = NDS the number of working space data to be saved
(the defaults are 0)
\end{verbatim}
\lspa
When processor AA calls BB it loses the working space
since BB has the right to use it freely;
only the size of the working space is saved by JZIN and
restored by JZOUT.
With these 2 options JZIN is requested to save links 1,...,NLS
and/or data words 1,...,NDS into the bank of support variables
on down-call.
JZOUT will restore them on up-return.
Saving working space data is intended to be used with
{\bf small} amounts of data only,
otherwise this costs time and also memory.
\bvb
Examples :
DIMENSION LIST(2)
DATA LIST /1,24/ selects NCD=24; NLS and NDS remain zero
DIMENSION LIST(3)
DATA LIST /2,4,3/ selects NCD=4 and NLS=3; NDS remains zero
\end{verbatim}
\lspa
\section{JZINIT - extra features}
\smark{JZINIT - extra features}
This is handled in analogy with the extra features of JZIN :
\bvb
LIST(2) = NANR the number of processor constants for the root (default is 10)
LIST(3) = NCD the number of JZTELL counters for the root (default is 10)
LIST(4) = NLSR the number of working space links and
LIST(5) = NDSR the number of working space data to be saved (defaults are 0)
LIST(6) = NACCE extra system accounting words in all SV banks,
this is for monitoring to be used only by experts (default is 0)
\end{verbatim}
\lspa
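By analogy with the JZIN examples above, and assuming the same convention
that the first word of LIST gives the number of selection words which follow,
a root set-up might be sketched as :
\bvb
      DIMENSION LIST(3)
      DATA LIST /2,20,24/      selects NANR=20 and NCD=24 for the root;
                               NLSR, NDSR, NACCE remain zero
\end{verbatim}
\lspa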
\newpage
\section{JZSETF - set processor flag by program}
\smark{JZSETF - set flag}
To change the value of flag JFL in processor CHID one can call:
\Subr{CALL JZSETF (CHID, JFL, VALUE)}
This routine acts only if the flag JFL actually exists
in the processor CHID,
ie. if a title JZFL with at least JFL flags has been given.
If it does nothing it returns IQUEST(1)=0.
On successful operation it returns 3 values in
IQUEST(1/3), in this order :
\bvb
1 LFL adr of the flag JFL is IQ(LFL+JFL)
2 NFL length of the flag vector
3 OLD previous content of the changed flag
\end{verbatim}
\lspa
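For illustration only (the flag number is arbitrary, and the new value is
assumed to be an integer since the flags are stored in IQ) :
\bvb
      CALL JZSETF ('TFIT',3,1)
      IF (IQUEST(1).EQ.0)  PRINT *, 'TFIT has no flag 3, nothing done'
\end{verbatim}
\lspa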
\section{JZLOG - processor logging}
\smark{JZLOG - logging}
This gives control over the amount of information printed
about the operation of the processors~:
\Subr{CALL JZLOG (CHOPT)}
CHOPT is a CHARACTER string whose individual letters select
particular outputs :
\bvb
Q : suppress all messages
E : print error messages only
N : reset to normal logging
T : monitor each call to JZTELL
A : monitor each call to JZIN
B : and dump the call bank
C : and dump the parameters
X : monitor each call to JZOUT
Y : and dump the call bank
\end{verbatim}
Options B and C imply A; option Y implies X.
The implementation of the effect of options B, C, and Y is still waiting
for other new code in Zebra.
\bvb
Examples :
CALL JZLOG ('E')
CALL JZLOG ('TBCY') maximum logging
CALL JZLOG ('A') log only entries
CALL JZLOG ('N') back to normal
\end{verbatim}
\lspa
\newpage
\section{JZWIND - unwind the processor stack}
\smark{JZWIND}
If one uses setjmp/longjmp to quit abnormally from some low-level
processor to some higher-level processor (other than the root),
the processor receiving the longjmp has to unwind the JZ91 stack
to itself by calling JZWIND with its name IAM :
\Subr{CALL JZWIND (IAM)}
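For illustration only (the processor name CC and the surrounding
long-jump machinery are hypothetical) :
\bvb
C-- control arrives here via longjmp from some lower-level processor
      CALL JZWIND ('CC ')
C-- the JZ91 stack is now unwound to processor CC
\end{verbatim}
\lspa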
\section{JZTRAC - print processor trace-back}
\smark{JZTRAC}
This routine is called from ZPOSTM during error termination.
\Subr{CALL JZTRAC (MODE)}
It prints the processor names and the call-bank addresses;
it also optionally marks some banks as 'critical'
so that DZSNAP produces a full dump of these banks.
Which banks are marked 'critical' depends on single bits in MODE :
\bvb
bit 1 all SV banks in the chain
2 all call banks in the chain
3 all banks pointed to from the links
in the current call banks at LQUP and LQDW.
\end{verbatim}
\lspa
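For illustration of the bit convention only : a MODE word with bits 1 and 2
set, ie. MODE = 3, marks both the SV banks and the call banks.
\bvb
      CALL JZTRAC (3)
\end{verbatim}
\lspa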
\section{Receiving the working space}
\smark{Notes}
Sometimes the situation arises where the calling processor
wants to receive the working space of the called processor
intact;
it may for instance contain a large error matrix
which one does not want copied into a bank
just in case it may be needed.
The calling processor makes this request
by setting status-bit 15 in the down call bank,
ie. CALL SBIT1 (IQ(LQDW),15).
JZOUT will see this flag,
reset it to zero,
and leave the working space unchanged.
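A minimal sketch of the calling side, assuming that LQDW already points
to the down call bank prepared for BB at this point :
\bvb
C-- in the calling processor, before transferring control to BB
      CALL SBIT1 (IQ(LQDW),15)
      CALL BB
C-- the working space filled by BB is still intact here
\end{verbatim}
\lspa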
\section{Note on processor timing}
\smark{Notes}
JZ91 uses the KERNLIB routine TIMED (Z 007)
for measuring the time spent in each processor.
TIMED is called every time the processor level changes
and the value returned is added into Q(LQSV+5) of the appropriate SV bank.
If the user also wants to use TIMED to time a section
of his code inside (!) a processor,
he can do this.
But unless he follows the recommendation below,
he will invalidate the timing figures for that particular
processor.
To keep things right, he should do this :
\bvb
CALL TIMED (T)
Q(LQSV+5)=Q(LQSV+5) + T
user code to be timed
CALL TIMED (T)
Q(LQSV+5)=Q(LQSV+5) + T
\end{verbatim}
\lspa
The first call marks the start time of the user code;
the time spent in the processor till this moment
is cumulated into the SV bank.
The second call returns the time interval of the user
code;
it too is cumulated.
Note that an interval across a processor call
cannot be measured in this way.
\newpage
\section{Off-line initialization of a processor}
\smark{Notes}
In case the initialization part of a processor is bulky
it may be convenient to split it off from the processor proper
into a separate subroutine to be called just once from the root,
so as to have it executed and then out of the way.
Suppose we have the processor BB,
and we split the initialization off into subroutine BBIN.
This might then look as follows :
\bvb
SUBROUTINE BBIN
+CDE, Q.
CALL JZIN ('BB ',0,36,0)
IF (IQUEST(1)) 11, 17, 99
C-- Initialize constants if and only if not set from title
11 Q(LQAN+1) = .0025
Q(LQAN+2) = .3
C-- Complete initialization calculating derived constants
17 Q(LQAN+3) = .5 * SIN (Q(LQAN+2))
IQ(LQSV+2) = 0 to reset the entry count from 1 to zero
99 CALL JZOUT ('BB ')
RETURN
END
C=====================================================
SUBROUTINE BB
+CDE, Q.
CALL JZIN ('BB ',0,0,0)
IF (IQUEST(1).LE.0) CALL ZFATAM ('BB, NOT INITIALIZED.')
CALL MZWORK (...)
processor body
CALL JZOUT ('BB ')
RETURN
END
C=====================================================
Note that in subroutine BB it is still wise to check on IQUEST(1)
in case the explicit CALL BBIN from the main program has been lost.
\end{verbatim}
\lspa
\newpage
\markboth{appendix: bank JZ91}{appendix: bank JZ91}
\vspace*{4mm}
{\Large\bf Appendix: JZ91 data structure and bank descriptions}
\vspace*{4mm}
{\large\bf JZ91 - header bank}
address : LQJZ system link in /JZUC/
\bvb
- 2*JQMLEV+6 current SV bank for depth JQMLEV
... ...
+7 1
- JQMLEV+6 current SV bank for depth 0 (root)
- JQMLEV+5 down call bank for depth JQMLEV-1
... ...
- 7 1
- 6 down call bank for depth 0 (root)
- 5 zero (= LQUP for the root)
- 4 fan-out bank for immediate access to SV banks
- 3 linear chain of SV banks
- 2 linear chain of pending JZFL derived title banks
links : - 1 linear chain of pending JZAN title banks
data : 1 guard word
2 NQLINK working space parameters
3 NDATA at level 0 (= root)
4 NQLINK
5 NDATA at level 1 (= 1 below the root)
... ...
2*JQMLEV+1 NDATA at level JQMLEV-1
2*JQMLEV+2 guard word
JQMLEV is MAXLEV of JZINIT, the maximum call depth
\end{verbatim}
\lspa
{\large\bf Fan-out bank}
address : LQ(LQJZ-4)
\bvb
link -J address of SV bank J
data 1 N = length of the list, J=1,...,N
J+1 ID of SV bank J
\end{verbatim}
\lspa
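As a minimal sketch only (assuming the standard ZEBRA store arrays IQ and LQ,
with LQJZ taken from /JZUC/ as above, and a hypothetical target identifier
IDWANT), the SV bank of a given processor could be located through the
fan-out bank like this :
\bvb
      LFAN = LQ(LQJZ-4)
      N    = IQ(LFAN+1)
      LSV  = 0
      DO 10 J=1,N
      IF (IQ(LFAN+J+1).EQ.IDWANT)  LSV = LQ(LFAN-J)
   10 CONTINUE
\end{verbatim}
\lspa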
\newpage
\markboth{appendix: bank JZSV}{appendix: bank JZSV}
{\large\bf JZSV - bank of support variables}
One such bank for each processor initialized
\bvb
links :
-(NLSV+3) saved working space link 1
... ...
- 4 saved working space link NLSV
- 3 two links reserved for the user
- 2
- 1 reserved
data :
LQSV + 0 status word
bits 1/8 = LV, the processor has been init. for this level
bits 9/16 = JQNACC-9, extra account words, normally zero
bit 17 set : constants are title initialized
+ 1 processor ID in A4 format
+ 2 number of calls to this processor
+ 3 NLSV working space links to be saved
+ 4 NDSV working space data words to be saved
+ 5 cumulated time for current call
+ 6 longest time interval for this processor
+ 7 cumulated execution time for this processor
+ 8 intermediate time cumulator (to improve precision)
[ + 9 ... possibly extra accounting words ]
LCD = LQSV + JQNACC (constant in /JQC/)
LCD + 0 NCD = number of conditions to be recorded
1 count condition 1 and lower
2 count condition 2
... ...
NCD count condition NCD and higher
LAN = LCD + NCD + 1 --> LQAN
LAN + 0 NAN = number of processor constants
1 constant 1
... ...
NAN constant NAN
LDSV = LAN + NAN + 1
LDSV + 0 saved working space data word 1
... ...
NDSV-1 saved data word NDSV
LFL = LDSV + NDSV only in P=QDEBUG version
LFL + 0 NFL = number of flag words
+ 1 flag word 1
... ...
NFL flag word NFL
\end{verbatim}
\lspa
/////////1/////////2/////////3/////////4/////////5/////////6/////////7/////////8
// archive_pointer_oserializer.ipp:
// (C) Copyright 2002 Robert Ramey - http://www.rrsd.com .
// Use, modification and distribution is subject to the Boost Software
// License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
// See http://www.boost.org for updates, documentation, and revision history.
#include <boost/config.hpp> // msvc 6.0 needs this for warning suppression
#include <boost/archive/detail/archive_pointer_oserializer.hpp>
#include <boost/archive/detail/basic_serializer_map.hpp>
namespace boost {
namespace archive {
namespace detail {

// One serializer map per archive type, created on first use.
template<class Archive>
basic_serializer_map & oserializer_map(){
    static basic_serializer_map map;
    return map;
}

// On construction, register this pointer serializer for its
// extended type info in the per-archive map.
template<class Archive>
archive_pointer_oserializer<Archive>::archive_pointer_oserializer(
    const boost::serialization::extended_type_info & type
) :
    basic_pointer_oserializer(type)
{
    oserializer_map<Archive>().insert(this);
}

// Look up the pointer serializer registered for the given
// extended type info in the per-archive map.
template<class Archive>
const basic_pointer_oserializer *
archive_pointer_oserializer<Archive>::find(
    const boost::serialization::extended_type_info & type
){
    return static_cast<const basic_pointer_oserializer *>(
        oserializer_map<Archive>().tfind(type)
    );
}

} // namespace detail
} // namespace archive
} // namespace boost
[STATEMENT]
lemma vfieldD[dest]:
assumes "\<langle>r, s\<rangle> \<in>\<^sub>\<circ> vfield A"
shows "r \<in>\<^sub>\<circ> A" and "s = \<F>\<^sub>\<circ> r"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. r \<in>\<^sub>\<circ> A &&& s = \<F>\<^sub>\<circ> r
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
\<langle>r, s\<rangle> \<in>\<^sub>\<circ> vfield A
goal (1 subgoal):
1. r \<in>\<^sub>\<circ> A &&& s = \<F>\<^sub>\<circ> r
[PROOF STEP]
unfolding vfield_def
[PROOF STATE]
proof (prove)
using this:
\<langle>r, s\<rangle> \<in>\<^sub>\<circ> (\<lambda>r\<in>\<^sub>\<circ>A. \<D>\<^sub>\<circ> r \<union>\<^sub>\<circ> \<R>\<^sub>\<circ> r)
goal (1 subgoal):
1. r \<in>\<^sub>\<circ> A &&& s = (\<lambda>r\<in>\<^sub>\<circ>set {r}. \<D>\<^sub>\<circ> r \<union>\<^sub>\<circ> \<R>\<^sub>\<circ> r)\<lparr>r\<rparr>
[PROOF STEP]
by auto
[STATEMENT]
lemma Red_term_pres_no_match:
"\<lbrakk>i < length ts; ts ! i \<Rightarrow> t'; no_match ps dts; dts = (map dterm ts)\<rbrakk>
\<Longrightarrow> no_match ps (map dterm (ts[i := t']))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>i < length ts; ts ! i \<Rightarrow> t'; no_match ps dts; dts = map dterm ts\<rbrakk> \<Longrightarrow> no_match ps (map dterm (ts[i := t']))
[PROOF STEP]
proof(induct ps dts arbitrary: ts i t' rule:no_match.induct)
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>ps ts tsa i t'. \<lbrakk>\<And>x xa xb xc xd tsa i t'. \<lbrakk>x < min (length ts) (length ps); ps ! x = C xa \<bullet>\<bullet> xc; ts ! x = C xb \<bullet>\<bullet> xd; xa = xb; i < length tsa; tsa ! i \<Rightarrow> t'; no_match xc xd; xd = map dterm tsa\<rbrakk> \<Longrightarrow> no_match xc (map dterm (tsa[i := t'])); i < length tsa; tsa ! i \<Rightarrow> t'; no_match ps ts; ts = map dterm tsa\<rbrakk> \<Longrightarrow> no_match ps (map dterm (tsa[i := t']))
[PROOF STEP]
case (1 ps dts ts i t')
[PROOF STATE]
proof (state)
this:
\<lbrakk>?x < min (length dts) (length ps); ps ! ?x = C ?xa \<bullet>\<bullet> ?xc; dts ! ?x = C ?xb \<bullet>\<bullet> ?xd; ?xa = ?xb; ?i < length ?ts; ?ts ! ?i \<Rightarrow> ?t'; no_match ?xc ?xd; ?xd = map dterm ?ts\<rbrakk> \<Longrightarrow> no_match ?xc (map dterm (?ts[?i := ?t']))
i < length ts
ts ! i \<Rightarrow> t'
no_match ps dts
dts = map dterm ts
goal (1 subgoal):
1. \<And>ps ts tsa i t'. \<lbrakk>\<And>x xa xb xc xd tsa i t'. \<lbrakk>x < min (length ts) (length ps); ps ! x = C xa \<bullet>\<bullet> xc; ts ! x = C xb \<bullet>\<bullet> xd; xa = xb; i < length tsa; tsa ! i \<Rightarrow> t'; no_match xc xd; xd = map dterm tsa\<rbrakk> \<Longrightarrow> no_match xc (map dterm (tsa[i := t'])); i < length tsa; tsa ! i \<Rightarrow> t'; no_match ps ts; ts = map dterm tsa\<rbrakk> \<Longrightarrow> no_match ps (map dterm (tsa[i := t']))
[PROOF STEP]
from \<open>no_match ps dts\<close> \<open>dts = map dterm ts\<close>
[PROOF STATE]
proof (chain)
picking this:
no_match ps dts
dts = map dterm ts
[PROOF STEP]
obtain j nm nm' rs rs' where ob: "j < size ts" "j < size ps"
"ps!j = C nm \<bullet>\<bullet> rs" "dterm (ts!j) = C nm' \<bullet>\<bullet> rs'"
"nm = nm' \<longrightarrow> no_match rs rs'"
[PROOF STATE]
proof (prove)
using this:
no_match ps dts
dts = map dterm ts
goal (1 subgoal):
1. (\<And>j nm rs nm' rs'. \<lbrakk>j < length ts; j < length ps; ps ! j = C nm \<bullet>\<bullet> rs; dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'; nm = nm' \<longrightarrow> no_match rs rs'\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by (subst (asm) no_match.simps) fastforce
[PROOF STATE]
proof (state)
this:
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
goal (1 subgoal):
1. \<And>ps ts tsa i t'. \<lbrakk>\<And>x xa xb xc xd tsa i t'. \<lbrakk>x < min (length ts) (length ps); ps ! x = C xa \<bullet>\<bullet> xc; ts ! x = C xb \<bullet>\<bullet> xd; xa = xb; i < length tsa; tsa ! i \<Rightarrow> t'; no_match xc xd; xd = map dterm tsa\<rbrakk> \<Longrightarrow> no_match xc (map dterm (tsa[i := t'])); i < length tsa; tsa ! i \<Rightarrow> t'; no_match ps ts; ts = map dterm tsa\<rbrakk> \<Longrightarrow> no_match ps (map dterm (tsa[i := t']))
[PROOF STEP]
show ?case
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. no_match ps (map dterm (ts[i := t']))
[PROOF STEP]
proof (subst no_match.simps)
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<exists>i<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! i = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! i = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
show "\<exists>k<min (length (map dterm (ts[i := t']))) (length ps).
\<exists>nm nm' rs rs'. ps!k = C nm \<bullet>\<bullet> rs \<and>
map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and>
(nm = nm' \<longrightarrow> no_match rs rs')"
(is "\<exists>k < ?m. ?P k")
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
proof-
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
{
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
assume [simp]: "j=i"
[PROOF STATE]
proof (state)
this:
j = i
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
have "\<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
using \<open>ts ! i \<Rightarrow> t'\<close>
[PROOF STATE]
proof (prove)
using this:
ts ! i \<Rightarrow> t'
goal (1 subgoal):
1. \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
proof(cases rule:Red_term_hnf_cases)
[PROOF STATE]
proof (state)
goal (9 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>v v' ts. \<lbrakk>ts ! i = term v \<bullet>\<bullet> ts; t' = term v' \<bullet>\<bullet> ts; v \<Rightarrow> v'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>nma i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = C nma \<bullet>\<bullet> ts; t' = C nma \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
8. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
9. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
case (5 v v' ts'')
[PROOF STATE]
proof (state)
this:
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v' \<bullet>\<bullet> ts''
v \<Rightarrow> v'
goal (9 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>v v' ts. \<lbrakk>ts ! i = term v \<bullet>\<bullet> ts; t' = term v' \<bullet>\<bullet> ts; v \<Rightarrow> v'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>nma i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = C nma \<bullet>\<bullet> ts; t' = C nma \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
8. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
9. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v' \<bullet>\<bullet> ts''
v \<Rightarrow> v'
[PROOF STEP]
obtain vs where [simp]:
"v = C\<^sub>U nm' vs" "rs' = map dterm\<^sub>M\<^sub>L (rev vs) @ map dterm ts''"
[PROOF STATE]
proof (prove)
using this:
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v' \<bullet>\<bullet> ts''
v \<Rightarrow> v'
goal (1 subgoal):
1. (\<And>vs. \<lbrakk>v = C\<^sub>U nm' vs; rs' = map dterm\<^sub>M\<^sub>L (rev vs) @ map dterm ts''\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
using ob
[PROOF STATE]
proof (prove)
using this:
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v' \<bullet>\<bullet> ts''
v \<Rightarrow> v'
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
goal (1 subgoal):
1. (\<And>vs. \<lbrakk>v = C\<^sub>U nm' vs; rs' = map dterm\<^sub>M\<^sub>L (rev vs) @ map dterm ts''\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by(cases v) auto
[PROOF STATE]
proof (state)
this:
v = C\<^sub>U nm' vs
rs' = map dterm\<^sub>M\<^sub>L (rev vs) @ map dterm ts''
goal (9 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>v v' ts. \<lbrakk>ts ! i = term v \<bullet>\<bullet> ts; t' = term v' \<bullet>\<bullet> ts; v \<Rightarrow> v'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>nma i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = C nma \<bullet>\<bullet> ts; t' = C nma \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
8. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
9. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
obtain vs' where [simp]: "v' = C\<^sub>U nm' vs'" "vs \<Rightarrow> vs'"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<And>vs'. \<lbrakk>v' = C\<^sub>U nm' vs'; vs \<Rightarrow> vs'\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
using \<open>v\<Rightarrow>v'\<close>
[PROOF STATE]
proof (prove)
using this:
v \<Rightarrow> v'
goal (1 subgoal):
1. (\<And>vs'. \<lbrakk>v' = C\<^sub>U nm' vs'; vs \<Rightarrow> vs'\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by(rule Red_ml.cases) auto
[PROOF STATE]
proof (state)
this:
v' = C\<^sub>U nm' vs'
vs \<Rightarrow> vs'
goal (9 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>v v' ts. \<lbrakk>ts ! i = term v \<bullet>\<bullet> ts; t' = term v' \<bullet>\<bullet> ts; v \<Rightarrow> v'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>nma i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = C nma \<bullet>\<bullet> ts; t' = C nma \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
8. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
9. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
obtain v' k where [arith]: "k<size vs" and "vs!k \<Rightarrow> v'"
and [simp]: "vs' = vs[k := v']"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (\<And>k v'. \<lbrakk>k < length vs; vs ! k \<Rightarrow> v'; vs' = vs[k := v']\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
using Red_ml_list_nth[OF \<open>vs\<Rightarrow>vs'\<close>]
[PROOF STATE]
proof (prove)
using this:
\<exists>v' k. k < length vs \<and> vs ! k \<Rightarrow> v' \<and> vs' = vs[k := v']
goal (1 subgoal):
1. (\<And>k v'. \<lbrakk>k < length vs; vs ! k \<Rightarrow> v'; vs' = vs[k := v']\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by fastforce
[PROOF STATE]
proof (state)
this:
k < length vs
vs ! k \<Rightarrow> v'
vs' = vs[k := v']
goal (9 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>v v' ts. \<lbrakk>ts ! i = term v \<bullet>\<bullet> ts; t' = term v' \<bullet>\<bullet> ts; v \<Rightarrow> v'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>nma i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = C nma \<bullet>\<bullet> ts; t' = C nma \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
8. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
9. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
show ?thesis (is "\<exists>rs'. ?P rs' \<and> ?Q rs'")
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
proof
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
let ?rs' = "map dterm ((map term (rev vs) @ ts'')[(size vs - k - 1):=term v'])"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
have "?P ?rs'"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v'])
[PROOF STEP]
using ob 5
[PROOF STATE]
proof (prove)
using this:
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v'__ \<bullet>\<bullet> ts''
v \<Rightarrow> v'__
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v'])
[PROOF STEP]
by(simp add: list_update_append map_update[symmetric] rev_update)
[PROOF STATE]
proof (state)
this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v'])
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v'])
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
have "?Q ?rs'"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v']))
[PROOF STEP]
apply rule
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. nm = nm' \<Longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v']))
[PROOF STEP]
apply(rule "1.hyps"[OF _ ob(3)])
[PROOF STATE]
proof (prove)
goal (7 subgoals):
1. nm = nm' \<Longrightarrow> j < min (length dts) (length ps)
2. nm = nm' \<Longrightarrow> dts ! j = C ?xb2 \<bullet>\<bullet> ?xd2
3. nm = nm' \<Longrightarrow> nm = ?xb2
4. nm = nm' \<Longrightarrow> length vs - k - 1 < length (map term (rev vs) @ ts'')
5. nm = nm' \<Longrightarrow> (map term (rev vs) @ ts'') ! (length vs - k - 1) \<Rightarrow> term v'
6. nm = nm' \<Longrightarrow> no_match rs ?xd2
7. nm = nm' \<Longrightarrow> ?xd2 = map dterm (map term (rev vs) @ ts'')
[PROOF STEP]
using "1.prems" 5 ob
[PROOF STATE]
proof (prove)
using this:
i < length ts
ts ! i \<Rightarrow> t'
no_match ps dts
dts = map dterm ts
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v'__ \<bullet>\<bullet> ts''
v \<Rightarrow> v'__
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
goal (7 subgoals):
1. nm = nm' \<Longrightarrow> j < min (length dts) (length ps)
2. nm = nm' \<Longrightarrow> dts ! j = C ?xb2 \<bullet>\<bullet> ?xd2
3. nm = nm' \<Longrightarrow> nm = ?xb2
4. nm = nm' \<Longrightarrow> length vs - k - 1 < length (map term (rev vs) @ ts'')
5. nm = nm' \<Longrightarrow> (map term (rev vs) @ ts'') ! (length vs - k - 1) \<Rightarrow> term v'
6. nm = nm' \<Longrightarrow> no_match rs ?xd2
7. nm = nm' \<Longrightarrow> ?xd2 = map dterm (map term (rev vs) @ ts'')
[PROOF STEP]
apply (auto simp:nth_append rev_nth ctxt_term[OF \<open>vs!k \<Rightarrow> v'\<close>] simp del: map_map)
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done
[PROOF STATE]
proof (state)
this:
nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v']))
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v'])
nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v']))
[PROOF STEP]
show "?P ?rs' \<and> ?Q ?rs'"
[PROOF STATE]
proof (prove)
using this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v'])
nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v']))
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v']) \<and> (nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v'])))
[PROOF STEP]
..
[PROOF STATE]
proof (state)
this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v']) \<and> (nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[length vs - k - 1 := term v'])))
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
\<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (8 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>nma i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = C nma \<bullet>\<bullet> ts; t' = C nma \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
8. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (8 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>nma i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = C nma \<bullet>\<bullet> ts; t' = C nma \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
8. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
case (7 nm'' k r' ts'')
[PROOF STATE]
proof (state)
this:
k < length ts''
ts'' ! k \<Rightarrow> r'
ts ! i = C nm'' \<bullet>\<bullet> ts''
t' = C nm'' \<bullet>\<bullet> ts''[k := r']
goal (8 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>nma i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = C nma \<bullet>\<bullet> ts; t' = C nma \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
8. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
show ?thesis (is "\<exists>rs'. ?P rs'")
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
proof
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
show "?P(map dterm (ts''[k := r']))"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> map dterm (ts''[k := r']) \<and> (nm = nm' \<longrightarrow> no_match rs (map dterm (ts''[k := r'])))
[PROOF STEP]
using 7 ob
[PROOF STATE]
proof (prove)
using this:
k < length ts''
ts'' ! k \<Rightarrow> r'
ts ! i = C nm'' \<bullet>\<bullet> ts''
t' = C nm'' \<bullet>\<bullet> ts''[k := r']
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> map dterm (ts''[k := r']) \<and> (nm = nm' \<longrightarrow> no_match rs (map dterm (ts''[k := r'])))
[PROOF STEP]
apply clarsimp
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> no_match rs (map dterm (ts''[k := r']))
[PROOF STEP]
apply(rule "1.hyps"[OF _ ob(3)])
[PROOF STATE]
proof (prove)
goal (7 subgoals):
1. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> j < min (length dts) (length ps)
2. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> dts ! j = C ?xb16 \<bullet>\<bullet> ?xd16
3. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> nm = ?xb16
4. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> k < length ts''
5. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> ts'' ! k \<Rightarrow> r'
6. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> no_match rs ?xd16
7. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> ?xd16 = map dterm ts''
[PROOF STEP]
using 7 "1.prems" ob
[PROOF STATE]
proof (prove)
using this:
k < length ts''
ts'' ! k \<Rightarrow> r'
ts ! i = C nm'' \<bullet>\<bullet> ts''
t' = C nm'' \<bullet>\<bullet> ts''[k := r']
i < length ts
ts ! i \<Rightarrow> t'
no_match ps dts
dts = map dterm ts
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
goal (7 subgoals):
1. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> j < min (length dts) (length ps)
2. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> dts ! j = C ?xb16 \<bullet>\<bullet> ?xd16
3. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> nm = ?xb16
4. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> k < length ts''
5. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> ts'' ! k \<Rightarrow> r'
6. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> no_match rs ?xd16
7. \<lbrakk>k < length ts''; ts'' ! k \<Rightarrow> r'; ts ! i = C nm' \<bullet>\<bullet> ts''; t' = C nm' \<bullet>\<bullet> ts''[k := r']; i < length ts; i < length ps; ps ! i = C nm' \<bullet>\<bullet> rs; no_match rs (map dterm ts''); nm'' = nm'; rs' = map dterm ts''; nm = nm'\<rbrakk> \<Longrightarrow> ?xd16 = map dterm ts''
[PROOF STEP]
apply auto
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done
[PROOF STATE]
proof (state)
this:
dterm t' = C nm' \<bullet>\<bullet> map dterm (ts''[k := r']) \<and> (nm = nm' \<longrightarrow> no_match rs (map dterm (ts''[k := r'])))
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
\<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (7 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (7 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
case (9 v k r' ts'')
[PROOF STATE]
proof (state)
this:
k < length ts''
ts'' ! k \<Rightarrow> r'
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v \<bullet>\<bullet> ts''[k := r']
goal (7 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
k < length ts''
ts'' ! k \<Rightarrow> r'
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v \<bullet>\<bullet> ts''[k := r']
[PROOF STEP]
obtain vs where [simp]: "v = C\<^sub>U nm' vs" "rs' = map dterm\<^sub>M\<^sub>L (rev vs) @ map dterm ts''"
[PROOF STATE]
proof (prove)
using this:
k < length ts''
ts'' ! k \<Rightarrow> r'
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v \<bullet>\<bullet> ts''[k := r']
goal (1 subgoal):
1. (\<And>vs. \<lbrakk>v = C\<^sub>U nm' vs; rs' = map dterm\<^sub>M\<^sub>L (rev vs) @ map dterm ts''\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
using ob
[PROOF STATE]
proof (prove)
using this:
k < length ts''
ts'' ! k \<Rightarrow> r'
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v \<bullet>\<bullet> ts''[k := r']
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
goal (1 subgoal):
1. (\<And>vs. \<lbrakk>v = C\<^sub>U nm' vs; rs' = map dterm\<^sub>M\<^sub>L (rev vs) @ map dterm ts''\<rbrakk> \<Longrightarrow> thesis) \<Longrightarrow> thesis
[PROOF STEP]
by(cases v) auto
[PROOF STATE]
proof (state)
this:
v = C\<^sub>U nm' vs
rs' = map dterm\<^sub>M\<^sub>L (rev vs) @ map dterm ts''
goal (7 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
7. \<And>v i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = term v \<bullet>\<bullet> ts; t' = term v \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
show ?thesis (is "\<exists>rs'. ?P rs' \<and> ?Q rs'")
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
proof
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
let ?rs' = "map dterm ((map term (rev vs) @ ts'')[k+size vs:=r'])"
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
have "?P ?rs'"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[k + length vs := r'])
[PROOF STEP]
using ob 9
[PROOF STATE]
proof (prove)
using this:
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
k < length ts''
ts'' ! k \<Rightarrow> r'
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v \<bullet>\<bullet> ts''[k := r']
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[k + length vs := r'])
[PROOF STEP]
by (auto simp: list_update_append)
[PROOF STATE]
proof (state)
this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[k + length vs := r'])
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[k + length vs := r'])
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
have "?Q ?rs'"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[k + length vs := r']))
[PROOF STEP]
apply rule
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. nm = nm' \<Longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[k + length vs := r']))
[PROOF STEP]
apply(rule "1.hyps"[OF _ ob(3)])
[PROOF STATE]
proof (prove)
goal (7 subgoals):
1. nm = nm' \<Longrightarrow> j < min (length dts) (length ps)
2. nm = nm' \<Longrightarrow> dts ! j = C ?xb2 \<bullet>\<bullet> ?xd2
3. nm = nm' \<Longrightarrow> nm = ?xb2
4. nm = nm' \<Longrightarrow> k + length vs < length (map term (rev vs) @ ts'')
5. nm = nm' \<Longrightarrow> (map term (rev vs) @ ts'') ! (k + length vs) \<Rightarrow> r'
6. nm = nm' \<Longrightarrow> no_match rs ?xd2
7. nm = nm' \<Longrightarrow> ?xd2 = map dterm (map term (rev vs) @ ts'')
[PROOF STEP]
using 9 "1.prems" ob
[PROOF STATE]
proof (prove)
using this:
k < length ts''
ts'' ! k \<Rightarrow> r'
ts ! i = term v \<bullet>\<bullet> ts''
t' = term v \<bullet>\<bullet> ts''[k := r']
i < length ts
ts ! i \<Rightarrow> t'
no_match ps dts
dts = map dterm ts
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
goal (7 subgoals):
1. nm = nm' \<Longrightarrow> j < min (length dts) (length ps)
2. nm = nm' \<Longrightarrow> dts ! j = C ?xb2 \<bullet>\<bullet> ?xd2
3. nm = nm' \<Longrightarrow> nm = ?xb2
4. nm = nm' \<Longrightarrow> k + length vs < length (map term (rev vs) @ ts'')
5. nm = nm' \<Longrightarrow> (map term (rev vs) @ ts'') ! (k + length vs) \<Rightarrow> r'
6. nm = nm' \<Longrightarrow> no_match rs ?xd2
7. nm = nm' \<Longrightarrow> ?xd2 = map dterm (map term (rev vs) @ ts'')
[PROOF STEP]
by (auto simp:nth_append simp del: map_map)
[PROOF STATE]
proof (state)
this:
nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[k + length vs := r']))
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> ?rs' \<and> (nm = nm' \<longrightarrow> no_match rs ?rs')
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[k + length vs := r'])
nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[k + length vs := r']))
[PROOF STEP]
show "?P ?rs' \<and> ?Q ?rs'"
[PROOF STATE]
proof (prove)
using this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[k + length vs := r'])
nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[k + length vs := r']))
goal (1 subgoal):
1. dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[k + length vs := r']) \<and> (nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[k + length vs := r'])))
[PROOF STEP]
..
[PROOF STATE]
proof (state)
this:
dterm t' = C nm' \<bullet>\<bullet> map dterm ((map term (rev vs) @ ts'')[k + length vs := r']) \<and> (nm = nm' \<longrightarrow> no_match rs (map dterm ((map term (rev vs) @ ts'')[k + length vs := r'])))
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
\<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (6 subgoals):
1. \<And>nma vs ts. \<lbrakk>ts ! i = term (C\<^sub>U nma vs) \<bullet>\<bullet> ts; t' = (C nma \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
2. \<And>x vs ts. \<lbrakk>ts ! i = term (V\<^sub>U x vs) \<bullet>\<bullet> ts; t' = (V x \<bullet>\<bullet> map term (rev vs)) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
3. \<And>vf vs n ts. \<lbrakk>ts ! i = term (Clo vf vs n) \<bullet>\<bullet> ts; t' = \<Lambda> (term (apply (lift 0 (Clo vf vs n)) (V\<^sub>U 0 []))) \<bullet>\<bullet> ts\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
4. \<And>s s' ts. \<lbrakk>ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s' \<bullet>\<bullet> ts; s \<Rightarrow> s'\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
5. \<And>x i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = V x \<bullet>\<bullet> ts; t' = V x \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
6. \<And>s i r' ts. \<lbrakk>i < length ts; ts ! i \<Rightarrow> r'; ts ! i = \<Lambda> s \<bullet>\<bullet> ts; t' = \<Lambda> s \<bullet>\<bullet> ts[i := r']\<rbrakk> \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
qed (insert ob, auto simp del: map_map)
[PROOF STATE]
proof (state)
this:
\<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
}
[PROOF STATE]
proof (state)
this:
j = i \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
hence "\<exists>rs'. dterm (ts[i := t'] ! j) = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')"
[PROOF STATE]
proof (prove)
using this:
j = i \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (1 subgoal):
1. \<exists>rs'. dterm (ts[i := t'] ! j) = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
using \<open>i < size ts\<close> ob
[PROOF STATE]
proof (prove)
using this:
j = i \<Longrightarrow> \<exists>rs'. dterm t' = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
i < length ts
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
goal (1 subgoal):
1. \<exists>rs'. dterm (ts[i := t'] ! j) = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
by(simp add:nth_list_update)
[PROOF STATE]
proof (state)
this:
\<exists>rs'. dterm (ts[i := t'] ! j) = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
hence "?P j"
[PROOF STATE]
proof (prove)
using this:
\<exists>rs'. dterm (ts[i := t'] ! j) = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (1 subgoal):
1. \<exists>nm nm' rs rs'. ps ! j = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! j = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
using ob
[PROOF STATE]
proof (prove)
using this:
\<exists>rs'. dterm (ts[i := t'] ! j) = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
j < length ts
j < length ps
ps ! j = C nm \<bullet>\<bullet> rs
dterm (ts ! j) = C nm' \<bullet>\<bullet> rs'
nm = nm' \<longrightarrow> no_match rs rs'
goal (1 subgoal):
1. \<exists>nm nm' rs rs'. ps ! j = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! j = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
by auto
[PROOF STATE]
proof (state)
this:
\<exists>nm nm' rs rs'. ps ! j = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! j = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
moreover
[PROOF STATE]
proof (state)
this:
\<exists>nm nm' rs rs'. ps ! j = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! j = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
have "j < ?m"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. j < min (length (map dterm (ts[i := t']))) (length ps)
[PROOF STEP]
using \<open>j < length ts\<close> \<open>j < size ps\<close>
[PROOF STATE]
proof (prove)
using this:
j < length ts
j < length ps
goal (1 subgoal):
1. j < min (length (map dterm (ts[i := t']))) (length ps)
[PROOF STEP]
by simp
[PROOF STATE]
proof (state)
this:
j < min (length (map dterm (ts[i := t']))) (length ps)
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
ultimately
[PROOF STATE]
proof (chain)
picking this:
\<exists>nm nm' rs rs'. ps ! j = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! j = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
j < min (length (map dterm (ts[i := t']))) (length ps)
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
\<exists>nm nm' rs rs'. ps ! j = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! j = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
j < min (length (map dterm (ts[i := t']))) (length ps)
goal (1 subgoal):
1. \<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
[PROOF STEP]
by blast
[PROOF STATE]
proof (state)
this:
\<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
\<exists>k<min (length (map dterm (ts[i := t']))) (length ps). \<exists>nm nm' rs rs'. ps ! k = C nm \<bullet>\<bullet> rs \<and> map dterm (ts[i := t']) ! k = C nm' \<bullet>\<bullet> rs' \<and> (nm = nm' \<longrightarrow> no_match rs rs')
goal:
No subgoals!
[PROOF STEP]
qed
[PROOF STATE]
proof (state)
this:
no_match ps (map dterm (ts[i := t']))
goal:
No subgoals!
[PROOF STEP]
qed |
{-# OPTIONS --allow-unsolved-metas #-}
{- This is a copy of Sane, but building upon a rather different notion of permutation -}
module Sane2 where
import Data.Fin as F
--
open import Data.Unit
open import Data.Nat using (ℕ ; zero ; suc ; _+_ ; _>_ )
open import Data.Sum using (inj₁ ; inj₂ ; _⊎_)
open import Data.Vec
open import Function using ( id ) renaming (_∘_ to _○_)
open import Relation.Binary -- to make certain goals look nicer
open import Relation.Binary.PropositionalEquality using ( _≡_ ; refl ; sym ; cong ; trans ; subst ; module ≡-Reasoning )
open ≡-Reasoning
-- start re-splitting things up, as this is getting out of hand
open import FT -- Finite Types
open import VecHelpers
open import NatSimple
open import Eval
open import Permutations
open import CombPerm
-- Suppose we have some combinator c, its output vector v, and the corresponding
-- permutation p. We construct p by looking at how many places each element is
-- displaced from its index in v *to the right* (if it's where it "should" be or
-- to the left, just return 0).
-- In other words, if v[i] = j, then p[j] = j - i. That is, if j is in location
-- i, j - i is how many spaces to the right (if any) j appears from its own
-- index. Note that if v[i] = j, then c(i) = j, and inv(c)(j) = i. This suggests
-- that if I can write a tabulate function for permutations, the permutation for
-- a combinator c will be "tabulate (λ i -> i - (evalCombB c i))", modulo type
-- coercions.
--
-- JC: I think an even easier way is to use LeftCancellation and build it
-- recursively! By this I mean something which gives a
-- (fromℕ n ⇛ fromℕ n) given a (fromℕ (suc n) ⇛ fromℕ (suc n))
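-- A rough sketch of that recursive idea (hypothetical helper name, unchecked,
-- since it only lives in this comment):
--   combToPerm {suc n} c =
--     valToFin (evalComb c (inj₁ tt)) ∷ combToPerm (leftCancel c)
-- where leftCancel : (fromℕ (suc n) ⇛ fromℕ (suc n)) → (fromℕ n ⇛ fromℕ n)
-- would be obtained from LeftCancellation; the hole in combToPerm below is
-- exactly this missing recursive step.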
combToPerm : {n : ℕ} → (fromℕ n ⇛ fromℕ n) → Permutation n
combToPerm {zero} c = []
combToPerm {suc n} c = valToFin (evalComb c (inj₁ tt)) ∷ {!!}
-- This is just nasty, prove it 'directly'
key-lemma : {n : ℕ} (i : F.Fin (suc n)) (j : F.Fin (suc (suc n))) →
(lookup
(lookup
((F.inject₁ i ∷ remove (F.inject₁ i) vId) !! j)
(insert (remove (F.inject₁ i) vId) (F.suc i) (F.inject₁ i)))
(insert vS (F.inject₁ i) F.zero))
≡
(F.suc i ∷ insert (remove i vS) i F.zero) !! j
key-lemma {zero} F.zero F.zero = refl
key-lemma {zero} F.zero (F.suc F.zero) = refl
key-lemma {zero} F.zero (F.suc (F.suc ()))
key-lemma {zero} (F.suc ()) j
key-lemma {suc n} F.zero F.zero = refl
key-lemma {suc n} F.zero (F.suc j) =
begin
lookup (lookup (vS !! j) swap01vec) vId
≡⟨ lookupTab {f = id} (lookup (vS !! j) swap01vec) ⟩
lookup (vS !! j) swap01vec
≡⟨ cong (λ z → lookup z swap01vec) (lookupTab {f = F.suc} j) ⟩
lookup (F.suc j) swap01vec
≡⟨ refl ⟩
lookup j (F.zero ∷ vSS)
∎
key-lemma {suc n} (F.suc i) F.zero =
begin
lookup
(lookup (F.inject₁ i)
(insert (remove (F.inject₁ i) (tabulate F.suc)) (F.suc i) (F.suc (F.inject₁ i))))
(F.suc F.zero ∷
insert (tabulate (F.suc ○ F.suc)) (F.inject₁ i) F.zero)
≡⟨ cong (λ z → lookup z (F.suc F.zero ∷ insert (tabulate (F.suc ○ F.suc)) (F.inject₁ i) F.zero)) (lookup+1-insert-remove i (tabulate F.suc)) ⟩
lookup (lookup (F.suc i) (tabulate F.suc)) (F.suc F.zero ∷ insert (tabulate (F.suc ○ F.suc)) (F.inject₁ i) F.zero)
≡⟨ cong (λ z → lookup z (F.suc F.zero ∷ insert (tabulate (F.suc ○ F.suc)) (F.inject₁ i) F.zero)) (lookupTab {f = F.suc} (F.suc i)) ⟩
lookup (F.suc (F.suc i)) (F.suc F.zero ∷ insert (tabulate (F.suc ○ F.suc)) (F.inject₁ i) F.zero)
≡⟨ refl ⟩
lookup (F.suc i) (insert (tabulate (F.suc ○ F.suc)) (F.inject₁ i) F.zero)
≡⟨ sym (lookup-insert3 i (tabulate (F.suc ○ F.suc))) ⟩
lookup i (tabulate (F.suc ○ F.suc))
≡⟨ lookupTab {f = F.suc ○ F.suc} i ⟩
F.suc (F.suc i)
∎
key-lemma {suc n} (F.suc i) (F.suc j) =
begin
(lookup
(lookup
(lookup j
(remove (F.inject₁ (F.suc i)) vId))
(insert
(remove (F.inject₁ (F.suc i)) vId)
(F.suc (F.suc i)) (F.inject₁ (F.suc i))))
(insert vS (F.inject₁ (F.suc i)) F.zero))
≡⟨ refl ⟩ -- lots of β
(lookup
(lookup
((F.zero ∷ remove (F.inject₁ i) vS) !! j)
(F.zero ∷ insert (remove (F.inject₁ i) vS) (F.suc i) (F.inject₁ (F.suc i))))
(F.suc F.zero ∷ insert vSS (F.inject₁ i) F.zero))
≡⟨ {!!} ⟩
(F.suc F.zero ∷ insert (remove i vSS) i F.zero) !! j
≡⟨ refl ⟩
(insert (remove (F.suc i) vS) (F.suc i) F.zero) !! j
∎
{- (lookup
(lookup
((F.inject₁ i ∷ remove (F.inject₁ i) vId) !! j)
(insert (remove (F.inject₁ i) vId) (F.suc i) (F.inject₁ i)))
(insert vS (F.inject₁ i) F.zero))
≡
(F.suc i ∷ insert (remove i vS) i F.zero) !! j -}
--------------------------------------------------------------------------------------------------------------
-- shuffle is like permute, but takes a combinator rather than a permutation as input
shuffle : {n : ℕ} {A : Set} → (fromℕ n ⇛ fromℕ n) → Vec A n → Vec A n
shuffle c v = tabulate (λ x → v !! valToFin (evalComb c (finToVal x)))
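-- For instance, shuffle (swapi F.zero) applied to a vector should exchange its
-- first two entries, mirroring permute (swapiPerm F.zero); the correctness
-- lemmas in this file (swapUpCorrect, swapmCorrect, ...) relate the two views
-- index by index.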
--------------------------------------------------------------------------------------------------------------
swapUpCorrect : {n : ℕ} → (i : F.Fin n) → (j : F.Fin (1 + n)) →
evalComb (swapUpTo i) (finToVal j) ≡ finToVal (evalPerm (swapUpToPerm i) j)
swapUpCorrect {zero} () j
swapUpCorrect {suc zero} F.zero F.zero = refl
swapUpCorrect {suc zero} F.zero (F.suc F.zero) = refl
swapUpCorrect {suc zero} F.zero (F.suc (F.suc ()))
swapUpCorrect {suc zero} (F.suc ()) j
swapUpCorrect {suc (suc n)} F.zero j = cong finToVal (
begin
j ≡⟨ sym (lookupTab {f = id} j) ⟩
lookup j (tabulate id) ≡⟨ cong (λ x → lookup j (F.zero ∷ F.suc F.zero ∷ F.suc (F.suc F.zero) ∷ x)) (sym (idP-id (tabulate (F.suc ○ F.suc ○ F.suc)))) ⟩
evalPerm (swapUpToPerm F.zero) j ∎ )
swapUpCorrect {suc (suc n)} (F.suc i) F.zero = refl
swapUpCorrect {suc (suc n)} (F.suc i) (F.suc j) =
begin
evalComb (assocl₊⇛ ◎ (swap₊⇛ ⊕ id⇛) ◎ assocr₊⇛) (inj₂ (evalComb (swapUpTo i) (finToVal j)))
≡⟨ cong (λ x → evalComb (assocl₊⇛ ◎ (swap₊⇛ ⊕ id⇛) ◎ assocr₊⇛) (inj₂ x)) (swapUpCorrect i j) ⟩
evalComb (assocl₊⇛ ◎ (swap₊⇛ ⊕ id⇛) ◎ assocr₊⇛) (inj₂ (finToVal (evalPerm (swapUpToPerm i) j)))
≡⟨ swapi≡swap01 (F.suc (evalPerm (swapUpToPerm i) j)) ⟩
finToVal (evalPerm (swap01 (suc (suc (suc n)))) (F.suc (evalPerm (swapUpToPerm i) j)))
≡⟨ cong finToVal ( begin
evalPerm (swap01 (suc (suc (suc n)))) (F.suc (evalPerm (swapUpToPerm i) j))
≡⟨ cong (λ x → evalPerm (swap01 (suc (suc (suc n)))) (F.suc (lookup j x))) (swapUpToAct i (tabulate id)) ⟩
lookup
(F.suc (lookup j (insert (tabulate (λ z → F.suc z)) (F.inject₁ i) F.zero)))
(F.suc F.zero ∷ F.zero ∷ F.suc (F.suc F.zero) ∷ permute idP (tabulate (λ z → F.suc (F.suc (F.suc z)))))
≡⟨ cong (λ x → (lookup
(F.suc
(lookup j
(insert (tabulate F.suc) (F.inject₁ i) F.zero)))
(F.suc F.zero ∷ F.zero ∷ F.suc (F.suc F.zero) ∷ x)))
(idP-id _) ⟩
(swap01vec !!
(F.suc ((insert (tabulate F.suc) (F.inject₁ i) F.zero) !! j)))
≡⟨ cong (λ x → swap01vec !! x) (sym (map!! F.suc _ j)) ⟩
(swap01vec !!
(vmap F.suc (insert (tabulate F.suc) (F.inject₁ i) F.zero) !! j))
≡⟨ sym (lookupTab {f = (λ j →
(swap01vec !!
(vmap F.suc (insert (tabulate F.suc) (F.inject₁ i) F.zero) !! j)))} j) ⟩
(tabulate (λ k → (swap01vec !!
(vmap F.suc (insert (tabulate F.suc) (F.inject₁ i) F.zero) !! k))) !! j)
≡⟨ refl ⟩
(((vmap F.suc (insert (tabulate F.suc) (F.inject₁ i) F.zero)) ∘̬ swap01vec) !! j)
≡⟨ cong (λ x → x !! j) (∘̬≡∘̬′ _ _) ⟩
(((vmap F.suc (insert (tabulate F.suc) (F.inject₁ i) F.zero)) ∘̬′ swap01vec) !! j)
≡⟨ cong (λ x →
((vmap F.suc (insert x (F.inject₁ i) F.zero) ∘̬′ swap01vec) !! j))
(sym (mapTab F.suc id)) ⟩
{-- For reference:
newlemma6 : {m n : ℕ} → (i : F.Fin n) → (v : Vec (F.Fin m) n) →
(vmap F.suc (insert (vmap F.suc v) (F.inject₁ i) F.zero)) ∘̬′ swap01vec
≡ insert (vmap F.suc (vmap F.suc v)) (F.inject₁ i) F.zero
--}
(((vmap F.suc (insert (vmap F.suc (tabulate id)) (F.inject₁ i) F.zero)) ∘̬′ swap01vec) !! j)
≡⟨ cong (λ x → x !! j) (newlemma6 i (tabulate id)) ⟩
(insert (vmap F.suc (vmap F.suc (tabulate id))) (F.inject₁ i) F.zero !! j)
≡⟨ cong (λ x → insert (vmap F.suc x) (F.inject₁ i) F.zero !! j)
(mapTab F.suc id) ⟩
(insert (vmap F.suc (tabulate F.suc)) (F.inject₁ i) F.zero !! j)
≡⟨ cong (λ x → insert x (F.inject₁ i) F.zero !! j)
(mapTab F.suc F.suc) ⟩
(insert (tabulate (λ x → F.suc (F.suc x))) (F.inject₁ i) F.zero !! j)
≡⟨ cong (λ x → lookup j (insert x (F.inject₁ i) F.zero))
(sym (idP-id _)) ⟩
(lookup j
(insert (permute idP (tabulate (λ x → F.suc (F.suc x)))) (F.inject₁ i) F.zero))
≡⟨ refl ⟩
evalPerm (swapUpToPerm (F.suc i)) (F.suc j)
∎ ) ⟩
finToVal (evalPerm (swapUpToPerm (F.suc i)) (F.suc j))
∎
swapDownCorrect : {n : ℕ} → (i : F.Fin n) → (j : F.Fin (1 + n)) →
evalComb (swapDownFrom i) (finToVal j) ≡
finToVal (evalPerm (swapDownFromPerm i) j)
swapDownCorrect F.zero j =
begin
evalComb (swapDownFrom F.zero) (finToVal j)
≡⟨ refl ⟩
finToVal j
≡⟨ cong finToVal (sym (lookupTab {f = id} j)) ⟩
finToVal ((tabulate id) !! j)
≡⟨ cong (λ x → finToVal (x !! j)) (sym (idP-id (tabulate id))) ⟩
finToVal (permute idP (tabulate id) !! j)
≡⟨ refl ⟩
finToVal (evalPerm (swapDownFromPerm F.zero) j) ∎
swapDownCorrect (F.suc i) F.zero =
begin
evalComb (swapDownFrom (F.suc i)) (finToVal F.zero)
≡⟨ refl ⟩
evalComb (swapi F.zero ◎ (id⇛ ⊕ swapDownFrom i)) (finToVal F.zero)
≡⟨ refl ⟩
evalComb (id⇛ ⊕ swapDownFrom i) (evalComb (swapi F.zero) (finToVal F.zero))
≡⟨ refl ⟩
evalComb (id⇛ ⊕ swapDownFrom i) (finToVal (F.suc F.zero))
≡⟨ refl ⟩
inj₂ (evalComb (swapDownFrom i) (finToVal F.zero))
≡⟨ cong inj₂ (swapDownCorrect i F.zero) ⟩
inj₂ (finToVal (evalPerm (swapDownFromPerm i) F.zero))
≡⟨ refl ⟩
finToVal (F.suc (evalPerm (swapDownFromPerm i) F.zero))
≡⟨ refl ⟩ -- beta
finToVal (F.suc (permute (swapDownFromPerm i) (tabulate id) !! F.zero))
≡⟨ cong finToVal (push-f-through F.suc F.zero (swapDownFromPerm i) id ) ⟩
finToVal (lookup F.zero (permute (swapDownFromPerm i) (tabulate F.suc)))
≡⟨ cong finToVal (sym (lookup-insert (permute (swapDownFromPerm i) (tabulate F.suc)))) ⟩
finToVal (evalPerm (swapDownFromPerm (F.suc i)) F.zero) ∎
swapDownCorrect (F.suc i) (F.suc F.zero) =
begin
evalComb (swapDownFrom (F.suc i)) (finToVal (F.suc F.zero))
≡⟨ refl ⟩
evalComb (swapi F.zero ◎ (id⇛ ⊕ swapDownFrom i)) (finToVal (F.suc F.zero))
≡⟨ refl ⟩
evalComb (id⇛ ⊕ swapDownFrom i) (evalComb (swapi F.zero) (finToVal (F.suc F.zero)))
≡⟨ refl ⟩
evalComb (id⇛ ⊕ swapDownFrom i) (inj₁ tt)
≡⟨ refl ⟩
inj₁ tt
≡⟨ refl ⟩
finToVal (F.zero)
≡⟨ cong finToVal (sym (lookup-insert′ (F.suc F.zero) (permute (swapDownFromPerm i) (tabulate F.suc)))) ⟩
finToVal (evalPerm (swapDownFromPerm (F.suc i)) (F.suc F.zero)) ∎
swapDownCorrect (F.suc i) (F.suc (F.suc j)) =
begin
evalComb (swapDownFrom (F.suc i)) (finToVal (F.suc (F.suc j)))
≡⟨ refl ⟩
evalComb (swapi F.zero ◎ (id⇛ ⊕ swapDownFrom i)) (finToVal (F.suc (F.suc j)))
≡⟨ refl ⟩
evalComb (id⇛ ⊕ swapDownFrom i) (evalComb (swapi F.zero) (finToVal (F.suc (F.suc j))))
≡⟨ refl ⟩
evalComb (id⇛ ⊕ swapDownFrom i) (finToVal (F.suc (F.suc j)))
≡⟨ refl ⟩
evalComb (id⇛ ⊕ swapDownFrom i) (inj₂ (finToVal (F.suc j)))
≡⟨ refl ⟩
inj₂ (evalComb (swapDownFrom i) (finToVal (F.suc j)))
≡⟨ cong inj₂ (swapDownCorrect i (F.suc j)) ⟩
inj₂ (finToVal (evalPerm (swapDownFromPerm i) (F.suc j)))
≡⟨ refl ⟩
finToVal (F.suc (evalPerm (swapDownFromPerm i) (F.suc j)))
≡⟨ cong finToVal (push-f-through F.suc (F.suc j) (swapDownFromPerm i) id) ⟩
-- need to do a little β-expansion to see this
finToVal (lookup (F.suc j) (permute (1iP i) (tabulate F.suc)))
≡⟨ cong finToVal (lookup-insert′′ j (permute (1iP i) (tabulate F.suc))) ⟩
finToVal (evalPerm (swapDownFromPerm (F.suc i)) (F.suc (F.suc j))) ∎
swapmCorrect : {n : ℕ} → (i j : F.Fin n) → evalComb (swapm i) (finToVal j) ≡ finToVal (evalPerm (swapmPerm i) j)
swapmCorrect {zero} () _
swapmCorrect {suc n} F.zero j =
begin
finToVal j
≡⟨ cong finToVal (sym (lookupTab {f = id} j)) ⟩
finToVal (lookup j (tabulate id))
≡⟨ cong (λ x → finToVal (lookup j x)) (sym (idP-id (tabulate id))) ⟩
finToVal (lookup j (permute idP (tabulate id))) ∎
swapmCorrect {suc zero} (F.suc ()) _
swapmCorrect {suc (suc n)} (F.suc i) j = -- requires the breakdown of swapm ?
begin
evalComb (swapm (F.suc i)) (finToVal j)
≡⟨ refl ⟩
evalComb (swapDownFrom i ◎ swapi i ◎ swapUpTo i) (finToVal j)
≡⟨ refl ⟩
evalComb (swapUpTo i)
(evalComb (swapi i)
(evalComb (swapDownFrom i) (finToVal j)))
≡⟨ cong (λ x → evalComb (swapUpTo i) (evalComb (swapi i) x))
(swapDownCorrect i j) ⟩
evalComb (swapUpTo i)
(evalComb (swapi i)
(finToVal (permute (swapDownFromPerm i) (tabulate id) !! j)))
≡⟨ cong (λ x → evalComb (swapUpTo i) x)
(swapiCorrect i (permute (swapDownFromPerm i) (tabulate id) !! j)) ⟩
evalComb (swapUpTo i)
(finToVal
(permute (swapiPerm i) (tabulate id) !!
(permute (swapDownFromPerm i) (tabulate id) !! j)))
≡⟨ (swapUpCorrect i )
(permute (swapiPerm i) (tabulate id) !!
(permute (swapDownFromPerm i) (tabulate id) !! j))⟩
finToVal
(permute (swapUpToPerm i) (tabulate id) !!
(permute (swapiPerm i) (tabulate id) !!
(permute (swapDownFromPerm i) (tabulate id) !! j)))
≡⟨ cong (λ x → finToVal (x !! (permute (swapiPerm i) (tabulate id) !! (permute (swapDownFromPerm i) (tabulate id) !! j)))) (swapUpToAct i (tabulate id)) ⟩
finToVal
(insert (tabulate F.suc) (F.inject₁ i) F.zero !!
(permute (swapiPerm i) (tabulate id) !!
(permute (swapDownFromPerm i) (tabulate id) !! j)))
≡⟨ cong (λ z → finToVal (insert (tabulate F.suc) (F.inject₁ i) F.zero !! (z !! (permute (swapDownFromPerm i) (tabulate id) !! j)))) (swapiAct i (tabulate id)) ⟩
finToVal
(insert (tabulate F.suc) (F.inject₁ i) F.zero !!
(insert (remove (F.inject₁ i) (tabulate id)) (F.suc i) ((tabulate id) !! (F.inject₁ i)) !!
(permute (swapDownFromPerm i) (tabulate id) !! j)))
≡⟨ cong (λ z → finToVal (insert (tabulate F.suc) (F.inject₁ i) F.zero !!
(insert (remove (F.inject₁ i) (tabulate id)) (F.suc i) ((tabulate id) !! (F.inject₁ i)) !!
(z !! j)))) (swapDownFromAct i (tabulate id)) ⟩
finToVal
(insert (tabulate F.suc) (F.inject₁ i) F.zero !!
(insert (remove (F.inject₁ i) (tabulate id)) (F.suc i) ((tabulate id) !! (F.inject₁ i)) !!
( swapDownFromVec (F.inject₁ i) (tabulate id) !! j)))
≡⟨ cong (λ z → finToVal
(insert (tabulate F.suc) (F.inject₁ i) F.zero !!
(insert (remove (F.inject₁ i) (tabulate id)) (F.suc i) z !!
( swapDownFromVec (F.inject₁ i) (tabulate id) !! j)))) (lookupTab {f = id} (F.inject₁ i)) ⟩
finToVal
(insert (tabulate F.suc) (F.inject₁ i) F.zero !!
(insert (remove (F.inject₁ i) (tabulate id)) (F.suc i) (F.inject₁ i) !!
( swapDownFromVec (F.inject₁ i) (tabulate id) !! j)))
≡⟨ refl ⟩
finToVal
(insert (tabulate F.suc) (F.inject₁ i) F.zero !!
(insert (remove (F.inject₁ i) (tabulate id)) (F.suc i) (F.inject₁ i) !!
( (((tabulate id) !! (F.inject₁ i)) ∷ (remove (F.inject₁ i) (tabulate id))) !! j)))
≡⟨ cong (λ z → finToVal
(insert (tabulate F.suc) (F.inject₁ i) F.zero !!
(insert (remove (F.inject₁ i) (tabulate id)) (F.suc i) (F.inject₁ i) !!
( (z ∷ (remove (F.inject₁ i) (tabulate id))) !! j)))) (lookupTab {f = id} (F.inject₁ i)) ⟩
finToVal
(insert (tabulate F.suc) (F.inject₁ i) F.zero !!
(insert (remove (F.inject₁ i) (tabulate id)) (F.suc i) (F.inject₁ i) !!
( ((F.inject₁ i) ∷ (remove (F.inject₁ i) (tabulate id))) !! j)))
≡⟨ cong finToVal (key-lemma i j) ⟩
finToVal ( ((F.suc i) ∷ (insert (remove i (tabulate F.suc)) i F.zero)) !! j)
≡⟨ cong (λ z → finToVal ((z ∷ (insert (remove i (tabulate F.suc)) i F.zero)) !! j)) (sym (lookupTab {f = F.suc} i)) ⟩
finToVal ( (((tabulate F.suc) !! i) ∷ (insert (remove i (tabulate F.suc)) i F.zero)) !! j)
≡⟨ refl ⟩
finToVal (swapmVec (F.suc i) (tabulate id) !! j)
≡⟨ cong (λ z → finToVal (z !! j)) (sym (swapmAct (F.suc i) (tabulate id))) ⟩
finToVal (permute (swapmPerm (F.suc i)) (tabulate id) !! j)
≡⟨ refl ⟩
finToVal (evalPerm (swapmPerm (F.suc i)) j) ∎
lemma1 : {n : ℕ} (p : Permutation n) → (i : F.Fin n) →
evalComb (permToComb p) (finToVal i) ≡ finToVal (evalPerm p i)
lemma1 {zero} [] ()
lemma1 {suc n} (F.zero ∷ p) F.zero = refl
lemma1 {suc zero} (F.zero ∷ p) (F.suc ())
lemma1 {suc (suc n)} (F.zero ∷ p) (F.suc i) = begin
inj₂ (evalComb (permToComb p) (finToVal i))
≡⟨ cong inj₂ (lemma1 p i) ⟩
inj₂ (finToVal (evalPerm p i))
≡⟨ refl ⟩
finToVal (F.suc (evalPerm p i))
≡⟨ cong finToVal (push-f-through F.suc i p id) ⟩
finToVal (evalPerm (F.zero ∷ p) (F.suc i)) ∎
lemma1 {suc n} (F.suc j ∷ p) i = {!!} -- needs all the previous ones first.
{- This is cleaner as a proof, but is not headed the right way as the cases are not the 'right' ones
swapmCorrect2 : {n : ℕ} → (i j : F.Fin n) → evalComb (swapm i) (finToVal j) ≡ finToVal (evalPerm (swapmPerm i) j)
swapmCorrect2 {zero} () _
swapmCorrect2 {suc zero} F.zero F.zero = refl
swapmCorrect2 {suc zero} F.zero (F.suc ())
swapmCorrect2 {suc zero} (F.suc ()) _
swapmCorrect2 {suc (suc n)} F.zero j = sym (
trans (cong (λ x → finToVal (lookup j (F.zero ∷ F.suc F.zero ∷ x))) (idP-id (tabulate (F.suc ○ F.suc))))
(cong finToVal (lookupTab {f = id} j)))
swapmCorrect2 {suc (suc n)} (F.suc F.zero) F.zero = refl
swapmCorrect2 {suc (suc n)} (F.suc (F.suc i)) F.zero =
let up = λ x → evalComb (swapUpTo (F.suc i)) x
swap = λ x → evalComb (swapi (F.suc i)) x
down = λ x → evalComb (swapDownFrom (F.suc i)) x in
begin
evalComb (swapm (F.suc (F.suc i))) (inj₁ tt)
≡⟨ refl ⟩
evalComb (swapDownFrom (F.suc i) ◎ swapi (F.suc i) ◎ swapUpTo (F.suc i)) (inj₁ tt)
≡⟨ refl ⟩
up (swap (down (inj₁ tt)))
≡⟨ cong (up ○ swap) (swapDownCorrect (F.suc i) F.zero) ⟩
up (swap ( finToVal (evalPerm (swapDownFromPerm (F.suc i)) F.zero) ))
≡⟨ refl ⟩
up (swap (finToVal (permute (swapDownFromPerm (F.suc i)) (tabulate id) !! F.zero)))
≡⟨ cong (λ x → up (swap (finToVal (x !! F.zero )))) (swapDownFromAct (F.suc i) (tabulate id)) ⟩
up (swap (finToVal (swapDownFromVec (F.inject₁ (F.suc i)) (tabulate id) !! F.zero)))
≡⟨ refl ⟩
up (swap (finToVal ( (((tabulate id) !! (F.inject₁ (F.suc i))) ∷ remove (F.inject₁ (F.suc i)) (tabulate id)) !! F.zero )))
≡⟨ refl ⟩
up (swap (finToVal ((tabulate id) !! (F.inject₁ (F.suc i)))))
≡⟨ cong (up ○ swap ○ finToVal) (lookupTab {f = id} (F.inject₁ (F.suc i))) ⟩
up (swap (finToVal (F.inject₁ (F.suc i))))
≡⟨ cong up (swapiCorrect (F.suc i) (F.inject₁ (F.suc i))) ⟩
up (finToVal (evalPerm (swapiPerm (F.suc i)) (F.inject₁ (F.suc i))))
≡⟨ refl ⟩
up (finToVal (permute (swapiPerm (F.suc i)) (tabulate id) !! (F.inject₁ (F.suc i))))
≡⟨ cong (λ x → up (finToVal( x !! (F.inject₁ (F.suc i))))) (swapiAct (F.suc i) (tabulate id)) ⟩
up (finToVal (insert (remove (F.inject₁ (F.suc i)) (tabulate id)) (F.suc (F.suc i)) ((tabulate id) !! (F.inject₁ (F.suc i))) !! (F.inject₁ (F.suc i))))
≡⟨ cong (λ x → up (finToVal (insert (remove (F.inject₁ (F.suc i)) (tabulate id)) (F.suc (F.suc i)) x !! (F.inject₁ (F.suc i))))) (lookupTab {f = id} (F.inject₁ (F.suc i))) ⟩
up (finToVal (insert (remove (F.inject₁ (F.suc i)) (tabulate id)) (F.suc (F.suc i)) (F.inject₁ (F.suc i)) !! (F.inject₁ (F.suc i))))
≡⟨ cong (up ○ finToVal) (lookup+1-insert-remove (F.suc i) (tabulate id)) ⟩
up (finToVal (lookup (F.suc (F.suc i)) (tabulate id)))
≡⟨ cong (up ○ finToVal) (lookupTab {f = id} (F.suc (F.suc i))) ⟩
up (finToVal (F.suc (F.suc i)))
≡⟨ swapUpCorrect (F.suc i) (F.suc (F.suc i)) ⟩
finToVal (evalPerm (swapUpToPerm (F.suc i)) (F.suc (F.suc i)))
≡⟨ cong (λ x → finToVal (x !! (F.suc (F.suc i)))) (swapUpToAct (F.suc i) (tabulate id)) ⟩
finToVal ( insert (tabulate F.suc) (F.inject₁ (F.suc i)) (F.zero) !! (F.suc (F.suc i)))
≡⟨ cong finToVal (sym (lookup-insert3 (F.suc i) (tabulate F.suc))) ⟩
finToVal ((tabulate F.suc) !! (F.suc i))
≡⟨ cong finToVal (lookupTab {f = F.suc} (F.suc i)) ⟩
finToVal (F.suc (F.suc i))
≡⟨ cong finToVal (sym (lookupTab {f = F.suc ○ F.suc} i)) ⟩
finToVal (lookup i (tabulate (F.suc ○ F.suc)))
≡⟨ refl ⟩
finToVal (lookup F.zero (swapmVec (F.suc (F.suc i)) (tabulate id)))
≡⟨ cong (λ x → finToVal (x !! F.zero)) (sym (swapmAct (F.suc (F.suc i)) (tabulate id))) ⟩
finToVal (evalPerm (swapmPerm (F.suc (F.suc i))) F.zero)
∎
swapmCorrect2 {suc (suc n)} (F.suc F.zero) (F.suc j) =
begin
evalComb (swapi F.zero) (inj₂ (finToVal j))
≡⟨ swapi≡swap01 (F.suc j) ⟩
finToVal
(lookup j
(F.zero ∷ permute idP (tabulate (λ z → F.suc (F.suc z)))))
≡⟨ refl ⟩
finToVal (permute (F.suc F.zero ∷ idP) (tabulate id) !! (F.suc j))
∎
swapmCorrect2 {suc (suc n)} (F.suc (F.suc i)) (F.suc j) = {!!}
-}
{-
-- this alternate version of lemma1 might, in the long term, be a better
-- way to go?
lemma1′ : {n : ℕ} → (i : F.Fin n) → vmap (evalComb (swapm i)) (tabulate finToVal) ≡ vmap finToVal (permute (swapmPerm i) (tabulate id))
lemma1′ {zero} ()
lemma1′ {suc n} F.zero = cong (_∷_ (inj₁ tt)) (
begin
vmap id (tabulate (inj₂ ○ finToVal))
≡⟨ mapTab id (inj₂ ○ finToVal) ⟩
tabulate (inj₂ ○ finToVal)
≡⟨ cong tabulate refl ⟩
tabulate (finToVal ○ F.suc)
≡⟨ sym (mapTab finToVal F.suc) ⟩
vmap finToVal (tabulate F.suc)
≡⟨ cong (vmap finToVal) (sym (idP-id _)) ⟩
vmap finToVal (permute idP (tabulate F.suc))
∎ )
lemma1′ {suc n} (F.suc i) =
begin
vmap (evalComb (swapm (F.suc i))) (tabulate finToVal)
≡⟨ refl ⟩
evalComb (swapm (F.suc i)) (inj₁ tt) ∷ vmap (evalComb (swapm (F.suc i))) (tabulate (inj₂ ○ finToVal))
≡⟨ cong (λ x → x ∷ vmap (evalComb (swapm (F.suc i))) (tabulate (inj₂ ○ finToVal))) (swapmCorrect {suc n} (F.suc i) F.zero) ⟩
(finToVal (evalPerm (swapmPerm (F.suc i)) F.zero)) ∷ vmap (evalComb (swapm (F.suc i))) (tabulate (inj₂ ○ finToVal))
≡⟨ cong (λ x → (finToVal (evalPerm (swapmPerm (F.suc i)) F.zero)) ∷ x ) {!!} ⟩ -- need to generalize the inductive hyp. for this to work
{!!}
≡⟨ {!!} ⟩
vmap finToVal (insert (permute (swapOne i) (tabulate F.suc)) (F.suc i) F.zero)
∎
lemma2 : {n : ℕ} (c : (fromℕ n) ⇛ (fromℕ n)) → (i : F.Fin n) →
(evalComb c (finToVal i)) ≡ finToVal (evalPerm (combToPerm c) i)
lemma2 c i = {!!}
-}
|
Require Export List Program Arith Omega MoreList.
Export Nat.
Set Implicit Arguments.
Inductive letter := M | I | U.
Inductive Generatable { init } : list letter -> Prop :=
| Init : Generatable init
| Rule1 : forall l, Generatable (l ++ [I]) -> Generatable (l ++ [I;U])
| Rule2 : forall l, Generatable (M :: l) -> Generatable (M :: l ++ l)
| Rule3 : forall l r, Generatable (l ++ I :: I :: I :: r) -> Generatable (l ++ U :: r)
| Rule4 : forall l r, Generatable (l ++ U :: U :: r) -> Generatable (l ++ r).
Arguments Generatable : clear implicits.
Arguments Init : clear implicits.
Hint Constructors Generatable letter.
Theorem Rule1' i l r : l = r ++ [I;U] -> Generatable i (r ++ [I]) -> Generatable i l.
intros;subst;auto.
Qed.
Theorem Rule2' i l r : l = M :: r ++ r -> Generatable i (M :: r) -> Generatable i l.
intros;subst;auto.
Qed.
Theorem Rule3' i li l r : li = l ++ U :: r ->
Generatable i (l ++ I :: I :: I :: r) -> Generatable i li.
intros;subst;auto.
Qed.
Theorem Rule4' i li l r : li = l ++ r ->
Generatable i (l ++ U :: U :: r) -> Generatable i li.
intros;subst;auto.
Qed.
Goal Generatable [M;I;U] [M;I;U;I;U].
eapply Rule2';eauto;eauto.
Qed.
Goal Generatable [M;U;M] [M;U;M;U;M].
eapply Rule2';eauto;eauto.
Qed.
Goal Generatable [M;U] [M;U;U].
eapply Rule2';eauto;eauto.
Qed.
Goal Generatable [U;M;I;I;I;M;U] [U;M;U;M;U].
eapply Rule3' with (l := [U;M])(r := [M;U]);eauto.
Qed.
Goal Generatable [M;I;I;I;I] [M;I;U].
eapply Rule3' with (l := [M;I])(r := []);eauto.
Qed.
Goal Generatable [M;I;I;I;I] [M;U;I].
eapply Rule3' with (l := [M])(r := [I]);eauto.
Qed.
Goal Generatable [M;I;I;I] [M;U].
eapply Rule3' with (l := [M])(r := []);eauto.
Qed.
Goal Generatable [U;U;U] [U].
eapply Rule4' with (l := [U])(r := []);eauto.
Qed.
Goal Generatable [M;U;U;U;I;I;I] [M;U;I;I;I].
eapply Rule4' with (l := [M;U])(r := [I;I;I]);eauto.
Qed.
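(* The classic MU-puzzle invariant behind the next theorem: weight the letters
   as M |-> 0, I |-> 1, U |-> 3 and sum over the string. Rules 1, 3 and 4
   change this sum by a multiple of 3 and Rule 2 doubles it, so no rule can
   turn a sum that is not divisible by 3 into one that is. Starting from
   [M;I] (sum 1) the sum therefore never becomes divisible by 3, while [M;U]
   has sum 3; hence [M;U] is not generatable, which is the final Goal of this
   file. *)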
Theorem gen_non_div_three l : Generatable [M;I] l ->
~ divide 3 (fold_left (fun (n : nat) e => match e with M => 0 | I => 1 | U => 3 end + n) l 0).
induction 1;
unfold divide in *;
repeat (rewrite !fold_left_app in *||simpl in *);
intuition;
match goal with H : exists _, _ |- _ => destruct H end;
omega||apply IHGeneratable.
destruct x;omega||exists x;omega.
erewrite (foldl_identity ((fun (n : nat) (e : letter) => match e with
| M => 0
| I => 1
| U => 3
end + n))) in H0.
Admitted.
Goal ~ Generatable [M;I] [M;U].
intuition;apply gen_non_div_three in H.
simpl in *;intuition.
Qed. |
```python
import sys, os
sys.path.insert(0, os.path.join(os.pardir, 'src'))
import sympy as sym
from approx1D import least_squares_orth, comparison_plot
import matplotlib.pyplot as plt
x = sym.Symbol('x')
# Naive approach: does not exploit the fact that the computation for i+1
# terms can reuse the computation already done for i terms
def naive(f, s, Omega, N=10):
psi = []
for i in range(N+1):
psi.append(sym.sin((2*i+1)*x))
u, c = least_squares_orth(f, psi, Omega, symbolic=False)
comparison_plot(f, u, Omega, 'tmp_sin%02dx' % i,
legend_loc='upper left', show=True)
# Efficient approach: compute just the matrix diagonal
def efficient(f, s, Omega, N=10):
u = 0
for i in range(N+1):
psi = [sym.sin((2*i+1)*x)]
next_term, c = least_squares_orth(f, psi, Omega, False)
u = u + next_term
comparison_plot(f, u, Omega, 'tmp_sin%02dx' % i,
legend_loc='upper left', show=False,
plot_title='s=%g, i=%d' % (s, i))
if __name__ == '__main__':
s = 20 # steepness
f = sym.tanh(s*(x-sym.pi))
from math import pi
Omega = [0, 2*pi] # sym.pi did not work here
efficient(f, s, Omega, N=10)
# Make movie
# avconv/ffmpeg skips frames, use convert instead (few files)
cmd = 'convert -delay 200 tmp_sin*.png tanh_sines_approx.gif'
os.system(cmd)
# Make static plots, 3 figures on 2 lines
for ext in 'pdf', 'png':
cmd = 'doconce combine_images %s -3 ' % ext
cmd += 'tmp_sin00x tmp_sin01x tmp_sin02x tmp_sin04x '
cmd += 'tmp_sin07x tmp_sin10x tanh_sines_approx'
os.system(cmd)
plt.show()
```
|
include("gen_code_mem.jl")
include("gen_code_snippets.jl")
include("multilincomb.jl");
include("gen_c_code.jl")
include("gen_julia_code.jl")
include("gen_matlab_code.jl")
export gen_code
# Every language needs:
# comment(::Lang,s)
# slotname(::Lang,i) #
# assign_coeff(::Lang,v,i)
# function_definition(::Lang,graph,T,funname)
# function_init(lang::Lang,T,mem,graph)
# init_mem(lang::Lang,max_nof_nodes)
# function_end(lang::Lang,graph,mem)
# execute_operation!(lang::Lang,T,graph,node,
# dealloc_list, mem)
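# For illustration only, a new backend could start out roughly like
#
#     struct LangPseudo end
#     comment(::LangPseudo, s) = "# " * s
#     slotname(::LangPseudo, i) = "memslot$i"
#
# (hypothetical names, kept in a comment so nothing here is compiled), with
# the remaining functions above implemented analogously. The real backends in
# gen_julia_code.jl, gen_c_code.jl and gen_matlab_code.jl show the actual
# contracts each of these functions has to satisfy.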
# Fallback. Default to no main function
function gen_main(lang, T, fname, funname)
return init_code(lang)
end
function preprocess_codegen(graph, lang)
return graph # Fallback to no preprocessing
end
"""
gen_code(
fname,
graph;
priohelp = Dict{Symbol,Float64}(),
lang = LangJulia(),
funname = "dummy",
precomputed_nodes = [:A],
)
Generates the code for the `graph` in the language specified in `lang` and
writes it into the file `fname`. The string `funname` is the function name.
Topological order of the nodes is computed using [`get_topo_order`](@ref), and
`priohelp` can be used to influence the order. The nodes listed in
`precomputed_nodes` are viewed as inputs, and code to compute these nodes is
not generated.
Currently supported languages: [`LangC_MKL`](@ref), [`LangC_OpenBLAS`](@ref),
[`LangJulia`](@ref), [`LangMatlab`](@ref).
"""
function gen_code(
fname,
graph;
priohelp = Dict{Symbol,Float64}(),
lang = LangJulia(),
funname = "dummy",
precomputed_nodes = [:A],
)
# Make dispatch possible for lang
return _gen_code(fname, graph, lang, priohelp, funname, precomputed_nodes)
end
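# A minimal usage sketch (illustrative only; it assumes `graph` is a
# computation graph that has already been constructed elsewhere):
#
#     gen_code("eval_f.jl", graph; lang = LangJulia(), funname = "eval_f")
#     gen_code("eval_f.c", graph; lang = LangC_OpenBLAS(), funname = "eval_f")
#
# This writes a Julia and a C implementation of the graph evaluation to the
# two files, with the remaining keyword arguments left at the defaults
# documented above.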
# Most gen code calls will call this. Can be overloaded with lang
function _gen_code(fname, graph, lang, priohelp, funname, precomputed_nodes)
# Error if graph is trivial (no operations) or has trivial nodes.
if isempty(graph.operations)
error("Unable to generate code for graphs without operations.")
end
if has_trivial_nodes(graph)
error("Please run compress_graph!() on the graph first.")
end
T = eltype(eltype(typeof(graph.coeffs.vals)))
if (fname isa String)
file = open(abspath(fname), "w+")
else
# Lazy: Print out to stdout if no filename
file = Base.stdout
end
graph = preprocess_codegen(graph, lang)
(order, can_be_deallocated, max_nof_slots) =
get_topo_order(graph; priohelp = priohelp)
# max_nof_slots is the path width which gives
# a bound on the number of memory slots needed.
println(
file,
to_string(
function_definition(lang, graph, T, funname, precomputed_nodes),
),
)
# We do a double sweep in order to determine exactly how many memory slots
# are needed. In the first sweep we carry out all operations but record only
# the maximum number of memory slots needed. The second sweep generates the
# code.
mem = init_mem(lang, max_nof_slots + 3, precomputed_nodes)
function_init(lang, T, mem, graph, precomputed_nodes)
# Sweep 1: Determine exactly the number of slots needed
nof_slots = 0
for (i, node) in enumerate(order)
if (node in precomputed_nodes) # Nothing to do for precomputed nodes
continue
end
(exec_code, result_variable) =
execute_operation!(lang, T, graph, node, can_be_deallocated[i], mem)
# How many slots needed to reach this point
if (!isnothing(findlast(mem.slots .!= :Free)))
nof_slots = max(nof_slots, findlast(mem.slots .!= :Free))
end
end
# Sweep 2:
mem = init_mem(lang, nof_slots, precomputed_nodes)
function_init_code = function_init(lang, T, mem, graph, precomputed_nodes)
push_comment!(
function_init_code,
"Computation order: " * join(string.(order), " "),
)
println(file, to_string(function_init_code))
for (i, node) in enumerate(order)
if (node in precomputed_nodes) # Nothing to do for precomputed nodes
continue
end
(exec_code, result_variable) =
execute_operation!(lang, T, graph, node, can_be_deallocated[i], mem)
println(file, to_string(exec_code))
end
println(file, to_string(function_end(lang, graph, mem)))
# Generate main function, if necessary.
exec_code = gen_main(lang, T, fname, funname)
println(file, to_string(exec_code))
if (fname isa String)
close(file)
end
end
|
# [ include("../src/"*s) for s in readdir("../src") ]
using PolyChaos
# using LinearAlgebra
# import FFTW
# import SpecialFunctions
# numerically compute recurrence coefficients for (almost) Gaussian density w(t)
N = 10
w(t) = exp(-t^2)
lb, ub = -Inf, Inf
@time α, β = rm_compute(w,lb,ub;Nquad=200,Npoly=N)
# analytical solution
α_ana = zeros(N)
β_ana = [ √π; [0.5*k for k=1:N-1] ]
# compare
display("Deviation for α: $(α-α_ana)")
display("Deviation for β: $(β-β_ana)")
## do the same for Chebyshev polynomials of the fourth kind
v(t) = sqrt(1-t)/sqrt(1+t)
lb, ub = -1+1e-8, 1-1e-8
@time α, β = rm_compute(v,lb,ub;Nquad=2000,Npoly=N)
# analytical solution
α_ana = [-0.5; zeros(N-1) ]
β_ana = [ π; [0.25 for k=1:N-1] ]
# compare
display("Deviation for α: $(α-α_ana)")
display("Deviation for β: $(β-β_ana)")
|
[STATEMENT]
lemma sum_list_us_le:
"sum_list (\<^bold>u y i) \<le> i + 1"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. sum_list (\<^bold>u y i) \<le> i + 1
[PROOF STEP]
proof (induct i)
[PROOF STATE]
proof (state)
goal (2 subgoals):
1. sum_list (\<^bold>u y 0) \<le> 0 + 1
2. \<And>i. sum_list (\<^bold>u y i) \<le> i + 1 \<Longrightarrow> sum_list (\<^bold>u y (Suc i)) \<le> Suc i + 1
[PROOF STEP]
case 0
[PROOF STATE]
proof (state)
this:
goal (2 subgoals):
1. sum_list (\<^bold>u y 0) \<le> 0 + 1
2. \<And>i. sum_list (\<^bold>u y i) \<le> i + 1 \<Longrightarrow> sum_list (\<^bold>u y (Suc i)) \<le> Suc i + 1
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
[PROOF STEP]
show ?case
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. sum_list (\<^bold>u y 0) \<le> 0 + 1
[PROOF STEP]
by (auto simp: sum_list_update)
(metis Suc_eq_plus1 in_set_replicate length_replicate sum_list_eq_0_iff sum_list_inc_le')
[PROOF STATE]
proof (state)
this:
sum_list (\<^bold>u y 0) \<le> 0 + 1
goal (1 subgoal):
1. \<And>i. sum_list (\<^bold>u y i) \<le> i + 1 \<Longrightarrow> sum_list (\<^bold>u y (Suc i)) \<le> Suc i + 1
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. \<And>i. sum_list (\<^bold>u y i) \<le> i + 1 \<Longrightarrow> sum_list (\<^bold>u y (Suc i)) \<le> Suc i + 1
[PROOF STEP]
case (Suc i)
[PROOF STATE]
proof (state)
this:
sum_list (\<^bold>u y i) \<le> i + 1
goal (1 subgoal):
1. \<And>i. sum_list (\<^bold>u y i) \<le> i + 1 \<Longrightarrow> sum_list (\<^bold>u y (Suc i)) \<le> Suc i + 1
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
sum_list (\<^bold>u y i) \<le> i + 1
[PROOF STEP]
show ?case
[PROOF STATE]
proof (prove)
using this:
sum_list (\<^bold>u y i) \<le> i + 1
goal (1 subgoal):
1. sum_list (\<^bold>u y (Suc i)) \<le> Suc i + 1
[PROOF STEP]
by auto (metis Suc_le_mono add.commute le_trans length_us plus_1_eq_Suc sum_list_inc_le')
[PROOF STATE]
proof (state)
this:
sum_list (\<^bold>u y (Suc i)) \<le> Suc i + 1
goal:
No subgoals!
[PROOF STEP]
qed |
open import Relation.Binary.Core
module PLRTree.Heap.Properties {A : Set}
(_≤_ : A → A → Set)
(trans≤ : Transitive _≤_) where
open import PLRTree {A}
open import PLRTree.Heap _≤_
lemma-≤-≤* : {x y : A}{t : PLRTree} → x ≤ y → y ≤* t → x ≤* t
lemma-≤-≤* {x = x} _ (lf≤* _) = lf≤* x
lemma-≤-≤* x≤y (nd≤* y≤z y≤*l y≤*r) = nd≤* (trans≤ x≤y y≤z) (lemma-≤-≤* x≤y y≤*l) (lemma-≤-≤* x≤y y≤*r)
|
Currently, about 75,000 military personnel and 15,000 civilians comprise the armed forces, for a total of 90,000 men and women. Out of these 75,000, <unk>. 43,000 are in the Land Forces.
|
{-# LANGUAGE BinaryLiterals #-}
{-# LANGUAGE CPP #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE NegativeLiterals #-}
{-# LANGUAGE NoMonomorphismRestriction #-}
{-# LANGUAGE ScopedTypeVariables #-}
-- | Tests for the flat module
module Main where
import Control.Monad
import Data.Bits
import qualified Data.ByteString as B
import qualified Data.ByteString.Lazy as L
import qualified Data.ByteString.Short as SBS
import Data.Char
import Data.Either
import Flat
import Flat.Bits
import Flat.Decoder
import qualified Flat.Encoder as E
import qualified Flat.Encoder.Prim as E
import qualified Flat.Encoder.Strict as E
import Data.Int
import Data.Proxy
import qualified Data.Sequence as Seq
import Data.String (fromString)
import qualified Data.Text as T
import Data.Word
import Numeric.Natural
import System.Exit
import Test.Data
import Test.Data.Arbitrary ()
import Test.Data.Flat
import Test.Data.Values hiding (lbs, ns)
import Test.E
import Test.E.Arbitrary ()
import Test.E.Flat
import Test.Tasty
import Test.Tasty.HUnit
import Test.Tasty.QuickCheck as QC hiding (getSize)
import Flat.Endian
import Data.FloatCast
import Data.Text.Arbitrary
-- import Test.QuickCheck.Arbitrary
import qualified Data.Complex as B
import qualified Data.Ratio as B
import qualified Data.Map as C
import qualified Data.Map.Strict as CS
import qualified Data.Map.Lazy as CL
import qualified Data.IntMap.Strict as CS
import qualified Data.IntMap.Lazy as CL
-- import Data.List
-- import Data.Ord
#if MIN_VERSION_base(4,9,0)
import qualified Data.List.NonEmpty as BI
#endif
instance Arbitrary UTF8Text where
arbitrary = UTF8Text <$> arbitrary
shrink t = UTF8Text <$> shrink (unUTF8 t)
#if! defined(ghcjs_HOST_OS) && ! defined (ETA_VERSION)
instance Arbitrary UTF16Text where
arbitrary = UTF16Text <$> arbitrary
shrink t = UTF16Text <$> shrink (unUTF16 t)
#endif
-- instance Flat [Int16]
-- instance Flat [Word8]
-- instance Flat [Bool]
main = do
-- #ifdef ghcjs_HOST_OS
-- print "GHCJS"
-- #endif
-- printInfo
-- print $ flat asciiStrT
mainTest
-- print $ flatRaw 18446744073709551615::Word64
-- print $ B.unpack . flat $ (True,0::Word64,18446744073709551615::Word64)
-- print (2^56::Word64,fromIntegral (1::Word8) `shiftL` 56 :: Word64,(18446744073709551615::Word64) `shiftR` 1)
-- mainShow
-- eWord64E id 0b
mainShow = do
mapM_ (\_ -> generate (arbitrary :: Gen Int) >>= print) [1 .. 10]
exitFailure
mainTest = defaultMain tests
tests :: TestTree
tests = testGroup "Tests" [testPrimitives, testEncDec, testFlat]
testPrimitives =
testGroup "conversion/memory primitives" [testEndian, testFloatingConvert]
--,testShifts -- ghcjs fails this
testEncDec = testGroup
"encode/decode primitives"
[ testEncodingPrim
, testDecodingPrim
#ifdef TEST_DECBITS
, testDecBits
#endif
]
testFlat = testGroup
"flat/unflat"
[testSize, testLargeEnum, testContainers, flatUnflatRT, flatTests]
-- Flat.Endian tests (to run, need to modify imports and cabal file)
testEndian = testGroup
"Endian"
[ convBE toBE16 (2 ^ 10 + 3) (2 ^ 9 + 2 ^ 8 + 4)
, convBE toBE32 (2 ^ 18 + 3) 50332672
, convBE toBE64 (2 ^ 34 + 3) 216172782180892672
, convBE toBE16 0x1234 0x3412
, convBE toBE32 0x11223344 0x44332211
, convBE toBE64 0x0123456789ABCDEF 0xEFCDAB8967452301]
testFloatingConvert = testGroup
"Floating conversions"
[ conv floatToWord (-0.15625) 3189768192
, conv wordToFloat 3189768192 (-0.15625)
, conv doubleToWord (-0.15625) 13818169556679524352
, conv wordToDouble 13818169556679524352 (-0.15625)
, rt "floatToWord" (prop_float_conv :: RT Float)
, rt "doubleToWord" (prop_double_conv :: RT Double)]
convBE f v littleEndianE =
let e = if isBigEndian
then v
else littleEndianE
in testCase (unwords ["conv BigEndian", sshow v, "to", sshow e]) $ f v @?= e
conv f v e = testCase
(unwords ["conv", sshow v, showB . flat $ v, "to", sshow e])
$ f v @?= e
-- ghcjs bug on shiftR 0, see: https://github.com/ghcjs/ghcjs/issues/706
testShifts = testGroup "Shifts" $ map tst [0 .. 33]
where
tst n = testCase ("shiftR " ++ show n)
$ let val = 4294967295 :: Word32
s = val `shift` (-n)
r = val `shiftR` n
in r @?= s
-- shR = shiftR
-- shR = unsafeShiftR
shR val 0 = val
shR val n = shift val (-n)
testEncodingPrim = testGroup
"Encoding Primitives"
[ encRawWith 1 E.eTrueF [0b10000001]
, encRawWith 3 (E.eTrueF >=> E.eFalseF >=> E.eTrueF) [0b10100001]
-- Depends on endianess
--,encRawWith 32 (E.eWord32E id $ 2^18 + 3) [3,0,4,0,1]
-- ,encRawWith 64 (E.eWord64E id $ 0x1122334455667788) [0x88,0x77,0x66,0x55,0x44,0x33,0x22,0x11,1]
--,encRawWith 65 (E.eTrueF >=> E.eWord64E id (2^34 + 3)) [1,0,0,0,2,0,0,128,129]
--,encRawWith 65 (E.eFalseF >=> E.eWord64E id (2^34 + 3)) [1,0,0,0,2,0,0,0,129]
-- Big Endian
, encRawWith 32 (E.eWord32BEF $ 2 ^ 18 + 3) [0, 4, 0, 3, 1]
, encRawWith 64 (E.eWord64BEF $ 2 ^ 34 + 3) [0, 0, 0, 4, 0, 0, 0, 3, 1]
, encRawWith
65
(E.eTrueF >=> E.eWord64BEF (2 ^ 34 + 3))
[128, 0, 0, 2, 0, 0, 0, 1, 129]
, encRawWith
65
(E.eFalseF >=> E.eWord64BEF (2 ^ 34 + 3))
[0, 0, 0, 2, 0, 0, 0, 1, 129]]
where
encRawWith sz enc exp = testCase
(unwords ["encode raw with size", show sz])
$ flatRawWith sz enc @?= exp
testDecodingPrim = testGroup
"Decoding Primitives"
[ dec
((,,,) <$> dropBits 13 <*> dBool <*> dBool <*> dBool)
[0b10111110, 0b10011010]
((), False, True, False)
, dec
((,,,) <$> dropBits 1 <*> dBE16 <*> dBool <*> dropBits 6)
[0b11000000, 0b00000001, 0b01000000]
((), 2 ^ 15 + 2, True, ())
, dec
((,,,) <$> dropBits 1 <*> dBE32 <*> dBool <*> dropBits 6)
[0b11000000, 0b00000000, 0b00000000, 0b00000001, 0b01000000]
((), 2 ^ 31 + 2, True, ())
, dec
dBE64
[ 0b10000000
, 0b00000000
, 0b00000000
, 0b00000000
, 0b00000000
, 0b00000000
, 0b00000000
, 0b00000010]
(2 ^ 63 + 2)
, dec
((,,,) <$> dropBits 1 <*> dBE64 <*> dBool <*> dropBits 6)
[ 0b11000000
, 0b00000000
, 0b00000000
, 0b00000000
, 0b00000000
, 0b00000000
, 0b00000000
, 0b00000001
, 0b01000000]
((), 2 ^ 63 + 2, True, ())]
where
dec decOp v e = testCase (unwords ["decode", sshow v])
$ unflatRawWith decOp (B.pack v) @?= Right e
testDecBits = testGroup "Decode Bits"
$ concat
[ decBitsN dBEBits8
, decBitsN dBEBits16
, decBitsN dBEBits32
, decBitsN dBEBits64]
-- Test dBEBits8/16/32/64, extraction of up to 8/16/32/64 bits from various positions
where
decBitsN :: forall a.
(Num a, FiniteBits a, Show a, Flat a)
=> (Int -> Get a)
-> [TestTree]
decBitsN dec = let s = finiteBitSize (undefined :: a)
in [decBits_ dec val numBitsToTake pre
| numBitsToTake <- [0 .. s]
, val <- [ 0 :: a
, 1 + 2 ^ (s - 2) + 2 ^ (s - 5)
, fromIntegral $ (2 ^ s :: Integer) - 1]
, pre <- [0, 1, 7]]
decBits_ :: forall a.
(FiniteBits a, Show a, Flat a)
=> (Int -> Get a)
-> a
-> Int
-> Int
-> TestTree
decBits_ deco val numBitsToTake pre =
-- a sequence composed of pre zero bits, followed by the bits of val and then zero bits up to the next byte boundary
let vs = B.pack . asBytes . fromBools
$ replicate pre False ++ toBools (asBits val)
len = B.length vs
sz = finiteBitSize (undefined :: a)
dec :: Get a
dec = do
dropBits pre
r <- deco numBitsToTake
dropBits (len * 8 - numBitsToTake - pre)
return r
-- we expect the first numBitsToTake bits of the value
expectedD @ (Right expected) :: Decoded a = Right
$ val `shR` (sz - numBitsToTake) -- ghcjs: shiftR fails, see: https://github.com/ghcjs/ghcjs/issues/706
actualD @ (Right actual) :: Decoded a = unflatRawWith dec vs
in testCase
(unwords
[ "take"
, show numBitsToTake
, "bits from"
, show val
, "of size"
, show sz
, "with prefix"
, show pre
, "sequence"
, showB vs
, show expected
, show actual
, show $ val == actual
, show $ expected == actual
, show $ expected /= actual
, show $ show expected == show actual
, show $ flat expected == flat actual])
$ actualD @?= expectedD
testSize = testGroup "Size"
$ concat
[ sz () 0
, sz True 1
, sz One 2
, sz Two 2
, sz Three 2
, sz Four 3
, sz Five 3
, sz 'a' 8
, sz 'à' 16
, sz '经' 24
, sz (0 :: Word8) 8
, sz (1 :: Word8) 8
, concatMap (uncurry sz) ns
, concatMap (uncurry sz) nsI
, concatMap (uncurry sz) nsII
, sz (1.1 :: Float) 32
, sz (1.1 :: Double) 64
, sz "" 1
, sz "abc" (4 + 3 * 8)
, sz ((), (), Unit) 0
, sz (True, False, One, Five) 7
, sz map1 7
, sz bs (4 + 3 * 8)
, sz stBS bsSize
, sz lzBS bsSize
#ifndef ghcjs_HOST_OS
, sz shBS bsSize
#endif
, sz tx utf8Size
, sz (UTF8Text tx) utf8Size
#if! defined(ghcjs_HOST_OS) && ! defined (ETA_VERSION)
, sz (UTF16Text tx) utf16Size
#endif
]
where
tx = T.pack "txt"
utf8Size = 8 + 8 + 3 * 32 + 8
utf16Size = 8 + 8 + 3 * 16 + 8
bsSize = 8 + 8 + 3 * 8 + 8
sz v e = [testCase (unwords ["size of", sshow v]) $ getSize v @?= e]
-- E258_256 = 11111110 _257 = 111111110 _258 = 111111111
testLargeEnum = testGroup "test enum with more than 256 constructors"
$ concat
[
#ifdef ENUM_LARGE
sz E258_256 8
, sz E258_257 9
, sz E258_258 9
-- As encodes are inlined, this is going to take forever if this is compiled with -O1 or -O2
-- , encRaw (E258_256) [0b11111110]
-- , encRaw (E258_257) [0b11111111,0b00000000]
-- , encRaw (E258_258) [0b11111111,0b10000000]
-- , encRaw (E258_256,E258_257,E258_258) [0b11111110,0b11111111,0b01111111,0b11000000]
, map trip [E258_1, E258_256, E258_257, E258_258]
, map trip [E256_1, E256_134, E256_256]
#endif
]
testContainers =
testGroup "containers" [trip longSeq, trip dataMap, trip listMap]
-- , trip intMap
flatUnflatRT = testGroup
"unflat (flat v) == v"
[ rt "()" (prop_Flat_roundtrip :: RT ())
, rt "Bool" (prop_Flat_roundtrip :: RT Bool)
, rt "Char" (prop_Flat_roundtrip :: RT Char)
, rt "Complex" (prop_Flat_roundtrip :: RT (B.Complex Float))
, rt "Either N Bool" (prop_Flat_roundtrip :: RT (Either N Bool))
, rt "Either Int Char" (prop_Flat_roundtrip :: RT (Either Int Char))
, rt "Int8" (prop_Flat_Large_roundtrip :: RTL Int8)
, rt "Int16" (prop_Flat_Large_roundtrip :: RTL Int16)
, rt "Int32" (prop_Flat_Large_roundtrip :: RTL Int32)
, rt "Int64" (prop_Flat_Large_roundtrip :: RTL Int64)
, rt "Int" (prop_Flat_Large_roundtrip :: RTL Int)
, rt "[Int16]" (prop_Flat_roundtrip :: RT [Int16])
, rt "String" (prop_Flat_roundtrip :: RT String)
#if MIN_VERSION_base(4,9,0)
, rt "NonEmpty" (prop_Flat_roundtrip :: RT (BI.NonEmpty Bool))
#endif
, rt "Maybe N" (prop_Flat_roundtrip :: RT (Maybe N))
, rt "Ratio" (prop_Flat_roundtrip :: RT (B.Ratio Int32))
, rt "Word8" (prop_Flat_Large_roundtrip :: RTL Word8)
, rt "Word16" (prop_Flat_Large_roundtrip :: RTL Word16)
, rt "Word32" (prop_Flat_Large_roundtrip :: RTL Word32)
, rt "Word64" (prop_Flat_Large_roundtrip :: RTL Word64)
, rt "Word" (prop_Flat_Large_roundtrip :: RTL Word)
, rt "Natural" (prop_Flat_roundtrip :: RT Natural)
, rt "Integer" (prop_Flat_roundtrip :: RT Integer)
, rt "Float" (prop_Flat_roundtrip :: RT Float)
, rt "Double" (prop_Flat_roundtrip :: RT Double)
, rt "Text" (prop_Flat_roundtrip :: RT T.Text)
, rt "UTF8 Text" (prop_Flat_roundtrip :: RT UTF8Text)
#if! defined(ghcjs_HOST_OS) && ! defined (ETA_VERSION)
, rt "UTF16 Text" (prop_Flat_roundtrip :: RT UTF16Text)
#endif
, rt "ByteString" (prop_Flat_roundtrip :: RT B.ByteString)
, rt "Lazy ByteString" (prop_Flat_roundtrip :: RT L.ByteString)
#ifndef ghcjs_HOST_OS
, rt "Short ByteString" (prop_Flat_roundtrip :: RT SBS.ShortByteString)
#endif
, rt "Map.Strict" (prop_Flat_roundtrip :: RT (CS.Map Int Bool))
, rt "Map.Lazy" (prop_Flat_roundtrip :: RT (CL.Map Int Bool))
, rt "IntMap.Strict" (prop_Flat_roundtrip :: RT (CS.IntMap Bool))
, rt "IntMap.Lazy" (prop_Flat_roundtrip :: RT (CL.IntMap Bool))
, rt "Unit" (prop_Flat_roundtrip :: RT Unit)
, rt "Un" (prop_Flat_roundtrip :: RT Un)
, rt "N" (prop_Flat_roundtrip :: RT N)
, rt "E2" (prop_Flat_roundtrip :: RT E2)
, rt "E3" (prop_Flat_roundtrip :: RT E3)
, rt "E4" (prop_Flat_roundtrip :: RT E4)
, rt "E8" (prop_Flat_roundtrip :: RT E8)
, rt "E16" (prop_Flat_roundtrip :: RT E16)
, rt "E17" (prop_Flat_roundtrip :: RT E17)
, rt "E32" (prop_Flat_roundtrip :: RT E32)
, rt "A" (prop_Flat_roundtrip :: RT A)
, rt "B" (prop_Flat_roundtrip :: RT B)
-- ,rt "Tree Bool" (prop_Flat_roundtrip:: RT (Tree Bool))
-- ,rt "Tree N" (prop_Flat_roundtrip:: RT (Tree N))
, rt "List N" (prop_Flat_roundtrip :: RT (List N))]
rt n = QC.testProperty (unwords ["round trip", n])
flatTests = testGroup "flat/unflat Unit tests"
$ concat
[ -- Expected errors
errDec (Proxy :: Proxy Bool) [] -- no data
, errDec (Proxy :: Proxy Bool) [128] -- no filler
, errDec (Proxy :: Proxy Bool) [128 + 1, 1, 2, 4, 8] -- additional bytes
, errDec (Proxy :: Proxy Text) (B.unpack (flat ((fromString "\x80") :: B.ByteString))) -- invalid UTF-8
, encRaw () []
, encRaw ((), (), Unit) []
, encRaw (Unit, 'a', Unit, 'a', Unit, 'a', Unit) [97, 97, 97]
, a () [1]
, a True [128 + 1]
, a (True, True) [128 + 64 + 1]
, a (True, False, True) [128 + 32 + 1]
, a (True, False, True, True) [128 + 32 + 16 + 1]
, a (True, False, True, True, True) [128 + 32 + 16 + 8 + 1]
, a (True, False, True, True, True, True) [128 + 32 + 16 + 8 + 4 + 1]
, a
(True, False, True, True, True, True, True)
[128 + 32 + 16 + 8 + 4 + 2 + 1]
, a
(True, False, True, True, (True, True, True, True))
[128 + 32 + 16 + 8 + 4 + 2 + 1, 1]
, encRaw (True, False, True, True) [128 + 32 + 16]
, encRaw
( (True, True, False, True, False)
, (False, False, True, False, True, True))
[128 + 64 + 16 + 1, 64 + 32]
, encRaw ('\0', '\1', '\127') [0, 1, 127]
, encRaw (33 :: Word32, 44 :: Word32) [33, 44]
--,s (Elem True) [64]
--,s (NECons True (NECons False (Elem True))) [128+64+32+4]
, encRaw (0 :: Word8) [0]
, encRaw (1 :: Word8) [1]
, encRaw (255 :: Word8) [255]
, encRaw (0 :: Word16) [0]
, encRaw (1 :: Word16) [1]
, encRaw (255 :: Word16) [255, 1]
, encRaw (256 :: Word16) [128, 2]
, encRaw (65535 :: Word16) [255, 255, 3]
, encRaw (127 :: Word32) [127]
, encRaw (128 :: Word32) [128, 1]
, encRaw (129 :: Word32) [129, 1]
, encRaw (255 :: Word32) [255, 1]
, encRaw (16383 :: Word32) [255, 127]
, encRaw (16384 :: Word32) [128, 128, 1]
, encRaw (16385 :: Word32) [129, 128, 1]
, encRaw (32767 :: Word32) [255, 255, 1]
, encRaw (32768 :: Word32) [128, 128, 2]
, encRaw (32769 :: Word32) [129, 128, 2]
, encRaw (65535 :: Word32) [255, 255, 3]
, encRaw (2097151 :: Word32) [255, 255, 127]
, encRaw (2097152 :: Word32) [128, 128, 128, 1]
, encRaw (2097153 :: Word32) [129, 128, 128, 1]
, encRaw (4294967295 :: Word32) [255, 255, 255, 255, 15]
, encRaw (255 :: Word64) [255, 1]
, encRaw (65535 :: Word64) [255, 255, 3]
, encRaw (4294967295 :: Word64) [255, 255, 255, 255, 15]
, encRaw
(18446744073709551615 :: Word64)
[255, 255, 255, 255, 255, 255, 255, 255, 255, 1]
, encRaw
(False, 18446744073709551615 :: Word64)
[127, 255, 255, 255, 255, 255, 255, 255, 255, 128, 128]
, encRaw (255 :: Word) [255, 1]
, encRaw (65535 :: Word) [255, 255, 3]
, encRaw (4294967295 :: Word) [255, 255, 255, 255, 15]
, tstI [0 :: Int8, 2, -2]
, encRaw (127 :: Int8) [254]
, encRaw (-128 :: Int8) [255]
, tstI [0 :: Int16, 2, -2, 127, -128]
, tstI [0 :: Int32, 2, -2, 127, -128]
, tstI [0 :: Int64, 2, -2, 127, -128]
, encRaw (-1024 :: Int64) [255, 15]
, encRaw (maxBound :: Word8) [255]
, encRaw (True, maxBound :: Word8) [255, 128]
, encRaw (maxBound :: Word16) [255, 255, 3]
, encRaw (True, maxBound :: Word16) [255, 255, 129, 128]
, encRaw (maxBound :: Word32) [255, 255, 255, 255, 15]
, encRaw (True, maxBound :: Word32) [255, 255, 255, 255, 135, 128]
, encRaw
(maxBound :: Word64)
[255, 255, 255, 255, 255, 255, 255, 255, 255, 1]
, encRaw
(True, maxBound :: Word64)
[255, 255, 255, 255, 255, 255, 255, 255, 255, 128, 128]
, encRaw
(minBound :: Int64)
[255, 255, 255, 255, 255, 255, 255, 255, 255, 1]
, encRaw
(maxBound :: Int64)
[254, 255, 255, 255, 255, 255, 255, 255, 255, 1]
, tstI [0 :: Int, 2, -2, 127, -128]
, tstI [0 :: Integer, 2, -2, 127, -128, -256, -512]
, encRaw (-1024 :: Integer) [255, 15]
, encRaw (0 :: Float) [0, 0, 0, 0]
, encRaw (-2 :: Float) [0b11000000, 0, 0, 0]
, encRaw (0.085 :: Float) [0b00111101, 0b10101110, 0b00010100, 0b01111011]
, encRaw (0 :: Double) [0, 0, 0, 0, 0, 0, 0, 0]
, encRaw (-2 :: Double) [0b11000000, 0, 0, 0, 0, 0, 0, 0]
, encRaw (23 :: Double) [0b01000000, 0b00110111, 0, 0, 0, 0, 0, 0]
, encRaw (-0.15625 :: Float) [0b10111110, 0b00100000, 0, 0]
, encRaw (-0.15625 :: Double) [0b10111111, 0b11000100, 0, 0, 0, 0, 0, 0]
, encRaw
(-123.2325E-23 :: Double)
[ 0b10111011
, 0b10010111
, 0b01000111
, 0b00101000
, 0b01110101
, 0b01111011
, 0b01000111
, 0b10111010]
, encRaw (Left True :: Either Bool (Double, Double)) [0b01000000]
, encRaw (-2.1234E15 :: Double) [195, 30, 44, 226, 90, 221, 64, 0]
, encRaw (1.1234E-22 :: Double) [59, 96, 249, 241, 120, 219, 249, 174]
, encRaw
((False, -2.1234E15) :: (Bool, Double))
[97, 143, 22, 113, 45, 110, 160, 0, 0]
, encRaw
((True, -2.1234E15) :: (Bool, Double))
[225, 143, 22, 113, 45, 110, 160, 0, 0]
, encRaw ((-2.1234E15, 1.1234E-22) :: (Double, Double))
$ [0b11000011, 30, 44, 226, 90, 221, 64, 0]
++ [59, 96, 249, 241, 120, 219, 249, 174]
, encRaw
((True, -2.1234E15, 1.1234E-22) :: (Bool, Double, Double))
[ 0b11100001
, 143
, 22
, 113
, 45
, 110
, 160
, 0
, 29
, 176
, 124
, 248
, 188
, 109
, 252
, 215
, 0]
, encRaw
(Right (-2.1234E15, 1.1234E-22) :: Either Bool (Double, Double))
[ 0b11100001
, 143
, 22
, 113
, 45
, 110
, 160
, 0
, 29
, 176
, 124
, 248
, 188
, 109
, 252
, 215
, 0]
, encRaw (Left True :: Either Bool Direction) [0b01000000]
, encRaw (Right West :: Either Bool Direction) [0b11110000]
, map trip [minBound, maxBound :: Word8]
, map trip [minBound, maxBound :: Word16]
, map trip [minBound, maxBound :: Word32]
, map trip [minBound, maxBound :: Word64]
, map trip [minBound :: Int8, maxBound :: Int8]
, map trip [minBound :: Int16, maxBound :: Int16]
, map trip [minBound :: Int32, maxBound :: Int32]
, map trip [minBound :: Int64, maxBound :: Int64]
, map tripShow [0 :: Float, -0 :: Float, 0 / 0 :: Float, 1 / 0 :: Float]
, map
tripShow
[0 :: Double, -0 :: Double, 0 / 0 :: Double, 1 / 0 :: Double]
, encRaw '\0' [0]
, encRaw '\1' [1]
, encRaw '\127' [127]
, encRaw 'a' [97]
, encRaw 'à' [224, 1]
, encRaw '经' [207, 253, 1]
, [trip [chr 0x10FFFF]]
, encRaw Unit []
, encRaw (Un False) [0]
, encRaw (One, Two, Three) [16 + 8]
, encRaw (Five, Five, Five) [255, 128]
--,s (NECons True (Elem True)) [128+64+16]
, encRaw "" [0]
#ifdef LIST_BIT
, encRaw "abc" [176, 216, 172, 96]
, encRaw [False, True, False, True] [128 + 32 + 16 + 8 + 2 + 1, 0]
#elif defined(LIST_BYTE)
, s "abc" s3
, s (cs 600) s600
#endif
-- Aligned structures
--,s (T.pack "") [1,0]
--,s (Just $ T.pack "abc") [128+1,3,97,98,99,0]
--,s (T.pack "abc") (al s3)
--,s (T.pack $ cs 600) (al s600)
, encRaw map1 [0b10111000]
, encRaw (B.pack $ csb 3) (bsl c3)
, encRaw (B.pack $ csb 600) (bsl s600)
, encRaw (L.pack $ csb 3) (bsl c3)
-- Long LazyStrings can have internal sections shorter than 255
--,s (L.pack $ csb 600) (bsl s600)
, [trip [1 .. 100 :: Int16]]
-- See https://github.com/typelead/eta/issues/901
#ifndef ETA_VERSION
, [trip longAsciiStrT]
, [trip longBoolListT]
#endif
, [trip asciiTextT]
, [trip english]
, [trip "维护和平正"]
, [trip (T.pack "abc")]
, [trip unicodeText]
, [trip unicodeTextUTF8T]
, [trip longBS, trip longLBS]
#ifndef ghcjs_HOST_OS
, [trip longSBS]
#endif
#if! defined(ghcjs_HOST_OS) && ! defined (ETA_VERSION)
, [trip unicodeTextUTF16T]
#endif
]
--al = (1:) -- prealign
where
bsl = id -- noalign
tstI = map ti
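-- ti checks the ZigZag expectation for signed values: a value v is expected to
-- encode like the unsigned value 2*v when v >= 0 and like 2*(-v) - 1 when v < 0.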
ti v
| v >= 0 = testCase (unwords ["Int", show v])
$ teq v (2 * fromIntegral v :: Word64)
| otherwise = testCase (unwords ["Int", show v])
$ teq v (2 * fromIntegral (-v) - 1 :: Word64)
teq a b = ser a @?= ser b
--,testCase (unwords ["unflat raw",sshow v]) $ desRaw e @?= Right v]
-- Aligned values unflat to the original value, modulo the added filler.
a v e = [ testCase (unwords ["flat", sshow v]) $ ser v @?= e
, testCase (unwords ["unflat", sshow v])
$ let Right v' = des e
in v @?= v']
-- a v e = [testCase (unwords ["flat postAligned",show v]) $ ser (postAligned v) @?= e
-- ,testCase (unwords ["unflat postAligned",show v]) $ let Right (PostAligned v' _) = des e in v @?= v']
encRaw :: forall a. (Show a, Flat a) => a -> [Word8] -> [TestTree]
encRaw v e =
[ testCase (unwords ["flat raw", sshow v, show . B.unpack . flat $ v])
$ serRaw v @?= e]
trip :: forall a. (Show a, Flat a, Eq a) => a -> TestTree
trip v = testCase (unwords ["roundtrip", sshow v])
$
-- direct comparison
(unflat (flat v :: B.ByteString) :: Decoded a) @?= (Right v :: Decoded a)
tripShow :: forall a. (Show a, Flat a, Eq a) => a -> TestTree
tripShow v = testCase (unwords ["roundtrip", sshow v])
$
-- we use show to get Right NaN == Right NaN
show (unflat (flat v :: B.ByteString) :: Decoded a)
@?= show (Right v :: Decoded a)
-- Test Data
lzBS = L.pack bs
stBS = B.pack bs
bs = [32, 32, 32 :: Word8]
s3 = [3, 97, 98, 99, 0]
c3a = [3, 99, 99, 99, 0] -- Array Word8
c3 = pre c3a
s600 = pre s600a
pre = (1:)
s600a = concat [[255], csb 255, [255], csb 255, [90], csb 90, [0]]
s600B =
concat [[55], csb 55, [255], csb 255, [90], csb 90, [200], csb 200, [0]]
longSeq :: Seq.Seq Word8
longSeq = Seq.fromList lbs
longBS = B.pack lbs
longLBS = L.concat $ concat $ replicate 10 [L.pack lbs]
lbs = concat $ replicate 100 [234, 123, 255, 0]
cs n = replicate n 'c' -- take n $ cycle ['a'..'z']
csb = map (fromIntegral . ord) . cs
map1 = C.fromList [(False, True), (True, False)]
ns :: [(Word64, Int)]
ns = [((-) (2 ^ (i * 7)) 1, fromIntegral (8 * i)) | i <- [1 .. 10]]
nsI :: [(Int64, Int)]
nsI = nsI_
nsII :: [(Integer, Int)]
nsII = nsI_
nsI_ = [((-) (2 ^ (((-) i 1) * 7)) 1, fromIntegral (8 * i)) | i <- [1 .. 10]]
#ifndef ghcjs_HOST_OS
shBS = SBS.toShort stBS
longSBS = SBS.toShort longBS
#endif
sshow = take 80 . show
showB = show . B.unpack
errDec :: forall a. (Flat a, Eq a, Show a) => Proxy a -> [Word8] -> [TestTree]
--errDec _ bs = [testCase "bad decode" $ let ev = (des bs::Decoded a) in ev @?= Left ""]
errDec _ bs = [ testCase "bad decode"
$ let ev = (des bs :: Decoded a)
in isRight ev @?= False]
ser :: Flat a => a -> [Word8]
ser = B.unpack . flat
des :: Flat a => [Word8] -> Decoded a
des = unflat
flatRawWith sz enc = B.unpack
$ E.strictEncoder (sz + 8) (E.Encoding $ enc >=> E.eFillerF)
serRaw :: Flat a => a -> [Word8]
-- serRaw = B.unpack . flatRaw
-- serRaw = L.unpack . flatRaw
serRaw = asBytes . bits
--desRaw :: Flat a => [Word8] -> Decoded a
--desRaw = unflatRaw . L.pack
type RT a = a -> Bool
type RTL a = Large a -> Bool
prop_Flat_roundtrip :: (Flat a, Eq a) => a -> Bool
prop_Flat_roundtrip = roundTripExt
prop_Flat_Large_roundtrip :: (Eq b, Flat b) => Large b -> Bool
prop_Flat_Large_roundtrip (Large x) = roundTripExt x
roundTrip x = unflat (flat x :: B.ByteString) == Right x
-- Test roundtrip for both the value and the value embedded between bools
roundTripExt x = roundTrip x && roundTrip (True, x, False)
prop_double_conv d = wordToDouble (doubleToWord d) == d
prop_float_conv d = wordToFloat (floatToWord d) == d
{-
prop_common_unsigned :: (Num l,Num h,Flat l,Flat h) => l -> h -> Bool
prop_common_unsigned n _ = let n2 :: h = fromIntegral n
in flat n == flat n2
-}
-- e :: Stream Bool
-- e = unflatIncremental . flat $ stream1
-- el :: List Bool
-- el = unflatIncremental . flat $ infList
-- deflat = unflat
-- b1 :: BLOB UTF8
-- b1 = BLOB UTF8 (preAligned (List255 [97,98,99]))
-- -- b1 = BLOB (preAligned (UTF8 (List255 [97,98,99])))
|
\section{Conclusion}
LSST \gls{DM} has been constructing a cloud-ready system for many years. We believe commercial cloud is the correct approach, but we may be a few years ahead of the point where commercial and federal cost models align. We hope we may be able to partner with Google to usher in a new era of federally funded research in the cloud.
~
|
#include "opq.h"
#include <boost/math/constants/constants.hpp>
#include <boost/thread/once.hpp>
#include "externals/cxx/namespace.hpp"
#include "externals/cxx/pretty_function.hpp"
#include <stdexcept>
BEGIN_NAMESPACE(rysq, asymptotic)
template<size_t N, typename T>
struct asymptotic_ {
static void roots(T X, T *R, T *W) {
boost::call_once(once_flag, &asymptotic_<N,T>::initialize);
T r = 1/X;
T w = 1/sqrt(X);
for (size_t i = 0; i < N; ++i) {
R[i] = r*R_[i];
W[i] = w*W_[i];
}
}
private:
static T R_[N], W_[N];
static boost::once_flag once_flag;
static void initialize() {
static const size_t n = 2*N;
T beta[n], alpha[n] = { 0 };
beta[0] = boost::math::constants::root_pi<T>();
for (size_t i = 1; i < n; ++i) {
beta[i] = T(i)/2;
}
T r[n], w[n];
int status = opq::coefficients(n, alpha, beta, r, w);
if (status != 0) {
throw std::runtime_error(PRETTY_FUNCTION("opq::coefficients returned ",
status));
}
// CALL RYSGW_(N,ALPHA,BETA,EPS,RTS,W_TS,IERR,W_RK) for
for (size_t i = 0; i < N; ++i) {
size_t j = i + N;
R_[i] = r[j]*r[j];
W_[i] = w[j];
}
}
};
template<size_t N, typename T> T asymptotic_<N,T>::R_[N];
template<size_t N, typename T> T asymptotic_<N,T>::W_[N];
template<size_t N, typename T>
boost::once_flag asymptotic_<N,T>::once_flag = BOOST_ONCE_INIT;
template<size_t N, typename T>
void roots(T X, T *R, T *W) {
asymptotic_<N,T>::roots(X, R, W);
}
END_NAMESPACE(rysq, asymptotic)
|
! ##################################################################################################################################
! Begin MIT license text.
! _______________________________________________________________________________________________________
! Copyright 2019 Dr William R Case, Jr ([email protected])
! Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
! associated documentation files (the "Software"), to deal in the Software without restriction, including
! without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
! copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to
! the following conditions:
! The above copyright notice and this permission notice shall be included in all copies or substantial
! portions of the Software and documentation.
! THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
! OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
! FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
! AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
! LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
! OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
! THE SOFTWARE.
! _______________________________________________________________________________________________________
! End MIT license text.
SUBROUTINE CHECK_BAR_MOIs ( NAME, ID, I1, I2, I12, IERR )
! Checks sensibility of the 3 MOI's of a BAR or BEAM element and replaces zero values with small finite ones
USE PENTIUM_II_KIND, ONLY : BYTE, LONG, DOUBLE
USE IOUNT1, ONLY : WRT_LOG, ERR, F04, F06
USE SCONTR, ONLY : BLNK_SUB_NAM, FATAL_ERR
USE TIMDAT, ONLY : TSEC
USE PARAMS, ONLY : EPSIL, SUPINFO
USE CONSTANTS_1, ONLY : ZERO
USE SUBR_BEGEND_LEVELS, ONLY : CHECK_BAR_MOIs_BEGEND
USE CHECK_BAR_MOIs_USE_IFs
IMPLICIT NONE
CHARACTER(LEN=LEN(BLNK_SUB_NAM)):: SUBR_NAME = 'CHECK_BAR_MOIs'
CHARACTER(LEN=*), INTENT(IN) :: NAME ! Either PBAR, PBARL or PBEAM
CHARACTER(LEN=*), INTENT(IN) :: ID ! Character value of the bar's ID
INTEGER(LONG), INTENT(OUT) :: IERR ! Error indicator
INTEGER(LONG), PARAMETER :: SUBR_BEGEND = CHECK_BAR_MOIs_BEGEND
REAL(DOUBLE), INTENT(INOUT) :: I1 ! MOI of the bar or beam
REAL(DOUBLE), INTENT(INOUT) :: I2 ! MOI of the bar or beam
REAL(DOUBLE), INTENT(INOUT) :: I12 ! MOI of the bar or beam
REAL(DOUBLE) :: EPS1 ! A small number
! *********************************************************************************************************************************
IF (WRT_LOG >= SUBR_BEGEND) THEN
CALL OURTIM
WRITE(F04,9001) SUBR_NAME,TSEC
9001 FORMAT(1X,A,' BEGIN',F10.3)
ENDIF
! **********************************************************************************************************************************
! Initialize
IERR = 0
EPS1 = EPSIL(1)
! If I12 is zero replace a zero value of I1 and/or I2 with a small positive value. If I12 is not zero, make sure I1*I2 >= I12^2
IF (DABS(I12) <= EPS1) THEN
IF (DABS(I1) < EPS1) THEN
I1 = 10*EPS1
WRITE(ERR,1001) NAME, ID, 'I1', I1
IF (SUPINFO == 'N') THEN
WRITE(F06,1001) NAME, ID, 'I1', I1
ENDIF
ENDIF
IF (DABS(I2) <= EPS1) THEN
I2 = 10*EPS1
WRITE(ERR,1001) NAME, ID, 'I2', I2
IF (SUPINFO == 'N') THEN
WRITE(F06,1001) NAME, ID, 'I2', I2
ENDIF
ENDIF
ELSE ! I12^2 <= I1*I2 is required; for a valid cross section this follows from the Cauchy-Schwarz inequality
IF (I12*I12 > I1*I2) THEN
IERR = IERR + 1
WRITE(ERR,1195) NAME, ID, I1, I2, I12
WRITE(F06,1195) NAME, ID, I1, I2, I12
ENDIF
ENDIF
! **********************************************************************************************************************************
IF (WRT_LOG >= SUBR_BEGEND) THEN
CALL OURTIM
WRITE(F04,9002) SUBR_NAME,TSEC
9002 FORMAT(1X,A,' END ',F10.3)
ENDIF
RETURN
! **********************************************************************************************************************************
1001 FORMAT(' *INFORMATION: FOR ',A,' ',A8,' MOMENT OF INERTIA ',A,' HAS BEEN CHANGED FROM 0 TO A SMALL NUMBER = ',1ES10.3)
1195 FORMAT(' *ERROR 1195: THE MOMENTS AND PRODUCTS OF INERTIA ON ',A,' ',A8,' DO NOT SATISFY THE REQUIREMENT THAT:' &
,/,14X,' I12^2 <= I1*I2 WHERE: I1 = ',1ES10.3,', I2 = ',1ES10.3,', I12 = ',1ES10.3)
! **********************************************************************************************************************************
END SUBROUTINE CHECK_BAR_MOIs
|
MODULE integers
USE ISO_C_BINDING
IMPLICIT NONE
CONTAINS
FUNCTION add_ints1(a,b) RESULT(y)
INTEGER ( KIND = 1 ) , INTENT(in) :: a
INTEGER(1), INTENT(in) :: b
INTEGER*1 :: y
y = a+b
END FUNCTION add_ints1
FUNCTION add_ints2(a,b) RESULT(y)
INTEGER*2, INTENT(in) :: a,b
INTEGER(KIND=2 ) :: y
y = a+b
END FUNCTION add_ints2
FUNCTION add_ints4(a,b) RESULT(y)
INTEGER (KIND =4), INTENT(in) :: a
INTEGER, INTENT(in) :: b
INTEGER :: y
y = a+b
END FUNCTION add_ints4
FUNCTION add_ints8(a,b) RESULT(y)
INTEGER (KIND= 8), INTENT(in) :: a
INTEGER*8, INTENT(in) :: b
INTEGER(8) :: y
y = a+b
END FUNCTION add_ints8
FUNCTION add_ints_byval(a,b) RESULT(y)
INTEGER, INTENT(in), VALUE :: a, b
INTEGER :: y
y = a + b
END FUNCTION add_ints_byval
! Test lower case parsing:
function add_ints1_lower(a,b) result(y)
integer ( kind = 1 ) , intent(in) :: a
integer ( 1), intent(in) :: b
integer*1 :: y
y = a+b
end function add_ints1_lower
function add_ints2_lower(a,b) result(y)
integer*2, intent(in) :: a,b
integer(KIND=2 ) :: y
y = a+b
end function add_ints2_lower
function add_ints4_lower(a,b) result(y)
integer (kind =4), intent(in) :: a
integer, intent(in) :: b
integer :: y
y = a+b
end function add_ints4_lower
FUNCTION add_iso_ints(a,b) RESULT(y)
INTEGER (C_INT) , INTENT(in) :: a, b
INTEGER (C_INT) :: y
y = a+b
END FUNCTION add_iso_ints
FUNCTION add_iso_longs(a,b) RESULT(y)
INTEGER (C_LONG) , INTENT(in) :: a, b
INTEGER (C_LONG) :: y
y = a+b
END FUNCTION add_iso_longs
END MODULE integers
|
Do you wonder whether you just use drugs or, rather, abuse them? Has someone close to you made a comment about your use of alcohol, cocaine, painkillers, or marijuana (or commented more than once)? This book’s purpose is to help you determine whether you’re progressing towards full-blown addiction. And it provides suggestions for how to turn things around before things get really ugly.
The purpose of this book is to stop kind-of-bad behavior before it becomes really bad behavior—bad for you, your health, your job, and those in your life who care about you. Almost Addicted explains how to navigate the spectrum of addiction, spelling out the differences between an emerging, would-be drug problem and a devastating drug problem already in full motion.
And if you’re asking these questions on behalf of someone else—wondering about your spouse’s or sibling’s drug use—this book will help answer those questions as well. It will help you sort out the difference between a potential problem and less worrisome occasional drug use.
Why stop using drugs if I’m not actually an addict?
Why do I feel like my spouse has a drug problem, but I can’t exactly pinpoint it?
What are some signs that a person is using drugs?
Am I medicating my anxiety with marijuana?
How do I stop my drug use from slowly turning into full-blown addiction?
Part 1 A Problem Emerges from the Shadows discusses why almost-addiction is a concern and why we should care about a pre-addiction even if it’s not yet full-blown—think of it as preventative care. The authors very clearly differentiate between how a real, full-blown addict acts and how someone who may be an almost-addict acts, making it easier to identify where on the spectrum of addiction one may presently fall. It also discusses why disengaging from being almost-addicted to substances is a good idea.
Would you have more money to spend on necessities or luxuries?
Would you do better at work?
Would you have more time to devote to hobbies?
Would any anxiety and depression you experience improve?
Is your drug use a source of tension or arguments with your wife, husband, or partner?
Does drug use eat into the time that you could otherwise be spending with your children?
Do you use drugs as a way to avoid family responsibilities?
Part 2 The Roots of Almost Addiction delves into the impact of the past on one’s present life and the correlation between mental health issues and drug use/abuse. The authors smartly point out that oftentimes certain medical conditions may also be in play – and they discuss how to navigate that double-sided coin. Those who suffer from depression, ADD/ADHD, trauma, and other mental health challenges are more likely to struggle at some point with substance use/abuse.
Part 3 Catching and Confronting Almost Addiction in Others explains the warning signs for spotting drug use/abuse, as well as how to handle your response to discovering the other person’s drug use. It discusses the importance of protecting yourself emotionally, financially and physically from the drug user, such as by refusing to take part in the cover-up and denial, and it recommends focusing on your own health and well-being, including enlisting the support of others.
Part 4 Solutions for Your Almost Addiction offers some smart tips for creating a life that’s full, rather than full of holes, in the absence of drugs. One chapter in particular, Time for A Change: Helping Yourself, is so very practical that it really could be the topic for a very useful follow-up book on how to recreate a life after stopping drugs. It explains how switching-up your daily routines can support your avoidance of drugs, as well as how to come up with a list of your “triggers,” so that you can proactively avoid being triggered. Essentially, having a plan for how you’ll engage your mind, body and spirit in the absence of drug use means you’ll…have a plan! No plan = misery. Does this mean new friends? You bet. At the end of the day, will you care about having ditched those people? Not one bit.
I have just one criticism of Almost Addicted. I so, so wish that the section on Solutions for Your Almost Addiction had been emphasized and placed before the section Catching and Confronting Almost Addiction in Others. The book’s order of chapters subtly suggests that focusing on others’ addictions should come before focusing on one’s own, and I feel the reverse is true. It’s most productive to focus first on one’s own potential addictions — you know, put on your oxygen mask first, then help your fellow passengers.
Here’s a link to the book on Amazon.com, where you’ll find this review and other reviews of the book. |
Starting XI Prediction: Solskjaer to keep 4-3-3 formation with Lingard, Rashford, and Martial leading the attack against PSG?
Manchester United return to UEFA Champions League action against Paris Saint-Germain at Old Trafford on Tuesday evening. It is what the Theatre of Dreams is made for: European nights under the floodlights. Last season, United made it to the same stage of the competition, losing to Sevilla and going out. This season, under the management of Ole Gunnar Solskjaer, things may go a bit differently for the Red Devils. Confidence is high and United are undefeated in 11 matches, winning ten and drawing one.
Now is perhaps the best time to face the French champions, as Neymar is out of the match and striker Edinson Cavani limped out of Saturday’s 1-0 victory over Bordeaux. PSG are a competent team, but playing in France they do not really have much competition, which may be a negative ahead of this game; United, by contrast, have been challenged in every match they have played, showing the difference between the Premier League and French football.
The Spanish number one was rarely tested against Fulham at Craven Cottage on Saturday, but when he was, he was at his best. There are suggestions that the goalkeeper is asking for a £350,000-per-week wage to sign a new contract at United, which is fair considering he is the player who has been the difference for United more often than not. He sees players like Alexis Sanchez on a high wage, yet in terms of what he offers United, De Gea offers so much more.
United have managed to keep two clean sheets in their last two matches, which shows that some things have changed under Solskjaer. Ashley Young and Victor Lindelof were rested against Fulham, with the ‘ice man’ suffering from a knock. I expect both will be back against PSG, with Luke Shaw keeping his place and Eric Bailly partnering Lindelof, a pairing that has been in good form recently. Solskjaer also has Phil Jones, Chris Smalling and possibly Marcos Rojo to call upon, keeping his options open.
United’s best midfield will be tested against PSG on Tuesday: Ander Herrera, Nemanja Matic, and Paul Pogba. Both Herrera and Matic allow Pogba to play further forward, which has seen the Frenchman produce some good football recently, scoring eight goals and providing five assists in his last ten matches under Solskjaer. This United midfield has the form, the ability and the experience to work their magic against PSG at the Theatre of Dreams.
Marcus Rashford and Jesse Lingard were both rested in the 3-0 victory over Fulham at the weekend, meaning they will both be fresh to face PSG at Old Trafford. Anthony Martial played 70 minutes against Fulham, scoring a sublime goal and adding an assist, so his confidence will be high ahead of the Champions League match. These three, on their day, can be devastating to any team. If United manage to keep a clean sheet at home, a place in the quarter-finals could be within reach this season.
|
using StochasticDelayDiffEq
using Random
using SparseArrays
function sir_dde!(du,u,h,p,t)
(S,I,R) = u
(β,c,τ) = p
N = S+I+R
infection = β*c*I/N*S
(Sd,Id,Rd) = h(p, t-τ) # Time delayed variables
Nd = Sd+Id+Rd
recovery = β*c*Id/Nd*Sd
@inbounds begin
du[1] = -infection
du[2] = infection - recovery
du[3] = recovery
end
nothing
end;
# Define a sparse matrix by making a dense matrix and setting some values as not zero
A = zeros(3,2)
A[1,1] = 1
A[2,1] = 1
A[2,2] = 1
A[3,2] = 1
A = SparseArrays.sparse(A);
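# Each column of A corresponds to one independent noise process: column 1 is the
# infection noise (nonzero in the S and I rows) and column 2 the recovery noise
# (nonzero in the I and R rows). Passing A as `noise_rate_prototype` below makes
# this a non-diagonal noise problem with that sparsity pattern.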
# Make `g` write the sparse matrix values
function sir_delayed_noise!(du,u,h,p,t)
(S,I,R) = u
(β,c,τ) = p
N = S+I+R
infection = β*c*I/N*S
(Sd,Id,Rd) = h(p, t-τ) # Time delayed variables
Nd = Sd+Id+Rd
recovery = β*c*Id/Nd*Sd
du[1,1] = -sqrt(infection)
du[2,1] = sqrt(infection)
du[2,2] = -sqrt(recovery)
du[3,2] = sqrt(recovery)
end;
function condition(u,t,integrator) # Event when event_f(u,t) == 0
u[2]
end;
function affect!(integrator)
integrator.u[2] = 0.0
end;
cb = ContinuousCallback(condition,affect!);
δt = 0.1
tmax = 40.0
tspan = (0.0,tmax)
t = 0.0:δt:tmax;
u0 = [990.0,10.0,0.0]; # S,I,R
function sir_history(p, t)
[1000.0, 0.0, 0.0]
end;
p = [0.05,10.0,4.0]; # β,c,τ
Random.seed!(1234);
prob_sdde = SDDEProblem(sir_dde!,sir_delayed_noise!,u0,sir_history,tspan,p;noise_rate_prototype=A);
sol_sdde = solve(prob_sdde,LambaEM(),callback=cb);
|
On April 8, 2015, Nathan was placed on the 15-day disabled list due to a strained right elbow. During a rehab start with the Toledo Mud Hens on April 22, Nathan re-injured his elbow after throwing only 10 pitches. The same night, Nathan underwent MRIs, which revealed tears in the ulnar collateral ligament of his elbow and in his pronator teres muscle, and he would undergo Tommy John surgery, ending his 2015 season. Sources projected that this surgery could end Nathan's career, but he was not planning to retire yet.
|
package java.io;
public class ByteArrayOutputStream extends OutputStream {
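// Note: this behaves as a minimal stand-in rather than the full ByteArrayOutputStream:
// the backing array holds a single byte, write(int) overwrites that byte, and
// toByteArray() returns the backing array itself rather than a copy of the written data.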
private byte[] bytes;
public ByteArrayOutputStream() {
bytes = new byte[1];
}
public synchronized byte[] toByteArray() {
return bytes;
}
public synchronized void write(int i) {
bytes[0] = (byte)i;
}
}
|
Require Import Wellfounded.
Require Import reduction.
Require Import proofsystem.
Set Implicit Arguments.
(* A generic soundness proof.
The proof is parameterized over a semantic interpretation of reachability rules,
which can be instantiated to show one-path or all-path soundness.
The semantics of a rule must be further parameterized by some kind of "index"
with a well-founded approximation order, used to justify circularity.
This module reduces showing soundness of the proof system to showing a few
lemmas about the selected semantics of reachability rules.
The soundness result proved in this module shows that the conclusion of
the proof holds with any index. To finish a specific soundness proof, this
should be shown equivalent to some un-indexed notion of soundness.
*)
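(* Intended usage, as a sketch (MySem and MySound are hypothetical names):
instantiate the ReachabilitySemantics interface with a concrete semantics and
apply the Soundness functor defined at the end of this file, e.g.
Module MySem <: ReachabilitySemantics. (* ... *) End MySem.
Module MySound := Soundness(MySem).
MySound.soundness then gives the indexed soundness statement, which a concrete
development still has to relate to its un-indexed notion of soundness. *)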
Lemma clos_true_step : forall {A: Set} (R : A -> A -> Prop) x z, clos R true x z ->
exists y, R x y /\ clos R false y z.
intros.
rewrite clos_iff_left in H;inversion H;subst;rewrite <- ?clos_iff_left in * |-;
eauto using clos_refl, clos_unstrict.
Qed.
Module Type ReachabilitySemantics.
Parameter cfg : Set.
Parameter S : cfg -> cfg -> Prop.
Parameter index : Set.
(* This is one step of soundness relation *)
Parameter ix_rel : index -> index -> Prop.
Parameter holds : bool -> forall env,
formula cfg env -> formula cfg env->
index -> Prop.
(*
Parameter ix_wf : well_founded (fun x y => ix_rel y x).
*)
(* Will this lemma be any harder to supply than one that
distinguishes a one-step and all-step version of ix_rel,
and only promises immediate successors in the
argument of the hypothesis? *)
Parameter ix_rel_ind : forall env (phi phi' : formula cfg env),
forall i0,
(forall i, clos ix_rel false i0 i ->
(forall i', ix_rel i i' -> holds true phi phi' i') ->
holds true phi phi' i)
-> holds true phi phi' i0.
Parameter holds_strict_later : forall env phi1 phi2 i,
@holds true env phi1 phi2 i ->
forall i', ix_rel i i' ->
holds true phi1 phi2 i'.
Parameter holds_unstrict : forall env phi1 phi2 i,
@holds true env phi1 phi2 i -> holds false phi1 phi2 i.
Parameter holds_step : forall env (phi phi' : formula cfg env),
(forall (e : env) (c : cfg),
phi e c ->
(exists c2 : cfg, S c c2) /\ (forall c2 : cfg, S c c2 -> phi' e c2)) ->
forall i, holds true phi phi' i.
Parameter holds_refl : forall env phi i, @holds false env phi phi i.
Parameter holds_trans_strict : forall env phi phi' phi'' i,
@holds true env phi phi' i ->
(forall i' : index, ix_rel i i' -> holds false phi' phi'' i') ->
holds true phi phi'' i.
Parameter holds_trans : forall env phi phi' phi'' i,
@holds false env phi phi' i ->
holds false phi' phi'' i ->
holds false phi phi'' i.
Parameter holds_case :
forall strict env (phi phi1 phi' : formula cfg env) i,
holds strict phi phi' i -> holds strict phi1 phi' i ->
holds strict (fun (e : env) (g : cfg) => phi e g \/ phi1 e g) phi' i.
Parameter holds_mut_conseq :
forall strict env1 (phi1 phi2 : formula cfg env1)
env2 (phi1' phi2' : formula cfg env2),
(forall gamma rho,
phi1 rho gamma ->
exists rho', phi1' rho' gamma /\
forall gamma', phi2' rho' gamma' -> phi2 rho gamma') ->
forall i, holds strict phi1' phi2' i ->
holds strict phi1 phi2 i.
End ReachabilitySemantics.
Module Type StateBasedReachability.
Parameter cfg : Set.
Parameter S : cfg -> cfg -> Prop.
Parameter index : Set.
(* This is one step of soundness relation *)
Parameter ix_rel : index -> index -> Prop.
Parameter state_reaches : bool -> index -> cfg -> (cfg -> Prop) -> Prop.
Definition holds strict env (phi phi' : formula cfg env) i :=
forall rho gamma,
phi rho gamma ->
state_reaches strict i gamma (phi' rho).
Parameter ix_rel_ind : forall env (phi phi' : formula cfg env),
forall i0,
(forall i, clos ix_rel false i0 i ->
(forall i', ix_rel i i' -> holds true phi phi' i') ->
holds true phi phi' i)
-> holds true phi phi' i0.
Parameter reach_later : forall i i', ix_rel i i' ->
forall gamma P,
state_reaches true i gamma P ->
state_reaches true i' gamma P.
Parameter reach_unstrict : forall i gamma P,
state_reaches true i gamma P -> state_reaches false i gamma P.
(* These are really from axiom cases, maybe make it generic *)
Parameter reach_refl : forall i gamma (P : cfg -> Prop),
P gamma -> state_reaches false i gamma P.
Parameter reach_step : forall i gamma (P : cfg -> Prop),
(exists gamma', S gamma gamma') ->
(forall gamma', S gamma gamma' -> P gamma') ->
state_reaches true i gamma P.
Parameter reach_impl : forall (P Q : cfg -> Prop),
(forall c, P c -> Q c) ->
forall strict i gamma,
state_reaches strict i gamma P ->
state_reaches strict i gamma Q.
Parameter reach_join : forall i gamma P,
state_reaches false i gamma
(fun gamma' => state_reaches false i gamma' P) ->
state_reaches false i gamma P.
Parameter reach_join_strict : forall i gamma P,
state_reaches true i gamma
(fun gamma' => forall i', ix_rel i i' -> state_reaches false i' gamma' P) ->
state_reaches true i gamma P.
End StateBasedReachability.
Module StateBasedSemantics (Reach : StateBasedReachability)
<: ReachabilitySemantics.
Import Reach.
Definition cfg := cfg.
Definition S := S.
Definition index := index.
Definition ix_rel := ix_rel.
Definition holds := holds.
Definition ix_rel_ind := ix_rel_ind.
Lemma holds_strict_later : forall env phi1 phi2 i,
@holds true env phi1 phi2 i ->
forall i', ix_rel i i' ->
holds true phi1 phi2 i'.
unfold holds, Reach.holds; eauto using reach_later.
Qed.
Lemma holds_unstrict : forall env phi1 phi2 i,
@holds true env phi1 phi2 i ->
@holds false env phi1 phi2 i.
unfold holds, Reach.holds; eauto using reach_unstrict.
Qed.
Lemma holds_step : forall env (phi phi' : formula cfg env),
(forall (e : env) (c : cfg),
phi e c ->
(exists c2 : cfg, S c c2) /\ (forall c2 : cfg, S c c2 -> phi' e c2)) ->
forall i, holds true phi phi' i.
unfold holds, Reach.holds; intros; apply reach_step;firstorder.
Qed.
Lemma holds_refl : forall env phi i, @holds false env phi phi i.
unfold holds, Reach.holds; intros; apply reach_refl;assumption.
Qed.
Lemma holds_case :
forall strict env (phi phi1 phi' : formula cfg env) i,
holds strict phi phi' i -> holds strict phi1 phi' i ->
holds strict (fun (e : env) (g : cfg) => phi e g \/ phi1 e g) phi' i.
Proof.
firstorder.
Qed.
Lemma holds_mut_conseq :
forall strict env1 (phi1 phi2 : formula cfg env1)
env2 (phi1' phi2' : formula cfg env2),
(forall gamma rho,
phi1 rho gamma ->
exists rho', phi1' rho' gamma /\
forall gamma', phi2' rho' gamma' -> phi2 rho gamma') ->
forall i, holds strict phi1' phi2' i ->
holds strict phi1 phi2 i.
Proof.
unfold holds, Reach.holds;intros.
specialize (H _ _ H1).
firstorder using reach_impl.
Qed.
Lemma holds_trans : forall env phi phi' phi'' i,
@holds false env phi phi' i ->
holds false phi' phi'' i ->
holds false phi phi'' i.
Proof.
unfold holds, Reach.holds;intros.
eauto using reach_join, reach_impl.
Qed.
Lemma holds_trans_strict : forall env phi phi' phi'' i,
@holds true env phi phi' i ->
(forall i' : index, ix_rel i i' -> holds false phi' phi'' i') ->
holds true phi phi'' i.
Proof.
unfold holds, Reach.holds;intros.
eauto using reach_join_strict, reach_impl.
Qed.
End StateBasedSemantics.
Module Soundness(Sem : ReachabilitySemantics).
Import Sem.
Definition system_holds (S : system cfg) (i : index) : Prop :=
forall env phi1 phi2, S env phi1 phi2 -> holds true phi1 phi2 i.
Lemma system_next (S : system cfg) (i : index) :
system_holds S i -> forall i', ix_rel i i' -> system_holds S i'.
Proof. unfold system_holds; eauto using holds_strict_later. Qed.
Lemma system_later (S : system cfg) (i : index) :
system_holds S i -> forall i', clos ix_rel false i i' -> system_holds S i'.
Proof. intros Hsys i' Hpath; revert Hsys; induction Hpath; eauto using system_next. Qed.
Lemma holds_weak : forall (C : option (system cfg)) env phi1 phi2 i,
@holds true env phi1 phi2 i ->
holds
(match C with
| None => false
| Some _ => true
end) phi1 phi2 i.
Proof.
destruct C;auto using holds_unstrict.
Qed.
Lemma holds_conseq : forall strict env (phi1 phi2 phi1' phi2' : formula cfg env),
forall env (phi1 phi2 phi1' phi2' : formula cfg env),
(forall rho gamma, phi1 rho gamma -> phi1' rho gamma) ->
(forall rho gamma, phi2' rho gamma -> phi2 rho gamma) ->
forall i, holds strict phi1' phi2' i ->
holds strict phi1 phi2 i.
intros.
revert H1.
apply holds_mut_conseq.
firstorder.
Qed.
Ltac spec_ih :=
let HA := fresh "HA" in
let HC := fresh "HC" in
intros ? HA HC;
match goal with
| [IHcirc : appcontext C [cons_opt_system] |-_] => idtac
| _ => repeat match goal with
[IH : forall i, system_holds _ i -> _ |- _] =>
specialize (IH _ HA HC)
end
end.
Lemma soundness : forall C A env phi1 phi2,
IS cfg S C A env phi1 phi2 ->
forall (i : index),
system_holds A i ->
match C with
| None => True
| Some S' => forall i', ix_rel i i' -> system_holds S' i'
end ->
holds (
match C with
| None => false
| Some _ => true
end) phi1 phi2 i.
Proof.
induction 1;spec_ih.
+ (* step *)
auto using holds_weak, holds_step.
+ (* axiom *)
auto using holds_weak.
+ (* refl *)
destruct C;[destruct H|]; auto using holds_refl.
+ (* trans *)
destruct C.
* assert (forall i', ix_rel i i' -> holds false phi' phi'' i') as IH2.
clear -IHIS2 HA HC.
intros. apply IHIS2;[clear IHIS2|exact I].
assert (system_holds A i') by (eauto using system_next).
firstorder.
clear -IHIS1 IH2.
eauto using holds_trans_strict.
* assert (holds false phi' phi'' i) as IH2 by (clear -IHIS2 HA HC; auto).
clear -IHIS1 IH2.
eauto using holds_trans.
+ (* consequence *)
eauto using holds_conseq.
+ (* case *)
auto using holds_case.
+ (* abstr *)
revert IHIS;clear;apply holds_mut_conseq;firstorder.
+ (* abstr' *)
revert IHIS; clear -H; apply holds_mut_conseq.
firstorder;eexists;split;[eassumption|];instantiate;firstorder.
+ (* circularity *)
apply holds_weak.
apply ix_rel_ind.
intros i0 Hii0 IHlater.
apply IHIS; clear H IHIS.
eauto using system_later.
simpl.
intros i' Hi'.
unfold system_holds.
intros rho phi1 phi2 H.
destruct H.
* (* rules from C *)
destruct C;[|solve[destruct H]].
simpl in H.
assert (forall i', clos ix_rel true i i' -> system_holds s i').
clear -HC. intros.
apply clos_true_step in H.
destruct H as [i1 [Hstep Hrest]].
eauto using system_later.
clear HC; rename H0 into HC.
assert (clos ix_rel true i i').
clear -Hi' Hii0.
eauto using clos, clos_cons_rt.
specialize (HC _ H0).
auto.
* (* The added rule *)
destruct H as [-> [-> ->]].
unfold eq_rect_r in * |- *; simpl eq_rect in * |- *.
auto.
+ (* subst *)
eauto using holds_mut_conseq.
+ (* Logical framing *)
revert IHIS;
eapply holds_mut_conseq;
intuition;
eauto.
Qed.
End Soundness.
|
{-# OPTIONS --without-K --rewriting #-}
open import HoTT
import homotopy.ConstantToSetExtendsToProp as ConstExt
{-
q[_]ᴳ
G/Q ↞------ G
↑ ↑
φ₂ ╎ ╎ inject
↑ ↑
H ↞------- P
φ₁
Then, H ≃ᴳ P/Q.
-}
module groups.PropQuotUniqueFactorization
{i j l₁ l₂} {G : Group i} {H : Group j}
(P : SubgroupProp G l₁)
(Q : NormalSubgroupProp G l₂)
(φ₁ : Subgroup P →ᴳ H) (φ₁-is-surj : is-surjᴳ φ₁)
(φ₂ : H →ᴳ QuotGroup Q) (φ₂-is-inj : is-injᴳ φ₂)
(φ-comm : ∀ p → GroupHom.f (φ₂ ∘ᴳ φ₁) p == q[ fst p ])
where
private
module G = Group G
module H = Group H
module P = Subgroup P
module φ₁ = GroupHom φ₁
module φ₂ = GroupHom φ₂
P/Q-prop : NormalSubgroupProp (Subgroup P) l₂
P/Q-prop = quot-of-sub P Q
P/Q : Group (lmax i (lmax l₁ l₂))
P/Q = QuotGroup P/Q-prop
module P/Q = Group P/Q
module _ (k : Group.El H) where
H-to-P/Q-f' : hfiber φ₁.f k → P/Q.El
H-to-P/Q-f' (p , _) = q[ p ]
abstract
H-to-P/Q-f'-const : (hf₁ hf₂ : hfiber φ₁.f k)
→ H-to-P/Q-f' hf₁ == H-to-P/Q-f' hf₂
H-to-P/Q-f'-const (h₁ , r₁) (h₂ , r₂) =
quot-relᴳ {P = P/Q-prop} $ <– (quot-relᴳ-equiv {P = Q}) $
! (φ-comm h₁) ∙ ap φ₂.f (r₁ ∙ ! r₂) ∙ φ-comm h₂
module HToP/Q = ConstExt P/Q.El-is-set
H-to-P/Q-f' H-to-P/Q-f'-const
H-to-P/Q-f : Trunc -1 (hfiber φ₁.f k) → P/Q.El
H-to-P/Q-f = HToP/Q.ext
H-to-P/Q-f-is-const : ∀ hf₁ hf₂ → H-to-P/Q-f hf₁ == H-to-P/Q-f hf₂
H-to-P/Q-f-is-const = HToP/Q.ext-is-const
abstract
H-to-P/Q-f-comp : (k₁ k₂ : H.El)
→ (hf₁₂ : Trunc -1 (hfiber φ₁.f (H.comp k₁ k₂)))
→ (hf₁ : Trunc -1 (hfiber φ₁.f k₁))
→ (hf₂ : Trunc -1 (hfiber φ₁.f k₂))
→ H-to-P/Q-f (H.comp k₁ k₂) hf₁₂ == P/Q.comp (H-to-P/Q-f k₁ hf₁) (H-to-P/Q-f k₂ hf₂)
H-to-P/Q-f-comp k₁ k₂ hf₁₂ = Trunc-elim
(λ hf₁ → Π-is-prop λ hf₂ →
P/Q.El-is-set _ (P/Q.comp (H-to-P/Q-f k₁ hf₁) (H-to-P/Q-f k₂ hf₂)))
(λ{(p₁ , r₁) → Trunc-elim
(λ hf₂ → P/Q.El-is-set _ (P/Q.comp q[ p₁ ] (H-to-P/Q-f k₂ hf₂)))
(λ{(p₂ , r₂) → H-to-P/Q-f-is-const (H.comp k₁ k₂)
hf₁₂ [ P.comp p₁ p₂ , φ₁.pres-comp p₁ p₂ ∙ ap2 H.comp r₁ r₂ ]})})
H-to-P/Q : H →ᴳ P/Q
H-to-P/Q = record {
f = λ k → H-to-P/Q-f k (φ₁-is-surj k);
pres-comp = λ k₁ k₂ → H-to-P/Q-f-comp k₁ k₂
(φ₁-is-surj (H.comp k₁ k₂)) (φ₁-is-surj k₁) (φ₁-is-surj k₂)}
H-iso-P/Q : H ≃ᴳ P/Q
H-iso-P/Q = H-to-P/Q , is-eq to from to-from from-to where
to : H.El → P/Q.El
to = λ k → H-to-P/Q-f k (φ₁-is-surj k)
from : P/Q.El → H.El
from = SetQuot-rec H.El-is-set
(λ p → φ₁.f p)
(λ {p₁} {p₂} q'p₁p₂⁻¹ → φ₂-is-inj (φ₁.f p₁) (φ₁.f p₂) $
φ-comm p₁ ∙ quot-relᴳ {P = Q} q'p₁p₂⁻¹ ∙ ! (φ-comm p₂))
abstract
to-from : ∀ p/q → to (from p/q) == p/q
to-from = SetQuot-elim (λ p/q → raise-level -1 $ P/Q.El-is-set _ p/q)
(λ p → H-to-P/Q-f-is-const (φ₁.f p) (φ₁-is-surj (φ₁.f p)) [ p , idp ])
(λ _ → prop-has-all-paths-↓ $ P/Q.El-is-set _ _)
from-to' : ∀ k (hf : Trunc -1 (hfiber φ₁.f k)) → from (H-to-P/Q-f k hf) == k
from-to' k = Trunc-elim (λ hf → H.El-is-set (from (H-to-P/Q-f k hf)) k) (λ{(p , r) → r})
from-to : ∀ k → from (to k) == k
from-to k = from-to' k (φ₁-is-surj k)
|
# Some manipulations on the two degree of freedom model
```python
from sympy import *
init_printing()
def symb(x, y = ''):
return symbols('{0}_{1}'.format(x,y), type = float)
```
Displacement vector:
```python
x = Matrix([symb('u','g'), symb('u','r')])
display(x)
```
$\displaystyle \left[\begin{matrix}u_{g}\\u_{r}\end{matrix}\right]$
Inertia matrix:
```python
J_r, J_g, n = symbols('J_r J_g n', positive=True)
M = diag(J_r, J_g*n**2)
display(M)
```
$\displaystyle \left[\begin{matrix}J_{r} & 0\\0 & J_{g} n^{2}\end{matrix}\right]$
Stiffness matrix
```python
k = symbols('k', positive=True)
K = eye(2)
K[0, 1] = -1
K[1, 0] = -1
K = k*K
display(K)
```
$\displaystyle \left[\begin{matrix}k & - k\\- k & k\end{matrix}\right]$
## Characteristic polynomial
```python
lamda = symb('lambda')
omega = symb('omega')
A = Inverse(M)*K
cp = A.charpoly(lamda)
cp = factor(cp)
display(cp)
```
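Carrying the computation through by hand (a sanity check, using the $M$ and $K$ defined above), the factored characteristic polynomial of $M^{-1} K$ works out to
$\displaystyle \lambda \left(\lambda - \frac{k}{J_{r}} - \frac{k}{J_{g} n^{2}}\right)$
One eigenvalue is zero, corresponding to the rigid-body (free rotation) mode of the two-inertia system; the other equals the square of the torsional natural frequency, $\omega^{2} = k \left(\frac{1}{J_{r}} + \frac{1}{J_{g} n^{2}}\right)$.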
|
%! Author = Frederik Bußmann
%! Date = 21.10.21
\section{Introduction} \label{sec:introduction}
Hello and welcome!
\\
This is an introduction~\cite{citation01}.
\clearpage
|
module B {
port P
}
|
import algebra.ring
example {α : Type*} [ring α] (a b c : α) :
a * 0 + 0 * b + c * 0 + 0 * a = 0 :=
begin
rw [mul_zero, mul_zero, zero_mul, zero_mul],
repeat { rw add_zero },
end
example {α : Type*} [group α] {a b : α} (h : a * b = 1) :
a⁻¹ = b :=
by rw [←(mul_one a⁻¹), ← h, inv_mul_cancel_left]
|
/*
*
* Copyright (c) 2002, 2003 Kresimir Fresl, Toon Knapen and Karl Meerbergen
*
* Permission to copy, modify, use and distribute this software
* for any non-commercial or commercial purpose is granted provided
* that this license appear on all copies of the software source code.
*
* Authors assume no responsibility whatsoever for its use and makes
* no guarantees about its quality, correctness or reliability.
*
* KF acknowledges the support of the Faculty of Civil Engineering,
* University of Zagreb, Croatia.
*
*/
#ifndef BOOST_NUMERIC_BINDINGS_TRAITS_UBLAS_HERMITIAN_H
#define BOOST_NUMERIC_BINDINGS_TRAITS_UBLAS_HERMITIAN_H
#include <boost/numeric/bindings/traits/traits.hpp>
#ifndef BOOST_NUMERIC_BINDINGS_POOR_MANS_TRAITS
#ifndef BOOST_UBLAS_HAVE_BINDINGS
# include <boost/numeric/ublas/hermitian.hpp>
#endif
#include <boost/numeric/bindings/traits/ublas_matrix.hpp>
#include <boost/numeric/bindings/traits/detail/ublas_uplo.hpp>
namespace boost { namespace numeric { namespace bindings { namespace traits {
// ublas::hermitian_matrix<>
template <typename T, typename F1, typename F2, typename A, typename M>
struct matrix_detail_traits<boost::numeric::ublas::hermitian_matrix<T, F1, F2, A>, M>
{
#ifndef BOOST_NUMERIC_BINDINGS_NO_SANITY_CHECK
BOOST_STATIC_ASSERT( (boost::is_same<boost::numeric::ublas::hermitian_matrix<T, F1, F2, A>, typename boost::remove_const<M>::type>::value) );
#endif
#ifdef BOOST_BINDINGS_FORTRAN
BOOST_STATIC_ASSERT((boost::is_same<
typename F2::orientation_category,
boost::numeric::ublas::column_major_tag
>::value));
#endif
typedef boost::numeric::ublas::hermitian_matrix<T, F1, F2, A> identifier_type;
typedef M matrix_type;
typedef hermitian_packed_t matrix_structure;
typedef typename detail::ublas_ordering<
typename F2::orientation_category
>::type ordering_type;
typedef typename detail::ublas_uplo< F1 >::type uplo_type;
typedef T value_type ;
typedef typename detail::generate_const<M,T>::type* pointer ;
static pointer storage (matrix_type& hm) {
typedef typename detail::generate_const<M,A>::type array_type ;
return vector_traits<array_type>::storage (hm.data());
}
static int size1 (matrix_type& hm) { return hm.size1(); }
static int size2 (matrix_type& hm) { return hm.size2(); }
static int storage_size (matrix_type& hm) {
return (size1 (hm) + 1) * size2 (hm) / 2;
}
};
namespace detail {
template <typename M>
int matrix_bandwidth( M const& m, upper_t ) {
return matrix_traits<M const>::upper_bandwidth( m ) ;
}
template <typename M>
int matrix_bandwidth( M const& m, lower_t ) {
// When the lower triangular band matrix is stored the
// upper bandwidth must be zero
assert( 0 == matrix_traits<M const>::upper_bandwidth( m ) ) ;
return matrix_traits<M const>::lower_bandwidth( m ) ;
}
} // namespace detail
// ublas::hermitian_adaptor<>
template <typename M, typename F1, typename MA>
struct matrix_detail_traits<boost::numeric::ublas::hermitian_adaptor<M, F1>, MA>
{
#ifndef BOOST_NUMERIC_BINDINGS_NO_SANITY_CHECK
BOOST_STATIC_ASSERT( (boost::is_same<boost::numeric::ublas::hermitian_adaptor<M, F1>, typename boost::remove_const<MA>::type>::value) );
#endif
typedef boost::numeric::ublas::hermitian_adaptor<M, F1> identifier_type;
typedef MA matrix_type;
typedef hermitian_t matrix_structure;
typedef typename matrix_traits<M>::ordering_type ordering_type;
typedef typename detail::ublas_uplo< F1 >::type uplo_type;
typedef typename M::value_type value_type;
typedef typename detail::generate_const<MA, value_type>::type* pointer;
private:
typedef typename detail::generate_const<MA, typename MA::matrix_closure_type>::type m_type;
public:
static pointer storage (matrix_type& hm) {
return matrix_traits<m_type>::storage (hm.data());
}
static int size1 (matrix_type& hm) { return hm.size1(); }
static int size2 (matrix_type& hm) { return hm.size2(); }
static int storage_size (matrix_type& hm) {
return size1 (hm) * size2 (hm);
}
static int leading_dimension (matrix_type& hm) {
return matrix_traits<m_type>::leading_dimension (hm.data());
}
// For banded M
static int upper_bandwidth(matrix_type& hm) {
return detail::matrix_bandwidth( hm.data(), uplo_type() );
}
static int lower_bandwidth(matrix_type& hm) {
return detail::matrix_bandwidth( hm.data(), uplo_type() );
}
};
}}}}
#endif // BOOST_NUMERIC_BINDINGS_POOR_MANS_TRAITS
#endif // BOOST_NUMERIC_BINDINGS_TRAITS_UBLAS_HERMITIAN_H
|
import BrownCs22.Library.Defs
import BrownCs22.Library.Tactics
import Mathlib.Data.Nat.ModEq
import Mathlib.Data.Int.GCD
open BrownCs22
/-
Let's check computationally that our RSA algorithm works.
-/
-- given a public key and a modulus `n`, we can encrypt a message.
def rsa_encrypt (public_key : ℕ) (n : ℕ) (message : ℕ) : ℕ :=
(message ^ public_key) % n
-- given a private key and a modulus `n`, we can decrypt a message.
def rsa_decrypt (private_key : ℕ) (n : ℕ) (encrypted_message : ℕ) : ℕ :=
(encrypted_message ^ private_key) % n
-- let's choose `n` to be the product of two primes.
def p := 113
def q := 37
def n := p * q
#eval Nat.Prime p
#eval Nat.Prime q
-- We choose our public key that is relatively prime to `(p - 1)*(q - 1)`.
def public_key := 13
#eval Nat.gcd public_key ((p - 1)*(q - 1))
-- Now we need an inverse to the public key mod `(p - 1)*(q - 1)`.
-- We get this from the extended Euclidean algorithm.
#eval Nat.xgcd public_key ((p - 1)*(q - 1))
def private_key := 1861
-- double check it's an inverse
#eval private_key * public_key % ((p - 1)*(q - 1))
-- Okay! Let's choose a message.
def message := 1034
def encrypted_message := rsa_encrypt public_key n message
#eval encrypted_message
def decrypted_message := rsa_decrypt private_key n encrypted_message
-- encrypting and decrypting the message produces the same output!
#eval decrypted_message
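-- As a direct boolean check of the round trip (using `==` on ℕ), this should print `true`:
#eval decrypted_message == message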
-- We can state, and (mostly) prove, the partial correctness theorem from class!
theorem rsa_correct
(p q : ℕ) (public_key private_key : ℕ) (message : ℕ)
(hp : Prime p) (hq : Prime q)
(h_pub_pri : public_key * private_key ≡ 1 [MOD (p - 1)*(q - 1)])
(h_msg : message < p*q)
(h_rel_prime : Nat.gcd message (p*q) = 1) :
rsa_decrypt private_key (p*q)
(rsa_encrypt public_key (p*q) message)
= message :=
by
dsimp [rsa_decrypt, rsa_encrypt]
have h_lin_combo : ∃ k, public_key * private_key = 1 + (p - 1)*(q - 1)*k :=
sorry
eliminate h_lin_combo with k hk
calc
(message ^ public_key % (p * q)) ^ private_key % (p * q)
= (message ^ public_key) ^ private_key % (p * q) := by rw [← Nat.pow_mod]
_ = message ^ (public_key * private_key) % (p * q) := by rw [pow_mul]
_ = message ^ (1 + (p - 1)*(q - 1)*k) % (p * q) := by rw [hk]
_ = message ^ (1 + totient (p*q)*k) % (p * q) := by sorry
_ = (message * (message ^ totient (p * q))^k) % (p * q) := by rw [pow_add, pow_one, pow_mul]
_ = (message % (p * q)) * ((message ^ totient (p * q))^k % (p * q)) % (p * q)
:= by rw [Nat.mul_mod]
_ = (message % (p * q)) * 1 % (p * q) := by sorry
_ = message % (p * q) := by rw [mul_one, Nat.mod_mod]
_ = message := by rw [Nat.mod_eq_of_lt h_msg]
|
theory condes_dilemmas imports Main begin
text {*
Proving the constructive and destructive dilemmas in propositional calculus
*}
lemma "(P\<longrightarrow>Q) \<Longrightarrow>(R\<longrightarrow>S) \<Longrightarrow>(P\<or>R) \<Longrightarrow>(Q\<or>S)"
apply(erule disjE)
apply(erule impE)
apply assumption
apply(rule disjI1)
apply assumption
apply(rule disjI2)
apply (erule mp)
apply assumption
done
lemma "(P\<longrightarrow>Q) \<Longrightarrow>(R\<longrightarrow>S) \<Longrightarrow>(\<not>Q\<or>\<not>S)\<Longrightarrow>(\<not>P\<or>\<not>R) "
apply(erule disjE)
apply(rule disjI1)
apply(rule notI)
apply(erule notE)
apply (erule mp)
apply assumption
apply(rule disjI2)
apply(rule notI)
apply(erule notE)
apply(erule mp)
apply assumption
done
|
! RUN: %S/test_errors.sh %s %flang %t
!ERROR: IF statement is not allowed in IF statement
IF (A > 0.0) IF (B < 0.0) A = LOG (A)
END
|
[STATEMENT]
lemma fact_aux_lemma [simp]:
"rec_eval rec_fact_aux [x, y] = fact x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. rec_eval rec_fact_aux [x, y] = fact x
[PROOF STEP]
by (induct x) (simp_all add: rec_fact_aux_def) |
lemma in_bigomega_zero [simp]: "f \<in> \<Omega>[F](\<lambda>x. 0)" |
Lethbridge Regional Police have concluded an investigation into a robbery at the west side Safeway Monday night and at this time no charges have been laid.
Investigation determined the 78-year-old male subject was suffering from medical issues at the time of the incident, and he has since been admitted to hospital. Police have referred the matter for Crown review to determine whether charges should be pursued.
No further information will be released pending the outcome of the Crown’s review of the case.
Lethbridge Regional Police are currently investigating a robbery that occurred at the Safeway located at 550 University Drive West. At approximately 6 p.m. this evening a 78-year-old male entered the grocery store and approached a cashier, stating he was going to rob the store and that he had a gun. The male left the business without receiving any money and was apprehended a short distance away by police. The investigation is ongoing and further details will be released tomorrow. |
lemma\<^marker>\<open>tag important\<close> emeasure_lborel_cbox[simp]: assumes [simp]: "\<And>b. b \<in> Basis \<Longrightarrow> l \<bullet> b \<le> u \<bullet> b" shows "emeasure lborel (cbox l u) = (\<Prod>b\<in>Basis. (u - l) \<bullet> b)" |
# -*- coding: utf-8 -*-
# author: huihui
# date: 2020/8/3 1:18 PM
import torch
import torch.utils.data
import lr_scheduler as L
import os
import argparse
import pickle
import time
from collections import OrderedDict
import opts
import models
import utils
import codecs
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
parser = argparse.ArgumentParser(description='train.py')
opts.model_opts(parser)
opt = parser.parse_args()
config = utils.read_config(opt.config)
torch.manual_seed(opt.seed)
opts.convert_to_config(opt, config)
# cuda
use_cuda = torch.cuda.is_available()
config.use_cuda = use_cuda
def load_data():
print('loading data...\n')
data = pickle.load(open(config.data + 'data.pkl', 'rb'))
data['train']['length'] = int(data['train']['length'] * opt.scale)
trainset = utils.BiDataset(data['train'], char=config.char)
validset = utils.BiDataset(data['valid'], char=config.char)
src_vocab = data['dict']['src']
tgt_vocab = data['dict']['tgt']
config.src_vocab_size = src_vocab.size()
config.tgt_vocab_size = tgt_vocab.size()
trainloader = torch.utils.data.DataLoader(dataset=trainset,
batch_size=config.batch_size,
shuffle=True,
num_workers=0,
collate_fn=utils.padding)
if hasattr(config, 'valid_batch_size'):
valid_batch_size = config.valid_batch_size
else:
valid_batch_size = config.batch_size
validloader = torch.utils.data.DataLoader(dataset=validset,
batch_size=valid_batch_size,
shuffle=False,
num_workers=0,
collate_fn=utils.padding)
return {'trainset': trainset, 'validset': validset,
'trainloader': trainloader, 'validloader': validloader,
'src_vocab': src_vocab, 'tgt_vocab': tgt_vocab}
def build_model(checkpoints, print_log):
for k, v in config.items():
print_log("%s:\t%s\n" % (str(k), str(v)))
# model
print('building model...\n')
model = getattr(models, opt.model)(config)
if checkpoints is not None:
model.load_state_dict(checkpoints['model'])
if opt.pretrain:
print('loading checkpoint from %s' % opt.pretrain)
pre_ckpt = torch.load(opt.pretrain)['model']
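        # strip the leading 'encoder.' prefix (8 characters) from the parameter
        # names so the pretrained weights can be loaded directly into model.encoder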
pre_ckpt = OrderedDict({key[8:]: pre_ckpt[key] for key in pre_ckpt if key.startswith('encoder')})
print(model.encoder.state_dict().keys())
print(pre_ckpt.keys())
model.encoder.load_state_dict(pre_ckpt)
if use_cuda:
model.cuda()
# optimizer
if checkpoints is not None:
optim = checkpoints['optim']
else:
optim = models.Optim(config.optim, config.learning_rate, config.max_grad_norm,
lr_decay=config.learning_rate_decay, start_decay_at=config.start_decay_at)
optim.set_parameters(model.parameters())
# print log
param_count = 0
for param in model.parameters():
param_count += param.view(-1).size()[0]
for k, v in config.items():
print_log("%s:\t%s\n" % (str(k), str(v)))
print_log("\n")
print_log(repr(model) + "\n\n")
print_log('total number of parameters: %d\n\n' % param_count)
return model, optim, print_log
def train_model(model, data, optim, epoch, params):
model.train()
trainloader = data['trainloader']
for src, tgt, src_len, tgt_len, original_src, original_tgt in trainloader:
model.zero_grad()
if config.use_cuda:
src = src.cuda()
tgt = tgt.cuda()
src_len = src_len.cuda()
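        # sort the batch by source length in descending order (the usual
        # requirement for packed-sequence RNN encoders) and reorder src/tgt to match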
lengths, indices = torch.sort(src_len, dim=0, descending=True)
src = torch.index_select(src, dim=0, index=indices)
tgt = torch.index_select(tgt, dim=0, index=indices)
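        # teacher forcing: the decoder input drops the last target token and the
        # prediction targets drop the first one (presumably a BOS marker)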
dec = tgt[:, :-1]
targets = tgt[:, 1:]
try:
if config.schesamp:
if epoch > 8:
e = epoch - 8
loss, outputs = model(src, lengths, dec, targets, teacher_ratio=0.9 ** e)
else:
loss, outputs = model(src, lengths, dec, targets)
else:
loss, outputs = model(src, lengths, dec, targets)
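            # greedy predictions: argmax over the vocabulary dimension; accuracy
            # below is counted over non-PAD target tokens only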
pred = outputs.max(2)[1]
targets = targets.t()
num_correct = pred.eq(targets).masked_select(targets.ne(utils.PAD)).sum().item()
num_total = targets.ne(utils.PAD).sum().item()
if config.max_split == 0:
loss = torch.sum(loss) / num_total
loss.backward()
optim.step()
params['report_loss'] += loss.item()
params['report_correct'] += num_correct
params['report_total'] += num_total
except RuntimeError as e:
if 'out of memory' in str(e):
print('| WARNING: ran out of memory')
if hasattr(torch.cuda, 'empty_cache'):
torch.cuda.empty_cache()
else:
raise e
utils.progress_bar(params['updates'], config.eval_interval)
params['updates'] += 1
if params['updates'] % config.eval_interval == 0:
params['log']("epoch: %3d, loss: %6.3f, time: %6.3f, updates: %8d, accuracy: %2.2f\n"
% (epoch, params['report_loss'], time.time() - params['report_time'],
params['updates'], params['report_correct'] * 100.0 / params['report_total']))
print('evaluating after %d updates...\r' % params['updates'])
score = eval_model(model, data, params)
for metric in config.metrics:
params[metric].append(score[metric])
if score[metric] >= max(params[metric]):
with codecs.open(params['log_path'] + 'best_' + metric + '_prediction.txt', 'w', 'utf-8') as f:
f.write(codecs.open(params['log_path'] + 'candidate.txt', 'r', 'utf-8').read())
save_model(params['log_path'] + 'best_' + metric + '_checkpoint.pt', model, optim,
params['updates'])
model.train()
params['report_loss'], params['report_time'] = 0, time.time()
params['report_correct'], params['report_total'] = 0, 0
if params['updates'] % config.save_interval == 0:
save_model(params['log_path'] + 'checkpoint.pt', model, optim, params['updates'])
optim.updateLearningRate(score=0, epoch=epoch)
def eval_model(model, data, params):
model.eval()
reference, candidate, source, alignments = [], [], [], []
count, total_count = 0, len(data['validset'])
validloader = data['validloader']
tgt_vocab = data['tgt_vocab']
for src, tgt, src_len, tgt_len, original_src, original_tgt in validloader:
if config.use_cuda:
src = src.cuda()
src_len = src_len.cuda()
with torch.no_grad():
if config.beam_size > 1:
samples, alignment, weight = model.beam_sample(src, src_len, beam_size=config.beam_size, eval_=True)
else:
samples, alignment = model.sample(src, src_len)
candidate += [tgt_vocab.convertToLabels(s, utils.EOS) for s in samples]
source += original_src
reference += original_tgt
if alignment is not None:
alignments += [align for align in alignment]
count += len(original_src)
utils.progress_bar(count, total_count)
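    # optional UNK replacement: when attention alignments are available, each
    # generated UNK token is replaced by the source word at its alignment
    # position (a simple copy heuristic)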
if config.unk and config.attention != 'None':
cands = []
for s, c, align in zip(source, candidate, alignments):
cand = []
for word, idx in zip(c, align):
if word == utils.UNK_WORD and idx < len(s):
try:
cand.append(s[idx])
except:
cand.append(word)
print("%d %d\n" % (len(s), idx))
else:
cand.append(word)
cands.append(cand)
if len(cand) == 0:
print('Error!')
candidate = cands
with codecs.open(params['log_path'] + 'candidate.txt', 'w+', 'utf-8') as f:
for i in range(len(candidate)):
f.write(" ".join(candidate[i]) + '\n')
score = {}
for metric in config.metrics:
score[metric] = getattr(utils, metric)(reference, candidate, params['log_path'], params['log'], config)
return score
def save_model(path, model, optim, updates):
model_state_dict = model.state_dict()
checkpoints = {
'model': model_state_dict,
'config': config,
'optim': optim,
'updates': updates}
torch.save(checkpoints, path)
def build_log():
# log
if not os.path.exists(config.logF):
os.mkdir(config.logF)
if opt.log == '':
log_path = config.logF + str(int(time.time() * 1000)) + '/'
else:
log_path = config.logF + opt.log + '/'
if not os.path.exists(log_path):
os.mkdir(log_path)
print_log = utils.print_log(log_path + 'log.txt')
return print_log, log_path
def showAttention(path, s, c, attentions, index):
# Set up figure with colorbar
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(attentions.numpy(), cmap='bone')
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + s, rotation=90)
ax.set_yticklabels([''] + c)
# Show label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
    # save before showing: calling savefig after show can produce an empty image
    # with interactive backends (with the Agg backend forced above, show() is a no-op)
    plt.savefig(path + str(index) + '.jpg')
    plt.show()
def main():
# checkpoint
if opt.restore:
print('loading checkpoint...\n')
checkpoints = torch.load(opt.restore)
else:
checkpoints = None
data = load_data()
print_log, log_path = build_log()
model, optim, print_log = build_model(checkpoints, print_log)
# scheduler
if config.schedule:
scheduler = L.CosineAnnealingLR(optim.optimizer, T_max=config.epoch)
params = {'updates': 0, 'report_loss': 0, 'report_total': 0,
'report_correct': 0, 'report_time': time.time(),
'log': print_log, 'log_path': log_path}
for metric in config.metrics:
params[metric] = []
if opt.restore:
params['updates'] = checkpoints['updates']
if opt.mode == "train":
for i in range(1, config.epoch + 1):
print('{} / {}'.format(i,config.epoch))
if config.schedule:
scheduler.step()
print("Decaying learning rate to %g" % scheduler.get_lr()[0])
train_model(model, data, optim, i, params)
for metric in config.metrics:
print_log("Best %s score: %.2f\n" % (metric, max(params[metric])))
else:
score = eval_model(model, data, params)
print(score)
if __name__ == '__main__':
main()
|