On April 15, 2010, The Guardian reported that a number of artists, including Pet Shop Boys, Passion Pit and rock musician Marilyn Manson, had contributed to a remix album by Lady Gaga, titled The Remix. The remixes included in the package had previously been released alongside Gaga's singles in past years. The album was originally released in Japan on March 3, 2010, containing sixteen of the remixes. The revised version, consisting of seventeen remixes, was released on May 3, 2010, the first market being Mexico. Manson features on the Chew Fu remix of "LoveGame", while Passion Pit remixed "Telephone" and Pet Shop Boys remixed "Eh, Eh (Nothing Else I Can Say)". Other artists who remixed Gaga's songs included <unk>, Frankmusik, Stuart Price, Monarchy and Robots to Mars. The album was released in the United Kingdom on May 10, 2010, and featured different artwork for that region. The US release of the album was announced by Interscope Records in July 2010, and it was released on August 3, 2010.
|
# Spectral Analysis of Deterministic Signals
*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing.*
## Zero-Padding
### Concept
Let's assume a signal $x_N[k]$ of finite length $N$, for instance a windowed signal $x_N[k] = x[k] \cdot \text{rect}_N[k]$. The discrete Fourier transformation (DFT) of $x_N[k]$ reads
\begin{equation}
X_N[\mu] = \sum_{k=0}^{N-1} x_N[k] \; w_N^{\mu k}
\end{equation}
where $w_N = \mathrm{e}^{-\mathrm{j} \frac{2 \pi}{N}}$ denotes the kernel of the DFT. For a sampled time-domain signal, the distance in frequency between two neighboring coefficients is given as $\Delta f = \frac{f_s}{N}$, where $f_s = \frac{1}{T}$ denotes the sampling frequency. Hence, if $N$ is increased, the distance between neighboring frequencies is decreased. This leads to the concept of zero-padding in spectral analysis. Here the signal $x_N[k]$ of finite length is filled up with $(M-N)$ zero values to a total length $M \geq N$
\begin{equation}
x_M[k] = \begin{cases}
x_N[k] & \mathrm{for} \; k=0,1,\dots,N-1 \\
0 & \mathrm{for} \; k=N,N+1,\dots,M-1
\end{cases}
\end{equation}
Appending zeros does not change the content of the signal itself. However, the DFT $X_M[\mu]$ of $x_M[k]$ now has a decreased distance $\Delta f = \frac{f_s}{M}$ between neighboring frequencies.
The question arises what influence zero-padding has on the spectrum and whether it can enhance spectral analysis. At first sight the frequency resolution seems to be higher; however, do we actually get more information on the signal? In order to discuss this, a short numerical example is evaluated, followed by a derivation of the mathematical relation between the spectrum $X_M[\mu]$ with zero-padding and $X_N[\mu]$ without zero-padding.
#### Example - Zero-Padding
The following example computes and plots the magnitude spectra $|X[\mu]|$ of a truncated complex exponential signal $x_N[k] = \mathrm{e}^{\,\mathrm{j}\,\Omega_0\,k} \cdot \text{rect}_N[k]$ and its zero-padded version $x_M[k]$.
```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
N = 16 # length of the signal
M = 32 # length of zero-padded signal
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# DFT of the exponential signal
xN = np.exp(1j*Om0*np.arange(N))
XN = np.fft.fft(xN)
# DFT of the zero-padded exponential signal
xM = np.concatenate((xN, np.zeros(M-N)))
XM = np.fft.fft(xM)
# plot spectra
plt.figure(figsize = (10, 6))
plt.subplot(121)
plt.stem(np.arange(N),np.abs(XN))
plt.title(r'DFT$_{%d}$ of $e^{j \Omega_0 k}$ without zero-padding' %N)
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_N[\mu]|$')
plt.axis([0, N, 0, 18])
plt.grid()
plt.subplot(122)
plt.stem(np.arange(M),np.abs(XM))
plt.title(r'DFT$_{%d}$ of $e^{j \Omega_0 k}$ with zero-padding' %M)
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_M[\mu]|$')
plt.axis([0, M, 0, 18])
plt.grid()
```
**Exercise**
* Check the two spectra carefully for relations. Are there common coefficients for the case $M = 2 N$?
* Increase the length `M` of the zero-padded signal $x_M[k]$. Can you gain additional information from the spectrum?
Solution: Since the DFT length has been doubled, every second coefficient of $X_M[\mu]$ coincides with a coefficient of $X_N[\mu]$; the coefficients in between are new. With longer zero-padding, the maximum of the main lobe of the window gets closer to its true maximum.
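A quick numerical check (reusing `XN` and `XM` from the example above, where $M = 2N$) confirms the first part of the solution:
```python
# every second coefficient of the zero-padded DFT equals the original DFT
print(np.allclose(XM[::2], XN))  # True
```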
### Interpolation of the Discrete Fourier Transformation
Let's step back to the discrete-time Fourier transformation (DTFT) of the finite-length signal $x_N[k]$ without zero-padding
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k = -\infty}^{\infty} x_N[k] \, \mathrm{e}^{\,-\mathrm{j}\,\Omega\,k} = \sum_{k=0}^{N-1} x_N[k] \,\mathrm{e}^{-\,\mathrm{j}\,\Omega\,k}
\end{equation}
The discrete Fourier transformation (DFT) is derived by sampling $X_N(\mathrm{e}^{\mathrm{j}\,\Omega})$ at $\Omega = \mu \frac{2 \pi}{N}$
\begin{equation}
X_N[\mu] = X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega}) \big\vert_{\Omega = \mu \frac{2 \pi}{N}} = \sum_{k=0}^{N-1} x_N[k] \, \mathrm{e}^{\,-\mathrm{j}\, \mu \frac{2\pi}{N}\,k}
\end{equation}
Since the DFT coefficients $X_N[\mu]$ are sampled equidistantly from the DTFT $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$, we can reconstruct the DTFT of $x_N[k]$ from the DFT coefficients by interpolation. Introducing the inverse DFT of $X_N[\mu]$
\begin{equation}
x_N[k] = \frac{1}{N} \sum_{\mu = 0}^{N-1} X_N[\mu] \; \mathrm{e}^{\,\mathrm{j}\,\frac{2 \pi}{N} \mu \,k}
\end{equation}
into the DTFT
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{k=0}^{N-1} x_N[k] \; \mathrm{e}^{-\,\mathrm{j}\, \Omega\, k} =
\sum_{\mu=0}^{N-1} X_N[\mu] \cdot \frac{1}{N} \sum_{k=0}^{N-1} \mathrm{e}^{-\mathrm{j}\, k \,(\Omega - \frac{2 \pi}{N} \mu)}
\end{equation}
reveals the relation between $X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega})$ and $X_N[\mu]$. The last sum over $k$ constitutes a [geometric series](https://en.wikipedia.org/wiki/Geometric_series) and can be rearranged to
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \frac{1}{N} \cdot \frac{1-\mathrm{e}^{-\mathrm{j}(\Omega-\frac{2\pi}{N}\mu)N}}{1-\mathrm{e}^{-\mathrm{j}(\Omega-\frac{2\pi}{N}\mu)}}
\end{equation}
By factorizing the last fraction to
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \frac{1}{N} \cdot \frac{\mathrm{e}^{-\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)N}{2}}}{\mathrm{e}^{-\mathrm{j}\frac{\Omega-\frac{2\pi}{N}\mu}{2}}} \cdot \frac{\mathrm{e}^{\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)N}{2}}-\mathrm{e}^{-\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)N}{2}}}{\mathrm{e}^{\mathrm{j}\frac{\Omega-\frac{2\pi}{N}\mu}{2}}-\mathrm{e}^{-\mathrm{j}\frac{\Omega-\frac{2\pi}{N}\mu}{2}}}
\end{equation}
and making use of [Euler's formula](https://en.wikipedia.org/wiki/Euler%27s_formula), $2\mathrm{j}\cdot\sin(x)=\mathrm{e}^{\mathrm{j} x}-\mathrm{e}^{-\mathrm{j} x}$, this can be simplified to
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\,\Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \mathrm{e}^{-\mathrm{j}\frac{(\Omega-\frac{2\pi}{N}\mu)(N-1)}{2}} \cdot \frac{1}{N} \cdot \frac{\sin(N\frac{\Omega-\frac{2\pi}{N}\mu}{2})}{\sin(\frac{\Omega-\frac{2\pi}{N}\mu}{2})}
\end{equation}
The last fraction can be written in terms of the $N$-th order periodic sinc function (aliased sinc function, [Dirichlet kernel](https://en.wikipedia.org/wiki/Dirichlet_kernel)), which is defined as
\begin{equation}
\text{psinc}_N (\Omega) = \frac{1}{N} \frac{\sin(\frac{N}{2} \Omega)}{ \sin(\frac{1}{2} \Omega)}
\end{equation}
According to this definition, the periodic sinc function is not defined at $\Omega = 2 \pi \,n$ for $n \in \mathbb{Z}$. This is resolved by applying [L'Hôpital's rule](https://en.wikipedia.org/wiki/L%27H%C3%B4pital%27s_rule) which results in $\text{psinc}_N (2 \pi \,n) = 1$ for $n \in \mathbb{Z}$.
Using the periodic sinc function, the DTFT $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ of a finite-length signal $x_N[k]$ can be derived from its DFT $X_N[\mu]$ by
\begin{equation}
X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega}) = \sum_{\mu=0}^{N-1} X_N[\mu] \cdot \mathrm{e}^{-\,\mathrm{j}\, \frac{( \Omega - \frac{2 \pi}{N} \mu ) (N-1)}{2}} \cdot \text{psinc}_N ( \Omega - \frac{2 \pi}{N} \mu )
\end{equation}
#### Example - Periodic sinc function
This example illustrates the
1. periodic sinc function, and
2. interpolation of $X_N(\mathrm{e}^{\,\mathrm{j}\, \Omega})$ from $X_N[\mu]$ for an exponential signal using above relation.
```python
N = 16    # order of periodic sinc function
M = 1024  # number of frequency points
Om = np.linspace(-np.pi, np.pi, M)

# definition of periodic sinc function
def psinc(x, N):
    x = np.asanyarray(x)
    # replace x = 0 by a tiny value to avoid division by zero;
    # the limit of psinc at multiples of 2*pi is 1
    y = np.where(x == 0, 1.0e-20, x)
    return 1/N * np.sin(N/2*y)/np.sin(1/2*y)

# plot psinc
plt.figure(figsize=(10, 8))
plt.plot(Om, psinc(Om, N))
plt.xlabel(r'$\Omega$')
plt.ylabel(r'$\mathrm{psinc}_N (\Omega)$')
plt.grid()
```
```python
N = 16 # length of the signal
M = 1024 # number of frequency points for DTFT
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# DFT of the exponential signal
xN = np.exp(1j*Om0*np.arange(N))
XN = np.fft.fft(xN)
# interpolation of DTFT from DFT coefficients
Xi = np.zeros(M, dtype=complex)
for mu in np.arange(M):
    Omd = 2*np.pi/M*mu - 2*np.pi*np.arange(N)/N
    interpolator = psinc(Omd, N) * np.exp(-1j*Omd*(N-1)/2)
    Xi[mu] = np.sum(XN * interpolator)
# plot spectra
plt.figure(figsize = (10, 8))
ax1 = plt.gca()
plt.plot(np.arange(M)*2*np.pi/M, abs(Xi), 'r', label=r'$|X_N(e^{j \Omega})|$')
plt.stem(np.arange(N)*2*np.pi/N, abs(XN), basefmt = ' ', label=r'$|X_N[\mu]|$')
plt.title(r'DFT $X_N[\mu]$ and interpolated DTFT $X_N(e^{j \Omega})$', y=1.08)
plt.ylim([-0.5, N+2]);
plt.legend()
ax1.set_xlabel(r'$\Omega$')
ax1.set_xlim([0, 2*np.pi])
ax1.grid()
ax2 = ax1.twiny()
ax2.set_xlim([0, N])
ax2.set_xlabel(r'$\mu$', color='C0')
ax2.tick_params('x', colors='C0')
```
### Relation between Discrete Fourier Transformations with and without Zero-Padding
It was already outlined above that the DFT is related to the DTFT by sampling. Hence, the DFT $X_M[\mu]$ is given by sampling the DTFT $X_M(\mathrm{e}^{\mathrm{j}\, \Omega})$ at $\Omega = \frac{2 \pi}{M} \mu$. Since the zero-padded signal $x_M[k]$ differs from $x_N[k]$ only with respect to the additional zeros, the DTFTs of both are equal
\begin{equation}
X_M(\mathrm{e}^{\mathrm{j}\, \Omega}) = X_N(\mathrm{e}^{\mathrm{j}\, \Omega})
\end{equation}
The desired relation between the DFTs $X_N[\mu]$ and $X_M[\mu]$ of the signal $x_N[k]$ and its zero-padded version $x_M[k]$ can be found by sampling the interpolated DTFT $X_N(\mathrm{e}^{\mathrm{j}\, \Omega})$ at $\Omega = \frac{2 \pi}{M} \mu$
\begin{equation}
X_M[\mu] = \sum_{\eta=0}^{N-1} X_N[\eta] \cdot \mathrm{e}^{\,-\mathrm{j}\, \frac{( \frac{2 \pi}{M} \mu - \frac{2 \pi}{N} \eta ) (N-1)}{2}} \cdot \text{psinc}_N \left( \frac{2 \pi}{M} \mu - \frac{2 \pi}{N} \eta \right)
\end{equation}
for $\mu = 0, 1, \dots, M-1$.
The above equation relates the spectrum $X_N[\mu]$ of the original signal $x_N[k]$ to the spectrum $X_M[\mu]$ of the zero-padded signal $x_M[k]$. It essentially constitutes a bandlimited interpolation of the coefficients $X_N[\mu]$.
All spectral information of a signal of finite length $N$ is already contained in its spectrum derived from a DFT of length $N$. By applying zero-padding and a longer DFT, the frequency resolution is only virtually increased: the additional coefficients are related to the original ones by bandlimited interpolation. In general, zero-padding hence does not bring additional insight in spectral analysis. It may be beneficial in special applications, for instance when estimating the frequency of an isolated harmonic signal from its spectrum. This is illustrated in one of the examples below.
Zero-padding is also used to make a circular convolution equivalent to a linear convolution. However, there is a different reasoning behind this. Details are discussed in a [later section](../nonrecursive_filters/fast_convolution.ipynb#Linear-Convolution-by-Periodic-Convolution).
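As a brief aside, the following minimal sketch illustrates this use of zero-padding: a circular convolution computed via the DFT equals the linear convolution once both sequences are zero-padded to the length of the linear result.
```python
x = np.array([1., 2., 3.])
h = np.array([1., -1.])
L = len(x) + len(h) - 1  # length of the linear convolution
# np.fft.fft(., L) zero-pads to length L before computing the DFT
y_circular = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)).real
y_linear = np.convolve(x, h)
print(np.allclose(y_circular, y_linear))  # True
```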
#### Example - Interpolation of the DFT
The following example shows that the coefficients $X_M[\mu]$ of the spectrum of the zero-padded signal $x_M[k]$ can be derived by interpolation from the spectrum $X_N[\mu]$.
```python
N = 16 # length of the signal
M = 32 # number of points for interpolated DFT
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# periodic sinc function
def psinc(x, N):
    x = np.asanyarray(x)
    # replace x = 0 by a tiny value to avoid division by zero
    y = np.where(x == 0, 1.0e-20, x)
    return 1/N * np.sin(N/2*y)/np.sin(1/2*y)

# DFT of the exponential signal
xN = np.exp(1j*Om0*np.arange(N))
XN = np.fft.fft(xN)

# interpolation of DFT coefficients
XM = np.zeros(M, dtype=complex)
for mu in np.arange(M):
    Omd = 2*np.pi/M*mu - 2*np.pi*np.arange(N)/N
    interpolator = psinc(Omd, N) * np.exp(-1j*Omd*(N-1)/2)
    XM[mu] = np.sum(XN * interpolator)
# plot spectra
plt.figure(figsize = (10, 6))
plt.subplot(121)
plt.stem(np.arange(N),np.abs(XN))
plt.title(r'DFT of $e^{j \Omega_0 k}$ without zero-padding')
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_N[\mu]|$')
plt.axis([0, N, 0, 18])
plt.grid()
plt.subplot(122)
plt.stem(np.arange(M),np.abs(XM))
plt.title(r'Interpolated spectrum')
plt.xlabel(r'$\mu$')
plt.ylabel(r'$|X_M[\mu]|$')
plt.axis([0, M, 0, 18])
plt.grid()
```
**Exercise**
* Compare the interpolated spectrum to the spectrum with zero padding from the first example.
* Estimate the frequency $\Omega_0$ of the exponential signal from the interpolated spectrum. How could you further increase the accuracy of your estimate?
Solution: The interpolated spectrum is the same as the spectrum with zero padding from the first example. The estimated frequency from the interpolated spectrum is $\Omega_0=\frac{2\pi}{M}\mu=\frac{2\pi}{32}\cdot11$. A better estimate can be obtained by increasing the number of points for the interpolated DFT or by further zero-padding of the time domain signal.
#### Example - Estimation of Frequency and Amplitude of a Harmonic Signal
The estimation of the normalized frequency $\Omega_0$ and amplitude $A$ of a single exponential signal $x_N[k] = A \cdot \mathrm{e}^{\,\mathrm{j}\, \Omega_0 k}$ by the DFT of the zero-padded signal (or interpolated DFT) is illustrated in the following example. The frequency is estimated from the DFT of the zero-padded signal by locating the maximum of the magnitude spectrum
\begin{equation}
\hat{\mu}_0 = \underset{\mu}{\mathrm{argmax}} \{ |X_M[\mu]| \}
\end{equation}
The amplitude is estimated from the magnitude at the maximum, normalized by the signal length, $\hat{A} = \frac{1}{N} | X_M[\hat{\mu}_0] |$.
First a function is defined which estimates the frequency and amplitude for a given number of zeros appended to the signal before computing the DFT. Without loss of generality it is assumed that $A=1$.
```python
N = 128 # length of the signal
Om0 = 5.33*(2*np.pi/N) # frequency of exponential signal
# generate harmonic signal
k = np.arange(N)
x = np.exp(1j*Om0*np.arange(N))
def estimate_frequency_amplitude(x, P):
    # perform zero-padding and DFT
    xM = np.concatenate((x, np.zeros(P)))
    XM = np.fft.fft(xM)
    # estimate frequency/amplitude of harmonic signal
    mu_max = np.argmax(abs(XM))
    amplitude = 1/N * abs(XM[mu_max])
    # print results
    Om = np.fft.fftfreq(N+P, 1/(2*np.pi))
    print('Normalized frequency of signal: {0:1.4f} (real) / {1:1.4f} (estimated) / {2:1.4f} (absolute error)'.format(Om0, Om[mu_max], abs(Om0 - Om[mu_max])))
    print('Amplitude of signal: {0:1.4f} (real) / {1:1.4f} (estimated) / {2:2.2f} dB (magnitude error)'.format(1, amplitude, 20*np.log10(abs(1/amplitude))))
```
First the estimation is performed without zero-padding
```python
estimate_frequency_amplitude(x, 0)
```
```
Normalized frequency of signal: 0.2616 (real) / 0.2454 (estimated) / 0.0162 (absolute error)
Amplitude of signal: 1.0000 (real) / 0.8303 (estimated) / 1.62 dB (magnitude error)
```
Then the signal is zero-padded to a total length of eight times its original length
```python
estimate_frequency_amplitude(x, 7*N)
```
```
Normalized frequency of signal: 0.2616 (real) / 0.2638 (estimated) / 0.0022 (absolute error)
Amplitude of signal: 1.0000 (real) / 0.9967 (estimated) / 0.03 dB (magnitude error)
```
**Exercise**
* What is the maximum error that can occur when estimating the frequency from the maximum of the (zero-padded) magnitude spectrum?
Solution: The maximum absolute error occurs if the maximum of the DTFT of the signal lies exactly in between two adjacent bins $\mu$ of the DFT. Since the DTFT is sampled at $\Omega = \mu \frac{2 \pi}{M}$ to derive the DFT, the maximum absolute error is given by $\frac{\pi}{M}$, where $M$ denotes the length of the zero-padded signal/DFT.
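This bound can be checked numerically. The following minimal sketch places the true frequency exactly between two bins, where the estimation error attains its maximum:
```python
M = 256  # DFT length
Om0 = (10 + 0.5) * 2*np.pi/M  # frequency exactly between bins 10 and 11
x = np.exp(1j*Om0*np.arange(M))
mu_max = np.argmax(np.abs(np.fft.fft(x)))
print(abs(Om0 - 2*np.pi*mu_max/M), np.pi/M)  # error equals the bound pi/M
```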
|
abstract type BasisFunction end
abstract type Linear <: BasisFunction end
abstract type Quadratic <: BasisFunction end
struct NodeLocation
    l::UInt8
    j::UInt16
    dj::UInt16
end
NodeLocation(x::Float64) = NodeLocation(position(x)...)

mutable struct AdaptiveGrid
    nodeinfo
    active::BitArray{1}
    overlap::Vector{Vector{Bool}}
end

mutable struct NGrid{D, B}
    L::Vector{Int}
    bounds::Array{Float64, 2}
    grid::Array{Float64, 2}
    covers::Array{UInt16, 2}
    covers_dM::Array{UInt16, 2}
    covers_loc::Vector{UInt32}
    adapt::AdaptiveGrid
    IDs::Vector{Vector{Int}}
    Bs::Vector{Vector{Float64}}
end
# Returns the tensor grid of a vector of univariate grids
function Base.kron(X::Vector{Vector{Float64}})
    D = length(X)
    lxs = Int[length(x) for x in X]
    G = zeros(prod(lxs), D)
    s = 1
    for d = 1:D
        snext = s * lxs[d]
        for j = 1:prod(lxs)
            G[j, d] = X[d][div(rem(j - 1, snext), s) + 1]
        end
        s = snext
    end
    return G
end
Base.kron(x::Vector{Float64}) = x
"""
TensorGrid(L)
Returns a tensor grid of univariate grids of levels given by L = [l1,l2,...]
"""
function TensorGrid(L::Vector{Int})
G = ndgrid(Vector{Float64}[cc_g(i) for i in L]...)
G = hcat([vec(g) for g in G]...)
end
"""
SmolyakSize(L[, mL])
Returns the number of nodes in levels mL of a Smolyak grid of maximum
of maximum level L = [l1,l2...]
"""
function SmolyakSize(L::Vector{Int}, mL::UnitRange{Int} = 0:maximum(L))
D = length(L)
m = Int[dM(l) for l = 1:maximum(L) + D]
S = 0
for l = mL
for covering in comb(D, D + l)
if all(covering .≤ L + 1)
s = m[covering[1]]
for i = 2:length(covering)
s *= m[covering[i]]
end
S += s
end
end
end
return S
end
"""
SmolyakGrid(L[, mL])
juli
"""
function SmolyakGrid(L::Vector{Int}, mL::UnitRange{Int} = 0:maximum(L))
D = length(L)
dg = Vector{Float64}[cc_dg(l) for l = 1:maximum(L) + D]
G = Array{Array{Float64}}(0)
index = Array{Array{Int}}(0)
for l = mL
for covering in comb(D, D + l)
if all(covering .≤ L + 1)
push!(G, kron(dg[covering]))
push!(index, repmat(covering', size(G[end], 1)))
end
end
end
# G = vcat(G...)::Array{Float64,2}
# index = vcat(index...)::Array{Int,2}
# return G,index
return (vcat(G...), vcat(index...))::Tuple{Array{Float64, 2}, Array{Int, 2}}
end
"""
level(x)
Computes the minimum level of a point
"""
function level(x::Float64)
l = 0
if x == 0.5
l = 1
elseif x == 0.0 || x == 1.0
l = 2
else
for l = 3:12
mod(x, 1 / 2^(l - 1)) == 0.0 && break
end
end
return l
end
level(G::NGrid) = level(G.grid)
level(X::Array{Float64}) = vec(sum(map(level, X), 2) - size(X, 2))
"""
NGrid(L,bounds,B)
# Construction
Construct a Smolyak grid.
The vector of integers L determines the maximum level of the grid in each
dimension. The bounds can be optionally changed from the default of [0,1]^D and
basis function B can be either Linear or Quadratic.
# Interpolation
Grid objects are callable taking two arguments. The first is a vector containing
function values at the grid nodes. The second array contains rows of points at
which the interpolant is to be evaluated.
"""
function NGrid(L::Vector{Int}, bounds::Array{Float64} = [0, 1] .* ones(1, length(L)); B::Type{BT} = Linear) where {BT <: BasisFunction}
    grid, ind = SmolyakGrid(L)
    covers = UInt16.(unique(ind, 1))
    covers_loc = zeros(UInt32, size(covers, 1))
    hind = vec(mapslices(hash, ind, 2))
    hcovers = vec(mapslices(hash, covers, 2))
    for i = 1:size(covers, 1)
        covers_loc[i] = findfirst(hind, hcovers[i])
    end
    G = NGrid{length(L), B}(L,
                            bounds,
                            grid,
                            covers,
                            dM.(covers),
                            covers_loc,
                            AdaptiveGrid([], zeros(Bool, size(grid, 1)), []),
                            Vector{Int}[Int[] for i = 1:size(grid, 1)],
                            Vector{Float64}[Float64[] for i = 1:size(grid, 1)])
    G.adapt.active = (level(G) .== maximum(level(G)))
    buildW(G, hind, hcovers)
    return G
end
Base.show(io::IO, G::NGrid) = println(io, typeof(G), ": $(size(G.grid,1))pts")
Base.length(G::NGrid) = size(G.grid, 1)
Base.size(G::NGrid) = size(G.grid)
Base.values(G::NGrid) = nUtoX(G.grid, G.bounds)
Base.getindex(G::NGrid, args...) = getindex(G.grid, args...)
|
function [est, x, k] = pnorm(A, p, tol, prnt)
%PNORM Estimate of matrix p-norm (1 <= p <= inf).
% [EST, x, k] = PNORM(A, p, TOL) estimates the Holder p-norm of a
% matrix A, using the p-norm power method with a specially
% chosen starting vector.
% TOL is a relative convergence tolerance (default 1E-4).
% Returned are the norm estimate EST (which is a lower bound for the
% exact p-norm), the corresponding approximate maximizing vector x,
% and the number of power method iterations k.
% A nonzero fourth input argument causes trace output to the screen.
% If A is a vector, this routine simply returns NORM(A, p).
%
% See also NORM, NORMEST, NORMEST1.
% Note: The estimate is exact for p = 1, but is not always exact for
% p = 2 or p = inf. Code could be added to treat p = 2 and p = inf
% separately.
%
% Calls DUAL.
%
% Reference:
% N. J. Higham, Estimating the matrix p-norm, Numer. Math.,
% 62 (1992), pp. 539-555.
% N. J. Higham, Accuracy and Stability of Numerical Algorithms,
% Second edition, Society for Industrial and Applied Mathematics,
% Philadelphia, PA, 2002; sec. 15.2.
if nargin < 2, error('Must specify norm via second parameter.'), end
[m,n] = size(A);
if min(m,n) == 1, est = norm(A,p); return, end
if nargin < 4, prnt = 0; end
if nargin < 3 || isempty(tol), tol = 1e-4; end
% Stage I. Use Algorithm OSE to get starting vector x for power method.
% Form y = B*x, at each stage choosing x(k) = c and scaling previous
% x(k+1:n) by s, where norm([c s],p)=1.
sm = 9; % Number of samples.
y = zeros(m,1); x = zeros(n,1);
for k=1:n
    if k == 1
        c = 1; s = 0;
    else
        W = [A(:,k) y];
        if p == 2  % Special case. Solve exactly for 2-norm.
            [U,S,V] = svd(full(W));
            c = V(1,1); s = V(2,1);
        else
            fopt = 0;
            for th=linspace(0,pi,sm)
                c1 = cos(th); s1 = sin(th);
                nrm = norm([c1 s1],p);
                c1 = c1/nrm; s1 = s1/nrm;  % [c1 s1] has unit p-norm.
                f = norm( W*[c1 s1]', p );
                if f > fopt
                    fopt = f;
                    c = c1; s = s1;
                end
            end
        end
    end
    x(k) = c;
    y = x(k)*A(:,k) + s*y;
    if k > 1, x(1:k-1) = s*x(1:k-1); end
end
est = norm(y,p);
if prnt, fprintf('Alg OSE: %9.4e\n', est), end
% Stage II. Apply Algorithm PM (the power method).
q = dual(p);
k = 1;
while 1
    y = A*x;
    est_old = est;
    est = norm(y,p);
    z = A' * dual(y,p);
    if prnt
        fprintf('%2.0f: norm(y) = %9.4e,  norm(z) = %9.4e', ...
                k, norm(y,p), norm(z,q))
        fprintf('  rel_incr(est) = %9.4e\n', (est-est_old)/est)
    end
    if ( norm(z,q) <= z'*x || abs(est-est_old)/est <= tol ) && k > 1
        return
    end
    x = dual(z,q);
    k = k + 1;
end
|
#ifndef __GSL_VERSION_H__
#define __GSL_VERSION_H__
#include <gsl/gsl_types.h>
#undef __BEGIN_DECLS
#undef __END_DECLS
#ifdef __cplusplus
# define __BEGIN_DECLS extern "C" {
# define __END_DECLS }
#else
# define __BEGIN_DECLS /* empty */
# define __END_DECLS /* empty */
#endif
__BEGIN_DECLS
#define GSL_VERSION "2.2.1"
#define GSL_MAJOR_VERSION 2
#define GSL_MINOR_VERSION 2
GSL_VAR const char * gsl_version;
__END_DECLS
#endif /* __GSL_VERSION_H__ */
|
[STATEMENT]
lemma remove_unknowns_generic_specification: "a = Accept \<or> a = Drop \<Longrightarrow> packet_independent_\<alpha> \<alpha> \<Longrightarrow>
packet_independent_\<beta>_unknown \<beta> \<Longrightarrow>
\<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m)
[PROOF STEP]
proof(induction "(\<beta>, \<alpha>)" a m rule: remove_unknowns_generic.induct)
[PROOF STATE]
proof (state)
goal (7 subgoals):
1. \<And>uv_. \<lbrakk>uv_ = Accept \<or> uv_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) uv_ MatchAny)
2. \<And>ux_. \<lbrakk>ux_ = Accept \<or> ux_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) ux_ (MatchNot MatchAny))
3. \<And>a A. \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (Match A))
4. \<And>a A. \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (Match A)))
5. \<And>a m. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchNot m)))
6. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m1); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m2); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchAnd m1 m2))
7. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); a = 
Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchAnd m1 m2)))
[PROOF STEP]
case 3
[PROOF STATE]
proof (state)
this:
a_ = Accept \<or> a_ = Drop
packet_independent_\<alpha> \<alpha>
packet_independent_\<beta>_unknown \<beta>
goal (7 subgoals):
1. \<And>uv_. \<lbrakk>uv_ = Accept \<or> uv_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) uv_ MatchAny)
2. \<And>ux_. \<lbrakk>ux_ = Accept \<or> ux_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) ux_ (MatchNot MatchAny))
3. \<And>a A. \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (Match A))
4. \<And>a A. \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (Match A)))
5. \<And>a m. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchNot m)))
6. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m1); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m2); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchAnd m1 m2))
7. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); a = 
Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchAnd m1 m2)))
[PROOF STEP]
thus ?case
[PROOF STATE]
proof (prove)
using this:
a_ = Accept \<or> a_ = Drop
packet_independent_\<alpha> \<alpha>
packet_independent_\<beta>_unknown \<beta>
goal (1 subgoal):
1. \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a_ (Match A_))
[PROOF STEP]
by(simp add: packet_independent_unknown_match packet_independent_\<beta>_unknown_def remove_unknowns_generic.simps)
[PROOF STATE]
proof (state)
this:
\<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a_ (Match A_))
goal (6 subgoals):
1. \<And>uv_. \<lbrakk>uv_ = Accept \<or> uv_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) uv_ MatchAny)
2. \<And>ux_. \<lbrakk>ux_ = Accept \<or> ux_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) ux_ (MatchNot MatchAny))
3. \<And>a A. \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (Match A)))
4. \<And>a m. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchNot m)))
5. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m1); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m2); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchAnd m1 m2))
6. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); a = 
Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchAnd m1 m2)))
[PROOF STEP]
next
[PROOF STATE]
proof (state)
goal (6 subgoals):
1. \<And>uv_. \<lbrakk>uv_ = Accept \<or> uv_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) uv_ MatchAny)
2. \<And>ux_. \<lbrakk>ux_ = Accept \<or> ux_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) ux_ (MatchNot MatchAny))
3. \<And>a A. \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (Match A)))
4. \<And>a m. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchNot m)))
5. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m1); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m2); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchAnd m1 m2))
6. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); a = 
Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchAnd m1 m2)))
[PROOF STEP]
case 4
[PROOF STATE]
proof (state)
this:
a_ = Accept \<or> a_ = Drop
packet_independent_\<alpha> \<alpha>
packet_independent_\<beta>_unknown \<beta>
goal (6 subgoals):
1. \<And>uv_. \<lbrakk>uv_ = Accept \<or> uv_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) uv_ MatchAny)
2. \<And>ux_. \<lbrakk>ux_ = Accept \<or> ux_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) ux_ (MatchNot MatchAny))
3. \<And>a A. \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (Match A)))
4. \<And>a m. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchNot m)))
5. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m1); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m2); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchAnd m1 m2))
6. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); a = 
Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchAnd m1 m2)))
[PROOF STEP]
thus ?case
[PROOF STATE]
proof (prove)
using this:
a_ = Accept \<or> a_ = Drop
packet_independent_\<alpha> \<alpha>
packet_independent_\<beta>_unknown \<beta>
goal (1 subgoal):
1. \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a_ (MatchNot (Match A_)))
[PROOF STEP]
by(simp add: packet_independent_unknown_match packet_independent_\<beta>_unknown_def remove_unknowns_generic.simps)
[PROOF STATE]
proof (state)
this:
\<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a_ (MatchNot (Match A_)))
goal (5 subgoals):
1. \<And>uv_. \<lbrakk>uv_ = Accept \<or> uv_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) uv_ MatchAny)
2. \<And>ux_. \<lbrakk>ux_ = Accept \<or> ux_ = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) ux_ (MatchNot MatchAny))
3. \<And>a m. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchNot m)))
4. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m1); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a m2); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchAnd m1 m2))
5. \<And>a m1 m2. \<lbrakk>\<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1)); \<lbrakk>\<not> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) = MatchAny \<or> remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) = MatchAny); remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m1) \<noteq> MatchNot MatchAny; remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2) \<noteq> MatchNot MatchAny; a = Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot m2)); a = 
Accept \<or> a = Drop; packet_independent_\<alpha> \<alpha>; packet_independent_\<beta>_unknown \<beta>\<rbrakk> \<Longrightarrow> \<not> has_unknowns \<beta> (remove_unknowns_generic (\<beta>, \<alpha>) a (MatchNot (MatchAnd m1 m2)))
[PROOF STEP]
qed(simp_all add: remove_unknowns_generic.simps)
|
lemma norm_diff_ineq: "norm a - norm b \<le> norm (a + b)"
|
-- ----------------------------------------------------------------- [ Key.idr ]
-- Module      : Crypto.Std.Key
-- Description : Types for Cryptographic Keys
-- Copyright : (c) Jan de Muijnck-Hughes
-- License : see LICENSE
-- --------------------------------------------------------------------- [ EOH ]
module Crypto.Std.Key
import Crypto.Common
import Data.Bits
||| Keys are vectors of bits.
|||
||| @ v The Visibility of the key with respect to the owner.
||| @ s The setting in which the key is to be used.
||| @ l The length of the key in bits.
data Key : (v : Visibility) -> (s : Setting) -> (l : Nat) -> Type where
||| PKE Encryption keys.
MkEncKey : Bits n -> Key Public Asymm n
||| PKE Decryption keys
MkDecKey : Bits n -> Key Private Asymm n
||| DS Signing keys.
MkSignKey : Bits n -> Key Private Sign n
||| DS Verifying keys.
MkVerifyKey : Bits n -> Key Public Sign n
||| Symmetric keys
MkSymmKey : Bits n -> Key Private Symm n
||| Key pairs
|||
||| @ s The setting in which the key is to be used.
||| @ l The length of the key in bits.
data KeyPair : (s : Setting) -> (l : Nat) -> Type where
||| Key pair for PKE.
|||
||| @ pub The public key.
||| @ priv The private key.
MkAsymmKeyPair : (pub : Key Public Asymm n)
-> (priv : Key Private Asymm n)
-> KeyPair Asymm n
||| Key pair for DS.
|||
||| @ sig The signing key.
||| @ ver The verifying key.
MkSignKeyPair : (sig : Key Private Sign n)
-> (ver : Key Public Sign n)
-> KeyPair Sign n
data TDESKeyOption = KeyOption1 | KeyOption2 | KeyOption3
||| Representation of a Triple DES key.
|||
||| @ s The setting in which the key is to be used.
||| @ n The length of each key
||| @ option The keying option used.
data TDESKey : (s : Setting) -> (n : Nat) -> (option : TDESKeyOption) -> Type where
||| Construct a Triple DES key from three sub-keys.
MkTDESKey : (key1 : Bits n)
-> (key2 : Bits n)
-> (key3 : Bits n)
-> (opt : TDESKeyOption)
-> TDESKey Symm n opt
-- --------------------------------------------------------------------- [ EOF ]
|
module Language.JSON.Accessors
import Data.Either
import Data.List
import Data.Vect
import Language.JSON
%default total
export
lookupAll : Vect n String -> List (String, JSON) -> Either String (Vect n JSON)
lookupAll [] dict = Right []
lookupAll (key :: keys) dict = [| lookup' key dict :: lookupAll keys dict |]
where
lookup' : String -> List (String, a) -> Either String a
lookup' key = maybeToEither "Missing required key: \{key}." . lookup key
export
bool : JSON -> Either String Bool
bool (JBoolean x) = Right x
bool json = Left "Expected a bool but found \{show json}."
export
string : JSON -> Either String String
string (JString x) = Right x
string json = Left "Expected a string but found \{show json}."
export
integer : JSON -> Either String Integer
integer (JNumber x) = Right $ cast x
integer json = Left "Expected an integer but found \{show json}."
export
array : (JSON -> Either String a) -> JSON -> Either String (List a)
array f (JArray xs) = traverse f xs
array f json = Left "Expected an array but found \{show json}."
export
object : JSON -> Either String (List (String, JSON))
object (JObject xs) = Right xs
object json = Left "Expected an object but found \{show json}."
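-- A usage sketch (an addition, not part of the original module): decode
-- a 2D point from a JSON object using `lookupAll` and the accessors above.
export
point : JSON -> Either String (Integer, Integer)
point json = do
  fields <- object json
  [x, y] <- lookupAll ["x", "y"] fields
  [| MkPair (integer x) (integer y) |]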
|
# Vortex tube
A vortex tube takes in high-pressure air at 650 kPa and 305 K, and splits it into two streams at a lower pressure of 100 kPa: one at a higher temperature of 325 K and one at a lower temperature. The fraction of the entering mass that leaves at the cold outlet is $f = 0.25$. The vortex tube operates continuously at steady state, is adiabatic, and neither performs nor receives work. Air should be modeled as an ideal gas with constant specific heats: $R = 287$ J/kg⋅K and $c_p = 1004$ J/kg⋅K.
<figure>
<center>
<figcaption>Figure: Vortex tube</figcaption>
</center>
</figure>
**Problem:** Determine the temperature at the cold end. Then, determine whether this device is physically possible.
```python
# Enter the known quantities
import numpy as np
from pint import UnitRegistry
ureg = UnitRegistry()
Q_ = ureg.Quantity
gas_constant = Q_(287, 'J/(kg K)')
cp = Q_(1004, 'J/(kg K)')
temp_1 = Q_(305, 'K')
pres_1 = Q_(650, 'kPa')
temp_2 = Q_(325, 'K')
pres_2 = Q_(100, 'kPa')
f = 0.25
pres_3 = Q_(100, 'kPa')
```
First, we can find the temperature at the cold outlet by performing a steady-state energy balance on the device, where for an ideal gas with constant specific heat the enthalpy is $h = c_p T$:
\begin{align}
\dot{m}_1 h_1 &= \dot{m}_3 h_3 + \dot{m}_2 h_2 \\
\dot{m} c_p T_1 &= f \dot{m} c_p T_3 + (1-f) \dot{m} c_p T_2 \\
T_3 &= \frac{T_1 - (1-f) T_2}{f}
\end{align}
```python
temp_3 = (temp_1 - (1-f)*temp_2) / f
print(f'Temperature at cold outlet: {temp_3: .2f}')
```
Temperature at cold outlet: 245.00 kelvin
Now, examine whether the device is physically possible by performing an entropy balance:
\begin{align}
\dot{m} s_1 + \dot{S}_{\text{gen}} &= f \dot{m} s_3 + (1-f) \dot{m} s_2 \\
\frac{\dot{S}_{\text{gen}}}{\dot{m}} &= f s_3 + (1-f) s_2 - s_1 = f(s_3 - s_2) + (s_2 - s_1)
\end{align}
We can obtain the $\Delta s$ values by using the relationship for an ideal gas with constant specific heat:
\begin{equation}
\Delta s_{1-2} = c_p \ln \left(\frac{T_2}{T_1}\right) - R \ln \left( \frac{p_2}{p_1} \right)
\end{equation}
```python
delta_s_12 = (
cp * np.log(temp_2/temp_1) -
gas_constant * np.log(pres_2/pres_1)
)
delta_s_23 = (
cp * np.log(temp_3/temp_2) -
gas_constant * np.log(pres_3/pres_2)
)
entropy_gen = f * delta_s_23 + delta_s_12
print(f'Entropy generation rate: {entropy_gen: .2f}')
```
Entropy generation rate: 530.05 joule / kelvin / kilogram
Since the rate of entropy generation is positive, this device can operate as described.
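As a quick sanity check (this cell is our addition, not part of the original solution), we can verify that the computed cold-outlet temperature closes the steady-state energy balance: the mass-weighted average of the outlet temperatures should reproduce the inlet temperature.
```python
# Mass-weighted outlet temperature should equal the inlet temperature
temp_mix = f*temp_3 + (1 - f)*temp_2
print(f'Mass-weighted outlet temperature: {temp_mix: .2f}')
assert np.isclose(temp_mix.to('K').magnitude, temp_1.to('K').magnitude)
```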
|
Formal statement is: lemma islimpt_greaterThanLessThan1: fixes a b::"'a::{linorder_topology, dense_order}" assumes "a < b" shows "a islimpt {a<..<b}" Informal statement is: If $a < b$, then $a$ is a limit point of the open interval $(a, b)$.
|
\documentclass[conference]{IEEEtran}
\usepackage{graphicx}
\graphicspath{ {figures/} {figures/drawings/} {figures/graphs/} }
\usepackage{listings}
\def\code#1{\texttt{#1}}
\usepackage[ruled]{algorithm}
\usepackage{algpseudocode}
\usepackage{wrapfig}
%\usepackage{algorithmic}
\ifCLASSINFOpdf
\else
\fi
\hyphenation{op-tical net-works semi-conduc-tor}
\begin{document}
\title{MAM: A Memory Allocation Manager for GPUs}
\author{\IEEEauthorblockN{Can Aknesil}
\IEEEauthorblockA{\\Computer Science and Engineering\\
Ko\c{c} University\\
Istanbul, Turkey\\
[email protected]}
\and
\IEEEauthorblockN{Didem Unat}
\IEEEauthorblockA{\\Computer Science and Engineering\\
Ko\c{c} University\\
Istanbul, Turkey \\
[email protected]}
}
\maketitle
\begin{abstract}
Nowadays, GPUs are used in all kinds of computing fields to accelerate computer programs. We observed that allocating memory on GPUs is much slower than allocating memory on CPUs. In this study, we focus on decreasing the device memory allocation overhead of GPUs. The overhead becomes significantly larger as the size of the memory segment being allocated increases. In order to achieve the lowest possible overhead during device memory allocations on GPUs, we develop a thread-safe memory management library called Memory Allocation Manager (MAM) for CUDA. Our library removes the allocation and deallocation overheads occurring during runtime, and makes the performance of CUDA programs independent of the device memory allocation size.
\end{abstract}
\begin{IEEEkeywords}
GPU, CUDA, device memory allocation, performance improvement.
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
The current trend in computer systems is to parallelize the hardware and the software running on it, rather than to produce faster processor cores. With this trend, the usage of GPUs has been increasing in all areas of computing. Currently, GPUs are used for many purposes, such as graphics, machine learning, and high performance computing. Since GPUs are used extensively, it is very important to keep the performance of programs using them as high as possible.
In this study, we focus on decreasing the device memory allocation overhead of GPUs. This allocation overhead is large, especially when the allocation size is large; thus, applications requiring repetitive or large allocations may suffer reduced overall performance. As our measurements in Figure \ref{fig:m-cm} indicate, the overhead associated with device memory allocations increases almost linearly for allocations larger than 1MB.
A similar result was observed in a study from the University of Virginia~\cite{virginia}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{malloc_vs_cuda-malloc.png}
\caption{\code{malloc()} and \code{cudaMalloc()} durations vs allocation size}
\label{fig:m-cm}
\end{figure}
We develop a thread-safe memory management library, called Memory Allocation Manager (MAM), in order to remove the allocation overhead on GPU memory. Our library provides an abstraction layer between the programmer and the memory management module of the CUDA~\cite{cuda} environment. In order to allocate and free memory using MAM, the programmer calls procedures defined in the MAM API rather than directly calling the regular \code{cudaMalloc()} and \code{cudaFree()} procedures. In this paper, we first introduce the MAM API, then its implementation. Lastly, we present its performance and compare it against the regular memory routines provided by CUDA.
\section{Application Programming Interface}
%The programmer should call the memory management procedures defined in MAM API, rather than calling the regular \code{cudaMalloc()} and \code{cudaFree()} procedures after creating MAM. During the creation, a large \emph{chunk} of memory is allocated from the device memory. The details of it will be explained later.
MAM API contains five procedures that a programmer can use for memory management.
During the creation of the MAM environment, MAM allocates a large \emph{chunk} of memory on the device, as explained in detail later.
\begin{itemize}
\item \code{MAM\_Create(maxSize)}: Creates the MAM environment. Takes a parameter that defines the size of the chunk of memory that will be allocated during the creation.
\item \code{MAM\_Create\_auto()}: Creates the MAM environment. Allocates the largest possible chunk of memory during the creation.
\item \code{MAM\_Destroy()}: Destroys the MAM environment.
\item \code{MAM\_CudaMalloc(\&ptr, size)}: Allocates specified size of device memory.
\item \code{MAM\_CudaFree(ptr)}: Frees the previously allocated device memory.
\end{itemize}
MAM can be used in three different ways:
\begin{enumerate}
\item {By specifying the chunk size during its creation:}
\begin{lstlisting}[language=C]
MAM_Create(maxSize);
MAM_CudaMalloc(&ptr, size);
...
MAM_CudaFree(ptr);
MAM_Destroy();
\end{lstlisting}
\item {Without specifying the size of the chunk during its creation:}
In this case, the largest possible chunk size is used. It is obtained by attempting allocations of exponentially decreasing size, starting from the total size of the device memory, until one of the allocations succeeds. We take this approach because it is not possible to allocate the entire device memory.
\begin{lstlisting}[language=C]
MAM_Create_auto();
MAM_CudaMalloc(&ptr, size);
...
MAM_CudaFree(ptr);
MAM_Destroy();
\end{lstlisting}
\item{Without explicit creation:}
In this case, lazy creation occurs: \code{MAM\_Create\_auto()} is called automatically when \code{MAM\_CudaMalloc()} is first called. When all the memory allocated through the MAM API has been freed, MAM automatically destroys itself.
\begin{lstlisting}[language=C]
MAM_CudaMalloc(&ptr, size);
...
MAM_CudaFree(ptr);
\end{lstlisting}
\end{enumerate}
\section{Implementation}
During the creation of MAM, a large, contiguous \emph{chunk} of memory is allocated on the device. The size of the chunk is expected to be equal to or smaller than the maximum amount of device memory that the CUDA program will use at any instant. Pointers into segments of this large chunk are returned by MAM during the allocation process. Every object in the MAM environment other than the \emph{chunk} lives in host memory.
\begin{figure}[h]%[23]{l}{0.18\textwidth}
\centering
\includegraphics[width=0.18\textwidth]{random-chunk/random-chunk-pointers-and-sizes.png}
\caption{An example of the chunk}
\label{fig:chunk}
\end{figure}
A chunk is divided into segments that are either in use or not in use (empty) by the programmer. Figure \ref{fig:chunk} represents an example of the chunk at a time instance. The example chunk is a contiguous memory region and consists of five segments.
In the MAM environment, each segment is represented by a \code{segment struct} instance in host memory. The \code{segment struct} mainly contains a pointer to the beginning of the physical segment located in device memory, a size attribute, and a flag indicating whether it is being used by the program or not. The \code{segment struct} declaration is as follows:\\
\begin{lstlisting}[language=C]
struct segment {
void *basePtr;
size_t size;
char isEmpty;
/* attributes related to data
structures */
...
};
\end{lstlisting}
\subsection{Internal Data Structures}
In MAM, there are two data structures that store the \code{segment struct} instances. The first is a tree that stores all the segments, sorted according to the base pointer of each segment, which points to the beginning of the represented physical memory. It is used when the programmer calls \code{MAM\_CudaFree(void*)} in order to find the corresponding segment from the pointer parameter.
The second is a tree-dictionary that stores only the empty segments, sorted according to their size attribute. It is used to find an empty segment of size equal to or greater than the desired allocation size during a \code{MAM\_CudaMalloc(void**, size\_t)} call. In both data structures, a red-black tree is used since it is a balanced tree.
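To make the segment bookkeeping concrete, the following simplified sketch (ours, not the actual MAM source) implements the same best-fit-with-split allocation and merge-on-free deallocation over a doubly linked list of segments kept in base-pointer order; MAM replaces the linear scans below with the two red-black trees described above, which is what yields its $\mathcal{O}(\log n)$ lookups.
\begin{lstlisting}[language=C]
#include <stdlib.h>

struct segment {
    void *basePtr;
    size_t size;
    char isEmpty;
    struct segment *prev, *next;
};

/* List head; must be initialized with one empty
   segment spanning the whole device chunk. */
static struct segment *head;

/* Best fit: smallest empty segment that is large
   enough; split off the unused tail. */
void *sketch_malloc(size_t size) {
    struct segment *s, *best = NULL, *rest;
    for (s = head; s; s = s->next)
        if (s->isEmpty && s->size >= size &&
            (!best || s->size < best->size))
            best = s;
    if (!best) return NULL;
    if (best->size > size) {
        rest = malloc(sizeof *rest);
        rest->basePtr = (char *)best->basePtr + size;
        rest->size = best->size - size;
        rest->isEmpty = 1;
        rest->prev = best;
        rest->next = best->next;
        if (best->next) best->next->prev = rest;
        best->next = rest;
        best->size = size;
    }
    best->isEmpty = 0;
    return best->basePtr;
}

/* Mark empty, then merge with empty neighbors. */
void sketch_free(void *ptr) {
    struct segment *s = head, *n, *p;
    while (s && s->basePtr != ptr) s = s->next;
    if (!s) return;
    s->isEmpty = 1;
    n = s->next;
    if (n && n->isEmpty) {          /* merge next */
        s->size += n->size;
        s->next = n->next;
        if (n->next) n->next->prev = s;
        free(n);
    }
    p = s->prev;
    if (p && p->isEmpty) {          /* merge previous */
        p->size += s->size;
        p->next = s->next;
        if (s->next) s->next->prev = p;
        free(s);
    }
}
\end{lstlisting}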
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{pointer-tree/pointer-tree.png}
\caption{Pointer tree}
\label{fig:ptree}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{size-tree-dictionary/size-tree-dictionary.png}
\caption{Size tree-dictionary}
\label{fig:stree}
\end{figure}
Figure \ref{fig:ptree} and Figure \ref{fig:stree} show the corresponding data structures for the example chunk shown in Figure \ref{fig:chunk}. At that instance, there are three segments that are allocated by the user (Segments 0, 2, and 3) and two that are not (Segments 1 and 4). Figure \ref{fig:ptree} shows the state of the pointer-tree: it contains all the segments and is sorted by the base pointer of each segment. Figure \ref{fig:stree} shows the state of the size tree-dictionary, which contains all the empty segments, sorted according to the size of each segment.
\section{Memory Management}
Allocation and deallocation calls to the MAM API respectively start and end the usage of segments located in the chunk, which was previously allocated. Since the total physical memory that will be used is allocated as a large chunk during the creation of the MAM environment, \code{MAM\_CudaMalloc()} and \code{MAM\_CudaFree()} calls do not actually allocate or free any physical memory but imitate the process. This is the main reason why MAM introduces much less overhead than the CUDA memory management module.
The initialization of the MAM environment is slow, but it is performed only once at the beginning; once MAM is created, all memory management calls are fast.
Next, we will discuss the allocation and deallocation implementations in MAM.
%The details of what happens in the MAM environment during and allocation and a deallocation is as fallows.
\begin{figure}[!hbp]
\centering
\begin{minipage}[b]{0.20\textwidth}
\includegraphics[width=\linewidth]{allocation/allocation1.png}
\caption{Allocation diagram 1}
\label{fig:alloc1}
\end{minipage}
\hspace{0.1in}
\begin{minipage}[b]{0.20\textwidth}
\includegraphics[width=\linewidth]{allocation/allocation2.png}
\caption{Allocation diagram 2}
\label{fig:alloc2}
\end{minipage}
\end{figure}
\begin{algorithm}
\caption{MAM Allocation Algorithm - $\mathcal{O}(\log n)$}
\label{algorithm1}
\begin{algorithmic}[1]
\Procedure{Allocate}{}
\State Find a best-fitting empty segment from the tree-dictionary $\mathcal{O}(\log n)$
\State Mark the segment as filled $\mathcal{O}(1)$
\If {The segment perfectly fits $\mathcal{O}(1)$}
\State {Remove segment from tree-dictionary $\mathcal{O}(\log n)$}
\Else
\State {Resize it $\mathcal{O}(1)$}
\State {Remove it from tree-dictionary $\mathcal{O}(\log n)$}
\State {Create a new empty segment $\mathcal{O}(1)$}
\State {Insert it in pointer-tree \& tree-dictionary $\mathcal{O}(\log n)$}
\EndIf
\State Return the base pointer of filled segment $\mathcal{O}(1)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Allocation}
When the programmer calls \code{MAM\_CudaMalloc()}, MAM searches for the smallest empty segment whose size is equal to or greater than the desired size using the size tree-dictionary. If there is an empty segment of exactly that size, MAM marks it as filled. If the segment found is larger than the desired segment, a new segment representing the non-allocated empty part is created. This procedure is illustrated in Figure \ref{fig:alloc1} and Figure \ref{fig:alloc2}. In Figure \ref{fig:alloc2}, Segment 3 is the newly created segment.
The algorithm of MAM allocation is shown in Algorithm~\ref{algorithm1}.
%It is the pseudo-code version of the process explained above:
The complexity of each step in the algorithm is shown at the end of that step. The overall complexity of this allocation algorithm is $\mathcal{O}(\log n)$, where $n$ is the number of segments existing in the chunk.
\subsection{Deallocation}
When the programmer calls \code{MAM\_CudaFree()}, MAM first marks the segment being freed as empty. It then merges the segment with the previous and next segments if they are also empty. This procedure is illustrated in Figure \ref{fig:dealloc}. % and the algorithm for MAM deallocation is
The algorithm of MAM deallocation is shown in Algorithm~\ref{algorithm2}. The overall complexity of the deallocation algorithm is also $\mathcal{O}(\log n)$, where $n$ is the number of segments in the chunk.
%shown in Algorithm 2.
\begin{figure}[h!]
\centering
\includegraphics[width=0.75\linewidth]{deallocation/deallocation.png}
\caption{Deallocation diagram}
\label{fig:dealloc}
\end{figure}
\begin{algorithm}
\caption{MAM Deallocation Algorithm - $\mathcal{O}(\log n)$}
\label{algorithm2}
\begin{algorithmic}[1]
\Procedure{Deallocate}{}
\State Find the segment in the pointer-tree $\mathcal{O}(\log n)$
\State Mark the segment as empty $\mathcal{O}(1)$
\State Get previous and next segments $\mathcal{O}(\log n)$
\If {the previous segment is empty $\mathcal{O}(1)$}
\State {Remove the segment being newly emptied from pointer-tree and tree-dictionary $\mathcal{O}(\log n)$}
\State {Destroy the segment being newly emptied $\mathcal{O}(1)$}
\State {Resize previous segment $\mathcal{O}(1)$}
\State {Replace it in tree-dictionary $\mathcal{O}(\log n)$}
\State {Assign it to the variable stored the destroyed segment $\mathcal{O}(\log n)$}
\EndIf
%\If {The next segment is empty $\mathcal{O}(1)$}
\State //repeat the similar procedure for next segment.
%\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
The allocation and deallocation algorithms are used in the implementation of MAM API, respectively in the procedures \code{MAM\_CudaMalloc()} and \code{MAM\_CudaFree()}. Thus, the complexities of both allocation and deallocation are $\mathcal{O}(\log n)$ in terms of the number of segments.
\section{Performance Evaluation}
We demonstrate the performance of MAM in two ways:
in terms of the allocation size, and in terms of the number of previously allocated segments.
We used Tesla K20m as the GPU testbed, Linux 2.6.32-431.11.2.el6.x86\_64 as the kernel and NVCC 7.0, V7.0.27 as CUDA Compilation Tools in all of our tests.
In order to measure the performance in terms of allocation size,
we created a histogram that stores the time elapsed during allocation
for allocation sizes from 1 byte to 1 gigabyte.
% in 9 subintervals divided
% logarithmically).
We filled the histogram by repeatedly allocating device memory
segments of random sizes until no space remained.
Figure \ref{fig:cm-mcm} and Figure \ref{fig:cf-mcf} show the performance comparison between regular \code{cudaMalloc()} and \code{MAM\_CudaMalloc()}, and \code{cudaFree()} and \code{MAM\_CudaFree()}, respectively, in terms of allocation size.
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\linewidth]{cudamalloc-vs-mamcudamalloc.png}
\caption{\code{cudaMalloc()} vs \code{MAM\_CudaMalloc()} comparison}
\label{fig:cm-mcm}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{cudafree-vs-mamcudafree.png}
\caption{\code{cudaFree()} vs \code{MAM\_CudaFree()} comparison}
\label{fig:cf-mcf}
\end{figure}
As shown in these figures, while the allocation duration of \code{cudaMalloc()} increases rapidly, the duration of \code{MAM\_CudaMalloc()} stays almost constant. MAM removes the allocation and deallocation overhead and makes the performance of allocations independent of the allocation size. This result was expected because MAM moves the entire physical memory allocation overhead from individual allocations to the creation of the MAM environment. Even though the initialization of MAM is slow, once it is initialized there is no significant overhead caused by memory allocations or deallocations: no physical memory allocation happens after the creation of MAM, and the allocation size has no effect on the complexity of MAM.
% it is meaningless to measure the performance according to the allocation size.
The second performance measurement is based on the total number of existing segments during allocation or deallocation. This is meaningful because the size of data structures used in the MAM environment increases with the number of segments.
%This measurement is according to the number of previously allocated segments.
In order to measure the performance in terms of the number of previously allocated segments, we measured the time elapsed during the first allocation after allocating a variable number of segments. In this measurement, the allocation size was random, between 1 byte and 10 bytes, sufficiently small that we could make a large number of allocations, up to $10^7$, before the device memory was full. Figure \ref{fig:cm-mcm-pa} shows the performance comparison between regular \code{cudaMalloc()} and \code{MAM\_CudaMalloc()} in terms of the number of previously allocated segments.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\linewidth]{cudamalloc-vs-mamcudamalloc-pa.png}
\caption{\code{cudaMalloc()} vs \code{MAM\_CudaMalloc()} comparison according to number of previous allocations}
\label{fig:cm-mcm-pa}
\end{figure}
According to this performance measurement, MAM is faster than CUDA, and the duration of a MAM allocation grows more slowly than that of an actual CUDA allocation once the number of previously allocated segments exceeds 100. This follows from the fact that the allocation algorithm of MAM is $\mathcal{O}(\log n)$, since the red-black tree used in the MAM environment is a balanced tree.
We should also mention that when the programmer makes a very large number of small device memory allocations, MAM uses a significant amount of host memory, since a \code{segment struct} instance is created for each segment.
\section{Discussion \& Related Work}
This study only covers the performance comparison of MAM with CUDA device memory management. However, MAM is applicable to any other environment that involves allocation and deallocation of a contiguous space of any kind, such as pinned memory allocation in CUDA~\cite{ICPP-Burak} or host memory allocation. MAM will work exactly the same way in any of these environments since, once created, it does not depend on the actual physical allocation procedure.
In the literature, one group also focuses on GPU memory allocation and deallocation overhead~\cite{regeff}; they compare current GPU memory allocators and propose a new one that is register efficient. There are many studies~\cite{ICPP-Burak, dCUDA, CuMAS} on GPU memory management, mainly focusing
on reducing the data transfer overhead between host and device memory. One study addresses the effective usage of the relatively small GPU memory by using it as a cache for the host memory and transferring data between the two memories at runtime~\cite{gpudmm}. A second study, also motivated by the small device memory size, decreases the data transfer overhead between device and host memory by directly connecting a Solid State Disk (SSD) to the GPU~\cite{gpussd}. One group has developed a tool to manage device memory so that multiple applications can use the GPU without any problems~\cite{gdm}. Another study integrated the GPU as a first-class resource into the operating system~\cite{gdev}.
To our knowledge, there is no prior study focusing specifically on the GPU memory allocation overhead. Programmers generally write their own memory manager for their specific application when it is needed. MAM offers a generalized solution, is independent of the application, and provides efficient data structures to keep the overhead low.
% to all kind of allocation overhead problems.
\section{Conclusion}
In this study, we focused on reducing the memory allocation overhead on GPUs and developed MAM, a library for CUDA. This library abstracts the CUDA memory management module from the program and succeeds in removing the overhead by moving it all to the beginning of the program. MAM currently offers a solution for the memory allocation problem of CUDA, but it can easily be extended for use on other platforms. Our future work will extend this work to Intel Xeon Phi architectures and other GPU programming models.
%solve other allocations overhead problem.
\begin{thebibliography}{1}
\bibitem{cuda}
``CUDA Toolkit,'' NVIDIA Developer, 2017. [Online]. Available: https://developer.nvidia.com/cuda-toolkit. [Accessed: 10-Jul-2017].
\bibitem{virginia}
``CUDA Memory Management Overhead,'' University of Virginia. [Online]. Available: https://www.cs.virginia.edu/~mwb7w/cuda\_support/memory\_management\_overhead.html. [Accessed: 14-Oct-2016].
\bibitem{gpudmm}
Y. Kim, J. Lee, and J. Kim, ``GPUdmm: A high-performance and memory-oblivious GPU architecture using dynamic memory management,'' in Proc. IEEE Int. Symp. High Perform. Comput. Archit. (HPCA), Feb. 2014, pp. 546--557.
\bibitem{gpussd}
J. Zhang, D. Donofrio, J. Shalf, M. Kandemir, and M. Jung, ``NVMMU: A non-volatile memory management unit for heterogeneous GPU-SSD architectures,'' in Proc. PACT, 2015.
\bibitem{gdm}
K. Wang, X. Ding, R. Lee, S. Kato, and X. Zhang, ``GDM: Device memory management for GPGPU computing,'' in Proc. ACM SIGMETRICS, New York, NY, USA, 2014, pp. 533--545.
\bibitem{gdev}
S. Kato, M. McThrow, C. Maltzahn, and S. A. Brandt, ``Gdev: First-class GPU resource management in the operating system,'' in Proc. USENIX Annual Technical Conference, 2012.
\bibitem{ICPP-Burak}
B. Bastem, D. Unat, W. Zhang, A. Almgren, and J. Shalf, ``Overlapping data transfers with computation on GPU with tiles,'' in Proc. 46th International Conference on Parallel Processing (ICPP), 2017.
\bibitem{CuMAS}
M. E. Belviranli, F. Khorasani, L. N. Bhuyan, and R. Gupta, ``CuMAS: Data transfer aware multi-application scheduling for shared GPUs,'' in Proc. 2016 International Conference on Supercomputing (ICS '16), New York, NY, USA, 2016, Article 31.
\bibitem{dCUDA}
T. Gysi, J. Bar, and T. Hoefler, ``dCUDA: Hardware supported overlap of computation and communication,'' in Proc. SC16: International Conference for High Performance Computing, Networking, Storage and Analysis, Salt Lake City, UT, 2016, pp. 609--620.
\bibitem{regeff}
M. Vinkler and V. Havran, ``Register efficient dynamic memory allocator for GPUs,'' Computer Graphics Forum, vol. 34, no. 8, pp. 143--154, 2015.
\end{thebibliography}
\end{document}
|
\section{Related Work}\label{relatedWork}
In 2013, authors from the University of California published their research on \textit{discretized streams} (D-Streams)~\cite{SparkDStream}, implemented as part of Apache Spark. As already mentioned, data is collected into so-called \textit{micro batches}: several tuples arriving within a pre-defined, sub-second-sized window are grouped together into an immutable data set. A stream of \textit{micro batches} is called a D-Stream. Each \textit{micro batch} is then distributed by a master node to a worker node within a cluster. The worker performs a series of operations on the data set, thus creating a new immutable data set. In this way, each operation is stateless.
This system provides some interesting advantages over the continuous stream processing model of Aurora (and its later iterations, i.e. Borealis and StreamBase). Regarding recovery, the system is able to track the lineage of operations that were performed on each \textit{micro batch}. This is possible because of a data structure called \textit{Resilient Distributed Datasets} (RDD), which stores the transformations that created the data set. Because intermediate results are stored in memory, replacing the replication of data for fault tolerance, recovery times are sped up significantly. Additionally, also because replication is not necessary, end-to-end processing times are reduced drastically.
Besides having faster and more resilient performance than continuous-operator systems, D-Streams have one downside. By intentionally delaying the processing of arriving tuples, the minimum end-to-end processing delay is raised. This makes micro batching unsuitable for applications with sub-second response requirements, such as some stock trading applications. The issue can be mitigated to some degree, because users are able to define the window for creating micro batches themselves.
The D-Stream approach builds on earlier research on stream processing and focuses on the weaknesses of earlier systems, such as recovery mechanisms and parallel execution. The performance gains of using D-Streams over other systems are remarkable, which partly explains the ongoing popularity of Apache Spark~\cite{SparkUsers}.
|
program t
implicit none
! inquire stmt: all 'access' and 'form' specifiers (direct, unformatted)
! and character variables in open specifiers
! and 'recl' specifier in inquire stmt
character*11::a,b,c,x,y,z
integer::r
character*6::end='direct'
open (56, status='new', file='tmpfile', access=end, form='unformatted', recl=8)
inquire (56, access=a, sequential=b, direct=c, form=x, formatted=y, unformatted=z, recl=r)
print *,a
print *,b
print *,c
print *,x
print *,y
print *,z
print *,r
close (56,status='delete')
endprogram t
|
SUBROUTINE MB03KA( COMPQ, WHICHQ, WS, K, NC, KSCHUR, IFST, ILST,
$ N, NI, S, T, LDT, IXT, Q, LDQ, IXQ, TOL, IWORK,
$ DWORK, LDWORK, INFO )
C
C SLICOT RELEASE 5.7.
C
C Copyright (c) 2002-2020 NICONET e.V.
C
C PURPOSE
C
C To reorder the diagonal blocks of the formal matrix product
C
C T22_K^S(K) * T22_K-1^S(K-1) * ... * T22_1^S(1), (1)
C
C of length K, in the generalized periodic Schur form
C
C [ T11_k T12_k T13_k ]
C T_k = [ 0 T22_k T23_k ], k = 1, ..., K, (2)
C [ 0 0 T33_k ]
C
C where
C
C - the submatrices T11_k are NI(k+1)-by-NI(k), if S(k) = 1, or
C NI(k)-by-NI(k+1), if S(k) = -1, and contain dimension-induced
C infinite eigenvalues,
C - the submatrices T22_k are NC-by-NC and contain core eigenvalues,
C which are generically neither zero nor infinite,
C - the submatrices T33_k contain dimension-induced zero
C eigenvalues,
C
C such that the block with starting row index IFST in (1) is moved
C to row index ILST. The indices refer to the T22_k submatrices.
C
C Optionally, the transformation matrices Q_1,...,Q_K from the
C reduction into generalized periodic Schur form are updated with
C respect to the performed reordering.
C
C ARGUMENTS
C
C Mode Parameters
C
C COMPQ CHARACTER*1
C = 'N': do not compute any of the matrices Q_k;
C = 'U': each coefficient of Q must contain an orthogonal
C matrix Q1_k on entry, and the products Q1_k*Q_k are
C returned, where Q_k, k = 1, ..., K, performed the
C reordering;
C = 'W': the computation of each Q_k is specified
C individually in the array WHICHQ.
C
C WHICHQ INTEGER array, dimension (K)
C If COMPQ = 'W', WHICHQ(k) specifies the computation of Q_k
C as follows:
C = 0: do not compute Q_k;
C > 0: the kth coefficient of Q must contain an orthogonal
C matrix Q1_k on entry, and the product Q1_k*Q_k is
C returned.
C This array is not referenced if COMPQ <> 'W'.
C
C WS LOGICAL
C = .FALSE. : do not perform the strong stability tests;
C = .TRUE. : perform the strong stability tests; often,
C this is not needed, and omitting them can save
C some computations.
C
C Input/Output Parameters
C
C K (input) INTEGER
C The period of the periodic matrix sequences T and Q (the
C number of factors in the matrix product). K >= 2.
C (For K = 1, a standard eigenvalue reordering problem is
C obtained.)
C
C NC (input) INTEGER
C The number of core eigenvalues. 0 <= NC <= min(N).
C
C KSCHUR (input) INTEGER
C The index for which the matrix T22_kschur is upper quasi-
C triangular. All other T22 matrices are upper triangular.
C
C IFST (input/output) INTEGER
C ILST (input/output) INTEGER
C Specify the reordering of the diagonal blocks, as follows:
C The block with starting row index IFST in (1) is moved to
C row index ILST by a sequence of direct swaps between adjacent
C blocks in the product.
C On exit, if IFST pointed on entry to the second row of a
C 2-by-2 block in the product, it is changed to point to the
C first row; ILST always points to the first row of the block
C in its final position in the product (which may differ from
C its input value by +1 or -1).
C 1 <= IFST <= NC, 1 <= ILST <= NC.
C
C N (input) INTEGER array, dimension (K)
C The leading K elements of this array must contain the
C dimensions of the factors of the formal matrix product T,
C such that the k-th coefficient T_k is an N(k+1)-by-N(k)
C matrix, if S(k) = 1, or an N(k)-by-N(k+1) matrix,
C if S(k) = -1, k = 1, ..., K, where N(K+1) = N(1).
C
C NI (input) INTEGER array, dimension (K)
C The leading K elements of this array must contain the
C dimensions of the factors of the matrix sequence T11_k.
C N(k) >= NI(k) + NC >= 0.
C
C S (input) INTEGER array, dimension (K)
C The leading K elements of this array must contain the
C signatures (exponents) of the factors in the K-periodic
C matrix sequence. Each entry in S must be either 1 or -1;
C the value S(k) = -1 corresponds to using the inverse of
C the factor T_k.
C
C T (input/output) DOUBLE PRECISION array, dimension (*)
C On entry, this array must contain at position IXT(k) the
C matrix T_k, which is at least N(k+1)-by-N(k), if S(k) = 1,
C or at least N(k)-by-N(k+1), if S(k) = -1, in periodic
C Schur form.
C On exit, the matrices T_k are overwritten by the reordered
C periodic Schur form.
C
C LDT INTEGER array, dimension (K)
C The leading dimensions of the matrices T_k in the one-
C dimensional array T.
C LDT(k) >= max(1,N(k+1)), if S(k) = 1,
C LDT(k) >= max(1,N(k)), if S(k) = -1.
C
C IXT INTEGER array, dimension (K)
C Start indices of the matrices T_k in the one-dimensional
C array T.
C
C Q (input/output) DOUBLE PRECISION array, dimension (*)
C On entry, this array must contain at position IXQ(k) a
C matrix Q_k of size at least N(k)-by-N(k), provided that
C COMPQ = 'U', or COMPQ = 'W' and WHICHQ(k) > 0.
C On exit, if COMPQ = 'U', or COMPQ = 'W' and WHICHQ(k) > 0,
C Q_k is post-multiplied with the orthogonal matrix that
C performed the reordering.
C This array is not referenced if COMPQ = 'N'.
C
C LDQ INTEGER array, dimension (K)
C The leading dimensions of the matrices Q_k in the one-
C dimensional array Q.
C LDQ(k) >= max(1,N(k)), if COMPQ = 'U', or COMPQ = 'W' and
C WHICHQ(k) > 0;
C This array is not referenced if COMPQ = 'N'.
C
C IXQ INTEGER array, dimension (K)
C Start indices of the matrices Q_k in the one-dimensional
C array Q.
C This array is not referenced if COMPQ = 'N'.
C
C Tolerances
C
C TOL DOUBLE PRECISION array, dimension (3)
C This array contains tolerance parameters. The weak and
C strong stability tests use a threshold computed by the
C formula MAX( c*EPS*NRM, SMLNUM ), where c is a constant,
C NRM is the Frobenius norm of the current matrix formed by
C concatenating K pairs of adjacent diagonal blocks of sizes
C 1 and/or 2 in the T22_k submatrices from (2), which are
C swapped, and EPS and SMLNUM are the machine precision and
C safe minimum divided by EPS, respectively (see LAPACK
C Library routine DLAMCH). The norm NRM is computed by this
C routine; the other values are stored in the array TOL.
C TOL(1), TOL(2), and TOL(3) contain c, EPS, and SMLNUM,
C respectively. TOL(1) should normally be at least 10.
C
C Workspace
C
C IWORK INTEGER array, dimension (4*K)
C
C DWORK DOUBLE PRECISION array, dimension (LDWORK)
C On exit, if INFO = 0, DWORK(1) returns the optimal LDWORK.
C
C LDWORK INTEGER
C The dimension of the array DWORK.
C LDWORK >= 10*K + MN, if all blocks between IFST and ILST
C have order 1;
C LDWORK >= 25*K + MN, if there is at least a block of
C order 2, but no adjacent blocks of
C order 2 can appear between IFST and
C ILST during reordering;
C LDWORK >= MAX(42*K + MN, 80*K - 48), if at least a pair of
C adjacent blocks of order 2 can appear
C between IFST and ILST during
C reordering;
C where MN = MXN, if MXN > 10, and MN = 0, otherwise, with
C MXN = MAX(N(k),k=1,...,K).
C
C If LDWORK = -1 a workspace query is assumed; the
C routine only calculates the optimal size of the DWORK
C array, returns this value as the first entry of the DWORK
C array, and no error message is issued by XERBLA.
C
C Error Indicator
C
C INFO INTEGER
C = 0: successful exit;
C < 0: if INFO = -21, the LDWORK argument was too small;
C = 1: the reordering of T failed because some eigenvalues
C are too close to separate (the problem is very ill-
C conditioned); T may have been partially reordered.
C The returned value of ILST is the index where this
C was detected.
C
C METHOD
C
C An adaptation of the LAPACK Library routine DTGEXC is used.
C
C NUMERICAL ASPECTS
C
C The implemented method is numerically backward stable.
C
C CONTRIBUTOR
C
C R. Granat, Umea University, Sweden, Apr. 2008.
C
C REVISIONS
C
C V. Sima, Research Institute for Informatics, Bucharest, Romania,
C Mar. 2010, SLICOT Library version of the PEP routine PEP_DTGEXC.
C V. Sima, July 2010.
C
C KEYWORDS
C
C Orthogonal transformation, periodic QZ algorithm, periodic
C Sylvester-like equations, QZ algorithm.
C
C ******************************************************************
C
C .. Parameters ..
DOUBLE PRECISION ZERO
PARAMETER ( ZERO = 0.0D+0 )
C ..
C .. Scalar Arguments ..
CHARACTER COMPQ
LOGICAL WS
INTEGER IFST, ILST, INFO, K, KSCHUR, LDWORK, NC
C ..
C .. Array Arguments ..
INTEGER IWORK( * ), IXQ( * ), IXT( * ), LDQ( * ),
$ LDT( * ), N( * ), NI( * ), S( * ), WHICHQ( * )
DOUBLE PRECISION DWORK( * ), Q( * ), T( * ), TOL( * )
C ..
C .. Local Scalars ..
INTEGER HERE, I, IP1, IT, MINWRK, NBF, NBL, NBNEXT
C ..
C .. External Subroutines ..
EXTERNAL MB03KB, XERBLA
C ..
C .. Intrinsic Functions ..
INTRINSIC DBLE, INT, MAX, MOD
C ..
C .. Executable Statements ..
C
C For efficiency reasons the parameters are not checked, except for
C workspace.
C
IF( NC.EQ.2 ) THEN
NBF = 1
NBL = 1
ELSE IF( NC.EQ.3 ) THEN
NBF = 1
NBL = 2
ELSE
NBF = 2
NBL = 2
END IF
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, 1, NBF, NBL, N, NI,
$ S, T, LDT, IXT, Q, LDQ, IXQ, TOL, IWORK, DWORK, -1,
$ INFO )
MINWRK = MAX( 1, INT( DWORK(1) ) )
IF( LDWORK.NE.-1 .AND. LDWORK.LT.MINWRK )
$ INFO = -21
C
C Quick return if possible
C
IF( LDWORK.EQ.-1 ) THEN
DWORK(1) = DBLE( MINWRK )
RETURN
ELSE IF( INFO.LT.0 ) THEN
CALL XERBLA( 'MB03KA', -INFO )
RETURN
END IF
C
C Set I and IP1 to point to KSCHUR and KSCHUR+1 to simplify
C indices below.
C
I = KSCHUR
IP1 = MOD( I, K ) + 1
C
C Determine the first row of the block in T22_kschur corresponding
C to the first block in the product and find out if it is 1-by-1 or
C 2-by-2.
C
IF( IFST.GT.1 ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + IFST - 2 )*LDT(I) + NI(IP1) + IFST
$ - 1
ELSE
IT = IXT(I) + ( NI(IP1) + IFST - 2 )*LDT(I) + NI(I) + IFST
$ - 1
END IF
IF( T( IT ).NE.ZERO )
$ IFST = IFST - 1
END IF
NBF = 1
IF( IFST.LT.NC ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + IFST - 1 )*LDT(I) + NI(IP1) + IFST
ELSE
IT = IXT(I) + ( NI(IP1) + IFST - 1 )*LDT(I) + NI(I) + IFST
END IF
IF( T( IT ).NE.ZERO )
$ NBF = 2
END IF
C
C     Determine the first row of the block in T22_kschur corresponding
C     to the last block in the product and find out if it is 1-by-1 or
C     2-by-2.
C
IF( ILST.GT.1 ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + ILST - 2 )*LDT(I) + NI(IP1) + ILST
$ - 1
ELSE
IT = IXT(I) + ( NI(IP1) + ILST - 2 )*LDT(I) + NI(I) + ILST
$ - 1
END IF
IF( T( IT ).NE.ZERO )
$ ILST = ILST - 1
END IF
NBL = 1
IF( ILST.LT.NC ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + ILST - 1 )*LDT(I) + NI(IP1) + ILST
ELSE
IT = IXT(I) + ( NI(IP1) + ILST - 1 )*LDT(I) + NI(I) + ILST
END IF
IF( T( IT ).NE.ZERO )
$ NBL = 2
END IF
C
C If the specified and last block in the product were the same,
C return.
C
IF( IFST.EQ.ILST )
$ RETURN
C
C If the specified block lies above the last block on the diagonal
C of the product and the blocks have unequal sizes, update ILST.
C
IF( IFST.LT.ILST ) THEN
C
C Update ILST.
C
IF( NBF.EQ.2 .AND. NBL.EQ.1 )
$ ILST = ILST - 1
IF( NBF.EQ.1 .AND. NBL.EQ.2 )
$ ILST = ILST + 1
C
HERE = IFST
C
10 CONTINUE
C
C Swap a block with next one below.
C
IF( NBF.EQ.1 .OR. NBF.EQ.2 ) THEN
C
C Current next block is either 1-by-1 or 2-by-2.
C
NBNEXT = 1
IF( HERE+NBF+1.LE.NC ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + HERE + NBF - 1 )*LDT(I) +
$ NI(IP1) + HERE + NBF
ELSE
IT = IXT(I) + ( NI(IP1) + HERE + NBF - 1 )*LDT(I) +
$ NI(I) + HERE + NBF
END IF
IF( T( IT ).NE.ZERO )
$ NBNEXT = 2
END IF
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE, NBF,
$ NBNEXT, N, NI, S, T, LDT, IXT, Q, LDQ, IXQ,
$ TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
HERE = HERE + NBNEXT
C
C Test if a 2-by-2 block breaks into two 1-by-1 blocks.
C
IF( NBF.EQ.2 ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + HERE - 1 )*LDT(I) + NI(IP1)
$ + HERE
ELSE
IT = IXT(I) + ( NI(IP1) + HERE - 1 )*LDT(I) + NI(I)
$ + HERE
END IF
IF( T( IT ).EQ.ZERO )
$ NBF = 3
END IF
ELSE
C
C Current next block consists of two 1-by-1 blocks each of
C which must be swapped individually.
C
NBNEXT = 1
IF( HERE+3.LE.NC ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + HERE + 1 )*LDT(I) + NI(IP1) +
$ HERE + 2
ELSE
IT = IXT(I) + ( NI(IP1) + HERE + 1 )*LDT(I) + NI(I) +
$ HERE + 2
END IF
IF( T( IT ).NE.ZERO )
$ NBNEXT = 2
END IF
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE+1, 1,
$ NBNEXT, N, NI, S, T, LDT, IXT, Q, LDQ, IXQ,
$ TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
IF( NBNEXT.EQ.1 ) THEN
C
C Swap two 1-by-1 blocks.
C
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE, 1,
$ NBNEXT, N, NI, S, T, LDT, IXT, Q, LDQ, IXQ,
$ TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
HERE = HERE + 1
ELSE
C
C Recompute NBNEXT in case 2-by-2 split.
C
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + HERE )*LDT(I) + NI(IP1) + HERE
$ + 1
ELSE
IT = IXT(I) + ( NI(IP1) + HERE )*LDT(I) + NI(I) + HERE
$ + 1
END IF
IF( T( IT ).EQ.ZERO )
$ NBNEXT = 1
IF( NBNEXT.EQ.2 ) THEN
C
C The 2-by-2 block did not split.
C
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE,
$ 1, NBNEXT, N, NI, S, T, LDT, IXT, Q, LDQ,
$ IXQ, TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
HERE = HERE + 2
ELSE
C
C The 2-by-2 block did split.
C
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE,
$ 1, 1, N, NI, S, T, LDT, IXT, Q, LDQ, IXQ,
$ TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE+1,
$ 1, 1, N, NI, S, T, LDT, IXT, Q, LDQ, IXQ,
$ TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE + 1
RETURN
END IF
HERE = HERE + 2
END IF
END IF
END IF
IF( HERE.LT.ILST )
$ GO TO 10
C
ELSE
C
HERE = IFST
20 CONTINUE
C
C Swap a block with next one above.
C
IF( NBF.EQ.1 .OR. NBF.EQ.2 ) THEN
C
C Current block is either 1-by-1 or 2-by-2.
C
NBNEXT = 1
IF( HERE.GE.3 ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + HERE - 3 )*LDT(I) + NI(IP1)
$ + HERE - 2
ELSE
IT = IXT(I) + ( NI(IP1) + HERE - 3 )*LDT(I) + NI(I)
$ + HERE - 2
END IF
IF( T( IT ).NE.ZERO )
$ NBNEXT = 2
END IF
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE-NBNEXT,
$ NBNEXT, NBF, N, NI, S, T, LDT, IXT, Q, LDQ,
$ IXQ, TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
HERE = HERE - NBNEXT
C
C Test if a 2-by-2 block breaks into two 1-by-1 blocks.
C
IF( NBF.EQ.2 ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + HERE - 1 )*LDT(I) + NI(IP1)
$ + HERE
ELSE
IT = IXT(I) + ( NI(IP1) + HERE - 1 )*LDT(I) + NI(I)
$ + HERE
END IF
IF( T( IT ).EQ.ZERO )
$ NBF = 3
END IF
C
ELSE
C
C Current block consists of two 1-by-1 blocks each of which
C must be swapped individually.
C
NBNEXT = 1
IF( HERE.GE.3 ) THEN
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + HERE - 3 )*LDT(I) + NI(IP1)
$ + HERE - 2
ELSE
IT = IXT(I) + ( NI(IP1) + HERE - 3 )*LDT(I) + NI(I)
$ + HERE - 2
END IF
IF( T( IT ).NE.ZERO )
$ NBNEXT = 2
END IF
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE-NBNEXT,
$ NBNEXT, 1, N, NI, S, T, LDT, IXT, Q, LDQ, IXQ,
$ TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
IF( NBNEXT.EQ.1 ) THEN
C
C Swap two 1-by-1 blocks.
C
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE,
$ NBNEXT, 1, N, NI, S, T, LDT, IXT, Q, LDQ,
$ IXQ, TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
HERE = HERE - 1
ELSE
C
C Recompute NBNEXT in case 2-by-2 split.
C
IF( S(I).EQ.1 ) THEN
IT = IXT(I) + ( NI(I) + HERE - 2 )*LDT(I) + NI(IP1)
$ + HERE - 1
ELSE
IT = IXT(I) + ( NI(IP1) + HERE - 2 )*LDT(I) + NI(I)
$ + HERE - 1
END IF
IF( T( IT ).EQ.ZERO )
$ NBNEXT = 1
IF( NBNEXT.EQ.2 ) THEN
C
C The 2-by-2 block did not split.
C
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE-1,
$ 2, 1, N, NI, S, T, LDT, IXT, Q, LDQ, IXQ,
$ TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
HERE = HERE - 2
ELSE
C The 2-by-2 block did split.
C
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE,
$ 1, 1, N, NI, S, T, LDT, IXT, Q, LDQ, IXQ,
$ TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE
RETURN
END IF
CALL MB03KB( COMPQ, WHICHQ, WS, K, NC, KSCHUR, HERE-1,
$ 1, 1, N, NI, S, T, LDT, IXT, Q, LDQ, IXQ,
$ TOL, IWORK, DWORK, LDWORK, INFO )
IF( INFO.NE.0 ) THEN
ILST = HERE - 1
RETURN
END IF
HERE = HERE - 2
END IF
END IF
END IF
IF( HERE.GT.ILST )
$ GO TO 20
END IF
ILST = HERE
C
C Store optimal workspace values and return.
C
DWORK(1) = DBLE( MINWRK )
RETURN
C
C *** Last line of MB03KA ***
END
|
{-# LANGUAGE QuasiQuotes, MultiParamTypeClasses #-}
{-# LANGUAGE TemplateHaskell #-}
module Lib where
import qualified Language.C.Inline as C
import Foreign.C.Types
import qualified Data.Vector.Storable as V
import qualified Numeric.LinearAlgebra as LA
import qualified Numeric.LinearAlgebra.Data as D
import qualified Numeric.LinearAlgebra.Devel as D
import Control.Arrow ((***))
import Data.Tuple (swap)
C.context (C.baseCtx <> C.vecCtx)
C.include "<math.h>"
C.include "<osqp/osqp.h>"
testsp = D.mkSparse [((0,0), 3.0), ((1,1), 2.0)]
testcsr = D.mkCSR [((0,0), 3.0), ((1,1), 2.0)]
-- getCSR (D.SparseR csr ncols nrows) =
assocVec :: (Num a, Ord a) => [a] -> [(Int,a)]
assocVec v = filter (\(_,x) -> x /= 0) (zipWith (,) [0..] v)
-- Conveniently, we assume that if your list stops, the rest of it is implicitly zero. (Or perhaps inconveniently, from a type-safety perspective.)
assocMat :: (Num a, Ord a) => [[a]] -> [((Int,Int),a)]
assocMat m = let m' = map assocVec m in addRows m' where addRows m'' = concat $ zipWith (\i xs -> map (\(j,x) -> ((i,j),x)) xs) [0 ..] m''
-- AssocMatrix = [((Int, Int), Double)]
transposeAssoc = map (swap *** id)
mkGCSC = D.tr . D.mkSparse . transposeAssoc
mkGCSC' = mkGCSC . assocMat
mkCSC :: D.AssocMatrix -> CSC'
mkCSC = transpose'' . D.mkCSR . transposeAssoc
mkCSC' :: [[Double]] -> CSC'
mkCSC' = mkCSC . assocMat
-- The CSC type is internal to hmatrix.
data CSC' = CSC'
{ cscVals :: V.Vector Double
, cscRows :: V.Vector CInt
, cscCols :: V.Vector CInt
, cscNRows :: Int
, cscNCols :: Int
} deriving Show
-- data OSQPCSC = CSC -- {nzmax :: CInt, m :: CInt, n :: CInt, col_p :: V.Vector Ptr, col }
{-
instance LA.Transposable D.CSR CSC'
where
tr (D.CSR vs cs rs n m) = CSC' vs cs rs m n
tr' = tr
instance LA.Transposable CSC' D.CSR
where
tr (CSC' vs rs cs n m) = D.CSR vs rs cs m n
tr' = tr
-}
transpose'' (D.CSR vs cs rs n m) = CSC' vs cs rs m n
transpose''' (CSC' vs rs cs n m) = D.CSR vs rs cs m n
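-- A quick usage sketch (an addition, not in the original): the 2x2
-- diagonal matrix [[3,0],[0,2]] from `testsp` above, built directly in
-- CSC form via `mkCSC'`.
exampleCSC :: CSC'
exampleCSC = mkCSC' [[3.0, 0.0], [0.0, 2.0]]
-- ghci> exampleCSC
-- CSC' {cscVals = [3.0,2.0], ...}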
funboy :: IO ()
funboy = do
x <- [C.exp| double{ cos(1) } |]
print x
someFunc :: IO ()
someFunc = putStrLn "someFunc"
myfun :: CDouble -> CDouble
myfun x = [C.pure| double{ cos( $(double x)) } |]
-- exampleOSQPData =
-- Actually CDoubles
--data OSQPData = OSQPData {dn :: CInt, dm :: CInt, p :: CSC ,
-- a :: CSC, q :: V.Vector CDouble, l :: V.Vector CDouble, u :: V.Vector CDouble}
-- OSQPWorkspace *osqp_setup(const OSQPData *data, OSQPSettings *settings)
-- c_int osqp_solve(OSQPWorkspace *work)
-- c_int osqp_cleanup(OSQPWorkspace *work)
-- Maybe the easiest thing is to just inject default settings and marshal only what we need
-- Get a demo code in full c and start pulling pieces out.
-- l and u are the easiest
demo :: IO CInt
demo = [C.block|
int {
// Load problem data
c_float P_x[4] = {4.00, 1.00, 1.00, 2.00, };
c_int P_nnz = 4;
c_int P_i[4] = {0, 1, 0, 1, };
c_int P_p[3] = {0, 2, 4, };
c_float q[2] = {1.00, 1.00, };
c_float A_x[4] = {1.00, 1.00, 1.00, 1.00, };
c_int A_nnz = 4;
c_int A_i[4] = {0, 1, 0, 2, };
c_int A_p[3] = {0, 2, 4, };
c_float l[3] = {1.00, 0.00, 0.00, };
c_float u[3] = {1.00, 0.70, 0.70, };
c_int n = 2;
c_int m = 3;
// Problem settings
OSQPSettings * settings = (OSQPSettings *)c_malloc(sizeof(OSQPSettings));
// Structures
OSQPWorkspace * work; // Workspace
OSQPData * data; // OSQPData
// Populate data
data = (OSQPData *)c_malloc(sizeof(OSQPData));
data->n = n;
data->m = m;
data->P = csc_matrix(data->n, data->n, P_nnz, P_x, P_i, P_p);
data->q = q;
data->A = csc_matrix(data->m, data->n, A_nnz, A_x, A_i, A_p);
data->l = l;
data->u = u;
// Define Solver settings as default
osqp_set_default_settings(settings);
settings->alpha = 1.0; // Change alpha parameter
// Setup workspace
work = osqp_setup(data, settings);
// Solve Problem
osqp_solve(work);
// Cleanup
osqp_cleanup(work);
c_free(data->A);
c_free(data->P);
c_free(data);
c_free(settings);
return 0;
} |]
|
# This R environment comes with all of CRAN preinstalled, as well as many other helpful packages
# The environment is defined by the kaggle/rstats docker image: https://github.com/kaggle/docker-rstats
# For example, here's several helpful packages to load in
library(ggplot2) # Data visualization
library(readr) # CSV file I/O, e.g. the read_csv function
library("ggpubr")
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
#system("ls ../input")
# Any results you write to the current directory are saved as output.
my.data <- read.csv('../input/train.csv')
my.data
variable <- colnames(my.data)
variable
# Try to see correlation using the pair (SalePrice, LotArea)
cor1 <- cor(my.data$SalePrice, my.data$LotArea)
cor.test(my.data$SalePrice, my.data$LotArea, method = "pearson", alternative = "two.sided")
ggscatter(my.data, x = "LotArea", y = "SalePrice",
add = "reg.line", conf.int = TRUE,
cor.coef = TRUE, cor.method = "pearson",
xlab = "Lot Area", ylab = "Housing Price in US Dollar")
# From the graph, it seems that these two variables are not strongly related, so we may not use LotArea in prediction
# Need to do this for all variables -- see the sketch below
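# A sketch of that extension (our assumption of how one might proceed,
# not part of the original analysis): compute the Pearson correlation of
# SalePrice with every numeric column and list the strongest candidates first.
numeric.cols <- names(my.data)[sapply(my.data, is.numeric)]
cors <- sapply(numeric.cols, function(v)
  cor(my.data$SalePrice, my.data[[v]], use = "complete.obs"))
head(sort(abs(cors), decreasing = TRUE), 10)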
|
# Following Gravity - ML Foundations Part I
> First in a series on "Machine Learning Foundations," which applies to much of science and statistics as well.
- toc: false
- branch: master
- badges: true
- comments: true
- metadata_key1: metadata_value1
- metadata_key2: metadata_value2
- description: First in a series on Machine Learning Foundations, which applies to much of science and statistics as well.
- image: ../images/Ia-FollowingGravity_header.jpg
<p style="text-align: right">*Image credit: NASA*</p>
###### Preface: I'm writing this for myself, current students & [ASPIRE](http://aspirecoop.github.io) collaborators, and to 'give back' to the internet community. I recently had insight into my 'main' research problem, but started to hit a snag so decided to return to foundations. Going back to basics can be a good way to move forward...
By the end of this session, we will -- as an example problem -- have used the 1-dimensional path of an object in the presence of gravity to 'train' a system to correctly infer (i.e. to 'learn') the constants of the motion -- initial position and velocity, and the acceleration due to gravity. Hopefully we learn a few other things along the way. ;-)
*In the next installment, "Part Ib," we'll derive the differential equation of motion, and then in "Part II" we'll adapt the techniques we've learned here to do signal processing.*
## Optimization Basics: Gradient Descent
Let's put the "sample problem" aside for now, and talk about the general problem of optimization. Often we may wish to minimize some function $f(x)$. In science, doing so may enable us to fit a curve to our data, as we'll do below. Similarly, 'machine learning' systems often operate on the basis of minimizing a 'cost' function to discern patterns in complex datasets.
Thus we want to find the value of $x$ for which $f(x)$ is the smallest. A graph of such a function might look like this...
*(Python code follows, to make the graph)*
```python
import numpy as np, matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.update({'xlabel':'x', 'ylabel':'f'})
x = np.arange(-5,7,0.1)
ax.plot(x,(x-1)**2+1)
plt.show()
```
If $f(x)$ is differentiable and the derivative (*i.e.*, slope) $df/dx$ can be evaluated easily, then we can perform a so-called "gradient descent".
We do so as follows:
1. Start with some initial guess for $x$
2. "Go in the direction of $-df/dx$":
$$x_{new} = x_{old} - \alpha {df\over dx},$$
where $\alpha$ is some parameter often called the "learning rate". All this equation is saying is, "If the function is increasing, then move to the left; and if the function is decreasing then move to the right." The actual change to $x$ is given by $\Delta x \equiv - \alpha (df/dx)$.
3. Repeat step 2 until some approximation criterion is met.
A nice feature of this method is that as $df/dx \rightarrow 0$, so too $\Delta x\rightarrow0$. So an "adaptive stepsize" is built-in.
Now let's try this out with some Python code...
```python
from __future__ import print_function # for backwards-compatibility w/ Python2
import numpy as np, matplotlib.pyplot as plt
def f(x):
return (x-1)**2+1
def dfdx(x):
return 2*(x-1)
fig, ax = plt.subplots()
ax.update({'xlabel':'x', 'ylabel':'f'})
x = np.arange(-5,7,0.1)
ax.plot(x,f(x),ls='dashed')
for alpha in ([0.002,0.1,0.25,0.8]):
print("alpha = ",alpha)
x = -5 # starting point
x_arr = [x]
y_arr = [f(x)]
maxiter = 50
for iter in range(maxiter): # do the descent
# these two lines are just for plotting later
x_arr.append(x)
y_arr.append( f(x) )
# Here's the important part: update via gradient descent
x = x - alpha * dfdx(x)
# report and make the plot
print(" final x = ",x)
ax.plot(x_arr,y_arr,'o-',label="alpha = "+str(alpha))
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.show()
```
Notice how the larger learning rate ($\alpha$=0.8) meant that the steps taken were so large that they "overshot" the minimum, whereas the too-small learning rate ($\alpha=0.002$) still hadn't come anywhere close to the minimum by the time the maximum iteration count was reached.
**Exercise:** Experiment by editing the above code: Try different learning rates and observe the behavior.
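Step 3 of our recipe said to repeat "until some approximation criterion is met," but the code above simply runs for a fixed number of iterations. Here's a minimal sketch of a tolerance-based stopping test (my own addition, not part of the original code; `tol` is a hypothetical parameter of our choosing):

```python
def descend(dfdx, x, alpha=0.1, tol=1e-6, maxiter=10000):
    """Gradient descent that stops once the update becomes negligible."""
    for it in range(maxiter):
        step = alpha * dfdx(x)
        x = x - step
        if abs(step) < tol:   # converged: further updates would barely move x
            break
    return x, it

x_min, iters = descend(lambda x: 2*(x - 1), x=-5.0)
print("x_min =", x_min, "after", iters, "iterations")
```

This frees us from guessing `maxiter` in advance: the loop exits as soon as the built-in "adaptive stepsize" has shrunk the update below `tol`.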
### Challenge: Instability
You may have noticed, if you made the learning rate too large, that the algorithm does *not* converge to the solution but instead 'blows up'. This is the 'flip side' of the 'adaptive step size' feature of this algorithm: if you jump "across" the minimum to the other side and end up a greater distance from the minimum than where you started, you will encounter an even larger gradient, which will lead to an even larger $\Delta x$, and so on.
We can see this with the same code as before; we'll just use a different starting point and a step size that's clearly too large...
```python
from __future__ import print_function # for backwards-compatibility w/ Python2
import numpy as np, matplotlib.pyplot as plt
def f(x):
return (x-1)**2+1
def dfdx(x):
return 2*(x-1)
alpha = 1.1 # "too big" learning rate
print("alpha = ",alpha)
x = -1 # starting point
x_arr = []
y_arr = []
maxiter = 12
for iter in range(maxiter): # do the descent
x_arr.append(x)
y_arr.append( f(x) )
x = x - alpha * dfdx(x)
# report and make the plot
print(" final x = ",x)
fig, ax = plt.subplots()
ax.update({'xlabel':'x', 'ylabel':'f'})
plt.plot(x_arr,y_arr,'r',zorder=2,)
plt.scatter(x_arr,y_arr,zorder=3,c=range(len(x_arr)),cmap=plt.cm.viridis)
xlim = ax.get_xlim() # find out axis limits
x = np.arange(xlim[0],xlim[1],1) # dashed line
plt.plot(x,f(x),zorder=1,ls='dashed')
plt.show()
```
In the above plot, we colored the points by iteration number, starting with the dark purple at the initial location of $x=-1$, and bouncing around ever-farther from the solution as the color changes to yellow. As this happens, the error is growing exponentially; this is one example of a numerical instability. Thus, this algorithm is [not entirely stable](http://bit.ly/2kZZVP1).
One way to guard against this is to check: is our value of $f(x)$ at the current iteration *larger* than the value it was at the previous iteration? If so, that's a sign that our learning rate is too large, and we can use this criterion to dynamically adjust the learning rate.
Let's add some 'control' code to that effect, to the previous script, and also print out the values of the relevant variables so we can track the progress:
```python
from __future__ import print_function # for backwards-compatibility w/ Python2
import numpy as np, matplotlib.pyplot as plt
def f(x):
return (x-1)**2+1
def dfdx(x):
return 2*(x-1)
alpha = 13.0 # "too big" learning rate
print("alpha = ",alpha)
x = -1 # starting point
x_arr = []
y_arr = []
maxiter = 20
f_old = 1e99 # some big number
for iter in range(maxiter): # do the descent
# these two lines are just for plotting later
x_arr.append(x)
f_cur = f(x)
y_arr.append( f_cur )
print("iter = ",iter,"x = ",x,"f(x) =",f(x),"alpha = ",alpha)
if (f_cur > f_old): # check for runaway behavior
alpha = alpha * 0.5
print(" decreasing alpha. new alpha = ",alpha)
f_old = f_cur
# update via gradient descent
x = x - alpha * dfdx(x)
# report and make the plot
print(" final x = ",x)
fig, ax = plt.subplots()
ax.update({'xlabel':'x', 'ylabel':'f'})
plt.plot(x_arr,y_arr,'r',zorder=2,)
plt.scatter(x_arr,y_arr,zorder=3,c=range(len(x_arr)),cmap=plt.cm.viridis)
xlim = ax.get_xlim()
x = np.arange(xlim[0],xlim[1],1) # x for dashed line
plt.plot(x,f(x),zorder=1,ls='dashed')
plt.show()
```
So in the preceding example, we start at $x=-1$, then the unstable behavior starts and we begin diverging from the minimum, so we decrease $\alpha$ as often as our criterion tells us to. Finally $\alpha$ becomes low enough to get the system 'under control' and the algorithm enters the convergent regime.
**Exercise:** In the example above, we only decrease $\alpha$ by a factor of 2 each time, but it would be more efficient to decrease by a factor of 10. Try that and observe the behavior of the system.
You may say, *"Why do I need to worry about this instability stuff? As long as $\alpha<1$ the system will converge, right?"* Well, for this simple system it seems obvious what needs to happen, but with multidimensional optimization problems (see below), it's not always obvious what to do. (Sometimes different 'dimensions' need different learning rates.) This simple example serves as an introduction to phenomena which arise in more complex situations.
### Challenge: Non-global minima
To explore more complicated functions, we're going to take advantage of the SymPy package, to let it take derivatives for us. Try executing the import in the next cell, and if nothing happens it means you have SymPy installed. If you get an error, you may need to go into a Terminal and run "`pip install sympy`".
```python
import sympy
```
You're good? No errors? Ok, moving on...
```python
from __future__ import print_function # for backwards-compatibility w/ Python2
import numpy as np, matplotlib.pyplot as plt
from sympy import Symbol, diff
x = Symbol('x')
# our function, more complicated (SymPy handles it!)
f = (x-1)**4 - 20*(x-1)**2 + 10*x + 1
dfdx = diff(f,x)
# setup
fig, ax = plt.subplots()
ax.update({'xlabel':'x', 'ylabel':'f'})
x_arr = np.arange(-5,7,0.1)
y_arr = np.copy(x_arr)
for i, val in enumerate(x_arr):
y_arr[i] = f.evalf(subs={x:val})
ax.plot(x_arr,y_arr,ls='dashed') # space of 'error function'
# for a variety of learning rates...
for alpha in ([0.002,0.01,0.03]):
print("alpha = ",alpha)
xval = 6 # starting point
x_arr = [xval]
y_arr = [f.evalf(subs={x:xval})]
maxiter = 50
# do the descent
for iter in range(maxiter):
# these two lines are just for plotting later
x_arr.append(xval)
y_arr.append( f.evalf(subs={x:xval}) )
# update via gradient descent
xval = xval - alpha * dfdx.evalf(subs={x:xval})
print(" final xval = ",xval)
ax.plot(x_arr,y_arr,'o-',label="alpha = "+str(alpha))
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels)
plt.show()
```
All the runs start at $x=6$. Notice how the runs marked in orange and green go on to find a "local" minimum, but they don't find the "global" minimum (the overall lowest point) like the run marked in red does. The problem of ending up at non-global local minima is a generic problem for all kinds of optimization tasks. It tends to get even worse when you add more parameters...
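One common (if brute-force) hedge against this -- my own addition here, not part of the original discussion -- is "random restarts": run gradient descent from several random starting points and keep the best result. A minimal sketch, using plain-NumPy versions of the function and derivative above:

```python
import numpy as np

def f(x):
    return (x - 1)**4 - 20*(x - 1)**2 + 10*x + 1

def dfdx(x):
    return 4*(x - 1)**3 - 40*(x - 1) + 10

np.random.seed(0)
best_x, best_f = None, np.inf
for trial in range(8):
    x = np.random.uniform(-5, 7)      # random starting point
    for _ in range(300):              # plain gradient descent
        x = x - 0.002 * dfdx(x)
    if f(x) < best_f:                 # keep the lowest minimum seen so far
        best_x, best_f = x, f(x)
print("best x =", best_x, ", f(best x) =", best_f)
```

With enough restarts, at least one run is likely to land in the global basin -- though "enough" grows quickly with the number of dimensions.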
### Multidimensional Gradient Descent
*(A descent into darkness...)*
Let's define a function of two variables, that's got at least one minimum in it. We'll choose
$$f(x,y) = -\left( \cos x + 3\cos y \right) /2,$$
which actually has infinitely many minima, but we'll try to 'zoom in' on just one.
We can visualize this function via the graph produced by the code below; in the graph, darker areas show lower values than lighter areas, and there is a minimum at the point $x=0,y=0$ where $f(0,0)=-2$.
```python
import numpy as np, matplotlib.pyplot as plt
def f(x,y):
return -( np.cos(x) + 3*np.cos(y) )/2
x = y = np.linspace(-4, 4, 100)
z = np.zeros([len(x), len(y)])
for i in range(len(x)):
for j in range(len(y)):
z[j, i] = f(x[i], y[j])
fig, ax = plt.subplots()
ax.update({'xlabel':'x', 'ylabel':'y'})
cs = ax.pcolor(x, y, z, cmap=plt.cm.afmhot)
plt.gca().set_aspect('equal', adjustable='box')
cbar = fig.colorbar(cs, orientation='vertical')
plt.show()
```
The way we find a minimum is similar to what we did before, except we use partial derivatives in the x- and y-directions:
$$x_{new} = x_{old} + \Delta x,\ \ \ \ \ \ \Delta x = - \alpha {\partial f\over \partial x} $$
$$y_{new} = y_{old} + \Delta y,\ \ \ \ \ \ \Delta y = - \alpha {\partial f\over \partial y}.$$
```python
from __future__ import print_function # for backwards-compatibility w/ Python2
import numpy as np, matplotlib.pyplot as plt
# our function
def f(x,y):
return -( np.cos(x) + 3*np.cos(y) )/2
def dfdx(x,y):
return np.sin(x)/2
def dfdy(x,y):
return 3*np.sin(y)/2
# variables for this run
alpha = 0.5
xval, yval = 2.5, 1.5 # starting guess(es)
x_arr = []
y_arr = []
maxiter = 20
for iter in range(maxiter): # gradient descent loop
x_arr.append(xval)
y_arr.append(yval)
    gx, gy = dfdx(xval,yval), dfdy(xval,yval) # evaluate both partials at the same point
    xval = xval - alpha * gx
    yval = yval - alpha * gy
print("Final xval, yval = ",xval,yval,". Target is (0,0)")
# background image: plot the color background
x = y = np.linspace(-4, 4, 100)
z = np.zeros([len(x), len(y)])
for i in range(len(x)):
for j in range(len(y)):
z[j, i] = f(x[i], y[j])
fig, ax = plt.subplots()
ax.update({'xlabel':'x', 'ylabel':'y'})
cs = ax.pcolor(x, y, z, cmap=plt.cm.afmhot)
plt.gca().set_aspect('equal', adjustable='box')
cbar = fig.colorbar(cs, orientation='vertical')
# plot the progress of our optimization
plt.plot(x_arr,y_arr,zorder=1)
plt.scatter(x_arr,y_arr,zorder=2,c=range(len(x_arr)),cmap=plt.cm.viridis)
plt.show()
```
In the above figure, we've shown the 'path' the algorithm takes in $x$-$y$ space, coloring the dots according to iteration number, so that the first points are dark purple, and later points tend to yellow.
Note that due to the asymmetry in the function (between $x$ and $y$), the path descends rapidly in $y$, and then travels along the "valley" in $x$ to reach the minimum. This "long narrow valley" behavior is common in multidimensional optimization problems: the system may 'solve' one parameter quickly, but require thousands of operations to find the other one.
Many sophisticated schemes have arisen to handle this challenge, and we won't cover them here. For now, suffice it to say that, yes, this sort of thing happens. You may have 'found' highly accurate values for certain parameters, but others are bogging down the process of convergence.
*Next time, we'll cover a common application of optimization: Least Squares Regression...*
## Least Squares Regression
This is such a common thing to do in science and statistics, that everyone should learn how it works. We'll do it for linear relationships, but it generalizes to nonlinear situations as well.
### How to Fit a Line
Let's say we're trying to fit a line to a bunch of data. We've been given $n$ data points with coordinates $(x_i,y_i)$ where $i=1..n$. The problem becomes, given a line $f(x) = mx+b$, find the values of the parameters $m$ and $b$ which minimize the overall "error".
*(TODO: add an illustrative figure here.)*
The error can take many forms; one is the squared error $SE$, which is just the sum of the squares of the "distances" between each data point's $y$-value and the "guess" from the line fit $f$ at each value of $x$:
$$ SE = (f(x_1) - y_1)^2 + (f(x_2) - y_2)^2 + \dots + (f(x_n)-y_n)^2.$$
We can write this concisely as
$$ SE = \sum_{i=1}^n (f(x_i)-y_i)^2.$$
Another popular form is the "mean squared error" $MSE$, which is just $SE/n$:
$$ MSE = {1\over n}\sum_{i=1}^n (f(x_i)-y_i)^2.$$
The MSE has the nice feature that as you add more data points, it tends to hold a more-or-less consistent value (as opposed to the SE which gets bigger as you add more points). We'll use the MSE in the work that follows.
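A quick numerical check of that claim (my own addition): for residuals of a fixed typical size, the SE grows in proportion to $n$ while the MSE hovers around a constant.

```python
import numpy as np

np.random.seed(0)
for n in (100, 1000, 10000):
    r = np.random.normal(size=n)   # stand-in for the residuals f(x_i) - y_i
    print(n, ": SE =", (r**2).sum(), ", MSE =", (r**2).mean())
```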
So expanding out $f(x)$, we see that the MSE is a function of $m$ and $b$, and these are the parameters we'll vary to minimize the MSE:
$$ MSE(m,b) = {1\over n}\sum_{i=1}^n (mx_i+b-y_i)^2.$$
So, following our earlier work on multidimensional optimization, we start with guesses for $m$ and $b$ and then update according to gradient descent (differentiating the MSE term-by-term, the chain rule brings out a factor of $x_i$ for $m$ and a factor of $1$ for $b$):
$$m_{new} = m_{old} + \Delta m,\ \ \ \ \ \ \Delta m = -\alpha{\partial (MSE)\over\partial m} = -\alpha{2\over n}\sum_{i=1}^n (mx_i+b-y_i)(x_i) $$
$$b_{new} = b_{old} + \Delta b,\ \ \ \ \ \ \Delta b = -\alpha{\partial (MSE)\over\partial b} = -\alpha{2\over n}\sum_{i=1}^n (mx_i+b-y_i)(1).$$
So, to start off, let's get some data...
```python
# Set up the input data
n = 20
np.random.seed(1) # for reproducibility
x_data = np.random.uniform(size=n) # random points for x
m_exact = 2.0
b_exact = 1.5
y_data = m_exact * x_data + b_exact
y_data += 0.3*np.random.normal(size=n) # add noise
# Plot the data
def plot_data(x_data, y_data, axis_labels=('x','y'), zero_y=False):
fig, ax = plt.subplots()
ax.update({'xlabel':axis_labels[0], 'ylabel':axis_labels[1]})
ax.plot(x_data, y_data,'o')
if (zero_y):
ax.set_ylim([0,ax.get_ylim()[1]*1.1])
plt.show()
plot_data(x_data,y_data, zero_y=True)
```
*Note: in contrast to earlier parts of this document, which include complete Python programs in every code cell, for brevity's sake we will start using the notebook "as intended", relying on the internal state and adding successive bits of code which make use of the "memory" of previously-defined variables.*
Let's map out the MSE for this group of points, as a function of possible $m$ and $b$ values...
```python
# map out the MSE for various values of m and b
def MSE(x,y,m,b):
    # NumPy array operations give the mean over all data points at once
return ((m*x + b - y)**2).mean()
mm = bb = np.linspace(0, 4, 50)
z = np.zeros([len(mm), len(bb)])
for i in range(len(mm)):
for j in range(len(bb)):
z[j, i] = MSE(x_data,y_data, mm[i],bb[j])
fig, ax = plt.subplots()
ax.update({'xlabel':'m', 'ylabel':'b'})
cs = ax.pcolor(mm, bb, np.log(z), cmap=plt.cm.afmhot)
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
```
We see the minimum near the "exact" values chosen in the beginning. (Note that we've plotted the logarithm of the MSE just to make the colors stand out better.)
Next we will choose starting guesses for $m$ and $b$, and use gradient descent to fit the line...
```python
m = 3.5 # initial guess
b = 3.5
m_arr = []
b_arr = []
def dMSEdm(x,y,m,b):
return (2*(m*x + b - y) *x).mean()
def dMSEdb(x,y,m,b):
return (2*(m*x + b - y)).mean()
alpha = 0.1
maxiter, printevery = 500, 4
for iter in range(maxiter):
m_arr.append(m)
b_arr.append(b)
if (0 == iter % printevery):
print(iter,": b, m = ",b,m,", MSE = ",MSE(x_data,y_data,m,b))
m = m - alpha * dMSEdm(x_data,y_data,m,b)
b = b - alpha * dMSEdb(x_data,y_data,m,b)
print("Final result: m = ",m,", b = ",b)
# background image: plot the color background (remembered from before)
fig, ax = plt.subplots()
ax.update({'xlabel':'m', 'ylabel':'b'})
cs = ax.pcolor(mm, bb, np.log(z), cmap=plt.cm.afmhot)
plt.gca().set_aspect('equal', adjustable='box')
# plot the progress of our descent
plt.plot(m_arr,b_arr,zorder=1)
plt.scatter(m_arr,b_arr,zorder=2,c=range(len(m_arr)),cmap=plt.cm.viridis)
plt.show()
```
*Note that the optimized values $(m,b)$ that we find may not exactly match the "exact" values we used to make the data, because the noise we added to the data can throw this off. In the limit where the noise amplitude goes to zero, our optimized values will exactly match the "exact" values used to generate the data.*
Let's see the results of our line fit...
```python
# plot the points
fig, ax = plt.subplots()
ax.update({'xlabel':'x', 'ylabel':'y'})
ax.plot(x_data,y_data,'o')
ax.set_ylim([0,ax.get_ylim()[1]*1.1])
# and plot the line we fit
xlim = ax.get_xlim()
x_line = np.linspace(xlim[0],xlim[1],2)
y_line = m*x_line + b
ax.plot(x_line,y_line)
plt.show()
```
Great!
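As a sanity check (my addition; this isn't in the original post), we can compare our gradient-descent answer against NumPy's closed-form least-squares solver:

```python
# np.polyfit solves the same least-squares problem in closed form.
# For degree 1 it returns the coefficients highest power first, i.e. [m, b].
m_ls, b_ls = np.polyfit(x_data, y_data, 1)
print("np.polyfit:       m =", m_ls, ", b =", b_ls)
print("gradient descent: m =", m, ", b =", b)
```

The two should agree to several decimal places once the descent has converged.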
### Least Squares Fitting with Nonlinear Functions
We can generalize the technique described above to fit polynomials
$$ f(x) = c_0 + c_1 x + c_2 x^2 + ...c_k x^k,$$
where $c_0...c_k$ are the parameters we will tune, and $k$ is the order of the polynomial. (Typically people use the letter $a$ for polynomial coefficients, but in the math rendering of Jupyter, $\alpha$ and $a$ look too much alike, so we'll use $c$.) Written more succinctly,
$$ f(x) = \sum_{j=0}^k c_j x^j.$$
(Indeed, we could even try non-polynomial basis functions, e.g.,
$ f(x) = c_0 + c_1 g(x) + c_2 h(x) + ...,$
but let's stick to polynomials for now.)
The key thing to note is that for each parameter $c_j$, the update $\Delta c_j$ will be
$$\Delta c_j = -\alpha {\partial (MSE)\over \partial c_j}
= -\alpha {\partial (MSE)\over \partial f}{\partial f\over \partial c_j}$$
$$= -\alpha {2\over n}\sum_{i=1}^n [f(x_i)-y_i](x_i)^{j} $$
*(Note that we are not taking the derivative with respect to $x_i$, but rather with respect to $c_j$. Thus there is no "power rule" that needs to be applied to this derivative. Also there is no sum over $j$.)*
The following is a complete code for doing this, along with some added refinements:
* $\alpha$ is now $\alpha_j$, i.e. different learning rates for different directions
* we initialize $\alpha_j$ such that larger powers of $x$ start with smaller learning rates
* we put the fitting code inside a method (with a bunch of parameters) so we can call it later
```python
from __future__ import print_function # for backwards-compatibility w/ Python2
import numpy as np, matplotlib.pyplot as plt
def f(x,c):
y = 0*x # f will work on single floats or arrays
for j in range(c.size):
y += c[j]*(x**j)
return y
def polyfit(x_data,y_data, c_start=None, order=None, maxiter=500, printevery = 25,
alpha_start=0.9, alpha_start_power=0.3):
# function definitions
def MSE(x_arr,y_arr,c):
f_arr = f(x_arr,c)
return ((f_arr - y_arr)**2).mean()
    def dMSEdcj(x_arr,y_arr,c,j): # derivative of MSE wrt c_j (*not* wrt x!)
f_arr = f(x_arr,c)
return ( 2* ( f_arr - y_arr) * x_arr**j ).mean()
if ((c_start is None) and (order is None)):
print("Error: Either specify initial guesses for coefficients,",
"or specify the order of the polynomial")
        raise ValueError("polyfit: need either c_start or order")  # halt
if c_start is not None:
order = c_start.size-1
c = np.copy(c_start)
elif order is not None:
c = np.random.uniform(size=order+1) # random guess for starting point
assert(c.size == order+1) # check against conflicting info
k = order
print(" Initial guess: c = " ,np.array_str(c, precision=2))
alpha = np.ones(c.size)
for j in range(c.size): # start with smaller alphas for higher powers of x
alpha[j] = alpha_start*(alpha_start_power)**(j)
MSE_old = 1e99
for iter in range(maxiter+1): # do the descent
for j in range(c.size):
c[j] = c[j] - alpha[j] * dMSEdcj(x_data,y_data,c,j)
MSE_cur = MSE(x_data,y_data,c)
if (MSE_cur > MSE_old): # adjust if runaway behavior starts
alpha[j] *= 0.3
print(" Notice: decreasing alpha[",j,"] to ",alpha[j])
MSE_old = MSE_cur
if (0 == iter % printevery): # progress log
print('{:4d}'.format(iter),"/",maxiter,": MSE =",'{:9.6g}'.format(MSE_cur),
", c = ",np.array_str(c, precision=3),sep='')
print("")
return c
# Set up input data
n = 100
np.random.seed(2) # for reproducibility
x_data = np.random.uniform(-2.5,3,size=n) # some random points for x
c_data = np.array([-4,-3,5,.5,-2,.5]) # params to generate data (5th-degree polynomial)
y_data = f(x_data, c_data)
y_data += 0.02*np.random.normal(size=n)*y_data # add a (tiny) bit of noise
#---- Perform Least Squares Fit
c = polyfit(x_data, y_data, c_start=c_data*np.random.random(), maxiter=500)
#----- Plot the results
def plot_data_and_curve(x_data,y_data,axis_labels=('x','y'), ):
# plot the points
fig, ax = plt.subplots()
ax.update({'xlabel':axis_labels[0], 'ylabel':axis_labels[1]})
ax.plot(x_data,y_data,'o')
# and plot the curve we fit
xlim = ax.get_xlim()
x_line = np.linspace(xlim[0],xlim[1],100)
y_line = f(x_line, c)
ax.plot(x_line,y_line)
plt.show()
plot_data_and_curve(x_data,y_data)
```
Now, it turns out that polynomials are often *terrible* things to try to fit arbitrary data with, because they can 'blow up' as $|x|$ increases, and this causes instability. But for a variety of physics problems (see below), polynomials can be just what we're after. Plus, that made a nice demonstration, for now.
(For more general functions, I actually wrote a multi-parameter SymPy gradient-descent routine that is completely general, but it's *terrifically slow* so I won't be posting it here. If you really want it, contact me.)
## Learning Gravity
Ok. All we're going to do next is fit a parabola to the motion of a falling ball -- and that's supposed to tell us something deep about physics. Sounds silly, right? 'Everybody' knows objects moving in a gravitational field follow parabolas (both in space & time); the more math-savvy may complain that we're simply going to 'get out of this' what we 'put into it.'
Well, from a philosophical standpoint and from the way that these methods will generalize to other situations, there are significant implications from the *methodology* we're about to follow.
**The Challenge**: Given a set of one-dimensional data of position vs. time $y(t)$, can we find the underlying equation that gives rise to it? Better put, can we fit a model to it, and how well can we fit it, and what kind of model will it be anyway?
(This is the sort of thing that statisticians *do*, but it's also something physicists do, and one could argue, this is what *everybody* does *all the time*. )
Let's get started. I'm just going to specify y(t) at a series of $n+1$ time steps $t_i$ ($t_0$...$t_n$) and we'll make them evenly spaced, and we'll leave out any noise at all -- perfect data. :-)
```python
g_exact = 9.8 # a physical parameter we'll find a fit for
dt = 0.01
tmax = 1 # final time
t_data = np.arange(0,tmax,step=dt) # time values
nt = t_data.size
print("dt = ",dt,", nt = ",nt)
y0 = 1.234 # initial position, choose anything
v0 = 3.1415 # initial velocity
#assign the data
y_data = y0 + v0*t_data - 0.5 * g_exact * t_data**2
# y_data *= np.random.uniform(low=.9, high=1.1, size=(y_data.size)) # for later; add noise in
plot_data(t_data,y_data, axis_labels=('t','y'))
```
Can we fit this with a polynomial? Sure, let's do that, using the code from before...
```python
c = polyfit(t_data, y_data, order=2, alpha_start = 10.0, maxiter=1000, printevery=100)
print("Our fit: y(t) = ",c[0]," + ",c[1],"*t + ",c[2],"*t**2",sep='')
print("Compare to exact: y(t) = ",y0, " + ",v0, "*t - ",0.5*g_exact,"*t**2",sep='')
print("Estimate for g = ",-2*c[2])
plot_data_and_curve(t_data,y_data, axis_labels=('t','y'))
```
What if we try fitting higher-order terms? Are their coefficients negligible? The system *may* converge, but it will take *a lot* more iterations... (be prepared to wait!)
```python
c = polyfit(t_data, y_data, order=3, alpha_start = 1.0, maxiter=700000, printevery=10000)
print("Our fit: y(t) = ",c[0]," + ",c[1],"*t + ",c[2],"*t**2 + ",c[3],"*t**3",sep='')
print("Compare to exact: y(t) = ",y0, " + ",v0, "*t - ",0.5*g_exact,"*t**2",sep='')
print("Estimate for g = ",-2*c[2])
```
```
Initial guess: c = [ 0.33 0.23 0.63 0.41]
0/700000: MSE = 0.828106, c = [ 1.189 -0.045 0.563 0.398]
Notice: decreasing alpha[ 0 ] to 0.3
10000/700000: MSE =0.000464818, c = [ 1.291 2.454 -3.188 -1.138]
20000/700000: MSE =0.000369748, c = [ 1.285 2.528 -3.373 -1.015]
30000/700000: MSE =0.000294122, c = [ 1.279 2.594 -3.538 -0.906]
40000/700000: MSE =0.000233965, c = [ 1.275 2.654 -3.685 -0.808]
50000/700000: MSE =0.000186111, c = [ 1.27 2.706 -3.817 -0.72 ]
60000/700000: MSE =0.000148045, c = [ 1.266 2.753 -3.934 -0.642]
70000/700000: MSE =0.000117765, c = [ 1.263 2.795 -4.038 -0.573]
80000/700000: MSE =9.36783e-05, c = [ 1.26 2.833 -4.131 -0.511]
90000/700000: MSE =7.4518e-05, c = [ 1.257 2.866 -4.214 -0.456]
100000/700000: MSE =5.92766e-05, c = [ 1.254 2.896 -4.289 -0.407]
110000/700000: MSE =4.71526e-05, c = [ 1.252 2.922 -4.355 -0.363]
120000/700000: MSE =3.75083e-05, c = [ 1.25 2.946 -4.414 -0.323]
130000/700000: MSE =2.98366e-05, c = [ 1.248 2.967 -4.466 -0.288]
140000/700000: MSE =2.37341e-05, c = [ 1.247 2.986 -4.513 -0.257]
150000/700000: MSE =1.88797e-05, c = [ 1.246 3.003 -4.555 -0.229]
160000/700000: MSE =1.50182e-05, c = [ 1.244 3.018 -4.592 -0.205]
170000/700000: MSE =1.19465e-05, c = [ 1.243 3.031 -4.626 -0.183]
180000/700000: MSE =9.50301e-06, c = [ 1.242 3.043 -4.655 -0.163]
190000/700000: MSE =7.55933e-06, c = [ 1.241 3.054 -4.682 -0.145]
200000/700000: MSE =6.0132e-06, c = [ 1.24 3.063 -4.705 -0.129]
210000/700000: MSE =4.7833e-06, c = [ 1.24 3.072 -4.726 -0.115]
220000/700000: MSE =3.80496e-06, c = [ 1.239 3.079 -4.745 -0.103]
230000/700000: MSE =3.02672e-06, c = [ 1.239 3.086 -4.762 -0.092]
240000/700000: MSE =2.40766e-06, c = [ 1.238 3.092 -4.777 -0.082]
250000/700000: MSE =1.91521e-06, c = [ 1.238 3.097 -4.79 -0.073]
260000/700000: MSE =1.52349e-06, c = [ 1.237 3.102 -4.802 -0.065]
270000/700000: MSE =1.21188e-06, c = [ 1.237 3.106 -4.813 -0.058]
280000/700000: MSE =9.64014e-07, c = [ 1.237 3.11 -4.822 -0.052]
290000/700000: MSE =7.66841e-07, c = [ 1.236 3.114 -4.83 -0.046]
300000/700000: MSE =6.09997e-07, c = [ 1.236 3.117 -4.838 -0.041]
310000/700000: MSE =4.85233e-07, c = [ 1.236 3.119 -4.845 -0.037]
320000/700000: MSE =3.85987e-07, c = [ 1.236 3.122 -4.851 -0.033]
330000/700000: MSE =3.0704e-07, c = [ 1.235 3.124 -4.856 -0.029]
340000/700000: MSE =2.4424e-07, c = [ 1.235 3.126 -4.861 -0.026]
350000/700000: MSE =1.94285e-07, c = [ 1.235 3.127 -4.865 -0.023]
360000/700000: MSE =1.54547e-07, c = [ 1.235 3.129 -4.869 -0.021]
370000/700000: MSE =1.22937e-07, c = [ 1.235 3.13 -4.872 -0.019]
380000/700000: MSE =9.77925e-08, c = [ 1.235 3.132 -4.875 -0.017]
390000/700000: MSE =7.77907e-08, c = [ 1.235 3.133 -4.878 -0.015]
400000/700000: MSE =6.188e-08, c = [ 1.235 3.134 -4.88 -0.013]
410000/700000: MSE =4.92235e-08, c = [ 1.235 3.134 -4.882 -0.012]
420000/700000: MSE =3.91556e-08, c = [ 1.235 3.135 -4.884 -0.01 ]
430000/700000: MSE =3.1147e-08, c = [ 1.234 3.136 -4.886 -0.009]
440000/700000: MSE =2.47764e-08, c = [ 1.234 3.136 -4.887 -0.008]
450000/700000: MSE =1.97088e-08, c = [ 1.234 3.137 -4.889 -0.007]
460000/700000: MSE =1.56777e-08, c = [ 1.234 3.138 -4.89 -0.007]
470000/700000: MSE =1.24711e-08, c = [ 1.234 3.138 -4.891 -0.006]
480000/700000: MSE =9.92037e-09, c = [ 1.234 3.138 -4.892 -0.005]
490000/700000: MSE =7.89133e-09, c = [ 1.234e+00 3.139e+00 -4.893e+00 -4.691e-03]
500000/700000: MSE =6.27729e-09, c = [ 1.234e+00 3.139e+00 -4.894e+00 -4.184e-03]
510000/700000: MSE =4.99338e-09, c = [ 1.234e+00 3.139e+00 -4.894e+00 -3.731e-03]
520000/700000: MSE =3.97207e-09, c = [ 1.234e+00 3.139e+00 -4.895e+00 -3.328e-03]
530000/700000: MSE =3.15965e-09, c = [ 1.234e+00 3.140e+00 -4.896e+00 -2.968e-03]
540000/700000: MSE =2.5134e-09, c = [ 1.234e+00 3.140e+00 -4.896e+00 -2.647e-03]
550000/700000: MSE =1.99932e-09, c = [ 1.234e+00 3.140e+00 -4.896e+00 -2.361e-03]
560000/700000: MSE =1.5904e-09, c = [ 1.234e+00 3.140e+00 -4.897e+00 -2.106e-03]
570000/700000: MSE =1.26511e-09, c = [ 1.234e+00 3.140e+00 -4.897e+00 -1.878e-03]
580000/700000: MSE =1.00635e-09, c = [ 1.234e+00 3.140e+00 -4.897e+00 -1.675e-03]
590000/700000: MSE =8.0052e-10, c = [ 1.234e+00 3.141e+00 -4.898e+00 -1.494e-03]
600000/700000: MSE =6.36787e-10, c = [ 1.234e+00 3.141e+00 -4.898e+00 -1.332e-03]
610000/700000: MSE =5.06543e-10, c = [ 1.234e+00 3.141e+00 -4.898e+00 -1.188e-03]
620000/700000: MSE =4.02939e-10, c = [ 1.234e+00 3.141e+00 -4.898e+00 -1.060e-03]
630000/700000: MSE =3.20524e-10, c = [ 1.234e+00 3.141e+00 -4.899e+00 -9.454e-04]
640000/700000: MSE =2.54967e-10, c = [ 1.234e+00 3.141e+00 -4.899e+00 -8.432e-04]
650000/700000: MSE =2.02818e-10, c = [ 1.234e+00 3.141e+00 -4.899e+00 -7.520e-04]
660000/700000: MSE =1.61335e-10, c = [ 1.234e+00 3.141e+00 -4.899e+00 -6.707e-04]
670000/700000: MSE =1.28336e-10, c = [ 1.234e+00 3.141e+00 -4.899e+00 -5.982e-04]
680000/700000: MSE =1.02087e-10, c = [ 1.234e+00 3.141e+00 -4.899e+00 -5.335e-04]
690000/700000: MSE =8.12072e-11, c = [ 1.234e+00 3.141e+00 -4.899e+00 -4.758e-04]
700000/700000: MSE =6.45976e-11, c = [ 1.234e+00 3.141e+00 -4.899e+00 -4.244e-04]
Our fit: y(t) = 1.23402130221 + 3.1412436463*t + -4.89936171114*t**2 + -0.000424401050714*t**3
Compare to exact: y(t) = 1.234 + 3.1415*t - 4.9*t**2
Estimate for g = 9.79872342227
```
So, in this case, we were able to *show* not only that the data fits a parabola well, but that the higher-order term (for $t^3$) is negligible! Great science! In practice, however, for non-perfect data, this does not work out: the higher-order term introduces an extreme sensitivity to the noise, which can render the results inconclusive.
**Exercise:** Go back to where the data is generated, uncomment the line that says "# for later; add noise in", and re-run the fitting. You will find that the coefficients for the cubic polynomial do *not* resemble the original values at all, whereas the coefficients for a quadratic polynomial, while not the same as before, will still be "close."
Thus, by *hypothesizing* a parabolic dependence, we're able to correctly deduce the parameters of the motion (initial position & velocity, and acceleration), and we get a very low error in doing so. :-) Trying to show that higher-order terms in a polynomial expansion don't contribute worked for "perfect data," but in a practical case it didn't, because polynomials are "ill-behaved." Still, we got some useful physics out of it. And that works for many applications. We could stop here.
...although...
*What if our data wasn't parabolic?* Sure, for motion in a uniform gravitational field this is fine, but what if we want to model the sinusoidal motion of a simple harmonic oscillator? In that case, guessing a parabola would only work for very early times (thanks to [Taylor's theorem](https://en.wikipedia.org/wiki/Taylor's_theorem)). Sure, we could fit a model where we've explicitly put in a sine function in the code -- and I encourage you to write your own code to do this -- but perhaps there's a way to *deduce* the motion, by looking at the local behavior and thereby 'learning' the differential equation underlying the motion.
**Exercise:** Copy the `polyfit()` code elsewhere (e.g. to text file or a new cell in this Jupyter notebook or a new notebook) and rename it `sinefit()`, and modify it to fit a sine function instead of a polynomial:
$$y(t) = A\sin(\omega t + \phi),$$
where the fit parameters will be the amplitude $A$, frequency $\omega$ and phase constant $\phi$. Try fitting to data generated for $A=3$, $\omega=2$, $\phi=1.57$ on $0\le t \le 10$.
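*Hint (my addition, in case you get stuck): by the same chain-rule reasoning we used for the polynomial coefficients, with residuals $r_i = A\sin(\omega t_i + \phi) - y_i$, the updates work out to*

$$\Delta A = -\alpha\,{2\over n}\sum_{i=1}^n r_i \sin(\omega t_i + \phi),\ \ \ \ \Delta\omega = -\alpha\,{2\over n}\sum_{i=1}^n r_i\, A\, t_i \cos(\omega t_i + \phi),\ \ \ \ \Delta\phi = -\alpha\,{2\over n}\sum_{i=1}^n r_i\, A \cos(\omega t_i + \phi).$$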
As an example, you can check your answer against [this](http://hedges.belmont.edu/~shawley/PHY4410/sinefit_a3w2p1.57.png).
<br>
<br>
<div align="center"><i>The discussion goes on, but I'm breaking it off into a "Part Ib" for a separate post. In that post, we'll switch from fitting the data "globally" to looking "locally," in preparation for work in "Time Series Prediction."
</i></div>
-SH
<hr>
## Afterword: Alternatives to "Simple" Gradient Descent
There are *lots* of schemes that incorporate more sophisticated approaches in order to achieve convergence more reliably and more quickly than the "simple" gradient descent we've been doing.
Such schemes introduce concepts such as "momentum" and go by names such as Adagrad, Adadelta, Adam, RMSProp, etc... For an excellent overview of such methods, I recommend [Sebastian Ruder's blog post](http://sebastianruder.com/optimizing-gradient-descent/) which includes some great animations!
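As a taste of what those schemes do, here is a minimal sketch of gradient descent with "momentum" (my own illustration, not taken from Ruder's post): the update keeps a running velocity, so steps accumulate along consistent directions and oscillations damp out.

```python
def dfdx(x):                      # gradient of our old friend f(x) = (x-1)**2 + 1
    return 2*(x - 1)

alpha, beta = 0.1, 0.9            # learning rate and momentum coefficient
x, v = -5.0, 0.0
for _ in range(100):
    v = beta*v - alpha*dfdx(x)    # velocity: decayed history plus fresh gradient
    x = x + v
print("final x =", x)             # should land close to the minimum at x = 1
```

On the "long narrow valley" problem from earlier, momentum helps because the velocity builds up along the valley floor while the side-to-side components of successive gradients cancel.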
|
## Set working directory to the one containing the
## UCI dataset, e.g. setwd("C:/...UCI HAR Dataset")
########################################################################
###################### Script Begins Here ##############################
########################################################################
xtest <- read.table("test/X_test.txt", header = FALSE)
## Read X_test (test data) as table
ytest <- read.table("test/y_test.txt", header = FALSE)
## Read y_test (activity labels) as table
subj <- read.table("test/subject_test.txt", header = FALSE)
## Read subject_test (subject labels) as table
xtrain <- read.table("train/X_train.txt", header = FALSE)
## Read X_train (train data) as table
ytrain <- read.table("train/y_train.txt", header = FALSE)
## Read y_train (activity labels) as table
subk <- read.table("train/subject_train.txt", header = FALSE)
## Read subject_train (subject labels) as table
features <- read.table("features.txt", header = FALSE)
## Read features (variables) as table
## Variable names considered clear given context of data
## Variable names will NOT be changed
yestolabel <- function(ydat) {
## Function to make test activity labels descriptive (argument: ytest)
trps <- length(ydat$V1)
for (i in 1:trps) {
if (ydat$V1[i] == 1) {
ydat$V1[i] = "Walking"
}
else if (ydat$V1[i] == 2) {
ydat$V1[i] = "WalkingUpstairs"
}
else if (ydat$V1[i] == 3) {
ydat$V1[i] = "WalkingDownstairs"
}
else if (ydat$V1[i] == 4) {
ydat$V1[i] = "Sitting"
}
else if (ydat$V1[i] == 5) {
ydat$V1[i] = "Standing"
}
else {
ydat$V1[i] = "Laying"
}
}
vtest <<- ydat
## returns data set with descriptive rows as vtest
}
datactivity <- function(label, dat1) {
## Function to combine test data and vtest
## (arguments: vtest, xtest) in that order
Activity <- label$V1
dat3 <- cbind(Activity, dat1)
xtest2 <<- dat3
## returns combined data set as xtest2
}
submerge <- function(bdat, sdat) {
## Function to combine xtest2 and subject labels
## (arguments: xtest2, subj) in that order
Subject <- sdat$V1
dat4 <- cbind(Subject, bdat)
xtest3 <<- dat4
## returns combined data set as xtest3
}
variplace <- function(todat, var1) {
## Function to label columns of combined test/train
## data with features labels (arguments: totdata, variables)
## in that order
for (i in 1:length(var1)) {
names(todat)[2+i] <- var1[i]
}
totdata1 <<- todat
## returns combined test/train data with feature variables
## as column names
}
meansdex <- function(todat) {
## Function to extract measurements of mean and standard deviation
## for each measurement (argument: totdata1)
meancols <- todat[,grepl("mean",names(todat))]
## Extract measurement means from test/train data by column
stdcols <- todat[,grepl("std",names(todat))]
## Extract measurement std from test/train data by column
fidat <- cbind(meancols, stdcols)
fimes <- todat[,1:2]
finfo <- cbind(fimes, fidat)
meanfreak <- finfo[,!grepl("meanFreq",names(finfo))]
## measurements featuring the string 'meanFreq' are removed as they
## do not represent (activity) measurement means
totdata2 <<- meanfreak
## returns dataframe containing mean/sd columns
## from the test/train data
}
aggmeans <- function(todat) {
## Function to create 'second independent tidy data set with the
## average of each variable for each activity and each subject.'
## (argument: totdata2))
todat[,1] <- as.factor(todat[,1])
## Convert 'todat$Subject' to factor so variables can be aggregated
## by it
aggdata2 <-aggregate(todat,by=list(todat$Subject,todat$Activity),FUN=mean, na.rm=TRUE)
## Aggregation of variables by Subject and Activity. Mean is then calculated
aggdata2$Subject <- NULL
aggdata2$Activity <- NULL
## Excess columns removed
colnames(aggdata2)[1] <- "Subject"
colnames(aggdata2)[2] <- "Activity"
## Columns relabeled
aggdata2 <- aggdata2[order(aggdata2$Subject),] ## sort rows by Subject
Final_Data <<- aggdata2
## returns final tidy data set
}
## Data structuring function calls on test data
yestolabel(ytest)
datactivity(vtest, xtest)
submerge(xtest2, subj)
## Clearing excess Variables
xtest4 <- xtest3
variables <- as.character(features$V2)
rm(vtest)
rm(xtest2)
rm(xtest3)
rm(features)
## Data structuring function calls on train data
yestolabel(ytrain)
datactivity(vtest, xtrain)
submerge(xtest2, subk)
## Clearing excess Variables
xtrain4 <- xtest3
rm(vtest)
rm(xtest2)
rm(xtest3)
# Combining test and train data by row
totdata <- rbind(xtest4, xtrain4)
# Replace column names with feature variables
variplace(totdata, variables)
# Extract mean and sd measurements
meansdex(totdata1)
# Clearing excess Variables
rm(totdata1)
# Create Final Tidy Data Set
aggmeans(totdata2)
# Write Tidy Data Set to file
## write.table(Final_Data, file="TidyData.csv", sep=",")
write.csv(Final_Data, file="TidyData.csv", row.names=FALSE)
|
[STATEMENT]
lemma numgcd0:
assumes g0: "numgcd t = 0"
shows "Inum bs t = 0"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Inum bs t = 0
[PROOF STEP]
using g0[simplified numgcd_def]
[PROOF STATE]
proof (prove)
using this:
numgcdh t (maxcoeff t) = 0
goal (1 subgoal):
1. Inum bs t = 0
[PROOF STEP]
by (induct t rule: numgcdh.induct) (auto simp add: natabs0 maxcoeff_pos max.absorb2)
|
Child Thoughts in Picture and Verse (by M. K. Westcott); Blackie, 1925
|
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
by_cases hm : m ≤ m0
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : ¬m ≤ m0
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
swap
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : ¬m ≤ m0
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
simp_rw [condexp_of_not_le hm]
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : ¬m ≤ m0
⊢ 0 =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
rfl
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
by_cases hμm : SigmaFinite (μ.trim hm)
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm : SigmaFinite (Measure.trim μ hm)
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm : ¬SigmaFinite (Measure.trim μ hm)
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
swap
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm : ¬SigmaFinite (Measure.trim μ hm)
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
simp_rw [condexp_of_not_sigmaFinite hm hμm]
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm : ¬SigmaFinite (Measure.trim μ hm)
⊢ 0 =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
rfl
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm : SigmaFinite (Measure.trim μ hm)
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
haveI : SigmaFinite (μ.trim hm) := hμm
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this : SigmaFinite (Measure.trim μ hm)
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
have : SigmaFinite ((μ.restrict s).trim hm) :=
by
rw [← restrict_trim hm _ hs]
exact Restrict.sigmaFinite _ s
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this : SigmaFinite (Measure.trim μ hm)
⊢ SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
[PROOFSTEP]
rw [← restrict_trim hm _ hs]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this : SigmaFinite (Measure.trim μ hm)
⊢ SigmaFinite (Measure.restrict (Measure.trim μ hm) s)
[PROOFSTEP]
exact Restrict.sigmaFinite _ s
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
by_cases hf_int : Integrable f μ
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : ¬Integrable f
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
swap
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : ¬Integrable f
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
rw [condexp_undef hf_int]
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] 0
[PROOFSTEP]
refine' ae_eq_of_forall_set_integral_eq_of_sigmaFinite' hm _ _ _ _ _
[GOAL]
case pos.refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
⊢ ∀ (s_1 : Set α), MeasurableSet s_1 → ↑↑(Measure.restrict μ s) s_1 < ⊤ → IntegrableOn (μ[f|m]) s_1
[PROOFSTEP]
exact fun t _ _ => integrable_condexp.integrableOn.integrableOn
[GOAL]
case pos.refine'_2
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
⊢ ∀ (s_1 : Set α), MeasurableSet s_1 → ↑↑(Measure.restrict μ s) s_1 < ⊤ → IntegrableOn 0 s_1
[PROOFSTEP]
exact fun t _ _ => (integrable_zero _ _ _).integrableOn
[GOAL]
case pos.refine'_3
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
⊢ ∀ (s_1 : Set α),
MeasurableSet s_1 →
↑↑(Measure.restrict μ s) s_1 < ⊤ →
∫ (x : α) in s_1, (μ[f|m]) x ∂Measure.restrict μ s = ∫ (x : α) in s_1, OfNat.ofNat 0 x ∂Measure.restrict μ s
[PROOFSTEP]
intro t ht _
[GOAL]
case pos.refine'_3
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑(Measure.restrict μ s) t < ⊤
⊢ ∫ (x : α) in t, (μ[f|m]) x ∂Measure.restrict μ s = ∫ (x : α) in t, OfNat.ofNat 0 x ∂Measure.restrict μ s
[PROOFSTEP]
rw [Measure.restrict_restrict (hm _ ht), set_integral_condexp hm hf_int (ht.inter hs), ←
Measure.restrict_restrict (hm _ ht)]
[GOAL]
case pos.refine'_3
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑(Measure.restrict μ s) t < ⊤
⊢ ∫ (x : α) in t, f x ∂Measure.restrict μ s = ∫ (x : α) in t, OfNat.ofNat 0 x ∂Measure.restrict μ s
[PROOFSTEP]
refine' set_integral_congr_ae (hm _ ht) _
[GOAL]
case pos.refine'_3
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑(Measure.restrict μ s) t < ⊤
⊢ ∀ᵐ (x : α) ∂Measure.restrict μ s, x ∈ t → f x = OfNat.ofNat 0 x
[PROOFSTEP]
filter_upwards [hf] with x hx _ using hx
[GOAL]
case pos.refine'_4
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
⊢ AEStronglyMeasurable' m (μ[f|m]) (Measure.restrict μ s)
[PROOFSTEP]
exact stronglyMeasurable_condexp.aeStronglyMeasurable'
[GOAL]
case pos.refine'_5
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ s] 0
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
hf_int : Integrable f
⊢ AEStronglyMeasurable' m 0 (Measure.restrict μ s)
[PROOFSTEP]
exact stronglyMeasurable_zero.aeStronglyMeasurable'
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ sᶜ] 0
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
by_cases hm : m ≤ m0
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ sᶜ] 0
hm : m ≤ m0
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ sᶜ] 0
hm : ¬m ≤ m0
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
swap
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ sᶜ] 0
hm : ¬m ≤ m0
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
simp_rw [condexp_of_not_le hm, Set.indicator_zero']
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ sᶜ] 0
hm : ¬m ≤ m0
⊢ 0 =ᵐ[μ] 0
[PROOFSTEP]
rfl
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ sᶜ] 0
hm : m ≤ m0
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
have hsf_zero : ∀ g : α → E, g =ᵐ[μ.restrict sᶜ] 0 → s.indicator g =ᵐ[μ] g := fun g =>
indicator_ae_eq_of_restrict_compl_ae_eq_zero (hm _ hs)
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ sᶜ] 0
hm : m ≤ m0
hsf_zero : ∀ (g : α → E), g =ᵐ[Measure.restrict μ sᶜ] 0 → Set.indicator s g =ᵐ[μ] g
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
refine' ((hsf_zero (μ[f|m]) (condexp_ae_eq_restrict_zero hs.compl hf)).trans _).symm
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hs : MeasurableSet s
hf : f =ᵐ[Measure.restrict μ sᶜ] 0
hm : m ≤ m0
hsf_zero : ∀ (g : α → E), g =ᵐ[Measure.restrict μ sᶜ] 0 → Set.indicator s g =ᵐ[μ] g
⊢ μ[f|m] =ᵐ[μ] μ[Set.indicator s f|m]
[PROOFSTEP]
exact condexp_congr_ae (hsf_zero f hf).symm
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
by_cases hm : m ≤ m0
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : ¬m ≤ m0
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
swap
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : ¬m ≤ m0
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
simp_rw [condexp_of_not_le hm, Set.indicator_zero']
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : ¬m ≤ m0
⊢ 0 =ᵐ[μ] 0
[PROOFSTEP]
rfl
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
by_cases hμm : SigmaFinite (μ.trim hm)
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm : SigmaFinite (Measure.trim μ hm)
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm : ¬SigmaFinite (Measure.trim μ hm)
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
swap
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm : ¬SigmaFinite (Measure.trim μ hm)
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
simp_rw [condexp_of_not_sigmaFinite hm hμm, Set.indicator_zero']
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm : ¬SigmaFinite (Measure.trim μ hm)
⊢ 0 =ᵐ[μ] 0
[PROOFSTEP]
rfl
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm : SigmaFinite (Measure.trim μ hm)
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
haveI : SigmaFinite (μ.trim hm) := hμm
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this : SigmaFinite (Measure.trim μ hm)
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
have : s.indicator (μ[f|m]) =ᵐ[μ] s.indicator (μ[s.indicator f + sᶜ.indicator f|m]) := by
rw [Set.indicator_self_add_compl s f]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this : SigmaFinite (Measure.trim μ hm)
⊢ Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
[PROOFSTEP]
rw [Set.indicator_self_add_compl s f]
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ μ[Set.indicator s f|m] =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
refine' (this.trans _).symm
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m]) =ᵐ[μ] μ[Set.indicator s f|m]
[PROOFSTEP]
calc
s.indicator (μ[s.indicator f + sᶜ.indicator f|m]) =ᵐ[μ] s.indicator (μ[s.indicator f|m] + μ[sᶜ.indicator f|m]) :=
by
have : μ[s.indicator f + sᶜ.indicator f|m] =ᵐ[μ] μ[s.indicator f|m] + μ[sᶜ.indicator f|m] :=
condexp_add (hf_int.indicator (hm _ hs)) (hf_int.indicator (hm _ hs.compl))
filter_upwards [this] with x hx
classical rw [Set.indicator_apply, Set.indicator_apply, hx]
_ = s.indicator (μ[s.indicator f|m]) + s.indicator (μ[sᶜ.indicator f|m]) := (s.indicator_add' _ _)
_ =ᵐ[μ] s.indicator (μ[s.indicator f|m]) + s.indicator (sᶜ.indicator (μ[sᶜ.indicator f|m])) :=
by
refine' Filter.EventuallyEq.rfl.add _
have : sᶜ.indicator (μ[sᶜ.indicator f|m]) =ᵐ[μ] μ[sᶜ.indicator f|m] :=
by
refine' (condexp_indicator_aux hs.compl _).symm.trans _
· exact indicator_ae_eq_restrict_compl (hm _ hs.compl)
· rw [Set.indicator_indicator, Set.inter_self]
filter_upwards [this] with x hx
by_cases hxs : x ∈ s
· simp only [hx, hxs, Set.indicator_of_mem]
· simp only [hxs, Set.indicator_of_not_mem, not_false_iff]
_ =ᵐ[μ] s.indicator (μ[s.indicator f|m]) := by
rw [Set.indicator_indicator, Set.inter_compl_self, Set.indicator_empty', add_zero]
_ =ᵐ[μ] μ[s.indicator f|m] := by
refine' (condexp_indicator_aux hs _).symm.trans _
· exact indicator_ae_eq_restrict_compl (hm _ hs)
· rw [Set.indicator_indicator, Set.inter_self]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m]) =ᵐ[μ]
Set.indicator s (μ[Set.indicator s f|m] + μ[Set.indicator sᶜ f|m])
[PROOFSTEP]
have : μ[s.indicator f + sᶜ.indicator f|m] =ᵐ[μ] μ[s.indicator f|m] + μ[sᶜ.indicator f|m] :=
condexp_add (hf_int.indicator (hm _ hs)) (hf_int.indicator (hm _ hs.compl))
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝¹ : SigmaFinite (Measure.trim μ hm)
this✝ : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
this : μ[Set.indicator s f + Set.indicator sᶜ f|m] =ᵐ[μ] μ[Set.indicator s f|m] + μ[Set.indicator sᶜ f|m]
⊢ Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m]) =ᵐ[μ]
Set.indicator s (μ[Set.indicator s f|m] + μ[Set.indicator sᶜ f|m])
[PROOFSTEP]
filter_upwards [this] with x hx
[GOAL]
case h
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝¹ : SigmaFinite (Measure.trim μ hm)
this✝ : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
this : μ[Set.indicator s f + Set.indicator sᶜ f|m] =ᵐ[μ] μ[Set.indicator s f|m] + μ[Set.indicator sᶜ f|m]
x : α
hx : (μ[Set.indicator s f + Set.indicator sᶜ f|m]) x = (μ[Set.indicator s f|m] + μ[Set.indicator sᶜ f|m]) x
⊢ Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m]) x =
Set.indicator s (μ[Set.indicator s f|m] + μ[Set.indicator sᶜ f|m]) x
[PROOFSTEP]
classical rw [Set.indicator_apply, Set.indicator_apply, hx]
[GOAL]
case h
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝¹ : SigmaFinite (Measure.trim μ hm)
this✝ : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
this : μ[Set.indicator s f + Set.indicator sᶜ f|m] =ᵐ[μ] μ[Set.indicator s f|m] + μ[Set.indicator sᶜ f|m]
x : α
hx : (μ[Set.indicator s f + Set.indicator sᶜ f|m]) x = (μ[Set.indicator s f|m] + μ[Set.indicator sᶜ f|m]) x
⊢ Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m]) x =
Set.indicator s (μ[Set.indicator s f|m] + μ[Set.indicator sᶜ f|m]) x
[PROOFSTEP]
rw [Set.indicator_apply, Set.indicator_apply, hx]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ Set.indicator s (μ[Set.indicator s f|m]) + Set.indicator s (μ[Set.indicator sᶜ f|m]) =ᵐ[μ]
Set.indicator s (μ[Set.indicator s f|m]) + Set.indicator s (Set.indicator sᶜ (μ[Set.indicator sᶜ f|m]))
[PROOFSTEP]
refine' Filter.EventuallyEq.rfl.add _
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ (fun x => Set.indicator s (μ[Set.indicator sᶜ f|m]) x) =ᵐ[μ] fun x =>
Set.indicator s (Set.indicator sᶜ (μ[Set.indicator sᶜ f|m])) x
[PROOFSTEP]
have : sᶜ.indicator (μ[sᶜ.indicator f|m]) =ᵐ[μ] μ[sᶜ.indicator f|m] :=
by
refine' (condexp_indicator_aux hs.compl _).symm.trans _
· exact indicator_ae_eq_restrict_compl (hm _ hs.compl)
· rw [Set.indicator_indicator, Set.inter_self]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ Set.indicator sᶜ (μ[Set.indicator sᶜ f|m]) =ᵐ[μ] μ[Set.indicator sᶜ f|m]
[PROOFSTEP]
refine' (condexp_indicator_aux hs.compl _).symm.trans _
[GOAL]
case refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ Set.indicator sᶜ f =ᵐ[Measure.restrict μ sᶜᶜ] 0
[PROOFSTEP]
exact indicator_ae_eq_restrict_compl (hm _ hs.compl)
[GOAL]
case refine'_2
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ μ[Set.indicator sᶜ (Set.indicator sᶜ f)|m] =ᵐ[μ] μ[Set.indicator sᶜ f|m]
[PROOFSTEP]
rw [Set.indicator_indicator, Set.inter_self]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝¹ : SigmaFinite (Measure.trim μ hm)
this✝ : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
this : Set.indicator sᶜ (μ[Set.indicator sᶜ f|m]) =ᵐ[μ] μ[Set.indicator sᶜ f|m]
⊢ (fun x => Set.indicator s (μ[Set.indicator sᶜ f|m]) x) =ᵐ[μ] fun x =>
Set.indicator s (Set.indicator sᶜ (μ[Set.indicator sᶜ f|m])) x
[PROOFSTEP]
filter_upwards [this] with x hx
[GOAL]
case h
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝¹ : SigmaFinite (Measure.trim μ hm)
this✝ : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
this : Set.indicator sᶜ (μ[Set.indicator sᶜ f|m]) =ᵐ[μ] μ[Set.indicator sᶜ f|m]
x : α
hx : Set.indicator sᶜ (μ[Set.indicator sᶜ f|m]) x = (μ[Set.indicator sᶜ f|m]) x
⊢ Set.indicator s (μ[Set.indicator sᶜ f|m]) x = Set.indicator s (Set.indicator sᶜ (μ[Set.indicator sᶜ f|m])) x
[PROOFSTEP]
by_cases hxs : x ∈ s
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝¹ : SigmaFinite (Measure.trim μ hm)
this✝ : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
this : Set.indicator sᶜ (μ[Set.indicator sᶜ f|m]) =ᵐ[μ] μ[Set.indicator sᶜ f|m]
x : α
hx : Set.indicator sᶜ (μ[Set.indicator sᶜ f|m]) x = (μ[Set.indicator sᶜ f|m]) x
hxs : x ∈ s
⊢ Set.indicator s (μ[Set.indicator sᶜ f|m]) x = Set.indicator s (Set.indicator sᶜ (μ[Set.indicator sᶜ f|m])) x
[PROOFSTEP]
simp only [hx, hxs, Set.indicator_of_mem]
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝¹ : SigmaFinite (Measure.trim μ hm)
this✝ : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
this : Set.indicator sᶜ (μ[Set.indicator sᶜ f|m]) =ᵐ[μ] μ[Set.indicator sᶜ f|m]
x : α
hx : Set.indicator sᶜ (μ[Set.indicator sᶜ f|m]) x = (μ[Set.indicator sᶜ f|m]) x
hxs : ¬x ∈ s
⊢ Set.indicator s (μ[Set.indicator sᶜ f|m]) x = Set.indicator s (Set.indicator sᶜ (μ[Set.indicator sᶜ f|m])) x
[PROOFSTEP]
simp only [hxs, Set.indicator_of_not_mem, not_false_iff]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ Set.indicator s (μ[Set.indicator s f|m]) + Set.indicator s (Set.indicator sᶜ (μ[Set.indicator sᶜ f|m])) =ᵐ[μ]
Set.indicator s (μ[Set.indicator s f|m])
[PROOFSTEP]
rw [Set.indicator_indicator, Set.inter_compl_self, Set.indicator_empty', add_zero]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ Set.indicator s (μ[Set.indicator s f|m]) =ᵐ[μ] μ[Set.indicator s f|m]
[PROOFSTEP]
refine' (condexp_indicator_aux hs _).symm.trans _
[GOAL]
case refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ Set.indicator s f =ᵐ[Measure.restrict μ sᶜ] 0
[PROOFSTEP]
exact indicator_ae_eq_restrict_compl (hm _ hs)
[GOAL]
case refine'_2
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝² : NormedAddCommGroup E
inst✝¹ : NormedSpace ℝ E
inst✝ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hf_int : Integrable f
hs : MeasurableSet s
hm : m ≤ m0
hμm this✝ : SigmaFinite (Measure.trim μ hm)
this : Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[Set.indicator s f + Set.indicator sᶜ f|m])
⊢ μ[Set.indicator s (Set.indicator s f)|m] =ᵐ[μ] μ[Set.indicator s f|m]
[PROOFSTEP]
rw [Set.indicator_indicator, Set.inter_self]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
⊢ Measure.restrict μ s[f|m] =ᵐ[Measure.restrict μ s] μ[f|m]
[PROOFSTEP]
have : SigmaFinite ((μ.restrict s).trim hm) := by rw [← restrict_trim hm _ hs_m]; infer_instance
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
⊢ SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
[PROOFSTEP]
rw [← restrict_trim hm _ hs_m]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
⊢ SigmaFinite (Measure.restrict (Measure.trim μ hm) s)
[PROOFSTEP]
infer_instance
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
⊢ Measure.restrict μ s[f|m] =ᵐ[Measure.restrict μ s] μ[f|m]
[PROOFSTEP]
rw [ae_eq_restrict_iff_indicator_ae_eq (hm _ hs_m)]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
⊢ Set.indicator s (Measure.restrict μ s[f|m]) =ᵐ[μ] Set.indicator s (μ[f|m])
[PROOFSTEP]
refine' EventuallyEq.trans _ (condexp_indicator hf_int hs_m)
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
⊢ Set.indicator s (Measure.restrict μ s[f|m]) =ᵐ[μ] μ[Set.indicator s f|m]
[PROOFSTEP]
refine' ae_eq_condexp_of_forall_set_integral_eq hm (hf_int.indicator (hm _ hs_m)) _ _ _
[GOAL]
case refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
⊢ ∀ (s_1 : Set α), MeasurableSet s_1 → ↑↑μ s_1 < ⊤ → IntegrableOn (Set.indicator s (Measure.restrict μ s[f|m])) s_1
[PROOFSTEP]
intro t ht _
[GOAL]
case refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ IntegrableOn (Set.indicator s (Measure.restrict μ s[f|m])) t
[PROOFSTEP]
rw [← integrable_indicator_iff (hm _ ht), Set.indicator_indicator, Set.inter_comm, ← Set.indicator_indicator]
[GOAL]
case refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ Integrable (Set.indicator s (Set.indicator t (Measure.restrict μ s[f|m])))
[PROOFSTEP]
suffices h_int_restrict : Integrable (t.indicator ((μ.restrict s)[f|m])) (μ.restrict s)
[GOAL]
case refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
h_int_restrict : Integrable (Set.indicator t (Measure.restrict μ s[f|m]))
⊢ Integrable (Set.indicator s (Set.indicator t (Measure.restrict μ s[f|m])))
[PROOFSTEP]
rw [integrable_indicator_iff (hm _ hs_m), IntegrableOn]
[GOAL]
case refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
h_int_restrict : Integrable (Set.indicator t (Measure.restrict μ s[f|m]))
⊢ Integrable (Set.indicator t (Measure.restrict μ s[f|m]))
[PROOFSTEP]
rw [integrable_indicator_iff (hm _ ht), IntegrableOn] at h_int_restrict ⊢
[GOAL]
case refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
h_int_restrict : Integrable (Measure.restrict μ s[f|m])
⊢ Integrable (Measure.restrict μ s[f|m])
[PROOFSTEP]
exact h_int_restrict
[GOAL]
case h_int_restrict
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ Integrable (Set.indicator t (Measure.restrict μ s[f|m]))
[PROOFSTEP]
exact integrable_condexp.indicator (hm _ ht)
[GOAL]
case refine'_2
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
⊢ ∀ (s_1 : Set α),
MeasurableSet s_1 →
↑↑μ s_1 < ⊤ →
∫ (x : α) in s_1, Set.indicator s (Measure.restrict μ s[f|m]) x ∂μ = ∫ (x : α) in s_1, Set.indicator s f x ∂μ
[PROOFSTEP]
intro t ht _
[GOAL]
case refine'_2
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ ∫ (x : α) in t, Set.indicator s (Measure.restrict μ s[f|m]) x ∂μ = ∫ (x : α) in t, Set.indicator s f x ∂μ
[PROOFSTEP]
calc
∫ x in t, s.indicator ((μ.restrict s)[f|m]) x ∂μ = ∫ x in t, ((μ.restrict s)[f|m]) x ∂μ.restrict s := by
rw [integral_indicator (hm _ hs_m), Measure.restrict_restrict (hm _ hs_m), Measure.restrict_restrict (hm _ ht),
Set.inter_comm]
_ = ∫ x in t, f x ∂μ.restrict s := (set_integral_condexp hm hf_int.integrableOn ht)
_ = ∫ x in t, s.indicator f x ∂μ := by
rw [integral_indicator (hm _ hs_m), Measure.restrict_restrict (hm _ hs_m), Measure.restrict_restrict (hm _ ht),
Set.inter_comm]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ ∫ (x : α) in t, Set.indicator s (Measure.restrict μ s[f|m]) x ∂μ =
∫ (x : α) in t, (Measure.restrict μ s[f|m]) x ∂Measure.restrict μ s
[PROOFSTEP]
rw [integral_indicator (hm _ hs_m), Measure.restrict_restrict (hm _ hs_m), Measure.restrict_restrict (hm _ ht),
Set.inter_comm]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ ∫ (x : α) in t, f x ∂Measure.restrict μ s = ∫ (x : α) in t, Set.indicator s f x ∂μ
[PROOFSTEP]
rw [integral_indicator (hm _ hs_m), Measure.restrict_restrict (hm _ hs_m), Measure.restrict_restrict (hm _ ht),
Set.inter_comm]
[GOAL]
case refine'_3
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m m0 : MeasurableSpace α
inst✝³ : NormedAddCommGroup E
inst✝² : NormedSpace ℝ E
inst✝¹ : CompleteSpace E
μ : Measure α
f : α → E
s : Set α
hm : m ≤ m0
inst✝ : SigmaFinite (Measure.trim μ hm)
hs_m : MeasurableSet s
hf_int : Integrable f
this : SigmaFinite (Measure.trim (Measure.restrict μ s) hm)
⊢ AEStronglyMeasurable' m (Set.indicator s (Measure.restrict μ s[f|m])) μ
[PROOFSTEP]
exact (stronglyMeasurable_condexp.indicator hs_m).aeStronglyMeasurable'
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
⊢ μ[f|m] =ᵐ[Measure.restrict μ s] μ[f|m₂]
[PROOFSTEP]
rw [ae_eq_restrict_iff_indicator_ae_eq (hm _ hs_m)]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
⊢ Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[f|m₂])
[PROOFSTEP]
have hs_m₂ : MeasurableSet[m₂] s := by rwa [← Set.inter_univ s, ← hs Set.univ, Set.inter_univ]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
⊢ MeasurableSet s
[PROOFSTEP]
rwa [← Set.inter_univ s, ← hs Set.univ, Set.inter_univ]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
⊢ Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[f|m₂])
[PROOFSTEP]
by_cases hf_int : Integrable f μ
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
⊢ Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[f|m₂])
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : ¬Integrable f
⊢ Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[f|m₂])
[PROOFSTEP]
swap
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : ¬Integrable f
⊢ Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[f|m₂])
[PROOFSTEP]
simp_rw [condexp_undef hf_int]
[GOAL]
case neg
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : ¬Integrable f
⊢ Set.indicator s 0 =ᵐ[μ] Set.indicator s 0
[PROOFSTEP]
rfl
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
⊢ Set.indicator s (μ[f|m]) =ᵐ[μ] Set.indicator s (μ[f|m₂])
[PROOFSTEP]
refine' ((condexp_indicator hf_int hs_m).symm.trans _).trans (condexp_indicator hf_int hs_m₂)
[GOAL]
case pos
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
⊢ μ[Set.indicator s f|m] =ᵐ[μ] μ[Set.indicator s f|m₂]
[PROOFSTEP]
refine'
ae_eq_of_forall_set_integral_eq_of_sigmaFinite' hm₂ (fun s _ _ => integrable_condexp.integrableOn)
(fun s _ _ => integrable_condexp.integrableOn) _ _ stronglyMeasurable_condexp.aeStronglyMeasurable'
[GOAL]
case pos.refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
⊢ ∀ (s_1 : Set α),
MeasurableSet s_1 →
↑↑μ s_1 < ⊤ → ∫ (x : α) in s_1, (μ[Set.indicator s f|m]) x ∂μ = ∫ (x : α) in s_1, (μ[Set.indicator s f|m₂]) x ∂μ
case pos.refine'_2
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
⊢ AEStronglyMeasurable' m₂ (μ[Set.indicator s f|m]) μ
[PROOFSTEP]
swap
[GOAL]
case pos.refine'_2
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
⊢ AEStronglyMeasurable' m₂ (μ[Set.indicator s f|m]) μ
[PROOFSTEP]
have : StronglyMeasurable[m] (μ[s.indicator f|m]) := stronglyMeasurable_condexp
[GOAL]
case pos.refine'_2
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
this : StronglyMeasurable (μ[Set.indicator s f|m])
⊢ AEStronglyMeasurable' m₂ (μ[Set.indicator s f|m]) μ
[PROOFSTEP]
refine' this.aeStronglyMeasurable'.aeStronglyMeasurable'_of_measurableSpace_le_on hm hs_m (fun t => (hs t).mp) _
[GOAL]
case pos.refine'_2
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
this : StronglyMeasurable (μ[Set.indicator s f|m])
⊢ μ[Set.indicator s f|m] =ᵐ[Measure.restrict μ sᶜ] 0
[PROOFSTEP]
exact condexp_ae_eq_restrict_zero hs_m.compl (indicator_ae_eq_restrict_compl (hm _ hs_m))
[GOAL]
case pos.refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
⊢ ∀ (s_1 : Set α),
MeasurableSet s_1 →
↑↑μ s_1 < ⊤ → ∫ (x : α) in s_1, (μ[Set.indicator s f|m]) x ∂μ = ∫ (x : α) in s_1, (μ[Set.indicator s f|m₂]) x ∂μ
[PROOFSTEP]
intro t ht _
[GOAL]
case pos.refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ ∫ (x : α) in t, (μ[Set.indicator s f|m]) x ∂μ = ∫ (x : α) in t, (μ[Set.indicator s f|m₂]) x ∂μ
[PROOFSTEP]
have : ∫ x in t, (μ[s.indicator f|m]) x ∂μ = ∫ x in s ∩ t, (μ[s.indicator f|m]) x ∂μ :=
by
rw [← integral_add_compl (hm _ hs_m) integrable_condexp.integrableOn]
suffices ∫ x in sᶜ, (μ[s.indicator f|m]) x ∂μ.restrict t = 0 by
rw [this, add_zero, Measure.restrict_restrict (hm _ hs_m)]
rw [Measure.restrict_restrict (MeasurableSet.compl (hm _ hs_m))]
suffices μ[s.indicator f|m] =ᵐ[μ.restrict sᶜ] 0
by
rw [Set.inter_comm, ← Measure.restrict_restrict (hm₂ _ ht)]
calc
∫ x : α in t, (μ[s.indicator f|m]) x ∂μ.restrict sᶜ = ∫ x : α in t, 0 ∂μ.restrict sᶜ :=
by
refine' set_integral_congr_ae (hm₂ _ ht) _
filter_upwards [this] with x hx _ using hx
_ = 0 := integral_zero _ _
refine' condexp_ae_eq_restrict_zero hs_m.compl _
exact indicator_ae_eq_restrict_compl (hm _ hs_m)
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ ∫ (x : α) in t, (μ[Set.indicator s f|m]) x ∂μ = ∫ (x : α) in s ∩ t, (μ[Set.indicator s f|m]) x ∂μ
[PROOFSTEP]
rw [← integral_add_compl (hm _ hs_m) integrable_condexp.integrableOn]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ ∫ (x : α) in s, (μ[Set.indicator s f|m]) x ∂Measure.restrict μ t +
∫ (x : α) in sᶜ, (μ[Set.indicator s f|m]) x ∂Measure.restrict μ t =
∫ (x : α) in s ∩ t, (μ[Set.indicator s f|m]) x ∂μ
[PROOFSTEP]
suffices ∫ x in sᶜ, (μ[s.indicator f|m]) x ∂μ.restrict t = 0 by
rw [this, add_zero, Measure.restrict_restrict (hm _ hs_m)]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
this : ∫ (x : α) in sᶜ, (μ[Set.indicator s f|m]) x ∂Measure.restrict μ t = 0
⊢ ∫ (x : α) in s, (μ[Set.indicator s f|m]) x ∂Measure.restrict μ t +
∫ (x : α) in sᶜ, (μ[Set.indicator s f|m]) x ∂Measure.restrict μ t =
∫ (x : α) in s ∩ t, (μ[Set.indicator s f|m]) x ∂μ
[PROOFSTEP]
rw [this, add_zero, Measure.restrict_restrict (hm _ hs_m)]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ ∫ (x : α) in sᶜ, (μ[Set.indicator s f|m]) x ∂Measure.restrict μ t = 0
[PROOFSTEP]
rw [Measure.restrict_restrict (MeasurableSet.compl (hm _ hs_m))]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ ∫ (x : α) in sᶜ ∩ t, (μ[Set.indicator s f|m]) x ∂μ = 0
[PROOFSTEP]
suffices μ[s.indicator f|m] =ᵐ[μ.restrict sᶜ] 0
by
rw [Set.inter_comm, ← Measure.restrict_restrict (hm₂ _ ht)]
calc
∫ x : α in t, (μ[s.indicator f|m]) x ∂μ.restrict sᶜ = ∫ x : α in t, 0 ∂μ.restrict sᶜ :=
by
refine' set_integral_congr_ae (hm₂ _ ht) _
filter_upwards [this] with x hx _ using hx
_ = 0 := integral_zero _ _
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
this : μ[Set.indicator s f|m] =ᵐ[Measure.restrict μ sᶜ] 0
⊢ ∫ (x : α) in sᶜ ∩ t, (μ[Set.indicator s f|m]) x ∂μ = 0
[PROOFSTEP]
rw [Set.inter_comm, ← Measure.restrict_restrict (hm₂ _ ht)]
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
this : μ[Set.indicator s f|m] =ᵐ[Measure.restrict μ sᶜ] 0
⊢ ∫ (x : α) in t, (μ[Set.indicator s f|m]) x ∂Measure.restrict μ sᶜ = 0
[PROOFSTEP]
calc
∫ x : α in t, (μ[s.indicator f|m]) x ∂μ.restrict sᶜ = ∫ x : α in t, 0 ∂μ.restrict sᶜ :=
by
refine' set_integral_congr_ae (hm₂ _ ht) _
filter_upwards [this] with x hx _ using hx
_ = 0 := integral_zero _ _
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
this : μ[Set.indicator s f|m] =ᵐ[Measure.restrict μ sᶜ] 0
⊢ ∫ (x : α) in t, (μ[Set.indicator s f|m]) x ∂Measure.restrict μ sᶜ = ∫ (x : α) in t, 0 ∂Measure.restrict μ sᶜ
[PROOFSTEP]
refine' set_integral_congr_ae (hm₂ _ ht) _
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
this : μ[Set.indicator s f|m] =ᵐ[Measure.restrict μ sᶜ] 0
⊢ ∀ᵐ (x : α) ∂Measure.restrict μ sᶜ, x ∈ t → (μ[Set.indicator s f|m]) x = 0
[PROOFSTEP]
filter_upwards [this] with x hx _ using hx
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ μ[Set.indicator s f|m] =ᵐ[Measure.restrict μ sᶜ] 0
[PROOFSTEP]
refine' condexp_ae_eq_restrict_zero hs_m.compl _
[GOAL]
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
⊢ Set.indicator s f =ᵐ[Measure.restrict μ sᶜ] 0
[PROOFSTEP]
exact indicator_ae_eq_restrict_compl (hm _ hs_m)
[GOAL]
case pos.refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
this : ∫ (x : α) in t, (μ[Set.indicator s f|m]) x ∂μ = ∫ (x : α) in s ∩ t, (μ[Set.indicator s f|m]) x ∂μ
⊢ ∫ (x : α) in t, (μ[Set.indicator s f|m]) x ∂μ = ∫ (x : α) in t, (μ[Set.indicator s f|m₂]) x ∂μ
[PROOFSTEP]
have hst_m : MeasurableSet[m] (s ∩ t) := (hs _).mpr (hs_m₂.inter ht)
[GOAL]
case pos.refine'_1
α : Type u_1
𝕜 : Type u_2
E : Type u_3
m✝ m0✝ : MeasurableSpace α
inst✝⁴ : NormedAddCommGroup E
inst✝³ : NormedSpace ℝ E
inst✝² : CompleteSpace E
μ✝ : Measure α
f : α → E
s : Set α
m m₂ m0 : MeasurableSpace α
μ : Measure α
hm : m ≤ m0
hm₂ : m₂ ≤ m0
inst✝¹ : SigmaFinite (Measure.trim μ hm)
inst✝ : SigmaFinite (Measure.trim μ hm₂)
hs_m : MeasurableSet s
hs : ∀ (t : Set α), MeasurableSet (s ∩ t) ↔ MeasurableSet (s ∩ t)
hs_m₂ : MeasurableSet s
hf_int : Integrable f
t : Set α
ht : MeasurableSet t
a✝ : ↑↑μ t < ⊤
this : ∫ (x : α) in t, (μ[Set.indicator s f|m]) x ∂μ = ∫ (x : α) in s ∩ t, (μ[Set.indicator s f|m]) x ∂μ
hst_m : MeasurableSet (s ∩ t)
⊢ ∫ (x : α) in t, (μ[Set.indicator s f|m]) x ∂μ = ∫ (x : α) in t, (μ[Set.indicator s f|m₂]) x ∂μ
[PROOFSTEP]
simp_rw [this, set_integral_condexp hm₂ (hf_int.indicator (hm _ hs_m)) ht,
set_integral_condexp hm (hf_int.indicator (hm _ hs_m)) hst_m, integral_indicator (hm _ hs_m),
Measure.restrict_restrict (hm _ hs_m), ← Set.inter_assoc, Set.inter_self]
|
Thanks for your help! It appears as though BCBSNC is the only real option I have that will cover a pre-existing condition. I currently only have rent and utilities at school, which run $515/mo, and will have student loan payments at $1000/mo once the grace period runs out. I was hoping to go back in January, but that doesn't look like a possibility with all of this happening. I have currently moved back home with my parents as a temporary measure. I could probably sublease my place at college and cover the rent costs, but I don't really want to do that if I am to return over the following few months. I have enough saved in investments and savings to cover student loan payments for a good bit. I tend to be fairly savvy with investment strategy and frugal in spending, so I am pretty well set financially, besides student loan payments and medical costs/insurance. Plus my parents have said they will help with loan payments and medical bills as they are able. I am able to work part time right now helping with my dad's small business. I could probably find a part-time job someplace else, but I am not sure if I could find one that offers a benefits package that would make it worthwhile. Any other advice is greatly appreciated! Right now it looks like I will have to go with a plan that is $500/mo until I can re-enroll in school. I don't want to pull my investments if I don't have to, but if push comes to shove I will do what is necessary.

They grounded her and took away her phone once in a while. "We had reached our limits in trying to figure out what worked for her, for her to see the light," Jill Walker said. "This wasn't good for her, not the best thing for her as a person, to be in a relationship with him." At the time of the shooting, Walker was a junior in college and Gaul was 18. The shooting capped a weekend full of unusual events surrounding Gaul and Walker, who had broken up by then.
The Bills shouldn't be among the leaders in dead money next year because McDermott and Beane's roster purge is practically complete. Next year's cap situation looks good. The Bills currently have the NFL's second-fewest cap commitments at $106.4 million, with 30 players under contract.
Second, put two refs in a replay booth for all the games. These refs review every single play. If they see a clear and obvious error before the next play is snapped, they signal down to the field and the call is changed. This can be anything: "That OL held that guy, call holding. The defense got there early, call PI." If they don't see anything before the next play happens, it wasn't obvious enough and the call stands. The on-field ref doesn't even review the play; they just announce the call if it was changed.
Williams's mother died of breast cancer in 2010, and initially he wanted to honor her by wearing pink throughout the season, a request the NFL denied because it occurred outside its officially sanctioned (and publicised) October observance of breast cancer awareness.
|
module TypedNBE
-- Typed NBE, based on https://hal.inria.fr/hal-01397929/document
import Data.Vect
data Ty = Base String | Arrow Ty Ty
data Syn : Vect n Ty -> Ty -> Type where
Var : Elem a ctx -> Syn ctx a
Lam : Syn (a :: ctx) b -> Syn ctx (Arrow a b)
App : Syn ctx (Arrow a b) -> Syn ctx a -> Syn ctx b
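-- V is the (abstract) type of variables appearing in normal forms; it is
-- kept as an undefined type family here, playing the role of a parameter of
-- the construction in the referenced paper.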
V : Ty -> Type
mutual
data Nf : Ty -> Type where
NLam : (V a -> Nf b) -> Nf (Arrow a b)
NAt : At (Base a) -> Nf (Base a)
data At : Ty -> Type where
AApp : At (Arrow a b) -> Nf a -> At b
AVar : V a -> At a
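-- Nf: beta-normal, eta-long forms. At: atomic (neutral) terms, i.e. a
-- variable applied to a spine of normal-form arguments.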
data Sem : Ty -> Type where
Fun : (Sem a -> Sem b) -> Sem (Arrow a b)
Base' : At (Base a) -> Sem (Base a)
data Env : Vect n Ty -> Type where
Nil : Env Nil
(::) : Sem ty -> Env ctx -> Env (ty :: ctx)
lookup' : Elem ty ctx -> Env ctx -> Sem ty
lookup' Here (x :: _) = x
lookup' (There later) (x :: xs) = lookup' later xs
eval : Env ctx -> Syn ctx ty -> Sem ty
eval env (Var x) = lookup' x env
eval env (Lam body) = Fun (\x => eval (x :: env) body)
eval env (App f x) with (eval env f)
eval env (App f x) | (Fun f') = f' (eval env x)
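-- reflect eta-expands an atomic term into the semantic domain; reify maps a
-- semantic value back into a normal form. Together they form the
-- type-directed half of normalisation by evaluation.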
mutual
reflect : At ty -> Sem ty
reflect {ty = (Base _)} at = Base' at
reflect {ty = (Arrow a b)} at = Fun (\x => reflect (AApp at (reify x)))
reify : Sem ty -> Nf ty
reify {ty = (Base _)} (Base' base) = NAt base
reify {ty = (Arrow a b)} (Fun f) = NLam (\x => reify (f (reflect (AVar x))))
nbe : Syn [] ty -> Nf ty
nbe syn = reify (eval [] syn)
example : Syn [] (Arrow (Base "a") (Base "a"))
example = Lam (App (Lam (Var Here)) (Var Here))
test : Nf (Arrow (Base "a") (Base "a"))
test = nbe example
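-- Normalising `example` contracts its administrative redex, so `test` should
-- denote the identity, i.e. NLam (\x => NAt (AVar x)).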
|
#' Query data from vcf file
#'
#' Read in GWAS summary data with filters on datasets (if multiple datasets per file) and/or chromosome/position, rsids or p-values. Chooses the most efficient query method for the detected operating system. Typically chrompos searches are the fastest. On Windows, rsid or p-value filters on a file will be slow.
#'
#' @param vcf Path to GWAS-VCF file or VCF object e.g. output from VariantAnnotation::readVcf or gwasvcftools::query_vcf
#' @param chrompos Either vector of chromosome and position ranges e.g. "1:1000" or "1:1000-2000", or data frame with columns `chrom`, `start`, `end`.
#' @param rsid Vector of rsids
#' @param pval P-value threshold (NOT -log10)
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#' @param rsidx Path to rsidx index file
#' @param build ="GRCh37" Build of vcffile
#' @param os The operating system. Default is as detected. Determines the method used to perform query
#' @param proxies ="no" If SNPs are absent then look for proxies (yes) or not (no). Can also mask all target SNPs and only return proxies (only), for testing purposes. Currently only possible if querying rsid.
#' @param bfile Path to plink bed/bim/fam LD reference panel
#' @param dbfile Path to sqlite tag SNP database
#' @param tag_kb =5000 Proxy parameter
#' @param tag_nsnp =5000 Proxy parameter
#' @param tag_r2 =0.6 Proxy parameter
#' @param threads =1 Number of threads
#'
#' @export
#' @return vcf object
query_gwas <- function(vcf, chrompos=NULL, rsid=NULL, pval=NULL, id=NULL, rsidx=NULL, build="GRCh37", os=Sys.info()[['sysname']], proxies="no", bfile=NULL, dbfile=NULL, tag_kb=5000, tag_nsnp=5000, tag_r2=0.6, threads=1)
{
if(is.character(vcf))
{
stopifnot(file.exists(vcf))
fileflag <- TRUE
} else {
stopifnot(class(vcf) %in% c("CollapsedVCF", "ExpandedVCF"))
fileflag <- FALSE
}
if(sum(c(!is.null(chrompos), !is.null(rsid), !is.null(pval))) != 1)
{
stop("Must specify filters only for one of chrompos, rsid or pval")
}
if(proxies != "no")
{
if(is.null(rsid))
{
stop("Proxies can only be searched for if rsid query specified")
}
}
if(!is.null(chrompos))
{
if(!fileflag)
{
return(query_chrompos_vcf(chrompos, vcf))
} else {
if(!check_bcftools())
{
return(query_chrompos_file(chrompos, vcf, id, build))
} else {
return(query_chrompos_bcftools(chrompos, vcf, id))
}
}
}
if(!is.null(rsid))
{
stopifnot(proxies %in% c("yes", "no", "only"))
if(proxies != "no")
{
return(proxy_match(vcf, rsid, bfile=bfile, dbfile=dbfile, proxies=proxies, tag_kb=tag_kb, tag_nsnp=tag_nsnp, tag_r2=tag_r2, threads=threads))
}
if(!fileflag)
{
return(query_rsid_vcf(rsid, vcf))
} else {
if(!is.null(rsidx))
{
return(query_rsid_rsidx(rsid, vcf, id, rsidx))
}
if(!check_bcftools())
{
return(query_rsid_file(rsid, vcf, id, build))
} else {
return(query_rsid_bcftools(rsid, vcf, id))
}
}
}
if(!is.null(pval))
{
if(!fileflag)
{
return(query_pval_vcf(pval, vcf, id))
} else {
if(!check_bcftools())
{
return(query_pval_file(pval, vcf, id, build))
} else {
return(query_pval_bcftools(pval, vcf, id))
}
}
}
}
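# Illustrative usage (the file paths and rsid below are hypothetical):
# query_gwas("gwas.vcf.gz", chrompos="1:109817192-109818192")
# query_gwas("gwas.vcf.gz", rsid=c("rs1205"), proxies="yes", bfile="/path/to/ref_panel")
# query_gwas("gwas.vcf.gz", pval=5e-8)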
df_to_granges <- function(df)
{
GenomicRanges::GRanges(seqnames=df[["chrom"]], ranges=IRanges::IRanges(start=df[["start"]], end=df[["end"]]))
}
#' Parse chromosome:position
#'
#' Takes data frame or vector of chromosome position ranges and parses to granges object
#'
#' @param chrompos Either vector of chromosome and position ranges e.g. "1:1000" or "1:1000-2000", or data frame with columns `chrom`, `start`, `end`.
#' @param radius Add radius to the specified positions. Default = NULL
#'
#' @export
#' @return GRanges object
parse_chrompos <- function(chrompos, radius=NULL)
{
if("GRanges" %in% class(chrompos))
{
if(!is.null(radius))
{
chrompos <- GenomicRanges::GRanges(
seqnames = GenomeInfoDb::seqnames(chrompos),
ranges = IRanges::IRanges(
start = pmax(BiocGenerics::start(chrompos) - radius, 0),
end = BiocGenerics::end(chrompos) + radius
),
strand = BiocGenerics::strand(chrompos)
)
}
return(chrompos)
} else if(is.data.frame(chrompos)) {
stopifnot(is.data.frame(chrompos))
stopifnot(all(c("chrom", "start", "end") %in% names(chrompos)))
return(df_to_granges(chrompos))
} else if(!is.character(chrompos)) {
stop("chrompos must be data frame with columns chrom, start, end, or character vector of <chr:pos> or <chr:start-end>")
}
a <- stringr::str_split(chrompos, ":")
chrom <- sapply(a, function(x) x[1])
pos <- sapply(a, function(x) x[2])
i <- grepl("-", pos)
temp <- stringr::str_split(pos[i], "-")
pos1 <- pos
pos2 <- pos
pos1[i] <- sapply(temp, function(x) {x[1]})
pos2[i] <- sapply(temp, function(x) {x[2]})
pos1 <- as.numeric(pos1)
pos2 <- as.numeric(pos2)
if(!is.null(radius))
{
pos1 <- pmax(0, pos1 - radius)
pos2 <- pos2 + radius
}
return(df_to_granges(data.frame(chrom, start=pos1, end=pos2, stringsAsFactors=FALSE)))
}
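# e.g. parse_chrompos("1:1000-2000") or parse_chrompos(c("1:1000", "2:5000"), radius=500)
# each return a GRanges object suitable for VariantAnnotation::ScanVcfParam(which=...)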
#' Query vcf file, extracting by chromosome and position
#'
#'
#' @param chrompos Either vector of chromosome and position ranges e.g. "1:1000" or "1:1000-2000", or data frame with columns `chrom`, `start`, `end`.
#' @param vcffile Path to .vcf.gz GWAS summary data file
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#' @param build Default="GRCh37" Build of vcffile
#'
#' @export
#' @return VCF object
query_chrompos_file <- function(chrompos, vcffile, id=NULL, build="GRCh37")
{
chrompos <- parse_chrompos(chrompos)
if(!is.null(id))
{
param <- VariantAnnotation::ScanVcfParam(which=chrompos, samples=id)
} else {
param <- VariantAnnotation::ScanVcfParam(which=chrompos)
}
tab <- Rsamtools::TabixFile(vcffile)
vcf <- VariantAnnotation::readVcf(tab, build, param=param)
return(vcf)
}
#' Query vcf file, extracting by rsid
#'
#' @param rsid Vector of rsids. Use DBSNP build (???)
#' @param vcffile Path to .vcf.gz GWAS summary data file
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#' @param build Default="GRCh37" Build of vcffile
#'
#' @export
#' @return VCF object
query_rsid_file <- function(rsid, vcffile, id=NULL, build="GRCh37")
{
message("Note, this is much slower than searching by chromosome/position (e.g. see query_chrompos_file)")
vcf <- Rsamtools::TabixFile(vcffile)
fil <- function(x)
{
grepl(paste(rsid, collapse="|"), x)
}
tempfile <- tempfile()
VariantAnnotation::filterVcf(vcf, build, tempfile, prefilters=S4Vectors::FilterRules(list(fil=fil)), verbose=TRUE)
if(!is.null(id))
{
o <- VariantAnnotation::readVcf(tempfile, param=VariantAnnotation::ScanVcfParam(samples=id))
} else {
o <- VariantAnnotation::readVcf(tempfile)
}
unlink(tempfile)
# Grep isn't matching on exact word so do second pass here
o <- query_rsid_vcf(rsid, o)
return(o)
}
#' Query pval from vcf file
#'
#' @param pval P-value threshold (NOT -log10)
#' @param vcffile Path to tabix indexed vcf file
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#' @param build Default="GRCh37"
#'
#' @export
#' @return VCF object
query_pval_file <- function(pval, vcffile, id=NULL, build="GRCh37")
{
if(is.null(id))
{
id <- VariantAnnotation::samples(VariantAnnotation::scanVcfHeader(vcffile))
}
stopifnot(length(id) == 1)
message("Filtering p-value based on id ", id)
message("Note, this is much slower than searching by chromosome/position (e.g. see query_chrompos_file)")
vcf <- Rsamtools::TabixFile(vcffile)
fil <- function(x)
{
VariantAnnotation::geno(x)[["LP"]][,id,drop=TRUE] > -log10(pval)
}
tempfile <- tempfile()
VariantAnnotation::filterVcf(vcf, build, tempfile, filters=S4Vectors::FilterRules(list(fil=fil)), verbose=TRUE)
o <- VariantAnnotation::readVcf(tempfile)
unlink(tempfile)
return(o)
}
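# Design note: unlike the rsid prefilter above, the p-value test needs the
# parsed FORMAT/LP field, so it is passed to filterVcf as `filters=`
# (applied per parsed record, slower) rather than `prefilters=` (raw text lines).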
#' Query chrompos from vcf object
#'
#' @param chrompos Either vector of chromosome and position ranges e.g. "1:1000" or "1:1000-2000", or data frame with columns `chrom`, `start`, `end`.
#' @param vcf VCF object (e.g. from readVcf)
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#'
#' @export
#' @return VCF object
query_chrompos_vcf <- function(chrompos, vcf, id=NULL)
{
if(is.null(id))
{
id <- VariantAnnotation::samples(VariantAnnotation::header(vcf))
}
colid <- which(VariantAnnotation::samples(VariantAnnotation::header(vcf)) == id)
chrompos <- parse_chrompos(chrompos)
i <- IRanges::findOverlaps(SummarizedExperiment::rowRanges(vcf), chrompos) %>% S4Vectors::queryHits() %>% unique %>% sort
vcf[i,colid]
}
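# findOverlaps(rowRanges(vcf), chrompos) treats the VCF rows as the query, so
# the (deduplicated, sorted) queryHits are row indices into the VCF itself.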
#' Query rsid from vcf object
#'
#' @param rsid Vector of rsids
#' @param vcf VCF object (e.g. from readVcf)
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#'
#' @export
#' @return VCF object
query_rsid_vcf <- function(rsid, vcf, id=NULL)
{
if(is.null(id))
{
id <- VariantAnnotation::samples(VariantAnnotation::header(vcf))
}
colid <- which(VariantAnnotation::samples(VariantAnnotation::header(vcf)) == id)
vcf[rownames(vcf) %in% rsid,colid]
}
#' Query based on p-value threshold from vcf
#'
#' @param pval P-value threshold (NOT -log10)
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#' @param vcf VCF object (e.g. from readVcf)
#'
#' @export
#' @return VCF object
query_pval_vcf <- function(pval, vcf, id=NULL)
{
if(is.null(id))
{
id <- VariantAnnotation::samples(VariantAnnotation::header(vcf))
}
stopifnot(length(id) == 1)
colid <- which(VariantAnnotation::samples(VariantAnnotation::header(vcf)) == id)
vcf[VariantAnnotation::geno(vcf)[["LP"]][,colid,drop=TRUE] > -log10(pval),colid]
}
#' Query rsid using bcftools
#'
#' @param rsid Vector of rsids
#' @param vcffile Path to .vcf.gz GWAS summary data file
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#'
#' @export
#' @return VCF object
query_rsid_bcftools <- function(rsid, vcffile, id=NULL)
{
stopifnot(check_bcftools())
bcftools <- options()[["tools_bcftools"]]
if(is.null(id))
{
id <- VariantAnnotation::samples(VariantAnnotation::scanVcfHeader(vcffile))
}
id <- paste(id, collapse=",")
tmp <- tempfile()
utils::write.table(unique(rsid), file=paste0(tmp, ".snplist"), row.names=FALSE, col.names=FALSE, quote=FALSE)
cmd <- sprintf("%s view -s %s -i'ID=@%s.snplist' %s > %s.vcf", bcftools, id, tmp, vcffile, tmp)
system(cmd)
o <- VariantAnnotation::readVcf(paste0(tmp, ".vcf"))
unlink(paste0(tmp, ".vcf"))
unlink(paste0(tmp, ".snplist"))
return(o)
}
#' Query p-value using bcftools
#'
#' @param pval P-value threshold (NOT -log10)
#' @param vcffile Path to .vcf.gz GWAS summary data file
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#'
#' @export
#' @return vcf object
query_pval_bcftools <- function(pval, vcffile, id=NULL)
{
stopifnot(check_bcftools())
bcftools <- options()[["tools_bcftools"]]
if(is.null(id))
{
id <- VariantAnnotation::samples(VariantAnnotation::scanVcfHeader(vcffile))
}
id <- paste(id, collapse=",")
tmp <- tempfile()
cmd <- sprintf("%s view -s %s -i 'FORMAT/LP > %s' %s > %s.vcf", bcftools, id, -log10(pval), vcffile, tmp)
system(cmd)
o <- VariantAnnotation::readVcf(paste0(tmp, ".vcf"))
unlink(paste0(tmp, ".vcf"))
return(o)
}
#' Query chromosome and position using bcftools
#'
#' @param chrompos Either vector of chromosome and position ranges e.g. "1:1000" or "1:1000-2000", or data frame with columns `chrom`, `start`, `end`.
#' @param vcffile Path to .vcf.gz GWAS summary data file
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#'
#' @export
#' @return vcf object
query_chrompos_bcftools <- function(chrompos, vcffile, id=NULL)
{
stopifnot(check_bcftools())
bcftools <- options()[["tools_bcftools"]]
if(is.null(id))
{
id <- VariantAnnotation::samples(VariantAnnotation::scanVcfHeader(vcffile))
}
idclause <- ifelse(length(id) == 0, "", paste0("-s ", paste(id, collapse=",")))
chrompos <- parse_chrompos(chrompos)
tmp <- tempfile()
utils::write.table(as.data.frame(chrompos)[,1:3], file=paste0(tmp, ".snplist"), sep="\t", row.names=FALSE, col.names=FALSE, quote=FALSE)
cmd <- sprintf("%s view %s -R %s.snplist %s > %s.vcf", bcftools, idclause, tmp, vcffile, tmp)
system(cmd)
o <- VariantAnnotation::readVcf(paste0(tmp, ".vcf"))
unlink(paste0(tmp, ".vcf"))
unlink(paste0(tmp, ".snplist"))
return(o)
}
#' Query rsid from file using rsidx index
#'
#' See create_rsidx_index
#'
#' @param rsid Vector of rsids
#' @param vcffile Path to .vcf.gz GWAS summary data file
#' @param id If multiple GWAS datasets in the vcf file, the name (sample ID) from which to perform the filter
#' @param rsidx Path to rsidx index file
#'
#' @export
#' @return vcf object
query_rsid_rsidx <- function(rsid, vcffile, id=NULL, rsidx)
{
out <- query_rsidx(rsid, rsidx)
return(
query_gwas(vcffile, chrompos=data.frame(chrom=out$chrom, start=out$coord, end=out$coord), id=id)
)
}
#' Query rsidx
#'
#' @param rsid Vector of rsids
#' @param rsidx Path to rsidx index file
#'
#' @export
#' @return data frame
query_rsidx <- function(rsid, rsidx)
{
conn <- RSQLite::dbConnect(RSQLite::SQLite(), rsidx)
numid <- gsub("rs", "", rsid) %>% paste(collapse=",")
query <- paste0("SELECT DISTINCT * FROM rsid_to_coord WHERE rsid IN (", numid, ")")
out <- RSQLite::dbGetQuery(conn, query)
RSQLite::dbDisconnect(conn)
return(out)
}
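As a cross-language illustration of what the tabix-backed queries above do under the hood, here is a minimal Python sketch of a chromosome/position region query using `pysam`. This is an illustrative assumption, not part of this package; the file path and region are placeholders.

```python
# Minimal sketch of a tabix region query on a bgzipped, indexed VCF,
# analogous to query_chrompos_file above. Assumes pysam is installed and
# that "gwas.vcf.gz.tbi" exists next to the file; path/region are placeholders.
import pysam

vcf = pysam.TabixFile("gwas.vcf.gz")
for line in vcf.fetch("1", 999, 2000):   # 0-based start, i.e. region 1:1000-2000
    chrom, pos, rsid = line.split("\t")[:3]
    print(chrom, pos, rsid)
vcf.close()
```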
|
The Lebesgue measure of an open interval is its length.
|
Formal statement is: lemma convex_onD_Icc': assumes "convex_on {x..y} f" "c \<in> {x..y}" defines "d \<equiv> y - x" shows "f c \<le> (f y - f x) / d * (c - x) + f x" Informal statement is: If $f$ is convex on the interval $[x,y]$, then for any $c \in [x,y]$, we have $f(c) \leq \frac{f(y) - f(x)}{y-x}(c-x) + f(x)$.
|
<a href="https://colab.research.google.com/github/QDaria/QDaria.github.io/blob/main/Copy_of_mnist.ipynb" target="_parent"></a>
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# MNIST classification
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/quantum/tutorials/mnist">View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/mnist.ipynb">Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/quantum/blob/master/docs/tutorials/mnist.ipynb">View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/mnist.ipynb">Download notebook</a>
</td>
</table>
This tutorial builds a quantum neural network (QNN) to classify a simplified version of MNIST, similar to the approach used in <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al</a>. The performance of the quantum neural network on this classical data problem is compared with a classical neural network.
## Setup
```
!pip install tensorflow==2.3.1
```
Collecting tensorflow==2.3.1
  Downloading tensorflow-2.3.1-cp37-cp37m-manylinux2010_x86_64.whl (320.4MB)
Collecting numpy<1.19.0,>=1.16.0
  Downloading numpy-1.18.5-cp37-cp37m-manylinux1_x86_64.whl (20.1MB)
Collecting tensorflow-estimator<2.4.0,>=2.3.0
  Downloading tensorflow_estimator-2.3.0-py2.py3-none-any.whl (459kB)
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
ERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.
Installing collected packages: numpy, tensorflow-estimator, tensorflow
  Found existing installation: numpy 1.19.5
    Uninstalling numpy-1.19.5:
      Successfully uninstalled numpy-1.19.5
  Found existing installation: tensorflow-estimator 2.4.0
    Uninstalling tensorflow-estimator-2.4.0:
      Successfully uninstalled tensorflow-estimator-2.4.0
  Found existing installation: tensorflow 2.4.1
    Uninstalling tensorflow-2.4.1:
      Successfully uninstalled tensorflow-2.4.1
Successfully installed numpy-1.18.5 tensorflow-2.3.1 tensorflow-estimator-2.3.0
Install TensorFlow Quantum:
```
!pip install tensorflow-quantum
```
Collecting tensorflow-quantum
  Downloading tensorflow_quantum-0.4.0-cp37-cp37m-manylinux2010_x86_64.whl (5.9MB)
Collecting sympy==1.5
  Downloading sympy-1.5-py2.py3-none-any.whl (5.6MB)
Collecting cirq==0.9.1
  Downloading cirq-0.9.1-py3-none-any.whl (1.6MB)
Collecting freezegun~=0.3.15
  Downloading freezegun-0.3.15-py2.py3-none-any.whl
Installing collected packages: sympy, freezegun, cirq, tensorflow-quantum
  Found existing installation: sympy 1.7.1
    Uninstalling sympy-1.7.1:
      Successfully uninstalled sympy-1.7.1
Successfully installed cirq-0.9.1 freezegun-0.3.15 sympy-1.5 tensorflow-quantum-0.4.0
Now import TensorFlow and the module dependencies:
```
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
import seaborn as sns
import collections
# visualization tools
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```
## 1. Load the data
In this tutorial you will build a binary classifier to distinguish between the digits 3 and 6, following <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> This section covers the data handling that:
- Loads the raw data from Keras.
- Filters the dataset to only 3s and 6s.
- Downscales the images so they can fit in a quantum computer.
- Removes any contradictory examples.
- Converts the binary images to Cirq circuits.
- Converts the Cirq circuits to TensorFlow Quantum circuits.
### 1.1 Load the raw data
Load the MNIST dataset distributed with Keras.
```
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Rescale the images from [0,255] to the [0.0,1.0] range.
x_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0
print("Number of original training examples:", len(x_train))
print("Number of original test examples:", len(x_test))
```
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
Number of original training examples: 60000
Number of original test examples: 10000
Filter the dataset to keep just the 3s and 6s, removing the other classes. At the same time, convert the label, `y`, to boolean: `True` for `3` and `False` for `6`.
```
def filter_36(x, y):
keep = (y == 3) | (y == 6)
x, y = x[keep], y[keep]
y = y == 3
return x,y
```
```
x_train, y_train = filter_36(x_train, y_train)
x_test, y_test = filter_36(x_test, y_test)
print("Number of filtered training examples:", len(x_train))
print("Number of filtered test examples:", len(x_test))
```
Number of filtered training examples: 12049
Number of filtered test examples: 1968
Show the first example:
```
print(y_train[0])
plt.imshow(x_train[0, :, :, 0])
plt.colorbar()
```
### 1.2 Downscale the images
An image size of 28x28 is much too large for current quantum computers. Resize the image down to 4x4:
```
x_train_small = tf.image.resize(x_train, (4,4)).numpy()
x_test_small = tf.image.resize(x_test, (4,4)).numpy()
```
Again, display the first training example after resizing:
```
print(y_train[0])
plt.imshow(x_train_small[0,:,:,0], vmin=0, vmax=1)
plt.colorbar()
```
### 1.3 Remove contradictory examples
From section *3.3 Learning to Distinguish Digits* of <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a>, filter the dataset to remove images that are labeled as belonging to both classes.
This is not a standard machine-learning procedure, but is included in the interest of following the paper.
```
def remove_contradicting(xs, ys):
mapping = collections.defaultdict(set)
orig_x = {}
# Determine the set of labels for each unique image:
for x,y in zip(xs,ys):
orig_x[tuple(x.flatten())] = x
mapping[tuple(x.flatten())].add(y)
new_x = []
new_y = []
for flatten_x in mapping:
x = orig_x[flatten_x]
labels = mapping[flatten_x]
if len(labels) == 1:
new_x.append(x)
new_y.append(next(iter(labels)))
else:
# Throw out images that match more than one label.
pass
num_uniq_3 = sum(1 for value in mapping.values() if len(value) == 1 and True in value)
num_uniq_6 = sum(1 for value in mapping.values() if len(value) == 1 and False in value)
num_uniq_both = sum(1 for value in mapping.values() if len(value) == 2)
print("Number of unique images:", len(mapping.values()))
print("Number of unique 3s: ", num_uniq_3)
print("Number of unique 6s: ", num_uniq_6)
print("Number of unique contradicting labels (both 3 and 6): ", num_uniq_both)
print()
print("Initial number of images: ", len(xs))
print("Remaining non-contradicting unique images: ", len(new_x))
return np.array(new_x), np.array(new_y)
```
The resulting counts do not closely match the reported values, but the exact procedure is not specified.
It is also worth noting that filtering out contradictory examples at this point does not totally prevent the model from receiving contradictory training examples: the next step binarizes the data, which will cause more collisions.
```
x_train_nocon, y_train_nocon = remove_contradicting(x_train_small, y_train)
```
Number of unique images: 10387
Number of unique 3s: 4912
Number of unique 6s: 5426
Number of unique contradicting labels (both 3 and 6): 49
Initial number of images: 12049
Remaining non-contradicting unique images: 10338
### 1.4 Encode the data as quantum circuits
To process images using a quantum computer, <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> proposed representing each pixel with a qubit, with the state depending on the value of the pixel. The first step is to convert to a binary encoding.
```
THRESHOLD = 0.5
x_train_bin = np.array(x_train_nocon > THRESHOLD, dtype=np.float32)
x_test_bin = np.array(x_test_small > THRESHOLD, dtype=np.float32)
```
If you were to remove contradictory images at this point you would be left with only 193, likely not enough for effective training.
```
_ = remove_contradicting(x_train_bin, y_train_nocon)
```
Number of unique images: 193
Number of unique 3s: 80
Number of unique 6s: 69
Number of unique contradicting labels (both 3 and 6): 44
Initial number of images: 10338
Remaining non-contradicting unique images: 149
The qubits at pixel indices with values that exceed the threshold are rotated through an $X$ gate.
```
def convert_to_circuit(image):
"""Encode truncated classical image into quantum datapoint."""
values = np.ndarray.flatten(image)
qubits = cirq.GridQubit.rect(4, 4)
circuit = cirq.Circuit()
for i, value in enumerate(values):
if value:
circuit.append(cirq.X(qubits[i]))
return circuit
x_train_circ = [convert_to_circuit(x) for x in x_train_bin]
x_test_circ = [convert_to_circuit(x) for x in x_test_bin]
```
Here is the circuit created for the first example (circuit diagrams do not show qubits with zero gates):
```
SVGCircuit(x_train_circ[0])
```
Compare this circuit to the indices where the image value exceeds the threshold:
```
bin_img = x_train_bin[0,:,:,0]
indices = np.array(np.where(bin_img)).T
indices
```
array([[2, 2],
[3, 1]])
Convert these `Cirq` circuits to tensors for `tfq`:
```
x_train_tfcirc = tfq.convert_to_tensor(x_train_circ)
x_test_tfcirc = tfq.convert_to_tensor(x_test_circ)
```
## 2. Quantum neural network
There is little guidance for a quantum circuit structure that classifies images. Since the classification is based on the expectation of the readout qubit, <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> propose using two-qubit gates, with the readout qubit always acted upon. This is similar in some ways to running a small <a href="https://arxiv.org/abs/1511.06464" class="external">Unitary RNN</a> across the pixels.
### 2.1 Build the model circuit
The following example shows this layered approach. Each layer uses *n* instances of the same gate, with each of the data qubits acting on the readout qubit.
Start with a simple class that will add a layer of these gates to a circuit:
```
class CircuitLayerBuilder():
def __init__(self, data_qubits, readout):
self.data_qubits = data_qubits
self.readout = readout
def add_layer(self, circuit, gate, prefix):
for i, qubit in enumerate(self.data_qubits):
symbol = sympy.Symbol(prefix + '-' + str(i))
circuit.append(gate(qubit, self.readout)**symbol)
```
Build an example circuit layer to see how it looks:
```
demo_builder = CircuitLayerBuilder(data_qubits = cirq.GridQubit.rect(4,1),
readout=cirq.GridQubit(-1,-1))
circuit = cirq.Circuit()
demo_builder.add_layer(circuit, gate = cirq.XX, prefix='xx')
SVGCircuit(circuit)
```
Now build a two-layered model, matching the data-circuit size, and include the preparation and readout operations.
```
def create_quantum_model():
"""Create a QNN model circuit and readout operation to go along with it."""
data_qubits = cirq.GridQubit.rect(4, 4) # a 4x4 grid.
readout = cirq.GridQubit(-1, -1) # a single qubit at [-1,-1]
circuit = cirq.Circuit()
# Prepare the readout qubit.
circuit.append(cirq.X(readout))
circuit.append(cirq.H(readout))
builder = CircuitLayerBuilder(
data_qubits = data_qubits,
readout=readout)
# Then add layers (experiment by adding more).
builder.add_layer(circuit, cirq.XX, "xx1")
builder.add_layer(circuit, cirq.ZZ, "zz1")
# Finally, prepare the readout qubit.
circuit.append(cirq.H(readout))
return circuit, cirq.Z(readout)
```
```
model_circuit, model_readout = create_quantum_model()
```
### 2.2 Wrap the model-circuit in a tfq-keras model
Build the Keras model with the quantum components. This model is fed the "quantum data", from `x_train_circ`, that encodes the classical data. It uses a *Parametrized Quantum Circuit* layer, `tfq.layers.PQC`, to train the model circuit on the quantum data.
To classify these images, <a href="https://arxiv.org/pdf/1802.06002.pdf" class="external">Farhi et al.</a> proposed taking the expectation of a readout qubit in a parameterized circuit. The expectation returns a value between -1 and 1.
```
# Build the Keras model.
model = tf.keras.Sequential([
# The input is the data-circuit, encoded as a tf.string
tf.keras.layers.Input(shape=(), dtype=tf.string),
# The PQC layer returns the expected value of the readout gate, range [-1,1].
tfq.layers.PQC(model_circuit, model_readout),
])
```
Next, describe the training procedure to the model, using the `compile` method.
Since the expected readout is in the range `[-1,1]`, optimizing the hinge loss is a somewhat natural fit.
Note: Another valid approach would be to shift the output range to `[0,1]`, and treat it as the probability the model assigns to class `3`. This could be used with a standard `tf.losses.BinaryCrossentropy` loss, as in the sketch below.
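Here is a minimal sketch of that alternative, purely for illustration; it is not used in the rest of this tutorial, and the `Lambda` rescaling layer and the `model_bce` name are assumptions, not part of the original notebook:

```
# Hypothetical sketch: rescale the PQC expectation from [-1, 1] to [0, 1]
# and treat it as P(class == 3). Not used in the rest of this tutorial.
model_bce = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    tfq.layers.PQC(model_circuit, model_readout),
    tf.keras.layers.Lambda(lambda x: (x + 1.0) / 2.0),  # map [-1,1] -> [0,1]
])
model_bce.compile(
    loss=tf.keras.losses.BinaryCrossentropy(),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=['accuracy'])
```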
To use the hinge loss here you need to make two small adjustments. First convert the labels, `y_train_nocon`, from boolean to `[-1,1]`, as expected by the hinge loss.
```
y_train_hinge = 2.0*y_train_nocon-1.0
y_test_hinge = 2.0*y_test-1.0
```
Second, use a custom `hinge_accuracy` metric that correctly handles `[-1, 1]` as the `y_true` labels argument.
`tf.losses.BinaryAccuracy(threshold=0.0)` expects `y_true` to be a boolean, and so can't be used with the hinge loss.
```
def hinge_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true) > 0.0
y_pred = tf.squeeze(y_pred) > 0.0
result = tf.cast(y_true == y_pred, tf.float32)
return tf.reduce_mean(result)
```
```
model.compile(
loss=tf.keras.losses.Hinge(),
optimizer=tf.keras.optimizers.Adam(),
metrics=[hinge_accuracy])
```
```
print(model.summary())
```
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
pqc (PQC) (None, 1) 32
=================================================================
Total params: 32
Trainable params: 32
Non-trainable params: 0
_________________________________________________________________
None
### Train the quantum model
Now train the model. This takes about 45 minutes; if you don't want to wait that long, use a small subset of the data (set `NUM_EXAMPLES=500`, below). This doesn't really affect the model's progress during training (it only has 32 parameters, and doesn't need much data to constrain them). Using fewer examples just ends training earlier (about 5 minutes), but runs long enough to show that it is making progress in the validation logs.
```
EPOCHS = 3
BATCH_SIZE = 32
NUM_EXAMPLES = len(x_train_tfcirc)
```
```
x_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES]
y_train_hinge_sub = y_train_hinge[:NUM_EXAMPLES]
```
Training this model to convergence should achieve >85% accuracy on the test set.
```
qnn_history = model.fit(
x_train_tfcirc_sub, y_train_hinge_sub,
batch_size=32,
epochs=EPOCHS,
verbose=1,
validation_data=(x_test_tfcirc, y_test_hinge))
qnn_results = model.evaluate(x_test_tfcirc, y_test)
```
Epoch 1/3
206/324 [==================>...........] - ETA: 3:31 - loss: 0.7359 - hinge_accuracy: 0.8359
Note: The training accuracy reports the average over the epoch. The validation accuracy is evaluated at the end of each epoch.
## 3. Classical neural network
While the quantum neural network works for this simplified MNIST problem, a basic classical neural network can easily outperform a QNN on this task. After a single epoch, a classical neural network can achieve >98% accuracy on the holdout set.
In the following example, a classical neural network is used for the 3-6 classification problem, using the entire 28x28 image instead of subsampling it. This easily converges to nearly 100% accuracy on the test set.
```
def create_classical_model():
# A simple model based on LeNet from https://keras.io/examples/mnist_cnn/
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(32, [3, 3], activation='relu', input_shape=(28,28,1)))
model.add(tf.keras.layers.Conv2D(64, [3, 3], activation='relu'))
model.add(tf.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Dropout(0.25))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(1))
return model
model = create_classical_model()
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.summary()
```
```
model.fit(x_train,
y_train,
batch_size=128,
epochs=1,
verbose=1,
validation_data=(x_test, y_test))
cnn_results = model.evaluate(x_test, y_test)
```
The above model has nearly 1.2M parameters. For a fairer comparison, try a 37-parameter model on the subsampled images:
```
def create_fair_classical_model():
# A simple model based on LeNet from https://keras.io/examples/mnist_cnn/
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(4,4,1)))
model.add(tf.keras.layers.Dense(2, activation='relu'))
model.add(tf.keras.layers.Dense(1))
return model
model = create_fair_classical_model()
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
model.summary()
```
```
model.fit(x_train_bin,
y_train_nocon,
batch_size=128,
epochs=20,
verbose=2,
validation_data=(x_test_bin, y_test))
fair_nn_results = model.evaluate(x_test_bin, y_test)
```
## 4. Comparison
Higher-resolution input and a more powerful model make this problem easy for the CNN, while a classical model of similar power (~37 parameters) trains to a similar accuracy in a fraction of the time. Either way, the classical neural network easily outperforms the quantum neural network. For classical data, it is difficult to beat a classical neural network.
```
qnn_accuracy = qnn_results[1]
cnn_accuracy = cnn_results[1]
fair_nn_accuracy = fair_nn_results[1]
sns.barplot(["Quantum", "Classical, full", "Classical, fair"],
[qnn_accuracy, cnn_accuracy, fair_nn_accuracy])
```
|
Formal statement is: lemma replace_1: assumes "j < n" "a \<in> s" and p: "\<forall>x\<in>s - {a}. x j = p" and "x \<in> s" shows "a \<le> x" Informal statement is: If $j < n$, $a \in s$, $x \in s$, and $y_j = p$ for every $y \in s - \{a\}$, then $a \leq x$.
|
Select a plan or room type and press [OK].
[Change Search Parameters] Please fill in the fields below.
* Check the options above to refine your search.
Prices are displayed in Japanese yen.
[Special Price!!] Available only through the hostel's own homepage: stay at this designer hostel at this price!
|
function multisolve(m::Model, mdata::MultiData, ::WeightedSum, ::NonLinearProblem)
objectives = mdata.objectives
sensemap = Dict(MOI.MIN_SENSE => 1.0, MOI.MAX_SENSE => -1.0)
vararr = all_variables(m)
numobj = length(objectives)
Phi = zeros(numobj, numobj)
# Individual minimisations
for (i, objective) in enumerate(objectives)
@NLobjective(m, objective.sense, objective.f)
for (key, value) in objective.initialvalue
set_start_value(variable_by_name(m, key), value)
end
optimize!(m, ignore_optimize_hook=true);
if !(termination_status(m) in [MOI.OPTIMAL, MOI.LOCALLY_SOLVED])
return termination_status(m)
end
push!(mdata.utopiavarvalues, [value(var) for var in vararr])
push!(mdata.paretovarvalues, [value(var) for var in vararr])
Phi[:,i] = sensevalue.(objectives)
push!(mdata.paretofront, value.(objectives))
end
Fmax = maximum(Phi, dims=2)
Fmin = minimum(Phi, dims=2) # == diag(Phi)?
if Fmax == Fmin
error("The Nadir and Utopia points are equal") # I think that's what this means?
end
mdata.Phi = Phi
beta = zeros(numobj); beta[end] = 1.0
@NLparameter(m, β[i=1:numobj] == beta[i])
@NLobjective(m, MOI.MIN_SENSE,
sum(β[i] * (sensemap[objectives[i].sense] * objectives[i].f -
Fmin[i]) / (Fmax[i] - Fmin[i]) for i = 1:numobj))
betatree = betas(numobj, mdata.pointsperdim - 1)
for betaval in betatree
if count(t -> t != 0, betaval) == 1
# Skip individual optimisations as
# they are already performed
continue
end
set_value.(β, betaval)
optimize!(m, ignore_optimize_hook=true);
if !(termination_status(m) in [MOI.OPTIMAL, MOI.LOCALLY_SOLVED])
return termination_status(m)
end
push!(mdata.paretofront, value.(objectives))
push!(mdata.paretovarvalues, [value(var) for var in vararr])
end
return MOI.OPTIMAL
end
function multisolve(m::Model, mdata::MultiData, met::NBI, ::NonLinearProblem)
inequalityconstraint = met.inequality
objectives = mdata.objectives
sensemap = Dict(MOI.MIN_SENSE => 1.0, MOI.MAX_SENSE => -1.0)
vararr = all_variables(m)
# Stage 1: Calculate Φ
numobj = length(objectives)
Fstar = zeros(numobj)
Phi = zeros(numobj, numobj)
# Individual minimisations
for (i, objective) in enumerate(objectives)
@NLobjective(m, objective.sense, objective.f)
for (key, value) in objective.initialvalue
set_start_value(variable_by_name(m, key), value)
end
optimize!(m, ignore_optimize_hook=true);
if !(termination_status(m) in [MOI.OPTIMAL, MOI.LOCALLY_SOLVED])
return termination_status(m)
end
push!(mdata.utopiavarvalues, [value(var) for var in vararr])
push!(mdata.paretovarvalues, [value(var) for var in vararr])
Phi[:,i] = sensevalue.(objectives)
push!(mdata.paretofront, value.(objectives))
end
Fstar = diag(Phi)
for j = 1:numobj
Phi[:,j] = Phi[:,j] - Fstar
end
mdata.Phi = Phi
mdata.Fstar = Fstar
# Stage 2: Create NBI subproblems
@variable(m, t)
# TODO: There is a bug in JuMP so it doesn't propagate all
# the necessary information if we use @objective instead of NLObjective
# TODO: test this with nlprewrite
@NLobjective(m, MOI.MAX_SENSE, t)
beta = zeros(numobj); beta[end] = 1.0
@NLparameter(m, β[i=1:numobj] == beta[i])
if !inequalityconstraint
# Standard NBI
for (i, objective) in enumerate(objectives)
@NLconstraint(m,
sum(Phi[i,j] * (β[j] - t)
for j = 1:numobj if j != i) ==
sensemap[objective.sense] * objective.f - Fstar[i])
end
else
# Pascoletti-Serafini extension
for (i, objective) in enumerate(objectives)
@NLconstraint(m,
sum(Phi[i,j] * (β[j] - t)
for j = 1:numobj if j != i) >=
sensemap[objective.sense] * objective.f - Fstar[i])
end
end
# Stage 3: Solve NBI subproblems
betatree = betas(numobj, mdata.pointsperdim - 1)
for betaval in betatree
if count(t -> t != 0, betaval) == 1
# Skip individual optimisations as
# they are already performed
continue
end
set_value.(β, betaval)
optimize!(m, ignore_optimize_hook=true);
if !(termination_status(m) in [MOI.OPTIMAL, MOI.LOCALLY_SOLVED])
return termination_status(m)
end
push!(mdata.paretofront, value.(objectives))
push!(mdata.paretovarvalues, [value(var) for var in vararr])
end
return MOI.OPTIMAL
end
function multisolve(m::Model, mdata::MultiData, ::EpsilonCons, ::NonLinearProblem)
objectives = mdata.objectives
sensemap = Dict(MOI.MIN_SENSE => 1.0, MOI.MAX_SENSE => -1.0)
vararr = all_variables(m)
numobj = length(objectives)
Phi = zeros(numobj, numobj)
if numobj > 2
# TODO:
# The logic here becomes difficult, as the feasible region will require
# a dependency between the constraints
Base.error("EpsilonCons is thought through for > 2 objectives yet")
end
# Individual minimisations
for (i, objective) in enumerate(objectives)
@NLobjective(m, objective.sense, objective.f)
for (key, value) in objective.initialvalue
set_start_value(variable_by_name(m, key), value)
end
optimize!(m, ignore_optimize_hook=true);
if !(termination_status(m) in [MOI.OPTIMAL, MOI.LOCALLY_SOLVED])
return termination_status(m)
end
push!(mdata.utopiavarvalues, [value(var) for var in vararr])
push!(mdata.paretovarvalues, [value(var) for var in vararr])
Phi[:,i] = sensevalue.(objectives)
push!(mdata.paretofront, value.(objectives))
end
Fmax = maximum(Phi, dims=2)
Fmin = minimum(Phi, dims=2) # == diag(Phi)?
mdata.Phi = Phi
@NLobjective(m, objectives[end].sense, objectives[end].f)
beta = zeros(numobj); beta[end] = 1.0
@NLparameter(m, β[i=1:numobj] == beta[i])
@NLconstraint(m, objconstr[i=1:numobj - 1],
sensemap[objectives[i].sense] * objectives[i].f
<= β[i] * Fmin[i] + (1 - β[i]) * Fmax[i])
betatree = betas(numobj, mdata.pointsperdim - 1)
for betaval in betatree
if count(t -> t != 0, betaval) == 1
# Skip individual optimisations as
# they are already performed
continue
end
set_value.(β, betaval)
optimize!(m, ignore_optimize_hook=true);
if !(termination_status(m) in [MOI.OPTIMAL, MOI.LOCALLY_SOLVED])
return termination_status(m)
end
push!(mdata.paretofront, value.(objectives))
push!(mdata.paretovarvalues, [value(var) for var in vararr])
end
return MOI.OPTIMAL
end
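As a cross-language illustration of the scalarisation pattern shared by these solvers, here is a minimal Python sketch, purely for illustration and not part of this package, of the weighted-sum method: normalise each objective by its utopia/nadir estimates (the roles of `Fmin`/`Fmax` above) and sweep the weight to trace an approximate Pareto front on a toy biobjective problem, here solved by brute-force grid search instead of a solver.

```python
# Weighted-sum scalarisation on a toy biobjective problem (both minimised).
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
f1 = x**2                # objective 1
f2 = (x - 1.0)**2        # objective 2

Fmin = np.array([f1.min(), f2.min()])   # utopia estimates
Fmax = np.array([f1.max(), f2.max()])   # nadir estimates

front = []
for beta in np.linspace(0.0, 1.0, 11):
    # Normalised weighted sum, mirroring beta[i]*(f_i - Fmin)/(Fmax - Fmin)
    scal = (beta * (f1 - Fmin[0]) / (Fmax[0] - Fmin[0])
            + (1 - beta) * (f2 - Fmin[1]) / (Fmax[1] - Fmin[1]))
    i = int(np.argmin(scal))
    front.append((f1[i], f2[i]))

print(front)  # points tracing an approximate Pareto front of (f1, f2)
```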
|
/**
* Copyright (C) 2015 Topology LP
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef BONEFISH_RAWSOCKET_SERVER_HPP
#define BONEFISH_RAWSOCKET_SERVER_HPP
#include <boost/asio/ip/tcp.hpp>
#include <set>
#include <memory>
namespace bonefish {
class rawsocket_listener;
class rawsocket_server_impl;
class wamp_routers;
class wamp_serializers;
class rawsocket_server
{
public:
rawsocket_server(
const std::shared_ptr<wamp_routers>& routers,
const std::shared_ptr<wamp_serializers>& serializers);
~rawsocket_server();
void attach_listener(const std::shared_ptr<rawsocket_listener>& listener);
void start();
void shutdown();
private:
std::shared_ptr<rawsocket_server_impl> m_impl;
};
} // namespace bonefish
#endif // BONEFISH_RAWSOCKET_SERVER_HPP
|
module GRIN.Name
import Control.Monad.State
import Data.SortedMap
||| A name and resolved index.
public export
record Resolved a where
constructor MkResolved
res : Int
orig : a
export
Eq (Resolved a) where
(==) = (==) `on` res
export
Ord (Resolved a) where
compare = compare `on` res
export
Show a => Show (Resolved a) where
show = show . orig
||| State for resolving names.
record ResolveState name where
constructor MkResState
nextId : Int
resolved : SortedMap name Int
||| Monad for resolving names.
export
record ResolveM name a where
constructor MkResolveM
unResolveM : State (ResolveState name) a
export
runResolveM : Ord name => ResolveM name a -> a
runResolveM = evalState (MkResState 0 empty) . unResolveM
export
Functor (ResolveM name) where
map f = MkResolveM . map f . unResolveM
export
Applicative (ResolveM name) where
pure = MkResolveM . pure
f <*> x = MkResolveM $ unResolveM f <*> unResolveM x
export
Monad (ResolveM name) where
x >>= f = MkResolveM $ unResolveM x >>= (unResolveM . f)
MonadState (ResolveState name) (ResolveM name) where
get = MkResolveM get
put = MkResolveM . put
state = MkResolveM . state
export
resolve : name -> ResolveM name (Resolved name)
resolve orig = do
st <- get
case lookup orig st.resolved of
Just res => pure $ MkResolved res orig
Nothing => do
let res = st.nextId
put (record { nextId $= (+ 1), resolved $= insert orig res } st)
pure $ MkResolved res orig
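For comparison, the same interning scheme as `resolve`, sketched in Python as an illustrative assumption (not part of this module): each name gets a fresh integer the first time it is seen and the same integer thereafter.

```python
# Dict-based name interning, mirroring ResolveState/resolve above.
class Resolver:
    def __init__(self):
        self.next_id = 0
        self.resolved = {}        # name -> int

    def resolve(self, name):
        if name not in self.resolved:
            self.resolved[name] = self.next_id
            self.next_id += 1
        return (self.resolved[name], name)   # (res, orig), like MkResolved

r = Resolver()
assert r.resolve("x") == (0, "x")
assert r.resolve("y") == (1, "y")
assert r.resolve("x") == (0, "x")   # same name, same index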
|
The PilyQ Sahara Stitched Gypsy High Neck Bikini features a striped neutral pattern with open stitch detail in a soft, sueded fabric. This high neck top is a pullover style, criss-crosses in back and has removable pads. Pair with the matching bottoms here!
|
[STATEMENT]
lemma up3_def2[simp,code]:
"up3 x sib twist u = (case u of
Same2 \<Rightarrow> Same2 |
Bal2 t \<Rightarrow> Bal2 (node twist t x sib) |
Unbal2 t n1 h1 \<Rightarrow>
let n2 = size_tree sib; t' = node twist t x sib; n' = n1 + n2 + 1; h' = h1 + 1
in if bal_i n' h' then Unbal2 t' n' h'
else let t'' = bal_tree n' t' in Bal2 t'')"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. up3 x sib twist u = (case u of Same2 \<Rightarrow> Same2 | Bal2 t \<Rightarrow> Bal2 (node twist t x sib) | Unbal2 t n1 h1 \<Rightarrow> let n2 = Root_Balanced_Tree.size_tree sib; t' = node twist t x sib; n' = n1 + n2 + 1; h' = h1 + 1 in if bal_i n' h' then Unbal2 t' n' h' else Let (Root_Balanced_Tree.bal_tree n' t') Bal2)
[PROOF STEP]
using val_cong[OF up3_tm_def]
[PROOF STATE]
proof (prove)
using this:
Time_Monad.val (up3_tm ?x1 ?sib1 ?twist1 ?u1) = Time_Monad.val ((case ?u1 of Same2 \<Rightarrow> return Same2 | Bal2 t \<Rightarrow> return (Bal2 (node ?twist1 t ?x1 ?sib1)) | Unbal2 t n1 h1 \<Rightarrow> size_tree_tm ?sib1 \<bind> (\<lambda>n2. let t' = node ?twist1 t ?x1 ?sib1; n' = n1 + n2 + 1; h' = h1 + 1 in if bal_i n' h' then return (Unbal2 t' n' h') else bal_tree_tm n' t' \<bind> (\<lambda>t''. return (Bal2 t'')))) \<bind> tick)
goal (1 subgoal):
1. up3 x sib twist u = (case u of Same2 \<Rightarrow> Same2 | Bal2 t \<Rightarrow> Bal2 (node twist t x sib) | Unbal2 t n1 h1 \<Rightarrow> let n2 = Root_Balanced_Tree.size_tree sib; t' = node twist t x sib; n' = n1 + n2 + 1; h' = h1 + 1 in if bal_i n' h' then Unbal2 t' n' h' else Let (Root_Balanced_Tree.bal_tree n' t') Bal2)
[PROOF STEP]
by(simp only: up3_def size_tree_def bal_tree_def val_simps up2.case_distrib[of val])
|
------------------------------------------------------------------------
-- Lists
------------------------------------------------------------------------
{-# OPTIONS --without-K --safe #-}
open import Equality
module List
{reflexive} (eq : ∀ {a p} → Equality-with-J a p reflexive) where
open import Prelude
open import Bijection eq as Bijection using (_↔_)
open Derived-definitions-and-properties eq
open import Equality.Decision-procedures eq
import Equivalence eq as Eq
open import Function-universe eq hiding (_∘_)
open import H-level eq as H-level
open import H-level.Closure eq
open import Monad eq hiding (map)
open import Nat eq
open import Nat.Solver eq
private
variable
a ℓ : Level
A B C : Type a
x y : A
f : A → B
n : ℕ
xs ys zs : List A
ns : List ℕ
------------------------------------------------------------------------
-- Some functions
-- The tail of a list. Returns [] if the list is empty.
tail : List A → List A
tail [] = []
tail (_ ∷ xs) = xs
-- The function take n returns the first n elements of a list (or the
-- entire list, if the list does not contain n elements).
take : ℕ → List A → List A
take zero xs = []
take (suc n) (x ∷ xs) = x ∷ take n xs
take (suc n) xs@[] = xs
-- The function drop n removes the first n elements from a list (or
-- all elements, if the list does not contain n elements).
drop : ℕ → List A → List A
drop zero xs = xs
drop (suc n) (x ∷ xs) = drop n xs
drop (suc n) xs@[] = xs
-- Right fold.
foldr : (A → B → B) → B → List A → B
foldr _⊕_ ε [] = ε
foldr _⊕_ ε (x ∷ xs) = x ⊕ foldr _⊕_ ε xs
-- Left fold.
foldl : (B → A → B) → B → List A → B
foldl _⊕_ ε [] = ε
foldl _⊕_ ε (x ∷ xs) = foldl _⊕_ (ε ⊕ x) xs
-- The length of a list.
length : List A → ℕ
length = foldr (const suc) 0
-- The sum of all the elements in a list of natural numbers.
sum : List ℕ → ℕ
sum = foldr _+_ 0
-- Appends two lists.
infixr 5 _++_
_++_ : List A → List A → List A
xs ++ ys = foldr _∷_ ys xs
-- Maps a function over a list.
map : (A → B) → List A → List B
map f = foldr (λ x ys → f x ∷ ys) []
-- Concatenates a list of lists.
concat : List (List A) → List A
concat = foldr _++_ []
-- "Zips" two lists, using the given function to combine elementsw.
zip-with : (A → B → C) → List A → List B → List C
zip-with f [] _ = []
zip-with f _ [] = []
zip-with f (x ∷ xs) (y ∷ ys) = f x y ∷ zip-with f xs ys
-- "Zips" two lists.
zip : List A → List B → List (A × B)
zip = zip-with _,_
-- Reverses a list.
reverse : List A → List A
reverse = foldl (λ xs x → x ∷ xs) []
-- Replicates the given value the given number of times.
replicate : ℕ → A → List A
replicate zero x = []
replicate (suc n) x = x ∷ replicate n x
-- A filter function.
filter : (A → Bool) → List A → List A
filter p = foldr (λ x xs → if p x then x ∷ xs else xs) []
-- Finds the element at the given position.
index : (xs : List A) → Fin (length xs) → A
index [] ()
index (x ∷ xs) fzero = x
index (x ∷ xs) (fsuc i) = index xs i
-- A lookup function.
lookup : (A → A → Bool) → A → List (A × B) → Maybe B
lookup _≟_ x [] = nothing
lookup _≟_ x ((y , z) ∷ ps) =
if x ≟ y then just z else lookup _≟_ x ps
-- The list nats-< consists of the first n natural numbers in
-- strictly descending order.
nats-< : ℕ → List ℕ
nats-< zero = []
nats-< (suc n) = n ∷ nats-< n
-- A list that includes every tail of the given list (including the
-- list itself) exactly once. Longer tails precede shorter ones.
tails : List A → List (List A)
tails [] = [] ∷ []
tails xxs@(_ ∷ xs) = xxs ∷ tails xs
------------------------------------------------------------------------
-- Some properties
-- If you take the first n elements from xs and append what you get if
-- you drop the first n elements from xs, then you get xs (even if n
-- is larger than the length of xs).
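-- For example, take 2 (1 ∷ 2 ∷ 3 ∷ []) ++ drop 2 (1 ∷ 2 ∷ 3 ∷ [])
-- evaluates to 1 ∷ 2 ∷ 3 ∷ [], and the law also holds for n = 5,
-- where take already returns the whole list and drop returns [].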
take++drop : ∀ n → take n xs ++ drop n xs ≡ xs
take++drop zero = refl _
take++drop {xs = []} (suc n) = refl _
take++drop {xs = x ∷ xs} (suc n) = cong (x ∷_) (take++drop n)
-- The map function commutes with take n.
map-take : map f (take n xs) ≡ take n (map f xs)
map-take {n = zero} = refl _
map-take {n = suc n} {xs = []} = refl _
map-take {n = suc n} {xs = x ∷ xs} = cong (_ ∷_) map-take
-- The map function commutes with drop n.
map-drop : ∀ n → map f (drop n xs) ≡ drop n (map f xs)
map-drop zero = refl _
map-drop {xs = []} (suc n) = refl _
map-drop {xs = x ∷ xs} (suc n) = map-drop n
-- The function foldr _∷_ [] is pointwise equal to the identity
-- function.
foldr-∷-[] : (xs : List A) → foldr _∷_ [] xs ≡ xs
foldr-∷-[] [] = refl _
foldr-∷-[] (x ∷ xs) = cong (x ∷_) (foldr-∷-[] xs)
-- The empty list is a right identity for the append function.
++-right-identity : (xs : List A) → xs ++ [] ≡ xs
++-right-identity [] = refl _
++-right-identity (x ∷ xs) = cong (x ∷_) (++-right-identity xs)
-- The append function is associative.
++-associative : (xs ys zs : List A) →
xs ++ (ys ++ zs) ≡ (xs ++ ys) ++ zs
++-associative [] ys zs = refl _
++-associative (x ∷ xs) ys zs = cong (x ∷_) (++-associative xs ys zs)
-- The map function commutes with _++_.
map-++ : (f : A → B) (xs ys : List A) →
map f (xs ++ ys) ≡ map f xs ++ map f ys
map-++ f [] ys = refl _
map-++ f (x ∷ xs) ys = cong (f x ∷_) (map-++ f xs ys)
-- The concat function commutes with _++_.
concat-++ : (xss yss : List (List A)) →
concat (xss ++ yss) ≡ concat xss ++ concat yss
concat-++ [] yss = refl _
concat-++ (xs ∷ xss) yss =
concat ((xs ∷ xss) ++ yss) ≡⟨⟩
xs ++ concat (xss ++ yss) ≡⟨ cong (xs ++_) (concat-++ xss yss) ⟩
xs ++ (concat xss ++ concat yss) ≡⟨ ++-associative xs _ _ ⟩
(xs ++ concat xss) ++ concat yss ≡⟨ refl _ ⟩∎
concat (xs ∷ xss) ++ concat yss ∎
-- A lemma relating foldr and _++_.
foldr-++ :
{c : A → B → B} {n : B} →
∀ xs → foldr c n (xs ++ ys) ≡ foldr c (foldr c n ys) xs
foldr-++ [] = refl _
foldr-++ {ys = ys} {c = c} {n = n} (x ∷ xs) =
foldr c n (x ∷ xs ++ ys) ≡⟨⟩
c x (foldr c n (xs ++ ys)) ≡⟨ cong (c x) (foldr-++ xs) ⟩
c x (foldr c (foldr c n ys) xs) ≡⟨⟩
foldr c (foldr c n ys) (x ∷ xs) ∎
-- A fusion lemma for foldr and map.
foldr∘map :
(_⊕_ : B → C → C) (ε : C) (f : A → B) (xs : List A) →
(foldr _⊕_ ε ∘ map f) xs ≡ foldr (_⊕_ ∘ f) ε xs
foldr∘map _⊕_ ε f [] = ε ∎
foldr∘map _⊕_ ε f (x ∷ xs) = cong (f x ⊕_) (foldr∘map _⊕_ ε f xs)
-- A fusion lemma for length and map.
length∘map :
(f : A → B) (xs : List A) →
(length ∘ map f) xs ≡ length xs
length∘map = foldr∘map _ _
-- A lemma relating index, map and length∘map.
index∘map :
∀ xs {i} →
index (map f xs) i ≡
f (index xs (subst Fin (length∘map f xs) i))
index∘map {f = f} (x ∷ xs) {i} =
index (f x ∷ map f xs) i ≡⟨ lemma i ⟩
f (index (x ∷ xs) (subst (λ n → ⊤ ⊎ Fin n) p i)) ≡⟨⟩
f (index (x ∷ xs) (subst (Fin ∘ suc) p i)) ≡⟨ cong (f ∘ index (_ ∷ xs)) (subst-∘ Fin suc _) ⟩
f (index (x ∷ xs) (subst Fin (cong suc p) i)) ≡⟨⟩
f (index (x ∷ xs) (subst Fin (length∘map f (x ∷ xs)) i)) ∎
where
p = length∘map f xs
lemma :
∀ i →
index (f x ∷ map f xs) i ≡
f (index (x ∷ xs) (subst (λ n → ⊤ ⊎ Fin n) p i))
lemma fzero =
index (f x ∷ map f xs) fzero ≡⟨⟩
f x ≡⟨⟩
f (index (x ∷ xs) fzero) ≡⟨⟩
f (index (x ∷ xs) (inj₁ (subst (λ _ → ⊤) p tt))) ≡⟨ cong (f ∘ index (_ ∷ xs)) $ sym $ push-subst-inj₁ _ Fin ⟩∎
f (index (x ∷ xs) (subst (λ n → ⊤ ⊎ Fin n) p fzero)) ∎
lemma (fsuc i) =
index (f x ∷ map f xs) (fsuc i) ≡⟨⟩
index (map f xs) i ≡⟨ index∘map xs ⟩
f (index xs (subst Fin p i)) ≡⟨⟩
f (index (x ∷ xs) (fsuc (subst Fin p i))) ≡⟨ cong (f ∘ index (_ ∷ xs)) $ sym $ push-subst-inj₂ _ Fin ⟩∎
f (index (x ∷ xs) (subst (λ n → ⊤ ⊎ Fin n) p (fsuc i))) ∎
-- The length function is homomorphic with respect to _++_/_+_.
length-++ : ∀ xs → length (xs ++ ys) ≡ length xs + length ys
length-++ [] = refl _
length-++ (_ ∷ xs) = cong suc (length-++ xs)
-- The sum function is homomorphic with respect to _++_/_+_.
sum-++ : ∀ ms → sum (ms ++ ns) ≡ sum ms + sum ns
sum-++ [] = refl _
sum-++ {ns = ns} (m ∷ ms) =
sum (m ∷ ms ++ ns) ≡⟨⟩
m + sum (ms ++ ns) ≡⟨ cong (m +_) $ sum-++ ms ⟩
m + (sum ms + sum ns) ≡⟨ +-assoc m ⟩
(m + sum ms) + sum ns ≡⟨⟩
sum (m ∷ ms) + sum ns ∎
-- Some lemmas related to reverse.
++-reverse : ∀ xs → reverse xs ++ ys ≡ foldl (flip _∷_) ys xs
++-reverse xs = lemma xs
where
lemma :
∀ xs →
foldl (flip _∷_) ys xs ++ zs ≡
foldl (flip _∷_) (ys ++ zs) xs
lemma [] = refl _
lemma {ys = ys} {zs = zs} (x ∷ xs) =
foldl (flip _∷_) ys (x ∷ xs) ++ zs ≡⟨⟩
foldl (flip _∷_) (x ∷ ys) xs ++ zs ≡⟨ lemma xs ⟩
foldl (flip _∷_) (x ∷ ys ++ zs) xs ≡⟨⟩
foldl (flip _∷_) (ys ++ zs) (x ∷ xs) ∎
reverse-∷ : ∀ xs → reverse (x ∷ xs) ≡ reverse xs ++ x ∷ []
reverse-∷ {x = x} xs =
reverse (x ∷ xs) ≡⟨⟩
foldl (flip _∷_) (x ∷ []) xs ≡⟨ sym $ ++-reverse xs ⟩∎
reverse xs ++ x ∷ [] ∎
reverse-++ : ∀ xs → reverse (xs ++ ys) ≡ reverse ys ++ reverse xs
reverse-++ {ys = ys} [] =
reverse ys ≡⟨ sym $ ++-right-identity _ ⟩∎
reverse ys ++ [] ∎
reverse-++ {ys = ys} (x ∷ xs) =
reverse (x ∷ xs ++ ys) ≡⟨ reverse-∷ (xs ++ _) ⟩
reverse (xs ++ ys) ++ x ∷ [] ≡⟨ cong (_++ _) $ reverse-++ xs ⟩
(reverse ys ++ reverse xs) ++ x ∷ [] ≡⟨ sym $ ++-associative (reverse ys) _ _ ⟩
reverse ys ++ (reverse xs ++ x ∷ []) ≡⟨ cong (reverse ys ++_) $ sym $ reverse-∷ xs ⟩∎
reverse ys ++ reverse (x ∷ xs) ∎
reverse-reverse : (xs : List A) → reverse (reverse xs) ≡ xs
reverse-reverse [] = refl _
reverse-reverse (x ∷ xs) =
reverse (reverse (x ∷ xs)) ≡⟨ cong reverse (reverse-∷ xs) ⟩
reverse (reverse xs ++ x ∷ []) ≡⟨ reverse-++ (reverse xs) ⟩
reverse (x ∷ []) ++ reverse (reverse xs) ≡⟨⟩
x ∷ reverse (reverse xs) ≡⟨ cong (x ∷_) (reverse-reverse xs) ⟩∎
x ∷ xs ∎
map-reverse : ∀ xs → map f (reverse xs) ≡ reverse (map f xs)
map-reverse [] = refl _
map-reverse {f = f} (x ∷ xs) =
map f (reverse (x ∷ xs)) ≡⟨ cong (map f) $ reverse-∷ xs ⟩
map f (reverse xs ++ x ∷ []) ≡⟨ map-++ _ (reverse xs) _ ⟩
map f (reverse xs) ++ f x ∷ [] ≡⟨ cong (_++ _) $ map-reverse xs ⟩
reverse (map f xs) ++ f x ∷ [] ≡⟨ sym $ reverse-∷ (map f xs) ⟩
reverse (f x ∷ map f xs) ≡⟨⟩
reverse (map f (x ∷ xs)) ∎
foldr-reverse :
{c : A → B → B} {n : B} →
∀ xs → foldr c n (reverse xs) ≡ foldl (flip c) n xs
foldr-reverse [] = refl _
foldr-reverse {c = c} {n = n} (x ∷ xs) =
foldr c n (reverse (x ∷ xs)) ≡⟨ cong (foldr c n) (reverse-++ {ys = xs} (x ∷ [])) ⟩
foldr c n (reverse xs ++ x ∷ []) ≡⟨ foldr-++ (reverse xs) ⟩
foldr c (c x n) (reverse xs) ≡⟨ foldr-reverse xs ⟩
foldl (flip c) (c x n) xs ≡⟨⟩
foldl (flip c) n (x ∷ xs) ∎
foldl-reverse :
{c : B → A → B} {n : B} →
∀ xs → foldl c n (reverse xs) ≡ foldr (flip c) n xs
foldl-reverse {c = c} {n = n} xs =
foldl c n (reverse xs) ≡⟨ sym (foldr-reverse (reverse xs)) ⟩
foldr (flip c) n (reverse (reverse xs)) ≡⟨ cong (foldr (flip c) n) (reverse-reverse xs) ⟩∎
foldr (flip c) n xs ∎
length-reverse : (xs : List A) → length (reverse xs) ≡ length xs
length-reverse [] = refl _
length-reverse (x ∷ xs) =
length (reverse (x ∷ xs)) ≡⟨ cong length (reverse-∷ xs) ⟩
length (reverse xs ++ x ∷ []) ≡⟨ length-++ (reverse xs) ⟩
length (reverse xs) + 1 ≡⟨ cong (_+ 1) (length-reverse xs) ⟩
length xs + 1 ≡⟨ +-comm (length xs) ⟩∎
length (x ∷ xs) ∎
sum-reverse : ∀ ns → sum (reverse ns) ≡ sum ns
sum-reverse [] = refl _
sum-reverse (n ∷ ns) =
sum (reverse (n ∷ ns)) ≡⟨ cong sum (reverse-∷ ns) ⟩
sum (reverse ns ++ n ∷ []) ≡⟨ sum-++ (reverse ns) ⟩
sum (reverse ns) + sum (n ∷ []) ≡⟨ cong₂ _+_ (sum-reverse ns) +-right-identity ⟩
sum ns + n ≡⟨ +-comm (sum ns) ⟩∎
sum (n ∷ ns) ∎
-- The functions filter and map commute (kind of).
filter∘map :
(p : B → Bool) (f : A → B) (xs : List A) →
(filter p ∘ map f) xs ≡ (map f ∘ filter (p ∘ f)) xs
filter∘map p f [] = refl _
filter∘map p f (x ∷ xs) with p (f x)
... | true = cong (_ ∷_) (filter∘map p f xs)
... | false = filter∘map p f xs
-- The length of replicate n x is n.
length-replicate : ∀ n → length (replicate n x) ≡ n
length-replicate zero = refl _
length-replicate {x = x} (suc n) =
length (replicate (suc n) x) ≡⟨⟩
suc (length (replicate n x)) ≡⟨ cong suc $ length-replicate n ⟩∎
suc n ∎
-- The sum of replicate m n is m * n.
sum-replicate : ∀ m → sum (replicate m n) ≡ m * n
sum-replicate zero = refl _
sum-replicate {n = n} (suc m) =
sum (replicate (suc m) n) ≡⟨⟩
n + sum (replicate m n) ≡⟨ cong (n +_) $ sum-replicate m ⟩
n + m * n ≡⟨⟩
suc m * n ∎
-- The length of nats-< n is n.
length∘nats-< : ∀ n → length (nats-< n) ≡ n
length∘nats-< zero = 0 ∎
length∘nats-< (suc n) = cong suc (length∘nats-< n)
-- The sum of nats-< n can be expressed without referring to lists.
sum-nats-< : ∀ n → sum (nats-< n) ≡ ⌊ n * pred n /2⌋
sum-nats-< zero = refl _
sum-nats-< (suc zero) = refl _
sum-nats-< (suc (suc n)) =
sum (suc n ∷ nats-< (suc n)) ≡⟨⟩
suc n + sum (nats-< (suc n)) ≡⟨ cong (suc n +_) (sum-nats-< (suc n)) ⟩
suc n + ⌊ suc n * n /2⌋ ≡⟨ sym $ ⌊2*+/2⌋≡ (suc n) ⟩
⌊ 2 * suc n + suc n * n /2⌋ ≡⟨ cong ⌊_/2⌋ (solve 1 (λ n → con 2 :* (con 1 :+ n) :+ (con 1 :+ n) :* n :=
con 1 :+ n :+ (con 1 :+ n :+ n :* (con 1 :+ n)))
(refl _) n) ⟩
⌊ suc n + (suc n + n * suc n) /2⌋ ≡⟨⟩
⌊ suc (suc n) * suc n /2⌋ ∎
-- If xs ++ ys is equal to [], then both lists are.
++≡[]→≡[]×≡[] : ∀ xs → xs ++ ys ≡ [] → xs ≡ [] × ys ≡ []
++≡[]→≡[]×≡[] {ys = []} [] _ = refl _ , refl _
++≡[]→≡[]×≡[] {ys = _ ∷ _} [] ∷≡[] = ⊥-elim (List.[]≢∷ (sym ∷≡[]))
++≡[]→≡[]×≡[] (_ ∷ _) ∷≡[] = ⊥-elim (List.[]≢∷ (sym ∷≡[]))
-- Empty lists are not equal to non-empty lists.
[]≢++∷ : ∀ xs → [] ≢ xs ++ y ∷ ys
[]≢++∷ {y = y} {ys = ys} xs =
[] ≡ xs ++ y ∷ ys ↝⟨ sym ∘ proj₂ ∘ ++≡[]→≡[]×≡[] xs ∘ sym ⟩
[] ≡ y ∷ ys ↝⟨ List.[]≢∷ ⟩□
⊥ □
------------------------------------------------------------------------
-- The list monad
instance
-- The list monad.
raw-monad : Raw-monad (List {a = ℓ})
Raw-monad.return raw-monad x = x ∷ []
Raw-monad._>>=_ raw-monad xs f = concat (map f xs)
monad : Monad (List {a = ℓ})
Monad.raw-monad monad = raw-monad
Monad.left-identity monad x f = foldr-∷-[] (f x)
Monad.right-identity monad xs = lemma xs
where
lemma : ∀ xs → concat (map (_∷ []) xs) ≡ xs
lemma [] = refl _
lemma (x ∷ xs) =
concat (map (_∷ []) (x ∷ xs)) ≡⟨⟩
x ∷ concat (map (_∷ []) xs) ≡⟨ cong (x ∷_) (lemma xs) ⟩∎
x ∷ xs ∎
Monad.associativity monad xs f g = lemma xs
where
lemma : ∀ xs → concat (map (concat ∘ map g ∘ f) xs) ≡
concat (map g (concat (map f xs)))
lemma [] = refl _
lemma (x ∷ xs) =
concat (map (concat ∘ map g ∘ f) (x ∷ xs)) ≡⟨⟩
concat (map g (f x)) ++ concat (map (concat ∘ map g ∘ f) xs) ≡⟨ cong (concat (map g (f x)) ++_) (lemma xs) ⟩
concat (map g (f x)) ++ concat (map g (concat (map f xs))) ≡⟨ sym $ concat-++ (map g (f x)) _ ⟩
concat (map g (f x) ++ map g (concat (map f xs))) ≡⟨ cong concat (sym $ map-++ g (f x) _) ⟩
concat (map g (f x ++ concat (map f xs))) ≡⟨ refl _ ⟩∎
concat (map g (concat (map f (x ∷ xs)))) ∎
------------------------------------------------------------------------
-- Some isomorphisms
-- An unfolding lemma for List.
List↔Maybe[×List] : List A ↔ Maybe (A × List A)
List↔Maybe[×List] = record
{ surjection = record
{ logical-equivalence = record
{ to = λ { [] → inj₁ tt; (x ∷ xs) → inj₂ (x , xs) }
; from = [ (λ _ → []) , uncurry _∷_ ]
}
; right-inverse-of = [ (λ _ → refl _) , (λ _ → refl _) ]
}
; left-inverse-of = λ { [] → refl _; (_ ∷ _) → refl _ }
}
-- Some isomorphisms related to list equality.
[]≡[]↔⊤ : [] ≡ ([] ⦂ List A) ↔ ⊤
[]≡[]↔⊤ =
[] ≡ [] ↔⟨ inverse $ Eq.≃-≡ (Eq.↔⇒≃ List↔Maybe[×List]) ⟩
nothing ≡ nothing ↝⟨ inverse Bijection.≡↔inj₁≡inj₁ ⟩
tt ≡ tt ↝⟨ tt≡tt↔⊤ ⟩□
⊤ □
[]≡∷↔⊥ : [] ≡ x ∷ xs ↔ ⊥ {ℓ = ℓ}
[]≡∷↔⊥ {x = x} {xs = xs} =
[] ≡ x ∷ xs ↔⟨ inverse $ Eq.≃-≡ (Eq.↔⇒≃ List↔Maybe[×List]) ⟩
nothing ≡ just (x , xs) ↝⟨ Bijection.≡↔⊎ ⟩
⊥ ↝⟨ ⊥↔⊥ ⟩□
⊥ □
∷≡[]↔⊥ : x ∷ xs ≡ [] ↔ ⊥ {ℓ = ℓ}
∷≡[]↔⊥ {x = x} {xs = xs} =
x ∷ xs ≡ [] ↝⟨ ≡-comm ⟩
[] ≡ x ∷ xs ↝⟨ []≡∷↔⊥ ⟩□
⊥ □
∷≡∷↔≡×≡ : x ∷ xs ≡ y ∷ ys ↔ x ≡ y × xs ≡ ys
∷≡∷↔≡×≡ {x = x} {xs = xs} {y = y} {ys = ys} =
x ∷ xs ≡ y ∷ ys ↔⟨ inverse $ Eq.≃-≡ (Eq.↔⇒≃ List↔Maybe[×List]) ⟩
just (x , xs) ≡ just (y , ys) ↝⟨ inverse Bijection.≡↔inj₂≡inj₂ ⟩
(x , xs) ≡ (y , ys) ↝⟨ inverse ≡×≡↔≡ ⟩□
x ≡ y × xs ≡ ys □
------------------------------------------------------------------------
-- H-levels
-- If A is inhabited, then List A is not a proposition.
¬-List-propositional : A → ¬ Is-proposition (List A)
¬-List-propositional x h = List.[]≢∷ $ h [] (x ∷ [])
-- H-levels greater than or equal to two are closed under List.
H-level-List : ∀ n → H-level (2 + n) A → H-level (2 + n) (List A)
H-level-List n _ {[]} {[]} =
$⟨ ⊤-contractible ⟩
Contractible ⊤ ↝⟨ H-level-cong _ 0 (inverse []≡[]↔⊤) ⟩
Contractible ([] ≡ []) ↝⟨ H-level.mono (zero≤ (1 + n)) ⟩□
H-level (1 + n) ([] ≡ []) □
H-level-List n h {[]} {y ∷ ys} =
$⟨ ⊥-propositional ⟩
Is-proposition ⊥₀ ↝⟨ H-level-cong _ 1 (inverse []≡∷↔⊥) ⟩
Is-proposition ([] ≡ y ∷ ys) ↝⟨ H-level.mono (suc≤suc (zero≤ n)) ⟩□
H-level (1 + n) ([] ≡ y ∷ ys) □
H-level-List n h {x ∷ xs} {[]} =
$⟨ ⊥-propositional ⟩
Is-proposition ⊥₀ ↝⟨ H-level-cong _ 1 (inverse ∷≡[]↔⊥) ⟩
Is-proposition (x ∷ xs ≡ []) ↝⟨ H-level.mono (suc≤suc (zero≤ n)) ⟩□
H-level (1 + n) (x ∷ xs ≡ []) □
H-level-List n h {x ∷ xs} {y ∷ ys} =
H-level-cong _ (1 + n) (inverse ∷≡∷↔≡×≡)
(×-closure (1 + n) h (H-level-List n h))
|
%-------------------------------------------------------------------------------
% ANNEXES
%-------------------------------------------------------------------------------
\newpage
\section*{Annexes}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{sources/pages/1.1.1/1-souris.pdf}
\caption{\label{souris}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{sources/pages/1.1.1/2-coq.pdf}
\caption{\label{coq}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{sources/pages/1.1.1/3-rhino.pdf}
\caption{\label{rhino}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{sources/pages/1.1.1/4-elephant.pdf}
\caption{\label{elephant}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{sources/pages/1.1.1/5-ours.pdf}
\caption{\label{ours}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{sources/pages/1.1.2/pour_commencer.pdf}
\caption{\label{geo-commencer}}
\end{figure}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=\linewidth]{sources/pages/1.2.1/1-moulinette.pdf}}
\caption{\label{moulinette}}
\end{figure}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=\linewidth]{sources/pages/1.2.1/2-diamant.pdf}}
\caption{\label{diamant}}
\end{figure}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=\linewidth]{sources/pages/1.2.1/3-hexamier.pdf}}
\caption{\label{hexamier}}
\end{figure}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=\linewidth]{sources/pages/1.2.1/4-etoilenoire.pdf}}
\caption{\label{etoilenoire}}
\end{figure}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=\linewidth]{sources/pages/1.2.1/5-toulouse.pdf}}
\caption{\label{toulouse}}
\end{figure}
\begin{figure}[H]
\centering
\fbox{\includegraphics[width=\linewidth]{sources/pages/1.2.1/6-croixdusud.pdf}}
\caption{\label{croixdusud}}
\end{figure}
|
[STATEMENT]
lemma annotate_named_loop_var:
"whileAnno b (named_loop name) V' c = whileAnno b I V c"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. whileAnno b (named_loop name) V' c = whileAnno b I V c
[PROOF STEP]
by (simp add: whileAnno_def)
|
[STATEMENT]
lemma rt_fresh_asI [intro!]:
assumes "rt1 \<sqsubseteq>\<^bsub>dip\<^esub> rt2"
and "rt2 \<sqsubseteq>\<^bsub>dip\<^esub> rt1"
shows "rt1 \<approx>\<^bsub>dip\<^esub> rt2"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. rt1 \<approx>\<^bsub>dip\<^esub> rt2
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
rt1 \<sqsubseteq>\<^bsub>dip\<^esub> rt2
rt2 \<sqsubseteq>\<^bsub>dip\<^esub> rt1
goal (1 subgoal):
1. rt1 \<approx>\<^bsub>dip\<^esub> rt2
[PROOF STEP]
unfolding rt_fresh_as_def
[PROOF STATE]
proof (prove)
using this:
rt1 \<sqsubseteq>\<^bsub>dip\<^esub> rt2
rt2 \<sqsubseteq>\<^bsub>dip\<^esub> rt1
goal (1 subgoal):
1. rt1 \<sqsubseteq>\<^bsub>dip\<^esub> rt2 \<and> rt2 \<sqsubseteq>\<^bsub>dip\<^esub> rt1
[PROOF STEP]
by simp
|
lemma starlike_imp_path_connected: fixes S :: "'a::real_normed_vector set" shows "starlike S \<Longrightarrow> path_connected S"
|
# Explicit Runge Kutta methods and their Butcher tables
## Authors: Brandon Clark & Zach Etienne
## This tutorial notebook stores known explicit Runge Kutta-like methods as Butcher tables in a Python dictionary format.
**Notebook Status:** <font color='green'><b> Validated </b></font>
**Validation Notes:** This tutorial notebook has been confirmed to be **self-consistent with its corresponding NRPy+ module**, as documented [below](#code_validation). In addition, each of these Butcher tables has been verified to yield an RK method to the expected local truncation error in a challenging battery of ODE tests, in the [RK Butcher Table Validation tutorial notebook](Tutorial-RK_Butcher_Table_Validation.ipynb).
### NRPy+ Source Code for this module: [MoLtimestepping/RK_Butcher_Table_Dictionary.py](../edit/MoLtimestepping/RK_Butcher_Table_Dictionary.py)
## Introduction:
The family of explicit [Runge Kutta](https://en.wikipedia.org/w/index.php?title=Runge%E2%80%93Kutta_methods&oldid=898536315)-like methods are commonly used when numerically solving ordinary differential equation (ODE) initial value problems of the form
$$ y'(t) = f(y,t),\ \ \ y(t_0)=y_0.$$
These methods can be extended to solve time-dependent partial differential equations (PDEs) via the [Method of Lines](https://en.wikipedia.org/w/index.php?title=Method_of_lines&oldid=855390257). In the Method of Lines, the above ODE can be generalized to $N$ coupled ODEs, all written as first-order-in-time PDEs of the form
$$ \partial_{t}\mathbf{u}(t,x,y,u_1,u_2,u_3,...)=\mathbf{f}(t,x,y,...,u_1,u_{1,x},...),$$
where $\mathbf{u}$ and $\mathbf{f}$ are vectors. The spatial partial derivatives of components of $\mathbf{u}$, e.g., $u_{1,x}$, may be computed using approximate numerical differentiation, like finite differences.
Each explicit Runge-Kutta method has its own local truncation error, can in principle be used to solve time-dependent PDEs via the Method of Lines, and may be stable under a different Courant-Friedrichs-Lewy (CFL) condition, so it is useful to have multiple methods at one's disposal. **This module provides a number of such methods.**
The Method of Lines is discussed in more detail in the [Tutorial-RK_Butcher_Table_Generating_C_Code](Tutorial-RK_Butcher_Table_Generating_C_Code.ipynb) module, where we generate the C code that implements it; additional description can be found in the [Numerically Solving the Scalar Wave Equation: A Complete C Code](Tutorial-Start_to_Finish-ScalarWave.ipynb) NRPy+ tutorial notebook.
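As a minimal illustration of the Method of Lines (a sketch for intuition only; it is not part of any NRPy+ module, and the choice of PDE, grid size, and step count here is arbitrary), consider semi-discretizing the heat equation $\partial_t u = \partial_x^2 u$ on a periodic grid with centered finite differences and advancing the resulting ODE system with forward Euler:
```python
# Method-of-Lines sketch (illustrative only; not part of NRPy+).
# Semi-discretize u_t = u_xx on a periodic grid, then integrate the
# resulting ODE system du/dt = f(u) with forward Euler.
import numpy as np

Nx = 64                          # number of grid points (arbitrary choice)
x  = np.linspace(0, 2*np.pi, Nx, endpoint=False)
dx = x[1] - x[0]
u  = np.sin(x)                   # initial data

def f(u):
    # centered finite-difference approximation of u_xx (periodic boundaries)
    return (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2

dt = 0.4*dx**2                   # respects the stability bound dt <= dx**2/2
for n in range(200):
    u = u + dt*f(u)              # one forward-Euler (RK1) step
```
Any of the higher-order Runge-Kutta methods catalogued below could replace the forward-Euler update without touching the spatial discretization.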
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#initializenrpy): Initialize needed Python modules
1. [Step 2](#introbutcher): The Family of Explicit Runge-Kutta-Like Schemes (Butcher Tables)
1. [Step 2a](#codebutcher): Generating a Dictionary of Butcher Tables for Explicit Runge Kutta Techniques
1. [Step 2.a.i](#euler): Euler's Method
1. [Step 2.a.ii](#rktwoheun): RK2 Heun's Method
1. [Step 2.a.iii](#rk2mp): RK2 Midpoint Method
1. [Step 2.a.iv](#rk2ralston): RK2 Ralston's Method
1. [Step 2.a.v](#rk3): Kutta's Third-order Method
1. [Step 2.a.vi](#rk3heun): RK3 Heun's Method
1. [Step 2.a.vii](#rk3ralston): RK3 Ralston's Method
1. [Step 2.a.viii](#ssprk3): Strong Stability Preserving Runge-Kutta (SSPRK3) Method
1. [Step 2.a.ix](#rkfour): Classic RK4 Method
1. [Step 2.a.x](#dp5): RK5 Dormand-Prince Method
1. [Step 2.a.xi](#dp5alt): RK5 Dormand-Prince Method Alternative
1. [Step 2.a.xii](#ck5): RK5 Cash-Karp Method
1. [Step 2.a.xiii](#dp6): RK6 Dormand-Prince Method
1. [Step 2.a.xiv](#l6): RK6 Luther's Method
1. [Step 2.a.xv](#dp8): RK8 Dormand-Prince Method
1. [Step 3](#code_validation): Code Validation against `MoLtimestepping.RK_Butcher_Table_Dictionary` NRPy+ module
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Initialize needed Python modules \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
Let's start by importing all the needed modules from Python:
```python
# Step 1: Initialize needed Python modules
import sympy as sp
```
<a id='introbutcher'></a>
# Step 2: The Family of Explicit Runge-Kutta-Like Schemes (Butcher Tables) \[Back to [top](#toc)\]
$$\label{introbutcher}$$
In general, a predictor-corrector method takes a trial timestep from $n$ to $n+1$, using, e.g., a Runge-Kutta method, to obtain a prediction of the solution at timestep $n+1$. This is the "predictor" step. It then uses this prediction in a second, "corrector" step designed to increase the accuracy of the solution.
Let us focus on the ordinary differential equation (ODE)
$$ y'(t) = f(y,t), $$
which acts as an analogue for a generic PDE $\partial_{t}u(t,x,y,...)=f(t,x,y,...,u,u_x,...)$.
The general family of Runge Kutta "explicit" timestepping methods are implemented using the following scheme:
$$y_{n+1} = y_n + \sum_{i=1}^s b_ik_i $$
where
\begin{align}
k_1 &= \Delta tf(y_n, t_n) \\
k_2 &= \Delta tf(y_n + [a_{21}k_1], t_n + c_2\Delta t) \\
k_3 &= \Delta tf(y_n +[a_{31}k_1 + a_{32}k_2], t_n + c_3\Delta t) \\
& \ \ \vdots \\
k_s &= \Delta tf(y_n +[a_{s1}k_1 + a_{s2}k_2 + \cdots + a_{s, s-1}k_{s-1}], t_n + c_s\Delta t)
\end{align}
Note that $s$ is the number of right-hand side evaluations necessary for a given method: for RK2 $s=2$, for RK4 $s=4$, and for RK6 $s=7$. These schemes are often written in the form of a so-called "Butcher tableau" or "Butcher table":
$$\begin{array}{c|ccccc}
0 & \\
c_2 & a_{21} & \\
c_3 & a_{31} & a_{32} & \\
\vdots & \vdots & & \ddots \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} \\ \hline
& b_1 & b_2 & \cdots & b_{s-1} & b_s
\end{array} $$
As an example, the "classic" fourth-order Runge Kutta (RK4) method obtains the solution $y(t)$ to the single-variable ODE $y'(t) = f(y(t),t)$ at time $t_{n+1}$ from $t_n$ via:
\begin{align}
k_1 &= \Delta tf(y_n, t_n), \\
k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{\Delta t}{2}), \\
k_3 &= \Delta tf(y_n + \frac{1}{2}k_2, t_n + \frac{\Delta t}{2}), \\
k_4 &= \Delta tf(y_n + k_3, t_n + \Delta t), \\
y_{n+1} &= y_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) + \mathcal{O}\big((\Delta t)^5\big).
\end{align}
Its corresponding Butcher table is constructed as follows:
$$\begin{array}{c|cccc}
0 & \\
1/2 & 1/2 & \\
1/2 & 0 & 1/2 & \\
1 & 0 & 0 & 1 & \\ \hline
& 1/6 & 1/3 & 1/3 & 1/6
\end{array} $$
This is one example of many explicit [Runge Kutta methods](https://en.wikipedia.org/w/index.php?title=List_of_Runge%E2%80%93Kutta_methods&oldid=896594269). Throughout the following sections we will highlight different Runge Kutta schemes and their Butcher tables from the first-order Euler's method up to and including an eighth-order method.
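To make the role of the Butcher table concrete, the following sketch (not part of any NRPy+ module; the function name `rk_step` and the float coefficients are illustrative choices) advances a scalar ODE one step using nothing but the $a_{ij}$, $b_i$, and $c_i$ coefficients, laid out exactly as in the tableau above:
```python
# Illustrative sketch (not part of NRPy+): one explicit RK step
# y_{n+1} = y_n + sum_i b_i k_i, driven directly by Butcher coefficients.
import numpy as np

def rk_step(f, y, t, dt, a, b, c):
    """a: strictly lower-triangular stage coefficients; b: weights; c: nodes."""
    s = len(b)                    # number of stages
    k = np.zeros(s)
    for i in range(s):
        y_stage = y + sum(a[i][j]*k[j] for j in range(i))
        k[i] = dt*f(y_stage, t + c[i]*dt)
    return y + sum(b[i]*k[i] for i in range(s))

# Classic RK4 coefficients, matching the tableau above:
a = [[0,   0,   0, 0],
     [0.5, 0,   0, 0],
     [0,   0.5, 0, 0],
     [0,   0,   1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 0.5, 0.5, 1]

# One RK4 step of y' = y from y(0) = 1; exact answer is e**0.1 ~ 1.1051709.
print(rk_step(lambda y, t: y, 1.0, 0.0, 0.1, a, b, c))
```
The dictionary constructed below stores exactly these coefficients, as exact SymPy rationals rather than floats.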
<a id='codebutcher'></a>
## Step 2.a: Generating a Dictionary of Butcher Tables for Explicit Runge Kutta Techniques \[Back to [top](#toc)\]
$$\label{codebutcher}$$
We store all of the Butcher tables in a Python **dictionary**, i.e., as 'key':value pairs inside curly brackets {}. Each key is the *name* of a Runge-Kutta method, and the corresponding value is a tuple holding the Butcher table itself (stored as a list of lists) together with the method's convergence order. We will construct the dictionary `Butcher_dict` one Butcher table at a time in the following sections.
```python
# Step 2a: Generating a Dictionary of Butcher Tables for Explicit Runge Kutta Techniques
# Initialize the dictionary Butcher_dict
Butcher_dict = {}
```
<a id='euler'></a>
### Step 2.a.i: Euler's Method \[Back to [top](#toc)\]
$$\label{euler}$$
[Forward Euler's method](https://en.wikipedia.org/w/index.php?title=Euler_method&oldid=896152463) is a first-order Runge-Kutta method. Euler's method obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:
$$y_{n+1} = y_{n} + \Delta tf(y_{n}, t_{n})$$
with the trivial corresponding Butcher table
$$\begin{array}{c|c}
0 & \\ \hline
& 1
\end{array}$$
```python
# Step 2.a.i: Euler's Method
Butcher_dict['Euler'] = (
[[sp.sympify(0)],
["", sp.sympify(1)]]
, 1)
```
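Each entry of `Butcher_dict` is a `(table, order)` tuple, with the empty string marking the start of the final $b_i$ row of the tableau. A quick check of the structure (assuming the cell above has been run):
```python
# Inspect the structure of a dictionary entry: a (Butcher table, order) tuple.
table, order = Butcher_dict['Euler']
print(order)   # -> 1
print(table)   # -> [[0], ['', 1]]
```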
<a id='rktwoheun'></a>
### Step 2.a.ii: RK2 Heun's Method \[Back to [top](#toc)\]
$$\label{rktwoheun}$$
[Heun's method](https://en.wikipedia.org/w/index.php?title=Heun%27s_method&oldid=866896936) is a second-order RK method that obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:
\begin{align}
k_1 &= \Delta tf(y_n, t_n), \\
k_2 &= \Delta tf(y_n + k_1, t_n + \Delta t), \\
y_{n+1} &= y_n + \frac{1}{2}(k_1 + k_2) + \mathcal{O}\big((\Delta t)^3\big).
\end{align}
with corresponding Butcher table
$$\begin{array}{c|cc}
0 & \\
1 & 1 & \\ \hline
& 1/2 & 1/2
\end{array} $$
```python
# Step 2.a.ii: RK2 Heun's Method
Butcher_dict['RK2 Heun'] = (
[[sp.sympify(0)],
[sp.sympify(1), sp.sympify(1)],
["", sp.Rational(1,2), sp.Rational(1,2)]]
, 2)
```
<a id='rk2mp'></a>
### Step 2.a.iii: RK2 Midpoint Method \[Back to [top](#toc)\]
$$\label{rk2mp}$$
The [midpoint method](https://en.wikipedia.org/w/index.php?title=Midpoint_method&oldid=886630580) is a second-order RK method that obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:
\begin{align}
k_1 &= \Delta tf(y_n, t_n), \\
k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{1}{2}\Delta t), \\
y_{n+1} &= y_n + k_2 + \mathcal{O}\big((\Delta t)^3\big).
\end{align}
with corresponding Butcher table
$$\begin{array}{c|cc}
0 & \\
1/2 & 1/2 & \\ \hline
& 0 & 1
\end{array} $$
```python
# Step 2.a.iii: RK2 Midpoint (MP) Method
Butcher_dict['RK2 MP'] = (
[[sp.sympify(0)],
[sp.Rational(1,2), sp.Rational(1,2)],
["", sp.sympify(0), sp.sympify(1)]]
, 2)
```
<a id='rk2ralston'></a>
### Step 2.a.iv: RK2 Ralston's Method \[Back to [top](#toc)\]
$$\label{rk2ralston}$$
Ralston's method (see [Ralston (1962)](https://www.ams.org/journals/mcom/1962-16-080/S0025-5718-1962-0150954-0/S0025-5718-1962-0150954-0.pdf)) is a second-order RK method that obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:
\begin{align}
k_1 &= \Delta tf(y_n, t_n), \\
k_2 &= \Delta tf(y_n + \frac{2}{3}k_1, t_n + \frac{2}{3}\Delta t), \\
y_{n+1} &= y_n + \frac{1}{4}k_1 + \frac{3}{4}k_2 + \mathcal{O}\big((\Delta t)^3\big).
\end{align}
with corresponding Butcher table
$$\begin{array}{c|cc}
0 & \\
2/3 & 2/3 & \\ \hline
& 1/4 & 3/4
\end{array} $$
```python
# Step 2.a.iv: RK2 Ralston's Method
Butcher_dict['RK2 Ralston'] = (
[[sp.sympify(0)],
[sp.Rational(2,3), sp.Rational(2,3)],
["", sp.Rational(1,4), sp.Rational(3,4)]]
, 2)
```
<a id='rk3'></a>
### Step 2.a.v: Kutta's Third-order Method \[Back to [top](#toc)\]
$$\label{rk3}$$
[Kutta's third-order method](https://en.wikipedia.org/w/index.php?title=List_of_Runge%E2%80%93Kutta_methods&oldid=896594269) obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:
\begin{align}
k_1 &= \Delta tf(y_n, t_n), \\
k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{1}{2}\Delta t), \\
k_3 &= \Delta tf(y_n - k_1 + 2k_2, t_n + \Delta t) \\
y_{n+1} &= y_n + \frac{1}{6}k_1 + \frac{2}{3}k_2 + \frac{1}{6}k_3 + \mathcal{O}\big((\Delta t)^4\big).
\end{align}
with corresponding Butcher table
\begin{array}{c|ccc}
0 & \\
1/2 & 1/2 & \\
1 & -1 & 2 & \\ \hline
& 1/6 & 2/3 & 1/6
\end{array}
```python
# Step 2.a.v: Kutta's Third-order Method
Butcher_dict['RK3'] = (
[[sp.sympify(0)],
[sp.Rational(1,2), sp.Rational(1,2)],
[sp.sympify(1), sp.sympify(-1), sp.sympify(2)],
["", sp.Rational(1,6), sp.Rational(2,3), sp.Rational(1,6)]]
, 3)
```
<a id='rk3heun'></a>
### Step 2.a.vi: RK3 Heun's Method \[Back to [top](#toc)\]
$$\label{rk3heun}$$
[Heun's third-order method](https://en.wikipedia.org/w/index.php?title=List_of_Runge%E2%80%93Kutta_methods&oldid=896594269) obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:
\begin{align}
k_1 &= \Delta tf(y_n, t_n), \\
k_2 &= \Delta tf(y_n + \frac{1}{3}k_1, t_n + \frac{1}{3}\Delta t), \\
k_3 &= \Delta tf(y_n + \frac{2}{3}k_2, t_n + \frac{2}{3}\Delta t) \\
y_{n+1} &= y_n + \frac{1}{4}k_1 + \frac{3}{4}k_3 + \mathcal{O}\big((\Delta t)^4\big).
\end{align}
with corresponding Butcher table
\begin{array}{c|ccc}
0 & \\
1/3 & 1/3 & \\
2/3 & 0 & 2/3 & \\ \hline
& 1/4 & 0 & 3/4
\end{array}
```python
# Step 2.a.vi: RK3 Heun's Method
Butcher_dict['RK3 Heun'] = (
[[sp.sympify(0)],
[sp.Rational(1,3), sp.Rational(1,3)],
[sp.Rational(2,3), sp.sympify(0), sp.Rational(2,3)],
["", sp.Rational(1,4), sp.sympify(0), sp.Rational(3,4)]]
, 3)
```
<a id='rk3ralston'></a>
### Step 2.a.vii: RK3 Ralston's Method \[Back to [top](#toc)\]
$$\label{rk3ralston}$$
Ralston's third-order method (see [Ralston (1962)](https://www.ams.org/journals/mcom/1962-16-080/S0025-5718-1962-0150954-0/S0025-5718-1962-0150954-0.pdf)) obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:
\begin{align}
k_1 &= \Delta tf(y_n, t_n), \\
k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{1}{2}\Delta t), \\
k_3 &= \Delta tf(y_n + \frac{3}{4}k_2, t_n + \frac{3}{4}\Delta t) \\
y_{n+1} &= y_n + \frac{2}{9}k_1 + \frac{1}{3}k_2 + \frac{4}{9}k_3 + \mathcal{O}\big((\Delta t)^4\big).
\end{align}
with corresponding Butcher table
\begin{array}{c|ccc}
0 & \\
1/2 & 1/2 & \\
3/4 & 0 & 3/4 & \\ \hline
& 2/9 & 1/3 & 4/9
\end{array}
```python
# Step 2.a.vii: RK3 Ralston's Method
Butcher_dict['RK3 Ralston'] = (
[[sp.sympify(0)],
[sp.Rational(1,2), sp.Rational(1,2)],
[sp.Rational(3,4), sp.sympify(0), sp.Rational(3,4)],
["", sp.Rational(2,9), sp.Rational(1,3), sp.Rational(4,9)]]
, 3)
```
<a id='ssprk3'></a>
### Step 2.a.viii: Strong Stability Preserving Runge-Kutta (SSPRK3) Method \[Back to [top](#toc)\]
$$\label{ssprk3}$$
The [Strong Stability Preserving Runge-Kutta (SSPRK3)](https://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methods#Kutta's_third-order_method) method obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:
\begin{align}
k_1 &= \Delta tf(y_n, t_n), \\
k_2 &= \Delta tf(y_n + k_1, t_n + \Delta t), \\
k_3 &= \Delta tf(y_n + \frac{1}{4}k_1 + \frac{1}{4}k_2, t_n + \frac{1}{2}\Delta t) \\
y_{n+1} &= y_n + \frac{1}{6}k_1 + \frac{1}{6}k_2 + \frac{2}{3}k_3 + \mathcal{O}\big((\Delta t)^4\big).
\end{align}
with corresponding Butcher table
\begin{array}{c|ccc}
0 & \\
1 & 1 & \\
1/2 & 1/4 & 1/4 & \\ \hline
& 1/6 & 1/6 & 2/3
\end{array}
```python
# Step 2.a.viii: Strong Stability Preserving Runge-Kutta (SSPRK3) Method
Butcher_dict['SSPRK3'] = (
[[sp.sympify(0)],
[sp.sympify(1), sp.sympify(1)],
[sp.Rational(1,2), sp.Rational(1,4), sp.Rational(1,4)],
["", sp.Rational(1,6), sp.Rational(1,6), sp.Rational(2,3)]]
, 3)
```
<a id='rkfour'></a>
### Step 2.a.ix: Classic RK4 Method \[Back to [top](#toc)\]
$$\label{rkfour}$$
The [classic RK4 method](https://en.wikipedia.org/w/index.php?title=Runge%E2%80%93Kutta_methods&oldid=894771467) obtains the solution $y(t)$ at time $t_{n+1}$ from $t_n$ via:
\begin{align}
k_1 &= \Delta tf(y_n, t_n), \\
k_2 &= \Delta tf(y_n + \frac{1}{2}k_1, t_n + \frac{\Delta t}{2}), \\
k_3 &= \Delta tf(y_n + \frac{1}{2}k_2, t_n + \frac{\Delta t}{2}), \\
k_4 &= \Delta tf(y_n + k_3, t_n + \Delta t), \\
y_{n+1} &= y_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) + \mathcal{O}\big((\Delta t)^5\big).
\end{align}
with corresponding Butcher table
$$\begin{array}{c|cccc}
0 & \\
1/2 & 1/2 & \\
1/2 & 0 & 1/2 & \\
1 & 0 & 0 & 1 & \\ \hline
& 1/6 & 1/3 & 1/3 & 1/6
\end{array} $$
```python
# Step 2.a.ix: Classic RK4 Method
Butcher_dict['RK4'] = (
[[sp.sympify(0)],
[sp.Rational(1,2), sp.Rational(1,2)],
[sp.Rational(1,2), sp.sympify(0), sp.Rational(1,2)],
[sp.sympify(1), sp.sympify(0), sp.sympify(0), sp.sympify(1)],
["", sp.Rational(1,6), sp.Rational(1,3), sp.Rational(1,3), sp.Rational(1,6)]]
, 4)
```
<a id='dp5'></a>
### Step 2.a.x: RK5 Dormand-Prince Method \[Back to [top](#toc)\]
$$\label{dp5}$$
The Butcher table for the fifth-order Dormand-Prince (DP) method from the RK5(4) family (see [Dormand, J. R.; Prince, P. J. (1980)](https://www.sciencedirect.com/science/article/pii/0771050X80900133?via%3Dihub)) is:
$$\begin{array}{c|ccccccc}
0 & \\
\frac{1}{5} & \frac{1}{5} & \\
\frac{3}{10} & \frac{3}{40} & \frac{9}{40} & \\
\frac{4}{5} & \frac{44}{45} & \frac{-56}{15} & \frac{32}{9} & \\
\frac{8}{9} & \frac{19372}{6561} & \frac{-25360}{2187} & \frac{64448}{6561} & \frac{-212}{729} & \\
1 & \frac{9017}{3168} & \frac{-355}{33} & \frac{46732}{5247} & \frac{49}{176} & \frac{-5103}{18656} & \\
1 & \frac{35}{384} & 0 & \frac{500}{1113} & \frac{125}{192} & \frac{-2187}{6784} & \frac{11}{84} & \\ \hline
& \frac{35}{384} & 0 & \frac{500}{1113} & \frac{125}{192} & \frac{-2187}{6784} & \frac{11}{84} & 0
\end{array} $$
```python
# Step 2.a.x: RK5 Dormand-Prince Method
Butcher_dict['DP5'] = (
[[sp.sympify(0)],
[sp.Rational(1,5), sp.Rational(1,5)],
[sp.Rational(3,10),sp.Rational(3,40), sp.Rational(9,40)],
[sp.Rational(4,5), sp.Rational(44,45), sp.Rational(-56,15), sp.Rational(32,9)],
[sp.Rational(8,9), sp.Rational(19372,6561), sp.Rational(-25360,2187), sp.Rational(64448,6561), sp.Rational(-212,729)],
[sp.sympify(1), sp.Rational(9017,3168), sp.Rational(-355,33), sp.Rational(46732,5247), sp.Rational(49,176), sp.Rational(-5103,18656)],
[sp.sympify(1), sp.Rational(35,384), sp.sympify(0), sp.Rational(500,1113), sp.Rational(125,192), sp.Rational(-2187,6784), sp.Rational(11,84)],
["", sp.Rational(35,384), sp.sympify(0), sp.Rational(500,1113), sp.Rational(125,192), sp.Rational(-2187,6784), sp.Rational(11,84), sp.sympify(0)]]
, 5)
```
<a id='dp5alt'></a>
### Step 2.a.xi: RK5 Dormand-Prince Method Alternative \[Back to [top](#toc)\]
$$\label{dp5alt}$$
The Butcher table for the fifth-order Dormand-Prince (DP) method from the RK6(5) family (see [Dormand, J. R.; Prince, P. J. (1981)](https://www.sciencedirect.com/science/article/pii/0771050X81900103)) is:
$$\begin{array}{c|ccccccc}
0 & \\
\frac{1}{10} & \frac{1}{10} & \\
\frac{2}{9} & \frac{-2}{81} & \frac{20}{81} & \\
\frac{3}{7} & \frac{615}{1372} & \frac{-270}{343} & \frac{1053}{1372} & \\
\frac{3}{5} & \frac{3243}{5500} & \frac{-54}{55} & \frac{50949}{71500} & \frac{4998}{17875} & \\
\frac{4}{5} & \frac{-26492}{37125} & \frac{72}{55} & \frac{2808}{23375} & \frac{-24206}{37125} & \frac{338}{459} & \\
1 & \frac{5561}{2376} & \frac{-35}{11} & \frac{-24117}{31603} & \frac{899983}{200772} & \frac{-5225}{1836} & \frac{3925}{4056} & \\ \hline
& \frac{821}{10800} & 0 & \frac{19683}{71825} & \frac{175273}{912600} & \frac{395}{3672} & \frac{785}{2704} & \frac{3}{50}
\end{array}$$
```python
# Step 2.a.xi: RK5 Dormand-Prince Method Alternative
Butcher_dict['DP5alt'] = (
[[sp.sympify(0)],
[sp.Rational(1,10), sp.Rational(1,10)],
[sp.Rational(2,9), sp.Rational(-2, 81), sp.Rational(20, 81)],
[sp.Rational(3,7), sp.Rational(615, 1372), sp.Rational(-270, 343), sp.Rational(1053, 1372)],
[sp.Rational(3,5), sp.Rational(3243, 5500), sp.Rational(-54, 55), sp.Rational(50949, 71500), sp.Rational(4998, 17875)],
[sp.Rational(4, 5), sp.Rational(-26492, 37125), sp.Rational(72, 55), sp.Rational(2808, 23375), sp.Rational(-24206, 37125), sp.Rational(338, 459)],
[sp.sympify(1), sp.Rational(5561, 2376), sp.Rational(-35, 11), sp.Rational(-24117, 31603), sp.Rational(899983, 200772), sp.Rational(-5225, 1836), sp.Rational(3925, 4056)],
["", sp.Rational(821, 10800), sp.sympify(0), sp.Rational(19683, 71825), sp.Rational(175273, 912600), sp.Rational(395, 3672), sp.Rational(785, 2704), sp.Rational(3, 50)]]
, 5)
```
<a id='ck5'></a>
### Step 2.a.xii: RK5 Cash-Karp Method \[Back to [top](#toc)\]
$$\label{ck5}$$
The Butcher table for the fifth-order Cash-Karp method (see [J. R. Cash, A. H. Karp (1990)](https://dl.acm.org/citation.cfm?doid=79505.79507)) is:
$$\begin{array}{c|cccccc}
0 & \\
\frac{1}{5} & \frac{1}{5} & \\
\frac{3}{10} & \frac{3}{40} & \frac{9}{40} & \\
\frac{3}{5} & \frac{3}{10} & \frac{-9}{10} & \frac{6}{5} & \\
1 & \frac{-11}{54} & \frac{5}{2} & \frac{-70}{27} & \frac{35}{27} & \\
\frac{7}{8} & \frac{1631}{55296} & \frac{175}{512} & \frac{575}{13824} & \frac{44275}{110592} & \frac{253}{4096} & \\ \hline
& \frac{37}{378} & 0 & \frac{250}{621} & \frac{125}{594} & 0 & \frac{512}{1771}
\end{array}$$
```python
# Step 2.a.xii: RK5 Cash-Karp Method
Butcher_dict['CK5'] = (
[[sp.sympify(0)],
[sp.Rational(1,5), sp.Rational(1,5)],
[sp.Rational(3,10),sp.Rational(3,40), sp.Rational(9,40)],
[sp.Rational(3,5), sp.Rational(3,10), sp.Rational(-9,10), sp.Rational(6,5)],
[sp.sympify(1), sp.Rational(-11,54), sp.Rational(5,2), sp.Rational(-70,27), sp.Rational(35,27)],
[sp.Rational(7,8), sp.Rational(1631,55296), sp.Rational(175,512), sp.Rational(575,13824), sp.Rational(44275,110592), sp.Rational(253,4096)],
["",sp.Rational(37,378), sp.sympify(0), sp.Rational(250,621), sp.Rational(125,594), sp.sympify(0), sp.Rational(512,1771)]]
, 5)
```
<a id='dp6'></a>
### Step 2.a.xiii: RK6 Dormand-Prince Method \[Back to [top](#toc)\]
$$\label{dp6}$$
The Butcher table for the sixth-order Dormand-Prince method (see [Dormand, J. R.; Prince, P. J. (1981)](https://www.sciencedirect.com/science/article/pii/0771050X81900103)) is:
$$\begin{array}{c|cccccccc}
0 & \\
\frac{1}{10} & \frac{1}{10} & \\
\frac{2}{9} & \frac{-2}{81} & \frac{20}{81} & \\
\frac{3}{7} & \frac{615}{1372} & \frac{-270}{343} & \frac{1053}{1372} & \\
\frac{3}{5} & \frac{3243}{5500} & \frac{-54}{55} & \frac{50949}{71500} & \frac{4998}{17875} & \\
\frac{4}{5} & \frac{-26492}{37125} & \frac{72}{55} & \frac{2808}{23375} & \frac{-24206}{37125} & \frac{338}{459} & \\
1 & \frac{5561}{2376} & \frac{-35}{11} & \frac{-24117}{31603} & \frac{899983}{200772} & \frac{-5225}{1836} & \frac{3925}{4056} & \\
1 & \frac{465467}{266112} & \frac{-2945}{1232} & \frac{-5610201}{14158144} & \frac{10513573}{3212352} & \frac{-424325}{205632} & \frac{376225}{454272} & 0 & \\ \hline
& \frac{61}{864} & 0 & \frac{98415}{321776} & \frac{16807}{146016} & \frac{1375}{7344} & \frac{1375}{5408} & \frac{-37}{1120} & \frac{1}{10}
\end{array}$$
```python
# Step 2.a.xiii: RK6 Dormand-Prince Method
Butcher_dict['DP6'] = (
[[sp.sympify(0)],
[sp.Rational(1,10), sp.Rational(1,10)],
[sp.Rational(2,9), sp.Rational(-2, 81), sp.Rational(20, 81)],
[sp.Rational(3,7), sp.Rational(615, 1372), sp.Rational(-270, 343), sp.Rational(1053, 1372)],
[sp.Rational(3,5), sp.Rational(3243, 5500), sp.Rational(-54, 55), sp.Rational(50949, 71500), sp.Rational(4998, 17875)],
[sp.Rational(4, 5), sp.Rational(-26492, 37125), sp.Rational(72, 55), sp.Rational(2808, 23375), sp.Rational(-24206, 37125), sp.Rational(338, 459)],
[sp.sympify(1), sp.Rational(5561, 2376), sp.Rational(-35, 11), sp.Rational(-24117, 31603), sp.Rational(899983, 200772), sp.Rational(-5225, 1836), sp.Rational(3925, 4056)],
[sp.sympify(1), sp.Rational(465467, 266112), sp.Rational(-2945, 1232), sp.Rational(-5610201, 14158144), sp.Rational(10513573, 3212352), sp.Rational(-424325, 205632), sp.Rational(376225, 454272), sp.sympify(0)],
["", sp.Rational(61, 864), sp.sympify(0), sp.Rational(98415, 321776), sp.Rational(16807, 146016), sp.Rational(1375, 7344), sp.Rational(1375, 5408), sp.Rational(-37, 1120), sp.Rational(1,10)]]
, 6)
```
<a id='l6'></a>
### Step 2.a.xiv: RK6 Luther's Method \[Back to [top](#toc)\]
$$\label{l6}$$
The Butcher table for Luther's sixth-order method (see [H. A. Luther (1968)](http://www.ams.org/journals/mcom/1968-22-102/S0025-5718-68-99876-1/S0025-5718-68-99876-1.pdf)) is:
$$\begin{array}{c|ccccccc}
0 & \\
1 & 1 & \\
\frac{1}{2} & \frac{3}{8} & \frac{1}{8} & \\
\frac{2}{3} & \frac{8}{27} & \frac{2}{27} & \frac{8}{27} & \\
\frac{(7-q)}{14} & \frac{(-21 + 9q)}{392} & \frac{(-56 + 8q)}{392} & \frac{(336 - 48q)}{392} & \frac{(-63 + 3q)}{392} & \\
\frac{(7+q)}{14} & \frac{(-1155 - 255q)}{1960} & \frac{(-280 - 40q)}{1960} & \frac{-320q}{1960} & \frac{(63 + 363q)}{1960} & \frac{(2352 + 392q)}{1960} & \\
1 & \frac{(330 + 105q)}{180} & \frac{2}{3} & \frac{(-200 + 280q)}{180} & \frac{(126 - 189q)}{180} & \frac{(-686 - 126q)}{180} & \frac{(490 - 70q)}{180} & \\ \hline
& \frac{1}{20} & 0 & \frac{16}{45} & 0 & \frac{49}{180} & \frac{49}{180} & \frac{1}{20}
\end{array}$$
where $q = \sqrt{21}$.
```python
# Step 2.a.xiv: RK6 Luther's Method
q = sp.sqrt(21)
Butcher_dict['L6'] = (
[[sp.sympify(0)],
[sp.sympify(1), sp.sympify(1)],
[sp.Rational(1,2), sp.Rational(3,8), sp.Rational(1,8)],
[sp.Rational(2,3), sp.Rational(8,27), sp.Rational(2,27), sp.Rational(8,27)],
[(7 - q)/14, (-21 + 9*q)/392, (-56 + 8*q)/392, (336 -48*q)/392, (-63 + 3*q)/392],
[(7 + q)/14, (-1155 - 255*q)/1960, (-280 - 40*q)/1960, (-320*q)/1960, (63 + 363*q)/1960, (2352 + 392*q)/1960],
[sp.sympify(1), ( 330 + 105*q)/180, sp.Rational(2,3), (-200 + 280*q)/180, (126 - 189*q)/180, (-686 - 126*q)/180, (490 - 70*q)/180],
["", sp.Rational(1, 20), sp.sympify(0), sp.Rational(16, 45), sp.sympify(0), sp.Rational(49, 180), sp.Rational(49, 180), sp.Rational(1, 20)]]
, 6)
```
<a id='dp8'></a>
### Step 2.a.xv: RK8 Dormand-Prince Method \[Back to [top](#toc)\]
$$\label{dp8}$$
The Butcher table for the eighth-order Dormand-Prince method (see [Dormand, J. R.; Prince, P. J. (1981)](https://www.sciencedirect.com/science/article/pii/0771050X81900103)) is:
$$\begin{array}{c|ccccccccc}
0 & \\
\frac{1}{18} & \frac{1}{18} & \\
\frac{1}{12} & \frac{1}{48} & \frac{1}{16} & \\
\frac{1}{8} & \frac{1}{32} & 0 & \frac{3}{32} & \\
\frac{5}{16} & \frac{5}{16} & 0 & \frac{-75}{64} & \frac{75}{64} & \\
\frac{3}{8} & \frac{3}{80} & 0 & 0 & \frac{3}{16} & \frac{3}{20} & \\
\frac{59}{400} & \frac{29443841}{614563906} & 0 & 0 & \frac{77736538}{692538347} & \frac{-28693883}{1125000000} & \frac{23124283}{1800000000} & \\
\frac{93}{200} & \frac{16016141}{946692911} & 0 & 0 & \frac{61564180}{158732637} & \frac{22789713}{633445777} & \frac{545815736}{2771057229} & \frac{-180193667}{1043307555} & \\
\frac{5490023248}{9719169821} & \frac{39632708}{573591083} & 0 & 0 & \frac{-433636366}{683701615} & \frac{-421739975}{2616292301} & \frac{100302831}{723423059} & \frac{790204164}{839813087} & \frac{800635310}{3783071287} & \\
\frac{13}{20} & \frac{246121993}{1340847787} & 0 & 0 & \frac{-37695042795}{15268766246} & \frac{-309121744}{1061227803} & \frac{-12992083}{490766935} & \frac{6005943493}{2108947869} & \frac{393006217}{1396673457} & \frac{123872331}{1001029789} & \\
\frac{1201146811}{1299019798} & \frac{-1028468189}{846180014} & 0 & 0 & \frac{8478235783}{508512852} & \frac{1311729495}{1432422823} & \frac{-10304129995}{1701304382} & \frac{-48777925059}{3047939560} & \frac{15336726248}{1032824649} & \frac{-45442868181}{3398467696} & \frac{3065993473}{597172653} & \\
1 & \frac{185892177}{718116043} & 0 & 0 & \frac{-3185094517}{667107341} & \frac{-477755414}{1098053517} & \frac{-703635378}{230739211} & \frac{5731566787}{1027545527} & \frac{5232866602}{850066563} & \frac{-4093664535}{808688257} & \frac{3962137247}{1805957418} & \frac{65686358}{487910083} & \\
1 & \frac{403863854}{491063109} & 0 & 0 & \frac{-5068492393}{434740067} & \frac{-411421997}{543043805} & \frac{652783627}{914296604} & \frac{11173962825}{925320556} & \frac{-13158990841}{6184727034} & \frac{3936647629}{1978049680} & \frac{-160528059}{685178525} & \frac{248638103}{1413531060} & 0 & \\
& \frac{14005451}{335480064} & 0 & 0 & 0 & 0 & \frac{-59238493}{1068277825} & \frac{181606767}{758867731} & \frac{561292985}{797845732} & \frac{-1041891430}{1371343529} & \frac{760417239}{1151165299} & \frac{118820643}{751138087} & \frac{-528747749}{2220607170} & \frac{1}{4}
\end{array}$$
```python
# Step 2.a.xv: RK8 Dormand-Prince Method
Butcher_dict['DP8']=(
[[sp.sympify(0)],
[sp.Rational(1, 18), sp.Rational(1, 18)],
[sp.Rational(1, 12), sp.Rational(1, 48), sp.Rational(1, 16)],
[sp.Rational(1, 8), sp.Rational(1, 32), sp.sympify(0), sp.Rational(3, 32)],
[sp.Rational(5, 16), sp.Rational(5, 16), sp.sympify(0), sp.Rational(-75, 64), sp.Rational(75, 64)],
[sp.Rational(3, 8), sp.Rational(3, 80), sp.sympify(0), sp.sympify(0), sp.Rational(3, 16), sp.Rational(3, 20)],
[sp.Rational(59, 400), sp.Rational(29443841, 614563906), sp.sympify(0), sp.sympify(0), sp.Rational(77736538, 692538347), sp.Rational(-28693883, 1125000000), sp.Rational(23124283, 1800000000)],
[sp.Rational(93, 200), sp.Rational(16016141, 946692911), sp.sympify(0), sp.sympify(0), sp.Rational(61564180, 158732637), sp.Rational(22789713, 633445777), sp.Rational(545815736, 2771057229), sp.Rational(-180193667, 1043307555)],
[sp.Rational(5490023248, 9719169821), sp.Rational(39632708, 573591083), sp.sympify(0), sp.sympify(0), sp.Rational(-433636366, 683701615), sp.Rational(-421739975, 2616292301), sp.Rational(100302831, 723423059), sp.Rational(790204164, 839813087), sp.Rational(800635310, 3783071287)],
[sp.Rational(13, 20), sp.Rational(246121993, 1340847787), sp.sympify(0), sp.sympify(0), sp.Rational(-37695042795, 15268766246), sp.Rational(-309121744, 1061227803), sp.Rational(-12992083, 490766935), sp.Rational(6005943493, 2108947869), sp.Rational(393006217, 1396673457), sp.Rational(123872331, 1001029789)],
[sp.Rational(1201146811, 1299019798), sp.Rational(-1028468189, 846180014), sp.sympify(0), sp.sympify(0), sp.Rational(8478235783, 508512852), sp.Rational(1311729495, 1432422823), sp.Rational(-10304129995, 1701304382), sp.Rational(-48777925059, 3047939560), sp.Rational(15336726248, 1032824649), sp.Rational(-45442868181, 3398467696), sp.Rational(3065993473, 597172653)],
[sp.sympify(1), sp.Rational(185892177, 718116043), sp.sympify(0), sp.sympify(0), sp.Rational(-3185094517, 667107341), sp.Rational(-477755414, 1098053517), sp.Rational(-703635378, 230739211), sp.Rational(5731566787, 1027545527), sp.Rational(5232866602, 850066563), sp.Rational(-4093664535, 808688257), sp.Rational(3962137247, 1805957418), sp.Rational(65686358, 487910083)],
[sp.sympify(1), sp.Rational(403863854, 491063109), sp.sympify(0), sp.sympify(0), sp.Rational(-5068492393, 434740067), sp.Rational(-411421997, 543043805), sp.Rational(652783627, 914296604), sp.Rational(11173962825, 925320556), sp.Rational(-13158990841, 6184727034), sp.Rational(3936647629, 1978049680), sp.Rational(-160528059, 685178525), sp.Rational(248638103, 1413531060), sp.sympify(0)],
["", sp.Rational(14005451, 335480064), sp.sympify(0), sp.sympify(0), sp.sympify(0), sp.sympify(0), sp.Rational(-59238493, 1068277825), sp.Rational(181606767, 758867731), sp.Rational(561292985, 797845732), sp.Rational(-1041891430, 1371343529), sp.Rational(760417239, 1151165299), sp.Rational(118820643, 751138087), sp.Rational(-528747749, 2220607170), sp.Rational(1, 4)]]
, 8)
```
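Before comparing against the NRPy+ module, we can sanity-check the tables themselves. The sketch below (a supplementary check, independent of the validation in Step 3) verifies the standard consistency conditions $\sum_i b_i = 1$ and $\sum_j a_{ij} = c_i$, which every table above should satisfy:
```python
# Sanity check (illustrative): verify sum(b) = 1 and row-sums of a equal c
# for every Butcher table entered above. sp.simplify handles the sqrt(21)
# entries of Luther's method, where structural equality would not suffice.
for name, (table, order) in Butcher_dict.items():
    b_row = table[-1][1:]              # final row: ["", b_1, ..., b_s]
    assert sp.simplify(sum(b_row) - 1) == 0, name
    for row in table[1:-1]:            # intermediate rows: [c_i, a_i1, ...]
        assert sp.simplify(sum(row[1:]) - row[0]) == 0, name
print("All Butcher tables satisfy sum(b) = 1 and row-sum(a) = c.")
```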
<a id='code_validation'></a>
# Step 3: Code validation against `MoLtimestepping.RK_Butcher_Table_Dictionary` NRPy+ module \[Back to [top](#toc)\]
$$\label{code_validation}$$
As a code validation check, we verify agreement in the dictionary of Butcher tables between
1. this tutorial and
2. the NRPy+ [MoLtimestepping.RK_Butcher_Table_Dictionary](../edit/MoLtimestepping/RK_Butcher_Table_Dictionary.py) module.
We analyze all key/value entries in the dictionary for consistency.
```python
# Step 3: Code validation against MoLtimestepping.RK_Butcher_Table_Dictionary NRPy+ module
import sys
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict as B_dict
valid = True
for key, value in Butcher_dict.items():
    if value != B_dict[key]:
        valid = False
        print(key)
if valid and len(Butcher_dict) == len(B_dict):
    print("The dictionaries match!")
else:
    print("ERROR: Dictionaries don't match!")
    sys.exit(1)
```
The dictionaries match!
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-RK_Butcher_Table_Dictionary.pdf](Tutorial-RK_Butcher_Table_Dictionary.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```python
!jupyter nbconvert --to latex --template latex_nrpy_style.tplx --log-level='WARN' Tutorial-RK_Butcher_Table_Dictionary.ipynb
!pdflatex -interaction=batchmode Tutorial-RK_Butcher_Table_Dictionary.tex
!pdflatex -interaction=batchmode Tutorial-RK_Butcher_Table_Dictionary.tex
!pdflatex -interaction=batchmode Tutorial-RK_Butcher_Table_Dictionary.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
This is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)
restricted \write18 enabled.
entering extended mode
|
%!TEX root = ../copatterns-thesis.tex
\chapter{Copatterns Implementation}
\section{Desugaring}
% Why do we want to desugar?
Codata definitions and record definitions with projections, along with function definitions described with copatterns, can all be written using existing Idris constructs. In particular, we consider a function such as \texttt{dupNth} given in Figure\,\ref{fig:dup_copatterns} to be syntactic sugar for the definition given in Figure\,\ref{fig:dup_desugared}. Likewise, the definition of the coinductively defined Stream type with projections is desugared into a constructor-based definition with exactly one constructor.
\begin{figure}
\begin{lstlisting}
corecord Stream : Type -> Type where
head : Stream a -> a
tail : Stream a -> Stream a
constructor mkStream
--| dupNth s n m duplicates every nth element of s,
--| starting with the mth element
dupNth : Stream a -> Nat -> Nat -> Stream a
head (dupNth s n m) = head s
head (tail (dupNth s n Z)) = head s
head (tail (dupNth s n (S m'))) = head (tail s)
tail (tail (dupNth s n Z)) = tail (dupNth s n n)
tail (tail (dupNth s n (S m'))) = tail (dupNth (tail s) n m')
\end{lstlisting}
\caption{A duplication function on streams, defined with copatterns.}
\label{fig:dup_copatterns}
\end{figure}
\begin{figure}
\begin{lstlisting}
codata Stream : Type -> Type where
mkStream : a -> Stream a -> Stream a
head : Stream a -> a
head (mkStream h _) = h
tail : Stream a -> Stream a
tail (mkStream _ t) = t
dupNth : Stream a -> Nat -> Nat -> Stream a
dupNth s n Z =
mkStream (head s)
(mkStream (head s)
(tail (dupNth s n n)))
dupNth s n (S m') =
mkStream (head s)
(mkStream (head (tail s))
(tail (dupNth (tail s) n m')))
\end{lstlisting}
\caption{A desugared version of the program described in Figure \ref{fig:dup_copatterns}.}
\label{fig:dup_desugared}
\end{figure}
In order to define the desugaring mechanism which allows us to straightforwardly transform programs from a projection-based form to a constructor-based form, we recall that each projection in a type definition defines an elimination rule for that type. For a general coinductive type with product structure $\nu A. B_1 \times B_2 \times \cdots \times B_n$, where $A$ may occur in any of the types $B_i$ (for $1 \leq i \leq n$), we can define \textit{n} elimination rules in the style of Harper\,\cite[Ch. 15]{Harper:2012}, as shown in Figure\,\ref{fig:nuA_elim_rules}. Since these rules define all of the ways in which we can eliminate a value of the given type, we can synthesize them into a single introduction rule with \textit{n} premises, one for the result of each elimination rule. This construction, illustrated in Figure\,\ref{fig:nuA_intro_rule}, is reversible: we can always apply our elimination rules to an introduction form and vice versa.
\begin{figure}
\caption{Elimination rules $\pi_1, \pi_2 \ldots \pi_n$ for a type $\nu A. B_1 \times B_2 \times \ldots \times B_n$.}
\label{fig:nuA_elim_rules}
\end{figure}
\begin{figure}
\caption{Introduction rule for a type $\nu A. B_1 \times B_2 \times \ldots \times B_n$ based on information synthesized from the elimination rules.}
\label{fig:nuA_intro_rule}
\end{figure}
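For concreteness, these rules can be sketched as follows, writing $\vec{B}$ for $B_1 \times B_2 \times \cdots \times B_n$ and assuming a tuple-style introduction form $\langle e_1, \ldots, e_n \rangle$; this shows only the general shape, and the precise premises depend on the chosen formulation of $\nu$-types:
\[
\frac{\Gamma \vdash e : \nu A.\, \vec{B}}
     {\Gamma \vdash \pi_i\, e : [\nu A.\, \vec{B} / A]\, B_i}
\quad (1 \leq i \leq n)
\qquad
\frac{\Gamma \vdash e_1 : [\nu A.\, \vec{B}/A]\, B_1
      \quad \cdots \quad
      \Gamma \vdash e_n : [\nu A.\, \vec{B}/A]\, B_n}
     {\Gamma \vdash \langle e_1, \ldots, e_n \rangle : \nu A.\, \vec{B}}
\]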
Using the intuition of going back and forth between elimination and introduction, we now present our high-level desugaring mechanism which takes constructs from projection-based form to constructor-based form. Note that all desugaring operations happen during or before elaboration.
\subsection{Desugaring record definitions}
%Record and corecords definitions are desugared according to the rule defined in Figure\,\ref{desugaring_records}. The type of each projection becomes an input type to the constructor with the given name.
A record definition written with projections is desugared to a record definition with a single constructor, where each parameter is labeled with the name given in the projection-based form. An intuition is given in Figure\,\ref{fig:desugar_records}. In Idris, automatic generation of projection functions with the specified names happens during elaboration, and desugaring therefore becomes a matter of injecting the specified projections into the record constructor as named parameters.
\begin{figure}
\begin{lstlisting}[mathescape]
--| Projection-based form
record A : Type where
$\pi_1$ : T$_1$
$\pi_2$ : T$_2$
$\vdots$
$\pi_n$ : T$_n$
constructor mkA
--| Desugared form
record A : Type where
mkA : ($\pi_1$ : T$_1$) ->
($\pi_2$ : T$_2$) ->
$\vdots$
($\pi_n$ : T$_n$) ->
A
\end{lstlisting}
\caption{Desugaring record definitions.}
\label{fig:desugar_records}
\end{figure}
\subsection{Desugaring corecord definitions}
Corecord definitions are desugared to codata definitions with a single constructor. Similar to the desugaring of records, projections are injected into the constructor as named arguments. A general description is given in Figure\,\ref{fig:desugar_corecords}. However, as Idris does not have a corecord construct, projection functions must be generated while desugaring. Generating these functions is straightforward, since it is simply a matter of extracting the constructor argument at a specified position with pattern matching. This procedure is shown in Figure\,\ref{fig:generate_projections}.
\begin{figure}
\begin{lstlisting}[mathescape]
--| Projection-based form
corecord A : Type where
$\pi_1$ : T$_1$
$\pi_2$ : T$_2$
$\vdots$
$\pi_n$ : T$_n$
constructor mkA
--| Desugared form
codata record A : Type where
mkA : ($\pi_1$ : T$_1$) ->
($\pi_2$ : T$_2$) ->
$\vdots$
($\pi_n$ : T$_n$) ->
A
\end{lstlisting}
\caption{Desugaring corecord definitions.}
\label{fig:desugar_corecords}
\end{figure}
\begin{figure}
\begin{lstlisting}[mathescape]
generateProjection ($\pi_i$ : T$_i$) =
$\pi_i$ (mkA (arg$_1$) (arg$_2$) $\cdots$ (arg$_i$) $\cdots$ (arg$_n$)) = arg$_i$
\end{lstlisting}
\caption{Generating projection functions for a desugared corecord definition.}
\label{fig:generate_projections}
\end{figure}
\subsection{Desugaring left-hand side projections}
Left-hand side projections for values of both record and corecord type can be desugared in the same way. A clause with left-hand side projections is desugared in three steps: expand, reduce, and merge. To illustrate each step, we will show the steps involved in desugaring the \textit{dupNth} function from Figure\,\ref{fig:dup_copatterns}.
\begin{figure}
\begin{lstlisting}[mathescape]
dupNth : Stream a -> Nat -> Nat -> Stream a
head (dupNth s n m) =
head (mkStream (head s) ?tail)
head (tail (dupNth s n Z)) =
head (tail (mkStream ?head (mkStream (head s) ?tailtail)))
head (tail (dupNth s n (S m'))) =
head (tail (mkStream ?head (mkStream (head (tail s)) ?tailtail)))
tail (tail (dupNth s n Z)) =
tail (tail (mkStream ?head (mkStream ?headtail
(tail (dupNth s n n)))))
tail (tail (dupNth s n (S m'))) =
tail (tail (mkStream ?head (mkStream ?headtail
(tail (dupNth (tail s) n m')))))
\end{lstlisting}
\caption{Desugaring, step 1: Expand right-hand sides to make projections on a constructor.}
\label{fig:desugar_step1}
\end{figure}
The first step, expansion, is shown in Figure\,\ref{fig:desugar_step1}. Since we desugar in the elaboration phase, we assume that the constructor for a given type, \textit{mkStream} in this example, is available in the context. During expansion, the right-hand side of a clause with one or more left-hand side projections is expanded such that projections happen on an appropriate constructor, where the original right-hand side is injected as a parameter in the position specified by left-hand side projections. Due to the intuition given above, we do not have a well-formed introduction form (i.e. constructor application) before we know the result of all eliminations. Because each clause only defines one elimination rule, holes (e.g. ?tail) must therefore be left for constructor arguments unknown to a given clause. However, we defer resolving these holes to the merging step. None of the clauses try to access information unknown to them, so no undefined behaviour can arise as a result of this step.
\begin{figure}
\begin{lstlisting}[mathescape]
dupNth : Stream a -> Nat -> Nat -> Stream a
dupNth s n m =
mkStream (head s) ?tail
dupNth s n Z =
mkStream ?head (mkStream (head s) ?tailtail)
dupNth s n (S m') =
mkStream ?head (mkStream (head (tail s)) ?tailtail)
dupNth s n Z =
mkStream ?head (mkStream ?headtail (tail (dupNth s n n)))
dupNth s n (S m') =
mkStream ?head (mkStream ?headtail (tail (dupNth (tail s) n m')))
\end{lstlisting}
\caption{Desugaring, step 2: Reduce clauses by removing projections appearing in the same position on left and right-hand sides.}
\label{fig:desugar_step2}
\end{figure}
Reduction is the second step of desugaring, and is exemplified in Figure\,\ref{fig:desugar_step2}. In this step, each clause is reduced by removing equivalent projections on both sides of the clause, similar to how one would reduce a mathematical equation. We are allowed to perform such reductions at this point because all the clauses make right-hand side projections directly on the (as yet unmerged) output of the function, while all the left-hand side projections happen directly on the input.
\begin{figure}
\begin{lstlisting}[mathescape]
dupNth : Stream a -> Nat -> Nat -> Stream a
dupNth s n Z =
mkStream (head s) (mkStream (head s)
(tail (dupNth s n n)))
dupNth s n (S m') =
mkStream (head s)
(mkStream (head (tail s))
(tail (dupNth (tail s) n m')))
\end{lstlisting}
\caption{Desugaring, step 3: Merge right-hand sides of compatible clauses.}
\label{fig:desugar_step3}
\end{figure}
The third and final step is merging, shown in Figure\,\ref{fig:desugar_step3}, where the right-hand sides of compatible clauses are merged into a single clause. Two clauses are considered to be compatible if (1) they make equivalent pattern matches on the same arguments or (2) a more specific clause is merged with a more general clause. The first condition means that \texttt{dupNth s n Z} is compatible with \texttt{dupNth s n Z}, but not \texttt{dupNth s n (S~m')}. The second condition is more subtle, expressing that \texttt{dupNth s n Z} is compatible with \texttt{dupNth s n m}, since the former is more specific than the latter. Accordingly, \texttt{dupNth s n m} is not compatible with \texttt{dupNth s n Z}.
\begin{figure}
\begin{lstlisting}[mathescape]
--| Expands the right-hand side to make
--| projections on a constructor definition
expand ($\pi_i$ (f x)) rhs = $\pi_i$ (mkA ?arg$_1$ ?arg$_2$ $\cdots$ rhs$_i$ $\cdots$ ?arg$_n$)
expand ($\pi_j$ lhs) rhs =
$\pi_j$ (expand lhs (mkA ?arg$_1$ ?arg$_2$ $\cdots$ rhs$_j$ $\cdots$ ?arg$_n$))
expand lhs rhs = rhs
--| Reduce an expression by removing equal projections
--| on the left and right-hand sides of a clause
reduce (f x) rhs = f x = rhs
reduce ($\pi_i$ lhs) ($\pi_i$ rhs) = reduce lhs rhs
reduce _ _ = error -- Trying to reduce non-compatible projections
--| Merges the first clause with the second clause
--| if their left-hand sides are compatible
mergeWithClause (lhs$_1$ = rhs$_1$) (lhs$_2$ = rhs$_2$) =
if compatible (lhs$_1$) (lhs$_2$)
then (lhs$_1$ = merge (rhs$_1$) (rhs$_2$))
else (lhs$_1$ = rhs$_1$)
--| Merges the first constructor with the second
merge (mkA ?arg$_1$ ?arg$_2$ $\cdots$ ?arg$_i$ $\cdots$ ?arg$_n$)
(mkA ?arg$_1$ ?arg$_2$ $\cdots$ rhs$_i$ $\cdots$ ?arg$_n$)
= (mkA ?arg$_1$ ?arg$_2$ $\cdots$ rhs$_i$ $\cdots$ ?arg$_n$)
merge (mkA ?arg$_1$ ?arg$_2$ $\cdots$ rhs$_i$ $\cdots$ ?arg$_n$)
(mkA ?arg$_1$ ?arg$_2$ $\cdots$ ?arg$_i$ $\cdots$ ?arg$_n$)
= (mkA ?arg$_1$ ?arg$_2$ $\cdots$ rhs$_i$ $\cdots$ ?arg$_n$)
merge (mkA ?arg$_1$ ?arg$_2$ $\cdots$ rhs$_{i1}$ $\cdots$ ?arg$_n$)
(mkA ?arg$_1$ ?arg$_2$ $\cdots$ rhs$_{i2}$ $\cdots$ ?arg$_n$)
= error -- Trying to merge two different RHS
merge (mkA ?arg$_1$ ?arg$_2$ $\cdots$ mkA($\cdots$)$_{i1}$ $\cdots$ ?arg$_n$)
(mkA ?arg$_1$ ?arg$_2$ $\cdots$ mkA($\cdots$)$_{i2}$ $\cdots$ ?arg$_n$)
= (mkA ?arg$_1$ ?arg$_2$ $\cdots$ merge (mkA($\cdots$)$_{i1}$) (mkA($\cdots$)$_{i2}$) $\cdots$ ?arg$_n$)
merge xs ys = xs
--| Desugars a list of clauses
desugar ((lhs = rhs) :: clauses) =
let expandedRhs = expand lhs rhs
in let reduced = reduce lhs expandedRhs
in foldr mergeWithClause reduced clauses
desugar [] = []
\end{lstlisting}
\caption{High-level formalization of desugaring, given in Haskell-like syntax. The constructor name \texttt{mkA} signifies a constructor for an arbitrary type A, and $\pi_i$ denotes projections.}
\label{fig:desugar_formalization}
\end{figure}
A formalization of the three steps is shown in Figure\,\ref{fig:desugar_formalization}. Expansion and reduction are straightforward. Merging tries to merge one clause with all other clauses of a definition, although only compatible clauses will lead to an actual merge. This presentation makes no mention of implicit arguments, since these are treated in the same way as explicit arguments.
% merge : Constructor A -> Constructor A -> Constructor A
% merge (Constr name args) (Constr name' args') =
% if name == name'
% then (Constr name (mergeArgs args args'))
% else error
% mergeArgs : [Args] -> [Args] -> [Args]
% mergeArgs (? :: xs) (y :: ys) = y :: mergeArgs xs ys
% mergeArgs (x :: xs) (? :: ys) = x :: mergeArgs xs ys
% mergeArgs ((Constr n args) :: xs) ((Constr m args') :: ys) =
% if n == m
% then (Constr n (mergeArgs args args')) :: mergeArgs xs ys
% else error
% mergeArgs (x :: xs) (y :: ys) = error
% mkRHS : Projection -> LHS -> RHS -> RHS
% mkRHS ($\pi$_i) (fun n) (r_i) = mk
\subsection{Desugaring right-hand side projections}
Since each projection is substituted with a projection function when a record or corecord definition is desugared, there should be no reason to desugar right-hand side projections. The desugared projection functions are added to the same namespace as high-level projections, and the semantics of both the projections prior to desugaring and the generated projection functions are the same.
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../copatterns-thesis"
%%% End:
|
// BEGIN_COPYRIGHT
//
// Copyright 2009-2014 CRS4.
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not
// use this file except in compliance with the License. You may obtain a copy
// of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
// WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
// License for the specific language governing permissions and limitations
// under the License.
//
// END_COPYRIGHT
#ifndef HADOOP_PIPES_SERIAL_UTILS_HPP
#define HADOOP_PIPES_SERIAL_UTILS_HPP
#include <string>
#include "hadoop/SerialUtils.hh"
#ifdef __APPLE__
#include "mac_support.hpp"
#endif
#include <boost/python.hpp>
namespace bp = boost::python;
namespace hu = HadoopUtils;
class _StringOutStream: public hu::OutStream {
protected:
std::ostringstream _os;
public:
_StringOutStream();
void write(const void *buff, std::size_t len);
void flush();
std::string str();
};
class _StringInStream: public hu::InStream {
protected:
std::istringstream _is;
public:
_StringInStream(const std::string& s);
void read(void *buff, std::size_t len);
uint16_t readUShort();
uint64_t readLong();
void seekg(std::size_t offset);
std::size_t tellg();
};
std::string pipes_serialize_int(long t);
std::string pipes_serialize_float(float t);
std::string pipes_serialize_string(std::string t);
bp::tuple pipes_deserialize_int(const std::string& s, std::size_t offset);
bp::tuple pipes_deserialize_float(const std::string& s, std::size_t offset);
bp::tuple pipes_deserialize_string(const std::string& s, std::size_t offset);
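// Usage sketch (illustrative only; the implementations live in the matching
// .cpp file, and the exact tuple layout is an assumption):
//
//   std::string buf = pipes_serialize_int(42);
//   bp::tuple t = pipes_deserialize_int(buf, 0);  // assumed: (value, next_offset)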
#endif // HADOOP_PIPES_SERIAL_UTILS_HPP
|
75,000 buck & ball cartridges - percussion
|
(* Type constructors are split into several groups depending on how
many arguments need to be applied for the overall type to be
well formed. We keep primitive types like Read, Write and Ref fully
applied to make the mechanisation easier. In the full language the
ability to partially apply these could be recovered using
type synonyms. *)
Require Export Iron.Language.SystemF2Effect.Kind.
(********************************************************************)
(* Type Constructors. *)
Inductive tycon0 : Type :=
| TyConFun : tycon0
| TyConUnit : tycon0
| TyConBool : tycon0
| TyConNat : tycon0.
Hint Constructors tycon0.
Fixpoint kindOfTyCon0 (tc : tycon0) :=
match tc with
| TyConFun => KFun KData (KFun KEffect (KFun KData KData))
| TyConUnit => KData
| TyConBool => KData
| TyConNat => KData
end.
Fixpoint tycon0_beq tc1 tc2 :=
match tc1, tc2 with
| TyConFun, TyConFun => true
| TyConUnit, TyConUnit => true
| TyConBool, TyConBool => true
| TyConNat, TyConNat => true
| _, _ => false
end.
(********************************************************************)
(* Type constructors with at least one applied argument. *)
Inductive tycon1 : Type :=
| TyConRead : tycon1
| TyConWrite : tycon1
| TyConAlloc : tycon1.
Fixpoint kindOfTyCon1 (tc : tycon1) :=
match tc with
| TyConRead => KFun KRegion KEffect
| TyConWrite => KFun KRegion KEffect
| TyConAlloc => KFun KRegion KEffect
end.
Fixpoint tycon1_beq tc1 tc2 :=
match tc1, tc2 with
| TyConRead, TyConRead => true
| TyConWrite, TyConWrite => true
| TyConAlloc, TyConAlloc => true
| _, _ => false
end.
Definition isEffectTyCon_b (tc : tycon1) :=
match tc with
| TyConRead => true
| TyConWrite => true
| TyConAlloc => true
end.
(********************************************************************)
(* Type constructors with at least two applied arguments. *)
Inductive tycon2 : Type :=
| TyConRef : tycon2.
Hint Constructors tycon2.
Fixpoint kindOfTyCon2 (tc : tycon2) :=
match tc with
| TyConRef => KFun KRegion (KFun KData KData)
end.
Fixpoint tycon2_beq tc1 tc2 :=
match tc1, tc2 with
| TyConRef, TyConRef => true
end.
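(* Illustrative sanity check (not part of the original file): the kind of
   the Ref constructor is exactly as stated in kindOfTyCon2. *)
Example kindOfTyCon2_Ref
 : kindOfTyCon2 TyConRef = KFun KRegion (KFun KData KData).
Proof. reflexivity. Qed.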
|
module Poly
%access public export
%default total
repeat : (x : x_ty) -> (count : Nat) -> List x_ty
repeat x Z = []
repeat x (S k) = x :: repeat x k
test_repeat1 : repeat 4 2 = [4, 4]
test_repeat1 = Refl
test_repeat2 : repeat False 1 = [False]
test_repeat2 = Refl
namespace MumbleGrumble
data Mumble : Type where
A : Mumble
B : Mumble -> Nat -> Mumble
C : Mumble
data Grumble : (x : Type) -> Type where
D : Mumble -> Grumble x
E : x -> Grumble x
rev : (l : List a) -> List a
rev [] = []
rev (h :: t) = (rev t) ++ [h]
test_rev1 : rev [1, 2] = [2, 1]
test_rev1 = Refl
test_rev2 : rev [True] = [True]
test_rev2 = Refl
len : (l : List a) -> Nat
len [] = Z
len (h :: t) = S (len t)
test_length1 : length [1, 2, 3] = 3
test_length1 = Refl
app_nil_r : (l : List a) -> l ++ [] = l
app_nil_r [] = Refl
app_nil_r (x :: xs) = rewrite app_nil_r xs in Refl
app_nil : (l : List a) -> [] ++ l = l
app_nil l = Refl
app_assoc : (l, m, n : List a) -> l ++ m ++ n = (l ++ m) ++ n
app_assoc [] m n = Refl
app_assoc (x :: xs) m n = rewrite app_assoc xs m n in Refl
app_length : (l1, l2 : List a) -> length (l1 ++ l2) = length l1 + length l2
app_length [] l2 = Refl
app_length (x :: xs) l2 = rewrite app_length xs l2 in Refl
rev_app_distr : (l1, l2 : List a) -> rev (l1 ++ l2) = rev l2 ++ rev l1
rev_app_distr [] l2 = rewrite app_nil_r (rev l2) in Refl
rev_app_distr (x :: xs) l2 =
rewrite rev_app_distr xs l2 in
rewrite sym $ app_assoc (rev l2) (rev xs) [x] in
Refl
rev_involutive : (l : List a) -> rev (rev l) = l
rev_involutive [] = Refl
rev_involutive (x :: xs) =
rewrite rev_app_distr (rev xs) [x] in
rewrite rev_involutive xs in
Refl
split : (l : List (x, y)) -> (List x, List y)
split [] = ([], [])
split ((a, b) :: xs) =
let
(ys, zs) = split xs
in
(a :: ys, b :: zs)
test_split : split [(1, False), (2, False)] = ([1, 2], [False, False])
test_split = Refl
nth_error : (l : List a) -> (n : Nat) -> Maybe a
nth_error [] n = Nothing
nth_error (x :: xs) Z = Just x
nth_error (x :: xs) (S k) = nth_error xs k
test_nth_error1 : nth_error [4, 5, 6, 7] 0 = Just 4
test_nth_error1 = Refl
test_nth_error2 : nth_error [[1], [2]] 1 = Just [2]
test_nth_error2 = Refl
test_nth_error3 : nth_error [True] 2 = Nothing
test_nth_error3 = Refl
hd_error : (l : List a) -> Maybe a
hd_error [] = Nothing
hd_error (x :: xs) = Just x
test_hd_error1 : hd_error [1, 2] = Just 1
test_hd_error1 = Refl
test_hd_error2 : hd_error [[1], [2]] = Just [1]
test_hd_error2 = Refl
-- helper (added to discharge the final hole): map distributes over (++)
map_app : (f : x -> y) -> (l1, l2 : List x) -> map f (l1 ++ l2) = map f l1 ++ map f l2
map_app f [] l2 = Refl
map_app f (z :: zs) l2 = rewrite map_app f zs l2 in Refl
map_rev : (f : x -> y) -> (l : List x) -> map f (rev l) = rev (map f l)
map_rev f [] = Refl
map_rev f (x :: xs) =
  rewrite map_app f (rev xs) [x] in
  rewrite map_rev f xs in
  Refl
|
module GeneralizedConjugateResidualSolver
export GeneralizedConjugateResidual
using ..LinearSolvers
const LS = LinearSolvers
using ..MPIStateArrays: device, realview
using LinearAlgebra
using LazyArrays
using StaticArrays
using KernelAbstractions
"""
GeneralizedConjugateResidual(K, Q; rtol, atol)
This is an object for solving linear systems using an iterative Krylov method.
The constructor parameter `K` is the number of steps after which the algorithm
is restarted (if it has not converged), `Q` is a reference state used only
to allocate the solver internal state, and `rtol` and `atol` specify the
relative and absolute tolerances of the convergence criterion based on the
residual norm. The amount of memory
required by the solver state is roughly `(2K + 2) * size(Q)`.
This object is intended to be passed to the [`linearsolve!`](@ref) command.
This uses the restarted Generalized Conjugate Residual method of Eisenstat, Elman, and Schultz (1983).
### References
@article{eisenstat1983variational,
title={Variational iterative methods for nonsymmetric systems of linear equations},
author={Eisenstat, Stanley C and Elman, Howard C and Schultz, Martin H},
journal={SIAM Journal on Numerical Analysis},
volume={20},
number={2},
pages={345--357},
year={1983},
publisher={SIAM}
}
"""
mutable struct GeneralizedConjugateResidual{K, T, AT} <:
LS.AbstractIterativeLinearSolver
residual::AT
L_residual::AT
p::NTuple{K, AT}
L_p::NTuple{K, AT}
alpha::MArray{Tuple{K}, T, 1, K}
normsq::MArray{Tuple{K}, T, 1, K}
rtol::T
atol::T
function GeneralizedConjugateResidual(
K,
Q::AT;
rtol = √eps(eltype(AT)),
atol = eps(eltype(AT)),
) where {AT}
T = eltype(Q)
residual = similar(Q)
L_residual = similar(Q)
p = ntuple(i -> similar(Q), K)
L_p = ntuple(i -> similar(Q), K)
alpha = @MArray zeros(K)
normsq = @MArray zeros(K)
new{K, T, AT}(residual, L_residual, p, L_p, alpha, normsq, rtol, atol)
end
end
const weighted = false
function LS.initialize!(
linearoperator!,
Q,
Qrhs,
solver::GeneralizedConjugateResidual,
args...,
)
residual = solver.residual
p = solver.p
L_p = solver.L_p
@assert size(Q) == size(residual)
rtol, atol = solver.rtol, solver.atol
threshold = rtol * norm(Qrhs, weighted)
linearoperator!(residual, Q, args...)
residual .-= Qrhs
converged = false
residual_norm = norm(residual, weighted)
if residual_norm < threshold
converged = true
return converged, threshold
end
p[1] .= residual
linearoperator!(L_p[1], p[1], args...)
threshold = max(atol, threshold)
converged, threshold
end
function LS.doiteration!(
linearoperator!,
Q,
Qrhs,
solver::GeneralizedConjugateResidual{K},
threshold,
args...,
) where {K}
residual = solver.residual
p = solver.p
L_residual = solver.L_residual
L_p = solver.L_p
normsq = solver.normsq
alpha = solver.alpha
residual_norm = typemax(eltype(Q))
for k in 1:K
normsq[k] = norm(L_p[k], weighted)^2
beta = -dot(residual, L_p[k], weighted) / normsq[k]
Q .+= beta * p[k]
residual .+= beta * L_p[k]
residual_norm = norm(residual, weighted)
if residual_norm <= threshold
return (true, k, residual_norm)
end
linearoperator!(L_residual, residual, args...)
for l in 1:k
alpha[l] = -dot(L_residual, L_p[l], weighted) / normsq[l]
end
if k < K
rv_nextp = realview(p[k + 1])
rv_L_nextp = realview(L_p[k + 1])
else # restart
rv_nextp = realview(p[1])
rv_L_nextp = realview(L_p[1])
end
rv_residual = realview(residual)
rv_p = realview.(p)
rv_L_p = realview.(L_p)
rv_L_residual = realview(L_residual)
groupsize = 256
T = eltype(alpha)
event = Event(device(Q))
event = LS.linearcombination!(device(Q), groupsize)(
rv_nextp,
(one(T), alpha[1:k]...),
(rv_residual, rv_p[1:k]...),
false;
ndrange = length(rv_nextp),
dependencies = (event,),
)
event = LS.linearcombination!(device(Q), groupsize)(
rv_L_nextp,
(one(T), alpha[1:k]...),
(rv_L_residual, rv_L_p[1:k]...),
false;
ndrange = length(rv_nextp),
dependencies = (event,),
)
wait(device(Q), event)
end
(false, K, residual_norm)
end
end
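# Usage sketch (illustrative; assumes a caller-supplied mutating operator
# `linearoperator!(y, x)` computing y = A*x, with Q and Qrhs conforming to
# the LinearSolvers interface):
#
#   solver = GeneralizedConjugateResidual(20, Q; rtol = 1e-6)
#   iters = LS.linearsolve!(linearoperator!, solver, Q, Qrhs)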
|
import Data.Vect
data Format = Number Format
| Str Format
| Dble Format
| Chr Format
| Lit String Format
| End
PrintfType : Format -> Type
PrintfType (Number fmt) = (i : Int) -> PrintfType fmt
PrintfType (Dble fmt) = (i : Double) -> PrintfType fmt
PrintfType (Str fmt) = (str: String) -> PrintfType fmt
PrintfType (Chr fmt) = (char : Char) -> PrintfType fmt
PrintfType (Lit str fmt) = PrintfType fmt
PrintfType End = String
printfFmt : (fmt : Format) -> (acc: String) -> PrintfType fmt
printfFmt (Number fmt) acc = \i => printfFmt fmt (acc ++ show i)
printfFmt (Dble fmt) acc = \i => printfFmt fmt (acc ++ show i)
printfFmt (Str fmt) acc = \str => printfFmt fmt (acc ++ str)
printfFmt (Chr fmt) acc = \char => printfFmt fmt (acc ++ strCons char "") -- show would wrap the char in quotes
printfFmt (Lit lit fmt) acc = printfFmt fmt (acc ++ lit)
printfFmt End acc = acc
toFormat : (xs : List Char) -> Format
toFormat [] = End
toFormat ('%' :: 'd' :: chars) = Number (toFormat chars)
toFormat ('%' :: 'f' :: chars) = Dble (toFormat chars)
toFormat ('%' :: 's' :: chars) = Str (toFormat chars)
toFormat ('%' :: 'c' :: chars) = Chr (toFormat chars)
toFormat ('%' :: chars) = Lit "%" (toFormat chars)
toFormat (c :: chars) = case toFormat chars of
Lit lit chars' => Lit (strCons c lit) chars'
fmt => Lit (strCons c "") fmt
printf : (fmt : String) -> PrintfType (toFormat (unpack fmt))
printf fmt = printfFmt _ ""
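-- Usage sketch (illustrative):
--   printf "%s answered %d" "Ada" 42  -- ==> "Ada answered 42"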
Matrix : Nat -> Nat -> Type
Matrix n m = Vect n (Vect m Double)
|
function X_rec = recoverData(Z, U, K)
%RECOVERDATA Recovers an approximation of the original data when using the
%projected data
% X_rec = RECOVERDATA(Z, U, K) recovers an approximation the
% original data that has been reduced to K dimensions. It returns the
% approximate reconstruction in X_rec.
%
% You need to return the following variables correctly.
X_rec = zeros(size(Z, 1), size(U, 1));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the approximation of the data by projecting back
% onto the original space using the top K eigenvectors in U.
%
% For the i-th example Z(i,:), the (approximate)
% recovered data for dimension j is given as follows:
% v = Z(i, :)';
% recovered_j = v' * U(j, 1:K)';
%
% Notice that U(j, 1:K) is a row vector.
%
for i = 1 : size(Z, 1),
for j = 1 : size(U,1),
v = Z(i, :)';
rec = v' * U(j, 1 : K)';
X_rec(i, j) = rec;
end
end
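% The two loops above are equivalent to the single vectorized expression:
%   X_rec = Z * U(:, 1:K)';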
% =============================================================
end
|
# USAGE:
# Place structure file and this script in a directory, and set the R
# current working directory to be the same. Then type the following
# commands, where "file" is the name of your structure file:
# source("convert_new_hybrids.r")
# convert_nh("file")
convert_nh <- function(file) {
x <- read.table(file, sep = "\t")
# drop the population identification column if one is present
if (is.na(x[1, 2])) {
x <- cbind(x[,1], x[,3:ncol(x)])
}
x <- x[2:nrow(x),2:ncol(x)]
n.inds <- nrow(x) / 2
loci <- c()
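# keep only loci where the rarer allele accounts for more than 30% of the
# non-missing calls (the apparent intent of the filter below)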
for(c in 1:ncol(x)) {
x.rep <- x[,c]
x.rep <- x.rep[x.rep>0]
x.rep <- sort(as.vector(table(x.rep)))
x.min <- sum(x.rep) * 0.3
if(x.rep[1]>x.min) {loci <- c(loci, c)}
}
if(length(loci)>500) {
loci <- loci[1:500]
}
n.loci <- length(loci)
write(paste("NumIndivs",n.inds),file="new_hybrids_input.txt",ncolumns=2)
write(paste("NumLoci",n.loci),file="new_hybrids_input.txt",ncolumns=2,append=T)
write(paste("Digits 1"),file="new_hybrids_input.txt",ncolumns=1,append=T)
write(paste("Format Lumped"),file="new_hybrids_input.txt",ncolumns=2,append=T)
write(paste(""),file="new_hybrids_input.txt",ncolumns=1,append=T)
for(a in 1:n.inds) {
rep2 <- a * 2
rep1 <- rep2 - 1
for(b in loci) {
snp.rep1 <- x[rep1,b]
snp.rep2 <- x[rep2,b]
snp.rep <- paste(snp.rep1,snp.rep2,sep="")
if(snp.rep == "00") {
snp.rep <- 0
}
if(b == loci[1]) {
snps <- paste(a, snp.rep)
} else {
snps <- paste(snps, snp.rep)
}
}
write(snps,file="new_hybrids_input.txt",ncolumns=1,append=T)
}
}
|
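# startest: linearity and parameter-constancy tests for smooth transition
# regression. For each candidate transition variable (and a time trend, t*),
# p-values are computed from F tests on nested auxiliary regressions built
# from first- to third-order terms in the transition variable, alongside
# autocorrelation (AC) and ARCH diagnostics of the linear model's residuals.
# The MODEL labels appear to suggest a transition function: 'l' logistic,
# 'e' exponential, 'b' both (interpretation assumed).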
startest <- function(y,x,x.n=NULL,tvar,tvar.lag=6,ascending=F,diag.lag=6,crit=.05,contemp=T){
y <- as.matrix(y)
n.y <- nrow(y)
x <- as.matrix(x)
n.l <- nrow(x)
m.l <- ncol(x)
if(is.null(x.n)){
x.n <- x
}
x.n <- as.matrix(x.n)
n.n <- nrow(x.n)
m.n <- ncol(x.n)
tv <- as.matrix(tvar)
tvar.num <- ncol(tv)
if(tvar.num==1){
tv.mat <- as.matrix(embed(tv,tvar.lag+1))
}else{
tv.mat <- tv
}
n.t <- nrow(tv.mat)
if(contemp==T){
if(tvar.lag==0 & tvar.num==1){
colnames(tv.mat) <- "s.0"
}else if(tvar.lag!=0 & tvar.num==1){
colnames(tv.mat) <- c(paste0("s.0"),colnames(tv.mat[,-1],do.NULL=F,prefix="s."))
}else{
colnames(tv.mat) <- colnames(tv.mat,do.NULL=F,prefix="s.")
}
}else{
if(tvar.lag==0 & tvar.num==1){
colnames(tv.mat) <- "s.1"
}else{
colnames(tv.mat) <- colnames(tv.mat,do.NULL=F,prefix="s.")
}
}
n <- min(c(n.y,n.l,n.n,n.t))
if(n.y > n){y <- as.matrix(y[-c(1:(n.y-n)),])}
if(n.l > n){x <- as.matrix(x[-c(1:(n.l-n)),])}
if(n.n > n){x.n <- as.matrix(x.n[-c(1:(n.n-n)),])}
if(n.t > n){
tv.mat <- as.matrix(tv.mat[-c(1:(n.t-n)),])
if(tvar.lag==0){
colnames(tv.mat) <- "s.0"
}
}
e <- as.matrix(lm(y~x-1)$resid)
e.mat <- matrix(0,n,diag.lag)
for(i in 1:diag.lag){
e.mat[(i+1):n,i] <- e[1:(n-i),]
}
e2.mat <- e.mat^2
ac.vec <- matrix(NA,diag.lag,1)
arch.vec <- matrix(NA,diag.lag,1)
for(i in 1:diag.lag){
x.e <- cbind(e.mat[,1:i],x)
ac.vec[i,] <- round(anova(lm(e~x.e-1),lm(e~x-1))[2,6],4)
e2 <- as.matrix(e^2)
x.e2 <- cbind(1,e2.mat[,1:i])
x.r2 <- matrix(1,n,1)
arch.vec[i,] <- round(anova(lm(e2~x.e2-1),lm(e2~x.r2-1))[2,6],4)
}
rownames(ac.vec) <- rownames(ac.vec,do.NULL=F,prefix="AC.")
rownames(arch.vec) <- rownames(arch.vec,do.NULL=F,prefix="ARCH.")
colnames(ac.vec) <- 'AC'
colnames(arch.vec) <- 'ARCH'
nl.out <- matrix(NA,ncol(tv.mat),4)
for(i in 1:ncol(tv.mat)){
s.1 <- as.matrix(tv.mat[,i])
s.2 <- s.1^2
s.3 <- s.1^3
t.1 <- as.matrix(c(1:n)/n)
t.2 <- t.1^2
t.3 <- t.1^3
zs.1 <- as.numeric(s.1)*x.n
zs.2 <- as.numeric(s.2)*x.n
zs.3 <- as.numeric(s.3)*x.n
zt.1 <- as.numeric(t.1)*x.n
zt.2 <- as.numeric(t.2)*x.n
zt.3 <- as.numeric(t.3)*x.n
x.0 <- cbind(x)
xs.1 <- cbind(x,zs.1)
xs.2 <- cbind(x,zs.1,zs.2)
xs.3 <- cbind(x,zs.1,zs.2,zs.3)
xt.1 <- cbind(x,zt.1)
xt.2 <- cbind(x,zt.1,zt.2)
xt.3 <- cbind(x,zt.1,zt.2,zt.3)
xs.1 <- xs.1[,!duplicated(t(xs.1))]
xs.2 <- xs.2[,!duplicated(t(xs.2))]
xs.3 <- xs.3[,!duplicated(t(xs.3))]
xt.1 <- xt.1[,!duplicated(t(xt.1))]
xt.2 <- xt.2[,!duplicated(t(xt.2))]
xt.3 <- xt.3[,!duplicated(t(xt.3))]
reg.0 <- lm(y~x.0-1)
regs.1 <- lm(y~xs.1-1)
regs.2 <- lm(y~xs.2-1)
regs.3 <- lm(y~xs.3-1)
regt.1 <- lm(y~xt.1-1)
regt.2 <- lm(y~xt.2-1)
regt.3 <- lm(y~xt.3-1)
ps.n <- round(anova(regs.3,reg.0)[2,6],4)
ps.3 <- round(anova(regs.3,regs.2)[2,6],4)
ps.2 <- round(anova(regs.2,regs.1)[2,6],4)
ps.1 <- round(anova(regs.1,reg.0)[2,6],4)
pt.n <- round(anova(regt.3,reg.0)[2,6],4)
pt.3 <- round(anova(regt.3,regt.2)[2,6],4)
pt.2 <- round(anova(regt.2,regt.1)[2,6],4)
pt.1 <- round(anova(regt.1,reg.0)[2,6],4)
nl.out[i,] <- c(ps.n,ps.3,ps.2,ps.1)
sc.out <- matrix(c(pt.n,pt.3,pt.2,pt.1),1,4)
}
rownames(nl.out) <- colnames(tv.mat)
rownames(sc.out) <- 't*'
# if(contemp == FALSE){
# nl.out <- nl.out[-1,]
# }
if(ascending == TRUE){
if(ncol(tv.mat)!=0){
nl.out <- nl.out[order(nl.out[,1]),]
}
}
nl.rows <- min(ncol(tv.mat),nrow(nl.out))
if(nl.rows==1){
nl.out <- matrix(nl.out,1,4)
rownames(nl.out) <- colnames(tv.mat)
}
nl.lab <- matrix(NA,nl.rows,1)
for(k in 1:nl.rows){
if(nl.out[k,1] >= crit){
nl.lab[k,] <- paste('')
}else if(((nl.out[k,2] < crit) | (nl.out[k,4] < crit)) & (nl.out[k,3] < crit)){
nl.lab[k,] <- paste('b')
}else if(((nl.out[k,2] == min(nl.out[k,2:4])) & (nl.out[k,2] < crit)) | ((nl.out[k,4] == min(nl.out[k,2:4])) & (nl.out[k,4] < crit))){
nl.lab[k,] <- paste('l')
}else if((nl.out[k,3] == min(nl.out[k,2:4])) & (nl.out[k,3] < crit)){
nl.lab[k,] <- paste('e')
}else{
nl.lab[k,] <- paste('')
}
}
if(sc.out[,1] >= crit){
sc.lab <- paste('')
}else if(((sc.out[,2] == min(sc.out[,2:4])) & (sc.out[,2] < crit)) | ((sc.out[,4] == min(sc.out[,2:4])) & (sc.out[,4] < crit))){
sc.lab <- paste('l')
}else if((sc.out[,3] == min(sc.out[,2:4])) & (sc.out[,3] < crit)){
sc.lab <- paste('e')
}else{
sc.lab <- paste('')
}
nl.out <- formatC(nl.out,format="e",digits=3)
sc.out <- formatC(sc.out,format="e",digits=3)
nl.mat <- as.data.frame(cbind(nl.out,nl.lab),stringsAsFactors=F)
sc.mat <- as.data.frame(cbind(sc.out,sc.lab),stringsAsFactors=F)
colnames(nl.mat) <- c('p.0','p.3','p.2','p.1','MODEL')
colnames(sc.mat) <- c('p.0','p.3','p.2','p.1','MODEL')
return(list(star = nl.mat, tvar = sc.mat, ac = ac.vec, arch = arch.vec))
}
|
\chapter{Theory}
\label{sec:theory}
\section{Design Load Cases (DLCs)}
The user must specify the metocean conditions that drive the wind and
wave loads upon the floating substructure. \textit{FloatingSE}
currently only uses the single load case of maximum thrust coincident
with maximum wave loading to drive the substructure design. The
assumption is that this load case would be the driver for substructure
sizing and stability. Ideally, multiple DLCs and metocean conditions
would be used for design optimization. The capability to optimize over
multiple DLCs will be added to future versions of the model. Because a
formal set of IEC DLCs is not yet included, the conceptual designs
derived in this work should be considered preliminary and subject to
extensive revision once other load cases, and higher-fidelity analyses,
are brought to bear.
\section{Load Path}
As with other WISDEM models, the primary simplification in
\textit{FloatingSE} is the treatment of all loads as pseudo-static. This
approximation significantly reduces computational time and resources,
since an accurate calculation of dynamic loads requires more
sophisticated numerical tools and simulations. However, dynamic effects
can still drive component sizing for floating platforms, so the static
loading assumption yields only a rough approximation of the total
loading. Furthermore, fatigue effects and structural lifetime estimates
are also excluded for now, but could be incorporated in future
developments.
A floating wind turbine undergoes loading from a number of sources. The
primary loading source for the tower comes from the aerodynamic loads
induced by the rotor. The substructure must resist the combination of
both rotor loads and hydrodynamic loads, with the latter becoming
increasingly important as water depth and wave heights increase.
\textit{FloatingSE}, together with other WISDEM modules, accounts for
these two dominant load sources, as well as the self-loading of gravity
loads. Other sources of loading, such as installation loads, accidental
loads, vortex-induced vibrations, ice, and seismic loads are ignored.
\subsection{Wind and Wave Loads}
Wind drag loads are applied to the tower body and the upper part of the
substructure that extends above the waterline. They are not applied to
connecting truss members that may be part of the substructure geometry.
These drag loads are computed assuming the tower and columns are smooth
circular cross-sections and that the drag coefficient can be selected as
a function of the flow Reynolds number \citep{Roshko}. The aerodynamic
drag force is a function of height, since the wind profile and
cross-sectional geometry vary along that dimension. For the wind
profile, the standard power-law scaling is used,
\begin{equation}
U_a(z) = U_{ref}\left(\frac{z}{z_{ref}}\right)^{\alpha}\quad,
\end{equation}
where $U_a(z)$ is the wind velocity as a function of height, $U_{ref}$ is a
reference wind speed measured at a reference height, $z_{ref}$, and
$\alpha$ is the shear exponent used in the power-law approximation of
wind profiles. The wind profile then feeds the aerodynamic drag,
Reynolds number, and drag coefficient,
\begin{equation} \label{eqn:drag}
dF(z) = \frac{1}{2} \rho_a U_a^2(z) d(z) c_d(Re) dz;\qquad
Re_d = \frac{\rho_a U_a(z) d(z)}{\mu_a}\quad,
\end{equation}
where $Re_d$ is the Reynolds number based on diameter, $\rho_a$ and
$\mu_a$ are the density and viscosity of air, $d(z)$ is the diameter of
the column as a function of height, $c_d$ is the 2-D drag coefficient, and
$dF(z)$ is the drag force per unit length of column at height $z$.
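As a rough illustration (the numbers are chosen for concreteness and do
not correspond to a reference design): for air with $\rho_a =
\unit[1.225]{kg/m^3}$ and $\mu_a = \unit[1.81\times10^{-5}]{kg/(m\,s)}$,
a column of diameter $d = \unit[6.5]{m}$ in a local wind of $U_a =
\unit[11]{m/s}$ has $Re_d \approx 4.8\times10^6$; taking $c_d = 0.6$ at
this Reynolds number gives
\begin{equation*}
dF = \frac{1}{2}(1.225)(11)^2(6.5)(0.6)\,dz \approx \unit[289]{N/m}\,dz.
\end{equation*}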
Wave drag loads arise from similar processes, but are computed using
Morison's equation, a semi-empirical expression that predicts the total
hydrodynamic loads. It is comprised of two components, one for viscous
drag contributions and another for inertial effects (which includes
incident, diffracted, and radiated wave effects). For flow past
structures with circular cross sections, Morison's equation for force
per unit length ($dF(z)$) takes the form,
\begin{equation} \label{eqn:morison}
dF(z) = \frac{\pi d^2(z)}{4} \rho_w C_m \dot{U}_w(z)dz + \frac{1}{2} \rho_w U_w^2(z) d(z) c_d(Re)dz\quad,
\end{equation}
where $C_m$ is the added mass coefficient (assumed to be $C_m=2$),
$U_w(z)$ is the current speed as a function of height, $\dot{U}_w(z)$ is
the acceleration as a function of height, and the Reynolds number is
computed by substituting in the appropriate properties for water,
\begin{equation}
Re_d = \frac{\rho_w U_w(z) d(z)}{\mu_w}\quad.
\end{equation}
To compute Morison's equation, expressions for local fluid velocity and
acceleration are required. Wave particle velocity (not the same as the bulk
velocity of the wave) is assumed to follow linear (Airy) wave theory
\begin{equation} \label{eqn:Uwave}
U_w(z) = a\omega\frac{\cosh\left[\kappa\left(z + D \right)\right]}{\sinh\left(\kappa D\right)}\cos\left(\kappa x -
\omega t\right);
\qquad \omega=\frac{2\pi}{T} = \sqrt{ g \kappa \tanh\left(\kappa
D\right) } \quad,
\end{equation}
where $\omega$ is the circular frequency, $T$ is the wave period, $a$ is
the wave amplitude (half of the significant wave height), $D$ is the
total water depth, $g$ is the acceleration of gravity, and $\kappa$ is
the wave number numerically computed from the dispersion relationship
given as the last expression in Equation \ref{eqn:Uwave}. Note that the
horizontal particle velocity varies in time and space (through the
$\kappa x - \omega t$ term). Thus, the individual particles in the wave
are also accelerating at different rates,
\begin{equation} \label{eqn:Awave}
\dot{U}_w(z) = a\omega^2\frac{\cosh\left[\kappa\left(z + D \right)\right]}{\sinh\left(\kappa D\right)}\sin\left(\kappa x -
\omega t\right)\quad.
\end{equation}
For simplicity, \textit{FloatingSE} only considers the maximum velocity and
acceleration at a given height, and makes the conservative assumption that
they are concurrent in time and space. This essentially means ignoring the
$\kappa x - \omega t$ term, since the maximum of any sine or cosine
term is one.
\subsection{Rotor Nacelle Assembly (RNA) Loads}
From a quasi-steady-state point of view, the RNA loads reduce to three
forces and three moments along the main coordinate axes
\citep{JacketSE}. The thrust is the biggest force responsible for the
bending moment distribution along the tower and loads on the
substructure. There is the additional effect of the gravitational load
caused by the offset of the RNA center of mass from the tower
centerline. This effect is more pronounced for downwind turbines than
upwind turbines, but is included regardless. \textit{FloatingSE} does
not compute the force and moment components directly, but rather accepts
them as inputs from other WISDEM modules or from the user directly.
\section{Structural Analysis}
The analysis tool, Frame3DD, is an open-source tool for static and
dynamic structural analysis of 2-D and 3-D frames and trusses with
elastic and geometric stiffness. It computes the static deflections,
reactions, internal element forces, natural frequencies, and modal
shapes using direct stiffness and mass assembly \citep{frame3dd}. The
WISDEM toolkit developed a Python interface, \textit{pyFrame3DD}, to
avoid the use of intermediate input and output text files. The
integration of all loads happens within Frame3DD, where the whole floating
turbine load path, from the rotor to the keel of the substructure, is
modeled with Timoshenko frame elements \citep{timoshenko}.
\subsection{Discretization}
For the finite element structural analysis of the substructure, the
discretization of the main columns into a handful of sections is still
too coarse to capture the appropriate physics. Long slender components,
such as the tower and substructure columns, are broken up into a
three-times finer discretization than the physical cans that they are
actually made of. The sectional and nodal variables are re-sampled at
this finer spacing. These additional discretization points give greater
resolution of internal forces and natural frequencies. Substructure
pontoons are represented as single frame elements. Frame elements are
described by their cross sectional properties (area, moments of inertia,
modulus of elasticity, and mass density) and starting and ending nodes.
For simple geometries, such as pontoons with tubular cross sections,
these properties are straightforward calculations. For the turbine
tower, tubular cross section properties are also used, albeit at a finer
discretization. For substructure columns, it is assumed that the
permanent or variable ballast and bulkheads are not load-bearing, so
tubular cross section properties are also used to represent the column
shell. However, the material mass density of the frame element is
scaled to reflect the true mass of the whole section, including ballast,
to ensure that gravity loads are captured correctly.
For the tubular cross sections, the critical properties needed by
Frame3DD given user inputs of diameter, $d$, and tube (or wall)
thickness, $t$, are,
\begin{align*}
\textrm{Outer radius, } r_o &= d/2\\
\textrm{Inner radius, } r_i &= r_o - t\\
\textrm{Material area, } A &= \pi \left( r_o^2 - r_i^2 \right)\\
\textrm{Bending second moment of area, } I_{xx} &= I_{yy} = \frac{\pi}{4}\left( r_o^4 - r_i^4 \right)\\
\textrm{Torsion second moment of area, } I_{zz} &= J_0 = I_{xx} + I_{yy}\\
\textrm{Shear area, } A_{s} &= A / \left[ 1.124235 + 0.055610\left(\frac{r_i}{r_o}\right) +
1.097134\left(\frac{r_i}{r_o}\right)^2 - 0.630057\left(\frac{r_i}{r_o}\right)^3 \right]\\
\textrm{Bending modulus, } S &= I_{xx} / r_o \\
\textrm{Torsion modulus (shear constant), } C &= I_{zz} / r_o
\end{align*}
Note that the shear area expression is an empirical relationship as
opposed to an analytical expression.
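For example, a can with $d = \unit[6]{m}$ and $t = \unit[30]{mm}$
(illustrative values) has $r_o = \unit[3]{m}$ and $r_i = \unit[2.97]{m}$,
giving
\begin{equation*}
A = \pi\left(3^2 - 2.97^2\right) \approx \unit[0.56]{m^2},\qquad
I_{xx} = \frac{\pi}{4}\left(3^4 - 2.97^4\right) \approx \unit[2.51]{m^4},\qquad
S = I_{xx}/r_o \approx \unit[0.84]{m^3}.
\end{equation*}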
\begin{figure}[htb]
\begin{center}
\includegraphics[width=2in]{figs/frameCS.pdf}
\caption{Coordinate system for frame element forces.}
\label{fig:frameCS}
\end{center}
\end{figure}
\subsection{Loads}
All of the loads described above are integrated together within
Frame3DD. These loads include,
\begin{itemize}
\item Rotor-nacelle-assembly loads (thrust, moments, etc)
\item Mooring line force
\item Wind and wave loading
\item Gravity loads (weight distribution)
\item Hydrostatic pressure loads, including buoyancy
\end{itemize}
The forces, moments, and mass properties of the rotor-nacelle assembly
(RNA) are inputs to \textit{FloatingSE} (mass properties are assumed to
be relative to the tower top position). It assumed that the RNA is a
rigid body with respect to the tower modes and the mass properties,
forces, and moments, are applied to the corresponding node in the model.
The forces along each mooring line are applied to the connection
point nodes on the structure. The wind and wave forces per unit length
in Equations \ref{eqn:drag} and \ref{eqn:morison} are applied as
trapezoidally varying loads along the column elements. Other loads
applied to the structure include the gravity loads, and the buoyancy
acting on the submerged elements.
\subsection{Boundary Conditions}
Multiple boundary conditions are applied to the structure. The mooring
system stiffness matrix (linearized about the neutral position) is
applied at the mooring connection nodes. However, even with the mooring
stiffness, the finite element analysis would otherwise still regard the
structure as unrestrained and incapable of supporting any static loads.
Thus, in order to successfully compute stress and buckling limits in a
well-posed problem, an additional rigid boundary condition (in all 6
DOF) is imposed at the bottom node of the main column.
\subsection{Outputs}
Structural analysis outputs include mass properties of the structure,
member stresses, and summary forces and moments on the body. Mass
properties include the total mass of the floating turbine and the mass
of the substructure itself. The calculations also allow for easy
computation of the center of mass of the structure (not accounting for
variable ballast) and the center of buoyancy (centroid of the submerged
volume). The first two natural frequencies of the structure are also
computed to compare against the range of standard wave frequencies and
rotor passing frequencies (1P and 3P). Next, the reaction forces and
moments at the boundary node at the keel are taken as the total loading
on the structure. These are used later in the static stability
calculations to ensure that the mooring lines provide adequate restoring
force and moment. Finally, the axial and shear forces within each frame
element are extracted and converted to stresses using cross-sectional
properties. These element member forces follow the sign convention in Figure
\ref{fig:frameCS},
\begin{align*}
\sigma_z &= \frac{N_z}{A} - \frac{\sqrt{M_x^2 + M_y^2}}{S}\\
\tau_{z\theta} &= \frac{T_z}{C} + \frac{\sqrt{V_x^2 + V_y^2}}{A_s}
\end{align*}
where $N$ is the axial force (tension or compression), $T$ is the
torsional moment, $V$ is the shear force, $M$ is the bending moment,
$\sigma_z$ is the axial stress, and $\tau_{z\theta}$ is the shear
stress across the axial and hoop principal directions.
Hoop stress of the tower is estimated from the dynamic pressure of the
wind loads using the Eurocode method \citep{Eurocode}. Hoop stress of the submerged
columns is determined using the dynamic and static pressure heads of the
water.
\begin{align}
\sigma_{\theta,Euro} &= k_w q_{max} \frac{d-t}{2t};\qquad q_{max} =
\frac{1}{2}\rho_a U_a^2\\
\sigma_{\theta,hydro} &= \left(q_{max}+p_{hydro}\right) \frac{d-t}{2t};\qquad q_{max} =
\frac{1}{2}\rho_w U_w^2\\
p_{hydro} &= \rho_w g \left( a\frac{\cosh\left[\kappa\left(z + D \right)\right]}{\cosh\left(\kappa D\right)} - z\right)
\end{align}
where $\sigma_{\theta}$ is the hoop stress, $q_{max}$ is the maximum
dynamic pressure on a cross-section, and $p_{hydro}$ is the hydrostatic
pressure with contributions from wave motion and the static head. In
the Eurocode method, $k_w$ is the dynamic pressure factor for hoop
stress calculation using cylinder dimensions and an external pressure
buckling factor. Note that the argument, $(z)$, was dropped from many
of the terms without losing generality.
\subsection{Code Compliance as Utilizations}
Once the stress components of all structural members are computed, they
are compared against design code standards for compliance, and serve as
design constraints when conducting optimization. Multiple code
standards are used across all components. For all columns, the tower,
and substructure pontoons, stress components (axial, shear, and hoop)
are combined into a von Mises, equivalent, stress,
\begin{equation}
\sigma_{vm} = \sqrt{\sigma_a^2 + \sigma_{\theta}^2 -
\sigma_a\sigma_{\theta} + 3\tau_{a\theta}^2}
\end{equation}
where $\sigma_{vm}$ is the von Mises stress, $\sigma_a$ is the axial
stress, $\tau_{a\theta}$ is the shear stress across axial and hoop
principal directions, and $\sigma_{\theta}$ is chosen as the relevant
hoop stress. The von Mises stress is compared against the yield stress,
$\sigma_y$, and a safety factor as a utilization criterion.
Main column, offset column, and tower segment stresses and geometry are
also evaluated against a shell buckling criterion published by
\citet{Eurocode} and a global buckling criterion published by
\citet{Germanischer}. Note that the implementation of the Eurocode
buckling is modified slightly so as to produce continuously
differentiable output. See \citet{JacketSE} for a more detailed
exposition.
For submerged columns, additional code standard utilization ratios are
taken from the \citet{api2U}, Bulletin 2U (specifically the procedure
outlined in Appendix B). These standards also apply shell and general
buckling criterion with a margin of safety in a manner that accounts for
stiffeners and the common buckling modes of submerged structures.
Future efforts will also apply Bulletin 2V, the standards for plates, to
the legs that support taut mooring lines.
\section{Mooring Lines}
The quasi-steady mooring system analysis is handled by the external
Mooring Analysis Program (MAP++) library \citep{MAP}, which has
convenient Python bindings to access the simulation output, bundled into
the WISDEM \textit{pyMAP} module. MAP++ is designed to model the
steady-state forces on a Multi-Segmented, Quasi-Static (MSQS) mooring
line. Seabed contact, seabed friction, and multi-element mooring lines
with arbitrary connection configurations can be analyzed. MAP++ inputs
include sea depth, geometry descriptions of the mooring line
connections, and material properties of the lines. For chain and
rope-based cables, these material properties are not easily derived and
would be typically provided by a manufacturer. We borrow from the
approach of the popular Orcina OrcaFlex software \citep{orca} and use
the following expressions,
\begin{align*}
MBL &= 2.74\times 10^7 d^2 \left(44 - 80d\right) \,[\unit{N}] \\
mass &= 19.9\times 10^3 d^2 \,[\unit{kg/m}]\\
A &= 2\left(\pi d^2 / 4 \right)\,[\unit{m^2}]\\
EA &= 8.54\times 10^{10} d^2\,[\unit{N}]\\
cost &= 3.415\times 10^4 d^2 \,[\unit{USD}]
\end{align*}
where $MBL$ is the minimum breaking load, $d$ is the diameter of a single
half-chain link, $A$ is the chain cross-sectional area, $E$ is
Young's modulus, and $EA$ is the axial stiffness. When conducting
optimization, the expression for $MBL$ is poorly posed due to its limited
range of diameter applicability, so a linear fit is used instead,
\begin{equation}
MBL = 1000 \max\left(1.0, -5445.3 + 176972.7 d\right)
\end{equation}
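To give a sense of scale (an illustrative calculation, not a catalog
value): a chain with $d = \unit[0.1]{m}$ gives $mass \approx
\unit[199]{kg/m}$, $EA \approx \unit[8.5\times10^{8}]{N}$, and
$MBL = 2.74\times10^7 (0.1)^2\left(44 - 80(0.1)\right) \approx \unit[9.9]{MN}$,
roughly in line with published breaking loads for chain of that size.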
\section{Hydrostatic Stability}
\label{sec:static}
\subsection{Neutral Buoyancy}
Any floating body requires enough water displacement to create
sufficient buoyancy force such that the body stays afloat in the most
extreme loading and environmental conditions. This level of
displacement would otherwise be overkill for more benign loading
conditions. Since a floating turbine is designed for a constant hub
height, variable amounts of ballast are required to maintain a neutrally
buoyant system for all operating conditions. The variable ballast is
simply ocean water that is pulled in or pumped out of holding areas
within the substructure columns.
In \textit{FloatingSE}, the variable ballast water mass is calculated as
the difference between the total mass of displaced water and the total
mass of the floating turbine. This mass is then divided by the water
density to obtain the variable ballast volume, which is then compared to
the frustum shell cross section profile above the permanent ballast to
determine the height of the water ballast within the column. Once this
height is known, the final center of mass of the system can be computed.
\subsection{Surge/Sway Stability}
Surge and sway stability is not actively tracked over the course of a
load case. Instead, the total surge force on the structure is calculated
at the initial conditions and compared to the restoring force of the
mooring system at the maximum allowable surge offset, which is specified
by the user.
The surge direction is assumed to be aligned with the wind vector, which
is aligned with the $x$-axis. Since the rotor yaw is assumed to be
$0^{\circ}$, the surge forces on the turbine include the rotor thrust
and the wind and wave drag on the tower and substructure. The final
surge force over the whole structure is taken from the $x$-direction
reaction force of the reaction node in Frame3DD.
The restoring force is calculated as the smallest possible restoring
force after a displacement in any angular direction in the mooring
model. Since the alignment of the mooring lines relative to the
incoming wind direction is arbitrary, a maximum offset is simulated at
$2^{\circ}$ increments around the unit circle. Also recorded in this
survey is the maximum mooring line tension in any
line, in any direction, for comparison against the minimum breaking load
value,
\begin{equation}
F_{x,restore} = \min_{i\in a} F_{x,i}\quad \mbb{T}_{moor} = \max_{l\in L,i\in a} \mbb{T}_{l,i}\,;
\qquad L=\left\{1,2\ldots nlines\right\}, \, a= \left\{0^{\circ}, 2^{\circ}\ldots 360^{\circ}\right\}
\end{equation}
where $F_x$ is the surge force and $\mbb{T}$ is the tension. If the
restoring force at this maximum offset is greater than the surge force
applied, then the system is considered stable in surge. Since the wind
and wave profiles are essentially 2-D in the $x-z$ plane, the sway
stability is given the same status as surge stability.
\subsection{Pitch Stability}
The approach to pitch stability determination is similar to that of
surge stability. The total pitching moment on the floating turbine is
calculated and compared to the restoring moment at the maximum allowable
angle of heel. If the restoring moment at this max heel angle is
greater than the pitching moment applied, the system is said to be
statically stable in pitch.
Similar to the surge force calculation, the total pitching moment is
determined from the reaction moment at the boundary condition
in the Frame3DD analysis. The pitching moment has contributions from
the wind and wave loads on the structure, the rotor forces and torques,
the buoyancy forces on the submerged substructure, and the off-center
weight of components (e.g. the RNA).
The restoring pitching moment has two primary contributions. The first
is from the mooring lines. Similar to the surge force calculation, here
the floating turbine is deflected in pitch by the maximum allowable heel
angle and the mooring forces are recorded. The restoring moment
contribution from the mooring system is computed as,
\begin{equation}
\mbf{M_{moor}} = \sum_l \mbf{r_{cm-l}} \times \mbf{F_l}
\end{equation}
where $r_{cm-l}$ is the vector from the center of mass to the mooring
connection, and $F_l$ is the force applied by the $l$-\th~mooring
line. As above, $F_l$ is taken as the minimum over the possible
orientations of the mooring lines relative to the incoming wind direction.
\begin{figure}[htb]
\begin{subfigure}[b]{0.49\linewidth}
\centering \includegraphics[height=3.5in]{figs/metacenterA.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering \includegraphics[height=3.5in]{figs/metacenterB.pdf}
\caption{}
\end{subfigure}\\
\caption{Static stability of floating offshore wind turbines.}
\label{fig:metacenter}
\end{figure}
The second contributing restoring moment comes from the motion of the
center of buoyancy away from alignment with the center of mass. This is
a standard calculation in naval architecture \citep{thiagarajan2014} and is
diagrammed in Figure \ref{fig:metacenter}. In this diagram, the center
of mass is denoted, $G$, the center of buoyancy is $B$, and the
metacenter is $M$. In neutral conditions (Figure \ref{fig:metacenter}a),
all of these points are vertically aligned.
As the structure lists or heels, the center of buoyancy shifts toward
the side of the structure that is more submerged (from $B$ to $B'$) and
the buoyancy force no longer passes through the center of mass.
Instead, the buoyancy force passes through the metacenter with an
effective moment arm of $GZ$ from the center of mass (Figure
\ref{fig:metacenter}b). The metacenter is defined as the common point
through which the buoyancy force acts as it pitches through small
displacements, for bodies with sufficient freeboard margin.
The metacentric height, $GM$, is most easily calculated as an offset from the
center of buoyancy ($BM$) by,
\begin{equation}
h_{meta} = M - G = GM = BM + BG;\quad BM = \frac{I_w}{V}
\end{equation}
where $BG$, the distance between the centers of buoyancy and gravity, is
easily calculated, $I_w$ is the second moment of area of the substructure waterplane
(with units of \unit{$m^4$}) and $V$ is the total volume of displacement
(with units of \unit{$m^3$}). Note that for semisubmersible type
geometries, $I_w$ is calculated with the parallel axis theorem for all
of the columns at the waterplane,
\begin{equation}
I_w = \sum_i \left( I_{w,i} + S_ir_i^2 \right)
\end{equation}
where $S_i$ is the waterplane cross sectional area of the $i$-\th column and $r_i$
is the distance from the waterplane centroid to the $i$-\th column centroid.
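As an illustration (with hypothetical dimensions), consider a
semisubmersible with three columns of $\unit[12]{m}$ diameter whose
centroids lie $r_i = \unit[30]{m}$ from the waterplane centroid. Each
column contributes $S_i = \pi(6)^2 \approx \unit[113]{m^2}$, so
\begin{equation*}
I_w \approx 3\left[\frac{\pi (12)^4}{64} + (113)(30)^2\right] \approx \unit[3.1\times10^5]{m^4},
\end{equation*}
where the parallel-axis term $S_i r_i^2$ is roughly two orders of
magnitude larger than each column's own waterplane inertia. This is why
spreading the columns apart is such an effective way to raise the
metacenter.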
The restoring moment is then the buoyancy force acting through the
restoring arm, $GZ$,
\begin{equation}
M_{meta} = F_B GZ = F_B GM \sin \varphi
\end{equation}
where $\varphi$ is the angle of heel.
For this reason, the metacenter must be located above the center of mass
for static stability. This condition is imposed on the design as a
constraint. Note that the total volume of displacement, and the
subsequent buoyancy force, is not recalculated in the perturbed
configuration. It is assumed that the angles of deflection are small
and that there is sufficient freeboard and design symmetry such that the
total displacement is constant.
The total restoring pitching moment is then the sum of two
contributions,
\begin{equation}
M_{y,restore} = M_{y,moor} + M_{meta}
\end{equation}
\section{Hydrodynamic Stability}
Floating bodies are typically modeled, for small motions and linearized
behavior, as a second-order differential system with mass, damping, and
spring stiffness terms,
\begin{equation}
\left(\mbf{M} + \mbf{A}\right)\ddot{\mbf{x}} + \mbf{C}\dot{\mbf{x}} +
\mbf{K}\mbf{x} = \mbf{F}\left( t \right)
\end{equation}
where $\mbf{x}\in\mbb{R}^6$ is the six-degree of freedom vector
(commonly ordered as 1-surge, 2-sway, 3-heave, 4-roll, 5-pitch,
6-yaw), $\mbf{M}$ is the mass matrix, $\mbf{A}$ is the added mass
matrix, $\mbf{C}$ is the damping matrix, and $\mbf{K}$ is the stiffness
matrix. The right-hand side of the equation captures the time-dependent
summation of all forces.
As a low-fidelity, quasi-static sizing and cost module,
\textit{FloatingSE} does not attempt to capture all of the matrix
entries or forcing terms of the hydrodynamics. A more sophisticated
time- or frequency-domain solver, where these quantities are calculated,
may be linked or included into \textit{FloatingSE} in the future.
Nevertheless, it does attempt to compute the diagonal entries of the
mass and stiffness matrices in order to derive the rigid body natural
frequencies of the system,
\begin{equation}
f_i = \frac{\omega_i}{2\pi} =
\frac{1}{2\pi}\sqrt{\frac{K_{ii}}{M_{ii}+A_{ii}}}, \quad \forall i \in
\left[1\ldots6\right]
\end{equation}
where $f_i$ are the frequencies of the eigenmodes and $\omega_i$ is the
circular frequency. The mass matrix diagonal entries, $M_{ii}$, are simply the mass and
moments of inertia of the whole system,
\begin{equation}
M_{11} = M_{22} = M_{33} = m_{sys};\quad M_{44} = I_{xx,sys};\quad M_{55} = I_{yy,sys};\quad M_{66} = I_{zz,sys};
\end{equation}
where the coordinate system notation is consistent with that of Figure
\ref{fig:diagram}.
The added mass matrix diagonal entries are evaluated via standard strip
theory for the tapered vertical columns. The added mass for the system is a
summation over the columns, using the parallel axis theorem for the
rotational degrees of freedom. Pontoon contributions to system added
mass are currently ignored. The column quantities are calculated as,
\begin{equation}
A_{11} = A_{22} = \rho V;\quad A_{33} =
\left(\frac{1}{2}\right)\frac{8}{3} \rho \max \left[ R^3(z)\right] ; \quad A_{44} =
A_{55} = \pi\rho\int\left(z-z_{cb}\right)R^2(z)dz;\quad A_{66} = 0.0;
\end{equation}
where $\rho$ is the water density, $R(z)$ is the column radius along its
axis, and $V$ is the submerged volume. The extra factor of $1/2$ in
$A_{33}$ is included to account for the fact that the top of the column
extends above the waterline. Also, the integral in $A_{55}$ is only evaluated
along the submerged portion of the column.
The stiffness matrix is comprised of contributions from the mooring
and hydrostatic stiffness. The mooring linearized stiffness matrix is output
directly from MAP++ and needs no additional processing within
\textit{FloatingSE}. The hydrostatic stiffness, for a vertical column, is derived from the same
principles described above regarding the metacentric height,
\begin{equation}
K_{ii} = K_{ii}^{moor} + K_{ii}^{hydro};\quad K_{33}^{hydro} = \rho g
S_{sys};\quad K_{44}^{hydro} = K_{55}^{hydro} = \rho g V h_{meta}
\end{equation}
where $S_{sys}$ is the waterplane area of the system.
Once the rigid body natural frequencies (eigenmodes) of the system are
calculated, they are compared against the standard range of wave
frequencies, \unit[0.5--5]{Hz}, and expressed as a design constraint (with a
partial safety factor).
|
[STATEMENT]
lemma additiveD:
"\<lbrakk> additive t; sound P; sound Q \<rbrakk> \<Longrightarrow> t (\<lambda>s. P s + Q s) = (\<lambda>s. t P s + t Q s)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. \<lbrakk>Transformers.additive t; sound P; sound Q\<rbrakk> \<Longrightarrow> t (\<lambda>s. P s + Q s) = (\<lambda>s. t P s + t Q s)
[PROOF STEP]
by(simp add:additive_def)
|
[STATEMENT]
lemma mem_set_2:
assumes "a \<in> set l"
shows "a mem l"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. a mem l
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
a \<in> set l
goal (1 subgoal):
1. a mem l
[PROOF STEP]
by (metis (full_types) memS_def mem_memS_eq)
|
State Before: α : Type u_1
m : MeasurableSpace α
f : α → α
s : Set α
μ : MeasureTheory.Measure α
hf : PreErgodic f
hs : MeasurableSet s
hs' : f ⁻¹' s = s
⊢ ↑↑μ s = 0 ∨ ↑↑μ (sᶜ) = 0
State After: no goals
Tactic: simpa using hf.ae_empty_or_univ hs hs'
|
If $f$ and $g$ are holomorphic functions on a punctured open set $s - \{z\}$, then the residue of $f + g$ at $z$ is the sum of the residues of $f$ and $g$ at $z$.
|
module Nondeterministics
using Random
using StatsFuns: poisinvcdf
using StaticArrays
using CUDAnative
using CuArrays
include("specfuns.jl")
const _distributions = (
:Categorical, :Poisson,
:Uniform, :Normal, :Exponential, :Gamma,
:Dirichlet
)
export Distribution
export params, logpdf
for s in _distributions
@eval export $s
end
############################
# Distribution generalities
############################
primitive type Distribution 8 end
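# Each Distribution value is an 8-bit tag. The high nibble appears to group
# the families (0x0_: discrete, 0x1_: continuous scalar, 0x2_: array-valued).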
const Categorical = reinterpret(Distribution, 0x00)
const Poisson = reinterpret(Distribution, 0x01)
const Uniform = reinterpret(Distribution, 0x10)
const Normal = reinterpret(Distribution, 0x11)
const Exponential = reinterpret(Distribution, 0x12)
const Gamma = reinterpret(Distribution, 0x13)
const Dirichlet = reinterpret(Distribution, 0x20)
function Base.show(io::IO, d::Distribution)
for s in _distributions
d == eval(s) && return print(io, string(s))
end
print(io, d)
end
Base.length(::Distribution) = 1
Base.iterate(d::Distribution) = (d, nothing)
Base.iterate(::Distribution, ::Nothing) = nothing
@inline function params(d::Distribution)
d == Categorical && return Tuple{Vararg{Real}}
d == Poisson && return Tuple{Real}
d == Uniform && return Tuple{Real,Real}
d == Normal && return Tuple{Real,Real}
d == Exponential && return Tuple{Real}
d == Gamma && return Tuple{Real,Real}
d == Dirichlet && return Tuple{Vararg{Real}}
end
###############################
# Distribution implementations
###############################
# Base.@irrational log2π 1.8378770664093454836 log(big(2.)*π)
function homogenize(t::Tuple)
T = promote_type(typeof(t).parameters...)
Tuple{Vararg{T}}(t)
end
include("scalars.jl")
include("arrays.jl")
@inline function logpdf(d::Distribution)
for s in _distributions
d == eval(s) && return eval(Symbol(:logpdf,s))
end
end
@inline function random(d::Distribution, params...)
for s in _distributions
d == eval(s) && return eval(Symbol(:rand,s))(params...)
end
end
(d::Distribution)(params...) = random(d, params...)
for s in _distributions
@eval CuArrays.@cufunc $(Symbol(:logpdf,s))(args...) = $(Symbol(:_logpdf,s))(args...)
end
end # module
|
function full_forward()
global config mem;
curr_layer_idx = config.misc.current_layer;
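% Affine forward pass for the current layer: config.weights{l} stores the
% weight matrix and config.weights{config.layer_num + l} the corresponding
% bias, both kept in the same cell array.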
mem.activations{curr_layer_idx} = bsxfun(@plus, config.weights{curr_layer_idx} * mem.layer_inputs{curr_layer_idx}, config.weights{config.layer_num+curr_layer_idx});
config.misc.current_layer = curr_layer_idx + 1;
end
|
World Series of Fighting on Tuesday announced that Alexandre has inked “an exclusive, multi-year agreement” to compete for the Las Vegas-based organization. Specifics of the deal were not disclosed.
A decorated muay Thai practitioner, Alexandre began his mixed martial arts career in 2011 and ran up a 5-1 record inside the Bellator cage in the span of 13 months, including a rematch win against the only man to beat him, Josh Quayhagen. The 33-year-old was absent from MMA for more than a year before returning to knock out Rey Trujillo in his most recent bout under the banner of Texas’ Legacy Fighting Championship.
Alexandre has focused mainly on kickboxing in the past two years, most recently defeating John Wayne Parr for the Lion Fight super middleweight title in October.
The date and opponent for Alexandre’s promotional debut “will be announced soon,” according to a release. WSOF has two events on its slate for the end of the year: WSOF 25 on Nov. 20 in Phoenix and WSOF 26 on Dec. 18 in Las Vegas.
|
Coastal Metal Service is a Quality Manufacturer of Metal Roofing & Siding Products for use in Residential, Commercial / Industrial and Agricultural markets. We offer these products in Steel, Aluminum, and Copper.
Based in Goldsboro, North Carolina, Coastal Metal Service offers a wide variety of architectural / standing seam and exposed fastener panels in various metals, gauges, and colors. Custom panels and accessories are also available from CMS. Radius panels and tapered panels are included in the custom panel products we manufacture. We can produce box-style gutter products up to 20 feet long. CMS has two 21 foot Computerized Folding Machines that allow us to offer coping and special order trim products for your residential or commercial project.
|
module Ch05.VarArg
import Data.Vect
--------------------------------------------------------------------------------
-- Auxiliary stuff for defining functions with a variable # of args
--------------------------------------------------------------------------------
||| Type of vectors of fixed length of elements of varying types
data VarVect : (n : Nat) -> Vect n Type -> Type where
VarNil : VarVect Z []
VarCons : (a : Type) ->
(x : a) ->
VarVect n as ->
VarVect (S n) (a :: as)
VarArgType : (numArgs : Nat) ->
Vect numArgs Type ->
Type
VarArgType Z [] = ?VarArgType_rhs_1 -- hole in the original: the base case still needs a result type (the `b` in the comment below)
VarArgType (S k) (a :: as) = a -> VarArgType k as
-- Suppose we want to define a function
--
-- f : a_1 -> a_2 -> ... -> a_n -> b
--
-- for some [a_1, ..., a_n] : Vect n Type. Then there are two ways to do that:
--
-- One: Starting from the knowledge of
--
-- \x1, ... x(n-1) => f x1 ... x(n-1) xn
--
-- for arbitrary but fixed `xn`, we define `f`.
|
/-
Copyright (c) 2017 Mario Carneiro. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Mario Carneiro
-/
import logic.basic data.bool data.option.defs tactic.interactive
namespace option
variables {α : Type*} {β : Type*}
@[simp] theorem get_mem : ∀ {o : option α} (h : is_some o), option.get h ∈ o
| (some a) _ := rfl
theorem get_of_mem {a : α} : ∀ {o : option α} (h : is_some o), a ∈ o → option.get h = a
| _ _ rfl := rfl
theorem mem_unique {o : option α} {a b : α} (ha : a ∈ o) (hb : b ∈ o) : a = b :=
option.some.inj $ ha.symm.trans hb
theorem injective_some (α : Type*) : function.injective (@some α) :=
λ _ _, some_inj.mp
@[extensionality] theorem ext : ∀ {o₁ o₂ : option α}, (∀ a, a ∈ o₁ ↔ a ∈ o₂) → o₁ = o₂
| none none H := rfl
| (some a) o H := ((H _).1 rfl).symm
| o (some b) H := (H _).2 rfl
theorem eq_none_iff_forall_not_mem {o : option α} :
o = none ↔ (∀ a, a ∉ o) :=
⟨λ e a h, by rw e at h; cases h, λ h, ext $ by simpa⟩
@[simp] theorem none_bind {α β} (f : α → option β) : none >>= f = none := rfl
@[simp] theorem some_bind {α β} (a : α) (f : α → option β) : some a >>= f = f a := rfl
@[simp] theorem none_bind' (f : α → option β) : none.bind f = none := rfl
@[simp] theorem some_bind' (a : α) (f : α → option β) : (some a).bind f = f a := rfl
@[simp] theorem bind_some : ∀ x : option α, x >>= some = x :=
@bind_pure α option _ _
@[simp] theorem bind_eq_some {α β} {x : option α} {f : α → option β} {b : β} : x >>= f = some b ↔ ∃ a, x = some a ∧ f a = some b :=
by cases x; simp
@[simp] theorem bind_eq_some' {x : option α} {f : α → option β} {b : β} : x.bind f = some b ↔ ∃ a, x = some a ∧ f a = some b :=
by cases x; simp
lemma bind_comm {α β γ} {f : α → β → option γ} (a : option α) (b : option β) :
a.bind (λx, b.bind (f x)) = b.bind (λy, a.bind (λx, f x y)) :=
by cases a; cases b; refl
@[simp] theorem map_none {α β} {f : α → β} : f <$> none = none := rfl
@[simp] theorem map_some {α β} {a : α} {f : α → β} : f <$> some a = some (f a) := rfl
@[simp] theorem map_none' {f : α → β} : option.map f none = none := rfl
@[simp] theorem map_some' {a : α} {f : α → β} : option.map f (some a) = some (f a) := rfl
@[simp] theorem map_eq_some {α β} {x : option α} {f : α → β} {b : β} : f <$> x = some b ↔ ∃ a, x = some a ∧ f a = b :=
by cases x; simp
@[simp] theorem map_eq_some' {x : option α} {f : α → β} {b : β} : x.map f = some b ↔ ∃ a, x = some a ∧ f a = b :=
by cases x; simp
@[simp] theorem map_id' : option.map (@id α) = id := map_id
@[simp] theorem seq_some {α β} {a : α} {f : α → β} : some f <*> some a = some (f a) := rfl
@[simp] theorem some_orelse' (a : α) (x : option α) : (some a).orelse x = some a := rfl
@[simp] theorem some_orelse (a : α) (x : option α) : (some a <|> x) = some a := rfl
@[simp] theorem none_orelse' (x : option α) : none.orelse x = x :=
by cases x; refl
@[simp] theorem none_orelse (x : option α) : (none <|> x) = x := none_orelse' x
@[simp] theorem orelse_none' (x : option α) : x.orelse none = x :=
by cases x; refl
@[simp] theorem orelse_none (x : option α) : (x <|> none) = x := orelse_none' x
@[simp] theorem is_some_none : @is_some α none = ff := rfl
@[simp] theorem is_some_some {a : α} : is_some (some a) = tt := rfl
theorem is_some_iff_exists {x : option α} : is_some x ↔ ∃ a, x = some a :=
by cases x; simp [is_some]; exact ⟨_, rfl⟩
@[simp] theorem is_none_none : @is_none α none = tt := rfl
@[simp] theorem is_none_some {a : α} : is_none (some a) = ff := rfl
@[simp] theorem not_is_some {a : option α} : is_some a = ff ↔ a.is_none = tt :=
by cases a; simp
theorem iget_mem [inhabited α] : ∀ {o : option α}, is_some o → o.iget ∈ o
| (some a) _ := rfl
theorem iget_of_mem [inhabited α] {a : α} : ∀ {o : option α}, a ∈ o → o.iget = a
| _ rfl := rfl
@[simp] theorem guard_eq_some {p : α → Prop} [decidable_pred p] {a b : α} :
guard p a = some b ↔ a = b ∧ p a :=
by by_cases p a; simp [option.guard, h]; intro; contradiction
@[simp] theorem guard_eq_some' {p : Prop} [decidable p] :
∀ u, _root_.guard p = some u ↔ p
| () := by by_cases p; simp [guard, h, pure]; intro; contradiction
theorem lift_or_get_choice {f : α → α → α} (h : ∀ a b, f a b = a ∨ f a b = b) :
∀ o₁ o₂, lift_or_get f o₁ o₂ = o₁ ∨ lift_or_get f o₁ o₂ = o₂
| none none := or.inl rfl
| (some a) none := or.inl rfl
| none (some b) := or.inr rfl
| (some a) (some b) := by simpa [lift_or_get] using h a b
end option
|
[GOAL]
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
⊢ Compatibility.τ₀ =
Compatibility.τ₁
(eqToIso (_ : (toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor = N₁))
(eqToIso
(_ :
(toKaroubiEquivalence (ChainComplex C ℕ)).functor ⋙ Preadditive.DoldKan.equivalence.inverse =
Γ ⋙ (toKaroubiEquivalence (SimplicialObject C)).functor))
N₁Γ₀
[PROOFSTEP]
ext K : 3
[GOAL]
case w.w.h
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
K : ChainComplex C ℕ
⊢ NatTrans.app Compatibility.τ₀.hom K =
NatTrans.app
(Compatibility.τ₁
(eqToIso
(_ : (toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor = N₁))
(eqToIso
(_ :
(toKaroubiEquivalence (ChainComplex C ℕ)).functor ⋙ Preadditive.DoldKan.equivalence.inverse =
Γ ⋙ (toKaroubiEquivalence (SimplicialObject C)).functor))
N₁Γ₀).hom
K
[PROOFSTEP]
simp only [Compatibility.τ₀_hom_app, Compatibility.τ₁_hom_app, eqToIso.hom]
[GOAL]
case w.w.h
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
K : ChainComplex C ℕ
⊢ NatTrans.app Preadditive.DoldKan.equivalence.counitIso.hom ((toKaroubiEquivalence (ChainComplex C ℕ)).functor.obj K) =
Preadditive.DoldKan.equivalence.functor.map
(NatTrans.app
(eqToHom
(_ :
(toKaroubiEquivalence (ChainComplex C ℕ)).functor ⋙ Preadditive.DoldKan.equivalence.inverse =
Γ ⋙ (toKaroubiEquivalence (SimplicialObject C)).functor))
K) ≫
NatTrans.app
(eqToHom
(_ : (toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor = N₁))
(Γ.obj K) ≫
NatTrans.app N₁Γ₀.hom K
[PROOFSTEP]
refine' (N₂Γ₂_compatible_with_N₁Γ₀ K).trans _
[GOAL]
case w.w.h
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
K : ChainComplex C ℕ
⊢ NatTrans.app N₂Γ₂ToKaroubiIso.hom K ≫ NatTrans.app N₁Γ₀.hom K =
Preadditive.DoldKan.equivalence.functor.map
(NatTrans.app
(eqToHom
(_ :
(toKaroubiEquivalence (ChainComplex C ℕ)).functor ⋙ Preadditive.DoldKan.equivalence.inverse =
Γ ⋙ (toKaroubiEquivalence (SimplicialObject C)).functor))
K) ≫
NatTrans.app
(eqToHom
(_ : (toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor = N₁))
(Γ.obj K) ≫
NatTrans.app N₁Γ₀.hom K
[PROOFSTEP]
simp only [N₂Γ₂ToKaroubiIso_hom, eqToHom_map, eqToHom_app, eqToHom_trans_assoc]
[GOAL]
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
⊢ Compatibility.υ
(eqToIso
(_ : (toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor = N₁)) =
Γ₂N₁
[PROOFSTEP]
ext1
[GOAL]
case w
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
⊢ (Compatibility.υ
(eqToIso
(_ :
(toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor = N₁))).hom =
Γ₂N₁.hom
[PROOFSTEP]
rw [← cancel_epi Γ₂N₁.inv, Iso.inv_hom_id]
[GOAL]
case w
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
⊢ Γ₂N₁.inv ≫
(Compatibility.υ
(eqToIso
(_ :
(toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor =
N₁))).hom =
𝟙 (N₁ ⋙ Γ₂)
[PROOFSTEP]
ext X : 2
[GOAL]
case w.w.h
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
X : SimplicialObject C
⊢ NatTrans.app
(Γ₂N₁.inv ≫
(Compatibility.υ
(eqToIso
(_ :
(toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor =
N₁))).hom)
X =
NatTrans.app (𝟙 (N₁ ⋙ Γ₂)) X
[PROOFSTEP]
rw [NatTrans.comp_app]
[GOAL]
case w.w.h
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
X : SimplicialObject C
⊢ NatTrans.app Γ₂N₁.inv X ≫
NatTrans.app
(Compatibility.υ
(eqToIso
(_ :
(toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor =
N₁))).hom
X =
NatTrans.app (𝟙 (N₁ ⋙ Γ₂)) X
[PROOFSTEP]
erw [compatibility_Γ₂N₁_Γ₂N₂_natTrans X]
[GOAL]
case w.w.h
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
X : SimplicialObject C
⊢ ((compatibility_Γ₂N₁_Γ₂N₂.app X).inv ≫ NatTrans.app Γ₂N₂.natTrans ((toKaroubi (SimplicialObject C)).obj X)) ≫
NatTrans.app
(Compatibility.υ
(eqToIso
(_ :
(toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor =
N₁))).hom
X =
NatTrans.app (𝟙 (N₁ ⋙ Γ₂)) X
[PROOFSTEP]
rw [Compatibility.υ_hom_app, Preadditive.DoldKan.equivalence_unitIso, Iso.app_inv, assoc]
[GOAL]
case w.w.h
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
X : SimplicialObject C
⊢ NatTrans.app compatibility_Γ₂N₁_Γ₂N₂.inv X ≫
NatTrans.app Γ₂N₂.natTrans ((toKaroubi (SimplicialObject C)).obj X) ≫
NatTrans.app Γ₂N₂.hom ((toKaroubiEquivalence (SimplicialObject C)).functor.obj X) ≫
Preadditive.DoldKan.equivalence.inverse.map
(NatTrans.app
(eqToIso
(_ :
(toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor =
N₁)).hom
X) =
NatTrans.app (𝟙 (N₁ ⋙ Γ₂)) X
[PROOFSTEP]
erw [← NatTrans.comp_app_assoc, IsIso.hom_inv_id]
[GOAL]
case w.w.h
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
X : SimplicialObject C
⊢ NatTrans.app compatibility_Γ₂N₁_Γ₂N₂.inv X ≫
NatTrans.app (𝟙 (N₂ ⋙ Γ₂)) ((toKaroubi (SimplicialObject C)).obj X) ≫
Preadditive.DoldKan.equivalence.inverse.map
(NatTrans.app
(eqToIso
(_ :
(toKaroubiEquivalence (SimplicialObject C)).functor ⋙ Preadditive.DoldKan.equivalence.functor =
N₁)).hom
X) =
NatTrans.app (𝟙 (N₁ ⋙ Γ₂)) X
[PROOFSTEP]
rw [NatTrans.id_app, id_comp, NatTrans.id_app, eqToIso.hom, eqToHom_app, eqToHom_map]
[GOAL]
case w.w.h
C : Type u_1
inst✝³ : Category.{u_2, u_1} C
inst✝² : Preadditive C
inst✝¹ : IsIdempotentComplete C
inst✝ : HasFiniteCoproducts C
X : SimplicialObject C
⊢ NatTrans.app compatibility_Γ₂N₁_Γ₂N₂.inv X ≫
eqToHom
(_ :
Preadditive.DoldKan.equivalence.inverse.obj
(Preadditive.DoldKan.equivalence.functor.obj
((toKaroubiEquivalence (SimplicialObject C)).functor.obj X)) =
Preadditive.DoldKan.equivalence.inverse.obj (N₁.obj X)) =
𝟙 ((N₁ ⋙ Γ₂).obj X)
[PROOFSTEP]
rw [compatibility_Γ₂N₁_Γ₂N₂_inv_app, eqToHom_trans, eqToHom_refl]
|
# About ============================
# @author: Adail Carvalho
# @since : 2018-03-24
# __________________________________
#Required Libs ====
checkAndInstallRequiredLib <- function(libname, extralib = NULL) {
# character.only = TRUE is required because the package name is held in a
# variable; without it, require() looks for a package literally named "libname".
if (!require(libname, character.only = TRUE)) {
install.packages(libname)
}
if (!is.null(extralib)) {
if (!require(extralib, character.only = TRUE)) {
install.packages(extralib)
}
}
}
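# Usage sketch for the helper above, covering the two packages this script
# loads below:
# checkAndInstallRequiredLib("plotly", "webshot")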
# Plotly: library that provides beautiful charting tools.
library(plotly)
# Webshot: provides chart exports capabilities.
library(webshot)
# Matrices / DFrames headers ====
modelHeader <- c("Coefficients", "Predictions", "RMSErrorRate", "Dataset", "YDependent", "XIndependent")
lmHeader <- c("Observed Y","Predicted Y", "Residual E")
#Error messages ====
insuficientArgsException <- simpleError("Missing one or more required variables to proceed = ")
insuficientArgsWarning <- "Missing one or more required variables to do the requested operation = "
illegalArgsException <- simpleError("Invalid args: ")
notSupportedException <- simpleError("Operation with the passed args are not supported yet.")
# Main ====
' Entry point for our Linear Model program
# input
@filePath CSV file that represents the dataset
@delimiter The CSV file delimiter (if it is a TSV, use "\t")
@dependentVar Represents the Y column name in the dataset, where Y represents the dependent variable in the Linear Model.
Column Y must be numeric.
@... Represents one or more dataset X column names, where X represents the independent variables in the Linear Model.
Columns X must be numeric.
# returns
@ model A List containing both Predictions and the Linear Model coefficients.
$Coefficients : list containing the values found for the linear model coefficients
$Predictions : matrix that shows the predicted values, the real values for Y, and the E residuals
$RMSErrorRate : root-mean-square error of the predictions; lower values indicate a better fit
$Dataset: a matrix representation of the original dataset.
# usage
beerModel <- linearModel("C:/opt/beer_delivery.csv","\t", "Tempo", "NumCaixas", "Distancia")
houses <- linearModel("C:/opt/houses_set.csv", ",", "SalePrice", "OverallQual", "YearBuilt", "MSSubClass", "BedroomAbvGr")
# checking values
houses$Coefficients
houses$Predictions
'
linearModel <- function (filePath, delimiter, dependentVar, ...) {
independentVarList <- list(...)
if (is.null(dependentVar) || length(independentVarList) == 0) {
stop(paste(insuficientArgsException, "{dependentVar, independentVarList}"))
}
matrixDataSet <- getFileToMatrix(filePath, delimiter)
if (length(independentVarList) == 1) {
model <- simpleLinearModel(matrixDataSet, dependentVar, independentVarList[[1]])
} else {
model <- multiLinearModel(matrixDataSet, dependentVar, independentVarList)
}
returnValue(model)
}
# Handle general operations to get the results
# @returns results A list containing the coefficient values, RMSE error and the made predictions.
simpleLinearModel <- function (matrixDataSet, dependentVar, independentVar) {
if (!is.matrix(matrixDataSet)) {
stop(illegalArgsException)
}
dependentIndex <- grep(dependentVar, colnames(matrixDataSet))
independentIndex <- grep(independentVar, colnames(matrixDataSet))
dependentVector <- matrixDataSet[,dependentIndex]
independentVector <- matrixDataSet[,independentIndex]
slope <- getSLRSlope(independentVector, dependentVector)
intercept <- getSLRIntercept(independentVector, dependentVector, slope)
resultModel <- doSimplePredictions(intercept, slope, independentVector, dependentVector)
coefficientsMatrix <- matrix(nrow = 1, ncol = 2, dimnames = list(c(1),c("Intercept", independentVar)))
coefficientsMatrix[,1] <- intercept
coefficientsMatrix[,2] <- slope
modelErrorRate <- estimateModelError(resultModel[,2])
linearModel <- cbind(dependentVector,resultModel)
colnames(linearModel) <- lmHeader
# Gather generated values and name the list entries
model <- list(coefficientsMatrix, linearModel, modelErrorRate, matrixDataSet, dependentVar, independentVar)
model <- handleDataHeaders(model, modelHeader)
returnValue(model)
}
'
Handle general operations to get the results
@returns results A list containing the coefficient values, RMSE error and the made predictions.
# Usage (directly)
myModel <- multiLinearModel(dataMatrix, "Y var", as.list(c("var x1", "var x2", "var xn")))
'
multiLinearModel <- function (matrixDataSet, dependentVar, independentVarList) {
if (!is.matrix(matrixDataSet)) {
stop(illegalArgsException)
}
dependentColIndex <- grep(dependentVar, colnames(matrixDataSet))
dependentMatrix <- matrixDataSet[,dependentColIndex]
independentMatrix <- getIndependentVarsAsMatrix(matrixDataSet, independentVarList)
levelsMatrix <- getRegressionLevelsMatrix(as.matrix(independentMatrix))
xProductMatrix <- matrixTranposeAndMultiply(levelsMatrix, NULL)
xProductMatrix <- getInverseMatrix(xProductMatrix)
yProductMatrix <- matrixTranposeAndMultiply(levelsMatrix, as.matrix(dependentMatrix))
coefficientsMatrix <- getMLRCoefficients(xProductMatrix, yProductMatrix, independentMatrix, independentVarList)
resultModel <- doMultivariantPredictions(coefficientsMatrix, independentMatrix, dependentMatrix)
modelErrorRate <- estimateModelError(resultModel[,2])
linearModel <- cbind(dependentMatrix,resultModel)
colnames(linearModel) <- lmHeader
model <- list(coefficientsMatrix, linearModel, modelErrorRate, matrixDataSet, dependentVar, independentVarList)
model <- handleDataHeaders(model, modelHeader)
returnValue(model)
}
# Linear Model ====
# Estimates the values for Y, using the calculated coefficients.
# Applies the following rule: ~Yi : B0 + (B1 * Xi)
# Where ~Yi = estimated value for the dependent variable Y, B0 = intercept coefficient, B1 = slope coefficient, Xi = observed value of X,
# and i the index over the X independent vector.
# returns a Matrix with the estimated values for Y and the residuals E, where E = real value of Y - estimated value for Y
doSimplePredictions <- function(coefficientB0, coefficientB1, independentVector, dependentVector) {
predictedY <- vector("numeric", length(dependentVector))
residuals <- vector("numeric", length(independentVector))
for (i in 1:length(independentVector)) {
predictedY[i] <- coefficientB0 + (coefficientB1 * independentVector[i])
}
residualVector <- dependentVector - predictedY
returnValue(cbind(predictedY, residualVector))
}
# Estimates the values for Y, using the calculated coefficients.
# Applies the following rule: ~Y : B0 + B1(X1 - ~X1) + B2(X2 - ~X2) + ... + Bn(Xn - ~Xn)
# Where ~Y = estimated value for the dependent variable Y, Bi = coefficient values, Xi = observed value of X, ~Xi = mean of X variable.
# Note that getMLRCoefficients already folds the -Bi * ~Xi terms into the intercept B0, so the loop below multiplies the raw X values.
# returns a Matrix with the estimated values for Y and the residuals E, where E = real value of Y - estimated value for Y
doMultivariantPredictions <- function(coefficientsMatrix, independentMatrix, dependentMatrix = NULL) {
predictedY <- matrix(0, nrow = nrow(independentMatrix), ncol = 1)
residualMatrix <- matrix(0 , nrow = nrow(independentMatrix), ncol = 1)
for (r in 1:nrow(independentMatrix)) {
for (coe in 2:ncol(coefficientsMatrix)) {
predictedY[r,1] <- predictedY[r,1] + (coefficientsMatrix[,coe] * independentMatrix[r,coe - 1])
}
predictedY[r,1] <- predictedY[r,1] + coefficientsMatrix[,1]
}
predictions <-NULL
if (missing(dependentMatrix)) {
predictions <- predictedY
} else {
residualMatrix <- dependentMatrix - predictedY
predictions <- cbind(predictedY, residualMatrix)
}
returnValue(predictions)
}
# Matrices general operations ====
getMLRCoefficients <- function(xProductMatrix, yProductMatrix, independentMatrix, xLabels) {
avgIndMat <- colMeans(independentMatrix)
coefficientsMatrix <- matrixMultiply(xProductMatrix, yProductMatrix)
coefficientsMatrix <- t(coefficientsMatrix)
slope <- coefficientsMatrix[,1]
for(c in 2:ncol(coefficientsMatrix)) {
slope <- slope + (coefficientsMatrix[,c] * (-avgIndMat[c - 1]))
}
coefficientsMatrix[,1] <- slope
colnames(coefficientsMatrix) <-c("[Intercept]", xLabels)
returnValue(coefficientsMatrix)
}
# Returns the inverse of a given matrix, using base R's solve().
getInverseMatrix <- function(matriz) {
invertedMatrix <- solve(matriz)
returnValue(invertedMatrix)
}
# Returns the design matrix for the multivariate linear model: an X0 column of ones followed by each independent variable centered on its mean
getRegressionLevelsMatrix <- function(matriz) {
avgIndependentsVarMatrix <- t(as.matrix(colMeans(matriz)))
levelsMatrix <- cbind(getX0Dimension(matriz), matriz)
for (c in colnames(matriz)) {
x <- matriz[,c]
l <- vector(class(x), length(x))
for (i in 1:length(x)) {
l[i] <- x[i] - avgIndependentsVarMatrix[,c]
}
levelsMatrix[,c] <- l
}
returnValue(levelsMatrix)
}
# Represents the x0 coefficient in the lm equation, where x0 = 1. In the matrix of independent values, it is placed in the first column.
getX0Dimension <- function(matriz) {
numRows <- nrow(matriz)
returnValue(matrix(rep(1, numRows), dimnames = list(c(1:numRows), c("X0"))))
}
# Returns the product of two matrices (a hand-rolled equivalent of base R's %*%)
matrixMultiply <- function(matrixA, matrixB) {
resultMatrix <- matrix(0, nrow = nrow(matrixA), ncol = ncol(matrixB))
for (r in 1:nrow(matrixA)) {
for (c in 1:ncol(matrixB)) {
for (kol in 1:ncol(matrixA)) {
resultMatrix[r,c] <- resultMatrix[r,c] + (matrixA[r,kol] * matrixB[kol, c])
}
}
}
returnValue(resultMatrix)
}
# Returns t(matriz) %*% matrixB, or t(matriz) %*% matriz when matrixB is NULL
matrixTranposeAndMultiply <- function(matriz, matrixB) {
transpMatrix <- t(matriz)
if (is.null(matrixB)) {
matrixY <- matriz
} else {
matrixY <- matrixB
}
returnValue(matrixMultiply(transpMatrix, matrixY))
}
# Linear Regression Rules Application ====
# Estimates error using the root-mean-square error (RMSE).
# @residualVector Vector that holds the E residuals of our model.
estimateModelError <- function(residualVector) {
rmse <- sqrt(sum(residualVector ** 2) / length(residualVector))
returnValue(rmse)
}
# Returns SLR Slope value
getSLRSlope <- function(independentVector, dependentVector) {
levelsVectorX <- independentVector - mean(independentVector)
levelsVectorY <- dependentVector - mean(dependentVector)
numeratorB1 <- sum(levelsVectorX * levelsVectorY)
denominatorB1 <- sum(levelsVectorX ** 2)
slope <- numeratorB1 / denominatorB1
returnValue(slope)
}
# Returns SLR Intercept value
getSLRIntercept <- function (independentVector, dependentVector, slope) {
intercept <- mean(dependentVector) - (slope * mean(independentVector))
returnValue(intercept)
}
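# Worked example (illustrative, not run): for x <- c(1, 2, 3) and
# y <- c(2, 4, 6), getSLRSlope(x, y) returns 2 and
# getSLRIntercept(x, y, 2) returns 0, recovering the line y = 0 + 2x.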
# Utils ====
getIndependentVarsAsMatrix <- function(matrixData, independentVarList) {
independentMatrix <- matrix(0, nrow = nrow(matrixData), ncol = length(independentVarList))
colnames(independentMatrix) <- independentVarList
for (i in independentVarList) {
name <- grep(i, colnames(matrixData))
independentMatrix[,i] <- matrixData[,name]
}
returnValue(independentMatrix)
}
getFileToMatrix <- function(filePath, delimiter) {
rawDataSet <- read.csv(filePath, sep = delimiter, header = TRUE)
returnValue(data.matrix(rawDataSet))
}
' Export charts as images.
@p A chart object
@filename Name of the file to be saved.
@ext File extension
'
exportChartAsImg <- function(p, filename, ext) {
export(p, paste(filename, ext))
}
handleDataHeaders <- function(data, header) {
names(data) <- header
returnValue(data)
}
# Charts ====
# Add to a chart labels and title
applyLayout <- function(p, xLabel, yLabel, chartTitle) {
title <- NULL
if (missing(chartTitle)) {
title <- paste(yLabel, " by ", xLabel)
} else {
title <- chartTitle
}
p <- layout(p, title = title,
xaxis = list(title = xLabel),
yaxis = list (title = yLabel))
returnValue(p)
}
'Export models as simple X vs Y line charts, from a given model.
@ model A model list as returned by linearModel.
@ dirToSave The directory where the plots will be exported
'
createChartsFromModel <- function(model, dirToSave = NULL) {
for (x in model$XIndependent) {
if (isTRUE(grepl('^[0-9]', x))) {
x <- paste("X", x, sep = "")
}
print(paste("Generating chart for => ", x))
p <- lmYXLineChart(model$Dataset[,x],
model$Dataset[,model$YDependent],
model$Predictions[,"Predicted Y"],
model$Predictions[,"Residual E"],
x,
model$YDependent)
exportChartAsImg(p, paste(dirToSave, model$YDependent, " by ", x), ".png")
}
}
# Creates a line chart, by ploting y in the y axis and x in the x axis.
# lmYXLineChart(TipoMoradores, PrecoVenda, PrecoPrevisto, ResiduoModelo, "TipoMoradores", "PrecoVenda")
lmYXLineChart <- function(xData, yData, yPredicted, eResid, xLabel, yLabel) {
xName <- xLabel
yName <- yLabel
if (missing(xName)) {
xName <- "X"
}
if (missing(yName)) {
yName <- "Y"
}
chartFrame <- data.frame(xData, yData, yPredicted, eResid)
aggData <- aggregate(chartFrame, by = list(chartFrame$xData), FUN = mean)
lmPlot<- plot_ly(aggData, x = ~aggData$xData, name = xName)
lmPlot <- add_trace(lmPlot, y= ~aggData$yData, name = yName, type = 'scatter', mode = 'lines')
lmPlot <- add_trace(lmPlot, y = ~aggData$yPredicted, name = paste(yName, "(Pred)"), type = 'scatter', mode = 'lines')
lmPlot <- add_trace(lmPlot, y = ~aggData$eResid, name = 'Residual', type = 'scatter', mode = 'markers')
lmPlot <- applyLayout(lmPlot, xName, yName)
returnValue(lmPlot)
}
|
# Single Experiment
In this notebook we run a single experiment and display the estimates of the dynamic effects based on our dynamic DML algorithm. We also display the performance of several alternative benchmark approaches.
## 1. Data Generation from a Markovian Treatment Model
We consider the following DGP:
\begin{align}
X_t =& (\pi'X_{t-1} + 1) \cdot A\, T_{t-1} + B X_{t-1} + \epsilon_t\\
T_t =& \gamma\, T_{t-1} + (1-\gamma) \cdot D X_t + \zeta_t\\
Y_t =& (\sigma' X_{t} + 1) \cdot e\, T_{t} + f' X_t + \eta_t
\end{align}
with $X_0, T_0 = 0$ and $\epsilon_t, \zeta_t, \eta_t$ normal $N(0, \sigma^2)$ r.v.'s. Moreover, $X_t \in R^{n_x}$, $B[:, 0:s_x] \neq 0$ and $B[:, s_x:-1] = 0$, $\gamma\in [0, 1]$, $D[:, 0:s_x] \neq 0$, $D[:, s_x:-1]=0$, $f[0:s_x]\neq 0$, $f[s_x:-1]=0$. We draw a panel of $n\_units$ independent units, each observed for $n\_periods$ periods.
```python
%load_ext autoreload
%autoreload 2
```
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
```python
import numpy as np
from dynamic_panel_dgp import DynamicPanelDGP, LongRangeDynamicPanelDGP
n_units = 400
n_periods = 3
n_treatments = 1
n_x = 100
s_x = 10
s_t = 10
sigma_x = .5
sigma_t = .5
sigma_y = .5
gamma = .0
autoreg = .5
state_effect = .5
conf_str = 6
hetero_strength = 0
hetero_inds = None
#dgp_class = LongRangeDynamicPanelDGP
dgp_class = DynamicPanelDGP
dgp = dgp_class(n_periods, n_treatments, n_x).create_instance(s_x, sigma_x, sigma_y,
conf_str, hetero_strength, hetero_inds,
autoreg, state_effect,
random_seed=39)
```
C:\ProgramData\Anaconda3\lib\site-packages\numba\errors.py:105: UserWarning: Insufficiently recent colorama version found. Numba requires colorama >= 0.3.9
warnings.warn(msg)
```python
Y, T, X, groups = dgp.observational_data(n_units, gamma, s_t, sigma_t, random_seed=1234)
true_effect = dgp.true_effect
```
```python
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.plot(Y, label="Outcome")
plt.plot(T, label="Treatment")
plt.legend()
plt.subplot(1, 2, 2)
for it in range(3):
plt.plot(X[:, it], label="X[{}]".format(it))
plt.legend()
plt.show()
```
### 1.1 True Parameters for Dynamic Effects for 3 Periods
```python
import matplotlib.pyplot as plt
true_effect = true_effect.flatten()
plt.plot(true_effect, 'o')
plt.show()
```
## 2. Dynamic DML
```python
from sklearn.linear_model import LinearRegression, LassoCV, Lasso, MultiTaskLasso, MultiTaskLassoCV
from sklearn.model_selection import GroupKFold
import warnings
warnings.simplefilter('ignore')
np.random.seed(123)
alpha_regs = [1e-4, 1e-3, 1e-2, 5e-2, .1, 1]
lasso_model = lambda : LassoCV(cv=3, alphas=alpha_regs, max_iter=500)
mlasso_model = lambda : MultiTaskLassoCV(cv=3, alphas=alpha_regs, max_iter=500)
```
```python
from panel_dynamic_dml import DynamicPanelDML
est = DynamicPanelDML(model_t=mlasso_model(),
model_y=lasso_model(),
n_cfit_splits=3).fit(Y, T, X, groups)
```
### 2.1 Parameter Recovery and Confidence Intervals
```python
param_hat = est.param
conf_ints = est.param_interval(alpha=.05)
for kappa in range(n_periods):
for t in range(n_treatments):
param_ind = kappa*n_treatments + t
print("Effect Lag={}, T={}: {:.3f} ({:.3f}, {:.6f}), (Truth={:.6f})".format(kappa, t,
param_hat[param_ind],
*conf_ints[param_ind],
true_effect[param_ind]))
```
Effect Lag=0, T=0: 0.777 (0.687, 0.868074), (Truth=0.866165)
Effect Lag=1, T=0: 1.147 (0.919, 1.375321), (Truth=1.200000)
Effect Lag=2, T=0: 0.896 (0.536, 1.257250), (Truth=0.641424)
```python
plt.figure(figsize=(15, 5))
plt.errorbar(np.arange(n_periods*n_treatments)-.04, param_hat, yerr=(conf_ints[:, 1] - param_hat,
param_hat - conf_ints[:, 0]), fmt='o', label='dyn-dml')
plt.errorbar(np.arange(n_periods*n_treatments), true_effect.flatten(), fmt='o', alpha=.6, label='true')
for t in np.arange(1, n_periods):
plt.axvline(x=t * n_treatments - .5, linestyle='--', alpha=.4)
plt.xticks([t * n_treatments - .5 + n_treatments/2 for t in range(n_periods)],
["$\\theta_{}$".format(t) for t in range(n_periods)])
plt.gca().set_xlim([-.5, n_periods*n_treatments - .5])
plt.legend()
plt.show()
```
### 2.2 Benchmark Method Comparison
```python
panelX = X.reshape(-1, n_periods, n_x)
panelT = T.reshape(-1, n_periods, n_treatments)
panelY = Y.reshape(-1, n_periods)
```
#### 2.2.1 Regressing Y on all T
```python
est_lr = LinearRegression().fit(panelT[:, ::-1, :].reshape(-1, n_periods*n_treatments), panelY[:, -1]).coef_
```
#### 2.2.2 Regressing Y on all T and either final or initial States
```python
est_lr_x0 = lasso_model().fit(np.hstack([panelT[:, ::-1, :].reshape(-1, n_periods*n_treatments),
panelX[:, 0, :]]), panelY[:, -1]).coef_[:n_periods*n_treatments]
est_lr_xfinal = lasso_model().fit(np.hstack([panelT[:, ::-1, :].reshape(-1, n_periods*n_treatments),
panelX[:, -1, :]]), panelY[:, -1]).coef_[:n_periods*n_treatments]
```
#### 2.2.3 Performing DML with Y and all T and controlling for either final or initial States
```python
from econml.dml import LinearDMLCateEstimator
dml_model = lambda : LinearDMLCateEstimator(model_y=lasso_model(), model_t=mlasso_model(),
n_splits=3, linear_first_stages=False)
est_dml_x0 = dml_model().fit(panelY[:, -1], T=panelT[:, ::-1, :].reshape(-1, n_periods*n_treatments),
X=None, W=panelX[:, 0, :]).intercept_
est_dml_xfinal = dml_model().fit(panelY[:, -1], T=panelT[:, ::-1, :].reshape(-1, n_periods*n_treatments),
X=None, W=panelX[:, -1, :]).intercept_
```
#### 2.2.4 Running a Direct version of Dynamic DML
Here a direct lasso is performed at each stage, regressing the calibrated outcome on the current period's treatment and state and reading the coefficient on the treatment as the causal effect.
```python
Y_cal = panelY[:, -1].copy()
direct_theta = np.zeros((n_periods, n_treatments))
for t in np.arange(n_periods):
direct_theta[t, :] = lasso_model().fit(np.hstack([panelT[:, n_periods - 1 - t, :],
panelX[:, n_periods - 1 - t, :]]), Y_cal).coef_[:n_treatments]
Y_cal -= np.dot(panelT[:, n_periods - 1 - t, :], direct_theta[t, :])
est_direct = direct_theta.flatten()
```
#### 2.2.5 Plot all estimates
```python
plt.figure(figsize=(15, 5))
plt.errorbar(np.arange(n_periods*n_treatments)-.04, param_hat, yerr=(conf_ints[:, 1] - param_hat,
param_hat - conf_ints[:, 0]), fmt='o', label='dyn-dml')
plt.errorbar(np.arange(n_periods*n_treatments)-.02, est_lr, fmt='o', alpha=.6, label='no-ctrls')
plt.errorbar(np.arange(n_periods*n_treatments), true_effect.flatten(), fmt='*', alpha=.6, label='true')
plt.errorbar(np.arange(n_periods*n_treatments)+.02, est_lr_x0, fmt='o', alpha=.6, label='init-ctrls')
plt.errorbar(np.arange(n_periods*n_treatments)+.04, est_dml_x0, fmt='o', alpha=.6, label='init-ctrls-dml')
plt.errorbar(np.arange(n_periods*n_treatments)+.1, est_lr_xfinal, fmt='o', alpha=.6, label='fin-ctrls')
plt.errorbar(np.arange(n_periods*n_treatments)+.12, est_dml_xfinal, fmt='o', alpha=.6, label='fin-ctrls-dml')
plt.errorbar(np.arange(n_periods*n_treatments)+.14, est_direct, fmt='o', alpha=.6, label='dyn-direct')
for t in np.arange(1, n_periods):
plt.axvline(x=t * n_treatments - .5, linestyle='--', alpha=.4)
plt.xticks([t * n_treatments - .5 + n_treatments/2 for t in range(n_periods)],
["$\\theta_{}$".format(t) for t in range(n_periods)])
plt.gca().set_xlim([-.5, n_periods*n_treatments - .5])
plt.legend()
plt.show()
```
```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import multivariate_normal
id1, id2 = 0, 1
length = 2 * max(est.param_stderr[id1], est.param_stderr[id2])
xlin = np.linspace(param_hat[id1]-length, param_hat[id1] + length, 500)
ylin = np.linspace(param_hat[id2]-length, param_hat[id2] + length, 500)
grX,grY = np.meshgrid(xlin, ylin)
pos = np.array([grX.flatten(), grY.flatten()]).T
rv = multivariate_normal(param_hat[[id1,id2]], est._cov[np.ix_([id1, id2], [id1, id2])]/n_units)
fig = plt.figure(figsize=(4,4))
ax0 = fig.add_subplot(111)
ax0.contourf(grX, grY, rv.pdf(pos).reshape(500,500))
ax0.scatter(true_effect[id1], true_effect[id2])
plt.xlabel("$\\theta_{{{}}}$".format(id1))
plt.ylabel("$\\theta_{{{}}}$".format(id2))
plt.show()
```
## 3. Policy Effect
```python
tau = np.random.binomial(1, .5, size=(n_periods, n_treatments))
true_policy_effect = dgp.static_policy_effect(tau, mc_samples=1000)
policy_effect_hat = est.policy_effect(tau)
policy_ints = est.policy_effect_interval(tau)
print("Policy effect for treatment seq: \n {}\n {:.3f} ({:.3f}, {:.3f}) (truth={:.3f})".format(tau,
policy_effect_hat,
*policy_ints,
true_policy_effect))
```
Policy effect for treatment seq:
[[0]
[1]
[0]]
1.147 (0.919, 1.375) (truth=1.200)
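Note that the drawn policy $\tau = (0, 1, 0)$ treats only in the middle period, so the policy effect reduces to the lag-1 effect $\theta_1$; the estimate and interval above accordingly coincide with the Lag=1 row reported in Section 2.1.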
```python
test_policies = np.random.binomial(1, .5, size=(16, n_periods, n_treatments))
plt.figure(figsize=(15, 15))
for t, tau in enumerate(test_policies):
true_policy_effect = np.dot(true_effect, tau[::-1].flatten())
policy_effect_hat = est.policy_effect(tau)
policy_ints = est.policy_effect_interval(tau)
plt.subplot(4, 4, t + 1)
plt.errorbar([t -.04], [policy_effect_hat], yerr=([policy_ints[1] - policy_effect_hat],
[policy_effect_hat - policy_ints[0]]), fmt='o', label='dyn-dml')
plt.errorbar([t -.02], [np.dot(est_lr, tau[::-1].flatten())], fmt='o', alpha=.6, label='no-ctrls')
plt.errorbar([t], [true_policy_effect], fmt='o', alpha=.6, label='true')
plt.hlines([true_policy_effect], t - .06, t + .14, linestyles='--', alpha=.4)
plt.errorbar([t + .02], [np.dot(est_lr_x0, tau[::-1].flatten())], fmt='o', alpha=.6, label='init-ctrls')
plt.errorbar([t + .04], [np.dot(est_dml_x0, tau[::-1].flatten())], fmt='o', alpha=.6, label='init-ctrls-dml')
plt.errorbar([t + .1], [np.dot(est_lr_xfinal, tau[::-1].flatten())], fmt='o', alpha=.6, label='fin-ctrls')
plt.errorbar([t + .12], [np.dot(est_dml_xfinal, tau[::-1].flatten())], fmt='o', alpha=.6, label='fin-ctrls-dml')
plt.errorbar([t +.14], [np.dot(est_direct, tau[::-1].flatten())], fmt='o', alpha=.6, label='dyn-direct')
plt.title("{}".format(tau.flatten()))
#plt.legend()
plt.tight_layout()
plt.show()
```
```python
# Optimal Contextual Binary Treatment Policy
def adaptive_policy(t, x, period):
return 1.*(dgp.hetero_effect_fn(n_periods - 1 - period, x) > 0)
dgp.adaptive_policy_effect(adaptive_policy)
```
2.707589092135044
```python
est.adaptive_policy_effect(X, groups, adaptive_policy)
```
(2.820942109125225, (2.547666040491572, 3.094218177758878))
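The estimated adaptive policy effect of roughly 2.82, with interval (2.55, 3.09), covers the ground-truth value of about 2.71 computed from the DGP above.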
## 4. Estimation Diagnostics
```python
import matplotlib.pyplot as plt
plt.plot(est.param_stderr)
plt.show()
```
```python
plt.imshow(est._M)
plt.colorbar()
plt.show()
```
```python
plt.imshow(est._Sigma)
plt.colorbar()
plt.show()
```
```python
plt.imshow(est._cov)
plt.colorbar()
plt.show()
```
```python
```
|
C *********************************************************
C * *
C * TEST NUMBER: 02.02.02/02 *
C * TEST TITLE : Application data *
C * *
C * PHIGS Validation Tests, produced by NIST *
C * *
C *********************************************************
COMMON /GLOBNU/ CTLHND, ERRSIG, ERRFIL, IERRCT, UNERR,
1 TESTCT, IFLERR, PASSSW, ERRSW, MAXLIN,
2 CONID, MEMUN, WKID, WTYPE, GLBLUN, INDLUN,
3 DUMINT, DUMRL
INTEGER CTLHND, ERRSIG, ERRFIL, IERRCT, UNERR,
1 TESTCT, IFLERR, PASSSW, ERRSW, MAXLIN,
2 CONID, MEMUN, WKID, WTYPE, GLBLUN, INDLUN,
3 DUMINT(20), ERRIND
REAL DUMRL(20)
COMMON /GLOBCH/ PIDENT, GLBERR, TSTMSG, FUNCID,
1 DUMCH
CHARACTER PIDENT*40, GLBERR*60, TSTMSG*900, FUNCID*80,
1 DUMCH(20)*20
C Declare program-specific variables
INTEGER CELTYP, I, INLEN, INTLEN, INTG, LDR, LDRACT, RELEN,
1 RL, RLLEN, STR, STRLEN, STLEN, STRID
PARAMETER (INLEN = 50, STLEN = 50, RELEN = 50, STRID = 1)
PARAMETER (LDR = 20)
INTEGER INTAR(INLEN), STRARL(STLEN)
INTEGER DRININ(INLEN), DRINSL(STLEN), DROTIN(INLEN), DROTSL(STLEN)
INTEGER ITRIM
REAL RLAR(RELEN)
REAL DRINRL(RELEN), DROTRL(RELEN)
CHARACTER DRINDR(LDR)*80, DROTDR(LDR)*80
CHARACTER DRINST(STLEN)*80, DROTST(STLEN)*80
CALL INITGL ('02.02.02/02')
C open PHIGS
CALL XPOPPH (ERRFIL, MEMUN)
C *** *** *** *** *** Application data *** *** *** *** ***
CALL POPST (STRID)
C dr = data record to hold
C integers: 174, 175, 176
C reals: 17.4, 17.5, 17.6
C strings: "This is a application data test string.", "This is another."
DRININ(1) = 174
DRININ(2) = 175
DRININ(3) = 176
DRINRL(1) = 17.4
DRINRL(2) = 17.5
DRINRL(3) = 17.6
DRINST(1) = 'This is a application data test string.'
DRINST(2) = 'This is another.'
DRINSL(1) = ITRIM(DRINST(1))
DRINSL(2) = ITRIM(DRINST(2))
C set dr
CALL PPREC (3, DRININ, 3, DRINRL, 2, DRINSL, DRINST, LDR,
1 ERRIND, LDRACT, DRINDR)
CALL CHKINQ ('pprec', ERRIND)
C <application data> with dr
CALL PAP (LDRACT, DRINDR)
CALL SETMSG ('4 5', '<Inquire current element type and size> ' //
1 'should return application data as the type of ' //
2 'the created element and the appropriate element ' //
3 'size.')
CALL PQCETS (ERRIND, CELTYP, INTLEN, RLLEN, STRLEN)
CALL IFPF (ERRIND .EQ. 0 .AND.
1 CELTYP .EQ. 68 .AND.
2 INTLEN .EQ. 0 .AND.
3 RLLEN .EQ. 0 .AND.
4 STRLEN .EQ. LDRACT)
CALL SETMSG ('4 6', '<Inquire current element content> should ' //
1 'return the standard representation for ' //
2 'application data.')
CALL PQCECO (INLEN, RELEN, STLEN, ERRIND, INTG, INTAR,
1 RL, RLAR, STR, STRARL, DROTDR)
IF (ERRIND .EQ. 0 .AND.
1 INTG .EQ. 0 .AND.
2 RL .EQ. 0 .AND.
3 STR .EQ. LDRACT) THEN
C OK so far
ELSE
CALL FAIL
CALL INMSG ('Array sizes from PQCECO are incorrect.')
GOTO 777
ENDIF
DO 10 I = 1, LDRACT
IF (STRARL(I) .NE. 80) THEN
CALL FAIL
CALL INMSG ('String length STRARL for PQCECO is incorrect.')
GOTO 777
ENDIF
10 CONTINUE
C unpack DR and compare all 4 arrays
CALL PUREC (LDRACT, DROTDR, INLEN, RELEN, STLEN, ERRIND,
1 INTG, DROTIN, RL, DROTRL, STR, DROTSL, DROTST)
IF (ERRIND .EQ. 0 .AND.
1 INTG .EQ. 3 .AND.
2 RL .EQ. 3 .AND.
3 STR .EQ. 2) THEN
C OK so far
ELSE
CALL FAIL
CALL INMSG ('Array sizes from PUREC are incorrect.')
GOTO 777
ENDIF
IF (DRININ(1) .EQ. DROTIN(1) .AND.
1 DRININ(2) .EQ. DROTIN(2) .AND.
2 DRININ(3) .EQ. DROTIN(3)) THEN
C OK so far
ELSE
CALL FAIL
CALL INMSG ('Integer array from PUREC is incorrect.')
GOTO 777
ENDIF
IF (DRINRL(1) .EQ. DROTRL(1) .AND.
1 DRINRL(2) .EQ. DROTRL(2) .AND.
2 DRINRL(3) .EQ. DROTRL(3)) THEN
C OK so far
ELSE
CALL FAIL
CALL INMSG ('Real array from PUREC is incorrect.')
GOTO 777
ENDIF
IF (DRINSL(1) .EQ. DROTSL(1) .AND.
1 DRINSL(2) .EQ. DROTSL(2)) THEN
C OK so far
ELSE
CALL FAIL
CALL INMSG ('String-length array from PUREC is incorrect.')
GOTO 777
ENDIF
IF (DRINST(1) .EQ. DROTST(1) .AND.
1 DRINST(2) .EQ. DROTST(2)) THEN
C OK so far
ELSE
CALL FAIL
CALL INMSG ('String array from PUREC is incorrect.')
GOTO 777
ENDIF
CALL PASS
777 CONTINUE
CALL ENDIT
END
|
from get_params import get_params
import pickle
import time
import sys
import os
import numpy as np
def merge_distances(params):
# For each shot, this function pools the saved per-frame distances: it takes
# the minimum over frames for euclidean distances and the maximum otherwise.
if params['database'] =='gt_imgs':
BASELINE_RANKING = os.path.join(params['root'], '2_baseline',params['database'],'all_frames' + '.txt')
with open(BASELINE_RANKING,'r') as f:
shot_list = f.readlines()
else:
if 'fullrank' in params['baseline']:
BASELINE_RANKING = os.path.join(params['root'], '2_baseline',params['baseline'],'all_frames.txt')
with open(BASELINE_RANKING,'r') as f:
shot_list = f.readlines()
else:
BASELINE_RANKING = os.path.join(params['root'], '2_baseline',params['baseline'],params['query_name'] + '.rank')
shot_list = pickle.load(open( BASELINE_RANKING, 'rb') )
if params['rerank_bool']:
shot_list = shot_list[0:params['length_ranking']]
DISTANCES_PATH = os.path.join(params['root'], '6_distances',params['net'],params['database'] + params['year'],params['distance_type'])
i = 0
shots = []
frame_list = []
distance_list = []
region_list = []
errors = []
# For all my shots...
for shot in shot_list:
try:
shot = shot.rstrip()
shot_files = os.listdir(os.path.join(DISTANCES_PATH,shot))
shot_distances = []
shot_regions = []
images = []
# Go through all frames in shot
for f in shot_files:
# Load the distances
images.append(f[:-4])
shot_info = open(os.path.join(DISTANCES_PATH,shot,f),'rb')
print shot, f
distances = pickle.load(shot_info)
matching_regions = pickle.load(shot_info)
if params['database'] == 'gt_imgs':
pos = int(float(params['query_name'])) - 9069
else:
pos = int(float(params['query_name'])) - 9099
if 'scores' in params['distance_type']:
pos = pos + 1 # there are 31 classes in the class score layer, and 0 is the background
matching_region = matching_regions[pos,pos*4:pos*4+4]
shot_distances.append(distances[pos])
shot_regions.append(matching_region)
# Pooling over frames
if 'euclidean' in params['distance_type']:
# Take the minimum distance
idx = np.argmin(shot_distances)
else:
idx = np.argmax(shot_distances)
# Select the image that caused it
frame = images[idx]
# And the region within that image
region = shot_regions[idx]
# And the distance to the query:
distance = shot_distances[idx]
if not os.path.isdir(os.path.join(DISTANCES_PATH,params['query_name'])):
os.makedirs(os.path.join(DISTANCES_PATH,params['query_name']))
frame_list.append(frame)
region_list.append(region)
if 'svm' in params['distance_type']:
distance_list.append(distance[0])
else:
distance_list.append(distance)
shots.append(shot)
except Exception:
errors.append(shot)
print "Could not merge for shot", shot
i = i + 1
# Ranking
print errors, np.shape(errors)
RANKING_FILE = os.path.join(params['root'],'7_rankings',params['net'],params['database'] + params['year'],params['distance_type'])
if not os.path.isdir(RANKING_FILE):
os.makedirs(RANKING_FILE)
RANKING_FILE = os.path.join(params['root'],'7_rankings',params['net'],params['database'] + params['year'],params['distance_type'],params['query_name'] + '.rank')
file_to_save = open(RANKING_FILE,'wb')
if 'euclidean' in params['distance_type']:
ranking = np.array(shots)[np.argsort(distance_list)]
frames = np.array(frame_list)[np.argsort(distance_list)]
regions = np.array(region_list)[np.argsort(distance_list)]
distances = np.array(distance_list)[np.argsort(distance_list)]
else:
ranking = np.array(shots)[np.argsort(distance_list)[::-1]]
frames = np.array(frame_list)[np.argsort(distance_list)[::-1]]
regions = np.array(region_list)[np.argsort(distance_list)[::-1]]
distances = np.array(distance_list)[np.argsort(distance_list)[::-1]]
pickle.dump(ranking,file_to_save)
pickle.dump(frames,file_to_save)
pickle.dump(regions,file_to_save)
pickle.dump(distances,file_to_save)
pickle.dump(np.array(distance_list),file_to_save)
file_to_save.close()
# This removes frame distance files
#shutil.rmtree(os.path.join(DISTANCES_PATH,shot))
def ranking_tv(params,eval_file):
RANKING_FILE = os.path.join(params['root'],'7_rankings',params['net'],params['database'] + params['year'],params['query_name'] + '.rank')
f = open(RANKING_FILE,'rb')
ranking = pickle.load(f)
f.close()
i = 0
for shot in ranking:
eval_file.write(params['query_name'] + '\t' + '0' + '\t' + shot + '\t' + str(i) + '\t' + str(params['length_ranking'] - i) + '\t' + 'NII' +'\n' )
i = i + 1
if __name__ == '__main__':
params = get_params()
params['query_name'] = str(sys.argv[1])
ts = time.time()
if params['query_name'] not in ('9100', '9113', '9117'):  # query_name is a string, so compare against strings
merge_distances(params)
print "Merged ranking for query", params['query_name'], 'in', time.time() - ts, 'seconds.'
|
lemma card_nth_roots: assumes "c \<noteq> 0" "n > 0" shows "card {z::complex. z ^ n = c} = n"
|
Communication Sites in SharePoint Online are now rolling out to a tenant near you. This is one of the larger releases from the SharePoint team recently and has been a point of focus since the Virtual Summit earlier this year. I personally have been looking forward to communication sites since hearing they were coming. As the modern experience in SharePoint Online continued to expand with team sites and group sites, it felt like the publishing world was being left behind. Knowing that Microsoft is concentrating not only on the collaboration side of SharePoint but also on the communication side shows the future keeps looking better. SharePoint is already the king of corporate intranets, and this release will make intranets easier to build and, hopefully, much better looking using the modern experience.
What is a Communication Site in SharePoint Online?
Below is a slide from one of Mark Kashman's recent slideshares about where communication sites fit in compared to team sites.
The primary way to create a communication site is through the + Create Site button on the SharePoint landing page.
Once you click Create site, you will be prompted for the type of site you want to create. In the past, this would only allow you to create a team site; if you choose to create a team site, you get the same prompt as before. Now we also have what lies behind door number 2: the new communication site creation process.
After you click create communication site you get a new site creation page. This page includes some cool new areas.
Starting on the right you have your basic site name, address, and description.
The site address will auto populate but if you click the little pencil you can edit it. A good thing to notice here is that the sites are created under the /sites managed path. The site description will fill in the same site description as in classic sites.
Moving down, we see two new things: site classification and usage guidelines.
Interestingly enough, these are the options that are configurable within Azure AD for Office 365 Groups management.
And last but not least we have a new choose your design section on the left.
As of initial release we have 2 preset designs of Topic and Showcase and then a blank design.
I really like the idea of preset designs. I hope this is an area that will be opened up to allow new designs that you create to be added as communication site design templates.
The Topic and Showcase site designs are easy starting points to meet common needs and begin guiding page/content creation which is always a struggle for intranets. Both the Topic and the Showcase site design come preconfigured with web parts on the page.
I am not sure of the technical reason for this, but if you don't have the ability to create an Office 365 Group, you won't be able to create a communication site. Even though you need this permission, an Office 365 Group is NOT created.
After I clicked create, the actual process of creating the site was ridiculously fast. There is still quite a bit of work to do to personalize a site, but the new web parts such as the hero and stream web parts are great starting points for content delivery.
I assume this will be resolved when the new admin portal comes out. For now, if you want to see a list of your communication sites, you will have to use PowerShell.
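As a sketch (assuming the SharePoint Online Management Shell is installed; the tenant URL below is a placeholder), communication sites are built on the SITEPAGEPUBLISHING#0 template, so something like the following should list them:
Connect-SPOService -Url https://yourtenant-admin.sharepoint.com
Get-SPOSite -Template SITEPAGEPUBLISHING#0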
There is only one navigation collection shared by the top and left navigation. It can be edited either in a library or on a site page; you can edit directly on these pages by clicking the edit link next to the navigation.
Here is some more support information for updating navigation. If you change the navigation it will be reflected on both the site pages and within the libraries and list pages.
With the release of communication sites we finally get more than single column page layouts. Personally I didn't think it was possible to create a super useful page in the modern framework without using more than 1 column so I think this is a huge improvement. This is also not just for communication sites but you can now have multi column layouts on all modern sites.
Now if you are on a communication site and want to create a new page, the easiest way is through the + New option right below top navigation.
This is the kind of flexibility we have been needing for building pages in SharePoint Online within the modern experience.
What else and what's next?
Communication sites are being rolled out to first release users and then to full first release tenants, with worldwide rollout targeted for August 2017. All of this includes the new web parts and section layouts.
The publishing infrastructure continues to be supported.
Communication sites can support external users as long as the SharePoint site is configured to allow external sharing.
Mobile apps are being updated to support communication sites.
Join a communication sites AMA on Wednesday, June 28th 2017 from 9-10 a.m. PDT.
|
%--------------------------------------------------------------------
\section{Maximum Likelihood Object/Shadow Discrimination}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Introduction}
\begin{itemize}
\item Detect shadows using \alert{maximum likelihood} (ML) estimation
based on color information.
\item Estimate the joint distribution over the difference in the
HSV color space between pixels in the current frame and the
corresponding pixels in a background model, conditional on
whether the pixel is an object pixel or a shadow pixel.
\item Use the ML principle at run time to
classify each foreground pixel as either shadow or object given
the estimated model.
\end{itemize}
\end{frame}
%--------------------------------------------------------------------
\ifnum\short=0
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Related Work}
Chromaticity and luminance are not orthogonal in the RGB color
space, but lighting differences can be controlled for in the
\alert{normalized} RGB color space.
\bigskip
Miki{\'c} et al.\ (2000) observe that in the normalized RGB color space,
shadow pixels tend to be more blue and less red than illuminated
pixels. They apply a probabilistic model based on the normalized
red and blue features to classify shadow pixels in traffic scenes.
\bigskip
One well-known problem with the normalized RGB space
is that normalization of pixels with low intensity results
in unstable chromatic components (Kender; 1976).
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Related Work (cont.)}
Cucchiara et al.\ (2001) and Chen et al.\ (2008) use an HSV
color-based method (a deterministic nonmodel-based method)
to avoid these concerns, based on the assumption that
only the intensity of the shadowed area will change significantly.
\[
SP_t (x,y) = \left\{
\begin{array}{ll}
1 & {\rm if} \; \alpha \le \frac{{I_t^V (x,y)}}{{B_t^V (x,y)}} \le \beta\\
& \;\;\; \wedge (I_t^S (x,y) - B_t^S (x,y)) \le T_S \\
& \;\;\; \wedge \left| {I_t^H (x,y) - B_t^H (x,y)} \right| \le T_H\\ \\
0 & {\rm otherwise}, \\
\end{array} \right.
\]
where $SP_t(x,y)$ is the resulting binary mask for shadows at each
pixel $(x,y)$ at time $t$. $I_t^H$, $I_t^S$, $I_t^V$, $B_t^H$,
$B_t^S$, and $B_t^V$ are the H, S, and V components of foreground
pixel $I_t(x,y)$ and background pixel $B_t(x, y)$ at pixel $(x,y)$ at
time $t$, respectively. They prevent foreground pixels from being
classified as shadow pixels by setting two thresholds, $0 < \alpha <
\beta < 1$. The four thresholds $\alpha$, $\beta$, $T_S$, and $T_H$
are empirically determined.
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Related Work (cont.)}
Some researchers have investigated color spaces besides RGB and
HSV.
\bigskip
Blauensteiner et al.\ (2006) use an ``improved''
hue, luminance, and saturation (IHLS) color space for shadow detection
to deal with the issue of unstable hue at low saturation by modeling
the relationship between them.
\bigskip
Another alternative color space is YUV. Some applications such as
television and videoconferencing use the YUV color space natively, and
since transformation from YUV to HSV is time-consuming,
Schreer~et~al.~(2002) operate in the YUV color space directly,
developing a fast shadow detection algorithm based on approximated
changes of hue and saturation in the YUV color space.
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Related Work (cont.)}
Some work uses texture-based methods such as the normalized
cross-correlation (NCC) technique. This method detects
shadows based on the assumption that the intensity of shadows
is proportional to the incident light, so shadow pixels should
simply be darker than the corresponding background pixels.
\bigskip
However, the texture-based method tends to misclassify foreground
pixels as shadow pixels when the foreground region has a similar
texture to the corresponding background region.
\end{frame}
%--------------------------------------------------------------------
\else
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Related Work}
Chromaticity and luminance are not orthogonal in the RGB color
space, but lighting differences can be controlled for in the
{\em normalized} RGB color space.
However, normalization of pixels with low intensity results
in unstable chromatic components.
\bigskip
The HSV color-based method (a deterministic nonmodel-based method)
detects shadows based on the assumption that
only the intensity of the shadowed area will change significantly.
\bigskip
Texture-based methods tend to misclassify foreground
pixels as shadow pixels when the foreground region has a similar
texture to the corresponding background region.
\end{frame}
\fi
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Offline and Online Phases}
We divide our method into two phases:
\begin{enumerate}
\item Offline phase:
\begin{enumerate}
\item Construct a background model from the first few frames;
\item Extract foreground on the remaining frames;
\item Manually label the extracted pixels as either object or
shadow pixels;
\item Construct a joint probability model over the difference in the
HSV color space between pixels in the current frame and the
corresponding pixels in the background model, conditional on
whether the pixel is an object pixel or a shadow pixel.
\end{enumerate}
\item Online phase:
\begin{enumerate}
\item Perform the same background modeling and foreground extraction
procedure;
\item Classify foreground pixels as either shadow or object using
the maximum likelihood approach.
\end{enumerate}
\end{enumerate}
\end{frame}
%--------------------------------------------------------------------
\ifnum\short=0
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Methodology}
\begin{enumerate}
\item Global Motion Detection
\item Foreground Extraction
\item Maximum Likelihood Classification of Foreground Pixels
\end{enumerate}
\end{frame}
\fi
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Maximum Likelihood Classification of Foreground Pixels}
In the \alert{offline} phase:
\begin{itemize}
\item After foreground extraction, we manually label pixels
as either shadow or object.
\item We then observe the distribution over the difference in
hue $(H_\text{diff})$, saturation $(S_\text{diff} )$, and
value $(V_\text{diff} )$ components in the HSV color space
between pixels in the current frame and the corresponding
pixels in the background model.
\end{itemize}
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Distributions over the Differences in HSV
components}
\begin{figure}
\centering
\subfloat[]{\includegraphics[scale=0.12]{figures/foreground_diff_h.png}}
\hspace{0.05cm}
\subfloat[]{\includegraphics[scale=0.12]{figures/foreground_diff_s.png}}
\hspace{0.05cm}
\subfloat[]{\includegraphics[scale=0.12]{figures/foreground_diff_v.png}}
\caption{Example distributions over the difference in (a) hue, (b)
saturation, and (c) value components for {\bf true object pixels},
extracted from our hallway dataset.}
\label{fig:foreground-distribution}
\end{figure}
\vspace{-0.3in}
\begin{figure}
\centering
\subfloat[]{\includegraphics[scale=0.12]{figures/shadow_diff_h.png}}
\hspace{0.05cm}
\subfloat[]{\includegraphics[scale=0.12]{figures/shadow_diff_s.png}}
\hspace{0.05cm}
\subfloat[]{\includegraphics[scale=0.12]{figures/shadow_diff_v.png}}
\caption{Example distributions over the difference in (a) hue, (b)
saturation, and (c) value components for {\bf shadow pixels},
extracted from our hallway dataset.}
\label{fig:shadow-distribution}
\end{figure}
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Measurement Likelihoods for Shadow and Object Pixels}
We define the probability of measurement for pixel $(x, y)$ given its
assignment as follows.
\begin{equation*}
\label{eq:shadow-measurement}
\begin{array}{ccl}
P(M_{\text{xy}} \mid A_{\text{xy}} = \text{sh})
& = & P(H_{\text{diff}} \mid A_{\text{xy}} = \text{sh}) \times \\
& & P(S_{\text{diff}} \mid A_{\text{xy}} = \text{sh}) \times \\
& & P(V_{\text{diff}} \mid A_{\text{xy}} = \text{sh}),
\end{array}
\end{equation*}
where $M_{xy}$ is a tuple containing the HSV value for pixel $(x,y)$
in the current image as well as the HSV value for pixel $(x,y)$ in the
background model for pixel $(x,y)$, and $A_{xy}$ is the assignment of
pixel $(x,y)$ as object or shadow.
Similarly, the probability of measurement given its assignment
for object pixels can be computed as follows.
\begin{equation*}
\label{eq:foreground-measurement}
\begin{array}{ccl}
P(M_{\text{xy}} \mid A_{\text{xy}} = \text{obj})
& = & P(H_{\text{diff}} \mid A_{\text{xy}} = \text{obj}) \times \\
& & P(S_{\text{diff}} \mid A_{\text{xy}} = \text{obj}) \times \\
& & P(V_{\text{diff}} \mid A_{\text{xy}} = \text{obj})
\end{array}
\end{equation*}
Here ``sh'' stands for shadow and ``obj'' stands for object.
\end{frame}
%--------------------------------------------------------------------
\ifnum\short=0
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Measurement Likelihood for Shadow Pixels}
To make the problem tractable, we assume that the distributions over
the components on the right hand side in the previous equation
follow Gaussian distributions, defined as follows.
\begin{equation*}
P(H_{\text{diff}} \mid A_{\text{xy}} = \text{sh}) =
{\cal N}(H_{\text{diff}} ;
\mu_{h_{\text{diff}}^{\text{sh}}},
\sigma^2_{h_{\text{diff}}^{\text{sh}}})
\end{equation*}
\begin{equation*}
P(S_{\text{diff}} \mid A_{\text{xy}} = \text{sh}) =
{\cal N}(S_{\text{diff}} ;
\mu_{s_{\text{diff}}^{\text{sh}}},
\sigma^2_{s_{\text{diff}}^{\text{sh}}})
\end{equation*}
\begin{equation*}
P(V_{\text{diff}} \mid A_{\text{xy}} = \text{sh}) =
{\cal N}(V_{\text{diff}} ;
\mu_{v_{\text{diff}}^{\text{sh}}},
\sigma^2_{v_{\text{diff}}^{\text{sh}}})
\end{equation*}
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Measurement Likelihood for Object Pixels}
Similarly, the probability of measurement given its assignment
for object pixels can be computed as follows.
\begin{equation*}
\label{eq:foreground-measurement}
\begin{array}{ccl}
P(M_{\text{xy}} \mid A_{\text{xy}} = \text{obj})
& = & P(H_{\text{diff}} \mid A_{\text{xy}} = \text{obj}) \times \\
& & P(S_{\text{diff}} \mid A_{\text{xy}} = \text{obj}) \times \\
& & P(V_{\text{diff}} \mid A_{\text{xy}} = \text{obj})
\end{array}
\end{equation*}
Here ``obj'' stands for object.
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Measurement Likelihood for Object Pixels}
As with the shadow pixel distributions, we assume Gaussian distributions
over the components
on the right hand side in the previous equation, as follows.
\begin{equation*}
P(H_{\text{diff}} \mid A_{\text{xy}} = \text{obj}) =
{\cal N}(H_{\text{diff}} ;
\mu_{h_{\text{diff}}^{\text{obj}}},
\sigma^2_{h_{\text{diff}}^{\text{obj}}})
\end{equation*}
\begin{equation*}
P(S_{\text{diff}} \mid A_{\text{xy}} = \text{obj}) =
{\cal N}(S_{\text{diff}} ;
\mu_{s_{\text{diff}}^{\text{obj}}},
\sigma^2_{s_{\text{diff}}^{\text{obj}}})
\end{equation*}
\begin{equation*}
P(V_{\text{diff}} \mid A_{\text{xy}} = \text{obj}) =
{\cal N}(V_{\text{diff}} ;
\mu_{v_{\text{diff}}^{\text{obj}}},
\sigma^2_{v_{\text{diff}}^{\text{obj}}})
\end{equation*}
\end{frame}
\fi
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Model Parameters}
We estimate the parameters \\
\medskip
$\Theta = \{
\mu_{h_{\text{diff}}^{\text{sh}}},
\sigma^2_{h_{\text{diff}}^{\text{sh}}},
\mu_{s_{\text{diff}}^{\text{sh}}},
\sigma^2_{s_{\text{diff}}^{\text{sh}}},
\mu_{v_{\text{diff}}^{\text{sh}}},
\sigma^2_{v_{\text{diff}}^{\text{sh}}},
\mu_{h_{\text{diff}}^{\text{obj}}},
\sigma^2_{h_{\text{diff}}^{\text{obj}}},
\mu_{s_{\text{diff}}^{\text{obj}}},
\sigma^2_{s_{\text{diff}}^{\text{obj}}},
\mu_{v_{\text{diff}}^{\text{obj}}},
\sigma^2_{v_{\text{diff}}^{\text{obj}}} \}$ \\
\medskip
directly from training data during the offline phase.
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Pixel Classification}
In the \alert{online} phase:
\medskip
Given the model estimate $\Theta$, we use the
maximum likelihood approach to classify a pixel as a shadow pixel if
\begin{equation*}
\label{eq:ml}
P(M_{\text{xy}} \mid A_{xy}=\text{sh} ; \Theta ) >
P(M_{\text{xy}} \mid A_{xy}=\text{obj} ; \Theta ).
\end{equation*}
Otherwise, we classify the pixel as an object pixel.
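\medskip
Equivalently, since each class-conditional likelihood above is a
product of three univariate Gaussians, taking logarithms reduces the
test to comparing variance-normalized squared differences:
\begin{equation*}
\sum_{c \in \{H, S, V\}}
\left[
\frac{\left( c_{\text{diff}} - \mu_{c_{\text{diff}}^{\text{sh}}} \right)^2}
{2 \sigma^2_{c_{\text{diff}}^{\text{sh}}}}
+ \ln \sigma_{c_{\text{diff}}^{\text{sh}}}
\right]
<
\sum_{c \in \{H, S, V\}}
\left[
\frac{\left( c_{\text{diff}} - \mu_{c_{\text{diff}}^{\text{obj}}} \right)^2}
{2 \sigma^2_{c_{\text{diff}}^{\text{obj}}}}
+ \ln \sigma_{c_{\text{diff}}^{\text{obj}}}
\right].
\end{equation*}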
\bigskip
We could add the prior probabilities to the shadow model
and the object model in the equation above to obtain a maximum a
posteriori classifier. In our experiments, we assume equal priors.
\end{frame}
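%--------------------------------------------------------------------
\begin{frame}[fragile]
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Pixel Classification (Illustrative Sketch)}
A minimal Python sketch of the online decision rule above, assuming
equal priors and the per-channel Gaussian parameters estimated
offline. Names are illustrative, not our actual implementation.
\begin{verbatim}
from scipy.stats import norm

def classify_pixel(diffs, theta):
    # diffs = (H_diff, S_diff, V_diff) for one foreground pixel
    def loglik(c):
        mu, sd = theta[c]["mu"], theta[c]["sd"]
        return sum(norm.logpdf(d, m, s)
                   for d, m, s in zip(diffs, mu, sd))
    return "sh" if loglik("sh") > loglik("obj") else "obj"
\end{verbatim}
\end{frame}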
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Experimental Results}
We present experimental results for
\begin{enumerate}
\item our proposed maximum likelihood (ML) classification method;
\item the deterministic nonmodel-based (DNM) method;
\item the normalized cross-correlation (NCC) method.
\end{enumerate}
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Experimental Results (cont.)}
We performed the experiments on three video sequences. The figure
below shows sample frames from the three video sequences.
\begin{figure}
\centering
\subfloat[]{\includegraphics[scale=0.25]{figures/csim_hallway_benchmark.png}}
\hspace{0.05cm}
\subfloat[]{\includegraphics[scale=0.25]{figures/aton_lab_benchmark.png}}
\hspace{0.05cm}
\subfloat[]{\includegraphics[scale=0.25]{figures/aton_highway1_benchmark.png}}
\caption{Sample frames from the (a) Hallway, (b) Laboratory, and (c)
Highway video sequences}
\label{fig:benchmark}
\end{figure}
The {\em Hallway} sequence is our own dataset. The {\em Laboratory} and
{\em Highway} sequences were first introduced in Prati et al.\ (2003).
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Experimental Results (cont.)}
To evaluate the performance of the methods, we compute the two
metrics proposed by Prati et al.\ (2003), defining the shadow
detection rate $\eta$ and the shadow discrimination rate $\xi$
as follows:
\[
\eta = \frac{TP_{\text{sh}}}{TP_{\text{sh}} + FN_{\text{sh}}};\;
\xi = \frac{TP_{\text{obj}}}{TP_{\text{obj}} + FN_{\text{obj}}},
\]
where the subscripts ``sh'' and ``obj'' stand for shadow and object,
respectively. $TP$ and $FN$ denote the numbers of true positive (i.e.,
shadow or object pixels correctly identified) and false negative
(i.e., shadow or object pixels classified incorrectly) pixels.
\end{frame}
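%--------------------------------------------------------------------
\begin{frame}[fragile]
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Evaluation Metrics (Illustrative Sketch)}
A small Python sketch of $\eta$ and $\xi$ as defined above, assuming
per-pixel ground-truth and predicted label arrays (names are
illustrative):
\begin{verbatim}
import numpy as np

def shadow_metrics(gt, pred):
    # gt, pred: label arrays with entries "sh" or "obj"
    tp_sh  = np.sum((gt == "sh")  & (pred == "sh"))
    fn_sh  = np.sum((gt == "sh")  & (pred == "obj"))
    tp_obj = np.sum((gt == "obj") & (pred == "obj"))
    fn_obj = np.sum((gt == "obj") & (pred == "sh"))
    eta = tp_sh / (tp_sh + fn_sh)     # shadow detection rate
    xi  = tp_obj / (tp_obj + fn_obj)  # shadow discrimination rate
    return eta, xi
\end{verbatim}
\end{frame}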
%--------------------------------------------------------------------
\ifnum\short=0
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Experimental Results (cont.)}
More information about $\eta$ and $\xi$:
\begin{itemize}
\item $\eta$: the proportion of shadow pixels
correctly detected
\item $\xi$: the proportion of object pixels
correctly detected.
\end{itemize}
$\eta$ and $\xi$ can also be thought of as the true
positive rate (sensitivity) and true negative rate (specificity) for
detecting shadows, respectively.
\bigskip
In the experiments, we also compare the methods using two additional
metrics: precision and $F_1$ score.
\end{frame}
\fi
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Experimental Results (cont.)}
Ground truth data:
\begin{itemize}
\item Ground truth for the {\em Laboratory} and {\em Highway} video
  sequences is provided in Sanin et al.\ (2012).
\item A standard Gaussian mixture model (GMM) background model is
  used to extract foreground pixels.
\item For our {\em Hallway} video sequence, we used the previously
mentioned extended version of the GMM background model for
foreground extraction, but the results were not substantially
different from those of the standard GMM.
\end{itemize}
\end{frame}
%--------------------------------------------------------------------
\ifnum\short=0
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Experimental Results (cont.)}
We performed five-fold cross validation to find the best parameter
settings for each of the three methods on each of the three data
sets.
\bigskip
We varied the parameter settings for each method on each video dataset
and selected the setting that maximized the $F_1$ score (a measure
combining both precision and recall) over the cross-validation test
sets.
\end{frame}
\fi
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Experimental Results (cont.)}
\begin{table}
\caption{Comparison of shadow detection results between the
proposed, DNM, and NCC methods.}
\label{tab:comparison-results}
\vspace{-0.2in}
\centering
\begin{figure}
\includegraphics[width=4.85in]{figures/shadow-detection-results.png}
\end{figure}
\end{table}
From the table, our proposed method
\begin{itemize}
\item achieves the top performance for shadow detection
rate $\eta$ and $F_1$ score in every case;
\item obtains good shadow discrimination rate $\xi$ in every case.
\end{itemize}
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Experimental Results (cont.)}
The DNM method has stable performance for all three videos, with good
performance for all metrics.
\bigskip
Both the DNM method and our proposed method suffer from the problem that
the object colors can be confused with the background color. We clearly
see this situation in the {\em Highway} sequence (third row in the
next figure).
\bigskip
The NCC method achieves the best shadow discrimination rate $\xi$ and
precision because it classifies nearly every pixel as object, as can
be seen in the next figure.
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Experimental Results (cont.)}
\begin{figure}
\centering
\includegraphics[width=4in]{figures/shadow-detection-results-for-an-arbitrary-frame.png}
\vspace{-0.1in}
\caption{Results for an arbitrary frame in each video sequence. The
first column contains an example original frame for each video
sequence. The second column shows the ground truth for that frame,
where object pixels are labeled in white and shadow pixels are
labeled in gray. The remaining columns show shadow detection
results for each method, where pixels labeled as object are shown in
green and pixels labeled as shadow are shown in red.}
\label{fig:results-for-arbitrary-frame}
\end{figure}
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Conclusion}
\begin{itemize}
\item We propose a new method for detecting shadows using a simple
maximum likelihood approach based on color information;
\item We extend the deterministic nonmodel-based approach, designing
a parametric statistical model-based approach;
\item Our experimental results show that our proposed method is
  highly effective and outperforms the standard methods on three
  different real-world video surveillance data sets.
\end{itemize}
\end{frame}
%--------------------------------------------------------------------
\begin{frame}
\frametitle{ML Object/Shadow Discrimination}
\framesubtitle{Discussion}
\begin{itemize}
\item In some cases, our method misdetects shadow pixels when the
  object color is similar to the background color or the background
  texture is unclear in shadow regions;
\item Incorporating geometric or shadow region shape priors would
potentially improve the detection and discrimination rates;
\item We plan to address these issues, further explore the feasibility
of combining our method with other useful shadow features, and
integrate our shadow detection module with a real-world open source
video surveillance system.
\end{itemize}
\end{frame}
|
theory ProcStrongNorm
imports Main
begin
text {*
Based on HOL/Proofs/Lambda/StrongNorm,
a formalization by Stefan Berghofer.
*}
declare [[syntax_ambiguity_warning = false]]
subsection {* Lambda-terms in de Bruijn notation and substitution *}
(*
datatype program_rep =
Assign variable_untyped expression_untyped
| Sample variable_untyped expression_distr
| Seq program_rep program_rep
| Skip
| IfTE expression_untyped program_rep program_rep
| While expression_untyped program_rep
| CallProc variable_untyped procedure_rep "expression_untyped list"
and procedure_rep =
Proc program_rep "variable_untyped list" expression_untyped
| ProcRef nat (* deBruijn index *)
| ProcAbs procedure_rep
| ProcAppl procedure_rep procedure_rep
*)
typedecl tmp
datatype program_rep =
Skip
| CallProc tmp procedure_rep tmp
and procedure_rep =
Proc program_rep tmp tmp
| ProcRef nat
| ProcAppl procedure_rep procedure_rep (infixl "\<degree>" 200)
| ProcAbs procedure_rep
type_synonym dB = procedure_rep
abbreviation "Var == ProcRef"
abbreviation "App == ProcAppl"
abbreviation "Abs == ProcAbs"
primrec
lift' :: "[program_rep, nat] => program_rep" and
lift :: "[dB, nat] => dB"
where
"lift (Var i) k = (if i < k then Var i else Var (i + 1))"
| "lift (s \<degree> t) k = lift s k \<degree> lift t k"
| "lift (Abs s) k = Abs (lift s (k + 1))"
| "lift (Proc body a r) k = Proc (lift' body k) a r"
| "lift' Skip k = Skip"
| "lift' (CallProc x p y) k = CallProc x (lift p k) y"
primrec
subst' :: "[program_rep, dB, nat] => program_rep" ("_[_'/''_]" [300, 0, 0] 300) and
subst :: "[dB, dB, nat] => dB" ("_[_'/_]" [300, 0, 0] 300)
where (* FIXME base names *)
subst_Var: "(Var i)[s/k] =
(if k < i then Var (i - 1) else if i = k then s else Var i)"
| subst_App: "(t \<degree> u)[s/k] = t[s/k] \<degree> u[s/k]"
| subst_Abs: "(Abs t)[s/k] = Abs (t[lift s 0 / k+1])"
| subst_Proc: "(Proc body x y)[s/k] = Proc (body[s/'k]) x y"
| subst_Skip: "Skip[s/'k] = Skip"
| subst_CallProc: "(CallProc x p y)[s/'k] = CallProc x (p[s/k]) y"
declare subst_Var [simp del]
text {* Optimized versions of @{term subst} and @{term lift}. *}
primrec
liftn' :: "[nat, program_rep, nat] => program_rep" and
liftn :: "[nat, dB, nat] => dB"
where
"liftn n (Var i) k = (if i < k then Var i else Var (i + n))"
| "liftn n (s \<degree> t) k = liftn n s k \<degree> liftn n t k"
| "liftn n (Abs s) k = Abs (liftn n s (k + 1))"
| "liftn n (Proc body x y) k = Proc (liftn' n body k) x y"
| "liftn' n Skip k = Skip"
| "liftn' n (CallProc x p y) k = CallProc x (liftn n p k) y"
primrec
substn' :: "[program_rep, dB, nat] => program_rep" and
substn :: "[dB, dB, nat] => dB"
where
"substn (Var i) s k =
(if k < i then Var (i - 1) else if i = k then liftn k s 0 else Var i)"
| "substn (t \<degree> u) s k = substn t s k \<degree> substn u s k"
| "substn (Abs t) s k = Abs (substn t s (k + 1))"
| "substn (Proc p x y) s k = Proc (substn' p s k) x y"
| "substn' Skip s k = Skip"
| "substn' (CallProc x p y) s k = CallProc x (substn p s k) y"
subsection {* Beta-reduction *}
inductive
beta' :: "[program_rep, program_rep] => bool" (infixl "\<longrightarrow>\<^sub>\<beta>" 50) and
beta :: "[dB, dB] => bool" (infixl "\<rightarrow>\<^sub>\<beta>" 50)
where
beta [simp, intro!]: "Abs s \<degree> t \<rightarrow>\<^sub>\<beta> s[t/0]"
| appL [simp, intro!]: "s \<rightarrow>\<^sub>\<beta> t ==> s \<degree> u \<rightarrow>\<^sub>\<beta> t \<degree> u"
| appR [simp, intro!]: "s \<rightarrow>\<^sub>\<beta> t ==> u \<degree> s \<rightarrow>\<^sub>\<beta> u \<degree> t"
| abs [simp, intro!]: "s \<rightarrow>\<^sub>\<beta> t ==> Abs s \<rightarrow>\<^sub>\<beta> Abs t"
| [simp, intro!]: "s \<longrightarrow>\<^sub>\<beta> t ==> Proc s x y \<rightarrow>\<^sub>\<beta> Proc t x y"
| [simp, intro!]: "s \<rightarrow>\<^sub>\<beta> t \<Longrightarrow> CallProc x s y \<longrightarrow>\<^sub>\<beta> CallProc x t y"
abbreviation
beta_reds :: "[dB, dB] => bool" (infixl "->>" 50) where
"s ->> t == beta^** s t"
abbreviation
beta_reds' :: "[program_rep, program_rep] => bool" (infixl "-->>" 50) where
"s -->> t == beta'^** s t"
notation (latex)
beta_reds (infixl "\<rightarrow>\<^sub>\<beta>\<^sup>*" 50) and
beta_reds' (infixl "\<longrightarrow>\<^sub>\<beta>\<^sup>*" 50)
inductive_cases beta_cases [elim!]:
"Var i \<rightarrow>\<^sub>\<beta> t"
"Abs r \<rightarrow>\<^sub>\<beta> s"
"s \<degree> t \<rightarrow>\<^sub>\<beta> u"
"Proc p x y \<rightarrow>\<^sub>\<beta> u"
"Skip \<longrightarrow>\<^sub>\<beta> u"
declare if_not_P [simp] not_less_eq [simp]
\<comment> \<open>don't add @{text "r_into_rtrancl[intro!]"}\<close>
subsection {* Congruence rules *}
lemma rtrancl_beta_Abs [intro!]:
"s \<rightarrow>\<^sub>\<beta>\<^sup>* s' ==> Abs s \<rightarrow>\<^sub>\<beta>\<^sup>* Abs s'"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+
lemma rtrancl_beta_Proc [intro!]:
"s \<longrightarrow>\<^sub>\<beta>\<^sup>* s' ==> Proc s x y \<rightarrow>\<^sub>\<beta>\<^sup>* Proc s' x y"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+
lemma rtrancl_beta_CallProc [intro!]:
"s \<rightarrow>\<^sub>\<beta>\<^sup>* s' ==> CallProc x s y \<longrightarrow>\<^sub>\<beta>\<^sup>* CallProc x s' y"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+
lemma rtrancl_beta_AppL:
"s \<rightarrow>\<^sub>\<beta>\<^sup>* s' ==> s \<degree> t \<rightarrow>\<^sub>\<beta>\<^sup>* s' \<degree> t"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+
lemma rtrancl_beta_AppR:
"t \<rightarrow>\<^sub>\<beta>\<^sup>* t' ==> s \<degree> t \<rightarrow>\<^sub>\<beta>\<^sup>* s \<degree> t'"
by (induct set: rtranclp) (blast intro: rtranclp.rtrancl_into_rtrancl)+
lemma rtrancl_beta_App [intro]:
"[| s \<rightarrow>\<^sub>\<beta>\<^sup>* s'; t \<rightarrow>\<^sub>\<beta>\<^sup>* t' |] ==> s \<degree> t \<rightarrow>\<^sub>\<beta>\<^sup>* s' \<degree> t'"
by (blast intro!: rtrancl_beta_AppL rtrancl_beta_AppR intro: rtranclp_trans)
subsection {* Substitution-lemmas *}
lemma subst_eq [simp]: "(Var k)[u/k] = u"
by (simp add: subst_Var)
lemma subst_gt [simp]: "i < j ==> (Var j)[u/i] = Var (j - 1)"
by (simp add: subst_Var)
lemma subst_lt [simp]: "j < i ==> (Var j)[u/i] = Var j"
by (simp add: subst_Var)
lemma lift_lift:
shows "i < k + 1 \<Longrightarrow> lift' (lift' p i) (Suc k) = lift' (lift' p k) i"
and "i < k + 1 \<Longrightarrow> lift (lift t i) (Suc k) = lift (lift t k) i"
by (induct p and t arbitrary: i k and i k) auto
lemma lift_subst [simp]:
shows "j < i + 1 \<Longrightarrow> lift' (p[s/'j]) i = (lift' p (i + 1)) [lift s i /' j]"
and "j < i + 1 \<Longrightarrow> lift (t[s/j]) i = (lift t (i + 1)) [lift s i / j]"
by (induct p and t arbitrary: i j s and i j s)
(simp_all add: diff_Suc subst_Var lift_lift split: nat.split)
lemma lift_subst_lt:
shows "i < j + 1 \<Longrightarrow> lift' (p[s/'j]) i = (lift' p i) [lift s i /' j + 1]"
and "i < j + 1 \<Longrightarrow> lift (t[s/j]) i = (lift t i) [lift s i / j + 1]"
by (induct p and t arbitrary: i j s and i j s) (simp_all add: subst_Var lift_lift)
lemma subst_lift [simp]:
shows "(lift' p k)[s/'k] = p"
and "(lift t k)[s/k] = t"
by (induct p and t arbitrary: k s and k s) simp_all
lemma subst_subst:
shows "i < j + 1 \<Longrightarrow> p[lift v i /' Suc j][u[v/j]/'i] = p[u/'i][v/'j]"
and "i < j + 1 \<Longrightarrow> t[lift v i / Suc j][u[v/j]/i] = t[u/i][v/j]"
by (induct p and t arbitrary: i j u v and i j u v)
(simp_all add: diff_Suc subst_Var lift_lift [symmetric] lift_subst_lt
split: nat.split)
subsection {* Equivalence proof for optimized substitution *}
lemma liftn_0 [simp]: shows "liftn' 0 p k = p" and "liftn 0 t k = t"
by (induct p and t arbitrary: k and k) (simp_all add: subst_Var)
lemma liftn_lift [simp]:
shows "liftn' (Suc n) p k = lift' (liftn' n p k) k"
and "liftn (Suc n) t k = lift (liftn n t k) k"
by (induct p and t arbitrary: k and k) (simp_all add: subst_Var)
lemma substn_subst_n [simp]:
shows "substn' p s n = p[liftn n s 0 /' n]"
and "substn t s n = t[liftn n s 0 / n]"
by (induct p and t arbitrary: n and n) (simp_all add: subst_Var)
theorem substn_subst_0:
shows "substn' p s 0 = p[s/'0]"
and "substn t s 0 = t[s/0]"
by simp_all
subsection {* Preservation theorems *}
text {* Not used in Church-Rosser proof, but in Strong
Normalization. \medskip *}
theorem subst_preserves_beta [simp]:
shows "pr \<longrightarrow>\<^sub>\<beta> ps ==> pr[t/'i] \<longrightarrow>\<^sub>\<beta> ps[t/'i]"
and "r \<rightarrow>\<^sub>\<beta> s ==> r[t/i] \<rightarrow>\<^sub>\<beta> s[t/i]"
by (induct arbitrary: t i and t i rule:beta'_beta.inducts) (simp_all add: subst_subst [symmetric])
theorem subst_preserves_beta': "r \<rightarrow>\<^sub>\<beta>\<^sup>* s ==> r[t/i] \<rightarrow>\<^sub>\<beta>\<^sup>* s[t/i]"
apply (induct set: rtranclp)
apply (rule rtranclp.rtrancl_refl)
apply (erule rtranclp.rtrancl_into_rtrancl)
apply (erule subst_preserves_beta)
done
theorem lift_preserves_beta [simp]:
shows "pr \<longrightarrow>\<^sub>\<beta> ps ==> lift' pr i \<longrightarrow>\<^sub>\<beta> lift' ps i"
and "r \<rightarrow>\<^sub>\<beta> s ==> lift r i \<rightarrow>\<^sub>\<beta> lift s i"
by (induct arbitrary: i and i rule:beta'_beta.inducts) auto
theorem lift_preserves_beta': "r \<rightarrow>\<^sub>\<beta>\<^sup>* s ==> lift r i \<rightarrow>\<^sub>\<beta>\<^sup>* lift s i"
apply (induct set: rtranclp)
apply (rule rtranclp.rtrancl_refl)
apply (erule rtranclp.rtrancl_into_rtrancl)
apply (erule lift_preserves_beta)
done
theorem subst_preserves_beta2 [simp]:
shows "r \<rightarrow>\<^sub>\<beta> s ==> p[r/'i] \<longrightarrow>\<^sub>\<beta>\<^sup>* p[s/'i]"
and "r \<rightarrow>\<^sub>\<beta> s ==> t[r/i] \<rightarrow>\<^sub>\<beta>\<^sup>* t[s/i]"
apply (induct p and t arbitrary: r s i and r s i)
apply simp
apply (simp add: rtrancl_beta_CallProc)
apply (simp add: rtrancl_beta_Proc)
apply (simp add: subst_Var r_into_rtranclp)
apply (simp add: rtrancl_beta_App)
by (simp add: rtrancl_beta_Abs)
theorem subst_preserves_beta2': "r \<rightarrow>\<^sub>\<beta>\<^sup>* s ==> t[r/i] \<rightarrow>\<^sub>\<beta>\<^sup>* t[s/i]"
apply (induct set: rtranclp)
apply (rule rtranclp.rtrancl_refl)
apply (erule rtranclp_trans)
apply (erule subst_preserves_beta2)
done
abbreviation
list_application :: "dB => dB list => dB" (infixl "\<degree>\<degree>" 150) where
"t \<degree>\<degree> ts == foldl (\<degree>) t ts"
lemma apps_eq_tail_conv [iff]: "(r \<degree>\<degree> ts = s \<degree>\<degree> ts) = (r = s)"
by (induct ts rule: rev_induct) auto
lemma Var_eq_apps_conv [iff]: "(Var m = s \<degree>\<degree> ss) = (Var m = s \<and> ss = [])"
by (induct ss arbitrary: s) auto
lemma Proc_eq_apps_conv [iff]: "(Proc p x y = s \<degree>\<degree> ss) = (Proc p x y = s \<and> ss = [])"
by (induct ss arbitrary: s) auto
lemma Var_apps_eq_Var_apps_conv [iff]:
"(Var m \<degree>\<degree> rs = Var n \<degree>\<degree> ss) = (m = n \<and> rs = ss)"
apply (induct rs arbitrary: ss rule: rev_induct)
apply simp
apply blast
apply (induct_tac ss rule: rev_induct)
apply auto
done
lemma Proc_apps_eq_Proc_apps_conv [iff]:
"(Proc p x y \<degree>\<degree> rs = Proc p' x' y' \<degree>\<degree> ss) = (p=p' \<and> x=x' \<and> y=y' \<and> rs = ss)"
apply (induct rs arbitrary: ss rule: rev_induct)
apply simp
apply blast
apply (induct_tac ss rule: rev_induct)
apply auto
done
lemma App_eq_foldl_conv:
"(r \<degree> s = t \<degree>\<degree> ts) =
(if ts = [] then r \<degree> s = t
else (\<exists>ss. ts = ss @ [s] \<and> r = t \<degree>\<degree> ss))"
apply (rule_tac xs = ts in rev_exhaust)
apply auto
done
lemma Abs_eq_apps_conv [iff]:
"(Abs r = s \<degree>\<degree> ss) = (Abs r = s \<and> ss = [])"
by (induct ss rule: rev_induct) auto
lemma apps_eq_Abs_conv [iff]: "(s \<degree>\<degree> ss = Abs r) = (s = Abs r \<and> ss = [])"
by (induct ss rule: rev_induct) auto
lemma Abs_apps_eq_Abs_apps_conv [iff]:
"(Abs r \<degree>\<degree> rs = Abs s \<degree>\<degree> ss) = (r = s \<and> rs = ss)"
apply (induct rs arbitrary: ss rule: rev_induct)
apply simp
apply blast
apply (induct_tac ss rule: rev_induct)
apply auto
done
lemma Abs_App_neq_Var_apps [iff]:
"Abs s \<degree> t \<noteq> Var n \<degree>\<degree> ss"
by (induct ss arbitrary: s t rule: rev_induct) auto
lemma Abs_App_neq_Proc_apps [iff]:
"Abs s \<degree> t \<noteq> Proc p x y \<degree>\<degree> ss"
by (induct ss arbitrary: s t rule: rev_induct) auto
lemma Var_apps_neq_Abs_apps [iff]:
"Var n \<degree>\<degree> ts \<noteq> Abs r \<degree>\<degree> ss"
apply (induct ss arbitrary: ts rule: rev_induct)
apply simp
apply (induct_tac ts rule: rev_induct)
apply auto
done
lemma Proc_apps_neq_Abs_apps [iff]:
"Proc p x y \<degree>\<degree> ts \<noteq> Abs r \<degree>\<degree> ss"
apply (induct ss arbitrary: ts rule: rev_induct)
apply simp
apply (induct_tac ts rule: rev_induct)
apply auto
done
lemma Var_apps_neq_Proc_apps [iff]:
"Var n \<degree>\<degree> ts \<noteq> Proc p x y \<degree>\<degree> ss"
apply (induct ss arbitrary: ts rule: rev_induct)
apply (induct_tac ts rule: rev_induct, simp_all)
by (induct_tac ts rule: rev_induct, simp_all)
lemma ex_head_tail:
"\<exists>ts h. t = h \<degree>\<degree> ts \<and> ((\<exists>n. h = Var n) \<or> (\<exists>u. h = Abs u) \<or> (\<exists>p x y. h=Proc p x y))"
apply (induct t)
apply simp
apply simp
apply (rule_tac x = "[]" in exI)
apply simp apply simp
apply (metis foldl_Cons foldl_Nil foldl_append)
by simp
lemma size_apps [simp]:
"size (r \<degree>\<degree> rs) = size r + foldl (+) 0 (map size rs) + length rs"
by (induct rs rule: rev_induct) auto
lemma lem0: "[| (0::nat) < k; m <= n |] ==> m < n + k"
by simp
lemma lift_map [simp]:
"lift (t \<degree>\<degree> ts) i = lift t i \<degree>\<degree> map (\<lambda>t. lift t i) ts"
by (induct ts arbitrary: t) simp_all
lemma subst_map [simp]:
"subst (t \<degree>\<degree> ts) u i = subst t u i \<degree>\<degree> map (\<lambda>t. subst t u i) ts"
by (induct ts arbitrary: t) simp_all
lemma app_last: "(t \<degree>\<degree> ts) \<degree> u = t \<degree>\<degree> (ts @ [u])"
by simp
text {* \medskip A customized induction schema for @{text "\<degree>\<degree>"}. *}
(*
lemma lem:
assumes "!!n ts. \<forall>t \<in> set ts. P t ==> P (Var n \<degree>\<degree> ts)"
and "!!u ts. [| P u; \<forall>t \<in> set ts. P t |] ==> P (Abs u \<degree>\<degree> ts)"
shows "size t = n \<Longrightarrow> P t"
apply (induct n arbitrary: t rule: nat_less_induct)
apply (cut_tac t = t in ex_head_tail)
apply clarify
apply (erule disjE)
apply clarify
apply (rule assms)
apply clarify
apply (erule allE, erule impE)
prefer 2
apply (erule allE, erule mp, rule refl)
apply simp
apply (simp only: foldl_conv_fold add.commute fold_plus_listsum_rev)
apply (fastforce simp add: listsum_map_remove1)
apply clarify
apply (rule assms)
apply (erule allE, erule impE)
prefer 2
apply (erule allE, erule mp, rule refl)
apply simp
apply clarify
apply (erule allE, erule impE)
prefer 2
apply (erule allE, erule mp, rule refl)
apply simp
apply (rule le_imp_less_Suc)
apply (rule trans_le_add1)
apply (rule trans_le_add2)
apply (simp only: foldl_conv_fold add.commute fold_plus_listsum_rev)
apply (simp add: member_le_listsum_nat)
done
theorem Apps_dB_induct:
assumes "!!n ts. \<forall>t \<in> set ts. P t ==> P (Var n \<degree>\<degree> ts)"
and "!!u ts. [| P u; \<forall>t \<in> set ts. P t |] ==> P (Abs u \<degree>\<degree> ts)"
shows "P t"
apply (rule_tac t = t in lem)
prefer 3
apply (rule refl)
using assms apply iprover+
done
*)
subsection {* Environments *}
definition
shift :: "(nat \<Rightarrow> 'a) \<Rightarrow> nat \<Rightarrow> 'a \<Rightarrow> nat \<Rightarrow> 'a" ("_<_:_>" [90, 0, 0] 91) where
"e<i:a> = (\<lambda>j. if j < i then e j else if j = i then a else e (j - 1))"
notation (xsymbols)
shift ("_\<langle>_:_\<rangle>" [90, 0, 0] 91)
notation (HTML output)
shift ("_\<langle>_:_\<rangle>" [90, 0, 0] 91)
lemma shift_gt [simp]: "j < i \<Longrightarrow> (e\<langle>i:T\<rangle>) j = e j"
by (simp add: shift_def)
lemma shift_lt [simp]: "i < j \<Longrightarrow> (e\<langle>i:T\<rangle>) j = e (j - 1)"
by (simp add: shift_def)
lemma shift_commute [simp]: "e\<langle>i:U\<rangle>\<langle>0:T\<rangle> = e\<langle>0:T\<rangle>\<langle>Suc i:U\<rangle>"
by (rule ext) (simp_all add: shift_def split: nat.split)
subsection {* Types and typing rules *}
datatype type =
Atom nat
| Fun type type (infixr "\<Rightarrow>" 200)
inductive typing :: "(nat \<Rightarrow> type) \<Rightarrow> dB \<Rightarrow> type \<Rightarrow> bool" ("_ \<turnstile> _ : _" [50, 50, 50] 50)
where
Var [intro!]: "env x = T \<Longrightarrow> env \<turnstile> Var x : T"
| Abs [intro!]: "env\<langle>0:T\<rangle> \<turnstile> t : U \<Longrightarrow> env \<turnstile> Abs t : (T \<Rightarrow> U)"
| App [intro!]: "env \<turnstile> s : T \<Rightarrow> U \<Longrightarrow> env \<turnstile> t : T \<Longrightarrow> env \<turnstile> (s \<degree> t) : U"
inductive_cases typing_elims [elim!]:
"e \<turnstile> Var i : T"
"e \<turnstile> t \<degree> u : T"
"e \<turnstile> Abs t : T"
primrec
typings :: "(nat \<Rightarrow> type) \<Rightarrow> dB list \<Rightarrow> type list \<Rightarrow> bool"
where
"typings e [] Ts = (Ts = [])"
| "typings e (t # ts) Ts =
(case Ts of
[] \<Rightarrow> False
| T # Ts \<Rightarrow> e \<turnstile> t : T \<and> typings e ts Ts)"
abbreviation
typings_rel :: "(nat \<Rightarrow> type) \<Rightarrow> dB list \<Rightarrow> type list \<Rightarrow> bool"
("_ ||- _ : _" [50, 50, 50] 50) where
"env ||- ts : Ts == typings env ts Ts"
notation (latex)
typings_rel ("_ \<tturnstile> _ : _" [50, 50, 50] 50)
abbreviation
funs :: "type list \<Rightarrow> type \<Rightarrow> type" (infixr "=>>" 200) where
"Ts =>> T == foldr Fun Ts T"
notation (latex)
funs (infixr "\<Rrightarrow>" 200)
subsection {* Some examples *}
schematic_lemma "e \<turnstile> Abs (Abs (Abs (Var 1 \<degree> (Var 2 \<degree> Var 1 \<degree> Var 0)))) : ?T"
by force
schematic_lemma "e \<turnstile> Abs (Abs (Abs (Var 2 \<degree> Var 0 \<degree> (Var 1 \<degree> Var 0)))) : ?T"
by force
subsection {* Lists of types *}
lemma lists_typings:
"e \<tturnstile> ts : Ts \<Longrightarrow> listsp (\<lambda>t. \<exists>T. e \<turnstile> t : T) ts"
apply (induct ts arbitrary: Ts)
apply (case_tac Ts)
apply simp
apply (rule listsp.Nil)
apply simp
apply (case_tac Ts)
apply simp
apply simp
apply (rule listsp.Cons)
apply blast
apply blast
done
lemma types_snoc: "e \<tturnstile> ts : Ts \<Longrightarrow> e \<turnstile> t : T \<Longrightarrow> e \<tturnstile> ts @ [t] : Ts @ [T]"
apply (induct ts arbitrary: Ts)
apply simp
apply (case_tac Ts)
apply simp+
done
lemma types_snoc_eq: "e \<tturnstile> ts @ [t] : Ts @ [T] =
(e \<tturnstile> ts : Ts \<and> e \<turnstile> t : T)"
apply (induct ts arbitrary: Ts)
apply (case_tac Ts)
apply simp+
apply (case_tac Ts)
apply (case_tac "ts @ [t]")
apply simp+
done
lemma rev_exhaust2 [extraction_expand]:
obtains (Nil) "xs = []" | (snoc) ys y where "xs = ys @ [y]"
\<comment> \<open>Cannot use @{text rev_exhaust} from the @{text List}
theory, since it is not constructive\<close>
apply (subgoal_tac "\<forall>ys. xs = rev ys \<longrightarrow> thesis")
apply (erule_tac x="rev xs" in allE)
apply simp
apply (rule allI)
apply (rule impI)
apply (case_tac ys)
apply simp
apply simp
done
lemma types_snocE: "e \<tturnstile> ts @ [t] : Ts \<Longrightarrow>
(\<And>Us U. Ts = Us @ [U] \<Longrightarrow> e \<tturnstile> ts : Us \<Longrightarrow> e \<turnstile> t : U \<Longrightarrow> P) \<Longrightarrow> P"
apply (cases Ts rule: rev_exhaust2)
apply simp
apply (case_tac "ts @ [t]")
apply (simp add: types_snoc_eq)+
done
subsection {* n-ary function types *}
lemma list_app_typeD:
"e \<turnstile> t \<degree>\<degree> ts : T \<Longrightarrow> \<exists>Ts. e \<turnstile> t : Ts \<Rrightarrow> T \<and> e \<tturnstile> ts : Ts"
apply (induct ts arbitrary: t T)
apply simp
apply (rename_tac a b t T)
apply atomize
apply simp
apply (erule_tac x = "t \<degree> a" in allE)
apply (erule_tac x = T in allE)
apply (erule impE)
apply assumption
apply (elim exE conjE)
apply (ind_cases "e \<turnstile> t \<degree> u : T" for t u T)
apply (rule_tac x = "Ta # Ts" in exI)
apply simp
done
lemma list_app_typeE:
"e \<turnstile> t \<degree>\<degree> ts : T \<Longrightarrow> (\<And>Ts. e \<turnstile> t : Ts \<Rrightarrow> T \<Longrightarrow> e \<tturnstile> ts : Ts \<Longrightarrow> C) \<Longrightarrow> C"
by (insert list_app_typeD) fast
lemma list_app_typeI:
"e \<turnstile> t : Ts \<Rrightarrow> T \<Longrightarrow> e \<tturnstile> ts : Ts \<Longrightarrow> e \<turnstile> t \<degree>\<degree> ts : T"
apply (induct ts arbitrary: t T Ts)
apply simp
apply (rename_tac a b t T Ts)
apply atomize
apply (case_tac Ts)
apply simp
apply simp
apply (erule_tac x = "t \<degree> a" in allE)
apply (erule_tac x = T in allE)
apply (rename_tac list)
apply (erule_tac x = list in allE)
apply (erule impE)
apply (erule conjE)
apply (erule typing.App)
apply assumption
apply blast
done
text {*
For the specific case where the head of the term is a variable,
the following theorems allow us to infer the types of the arguments
without analyzing the typing derivation. This is crucial
for program extraction.
*}
theorem var_app_type_eq:
"e \<turnstile> Var i \<degree>\<degree> ts : T \<Longrightarrow> e \<turnstile> Var i \<degree>\<degree> ts : U \<Longrightarrow> T = U"
apply (induct ts arbitrary: T U rule: rev_induct)
apply simp
apply (ind_cases "e \<turnstile> Var i : T" for T)
apply (ind_cases "e \<turnstile> Var i : T" for T)
apply simp
apply simp
apply (ind_cases "e \<turnstile> t \<degree> u : T" for t u T)
apply (ind_cases "e \<turnstile> t \<degree> u : T" for t u T)
apply atomize
apply (erule_tac x="Ta \<Rightarrow> T" in allE)
apply (erule_tac x="Tb \<Rightarrow> U" in allE)
apply (erule impE)
apply assumption
apply (erule impE)
apply assumption
apply simp
done
lemma var_app_types: "e \<turnstile> Var i \<degree>\<degree> ts \<degree>\<degree> us : T \<Longrightarrow> e \<tturnstile> ts : Ts \<Longrightarrow>
e \<turnstile> Var i \<degree>\<degree> ts : U \<Longrightarrow> \<exists>Us. U = Us \<Rrightarrow> T \<and> e \<tturnstile> us : Us"
apply (induct us arbitrary: ts Ts U)
apply simp
apply (erule var_app_type_eq)
apply assumption
apply simp
apply (rename_tac a b ts Ts U)
apply atomize
apply (case_tac U)
apply (rule FalseE)
apply simp
apply (erule list_app_typeE)
apply (ind_cases "e \<turnstile> t \<degree> u : T" for t u T)
apply (drule_tac T="Atom nat" and U="Ta \<Rightarrow> Tsa \<Rrightarrow> T" in var_app_type_eq)
apply assumption
apply simp
apply (erule_tac x="ts @ [a]" in allE)
apply (erule_tac x="Ts @ [type1]" in allE)
apply (erule_tac x="type2" in allE)
apply simp
apply (erule impE)
apply (rule types_snoc)
apply assumption
apply (erule list_app_typeE)
apply (ind_cases "e \<turnstile> t \<degree> u : T" for t u T)
apply (drule_tac T="type1 \<Rightarrow> type2" and U="Ta \<Rightarrow> Tsa \<Rrightarrow> T" in var_app_type_eq)
apply assumption
apply simp
apply (erule impE)
apply (rule typing.App)
apply assumption
apply (erule list_app_typeE)
apply (ind_cases "e \<turnstile> t \<degree> u : T" for t u T)
apply (frule_tac T="type1 \<Rightarrow> type2" and U="Ta \<Rightarrow> Tsa \<Rrightarrow> T" in var_app_type_eq)
apply assumption
apply simp
apply (erule exE)
apply (rule_tac x="type1 # Us" in exI)
apply simp
apply (erule list_app_typeE)
apply (ind_cases "e \<turnstile> t \<degree> u : T" for t u T)
apply (frule_tac T="type1 \<Rightarrow> Us \<Rrightarrow> T" and U="Ta \<Rightarrow> Tsa \<Rrightarrow> T" in var_app_type_eq)
apply assumption
apply simp
done
lemma var_app_typesE: "e \<turnstile> Var i \<degree>\<degree> ts : T \<Longrightarrow>
(\<And>Ts. e \<turnstile> Var i : Ts \<Rrightarrow> T \<Longrightarrow> e \<tturnstile> ts : Ts \<Longrightarrow> P) \<Longrightarrow> P"
apply (drule var_app_types [of _ _ "[]", simplified])
apply (iprover intro: typing.Var)+
done
lemma abs_typeE: "e \<turnstile> Abs t : T \<Longrightarrow> (\<And>U V. e\<langle>0:U\<rangle> \<turnstile> t : V \<Longrightarrow> P) \<Longrightarrow> P"
apply (cases T)
apply (rule FalseE)
apply (erule typing.cases)
apply simp_all
apply atomize
apply (erule_tac x="type1" in allE)
apply (erule_tac x="type2" in allE)
apply (erule mp)
apply (erule typing.cases)
apply simp_all
done
subsection {* Lifting preserves well-typedness *}
lemma lift_type [intro!]: "e \<turnstile> t : T \<Longrightarrow> e\<langle>i:U\<rangle> \<turnstile> lift t i : T"
by (induct arbitrary: i U set: typing) auto
lemma lift_types:
"e \<tturnstile> ts : Ts \<Longrightarrow> e\<langle>i:U\<rangle> \<tturnstile> (map (\<lambda>t. lift t i) ts) : Ts"
apply (induct ts arbitrary: Ts)
apply simp
apply (case_tac Ts)
apply auto
done
subsection {* Substitution lemmas *}
lemma subst_lemma:
"e \<turnstile> t : T \<Longrightarrow> e' \<turnstile> u : U \<Longrightarrow> e = e'\<langle>i:U\<rangle> \<Longrightarrow> e' \<turnstile> t[u/i] : T"
apply (induct arbitrary: e' i U u set: typing)
apply (rule_tac x = x and y = i in linorder_cases)
apply auto
apply blast
done
lemma substs_lemma:
"e \<turnstile> u : T \<Longrightarrow> e\<langle>i:T\<rangle> \<tturnstile> ts : Ts \<Longrightarrow>
e \<tturnstile> (map (\<lambda>t. t[u/i]) ts) : Ts"
apply (induct ts arbitrary: Ts)
apply (case_tac Ts)
apply simp
apply simp
apply atomize
apply (case_tac Ts)
apply simp
apply simp
apply (erule conjE)
apply (erule (1) subst_lemma)
apply (rule refl)
done
subsection {* Subject reduction *}
lemma subject_reduction: "e \<turnstile> t : T \<Longrightarrow> t \<rightarrow>\<^sub>\<beta> t' \<Longrightarrow> e \<turnstile> t' : T"
apply (induct arbitrary: t' set: typing)
apply blast
apply blast
apply atomize
apply (ind_cases "s \<degree> t \<rightarrow>\<^sub>\<beta> t'" for s t t')
apply hypsubst
apply (ind_cases "env \<turnstile> Abs t : T \<Rightarrow> U" for env t T U)
apply (rule subst_lemma)
apply assumption
apply assumption
apply (rule ext)
apply (case_tac x)
apply auto
done
theorem subject_reduction': "t \<rightarrow>\<^sub>\<beta>\<^sup>* t' \<Longrightarrow> e \<turnstile> t : T \<Longrightarrow> e \<turnstile> t' : T"
by (induct set: rtranclp) (iprover intro: subject_reduction)+
subsection {* Alternative induction rule for types *}
lemma type_induct [induct type]:
assumes
"(\<And>T. (\<And>T1 T2. T = T1 \<Rightarrow> T2 \<Longrightarrow> P T1) \<Longrightarrow>
(\<And>T1 T2. T = T1 \<Rightarrow> T2 \<Longrightarrow> P T2) \<Longrightarrow> P T)"
shows "P T"
proof (induct T)
case Atom
show ?case by (rule assms) simp_all
next
case Fun
show ?case by (rule assms) (insert Fun, simp_all)
qed
text {*
Lifting an order to lists of elements, relating exactly one
element.
*}
definition
step1 :: "('a => 'a => bool) => 'a list => 'a list => bool" where
"step1 r =
(\<lambda>ys xs. \<exists>us z z' vs. xs = us @ z # vs \<and> r z' z \<and> ys =
us @ z' # vs)"
lemma step1_converse [simp]: "step1 (r^--1) = (step1 r)^--1"
apply (unfold step1_def)
apply (blast intro!: order_antisym)
done
lemma in_step1_converse [iff]: "(step1 (r^--1) x y) = ((step1 r)^--1 x y)"
apply auto
done
lemma not_Nil_step1 [iff]: "\<not> step1 r [] xs"
apply (unfold step1_def)
apply blast
done
lemma not_step1_Nil [iff]: "\<not> step1 r xs []"
apply (unfold step1_def)
apply blast
done
lemma Cons_step1_Cons [iff]:
"(step1 r (y # ys) (x # xs)) =
(r y x \<and> xs = ys \<or> x = y \<and> step1 r ys xs)"
apply (unfold step1_def)
apply (rule iffI)
apply (erule exE)
apply (rename_tac ts)
apply (case_tac ts)
apply fastforce
apply force
apply (erule disjE)
apply blast
apply (blast intro: Cons_eq_appendI)
done
lemma append_step1I:
"step1 r ys xs \<and> vs = us \<or> ys = xs \<and> step1 r vs us
==> step1 r (ys @ vs) (xs @ us)"
apply (unfold step1_def)
apply auto
apply blast
apply (blast intro: append_eq_appendI)
done
lemma Cons_step1E [elim!]:
assumes "step1 r ys (x # xs)"
and "!!y. ys = y # xs \<Longrightarrow> r y x \<Longrightarrow> R"
and "!!zs. ys = x # zs \<Longrightarrow> step1 r zs xs \<Longrightarrow> R"
shows R
using assms
apply (cases ys)
apply (simp add: step1_def)
apply blast
done
lemma Snoc_step1_SnocD:
"step1 r (ys @ [y]) (xs @ [x])
==> (step1 r ys xs \<and> y = x \<or> ys = xs \<and> r y x)"
apply (unfold step1_def)
apply (clarify del: disjCI)
apply (rename_tac vs)
apply (rule_tac xs = vs in rev_exhaust)
apply force
apply simp
apply blast
done
lemma Cons_acc_step1I [intro!]:
"Wellfounded.accp r x ==> Wellfounded.accp (step1 r) xs \<Longrightarrow> Wellfounded.accp (step1 r) (x # xs)"
apply (induct arbitrary: xs set: Wellfounded.accp)
apply (erule thin_rl)
apply (erule accp_induct)
apply (rule accp.accI)
apply blast
done
lemma lists_accD: "listsp (Wellfounded.accp r) xs ==> Wellfounded.accp (step1 r) xs"
apply (induct set: listsp)
apply (rule accp.accI)
apply simp
apply (rule accp.accI)
apply (fast dest: accp_downward)
done
lemma ex_step1I:
"[| x \<in> set xs; r y x |]
==> \<exists>ys. step1 r ys xs \<and> y \<in> set ys"
apply (unfold step1_def)
apply (drule in_set_conv_decomp [THEN iffD1])
apply force
done
lemma lists_accI: "Wellfounded.accp (step1 r) xs ==> listsp (Wellfounded.accp r) xs"
apply (induct set: Wellfounded.accp)
apply clarify
apply (rule accp.accI)
apply (drule_tac r=r in ex_step1I, assumption)
apply blast
done
text {*
Lifting beta-reduction to lists of terms, reducing exactly one element.
*}
abbreviation
list_beta :: "dB list => dB list => bool" (infixl "=>" 50) where
"rs => ss == step1 beta rs ss"
lemma head_Var_reduction:
"Var n \<degree>\<degree> rs \<rightarrow>\<^sub>\<beta> v \<Longrightarrow> \<exists>ss. rs => ss \<and> v = Var n \<degree>\<degree> ss"
apply (induct u == "Var n \<degree>\<degree> rs" v arbitrary: rs set: beta)
apply simp
apply (rule_tac xs = rs in rev_exhaust)
apply simp
apply (atomize, force intro: append_step1I)
apply (rule_tac xs = rs in rev_exhaust)
apply simp
apply (auto 0 3 intro: disjI2 [THEN append_step1I])
done
(*
lemma head_Proc_reduction:
"Proc p x y \<degree>\<degree> rs \<rightarrow>\<^sub>\<beta> v \<Longrightarrow> (\<exists>ss p'. rs => ss \<and> v = Proc p x y \<degree>\<degree> ss)"
apply (induct u == "Proc p x y \<degree>\<degree> rs" v arbitrary: rs taking: "\<lambda>_ _. True" set: beta)
apply simp
apply (rule_tac xs = rs in rev_exhaust)
apply simp
apply (atomize, force intro: append_step1I)
apply (rule_tac xs = rs in rev_exhaust)
apply simp
apply (auto 0 3 intro: disjI2 [THEN append_step1I])[1]
apply (auto 0 3 intro: disjI2 [THEN append_step1I])[1]
apply auto[1]
apply (auto 0 3 intro: disjI2 [THEN append_step1I])[1]
done
*)
lemma apps_betasE [elim!]:
assumes major: "r \<degree>\<degree> rs \<rightarrow>\<^sub>\<beta> s"
and cases: "!!r'. [| r \<rightarrow>\<^sub>\<beta> r'; s = r' \<degree>\<degree> rs |] ==> R"
"!!rs'. [| rs => rs'; s = r \<degree>\<degree> rs' |] ==> R"
"!!t u us. [| r = Abs t; rs = u # us; s = t[u/0] \<degree>\<degree> us |] ==> R"
shows R
proof -
from major have
"(\<exists>r'. r \<rightarrow>\<^sub>\<beta> r' \<and> s = r' \<degree>\<degree> rs) \<or>
(\<exists>rs'. rs => rs' \<and> s = r \<degree>\<degree> rs') \<or>
(\<exists>t u us. r = Abs t \<and> rs = u # us \<and> s = t[u/0] \<degree>\<degree> us)"
apply (induct u == "r \<degree>\<degree> rs" s arbitrary: r rs set: beta)
apply (case_tac r)
apply simp
apply simp
apply (simp add: App_eq_foldl_conv)
apply (split split_if_asm)
apply simp
apply blast
apply simp
apply (simp add: App_eq_foldl_conv)
apply (split split_if_asm)
apply simp
apply simp
apply (drule App_eq_foldl_conv [THEN iffD1])
apply (split split_if_asm)
apply simp
apply blast
apply (force intro!: disjI1 [THEN append_step1I])
apply (drule App_eq_foldl_conv [THEN iffD1])
apply (split split_if_asm)
apply simp
apply blast
by (clarify, auto 0 3 del: exI intro!: exI intro: append_step1I)
with cases show ?thesis by blast
qed
lemma apps_preserves_beta [simp]:
"r \<rightarrow>\<^sub>\<beta> s ==> r \<degree>\<degree> ss \<rightarrow>\<^sub>\<beta> s \<degree>\<degree> ss"
by (induct ss rule: rev_induct) auto
lemma apps_preserves_beta2 [simp]:
"r ->> s ==> r \<degree>\<degree> ss ->> s \<degree>\<degree> ss"
apply (induct set: rtranclp)
apply blast
apply (blast intro: apps_preserves_beta rtranclp.rtrancl_into_rtrancl)
done
lemma apps_preserves_betas [simp]:
"rs => ss \<Longrightarrow> r \<degree>\<degree> rs \<rightarrow>\<^sub>\<beta> r \<degree>\<degree> ss"
apply (induct rs arbitrary: ss rule: rev_induct)
apply simp
apply simp
apply (rule_tac xs = ss in rev_exhaust)
apply simp
apply simp
apply (drule Snoc_step1_SnocD)
apply blast
done
subsection {* Terminating lambda terms *}
inductive IT' :: "program_rep => bool" and IT :: "dB => bool"
where
Var [intro]: "listsp IT rs ==> IT (Var n \<degree>\<degree> rs)"
| Lambda [intro]: "IT r ==> IT (Abs r)"
| Beta [intro]: "IT ((r[s/0]) \<degree>\<degree> ss) ==> IT s ==> IT ((Abs r \<degree> s) \<degree>\<degree> ss)"
| [intro]: "listsp IT rs ==> IT' body \<Longrightarrow> IT (Proc body args ret \<degree>\<degree> rs)"
| [intro]: "IT' Skip"
lemma AppAbs_iff: "IT ((Abs r \<degree> s) \<degree>\<degree> ss) = (IT ((r[s/0]) \<degree>\<degree> ss) \<and> IT s)"
apply (rule iffI)
apply (cases "((Abs r \<degree> s) \<degree>\<degree> ss)" rule:IT.cases, auto)
apply (metis Var_apps_neq_Abs_apps foldl_Cons)
apply (metis Var_apps_neq_Abs_apps foldl_Cons)
apply (metis (mono_tags) Abs_apps_eq_Abs_apps_conv foldl_Cons list.inject)
apply (metis (full_types) Abs_apps_eq_Abs_apps_conv append_Cons foldl_Cons last.simps last_appendR not_Cons_self2 rotate1.simps(2))
apply (metis foldl_Cons Proc_apps_neq_Abs_apps)
by (metis (poly_guards_query) Proc_apps_neq_Abs_apps foldl_Cons)
lemma Var_iff: "IT (Var n \<degree>\<degree> rs) = listsp IT rs"
apply (rule iffI)
apply (cases "Var n \<degree>\<degree> rs" rule:IT.cases, auto)
by (metis Var_apps_neq_Abs_apps foldl_Cons)
subsection {* Every term in @{text "IT"} terminates *}
lemma double_induction_lemma [rule_format]:
"termip beta s ==> \<forall>t. termip beta t -->
(\<forall>r ss. t = r[s/0] \<degree>\<degree> ss --> termip beta (Abs r \<degree> s \<degree>\<degree> ss))"
apply (erule accp_induct)
apply (rule allI)
apply (rule impI)
apply (erule thin_rl)
apply (erule accp_induct)
apply clarify
apply (rule accp.accI)
apply (safe elim!: apps_betasE)
apply (blast intro: subst_preserves_beta apps_preserves_beta)
apply (blast intro: apps_preserves_beta2 subst_preserves_beta2 rtranclp_converseI
dest: accp_downwards) (* FIXME: acc_downwards can be replaced by acc(R ^* ) = acc(r) *)
apply (blast dest: apps_preserves_betas)
done
thm accI
lemma tmp: "listsp (termip (\<rightarrow>\<^sub>\<beta>)) rs \<Longrightarrow>
termip (\<longrightarrow>\<^sub>\<beta>) body \<Longrightarrow> termip (\<rightarrow>\<^sub>\<beta>) (Proc body args ret \<degree>\<degree> rs)"
apply (induction rs rule:rev_induct, simp_all)
apply (erule accp_induct[where r="(\<longrightarrow>\<^sub>\<beta>\<inverse>\<inverse>)"])
apply (rule accp.intros, metis beta_cases(4) conversep_iff)
apply (rule accp.intros)
unfolding conversep_iff
apply (cases rule:beta.cases)
using Abs_App_neq_Proc_apps
using Abs_App_neq_Proc_apps[where ss="x#xs", symmetric]
apply (rule beta_cases)
apply (rule accp_downward)
defer apply simp
apply clarify
apply (erule beta_cases)
apply (metis beta'.simps)
apply (metis (full_types) beta_cases(5) program_rep.exhaust)
apply (metis (poly_guards_query) beta'.simps beta_cases(4))
apply (metis (mono_tags, hide_lams) beta'.simps beta_cases(4))
apply (drule lists_accD)
apply (rule accp_downward)
defer
apply (drule rev_predicate1D [OF _ listsp_mono [where B="termip beta"]])
apply (fast intro!: predicate1I)
apply (drule lists_accD)
apply (erule accp_induct)
apply (rule accp.accI)
apply simp
apply (drule head_Proc_reduction)
apply (erule exE, erule conjE, clarify)
thm IT'_IT.inducts
oops
lemma IT_implies_termi: "IT t ==> termip beta t"
apply (induct taking: "termip beta'" set: IT)
apply (drule rev_predicate1D [OF _ listsp_mono [where B="termip beta"]])
apply (fast intro!: predicate1I)
apply (drule lists_accD)
apply (erule accp_induct)
apply (rule accp.accI)
apply (blast dest: head_Var_reduction)
apply (erule accp_induct)
apply (rule accp.accI)
apply blast
apply (blast intro: double_induction_lemma)
apply (drule rev_predicate1D [OF _ listsp_mono [where B="termip beta"]])
apply (fast intro!: predicate1I)
apply (drule lists_accD)
apply (erule accp_induct)
apply (rule accp.accI)
using head_Proc_reduction
apply (blast dest: head_Proc_reduction)
term "
\<And>rs n x y.
Wellfounded.accp (step1 (\<rightarrow>\<^sub>\<beta>\<inverse>\<inverse>)) x \<Longrightarrow>
\<forall>y. step1 (\<rightarrow>\<^sub>\<beta>\<inverse>\<inverse>) y x \<longrightarrow> termip (\<rightarrow>\<^sub>\<beta>) (Var n \<degree>\<degree> y) \<Longrightarrow>
(\<rightarrow>\<^sub>\<beta>\<inverse>\<inverse>) y (Var n \<degree>\<degree> x) \<Longrightarrow> termip (\<rightarrow>\<^sub>\<beta>) y
"
done
subsection {* Every terminating term is in @{text "IT"} *}
declare Var_apps_neq_Abs_apps [symmetric, simp]
lemma [simp, THEN not_sym, simp]: "Var n \<degree>\<degree> ss \<noteq> Abs r \<degree> s \<degree>\<degree> ts"
by (simp add: foldl_Cons [symmetric] del: foldl_Cons)
inductive_cases [elim!]:
"IT (Var n \<degree>\<degree> ss)"
"IT (Abs t)"
"IT (Abs r \<degree> s \<degree>\<degree> ts)"
"IT (Proc body args ret)"
theorem Apps_dB_induct_tmp:
assumes "!!n ts. \<forall>t \<in> set ts. P t ==> P (Var n \<degree>\<degree> ts)"
and "!!u ts. [| P u; \<forall>t \<in> set ts. P t |] ==> P (Abs u \<degree>\<degree> ts)"
and "\<And>body args ret ts. \<lbrakk> P' body; \<forall>t \<in> set ts. P t \<rbrakk> \<Longrightarrow> P (Proc body args ret \<degree>\<degree> ts)"
and "P' Skip"
shows "P' p" and "P t"
sorry
(* XXX *)
theorem termi_implies_IT: "termip beta r ==> IT r"
apply (erule accp_induct)
apply (rename_tac r)
apply (erule thin_rl)
apply (erule rev_mp)
apply simp
apply (rule_tac t = r and P' = "\<lambda>p. (\<forall>y. p \<longrightarrow>\<^sub>\<beta> y \<longrightarrow> IT' y) \<longrightarrow> IT' p" in Apps_dB_induct_tmp(2))
apply clarify (*4*)
apply (rule IT'_IT.intros)
apply clarify
apply (drule bspec, assumption)
apply (erule mp)
apply clarify
apply (drule_tac r=beta in conversepI)
apply (drule_tac r="beta^--1" in ex_step1I, assumption)
apply clarify
apply (rename_tac us)
apply (erule_tac x = "Var n \<degree>\<degree> us" in allE)
apply (drule mp, rule apps_preserves_betas, simp, unfold Var_iff)
apply force(*3*)
apply (rename_tac u ts)
apply (case_tac ts)(*4*)
apply simp
apply blast(*3*)
apply (rename_tac s ss)
apply simp
apply clarify
apply (rule IT'_IT.intros)(*4*)
apply (blast intro: apps_preserves_beta)(*3*)
apply (erule mp)
apply clarify
apply (rename_tac t)
apply (erule_tac x = "Abs u \<degree> t \<degree>\<degree> ss" in allE)
apply (drule_tac Q="IT (App (Abs u) t \<degree>\<degree> ss)" in mp)(*4*)
apply force(*3*)
apply (unfold AppAbs_iff)
apply force
done
subsection {* Properties of @{text IT} *}
lemma lift_IT [intro!]: "IT t \<Longrightarrow> IT (lift t i)"
apply (induct arbitrary: i set: IT)
apply (simp (no_asm))
apply (rule conjI)
apply
(rule impI,
rule IT.Var,
erule listsp.induct,
simp (no_asm),
simp (no_asm),
rule listsp.Cons,
blast,
assumption)+
apply auto
done
lemma lifts_IT: "listsp IT ts \<Longrightarrow> listsp IT (map (\<lambda>t. lift t 0) ts)"
by (induct ts) auto
lemma subst_Var_IT: "IT r \<Longrightarrow> IT (r[Var i/j])"
apply (induct arbitrary: i j set: IT)
txt {* Case @{term Var}: *}
apply (simp (no_asm) add: subst_Var)
apply
((rule conjI impI)+,
rule IT.Var,
erule listsp.induct,
simp (no_asm),
simp (no_asm),
rule listsp.Cons,
fast,
assumption)+
txt {* Case @{term Lambda}: *}
apply atomize
apply simp
apply (rule IT.Lambda)
apply fast
txt {* Case @{term Beta}: *}
apply atomize
apply (simp (no_asm_use) add: subst_subst [symmetric])
apply (rule IT.Beta)
apply auto
done
lemma Var_IT: "IT (Var n)"
apply (subgoal_tac "IT (Var n \<degree>\<degree> [])")
apply simp
apply (rule IT.Var)
apply (rule listsp.Nil)
done
lemma app_Var_IT: "IT t \<Longrightarrow> IT (t \<degree> Var i)"
apply (induct set: IT)
apply (subst app_last)
apply (rule IT.Var)
apply simp
apply (rule listsp.Cons)
apply (rule Var_IT)
apply (rule listsp.Nil)
apply (rule IT.Beta [where ?ss = "[]", unfolded foldl_Nil [THEN eq_reflection]])
apply (erule subst_Var_IT)
apply (rule Var_IT)
apply (subst app_last)
apply (rule IT.Beta)
apply (subst app_last [symmetric])
apply assumption
apply assumption
done
subsection {* Well-typed substitution preserves termination *}
lemma subst_type_IT:
"\<And>t e T u i. IT t \<Longrightarrow> e\<langle>i:U\<rangle> \<turnstile> t : T \<Longrightarrow>
IT u \<Longrightarrow> e \<turnstile> u : U \<Longrightarrow> IT (t[u/i])"
(is "PROP ?P U" is "\<And>t e T u i. _ \<Longrightarrow> PROP ?Q t e T u i U")
proof (induct U)
fix T t
assume MI1: "\<And>T1 T2. T = T1 \<Rightarrow> T2 \<Longrightarrow> PROP ?P T1"
assume MI2: "\<And>T1 T2. T = T1 \<Rightarrow> T2 \<Longrightarrow> PROP ?P T2"
assume "IT t"
thus "\<And>e T' u i. PROP ?Q t e T' u i T"
proof induct
fix e T' u i
assume uIT: "IT u"
assume uT: "e \<turnstile> u : T"
{
case (Var rs n e1 T'1 u1 i1)
assume nT: "e\<langle>i:T\<rangle> \<turnstile> Var n \<degree>\<degree> rs : T'"
let ?ty = "\<lambda>t. \<exists>T'. e\<langle>i:T\<rangle> \<turnstile> t : T'"
let ?R = "\<lambda>t. \<forall>e T' u i.
e\<langle>i:T\<rangle> \<turnstile> t : T' \<longrightarrow> IT u \<longrightarrow> e \<turnstile> u : T \<longrightarrow> IT (t[u/i])"
show "IT ((Var n \<degree>\<degree> rs)[u/i])"
proof (cases "n = i")
case True
show ?thesis
proof (cases rs)
case Nil
with uIT True show ?thesis by simp
next
case (Cons a as)
with nT have "e\<langle>i:T\<rangle> \<turnstile> Var n \<degree> a \<degree>\<degree> as : T'" by simp
then obtain Ts
where headT: "e\<langle>i:T\<rangle> \<turnstile> Var n \<degree> a : Ts \<Rrightarrow> T'"
and argsT: "e\<langle>i:T\<rangle> \<tturnstile> as : Ts"
by (rule list_app_typeE)
from headT obtain T''
where varT: "e\<langle>i:T\<rangle> \<turnstile> Var n : T'' \<Rightarrow> Ts \<Rrightarrow> T'"
and argT: "e\<langle>i:T\<rangle> \<turnstile> a : T''"
by cases simp_all
from varT True have T: "T = T'' \<Rightarrow> Ts \<Rrightarrow> T'"
by cases auto
with uT have uT': "e \<turnstile> u : T'' \<Rightarrow> Ts \<Rrightarrow> T'" by simp
from T have "IT ((Var 0 \<degree>\<degree> map (\<lambda>t. lift t 0)
(map (\<lambda>t. t[u/i]) as))[(u \<degree> a[u/i])/0])"
proof (rule MI2)
from T have "IT ((lift u 0 \<degree> Var 0)[a[u/i]/0])"
proof (rule MI1)
have "IT (lift u 0)" by (rule lift_IT [OF uIT])
thus "IT (lift u 0 \<degree> Var 0)" by (rule app_Var_IT)
show "e\<langle>0:T''\<rangle> \<turnstile> lift u 0 \<degree> Var 0 : Ts \<Rrightarrow> T'"
proof (rule typing.App)
show "e\<langle>0:T''\<rangle> \<turnstile> lift u 0 : T'' \<Rightarrow> Ts \<Rrightarrow> T'"
by (rule lift_type) (rule uT')
show "e\<langle>0:T''\<rangle> \<turnstile> Var 0 : T''"
by (rule typing.Var) simp
qed
from Var have "?R a" by cases (simp_all add: Cons)
with argT uIT uT show "IT (a[u/i])" by simp
from argT uT show "e \<turnstile> a[u/i] : T''"
by (rule subst_lemma) simp
qed
thus "IT (u \<degree> a[u/i])" by simp
from Var have "listsp ?R as"
by cases (simp_all add: Cons)
moreover from argsT have "listsp ?ty as"
by (rule lists_typings)
ultimately have "listsp (\<lambda>t. ?R t \<and> ?ty t) as"
by simp
hence "listsp IT (map (\<lambda>t. lift t 0) (map (\<lambda>t. t[u/i]) as))"
(is "listsp IT (?ls as)")
proof induct
case Nil
show ?case by fastforce
next
case (Cons b bs)
hence I: "?R b" by simp
from Cons obtain U where "e\<langle>i:T\<rangle> \<turnstile> b : U" by fast
with uT uIT I have "IT (b[u/i])" by simp
hence "IT (lift (b[u/i]) 0)" by (rule lift_IT)
hence "listsp IT (lift (b[u/i]) 0 # ?ls bs)"
by (rule listsp.Cons) (rule Cons)
thus ?case by simp
qed
thus "IT (Var 0 \<degree>\<degree> ?ls as)" by (rule IT.Var)
have "e\<langle>0:Ts \<Rrightarrow> T'\<rangle> \<turnstile> Var 0 : Ts \<Rrightarrow> T'"
by (rule typing.Var) simp
moreover from uT argsT have "e \<tturnstile> map (\<lambda>t. t[u/i]) as : Ts"
by (rule substs_lemma)
hence "e\<langle>0:Ts \<Rrightarrow> T'\<rangle> \<tturnstile> ?ls as : Ts"
by (rule lift_types)
ultimately show "e\<langle>0:Ts \<Rrightarrow> T'\<rangle> \<turnstile> Var 0 \<degree>\<degree> ?ls as : T'"
by (rule list_app_typeI)
from argT uT have "e \<turnstile> a[u/i] : T''"
by (rule subst_lemma) (rule refl)
with uT' show "e \<turnstile> u \<degree> a[u/i] : Ts \<Rrightarrow> T'"
by (rule typing.App)
qed
with Cons True show ?thesis
by (simp add: comp_def)
qed
next
case False
from Var have "listsp ?R rs" by simp
moreover from nT obtain Ts where "e\<langle>i:T\<rangle> \<tturnstile> rs : Ts"
by (rule list_app_typeE)
hence "listsp ?ty rs" by (rule lists_typings)
ultimately have "listsp (\<lambda>t. ?R t \<and> ?ty t) rs"
by simp
hence "listsp IT (map (\<lambda>x. x[u/i]) rs)"
proof induct
case Nil
show ?case by fastforce
next
case (Cons a as)
hence I: "?R a" by simp
from Cons obtain U where "e\<langle>i:T\<rangle> \<turnstile> a : U" by fast
with uT uIT I have "IT (a[u/i])" by simp
hence "listsp IT (a[u/i] # map (\<lambda>t. t[u/i]) as)"
by (rule listsp.Cons) (rule Cons)
thus ?case by simp
qed
with False show ?thesis by (auto simp add: subst_Var)
qed
next
case (Lambda r e1 T'1 u1 i1)
assume "e\<langle>i:T\<rangle> \<turnstile> Abs r : T'"
and "\<And>e T' u i. PROP ?Q r e T' u i T"
with uIT uT show "IT (Abs r[u/i])"
by fastforce
next
case (Beta r a as e1 T'1 u1 i1)
assume T: "e\<langle>i:T\<rangle> \<turnstile> Abs r \<degree> a \<degree>\<degree> as : T'"
assume SI1: "\<And>e T' u i. PROP ?Q (r[a/0] \<degree>\<degree> as) e T' u i T"
assume SI2: "\<And>e T' u i. PROP ?Q a e T' u i T"
have "IT (Abs (r[lift u 0/Suc i]) \<degree> a[u/i] \<degree>\<degree> map (\<lambda>t. t[u/i]) as)"
proof (rule IT.Beta)
have "Abs r \<degree> a \<degree>\<degree> as \<rightarrow>\<^sub>\<beta> r[a/0] \<degree>\<degree> as"
by (rule apps_preserves_beta) (rule beta.beta)
with T have "e\<langle>i:T\<rangle> \<turnstile> r[a/0] \<degree>\<degree> as : T'"
by (rule subject_reduction)
hence "IT ((r[a/0] \<degree>\<degree> as)[u/i])"
using uIT uT by (rule SI1)
thus "IT (r[lift u 0/Suc i][a[u/i]/0] \<degree>\<degree> map (\<lambda>t. t[u/i]) as)"
by (simp del: subst_map add: subst_subst subst_map [symmetric])
from T obtain U where "e\<langle>i:T\<rangle> \<turnstile> Abs r \<degree> a : U"
by (rule list_app_typeE) fast
then obtain T'' where "e\<langle>i:T\<rangle> \<turnstile> a : T''" by cases simp_all
thus "IT (a[u/i])" using uIT uT by (rule SI2)
qed
thus "IT ((Abs r \<degree> a \<degree>\<degree> as)[u/i])" by simp
}
qed
qed
subsection {* Well-typed terms are strongly normalizing *}
lemma type_implies_IT:
assumes "e \<turnstile> t : T"
shows "IT t"
using assms
proof induct
case Var
show ?case by (rule Var_IT)
next
case Abs
show ?case by (rule IT.Lambda) (rule Abs)
next
case (App e s T U t)
have "IT ((Var 0 \<degree> lift t 0)[s/0])"
proof (rule subst_type_IT)
have "IT (lift t 0)" using `IT t` by (rule lift_IT)
hence "listsp IT [lift t 0]" by (rule listsp.Cons) (rule listsp.Nil)
hence "IT (Var 0 \<degree>\<degree> [lift t 0])" by (rule IT.Var)
also have "Var 0 \<degree>\<degree> [lift t 0] = Var 0 \<degree> lift t 0" by simp
finally show "IT \<dots>" .
have "e\<langle>0:T \<Rightarrow> U\<rangle> \<turnstile> Var 0 : T \<Rightarrow> U"
by (rule typing.Var) simp
moreover have "e\<langle>0:T \<Rightarrow> U\<rangle> \<turnstile> lift t 0 : T"
by (rule lift_type) (rule App.hyps)
ultimately show "e\<langle>0:T \<Rightarrow> U\<rangle> \<turnstile> Var 0 \<degree> lift t 0 : U"
by (rule typing.App)
show "IT s" by fact
show "e \<turnstile> s : T \<Rightarrow> U" by fact
qed
thus ?case by simp
qed
theorem type_implies_termi: "e \<turnstile> t : T \<Longrightarrow> termip beta t"
proof -
assume "e \<turnstile> t : T"
hence "IT t" by (rule type_implies_IT)
thus ?thesis by (rule IT_implies_termi)
qed
end
|
module TestTools
using Test
using BioSequences,
BioTools.BLAST
import BioCore.Testing:
get_bio_fmt_specimens
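# Fetch the shared BioJulia format-specimen files used as test inputs.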
fmtdir = get_bio_fmt_specimens()
if !Sys.iswindows() # temporarily disable the BLAST tests on Windows (issue: #197)
@testset "BLAST+ blastn" begin
na1 = dna"""
CGGACCAGACGGACACAGGGAGAAGCTAGTTTCTTTCATGTGATTGANAT
NATGACTCTACTCCTAAAAGGGAAAAANCAATATCCTTGTTTACAGAAGA
GAAACAAACAAGCCCCACTCAGCTCAGTCACAGGAGAGAN
"""
na2 = dna"""
CGGAGCCAGCGAGCATATGCTGCATGAGGACCTTTCTATCTTACATTATG
GCTGGGAATCTTACTCTTTCATCTGATACCTTGTTCAGATTTCAAAATAG
TTGTAGCCTTATCCTGGTTTTACAGATGTGAAACTTTCAA
"""
fna = joinpath(fmtdir, "FASTA", "f002.fasta")
nucldb = joinpath(fmtdir, "BLASTDB", "f002")
nuclresults = joinpath(fmtdir, "BLASTDB", "f002.xml")
@test typeof(blastn(na1, na2)) == Array{BLASTResult, 1}
@test typeof(blastn(na1, [na1, na2])) == Array{BLASTResult, 1}
@test typeof(blastn([na1, na2], [na1, na2])) == Array{BLASTResult, 1}
@test typeof(blastn(na1, nucldb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastn(na1, fna)) == Array{BLASTResult, 1}
@test typeof(blastn(fna, nucldb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastn([na1, na2], nucldb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastn([na1, na2], fna)) == Array{BLASTResult, 1}
@test typeof(blastn(fna, [na1, na2])) == Array{BLASTResult, 1}
end
@testset "BLAST+ blastp" begin
aa1 = aa"""
MWATLPLLCAGAWLLGVPVCGAAELSVNSLEKFHFKSWMSKHRKTYSTEE
YHHRLQTFASNWRKINAHNNGNHTFKMALNQFSDMSFAEIKHKYLWSEPQ
NCSATKSNYLRGTGPYPPSVDWRKKGNFVSPVKNQGACGS
"""
aa2 = aa"""
MWTALPLLCAGAWLLSAGATAELTVNAIEKFHFTSWMKQHQKTYSSREYS
HRLQVFANNWRKIQAHNQRNHTFKMGLNQFSDMSFAEIKHKYLWSEPQNC
SATKSNYLRGTGPYPSSMDWRKKGNVVSPVKNQGACGSCW
"""
faa = joinpath(fmtdir, "FASTA", "cysprot.fasta")
protdb = joinpath(fmtdir, "BLASTDB", "cysprot")
protresults = joinpath(fmtdir, "BLASTDB", "cysprot.xml")
@test typeof(blastp(aa1, aa2)) == Array{BLASTResult, 1}
@test typeof(blastp(aa1, [aa1, aa2])) == Array{BLASTResult, 1}
@test typeof(blastp([aa1, aa2], [aa1, aa2])) == Array{BLASTResult, 1}
@test typeof(blastp(aa1, protdb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastp(aa1, faa)) == Array{BLASTResult, 1}
@test typeof(blastp(faa, protdb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastp([aa1, aa2], protdb, db=true)) == Array{BLASTResult, 1}
@test typeof(blastp([aa1, aa2], faa)) == Array{BLASTResult, 1}
@test typeof(blastp(faa, [aa1, aa2])) == Array{BLASTResult, 1}
end
end # if !Sys.iswindows()
end # TestTools
|
from spartan import expr, util
from spartan.array import distarray
from spartan.util import Assert, divup
from test_common import with_ctx
import math
import numpy as np
import pickle
import parakeet
import test_common
import time
from spartan.expr import stencil
ONE_TILE = (10000, 10000, 10000, 10000)
@with_ctx
def test_stencil(ctx):
  st = time.time()

  IMG_SIZE = int(8 * math.sqrt(ctx.num_workers))
  FILT_SIZE = 8
  N = 8
  F = 32

  tile_size = util.divup(IMG_SIZE, math.sqrt(ctx.num_workers))
  images = expr.ones((N, 3, IMG_SIZE, IMG_SIZE),
                     dtype=np.float64,
                     tile_hint=(N, 3, tile_size, tile_size))
  filters = expr.ones((F, 3, FILT_SIZE, FILT_SIZE),
                      dtype=np.float64,
                      tile_hint=ONE_TILE)
  result = stencil.stencil(images, filters, 1)
  ed = time.time()
  print(ed - st)

@with_ctx
def test_local_convolve(ctx):
  F = 16
  filters = np.ones((F, 3, 5, 5))
  for N in [1, 4, 16]:
    images = np.ones((N, 3, 128, 128))
    st = time.time()
    stencil._convolve(images, filters)
    print(N, F, time.time() - st)
|
Formal statement is: lemma scaleR_le_0_iff: "a *\<^sub>R b \<le> 0 \<longleftrightarrow> 0 < a \<and> b \<le> 0 \<or> a < 0 \<and> 0 \<le> b \<or> a = 0" for b::"'a::ordered_real_vector" Informal statement is: For a real scalar $a$ and a vector $b$ in an ordered real vector space, the scalar multiple $a \cdot b$ is non-positive if and only if $a$ is positive and $b$ is non-positive, or $a$ is negative and $b$ is non-negative, or $a$ is zero.
|
## Sockets ##
mutable struct Socket
data::Ptr{Cvoid}
pollfd::_FDWatcher
function Socket(ctx::Context, typ::Integer)
p = ccall((:zmq_socket, libzmq), Ptr{Cvoid}, (Ptr{Cvoid}, Cint), ctx, typ)
if p == C_NULL
throw(StateError(jl_zmq_error_str()))
end
socket = new(p)
setfield!(socket, :pollfd, _FDWatcher(fd(socket), #=readable=#true, #=writable=#false))
finalizer(close, socket)
push!(getfield(ctx, :sockets), WeakRef(socket))
return socket
end
Socket(typ::Integer) = Socket(context(), typ)
end
function Socket(f::Function, args...)
socket = Socket(args...)
try
f(socket)
finally
close(socket)
end
end
Base.unsafe_convert(::Type{Ptr{Cvoid}}, s::Socket) = getfield(s, :data)
Base.isopen(socket::Socket) = getfield(socket, :data) != C_NULL
function Base.close(socket::Socket)
if isopen(socket)
close(getfield(socket, :pollfd), #=readable=#true, #=writable=#false)
rc = ccall((:zmq_close, libzmq), Cint, (Ptr{Cvoid},), socket)
setfield!(socket, :data, C_NULL)
if rc != 0
throw(StateError(jl_zmq_error_str()))
end
end
end
# Raw FD access
if Sys.isunix()
Base.fd(socket::Socket) = RawFD(socket.fd)
end
if Sys.iswindows()
using Base.Libc: WindowsRawSocket
Base.fd(socket::Socket) = WindowsRawSocket(convert(Ptr{Cvoid}, socket.fd))
end
Base.wait(socket::Socket) = wait(getfield(socket, :pollfd), readable=true, writable=false)
Base.notify(socket::Socket) = @preserve socket uv_pollcb(getfield(socket, :pollfd).handle, Int32(0), Int32(UV_READABLE))
function Sockets.bind(socket::Socket, endpoint::AbstractString)
rc = ccall((:zmq_bind, libzmq), Cint, (Ptr{Cvoid}, Ptr{UInt8}), socket, endpoint)
if rc != 0
throw(StateError(jl_zmq_error_str()))
end
end
function Sockets.connect(socket::Socket, endpoint::AbstractString)
rc=ccall((:zmq_connect, libzmq), Cint, (Ptr{Cvoid}, Ptr{UInt8}), socket, endpoint)
if rc != 0
throw(StateError(jl_zmq_error_str()))
end
end
|
The \eslmod{sqio} module handles input from unaligned sequence data
files, such as FASTA files.
\eslmod{sqio} can automatically recognize and parse several different
file formats, including FASTA, UniProt, Genbank, DDBJ, and EMBL.
Additionally, it can read individual unaligned sequences from multiple
alignment files in several different formats, including Stockholm,
Clustal, aligned FASTA, PSI-BLAST, and Phylip.
Sequences can be read from normal files, directly from the
\ccode{stdin} pipe, or from \ccode{gzip}-compressed files.
Sequence files can be automatically looked for in a list of one or
more database directories, specified by an environment variable (such
as \ccode{HMMERDB}).
Table~\ref{tbl:sqio_api} lists the functions in the \eslmod{sqio} API.
The module uses an \ccode{ESL\_SQFILE} object which works much like an
ANSI C \ccode{FILE}, maintaining information for an open sequence file
while it's being read.
% API table is auto generated by the Makefile,
% using autodoc -t esl_sqio.c
%
\input{apitables/esl_sqio_api}
\subsection{Example: reading sequences from a file}
Figure~\ref{fig:sqio_example_text} shows a program that opens a file, reads
sequences from it one at a time, then closes the file.
\begin{figure}
\input{cexcerpts/sqio_example_text}
\caption{Example of reading sequences from a file.}
\label{fig:sqio_example_text}
\end{figure}
A FASTA file named \ccode{seqfile} is opened for reading by calling
\ccode{esl\_sqfile\_Open(filename, format, env, \&sqfp)}, which
creates a new \ccode{ESL\_SQFILE} and returns it through the
\ccode{sqfp} pointer. If the \ccode{format} is passed as
\ccode{eslSQFILE\_UNKNOWN}, then the format of the file is
autodetected. Here, we bypass autodetection by asserting that the file
is in FASTA format by passing a \ccode{eslSQFILE\_FASTA} code. (See
below for a list of valid codes and formats.) The optional \ccode{env}
argument is described below too; here, we're passing \ccode{NULL} and
not using it.
Several things can go wrong in trying to open a sequence file that are
beyond the control of Easel or your application, so it's important
that you check the return code. \ccode{esl\_sqfile\_Open()} returns
\ccode{eslENOTFOUND} if the file can't be opened, and
\ccode{eslEFORMAT} if the file is empty, or if autodetection can't
determine its format.\footnote{Additionally, internal errors might be
thrown, which you should check for if you installed a nonfatal error
handler.}
The file is then read one sequence at a time by calling
\ccode{esl\_sqio\_Read(sqfp, sq)}. This function returns \ccode{eslOK}
if it read a new sequence, and leaves that sequence in the \ccode{sq}
object that the caller provided. When there is no more data in the
file, \ccode{esl\_sqio\_Read()} returns \ccode{eslEOF}.
If at any point the file does not appear to be in the proper format,
\ccode{esl\_sqio\_Read()} returns \ccode{eslEFORMAT}. The application
must check for this. The API can provide a little information about
what went wrong and where. \ccode{sqfp->filename} is the name of the
file that we were parsing (not necessarily the same as
\ccode{seqfile}; \ccode{sqfp->filename} can be a full pathname if we
used an \ccode{env} argument to look for \ccode{seqfile} in installed
database directories). The function \ccode{esl\_sqfile\_GetErrorBuf()}
should be called to get a pointer to the generated error message. The
buffer contains a brief explanatory message that is filled in when an
\ccode{eslEFORMAT} error occurs.
\footnote{Unlike in the MSA module, you don't get access to the
current line text; some of sqio's parsers use fast block-based
(\ccode{fread()}) input instead of line-based input.}
We can reuse the same \ccode{ESL\_SQ} object for subsequent sequences
by calling \ccode{esl\_sq\_Reuse()} on it when we're done with the
previous sequence. If we wanted to load a set of sequences, we'd
\ccode{\_Create()} an array of \ccode{ESL\_SQ} objects.
Finally, to clean up properly, an \ccode{ESL\_SQ} that was created is
destroyed with \ccode{esl\_sq\_Destroy(sq)}, and an \ccode{ESL\_SQFILE}
is closed with \ccode{esl\_sqfile\_Close()}.
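In condensed form, the whole idiom reads roughly as follows (a sketch
with error handling abbreviated; the excerpt in
Figure~\ref{fig:sqio_example_text} is authoritative):
\begin{cchunk}
ESL\_SQFILE *sqfp;
ESL\_SQ     *sq = esl\_sq\_Create();
int         status;

if (esl\_sqfile\_Open("seqfile", eslSQFILE\_FASTA, NULL, &sqfp) != eslOK)
  { /* handle eslENOTFOUND or eslEFORMAT */ }

while ((status = esl\_sqio\_Read(sqfp, sq)) == eslOK)
  { /* ... work with sq ... */ esl\_sq\_Reuse(sq); }
if (status != eslEOF)
  { /* parse failed with eslEFORMAT; see esl\_sqfile\_GetErrorBuf() */ }

esl\_sq\_Destroy(sq);
esl\_sqfile\_Close(sqfp);
\end{cchunk}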
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Digital sequence input mode}
Most Easel-based programs manipulate sequences in Easel's digital
sequence format, using \eslmod{alphabet}, as opposed to manipulating
them as plaintext. The \eslmod{sqio} reader can be used either in text
mode or digital mode. In text mode, you get the \ccode{sq->seq} field
of the \ccode{ESL\_SQ}; in digital mode, you get \ccode{sq->dsq}.
To use digital mode, both the \ccode{ESL\_SQFILE} reader and the
\ccode{ESL\_SQ} sequence object must be set to digital mode. The
reader needs it because its input map translates input characters
either to text characters or to internal digital residue codes,
flagging illegal characters as errors. The sequence object needs it
because either its \ccode{seq} (text) or \ccode{dsq} (digital) field
must be allocated. Both also carry a copy of
Figure~\ref{fig:sqio_example_digital} shows an example of the standard
idiom for opening files in digital mode, autoguessing their format and
alphabet by default, and allowing format and alphabet to be specified
on the command line.
\begin{figure}
\input{cexcerpts/sqio_example_digital}
\caption{Standard idiom for reading sequences in digital mode.}
\label{fig:sqio_example_digital}
\end{figure}
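In outline, the digital-mode idiom differs from the text-mode one only
in how the two objects are created (a sketch; we assume here that the
digital-mode constructors are \ccode{esl\_sqfile\_OpenDigital()} and
\ccode{esl\_sq\_CreateDigital()}, and the excerpt in
Figure~\ref{fig:sqio_example_digital} is authoritative):
\begin{cchunk}
ESL\_ALPHABET *abc = esl\_alphabet\_Create(eslAMINO); /* or eslDNA, eslRNA */
ESL\_SQFILE   *sqfp;
ESL\_SQ       *sq  = esl\_sq\_CreateDigital(abc);

int status = esl\_sqfile\_OpenDigital(abc, "seqfile", eslSQFILE\_FASTA, NULL, &sqfp);
/* the esl\_sqio\_Read() loop is unchanged; residues arrive in sq->dsq */
\end{cchunk}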
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Accepted formats}
Accepted unaligned sequence file formats (and their Easel format
codes) include:
\begin{tabular}{lll}
\textbf{name} & \textbf{code} & \textbf{description} \\
fasta & \ccode{eslSQFILE\_FASTA} & FASTA format \\
embl & \ccode{eslSQFILE\_EMBL} & EMBL DNA database format \\
genbank & \ccode{eslSQFILE\_GENBANK} & GenBank DNA database format \\
ddbj & \ccode{eslSQFILE\_DDBJ} & DDBJ DNA database format \\
uniprot & \ccode{eslSQFILE\_UNIPROT} & UniProt protein database format \\
\end{tabular}
The above names, case-insensitive, are what a user uses to specify a
format on a command line: i.e.\ \ccode{--informat fasta} or
\ccode{--informat FASTA}.
The codes are what you use as a developer to specify a format to an
Easel function call.
Additionally, there is a code \ccode{eslSQFILE\_UNKNOWN}. It
tells \ccode{esl\_sqfile\_Open()} to perform format
autodetection.\footnote{There are some other formats as well, which we
don't advertise because they're less well supported.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Reading from stdin and compressed files}
There are two special cases for input files.
The module can read sequence input from a stdin pipe. If the
\ccode{seqfile} argument is ``-'', \ccode{esl\_sqfile\_Open()} ``opens''
standard input (really, it just associates \ccode{stdin}, which is
always open, with the \ccode{ESL\_SQFILE}).
The module can read compressed sequence files. If the \ccode{seqfile}
argument to \ccode{esl\_sqfile\_Open()} ends in \ccode{.gz}, the file
is assumed to be compressed with \ccode{gzip}; instead of opening it
normally, \ccode{esl\_sqfile\_Open()} opens it as a pipe from
\ccode{gzip -dc}. Your system must support pipes to use
this.\footnote{Specifically, it must support the \ccode{popen()}
system call (POSIX.2 compliant operating systems do). The
\ccode{configure} script automatically checks this at compile-time
and defines \ccode{HAVE\_POPEN} appropriately.} Obviously, the user
must also have \ccode{gzip} installed and in their \ccode{PATH}.
For both special cases, the catch is that you can't use format
autodetection; you must provide a valid known format code when you
read from stdin or from a compressed file. Pipes are not rewindable,
and format autodetection currently relies on a two-pass algorithm: it
reads partway into the file to determine the format, then rewinds to
start parsing for real.\footnote{The \eslmod{msafile} module is more
advanced. Its parsers are based on the newer \eslmod{buffer} module
which provides rewindable input buffers even for stdin and pipes.}
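Concretely (a sketch; both calls use \ccode{esl\_sqfile\_Open()} exactly
as described above, with an explicit format code):
\begin{cchunk}
/* read FASTA from a stdin pipe */
esl\_sqfile\_Open("-", eslSQFILE\_FASTA, NULL, &sqfp);

/* read a gzip-compressed FASTA file (opened as a pipe from gzip -dc) */
esl\_sqfile\_Open("myseqs.fa.gz", eslSQFILE\_FASTA, NULL, &sqfp);
\end{cchunk}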
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% SRE: commented NCBI section out.
%% I don't believe that it works, or that it's up to date.
%% Needs testing.
%% \subsection{NCBI BLAST database support}
%% If sqio is augmented with the ncbi module, then the sqio API gains the
%% ability to read NCBI BLAST database formats in addition to ASCII file
%% formats. The sqio API remains exactly the same (the caller doesn't
%% have to use any msa module functions).
%% To open a BLAST database, the format must be supplied. If a format
%% of \ccode{eslSQFILE\_UNKNOWN} is specified, only ASCII based format,
%% i.e. FASTA, GENBANK, etc. will be opened.
%% When opening a BLAST database, the file name should not include any
%% extensions. If the \ccode{ESL\_ALPHABET} is not specified, Easel
%% will first try to open a protein database, followed by a DNA database
%% and finally a multi-volume database. The handling of a multi-volume
%% database is done through the alias file. The only directive in
%% the alias file is the DBLIST line, listing all the database volumes.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Adding a sequence parser}
New parsers for new formats can be plugged into \eslmod{sqio} without
any API changes. Existing Easel-based programs don't need code changes
to use the new parser.
A new parser will need a format type, a structure for parser specific
data, API functions and a hook into the \ccode{sqfile\_open} function.
The list of formats are defined in \ccode{esl\_sqio.h}. A new
\ccode{\#define} will be added to the existing formats:
\input{cexcerpts/sq_sqio_format}
A data structure for parser specific data will need to be added
to \ccode{ESL\_SQDATA}. This structure is a union of the
different parser specific data structures.
\input{cexcerpts/sq_sqio_data}
Finally, a set of parser specific function pointers need to be
defined. The functions in \ccode{esl\_sqio.c} in turn call these
function pointers. The \ccode{esl\_sqfile\_Open} function initializes
the function pointers to NULL, so if they are not set, an exception
will occur when the function is called. At a minimum, the function
should be defined to return an \ccode{eslEUNIMPLEMENTED}. Below is a
map of the function pointers to their respective functions (a minimal stub is sketched after the table).
\begin{tabular}{ll} \\
Function pointer & \eslmod{sqio} function \\ \hline
position & esl\_sqfile\_Position \\
close & esl\_sqfile\_Close \\
set\_digital & esl\_sqfile\_SetDigital \\
guess\_alphabet & esl\_sqfile\_GuessAlphabet \\
read & esl\_sqio\_Read \\
read\_info & esl\_sqio\_ReadInfo \\
read\_seq & esl\_sqio\_ReadSequence \\
read\_window & esl\_sqio\_ReadWindow \\
echo & esl\_sqio\_Echo \\
read\_block & esl\_sqio\_ReadBlock \\
open\_ssi & esl\_sqfile\_OpenSSI \\
pos\_by\_key & esl\_sqfile\_PositionByKey \\
pos\_by\_number & esl\_sqfile\_PositionByNumber \\
fetch & esl\_sqio\_Fetch \\
fetch\_info & esl\_sqio\_FetchInfo \\
fetch\_subseq & esl\_sqio\_FetchSubseq \\
is\_rewindable & esl\_sqfile\_IsRewindable \\
get\_error & esl\_sqfile\_GetErrorBuf \\
\end{tabular}
\bigskip
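For instance, a minimal stub for a hook that is not implemented yet
might look like the following (the \ccode{sqmyfmt\_} prefix and the
exact signature are illustrative only, not part of the API):
\begin{cchunk}
static int
sqmyfmt\_Position(ESL\_SQFILE *sqfp, off\_t offset)
{
  return eslEUNIMPLEMENTED;
}
\end{cchunk}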
A hook needs to be added to the function \ccode{sqfile\_open}.
This hook will try to open the specified file. If successful,
the \ccode{ESL\_SQFILE} structure should be filled in with function
pointers and the parser-specific data, and the open hook
returns \ccode{eslOK}. If the sequence files were not found by
the specific parser, \ccode{eslENOTFOUND} is returned and
the next parser tries to open the file. Below is an example
of code that first tries to open an NCBI BLAST database; if that
is not successful, the ASCII sequence parsers try to open the
file.
\begin{cchunk}
if (format == eslSQFILE\_NCBI && status == eslENOTFOUND)
status = esl\_sqncbi\_Open(sqfp->filename, sqfp->format, sqfp);
if (status == eslENOTFOUND)
status = esl\_sqascii\_Open(sqfp->filename, sqfp->format, sqfp);
\end{cchunk}
|
State Before: l : Type ?u.867277
m : Type u_4
n : Type u_1
o : Type ?u.867286
m' : o → Type ?u.867291
n' : o → Type ?u.867296
R : Type u_2
S : Type u_3
α : Type v
β : Type w
γ : Type ?u.867309
inst✝⁵ : NonUnitalNonAssocSemiring α
inst✝⁴ : Fintype n
inst✝³ : Monoid R
inst✝² : NonUnitalNonAssocSemiring S
inst✝¹ : DistribMulAction R S
inst✝ : SMulCommClass R S S
M : Matrix m n S
b : R
v : n → S
⊢ mulVec M (b • v) = b • mulVec M v State After: case h
l : Type ?u.867277
m : Type u_4
n : Type u_1
o : Type ?u.867286
m' : o → Type ?u.867291
n' : o → Type ?u.867296
R : Type u_2
S : Type u_3
α : Type v
β : Type w
γ : Type ?u.867309
inst✝⁵ : NonUnitalNonAssocSemiring α
inst✝⁴ : Fintype n
inst✝³ : Monoid R
inst✝² : NonUnitalNonAssocSemiring S
inst✝¹ : DistribMulAction R S
inst✝ : SMulCommClass R S S
M : Matrix m n S
b : R
v : n → S
i : m
⊢ mulVec M (b • v) i = (b • mulVec M v) i Tactic: ext i State Before: case h
l : Type ?u.867277
m : Type u_4
n : Type u_1
o : Type ?u.867286
m' : o → Type ?u.867291
n' : o → Type ?u.867296
R : Type u_2
S : Type u_3
α : Type v
β : Type w
γ : Type ?u.867309
inst✝⁵ : NonUnitalNonAssocSemiring α
inst✝⁴ : Fintype n
inst✝³ : Monoid R
inst✝² : NonUnitalNonAssocSemiring S
inst✝¹ : DistribMulAction R S
inst✝ : SMulCommClass R S S
M : Matrix m n S
b : R
v : n → S
i : m
⊢ mulVec M (b • v) i = (b • mulVec M v) i State After: no goals Tactic: simp only [mulVec, dotProduct, Finset.smul_sum, Pi.smul_apply, mul_smul_comm]
|
library(bcf)
# Uncomment the following for the modified bcf package
# source("bcf/R/RcppExports.R")
# source("bcf/R/bcf.R")
args <- commandArgs(TRUE)
ntrain <- as.double(args[1])
ncandidate <- as.double(args[2])
n_seqential <- as.double(args[3])
sigma <- as.double(args[4]) # observation noise of y
homogeneous <- as.double(args[5]) # 1 if true
linear <- as.character(args[6]) # 1 if linear
num_cv <- as.integer(args[7]) # cross-validation / random-seed index
which_experiment <- as.double(args[8]) # 1 or 2
print(paste0("Beginning with num_cv = ", num_cv))
# ntrain <- 50
# ncandidate <- 2000
# sigma <- 2
# homogeneous <- 1
# linear <- 1
# n_seqential <- 1
# which_experiment <- 2
n <- ntrain + ncandidate
set.seed(num_cv)
p <- 5
# Experiment
if (which_experiment == 1) {
source("bcf/util_experiment1.R")
# Create covariates x
x <- generate_x(n, p)
x_input <- dbarts::makeModelMatrixFromDataFrame(x)
# Compute tau, mu and pi
mu <- prognostic_func(x, linear)
tau <- treatment_func(x, homogeneous)
propensity <- 0.8 * pnorm(3 * mu / sd(mu) - 0.5 * x[, 1]) + 0.05 + 0.1 * runif(n)
z <- rbinom(n, rep(1, n), propensity)
} else if (which_experiment == 2) {
source("bcf/util_experiment2.R")
# Create covariates x
x <- generate_x(n, p)
x_input <- dbarts::makeModelMatrixFromDataFrame(x)
# Compute tau, mu and pi
# RIC
mu <- prognostic_func(x, linear)
tau <- treatment_func(x, mu)
propensity <- 0.6 * pnorm(mu - mean(mu), 0, 0.75 * sd(mu)) + 0.2
z <- rbinom(n, rep(1, n), propensity)
}
# Model:
# Y_i = f(x_i, Z_i) + epsilon_i
# epsilon_i ~ N(0, sigma^2)
# f(x_i, z_i) = mu(x_i) + tau(x_i) * z_i
Ey <- mu + tau * z
noiseVar <- sigma * sd(Ey)
y <- Ey + noiseVar*rnorm(n)
# If pi is unknown, we would need to estimate it here
# pihat <- propensity
library(nnet)
x.mod <- dbarts::makeModelMatrixFromDataFrame(data.frame(x))
fitz <- nnet(z~., data = x.mod, size = 4, rang = 0.1, maxit = 1000, abstol = 1.0e-8, decay = 5e-2, trace=FALSE)
pihat <- fitz$fitted.values
# BCF
# bcf_fit <- bcf(y[1:ntrain], z[1:ntrain], x_input[1:ntrain, ], x_input[1:ntrain, ], pihat[1:ntrain], nburn=2000, nsim=2000, nthin= 5)
bcf_fit <- bcf(y[1:ntrain], z[1:ntrain], x_input[1:ntrain, ], x_input[1:ntrain, ], pihat[1:ntrain], 10000, 3000)
# Get posterior samples of treatment effects
tau_post <- bcf_fit$tau
tauhat <- colMeans(tau_post)
# plot(tau[1:ntrain], tauhat); abline(0,1)
# Posterior of the averaged treatment effects
print(paste0("Mean of |CATE - CATE_hat|: ", mean(abs(tau[1:ntrain] - tauhat))))
if (homogeneous == 1 & which_experiment == 1){
# Sample ATE
y_treated <- bcf_fit$yhat
y_controled <- bcf_fit$yhat
y_treated[z == 0] <- (y_treated + tauhat)[z == 0]
y_controled[z == 1] <- (y_controled - tauhat)[z == 1]
sate <- mean(y_treated - y_controled)
true_sate <- mean(tau[1:ntrain])
print(paste0("True ATE: ", true_sate, " . Estimated: ", sate, ". Abs. diff: ", abs(sate - 3)))
} else {
y_treated <- bcf_fit$yhat
y_controled <- bcf_fit$yhat
y_treated[z == 0] <- (y_treated + tauhat)[z == 0]
y_controled[z == 1] <- (y_controled - tauhat)[z == 1]
sate <- mean(y_treated - y_controled)
true_sate <- mean(tau[1:ntrain])
print(paste0("True SATE: ", true_sate, ". Estimated: ", sate, ". Abs. diff: ", abs(sate - true_sate)))
}
# Gaussian processes
makeGPBQModelMatrix <- function(df, treatment) {
return(data.frame(df, treatment))
}
# Uncomment the following to exclude treatment z in the data input
# makeGPBQModelMatrix <- function(df) {
# return(dbarts::makeModelMatrixFromDataFrame(df))
# }
# The following includes treatment z in the data input
trainX <- makeGPBQModelMatrix(x[1:ntrain,], z[1:ntrain])
trainY <- y[1:ntrain]
candidateX <- makeGPBQModelMatrix(x[-(1:ntrain),], z[-(1:ntrain)])
candidateY <- y[-(1:ntrain)]
# Uncomment the following to exclude treatment z in the data input
# trainX <- makeGPBQModelMatrix(x[1:ntrain,])
# trainY <- y[1:ntrain]
# candidateX <- makeGPBQModelMatrix(x[-(1:ntrain),])
# candidateY <- y[-(1:ntrain)]
library(reticulate)
source("src/optimise_gp.R")
lengthscale <- optimise_gp_r(dbarts::makeModelMatrixFromDataFrame(trainX), trainY, kernel = "matern32", epochs = 500)
print("...Finished training for the lengthscale")
source("bcf/GPBQ.R")
GPBQResults <- computeGPBQEmpirical(
X=trainX,
Y=trainY,
candidateX=candidateX,
candidateY=candidateY,
kernel="matern32",
lengthscale=lengthscale,
epochs=n_seqential
)
# BART
source("bcf/BART.R")
BARTResults <- computeBARTWeighted(trainX, trainY, candidateX, candidateY, n_seqential, num_cv=num_cv, linear=linear)
print(paste0("True ATE: ", true_sate, " . Estimated: ", BARTResults$meanValueBART, ". Abs. diff: ", abs(BARTResults$meanValueBART - sate)))
# Bayesian Quadrature methods: with BART and Gaussian Process
print("Final ATE:")
print(c("Actual integral:", true_sate))
print(c("BCF integral:", sate))
print(c("BART integral:", BARTResults$meanValueBART[(n_seqential+1)]))
print(c("GP integral:", GPBQResults$meanValueGP[(n_seqential+1)]))
results <- data.frame(
"epochs" = c(1:(n_seqential+1)),
"BARTMean" = BARTResults$meanValueBART, "BARTsd" = BARTResults$standardDeviationBART,
"GPMean" = GPBQResults$meanValueGP, "GPsd" = sqrt(GPBQResults$varianceGP),
"actual" = rep(3, n_seqential+1)
)
write.csv(results, paste0("bcf/results/exp", which_experiment, "_", "sigma", sigma, "_", ntrain, "_", num_cv, ".csv"))
|
import Data.Vect
{-
Matrix type using type synonyms
-}
Matrix : Nat -> Nat -> Type
Matrix i j = Vect i (Vect j Double)
testMatrix : Matrix 2 3
testMatrix = [[0, 0, 0], [0, 0, 0]]
|
@testset "GS" begin
m = 20
n = 10
A = randn(m,n);
Aold = copy(A)
Qc,Rc = cgs(A)
@test all(Aold.==A) # make sure input is unchanged
Qt,Rt = cgs!(A)
@test all(Qt.==A) # check that input IS overwritten
@test maximum(abs.(Qt-Qc)) < 1e-15
@test norm(Aold-Qc*Rc,Inf)/norm(Aold,Inf) < 1e-14
@test norm(Qc'*Qc-I,Inf) < 1e-14
Aold = copy(A)
Qm,Rm = mgs(A)
@test all(Aold.==A) # make sure input is unchanged
Qt,Rt = mgs!(A)
@test all(Qt.==A) # check that input IS overwritten
@test maximum(abs.(Qt-Qm)) < 1e-15
@test norm(Aold-Qm*Rm,Inf)/norm(Aold,Inf) < 1e-14
@test norm(Qm'*Qm-I,Inf) < 1e-14
end
|
State Before: α : Type u_1
inst✝³ : Nonempty α
inst✝² : Preorder α
inst✝¹ : IsDirected α fun x x_1 => x ≤ x_1
β : Type u_2
inst✝ : Fintype β
f : β → α
⊢ BddAbove (Set.range f) State After: case intro
α : Type u_1
inst✝³ : Nonempty α
inst✝² : Preorder α
inst✝¹ : IsDirected α fun x x_1 => x ≤ x_1
β : Type u_2
inst✝ : Fintype β
f : β → α
M : α
hM : ∀ (i : β), f i ≤ M
⊢ BddAbove (Set.range f) Tactic: obtain ⟨M, hM⟩ := Fintype.exists_le f State Before: case intro
α : Type u_1
inst✝³ : Nonempty α
inst✝² : Preorder α
inst✝¹ : IsDirected α fun x x_1 => x ≤ x_1
β : Type u_2
inst✝ : Fintype β
f : β → α
M : α
hM : ∀ (i : β), f i ≤ M
⊢ BddAbove (Set.range f) State After: case intro
α : Type u_1
inst✝³ : Nonempty α
inst✝² : Preorder α
inst✝¹ : IsDirected α fun x x_1 => x ≤ x_1
β : Type u_2
inst✝ : Fintype β
f : β → α
M : α
hM : ∀ (i : β), f i ≤ M
a : α
ha : a ∈ Set.range f
⊢ a ≤ M Tactic: refine' ⟨M, fun a ha => _⟩ State Before: case intro
α : Type u_1
inst✝³ : Nonempty α
inst✝² : Preorder α
inst✝¹ : IsDirected α fun x x_1 => x ≤ x_1
β : Type u_2
inst✝ : Fintype β
f : β → α
M : α
hM : ∀ (i : β), f i ≤ M
a : α
ha : a ∈ Set.range f
⊢ a ≤ M State After: case intro.intro
α : Type u_1
inst✝³ : Nonempty α
inst✝² : Preorder α
inst✝¹ : IsDirected α fun x x_1 => x ≤ x_1
β : Type u_2
inst✝ : Fintype β
f : β → α
M : α
hM : ∀ (i : β), f i ≤ M
b : β
⊢ f b ≤ M Tactic: obtain ⟨b, rfl⟩ := ha State Before: case intro.intro
α : Type u_1
inst✝³ : Nonempty α
inst✝² : Preorder α
inst✝¹ : IsDirected α fun x x_1 => x ≤ x_1
β : Type u_2
inst✝ : Fintype β
f : β → α
M : α
hM : ∀ (i : β), f i ≤ M
b : β
⊢ f b ≤ M State After: no goals Tactic: exact hM b
|
module Formalization.LambdaCalculus.Semantics.Reduction where
import Lvl
open import Data
open import Formalization.LambdaCalculus
open import Formalization.LambdaCalculus.SyntaxTransformation
open import Numeral.Natural
open import Numeral.Finite
open import Relator.ReflexiveTransitiveClosure
open import Syntax.Number
open import Type
private variable d d₁ d₂ : ℕ
private variable f g x y : Term(d)
-- β-reduction (beta) with its compatible closure over `Apply`.
-- Reduces a term of form `f(x)` to `f[0 ≔ x]`.
data _β⇴_ : Term(d₁) → Term(d₂) → Type{1} where
β : {f : Term(𝐒(d))}{x : Term(d)} → (Apply(Abstract(f))(x) β⇴ substituteVar0(x)(f))
cong-applyₗ : (f β⇴ g) → (Apply f(x) β⇴ Apply g(x)) -- TODO: cong-applyₗ and cong-applyᵣ can be applied in any order, but many evaluation strategies have a fixed order. How should this be represented?
cong-applyᵣ : (x β⇴ y) → (Apply f(x) β⇴ Apply f(y))
cong-abstract : (x β⇴ y) → (Abstract x β⇴ Abstract y) -- TODO: Sometimes this is not included, specifically for the call by value evaluation strategy? But it seems to be required for the encoding of ℕ?
-- η-reduction (eta). (TODO: May require more introductions like β have)
-- Reduces a term of form `x ↦ f(x)` to `f`.
data _η⇴_ : Term(d₁) → Term(d₂) → Type{1} where
η : (Abstract(Apply(f)(Var(maximum))) η⇴ f)
-- Reduction of expressions (TODO: May require more introductions like β have)
data _⇴_ : Term(d₁) → Term(d₂) → Type{1} where
β : (Apply(Abstract(f))(x) ⇴ substituteVar0(x)(f))
η : (Abstract(Apply(f)(Var(maximum))) ⇴ f)
_β⇴*_ : Term(d) → Term(d) → Type
_β⇴*_ = ReflexiveTransitiveClosure(_β⇴_)
_β⥈_ : Term(d) → Term(d) → Type
_β⥈_ = SymmetricClosure(_β⇴_)
_β⥈*_ : Term(d) → Term(d) → Type
_β⥈*_ = ReflexiveTransitiveClosure(_β⥈_)
|
-- ------------------------------------------------------------- [ Options.idr ]
-- Module : Options.idr
-- Copyright : (c) Jan de Muijnck-Hughes
-- License : see LICENSE
-- --------------------------------------------------------------------- [ EOH ]
module Frigg.Options
-- ----------------------------------------------------------------- [ Imports ]
import Effects
import Effect.Exception
import ArgParse
import XML.DOM
import XML.Reader
import Freyja.Convert
%access export
-- -------------------------------------------------------------------- [ Mode ]
public export
data FriggMode = Read | Templ | Conv | REPL | VERS | HELP | Sif
-- ----------------------------------------------------------------- [ Options ]
public export
record FriggOpts where
constructor MkFriggOpts
mode : Maybe FriggMode
patt : Maybe String
weights : Maybe String
gscale : Maybe String
to : Maybe FreyjaOutFormat
out : Maybe String
banner : Bool
export
mkDefOpts : FriggOpts
mkDefOpts = MkFriggOpts
(Just HELP) Nothing Nothing Nothing Nothing Nothing
True
Default FriggOpts where
default = mkDefOpts
convOpts : Arg -> FriggOpts -> Maybe FriggOpts
convOpts (Files (x::xs)) o = Nothing
convOpts (KeyValue k v) o =
case k of
"pattern" => Just $ record {patt = Just v} o
"weights" => Just $ record {weights = Just v} o
"gscale" => Just $ record {gscale = Just v} o
"to" => Just $ record {to = readOutFMT v} o
otherwise => Nothing
convOpts (Flag x) o =
case x of
"template" => Just $ record {mode = Just Templ} o
"read" => Just $ record {mode = Just Read} o
"sif" => Just $ record {mode = Just Sif} o
"help" => Just $ record {mode = Just HELP} o
"version" => Just $ record {mode = Just VERS} o
"conv" => Just $ record {mode = Just Conv} o
"repl" => Just $ record {mode = Just REPL} o
"nobanner" => Just $ record {banner = False} o
otherwise => Nothing
export
friggHelpStr : String
friggHelpStr = """Frigg (C) Jan de Muijnck-Hughes 2015
Available Options:
Flag | Description
---------------------|----------------------------------------------------------
--pattern="<fname>" | The pattern to be analysed.
--weights="<fname>" | A problem specification.
--gscale="<fname>"  | A solution specification.
--to="<fmt>" | The output format.
--sif | Evaluate embedded sif model.
--help | Display help
--version | Display version
--eval | Evaluate problem solution pairing
--conv | Convert to problem solution pairing
--repl | REPL
--nobanner | Don't display banner
"""
-- --------------------------------------------------------------------- [ EOF ]
|
\chapter{Technology Stack}
\label{stack:introduction}
\input{stack/maven}
\input{stack/bundles}
|
struct CircularVector{T}
v::Vector{T}
l::Int
CircularVector{T}(l::Integer) where T = new(zeros(T, l), l)
end
function min_abs_diff(v::CircularVector{T}) where T
val = Inf
for i in 1:Base.length(v)
val = min(val, abs(v[i] - v[i-1]))
end
return val
end
function max_abs_diff(v::CircularVector{T}) where T
val = 0.0
for i in 1:Base.length(v)
val = max(val, abs(v[i] - v[i-1]))
end
return val
end
function Base.getindex(V::CircularVector{T}, i::Int) where T
return V.v[mod1(i, V.l)]
end
function Base.setindex!(V::CircularVector{T}, val::T, i::Int) where T
V.v[mod1(i, V.l)] = val
end
function Base.length(V::CircularVector{T}) where T
return V.l
end
Base.@kwdef mutable struct Options
# Printing options
log_verbose::Bool = false
log_freq::Int = 1000
timer_verbose::Bool = false
timer_file::Bool = false
disable_julia_logger = true
# time options
time_limit::Float64 = 3600_00. #100 hours
warn_on_limit::Bool = false
extended_log::Bool = false
extended_log2::Bool = false
log_repeat_header::Bool = false
# Default tolerances
tol_gap::Float64 = 1e-4
tol_feasibility::Float64 = 1e-4
tol_feasibility_dual::Float64 = 1e-4
tol_primal::Float64 = 1e-4
tol_dual::Float64 = 1e-4
tol_psd::Float64 = 1e-7
tol_soc::Float64 = 1e-7
check_dual_feas::Bool = false
check_dual_feas_freq::Int = 1000
max_obj::Float64 = 1e20
min_iter_max_obj::Int = 10
# infeasibility check
min_iter_time_infeas::Int = 1000
infeas_gap_tol::Float64 = 1e-4
infeas_limit_gap_tol::Float64 = 1e-1
infeas_stable_gap_tol::Float64 = 1e-4
infeas_feasibility_tol::Float64 = 1e-4
infeas_stable_feasibility_tol::Float64 = 1e-8
certificate_search::Bool = true
certificate_obj_tol::Float64 = 1e-1
certificate_fail_tol::Float64 = 1e-8
# Bounds on beta (dual_step / primal_step) [larger bounds may lead to numerical inaccuracy]
min_beta::Float64 = 1e-5
max_beta::Float64 = 1e+5
initial_beta::Float64 = 1.
# Adaptive primal-dual steps parameters [adapt_decay above .7 may lead to slower convergence]
initial_adapt_level::Float64 = .9
adapt_decay::Float64 = .8
adapt_window::Int64 = 50
# PDHG parameters
convergence_window::Int = 200
convergence_check::Int = 50
max_iter::Int = 0
min_iter::Int = 40
divergence_min_update::Int = 50
max_iter_lp::Int = 10_000_000
max_iter_conic::Int = 1_000_000
max_iter_local::Int = 0 #ignores user setting
advanced_initialization::Bool = true
# Linesearch parameters
line_search_flag::Bool = true
max_linsearch_steps::Int = 5000
delta::Float64 = .9999
initial_theta::Float64 = 1.
linsearch_decay::Float64 = .75
# Spectral decomposition parameters
full_eig_decomp::Bool = false
max_target_rank_krylov_eigs::Int = 16
min_size_krylov_eigs::Int = 100
warm_start_eig::Bool = true
rank_increment::Int = 1 # 0=multiply, 1 = add
rank_increment_factor::Int = 1 # 0 multiply, 1 = add
# eigsolver selection
#=
1: Arpack [dsaupd] (tipically non-deterministic)
2: KrylovKit [eigsolve/Lanczos] (DEFAULT)
=#
eigsolver::Int = 2
eigsolver_min_lanczos::Int = 25
eigsolver_resid_seed::Int = 1234
# Arpack
# note that Arpack is Non-deterministic
# (https://github.com/mariohsouto/ProxSDP.jl/issues/69)
arpack_tol::Float64 = 1e-10
#=
0: arpack random [usually faster - NON-DETERMINISTIC - slightly]
1: all ones [???]
2: julia random uniform (eigsolver_resid_seed) [medium for DETERMINISTIC]
3: julia normalized random normal (eigsolver_resid_seed) [best for DETERMINISTIC]
=#
arpack_resid_init::Int = 3
arpack_reset_resid::Bool = true # true for determinism
# larger is more stable to converge and more deterministic
arpack_max_iter::Int = 10_000
# see remark for of dsaupd
# KrylovKit
krylovkit_reset_resid::Bool = false
krylovkit_resid_init::Int = 3
krylovkit_tol::Float64 = 1e-12
krylovkit_max_iter::Int = 100
krylovkit_eager::Bool = false
krylovkit_verbose::Int = 0
# Reduce rank [warning: heuristics]
reduce_rank::Bool = false
rank_slack::Int = 3
full_eig_freq::Int = 10000000
full_eig_len::Int = 0
# equilibration parameters
equilibration::Bool = false
equilibration_iters::Int = 1000
equilibration_lb::Float64 = -10.0
equilibration_ub::Float64 = +10.0
equilibration_limit::Float64 = 0.9
equilibration_force::Bool = false
# spectral norm [using exact norm via svds may result in nondeterministic behavior]
approx_norm::Bool = true
end
mutable struct AffineSets
n::Int # Size of primal variables
p::Int # Number of linear equalities
m::Int # Number of linear inequalities
extra::Int # Number of adition linear equalities (for disjoint cones)
A::SparseMatrixCSC{Float64,Int64}
G::SparseMatrixCSC{Float64,Int64}
b::Vector{Float64}
h::Vector{Float64}
c::Vector{Float64}
end
mutable struct SDPSet
vec_i::Vector{Int}
mat_i::Vector{Int}
tri_len::Int
sq_len::Int
sq_side::Int
end
mutable struct SOCSet
idx::Vector{Int}
len::Int
end
mutable struct ConicSets
sdpcone::Vector{SDPSet}
socone::Vector{SOCSet}
end
mutable struct CPResult
status::Int
status_string::String
primal::Vector{Float64}
dual_cone::Vector{Float64}
dual_eq::Vector{Float64}
dual_in::Vector{Float64}
slack_eq::Vector{Float64}
slack_in::Vector{Float64}
primal_residual::Float64
dual_residual::Float64
objval::Float64
dual_objval::Float64
gap::Float64
time::Float64
final_rank::Int
primal_feasible_user_tol::Bool
dual_feasible_user_tol::Bool
certificate_found::Bool
end
mutable struct PrimalDual
x::Vector{Float64}
x_old::Vector{Float64}
y::Vector{Float64}
y_old::Vector{Float64}
PrimalDual(aff::AffineSets) = new(
zeros(aff.n), zeros(aff.n), zeros(aff.m+aff.p), zeros(aff.m+aff.p)
)
end
mutable struct WarmStart
x::Vector{Float64}
y_eq::Vector{Float64}
y_in::Vector{Float64}
end
mutable struct Residuals
dual_gap::CircularVector{Float64}
prim_obj::CircularVector{Float64}
dual_obj::CircularVector{Float64}
equa_feasibility::Float64
ineq_feasibility::Float64
feasibility::CircularVector{Float64}
primal_residual::CircularVector{Float64}
dual_residual::CircularVector{Float64}
comb_residual::CircularVector{Float64}
Residuals(window::Int) = new(
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window),
.0,
.0,
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window),
CircularVector{Float64}(2*window)
)
end
const ViewVector = SubArray#{Float64, 1, Vector{Float64}, Tuple{UnitRange{Int}}, true}
const ViewScalar = SubArray#{Float64, 1, Vector{Float64}, Tuple{Int}, true}
mutable struct AuxiliaryData
m::Vector{Symmetric{Float64,Matrix{Float64}}}
Mty::Vector{Float64}
Mty_old::Vector{Float64}
Mx::Vector{Float64}
Mx_old::Vector{Float64}
y_half::Vector{Float64}
y_temp::Vector{Float64}
soc_v::Vector{ViewVector}
soc_s::Vector{ViewScalar}
function AuxiliaryData(aff::AffineSets, cones::ConicSets)
new(
[Symmetric(zeros(sdp.sq_side, sdp.sq_side), :U) for sdp in cones.sdpcone],
zeros(aff.n), zeros(aff.n),
zeros(aff.p+aff.m), zeros(aff.p+aff.m),
zeros(aff.p+aff.m), zeros(aff.p+aff.m),
ViewVector[], ViewScalar[]
)
end
end
mutable struct Matrices
M::SparseMatrixCSC{Float64,Int64}
Mt::SparseMatrixCSC{Float64,Int64}
c::Vector{Float64}
Matrices(M, Mt, c) = new(M, Mt, c)
end
mutable struct Params
current_rank::Vector{Int}
target_rank::Vector{Int}
rank_update::Int
update_cont::Int
min_eig::Vector{Float64}
iter::Int
stop_reason::Int
stop_reason_string::String
iteration::Int
primal_step::Float64
primal_step_old::Float64
dual_step::Float64
theta::Float64
beta::Float64
adapt_level::Float64
window::Int
time0::Float64
norm_c::Float64
norm_b::Float64
norm_h::Float64
sqrt2::Float64
dual_feasibility::Float64
dual_feasibility_check::Bool
certificate_search::Bool
certificate_search_min_iter::Int
certificate_found::Bool
# solution backup
Params() = new()
end
|
module Main
test : List (Maybe Integer)
test = [Just 1]
-- using (.) from prelude
test1 : (List . Maybe) Integer
test1 = [Just 1]
-- using (.) and ($) from prelude
test2 : List . Maybe $ Integer
test2 = [Just 1]
main : IO ()
main = do
print test
print test1
print test2
|
#include <sleip/dynamic_array.hpp>
#include <boost/core/lightweight_test.hpp>
void
test_empty()
{
auto a = sleip::dynamic_array<int>();
BOOST_TEST(a.empty());
a = sleip::dynamic_array<int>{1, 2, 3};
BOOST_TEST(!a.empty());
}
void
test_size()
{
auto a = sleip::dynamic_array<int>();
BOOST_TEST_EQ(a.size(), 0);
a = sleip::dynamic_array<int>{1, 2, 3};
BOOST_TEST_EQ(a.size(), 3);
}
void
test_max_size()
{
auto a = sleip::dynamic_array<int>();
BOOST_TEST_EQ(a.max_size(), std::size_t(-1));
}
int
main()
{
test_empty();
test_size();
test_max_size();
return boost::report_errors();
}
|
Golden Gate University offers a unique experience for graduate students across the country. We offer a variety of options for instruction to best suit all of our candidates, so every student can feel like they are working up to their potential. With an array of specialized instruction, online education, and accelerated degree programs, GGU gives students the flexibility to attend school full time or part time, in person at one of our teaching centers or even 100% online. We also provide top-tier career placement services and a variety of alumni network perks, so we continue to work with our graduates long after they receive their degrees.
Returning to school is convenient at Golden Gate University. Here's all you need to know in order to get started in a degree or certificate program.
If you would like to begin or advance a career in business, taxation, accounting, or psychology, we invite you to explore our degrees and certificates.
GGU is an official Military Friendly School® and has been a proud member of the Yellow Ribbon Program since its inception in August 2009.
About 17% of GGU students are international coming from more than 50 countries. GGU is dedicated to helping our international students from start to finish from application to graduation.
|
[STATEMENT]
lemma subst_subst[simp]:
assumes \<phi>[simp]: "\<phi> \<in> fmla" and t[simp]:"t \<in> trm" and x[simp]:"x \<in> var" and y[simp]:"y \<in> var"
assumes yy: "x \<noteq> y" "y \<notin> Fvars \<phi>"
shows "subst (subst \<phi> (Var y) x) t y = subst \<phi> t x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. subst (subst \<phi> (Var y) x) t y = subst \<phi> t x
[PROOF STEP]
using subst_compose_eq_or[OF \<phi> _ t x y, of "Var y"]
[PROOF STATE]
proof (prove)
using this:
\<lbrakk>Var y \<in> trm; x = y \<or> y \<notin> Fvars \<phi>\<rbrakk> \<Longrightarrow> subst (subst \<phi> (Var y) x) t y = subst \<phi> (substT (Var y) t y) x
goal (1 subgoal):
1. subst (subst \<phi> (Var y) x) t y = subst \<phi> t x
[PROOF STEP]
using subst_notIn yy
[PROOF STATE]
proof (prove)
using this:
\<lbrakk>Var y \<in> trm; x = y \<or> y \<notin> Fvars \<phi>\<rbrakk> \<Longrightarrow> subst (subst \<phi> (Var y) x) t y = subst \<phi> (substT (Var y) t y) x
\<lbrakk>?\<phi> \<in> fmla; ?t \<in> trm; ?x \<in> var; ?x \<notin> Fvars ?\<phi>\<rbrakk> \<Longrightarrow> subst ?\<phi> ?t ?x = ?\<phi>
x \<noteq> y
y \<notin> Fvars \<phi>
goal (1 subgoal):
1. subst (subst \<phi> (Var y) x) t y = subst \<phi> t x
[PROOF STEP]
by simp
|
#ifndef RPTOGETHER_SERVER_SAFEBEASTWEBSOCKETBACKEND_HPP
#define RPTOGETHER_SERVER_SAFEBEASTWEBSOCKETBACKEND_HPP
#include <boost/beast/ssl.hpp>
#include <RpT-Network/BeastWebsocketBackendBase.inl>
/**
* @file SafeBeastWebsocketBackend.hpp
*/
namespace RpT::Network {
/**
* @brief Implementation for secure HTTPS using SSL TCP underlying stream
*
* @author ThisALV, https://github.com/ThisALV
*/
class SafeBeastWebsocketBackend : public BeastWebsocketBackendBase<boost::beast::ssl_stream<boost::beast::tcp_stream>> {
private:
// Context providing crypto TLS features
boost::asio::ssl::context tls_context_;
/// Takes Websocket stream from `openWebsocketStream()` implementation to call `openSafeWebsocketLayer()` with open
/// SSL layer
void openSecureLayer(WebsocketStream new_client_stream);
/// Takes Websocket stream from `openSecureLayer()` to call `addClientStream()` with open WSS layer
void openSafeWebsocketLayer(WebsocketStream new_client_stream);
protected:
/// Takes base TCP socket to build TCP stream, then build SSL stream using TLS features and uses it to open
/// Websocket stream
void openWebsocketStream(boost::asio::ip::tcp::socket new_client_connection) final;
public:
/**
* @brief Calls superclass constructor then initializes TLS features with given config files paths
*
* @param certificate_file Path to PEM certificate file
* @param private_key_file Path to PEM private key file
* @param local_endpoint Local server endpoint to be listening on
* @param logging_context Context providing logging features
*
* @throws boost::system::system_error Error thrown by TLS features initialization
*/
SafeBeastWebsocketBackend(const std::string& certificate_file, const std::string& private_key_file,
const boost::asio::ip::tcp::endpoint& local_endpoint,
Utils::LoggingContext& logging_context);
};
}
#endif //RPTOGETHER_SERVER_SAFEBEASTWEBSOCKETBACKEND_HPP
|
import random
import numpy as np
from self_supervised_3d_tasks.preprocessing.utils.crop import crop_patches, crop_patches_3d
from self_supervised_3d_tasks.preprocessing.utils.pad import pad_to_final_size_3d, pad_to_final_size_2d
def preprocess_image(image, is_training, patches_per_side, patch_jitter, permutations, mode3d):
    # pick a random permutation index; it doubles as the classification label
    label = random.randint(0, len(permutations) - 1)

    if mode3d:
        patches = crop_patches_3d(image, is_training, patches_per_side, patch_jitter)
    else:
        patches = crop_patches(image, is_training, patches_per_side, patch_jitter)

    # one-hot encode the label
    b = np.zeros((len(permutations),))
    b[label] = 1

    # reorder the patches according to the chosen permutation
    return np.array(patches)[np.array(permutations[label])], np.array(b)


def preprocess(batch, patches_per_side, patch_jitter, permutations, is_training=True, mode3d=False):
    xs = []
    ys = []

    for image in batch:
        x, y = preprocess_image(image, is_training, patches_per_side, patch_jitter, permutations, mode3d)
        xs.append(x)
        ys.append(y)

    return np.stack(xs), np.stack(ys)


def preprocess_image_crop_only(image, patches_per_side, is_training, mode3d):
    if mode3d:
        patches = crop_patches_3d(image, is_training, patches_per_side, 0)
    else:
        patches = crop_patches(image, is_training, patches_per_side, 0)
    return np.stack(patches)


def preprocess_crop_only(batch, patches_per_side, is_training=True, mode3d=False):
    xs = []
    for image in batch:
        x = preprocess_image_crop_only(image, patches_per_side, is_training, mode3d)
        xs.append(x)
    return np.stack(xs)


def preprocess_image_pad(patches, patch_dim, mode3d):
    result = []
    for patch in patches:
        if mode3d:
            patch = pad_to_final_size_3d(patch, patch_dim)
        else:
            # zero padding
            patch = pad_to_final_size_2d(patch, patch_dim)
        result.append(patch)
    return np.stack(result)


def preprocess_pad(batch, patch_dim, mode3d=False):
    xs = []
    for patches in batch:
        x = preprocess_image_pad(patches, patch_dim, mode3d)
        xs.append(x)
    return np.stack(xs)
|
\subsection{Hybrid orbital representation}
\label{sec:spo_hybrid}
The hybrid representation of the single particle orbitals combines a localized atomic basis set around atomic cores and B-splines in the interstitial regions to reduce memory use while retaining high evaluation speed and either retaining or increasing overall accuracy. Full details are provided in Ref.~\cite{Luo2018hyb}, and \textbf{users of this feature are kindly requested to cite this paper}.
In practice, we have seen that a \ixml{meshfactor} of 0.5 is often possible and achieves large memory savings.
Figure~\ref{fig:hybridrep} illustrates how the regions are assigned. Orbitals within region A are computed as
\[
\phi^A_n({\bf r})=\sum_{l,m}R_{n,l,m}(r)Y_{l,m}(\hat{r})
\]
Orbitals in region C are computed as the regular B-spline basis described in subsection~\ref{sec:spo_spline} above. The region B interpolates between A and C as
\begin{align}
\phi^B_n({\bf r}) &= S(r) \phi^A_n({\bf r}) + (1-S(r))\phi^C_n({\bf r}) .\\
S(r) &= \frac{1}{2}-\frac{1}{2}\tanh\left[\alpha\left(\frac{r-r_{\rm A/B}}{r_{\rm B/C}-r_{\rm A/B}}-\frac{1}{2}\right)\right] .
\end{align}
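Note the limiting behavior of the smoothing function: for a
sufficiently sharp switching parameter $\alpha$,
$S(r_{\rm A/B})\approx 1$ so that $\phi^B_n\approx\phi^A_n$ at the A/B
boundary, while $S(r_{\rm B/C})\approx 0$ so that
$\phi^B_n\approx\phi^C_n$ at the B/C boundary, giving a smooth
crossover between the two representations.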
\begin{figure}
\centering
\includegraphics[trim={0 152 0 0},clip,width=0.45\columnwidth]{./figures/hybrid_new.jpg}
\qquad
\includegraphics[trim={0 2 0 150},clip,width=0.45\columnwidth]{./figures/hybrid_new.jpg}
\caption{Regular and hybrid orbital representation. Regular B-spline representation (left panel) contains only one region and a sufficiently fine mesh to resolve orbitals near the nucleus. The hybrid orbital representation (right panel) contains near nucleus regions (A) where spherical harmonics and radial functions are used, buffers or interpolation regions (B), and an interstitial region (C) where a coarse B-spline mesh is used.}
\label{fig:hybridrep}
\end{figure}
To enable the hybrid orbital representation, the input XML must include the attribute \ixml{hybridrep="yes"}, as shown in Listing~\ref{listing:hybridrep}.
\begin{lstlisting}[style=QMCPXML,caption=Hybrid orbital representation input example.\label{listing:hybridrep}]
<determinantset type="bspline" source="i" href="pwscf.h5"
tilematrix="1 1 3 1 2 -1 -2 1 0" twistnum="-1" gpu="yes" meshfactor="0.8"
twist="0 0 0" precision="single" hybridrep="yes">
...
</determinantset>
\end{lstlisting}
Second, the information describing the atomic regions is required in the particle set, as shown in Listing~\ref{listing:hybridrep_particleset}.
\begin{lstlisting}[style=QMCPXML,caption=particleset elements for ions with information needed by hybrid orbital representation.\label{listing:hybridrep_particleset}]
<group name="Ni">
<parameter name="charge"> 18 </parameter>
<parameter name="valence"> 18 </parameter>
<parameter name="atomicnumber" > 28 </parameter>
<parameter name="cutoff_radius" > 1.6 </parameter>
<parameter name="inner_cutoff" > 1.3 </parameter>
<parameter name="lmax" > 5 </parameter>
<parameter name="spline_radius" > 1.8 </parameter>
<parameter name="spline_npoints"> 91 </parameter>
</group>
\end{lstlisting}
The parameters specific to the hybrid representation are listed in the following table:
\begin{table}[h]
\centering
\begin{tabularx}{\textwidth}{l l l l l l }
\hline
\multicolumn{6}{l}{\texttt{parameter} element} \\
\hline
\multicolumn{2}{l}{Attribute:} & \multicolumn{4}{l}{}\\
& \bfseries Name & \bfseries Datatype & \bfseries Values & \bfseries Default & \bfseries Description \\
& \texttt{cutoff\_radius} & Real & $>=0.0$ & \textit{None} & Cutoff radius for B/C boundary \\
& \texttt{lmax} & Integer & $>=0$ & \textit{None} & Largest angular channel \\
& \texttt{inner\_cutoff} & Real & $>=0.0$ & Dep. & Cutoff radius for A/B boundary \\
& \texttt{spline\_radius} & Real & $>0.0$ & Dep. & Radial function radius used in spline \\
& \texttt{spline\_npoints} & Integer & $>0$ & Dep. & Number of spline knots \\
\hline
\end{tabularx}
\end{table}
\begin{itemize}
\item \texttt{cutoff\_radius} is required for every species. If a species is not intended to be covered by atomic regions, setting the value to 0.0 applies default values to all the remaining parameters. A good value is usually a bit larger than the core radius listed in the pseudopotential file. After a parametric scan, pick the value from the flat region of the energy curve with the smallest variance.
\item \texttt{lmax} is required if $\texttt{cutoff\_radius} > 0.0$. This value usually needs to be at least the highest angular momentum plus 2.
\item \texttt{inner\_cutoff} is optional and set as $\texttt{cutoff\_radius}-0.3$ by default, which is fine in most cases.
\item \texttt{spline\_radius} and \texttt{spline\_npoints} are optional. By default, they are calculated based on \texttt{cutoff\_radius} and a grid displacement of $0.02$\,bohr.
If users prefer inputting them, it is required that $\texttt{cutoff\_radius}\leq\texttt{spline\_radius}-2\times\texttt{spline\_radius}/(\texttt{spline\_npoints}-1)$ (a worked check follows this list).
\end{itemize}
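As a quick arithmetic check of this last constraint against the Ni values in Listing~\ref{listing:hybridrep_particleset}: $\texttt{spline\_radius}-2\times\texttt{spline\_radius}/(\texttt{spline\_npoints}-1)=1.8-2\times 1.8/90=1.76$, which indeed bounds $\texttt{cutoff\_radius}=1.6$ from above.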
In addition, the hybrid orbital representation allows extra optimization to speed up the nonlocal pseudopotential evaluation using the batched algorithm listed in Section~\ref{sec:nlpp}.
|
State Before: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ eval₂ f (z * ↑f (leadingCoeff p)) (integralNormalization p) =
∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i State After: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ∑ n in support p, ↑f (coeff (integralNormalization p) n) * (z * ↑f (leadingCoeff p)) ^ n =
∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i Tactic: rw [eval₂_eq_sum, sum_def, support_integralNormalization] State Before: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ∑ n in support p, ↑f (coeff (integralNormalization p) n) * (z * ↑f (leadingCoeff p)) ^ n =
∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i State After: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ∑ x in support p, ↑f (coeff (integralNormalization p) x) * (↑f (leadingCoeff p) ^ x * z ^ x) =
∑ x in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑x) * (↑f (leadingCoeff p) ^ ↑x * z ^ ↑x) Tactic: simp only [mul_comm z, mul_pow, mul_assoc, RingHom.map_pow, RingHom.map_mul] State Before: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ∑ x in support p, ↑f (coeff (integralNormalization p) x) * (↑f (leadingCoeff p) ^ x * z ^ x) =
∑ x in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑x) * (↑f (leadingCoeff p) ^ ↑x * z ^ ↑x) State After: no goals Tactic: exact Finset.sum_attach.symm State Before: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i =
∑ i in Finset.attach (support p), ↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i State After: case pos
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : p = 0
⊢ ∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i =
∑ i in Finset.attach (support p), ↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i
case neg
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
⊢ ∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i =
∑ i in Finset.attach (support p), ↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i Tactic: by_cases hp : p = 0 State Before: case neg
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
⊢ ∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i =
∑ i in Finset.attach (support p), ↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i State After: case neg
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
⊢ ∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i =
∑ i in Finset.attach (support p), ↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i Tactic: have one_le_deg : 1 ≤ natDegree p :=
Nat.succ_le_of_lt (natDegree_pos_of_eval₂_root hp f hz inj) State Before: case neg
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
⊢ ∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i =
∑ i in Finset.attach (support p), ↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i State After: case neg.e_f.h
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
⊢ ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i =
↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i Tactic: congr with i State Before: case neg.e_f.h
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
⊢ ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i =
↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i State After: case neg.e_f.h.e_a.h.e_6.h
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
⊢ coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i = coeff p ↑i * leadingCoeff p ^ (natDegree p - 1) Tactic: congr 2 State Before: case neg.e_f.h.e_a.h.e_6.h
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
⊢ coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i = coeff p ↑i * leadingCoeff p ^ (natDegree p - 1) State After: case pos
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
hi : ↑i = natDegree p
⊢ coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i = coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)
case neg
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
hi : ¬↑i = natDegree p
⊢ coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i = coeff p ↑i * leadingCoeff p ^ (natDegree p - 1) Tactic: by_cases hi : i.1 = natDegree p State Before: case pos
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : p = 0
⊢ ∑ i in Finset.attach (support p), ↑f (coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i) * z ^ ↑i =
∑ i in Finset.attach (support p), ↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i State After: no goals Tactic: simp [hp] State Before: case pos
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
hi : ↑i = natDegree p
⊢ coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i = coeff p ↑i * leadingCoeff p ^ (natDegree p - 1) State After: case pos
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
hi : ↑i = natDegree p
⊢ degree p = ↑(natDegree p) Tactic: rw [hi, integralNormalization_coeff_degree, one_mul, leadingCoeff, ← pow_succ,
tsub_add_cancel_of_le one_le_deg] State Before: case pos
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
hi : ↑i = natDegree p
⊢ degree p = ↑(natDegree p) State After: no goals Tactic: exact degree_eq_natDegree hp State Before: case neg
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
hi : ¬↑i = natDegree p
⊢ coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i = coeff p ↑i * leadingCoeff p ^ (natDegree p - 1) State After: case neg
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
hi : ¬↑i = natDegree p
this : ↑i ≤ natDegree p - 1
⊢ coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i = coeff p ↑i * leadingCoeff p ^ (natDegree p - 1) Tactic: have : i.1 ≤ p.natDegree - 1 :=
Nat.le_pred_of_lt (lt_of_le_of_ne (le_natDegree_of_ne_zero (mem_support_iff.mp i.2)) hi) State Before: case neg
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
hp : ¬p = 0
one_le_deg : 1 ≤ natDegree p
i : { x // x ∈ support p }
hi : ¬↑i = natDegree p
this : ↑i ≤ natDegree p - 1
⊢ coeff (integralNormalization p) ↑i * leadingCoeff p ^ ↑i = coeff p ↑i * leadingCoeff p ^ (natDegree p - 1) State After: no goals Tactic: rw [integralNormalization_coeff_ne_natDegree hi, mul_assoc, ← pow_add,
tsub_add_cancel_of_le this] State Before: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ∑ i in Finset.attach (support p), ↑f (coeff p ↑i * leadingCoeff p ^ (natDegree p - 1)) * z ^ ↑i =
↑f (leadingCoeff p) ^ (natDegree p - 1) * eval₂ f z p State After: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ↑f (leadingCoeff p) ^ (natDegree p - 1) * ∑ x in Finset.attach (support p), ↑f (coeff p ↑x) * z ^ ↑x =
↑f (leadingCoeff p) ^ (natDegree p - 1) * ∑ n in support p, ↑f (coeff p n) * z ^ n Tactic: simp_rw [eval₂_eq_sum, sum_def, fun i => mul_comm (coeff p i), RingHom.map_mul,
RingHom.map_pow, mul_assoc, ← Finset.mul_sum] State Before: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ↑f (leadingCoeff p) ^ (natDegree p - 1) * ∑ x in Finset.attach (support p), ↑f (coeff p ↑x) * z ^ ↑x =
↑f (leadingCoeff p) ^ (natDegree p - 1) * ∑ n in support p, ↑f (coeff p n) * z ^ n State After: case e_a
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ∑ x in Finset.attach (support p), ↑f (coeff p ↑x) * z ^ ↑x = ∑ n in support p, ↑f (coeff p n) * z ^ n Tactic: congr 1 State Before: case e_a
R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ∑ x in Finset.attach (support p), ↑f (coeff p ↑x) * z ^ ↑x = ∑ n in support p, ↑f (coeff p n) * z ^ n State After: no goals Tactic: exact @Finset.sum_attach _ _ p.support _ fun i => f (p.coeff i) * z ^ i State Before: R : Type u
S : Type v
a b : R
m n : ℕ
ι : Type y
inst✝² : CommRing R
inst✝¹ : IsDomain R
inst✝ : CommSemiring S
p : R[X]
f : R →+* S
z : S
hz : eval₂ f z p = 0
inj : ∀ (x : R), ↑f x = 0 → x = 0
⊢ ↑f (leadingCoeff p) ^ (natDegree p - 1) * eval₂ f z p = 0 State After: no goals Tactic: rw [hz, mul_zero]
|
```python
import numpy as np
import scipy as sp
import scipy.stats
import matplotlib.pyplot as plt
import math as mt
import scipy.special
import seaborn as sns
plt.style.use('fivethirtyeight')
from statsmodels.graphics.tsaplots import plot_acf
import pandas as pd
```
# <font face="gotham" color="orange"> Markov Chain Monte Carlo </font>
The **Markov Chain Monte Carlo** (**MCMC**) is a class of algorithms for simulating draws from a distribution that has no closed-form expression. To illustrate the mechanism of MCMC, we return to the Gamma-Poisson conjugate example.
Though its posterior has a closed-form expression, we can still simulate the posterior for demonstration purposes.
To use MCMC, Bayes' Theorem is commonly rewritten in a proportional form, which does not affect the final result.
$$
P(\lambda \mid y) \propto P(y \mid \lambda) P(\lambda)
$$
where $\propto$ means 'proportional to'; the integral in the denominator can be safely omitted since it is a constant with respect to $\lambda$.
Here we recap the hurricane example from the last chapter. The prior elicitation uses
$$
E(\lambda) = \frac{\alpha}{\beta}\\
\text{Var}(\lambda) = \frac{\alpha}{\beta^2}
$$
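For instance, we can back out the hyperparameters from an elicited prior mean and variance by inverting these two formulas. A minimal sketch, assuming hypothetical elicited values chosen to reproduce the $\alpha = 10$, $\beta = 2$ used below:
```python
# Invert E(lambda) = alpha/beta and Var(lambda) = alpha/beta**2:
# beta = E/Var, then alpha = E * beta
prior_mean, prior_var = 5, 2.5   # hypothetical elicited values
beta = prior_mean / prior_var    # 2.0
alpha = prior_mean * beta        # 10.0
print(alpha, beta)
```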
```python
x = np.linspace(0, 10, 100)
params = [10, 2]
gamma_pdf = sp.stats.gamma.pdf(x, a=params[0], scale=1/params[1])
fig, ax = plt.subplots(figsize=(7, 7))
ax.plot(x, gamma_pdf, lw = 3, label = r'$\alpha = %.1f, \beta = %.1f$' % (params[0], params[1]))
ax.set_title('Prior')
mean = params[0]/params[1]
mode = (params[0]-1)/params[1]
ax.axvline(mean, color = 'tomato', ls='--', label='mean: {}'.format(mean))
ax.axvline(mode, color = 'red', ls='--', label='mode: {}'.format(mode))
ax.legend()
plt.show()
```
1. Because the posterior will also be a Gamma distribution, we start by proposing a value for $\lambda$
$$
\lambda = 8
$$
This is an arbitrary value, which is called the **initial value**.
2. Calculate the likelihood of observing $k=3$ hurricanes given $\lambda=8$.
$$
\mathcal{L}(3 ; 8)=\frac{\lambda^{k} e^{-\lambda}}{k !}=\frac{8^{3} e^{-8}}{3 !}=0.0286
$$
```python
def pois_lh(k, lamda):
lh = lamda**k*np.exp(-lamda)/mt.factorial(k)
return lh
```
```python
lamda_init = 8
k = 3
pois_lh(k = k, lamda = lamda_init)
```
0.02862614424768101
3. Calculate prior
$$
g(\lambda ; \alpha, \beta)=\frac{\beta^{\alpha} \lambda^{\alpha-1} e^{-\beta \lambda}}{\Gamma(\alpha)}
$$
```python
def gamma_prior(alpha, beta, lamda):
prior = (beta**alpha*lamda**(alpha-1)*np.exp(-beta*lamda))/sp.special.gamma(alpha)
return prior
```
```python
lamda_current = lamda_init
alpha=10
beta=2
gamma_prior(alpha=alpha, beta=beta, lamda=lamda_current)
```
0.0426221247856141
4. Calculate the (unnormalized) posterior with the first guess $\lambda=8$, which we denote $\lambda_{current}$
```python
k=3
posterior_current = pois_lh(k=k, lamda=lamda_current) * gamma_prior(alpha=10, beta=2, lamda=lamda_current)
posterior_current
```
0.0012201070922556493
5. Draw a second value $\lambda_{proposed}$ from a **proposal distribution**, here a normal distribution with $\mu=\lambda_{current}$ and $\sigma = .5$. The $\sigma$ here is called the **tuning parameter**; its role will become clearer in the following demonstrations.
```python
tuning_param = .5
lamda_prop = sp.stats.norm(loc=lamda_current, scale=tuning_param).rvs()
lamda_prop
```
8.48325129448904
6. Calculate posterior based on the $\lambda_{proposed}$.
```python
posterior_prop = gamma_prior(alpha, beta, lamda=lamda_prop)*pois_lh(k, lamda=lamda_prop)
posterior_prop
```
0.0005786900039113245
7. Now we have two posterior values. To proceed, we need a rule for discarding one of them. Here we introduce the **Metropolis algorithm**. The acceptance probability for $\lambda_{proposed}$ is
$$
P_{\text {accept }}=\min \left(\frac{P\left(\lambda_{\text {proposed }} \mid \text { data }\right)}{P\left(\lambda_{\text {current }} \mid \text { data }\right)}, 1\right)
$$
```python
print(posterior_current)
print(posterior_prop)
```
0.0012201070922556493
0.0005786900039113245
```python
prob_threshold = np.min([posterior_prop/posterior_current, 1])
prob_threshold
```
0.4742944349593793
Here the ratio is less than $1$, so the probability of accepting $\lambda_{proposed}$ is about $.474$. Whenever $\frac{\text{posterior proposed}}{\text{posterior current}}$ is the smaller of the two values, say $\text{prob_threshold}=.768$, the algorithm requires a draw from a uniform distribution: if the draw is smaller than $.768$ we accept $\lambda_{proposed}$; if it is larger we stay with $\lambda_{current}$.
8. In code, the acceptance step looks like this:
```python
if sp.stats.uniform.rvs() > .768:
print('stay with current lambda')
else:
print('accept next lambda')
```
accept next lambda
9. If we accept $\lambda_{proposed}$, we relabel it $\lambda_{current}$, then repeat from step $2$, typically thousands of times.
## <font face="gotham" color="orange"> Combine All Steps </font>
We now join all the steps in a loop that runs thousands of times (the number of repetitions depends on your time constraints and your computer's capacity).
```python
def gamma_poisson_mcmc(lamda_init = 2, k = 3, alpha = 10, beta= 2, tuning_param = 1, chain_size = 10000):
np.random.seed(123)
lamda_current = lamda_init
lamda_mcmc = []
pass_rate = []
post_ratio_list = []
for i in range(chain_size):
lh_current = pois_lh(k = k, lamda = lamda_current)
prior_current = gamma_prior(alpha=alpha, beta=beta, lamda=lamda_current)
posterior_current = lh_current*prior_current
lamda_proposal = sp.stats.norm(loc=lamda_current, scale=tuning_param).rvs()
prior_next = gamma_prior(alpha=alpha, beta=beta, lamda=lamda_proposal)
lh_next = pois_lh(k, lamda=lamda_proposal)
posterior_proposal = lh_next*prior_next
post_ratio = posterior_proposal/posterior_current
prob_next = np.min([post_ratio, 1])
unif_draw = sp.stats.uniform.rvs()
post_ratio_list.append(post_ratio)
if unif_draw < prob_next:
lamda_current = lamda_proposal
lamda_mcmc.append(lamda_current)
pass_rate.append('Y')
else:
lamda_mcmc.append(lamda_current)
pass_rate.append('N')
return lamda_mcmc, pass_rate
```
The proposal distribution must be symmetric for this plain Metropolis algorithm; with an asymmetric proposal the acceptance ratio needs the Metropolis-Hastings correction, otherwise the Markov chain will not converge to the target distribution. Also, the tuning parameter should be set to a value that maintains an acceptance rate of roughly $30\%\sim50\%$.
```python
lamda_mcmc, pass_rate = gamma_poisson_mcmc(chain_size = 10000)
```
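Before moving on, it is worth checking whether the tuning parameter keeps the acceptance rate in the desired range. A minimal check based on the `pass_rate` list returned above:
```python
# Fraction of accepted proposals ('Y') over the whole chain
accept_rate = pass_rate.count('Y') / len(pass_rate)
print('acceptance rate: {:.2%}'.format(accept_rate))
```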
```python
yes = ['Pass','Not Pass']
counts = [pass_rate.count('Y'), pass_rate.count('N')]
x = np.linspace(0, 10, 100)
params_prior = [10, 2]
gamma_pdf_prior = sp.stats.gamma.pdf(x, a=params_prior[0], scale=1/params_prior[1])
```
We assume the data cover $1$ year, in which a total of $3$ hurricanes were recorded. We obtain the analytical posterior so that we can compare the simulation with the analytical distribution.
\begin{align}
\alpha_{\text {posterior }}&=\alpha_{0}+\sum_{i=1}^{n} x_{i} = 10+3=13\\
\beta_{\text {posterior }}&=\beta_{0}+n = 2+1=3
\end{align}
Prepare the analytical Gamma distribution
```python
params_post = [13, 3]
gamma_pdf_post = sp.stats.gamma.pdf(x, a=params_post[0], scale=1/params_post[1])
```
Because the initial samples might not have converged to the equilibrium distribution, the first $1/10$ of the Markov chain can be safely dropped. This initial period is termed the **burn-in** period. Also, in order to reduce the _autocorrelation_ issue, we can perform a **pruning** (thinning) step that keeps only every second (or even every fifth) observation.
That is why we use ```lamda_mcmc[1000::2]``` in the code below.
```python
fig, ax = plt.subplots(figsize = (12, 12), nrows = 3, ncols = 1)
ax[0].hist(lamda_mcmc[1000::2], bins=100, density=True)
ax[0].set_title(r'Posterior Frequency Distribution of $\lambda$')
ax[0].plot(x, gamma_pdf_prior, label='Prior')
ax[0].plot(x, gamma_pdf_post, label='Posterior')
ax[0].legend()
ax[1].plot(np.arange(len(lamda_mcmc)), lamda_mcmc, lw=1)
ax[1].set_title('Trace')
ax[2].barh(yes, counts, color=['green', 'blue'], alpha=.7)
plt.show()
```
# <font face="gotham" color="orange"> Diagnostics of MCMC </font>
The demonstration above was deliberately fine-tuned to sidestep potential errors that are common when designing MCMC algorithms. We now demonstrate how these errors arise and what remedies are available.
## <font face="gotham" color="orange"> Invalid Proposal </font>
Following the Gamma-Poisson example, if we have a proposal distribution with $\mu=1$, a random draw from it might be negative; however, the posterior is a Gamma distribution, which lives only on the positive domain.
The remedy for this type of invalid proposal is straightforward: multiply all negative draws by $-1$, i.e. reflect them into the positive domain, which preserves the symmetry of the proposal.
```python
x_gamma = np.linspace(-3, 12, 100)
x_norm = np.linspace(-3, 6, 100)
params_gamma = [10, 2]
gamma_pdf = sp.stats.gamma.pdf(x_gamma, a=params_gamma[0], scale=1/params_gamma[1])
mu = 1
sigma = 1
normal_pdf = sp.stats.norm.pdf(x_norm, loc=mu, scale=sigma)
fig, ax = plt.subplots(figsize=(14, 7))
ax.plot(x_gamma, gamma_pdf, lw = 3, label = r'Prior $\alpha = %.1f, \beta = %.1f$' % (params_gamma[0], params_gamma[1]), color='#FF6B1A')
ax.plot(x_norm, normal_pdf, lw = 3, label = r'Proposal $\mu=%.1f , \sigma= %.1f$' % (mu, sigma), color='#662400')
ax.text(4, .27, 'Gamma Prior', color ='#FF6B1A')
ax.text(1.7, .37, 'Normal Proposal', color ='#662400')
ax.text(.2, -.04, r'$\lambda_{current}=1$', color ='tomato')
x_fill = np.linspace(-3, 0, 30)
y_fill = sp.stats.norm.pdf(x_fill, loc=mu, scale=sigma)
ax.fill_between(x_fill, y_fill, color ='#B33F00')
ax.axvline(mu, color = 'red', ls='--', label=r'$\mu=${}'.format(mu), alpha=.4)
ax.legend()
plt.show()
```
Two lines of code solve this issue:
```
if lamda_proposal < 0:
lamda_proposal *= -1
```
```python
def gamma_poisson_mcmc_1(lamda_init = 2, k = 3, alpha = 10, beta= 2, tuning_param = 1, chain_size = 10000):
np.random.seed(123)
lamda_current = lamda_init
lamda_mcmc = []
pass_rate = []
post_ratio_list = []
for i in range(chain_size):
lh_current = pois_lh(k = k, lamda = lamda_current)
prior_current = gamma_prior(alpha=alpha, beta=beta, lamda=lamda_current)
posterior_current = lh_current*prior_current
lamda_proposal = sp.stats.norm(loc=lamda_current, scale=tuning_param).rvs()
if lamda_proposal < 0:
lamda_proposal *= -1
prior_next = gamma_prior(alpha=alpha, beta=beta, lamda=lamda_proposal)
lh_next = pois_lh(k, lamda=lamda_proposal)
posterior_proposal = lh_next*prior_next
post_ratio = posterior_proposal/posterior_current
prob_next = np.min([post_ratio, 1])
unif_draw = sp.stats.uniform.rvs()
post_ratio_list.append(post_ratio)
if unif_draw < prob_next:
lamda_current = lamda_proposal
lamda_mcmc.append(lamda_current)
pass_rate.append('Y')
else:
lamda_mcmc.append(lamda_current)
pass_rate.append('N')
return lamda_mcmc, pass_rate
```
This time we can set a much larger chain size.
```python
lamda_mcmc, pass_rate = gamma_poisson_mcmc_1(chain_size = 100000, tuning_param = 1)
```
As you can see, the frequency distribution is also much smoother.
```python
y_rate = pass_rate.count('Y')/len(pass_rate)
n_rate = pass_rate.count('N')/len(pass_rate)
yes = ['Pass','Not Pass']
counts = [pass_rate.count('Y'), pass_rate.count('N')]
fig, ax = plt.subplots(figsize = (12, 12), nrows = 3, ncols = 1)
ax[0].hist(lamda_mcmc[int(len(lamda_mcmc)/10)::2], bins=100, density=True)
ax[0].set_title(r'Posterior Frequency Distribution of $\lambda$')
ax[0].plot(x, gamma_pdf_prior, label='Prior')
ax[0].plot(x, gamma_pdf_post, label='Posterior')
ax[0].legend()
ax[1].plot(np.arange(len(lamda_mcmc)), lamda_mcmc, lw=1)
ax[1].set_title('Trace')
ax[2].barh(yes, counts, color=['green', 'blue'], alpha=.7)
ax[2].text(counts[1]*.4, 'Not Pass', r'${}\%$'.format(np.round(n_rate*100,2)), color ='tomato', size = 28)
ax[2].text(counts[0]*.4, 'Pass', r'${}\%$'.format(np.round(y_rate*100,2)), color ='tomato', size = 28)
plt.show()
```
## <font face="gotham" color="orange"> Numerical Underflow </font>
If the prior and likelihood are both extremely close to $0$, their product is even closer to $0$. This can underflow the floating-point representation and be stored as exactly $0$, which then breaks the acceptance ratio.
The remedy is to use the log version of Bayes' Theorem, i.e.
$$
\ln{P(\lambda \mid y)} \propto \ln{P(y \mid \lambda)}+ \ln{P(\lambda)}
$$
Also the acceptance rule can be converted into log version
$$
\ln{ \left(\frac{P\left(\lambda_{proposed } \mid y \right)}{P\left(\lambda_{current} \mid y \right)}\right)}
=\ln{P\left(\lambda_{proposed } \mid y \right)} - \ln{P\left(\lambda_{current } \mid y \right)}
$$
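To see the underflow concretely, here is a small illustrative check (not part of the original algorithm):
```python
# A product of two tiny probabilities underflows float64 to exactly 0,
# while the sum of their logs remains perfectly representable
tiny = 1e-200
print(tiny * tiny)                   # 0.0
print(np.log(tiny) + np.log(tiny))   # about -921.0
```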
```python
def gamma_poisson_mcmc_2(lamda_init = 2, k = 3, alpha = 10, beta= 2, tuning_param = 1, chain_size = 10000):
np.random.seed(123)
lamda_current = lamda_init
lamda_mcmc = []
pass_rate = []
post_ratio_list = []
for i in range(chain_size):
log_lh_current = np.log(pois_lh(k = k, lamda = lamda_current))
log_prior_current = np.log(gamma_prior(alpha=alpha, beta=beta, lamda=lamda_current))
log_posterior_current = log_lh_current + log_prior_current
lamda_proposal = sp.stats.norm(loc=lamda_current, scale=tuning_param).rvs()
if lamda_proposal < 0:
lamda_proposal *= -1
log_prior_next = np.log(gamma_prior(alpha=alpha, beta=beta, lamda=lamda_proposal))
log_lh_next = np.log(pois_lh(k, lamda=lamda_proposal))
log_posterior_proposal = log_lh_next + log_prior_next
log_post_ratio = log_posterior_proposal - log_posterior_current
post_ratio = np.exp(log_post_ratio)
prob_next = np.min([post_ratio, 1])
unif_draw = sp.stats.uniform.rvs()
post_ratio_list.append(post_ratio)
if unif_draw < prob_next:
lamda_current = lamda_proposal
lamda_mcmc.append(lamda_current)
pass_rate.append('Y')
else:
lamda_mcmc.append(lamda_current)
pass_rate.append('N')
return lamda_mcmc, pass_rate
```
With the log posterior and the log acceptance rule, numerical underflow is unlikely to happen anymore, which means we can run a much longer Markov chain and also use a larger tuning parameter.
```python
lamda_mcmc, pass_rate = gamma_poisson_mcmc_2(chain_size = 100000, tuning_param = 3)
```
```python
y_rate = pass_rate.count('Y')/len(pass_rate)
n_rate = pass_rate.count('N')/len(pass_rate)
yes = ['Pass','Not Pass']
counts = [pass_rate.count('Y'), pass_rate.count('N')]
fig, ax = plt.subplots(figsize = (12, 12), nrows = 3, ncols = 1)
ax[0].hist(lamda_mcmc[int(len(lamda_mcmc)/10)::2], bins=100, density=True)
ax[0].set_title(r'Posterior Frequency Distribution of $\lambda$')
ax[0].plot(x, gamma_pdf_prior, label='Prior')
ax[0].plot(x, gamma_pdf_post, label='Posterior')
ax[0].legend()
ax[1].plot(np.arange(len(lamda_mcmc)), lamda_mcmc, lw=1)
ax[1].set_title('Trace')
ax[2].barh(yes, counts, color=['green', 'blue'], alpha=.7)
ax[2].text(counts[1]*.4, 'Not Pass', r'${}\%$'.format(np.round(n_rate*100,2)), color ='tomato', size = 28)
ax[2].text(counts[0]*.4, 'Pass', r'${}\%$'.format(np.round(y_rate*100,2)), color ='tomato', size = 28)
plt.show()
```
A larger tuning parameter yields a lower acceptance rate, within the $30\%\sim50\%$ range that is exactly what we are seeking.
## <font face="gotham" color="orange"> Pruning </font>
The autocorrelation plots below compare the chain thinned at several intervals; as the thinning interval grows, the autocorrelation at each lag shrinks, which is exactly what pruning is meant to achieve.
```python
lamda_mcmc = np.array(lamda_mcmc)
n = 4
fig, ax = plt.subplots(ncols = 1, nrows = n ,figsize=(12, 10))
for i in range(1, n+1):
g = plot_acf(lamda_mcmc[::i], ax=ax[i-1], title='', label=r'$lag={}$'.format(i), lags=30)
fig.suptitle('Markov chain Autocorrelation')
plt.show()
```
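As a numeric companion to the plots, we can compute the lag-$1$ autocorrelation of the chain for several thinning steps. A minimal sketch using `np.corrcoef`:
```python
chain = np.asarray(lamda_mcmc)
for step in range(1, 5):
    thinned = chain[::step]
    lag1 = np.corrcoef(thinned[:-1], thinned[1:])[0, 1]
    print('thinning step {}: lag-1 autocorrelation = {:.3f}'.format(step, lag1))
```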
# <font face="gotham" color="orange"> Gibbs Sampling Algorithm </font>
The **Gibbs sampler** is a special case of the Metropolis sampler in which the proposal distributions exactly match the posterior conditional distributions, so proposals are naturally accepted $100\%$ of the time.
A further strength of the Gibbs sampler is that it allows one to estimate multiple parameters at once.
In this section, we will use the Normal-Normal conjugate model to demonstrate the Gibbs sampling algorithm.
<div style="background-color:Bisque; color:DarkBlue; padding:30px;">
Suppose you want to know the average height of women in your city. In the current setting, we take $\mu$ and $\tau$ as our parameters of interest. Note that in the conjugate prior section we assumed $\tau$ to be known; in Gibbs sampling, however, both can be estimated.<br>
<br>
A prior of _normal distribution_ will be assumed for $\mu$ with hyperparameters
$$
\text{inverse of }\sigma_0:\tau_0 = .15\\
\text{mean}:\mu_0 = 170
$$
A prior of _gamma distribution_ will be assumed for $\tau$ since it can't be negative.
$$
\text{shape}: \alpha_0 = 2\\
\text{rate}:\beta_0 = 1
$$
</div>
The priors graphically are
```python
mu_0, tau_0 = 170, .15
x_mu = np.linspace(150, 190, 100)
y_mu = sp.stats.norm(loc=mu_0, scale=1/tau_0).pdf(x_mu)
alpha_0, beta_0 = 2, 1
x_tau = np.linspace(0, 8, 100)
y_tau = sp.stats.gamma(a=alpha_0, scale=1/beta_0).pdf(x_tau)
fig, ax = plt.subplots(figsize=(15,5), nrows=1, ncols=2)
ax[0].plot(x_mu, y_mu)
ax[0].set_title(r'Prior of $\mu$')
ax[1].plot(x_tau, y_tau)
ax[1].set_title(r'Prior of $\tau$')
plt.show()
```
Choose an initial proposal value for $\tau$, denoted $\tau_{\text{proposal},0}$; the subscript $0$ represents the time period, since this is the initial value.
Say
$$
\tau_{\text{proposal},0} = 7
$$
The next step is to obtain
$$
\mu_{\text{proposal},0}|\tau_{\text{proposal},0}
$$
where $\mu_{\text{proposal},0}$ is the first proposal value of $\mu$, conditional on $\tau_{\text{proposal},0}$.
Now we collect some data; for instance, suppose you measured the heights of $10$ randomly chosen women:
```python
heights = np.array([156, 167, 178, 182, 169, 174, 175, 164, 181, 170])
np.sum(heights)
```
1716
Recall the set of analytical solutions derived in chapter 2
\begin{align}
\mu_{\text {posterior }} &=\frac{\tau_{0} \mu_{0}+\tau \sum x_{i}}{\tau_{0}+n \tau}\\
\tau_{\text {posterior }} &=\tau_{0}+n \tau
\end{align}
Substitute $\tau_{\text{proposal},0}$ into both formulas.
$$
\mu_{\text {posterior},1} =\frac{\tau_{0} \mu_{0}+\tau_{\text{proposal},0} \sum_{i=1}^{10} x_{i}}{\tau_{0}+n \tau_{\text{proposal},0}}=\frac{.15\times170+7\times 1716}{.15+10\times7}\\
\tau_{\text {posterior}, 1} =\tau_{0}+n \tau_{\text{proposal},0} = .15 + 10\times 7
$$
```python
mu_post = [0]
tau_post = [0] # 0 is a placeholder; there is no 0th element, per the algorithm's indexing
tau_proposal = [7]
mu_proposal = [0] # 0 is placeholder
mu_post.append((.15*170+tau_proposal[0]*1716)/(.15+10*tau_proposal[0]))
tau_post.append(.15+10*tau_proposal[0])
```
Draw a proposal from the updated distribution for $\mu$, that is, a normal distribution with mean $\mu_{\text{posterior}, 1}$ and precision $\tau_{\text{posterior}, 1}$
```python
mu_proposal_draw = sp.stats.norm(loc=mu_post[1], scale=1/tau_post[1]).rvs()
mu_proposal.append(mu_proposal_draw)
```
Now turn to $\tau$ for its proposal. The remaining steps mirror the $\mu$ update; see the sketch below.
```python
```
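The notebook breaks off here, but the remaining update mirrors the $\mu$ step. Below is a minimal sketch of the full Gibbs loop, assuming the standard Normal-Gamma conditional for $\tau$, namely $\tau \mid \mu, x \sim \text{Gamma}\left(\alpha_0 + \frac{n}{2},\; \beta_0 + \frac{1}{2}\sum_i (x_i - \mu)^2\right)$; the $\mu$ conditional uses the analytical formulas above, and `scale=1/tau` follows the notebook's convention of treating $\tau$ as the inverse of $\sigma$. The function name and structure are illustrative, not from the original notebook:
```python
def normal_gamma_gibbs(data, mu_0=170, tau_0=.15, alpha_0=2, beta_0=1,
                       tau_init=7, chain_size=10000):
    n = len(data)
    sum_x = np.sum(data)
    tau_current = tau_init
    mu_chain, tau_chain = [], []
    for _ in range(chain_size):
        # mu | tau, data: normal with the posterior mean/precision derived above
        tau_post = tau_0 + n * tau_current
        mu_post = (tau_0 * mu_0 + tau_current * sum_x) / tau_post
        mu_current = sp.stats.norm(loc=mu_post, scale=1/tau_post).rvs()
        # tau | mu, data: Gamma conditional (assumption stated in the lead-in)
        alpha_post = alpha_0 + n / 2
        beta_post = beta_0 + np.sum((data - mu_current)**2) / 2
        tau_current = sp.stats.gamma(a=alpha_post, scale=1/beta_post).rvs()
        mu_chain.append(mu_current)
        tau_chain.append(tau_current)
    return mu_chain, tau_chain

mu_chain, tau_chain = normal_gamma_gibbs(heights, chain_size=5000)
```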
|
# <center>Applied Stochastic Processes HW_05</center>
**<center>11510691 程远星$\DeclareMathOperator*{\argmin}{argmin}
\DeclareMathOperator*{\argmax}{argmax}
\DeclareMathOperator*{\plim}{plim}
\newcommand{\using}[1]{\stackrel{\mathrm{#1}}{=}}
\newcommand{\ffrac}{\displaystyle \frac}
\newcommand{\asim}{\overset{\text{a}}{\sim}}
\newcommand{\space}{\text{ }}
\newcommand{\bspace}{\;\;\;\;}
\newcommand{\QQQ}{\boxed{?\:}}
\newcommand{\void}{\left.\right.}
\newcommand{\Tran}[1]{{#1}^{\mathrm{T}}}
\newcommand{\d}[1]{\displaystyle{#1}}
\newcommand{\CB}[1]{\left\{ #1 \right\}}
\newcommand{\SB}[1]{\left[ #1 \right]}
\newcommand{\P}[1]{\left( #1 \right)}
\newcommand{\abs}[1]{\left| #1 \right|}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\Exp}{\mathrm{E}}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\EE}{\mathbb{E}}
\newcommand{\NN}{\mathbb{N}}
\newcommand{\ZZ}{\mathbb{Z}}
\newcommand{\QQ}{\mathbb{Q}}
\newcommand{\AcA}{\mathscr{A}}
\newcommand{\FcF}{\mathscr{F}}
\newcommand{\Var}[2][\,\!]{\mathrm{Var}_{#1}\left[#2\right]}
\newcommand{\Cov}[2][\,\!]{\mathrm{Cov}_{#1}\left(#2\right)}
\newcommand{\Corr}[2][\,\!]{\mathrm{Corr}_{#1}\left(#2\right)}
\newcommand{\I}[1]{\mathrm{I}\left( #1 \right)}
\newcommand{\N}[1]{\mathrm{N} \left( #1 \right)}
\newcommand{\ow}{\text{otherwise}}\void^\dagger$</center>**
## Question 1
$$\begin{align}
& P\CB{Y_{n+1} = \P{i_{n+1},i_{n+2}} \mid Y_n = \P{i_n,i_{n+1}}, Y_{n-1} = \P{i_{n-1},i_{n}}, \dots, Y_{0} = \P{ i_{0},i_{1}}} \\
=& P\CB{Y_{n+1} = \P{i_{n+1},i_{n+2}} \mid X_{n+1} = i_{n+1},X_{n} = i_n,\dots,X_0 = i_0} \\
=& P\CB{Y_{n+1} = \P{i_{n+1},i_{n+2}} \mid X_{n+1} = i_{n+1},X_{n} = i_n} \\
=& P\CB{Y_{n+1} = \P{i_{n+1},i_{n+2}} \mid Y_{n} = \P{i_{n},i_{n+1}}} \\
\end{align}$$
## Question 2
$\bspace$First we write $\mathbf{P}^\P{r}$ as
$$\mathbf{P}^\P{r} = \begin{Vmatrix}
q_{11} & \cdots & q_{1n} \\
\vdots & \ddots & \vdots \\
q_{n1} & \cdots & q_{nn}
\end{Vmatrix},\bspace q_{ij} > 0, 1 \leq i,j \leq n$$
$\bspace$for each column, we find the minimum, denoted $q^{\text{c}}_{j}>0$ for $1 \leq j \leq n$. Then for the next transition matrix $\mathbf{P}^{\P{r+1}}$, all its entries have the form:
$$\sum_{j=1}^{n} p_{ij}\cdot q_{jk} \geq \sum_{j=1}^{n} p_{ij}\cdot q^{\text{c}}_{k} = q^{\text{c}}_{k} > 0$$
$\bspace$and thus, by induction, $\mathbf{P}^{n}$ has all positive entries for every $n\geq r$.
## Question 3
$\bspace$First we write the one-step transition matrix for states $0,1,2,\geq3$:
$$\mathbf{P} = \begin{Vmatrix}
0.7 & 0.3 & 0 & 0 \\
0.7 & 0 & 0.3 & 0 \\
0.7 & 0 & 0 & 0.3 \\
0 & 0 & 0 & 1
\end{Vmatrix}$$
$\bspace$and $P_{0,3}^{6} = 0.0837$, which can be checked numerically (see the snippet after the aside below).
>However, I came up with an alternative solution. Why is it wrong?
>
>Let $H = 0.3$ and $T=0.7$. Then the probability can be calculated as:
>
>$$\ffrac{4} {\d{\binom{6} {3}}}\times H^3T^3 + \ffrac{3+6}{\d{\binom{6}{2}}} \times H^4 T^2 + 1 \times H^5T^1 + 1 \times H^6 = 0.0066636$$
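A quick numerical check of $P^{6}_{0,3}$ (a sketch using numpy; not part of the original submission):
```python
import numpy as np

# One-step transition matrix over states 0, 1, 2, >=3 (current run of heads, H = 0.3)
P = np.array([[0.7, 0.3, 0.0, 0.0],
              [0.7, 0.0, 0.3, 0.0],
              [0.7, 0.0, 0.0, 0.3],
              [0.0, 0.0, 0.0, 1.0]])
print(np.linalg.matrix_power(P, 6)[0, 3])  # approximately 0.0837
```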
## Question 4
$$\mathbf{P} = \begin{Vmatrix}
0.7 & 0.3 \\
0.6 & 0.4
\end{Vmatrix},\mathbf{P}^{\P{3}} = \begin{Vmatrix}
0.667 & 0.333 \\
0.666 & 0.334
\end{Vmatrix},\mathbf{P}^{\P{4}} = \begin{Vmatrix}
0.6667 & 0.3333 \\
0.6666 & 0.3334
\end{Vmatrix}$$
$\P{1}$
$\bspace$The desired probability is $ 0.5 P^{\P{3}}_{1,1} + 0.5P^{\P{3}}_{2,1} = 0.6665$.
$\P{2}$
$\bspace$The desired probability is $ P^{\P{4}}_{1,1} = 0.6667$.
## Question 5
$\bspace$When $n \geq N$, there is only one possible state, $B$; when $n < N$, by the definition of $N$, $X_n$ has never been in any state belonging to $\AcA$, so that $\CB{W_n, n<N} \subset S\backslash \AcA$. Then the overall state space for $\CB{W_n, n \geq 0}$ is $\P{S\backslash \AcA} \cup \CB{B}$.
$\bspace$When $n<N$, $W_n = X_n$, which is obviously a Markov chain, and when $n\geq N$, we have
$$\begin{align}
&P\CB{W_{n+1}=i_{n+1} \mid W_{n} = i_n, W_{n-1}=i_{n-1},\dots,W_0 = i_0} \\[0.7em]
=& P\CB{W_{n+1}=i_{n+1} \mid W_{n} =B, W_{n-1}=i_{n-1},\dots,W_0 = i_0} \\[0.7em]
=& P\CB{W_{n+1}=i_{n+1} \mid W_{n} =B} = \begin{cases}
1, &\text{if } i_{n+1} = B\\
0, &\text{if } i_{n+1} \neq B
\end{cases}
\end{align}$$
$\bspace$So $\CB{W_n,n\geq 0}$ is a Markov chain. As for its transition probabilities, the result is immediate and is also presented in the textbook.
## Question 6
$\bspace$Its classes are $\CB{0,1}\cup\CB{2}\cup\CB{3}\cup\CB{4}$. $\CB{0,1}$ and $\CB{2}$ are recurrent classes; $\CB{3}$ and $\CB{4}$ are transient classes.
|
[STATEMENT]
lemma bounded_linear_linepath:
assumes "bounded_linear f"
shows "f (linepath a b x) = linepath (f a) (f b) x"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. f (linepath a b x) = linepath (f a) (f b) x
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. f (linepath a b x) = linepath (f a) (f b) x
[PROOF STEP]
interpret f: bounded_linear f
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. bounded_linear f
[PROOF STEP]
by fact
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. f (linepath a b x) = linepath (f a) (f b) x
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. f (linepath a b x) = linepath (f a) (f b) x
[PROOF STEP]
by (simp add: linepath_def f.add f.scale)
[PROOF STATE]
proof (state)
this:
f (linepath a b x) = linepath (f a) (f b) x
goal:
No subgoals!
[PROOF STEP]
qed
|
lemma divide_poly_main_field: fixes d :: "'a::field poly" assumes d: "d \<noteq> 0" defines lc: "lc \<equiv> coeff d (degree d)" shows "divide_poly_main lc q r d dr n = fst (pseudo_divmod_main lc (smult ((1 / lc)^n) q) (smult ((1 / lc)^n) r) d dr n)"
|
/-
Copyright (c) 2019 Amelia Livingston. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Amelia Livingston
! This file was ported from Lean 3 source module group_theory.monoid_localization
! leanprover-community/mathlib commit 1f0096e6caa61e9c849ec2adbd227e960e9dff58
! Please do not edit these lines, except to modify the commit id
! if you have ported upstream changes.
-/
import Mathlib.GroupTheory.Congruence
import Mathlib.GroupTheory.Submonoid.Membership
import Mathlib.Algebra.Group.Units
/-!
# Localizations of commutative monoids
Localizing a commutative ring at one of its submonoids does not rely on the ring's addition, so
we can generalize localizations to commutative monoids.
We characterize the localization of a commutative monoid `M` at a submonoid `S` up to
isomorphism; that is, a commutative monoid `N` is the localization of `M` at `S` iff we can find a
monoid homomorphism `f : M →* N` satisfying 3 properties:
1. For all `y ∈ S`, `f y` is a unit;
2. For all `z : N`, there exists `(x, y) : M × S` such that `z * f y = f x`;
3. For all `x, y : M`, `f x = f y` iff there exists `c ∈ S` such that `x * c = y * c`.
Given such a localization map `f : M →* N`, we can define the surjection
`LocalizationMap.mk'` sending `(x, y) : M × S` to `f x * (f y)⁻¹`, and
`LocalizationMap.lift`, the homomorphism from `N` induced by a homomorphism from `M` which maps
elements of `S` to invertible elements of the codomain. Similarly, given commutative monoids
`P, Q`, a submonoid `T` of `P` and a localization map for `T` from `P` to `Q`, then a homomorphism
`g : M →* P` such that `g(S) ⊆ T` induces a homomorphism of localizations,
`LocalizationMap.map`, from `N` to `Q`.
We treat the special case of localizing away from an element in the sections `AwayMap` and `Away`.
We also define the quotient of `M × S` by the unique congruence relation (equivalence relation
preserving a binary operation) `r` such that for any other congruence relation `s` on `M × S`
satisfying '`∀ y ∈ S`, `(1, 1) ∼ (y, y)` under `s`', we have that `(x₁, y₁) ∼ (x₂, y₂)` by `s`
whenever `(x₁, y₁) ∼ (x₂, y₂)` by `r`. We show this relation is equivalent to the standard
localization relation.
This defines the localization as a quotient type, `Localization`, but the majority of
subsequent lemmas in the file are given in terms of localizations up to isomorphism, using maps
which satisfy the characteristic predicate.
## Implementation notes
In maths it is natural to reason up to isomorphism, but in Lean we cannot naturally `rewrite` one
structure with an isomorphic one; one way around this is to isolate a predicate characterizing
a structure up to isomorphism, and reason about things that satisfy the predicate.
The infimum form of the localization congruence relation is chosen as 'canonical' here, since it
shortens some proofs.
To apply a localization map `f` as a function, we use `f.toMap`, as coercions don't work well for
this structure.
To reason about the localization as a quotient type, use `mk_eq_monoidOf_mk'` and associated
lemmas. These show the quotient map `mk : M → S → Localization S` equals the
surjection `LocalizationMap.mk'` induced by the map
`monoid_of : localization_map S (localization S)` (where `of` establishes the
localization as a quotient type satisfies the characteristic predicate). The lemma
`mk_eq_monoidOf_mk'` hence gives you access to the results in the rest of the file, which are
about the `LocalizationMap.mk'` induced by any localization map.
## Tags
localization, monoid localization, quotient monoid, congruence relation, characteristic predicate,
commutative monoid
-/
namespace AddSubmonoid
variable {M : Type _} [AddCommMonoid M] (S : AddSubmonoid M) (N : Type _) [AddCommMonoid N]
/-- The type of AddMonoid homomorphisms satisfying the characteristic predicate: if `f : M →+ N`
satisfies this predicate, then `N` is isomorphic to the localization of `M` at `S`. -/
-- Porting note: This linter does not exist yet
-- @[nolint has_nonempty_instance]
structure LocalizationMap extends AddMonoidHom M N where
map_add_units' : ∀ y : S, IsAddUnit (toFun y)
surj' : ∀ z : N, ∃ x : M × S, z + toFun x.2 = toFun x.1
eq_iff_exists' : ∀ x y, toFun x = toFun y ↔ ∃ c : S, ↑c + x = ↑c + y
#align add_submonoid.localization_map AddSubmonoid.LocalizationMap
-- Porting note: no docstrings for AddSubmonoid.LocalizationMap
attribute [nolint docBlame] AddSubmonoid.LocalizationMap.map_add_units'
AddSubmonoid.LocalizationMap.surj' AddSubmonoid.LocalizationMap.eq_iff_exists'
/-- The AddMonoidHom underlying a `LocalizationMap` of `AddCommMonoid`s. -/
add_decl_doc LocalizationMap.toAddMonoidHom
end AddSubmonoid
section CommMonoid
variable {M : Type _} [CommMonoid M] (S : Submonoid M) (N : Type _) [CommMonoid N] {P : Type _}
[CommMonoid P]
namespace Submonoid
/-- The type of monoid homomorphisms satisfying the characteristic predicate: if `f : M →* N`
satisfies this predicate, then `N` is isomorphic to the localization of `M` at `S`. -/
-- Porting note: This linter does not exist yet
-- @[nolint has_nonempty_instance]
structure LocalizationMap extends MonoidHom M N where
map_units' : ∀ y : S, IsUnit (toFun y)
surj' : ∀ z : N, ∃ x : M × S, z * toFun x.2 = toFun x.1
eq_iff_exists' : ∀ x y, toFun x = toFun y ↔ ∃ c : S, ↑c * x = c * y
#align submonoid.localization_map Submonoid.LocalizationMap
-- Porting note: no docstrings for Submonoid.LocalizationMap
attribute [nolint docBlame] Submonoid.LocalizationMap.map_units' Submonoid.LocalizationMap.surj'
Submonoid.LocalizationMap.eq_iff_exists'
attribute [to_additive] Submonoid.LocalizationMap
-- Porting note: this translation already exists
-- attribute [to_additive] Submonoid.LocalizationMap.toMonoidHom
/-- The monoid hom underlying a `LocalizationMap`. -/
add_decl_doc LocalizationMap.toMonoidHom
end Submonoid
namespace Localization
-- Porting note: this does not work so it is done explicitly instead
-- run_cmd to_additive.map_namespace `Localization `addLocalization
-- run_cmd Elab.Command.liftCoreM <| ToAdditive.insertTranslation `Localization `addLocalization
/-- The congruence relation on `M × S`, `M` a `CommMonoid` and `S` a submonoid of `M`, whose
quotient is the localization of `M` at `S`, defined as the unique congruence relation on
`M × S` such that for any other congruence relation `s` on `M × S` where for all `y ∈ S`,
`(1, 1) ∼ (y, y)` under `s`, we have that `(x₁, y₁) ∼ (x₂, y₂)` by `r` implies
`(x₁, y₁) ∼ (x₂, y₂)` by `s`. -/
@[to_additive addLocalization.r
"The congruence relation on `M × S`, `M` an `AddCommMonoid` and `S` an `add_submonoid` of `M`,
whose quotient is the localization of `M` at `S`, defined as the unique congruence relation on
`M × S` such that for any other congruence relation `s` on `M × S` where for all `y ∈ S`,
`(0, 0) ∼ (y, y)` under `s`, we have that `(x₁, y₁) ∼ (x₂, y₂)` by `r` implies
`(x₁, y₁) ∼ (x₂, y₂)` by `s`."]
def r (S : Submonoid M) : Con (M × S) :=
infₛ { c | ∀ y : S, c 1 (y, y) }
#align localization.r Localization.r
#align add_localization.r addLocalization.r
/-- An alternate form of the congruence relation on `M × S`, `M` a `CommMonoid` and `S` a
submonoid of `M`, whose quotient is the localization of `M` at `S`. -/
@[to_additive addLocalization.r'
"An alternate form of the congruence relation on `M × S`, `M` a `CommMonoid` and `S` a
submonoid of `M`, whose quotient is the localization of `M` at `S`."]
def r' : Con (M × S) := by
-- note we multiply by `c` on the left so that we can later generalize to `•`
refine
{ r := fun a b : M × S ↦ ∃ c : S, ↑c * (↑b.2 * a.1) = c * (a.2 * b.1)
iseqv := ⟨fun a ↦ ⟨1, rfl⟩, fun ⟨c, hc⟩ ↦ ⟨c, hc.symm⟩, ?_⟩
mul' := ?_ }
· rintro a b c ⟨t₁, ht₁⟩ ⟨t₂, ht₂⟩
use t₂ * t₁ * b.2
simp only [Submonoid.coe_mul]
calc
(t₂ * t₁ * b.2 : M) * (c.2 * a.1) = t₂ * c.2 * (t₁ * (b.2 * a.1)) := by ac_rfl
_ = t₁ * a.2 * (t₂ * (c.2 * b.1)) := by rw [ht₁] ; ac_rfl
_ = t₂ * t₁ * b.2 * (a.2 * c.1) := by rw [ht₂] ; ac_rfl
· rintro a b c d ⟨t₁, ht₁⟩ ⟨t₂, ht₂⟩
use t₂ * t₁
calc
(t₂ * t₁ : M) * (b.2 * d.2 * (a.1 * c.1)) = t₂ * (d.2 * c.1) * (t₁ * (b.2 * a.1)) := by ac_rfl
_ = (t₂ * t₁ : M) * (a.2 * c.2 * (b.1 * d.1)) := by rw [ht₁, ht₂] ; ac_rfl
#align localization.r' Localization.r'
#align add_localization.r' addLocalization.r'
/-- The congruence relation used to localize a `CommMonoid` at a submonoid can be expressed
equivalently as an infimum (see `Localization.r`) or explicitly
(see `Localization.r'`). -/
@[to_additive addLocalization.r_eq_r'
"The additive congruence relation used to localize an `AddCommMonoid` at a submonoid can be
expressed equivalently as an infimum (see `addLocalization.r`) or explicitly
(see `addLocalization.r'`)."]
theorem r_eq_r' : r S = r' S :=
le_antisymm (infₛ_le fun _ ↦ ⟨1, by simp⟩) <|
le_infₛ fun b H ⟨p, q⟩ ⟨x, y⟩ ⟨t, ht⟩ ↦ by
rw [← one_mul (p, q), ← one_mul (x, y)]
refine b.trans (b.mul (H (t * y)) (b.refl _)) ?_
convert b.symm (b.mul (H (t * q)) (b.refl (x, y))) using 1
dsimp only [Prod.mk_mul_mk, Submonoid.coe_mul] at ht ⊢
simp_rw [mul_assoc, ht, mul_comm y q]
#align localization.r_eq_r' Localization.r_eq_r'
#align add_localization.r_eq_r' addLocalization.r_eq_r'
variable {S}
@[to_additive addLocalization.r_iff_exists]
theorem r_iff_exists {x y : M × S} : r S x y ↔ ∃ c : S, ↑c * (↑y.2 * x.1) = c * (x.2 * y.1) := by
rw [r_eq_r' S] ; rfl
#align localization.r_iff_exists Localization.r_iff_exists
#align add_localization.r_iff_exists addLocalization.r_iff_exists
end Localization
/-- The localization of a `CommMonoid` at one of its submonoids (as a quotient type). -/
@[to_additive addLocalization
"The localization of an `AddCommMonoid` at one of its submonoids (as a quotient type)."]
def Localization := (Localization.r S).Quotient
#align localization Localization
#align add_localization addLocalization
namespace Localization
@[to_additive]
instance inhabited : Inhabited (Localization S) := Con.Quotient.inhabited
#align localization.inhabited Localization.inhabited
#align add_localization.inhabited addLocalization.inhabited
/-- Multiplication in a `Localization` is defined as `⟨a, b⟩ * ⟨c, d⟩ = ⟨a * c, b * d⟩`. -/
-- Porting note: replaced irreducible_def by @[irreducible] to prevent an error with protected
@[to_additive (attr := irreducible)
"Addition in an `addLocalization` is defined as `⟨a, b⟩ + ⟨c, d⟩ = ⟨a + c, b + d⟩`.
Should not be confused with the ring localization counterpart `Localization.add`, which maps
`⟨a, b⟩ + ⟨c, d⟩` to `⟨d * a + b * c, b * d⟩`."]
protected def mul : Localization S → Localization S → Localization S := (r S).commMonoid.mul
#align localization.mul Localization.mul
#align add_localization.add addLocalization.add
@[to_additive]
instance : Mul (Localization S) := ⟨Localization.mul S⟩
/-- The identity element of a `Localization` is defined as `⟨1, 1⟩`. -/
@[to_additive (attr := irreducible)
"The identity element of an `addLocalization` is defined as `⟨0, 0⟩`.
Should not be confused with the ring localization counterpart `Localization.zero`,
which is defined as `⟨0, 1⟩`."]
-- Porting note: replaced irreducible_def by @[irreducible] to prevent an error with protected
protected def one : Localization S := (r S).commMonoid.one
#align localization.one Localization.one
#align add_localization.zero addLocalization.zero
@[to_additive]
instance : One (Localization S) := ⟨Localization.one S⟩
/-- Exponentiation in a `Localization` is defined as `⟨a, b⟩ ^ n = ⟨a ^ n, b ^ n⟩`.
This is a separate `irreducible` def to ensure the elaborator doesn't waste its time
trying to unify some huge recursive definition with itself, but unfolded one step less.
-/
@[to_additive (attr := irreducible)
"Multiplication with a natural in an `AddLocalization` is defined as
`n • ⟨a, b⟩ = ⟨n • a, n • b⟩`.
This is a separate `irreducible` def to ensure the elaborator doesn't waste its time
trying to unify some huge recursive definition with itself, but unfolded one step less."]
-- Porting note: replaced irreducible_def by @[irreducible] to prevent an error with protected
protected def npow : ℕ → Localization S → Localization S := (r S).commMonoid.npow
#align localization.npow Localization.npow
#align add_localization.nsmul addLocalization.nsmul
-- Porting note: remove the attribute `local` because of error:
-- invalid attribute 'semireducible', must be global
attribute [semireducible] Localization.mul Localization.one Localization.npow
@[to_additive]
instance : CommMonoid (Localization S) where
mul := (· * ·)
one := 1
mul_assoc :=
show ∀ x y z : Localization S, x * y * z = x * (y * z) from (r S).commMonoid.mul_assoc
mul_comm := show ∀ x y : Localization S, x * y = y * x from (r S).commMonoid.mul_comm
mul_one := show ∀ x : Localization S, x * 1 = x from (r S).commMonoid.mul_one
one_mul := show ∀ x : Localization S, 1 * x = x from (r S).commMonoid.one_mul
npow := Localization.npow S
npow_zero :=
show ∀ x : Localization S, Localization.npow S 0 x = 1 from (r S).commMonoid.npow_zero
npow_succ :=
show ∀ (n : ℕ) (x : Localization S), Localization.npow S n.succ x = x * Localization.npow S n x
from (r S).commMonoid.npow_succ
variable {S}
/-- Given a `CommMonoid` `M` and submonoid `S`, `mk` sends `x : M`, `y ∈ S` to the equivalence
class of `(x, y)` in the localization of `M` at `S`. -/
@[to_additive
"Given an `AddCommMonoid` `M` and submonoid `S`, `mk` sends `x : M`, `y ∈ S` to
the equivalence class of `(x, y)` in the localization of `M` at `S`."]
def mk (x : M) (y : S) : Localization S := (r S).mk' (x, y)
#align localization.mk Localization.mk
#align add_localization.mk addLocalization.mk
@[to_additive]
theorem mk_eq_mk_iff {a c : M} {b d : S} : mk a b = mk c d ↔ r S ⟨a, b⟩ ⟨c, d⟩ := (r S).eq
#align localization.mk_eq_mk_iff Localization.mk_eq_mk_iff
#align add_localization.mk_eq_mk_iff addLocalization.mk_eq_mk_iff
universe u
/-- Dependent recursion principle for `Localizations`: given elements `f a b : p (mk a b)`
for all `a b`, such that `r S (a, b) (c, d)` implies `f a b = f c d` (with the correct coercions),
then `f` is defined on the whole `Localization S`. -/
@[to_additive (attr := elab_as_elim)
"Dependent recursion principle for `addLocalizations`: given elements `f a b : p (mk a b)`
for all `a b`, such that `r S (a, b) (c, d)` implies `f a b = f c d` (with the correct coercions),
then `f` is defined on the whole `addLocalization S`."]
def rec {p : Localization S → Sort u} (f : ∀ (a : M) (b : S), p (mk a b))
(H : ∀ {a c : M} {b d : S} (h : r S (a, b) (c, d)),
(Eq.ndrec (f a b) (mk_eq_mk_iff.mpr h) : p (mk c d)) = f c d) (x) : p x :=
Quot.rec (fun y ↦ Eq.ndrec (f y.1 y.2) (by rfl)) (fun y z h ↦ by cases y ; cases z ; exact H h) x
#align localization.rec Localization.rec
#align add_localization.rec addLocalization.rec
@[to_additive]
theorem mk_mul (a c : M) (b d : S) : mk a b * mk c d = mk (a * c) (b * d) := rfl
#align localization.mk_mul Localization.mk_mul
#align add_localization.mk_add addLocalization.mk_add
@[to_additive]
theorem mk_one : mk 1 (1 : S) = 1 := rfl
#align localization.mk_one Localization.mk_one
#align add_localization.mk_zero addLocalization.mk_zero
@[to_additive]
theorem mk_pow (n : ℕ) (a : M) (b : S) : mk a b ^ n = mk (a ^ n) (b ^ n) := rfl
#align localization.mk_pow Localization.mk_pow
#align add_localization.mk_nsmul addLocalization.mk_nsmul
-- Porting note: mathport translated `rec` to `ndrec` in the name of this lemma
@[to_additive (attr := simp)]
theorem ndrec_mk {p : Localization S → Sort u} (f : ∀ (a : M) (b : S), p (mk a b)) (H) (a : M)
(b : S) : (rec f H (mk a b) : p (mk a b)) = f a b := rfl
#align localization.rec_mk Localization.ndrec_mk
#align add_localization.rec_mk addLocalization.ndrec_mk
/-- Non-dependent recursion principle for localizations: given elements `f a b : p`
for all `a b`, such that `r S (a, b) (c, d)` implies `f a b = f c d`,
then `f` is defined on the whole `Localization S`. -/
-- Porting note: the attribute `elab_as_elim` fails with `unexpected eliminator resulting type p`
-- @[to_additive (attr := elab_as_elim)
@[to_additive
"Non-dependent recursion principle for `add_localizations`: given elements `f a b : p`
for all `a b`, such that `r S (a, b) (c, d)` implies `f a b = f c d`,
then `f` is defined on the whole `Localization S`."]
def liftOn {p : Sort u} (x : Localization S) (f : M → S → p)
(H : ∀ {a c : M} {b d : S} (_ : r S (a, b) (c, d)), f a b = f c d) : p :=
rec f (fun h ↦ (by simpa only [eq_rec_constant] using H h)) x
#align localization.lift_on Localization.liftOn
#align add_localization.lift_on addLocalization.liftOn
@[to_additive]
theorem liftOn_mk {p : Sort u} (f : ∀ (_a : M) (_b : S), p) (H) (a : M) (b : S) :
liftOn (mk a b) f H = f a b := rfl
#align localization.lift_on_mk Localization.liftOn_mk
#align add_localization.lift_on_mk addLocalization.liftOn_mk
@[to_additive (attr := elab_as_elim)]
theorem ind {p : Localization S → Prop} (H : ∀ y : M × S, p (mk y.1 y.2)) (x) : p x :=
rec (fun a b ↦ H (a, b)) (fun _ ↦ rfl) x
#align localization.ind Localization.ind
#align add_localization.ind addLocalization.ind
@[to_additive (attr := elab_as_elim)]
theorem induction_on {p : Localization S → Prop} (x) (H : ∀ y : M × S, p (mk y.1 y.2)) : p x :=
ind H x
#align localization.induction_on Localization.induction_on
#align add_localization.induction_on addLocalization.induction_on
/-- Non-dependent recursion principle for localizations: given elements `f x y : p`
for all `x` and `y`, such that `r S x x'` and `r S y y'` implies `f x y = f x' y'`,
then `f` is defined on the whole `Localization S`. -/
-- Porting note: the attribute `elab_as_elim` fails with `unexpected eliminator resulting type p`
-- @[to_additive (attr := elab_as_elim)
@[to_additive
"Non-dependent recursion principle for localizations: given elements `f x y : p`
for all `x` and `y`, such that `r S x x'` and `r S y y'` implies `f x y = f x' y'`,
then `f` is defined on the whole `Localization S`."]
def liftOn₂ {p : Sort u} (x y : Localization S) (f : M → S → M → S → p)
(H : ∀ {a a' b b' c c' d d'} (_ : r S (a, b) (a', b')) (_ : r S (c, d) (c', d')),
f a b c d = f a' b' c' d') : p :=
liftOn x (fun a b ↦ liftOn y (f a b) fun hy ↦ H ((r S).refl _) hy) fun hx ↦
induction_on y fun ⟨_, _⟩ ↦ H hx ((r S).refl _)
#align localization.lift_on₂ Localization.liftOn₂
#align add_localization.lift_on₂ addLocalization.liftOn₂
@[to_additive]
theorem liftOn₂_mk {p : Sort _} (f : M → S → M → S → p) (H) (a c : M) (b d : S) :
liftOn₂ (mk a b) (mk c d) f H = f a b c d := rfl
#align localization.lift_on₂_mk Localization.liftOn₂_mk
#align add_localization.lift_on₂_mk addLocalization.liftOn₂_mk
@[to_additive (attr := elab_as_elim)]
theorem induction_on₂ {p : Localization S → Localization S → Prop} (x y)
(H : ∀ x y : M × S, p (mk x.1 x.2) (mk y.1 y.2)) : p x y :=
induction_on x fun x ↦ induction_on y <| H x
#align localization.induction_on₂ Localization.induction_on₂
#align add_localization.induction_on₂ addLocalization.induction_on₂
@[to_additive (attr := elab_as_elim)]
theorem induction_on₃ {p : Localization S → Localization S → Localization S → Prop} (x y z)
(H : ∀ x y z : M × S, p (mk x.1 x.2) (mk y.1 y.2) (mk z.1 z.2)) : p x y z :=
induction_on₂ x y fun x y ↦ induction_on z <| H x y
#align localization.induction_on₃ Localization.induction_on₃
#align add_localization.induction_on₃ addLocalization.induction_on₃
@[to_additive]
theorem one_rel (y : S) : r S 1 (y, y) := fun _ hb ↦ hb y
#align localization.one_rel Localization.one_rel
#align add_localization.zero_rel addLocalization.zero_rel
@[to_additive]
theorem r_of_eq {x y : M × S} (h : ↑y.2 * x.1 = ↑x.2 * y.1) : r S x y :=
r_iff_exists.2 ⟨1, by rw [h]⟩
#align localization.r_of_eq Localization.r_of_eq
#align add_localization.r_of_eq addLocalization.r_of_eq
@[to_additive]
theorem mk_self (a : S) : mk (a : M) a = 1 := by
symm
rw [← mk_one, mk_eq_mk_iff]
exact one_rel a
#align localization.mk_self Localization.mk_self
#align add_localization.mk_self addLocalization.mk_self
section Scalar
variable {R R₁ R₂ : Type _}
/-- Scalar multiplication in a monoid localization is defined as `c • ⟨a, b⟩ = ⟨c • a, b⟩`. -/
-- Porting note: replaced irreducible_def by @[irreducible] to prevent an error with protected
@[irreducible]
protected def smul [SMul R M] [IsScalarTower R M M] (c : R) (z : Localization S) :
Localization S :=
Localization.liftOn z (fun a b ↦ mk (c • a) b)
(fun {a a' b b'} h ↦ mk_eq_mk_iff.2 (by
cases' b with b hb
cases' b' with b' hb'
rw [r_eq_r'] at h ⊢
cases' h with t ht
use t
dsimp only [Subtype.coe_mk] at ht ⊢
-- TODO: this definition should take `SMulCommClass R M M` instead of `IsScalarTower R M M` if
-- we ever want to generalize to the non-commutative case.
haveI : SMulCommClass R M M :=
⟨fun r m₁ m₂ ↦ by simp_rw [smul_eq_mul, mul_comm m₁, smul_mul_assoc]⟩
simp only [mul_smul_comm, ht]))
#align localization.smul Localization.smul
instance [SMul R M] [IsScalarTower R M M] : SMul R (Localization S) where smul := Localization.smul
theorem smul_mk [SMul R M] [IsScalarTower R M M] (c : R) (a b) :
c • (mk a b : Localization S) = mk (c • a) b := by
delta HSMul.hSMul instHSMul SMul.smul instSMulLocalization Localization.smul
show liftOn (mk a b) (fun a b => mk (c • a) b) _ = _
exact liftOn_mk (fun a b => mk (c • a) b) _ a b
#align localization.smul_mk Localization.smul_mk
instance [SMul R₁ M] [SMul R₂ M] [IsScalarTower R₁ M M] [IsScalarTower R₂ M M]
[SMulCommClass R₁ R₂ M] : SMulCommClass R₁ R₂ (Localization S) where
smul_comm s t := Localization.ind <| Prod.rec fun r x ↦ by simp only [smul_mk, smul_comm s t r]
instance [SMul R₁ M] [SMul R₂ M] [IsScalarTower R₁ M M] [IsScalarTower R₂ M M] [SMul R₁ R₂]
[IsScalarTower R₁ R₂ M] : IsScalarTower R₁ R₂ (Localization S) where
smul_assoc s t := Localization.ind <| Prod.rec fun r x ↦ by simp only [smul_mk, smul_assoc s t r]
instance smulCommClass_right {R : Type _} [SMul R M] [IsScalarTower R M M] :
SMulCommClass R (Localization S) (Localization S) where
smul_comm s :=
Localization.ind <|
Prod.rec fun r₁ x₁ ↦
Localization.ind <|
Prod.rec fun r₂ x₂ ↦ by
simp only [smul_mk, smul_eq_mul, mk_mul, mul_comm r₁, smul_mul_assoc]
#align localization.smul_comm_class_right Localization.smulCommClass_right
instance isScalarTower_right {R : Type _} [SMul R M] [IsScalarTower R M M] :
IsScalarTower R (Localization S) (Localization S) where
smul_assoc s :=
Localization.ind <|
Prod.rec fun r₁ x₁ ↦
Localization.ind <|
Prod.rec fun r₂ x₂ ↦ by simp only [smul_mk, smul_eq_mul, mk_mul, smul_mul_assoc]
#align localization.is_scalar_tower_right Localization.isScalarTower_right
instance [SMul R M] [SMul Rᵐᵒᵖ M] [IsScalarTower R M M] [IsScalarTower Rᵐᵒᵖ M M]
[IsCentralScalar R M] : IsCentralScalar R (Localization S) where
op_smul_eq_smul s :=
Localization.ind <| Prod.rec fun r x ↦ by simp only [smul_mk, op_smul_eq_smul]
instance [Monoid R] [MulAction R M] [IsScalarTower R M M] : MulAction R (Localization S) where
one_smul :=
Localization.ind <|
Prod.rec <| by
intros
simp only [Localization.smul_mk, one_smul]
mul_smul s₁ s₂ :=
Localization.ind <|
Prod.rec <| by
intros
simp only [Localization.smul_mk, mul_smul]
instance [Monoid R] [MulDistribMulAction R M] [IsScalarTower R M M] :
MulDistribMulAction R (Localization S) where
smul_one s := by simp only [← Localization.mk_one, Localization.smul_mk, smul_one]
smul_mul s x y :=
Localization.induction_on₂ x y <|
Prod.rec fun r₁ x₁ ↦
Prod.rec fun r₂ x₂ ↦ by simp only [Localization.smul_mk, Localization.mk_mul, smul_mul']
end Scalar
end Localization
variable {S N}
namespace MonoidHom
/-- Makes a localization map from a `CommMonoid` hom satisfying the characteristic predicate. -/
@[to_additive
"Makes a localization map from an `AddCommMonoid` hom satisfying the characteristic predicate."]
def toLocalizationMap (f : M →* N) (H1 : ∀ y : S, IsUnit (f y))
(H2 : ∀ z, ∃ x : M × S, z * f x.2 = f x.1) (H3 : ∀ x y, f x = f y ↔ ∃ c : S, ↑c * x = ↑c * y) :
Submonoid.LocalizationMap S N :=
{ f with
map_units' := H1
surj' := H2
eq_iff_exists' := H3 }
#align monoid_hom.to_localization_map MonoidHom.toLocalizationMap
#align add_monoid_hom.to_localization_map AddMonoidHom.toLocalizationMap
end MonoidHom
namespace Submonoid
namespace LocalizationMap
/-- Short for `toMonoidHom`; used to apply a localization map as a function. -/
@[to_additive "Short for `toAddMonoidHom`; used to apply a localization map as a function."]
abbrev toMap (f : LocalizationMap S N) := f.toMonoidHom
#align submonoid.localization_map.to_map Submonoid.LocalizationMap.toMap
#align add_submonoid.localization_map.to_map AddSubmonoid.LocalizationMap.toMap
@[to_additive (attr := ext)]
theorem ext {f g : LocalizationMap S N} (h : ∀ x, f.toMap x = g.toMap x) : f = g := by
rcases f with ⟨⟨⟩⟩
rcases g with ⟨⟨⟩⟩
simp only [mk.injEq, MonoidHom.mk.injEq]
exact OneHom.ext h
#align submonoid.localization_map.ext Submonoid.LocalizationMap.ext
#align add_submonoid.localization_map.ext AddSubmonoid.LocalizationMap.ext
@[to_additive]
theorem ext_iff {f g : LocalizationMap S N} : f = g ↔ ∀ x, f.toMap x = g.toMap x :=
⟨fun h _ ↦ h ▸ rfl, ext⟩
#align submonoid.localization_map.ext_iff Submonoid.LocalizationMap.ext_iff
#align add_submonoid.localization_map.ext_iff AddSubmonoid.LocalizationMap.ext_iff
@[to_additive]
theorem toMap_injective : Function.Injective (@LocalizationMap.toMap _ _ S N _) :=
fun _ _ h ↦ ext <| FunLike.ext_iff.1 h
#align submonoid.localization_map.to_map_injective Submonoid.LocalizationMap.toMap_injective
#align add_submonoid.localization_map.to_map_injective AddSubmonoid.LocalizationMap.toMap_injective
@[to_additive]
theorem map_units (f : LocalizationMap S N) (y : S) : IsUnit (f.toMap y) :=
f.2 y
#align submonoid.localization_map.map_units Submonoid.LocalizationMap.map_units
#align add_submonoid.localization_map.map_add_units AddSubmonoid.LocalizationMap.map_addUnits
@[to_additive]
theorem surj (f : LocalizationMap S N) (z : N) : ∃ x : M × S, z * f.toMap x.2 = f.toMap x.1 :=
f.3 z
#align submonoid.localization_map.surj Submonoid.LocalizationMap.surj
#align add_submonoid.localization_map.surj AddSubmonoid.LocalizationMap.surj
@[to_additive]
theorem eq_iff_exists (f : LocalizationMap S N) {x y} :
f.toMap x = f.toMap y ↔ ∃ c : S, ↑c * x = c * y := f.4 x y
#align submonoid.localization_map.eq_iff_exists Submonoid.LocalizationMap.eq_iff_exists
#align add_submonoid.localization_map.eq_iff_exists AddSubmonoid.LocalizationMap.eq_iff_exists
/-- Given a localization map `f : M →* N`, a section function sending `z : N` to some
`(x, y) : M × S` such that `f x * (f y)⁻¹ = z`. -/
@[to_additive
"Given a localization map `f : M →+ N`, a section function sending `z : N`
to some `(x, y) : M × S` such that `f x - f y = z`."]
noncomputable def sec (f : LocalizationMap S N) (z : N) : M × S := Classical.choose <| f.surj z
#align submonoid.localization_map.sec Submonoid.LocalizationMap.sec
#align add_submonoid.localization_map.sec AddSubmonoid.LocalizationMap.sec
@[to_additive]
theorem sec_spec {f : LocalizationMap S N} (z : N) :
z * f.toMap (f.sec z).2 = f.toMap (f.sec z).1 := Classical.choose_spec <| f.surj z
#align submonoid.localization_map.sec_spec Submonoid.LocalizationMap.sec_spec
#align add_submonoid.localization_map.sec_spec AddSubmonoid.LocalizationMap.sec_spec
@[to_additive]
theorem sec_spec' {f : LocalizationMap S N} (z : N) :
f.toMap (f.sec z).1 = f.toMap (f.sec z).2 * z := by rw [mul_comm, sec_spec]
#align submonoid.localization_map.sec_spec' Submonoid.LocalizationMap.sec_spec'
#align add_submonoid.localization_map.sec_spec' AddSubmonoid.LocalizationMap.sec_spec'
/-- Given a MonoidHom `f : M →* N` and Submonoid `S ⊆ M` such that `f(S) ⊆ Nˣ`, for all
`w, z : N` and `y ∈ S`, we have `w * (f y)⁻¹ = z ↔ w = f y * z`. -/
@[to_additive
"Given an AddMonoidHom `f : M →+ N` and Submonoid `S ⊆ M` such that
`f(S) ⊆ AddUnits N`, for all `w, z : N` and `y ∈ S`, we have `w - f y = z ↔ w = f y + z`."]
theorem mul_inv_left {f : M →* N} (h : ∀ y : S, IsUnit (f y)) (y : S) (w z : N) :
w * (IsUnit.liftRight (f.restrict S) h y)⁻¹ = z ↔ w = f y * z := by
rw [mul_comm]
exact Units.inv_mul_eq_iff_eq_mul (IsUnit.liftRight (f.restrict S) h y)
#align submonoid.localization_map.mul_inv_left Submonoid.LocalizationMap.mul_inv_left
#align add_submonoid.localization_map.add_neg_left AddSubmonoid.LocalizationMap.add_neg_left
/-- Given a MonoidHom `f : M →* N` and Submonoid `S ⊆ M` such that `f(S) ⊆ Nˣ`, for all
`w, z : N` and `y ∈ S`, we have `z = w * (f y)⁻¹ ↔ z * f y = w`. -/
@[to_additive
"Given an AddMonoidHom `f : M →+ N` and Submonoid `S ⊆ M` such that
`f(S) ⊆ AddUnits N`, for all `w, z : N` and `y ∈ S`, we have `z = w - f y ↔ z + f y = w`."]
theorem mul_inv_right {f : M →* N} (h : ∀ y : S, IsUnit (f y)) (y : S) (w z : N) :
z = w * (IsUnit.liftRight (f.restrict S) h y)⁻¹ ↔ z * f y = w := by
rw [eq_comm, mul_inv_left h, mul_comm, eq_comm]
#align submonoid.localization_map.mul_inv_right Submonoid.LocalizationMap.mul_inv_right
#align add_submonoid.localization_map.add_neg_right AddSubmonoid.LocalizationMap.add_neg_right
/-- Given a MonoidHom `f : M →* N` and Submonoid `S ⊆ M` such that
`f(S) ⊆ Nˣ`, for all `x₁ x₂ : M` and `y₁, y₂ ∈ S`, we have
`f x₁ * (f y₁)⁻¹ = f x₂ * (f y₂)⁻¹ ↔ f (x₁ * y₂) = f (x₂ * y₁)`. -/
@[to_additive (attr := simp)
"Given an AddMonoidHom `f : M →+ N` and Submonoid `S ⊆ M` such that
`f(S) ⊆ AddUnits N`, for all `x₁ x₂ : M` and `y₁, y₂ ∈ S`, we have
`f x₁ - f y₁ = f x₂ - f y₂ ↔ f (x₁ + y₂) = f (x₂ + y₁)`."]
theorem mul_inv {f : M →* N} (h : ∀ y : S, IsUnit (f y)) {x₁ x₂} {y₁ y₂ : S} :
f x₁ * (IsUnit.liftRight (f.restrict S) h y₁)⁻¹ =
f x₂ * (IsUnit.liftRight (f.restrict S) h y₂)⁻¹ ↔
f (x₁ * y₂) = f (x₂ * y₁) := by
rw [mul_inv_right h, mul_assoc, mul_comm _ (f y₂), ← mul_assoc, mul_inv_left h, mul_comm x₂,
f.map_mul, f.map_mul]
#align submonoid.localization_map.mul_inv Submonoid.LocalizationMap.mul_inv
#align add_submonoid.localization_map.add_neg AddSubmonoid.LocalizationMap.add_neg
/-- Given a MonoidHom `f : M →* N` and Submonoid `S ⊆ M` such that `f(S) ⊆ Nˣ`, for all
`y, z ∈ S`, we have `(f y)⁻¹ = (f z)⁻¹ → f y = f z`. -/
@[to_additive
"Given an AddMonoidHom `f : M →+ N` and Submonoid `S ⊆ M` such that
`f(S) ⊆ AddUnits N`, for all `y, z ∈ S`, we have `- (f y) = - (f z) → f y = f z`."]
theorem inv_inj {f : M →* N} (hf : ∀ y : S, IsUnit (f y)) {y z : S}
(h : (IsUnit.liftRight (f.restrict S) hf y)⁻¹ = (IsUnit.liftRight (f.restrict S) hf z)⁻¹) :
f y = f z := by
rw [← mul_one (f y), eq_comm, ← mul_inv_left hf y (f z) 1, h]
exact Units.inv_mul (IsUnit.liftRight (f.restrict S) hf z)⁻¹
#align submonoid.localization_map.inv_inj Submonoid.LocalizationMap.inv_inj
#align add_submonoid.localization_map.neg_inj AddSubmonoid.LocalizationMap.neg_inj
/-- Given a MonoidHom `f : M →* N` and Submonoid `S ⊆ M` such that `f(S) ⊆ Nˣ`, for all
`y ∈ S`, `(f y)⁻¹` is unique. -/
@[to_additive
"Given an AddMonoidHom `f : M →+ N` and Submonoid `S ⊆ M` such that
`f(S) ⊆ AddUnits N`, for all `y ∈ S`, `- (f y)` is unique."]
theorem inv_unique {f : M →* N} (h : ∀ y : S, IsUnit (f y)) {y : S} {z : N} (H : f y * z = 1) :
(IsUnit.liftRight (f.restrict S) h y)⁻¹ = z := by
rw [← one_mul _⁻¹, Units.val_mul, mul_inv_left]
exact H.symm
#align submonoid.localization_map.inv_unique Submonoid.LocalizationMap.inv_unique
#align add_submonoid.localization_map.neg_unique AddSubmonoid.LocalizationMap.neg_unique
variable (f : LocalizationMap S N)
@[to_additive]
theorem map_right_cancel {x y} {c : S} (h : f.toMap (c * x) = f.toMap (c * y)) :
f.toMap x = f.toMap y := by
rw [f.toMap.map_mul, f.toMap.map_mul] at h
cases' f.map_units c with u hu
rw [← hu] at h
exact (Units.mul_right_inj u).1 h
#align submonoid.localization_map.map_right_cancel Submonoid.LocalizationMap.map_right_cancel
#align add_submonoid.localization_map.map_right_cancel AddSubmonoid.LocalizationMap.map_right_cancel
@[to_additive]
theorem map_left_cancel {x y} {c : S} (h : f.toMap (x * c) = f.toMap (y * c)) :
f.toMap x = f.toMap y :=
f.map_right_cancel <| by rw [mul_comm _ x, mul_comm _ y, h]
#align submonoid.localization_map.map_left_cancel Submonoid.LocalizationMap.map_left_cancel
#align add_submonoid.localization_map.map_left_cancel AddSubmonoid.LocalizationMap.map_left_cancel
/-- Given a localization map `f : M →* N`, the surjection sending `(x, y) : M × S` to
`f x * (f y)⁻¹`. -/
@[to_additive
"Given a localization map `f : M →+ N`, the surjection sending `(x, y) : M × S`
to `f x - f y`."]
noncomputable def mk' (f : LocalizationMap S N) (x : M) (y : S) : N :=
f.toMap x * ↑(IsUnit.liftRight (f.toMap.restrict S) f.map_units y)⁻¹
#align submonoid.localization_map.mk' Submonoid.LocalizationMap.mk'
#align add_submonoid.localization_map.mk' AddSubmonoid.LocalizationMap.mk'
@[to_additive]
theorem mk'_mul (x₁ x₂ : M) (y₁ y₂ : S) : f.mk' (x₁ * x₂) (y₁ * y₂) = f.mk' x₁ y₁ * f.mk' x₂ y₂ :=
(mul_inv_left f.map_units _ _ _).2 <|
show _ = _ * (_ * _ * (_ * _)) by
rw [← mul_assoc, ← mul_assoc, mul_inv_right f.map_units, mul_assoc, mul_assoc,
mul_comm _ (f.toMap x₂), ← mul_assoc, ← mul_assoc, mul_inv_right f.map_units,
Submonoid.coe_mul, f.toMap.map_mul, f.toMap.map_mul]
ac_rfl
#align submonoid.localization_map.mk'_mul Submonoid.LocalizationMap.mk'_mul
#align add_submonoid.localization_map.mk'_add AddSubmonoid.LocalizationMap.mk'_add
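-- Illustrative `example` (a sketch, not part of the interface): read right-to-left,
-- `mk'_mul` says fractions multiply numerator-by-numerator and denominator-by-denominator.
example (x₁ x₂ : M) (y₁ y₂ : S) :
    f.mk' x₁ y₁ * f.mk' x₂ y₂ = f.mk' (x₁ * x₂) (y₁ * y₂) :=
  (f.mk'_mul x₁ x₂ y₁ y₂).symm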
@[to_additive]
theorem mk'_one (x) : f.mk' x (1 : S) = f.toMap x := by
rw [mk', MonoidHom.map_one]
exact mul_one _
#align submonoid.localization_map.mk'_one Submonoid.LocalizationMap.mk'_one
#align add_submonoid.localization_map.mk'_zero AddSubmonoid.LocalizationMap.mk'_zero
/-- Given a localization map `f : M →* N` for a submonoid `S ⊆ M`, for all `z : N` we have that if
`x : M, y ∈ S` are such that `z * f y = f x`, then `f x * (f y)⁻¹ = z`. -/
@[to_additive (attr := simp)
"Given a localization map `f : M →+ N` for a Submonoid `S ⊆ M`, for all `z : N`
we have that if `x : M, y ∈ S` are such that `z + f y = f x`, then `f x - f y = z`."]
theorem mk'_sec (z : N) : f.mk' (f.sec z).1 (f.sec z).2 = z :=
show _ * _ = _ by rw [← sec_spec, mul_inv_left, mul_comm]
#align submonoid.localization_map.mk'_sec Submonoid.LocalizationMap.mk'_sec
#align add_submonoid.localization_map.mk'_sec AddSubmonoid.LocalizationMap.mk'_sec
@[to_additive]
theorem mk'_surjective (z : N) : ∃ (x : _) (y : S), f.mk' x y = z :=
⟨(f.sec z).1, (f.sec z).2, f.mk'_sec z⟩
#align submonoid.localization_map.mk'_surjective Submonoid.LocalizationMap.mk'_surjective
#align add_submonoid.localization_map.mk'_surjective AddSubmonoid.LocalizationMap.mk'_surjective
@[to_additive]
theorem mk'_spec (x) (y : S) : f.mk' x y * f.toMap y = f.toMap x :=
show _ * _ * _ = _ by rw [mul_assoc, mul_comm _ (f.toMap y), ← mul_assoc, mul_inv_left, mul_comm]
#align submonoid.localization_map.mk'_spec Submonoid.LocalizationMap.mk'_spec
#align add_submonoid.localization_map.mk'_spec AddSubmonoid.LocalizationMap.mk'_spec
@[to_additive]
theorem mk'_spec' (x) (y : S) : f.toMap y * f.mk' x y = f.toMap x := by rw [mul_comm, mk'_spec]
#align submonoid.localization_map.mk'_spec' Submonoid.LocalizationMap.mk'_spec'
#align add_submonoid.localization_map.mk'_spec' AddSubmonoid.LocalizationMap.mk'_spec'
@[to_additive]
theorem eq_mk'_iff_mul_eq {x} {y : S} {z} : z = f.mk' x y ↔ z * f.toMap y = f.toMap x :=
⟨fun H ↦ by rw [H, mk'_spec], fun H ↦ by erw [mul_inv_right, H]⟩
#align submonoid.localization_map.eq_mk'_iff_mul_eq Submonoid.LocalizationMap.eq_mk'_iff_mul_eq
#align add_submonoid.localization_map.eq_mk'_iff_add_eq
AddSubmonoid.LocalizationMap.eq_mk'_iff_add_eq
@[to_additive]
theorem mk'_eq_iff_eq_mul {x} {y : S} {z} : f.mk' x y = z ↔ f.toMap x = z * f.toMap y := by
rw [eq_comm, eq_mk'_iff_mul_eq, eq_comm]
#align submonoid.localization_map.mk'_eq_iff_eq_mul Submonoid.LocalizationMap.mk'_eq_iff_eq_mul
#align add_submonoid.localization_map.mk'_eq_iff_eq_add AddSubmonoid.LocalizationMap.mk'_eq_iff_eq_add
@[to_additive]
theorem mk'_eq_iff_eq {x₁ x₂} {y₁ y₂ : S} :
f.mk' x₁ y₁ = f.mk' x₂ y₂ ↔ f.toMap (y₂ * x₁) = f.toMap (y₁ * x₂) :=
⟨fun H ↦ by
rw [f.toMap.map_mul, f.toMap.map_mul, f.mk'_eq_iff_eq_mul.1 H, ← mul_assoc, mk'_spec',
mul_comm ((toMap f) x₂) _],
fun H ↦ by
rw [mk'_eq_iff_eq_mul, mk', mul_assoc, mul_comm _ (f.toMap y₁), ← mul_assoc, ←
f.toMap.map_mul, mul_comm x₂, ← H, ← mul_comm x₁, f.toMap.map_mul,
mul_inv_right f.map_units]⟩
#align submonoid.localization_map.mk'_eq_iff_eq Submonoid.LocalizationMap.mk'_eq_iff_eq
#align add_submonoid.localization_map.mk'_eq_iff_eq AddSubmonoid.LocalizationMap.mk'_eq_iff_eq
@[to_additive]
theorem mk'_eq_iff_eq' {x₁ x₂} {y₁ y₂ : S} :
f.mk' x₁ y₁ = f.mk' x₂ y₂ ↔ f.toMap (x₁ * y₂) = f.toMap (x₂ * y₁) := by
simp only [f.mk'_eq_iff_eq, mul_comm]
#align submonoid.localization_map.mk'_eq_iff_eq' Submonoid.LocalizationMap.mk'_eq_iff_eq'
#align add_submonoid.localization_map.mk'_eq_iff_eq' AddSubmonoid.LocalizationMap.mk'_eq_iff_eq'
@[to_additive]
protected theorem eq {a₁ b₁} {a₂ b₂ : S} :
f.mk' a₁ a₂ = f.mk' b₁ b₂ ↔ ∃ c : S, ↑c * (↑b₂ * a₁) = c * (a₂ * b₁) :=
f.mk'_eq_iff_eq.trans <| f.eq_iff_exists
#align submonoid.localization_map.eq Submonoid.LocalizationMap.eq
#align add_submonoid.localization_map.eq AddSubmonoid.LocalizationMap.eq
@[to_additive]
protected theorem eq' {a₁ b₁} {a₂ b₂ : S} :
f.mk' a₁ a₂ = f.mk' b₁ b₂ ↔ Localization.r S (a₁, a₂) (b₁, b₂) := by
rw [f.eq, Localization.r_iff_exists]
#align submonoid.localization_map.eq' Submonoid.LocalizationMap.eq'
#align add_submonoid.localization_map.eq' AddSubmonoid.LocalizationMap.eq'
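-- Illustrative `example` (a sketch, not part of the interface): the localization congruence
-- `Localization.r` is exactly the condition identifying two formal fractions under `f`.
example {a₁ b₁ : M} {a₂ b₂ : S} (h : Localization.r S (a₁, a₂) (b₁, b₂)) :
    f.mk' a₁ a₂ = f.mk' b₁ b₂ :=
  f.eq'.2 h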
@[to_additive]
theorem eq_iff_eq (g : LocalizationMap S P) {x y} : f.toMap x = f.toMap y ↔ g.toMap x = g.toMap y :=
f.eq_iff_exists.trans g.eq_iff_exists.symm
#align submonoid.localization_map.eq_iff_eq Submonoid.LocalizationMap.eq_iff_eq
#align add_submonoid.localization_map.eq_iff_eq AddSubmonoid.LocalizationMap.eq_iff_eq
@[to_additive]
theorem mk'_eq_iff_mk'_eq (g : LocalizationMap S P) {x₁ x₂} {y₁ y₂ : S} :
f.mk' x₁ y₁ = f.mk' x₂ y₂ ↔ g.mk' x₁ y₁ = g.mk' x₂ y₂ :=
f.eq'.trans g.eq'.symm
#align submonoid.localization_map.mk'_eq_iff_mk'_eq Submonoid.LocalizationMap.mk'_eq_iff_mk'_eq
#align add_submonoid.localization_map.mk'_eq_iff_mk'_eq AddSubmonoid.LocalizationMap.mk'_eq_iff_mk'_eq
/-- Given a Localization map `f : M →* N` for a Submonoid `S ⊆ M`, for all `x₁ : M` and `y₁ ∈ S`,
if `x₂ : M, y₂ ∈ S` are such that `f x₁ * (f y₁)⁻¹ * f y₂ = f x₂`, then there exists `c ∈ S`
such that `c * (y₂ * x₁) = c * (y₁ * x₂)`. -/
@[to_additive
"Given a Localization map `f : M →+ N` for a Submonoid `S ⊆ M`, for all `x₁ : M`
and `y₁ ∈ S`, if `x₂ : M, y₂ ∈ S` are such that `(f x₁ - f y₁) + f y₂ = f x₂`, then there exists
`c ∈ S` such that `c + (y₂ + x₁) = c + (y₁ + x₂)`."]
theorem exists_of_sec_mk' (x) (y : S) :
∃ c : S, ↑c * (↑(f.sec <| f.mk' x y).2 * x) = c * (y * (f.sec <| f.mk' x y).1) :=
f.eq_iff_exists.1 <| f.mk'_eq_iff_eq.1 <| (mk'_sec _ _).symm
#align submonoid.localization_map.exists_of_sec_mk' Submonoid.LocalizationMap.exists_of_sec_mk'
#align add_submonoid.localization_map.exists_of_sec_mk' AddSubmonoid.LocalizationMap.exists_of_sec_mk'
@[to_additive]
theorem mk'_eq_of_eq {a₁ b₁ : M} {a₂ b₂ : S} (H : ↑a₂ * b₁ = ↑b₂ * a₁) :
f.mk' a₁ a₂ = f.mk' b₁ b₂ :=
f.mk'_eq_iff_eq.2 <| H ▸ rfl
#align submonoid.localization_map.mk'_eq_of_eq Submonoid.LocalizationMap.mk'_eq_of_eq
#align add_submonoid.localization_map.mk'_eq_of_eq AddSubmonoid.LocalizationMap.mk'_eq_of_eq
@[to_additive]
theorem mk'_eq_of_eq' {a₁ b₁ : M} {a₂ b₂ : S} (H : b₁ * ↑a₂ = a₁ * ↑b₂) :
f.mk' a₁ a₂ = f.mk' b₁ b₂ :=
f.mk'_eq_of_eq <| by simpa only [mul_comm] using H
#align submonoid.localization_map.mk'_eq_of_eq' Submonoid.LocalizationMap.mk'_eq_of_eq'
#align add_submonoid.localization_map.mk'_eq_of_eq' AddSubmonoid.LocalizationMap.mk'_eq_of_eq'
@[to_additive (attr := simp)]
theorem mk'_self' (y : S) : f.mk' (y : M) y = 1 :=
show _ * _ = _ by rw [mul_inv_left, mul_one]
#align submonoid.localization_map.mk'_self' Submonoid.LocalizationMap.mk'_self'
#align add_submonoid.localization_map.mk'_self' AddSubmonoid.LocalizationMap.mk'_self'
@[to_additive (attr := simp)]
theorem mk'_self (x) (H : x ∈ S) : f.mk' x ⟨x, H⟩ = 1 := mk'_self' f ⟨x, H⟩
#align submonoid.localization_map.mk'_self Submonoid.LocalizationMap.mk'_self
#align add_submonoid.localization_map.mk'_self AddSubmonoid.LocalizationMap.mk'_self
@[to_additive]
theorem mul_mk'_eq_mk'_of_mul (x₁ x₂) (y : S) : f.toMap x₁ * f.mk' x₂ y = f.mk' (x₁ * x₂) y := by
rw [← mk'_one, ← mk'_mul, one_mul]
#align submonoid.localization_map.mul_mk'_eq_mk'_of_mul Submonoid.LocalizationMap.mul_mk'_eq_mk'_of_mul
#align add_submonoid.localization_map.add_mk'_eq_mk'_of_add AddSubmonoid.LocalizationMap.add_mk'_eq_mk'_of_add
@[to_additive]
theorem mk'_mul_eq_mk'_of_mul (x₁ x₂) (y : S) : f.mk' x₂ y * f.toMap x₁ = f.mk' (x₁ * x₂) y := by
rw [mul_comm, mul_mk'_eq_mk'_of_mul]
#align submonoid.localization_map.mk'_mul_eq_mk'_of_mul Submonoid.LocalizationMap.mk'_mul_eq_mk'_of_mul
#align add_submonoid.localization_map.mk'_add_eq_mk'_of_add AddSubmonoid.LocalizationMap.mk'_add_eq_mk'_of_add
@[to_additive]
theorem mul_mk'_one_eq_mk' (x) (y : S) : f.toMap x * f.mk' 1 y = f.mk' x y := by
rw [mul_mk'_eq_mk'_of_mul, mul_one]
#align submonoid.localization_map.mul_mk'_one_eq_mk' Submonoid.LocalizationMap.mul_mk'_one_eq_mk'
#align add_submonoid.localization_map.add_mk'_zero_eq_mk' AddSubmonoid.LocalizationMap.add_mk'_zero_eq_mk'
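-- Illustrative `example` (a sketch, not part of the interface): combined with `mk'_self'`,
-- the previous lemma shows `f.mk' 1 y` is a right inverse of `f.toMap y` in `N`.
example (y : S) : f.toMap y * f.mk' 1 y = 1 := by
  rw [f.mul_mk'_one_eq_mk', f.mk'_self']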
@[to_additive (attr := simp)]
theorem mk'_mul_cancel_right (x : M) (y : S) : f.mk' (x * y) y = f.toMap x := by
rw [← mul_mk'_one_eq_mk', f.toMap.map_mul, mul_assoc, mul_mk'_one_eq_mk', mk'_self', mul_one]
#align submonoid.localization_map.mk'_mul_cancel_right Submonoid.LocalizationMap.mk'_mul_cancel_right
#align add_submonoid.localization_map.mk'_add_cancel_right AddSubmonoid.LocalizationMap.mk'_add_cancel_right
@[to_additive]
theorem mk'_mul_cancel_left (x) (y : S) : f.mk' ((y : M) * x) y = f.toMap x := by
rw [mul_comm, mk'_mul_cancel_right]
#align submonoid.localization_map.mk'_mul_cancel_left Submonoid.LocalizationMap.mk'_mul_cancel_left
#align add_submonoid.localization_map.mk'_add_cancel_left
AddSubmonoid.LocalizationMap.mk'_add_cancel_left
@[to_additive]
theorem isUnit_comp (j : N →* P) (y : S) : IsUnit (j.comp f.toMap y) :=
⟨Units.map j <| IsUnit.liftRight (f.toMap.restrict S) f.map_units y,
show j _ = j _ from congr_arg j <| IsUnit.coe_liftRight (f.toMap.restrict S) f.map_units _⟩
#align submonoid.localization_map.is_unit_comp Submonoid.LocalizationMap.isUnit_comp
#align add_submonoid.localization_map.is_add_unit_comp
AddSubmonoid.LocalizationMap.isAddUnit_comp
variable {g : M →* P}
/-- Given a Localization map `f : M →* N` for a Submonoid `S ⊆ M` and a map of `CommMonoid`s
`g : M →* P` such that `g(S) ⊆ Units P`, `f x = f y → g x = g y` for all `x y : M`. -/
@[to_additive
"Given a Localization map `f : M →+ N` for a Submonoid `S ⊆ M` and a map of
`AddCommMonoid`s `g : M →+ P` such that `g(S) ⊆ AddUnits P`, `f x = f y → g x = g y`
for all `x y : M`."]
theorem eq_of_eq (hg : ∀ y : S, IsUnit (g y)) {x y} (h : f.toMap x = f.toMap y) : g x = g y := by
obtain ⟨c, hc⟩ := f.eq_iff_exists.1 h
rw [← one_mul (g x), ← IsUnit.liftRight_inv_mul (g.restrict S) hg c]
show _ * g c * _ = _
rw [mul_assoc, ← g.map_mul, hc, mul_comm, mul_inv_left hg, g.map_mul]
#align submonoid.localization_map.eq_of_eq Submonoid.LocalizationMap.eq_of_eq
#align add_submonoid.localization_map.eq_of_eq AddSubmonoid.LocalizationMap.eq_of_eq
/-- Given `CommMonoid`s `M, P`, Localization maps `f : M →* N, k : P →* Q` for Submonoids
`S, T` respectively, and `g : M →* P` such that `g(S) ⊆ T`, `f x = f y` implies
`k (g x) = k (g y)`. -/
@[to_additive
"Given `AddCommMonoid`s `M, P`, Localization maps `f : M →+ N, k : P →+ Q` for Submonoids
`S, T` respectively, and `g : M →+ P` such that `g(S) ⊆ T`, `f x = f y`
implies `k (g x) = k (g y)`."]
theorem comp_eq_of_eq {T : Submonoid P} {Q : Type _} [CommMonoid Q] (hg : ∀ y : S, g y ∈ T)
(k : LocalizationMap T Q) {x y} (h : f.toMap x = f.toMap y) : k.toMap (g x) = k.toMap (g y) :=
f.eq_of_eq (fun y : S ↦ show IsUnit (k.toMap.comp g y) from k.map_units ⟨g y, hg y⟩) h
#align submonoid.localization_map.comp_eq_of_eq Submonoid.LocalizationMap.comp_eq_of_eq
#align add_submonoid.localization_map.comp_eq_of_eq AddSubmonoid.LocalizationMap.comp_eq_of_eq
variable (hg : ∀ y : S, IsUnit (g y))
/-- Given a Localization map `f : M →* N` for a Submonoid `S ⊆ M` and a map of `CommMonoid`s
`g : M →* P` such that `g y` is invertible for all `y : S`, the homomorphism induced from
`N` to `P` sending `z : N` to `g x * (g y)⁻¹`, where `(x, y) : M × S` are such that
`z = f x * (f y)⁻¹`. -/
@[to_additive
"Given a localization map `f : M →+ N` for a submonoid `S ⊆ M` and a map of
`AddCommMonoid`s `g : M →+ P` such that `g y` is invertible for all `y : S`, the homomorphism
induced from `N` to `P` sending `z : N` to `g x - g y`, where `(x, y) : M × S` are such that
`z = f x - f y`."]
noncomputable def lift : N →* P where
toFun z := g (f.sec z).1 * (IsUnit.liftRight (g.restrict S) hg (f.sec z).2)⁻¹
map_one' := by rw [mul_inv_left, mul_one] ; exact f.eq_of_eq hg (by rw [← sec_spec, one_mul])
map_mul' x y :=
by
rw [mul_inv_left hg, ← mul_assoc, ← mul_assoc, mul_inv_right hg, mul_comm _ (g (f.sec y).1), ←
mul_assoc, ← mul_assoc, mul_inv_right hg]
repeat' rw [← g.map_mul]
exact f.eq_of_eq hg (by simp_rw [f.toMap.map_mul, sec_spec'] ; ac_rfl)
#align submonoid.localization_map.lift Submonoid.LocalizationMap.lift
#align add_submonoid.localization_map.lift AddSubmonoid.LocalizationMap.lift
/-- Given a Localization map `f : M →* N` for a Submonoid `S ⊆ M` and a map of `CommMonoid`s
`g : M →* P` such that `g y` is invertible for all `y : S`, the homomorphism induced from
`N` to `P` maps `f x * (f y)⁻¹` to `g x * (g y)⁻¹` for all `x : M, y ∈ S`. -/
@[to_additive
"Given a Localization map `f : M →+ N` for a Submonoid `S ⊆ M` and a map of
`AddCommMonoid`s `g : M →+ P` such that `g y` is invertible for all `y : S`, the homomorphism
induced from `N` to `P` maps `f x - f y` to `g x - g y` for all `x : M, y ∈ S`."]
theorem lift_mk' (x y) : f.lift hg (f.mk' x y) = g x * (IsUnit.liftRight (g.restrict S) hg y)⁻¹ :=
(mul_inv hg).2 <|
f.eq_of_eq hg <| by
simp_rw [f.toMap.map_mul, f.toMap.map_mul, sec_spec', mul_assoc, f.mk'_spec, mul_comm]
#align submonoid.localization_map.lift_mk' Submonoid.LocalizationMap.lift_mk'
#align add_submonoid.localization_map.lift_mk' AddSubmonoid.LocalizationMap.lift_mk'
/-- Given a Localization map `f : M →* N` for a Submonoid `S ⊆ M`, if a `CommMonoid` map
`g : M →* P` induces a map `f.lift hg : N →* P` then for all `z : N, v : P`, we have
`f.lift hg z = v ↔ g x = g y * v`, where `x : M, y ∈ S` are such that `z * f y = f x`. -/
@[to_additive
"Given a Localization map `f : M →+ N` for a Submonoid `S ⊆ M`, if an
`AddCommMonoid` map `g : M →+ P` induces a map `f.lift hg : N →+ P` then for all
`z : N, v : P`, we have `f.lift hg z = v ↔ g x = g y + v`, where `x : M, y ∈ S` are such that
`z + f y = f x`."]
theorem lift_spec (z v) : f.lift hg z = v ↔ g (f.sec z).1 = g (f.sec z).2 * v :=
mul_inv_left hg _ _ v
#align submonoid.localization_map.lift_spec Submonoid.LocalizationMap.lift_spec
#align add_submonoid.localization_map.lift_spec AddSubmonoid.LocalizationMap.lift_spec
/-- Given a Localization map `f : M →* N` for a Submonoid `S ⊆ M`, if a `CommMonoid` map
`g : M →* P` induces a map `f.lift hg : N →* P` then for all `z : N, v w : P`, we have
`f.lift hg z * w = v ↔ g x * w = g y * v`, where `x : M, y ∈ S` are such that
`z * f y = f x`. -/
@[to_additive
"Given a Localization map `f : M →+ N` for a Submonoid `S ⊆ M`, if an `AddCommMonoid` map
`g : M →+ P` induces a map `f.lift hg : N →+ P` then for all
`z : N, v w : P`, we have `f.lift hg z + w = v ↔ g x + w = g y + v`, where `x : M, y ∈ S` are such
that `z + f y = f x`."]
theorem lift_spec_mul (z w v) : f.lift hg z * w = v ↔ g (f.sec z).1 * w = g (f.sec z).2 * v := by
erw [mul_comm, ← mul_assoc, mul_inv_left hg, mul_comm]
#align submonoid.localization_map.lift_spec_mul Submonoid.LocalizationMap.lift_spec_mul
#align add_submonoid.localization_map.lift_spec_add AddSubmonoid.LocalizationMap.lift_spec_add
@[to_additive]
theorem lift_mk'_spec (x v) (y : S) : f.lift hg (f.mk' x y) = v ↔ g x = g y * v := by
rw [f.lift_mk' hg] ; exact mul_inv_left hg _ _ _
#align submonoid.localization_map.lift_mk'_spec Submonoid.LocalizationMap.lift_mk'_spec
#align add_submonoid.localization_map.lift_mk'_spec AddSubmonoid.LocalizationMap.lift_mk'_spec
/-- Given a Localization map `f : M →* N` for a Submonoid `S ⊆ M`, if a `CommMonoid` map
`g : M →* P` induces a map `f.lift hg : N →* P` then for all `z : N`, we have
`f.lift hg z * g y = g x`, where `x : M, y ∈ S` are such that `z * f y = f x`. -/
@[to_additive
"Given a Localization map `f : M →+ N` for a Submonoid `S ⊆ M`, if an `AddCommMonoid`
map `g : M →+ P` induces a map `f.lift hg : N →+ P` then for all `z : N`, we have
`f.lift hg z + g y = g x`, where `x : M, y ∈ S` are such that `z + f y = f x`."]
theorem lift_mul_right (z) : f.lift hg z * g (f.sec z).2 = g (f.sec z).1 := by
erw [mul_assoc, IsUnit.liftRight_inv_mul, mul_one]
#align submonoid.localization_map.lift_mul_right Submonoid.LocalizationMap.lift_mul_right
#align add_submonoid.localization_map.lift_add_right AddSubmonoid.LocalizationMap.lift_add_right
/-- Given a Localization map `f : M →* N` for a Submonoid `S ⊆ M`, if a `CommMonoid` map
`g : M →* P` induces a map `f.lift hg : N →* P` then for all `z : N`, we have
`g y * f.lift hg z = g x`, where `x : M, y ∈ S` are such that `z * f y = f x`. -/
@[to_additive
"Given a Localization map `f : M →+ N` for a Submonoid `S ⊆ M`, if an `AddCommMonoid` map
`g : M →+ P` induces a map `f.lift hg : N →+ P` then for all `z : N`, we have
`g y + f.lift hg z = g x`, where `x : M, y ∈ S` are such that `z + f y = f x`."]
theorem lift_mul_left (z) : g (f.sec z).2 * f.lift hg z = g (f.sec z).1 := by
rw [mul_comm, lift_mul_right]
#align submonoid.localization_map.lift_mul_left Submonoid.LocalizationMap.lift_mul_left
#align add_submonoid.localization_map.lift_add_left AddSubmonoid.LocalizationMap.lift_add_left
@[to_additive (attr := simp)]
theorem lift_eq (x : M) : f.lift hg (f.toMap x) = g x := by
rw [lift_spec, ← g.map_mul] ; exact f.eq_of_eq hg (by rw [sec_spec', f.toMap.map_mul])
#align submonoid.localization_map.lift_eq Submonoid.LocalizationMap.lift_eq
#align add_submonoid.localization_map.lift_eq AddSubmonoid.LocalizationMap.lift_eq
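-- Illustrative `example` (a sketch, not part of the interface): `lift_eq` applied twice
-- shows that `f.lift hg` carries products of images of `M` to products under `g`.
example (x y : M) : f.lift hg (f.toMap x * f.toMap y) = g x * g y := by
  rw [(f.lift hg).map_mul, f.lift_eq hg, f.lift_eq hg]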
@[to_additive]
theorem lift_eq_iff {x y : M × S} :
f.lift hg (f.mk' x.1 x.2) = f.lift hg (f.mk' y.1 y.2) ↔ g (x.1 * y.2) = g (y.1 * x.2) := by
rw [lift_mk', lift_mk', mul_inv hg]
#align submonoid.localization_map.lift_eq_iff Submonoid.LocalizationMap.lift_eq_iff
#align add_submonoid.localization_map.lift_eq_iff AddSubmonoid.LocalizationMap.lift_eq_iff
@[to_additive (attr := simp)]
theorem lift_comp : (f.lift hg).comp f.toMap = g := by ext ; exact f.lift_eq hg _
#align submonoid.localization_map.lift_comp Submonoid.LocalizationMap.lift_comp
#align add_submonoid.localization_map.lift_comp AddSubmonoid.LocalizationMap.lift_comp
@[to_additive (attr := simp)]
theorem lift_of_comp (j : N →* P) : f.lift (f.isUnit_comp j) = j := by
ext
rw [lift_spec]
show j _ = j _ * _
erw [← j.map_mul, sec_spec']
#align submonoid.localization_map.lift_of_comp Submonoid.LocalizationMap.lift_of_comp
#align add_submonoid.localization_map.lift_of_comp AddSubmonoid.LocalizationMap.lift_of_comp
@[to_additive]
theorem epic_of_localizationMap {j k : N →* P} (h : ∀ a, j.comp f.toMap a = k.comp f.toMap a) :
j = k := by
rw [← f.lift_of_comp j, ← f.lift_of_comp k]
congr 1 with x; exact h x
#align submonoid.localization_map.epic_of_localization_map
Submonoid.LocalizationMap.epic_of_localizationMap
#align add_submonoid.localization_map.epic_of_localization_map
AddSubmonoid.LocalizationMap.epic_of_localizationMap
@[to_additive]
theorem lift_unique {j : N →* P} (hj : ∀ x, j (f.toMap x) = g x) : f.lift hg = j := by
ext
rw [lift_spec, ← hj, ← hj, ← j.map_mul]
apply congr_arg
rw [← sec_spec']
#align submonoid.localization_map.lift_unique Submonoid.LocalizationMap.lift_unique
#align add_submonoid.localization_map.lift_unique AddSubmonoid.LocalizationMap.lift_unique
@[to_additive (attr := simp)]
theorem lift_id (x) : f.lift f.map_units x = x :=
FunLike.ext_iff.1 (f.lift_of_comp <| MonoidHom.id N) x
#align submonoid.localization_map.lift_id Submonoid.LocalizationMap.lift_id
#align add_submonoid.localization_map.lift_id AddSubmonoid.LocalizationMap.lift_id
/-- Given two Localization maps `f : M →* N, k : M →* P` for a Submonoid `S ⊆ M`, the hom
from `P` to `N` induced by `f` is left inverse to the hom from `N` to `P` induced by `k`. -/
@[to_additive (attr := simp)
"Given two Localization maps `f : M →+ N, k : M →+ P` for a Submonoid `S ⊆ M`, the hom
from `P` to `N` induced by `f` is left inverse to the hom from `N` to `P` induced by `k`."]
theorem lift_left_inverse {k : LocalizationMap S P} (z : N) :
k.lift f.map_units (f.lift k.map_units z) = z := by
rw [lift_spec]
cases' f.surj z with x hx
conv_rhs =>
congr
next => skip
rw [f.eq_mk'_iff_mul_eq.2 hx]
rw [mk', ← mul_assoc, mul_inv_right f.map_units, ← f.toMap.map_mul, ← f.toMap.map_mul]
apply k.eq_of_eq f.map_units
rw [k.toMap.map_mul, k.toMap.map_mul, ← sec_spec, mul_assoc, lift_spec_mul]
repeat' rw [← k.toMap.map_mul]
apply f.eq_of_eq k.map_units
repeat' rw [f.toMap.map_mul]
rw [sec_spec', ← hx]
ac_rfl
#align submonoid.localization_map.lift_left_inverse Submonoid.LocalizationMap.lift_left_inverse
#align add_submonoid.localization_map.lift_left_inverse
AddSubmonoid.LocalizationMap.lift_left_inverse
@[to_additive]
theorem lift_surjective_iff :
Function.Surjective (f.lift hg) ↔ ∀ v : P, ∃ x : M × S, v * g x.2 = g x.1 := by
constructor
· intro H v
obtain ⟨z, hz⟩ := H v
obtain ⟨x, hx⟩ := f.surj z
use x
rw [← hz, f.eq_mk'_iff_mul_eq.2 hx, lift_mk', mul_assoc, mul_comm _ (g ↑x.2)]
erw [IsUnit.mul_liftRight_inv (g.restrict S) hg, mul_one]
· intro H v
obtain ⟨x, hx⟩ := H v
use f.mk' x.1 x.2
rw [lift_mk', mul_inv_left hg, mul_comm, ← hx]
#align submonoid.localization_map.lift_surjective_iff Submonoid.LocalizationMap.lift_surjective_iff
#align add_submonoid.localization_map.lift_surjective_iff
AddSubmonoid.LocalizationMap.lift_surjective_iff
@[to_additive]
theorem lift_injective_iff :
Function.Injective (f.lift hg) ↔ ∀ x y, f.toMap x = f.toMap y ↔ g x = g y := by
constructor
· intro H x y
constructor
· exact f.eq_of_eq hg
· intro h
rw [← f.lift_eq hg, ← f.lift_eq hg] at h
exact H h
· intro H z w h
obtain ⟨_, _⟩ := f.surj z
obtain ⟨_, _⟩ := f.surj w
rw [← f.mk'_sec z, ← f.mk'_sec w]
exact (mul_inv f.map_units).2 ((H _ _).2 <| (mul_inv hg).1 h)
#align submonoid.localization_map.lift_injective_iff Submonoid.LocalizationMap.lift_injective_iff
#align add_submonoid.localization_map.lift_injective_iff AddSubmonoid.LocalizationMap.lift_injective_iff
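-- Illustrative `example` (a sketch, not part of the interface): taking `g = f.toMap` in
-- `lift_injective_iff` shows lifting `f` along itself is injective (cf. `lift_id`).
example : Function.Injective (f.lift f.map_units) :=
  (f.lift_injective_iff f.map_units).2 fun _ _ ↦ Iff.rfl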
variable {T : Submonoid P} (hy : ∀ y : S, g y ∈ T) {Q : Type _} [CommMonoid Q]
(k : LocalizationMap T Q)
/-- Given a `CommMonoid` homomorphism `g : M →* P` where for Submonoids `S ⊆ M, T ⊆ P` we have
`g(S) ⊆ T`, the induced Monoid homomorphism from the Localization of `M` at `S` to the
Localization of `P` at `T`: if `f : M →* N` and `k : P →* Q` are Localization maps for `S` and
`T` respectively, we send `z : N` to `k (g x) * (k (g y))⁻¹`, where `(x, y) : M × S` are such
that `z = f x * (f y)⁻¹`. -/
@[to_additive
"Given a `AddCommMonoid` homomorphism `g : M →+ P` where for Submonoids `S ⊆ M, T ⊆ P` we have
`g(S) ⊆ T`, the induced AddMonoid homomorphism from the Localization of `M` at `S` to the
Localization of `P` at `T`: if `f : M →+ N` and `k : P →+ Q` are Localization maps for `S` and
`T` respectively, we send `z : N` to `k (g x) - k (g y)`, where `(x, y) : M × S` are such
that `z = f x - f y`."]
noncomputable def map : N →* Q :=
@lift _ _ _ _ _ _ _ f (k.toMap.comp g) fun y ↦ k.map_units ⟨g y, hy y⟩
#align submonoid.localization_map.map Submonoid.LocalizationMap.map
#align add_submonoid.localization_map.map AddSubmonoid.LocalizationMap.map
variable {k}
@[to_additive]
theorem map_eq (x) : f.map hy k (f.toMap x) = k.toMap (g x) :=
f.lift_eq (fun y ↦ k.map_units ⟨g y, hy y⟩) x
#align submonoid.localization_map.map_eq Submonoid.LocalizationMap.map_eq
#align add_submonoid.localization_map.map_eq AddSubmonoid.LocalizationMap.map_eq
@[to_additive (attr := simp)]
theorem map_comp : (f.map hy k).comp f.toMap = k.toMap.comp g :=
f.lift_comp fun y ↦ k.map_units ⟨g y, hy y⟩
#align submonoid.localization_map.map_comp Submonoid.LocalizationMap.map_comp
#align add_submonoid.localization_map.map_comp AddSubmonoid.LocalizationMap.map_comp
@[to_additive]
theorem map_mk' (x) (y : S) : f.map hy k (f.mk' x y) = k.mk' (g x) ⟨g y, hy y⟩ := by
rw [map, lift_mk', mul_inv_left]
· show k.toMap (g x) = k.toMap (g y) * _
rw [mul_mk'_eq_mk'_of_mul]
exact (k.mk'_mul_cancel_left (g x) ⟨g y, hy y⟩).symm
#align submonoid.localization_map.map_mk' Submonoid.LocalizationMap.map_mk'
#align add_submonoid.localization_map.map_mk' AddSubmonoid.LocalizationMap.map_mk'
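-- Illustrative `example` (a sketch, not part of the interface): `map_mk'` on a fraction
-- with numerator `1`; only the denominator is transported along `g`.
example (y : S) : f.map hy k (f.mk' 1 y) = k.mk' 1 ⟨g y, hy y⟩ := by
  rw [f.map_mk' hy, g.map_one]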
/-- Given Localization maps `f : M →* N, k : P →* Q` for Submonoids `S, T` respectively, if a
`CommMonoid` homomorphism `g : M →* P` induces a `f.map hy k : N →* Q`, then for all `z : N`,
`u : Q`, we have `f.map hy k z = u ↔ k (g x) = k (g y) * u` where `x : M, y ∈ S` are such that
`z * f y = f x`. -/
@[to_additive
"Given Localization maps `f : M →+ N, k : P →+ Q` for Submonoids `S, T` respectively, if an
`AddCommMonoid` homomorphism `g : M →+ P` induces a `f.map hy k : N →+ Q`, then for all `z : N`,
`u : Q`, we have `f.map hy k z = u ↔ k (g x) = k (g y) + u` where `x : M, y ∈ S` are such that
`z + f y = f x`."]
/-- Given Localization maps `f : M →* N, k : P →* Q` for Submonoids `S, T` respectively, if a
`CommMonoid` homomorphism `g : M →* P` induces a `f.map hy k : N →* Q`, then for all `z : N`,
we have `f.map hy k z * k (g y) = k (g x)` where `x : M, y ∈ S` are such that
`z * f y = f x`. -/
@[to_additive
"Given Localization maps `f : M →+ N, k : P →+ Q` for Submonoids `S, T` respectively, if an
`AddCommMonoid` homomorphism `g : M →+ P` induces a `f.map hy k : N →+ Q`, then for all `z : N`,
we have `f.map hy k z + k (g y) = k (g x)` where `x : M, y ∈ S` are such that
`z + f y = f x`."]
theorem map_mul_right (z) : f.map hy k z * k.toMap (g (f.sec z).2) = k.toMap (g (f.sec z).1) :=
f.lift_mul_right (fun y ↦ k.map_units ⟨g y, hy y⟩) _
#align submonoid.localization_map.map_mul_right Submonoid.LocalizationMap.map_mul_right
#align add_submonoid.localization_map.map_add_right AddSubmonoid.LocalizationMap.map_add_right
/-- Given Localization maps `f : M →* N, k : P →* Q` for Submonoids `S, T` respectively, if a
`CommMonoid` homomorphism `g : M →* P` induces a `f.map hy k : N →* Q`, then for all `z : N`,
we have `k (g y) * f.map hy k z = k (g x)` where `x : M, y ∈ S` are such that
`z * f y = f x`. -/
@[to_additive
"Given Localization maps `f : M →+ N, k : P →+ Q` for Submonoids `S, T` respectively if an
`AddCommMonoid` homomorphism `g : M →+ P` induces a `f.map hy k : N →+ Q`, then for all `z : N`,
we have `k (g y) + f.map hy k z = k (g x)` where `x : M, y ∈ S` are such that
`z + f y = f x`."]
theorem map_mul_left (z) : k.toMap (g (f.sec z).2) * f.map hy k z = k.toMap (g (f.sec z).1) := by
rw [mul_comm, f.map_mul_right]
#align submonoid.localization_map.map_mul_left Submonoid.LocalizationMap.map_mul_left
#align add_submonoid.localization_map.map_add_left AddSubmonoid.LocalizationMap.map_add_left
@[to_additive (attr := simp)]
theorem map_id (z : N) : f.map (fun y ↦ show MonoidHom.id M y ∈ S from y.2) f z = z :=
f.lift_id z
#align submonoid.localization_map.map_id Submonoid.LocalizationMap.map_id
#align add_submonoid.localization_map.map_id AddSubmonoid.LocalizationMap.map_id
/-- If `CommMonoid` homs `g : M →* P, l : P →* A` induce maps of localizations, the composition
of the induced maps equals the map of localizations induced by `l ∘ g`. -/
@[to_additive
"If `AddCommMonoid` homs `g : M →+ P, l : P →+ A` induce maps of localizations, the composition
of the induced maps equals the map of localizations induced by `l ∘ g`."]
theorem map_comp_map {A : Type _} [CommMonoid A] {U : Submonoid A} {R} [CommMonoid R]
(j : LocalizationMap U R) {l : P →* A} (hl : ∀ w : T, l w ∈ U) :
(k.map hl j).comp (f.map hy k) = f.map (fun x ↦ show l.comp g x ∈ U from hl ⟨g x, hy x⟩) j :=
by
ext z
show j.toMap _ * _ = j.toMap (l _) * _
· rw [mul_inv_left, ← mul_assoc, mul_inv_right]
show j.toMap _ * j.toMap (l (g _)) = j.toMap (l _) * _
rw [← j.toMap.map_mul, ← j.toMap.map_mul, ← l.map_mul, ← l.map_mul]
exact
k.comp_eq_of_eq hl j
(by rw [k.toMap.map_mul, k.toMap.map_mul, sec_spec', mul_assoc, map_mul_right])
#align submonoid.localization_map.map_comp_map Submonoid.LocalizationMap.map_comp_map
#align add_submonoid.localization_map.map_comp_map AddSubmonoid.LocalizationMap.map_comp_map
/-- If `CommMonoid` homs `g : M →* P, l : P →* A` induce maps of localizations, the composition
of the induced maps equals the map of localizations induced by `l ∘ g`. -/
@[to_additive
"If `AddCommMonoid` homs `g : M →+ P, l : P →+ A` induce maps of localizations, the composition
of the induced maps equals the map of localizations induced by `l ∘ g`."]
theorem map_map {A : Type _} [CommMonoid A] {U : Submonoid A} {R} [CommMonoid R]
(j : LocalizationMap U R) {l : P →* A} (hl : ∀ w : T, l w ∈ U) (x) :
k.map hl j (f.map hy k x) = f.map (fun x ↦ show l.comp g x ∈ U from hl ⟨g x, hy x⟩) j x := by
-- Porting note: Lean has a hard time figuring out what the implicit arguments should be
-- when calling `map_comp_map`. Hence the original line below has to be replaced by a much more
-- explicit one
-- rw [← f.map_comp_map hy j hl]
rw [← @map_comp_map M _ S N _ P _ f g T hy Q _ k A _ U R _ j l hl]
rfl
#align submonoid.localization_map.map_map Submonoid.LocalizationMap.map_map
#align add_submonoid.localization_map.map_map AddSubmonoid.LocalizationMap.map_map
section AwayMap
variable (x : M)
/-- Given `x : M`, the type of `CommMonoid` homomorphisms `f : M →* N` such that `N`
is isomorphic to the Localization of `M` at the Submonoid generated by `x`. -/
@[to_additive (attr := reducible)
"Given `x : M`, the type of `AddCommMonoid` homomorphisms `f : M →+ N` such that `N`
is isomorphic to the localization of `M` at the Submonoid generated by `x`."]
def AwayMap (N' : Type _) [CommMonoid N'] := LocalizationMap (powers x) N'
#align submonoid.localization_map.away_map Submonoid.LocalizationMap.AwayMap
#align add_submonoid.localization_map.away_map AddSubmonoid.LocalizationMap.AwayMap
variable (F : AwayMap x N)
/-- Given `x : M` and a Localization map `F : M →* N` away from `x`, `invSelf` is `(F x)⁻¹`. -/
noncomputable def AwayMap.invSelf : N := F.mk' 1 ⟨x, mem_powers _⟩
#align submonoid.localization_map.away_map.inv_self Submonoid.LocalizationMap.AwayMap.invSelf
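-- Illustrative `example` (a sketch, not part of the interface): `invSelf` is a right
-- inverse of `F x`, by `mul_mk'_one_eq_mk'` and `mk'_self`.
example : F.toMap x * AwayMap.invSelf x F = 1 := by
  show F.toMap x * F.mk' 1 ⟨x, mem_powers _⟩ = 1
  rw [F.mul_mk'_one_eq_mk', F.mk'_self]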
/-- Given `x : M`, a Localization map `F : M →* N` away from `x`, and a map of `CommMonoid`s
`g : M →* P` such that `g x` is invertible, the homomorphism induced from `N` to `P` sending
`z : N` to `g y * (g x)⁻ⁿ`, where `y : M, n : ℕ` are such that `z = F y * (F x)⁻ⁿ`. -/
noncomputable def AwayMap.lift (hg : IsUnit (g x)) : N →* P :=
Submonoid.LocalizationMap.lift F fun y ↦
show IsUnit (g y.1) by
obtain ⟨n, hn⟩ := y.2
rw [← hn, g.map_pow]
exact IsUnit.pow n hg
#align submonoid.localization_map.away_map.lift Submonoid.LocalizationMap.AwayMap.lift
@[simp]
theorem AwayMap.lift_eq (hg : IsUnit (g x)) (a : M) : F.lift x hg (F.toMap a) = g a :=
Submonoid.LocalizationMap.lift_eq _ _ _
#align submonoid.localization_map.away_map.lift_eq Submonoid.LocalizationMap.AwayMap.lift_eq
@[simp]
theorem AwayMap.lift_comp (hg : IsUnit (g x)) : (F.lift x hg).comp F.toMap = g :=
Submonoid.LocalizationMap.lift_comp _ _
#align submonoid.localization_map.away_map.lift_comp Submonoid.LocalizationMap.AwayMap.lift_comp
/-- Given `x y : M` and Localization maps `F : M →* N, G : M →* P` away from `x` and `x * y`
respectively, the homomorphism induced from `N` to `P`. -/
noncomputable def awayToAwayRight (y : M) (G : AwayMap (x * y) P) : N →* P :=
F.lift x <|
show IsUnit (G.toMap x) from
isUnit_of_mul_eq_one (G.toMap x) (G.mk' y ⟨x * y, mem_powers _⟩) <| by
rw [mul_mk'_eq_mk'_of_mul, mk'_self]
#align submonoid.localization_map.away_to_away_right Submonoid.LocalizationMap.awayToAwayRight
end AwayMap
end LocalizationMap
end Submonoid
namespace AddSubmonoid
namespace LocalizationMap
section AwayMap
variable {A : Type _} [AddCommMonoid A] (x : A) {B : Type _} [AddCommMonoid B] (F : AwayMap x B)
{C : Type _} [AddCommMonoid C] {g : A →+ C}
/-- Given `x : A` and a Localization map `F : A →+ B` away from `x`, `negSelf` is `- (F x)`. -/
noncomputable def AwayMap.negSelf : B :=
F.mk' 0 ⟨x, mem_multiples _⟩
#align add_submonoid.localization_map.away_map.neg_self AddSubmonoid.LocalizationMap.AwayMap.negSelf
/-- Given `x : A`, a Localization map `F : A →+ B` away from `x`, and a map of `AddCommMonoid`s
`g : A →+ C` such that `g x` is invertible, the homomorphism induced from `B` to `C` sending
`z : B` to `g y - n • g x`, where `y : A, n : ℕ` are such that `z = F y - n • F x`. -/
noncomputable def AwayMap.lift (hg : IsAddUnit (g x)) : B →+ C :=
AddSubmonoid.LocalizationMap.lift F fun y ↦
show IsAddUnit (g y.1) by
obtain ⟨n, hn⟩ := y.2
rw [← hn]
dsimp
rw [g.map_nsmul]
exact IsAddUnit.map (nsmulAddMonoidHom n : C →+ C) hg
#align add_submonoid.localization_map.away_map.lift AddSubmonoid.LocalizationMap.AwayMap.lift
@[simp]
theorem AwayMap.lift_eq (hg : IsAddUnit (g x)) (a : A) : F.lift x hg (F.toMap a) = g a :=
AddSubmonoid.LocalizationMap.lift_eq _ _ _
#align add_submonoid.localization_map.away_map.lift_eq AddSubmonoid.LocalizationMap.AwayMap.lift_eq
@[simp]
theorem AwayMap.lift_comp (hg : IsAddUnit (g x)) : (F.lift x hg).comp F.toMap = g :=
AddSubmonoid.LocalizationMap.lift_comp _ _
#align add_submonoid.localization_map.away_map.lift_comp AddSubmonoid.LocalizationMap.AwayMap.lift_comp
/-- Given `x y : A` and Localization maps `F : A →+ B, G : A →+ C` away from `x` and `x + y`
respectively, the homomorphism induced from `B` to `C`. -/
noncomputable def awayToAwayRight (y : A) (G : AwayMap (x + y) C) : B →+ C :=
F.lift x <|
show IsAddUnit (G.toMap x) from
isAddUnit_of_add_eq_zero (G.toMap x) (G.mk' y ⟨x + y, mem_multiples _⟩) <| by
rw [add_mk'_eq_mk'_of_add, mk'_self]
#align add_submonoid.localization_map.away_to_away_right AddSubmonoid.LocalizationMap.awayToAwayRight
end AwayMap
end LocalizationMap
end AddSubmonoid
namespace Submonoid
namespace LocalizationMap
variable (f : S.LocalizationMap N) {g : M →* P} (hg : ∀ y : S, IsUnit (g y)) {T : Submonoid P}
{Q : Type _} [CommMonoid Q]
/-- If `f : M →* N` and `k : M →* P` are Localization maps for a Submonoid `S`, we get an
isomorphism of `N` and `P`. -/
@[to_additive
"If `f : M →+ N` and `k : M →+ R` are Localization maps for a Submonoid `S`, we get an
isomorphism of `N` and `R`."]
noncomputable def mulEquivOfLocalizations (k : LocalizationMap S P) : N ≃* P :=
{ toFun := f.lift k.map_units
invFun := k.lift f.map_units
left_inv := f.lift_left_inverse
right_inv := k.lift_left_inverse
map_mul' := MonoidHom.map_mul _ }
#align submonoid.localization_map.mul_equiv_of_localizations Submonoid.LocalizationMap.mulEquivOfLocalizations
#align add_submonoid.localization_map.add_equiv_of_localizations AddSubmonoid.LocalizationMap.addEquivOfLocalizations
@[to_additive (attr := simp)]
theorem mulEquivOfLocalizations_apply {k : LocalizationMap S P} {x} :
f.mulEquivOfLocalizations k x = f.lift k.map_units x := rfl
#align submonoid.localization_map.mul_equiv_of_localizations_apply Submonoid.LocalizationMap.mulEquivOfLocalizations_apply
#align add_submonoid.localization_map.add_equiv_of_localizations_apply AddSubmonoid.LocalizationMap.addEquivOfLocalizations_apply
@[to_additive (attr := simp)]
theorem mulEquivOfLocalizations_symm_apply {k : LocalizationMap S P} {x} :
(f.mulEquivOfLocalizations k).symm x = k.lift f.map_units x := rfl
#align submonoid.localization_map.mul_equiv_of_localizations_symm_apply Submonoid.LocalizationMap.mulEquivOfLocalizations_symm_apply
#align add_submonoid.localization_map.add_equiv_of_localizations_symm_apply AddSubmonoid.LocalizationMap.addEquivOfLocalizations_symm_apply
@[to_additive]
theorem mulEquivOfLocalizations_symm_eq_mulEquivOfLocalizations {k : LocalizationMap S P} :
(k.mulEquivOfLocalizations f).symm = f.mulEquivOfLocalizations k := rfl
#align submonoid.localization_map.mul_equiv_of_localizations_symm_eq_mul_equiv_of_localizations Submonoid.LocalizationMap.mulEquivOfLocalizations_symm_eq_mulEquivOfLocalizations
#align add_submonoid.localization_map.add_equiv_of_localizations_symm_eq_add_equiv_of_localizations AddSubmonoid.LocalizationMap.addEquivOfLocalizations_symm_eq_addEquivOfLocalizations
/-- If `f : M →* N` is a Localization map for a Submonoid `S` and `k : N ≃* P` is an isomorphism
of `CommMonoid`s, `k ∘ f` is a Localization map for `M` at `S`. -/
@[to_additive
"If `f : M →+ N` is a Localization map for a Submonoid `S` and `k : N ≃+ P` is an isomorphism
of `AddCommMonoid`s, `k ∘ f` is a Localization map for `M` at `S`."]
def ofMulEquivOfLocalizations (k : N ≃* P) : LocalizationMap S P :=
(k.toMonoidHom.comp f.toMap).toLocalizationMap (fun y ↦ isUnit_comp f k.toMonoidHom y)
(fun v ↦
let ⟨z, hz⟩ := k.toEquiv.surjective v
let ⟨x, hx⟩ := f.surj z
⟨x, show v * k _ = k _ by rw [← hx, k.map_mul, ← hz] ; rfl⟩)
fun x y ↦ k.apply_eq_iff_eq.trans f.eq_iff_exists
#align submonoid.localization_map.of_mul_equiv_of_localizations Submonoid.LocalizationMap.ofMulEquivOfLocalizations
#align add_submonoid.localization_map.of_add_equiv_of_localizations AddSubmonoid.LocalizationMap.ofAddEquivOfLocalizations
@[to_additive (attr := simp)]
theorem ofMulEquivOfLocalizations_apply {k : N ≃* P} (x) :
(f.ofMulEquivOfLocalizations k).toMap x = k (f.toMap x) := rfl
#align submonoid.localization_map.of_mul_equiv_of_localizations_apply Submonoid.LocalizationMap.ofMulEquivOfLocalizations_apply
#align add_submonoid.localization_map.of_add_equiv_of_localizations_apply AddSubmonoid.LocalizationMap.ofAddEquivOfLocalizations_apply
@[to_additive]
theorem ofMulEquivOfLocalizations_eq {k : N ≃* P} :
(f.ofMulEquivOfLocalizations k).toMap = k.toMonoidHom.comp f.toMap := rfl
#align submonoid.localization_map.of_mul_equiv_of_localizations_eq Submonoid.LocalizationMap.ofMulEquivOfLocalizations_eq
#align add_submonoid.localization_map.of_add_equiv_of_localizations_eq AddSubmonoid.LocalizationMap.ofAddEquivOfLocalizations_eq
@[to_additive]
theorem symm_comp_ofMulEquivOfLocalizations_apply {k : N ≃* P} (x) :
k.symm ((f.ofMulEquivOfLocalizations k).toMap x) = f.toMap x := k.symm_apply_apply (f.toMap x)
#align submonoid.localization_map.symm_comp_of_mul_equiv_of_localizations_apply Submonoid.LocalizationMap.symm_comp_ofMulEquivOfLocalizations_apply
#align add_submonoid.localization_map.symm_comp_of_add_equiv_of_localizations_apply AddSubmonoid.LocalizationMap.symm_comp_ofAddEquivOfLocalizations_apply
@[to_additive]
theorem symm_comp_ofMulEquivOfLocalizations_apply' {k : P ≃* N} (x) :
k ((f.ofMulEquivOfLocalizations k.symm).toMap x) = f.toMap x := k.apply_symm_apply (f.toMap x)
#align submonoid.localization_map.symm_comp_of_mul_equiv_of_localizations_apply' Submonoid.LocalizationMap.symm_comp_ofMulEquivOfLocalizations_apply'
#align add_submonoid.localization_map.symm_comp_of_add_equiv_of_localizations_apply' AddSubmonoid.LocalizationMap.symm_comp_ofAddEquivOfLocalizations_apply'
@[to_additive]
theorem ofMulEquivOfLocalizations_eq_iff_eq {k : N ≃* P} {x y} :
(f.ofMulEquivOfLocalizations k).toMap x = y ↔ f.toMap x = k.symm y :=
k.toEquiv.eq_symm_apply.symm
#align submonoid.localization_map.of_mul_equiv_of_localizations_eq_iff_eq Submonoid.LocalizationMap.ofMulEquivOfLocalizations_eq_iff_eq
#align add_submonoid.localization_map.of_add_equiv_of_localizations_eq_iff_eq AddSubmonoid.LocalizationMap.ofAddEquivOfLocalizations_eq_iff_eq
@[to_additive addEquivOfLocalizations_right_inv]
theorem mulEquivOfLocalizations_right_inv (k : LocalizationMap S P) :
f.ofMulEquivOfLocalizations (f.mulEquivOfLocalizations k) = k :=
toMap_injective <| f.lift_comp k.map_units
#align submonoid.localization_map.mul_equiv_of_localizations_right_inv Submonoid.LocalizationMap.mulEquivOfLocalizations_right_inv
#align add_submonoid.localization_map.add_equiv_of_localizations_right_inv AddSubmonoid.LocalizationMap.addEquivOfLocalizations_right_inv
-- @[simp] -- Porting note: simp can prove this
@[to_additive addEquivOfLocalizations_right_inv_apply]
theorem mulEquivOfLocalizations_right_inv_apply {k : LocalizationMap S P} {x} :
(f.ofMulEquivOfLocalizations (f.mulEquivOfLocalizations k)).toMap x = k.toMap x := by simp
#align submonoid.localization_map.mul_equiv_of_localizations_right_inv_apply Submonoid.LocalizationMap.mulEquivOfLocalizations_right_inv_apply
#align add_submonoid.localization_map.add_equiv_of_localizations_right_inv_apply AddSubmonoid.LocalizationMap.addEquivOfLocalizations_right_inv_apply
@[to_additive]
theorem mulEquivOfLocalizations_left_inv (k : N ≃* P) :
f.mulEquivOfLocalizations (f.ofMulEquivOfLocalizations k) = k :=
FunLike.ext _ _ fun x ↦ FunLike.ext_iff.1 (f.lift_of_comp k.toMonoidHom) x
#align submonoid.localization_map.mul_equiv_of_localizations_left_inv Submonoid.LocalizationMap.mulEquivOfLocalizations_left_inv
#align add_submonoid.localization_map.add_equiv_of_localizations_left_neg AddSubmonoid.LocalizationMap.addEquivOfLocalizations_left_neg
-- @[simp] -- Porting note: simp can prove this
@[to_additive]
theorem mulEquivOfLocalizations_left_inv_apply {k : N ≃* P} (x) :
f.mulEquivOfLocalizations (f.ofMulEquivOfLocalizations k) x = k x := by simp
#align submonoid.localization_map.mul_equiv_of_localizations_left_inv_apply Submonoid.LocalizationMap.mulEquivOfLocalizations_left_inv_apply
#align add_submonoid.localization_map.add_equiv_of_localizations_left_neg_apply AddSubmonoid.LocalizationMap.addEquivOfLocalizations_left_neg_apply
@[to_additive (attr := simp)]
theorem ofMulEquivOfLocalizations_id : f.ofMulEquivOfLocalizations (MulEquiv.refl N) = f := by
ext ; rfl
#align submonoid.localization_map.of_mul_equiv_of_localizations_id Submonoid.LocalizationMap.ofMulEquivOfLocalizations_id
#align add_submonoid.localization_map.of_add_equiv_of_localizations_id AddSubmonoid.LocalizationMap.ofAddEquivOfLocalizations_id
@[to_additive]
theorem ofMulEquivOfLocalizations_comp {k : N ≃* P} {j : P ≃* Q} :
(f.ofMulEquivOfLocalizations (k.trans j)).toMap =
j.toMonoidHom.comp (f.ofMulEquivOfLocalizations k).toMap :=
by ext ; rfl
#align submonoid.localization_map.of_mul_equiv_of_localizations_comp Submonoid.LocalizationMap.ofMulEquivOfLocalizations_comp
#align add_submonoid.localization_map.of_add_equiv_of_localizations_comp AddSubmonoid.LocalizationMap.ofAddEquivOfLocalizations_comp
/-- Given `CommMonoid`s `M, P` and Submonoids `S ⊆ M, T ⊆ P`, if `f : M →* N` is a Localization
map for `S` and `k : P ≃* M` is an isomorphism of `CommMonoid`s such that `k(T) = S`, `f ∘ k`
is a Localization map for `T`. -/
@[to_additive
"Given `CommMonoid`s `M, P` and Submonoids `S ⊆ M, T ⊆ P`, if `f : M →* N` is a Localization
map for `S` and `k : P ≃* M` is an isomorphism of `CommMonoid`s such that `k(T) = S`, `f ∘ k`
is a Localization map for `T`."]
def ofMulEquivOfDom {k : P ≃* M} (H : T.map k.toMonoidHom = S) : LocalizationMap T N :=
let H' : S.comap k.toMonoidHom = T :=
H ▸ (SetLike.coe_injective <| T.1.1.preimage_image_eq k.toEquiv.injective)
(f.toMap.comp k.toMonoidHom).toLocalizationMap
(fun y ↦
let ⟨z, hz⟩ := f.map_units ⟨k y, H ▸ Set.mem_image_of_mem k y.2⟩
⟨z, hz⟩)
(fun z ↦
let ⟨x, hx⟩ := f.surj z
let ⟨v, hv⟩ := k.toEquiv.surjective x.1
let ⟨w, hw⟩ := k.toEquiv.surjective x.2
⟨(v, ⟨w, H' ▸ show k w ∈ S from hw.symm ▸ x.2.2⟩),
show z * f.toMap (k.toEquiv w) = f.toMap (k.toEquiv v) by erw [hv, hw, hx]⟩)
fun x y ↦
show f.toMap _ = f.toMap _ ↔ _ by
erw [f.eq_iff_exists] ;
exact
⟨fun ⟨c, hc⟩ ↦
let ⟨d, hd⟩ := k.toEquiv.surjective c
⟨⟨d, H' ▸ show k d ∈ S from hd.symm ▸ c.2⟩, by
erw [← hd, ← k.map_mul, ← k.map_mul] at hc ; exact k.toEquiv.injective hc⟩,
fun ⟨c, hc⟩ ↦
⟨⟨k c, H ▸ Set.mem_image_of_mem k c.2⟩, by
erw [← k.map_mul] ; rw [hc, k.map_mul] ; rfl⟩⟩
#align submonoid.localization_map.of_mul_equiv_of_dom Submonoid.LocalizationMap.ofMulEquivOfDom
#align add_submonoid.localization_map.of_add_equiv_of_dom AddSubmonoid.LocalizationMap.ofAddEquivOfDom
@[to_additive (attr := simp)]
theorem ofMulEquivOfDom_apply {k : P ≃* M} (H : T.map k.toMonoidHom = S) (x) :
(f.ofMulEquivOfDom H).toMap x = f.toMap (k x) := rfl
#align submonoid.localization_map.of_mul_equiv_of_dom_apply Submonoid.LocalizationMap.ofMulEquivOfDom_apply
#align add_submonoid.localization_map.of_add_equiv_of_dom_apply AddSubmonoid.LocalizationMap.ofAddEquivOfDom_apply
@[to_additive]
theorem ofMulEquivOfDom_eq {k : P ≃* M} (H : T.map k.toMonoidHom = S) :
(f.ofMulEquivOfDom H).toMap = f.toMap.comp k.toMonoidHom := rfl
#align submonoid.localization_map.of_mul_equiv_of_dom_eq Submonoid.LocalizationMap.ofMulEquivOfDom_eq
#align add_submonoid.localization_map.of_add_equiv_of_dom_eq AddSubmonoid.LocalizationMap.ofAddEquivOfDom_eq
@[to_additive]
theorem ofMulEquivOfDom_comp_symm {k : P ≃* M} (H : T.map k.toMonoidHom = S) (x) :
(f.ofMulEquivOfDom H).toMap (k.symm x) = f.toMap x :=
congr_arg f.toMap <| k.apply_symm_apply x
#align submonoid.localization_map.of_mul_equiv_of_dom_comp_symm Submonoid.LocalizationMap.ofMulEquivOfDom_comp_symm
#align add_submonoid.localization_map.of_add_equiv_of_dom_comp_symm AddSubmonoid.LocalizationMap.ofAddEquivOfDom_comp_symm
@[to_additive]
theorem ofMulEquivOfDom_comp {k : M ≃* P} (H : T.map k.symm.toMonoidHom = S) (x) :
(f.ofMulEquivOfDom H).toMap (k x) = f.toMap x := congr_arg f.toMap <| k.symm_apply_apply x
#align submonoid.localization_map.of_mul_equiv_of_dom_comp Submonoid.LocalizationMap.ofMulEquivOfDom_comp
#align add_submonoid.localization_map.of_add_equiv_of_dom_comp AddSubmonoid.LocalizationMap.ofAddEquivOfDom_comp
/-- A special case of `f ∘ id = f`, `f` a Localization map. -/
@[to_additive (attr := simp) "A special case of `f ∘ id = f`, `f` a Localization map."]
theorem ofMulEquivOfDom_id :
f.ofMulEquivOfDom
(show S.map (MulEquiv.refl M).toMonoidHom = S from
Submonoid.ext fun x ↦ ⟨fun ⟨_, hy, h⟩ ↦ h ▸ hy, fun h ↦ ⟨x, h, rfl⟩⟩) = f :=
by ext ; rfl
#align submonoid.localization_map.of_mul_equiv_of_dom_id Submonoid.LocalizationMap.ofMulEquivOfDom_id
#align add_submonoid.localization_map.of_add_equiv_of_dom_id AddSubmonoid.LocalizationMap.ofAddEquivOfDom_id
/-- Given Localization maps `f : M →* N, k : P →* U` for Submonoids `S, T` respectively, an
isomorphism `j : M ≃* P` such that `j(S) = T` induces an isomorphism of localizations `N ≃* U`. -/
@[to_additive
"Given Localization maps `f : M →+ N, k : P →+ U` for Submonoids `S, T` respectively, an
isomorphism `j : M ≃+ P` such that `j(S) = T` induces an isomorphism of localizations `N ≃+ U`."]
noncomputable def mulEquivOfMulEquiv (k : LocalizationMap T Q) {j : M ≃* P}
(H : S.map j.toMonoidHom = T) : N ≃* Q :=
f.mulEquivOfLocalizations <| k.ofMulEquivOfDom H
#align submonoid.localization_map.mul_equiv_of_mul_equiv Submonoid.LocalizationMap.mulEquivOfMulEquiv
#align add_submonoid.localization_map.add_equiv_of_add_equiv AddSubmonoid.LocalizationMap.addEquivOfAddEquiv
@[to_additive (attr := simp)]
theorem mulEquivOfMulEquiv_eq_map_apply {k : LocalizationMap T Q} {j : M ≃* P}
(H : S.map j.toMonoidHom = T) (x) :
f.mulEquivOfMulEquiv k H x =
f.map (fun y : S ↦ show j.toMonoidHom y ∈ T from H ▸ Set.mem_image_of_mem j y.2) k x := rfl
#align submonoid.localization_map.mul_equiv_of_mul_equiv_eq_map_apply Submonoid.LocalizationMap.mulEquivOfMulEquiv_eq_map_apply
#align add_submonoid.localization_map.add_equiv_of_add_equiv_eq_map_apply AddSubmonoid.LocalizationMap.addEquivOfAddEquiv_eq_map_apply
@[to_additive]
theorem mulEquivOfMulEquiv_eq_map {k : LocalizationMap T Q} {j : M ≃* P}
(H : S.map j.toMonoidHom = T) :
(f.mulEquivOfMulEquiv k H).toMonoidHom =
f.map (fun y : S ↦ show j.toMonoidHom y ∈ T from H ▸ Set.mem_image_of_mem j y.2) k := rfl
#align submonoid.localization_map.mul_equiv_of_mul_equiv_eq_map Submonoid.LocalizationMap.mulEquivOfMulEquiv_eq_map
#align add_submonoid.localization_map.add_equiv_of_add_equiv_eq_map AddSubmonoid.LocalizationMap.addEquivOfAddEquiv_eq_map
@[to_additive (attr := simp, nolint simpNF)]
theorem mulEquivOfMulEquiv_eq {k : LocalizationMap T Q} {j : M ≃* P} (H : S.map j.toMonoidHom = T)
(x) :
f.mulEquivOfMulEquiv k H (f.toMap x) = k.toMap (j x) :=
f.map_eq (fun y : S ↦ H ▸ Set.mem_image_of_mem j y.2) _
#align submonoid.localization_map.mul_equiv_of_mul_equiv_eq Submonoid.LocalizationMap.mulEquivOfMulEquiv_eq
#align add_submonoid.localization_map.add_equiv_of_add_equiv_eq AddSubmonoid.LocalizationMap.addEquivOfAddEquiv_eq
@[to_additive (attr := simp, nolint simpNF)]
theorem mulEquivOfMulEquiv_mk' {k : LocalizationMap T Q} {j : M ≃* P} (H : S.map j.toMonoidHom = T)
(x y) :
f.mulEquivOfMulEquiv k H (f.mk' x y) = k.mk' (j x) ⟨j y, H ▸ Set.mem_image_of_mem j y.2⟩ :=
f.map_mk' (fun y : S ↦ H ▸ Set.mem_image_of_mem j y.2) _ _
#align submonoid.localization_map.mul_equiv_of_mul_equiv_mk' Submonoid.LocalizationMap.mulEquivOfMulEquiv_mk'
#align add_submonoid.localization_map.add_equiv_of_add_equiv_mk' AddSubmonoid.LocalizationMap.addEquivOfAddEquiv_mk'
@[to_additive (attr := simp, nolint simpNF)]
theorem of_mulEquivOfMulEquiv_apply {k : LocalizationMap T Q} {j : M ≃* P}
(H : S.map j.toMonoidHom = T) (x) :
(f.ofMulEquivOfLocalizations (f.mulEquivOfMulEquiv k H)).toMap x = k.toMap (j x) :=
ext_iff.1 (f.mulEquivOfLocalizations_right_inv (k.ofMulEquivOfDom H)) x
#align submonoid.localization_map.of_mul_equiv_of_mul_equiv_apply Submonoid.LocalizationMap.of_mulEquivOfMulEquiv_apply
#align add_submonoid.localization_map.of_add_equiv_of_add_equiv_apply AddSubmonoid.LocalizationMap.of_addEquivOfAddEquiv_apply
@[to_additive]
theorem of_mulEquivOfMulEquiv {k : LocalizationMap T Q} {j : M ≃* P} (H : S.map j.toMonoidHom = T) :
(f.ofMulEquivOfLocalizations (f.mulEquivOfMulEquiv k H)).toMap = k.toMap.comp j.toMonoidHom :=
MonoidHom.ext <| f.of_mulEquivOfMulEquiv_apply H
#align submonoid.localization_map.of_mul_equiv_of_mul_equiv Submonoid.LocalizationMap.of_mulEquivOfMulEquiv
#align add_submonoid.localization_map.of_add_equiv_of_add_equiv AddSubmonoid.LocalizationMap.of_addEquivOfAddEquiv
end LocalizationMap
end Submonoid
namespace Localization
variable (S)
/-- Natural homomorphism sending `x : M`, `M` a `CommMonoid`, to the equivalence class of
`(x, 1)` in the Localization of `M` at a Submonoid. -/
@[to_additive
"Natural homomorphism sending `x : M`, `M` an `AddCommMonoid`, to the equivalence class of
`(x, 0)` in the Localization of `M` at a Submonoid."]
def monoidOf : Submonoid.LocalizationMap S (Localization S) :=
{ (r S).mk'.comp <| MonoidHom.inl M S with
toFun := fun x ↦ mk x 1
map_one' := mk_one
map_mul' := fun x y ↦ by rw [mk_mul, mul_one]
map_units' := fun y ↦
isUnit_iff_exists_inv.2 ⟨mk 1 y, by rw [mk_mul, mul_one, one_mul, mk_self]⟩
surj' := fun z ↦
induction_on z fun x ↦ ⟨x, by rw [mk_mul, mul_comm x.fst, ← mk_mul, mk_self, one_mul]⟩
eq_iff_exists' := fun x y ↦
mk_eq_mk_iff.trans <|
r_iff_exists.trans <|
show (∃ c : S, ↑c * (1 * x) = c * (1 * y)) ↔ _ by rw [one_mul, one_mul] }
#align localization.monoid_of Localization.monoidOf
#align add_localization.add_monoid_of addLocalization.addMonoidOf
variable {S}
@[to_additive]
theorem mk_one_eq_monoidOf_mk (x) : mk x 1 = (monoidOf S).toMap x := rfl
#align localization.mk_one_eq_monoid_of_mk Localization.mk_one_eq_monoidOf_mk
#align add_localization.mk_zero_eq_add_monoid_of_mk addLocalization.mk_zero_eq_addMonoidOf_mk
@[to_additive]
theorem mk_eq_monoidOf_mk'_apply (x y) : mk x y = (monoidOf S).mk' x y :=
show _ = _ * _ from
(Submonoid.LocalizationMap.mul_inv_right (monoidOf S).map_units _ _ _).2 <|
by
rw [← mk_one_eq_monoidOf_mk, ← mk_one_eq_monoidOf_mk, mk_mul x y y 1, mul_comm y 1]
conv => rhs ; rw [← mul_one 1] ; rw [← mul_one x]
exact mk_eq_mk_iff.2 (Con.symm _ <| (Localization.r S).mul (Con.refl _ (x, 1)) <| one_rel _)
#align localization.mk_eq_monoid_of_mk'_apply Localization.mk_eq_monoidOf_mk'_apply
#align add_localization.mk_eq_add_monoid_of_mk'_apply addLocalization.mk_eq_addMonoidOf_mk'_apply
@[to_additive (attr := simp)]
theorem mk_eq_monoidOf_mk' : mk = (monoidOf S).mk' :=
funext fun _ ↦ funext fun _ ↦ mk_eq_monoidOf_mk'_apply _ _
#align localization.mk_eq_monoid_of_mk' Localization.mk_eq_monoidOf_mk'
#align add_localization.mk_eq_add_monoid_of_mk' addLocalization.mk_eq_addMonoidOf_mk'
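-- Illustrative `example` (a sketch, not part of the interface): in the quotient model,
-- multiplying numerator and denominator by the same `y ∈ S` leaves the fraction unchanged.
example (x : M) (y : S) : mk (x * y) y = mk x 1 := by
  rw [mk_eq_monoidOf_mk'_apply, mk_eq_monoidOf_mk'_apply, (monoidOf S).mk'_mul_cancel_right,
    ← (monoidOf S).mk'_one]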
universe u
@[to_additive (attr := simp)]
theorem liftOn_mk' {p : Sort u} (f : ∀ (_ : M) (_ : S), p) (H) (a : M) (b : S) :
liftOn ((monoidOf S).mk' a b) f H = f a b := by rw [← mk_eq_monoidOf_mk', liftOn_mk]
#align localization.lift_on_mk' Localization.liftOn_mk'
#align add_localization.lift_on_mk' addLocalization.liftOn_mk'
@[to_additive (attr := simp)]
theorem liftOn₂_mk' {p : Sort _} (f : M → S → M → S → p) (H) (a c : M) (b d : S) :
liftOn₂ ((monoidOf S).mk' a b) ((monoidOf S).mk' c d) f H = f a b c d := by
rw [← mk_eq_monoidOf_mk', liftOn₂_mk]
#align localization.lift_on₂_mk' Localization.liftOn₂_mk'
#align add_localization.lift_on₂_mk' addLocalization.liftOn₂_mk'
variable (f : Submonoid.LocalizationMap S N)
/-- Given a Localization map `f : M →* N` for a Submonoid `S`, we get an isomorphism between
the Localization of `M` at `S` as a quotient type and `N`. -/
@[to_additive
"Given a Localization map `f : M →+ N` for a Submonoid `S`, we get an isomorphism between
the Localization of `M` at `S` as a quotient type and `N`."]
noncomputable def mulEquivOfQuotient (f : Submonoid.LocalizationMap S N) : Localization S ≃* N :=
(monoidOf S).mulEquivOfLocalizations f
#align localization.mul_equiv_of_quotient Localization.mulEquivOfQuotient
#align add_localization.add_equiv_of_quotient addLocalization.addEquivOfQuotient
variable {f}
-- Porting note: dsimp cannot prove this
@[to_additive (attr := simp, nolint simpNF)]
theorem mulEquivOfQuotient_apply (x) : mulEquivOfQuotient f x = (monoidOf S).lift f.map_units x :=
rfl
#align localization.mul_equiv_of_quotient_apply Localization.mulEquivOfQuotient_apply
#align add_localization.add_equiv_of_quotient_apply addLocalization.addEquivOfQuotient_apply
@[to_additive (attr := simp, nolint simpNF)]
theorem mulEquivOfQuotient_mk' (x y) : mulEquivOfQuotient f ((monoidOf S).mk' x y) = f.mk' x y :=
(monoidOf S).lift_mk' _ _ _
#align localization.mul_equiv_of_quotient_mk' Localization.mulEquivOfQuotient_mk'
#align add_localization.add_equiv_of_quotient_mk' addLocalization.addEquivOfQuotient_mk'
@[to_additive]
theorem mulEquivOfQuotient_mk (x y) : mulEquivOfQuotient f (mk x y) = f.mk' x y := by
rw [mk_eq_monoidOf_mk'_apply] ; exact mulEquivOfQuotient_mk' _ _
#align localization.mul_equiv_of_quotient_mk Localization.mulEquivOfQuotient_mk
#align add_localization.add_equiv_of_quotient_mk addLocalization.addEquivOfQuotient_mk
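-- Usage sketch (added for illustration, using the ambient section variable
-- `f : Submonoid.LocalizationMap S N`): the isomorphism carries a concrete class
-- `mk x y` to the abstract fraction `f.mk' x y`.
example (x : M) (y : S) : mulEquivOfQuotient f (mk x y) = f.mk' x y :=
  mulEquivOfQuotient_mk x y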
-- @[simp] -- Porting note: simp can prove this
@[to_additive]
theorem mulEquivOfQuotient_monoidOf (x) : mulEquivOfQuotient f ((monoidOf S).toMap x) = f.toMap x :=
by simp
#align localization.mul_equiv_of_quotient_monoid_of Localization.mulEquivOfQuotient_monoidOf
#align add_localization.add_equiv_of_quotient_add_monoid_of addLocalization.addEquivOfQuotient_addMonoidOf
@[to_additive (attr := simp)]
theorem mulEquivOfQuotient_symm_mk' (x y) :
(mulEquivOfQuotient f).symm (f.mk' x y) = (monoidOf S).mk' x y :=
f.lift_mk' (monoidOf S).map_units _ _
#align localization.mul_equiv_of_quotient_symm_mk' Localization.mulEquivOfQuotient_symm_mk'
#align add_localization.add_equiv_of_quotient_symm_mk' addLocalization.addEquivOfQuotient_symm_mk'
@[to_additive]
theorem mulEquivOfQuotient_symm_mk (x y) : (mulEquivOfQuotient f).symm (f.mk' x y) = mk x y := by
rw [mk_eq_monoidOf_mk'_apply] ; exact mulEquivOfQuotient_symm_mk' _ _
#align localization.mul_equiv_of_quotient_symm_mk Localization.mulEquivOfQuotient_symm_mk
#align add_localization.add_equiv_of_quotient_symm_mk addLocalization.addEquivOfQuotient_symm_mk
@[to_additive (attr := simp)]
theorem mulEquivOfQuotient_symm_monoidOf (x) :
(mulEquivOfQuotient f).symm (f.toMap x) = (monoidOf S).toMap x :=
f.lift_eq (monoidOf S).map_units _
#align localization.mul_equiv_of_quotient_symm_monoid_of Localization.mulEquivOfQuotient_symm_monoidOf
#align add_localization.add_equiv_of_quotient_symm_add_monoid_of addLocalization.addEquivOfQuotient_symm_addMonoidOf
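-- Sanity-check sketch (added for illustration): since `mulEquivOfQuotient f` is a
-- `MulEquiv`, both round trips follow from the general equivalence lemmas, with
-- no localization-specific input.
example (z : Localization S) :
    (mulEquivOfQuotient f).symm (mulEquivOfQuotient f z) = z :=
  (mulEquivOfQuotient f).symm_apply_apply z

example (n : N) : mulEquivOfQuotient f ((mulEquivOfQuotient f).symm n) = n :=
  (mulEquivOfQuotient f).apply_symm_apply n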
section Away
variable (x : M)
/-- Given `x : M`, the Localization of `M` at the Submonoid generated by `x`, as a quotient. -/
@[to_additive (attr := reducible)
"Given `x : M`, the Localization of `M` at the Submonoid generated by `x`, as a quotient."]
def Away :=
Localization (Submonoid.powers x)
#align localization.away Localization.Away
#align add_localization.away addLocalization.Away
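-- Definitional sketch (added for illustration): `Away x` is reducibly the
-- localization at the powers of `x`, so the identification holds by `rfl`.
example : Away x = Localization (Submonoid.powers x) := rfl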
/-- Given `x : M`, `invSelf` is `x⁻¹` in the Localization (as a quotient type) of `M` at the
Submonoid generated by `x`. -/
@[to_additive
"Given `x : M`, `negSelf` is `-x` in the Localization (as a quotient type) of `M` at the
Submonoid generated by `x`."]
def Away.invSelf : Away x :=
mk 1 ⟨x, Submonoid.mem_powers _⟩
#align localization.away.inv_self Localization.Away.invSelf
#align add_localization.away.neg_self addLocalization.Away.negSelf
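-- Definitional sketch (added for illustration): `invSelf` is literally the class
-- of `(1, x)`, so this holds by `rfl`.
example : Away.invSelf x = mk 1 ⟨x, Submonoid.mem_powers _⟩ := rfl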
/-- Given `x : M`, the natural hom sending `y : M`, `M` a `CommMonoid`, to the equivalence class
of `(y, 1)` in the Localization of `M` at the Submonoid generated by `x`. -/
@[to_additive (attr := reducible)
"Given `x : M`, the natural hom sending `y : M`, `M` an `AddCommMonoid`, to the equivalence
class of `(y, 0)` in the Localization of `M` at the Submonoid generated by `x`."]
def Away.monoidOf : Submonoid.LocalizationMap.AwayMap x (Away x) :=
Localization.monoidOf (Submonoid.powers x)
#align localization.away.monoid_of Localization.Away.monoidOf
#align add_localization.away.add_monoid_of addLocalization.Away.addMonoidOf
-- @[simp] -- Porting note: simp can prove this
@[to_additive]
theorem Away.mk_eq_monoidOf_mk' : mk = (Away.monoidOf x).mk' := by simp
#align localization.away.mk_eq_monoid_of_mk' Localization.Away.mk_eq_monoidOf_mk'
#align add_localization.away.mk_eq_add_monoid_of_mk' addLocalization.Away.mk_eq_addMonoidOf_mk'
/-- Given `x : M` and a Localization map `f : M →* N` away from `x`, we get an isomorphism between
the Localization of `M` at the Submonoid generated by `x` as a quotient type and `N`. -/
@[to_additive
"Given `x : M` and a Localization map `f : M →+ N` away from `x`, we get an isomorphism between
the Localization of `M` at the Submonoid generated by `x` as a quotient type and `N`."]
noncomputable def Away.mulEquivOfQuotient (f : Submonoid.LocalizationMap.AwayMap x N) :
Away x ≃* N :=
Localization.mulEquivOfQuotient f
#align localization.away.mul_equiv_of_quotient Localization.Away.mulEquivOfQuotient
#align add_localization.away.add_equiv_of_quotient addLocalization.Away.addEquivOfQuotient
end Away
end Localization
end CommMonoid
section CommMonoidWithZero
variable {M : Type _} [CommMonoidWithZero M] (S : Submonoid M) (N : Type _) [CommMonoidWithZero N]
{P : Type _} [CommMonoidWithZero P]
namespace Submonoid
/-- The type of homomorphisms between monoids with zero satisfying the characteristic predicate:
if `f : M →*₀ N` satisfies this predicate, then `N` is isomorphic to the localization of `M` at
`S`. -/
-- Porting note: This linter does not exist yet
-- @[nolint has_nonempty_instance]
structure LocalizationWithZeroMap extends LocalizationMap S N where
map_zero' : toFun 0 = 0
#align submonoid.localization_with_zero_map Submonoid.LocalizationWithZeroMap
-- Porting note: no docstrings for LocalizationWithZeroMap.map_zero'
attribute [nolint docBlame] LocalizationWithZeroMap.toLocalizationMap
LocalizationWithZeroMap.map_zero'
variable {S N}
/-- The monoid with zero hom underlying a `LocalizationWithZeroMap`. -/
def LocalizationWithZeroMap.toMonoidWithZeroHom (f : LocalizationWithZeroMap S N) : M →*₀ N :=
{ f with }
#align submonoid.localization_with_zero_map.to_monoid_with_zero_hom Submonoid.LocalizationWithZeroMap.toMonoidWithZeroHom
end Submonoid
namespace Localization
-- Porting note: removed local since attribute 'semireducible' must be global
attribute [semireducible] Localization
/-- The zero element in a Localization is defined as `(0, 1)`.
Should not be confused with `addLocalization.zero`, which is `(0, 0)`. -/
-- Porting note: replaced irreducible_def by @[irreducible] to prevent an error with protected
@[irreducible]
protected def zero : Localization S :=
mk 0 1
#align localization.zero Localization.zero
instance : Zero (Localization S) := ⟨Localization.zero S⟩
-- Porting note: removed local since attribute 'semireducible' must be global
attribute [semireducible] Localization.zero Localization.mul
instance : CommMonoidWithZero (Localization S) :=
{ zero_mul := fun x ↦ Localization.induction_on x <| by
intro
refine mk_eq_mk_iff.mpr (r_of_eq (by simp [zero_mul, mul_zero]))
mul_zero := fun x ↦ Localization.induction_on x <| by
intro
refine mk_eq_mk_iff.mpr (r_of_eq (by simp [zero_mul, mul_zero])) }
variable {S}
theorem mk_zero (x : S) : mk 0 (x : S) = 0 :=
calc
mk 0 x = mk 0 1 := mk_eq_mk_iff.mpr (r_of_eq (by simp))
_ = 0 := rfl
#align localization.mk_zero Localization.mk_zero
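-- Usage sketch (added for illustration): the simplest instance of `mk_zero`,
-- taken at the identity element of `S`.
example : mk 0 (1 : S) = 0 := mk_zero 1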
theorem liftOn_zero {p : Type _} (f : ∀ (_ : M) (_ : S), p) (H) : liftOn 0 f H = f 0 1 := by
rw [← mk_zero 1, liftOn_mk]
#align localization.lift_on_zero Localization.liftOn_zero
end Localization
variable {S N}
namespace Submonoid
@[simp]
theorem LocalizationMap.sec_zero_fst {f : LocalizationMap S N} : f.toMap (f.sec 0).fst = 0 := by
rw [LocalizationMap.sec_spec', mul_zero]
#align submonoid.localization_map.sec_zero_fst Submonoid.LocalizationMap.sec_zero_fst
namespace LocalizationWithZeroMap
/-- Given a Localization map `f : M →*₀ N` for a Submonoid `S ⊆ M` and a map of
`CommMonoidWithZero`s `g : M →*₀ P` such that `g y` is invertible for all `y : S`, the
homomorphism induced from `N` to `P` sending `z : N` to `g x * (g y)⁻¹`, where `(x, y) : M × S`
are such that `z = f x * (f y)⁻¹`. -/
noncomputable def lift (f : LocalizationWithZeroMap S N) (g : M →*₀ P)
(hg : ∀ y : S, IsUnit (g y)) : N →*₀ P :=
{ @LocalizationMap.lift _ _ _ _ _ _ _ f.toLocalizationMap g.toMonoidHom hg with
map_zero' := by
erw [LocalizationMap.lift_spec f.toLocalizationMap hg 0 0]
rw [mul_zero, ← map_zero g, ← g.toMonoidHom_coe]
refine f.toLocalizationMap.eq_of_eq hg ?_
rw [LocalizationMap.sec_zero_fst]
exact f.toMonoidWithZeroHom.map_zero.symm }
#align submonoid.localization_with_zero_map.lift Submonoid.LocalizationWithZeroMap.lift
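-- Sanity-check sketch (added for illustration): the lifted hom is a
-- `MonoidWithZeroHom`, so it sends zero to zero by the generic `map_zero` lemma.
example (f : LocalizationWithZeroMap S N) (g : M →*₀ P) (hg : ∀ y : S, IsUnit (g y)) :
    f.lift g hg 0 = 0 :=
  map_zero (f.lift g hg)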
end LocalizationWithZeroMap
end Submonoid
end CommMonoidWithZero
|
/*
* Copyright 2015 Matthias Fuchs
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "ExportVector.h"
#include <boost/lambda/lambda.hpp>
#include <boost/python.hpp>
#include <stromx/runtime/List.h>
#include "DataCast.h"
using namespace boost::python;
using namespace stromx::runtime;
namespace
{
    // Allocate a List wrapped in a shared_ptr whose "deleter" is the identity
    // lambda boost::lambda::_1: invoking it on the raw pointer does nothing, so
    // the pointer is never freed through this shared_ptr (presumably because
    // ownership is handed over to the stromx runtime).
    boost::shared_ptr<List> allocate()
    {
        return boost::shared_ptr<List>(new List(), boost::lambda::_1);
    }
}

// Expose std::vector<Data*> as "DataVector" and the stromx List class to Python.
void exportList()
{
    stromx::python::exportVector<Data*>("DataVector");

    class_<List, bases<Data>, boost::shared_ptr<List> >("List", no_init)
        .def("__init__", make_constructor(&allocate))
        // The explicit member-pointer template argument selects the non-const
        // overload of List::content(); return_internal_reference keeps the
        // returned vector tied to the lifetime of the owning List.
        .def<std::vector<Data*> & (List::*)()>("content", &List::content, return_internal_reference<>())
        // return_internal_reference<1> ties the lifetime of the returned List
        // reference to the first argument passed to data_cast.
        .def("data_cast", &stromx::python::data_cast<List>, return_internal_reference<1>())
        .staticmethod("data_cast")
    ;
}
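
/* Usage sketch (added for illustration; the module name below is an assumption,
 * not taken from the source): exportList() is meant to be invoked from a
 * Boost.Python module definition, e.g.
 *
 *   BOOST_PYTHON_MODULE(stromx_example)  // hypothetical module name
 *   {
 *       exportList();
 *   }
 *
 * after which Python code can construct a List, read its contents via
 * list.content(), and downcast generic Data with List.data_cast(...).
 */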
|