```python
import holoviews as hv
hv.extension('bokeh')
hv.opts.defaults(hv.opts.Curve(width=500),
                 hv.opts.Image(width=500, colorbar=True, cmap='Viridis'))
```
```python
import numpy as np
import scipy.signal
import scipy.fft
```
# FIR system and filter design
In the previous lesson we defined an FIR system that produces an output $y$ from an input $x$ as
$$
y[n] = (h * x)[n]
$$
where $h$ is a vector of length $L+1$ that holds the coefficients of the system and $*$ denotes the convolution operation. A quick numerical sketch of this relation is shown below.
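As a minimal sketch (the filter coefficients and input below are arbitrary examples, not taken from the lesson), the relation $y = (h*x)$ can be checked directly with `np.convolve`:
```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])       # example coefficients (L+1 = 3)
x = np.array([1., 2., 3., 4., 5.])  # example input signal
y = np.convolve(h, x)               # y[n] = sum_j h[j] x[n-j]
print(y)                            # first sample is h[0]*x[0] = 0.5
```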
In this lesson we will see:
- The impulse response and frequency response of a system
- The definition of a filter and the basic types of filters
- How to design an FIR filter, that is, how to choose the values of the vector $h$
## Impulse response of a system
Let the unit impulse, or Kronecker delta, be
$$
\delta[n-m] = \begin{cases} 1 & n=m \\ 0 & n \neq m \end{cases}
$$
The **impulse response of a discrete system** is the output obtained when the input is a unit impulse
For an arbitrary FIR system we have
$$
y[n]|_{x=\delta} = (h * \delta)[n] = \sum_{j=0}^L h[j] \delta[n-j] = \begin{cases} h[n] & n \in [0, L] \\ 0 & \text{otherwise} \end{cases}
$$
that is, the impulse response:
- has a **finite duration and then decays to zero**
- recovers the coefficients $h[j]$ of the system
In a causal system we have $h[n] = 0 \quad \forall n < 0$
We call the **support** of the system all those values of $n$ such that $h[n] \neq 0$
### Example: Impulse response of the reverberant system
For the reverberant system
$$
y[n] = x[n] + A x[n-m]
$$
the impulse response is
$$
y[n] = \delta[n] + A \delta[n-m] = \begin{cases} 1 & n=0\\ A& n=m \\ 0 & \text{otherwise} \end{cases}
$$
The impulse response lets us recover the coefficients of the system in case we did not know them. The short sketch below illustrates this.
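As a minimal sketch (with assumed values $A=0.5$ and $m=3$, chosen only for illustration), feeding a unit impulse through the reverberant system recovers its coefficients:
```python
import numpy as np

A, m = 0.5, 3                        # assumed example parameters
delta = np.zeros(10); delta[0] = 1.  # unit impulse
# Reverberant system: y[n] = x[n] + A*x[n-m]
y = delta + A*np.roll(delta, m)
print(y)                             # 1 at n=0, A at n=m, zero elsewhere
```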
## Frequency response of a system
Consider a linear system whose coefficients do not change over time, such as the FIR system we have been studying
By the convolution property of the Fourier transform we know that
$$
\begin{align}
\text{DFT}_N [y[n]] & = \text{DFT}_N [(h * x)[n]] \nonumber \\
\text{DFT}_N [y[n]] & = \text{DFT}_N [h[n]] \cdot \text{DFT}_N [x[n]] \nonumber \\
Y[k] &= H[k] \cdot X[k] ,
\end{align}
$$
where we call $H[k]$ the **frequency response of the system**
The frequency response is **the Fourier transform of the impulse response**
### Frequency response using Python
We can compute the frequency response of a filter from its impulse response $h$ using the function
```python
scipy.signal.freqz(b, # Numerator coefficients (h)
                   a=1, # Denominator coefficients of h
                   fs=6.28318 # Sampling frequency (defaults to 2*pi)
                   ...
                   )
```
For an FIR filter there are only coefficients in the numerator, so we do not use the argument $a$
The function returns
```python
freq, H = scipy.signal.freqz(b=h)
```
an array of frequencies and the (complex) frequency response, as the quick check below illustrates
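As a quick sanity check (a sketch with an assumed example filter, not from the lesson), `scipy.signal.freqz` agrees with the DFT of the impulse response evaluated on the same frequency grid:
```python
import numpy as np
import scipy.signal

h = np.array([0.25, 0.5, 0.25])  # example impulse response
freq, H = scipy.signal.freqz(b=h, worN=8, whole=True, fs=1)
H_dft = np.fft.fft(h, n=8)       # 8-point DFT of h
print(np.allclose(H, H_dft))     # True
```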
### Example: Frequency response of the moving-average system
The moving-average system we saw in the previous lesson has impulse response
$$
h[i] = \begin{cases} 1/L & 0 \leq i < L \\ 0 & \text{otherwise} \end{cases}
$$
The magnitude of its frequency response is
```python
p1, p2 = [], []
for L in [10, 20, 50]:
    h = np.zeros(shape=(100,))
    h[:L] = 1/L
    freq, H = scipy.signal.freqz(b=h, fs=1)
    p1.append(h)
    p2.append(np.abs(H))
```
```python
ph = hv.Overlay([hv.Curve((range(100), p), 'Time', 'Impulse response') for p in p1])
pH = hv.Overlay([hv.Curve((freq, p), 'Frequency', 'Frequency response') for p in p2])
hv.Layout([ph, pH]).cols(1).opts(hv.opts.Curve(height=200))
```
:::{note}
The wider the system in the time domain (large $L$), the more concentrated its frequency response becomes
:::
If we multiply $H$ by the spectrum of a signal, what we are doing is attenuating its high frequencies, as the sketch below illustrates
We will formalize this idea next
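As a minimal sketch (with an assumed test signal, a slow sine plus white noise, not from the lesson), applying the moving average attenuates the fast fluctuations:
```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(500)
clean = np.sin(2*np.pi*0.01*n)                 # slow component
x = clean + 0.5*rng.standard_normal(len(n))    # noisy observation
L = 20
h = np.ones(L)/L                               # moving-average impulse response
y = np.convolve(x, h, mode='same')             # filtered signal
# The residual around the clean sine is smaller after filtering
print(np.std(x - clean), np.std(y - clean))
```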
## Digital filters
A **filter** is a system whose goal is to attenuate or highlight a specific aspect of a signal
For example:
- Reducing the noise level
- Separating two or more signals that are mixed together
- Equalizing the signal
- Restoring the signal (removing blur or recording artifacts)
We call filters applied to digital signals **digital filters**, and we speak of the **filtered signal** to refer to the output of the filter
In this unit we focus on filters whose coefficients are fixed and do not change over time. In the next unit we will see filters that adapt continuously to changes in the input
### Basic filter types
As we saw, a linear time-invariant filter can be studied in the frequency domain using
$$
Y[k] = H[k] X[k] ,
$$
where $H[k]$ is the DFT of the filter (its frequency response)
The filter acts as a **multiplicative mask** that modifies the spectrum of the input signal
:::{important}
This means that the filter can only emphasize, attenuate or remove certain frequencies, but it can **never create new frequencies**
:::
Consider the following ideal filters or masks
From left to right we have:
- Low-pass filter: Removes the high frequencies. Useful for smoothing
- High-pass filter: Removes the low frequencies. Useful for detecting changes
- Band-pass filter: Removes everything except a contiguous band of frequencies
- Band-reject filter: Removes only a contiguous band of frequencies
We call them "ideal" because in practice the cutoffs of a filter cannot be as abrupt as shown in the figure; a code sketch of these masks is given below
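As a minimal sketch (the frequency grid and cutoff values are arbitrary choices, not from the lesson), the four ideal masks can be built directly as indicator functions over the frequency axis:
```python
import numpy as np

f = np.linspace(-0.5, 0.5, 501)   # normalized frequency axis
fc, f1, f2 = 0.1, 0.15, 0.3       # assumed cutoff frequencies
low_pass    = (np.abs(f) < fc).astype(float)
high_pass   = (np.abs(f) > fc).astype(float)
band_pass   = ((np.abs(f) > f1) & (np.abs(f) < f2)).astype(float)
band_reject = 1.0 - band_pass
```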
Next we will see a method for designing FIR filters that starts from the frequency domain
## Designing an FIR filter: The window method
Designing a filter consists of defining
- L: the length of the impulse response
- h: the values of the impulse response
The following design algorithm is known as the "window method" and starts from an ideal frequency response
1. Specify an ideal **frequency response** $H_d[k]$ according to the requirements
1. Use the inverse Fourier transform to obtain the **ideal impulse response** $h_d[n]$
1. Truncate the ideal impulse response with **a window** such that $h[n] = h_d[n] w[n]$
The resulting $h[n]$ gives the coefficients of the FIR filter, and the length of the window $w[n]$ sets the length of the filter
The window $w[n]$ can be any of the functions seen in the previous unit, for example the rectangular window
$$
w[n] = \begin{cases} 1 & 0 \leq n \leq L \\ 0 & \text{otherwise} \end{cases}
$$
or the Hann window
$$
w[n] = 0.5 - 0.5 \cos \left( \frac{2\pi n}{L-1} \right)
$$
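As a small sketch (the window length of 51 samples is an arbitrary choice), both windows are available in `scipy.signal.windows`, or can be built from the formulas above:
```python
import numpy as np
import scipy.signal

w_rect = scipy.signal.windows.boxcar(51)  # rectangular window of length 51
w_hann = scipy.signal.windows.hann(51)    # Hann window of length L = 51
# Matches the formula 0.5 - 0.5*cos(2*pi*n/(L-1)) with L-1 = 50
n = np.arange(51)
print(np.allclose(w_hann, 0.5 - 0.5*np.cos(2*np.pi*n/50)))  # True
```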
Next we will see, step by step, how to create a low-pass filter using this method. Later we will see some `scipy` functions that simplify the process
### Designing a low-pass filter (LPF)
A low-pass filter is one that only lets the **low** frequencies through
Its uses include:
- Recovering a trend or slow behavior in a signal
- Smoothing the signal and reducing the influence of additive noise
Let us design a filter that removes all frequencies above $f_c$ [Hz] from a signal $x[n]$ sampled at frequency $F_s$
**Step 1: Ideal frequency response**
We propose the following frequency response, which only lets through the frequencies below $f_c$, that is, it is nonzero only in the range $[-f_c, f_c]$
$$
\begin{align}
H_d(f) &= \begin{cases} 1 & |f| < f_c\\ 0 & |f| > f_c \end{cases} \nonumber \\
&= \text{rect}\left(\frac{f}{2f_c}\right) \nonumber
\end{align}
$$
```python
fc = 0.1 # Cutoff frequency
Fs = 1 # Sampling frequency
n = np.arange(-50, 50, step=1/Fs)
f = scipy.fft.fftshift(scipy.fft.fftfreq(n=len(n), d=1/Fs))
# Build the ideal frequency response
kc = int(len(n)*fc)
Hd = np.zeros_like(n, dtype=np.float64)
Hd[:kc] = 1.
Hd[len(Hd)-kc+1:] = 1.
```
**Step 2: Ideal impulse response**
We take the inverse Fourier transform of the ideal frequency response
$$
\begin{align}
h_d(t) &= \int_{-f_c}^{f_c} e^{j 2 \pi f t} df \nonumber \\
& = \frac{1}{\pi t} \sin(2 \pi f_c t) = 2 f_c \,\text{sinc}(2 \pi f_c t) \nonumber
\end{align}
$$
where $\text{sinc}(x) = \sin(x)/x$ is the unnormalized sinc. The discrete-time version is
$$
h_d[n] = 2 f_c\text{sinc}(2 \pi f_c n/ F_s)/F_s
$$
Note that this is an infinitely long function
```python
# Compute the ideal impulse response
# hd = np.real(scipy.fft.fftshift(scipy.fft.ifft(Hd)))
hd = 2*fc*np.sinc(2*fc*n/Fs)/Fs # Pi is omitted because np.sinc includes it
```
```python
p1 = hv.Curve((f, scipy.fft.fftshift(Hd)), 'Frequency', 'Ideal frequency\n response')
p2 = hv.Curve((n, hd), 'Time', 'Ideal impulse\n response')
hv.Layout([p1, p2]).opts(hv.opts.Curve(width=300, height=200))
```
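As a side check (a sketch reusing the `fc`, `Fs`, `n` and `Hd` variables from the cells above), the commented-out inverse-FFT route gives approximately the same impulse response as the analytic sinc formula:
```python
# Inverse FFT of the ideal mask, re-centered so that n=0 sits in the middle
hd_ifft = np.real(scipy.fft.fftshift(scipy.fft.ifft(Hd)))
# Agrees with the analytic sinc up to discretization error
print(np.max(np.abs(hd_ifft - hd)))  # small, but not exactly zero
```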
**Step 3: Truncate the ideal impulse response**
To obtain a finite impulse response we multiply by a finite window of length $L+1$
$$
h[n] = 2 f_c \text{sinc}(2 \pi f_c n /F_s) \cdot \text{rect}(n/(L+1))
$$
```python
# Compute the truncated impulse response
def truncar(hd, L=100):
    w = np.zeros_like(hd)
    w[len(w)//2-L//2:len(w)//2+L//2+1] = 1.
    return w*hd

# Compute the truncated frequency response
h = truncar(hd)
H = scipy.fft.fft(h)
```
Let us compare the ideal frequency response with the one we actually apply to the signal
```python
p = []
p.append(hv.Curve((f, scipy.fft.fftshift(Hd)), 'Frequency', 'Frequency response', label='Ideal'))
for L in [20, 40]:
    H = scipy.fft.fft(truncar(hd, L))
    p.append(hv.Curve((f, scipy.fft.fftshift(np.abs(H))), label=f'Truncated L={L}'))
hv.Overlay(p).opts(hv.opts.Curve(line_width=3, alpha=0.75))
```
The "ideal" frequency response $H_d[k]$ is flat and has sharp discontinuities
The "truncated" frequency response $H[k]$ tries to approximate $H_d[k]$. But looking at $H[k]$ we notice that
- Ripples appear: the pass and stop bands are not perfectly flat
- The cutoff frequency becomes a transition: the discontinuity is not abrupt as in the ideal case
The following diagram illustrates these behaviors
The window function used to truncate the ideal response determines the trade-off between how abrupt the roll-off of the filter is and how much ripple appears in the flat regions.
In general:
- the longer the window (larger $L$), the more faithful the frequency response
- the smoother the window, the slower the transition in the frequency response and the smaller the ripple in the pass and stop bands; the sketch below compares both effects
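As a minimal sketch (reusing `hd`, `truncar` and the grid from the cells above, with a Hann-tapered variant added for comparison), we can contrast rectangular and Hann truncation of the same support:
```python
L = 40
w_hann = np.zeros_like(hd)
w_hann[len(hd)//2 - L//2 : len(hd)//2 + L//2 + 1] = np.hanning(L+1)
H_rect = np.abs(scipy.fft.fft(truncar(hd, L)))  # rectangular truncation: sharper cutoff, more ripple
H_hann = np.abs(scipy.fft.fft(hd*w_hann))       # Hann truncation: wider transition, less ripple
```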
## FIR filter design using scipy
We can design a filter with the windowing technique using the scipy function
```python
scipy.signal.firwin(numtaps, # Filter length
                    cutoff, # Cutoff frequency(ies)
                    window='hamming', # Window function
                    pass_zero=True, # Explained below
                    fs=None # Sampling frequency
                    ...
                    )
```
The argument `pass_zero` is a boolean that indicates whether the zero frequency is passed or rejected by the filter. More details are given in the examples below.
The function `firwin` returns an array $h$ with the impulse response of the FIR filter. We can then convolve $h$ with our input signal, as sketched below.
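As a minimal sketch (the input is an assumed toy signal, a slow sine plus a fast sine), a `firwin` filter is applied by convolution:
```python
import numpy as np
import scipy.signal

n = np.arange(500)
x = np.sin(2*np.pi*0.02*n) + np.sin(2*np.pi*0.3*n)    # slow + fast components
h = scipy.signal.firwin(numtaps=51, cutoff=0.1, fs=1)  # low-pass, fc = 0.1 Hz
y = scipy.signal.convolve(x, h, mode='same')           # keeps mostly the slow component
```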
### Designing a low-pass filter (LPF)
Let us see how this function is used to design the low-pass filter we built manually in the previous section
```python
fc = 0.1 # Cutoff frequency
Fs = 1 # Sampling frequency
L = 100+1 # Filter length
h = scipy.signal.firwin(L, fc, window='boxcar', pass_zero=True, fs=Fs)
freq, H = scipy.signal.freqz(h, fs=Fs)
```
The argument `pass_zero` must be `True` to design a low-pass filter
```python
p1 = hv.Curve((range(len(h)), h), 'Time', 'Impulse\n response')
p2 = hv.Curve((freq, np.absolute(H)), 'Frequency', 'Frequency\n response')
hv.Layout([p1, p2 * hv.VLine(fc).opts(color='r', alpha=0.25)]).opts(hv.opts.Curve(width=300, height=200))
```
The result is equivalent to the manual process shown before
We can change the trade-off between the speed of the transition and the ripple by using a different window. For example, with the *Hamming* window we have
```python
fc = 0.1 # Cutoff frequency
Fs = 1 # Sampling frequency
L = 100+1 # Filter length
h = scipy.signal.firwin(L, fc, window='hamming', pass_zero=True, fs=Fs)
freq, H = scipy.signal.freqz(h, fs=Fs)
```
```python
p1 = hv.Curve((range(len(h)), h), 'Time', 'Impulse\n response')
p2 = hv.Curve((freq, np.absolute(H)), 'Frequency', 'Frequency\n response')
hv.Layout([p1, p2 * hv.VLine(fc).opts(color='r', alpha=0.25)]).opts(hv.opts.Curve(width=300, height=200))
```
:::{note}
The ripple decreases, but the frequency cutoff is now more gradual
:::
### Designing a high-pass filter (HPF)
A high-pass filter is one that only lets the **high** frequencies through
Its uses include:
- Identifying changes/details, that is, fast behaviors in a signal
- Removing trends
With respect to the previous example, to design a high-pass filter with `firwin` we only need to change the value of the argument `pass_zero`
```python
Fs = 1 # Sampling frequency
fc = 0.2 # Cutoff frequency
L = 100+1 # Filter length
h = scipy.signal.firwin(L, fc, window='hamming', pass_zero=False, fs=Fs)
freq, H = scipy.signal.freqz(h, fs=Fs)
```
```python
p1 = hv.Curve((range(L), h), 'Time', 'Impulse\n response')
p2 = hv.Curve((freq, np.absolute(H)), 'Frequency', 'Frequency\n response')
hv.Layout([p1, p2]).opts(hv.opts.Curve(width=300, height=200))
```
:::{note}
The frequency response shows that this filter removes the low frequencies of the signal
:::
### Designing a band-pass (BPF) and a band-reject (BRF) filter
As their names indicate, these filters
- BPF: let through only a certain band of frequencies
- BRF: let through all frequencies except a given band
The frequency band is defined by its minimum and maximum cutoff frequencies $f_{c1} < f_{c2}$
To create a BPF or BRF filter with `firwin` we must pass a tuple or list with these frequencies. For example, for a BPF
```python
Fs = 1 # Sampling frequency
fc1, fc2 = 0.2, 0.3 # Cutoff frequencies
L = 100+1 # Filter length
h = scipy.signal.firwin(L, (fc1, fc2), window='hamming', pass_zero=False, fs=Fs)
freq, H = scipy.signal.freqz(h, fs=Fs)
```
```python
p1 = hv.Curve((range(L), h), 'Time', 'Impulse\n response')
p2 = hv.Curve((freq, np.absolute(H)), 'Frequency', 'Frequency\n response')
hv.Layout([p1, p2]).opts(hv.opts.Curve(width=300, height=200))
```
:::{note}
The frequency response shows that this filter removes the frequencies **outside** the defined band
:::
A band-reject filter is created with the argument `pass_zero=True`
```python
h = scipy.signal.firwin(L, (fc1, fc2), window='hamming', pass_zero=True, fs=Fs)
freq, H = scipy.signal.freqz(h, fs=Fs)
```
```python
p1 = hv.Curve((range(L), h), 'Time', 'Impulse\n response')
p2 = hv.Curve((freq, np.absolute(H)), 'Frequency', 'Frequency\n response')
hv.Layout([p1, p2]).opts(hv.opts.Curve(width=300, height=200))
```
:::{note}
The frequency response shows that this filter removes the frequencies **inside** the defined band
:::
## Example: An FIR filter to remove a trend
In the previous lesson we saw the case of a signal of interest riding on top of a trend
```python
np.random.seed(0);
n = np.arange(0, 150, step=1)
C = np.exp(-0.5*(n[:, np.newaxis] - n[:, np.newaxis].T)**2/30**2)
x_tendencia = 3*np.random.multivariate_normal(np.zeros_like(n), C)+2.5
x_deseada = np.sin(2.0*np.pi*0.125*n)
x = x_deseada + x_tendencia
```
```python
p3 = hv.Curve((n, x_deseada), 'Time', 'Signal', label='Desired (s)').opts(color='k', alpha=0.75)
p2 = hv.Curve((n, x_tendencia), 'Time', 'Signal', label='Trend').opts(alpha=0.75)
p1 = hv.Curve((n, x), 'Time', 'Signal', label='Observed (x)').opts(height=250)
hv.Overlay([p1,p2,p3]).opts(legend_position='bottom_right')
```
We can design an FIR filter to separate the desired signal from the trend
To do so we need to define a cutoff frequency. An appropriate cutoff frequency can be found from the amplitude spectrum of the observed signal
```python
freq = scipy.fft.rfftfreq(n=len(x), d=1)
SA = np.absolute(scipy.fft.rfft(x-np.mean(x)))
hv.Curve((freq, SA), 'Frequency [Hz]', 'Spectrum').opts(height=250)
```
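As a complementary sketch (reusing `SA` and `freq` from the cell above; the peak-height threshold is an arbitrary choice), the dominant components can also be located programmatically:
```python
# Hypothetical helper step: locate the dominant spectral peaks
peaks, _ = scipy.signal.find_peaks(SA, height=0.2*np.max(SA))
print(freq[peaks])  # should include peaks near 0.01 Hz and 0.13 Hz (depends on the threshold)
```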
The spectrum indicates that there is a component with frequency close to 0.13 Hz and a slower one with frequency close to 0.01 Hz
If we want the faster component we can design a high-pass filter with a cutoff frequency between these two values
Let us see how the filtered signal and its amplitude spectrum change for different cutoff frequencies around the range mentioned above. The frequency response of the designed filter (in red) is also shown on top of the resulting spectrum
```python
L = 51
y, Y, H = {}, {}, {}
for fc in np.arange(0.01, 0.17, step=0.01):
    # Design the filter
    h = scipy.signal.firwin(L, fc, window='hamming', pass_zero=False, fs=1)
    # Filter the signal
    y[fc] = scipy.signal.convolve(x, h, mode='same', method='auto')
    # Amplitude spectrum of the filtered signal
    Y[fc] = np.absolute(scipy.fft.rfft(y[fc]-np.mean(y[fc]), norm='forward'))
    # Frequency response of the filter
    freqH, H[fc] = scipy.signal.freqz(h, fs=1)
    H[fc] = np.abs(H[fc])
freq = scipy.fft.rfftfreq(n=len(x), d=1)
```
```python
hMap1 = hv.HoloMap(kdims='Cutoff frequency')
hMap2 = hv.HoloMap(kdims='Cutoff frequency')
for fc, y_ in y.items():
    hMap1[fc] = hv.Curve((n, y_), 'Time [s]', 'Output', label='y')
for (fc, Y_), (_, H_) in zip(Y.items(), H.items()):
    p1 = hv.Curve((freq, Y_), 'Frequency [Hz]', 'Spectrum', label='Y')
    p2 = hv.Curve((freqH, H_), 'Frequency [Hz]', 'Spectrum', label='H')
    hMap2[fc] = p1 * p2
p_target = hv.Curve((n, x_deseada), 'Time', 'Output', label='s').opts(color='k', alpha=0.5, width=4)
hv.Layout([hMap1 * p_target, hMap2]).cols(1).opts(hv.opts.Curve(height=250),
                                                  hv.opts.Overlay(legend_position='bottom_right'))
```
## Summary
The design of a filter is thus given by its
- **Application:** Defined by the ideal frequency response, for example: low-pass, high-pass, etc.
- **Fidelity:** The tolerable error between the ideal frequency response and the actual one
The filter type and its cutoff frequencies define its application. This is a requirement of the problem we want to solve.
The parameter $L$ gives us a fidelity trade-off: increasing $L$ yields higher fidelity but also a higher computational cost.
The window function used for truncation also affects fidelity, providing a second trade-off between artifacts (ripple) in the pass/stop bands and how abrupt the frequency cutoff is, as the closing sketch below quantifies
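As a closing sketch (the error measure and parameter grid are assumptions for illustration, not from the lesson), the $L$ trade-off can be quantified by measuring how far the realized response is from the ideal mask:
```python
import numpy as np
import scipy.signal

fc, Fs = 0.1, 1
for L in [11, 51, 201]:
    h = scipy.signal.firwin(L, fc, window='hamming', fs=Fs)
    freq, H = scipy.signal.freqz(h, fs=Fs, worN=1024)
    Hd = (freq < fc).astype(float)             # ideal low-pass mask
    print(L, np.mean(np.abs(np.abs(H) - Hd)))  # error shrinks as L grows
```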
|
State Before: α : Type u_1
α' : Type ?u.67520
β : Type u_2
β' : Type ?u.67526
γ : Type u_3
γ' : Type ?u.67532
δ : Type ?u.67535
δ' : Type ?u.67538
ε : Type ?u.67541
ε' : Type ?u.67544
ζ : Type ?u.67547
ζ' : Type ?u.67550
ν : Type ?u.67553
inst✝⁷ : DecidableEq α'
inst✝⁶ : DecidableEq β'
inst✝⁵ : DecidableEq γ
inst✝⁴ : DecidableEq γ'
inst✝³ : DecidableEq δ
inst✝² : DecidableEq δ'
inst✝¹ : DecidableEq ε
inst✝ : DecidableEq ε'
f✝ f' : α → β → γ
g g' : α → β → γ → δ
s✝ s' : Finset α
t✝ t' : Finset β
u u' : Finset γ
a a' : α
b b' : β
c : γ
f : α → β → γ
s : Finset α
t : Finset β
⊢ image (uncurry f) (s ×ˢ t) = image₂ f s t State After: no goals Tactic: rw [← image₂_curry, curry_uncurry] |
{-# OPTIONS --cubical --safe #-}
module Cubical.Foundations.Pointed.Properties where
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Pointed.Base
open import Cubical.Data.Prod
Π∙ : ∀ {ℓ ℓ'} (A : Type ℓ) (B∙ : A → Pointed ℓ') → Pointed (ℓ-max ℓ ℓ')
Π∙ A B∙ = (∀ a → typ (B∙ a)) , (λ a → pt (B∙ a))
Σ∙ : ∀ {ℓ ℓ'} (A∙ : Pointed ℓ) (B∙ : typ A∙ → Pointed ℓ') → Pointed (ℓ-max ℓ ℓ')
Σ∙ A∙ B∙ = (Σ[ a ∈ typ A∙ ] typ (B∙ a)) , (pt A∙ , pt (B∙ (pt A∙)))
_×∙_ : ∀ {ℓ ℓ'} (A∙ : Pointed ℓ) (B∙ : Pointed ℓ') → Pointed (ℓ-max ℓ ℓ')
A∙ ×∙ B∙ = ((typ A∙) × (typ B∙)) , (pt A∙ , pt B∙)
|
function C = dot_product(A,B)
% Computes the dot product of the corresponding rows of the matrices A and B
C = sum(A.*B,2); |
lemma continuous_mult' [continuous_intros]: fixes f g :: "_ \<Rightarrow> 'b::topological_semigroup_mult" shows "continuous F f \<Longrightarrow> continuous F g \<Longrightarrow> continuous F (\<lambda>x. f x * g x)" |
State Before: α : Sort u_1
P : Prop
inst✝ : Decidable P
x : ¬P → α
y : ¬¬P → α
⊢ dite (¬P) x y = dite P (fun h => y (_ : ¬¬P)) x State After: no goals Tactic: by_cases h : P <;> simp [h] |
import .subset
universes u v
namespace subset
section
parameters {A : Type u} {B : Type v} (F : subset A → subset B)
def cocontinuous :=
∀ (Ix : Type) (f : Ix → subset A),
F (union_ix f) = union_ix (F ∘ f)
def cocontinuous_inh :=
∀ (Ix : Type) [inhabited Ix] (f : Ix → subset A),
F (union_ix f) = union_ix (F ∘ f)
def continuous :=
∀ (Ix : Type) (f : Ix → subset A),
F (intersection_ix f) = intersection_ix (F ∘ f)
def continuous_inh :=
∀ (Ix : Type) [inhabited Ix] (f : Ix → subset A),
F (intersection_ix f) = intersection_ix (F ∘ f)
end
section
parameter {A : Type u}
/-- Given a function over subsets, the greatest fixpoint
for that function is the union of all post-fixed points,
i.e. all sets P with P ≤ F P -/
def greatest_fixpoint (F : subset A → subset A) : subset A
:= union_st (λ P, P ≤ F P)
/-- Given a function over subsets, the least fixpoint
for that function is the intersection of all pre-fixed points,
i.e. all sets P with F P ≤ P -/
def least_fixpoint (F : subset A → subset A) : subset A
:= intersection_st (λ P, F P ≤ P)
section
parameters (F : subset A → subset A) (Fmono : monotone F)
include Fmono
/-- Applying F to the greatest fixpoint of F results in
a set that includes the greatest fixpoint;
this should likely be an internal-only lemma -/
lemma greatest_fixpoint_postfixed
: greatest_fixpoint F ≤ F (greatest_fixpoint F)
:= begin
intros tr H, induction H, apply Fmono, tactic.swap,
apply a, assumption, clear a_1 tr,
intros x H, constructor; assumption,
end
/-- Applying F to the greatest fixpoint of F results in
the same set -/
lemma greatest_fixpoint_fixed
: greatest_fixpoint F = F (greatest_fixpoint F)
:= begin
apply included_eq,
{ apply greatest_fixpoint_postfixed, assumption },
{ intros x H, constructor, apply Fmono,
apply greatest_fixpoint_postfixed, assumption,
assumption
}
end
/-- Applying a function to the fixpoint is no smaller than the fixpoint -/
lemma least_fixpoint_prefixed
: F (least_fixpoint F) ≤ least_fixpoint F
:= begin
unfold least_fixpoint intersection_st,
intros x H P HP, apply HP, revert x H,
apply Fmono, intros x H, apply H, dsimp, assumption
end
/-- Applying a function to the fixpoint does not change the set -/
lemma least_fixpoint_fixed
: F (least_fixpoint F) = least_fixpoint F
:= begin
apply included_eq,
{ apply least_fixpoint_prefixed, assumption },
{ intros x H,
apply H, dsimp, apply Fmono,
apply least_fixpoint_prefixed, assumption,
}
end
end
lemma greatest_fixpoint_and_le (F G : subset A → subset A)
: greatest_fixpoint (λ X, F X ∩ G X)
≤ greatest_fixpoint F ∩ greatest_fixpoint G
:= begin
unfold greatest_fixpoint,
apply included_trans,
tactic.swap,
apply union_st_bintersection,
apply union_st_mono, intros x H,
constructor; intros z Hz;
specialize (H z Hz); induction H with H1 H2; assumption
end
/-- A function F from subset to subset is finitary if
an arbitrary application can be described as the union of some number of applications to finite arguments -/
def finitary (F : subset A → subset A) : Prop :=
∀ x, F x = union_ix_st (λ xs : list A, from_list xs ≤ x) (λ xs, F (from_list xs))
/-- Finitary functions are monotone -/
lemma finitary_monotone (F : subset A → subset A)
(Ffin : finitary F) : monotone F :=
begin
intros P Q PQ,
repeat { rw Ffin },
intros x H,
induction H,
constructor, apply included_trans; assumption,
assumption,
end
def chain_cont (F : subset A → subset A) :=
∀ (f : ℕ → subset A),
(∀ x y : ℕ, x ≤ y → f x ≤ f y)
→ F (union_ix f) = union_ix (F ∘ f)
def chain_cocont (F : subset A → subset A) :=
∀ (f : ℕ → subset A),
(∀ x y : ℕ, x ≤ y → f y ≤ f x)
→ F (intersection_ix f) = intersection_ix (F ∘ f)
protected def simple_chain (P Q : subset A) : ℕ → subset A
| 0 := P
| (nat.succ n) := Q
private lemma simple_chain_mono
{P Q : subset A} (PQ : P ≤ Q) (x y : ℕ)
(H : x ≤ y) : simple_chain P Q x ≤ simple_chain P Q y
:= begin
induction H, apply included_refl,
apply included_trans, assumption,
clear ih_1 a x y,
cases b; simp [subset.simple_chain],
intros x Px, apply PQ, assumption,
apply included_refl,
end
private lemma simple_chain_anti
{P Q : subset A} (PQ : Q ≤ P) (x y : ℕ)
(H : x ≤ y) : simple_chain P Q y ≤ simple_chain P Q x
:= begin
induction H, apply included_refl,
apply included_trans, tactic.swap, assumption,
clear ih_1 a x y,
cases b; simp [subset.simple_chain],
intros x Px, apply PQ, assumption,
apply included_refl,
end
lemma chain_cont_union {F : subset A → subset A}
(H : chain_cont F) {P Q : subset A}
(PQ : P ≤ Q)
: F (P ∪ Q) = F P ∪ F Q
:= begin
unfold chain_cont at H,
specialize (H (subset.simple_chain P Q) (simple_chain_mono PQ)),
transitivity (F (union_ix (subset.simple_chain P Q))),
f_equal,
apply included_eq, rw imp_or at PQ, rw PQ,
intros x Qx, apply (union_ix_st.mk 1),
trivial, dsimp [subset.simple_chain], assumption,
intros x Hx, induction Hx with n _ Hn,
cases n; dsimp [subset.simple_chain] at Hn,
apply or.inl, assumption, apply or.inr, assumption,
rw H, clear H, apply included_eq,
intros x Hx, induction Hx with n _ Hn,
cases n; dsimp [function.comp, subset.simple_chain] at Hn,
apply or.inl, assumption, apply or.inr, assumption,
intros x Hx, induction Hx with Hx Hx,
apply (union_ix_st.mk 0), trivial,
dsimp [function.comp, subset.simple_chain], assumption,
apply (union_ix_st.mk 1), trivial,
dsimp [function.comp, subset.simple_chain], assumption,
end
lemma chain_cocont_intersection {F : subset A → subset A}
(H : chain_cocont F) {P Q : subset A}
(PQ : P ≤ Q)
: F (P ∩ Q) = F P ∩ F Q
:= begin
unfold chain_cocont at H,
specialize (H (subset.simple_chain Q P) (simple_chain_anti PQ)),
transitivity (F (intersection_ix (subset.simple_chain Q P))),
-- f_equal -- TODO: FIX, the tactic isn't working here
apply congr_arg,
apply included_eq,
intros x PQx n, induction PQx with Px Qx,
cases n; dsimp [subset.simple_chain]; assumption,
rw imp_and at PQ, rw ← PQ,
intros x Hx, apply (Hx 1),
rw H, clear H, apply included_eq,
intros x Hx, constructor, apply (Hx 1),
apply (Hx 0),
intros x FPQx, induction FPQx with FPx FQx,
intros n,
cases n; dsimp [function.comp, subset.simple_chain]; assumption,
end
lemma chain_cont_mono {F : subset A → subset A}
(H : chain_cont F) : monotone F
:= begin
unfold monotone, intros P Q PQ,
rw imp_or, rw ← chain_cont_union,
rw imp_or at PQ, rw PQ, assumption, assumption
end
lemma chain_cocont_mono {F : subset A → subset A}
(H : chain_cocont F) : monotone F
:=
begin
unfold monotone, intros P Q PQ,
rw imp_and, rw ← chain_cocont_intersection,
rw imp_and at PQ, rw ← PQ, assumption, assumption
end
/-- Repeatedly apply a function f starting with x. -/
def iterate {A : Type u} (f : A → A) (x : A) : ℕ → A
| 0 := x
| (nat.succ n) := f (iterate n)
/-- The least fixpoint is described by the union of all sets
indexed by the number of iterations
-/
def least_fixpointn (F : subset A → subset A) : subset A
:= union_ix (iterate F ff)
/-- The greatest fixpoint is described by the intersection of all sets
indexed by the number of iterations
-/
def greatest_fixpointn (F : subset A → subset A) : subset A
:= intersection_ix (iterate F tt)
lemma least_fixpointn_postfixed
{F : subset A → subset A}
(Fmono : monotone F)
: least_fixpointn F ≤ F (least_fixpointn F)
:= begin
intros x H, induction H with n _ Hn,
cases n; simp [iterate] at Hn, exfalso, apply Hn,
apply Fmono, tactic.swap, assumption,
clear Hn x,
intros x H, constructor, constructor, assumption,
end
lemma greatest_fixpointn_prefixed
{F : subset A → subset A}
(Fmono : monotone F)
: F (greatest_fixpointn F) ≤ greatest_fixpointn F
:= begin
intros x H n,
cases n; simp [iterate], apply Fmono,
tactic.swap, assumption, clear H x,
intros x H, apply H,
end
lemma continuous_chain_cocont {F : subset A → subset A}
(H : continuous_inh F) : chain_cocont F :=
begin
intros f fmono, apply H
end
lemma cocontinuous_chain_cont {F : subset A → subset A}
(H : cocontinuous F) : chain_cont F :=
begin
intros f fmono, apply H
end
lemma and_continuous_r (P : subset A)
: continuous_inh (bintersection P)
:= begin
unfold continuous_inh, intros Ix inh f, apply included_eq,
{ intros x H ix, dsimp [function.comp],
induction H with Hl Hr, constructor, assumption,
apply Hr },
{ intros x H, constructor,
specialize (H inh.default),
dsimp [function.comp] at H,
induction H with Hl Hr, assumption,
intros ix, specialize (H ix),
induction H with Hl Hr, assumption, }
end
lemma and_cocontinuous_r (P : subset A)
: cocontinuous (bintersection P)
:= begin
unfold cocontinuous, intros Ix f, apply included_eq,
{ intros x H, dsimp [function.comp],
induction H with Hl Hr, induction Hr,
constructor, trivial, constructor; assumption,
},
{ intros x H, induction H, induction a_1, constructor,
assumption, constructor; assumption
}
end
lemma or_continuous_r (P : subset A)
[decP : decidable_pred P]
: continuous (bunion P)
:= begin
unfold continuous, intros Ix f,
apply included_eq,
{ intros x H ix, dsimp [function.comp],
induction H with H H,
apply or.inl, assumption,
apply or.inr, apply H, },
{ intros x Hx,
have H := decP x, induction H with HP HP,
{ apply or.inr, intros n,
specialize (Hx n), induction Hx with Hl Hr,
contradiction, assumption },
{ apply or.inl, assumption }
}
end
lemma or_cocontinuous_r (P : subset A)
: cocontinuous_inh (bunion P)
:= begin
unfold cocontinuous_inh, intros Ix inh f,
apply included_eq,
{ intros x H, dsimp [function.comp],
induction H with H H,
constructor, trivial, apply or.inl,
assumption, apply inh.default,
induction H, constructor, trivial,
apply or.inr, assumption },
{ intros x Hx,
induction Hx with ix _ H', induction H',
apply or.inl, assumption,
apply or.inr, constructor, trivial, assumption,
}
end
lemma iterate_mono_tt_succ {F : subset A → subset A}
(Fmono : monotone F) (n : ℕ)
: iterate F tt n.succ ≤ iterate F tt n
:= begin
dsimp [iterate], induction n; dsimp [iterate],
apply tt_top, apply Fmono, assumption,
end
lemma iterate_mono_tt_n {F : subset A → subset A}
(Fmono : monotone F) (x y : ℕ)
(H : x ≤ y)
: iterate F tt y ≤ iterate F tt x
:= begin
induction H, apply included_refl,
simp [iterate], apply included_trans,
apply Fmono, assumption,
apply iterate_mono_tt_succ,
apply Fmono,
end
lemma greatest_fixpointn_fixed
{F : subset A → subset A}
(Fcont : chain_cocont F)
: F (greatest_fixpointn F) = greatest_fixpointn F
:= begin
apply included_eq, apply greatest_fixpointn_prefixed,
apply chain_cocont_mono,
assumption,
unfold greatest_fixpointn,
rw Fcont, intros x Hx,
intros n, dsimp [function.comp],
specialize (Hx n.succ), apply Hx,
intros, apply iterate_mono_tt_n,
apply chain_cocont_mono, assumption,
assumption
end
lemma le_greatest_fixpoint {P : subset A}
{F : subset A → subset A}
(H : P ≤ F P)
: P ≤ greatest_fixpoint F
:= begin
intros x H', constructor, apply H, assumption
end
lemma least_fixpoint_le {P : subset A}
{F : subset A → subset A}
(H : F P ≤ P)
: least_fixpoint F ≤ P
:= begin
intros x H', apply H', apply H
end
lemma greatest_fixpoint_le {P : subset A}
{F : subset A → subset A} (Fmono : monotone F)
(H : ∀ Q, Q = F Q → Q ≤ P)
: greatest_fixpoint F ≤ P
:= begin
apply (H _ (greatest_fixpoint_fixed F Fmono))
end
lemma le_least_fixpoint {P : subset A}
{F : subset A → subset A} (Fmono : monotone F)
(H : ∀ Q, F Q = Q → P ≤ Q)
: P ≤ least_fixpoint F
:= begin
apply (H _ (least_fixpoint_fixed F Fmono))
end
lemma iterate_mono_ff_succ {F : subset A → subset A}
(Fmono : monotone F) (n : ℕ)
: iterate F ff n ≤ iterate F ff n.succ
:= begin
dsimp [iterate], induction n; dsimp [iterate],
apply ff_bot, apply Fmono, assumption,
end
lemma iterate_mono_ff_n {F : subset A → subset A}
(Fmono : monotone F) (x y : ℕ)
(H : x ≤ y)
: iterate F ff x ≤ iterate F ff y
:= begin
induction H, apply included_refl,
simp [iterate], apply included_trans,
apply ih_1,
apply iterate_mono_ff_succ,
apply Fmono,
end
lemma iterate_mono {F : subset A → subset A}
(Fmono : monotone F) {P Q : subset A}
(PQ : P ≤ Q) (n : ℕ)
: iterate F P n ≤ iterate F Q n
:= begin
induction n; simp [iterate],
{ assumption },
{ apply Fmono, assumption }
end
lemma iterate_mono2 {F G : subset A → subset A}
{P : subset A}
(Fmono : monotone F)
(FG : ∀ x, F x ≤ G x)
(n : ℕ)
: iterate F P n ≤ iterate G P n
:= begin
induction n; simp [iterate],
{ apply included_refl },
{ apply included_trans, apply Fmono, assumption,
apply FG }
end
lemma least_fixpointn_fixed
{F : subset A → subset A}
(Fcoc : chain_cont F)
: least_fixpointn F = F (least_fixpointn F)
:= begin
apply included_eq, apply least_fixpointn_postfixed,
apply chain_cont_mono, assumption,
unfold least_fixpointn,
rw Fcoc, intros x Hx,
induction Hx with n _ Hn,
dsimp [function.comp] at Hn,
constructor, assumption,
tactic.swap, exact n.succ,
apply Hn, intros,
apply iterate_mono_ff_n,
apply chain_cont_mono, assumption, assumption
end
/-- The fixpoint defined by greatest_fixpointn is actually a greatest fixpoint-/
lemma greatest_fixpointn_same
{F : subset A → subset A}
(Fcoc : chain_cocont F)
: greatest_fixpointn F = greatest_fixpoint F
:= begin
apply included_eq,
apply le_greatest_fixpoint,
rw (greatest_fixpointn_fixed Fcoc),
apply (included_refl (greatest_fixpointn F)),
apply (greatest_fixpoint_le _ _),
apply chain_cocont_mono, assumption,
intros Q HQ, intros x Qx,
have HQx : ∀ n, Q = iterate F Q n,
intros n, induction n; simp [iterate],
rw ← ih_1, assumption,
intros n, apply iterate_mono,
apply chain_cocont_mono, assumption,
tactic.swap, rw (HQx n) at Qx,
assumption, apply tt_top
end
/-- The fixpoint defined by least_fixpointn is actually a least fixpoint-/
lemma least_fixpointn_same
{F : subset A → subset A}
(Fchain_cont : chain_cont F)
: least_fixpointn F = least_fixpoint F
:= begin
apply included_eq, tactic.swap,
apply least_fixpoint_le,
rw ← (least_fixpointn_fixed Fchain_cont),
apply included_refl,
apply (le_least_fixpoint _ _),
apply chain_cont_mono, assumption,
intros Q HQ, intros x Qx,
have HQx : ∀ n, Q = iterate F Q n,
intros n, induction n; simp [iterate],
rw ← ih_1, symmetry, assumption,
unfold least_fixpointn at Qx,
induction Qx with n _ Hn,
rw (HQx n), apply iterate_mono,
apply chain_cont_mono, assumption,
tactic.swap, assumption, apply ff_bot,
end
lemma greatest_fixpoint_mono
{F G : subset A → subset A}
(H : ∀ P, F P ≤ G P)
: greatest_fixpoint F ≤ greatest_fixpoint G
:= begin
intros x Hx, induction Hx,
constructor, tactic.swap, apply a_1,
dsimp, apply included_trans, apply a, apply H
end
lemma greatest_fixpointn_mono
{F G : subset A → subset A}
(Fmono : monotone F)
(H : ∀ P, F P ≤ G P)
: greatest_fixpointn F ≤ greatest_fixpointn G
:= begin
unfold greatest_fixpointn,
intros x H n, specialize (H n),
revert H, apply iterate_mono2,
assumption, assumption
end
lemma and_functional_mono
{F G : subset A → subset A}
(Fmono : monotone F) (Gmono : monotone G)
: monotone (λ X, F X ∩ G X)
:= begin
unfold monotone, intros P Q PQ, apply bintersection_mono,
apply Fmono, assumption, apply Gmono, assumption
end
lemma greatest_fixpointn_and_le (F G : subset A → subset A)
(Fmono : monotone F) (Gmono : monotone G)
: greatest_fixpointn (λ X, F X ∩ G X)
≤ greatest_fixpointn F ∩ greatest_fixpointn G
:= begin
intros x H,
constructor,
{ revert H,
apply greatest_fixpointn_mono, apply and_functional_mono,
apply Fmono, apply Gmono,
intros P x H, induction H with Hl Hr, apply Hl,
},
{ revert H,
apply greatest_fixpointn_mono, apply and_functional_mono,
apply Fmono, apply Gmono,
intros P x H, induction H with Hl Hr, apply Hr,
}
end
lemma tImp_cocontinuous_l {Ix : Type}
(P : Ix → subset A) (Q : subset A)
: (union_ix P => Q) = intersection_ix (λ ix, P ix => Q)
:= begin
apply included_eq; intros x Hx,
{ intros n Pn, apply Hx,
constructor, trivial, assumption },
{ intros H, induction H, apply Hx, assumption }
end
end
end subset |
\section{Sandwich Theorem}
We can use the Sandwich Theorem to indirectly find limits by "sandwiching" the function in question between two functions we do know the limit of.
If these two sandwiching functions go to the same value in the limit, then so too must the function in question.
\begin{theorem}[The Sandwich Theorem]
If $g(x) \leq f(x) \leq h(x)$ and $\lim_{x \to c}{g(x)} = \lim_{x\to c}{h(x)} = L$, then $\lim_{x \to c}{f(x)} = L$.
\end{theorem}
\begin{example}
Evaluate the following limit
\begin{equation*}
\lim_{\theta \to 0}{\frac{\sin{\theta}}{\theta}}
\end{equation*}
\end{example}
\begin{answer}
We'll need to use some geometric ideas to solve this limit.
Consider the following on a unit circle.
\begin{figure}[H]
\label{sin_limit_proof}
\centering
\includegraphics[width = 0.5\textwidth]{./limits_continuity/sin_limit_proof.png}
\caption{Triangle with internal angle $\theta$ inside a unit circle.}
\end{figure}
We can see that the area of the circular sector lies between the areas of the two triangles with base of length 1 and heights $\sin{\theta}$ and $\tan{\theta}$.
So, we can write the following inequality.
\begin{align*}
\frac{1}{2}\sin{\theta} &\leq \frac{1}{2}\theta \leq \frac{1}{2}\frac{\sin{\theta}}{\cos{\theta}} \\
\sin{\theta} \leq \theta &\leq \frac{\sin{\theta}}{\cos{\theta}}
\end{align*}
Taking the reciprocal of each part and multiplying by $\sin{\theta}$,
\begin{equation*}
1 \geq \frac{\sin{\theta}}{\theta} \geq \cos{\theta}.
\end{equation*}
Taking the limit as $\theta$ approaches 0 of each term,
\begin{align*}
1 \geq & \lim_{\theta \to 0}{\frac{\sin{\theta}}{\theta}} \geq \lim_{\theta\to 0}{\cos{\theta}} \\
1 \geq & \lim_{\theta \to 0}{\frac{\sin{\theta}}{\theta}} \geq 1.
\end{align*}
So, by the Sandwich Theorem,
\begin{equation*}
\lim_{\theta \to 0}{\frac{\sin{\theta}}{\theta}} = 1.
\end{equation*}
\end{answer} |
module Utils.Hex
import Data.List
import Data.Primitives.Views
%default total
hexDigit : Int -> Char
hexDigit 0 = '0'
hexDigit 1 = '1'
hexDigit 2 = '2'
hexDigit 3 = '3'
hexDigit 4 = '4'
hexDigit 5 = '5'
hexDigit 6 = '6'
hexDigit 7 = '7'
hexDigit 8 = '8'
hexDigit 9 = '9'
hexDigit 10 = 'a'
hexDigit 11 = 'b'
hexDigit 12 = 'c'
hexDigit 13 = 'd'
hexDigit 14 = 'e'
hexDigit 15 = 'f'
hexDigit _ = 'X' -- TMP HACK: Ideally we'd have a bounds proof, generated below
||| Convert a positive integer into a string of (lower case) hexadecimal characters
export
asHex : Int -> String
asHex n =
if n > 0
then pack $ asHex' n []
else "0"
where
asHex' : Int -> List Char -> List Char
asHex' 0 hex = hex
asHex' n hex with (n `divides` 16)
asHex' (16 * div + rem) hex | DivBy div rem _ =
asHex' (assert_smaller n div) (hexDigit rem :: hex)
export
leftPad : Char -> Nat -> String -> String
leftPad paddingChar padToLength str =
if length str < padToLength
then pack (List.replicate (minus padToLength (length str)) paddingChar) ++ str
else str
export
fromHexDigit : Char -> Maybe Int
fromHexDigit '0' = Just 0
fromHexDigit '1' = Just 1
fromHexDigit '2' = Just 2
fromHexDigit '3' = Just 3
fromHexDigit '4' = Just 4
fromHexDigit '5' = Just 5
fromHexDigit '6' = Just 6
fromHexDigit '7' = Just 7
fromHexDigit '8' = Just 8
fromHexDigit '9' = Just 9
fromHexDigit 'a' = Just 10
fromHexDigit 'b' = Just 11
fromHexDigit 'c' = Just 12
fromHexDigit 'd' = Just 13
fromHexDigit 'e' = Just 14
fromHexDigit 'f' = Just 15
fromHexDigit _ = Nothing
export
fromHexChars : List Char -> Maybe Int
fromHexChars = fromHexChars' 1
where
fromHexChars' : Int -> List Char -> Maybe Int
fromHexChars' _ [] = Just 0
fromHexChars' m (d :: ds) = pure $ !(fromHexDigit (toLower d)) * m + !(fromHexChars' (m*16) ds)
export
fromHex : String -> Maybe Int
fromHex = fromHexChars . unpack
|
/******************************************************************************
CosmoLike Configuration Space Covariances for Projected Galaxy 2-Point Statistics
https://github.com/CosmoLike/CosmoCov
by CosmoLike developers
******************************************************************************/
#include <math.h>
#include <stdlib.h>
#include <stdio.h>
#include <assert.h>
#include <fftw3.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_integration.h>
#include <gsl/gsl_spline.h>
#include <gsl/gsl_sf_gamma.h>
#include <gsl/gsl_sf_legendre.h>
#include <gsl/gsl_sf_bessel.h>
#include <gsl/gsl_sf_trig.h>
#include <gsl/gsl_sf_erf.h>
#define NR_END 1
#define FREE_ARG char*
#define EXIT_MISSING_FILE(ein, purpose, filename) if (!ein) {fprintf(stderr, "Could not find %s file %s\n",purpose,filename);exit(1);}
static double darg __attribute__((unused)),maxarg1 __attribute__((unused)), maxarg2 __attribute__((unused));
#define FMAX(a,b) (maxarg1=(a), maxarg2=(b), (maxarg1) > (maxarg2) ? (maxarg1) : (maxarg2))
#define FMIN(a,b) (maxarg1=(a), maxarg2=(b), (maxarg1) < (maxarg2) ? (maxarg1) : (maxarg2))
//Note: SQR*SQR....gives undefined warning
static double sqrarg;
#define SQR(a) ((sqrarg=(a)) == 0.0 ? 0.0 : sqrarg*sqrarg)
static double cubearg;
#define CUBE(a) ((cubearg=(a)) == 0.0 ? 0.0 : cubearg*cubearg*cubearg)
static double pow4arg;
#define POW4(a) ((pow4arg=(a)) == 0.0 ? 0.0 : pow4arg*pow4arg*pow4arg*pow4arg)
void SVD_inversion(gsl_matrix *cov, gsl_matrix *inverseSVD,int Nmatrix);
double interpol2d(double **f, int nx, double ax, double bx, double dx, double x, int ny, double ay, double by, double dy, double y, double lower, double upper);
double interpol2d_fitslope(double **f, int nx, double ax, double bx, double dx, double x, int ny, double ay, double by, double dy, double y, double lower);
double interpol(double *f, int n, double a, double b, double dx, double x, double lower, double upper);
double interpol_fitslope(double *f, int n, double a, double b, double dx, double x, double lower);
void free_double_vector(double *v, long nl, long nh);
long *long_vector(long nl, long nh);
int *int_vector(long nl, long nh);
double *create_double_vector(long nl, long nh);
void free_double_matrix(double **m, long nrl, long nrh, long ncl, long nch);
double **create_double_matrix(long nrl, long nrh, long ncl, long nch);
int line_count(char *filename);
void error(char *s);
void cdgamma(fftw_complex x, fftw_complex *res);
double int_gsl_integrate_insane_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter);
double int_gsl_integrate_high_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter);
double int_gsl_integrate_medium_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter);
double int_gsl_integrate_low_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter);
double int_gsl_integrate_cov_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter);
typedef struct {
double pi;
double pi_sqr;
double twopi;
double ln2;
double arcmin;
double lightspeed;
}con;
con constants = {
3.14159265358979323846, //pi
9.86960440108935861883, //pisqr
6.28318530717958647693, //twopi
0.69314718, //ln2
2.90888208665721580e-4, //arcmin
299792.458 //speed of light km/s
};
typedef struct {
double low;
double medium;
double high;
double insane;
}pre;
pre precision= {
1e-2, //low
1e-3, //medium
1e-5, //high
1e-7 //insane
};
typedef struct {
double a_min;
double k_min_mpc;
double k_max_mpc;
double k_max_mpc_class;
double k_min_cH0;
double k_max_cH0;
double P_2_s_min;
double P_2_s_max;
double xi_via_hankel_theta_min;
double xi_via_hankel_theta_max;
double xi_3d_rmin;
double xi_3d_rmax;
double M_min;
double M_max;
}lim;
lim limits = {
// 0.19, //a_min (in order to compute z=4 WFIRST)
1./(1.+10.), //a_min (z=10, needed for CMB lensing)
6.667e-6, //k_min_mpc
1.e3, //k_max_mpc
50., //k_max_mpc_class
2.e-2, //k_min_cH0
3.e+6, //k_max_cH0
0.1,//P_2_s_min
1.0e5,//P_2_s_max
3.0e-7,//xi_via_hankel_theta_min
0.12, //xi_via_hankel_theta_max
1.e-5,//xi_3d_rmin
1.e+0,//xi_3d_rmax
1.e+6, //M_min
1.e+17,//M_max
};
typedef struct {
int N_a ;
int N_k_lin;
int N_k_nlin;
int N_ell;
int N_theta;
int N_thetaH;
int N_S2;
int N_DS;
int N_norm;
int N_r_3d;
int N_k_3d;
int N_a_halo;
}Ntab;
Ntab Ntable = {
100, //N_a
500, //N_k_lin
500, //N_k_nlin
200, //N_ell
200, //N_theta
2048, //N_theta for Hankel
1000, //N_S2
1000, //N_DS
50, //N_norm
50, //N_r_3d
25, //N_k_3d
20, //N_a_halo
};
struct cos{
int ORDER ;
double vt_max;
double vt_min;
double vt_bin_max;
double vt_bin_min;
int ni ;
int nj ;
};
void SVD_inversion(gsl_matrix *cov, gsl_matrix *inverseSVD,int Nmatrix)
{
int i,j;
gsl_matrix *V=gsl_matrix_calloc(Nmatrix,Nmatrix);
gsl_matrix *U=gsl_matrix_calloc(Nmatrix,Nmatrix);
gsl_vector *S=gsl_vector_calloc(Nmatrix);
gsl_vector *work=gsl_vector_calloc(Nmatrix);
gsl_matrix_memcpy(U,cov);
gsl_linalg_SV_decomp(U,V,S,work);
for (i=0;i<Nmatrix;i++){
gsl_vector *b=gsl_vector_calloc(Nmatrix);
gsl_vector *x=gsl_vector_calloc(Nmatrix);
gsl_vector_set(b,i,1.0);
gsl_linalg_SV_solve(U,V,S,b,x);
for (j=0;j<Nmatrix;j++){
gsl_matrix_set(inverseSVD,i,j,gsl_vector_get(x,j));
}
gsl_vector_free(b);
gsl_vector_free(x);
}
gsl_matrix_free(V);
gsl_matrix_free(U);
gsl_vector_free(S);
gsl_vector_free(work);
}
void invert_matrix_colesky(gsl_matrix *A)
{
gsl_linalg_cholesky_decomp (A); // A will be overwritten
gsl_linalg_cholesky_invert (A);
}
double int_gsl_integrate_insane_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter)
{
double res, err;
gsl_integration_cquad_workspace *w = gsl_integration_cquad_workspace_alloc(niter);
gsl_function F;
F.function = func;
F.params = arg;
gsl_integration_cquad(&F,a,b,0,precision.insane,w,&res,&err,0);
if(NULL!=error)
*error=err;
gsl_integration_cquad_workspace_free(w);
return res;
}
double int_gsl_integrate_high_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter)
{
double res, err;
gsl_integration_cquad_workspace *w = gsl_integration_cquad_workspace_alloc(niter);
gsl_function F;
F.function = func;
F.params = arg;
gsl_integration_cquad(&F,a,b,0,precision.high,w,&res,&err,0);
if(NULL!=error)
*error=err;
gsl_integration_cquad_workspace_free(w);
return res;
}
double int_gsl_integrate_medium_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter)
{
double res, err;
gsl_integration_cquad_workspace *w = gsl_integration_cquad_workspace_alloc(niter);
gsl_function F;
F.function = func;
F.params = arg;
gsl_integration_cquad(&F,a,b,0,precision.medium,w,&res,&err,0);
if(NULL!=error)
*error=err;
gsl_integration_cquad_workspace_free(w);
return res;
}
double int_gsl_integrate_low_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter)
{
double res, err;
gsl_integration_cquad_workspace *wcrude = gsl_integration_cquad_workspace_alloc(niter);
gsl_function F;
F.function = func;
F.params = arg;
gsl_integration_cquad(&F,a,b,0,precision.low,wcrude,&res,&err,0);
if(NULL!=error)
*error=err;
gsl_integration_cquad_workspace_free(wcrude);
return res;
}
double int_gsl_integrate_cov_precision(double (*func)(double, void*),void *arg,double a, double b, double *error, int niter)
{
double res, err;
gsl_integration_cquad_workspace *wcrude = gsl_integration_cquad_workspace_alloc(niter);
gsl_function F;
F.function = func;
F.params = arg;
gsl_integration_cquad(&F,a,b,1e-16,precision.high,wcrude,&res,&err,0);
if(NULL!=error)
*error=err;
gsl_integration_cquad_workspace_free(wcrude);
return res;
}
int line_count(char *filename)
{
FILE *n;
n = fopen(filename,"r");
if (!n){printf("line_count: %s not found!\nEXIT!\n",filename);exit(1);}
int ch =0, prev =0,number_of_lines = 0;
do
{
prev = ch;
ch = fgetc(n);
if(ch == '\n')
number_of_lines++;
} while (ch != EOF);
fclose(n);
// last line might not end with \n, but if previous character does, last line is empty
if(ch != '\n' && prev !='\n' && number_of_lines != 0)
number_of_lines++;
return number_of_lines;
}
double **create_double_matrix(long nrl, long nrh, long ncl, long nch)
/* allocate a double matrix with subscript range m[nrl..nrh][ncl..nch] */
{
long i, nrow=nrh-nrl+1,ncol=nch-ncl+1;
double **m;
/* allocate pointers to rows */
m=(double **) calloc(nrow+NR_END,sizeof(double*));
if (!m) error("allocation failure 1 in create_double_matrix()");
m += NR_END;
m -= nrl;
/* allocate rows and set pointers to them */
m[nrl]=(double *) calloc(nrow*ncol+NR_END,sizeof(double));
if (!m[nrl]) error("allocation failure 2 in create_double_matrix()");
m[nrl] += NR_END;
m[nrl] -= ncl;
for(i=nrl+1;i<=nrh;i++) m[i]=m[i-1]+ncol;
/* return pointer to array of pointers to rows */
return m;
}
void free_double_matrix(double **m, long nrl, long nrh, long ncl, long nch)
/* free a double matrix allocated by create_double_matrix() */
{
free((FREE_ARG) (m[nrl]+ncl-NR_END));
free((FREE_ARG) (m+nrl-NR_END));
}
double *create_double_vector(long nl, long nh)
/* allocate a double vector with subscript range v[nl..nh] */
{
double *v;
v=(double *) calloc(nh-nl+1+NR_END,sizeof(double));
if (!v) error("allocation failure in double vector()");
return v-nl+NR_END;
}
int *int_vector(long nl, long nh)
/* allocate a int vector with subscript range v[nl..nh] */
{
int *v;
v=(int *)calloc(nh-nl+1+NR_END, sizeof(int));
if (!v) error("allocation failure in int vector()");
return v-nl+NR_END;
}
long *long_vector(long nl, long nh)
/* allocate a int vector with subscript range v[nl..nh] */
{
long *v;
v=(long *)calloc(nh-nl+1+NR_END,sizeof(long));
if (!v) error("allocation failure in int vector()");
return v-nl+NR_END;
}
void free_double_vector(double *v, long nl, long nh)
/* free a double vector allocated with vector() */
{
free((FREE_ARG) (v+nl-NR_END));
}
void error(char *s)
{
printf("error:%s\n ",s);
exit(1);
}
/* ============================================================ *
* Interpolates f at the value x, where f is a double[n] array, *
* representing a function between a and b, stepwidth dx. *
* 'lower' and 'upper' are powers of a logarithmic power law *
* extrapolation. If no extrapolation desired, set these to 0 *
* ============================================================ */
double interpol(double *f, int n, double a, double b, double dx, double x, double lower, double upper)
{
double r;
int i;
if (x < a) {
if (lower==0.) {
//error("value too small in interpol");
return 0.0;
}
return f[0] + lower*(x - a);
}
r = (x - a)/dx;
i = (int)(floor(r));
if (i+1 >= n) {
if (upper==0.0) {
if (i+1==n) {
return f[i]; /* constant extrapolation */
} else {
//error("value too big in interpol");
return 0.0;
}
} else {
return f[n-1] + upper*(x-b); /* linear extrapolation */
}
} else {
return (r - i)*(f[i+1] - f[i]) + f[i]; /* interpolation */
}
}
double interpol_fitslope(double *f, int n, double a, double b, double dx, double x, double lower)
{
double r;
int i,fitrange;
if (x < a) {
if (lower==0.) {
//error("value too small in interpol");
return 0.0;
}
return f[0] + lower*(x - a);
}
r = (x - a)/dx;
i = (int)(floor(r));
if (i+1 >= n) {
if (n > 50){fitrange =5;}
else{fitrange = (int)floor(n/10);}
double upper = (f[n-1] - f[n-1-fitrange])/(dx*fitrange);
return f[n-1] + upper*(x-b); /* linear extrapolation */
} else {
return (r - i)*(f[i+1] - f[i]) + f[i]; /* interpolation */
}
}
/* ============================================================ *
* like interpol, but f being a 2d-function *
* 'lower' and 'upper' are the powers of a power law extra- *
* polation in the second argument *
* ============================================================ */
double interpol2d(double **f,
int nx, double ax, double bx, double dx, double x,
int ny, double ay, double by, double dy, double y,
double lower, double upper)
{
double t, dt, s, ds;
int i, j;
if (x < ax) {
return 0.;
// error("value too small in interpol2d");
}
if (x > bx) {
return 0.;//
// printf("%le %le\n",x,bx);
// error("value too big in interpol2d");
}
t = (x - ax)/dx;
i = (int)(floor(t));
dt = t - i;
if (y < ay) {
return ((1.-dt)*f[i][0] + dt*f[i+1][0]) + (y-ay)*lower;
} else if (y > by) {
return ((1.-dt)*f[i][ny-1] + dt*f[i+1][ny-1]) + (y-by)*upper;
}
s = (y - ay)/dy;
j = (int)(floor(s));
ds = s - j;
if ((i+1==nx)&&(j+1==ny)) {
//printf("%d %d\n",i+1,j+1);
return (1.-dt)*(1.-ds)*f[i][j];
}
if (i+1==nx){
//printf("%d %d\n",i+1,j+1);
return (1.-dt)*(1.-ds)*f[i][j]+ (1.-dt)*ds*f[i][j+1];
}
if (j+1==ny){
//printf("%d %d\n",i+1,j+1);
return (1.-dt)*(1.-ds)*f[i][j]+ dt*(1.-ds)*f[i+1][j];
}
return (1.-dt)*(1.-ds)*f[i][j] +(1.-dt)*ds*f[i][j+1] + dt*(1.-ds)*f[i+1][j] + dt*ds*f[i+1][j+1];
}
double interpol2d_fitslope(double **f,
int nx, double ax, double bx, double dx, double x,
int ny, double ay, double by, double dy, double y,
double lower)
{
double t, dt, s, ds, upper;
int i, j, fitrange;
if (x < ax) {
return 0.;
// error("value too small in interpol2d");
}
if (x > bx) {
return 0.;
// printf("%le %le\n",x,bx);
// error("value too big in interpol2d");
}
t = (x - ax)/dx;
i = (int)(floor(t));
dt = t - i;
if (y < ay) {
return ((1.-dt)*f[i][0] + dt*f[i+1][0]) + (y-ay)*lower;
} else if (y > by) {
if (ny > 25){fitrange =5;}
else{fitrange = (int)floor(ny/5);}
upper = ((1.-dt)*(f[i][ny-1] - f[i][ny-1-fitrange])+dt*(f[i+1][ny-1] - f[i+1][ny-1-fitrange]))/(dy*fitrange);
return ((1.-dt)*f[i][ny-1] + dt*f[i+1][ny-1]) + (y-by)*upper;
}
s = (y - ay)/dy;
j = (int)(floor(s));
ds = s - j;
if ((i+1==nx)&&(j+1==ny)) {
//printf("%d %d\n",i+1,j+1);
return (1.-dt)*(1.-ds)*f[i][j];
}
if (i+1==nx){
//printf("%d %d\n",i+1,j+1);
return (1.-dt)*(1.-ds)*f[i][j]+ (1.-dt)*ds*f[i][j+1];
}
if (j+1==ny){
//printf("%d %d\n",i+1,j+1);
return (1.-dt)*(1.-ds)*f[i][j]+ dt*(1.-ds)*f[i+1][j];
}
return (1.-dt)*(1.-ds)*f[i][j] +(1.-dt)*ds*f[i][j+1] + dt*(1.-ds)*f[i+1][j] + dt*ds*f[i+1][j+1];
}
void cdgamma(fftw_complex x, fftw_complex *res)
{
double xr, xi, wr, wi, ur, ui, vr, vi, yr, yi, t;
xr = (double) x[0];
xi = (double) x[1];
if (xr<0) {
wr = 1 - xr;
wi = -xi;
} else {
wr = xr;
wi = xi;
}
ur = wr + 6.00009857740312429;
vr = ur * (wr + 4.99999857982434025) - wi * wi;
vi = wi * (wr + 4.99999857982434025) + ur * wi;
yr = ur * 13.2280130755055088 + vr * 66.2756400966213521 +
0.293729529320536228;
yi = wi * 13.2280130755055088 + vi * 66.2756400966213521;
ur = vr * (wr + 4.00000003016801681) - vi * wi;
ui = vi * (wr + 4.00000003016801681) + vr * wi;
vr = ur * (wr + 2.99999999944915534) - ui * wi;
vi = ui * (wr + 2.99999999944915534) + ur * wi;
yr += ur * 91.1395751189899762 + vr * 47.3821439163096063;
yi += ui * 91.1395751189899762 + vi * 47.3821439163096063;
ur = vr * (wr + 2.00000000000603851) - vi * wi;
ui = vi * (wr + 2.00000000000603851) + vr * wi;
vr = ur * (wr + 0.999999999999975753) - ui * wi;
vi = ui * (wr + 0.999999999999975753) + ur * wi;
yr += ur * 10.5400280458730808 + vr;
yi += ui * 10.5400280458730808 + vi;
ur = vr * wr - vi * wi;
ui = vi * wr + vr * wi;
t = ur * ur + ui * ui;
vr = yr * ur + yi * ui + t * 0.0327673720261526849;
vi = yi * ur - yr * ui;
yr = wr + 7.31790632447016203;
ur = log(yr * yr + wi * wi) * 0.5 - 1;
ui = atan2(wi, yr);
yr = exp(ur * (wr - 0.5) - ui * wi - 3.48064577727581257) / t;
yi = ui * (wr - 0.5) + ur * wi;
ur = yr * cos(yi);
ui = yr * sin(yi);
yr = ur * vr - ui * vi;
yi = ui * vr + ur * vi;
if (xr<0) {
wr = xr * 3.14159265358979324;
wi = exp(xi * 3.14159265358979324);
vi = 1 / wi;
ur = (vi + wi) * sin(wr);
ui = (vi - wi) * cos(wr);
vr = ur * yr + ui * yi;
vi = ui * yr - ur * yi;
ur = 6.2831853071795862 / (vr * vr + vi * vi);
yr = ur * vr;
yi = ur * vi;
}
(*res)[0]=yr; (*res)[1]=yi;
}
|
#pragma once
#include "nkg_point.h"
#include "nkg_rect.h"
#include "utility/types.h"
#include <stb_image.h>
#include <gsl/gsl>
#include <memory>
#include <vector>
#include <stdlib.h>
namespace cws80 {
class im_image {
public:
im_image();
~im_image();
void load(const char *filename, uint desired_channels);
void load_from_memory(const u8 *buffer, uint length, uint desired_channels);
void load_from_memory(gsl::span<const u8> memory, uint desired_channels)
{
load_from_memory(memory.data(), memory.size(), desired_channels);
}
const u8 *pixel(uint x, uint y) const;
u8 alpha(uint x, uint y) const;
bool is_transparent_column(uint x) const;
bool is_transparent_row(uint y) const;
im_image cut(const im_recti &r) const;
std::vector<im_image> hsplit() const;
std::vector<im_image> vsplit() const;
im_recti rect() const;
im_recti crop_alpha_border() const;
const u8 *data() const { return data_.get(); }
uint width() const { return w_; }
uint height() const { return h_; }
uint channels() const { return c_; }
im_image(im_image &&other) = default;
im_image &operator=(im_image &&other) = default;
private:
uint w_{}, h_{}, n_{}, c_{};
std::unique_ptr<u8[], void (*)(void *)> data_{nullptr, &stbi_image_free};
public:
class exception : public std::runtime_error {
public:
using std::runtime_error::runtime_error;
};
};
} // namespace cws80
|
/-
Copyright (c) 2020 Microsoft Corporation. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Leonardo de Moura
-/
import Lean.Elab.Term
import Lean.Elab.BindersUtil
import Lean.Elab.PatternVar
import Lean.Elab.Quotation.Util
import Lean.Parser.Do
-- HACK: avoid code explosion until heuristics are improved
set_option compiler.reuse false
namespace Lean.Elab.Term
open Lean.Parser.Term
open Meta
private def getDoSeqElems (doSeq : Syntax) : List Syntax :=
if doSeq.getKind == ``Lean.Parser.Term.doSeqBracketed then
doSeq[1].getArgs.toList.map fun arg => arg[0]
else if doSeq.getKind == ``Lean.Parser.Term.doSeqIndent then
doSeq[0].getArgs.toList.map fun arg => arg[0]
else
[]
private def getDoSeq (doStx : Syntax) : Syntax :=
doStx[1]
@[builtinTermElab liftMethod] def elabLiftMethod : TermElab := fun stx _ =>
throwErrorAt stx "invalid use of `(<- ...)`, must be nested inside a 'do' expression"
/-- Return true if we should not lift `(<- ...)` actions nested in the syntax nodes with the given kind. -/
private def liftMethodDelimiter (k : SyntaxNodeKind) : Bool :=
k == ``Lean.Parser.Term.do ||
k == ``Lean.Parser.Term.doSeqIndent ||
k == ``Lean.Parser.Term.doSeqBracketed ||
k == ``Lean.Parser.Term.termReturn ||
k == ``Lean.Parser.Term.termUnless ||
k == ``Lean.Parser.Term.termTry ||
k == ``Lean.Parser.Term.termFor
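/- A sketch of the intended behavior: inside a `do` block, the nested action
in `f (← g x)` is lifted into `let a ← g x; f a`. The kinds above delimit
the search for `(← ...)` because each of them introduces its own `do` scope,
so actions nested under them are lifted relative to that inner scope instead. -/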
/-- Given `stx` which is a `letPatDecl`, `letEqnsDecl`, or `letIdDecl`, return true if it has binders. -/
private def letDeclArgHasBinders (letDeclArg : Syntax) : Bool :=
let k := letDeclArg.getKind
if k == ``Lean.Parser.Term.letPatDecl then
false
else if k == ``Lean.Parser.Term.letEqnsDecl then
true
else if k == ``Lean.Parser.Term.letIdDecl then
-- letIdLhs := ident >> checkWsBefore "expected space before binders" >> many (ppSpace >> (simpleBinderWithoutType <|> bracketedBinder)) >> optType
let binders := letDeclArg[1]
binders.getNumArgs > 0
else
false
/-- Return `true` if the given `letDecl` contains binders. -/
private def letDeclHasBinders (letDecl : Syntax) : Bool :=
letDeclArgHasBinders letDecl[0]
/-- Return true if we should generate an error message when lifting a method over this kind of syntax. -/
private def liftMethodForbiddenBinder (stx : Syntax) : Bool :=
let k := stx.getKind
if k == ``Lean.Parser.Term.fun || k == ``Lean.Parser.Term.matchAlts ||
k == ``Lean.Parser.Term.doLetRec || k == ``Lean.Parser.Term.letrec then
-- It is never ok to lift over this kind of binder
true
-- The following kinds of `let`-expressions require extra checks to decide whether they contain binders or not
else if k == ``Lean.Parser.Term.let then
letDeclHasBinders stx[1]
else if k == ``Lean.Parser.Term.doLet then
letDeclHasBinders stx[2]
else if k == ``Lean.Parser.Term.doLetArrow then
letDeclArgHasBinders stx[2]
else
false
private partial def hasLiftMethod : Syntax → Bool
| Syntax.node _ k args =>
if liftMethodDelimiter k then false
-- NOTE: We don't check for lifts in quotations here, which doesn't break anything but merely makes this rare case a
-- bit slower
else if k == ``Lean.Parser.Term.liftMethod then true
else args.any hasLiftMethod
| _ => false
structure ExtractMonadResult where
m : Expr
α : Expr
expectedType : Expr
private partial def extractBind (expectedType? : Option Expr) : TermElabM ExtractMonadResult := do
match expectedType? with
| none => throwError "invalid 'do' notation, expected type is not available"
| some expectedType =>
let extractStep? (type : Expr) : MetaM (Option ExtractMonadResult) := do
match type with
| Expr.app m α _ =>
try
let bindInstType ← mkAppM ``Bind #[m]
let _ ← Meta.synthInstance bindInstType
return some { m := m, α := α, expectedType := expectedType }
catch _ =>
return none
| _ =>
return none
let rec extract? (type : Expr) : MetaM (Option ExtractMonadResult) := do
match (← extractStep? type) with
| some r => return r
| none =>
let typeNew ← whnfCore type
if typeNew != type then
extract? typeNew
else
if typeNew.getAppFn.isMVar then throwError "invalid 'do' notation, expected type is not available"
match (← unfoldDefinition? typeNew) with
| some typeNew => extract? typeNew
| none => return none
match (← extract? expectedType) with
| some r => return r
| none => throwError "invalid 'do' notation, expected type is not a monad application{indentExpr expectedType}\nYou can use the `do` notation in pure code by writing `Id.run do` instead of `do`, where `Id` is the identity monad."
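/- For example, if the expected type is `IO Unit`, `extractBind` succeeds with
`m := IO` and `α := Unit`, provided an instance of `Bind IO` can be synthesized. -/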
namespace Do
/- A `doMatch` alternative. `vars` is the array of variables declared by `patterns`. -/
structure Alt (σ : Type) where
ref : Syntax
vars : Array Name
patterns : Syntax
rhs : σ
deriving Inhabited
/-
Auxiliary data structure for representing a `do` code block and for compiling "reassignments" (e.g., `x := x + 1`).
We convert `Code` into a `Syntax` term representing the:
- `do`-block, or
- the visitor argument for the `forIn` combinator.
We say the following constructors are terminals:
- `break`: for interrupting a `for x in s`
- `continue`: for interrupting the current iteration of a `for x in s`
- `return e`: for returning `e` as the result for the whole `do` computation block
- `action a`: for executing action `a` as a terminal
- `ite`: if-then-else
- `match`: pattern matching
- `jmp`: a goto to a join point
We say the terminals `break`, `continue`, `action`, and `return` are "exit points".
Note that `return e` is not equivalent to `action (pure e)`. Here is an example:
```
def f (x : Nat) : IO Unit := do
if x == 0 then
return ()
IO.println "hello"
```
Executing `#eval f 0` will not print "hello". Now, consider
```
def g (x : Nat) : IO Unit := do
if x == 0 then
pure ()
IO.println "hello"
```
The `if` statement is essentially a noop, and "hello" is printed when we execute `g 0`.
- `decl` represents all declaration-like `doElem`s (e.g., `let`, `have`, `let rec`).
The field `stx` is the actual `doElem`,
`vars` is the array of variables declared by it, and `cont` is the next instruction in the `do` code block.
`vars` is an array since we have declarations such as `let (a, b) := s`.
- `reassign` is a reassignment-like `doElem` (e.g., `x := x + 1`).
- `joinpoint` is a join point declaration: an auxiliary `let`-declaration used to represent the control-flow.
- `seq a k` executes action `a`, ignores its result, and then executes `k`.
We also store the do-elements `dbg_trace` and `assert!` as actions in a `seq`.
A code block `C` is well-formed if
- For every `jmp ref j as` in `C`, there is a `joinpoint j ps b k` and `jmp ref j as` is in `k`, and
`ps.size == as.size` -/
inductive Code where
| decl (xs : Array Name) (doElem : Syntax) (k : Code)
| reassign (xs : Array Name) (doElem : Syntax) (k : Code)
/- The Boolean value in `params` indicates whether we should use `(x : typeof! x)` when generating term Syntax or not -/
| joinpoint (name : Name) (params : Array (Name × Bool)) (body : Code) (k : Code)
| seq (action : Syntax) (k : Code)
| action (action : Syntax)
| «break» (ref : Syntax)
| «continue» (ref : Syntax)
| «return» (ref : Syntax) (val : Syntax)
/- Recall that an if-then-else may declare a variable using `optIdent` for the branches `thenBranch` and `elseBranch`. We store the variable name at `var?`. -/
| ite (ref : Syntax) (h? : Option Name) (optIdent : Syntax) (cond : Syntax) (thenBranch : Code) (elseBranch : Code)
| «match» (ref : Syntax) (gen : Syntax) (discrs : Syntax) (optType : Syntax) (alts : Array (Alt Code))
| jmp (ref : Syntax) (jpName : Name) (args : Array Syntax)
deriving Inhabited
/- A code block, and the collection of variables updated by it. -/
structure CodeBlock where
code : Code
uvars : NameSet := {} -- set of variables updated by `code`
private def nameSetToArray (s : NameSet) : Array Name :=
s.fold (fun (xs : Array Name) x => xs.push x) #[]
private def varsToMessageData (vars : Array Name) : MessageData :=
MessageData.joinSep (vars.toList.map fun n => MessageData.ofName (n.simpMacroScopes)) " "
partial def CodeBlock.toMessageData (codeBlock : CodeBlock) : MessageData :=
let us := MessageData.ofList $ (nameSetToArray codeBlock.uvars).toList.map MessageData.ofName
let rec loop : Code → MessageData
| Code.decl xs _ k => m!"let {varsToMessageData xs} := ...\n{loop k}"
| Code.reassign xs _ k => m!"{varsToMessageData xs} := ...\n{loop k}"
| Code.joinpoint n ps body k => m!"let {n.simpMacroScopes} {varsToMessageData (ps.map Prod.fst)} := {indentD (loop body)}\n{loop k}"
| Code.seq e k => m!"{e}\n{loop k}"
| Code.action e => e
| Code.ite _ _ _ c t e => m!"if {c} then {indentD (loop t)}\nelse{loop e}"
| Code.jmp _ j xs => m!"jmp {j.simpMacroScopes} {xs.toList}"
| Code.«break» _ => m!"break {us}"
| Code.«continue» _ => m!"continue {us}"
| Code.«return» _ v => m!"return {v} {us}"
| Code.«match» _ _ ds t alts =>
m!"match {ds} with"
++ alts.foldl (init := m!"") fun acc alt => acc ++ m!"\n| {alt.patterns} => {loop alt.rhs}"
loop codeBlock.code
/- Return true if the given code contains an exit point that satisfies `p` -/
partial def hasExitPointPred (c : Code) (p : Code → Bool) : Bool :=
let rec loop : Code → Bool
| Code.decl _ _ k => loop k
| Code.reassign _ _ k => loop k
| Code.joinpoint _ _ b k => loop b || loop k
| Code.seq _ k => loop k
| Code.ite _ _ _ _ t e => loop t || loop e
| Code.«match» _ _ _ _ alts => alts.any (loop ·.rhs)
| Code.jmp _ _ _ => false
| c => p c
loop c
def hasExitPoint (c : Code) : Bool :=
hasExitPointPred c fun _ => true
def hasReturn (c : Code) : Bool :=
hasExitPointPred c fun
| Code.«return» _ _ => true
| _ => false
def hasTerminalAction (c : Code) : Bool :=
hasExitPointPred c fun
| Code.«action» _ => true
| _ => false
def hasBreakContinue (c : Code) : Bool :=
hasExitPointPred c fun
| Code.«break» _ => true
| Code.«continue» _ => true
| _ => false
def hasBreakContinueReturn (c : Code) : Bool :=
hasExitPointPred c fun
| Code.«break» _ => true
| Code.«continue» _ => true
| Code.«return» _ _ => true
| _ => false
def mkAuxDeclFor {m} [Monad m] [MonadQuotation m] (e : Syntax) (mkCont : Syntax → m Code) : m Code := withRef e <| withFreshMacroScope do
let y ← `(y)
let yName := y.getId
let doElem ← `(doElem| let y ← $e:term)
-- Add elaboration hint for producing sane error message
let y ← `(ensure_expected_type% "type mismatch, result value" $y)
let k ← mkCont y
pure $ Code.decl #[yName] doElem k
/- Convert `action _ e` instructions in `c` into `let y ← e; jmp _ jp (xs y)`. -/
partial def convertTerminalActionIntoJmp (code : Code) (jp : Name) (xs : Array Name) : MacroM Code :=
let rec loop : Code → MacroM Code
| Code.decl xs stx k => return Code.decl xs stx (← loop k)
| Code.reassign xs stx k => return Code.reassign xs stx (← loop k)
| Code.joinpoint n ps b k => return Code.joinpoint n ps (← loop b) (← loop k)
| Code.seq e k => return Code.seq e (← loop k)
| Code.ite ref x? h c t e => return Code.ite ref x? h c (← loop t) (← loop e)
| Code.«match» ref g ds t alts => return Code.«match» ref g ds t (← alts.mapM fun alt => do pure { alt with rhs := (← loop alt.rhs) })
| Code.action e => mkAuxDeclFor e fun y =>
let ref := e
-- We jump to `jp` with xs **and** y
let jmpArgs := xs.map $ mkIdentFrom ref
let jmpArgs := jmpArgs.push y
return Code.jmp ref jp jmpArgs
| c => return c
loop code
structure JPDecl where
name : Name
params : Array (Name × Bool)
body : Code
def attachJP (jpDecl : JPDecl) (k : Code) : Code :=
Code.joinpoint jpDecl.name jpDecl.params jpDecl.body k
def attachJPs (jpDecls : Array JPDecl) (k : Code) : Code :=
jpDecls.foldr attachJP k
def mkFreshJP (ps : Array (Name × Bool)) (body : Code) : TermElabM JPDecl := do
let ps ←
if ps.isEmpty then
let y ← mkFreshUserName `y
pure #[(y, false)]
else
pure ps
-- Remark: the compiler frontend implemented in C++ currently detects joinpoints created by
-- the "do" notation by testing the name. See hack at method `visit_let` at `lcnf.cpp`
-- We will remove this hack when we re-implement the compiler frontend in Lean.
let name ← mkFreshUserName `_do_jp
pure { name := name, params := ps, body := body }
def mkFreshJP' (xs : Array Name) (body : Code) : TermElabM JPDecl :=
mkFreshJP (xs.map fun x => (x, true)) body
def addFreshJP (ps : Array (Name × Bool)) (body : Code) : StateRefT (Array JPDecl) TermElabM Name := do
let jp ← mkFreshJP ps body
modify fun (jps : Array JPDecl) => jps.push jp
pure jp.name
def insertVars (rs : NameSet) (xs : Array Name) : NameSet :=
xs.foldl (·.insert ·) rs
def eraseVars (rs : NameSet) (xs : Array Name) : NameSet :=
xs.foldl (·.erase ·) rs
def eraseOptVar (rs : NameSet) (x? : Option Name) : NameSet :=
match x? with
| none => rs
| some x => rs.insert x
/- Create a new joinpoint for `c`, and jump to it with the variables `rs` -/
def mkSimpleJmp (ref : Syntax) (rs : NameSet) (c : Code) : StateRefT (Array JPDecl) TermElabM Code := do
let xs := nameSetToArray rs
let jp ← addFreshJP (xs.map fun x => (x, true)) c
if xs.isEmpty then
let unit ← ``(Unit.unit)
return Code.jmp ref jp #[unit]
else
return Code.jmp ref jp (xs.map $ mkIdentFrom ref)
/- Create a new joinpoint that takes `rs` and `val` as arguments. `val` must be syntax representing a pure value.
The body of the joinpoint is created using `mkJPBody yFresh`, where `yFresh`
is a fresh variable created by this method. -/
def mkJmp (ref : Syntax) (rs : NameSet) (val : Syntax) (mkJPBody : Syntax → MacroM Code) : StateRefT (Array JPDecl) TermElabM Code := do
let xs := nameSetToArray rs
let args := xs.map $ mkIdentFrom ref
let args := args.push val
let yFresh ← mkFreshUserName `y
let ps := xs.map fun x => (x, true)
let ps := ps.push (yFresh, false)
let jpBody ← liftMacroM $ mkJPBody (mkIdentFrom ref yFresh)
let jp ← addFreshJP ps jpBody
pure $ Code.jmp ref jp args
/- `pullExitPointsAux rs c` is an auxiliary method for `pullExitPoints`; `rs` is the set of updated variables in the current path. -/
partial def pullExitPointsAux : NameSet → Code → StateRefT (Array JPDecl) TermElabM Code
| rs, Code.decl xs stx k => return Code.decl xs stx (← pullExitPointsAux (eraseVars rs xs) k)
| rs, Code.reassign xs stx k => return Code.reassign xs stx (← pullExitPointsAux (insertVars rs xs) k)
| rs, Code.joinpoint j ps b k => return Code.joinpoint j ps (← pullExitPointsAux rs b) (← pullExitPointsAux rs k)
| rs, Code.seq e k => return Code.seq e (← pullExitPointsAux rs k)
| rs, Code.ite ref x? o c t e => return Code.ite ref x? o c (← pullExitPointsAux (eraseOptVar rs x?) t) (← pullExitPointsAux (eraseOptVar rs x?) e)
| rs, Code.«match» ref g ds t alts => return Code.«match» ref g ds t (← alts.mapM fun alt => do pure { alt with rhs := (← pullExitPointsAux (eraseVars rs alt.vars) alt.rhs) })
| rs, c@(Code.jmp _ _ _) => return c
| rs, Code.«break» ref => mkSimpleJmp ref rs (Code.«break» ref)
| rs, Code.«continue» ref => mkSimpleJmp ref rs (Code.«continue» ref)
| rs, Code.«return» ref val => mkJmp ref rs val (fun y => pure $ Code.«return» ref y)
| rs, Code.action e =>
-- We use `mkAuxDeclFor` because `e` is not pure.
mkAuxDeclFor e fun y =>
let ref := e
mkJmp ref rs y (fun yFresh => do pure $ Code.action (← ``(Pure.pure $yFresh)))
/-
Auxiliary operation for adding new variables to the collection of updated variables in a CodeBlock.
When a new variable is not already in the collection, but is shadowed by some declaration in `c`,
we create auxiliary join points to make sure we preserve the semantics of the code block.
Example: suppose we have the code block `print x; let x := 10; return x`. And we want to extend it
with the reassignment `x := x + 1`. We first use `pullExitPoints` to create
```
let jp (x!1) := return x!1;
print x;
let x := 10;
jmp jp x
```
and then we add the reassignment
```
x := x + 1
let jp (x!1) := return x!1;
print x;
let x := 10;
jmp jp x
```
Note that we created a fresh variable `x!1` to avoid accidental name capture.
As another example, consider
```
print x;
let x := 10
y := y + 1;
return x;
```
We transform it into
```
let jp (y x!1) := return x!1;
print x;
let x := 10
y := y + 1;
jmp jp y x
```
and then we add the reassignment as in the previous example.
We need to include `y` in the jump, because each exit point implicitly returns the set of
updated variables.
We implement the method as follows. Let `us` be `c.uvars`, then
1- for each `return _ y` in `c`, we create a join point
`let j (us y!1) := return y!1`
and replace the `return _ y` with `jmp us y`
2- for each `break`, we create a join point
`let j (us) := break`
and replace the `break` with `jmp us`.
3- Same as 2 for `continue`.
-/
def pullExitPoints (c : Code) : TermElabM Code := do
if hasExitPoint c then
let (c, jpDecls) ← (pullExitPointsAux {} c).run #[]
pure $ attachJPs jpDecls c
else
pure c
partial def extendUpdatedVarsAux (c : Code) (ws : NameSet) : TermElabM Code :=
let rec update : Code → TermElabM Code
| Code.joinpoint j ps b k => return Code.joinpoint j ps (← update b) (← update k)
| Code.seq e k => return Code.seq e (← update k)
| c@(Code.«match» ref g ds t alts) => do
if alts.any fun alt => alt.vars.any fun x => ws.contains x then
-- If a pattern variable is shadowing a variable in ws, we `pullExitPoints`
pullExitPoints c
else
return Code.«match» ref g ds t (← alts.mapM fun alt => do pure { alt with rhs := (← update alt.rhs) })
| Code.ite ref none o c t e => return Code.ite ref none o c (← update t) (← update e)
| c@(Code.ite ref (some h) o cond t e) => do
if ws.contains h then
-- if the `h` at `if h:c then t else e` shadows a variable in `ws`, we `pullExitPoints`
pullExitPoints c
else
return Code.ite ref (some h) o cond (← update t) (← update e)
| Code.reassign xs stx k => return Code.reassign xs stx (← update k)
| c@(Code.decl xs stx k) => do
if xs.any fun x => ws.contains x then
-- One of the declared variables is shadowing a variable in `ws`
pullExitPoints c
else
return Code.decl xs stx (← update k)
| c => return c
update c
/-
Extend the set of updated variables. It assumes `ws` is a superset of `c.uvars`.
We **cannot** simply update the field `c.uvars`, because `c` may have shadowed some variable in `ws`.
See discussion at `pullExitPoints`.
-/
partial def extendUpdatedVars (c : CodeBlock) (ws : NameSet) : TermElabM CodeBlock := do
if ws.any fun x => !c.uvars.contains x then
-- `ws` contains a variable that is not in `c.uvars`, but in `c.dvars` (i.e., it has been shadowed)
pure { code := (← extendUpdatedVarsAux c.code ws), uvars := ws }
else
pure { c with uvars := ws }
private def union (s₁ s₂ : NameSet) : NameSet :=
s₁.fold (·.insert ·) s₂
/-
Given two code blocks `c₁` and `c₂`, make sure they have the same set of updated variables.
Let `ws` be the union of the updated variables in `c₁` and `c₂`.
We use `extendUpdatedVars c₁ ws` and `extendUpdatedVars c₂ ws`
-/
def homogenize (c₁ c₂ : CodeBlock) : TermElabM (CodeBlock × CodeBlock) := do
let ws := union c₁.uvars c₂.uvars
let c₁ ← extendUpdatedVars c₁ ws
let c₂ ← extendUpdatedVars c₂ ws
pure (c₁, c₂)
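/- For example, if `c₁` updates `{x}` and `c₂` updates `{y}`, `homogenize`
extends both blocks so that each updates `{x, y}`. This is required before
combining them in an `ite` or `match`, where all branches must produce the
same tuple of updated variables. -/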
/-
Extending code blocks with variable declarations: `let x : t := v` and `let x : t ← v`.
We remove `x` from the collection of updated variables.
Remark: `stx` is the syntax for the declaration (e.g., `letDecl`), and `xs` are the variables
declared by it. It is an array because we have let-declarations that declare multiple variables.
Example: `let (x, y) := t`
-/
def mkVarDeclCore (xs : Array Name) (stx : Syntax) (c : CodeBlock) : CodeBlock := {
code := Code.decl xs stx c.code,
uvars := eraseVars c.uvars xs
}
/-
Extending code blocks with reassignments: `x : t := v` and `x : t ← v`.
Remark: `stx` is the syntax for the declaration (e.g., `letDecl`), and `xs` are the variables
declared by it. It is an array because we have let-declarations that declare multiple variables.
Example: `(x, y) ← t`
-/
def mkReassignCore (xs : Array Name) (stx : Syntax) (c : CodeBlock) : TermElabM CodeBlock := do
let us := c.uvars
let ws := insertVars us xs
-- If `xs` contains a new updated variable, then we must use `extendUpdatedVars`.
-- See discussion at `pullExitPoints`
let code ← if xs.any fun x => !us.contains x then extendUpdatedVarsAux c.code ws else pure c.code
pure { code := Code.reassign xs stx code, uvars := ws }
def mkSeq (action : Syntax) (c : CodeBlock) : CodeBlock :=
{ c with code := Code.seq action c.code }
def mkTerminalAction (action : Syntax) : CodeBlock :=
{ code := Code.action action }
def mkReturn (ref : Syntax) (val : Syntax) : CodeBlock :=
{ code := Code.«return» ref val }
def mkBreak (ref : Syntax) : CodeBlock :=
{ code := Code.«break» ref }
def mkContinue (ref : Syntax) : CodeBlock :=
{ code := Code.«continue» ref }
def mkIte (ref : Syntax) (optIdent : Syntax) (cond : Syntax) (thenBranch : CodeBlock) (elseBranch : CodeBlock) : TermElabM CodeBlock := do
let x? := if optIdent.isNone then none else some optIdent[0].getId
let (thenBranch, elseBranch) ← homogenize thenBranch elseBranch
pure {
code := Code.ite ref x? optIdent cond thenBranch.code elseBranch.code,
uvars := thenBranch.uvars,
}
private def mkUnit : MacroM Syntax :=
``((⟨⟩ : PUnit))
private def mkPureUnit : MacroM Syntax :=
``(pure PUnit.unit)
def mkPureUnitAction : MacroM CodeBlock := do
return mkTerminalAction (← mkPureUnit)
def mkUnless (cond : Syntax) (c : CodeBlock) : MacroM CodeBlock := do
let thenBranch ← mkPureUnitAction
pure { c with code := Code.ite (← getRef) none mkNullNode cond thenBranch.code c.code }
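/- That is, `unless c do k` is compiled as `if c then pure PUnit.unit else k`,
reusing the `ite` constructor with no hypothesis name and an empty `optIdent`. -/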
def mkMatch (ref : Syntax) (genParam : Syntax) (discrs : Syntax) (optType : Syntax) (alts : Array (Alt CodeBlock)) : TermElabM CodeBlock := do
-- nary version of homogenize
let ws := alts.foldl (union · ·.rhs.uvars) {}
let alts ← alts.mapM fun alt => do
let rhs ← extendUpdatedVars alt.rhs ws
pure { ref := alt.ref, vars := alt.vars, patterns := alt.patterns, rhs := rhs.code : Alt Code }
pure { code := Code.«match» ref genParam discrs optType alts, uvars := ws }
/- Return a code block that executes `terminal` and then `k` with the value produced by `terminal`.
This method assumes `terminal` is a terminal -/
def concat (terminal : CodeBlock) (kRef : Syntax) (y? : Option Name) (k : CodeBlock) : TermElabM CodeBlock := do
unless hasTerminalAction terminal.code do
throwErrorAt kRef "'do' element is unreachable"
let (terminal, k) ← homogenize terminal k
let xs := nameSetToArray k.uvars
let y ← match y? with | some y => pure y | none => mkFreshUserName `y
let ps := xs.map fun x => (x, true)
let ps := ps.push (y, false)
let jpDecl ← mkFreshJP ps k.code
let jp := jpDecl.name
let terminal ← liftMacroM $ convertTerminalActionIntoJmp terminal.code jp xs
pure { code := attachJP jpDecl terminal, uvars := k.uvars }
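/- Roughly, `concat` turns the continuation `k` into a join point `jp` taking
the updated variables `us` plus the value `y`, and rewrites every terminal
`action e` in `terminal` into `let y ← e; jmp jp us y` via
`convertTerminalActionIntoJmp`. -/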
def getLetIdDeclVar (letIdDecl : Syntax) : Name :=
letIdDecl[0].getId
-- support both regular and syntax match
def getPatternVarsEx (pattern : Syntax) : TermElabM (Array Name) :=
getPatternVarNames <$> getPatternVars pattern <|>
Array.map Syntax.getId <$> Quotation.getPatternVars pattern
def getPatternsVarsEx (patterns : Array Syntax) : TermElabM (Array Name) :=
getPatternVarNames <$> getPatternsVars patterns <|>
Array.map Syntax.getId <$> Quotation.getPatternsVars patterns
def getLetPatDeclVars (letPatDecl : Syntax) : TermElabM (Array Name) := do
let pattern := letPatDecl[0]
getPatternVarsEx pattern
def getLetEqnsDeclVar (letEqnsDecl : Syntax) : Name :=
letEqnsDecl[0].getId
def getLetDeclVars (letDecl : Syntax) : TermElabM (Array Name) := do
let arg := letDecl[0]
if arg.getKind == ``Lean.Parser.Term.letIdDecl then
pure #[getLetIdDeclVar arg]
else if arg.getKind == ``Lean.Parser.Term.letPatDecl then
getLetPatDeclVars arg
else if arg.getKind == ``Lean.Parser.Term.letEqnsDecl then
pure #[getLetEqnsDeclVar arg]
else
throwError "unexpected kind of let declaration"
def getDoLetVars (doLet : Syntax) : TermElabM (Array Name) :=
-- leading_parser "let " >> optional "mut " >> letDecl
getLetDeclVars doLet[2]
def getHaveIdLhsVar (optIdent : Syntax) : Name :=
if optIdent.isNone then
`this
else
optIdent[0].getId
def getDoHaveVars (doHave : Syntax) : TermElabM (Array Name) :=
-- doHave := leading_parser "have " >> Term.haveDecl
-- haveDecl := leading_parser haveIdDecl <|> letPatDecl <|> haveEqnsDecl
let arg := doHave[1][0]
if arg.getKind == ``Lean.Parser.Term.haveIdDecl then
-- haveIdDecl := leading_parser atomic (haveIdLhs >> " := ") >> termParser
-- haveIdLhs := optional (ident >> many (ppSpace >> (simpleBinderWithoutType <|> bracketedBinder))) >> optType
pure #[getHaveIdLhsVar arg[0]]
else if arg.getKind == ``Lean.Parser.Term.letPatDecl then
getLetPatDeclVars arg
else if arg.getKind == ``Lean.Parser.Term.haveEqnsDecl then
-- haveEqnsDecl := leading_parser haveIdLhs >> matchAlts
pure #[getHaveIdLhsVar arg[0]]
else
throwError "unexpected kind of have declaration"
def getDoLetRecVars (doLetRec : Syntax) : TermElabM (Array Name) := do
-- letRecDecls is an array of `(group (optional attributes >> letDecl))`
let letRecDecls := doLetRec[1][0].getSepArgs
let letDecls := letRecDecls.map fun p => p[2]
let mut allVars := #[]
for letDecl in letDecls do
let vars ← getLetDeclVars letDecl
allVars := allVars ++ vars
pure allVars
-- ident >> optType >> leftArrow >> termParser
def getDoIdDeclVar (doIdDecl : Syntax) : Name :=
doIdDecl[0].getId
-- termParser >> leftArrow >> termParser >> optional (" | " >> termParser)
def getDoPatDeclVars (doPatDecl : Syntax) : TermElabM (Array Name) := do
let pattern := doPatDecl[0]
getPatternVarsEx pattern
-- leading_parser "let " >> optional "mut " >> (doIdDecl <|> doPatDecl)
def getDoLetArrowVars (doLetArrow : Syntax) : TermElabM (Array Name) := do
let decl := doLetArrow[2]
if decl.getKind == ``Lean.Parser.Term.doIdDecl then
pure #[getDoIdDeclVar decl]
else if decl.getKind == ``Lean.Parser.Term.doPatDecl then
getDoPatDeclVars decl
else
throwError "unexpected kind of 'do' declaration"
def getDoReassignVars (doReassign : Syntax) : TermElabM (Array Name) := do
let arg := doReassign[0]
if arg.getKind == ``Lean.Parser.Term.letIdDecl then
pure #[getLetIdDeclVar arg]
else if arg.getKind == ``Lean.Parser.Term.letPatDecl then
getLetPatDeclVars arg
else
throwError "unexpected kind of reassignment"
def mkDoSeq (doElems : Array Syntax) : Syntax :=
mkNode `Lean.Parser.Term.doSeqIndent #[mkNullNode $ doElems.map fun doElem => mkNullNode #[doElem, mkNullNode]]
def mkSingletonDoSeq (doElem : Syntax) : Syntax :=
mkDoSeq #[doElem]
/-
If the given syntax is a `doIf`, return an equivalent `doIf` that has an `else` but no `else if`s or `if let`s. -/
private def expandDoIf? (stx : Syntax) : MacroM (Option Syntax) := match stx with
| `(doElem|if $p:doIfProp then $t else $e) => pure none
| `(doElem|if%$i $cond:doIfCond then $t $[else if%$is $conds:doIfCond then $ts]* $[else $e?]?) => withRef stx do
let mut e := e?.getD (← `(doSeq|pure PUnit.unit))
let mut eIsSeq := true
for (i, cond, t) in Array.zip (is.reverse.push i) (Array.zip (conds.reverse.push cond) (ts.reverse.push t)) do
e ← if eIsSeq then pure e else `(doSeq|$e:doElem)
e ← withRef cond <| match cond with
| `(doIfCond|let $pat := $d) => `(doElem| match%$i $d:term with | $pat:term => $t | _ => $e)
| `(doIfCond|let $pat ← $d) => `(doElem| match%$i ← $d with | $pat:term => $t | _ => $e)
| `(doIfCond|$cond:doIfProp) => `(doElem| if%$i $cond:doIfProp then $t else $e)
| _ => `(doElem| if%$i $(Syntax.missing) then $t else $e)
eIsSeq := false
return some e
| _ => pure none
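/- For example (sketch):
`if c then t else if let p := d then t' else e`
is rewritten into
`if c then t else (match d with | p => t' | _ => e)`,
so later stages only need to handle plain two-branch `if`s. -/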
structure DoIfView where
ref : Syntax
optIdent : Syntax
cond : Syntax
thenBranch : Syntax
elseBranch : Syntax
/- This method assumes `expandDoIf?` is not applicable. -/
private def mkDoIfView (doIf : Syntax) : MacroM DoIfView := do
pure {
ref := doIf,
optIdent := doIf[1][0],
cond := doIf[1][1],
thenBranch := doIf[3],
elseBranch := doIf[5][1]
}
/-
We use `MProd` instead of `Prod` to group values when expanding the
`do` notation. `MProd` is a universe monomorphic product.
The motivation is to generate simpler universe constraints in code
that was not written by the user.
Note that we are not restricting the macro power since the
`Bind.bind` combinator already forces values computed by monadic
actions to be in the same universe.
-/
private def mkTuple (elems : Array Syntax) : MacroM Syntax := do
if elems.size == 0 then
mkUnit
else if elems.size == 1 then
pure elems[0]
else
(elems.extract 0 (elems.size - 1)).foldrM
(fun elem tuple => ``(MProd.mk $elem $tuple))
(elems.back)
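/- For example: `mkTuple #[]` produces the unit value, `mkTuple #[a]` produces
`a`, and `mkTuple #[a, b, c]` produces `MProd.mk a (MProd.mk b c)`. -/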
/- Return `some action` if `doElem` is a `doExpr <action>`-/
def isDoExpr? (doElem : Syntax) : Option Syntax :=
if doElem.getKind == ``Lean.Parser.Term.doExpr then
some doElem[0]
else
none
/--
Given `uvars := #[a_1, ..., a_n, a_{n+1}]` construct term
```
let a_1 := x.1
let x := x.2
let a_2 := x.1
let x := x.2
...
let a_n := x.1
let a_{n+1} := x.2
body
```
Special cases
- `uvars := #[]` => `body`
- `uvars := #[a]` => `let a := x; body`
We use this method when expanding the `for-in` notation.
-/
private def destructTuple (uvars : Array Name) (x : Syntax) (body : Syntax) : MacroM Syntax := do
if uvars.size == 0 then
return body
else if uvars.size == 1 then
`(let $(← mkIdentFromRef uvars[0]):ident := $x; $body)
else
destruct uvars.toList x body
where
destruct (as : List Name) (x : Syntax) (body : Syntax) : MacroM Syntax := do
match as with
| [a, b] => `(let $(← mkIdentFromRef a):ident := $x.1; let $(← mkIdentFromRef b):ident := $x.2; $body)
| a :: as => withFreshMacroScope do
let rest ← destruct as (← `(x)) body
`(let $(← mkIdentFromRef a):ident := $x.1; let x := $x.2; $rest)
| _ => unreachable!
/-
The procedure `ToTerm.run` converts a `CodeBlock` into a `Syntax` term.
We use this method to convert
1- The `CodeBlock` for a root `do ...` term into a `Syntax` term. This kind of
`CodeBlock` never contains `break` nor `continue`. Moreover, the collection
of updated variables is not packed into the result.
Thus, we have two kinds of exit points
- `Code.action e` which is converted into `e`
- `Code.return _ e` which is converted into `pure e`
We use `Kind.regular` for this case.
2- The `CodeBlock` for `b` at `for x in xs do b`. In this case, we need to generate
a `Syntax` term representing a function for the `xs.forIn` combinator.
a) If `b` contains a `Code.return _ a` exit point, the generated `Syntax` term
has type `m (ForInStep (Option α × σ))`, where `a : α` and `σ` is the type
of the tuple of variables reassigned by `b`.
We use `Kind.forInWithReturn` for this case.
b) If `b` does not contain a `Code.return _ a` exit point, then the generated
`Syntax` term has type `m (ForInStep σ)`.
We use `Kind.forIn` for this case.
3- The `CodeBlock` `c` for a `do` sequence nested in a monadic combinator (e.g., `MonadExcept.tryCatch`).
The generated `Syntax` term for `c` must inform whether `c` "exited" using `Code.action`, `Code.return`,
`Code.break` or `Code.continue`. We use the auxiliary types `DoResult`s for storing this information.
For example, the auxiliary type `DoResultPBC α σ` is used for a code block that exits with `Code.action`,
**and** `Code.break`/`Code.continue`, `α` is the type of values produced by the exit `action`, and
`σ` is the type of the tuple of reassigned variables.
The type `DoResult α β σ` is used for code blocks that exit with
`Code.action`, `Code.return`, **and** `Code.break`/`Code.continue`, `β` is the type of the returned values.
We don't use `DoResult α β σ` for all cases because:
a) The elaborator would not be able to infer all type parameters without extra annotations. For example,
if the code block does not contain `Code.return _ _`, the elaborator will not be able to infer `β`.
b) We need to pattern match on the result produced by the combinator (e.g., `MonadExcept.tryCatch`),
but we don't want to consider "unreachable" cases.
We do not distinguish between cases that contain `break`, but not `continue`, and vice versa.
When listing all cases, we use `a` to indicate the code block contains `Code.action _`, `r` for `Code.return _ _`,
and `b/c` for a code block that contains `Code.break _` or `Code.continue _`.
- `a`: `Kind.regular`, type `m (α × σ)`
- `r`: `Kind.regular`, type `m (α × σ)`
Note that the code that pattern matches on the result will behave differently in this case.
It produces `return a` for this case, and `pure a` for the previous one.
- `b/c`: `Kind.nestedBC`, type `m (DoResultBC σ)`
- `a` and `r`: `Kind.nestedPR`, type `m (DoResultPR α β σ)`
- `a` and `bc`: `Kind.nestedSBC`, type `m (DoResultSBC α σ)`
- `r` and `bc`: `Kind.nestedSBC`, type `m (DoResultSBC α σ)`
Again the code that pattern matches on the result will behave differently in this case and
the previous one. It produces `return a` for the constructor `DoResultSBC.pureReturn a u` for
this case, and `pure a` for the previous case.
- `a`, `r`, `b/c`: `Kind.nestedPRBC`, type `m (DoResultPRBC α β σ)`
Here is the recipe for adding new combinators with nested `do`s.
Example: suppose we want to support `repeat doSeq`. Assuming we have `repeat : m α → m α`
1- Convert `doSeq` into `codeBlock : CodeBlock`
2- Create term `term` using `mkNestedTerm code m uvars a r bc` where
`code` is `codeBlock.code`, `uvars` is an array containing `codeBlock.uvars`,
`m` is a `Syntax` representing the Monad, and
`a` is true if `code` contains `Code.action _`,
`r` is true if `code` contains `Code.return _ _`,
`bc` is true if `code` contains `Code.break _` or `Code.continue _`.
Remark: for combinators such as `repeat` that take a single `doSeq`, all
arguments, but `m`, are extracted from `codeBlock`.
3- Create the term `repeat $term`
4- Convert it into a `doSeq` using `matchNestedTermResult ref (repeat $term) uvars a r bc`
-/
namespace ToTerm
inductive Kind where
| regular
| forIn
| forInWithReturn
| nestedBC
| nestedPR
| nestedSBC
| nestedPRBC
instance : Inhabited Kind := ⟨Kind.regular⟩
def Kind.isRegular : Kind → Bool
| Kind.regular => true
| _ => false
structure Context where
m : Syntax -- Syntax to reference the monad associated with the do notation.
uvars : Array Name
kind : Kind
abbrev M := ReaderT Context MacroM
def mkUVarTuple : M Syntax := do
let ctx ← read
let uvarIdents ← ctx.uvars.mapM mkIdentFromRef
mkTuple uvarIdents
def returnToTerm (val : Syntax) : M Syntax := do
let ctx ← read
let u ← mkUVarTuple
match ctx.kind with
| Kind.regular => if ctx.uvars.isEmpty then ``(Pure.pure $val) else ``(Pure.pure (MProd.mk $val $u))
| Kind.forIn => ``(Pure.pure (ForInStep.done $u))
| Kind.forInWithReturn => ``(Pure.pure (ForInStep.done (MProd.mk (some $val) $u)))
| Kind.nestedBC => unreachable!
| Kind.nestedPR => ``(Pure.pure (DoResultPR.«return» $val $u))
| Kind.nestedSBC => ``(Pure.pure (DoResultSBC.«pureReturn» $val $u))
| Kind.nestedPRBC => ``(Pure.pure (DoResultPRBC.«return» $val $u))
def continueToTerm : M Syntax := do
let ctx ← read
let u ← mkUVarTuple
match ctx.kind with
| Kind.regular => unreachable!
| Kind.forIn => ``(Pure.pure (ForInStep.yield $u))
| Kind.forInWithReturn => ``(Pure.pure (ForInStep.yield (MProd.mk none $u)))
| Kind.nestedBC => ``(Pure.pure (DoResultBC.«continue» $u))
| Kind.nestedPR => unreachable!
| Kind.nestedSBC => ``(Pure.pure (DoResultSBC.«continue» $u))
| Kind.nestedPRBC => ``(Pure.pure (DoResultPRBC.«continue» $u))
def breakToTerm : M Syntax := do
let ctx ← read
let u ← mkUVarTuple
match ctx.kind with
| Kind.regular => unreachable!
| Kind.forIn => ``(Pure.pure (ForInStep.done $u))
| Kind.forInWithReturn => ``(Pure.pure (ForInStep.done (MProd.mk none $u)))
| Kind.nestedBC => ``(Pure.pure (DoResultBC.«break» $u))
| Kind.nestedPR => unreachable!
| Kind.nestedSBC => ``(Pure.pure (DoResultSBC.«break» $u))
| Kind.nestedPRBC => ``(Pure.pure (DoResultPRBC.«break» $u))
def actionTerminalToTerm (action : Syntax) : M Syntax := withRef action <| withFreshMacroScope do
let ctx ← read
let u ← mkUVarTuple
match ctx.kind with
| Kind.regular => if ctx.uvars.isEmpty then pure action else ``(Bind.bind $action fun y => Pure.pure (MProd.mk y $u))
| Kind.forIn => ``(Bind.bind $action fun (_ : PUnit) => Pure.pure (ForInStep.yield $u))
| Kind.forInWithReturn => ``(Bind.bind $action fun (_ : PUnit) => Pure.pure (ForInStep.yield (MProd.mk none $u)))
| Kind.nestedBC => unreachable!
| Kind.nestedPR => ``(Bind.bind $action fun y => (Pure.pure (DoResultPR.«pure» y $u)))
| Kind.nestedSBC => ``(Bind.bind $action fun y => (Pure.pure (DoResultSBC.«pureReturn» y $u)))
| Kind.nestedPRBC => ``(Bind.bind $action fun y => (Pure.pure (DoResultPRBC.«pure» y $u)))
def seqToTerm (action : Syntax) (k : Syntax) : M Syntax := withRef action <| withFreshMacroScope do
if action.getKind == ``Lean.Parser.Term.doDbgTrace then
let msg := action[1]
`(dbg_trace $msg; $k)
else if action.getKind == ``Lean.Parser.Term.doAssert then
let cond := action[1]
`(assert! $cond; $k)
else
let action ← withRef action ``(($action : $((←read).m) PUnit))
``(Bind.bind $action (fun (_ : PUnit) => $k))
def declToTerm (decl : Syntax) (k : Syntax) : M Syntax := withRef decl <| withFreshMacroScope do
let kind := decl.getKind
if kind == ``Lean.Parser.Term.doLet then
let letDecl := decl[2]
`(let $letDecl:letDecl; $k)
else if kind == ``Lean.Parser.Term.doLetRec then
let letRecToken := decl[0]
let letRecDecls := decl[1]
pure $ mkNode ``Lean.Parser.Term.letrec #[letRecToken, letRecDecls, mkNullNode, k]
else if kind == ``Lean.Parser.Term.doLetArrow then
let arg := decl[2]
let ref := arg
if arg.getKind == ``Lean.Parser.Term.doIdDecl then
let id := arg[0]
let type := expandOptType id arg[1]
let doElem := arg[3]
-- `doElem` must be a `doExpr action`. See `doLetArrowToCode`
match isDoExpr? doElem with
| some action =>
let action ← withRef action `(($action : $((← read).m) $type))
``(Bind.bind $action (fun ($id:ident : $type) => $k))
| none => Macro.throwErrorAt decl "unexpected kind of 'do' declaration"
else
Macro.throwErrorAt decl "unexpected kind of 'do' declaration"
else if kind == ``Lean.Parser.Term.doHave then
-- The `have` term is of the form `"have " >> haveDecl >> optSemicolon termParser`
let args := decl.getArgs
let args := args ++ #[mkNullNode /- optional ';' -/, k]
pure $ mkNode `Lean.Parser.Term.«have» args
else
Macro.throwErrorAt decl "unexpected kind of 'do' declaration"
def reassignToTerm (reassign : Syntax) (k : Syntax) : MacroM Syntax := withRef reassign <| withFreshMacroScope do
let kind := reassign.getKind
if kind == ``Lean.Parser.Term.doReassign then
-- doReassign := leading_parser (letIdDecl <|> letPatDecl)
let arg := reassign[0]
if arg.getKind == ``Lean.Parser.Term.letIdDecl then
-- letIdDecl := leading_parser ident >> many (ppSpace >> bracketedBinder) >> optType >> " := " >> termParser
let x := arg[0]
let val := arg[4]
let newVal ← `(ensure_type_of% $x $(quote "invalid reassignment, value") $val)
let arg := arg.setArg 4 newVal
let letDecl := mkNode `Lean.Parser.Term.letDecl #[arg]
`(let $letDecl:letDecl; $k)
else
-- TODO: ensure the types did not change
let letDecl := mkNode `Lean.Parser.Term.letDecl #[arg]
`(let $letDecl:letDecl; $k)
else
-- Note that `doReassignArrow` is expanded by `doReassignArrowToCode`
Macro.throwErrorAt reassign "unexpected kind of 'do' reassignment"
def mkIte (optIdent : Syntax) (cond : Syntax) (thenBranch : Syntax) (elseBranch : Syntax) : MacroM Syntax := do
if optIdent.isNone then
``(if $cond then $thenBranch else $elseBranch)
else
let h := optIdent[0]
``(if $h:ident : $cond then $thenBranch else $elseBranch)
def mkJoinPoint (j : Name) (ps : Array (Name × Bool)) (body : Syntax) (k : Syntax) : M Syntax := withRef body <| withFreshMacroScope do
let pTypes ← ps.mapM fun ⟨id, useTypeOf⟩ => do if useTypeOf then `(type_of% $(← mkIdentFromRef id)) else `(_)
let ps ← ps.mapM fun ⟨id, useTypeOf⟩ => mkIdentFromRef id
/-
We use `let_delayed` instead of `let` for joinpoints to make sure `$k` is elaborated before `$body`.
By elaborating `$k` first, we "learn" more about `$body`'s type.
For example, consider the following example `do` expression
```
def f (x : Nat) : IO Unit := do
if x > 0 then
IO.println "x is not zero" -- Error is here
IO.mkRef true
```
it is expanded into
```
def f (x : Nat) : IO Unit := do
let jp (u : Unit) : IO _ :=
IO.mkRef true;
if x > 0 then
IO.println "not zero"
jp ()
else
jp ()
```
If we use the regular `let` instead of `let_delayed`, the joinpoint `jp` will be elaborated and its type will be inferred to be `Unit → IO (IO.Ref Bool)`.
Then, we get a typing error at `jp ()`. By using `let_delayed`, we first elaborate `if x > 0 ...` and learn that `jp` has type `Unit → IO Unit`.
Then, we get the expected type mismatch error at `IO.mkRef true`. -/
`(let_delayed $(← mkIdentFromRef j):ident $[($ps : $pTypes)]* : $((← read).m) _ := $body; $k)
def mkJmp (ref : Syntax) (j : Name) (args : Array Syntax) : Syntax :=
Syntax.mkApp (mkIdentFrom ref j) args
partial def toTerm : Code → M Syntax
| Code.«return» ref val => withRef ref <| returnToTerm val
| Code.«continue» ref => withRef ref continueToTerm
| Code.«break» ref => withRef ref breakToTerm
| Code.action e => actionTerminalToTerm e
| Code.joinpoint j ps b k => do mkJoinPoint j ps (← toTerm b) (← toTerm k)
| Code.jmp ref j args => pure $ mkJmp ref j args
| Code.decl _ stx k => do declToTerm stx (← toTerm k)
| Code.reassign _ stx k => do reassignToTerm stx (← toTerm k)
| Code.seq stx k => do seqToTerm stx (← toTerm k)
| Code.ite ref _ o c t e => withRef ref <| do mkIte o c (← toTerm t) (← toTerm e)
| Code.«match» ref genParam discrs optType alts => do
let mut termAlts := #[]
for alt in alts do
let rhs ← toTerm alt.rhs
let termAlt := mkNode `Lean.Parser.Term.matchAlt #[mkAtomFrom alt.ref "|", alt.patterns, mkAtomFrom alt.ref "=>", rhs]
termAlts := termAlts.push termAlt
let termMatchAlts := mkNode `Lean.Parser.Term.matchAlts #[mkNullNode termAlts]
pure $ mkNode `Lean.Parser.Term.«match» #[mkAtomFrom ref "match", genParam, discrs, optType, mkAtomFrom ref "with", termMatchAlts]
def run (code : Code) (m : Syntax) (uvars : Array Name := #[]) (kind := Kind.regular) : MacroM Syntax := do
let term ← toTerm code { m := m, kind := kind, uvars := uvars }
pure term
/- Given
- `a` is true if the code block has a `Code.action _` exit point
- `r` is true if the code block has a `Code.return _ _` exit point
- `bc` is true if the code block has a `Code.break _` or `Code.continue _` exit point
generate Kind. See comment at the beginning of the `ToTerm` namespace. -/
def mkNestedKind (a r bc : Bool) : Kind :=
match a, r, bc with
| true, false, false => Kind.regular
| false, true, false => Kind.regular
| false, false, true => Kind.nestedBC
| true, true, false => Kind.nestedPR
| true, false, true => Kind.nestedSBC
| false, true, true => Kind.nestedSBC
| true, true, true => Kind.nestedPRBC
| false, false, false => unreachable!
def mkNestedTerm (code : Code) (m : Syntax) (uvars : Array Name) (a r bc : Bool) : MacroM Syntax := do
ToTerm.run code m uvars (mkNestedKind a r bc)
/- Given a term `term` produced by `ToTerm.run`, pattern match on its result.
See comment at the beginning of the `ToTerm` namespace.
- `a` is true if the code block has a `Code.action _` exit point
- `r` is true if the code block has a `Code.return _ _` exit point
- `bc` is true if the code block has a `Code.break _` or `Code.continue _` exit point
The result is a sequence of `doElem` -/
def matchNestedTermResult (term : Syntax) (uvars : Array Name) (a r bc : Bool) : MacroM (List Syntax) := do
let toDoElems (auxDo : Syntax) : List Syntax := getDoSeqElems (getDoSeq auxDo)
let u ← mkTuple (← uvars.mapM mkIdentFromRef)
match a, r, bc with
| true, false, false =>
if uvars.isEmpty then
return toDoElems (← `(do $term:term))
else
return toDoElems (← `(do let r ← $term:term; $u:term := r.2; pure r.1))
| false, true, false =>
if uvars.isEmpty then
return toDoElems (← `(do let r ← $term:term; return r))
else
return toDoElems (← `(do let r ← $term:term; $u:term := r.2; return r.1))
| false, false, true => toDoElems <$>
`(do let r ← $term:term;
match r with
| DoResultBC.«break» u => $u:term := u; break
| DoResultBC.«continue» u => $u:term := u; continue)
| true, true, false => toDoElems <$>
`(do let r ← $term:term;
match r with
| DoResultPR.«pure» a u => $u:term := u; pure a
| DoResultPR.«return» b u => $u:term := u; return b)
| true, false, true => toDoElems <$>
`(do let r ← $term:term;
match r with
| DoResultSBC.«pureReturn» a u => $u:term := u; pure a
| DoResultSBC.«break» u => $u:term := u; break
| DoResultSBC.«continue» u => $u:term := u; continue)
| false, true, true => toDoElems <$>
`(do let r ← $term:term;
match r with
| DoResultSBC.«pureReturn» a u => $u:term := u; return a
| DoResultSBC.«break» u => $u:term := u; break
| DoResultSBC.«continue» u => $u:term := u; continue)
| true, true, true => toDoElems <$>
`(do let r ← $term:term;
match r with
| DoResultPRBC.«pure» a u => $u:term := u; pure a
| DoResultPRBC.«return» a u => $u:term := u; return a
| DoResultPRBC.«break» u => $u:term := u; break
| DoResultPRBC.«continue» u => $u:term := u; continue)
| false, false, false => unreachable!
end ToTerm
def isMutableLet (doElem : Syntax) : Bool :=
let kind := doElem.getKind
(kind == ``Lean.Parser.Term.doLetArrow || kind == ``Lean.Parser.Term.doLet)
&&
!doElem[1].isNone
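-- For example, `let mut x := v` and `let mut x ← f` are mutable lets (the
-- optional "mut" at `doElem[1]` is present), while a plain `let x := v` is not.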
namespace ToCodeBlock
structure Context where
ref : Syntax
m : Syntax -- Syntax representing the monad associated with the do notation.
mutableVars : NameSet := {}
insideFor : Bool := false
abbrev M := ReaderT Context TermElabM
def withNewMutableVars {α} (newVars : Array Name) (mutable : Bool) (x : M α) : M α :=
withReader (fun ctx => if mutable then { ctx with mutableVars := insertVars ctx.mutableVars newVars } else ctx) x
def checkReassignable (xs : Array Name) : M Unit := do
let throwInvalidReassignment (x : Name) : M Unit :=
throwError "'{x.simpMacroScopes}' cannot be reassigned"
let ctx ← read
for x in xs do
unless ctx.mutableVars.contains x do
throwInvalidReassignment x
def checkNotShadowingMutable (xs : Array Name) : M Unit := do
let throwInvalidShadowing (x : Name) : M Unit :=
throwError "mutable variable '{x.simpMacroScopes}' cannot be shadowed"
let ctx ← read
for x in xs do
if ctx.mutableVars.contains x then
throwInvalidShadowing x
def withFor {α} (x : M α) : M α :=
withReader (fun ctx => { ctx with insideFor := true }) x
structure ToForInTermResult where
uvars : Array Name
term : Syntax
def mkForInBody (x : Syntax) (forInBody : CodeBlock) : M ToForInTermResult := do
let ctx ← read
let uvars := forInBody.uvars
let uvars := nameSetToArray uvars
let term ← liftMacroM $ ToTerm.run forInBody.code ctx.m uvars (if hasReturn forInBody.code then ToTerm.Kind.forInWithReturn else ToTerm.Kind.forIn)
pure ⟨uvars, term⟩
def ensureInsideFor : M Unit :=
unless (← read).insideFor do
throwError "invalid 'do' element, it must be inside 'for'"
def ensureEOS (doElems : List Syntax) : M Unit :=
unless doElems.isEmpty do
throwError "must be last element in a 'do' sequence"
private partial def expandLiftMethodAux (inQuot : Bool) (inBinder : Bool) : Syntax → StateT (List Syntax) M Syntax
| stx@(Syntax.node i k args) =>
if liftMethodDelimiter k then
return stx
else if k == ``Lean.Parser.Term.liftMethod && !inQuot then withFreshMacroScope do
if inBinder then
throwErrorAt stx "cannot lift `(<- ...)` over a binder, this error usually happens when you are trying to lift a method nested in a `fun`, `let`, or `match`-alternative, and it can often be fixed by adding a missing `do`"
let term := args[1]
let term ← expandLiftMethodAux inQuot inBinder term
let auxDoElem ← `(doElem| let a ← $term:term)
modify fun s => s ++ [auxDoElem]
`(a)
else do
let inAntiquot := stx.isAntiquot && !stx.isEscapedAntiquot
let inBinder := inBinder || (!inQuot && liftMethodForbiddenBinder stx)
let args ← args.mapM (expandLiftMethodAux (inQuot && !inAntiquot || stx.isQuot) inBinder)
return Syntax.node i k args
| stx => pure stx
def expandLiftMethod (doElem : Syntax) : M (List Syntax × Syntax) := do
if !hasLiftMethod doElem then
pure ([], doElem)
else
let (doElem, doElemsNew) ← (expandLiftMethodAux false false doElem).run []
pure (doElemsNew, doElem)
def checkLetArrowRHS (doElem : Syntax) : M Unit := do
let kind := doElem.getKind
if kind == ``Lean.Parser.Term.doLetArrow ||
kind == ``Lean.Parser.Term.doLet ||
kind == ``Lean.Parser.Term.doLetRec ||
kind == ``Lean.Parser.Term.doHave ||
kind == ``Lean.Parser.Term.doReassign ||
kind == ``Lean.Parser.Term.doReassignArrow then
throwErrorAt doElem "invalid kind of value '{kind}' in an assignment"
/- Generate `CodeBlock` for `doReturn` which is of the form
```
"return " >> optional termParser
```
`doElems` is only used for sanity checking. -/
def doReturnToCode (doReturn : Syntax) (doElems : List Syntax) : M CodeBlock := withRef doReturn do
ensureEOS doElems
let argOpt := doReturn[1]
let arg ← if argOpt.isNone then liftMacroM mkUnit else pure argOpt[0]
return mkReturn (← getRef) arg
structure Catch where
x : Syntax
optType : Syntax
codeBlock : CodeBlock
def getTryCatchUpdatedVars (tryCode : CodeBlock) (catches : Array Catch) (finallyCode? : Option CodeBlock) : NameSet :=
let ws := tryCode.uvars
let ws := catches.foldl (fun ws alt => union alt.codeBlock.uvars ws) ws
let ws := match finallyCode? with
| none => ws
| some c => union c.uvars ws
ws
def tryCatchPred (tryCode : CodeBlock) (catches : Array Catch) (finallyCode? : Option CodeBlock) (p : Code → Bool) : Bool :=
p tryCode.code ||
catches.any (fun «catch» => p «catch».codeBlock.code) ||
match finallyCode? with
| none => false
| some finallyCode => p finallyCode.code
mutual
/- "Concatenate" `c` with `doSeqToCode doElems` -/
partial def concatWith (c : CodeBlock) (doElems : List Syntax) : M CodeBlock :=
match doElems with
| [] => pure c
| nextDoElem :: _ => do
let k ← doSeqToCode doElems
let ref := nextDoElem
concat c ref none k
/- Generate `CodeBlock` for `doLetArrow; doElems`
`doLetArrow` is of the form
```
"let " >> optional "mut " >> (doIdDecl <|> doPatDecl)
```
where
```
def doIdDecl := leading_parser ident >> optType >> leftArrow >> doElemParser
def doPatDecl := leading_parser termParser >> leftArrow >> doElemParser >> optional (" | " >> doElemParser)
```
-/
partial def doLetArrowToCode (doLetArrow : Syntax) (doElems : List Syntax) : M CodeBlock := do
let ref := doLetArrow
let decl := doLetArrow[2]
if decl.getKind == ``Lean.Parser.Term.doIdDecl then
let y := decl[0].getId
checkNotShadowingMutable #[y]
let doElem := decl[3]
let k ← withNewMutableVars #[y] (isMutableLet doLetArrow) (doSeqToCode doElems)
match isDoExpr? doElem with
| some action => pure $ mkVarDeclCore #[y] doLetArrow k
| none =>
checkLetArrowRHS doElem
let c ← doSeqToCode [doElem]
match doElems with
| [] => pure c
| kRef::_ => concat c kRef y k
else if decl.getKind == ``Lean.Parser.Term.doPatDecl then
let pattern := decl[0]
let doElem := decl[2]
let optElse := decl[3]
if optElse.isNone then withFreshMacroScope do
let auxDo ←
if isMutableLet doLetArrow then
`(do let discr ← $doElem; let mut $pattern:term := discr)
else
`(do let discr ← $doElem; let $pattern:term := discr)
doSeqToCode <| getDoSeqElems (getDoSeq auxDo) ++ doElems
else
if isMutableLet doLetArrow then
throwError "'mut' is currently not supported in let-decls with 'else' case"
let contSeq := mkDoSeq doElems.toArray
let elseSeq := mkSingletonDoSeq optElse[1]
let auxDo ← `(do let discr ← $doElem; match discr with | $pattern:term => $contSeq | _ => $elseSeq)
doSeqToCode <| getDoSeqElems (getDoSeq auxDo)
else
throwError "unexpected kind of 'do' declaration"
partial def doLetElseToCode (doLetElse : Syntax) (doElems : List Syntax) : M CodeBlock := do
-- "let " >> termParser >> " := " >> termParser >> checkColGt >> " | " >> doElemParser
let pattern := doLetElse[1]
let val := doLetElse[3]
let elseSeq := mkSingletonDoSeq doLetElse[5]
let contSeq := mkDoSeq doElems.toArray
let auxDo ← `(do let discr := $val; match discr with | $pattern:term => $contSeq | _ => $elseSeq)
doSeqToCode <| getDoSeqElems (getDoSeq auxDo)
/- Generate `CodeBlock` for `doReassignArrow; doElems`
`doReassignArrow` is of the form
```
(doIdDecl <|> doPatDecl)
```
-/
partial def doReassignArrowToCode (doReassignArrow : Syntax) (doElems : List Syntax) : M CodeBlock := do
let ref := doReassignArrow
let decl := doReassignArrow[0]
if decl.getKind == ``Lean.Parser.Term.doIdDecl then
let doElem := decl[3]
let y := decl[0]
let auxDo ← `(do let r ← $doElem; $y:ident := r)
doSeqToCode <| getDoSeqElems (getDoSeq auxDo) ++ doElems
else if decl.getKind == ``Lean.Parser.Term.doPatDecl then
let pattern := decl[0]
let doElem := decl[2]
let optElse := decl[3]
if optElse.isNone then withFreshMacroScope do
let auxDo ← `(do let discr ← $doElem; $pattern:term := discr)
doSeqToCode <| getDoSeqElems (getDoSeq auxDo) ++ doElems
else
throwError "reassignment with `|` (i.e., \"else clause\") is not currently supported"
else
throwError "unexpected kind of 'do' reassignment"
/- Generate `CodeBlock` for `doIf; doElems`
`doIf` is of the form
```
"if " >> optIdent >> termParser >> " then " >> doSeq
>> many (group (try (group (" else " >> " if ")) >> optIdent >> termParser >> " then " >> doSeq))
>> optional (" else " >> doSeq)
``` -/
partial def doIfToCode (doIf : Syntax) (doElems : List Syntax) : M CodeBlock := do
let view ← liftMacroM $ mkDoIfView doIf
let thenBranch ← doSeqToCode (getDoSeqElems view.thenBranch)
let elseBranch ← doSeqToCode (getDoSeqElems view.elseBranch)
let ite ← mkIte view.ref view.optIdent view.cond thenBranch elseBranch
concatWith ite doElems
/- Generate `CodeBlock` for `doUnless; doElems`
`doUnless` is of the form
```
"unless " >> termParser >> "do " >> doSeq
``` -/
partial def doUnlessToCode (doUnless : Syntax) (doElems : List Syntax) : M CodeBlock := withRef doUnless do
let ref := doUnless
let cond := doUnless[1]
let doSeq := doUnless[3]
let body ← doSeqToCode (getDoSeqElems doSeq)
let unlessCode ← liftMacroM <| mkUnless cond body
concatWith unlessCode doElems
/- Generate `CodeBlock` for `doFor; doElems`
`doFor` is of the form
```
def doForDecl := leading_parser termParser >> " in " >> withForbidden "do" termParser
def doFor := leading_parser "for " >> sepBy1 doForDecl ", " >> "do " >> doSeq
```
-/
partial def doForToCode (doFor : Syntax) (doElems : List Syntax) : M CodeBlock := do
let doForDecls := doFor[1].getSepArgs
if doForDecls.size > 1 then
/-
Expand
```
for x in xs, y in ys do
body
```
into
```
let s := toStream ys
for x in xs do
match Stream.next? s with
| none => break
| some (y, s') =>
s := s'
body
```
-/
-- Extract second element
let doForDecl := doForDecls[1]
let y := doForDecl[0]
let ys := doForDecl[2]
let doForDecls := doForDecls.eraseIdx 1
let body := doFor[3]
withFreshMacroScope do
let toStreamFn ← withRef ys ``(toStream)
let auxDo ←
`(do let mut s := $toStreamFn:ident $ys
for $doForDecls:doForDecl,* do
match Stream.next? s with
| none => break
| some ($y, s') =>
s := s'
do $body)
doSeqToCode (getDoSeqElems (getDoSeq auxDo) ++ doElems)
else withRef doFor do
let x := doForDecls[0][0]
withRef x <| checkNotShadowingMutable (← getPatternVarsEx x)
let xs := doForDecls[0][2]
let forElems := getDoSeqElems doFor[3]
let forInBodyCodeBlock ← withFor (doSeqToCode forElems)
let ⟨uvars, forInBody⟩ ← mkForInBody x forInBodyCodeBlock
let uvarsTuple ← liftMacroM do mkTuple (← uvars.mapM mkIdentFromRef)
if hasReturn forInBodyCodeBlock.code then
let forInBody ← liftMacroM <| destructTuple uvars (← `(r)) forInBody
let forInTerm ← `(for_in% $(xs) (MProd.mk none $uvarsTuple) fun $x r => let r := r.2; $forInBody)
let auxDo ← `(do let r ← $forInTerm:term;
$uvarsTuple:term := r.2;
match r.1 with
| none => Pure.pure (ensure_expected_type% "type mismatch, 'for'" PUnit.unit)
| some a => return ensure_expected_type% "type mismatch, 'for'" a)
doSeqToCode (getDoSeqElems (getDoSeq auxDo) ++ doElems)
else
let forInBody ← liftMacroM <| destructTuple uvars (← `(r)) forInBody
let forInTerm ← `(for_in% $(xs) $uvarsTuple fun $x r => $forInBody)
if doElems.isEmpty then
let auxDo ← `(do let r ← $forInTerm:term;
$uvarsTuple:term := r;
Pure.pure (ensure_expected_type% "type mismatch, 'for'" PUnit.unit))
doSeqToCode <| getDoSeqElems (getDoSeq auxDo)
else
let auxDo ← `(do let r ← $forInTerm:term; $uvarsTuple:term := r)
doSeqToCode <| getDoSeqElems (getDoSeq auxDo) ++ doElems
/-- Generate `CodeBlock` for `doMatch; doElems` -/
partial def doMatchToCode (doMatch : Syntax) (doElems : List Syntax) : M CodeBlock := do
let ref := doMatch
let genParam := doMatch[1]
let discrs := doMatch[2]
let optType := doMatch[3]
let matchAlts := doMatch[5][0].getArgs -- Array of `doMatchAlt`
let alts ← matchAlts.mapM fun matchAlt => do
let patterns := matchAlt[1]
let vars ← getPatternsVarsEx patterns.getSepArgs
withRef patterns <| checkNotShadowingMutable vars
let rhs := matchAlt[3]
let rhs ← doSeqToCode (getDoSeqElems rhs)
pure { ref := matchAlt, vars := vars, patterns := patterns, rhs := rhs : Alt CodeBlock }
let matchCode ← mkMatch ref genParam discrs optType alts
concatWith matchCode doElems
/--
Generate `CodeBlock` for `doTry; doElems`
```
def doTry := leading_parser "try " >> doSeq >> many (doCatch <|> doCatchMatch) >> optional doFinally
def doCatch := leading_parser "catch " >> binderIdent >> optional (":" >> termParser) >> darrow >> doSeq
def doCatchMatch := leading_parser "catch " >> doMatchAlts
def doFinally := leading_parser "finally " >> doSeq
```
-/
partial def doTryToCode (doTry : Syntax) (doElems : List Syntax) : M CodeBlock := do
let ref := doTry
let tryCode ← doSeqToCode (getDoSeqElems doTry[1])
let optFinally := doTry[3]
let catches ← doTry[2].getArgs.mapM fun catchStx => do
if catchStx.getKind == ``Lean.Parser.Term.doCatch then
let x := catchStx[1]
if x.isIdent then
withRef x <| checkNotShadowingMutable #[x.getId]
let optType := catchStx[2]
let c ← doSeqToCode (getDoSeqElems catchStx[4])
pure { x := x, optType := optType, codeBlock := c : Catch }
else if catchStx.getKind == ``Lean.Parser.Term.doCatchMatch then
let matchAlts := catchStx[1]
let x ← `(ex)
let auxDo ← `(do match ex with $matchAlts)
let c ← doSeqToCode (getDoSeqElems (getDoSeq auxDo))
pure { x := x, codeBlock := c, optType := mkNullNode : Catch }
else
throwError "unexpected kind of 'catch'"
let finallyCode? ← if optFinally.isNone then pure none else some <$> doSeqToCode (getDoSeqElems optFinally[0][1])
if catches.isEmpty && finallyCode?.isNone then
throwError "invalid 'try', it must have a 'catch' or 'finally'"
let ctx ← read
let ws := getTryCatchUpdatedVars tryCode catches finallyCode?
let uvars := nameSetToArray ws
let a := tryCatchPred tryCode catches finallyCode? hasTerminalAction
let r := tryCatchPred tryCode catches finallyCode? hasReturn
let bc := tryCatchPred tryCode catches finallyCode? hasBreakContinue
let toTerm (codeBlock : CodeBlock) : M Syntax := do
let codeBlock ← liftM $ extendUpdatedVars codeBlock ws
liftMacroM $ ToTerm.mkNestedTerm codeBlock.code ctx.m uvars a r bc
let term ← toTerm tryCode
let term ← catches.foldlM
(fun term «catch» => do
let catchTerm ← toTerm «catch».codeBlock
      if «catch».optType.isNone then
``(MonadExcept.tryCatch $term (fun $(«catch».x):ident => $catchTerm))
else
let type := «catch».optType[1]
``(tryCatchThe $type $term (fun $(«catch».x):ident => $catchTerm)))
term
let term ← match finallyCode? with
| none => pure term
| some finallyCode => withRef optFinally do
unless finallyCode.uvars.isEmpty do
throwError "'finally' currently does not support reassignments"
if hasBreakContinueReturn finallyCode.code then
throwError "'finally' currently does 'return', 'break', nor 'continue'"
let finallyTerm ← liftMacroM <| ToTerm.run finallyCode.code ctx.m {} ToTerm.Kind.regular
``(tryFinally $term $finallyTerm)
let doElemsNew ← liftMacroM <| ToTerm.matchNestedTermResult term uvars a r bc
doSeqToCode (doElemsNew ++ doElems)
partial def doSeqToCode : List Syntax → M CodeBlock
| [] => do liftMacroM mkPureUnitAction
| doElem::doElems => withIncRecDepth <| withRef doElem do
checkMaxHeartbeats "'do'-expander"
match (← liftMacroM <| expandMacro? doElem) with
| some doElem => doSeqToCode (doElem::doElems)
| none =>
match (← liftMacroM <| expandDoIf? doElem) with
| some doElem => doSeqToCode (doElem::doElems)
| none =>
let (liftedDoElems, doElem) ← expandLiftMethod doElem
if !liftedDoElems.isEmpty then
doSeqToCode (liftedDoElems ++ [doElem] ++ doElems)
else
let ref := doElem
let concatWithRest (c : CodeBlock) : M CodeBlock := concatWith c doElems
let k := doElem.getKind
if k == ``Lean.Parser.Term.doLet then
let vars ← getDoLetVars doElem
checkNotShadowingMutable vars
mkVarDeclCore vars doElem <$> withNewMutableVars vars (isMutableLet doElem) (doSeqToCode doElems)
else if k == ``Lean.Parser.Term.doHave then
let vars ← getDoHaveVars doElem
checkNotShadowingMutable vars
mkVarDeclCore vars doElem <$> (doSeqToCode doElems)
else if k == ``Lean.Parser.Term.doLetRec then
let vars ← getDoLetRecVars doElem
checkNotShadowingMutable vars
mkVarDeclCore vars doElem <$> (doSeqToCode doElems)
else if k == ``Lean.Parser.Term.doReassign then
let vars ← getDoReassignVars doElem
checkReassignable vars
let k ← doSeqToCode doElems
mkReassignCore vars doElem k
else if k == ``Lean.Parser.Term.doLetArrow then
doLetArrowToCode doElem doElems
else if k == ``Lean.Parser.Term.doLetElse then
doLetElseToCode doElem doElems
else if k == ``Lean.Parser.Term.doReassignArrow then
doReassignArrowToCode doElem doElems
else if k == ``Lean.Parser.Term.doIf then
doIfToCode doElem doElems
else if k == ``Lean.Parser.Term.doUnless then
doUnlessToCode doElem doElems
else if k == ``Lean.Parser.Term.doFor then withFreshMacroScope do
doForToCode doElem doElems
else if k == ``Lean.Parser.Term.doMatch then
doMatchToCode doElem doElems
else if k == ``Lean.Parser.Term.doTry then
doTryToCode doElem doElems
else if k == ``Lean.Parser.Term.doBreak then
ensureInsideFor
ensureEOS doElems
return mkBreak ref
else if k == ``Lean.Parser.Term.doContinue then
ensureInsideFor
ensureEOS doElems
return mkContinue ref
else if k == ``Lean.Parser.Term.doReturn then
doReturnToCode doElem doElems
else if k == ``Lean.Parser.Term.doDbgTrace then
return mkSeq doElem (← doSeqToCode doElems)
else if k == ``Lean.Parser.Term.doAssert then
return mkSeq doElem (← doSeqToCode doElems)
else if k == ``Lean.Parser.Term.doNested then
let nestedDoSeq := doElem[1]
doSeqToCode (getDoSeqElems nestedDoSeq ++ doElems)
else if k == ``Lean.Parser.Term.doExpr then
let term := doElem[0]
if doElems.isEmpty then
return mkTerminalAction term
else
return mkSeq term (← doSeqToCode doElems)
else
throwError "unexpected do-element of kind {doElem.getKind}:\n{doElem}"
end
def run (doStx : Syntax) (m : Syntax) : TermElabM CodeBlock :=
(doSeqToCode <| getDoSeqElems <| getDoSeq doStx).run { ref := doStx, m }
end ToCodeBlock
/- Create a synthetic metavariable `?m` and assign `m` to it.
We use `?m` to refer to `m` when expanding the `do` notation. -/
private def mkMonadAlias (m : Expr) : TermElabM Syntax := do
let result ← `(?m)
let mType ← inferType m
let mvar ← elabTerm result mType
assignExprMVar mvar.mvarId! m
pure result
@[builtinTermElab «do»] def elabDo : TermElab := fun stx expectedType? => do
tryPostponeIfNoneOrMVar expectedType?
let bindInfo ← extractBind expectedType?
let m ← mkMonadAlias bindInfo.m
let codeBlock ← ToCodeBlock.run stx m
let stxNew ← liftMacroM $ ToTerm.run codeBlock.code m
trace[Elab.do] stxNew
withMacroExpansion stx stxNew $ elabTermEnsuringType stxNew bindInfo.expectedType
end Do
builtin_initialize registerTraceClass `Elab.do
private def toDoElem (newKind : SyntaxNodeKind) : Macro := fun stx => do
let stx := stx.setKind newKind
withRef stx `(do $stx:doElem)
@[builtinMacro Lean.Parser.Term.termFor]
def expandTermFor : Macro := toDoElem ``Lean.Parser.Term.doFor
@[builtinMacro Lean.Parser.Term.termTry]
def expandTermTry : Macro := toDoElem ``Lean.Parser.Term.doTry
@[builtinMacro Lean.Parser.Term.termUnless]
def expandTermUnless : Macro := toDoElem ``Lean.Parser.Term.doUnless
@[builtinMacro Lean.Parser.Term.termReturn]
def expandTermReturn : Macro := toDoElem ``Lean.Parser.Term.doReturn
end Lean.Elab.Term
|
Formal statement is: proposition homotopic_loops_sym_eq: "homotopic_loops s p q \<longleftrightarrow> homotopic_loops s q p" Informal statement is: Two loops $p$ and $q$ are homotopic if and only if $q$ and $p$ are homotopic. |
Require Import Morphisms.
Import ProperNotations.
Require Import SetoidClass.
Require notation categories prods_pullbacks.
Module Make(Import M: notation.T).
Module Export functors_exp := categories.Make(M).
Class Functor `(catC: Category) `(catD: Category) (F: obj catC -> obj catD)
(fmap: forall {a b: obj catC} (f: arrow catC b a), (arrow catD (F b) (F a))): Type :=
mk_Functor
{
preserve_id : forall {a: obj catC}, fmap (@identity catC a) = (@identity catD (F a));
preserve_comp : forall {a b c: obj catC} (f: arrow catC b a) (g : arrow catC c b), fmap (g o f) = (fmap g) o (fmap f)
}.
Check Functor.
Program Instance Opposite_Functor `(catC: Category) `(catD: Category)
(F: obj catC -> obj catD)
(fmapF: forall (a b: obj catC) (f: arrow catC b a), (arrow catD (F b) (F a)))
`(FunctF: Functor catC catD F fmapF):
`(Functor (Dual_Category catC) (Dual_Category catD) F (fun a b => fmapF b a)).
Obligation 1. specialize (@mk_Functor
(Dual_Category catC)
(Dual_Category catD)
F
(fun a b => fmapF b a)
(fun a => (@preserve_id catC catD F fmapF FunctF a))
(fun a b c f g => (@preserve_comp catC catD F fmapF FunctF c b a g f))
). intros. destruct H as [H1 H2]. apply H1. Qed.
Next Obligation. specialize (@mk_Functor
(Dual_Category catC)
(Dual_Category catD)
F
(fun a b => fmapF b a)
(fun a => (@preserve_id catC catD F fmapF FunctF a))
(fun a b c f g => (@preserve_comp catC catD F fmapF FunctF c b a g f))
). intros. destruct H as [H1 H2]. apply H2. Qed.
Check Opposite_Functor.
(* another way of showing the same instance above:
Definition Opposite_Functor_v2 `(catC: Category) `(catD: Category)
(F: catC -> catD)
(fmapF: forall (a b: catC) (f: arrow catC b a), (arrow catD (F b) (F a)))
`(FunctF: Functor catC catD F fmapF):
`(Functor (Dual_Category catC) (Dual_Category catD) F (fun a b => fmapF b a)).
Proof. refine (@mk_Functor
(Dual_Category catC)
(Dual_Category catD)
F
(fun a b => fmapF b a)
(fun a => (@preserve_id catC catD F fmapF FunctF a))
(fun a b c f g => (@preserve_comp catC catD F fmapF FunctF c b a g f))
).
Defined.
*)
Definition Opposite_Opposite_Functor `(catC: Category) `(catD: Category)
(F: obj (Dual_Category catC) -> obj (Dual_Category catD))
(fmapF: forall (a b: obj (Dual_Category catC)) (f: arrow (Dual_Category catC) b a),
(arrow (Dual_Category catD) (F b) (F a)))
`(FunctF: Functor (Dual_Category catC) (Dual_Category catD) F fmapF):
(Functor catC catD F (fun a b => fmapF b a)).
Proof. refine (@mk_Functor
catC
catD
F
(fun a b => fmapF b a)
(fun a => (@preserve_id (Dual_Category catC) (Dual_Category catD) F fmapF FunctF a))
(fun a b c f g => (@preserve_comp (Dual_Category catC) (Dual_Category catD) F fmapF FunctF c b a g f))
).
Defined.
Check Opposite_Opposite_Functor.
(** TODO: prove the theorem here: oppositing is involutive **)
(*define how the identity functor behaves on objects and morphisms*)
Definition id {catC: Category} (a: obj catC) := a.
Definition idf {catC: Category} {a b: obj catC} (f: arrow catC b a) := f.
(** the identity functor **)
Program Instance IdentityFunctor (catC: Category): (@Functor catC catC id (fun a b f => (@idf catC a b f))).
Check IdentityFunctor.
(*define how the functor composition behaves on objects and morphisms*)
Definition comp_obj_FG {catC catD catE: Category} {F: obj catC -> obj catD} {G: obj catD -> obj catE} (a: obj catC) := G (F a).
Definition comp_morp_FG {catC catD catE: Category} {F: obj catC -> obj catD} {G: obj catD -> obj catE}
{fmapF : forall (a b: obj catC) (f: arrow catC b a), (arrow catD (F b) (F a))}
{fmapG : forall (a b: obj catD) (f: arrow catD b a), (arrow catE (G b) (G a))}
(a b: obj catC) (f: (arrow catC b a)) := fmapG _ _ (fmapF _ _ f).
(**functors compose**)
Program Instance Compose_Functors (catC catD catE: Category) (F: obj catC -> obj catD) (G: obj catD -> obj catE)
(fmapF : forall (a b: obj catC) (f: arrow catC b a), (arrow catD (F b) (F a)))
(FunctF: @Functor catC catD F fmapF)
(fmapG : forall (a b: obj catD) (f: arrow catD b a), (arrow catE (G b) (G a)))
(FunctG: @Functor catD catE G fmapG):
(@Functor catC catE (@comp_obj_FG catC catD catE F G) (@comp_morp_FG catC catD catE F G fmapF fmapG)).
Obligation 1. unfold comp_obj_FG, comp_morp_FG. remember (@preserve_id catC catD F fmapF FunctF a).
remember (@preserve_id catD catE G fmapG FunctG (F a)). rewrite <- e0. rewrite e. reflexivity. Qed.
Next Obligation. unfold comp_obj_FG, comp_morp_FG. remember (@preserve_comp catC catD F fmapF FunctF a b c f g).
remember (@preserve_comp catD catE G fmapG FunctG (F a) (F b) (F c) (fmapF _ _ f) (fmapF _ _ g)).
rewrite <- e0. rewrite e. reflexivity. Qed.
Check Compose_Functors.
(** constant functor **)
Definition Constant_Functor `(catC: Category) `(catD: Category) (const: obj catD):
`(Functor catC catD (fun _ => const) (fun _ _ _ => (@identity catD const))).
Proof. refine(@mk_Functor
catC
catD _ _ _ _
).
intros. reflexivity.
intros. simpl. rewrite identity_f; reflexivity.
Defined.
Check Constant_Functor.
(* obligated fmap *)
Class Functor2 `(catC: Category) `(catD: Category) (F: obj catC -> obj catD): Type :=
mk_Functor2
{
fmap2 : forall {a b: obj catC} (f: arrow catC b a), (arrow catD (F b) (F a));
preserve_id2 : forall {a: obj catC}, fmap2 (@identity catC a) = (@identity catD (F a));
preserve_comp2 : forall {a b c: obj catC} (f: arrow catC b a) (g : arrow catC c b), fmap2 (g o f) = (fmap2 g) o (fmap2 f)
}.
Check Functor2.
(*
Program Instance IdentityFunctor2 (catC: Category):
(@Functor2 catC catC (fun a => id a)).
Check IdentityFunctor2.
*)
Definition Opposite_Functor2 (catC: Category) `(catD: Category)
(F: obj catC -> obj catD)
(FunctF: Functor2 catC catD F ): (Functor2 (Dual_Category catC) (Dual_Category catD) F).
Proof. refine (@mk_Functor2
(Dual_Category catC)
(Dual_Category catD)
F
(fun a b => (@fmap2 catC catD F FunctF b a))
(fun a => (@preserve_id2 catC catD F FunctF a))
(fun a b c f g => (@preserve_comp2 catC catD F FunctF c b a g f))).
Qed.
Check Opposite_Functor2.
Definition Opposite_Opposite_Functor2 (catC: Category) (catD: Category)
(F: obj (Dual_Category catC) -> obj (Dual_Category catD))
(FunctF: Functor2 (Dual_Category catC) (Dual_Category catD) F): (Functor2 catC catD F).
Proof. refine (@mk_Functor2
catC
catD
F
(fun a b => (@fmap2 (Dual_Category catC) (Dual_Category catD) F FunctF b a))
(fun a => (@preserve_id2 (Dual_Category catC) (Dual_Category catD) F FunctF a))
(fun a b c f g => (@preserve_comp2 (Dual_Category catC) (Dual_Category catD) F FunctF c b a g f))
).
Defined.
Check Opposite_Opposite_Functor2.
(**functors compose**)
Definition Compose_Functors2 (catC catD catE: Category) (F: obj catC -> obj catD) (G: obj catD -> obj catE)
(FunctF : @Functor2 catC catD F)
(FunctG : @Functor2 catD catE G): (@Functor2 catC catE (@comp_obj_FG catC catD catE F G)).
Proof. refine (@mk_Functor2
catC
catE
(fun a => G (F a))
(fun a b f => ((@fmap2 catD catE G FunctG _ _ (@fmap2 catC catD F FunctF a b f))))
_ _ ).
- intros. destruct catC, catD, catE, FunctF, FunctG. simpl in *.
specialize (preserve_id4 (F a)). rewrite <- preserve_id4. rewrite preserve_id3. reflexivity.
- intros. destruct catC, catD, catE, FunctF, FunctG. simpl in *.
rewrite <- preserve_comp4. rewrite preserve_comp3. reflexivity.
Defined.
Check Compose_Functors2.
Definition IdentityFunctor2 (catC: Category): (@Functor2 catC catC id).
Proof. refine (@mk_Functor2
catC
catC
id
(fun a b f => (@idf catC a b f))
_ _ ).
- intros. unfold idf. simpl; reflexivity.
- intros. unfold idf; reflexivity.
Defined.
Check IdentityFunctor2.
(** constant functor **)
Definition Constant_Functor2 (catC: Category) (catD: Category) (const: obj catD):
(Functor2 catC catD (fun _ => const)).
Proof. refine(@mk_Functor2
catC
catD
(fun _ => const)
(fun _ _ _ => (@identity catD const))
_ _
).
- intros. reflexivity.
- intros. simpl. rewrite identity_f; reflexivity.
Defined.
Check Constant_Functor2.
End Make.
|
#
# Tests for the Function classes
#
import pybamm
import unittest
import numpy as np
class TestInterpolant(unittest.TestCase):
def test_errors(self):
with self.assertRaisesRegex(ValueError, "data should have exactly two columns"):
pybamm.Interpolant(np.ones(10), None)
with self.assertRaisesRegex(ValueError, "interpolator 'bla' not recognised"):
pybamm.Interpolant(np.ones((10, 2)), None, interpolator="bla")
def test_interpolation(self):
x = np.linspace(0, 1)[:, np.newaxis]
y = pybamm.StateVector(slice(0, 2))
# linear
linear = np.hstack([x, 2 * x])
for interpolator in ["pchip", "cubic spline"]:
interp = pybamm.Interpolant(linear, y, interpolator=interpolator)
np.testing.assert_array_almost_equal(
interp.evaluate(y=np.array([0.397, 1.5]))[:, 0], np.array([0.794, 3])
)
# square
square = np.hstack([x, x ** 2])
y = pybamm.StateVector(slice(0, 1))
for interpolator in ["pchip", "cubic spline"]:
interp = pybamm.Interpolant(square, y, interpolator=interpolator)
np.testing.assert_array_almost_equal(
interp.evaluate(y=np.array([0.397]))[:, 0], np.array([0.397 ** 2])
)
# with extrapolation set to False
for interpolator in ["pchip", "cubic spline"]:
interp = pybamm.Interpolant(
square, y, interpolator=interpolator, extrapolate=False
)
np.testing.assert_array_equal(
interp.evaluate(y=np.array([2]))[:, 0], np.array([np.nan])
)
def test_name(self):
a = pybamm.Symbol("a")
x = np.linspace(0, 1)[:, np.newaxis]
interp = pybamm.Interpolant(np.hstack([x, x]), a, "name")
self.assertEqual(interp.name, "interpolating function (name)")
def test_diff(self):
x = np.linspace(0, 1)[:, np.newaxis]
y = pybamm.StateVector(slice(0, 2))
# linear (derivative should be 2)
linear = np.hstack([x, 2 * x])
for interpolator in ["pchip", "cubic spline"]:
interp_diff = pybamm.Interpolant(linear, y, interpolator=interpolator).diff(
y
)
np.testing.assert_array_almost_equal(
interp_diff.evaluate(y=np.array([0.397, 1.5]))[:, 0], np.array([2, 2])
)
# square (derivative should be 2*x)
square = np.hstack([x, x ** 2])
for interpolator in ["pchip", "cubic spline"]:
interp_diff = pybamm.Interpolant(square, y, interpolator=interpolator).diff(
y
)
np.testing.assert_array_almost_equal(
interp_diff.evaluate(y=np.array([0.397, 0.806]))[:, 0],
np.array([0.794, 1.612]),
decimal=3,
)
if __name__ == "__main__":
print("Add -v for more debug output")
import sys
if "-v" in sys.argv:
debug = True
pybamm.settings.debug_mode = True
unittest.main()
|
# 3M1 - Search methods for univariate continuous functions
Luca Magri, [email protected]
(With many thanks to Professor Gábor Csányi.)
Univariate function = function of one variable
## Lecture 2: List of contents
- General algorithm
- Convergence criteria
- Rate of convergence
- Line search
- Interval reduction
- Golden section method
- Fitting with quadratic functions
- Fitting with a polynomial at three points
- Fitting with a polynomial at one point: Newton's method and quasi-Newton method
## Search methods for multivariate functions
1. Start with an initial guess, $x_0$, for the minimum of $f(x)$
1. Propose a __search direction__, $d_k$
1. Propose a step size $\alpha_k$ along $d_k$, typically, by an inner __line search__ loop to find the lowest value of $f(x)$ along the direction $d_k$
1. Update the estimate of the minimum location, $x_{k+1} = x_k+\alpha_k d_k$
1. Back to 2 until convergence
- In 1D, the problem of convergence is "easy"; robustness is "hard"
- Robustness is the ability of the algorithm to operate under unexpected conditions
- In >1D, step 3 itself is a 1D optimisation problem
- In >1D, most of the effort is in the design of an algorithm for step 2
## Convergence criteria
- It can be assessed in different ways
\begin{align}
&\textrm{Norm of the residual}\;\;\;&\left\| f(x_{k+1}) - f(x_k)\right\| &\lt \varepsilon_f\\
\\
&\textrm{Norm of the error}\;\;\;&\left\| x_{k+1} - x_k\right\| &\lt \varepsilon_x\\
\\
&\textrm{Norm of the gradient}\;\;\;&\left\| \nabla f(x_k)\right\| &\lt \varepsilon_g\\
\end{align}
- $\epsilon_{\bullet}$ are user-defined tolerances
- The test on the norm of the gradient is often used
- When the curvature is small, it should be combined with a test on the norm of the error (a combined test is sketched in code after this list)
- Search methods find a _local_ minimum
- Convex functions have only one minimum. Therefore, the local minimum is also a global minimum
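A minimal sketch of such a combined stopping test (added for illustration; the `converged` helper and the default tolerances are assumptions, not part of the original lecture code):

```python
def converged(f, grad, x_new, x_old, eps_f=1e-8, eps_x=1e-8, eps_g=1e-6):
    # Stop when any criterion is met: residual, error, or gradient norm
    return (abs(f(x_new) - f(x_old)) < eps_f
            or abs(x_new - x_old) < eps_x
            or abs(grad(x_new)) < eps_g)
```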
## Rate of convergence
- The __rate of convergence__ of an algorithm quantifies how fast the approximate solution approaches the true minimum $x^*$, close to the minimum, iteration after iteration
$$
\lim_{k\rightarrow\infty} \frac{\left\| x_{k+1}-x^*\right\|}{\left\| x_k-x^* \right\|^p} = \beta
$$
The operation $\lim_{k\rightarrow\infty}$ means that the rate of convergence is an _asymptotic_ quantity.
- For the keen reader, the rate of convergence is the speed at which a sequence converges to its limit, if it converges
- $\beta$ is the __convergence ratio__: the smaller the better
- $p$ is the __order of convergence__: the larger the better
- $p=1$ $\rightarrow$ linear convergence
- $p=2$ $\rightarrow$ quadratic convergence. The number of correct digits _roughly_ doubles at each iteration
- Example
$$ x_k = 2 + 3^{-k}$$
$x_k\rightarrow 2$ for $k\rightarrow \infty$. The sequence converges linearly to its limit $2$ because, taking $p=1$,
\begin{align}
\lim_{k\rightarrow\infty} \frac{\left\| x_{k+1}-x^*\right\|}{\left\| x_k-x^* \right\|} = \lim_{k\rightarrow\infty}\frac{\left(2 + 3^{-(k+1)}\right)-2}{\left(2 + 3^{-k}\right)-2}=\frac{ 3^{-(k+1)}}{3^{-k}} = \frac{1}{3}
\end{align}
Therefore, $\beta=1/3$ and $p=1$.
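We can check this numerically (a quick verification added to these notes, not part of the original material):

```python
import numpy as np

# Error ratios |x_{k+1} - x*| / |x_k - x*| for x_k = 2 + 3^{-k}: all equal 1/3
err = np.abs((2 + 3.0 ** (-np.arange(10))) - 2)
print(err[1:] / err[:-1])
```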
## Line search
- The search of the minimum for a univariate function is called _line search_
- This is the goal, for example, of the inner loop of multi-variable search
- __Interval reduction__ is about the best you can do if the gradient is not available
- Example with the nonlinear function
\begin{align}f(x) = x^2 + \frac{345000}{\pi x}\;\;\;\;\;\;\;\;\;\;\;\; 22 < x < 48
\end{align}
The minimum is
$$ x^* = \left(\frac{345000}{2\pi}\right)^{\frac 1 3} \approx 38$$
- Now we seek the minimum by line search
```python
from pylab import *
import numpy as np
# function we will optimize
def R(x):
return x**2+(345.0*1000.0)/(np.pi*x)
```
```python
figure(figsize=(12,8))
x = np.linspace(23,47,50)
plot(x, R(x))
ymin=4200
ymax=5400
axis((22, 48, ymin, ymax))
plot((25,25),(ymin,ymax), 'k:' ) ; text(25, 4220, "$x_1$", fontsize=24)
plot((45,45),(ymin,ymax), 'k:' ) ; text(45, 4220, "$x_2$", fontsize=24)
plot((40,40),(ymin,ymax), 'k:' ) ; text(40, 4220, "$x_3$", fontsize=24)
plot((33,33),(ymin,ymax), 'r:' ) ; text(33, 4220, "$x_4$", fontsize=24, color='r')
annotate('', xy=(25,5200), xytext=(45,5200), arrowprops=dict(arrowstyle='<->')); text(26,5220, "$I_1$", fontsize=24)
annotate('', xy=(33,5000), xytext=(45,5000), arrowprops=dict(arrowstyle='<->')); text(35,5020, "$I_2$", fontsize=24)
show()
```
1. Start with three points, $x_1,x_2,x_3$ such that
$$f(x_1) \gt f(x_3)\;\;\;\textrm{and}\;\;\; f(x_3) \lt f(x_2)$$
$x_1$ and $x_2$ are the bounds, $x_3$ is the interior point
1. As a consequence of [Bolzano theorem](http://mathworld.wolfram.com/BolzanosTheorem.html), there exists a minimum in the interval $I_1 = [x_1,x_2]$.
This is called __bracketing__
1. Compute $f(x_4)$ at a fourth point $x_4 \in I_1$
1. If $f(x_4) \gt f(x_3)$, the minimum must be in $I_2 = [x_4,x_2]$, otherwise the minimum is in $I'_2=[x_1,x_3]$
1. Continue bracketing until the interval is smaller than a tolerance
## Golden section search
Where to place the interior points?
- We impose a constant reduction factor $\beta$ for the interval length (convergence is linear, $p=1$)
$$\frac{I_2}{I_1} = \frac{\Delta x}{I_2} \text{ with }\Delta x = I_1 - I_2$$
set $\beta=I_2/I_1$; since $\frac{\Delta x}{I_2} = \frac{I_1-I_2}{I_2} = \frac{1}{\beta}-1$, this gives $\beta = \frac{1}{\beta}-1$, i.e. the quadratic $\beta^2+\beta-1=0$, whose positive root is
$$\beta = \frac{\sqrt{5}-1}{2}\approx 0.618$$
- This is the inverse of the [Golden ratio](https://en.wikipedia.org/wiki/Golden_ratio), also known as golden section or sectio aurea, hence the name of this search method
```python
figure(figsize=(12,8))
x = np.linspace(23,47,50)
plot(x, R(x))
ymin=4200
ymax=5400
axis((22, 48, ymin, ymax))
plot((25,25),(ymin,ymax), 'k:' ) ; text(25, 4220, "$x_1$", fontsize=24)
plot((45,45),(ymin,ymax), 'k:' ) ; text(45, 4220, "$x_2$", fontsize=24)
plot((37,37),(ymin,ymax), 'k:' ) ; text(37, 4220, "$x_3$", fontsize=24)
plot((33,33),(ymin,ymax), 'r:' ) ; text(33, 4220, "$x_4$", fontsize=24, color='r')
annotate('', xy=(25,5200), xytext=(45,5200), arrowprops=dict(arrowstyle='<->')); text(26,5220, "$I_1$", fontsize=24)
annotate('', xy=(33,5000), xytext=(45,5000), arrowprops=dict(arrowstyle='<->')); text(35,5020, "$I_2$", fontsize=24)
annotate('', xy=(25,4800), xytext=(37,4800), arrowprops=dict(arrowstyle='<->')); text(35,4820, "$I'_2$", fontsize=24)
annotate('', xy=(37,4500), xytext=(45,4500), arrowprops=dict(arrowstyle='<->')); text(41,4520, "$\Delta x$", fontsize=24)
annotate('', xy=(25,4500), xytext=(33,4500), arrowprops=dict(arrowstyle='<->')); text(26,4520, "$\Delta x$", fontsize=24)
show()
```
```python
def golden_section(f, x1, x2, tol):
# initial points
f1 = f(x1) # x1 = left bound
f2 = f(x2) # x2 = right bound
# now set up golden section ratios
r = (np.sqrt(5)-1)/2.0
# third point
x3 = x1*(1-r)+x2*r; f3=f(x3) #x3 = interior point
# now loop until convergence
traj = []
it = 0
while abs(x1-x2) > tol:
x4 = x1*r+x2*(1-r); f4=f(x4) # x4 = test point
traj.append((x1, x2, x3, x4))
if f4 < f3:
x2=x3; f2=f3
x3=x4; f3=f4
else:
x1=x2; f1=f2
x2=x4; f2=f4
print("x1", "x2",)
print(round(x1,1), round(x2,1))
print("\n")
return x3,traj
x, t = golden_section(R, 25, 45, 0.5)
```
    x1 x2
    45.0 32.6
    x1 x2
    32.6 40.3
    x1 x2
    40.3 35.6
    x1 x2
    40.3 37.4
    x1 x2
    37.4 39.2
    x1 x2
    37.4 38.5
    x1 x2
    38.5 37.8
    x1 x2
    37.8 38.2
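Each iteration shrinks the bracket by the same factor, confirming linear convergence. A quick check using the recorded trajectory `t` (a verification added to these notes, not part of the original material):

```python
# Successive bracket lengths |x1 - x2| recorded by golden_section
lens = [abs(a - b) for a, b, _, _ in t]
print([l2 / l1 for l1, l2 in zip(lens, lens[1:])])  # each ratio ~0.618
```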
```python
fig = figure(figsize=(12,8))
x = np.linspace(23,47,50)
plot(x, R(x))
ymin=4200
ymax=5400
axis((22, 48, ymin, ymax))
text(25, 4220, "$x_1$", fontsize=24)
text(45, 4220, "$x_2$", fontsize=24)
text(38, 4220, "$x_3$", fontsize=24)
text(33, 4220, "$x_4$", fontsize=24)
def animate(i):
ax=gca()
for j in range(1,len(ax.lines)):
ax.lines[j].set_color('k')
plot((t[i][0],t[i][0]),(ymin,ymax), 'g' )
plot((t[i][1],t[i][1]),(ymin,ymax), 'g' )
plot((t[i][2],t[i][2]),(ymin,ymax), 'k' )
plot((t[i][3],t[i][3]),(ymin,ymax), "r")
import matplotlib.animation
ani = matplotlib.animation.FuncAnimation(fig, animate, frames=len(t), interval=1000)
from IPython.display import HTML
HTML(ani.to_jshtml())
```
## Robustness
How could the previous program go wrong?
- The initial interval does not contain a minimum
Much of practical optimisation is actually about __robustness__, creating software that copes with unexpected situations
## Fitting with a quadratic polynomial at three points
We approximate the problem with a simpler one, which we can easily solve.
- We can _fit a quadratic function_
$$
q(x) = a_0 + a_1 x + a_2 x^2
$$
to match $f(x)$ at three points
- Find the minimum of the quadratic function
- Test this minimum
- To obtain the coefficients, we impose
$$
\begin{array}
\\
f(x_1) &= a_0 + a_1 x_1 + a_2 x_1^2\\
f(x_2) &= a_0 + a_1 x_2 + a_2 x_2^2\\
f(x_3) &= a_0 + a_1 x_3 + a_2 x_3^2\\
\end{array}
$$
which in matrix form is cast as
$$
\begin{bmatrix}
f(x_1)\\
f(x_2)\\
f(x_3)\\
\end{bmatrix}
=
\begin{bmatrix}
1&x_1&x_1^2\\
1&x_2&x_2^2\\
1&x_3&x_3^2\\
\end{bmatrix}
\begin{bmatrix}
a_0\\
a_1\\
a_2\\
\end{bmatrix}
$$
- The inversion of the matrix provides the solution
```python
figure(figsize=(12,8))
ymin=4200
ymax=5400
axis((22, 48, ymin, ymax))
x = np.linspace(23,47,50)
# function
plot(x, R(x)); text(24, 5200, "$f(x) = x^2 + 345000/(\pi x)$", color='b', fontsize=24)
# true minimum
plot([38,38], [ymin,ymax], "b-"); text(31, 4250, "true minimum", color="b", fontsize=24)
# quadratic approx
x1=25; x2=37.5; x3=37
m = np.matrix([[1, x1, x1**2],[1, x2, x2**2], [1, x3, x3**2]])
a = np.dot(np.linalg.inv(m),np.matrix([[R(x1)],[R(x2)],[R(x3)]]))
plot(x, a[0,0]+a[1,0]*x+a[2,0]*x**2, 'r-')
plot([x1,x2,x3],[R(x1),R(x2),R(x3)], "ko", markersize=10)
text(27.5, 4800, "$q(x) = a_0 + a_1 x + a_2 x^2$", color='r', fontsize=24);
# minimum of quadratic approx
x4 = -a[1,0]/(2*a[2,0])
plot([x4, x4], [ymin,ymax], "r-") ; text(39,4250, "min at $x=-a_1/(2a_2)$", color="r", fontsize=24);
```
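The fit can be iterated: jump to the minimum of $q(x)$, replace the worst of the three points, and refit. A minimal sketch (the `quadfit_min` helper below is an illustration added to these notes, not part of the original code):

```python
def quadfit_min(f, pts, maxit=50, tol=1e-6):
    # Successive quadratic fits: solve the 3x3 system for (a0, a1, a2),
    # jump to the vertex x4 = -a1/(2 a2), and replace the worst point.
    pts = list(pts)
    x4_old = None
    for _ in range(maxit):
        m = np.array([[1.0, p, p**2] for p in pts])
        a = np.linalg.solve(m, np.array([f(p) for p in pts]))
        x4 = -a[1] / (2.0 * a[2])
        if x4_old is not None and abs(x4 - x4_old) < tol:
            break
        pts[max(range(3), key=lambda i: f(pts[i]))] = x4
        x4_old = x4
    return x4

print(quadfit_min(R, (25.0, 37.5, 37.0)))  # approaches the true minimum ~38
```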
## Fitting with a quadratic polynomial at one point: Newton's method
- Instead of fitting the function values at three points, we can fit the function value, first and second derivatives evaluated at a single point $x_0$
$$
f(x) = f(x_0) + (x-x_0) f'(x_0) + \frac12 (x-x_0)^2 f''(x_0) + h.o.t.
$$
- We neglect the higher order terms ($h.o.t.$) and approximate $f(x)$ as
$$
q(x) = f(x_0) + (x-x_0) f'(x_0) + \frac12 (x-x_0)^2 f''(x_0)
$$
- To calculate the minimum of $q(x)$, we set its derivative to zero
\begin{align}
q'(x) & = f'(x_0) + (x-x_0) f''(x_0) \\
& = 0
\end{align}
- The solution is
$$
x = x_0 - \frac{f'(x_0)}{f''(x_0)}
$$
```python
figure(figsize=(12,8))
ymin=4200
ymax=5400
axis((22, 48, ymin, ymax))
x = np.linspace(23,47,50)
# function
plot(x, R(x)); text(24, 5200, "$f(x) = x^2 + 345000/(\pi x)$", color='b', fontsize=24)
# true minimum
plot([38,38], [ymin,R(38)], "b-"); text(39, 4250, "true minimum", color="b", fontsize=24)
x0=25
plot([x0],[R(x0)], "ko", markersize=10)
text(24,4950, "$x_0$", fontsize=24)
# quadratic approx
def Rp(x):
return 2*x-(345*1000)/(np.pi*x**2)
def Rpp(x):
    return 2 + 2*(345*1000)/(np.pi*x**3)  # the leading 2 comes from d^2(x^2)/dx^2
def q(x):
return R(x0)+(x-x0)*Rp(x0)+0.5*(x-x0)**2*Rpp(x0)
plot(x[:35], q(x[:35]), 'r-')
text(26.5, 4900, "$q(x) = f(x_0) + (x-x_0)f'(x_0) + 0.5(x-x_0)^2 f''(x_0)$", color='r', fontsize=24)
x1=x0-Rp(x0)/Rpp(x0)
plot([x1, x1], [ymin,q(x1)], "r");
```
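Iterating the update $x_{k+1} = x_k - f'(x_k)/f''(x_k)$ converges in a handful of steps on this function. A minimal sketch (the `newton` helper is an illustration added to these notes, reusing `Rp` and `Rpp` from the cell above):

```python
def newton(fp, fpp, x0, tol=1e-8, maxit=50):
    # Newton iteration: x <- x - f'(x)/f''(x); stop when the step is tiny
    x = x0
    for k in range(maxit):
        step = fp(x) / fpp(x)
        x -= step
        if abs(step) < tol:
            return x, k + 1
    return x, maxit

xmin, n_iter = newton(Rp, Rpp, 25.0)
print(xmin, n_iter)  # xmin ~ (345000/(2*pi))**(1/3) ~ 38.0
```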
### Remarks on Newton's method
Newton's method
- might not be as good as the previous 3-point method
- is sensitive to the choice of the initial point
- can make the estimate worse
- has problems when $f''(x_0)$ is zero or negative
- Both polynomial fitting methods can be iterated to refine the estimate
- Newton's method, when it works, has a quadratic ($p=2$) convergence
- Newton's method _generalises_ to multidimensional searches
## Newton's method can converge to the wrong extremum
- Example with the nonlinear function
$$ f(x) = -10000e^{-\frac{x^2}{350}}+\frac{345000}{\pi x}\;\;\;\;\;\;\;\;\;\;\;\; 9 < x < 40$$
We focus on the positive $x$-axis. The minimum is
$$ x^* \approx 15.7$$
The maximum is
$$ x^*_{max} \approx 30.9$$
<!--- - $f''(x)=0$ when $x=20.8$ --->
- Now we plot the approximation made by Newton's method
```python
figure(figsize=(12,8))
ymin=1900
ymax=3600
axis((9, 40, ymin, ymax))
x = np.linspace(10,40,50)
# function
def f(x):
return -10000*exp(-x**2.0/350.0)+(345*1000)/(np.pi*x)
plot(x, f(x)); text(12, 3200, "$f(x) = -10000*\exp(-x^2/350) + 345000/(\pi x)$", color='b', fontsize=24)
# true minimum
m = 15.7
plot([m,m], [ymin,f(m)], "b-"); text(m+1, 1950, "true minimum", color="b", fontsize=24)
x0 = 14.0  # try also 12, 14, 17, 19, 21
plot([x0],[f(x0)], "ko", markersize=10)
text(25,2550, "$x_0$", fontsize=24)
# quadratic approx
def fp(x):
return 10000*exp(-x**2.0/350.0)*2*x/350-(345*1000)/(np.pi*x**2)
def fpp(x):
return -10000*exp(-x**2.0/350.0)*4*x**2/350**2 + 10000*exp(-x**2/350)*2/350 + 2*(345*1000)/(np.pi*x**3)
def qq(x):
return f(x0)+(x-x0)*fp(x0)+0.5*(x-x0)**2*fpp(x0)
plot(x, qq(x), 'r-')
text(15, 3000, "$q(x) = f(x_0) + (x-x_0)f'(x_0) + 0.5(x-x_0)^2 f''(x_0)$", color='r', fontsize=24)
x1=x0-fp(x0)/fpp(x0)
plot([x1, x1], [ymin,qq(x1)], "r");
```
## Quasi-Newton methods
- In anticipation of multivariate functions, we stress that the computation of the second derivatives in the Hessian $H=\nabla^2 f(x)$ can be computationally prohibitive
- A large number of methods are designed to approximate the second derivative
- The simplest approximation is the __secant method__, which takes the current and previous points to estimate the second derivative at the current point
$$
f''(x_1) \approx \frac{f'(x_1)-f'(x_0)}{x_1-x_0}
$$
- The Taylor expansion around $x_1$ is approximated as
$$ s(x) = f(x_1) + (x-x_1)f'(x_1) + \frac{1}{2}(x-x_1)^2 \left(\frac{f'(x_1)-f'(x_0)}{x_1-x_0}\right)$$
- Setting $s'(x)=0$ yields the estimate for the minimum (a code sketch follows after this list)
$$
x = x_1 - f'(x_1) \frac{x_1-x_0}{f'(x_1)-f'(x_0)}
$$
- It can be shown that convergence is *superlinear*, $p = (1+\sqrt{5})/2 \approx 1.618$
- It has higher dimensional generalisations:
- Family of Broyden's methods, including the most widely used BFGS method
- The Barzilai-Borwein method (two-point steepest descent), which is often more robust
- We will see the Barzilai-Borwein method in action in the next lecture
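A minimal sketch of the secant update (the `secant` helper is an illustration added to these notes; it reuses `Rp` from earlier and needs only first derivatives):

```python
def secant(fp, x0, x1, tol=1e-8, maxit=50):
    # x2 = x1 - f'(x1) (x1 - x0) / (f'(x1) - f'(x0))
    for _ in range(maxit):
        x2 = x1 - fp(x1) * (x1 - x0) / (fp(x1) - fp(x0))
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

print(secant(Rp, 25.0, 27.0))  # ~38.0, matching the true minimum
```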
```python
figure(figsize=(12,8))
ymin=4200
ymax=5400
axis((22, 48, ymin, ymax))
x = np.linspace(23,47,50)
# function
plot(x, R(x)); text(24, 5200, "$f(x) = x^2 + 345000/(\pi x)$", color='b', fontsize=24)
# true minimum
plot([38,38], [ymin,R(38)], "b-"); text(39, 4250, "true minimum", color="b", fontsize=24)
x0=25.0
x1=27.0
plot([x0,x1],[R(x0), R(x1)], "ko", markersize=10)
text(24,4950, "$x_0$", fontsize=24)
text(26,4700, "$x_1$", fontsize=24)
def q(x):
return R(x0)+(x-x0)*Rp(x0)+0.5*(x-x0)**2*Rpp(x0)
plot(x[:35], q(x[:35]), 'r-')
text(25, 5100, "$q(x) = f(x_0) + (x-x_0)f'(x_0) + 0.5(x-x_0)^2 f''(x_0)$ Newton", color='r', fontsize=18)
x1N=x0-Rp(x0)/Rpp(x0)
plot([x1N, x1N], [ymin,q(x1N)], "r");
def s(x):
return R(x1)+(x-x1)*Rp(x1)+0.5*(x-x1)**2*(Rp(x1)-Rp(x0))/(x1-x0)
plot(x[:35], s(x[:35]), 'g-')
text(26, 5000, "$s(x) = f(x_1) + (x-x_1)f'(x_1) + 0.5(x-x_1)^2 (f'(x_1)-f'(x_0))/(x_1-x_0)$ Secant", color='g', fontsize=18)
x2=x1-Rp(x1) * (x1-x0)/(Rp(x1)-Rp(x0))
plot([x2, x2], [ymin,s(x2)], "g-");
```
|
> module Void.Properties
> import Data.Fin
> import Control.Isomorphism
> import Finite.Predicates
> import Sigma.Sigma
> %default total
> %access public export
> ||| Mapping |Void|s to |Fin|s
> toFin : Void -> Fin Z
> toFin = void
> %freeze toFin
> ||| Mapping |Fin Z|s to |Void|s
> fromFin : Fin Z -> Void
> fromFin k = absurd k
> %freeze fromFin
> ||| |toFin| is the left-inverse of |fromFin|
> toFinFromFinLemma : (k : Fin Z) -> toFin (fromFin k) = k
> toFinFromFinLemma k = absurd k
> %freeze toFinFromFinLemma
> ||| |fromFin| is the left-inverse of |toFin|
> fromFinToFinLemma : (e : Void) -> fromFin (toFin e) = e
> fromFinToFinLemma e = void e
> %freeze fromFinToFinLemma
> ||| Void is finite
> finiteVoid : Finite Void
> finiteVoid = MkSigma Z iso where
> iso : Iso Void (Fin Z)
> iso = MkIso toFin fromFin toFinFromFinLemma fromFinToFinLemma
> ||| Void is decidable
> decidableVoid : Dec Void
> decidableVoid = No void
> {-
> ---}
|
[GOAL]
𝕜 : Type u_1
E : Type u_2
F : Type u_3
G : Type u_4
inst✝⁷ : NontriviallyNormedField 𝕜
inst✝⁶ : NormedAddCommGroup E
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 E
inst✝³ : NormedSpace 𝕜 F
inst✝² : NormedAddCommGroup G
inst✝¹ : NormedSpace 𝕜 G
f g : E → F
s t : Set E
x✝ : E
inst✝ : NormedSpace ℝ E
x : E
r : ℝ
h : DiffContOnCl 𝕜 f (ball x r)
⊢ ContinuousOn f (closedBall x r)
[PROOFSTEP]
rcases eq_or_ne r 0 with (rfl | hr)
[GOAL]
case inl
𝕜 : Type u_1
E : Type u_2
F : Type u_3
G : Type u_4
inst✝⁷ : NontriviallyNormedField 𝕜
inst✝⁶ : NormedAddCommGroup E
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 E
inst✝³ : NormedSpace 𝕜 F
inst✝² : NormedAddCommGroup G
inst✝¹ : NormedSpace 𝕜 G
f g : E → F
s t : Set E
x✝ : E
inst✝ : NormedSpace ℝ E
x : E
h : DiffContOnCl 𝕜 f (ball x 0)
⊢ ContinuousOn f (closedBall x 0)
[PROOFSTEP]
rw [closedBall_zero]
[GOAL]
case inl
𝕜 : Type u_1
E : Type u_2
F : Type u_3
G : Type u_4
inst✝⁷ : NontriviallyNormedField 𝕜
inst✝⁶ : NormedAddCommGroup E
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 E
inst✝³ : NormedSpace 𝕜 F
inst✝² : NormedAddCommGroup G
inst✝¹ : NormedSpace 𝕜 G
f g : E → F
s t : Set E
x✝ : E
inst✝ : NormedSpace ℝ E
x : E
h : DiffContOnCl 𝕜 f (ball x 0)
⊢ ContinuousOn f {x}
[PROOFSTEP]
exact continuousOn_singleton f x
[GOAL]
case inr
𝕜 : Type u_1
E : Type u_2
F : Type u_3
G : Type u_4
inst✝⁷ : NontriviallyNormedField 𝕜
inst✝⁶ : NormedAddCommGroup E
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 E
inst✝³ : NormedSpace 𝕜 F
inst✝² : NormedAddCommGroup G
inst✝¹ : NormedSpace 𝕜 G
f g : E → F
s t : Set E
x✝ : E
inst✝ : NormedSpace ℝ E
x : E
r : ℝ
h : DiffContOnCl 𝕜 f (ball x r)
hr : r ≠ 0
⊢ ContinuousOn f (closedBall x r)
[PROOFSTEP]
rw [← closure_ball x hr]
[GOAL]
case inr
𝕜 : Type u_1
E : Type u_2
F : Type u_3
G : Type u_4
inst✝⁷ : NontriviallyNormedField 𝕜
inst✝⁶ : NormedAddCommGroup E
inst✝⁵ : NormedAddCommGroup F
inst✝⁴ : NormedSpace 𝕜 E
inst✝³ : NormedSpace 𝕜 F
inst✝² : NormedAddCommGroup G
inst✝¹ : NormedSpace 𝕜 G
f g : E → F
s t : Set E
x✝ : E
inst✝ : NormedSpace ℝ E
x : E
r : ℝ
h : DiffContOnCl 𝕜 f (ball x r)
hr : r ≠ 0
⊢ ContinuousOn f (closure (ball x r))
[PROOFSTEP]
exact h.continuousOn
|
%===============================================================================
% Purpose: Template for \LaTeX beamer presentations at Vilnius University
% Created: Sep 15 2013, Version 1.0 from Mar 17 2013
% Author: Linus Dietz ([email protected]), fork of Marcel Grossmann ([email protected])
%===============================================================================
%===============================================================================
% Run pdflatex and bibtex to compile. Use the Makefile for all in once compilation.
% Configuration in texmaker: pdflatex -synctex=1 -interaction=nonstopmode %.tex | bibtex % | pdflatex -synctex=1 -interaction=nonstopmode %.tex | pdflatex -synctex=1 -interaction=nonstopmode %.tex
% Edit Information in config/metainfo
% Choose the language with the following \lang command
% Options: {english || lithuanian}
\newcommand{\lang}{lithuanian}
%===============================================================================
\documentclass[11pt,\lang ,%
%draft,
%handout,
compress%
]{beamer}
\usepackage{ifthen}
\newcommand{\univustring}{\ifthenelse{\equal{\lang}{lithuanian}}{Vilniaus Universitetas}{Vilnius University}}
\input{config/commands}
\def\signed #1{{\leavevmode\unskip\nobreak\hfil\penalty50\hskip2em
\hbox{}\nobreak\hfil(#1)%
\parfillskip=0pt \finalhyphendemerits=0 \endgraf}}
\newsavebox\mybox
\newenvironment{aquote}[1]
{\savebox\mybox{#1}\begin{fancyquotes}}
{\signed{\usebox\mybox}\end{fancyquotes}}
\input{config/hyphenation}
\setbeamertemplate{caption}[numbered]
%\numberwithin{figure}{section}
\begin{document}
\frame{\titlepage}
\AtBeginSection[]
{
\frame<handout:0>
{
\frametitle{Outline}
\tableofcontents[currentsection,hideallsubsections]
}
}
\AtBeginSubsection[]
{
\frame<handout:0>
{
\frametitle{Outline}
\tableofcontents[sectionstyle=show/hide,subsectionstyle=show/shaded/hide,subsubsectionstyle=hide]
}
}
\AtBeginSubsubsection[]
{
\frame<handout:0>
{
\frametitle{Outline}
\tableofcontents[sectionstyle=show/hide,subsectionstyle=show/shaded/hide,subsubsectionstyle=show/shaded/hide]
}
}
\newcommand<>{\highlighton}[1]{%
\alt#2{\structure{#1}}{{#1}}
}
\newcommand{\icon}[1]{\pgfimage[height=1em]{#1}}
\section*{}
\begin{frame}{Content}
\tableofcontents
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%% Content starts here %%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Intro}
\begin{frame}
\frametitle{Title}
\framesubtitle{Subtitle}
This is a template for presentations in \LaTeX ~beamer.
A slide can have a title and a subtitle.
\end{frame}
\section{Basic Commands}
\begin{frame}
\frametitle{Basic Commands}
\framesubtitle{\texttt{enumerate} \& \texttt{itemize}}
If you want a 2 column layout, use the \texttt{multicols} environment:
\begin{multicols}{2}
\begin{enumerate}
\item First
\item Second
\item Third
\end{enumerate}
\begin{itemize}
\item This
\item and
\item that
\end{itemize}
\end{multicols}
\end{frame}
\begin{frame}
\frametitle{Basic Commands}
\framesubtitle{Images}
\image{.4}{logo.png}{Vilnius University Logo}{img:logo}
Images can be included as usual -- or with the \texttt{\textbackslash image} command.
\end{frame}
\section{Blocks}
\begin{frame}
\frametitle{Blocks}
\begin{block}{Blocktitle}
This is a normal block.
\end{block}
\begin{alertblock}{Alert}
This is an alert block.
\end{alertblock}
\begin{exampleblock}{Example}
This is an example block.
\end{exampleblock}
\end{frame}
\section{References}
\begin{frame}
\frametitle{References}
References can be used with the \texttt{BibTeX} commands \cite{Knuth.1986}. The list of references will be shown at the end of the presentation with the preferred style.
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%% References %%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section*{}
\begin{frame}[allowframebreaks]{References}
\def\newblock{\hskip .11em plus .33em minus .07em}
\scriptsize
\bibliographystyle{alpha}
\bibliography{literature/bib}
\normalsize
\end{frame}
%% Last frame
\frame{
\vspace{2cm}
\ifthenelse{\equal{\lang}{lithuanian}}{{\huge Klausimai?}}{{\huge Questions?}}
\vspace{20mm}
\nocite{*}
\begin{flushright}
Linus Dietz
\structure{\footnotesize{\href{mailto:[email protected]}{[email protected]}}}
\end{flushright}
}
\end{document}
|
function [ Xt,Yt,Zt ] = multiplyEuMat( EuMat, X,Y,Z )
%MULTIPLYEUMAT takes the X, Y, Z coordinates of an object and returns the
%coordinates Xt, Yt, Zt of the same object rotated using the rotation matrix EuMat
Xt=X;
Yt=Y;
Zt=Z;
resvec=[1;1;1];
for i=1:numel(X)
temp=[X(i);Y(i);Z(i)];
resvec=EuMat*temp;
Xt(i)=resvec(1);
Yt(i)=resvec(2);
Zt(i)=resvec(3);
end |
module ModuleInfix where
open import Data.List using (List; _∷_; [])
open import Data.Bool using (Bool; true; false)
module Sort(A : Set)(_≤_ : A → A → Bool)(_⊝_ : A → A → A)(zero : A) where
infix 1 _≤_
infix 2 _⊝_
insert : A → List A → List A
insert x [] = x ∷ []
insert x (y ∷ ys) with zero ≤ (y ⊝ x)
insert x (y ∷ ys) | true = x ∷ y ∷ ys
insert x (y ∷ ys) | false = y ∷ insert x ys
sort : List A → List A
sort [] = []
sort (x ∷ xs) = insert x (sort xs)
|
import pandas as pd
import numpy as np
import os
from urllib.parse import urlparse
filenames = ["depiction_visual_item.csv", "material.csv", "place.csv", "technique.csv", "time-span.csv"]
results = []
for i in filenames:
print(i)
df = pd.read_csv(i, low_memory=False)
    df = df.groupby(['obj'], as_index=False)[['museum','label','text','img']].agg(lambda x: list(set(x)))
numbers_of_rows = len(df.index)
df['museum'] = df['museum'].str[0]
museum_count = df.value_counts(subset=['museum'])
print("number of objects: ",numbers_of_rows)
print("number per museum: ",museum_count)
|
header{*An Application: Finite Automata*}
theory Finite_Automata imports Ordinal
begin
text {*The point of this example is that the HF sets are closed under disjoint sums and Cartesian products,
allowing the theory of finite state machines to be developed without issues of polymorphism
or any tricky encodings of states.*}
record 'a fsm = states :: hf
init :: hf
final :: hf
nxt :: "hf \<Rightarrow> 'a \<Rightarrow> hf \<Rightarrow> bool"
inductive reaches :: "['a fsm, hf, 'a list, hf] \<Rightarrow> bool"
where
Nil: "st <: states fsm \<Longrightarrow> reaches fsm st [] st"
| Cons: "\<lbrakk>nxt fsm st x st''; reaches fsm st'' xs st'; st <: states fsm\<rbrakk> \<Longrightarrow> reaches fsm st (x#xs) st'"
declare reaches.intros [intro]
inductive_simps reaches_Nil [simp]: "reaches fsm st [] st'"
inductive_simps reaches_Cons [simp]: "reaches fsm st (x#xs) st'"
lemma reaches_imp_states: "reaches fsm st xs st' \<Longrightarrow> st <: states fsm \<and> st' <: states fsm"
by (induct xs arbitrary: st st', auto)
lemma reaches_append_iff:
"reaches fsm st (xs@ys) st' \<longleftrightarrow> (\<exists>st''. reaches fsm st xs st'' \<and> reaches fsm st'' ys st')"
by (induct xs arbitrary: ys st st') (auto simp: reaches_imp_states)
definition accepts :: "'a fsm \<Rightarrow> 'a list \<Rightarrow> bool" where
"accepts fsm xs \<equiv> \<exists>st st'. reaches fsm st xs st' \<and> st <: init fsm \<and> st' <: final fsm"
definition regular :: "'a list set \<Rightarrow> bool" where
"regular S \<equiv> \<exists>fsm. S = {xs. accepts fsm xs}"
definition Null where
"Null = \<lparr>states = 0, init = 0, final = 0, nxt = \<lambda>st x st'. False\<rparr>"
theorem regular_empty: "regular {}"
by (auto simp: regular_def accepts_def) (metis hempty_iff simps(2))
abbreviation NullStr where
"NullStr \<equiv> \<lparr>states = 1, init = 1, final = 1, nxt = \<lambda>st x st'. False\<rparr>"
theorem regular_emptystr: "regular {[]}"
apply (auto simp: regular_def accepts_def)
apply (rule exI [where x = NullStr], auto)
apply (case_tac x, auto)
done
abbreviation SingStr where
"SingStr a \<equiv> \<lparr>states = {|0, 1|}, init = {|0|}, final = {|1|}, nxt = \<lambda>st x st'. st=0 \<and> x=a \<and> st'=1\<rparr>"
theorem regular_singstr: "regular {[a]}"
apply (auto simp: regular_def accepts_def)
apply (rule exI [where x = "SingStr a"], auto)
apply (case_tac x, auto)
apply (case_tac list, auto)
done
definition Reverse where
"Reverse fsm = \<lparr>states = states fsm, init = final fsm, final = init fsm,
nxt = \<lambda>st x st'. nxt fsm st' x st\<rparr>"
lemma Reverse_Reverse_ident [simp]: "Reverse (Reverse fsm) = fsm"
by (simp add: Reverse_def)
lemma reaches_Reverse_iff [simp]:
"reaches (Reverse fsm) st (rev xs) st' \<longleftrightarrow> reaches fsm st' xs st"
by (induct xs arbitrary: st st') (auto simp add: Reverse_def reaches_append_iff reaches_imp_states)
lemma reaches_Reverse_iff2 [simp]:
"reaches (Reverse fsm) st' xs st \<longleftrightarrow> reaches fsm st (rev xs) st'"
by (metis reaches_Reverse_iff rev_rev_ident)
lemma [simp]: "final (Reverse fsm) = init fsm"
by (simp add: Reverse_def)
theorem regular_rev: "regular S \<Longrightarrow> regular (rev ` S)"
apply (auto simp: regular_def accepts_def)
apply (rule_tac x="Reverse fsm" in exI, force+)
done
definition Times where
"Times fsm1 fsm2 = \<lparr>states = states fsm1 * states fsm2,
init = init fsm1 * init fsm2,
final = final fsm1 * final fsm2,
nxt = \<lambda>st x st'. (\<exists>st1 st2 st1' st2'. st = \<langle>st1,st2\<rangle> \<and> st' = \<langle>st1',st2'\<rangle> \<and>
nxt fsm1 st1 x st1' \<and> nxt fsm2 st2 x st2')\<rparr>"
lemma states_Times [simp]: "states (Times fsm1 fsm2) = states fsm1 * states fsm2"
by (simp add: Times_def)
lemma init_Times [simp]: "init (Times fsm1 fsm2) = init fsm1 * init fsm2"
by (simp add: Times_def)
lemma final_Times [simp]: "final (Times fsm1 fsm2) = final fsm1 * final fsm2"
by (simp add: Times_def)
lemma nxt_Times: "nxt (Times fsm1 fsm2) \<langle>st1,st2\<rangle> x st' \<longleftrightarrow>
(\<exists>st1' st2'. st' = \<langle>st1',st2'\<rangle> \<and> nxt fsm1 st1 x st1' \<and> nxt fsm2 st2 x st2')"
by (simp add: Times_def)
lemma reaches_Times_iff [simp]:
"reaches (Times fsm1 fsm2) \<langle>st1,st2\<rangle> xs \<langle>st1',st2'\<rangle> \<longleftrightarrow>
reaches fsm1 st1 xs st1' \<and> reaches fsm2 st2 xs st2'"
apply (induct xs arbitrary: st1 st2 st1' st2', force)
apply (force simp add: nxt_Times Times_def reaches.Cons)
done
lemma accepts_Times_iff [simp]:
"accepts (Times fsm1 fsm2) xs \<longleftrightarrow>
accepts fsm1 xs \<and> accepts fsm2 xs"
by (force simp add: accepts_def)
theorem regular_Int:
assumes S: "regular S" and T: "regular T" shows "regular (S \<inter> T)"
proof -
obtain fsmS fsmT where "S = {xs. accepts fsmS xs}" "T = {xs. accepts fsmT xs}" using S T
by (auto simp: regular_def)
hence "S \<inter> T = {xs. accepts (Times fsmS fsmT) xs}"
by (auto simp: accepts_Times_iff [of fsmS fsmT])
thus ?thesis
by (metis regular_def)
qed
definition Plus where
"Plus fsm1 fsm2 = \<lparr>states = states fsm1 + states fsm2,
init = init fsm1 + init fsm2,
final = final fsm1 + final fsm2,
nxt = \<lambda>st x st'. (\<exists>st1 st1'. st = Inl st1 \<and> st' = Inl st1' \<and> nxt fsm1 st1 x st1') \<or>
(\<exists>st2 st2'. st = Inr st2 \<and> st' = Inr st2' \<and> nxt fsm2 st2 x st2')\<rparr>"
lemma states_Plus [simp]: "states (Plus fsm1 fsm2) = states fsm1 + states fsm2"
by (simp add: Plus_def)
lemma init_Plus [simp]: "init (Plus fsm1 fsm2) = init fsm1 + init fsm2"
by (simp add: Plus_def)
lemma final_Plus [simp]: "final (Plus fsm1 fsm2) = final fsm1 + final fsm2"
by (simp add: Plus_def)
lemma nxt_Plus1: "nxt (Plus fsm1 fsm2) (Inl st1) x st' \<longleftrightarrow> (\<exists>st1'. st' = Inl st1' \<and> nxt fsm1 st1 x st1')"
by (simp add: Plus_def)
lemma nxt_Plus2: "nxt (Plus fsm1 fsm2) (Inr st2) x st' \<longleftrightarrow> (\<exists>st2'. st' = Inr st2' \<and> nxt fsm2 st2 x st2')"
by (simp add: Plus_def)
lemma reaches_Plus_iff1 [simp]:
"reaches (Plus fsm1 fsm2) (Inl st1) xs st' \<longleftrightarrow>
(\<exists>st1'. st' = Inl st1' \<and> reaches fsm1 st1 xs st1')"
apply (induct xs arbitrary: st1, force)
apply (force simp add: nxt_Plus1 reaches.Cons)
done
lemma reaches_Plus_iff2 [simp]:
"reaches (Plus fsm1 fsm2) (Inr st2) xs st' \<longleftrightarrow>
(\<exists>st2'. st' = Inr st2' \<and> reaches fsm2 st2 xs st2')"
apply (induct xs arbitrary: st2, force)
apply (force simp add: nxt_Plus2 reaches.Cons)
done
lemma reaches_Plus_iff [simp]:
"reaches (Plus fsm1 fsm2) st xs st' \<longleftrightarrow>
(\<exists>st1 st1'. st = Inl st1 \<and> st' = Inl st1' \<and> reaches fsm1 st1 xs st1') \<or>
(\<exists>st2 st2'. st = Inr st2 \<and> st' = Inr st2' \<and> reaches fsm2 st2 xs st2')"
apply (induct xs arbitrary: st st', auto)
apply (force simp add: nxt_Plus1 nxt_Plus2 Plus_def reaches.Cons)
apply (auto simp: Plus_def)
done
lemma accepts_Plus_iff [simp]:
"accepts (Plus fsm1 fsm2) xs \<longleftrightarrow> accepts fsm1 xs \<or> accepts fsm2 xs"
by (auto simp: accepts_def) (metis sum_iff)
lemma regular_Un:
assumes S: "regular S" and T: "regular T" shows "regular (S \<union> T)"
proof -
obtain fsmS fsmT where "S = {xs. accepts fsmS xs}" "T = {xs. accepts fsmT xs}" using S T
by (auto simp: regular_def)
hence "S \<union> T = {xs. accepts (Plus fsmS fsmT) xs}"
by (auto simp: accepts_Plus_iff [of fsmS fsmT])
thus ?thesis
by (metis regular_def)
qed
end
|
using BVHFiles
using Test
@testset "Graph" begin
include("graph.jl")
end
@testset "File" begin
include("file.jl")
end
@testset "Interpolation" begin
include("interpolation.jl")
end |
[STATEMENT]
lemma star_sum_unfold: "(x + y)\<^sup>\<star> = x\<^sup>\<star> + x\<^sup>\<star> \<cdot> y \<cdot> (x + y)\<^sup>\<star>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. (x + y)\<^sup>\<star> = x\<^sup>\<star> + x\<^sup>\<star> \<cdot> y \<cdot> (x + y)\<^sup>\<star>
[PROOF STEP]
by (metis distrib_left mult_1_right mult.assoc star_denest_var star_unfoldl_eq) |
#' Display module documentation
#'
#' \code{box::help} displays help on a module’s objects and functions in much
#' the same way \code{\link[utils]{help}} does for package contents.
#'
#' @usage \special{box::help(topic, help_type = getOption("help_type", "text"))}
#' @param topic either the fully-qualified name of the object or function to get
#' help for, in the format \code{module$function}; or a name that was exported
#' and attached from an imported module or package.
#' @param help_type character string specifying the output format; currently,
#' only \code{'text'} is supported.
#' @return \code{box::help} is called for its side effect when called directly
#' from the command prompt.
#' @details
#' See the vignette at \code{vignette('box', 'box')} for more information about
#' displaying help for modules.
#' @export
help = function (topic, help_type = getOption('help_type', 'text')) {
topic = substitute(topic)
target = help_topic_target(topic, parent.frame())
target_mod = target[[1L]]
subject = target[[2L]]
if (! inherits(target_mod, 'box$mod')) {
throw('{topic;"} is not a valid module help topic')
}
if (subject != '.__module__.') {
obj = if (
exists(subject, target_mod, inherits = FALSE) &&
! bindingIsActive(subject, target_mod)
) get(subject, envir = target_mod, inherits = FALSE)
if (inherits(obj, 'box$mod')) {
target_mod = obj
subject = '.__module__.'
}
}
info = attr(target_mod, 'info')
mod_name = strsplit(attr(target_mod, 'name'), ':')[[1L]][2L]
if (inherits(info, 'box$pkg_info')) {
help_call = if (subject == '.__module__.') {
bquote(help(.(as.name(mod_name))))
} else {
bquote(help(topic = .(subject), package = .(mod_name)))
}
return(call_help(help_call, parent.frame()))
}
if (! requireNamespace('roxygen2')) {
throw('displaying documentation requires {"roxygen2";\'} installed')
}
mod_ns = attr(target_mod, 'namespace')
all_docs = namespace_info(mod_ns, 'doc')
if (is.null(all_docs)) {
all_docs = parse_documentation(info, mod_ns)
namespace_info(mod_ns, 'doc') = all_docs
}
doc = all_docs[[subject]]
if (is.null(doc)) {
if (subject == '.__module__.') {
throw('no documentation available for {mod_name;"}')
} else {
throw('no documentation available for {subject;"} in module {mod_name;"}')
}
}
display_help(doc, mod_name, help_type)
}
#' Parse a module’s documentation
#'
#' @param info The module info.
#' @param mod_ns The module namespace.
#' @return \code{parse_documentation} returns a list of character strings with
#' the Rd documentation source code for each documented name in a module.
#' @keywords internal
parse_documentation = function (info, mod_ns) {
rdfiles = parse_roxygen_tags(info, mod_ns)
# Due to aliases, documentation entries may have more than one name.
aliases = map(function (rd) unique(rd$get_value('alias')), rdfiles)
names = rep(names(rdfiles), lengths(aliases))
docs = patch_mod_doc(stats::setNames(rdfiles[names], unlist(aliases)))
lapply(docs, format, wrap = FALSE)
}
#' @keywords internal
#' @rdname parse_documentation
parse_roxygen_tags = function (info, mod_ns) {
mod_path = info$source_path
blocks = roxygen2::parse_file(mod_path, mod_ns)
roxygen2::roclet_process(
roxygen2::rd_roclet(),
blocks,
mod_ns,
dirname(mod_path)
)
}
#' @param docs the list of \pkg{roxygen2} documentation objects.
#' @keywords internal
#' @rdname parse_documentation
patch_mod_doc = function (docs) {
if ('.__module__.' %in% names(docs)) {
mod_doc = docs[['.__module__.']]
mod_doc$sections$docType$value = 'package'
mod_doc$sections$usage = NULL
mod_doc$sections$format = NULL
mod_doc$sections$name$value = 'module'
}
docs
}
#' Helper functions for the help functionality
#'
#' \code{help_topic_target} parses the expression being passed to the
#' \code{help} function call to find the innermost module subset expression in
#' it.
#' \code{find_env} acts similarly to \code{\link[utils]{find}}, except that it
#' looks in the current environment’s parents rather than in the global
#' environment search list, it returns only one hit (or zero), and it returns
#' the environment rather than a character string.
#' \code{call_help} invokes a \code{help()} call expression for a package help
#' topic, finding the first \code{help} function definition, ignoring the one
#' from this package.
#'
#' @param topic the unevaluated expression passed to \code{help}.
#' @param caller the environment from which \code{help} was called.
#' @return \code{help_topic_target} returns a list of two elements containing
#' the innermost module of the \code{help} call, as well as the name of the
#' object that’s the subject of the \code{help} call. For \code{help(a$b$c$d)},
#' it returns \code{list(c, quote(d))}.
#' @name help-internal
#' @keywords internal
help_topic_target = function (topic, caller) {
inner_mod = function (mod, expr) {
name = if (is.name(expr)) {
as.character(expr)
} else if (is.call(expr) && identical(expr[[1L]], quote(`$`))) {
mod = Recall(mod, expr[[2L]])
as.character(expr[[3L]])
} else {
throw('{topic;"} is not a valid module help topic', call = call)
}
get(name, envir = mod)
}
call = sys.call(-1L)
if (is.name(topic)) {
obj = inner_mod(caller, topic)
if (inherits(obj, 'box$mod')) {
list(obj, '.__module__.')
} else {
name = as.character(topic)
list(find_env(name, caller), name)
}
} else {
list(inner_mod(caller, topic[[2L]]), as.character(topic[[3L]]))
}
}
#' @param name the name to look for.
#' @name help-internal
#' @keywords internal
find_env = function (name, caller) {
while (! identical(caller, emptyenv())) {
if (exists(name, envir = caller, inherits = FALSE)) return(caller)
caller = parent.env(caller)
}
NULL
}
#' @param call the patched \code{help} call expression.
#' @name help-internal
call_help = function (call, caller) {
# Search for `help` function in caller scope. This is intended to find the
# first help call which is either `utils::help` or potentially from another
# environment, such as `devtools_shims`.
# Unfortunately during testing (and during development) the current package
# *is* attached, so we can’t just use `get` in the global/caller’s
# environment — it would recurse indefinitely to this package’s `help`
# function. To fix this, we need to manually find the first hit that isn’t
# inside this package.
candidates = utils::getAnywhere('help')
envs = map(environment, candidates$objs)
valid = candidates$visible & map_lgl(is.function, candidates$objs)
other_helps = candidates$obj[valid & ! map_lgl(identical, envs, topenv())]
call[[1L]] = other_helps[[1L]]
eval(call, envir = caller)
}
|
\chapter{Predictive Coding}
\acf{PC} is the theory that the ``purpose'' of the perceptual system is not to encode the maximum amount of information about the current state of the world, but rather to provide a prediction of (the posterior distribution of) the future state of the world.
Indeed, this provides a parsimonious explanation for much of the brain's function. Consider a video of two people talking in a living room with an old TV displaying static: most of the Shannon information in the video will be in the TV's display. While each speck of static may be the result of a relic particle from the big bang hitting the antenna, our brain does not devote the majority of its resources to keeping track of the white noise patterns. That is, the brain's sensory processing system is not designed to maximize the mutual information between its internal representations and the sensory signals it receives. Instead, once temporal information is taken into account, the brain tries to encode the predictive information within its sensory streams. \PC has been hailed by some as ``the unifying theory of the brain.''
\PC provided the inspiration and fundamental theory behind temporal difference learning, a class of model-free reinforcement learning methods that learn by bootstrapping from the current prediction of the state of the world. Temporal difference learning produced TD-Gammon, which was beating humans even before the more famous Deep Blue beat Kasparov. It is also one of the inspirations for many current state-of-the-art reinforcement learning applications such as DeepMind's AlphaGo and OpenAI Five.
If we subscribed to the theory of \PC, we would expect the brain to encode somewhere 1) the prediction of what is to come, and 2) the error between the prediction and what came to pass. Indeed, some of the most famous work in neuroscience has demonstrated both quantities for predictions of reward in dopamine signaling in the VTA. However, the theory of \PC predicts that these quantities should exist throughout the perceptual system as well, for all aspects of the state of the world. An excellent place to look for them is where the brain processes complex time-varying signals, namely in auditory processing regions.
A direct corollary of \PC is the notion of surprise, a measure of how unlikely a stimulus is given its context. A simple formulation is $\text{Surprise}=-\log P(S|C)$, where $S$ is the stimulus and $C$ is the context. Several normalizations and transformations have been applied to this formulation; however, it captures the fundamental idea. Some version of this formulation has been used to show that responses from secondary sensory regions are better described by models of surprise than by models using strictly the physical stimulus, in both vision \cite{itti2005principled} and audition \cite{Gill2008}.
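As a concrete special case, if the model's predictive density for a stimulus is Gaussian with mean $\mu$ and standard deviation $\sigma$ (as in the Gaussian prediction network described below), the surprise of an observed value $s$ is
\[
-\log P(s|C) = \frac{(s-\mu)^2}{2\sigma^2} + \log \sigma + \frac{1}{2}\log 2\pi ,
\]
so surprise grows with the squared prediction error, weighted by the model's confidence.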
This formulation requires a model of the world that provides the predicted probability. Not too long ago, ``a model-free approach seems out of the question in the context of video processing, as it would take unreasonably too many data samples and hence unreasonably too long to accumulate sufficient data and allow accurate model-free estimation of the underlying probability density function (PDF) of the data'' \cite{itti2005principled}. However, this is no longer true with modern machine learning techniques and methods for handling significantly more data than was possible in 2005. I have therefore spent some effort building models that predict the future posterior probability density function of spectrograms.
\section{Models}
\subsection{Deep mixture of beta distributions}
My first attempt was to create a neural network that, given a window of a spectrogram, predicts the probability distribution of each frequency band in the next time bin of the spectrogram. The network was optimized by maximizing the log likelihood of the model. I modeled the probability distribution as a mixture of 3 beta distributions. The network consisted of two fully connected layers with outputs for a weight, $w$, and the two beta distribution parameters, $\alpha$ and $\beta$, for each of the three beta distributions, for each of the frequency bins. This network was very ill-behaved, and I spent considerable time implementing as many numerical tricks as I could to control and regularize the network's exploding values. Most were relatively simple and included, but were not limited to, gradient clipping, different beta distribution approximations, the log-sum-exp trick, and even an added cost term tied to the standard deviation of the beta distributions to try to prevent them from becoming delta functions.
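Concretely, the predicted density for each frequency bin value $y$ (with the spectrogram scaled to the unit interval, as the beta support requires) took the form
\[
p(y) = \sum_{i=1}^{3} w_i \,\mathrm{Beta}(y;\, \alpha_i, \beta_i), \qquad \sum_{i=1}^{3} w_i = 1,
\]
and the network was trained by minimizing $-\log p(y)$ over the observed next-time-bin values.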
One interesting method I used was adversarial training \cite{goodfellow2014explaining,lakshminarayanan2017simple}. Given an input $x$ with target $y$ and loss $l(\theta, x, y)$ (e.g.\ $-\log p_\theta(y|x)$), \cite{goodfellow2014explaining} proposes the fast gradient sign method, which generates an adversarial example as $x' = x + \epsilon \sign(\nabla_x l(\theta, x, y))$ and augments the dataset by treating $(x', y)$ as an additional training example. Intuitively, this can be interpreted as a smoothing method: it smooths the likelihood within an $\epsilon$-neighborhood of each training example. This helped a small amount, but my networks still eventually got corrupted by NaNs. Since I am predicting the likelihood of a full spectrogram slice, it is possible that adversarial training on the targets $y$ might also have helped the stability. In the end, however, I began to strip away as many of the complexities of the network as I could to create a minimal viable network.
\subsection{Deep Gaussian prediction}
This minimal network simply predicted each frequency band as a univariate Gaussian with a mean and a standard deviation, leaving the network to deal with any covariances. This network trained easily enough, but it did not predict nearly as well as the deep mixture of beta distribution networks that were stopped before being corrupted by NaNs. Even with this poor performance, however, I noticed peaks in the likelihood that corresponded to motif boundaries.
\subsection{Contrastive Predictive Coding}
The last method I have explored is \CPC~\cite{CPC}. \CPC is a method that takes high-dimensional time-varying data and performs representation learning on it, yielding a representation that captures the predictive information of the signal. It does this by encoding the high-dimensional signal in a low-dimensional latent space, $Z$, using an encoding network, and then feeding the latent space into a recurrent neural network that outputs a context in another latent space, $C$. The context $c_t$ is then used to linearly classify whether a sample is actually the next time sample or is drawn from a noise distribution (I have used other random samples from the spectrogram). The whole model is learned end-to-end, providing an encoding network to a $Z$ space that contains information that is both useful for prediction and itself predictable. The model also provides the $C$ space, which additionally carries context from the past.
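The training objective, termed InfoNCE in \cite{CPC}, scores the latent code of the true future sample against those of the negative samples:
\[
\mathcal{L}_N = -\,\mathbb{E}\!\left[ \log \frac{f_k(x_{t+k}, c_t)}{\sum_{x_j \in X} f_k(x_j, c_t)} \right],
\qquad f_k(x_{t+k}, c_t) = \exp\!\left(z_{t+k}^{\top} W_k c_t\right),
\]
where $X$ contains the positive sample together with the negatives; minimizing this loss maximizes a lower bound on the mutual information between the context $c_t$ and the future observation $x_{t+k}$.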
Figure \ref{fig:encoder} shows an example 8-dimensional $Z$ latent space learned by \CPC. It encodes the predictive and predictable information contained in each spectrogram time slice. Noticeable changes in the spectrogram are reflected in some way in the encoded dimensions. It would be interesting to see whether the network allocates its weights in a distribution similar to what the mel frequency scale would predict.
\begin{figure}[tbp]
\centering
\includegraphics[width=\textwidth]{figures/encoder.pdf}
\caption[8 dimensional $Z$ space learned by Contrastive Predictive Coding.]
{\emph{8 dimensional $Z$ space learned by Contrastive Predictive Coding.} The representation learned by the encoder of \CPC. Top: The 8-dimensional latent space $Z$ created by pushing each 2048-dimensional spectrogram time slice through the encoder network. Bottom: 2048 Frequency bin spectrogram representation fed into the encoder and the \CPC network. The figure has been log scaled in the frequency axis to provide a visualization similar to a mel-scaled spectrogram; however, the 2048 dimensions are not log spaced so the \CPC network must learn where to allocate its weights to encode information in the predictive and predictable frequency dimensions. It would be interesting to see if the allocations it learns match that of a mel-scaled spectrogram.
\index{encoder}}
\label{fig:encoder}
\end{figure}
\subsection{Future Improvements}
\CPC seems to work quite well for generating representations, even though it does not give us explicit predictions. It would be possible to extract predictions from the \CPC network by 1) sampling from the latent space, or 2) explicitly using the linear decision boundary created by the network. There are also several other changes that could be made to the architecture which may improve performance. A WaveNet \cite{van2016wavenet} style time-dilation architecture might allow the removal of the LSTM or recurrent neural network and might improve training time. Alternatively, a transformer network \cite{vaswani2017attention} might be a more efficient architecture that is still able to capture the fundamental dynamics. Lastly, further skip-state predictions might improve the learned representations, as hinted by \cite{gregor2018temporal}; there are many possible ways to incorporate skip-state predictions into \CPC.
\section{Future Applications}
\subsection{Receptive Field Estimation}
These kinds of models have the potential to be used for much more than posterior distribution prediction. As I alluded to in the introduction, an explicit model of the posterior distribution may allow for improved receptive field models and more accurate spike predictions. This would be done by splitting the stimulus explicitly into what was predicted and how the actual stimulus differed from that prediction. Since the spiking could depend on the surprise at a variety of time lags, the dimensionality of this stimulus space quickly explodes as you consider times farther back. To restrict the dimensions considered, a linear receptive field model should probably be used first to define the time lag limits and relevant frequency bands.
Alternatively, instead of using an explicit posterior prediction model, the learned \CPC representation could be used to make spike predictions. This would not require the explicit, arbitrary frequency limits we have currently imposed by estimating the range of frequencies we believe the birds are sensitive to and by using a mel scale based on human cochlear organization. Furthermore, by varying the dimensionality of the \CPC representation, a balance could be struck between the encoded stimulus information and the limited experimental statistical power, to combat the curse of dimensionality.
\subsection{Natural Unsupervised Segmentation of Vocal Objects}
Both word segmentation and motif segmentation have proven to be non-trivial tasks and have limited efforts toward automatic vocal analysis. In both linguistics and birdsong research, hand segmentation remains the gold-standard technique for separating vocal objects. For human speech, some supervised methods do a reasonable job, mainly due to large amounts of labeled data. For birdsong, such data is not available.
Preliminary data seem to indicate the usefulness of the likelihood of each time sample as an unsupervised segmentation signal. Previous methods for song segmentation in our lab mainly used the sum of the spectral power at each time point. I suspect that an improved method could use the likelihood of each time sample or its derivative. This makes intuitive sense because one would expect intra-object predictability to be higher than inter-object predictability.
\subsection{General Encoding of High-dimensional Time-Varying Data}
Lastly, these techniques might be useful for extracting meaningful information from general high-dimensional time-varying data, of which we have a lot in neuroscience. I suspect that the success of techniques like Independent Components Analysis (ICA) \cite{ICA}, Delay Differential Analysis \cite{DDA}, and Takens' delay embeddings indicates that the kinds of statistical structure utilized by \CPC are abundant and informative in neuroscience data. I expect this to be useful for many types of recordings, including but not limited to EEG, fMRI, ECoG, LFP, and even recordings of many single units, mainly as an unsupervised dimensionality reduction technique.
Content under Creative Commons BY 4.0 license and code under MIT license. © Juan Gómez and Nicolás Guarín-Zapata 2020. This material is part of the course Computational Modeling in the Civil Engineering program at Universidad EAFIT.
# Lagrange Interpolation in 1D
## Introduction
The problem of interpolation, or of predicting the values of a function from a given number of representative data points, appears quite frequently in the interpretation of experimental data. Moreover, function approximation through interpolation is the basis for the formulation of the most important methods of computational mechanics, such as finite elements and boundary elements. This notebook discusses some basic, fundamental aspects of interpolation theory. It is structured around the complete development of an example problem, including its implementation in Python.
**After completing this notebook you should be able to:**
* Recognize the function interpolation problem as the approximation of an unknown function in terms of discrete values of that same function.
* Identify the fundamental properties of Lagrange interpolation polynomials.
* Use Lagrange interpolation polynomials to propose approximations to a function given a set of $N$ known values of it.
* Recognize the difference between primary and secondary variables in an interpolation scheme.
## Lagrange Interpolation
The interpolation problem consists of determining the value of a function $f(x)$ at an arbitrary point $x \in [x_1, x_n]$, given known values of the function inside a solution domain. According to the Lagrange interpolation theorem, the approximation $\hat f(x)$ to the function $f(x)$ is constructed as:
\begin{equation}
\hat{f}(x) = \sum_{I=1}^n L_I(x) f_I
\end{equation}
where $L_I$ is the $I$-th polynomial of order $n-1$ and $f_1, f_2, \cdots, f_n$ are the $n$ known values of the function. The $I$-th Lagrange polynomial is computed with the following expression:
\begin{equation}
{L_I}(x) = \prod_{J=1, J \ne I}^{n} \frac{(x - x_J)}{(x_I - x_J)}\, .
\end{equation}
### First derivative
In several engineering problems it is necessary to use interpolation techniques to find values of the derivatives of the primary variable. For example, in solid mechanics and elasticity problems, where the primary variable is the displacement field, one may be interested in determining the strains, which are spatial derivatives of the displacements. Since only discrete values of the displacements are available, the required derivatives must be found using these discrete values. This amounts to differentiating $\hat{f}(x)$ directly, as follows:
\begin{equation}
\frac{\mathrm{d}\hat{f}}{\mathrm{d}x}=\sum_{I=1}^n \frac{\mathrm{d}L_I(x)}{\mathrm{d}x}f_I
\end{equation}
Defining
$$B_I(x) = \frac{\mathrm{d}L_I(x)}{\mathrm{d}x}\, ,$$
we can write the interpolation scheme for the derivative as:
\begin{equation}
\frac{\mathrm{d}\hat{f}}{\mathrm{d}x} = \sum_{I=1}^n B_I(x) f_I\, .
\end{equation}
## Example
Formulate an interpolation scheme to find the value of the function
\begin{equation}
f(x) = x^3 + 4x^2 - 10
\end{equation}
at an arbitrary point $x$ in the interval $[-1, 1]$, assuming that the exact value of the function is known at the points $x=-1.0$, $x=+1.0$ and $x=0.0$.
In this example the function is known, so at first sight it makes little sense to look for an approximation of it by solving an interpolation problem. However, it is convenient to pick a known function in order to understand the numerical problem and its limitations. In this context, we will assume that, as in a real application, we only know the values of the function at a set of points $x=-1.0$, $x=+1.0$ and $x=0.0$, which are called nodal points or simply _nodes_.
The interpolation process involves 2 fundamental steps:
1. Determine the interpolation polynomials $L_I$ using the product formula.
2. Use the linear combination to build the interpolating polynomial, that is, the approximation $\hat f(x)$ to the function.
Let us walk through these steps.
1. Since we have 3 _nodal_ points, we need to generate 3 second-order interpolation polynomials, each associated with one nodal point. Label the _nodes_ as $x_0 = -1.0$, $x_1 = +1.0$ and $x_2 = 0.0$. Accordingly, $L_0(x)$, $L_1(x)$ and $L_2(x)$ will be the second-order interpolation polynomials associated with the nodal points $x_0 = -1.0$, $x_1 = +1.0$ and $x_2 = 0.0$. Using the product formula we have:
\begin{align}
&L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} \equiv \frac{1}{2}(x - 1.0)x\\
&L_1(x) = \frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}\equiv\frac12(x+1.0)x\\
&L_2(x) = \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}\equiv-(x+1.0)(x-1.0)\, .
\end{align}
2. To arrive at the final approximation of the function, we perform the linear combination:
\begin{equation}
\hat{f}(x) = L_0(x)f_0 + L_1(x)f_1 + L_2(x)f_2
\end{equation}
<div class="alert alert-warning">
**Questions**
- Verify that the interpolation polynomials $L_0(x)$, $L_1(x)$ and $L_2(x)$ satisfy the property
$$L_I(x_J)= \delta_{IJ} \equiv \begin{cases}
1\quad \text{if } I=J\\
0\quad \text{if } I\neq J
\end{cases}\, .$$
- If the condition $L_I(x_J)=\delta_{IJ}$ were not satisfied by one of the interpolation functions found, what would be the resulting effect on the approximation of the function?
</div>
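As a quick numerical check of the first question above (a minimal sketch, separate from the main workflow developed below), we can evaluate the three polynomials derived in step 1 at the nodes and confirm the $\delta_{IJ}$ property:
```python
import numpy as np

# Nodes ordered as x0, x1, x2, and the three polynomials derived above
nodes = np.array([-1.0, 1.0, 0.0])
L = [lambda x: 0.5*(x - 1.0)*x,
     lambda x: 0.5*(x + 1.0)*x,
     lambda x: -(x + 1.0)*(x - 1.0)]

# Row I should be row I of the 3x3 identity matrix: L_I(x_J) = delta_IJ
for poly in L:
    print([poly(xJ) for xJ in nodes])
```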
### Some notes of interest
* In the finite element method, the interpolation polynomials are commonly called _shape functions_.
* The computational domain is the interval of size 2 between $x=-1.0$ and $x=+1.0$. As will be discussed later, for ease of programming in finite element implementations, domains of different sizes are transformed into this size-2 domain.
## Solution in Python
The following code blocks show, step by step, the construction of the final interpolating polynomial $\hat f(x)$ using Python. To follow the notebook, it is recommended that you implement the code blocks simultaneously in an independent script, or that you add comments to the most relevant instructions.
### Step 1: Importing modules
When writing Python scripts it is necessary to import **modules** or libraries (some people use the word _librería_ as a poor translation of the English _library_) that contain predefined Python functions. In this case we import the following **modules**:
* `numpy`, which is a library of functions for performing matrix operations, similar to Matlab.
* `matplotlib`, which is a plotting library.
* `scipy`, which is a fundamental library for scientific computing.
* `sympy`, which is a library for symbolic mathematics.
Python imports modules using the reserved word `import` followed by the module name and a short name to be used as a prefix in later references to the functions contained in that module.
```python
%matplotlib notebook
```
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import interpolate
import sympy as sym
```
```python
sym.init_printing()
```
### Step 2: Defining a function to compute the Lagrange interpolation polynomials
In a computer program, a **function** (also called a **subroutine**) is a block of code that performs a specific task multiple times within a program, or possibly in different programs. In Python these functions are defined with the keyword `def` followed by the name given to the function.
Together with the name, the function definition must also include, enclosed in parentheses, a list of parameters (or arguments) to be used by the function when it performs its specific tasks.
In this example we define a Python function to generate the Lagrange polynomials using the product formula defined previously. We name this function `lagrange_poly`. Its input parameters are the independent variable $x$ to be used in the polynomial, the order of the polynomial, given by `order`, and the node `i` associated with the polynomial. The function will be used later from the main program.
```python
def lagrange_poly(x, order, i, xi=None):
    if xi is None:
xi = sym.symbols('x:%d'%(order+1))
index = list(range(order+1))
index.pop(i)
return sym.prod([(x - xi[j])/(xi[i] - xi[j]) for j in index])
```
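If the optional argument `xi` is omitted, the function generates symbolic node coordinates `x0, x1, ...`, which is useful for inspecting the general form of a polynomial (a minimal illustration):
```python
x = sym.symbols('x')
# Symbolic nodes are created internally when xi is not given
lagrange_poly(x, 2, 0)
```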
As an alternative to defining a function with the keyword `def`, Python also allows the definition of functions in the sense of calculus, that is, relations that transform scalars or vectors into other scalars or vectors. This is possible through the `lambda` option. In this example we use the `lambda` option to create the function:
\begin{equation}
f(x)=x^3+4x^2-10.
\end{equation}
<div class="alert alert-warning">
**Questions**
Use a terminal or an independent script to test the use of the `lambda` option for defining a function. Try different functions.
</div>
```python
fun = lambda x: x**3 + 4.0*x**2 - 10.0
```
Once the function $f(x)$ has been created using the `lambda` option, we define a set of evaluation points. The number of points is given by the variable `npts`, and we use the `linspace` function from the `numpy` module to create an evenly spaced distribution of points between $x = -1.0$ and $x = +1.0$. At the same time, the zero-filled array `yy` will store the values of the function at the points stored in `xx`.
Note that Python starts counting the elements of arrays and other data structures at position 0, so we start counting from zero to remain consistent with the implementation.
```python
npts = 200
xx = np.linspace(-1, 1, npts)
yy = np.zeros(npts)
```
With the function now available, we can compute the (known) values ready to be used in the interpolation scheme. These values will be stored in the array `fd`. To compute each value of the function we use the (already available) function `fun`, the name we used when declaring the function with the `lambda` option.
<div class="alert alert-warning">
**Questions**
Try different names in the definition of the function created with the `lambda` option and check whether your code still works.
</div>
```python
pts = np.array([-1, 1, 0])
fd = fun(pts)
```
Now evaluate the function at the `npts` points and plot it:
```python
plt.figure()
yy = fun(xx)
plt.plot(xx, yy)
plt.plot(pts, fd, 'ko')
```
### Step 3: Finding the Lagrange interpolation polynomials
Let us now compute the Lagrange polynomials by invoking the function `lagrange_poly()`. We create an empty list called `pol`, and every time we determine a new polynomial we add it to the list using the `.append()` method. At this point the polynomials stored in the list `pol` exist in symbolic notation (as if we were solving the problem by hand) and are ready to be evaluated at specific values of $x$.
```python
x = sym.symbols('x')
pol = []
pol.append(sym.simplify(lagrange_poly(x, 2, 0, [-1,1,0])))
pol.append(sym.simplify(lagrange_poly(x, 2, 1, [-1,1,0])))
pol.append(sym.simplify(lagrange_poly(x, 2, 2, [-1,1,0])))
pol
```
<div class="alert alert-warning">
**Questions**
- Plot the 3 second-order Lagrange polynomials and verify that they satisfy the property $L_I(x_J)=\delta_{IJ}$.
- Add more points to the interpolation domain and plot the resulting polynomials. What is the effect on the polynomials?
</div>
The `subs` method from the `sympy` module substitutes the value stored in `xx[i]` for the independent variable $x$.
```python
plt.figure()
for k in range(3):
for i in range(npts):
yy[i] = pol[k].subs([(x, xx[i])])
plt.plot(xx, yy)
```
### Step 4: Finding the interpolating polynomial to approximate the function $f(x)$
Let us now build the complete approximating polynomial $\hat f(x)$ according to the linear superposition
$$\hat{f}(x) = L_0(x)f(x_0) + L_1(x) f(x_1) + L_2(x) f(x_2)\, ,$$
using the (already available) list `pol` that stores the 3 generated polynomials. Just for illustration, we first show it with `display`. The evaluated version of $\hat{f}(x)$ will be stored in the array `yy`.
```python
display(pol[0]*fd[0] + pol[1]*fd[1] + pol[2]*fd[2])
```
```python
plt.figure()
for i in range(npts):
yy[i] = fd[0]*pol[0].subs([(x, xx[i])]) + fd[1]*pol[1].subs([(x, xx[i])]) \
+ fd[2]*pol[2].subs([(x, xx[i])])
zz = fun(xx)
plt.plot([-1, 1, 0], fd, 'ko')
plt.plot(xx, yy, 'r--')
plt.plot(xx, zz)
```
Note that, due to the difference in order between the approximation $\hat{f}(x)$ and the function $f(x)$, the interpolating polynomial does not fully coincide with the function, although it does reproduce the correct values at the nodal points.
<div class="alert alert-warning">
**Question**
How can we improve the approximation $\hat{f}(x)$ to the function $f(x)$?
</div>
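As a cross-check (a minimal sketch; note that `scipy.interpolate` was imported above but not yet used), the same interpolating polynomial can be obtained numerically with `scipy.interpolate.lagrange`:
```python
poly = interpolate.lagrange(pts, fd)
print(poly)  # the same quadratic obtained symbolically
plt.figure()
plt.plot(xx, poly(xx), 'r--')
plt.plot(xx, fun(xx))
```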
### Step 5: Finding the derivatives
Let us use the `lambda` option once more to define a new function `fdx` corresponding to the first derivative:
$$f'(x) = 3x^2 + 8x\, .$$
The derivatives at the nodal points will be stored in the array `fc`.
```python
fdx = lambda x: 3*x**2 + 8.0*x
fc = np.array([fdx(-1.0), fdx(1.0), fdx(0.0)])
```
The interpolated derivatives are computed according to
\begin{equation}
\hat{f}'(x) = \frac{\mathrm{d}L_0(x)}{\mathrm{d}x} f_0 + \frac{\mathrm{d}L_1(x)}{\mathrm{d}x} f_1 + \frac{\mathrm{d}L_2(x)}{\mathrm{d}x} f_2\, .
\end{equation}
```python
dpol = []
dpol.append(sym.diff(pol[0], x))
dpol.append(sym.diff(pol[1], x))
dpol.append(sym.diff(pol[2], x))
display(dpol)
display(dpol[0]*fd[0] + dpol[1]*fd[1] + dpol[2]*fd[2])
plt.figure()
yy = fdx(xx)
plt.plot(xx, yy, 'r--')
plt.plot([-1, 1, 0], fc, 'ko')
for i in range(npts):
yy[i] = fd[0]*dpol[0].subs([(x, xx[i])]) + fd[1]*dpol[1].subs([(x, xx[i])])\
+ fd[2]*dpol[2].subs([(x, xx[i])])
plt.plot(xx, yy)
```
## Glossary of terms
**Product formula:** Expression used to compute the Lagrange interpolation polynomials.
**Shape function:** Name given to an interpolation polynomial in the context of the finite element method.
**Nodal point:** (Also node.) Name given to the specific points where the value of a function is known, used in the construction of the interpolation scheme.
**Subroutine:** Independent block of code that performs a specific computational task within a main program.
**Interpolation matrix:** One- or two-dimensional array that stores the shape functions of a given interpolation scheme.
## In-class activity
The purpose of this activity is to familiarize the student with the use of the Lagrange interpolation method in a particular engineering context, namely the finite element method. The finite element method uses interpolation domains of constant size (called **elements**), allowing the use of fixed interpolation functions and, at the same time, favoring automation in computer programs. In a typical finite element application, the nodal values of a function (for example, the displacement) are determined by solving a system of algebraic equations. These nodal values are subsequently used to find the displacement over the whole problem domain, and also to perform further computations and obtain additional information about the physical problem.
**Follow the steps below to implement, in an independent notebook, an interpolation scheme typical of the finite element method. The resulting interpolation scheme will correspond to a single element.**
* Assuming that a constant interpolation domain is defined as $x \in [-1.0, +1.0]$, with nodal points at $x_0 = -1.0$, $x_1 = +1.0$, $x_2 = 0.0$, $x_3 = -0.25$ and $x_4 = +0.25$, use the function `lagrange_poly()` to generate the interpolation functions associated with these 5 nodal points. Present the resulting polynomials in a plot.
* Use the interpolation functions found in the previous step to define a matrix $[N(x)]$, hereafter called the **interpolation matrix**, so that the interpolation operation can be carried out through matrix operations as:
$$\hat{f}(x) = [N(x)] \{F\}\, ,$$
where $F$ is a vector that stores the nodal values of the function.
* In your notebook, print the symbolic version of the interpolation matrix.
* In your notebook, plot a function $f(x)$ (representing the displacement of a bar) and its approximate version $\hat{f}(x)$ using the matrix scheme developed in the previous item. The code must also plot the first derivative of the function.
* To carry out the interpolation, write in your notebook a Python function or subroutine that receives as input parameters the coordinates $x$ where an approximation of the function is desired, together with the vector of nodal values of the displacement function, and that returns the interpolated value of the function.
## Notebook format
The next cell changes the format of the notebook.
```python
from IPython.core.display import HTML
def css_styling():
styles = open('./nb_style.css', 'r').read()
return HTML(styles)
css_styling()
```
\section{Taylor Series}\label{sec:taylorseries}
% % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % % %
We have seen that some functions can be represented as series, which
may give valuable information about the function. So far, we have seen
only those examples that result from manipulation of our one
fundamental example, the geometric series. We would like to start with
a given function and produce a series to represent it, if possible.
Suppose that $\ds f(x)=\sum_{n=0}^\infty a_nx^n$ on some interval of
convergence centered at 0. Then we know that we can compute derivatives of $f$ by
taking derivatives of the terms of the series. Let's look at the first
few in general:
\begin{align*}
f'(x)&=\sum_{n=1}^\infty n a_n x^{n-1}=a_1 + 2a_2x+3a_3x^2+4a_4x^3+\cdots \\
f''(x)&=\sum_{n=2}^\infty n(n-1) a_n x^{n-2}=2a_2+3\cdot2a_3x
+4\cdot3a_4x^2+\cdots \\
f'''(x)&=\sum_{n=3}^\infty n(n-1)(n-2) a_n x^{n-3}=3\cdot2a_3
+4\cdot3\cdot2a_4x+\cdots \\
\end{align*}
By examining these it's not hard to discern the general pattern. The
$k$th derivative must be
\begin{align*}
f^{(k)}(x)&=\sum_{n=k}^\infty n(n-1)(n-2)\cdots(n-k+1)a_nx^{n-k} \\
&=k(k-1)(k-2)\cdots(2)(1)a_k+(k+1)(k)\cdots(2)a_{k+1}x+{} \\
&\qquad {}+(k+2)(k+1)\cdots(3)a_{k+2}x^2+\cdots \\
\end{align*}
We can express this more clearly by using factorial notation:
\[
f^{(k)}(x)=\sum_{n=k}^\infty {n!\over (n-k)!}a_nx^{n-k}=
k!a_k+(k+1)!a_{k+1}x+{(k+2)!\over 2!}a_{k+2}x^2+\cdots
\]
We can solve for $a_n$ by substituting $x=0$ in the formula for $f^{(k)}(x)$:
\[f^{(k)}(0)=k!a_k+\sum_{n=k+1}^\infty {n!\over (n-k)!}a_n0^{n-k}=k!a_k,\]
\[a_k={f^{(k)}(0)\over k!}.\]
Note that the original series for $f$ yields $f(0)=a_0$.
So if a function $f$ can be represented by a series, we can easily find such a series.
Given a function $f$, the series
\[\sum_{n=0}^\infty {f^{(n)}(0)\over n!}x^n\]
is called the \dfont{Maclaurin
series} for $f$.
\begin{example}{Maclaurin Series}{MacSeriesOne}
Find the Maclaurin series for $f(x)=1/(1-x)$.
\end{example}
\begin{solution}
We need to
compute the derivatives of $f$ (and hope to spot a pattern).
\begin{align*}
f(x)&=(1-x)^{-1} \\
f'(x)&=(1-x)^{-2} \\
f''(x)&=2(1-x)^{-3} \\
f'''(x)&=6(1-x)^{-4} \\
f^{(4)}(x)&=4!(1-x)^{-5} \\
&\vdots \\
f^{(n)}(x)&=n!(1-x)^{-n-1} \\
\end{align*}
So
\[a_n={f^{(n)}(0)\over n!}={n!(1-0)^{-n-1}\over n!}=1\]
and the Maclaurin series is
\[\sum_{n=0}^\infty 1\cdot x^n=\sum_{n=0}^\infty x^n,\]
the geometric series.
\end{solution}
A warning is in order here. Given a function $f$ we may be able to
compute the Maclaurin series, but that does not mean we have found a
series representation for $f$. We still need to know where the series
converges and whether, where it converges, it converges to $f(x)$. While
for most commonly encountered functions the Maclaurin series does
indeed converge to $f$ on some interval, this is not true of all
functions, so care is required.
As a practical matter, if we are interested in using a series to
approximate a function, we will need some finite number of terms of
the series. Even for functions with messy derivatives we can compute
these using computer software like Sage. If we want to describe a series
completely, we would like to be able to write down a formula for a typical
term in the series. Fortunately, a few of the most important functions are very
easy.
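For instance (a minimal sketch using the Python library SymPy; Sage exposes the same functionality), the first terms of a Maclaurin series can be computed directly:
\begin{verbatim}
>>> import sympy
>>> x = sympy.symbols('x')
>>> sympy.series(sympy.tan(x), x, 0, 4)
x + x**3/3 + O(x**4)
\end{verbatim}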
\begin{example}{Maclaurin Series}{MacSeriesTwo}
Find the Maclaurin series for $\sin x$.
\end{example}
\begin{solution}
Computing the first few derivatives is simple: $f'(x)=\cos x$, $f''(x)=-\sin x$,
$f'''(x)=-\cos x$, $\ds f^{(4)}(x)=\sin x$, and then the pattern
repeats. The values of the derivative when $x=0$ are:
1, 0, $-1$, 0, 1, 0, $-1$, 0,\dots, and so the Maclaurin series is
\[
x-{x^3\over 3!}+{x^5\over 5!}-\cdots=
\sum_{n=0}^\infty (-1)^n{x^{2n+1}\over (2n+1)!}.
\]
We should always determine the radius of convergence:
\[
\lim_{n\to\infty} {|x|^{2n+3}\over (2n+3)!}{(2n+1)!\over |x|^{2n+1}}
=\lim_{n\to\infty} {|x|^2\over (2n+3)(2n+2)}=0,
\]
so the series converges for every $x$. Since it turns out that this
series does indeed converge to $\sin x$ everywhere, we have a series
representation for $\sin x$ for every $x$.
\end{solution}
Sometimes the formula for the $n$th derivative of a function $f$ is
difficult to discover, but a combination of a known Maclaurin series
and some algebraic manipulation leads easily to the Maclaurin series
for $f$.
\begin{example}{Maclaurin Series}{MacSeriesThree}
Find the Maclaurin series for $x\sin(-x)$.
\end{example}
\begin{solution}
To get from $\sin x$ to $x\sin(-x)$ we substitute $-x$ for $x$ and
then multiply by $x$. We can do the same thing to the series for $\sin
x$:
\[
x\sum_{n=0}^\infty (-1)^n{(-x)^{2n+1}\over (2n+1)!}
=x\sum_{n=0}^\infty (-1)^{n}(-1)^{2n+1}{x^{2n+1}\over (2n+1)!}
=\sum_{n=0}^\infty (-1)^{n+1}{x^{2n+2}\over (2n+1)!}.
\]
\end{solution}
As we have seen, a power series can be centered at a point
other than zero, and the method that produces the Maclaurin series can
also produce such series.
\begin{example}{Taylor Series}{TaylorSeriesOne}
Find a series centered at $-2$ for $1/(1-x)$.
\end{example}
\begin{solution}
If the series is $\ds\sum_{n=0}^\infty a_n(x+2)^n$ then looking at the
$k$th derivative:
$$k!(1-x)^{-k-1}=\sum_{n=k}^\infty {n!\over (n-k)!}a_n(x+2)^{n-k}$$
and substituting $x=-2$ we get
$\ds k!3^{-k-1}=k!a_k$ and $\ds a_k=3^{-k-1}=1/3^{k+1}$, so the series is
$$\sum_{n=0}^\infty {(x+2)^n\over 3^{n+1}}.$$
\end{solution}
Such a series is called the
\dfont{Taylor series} for the function,
and the general term has the form
\[{f^{(n)}(a)\over n!}(x-a)^n.\]
A Maclaurin series is simply a Taylor series with $a=0$.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\Opensolutionfile{solutions}[ex]
\section*{Exercises for \ref{sec:taylorseries}}
\begin{enumialphparenastyle}
\begin{ex}
For each function, find the Maclaurin series or Taylor series centered
at $a$, and the radius of convergence.
\begin{enumerate}
\item $\cos x$
\item $\ds e^x$
\item $1/x$, $a=5$
\item $\ln x$, $a=1$
\item $\ln x$, $a=2$
\item $\ds 1/x^2$, $a=1$
\item $\ds 1/\sqrt{1-x}$
\item Find the first four terms of the Maclaurin series for $\tan
x$ (up to and including the $\ds x^3$ term).
\item Use a combination of Maclaurin series and algebraic
manipulation to find a series centered at zero for
$\ds x\cos (x^2)$.
\item Use a combination of Maclaurin series and algebraic
manipulation to find a series centered at zero for
$\ds xe^{-x}$.
\end{enumerate}
\begin{sol}
\begin{enumerate}
\item $\ds\sum_{n=0}^\infty (-1)^n x^{2n}/(2n)!$, $R=\infty$
\item $\ds\sum_{n=0}^\infty x^n/n!$, $R=\infty$
\item $\ds\sum_{n=0}^\infty (-1)^n{(x-5)^n\over 5^{n+1}}$, $R=5$
\item $\ds\sum_{n=1}^\infty (-1)^{n-1}{(x-1)^n\over n}$, $R=1$
\item $\ds\ln(2)+\sum_{n=1}^\infty (-1)^{n-1}{(x-2)^n\over n 2^n}$, $R=2$
\item $\ds\sum_{n=0}^\infty (-1)^n(n+1)(x-1)^n$, $R=1$
\item $\ds1+\sum_{n=1}^\infty {1\cdot3\cdot5\cdots(2n-1)\over
n!2^n} x^n=1+\sum_{n=1}^\infty {(2n-1)!\over 2^{2n-1}(n-1)!\,n!}x^n$, $R=1$
\item $\ds x+x^3/3$
\item $\ds\sum_{n=0}^\infty (-1)^n x^{4n+1}/(2n)!$
\item $\ds\sum_{n=0}^\infty (-1)^n x^{n+1}/n!$
\end{enumerate}
\end{sol}
\end{ex}
\end{enumialphparenastyle}
module Algebra.Dioid.Bool where
open import Algebra.Dioid
data Bool : Set where
true : Bool
false : Bool
data _≡_ : (x y : Bool) -> Set where
reflexivity : ∀ {x : Bool} -> x ≡ x
symmetry : ∀ {x y : Bool} -> x ≡ y -> y ≡ x
transitivity : ∀ {x y z : Bool} -> x ≡ y -> y ≡ z -> x ≡ z
_or_ : Bool -> Bool -> Bool
false or false = false
_ or _ = true
_and_ : Bool -> Bool -> Bool
true and true = true
_ and _ = false
{-# OPTIONS --without-K --safe #-}
open import Definition.Typed.EqualityRelation
module Definition.LogicalRelation.Properties.Universe {{eqrel : EqRelSet}} where
open EqRelSet {{...}}
open import Definition.Untyped
open import Definition.Typed
open import Definition.LogicalRelation
open import Definition.LogicalRelation.ShapeView
open import Definition.LogicalRelation.Irrelevance
open import Tools.Embedding
-- Helper function for reducible terms of type U for specific type derivations.
univEq′ : ∀ {l Γ A} ([U] : Γ ⊩⟨ l ⟩U) → Γ ⊩⟨ l ⟩ A ∷ U / U-intr [U] → Γ ⊩⟨ ⁰ ⟩ A
univEq′ (noemb (Uᵣ .⁰ 0<1 ⊢Γ)) (Uₜ A₁ d typeA A≡A [A]) = [A]
univEq′ (emb 0<1 x) (ιx [A]) = univEq′ x [A]
-- Reducible terms of type U are reducible types.
univEq : ∀ {l Γ A} ([U] : Γ ⊩⟨ l ⟩ U) → Γ ⊩⟨ l ⟩ A ∷ U / [U] → Γ ⊩⟨ ⁰ ⟩ A
univEq [U] [A] = univEq′ (U-elim [U])
(irrelevanceTerm [U] (U-intr (U-elim [U])) [A])
-- Helper function for reducible term equality of type U for specific type derivations.
univEqEq′ : ∀ {l l′ Γ A B} ([U] : Γ ⊩⟨ l ⟩U) ([A] : Γ ⊩⟨ l′ ⟩ A)
→ Γ ⊩⟨ l ⟩ A ≡ B ∷ U / U-intr [U]
→ Γ ⊩⟨ l′ ⟩ A ≡ B / [A]
univEqEq′ (noemb (Uᵣ .⁰ 0<1 ⊢Γ)) [A]
(Uₜ₌ A₁ B₁ d d′ typeA typeB A≡B [t] [u] [t≡u]) =
irrelevanceEq [t] [A] [t≡u]
univEqEq′ (emb 0<1 x) [A] (ιx [A≡B]) = univEqEq′ x [A] [A≡B]
-- Reducible term equality of type U is reducible type equality.
univEqEq : ∀ {l l′ Γ A B} ([U] : Γ ⊩⟨ l ⟩ U) ([A] : Γ ⊩⟨ l′ ⟩ A)
→ Γ ⊩⟨ l ⟩ A ≡ B ∷ U / [U]
→ Γ ⊩⟨ l′ ⟩ A ≡ B / [A]
univEqEq [U] [A] [A≡B] =
let [A≡B]′ = irrelevanceEqTerm [U] (U-intr (U-elim [U])) [A≡B]
in univEqEq′ (U-elim [U]) [A] [A≡B]′
module Kind (Gnd : Set)(U : Set)(T : U -> Set) where
open import Basics
open import Pr
open import Nom
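-- A kind is either a ground type, a (kind-level) function space, or a
-- kind depending on the elements (decoded via T) of a universe code u.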
data Kind : Set where
Ty : Gnd -> Kind
_|>_ : Kind -> Kind -> Kind
Pi : (u : U) -> (T u -> Kind) -> Kind
infixr 60 _|>_
[STATEMENT]
lemma CSP_is_CSP1:
assumes A: "is_CSP_process P"
shows "P is CSP1 healthy"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. P is CSP1 healthy
[PROOF STEP]
using A
[PROOF STATE]
proof (prove)
using this:
is_CSP_process P
goal (1 subgoal):
1. P is CSP1 healthy
[PROOF STEP]
by (auto simp: is_CSP_process_def design_defs)
Did you know that if you travel to London by train, you can get two-for-one tickets to a host of attractions? I didn't until Great Northern and Thameslink told me all about it.
It's actually really simple: you just hop over to thameslinkrailway.com, buy your tickets, and then visit thameslinkrailway.com/destinations/london to grab a 2-for-1 voucher for your chosen attraction. There's LOADS to choose from, from a ride on the London Eye to aquarium and zoo tickets and more.
We planned our own action-packed family weekend in London to try the offer out, selecting the London Transport Museum as our chosen attraction. On Friday evening, we all piled onto the train, arranged our snacks on the table, and chatted away happily until we arrived into London just in time for dinner.
One Aldwych is situated just two blocks from Covent Garden in the heart of theatreland.
A gorgeous 5 star hotel with stunning amenities, One Aldwych is a great family option as they offer the option of adjoining rooms so that everyone can get a good night’s sleep.
Our room was up on the third floor and was frankly breathtaking.
Down a hallway we had the children’s bedroom to the left with twin beds and en suite, the master bedroom to the right, also with ensuite, a well-stocked kitchenette to the side of the corridor, and at the end, a spacious living area with dining table and views right over central London. Wow.
We couldn’t have been more comfortable or more well looked after during our stay. This is a wonderful hotel for a family trip to London. Visit onealdwych.com for more information.
Almost reluctant to leave our lovely suite, we headed down for dinner at Indigo on the mezzanine floor at One Aldwych. The service was excellent, with the staff bringing out colouring pencils and colouring sheets for the kids, and taking the time to crouch down to their level and listen intently whenever the kids had something to say.
We were brought gluten and dairy-free samphire bread to enjoy with an oil dip as we perused the menus and we were relieved to see that children are well catered for. J opted for flatbread and houmous with crudités, while JD opted for the battered fish and chips.
I selected a beetroot salad starter, followed by gnocchi with girolles, peas, broad beans and Parmesan. Both excellent. Mark chose a pork salad with crackling and truffle dressing, followed by lamb rump with samphire, smoked aubergine and capers. A flavourful, comforting selection.
To finish, JD chose a generously fruit-filled knickerbocker glory while J opted for marshmallows, which came with a generous jug of chocolate sauce (a rich, creamy ganache) that she poured on for herself before running out of energy and curling up happily on Mark’s lap.
Unable to resist dessert, Mark and I decided to share a bowl of orange polenta cake, which came with delicate slivers of rhubarb. Divine!
Check availability and book your meal at onealdwych.com/food-drink/indigo.
In the morning, we woke early to enjoy our gorgeous suite before heading back to Indigo for breakfast, which is included in the room price for most bookings.
Alongside a fresh basket of pastries and plenty of hot tea/coffee and juice for the kids, we were presented with a good range of options on the menu.
I enjoyed an excellent vegetarian cooked breakfast, while Mark opted for the meaty version and the kids enjoyed cereal and pancakes.
Well stocked with carbs for the day ahead, we left our bags with reception and headed out to explore the area.
We hadn't been walking long when we spotted that J had rather covered herself in maple syrup, so we took a detour to GAP Kids and soon had her kitted out in a new outfit, which I think she carries off rather brilliantly.
We strolled about, taking note of our theatre when we spotted it, then headed into the piazza to enjoy some live performances and a bit more shopping. The kids loved it, and J didn't want to stop watching the man who put his whole body through a coat hanger!
On the other side of Covent Garden, we reached the London Transport Museum. Using the 2-for-1 voucher we’d printed was easy. Children go free, so we just presented our voucher along with our train tickets, and paid entry of £17 instead of £34. Brilliant!
JD and I had visited the London Transport Museum before, but it was J and Mark’s first time and it was wonderful to see how much everyone enjoyed it. Across one main floor and two mezzanines, there are a huge number of exhibits and most are interactive, from buses to taxis and tube trains and even a horse and carriage!
Each child who visits is given a ‘Stamper Trail’ card, which allows them to collect 13 stamps from numbered points all over the building. It’s a brilliant way to keep kids interested right through the trip, and ensured we visited every corner of the museum.
The kids weren't all that excited when I told them we were nipping back to the hotel for lunch. That is, until they discovered it was a Charlie and the Chocolate Factory themed afternoon tea. Their expectations were instantly raised sky high... and One Aldwych exceeded them!
We started with these stunning bubbling dry ice cocktails, delivered in glass teapots, they completely captured our imagination.
Then it was time for the savoury course, little sandwiches and quiches, divided into plates to suit Mark (a meat-eater), the kids, and me (a veggie). It was all perfect – especially the crumbly heritage tomato tart.
Next came the sweet course, and the children’s eyes were as round as saucers as treat after treat appeared, from blueberry loaves, to conserves and scones, to gingerbread cake pops, candy floss, golden eggs filled with cheesecake, and more! Everything tasted exceptionally good – in fact I’d say this quite possibly tops the list of my very favourite food experiences to date.
With full tummies and happy hearts, we rounded off this magical treat with hot chocolates, before dashing off to the theatre.
Book your One Aldwych afternoon tea at onealdwych.com/food-drink/afternoon-tea. Prices start from £37.50 (or £27.50 for children under the age of 12).
The kids had never seen a West End musical before so we were so excited to head back into the heart of London’s theatreland to see a full performance of Charlie and the Chocolate Factory.
The sets, characters and performances in the show were loud, proud and larger than life. We enjoyed every second of this clever show, which boasts original songs and dialogue you won’t have seen in the movies, expertly performed by a highly skilled, pitch perfect and completely engaging cast.
The ending, without giving too much away, was beautifully done. I highly recommend going to see this show – the kids loved it and are already talking about going to see it again before it moves to Broadway in 2017.
Follow Charlie and the Chocolate factory on Facebook and Twitter and book your tickets from £25 at www.charlieandthechocolatefactory.com.
On a high from a truly wonderful weekend, we whizzed back to the hotel, grabbed our bags and then made our way to the station for the train ride home. The children both slept on the journey north, as did Mark and I!
We'll definitely be having another London adventure soon and will certainly take advantage of the Thameslink 2 for 1 offers. It's a brilliant way to get the most out of a trip to the big city. What's more, the 2 for 1 offer is also available across Cambridge & Brighton, so that's lots more reasons to hop on a train.
This is a commissioned post for Thameslink & Great Northern, who provided train and museum tickets. We attended the show and stayed at the hotel as guests of the press offices. The final three Charlie and the Chocolate Factory images are official production images reproduced here with permission.
Also thanks to VisitLondon.com for helping us plan the trip. Do check them out for lots more ideas for London family fun.
London Transport Museum is a great, interactive museum that details the history of transit in London. Interesting for kids of all ages!
My mouth is watering looking at all of those fabulous food pictures. No matter how many times I go to London, I feel like I only scratch the surface. Thanks for sharing, Emily.
Oh my, that food photography is simply amazing! I feel so discouraged when I see quality photos like yours, since I never seem to be able to produce high quality images. Guess I'll have to try harder!
From mathcomp
Require Import all_ssreflect.
From chip
Require Import extra connect acyclic closure check change hierarchical_sub.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Section ChangedHierarchicalSub.
Variables (A_top : eqType) (A_bot : eqType).
Variables (U' : finType) (V' : finType).
Variables (f'_top : U' -> A_top) (f'_bot : V' -> A_bot).
Variables (P_top : pred U') (P_bot : pred V').
Local Notation U := (sig_finType P_top).
Local Notation V := (sig_finType P_bot).
Variable (g'_top : rel U') (g'_bot : rel V').
Local Notation g'_top_rev := [rel x y | g'_top y x].
Local Notation g'_bot_rev := [rel x y | g'_bot y x].
Variables (f_top : U -> A_top) (f_bot : V -> A_bot).
Variables (g_top : rel U) (g_bot : rel V).
Local Notation g_top_rev := [rel x y | g_top y x].
Local Notation g_bot_rev := [rel x y | g_bot y x].
Variables (checkable' : pred V') (checkable : pred V).
Variable R : eqType.
Variables (check : V -> R) (check' : V' -> R).
Variables (p : U -> {set V}) (p' : U' -> {set V'}).
Hypothesis p_neq : forall (u u' : U), u <> u' -> p u <> p u'.
Hypothesis p'_neq : forall (u u' : U'), u <> u' -> p' u <> p' u'.
Hypothesis p_partition : partition (\bigcup_( u | u \in U ) [set p u]) [set: V].
Hypothesis p'_partition : partition (\bigcup_( u | u \in U' ) [set p' u]) [set: V'].
Hypothesis g_bot_top : forall (v v' : V) (u u' : U),
u <> u' -> g_bot v v' -> v \in p u -> v' \in p u' -> g_top u u'.
Hypothesis f_top_bot : forall (u : U),
f_top u = f'_top (val u) -> forall (v : V), v \in p u -> f_bot v = f'_bot (val v).
Local Notation insub_g_top x y := (insub_g g_top x y).
Local Notation g_top_U' := [rel x y : U' | insub_g_top x y].
Local Notation g_top_U'_rev := [rel x y | g_top_U' y x].
Local Notation insub_g_bot x y := (insub_g g_bot x y).
Local Notation g_bot_V' := [rel x y : V' | insub_g_bot x y].
Local Notation g_bot_V'_rev := [rel x y | g_bot_V' y x].
Hypothesis f_top_equal_g_top :
forall u, f_top u = f'_top (val u) -> forall u', g_top_U' (val u) u' = g'_top (val u) u'.
Hypothesis f_bot_equal_g_bot :
forall v, f_bot v = f'_bot (val v) -> forall v', g_bot_V' (val v) v' = g'_bot (val v) v'.
Hypothesis checkable_bot :
forall v, f_bot v = f'_bot (val v) -> checkable v = checkable' (val v).
Hypothesis check_bot :
forall v, checkable v -> checkable' (val v) ->
(forall v', connect g_bot_V' (val v) v' = connect g'_bot (val v) v') ->
(forall v', connect g_bot_V' (val v) (val v') -> f_bot v' = f'_bot (val v')) ->
check v = check' (val v).
Variable V_result_cert : seq (V * R).
Hypothesis V_result_certP :
forall v r, reflect (checkable v /\ check v = r) ((v,r) \in V_result_cert).
Hypothesis V_result_cert_uniq : uniq [seq vr.1 | vr <- V_result_cert].
Local Notation V_sub := (sig_finType (P_V_sub f'_top g_top f_top p)).
Local Notation g_bot_sub := [rel x y : V_sub | g_bot (val x) (val y)].
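(* Certified results computed before the change can be reused verbatim for
   checkable vertices that are not impacted; impacted checkable vertices
   are re-checked by check_impactedV'_sub_cert. *)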
Definition V'_result_filter_cert_sub :=
[seq (val vr.1, vr.2) | vr <- V_result_cert & val vr.1 \notin impactedVV' g_bot (modifiedV f'_bot f_bot)].
Definition check_all_cert_sub :=
check_impactedV'_sub_cert f'_top f'_bot g_top g_bot f_top f_bot checkable' check' p ++ V'_result_filter_cert_sub.
Definition check_all_cert_V'_sub :=
[seq vr.1 | vr <- check_all_cert_sub].
Lemma check_all_cert_complete_sub :
forall (v : V'), checkable' v -> v \in check_all_cert_V'_sub.
Proof.
move => v Hc.
rewrite /check_all_cert_V'_sub /check_all_cert_sub.
rewrite /check_impactedV'_sub_cert.
rewrite /checkable_impacted_fresh_sub.
rewrite /checkable_impactedV'_sub.
rewrite impactedV'_sub_eq.
apply: check_all_cert_complete; eauto.
- exact: p_neq.
- exact: p_partition.
- exact: g_bot_top.
- exact: f_top_bot.
Qed.
Lemma check_all_cert_sound_sub :
forall (v : V') (r : R), (v,r) \in check_all_cert_sub ->
checkable' v /\ check' v = r.
Proof.
move => v r.
rewrite /check_all_cert_sub.
rewrite /check_impactedV'_sub_cert.
rewrite /checkable_impacted_fresh_sub.
rewrite /checkable_impactedV'_sub.
rewrite impactedV'_sub_eq.
apply: check_all_cert_sound; eauto.
- exact: p_neq.
- exact: p_partition.
- exact: g_bot_top.
- exact: f_top_bot.
Qed.
Lemma check_all_cert_V'_sub_uniq : uniq check_all_cert_V'_sub.
Proof.
rewrite /check_all_cert_V'_sub.
rewrite /check_all_cert_sub.
rewrite /check_impactedV'_sub_cert.
rewrite /checkable_impacted_fresh_sub.
rewrite /checkable_impactedV'_sub.
rewrite impactedV'_sub_eq.
apply: check_all_cert_V'_uniq; eauto.
- exact: p_neq.
- exact: p_partition.
- exact: g_bot_top.
- exact: f_top_bot.
Qed.
End ChangedHierarchicalSub.
If $X$ is a contractible space and $f: X \to Y$ is a retraction map, then $Y$ is a contractible space.
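One way to see this: if $H : X \times [0,1] \to X$ is a homotopy with $H(x,0) = x$ and $H(x,1) = x_0$, and the retraction $f$ restricts to the identity on $Y$, then $K(y,t) = f(H(y,t))$ is a homotopy on $Y$ from the identity map to the constant map at $f(x_0)$, so $Y$ is contractible.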
abstract type Move end
"""
    FastMove(move_type, data)
Struct for holding fast moves; the type byte and the packed `data` field hold
all information that determines mechanics: type, STAB (same type attack bonus), power, energy, and cooldown.
Note that this is agnostic to the identity of the actual move itself.
"""
struct FastMove <: Move
move_type::UInt8
data::UInt16
end
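# Bit layout of `data`: cooldown in bits 0-2, energy in bits 3-7,
# power in bits 8-12, and the STAB flag in bit 13.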
"""
FastMove(gm_move, types)
Generate a fast move from the gamemaster entry of the move, and the types of
the mon using it (to determine STAB, same type attack bonus). This should only
be used internally, as generating the move from the name is a lot cleaner.
"""
function FastMove(gm_move::Dict{String,Any}, types::Tuple{UInt8, UInt8})
move_type = UInt8(findfirst(x -> typings[x] == gm_move["type"], 1:19))
STAB = move_type in types ? 0x0001 : 0x0000
power = UInt16(gm_move["power"])
energy = UInt16(gm_move["energyGain"])
cooldown = UInt16(gm_move["cooldown"] ÷ 500)
return FastMove(
move_type,
cooldown + energy << 3 + power << 8 + STAB << 13
)
end
"""
FastMove(move_name, types)
Generate a fast move from the name of the move, and the types of the mon using
it (to determine STAB, same type attack bonus).
"""
function FastMove(move_name::String, types::Tuple{UInt8, UInt8})
move_index = findfirst(isequal(move_name), map(x ->
gamemaster["moves"][x]["moveId"], 1:length(gamemaster["moves"])))
gm_move = gamemaster["moves"][move_index]
return FastMove(gm_move, types)
end
get_STAB(fm::FastMove) = iszero(fm.data >> 13) ? 10 : 12
get_power(fm::FastMove) = UInt8((fm.data >> 8) & 0x001f)
get_energy(fm::FastMove) = UInt8((fm.data >> 3) & 0x001f)
get_cooldown(fm::FastMove) = UInt8(fm.data & 0x0007)
"""
    ChargedMove(move_type, buff, data)
Struct for holding charged moves; the type byte, the `buff` field, and the packed
`data` field hold all information that determines mechanics: type, STAB (same type attack bonus), power, energy, and buff
information (chance, which buffs are applied and to whom). Note that this is
agnostic to the identity of the actual move itself.
"""
struct ChargedMove <: Move
move_type::UInt8
buff::StatBuffs
data::UInt16
end
"""
ChargedMove(gm_move, types)
Generate a charged move from the gamemaster entry of the move, and the types of
the mon using it (to determine STAB, same type attack bonus). This should only
be used internally, as generating the move from the name is a lot cleaner.
"""
function ChargedMove(gm_move::Dict{String,Any}, types::Tuple{UInt8, UInt8})
move_type = UInt8(findfirst(x -> typings[x] == gm_move["type"], 1:19))
STAB = move_type in types ? 0x0001 : 0x0000
buff_target = (haskey(gm_move, "buffs") &&
gm_move["buffTarget"] == "opponent") ? 0x0001 : 0x0000
power = UInt16(gm_move["power"] ÷ 5)
energy = UInt16(gm_move["energy"] ÷ 5)
buff = !haskey(gm_move, "buffs") ? defaultBuff :
StatBuffs(Int8(gm_move["buffs"][1]), Int8(gm_move["buffs"][2]))
buff_chance = !haskey(gm_move, "buffs") ? 0x0000 :
gm_move["buffApplyChance"] == ".1" ? 0x0001 :
gm_move["buffApplyChance"] == ".125" ? 0x0002 :
gm_move["buffApplyChance"] == ".2" ? 0x0003 :
gm_move["buffApplyChance"] == ".3" ? 0x0004 :
gm_move["buffApplyChance"] == ".5" ? 0x0005 : 0x0006
return ChargedMove(
move_type,
buff,
        # Non-overlapping bit layout: power bits 0-5, energy bits 6-9,
        # buff chance bits 10-12, buff target bit 13, STAB flag bit 14
        power + energy << 6 + buff_chance << 10 + buff_target << 13 + STAB << 14
)
end
"""
ChargedMove(move_name, types)
Generate a charged move from the name of the move, and the types of the mon
using it (to determine STAB, same type attack bonus)
"""
function ChargedMove(move_name::String, types)
#if move_name == "NONE"
# return ChargedMove(Int8(0), Int8(0), UInt8(0), Int8(0), Int8(0),
# defaultBuff, defaultBuff)
#end
move_index = findfirst(isequal(move_name), map(x ->
gamemaster["moves"][x]["moveId"], 1:length(gamemaster["moves"])))
gm_move = gamemaster["moves"][move_index]
return ChargedMove(gm_move, types)
end
get_power(cm::ChargedMove) = UInt8(0x0005 * (cm.data & 0x003f))
get_energy(cm::ChargedMove) = UInt8(0x0005 * ((cm.data >> 6) & 0x001f))
get_buff_target(cm::ChargedMove) = UInt8((cm.data >> 14) & 0x0001)
get_STAB(cm::ChargedMove) = iszero(cm.data >> 15) ? 10 : 12
function get_buff_chance(cm::ChargedMove)
    buff_chance = (cm.data >> 11) & 0x0007
return buff_chance == 0x0000 ? 0.0 :
buff_chance == 0x0006 ? 1.0 :
buff_chance == 0x0001 ? 0.1 :
buff_chance == 0x0002 ? 0.125 :
buff_chance == 0x0003 ? 0.2 :
buff_chance == 0x0004 ? 0.3 : 0.5
end
function buff_applies(cm::ChargedMove)
    buff_chance = (cm.data >> 11) & 0x0007
return buff_chance == 0x0000 ? false :
buff_chance == 0x0006 ? true :
buff_chance == 0x0001 ? rand(rb_rng, 1:10) == 1 :
buff_chance == 0x0002 ? rand(rb_rng,
(true, false, false, false, false, false, false, false)) :
buff_chance == 0x0003 ? rand(rb_rng, 1:5) == 1 :
buff_chance == 0x0004 ? rand(rb_rng, 1:10) < 4 :
rand(rb_rng, (true, false))
end
|
function xn=linEqSolveFirstnRows(A,b,n)
%%LINEQSOLVEFIRSTNROWS This function solves the problem A*x=b for the first
% n rows of x. The matrix A CAN be singular as long as
% a unique solution exists for the first n rows of x.
% This function is useful for finding the position
% estimate in an information filter state even before
% estimates of all of the other target state
% components have become observable. In such an
% instance, A is the inverse covariance matrix, x is
% the target state, and b is the information state.
%
%INPUTS: A The NXN matrix A in the equation A*x=b, where x is unknown. The
% matrix can be singular, but the first n rows of x should be
% observable.
% b The NX1 column vector b in the equation A*x=b.
% n The number of rows of x, starting from the first row and going
% down, for which one wishes to solve in the equation A*x=b. n must
% be less than or equal to the number of rows in A.
%
%OUTPUTS: xn The first n rows of the column vector of x solved from A*x=b.
% If any of the components of xn are not finite, then the
% problem was not completely observable. Because of how the
% problem is solved, if the kth component is not finite, then
% all subsequent components will not be finite, regardless of
% whether they are observable.
%
%The linear equation is solved by performing a modified QR decomposition on
%the matrix A such that A=Q*R, where Q is an orthogonal matrix and R is a
%LOWER triangular matrix. A standard QR decomposition produces an upper
%triangular matrix. By flipping the rows and columns of A and then
%performing the inverse operations on Q and R, one can get a decomposition
%where R is a lower-triangular matrix. One can then write R*x=Q'*b, since
%the transpose of an orthogonal matrix is its inverse. The first n rows of x
%can then be solved using forward substitution.
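%
%EXAMPLE (an illustrative sketch added for exposition, not from the original
%author): consider a singular A whose third state component is unobservable,
%but whose first two components are:
% A=[2,1,0;1,2,0;0,0,0]; x=[1;-1;0]; b=A*x;
% xn=linEqSolveFirstnRows(A,b,2)
%One would expect xn to be approximately [1;-1]; requesting n=3 instead
%should produce a non-finite third component, indicating that it is not
%observable.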
%
%The QR decomposition is computed with Matlab's built-in function qr. It is also
%discussed in Chapter 5.2 of [1].
%
%REFERENCES:
%[1] G. H. Golub and C. F. Van Loan, Matrix Computations, 4th ed.
% Baltimore, MD: Johns Hopkins University Press, 2013.
%
%June 2014 David F. Crouse, Naval Research Laboratory, Washington D.C.
%(UNCLASSIFIED) DISTRIBUTION STATEMENT A. Approved for public release.
%Perform a lower-triangular qr decomposition.
[Q,R]=qr(rot90(A,2));
Q=rot90(Q,2);
R=rot90(R,2);
%Now, Q*R=A and R is lower-triangular.
b=Q'*b;
%Perform forward substitution to solve for the first n components of x.
xn=zeros(n,1);
xn(1)=b(1)/R(1,1);
for curRow=2:n
xn(curRow)=(b(curRow)-sum(xn(1:(curRow-1))'.*R(curRow,1:(curRow-1))))/R(curRow,curRow);
end
end
%LICENSE:
%
%The source code is in the public domain and not licensed or under
%copyright. The information and software may be used freely by the public.
%As required by 17 U.S.C. 403, third parties producing copyrighted works
%consisting predominantly of the material produced by U.S. government
%agencies must provide notice with such work(s) identifying the U.S.
%Government material incorporated and stating that such material is not
%subject to copyright protection.
%
%Derived works shall not identify themselves in a manner that implies an
%endorsement by or an affiliation with the Naval Research Laboratory.
%
%RECIPIENT BEARS ALL RISK RELATING TO QUALITY AND PERFORMANCE OF THE
%SOFTWARE AND ANY RELATED MATERIALS, AND AGREES TO INDEMNIFY THE NAVAL
%RESEARCH LABORATORY FOR ALL THIRD-PARTY CLAIMS RESULTING FROM THE ACTIONS
%OF RECIPIENT IN THE USE OF THE SOFTWARE.
|
function pb = bsdsNonMaxSuppression(pbnew, pbold)
if any(size(pbold)~=size(pbnew)) % resize if any dimension differs
pbold = imresize(pbold, size(pbnew), 'nearest');
end
pb = pbnew.*double(pbold>0);
% [h, w] = size(pbnew);
% norient = size(pball, 3);
% theta = (0:norient-1)/norient*pi;
%
% [h2, w2, tmp] = size(pball);
% if h2~=h || w~=w2
% pball = imresize(pball, [h w], 'nearest');
% end
%
% % nonmax suppression and max over orientations
% [unused,maxo] = max(pball,[],3);
% pb = zeros(h,w);
% %theta = zeros(h,w);
% r = 2.5;
% for i = 1:norient,
% mask = (maxo == i);
% %a = fitparab(pball(:,:,i),r,r,theta(i));
% %pbi = nonmax(max(0,a),gtheta(i));
% pbi = nonmax(pbnew,theta(i));
% pb = max(pb,pbi.*mask);
% end
% pb = max(0,min(1,pb));
%
% % mask out 1-pixel border where nonmax suppression fails
% pb(1,:) = 0;
% pb(end,:) = 0;
% pb(:,1) = 0;
% pb(:,end) = 0;
|
If $i$ is not in $I$, then the family of sets $\{A_j\}_{j \in I \cup \{i\}}$ is disjoint if and only if $A_i$ is disjoint from $\bigcup_{j \in I} A_j$ and the family of sets $\{A_j\}_{j \in I}$ is disjoint. |
First Trailer For Sci-Fi Space Terror Film 'LIFE' Starring Ryan Reynolds & Jake Gyllenhaal Arrives!
Earlier tonight, during the latest episode of The Walking Dead, Sony Pictures unveiled the first look at the new sci-fi space adventure LIFE! Even though the film doesn't hit theaters until Memorial Day weekend next year, the studio appears to be positioning it as a huge blockbuster event next summer.
Jake Gyllenhaal, Ryan Reynolds, and Rebecca Ferguson star in the new film, which is directed by Safe House helmer Daniel Espinosa and boasts a screenplay from the Deadpool and Zombieland pair of Rhett Reese and Paul Wernick.
Check out the official trailer below and sound off in the comments section with your thoughts. Are you pumped for this one? Why or why not? As always, thanks for stopping by ECMOVIEGUYS!
Life is set to hit theaters on May 26th, 2017. |
All compositions by Ornette Coleman.
|
{-# OPTIONS --cubical --no-import-sorts --safe #-}
module Cubical.HITs.Rationals.SigmaQ where
open import Cubical.HITs.Rationals.SigmaQ.Base public
open import Cubical.HITs.Rationals.SigmaQ.Properties public
|
{-# OPTIONS --cubical --safe #-}
module Algebra.Construct.Free.Semilattice.Relation.Unary.All.Def where
open import Prelude hiding (⊥; ⊤)
open import Algebra.Construct.Free.Semilattice.Eliminators
open import Algebra.Construct.Free.Semilattice.Definition
open import Cubical.Foundations.HLevels
open import Data.Empty.UniversePolymorphic
open import HITs.PropositionalTruncation.Sugar
open import HITs.PropositionalTruncation.Properties
open import HITs.PropositionalTruncation
open import Data.Unit.UniversePolymorphic
private
variable p : Level
dup-◻ : (P : A → Type p) → (x : A) (xs : Type p) → (∥ P x ∥ × ∥ P x ∥ × xs) ⇔ (∥ P x ∥ × xs)
dup-◻ P _ _ .fun = snd
dup-◻ P _ _ .inv (x , xs) = x , x , xs
dup-◻ P _ _ .rightInv (x , xs) = refl
dup-◻ P _ _ .leftInv (x₁ , x₂ , xs) i .fst = squash x₂ x₁ i
dup-◻ P _ _ .leftInv (x₁ , x₂ , xs) i .snd = (x₂ , xs)
com-◻ : (P : A → Type p) → (x y : A) (xs : Type p) → (∥ P x ∥ × ∥ P y ∥ × xs) ⇔ (∥ P y ∥ × ∥ P x ∥ × xs)
com-◻ P _ _ _ .fun (x , y , xs) = y , x , xs
com-◻ P _ _ _ .inv (y , x , xs) = x , y , xs
com-◻ P _ _ _ .leftInv (x , y , xs) = refl
com-◻ P _ _ _ .rightInv (x , y , xs) = refl
◻′ : (P : A → Type p) → A ↘ hProp p
[ ◻′ P ]-set = isSetHProp
([ ◻′ P ] x ∷ (xs , hxs)) .fst = ∥ P x ∥ × xs
([ ◻′ P ] x ∷ (xs , hxs)) .snd y z = ΣProp≡ (λ _ → hxs) (squash (fst y) (fst z))
[ ◻′ P ][] = ⊤ , λ x y _ → x
[ ◻′ P ]-dup x xs = ΣProp≡ (λ _ → isPropIsProp) (isoToPath (dup-◻ P x (xs .fst)))
[ ◻′ P ]-com x y xs = ΣProp≡ (λ _ → isPropIsProp) (isoToPath (com-◻ P x y (xs .fst)))
◻ : (P : A → Type p) → 𝒦 A → Type p
◻ P xs = [ ◻′ P ]↓ xs .fst
isProp-◻ : ∀ {P : A → Type p} {xs} → isProp (◻ P xs)
isProp-◻ {P = P} {xs = xs} = [ ◻′ P ]↓ xs .snd
|
(* Author: Tobias Nipkow *)
header "Tries (List Version)"
theory Tries
imports Maps
begin
subsection {* Association lists *}
primrec rem_alist :: "'key \<Rightarrow> ('key * 'val)list \<Rightarrow> ('key * 'val)list" where
"rem_alist k [] = []" |
"rem_alist k (p#ps) = (if fst p = k then ps else p # rem_alist k ps)"
lemma rem_alist_id[simp]: "k \<notin> fst ` set al \<Longrightarrow> rem_alist k al = al"
by(induct al, auto)
lemma set_rem_alist:
"distinct(map fst al) \<Longrightarrow> set (rem_alist a al) =
(set al) - {(a,the(map_of al a))}"
apply(induct al)
apply simp
apply simp
apply fastforce
done
lemma fst_set_rem_alist[simp]:
"distinct(map fst al) \<Longrightarrow> fst ` set (rem_alist a al) = fst ` (set al) - {a}"
apply(induct al)
apply simp
apply simp
apply blast
done
(*
lemma fst_set_rem_alist[simp]:
"set (rem_alist a al) = (set al) - {(a,map_of al a)}"
apply(induct al)
apply simp
apply simp
apply blast
done
*)
lemma distinct_map_fst_rem_alist[simp]:
"distinct (map fst al) \<Longrightarrow> distinct (map fst (rem_alist a al))"
by(induct al) simp_all
lemma map_of_rem_distinct_alist: "distinct(map fst al) \<Longrightarrow>
map_of(rem_alist k al) = (map_of al)(k := None)"
apply(induct al)
apply simp
apply clarify
apply(rule ext)
apply auto
apply(erule ssubst)
apply simp
done
lemma map_of_rem_alist[simp]:
"k' \<noteq> k \<Longrightarrow> map_of (rem_alist k al) k' = map_of al k'"
by(induct al, auto)
subsection {* Tries *}
datatype ('a,'v)tries = Tries "'v list" "('a * ('a,'v)tries)list"
primrec "values" :: "('a,'v)tries \<Rightarrow> 'v list" where
"values(Tries vs al) = vs"
primrec alist :: "('a,'v)tries \<Rightarrow> ('a * ('a,'v)tries)list" where
"alist(Tries vs al) = al"
fun inv :: "('a,'v)tries \<Rightarrow> bool" where
"inv(Tries _ al) = (distinct(map fst al) & (\<forall>(a,t) \<in> set al. inv t))"
primrec lookup :: "('a,'v)tries \<Rightarrow> 'a list \<Rightarrow> 'v list" where
"lookup t [] = values t" |
"lookup t (a#as) = (case map_of (alist t) a of
None \<Rightarrow> []
| Some at \<Rightarrow> lookup at as)"
primrec update :: "('a,'v)tries \<Rightarrow> 'a list \<Rightarrow> 'v list \<Rightarrow> ('a,'v)tries" where
"update t [] vs = Tries vs (alist t)" |
"update t (a#as) vs =
(let tt = (case map_of (alist t) a of
None \<Rightarrow> Tries [] [] | Some at \<Rightarrow> at)
in Tries (values t) ((a,update tt as vs) # rem_alist a (alist t)))"
primrec insert :: "('a,'v)tries \<Rightarrow> 'a list \<Rightarrow> 'v \<Rightarrow> ('a,'v)tries" where
"insert t [] v = Tries (v # values t) (alist t)" |
"insert t (a#as) vs =
(let tt = (case map_of (alist t) a of
None \<Rightarrow> Tries [] [] | Some at \<Rightarrow> at)
in Tries (values t) ((a,insert tt as vs) # rem_alist a (alist t)))"
lemma lookup_empty[simp]: "lookup (Tries [] []) as = []"
by(case_tac as, simp_all)
theorem lookup_update:
"lookup (update t as vs) bs =
(if as=bs then vs else lookup t bs)"
apply(induct as arbitrary: t v bs)
apply clarsimp
apply(case_tac bs)
apply simp
apply simp
apply clarsimp
apply(case_tac bs)
apply (auto simp add:Let_def split:option.split)
done
theorem insert_conv:
"insert t as v = update t as (v#lookup t as)"
apply(induct as arbitrary: t)
apply (auto simp:Let_def split:option.split)
done
lemma inv_insert: "inv t \<Longrightarrow> inv(insert t as v)"
apply(induct as arbitrary: t)
apply(case_tac t)
apply simp
apply(case_tac t)
apply simp
apply (auto simp:set_rem_alist split: option.split)
done
lemma inv_update: "inv t \<Longrightarrow> inv(update t as v)"
apply(induct as arbitrary: t)
apply(case_tac t)
apply simp
apply(case_tac t)
apply simp
apply (auto simp:set_rem_alist split: option.split)
done
definition trie_of_list :: "('b \<Rightarrow> 'a list) \<Rightarrow> 'b list \<Rightarrow> ('a,'b)tries" where
"trie_of_list key = foldl (%t v. insert t (key v) v) (Tries [] [])"
lemma inv_foldl_insert:
"inv t \<Longrightarrow> inv (foldl (%t v. insert t (key v) v) t xs)"
apply(induct xs arbitrary: t)
apply(auto simp: inv_insert)
done
lemma inv_of_list: "inv (trie_of_list k xs)"
unfolding trie_of_list_def
apply(rule inv_foldl_insert)
apply simp
done
lemma in_set_lookup_of_list:
"v \<in> set(lookup (trie_of_list key vs) (key v)) = (v \<in> set vs)"
proof -
have "!!t.
v \<in> set(lookup (foldl (%t v. insert t (key v) v) t vs) (key v)) =
(v \<in> set vs \<union> set(lookup t (key v)))"
apply(induct vs)
apply (simp add:trie_of_list_def)
apply (simp add:trie_of_list_def)
apply(simp add:insert_conv lookup_update)
apply blast
done
thus ?thesis by(simp add:trie_of_list_def)
qed
definition set_of :: "('a,'b)tries \<Rightarrow> 'b set" where
"set_of t = Union {gs. \<exists>a. gs = set(lookup t a)}"
lemma set_of_empty[simp]: "set_of (Tries [] []) = {}"
by(simp add: set_of_def)
lemma set_of_insert[simp]:
"set_of (insert t a x) = Set.insert x (set_of t)"
by(auto simp: set_of_def insert_conv lookup_update)
lemma set_of_foldl_insert:
"set_of (foldl (%t v. insert t (key v) v) t xs) =
set xs Un set_of t"
by(induct xs arbitrary: t) auto
lemma set_of_of_list[simp]:
"set_of(trie_of_list key xs) = set xs"
by(simp add: trie_of_list_def set_of_foldl_insert)
lemma in_set_lookup_set_ofD:
"x\<in>set (lookup t a) \<Longrightarrow> x \<in> set_of t"
by(auto simp: set_of_def)
fun all :: "('v \<Rightarrow> bool) \<Rightarrow> ('a,'v)tries \<Rightarrow> bool" where
"all P (Tries vs al) =
((\<forall>v \<in> set vs. P v) \<and> (\<forall>(a,t) \<in> set al. all P t))"
interpretation map:
maps "Tries [] []" update lookup inv
proof
case goal1 show ?case by(rule ext) simp
next
case goal2 show ?case by(rule ext) (simp add:lookup_update)
next
case goal3 show ?case by(simp)
next
case goal4 thus ?case by(simp add:inv_update)
qed
lemma set_of_conv: "set_of = maps.set_of lookup"
by(rule ext) (auto simp add: set_of_def map.set_of_def)
hide_const (open) inv lookup update insert set_of all
end |
Make sure you remove `raise NotImplementedError()` and fill in any place that says `# YOUR CODE HERE`, as well as your `NAME`, `ID`, and `LAB_SECTION` below:
```python
NAME = "FAHMID BIN KIBRIA"
ID = "19201063"
LAB_SECTION = "02"
```
---
# Fixed Point Iteration
### Fixed point:
A number $\xi$ is called a **fixed point** of a function $g(x)$ if $g(\xi) = \xi$. Fixed points are a nice strategy for finding roots of an equation. In this method, if we are trying to find a root of $f(x) = 0$, we rewrite the equation in the form $x = g(x)$. That is,
$$
f(x) = x - g(x) = 0
$$
So, if $\xi$ is a fixed point of $g(x)$ it would also be a root of $f(x)=0$, because,
$$
f(\xi) = \xi - g(\xi) = \xi - \xi = 0
$$
We can find a suitable $g(x)$ in any number of ways. Not all of them converge, and some converge much faster than others (see the derivative check after Eq. 6.4 below). For example, consider Eq. 6.1.
\begin{align}
& & f(x) &=x^5 + 2.5x^4 - 2x^3 -6x^2 + x + 2 \tag{6.1}\\
&\implies &x - g(x) &=x^5 + 2.5x^4 - 2x^3 -6x^2 + x + 2 \\
&\implies & g(x) &=-x^5 - 2.5x^4 + 2x^3 + 6x^2 - 2 \tag{6.2}\\
\end{align}
again,
$$
f(x) = x^5 + 2.5x^4 - 2x^3 -6x^2 + x + 2 = 0\\
$$
\begin{align}
&\implies &6x^2 &= x^5 + 2.5x^4 - 2x^3 + x + 2 \\
&\implies &x^2 &= \frac{1}{6}(x^5 + 2.5x^4 - 2x^3 + x + 2)\\
&\implies &x &= \sqrt{\frac{1}{6}(x^5 + 2.5x^4 - 2x^3 + x + 2)}\\
&\implies &g(x) &= \sqrt{\frac{1}{6}(x^5 + 2.5x^4 - 2x^3 + x + 2)}\tag{6.3}\\
\end{align}
Similarly,
\begin{align}
& &2.5x^4 &= -x^5 + 2x^3 + 6x^2 - x - 2 \\
&\implies &x^4 &= \frac{1}{2.5}(-x^5 + 2x^3 + 6x^2 - x - 2)\\
&\implies &x &= \sqrt[\leftroot{-1}\uproot{2}\scriptstyle 4]{\frac{1}{2.5}(-x^5 + 2x^3 + 6x^2 - x - 2)}\\
&\implies &g(x) &= \sqrt[\leftroot{-1}\uproot{2}\scriptstyle 4]{\frac{1}{2.5}(-x^5 + 2x^3 + 6x^2 - x - 2)}\tag{6.4}\\
\end{align}
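A standard sufficient condition for convergence (quoted here without proof, as a quick sketch) is that $|g'(\xi)| < 1$ near the fixed point $\xi$; the smaller $|g'(\xi)|$ is, the faster the iteration converges. The cell below checks this for the three $g(x)$ above, using the roots reported in the test case at the end of this notebook; the symbolic expressions simply restate $Eq. 6.2$ through $Eq. 6.4$.
```python
# Sketch: check the convergence criterion |g'(xi)| < 1 for each g(x).
# The roots used below are taken from the test case at the end of this notebook.
import sympy

xs = sympy.symbols('x')
g1_sym = -xs**5 - 2.5*xs**4 + 2*xs**3 + 6*xs**2 - 2
g2_sym = sympy.sqrt((xs**5 + 2.5*xs**4 - 2*xs**3 + xs + 2) / 6)
g3_sym = ((-xs**5 + 2*xs**3 + 6*xs**2 - xs - 2) / 2.5)**sympy.Rational(1, 4)

candidates = [('g1', g1_sym, -2.0), ('g2', g2_sym, 0.67242436), ('g3', g3_sym, 1.33033625)]
for name, g, root in candidates:
    rate = abs(float(sympy.diff(g, xs).subs(xs, root)))
    print(f"|{name}'| at {root}: {rate:.4f} -> {'converges' if rate < 1 else 'may diverge'}")
```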
### B. Complete the code below
For this example we will use a couple of $g(x)$ functions to find out which one converges fastest.
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy.polynomial import Polynomial
f = Polynomial([2.0, 1.0, -6.0, -2.0, 2.5, 1.0])
g1 = Polynomial([-2.0, 0.0, 6.0, 2.0, -2.5, -1.0])
def g2(x):
p = Polynomial([2.0, 1.0, 0.0, -2.0, 2.5, 1.0])
return np.sqrt(p(x)/6)
def g3(x):
p = Polynomial([-2.0, -1.0, 6.0, 2.0, 0.0, -1.0])
return np.power(p(x)/2.5, 1.0/4.0)
a1 = 0.8
g1_a = []
a2 = 0.8
g2_a = []
a3 = 0.8
g3_a = []
# YOUR CODE HERE
val = True
error = 0.1E-6
while val:
a1 = g1(a1)
g1_a.append(a1)
a2 = g2(a2)
g2_a.append(a2)
a3 = g3(a3)
g3_a.append(a3)
val = abs(f(a1)) > error or abs(f(a2)) > error or abs(f(a3)) > error
```
```python
xs = np.linspace(-2.5, 1.6, 100)
ys = f(xs)
dictionary = {
'x': xs,
'y': ys
}
plt.axhline(y=0, color='k')
plt.plot(xs, f(xs), label='f(x)')
plt.plot(xs, g1(xs), label='g1(x)')
plt.plot(xs, g2(xs), label='g2(x)')
plt.plot(xs, g3(xs), label='g3(x)')
plt.legend()
if len(g1_a) > 0:
root = np.array([g1_a[len(g1_a)-1], g2_a[len(g2_a)-1], g3_a[len(g3_a)-1]])
plt.plot(root, f(root), 'ro')
print(pd.DataFrame({'g1(x)':g1_a, 'g2(x)':g2_a, 'g3(x)':g3_a,}))
# Test case:
np.testing.assert_array_almost_equal(root, [-2., 0.67242436, 1.33033625])
```
```python
```
|
pdf_file<-"pdf/maps_shp_osm.pdf"
cairo_pdf(bg="grey98", pdf_file,width=12,height=6)
library(maptools)
par(mfcol=c(1,2))
x <- readShapeSpatial("myData/london/greater_london_const_region.shp")
plot(x, axes=TRUE)
y <- readShapeSpatial("myData/london/london.osm-amenities.shp")
plot(y, axes=TRUE, pch=1, col=rgb(100,100,100,60,maxColorValue=255))
dev.off()
|
Formal statement is: lemma filterlim_times_pos: "LIM x F1. c * f x :> at_right l" if "filterlim f (at_right p) F1" "0 < c" "l = c * p" for c::"'a::{linordered_field, linorder_topology}" Informal statement is: If $f$ converges to $p$ from the right, then $c f$ converges to $c p$ from the right, for any $c > 0$. |
[STATEMENT]
lemma set_borel_measurable_sets:
fixes f :: "_ \<Rightarrow> _::real_normed_vector"
assumes "set_borel_measurable M X f" "B \<in> sets borel" "X \<in> sets M"
shows "f -` B \<inter> X \<in> sets M"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. f -` B \<inter> X \<in> sets M
[PROOF STEP]
proof -
[PROOF STATE]
proof (state)
goal (1 subgoal):
1. f -` B \<inter> X \<in> sets M
[PROOF STEP]
have "f \<in> borel_measurable (restrict_space M X)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. f \<in> borel_measurable (restrict_space M X)
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
set_borel_measurable M X f
B \<in> sets borel
X \<in> sets M
goal (1 subgoal):
1. f \<in> borel_measurable (restrict_space M X)
[PROOF STEP]
unfolding set_borel_measurable_def
[PROOF STATE]
proof (prove)
using this:
(\<lambda>x. indicat_real X x *\<^sub>R f x) \<in> borel_measurable M
B \<in> sets borel
X \<in> sets M
goal (1 subgoal):
1. f \<in> borel_measurable (restrict_space M X)
[PROOF STEP]
by (subst borel_measurable_restrict_space_iff) auto
[PROOF STATE]
proof (state)
this:
f \<in> borel_measurable (restrict_space M X)
goal (1 subgoal):
1. f -` B \<inter> X \<in> sets M
[PROOF STEP]
then
[PROOF STATE]
proof (chain)
picking this:
f \<in> borel_measurable (restrict_space M X)
[PROOF STEP]
have "f -` B \<inter> space (restrict_space M X) \<in> sets (restrict_space M X)"
[PROOF STATE]
proof (prove)
using this:
f \<in> borel_measurable (restrict_space M X)
goal (1 subgoal):
1. f -` B \<inter> space (restrict_space M X) \<in> sets (restrict_space M X)
[PROOF STEP]
by (rule measurable_sets) fact
[PROOF STATE]
proof (state)
this:
f -` B \<inter> space (restrict_space M X) \<in> sets (restrict_space M X)
goal (1 subgoal):
1. f -` B \<inter> X \<in> sets M
[PROOF STEP]
with \<open>X \<in> sets M\<close>
[PROOF STATE]
proof (chain)
picking this:
X \<in> sets M
f -` B \<inter> space (restrict_space M X) \<in> sets (restrict_space M X)
[PROOF STEP]
show ?thesis
[PROOF STATE]
proof (prove)
using this:
X \<in> sets M
f -` B \<inter> space (restrict_space M X) \<in> sets (restrict_space M X)
goal (1 subgoal):
1. f -` B \<inter> X \<in> sets M
[PROOF STEP]
by (subst (asm) sets_restrict_space_iff) (auto simp: space_restrict_space)
[PROOF STATE]
proof (state)
this:
f -` B \<inter> X \<in> sets M
goal:
No subgoals!
[PROOF STEP]
qed |
library(eNetXplorer)
library(data.table) # provides fread()
library(dplyr)      # provides %>% and select()
fn = "data_generated/Clinical_data_v123.txt"
df = fread(fn)
df = df %>%
dplyr::select(-Subject, -Visit, -starts_with("CD"), -starts_with("NK")) # CBC only, TBNK clinical flow data removed
y = df$weeks
x = df %>%
dplyr::select(-weeks) %>%
tibble::column_to_rownames("sample") %>%
data.matrix()
dn.out = "results/CBC_enet"
dir.create(dn.out, showWarnings = F, recursive = T)
result = eNetXplorer(x=x,y=y,family="gaussian",alpha=seq(0,1,by=0.1),
seed=123)#,nlambda.ext=1000,n_run=1000, n_perm_null=250)
save(result,file=file.path(dn.out, "enet_data.Rdata"))
summaryPDF(result,path=dn.out)
|
<font size=3 color="midnightblue" face="arial">
<h1 align="center">Escuela de Ciencias Básicas, Tecnología e Ingeniería</h1>
</font>
<font size=3 color="navy" face="arial">
<h1 align="center">ECBTI</h1>
</font>
<font size=2 color="darkorange" face="arial">
<h1 align="center">Curso: Métodos Numéricos</h1>
</font>
<font size=2 color="midnightblue" face="arial">
<h1 align="center">Unidad 1: Error</h1>
</font>
<font size=1 color="darkorange" face="arial">
<h1 align="center">Febrero 28 de 2020</h1>
</font>
***
> **Tutor:** Carlos Alberto Álvarez Henao, I.C. D.Sc.
> **skype:** carlos.alberto.alvarez.henao
> **Herramienta:** [Jupyter](http://jupyter.org/)
> **Kernel:** Python 3.7
***
***Comentario:*** estas notas están basadas en el curso del profesor [Kyle T. Mandli](https://github.com/mandli/intro-numerical-methods) (en inglés)
# Fuentes de error
Los cálculos numéricos que involucran el uso de máquinas (análogas o digitales) presentan una serie de errores que provienen de diferentes fuentes:
- del Modelo
- de los datos
- de truncamiento
- de representación de los números (punto flotante)
- $\ldots$
***Meta:*** Categorizar y entender cada tipo de error y explorar algunas aproximaciones simples para analizarlas.
# Error en el modelo y los datos
- Error en el modelo: errores en la formulación fundamental
- Error en los datos: imprecisiones en las mediciones o incertezas en los parámetros
Infortunadamente no tenemos control de los errores en los datos y el modelo de forma directa pero podemos usar métodos que pueden ser más robustos en la presencia de estos tipos de errores.
# Error de truncamiento
Los errores surgen de aproximar una función mediante una expresión más simple, por ejemplo, $\sin(x) \approx x$ para $|x|\approx0$.
# Error de representación de punto flotante
Los errores surgen de aproximar números reales con la representación en precisión finita de números en el computador.
# Definiciones básicas
Dado un valor verdadero de una función $f$ y una solución aproximada $F$, se define:
- Error absoluto
$$e_a=|f-F|$$
- Error relativo
$$e_r = \frac{e_a}{|f|}=\frac{|f-F|}{|f|}$$
# Notación $\text{Big}-\mathcal{O}$
sea $$f(x)= \mathcal{O}(g(x)) \text{ cuando } x \rightarrow a$$
si y solo si
$$|f(x)|\leq M|g(x)| \text{ cuando } |x-a| < \delta \text{ donde } M, \delta > 0$$
En la práctica, usamos la notación $\text{Big}-\mathcal{O}$ para decir algo sobre cómo se pueden comportar los términos que podemos haber dejado fuera de una serie. Veamos el siguiente ejemplo de la aproximación de la serie de Taylor:
***Ejemplo:***
sea $f(x) = \sin x$ con $x_0 = 0$ entonces
$$T_N(x) = \sum^N_{n=0} (-1)^{n} \frac{x^{2n+1}}{(2n+1)!}$$
Podemos escribir $f(x)$ como
$$f(x) = x - \frac{x^3}{6} + \frac{x^5}{120} + \mathcal{O}(x^7)$$
Esto se vuelve más útil cuando lo vemos como lo hicimos antes con $\Delta x$:
$$f(x) = \Delta x - \frac{\Delta x^3}{6} + \frac{\Delta x^5}{120} + \mathcal{O}(\Delta x^7)$$
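Como verificación rápida (un esbozo ilustrativo, no parte del material original), `sympy` reproduce esta expansión y su término $\mathcal{O}(x^7)$:
```python
# Esbozo: verificar la expansión de Taylor de sin(x) alrededor de x0 = 0.
import sympy

x = sympy.symbols('x')
print(sympy.series(sympy.sin(x), x, 0, 7))
# x - x**3/6 + x**5/120 + O(x**7)
```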
# Reglas para el error de propagación basado en la notación $\text{Big}-\mathcal{O}$
En general, existen dos teoremas que no necesitan prueba y se mantienen cuando el valor de $x$ es grande:
Sea
$$\begin{aligned}
f(x) &= p(x) + \mathcal{O}(x^n) \\
g(x) &= q(x) + \mathcal{O}(x^m) \\
k &= \max(n, m)
\end{aligned}$$
Entonces
$$
f+g = p + q + \mathcal{O}(x^k)
$$
y
\begin{align}
f \cdot g &= p \cdot q + p \mathcal{O}(x^m) + q \mathcal{O}(x^n) + \mathcal{O}(x^{n + m}) \\
&= p \cdot q + \mathcal{O}(x^{n+m})
\end{align}
De otra forma, si estamos interesados en valores pequeños de $x$, $\Delta x$, la expresión puede ser modificada como sigue:
\begin{align}
f(\Delta x) &= p(\Delta x) + \mathcal{O}(\Delta x^n) \\
g(\Delta x) &= q(\Delta x) + \mathcal{O}(\Delta x^m) \\
r &= \min(n, m)
\end{align}
entonces
$$
f+g = p + q + \mathcal{O}(\Delta x^r)
$$
y
\begin{align}
f \cdot g &= p \cdot q + p \cdot \mathcal{O}(\Delta x^m) + q \cdot \mathcal{O}(\Delta x^n) + \mathcal{O}(\Delta x^{n+m}) \\
&= p \cdot q + \mathcal{O}(\Delta x^r)
\end{align}
***Nota:*** En este caso, supongamos que al menos el polinomio con $k=\max(n,m)$ tiene la siguiente forma:
$$
p(\Delta x) = 1 + p_1 \Delta x + p_2 \Delta x^2 + \ldots
$$
o
$$
q(\Delta x) = 1 + q_1 \Delta x + q_2 \Delta x^2 + \ldots
$$
es decir, que contenga un término $\mathcal{O}(1)$, de modo que se garantice la existencia de $\mathcal{O}(\Delta x^r)$ en el producto final.
Para tener una idea de por qué importa más la potencia en $\Delta x$ al considerar la convergencia, la siguiente figura muestra cómo las diferentes potencias en la tasa de convergencia pueden afectar la rapidez con la que converge nuestra solución. Tenga en cuenta que aquí estamos dibujando los mismos datos de dos maneras diferentes. Graficar el error como una función de $\Delta x$ es una forma común de mostrar que un método numérico está haciendo lo que esperamos y muestra el comportamiento de convergencia correcto. Dado que los errores pueden reducirse rápidamente, es muy común trazar este tipo de gráficos en una escala log-log para visualizar fácilmente los resultados. Tenga en cuenta que si un método fuera realmente del orden $n$, será una función lineal en el espacio log-log con pendiente $n$.
```python
import numpy as np
import matplotlib.pyplot as plt
```
```python
dx = np.linspace(1.0, 1e-4, 100)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2.0)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
for n in range(1, 5):
axes[0].plot(dx, dx**n, label="$\Delta x^%s$" % n)
axes[1].loglog(dx, dx**n, label="$\Delta x^%s$" % n)
axes[0].legend(loc=2)
axes[1].set_xticks([10.0**(-n) for n in range(5)])
axes[1].set_yticks([10.0**(-n) for n in range(16)])
axes[1].legend(loc=4)
for n in range(2):
axes[n].set_title("Crecimiento del Error vs. $\Delta x^n$")
axes[n].set_xlabel("$\Delta x$")
axes[n].set_ylabel("Error Estimado")
axes[n].set_title("Crecimiento de las diferencias")
axes[n].set_xlabel("$\Delta x$")
axes[n].set_ylabel("Error Estimado")
plt.show()
```
# Error de truncamiento
***Teorema de Taylor:*** Sea $f(x) \in C^{m+1}[a,b]$ y $x_0 \in [a,b]$, para todo $x \in (a,b)$ existe un número $c = c(x)$ que se encuentra entre $x_0$ y $x$ tal que
$$ f(x) = T_N(x) + R_N(x)$$
donde $T_N(x)$ es la aproximación del polinomio de Taylor
$$T_N(x) = \sum^N_{n=0} \frac{f^{(n)}(x_0)\times(x-x_0)^n}{n!}$$
y $R_N(x)$ es el residuo (la parte de la serie que obviamos)
$$R_N(x) = \frac{f^{(N+1)}(c) \times (x - x_0)^{N+1}}{(N+1)!}$$
Otra forma de pensar acerca de estos resultados consiste en reemplazar $x - x_0$ con $\Delta x$. La idea principal es que el residuo $R_N(x)$ se vuelve mas pequeño cuando $\Delta x \rightarrow 0$.
$$T_N(x) = \sum^N_{n=0} \frac{f^{(n)}(x_0)\times \Delta x^n}{n!}$$
y $R_N(x)$ es el residuo (la parte de la serie que obviamos)
$$ R_N(x) = \frac{f^{(N+1)}(c) \times \Delta x^{N+1}}{(N+1)!} \leq M \Delta x^{N+1}$$
***Ejemplo 1:***
$f(x) = e^x$ con $x_0 = 0$
Usando esto podemos encontrar expresiones para el error relativo y absoluto en función de $x$ asumiendo $N=2$.
Derivadas:
$$\begin{aligned}
f'(x) &= e^x \\
f''(x) &= e^x \\
f^{(n)}(x) &= e^x
\end{aligned}$$
Polinomio de Taylor:
$$\begin{aligned}
T_N(x) &= \sum^N_{n=0} e^0 \frac{x^n}{n!} \Rightarrow \\
T_2(x) &= 1 + x + \frac{x^2}{2}
\end{aligned}$$
Restos:
$$\begin{aligned}
R_2(x) &= e^c \frac{x^{3}}{3!} = e^c \times \frac{x^3}{6} \quad \Rightarrow \\
R_2(1) &\leq \frac{e^1}{6} \approx 0.45
\end{aligned}$$
Precisión:
$$
e^1 = 2.718\ldots \\
T_2(1) = 2.5 \Rightarrow e_a \approx 0.22, ~~ e_r \approx 0.08
$$
¡También podemos usar el paquete `sympy` que tiene la capacidad de calcular el polinomio de *Taylor* integrado!
```python
import sympy
x = sympy.symbols('x')
f = sympy.symbols('f', cls=sympy.Function)
f = sympy.exp(x)
f.series(x0=0, n=5)
```
$\displaystyle 1 + x + \frac{x^{2}}{2} + \frac{x^{3}}{6} + \frac{x^{4}}{24} + O\left(x^{5}\right)$
Graficando
```python
x = np.linspace(-1, 1, 100)
T_N = 1.0 + x + x**2 / 2.0
R_N = np.exp(1) * x**3 / 6.0
plt.plot(x, T_N, 'r', x, np.exp(x), 'k', x, R_N, 'b')
plt.plot(0.0, 1.0, 'o', markersize=10)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("$f(x)$, $T_N(x)$, $R_N(x)$")
plt.legend(["$T_N(x)$", "$f(x)$", "$R_N(x)$"], loc=2)
plt.show()
```
***Ejemplo 2:***
Aproximar
$$ f(x) = \frac{1}{x} \quad x_0 = 1,$$
usando $x_0 = 1$ y los tres primeros términos de la serie de Taylor ($N=2$).
$$\begin{aligned}
f'(x) &= -\frac{1}{x^2} \\
f''(x) &= \frac{2}{x^3} \\
f^{(n)}(x) &= \frac{(-1)^n n!}{x^{n+1}}
\end{aligned}$$
$$\begin{aligned}
T_N(x) &= \sum^N_{n=0} (-1)^n (x-1)^n \Rightarrow \\
T_2(x) &= 1 - (x - 1) + (x - 1)^2
\end{aligned}$$
$$\begin{aligned}
R_N(x) &= \frac{(-1)^{N+1}(x - 1)^{N+1}}{c^{N+2}} \Rightarrow \\
R_2(x) &= \frac{-(x - 1)^{3}}{c^{4}}
\end{aligned}$$
```python
x = np.linspace(0.8, 2, 100)
T_N = 1.0 - (x-1) + (x-1)**2
R_N = -(x-1.0)**3 / (1.1**4)
plt.plot(x, T_N, 'r', x, 1.0 / x, 'k', x, R_N, 'b')
plt.plot(1.0, 1.0, 'o', markersize=10)
plt.grid(True)
plt.xlabel("x")
plt.ylabel("$f(x)$, $T_N(x)$, $R_N(x)$")
plt.legend(["$T_N(x)$", "$f(x)$", "$R_N(x)$"], loc=8)
plt.show()
```
# En esta celda haz tus comentarios
## Error de punto flotante
Errores surgen de aproximar números reales con números de precisión finita
$$\pi \approx 3.14$$
o $\frac{1}{3} \approx 0.333333333$ en decimal, los resultados forman un número finito de registros para representar cada número.
### Sistemas de punto flotante
Los números en sistemas de punto flotante se representan como una serie de bits que representan diferentes partes de un número. En los sistemas de punto flotante normalizados, existen algunas convenciones estándar para el uso de estos bits. En general, los números se almacenan dividiéndolos en la forma
$$F = \pm d_1 . d_2 d_3 d_4 \ldots d_p \times \beta^E$$
donde
1. $\pm$ es un bit único y representa el signo del número.
2. $d_1 . d_2 d_3 d_4 \ldots d_p$ es la *mantisa*. Observe que, técnicamente, el punto decimal se puede mover, pero en general, utilizando la notación científica, siempre se puede colocar en esta ubicación. Los dígitos $d_2 d_3 d_4 \ldots d_p$ se llaman la *fracción*, con $p$ dígitos de precisión. Los sistemas normalizados colocan el punto decimal al frente y asumen $d_1 \neq 0$ a menos que el número sea exactamente $0$.
3. $\beta$ es la *base*. Para el sistema binario $\beta = 2$, para decimal $\beta = 10$, etc.
4. $E$ es el *exponente*, un entero en el rango $[E_{\min}, E_{\max}]$
Los puntos importantes en cualquier sistema de punto flotante es
1. Existe un conjunto discreto y finito de números representables.
2. Estos números representables no están distribuidos uniformemente en la línea real
3. La aritmética en sistemas de punto flotante produce resultados diferentes de la aritmética de precisión infinita (es decir, matemática "real")
### Propiedades de los sistemas de punto flotante
Todos los sistemas de punto flotante se caracterizan por varios números importantes
- Número normalizado reducido (underflow si está por debajo, relacionado con números sub-normales alrededor de cero)
- Número normalizado más grande (overflow)
- Cero
- $\epsilon$ o $\epsilon_{mach}$
- `Inf` y `nan`
***Ejemplo: Sistema de juguete***
Considere el sistema decimal de 2 digitos de precisión (normalizado)
$$f = \pm d_1 . d_2 \times 10^E$$
con $E \in [-2, 0]$.
**Número y distribución de números**
1. Cuántos números pueden representarse con este sistema?
2. Cuál es la distribución en la línea real?
3. Cuáles son los límites underflow y overflow?
Cuántos números pueden representarse con este sistema?
$$f = \pm d_1 . d_2 \times 10^E ~~~ \text{con } E \in [-2, 0]$$
$$2 \times 9 \times 10 \times 3 + 1 = 541$$
Cuál es la distribución en la línea real?
```python
d_1_values = [1, 2, 3, 4, 5, 6, 7, 8, 9]
d_2_values = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
E_values = [0, -1, -2]
fig = plt.figure(figsize=(10.0, 1.0))
axes = fig.add_subplot(1, 1, 1)
for E in E_values:
for d1 in d_1_values:
for d2 in d_2_values:
axes.plot( (d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)
axes.plot(-(d1 + d2 * 0.1) * 10**E, 0.0, 'r+', markersize=20)
axes.plot(0.0, 0.0, '+', markersize=20)
axes.plot([-10.0, 10.0], [0.0, 0.0], 'k')
axes.set_title("Distribución de Valores")
axes.set_yticks([])
axes.set_xlabel("x")
axes.set_ylabel("")
axes.set_xlim([-0.1, 0.1])
plt.show()
```
Cuáles son los límites superior (overflow) e inferior (underflow)?
- El menor número que puede ser representado (underflow) es: $1.0 \times 10^{-2} = 0.01$
- El mayor número que puede ser representado (overflow) es: $9.9 \times 10^0 = 9.9$
### Sistema Binario
Considere el sistema en base 2 de 2 dígitos de precisión
$$f=\pm d_1 . d_2 \times 2^E \quad \text{con} \quad E \in [-1, 1]$$
#### Número y distribución de números
1. Cuántos números pueden representarse con este sistema?
2. Cuál es la distribución en la línea real?
3. Cuáles son los límites underflow y overflow?
Cuántos números pueden representarse en este sistema?
$$f=\pm d_1 . d_2 \times 2^E ~~~~ \text{con} ~~~~ E \in [-1, 1]$$
$$ 2 \times 1 \times 2 \times 3 + 1 = 13$$
Cuál es la distribución en la línea real?
```python
d_1_values = [1]
d_2_values = [0, 1]
E_values = [1, 0, -1]
fig = plt.figure(figsize=(10.0, 1.0))
axes = fig.add_subplot(1, 1, 1)
for E in E_values:
for d1 in d_1_values:
for d2 in d_2_values:
axes.plot( (d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)
axes.plot(-(d1 + d2 * 0.5) * 2**E, 0.0, 'r+', markersize=20)
axes.plot(0.0, 0.0, 'r+', markersize=20)
axes.plot([-4.5, 4.5], [0.0, 0.0], 'k')
axes.set_title("Distribución de Valores")
axes.set_yticks([])
axes.set_xlabel("x")
axes.set_ylabel("")
axes.set_xlim([-3.5, 3.5])
plt.show()
```
Cuáles son los límites superior (*overflow*) e inferior (*underflow*)?
- El menor número que puede ser representado (*underflow*) es: $1.0 \times 2^{-1} = 0.5$
- El mayor número que puede ser representado (*overflow*) es: $1.1 \times 2^1 = 3$
Observe que estos números son en sistema binario.
Una rápida regla de oro:
$$2^3 2^2 2^1 2^0 . 2^{-1} 2^{-2} 2^{-3}$$
corresponde a
8s, 4s, 2s, 1s . mitades, cuartos, octavos, $\ldots$
### Sistema real - IEEE 754 sistema binario de punto flotante
#### Precisión simple
- Almacenamiento total es de 32 bits
- Exponente de 8 bits $\Rightarrow E \in [-126, 127]$
- Fracción 23 bits ($p = 24$)
```
s EEEEEEEE FFFFFFFFFFFFFFFFFFFFFFF
0 1 8 9 31
```
Overflow $\approx 2^{128} \approx 3.4 \times 10^{38}$
Underflow $= 2^{-126} \approx 1.2 \times 10^{-38}$
$\epsilon_{\text{machine}} = 2^{-23} \approx 1.2 \times 10^{-7}$
#### Precisión doble
- Almacenamiento total asignado es 64 bits
- Exponente de 11 bits $\Rightarrow E \in [-1022, 1023]$
- Fracción de 52 bits ($p = 53$)
```
s EEEEEEEEEEE FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FFFFFFFFFF FF
0 1 11 12 63
```
Overflow $= 2^{1024} \approx 1.8 \times 10^{308}$
Underflow $= 2^{-1022} \approx 2.2 \times 10^{-308}$
$\epsilon_{\text{machine}} = 2^{-52} \approx 2.2 \times 10^{-16}$
### Acceso de Python a números de la IEEE
Accede a muchos parámetros importantes, como el epsilon de la máquina
```python
import numpy
numpy.finfo(float).eps
```
```python
import numpy
print(numpy.finfo(numpy.float16))
print(numpy.finfo(numpy.float32))
print(numpy.finfo(float))
print(numpy.finfo(numpy.float128))
```
## Por qué debería importarnos esto?
- La aritmética de punto flotante no es asociativa
- Los errores de punto flotante se acumulan; no asuma que la precisión doble es suficiente
- Mezclar precisiones es muy peligroso
### Ejemplo 1: Aritmética simple
Aritmética simple con $\delta < \epsilon_{\text{machine}}$:
$$(1+\delta) - 1 = 1 - 1 = 0$$
$$1 - 1 + \delta = \delta$$
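Podemos comprobarlo directamente (esbozo ilustrativo) tomando $\delta$ menor que $\epsilon_{\text{machine}}$, de modo que $1 + \delta$ se redondea a $1$:
```python
# Esbozo: la aritmética de punto flotante no es asociativa.
import numpy
delta = numpy.finfo(float).eps / 2.0  # delta < eps_machine
print((1.0 + delta) - 1.0)  # 0.0: delta se pierde al sumarlo primero a 1
print(1.0 - 1.0 + delta)    # delta: el orden de las operaciones importa
```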
### Ejemplo 2: Cancelación catastrófica
Miremos qué sucede cuando sumamos dos números $x$ y $y$ cuando $x+y \neq 0$. De hecho, podemos estimar estos límites haciendo un análisis de error. Aquí necesitamos presentar la idea de que cada operación de punto flotante introduce un error tal que
$$
\text{fl}(x ~\text{op}~ y) = (x ~\text{op}~ y) (1 + \delta)
$$
donde $\text{fl}(\cdot)$ es una función que devuelve la representación de punto flotante de la expresión encerrada, $\text{op}$ es alguna operación (ex. $+, -, \times, /$), y $\delta$ es el error de punto flotante debido a $\text{op}$.
De vuelta a nuestro problema en cuestión. El error de coma flotante debido a la suma es
$$\text{fl}(x + y) = (x + y) (1 + \delta).$$
Comparando esto con la solución verdadera usando un error relativo tenemos
$$\begin{aligned}
\frac{(x + y) - \text{fl}(x + y)}{x + y} &= \frac{(x + y) - (x + y) (1 + \delta)}{x + y} = -\delta,
\end{aligned}$$
entonces, si $|\delta| = \mathcal{O}(\epsilon_{\text{machine}})$, no estaremos muy preocupados.
Que pasa si consideramos un error de punto flotante en la representación de $x$ y $y$, $x \neq y$, y decimos que $\delta_x$ y $\delta_y$ son la magnitud de los errores en su representación. Asumiremos que esto constituye el error de punto flotante en lugar de estar asociado con la operación en sí.
Dado todo esto, tendríamos
$$\begin{aligned}
\text{fl}(x + y) &= x (1 + \delta_x) + y (1 + \delta_y) \\
&= x + y + x \delta_x + y \delta_y \\
&= (x + y) \left(1 + \frac{x \delta_x + y \delta_y}{x + y}\right)
\end{aligned}$$
Calculando nuevamente el error relativo, tendremos
$$\begin{aligned}
\frac{x + y - (x + y) \left(1 + \frac{x \delta_x + y \delta_y}{x + y}\right)}{x + y} &= 1 - \left(1 + \frac{x \delta_x + y \delta_y}{x + y}\right) \\
&= -\left(\frac{x}{x + y} \delta_x + \frac{y}{x + y} \delta_y\right) \\
&= -\frac{1}{x + y} (x \delta_x + y \delta_y)
\end{aligned}$$
Lo importante aquí es que ahora el error depende de los valores de $x$ y $y$, y más importante aún, su suma. De particular preocupación es el tamaño relativo de $x + y$. A medida que se acerca a cero en relación con las magnitudes de $x$ y $y$, el error podría ser arbitrariamente grande. Esto se conoce como ***cancelación catastrófica***.
```python
dx = numpy.array([10**(-n) for n in range(1, 16)])
x = 1.0 + dx
y = -numpy.ones(x.shape)
error = numpy.abs(x + y - dx) / (dx)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 2, 1)
axes.loglog(dx, x + y, 'o-')
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$x + y$")
axes.set_title("$\Delta x$ vs. $x+y$")
axes = fig.add_subplot(1, 2, 2)
axes.loglog(dx, error, 'o-')
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("$|x + y - \Delta x| / \Delta x$")
axes.set_title("Diferencia entre $x$ y $y$ vs. Error relativo")
plt.show()
```
### Ejemplo 3: Evaluación de una función
Considere la función
$$
f(x) = \frac{1 - \cos x}{x^2}
$$
con $x\in[-10^{-4}, 10^{-4}]$.
Tomando el límite cuando $x \rightarrow 0$ podemos ver qué comportamiento esperaríamos ver al evaluar esta función:
$$
\lim_{x \rightarrow 0} \frac{1 - \cos x}{x^2} = \lim_{x \rightarrow 0} \frac{\sin x}{2 x} = \lim_{x \rightarrow 0} \frac{\cos x}{2} = \frac{1}{2}.
$$
¿Qué hace la representación de punto flotante?
```python
x = numpy.linspace(-1e-3, 1e-3, 100, dtype=numpy.float32)
error = (0.5 - (1.0 - numpy.cos(x)) / x**2) / 0.5
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, error, 'o')
axes.set_xlabel("x")
axes.set_ylabel("Error Relativo")
```
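Una reformulación algebraicamente equivalente evita la cancelación (esbozo ilustrativo basado en la identidad $1 - \cos x = 2\sin^2(x/2)$), porque ya no se restan dos números casi iguales:
```python
# Esbozo: evaluación estable de (1 - cos x)/x^2 usando 1 - cos x = 2 sin^2(x/2).
import numpy
x = numpy.linspace(-1e-3, 1e-3, 100, dtype=numpy.float32)
f_estable = 2.0 * numpy.sin(x / 2.0)**2 / x**2
print(numpy.max(numpy.abs(0.5 - f_estable) / 0.5))  # error relativo pequeño
```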
### Ejemplo 4: Evaluación de un Polinomio
$$f(x) = x^7 - 7x^6 + 21 x^5 - 35 x^4 + 35x^3-21x^2 + 7x - 1$$
```python
x = numpy.linspace(0.988, 1.012, 1000, dtype=numpy.float16)
y = x**7 - 7.0 * x**6 + 21.0 * x**5 - 35.0 * x**4 + 35.0 * x**3 - 21.0 * x**2 + 7.0 * x - 1.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, y, 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_ylim((-0.1, 0.1))
axes.set_xlim((x[0], x[-1]))
plt.show()
```
### Ejemplo 5: Evaluación de una función racional
Calcule $f(x) = x + 1$ mediante la función $$F(x) = \frac{x^2 - 1}{x - 1}$$
¿Cuál comportamiento esperarías encontrar?
```python
x = numpy.linspace(0.5, 1.5, 101, dtype=numpy.float16)
f_hat = (x**2 - 1.0) / (x - 1.0)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(f_hat - (x + 1.0)))
axes.set_xlabel("$x$")
axes.set_ylabel("Error Absoluto")
plt.show()
```
## Combinación de error
En general, nos debemos ocupar de la combinación de error de truncamiento con el error de punto flotante.
- Error de Truncamiento: errores que surgen de la aproximación de una función, truncamiento de una serie.
$$\sin x \approx x - \frac{x^3}{3!} + \frac{x^5}{5!} + O(x^7)$$
- Error de punto flotante: errores derivados de la aproximación de números reales con números de precisión finita
$$\pi \approx 3.14$$
o $\frac{1}{3} \approx 0.333333333$ en decimal, los resultados forman un número finito de registros para representar cada número.
### Ejemplo 1:
Considere la aproximación de diferencias finitas donde $f(x) = e^x$ y estamos evaluando en $x=1$
$$f'(x) \approx \frac{f(x + \Delta x) - f(x)}{\Delta x}$$
Compare el error entre disminuir $\Delta x$ y la verdadera solucion $f'(1) = e$
```python
delta_x = numpy.linspace(1e-20, 5.0, 100)
delta_x = numpy.array([2.0**(-n) for n in range(1, 60)])
x = 1.0
f_hat_1 = (numpy.exp(x + delta_x) - numpy.exp(x)) / (delta_x)
f_hat_2 = (numpy.exp(x + delta_x) - numpy.exp(x - delta_x)) / (2.0 * delta_x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.loglog(delta_x, numpy.abs(f_hat_1 - numpy.exp(1)), 'o-', label="Unilateral")
axes.loglog(delta_x, numpy.abs(f_hat_2 - numpy.exp(1)), 's-', label="Centrado")
axes.legend(loc=3)
axes.set_xlabel("$\Delta x$")
axes.set_ylabel("Error Absoluto")
plt.show()
```
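El mínimo de cada curva ocurre donde el error de truncamiento y el de redondeo se equilibran; una estimación estándar (añadida aquí como esbozo) da $\Delta x \approx \sqrt{\epsilon_{\text{machine}}}$ para la fórmula unilateral y $\Delta x \approx \epsilon_{\text{machine}}^{1/3}$ para la centrada:
```python
# Esbozo: Delta x que aproximadamente minimiza el error total en cada fórmula.
import numpy
eps = numpy.finfo(float).eps
print("unilateral:", numpy.sqrt(eps))    # ~ 1.5e-8
print("centrada:  ", eps**(1.0 / 3.0))   # ~ 6.1e-6
```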
### Ejemplo 2:
Evalúe $e^x$ con la serie de *Taylor*
$$e^x = \sum^\infty_{n=0} \frac{x^n}{n!}$$
¿podemos elegir $N < \infty$ de modo que $T_N$ aproxime a $e^x$ en un rango dado $x \in [a,b]$ con un error relativo $E$ que satisfaga $E<8 \cdot \varepsilon_{\text{machine}}$?
¿Cuál podría ser una mejor manera que simplemente evaluar el polinomio de Taylor directamente para varios $N$?
```python
import scipy.special
def my_exp(x, N=10):
value = 0.0
for n in range(N + 1):
value += x**n / scipy.special.factorial(n)
return value
x = numpy.linspace(-2, 2, 100, dtype=numpy.float32)
for N in range(1, 50):
error = numpy.abs((numpy.exp(x) - my_exp(x, N=N)) / numpy.exp(x))
if numpy.all(error < 8.0 * numpy.finfo(float).eps):
break
print(N)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, error)
axes.set_xlabel("x")
axes.set_ylabel("Error Relativo")
plt.show()
```
### Ejemplo 3: Error relativo
Digamos que queremos calcular el error relativo de dos valores $x$ y $y$ usando $x$ como valor de normalización
$$
E = \frac{x - y}{x}
$$
y
$$
E = 1 - \frac{y}{x}
$$
son equivalentes. En precisión finita, ¿qué forma podría esperarse que sea más precisa y por qué?
Ejemplo tomado del [blog](https://nickhigham.wordpress.com/2017/08/14/how-and-how-not-to-compute-a-relative-error/) publicado por Nick Higham.
Usando este modelo, la definición original contiene dos operaciones de punto flotante de manera que
$$\begin{aligned}
E_1 = \text{fl}\left(\frac{x - y}{x}\right) &= \text{fl}(\text{fl}(x - y) / x) \\
&= \left[ \frac{(x - y) (1 + \delta_+)}{x} \right ] (1 + \delta_/) \\
&= \frac{x - y}{x} (1 + \delta_+) (1 + \delta_/)
\end{aligned}$$
Para la otra formulación tenemos
$$\begin{aligned}
E_2 = \text{fl}\left( 1 - \frac{y}{x} \right ) &= \text{fl}\left(1 - \text{fl}\left(\frac{y}{x}\right) \right) \\
&= \left(1 - \frac{y}{x} (1 + \delta_/) \right) (1 + \delta_-)
\end{aligned}$$
Si suponemos que todos las $\text{op}$s tienen magnitudes de error similares, entonces podemos simplificar las cosas dejando que
$$
|\delta_\ast| \le \epsilon.
$$
Para comparar las dos formulaciones, nuevamente usamos el error relativo entre el error relativo verdadero $e_i$ y nuestras versiones calculadas $E_i$
Definición original
$$\begin{aligned}
\frac{e - E_1}{e} &= \frac{\frac{x - y}{x} - \frac{x - y}{x} (1 + \delta_+) (1 + \delta_/)}{\frac{x - y}{x}} = 1 - (1 + \delta_+) (1 + \delta_/) \\
\left|\frac{e - E_1}{e}\right| &\le 2 \epsilon + \epsilon^2
\end{aligned}$$
Definición manipulada:
$$\begin{aligned}
\frac{e - E_2}{e} &= \frac{e - \left[1 - \frac{y}{x}(1 + \delta_/) \right] (1 + \delta_-)}{e} \\
&= \frac{e - \left[e - \frac{y}{x} \delta_/ \right] (1 + \delta_-)}{e} \\
&= \frac{e - \left[e + e\delta_- - \frac{y}{x} \delta_/ - \frac{y}{x} \delta_/ \delta_- \right] }{e} \\
&= - \delta_- + \frac{1}{e} \frac{y}{x} \left(\delta_/ + \delta_/ \delta_- \right) \\
&= - \delta_- + \frac{1 - e}{e} \left(\delta_/ + \delta_/ \delta_- \right) \\
\left|\frac{e - E_2}{e}\right| &\le \epsilon + \left |\frac{1 - e}{e}\right | (\epsilon + \epsilon^2)
\end{aligned}$$
Vemos entonces que nuestro error de punto flotante dependerá de la magnitud relativa de $e$
```python
# Based on the code by Nick Higham
# https://gist.github.com/higham/6f2ce1cdde0aae83697bca8577d22a6e
# Compares relative error formulations using single precision and compared to double precision
N = 501 # Note: Use 501 instead of 500 to avoid the zero value
d = numpy.finfo(numpy.float32).eps * 1e4
a = 3.0
x = a * numpy.ones(N, dtype=numpy.float32)
y = [x[i] + numpy.multiply((i - numpy.divide(N, 2.0, dtype=numpy.float32)), d, dtype=numpy.float32) for i in range(N)]
# Compute errors and "true" error
relative_error = numpy.empty((2, N), dtype=numpy.float32)
relative_error[0, :] = numpy.abs(x - y) / x
relative_error[1, :] = numpy.abs(1.0 - y / x)
exact = numpy.abs( (numpy.float64(x) - numpy.float64(y)) / numpy.float64(x))
# Compute differences between error calculations
error = numpy.empty((2, N))
for i in range(2):
error[i, :] = numpy.abs((relative_error[i, :] - exact) / numpy.abs(exact))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.semilogy(y, error[0, :], '.', markersize=10, label="$|x-y|/|x|$")
axes.semilogy(y, error[1, :], '.', markersize=10, label="$|1-y/x|$")
axes.grid(True)
axes.set_xlabel("y")
axes.set_ylabel("Error Relativo")
axes.set_xlim((numpy.min(y), numpy.max(y)))
axes.set_ylim((5e-9, numpy.max(error[1, :])))
axes.set_title("Comparasión Error Relativo")
axes.legend()
plt.show()
```
Algunos enlaces de utilidad con respecto al punto flotante IEEE:
- [What Every Computer Scientist Should Know About Floating-Point Arithmetic](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html)
- [IEEE 754 Floating Point Calculator](http://babbage.cs.qc.edu/courses/cs341/IEEE-754.html)
- [Numerical Computing with IEEE Floating Point Arithmetic](http://epubs.siam.org/doi/book/10.1137/1.9780898718072)
## Conteo de operaciones
- ***Error de truncamiento:*** *¿Por qué no usar más términos en la serie de Taylor?*
- ***Error de punto flotante:*** *¿Por qué no utilizar la mayor precisión posible?*
### Ejemplo 1: Multiplicación matriz - vector
Sea $A, B \in \mathbb{R}^{N \times N}$ y $x \in \mathbb{R}^N$.
1. Cuente el número aproximado de operaciones necesarias para calcular $Ax$
2. Haga lo mismo para $AB$
***Producto Matriz-vector:*** Definiendo $[A]_i$ como la $i$-ésima fila de $A$ y $A_{ij}$ como la $i$,$j$-ésima entrada entonces
$$
(A x)_i = [A]_i \cdot x = \sum^N_{j=1} A_{ij} x_j
$$
Tomando un caso en particular, siendo $N=3$, el conteo de operaciones es
$$
A x = \begin{bmatrix} [A]_1 \cdot x \\ [A]_2 \cdot x \\ [A]_3 \cdot x \end{bmatrix} = \begin{bmatrix}
A_{11} \times x_1 + A_{12} \times x_2 + A_{13} \times x_3 \\
A_{21} \times x_1 + A_{22} \times x_2 + A_{23} \times x_3 \\
A_{31} \times x_1 + A_{32} \times x_2 + A_{33} \times x_3
\end{bmatrix}
$$
Esto da 15 operaciones (6 sumas y 9 multiplicaciones).
Tomando otro caso, siendo $N=4$, entonces el conteo de operaciones es:
$$
A x = \begin{bmatrix} [A]_1 \cdot x \\ [A]_2 \cdot x \\ [A]_3 \cdot x \\ [A]_4 \cdot x \end{bmatrix} = \begin{bmatrix}
A_{11} \times x_1 + A_{12} \times x_2 + A_{13} \times x_3 + A_{14} \times x_4 \\
A_{21} \times x_1 + A_{22} \times x_2 + A_{23} \times x_3 + A_{24} \times x_4 \\
A_{31} \times x_1 + A_{32} \times x_2 + A_{33} \times x_3 + A_{34} \times x_4 \\
A_{41} \times x_1 + A_{42} \times x_2 + A_{43} \times x_3 + A_{44} \times x_4 \\
\end{bmatrix}
$$
Esto lleva a 28 operaciones (12 sumas y 16 multiplicaciones).
Generalizando, hay $N^2$ multiplicaciones y $N(N-1)$ sumas para un total de
$$
\text{operaciones} = N (N - 1) + N^2 = \mathcal{O}(N^2).
$$
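Podemos contrastar este conteo con una medición sencilla (esbozo ilustrativo; los tiempos exactos dependen de la máquina y de la biblioteca BLAS subyacente): al duplicar $N$, el tiempo de $Ax$ debería crecer aproximadamente por un factor de 4.
```python
# Esbozo: escalamiento O(N^2) del producto matriz-vector.
import time
import numpy

for N in [1000, 2000, 4000]:
    A = numpy.random.rand(N, N)
    v = numpy.random.rand(N)
    inicio = time.perf_counter()
    _ = A @ v
    print(N, time.perf_counter() - inicio)
```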
***Producto Matriz-Matriz ($AB$):*** Definiendo $[B]_j$ como la $j$-ésima columna de $B$ entonces
$$
(A B)_{ij} = [A]_i \cdot [B]_j = \sum^N_{k=1} A_{ik} B_{kj}
$$
El producto interno de dos vectores es representado por
$$
a \cdot b = \sum^N_{i=1} a_i b_i
$$
y requiere $N$ multiplicaciones y $N-1$ sumas, es decir, $\mathcal{O}(N)$ operaciones. Como hay $N^2$ entradas en la matriz resultante, tendríamos $\mathcal{O}(N^3)$ operaciones
Existen métodos para realizar la multiplicación matriz-matriz más rápido, con complejidad
$$
\mathcal{O}(N^\omega), \qquad \omega < 3,
$$
donde el exponente $\omega$ ha ido disminuyendo a lo largo del tiempo a medida que se descubren algoritmos que limitan el número de operaciones en ciertas circunstancias.
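Como ilustración (un esbozo simplificado bajo el supuesto de que $N$ es potencia de 2, no una implementación de producción), el algoritmo de Strassen divide cada matriz en cuatro bloques y usa 7 multiplicaciones recursivas en lugar de 8, lo que da $\omega = \log_2 7 \approx 2.807$:
```python
# Esbozo del algoritmo de Strassen para matrices N x N, con N potencia de 2.
import numpy

def strassen(A, B, umbral=64):
    n = A.shape[0]
    if n <= umbral:  # por debajo del umbral el producto directo es más rápido
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # 7 productos recursivos en lugar de los 8 del método por bloques directo
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = numpy.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

A = numpy.random.rand(128, 128)
B = numpy.random.rand(128, 128)
print(numpy.allclose(strassen(A, B), A @ B))  # True
```
En la práctica, las constantes ocultas y el costo de memoria hacen que el producto directo de `numpy` gane salvo para matrices muy grandes.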
### Ejemplo 2: Método de Horner para evaluar polinomios
Dado
$$P_N(x) = a_0 + a_1 x + a_2 x^2 + \ldots + a_N x^N$$
o
$$P_N(x) = p_1 x^N + p_2 x^{N-1} + p_3 x^{N-2} + \ldots + p_{N+1}$$
queremos encontrar la mejor vía para evaluar $P_N(x)$
Primero considere dos vías para escribir $P_3$
$$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$
y usando multiplicación anidada
$$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$
Considere cuántas operaciones se necesitan para cada...
$$ P_3(x) = p_1 x^3 + p_2 x^2 + p_3 x + p_4$$
$$P_3(x) = \overbrace{p_1 \cdot x \cdot x \cdot x}^3 + \overbrace{p_2 \cdot x \cdot x}^2 + \overbrace{p_3 \cdot x}^1 + p_4$$
Sumando todas las operaciones (el término de grado $n$ requiere $n$ multiplicaciones), en general podemos pensar en esto como una pirámide: $\sum_{n=1}^{N} n = N(N+1)/2$, de modo que el algoritmo escrito de esta manera tomará aproximadamente $\mathcal{O}(N^2/2)$ operaciones para completar.
Mirando nuestro otro medio de evaluación
$$ P_3(x) = ((p_1 x + p_2) x + p_3) x + p_4$$
Aquí encontramos que el método es $\mathcal{O}(N)$ (el 2 generalmente se ignora en estos casos). ¡Lo importante es que la primera evaluación es $\mathcal{O}(N^2)$ y la segunda $\mathcal{O}(N)$!
### Algoritmo
Complete la función e implemente el método de *Horner*
```python
def eval_poly(p, x):
"""Evaluates polynomial given coefficients p at x
Function to evaluate a polynomial in order N operations. The polynomial is defined as
P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
The value x should be a float.
"""
pass
```
```python
def eval_poly(p, x):
"""Evaluates polynomial given coefficients p at x
Function to evaluate a polynomial in order N operations. The polynomial is defined as
P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
The value x should be a float.
"""
### ADD CODE HERE
pass
```
```python
# Scalar version
def eval_poly(p, x):
"""Evaluates polynomial given coefficients p at x
Function to evaluate a polynomial in order N operations. The polynomial is defined as
P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
The value x should be a float.
"""
y = p[0]
for coefficient in p[1:]:
y = y * x + coefficient
return y
# Vectorized version
def eval_poly(p, x):
"""Evaluates polynomial given coefficients p at x
Function to evaluate a polynomial in order N operations. The polynomial is defined as
P(x) = p[0] x**n + p[1] x**(n-1) + ... + p[n-1] x + p[n]
The value x can by a NumPy ndarray.
"""
y = numpy.ones(x.shape) * p[0]
for coefficient in p[1:]:
y = y * x + coefficient
return y
p = [1, -3, 10, 4, 5, 5]
x = numpy.linspace(-10, 10, 100)
plt.plot(x, eval_poly(p, x))
plt.show()
```
```python
```
|
\chapter{Introduction}\label{cap:introduction}
There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable.
There is another theory which states that this has already happened.
\begin{figure}[h!]
\centering
\includegraphics[scale=1.7]{universe}
\caption{The Universe}
\label{fig:universe}
\end{figure}
``I always thought something was fundamentally wrong with the universe'' \cite{Zienkiewicz1}
\lipsum
\ifchapterbib
\input{content/chapter_biblio}
\fi
|
/* dwt_undec.h
* Paul Demorest, 2007/10
*/
#ifndef _DWT_UNDEC_H_
#define _DWT_UNDEC_H_
#include <gsl/gsl_wavelet.h>
#ifdef __cplusplus
extern "C" {
#endif
int dwt_undec_transform(double *in, double *out, size_t n, const gsl_wavelet *w);
int dwt_undec_inverse(double *in, double *out, size_t n, const gsl_wavelet *w);
#ifdef __cplusplus
}
#endif
#endif
|
Require Import Coq.Lists.List.
Require Import Coq.ZArith.ZArith.
Require Import coqutil.Word.Interface.
Require Import coqutil.Word.Properties.
Require Import coqutil.Word.Bitwidth.
Require Import coqutil.Datatypes.HList.
Require Import coqutil.Datatypes.PrimitivePair.
Require Import coqutil.Map.Interface.
Require Import coqutil.Map.Properties.
Require Import coqutil.Tactics.Tactics.
Require Import coqutil.sanity.
Require Import coqutil.Z.Lia.
Require Import coqutil.Byte.
Local Open Scope Z_scope.
Section MemAccess.
Context {width} {word: word width} {mem: map.map word byte}.
Definition footprint(a: word)(sz: nat): tuple word sz :=
tuple.unfoldn (fun w => word.add w (word.of_Z 1)) sz a.
Definition load_bytes(sz: nat)(m: mem)(addr: word): option (tuple byte sz) :=
map.getmany_of_tuple m (footprint addr sz).
Definition unchecked_store_bytes(sz: nat)(m: mem)(a: word)(bs: tuple byte sz): mem :=
map.putmany_of_tuple (footprint a sz) bs m.
Definition store_bytes(sz: nat)(m: mem)(a: word)(v: tuple byte sz): option mem :=
match load_bytes sz m a with
| Some _ => Some (unchecked_store_bytes sz m a v)
| None => None (* some addresses were invalid *)
end.
Definition unchecked_store_byte_list(a: word)(l: list byte)(m: mem): mem :=
unchecked_store_bytes (length l) m a (tuple.of_list l).
Lemma unchecked_store_byte_list_cons: forall a x (l: list byte) m,
unchecked_store_byte_list a (x :: l) m =
map.put (unchecked_store_byte_list (word.add a (word.of_Z 1)) l m) a x.
Proof.
intros. reflexivity.
Qed.
Lemma lift_option_match{A B: Type}: forall (x: option A) (f: A -> B) (a: A),
match x with
| Some y => f y
| None => f a
end =
f (match x with
| Some y => y
| None => a
end).
Proof. intros. destruct x; reflexivity. Qed.
Lemma store_bytes_preserves_domain{wordOk: word.ok word}{memOk: map.ok mem}: forall n m a v m',
store_bytes n m a v = Some m' ->
map.same_domain m m'.
Proof.
unfold store_bytes, load_bytes.
induction n; intros.
- simpl in *.
change (unchecked_store_bytes 0 m a v) with m in H.
inversion H. apply map.same_domain_refl.
- destruct (map.getmany_of_tuple m (footprint a (S n))) eqn: E; [|discriminate].
inversion H. subst m'. clear H.
unfold map.getmany_of_tuple in *.
simpl in *.
destruct (map.get m a) eqn: E'; [|discriminate].
destruct_one_match_hyp; [|discriminate].
inversion E. subst t. clear E.
unfold unchecked_store_bytes in *. simpl.
destruct v as [v vs].
eapply map.same_domain_trans.
+ eapply IHn. unfold map.getmany_of_tuple. rewrite E0. reflexivity.
+ eapply map.same_domain_put_r.
* eapply map.same_domain_refl.
* rewrite map.putmany_of_tuple_to_putmany.
rewrite map.get_putmany_dec.
rewrite E'.
rewrite lift_option_match.
reflexivity.
Qed.
End MemAccess.
Require Import riscv.Utility.Utility.
Section MemAccess2.
Context {width} {word: word width} {mem: map.map word byte}.
Definition loadByte: mem -> word -> option w8 := load_bytes 1.
Definition loadHalf: mem -> word -> option w16 := load_bytes 2.
Definition loadWord: mem -> word -> option w32 := load_bytes 4.
Definition loadDouble: mem -> word -> option w64 := load_bytes 8.
Definition storeByte : mem -> word -> w8 -> option mem := store_bytes 1.
Definition storeHalf : mem -> word -> w16 -> option mem := store_bytes 2.
Definition storeWord : mem -> word -> w32 -> option mem := store_bytes 4.
Definition storeDouble: mem -> word -> w64 -> option mem := store_bytes 8.
End MemAccess2.
Local Unset Universe Polymorphism.
Section MemoryHelpers.
Context {width} {word: word width} {word_ok: word.ok word}.
Add Ring wring: (@word.ring_theory width word word_ok).
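(* Sanity check that the ring structure on words is usable: *)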
Goal forall (a: word), word.add a (word.of_Z 0) = a. intros. ring. Qed.
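(* The lemmas below state that word.add agrees with addition on Z whenever
   the exact sum lies in [0, 2^width), i.e. when no wrap-around occurs. *)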
Lemma regToZ_unsigned_add: forall (a b: word),
0 <= word.unsigned a + word.unsigned b < 2 ^ width ->
word.unsigned (word.add a b) = word.unsigned a + word.unsigned b.
Proof.
intros.
rewrite word.unsigned_add.
apply Z.mod_small. assumption.
Qed.
Lemma regToZ_unsigned_add_l: forall (a: Z) (b: word),
0 <= a ->
0 <= a + word.unsigned b < 2 ^ width ->
word.unsigned (word.add (word.of_Z a) b) = a + word.unsigned b.
Proof.
intros.
rewrite word.unsigned_add.
rewrite word.unsigned_of_Z.
pose proof (word.unsigned_range b).
unfold word.wrap.
rewrite (Z.mod_small a) by blia.
rewrite Z.mod_small by assumption.
reflexivity.
Qed.
Lemma regToZ_unsigned_add_r: forall (a: word) (b: Z),
0 <= b ->
0 <= word.unsigned a + b < 2 ^ width ->
word.unsigned (word.add a (word.of_Z b)) = word.unsigned a + b.
Proof.
intros.
rewrite word.unsigned_add.
rewrite word.unsigned_of_Z.
pose proof (word.unsigned_range a).
unfold word.wrap.
rewrite (Z.mod_small b) by blia.
rewrite Z.mod_small by assumption.
reflexivity.
Qed.
End MemoryHelpers.
|
(* PUBLIC DOMAIN *)
Require Export Coq.Vectors.Vector.
Require Export Coq.Lists.List.
Require Import Bool.Bool.
Require Import Logic.FunctionalExtensionality.
Require Import Coq.Program.Wf.
Require Import Lia.
Definition SetVars := nat.
Definition FuncSymb := nat.
Definition PredSymb := nat.
Record FSV := {
fs : FuncSymb;
fsv : nat;
}.
Record PSV := MPSV{
ps : PredSymb;
psv : nat;
}.
Section All.
Variable T : Type.
Variable P : T -> Prop.
Fixpoint All (ls : list T) : Prop :=
match ls with
| nil => True
| cons h t => P h /\ All t
end.
End All.
Section AllT.
Variable T : Type.
Variable P : T -> Type.
Fixpoint AllT (ls : list T) : Type :=
match ls with
| nil => True
| cons h t => (P h) * (AllT t)
end.
End AllT.
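(* All and AllT state that P holds of every element of a list, in Prop and
   in Type respectively; AllT is what the hand-rolled recursor
   preTerms_rect' below needs. *)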
(* The list-based exploration below is wrapped in a module so that its
   constructor names FVC/FSC do not clash with the vector-based Terms
   defined afterwards. *)
Module PreTermsScratch.
Inductive preTerms : Type :=
| FVC :> SetVars -> preTerms
| FSC (f:FSV) : list preTerms -> preTerms.
Section preTerms_ind'.
Variable P : preTerms -> Prop.
Hypothesis preTerms_case1 : forall (sv : SetVars), P (FVC sv).
Hypothesis preTerms_case2 : forall (n : FSV) (ls : list preTerms),
All preTerms P ls -> P (FSC n ls).
Fixpoint preTerms_ind' (tr : preTerms) : P tr.
Proof.
destruct tr.
exact (preTerms_case1 _).
apply preTerms_case2.
revert l; fix G 1; intro l.
destruct l;simpl;trivial.
split.
apply preTerms_ind'.
apply G.
Show Proof.
Defined.
(** fix preTerms_ind' (tr : preTerms) : P tr :=
match tr as p return (P p) with
| FVC s => preTerms_case1 s
| FSC f l =>
preTerms_case2 f l
((fix G (l0 : list preTerms) : All preTerms P l0 :=
match l0 as l1 return (All preTerms P l1) with
| Datatypes.nil => I
| p :: l1 => conj (preTerms_ind' p) (G l1)
end) l)
end **)
End preTerms_ind'.
Section preTerms_rect'.
Variable P : preTerms -> Type.
Hypothesis preTerms_case1 : forall (sv : SetVars), P (FVC sv).
Hypothesis preTerms_case2 : forall (n : FSV) (ls : list preTerms),
AllT preTerms P ls -> P (FSC n ls).
Fixpoint preTerms_rect' (tr : preTerms) : P tr.
Proof.
destruct tr.
exact (preTerms_case1 _).
apply preTerms_case2.
revert l; fix G 1; intro l.
destruct l;simpl;trivial.
split.
apply preTerms_rect'.
apply G.
Show Proof.
Defined.
(** fix preTerms_rect' (tr : preTerms) : P tr :=
match tr as p return (P p) with
| FVC s => preTerms_case1 s
| FSC f l =>
preTerms_case2 f l
((fix G (l0 : list preTerms) : AllT preTerms P l0 :=
match l0 as l1 return (AllT preTerms P l1) with
| Datatypes.nil => I
| p :: l1 => (preTerms_rect' p, G l1)
end) l)
end **)
End preTerms_rect'.
(* isTerm checks the arity annotation at the root node only; the induction
   hypotheses provided by preTerms_rect' are not used here, so this is a
   plain Definition rather than a Fixpoint. *)
Definition isTerm (t : preTerms) : Prop.
revert t; apply preTerms_rect'; intros.
exact True.
exact ((length ls)=(fsv n)).
Defined.
Eval compute in isTerm (FSC (Build_FSV 0 3) ((FVC 0)::(FVC 0)::(FVC 0)::nil)).
(*elim t using preTerms_ind'.
induction t using preTerms_ind'.*)
Inductive nat_tree : Set :=
| NNode' : nat -> list nat_tree -> nat_tree.
Check Forall.
Fixpoint mh (t : preTerms) : nat :=
match t with
| FVC _ => 0
| FSC f l => S (fold_right (fun t acc => Nat.max acc (mh t)) 0 l)
end.
(* Several further attempts at a fully recursive isTerm over preTerms, kept
   for reference but commented out: Program Fixpoint cannot justify the
   recursive call hidden under the Forall binder, the mutual Terms/isTerm
   sketch is ill-formed, and the plain Fixpoint versions fail the guard
   condition because isTerm occurs unapplied under Forall.
Program Fixpoint isTerm (t : preTerms) {measure (mh t)}: Prop :=
match t with
| FVC _ => True
| FSC f l => (length l = fsv f) /\ (@Forall preTerms (fun q=>isTerm q) l )
(*@All _ _ isTerm l*)
end.
Inductive isTerm : preTerms -> Prop :=
| c (pt : preTerms)
(H: match pt with
| FVC sv => True
| FSC f lpt => (Forall isTerm lpt)
end) : isTerm pt.
Inductive Terms : Type :=
| nas (pt : preTerms) (H:isTerm pt): Terms
with isTerm :=
| c (pt : preTerms) () : isTerm pt
Fixpoint isTerm :=
(fix isTerm (t : preTerms) : Prop :=
match t with
| FVC _ => True
| FSC f l => length l = fsv f /\
((fix chkd (fu : preTerms -> Prop )(l : list preTerms) : Prop :=
match l with
| nil => True
| a::l => (isTerm a) /\ (chkd fu l)
end) isTerm l)
end).
Fixpoint isTerm (t:preTerms) : Prop.
destruct t.
exact True.
refine (((length l)=(fsv f))/\ _).
exact (List.Forall isTerm l).
Show Proof.
Defined.
*)
End PreTermsScratch.
(* The development below switches from lists to length-indexed vectors, so
   that the arity constraint length l = fsv f is enforced by the type of
   FSC itself and no separate well-formedness predicate is needed. The
   automatically generated (weak) elimination scheme is disabled; a strong
   recursor Terms_rect is defined by hand right after. *)
Unset Elimination Schemes.
Inductive Terms : Type :=
| FVC : SetVars -> Terms
| FSC (f : FSV) : Vector.t Terms (fsv f) -> Terms.
Set Elimination Schemes.
Definition Terms_rect (T : Terms -> Type)
(H_FVC : forall sv, T (FVC sv))
(H_FSC : forall f v, (forall n, T (Vector.nth v n)) -> T (FSC f v)) :=
fix loopt (t : Terms) : T t :=
match t with
| FVC sv => H_FVC sv
| FSC f v =>
let fix loopv s (v : Vector.t Terms s) : forall n, T (Vector.nth v n) :=
match v with
| @Vector.nil _ => Fin.case0 _
| @Vector.cons _ t _ v => fun n => Fin.caseS' n (fun n => T (Vector.nth (Vector.cons _ t _ v) n))
(loopt t)
(loopv _ v)
end in
H_FSC f v (loopv _ v)
end.
Definition Terms_ind := Terms_rect.
Fixpoint height (t : Terms) : nat :=
match t with
| FVC _ => 0
| FSC f v => S (Vector.fold_right (fun t acc => Nat.max acc (height t)) v 0)
end.
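(* height is the usual tree height; it is meant as a termination measure
   (compare the commented-out Program Fixpoint lem1 further below, which
   uses measure (height u)). *)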
(* BEGIN *)
Definition substT :=
(fix substT (t : Terms) (xi : SetVars) (u : Terms) {struct u} : Terms :=
match u with
| FVC s => let b := PeanoNat.Nat.eqb s xi in if b then t else FVC s
| FSC f t0 => FSC f (Vector.map (substT t xi) t0)
end).
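(* substT t xi u replaces every occurrence of the variable xi in u by the
   term t; for instance, substituting variable 1 for variable 0: *)
Eval compute in substT (FVC 1) 0 (FVC 0). (* = FVC 1 : Terms *)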
Fixpoint isParamT (xi : SetVars) (t : Terms) {struct t} : bool :=
match t with
| FVC s => PeanoNat.Nat.eqb s xi
| FSC f t0 => Vector.fold_left orb false (Vector.map (isParamT xi) t0)
end.
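(* isParamT xi t tests whether the variable xi occurs in t, e.g.: *)
Eval compute in isParamT 0 (FVC 0). (* = true : bool *)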
Section cor.
Context (X:Type).
Context (fsI:forall(q:FSV),(Vector.t X (fsv q))->X).
Context (prI:forall(q:PSV),(Vector.t X (psv q))->Prop).
Definition teI :=
(fix teI (val : SetVars -> X) (t : Terms) {struct t} : X :=
match t with
| FVC s => val s
| FSC f t0 => fsI f (Vector.map (teI val) t0)
end).
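(* teI val t evaluates the term t in the carrier X: variables are looked up
   in the valuation val and function symbols are interpreted by fsI. *)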
(** cng val xi m is the updated valuation (val + (xi |-> m)): it returns m
    at xi and val r at any other variable r. **)
Definition cng (val:SetVars -> X) (xi:SetVars) (m:X) (r:SetVars) :=
match Nat.eqb r xi with
| true => m
| false => (val r)
end.
Section Lem1.
Context (t : Terms).
Definition P(xi : SetVars) (pi : SetVars->X) (u :Terms)
:=(teI pi (substT t xi u))=(teI (cng pi xi (teI pi t)) u).
Definition ap {A B}{a0 a1:A} (f:A->B) (h:a0=a1):((f a0)=(f a1))
:= match h in (_ = y) return (f a0 = f y) with
| eq_refl => eq_refl
end.
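(* ap is f_equal, re-proved by hand. vec_comp_as below is the map-fusion
   law on vectors: mapping f after mapping g equals mapping (f o g). *)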
Fixpoint vec_comp_as (A B C : Type) (f : B -> C) (g : A -> B)
(n : nat) (t0 : Vector.t A n) {struct t0} :
Vector.map f (Vector.map g t0) = Vector.map (fun x : A => f (g x)) t0 :=
match
t0 as t in (Vector.t _ n0)
return
(Vector.map f (Vector.map g t) = Vector.map (fun x : A => f (g x)) t)
with
| Vector.nil _ => eq_refl
| Vector.cons _ h n0 t1 =>
eq_ind_r
(fun t : Vector.t C n0 =>
Vector.cons C (f (g h)) n0 t =
Vector.cons C (f (g h)) n0 (Vector.map (fun x : A => f (g x)) t1))
eq_refl (vec_comp_as A B C f g n0 t1)
end.
(*
Program Fixpoint lem1 (u : Terms) (xi : SetVars) (pi : SetVars -> X)
{measure (height u)} :
teI pi (substT t xi u) = teI (cng pi xi (teI pi t)) u :=
match
u as t0 return (teI pi (substT t xi t0) = teI (cng pi xi (teI pi t)) t0)
with
| FVC s =>
let b := Nat.eqb s xi in
if b as b0
return
(teI pi (if b0 then t else s) = (if b0 then teI pi t else pi s))
then eq_refl
else eq_refl
| FSC f t0 =>
match
f as f0
return
(forall t1 : Vector.t Terms (fsv f0),
fsI f0 (Vector.map (teI pi) (Vector.map (substT t xi) t1)) =
fsI f0 (Vector.map (teI (cng pi xi (teI pi t))) t1))
with
| {| fs := fs0; fsv := fsv0 |} =>
fun t1 : Vector.t Terms (fsv {| fs := fs0; fsv := fsv0 |}) =>
ap (fsI {| fs := fs0; fsv := fsv0 |})
(let g :
Vector.map (teI pi) (Vector.map (substT t xi) t1) =
Vector.map (fun x : Terms => teI pi (substT t xi x)) t1 :=
vec_comp_as Terms Terms X (teI pi) (substT t xi) fsv0 t1 in
eq_ind_r
(fun t2 : Vector.t X fsv0 =>
t2 = Vector.map (teI (cng pi xi (teI pi t))) t1)
(let Y := fun wm : Terms -> X => Vector.map wm t1 in
let a1 := fun x : Terms => teI pi (substT t xi x) in
let a2 := teI (cng pi xi (teI pi t)) in
let Y1 := Y a1 in
let Y2 := Y a2 in
ap Y
(functional_extensionality
(fun x : Terms => teI pi (substT t xi x))
(teI (cng pi xi (teI pi t)))
(fun x : Terms => lem1 x xi pi))) g)
end t0
end.
Next Obligation.*)
Definition lem1 : forall(u :Terms)(xi : SetVars)(pi : SetVars->X), P xi pi u.
Proof. unfold P.
fix lem1 1.
intros.
destruct u as [s|f] .
+ simpl.
unfold cng.
destruct (Nat.eqb s xi).
* reflexivity.
* simpl.
reflexivity.
+
simpl.
destruct f.
simpl in * |- *.
apply ap.
simpl in t0.
revert fsv0 t0.
fix FFF 1.
intros fsv0 t0.
destruct t0 ; simpl in * |- *; trivial.
(* Scratch attempts at closing the remaining cons case, kept for reference
   but commented out: a nested Lemma is not allowed inside an open proof,
   and the alternative endings below were never consolidated. The intended
   argument rewrites the vector tail with the nested fixpoint FFF and the
   head with lem1 itself.
Lemma equal_a : forall (A B : Type) (f : A -> B) (a0 a1:A),
(a0 = a1) -> f a0 = f a1.
Proof.
intros A B f a0 a1 r.
destruct r.
reflexivity.
Defined.
Check (FFF n).
rewrite <- FFF.
Check (functional_extensionality _ _ (FFF fs0)).
Check f_equal.
Check (f_equal (fun g=> Vector.cons X g n
(Vector.map (teI pi) (Vector.map (substT t xi) t0)))).
apply (f_equal (fun g=> Vector.cons X g n
(Vector.map (teI pi) (Vector.map (substT t xi) t0)))).
rewrite lem1.
reflexivity.
Defined.
Check functional_extensionality.
rewrite lem1.
apply f_equal.
reflexivity.
Defined.
rewrite -> FFF.
rewrite <- (functional_extensionality _ _ (FFF n)).
rewrite <- (functional_extensionality _ _ (FFF n)).
apply f_equal.
rewrite FFF.
rewrite <- (functional_extensionality _ _ (FFF n)).
simpl.
Check (@f_equal _ _ _ _ _ _).
apply f_equal.
simpl.
apply functional_extensionality.
apply ap.
apply equal_a.
simpl.
apply ap.
Check (FFF _ (Vector.map (teI pi) (Vector.map (substT t xi) t0)) ).
apply FFF.
pose (g:= (@vec_comp_as _ _ _ (teI pi) (substT t xi) _ t0)).
rewrite -> g.
(*Check (@vec_comp_as _ _ _ (teI ) (cng pi xi (teI pi t)) _ ).
pose (g:= (@vec_comp_as _ _ _ (teI pi) (substT t xi) _ t0)).
rewrite -> g.*)
pose (Y:=fun wm => @Vector.map Terms X wm fsv0 t0).
pose (a1:=(fun x : Terms => teI pi (substT t xi x))).
pose (a2:=(teI (cng pi xi (teI pi t)))).
pose (Y1:= Y a1).
pose (Y2:= Y a2).
unfold Y in Y1.
unfold Y in Y2.
fold Y1 Y2 in |- *.
apply (@ap _ _ a1 a2 Y).
unfold a1, a2 in |- *.
apply functional_extensionality.
intro x.
refine (lem1 x xi pi ).
Show Proof.
*)
Admitted.
End Lem1.
End cor.
|
Require Import VST.floyd.proofauto.
Require Import VST.progs.revarray.
Require Import VST.floyd.sublist.
Instance CompSpecs : compspecs. make_compspecs prog. Defined.
Definition Vprog : varspecs. mk_varspecs prog. Defined.
Definition reverse_spec :=
DECLARE _reverse
WITH a0: val, sh : share, contents : list int, size: Z
PRE [ _a OF (tptr tint), _n OF tint ]
PROP (0 <= size <= Int.max_signed; writable_share sh)
LOCAL (temp _a a0; temp _n (Vint (Int.repr size)))
SEP (data_at sh (tarray tint size) (map Vint contents) a0)
POST [ tvoid ]
PROP() LOCAL()
SEP(data_at sh (tarray tint size) (map Vint (rev contents)) a0).
Definition main_spec :=
DECLARE _main
WITH gv : globals
PRE [] main_pre prog nil gv
POST [ tint ] main_post prog nil gv.
Definition Gprog : funspecs := ltac:(with_library prog [reverse_spec; main_spec]).
Definition flip_ends {A} lo hi (contents: list A) :=
sublist 0 lo (rev contents)
++ sublist lo hi contents
++ sublist hi (Zlength contents) (rev contents).
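(* flip_ends lo hi al is al with the prefix [0, lo) and the suffix
   [hi, |al|) taken from rev al and the middle [lo, hi) untouched: exactly
   the state of the array after the loop has swapped lo elements from each
   end. *)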
Definition reverse_Inv a0 sh contents size :=
(EX j:Z,
(PROP (0 <= j; j <= size-j)
LOCAL (temp _a a0; temp _lo (Vint (Int.repr j)); temp _hi (Vint (Int.repr (size-j))))
SEP (data_at sh (tarray tint size) (flip_ends j (size-j) contents) a0)))%assert.
Lemma Zlength_flip_ends:
forall A i j (al: list A),
0 <= i -> i<=j -> j <= Zlength al ->
Zlength (flip_ends i j al) = Zlength al.
Proof.
intros.
unfold flip_ends.
autorewrite with sublist. omega.
Qed.
Hint Rewrite @Zlength_flip_ends using (autorewrite with sublist; omega) : sublist.
Lemma flip_fact_1: forall A size (contents: list A) j,
Zlength contents = size ->
0 <= j ->
size - j - 1 <= j <= size - j ->
flip_ends j (size - j) contents = rev contents.
Proof.
intros.
unfold flip_ends.
rewrite <- (Zlen_le_1_rev (sublist j (size-j) contents))
by (autorewrite with sublist; omega).
rewrite !sublist_rev by (autorewrite with sublist; omega).
rewrite <- !rev_app_distr, ?H.
autorewrite with sublist; auto.
Qed.
Lemma flip_fact_3:
forall A (al: list A) j size,
size = Zlength al ->
0 <= j < size - j - 1 ->
sublist 0 j (flip_ends j (size - j) al) ++
sublist (size - j - 1) (size - j) al ++
sublist (j + 1) size
(sublist 0 (size - j - 1) (flip_ends j (size - j) al) ++
sublist j (j + 1) (flip_ends j (size - j) al) ++
sublist (size - j) size (flip_ends j (size - j) al)) =
flip_ends (j + 1) (size - (j + 1)) al.
Proof.
intros.
unfold flip_ends.
rewrite <- H.
autorewrite with sublist.
rewrite (sublist_split 0 j (j+1)) by (autorewrite with sublist; omega).
rewrite !app_ass.
f_equal. f_equal.
rewrite !sublist_rev, <- ?H by omega.
rewrite Zlen_le_1_rev by (autorewrite with sublist; omega).
f_equal; omega.
rewrite (sublist_app2 (size-j) size)
by (autorewrite with sublist; omega).
autorewrite with sublist.
rewrite sublist_app'
by (autorewrite with sublist; omega).
autorewrite with sublist.
f_equal.
f_equal; omega.
autorewrite with sublist.
rewrite <- (Zlen_le_1_rev (sublist j (1+j) al))
by (autorewrite with sublist; omega).
rewrite !sublist_rev, <- ?H by omega.
rewrite <- !rev_app_distr, <- ?H.
autorewrite with sublist.
f_equal; f_equal; omega.
Qed.
Lemma flip_ends_map:
forall A B (F: A -> B) lo hi (al: list A),
flip_ends lo hi (map F al) = map F (flip_ends lo hi al).
Proof.
intros.
unfold flip_ends.
rewrite !map_app.
rewrite !map_sublist, !map_rev, Zlength_map.
auto.
Qed.
Lemma flip_fact_2:
forall {A}{d: Inhabitant A} (al: list A) size j,
Zlength al = size ->
j < size - j - 1 ->
0 <= j ->
Znth (size - j - 1) al =
Znth (size - j - 1) (flip_ends j (size - j) al).
Proof.
intros.
unfold flip_ends.
autorewrite with sublist. auto.
Qed.
Lemma body_reverse: semax_body Vprog Gprog f_reverse reverse_spec.
Proof.
start_function.
forward. (* lo = 0; *)
forward. (* hi = n; *)
assert_PROP (Zlength (map Vint contents) = size)
as ZL by entailer!.
forward_while (reverse_Inv a0 sh (map Vint contents) size).
* (* Prove that current precondition implies loop invariant *)
Exists 0.
entailer!.
unfold flip_ends; autorewrite with sublist; auto.
* (* Prove that loop invariant implies typechecking condition *)
entailer!.
* (* Prove that loop body preserves invariant *)
forward. (* t = a[lo]; *)
{
entailer!.
clear - H0 HRE.
autorewrite with sublist in *|-*.
rewrite flip_ends_map.
rewrite Znth_map by list_solve.
apply I.
}
forward. (* s = a[hi-1]; *)
{
entailer!.
clear - H H0 HRE.
autorewrite with sublist in *|-*.
rewrite flip_ends_map.
rewrite Znth_map by list_solve.
apply I.
}
rewrite <- flip_fact_2 by (rewrite ?Zlength_flip_ends; omega).
forward. (* a[hi-1] = t; *)
forward. (* a[lo] = s; *)
forward. (* lo++; *)
forward. (* hi--; *)
(* Prove postcondition of loop body implies loop invariant *)
Exists (Z.succ j).
entailer!.
f_equal; f_equal; omega.
simpl.
apply derives_refl'.
unfold data_at. f_equal.
clear - H0 HRE H1.
unfold Z.succ.
rewrite <- flip_fact_3 by auto.
rewrite <- (Znth_map (Zlength (map Vint contents)-j-1) Vint) by (autorewrite with sublist in *; list_solve).
forget (map Vint contents) as al. clear contents.
remember (Zlength al) as size.
repeat match goal with |- context [reptype ?t] => change (reptype t) with val end.
unfold upd_Znth.
rewrite !Znth_cons_sublist by (repeat rewrite Zlength_flip_ends; try omega).
rewrite ?Zlength_app, ?Zlength_firstn, ?Z.max_r by omega.
rewrite ?Zlength_flip_ends by omega.
rewrite ?Zlength_sublist by (rewrite ?Zlength_flip_ends ; omega).
unfold Z.succ. rewrite <- Heqsize. autorewrite with sublist.
replace (size - j - 1 + (1 + j)) with size by (clear; omega).
reflexivity.
* (* after the loop *)
forward. (* return; *)
rewrite map_rev. rewrite flip_fact_1; try omega; auto.
cancel.
Qed.
Definition four_contents := [Int.repr 1; Int.repr 2; Int.repr 3; Int.repr 4].
Lemma body_main: semax_body Vprog Gprog f_main main_spec.
Proof.
name four _four.
start_function.
forward_call (* revarray(four,4); *)
(gv _four, Ews, four_contents, 4).
split; [computable | auto].
forward_call (* revarray(four,4); *)
(gv _four,Ews, rev four_contents,4).
split; [computable | auto].
rewrite rev_involutive.
forward. (* return s; *)
Qed.
Existing Instance NullExtension.Espec.
Lemma prog_correct:
semax_prog prog Vprog Gprog.
Proof.
prove_semax_prog.
semax_func_cons body_reverse.
semax_func_cons body_main.
Qed.
|
[STATEMENT]
lemma eqButPID_openToUIDs:
assumes "eqButPID s s1"
shows "openToUIDs s \<longleftrightarrow> openToUIDs s1"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. openToUIDs s = openToUIDs s1
[PROOF STEP]
using eqButPID_stateSelectors[OF assms]
[PROOF STATE]
proof (prove)
using this:
admin s = admin s1 \<and> pendingUReqs s = pendingUReqs s1 \<and> userReq s = userReq s1 \<and> userIDs s = userIDs s1 \<and> user s = user s1 \<and> pass s = pass s1 \<and> pendingFReqs s = pendingFReqs s1 \<and> friendReq s = friendReq s1 \<and> friendIDs s = friendIDs s1 \<and> postIDs s = postIDs s1 \<and> admin s = admin s1 \<and> eeqButPID (post s) (post s1) \<and> owner s = owner s1 \<and> vis s = vis s1 \<and> IDsOK s = IDsOK s1
goal (1 subgoal):
1. openToUIDs s = openToUIDs s1
[PROOF STEP]
unfolding openToUIDs_def
[PROOF STATE]
proof (prove)
using this:
admin s = admin s1 \<and> pendingUReqs s = pendingUReqs s1 \<and> userReq s = userReq s1 \<and> userIDs s = userIDs s1 \<and> user s = user s1 \<and> pass s = pass s1 \<and> pendingFReqs s = pendingFReqs s1 \<and> friendReq s = friendReq s1 \<and> friendIDs s = friendIDs s1 \<and> postIDs s = postIDs s1 \<and> admin s = admin s1 \<and> eeqButPID (post s) (post s1) \<and> owner s = owner s1 \<and> vis s = vis s1 \<and> IDsOK s = IDsOK s1
goal (1 subgoal):
1. (\<exists>uid\<in>UIDs. uid \<in>\<in> userIDs s \<and> (uid = owner s PID \<or> uid \<in>\<in> friendIDs s (owner s PID) \<or> vis s PID = PublicV)) = (\<exists>uid\<in>UIDs. uid \<in>\<in> userIDs s1 \<and> (uid = owner s1 PID \<or> uid \<in>\<in> friendIDs s1 (owner s1 PID) \<or> vis s1 PID = PublicV))
[PROOF STEP]
by auto |
PROMAN is a leading international consulting company specialised in development cooperation. Created in 1986, PROMAN is providing services to international donor agencies, national governments, public institutions and development partners world-wide.
PROMAN has been awarded the ‘South Africa Sector Budget Support Technical Assistance’ contract. The project will provide support to the Government of South Africa, notably the National Treasury and relevant national departments, in the inception and the implementation phases of the Sector Budget Support provided by the EU. The project will start in April 2019 for a period of two years.
PROMAN (lead) in partnership with NIRAS and Particip has been selected to undertake a mapping of services for survivors of violence amongst women and children in South Africa. The specific objective of the assignment is to contribute to the implementation of the Improvement Plan by the Inter-Ministerial Technical Task Team for Violence Against Women and Children. The project will start in April 2019 for a period of 16 months.
PROMAN has been awarded the contract for the Formulation of the Programme of Support to the SADC-EU Economic Partnership Agreement in South Africa. The overall objective of the assignment is to enhance South Africa's trade and business opportunities by promoting the full implementation of the EU-SADC EPA in South Africa while advancing regional integration.
The consortium led by PROMAN has been successfully providing TA in Support of the Education Sector in Sierra Leone since February 2017. The service contract has been extended with another 2 years. The contract is now scheduled to be completed early August 2021.
AECOM International Europe in partnership with PROMAN has been awarded the contract for the assessment of the 2018 specific performance indicators under the Sector Reform Contract “Support to Police Reform”.
Since early 2015 the consortium led by Louis Berger with PROMAN as partner has been providing technical assistance to the Ministry of Social Assistance and Reinsertion (MINARS) in the framework of the Support Project for Vulnerable Groups. The contract has been extended with another 18 months, with completion now scheduled early August 2020.
PROMAN in partnership with B&S Europe has been awarded the contract for the final evaluation of the "Firkidia di skola" project and the identification and formulation of the 11th EDF education sector support programme.
The past months have been very successful. A total of 5 new long-term contracts were awarded to PROMAN as lead company (2) or partner (3) for a total value of €18.8 million. Turnover has now more than doubled over the past 5 years, and is projected at over €18 million in 2019.
PROMAN (lead) in partnership with Palladium International BV has been awarded the €2.8 million 3-year contract for the Support to the Office of the National Authorising Officer of the EDF in Zambia. PROMAN herewith confirms its position as market leader in the provision of TA to NAO structures in ACP countries. The past 5 years PROMAN has been/is supporting NAO offices in Chad, Kenya, Solomon Islands, Ethiopia, Swaziland, Zambia, Guinea, Comoros, Papua New Guinea, Namibia and Angola.
AECOM International Europe in partnership with PROMAN has been awarded the contract for the Support to the implementation of the EU-Georgia Association Agreement in the field of maritime transport.
PROMAN has been contracted by DFID to provide an independent assessment of progress against DFID’s Jordan Compact Education Programme Disbursement Linked Indicators.
PROMAN has been contracted by the Ministry of National Education, Teaching and Research of the Union of the Comoros to assess the feasibility of the proposed AFD funded project ‘Performance and Governance of the Education Sector in the Comoros” with an indicative budget of €6 million, define more precisely the purpose, content and implementation modalities of the project and provide support during the start-up phase, as appropriate.
PROMAN has been awarded the contract for consultancy services for assessors to assist in the evaluation of grant applications received in the framework of the VET TOOLBOX call for proposals on inclusion in Vocational Education and Training (VET). The call for proposals intends to fund new and innovative cooperation initiatives aiming to improve employment opportunities in the formal and informal labour market for disadvantaged and vulnerable groups (especially women, migrants and internally displaced people, people living in rural and remote areas, the poorest quintile and people with disabilities) through inclusive VET. The VET Toolbox partnership comprises GIZ, British Council, Enabel, LuxDev and AFD, and is co-funded by the EU.
AECOM International Europe in partnership with PROMAN has been selected to support the Vietnam Tourism Advisory Board to establish a Tourism Development Fund.
PROMAN has been awarded the contract for the ‘Promotion of Inclusive Education in Kyrgyzstan’. The specific objective of the assignment is (i) to develop and pilot pre-service and in-service teacher training designed to address inclusive education for children with disabilities; and (ii) to promote budgetary commitments that enable inclusive education. The project will start in February 2019 for a duration of 20 months.
PROMAN is very pleased to announce it has been nominated implementing agency for the first phase of the Programme for the Relaunch of Economic Activities in the Region of Ménaka (DDM). The focus of the €1.5 million, one-year contract will be the realization of small infrastructures, favouring labour-intensive works to immediately revive economic activity in the territory; training and labour-market insertion actions for young people; and feasibility studies for the construction of more substantial infrastructures and equipment to be carried out in Phase 2. This new contract further strengthens PROMAN’s important current portfolio in regional socio-economic development support programmes in the northern regions of Mali, in a difficult security context (3F, SDNM II, DDRG, DDRK IV).
PROMAN will provide social sector expertise on the team undertaking an evaluation of the Humanitarian - Development Nexus process in Mali, and providing technical expertise to relaunch the process so as to accelerate the achievement of concrete results. The TA team is co-financed by the EUD, Swiss and Luxembourg Cooperation.
We are delighted to announce that the consortium led by Cardno Emerging Markets, Belgium with PROMAN, Palladium International BV and the Spanish Association for Standardization-UNE as partners has been awarded the €8.9 million contract for the provision of technical assistance for the implementation of the ARISE Plus-Indonesia programme. The ASEAN Regional Integration Support - Indonesia Trade-Related Assistance (ARISE Plus - Indonesia) will be the first EU-funded trade related assistance programme with Indonesia that is closely linked to the ASEAN economic integration agenda. ARISE Plus-Indonesia aims to contribute to Indonesia's preparedness and enhanced competitiveness in global value chains through specific support targeting national and sub-national levels. By enhancing Indonesia's trade competitiveness and openness, the programme will promote inclusive and sustainable economic growth, boost job creation and increase employment in a gender sensitive way. Furthermore, the programme provides country-level interventions closely linked to the regional programme ARISE Plus, supporting regional economic integration and trade in ASEAN. The project will start in January 2019 for a duration of 4 years.
PROMAN has been awarded the contract for the evaluation of the Inclusive Basic Education Component of Education Outcome of the Government of Mongolia and UNICEF Country Programmes 2012-2016 and 2017-2021.
PROMAN has been awarded the contract for the Evaluation of UNICEF’s Disaster Risk Reduction Programming in Education in East Asia and the Pacific.
The consortium led by Linpico with PROMAN and Quarein as partners has been awarded the €2 million contract for the provision of long-term technical assistance services and training for the offices of the EDF NAO in the Republic of Angola. PROMAN currently implements similar TA to NAO projects in Comoros, Guinea, Swaziland, Zambia, Papua New Guinea and Namibia.
The consortium led by Integration with PROMAN and Oxford Policy Management as partners has been awarded the 5-year €3.6 million contract for the provision of long-term technical assistance to the National Authorising Officer (NAO) / National Planning Commission (NPC) Support Programme. PROMAN currently implements similar TA to NAO projects in Comoros, Guinea, Swaziland, Zambia and Papua New Guinea.
PROMAN has been awarded the contract ‘Monitoring, assessment and support to EU and other donors-funded Education and complementary programs implemented by the Ministry of Education to deal with the Syria refugee crisis’. The overall period of implementation will be one year.
The 3-year "Three Borders" (3F) program, with an indicative budget of €33.5 million aims to stabilize the border region of Mali, Niger and Burkina Faso. The programme will support socio-economic development and strengthen social cohesion in cross-border territories. PROMAN has been solicited to support AVSF to undertake a first set of priority activities during the 6-month start-up phase in the region of Gao.
PROMAN has been selected by UNICEF to undertake a meta-analysis of existing research to create a region-specific evidence base on the most effective strategies for improving learning among the most marginalized children in East Asia and Pacific based on existing scientific evidence from the region. Geographically the literature review can consider evidence from the 25 countries in EAP: Cambodia, China, Cook Islands, Fiji, Indonesia, DPR Korea, Kiribati, Lao PDR, Malaysia, Marshall Islands, Micronesia, Mongolia, Myanmar, Nauru, Niue, Palau, Philippines, Samoa, Solomon Islands, Thailand, Timor Leste, Tokelau, Tonga, Tuvalu, Vanuatu, and Vietnam.
PROMAN will provide expertise on the assessment of grant applications received in the framework of the restricted Call for Proposals “Local Authorities: Partnerships for sustainable cities”. The global objective of this CfP is to promote integrated urban development through partnerships built among Local Authorities of the EU Member States and of partner countries in accordance with the 2030 Agenda on sustainable development.
PROMAN in partnership with EPOS has been selected to undertake the mid-term evaluation of the EU funded Northern Dimension Partnership on Public Health and Social Well-being and Northern Dimension Partnership on Culture. The Northern Dimension (ND) is a joint policy between the European union (EU), Russia, Norway and Iceland which promotes dialogue, practical cooperation and development.
PROMAN in partnership with Transtec has been selected to undertake the mid-term and final evaluation of the EU funded Culture Support Programme in Tunisia (PACT). The evaluations are scheduled respectively for mid-November 2018 and end 2021.
PROMAN has been awarded the contract to undertake a survey of class practices in primary education in Niger. The objective of the assignment is to undertake an analysis of the situation of class practices in primary schools in Niger and of the links between class practices and pupils' academic achievements to guide the content of initial and in-service training of teachers.
PROMAN has been awarded the contract for the mid-term evaluation of the Balochistan Education Support Programme. The overall objective of the programme is to accelerate and further increase the number of children (especially girls) enrolling in and completing quality elementary education in Balochistan.
PROMAN has been selected to conduct a Tracer Study (ex-post evaluation) of the EU funded TVET I project. The global objective of the study is to provide comprehensive information to allow the implementing partners and the European Commission to make an accurate assessment of the immediate and long-term value and contribution of the project to the employability of its graduates including the uptake of self-employment opportunities.
PROMAN will support the European Union Delegation to Ghana (EUD) with the preparation of the Terms of Reference for the tender dossier to award the service contract "Support to Communication and Visibility actions for the Ghana Employment and Social Protection Programme (GESP)". The GESP Programme funded by the 11th EDF is expected to contribute to enhancing social protection services, notably for vulnerable population groups, and to generating employment opportunities, with particular attention to the youth in Ghana.
The consortium led by AFC, with PROMAN and I&D as partners, has been awarded the € 5.36 million TA contract for the Implementation of the AFAFI-NORD Programme. The main objective of the 6-year AFAFI-NORD program is to promote a sustainable, inclusive and efficient agricultural sector in the North of Madagascar. Its specific objectives are: (i) the improvement of governance of the agricultural sector, (ii) increasing household incomes by supporting the development and strengthening of inclusive agricultural value chains, and (iii) the improvement of food and nutrition security of rural households. The program is organized around two components: (i) support for project coordination, (ii) support to the Agriculture, Livestock, Fisheries and Environment sectors, covering the three specific objectives. Activities will take place in all three targeted regions of northern Madagascar (Diana, SAVA, Analanjirofo). This award consolidates our solid reputation in the country and the Indian Ocean region, and further reinforces our growing portfolio in local regional development.
PROMAN in partnership with EPOS Health Management and AECOM International Development Europe has been awarded the contract for reviewing compliance with eligibility conditions and disbursement indicators for the EU budget support under the Development, Protection and Social Inclusion Programme. This contract is the first award under the new FWC SIEA 2018, Lot 4.
PROMAN has been awarded the € 1.5 million contract for the Support to the Management of Visibility and Communications for the Delegation of the European Union to the Republic of Malawi. The overall objective of this 3-year contract is to enhance the visibility of the EU-Malawi cooperation and promote a greater awareness and understanding among key actors in the public, private and civil society sectors on issues of EU's development assistance. Partners to the PROMAN led consortium include Action Global Communications and Quarein.
For the past 2 years PROMAN has been a selected service provider on the Long-Term Agreement (LTA) for the provision of timely and high quality technical expertise to the UNICEF education sector in the areas of early learning, gender/girls' education and inclusive education, quality and learning, and education in emergencies and resilience. The LTA has now been extended until June 2019, and PROMAN will continue to be called upon to provide technical assistance, advice, capacity building or support services to UNICEF regional and country offices at short notice.
The scope of this FWC is to carry out evaluations of geographic (regions/countries) cooperation strategies and programmes; thematic multi-country evaluations; evaluations of selected policy issues and aid modalities, particularly budget support operations and to support dissemination of the results: lessons learned and evaluation recommendations, support for development of appropriate methodological approaches and tools for evaluation. PROMAN is partner to the consortium led by Landell Mills. This FWC has now been extended till March 2020.
The contract for the External Results Oriented Monitoring (ROM) reviews and support missions concerning projects and programmes financed by the EU for the Asian and Pacific regions including OCTs has once more been extended and is now due to end in April 2019. The project started on 1 January 2015. PROMAN is partner to the consortium led by Landell Mills.
PROMAN is very pleased to inform that the consortium led by AECOM Belgium, with PROMAN as partner has been retained as framework contractor for the EU funded FWC Services for the Implementation of External Aid 2018 (FWC SIEA 2018), Lot 6: INNOVATIVE FINANCING FOR DEVELOPMENT. Lot 6 covers the following sectors: Economic, financial, technical and legal experts on Finance Products and Structures, Financiers/Risk Takers, Markets and financing needs/gaps, Policy issues, Legal, institutional and procedural issues.
Contract implementation will start on 1st of June 2018 for an initial period of two years.
PROMAN is very pleased to inform that the consortium led by AECOM International Development Europe, with PROMAN as partner has been retained as framework contractor for the EU funded FWC Services for the Implementation of External Aid 2018 (FWC SIEA 2018), Lot 5: BUDGET SUPPORT. Lot 5 covers the following sectors: Public policies, Macroeconomic stability, Public finance management, Domestic revenue mobilisation, Statistics and indicators.
PROMAN is very pleased to inform that the consortium led by AECOM International Development Europe, with PROMAN as partner has been retained as framework contractor for the EU funded FWC Services for the Implementation of External Aid 2018 (FWC SIEA 2018), Lot 2: INFRASTRUCTURE, SUSTAINABLE GROWTH AND JOBS. Lot 2 covers the following sectors: Transport and infrastructures, Digital technologies and services, Earth observation, Urban development and cities, Sustainable energy, Nuclear safety, Sustainable waste management, Private sector, Trade, Employment creation.
PROMAN is very pleased to inform that the consortium led by TRANSTEC, with PROMAN as partner has been retained as framework contractor for the EU funded FWC Services for the Implementation of External Aid 2018 (FWC SIEA 2018), Lot 1: SUSTAINABLE MANAGEMENT OF NATURAL RESOURCES AND RESILIENCE. Lot 1 covers the following sectors: Agriculture, Livestock, Sustainable forestry management and conservation, Fishery and aquaculture, Land management, Food security & nutrition, Food safety, Extension/Training/HRD/Institutional Development, Rural infrastructure, Climate change, Sustainable natural resource management, Disaster risk reduction.
PROMAN is very pleased to inform that the consortium led by PROMAN has been retained as framework contractor for the EU funded FWC Services for the Implementation of External Aid 2018 (FWC SIEA 2018), Lot 4: HUMAN DEVELOPMENT AND SAFETY NET. Lot 4 covers the following sectors: Education, VET, Lifelong learning, Culture, Social inclusion and protection, Health, Research & Innovation.
Contract implementation will start on 1st of June 2018 for an initial period of two years. Partners to the consortium include AECOM International Development Europe, AEDES, B&S Europe, CultureLab, EPOS, GOPA, hera, INOVA+, Lattanzio Advisory, Niras Finland, IP Consult, Niras Sweden, Particip, PAI, SFERE, Transtec and World Learning.
PROMAN has been contracted to assist the Ministry of Education in the preparation and facilitation of the review of the 2016-2020 Five Year Education Plan and the definition of targets and actions to be carried out for the coming years of the plan.
PROMAN has been selected by UNICEF to provide technical assistance and overall guidance to the Government of Uzbekistan (and various Ministries of Education and institutions under them) through the process of developing a comprehensive Education Sector Plan (ESP) for the period 2018-2022. The comprehensive ESP is the key national education policy document, which provides a long-term vision for the education system in the country and outlines a coherent set of practicable strategies to reach its objectives and overcome difficulties.
PROMAN has been successfully supporting the NAO Office in Guinea since December 2015. An extension has been granted with the contract now to run till December 2019.
PROMAN has been selected to undertake the end of year 1 and mid-term evaluation of the third phase of the EU funded Support to the Technical and Vocational Education and Training (TVET) Sector Programme. The specific objective of the programme is to improve governance and private sector participation in the TVET sector to enhance access to quality skills development that meets demand of the labour market.
The past months have been extremely successful for PROMAN. No less than 8 new LT contracts have been awarded to PROMAN as lead company (4) or partner (4), with a total value of some € 34.2 million. These remarkable results further consolidate our impressive growth of the past years.
PROMAN in partnership with EGIS International (lead) has been awarded the contract for the provision of long term technical assistance for the implementation of the 11th EDF Territorial Development Support Programme (PADT) (€ 2.45 million). The specific objective of the PADT is to support the State and Territorial Administration (deconcentrated and decentralized) with the operationalization of the National Policy of Decentralization and Deconcentration (PONADEC). The period of implementation will be 4 years. Decentralisation and local development continue to be a major area of specialisation for PROMAN; over the past months the portfolio has grown with a further 3 long-term contracts in the sector.
The consortium led by PROMAN has been awarded the contract for the provision of Technical Assistance for the support programme to the implementation of EU-Papua New Guinea cooperation. The € 3.65 million TA contract will start in March 2018, with an implementation period of 40 months. Partners to the consortium are Cardno Emerging Markets and Transtec. PROMAN currently implements similar TA to NAO projects in Comoros, Guinea, Swaziland and Zambia.
PROMAN has been successfully supporting the NAO Office in Zambia since April 2015. An extension has been granted with the contract now to run till the 10th of December 2018.
We are delighted to announce that the consortium led by Palladium International BV with PROMAN as principal partner has been awarded the € 14 million EU funded contract for the provision of Technical Assistance for the 'Employment Promotion through SMMEs Support Programme for the Republic of South Africa'. The specific objectives/outcomes of the SMMEs Programme are (i) to improve the competitiveness of small, micro and medium enterprises (SMMEs) and their ability to meet procurement requirements of large multinational/local corporations, government and state-owned enterprises; (ii) to improve access to finance for SMMEs with limited/no access to finance and (iii) to improve the regulatory and administrative environment for SMMEs. The contract will start in March 2018 for a period of 52 months. Other partners to the consortium are Enclude and Tutwa Consulting Group.
PROMAN has been selected to undertake the final evaluation of the EU funded Technical Cooperation and Official Development Assistance Programme (TCODAP). The Programme aimed to enhance efficiency, effectiveness and sustainable management of incoming and outgoing ODA for better management and impact on strategic development priorities of South Africa.
PROMAN has been selected to provide TA to the Department of Higher Education and Training to build the knowledge and understanding of DHET staff in the extent and range of open learning approaches, open educational resources and the use of multi-media and materials development processes.
The consortium led by MDF with GIZ and PROMAN as partners has been retained as contractor for the Training on Financial and Contractual Procedures in the Framework of the 11th European Development Fund (EDF). Under this € 4.8 million TA contract training courses will be delivered for NAOs and RAOs staff, as well as EUD staff in the ACP countries over a period of 5 years.
PROMAN is very pleased to announce it has been nominated as regional operator for the project "Sécurité et Développement des Régions du Nord du Mali, phase II / Security and Development of Northern Regions of Mali, phase II" (total budget of € 19 million) for the regions of Gao, Kidal and Ménaka, financed by AFD and the EU Emergency Trust Fund for Africa. The overall objective of the project is to contribute to the stability and development of the regions of the North by supporting access to basic social services and revitalizing the local economy. The regional operator will provide support to local authorities on the different stages of the project cycle (identification and consultation for the selection of projects to be financed under the Local Investment Fund, technical and economic feasibility, formulation, procurement, monitoring of supply and work contracts etc.). The selection of PROMAN confirms its strong reputation in the country and the region in the management of projects in a difficult security environment. PROMAN has been active in the north of Mali since 1999. This 3-year contract will start at the end of January.
The PROMAN led consortium has been awarded the EU funded contract for the Technical Assistance team for the Development Initiative for Northern Uganda (DINU) programme. The purpose of this EUR 4 million TA contract is to assist the Office of the Prime Minister in the effective and efficient execution of the DINU programme. DINU has been designed as an integrated programme providing support to Northern Uganda in the 3 focal sectors identified in the NIP for the 11th EDF, which are food security and nutrition, good governance and transport infrastructures. The project will start mid-December for a period of 64 months. Partners in the consortium are Palladium International BV, NTU International A/S and Saba Engineering Plc.
PROMAN in partnership with B&S Europe has been awarded the contract for the final evaluation of 3 EU funded education sector programmes in Indonesia: (1) the EU Budget Support Programme "Education Sector Support Programme" (ESSP Phase I & II); (2) the "Analytical Capacity and Development Partnership" programme (ACDP) and (3) the "Minimum Service Standards Capacity Development Programme" (MSS CDP) in basic education.
PROMAN has been awarded the contract to assist the National Aid Fund (NAF) to develop a qualitative awareness, information and communication toward public, stakeholders and final beneficiaries. The National Aid Fund (NAF) is one of the leading institutions in the field of social protection in Jordan. Established in 1986, the Fund aims to provide assistance to the most deprived and vulnerable groups to improve their standard of living.
PROMAN in partnership with Lattanzio Advisory has been contracted to support the Women in Engineering and Technology Awareness Campaign in Guyana. The specific objectives of this assignment are to design (i) a communication strategy and annual awareness campaign for girls and women to study science, engineering and technology; (ii) a toolkit to enhance the role of the industrial attachment scheme in attracting more female interest in science, engineering and technology and (iii) an action plan for enhancing the role of women in the Sea and River Defence Board and other disaster risk management (DRM) sector related decision bodies. The assignment will start in September to be completed mid-2019.
PROMAN, in partnership with B&S Europe has been awarded the contract for the Mid-Term Evaluation of the Project SRRMLME- Support to the Reintegration of Returnees and to the Management of Labour Migration in Ethiopia.
Skills development is a major result area under the UNICEF Strategic Plan (2018-2022). PROMAN will assist UNICEF HQ in the development of program guidance on Skills for Employment. The guidance will provide practical advice to inform UNICEF Country Offices when developing related programs.
PROMAN, in partnership with Lattanzio Advisory has been awarded the contract "Consultancy to document lessons learnt and case studies from the Organisation of Eastern Caribbean States (ECS) 10th EDF programme". The purpose of this contract is to create a Lessons Learnt Information Package contextualising the continued journey toward Regional Integration over the last five (5) years towards the continued sensitization and education of the public.
Since January 2015, PROMAN has been successfully providing advisory services to the European Commission, both to headquarters and to delegations, with the aim of improving the effectiveness of the EU's development aid on education. The project has received additional funding with completion now scheduled in January 2020.
PROMAN has been awarded the contract for the 'Analysis of the Needs of Labour Market Institutions' in Bosnia and Herzegovina. The assignment will undertake a detailed analysis of existing infrastructure of Public Employment Services and IT requirements and other logistical aspects of labour market institutions and provide overall recommendations.
The consortium led by PROMAN has been successfully providing assistance to promote education quality and educational services in 9 regions of Madagascar since early 2013. In early July a final extension to the contract was signed, with a reduced expert team until end June 2018. Emphasis will be placed on the consolidation of achievements and the closure of this €32 million programme.
PROMAN, in partnership with B&S Europe has been awarded the service contract "Consultancy to develop and implement a public sector improvement programme for Barbados". The specific objectives of the assignment are to increase productivity levels within the public sector through sensitization, promotional and advocacy activities, by facilitating knowledge - and experience-sharing, providing the building blocks for a collective approach to solving problems and by training, equipping and supporting employees to maximize the effectiveness, productivity and performance of their departments/ministries, and hence the government of Barbados.
Since end 2014, PROMAN in partnership with PwC and Marge has been providing TA for the implementation of the regional programme "Renewable Energy Development and Energy Efficiency Improvement in Indian Ocean Commission Member States". The contract has now been extended for an additional 30 months with project completion scheduled in December 2019.
The consortium led by NIRAS, with PROMAN as partner has been awarded the contract for the project "Support to the Development of Social Welfare Regulatory Mechanisms" in the Republic of Serbia. The overall objective of this two year project is to contribute to smart, sustainable and inclusive growth for the Republic of Serbia by building a more knowledgeable and skilled labour force, improving social protection policies and promoting the social inclusion of vulnerable populations.
PROMAN will support the International Development Cooperation Unit, National Treasury on the management of the call for proposals under the General Budget Support Programme. The assignment will start end May for a period of two years.
The contract for the External Results Oriented Monitoring (ROM) reviews and support missions concerning projects and programmes financed by the EU for the Asian and Pacific regions including OCTs has been extended until end April 2018. The project started on 1 January 2015.
PROMAN has been successfully supporting the Government of Mali on the decentralisation process under two consecutive EDF funded contracts since end 2006. The current PARADDER contract has now been extended till December 2018.
A new award under the UNICEF LTA. The purpose of the consultancy is to reinforce and help finalize the current draft UNGEI Strategic Directions 2017-2022 paper, including the development of a Theory of Change, a results/monitoring, evaluation and learning (MEL) framework and a governance framework.
PROMAN will support the EU Delegation on the regular monitoring of the implementation of the vocational education sector reform support programme. The expertise will be provided by a team of three experts. A total of 6 missions are scheduled in the period May 2017 to March 2020. Last year PROMAN was awarded the contract for a similar assignment focusing on the education sector reform support programme. Under this assignment the first three monitoring missions have been successfully completed.
Another contract award under the LTA agreement. The objective of the mission is to develop guidance and mechanisms to strengthen knowledge management and capitalisation of projects and experiences, based on the Ministry of Education (MoE)-UNICEF cooperation experiences implemented during the period 2012-2016. This includes the development of a methodological guide (manual) and a training module to strengthen capacities at central and decentralised levels.
We are very pleased to announce PROMAN has been awarded the contract for the provision of TA to the Development Cooperation Support Programme (PAC). The specific objective of this contract is to increase the technical capacities of the services of the National Authorizing Officer, the NAO and the technical ministries in order to improve project management in the framework of the cooperation between the Union of the Comoros and the EU. The project will start in April for a period of three years. This contract further consolidates PROMAN's position in the Indian Ocean Region, where various other major projects are currently ongoing: PASSOBA-Education (Madagascar), PROCOM (Madagascar), HRD (Seychelles), Biodiversity (IOC regional), Renewable Energy (IOC regional) and Islands II (IOC regional). The provision of TA to NAO services remains a core speciality of PROMAN, with ongoing projects in Guinea, Zambia and Swaziland.
The consortium led by Particip, with PROMAN as partner has been successfully supporting the NAO services in Swaziland since December 2014. The project will be extended for another 20 months with project completion now scheduled end 2018.
PROMAN will assist the Ministry of Education and Sports (MoES) in developing a costed 3-year action plan (2018-2020) for the Early Childhood Education (ECE) sub-sector plan. The costed action plan will be used by the MoES and development partners to support the planning, implementation and monitoring of the MoES annual work plans as well as the Education Sector Development Plan (ESDP) 2016-2020.
PROMAN has been selected by the Evaluation Office UNICEF HQ, NY to undertake the formative evaluation of the Out-of-school-Children Initiative (OOSCI). OOSCI was launched in 2010 by UNICEF and the UNESCO Institute for Statistics (UIS). It aims to 'turn data into action' by developing detailed 'profiles' of out-of-school children, identifying barriers that are pushing them out of school, and proposing changes in partner government policies and strategies to address these barriers. Field visits are planned to Bangladesh, Bolivia, Burkina Faso, Ethiopia, Indonesia, Kyrgyzstan, Sudan and Sri Lanka.
PROMAN has been contracted by the EU Delegation to undertake a gender analysis. The gender analysis will provide an understanding of whether gender inequalities persist in Nigeria, of their causes, of how they intersect with other inequalities, and of how they impact on the enjoyment of human rights and on the benefits produced by, and access to, development efforts. It will also provide an understanding of the Nigerian government's commitment and capacity to work on GEWE issues. The analysis will provide relevant and reliable information which the EU and Member States may use to (i) prepare gender-sensitive development response strategies and (ii) contribute to the political dialogue.
The consortium led by PROMAN has been awarded a major contract for the provision of Technical Assistance in Support of the Education Sector in Sierra Leone. The purpose of this €4.1 million contract is to provide the beneficiary country and in particular the Ministry of Education, Science and Technology (MEST), the Teaching Service Commission, selected district education offices and selected teacher colleges with technical assistance to deliver on the strengthening of management capacity and the provision of education services, in compliance with national education policies and targets. Technical assistance will be carried out by a team of 4 long-term key experts complemented with a pool of short/medium-term experts operating in Freetown and in other locations across the country over an implementation period of 30 months. Partners in the consortium are Palladium, Plan International and Redi4Change.
PROMAN will provide the services of a Senior Border Management Expert in the team undertaking the Evaluation of EU support for Security Reform in enlargement and neighbourhood countries (2010-2016).
PROMAN will assist UNICEF HQ in the production of the Education Annual Results Report for 2016. UNICEF's Annual Results Reports outline the organization's results against the Strategic Plan 2014–2017 to advance children's rights and equity in the areas of health; HIV and AIDS; water, sanitation and hygiene; nutrition; education; child protection; social inclusion; humanitarian action; and gender. The reports detail what UNICEF achieved in each outcome area, working with diverse partners at the global, regional and country levels, and examine the impact of these accomplishments on the lives of children and families worldwide.
PROMAN has been awarded the contract for the Mid-term Review of the Programme to Support Pro-poor Policy Development (PSPPD) II. The programme contributes to improved policies, building systems and institutional capacity to reduce poverty and inequality through evidence-based policy-making.
PROMAN is pleased to announce that the consortium consisting of AECOM International Development Europe (lead) and PROMAN, PAI, Democracy Essentials, Global Operational Support (partners) is one of the 4 consortia retained on the EU funded Lot 1 Framework Contract for Support to Electoral Missions. The scope of lot 1 is to provide services (expertise, material and technical support) for Election Observation Missions (EOMs) and Election Assessment Team (EATs) missions which observe electoral processes in partner countries of Africa, Middle East, Asia, Latin America and the Pacific region. The maximum estimated budget for this two-year FWC totals €215 million.
PROMAN has been selected to undertake the institutional, organizational and functional audit of the National Institute for Pedagogic Training (INFP) and its regional centres (CRINFP).
PROMAN in partnership with Lattanzio has been awarded the contract for the formulation of the 3rd phase of the EU Support Programme for the Implementation of the National Literacy Strategy (Alpha III).
PROMAN will provide support to the EU Delegation on "Monitoring, assessment and support to EU and other donor funded Education and complementary programs by the Ministry of Education to deal with the Syria crisis". The contract is to run for a period of two years.
PROMAN has been awarded the contract to provide Technical Assistance for the research on "Using Social Dialogue as a socio-economic development tool". The specific objective of the assignment is to provide the European Commission (DEVCO B3) and the European Delegations with a reference document (orange publication): (i) developing an understanding of Social Dialogue and the added value of including its related mechanisms in development actions; and (ii) identifying Social Dialogue good practices in technical cooperation projects that can provide operational guidance on how best to include and support it in future actions.
PROMAN will assist the Government of Angola on the identification and formulation of the 11th EDF higher education support programme. The assignment will include a detailed review of the sector.
PROMAN has been contracted by DFID to support the Donor Coordination Unit at the Ministry of Education (MOE) to develop and finalise one common results framework for delivery of quality formal education for Syrian refugee children. The work will be carried out in close coordination with key donors (Canada, EU, Germany, Norway, US and UK).
PROMAN has been awarded the contract for the provision of specialized sector expertise on the formulation of EU support in the field of communication and audio-visual in Morocco.
PROMAN is very pleased to announce that its bid for the development of a Human Resource Development Strategy for the Seychelles has been successful. The overall objective of the assignment, which will be undertaken in partnership with the Centre for Employment Initiatives (UK) and Edge Consulting (Mauritius) and will last one year, is to develop an HRD strategy for the next 5 years in order to transform the knowledge and skills base, diversify the economy and address the skills and expertise gaps. This contract reinforces PROMAN's position in the Indian Ocean region. Other major ongoing projects include PASSOBA-Education (Madagascar), PROCOM (Madagascar), Renewable Energy (IOC), Biodiversity (IOC) and Islands II (IOC).
PROMAN in association with Lattanzio Advisory has been contracted by the EU Delegation to conduct a Gender Audit of EU bilateral cooperation in Cambodia. The gender audit should (i) measure how gender mainstreaming has been implemented in ongoing EU bilateral projects and programmes; (ii) identify good practices and provide specific recommendations (possibly including gender indicators) for each project as appropriate to improve gender equality and inclusion; and (iii) provide analysis and recommendations that will inform and trigger debate around gender mainstreaming in EU cooperation.
PROMAN has been selected to conduct the final evaluation of the EU funded project "Supporting Technical and Vocational Education and Training Reform in Pakistan" (TVET II). The work will be undertaken by a team of four experts over a period of 6 months.
The consortium led by PROMAN has been successfully providing assistance to promote education quality and educational services in 9 regions of Madagascar since early 2013. In early August a new contract was signed extending the TA till end October 2017, with the expert team now consisting of 13 long-term key experts and bringing the total value of TA services to 8.650.000 EUR.
PROMAN has been awarded the contract for the Interim Evaluation of Employment and Vocational Education Training (EVET) and to assess the feasibility of the EU's further support for education, focused on Lifelong Learning.
PROMAN has been awarded the contract for the Evaluation of the UNESCO/EU Expert facility on the governance of culture in developing countries. The overall objective of the Expert facility is to reinforce the role of culture as a vector of sustainable human development in developing countries.
PROMAN experts will undertake a study to explore options for a new EU-ACP Research and Innovation programme that will both meet the needs and demands of ACP countries in the context of Agenda 2030, particularly SDG 9 and its targets, and bring an added value at collective intra-ACP level.
PROMAN in partnership with Arp Dévelopement and CENAFOD has been successfully providing TA to the Support Programme for Administrative Reform in Decentralisation and Regional Economic Development (PARADDER) since February 2012. In mid-July an extension with increased funding was signed, extending the contract for a further ten months. The project is now scheduled to end in June 2017.
PROMAN has been awarded the contract for the final evaluation of the EU funded Inclusive Education for Children with Special Needs Project in Uzbekistan. The project focuses on pre-school and primary school (1-4 classes), and promotes the inclusion of 1,200 children aged 2-10 with intellectual and motor disabilities and developmental delays.
PROMAN will provide the services of a senior education expert (general education and TVET) in the team on the preparation of the 11th EDF National Indicative Programme for the Central African Republic. The assignment is expected to be completed by the end of 2016.
PROMAN has been awarded the contract for the Review of the Education Sector Reform Contract. Within the framework of the ESRC (AAP2015), the EU provides financial assistance to the Kyrgyz Republic during Kyrgyz fiscal years 2016-2018. This assistance is provided through a foreign currency facility channeled into the national budget. The overall objective of the contract is to provide a detailed review of the implementation of the ESRC 2015 and to enable the EU to use the outcomes of the reviews for decisions on instalment disbursement and programme execution, as well as to contribute to the Programming exercise to be funded by the Annual Action Plan 2018. The assignment will start in July 2016, to be completed during the 1st semester of 2018.
PROMAN has been selected as a service provider on the Long Term Agreement for the provision of timely and high-quality technical expertise to the UNICEF education sector in the areas of early learning, gender/girls' education, inclusive education, quality and learning, and education in emergencies and resilience. The LTA will be called upon at short notice to provide technical assistance, advice, capacity building or support services to UNICEF regional and country offices when requested. The LTA has an overall duration of two years.
PROMAN has been awarded the EU funded contract "Evaluation of the policy steps taken by the Ukrainian government towards the delivery of social services to Internally Displaced Persons (IDPs)". The global objective of the assignment is to assist the Ukrainian authorities in putting in place a policy framework which respects the social, economic and human rights of the IDP population and fosters social cohesion in challenging political and economic circumstances.
UNICEF has renewed the contract for the provision of the services of a Technical Advisor to Support Decentralization and De-concentration Reform in the Education Sector, for a further year.
PROMAN has been awarded the EU funded contract 'Support for the Implementation of the Mauritius Strategy for SIDS in the ESA-IO region – Phase II (ISLANDS II)'. The overall objective of the project is to contribute to the Sustainable Development of the Small Island Developing States of the ESA-IO region by addressing specific development constraints of beneficiary countries (their natural characteristics linked to insularity, their relative smallness, proneness to natural disasters, and limited access to capital) and by fostering regional and global SIDS-SIDS cross-fertilisation. The contract will start 1st of March 2016, for a duration of 20 months.
We are pleased to announce that the consortium led by Landell Mills, with PROMAN as a member, has been retained as framework contractor on Lot 1 (Evaluation). The scope of this contract is to carry out evaluations of geographic (region/country) cooperation strategies and programmes, thematic multi-country evaluations, and evaluations of selected policy issues and aid modalities, particularly budget support operations, and to support the dissemination of the results (lessons learned and evaluation recommendations) and the development of appropriate methodological approaches and tools for evaluation. This new FWC will start on 1st March for an initial duration of maximum 2 years and can be extended by up to 2 more years.
We are pleased to announce that the contract for the External Results Oriented Monitoring (ROM) reviews and support missions concerning projects and programmes financed by the EU for the Asian and Pacific regions, including OCTs, has been extended till end April 2017. The project started on 1st of January 2015 and was initially to last for one year.
PROMAN, in partnership with AECOM has been awarded the EU funded contract 'Support to the implementation of the PFM Action Plan 2015-2017'. This consultancy will be delivered over a period of indicatively 18 months through several in-country missions with an indicative start date in February 2016.
PROMAN will support the EU Delegation on the regular monitoring of the implementation of the education sector reform support programme. The expertise will be provided by a team of three experts. A total of 7 missions are scheduled in the period May 2016 to November 2018.
PROMAN has been awarded the contract for the 'Comprehensive assessment of Social Units in the poorest Governorates in Egypt'. The global objective is to assess the capacities of the Egyptian Ministry of Social Solidarity to develop its Social Units into efficient, client-oriented and adequately resourced social service providers.
PROMAN will provide expertise on development policy to the Permanent Representation of the Republic of Slovakia to the EU in Brussels and the team dealing with the EU Presidency in the headquarters/capital of its ministries, in view of handling a demanding EU and Global Development Agenda. The assignment will last from May 2016 to January 2017.
PROMAN in partnership with B&S Europe will undertake the thematic evaluation on EU support to Economic Governance in enlargement and neighbourhood countries, covering Albania, Bosnia and Herzegovina, Kosovo, the former Yugoslav Republic of Macedonia, Montenegro and Turkey. The contract will run for 9 months and start in May 2016.
A PROMAN expert team will support the EU Delegation to Swaziland in programming the second tranche of the EDF11 (9.2 million EUR) for improved access to education and decent life for vulnerable children.
PROMAN has been awarded the contract for the TA to NAO project in Guinea Conakry. The expert team will consist of 4 long-term experts. The project will start early December for a period of 3 years.
PROMAN was officially mandated by the Luxemburg Government as executing agency for the implementation of the Sustainable Development Programme in the Gao region. With a total budget of 4.920.000 EUR, the programme will focus on 2 major components: rural development and food security and TVET. Implementation has formally started on 1st October 2015, to be completed by 31st December 2019.
PROMAN was officially mandated by the Luxemburg Government as executing agency for the implementation of the fourth phase of the Sustainable Development Programme in the Kidal region. With a total budget of 8.380.000 EUR, the programme will focus on 4 major components: rural development and food security, TVET, health and decentralisation/good governance. Implementation has formally started on 1st October 2015, to be completed by 31st December 2019.
PROMAN, member of the consortium led by TRANSTEC will field a new mission under the FWC for Mid-term and Final evaluations of Projects and Programmes of the Belgian Technical Cooperation in the Education Sector. The objective of the assignment is to undertake the mid-term evaluation of the programmes "Strengthening organizational capacity through scholarships" in Senegal and Morocco.
PROMAN has been awarded the contract for the Final Evaluation of the Support to the Technical and Vocational Education and Training Sector project. The assignment will be undertaken in two phases, to be completed by end 2016.
PROMAN has been selected to assist the EU Delegation in Myanmar in the formulation of its support to the National Education Sector Plan (NESP) and to the PFM reform strategy. This is the second major assignment in Myanmar on the preparation and formulation of EU support to the education sector. The expertise will be provided by a team of 4 experts over a period of one year.
PROMAN in partnership with B&S Europe will provide support to the EUD on three work tender evaluations in the water and sanitation sector.
PROMAN in partnership with Lattanzio and B&S Europe will undertake the evaluation of the support in the area of equal access to quality education (integrated and inclusive education) financed under the Operational Programme for Human Resource Development 2007-2013.
PROMAN in partnership with Lattanzio has been awarded the contract for the Formulation of the Education Component of the EU Social Sector Support Programme under the 11th EDF.
PROMAN and B&S Europe have been awarded the contract to evaluate the results so far achieved through the EU support delivered under the Country Strategy Programme 2007-2013. Focal sectors of the CSP include Good Governance, Private Sector Development, and Basic Social Services.
PROMAN and B&S Europe have been awarded the contract to provide technical assistance and expertise to the EU Office in Kosovo for the assessment of project proposals under several Calls (Civil Society 2014-2015, IPA 2013 (Civil Society grant scheme), EIDHR 2014).
PROMAN in partnership with B&S Europe will support the Ministry of Culture and Heritage Preservation in Tunisia in the realization of 4 priority actions of its 2015 programme including development of a documentary portal of the National Library, status of artists, institutional strengthening of cultural centres and realisation of a communication campaign.
PROMAN will provide legal advice to the Ministry of Labour and Social Affairs on the evaluation of the existing gap between the Labour Code (and its regulations) and the 'acquis communautaire' and European best practices.
PROMAN has been contracted by the EU Delegation in Armenia to undertake a Review of the Sector Support Programme for Continuation of Vocational Education and Training (VET) Reform and Development of an Employment Strategy.
PROMAN has been awarded the contract to support the EU Delegation in the identification and formulation phase of planning for the continuation of its support to the TVET sector in Pakistan.
PROMAN (lead company) in partnership with GRM International has been awarded the EU funded contract for the provision of long term technical support to the National Authorizing Officer in Zambia. This 3 year project with a budget of 1.8 million EUR has formally started on 15th April 2015.
PROMAN has been awarded the contract for the provision of short-term technical assistance to support the preparation of the disbursement requests for the second and third budget support tranche of the EU-Cambodia Education Sector Reform Partnership 2014-2016.
PROMAN has been contracted by the EU Delegation in Kampala to undertake a Salary and Remuneration survey for EDF funded projects in Uganda. The mission is to start end March 2015.
UNICEF has renewed the contract for the provision of the services of a Technical Advisor to Support Decentralization and De-concentration Reform in the Education Sector, for a further 2 years (2015-2016). Activities are to start in March 2015.
PROMAN will provide the services of the ICT expert on the expert team for the identification and formulation on the new EU Regional Environmental Programme for Central Asia. The assignment is undertaken in partnership with B&S Europe, Linpico and AGRER.
On 5th March 2015 the new Indicative Programme (PIC) 2015-2019 was signed between the Governments of Mali and the Grand-Duchy of Luxemburg. PROMAN is the official Implementing Agency for programmes in the northern concentration zone of the PIC. Main focal areas include rural development and food security and TVET for the Gao region; and rural development and food security, TVET, health and decentralisation/good governance for the Kidal region. The total budget amounts to 13.3 million EUR for the period 2015-2019. Implementation is scheduled to start mid-2015.
PROMAN has been awarded the contract for the provision of short-term technical assistance for supporting the implementation of the Promoting Heritage for Ethiopia's Development programme (PROHEDEV) funded by the EU. The objective of the mission is to provide technical support to the Ministry of Culture and Tourism during the start-up of the programme and to provide targeted training for museum professionals.
The consortium led by Louis Berger with PROMAN and PBLH as partners has been awarded the contract for the provision of Technical Assistance to the Ministry of Social Assistance and Reintegration (MINARS) in the framework of the Project to support the Government of Angola to define and implement an effective policy for Social Protection and Social Solidarity. The specific objective of this 4 year TA contract with a budget of over EUR 9 million is to strengthen the institutional capacity of the Ministry, leading to enhanced national social assistance responding to the population's needs with a focus on the most vulnerable groups.
The overall objective of the study is to support DG NEAR and IPA II beneficiaries in strengthening the monitoring and reporting systems to track the performance of IPA II assistance, while enhancing the transparency and visibility of IPA II funds, and providing support for related information and communication activities targeting stakeholder audiences in the EU Member States and a wider public.
PROMAN will undertake the final evaluation of the Regional Support Programme to Cultural Initiatives, covering PALOP-Timor Leste. Field work will cover Guinea Bissau, Cape Verde, Sao Tome & Principe and Mozambique.
The consortium led by Landell Mills with PROMAN and LINPICO as partners has been awarded the contract for the External Results Oriented Monitoring (ROM) reviews and support missions concerning projects and programmes financed by the European Union for the Asian and Pacific regions including OCTs. This lot covers the EU funded national and regional projects and programmes, whether single or multi-country, in Afghanistan, Bangladesh, Bhutan, Cambodia, China, Cook Islands, East-Timor, Fiji, French Polynesia, India, Indonesia, Iran, Iraq, Kazakhstan, Kiribati, Kyrgyzstan, Laos, Malaysia, Maldives, Marshall Islands, Micronesia, Mongolia, Myanmar, Nauru, Nepal, New Caledonia and Dependencies, Niue, Pakistan, Palau, Papua New Guinea, Philippines, Pitcairn, Samoa, Solomon Islands, Sri Lanka, Tajikistan, Thailand, Tonga, Turkmenistan, Tuvalu, Uzbekistan, Vanuatu, Vietnam, Wallis and Futuna Islands, and Yemen. The project started on 1st of January 2015 and will initially last for one year. Over 300 ROM reviews and support missions are foreseen.
PROMAN has been awarded the Mid-term review of the ongoing 'SHARE: Supporting the Hardest to Reach through Basic Education' Programme in Bangladesh. The specific objectives of the SHARE programme are to provide quality basic education opportunities for the hardest-to-reach children and their parents and guardians in 219 upazillas and thanas of 47 districts in 7 divisions of Bangladesh. The mission will start in January 2015.
The global objective of the assignment is to provide findings and conclusions on the performance of EU assistance in Turkey in the field of occupational health and safety with regard to the alignment with the EU acquis and practices and recommendations on the measures/actions that might be addressed by IPA 2014-2020 to improve programming and future project identification.
PROMAN in partnership with Particip has been contracted to provide Support to the Ministry of Women, Children and Youth Affairs and Regional Authorities in Ethiopia to start up activities in the area of the enhancement of women's economic status in Ethiopia, under the WBP Project. The mission will last from December 2014 to April 2015.
PROMAN (lead company), in partnership with 4Assist and Save the Children Norway, has been awarded the service contract for the Support to the Education Sector Reform project in Lao PDR. The purpose of this 2.5 year project is to improve quality and relevance through the implementation of Education Quality Standards for primary education, improved textbook management and supply, and a strengthened planning and budgeting process. The project will start mid-January 2015.
The consortium led by Particip with PROMAN as partner has been awarded the long-term contract for the provision of 'Technical Assistance to the NAO Support Unit and Relevant Line Ministries in Order to Build Capacity in Project Management, Mbabane, Swaziland'. This two-year project complements PROMAN's impressive TA to NAO track record.
PROMAN in partnership with PAI has been contracted to provide Consultancy Services for the Elaboration of an Action Plan for the Tertiary Education Strategic Plan 2013-2025. The global objective of the assignment is to assist the Republic of Mauritius in achieving its objective of becoming a knowledge-based economy.
PROMAN (lead) in partnership with GRM International BV has been awarded the Education Advisory Services project.
The contract will consist of: (i) providing advisory services to the European Commission, both to headquarters and to delegations, with the aim of improving the effectiveness of the EU's development aid on education. This will be achieved by providing adequate support at the key steps of the project/programme cycle and by increasing the know-how and capacity of staff in charge of operations in the education sector; (ii) supporting the EU's contribution to the international policy debate and the definition of its own strategies for cooperation in the education sector, and enhancing the accountability and visibility framework of the EU; (iii) contributing, on a request basis and resources allowing, to reinforcing the technical capacities of the EU's main stakeholders in the field of education. The project will start in January 2015 and is scheduled to be completed by June 2017.
This contract reinforces PROMAN's position as key player in the education sector.
PROMAN provides the services of a Democratisation Specialist on the Mid-Term Evaluation of the 10th EDF SADC Regional Political Cooperation Programme (FWC COM Lot 1). The EUR 18 million programme is centred on 4 different components: Strengthening Democratic Institutions, Conflict Prevention and Management, Disaster Risk Reduction and Management, and Trafficking in Persons. The mission is to start end October 2014.
PROMAN is partner to ICF International on the "Evaluation of the technical assistance component of DFID India's Education Portfolio", awarded under the DFID Global Evaluation Framework Agreement. The Independent Commission for Aid Impact (ICAI) has challenged DFID to better capture the full range of impacts from engagement in the social sectors in India. This is the first major study of its kind to look at a DFID portfolio relating to technical assistance. The study will start in the first week of November 2014 and end in March 2016.
On the 20th of October 2014, the formal kick-off meeting took place on the 'Study to design a programme/clearinghouse providing access to higher education for Syrian refugees and internally displaced persons'. The specific objective of the assignment is to assist in the design of a future programme by the EU to enhance access to further and higher education for young Syrians who had to drop out of formal education, especially internally displaced students inside Syria and Syrian refugees across the region, with a focus on Jordan and Lebanon, but also on Turkey and Iraq.
The Directorate of the Centre for the European Union Education and Youth Programmes (Turkish National Agency) is responsible for the management and implementation of the EU Youth Cooperation Programmes such as Erasmus, Comenius, Comett-I, Petra-I, Youth for Europe, Lingua, Eurotechnet, Force and Socrates. In 2012 the TRNA was allowed to recruit an additional 50 staff members, bringing the total staff to 178. The specific objective of this assignment is to provide technical assistance to the TRNA in training the newly-employed assistant experts in the areas of PCM, LFA, development of project proposals, M&E, financial management, IPA, etc. PROMAN is the lead company and provides the services of the trainer in IPA and financial management.
PROMAN in partnership with Lattanzio e Associati has been awarded the contract for the Mid-Term Review of the Second Education Sector Development Programme in Somalia. This evaluation will contribute to the broader process of results mapping under the current programme, contracting of the EU-funded ESDP III programme and planning for future EU funding in the education sector, in line with Sector-Wide Approaches. The assignment started in September and is expected to be completed in November 2014.
The 'Comprehensive Framework for the European Union's policy and support to Myanmar/Burma', adopted by the Foreign Affairs Council in July 2013, sets out the framework for EU policy and support to the on-going reforms in Myanmar. The EU is now finalising its Multiannual Indicative Programme (MIP) 2014-2026 and the global objective of the assignment is to provide assistance in the identification and appraisal of options for EU support to the education sector in Myanmar.
PROMAN in partnership with Particip and B&S Europe has been awarded the contract for the 'Monitoring, assessment and support to EU funded Education and complementary programs by the Ministry of Education, UN Agencies and NGOs' in Jordan. The main objectives of the mission are (i) to provide an overall independent assessment of the European Union's past and current cooperation under the EU Support to the Second Phase of the Education Reform (EUSSPER) programme in Jordan in the field of basic education under the ERfKE II program, in particular an assessment of the completion of agreed benchmarks as well as of the newly planned EU budget support programmes in the field of basic education; (ii) to monitor, through field verification, the education programmes in support of Syrian refugees implemented by the Ministry of Education, the UN agencies and their sub-contracted NGOs, in terms of overall and administrative efficiency; (iii) to make recommendations for future cooperation programming in the field of education (bilateral, regional and thematic) with Jordan and for the improvement of the European Union's current and future implementation strategies for (Syrian) refugees, as well as suggestions to strengthen the visibility of the interventions; (iv) to develop a strategy for the Ministry of Education and the Ministry of Higher Education in the field of Vocational Education and Training; and (v) to develop recommendations for the teacher induction programme. Started in September 2014, the mission will last till June 2016. PROMAN is the lead contractor and provides the services of the Team Leader/Education Expert and the Monitoring Expert.
PROMAN has been contracted for the Review of South African Trilateral Development Cooperation Activities, under the Technical Cooperation and Official Development Assistance Programme (TCODAP). The mission is scheduled from early to mid-2015.
Two new mandates have been signed with the Ministry of Foreign Affairs of the Government of Luxemburg. The project 'Support for improving access to basic social services for people affected by the crisis in the Kidal region' will extend activities started in 2013. The project will run from June 2014 to January 2015. A similar project was approved for the Gao region, to be implemented over the period October 2014 to June 2015.
Both projects will complement other activities in Mali undertaken by PROMAN as executing agency for the Luxembourgish Government.
PROMAN in partnership with ACE International Consultants (lead company) has been awarded the 5-year EU-funded Technical Assistance to the Employment and Regional Integration Support Programme (PROCOM) contract in Madagascar. PROCOM aims to strengthen the capacity of the private sector to grow inclusively and to be more competitive on national, regional and international markets, notably through: (i) strengthening intermediary organizations to act as a lever for change and competitiveness; (ii) developing technical, managerial and marketing skills of MSMEs; and (iii) facilitating and securing commercial transactions of MSMEs nationally, regionally, and internationally. The TA team will start activities the first week of November 2014. This is the second major project currently on-going in Madagascar, next to the PROMAN led TA to PASSOBA-Education.
The Consortium led by PwC South Africa, with PROMAN and MARGE as members, has kick-started project activities under the recently awarded 'Technical Assistance for the implementation of the regional programme 'Renewable Energy Development and Energy Efficiency Improvements in Indian Ocean Commission Member States'. The programme aims to create the conditions for increased access to modern and sustainable energy services at acceptable cost, focused on both demand- and supply-side measures and based on indigenous and renewable energy sources, and to optimise the energy supply requirements which the economy of each country can afford, facilitating trade in this area. The programme covers the Indian Ocean Commission Member States: Comoros, Madagascar, Mauritius, Seychelles and Reunion Island (with specific funding). The project will run from October 2014 to June 2017. PROMAN will be co-managing the project with PwC South Africa.
This project further reinforces PROMAN's presence in the region. In April 2014 the consortium led by Landell Mills, with PROMAN as partner, started activities under the Technical Assistance contract for the implementation of the Regional programme 'Coastal, marine and Island specific Biodiversity Management in the Eastern and Southern Africa-Indian Ocean Coastal States', which will run till January 2018.
# coding=utf-8
# Copyright 2018 The Dopamine Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Module defining classes and helper methods for general agents."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
import time
from dopamine.agents.dqn import dqn_agent
from dopamine.agents.bdqn import bdqn_agent
from dopamine.agents.kdqn import kdqn_agent
from dopamine.agents.implicit_quantile import implicit_quantile_agent
from dopamine.agents.rainbow import rainbow_agent
from dopamine.discrete_domains import atari_lib
from dopamine.discrete_domains import checkpointer
from dopamine.discrete_domains import iteration_statistics
from dopamine.discrete_domains import logger
import numpy as np
import tensorflow as tf
import gin.tf
def load_gin_configs(gin_files, gin_bindings):
"""Loads gin configuration files.
Args:
gin_files: list, of paths to the gin configuration files for this
experiment.
gin_bindings: list, of gin parameter bindings to override the values in
the config files.
"""
gin.parse_config_files_and_bindings(gin_files,
bindings=gin_bindings,
skip_unknown=False)
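# Usage sketch (the config path and binding below are illustrative
# assumptions, not files shipped with this code):
#
#   load_gin_configs(['configs/dqn.gin'], ['DQNAgent.gamma = 0.95'])
#
# Bindings are applied after the files are parsed, so they override any
# values set in the config files.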
@gin.configurable
def create_agent(sess, environment, agent_name=None, summary_writer=None,
debug_mode=True):
"""Creates an agent.
Args:
sess: A `tf.Session` object for running associated ops.
environment: A gym environment (e.g. Atari 2600).
agent_name: str, name of the agent to create.
summary_writer: A Tensorflow summary writer to pass to the agent
for in-agent training statistics in Tensorboard.
debug_mode: bool, whether to output Tensorboard summaries. If set to true,
the agent will output in-episode statistics to Tensorboard. Enabled by
default here, although summary writing results in slower training.
Returns:
agent: An RL agent.
Raises:
ValueError: If `agent_name` is not in supported list.
"""
assert agent_name is not None
if not debug_mode:
summary_writer = None
if agent_name == 'dqn':
return dqn_agent.DQNAgent(sess, num_actions=environment.action_space.n,
summary_writer=summary_writer)
elif agent_name == 'rainbow':
return rainbow_agent.RainbowAgent(
sess, num_actions=environment.action_space.n,
summary_writer=summary_writer)
elif agent_name == 'implicit_quantile':
return implicit_quantile_agent.ImplicitQuantileAgent(
sess, num_actions=environment.action_space.n,
summary_writer=summary_writer)
elif agent_name == 'bdqn':
return bdqn_agent.BDQNAgent(sess, num_actions=environment.action_space.n,
summary_writer=summary_writer)
elif agent_name == 'kdqn':
return kdqn_agent.KDQNAgent(sess, num_actions=environment.action_space.n,
summary_writer=summary_writer)
else:
raise ValueError('Unknown agent: {}'.format(agent_name))
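# Hedged usage sketch: constructing an agent by name for an Atari environment.
# The game name is an illustrative assumption; any game supported by
# atari_lib.create_atari_environment would do.
#
#   env = atari_lib.create_atari_environment(game_name='Pong')
#   with tf.Session() as sess:
#     agent = create_agent(sess, env, agent_name='dqn')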
@gin.configurable
def create_runner(base_dir, schedule='continuous_train_and_eval'):
"""Creates an experiment Runner.
Args:
base_dir: str, base directory for hosting all subdirectories.
schedule: string, which type of Runner to use.
Returns:
runner: A `Runner` like object.
Raises:
ValueError: When an unknown schedule is encountered.
"""
assert base_dir is not None
# Continuously runs training and evaluation until max num_iterations is hit.
if schedule == 'continuous_train_and_eval':
return Runner(base_dir, create_agent)
# Continuously runs training until max num_iterations is hit.
elif schedule == 'continuous_train':
return TrainRunner(base_dir, create_agent)
else:
raise ValueError('Unknown schedule: {}'.format(schedule))
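# Minimal entry-point sketch (the base_dir is an assumption):
#
#   runner = create_runner('/tmp/dopamine_run')
#   runner.run_experiment()
#
# With the default schedule this alternates training and evaluation phases;
# 'continuous_train' skips evaluation entirely.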
@gin.configurable
class Runner(object):
"""Object that handles running Dopamine experiments.
Here we use the term 'experiment' to mean simulating interactions between the
agent and the environment and reporting some statistics pertaining to these
interactions.
A simple scenario to train a DQN agent is as follows:
```python
import dopamine.discrete_domains.atari_lib
base_dir = '/tmp/simple_example'
def create_agent(sess, environment):
return dqn_agent.DQNAgent(sess, num_actions=environment.action_space.n)
runner = Runner(base_dir, create_agent, atari_lib.create_atari_environment)
runner.run()
```
"""
def __init__(self,
base_dir,
create_agent_fn,
create_environment_fn=atari_lib.create_atari_environment,
checkpoint_file_prefix='ckpt',
logging_file_prefix='log',
log_every_n=1,
num_iterations=200,
training_steps=250000,
evaluation_steps=125000,
max_steps_per_episode=27000):
"""Initialize the Runner object in charge of running a full experiment.
Args:
base_dir: str, the base directory to host all required sub-directories.
create_agent_fn: A function that takes as args a Tensorflow session and an
environment, and returns an agent.
create_environment_fn: A function which receives a problem name and
creates a Gym environment for that problem (e.g. an Atari 2600 game).
checkpoint_file_prefix: str, the prefix to use for checkpoint files.
logging_file_prefix: str, prefix to use for the log files.
log_every_n: int, the frequency for writing logs.
num_iterations: int, the iteration number threshold (must be greater than
start_iteration).
training_steps: int, the number of training steps to perform.
evaluation_steps: int, the number of evaluation steps to perform.
max_steps_per_episode: int, maximum number of steps after which an episode
terminates.
This constructor will take the following actions:
- Initialize an environment.
- Initialize a `tf.Session`.
- Initialize a logger.
- Initialize an agent.
- Reload from the latest checkpoint, if available, and initialize the
Checkpointer object.
"""
assert base_dir is not None
self._logging_file_prefix = logging_file_prefix
self._log_every_n = log_every_n
self._num_iterations = num_iterations
self._training_steps = training_steps
self._evaluation_steps = evaluation_steps
self._max_steps_per_episode = max_steps_per_episode
self._base_dir = base_dir
self._create_directories()
self._summary_writer = tf.summary.FileWriter(self._base_dir)
self._environment = create_environment_fn()
# Set up a session and initialize variables.
self._sess = tf.Session('',
config=tf.ConfigProto(allow_soft_placement=True))
self._agent = create_agent_fn(self._sess, self._environment,
summary_writer=self._summary_writer)
self._summary_writer.add_graph(graph=tf.get_default_graph())
self._sess.run(tf.global_variables_initializer())
self._initialize_checkpointer_and_maybe_resume(checkpoint_file_prefix)
def _create_directories(self):
"""Create necessary sub-directories."""
self._checkpoint_dir = os.path.join(self._base_dir, 'checkpoints')
self._logger = logger.Logger(os.path.join(self._base_dir, 'logs'))
def _initialize_checkpointer_and_maybe_resume(self, checkpoint_file_prefix):
"""Reloads the latest checkpoint if it exists.
This method will first create a `Checkpointer` object and then call
`checkpointer.get_latest_checkpoint_number` to determine if there is a valid
checkpoint in self._checkpoint_dir, and what the largest file number is.
If a valid checkpoint file is found, it will load the bundled data from this
file and will pass it to the agent for it to reload its data.
If the agent is able to successfully unbundle, this method will verify that
the unbundled data contains the keys 'logs' and 'current_iteration'. It will
then load the `Logger`'s data from the bundle and set `self._start_iteration`
to one past the iteration stored under 'current_iteration', so the experiment
resumes where it left off.
Args:
checkpoint_file_prefix: str, the checkpoint file prefix.
"""
self._checkpointer = checkpointer.Checkpointer(self._checkpoint_dir,
checkpoint_file_prefix)
self._start_iteration = 0
# Check if checkpoint exists. Note that the existence of checkpoint 0 means
# that we have finished iteration 0 (so we will start from iteration 1).
latest_checkpoint_version = checkpointer.get_latest_checkpoint_number(
self._checkpoint_dir)
if latest_checkpoint_version >= 0:
experiment_data = self._checkpointer.load_checkpoint(
latest_checkpoint_version)
if self._agent.unbundle(
self._checkpoint_dir, latest_checkpoint_version, experiment_data):
assert 'logs' in experiment_data
assert 'current_iteration' in experiment_data
self._logger.data = experiment_data['logs']
self._start_iteration = experiment_data['current_iteration'] + 1
tf.logging.info('Reloaded checkpoint and will start from iteration %d',
self._start_iteration)
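# Example of the resume semantics: if the latest valid checkpoint on disk is
# number 7, the agent and logger state are restored from it and
# self._start_iteration becomes 8, so run_experiment() resumes at iteration 8.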
def _initialize_episode(self):
"""Initialization for a new episode.
Returns:
action: int, the initial action chosen by the agent.
"""
initial_observation = self._environment.reset()
return self._agent.begin_episode(initial_observation)
def _run_one_step(self, action):
"""Executes a single step in the environment.
Args:
action: int, the action to perform in the environment.
Returns:
The observation, reward, and is_terminal values returned from the
environment.
"""
observation, reward, is_terminal, _ = self._environment.step(action)
return observation, reward, is_terminal
def _end_episode(self, reward):
"""Finalizes an episode run.
Args:
reward: float, the last reward from the environment.
"""
self._agent.end_episode(reward)
def _run_one_episode(self):
"""Executes a full trajectory of the agent interacting with the environment.
Returns:
The number of steps taken and the total reward.
"""
step_number = 0
total_reward = 0.
action = self._initialize_episode()
is_terminal = False
# Keep interacting until we reach a terminal state.
while True:
observation, reward, is_terminal = self._run_one_step(action)
total_reward += reward
step_number += 1
# Perform reward clipping.
reward = np.clip(reward, -1, 1)
if (self._environment.game_over or
step_number == self._max_steps_per_episode):
# Stop the run loop once we reach the true end of episode.
break
elif is_terminal:
# If we lose a life but the episode is not over, signal an artificial
# end of episode to the agent.
self._agent.end_episode(reward)
action = self._agent.begin_episode(observation)
else:
action = self._agent.step(reward, observation)
self._end_episode(reward)
return step_number, total_reward
def _run_one_phase(self, min_steps, statistics, run_mode_str):
"""Runs the agent/environment loop until a desired number of steps.
We follow the Machado et al., 2017 convention of running full episodes,
and terminating once we've run a minimum number of steps.
Args:
min_steps: int, minimum number of steps to generate in this phase.
statistics: `IterationStatistics` object which records the experimental
results.
run_mode_str: str, describes the run mode for this agent.
Returns:
Tuple containing the number of steps taken in this phase (int), the sum of
returns (float), and the number of episodes performed (int).
"""
step_count = 0
num_episodes = 0
sum_returns = 0.
while step_count < min_steps:
episode_length, episode_return = self._run_one_episode()
statistics.append({
'{}_episode_lengths'.format(run_mode_str): episode_length,
'{}_episode_returns'.format(run_mode_str): episode_return
})
step_count += episode_length
sum_returns += episode_return
num_episodes += 1
# We use sys.stdout.write instead of tf.logging so as to flush frequently
# without generating a line break.
sys.stdout.write('Steps executed: {} '.format(step_count) +
'Episode length: {} '.format(episode_length) +
'Return: {}\r'.format(episode_return))
sys.stdout.flush()
return step_count, sum_returns, num_episodes
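# Note that the last episode of a phase always runs to completion, so
# step_count can overshoot min_steps. For example, with min_steps=250000 and
# episodes averaging 1000 steps, roughly 250 episodes are run and the phase
# ends up to one episode length past the threshold.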
def _run_train_phase(self, statistics):
"""Run training phase.
Args:
statistics: `IterationStatistics` object which records the experimental
results. Note - This object is modified by this method.
Returns:
num_episodes: int, The number of episodes run in this phase.
average_reward: The average reward generated in this phase.
"""
# Perform the training phase, during which the agent learns.
self._agent.eval_mode = False
start_time = time.time()
number_steps, sum_returns, num_episodes = self._run_one_phase(
self._training_steps, statistics, 'train')
average_return = sum_returns / num_episodes if num_episodes > 0 else 0.0
statistics.append({'train_average_return': average_return})
time_delta = time.time() - start_time
tf.logging.info('Average undiscounted return per training episode: %.2f',
average_return)
tf.logging.info('Average training steps per second: %.2f',
number_steps / time_delta)
return num_episodes, average_return
def _run_eval_phase(self, statistics):
"""Run evaluation phase.
Args:
statistics: `IterationStatistics` object which records the experimental
results. Note - This object is modified by this method.
Returns:
num_episodes: int, The number of episodes run in this phase.
average_reward: float, The average reward generated in this phase.
"""
# Perform the evaluation phase -- no learning.
self._agent.eval_mode = True
self._environment.render = True
_, sum_returns, num_episodes = self._run_one_phase(
self._evaluation_steps, statistics, 'eval')
average_return = sum_returns / num_episodes if num_episodes > 0 else 0.0
tf.logging.info('Average undiscounted return per evaluation episode: %.2f',
average_return)
statistics.append({'eval_average_return': average_return})
return num_episodes, average_return
def _run_one_iteration(self, iteration):
"""Runs one iteration of agent/environment interaction.
An iteration involves running several episodes until a certain number of
steps are obtained. The interleaving of train/eval phases implemented here
is to match the implementation of (Mnih et al., 2015).
Args:
iteration: int, current iteration number, used as a global_step for saving
Tensorboard summaries.
Returns:
A dict containing summary statistics for this iteration.
"""
statistics = iteration_statistics.IterationStatistics()
tf.logging.info('Starting iteration %d', iteration)
num_episodes_train, average_reward_train = self._run_train_phase(
statistics)
num_episodes_eval, average_reward_eval = self._run_eval_phase(
statistics)
self._save_tensorboard_summaries(iteration, num_episodes_train,
average_reward_train, num_episodes_eval,
average_reward_eval)
return statistics.data_lists
def _save_tensorboard_summaries(self, iteration,
num_episodes_train,
average_reward_train,
num_episodes_eval,
average_reward_eval):
"""Save statistics as tensorboard summaries.
Args:
iteration: int, The current iteration number.
num_episodes_train: int, number of training episodes run.
average_reward_train: float, The average training reward.
num_episodes_eval: int, number of evaluation episodes run.
average_reward_eval: float, The average evaluation reward.
"""
summary = tf.Summary(value=[
tf.Summary.Value(tag='Train/NumEpisodes',
simple_value=num_episodes_train),
tf.Summary.Value(tag='Train/AverageReturns',
simple_value=average_reward_train),
tf.Summary.Value(tag='Eval/NumEpisodes',
simple_value=num_episodes_eval),
tf.Summary.Value(tag='Eval/AverageReturns',
simple_value=average_reward_eval)
])
self._summary_writer.add_summary(summary, iteration)
def _log_experiment(self, iteration, statistics):
"""Records the results of the current iteration.
Args:
iteration: int, iteration number.
statistics: `IterationStatistics` object containing statistics to log.
"""
self._logger['iteration_{:d}'.format(iteration)] = statistics
if iteration % self._log_every_n == 0:
self._logger.log_to_file(self._logging_file_prefix, iteration)
def _checkpoint_experiment(self, iteration):
"""Checkpoint experiment data.
Args:
iteration: int, iteration number for checkpointing.
"""
experiment_data = self._agent.bundle_and_checkpoint(self._checkpoint_dir,
iteration)
if experiment_data:
experiment_data['current_iteration'] = iteration
experiment_data['logs'] = self._logger.data
self._checkpointer.save_checkpoint(iteration, experiment_data)
def run_experiment(self):
"""Runs a full experiment, spread over multiple iterations."""
tf.logging.info('Beginning training...')
if self._num_iterations <= self._start_iteration:
tf.logging.warning('num_iterations (%d) <= start_iteration (%d)',
self._num_iterations, self._start_iteration)
return
for iteration in range(self._start_iteration, self._num_iterations):
statistics = self._run_one_iteration(iteration)
self._log_experiment(iteration, statistics)
self._checkpoint_experiment(iteration)
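# Rough cost estimate under the constructor defaults (num_iterations=200,
# training_steps=250000, evaluation_steps=125000): a full experiment performs
# at least 200 * (250000 + 125000) = 75 million environment steps.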
@gin.configurable
class TrainRunner(Runner):
"""Object that handles running experiments.
The `TrainRunner` differs from the base `Runner` class in that it does not
run the evaluation phase. Checkpointing and logging for the train phase are
preserved as before.
"""
def __init__(self, base_dir, create_agent_fn,
create_environment_fn=atari_lib.create_atari_environment):
"""Initialize the TrainRunner object in charge of running a full experiment.
Args:
base_dir: str, the base directory to host all required sub-directories.
create_agent_fn: A function that takes as args a Tensorflow session and an
environment, and returns an agent.
create_environment_fn: A function which receives a problem name and
creates a Gym environment for that problem (e.g. an Atari 2600 game).
"""
tf.logging.info('Creating TrainRunner ...')
super(TrainRunner, self).__init__(base_dir, create_agent_fn,
create_environment_fn)
self._agent.eval_mode = False
def _run_one_iteration(self, iteration):
"""Runs one iteration of agent/environment interaction.
An iteration involves running several episodes until a certain number of
steps are obtained. This method differs from the `_run_one_iteration` method
in the base `Runner` class in that it only runs the train phase.
Args:
iteration: int, current iteration number, used as a global_step for saving
Tensorboard summaries.
Returns:
A dict containing summary statistics for this iteration.
"""
statistics = iteration_statistics.IterationStatistics()
num_episodes_train, average_reward_train = self._run_train_phase(
statistics)
self._save_tensorboard_summaries(iteration, num_episodes_train,
average_reward_train)
return statistics.data_lists
def _save_tensorboard_summaries(self, iteration, num_episodes,
average_reward):
"""Save statistics as tensorboard summaries."""
summary = tf.Summary(value=[
tf.Summary.Value(tag='Train/NumEpisodes',
simple_value=num_episodes),
tf.Summary.Value(
tag='Train/AverageReturns', simple_value=average_reward),
])
self._summary_writer.add_summary(summary, iteration)
(* Title: Jinja/JVM/JVMExecInstrInductive.thy
Author: Susannah Mansky
2018, UIUC
*)
section {* Inductive Program Execution in the JVM *}
theory JVMExecInstrInductive
imports JVMExec
begin
datatype step_input = StepI instr | StepC cname | StepT addr
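(* step_input distinguishes the three kinds of steps the machine can take:
   StepI executes a bytecode instruction, StepC drives the class-initialisation
   procedure for the named class, and StepT propagates the exception object at
   the given address. *)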
inductive exec_step_ind :: "[step_input, jvm_prog, heap, val list, val list,
cname, mname, pc, init_call_status, frame list, sheap,jvm_state] \<Rightarrow> bool"
where
exec_step_ind_Load:
"exec_step_ind (StepI (Load n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, ((loc ! n) # stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_Store:
"exec_step_ind (StepI (Store n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (tl stk, loc[n:=hd stk], C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_Push:
"exec_step_ind (StepI (Push v)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (v # stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_NewOOM_Called:
"new_Addr h = None
\<Longrightarrow> exec_step_ind (StepI (New C)) P h stk loc C\<^sub>0 M\<^sub>0 pc Called frs sh
(\<lfloor>addr_of_sys_xcpt OutOfMemory\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, No_ics)#frs, sh)"
| exec_step_ind_NewOOM_Done:
"\<lbrakk> sh C = Some(obj, Done); new_Addr h = None; ics \<noteq> Called \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (New C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt OutOfMemory\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_New_Called:
"new_Addr h = Some a
\<Longrightarrow> exec_step_ind (StepI (New C)) P h stk loc C\<^sub>0 M\<^sub>0 pc Called frs sh
(None, h(a\<mapsto>blank P C), (Addr a#stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, No_ics)#frs, sh)"
| exec_step_ind_New_Done:
"\<lbrakk> sh C = Some(obj, Done); new_Addr h = Some a; ics \<noteq> Called \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (New C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h(a\<mapsto>blank P C), (Addr a#stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_New_Init:
"\<lbrakk> \<forall>obj. sh C \<noteq> Some(obj, Done); ics \<noteq> Called \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (New C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, Calling C)#frs, sh)"
| exec_step_ind_Getfield_Null:
"hd stk = Null
\<Longrightarrow> exec_step_ind (StepI (Getfield F C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NullPointer\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Getfield_NoField:
"\<lbrakk> v = hd stk; (D,fs) = the(h(the_Addr v)); v \<noteq> Null; \<not>(\<exists>t b. P \<turnstile> D has F,b:t in C) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Getfield F C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NoSuchFieldError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Getfield_Static:
"\<lbrakk> v = hd stk; (D,fs) = the(h(the_Addr v)); v \<noteq> Null; P \<turnstile> D has F,Static:t in C \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Getfield F C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt IncompatibleClassChangeError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Getfield:
"\<lbrakk> v = hd stk; (D,fs) = the(h(the_Addr v)); (D',b,t) = field P C F; v \<noteq> Null;
P \<turnstile> D has F,NonStatic:t in C \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Getfield F C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (the(fs(F,C))#(tl stk), loc, C\<^sub>0, M\<^sub>0, pc+1, ics)#frs, sh)"
| exec_step_ind_Getstatic_NoField:
"\<not>(\<exists>t b. P \<turnstile> C has F,b:t in D)
\<Longrightarrow> exec_step_ind (StepI (Getstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NoSuchFieldError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Getstatic_NonStatic:
"P \<turnstile> C has F,NonStatic:t in D
\<Longrightarrow> exec_step_ind (StepI (Getstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt IncompatibleClassChangeError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Getstatic_Called:
"\<lbrakk> (D',b,t) = field P D F; P \<turnstile> C has F,Static:t in D;
the(sh D') = (sfs,i);
v = the (sfs F) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Getstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc Called frs sh
(None, h, (v#stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, No_ics)#frs, sh)"
| exec_step_ind_Getstatic_Done:
"\<lbrakk> (D',b,t) = field P D F; P \<turnstile> C has F,Static:t in D;
ics \<noteq> Called; sh D' = Some(sfs,Done);
v = the (sfs F) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Getstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (v#stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_Getstatic_Init:
"\<lbrakk> (D',b,t) = field P D F; P \<turnstile> C has F,Static:t in D;
\<forall>sfs. sh D' \<noteq> Some(sfs,Done); ics \<noteq> Called \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Getstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, Calling D')#frs, sh)"
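(* The static field access rules follow a recurring three-way split: a _Called
   rule fires when the frame's initialisation call has just returned
   (ics = Called), a _Done rule when the static heap already marks the class
   Done, and an _Init rule that leaves the instruction in place and switches
   the frame to Calling D' so that initialisation runs first. The same pattern
   appears above for New and below for Putstatic. *)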
| exec_step_ind_Putfield_Null:
"hd(tl stk) = Null
\<Longrightarrow> exec_step_ind (StepI (Putfield F C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NullPointer\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Putfield_NoField:
"\<lbrakk> r = hd(tl stk); a = the_Addr r; (D,fs) = the (h a); r \<noteq> Null; \<not>(\<exists>t b. P \<turnstile> D has F,b:t in C) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Putfield F C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NoSuchFieldError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Putfield_Static:
"\<lbrakk> r = hd(tl stk); a = the_Addr r; (D,fs) = the (h a); r \<noteq> Null; P \<turnstile> D has F,Static:t in C \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Putfield F C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt IncompatibleClassChangeError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Putfield:
"\<lbrakk> v = hd stk; r = hd(tl stk); a = the_Addr r; (D,fs) = the (h a); (D',b,t) = field P C F;
r \<noteq> Null; P \<turnstile> D has F,NonStatic:t in C \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Putfield F C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h(a \<mapsto> (D, fs((F,C) \<mapsto> v))), (tl (tl stk), loc, C\<^sub>0, M\<^sub>0, pc+1, ics)#frs, sh)"
| exec_step_ind_Putstatic_NoField:
"\<not>(\<exists>t b. P \<turnstile> C has F,b:t in D)
\<Longrightarrow> exec_step_ind (StepI (Putstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NoSuchFieldError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Putstatic_NonStatic:
"P \<turnstile> C has F,NonStatic:t in D
\<Longrightarrow> exec_step_ind (StepI (Putstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt IncompatibleClassChangeError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Putstatic_Called:
"\<lbrakk> (D',b,t) = field P D F; P \<turnstile> C has F,Static:t in D; the(sh D') = (sfs, i) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Putstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc Called frs sh
(None, h, (tl stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, No_ics)#frs, sh(D':=Some ((sfs(F \<mapsto> hd stk)), i)))"
| exec_step_ind_Putstatic_Done:
"\<lbrakk> (D',b,t) = field P D F; P \<turnstile> C has F,Static:t in D;
ics \<noteq> Called; sh D' = Some (sfs, Done) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Putstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (tl stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh(D':=Some ((sfs(F \<mapsto> hd stk)), Done)))"
| exec_step_ind_Putstatic_Init:
"\<lbrakk> (D',b,t) = field P D F; P \<turnstile> C has F,Static:t in D;
\<forall>sfs. sh D' \<noteq> Some (sfs, Done); ics \<noteq> Called \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Putstatic C F D)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, Calling D')#frs, sh)"
| exec_step_ind_Checkcast:
"cast_ok P C h (hd stk)
\<Longrightarrow> exec_step_ind (StepI (Checkcast C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_Checkcast_Error:
"\<not>cast_ok P C h (hd stk)
\<Longrightarrow> exec_step_ind (StepI (Checkcast C)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt ClassCast\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Invoke_Null:
"stk!n = Null
\<Longrightarrow> exec_step_ind (StepI (Invoke M n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NullPointer\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Invoke_NoMethod:
"\<lbrakk> r = stk!n; C = fst(the(h(the_Addr r))); r \<noteq> Null;
\<not>(\<exists>Ts T m D b. P \<turnstile> C sees M,b:Ts \<rightarrow> T = m in D) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Invoke M n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NoSuchMethodError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Invoke_Static:
"\<lbrakk> r = stk!n; C = fst(the(h(the_Addr r)));
(D,b,Ts,T,mxs,mxl\<^sub>0,ins,xt)= method P C M; r \<noteq> Null;
P \<turnstile> C sees M,Static:Ts \<rightarrow> T = m in D \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Invoke M n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt IncompatibleClassChangeError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Invoke:
"\<lbrakk> ps = take n stk; r = stk!n; C = fst(the(h(the_Addr r)));
(D,b,Ts,T,mxs,mxl\<^sub>0,ins,xt)= method P C M; r \<noteq> Null;
P \<turnstile> C sees M,NonStatic:Ts \<rightarrow> T = m in D;
f' = ([],[r]@(rev ps)@(replicate mxl\<^sub>0 undefined),D,M,0,No_ics) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Invoke M n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, f'#(stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Invokestatic_NoMethod:
"\<lbrakk> (D,b,Ts,T,mxs,mxl\<^sub>0,ins,xt)= method P C M; \<not>(\<exists>Ts T m D b. P \<turnstile> C sees M,b:Ts \<rightarrow> T = m in D) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Invokestatic C M n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NoSuchMethodError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Invokestatic_NonStatic:
"\<lbrakk> (D,b,Ts,T,mxs,mxl\<^sub>0,ins,xt)= method P C M; P \<turnstile> C sees M,NonStatic:Ts \<rightarrow> T = m in D \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Invokestatic C M n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt IncompatibleClassChangeError\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Invokestatic_Called:
"\<lbrakk> ps = take n stk; (D,b,Ts,T,mxs,mxl\<^sub>0,ins,xt) = method P C M;
P \<turnstile> C sees M,Static:Ts \<rightarrow> T = m in D;
f' = ([],(rev ps)@(replicate mxl\<^sub>0 undefined),D,M,0,No_ics) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Invokestatic C M n)) P h stk loc C\<^sub>0 M\<^sub>0 pc Called frs sh
(None, h, f'#(stk, loc, C\<^sub>0, M\<^sub>0, pc, No_ics)#frs, sh)"
| exec_step_ind_Invokestatic_Done:
"\<lbrakk> ps = take n stk; (D,b,Ts,T,mxs,mxl\<^sub>0,ins,xt) = method P C M;
P \<turnstile> C sees M,Static:Ts \<rightarrow> T = m in D;
ics \<noteq> Called; sh D = Some (sfs, Done);
f' = ([],(rev ps)@(replicate mxl\<^sub>0 undefined),D,M,0,No_ics) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Invokestatic C M n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, f'#(stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Invokestatic_Init:
"\<lbrakk> (D,b,Ts,T,mxs,mxl\<^sub>0,ins,xt) = method P C M;
P \<turnstile> C sees M,Static:Ts \<rightarrow> T = m in D;
\<forall>sfs. sh D \<noteq> Some (sfs, Done); ics \<noteq> Called \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI (Invokestatic C M n)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, Calling D)#frs, sh)"
| exec_step_ind_Return_Last_Init:
"exec_step_ind (StepI Return) P h stk\<^sub>0 loc\<^sub>0 C\<^sub>0 clinit pc ics [] sh
(None, h, [], sh(C\<^sub>0:=Some(fst(the(sh C\<^sub>0)), Done)))"
| exec_step_ind_Return_Last:
"M\<^sub>0 \<noteq> clinit
\<Longrightarrow> exec_step_ind (StepI Return) P h stk\<^sub>0 loc\<^sub>0 C\<^sub>0 M\<^sub>0 pc ics [] sh (None, h, [], sh)"
| exec_step_ind_Return_Init:
"\<lbrakk> (D,b,Ts,T,m) = method P C\<^sub>0 clinit \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI Return) P h stk\<^sub>0 loc\<^sub>0 C\<^sub>0 clinit pc ics ((stk',loc',C',m',pc',ics')#frs') sh
(None, h, (stk',loc',C',m',pc',ics')#frs', sh(C\<^sub>0:=Some(fst(the(sh C\<^sub>0)), Done)))"
| exec_step_ind_Return_NonStatic:
"\<lbrakk> (D,NonStatic,Ts,T,m) = method P C\<^sub>0 M\<^sub>0; M\<^sub>0 \<noteq> clinit \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI Return) P h stk\<^sub>0 loc\<^sub>0 C\<^sub>0 M\<^sub>0 pc ics ((stk',loc',C',m',pc',ics')#frs') sh
(None, h, ((hd stk\<^sub>0)#(drop (length Ts + 1) stk'),loc',C',m',Suc pc',ics')#frs', sh)"
| exec_step_ind_Return_Static:
"\<lbrakk> (D,Static,Ts,T,m) = method P C\<^sub>0 M\<^sub>0; M\<^sub>0 \<noteq> clinit \<rbrakk>
\<Longrightarrow> exec_step_ind (StepI Return) P h stk\<^sub>0 loc\<^sub>0 C\<^sub>0 M\<^sub>0 pc ics ((stk',loc',C',m',pc',ics')#frs') sh
(None, h, ((hd stk\<^sub>0)#(drop (length Ts) stk'),loc',C',m',Suc pc',ics')#frs', sh)"
| exec_step_ind_Pop:
"exec_step_ind (StepI Pop) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (tl stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_IAdd:
"exec_step_ind (StepI IAdd) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (Intg (the_Intg (hd (tl stk)) + the_Intg (hd stk))#(tl (tl stk)), loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_IfFalse_False:
"hd stk = Bool False
\<Longrightarrow> exec_step_ind (StepI (IfFalse i)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (tl stk, loc, C\<^sub>0, M\<^sub>0, nat(int pc+i), ics)#frs, sh)"
| exec_step_ind_IfFalse_nFalse:
"hd stk \<noteq> Bool False
\<Longrightarrow> exec_step_ind (StepI (IfFalse i)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (tl stk, loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_CmpEq:
"exec_step_ind (StepI CmpEq) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (Bool (hd (tl stk) = hd stk) # tl (tl stk), loc, C\<^sub>0, M\<^sub>0, Suc pc, ics)#frs, sh)"
| exec_step_ind_Goto:
"exec_step_ind (StepI (Goto i)) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, nat(int pc+i), ics)#frs, sh)"
| exec_step_ind_Throw:
"hd stk \<noteq> Null
\<Longrightarrow> exec_step_ind (StepI Throw) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>the_Addr (hd stk)\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Throw_Null:
"hd stk = Null
\<Longrightarrow> exec_step_ind (StepI Throw) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(\<lfloor>addr_of_sys_xcpt NullPointer\<rfloor>, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Init_None_Called:
"\<lbrakk> sh C = None; ics = Called \<rbrakk>
\<Longrightarrow> exec_step_ind (StepC C) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, Calling C)#frs, sh(C := Some (sblank P C, Prepared)))"
| exec_step_ind_Init_None_nCalled:
"\<lbrakk> sh C = None; ics \<noteq> Called \<rbrakk>
\<Longrightarrow> exec_step_ind (StepC C) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ICalling C)#frs, sh(C := Some (sblank P C, Prepared)))"
| exec_step_ind_Init_Done:
"sh C = Some (sfs, Done)
\<Longrightarrow> exec_step_ind (StepC C) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Init_Processing:
"sh C = Some (sfs, Processing)
\<Longrightarrow> exec_step_ind (StepC C) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Init_Error:
"\<lbrakk> sh C = Some (sfs, Error); (stk',loc',D',M',pc',ics') = create_init_frame P C \<rbrakk>
\<Longrightarrow> exec_step_ind (StepC C) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk',loc',D',M',pc',Throwing (addr_of_sys_xcpt NoClassDefFoundError))
#(stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh)"
| exec_step_ind_Init_Prepared_Object:
"\<lbrakk> sh C = Some (sfs, Prepared);
sh' = sh(C:=Some(fst(the(sh C)), Processing));
C = Object \<rbrakk>
\<Longrightarrow> exec_step_ind (StepC C) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, create_init_frame P C#(stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh')"
| exec_step_ind_Init_Prepared_nObject:
"\<lbrakk> sh C = Some (sfs, Prepared); (stk',loc',C',m',pc',ics') = create_init_frame P C;
sh' = sh(C:=Some(fst(the(sh C)), Processing));
C \<noteq> Object; D = fst(the(class P C)) \<rbrakk>
\<Longrightarrow> exec_step_ind (StepC C) P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh
(None, h, (stk',loc',C',m',pc',ICalling D)#(stk, loc, C\<^sub>0, M\<^sub>0, pc, ics)#frs, sh')"
| exec_step_ind_InitThrow_Init:
"ics' \<noteq> Called
\<Longrightarrow> exec_step_ind (StepT a) P h stk loc C\<^sub>0 M\<^sub>0 pc ics ((stk',loc',C',clinit,pc',ics')#frs') sh
(None, h, (stk',loc',C',clinit,pc',Throwing a)#frs', (sh(C\<^sub>0:=Some(fst(the(sh C\<^sub>0)), Error))))"
| exec_step_ind_InitThrow_Called:
"exec_step_ind (StepT a) P h stk loc C\<^sub>0 M\<^sub>0 pc ics ((stk',loc',C',m',pc',Called)#frs') sh
(\<lfloor>a\<rfloor>, h, (stk',loc',C',m',pc',No_ics)#frs', (sh(C\<^sub>0:=Some(fst(the(sh C\<^sub>0)), Error))))"
| exec_step_ind_InitThrow_Other:
"\<lbrakk> ics' \<noteq> Called; m' \<noteq> clinit \<rbrakk>
\<Longrightarrow> exec_step_ind (StepT a) P h stk loc C\<^sub>0 M\<^sub>0 pc ics ((stk',loc',C',m',pc',ics')#frs') sh
(\<lfloor>a\<rfloor>, h, (stk',loc',C',m',pc',ics')#frs', (sh(C\<^sub>0:=Some(fst(the(sh C\<^sub>0)), Error))))"
| exec_step_ind_InitThrow_Last:
"exec_step_ind (StepT a) P h stk loc C\<^sub>0 M\<^sub>0 pc ics [] sh
(\<lfloor>a\<rfloor>, h, [], (sh(C\<^sub>0:=Some(fst(the(sh C\<^sub>0)), Error))))"
(** ******* **)
inductive_cases exec_step_ind_cases [cases set]:
"exec_step_ind (StepI (Load n)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Store n)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Push v)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (New C)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Getfield F C)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Getstatic C F D)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Putfield F C)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Putstatic C F D)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Checkcast C)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Invoke M n)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Invokestatic C M n)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI Return) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI Pop) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI IAdd) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (IfFalse i)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI CmpEq) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI (Goto i)) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepI Throw) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepC C) P h stk loc C M pc ics frs sh \<sigma>"
"exec_step_ind (StepT a) P h stk loc C M pc ics frs sh \<sigma>"
(* TODO: move. Selects the step input (pending class-initialization call, pending
   exception throw, or the current instruction) together with the init-call status
   under which the step is executed. *)
fun exec_step_input :: "[jvm_prog, cname, mname, pc, init_call_status]
\<Rightarrow> step_input \<times> init_call_status" where
"exec_step_input P C M pc (Calling C') = (StepC C', Called)" |
"exec_step_input P C M pc (ICalling C') = (StepC C', No_ics)" |
"exec_step_input P C M pc (Throwing a) = (StepT a, Throwing a)" |
"exec_step_input P C M pc ics = (StepI (instrs_of P C M ! pc), ics)"
lemma exec_step_imp_exec_step_ind:
assumes "exec_step P h stk loc C M pc ics frs sh = (xp', h', frs', sh')"
and "exec_step_input P C M pc ics = (si, ics')"
shows "exec_step_ind si P h stk loc C M pc ics' frs sh (xp', h', frs', sh')"
proof(cases si)
case (StepT a)
then have exec_Throwing:
"exec_Throwing a P h stk loc C M pc (Throwing a) frs sh = (xp', h', frs', sh')"
"ics = ics'" using assms by(cases ics, auto)+
then obtain stk' loc' C' m' pc' ics' where lets: "(stk',loc',C',m',pc',ics') = hd frs"
by(cases "hd frs", simp)
then show ?thesis using exec_step_ind_InitThrow_Init exec_step_ind_InitThrow_Called
exec_step_ind_InitThrow_Other exec_step_ind_InitThrow_Last StepT exec_Throwing
by(cases frs; cases "m' = clinit"; cases ics', auto)
next
case (StepC C1)
obtain D b Ts T m where lets: "method P C1 clinit = (D,b,Ts,T,m)" by(cases "method P C1 clinit")
then obtain mxs mxl\<^sub>0 ins xt where m: "m = (mxs,mxl\<^sub>0,ins,xt)" by(cases m)
show ?thesis
proof(cases "sh C1")
case None then show ?thesis
using exec_step_ind_Init_None_Called exec_step_ind_Init_None_nCalled StepC assms
by(cases ics, auto)
next
case (Some a)
then obtain sfs i where sfsi: "a = (sfs,i)" by(cases a)
then show ?thesis using exec_step_ind_Init_Done exec_step_ind_Init_Processing
exec_step_ind_Init_Error m lets Some StepC assms
proof(cases i)
case Prepared
show ?thesis
using exec_step_ind_Init_Prepared_Object[where P=P] exec_step_ind_Init_Prepared_nObject
sfsi m lets Prepared Some StepC assms by(cases ics, auto split: if_split_asm)
qed(cases ics, auto)+
qed
next
case (StepI i)
then have exec_instr: "exec_instr i P h stk loc C M pc ics' frs sh = (xp', h', frs', sh')"
"ics = ics'" using assms by(cases ics, auto)+
show ?thesis
proof(cases i)
case (Load x1) then show ?thesis using exec_instr exec_step_ind_Load StepI by auto
next
case (Store x2) then show ?thesis using exec_instr exec_step_ind_Store StepI by auto
next
case (Push x3) then show ?thesis using exec_instr exec_step_ind_Push StepI by auto
next
case (New C1)
then obtain sfs i where sfsi: "the(sh C1) = (sfs,i)" by(cases "the(sh C1)")
then show ?thesis using exec_step_ind_New_Called exec_step_ind_NewOOM_Called
exec_step_ind_New_Done exec_step_ind_NewOOM_Done
exec_step_ind_New_Init sfsi New StepI exec_instr by(cases ics; cases i, auto)
next
case (Getfield F1 C1)
then obtain D fs D' b t where lets: "the(h(the_Addr (hd stk))) = (D,fs)"
"field P C1 F1 = (D',b,t)" by(cases "the(h(the_Addr (hd stk)))", cases "field P C1 F1")
then have "\<And>b' t'. P \<turnstile> D has F1,b':t' in C1 \<Longrightarrow> (D', b, t) = (C1, b', t')"
using field_def2 has_field_idemp has_field_sees by fastforce
then show ?thesis using exec_step_ind_Getfield_Null exec_step_ind_Getfield_NoField
exec_step_ind_Getfield_Static lets Getfield StepI exec_instr
by(cases b, auto split: if_split_asm) (metis Suc_eq_plus1 exec_step_ind_Getfield)+
next
case (Getstatic C1 F1 D1)
then obtain D' b t where lets: "field P D1 F1 = (D',b,t)" by(cases "field P D1 F1")
then have field: "\<And>b' t'. P \<turnstile> C1 has F1,b':t' in D1 \<Longrightarrow> (D', b, t) = (D1, b', t')"
using field_def2 has_field_idemp has_field_sees by fastforce
show ?thesis
proof(cases b)
case NonStatic then show ?thesis
using exec_step_ind_Getstatic_NoField exec_step_ind_Getstatic_NonStatic
field lets Getstatic exec_instr StepI by(auto split: if_split_asm) fastforce
next
case Static show ?thesis
proof(cases "ics = Called")
case True then show ?thesis using exec_step_ind_Getstatic_NoField
exec_step_ind_Getstatic_Called exec_step_ind_Getstatic_Init
Static field lets Getstatic exec_instr StepI
by(cases "the(sh D1)", cases ics, auto split: if_split_asm) metis
next
case False
then have nCalled: "ics \<noteq> Called" by simp
show ?thesis
proof(cases "sh D1")
case None
then have nDone: "\<forall>sfs. sh D1 \<noteq> Some(sfs, Done)" by simp
then show ?thesis using exec_step_ind_Getstatic_NoField
exec_step_ind_Getstatic_Init[where sh=sh, OF _ _ nDone nCalled]
field lets None False Static Getstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
next
case (Some a)
then obtain sfs i where sfsi: "a=(sfs,i)" by(cases a)
show ?thesis using exec_step_ind_Getstatic_NoField
exec_step_ind_Getstatic_Init sfsi False Static Some field lets Getstatic exec_instr
proof(cases "i = Done")
case True then show ?thesis using exec_step_ind_Getstatic_NoField
exec_step_ind_Getstatic_Done[OF _ _ nCalled] exec_step_ind_Getstatic_Init
sfsi False Static Some field lets Getstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
next
case nD: False
then have nDone: "\<forall>sfs. sh D1 \<noteq> Some(sfs, Done)" using sfsi Some by simp
show ?thesis using nD
proof(cases i)
case Processing then show ?thesis using exec_step_ind_Getstatic_NoField
exec_step_ind_Getstatic_Init[where sh=sh, OF _ _ nDone nCalled]
sfsi False Static Some field lets Getstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
next
case Prepared then show ?thesis using exec_step_ind_Getstatic_NoField
exec_step_ind_Getstatic_Init[where sh=sh, OF _ _ nDone nCalled]
sfsi False Static Some field lets Getstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
next
case Error then show ?thesis using exec_step_ind_Getstatic_NoField
exec_step_ind_Getstatic_Init[where sh=sh, OF _ _ nDone nCalled]
sfsi False Static Some field lets Getstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
qed(simp)
qed
qed
qed
qed
next
case (Putfield F1 C1)
then obtain D fs D' b t where lets: "the(h(the_Addr (hd(tl stk)))) = (D,fs)"
"field P C1 F1 = (D',b,t)" by(cases "the(h(the_Addr (hd(tl stk))))", cases "field P C1 F1")
then have "\<And>b' t'. P \<turnstile> D has F1,b':t' in C1 \<Longrightarrow> (D', b, t) = (C1, b', t')"
using field_def2 has_field_idemp has_field_sees by fastforce
then show ?thesis using exec_step_ind_Putfield_Null exec_step_ind_Putfield_NoField
exec_step_ind_Putfield_Static lets Putfield exec_instr StepI
by(cases b, auto split: if_split_asm) (metis Suc_eq_plus1 exec_step_ind_Putfield)+
next
case (Putstatic C1 F1 D1)
then obtain D' b t where lets: "field P D1 F1 = (D',b,t)" by(cases "field P D1 F1")
then have field: "\<And>b' t'. P \<turnstile> C1 has F1,b':t' in D1 \<Longrightarrow> (D', b, t) = (D1, b', t')"
using field_def2 has_field_idemp has_field_sees by fastforce
show ?thesis
proof(cases b)
case NonStatic then show ?thesis
using exec_step_ind_Putstatic_NoField exec_step_ind_Putstatic_NonStatic
field lets Putstatic exec_instr StepI by(auto split: if_split_asm) fastforce
next
case Static show ?thesis
proof(cases "ics = Called")
case True then show ?thesis using exec_step_ind_Putstatic_NoField
exec_step_ind_Putstatic_Called exec_step_ind_Putstatic_Init
Static field lets Putstatic exec_instr StepI
by(cases "the(sh D1)", cases ics, auto split: if_split_asm) metis
next
case False
then have nCalled: "ics \<noteq> Called" by simp
show ?thesis
proof(cases "sh D1")
case None
then have nDone: "\<forall>sfs. sh D1 \<noteq> Some(sfs, Done)" by simp
then show ?thesis using exec_step_ind_Putstatic_NoField
exec_step_ind_Putstatic_Init[where sh=sh, OF _ _ nDone nCalled]
field lets None False Static Putstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
next
case (Some a)
then obtain sfs i where sfsi: "a=(sfs,i)" by(cases a)
show ?thesis using exec_step_ind_Putstatic_NoField
exec_step_ind_Putstatic_Init sfsi False Static Some field lets Putstatic exec_instr
proof(cases "i = Done")
case True then show ?thesis using exec_step_ind_Putstatic_NoField
exec_step_ind_Putstatic_Done[OF _ _ nCalled] exec_step_ind_Putstatic_Init
sfsi False Static Some field lets Putstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
next
case nD: False
then have nDone: "\<forall>sfs. sh D1 \<noteq> Some(sfs, Done)" using sfsi Some by simp
show ?thesis using nD
proof(cases i)
case Processing then show ?thesis using exec_step_ind_Putstatic_NoField
exec_step_ind_Putstatic_Init[where sh=sh, OF _ _ nDone nCalled]
sfsi False Static Some field lets Putstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
next
case Prepared then show ?thesis using exec_step_ind_Putstatic_NoField
exec_step_ind_Putstatic_Init[where sh=sh, OF _ _ nDone nCalled]
sfsi False Static Some field lets Putstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
next
case Error then show ?thesis using exec_step_ind_Putstatic_NoField
exec_step_ind_Putstatic_Init[where sh=sh, OF _ _ nDone nCalled]
sfsi False Static Some field lets Putstatic exec_instr StepI
by(cases ics, auto split: if_split_asm) metis+
qed(simp)
qed
qed
qed
qed
next
case (Checkcast x9) then show ?thesis
using exec_step_ind_Checkcast exec_step_ind_Checkcast_Error exec_instr StepI
by(auto split: if_split_asm)
next
case (Invoke M1 n) show ?thesis
proof(cases "stk!n = Null")
case True then show ?thesis using exec_step_ind_Invoke_Null Invoke exec_instr StepI
by clarsimp
next
case False
let ?C = "cname_of h (the_Addr (stk ! n))"
obtain D b Ts T m where method: "method P ?C M1 = (D,b,Ts,T,m)" by(cases "method P ?C M1")
then obtain mxs mxl\<^sub>0 ins xt where "m = (mxs,mxl\<^sub>0,ins,xt)" by(cases m)
then show ?thesis using exec_step_ind_Invoke_NoMethod
exec_step_ind_Invoke_Static exec_step_ind_Invoke method False Invoke exec_instr StepI
by(cases b; auto split: if_split_asm)
qed
next
case (Invokestatic C1 M1 n)
obtain D b Ts T m where lets: "method P C1 M1 = (D,b,Ts,T,m)" by(cases "method P C1 M1")
then obtain mxs mxl\<^sub>0 ins xt where m: "m = (mxs,mxl\<^sub>0,ins,xt)" by(cases m)
have method: "\<And>b' Ts' t' m' D'. P \<turnstile> C1 sees M1,b':Ts' \<rightarrow> t' = m' in D'
\<Longrightarrow> (D,b,Ts,T,m) = (D',b',Ts',t',m')" using lets by auto
show ?thesis
proof(cases b)
case NonStatic then show ?thesis
using exec_step_ind_Invokestatic_NoMethod exec_step_ind_Invokestatic_NonStatic
m method lets Invokestatic exec_instr StepI by(auto split: if_split_asm)
next
case Static show ?thesis
proof(cases "ics = Called")
case True then show ?thesis using exec_step_ind_Invokestatic_NoMethod
exec_step_ind_Invokestatic_Called exec_step_ind_Invokestatic_Init
Static m method lets Invokestatic exec_instr StepI
by(cases ics, auto split: if_split_asm)
next
case False
then have nCalled: "ics \<noteq> Called" by simp
show ?thesis
proof(cases "sh D")
case None
then have nDone: "\<forall>sfs. sh D \<noteq> Some(sfs, Done)" by simp
show ?thesis using exec_step_ind_Invokestatic_NoMethod
exec_step_ind_Invokestatic_Init[where sh=sh, OF _ _ nDone nCalled]
method m lets None False Static Invokestatic exec_instr StepI
by(cases ics, auto split: if_split_asm)
next
case (Some a)
then obtain sfs i where sfsi: "a=(sfs,i)" by(cases a)
show ?thesis using exec_step_ind_Invokestatic_NoMethod
exec_step_ind_Invokestatic_Init sfsi False Static Some method lets Invokestatic exec_instr
proof(cases "i = Done")
case True then show ?thesis using exec_step_ind_Invokestatic_NoMethod
exec_step_ind_Invokestatic_Done[OF _ _ _ nCalled] exec_step_ind_Invokestatic_Init
sfsi False Static Some m method lets Invokestatic exec_instr StepI
by(cases ics, auto split: if_split_asm)
next
case nD: False
then have nDone: "\<forall>sfs. sh D \<noteq> Some(sfs, Done)" using sfsi Some by simp
show ?thesis using nD
proof(cases i)
case Processing then show ?thesis using exec_step_ind_Invokestatic_NoMethod
exec_step_ind_Invokestatic_Init[where sh=sh, OF _ _ nDone nCalled]
sfsi False Static Some m method lets Invokestatic exec_instr StepI
by(cases ics, auto split: if_split_asm)
next
case Prepared then show ?thesis using exec_step_ind_Invokestatic_NoMethod
exec_step_ind_Invokestatic_Init[where sh=sh, OF _ _ nDone nCalled]
sfsi False Static Some m method lets Invokestatic exec_instr StepI
by(cases ics, auto split: if_split_asm)
next
case Error then show ?thesis using exec_step_ind_Invokestatic_NoMethod
exec_step_ind_Invokestatic_Init[where sh=sh, OF _ _ nDone nCalled]
sfsi False Static Some m method lets Invokestatic exec_instr StepI
by(cases ics, auto split: if_split_asm)
qed(simp)
qed
qed
qed
qed
next
case Return
obtain D b Ts T m where method: "method P C M = (D,b,Ts,T,m)" by(cases "method P C M")
then obtain mxs mxl\<^sub>0 ins xt where "m = (mxs,mxl\<^sub>0,ins,xt)" by(cases m)
then show ?thesis using exec_step_ind_Return_Last_Init exec_step_ind_Return_Last
exec_step_ind_Return_Init exec_step_ind_Return_NonStatic exec_step_ind_Return_Static
method Return exec_instr StepI
by(cases b; cases frs; cases "M = clinit"; cases ics, auto split: if_split_asm)
next
case Pop then show ?thesis using exec_instr StepI exec_step_ind_Pop by auto
next
case IAdd then show ?thesis using exec_instr StepI exec_step_ind_IAdd by auto
next
case (Goto x15) then show ?thesis using exec_instr StepI exec_step_ind_Goto by auto
next
case CmpEq then show ?thesis using exec_instr StepI exec_step_ind_CmpEq by auto
next
case (IfFalse x17) then show ?thesis using exec_instr StepI exec_step_ind_IfFalse_nFalse
proof(cases "hd stk")
case (Bool b) then show ?thesis using exec_step_ind_IfFalse_False
exec_step_ind_IfFalse_nFalse IfFalse exec_instr StepI by(cases b, auto)
qed(auto)
next
case Throw then show ?thesis
using exec_instr StepI exec_step_ind_Throw exec_step_ind_Throw_Null
by(cases "hd stk = Null", auto)
qed
qed
lemma exec_step_ind_imp_exec_step:
assumes "exec_step_ind si P h stk loc C M pc ics' frs sh (xp', h', frs', sh')"
shows "\<And>ics. exec_step_input P C M pc ics = (si, ics')
\<Longrightarrow> exec_step P h stk loc C M pc ics frs sh = (xp', h', frs', sh')"
using assms
proof(induct rule: exec_step_ind.induct)
case (exec_step_ind_NewOOM_Done sh C obj h ics P stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_New_Done sh C obj h a ics P stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_New_Init sh C ics P h stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case by(cases "snd(the(sh C))"; cases ics, auto)
next
case (exec_step_ind_Getfield_NoField v stk D fs h P F C loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases "the (h (the_Addr (hd stk)))", cases ics, auto)
next
case (exec_step_ind_Getfield_Static v stk D fs h P F t C loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case
by(cases "the (h (the_Addr (hd stk)))", cases "field P C F", cases "fst(snd(field P C F))",
cases ics, auto dest: has_field_sees[OF has_field_idemp])
next
case (exec_step_ind_Getfield v stk D fs h D' b t P C F loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases "the (h (the_Addr (hd stk)))", cases "field P C F",
cases ics, auto) (fastforce dest: has_field_sees[OF has_field_idemp])+
next
case (exec_step_ind_Getstatic_NonStatic P C F t D h stk loc C\<^sub>0 M\<^sub>0 pc ics1 frs sh)
then show ?case by(cases "field P D F", cases "fst(snd(field P D F))";
cases ics, auto) (fastforce dest: has_field_sees[OF has_field_idemp])+
next
case (exec_step_ind_Getstatic_Called D' b t P D F C ics ics' v h stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case by(cases "field P D F", cases "fst(snd(field P D F))",
cases ics, auto) (fastforce dest: has_field_sees[OF has_field_idemp])+
next
case (exec_step_ind_Getstatic_Done D' b t P D F C ics sh sfs v h stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case by(cases "field P D F", cases "fst(snd(field P D F))"; cases ics,
auto dest: has_field_sees[OF has_field_idemp])
next
case (exec_step_ind_Getstatic_Init D' b t P D F C sh ics h stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case
by(cases "field P D F", cases "fst(snd(field P D F))"; cases ics; cases "snd(the(sh D'))",
auto dest: has_field_sees[OF has_field_idemp])
next
case (exec_step_ind_Putfield_NoField r stk a D fs h P F C loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases "the (h (the_Addr (hd(tl stk))))", cases ics, auto)
next
case (exec_step_ind_Putfield_Static r stk a D fs h P F t C loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case
by(cases "the (h (the_Addr (hd(tl stk))))", cases "field P C F", cases "fst(snd(field P C F))",
cases ics, auto dest: has_field_sees[OF has_field_idemp])
next
case (exec_step_ind_Putfield v stk r a D fs h D' b t P C F loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases "the (h (the_Addr (hd(tl stk))))", cases "field P C F",
cases ics, auto) (fastforce dest: has_field_sees[OF has_field_idemp])+
next
case (exec_step_ind_Putstatic_NonStatic P C F t D h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases "field P D F", cases "fst(snd(field P D F))";
cases ics, auto) (fastforce dest: has_field_sees[OF has_field_idemp])+
next
case (exec_step_ind_Putstatic_Called D' b t P D F C sh sfs i h stk loc C\<^sub>0 M\<^sub>0 pc frs ics')
then show ?case by(cases "field P D F", cases "fst(snd(field P D F))",
cases ics', auto dest: has_field_sees[OF has_field_idemp])
next
case (exec_step_ind_Putstatic_Done D' b t P D F C ics sh sfs h stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case by(cases "field P D F", cases "fst(snd(field P D F))"; cases ics,
auto dest: has_field_sees[OF has_field_idemp])
next
case (exec_step_ind_Putstatic_Init D' b t P D F C sh ics h stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case
by(cases "field P D F", cases "fst(snd(field P D F))"; cases ics; cases "snd(the(sh D'))",
auto dest: has_field_sees[OF has_field_idemp])
next
case (exec_step_ind_Invoke ps n stk r C h D b Ts T mxs mxl\<^sub>0 ins xt P M m f' loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, fastforce+)
next
case (exec_step_ind_Invokestatic_Called ps n stk D b Ts T mxs mxl\<^sub>0 ins xt P C M m ics ics' sh)
then show ?case by(cases ics, fastforce+)
next
case (exec_step_ind_Invokestatic_Done ps n stk D b Ts T mxs mxl\<^sub>0 ins xt P C M m ics sh sfs f')
then show ?case by(cases ics, auto) fastforce+
next
case (exec_step_ind_Invokestatic_Init D b Ts T mxs mxl\<^sub>0 ins xt P C M m sh ics n h stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case by(cases ics; cases "snd(the(sh D))", auto) fastforce+
next
case (exec_step_ind_Return_Last ics P h stk\<^sub>0 loc\<^sub>0 C\<^sub>0 M\<^sub>0 pc sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Return_NonStatic D Ts T m P C\<^sub>0 M\<^sub>0 ics h stk\<^sub>0 loc\<^sub>0 pc stk' loc' C' m' pc' ics' frs' sh)
then show ?case by(cases "method P C\<^sub>0 M\<^sub>0", cases ics, auto)
next
case (exec_step_ind_Return_Static D Ts T m P C\<^sub>0 M\<^sub>0 ics h stk\<^sub>0 loc\<^sub>0 pc stk' loc' C' m' pc' ics' frs' sh)
then show ?case by(cases "method P C\<^sub>0 M\<^sub>0", cases ics, auto)
next
case (exec_step_ind_IfFalse_nFalse stk i P h loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases "hd stk"; cases ics, auto)
next
case (exec_step_ind_Throw_Null stk P h loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases "hd stk"; cases ics, auto)
next
case (exec_step_ind_Init_None_nCalled sh C ics P h stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Init_Error sh C sfs stk' loc' D' M' pc' ics' P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs)
then show ?case by(cases "method P C clinit", cases ics, auto)
next
case (exec_step_ind_Init_Prepared_nObject sh C sfs stk' loc' C' m' pc' ics' P sh' D h stk loc C\<^sub>0 M\<^sub>0 pc ics frs)
then show ?case by(cases "method P C clinit", cases ics, auto)
next
case (exec_step_ind_InitThrow_Other ics' a P h stk loc C\<^sub>0 M\<^sub>0 pc ics stk' loc' C' m' pc' frs' sh)
then show ?case by(cases ics; cases ics', auto)
next
case (exec_step_ind_InitThrow_Init ics' a P h stk loc C\<^sub>0 M\<^sub>0 pc ics stk' loc' C' pc' frs' sh)
then show ?case by(cases ics'; cases ics, auto)
next
case (exec_step_ind_InitThrow_Called a P h stk loc C\<^sub>0 M\<^sub>0 pc ics stk' loc' C' m' pc' frs' sh ics')
then show ?case by(cases ics', auto)
(***)
next
case (exec_step_ind_Load n P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Store n P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Push v P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_NewOOM_Called h C P stk loc C\<^sub>0 M\<^sub>0 pc frs sh ics')
then show ?case by(cases ics', auto)
next
case (exec_step_ind_New_Called h a C P stk loc C\<^sub>0 M\<^sub>0 pc frs sh ics')
then show ?case by(cases ics', auto)
next
case (exec_step_ind_Getfield_Null stk F C P h loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Getstatic_NoField P C F D h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Putfield_Null stk F C P h loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Putstatic_NoField P C F D h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Checkcast P C h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Checkcast_Error P C h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Invoke_Null stk n M P h loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Invoke_NoMethod r stk n C h P M loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Invoke_Static r stk n C h D b Ts T mxs mxl\<^sub>0 ins xt P M m loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Invokestatic_NoMethod D b Ts T mxs mxl\<^sub>0 ins xt P C M n h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Invokestatic_NonStatic D b Ts T mxs mxl\<^sub>0 ins xt P C M m n h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Return_Last_Init P h stk\<^sub>0 loc\<^sub>0 C\<^sub>0 M\<^sub>0 pc sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Return_Init D b Ts T m P C\<^sub>0 M\<^sub>0 h stk\<^sub>0 loc\<^sub>0 pc stk' loc' C' m' pc' ics' frs' sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Pop P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_IAdd P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_IfFalse_False stk i P h loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_CmpEq P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Goto i P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Throw stk a P h loc C\<^sub>0 M\<^sub>0 pc ics frs sh)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Init_None_Called sh C ics P h stk loc C\<^sub>0 M\<^sub>0 pc frs)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Init_Done sh C sfs P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Init_Processing sh C sfs P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_Init_Prepared_Object sh C sfs sh' P h stk loc C\<^sub>0 M\<^sub>0 pc ics frs)
then show ?case by(cases ics, auto)
next
case (exec_step_ind_InitThrow_Last a P h stk loc C\<^sub>0 M\<^sub>0 pc ics sh)
then show ?case by(cases ics, auto)
qed
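(* the functional single-step executor and the inductive formulation agree *)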
lemma exec_step_ind_equiv:
"exec_step_input P C M pc ics = (si, ics')
\<Longrightarrow> exec_step P h stk loc C M pc ics frs sh = (xp', h', frs', sh')
= exec_step_ind si P h stk loc C M pc ics' frs sh (xp', h', frs', sh')"
using exec_step_imp_exec_step_ind exec_step_ind_imp_exec_step by meson
end
|
function XY = rl_reward_firepit( type )
%RL_REWARD_FIREPIT Returns the grid coordinates of a fire pit region.
%
% input ------------------------------------------------------------------
%
% o type : string, type of fire pit.
%
% output-----------------------------------------------------------------
%
% o XY : (N x 2), grid coordinates covered by the fire pit
% (empty if the type is unrecognized).
%
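% example:
%   XY = rl_reward_firepit('central_pit');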
%
XY = [];
if strcmp(type,'central_pit')
[X,Y] = meshgrid(linspace(3,7,20),linspace(3,7,20));
XY = [X(:),Y(:)];
else
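% unrecognized type: return empty XY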
end
end
|
// This file is part of the dune-stuff project:
// https://github.com/wwu-numerik/dune-stuff
// Copyright holders: Rene Milk, Felix Schindler
// License: BSD 2-Clause License (http://opensource.org/licenses/BSD-2-Clause)
#include "config.h"
#include "localization-study.hh"
#include <boost/io/ios_state.hpp>
#include <dune/stuff/common/string.hh>
namespace Dune {
namespace Stuff {
namespace Common {
LocalizationStudy::LocalizationStudy(const std::vector<std::string> only_these_indicators)
: only_these_indicators_(only_these_indicators)
{
}
LocalizationStudy::~LocalizationStudy() {}
std::vector<std::string> LocalizationStudy::used_indicators() const
{
if (only_these_indicators_.empty())
return provided_indicators();
else {
std::vector<std::string> ret;
for (auto indicator : provided_indicators())
if (std::find(only_these_indicators_.begin(), only_these_indicators_.end(), indicator)
!= only_these_indicators_.end())
ret.push_back(indicator);
return ret;
}
} // ... used_indicators(...)
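// Prints one table row per indicator, comparing it to the reference
// indicators via the L^2 norm, the sup norm and the standard deviation
// of the difference vector.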
void LocalizationStudy::run(std::ostream& out)
{
boost::io::ios_all_saver guard(out);
if (provided_indicators().size() == 0)
DUNE_THROW(Dune::InvalidStateException, "You have to provide at least one indicator!");
const auto actually_used_indicators = used_indicators();
if (actually_used_indicators.size() == 0)
DUNE_THROW(Dune::InvalidStateException,
"There are no common indicators in 'provided_indicators()' and 'only_these_indicators'!");
// build table header
out << identifier() << std::endl;
const size_t total_width = 80;
std::string header_line =
std::string(" ||") + " L^2 difference " + "|" + " L^oo difference " + "|" + " standard deviation ";
size_t first_column_size = 0;
for (auto id : actually_used_indicators)
first_column_size = std::max(first_column_size, id.size());
first_column_size = std::max(first_column_size, total_width - header_line.size() - 1);
std::string first_column_str = "";
for (size_t ii = 0; ii < first_column_size; ++ii)
first_column_str += " ";
header_line = std::string(" ") + first_column_str + header_line;
// print table header
if (identifier().size() > header_line.size())
out << Stuff::Common::whitespaceify(identifier(), '=') << "\n";
else
out << Stuff::Common::whitespaceify(header_line, '=') << "\n";
out << header_line << "\n";
const std::string thin_delimiter = Stuff::Common::whitespaceify(" " + first_column_str + " ", '-') + "++"
+ Stuff::Common::whitespaceify(" L^2 difference ", '-') + "+"
+ Stuff::Common::whitespaceify(" L^oo difference ", '-') + "+"
+ Stuff::Common::whitespaceify(" standard deviation ", '-');
const std::string thick_delimiter = Stuff::Common::whitespaceify(" " + first_column_str + " ", '=') + "++"
+ Stuff::Common::whitespaceify(" L^2 difference ", '=') + "+"
+ Stuff::Common::whitespaceify(" L^oo difference ", '=') + "+"
+ Stuff::Common::whitespaceify(" standard deviation ", '=');
out << thick_delimiter << std::endl;
  // compute reference indicators
const auto reference_indicators = compute_reference_indicators();
if (reference_indicators.size() == 0)
DUNE_THROW(Exceptions::requirements_not_met, "Given reference indicators must not be empty!");
// loop over all indicators
for (size_t ind = 0; ind < actually_used_indicators.size(); ++ind) {
const std::string indicator_id = actually_used_indicators[ind];
// enlarge/cap id to first_column_size chars
std::string id_str = indicator_id.empty() ? "???" : indicator_id;
if (id_str.size() > first_column_size)
id_str = id_str.substr(0, first_column_size);
else if (id_str.size() < first_column_size) {
const double missing = (double(first_column_size) - id_str.size()) / 2.0;
for (size_t ii = 0; ii < missing; ++ii)
id_str = " " + id_str + " ";
if (id_str.size() == (first_column_size - 1))
id_str = " " + id_str;
}
assert(id_str.size() == first_column_size);
// print first column
out << " " << id_str << " || " << std::flush;
// compute indicators
const auto indicators = compute_indicators(indicator_id);
if (indicators.size() != reference_indicators.size())
DUNE_THROW(Exceptions::requirements_not_met,
"Given indicators of type '" << indicator_id << "' are of wrong length (is " << indicators.size()
<< ", should be "
<< reference_indicators.size()
<< ")!");
const auto difference = reference_indicators - indicators;
// compute L^2 difference
out << std::setw(18) << std::setprecision(2) << std::scientific << difference.l2_norm() << std::flush;
// compute L^oo difference
out << " | " << std::setw(18) << std::setprecision(2) << std::scientific << difference.sup_norm() << std::flush;
// compute standard deviation
out << " | " << std::setw(18) << std::setprecision(2) << std::scientific << difference.standard_deviation()
<< std::flush;
if (ind < (actually_used_indicators.size() - 1))
out << "\n" << thin_delimiter;
out << std::endl;
} // loop over all indicators
} // ... run(...)
} // namespace Common
} // namespace Stuff
} // namespace Dune
|
(*
* Copyright 2022, Proofcraft Pty Ltd
* Copyright 2014, General Dynamics C4 Systems
*
* SPDX-License-Identifier: GPL-2.0-only
*)
theory ArchDetSchedAux_AI
imports DetSchedAux_AI
begin
context Arch begin global_naming ARM_HYP
named_theorems DetSchedAux_AI_assms
crunches init_arch_objects
for exst[wp]: "\<lambda>s. P (exst s)"
and ct[wp]: "\<lambda>s. P (cur_thread s)"
and valid_etcbs[wp, DetSchedAux_AI_assms]: valid_etcbs
(wp: crunch_wps unless_wp)
crunch ct[wp, DetSchedAux_AI_assms]: invoke_untyped "\<lambda>s. P (cur_thread s)"
(wp: crunch_wps dxo_wp_weak preemption_point_inv mapME_x_inv_wp
simp: crunch_simps do_machine_op_def detype_def mapM_x_defsym unless_def
ignore: freeMemory retype_region_ext)
crunch ready_queues[wp, DetSchedAux_AI_assms]: invoke_untyped "\<lambda>s. P (ready_queues s)"
(wp: crunch_wps mapME_x_inv_wp preemption_point_inv'
simp: detype_def detype_ext_def crunch_simps
wrap_ext_det_ext_ext_def mapM_x_defsym
ignore: freeMemory)
crunch scheduler_action[wp, DetSchedAux_AI_assms]: invoke_untyped "\<lambda>s. P (scheduler_action s)"
(wp: crunch_wps mapME_x_inv_wp preemption_point_inv'
simp: detype_def detype_ext_def crunch_simps
wrap_ext_det_ext_ext_def mapM_x_defsym
ignore: freeMemory)
crunch cur_domain[wp, DetSchedAux_AI_assms]: invoke_untyped "\<lambda>s. P (cur_domain s)"
(wp: crunch_wps mapME_x_inv_wp preemption_point_inv'
simp: detype_def detype_ext_def crunch_simps
wrap_ext_det_ext_ext_def mapM_x_defsym
ignore: freeMemory)
crunch idle_thread[wp, DetSchedAux_AI_assms]: invoke_untyped "\<lambda>s. P (idle_thread s)"
(wp: crunch_wps mapME_x_inv_wp preemption_point_inv dxo_wp_weak
simp: detype_def detype_ext_def crunch_simps
wrap_ext_det_ext_ext_def mapM_x_defsym
ignore: freeMemory retype_region_ext)
lemma tcb_sched_action_valid_idle_etcb:
"\<lbrace>valid_idle_etcb\<rbrace>
tcb_sched_action foo thread
\<lbrace>\<lambda>_. valid_idle_etcb\<rbrace>"
apply (rule valid_idle_etcb_lift)
apply (simp add: tcb_sched_action_def set_tcb_queue_def)
apply (wp | simp)+
done
lemma delete_objects_etcb_at[wp, DetSchedAux_AI_assms]:
"\<lbrace>\<lambda>s::det_ext state. etcb_at P t s\<rbrace> delete_objects a b \<lbrace>\<lambda>r s. etcb_at P t s\<rbrace>"
apply (simp add: delete_objects_def)
apply (wp)
apply (simp add: detype_def detype_ext_def wrap_ext_det_ext_ext_def etcb_at_def|wp)+
done
crunch etcb_at[wp]: reset_untyped_cap "etcb_at P t"
(wp: preemption_point_inv' mapME_x_inv_wp crunch_wps
simp: unless_def)
crunch valid_etcbs[wp]: reset_untyped_cap "valid_etcbs"
(wp: preemption_point_inv' mapME_x_inv_wp crunch_wps
simp: unless_def)
lemma invoke_untyped_etcb_at [DetSchedAux_AI_assms]:
"\<lbrace>(\<lambda>s :: det_ext state. etcb_at P t s) and valid_etcbs\<rbrace> invoke_untyped ui \<lbrace>\<lambda>r s. st_tcb_at (Not o inactive) t s \<longrightarrow> etcb_at P t s\<rbrace>"
apply (cases ui)
apply (simp add: mapM_x_def[symmetric] invoke_untyped_def whenE_def
split del: if_split)
apply (rule hoare_pre)
apply (wp retype_region_etcb_at mapM_x_wp'
create_cap_no_pred_tcb_at typ_at_pred_tcb_at_lift
hoare_convert_imp[OF create_cap_no_pred_tcb_at]
hoare_convert_imp[OF _ init_arch_objects_exst]
| simp
| (wp (once) hoare_drop_impE_E))+
done
crunch valid_blocked[wp, DetSchedAux_AI_assms]: init_arch_objects valid_blocked
(wp: valid_blocked_lift set_cap_typ_at)
lemma perform_asid_control_etcb_at:"\<lbrace>(\<lambda>s. etcb_at P t s) and valid_etcbs\<rbrace>
perform_asid_control_invocation aci
\<lbrace>\<lambda>r s. st_tcb_at (Not \<circ> inactive) t s \<longrightarrow> etcb_at P t s\<rbrace>"
apply (simp add: perform_asid_control_invocation_def)
apply (rule hoare_pre)
apply ( wp | wpc | simp)+
apply (wp hoare_imp_lift_something typ_at_pred_tcb_at_lift)[1]
apply (rule hoare_drop_imps)
apply (wp retype_region_etcb_at)+
apply simp
done
crunch ct[wp]: perform_asid_control_invocation "\<lambda>s. P (cur_thread s)"
crunch idle_thread[wp]: perform_asid_control_invocation "\<lambda>s. P (idle_thread s)"
crunch valid_etcbs[wp]: perform_asid_control_invocation valid_etcbs (wp: static_imp_wp)
crunch valid_blocked[wp]: perform_asid_control_invocation valid_blocked (wp: static_imp_wp)
crunch schedact[wp]: perform_asid_control_invocation "\<lambda>s :: det_ext state. P (scheduler_action s)" (wp: crunch_wps simp: detype_def detype_ext_def wrap_ext_det_ext_ext_def cap_insert_ext_def ignore: freeMemory)
crunch rqueues[wp]: perform_asid_control_invocation "\<lambda>s :: det_ext state. P (ready_queues s)" (wp: crunch_wps simp: detype_def detype_ext_def wrap_ext_det_ext_ext_def cap_insert_ext_def ignore: freeMemory)
crunch cur_domain[wp]: perform_asid_control_invocation "\<lambda>s :: det_ext state. P (cur_domain s)" (wp: crunch_wps simp: detype_def detype_ext_def wrap_ext_det_ext_ext_def cap_insert_ext_def ignore: freeMemory)
lemma perform_asid_control_invocation_valid_sched:
"\<lbrace>ct_active and invs and valid_aci aci and valid_sched and valid_idle\<rbrace>
perform_asid_control_invocation aci
\<lbrace>\<lambda>_. valid_sched\<rbrace>"
apply (rule hoare_pre)
apply (rule_tac I="invs and ct_active and valid_aci aci" in valid_sched_tcb_state_preservation)
apply (wp perform_asid_control_invocation_st_tcb_at)
apply simp
apply (wp perform_asid_control_etcb_at)+
apply (rule hoare_strengthen_post, rule aci_invs)
apply (simp add: invs_def valid_state_def)
apply (rule hoare_lift_Pf[where f="\<lambda>s. scheduler_action s"])
apply (rule hoare_lift_Pf[where f="\<lambda>s. cur_domain s"])
apply (rule hoare_lift_Pf[where f="\<lambda>s. idle_thread s"])
apply wp+
apply simp
done
crunch valid_queues[wp]: init_arch_objects valid_queues (wp: valid_queues_lift)
crunch valid_sched_action[wp]: init_arch_objects valid_sched_action (wp: valid_sched_action_lift)
crunch valid_sched[wp]: init_arch_objects valid_sched (wp: valid_sched_lift)
end
lemmas tcb_sched_action_valid_idle_etcb
= ARM_HYP.tcb_sched_action_valid_idle_etcb
global_interpretation DetSchedAux_AI_det_ext?: DetSchedAux_AI_det_ext
proof goal_cases
interpret Arch .
case 1 show ?case by (unfold_locales; (fact DetSchedAux_AI_assms | wp)?)
qed
global_interpretation DetSchedAux_AI?: DetSchedAux_AI
proof goal_cases
interpret Arch .
case 1 show ?case by (unfold_locales; (fact DetSchedAux_AI_assms | wp)?)
qed
end
|
struct NpyPickler{PROTO} <: AbstractPickle
memo::Memo
stack::PickleStack
mt::HierarchicalTable
end
function NpyPickler(proto=DEFAULT_PROTO, memo=Dict())
mt = HierarchicalTable()
mt["numpy.core.multiarray._reconstruct"] = np_multiarray_reconstruct
mt["numpy.dtype"] = np_dtype
mt["numpy.core.multiarray.scalar"] = np_scalar
mt["__build__.Pickle.NpyDtype"] = build_npydtype
mt["__build__.Pickle.NpyArrayPlaceholder"] = build_nparray
mt["scipy.sparse.csr.csr_matrix"] = sparse_matrix_reconstruct
mt["__build__.Pickle.SpMatrixPlaceholder"] = build_spmatrix
return Pickler{proto}(Memo(memo), PickleStack(), mt)
end
npyload(f) = load(NpyPickler(), f)
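# usage sketch (file name is illustrative):
#   arr = npyload("array.pkl")  # deserialize a numpy-backed pickle into Julia values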
struct NpyArrayPlaceholder end
function np_multiarray_reconstruct(subtype, shape, dtype)
@assert subtype.head == Symbol("numpy.ndarray")
@assert isempty(subtype.args)
@assert shape == (0,)
@assert dtype == b"b" || dtype == "b"
return NpyArrayPlaceholder()
end
struct SpMatrixPlaceholder end
sparse_matrix_reconstruct() = SpMatrixPlaceholder()
struct NpyDtype{T}
little_endian::Bool
dstring::String
align::Bool
copy::Bool
end
Base.eltype(::NpyDtype{T}) where T = T
function npy_typechar_to_jltype(t, n)
if t in ("?", "b", "B")
@assert isempty(n) || n == "1"
end
n = tryparse(Int, n)
n = (isnothing(n) ? 4 : n) * 8
@assert n in (8, 16, 32, 64, 128)
if t == "?"
return Bool
elseif t == "b"
return Int8
elseif t == "B"
return UInt8
elseif t == "i"
if n == 8
return Int8
elseif n == 16
return Int16
elseif n == 32
return Int32
elseif n == 64
return Int64
elseif n == 128
return Int128
else
error("unsupport length $n for $t")
end
elseif t == "u"
if n == 8
return UInt8
elseif n == 16
return UInt16
elseif n == 32
return UInt32
elseif n == 64
return UInt64
elseif n == 128
return UInt128
else
error("unsupport length $n for $t")
end
elseif t == "f"
if n == 16
return Float16
elseif n == 32
return Float32
elseif n == 64
return Float64
else
error("unsupport length $n for $t")
end
else
error("unsupport type $t")
end
end
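# e.g. npy_typechar_to_jltype("i", "8") == Int64; npy_typechar_to_jltype("f", "4") == Float32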
struct NString{N} end
slen(::NString{N}) where N = N
# TODO: handle dtype comprehensively
# https://github.com/numpy/numpy/blob/eeef9d4646103c3b1afd3085f1393f2b3f9575b2/numpy/core/src/multiarray/descriptor.c#L2442
function np_dtype(obj, align, copy)
align, copy = Bool(align), Bool(copy)
@assert !align "structure dtype disallow"
@assert copy
m = match(r"^([<=>])?([?bBiufU])(\d*)$", obj)
if isnothing(m)
@warn "unsupported dtype $obj: consider file an issue"
return Defer(Symbol("numpy.dtype"), obj, align, copy)
end
ei, t, n = m.captures
# '>': big, '<': little, '=': hardware-native
islittle = ei == ">" ? false : ei == "<" ? true : islittle_endian()
if t == "U"
n = tryparse(Int, n)
T = NString{isnothing(n) ? 1 : n}()
else
T = npy_typechar_to_jltype(t, n)
end
return NpyDtype{T}(islittle, obj, align, copy)
end
function build_npydtype(npydtype, state)
metadata = length(state) == 9 ? state[9] : nothing
ver, ei, sub_descrip, _names, _fields, elsize, alignment, flags = state
T = eltype(npydtype)
@assert isnothing(sub_descrip)
@assert isnothing(_names)
@assert isnothing(_fields)
if T isa NString
n = slen(T)
@assert elsize == 4n
@assert alignment == 4
@assert flags == 8
else
@assert elsize == -1
@assert alignment == -1
@assert flags == 0
end
# '>': big, '<': little, '=': hardware-native
islittle = ei == ">" ? false : ei == "<" ? true : islittle_endian()
return NpyDtype{T}(islittle, npydtype.dstring, npydtype.align, npydtype.copy)
end
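# Reinterpret a flat C-order (row-major) buffer as a Fortran-order array view,
# e.g. c2f(collect(1:6), (2, 3)) == [1 2 3; 4 5 6]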
c2f(arr, shape) = PermutedDimsArray(reshape(arr, reverse(shape)), reverse(ntuple(identity, length(shape))))
c2f(arr, shape, n) = c2f(arr, (shape..., n))
function nstring(cs)
i = findfirst(isequal('\0'), cs)
return isnothing(i) ? String(cs) : String(@view(cs[1:i-1]))
end
# https://github.com/numpy/numpy/blob/6568c6b022e12ab6d71e7548314009ced6ccabe9/numpy/core/src/multiarray/methods.c#L1711
# TODO: support picklebuffer (Pickle v5)
function build_nparray(_, args)
ver, shp, dtype, is_column_maj, data = args
if dtype isa Defer
return Defer(Symbol("build.nparray"), args)
end
T = eltype(dtype)
  data = data isa String ? codeunits(data) : data # old numpy used strings instead of bytes
if T isa NString
n = slen(T)
_data = reinterpret(UInt32, data)
_arr = dtype.little_endian ? Char.(Base.ltoh.(_data)) : Char.(Base.ntoh.(_data))
arr = is_column_maj ? reshape(_arr, n, shp...) : c2f(_arr, shp, n)
return reshape(mapslices(nstring, arr; dims=ndims(arr)), shp)
else
_data = reinterpret(T, data)
_arr = dtype.little_endian ? Base.ltoh.(_data) : Base.ntoh.(_data)
arr = is_column_maj ? reshape(_arr, shp) : c2f(_arr, shp)
return collect(arr)
end
end
function np_scalar(dtype, data)
T = eltype(dtype)
if T isa NString
n = slen(T)
_data = reinterpret(UInt32, data)
_arr = dtype.little_endian ? Char.(Base.ltoh.(_data)) : Char.(Base.ntoh.(_data))
return String(_arr)
else
_data = reinterpret(T, data)
_arr = dtype.little_endian ? Base.ltoh.(_data) : Base.ntoh.(_data)
return _arr[]
end
end
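# scipy's csr_matrix is row-compressed with 0-based indices, while Julia's
# SparseMatrixCSC is column-compressed and 1-based; hence csr_to_csc and the .+ 1 shifts below.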
function build_spmatrix(_, args)
shape = args["_shape"]
nzval, colptr, rowval = csr_to_csc(shape..., args["data"], args["indptr"] .+ 1, args["indices"] .+ 1)
return SparseArrays.SparseMatrixCSC(shape..., colptr, rowval, nzval)
end
|
\documentclass[11pt]{article}
\usepackage{graphicx}
\usepackage{wrapfig}
\usepackage{url}
\usepackage{color}
\usepackage{marvosym}
\usepackage{enumerate}
\usepackage{subfigure}
\usepackage{tikz}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{hyperref}
\usepackage{CJKutf8}
\usepackage{listings}
\usepackage[encapsulated]{CJK}
% set the default code style
\lstset{
frame=tb, % draw a frame at the top and bottom of the code block
tabsize=2
}
\oddsidemargin 0mm
\evensidemargin 5mm
\topmargin -20mm
\textheight 240mm
\textwidth 160mm
\newcommand{\vw}{{\bf w}}
\newcommand{\vx}{{\bf x}}
\newcommand{\vy}{{\bf y}}
\newcommand{\vxi}{{\bf x}_i}
\newcommand{\yi}{y_i}
\newcommand{\vxj}{{\bf x}_j}
\newcommand{\vxn}{{\bf x}_n}
\newcommand{\yj}{y_j}
\newcommand{\ai}{\alpha_i}
\newcommand{\aj}{\alpha_j}
\newcommand{\X}{{\bf X}}
\newcommand{\Y}{{\bf Y}}
\newcommand{\vz}{{\bf z}}
\newcommand{\msigma}{{\bf \Sigma}}
\newcommand{\vmu}{{\bf \mu}}
\newcommand{\vmuk}{{\bf \mu}_k}
\newcommand{\msigmak}{{\bf \Sigma}_k}
\newcommand{\vmuj}{{\bf \mu}_j}
\newcommand{\msigmaj}{{\bf \Sigma}_j}
\newcommand{\pij}{\pi_j}
\newcommand{\pik}{\pi_k}
\newcommand{\D}{\mathcal{D}}
\newcommand{\el}{\mathcal{L}}
\newcommand{\N}{\mathcal{N}}
\newcommand{\vxij}{{\bf x}_{ij}}
\newcommand{\vt}{{\bf t}}
\newcommand{\yh}{\hat{y}}
\newcommand{\code}[1]{{\footnotesize \tt #1}}
\newcommand{\alphai}{\alpha_i}
\pagestyle{myheadings}
\markboth{Haitang Hu}{Machine Translation : Homework 4}
\title{Machine Translation\\Rerank}
\author{Haitang Hu \\
{\tt [email protected]}}
\date{\today}
\begin{document}
\large
\maketitle
\thispagestyle{headings}
\section{Rerank Methods} % (fold)
\label{sec:rerank_methods}
\subsection{Minimum Bayes-Risk Decoding} % (fold)
\label{sub:minimum_bayes_risk_decoding}
Minimum Bayes-Risk decoding\cite{mbr} aims to minimize the expected loss from translation errors. By plugging a specific metric (\textbf{BLEU}, \textbf{WER}) in as the loss function, we can build an MBR decoder that improves the reranking performance of a machine translation system.\\
The minimum Bayes risk can be expressed in terms of the \textit{loss function} and the decoder output $\delta(F)$:
$$ R(\delta(F)) = E_{P(E,A,F)}[L((E,A),\delta(F))]$$
where the expectation is taken under $P(E,A,F)$, the true joint distribution.\\
Given the loss function and the distribution, this problem has a well-known solution:
$$ \delta(F) = \text{arg min}_{E^{\prime}, A^{\prime}} \sum_{E,A}L((E,A),(E^{\prime}, A^{\prime}))P(E,A|F) $$
where $E$ is the translated sentence and $A$ is an alignment for the pair $(E,F)$. This ideal model is far from reality, however, since we do not have access to the true distribution $P(E,A|F)$. We therefore fall back on statistical estimation, approximating the distribution from the $N$-best list at hand, and the model becomes
$$ \hat{i} = \text{arg min}_{i \in \{1,2,\dots, N\}} \sum_{j=1}^N L((E_j,A_j), (E_i,A_i))P(E_j,A_j|F)$$
where $P(E_j,A_j|F)$ can be represented as
$$ P(E_j,A_j|F) = \frac{P(E_j,A_j,F)}{\sum_{k=1}^N P(E_k,A_k,F)}$$
Note that $P(E_j,A_j,F)$ is now just an empirical distribution over the given $N$-best list.\\
This model suggests that we should look at the entire $N$-best list and select the \textit{average} candidate as our result, since the average candidate always gives us the least surprise, or \textit{risk}.\\
It is also worth citing the paper's proof that if we use an indicator function as the loss function, MBR reduces to the MAP estimator.
$$ \delta_{MAP}(F) = \text{arg max}_{(E^{\prime}, A^{\prime})}P(E^{\prime}, A^{\prime}|F)$$
This is intuitive, since MAP uses a point estimate, which assumes that all the probability mass peaks at that single point; MBR instead works with a smoother distribution.
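\noindent The following minimal sketch illustrates this decision rule in Python; \code{sentence\_bleu} and the shape of the $N$-best entries (\code{(hypothesis, log\_posterior)} pairs) are assumptions made for illustration, not the actual assignment code.
\begin{lstlisting}[language=Python]
import math

def mbr_rerank(nbest, sentence_bleu):
    # nbest: list of (hypothesis, log_posterior) pairs for one source sentence.
    # Normalize the empirical posterior over the N-best list (max-shifted
    # exponentiation keeps the computation numerically stable).
    m = max(lp for _, lp in nbest)
    probs = [math.exp(lp - m) for _, lp in nbest]
    z = sum(probs)
    probs = [p / z for p in probs]
    # Select the candidate with minimum expected loss L = 1 - BLEU.
    best, best_risk = None, float("inf")
    for hyp_i, _ in nbest:
        risk = sum(p * (1.0 - sentence_bleu(hyp_i, hyp_j))
                   for (hyp_j, _), p in zip(nbest, probs))
        if risk < best_risk:
            best, best_risk = hyp_i, risk
    return best
\end{lstlisting}
Note that the risk computation makes $O(N^2)$ calls to \code{sentence\_bleu}, which motivates the matrix formulation discussed in the implementation section.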
% subsection minimum_bayes_risk_decoding (end)
\subsection{Feature Extension} % (fold)
\label{sub:feature_extension}
It is natural not to rely only on the translation model, language model, and lexical model scores. Here we encode two additional beliefs as features, to better represent our domain knowledge. First, we consider word counts: an intuitive way to encode this belief is to penalize the length difference $\delta(c, r)$ between candidate and reference. The second feature is simply the number of untranslated Russian words, denoted $u(c)$. The model score is then
$$ s(c) = \lambda_{l(c)}l(c) + \lambda_{t(c)}t(c) + \lambda_{lex(c)}lex(c) - \lambda_{\delta(c, r)}\delta(c, r) - \lambda_{u(c)}u(c)$$
Here we have five parameters, which we should tune to best fit the training data.
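\noindent As a sketch (the field and weight names are illustrative assumptions, not the grader's format), the score of a candidate can be computed as follows.
\begin{lstlisting}[language=Python]
def score(c, w):
    # c: per-candidate feature dict; w: the five tuned weights.
    return (w["lm"] * c["lm"] + w["tm"] * c["tm"] + w["lex"] * c["lex"]
            - w["c"] * c["len_diff"]        # |len(candidate) - len(reference)|
            - w["u"] * c["untranslated"])   # untranslated Russian words
\end{lstlisting}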
% subsection feature_extension (end)
% section rerank_methods (end)
\section{Implementation} % (fold)
\label{sec:implementation}
\subsection{Metric} % (fold)
\label{sub:metric}
Generally, \textbf{BLEU} is used as the loss function. Since the \textbf{BLEU} score always lies in the range $(0,1)$, we can define the loss function as
$$ L((E_j,A_j), (E_i,A_i)) = 1 - BLEU((E_j,A_j), (E_i,A_i))$$
Recall that we also need to specify the posterior distribution. Here we define it, in log space and up to normalization, as
$$ \log P(E_j,A_j|F) = \log(l(E_j)) + \log(t(E_j|F)) + \log(lex(E_j,A_j|F))$$
Another point worth mentioning is that \textbf{BLEU} can be applied at both the \textit{string level} and the \textit{word level}. We show the performance comparison later.
% subsection metric (end)
\subsection{Efficiency} % (fold)
\label{sub:efficiency}
For each $N$-best list we need at least $N^2$ iterations, since the loss is pairwise, so it is necessary to implement it in a smarter way. Here we employ a matrix formulation to avoid looping twice when computing the normalization constant and the pairwise loss.
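\noindent A vectorized NumPy sketch of this idea, assuming the pairwise loss matrix has been precomputed (the actual implementation may differ in detail):
\begin{lstlisting}[language=Python]
import numpy as np

def mbr_select(log_post, loss):
    # log_post: (N,) unnormalized log posteriors of the N-best candidates.
    # loss: (N, N) matrix with loss[i, j] = 1 - BLEU(e_i, e_j).
    p = np.exp(log_post - log_post.max())
    p /= p.sum()            # empirical posterior over the N-best list
    risk = loss @ p         # expected loss of every candidate in one pass
    return int(np.argmin(risk))
\end{lstlisting}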
% subsection efficiency (end)
% section implementation (end)
\section{Evaluation} % (fold)
\label{sec:evaluation}
\subsection{Result} % (fold)
\label{sub:result}
\begin{table}[!ht]
\centering
\begin{tabular}{ | c | c |}
\hline
Method & Score\\
\hline
baseline & $0.2735$\\
\hline
baseline($lm = -1.0, tm = -0.65, lex = -1.3$) & $0.2817$ \\
\hline
feature ext($lm = -1.0, tm = -0.65, lex = -1.3, c = 1.03, u = 0.1$) & $0.2893$ \\
\hline
MBR & $0.2916$ \\
\hline
MBR + word count & $\bf{0.2918}$ \\
\hline
\end{tabular}
\caption{Result}
$c$ and $u$ stand for the word-count weight and the untranslated-words weight
\end{table}
% subsection result (end)
% section evaluation (end)
\subsection{Evaluation and Optimization} % (fold)
\label{sub:evaluation_Optimization}
As we can see, simply tuning the parameters of the \textit{baseline} system already gives a big improvement, which shows the large gain we could get from \textit{MERT} or \textit{PRO} (which I did not implement).\\
Next, we look at the feature extension, which again raises the score considerably, since the features encode beliefs that capture significant properties of the data, and these beliefs happen to be right.\\
We obtain the best score by using the MBR method with the \textit{word count} feature included in the posterior distribution, which shows that we can benefit from more features in the MBR setting. Put another way, this suggests that combining MBR with \textit{MERT} could be beneficial.
% subsection evaluation (end)
\begin{thebibliography}{50}
\bibitem{mbr} Shankar Kumar and William Byrne. \textsl{Minimum Bayes-Risk Decoding for Statistical Machine Translation}, 2011.
\end{thebibliography}
\end{document}
|
-- Andreas, 2020-06-21, issue #4768
-- Problem was: @0 appearing in "Not a finite domain" message.
open import Agda.Builtin.Bool
open import Agda.Primitive.Cubical
f : (i : I) → IsOne i → Set
f i (i0 = i1) = Bool
-- EXPECTED:
-- Not a finite domain: IsOne i
-- when checking that the pattern (i0 = i1) has type IsOne i
|
Northern CG meets every Sunday in church for Bible study (except on the 1st Sunday of the month and during the June and November school holiday breaks). We also have a house gathering every two months for a time of worship and sharing, as well as to bond over a meal. Currently, we are studying the book of Exodus for 2018/9. Join us as we search out the plans of God, the person and work of Jesus, and the empowerment of the Spirit that are hidden in this book.
Central CG meets on Friday evenings at 8.15pm every 2nd and 4th week of the month at a CG member's home in the Upper Thomson area. Our gatherings usually begin with worship, followed by a time of sharing & mutual encouragement, before we study the Bible together. From April to June 2018 we shall cover Luke's Gospel, in line with Sunday messages. From July to September, we plan to explore Psalms and Missions. The fellowship continues over a sumptuous supper following the Bible study. Periodically, we have social events such as outings and combined fellowship with other CGs.
All are welcome to join us. Please call 96516347 for more information. |
lemma homotopy_dominated_contractibility: assumes f: "continuous_map X Y f" and g: "continuous_map Y X g" and hom: "homotopic_with (\<lambda>x. True) Y Y (f \<circ> g) id" and X: "contractible_space X" shows "contractible_space Y" |
(* Author: Matthias Brun, ETH Zürich, 2019 *)
(* Author: Dmitriy Traytel, ETH Zürich, 2019 *)
section \<open>Semantics of $\lambda\bullet$\<close>
(*<*)
theory Semantics
imports FMap_Lemmas Syntax
begin
(*>*)
text \<open>Avoid clash with substitution notation.\<close>
no_notation inverse_divide (infixl "'/" 70)
text \<open>Help automated provers with smallsteps.\<close>
declare One_nat_def[simp del]
subsection \<open>Equivariant Hash Function\<close>
consts hash_real :: "term \<Rightarrow> hash"
nominal_function map_fixed :: "var \<Rightarrow> var list \<Rightarrow> term \<Rightarrow> term" where
"map_fixed fp l Unit = Unit" |
"map_fixed fp l (Var y) = (if y \<in> set l then (Var y) else (Var fp))" |
"atom y \<sharp> (fp, l) \<Longrightarrow> map_fixed fp l (Lam y t) = (Lam y ((map_fixed fp (y # l) t)))" |
"atom y \<sharp> (fp, l) \<Longrightarrow> map_fixed fp l (Rec y t) = (Rec y ((map_fixed fp (y # l) t)))" |
"map_fixed fp l (Inj1 t) = (Inj1 ((map_fixed fp l t)))" |
"map_fixed fp l (Inj2 t) = (Inj2 ((map_fixed fp l t)))" |
"map_fixed fp l (Pair t1 t2) = (Pair ((map_fixed fp l t1)) ((map_fixed fp l t2)))" |
"map_fixed fp l (Roll t) = (Roll ((map_fixed fp l t)))" |
"atom y \<sharp> (fp, l) \<Longrightarrow> map_fixed fp l (Let t1 y t2) = (Let ((map_fixed fp l t1)) y ((map_fixed fp (y # l) t2)))" |
"map_fixed fp l (App t1 t2) = (App ((map_fixed fp l t1)) ((map_fixed fp l t2)))" |
"map_fixed fp l (Case t1 t2 t3) = (Case ((map_fixed fp l t1)) ((map_fixed fp l t2)) ((map_fixed fp l t3)))" |
"map_fixed fp l (Prj1 t) = (Prj1 ((map_fixed fp l t)))" |
"map_fixed fp l (Prj2 t) = (Prj2 ((map_fixed fp l t)))" |
"map_fixed fp l (Unroll t) = (Unroll ((map_fixed fp l t)))" |
"map_fixed fp l (Auth t) = (Auth ((map_fixed fp l t)))" |
"map_fixed fp l (Unauth t) = (Unauth ((map_fixed fp l t)))" |
"map_fixed fp l (Hash h) = (Hash h)" |
"map_fixed fp l (Hashed h t) = (Hashed h ((map_fixed fp l t)))"
using [[simproc del: alpha_lst defined_all]]
subgoal by (simp add: eqvt_def map_fixed_graph_aux_def)
subgoal by (erule map_fixed_graph.induct) (auto simp: fresh_star_def fresh_at_base)
apply clarify
subgoal for P fp l t
by (rule term.strong_exhaust[of t P "(fp, l)"]) (auto simp: fresh_star_def fresh_Pair)
apply (simp_all add: fresh_star_def fresh_at_base)
subgoal for y fp l t ya fpa la ta
apply (erule conjE)+
apply (erule Abs_lst1_fcb2'[where c = "(fp, l)"])
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff fresh_Pair)
done
subgoal for y fp l t ya fpa la ta
apply (erule conjE)+
apply (erule Abs_lst1_fcb2'[where c = "(fp, l)"])
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff fresh_Pair)
done
subgoal for y fp l t ya fpa la ta
apply (erule conjE)+
apply (erule Abs_lst1_fcb2'[where c = "(fp, l)"])
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff fresh_Pair)
done
done
nominal_termination (eqvt)
by lexicographic_order
definition hash where
"hash t = hash_real (map_fixed undefined [] t)"
lemma permute_map_list: "p \<bullet> l = map (\<lambda>x. p \<bullet> x) l"
by (induct l) auto
lemma map_fixed_eqvt: "p \<bullet> l = l \<Longrightarrow> map_fixed v l (p \<bullet> t) = map_fixed v l t"
proof (nominal_induct t avoiding: v l p rule: term.strong_induct)
case (Var x)
then show ?case
by (auto simp: term.supp supp_at_base permute_map_list list_eq_iff_nth_eq in_set_conv_nth)
next
case (Lam y e)
from Lam(1,2,3,5) Lam(4)[of p "y # l" v] show ?case
by (auto simp: fresh_perm)
next
case (Rec y e)
from Rec(1,2,3,5) Rec(4)[of p "y # l" v] show ?case
by (auto simp: fresh_perm)
next
case (Let e' y e)
from Let(1,2,3,6) Let(4)[of p l v] Let(5)[of p "y # l" v] show ?case
by (auto simp: fresh_perm)
qed (auto simp: permute_pure)
lemma hash_eqvt[eqvt]: "p \<bullet> hash t = hash (p \<bullet> t)"
unfolding permute_pure hash_def by (auto simp: map_fixed_eqvt)
lemma map_fixed_idle: "{x. \<not> atom x \<sharp> t} \<subseteq> set l \<Longrightarrow> map_fixed v l t = t"
proof (nominal_induct t avoiding: v l rule: term.strong_induct)
case (Var x)
then show ?case
by (auto simp: subset_iff fresh_at_base)
next
case (Lam y e)
from Lam(1,2,4) Lam(3)[of "y # l" v] show ?case
by (auto simp: fresh_Pair Abs1_eq)
next
case (Rec y e)
from Rec(1,2,4) Rec(3)[of "y # l" v] show ?case
by (auto simp: fresh_Pair Abs1_eq)
next
case (Let e' y e)
from Let(1,2,5) Let(3)[of l v] Let(4)[of "y # l" v] show ?case
by (auto simp: fresh_Pair Abs1_eq)
qed (auto simp: subset_iff)
lemma map_fixed_idle_closed:
"closed t \<Longrightarrow> map_fixed undefined [] t = t"
by (rule map_fixed_idle) auto
lemma map_fixed_inj_closed:
"closed t \<Longrightarrow> closed u \<Longrightarrow> map_fixed undefined [] t = map_fixed undefined [] u \<Longrightarrow> t = u"
by (rule box_equals[OF _ map_fixed_idle_closed map_fixed_idle_closed])
subsection \<open>Substitution\<close>
nominal_function subst_term :: "term \<Rightarrow> term \<Rightarrow> var \<Rightarrow> term" ("_[_ '/ _]" [250, 200, 200] 250) where
"Unit[t' / x] = Unit" |
"(Var y)[t' / x] = (if x = y then t' else Var y)" |
"atom y \<sharp> (x, t') \<Longrightarrow> (Lam y t)[t' / x] = Lam y (t[t' / x])" |
"atom y \<sharp> (x, t') \<Longrightarrow> (Rec y t)[t' / x] = Rec y (t[t' / x])" |
"(Inj1 t)[t' / x] = Inj1 (t[t' / x])" |
"(Inj2 t)[t' / x] = Inj2 (t[t' / x])" |
"(Pair t1 t2)[t' / x] = Pair (t1[t' / x]) (t2[t' / x]) " |
"(Roll t)[t' / x] = Roll (t[t' / x])" |
"atom y \<sharp> (x, t') \<Longrightarrow> (Let t1 y t2)[t' / x] = Let (t1[t' / x]) y (t2[t' / x])" |
"(App t1 t2)[t' / x] = App (t1[t' / x]) (t2[t' / x])" |
"(Case t1 t2 t3)[t' / x] = Case (t1[t' / x]) (t2[t' / x]) (t3[t' / x])" |
"(Prj1 t)[t' / x] = Prj1 (t[t' / x])" |
"(Prj2 t)[t' / x] = Prj2 (t[t' / x])" |
"(Unroll t)[t' / x] = Unroll (t[t' / x])" |
"(Auth t)[t' / x] = Auth (t[t' / x])" |
"(Unauth t)[t' / x] = Unauth (t[t' / x])" |
"(Hash h)[t' / x] = Hash h" |
"(Hashed h t)[t' / x] = Hashed h (t[t' / x])"
using [[simproc del: alpha_lst defined_all]]
subgoal by (simp add: eqvt_def subst_term_graph_aux_def)
subgoal by (erule subst_term_graph.induct) (auto simp: fresh_star_def fresh_at_base)
apply clarify
subgoal for P a b t
by (rule term.strong_exhaust[of a P "(b, t)"]) (auto simp: fresh_star_def fresh_Pair)
apply (simp_all add: fresh_star_def fresh_at_base)
subgoal
apply (erule conjE)
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff fresh_Pair)
done
subgoal
apply (erule conjE)
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff fresh_Pair)
done
subgoal
apply (erule conjE)
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff fresh_Pair)
done
done
nominal_termination (eqvt)
by lexicographic_order
type_synonym tenv = "(var, term) fmap"
nominal_function psubst_term :: "term \<Rightarrow> tenv \<Rightarrow> term" where
"psubst_term Unit f = Unit" |
"psubst_term (Var y) f = (case f $$ y of Some t \<Rightarrow> t | None \<Rightarrow> Var y)" |
"atom y \<sharp> f \<Longrightarrow> psubst_term (Lam y t) f = Lam y (psubst_term t f)" |
"atom y \<sharp> f \<Longrightarrow> psubst_term (Rec y t) f = Rec y (psubst_term t f)" |
"psubst_term (Inj1 t) f = Inj1 (psubst_term t f)" |
"psubst_term (Inj2 t) f = Inj2 (psubst_term t f)" |
"psubst_term (Pair t1 t2) f = Pair (psubst_term t1 f) (psubst_term t2 f) " |
"psubst_term (Roll t) f = Roll (psubst_term t f)" |
"atom y \<sharp> f \<Longrightarrow> psubst_term (Let t1 y t2) f = Let (psubst_term t1 f) y (psubst_term t2 f)" |
"psubst_term (App t1 t2) f = App (psubst_term t1 f) (psubst_term t2 f)" |
"psubst_term (Case t1 t2 t3) f = Case (psubst_term t1 f) (psubst_term t2 f) (psubst_term t3 f)" |
"psubst_term (Prj1 t) f = Prj1 (psubst_term t f)" |
"psubst_term (Prj2 t) f = Prj2 (psubst_term t f)" |
"psubst_term (Unroll t) f = Unroll (psubst_term t f)" |
"psubst_term (Auth t) f = Auth (psubst_term t f)" |
"psubst_term (Unauth t) f = Unauth (psubst_term t f)" |
"psubst_term (Hash h) f = Hash h" |
"psubst_term (Hashed h t) f = Hashed h (psubst_term t f)"
using [[simproc del: alpha_lst defined_all]]
subgoal by (simp add: eqvt_def psubst_term_graph_aux_def)
subgoal by (erule psubst_term_graph.induct) (auto simp: fresh_star_def fresh_at_base)
apply clarify
subgoal for P a b
by (rule term.strong_exhaust[of a P b]) (auto simp: fresh_star_def fresh_Pair)
apply (simp_all add: fresh_star_def fresh_at_base)
subgoal by clarify
subgoal
apply (erule conjE)
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff)
done
subgoal
apply (erule conjE)
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff)
done
subgoal
apply (erule conjE)
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff)
done
done
nominal_termination (eqvt)
by lexicographic_order
nominal_function subst_type :: "ty \<Rightarrow> ty \<Rightarrow> tvar \<Rightarrow> ty" where
"subst_type One t' x = One" |
"subst_type (Fun t1 t2) t' x = Fun (subst_type t1 t' x) (subst_type t2 t' x)" |
"subst_type (Sum t1 t2) t' x = Sum (subst_type t1 t' x) (subst_type t2 t' x)" |
"subst_type (Prod t1 t2) t' x = Prod (subst_type t1 t' x) (subst_type t2 t' x)" |
"atom y \<sharp> (t', x) \<Longrightarrow> subst_type (Mu y t) t' x = Mu y (subst_type t t' x)" |
"subst_type (Alpha y) t' x = (if y = x then t' else Alpha y)" |
"subst_type (AuthT t) t' x = AuthT (subst_type t t' x)"
using [[simproc del: alpha_lst defined_all]]
subgoal by (simp add: eqvt_def subst_type_graph_aux_def)
subgoal by (erule subst_type_graph.induct) (auto simp: fresh_star_def fresh_at_base)
apply clarify
subgoal for P a
by (rule ty.strong_exhaust[of a P]) (auto simp: fresh_star_def)
apply (simp_all add: fresh_star_def fresh_at_base)
subgoal
apply (erule conjE)
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff fresh_Pair)
done
done
nominal_termination (eqvt)
by lexicographic_order
lemma fresh_subst_term: "atom x \<sharp> t[t' / x'] \<longleftrightarrow> (x = x' \<or> atom x \<sharp> t) \<and> (atom x' \<sharp> t \<or> atom x \<sharp> t')"
by (nominal_induct t avoiding: t' x x' rule: term.strong_induct) (auto simp add: fresh_at_base)
lemma term_fresh_subst[simp]: "atom x \<sharp> t \<Longrightarrow> atom x \<sharp> s \<Longrightarrow> (atom (x::var)) \<sharp> t[s / y]"
by (nominal_induct t avoiding: s y rule: term.strong_induct) (auto)
lemma term_subst_idle[simp]: "atom y \<sharp> t \<Longrightarrow> t[s / y] = t"
by (nominal_induct t avoiding: s y rule: term.strong_induct) (auto simp: fresh_Pair fresh_at_base)
lemma term_subst_subst: "atom y1 \<noteq> atom y2 \<Longrightarrow> atom y1 \<sharp> s2 \<Longrightarrow> t[s1 / y1][s2 / y2] = t[s2 / y2][s1[s2 / y2] / y1]"
by (nominal_induct t avoiding: y1 y2 s1 s2 rule: term.strong_induct) auto
lemma fresh_psubst:
fixes x :: var
assumes "atom x \<sharp> e" "atom x \<sharp> vs"
shows "atom x \<sharp> psubst_term e vs"
using assms
by (induct e vs rule: psubst_term.induct)
(auto simp: fresh_at_base elim: fresh_fmap_fresh_Some split: option.splits)
lemma fresh_subst_type:
"atom \<alpha> \<sharp> subst_type \<tau> \<tau>' \<alpha>' \<longleftrightarrow> ((\<alpha> = \<alpha>' \<or> atom \<alpha> \<sharp> \<tau>) \<and> (atom \<alpha>' \<sharp> \<tau> \<or> atom \<alpha> \<sharp> \<tau>'))"
by (nominal_induct \<tau> avoiding: \<alpha> \<alpha>' \<tau>' rule: ty.strong_induct) (auto simp add: fresh_at_base)
lemma type_fresh_subst[simp]: "atom x \<sharp> t \<Longrightarrow> atom x \<sharp> s \<Longrightarrow> (atom (x::tvar)) \<sharp> subst_type t s y"
by (nominal_induct t avoiding: s y rule: ty.strong_induct) (auto)
lemma type_subst_idle[simp]: "atom y \<sharp> t \<Longrightarrow> subst_type t s y = t"
by (nominal_induct t avoiding: s y rule: ty.strong_induct) (auto simp: fresh_Pair fresh_at_base)
lemma type_subst_subst: "atom y1 \<noteq> atom y2 \<Longrightarrow> atom y1 \<sharp> s2 \<Longrightarrow>
subst_type (subst_type t s1 y1) s2 y2 = subst_type (subst_type t s2 y2) (subst_type s1 s2 y2) y1"
by (nominal_induct t avoiding: y1 y2 s1 s2 rule: ty.strong_induct) auto
subsection \<open>Weak Typing Judgement\<close>
type_synonym tyenv = "(var, ty) fmap"
inductive judge_weak :: "tyenv \<Rightarrow> term \<Rightarrow> ty \<Rightarrow> bool" ("_ \<turnstile>\<^sub>W _ : _" [150,0,150] 149) where
jw_Unit: "\<Gamma> \<turnstile>\<^sub>W Unit : One" |
jw_Var: "\<lbrakk> \<Gamma> $$ x = Some \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Var x : \<tau>" |
jw_Lam: "\<lbrakk> atom x \<sharp> \<Gamma>; \<Gamma>(x $$:= \<tau>\<^sub>1) \<turnstile>\<^sub>W e : \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Lam x e : Fun \<tau>\<^sub>1 \<tau>\<^sub>2" |
jw_App: "\<lbrakk> \<Gamma> \<turnstile>\<^sub>W e : Fun \<tau>\<^sub>1 \<tau>\<^sub>2; \<Gamma> \<turnstile>\<^sub>W e' : \<tau>\<^sub>1 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W App e e' : \<tau>\<^sub>2" |
jw_Let: "\<lbrakk> atom x \<sharp> (\<Gamma>, e\<^sub>1); \<Gamma> \<turnstile>\<^sub>W e\<^sub>1 : \<tau>\<^sub>1; \<Gamma>(x $$:= \<tau>\<^sub>1) \<turnstile>\<^sub>W e\<^sub>2 : \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Let e\<^sub>1 x e\<^sub>2 : \<tau>\<^sub>2" |
jw_Rec: "\<lbrakk> atom x \<sharp> \<Gamma>; atom y \<sharp> (\<Gamma>,x); \<Gamma>(x $$:= Fun \<tau>\<^sub>1 \<tau>\<^sub>2) \<turnstile>\<^sub>W Lam y e : Fun \<tau>\<^sub>1 \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Rec x (Lam y e) : Fun \<tau>\<^sub>1 \<tau>\<^sub>2" |
jw_Inj1: "\<lbrakk> \<Gamma> \<turnstile>\<^sub>W e : \<tau>\<^sub>1 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Inj1 e : Sum \<tau>\<^sub>1 \<tau>\<^sub>2" |
jw_Inj2: "\<lbrakk> \<Gamma> \<turnstile>\<^sub>W e : \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Inj2 e : Sum \<tau>\<^sub>1 \<tau>\<^sub>2" |
jw_Case: "\<lbrakk> \<Gamma> \<turnstile>\<^sub>W e : Sum \<tau>\<^sub>1 \<tau>\<^sub>2; \<Gamma> \<turnstile>\<^sub>W e\<^sub>1 : Fun \<tau>\<^sub>1 \<tau>; \<Gamma> \<turnstile>\<^sub>W e\<^sub>2 : Fun \<tau>\<^sub>2 \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Case e e\<^sub>1 e\<^sub>2 : \<tau>" |
jw_Pair: "\<lbrakk> \<Gamma> \<turnstile>\<^sub>W e\<^sub>1 : \<tau>\<^sub>1; \<Gamma> \<turnstile>\<^sub>W e\<^sub>2 : \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Pair e\<^sub>1 e\<^sub>2 : Prod \<tau>\<^sub>1 \<tau>\<^sub>2" |
jw_Prj1: "\<lbrakk> \<Gamma> \<turnstile>\<^sub>W e : Prod \<tau>\<^sub>1 \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Prj1 e : \<tau>\<^sub>1" |
jw_Prj2: "\<lbrakk> \<Gamma> \<turnstile>\<^sub>W e : Prod \<tau>\<^sub>1 \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Prj2 e : \<tau>\<^sub>2" |
jw_Roll: "\<lbrakk> atom \<alpha> \<sharp> \<Gamma>; \<Gamma> \<turnstile>\<^sub>W e : subst_type \<tau> (Mu \<alpha> \<tau>) \<alpha> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Roll e : Mu \<alpha> \<tau>" |
jw_Unroll: "\<lbrakk> atom \<alpha> \<sharp> \<Gamma>; \<Gamma> \<turnstile>\<^sub>W e : Mu \<alpha> \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Unroll e : subst_type \<tau> (Mu \<alpha> \<tau>) \<alpha>" |
jw_Auth: "\<lbrakk> \<Gamma> \<turnstile>\<^sub>W e : \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Auth e : \<tau>" |
jw_Unauth: "\<lbrakk> \<Gamma> \<turnstile>\<^sub>W e : \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile>\<^sub>W Unauth e : \<tau>"
declare judge_weak.intros[simp]
declare judge_weak.intros[intro]
equivariance judge_weak
nominal_inductive judge_weak
avoids jw_Lam: x
| jw_Rec: x and y
| jw_Let: x
| jw_Roll: \<alpha>
| jw_Unroll: \<alpha>
by (auto simp: fresh_subst_type fresh_Pair)
text \<open>Inversion rules for typing judgment.\<close>
inductive_cases jw_Unit_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Unit : \<tau>"
inductive_cases jw_Var_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Var x : \<tau>"
lemma jw_Lam_inv[elim]:
assumes "\<Gamma> \<turnstile>\<^sub>W Lam x e : \<tau>"
and "atom x \<sharp> \<Gamma>"
obtains \<tau>\<^sub>1 \<tau>\<^sub>2 where "\<tau> = Fun \<tau>\<^sub>1 \<tau>\<^sub>2" "(\<Gamma>(x $$:= \<tau>\<^sub>1)) \<turnstile>\<^sub>W e : \<tau>\<^sub>2"
using assms proof (atomize_elim, nominal_induct \<Gamma> "Lam x e" \<tau> avoiding: x e rule: judge_weak.strong_induct)
case (jw_Lam x \<Gamma> \<tau>\<^sub>1 t \<tau>\<^sub>2 y u)
then show ?case
by (auto simp: perm_supp_eq elim!:
iffD1[OF Abs_lst1_fcb2'[where f = "\<lambda>x t (\<Gamma>, \<tau>\<^sub>1, \<tau>\<^sub>2). (\<Gamma>(x $$:= \<tau>\<^sub>1)) \<turnstile>\<^sub>W t : \<tau>\<^sub>2"
and c = "(\<Gamma>, \<tau>\<^sub>1, \<tau>\<^sub>2)" and a = x and b = y and x = t and y = u, unfolded prod.case],
rotated -1])
qed
lemma swap_permute_swap: "atom x \<sharp> \<pi> \<Longrightarrow> atom y \<sharp> \<pi> \<Longrightarrow> (x \<leftrightarrow> y) \<bullet> \<pi> \<bullet> (x \<leftrightarrow> y) \<bullet> t = \<pi> \<bullet> t"
by (subst permute_eqvt) (auto simp: flip_fresh_fresh)
lemma jw_Rec_inv[elim]:
assumes "\<Gamma> \<turnstile>\<^sub>W Rec x t : \<tau>"
and "atom x \<sharp> \<Gamma>"
obtains y e \<tau>\<^sub>1 \<tau>\<^sub>2 where "atom y \<sharp> (\<Gamma>,x)" "t = Lam y e" "\<tau> = Fun \<tau>\<^sub>1 \<tau>\<^sub>2" "\<Gamma>(x $$:= Fun \<tau>\<^sub>1 \<tau>\<^sub>2) \<turnstile>\<^sub>W Lam y e : Fun \<tau>\<^sub>1 \<tau>\<^sub>2"
using [[simproc del: alpha_lst]] assms
proof (atomize_elim, nominal_induct \<Gamma> "Rec x t" \<tau> avoiding: x t rule: judge_weak.strong_induct)
case (jw_Rec x \<Gamma> y \<tau>\<^sub>1 \<tau>\<^sub>2 e z t)
then show ?case
proof (nominal_induct t avoiding: y x z rule: term.strong_induct)
case (Lam y' e')
show ?case
proof (intro exI conjI)
from Lam.prems show "atom y \<sharp> (\<Gamma>, z)" by simp
from Lam.hyps(1-3) Lam.prems show "Lam y' e' = Lam y ((y' \<leftrightarrow> y) \<bullet> e')"
by (subst term.eq_iff(3), intro Abs_lst_eq_flipI) (simp add: fresh_at_base)
from Lam.hyps(1-3) Lam.prems show "\<Gamma>(z $$:= Fun \<tau>\<^sub>1 \<tau>\<^sub>2) \<turnstile>\<^sub>W Lam y ((y' \<leftrightarrow> y) \<bullet> e') : Fun \<tau>\<^sub>1 \<tau>\<^sub>2"
by (elim judge_weak.eqvt[of "\<Gamma>(x $$:= Fun \<tau>\<^sub>1 \<tau>\<^sub>2)" "Lam y e" "Fun \<tau>\<^sub>1 \<tau>\<^sub>2" "(x \<leftrightarrow> z)", elim_format])
(simp add: perm_supp_eq Abs1_eq_iff fresh_at_base swap_permute_swap fresh_perm flip_commute)
qed simp
qed (simp_all add: Abs1_eq_iff)
qed
inductive_cases jw_Inj1_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Inj1 e : \<tau>"
inductive_cases jw_Inj2_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Inj2 e : \<tau>"
inductive_cases jw_Pair_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Pair e\<^sub>1 e\<^sub>2 : \<tau>"
lemma jw_Let_inv[elim]:
assumes "\<Gamma> \<turnstile>\<^sub>W Let e\<^sub>1 x e\<^sub>2 : \<tau>\<^sub>2"
and "atom x \<sharp> (e\<^sub>1, \<Gamma>)"
obtains \<tau>\<^sub>1 where "\<Gamma> \<turnstile>\<^sub>W e\<^sub>1 : \<tau>\<^sub>1" "\<Gamma>(x $$:= \<tau>\<^sub>1) \<turnstile>\<^sub>W e\<^sub>2 : \<tau>\<^sub>2"
using assms proof (atomize_elim, nominal_induct \<Gamma> "Let e\<^sub>1 x e\<^sub>2" \<tau>\<^sub>2 avoiding: e\<^sub>1 x e\<^sub>2 rule: judge_weak.strong_induct)
case (jw_Let x \<Gamma> e\<^sub>1 \<tau>\<^sub>1 e\<^sub>2 \<tau>\<^sub>2 x' e\<^sub>2')
then show ?case
by (auto simp: fresh_Pair perm_supp_eq elim!:
iffD1[OF Abs_lst1_fcb2'[where f = "\<lambda>x t (\<Gamma>, \<tau>\<^sub>1, \<tau>\<^sub>2). \<Gamma>(x $$:= \<tau>\<^sub>1) \<turnstile>\<^sub>W t : \<tau>\<^sub>2"
and c = "(\<Gamma>, \<tau>\<^sub>1, \<tau>\<^sub>2)" and a = x and b = x' and x = e\<^sub>2 and y = e\<^sub>2', unfolded prod.case],
rotated -1])
qed
inductive_cases jw_Prj1_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Prj1 e : \<tau>\<^sub>1"
inductive_cases jw_Prj2_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Prj2 e : \<tau>\<^sub>2"
inductive_cases jw_App_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W App e e' : \<tau>\<^sub>2"
inductive_cases jw_Case_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Case e e\<^sub>1 e\<^sub>2 : \<tau>"
inductive_cases jw_Auth_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Auth e : \<tau>"
inductive_cases jw_Unauth_inv[elim]: "\<Gamma> \<turnstile>\<^sub>W Unauth e : \<tau>"
lemma subst_type_perm_eq:
assumes "atom b \<sharp> t"
shows "subst_type t (Mu a t) a = subst_type ((a \<leftrightarrow> b) \<bullet> t) (Mu b ((a \<leftrightarrow> b) \<bullet> t)) b"
using assms proof -
have f: "atom a \<sharp> subst_type t (Mu a t) a" by (rule iffD2[OF fresh_subst_type]) simp
have "atom b \<sharp> subst_type t (Mu a t) a" by (auto simp: assms)
with f have "subst_type t (Mu a t) a = (a \<leftrightarrow> b) \<bullet> subst_type t (Mu a t) a"
by (simp add: flip_fresh_fresh)
then show "subst_type t (Mu a t) a = subst_type ((a \<leftrightarrow> b) \<bullet> t) (Mu b ((a \<leftrightarrow> b) \<bullet> t)) b"
by simp
qed
lemma jw_Roll_inv[elim]:
assumes "\<Gamma> \<turnstile>\<^sub>W Roll e : \<tau>"
and "atom \<alpha> \<sharp> (\<Gamma>, \<tau>)"
obtains \<tau>' where "\<tau> = Mu \<alpha> \<tau>'" "\<Gamma> \<turnstile>\<^sub>W e : subst_type \<tau>' (Mu \<alpha> \<tau>') \<alpha>"
using assms [[simproc del: alpha_lst]]
proof (atomize_elim, nominal_induct \<Gamma> "Roll e" \<tau> avoiding: e \<alpha> rule: judge_weak.strong_induct)
case (jw_Roll \<alpha> \<Gamma> e \<tau> \<alpha>')
then show ?case
by (auto simp: perm_supp_eq fresh_Pair fresh_at_base subst_type.eqvt
intro!: exI[of _ "(\<alpha> \<leftrightarrow> \<alpha>') \<bullet> \<tau>"] Abs_lst_eq_flipI dest: judge_weak.eqvt[of _ _ _ "(\<alpha> \<leftrightarrow> \<alpha>')"])
qed
lemma jw_Unroll_inv[elim]:
assumes "\<Gamma> \<turnstile>\<^sub>W Unroll e : \<tau>"
and "atom \<alpha> \<sharp> (\<Gamma>, \<tau>)"
obtains \<tau>' where "\<tau> = subst_type \<tau>' (Mu \<alpha> \<tau>') \<alpha>" "\<Gamma> \<turnstile>\<^sub>W e : Mu \<alpha> \<tau>'"
using assms proof (atomize_elim, nominal_induct \<Gamma> "Unroll e" \<tau> avoiding: e \<alpha> rule: judge_weak.strong_induct)
case (jw_Unroll \<alpha> \<Gamma> e \<tau> \<alpha>')
then show ?case
by (auto simp: perm_supp_eq fresh_Pair subst_type_perm_eq fresh_subst_type
intro!: exI[of _ "(\<alpha> \<leftrightarrow> \<alpha>') \<bullet> \<tau>"] dest: judge_weak.eqvt[of _ _ _ "(\<alpha> \<leftrightarrow> \<alpha>')"])
qed
text \<open>Additional inversion rules based on type rather than term.\<close>
inductive_cases jw_Prod_inv[elim]: "{$$} \<turnstile>\<^sub>W e : Prod \<tau>\<^sub>1 \<tau>\<^sub>2"
inductive_cases jw_Sum_inv[elim]: "{$$} \<turnstile>\<^sub>W e : Sum \<tau>\<^sub>1 \<tau>\<^sub>2"
lemma jw_Fun_inv[elim]:
assumes "{$$} \<turnstile>\<^sub>W v : Fun \<tau>\<^sub>1 \<tau>\<^sub>2" "value v"
obtains e x where "v = Lam x e \<or> v = Rec x e" "atom x \<sharp> (c::term)"
using assms [[simproc del: alpha_lst]]
proof (atomize_elim, nominal_induct "{$$} :: tyenv" v "Fun \<tau>\<^sub>1 \<tau>\<^sub>2" avoiding: \<tau>\<^sub>1 \<tau>\<^sub>2 rule: judge_weak.strong_induct)
case (jw_Lam x \<tau>\<^sub>1 e \<tau>\<^sub>2)
then obtain x' where "atom (x'::var) \<sharp> (c, e)" using finite_supp obtain_fresh' by blast
then have "[[atom x]]lst. e = [[atom x']]lst. (x \<leftrightarrow> x') \<bullet> e \<and> atom x' \<sharp> c"
by (simp add: Abs_lst_eq_flipI fresh_Pair)
then show ?case
by auto
next
case (jw_Rec x y \<tau>\<^sub>1 \<tau>\<^sub>2 e')
obtain x' where "atom (x'::var) \<sharp> (c, Lam y e')" using finite_supp obtain_fresh by blast
then have "[[atom x]]lst. Lam y e' = [[atom x']]lst. (x \<leftrightarrow> x') \<bullet> (Lam y e') \<and> atom x' \<sharp> c"
using Abs_lst_eq_flipI fresh_Pair by blast
then show ?case
by auto
qed simp_all
lemma jw_Mu_inv[elim]:
assumes "{$$} \<turnstile>\<^sub>W v : Mu \<alpha> \<tau>" "value v"
obtains v' where "v = Roll v'"
using assms by (atomize_elim, nominal_induct "{$$} :: tyenv" v "Mu \<alpha> \<tau>" rule: judge_weak.strong_induct) simp_all
subsection \<open>Erasure of Authenticated Types\<close>
nominal_function erase :: "ty \<Rightarrow> ty" where
"erase One = One" |
"erase (Fun \<tau>\<^sub>1 \<tau>\<^sub>2) = Fun (erase \<tau>\<^sub>1) (erase \<tau>\<^sub>2)" |
"erase (Sum \<tau>\<^sub>1 \<tau>\<^sub>2) = Sum (erase \<tau>\<^sub>1) (erase \<tau>\<^sub>2)" |
"erase (Prod \<tau>\<^sub>1 \<tau>\<^sub>2) = Prod (erase \<tau>\<^sub>1) (erase \<tau>\<^sub>2)" |
"erase (Mu \<alpha> \<tau>) = Mu \<alpha> (erase \<tau>)" |
"erase (Alpha \<alpha>) = Alpha \<alpha>" |
"erase (AuthT \<tau>) = erase \<tau>"
using [[simproc del: alpha_lst]]
subgoal by (simp add: eqvt_def erase_graph_aux_def)
subgoal by (erule erase_graph.induct) (auto simp: fresh_star_def fresh_at_base)
subgoal for P x
by (rule ty.strong_exhaust[of x P x]) (auto simp: fresh_star_def)
apply (simp_all add: fresh_star_def fresh_at_base)
subgoal
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff)
done
done
nominal_termination (eqvt)
by lexicographic_order
lemma fresh_erase_fresh:
assumes "atom x \<sharp> \<tau>"
shows "atom x \<sharp> erase \<tau>"
using assms by (induct \<tau> rule: ty.induct) auto
lemma fresh_fmmap_erase_fresh:
assumes "atom x \<sharp> \<Gamma>"
shows "atom x \<sharp> fmmap erase \<Gamma>"
using assms by transfer simp
lemma erase_subst_type_shift[simp]:
"erase (subst_type \<tau> \<tau>' \<alpha>) = subst_type (erase \<tau>) (erase \<tau>') \<alpha>"
by (induct \<tau> \<tau>' \<alpha> rule: subst_type.induct) (auto simp: fresh_Pair fresh_erase_fresh)
definition erase_env :: "tyenv \<Rightarrow> tyenv" where
"erase_env = fmmap erase"
subsection \<open>Strong Typing Judgement\<close>
inductive judge :: "tyenv \<Rightarrow> term \<Rightarrow> ty \<Rightarrow> bool" ("_ \<turnstile> _ : _" [150,0,150] 149) where
j_Unit: "\<Gamma> \<turnstile> Unit : One" |
j_Var: "\<lbrakk> \<Gamma> $$ x = Some \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Var x : \<tau>" |
j_Lam: "\<lbrakk> atom x \<sharp> \<Gamma>; \<Gamma>(x $$:= \<tau>\<^sub>1) \<turnstile> e : \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Lam x e : Fun \<tau>\<^sub>1 \<tau>\<^sub>2" |
j_App: "\<lbrakk> \<Gamma> \<turnstile> e : Fun \<tau>\<^sub>1 \<tau>\<^sub>2; \<Gamma> \<turnstile> e' : \<tau>\<^sub>1 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> App e e' : \<tau>\<^sub>2" |
j_Let: "\<lbrakk> atom x \<sharp> (\<Gamma>, e\<^sub>1); \<Gamma> \<turnstile> e\<^sub>1 : \<tau>\<^sub>1; \<Gamma>(x $$:= \<tau>\<^sub>1) \<turnstile> e\<^sub>2 : \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Let e\<^sub>1 x e\<^sub>2 : \<tau>\<^sub>2" |
j_Rec: "\<lbrakk> atom x \<sharp> \<Gamma>; atom y \<sharp> (\<Gamma>,x); \<Gamma>(x $$:= Fun \<tau>\<^sub>1 \<tau>\<^sub>2) \<turnstile> Lam y e' : Fun \<tau>\<^sub>1 \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Rec x (Lam y e') : Fun \<tau>\<^sub>1 \<tau>\<^sub>2" |
j_Inj1: "\<lbrakk> \<Gamma> \<turnstile> e : \<tau>\<^sub>1 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Inj1 e : Sum \<tau>\<^sub>1 \<tau>\<^sub>2" |
j_Inj2: "\<lbrakk> \<Gamma> \<turnstile> e : \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Inj2 e : Sum \<tau>\<^sub>1 \<tau>\<^sub>2" |
j_Case: "\<lbrakk> \<Gamma> \<turnstile> e : Sum \<tau>\<^sub>1 \<tau>\<^sub>2; \<Gamma> \<turnstile> e\<^sub>1 : Fun \<tau>\<^sub>1 \<tau>; \<Gamma> \<turnstile> e\<^sub>2 : Fun \<tau>\<^sub>2 \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Case e e\<^sub>1 e\<^sub>2 : \<tau>" |
j_Pair: "\<lbrakk> \<Gamma> \<turnstile> e\<^sub>1 : \<tau>\<^sub>1; \<Gamma> \<turnstile> e\<^sub>2 : \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Pair e\<^sub>1 e\<^sub>2 : Prod \<tau>\<^sub>1 \<tau>\<^sub>2" |
j_Prj1: "\<lbrakk> \<Gamma> \<turnstile> e : Prod \<tau>\<^sub>1 \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Prj1 e : \<tau>\<^sub>1" |
j_Prj2: "\<lbrakk> \<Gamma> \<turnstile> e : Prod \<tau>\<^sub>1 \<tau>\<^sub>2 \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Prj2 e : \<tau>\<^sub>2" |
j_Roll: "\<lbrakk> atom \<alpha> \<sharp> \<Gamma>; \<Gamma> \<turnstile> e : subst_type \<tau> (Mu \<alpha> \<tau>) \<alpha> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Roll e : Mu \<alpha> \<tau>" |
j_Unroll: "\<lbrakk> atom \<alpha> \<sharp> \<Gamma>; \<Gamma> \<turnstile> e : Mu \<alpha> \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Unroll e : subst_type \<tau> (Mu \<alpha> \<tau>) \<alpha>" |
j_Auth: "\<lbrakk> \<Gamma> \<turnstile> e : \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Auth e : AuthT \<tau>" |
j_Unauth: "\<lbrakk> \<Gamma> \<turnstile> e : AuthT \<tau> \<rbrakk>
\<Longrightarrow> \<Gamma> \<turnstile> Unauth e : \<tau>"
declare judge.intros[intro]
equivariance judge
nominal_inductive judge
avoids j_Lam: x
| j_Rec: x and y
| j_Let: x
| j_Roll: \<alpha>
| j_Unroll: \<alpha>
by (auto simp: fresh_subst_type fresh_Pair)
lemma judge_imp_judge_weak:
assumes "\<Gamma> \<turnstile> e : \<tau>"
shows "erase_env \<Gamma> \<turnstile>\<^sub>W e : erase \<tau>"
using assms unfolding erase_env_def
by (induct \<Gamma> e \<tau> rule: judge.induct) (simp_all add: fresh_Pair fresh_fmmap_erase_fresh fmmap_fmupd)
subsection \<open>Shallow Projection\<close>
nominal_function shallow :: "term \<Rightarrow> term" ("\<lparr>_\<rparr>") where
"\<lparr>Unit\<rparr> = Unit" |
"\<lparr>Var v\<rparr> = Var v" |
"\<lparr>Lam x e\<rparr> = Lam x \<lparr>e\<rparr>" |
"\<lparr>Rec x e\<rparr> = Rec x \<lparr>e\<rparr>" |
"\<lparr>Inj1 e\<rparr> = Inj1 \<lparr>e\<rparr>" |
"\<lparr>Inj2 e\<rparr> = Inj2 \<lparr>e\<rparr>" |
"\<lparr>Pair e\<^sub>1 e\<^sub>2\<rparr> = Pair \<lparr>e\<^sub>1\<rparr> \<lparr>e\<^sub>2\<rparr>" |
"\<lparr>Roll e\<rparr> = Roll \<lparr>e\<rparr>" |
"\<lparr>Let e\<^sub>1 x e\<^sub>2\<rparr> = Let \<lparr>e\<^sub>1\<rparr> x \<lparr>e\<^sub>2\<rparr>" |
"\<lparr>App e\<^sub>1 e\<^sub>2\<rparr> = App \<lparr>e\<^sub>1\<rparr> \<lparr>e\<^sub>2\<rparr>" |
"\<lparr>Case e e\<^sub>1 e\<^sub>2\<rparr> = Case \<lparr>e\<rparr> \<lparr>e\<^sub>1\<rparr> \<lparr>e\<^sub>2\<rparr>" |
"\<lparr>Prj1 e\<rparr> = Prj1 \<lparr>e\<rparr>" |
"\<lparr>Prj2 e\<rparr> = Prj2 \<lparr>e\<rparr>" |
"\<lparr>Unroll e\<rparr> = Unroll \<lparr>e\<rparr>" |
"\<lparr>Auth e\<rparr> = Auth \<lparr>e\<rparr>" |
"\<lparr>Unauth e\<rparr> = Unauth \<lparr>e\<rparr>" |
\<comment> \<open>No rule is defined for Hash, but: "[..] preserving that structure in every case but that of <h, v> [..]"\<close>
"\<lparr>Hash h\<rparr> = Hash h" |
"\<lparr>Hashed h e\<rparr> = Hash h"
using [[simproc del: alpha_lst]]
subgoal by (simp add: eqvt_def shallow_graph_aux_def)
subgoal by (erule shallow_graph.induct) (auto simp: fresh_star_def fresh_at_base)
subgoal for P a
by (rule term.strong_exhaust[of a P a]) (auto simp: fresh_star_def)
apply (simp_all add: fresh_star_def fresh_at_base)
subgoal
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff)
done
subgoal
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff)
done
subgoal
apply (erule Abs_lst1_fcb2')
apply (simp_all add: eqvt_at_def)
apply (simp_all add: perm_supp_eq Abs_fresh_iff)
done
done
nominal_termination (eqvt)
by lexicographic_order
lemma fresh_shallow: "atom x \<sharp> e \<Longrightarrow> atom x \<sharp> \<lparr>e\<rparr>"
by (induct e rule: term.induct) auto
subsection \<open>Small-step Semantics\<close>
datatype mode = I | P | V \<comment> \<open>Ideal, Prover and Verifier modes\<close>
instantiation mode :: pure
begin
definition permute_mode :: "perm \<Rightarrow> mode \<Rightarrow> mode" where
"permute_mode \<pi> h = h"
instance proof qed (auto simp: permute_mode_def)
end
type_synonym proofstream = "term list"
inductive smallstep :: "proofstream \<Rightarrow> term \<Rightarrow> mode \<Rightarrow> proofstream \<Rightarrow> term \<Rightarrow> bool" ("\<lless>_, _\<ggreater> _\<rightarrow> \<lless>_, _\<ggreater>") where
s_App1: "\<lbrakk> \<lless> \<pi>, e\<^sub>1 \<ggreater> m\<rightarrow> \<lless> \<pi>', e\<^sub>1' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, App e\<^sub>1 e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', App e\<^sub>1' e\<^sub>2 \<ggreater>" |
s_App2: "\<lbrakk> value v\<^sub>1; \<lless> \<pi>, e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', e\<^sub>2' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, App v\<^sub>1 e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', App v\<^sub>1 e\<^sub>2' \<ggreater>" |
s_AppLam: "\<lbrakk> value v; atom x \<sharp> (v,\<pi>) \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, App (Lam x e) v \<ggreater> _\<rightarrow> \<lless> \<pi>, e[v / x] \<ggreater>" |
s_AppRec: "\<lbrakk> value v; atom x \<sharp> (v,\<pi>) \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, App (Rec x e) v \<ggreater> _\<rightarrow> \<lless> \<pi>, App (e[(Rec x e) / x]) v \<ggreater>" |
s_Let1: "\<lbrakk> atom x \<sharp> (e\<^sub>1,e\<^sub>1',\<pi>,\<pi>'); \<lless> \<pi>, e\<^sub>1 \<ggreater> m\<rightarrow> \<lless> \<pi>', e\<^sub>1' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Let e\<^sub>1 x e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', Let e\<^sub>1' x e\<^sub>2 \<ggreater>" |
s_Let2: "\<lbrakk> value v; atom x \<sharp> (v,\<pi>) \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Let v x e \<ggreater> _\<rightarrow> \<lless> \<pi>, e[v / x] \<ggreater>" |
s_Inj1: "\<lbrakk> \<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Inj1 e \<ggreater> m\<rightarrow> \<lless> \<pi>', Inj1 e' \<ggreater>" |
s_Inj2: "\<lbrakk> \<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Inj2 e \<ggreater> m\<rightarrow> \<lless> \<pi>', Inj2 e' \<ggreater>" |
s_Case: "\<lbrakk> \<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Case e e\<^sub>1 e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', Case e' e\<^sub>1 e\<^sub>2 \<ggreater>" |
\<comment> \<open>Case rules are different from paper to account for recursive functions.\<close>
s_CaseInj1: "\<lbrakk> value v \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Case (Inj1 v) e\<^sub>1 e\<^sub>2 \<ggreater> _\<rightarrow> \<lless> \<pi>, App e\<^sub>1 v \<ggreater>" |
s_CaseInj2: "\<lbrakk> value v \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Case (Inj2 v) e\<^sub>1 e\<^sub>2 \<ggreater> _\<rightarrow> \<lless> \<pi>, App e\<^sub>2 v \<ggreater>" |
s_Pair1: "\<lbrakk> \<lless> \<pi>, e\<^sub>1 \<ggreater> m\<rightarrow> \<lless> \<pi>', e\<^sub>1' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Pair e\<^sub>1 e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', Pair e\<^sub>1' e\<^sub>2 \<ggreater>" |
s_Pair2: "\<lbrakk> value v\<^sub>1; \<lless> \<pi>, e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', e\<^sub>2' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Pair v\<^sub>1 e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', Pair v\<^sub>1 e\<^sub>2' \<ggreater>" |
s_Prj1: "\<lbrakk> \<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Prj1 e \<ggreater> m\<rightarrow> \<lless> \<pi>', Prj1 e' \<ggreater>" |
s_Prj2: "\<lbrakk> \<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Prj2 e \<ggreater> m\<rightarrow> \<lless> \<pi>', Prj2 e' \<ggreater>" |
s_PrjPair1: "\<lbrakk> value v\<^sub>1; value v\<^sub>2 \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Prj1 (Pair v\<^sub>1 v\<^sub>2) \<ggreater> _\<rightarrow> \<lless> \<pi>, v\<^sub>1 \<ggreater>" |
s_PrjPair2: "\<lbrakk> value v\<^sub>1; value v\<^sub>2 \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Prj2 (Pair v\<^sub>1 v\<^sub>2) \<ggreater> _\<rightarrow> \<lless> \<pi>, v\<^sub>2 \<ggreater>" |
s_Unroll: "\<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>
\<Longrightarrow> \<lless> \<pi>, Unroll e \<ggreater> m\<rightarrow> \<lless> \<pi>', Unroll e' \<ggreater>" |
s_Roll: "\<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>
\<Longrightarrow> \<lless> \<pi>, Roll e \<ggreater> m\<rightarrow> \<lless> \<pi>', Roll e' \<ggreater>" |
s_UnrollRoll:"\<lbrakk> value v \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Unroll (Roll v) \<ggreater> _\<rightarrow> \<lless> \<pi>, v \<ggreater>" |
\<comment> \<open>Mode-specific rules\<close>
s_Auth: "\<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>
\<Longrightarrow> \<lless> \<pi>, Auth e \<ggreater> m\<rightarrow> \<lless> \<pi>', Auth e' \<ggreater>" |
s_Unauth: "\<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>
\<Longrightarrow> \<lless> \<pi>, Unauth e \<ggreater> m\<rightarrow> \<lless> \<pi>', Unauth e' \<ggreater>" |
s_AuthI: "\<lbrakk> value v \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Auth v \<ggreater> I\<rightarrow> \<lless> \<pi>, v \<ggreater>" |
s_UnauthI: "\<lbrakk> value v \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Unauth v \<ggreater> I\<rightarrow> \<lless> \<pi>, v \<ggreater>" |
s_AuthP: "\<lbrakk> closed \<lparr>v\<rparr>; value v \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Auth v \<ggreater> P\<rightarrow> \<lless> \<pi>, Hashed (hash \<lparr>v\<rparr>) v \<ggreater>" |
s_UnauthP: "\<lbrakk> value v \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Unauth (Hashed h v) \<ggreater> P\<rightarrow> \<lless> \<pi> @ [\<lparr>v\<rparr>], v \<ggreater>" |
s_AuthV: "\<lbrakk> closed v; value v \<rbrakk>
\<Longrightarrow> \<lless> \<pi>, Auth v \<ggreater> V\<rightarrow> \<lless> \<pi>, Hash (hash v) \<ggreater>" |
s_UnauthV: "\<lbrakk> closed s\<^sub>0; hash s\<^sub>0 = h \<rbrakk>
\<Longrightarrow> \<lless> s\<^sub>0#\<pi>, Unauth (Hash h) \<ggreater> V\<rightarrow> \<lless> \<pi>, s\<^sub>0 \<ggreater>"
declare smallstep.intros[simp]
declare smallstep.intros[intro]
equivariance smallstep
nominal_inductive smallstep
avoids s_AppLam: x
| s_AppRec: x
| s_Let1: x
| s_Let2: x
by (auto simp add: fresh_Pair fresh_subst_term)
inductive smallsteps :: "proofstream \<Rightarrow> term \<Rightarrow> mode \<Rightarrow> nat \<Rightarrow> proofstream \<Rightarrow> term \<Rightarrow> bool" ("\<lless>_, _\<ggreater> _\<rightarrow>_ \<lless>_, _\<ggreater>") where
s_Id: "\<lless> \<pi>, e \<ggreater> _\<rightarrow>0 \<lless> \<pi>, e \<ggreater>" |
s_Tr: "\<lbrakk> \<lless> \<pi>\<^sub>1, e\<^sub>1 \<ggreater> m\<rightarrow>i \<lless> \<pi>\<^sub>2, e\<^sub>2 \<ggreater>; \<lless> \<pi>\<^sub>2, e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>\<^sub>3, e\<^sub>3 \<ggreater> \<rbrakk>
\<Longrightarrow> \<lless> \<pi>\<^sub>1, e\<^sub>1 \<ggreater> m\<rightarrow>(i+1) \<lless> \<pi>\<^sub>3, e\<^sub>3 \<ggreater>"
declare smallsteps.intros[simp]
declare smallsteps.intros[intro]
equivariance smallsteps
nominal_inductive smallsteps .
lemma steps_1_step[simp]: "\<lless> \<pi>, e \<ggreater> m\<rightarrow>1 \<lless> \<pi>', e' \<ggreater> = \<lless> \<pi>, e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>" (is "?L \<longleftrightarrow> ?R")
proof
assume ?L
then show ?R
proof (induct \<pi> e m "1::nat" \<pi>' e' rule: smallsteps.induct)
case (s_Tr \<pi>\<^sub>1 e\<^sub>1 m i \<pi>\<^sub>2 e\<^sub>2 \<pi>\<^sub>3 e\<^sub>3)
then show ?case
by (induct \<pi>\<^sub>1 e\<^sub>1 m i \<pi>\<^sub>2 e\<^sub>2 rule: smallsteps.induct) auto
qed simp
qed (auto intro: s_Tr[where i=0, simplified])
text \<open>Inversion rules for smallstep(s) predicates.\<close>
lemma value_no_step[intro]:
assumes "\<lless> \<pi>\<^sub>1, v \<ggreater> m\<rightarrow> \<lless> \<pi>\<^sub>2, t \<ggreater>" "value v"
shows "False"
using assms by (induct \<pi>\<^sub>1 v m \<pi>\<^sub>2 t rule: smallstep.induct) auto
lemma subst_term_perm:
assumes "atom x' \<sharp> (x, e)"
shows "e[v / x] = ((x \<leftrightarrow> x') \<bullet> e)[v / x']"
using assms [[simproc del: alpha_lst]]
by (nominal_induct e avoiding: x x' v rule: term.strong_induct)
(auto simp: fresh_Pair fresh_at_base(2) permute_hash_def)
inductive_cases s_Unit_inv[elim]: "\<lless> \<pi>\<^sub>1, Unit \<ggreater> m\<rightarrow> \<lless> \<pi>\<^sub>2, v \<ggreater>"
inductive_cases s_App_inv[consumes 1, case_names App1 App2 AppLam AppRec, elim]: "\<lless> \<pi>, App v\<^sub>1 v\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', e \<ggreater>"
lemma s_Let_inv':
assumes "\<lless> \<pi>, Let e\<^sub>1 x e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
and "atom x \<sharp> (e\<^sub>1,\<pi>)"
obtains e\<^sub>1' where "(e' = e\<^sub>2[e\<^sub>1 / x] \<and> value e\<^sub>1 \<and> \<pi> = \<pi>') \<or> (\<lless> \<pi>, e\<^sub>1 \<ggreater> m\<rightarrow> \<lless> \<pi>', e\<^sub>1' \<ggreater> \<and> e' = Let e\<^sub>1' x e\<^sub>2 \<and> \<not> value e\<^sub>1)"
using assms [[simproc del: alpha_lst]]
by (atomize_elim, induct \<pi> "Let e\<^sub>1 x e\<^sub>2" m \<pi>' e' rule: smallstep.induct)
(auto simp: fresh_Pair fresh_subst_term perm_supp_eq elim: Abs_lst1_fcb2')
lemma s_Let_inv[consumes 2, case_names Let1 Let2, elim]:
assumes "\<lless> \<pi>, Let e\<^sub>1 x e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
and "atom x \<sharp> (e\<^sub>1,\<pi>)"
and "e' = e\<^sub>2[e\<^sub>1 / x] \<and> value e\<^sub>1 \<and> \<pi> = \<pi>' \<Longrightarrow> Q"
and "\<And>e\<^sub>1'. \<lless> \<pi>, e\<^sub>1 \<ggreater> m\<rightarrow> \<lless> \<pi>', e\<^sub>1' \<ggreater> \<and> e' = Let e\<^sub>1' x e\<^sub>2 \<and> \<not> value e\<^sub>1 \<Longrightarrow> Q"
shows "Q"
using assms by (auto elim: s_Let_inv')
inductive_cases s_Case_inv[consumes 1, case_names Case Inj1 Inj2, elim]:
"\<lless> \<pi>, Case e e\<^sub>1 e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_Prj1_inv[consumes 1, case_names Prj1 PrjPair1, elim]:
"\<lless> \<pi>, Prj1 e \<ggreater> m\<rightarrow> \<lless> \<pi>', v \<ggreater>"
inductive_cases s_Prj2_inv[consumes 1, case_names Prj2 PrjPair2, elim]:
"\<lless> \<pi>, Prj2 e \<ggreater> m\<rightarrow> \<lless> \<pi>', v \<ggreater>"
inductive_cases s_Pair_inv[consumes 1, case_names Pair1 Pair2, elim]:
"\<lless> \<pi>, Pair e\<^sub>1 e\<^sub>2 \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_Inj1_inv[consumes 1, case_names Inj1, elim]:
"\<lless> \<pi>, Inj1 e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_Inj2_inv[consumes 1, case_names Inj2, elim]:
"\<lless> \<pi>, Inj2 e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_Roll_inv[consumes 1, case_names Roll, elim]:
"\<lless> \<pi>, Roll e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_Unroll_inv[consumes 1, case_names Unroll UnrollRoll, elim]:
"\<lless> \<pi>, Unroll e \<ggreater> m\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_AuthI_inv[consumes 1, case_names Auth AuthI, elim]:
"\<lless> \<pi>, Auth e \<ggreater> I\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_UnauthI_inv[consumes 1, case_names Unauth UnauthI, elim]:
"\<lless> \<pi>, Unauth e \<ggreater> I\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_AuthP_inv[consumes 1, case_names Auth AuthP, elim]:
"\<lless> \<pi>, Auth e \<ggreater> P\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_UnauthP_inv[consumes 1, case_names Unauth UnauthP, elim]:
"\<lless> \<pi>, Unauth e \<ggreater> P\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_AuthV_inv[consumes 1, case_names Auth AuthV, elim]:
"\<lless> \<pi>, Auth e \<ggreater> V\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_UnauthV_inv[consumes 1, case_names Unauth UnauthV, elim]:
"\<lless> \<pi>, Unauth e \<ggreater> V\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
inductive_cases s_Id_inv[elim]: "\<lless> \<pi>\<^sub>1, e\<^sub>1 \<ggreater> m\<rightarrow>0 \<lless> \<pi>\<^sub>2, e\<^sub>2 \<ggreater>"
inductive_cases s_Tr_inv[elim]: "\<lless> \<pi>\<^sub>1, e\<^sub>1 \<ggreater> m\<rightarrow>i \<lless> \<pi>\<^sub>3, e\<^sub>3 \<ggreater>"
text \<open>Freshness with smallstep.\<close>
lemma fresh_smallstep_P:
fixes x :: var
assumes "\<lless> \<pi>, e \<ggreater> P\<rightarrow> \<lless> \<pi>', e' \<ggreater>" "atom x \<sharp> e"
shows "atom x \<sharp> e'"
using assms by (induct \<pi> e P \<pi>' e' rule: smallstep.induct) (auto simp: fresh_subst_term)
lemma fresh_smallsteps_I:
fixes x :: var
assumes "\<lless> \<pi>, e \<ggreater> I\<rightarrow>i \<lless> \<pi>', e' \<ggreater>" "atom x \<sharp> e"
shows "atom x \<sharp> e'"
using assms by (induct \<pi> e I i \<pi>' e' rule: smallsteps.induct) (simp_all add: fresh_smallstep_I)
lemma fresh_ps_smallstep_P:
fixes x :: var
assumes "\<lless> \<pi>, e \<ggreater> P\<rightarrow> \<lless> \<pi>', e' \<ggreater>" "atom x \<sharp> e" "atom x \<sharp> \<pi>"
shows "atom x \<sharp> \<pi>'"
using assms proof (induct \<pi> e P \<pi>' e' rule: smallstep.induct)
case (s_UnauthP v \<pi> h)
then show ?case
by (simp add: fresh_Cons fresh_append fresh_shallow)
qed auto
text \<open>Proofstream lemmas.\<close>
lemma smallstepI_ps_eq:
assumes "\<lless> \<pi>, e \<ggreater> I\<rightarrow> \<lless> \<pi>', e' \<ggreater>"
shows "\<pi> = \<pi>'"
using assms by (induct \<pi> e I \<pi>' e' rule: smallstep.induct) auto
lemma smallstepI_ps_emptyD:
"\<lless>\<pi>, e\<ggreater> I\<rightarrow> \<lless>[], e'\<ggreater> \<Longrightarrow> \<lless>[], e\<ggreater> I\<rightarrow> \<lless>[], e'\<ggreater>"
"\<lless>[], e\<ggreater> I\<rightarrow> \<lless>\<pi>, e'\<ggreater> \<Longrightarrow> \<lless>[], e\<ggreater> I\<rightarrow> \<lless>[], e'\<ggreater>"
using smallstepI_ps_eq by force+
lemma smallstepsI_ps_eq:
assumes "\<lless>\<pi>, e\<ggreater> I\<rightarrow>i \<lless>\<pi>', e'\<ggreater>"
shows "\<pi> = \<pi>'"
using assms by (induct \<pi> e I i \<pi>' e' rule: smallsteps.induct) (auto dest: smallstepI_ps_eq)
lemma smallstepsI_ps_emptyD:
"\<lless>\<pi>, e\<ggreater> I\<rightarrow>i \<lless>[], e'\<ggreater> \<Longrightarrow> \<lless>[], e\<ggreater> I\<rightarrow>i \<lless>[], e'\<ggreater>"
"\<lless>[], e\<ggreater> I\<rightarrow>i \<lless>\<pi>, e'\<ggreater> \<Longrightarrow> \<lless>[], e\<ggreater> I\<rightarrow>i \<lless>[], e'\<ggreater>"
using smallstepsI_ps_eq by force+
lemma smallstepV_consumes_proofstream:
assumes "\<lless> \<pi>\<^sub>1, eV \<ggreater> V\<rightarrow> \<lless> \<pi>\<^sub>2, eV' \<ggreater>"
obtains \<pi> where "\<pi>\<^sub>1 = \<pi> @ \<pi>\<^sub>2"
using assms by (induct \<pi>\<^sub>1 eV V \<pi>\<^sub>2 eV' rule: smallstep.induct) auto
lemma smallstepsV_consumes_proofstream:
assumes "\<lless> \<pi>\<^sub>1, eV \<ggreater> V\<rightarrow>i \<lless> \<pi>\<^sub>2, eV' \<ggreater>"
obtains \<pi> where "\<pi>\<^sub>1 = \<pi> @ \<pi>\<^sub>2"
using assms by (induct \<pi>\<^sub>1 eV V i \<pi>\<^sub>2 eV' rule: smallsteps.induct)
(auto elim: smallstepV_consumes_proofstream)
lemma smallstepP_generates_proofstream:
assumes "\<lless> \<pi>\<^sub>1, eP \<ggreater> P\<rightarrow> \<lless> \<pi>\<^sub>2, eP' \<ggreater>"
obtains \<pi> where "\<pi>\<^sub>2 = \<pi>\<^sub>1 @ \<pi>"
using assms by (induct \<pi>\<^sub>1 eP P \<pi>\<^sub>2 eP' rule: smallstep.induct) auto
lemma smallstepsP_generates_proofstream:
assumes "\<lless> \<pi>\<^sub>1, eP \<ggreater> P\<rightarrow>i \<lless> \<pi>\<^sub>2, eP' \<ggreater>"
obtains \<pi> where "\<pi>\<^sub>2 = \<pi>\<^sub>1 @ \<pi>"
using assms by (induct \<pi>\<^sub>1 eP P i \<pi>\<^sub>2 eP' rule: smallsteps.induct)
(auto elim: smallstepP_generates_proofstream)
lemma smallstepV_ps_to_suffix:
assumes "\<lless>\<pi>, e\<ggreater> V\<rightarrow> \<lless>\<pi>' @ X, e'\<ggreater>"
obtains \<pi>'' where "\<pi> = \<pi>'' @ X"
using assms
by (induct \<pi> e V "\<pi>' @ X" e' rule: smallstep.induct) auto
lemma smallstepsV_ps_append:
"\<lless> \<pi>, eV \<ggreater> V\<rightarrow>i \<lless> \<pi>', eV' \<ggreater> \<longleftrightarrow> \<lless> \<pi> @ X, eV \<ggreater> V\<rightarrow>i \<lless> \<pi>' @ X, eV' \<ggreater>" (is "?L \<longleftrightarrow> ?R")
proof (rule iffI)
assume ?L then show ?R
proof (induct \<pi> eV V i \<pi>' eV' rule: smallsteps.induct)
case (s_Tr \<pi>\<^sub>1 e\<^sub>1 i \<pi>\<^sub>2 e\<^sub>2 \<pi>\<^sub>3 e\<^sub>3)
then show ?case
by (auto simp: iffD1[OF smallstepV_ps_append])
qed simp
next
assume ?R then show ?L
proof (induct "\<pi> @ X" eV V i "\<pi>' @ X" eV' arbitrary: \<pi>' rule: smallsteps.induct)
case (s_Tr e\<^sub>1 i \<pi>\<^sub>2 e\<^sub>2 e\<^sub>3)
from s_Tr(3) obtain \<pi>''' where "\<pi>\<^sub>2 = \<pi>''' @ X"
by (auto elim: smallstepV_ps_to_suffix)
with s_Tr show ?case
by (auto dest: iffD2[OF smallstepV_ps_append])
qed simp
qed
lemma smallstepP_ps_prepend:
"\<lless> \<pi>, eP \<ggreater> P\<rightarrow> \<lless> \<pi>', eP' \<ggreater> \<longleftrightarrow> \<lless> X @ \<pi>, eP \<ggreater> P\<rightarrow> \<lless> X @ \<pi>', eP' \<ggreater>" (is "?L \<longleftrightarrow> ?R")
proof (rule iffI)
assume ?L then show ?R
proof (nominal_induct \<pi> eP P \<pi>' eP' avoiding: X rule: smallstep.strong_induct)
case (s_UnauthP v \<pi> h)
then show ?case
by (subst append_assoc[symmetric, of X \<pi> "[\<lparr>v\<rparr>]"]) (erule smallstep.s_UnauthP)
qed (auto simp: fresh_append fresh_Pair)
next
assume ?R then show ?L
by (nominal_induct "X @ \<pi>" eP P "X @ \<pi>'" eP' avoiding: X rule: smallstep.strong_induct)
(auto simp: fresh_append fresh_Pair)
qed
lemma smallstepsP_ps_prepend:
"\<lless> \<pi>, eP \<ggreater> P\<rightarrow>i \<lless> \<pi>', eP' \<ggreater> \<longleftrightarrow> \<lless> X @ \<pi>, eP \<ggreater> P\<rightarrow>i \<lless> X @ \<pi>', eP' \<ggreater>" (is "?L \<longleftrightarrow> ?R")
proof (rule iffI)
assume ?L then show ?R
proof (induct \<pi> eP P i \<pi>' eP' rule: smallsteps.induct)
case (s_Tr \<pi>\<^sub>1 e\<^sub>1 i \<pi>\<^sub>2 e\<^sub>2 \<pi>\<^sub>3 e\<^sub>3)
then show ?case
by (auto simp: iffD1[OF smallstepP_ps_prepend])
qed simp
next
assume ?R then show ?L
proof (induct "X @ \<pi>" eP P i "X @ \<pi>'" eP' arbitrary: \<pi>' rule: smallsteps.induct)
case (s_Tr e\<^sub>1 i \<pi>\<^sub>2 e\<^sub>2 e\<^sub>3)
then obtain \<pi>'' where \<pi>'': "\<pi>\<^sub>2 = X @ \<pi> @ \<pi>''"
by (auto elim: smallstepsP_generates_proofstream)
then have "\<lless>\<pi>, e\<^sub>1\<ggreater> P\<rightarrow>i \<lless>\<pi> @ \<pi>'', e\<^sub>2\<ggreater>"
by (auto dest: s_Tr(2))
with \<pi>'' s_Tr(1,3) show ?case
by (auto dest: iffD2[OF smallstepP_ps_prepend])
qed simp
qed
subsection \<open>Type Progress\<close>
lemma type_progress:
assumes "{$$} \<turnstile>\<^sub>W e : \<tau>"
shows "value e \<or> (\<exists>e'. \<lless> [], e \<ggreater> I\<rightarrow> \<lless> [], e' \<ggreater>)"
using assms proof (nominal_induct "{$$} :: tyenv" e \<tau> rule: judge_weak.strong_induct)
case (jw_Let x e\<^sub>1 \<tau>\<^sub>1 e\<^sub>2 \<tau>\<^sub>2)
then show ?case
by (auto 0 3 simp: fresh_smallstep_I elim!: s_Let2[of e\<^sub>2]
intro: exI[where P="\<lambda>e. \<lless>_, _\<ggreater> _\<rightarrow> \<lless>_, e\<ggreater>", OF s_Let1, rotated])
next
case (jw_Prj1 v \<tau>\<^sub>1 \<tau>\<^sub>2)
then show ?case
by (auto elim!: jw_Prod_inv[of v \<tau>\<^sub>1 \<tau>\<^sub>2])
next
case (jw_Prj2 v \<tau>\<^sub>1 \<tau>\<^sub>2)
then show ?case
by (auto elim!: jw_Prod_inv[of v \<tau>\<^sub>1 \<tau>\<^sub>2])
next
case (jw_App e \<tau>\<^sub>1 \<tau>\<^sub>2 e')
then show ?case
by (auto 0 4 elim: jw_Fun_inv[of _ _ _ e'])
next
case (jw_Case v v\<^sub>1 v\<^sub>2 \<tau>\<^sub>1 \<tau>\<^sub>2 \<tau>)
then show ?case
by (auto 0 4 elim: jw_Sum_inv[of _ v\<^sub>1 v\<^sub>2])
qed fast+
subsection \<open>Weak Type Preservation\<close>
lemma fresh_tyenv_None:
fixes \<Gamma> :: tyenv
shows "atom x \<sharp> \<Gamma> \<longleftrightarrow> \<Gamma> $$ x = None" (is "?L \<longleftrightarrow> ?R")
proof
assume assm: ?L show ?R
proof (rule ccontr)
assume "\<Gamma> $$ x \<noteq> None"
then obtain \<tau> where "\<Gamma> $$ x = Some \<tau>" by blast
with assm have "\<forall>a :: var. atom a \<sharp> \<Gamma> \<longrightarrow> \<Gamma> $$ a = Some \<tau>"
using fmap_freshness_lemma_unique[OF exI, of x \<Gamma>]
by (simp add: fresh_Pair fresh_Some) metis
then have "{a :: var. atom a \<sharp> \<Gamma>} \<subseteq> fmdom' \<Gamma>"
by (auto simp: image_iff Ball_def fmlookup_dom'_iff)
moreover
{ assume "infinite {a :: var. \<not> atom a \<sharp> \<Gamma>}"
then have "infinite {a :: var. atom a \<in> supp \<Gamma>}"
unfolding fresh_def by auto
then have "infinite (supp \<Gamma>)"
by (rule contrapos_nn)
(auto simp: image_iff inv_f_f[of atom] inj_on_def
elim!: finite_surj[of _ _ "inv atom"] bexI[rotated])
then have False
using finite_supp[of \<Gamma>] by blast
}
then have "infinite {a :: var. atom a \<sharp> \<Gamma>}"
by auto
ultimately show False
using finite_subset[of "{a. atom a \<sharp> \<Gamma>}" "fmdom' \<Gamma>"] unfolding fmdom'_alt_def
by auto
qed
next
assume ?R then show ?L
proof (induct \<Gamma> arbitrary: x)
case (fmupd y z \<Gamma>)
then show ?case
by (cases "y = x") (auto intro: fresh_fmap_update)
qed simp
qed
lemma judge_weak_fresh_env_fresh_term[dest]:
fixes a :: var
assumes "\<Gamma> \<turnstile>\<^sub>W e : \<tau>" "atom a \<sharp> \<Gamma>"
shows "atom a \<sharp> e"
using assms proof (nominal_induct \<Gamma> e \<tau> avoiding: a rule: judge_weak.strong_induct)
case (jw_Var \<Gamma> x \<tau>)
then show ?case
by (cases "a = x") (auto simp: fresh_tyenv_None)
qed (simp_all add: fresh_Cons fresh_fmap_update)
lemma judge_weak_weakening_1:
assumes "\<Gamma> \<turnstile>\<^sub>W e : \<tau>" "atom y \<sharp> e"
shows "\<Gamma>(y $$:= \<tau>') \<turnstile>\<^sub>W e : \<tau>"
using assms proof (nominal_induct \<Gamma> e \<tau> avoiding: y \<tau>' rule: judge_weak.strong_induct)
case (jw_Lam x \<Gamma> \<tau>\<^sub>1 e \<tau>\<^sub>2)
from jw_Lam(5)[of y \<tau>'] jw_Lam(1-4,6) show ?case
by (auto simp add: fresh_at_base fmupd_reorder_neq fresh_fmap_update)
next
case (jw_App v v' \<Gamma> \<tau>\<^sub>1 \<tau>\<^sub>2)
then show ?case
by (force simp add: fresh_at_base fmupd_reorder_neq fresh_fmap_update)
next
case (jw_Let x \<Gamma> e\<^sub>1 \<tau>\<^sub>1 e\<^sub>2 \<tau>\<^sub>2)
from jw_Let(6)[of y \<tau>'] jw_Let(8)[of y \<tau>'] jw_Let(1-5,7,9) show ?case
by (auto simp add: fresh_at_base fmupd_reorder_neq fresh_fmap_update)
next
case (jw_Rec x \<Gamma> z \<tau>\<^sub>1 \<tau>\<^sub>2 e')
from jw_Rec(9)[of y \<tau>'] jw_Rec(1-8,10) show ?case
by (auto simp add: fresh_at_base fmupd_reorder_neq fresh_fmap_update fresh_Pair)
next
case (jw_Case v v\<^sub>1 v\<^sub>2 \<Gamma> \<tau>\<^sub>1 \<tau>\<^sub>2 \<tau>)
then show ?case
by (fastforce simp add: fresh_at_base fmupd_reorder_neq fresh_fmap_update)
next
case (jw_Roll \<alpha> \<Gamma> v \<tau>)
then show ?case
by (simp add: fresh_fmap_update)
next
case (jw_Unroll \<alpha> \<Gamma> v \<tau>)
then show ?case
by (simp add: fresh_fmap_update)
qed auto
lemma judge_weak_weakening_2:
assumes "\<Gamma> \<turnstile>\<^sub>W e : \<tau>" "atom y \<sharp> \<Gamma>"
shows "\<Gamma>(y $$:= \<tau>') \<turnstile>\<^sub>W e : \<tau>"
proof -
from assms have "atom y \<sharp> e"
by (rule judge_weak_fresh_env_fresh_term)
with assms show "\<Gamma>(y $$:= \<tau>') \<turnstile>\<^sub>W e : \<tau>" by (simp add: judge_weak_weakening_1)
qed
lemma judge_weak_weakening_env:
assumes "{$$} \<turnstile>\<^sub>W e : \<tau>"
shows "\<Gamma> \<turnstile>\<^sub>W e : \<tau>"
using assms proof (induct \<Gamma>)
case fmempty
then show ?case by assumption
next
case (fmupd x y \<Gamma>)
then show ?case
by (simp add: fresh_tyenv_None judge_weak_weakening_2)
qed
lemma value_subst_value:
assumes "value e" "value e'"
shows "value (e[e' / x])"
using assms by (induct e e' x rule: subst_term.induct) auto
lemma type_preservation:
assumes "\<lless> [], e \<ggreater> I\<rightarrow> \<lless> [], e' \<ggreater>" "{$$} \<turnstile>\<^sub>W e : \<tau>"
shows "{$$} \<turnstile>\<^sub>W e' : \<tau>"
using assms [[simproc del: alpha_lst]]
proof (nominal_induct "[]::proofstream" e I "[]::proofstream" e' arbitrary: \<tau> rule: smallstep.strong_induct)
case (s_AppLam v x e)
then show ?case by force
next
case (s_AppRec v x e)
then show ?case
by (elim jw_App_inv jw_Rec_inv) (auto 0 3 simp del: subst_term.simps)
next
case (s_Let1 x e\<^sub>1 e\<^sub>1' e\<^sub>2)
from s_Let1(1,2,7) show ?case
by (auto intro: s_Let1(6) del: jw_Let_inv elim!: jw_Let_inv)
next
case (s_Unroll e e')
then obtain \<alpha>::tvar where "atom \<alpha> \<sharp> \<tau>"
using obtain_fresh by blast
with s_Unroll show ?case
by (auto elim: jw_Unroll_inv[where \<alpha> = \<alpha>])
next
case (s_Roll e e')
then obtain \<alpha>::tvar where "atom \<alpha> \<sharp> \<tau>"
using obtain_fresh by blast
with s_Roll show ?case
by (auto elim: jw_Roll_inv[where \<alpha> = \<alpha>])
next
case (s_UnrollRoll v)
then obtain \<alpha>::tvar where "atom \<alpha> \<sharp> \<tau>"
using obtain_fresh by blast
with s_UnrollRoll show ?case
by (fastforce simp: Abs1_eq(3) elim: jw_Roll_inv[where \<alpha> = \<alpha>] jw_Unroll_inv[where \<alpha> = \<alpha>])
qed fastforce+
subsection \<open>Corrected Lemma 1 from Miller et al.~\<^cite>\<open>"adsg"\<close>: Weak Type Soundness\<close>
lemma type_soundness:
assumes "{$$} \<turnstile>\<^sub>W e : \<tau>"
shows "value e \<or> (\<exists>e'. \<lless> [], e \<ggreater> I\<rightarrow> \<lless> [], e' \<ggreater> \<and> {$$} \<turnstile>\<^sub>W e' : \<tau>)"
proof (cases "value e")
case True
then show ?thesis by simp
next
case False
with assms obtain e' where "\<lless>[], e\<ggreater> I\<rightarrow> \<lless>[], e'\<ggreater>" by (auto dest: type_progress)
with assms show ?thesis
by (auto simp: type_preservation)
qed
(*<*)
end
(*>*)
|
[STATEMENT]
lemma guarantees_Int_right:
"Z guarantees (X \<inter> Y) = (Z guarantees X) \<inter> (Z guarantees Y)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Z guarantees X \<inter> Y = (Z guarantees X) \<inter> (Z guarantees Y)
[PROOF STEP]
by (unfold guar_def, blast) |
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
P Q : Presheaf C X
f g : P ⟶ Q
w : ∀ (U : Opens ↑X), NatTrans.app f (op U) = NatTrans.app g (op U)
⊢ f = g
[PROOFSTEP]
apply NatTrans.ext
[GOAL]
case app
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
P Q : Presheaf C X
f g : P ⟶ Q
w : ∀ (U : Opens ↑X), NatTrans.app f (op U) = NatTrans.app g (op U)
⊢ f.app = g.app
[PROOFSTEP]
ext U
[GOAL]
case app.h
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
P Q : Presheaf C X
f g : P ⟶ Q
w : ∀ (U : Opens ↑X), NatTrans.app f (op U) = NatTrans.app g (op U)
U : (Opens ↑X)ᵒᵖ
⊢ NatTrans.app f U = NatTrans.app g U
[PROOFSTEP]
induction U with
| _ U => ?_
[GOAL]
case app.h.h
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
P Q : Presheaf C X
f g : P ⟶ Q
w : ∀ (U : Opens ↑X), NatTrans.app f (op U) = NatTrans.app g (op U)
U : Opens ↑X
⊢ NatTrans.app f (op U) = NatTrans.app g (op U)
[PROOFSTEP]
apply w
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
v w x y z : Opens ↑X
h₀ : v ≤ x
h₁ : x ≤ z ⊓ w
h₂ : x ≤ y ⊓ z
⊢ v ≤ y
[PROOFSTEP]
restrict_tac
[GOAL]
C✝ : Type u
inst✝² : Category.{v, u} C✝
X : TopCat
C : Type u_1
inst✝¹ : Category.{u_3, u_1} C
inst✝ : ConcreteCategory C
F : Presheaf C X
U V W : Opens ↑X
e₁ : U ≤ V
e₂ : V ≤ W
x : (forget C).obj (F.obj (op W))
⊢ x |_ V |_ U = x |_ U
[PROOFSTEP]
delta restrictOpen restrict
[GOAL]
C✝ : Type u
inst✝² : Category.{v, u} C✝
X : TopCat
C : Type u_1
inst✝¹ : Category.{u_3, u_1} C
inst✝ : ConcreteCategory C
F : Presheaf C X
U V W : Opens ↑X
e₁ : U ≤ V
e₂ : V ≤ W
x : (forget C).obj (F.obj (op W))
⊢ ↑(F.map (homOfLE (_ : ∀ ⦃a : ↑X⦄, a ∈ ↑U → a ∈ ↑V)).op) (↑(F.map (homOfLE (_ : ∀ ⦃a : ↑X⦄, a ∈ ↑V → a ∈ ↑W)).op) x) =
↑(F.map (homOfLE (_ : ∀ ⦃a : ↑X⦄, a ∈ ↑U → a ∈ ↑W)).op) x
[PROOFSTEP]
rw [← comp_apply, ← Functor.map_comp]
[GOAL]
C✝ : Type u
inst✝² : Category.{v, u} C✝
X : TopCat
C : Type u_1
inst✝¹ : Category.{u_3, u_1} C
inst✝ : ConcreteCategory C
F : Presheaf C X
U V W : Opens ↑X
e₁ : U ≤ V
e₂ : V ≤ W
x : (forget C).obj (F.obj (op W))
⊢ ↑(F.map ((homOfLE (_ : ∀ ⦃a : ↑X⦄, a ∈ ↑V → a ∈ ↑W)).op ≫ (homOfLE (_ : ∀ ⦃a : ↑X⦄, a ∈ ↑U → a ∈ ↑V)).op)) x =
↑(F.map (homOfLE (_ : ∀ ⦃a : ↑X⦄, a ∈ ↑U → a ∈ ↑W)).op) x
[PROOFSTEP]
rfl
[GOAL]
C✝ : Type u
inst✝² : Category.{v, u} C✝
X : TopCat
C : Type u_1
inst✝¹ : Category.{u_3, u_1} C
inst✝ : ConcreteCategory C
F G : Presheaf C X
e : F ⟶ G
U V : Opens ↑X
h : U ≤ V
x : (forget C).obj (F.obj (op V))
⊢ ↑(NatTrans.app e (op U)) (x |_ U) = ↑(NatTrans.app e (op V)) x |_ U
[PROOFSTEP]
delta restrictOpen restrict
[GOAL]
C✝ : Type u
inst✝² : Category.{v, u} C✝
X : TopCat
C : Type u_1
inst✝¹ : Category.{u_3, u_1} C
inst✝ : ConcreteCategory C
F G : Presheaf C X
e : F ⟶ G
U V : Opens ↑X
h : U ≤ V
x : (forget C).obj (F.obj (op V))
⊢ ↑(NatTrans.app e (op U)) (↑(F.map (homOfLE (_ : ∀ ⦃a : ↑X⦄, a ∈ ↑U → a ∈ ↑V)).op) x) =
↑(G.map (homOfLE (_ : ∀ ⦃a : ↑X⦄, a ∈ ↑U → a ∈ ↑V)).op) (↑(NatTrans.app e (op V)) x)
[PROOFSTEP]
rw [← comp_apply, NatTrans.naturality, comp_apply]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f g : X ⟶ Y
h : f = g
ℱ : Presheaf C X
⊢ f _* ℱ = g _* ℱ
[PROOFSTEP]
rw [h]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f g : X ⟶ Y
h : f = g
ℱ : Presheaf C X
U : (Opens ↑Y)ᵒᵖ
⊢ (Opens.map f).op.obj U ⟶ (Opens.map g).op.obj U
[PROOFSTEP]
dsimp [Functor.op]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f g : X ⟶ Y
h : f = g
ℱ : Presheaf C X
U : (Opens ↑Y)ᵒᵖ
⊢ op ((Opens.map f).obj U.unop) ⟶ op ((Opens.map g).obj U.unop)
[PROOFSTEP]
apply Quiver.Hom.op
[GOAL]
case f
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f g : X ⟶ Y
h : f = g
ℱ : Presheaf C X
U : (Opens ↑Y)ᵒᵖ
⊢ (Opens.map g).obj U.unop ⟶ (Opens.map f).obj U.unop
[PROOFSTEP]
apply eqToHom
[GOAL]
case f.p
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f g : X ⟶ Y
h : f = g
ℱ : Presheaf C X
U : (Opens ↑Y)ᵒᵖ
⊢ (Opens.map g).obj U.unop = (Opens.map f).obj U.unop
[PROOFSTEP]
rw [h]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f g : X ⟶ Y
h : f = g
ℱ : Presheaf C X
U : (Opens ↑Y)ᵒᵖ
⊢ NatTrans.app (pushforwardEq h ℱ).hom U =
ℱ.map (id (eqToHom (_ : (Opens.map g).obj U.unop = (Opens.map f).obj U.unop)).op)
[PROOFSTEP]
simp [pushforwardEq]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f g : X ⟶ Y
h : f = g
ℱ : Presheaf C X
U : (Opens ↑Y)ᵒᵖ
⊢ (Opens.map f).op.obj U = (Opens.map g).op.obj U
[PROOFSTEP]
rw [h]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f g : X ⟶ Y
h : f = g
ℱ : Presheaf C X
U : (Opens ↑Y)ᵒᵖ
⊢ NatTrans.app (eqToHom (_ : f _* ℱ = g _* ℱ)) U = ℱ.map (eqToHom (_ : (Opens.map f).op.obj U = (Opens.map g).op.obj U))
[PROOFSTEP]
rw [eqToHom_app, eqToHom_map]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C X
U : Opens ↑Y
⊢ NatTrans.app (pushforwardEq (_ : f = f) ℱ).hom (op U) = 𝟙 ((f _* ℱ).obj (op U))
[PROOFSTEP]
dsimp [pushforwardEq]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C X
U : Opens ↑Y
⊢ ℱ.map (𝟙 (op ((Opens.map f).obj U))) = 𝟙 (ℱ.obj (op ((Opens.map f).obj U)))
[PROOFSTEP]
simp
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
⊢ 𝟙 X _* ℱ = ℱ
[PROOFSTEP]
unfold pushforwardObj
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
⊢ (Opens.map (𝟙 X)).op ⋙ ℱ = ℱ
[PROOFSTEP]
rw [Opens.map_id_eq]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
⊢ (𝟭 (Opens ↑X)).op ⋙ ℱ = ℱ
[PROOFSTEP]
erw [Functor.id_comp]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
U : Set ↑X
p : IsOpen U
⊢ NatTrans.app (id ℱ).hom (op { carrier := U, is_open' := p }) = ℱ.map (𝟙 (op { carrier := U, is_open' := p }))
[PROOFSTEP]
dsimp [id]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
U : Set ↑X
p : IsOpen U
⊢ NatTrans.app (whiskerRight (NatTrans.op (Opens.mapId X).inv) ℱ ≫ (Functor.leftUnitor ℱ).hom)
(op { carrier := U, is_open' := p }) =
ℱ.map (𝟙 (op { carrier := U, is_open' := p }))
[PROOFSTEP]
simp [CategoryStruct.comp]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
U : (Opens ↑X)ᵒᵖ
⊢ NatTrans.app (id ℱ).hom U = ℱ.map (eqToHom (_ : (Opens.map (𝟙 X)).op.obj U = U))
[PROOFSTEP]
induction U
[GOAL]
case h
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
X✝ : Opens ↑X
⊢ NatTrans.app (id ℱ).hom (op X✝) = ℱ.map (eqToHom (_ : (Opens.map (𝟙 X)).op.obj (op X✝) = op X✝))
[PROOFSTEP]
apply id_hom_app'
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
U : Set ↑X
p : IsOpen U
⊢ NatTrans.app (id ℱ).inv (op { carrier := U, is_open' := p }) = ℱ.map (𝟙 (op { carrier := U, is_open' := p }))
[PROOFSTEP]
dsimp [id]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
U : Set ↑X
p : IsOpen U
⊢ NatTrans.app ((Functor.leftUnitor ℱ).inv ≫ whiskerRight (NatTrans.op (Opens.mapId X).hom) ℱ)
(op { carrier := U, is_open' := p }) =
ℱ.map (𝟙 (op { carrier := U, is_open' := p }))
[PROOFSTEP]
simp [CategoryStruct.comp]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
Y Z : TopCat
f : X ⟶ Y
g : Y ⟶ Z
U : (Opens ↑Z)ᵒᵖ
⊢ NatTrans.app (comp ℱ f g).hom U = 𝟙 (((f ≫ g) _* ℱ).obj U)
[PROOFSTEP]
simp [comp]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
ℱ : Presheaf C X
Y Z : TopCat
f : X ⟶ Y
g : Y ⟶ Z
U : (Opens ↑Z)ᵒᵖ
⊢ NatTrans.app (comp ℱ f g).inv U = 𝟙 ((g _* (f _* ℱ)).obj U)
[PROOFSTEP]
simp [comp]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f : X ⟶ Y
ℱ 𝒢 : Presheaf C X
α : ℱ ⟶ 𝒢
x✝¹ x✝ : (Opens ↑Y)ᵒᵖ
i : x✝¹ ⟶ x✝
⊢ (f _* ℱ).map i ≫ (fun U => NatTrans.app α ((Opens.map f).op.obj U)) x✝ =
(fun U => NatTrans.app α ((Opens.map f).op.obj U)) x✝¹ ≫ (f _* 𝒢).map i
[PROOFSTEP]
erw [α.naturality]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
f : X ⟶ Y
ℱ 𝒢 : Presheaf C X
α : ℱ ⟶ 𝒢
x✝¹ x✝ : (Opens ↑Y)ᵒᵖ
i : x✝¹ ⟶ x✝
⊢ NatTrans.app α ((Opens.map f).op.obj x✝¹) ≫ 𝒢.map ((Opens.map f).op.map i) =
(fun U => NatTrans.app α ((Opens.map f).op.obj U)) x✝¹ ≫ (f _* 𝒢).map i
[PROOFSTEP]
rfl
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
⊢ (pullbackObj f ℱ).obj (op U) ≅ ℱ.obj (op { carrier := ↑f '' ↑U, is_open' := H })
[PROOFSTEP]
let x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (@homOfLE _ _ _ ((Opens.map f).obj ⟨_, H⟩) (Set.image_preimage.le_u_l _)).op
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
⊢ (pullbackObj f ℱ).obj (op U) ≅ ℱ.obj (op { carrier := ↑f '' ↑U, is_open' := H })
[PROOFSTEP]
have hx : IsTerminal x :=
{ lift := fun s ↦ by
fapply CostructuredArrow.homMk
change op (unop _) ⟶ op (⟨_, H⟩ : Opens _)
· refine' (homOfLE _).op
apply (Set.image_subset f s.pt.hom.unop.le).trans
exact Set.image_preimage.l_u_le (SetLike.coe s.pt.left.unop)
·
simp
-- porting note : add `fac`, `uniq` manually
fac := fun _ _ => by ext; simp
uniq := fun _ _ _ => by ext; simp }
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
s : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
⊢ s.pt ⟶ (asEmptyCone x).pt
[PROOFSTEP]
fapply CostructuredArrow.homMk
[GOAL]
case g
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
s : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
⊢ s.pt.left ⟶ (asEmptyCone x).pt.left
case w
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
s : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
⊢ autoParam ((Opens.map f).op.map ?g ≫ (asEmptyCone x).pt.hom = s.pt.hom) _auto✝
[PROOFSTEP]
change op (unop _) ⟶ op (⟨_, H⟩ : Opens _)
[GOAL]
case g
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
s : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
⊢ op s.pt.1.unop ⟶ op { carrier := ↑f '' ↑U, is_open' := H }
[PROOFSTEP]
refine' (homOfLE _).op
[GOAL]
case g
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
s : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
⊢ { carrier := ↑f '' ↑U, is_open' := H } ≤ s.pt.1.unop
[PROOFSTEP]
apply (Set.image_subset f s.pt.hom.unop.le).trans
[GOAL]
case g
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
s : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
⊢ ↑f '' ↑((Opens.map f).op.obj s.pt.left).unop ⊆ ↑s.pt.1.unop
[PROOFSTEP]
exact Set.image_preimage.l_u_le (SetLike.coe s.pt.left.unop)
[GOAL]
case w
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
s : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
⊢ autoParam
((Opens.map f).op.map
(let_fun this := (homOfLE (_ : ↑f '' ↑((Functor.fromPUnit (op U)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op;
this) ≫
(asEmptyCone x).pt.hom =
s.pt.hom)
_auto✝
[PROOFSTEP]
simp
-- porting note : add `fac`, `uniq` manually
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
x✝¹ : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
x✝ : Discrete PEmpty
⊢ (fun s =>
CostructuredArrow.homMk
(let_fun this := (homOfLE (_ : ↑f '' ↑((Functor.fromPUnit (op U)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op;
this))
x✝¹ ≫
NatTrans.app (asEmptyCone x).π x✝ =
NatTrans.app x✝¹.π x✝
[PROOFSTEP]
ext
[GOAL]
case h
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
x✝¹ : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
x✝ : Discrete PEmpty
⊢ ((fun s =>
CostructuredArrow.homMk
(let_fun this :=
(homOfLE (_ : ↑f '' ↑((Functor.fromPUnit (op U)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op;
this))
x✝¹ ≫
NatTrans.app (asEmptyCone x).π x✝).left =
(NatTrans.app x✝¹.π x✝).left
[PROOFSTEP]
simp
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
x✝² : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
x✝¹ : x✝².pt ⟶ (asEmptyCone x).pt
x✝ : ∀ (j : Discrete PEmpty), x✝¹ ≫ NatTrans.app (asEmptyCone x).π j = NatTrans.app x✝².π j
⊢ x✝¹ =
(fun s =>
CostructuredArrow.homMk
(let_fun this := (homOfLE (_ : ↑f '' ↑((Functor.fromPUnit (op U)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op;
this))
x✝²
[PROOFSTEP]
ext
[GOAL]
case h
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
x✝² : Cone (Functor.empty (CostructuredArrow (Opens.map f).op (op U)))
x✝¹ : x✝².pt ⟶ (asEmptyCone x).pt
x✝ : ∀ (j : Discrete PEmpty), x✝¹ ≫ NatTrans.app (asEmptyCone x).π j = NatTrans.app x✝².π j
⊢ x✝¹.left =
((fun s =>
CostructuredArrow.homMk
(let_fun this := (homOfLE (_ : ↑f '' ↑((Functor.fromPUnit (op U)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op;
this))
x✝²).left
[PROOFSTEP]
simp
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
f : X ⟶ Y
ℱ : Presheaf C Y
U : Opens ↑X
H : IsOpen (↑f '' ↑U)
x : CostructuredArrow (Opens.map f).op (op U) :=
CostructuredArrow.mk (homOfLE (_ : ↑U ≤ (fun a => ↑f a) ⁻¹' ((fun a => ↑f a) '' ↑U))).op
hx : IsTerminal x
⊢ (pullbackObj f ℱ).obj (op U) ≅ ℱ.obj (op { carrier := ↑f '' ↑U, is_open' := H })
[PROOFSTEP]
exact IsColimit.coconePointUniqueUpToIso (colimit.isColimit _) (colimitOfDiagramTerminal hx _)
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U : (Opens ↑Y)ᵒᵖ
⊢ IsOpen (↑(𝟙 Y) '' ↑U.unop)
[PROOFSTEP]
simpa using U.unop.2
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U : (Opens ↑Y)ᵒᵖ
⊢ op { carrier := ↑(𝟙 Y) '' ↑U.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) } = U
[PROOFSTEP]
simp
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U V : (Opens ↑Y)ᵒᵖ
i : U ⟶ V
⊢ (pullbackObj (𝟙 Y) ℱ).map i ≫
((fun U =>
pullbackObjObjOfImageOpen (𝟙 Y) ℱ U.unop (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) ≪≫
ℱ.mapIso
(eqToIso (_ : op { carrier := ↑(𝟙 Y) '' ↑U.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) } = U)))
V).hom =
((fun U =>
pullbackObjObjOfImageOpen (𝟙 Y) ℱ U.unop (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) ≪≫
ℱ.mapIso
(eqToIso (_ : op { carrier := ↑(𝟙 Y) '' ↑U.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) } = U)))
U).hom ≫
ℱ.map i
[PROOFSTEP]
simp only [pullbackObj_obj]
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U V : (Opens ↑Y)ᵒᵖ
i : U ⟶ V
⊢ (pullbackObj (𝟙 Y) ℱ).map i ≫
(pullbackObjObjOfImageOpen (𝟙 Y) ℱ V.unop (_ : IsOpen (↑(𝟙 Y) '' ↑V.unop)) ≪≫
ℱ.mapIso
(eqToIso (_ : op { carrier := ↑(𝟙 Y) '' ↑V.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑V.unop)) } = V))).hom =
(pullbackObjObjOfImageOpen (𝟙 Y) ℱ U.unop (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) ≪≫
ℱ.mapIso
(eqToIso (_ : op { carrier := ↑(𝟙 Y) '' ↑U.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) } = U))).hom ≫
ℱ.map i
[PROOFSTEP]
ext
[GOAL]
case w
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U V : (Opens ↑Y)ᵒᵖ
i : U ⟶ V
j✝ : CostructuredArrow (Opens.map (𝟙 Y)).op U
⊢ colimit.ι (Lan.diagram (Opens.map (𝟙 Y)).op ℱ U) j✝ ≫
(pullbackObj (𝟙 Y) ℱ).map i ≫
(pullbackObjObjOfImageOpen (𝟙 Y) ℱ V.unop (_ : IsOpen (↑(𝟙 Y) '' ↑V.unop)) ≪≫
ℱ.mapIso
(eqToIso
(_ : op { carrier := ↑(𝟙 Y) '' ↑V.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑V.unop)) } = V))).hom =
colimit.ι (Lan.diagram (Opens.map (𝟙 Y)).op ℱ U) j✝ ≫
(pullbackObjObjOfImageOpen (𝟙 Y) ℱ U.unop (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) ≪≫
ℱ.mapIso
(eqToIso
(_ : op { carrier := ↑(𝟙 Y) '' ↑U.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) } = U))).hom ≫
ℱ.map i
[PROOFSTEP]
simp only [Functor.comp_obj, CostructuredArrow.proj_obj, pullbackObj_map, Iso.trans_hom, Functor.mapIso_hom,
eqToIso.hom, Category.assoc]
[GOAL]
case w
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U V : (Opens ↑Y)ᵒᵖ
i : U ⟶ V
j✝ : CostructuredArrow (Opens.map (𝟙 Y)).op U
⊢ colimit.ι (Lan.diagram (Opens.map (𝟙 Y)).op ℱ U) j✝ ≫
colimit.pre (Lan.diagram (Opens.map (𝟙 Y)).op ℱ V) (CostructuredArrow.map i) ≫
(pullbackObjObjOfImageOpen (𝟙 Y) ℱ V.unop (_ : IsOpen (↑(𝟙 Y) '' ↑V.unop))).hom ≫
ℱ.map (eqToHom (_ : op { carrier := ↑(𝟙 Y) '' ↑V.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑V.unop)) } = V)) =
colimit.ι (Lan.diagram (Opens.map (𝟙 Y)).op ℱ U) j✝ ≫
(pullbackObjObjOfImageOpen (𝟙 Y) ℱ U.unop (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop))).hom ≫
ℱ.map (eqToHom (_ : op { carrier := ↑(𝟙 Y) '' ↑U.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) } = U)) ≫
ℱ.map i
[PROOFSTEP]
erw [colimit.pre_desc_assoc, colimit.ι_desc_assoc, colimit.ι_desc_assoc]
[GOAL]
case w
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U V : (Opens ↑Y)ᵒᵖ
i : U ⟶ V
j✝ : CostructuredArrow (Opens.map (𝟙 Y)).op U
⊢ NatTrans.app
(Cocone.whisker (CostructuredArrow.map i)
(coconeOfDiagramTerminal
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(let_fun this :=
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op V.unop)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op;
this))
(Lan.diagram (Opens.map (𝟙 Y)).op ℱ (op V.unop)))).ι
j✝ ≫
ℱ.map (eqToHom (_ : op { carrier := ↑(𝟙 Y) '' ↑V.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑V.unop)) } = V)) =
NatTrans.app
(coconeOfDiagramTerminal
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(let_fun this :=
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op U.unop)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op;
this))
(Lan.diagram (Opens.map (𝟙 Y)).op ℱ (op U.unop))).ι
j✝ ≫
ℱ.map (eqToHom (_ : op { carrier := ↑(𝟙 Y) '' ↑U.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) } = U)) ≫
ℱ.map i
[PROOFSTEP]
dsimp
[GOAL]
case w
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U V : (Opens ↑Y)ᵒᵖ
i : U ⟶ V
j✝ : CostructuredArrow (Opens.map (𝟙 Y)).op U
⊢ ℱ.map
(IsTerminal.from
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op V.unop)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op)
((CostructuredArrow.map i).obj j✝)).left ≫
ℱ.map (eqToHom (_ : op { carrier := ↑(𝟙 Y) '' ↑V.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑V.unop)) } = V)) =
ℱ.map
(IsTerminal.from
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op U.unop)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op)
j✝).left ≫
ℱ.map (eqToHom (_ : op { carrier := ↑(𝟙 Y) '' ↑U.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) } = U)) ≫
ℱ.map i
[PROOFSTEP]
simp only [← ℱ.map_comp]
-- Porting note : `congr` does not work, but `congr 1` does
[GOAL]
case w
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U V : (Opens ↑Y)ᵒᵖ
i : U ⟶ V
j✝ : CostructuredArrow (Opens.map (𝟙 Y)).op U
⊢ ℱ.map
((IsTerminal.from
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op V.unop)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op)
((CostructuredArrow.map i).obj j✝)).left ≫
eqToHom (_ : op { carrier := ↑(𝟙 Y) '' ↑V.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑V.unop)) } = V)) =
ℱ.map
((IsTerminal.from
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op U.unop)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op)
j✝).left ≫
eqToHom (_ : op { carrier := ↑(𝟙 Y) '' ↑U.unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑U.unop)) } = U) ≫ i)
[PROOFSTEP]
congr 1
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U : Opens ↑Y
⊢ (Opens.map (𝟙 Y)).op.obj (op U) = op U
[PROOFSTEP]
simp
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U : Opens ↑Y
⊢ NatTrans.app (id ℱ).inv (op U) =
colimit.ι (Lan.diagram (Opens.map (𝟙 Y)).op ℱ (op U))
(CostructuredArrow.mk (eqToHom (_ : (Opens.map (𝟙 Y)).op.obj (op U) = op U)))
[PROOFSTEP]
rw [← Category.id_comp ((id ℱ).inv.app (op U)), ← NatIso.app_inv, Iso.comp_inv_eq]
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U : Opens ↑Y
⊢ 𝟙 (ℱ.obj (op U)) =
colimit.ι (Lan.diagram (Opens.map (𝟙 Y)).op ℱ (op U))
(CostructuredArrow.mk (eqToHom (_ : (Opens.map (𝟙 Y)).op.obj (op U) = op U))) ≫
((id ℱ).app (op U)).hom
[PROOFSTEP]
dsimp [id]
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U : Opens ↑Y
⊢ 𝟙 (ℱ.obj (op U)) =
colimit.ι (Lan.diagram (Opens.map (𝟙 Y)).op ℱ (op U)) (CostructuredArrow.mk (𝟙 (op ((Opens.map (𝟙 Y)).obj U)))) ≫
colimit.desc (Lan.diagram (Opens.map (𝟙 Y)).op ℱ (op U))
(coconeOfDiagramTerminal
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op U)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op)
(Lan.diagram (Opens.map (𝟙 Y)).op ℱ (op U))) ≫
ℱ.map
(eqToHom
(_ : op { carrier := ↑(𝟙 Y) '' ↑(op U).unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑(op U).unop)) } = op U))
[PROOFSTEP]
erw [colimit.ι_desc_assoc]
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U : Opens ↑Y
⊢ 𝟙 (ℱ.obj (op U)) =
NatTrans.app
(coconeOfDiagramTerminal
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op U)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op)
(Lan.diagram (Opens.map (𝟙 Y)).op ℱ (op U))).ι
(CostructuredArrow.mk (𝟙 (op ((Opens.map (𝟙 Y)).obj U)))) ≫
ℱ.map
(eqToHom
(_ : op { carrier := ↑(𝟙 Y) '' ↑(op U).unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑(op U).unop)) } = op U))
[PROOFSTEP]
dsimp
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U : Opens ↑Y
⊢ 𝟙 (ℱ.obj (op U)) =
ℱ.map
(IsTerminal.from
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op U)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op)
(CostructuredArrow.mk (𝟙 (op ((Opens.map (𝟙 Y)).obj U))))).left ≫
ℱ.map
(eqToHom
(_ : op { carrier := ↑(𝟙 Y) '' ↑(op U).unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑(op U).unop)) } = op U))
[PROOFSTEP]
rw [← ℱ.map_comp, ← ℱ.map_id]
[GOAL]
C : Type u
inst✝¹ : Category.{v, u} C
inst✝ : HasColimits C
X Y : TopCat
ℱ : Presheaf C Y
U : Opens ↑Y
⊢ ℱ.map (𝟙 (op U)) =
ℱ.map
((IsTerminal.from
(IsLimit.mk fun s =>
CostructuredArrow.homMk
(homOfLE (_ : ↑(𝟙 Y) '' ↑((Functor.fromPUnit (op U)).obj s.pt.right).unop ⊆ ↑s.pt.1.unop)).op)
(CostructuredArrow.mk (𝟙 (op ((Opens.map (𝟙 Y)).obj U))))).left ≫
eqToHom
(_ : op { carrier := ↑(𝟙 Y) '' ↑(op U).unop, is_open' := (_ : IsOpen (↑(𝟙 Y) '' ↑(op U).unop)) } = op U))
[PROOFSTEP]
rfl
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
⊢ pushforward C (𝟙 X) = 𝟭 (Presheaf C X)
[PROOFSTEP]
apply CategoryTheory.Functor.ext
[GOAL]
case h_map
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
⊢ autoParam
(∀ (X_1 Y : Presheaf C X) (f : X_1 ⟶ Y),
(pushforward C (𝟙 X)).map f =
eqToHom (_ : ?F.obj X_1 = ?G.obj X_1) ≫
(𝟭 (Presheaf C X)).map f ≫ eqToHom (_ : (𝟭 (Presheaf C X)).obj Y = (pushforward C (𝟙 X)).obj Y))
_auto✝
[PROOFSTEP]
intros a b f
[GOAL]
case h_map
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
a b : Presheaf C X
f : a ⟶ b
⊢ (pushforward C (𝟙 X)).map f =
eqToHom (_ : ?F.obj a = ?G.obj a) ≫
(𝟭 (Presheaf C X)).map f ≫ eqToHom (_ : (𝟭 (Presheaf C X)).obj b = (pushforward C (𝟙 X)).obj b)
[PROOFSTEP]
ext U
[GOAL]
case h_map.w
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
a b : Presheaf C X
f : a ⟶ b
U : Opens ↑X
⊢ NatTrans.app ((pushforward C (𝟙 X)).map f) (op U) =
NatTrans.app
(eqToHom (_ : ?F.obj a = ?G.obj a) ≫
(𝟭 (Presheaf C X)).map f ≫ eqToHom (_ : (𝟭 (Presheaf C X)).obj b = (pushforward C (𝟙 X)).obj b))
(op U)
[PROOFSTEP]
erw [NatTrans.congr f (Opens.op_map_id_obj (op U))]
[GOAL]
case h_map.w
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
a b : Presheaf C X
f : a ⟶ b
U : Opens ↑X
⊢ a.map (eqToHom (_ : (Opens.map (𝟙 X)).op.obj (op U) = op U)) ≫
NatTrans.app f (op U) ≫ b.map (eqToHom (_ : op U = (Opens.map (𝟙 X)).op.obj (op U))) =
NatTrans.app
(eqToHom (_ : ?F.obj a = ?G.obj a) ≫
(𝟭 (Presheaf C X)).map f ≫ eqToHom (_ : (𝟭 (Presheaf C X)).obj b = (pushforward C (𝟙 X)).obj b))
(op U)
case h_obj
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
⊢ ∀ (X_1 : Presheaf C X), (pushforward C (𝟙 X)).obj X_1 = (𝟭 (Presheaf C X)).obj X_1
[PROOFSTEP]
simp only [Functor.op_obj, eqToHom_refl, CategoryTheory.Functor.map_id, Category.comp_id, Category.id_comp,
Functor.id_obj, Functor.id_map]
[GOAL]
case h_obj
C : Type u
inst✝ : Category.{v, u} C
X : TopCat
⊢ ∀ (X_1 : Presheaf C X), (pushforward C (𝟙 X)).obj X_1 = (𝟭 (Presheaf C X)).obj X_1
[PROOFSTEP]
apply Pushforward.id_eq
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
H₁ : X ≅ Y
ℱ : Presheaf C X
𝒢 : Presheaf C Y
H₂ : H₁.hom _* ℱ ⟶ 𝒢
U : (Opens ↑X)ᵒᵖ
⊢ U = (Opens.map H₁.hom).op.obj (op ((Opens.map H₁.inv).obj U.unop))
[PROOFSTEP]
simp [Opens.map, Set.preimage_preimage]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
H₁ : X ≅ Y
ℱ : Presheaf C X
𝒢 : Presheaf C Y
H₂ : H₁.hom _* ℱ ⟶ 𝒢
U : (Opens ↑X)ᵒᵖ
⊢ NatTrans.app (toPushforwardOfIso H₁ H₂) U =
ℱ.map
(eqToHom
(_ :
U =
op
{
carrier :=
↑H₁.hom ⁻¹'
↑(op { carrier := ↑H₁.inv ⁻¹' ↑U.unop, is_open' := (_ : IsOpen (↑H₁.inv ⁻¹' ↑U.unop)) }).unop,
is_open' :=
(_ :
IsOpen
(↑H₁.hom ⁻¹'
↑(op
{ carrier := ↑H₁.inv ⁻¹' ↑U.unop,
is_open' := (_ : IsOpen (↑H₁.inv ⁻¹' ↑U.unop)) }).unop)) })) ≫
NatTrans.app H₂ (op ((Opens.map H₁.inv).obj U.unop))
[PROOFSTEP]
delta toPushforwardOfIso
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
H₁ : X ≅ Y
ℱ : Presheaf C X
𝒢 : Presheaf C Y
H₂ : H₁.hom _* ℱ ⟶ 𝒢
U : (Opens ↑X)ᵒᵖ
⊢ NatTrans.app (↑(Adjunction.homEquiv (Equivalence.toAdjunction (presheafEquivOfIso C H₁)) ℱ 𝒢) H₂) U =
ℱ.map
(eqToHom
(_ :
U =
op
{
carrier :=
↑H₁.hom ⁻¹'
↑(op { carrier := ↑H₁.inv ⁻¹' ↑U.unop, is_open' := (_ : IsOpen (↑H₁.inv ⁻¹' ↑U.unop)) }).unop,
is_open' :=
(_ :
IsOpen
(↑H₁.hom ⁻¹'
↑(op
{ carrier := ↑H₁.inv ⁻¹' ↑U.unop,
is_open' := (_ : IsOpen (↑H₁.inv ⁻¹' ↑U.unop)) }).unop)) })) ≫
NatTrans.app H₂ (op ((Opens.map H₁.inv).obj U.unop))
[PROOFSTEP]
simp only [pushforwardObj_obj, Functor.op_obj, Equivalence.toAdjunction, Adjunction.homEquiv_unit, Functor.id_obj,
Functor.comp_obj, Adjunction.mkOfUnitCounit_unit, unop_op, eqToHom_map]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
H₁ : X ≅ Y
ℱ : Presheaf C X
𝒢 : Presheaf C Y
H₂ : H₁.hom _* ℱ ⟶ 𝒢
U : (Opens ↑X)ᵒᵖ
⊢ NatTrans.app (NatTrans.app (Equivalence.unit (presheafEquivOfIso C H₁)) ℱ ≫ (presheafEquivOfIso C H₁).inverse.map H₂)
U =
eqToHom (_ : ℱ.obj U = ℱ.obj (op ((Opens.map H₁.hom).obj ((Opens.map H₁.inv).obj U.unop)))) ≫
NatTrans.app H₂ (op ((Opens.map H₁.inv).obj U.unop))
[PROOFSTEP]
rw [NatTrans.comp_app, presheafEquivOfIso_inverse_map_app, Equivalence.Equivalence_mk'_unit]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
H₁ : X ≅ Y
ℱ : Presheaf C X
𝒢 : Presheaf C Y
H₂ : H₁.hom _* ℱ ⟶ 𝒢
U : (Opens ↑X)ᵒᵖ
⊢ NatTrans.app (NatTrans.app (presheafEquivOfIso C H₁).unitIso.hom ℱ) U ≫
NatTrans.app H₂ (op ((Opens.map H₁.inv).obj U.unop)) =
eqToHom (_ : ℱ.obj U = ℱ.obj (op ((Opens.map H₁.hom).obj ((Opens.map H₁.inv).obj U.unop)))) ≫
NatTrans.app H₂ (op ((Opens.map H₁.inv).obj U.unop))
[PROOFSTEP]
congr 1
[GOAL]
case e_a
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
H₁ : X ≅ Y
ℱ : Presheaf C X
𝒢 : Presheaf C Y
H₂ : H₁.hom _* ℱ ⟶ 𝒢
U : (Opens ↑X)ᵒᵖ
⊢ NatTrans.app (NatTrans.app (presheafEquivOfIso C H₁).unitIso.hom ℱ) U =
eqToHom (_ : ℱ.obj U = ℱ.obj (op ((Opens.map H₁.hom).obj ((Opens.map H₁.inv).obj U.unop))))
[PROOFSTEP]
simp only [Equivalence.unit, Equivalence.op, CategoryTheory.Equivalence.symm, Opens.mapMapIso, Functor.id_obj,
Functor.comp_obj, Iso.symm_hom, NatIso.op_inv, Iso.symm_inv, NatTrans.op_app, NatIso.ofComponents_hom_app,
eqToIso.hom, eqToHom_op, Equivalence.Equivalence_mk'_unitInv, Equivalence.Equivalence_mk'_counitInv, NatIso.op_hom,
unop_op, op_unop, eqToIso.inv, NatIso.ofComponents_inv_app, eqToHom_unop, ← ℱ.map_comp, eqToHom_trans, eqToHom_map,
presheafEquivOfIso_unitIso_hom_app_app]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
H₁ : X ≅ Y
ℱ : Presheaf C Y
𝒢 : Presheaf C X
H₂ : ℱ ⟶ H₁.hom _* 𝒢
U : (Opens ↑X)ᵒᵖ
⊢ (Opens.map H₁.hom).op.obj (op ((Opens.map H₁.inv).obj U.unop)) = U
[PROOFSTEP]
simp [Opens.map, Set.preimage_preimage]
[GOAL]
C : Type u
inst✝ : Category.{v, u} C
X Y : TopCat
H₁ : X ≅ Y
ℱ : Presheaf C Y
𝒢 : Presheaf C X
H₂ : ℱ ⟶ H₁.hom _* 𝒢
U : (Opens ↑X)ᵒᵖ
⊢ NatTrans.app (pushforwardToOfIso H₁ H₂) U =
NatTrans.app H₂ (op ((Opens.map H₁.inv).obj U.unop)) ≫
𝒢.map
(eqToHom
(_ :
op
{
carrier :=
↑H₁.hom ⁻¹'
↑(op { carrier := ↑H₁.inv ⁻¹' ↑U.unop, is_open' := (_ : IsOpen (↑H₁.inv ⁻¹' ↑U.unop)) }).unop,
is_open' :=
(_ :
IsOpen
(↑H₁.hom ⁻¹'
↑(op
{ carrier := ↑H₁.inv ⁻¹' ↑U.unop,
is_open' := (_ : IsOpen (↑H₁.inv ⁻¹' ↑U.unop)) }).unop)) } =
U))
[PROOFSTEP]
simp [pushforwardToOfIso, Equivalence.toAdjunction, CategoryStruct.comp]
|
Formal statement is: lemma LIMSEQ_inverse_realpow_zero: "1 < x \<Longrightarrow> (\<lambda>n. inverse (x ^ n)) \<longlonglongrightarrow> 0" for x :: real Informal statement is: If $x > 1$, then $\frac{1}{x^n} \to 0$ as $n \to \infty$. |
module Trans where
open import Prelude
Rel : Set -> Set1
Rel A = A -> A -> Set
data [_]* {A : Set}(R : Rel A)(x : A) : A -> Set where
ref : [ R ]* x x
_▹_ : {y z : A} -> R x y -> [ R ]* y z -> [ R ]* x z
infixr 40 _▹_ _▹◃_
length : {A : Set}{R : Rel A}{x y : A} -> [ R ]* x y -> Nat
length ref = zero
length (x ▹ xs) = suc (length xs)
_=[_]=>_ : {A B : Set}(R : Rel A)(i : A -> B)(S : Rel B) -> Set
R =[ i ]=> S = forall {x y} -> R x y -> S (i x) (i y)
map : {A B : Set}{R : Rel A}{S : Rel B}(i : A -> B) ->
(R =[ i ]=> S) ->
{x y : A} -> [ R ]* x y -> [ S ]* (i x) (i y)
map i f ref = ref
map i f (x ▹ xs) = f x ▹ map i f xs
lem-length-map :
{A B : Set}{R : Rel A}{S : Rel B}(i : A -> B)
(f : R =[ i ]=> S){x y : A}(xs : [ R ]* x y) ->
length xs ≡ length (map {S = S} i f xs)
lem-length-map i f ref = refl
lem-length-map i f (x ▹ xs) = cong suc (lem-length-map i f xs)
_▹◃_ : {A : Set}{R : Rel A}{x y z : A} ->
[ R ]* x y -> [ R ]* y z -> [ R ]* x z
ref ▹◃ ys = ys
(x ▹ xs) ▹◃ ys = x ▹ (xs ▹◃ ys)
lem-length▹◃ : {A : Set}{R : Rel A}{x y z : A}
(r₁ : [ R ]* x y)(r₂ : [ R ]* y z) ->
length r₁ + length r₂ ≡ length (r₁ ▹◃ r₂)
lem-length▹◃ ref ys = refl
lem-length▹◃ (x ▹ xs) ys = cong suc (lem-length▹◃ xs ys)
|
The infnorm of a vector is always nonnegative. |
using Jecco.KG_3_1
grid = SpecCartGrid3D(
x_min = -5.0,
x_max = 5.0,
x_nodes = 64,
y_min = -5.0,
y_max = 5.0,
y_nodes = 64,
u_max = 1.001,
u_domains = 1,
u_nodes = 48,
)
potential = SquarePotential()
evoleq = AffineNull(
potential = potential,
)
id = Sine2D(
kx = 1,
ky = 2,
Lx = grid.x_max - grid.x_min,
Ly = grid.y_max - grid.y_min,
)
io = InOut(
out_bulk_every_t = 0.04,
checkpoint_every_walltime_hours = 1,
remove_existing = true,
)
integration = Integration(
dt = 0.002,
ODE_method = KG_3_1.RK4(),
adaptive = false,
tmax = 4.0,
)
run_model(grid, id, evoleq, integration, io)
|
module Divisors
import ZZ
%access public export
%default total
|||IsDivisibleZ a b can be constructed if b divides a
IsDivisibleZ : ZZ -> ZZ -> Type
IsDivisibleZ a b = (n : ZZ ** a = b * n)
|||1 divides everything
oneDiv : (a : ZZ) -> IsDivisibleZ a 1
oneDiv a = (a ** rewrite sym (multOneLeftNeutralZ a) in Refl)
|||Generates a proof of (a+b) = d*(n+m) from (a=d*n) and (b=d*m)
distributeProof: (a:ZZ)->(b:ZZ)->(d:ZZ)->
(n:ZZ)->(m:ZZ)->(a=d*n)->(b=d*m)->((a+b) = d*(n+m))
distributeProof a b d n m pf1 pf2 =
rewrite (multDistributesOverPlusRightZ d n m) in
(trans (the (a+b=(d*n)+b) (v1)) v2) where
v1 =plusConstantRightZ a (d*n) b pf1
v2 =plusConstantLeftZ b (d*m) (d*n) pf2
|||The theorem d|a =>d|ac
multDiv:(IsDivisibleZ a d) ->(c:ZZ)->(IsDivisibleZ (a*c) d)
multDiv {d} (n**Refl) c =
((n*c)** (rewrite sym (multAssociativeZ d n c) in (Refl)))
|||The theorem d|a =>d|ca
multDivLeft:(IsDivisibleZ a d) ->(c:ZZ)->(IsDivisibleZ (c*a) d)
multDivLeft{a} x c = rewrite (multCommutativeZ c a) in (multDiv x c)
|||The theorem d|a and d|b =>d|(a+b)
plusDiv : (IsDivisibleZ a d)->(IsDivisibleZ b d)->(IsDivisibleZ (a+b) d)
plusDiv {d}{a}{b} (n**prf1) (m**prf2) =
((n+m)**(distributeProof a b d n m prf1 prf2))
|||The theorem b|a and c|b =>c|a
transDivide : (IsDivisibleZ a b)->(IsDivisibleZ b c)->(IsDivisibleZ a c)
transDivide {c} (x ** pf1) (y ** pf2) =
(y*x ** (rewrite multAssociativeZ c y x in
(rewrite pf1 in (rewrite pf2 in Refl))))
|||If d divides a and b it divides a linear combination of a and b
linCombDiv:(m:ZZ)->(n:ZZ)->(IsDivisibleZ a d)->(IsDivisibleZ b d)->
(IsDivisibleZ ((a*m)+(b*n)) d)
linCombDiv m n dDiva dDivb =
plusDiv (multDiv dDiva m) (multDiv dDivb n)
|||The theorem that d|a and d|b implies d|(a+b*(-m))
euclidConservesDivisor:(m:ZZ)->(IsDivisibleZ a d)->(IsDivisibleZ b d)->
(IsDivisibleZ (a+(b*(-m))) d)
euclidConservesDivisor m dDiva dDivb = plusDiv dDiva (multDiv dDivb (-m) )
|||Any integer divides zero
zzDividesZero:(a:ZZ)->(IsDivisibleZ 0 a )
zzDividesZero a = (0**(sym (multZeroRightZeroZ a)))
|||A type that is occupied iff c is a common factor of a and b
IsCommonFactorZ : (a:ZZ) -> (b:ZZ) -> (c:ZZ) -> Type
IsCommonFactorZ a b c = ((IsDivisibleZ a c),(IsDivisibleZ b c))
|||The theorem that d is a common factor of a and b implies
|||d is a common factor of b and a
commonfactSym: IsCommonFactorZ a b d ->IsCommonFactorZ b a d
commonfactSym (dDiva, dDivb) = (dDivb,dDiva)
|||The GCD type that is occupied iff d = gcd (a,b).
||| Here GCD is defined as that positive integer such that any common factor
||| of a and b divides it
GCDZ : (a:ZZ) -> (b:ZZ) -> (d:ZZ) -> Type
GCDZ a b d = ((IsPositive d),(IsCommonFactorZ a b d),
({c:ZZ}->(IsCommonFactorZ a b c)->(IsDivisibleZ d c)))
|||Anything divides itself
selfDivide:(a:ZZ)->(IsDivisibleZ a a)
selfDivide a = (1**sym (multOneRightNeutralZ a))
|||Generates the proof that if c is a common factor of a and 0 then c divides a
gcdCondition : (a:ZZ) -> ({c:ZZ}->(IsCommonFactorZ a 0 c)->(IsDivisibleZ a c))
gcdCondition a {c} (cDiva,cDiv0) = cDiva
|||Proves that the GCD of a and 0 is a
gcdOfZeroAndInteger:(a:ZZ)->IsPositive a ->GCDZ a 0 a
gcdOfZeroAndInteger a pf =
(pf,((selfDivide a),(zzDividesZero a)),((gcdCondition a)))
|||The theorem, d|a =>d|(-a)
dDividesNegative:(IsDivisibleZ a d)->(IsDivisibleZ (-a) d)
dDividesNegative{a}{d} (x ** pf) =
((-x)**(multNegateRightIsNegateZ a d x pf))
|||The theorem that d|(-a) implies d|a
dDividesNegative2: (IsDivisibleZ (-a) d)->(IsDivisibleZ a d)
dDividesNegative2 {a}x = rewrite (sym (doubleNegElim a)) in (dDividesNegative x)
|||The theorem that c|b and c|(a+b*p) implies c|a
cDiva :{p:ZZ} ->(cDIvb :(IsDivisibleZ b c))->
(cDIvExp:IsDivisibleZ (a+(b*p)) c)->(IsDivisibleZ a c)
cDiva {p}{b}{a}{c} cDivb cDivExp =
rewrite (sym (addAndSubNeutralZ a (b*p))) in (
plusDiv cDivExp (dDividesNegative(multDiv cDivb (p))))
|||A helper function for euclidConservesGcd function
genFunctionForGcd :(f:({c:ZZ}->(IsCommonFactorZ a b c)->(IsDivisibleZ d c)))->
(({c:ZZ}->(IsCommonFactorZ b (a+(b*(-m))) c)->(IsDivisibleZ d c)))
genFunctionForGcd f (cDivb,cDivExp) =
f((cDiva cDivb cDivExp,cDivb))
|||The theorem, gcd(a,b)=d => gcd (b, a+ b(-m))=d
euclidConservesGcd :(m:ZZ)->(GCDZ a b d)->(GCDZ b (a+(b*(-m))) d)
euclidConservesGcd m (posProof, (dDiva,dDivb), f) =
(posProof,(dDivb,(euclidConservesDivisor m dDiva dDivb)),genFunctionForGcd f)
|||The theorem that if c and d are positive, then d|c => (d is less than or equal to c)
posDivPosImpliesLte:(IsDivisibleZ c d)->(IsPositive c)->
(IsPositive d)->LTEZ d c
posDivPosImpliesLte {d}{c}(x ** pf) cPos dPos =
posLteMultPosPosEqZ {q=x} d c dPos cPos pf
|||The theorem that if c and d are positive, d|c and c|d => (c=d)
posDivAndDivByImpliesEqual: (IsDivisibleZ c d)->(IsDivisibleZ d c)->(IsPositive c)
->(IsPositive d) -> (c=d)
posDivAndDivByImpliesEqual x y z x1 =lteAndGteImpliesEqualZ dLtec cLted where
dLtec =posDivPosImpliesLte x z x1
cLted =posDivPosImpliesLte y x1 z
|||Gcd of a and b is unique
gcdIsUnique: (GCDZ a b d)-> (GCDZ a b c)->(c=d)
gcdIsUnique {a}{b}{c}{d}(dPos, dCommonFactor,fd) (cPos, cCommonFactor,fc) =
posDivAndDivByImpliesEqual (fc dCommonFactor) (fd cCommonFactor) cPos dPos
|||A helper function for GcdSym
genFunctionForGcdSym:({c:ZZ}->(IsCommonFactorZ a b c)->(IsDivisibleZ d c))->
({c:ZZ}->(IsCommonFactorZ b a c)->(IsDivisibleZ d c))
genFunctionForGcdSym f x = f (commonfactSym x)
|||A helper function for negatingPreservesGcdLeft
genFunctionForGcdNeg:({c:ZZ}->(IsCommonFactorZ (-a) b c)->(IsDivisibleZ d c))->
({c:ZZ}->(IsCommonFactorZ a b c)->(IsDivisibleZ d c))
genFunctionForGcdNeg f (cDiva,cDivb) = f (cDivNega,cDivb) where
cDivNega = (dDividesNegative cDiva)
|||Proof that gcd (a,b)=gcd(b,a)
gcdSymZ: (GCDZ a b d)->(GCDZ b a d)
gcdSymZ (dPos,(dDiva,dDivb),fd) = (dPos, (dDivb, dDiva), (genFunctionForGcdSym fd))
|||Theorem that gcd(-a,b)=gcd(a,b)
negatingPreservesGcdLeft: (GCDZ (-a) b d)->(GCDZ a b d)
negatingPreservesGcdLeft (dPos,(dDivNega,dDivb),fd) =
(dPos,(dDiva,dDivb),(genFunctionForGcdNeg fd)) where
dDiva = dDividesNegative2 dDivNega
|||Theorem that gcd (p, -q) = gcd (p,q)
negatingPreservesGcdRight: (GCDZ p (-q) r)->(GCDZ p q r)
negatingPreservesGcdRight {p}{q} x =
gcdSymZ{a=q}{b=p} (negatingPreservesGcdLeft (gcdSymZ {a=p}{b=(-q)} x))
|||Theorem that if d|rem, d|b, and a = rem+(quot*b), then d|a
euclidConservesDivisorWithProof :{a:ZZ}->{b:ZZ}->{quot:ZZ}->{rem:ZZ}->
(a=rem+(quot*b))->(IsDivisibleZ rem d)->(IsDivisibleZ b d)->(IsDivisibleZ a d)
euclidConservesDivisorWithProof {a}{b}{quot}{rem}equality dDivrem dDivb =
rewrite equality in (plusDiv dDivrem (multDivLeft dDivb quot))
|||Theorem that a = rem+(quot*b) implies rem = a + (-quot)*b
auxEqProof:{a:ZZ}->{b:ZZ}->{quot:ZZ}->{rem:ZZ}->(a=rem+(quot *b))->
(rem = (a + (-quot)*b))
auxEqProof {a}{b}{quot}{rem}prf =
(rewrite (multNegateLeftZ quot b) in (sym (subOnBothSides a rem (quot*b) prf)))
|||A helper function for euclidConservesGcdWithProof
genfunction:(a=rem+(quot*b))-> ({c:ZZ}->(IsCommonFactorZ b rem c)->(IsDivisibleZ d c))->
({c:ZZ}->(IsCommonFactorZ a b c)->(IsDivisibleZ d c))
genfunction prf f (dDiva,dDivb) = f(dDivb,dDivrem) where
dDivrem = euclidConservesDivisorWithProof (auxEqProof prf) dDiva dDivb
|||Proof that if a = rem+(quot*b) and gcd(b,rem)=d, then gcd(a,b)=d
euclidConservesGcdWithProof: {a:ZZ}->{b:ZZ}->{quot:ZZ}->{rem:ZZ}->
(a=rem+(quot*b))->(GCDZ b rem d)->(GCDZ a b d)
euclidConservesGcdWithProof {a}{b}{quot}{rem}equality (dPos,(dDivb,dDivrem),fd) =
(dPos,(dDiva,dDivb),(genfunction equality fd)) where
dDiva = euclidConservesDivisorWithProof equality dDivrem dDivb
|
If $f$ and $g$ are uniformly continuous on a set $S$, then $f - g$ is uniformly continuous on $S$. |
module TestLearningCompositesInspection
using Test
using MLJBase
using ..Models
@load KNNRegressor
@constant X = source()
@constant y = source()
hot = OneHotEncoder()
hotM = machine(hot, X)
@constant W = transform(hotM, X)
knn = KNNRegressor()
knnM = machine(knn, W, y)
@constant yhat = predict(knnM, W)
@constant K = 2*X
@constant all = glb(yhat, K)
@test MLJBase.tree(yhat) == (operation = predict,
model = knn,
arg1 = (operation = transform,
model = hot,
arg1 = (source = X, ),
train_arg1 = (source = X, )),
train_arg1 = (operation = transform,
model = hot,
arg1 = (source = X, ),
train_arg1 = (source = X, )),
train_arg2 = (source = y,))
@test Set(models(yhat)) == Set([hot, knn])
@test Set(sources(yhat)) == Set([X, y])
@test Set(origins(yhat)) == Set([X,])
@test Set(machines(yhat)) == Set([knnM, hotM])
@test Set(MLJBase.args(yhat)) == Set([W, ])
@test Set(MLJBase.train_args(yhat)) == Set([W, y])
@test Set(MLJBase.children(X, all)) == Set([W, K])
@constant Q = 2X
@constant R = 3X
@constant S = glb(X, Q, R)
@test Set(MLJBase.children(X, S)) == Set([Q, R, S])
@test MLJBase.lower_bound([Int, Float64]) == Union{}
@test MLJBase.lower_bound([Int, Integer]) == Int
@test MLJBase.lower_bound([Int, Integer]) == Int
@test MLJBase.lower_bound([]) == Any
@test input_scitype(2X) == Unknown
@test input_scitype(yhat) == input_scitype(KNNRegressor())
W2 = transform(machine(UnivariateStandardizer(), X), X)
# @test input_scitype(X, glb(W, W2)) == Union{}
# @test input_scitype(X, glb(Q, W)) == Unknown
y1 = predict(machine(@load(DecisionTreeRegressor), X, y), X)
@test input_scitype(y1) == Table(Continuous, OrderedFactor, Count)
y2 = predict(machine(@load(KNNRegressor), X, y), X)
@test input_scitype(y2) == Table(Continuous)
# @test input_scitype(X, glb(y1, y2)) == Table(Continuous)
# @test input_scitype(X, glb(y1, y2, Q)) == Unknown
end
true
|
Formal statement is: lemma real_eq_affinity: "m \<noteq> 0 \<Longrightarrow> y = m * x + c \<longleftrightarrow> inverse m * y + - (c / m) = x" for m :: "'a::linordered_field" Informal statement is: If $m \neq 0$, then $y = mx + c$ if and only if $\frac{1}{m}y - \frac{c}{m} = x$. |
# Introduction to RLlib
RLlib is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications.
The following introduction was adapted from [rllib_exercises](https://github.com/ray-project/tutorial/blob/master/rllib_exercises/rllib_colab.ipynb).
For more information about RLlib and its open source community:
- [documentation](https://ray.readthedocs.io/en/latest/rllib.html)
- [GitHub repo](https://github.com/ray-project/ray/tree/master/rllib#rllib-scalable-reinforcement-learning)
- [project board](https://github.com/ray-project/ray/projects/6)
- [Slack sign-up](https://forms.gle/9TSdDYUgxYs8SA9e8)
- [Twitter](https://twitter.com/raydistributed)
## Install Dependencies
First, install the necessary dependencies before beginning the exercises:
```python
!pip install ray[rllib]
!pip install ray[debug]
!pip install ray[tune]
!pip install pandas
!pip install requests
!pip install tensorflow
```
## RLlib: Markov Decision Processes
**GOAL:** The goal of the exercise is to introduce the Markov Decision Process abstraction and to show its use in Python.
**The key abstraction in reinforcement learning is the Markov Decision Process (MDP).**
An MDP models sequential interactions with an external environment. It consists of the following:
- a **state space**
- a set of **actions**
- a **transition function** which describes the probability of being in a state $s'$ at time $t+1$ given that the MDP was in state $s$ at time $t$ and action $a$ was taken
- a **reward function**, which determines the reward received at time $t$
- a **discount factor** $\gamma$
More details are available [here](https://en.wikipedia.org/wiki/Markov_decision_process).
**NOTE:** Reinforcement learning algorithms are often applied to problems that don't strictly fit into the MDP framework. In particular, situations in which the state of the environment is not fully observed lead to violations of the MDP assumption. Nevertheless, RL algorithms can be applied anyway.
### Policies
A **policy** is a function that takes in a **state** and returns an **action**. A policy may be stochastic (i.e., it may sample from a probability distribution) or deterministic.
The **goal of reinforcement learning** is to learn a **policy** for maximizing the cumulative reward in an MDP. That is, we wish to find a policy $\pi$ which solves the following optimization problem
\begin{equation}
\arg\max_{\pi} \sum_{t=1}^T \gamma^t R_t(\pi),
\end{equation}
where $T$ is the number of steps taken in the MDP (this is a random variable and may depend on $\pi$) and $R_t$ is the reward received at time $t$ (also a random variable which depends on $\pi$).
A number of algorithms are available for solving reinforcement learning problems. Several of the most widely known are [value iteration](https://en.wikipedia.org/wiki/Markov_decision_process#Value_iteration), [policy iteration](https://en.wikipedia.org/wiki/Markov_decision_process#Policy_iteration), and [Q learning](https://en.wikipedia.org/wiki/Q-learning).
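To make these ideas concrete, below is a minimal value-iteration sketch on a tiny *hypothetical* MDP. The two states, transition probabilities, and rewards are made up purely for illustration; only the algorithm itself is standard:

```python
import numpy as np

# A toy MDP (all numbers are illustrative, not from any benchmark).
# P[s][a] is a list of (probability, next_state, reward) tuples.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
}
gamma = 0.9  # discount factor

# Repeatedly apply the Bellman optimality operator until (approximate) convergence.
V = np.zeros(len(P))
for _ in range(200):
    V = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in P[s])
        for s in sorted(P)
    ])

# Read off the greedy policy with respect to the converged values.
policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
print('V =', V, ' policy =', policy)
```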
### RL in Python
The `gym` Python module provides MDP interfaces to a variety of simulators. For example, the CartPole environment interfaces with a simple simulator which simulates the physics of balancing a pole on a cart. The CartPole problem is described at https://gym.openai.com/envs/CartPole-v0. This example fits into the MDP framework as follows.
- The **state** consists of the position and velocity of the cart as well as the angle and angular velocity of the pole that is balancing on the cart.
- The **actions** are to decrease or increase the cart's velocity by one unit.
- The **transition function** is deterministic and is determined by simulating physical laws.
- The **reward function** is a constant 1 as long as the pole is upright, and 0 once the pole has fallen over. Therefore, maximizing the reward means balancing the pole for as long as possible.
- The **discount factor** in this case can be taken to be 1.
More information about the `gym` Python module is available at https://gym.openai.com/.
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gym
import numpy as np
```
The code below illustrates how to create and manipulate MDPs in Python. An MDP can be created by calling `gym.make`. Gym environments are identified by names like `CartPole-v0`. A **catalog of built-in environments** can be found at https://gym.openai.com/envs.
```python
env = gym.make('CartPole-v0')
print('Created env:', env)
```
Reset the state of the MDP by calling `env.reset()`. This call returns the initial state of the MDP.
```python
state = env.reset()
print('The starting state is:', state)
```
The `env.step` method takes an action (in the case of the CartPole environment, the appropriate actions are 0 or 1, for moving left or right). It returns a tuple of four things:
1. the new state of the environment
2. a reward
3. a boolean indicating whether the simulation has finished
4. a dictionary of miscellaneous extra information
```python
# Simulate taking an action in the environment. Appropriate actions for
# the CartPole environment are 0 and 1 (for moving left and right).
action = 0
state, reward, done, info = env.step(action)
print(state, reward, done, info)
```
A **rollout** is a simulation of a policy in an environment. It alternates between choosing actions (using some policy) and taking those actions in the environment.
The code below performs a rollout in a given environment. It takes **random actions** until the simulation has finished and returns the cumulative reward.
```python
def random_rollout(env):
state = env.reset()
done = False
cumulative_reward = 0
# Keep looping as long as the simulation has not finished.
while not done:
# Choose a random action (either 0 or 1).
action = np.random.choice([0, 1])
# Take the action in the environment.
state, reward, done, _ = env.step(action)
# Update the cumulative reward.
cumulative_reward += reward
# Return the cumulative reward.
return cumulative_reward
reward = random_rollout(env)
print(reward)
reward = random_rollout(env)
print(reward)
```
**EXERCISE:** Finish implementing the `rollout_policy` function below, which should take an environment *and* a policy. The *policy* is a function that takes in a *state* and returns an *action*. The main difference is that instead of choosing a **random action**, the action should be chosen **with the policy** (as a function of the state).
```python
def rollout_policy(env, policy):
state = env.reset()
done = False
cumulative_reward = 0
# EXERCISE: Fill out this function by copying the 'random_rollout' function
# and then modifying it to choose the action using the policy.
raise NotImplementedError
# Return the cumulative reward.
return cumulative_reward
def sample_policy1(state):
return 0 if state[0] < 0 else 1
def sample_policy2(state):
return 1 if state[0] < 0 else 0
reward1 = np.mean([rollout_policy(env, sample_policy1) for _ in range(100)])
reward2 = np.mean([rollout_policy(env, sample_policy2) for _ in range(100)])
print('The first sample policy got an average reward of {}.'.format(reward1))
print('The second sample policy got an average reward of {}.'.format(reward2))
assert 5 < reward1 < 15, ('Make sure that rollout_policy computes the action '
'by applying the policy to the state.')
assert 25 < reward2 < 35, ('Make sure that rollout_policy computes the action '
'by applying the policy to the state.')
```
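One possible solution to the exercise (a sketch; any implementation that applies the policy to the current state works):
```python
def rollout_policy(env, policy):
    state = env.reset()
    done = False
    cumulative_reward = 0
    # Keep looping as long as the simulation has not finished.
    while not done:
        # Choose the action with the policy instead of at random.
        action = policy(state)
        # Take the action in the environment.
        state, reward, done, _ = env.step(action)
        # Update the cumulative reward.
        cumulative_reward += reward
    # Return the cumulative reward.
    return cumulative_reward
```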
## RLlib: Proximal Policy Optimization
**GOAL:** The goal of this exercise is to demonstrate how to use the proximal policy optimization (PPO) algorithm.
To understand how to use **RLlib**, see the documentation at http://rllib.io.
PPO is described in detail in https://arxiv.org/abs/1707.06347. It is a variant of Trust Region Policy Optimization (TRPO), described in https://arxiv.org/abs/1502.05477.
PPO works in two phases. In one phase, a large number of rollouts are performed (in parallel). The rollouts are then aggregated on the driver and a surrogate optimization objective is defined based on those rollouts. We then use SGD to find the policy that maximizes that objective with a penalty term for diverging too much from the current policy.
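The surrogate objective itself (in the clipped form from the PPO paper) can be sketched as follows. This is only an illustration, not RLlib's internal implementation; `ratio` is the probability ratio between the new and old policies for the sampled actions, and `advantage` is an advantage estimate, both assumed to be precomputed from the rollouts:
```python
import numpy as np

def ppo_clipped_objective(ratio, advantage, epsilon=0.2):
    # Unclipped and clipped surrogate terms.
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
    # PPO maximizes the expectation of the elementwise minimum, which
    # penalizes moving the new policy too far from the old one.
    return np.mean(np.minimum(unclipped, clipped))
```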
**NOTE:** The SGD optimization step is best performed in a data-parallel manner over multiple GPUs. This is exposed through the `num_gpus` field of the `config` dictionary (for this to work, you must be using a machine that has GPUs).
```python
# Be sure to install the latest version of RLlib.
! pip install -U ray[rllib]
```
```python
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gym
import ray
from ray.rllib.agents.ppo import PPOTrainer, DEFAULT_CONFIG
from ray.tune.logger import pretty_print
```
```python
# Start up Ray. This must be done before we instantiate any RL agents.
ray.init(num_cpus=3, ignore_reinit_error=True, log_to_driver=False)
```
Instantiate a PPOTrainer object. We pass in a config object that specifies how the network and training procedure should be configured. Some of the parameters are the following.
- `num_workers` is the number of actors that the agent will create. This determines the degree of parallelism that will be used.
- `num_sgd_iter` is the number of epochs of SGD (passes through the data) that will be used to optimize the PPO surrogate objective at each iteration of PPO.
- `sgd_minibatch_size` is the SGD batch size that will be used to optimize the PPO surrogate objective.
- `model` contains a dictionary of parameters describing the neural net used to parameterize the policy. The `fcnet_hiddens` parameter is a list of the sizes of the hidden layers.
```python
config = DEFAULT_CONFIG.copy()
config['num_workers'] = 1
config['num_sgd_iter'] = 30
config['sgd_minibatch_size'] = 128
config['model']['fcnet_hiddens'] = [100, 100]
config['num_cpus_per_worker'] = 0 # This avoids running out of resources in the notebook environment when this cell is re-executed
agent = PPOTrainer(config, 'CartPole-v0')
```
Train the policy on the `CartPole-v0` environment for 2 training iterations. The CartPole problem is described at https://gym.openai.com/envs/CartPole-v0.
**EXERCISE:** Inspect how well the policy is doing by looking for the lines that say something like
```
episode_len_mean: 22.262569832402235
episode_reward_mean: 22.262569832402235
```
This indicates how much reward the policy is receiving and how many time steps of the environment the policy ran. The maximum possible reward for this problem is 200. The reward and trajectory length are very close because the agent receives a reward of one for every time step that it survives (however, that is specific to this environment).
```python
for i in range(2):
result = agent.train()
print(pretty_print(result))
```
**EXERCISE:** The current network and training configuration are too large and heavy-duty for a simple problem like CartPole. Modify the configuration to use a smaller network and to speed up the optimization of the surrogate objective (fewer SGD iterations and a larger batch size should help).
```python
config = DEFAULT_CONFIG.copy()
config['num_workers'] = 3
config['num_sgd_iter'] = 30
config['sgd_minibatch_size'] = 128
config['model']['fcnet_hiddens'] = [100, 100]
config['num_cpus_per_worker'] = 0
agent = PPOTrainer(config, 'CartPole-v0')
```
**EXERCISE:** Train the agent and try to get a reward of 200. If it's training too slowly you may need to modify the config above to use fewer hidden units, a larger `sgd_minibatch_size`, a smaller `num_sgd_iter`, or a larger `num_workers`.
This should take around 20 or 30 training iterations.
```python
for i in range(2):  # EXERCISE: increase this (e.g., to 30) until the reward reaches 200.
result = agent.train()
print(pretty_print(result))
```
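One possible shape for that training loop (a sketch; it relies only on the `episode_reward_mean` field shown in the training logs above):
```python
for i in range(50):
    result = agent.train()
    print('iteration', i, 'mean reward', result['episode_reward_mean'])
    if result['episode_reward_mean'] >= 200:
        break
```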
Checkpoint the current model. The call to `agent.save()` returns the path to the checkpointed model and can be used later to restore the model.
```python
checkpoint_path = agent.save()
print(checkpoint_path)
```
Now let's use the trained policy to make predictions.
**NOTE:** Here we are loading the trained policy in the same process, but in practice, this would often be done in a different process (probably on a different machine).
```python
trained_config = config.copy()
test_agent = PPOTrainer(trained_config, 'CartPole-v0')
test_agent.restore(checkpoint_path)
```
Now use the trained policy to act in an environment. The key line is the call to `test_agent.compute_action(state)` which uses the trained policy to choose an action.
**EXERCISE:** Verify that the reward received roughly matches up with the reward printed in the training logs.
```python
env = gym.make('CartPole-v0')
state = env.reset()
done = False
cumulative_reward = 0
while not done:
action = test_agent.compute_action(state)
state, reward, done, _ = env.step(action)
cumulative_reward += reward
print(cumulative_reward)
```
|
module da_minimisation
!---------------------------------------------------------------------------
! Purpose: Collection of routines associated with minimisation.
!---------------------------------------------------------------------------
use module_configure, only : grid_config_rec_type
use module_dm, only : wrf_dm_sum_real, wrf_dm_sum_integer
#ifdef DM_PARALLEL
use module_dm, only : local_communicator, mytask, ntasks, ntasks_x, &
ntasks_y, data_order_xy, data_order_xyz
use module_comm_dm, only : halo_wpec_sub, halo_wpec_adj_sub
#endif
use module_domain, only : domain, ep_type, vp_type, x_type, domain_clockprint, &
domain_clockadvance, domain_clock_get, domain_clock_set
use module_state_description, only : dyn_em,dyn_em_tl,dyn_em_ad,p_g_qv, &
p_g_qc, p_g_qr, num_moist, PARAM_FIRST_SCALAR
!#ifdef DM_PARALLEL
! use mpi, only : mpi_barrier
!#endif
use da_airep, only : da_calculate_grady_airep, da_ao_stats_airep, &
da_oi_stats_airep, da_get_innov_vector_airep, da_residual_airep, &
da_jo_and_grady_airep
use da_airsr , only : da_calculate_grady_airsr, da_ao_stats_airsr, &
da_oi_stats_airsr, da_get_innov_vector_airsr, da_residual_airsr, &
da_jo_and_grady_airsr
use da_bogus, only : da_calculate_grady_bogus, da_ao_stats_bogus, &
da_oi_stats_bogus, da_get_innov_vector_bogus, da_residual_bogus, &
da_jo_and_grady_bogus
use da_buoy , only : da_calculate_grady_buoy, da_ao_stats_buoy, &
da_oi_stats_buoy,da_get_innov_vector_buoy, da_residual_buoy, &
da_jo_and_grady_buoy
use da_control, only : trace_use, var4d_bin, trajectory_io, analysis_date, &
var4d, rootproc,jcdfi_use,jcdfi_diag,ierr,comm,num_fgat_time, &
var4d_lbc, stdout, eps, stats_unit, test_dm_exact, global, multi_inc, &
calculate_cg_cost_fn,anal_type_randomcv,cv_size_domain,je_factor, &
jb_factor,ntmax,omb_add_noise,write_iv_rad_ascii,use_obs_errfac, &
rtm_option,rtm_option_rttov, rtm_option_crtm, anal_type_verify, &
write_filtered_rad,omb_set_rand,use_rad,var_scaling2,var_scaling1, &
var_scaling4,var_scaling5,var_scaling3, jo_unit, test_gradient, &
      print_detail_grad,grad_unit,cost_unit, num_pseudo, cv_options, &
cv_size_domain_je,cv_size_domain_jb, cv_size_domain_jp, cv_size_domain_js, cv_size_domain_jl, cv_size_domain_jt, &
sound, mtgirs, sonde_sfc, synop, profiler, gpsref, gpseph, gpspw, polaramv, geoamv, ships, metar, &
satem, radar, ssmi_rv, ssmi_tb, ssmt1, ssmt2, airsr, pilot, airep,tamdar, tamdar_sfc, rain, &
bogus, buoy, qscat,pseudo, radiance, monitor_on, max_ext_its, use_rttov_kmatrix,&
use_crtm_kmatrix,precondition_cg, precondition_factor, use_varbc, varbc_factor, &
biasprep, qc_rad, num_procs, myproc, use_gpspwobs, use_rainobs, use_gpsztdobs, &
      use_radar_rf, radar_rf_opt,radar_rf_rscl,radar_rv_rscl,use_radar_rhv,use_radar_rqv,pseudo_var, &
num_ob_indexes, num_ob_vars, npres_print, pptop, ppbot, qcstat_conv_unit, gas_constant, &
orthonorm_gradient, its, ite, jts, jte, kts, kte, ids, ide, jds, jde, kds, kde, cp, &
use_satcv, sensitivity_option, print_detail_outerloop, adj_sens, filename_len, &
ims, ime, jms, jme, kms, kme, ips, ipe, jps, jpe, kps, kpe, fgat_rain_flags, var4d_bin_rain, freeze_varbc, &
use_wpec, wpec_factor, use_4denvar, anal_type_hybrid_dual_res, alphacv_method, alphacv_method_xa, &
write_detail_grad_fn, pseudo_uvtpq, lanczos_ep_filename, use_divc, divc_factor, &
cloud_cv_options, use_cv_w, var_scaling6, var_scaling7, var_scaling8, var_scaling9, &
var_scaling10, var_scaling11, &
write_gts_omb_oma, write_unpert_obs, write_rej_obs_conv, pseudo_time, &
use_varbc_tamdar, varbc_tamdar_nobsmin, varbc_tamdar_unit
use da_define_structures, only : iv_type, y_type, j_type, be_type, &
xbx_type, jo_type, da_allocate_y,da_zero_x,da_zero_y,da_deallocate_y, &
da_zero_vp_type, qhat_type
use da_dynamics, only : da_wpec_constraint_lin,da_wpec_constraint_adj, &
da_divergence_constraint, da_divergence_constraint_adj
use da_obs, only : da_transform_xtoy_adj,da_transform_xtoy, &
da_add_noise_to_ob,da_random_omb_all, da_obs_sensitivity
use da_geoamv, only : da_calculate_grady_geoamv, da_ao_stats_geoamv, &
da_oi_stats_geoamv, da_get_innov_vector_geoamv,da_residual_geoamv, &
da_jo_and_grady_geoamv
use da_gpspw, only : da_calculate_grady_gpspw, da_ao_stats_gpspw, &
da_oi_stats_gpspw, da_get_innov_vector_gpspw, da_residual_gpspw, &
da_jo_and_grady_gpspw, da_get_innov_vector_gpsztd
use da_gpsref, only : da_calculate_grady_gpsref, da_ao_stats_gpsref, &
da_oi_stats_gpsref, da_get_innov_vector_gpsref, da_residual_gpsref, &
da_jo_and_grady_gpsref
use da_gpseph, only : da_calculate_grady_gpseph, da_ao_stats_gpseph, &
da_oi_stats_gpseph, da_get_innov_vector_gpseph, da_residual_gpseph, &
da_jo_and_grady_gpseph
use da_obs_io, only : da_final_write_y, da_write_y, da_final_write_obs, &
da_write_obs,da_write_obs_etkf,da_write_noise_to_ob, da_use_obs_errfac, &
da_write_iv_for_multi_inc, da_read_iv_for_multi_inc
use da_metar, only : da_calculate_grady_metar, da_ao_stats_metar, &
da_oi_stats_metar, da_get_innov_vector_metar, da_residual_metar, &
da_jo_and_grady_metar
use da_pilot, only : da_calculate_grady_pilot, da_ao_stats_pilot, &
da_oi_stats_pilot, da_get_innov_vector_pilot, da_residual_pilot, &
da_jo_and_grady_pilot
use da_par_util, only : da_system,da_cv_to_global
use da_par_util1, only : da_proc_sum_real,da_proc_sum_ints
use da_polaramv, only : da_calculate_grady_polaramv, da_ao_stats_polaramv, &
da_oi_stats_polaramv, da_get_innov_vector_polaramv, da_residual_polaramv, &
da_jo_and_grady_polaramv
use da_profiler, only : da_calculate_grady_profiler, da_ao_stats_profiler, &
da_oi_stats_profiler,da_get_innov_vector_profiler, da_residual_profiler, &
da_jo_and_grady_profiler
use da_pseudo, only : da_calculate_grady_pseudo, da_ao_stats_pseudo, &
da_oi_stats_pseudo, da_get_innov_vector_pseudo, da_residual_pseudo, &
da_jo_and_grady_pseudo
use da_qscat, only : da_calculate_grady_qscat, da_ao_stats_qscat, &
da_oi_stats_qscat, da_get_innov_vector_qscat, da_residual_qscat, &
da_jo_and_grady_qscat
use da_mtgirs, only : da_calculate_grady_mtgirs, &
      da_ao_stats_mtgirs, da_oi_stats_mtgirs, &
da_get_innov_vector_mtgirs, &
da_jo_and_grady_mtgirs, da_residual_mtgirs
use da_tamdar, only : da_calculate_grady_tamdar, &
      da_ao_stats_tamdar, da_oi_stats_tamdar, &
da_get_innov_vector_tamdar, &
da_jo_and_grady_tamdar, da_residual_tamdar, &
da_calculate_grady_tamdar_sfc, &
      da_ao_stats_tamdar_sfc, da_oi_stats_tamdar_sfc, &
da_get_innov_vector_tamdar_sfc, &
da_jo_and_grady_tamdar_sfc, da_residual_tamdar_sfc
use da_varbc_tamdar, only : da_varbc_tamdar_tl,da_varbc_tamdar_adj, &
da_varbc_tamdar_direct,da_varbc_tamdar_precond
#if defined(RTTOV) || defined(CRTM)
use da_radiance, only : da_calculate_grady_rad, da_write_filtered_rad, &
da_get_innov_vector_radiance, satinfo
use da_radiance1, only : da_ao_stats_rad,da_oi_stats_rad, &
da_write_iv_rad_ascii,da_residual_rad,da_jo_and_grady_rad, &
da_biasprep, da_qc_rad
#endif
use da_radar, only : da_calculate_grady_radar, da_ao_stats_radar, &
da_oi_stats_radar, da_get_innov_vector_radar, da_residual_radar, &
da_jo_and_grady_radar
use da_rain, only : da_calculate_grady_rain, da_ao_stats_rain, &
da_oi_stats_rain, da_get_innov_vector_rain, da_residual_rain, &
da_jo_and_grady_rain, da_get_hr_rain, da_transform_xtoy_rain, &
da_transform_xtoy_rain_adj
use da_reporting, only : da_message, da_warning, da_error
use da_satem, only : da_calculate_grady_satem, da_ao_stats_satem, &
da_oi_stats_satem, da_get_innov_vector_satem, da_residual_satem, &
da_jo_and_grady_satem
use da_ships, only : da_calculate_grady_ships, da_ao_stats_ships, &
da_oi_stats_ships, da_get_innov_vector_ships, da_residual_ships, &
da_jo_and_grady_ships
use da_sound, only : da_calculate_grady_sound,da_calculate_grady_sonde_sfc, &
      da_ao_stats_sound, da_oi_stats_sound, &
da_oi_stats_sonde_sfc,da_ao_stats_sonde_sfc,da_get_innov_vector_sound, &
da_get_innov_vector_sonde_sfc,da_jo_and_grady_sound, da_residual_sound, &
      da_jo_and_grady_sonde_sfc,da_residual_sonde_sfc
use da_ssmi, only : da_calculate_grady_ssmi_tb,da_calculate_grady_ssmi_rv,da_calculate_grady_ssmt1, &
      da_calculate_grady_ssmt2, da_ao_stats_ssmi_tb, da_ao_stats_ssmt2, &
      da_oi_stats_ssmt1, da_oi_stats_ssmt2, &
da_oi_stats_ssmi_tb,da_oi_stats_ssmi_rv,da_ao_stats_ssmt1,da_get_innov_vector_ssmi_tb, &
da_get_innov_vector_ssmi_rv, da_residual_ssmi_rv, da_residual_ssmi_tb, &
da_get_innov_vector_ssmt1,da_get_innov_vector_ssmt2, &
da_jo_and_grady_ssmt1, da_jo_and_grady_ssmt2,da_jo_and_grady_ssmi_tb, &
da_jo_and_grady_ssmi_rv, &
da_residual_ssmt1,da_residual_ssmt2, da_ao_stats_ssmi_rv
use da_synop, only : da_calculate_grady_synop, da_ao_stats_synop, &
da_oi_stats_synop, da_get_innov_vector_synop, da_residual_synop, &
da_jo_and_grady_synop
use da_statistics, only : da_analysis_stats, da_print_qcstat
use da_tools_serial, only : da_get_unit,da_free_unit
use da_tracing, only : da_trace_entry, da_trace_exit,da_trace
use da_transfer_model, only : da_transfer_wrftltoxa,da_transfer_xatowrftl, &
da_transfer_xatowrftl_adj,da_transfer_wrftltoxa_adj
#if defined(RTTOV) || defined(CRTM)
use da_varbc, only : da_varbc_tl,da_varbc_adj,da_varbc_precond,da_varbc_coldstart, da_varbc_direct
#endif
use da_vtox_transforms, only : da_transform_vtox,da_transform_vtox_adj,da_transform_xtoxa,da_transform_xtoxa_adj
use da_vtox_transforms, only : da_copy_xa, da_add_xa, da_transform_vpatox, da_transform_vpatox_adj
use da_wrf_interfaces, only : wrf_dm_bcast_real, wrf_get_dm_communicator
use module_symbols_util, only : wrfu_finalize
use da_lapack, only : dsteqr
use da_wrfvar_io, only : da_med_initialdata_input
use da_transfer_model, only : da_transfer_wrftoxb
#ifdef VAR4D
use da_4dvar, only : da_tl_model, da_ad_model, model_grid, input_nl_xtraj, &
kj_swap_reverse, upsidedown_ad_forcing, u6_2, v6_2, w6_2, t6_2, ph6_2, p6, &
mu6_2, psfc6, moist6
use da_transfer_model, only : da_transfer_xatowrftl_lbc, da_transfer_xatowrftl_adj_lbc, &
da_transfer_wrftl_lbc_t0, da_transfer_wrftl_lbc_t0_adj, da_get_2nd_firstguess
USE module_io_wrf, only : auxinput6_only
#endif
implicit none
#ifdef DM_PARALLEL
include 'mpif.h'
#endif
private :: da_dot, da_dot_cv
contains
#include "da_calculate_j.inc"
#include "da_calculate_gradj.inc"
#include "da_jo_and_grady.inc"
#include "da_calculate_residual.inc"
#include "da_get_var_diagnostics.inc"
#include "da_get_innov_vector.inc"
#include "da_dot.inc"
#include "da_dot_cv.inc"
#include "da_write_diagnostics.inc"
#include "da_minimise_cg.inc"
#include "da_minimise_lz.inc"
#include "da_calculate_grady.inc"
#include "da_transform_vtoy.inc"
#include "da_transform_vtoy_adj.inc"
#include "da_transform_vtod_wpec.inc"
#include "da_transform_vtod_wpec_adj.inc"
#include "da_adjoint_sensitivity.inc"
#include "da_sensitivity.inc"
#include "da_amat_mul.inc"
#include "da_kmat_mul.inc"
#include "da_lanczos_io.inc"
#include "da_swap_xtraj.inc"
#include "da_read_basicstates.inc"
end module da_minimisation
|
SUBROUTINE POIX0(A, IV, LA, LIV, LV, MODEL, N, P, V, X, YN)
C
C *** COMPUTE INITIAL X OF E. L. FROME ***
C
INTEGER LA, LIV, LV, MODEL, N, P
INTEGER IV(LIV)
DOUBLE PRECISION X(P), A(LA,N), V(LV), YN(2,N)
C
EXTERNAL DIVSET, POISX0, DV7SCP
C
C *** LOCAL VARIABLES ***
C
INTEGER C1, PP1O2, QTR1, TEMP1
DOUBLE PRECISION ONE, ZERO
C
C *** IV COMPONENTS ***
C
INTEGER LMAT
PARAMETER (LMAT=42)
DATA ONE/1.D+0/, ZERO/0.D+0/
C
C--------------------------------- BODY ------------------------------
C
IF (IV(1) .EQ. 0) CALL DIVSET(1, IV, LIV, LV, V)
C
C1 = IV(LMAT)
PP1O2 = P * (P + 1) / 2
QTR1 = C1 + PP1O2
TEMP1 = QTR1 + P
IF (TEMP1 .GT. LV) GO TO 10
CALL POISX0(A, V(C1), LA, P*(P+1)/2, MODEL, N, P, V(QTR1), X, YN)
GO TO 999
C
10 IF (MODEL .GT. 1) GO TO 20
CALL DV7SCP(P, X, ONE)
GO TO 999
20 CALL DV7SCP(P, X, ZERO)
C
999 RETURN
END
|
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[12pt]{article}
\textheight=24cm
\textwidth=17cm
\hoffset=-2cm
\voffset=-2cm
\vspace{-3cm}
\sloppy
\usepackage{gb4e}
\begin{document}
\title{\large\bfseries PHYSICAL MODEL OF SCHRODINGER ELECTRON.\\
HEISENBERG CONVENIENT WAY FOR \\
DESCRIPTION OF ITS QUANTUM BEHAVIOUR }
\vspace{-3cm}
\author{\sffamily Josiph Mladenov Rangelov,\\
Institute of Solid State Physics\,,\,Bulgarian Academy of Sciences,\\
72\,Tsarigradsko chaussee\,,\,1 784 Sofia\,,\,Bulgaria\,.}
\date{}
\maketitle
\vspace{-1cm}
\begin{abstract}
The object of this paper is to discuss the physical interpretation of the
quantum behaviour of the Schrodinger electron (SchEl) and to bring to light
the cause of the Heisenberg convenient operator way of describing it, using
the laws of nonrelativistic quantum mechanics and their mathematical
results. We describe the forced stochastically diverse circular harmonic
oscillation motion, created by the force of the electric interaction of the
SchEl's elementary electric charge (ElmElcChrg) with the electric intensity
(ElcInt) of the resultant quantum electromagnetic field (QntElcMgnFld) of
the existing StchVrtPhtns, as a solution of the Abraham-Lorentz equation.
By dint of this equation we obtain that the smooth thin line of a classical
macro particle is rapidly broken into many short and disorderly orientated
lines, owing to the continuous scattering of the quantum micro particle
(QntMicrPrt) on the StchVrtPhtns. Between two successive scatterings the
centers of the diverse circular oscillations with stochastically various
radii move along these short disordered lines. These circular harmonic
oscillations lie within the flats perpendicular to the same disordered
short line along which their centers move. As a result of this forced
circular harmonic oscillation motion the smooth thin line of the LrEl is
roughly spread and turned into a cylindrically wide path of the SchEl.
Hence the dispersions of the different dynamical parameters determining
the state of the SchEl are results of its continuous interaction with the
resultant QntElcMgnFld of the StchVrtPhtns. The absence of a smooth thin
line trajectory in the circular harmonic oscillation motion of the
QntMicrPrt forces us to use the matrix elements (Fourier components) of
its roughly spread wide cylindrical path for its description.
\end{abstract}
\section{ Introduction}
We assume that the vacuum fluctuations (VcmFlcs) through the zero-point
quantum electromagnetic field (QntElcMgnFld) play an important role in the
behaviour of the micro particles (MicrPrts). It turns out that if the
Brownian stochastic motion (BrnStchMtn) of some classical micro particle
(ClsMicrPrt) is a result of the fluctuating deviations of the averaged
values of all the forces acting on the ClsMcrPrt, coming from many molecular
blows from the surrounding environment, then the quantized stochastic
dualistic wave-particle behaviour of every QntMicrPrt is a result of the
continuous uncontrolled electromagnetic interaction (ElcMgnIntAct) between
its well spread (WllSpr) elementary electric charge (ElmElcChrg), for a
charged one (a Schrodinger electron (SchEl)), or its magnetic dipole moment
(MgnDplMmn), for an uncharged one (such as a neutron), and the averaged
electric intensity for charged MicrPrts, or the averaged magnetic intensity
for uncharged ones, of the resultant quantized electromagnetic field
(QntElcMgnFld) of all the stochastic virtual photons (StchVrtPhtns) excited
within the FlcVcm and existing within its neighborhood, which exercises a
very powerful influence on its state and behaviour. Consequently the
continuous scattering of the well spread (WllSpr) elementary electric
charge (ElmElcChrg) of the SchEl on the StchVrtPhtns at their creation
powerfully breaks the smooth thin line of the classical trajectory into
many short and very disorderly orientated small lines, and its powerful
interaction (IntAct) with the electric intensity (ElcInt) or magnetic
intensity (MgnInt) of the resultant QntElcMgnFld of the existing
StchVrtPhtns forces it to make circular harmonic oscillations with various
radii and with centers lying on the small disordered lines. As a result of
this complicated motion the narrow smooth line of the classical trajectory
is turned into a wide rough cylindrically spread path of the QntMcrPrt.
Although we will give the necessary calculations further on, we wish to
repeat that as a result of the continuous scattering of the QntMicrPrt on
the StchVrtPhtns at their creation the smooth thin line of the classical
trajectory is broken into many small and very disorderly orientated short
lines. The uninterrupted ElcMgnIntAct of the ElmElcChrg or of the MgnDplMmn
of the QntMicrPrt with the ElcInt or the MgnInt of the resultant
QntElcMgnFld of the StchVrtPhtns, existent within the fluctuating vacuum
(FlcVcm) between two consecutive scatterings, forces the QntMicrPrt to
carry out a stochastic circular oscillation motion, which spreads its
behaviour from a neighborhood of the smooth classical line into a
cylindrically wide path of different radii. We must not forget that the
breaking of the smooth thin line into very short and very disorderly
orientated small lines is a result of its continuous scattering on the
StchVrtPhtns, over which lie the centers of the forced stochastic circular
oscillation motion of the QntMicrPrt, owing to the ElcMgnIntAct of its
WllSpr ElmElcChrg and MgnDplMmn with the intensities of the resultant
electric field (RslElcFld) and resultant magnetic field (RslMgnFld) of the
stochastic virtual photons (StchVrtPhtns). The WllSpd ElmElcChrg of the
SchEl moves, at its circular oscillations of different radii, within the
flats perpendicular to the very short and very disorderly orientated small
lines obtained as a result of its continuous scattering on the StchVrtPhtns,
at its Furthian quantized stochastic circular harmonic oscillation motion
through the fluctuating vacuum (FlcVcm).
Therefore in our transparent survey of the physical model (PhsMdl) of the
nonrelativistic quantized SchrEl, one will be regarded as a WllSpr
ElmElcChrg participating simultaneously in two different motions: A) the
classical motion of a classical Lorentz electron (LrEl) along a well
contoured smooth thin trajectory, realized in consequence of some known
interaction (IntAct) of its over spread (OvrSpr) ElmElcChrg, MgnDplMnt or
bare mass with the intensity of some external classical fields (ClsFlds),
as in Newton's nonrelativistic classical mechanics (NrlClsMch) and Maxwell's
nonrelativistic classical electrodynamics (ClsElcDnm); B) the isotropic
three-dimensional nonrelativistic quantized (IstThrDmnNrlQnt) Furthian
stochastic boson circular harmonic oscillation motion
(FrthStchBsnCrcHrmOscMtn) of the SchEl as a natural result of the permanent
ElcIntAct of its WllSpr ElmElcChrg with the ElcInt of the resultant
QntElcMgnFld of a large number of StchVrtPhtns. This ElcIntAct between the
WllSpr ElmElcChrg and the FlcVcm (zero-point ElcMgnFld) is generated by
dint of StchVrtPhtns exchanged between the fluctuating vacuum (FlcVcm) and
the WllSpr ElmElcChrg during the time interval of their life. Since this
Furthian quantized stochastic wave-particle behaviour of the SchEl is very
similar to the known Brownian classical stochastic behaviour of the
ClsMacrPrt, the QntMicrPrt cannot have the classical sharply contoured
smooth and thin trajectory, but has a cylindrical broad rough path,
obtained as a sum of circular oscillation motions of different radii and
centers lying on accidental broken short lines, strongly disordered within
space. Hence the often broken trajectory of the moving QntMicrPrt presents
itself as a sum of small parts of circumferences with different radii and
centers, lying within flats perpendicular to accidental broken short lines,
strongly disordered in space. Therefore in principle the exact description
of the resultant behaviour of the SchEl, owing to its joint participation
in both motions mentioned above, can be done only by means of the laws of
the NrlQntMch and of nonrelativistic ClsElcDnm.
The existence of three different ways \cite{WH}, \cite{MBPJ} and \cite{BHJ},
\cite{EScha} and \cite{RPF}, \cite{FH} for the description of the quantum
behaviour \cite{LB} of the nonrelativistic SchEl is known to many
scientists. It turns out that, by means of the intrinsic analogy between
the quadratic differential wave equation in partial derivatives
(QdrDfrWvEqtPrtDrv) of Schrodinger and the quadratic differential particle
equation in partial derivatives (QdrDfrPrtEqtPrtDrv) of Hamilton-Jacoby,
one can show that the addition of the kinetic energy of the Furthian
stochastic boson circular harmonic oscillation of some QntMicrPrt to the
kinetic energy of such a ClsMacrPrt determines their dualistic wave-particle
quantized behaviour. It turns out that the stochastic motion powerfully
breaks up the sharply contoured smooth thin classical line into many short
and very disorderly (stochastically orientated) small lines. In such a
natural way we are able to obtain the minimal value of the dispersion
product determined by the Heisenberg uncertainty relation. Since there
exists an essential analogy between the forms of the quadratic differential
diffusive equation (QdrDfrDfsEqt) of Focker-Plank for the distribution
function $P(r,t)$ of the probability density (DstFncPrbDns) of a free
Brownian ClsMacrPrt (BrnClsMacrPrt) in a coordinate system motionless with
respect to it, and the quadratic differential wave equation in partial
derivatives (QdrDfrWvEqtPrtDrv) of Schrodinger for the orbital wave
function (OrbWvFnc) of some free Furthian QntMicrPrt (FrthQntMicrPrt) in a
coordinate system motionless with respect to it, we come to the essential
conclusion that there is also the possibility to describe the quantized
stochastic behaviour of the SchEl by means of the analogy between the
classical Wiener continual integral and the quantized Feynman continual
integral. Feynman used, for the description of the transition between two
OrbWvFncs of some free FrthQntMicrPrt with different coordinates and times,
a formula analogous to the formula which had earlier been used by Einstein
\cite{AE}, \cite{ES}, Smoluchowski \cite{MS} and Wiener \cite{NW} for the
description of the same transition between two DstFncsPrbDns of free
BrnClsMacrPrts. In this way we understand why the behaviour of the
QntMicrPrt must be described by the OrbWvFnc $\Psi$, although the behaviour
of the ClsMacrPrt may be described by a line.
\section{ Mathematical description of the physical cause ensuring the display
of the QntMicrPrt behaviour.}
The object of this paper is to discuss the fundamental problems of the
physical interpretation of the nonrelativistic quantized behaviour of the
SchEl and to bring to light the cause securing the existence of this
uncommon state of each QntMicrPrt. It is necessary to understand why the
QntMicrPrt has no classical smooth thin trajectory and why its behaviour
must be described by the Heisenberg matrix of the convenient operator way,
using the laws of the NrlQntMch and its effective mathematical results.
The PhsMdl of the SchEl is built by means of the equation of the forced
motion of a damped classical oscillator under the force action of the
electric interaction (ElcIntAct) between its WllSpr ElmElcChrg and the
ElcInt of the RslQntElcMgnFld of the StchVrtPhtns created in the FlcVcm.
The unusual behaviour of the SchEl may be described by the following motion
equation in Maxwell's nonrelativistic classical electrodynamics (ClsElcDnm):
\begin{equation}\label{FF}
\ddot{r_j}\,+\,\omega_o^2\,r_j\,=\,-\,(\frac{e}{m})\,\{\,E_j\,+
\,E_j^i\,\}\,=\,\frac{e}{mC}\frac{\partial\,A_j\,}{\partial\,t\,}\,+
\,\frac{2e^2}{3mC^3}\,\stackrel{\cdots}{r_j},
\end{equation}
where $E_j^i$ and $E_j$ denote the ElcInt of the ElcFld $E_j^i$ of the
radiative friction, that is to say of the low-energy unemitted longitudinal
(Lng) VrtPhtn (VrtLngPht), and the ElcInt $E_j$ of the low-energy VrtPhtns
in the FlcVcm. In accordance with the relation (\ref{FF}) the ElcInt $E_j$
of an external QntElcMgnFld may be described by means of its vector
potential $A_j$, having the following analytical presentation:
\begin{equation}\label{GG}
A_j\,=\,\frac{i}{L}\,\sum_q\,\sqrt{\frac{2\pi\hbar C}{L\,q}}\,I_{jq}\,
\left[\,a_{jq}^+\,e^{i(t\omega\,-\,qr)}\,-
\,a_{jq}\,e^{-i(t\omega\,-\,qr)}\,\right],
\end{equation}
Indeed the ElcInt $E_j$ of the StchVrtPhtns can be obtained by taking the
partial derivative of the expression (\ref{GG}) with respect to time:
\begin{equation}\label{HH}
E_j\,=\,\frac{1}{L}\,\sum_q\,\sqrt{\frac{2\pi\hbar\omega}{\,L\,}}\,I_{jq}\,
\left[\,a_{jq}^+\,e^{i(t\omega\,-\,qr)}\,+
\,a_{jq}\,e^{-i(t\omega\,-\,qr)}\,\right],
\end{equation}
There is a necessity to note here that we have exchanged the signs in eqs.
(\ref{GG}) and (\ref{HH}). Indeed, in order to get the necessary
correspondence between the operator expressions of $\hat{p}_j$ and
$\hat{A}_j$, it is appropriate to use the sign (-) in eq.(\ref{GG}) and the
sign (+) in eq.(\ref{HH}). The usefulness of this exchange of signs will
later be seen in the expressions (\ref{MM}) of $\hat{r_j}$ and (\ref{NN})
of $\hat{P_j}$. Hence by substituting eq.(\ref{GG}) in eq.(\ref{FF}) and
transposing some terms to its left-hand side one can obtain the motion
equation in the Lorentz-Abrahams nonrelativistic presentation (LAP):
\begin{equation}\label{II}
\ddot{r_j}\,-\,\tau\,\stackrel{\cdots}{r_j}\,+\,\omega_o^2\,r_j\,=
\,-\,(\frac{e}{m})\,E_j,
\end{equation}
The temporal dependence of $r_j$ contains the two frequencies $\omega_o$
and $\omega$. Since $\omega_o\,\ge\,\omega$, the greatest possible
magnitude of the term $\tau\,\stackrel{\cdots}{r_j}$ would be
-$\tau\,\omega_o^2\,\dot{r_j}$; nevertheless the term
$\tau\,\stackrel{\cdots}{r_j}$ still presents itself as
-$\tau\,\omega^2\,\dot{r_j}$. Indeed, the general solution of
eq.(\ref{II}) is given by the sum of the general solution of the
homogeneous equation and a particular solution of the inhomogeneous
equation. At
$\omega\,\tau\,=\,\frac{2e^2}{3mC^2}\frac{\omega}{C}\,=\,\frac{\pi}{3}\,
(\frac{2e^2}{C\hbar})\,\frac{\hbar}{mC}\,\frac{2}{\lambda}\,\le\,1,$
the general solution of the homogeneous equation has the form of a relaxing
oscillation of frequency $\omega$, and the particular solution has the form
of a forced oscillation of frequency $\omega$. Therefore we may rewrite
eq.(\ref{II}) in the following form:
\begin{equation}\label{KK}
\ddot{r_j}\,+\,\tau\,\omega^2\,\dot{r_j}\,+\,\omega_o^2\,r_j\,=
\,-\,(\frac{e}{m})\,E_j(r,t),
\end{equation}
From eq.(\ref{KK}) it is easily seen that the motion damping of the SchEl
is caused by the well-known Lorentz damping force owing to the radiation
friction of its moving WllSpr ElmElcChrg. In the rough approximation of
Maxwell's nonrelativistic ClsElcDnm the minimum time interval for the
emission or absorption of a real photon (RlPhtn) by the WllSpr ElmElcChrg
of the SchEl is evidently determined by the Lorentz-Abrahams parameter:
\begin{equation}\label{LL}
\,\tau\,=\,\frac{2e^2}{3mC^3},
\end{equation}
The particular solution of the motion eq.(\ref{KK}), describing the forced
quantized stochastic circular harmonic motion of the QntMicrPrt, has been
written by Welton \cite{ThW}, Kalitchin \cite{NK} and Sokolov and Tumanov
\cite{ASBT}, \cite{AAS} by means of the operator division in the following
analytical form:
\begin{equation}\label{MM}
\,\hat{r_j}\,=\,\sum_q\,\frac{e\,q}{m\,L}\,
\sqrt{\frac{2\pi \hbar\omega }{\,L\,q\,}}\,I_{jq}\,
\left[\,\frac{a_{jq}^{+}\,\exp{\{i\,t\omega \,-\,i\,qr\,\}}}
{\omega _o^2\,-\,\omega ^2\,+ \,i\tau \omega ^3\,}\,+
\,\frac{a_{jq}\,\exp{\{-\,i\,t\omega \,+\,i\,qr\,\}}}
{\omega _o^2\,-\,\omega ^2\,-\,i\tau \omega ^3\,}\,\right],
\end{equation}
\begin{equation}\label{NN}
\,\hat{P_j}\,=\,i\,\sum_q\,\frac{e\,\omega _o^2}{C\,L}\,
\sqrt{\frac{2\pi\hbar\omega }{\,L\,q\,}}\,I_{jq}\,
\left[\,\frac{\,a_{jq}^{+}\,\exp{\{\,i\,t\omega \,-\,i\,qr\,\}}}
{\omega _o^2\,-\,\omega ^2\,+\,i\tau \omega ^3\,}\,-
\,\frac{\,a_{jq}\,\exp{\{-\,i\,t\omega \,+\,i\,qr\,\}}}
{\omega _o^2\,-\,\omega ^2\,-\,i\tau \omega ^3}\,\right],
\end{equation}
The analytical presentation (\ref{NN}) of the SchEl's momentum components
has been calculated using the relation known from Maxwell's ClsElcDnm:
\begin{equation}\label{OO}
\hat{P}_j\,=\,m\,\dot{r}_j\,-\,(\frac{e}{C})\left[\,A_j\,+\,A_j^i\,\right],
\end{equation}
Further they have calculated the well-known Heisenberg commutation
relations (HsnCmtRlts) between the operators of the dynamical variables
$\hat{r}_j$ (\ref{MM}) and $\hat{P}_j$ (\ref{NN}) by virtue of the
following definition:
\begin{equation}\label{PP}
{\hat P}_j\,{\hat r}_k\,-\,{\hat r}_k\,{\hat P}_j\,\approx
\,-i\,\hbar\,\delta_{jk}
\end{equation}
From the above account it is easy to understand that while the ClsMacrPrt's
motion occurs along a clearly defined smooth thin trajectory in the
NrlClsMch, the QntMicrPrt's motion is performed in the form of a random
trembling motion (RndTrmMtn) along very small lines, stochastically
orientated in space near the clear-cut smooth thin trajectory, in the
NrlQntMch. As a result of that we can suppose that the QntStchBhv of the
QntMicrPrt can be described by means of the following physical quantities
in the NrlQntMch:
\begin{equation}\label{QQ}
\quad r_j\,=\,{\bar r}_j\,+\,\delta{r}_j\quad;
\quad p_j\,=\,\bar{p}_j\,+\,\delta{p}_j\quad;
\end{equation}
\section{ Mathematical description of the minimal dispersions of some
dynamical parameters of a QntMicrPrt}
Indeed, because of eqs.(\ref{QQ}) the values of the averaged physical
parameters in the NrlQntMch, such as $\langle p_j^2 \rangle$, differ from
the values of the same physical parameters in the NrlClsMch, such as
$\bar{p}_j^2$, as is seen:
\begin{equation}\label{RR}
\quad\langle{r}_j^2\rangle\,=\,{\bar r}_j^2\,+\,\langle\delta{r}_j^2\rangle\,;
\quad\langle{p}_j^2\rangle\,=\,\bar{p}_j^2\,+\,\langle\delta{p}_j^2\rangle\,;
\end{equation}
In addition, the averaged value of the orbital (angular) mechanical
momentum of the QntMicrPrt has the following value:
\begin{equation}\label{SS}
\,\langle L^2\rangle\,=\,\sum_j\,(\bar{L}_j)^2\,+
\,\sum_j\,\langle(\delta{L}_j)^2\rangle\,=
\,(\bar{L}_x)^2\,+\,\langle(\delta{L}_x)^2\rangle\,+
\,(\bar{L}_y)^2\,+\,\langle(\delta{L}_y)^2\rangle\,+
\,(\bar{L}_z)^2\,+\,\langle(\delta{L}_z)^2\rangle\,;
\end{equation}
or, at $(\bar{L}_x)^2\,=\,0$ and $(\bar{L}_y)^2\,=\,0$, we must obtain:
\begin{equation}\label{TT}
\,\langle L^2\rangle\,=
\,(\bar{L}_z)^2\,+\,\langle(\delta{L}_z)^2\rangle\,+
\,\langle(\delta{L}_y)^2\rangle\,+\,\langle(\delta{L}_x)^2\rangle\,
\end{equation}
The values of both $\,\langle(\delta{L}_x)^2\rangle\,$ and
$\,\langle(\delta{L}_y)^2\rangle\,$ are equal to
$\,\frac{\bar{L}_z\hbar}{2}\,$, while the value of
$\,\langle(\delta{L}_z)^2\rangle\,$ is equal to
$\,\frac{\hbar^2}{4}\,$. Therefore, with $\,\bar{L}_z\,=\,l\hbar\,$:
\begin{equation}\label{UU}
\quad\langle L^2\rangle\,=\,l^2\hbar^2\,+\,l\hbar^2\,+
\,\frac{\hbar^2}{4}\,=\,(\,l\,+\,\frac{1}{2}\,)^2\hbar^2\,;
\end{equation}
The investigation realized above assists us in coming to the conclusion
that the dispersions of the dynamical parameters of the QntMicrPrt are
natural results of its forced stochastic oscillation motions along the very
small lines stochastically orientated in space near the classical clear-cut
smooth thin line of the corresponding dynamical parameter values of the
ClsMacrPrt, owing to the ElcMgnIntAct of its OvrSpr ElmElcChrg or MgnDplMm
with the intensities of the RslElcFld or RslMgnFld of the QntElcMgnFlds of
the StchVrtPhtns at its motion through the FlcVcm. It turns out that the
kinetic energy of the IstThrDmnNrlQnt FrthStchBsnCrcHrmOscs, which the
QntMicrPrt takes from the FlcVcm, called its localized energy, ensures the
stability of the SchEl in its ground state in the H-like atom. We thus have
the ability to obtain the minimal value of the dispersion product
determined by the Heisenberg uncertainty relation.
In consequence of what was asserted above, in order to obtain the
QntQdrDfrWvEqn of Schrodinger we must add to the kinetic energy
$\,\frac{(\nabla_l\,S_1)^2}{2m}\,$ of the NtnClsPrt in the following
ClsQdrDifPrtEqt of Hamilton-Jacoby:
\begin{equation}\label{f1}
-\frac{\partial S_1}{\partial t}\,=\,\frac{(\nabla_j\,S_1)^2}{2m}\,+\,U\,;
\end{equation}
the kinetic energy $\,\frac{(\nabla_l\,S_2)^2}{2m}\,$ of the BrnClsPrt. In
such a natural way we obtain the following analytic presentation of the
QntQdrDfrWvEqt of Schrodinger:
\begin{equation}\label{f2}
-\frac{\partial S_1}{\partial t}\,=\,\frac{(\nabla_j\,S_1)^2}{2m}\,+
\,\frac{(\nabla_j\,S_2)^2}{2m}\,+\,U\,;
\end{equation}
The purpose of our investigation henceforth is to obtain eq.(\ref{f2}) by
means of a physically obvious and mathematically correct proof. Therefore
we venture the supposition that all uncommon ways of the SchEl's behaviour
in the NrlQntMch, and of the other QntMicrPrts in the micro world, are
natural consequences of unconstrained stochastic joggles on account of
continuous accidental exchanges of low-energy StchVrtPhtns between its
WllSpr ElmElcChrg and the VcmFlc. In consequence, the absence of the
SchEl's trajectory within the NrlQntMch (unlike within the NrlClsMch), the
stochastic character of its random trembling motion, and the probability
interpretation of the module square of the SchEl's OrbWvFnc are natural
consequences of the continuous ElcMgnIntAct between the SchEl's WllSpr
ElmElcChrg and the effective ElcInt $E_j$ of the existing low-energy
VrtPhtns, stochastically generated by the fluctuating energy within the
FlcVcm through the continuous incident exchange of low-energy StchVrtPhtns,
which are either emitted or absorbed by either the VcmFlcs or the SchEl's
WllSpr ElmElcChrg. Really, for a deep understanding of the physics of the
random trembling motion, in accordance with the description of the Brownian
stochastic behaviour of BrnCslPrts, we can determine both the value $V^{-}$
of the SchEl's velocity before the moment $t$ of the scattering of some
low-energy StchVrtPhtn from its WllSpr ElmElcChrg, and the value $V^{+}$
after the same moment $t$, by means of the following definitions:
\begin{equation}\label{q1}
V_j^{-}\,=\,lim_{\Delta t\to\,o} \left\{\,\frac{r(t)_j\,-\,r(t-\Delta t)_j\,}
{\Delta t}\,\right\}\,= \,(\,V_j\,-\,i\,U_j\,)\,;
\end{equation}
\begin{equation}\label{q2}
V_j^{+}\,=\,lim_{\Delta t\to\,o}\left\{\,\frac{r(t+\Delta t)_j\,-\,r(t)_j\,}
{\Delta t}\,\right\}\,=\,(\,V_j\,+\,i\,U_j\,)\,;
\end{equation}
In addition we may determine two new velocities $V_j$ and $U_j$ by dint of
the following equations :
\begin{equation}\label{r}
2\,V_j\,=\,V_j^+\,+\,V_j^- \qquad {\rm and} \qquad
2\,i\,U_j\,=\,V_j^+\,-\,V_j^-\,,
\end{equation}
In conformity with eq.(\ref{r}) it obviously follows that the current
velocity $V$ describes the regular drift of the SchEl and the osmotic
velocity $U$ describes its nonrelativistic quantized stochastic boson
oscillations. Afterwards, by virtue of the well-known definition
equations:
\begin{equation}\label{s1}
2\,m\,V_j\,=\,m\,(\,V_j^+\,+\,V_j^-\,)\,=\,2\,\nabla_j\,S_1
\end{equation}
and
\begin{equation}\label{s2}
2\,i\,m\,U_j\,=\,m\,(\,V_j^+\,-\,V_j^-\,)\,=\,2\,i\,\nabla_j\,S_2
\end{equation}
one can obtain the following presentation of the SchEl's OrbWvFnc
$\psi(r,t)$ :
\begin{equation}\label{t}
\psi(r,t)\,=\,\exp\{\,i\,\frac{S_1}{\hbar}\,-\,\frac{S_2}{\hbar}\,\}\,=
\,B\,\exp\{\,i\,\frac{S_1}{\hbar}\,\}
\end{equation}
It is easy to verify the results (\ref{r}), (\ref{s1}) and (\ref{s2}). In
effect they may be obtained by means of the following natural equations:
\begin{equation}\label{ur1}
m\,V_j^+\,\psi(r,t)\,=
\,-\,i\,\hbar\,\nabla_j\,\exp\{\frac{i\,S_1}{\hbar}\,-\,\frac{S_2}{\hbar}\}\,
=\,(\,\nabla _j S_1\,+\,i\,\nabla _j S_2\,)\,\psi(r,t)
\end{equation}
and
\begin{equation}\label{ur2}
m\,V_j^-\,\psi(r,t)^+\,=
\,+\,i\,\hbar\,\nabla _j \exp\{\frac{i\,S_1}{\hbar}\,-\,\frac{S_2}{\hbar}\}\,
=\,(\,\nabla _j S_1\,-\,i\,\nabla _j S_2\,)\,\psi(r,t)^+
\end{equation}
Indeed,
\begin{eqnarray}\label{us1}
2\,m\,V_j\,=\,m\,(\,V_j^+\,+\,V_j^-\,)\,= \nonumber \\
\,\left\{\,(\,\nabla _j S_1\,+\,i\,\nabla _j S_2\,)\,+
\,(\,\nabla _j S_1\,-\,i\,\nabla _j S_2\,)\,\right\}\,
\quad {\rm or} \quad 2\,m\,V_j\,=\,2\,\nabla _j S_1\,
\end{eqnarray}
and
\begin{eqnarray}\label{us2}
2\,i\,m\,U_j\,=\,m\,(\,V_j^+\,-\,V_j^-\,)\,= \nonumber \\
\,\left\{\,(\,\nabla _j S_1\,+\,i\,\nabla _j S_2\,)\,-
\,(\,\nabla _j S_1\,-\,i\,\nabla _j S_2\,)\,\right\}\,
\quad {\rm or} \quad 2\,i\,m\,U_j\,=\,2\,i\,\nabla _j S_2\,
\end{eqnarray}
In consequence we can assume that the module square of the SchEl's OrbWvFnc
$\psi(r,t)$ describes the probability density of its location close to the
space point $r$ at the time moment $t$, in the good light of our obvious
interpretation. Further, in order to obtain the partial differential
equation of the continuity, we are going to calculate it by virtue of its
well-known definitions:
\begin{eqnarray}\label{v1}
\frac{\partial {\mid\psi\mid}^2}{\partial t}\,+
\,\nabla _j (V_j^{+}\,{\mid\psi\mid}^2\,)\,=
\,\frac{\partial (\exp{\{-\,2\,\frac {S_2}{\hbar }\}})}{\partial t}\,+
\,\nabla _j \left[\,(\nabla _j \frac{S_1}{m}\,+\,i\,\nabla _j \frac{S_2}{m})\,
\exp{\{-\,2\,\frac{S_2}{\hbar}\}}\,\right]\,= \nonumber \\
\,\left[\,-\frac{2}{\hbar}\,\frac{\partial {S_2}}{\partial t}\,+
\,\frac{1}{m}\,(\nabla _j)^2 {S_1}\,+
\,\frac{i}{m}\,(\nabla _j)^2 {S_2}\,-
\,\frac{2}{m\hbar}\,\,\nabla _j {S_1}\,\nabla _j {S_2}\,-
\,\frac{2i}{\hbar}\,\nabla _j {S_2}\,\nabla _j {S_2}\,\right]\,
\left [\,\exp{\{\,-\,2\,\frac{S_2}{\hbar}\}}\,\right ]
\end{eqnarray}
\begin{eqnarray}\label{v2}
\frac{\partial {\mid\psi\mid}^2}{\partial t}\,+
\,\nabla _j (V_j^{-}\,{\mid\psi\mid}^2\,)\,=
\,\frac{\partial (\exp{\{-\,2\,\frac{S_2}{\hbar}\}})}{\partial t}\,+
\,\nabla _j \left[\,(\nabla _j \frac{S_1}{m}\,-\,i\,\nabla _j \frac{S_2}{m})\,
\exp{\{-\,2\,\frac{S_2}{\hbar}\}}\,\right]\,= \nonumber \\
\,\left[\,-\frac{2}{\hbar}\,\frac{\partial {S_2}}{\partial t}\,-
\,\frac{1}{m}\,(\nabla _j)^2 {S_1}\,-
\,\frac{i}{m}\,(\nabla _j)^2 {S_2}\,-
\,\frac{2}{m\hbar}\,\nabla _j\,{S_1}\,\nabla _j\,{S_2}\,+
\,\frac{2i}{\hbar}\,\nabla _j\,{S_2}\,\nabla _j\,{S_2}\,\right]\
\,\left[\,\exp{\{-\,2\,\frac{S_2}{\hbar }\}}\,\right]
\end{eqnarray}
With the purpose of calculating the last expressions of the continuity
equations (\ref{v1}) and (\ref{v2}) we are going to substitute the
expression (\ref{t}) of the SchEl's OrbWvFnc $\psi(r,t)$ into the quadratic
differential wave equation in partial derivatives of Schrodinger:
\begin{equation}\label{w}
\,i\,\hbar\,\frac{\partial \psi(r,t)}{\partial t}\,=
\,-\,\frac{\hbar^2}{2}\,\frac{(\nabla_j)^2}{m}\,\psi(r,t)\,+\,U(r,t)\,\psi(r,t)
\end{equation}
Further we are able to obtain the following result :
\begin{eqnarray}\label{z}
\left(\,-\,\frac{\partial {S_1}}{\partial t}\,-
\,i\,\frac{\partial {S_2}}{\partial t}\,\right)\,\psi(r,t)\,= \nonumber \\
\,\left\{\,\frac{(\nabla _j {S_1})^2}{2m}\,-
\,\frac{(\nabla_j {S_2})^2}{2m}\,+
\,\frac{\hbar}{2m}\,(\nabla_j)^2 {S_2}\,-
\,i\,\frac{\hbar}{2m}\,(\nabla_j)^2 {S_1}\,+
\,\frac{i}{m}\,\nabla_j {S_1}\,\nabla_j {S_2}\,+
\,U(r,t)\,\right\}\,\psi(r,t)\,
\end{eqnarray}
As there exist both real and imaginary parts in the complex-valued eq.
(\ref{z}), it is obvious that two quadratic differential equations in
partial derivatives follow from it:
\begin{equation}\label{aa1}
\frac{\partial {S_2}}{\partial t}\,=
\,\frac{\hbar}{2m}\,(\nabla _j)^2\,{S_1}\,-
\,\frac{1}{m}\,(\nabla _j {S_1})\,(\nabla _j {S_2})
\end{equation}
and
\begin{equation}\label{aa2}
-\,\frac{\partial {S_1}}{\partial t}\,=
\,\frac{1}{2\,m}(\nabla _j {S_1})^2\,-
\,\frac{1}{2\,m}(\nabla _j {S_2})^2\,+
\,\frac{\hbar}{2\,m}(\nabla _j)^2 {S_2}\,+\,U(r,t)
\end{equation}
Inasmuch as is well known from the NrlQntMch, the continuity partial
differential equation can be obtained by means of the eqs.(\ref{aa1}),
(\ref{s1}) and (\ref{t}) in the following form:
\begin{eqnarray}\label{ab}
\frac{\partial {\left|\psi\right|^2}}{\partial t}\,+
\,\nabla _j \left(V_j\,{\left|\psi\right|^2}\,\right)\,=
\,\frac{\partial \exp{\{-\,2\,\frac{S_2}{\hbar}\}}}{\partial t}\,+
\,\frac{1}{m}\,\nabla _j \left(\nabla _j {S_1}
\,\exp{\{-\,2\,\frac{S_2}{\hbar}\}}\,\right)\,=\,0 ;
\end{eqnarray}
Thence eq.(\ref{v1}) and eq.(\ref{v2}) can be simplified by means of the
eqs.(\ref{ab}) and (\ref{aa1}). As a result of such substitutions the
following continuity partial differential equations can be obtained:
\begin{equation}\label{ac1}
\frac{\partial {\left|\psi\right|^2}}{\partial t}\,+
\,\nabla _j \left(V_j^{+}\,{\left|\psi\right|^2}\right)\,=
\,\frac{i}{m}\,\nabla _j \left(\,\nabla _j {S_2}
\,\exp{\{-2\frac{S_2}{\hbar}\}}\,\right)
\end{equation}
\begin{equation}\label{ac2}
\frac{\partial {\left|\psi\right|^2}}{\partial t}\,+
\,\nabla _j \left(V_j^{-}\,{\left|\psi\right|^2}\right)\,=
\,-\,\frac{i}{m}\,\nabla _j \left(\nabla _j {S_2}
\,\exp{\{-2\frac{S_2}{\hbar}\}}\right)
\end{equation}
In order to calculate the value of the expressions in the brackets in the
right-hand side of the eqs.(\ref{ac1}) and (\ref{ac2}) we will determine
the relation between the values of both integrals :
\begin{equation}\label{ad1}
\,\int\,\int_{V_R}\,\int\,{\nabla_j}^2\,{S_2}\,
\exp{\{-2\frac{S_2}{\hbar}\}}\;dV
\quad {\rm and} \quad
\,\int\,\int_{V_R}\,\int\,(\nabla _j {S_2}\,)^2\,
\exp{\{-2\frac{S_2}{\hbar}\}}\;dV
\end{equation}
The first integral in (\ref{ad1}) may be calculated through integration by
parts. In this easy way we obtain:
\begin{eqnarray}\label{ad2}
\,\int\,\int_{V_R}\,\int\,(\nabla _j)^2\,{S_2}
\,\exp{\{-2\frac{S_2}{\hbar}\}}\,dV\,=
\,\int\,\int_{S_R}\,\nabla _j {S_2}
\,\exp{\{-2\frac{S_2}{\hbar}\}}\,dS_j\,- \nonumber \\
\,\int\,\int_{S_o}\,\nabla_j {S_2}
\,\exp{\{-2\frac{S_2}{\hbar}\}}\,dS_j\,+
\,\frac{2}{\hbar}\,\int\,\int_{V_R}\,\int\,(\nabla _j {S_2})^2\,
\exp{\{-2\frac{S_2}{\hbar}\}}\,dV
\end{eqnarray}
From the above it is evident that the second two-dimensional integral over
the surface $S_o$ cannot exist in the case when the integration domain
$V_R$ of the three-dimensional integral has the form of a
one-piece-integrity domain. Indeed, if the three-multiple integral in the
left-hand side of eq.(\ref{ad2}) has an integration domain of volume $V_R$,
then both two-multiple integrals (the first and second ones on the
right-hand side of the same equation) have integration domains in the form
of surfaces of the same volume (the outer skin $S_R$ and the inner skin
$S_o$ of the volume $V_R$). Inasmuch as we do not take into account the
creation and annihilation of the FrthQntMicrPrt in the NrlQntMch, the
SchEl's OrbWvFnc $\psi(r,t)$ may have no singularity within the volume
$V_R$. Therefore the three-multiple integrals have a one-piece integrity
domain of integration without an inner skin surface $S_o$. Hence it is
easily seen that both two-multiple integrals cancel in the case when $R$
goes to $\infty$ and in the absence of any kind of singularity in the
SchEl's OrbWvFnc. Consequently eq.(\ref{ad2}) takes the form:
\begin{eqnarray}\label{ae}
\,\int\,\int_{V_\infty}\,\int\,(\nabla _j)^2\,{S_2}
\,\exp{\{-2\frac{S_2}{\hbar}\}}\;dV\,
=\,\frac{2}{\hbar}\,\int\int_{U_\infty}\,\int\,(\nabla _j {S_2}\,)^2
\,\exp{\{-2\frac{S_2}{\hbar}\}}\;dV
\end{eqnarray}
Then, as a result of the existence of eq.(\ref{ae}), we may suppose the
existence of the following equations between the values of both integrand
functions:
\begin{equation}\label{af1}
\quad {\rm the\,first\,:} \quad
(\nabla_j)^2\,{S_2}\,\exp{\{-\,2\,\frac{S_2}{\hbar}\,\}}\,=
\,\frac{2}{\hbar}\,(\nabla _j {S_2})^2\,\exp{\{-\,2\,\frac{S_2}{\hbar}\,\}}\,
\end{equation}
\begin{equation}\label{af2}
\quad {\rm and\,the\,second\,:} \quad
(\nabla _j)^2 {S_2}\,=\,\frac{2}{\hbar}\,(\nabla _j {S_2})^2\,
\end{equation}
Hence it is obvious that, in line with the existence of eq.(\ref{af2}),
equation (\ref{aa2}) can be rewritten in the following transparent form:
\begin{equation}\label{ag1}
\,-\,\frac{\partial {S_1}}{\partial t}\,=
\,\frac{1}{2\,m}(\nabla _j {S_1})^2\,+
\,\frac{1}{2\,m}(\nabla _j {S_2})^2\,+\,U(r,t)
\end{equation}
In such a way it is evident that the right-hand side expressions of the
equations of continuity (\ref{ac1}) and (\ref{ac2}) cancel by virtue of
eq.(\ref{af2}). Consequently we have had the opportunity to show that the
continuity partial differential equations are satisfied not only in the
form (\ref{ab}), but also in the forms (\ref{ac1}) and (\ref{ac2}).
Furthermore the expression of eq.(\ref{ag1}) may be interpreted from my new
point of view, in which the kinetic energy $E_k$ of the SchrEl is formed by
two different parts. Really, if the first part
$\frac{(\nabla_j\,{S_1})^2}{2\,m}$ describes the kinetic energy of its
regular translational motion along some clear-cut thin smooth classical
trajectory, in accordance with the laws of the NrlClsMch and ClsElcDnm,
with its current velocity $V_j\,=\,\frac{1}{m}\,\nabla _j{S_1}$, then the
second part $\frac{(\nabla_j\,{S_2})^2}{2\,m}$ must describe the kinetic
energy of the Furthian quantum stochastic motion of the FrthQntMicrPrt with
its probable velocity $U_j\,=\,\frac{1}{m}\,\nabla _j {S_2}$, in total
analogy with the Brownian classical stochastic motion of the BrnClsMicrPrt
with its osmotic velocity. Therefore it is very helpful to rewrite the
expression (\ref{ag1}) in the following well-known form:
\begin{equation}\label{ag2}
\,E\,=\,\frac{m\,V^2}{2}\,+\,\frac{m\,U^2}{2}\,=
\,\frac{(\langle \bar P \rangle)^2}{2m}\,+
\,\frac{\langle (\Delta P)^2 \rangle}{2m}
\end{equation}
Indeed, some new facts have been brought to light. The above investigation
therefore entitles us to make the explicit assertion that the most
important difference between the quadratic differential wave equation in
partial derivatives of Schrodinger and the quadratic differential particle
equation in partial derivatives of Hamilton-Jacoby is exhibited by the
existence, in the first one, of the kinetic energy of the QntMicrPrt's
Furthian trembling circular harmonic oscillation motion.
\begin{equation}\label{g}
-\frac{\partial S_1}{\partial t}\,=\,\frac{(\nabla_j\,S_1)^2}{2m}\,+
\,\frac{(\nabla_j\,S_2)^2}{2m}\,+\,U\,;
\end{equation}
As we can observe by a cursory comparison, there is a total coincidence of
eq.(\ref{f2}) with eq.(\ref{g}). Hence we are able to prove that the
QdrDfrPrtEqt with PrtDrv of Schrodinger may be obtained from the
QdrDfrPrtEqt with PrtDrv of Hamilton-Jacoby by adding the part of the
kinetic energy of the Furthian stochastic circular harmonic oscillation
motion. Indeed, it is obvious that the first term
$\,\frac{(\nabla_l\,S_1)^2} {2m}\,$ in eq.(\ref{g}) describes the kinetic
energy of the regular translational motion of the NtnClsPrt with its
current velocity $\,V_l\,=\,\frac{\nabla_l\,S_1}{m}\,$, and the second term
$\,\frac{(\nabla_l\,S_2)^2}{2m}\,$ describes the kinetic energy of the
random trembling circular harmonic oscillation motion (RndTrmMtn) of the
FrthQntPrt, in total analogy with the BrnClsPrt with its osmotic velocity
$U_l\,=\,\frac{\nabla_l\,S_2}{m}\,$. Therefore we can rewrite the
expression (\ref{g}) in the following form:
\begin{equation}\label{h}
\,E_t\,=\,\frac{m\,V^2}{2}\,+\,\frac{m\,U^2}{2}\,+\,U\,=
\,\frac{{\langle\,\bar P\,\rangle}^2}{2\,m}\,+
\,\frac{\langle\,(\Delta P)^2\,\rangle}{2\,m}\,+\,U\,;
\end{equation}
After elementary, physically obvious suppositions, some new facts have been
brought to light. The above investigation therefore entitles us to make the
explicit assertion that the most important difference between the
QntQdrDfrWvEqt with PrtDrv of Schrodinger and the ClsQdrDfrPrtEqt with
PrtDrv of Hamilton-Jacoby is exhibited by the existence of the kinetic
energy of the FrthRndTrmCrcHrmOscsMtn in the first one. Therefore when the
SchEl is placed in the Coulomb potential of the atomic nucleus's
spotted-like (SptLk) elementary electric charge (ElmElcChrg) $Ze$, its
total energy may be written in the following form:
\begin{equation}\label{i}
\langle\,E_t\,\rangle\,=\,\frac{1}{2\,m}\,\left[(\langle P_r \rangle)^2\,+
\,\frac{(\langle L \rangle)^2}{(\langle r \rangle)^2}\,\right]\,+
\,\frac{1}{2\,m}\,\left[\langle(\Delta P_r)^2 \rangle\,+
\,\frac{\langle(\Delta L)^2 \rangle}{\langle r \rangle^2}\,\right]\,-
\,\frac{Z e^2}{\langle r \rangle}
\end{equation}
As any SchEl has the eigenvalues $n_r\,=\,0\,$ and $l\,=\,0\,$ in the case
of its ground state, it follows that $\langle P_r \rangle \,=\,0\,$ and
$\langle L \rangle\,= \,0\,$. In consistency with eq.(\ref{i}), the
eigenvalue of the SchEl's total energy $E_t^o$ in its ground state in some
H-like atom then contains only two parts:
\begin{equation}\label{j}
\langle\,E^o_t\,\rangle\,=
\,\frac{1}{2\,m}\,\left[\langle(\Delta P_r)^2 \rangle\,+
\,\frac{\langle(\Delta L)^2 \rangle}{\langle( r )^2\rangle}\,\right]\,-
\,\frac{Z e^2}{\langle r \rangle}
\end{equation}
Further the values of the dispersions $\langle (\Delta P_r)^2\rangle$ and
$\langle(\Delta L)^2\rangle$ can be determined by virtue of the Heisenberg
Uncertainty Relations (HsnUncRlt)\,:
\begin{equation}\label{k}
\,\langle (\Delta P_r)^2\rangle\,\times\,\langle (\Delta r)^2\rangle\,\ge
\,\frac{\hbar^2}{4}
\end{equation}
\begin{equation}\label{l}
\langle (\Delta L_x)^2\rangle\,\times\,\langle (\Delta L_y)^2\rangle\,\ge
\,\frac{\hbar^2}{4}\,\langle (\Delta L_z)^2\rangle\,
\end{equation}
Thence the dispersion $\langle(\Delta P_r)^2\rangle$ will really have its
minimal value at the maximal value of
$\langle(\Delta r)^2\rangle\,=\,\langle r \rangle^2$. In this way the
minimal dispersion value of $\langle(\Delta P_r)^2\rangle$ can be
determined by the following equation:
\begin{equation}\label{m}
\,\langle(\Delta P_r)^2\rangle\,=\,\frac{\hbar^2}{4\langle r^2\rangle}\,
\end{equation}
As the SchEl's ground state has a spherical symmetry at $l\,=\,0\,$, the
following equalities take place:
\begin{equation}\label{n}
\,\langle(\Delta L_x)^2\rangle\,=\,\langle(\Delta L_y)^2\rangle\,=
\,\langle(\Delta L_z)^2\rangle\,;
\end{equation}
Hence we can obtain the minimal values of the dispersions (\ref{n}) through
division of eq.(\ref{l}) by the corresponding equality from eq.(\ref{n}),
which gives $\langle(\Delta L_j)^2\rangle\,\ge\,\frac{\hbar^2}{4}$ for each
component. In that way we obtain the following result:
\begin{equation}\label{o}
\,\langle(\Delta L_x)^2\rangle\,+\,\langle(\Delta L_y)^2\rangle\,+
\,\langle(\Delta L_z)^2\rangle\,=\,\frac{3\hbar^2}{4}\;
\end{equation}
Just now we are in a position to rewrite the expression (\ref{j}), using
(\ref{m}) and (\ref{o}), in the well-known handy form:
\begin{equation}\label{p}
\,E_t^o\,=\,\frac{1}{2\,m}\,\left[\,\frac{\hbar^2}{4r^2}\,+
\,\frac{3\hbar^2}{4r^2}\,\right]\,-\,\frac{Z\,e^2}{r}\,=
\,\frac{1}{2}\,\frac{\hbar^2}{m\,r^2}\,-\,\frac{Z\,e^2}{r}\,;
\end{equation}
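It is worth noting the standard consistency check here: minimizing the
right-hand side of (\ref{p}) with respect to $r$ gives
\[
\frac{d\,E_t^o}{d\,r}\,=\,-\,\frac{\hbar^2}{m\,r^3}\,+\,\frac{Z\,e^2}{r^2}
\,=\,0\,,
\qquad
r_o\,=\,\frac{\hbar^2}{m\,Z\,e^2}\,,
\qquad
E_t^o(r_o)\,=\,-\,\frac{m\,Z^2\,e^4}{2\,\hbar^2}\,,
\]
which coincides with the well-known ground-state energy of an H-like atom.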
It is extremely important to note here that we have used the undisturbed ElcInt
$E_j$ (\ref{HH}) of the QntElcMgnFld of StchVrtPhtns from the FlcVcm by dint
of the equations (\ref{GG}) and (\ref{HH}) in order to obtain the constraint
between the dynamical mutually conjugated quantities $r_j$ (\ref{MM}) and
$P_x$ (\ref{NN}) from the NrlClsMch in their operator forms ${\hat r}_j$ and
${\hat P}_j$ within the NrlQntMch. The quantum behaviour of the SchEl within
the NrlQntMch is caused by the ElcIntAct between its WllSpr ElmElcChrg and the
ElcInt $E_j$ of the undisturbed QntElcMgnFld of StchVrtPhtns from the FlcVcm.
In consequence of the continuous ElcIntAct of the SchEl's WllSpr ElmElcChrg
with the ElcInt of the QntElcMgnFld of StchVrtPhtns, it participates in the
Furthian quantized stochastic motion (FrthQntStchMtn), which is quite obviously
analogous to the Brownian classical stochastic motion (BrnClsStchMtn). As is
well known, the BrnClsPrts have no classical wave properties (ClsWvPrps), but
the FrthQntPrts have QntWvPrps and display them everywhere. The cause of this
distinction consists in the difference between the liquid and the FlcVcm.
Indeed, while the atoms and molecules within a liquid have no ClsWvPrps, all
excitations of the FlcVcm, and the FlcVcm itself, have QntWvPrps. Therefore
the FlcVcm transfers its QntWvPrps to the SchEl at their ElcIntAct with its
WllSpr ElmElcChrg.
\end{document}
|
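# Base.show pretty-printers for the interpreter's AST node types
# (Α, Ω, JlVal, Apply, Apply2, Op1, Op2, ConcArr, PrimFn, UDefFn),
# which are defined elsewhere in this file.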
import Base.show
show(io::IO, ::Α) = write(io, "α")
show(io::IO, ::Ω) = write(io, "ω")
show(io::IO, jl::JlVal) = write(io, join(map(string, jl.val), ' '))
function show(io::IO, a::Apply)
write(io, '(')
show(io, a.f)
write(io, ' ')
show(io, a.r)
write(io, ')')
end
function show(io::IO, a::Apply2)
write(io, '(')
show(io, a.l)
write(io, ' ')
show(io, a.f)
write(io, ' ')
show(io, a.r)
write(io, ')')
end
function show(io::IO, a::Op2{c}) where {c}
write(io, '(')
show(io, a.l)
write(io, c)
show(io, a.r)
write(io, ')')
end
function show(io::IO, a::Op1{c}) where {c}
write(io, '(')
show(io, a.l)
write(io, c)
write(io, ')')
end
show(io::IO, a::ConcArr) = (show(io, a.l); write(io, ' '); show(io, a.r))
show(io::IO, a::PrimFn{c}) where {c} = write(io, c)
show(io::IO, f::UDefFn) = (write(io, '{'); show(io, f.ast); write(io, '}'))
|
module TestGeneticVariation
using Test
import BioCore.Testing:
get_bio_fmt_specimens,
random_seq,
random_interval
import BioCore.Exceptions.MissingFieldException
using BioSequences, GeneticVariation
import BufferedStreams: BufferedInputStream
import IntervalTrees: IntervalValue
import YAML
function random_seq(::Type{A}, n::Integer) where A <: Alphabet
nts = alphabet(A)
probs = Vector{Float64}(undef, length(nts))
fill!(probs, 1 / length(nts))
return BioSequence{A}(random_seq(n, nts, probs))
end
fmtdir = get_bio_fmt_specimens()
include("vcf.jl")
include("bcf.jl")
include("site_counting.jl")
include("minhash.jl")
include("allele_freq.jl")
include("diversity_measures.jl")
include("seg_sites.jl")
end # Module TestGeneticVariation
|
[STATEMENT]
lemma lt_imp_ex_count_lt: "M < N \<Longrightarrow> \<exists>y. count M y < count N y"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. M < N \<Longrightarrow> \<exists>y. count M y < count N y
[PROOF STEP]
by (meson less_eq_multiset\<^sub>H\<^sub>O less_le_not_le) |
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import os
import sys
import scipy.io as sio
import tensorflow as tf
import numpy as np
import resnet_model
parser = argparse.ArgumentParser()

# Basic model parameters.
parser.add_argument('--data_dir', type=str,
                    default=os.path.join(os.path.dirname(__file__), '../datasets'),
                    help='The path to the SVHN data directory.')

parser.add_argument('--model_dir', type=str, default='/tmp/svhn_resnet_model',
                    help='The directory where the model will be stored.')

parser.add_argument('--resnet_size', type=int, default=32,
                    help='The size of the ResNet model to use.')

parser.add_argument('--train_epochs', type=int, default=160,
                    help='The number of epochs to train.')

parser.add_argument('--epochs_per_eval', type=int, default=1,
                    help='The number of epochs to run in between evaluations.')

parser.add_argument('--batch_size', type=int, default=128,
                    help='The number of images per batch.')

parser.add_argument('--activation', type=str, default='relu',
                    help='activation function: swish, relu, lrelu, tanh, elu')

parser.add_argument(
    '--data_format', type=str, default=None,
    choices=['channels_first', 'channels_last'],
    help='A flag to override the data format used in the model. channels_first '
         'provides a performance boost on GPU but is not always compatible '
         'with CPU. If left unspecified, the data format will be chosen '
         'automatically based on whether TensorFlow was built for CPU or GPU.')

_HEIGHT = 32
_WIDTH = 32
_DEPTH = 3
_NUM_IMAGES = {'train': 73257, 'test': 26032}
_NUM_CLASSES = 10
_WEIGHT_DECAY = 2e-4
_MOMENTUM = 0.9


def get_data(is_training, data_dir):
    """Read the .mat file, do data conversions, and return a TF dataset."""
    if is_training:
        filename = 'train_32x32.mat'
    else:
        filename = 'test_32x32.mat'
    filepath = os.path.join(data_dir, filename)
    assert os.path.exists(filepath)

    # Roll the image axis backwards to be the first axis.
    data = sio.loadmat(filepath)
    X = np.rollaxis(data['X'], 3)
    y = data['y'].reshape((X.shape[0], 1))
    num_images = _NUM_IMAGES['train'] if is_training else _NUM_IMAGES['test']
    assert X.shape[0] == num_images

    dataset = tf.data.Dataset.from_tensor_slices((X, y))
    dataset = dataset.map(
        lambda image, label: (tf.cast(image, tf.float32),
                              tf.squeeze(
                                  tf.one_hot(tf.cast(label, tf.int32), _NUM_CLASSES)
                              )))
    return dataset


def preprocess_image(is_training, image):
    if is_training:
        image = tf.image.resize_image_with_crop_or_pad(
            image, _HEIGHT + 8, _WIDTH + 8)
        image = tf.random_crop(image, [_HEIGHT, _WIDTH, _DEPTH])
        image = tf.image.random_flip_left_right(image)
    image = tf.image.per_image_standardization(image)
    return image


def input_fn(is_training, data_dir, batch_size, num_epochs=1):
    """Input function to the network."""
    dataset = get_data(is_training, data_dir)
    if is_training:
        dataset = dataset.shuffle(buffer_size=_NUM_IMAGES['train'])
    dataset = dataset.map(
        lambda image, label: (preprocess_image(is_training, image), label))
    dataset = dataset.prefetch(2 * batch_size)

    # Repeat the dataset N epochs before evaluation.
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.batch(batch_size)

    iterator = dataset.make_one_shot_iterator()
    images, labels = iterator.get_next()
    return images, labels


def svhn_model_fn(features, labels, mode, params):
    tf.summary.image('images', features, max_outputs=6)
    network = resnet_model.cifar10_resnet_v2_generator(
        params['resnet_size'], _NUM_CLASSES, params['data_format'], FLAGS.activation)
    inputs = tf.reshape(features, [-1, _HEIGHT, _WIDTH, _DEPTH])
    logits = network(inputs, mode == tf.estimator.ModeKeys.TRAIN)
    predictions = {
        'classes': tf.argmax(logits, axis=1),
        'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
    }
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)

    # Calculate loss.
    cross_entropy = tf.losses.softmax_cross_entropy(
        logits=logits, onehot_labels=labels)

    # Log cross entropy.
    tf.identity(cross_entropy, name='cross_entropy')
    tf.summary.scalar('cross_entropy', cross_entropy)

    # Add weight decay to the loss.
    loss = cross_entropy + _WEIGHT_DECAY * tf.add_n(
        [tf.nn.l2_loss(v) for v in tf.trainable_variables()])

    if mode == tf.estimator.ModeKeys.TRAIN:
        # Scale the learning rate linearly with the batch size. When the batch size
        # is 128, the learning rate should be 0.1.
        initial_learning_rate = 0.1 * params['batch_size'] / 128
        batches_per_epoch = _NUM_IMAGES['train'] / params['batch_size']
        global_step = tf.train.get_or_create_global_step()

        # Multiply the learning rate by 0.1 at 80 and 120 epochs.
        boundaries = [int(batches_per_epoch * epoch) for epoch in [80, 120]]
        values = [initial_learning_rate * decay for decay in [1, 0.1, 0.01]]
        learning_rate = tf.train.piecewise_constant(
            tf.cast(global_step, tf.int32), boundaries, values)

        # Create a tensor named learning_rate for logging purposes.
        tf.identity(learning_rate, name='learning_rate')
        tf.summary.scalar('learning_rate', learning_rate)

        optimizer = tf.train.MomentumOptimizer(
            learning_rate=learning_rate,
            momentum=_MOMENTUM)

        # Batch norm requires update ops to be added as a dependency to the train_op.
        update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
        with tf.control_dependencies(update_ops):
            train_op = optimizer.minimize(loss, global_step)
    else:
        train_op = None

    accuracy = tf.metrics.accuracy(
        tf.argmax(labels, axis=1), predictions['classes'])
    metrics = {'accuracy': accuracy}

    # Create a tensor named train_accuracy for logging purposes.
    tf.identity(accuracy[1], name='train_accuracy')
    tf.summary.scalar('train_accuracy', accuracy[1])

    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions,
        loss=loss,
        train_op=train_op,
        eval_metric_ops=metrics)


def main(unused_argv):
    # Using the Winograd non-fused algorithms provides a small performance boost.
    os.environ['TF_ENABLE_WINOGRAD_NONFUSED'] = '1'

    # Set up a RunConfig to only save checkpoints once per training cycle.
    run_config = tf.estimator.RunConfig().replace(save_checkpoints_secs=1e9)
    svhn_classifier = tf.estimator.Estimator(
        model_fn=svhn_model_fn, model_dir=FLAGS.model_dir, config=run_config,
        params={
            'resnet_size': FLAGS.resnet_size,
            'data_format': FLAGS.data_format,
            'batch_size': FLAGS.batch_size,
        })

    for _ in range(FLAGS.train_epochs // FLAGS.epochs_per_eval):
        tensors_to_log = {
            'learning_rate': 'learning_rate',
            'cross_entropy': 'cross_entropy',
            'train_accuracy': 'train_accuracy'
        }
        logging_hook = tf.train.LoggingTensorHook(
            tensors=tensors_to_log, every_n_iter=100)

        svhn_classifier.train(
            input_fn=lambda: input_fn(
                True, FLAGS.data_dir, FLAGS.batch_size, FLAGS.epochs_per_eval),
            hooks=[logging_hook])

        # Evaluate the model and print results.
        eval_results = svhn_classifier.evaluate(
            input_fn=lambda: input_fn(False, FLAGS.data_dir, FLAGS.batch_size))
        print(eval_results)


if __name__ == "__main__":
    tf.logging.set_verbosity(tf.logging.INFO)
    FLAGS, unparsed = parser.parse_known_args()
    tf.app.run(argv=[sys.argv[0]] + unparsed)
|
If $A$ is a subalgebra of a module $M$, then $A$ is an algebra over the restricted space of $M$. |
<a href="https://colab.research.google.com/github/luisarai/NMA2021/blob/main/tutorials/W2D3_BiologicalNeuronModels/student/W2D3_Tutorial1.ipynb" target="_parent"></a>
# Tutorial 1: The Leaky Integrate-and-Fire (LIF) Neuron Model
**Week 2, Day 3: Biological Neuron Models**
**By Neuromatch Academy**
__Content creators:__ Qinglong Gu, Songtin Li, John Murray, Richard Naud, Arvind Kumar
__Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Matthew Krause, Spiros Chavlis, Michael Waskom, Ethan Cheng
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'></p>
---
# Tutorial Objectives
*Estimated timing of tutorial: 1 hour, 10 min*
This is Tutorial 1 of a series on implementing realistic neuron models. In this tutorial, we will build up a leaky integrate-and-fire (LIF) neuron model and study its dynamics in response to various types of inputs. In particular, we are going to write a few lines of code to:
- simulate the LIF neuron model
- drive the LIF neuron with external inputs, such as direct currents, Gaussian white noise, and Poisson spike trains, etc.
- study how different inputs affect the LIF neuron's output (firing rate and spike time irregularity)
Here, we will especially emphasize identifying conditions (input statistics) under which a neuron can spike at low firing rates and in an irregular manner. The reason for focusing on this is that in most cases, neocortical neurons spike in an irregular manner.
```python
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/8djsm/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
```
---
# Setup
```python
# Imports
import numpy as np
import matplotlib.pyplot as plt
```
```python
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
# use NMA plot style
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
my_layout = widgets.Layout()
```
```python
# @title Plotting Functions
def plot_volt_trace(pars, v, sp):
  """
  Plot trajectory of membrane potential for a single neuron

  Expects:
  pars   : parameter dictionary
  v      : volt trajectory
  sp     : spike train

  Returns:
  figure of the membrane potential trajectory for a single neuron
  """
  V_th = pars['V_th']
  dt, range_t = pars['dt'], pars['range_t']
  if sp.size:
    sp_num = (sp / dt).astype(int) - 1
    v[sp_num] += 20  # draw nicer spikes
  plt.plot(pars['range_t'], v, 'b')
  plt.axhline(V_th, 0, 1, color='k', ls='--')
  plt.xlabel('Time (ms)')
  plt.ylabel('V (mV)')
  plt.legend(['Membrane\npotential', r'Threshold V$_{\mathrm{th}}$'],
             loc=[1.05, 0.75])
  plt.ylim([-80, -40])


def plot_GWN(pars, I_GWN, v, sp):
  """
  Args:
  pars   : parameter dictionary
  I_GWN  : Gaussian white noise input
  v      : membrane potential trace
  sp     : spike train

  Returns:
  figure of the Gaussian white noise input and the voltage response
  """
  plt.figure(figsize=(12, 4))
  plt.subplot(121)
  plt.plot(pars['range_t'][::3], I_GWN[::3], 'b')
  plt.xlabel('Time (ms)')
  plt.ylabel(r'$I_{GWN}$ (pA)')
  plt.subplot(122)
  plot_volt_trace(pars, v, sp)
  plt.tight_layout()


def my_hists(isi1, isi2, cv1, cv2, sigma1, sigma2):
  """
  Args:
  isi1   : vector with inter-spike intervals
  isi2   : vector with inter-spike intervals
  cv1    : coefficient of variation for isi1
  cv2    : coefficient of variation for isi2
  sigma1 : std of the GWN generating isi1
  sigma2 : std of the GWN generating isi2

  Returns:
  figure with two histograms, isi1, isi2
  """
  plt.figure(figsize=(11, 4))
  my_bins = np.linspace(10, 30, 20)
  plt.subplot(121)
  plt.hist(isi1, bins=my_bins, color='b', alpha=0.5)
  plt.xlabel('ISI (ms)')
  plt.ylabel('count')
  plt.title(r'$\sigma_{GWN}=$%.1f, CV$_{\mathrm{isi}}$=%.3f' % (sigma1, cv1))
  plt.subplot(122)
  plt.hist(isi2, bins=my_bins, color='b', alpha=0.5)
  plt.xlabel('ISI (ms)')
  plt.ylabel('count')
  plt.title(r'$\sigma_{GWN}=$%.1f, CV$_{\mathrm{isi}}$=%.3f' % (sigma2, cv2))
  plt.tight_layout()
  plt.show()
```
---
# Section 1: The Leaky Integrate-and-Fire (LIF) model
```python
# @title Video 1: Reduced Neuron Models
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id = id
      src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="av456396195", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="rSExvwCVRYg", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
This video introduces the reduction of a biological neuron to a simple leaky-integrate-fire (LIF) neuron model.
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
Now, it's your turn to implement one of the simplest mathematical models of a neuron: the leaky integrate-and-fire (LIF) model. The basic idea of the LIF neuron was proposed in 1907 by Louis Édouard Lapicque, long before we understood the electrophysiology of a neuron (see a translation of [Lapicque's paper](https://pubmed.ncbi.nlm.nih.gov/17968583/)). More details of the model can be found in the book [**Theoretical neuroscience**](http://www.gatsby.ucl.ac.uk/~dayan/book/) by Peter Dayan and Laurence F. Abbott.
The subthreshold membrane potential dynamics of a LIF neuron is described by
\begin{eqnarray}
C_m\frac{dV}{dt} = -g_L(V-E_L) + I,\quad (1)
\end{eqnarray}
where $C_m$ is the membrane capacitance, $V$ is the membrane potential, $g_L$ is the leak conductance ($g_L = 1/R$, the inverse of the leak resistance $R$ mentioned in previous tutorials), $E_L$ is the resting potential, and $I$ is the external input current.
Dividing both sides of the above equation by $g_L$ gives
\begin{align}
\tau_m\frac{dV}{dt} = -(V-E_L) + \frac{I}{g_L}\,,\quad (2)
\end{align}
where the $\tau_m$ is membrane time constant and is defined as $\tau_m=C_m/g_L$.
Note that dividing capacitance by conductance gives units of time!
Below, we will use Eqn.(2) to simulate LIF neuron dynamics.
If $I$ is sufficiently strong such that $V$ reaches a certain threshold value $V_{\rm th}$, $V$ is reset to a reset potential $V_{\rm reset}< V_{\rm th}$, and voltage is clamped to $V_{\rm reset}$ for $\tau_{\rm ref}$ ms, mimicking the refractoriness of the neuron during an action potential:
\begin{eqnarray}
\mathrm{if}\quad V(t_{\text{sp}})\geq V_{\rm th}&:& V(t)=V_{\rm reset} \text{ for } t\in(t_{\text{sp}}, t_{\text{sp}} + \tau_{\text{ref}}]
\end{eqnarray}
where $t_{\rm sp}$ is the spike time when $V(t)$ just exceeded $V_{\rm th}$.
(__Note__: in the lecture slides, $\theta$ corresponds to the threshold voltage $V_{th}$, and $\Delta$ corresponds to the refractory time $\tau_{\rm ref}$.)
</details>
Note that you have seen the LIF model before if you looked at the pre-reqs Python or Calculus days!
The LIF model captures the facts that a neuron:
- performs spatial and temporal integration of synaptic inputs
- generates a spike when the voltage reaches a certain threshold
- goes refractory during the action potential
- has a leaky membrane
The LIF model assumes that the spatial and temporal integration of inputs is linear. Also, membrane potential dynamics close to the spike threshold are much slower in LIF neurons than in real neurons.
## Coding Exercise 1: Python code to simulate the LIF neuron
We now write Python code to calculate our equation for the LIF neuron and simulate the LIF neuron dynamics. We will use the Euler method, which you saw in the linear systems case yesterday to numerically integrate this equation:
\begin{align*}
\tau_m\frac{dV}{dt} = -(V-E_L) + \frac{I}{g_L}\,
\end{align*}
where $V$ is the membrane potential, $g_L$ is the leak conductance, $E_L$ is the resting potential, $I$ is the external input current, and $\tau_m$ is membrane time constant.
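Concretely, with time step $dt$ the forward Euler discretization of this equation gives the update rule implemented in the exercise below:

\begin{align*}
V[n+1] = V[n] + \frac{dt}{\tau_m}\left(-(V[n]-E_L) + \frac{I[n]}{g_L}\right)
\end{align*}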
The cell below initializes a dictionary that stores parameters of the LIF neuron model and the simulation scheme. You can use `pars=default_pars(T=simulation_time, dt=time_step)` to get the parameters. Note that `simulation_time` and `time_step` have the unit `ms`. In addition, you can add a new parameter via `pars['New_param'] = value`.
```python
# @markdown Execute this code to initialize the default parameters
def default_pars(**kwargs):
  pars = {}

  # typical neuron parameters#
  pars['V_th'] = -55.     # spike threshold [mV]
  pars['V_reset'] = -75.  # reset potential [mV]
  pars['tau_m'] = 10.     # membrane time constant [ms]
  pars['g_L'] = 10.       # leak conductance [nS]
  pars['V_init'] = -75.   # initial potential [mV]
  pars['E_L'] = -75.      # leak reversal potential [mV]
  pars['tref'] = 2.       # refractory time (ms)

  # simulation parameters #
  pars['T'] = 400.  # Total duration of simulation [ms]
  pars['dt'] = .1   # Simulation time step [ms]

  # external parameters if any #
  for k in kwargs:
    pars[k] = kwargs[k]

  pars['range_t'] = np.arange(0, pars['T'], pars['dt'])  # Vector of discretized time points [ms]

  return pars


pars = default_pars()
print(pars)
```
{'V_th': -55.0, 'V_reset': -75.0, 'tau_m': 10.0, 'g_L': 10.0, 'V_init': -75.0, 'E_L': -75.0, 'tref': 2.0, 'T': 400.0, 'dt': 0.1, 'range_t': array([0.000e+00, 1.000e-01, 2.000e-01, ..., 3.997e+02, 3.998e+02,
3.999e+02])}
Complete the function below to simulate the LIF neuron when receiving external current inputs. You can use `v, sp = run_LIF(pars, Iinj)` to get the membrane potential (`v`) and spike train (`sp`) given the dictionary `pars` and input current `Iinj`.
```python
def run_LIF(pars, Iinj, stop=False):
  """
  Simulate the LIF dynamics with external input current

  Args:
  pars   : parameter dictionary
  Iinj   : input current [pA]. The injected current here can be a value
           or an array
  stop   : boolean. If True, use a current pulse

  Returns:
  rec_v  : membrane potential
  rec_sp : spike times
  """
  # Set parameters
  V_th, V_reset = pars['V_th'], pars['V_reset']
  tau_m, g_L = pars['tau_m'], pars['g_L']
  V_init, E_L = pars['V_init'], pars['E_L']
  dt, range_t = pars['dt'], pars['range_t']
  Lt = range_t.size
  tref = pars['tref']

  # Initialize voltage
  v = np.zeros(Lt)
  v[0] = V_init

  # Set current time course
  Iinj = Iinj * np.ones(Lt)

  # If current pulse, set beginning and end to 0
  if stop:
    Iinj[:int(len(Iinj) / 2) - 1000] = 0
    Iinj[int(len(Iinj) / 2) + 1000:] = 0

  # Loop over time
  rec_spikes = []  # record spike times
  tr = 0.          # the count for refractory duration

  for it in range(Lt - 1):
    if tr > 0:               # check if in refractory period
      v[it] = V_reset        # set voltage to reset
      tr = tr - 1            # reduce running counter of refractory period
    elif v[it] >= V_th:      # if voltage over threshold
      rec_spikes.append(it)  # record spike event
      v[it] = V_reset        # reset voltage
      tr = tref / dt         # set refractory time

    ########################################################################
    ## TODO for students: compute the membrane potential v, spike train sp #
    # Fill out function and remove
    # raise NotImplementedError('Student Exercise: calculate the dv/dt and the update step!')
    ########################################################################

    # Calculate the increment of the membrane potential
    dv = (-(v[it] - E_L) + Iinj[it] / g_L) * (dt / tau_m)

    # Update the membrane potential
    v[it + 1] = v[it] + dv

  # Get spike times in ms
  rec_spikes = np.array(rec_spikes) * dt

  return v, rec_spikes
# Get parameters
pars = default_pars(T=500)
# Simulate LIF model
v, sp = run_LIF(pars, Iinj=100, stop=True)
# Visualize
plot_volt_trace(pars, v, sp)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial1_Solution_60a1e954.py)
*Example output:*
---
# Section 2: Response of an LIF model to different types of input currents
*Estimated timing to here from start of tutorial: 20 min*
In the following section, we will learn how to inject direct current and white noise to study the response of an LIF neuron.
```python
# @title Video 2: Response of the LIF neuron to different inputs
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id = id
      src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="av541417171", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="preNGdab7Kk", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
## Section 2.1: Direct current (DC)
*Estimated timing to here from start of tutorial: 30 min*
### Interactive Demo 2.1: Parameter exploration of DC input amplitude
Here's an interactive demo that shows how the LIF neuron behavior changes for DC input (constant current) with different amplitudes. We plot the membrane potential of an LIF neuron. You may notice that the neuron generates a spike. But this is just a cosmetic spike, only for illustration purposes. In an LIF neuron, we only need to keep track of the times when the neuron hits the threshold so the postsynaptic neurons can be informed of the spike.
How much DC is needed to reach the threshold (rheobase current)? How does the membrane time constant affect the frequency of the neuron?
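As a worked check for the first question (using the default parameters defined in Section 1: $g_L = 10$ nS, $E_L = -75$ mV, $V_{\rm th} = -55$ mV): in steady state $dV/dt = 0$ gives $V_\infty = E_L + I/g_L$, and spiking requires $V_\infty > V_{\rm th}$, i.e.,

\begin{align*}
I > g_L (V_{\rm th} - E_L) = 10\,\text{nS} \times 20\,\text{mV} = 200\,\text{pA}
\end{align*}

so the rheobase current here is about 200 pA.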
```python
# @title
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
    I_dc=widgets.FloatSlider(50., min=0., max=300., step=10.,
                             layout=my_layout),
    tau_m=widgets.FloatSlider(10., min=2., max=20., step=2.,
                              layout=my_layout)
)
def diff_DC(I_dc=200., tau_m=10.):
  pars = default_pars(T=100.)
  pars['tau_m'] = tau_m
  v, sp = run_LIF(pars, Iinj=I_dc)
  plot_volt_trace(pars, v, sp)
  plt.show()
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial1_Solution_1058324c.py)
## Section 2.2: Gaussian white noise (GWN) current
*Estimated timing to here from start of tutorial: 38 min*
Given the noisy nature of neuronal activity _in vivo_, neurons usually receive complex, time-varying inputs.
To mimic this, we will now investigate the neuronal response when the LIF neuron receives Gaussian white noise $\xi(t)$ with mean 0 ($\mu = 0$) and some standard deviation $\sigma$.
Note that the GWN has zero mean, that is, it describes only the fluctuations of the input received by a neuron. We can thus modify our definition of GWN to have a nonzero mean value $\mu$ that equals the DC input, since this is the average input into the cell. The cell below defines the modified Gaussian white noise currents with nonzero mean $\mu$.
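A note on the discretization used by the helper `my_GWN` below: so that the strength of the fluctuations does not depend on the simulation step, each per-step Gaussian sample is scaled by $1/\sqrt{dt}$ (with $dt$ converted from ms to seconds in the code):

\begin{align*}
I_{\rm GWN}[n] = \mu + \sigma \frac{\xi[n]}{\sqrt{dt/1000}}, \qquad \xi[n] \sim \mathcal{N}(0, 1)
\end{align*}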
### Interactive Demo 2.2: LIF neuron Explorer for noisy input
The mean of the Gaussian white noise (GWN) is the amplitude of DC. Indeed, when $\sigma = 0$, GWN is just a DC.
So the question arises: how does the $\sigma$ of the GWN affect the spiking behavior of the neuron? For instance, we may want to know:
1. how the minimum input (i.e. $\mu$) needed to make a neuron spike changes with increase in $\sigma$
2. how the spike regularity changes with increase in $\sigma$
To get an intuition about these questions you can use the following interactive demo that shows how the LIF neuron behavior changes for noisy input with different amplitudes (the mean $\mu$) and fluctuation sizes ($\sigma$). We use a helper function to generate this noisy input current: `my_GWN(pars, mu, sig, myseed=False)`. Note that fixing the value of the random seed (e.g., `myseed=2020`) will allow you to obtain the same result every time you run this. We then use our `run_LIF` function to simulate the LIF model.
```python
# @markdown Execute to enable helper function `my_GWN`
def my_GWN(pars, mu, sig, myseed=False):
  """
  Function that generates Gaussian white noise input

  Args:
  pars       : parameter dictionary
  mu         : noise baseline (mean)
  sig        : noise amplitude (standard deviation)
  myseed     : random seed. int or boolean
               the same seed will give the same
               random number sequence

  Returns:
  I          : Gaussian white noise input
  """
  # Retrieve simulation parameters
  dt, range_t = pars['dt'], pars['range_t']
  Lt = range_t.size

  # Set random seed
  if myseed:
    np.random.seed(seed=myseed)
  else:
    np.random.seed()

  # Generate GWN
  # we divide here by 1000 to convert units to sec.
  I_gwn = mu + sig * np.random.randn(Lt) / np.sqrt(dt / 1000.)

  return I_gwn
help(my_GWN)
```
Help on function my_GWN in module __main__:

my_GWN(pars, mu, sig, myseed=False)
    Function that generates Gaussian white noise input
    
    Args:
    pars       : parameter dictionary
    mu         : noise baseline (mean)
    sig        : noise amplitude (standard deviation)
    myseed     : random seed. int or boolean
                 the same seed will give the same
                 random number sequence
    
    Returns:
    I          : Gaussian white noise input
```python
# @title
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
    mu_gwn=widgets.FloatSlider(200., min=100., max=300., step=5.,
                               layout=my_layout),
    sig_gwn=widgets.FloatSlider(2.5, min=0., max=5., step=.5,
                                layout=my_layout)
)
def diff_GWN_to_LIF(mu_gwn, sig_gwn):
  pars = default_pars(T=100.)
  I_GWN = my_GWN(pars, mu=mu_gwn, sig=sig_gwn)
  v, sp = run_LIF(pars, Iinj=I_GWN)

  plt.figure(figsize=(12, 4))
  plt.subplot(121)
  plt.plot(pars['range_t'][::3], I_GWN[::3], 'b')
  plt.xlabel('Time (ms)')
  plt.ylabel(r'$I_{GWN}$ (pA)')
  plt.subplot(122)
  plot_volt_trace(pars, v, sp)
  plt.tight_layout()
  plt.show()
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial1_Solution_2de5d8a9.py)
### Think! 2.2: Analyzing GWN Effects on Spiking
- As we increase the input average ($\mu$) or the input fluctuation ($\sigma$), the spike count changes. How much can we increase the spike count, and what might be the relationship between GWN mean/std or DC value and spike count?
- We have seen above that when we inject DC, the neuron spikes in a regular manner (clock like), and this regularity is reduced when GWN is injected. The question is, how irregular can we make the neurons spiking by changing the parameters of the GWN?
We will see the answers to these questions in the next section but discuss first!
---
# Section 3: Firing rate and spike time irregularity
*Estimated timing to here from start of tutorial: 48 min*
When we plot the output firing rate as a function of GWN mean or DC value, it is called the input-output transfer function of the neuron (or simply the F-I curve).
Spike regularity can be quantified as the **coefficient of variation (CV) of the inter-spike-interval (ISI)**:
\begin{align}
\text{CV}_{\text{ISI}} = \frac{std(\text{ISI})}{mean(\text{ISI})}
\end{align}
A Poisson train is an example of a highly irregular process, for which $\textbf{CV}_{\textbf{ISI}} \textbf{= 1}$; for a clock-like (regular) process we have $\textbf{CV}_{\textbf{ISI}} \textbf{= 0}$ because **std(ISI) = 0**.
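As a quick numerical sanity check of these two limits (a small sketch, not part of the exercises), you can compute the CV directly from simulated ISIs:

```python
# Sanity check: CV of exponential (Poisson-like) vs. constant ISIs
import numpy as np

np.random.seed(0)
isi_poisson = np.random.exponential(scale=20., size=10000)  # exponential ISIs (ms)
isi_regular = np.full(10000, 20.)                           # clock-like train, identical ISIs

print(np.std(isi_poisson) / np.mean(isi_poisson))  # close to 1
print(np.std(isi_regular) / np.mean(isi_regular))  # exactly 0
```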
## Interactive Demo 3A: F-I Explorer for different `sig_gwn`
How does the F-I curve of the LIF neuron change as we increase the $\sigma$ of the GWN? We can already expect that the F-I curve will be stochastic and the results will vary from one trial to another. But will there be any other change compared to the F-I curve measured using DC?
Here's an interactive demo that shows how the F-I curve of a LIF neuron changes for different levels of fluctuation $\sigma$.
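For the DC case there is also a closed-form reference curve you can compare against (a standard LIF result, stated here without derivation): with $V_\infty = E_L + I/g_L$, the firing rate is

\begin{align*}
f = \left[\tau_{\rm ref} + \tau_m \ln\frac{V_\infty - V_{\rm reset}}{V_\infty - V_{\rm th}}\right]^{-1}
\end{align*}

for $V_\infty > V_{\rm th}$, and zero otherwise.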
```python
# @title
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
    sig_gwn=widgets.FloatSlider(3.0, min=0., max=6., step=0.5,
                                layout=my_layout)
)
def diff_std_affect_fI(sig_gwn):
  pars = default_pars(T=1000.)
  I_mean = np.arange(100., 400., 10.)
  spk_count = np.zeros(len(I_mean))
  spk_count_dc = np.zeros(len(I_mean))

  for idx in range(len(I_mean)):
    I_GWN = my_GWN(pars, mu=I_mean[idx], sig=sig_gwn, myseed=2020)
    v, rec_spikes = run_LIF(pars, Iinj=I_GWN)
    v_dc, rec_sp_dc = run_LIF(pars, Iinj=I_mean[idx])
    spk_count[idx] = len(rec_spikes)
    spk_count_dc[idx] = len(rec_sp_dc)

  # Plot the F-I curve, i.e., output firing rate as a function of input mean
  plt.figure()
  plt.plot(I_mean, spk_count, 'k',
           label=r'$\sigma_{\mathrm{GWN}}=%.2f$' % sig_gwn)
  plt.plot(I_mean, spk_count_dc, 'k--', alpha=0.5, lw=4, dashes=(2, 2),
           label='DC input')
  plt.ylabel('Spike count')
  plt.xlabel('Average injected current (pA)')
  plt.legend(loc='best')
  plt.show()
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial1_Solution_eba2370f.py)
## Coding Exercise 3: Compute $CV_{ISI}$ values
As shown above, the F-I curve becomes smoother while increasing the amplitude of the fluctuation ($\sigma$). In addition, the fluctuation can also change the irregularity of the spikes. Let's investigate the effect of $\mu=250$ with $\sigma=0.5$ vs $\sigma=3$.
Fill in the code below to compute the ISIs, then plot the histogram of the ISIs and compute the $CV_{ISI}$. Note that you can use `np.diff` to calculate the ISIs.
```python
def isi_cv_LIF(spike_times):
  """
  Calculates the inter-spike intervals (isi) and
  the coefficient of variation (cv) for a given spike_train

  Args:
  spike_times : (n, ) vector with the spike times (ndarray)

  Returns:
  isi         : (n-1,) vector with the inter-spike intervals (ms)
  cv          : coefficient of variation of isi (float)
  """
  ########################################################################
  ## TODO for students: compute the inter-spike intervals and their CV   #
  # Fill out function and remove
  # raise NotImplementedError('Student Exercise: calculate the isi and the cv!')
  ########################################################################
  if len(spike_times) >= 2:
    # Compute isi
    isi = np.diff(spike_times)
    # Compute cv
    cv = np.std(isi) / np.mean(isi)
  else:
    isi = np.nan
    cv = np.nan

  return isi, cv


# Set parameters
pars = default_pars(T=1000.)
mu_gwn = 250
sig_gwn1 = 0.5
sig_gwn2 = 3.0

# Run LIF model for sigma = 0.5
I_GWN1 = my_GWN(pars, mu=mu_gwn, sig=sig_gwn1, myseed=2020)
_, sp1 = run_LIF(pars, Iinj=I_GWN1)

# Run LIF model for sigma = 3
I_GWN2 = my_GWN(pars, mu=mu_gwn, sig=sig_gwn2, myseed=2020)
_, sp2 = run_LIF(pars, Iinj=I_GWN2)

# Compute ISIs/CV
isi1, cv1 = isi_cv_LIF(sp1)
isi2, cv2 = isi_cv_LIF(sp2)

# Visualize
my_hists(isi1, isi2, cv1, cv2, sig_gwn1, sig_gwn2)
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial1_Solution_27d69c89.py)
*Example output:*
## Interactive Demo 3B: Spike irregularity explorer for different `sig_gwn`
In the above illustration, we see that the CV of inter-spike-interval (ISI) distribution depends on $\sigma$ of GWN. What about the mean of GWN, should that also affect the CV$_{\rm ISI}$? If yes, how? Does the efficacy of $\sigma$ in increasing the CV$_{\rm ISI}$ depend on $\mu$?
In the following interactive demo, you will examine how different levels of fluctuation $\sigma$ affect the CVs for different average injected currents ($\mu$).
1. Does the standard deviation of the injected current affect the F-I curve in any qualitative manner?
2. Why does increasing the mean of GWN reduce the $CV_{ISI}$?
3. If you plot spike count (or rate) vs. $CV_{ISI}$, should there be a relationship between the two? Try it out yourself.
```python
#@title
#@markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
    sig_gwn=widgets.FloatSlider(0.0, min=0., max=10.,
                                step=0.5, layout=my_layout)
)
def diff_std_affect_fI(sig_gwn):
  pars = default_pars(T=1000.)
  I_mean = np.arange(100., 400., 20)
  spk_count = np.zeros(len(I_mean))
  cv_isi = np.full(len(I_mean), np.nan)  # NaN until enough spikes are recorded

  for idx in range(len(I_mean)):
    I_GWN = my_GWN(pars, mu=I_mean[idx], sig=sig_gwn)
    v, rec_spikes = run_LIF(pars, Iinj=I_GWN)
    spk_count[idx] = len(rec_spikes)
    if len(rec_spikes) > 3:
      isi = np.diff(rec_spikes)
      cv_isi[idx] = np.std(isi) / np.mean(isi)

  # Plot the spike irregularity as a function of input mean
  plt.figure()
  plt.plot(I_mean[spk_count > 5], cv_isi[spk_count > 5], 'bo', alpha=0.5)
  plt.xlabel('Average injected current (pA)')
  plt.ylabel(r'Spike irregularity ($\mathrm{CV}_\mathrm{ISI}$)')
  plt.ylim(-0.1, 1.5)
  plt.grid(True)
  plt.show()

  # Extra check for question 3: spike count vs. CV_ISI
  plt.figure()
  plt.plot(cv_isi, spk_count, 'ko', alpha=0.5)
  plt.xlabel(r'Spike irregularity ($\mathrm{CV}_\mathrm{ISI}$)')
  plt.ylabel('Spike count')
  plt.grid(True)
  plt.show()
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial1_Solution_c6f1c4a2.py)
---
# Summary
*Estimated timing of tutorial: 1 hour, 10 min*
Congratulations! You've just built a leaky integrate-and-fire (LIF) neuron model from scratch, and studied its dynamics in response to various types of inputs, having:
- simulated the LIF neuron model
- driven the LIF neuron with external inputs, such as direct current and Gaussian white noise
- studied how different inputs affect the LIF neuron's output (firing rate and spike time irregularity),
with a special focus on the low-rate and irregular firing regime to mimic real cortical neurons. The next tutorial will look at how spiking statistics may be influenced by a neuron's input statistics.
If you have extra time, look at the bonus sections below to explore a different type of noise input and learn about extensions to integrate-and-fire models.
---
# Bonus
---
## Bonus Section 1: Ornstein-Uhlenbeck Process
When a neuron receives spiking input, the synaptic current is shot noise, a kind of colored noise whose spectrum is determined by the synaptic kernel time constant. That is, a neuron is driven by **colored noise** and not GWN.
We can model colored noise using the Ornstein-Uhlenbeck process: filtered white noise.
We next study if the input current is temporally correlated and is modeled as an Ornstein-Uhlenbeck process $\eta(t)$, i.e., low-pass filtered GWN with a time constant $\tau_{\eta}$:
$$\tau_\eta \frac{d}{dt}\eta(t) = \mu-\eta(t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t).$$
**Hint:** An OU process as defined above has mean
$$E[\eta(t)]=\mu$$
and autocovariance
$$\mathrm{Cov}\left[\eta(t),\eta(t+\tau)\right]=\sigma_\eta^2 e^{-|\tau|/\tau_\eta},$$
which can be used to check your code.
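The helper `my_OU` below integrates this equation with the Euler-Maruyama scheme; written out, the per-step update is

\begin{align*}
\eta[n+1] = \eta[n] + \frac{dt}{\tau_\eta}\left(\mu - \eta[n]\right) + \sigma_\eta \sqrt{\frac{2\,dt}{\tau_\eta}}\,\xi[n], \qquad \xi[n] \sim \mathcal{N}(0, 1)
\end{align*}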
```python
# @markdown Execute this cell to get helper function `my_OU`
def my_OU(pars, mu, sig, myseed=False):
  """
  Function that produces Ornstein-Uhlenbeck input

  Args:
  pars       : parameter dictionary
  mu         : noise baseline (mean)
  sig        : noise amplitude
  myseed     : random seed. int or boolean

  Returns:
  I_ou       : Ornstein-Uhlenbeck input current
  """
  # Retrieve simulation parameters
  dt, range_t = pars['dt'], pars['range_t']
  Lt = range_t.size
  tau_ou = pars['tau_ou']  # [ms]

  # Set random seed
  if myseed:
    np.random.seed(seed=myseed)
  else:
    np.random.seed()

  # Initialize
  noise = np.random.randn(Lt)
  I_ou = np.zeros(Lt)
  I_ou[0] = noise[0] * sig

  # Generate OU
  for it in range(Lt - 1):
    I_ou[it + 1] = I_ou[it] + (dt / tau_ou) * (mu - I_ou[it]) + np.sqrt(2 * dt / tau_ou) * sig * noise[it + 1]

  return I_ou
help(my_OU)
```
### Bonus Interactive Demo 1: LIF Explorer with OU input
In the following, we will check how a neuron responds to a noisy current that follows the statistics of an OU process.
- How does the OU type input change neuron responsiveness?
- What do you think will happen to the spike pattern and rate if you increased or decreased the time constant of the OU process?
```python
# @title
# @markdown Remember to enable the widget by running the cell!
my_layout.width = '450px'
@widgets.interact(
    tau_ou=widgets.FloatSlider(10.0, min=5., max=20.,
                               step=2.5, layout=my_layout),
    sig_ou=widgets.FloatSlider(10.0, min=5., max=40.,
                               step=2.5, layout=my_layout),
    mu_ou=widgets.FloatSlider(190.0, min=180., max=220.,
                              step=2.5, layout=my_layout)
)
def LIF_with_OU(tau_ou=10., sig_ou=40., mu_ou=200.):
  pars = default_pars(T=1000.)
  pars['tau_ou'] = tau_ou  # [ms]
  I_ou = my_OU(pars, mu_ou, sig_ou)
  v, sp = run_LIF(pars, Iinj=I_ou)

  plt.figure(figsize=(12, 4))
  plt.subplot(121)
  plt.plot(pars['range_t'], I_ou, 'b', lw=1.0)
  plt.xlabel('Time (ms)')
  plt.ylabel(r'$I_{\mathrm{OU}}$ (pA)')
  plt.subplot(122)
  plot_volt_trace(pars, v, sp)
  plt.tight_layout()
  plt.show()
```
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W2D3_BiologicalNeuronModels/solutions/W2D3_Tutorial1_Solution_cf5b6a80.py)
---
## Bonus Section 2: Generalized Integrate-and-Fire models
The LIF model is not the only abstraction of real neurons. If you want to learn about more realistic neuronal models, watch the Bonus Video!
```python
# @title Video 3 (Bonus): Extensions to Integrate-and-Fire models
from ipywidgets import widgets

out2 = widgets.Output()
with out2:
  from IPython.display import IFrame
  class BiliVideo(IFrame):
    def __init__(self, id, page=1, width=400, height=300, **kwargs):
      self.id = id
      src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
      super(BiliVideo, self).__init__(src, width, height, **kwargs)

  video = BiliVideo(id="", width=854, height=480, fs=1)
  print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
  display(video)

out1 = widgets.Output()
with out1:
  from IPython.display import YouTubeVideo
  video = YouTubeVideo(id="G0b6wLhuQxE", width=854, height=480, fs=1, rel=0)
  print('Video available at https://youtube.com/watch?v=' + video.id)
  display(video)

out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
|
[STATEMENT]
lemma SetAif1: "bval b s \<Longrightarrow> preSet upds (IF b THEN C1 ELSE C2) l s = preSet upds C1 l s"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. bval b s \<Longrightarrow> preSet upds (IF b THEN C1 ELSE C2) l s = preSet upds C1 l s
[PROOF STEP]
apply(simp)
[PROOF STATE]
proof (prove)
goal:
No subgoals!
[PROOF STEP]
done |
library(data.table)
library(plyr)
#############################################################################
make.na <- function(x) {NA}
#############################################################################
args=(commandArgs(TRUE))
print(args)
if(length(args)==0){ stop("No arguments supplied. Exiting.") }
sentfile <- args[1]
datadir <- args[2]
feature <- toupper(args[3])
segtype <- args[4]
sents <- fread(sentfile)
#print(sents)
sents <- sents[order(wstart)]
wmeta <- c("niteid","wstart","wend","conv","participant","nxt_agent")
fvalnames <- c(grep(feature, names(sents), value=T))
featnames <- c(wmeta, fvalnames)
#print("*** featnames:")
#print(featnames)
## Get next and previous sentence ids
nextsents <- c(sents[, tail(niteid, -1)], "NONE")
prevsents <- c("NONE", sents[, head(niteid, -1)])
currsents <- sents[,niteid]
currpara <- unlist(lapply(strsplit(currsents, split="\\."), function(x) {as.numeric(x[4])}))
nextpara <- unlist(lapply(strsplit(nextsents, split="\\."), function(x) {as.numeric(x[4])}))
sents <- data.table(sents, para.id=currpara, next.para.id=nextpara, para.change=(currpara != nextpara),
prev.sent.id=prevsents, next.sent.id=nextsents)
#write.table(sents, file="/tmp/sents.txt")
## Make a line for the case where there's nothing following.
nextfeats <- sents[niteid %in% nextsents, featnames, with=F]
nofeats <- copy(nextfeats[1])
nofeats <- data.table(colwise(make.na)(nofeats))
nofeats$niteid <- "NONE"
nextfeats <- rbindlist(list(nextfeats, nofeats))
setnames(nextfeats, names(nextfeats), gsub("^", "next.", names(nextfeats)))
#print(nextfeats)
## Join in
setkey(sents, next.sent.id)
setkey(nextfeats, next.niteid)
nextsents <- sents[nextfeats][!is.na(niteid)][order(wstart)]
nextdiffs <- nextsents[,list(niteid, wstart, wend, pause.dur=next.wstart-wend)]
for (val in fvalnames) {
#print(val)
nval <- paste("next", val, sep=".")
dval <- nextsents[[nval]] - nextsents[[val]]
nextdiffs <- data.table(nextdiffs, dval=dval)
setnames(nextdiffs, "dval", paste("ndiff", val,sep="."))
}
setkey(nextdiffs, niteid, wstart, wend)
setkey(nextsents, niteid, wstart, wend)
sentdiffs <- nextsents[nextdiffs]
####################################
prevfeats <- sents[niteid %in% prevsents, featnames, with=F]
nofeats <- copy(prevfeats[1])
nofeats <- data.table(colwise(make.na)(nofeats))
nofeats$niteid <- "NONE"
prevfeats <- rbindlist(list(nofeats, prevfeats))
setnames(prevfeats, names(prevfeats), gsub("^", "prev.", names(prevfeats)))
## Join in
setkey(sentdiffs, prev.sent.id)
setkey(prevfeats, prev.niteid)
prevsents <- sentdiffs[prevfeats][!is.na(niteid)][order(wstart)]
prevdiffs <- prevsents[,list(niteid, wstart, wend, pause.dur=prev.wstart-wend)]
for (val in fvalnames) {
# print(val)
nval <- paste("prev", val, sep=".")
dval <- prevsents[[nval]] - prevsents[[val]]
prevdiffs <- data.table(prevdiffs, dval=dval)
setnames(prevdiffs, "dval", paste("pdiff", val,sep="."))
}
setkey(prevdiffs, niteid, wstart, wend)
setkey(prevsents, niteid, wstart, wend)
sentdiffs <- prevsents[prevdiffs]
#print(names(sentdiffs))
outfile <- paste(datadir, "/segs/", tolower(feature), "-diff-", segtype, "/", basename(sentfile), ".", segtype, "diff.txt", sep="")
print(outfile)
write.table(sentdiffs[order(wstart)], file=outfile, quote=F, row.names=F, col.names=T)
#print(sentdiffs[order(wstart),list(niteid, para.id, next.para.id, para.change, pause.dur, mean.normI0, pdiff.mean.normI0)][1:20])
#print(summary(lastword))
|
(** Borrowed from Pierce's "Software Foundations" *)
Require Import Arith Arith.EqNat.
Require Import Lia.
Inductive id : Type :=
Id : nat -> id.
Reserved Notation "m i<= n" (at level 70, no associativity).
Reserved Notation "m i> n" (at level 70, no associativity).
Reserved Notation "m i< n" (at level 70, no associativity).
Inductive le_id : id -> id -> Prop :=
le_conv : forall n m, n <= m -> (Id n) i<= (Id m)
where "n i<= m" := (le_id n m).
Inductive lt_id : id -> id -> Prop :=
lt_conv : forall n m, n < m -> (Id n) i< (Id m)
where "n i< m" := (lt_id n m).
Inductive gt_id : id -> id -> Prop :=
gt_conv : forall n m, n > m -> (Id n) i> (Id m)
where "n i> m" := (gt_id n m).
Notation "n i<= m" := (le_id n m).
Notation "n i> m" := (gt_id n m).
Notation "n i< m" := (lt_id n m).
Ltac prove_with th :=
intros;
repeat (match goal with H: id |- _ => destruct H end);
match goal with n: nat, m: nat |- _ => set (th n m) end;
repeat match goal with H: _ + {_} |- _ => inversion_clear H end;
try match goal with H: {_} + {_} |- _ => inversion_clear H end;
repeat
match goal with
H: ?n < ?m |- _ + {Id ?n i< Id ?m} => right
| H: ?n < ?m |- _ + {_} => left
| H: ?n > ?m |- _ + {Id ?n i> Id ?m} => right
| H: ?n > ?m |- _ + {_} => left
| H: ?n < ?m |- {_} + {Id ?n i< Id ?m} => right
| H: ?n < ?m |- {Id ?n i< Id ?m} + {_} => left
| H: ?n > ?m |- {_} + {Id ?n i> Id ?m} => right
| H: ?n > ?m |- {Id ?n i> Id ?m} + {_} => left
| H: ?n = ?m |- _ + {Id ?n = Id ?m} => right
| H: ?n = ?m |- _ + {_} => left
| H: ?n = ?m |- {_} + {Id ?n = Id ?m} => right
| H: ?n = ?m |- {Id ?n = Id ?m} + {_} => left
| H: ?n <> ?m |- _ + {Id ?n <> Id ?m} => right
| H: ?n <> ?m |- _ + {_} => left
| H: ?n <> ?m |- {_} + {Id ?n <> Id ?m} => right
| H: ?n <> ?m |- {Id ?n <> Id ?m} + {_} => left
| H: ?n <= ?m |- _ + {Id ?n i<= Id ?m} => right
| H: ?n <= ?m |- _ + {_} => left
| H: ?n <= ?m |- {_} + {Id ?n i<= Id ?m} => right
| H: ?n <= ?m |- {Id ?n i<= Id ?m} + {_} => left
end;
try (constructor; assumption); congruence.
Lemma lt_eq_lt_id_dec: forall (id1 id2 : id), {id1 i< id2} + {id1 = id2} + {id2 i< id1}.
Proof. admit. Admitted.
Lemma gt_eq_gt_id_dec: forall (id1 id2 : id), {id1 i> id2} + {id1 = id2} + {id2 i> id1}.
Proof. admit. Admitted.
Lemma le_gt_id_dec : forall id1 id2 : id, {id1 i<= id2} + {id1 i> id2}.
Proof. admit. Admitted.
Lemma id_eq_dec : forall id1 id2 : id, {id1 = id2} + {id1 <> id2}.
Proof. admit. Admitted.
Lemma eq_id : forall (T:Type) x (p q:T), (if id_eq_dec x x then p else q) = p.
Proof. admit. Admitted.
Lemma neq_id : forall (T:Type) x y (p q:T), x <> y -> (if id_eq_dec x y then p else q) = q.
Proof. admit. Admitted.
Lemma lt_gt_id_false : forall id1 id2 : id,
id1 i> id2 -> id2 i> id1 -> False.
Proof. admit. Admitted.
Lemma le_gt_id_false : forall id1 id2 : id,
id2 i<= id1 -> id2 i> id1 -> False.
Proof. admit. Admitted.
Lemma le_lt_eq_id_dec : forall id1 id2 : id,
id1 i<= id2 -> {id1 = id2} + {id2 i> id1}.
Proof. admit. Admitted.
Lemma neq_lt_gt_id_dec : forall id1 id2 : id,
id1 <> id2 -> {id1 i> id2} + {id2 i> id1}.
Proof. admit. Admitted.
Lemma eq_gt_id_false : forall id1 id2 : id,
id1 = id2 -> id1 i> id2 -> False.
Proof. admit. Admitted.
|
function mM = minandmax2est(f, N)
%MINANDMAX2EST Estimates the minimum and maximum of a SEPARABLEAPPROX.
% mM = MINANDMAX2EST(F) returns estimates for the minimum and maximum of the
% SEPARABLEAPPROX F over its domain. mM is a vector of length 2 such that
% mM(1) is the estimated minimum and mM(2) is the estimated maximum.
%
% mM = MINANDMAX2EST(F, N) returns estimates for the minimum and maximum of
% the SEPARABLEAPPROX F over its domain, based on samples on an N by N grid
% (N = 33 by default).
%
% See also MINANDMAX2.
% Copyright 2017 by The University of Oxford and The Chebfun Developers.
% See http://www.chebfun.org/ for Chebfun information.
if ( isempty(f) )
mM = [0, 0];
return
end
if ( ( nargin < 2 ) || isempty(N) )
% Default to N = 33:
N = 33;
end
% Sample f on an appropriate grid:
vals = sample(f, N, N);
% Make result a column vector:
vals = vals(:);
% Get min and max:
mM = [ min(vals), max(vals) ];
end
|
There is so much happening at Citrix these days that it can be hard to keep track. To help out, next month at Synergy SFO the CTO Office is organizing a session with three of our CTOs.
Sheng Liang – CTO, Cloud Platforms Group.
In this session we hope to give you some detail on what has been going on, and project that forward, to give you a better perspective of where we see the market headed and how we intend to help you make the most of it.
This will be a great session for product strategists and architects as you're thinking about how your future may unfold.
If you have any real hot items, drop me a line and I will make sure we try to get them covered. If not, feel free to hang out after the session and we can talk more.
!
! -------------------------------------------------------------
! N I N E 1 2
! -------------------------------------------------------------
!
! *
! THIS PACKAGE DETERMINES THE VALUES OF 9j COEFFICIENT *
! *
! | J1/2 J2/2 J3/2 | *
! | L1/2 L2/2 J3/2 | *
! | K1/2 + 1 K1/2 1 | *
! *
! Written by G. Gaigalas, *
! Vanderbilt University, Nashville October 1996 *
!
SUBROUTINE NINE12(J1, J2, J3, L1, L2, K1, A)
!-----------------------------------------------
! M o d u l e s
!-----------------------------------------------
USE vast_kind_param, ONLY: DOUBLE
USE CONSTS_C
!...Translated by Pacific-Sierra Research 77to90 4.3E 10:09:06 11/16/01
!...Switches:
!-----------------------------------------------
! I n t e r f a c e B l o c k s
!-----------------------------------------------
USE sixj_I
IMPLICIT NONE
!-----------------------------------------------
! D u m m y A r g u m e n t s
!-----------------------------------------------
INTEGER :: J1
INTEGER :: J2
INTEGER :: J3
INTEGER :: L1
INTEGER :: L2
INTEGER :: K1
REAL(DOUBLE) , INTENT(OUT) :: A
!-----------------------------------------------
! L o c a l V a r i a b l e s
!-----------------------------------------------
INTEGER :: K2
REAL(DOUBLE) :: S1, SKAI1, S2, SKAI2, S, VARD
!-----------------------------------------------
K2 = K1 - 2
CALL SIXJ (J1, L1, K1, L2, J2, J3, 0, S1)
SKAI1 = DBLE(J2 - L2 + K1)*DBLE(L2 - J2 + K1)*DBLE(J2 + L2 + K2 + 4)*&
DBLE(J2 + L2 - K2)
S1 = S1*DSQRT(SKAI1)
CALL SIXJ (J1, L1, K2, L2, J2, J3, 0, S2)
SKAI2 = DBLE(J1 - L1 + K1)*DBLE(L1 - J1 + K1)*DBLE(J1 + L1 + K2 + 4)*&
DBLE(J1 + L1 - K2)
S2 = S2*DSQRT(SKAI2)
S = S1 + S2
! VARD=DSQRT(DBLE(8*(K2+3)*(K2+2)*(K2+1)*J3*(J3+2)*(J3+1)))
VARD = DSQRT(DBLE(8*(K2 + 3))*DBLE(K2 + 2)*DBLE(K2 + 1)*DBLE(J3)*DBLE(J3&
+ 2)*DBLE(J3 + 1))
A = S/VARD
IF (MOD(L1 + J2 + K2 + J3,4) /= 0) A = -A
RETURN
END SUBROUTINE NINE12
|
```python
import sys
if '..' not in sys.path:
    sys.path.insert(0, '..')
import control
import sympy
import numpy as np
import matplotlib.pyplot as plt
import ulog_tools as ut
from urllib.request import urlopen
import pyulog
%matplotlib inline
%load_ext autoreload
%autoreload 2
```
# System Identification
```python
def model_to_acc_tf(m):
    # open loop, rate output, mix input plant
    acc_tf = m['gain'] * control.tf(*control.pade(m['delay'], 1))  # order 1 approx
    tf_integrator = control.tf((1), (1, 0))
    return acc_tf * tf_integrator


def pid_design(G_rate_ol, d_tc=1.0/125, K0=[0.01, 0.01, 0.01, 5]):
    K_rate, G_cl_rate, G_ol_rate_comp = ut.logsysid.pid_design(
        G_rate_ol, K0[:3], d_tc)
    tf_integrator = control.tf((1), (1, 0))
    K, G_cl, G_ol_comp = ut.logsysid.pid_design(
        G_cl_rate * tf_integrator, K0[3:], d_tc,
        use_I=False, use_D=False)
    return np.vstack([K_rate, K])


def pid_tf(use_P=True, use_I=True, use_D=True, d_tc=0.1):
    H = []
    if use_P:
        H += [control.tf(1, 1)]
    if use_I:
        H += [control.tf((1), (1, 0))]
    if use_D:
        H += [control.tf((1, 0), (d_tc, 1))]
    H = np.array([H]).T
    H_num = [[H[i][j].num[0][0] for i in range(H.shape[0])] for j in range(H.shape[1])]
    H_den = [[H[i][j].den[0][0] for i in range(H.shape[0])] for j in range(H.shape[1])]
    H = control.tf(H_num, H_den)
    return H
```
```python
log_file = ut.ulog.download_log('http://review.px4.io/download?log=35b27fdb-6a93-427a-b634-72ab45b9609e', '/tmp')
data = ut.sysid.prepare_data(log_file)
res = ut.sysid.attitude_sysid(data)
res
```
# Control System Model
```python
fs = ut.ulog.sample_frequency(data)
fs
```
232.99287433805779
The identified model for each axis is a pure gain $k$ acting through a delay of $d$ samples:
$ y[n] = k \, u[n - d] $
$ Y(z) = k z^{-d} U(z) $
$ G(z) = \frac{Y(z)}{U(z)} = \frac{k}{z^{d}} $
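As a sanity check, the same $k/z^{d}$ structure can be built directly with python-control. This is a hedged sketch: the gain and delay values below are copied from the roll identification result shown further down, and the construction inside `ut.sysid.delay_gain_model_to_tf` may differ.
```python
# Sketch (assumed values): G(z) = k / z^d as a discrete transfer function.
# k = 45.69 and d = 17 are taken from the roll result below.
k, d = 45.69, 17
G_delay_gain = control.tf([k], [1.0] + [0.0]*d, 1.0/fs)  # num = k, den = z^d
G_delay_gain
```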
```python
roll_tf = ut.sysid.delay_gain_model_to_tf(res['roll']['model'])
roll_tf
```
45.69
-----
z^17
dt = 0.00429197675183
```python
pitch_tf = ut.sysid.delay_gain_model_to_tf(res['pitch']['model'])
pitch_tf
```
29.54
-----
z^12
dt = 0.00429197675183
```python
yaw_tf = ut.sysid.delay_gain_model_to_tf(res['yaw']['model'])
yaw_tf
```
41.27
-----
z^26
dt = 0.00429197675183
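At the measured sample rate the identified delays translate to the following time delays (a simple unit conversion using the `fs` computed above):
```python
# Convert the identified delays from samples to milliseconds.
for name, d in [('roll', 17), ('pitch', 12), ('yaw', 26)]:
    print('{:s}: {:d} samples = {:.1f} ms'.format(name, d, 1000.0*d/fs))
```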
```python
## Proportional
P_tf = control.tf(1, 1, 1.0/fs)
P_tf
```
1
-
1
dt = 0.00429197675183
## The integrator transfer function:
\begin{align}
y[n] &= y[n-1] + \frac{u[n]}{f_s} \\
Y(z) &= z^{-1}Y(z) + \frac{U(z)}{f_s} \\
Y(z)(1 - z^{-1}) &= \frac{U(z)}{f_s} \\
\frac{Y(z)}{U(z)} &= \frac{z}{f_s (z - 1)}
\end{align}
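A quick numerical check of this result, as a sketch using `scipy.signal` (assumed importable here): driving the integrator with a constant input should accumulate a ramp.
```python
import scipy.signal
# z/(f_s (z - 1)) realizes y[n] = y[n-1] + u[n]/f_s, so with u[n] = 1
# and zero initial state we expect y[n] = (n + 1)/f_s.
integ = scipy.signal.dlti([1.0/fs, 0.0], [1.0, -1.0], dt=1.0/fs)
t = np.arange(100)/fs
res = scipy.signal.dlsim(integ, np.ones_like(t), t=t)
np.allclose(res[1].ravel(), (np.arange(100) + 1)/fs)  # expected: True
```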
```python
z, f_s = sympy.symbols('z, f_s')
```
```python
G_I = z / (f_s*(z-1))
G_I
```
z/(f_s*(z - 1))
```python
I_tf = ut.control.sympy_to_tf(G_I, {'f_s': fs, 'dt': 1.0/fs})
I_tf
```
0.004292 z
----------
z - 1
dt = 0.00429197675183
## The derivative transfer function:
\begin{align}
y[n] &= f_s(u[n] - u[n-1]) \\
Y(z) &= f_s(U(z) - z^{-1}U(z)) \\
\frac{Y(z)}{U(z)} &= f_s(1 - z^{-1}) \\
\frac{Y(z)}{U(z)} &= \frac{f_s(z - 1)}{z}
\end{align}
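The same style of check for the derivative, again a sketch with `scipy.signal`: differentiating a unit-slope ramp should return ones, except at $n=0$ where the backward difference sees $u[-1]=0$.
```python
import scipy.signal
# f_s (z - 1)/z realizes y[n] = f_s (u[n] - u[n-1]).
deriv = scipy.signal.dlti([fs, -fs], [1.0, 0.0], dt=1.0/fs)
t = np.arange(100)/fs
res = scipy.signal.dlsim(deriv, t, t=t)
np.allclose(res[1].ravel()[1:], 1.0)  # expected: True
```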
```python
G_D = f_s * (z-1) / z
G_D
```
f_s*(z - 1)/z
```python
D_tf = ut.control.sympy_to_tf(G_D, {'f_s': fs, 'dt': 1.0/fs})
D_tf
```
233 z - 233
-----------
z
dt = 0.00429197675183
## The combined PID transfer function:
```python
# note: this rebinds `pid_tf`, shadowing the helper function defined earlier
pid_tf = ut.control.tf_vstack([P_tf, I_tf, D_tf])
pid_tf
```
Input 1 to output 1:
1
-
1
Input 2 to output 1:
0.004292 z
----------
z - 1
Input 3 to output 1:
233 z - 233
-----------
z
dt = 0.00429197675183
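Stacking the three terms as a three-input, one-output system means a row of gains $[k_P, k_I, k_D]$ collapses it back into a single scalar compensator. A small illustration with made-up gains:
```python
# Illustrative gains only; combine the stacked channels into one compensator.
kP, kI, kD = 0.2, 0.2, 0.005
C = kP*P_tf + kI*I_tf + kD*D_tf
C
```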
## Roll
```python
roll_tf_comp = roll_tf*pid_tf
roll_tf_comp
```
Input 1 to output 1:
45.69
-----
z^17
Input 2 to output 1:
0.1961 z
-----------
z^18 - z^17
Input 3 to output 1:
1.064e+04 z - 1.064e+04
-----------------------
z^18
dt = 0.00429197675183
```python
roll_ss_comp = control.tf2ss(roll_tf_comp);
```
# Continuous Time Optimization
```python
def attitude_loop_design(m, name):
    d_tc = 1.0/10
    G_rate_ol = model_to_acc_tf(m)
    # rate loop: PI design (D gain started at zero)
    K_rate, G_rate_ol, G_rate_comp_cl = ut.logsysid.pid_design(
        G_rate_ol, [0.2, 0.2, 0], d_tc)
    # attitude loop: P-only design around the closed rate loop
    tf_integrator = control.tf([1], [1, 0])
    G_ol = G_rate_comp_cl*tf_integrator
    K, G_ol, G_comp_cl = ut.logsysid.pid_design(
        G_ol, [1], d_tc,
        use_I=False, use_D=False)
    return {
        'MC_{:s}RATE_P'.format(name): K_rate[0, 0],
        'MC_{:s}RATE_I'.format(name): K_rate[1, 0],
        'MC_{:s}RATE_D'.format(name): K_rate[2, 0],
        'MC_{:s}_P'.format(name): K[0, 0],
    }
```
```python
log_file = ut.ulog.download_log('http://review.px4.io/download?log=35b27fdb-6a93-427a-b634-72ab45b9609e', '/tmp')
data = ut.sysid.prepare_data(log_file)
res = ut.sysid.attitude_sysid(data)
res
```
```python
attitude_loop_design(res['roll']['model'], 'ROLL')
```
{'MC_ROLLRATE_D': 0.0063734754545666465,
'MC_ROLLRATE_I': 0.19844682163683977,
'MC_ROLLRATE_P': 0.20051094114484688,
'MC_ROLL_P': 4.5843426190341647}
```python
attitude_loop_design(res['pitch']['model'], 'PITCH')
```
{'MC_PITCHRATE_D': 0.015662896675004697,
'MC_PITCHRATE_I': 0.48847645640076243,
'MC_PITCHRATE_P': 0.51104029619426683,
'MC_PITCH_P': 5.8666514695501988}
```python
attitude_loop_design(res['yaw']['model'], 'YAW')
```
{'MC_YAWRATE_D': 0.017251069591687748,
'MC_YAWRATE_I': 0.19498248018478978,
'MC_YAWRATE_P': 0.18924319337905329,
'MC_YAW_P': 3.598452484267229}
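One way to carry the designed gains over to the vehicle is to format them as PX4 `param set` console commands; this is only a sketch, and the parameter names simply follow the dictionaries above.
```python
# Format the designed roll gains as PX4 console commands (sketch).
gains = attitude_loop_design(res['roll']['model'], 'ROLL')
for name, value in sorted(gains.items()):
    print('param set {:s} {:.4f}'.format(name, value))
```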
|
/-
Copyright (c) 2022 Joël Riou. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joël Riou
-/
import for_mathlib.algebraic_topology.homotopical_algebra.factorisation_axiom
import for_mathlib.algebraic_topology.homotopical_algebra.three_of_two
import for_mathlib.category_theory.retracts
open category_theory category_theory.limits
namespace algebraic_topology
variables (C : Type*) [category C]
@[ext]
class category_with_fib_cof_weq := (fib cof weq : morphism_property C)
namespace category_with_fib_cof_weq
variables {C} (data : category_with_fib_cof_weq C) (data' : category_with_fib_cof_weq Cᵒᵖ)
@[simps]
def op : category_with_fib_cof_weq Cᵒᵖ :=
{ fib := data.cof.op,
cof := data.fib.op,
weq := data.weq.op }
@[simps]
def unop : category_with_fib_cof_weq C :=
{ fib := data'.cof.unop,
cof := data'.fib.unop,
weq := data'.weq.unop }
lemma unop_op : data.op.unop = data :=
by ext1; refl
lemma op_unop : data'.unop.op = data' :=
by ext1; refl
def triv_fib := data.fib ∩ data.weq
def triv_cof := data.cof ∩ data.weq
def inverse_image {D : Type*} [category D] (F : D ⥤ C) : category_with_fib_cof_weq D :=
{ fib := data.fib.inverse_image F,
cof := data.cof.inverse_image F,
weq := data.weq.inverse_image F }
def CM2 := data.weq.three_of_two
lemma CM2_iff_op : data.CM2 ↔ data.op.CM2 := morphism_property.three_of_two.iff_op _
namespace CM2
variable {data}
lemma inverse_image {D : Type*} [category D] (h : data.CM2) (F : D ⥤ C) :
(category_with_fib_cof_weq.inverse_image data F).CM2 :=
morphism_property.three_of_two.for_inverse_image h F
end CM2
def CM3a := data.weq.is_stable_by_retract
def CM3b := data.fib.is_stable_by_retract
def CM3c := data.cof.is_stable_by_retract
structure CM3 : Prop :=
(weq : data.CM3a)
(fib : data.CM3b)
(cof : data.CM3c)
namespace CM3
variable {data}
lemma triv_cof (h : data.CM3) : data.triv_cof.is_stable_by_retract :=
morphism_property.is_stable_by_retract.of_inter h.cof h.weq
lemma triv_fib (h : data.CM3) : data.triv_fib.is_stable_by_retract :=
morphism_property.is_stable_by_retract.of_inter h.fib h.weq
lemma inverse_image {D : Type*} [category D] (h : data.CM3) (F : D ⥤ C) :
(category_with_fib_cof_weq.inverse_image data F).CM3 :=
{ weq := h.weq.inverse_image F,
fib := h.fib.inverse_image F,
cof := h.cof.inverse_image F, }
end CM3
lemma CM3a_iff_op : data.CM3a ↔ data.op.CM3a := morphism_property.is_stable_by_retract.iff_op _
lemma CM3b_iff_op : data.CM3b ↔ data.op.CM3c := morphism_property.is_stable_by_retract.iff_op _
lemma CM3c_iff_op : data.CM3c ↔ data.op.CM3b := morphism_property.is_stable_by_retract.iff_op _
lemma CM3_iff : data.CM3 ↔ data.CM3a ∧ data.CM3b ∧ data.CM3c :=
by { split; rintro ⟨a, b, c⟩; exact ⟨a, b, c⟩, }
lemma CM3_iff_op : data.CM3 ↔ data.op.CM3 :=
by { simp only [CM3_iff, ← CM3a_iff_op, ← CM3b_iff_op, ← CM3c_iff_op], tauto, }
def CM4a := data.triv_cof.has_lifting_property data.fib
def CM4b := data.cof.has_lifting_property data.triv_fib
def CM4 := data.CM4a ∧ data.CM4b
lemma CM4a_iff_op : data.CM4a ↔ data.op.CM4b := morphism_property.has_lifting_property.iff_op _ _
lemma CM4b_iff_op : data.CM4b ↔ data.op.CM4a := morphism_property.has_lifting_property.iff_op _ _
lemma CM4_iff_op : data.CM4 ↔ data.op.CM4 :=
by { dsimp only [CM4], rw [← CM4a_iff_op, ← CM4b_iff_op], tauto, }
def CM5a := factorisation_axiom data.triv_cof data.fib
def CM5b := factorisation_axiom data.cof data.triv_fib
def CM5 := data.CM5a ∧ data.CM5b
lemma CM5a_iff_op : data.CM5a ↔ data.op.CM5b := factorisation_axiom.iff_op _ _
lemma CM5b_iff_op : data.CM5b ↔ data.op.CM5a := factorisation_axiom.iff_op _ _
lemma CM5_iff_op : data.CM5 ↔ data.op.CM5 :=
by { dsimp only [CM5], rw [← CM5a_iff_op, ← CM5b_iff_op], tauto, }
end category_with_fib_cof_weq
end algebraic_topology
|
module Issue1078.A where -- The typo in the module name is intended!
open import Common.Level using (Level)
open import Issue1078A
test = Level
|
#include <boost/python.hpp>
#include "OpenPGP/Hashes/SHA224.h"
void SHA224_init()
{
boost::python::class_<SHA224, boost::python::bases<SHA256>>("SHA224")
.def(boost::python::init<const std::string &>())
.def("digestsize", &SHA224::digestsize)
.def("blocksize", &SHA224::blocksize)
.def("hexdigest", &SHA224::hexdigest)
;
}
|
(* Prove some properties of System F using the dinaturality axiom,
working inside Coq's own type system.
System F types are encoded as Prop. *)
(* According to
A Logic for Parametric Polymorphism, Plotkin and Abadi (LNCS 1993),
which introduces dinaturality as an axiom.
*)
Require Import Coq.Logic.FunctionalExtensionality.
Definition one := forall (X: Prop), (X -> X).
Definition compose {X Y Z} (f: Y -> Z) g (x: X) := f (g x).
Axiom dinaturality0one:
forall (X Y: Prop) (f: X -> Y), forall (u: one),
compose f (u X) = compose (u Y) f.
Theorem one_singleton:
forall u: one, u = id.
intro u.
apply functional_extensionality_dep.
intro X.
apply functional_extensionality.
intro x.
generalize (dinaturality0one one X (fun _ => x) u); intro H.
unfold compose in H.
apply equal_f in H.
symmetry.
auto.
exact id.
Qed.
Definition prod (X: Prop) (Y: Prop): Prop :=
forall Z: Prop, (X -> Y -> Z) -> Z.
Axiom dinaturality0Prod:
forall (X Y Z Z': Prop) (f: Z -> Z') (u: prod X Y),
compose f (u Z) = fun g => u Z' (fun x y => f (g x y)).
Definition pair {X Y: Prop} (x: X) (y: Y): prod X Y :=
fun _ f => f x y.
Definition fst {X Y: Prop} (p: prod X Y): X :=
p X (fun x _ => x).
Definition snd {X Y: Prop} (p: prod X Y): Y :=
p Y (fun _ y => y).
Lemma p_pair: forall X Y (p: prod X Y), p _ pair = p.
Admitted.
Theorem prod_ok:
forall X Y (p: prod X Y), p = pair (fst p) (snd p).
intros X Y p.
apply functional_extensionality_dep; intro Z.
apply functional_extensionality; intro q.
unfold pair.
generalize (dinaturality0Prod X Y (prod X Y) Z (fun r => q (fst r) (snd r)) p); intro H.
unfold compose in H.
generalize (equal_f H pair); intro H0.
repeat rewrite (p_pair _ _ p) in H0.
rewrite H0.
unfold fst, snd, pair.
reflexivity.
Qed.
|