Lalan Star Exporter Company is established in the agriculture sector, with a horticulture platform supporting the growth of our natural and organic products.
Our commodity range is divided into categories, starting with flour, grains, pulp, pulses, and seeds; the significance of the food processing industry in the modern era is increasingly recognized as a source of further business opportunities.
Today, the importance of agriculture can be measured by the quantities allotted and sold to overseas markets, which also affects the industrial sector. Leading Indian industries such as sugar mills, rice mills, and cotton industries, as well as farms, depend on the direct supply of these raw materials. |
Residential buildings account for a significant share of national energy consumption in all OECD countries, and consequently in the EU and the Netherlands. Therefore, the national targets for CO2 reduction should include provisions for a more energy efficient building stock in all EU member states.
National and European policies over the past decades have improved the quality of the building stock by setting stricter standards for the external envelope of newly built buildings, the efficiency of mechanical and heating components, and renovation practices, and by establishing an energy labelling system. Energy-related occupancy behaviour is a significant, and relatively uncharted, part of buildings' energy consumption. This thesis aimed to contribute to the understanding of the occupant's role in the energy consumption of residential buildings by means of simulations and experimental data obtained from an extensive measurement campaign.
The first part of this thesis was based on dynamic building simulations combined with a Monte Carlo statistical analysis, which tried to shed light on the most influential parameters, including occupancy-related ones, that affect energy consumption and comfort (a factor believed to be integral to the energy-related behaviour of people in buildings). The reference building used for the simulations was the TU Delft Concept House, which was built for the purposes of the European project SusLab NWE. The concept house was simulated as an A energy label (very efficient) and an F label (very inefficient) dwelling, with three different heating systems.
The analysis revealed that if behavioral parameters are not taken into account, the most critical parameters affecting heating consumption are the window U value, window g value, and wall conductivity. When the uncertainty of these parameters increases, the impact of the wall conductivity on heating consumption increases considerably. The most important finding was that when behavioral parameters like thermostat use and ventilation flow rate are added to the analysis, they dwarf the importance of the building parameters in relation to the energy consumption. For the thermal comfort (the PMV index was used as the established model for measuring indoor thermal comfort) the most influential parameters were found to be metabolic activity and clothing, while the thermostat had a secondary impact.
The simulations were followed by an extensive measurement campaign in which an in-situ, non-intrusive, wireless sensor system was installed in 32 social-housing dwellings in the area of Den Haag. This sensor system transmitted quantitative data such as temperature, humidity, CO2 levels, and motion every five minutes, for a period of six months (the heating period between November and April), from every room of the 32 dwellings that participated in the campaign. Furthermore, subjective data were gathered during an initial inspection at the installation of the sensor system, concerning the building envelope and the heating and ventilation systems of the dwellings. More importantly, subjective data were gathered on the indoor comfort of the occupants with an apparatus developed specifically for the SusLab project. This device, named the comfort dial, allowed us to capture data such as the occupants' comfort level on the PMV 7-point scale. In addition, further comfort-related data, such as the occupants' clothing ensemble, actions related to thermal comfort, and their metabolic activity, were captured with the use of a diary. The subjective data measurement session lasted a week for each dwelling. These data were coupled in real time with the quantitative data gathered by the sensor system.
The data analysis focused on the two available indoor thermal comfort models: Fanger's PMV index and the adaptive model. Concerning the PMV model, the analysis showed that while neutral temperatures are well predicted by the PMV method, cold and warm sensations are not. It appears that tenants reported (in a statistically significant way) comfortable sensations where the PMV method does not predict such comfort. This indicates a certain level of psychological adaptation of occupants' expectations. Additionally, it was found that although clothing and metabolic activities were similar among tenants of houses with different thermal quality, the neutral temperature differed. Specifically, in houses with a good energy rating the neutral temperature was higher than in houses with a poor rating.
Concerning the adaptive model, which was developed in response to the discrepancies of Fanger's model in naturally ventilated buildings (the majority of the residential sector), the data analysis showed that while indoor temperatures were within the adaptive model's comfort bandwidth, occupants often reported comfort sensations other than neutral. In addition, when indoor temperatures were below the comfort bandwidth, tenants often reported that they felt 'neutral'. The adaptive model could therefore both overestimate and underestimate the occupants' adaptive capacity towards thermal comfort. Despite significant outdoor temperature variation, the indoor temperature of the dwellings, as well as the clothing of the tenants, remained largely constant. Certain actions towards thermal comfort, such as 'turning the thermostat up', took place while tenants were reporting a thermal sensation of 'neutral' or 'a bit warm'. This indicates either a failure to discriminate among the various thermal sensation levels, or an increased role for alliesthesia, a new concept introduced by the creators of the adaptive model. Most importantly, it remained uncertain whether a neutral sensation also means a comfortable sensation, while many actions happen out of habit and not in order to improve one's thermal comfort. A chi-squared analysis showed that only six actions were correlated with thermal sensation in thermally inefficient dwellings, and six in thermally efficient dwellings.
Finally, the abundance of data collected during the measurement campaign led the last piece of research of this thesis to data mining and pattern recognition analysis. Since the introduction of computers, the way research is performed has changed significantly. Huge amounts of data can be gathered and handled by ever-faster computers; analysing these data would have taken years only a couple of decades ago.
Sequential pattern mining reveals frequently occurring patterns from time-ordered input streams of data. A great deal of nature behaves in a periodic manner, and these strong periodic elements of our environment have led people to adopt periodic behaviour in many aspects of their lives, such as the time they wake up in the morning, the daily working hours, the weekend days off, and the weekly sports practice. These periodic interactions extend to various aspects of our lives, including the relationship of people with their home thermal environment. Repetitive behavioural actions in sensor-rich environments, such as the dwellings of the measurement campaign, can be observed and categorized into patterns. These discoveries could form the basis of a model of tenant behaviour, which could lead to a self-learning automation strategy, or provide better occupancy data for more accurate predictions by building simulation software such as Energy+ or ESP-r.
The analysis revealed various patterns of behaviour; indicatively, in 59% of the dwellings the indoor temperature increased during the morning hours (7-9 a.m.) from 20 °C < T < 22 °C to T > 22 °C, and the tenants of 56% of the dwellings found temperatures of 20 °C < T < 22 °C a bit cool and, even at temperatures above 22 °C, took a warm shower, leading to the suspicion that a warm shower is a routine action not related to thermal comfort.
Such pattern recognition algorithms can be more effective in the era of mobile internet, which allows the capturing of huge amounts of data. Increased computational power can analyse these data and define useful patterns of behaviour that could be tailor-made for each dwelling, for each room of a dwelling, even for each individual occupant of a dwelling. The occupants could then have an overview of their most common behavioural patterns, see which ones are energy consuming, which ones are related to comfort, and which are redundant and could therefore be discarded, leading to energy savings. In any case, the balance between indoor comfort and energy consumption will be the final factor that leads the occupant to decide on a customised model of their indoor environment.
The general conclusion of this thesis is that the effect of energy-related occupancy behaviour on the energy consumption of dwellings should not be statistically defined for large population groups. There are so many different types of people inhabiting so many different types of dwellings that embarking on such a task would be a considerable waste of time and resources.
The future in understanding energy-related occupancy behaviour, and therefore using it towards a more sustainable built environment, lies in the advances of sensor technology, big-data gathering, and machine learning. Technology will enable us to move from big population models to tailor-made solutions designed for each individual occupant. |
\clearpage
\phantomsection
\addcontentsline{toc}{subsection}{PUSHTR}
\label{insn:pushtr}
\subsection*{PUSHTR: push a reference onto the stack}
\subsubsection*{Format}
\textrm{PUSHTR type, \%r2, \%rs}
\begin{center}
\begin{bytefield}[endianness=big,bitformatting=\scriptsize]{32}
\bitheader{0,7,8,15,16,23,24,31} \\
\bitbox{8}{0x30}
\bitbox{8}{type}
\bitbox{8}{r2}
\bitbox{8}{rs}
\end{bytefield}
\end{center}
\subsubsection*{Description}
The \instruction{pushtr} instruction pushes a reference, contained in
the \registerop{rs} register, onto the stack. For a string, the length
is stored along with the value; for a numeric value, the size of that
value is stored.
\subsubsection*{Pseudocode}
\begin{verbatim}
value = %rs
if type is string:
size = strlen(value)
else:
size = %r2
stack[++index].size = size
stack[index].value = value
\end{verbatim}
\subsubsection*{Constraints}
\subsubsection*{Failure modes}
This instruction has no run-time failure modes beyond its constraints.
|
using JuMP
using DReal
# Minimize a nonlinear objective with the dReal delta-complete solver via JuMP.
m = Model(solver=DReal.DRealSolver(precision = 0.001))
@defVar(m, -10 <= x <= 10)
@setNLObjective(m, :Min, x^2*(x-1)*(x-2))
status = solve(m)
println("Objective value: ", getObjectiveValue(m))
println("x = ", getValue(x))
# Global minimality expressed as a universally quantified constraint:
# f(x) <= f(y) for all y in [-10, 10].
f(x) = x^2*(x-1)*(x-2)
y = ForallVar(Float64, -10.0, 10.0)
x = Var(Float64)
add!(f(x) <= f(y))
using DReal
# Alternatively, tie an objective variable to f(x) and minimize it directly.
f(x) = x^2*(x-1)*(x-2)
x = Var(Float64, -100., 100.)
obj = Var(Float64, -100., 100.)
add!(obj == f(x))
result = minimize(obj, x, lb= -100., ub=100.) |
subroutine eval_dU(qin,dU,f,g,irr,mitot,mjtot,lwidth,dtn,dtnewn,
& lstgrd,dx,dy,flag,iorder,xlow,ylow,mptr,
& vtime,steady,qx,qy,level,difmax,lastout,
& meqn,time, ffluxlen, gfluxlen, istage)
implicit double precision (a-h,o-z)
parameter ( msize =5200 )
dimension qin(mitot,mjtot,meqn)
dimension q(mitot,mjtot,meqn)
dimension f(mitot,mjtot,meqn),g(mitot,mjtot,meqn)
dimension qx(mitot,mjtot,meqn),qy(mitot,mjtot,meqn)
dimension qxx(mitot,mjtot,meqn),qyy(mitot,mjtot,meqn)
dimension qxy(mitot,mjtot,meqn)
dimension ur(msize,meqn),ul(msize,meqn)
dimension res(mitot,mjtot,meqn)
dimension ff(msize,meqn)
dimension ffluxlen(msize,msize),gfluxlen(msize,msize)
integer irr(mitot,mjtot)
include "cirr.i"
common /userdt/ cfl,gamma,gamma1,xprob,yprob,alpha,Re,iprob,
. ismp,gradThreshold
logical flag, debug, vtime, steady, quad, nolimiter
logical inside
common /order2/ ssw, quad, nolimiter
dimension firreg(-1:irrsize,meqn)
dimension dU(mitot,mjtot,meqn),dU2(mitot,mjtot,meqn)
dimension dlimit(mitot,mjtot,meqn)
dimension fakeState(4)
integer xrp, yrp
data debug/.false./
data xrp/1/, yrp/0/
data pi/3.14159265358979d0/
dimension coeff(2)
data mstage/2/
data coeff/0.5d0,1.d0/
if ( msize .lt. max(mitot,mjtot) ) then
write(6,*) 'Not enough memory allocated for rowwise flux'
write(6,*) 'Calculations. Allocated Size ',msize
write(6,*) 'Space required ',max(mitot,mjtot)
write(6,*) 'Remember also to allocate the same additional'
write(6,*) 'space in subroutine vrm '
stop
endif
fakeState(1) = 1.d0
fakeState(2) = 0.d0
fakeState(3) = 0.d0
fakeState(4) = 2.5d0
c
ix1 = lwidth + 1
ixn = mitot - lwidth
iy1 = lwidth + 1
iyn = mjtot - lwidth
ar(-1) = 1.d0
c ::::: rk with linear reconstruction follows ::::::
c
c store primitive variables in f for now
c
q = qin
if(iprob .ne. 25) then
call vctoprm(q,q,mitot,mjtot)
endif
c ### call for exterior bcs at each stage so can use slopes
xhigh= xlow + mitot*dx
yhigh = ylow + mjtot*dy
call pphysbdlin(xlow,xhigh,ylow,yhigh,level,mitot,mjtot,
& meqn,q,time,dx,dy,qx,qy,irr,lstgrd)
if (ssw .ne. 0.d0) then ! recalc slopes at each stage
call slopes(q,qx,qy,qxx,qxy,qyy,
& mitot,mjtot,irr,lstgrd,lwidth,dx,dy,
& xlow,ylow,meqn)
endif
c
c
c loop through rows of q calculating fluxes one row at time
c vertical riemann problem first
c
do 800 jcol = lwidth-2, mjtot-lwidth+3
c
do 511 i = lwidth-2, mitot-lwidth+3
call getYface(i,jcol,xface,yface,irr,mitot,mjtot,
. xlow,ylow,dx,dy,lstgrd)
call getCellCentroid(lstgrd,i,jcol+1,xcentp,ycentp,
. xlow,ylow,dx,dy,irr(i,jcol+1))
call getCellCentroid(lstgrd,i,jcol,xcent,ycent,
. xlow,ylow,dx,dy,irr(i,jcol))
do 511 m = 1, meqn
if (gfluxlen(i,jcol+1) .ne. 0.d0) then ! real face
ur(i,m) = q(i,jcol+1,m) + (xface-xcentp)*qx(i,jcol+1,m)
. + (yface-ycentp)*qy(i,jcol+1,m)
ul(i,m) = q(i,jcol,m) + (xface-xcent)*qx(i,jcol,m)
. + (yface-ycent)*qy(i,jcol,m)
else
ur(i,m) = fakeState(m)
ul(i,m) = fakeState(m)
endif
511 continue
c
c store fluxes in ff vector, copy into permanent flux array
c
call vrm(ur,ul,ff,lwidth-2,mitot-lwidth+3,yrp,msize)
c
do 720 i = lwidth-2, mitot-lwidth+3
do 720 m = 1, meqn
g(i,jcol+1,m) = ff(i,m)
720 continue
c
c
800 continue
c
c
c Horizontal riemann problems next
c
do 900 irow = lwidth-2, mitot-lwidth+3
c
xright = xlow + (dfloat(irow)-.5d0)*dx
do 611 j = lwidth-2, mjtot-lwidth+3
call getXface(irow,j,xface,yface,irr,mitot,mjtot,
. xlow,ylow,dx,dy,lstgrd)
call getCellCentroid(lstgrd,irow+1,j,xcentp,ycentp,
. xlow,ylow,dx,dy,irr(irow+1,j))
call getCellCentroid(lstgrd,irow,j,xcent,ycent,
. xlow,ylow,dx,dy,irr(irow,j))
do 611 m = 1, meqn
if (ffluxlen(irow+1,j) .ne. 0.) then ! real face
ur(j,m) = q(irow+1,j,m) + (xface-xcentp)*qx(irow+1,j,m)
. + (yface-ycentp)*qy(irow+1,j,m)
ul(j,m) = q(irow,j,m) + (xface-xcent)*qx(irow,j,m)
. + (yface-ycent)*qy(irow,j,m)
else
c ur(j,m) = q(irow+1,j,m)
c ul(j,m) = q(irow,j,m)
ur(j,m) = fakeState(m)
ul(j,m) = fakeState(m)
endif
611 continue
c
c store fluxes in ff
c
call vrm(ur,ul,ff,lwidth-2,mjtot-lwidth+3,xrp,msize)
c
do 721 m = 1, meqn
do 721 j = lwidth-2, mjtot-lwidth+3
f(irow+1,j,m) = ff(j,m)
721 continue
c
c if(irow+1 .eq. 54) print *, "f is ",ff(5,1), ffluxlen(54,5),
c . ffluxlen(54,6),ffluxlen(54,7)
900 continue
c
c irregflux computes the cut cell bndry flux. since no flow
c through bndry use eval pressure there.
firreg = 0.d0
call irregflux(q,firreg,irr,mitot,mjtot,dx,dy,lstgrd,
. xlow,ylow,mptr,qx,qy,lwidth,msize, time)
c
c :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
c
c multiply fluxes by mesh spacing.
c this zeros out unused fluxes so solid vals dont get updated.
c
do 580 m = 1, meqn
do 580 i = 2, mitot
do 580 j = 2, mjtot
f(i,j,m) = f(i,j,m) * ffluxlen(i,j)
g(i,j,m) = g(i,j,m) * gfluxlen(i,j)
580 continue
c
c
c # finite volume update
c = coeff(istage)
ar(-1) = 1.d0 ! prevent zero divides for solid cells
do 917 i = ix1-lwidth+istage, ixn+lwidth-istage
do 917 j = iy1-lwidth+istage, iyn+lwidth-istage
do 917 m = 1, meqn
k = irr(i,j)
dU(i,j,m) = - dtn/ar(k)*( f(i+1,j,m) -f(i,j,m)
. + g(i,j+1,m) -g(i,j,m) - firreg(k,m))
917 continue
return
end
|
function T = prep_trpca_data( impath, dataname, scale )
% impath   = the root dir of the images
% dataname = file name (under ..\..\data\) used to save the resulting tensor
% scale    = optional resize factor applied to each image (default 1.0)
if ~exist( 'scale', 'var' ) || isempty( scale )
scale = 1.0;
end
files = dir( [impath, '\*.bmp'] );
N = length( files );
% N = N - 1;
img = imresize( imread( [impath, '\', files(1).name], 'bmp' ), scale );
img = rgb2gray( img );
sz = size( img ); lsz = length(sz);
sz(lsz+1) = N;
X = zeros( sz );
for i = 1:N
img = imresize( imread( [impath, '\', files(i).name], 'bmp' ), scale );
img = rgb2gray( img );
if lsz == 2
X( :, :, i ) = double(img) / 255;
else
X( :, :, :, i ) = double(img) / 255;
end
end
% img = imresize( imread( [impath, '\', files(N+1).name], 'bmp' ), scale );
% img = rgb2gray( img );
% data.truth = double(img) / 255;
T = tensor( X );
data.X = T;
save( ['..\..\data\',dataname], 'data' );
end |
# Errors, Intervals, and CLT
## Standard error: meaning and purpose
The sample means in the previous example typically deviate from the population mean by about $0.7$. This value estimates the uncertainty of the sample mean. In general, the standard deviation of a statistic's sampling distribution is called the __standard error__ of that statistic.
With real data we do not have access to many samples, nor directly to the sampling distribution of the mean. If we knew the standard deviation of the sampling distribution, we could say:
* The sample mean is $177.88 \pm 0.71$ at a $68\%$ ($1\sigma$) confidence level
## Central Limit Theorem
The bell shape of a sampling distribution is not coincidentally Gaussian; it follows from the __Central Limit Theorem__ (CLT), which states that:
_the (suitably normalized) sum of $N$ independent and identically distributed (i.i.d.) random variables tends to the normal distribution as $N \rightarrow \infty$._
According to the CLT, the _standard error of the mean_ (SEM) of an $N$-sized sample from a population with standard deviation $\sigma$ is:
$$SE(\bar{x}) = \frac{\sigma}{\sqrt{N}}$$
When the standard deviation of the population is __unknown__ (the usual case) it can be approximated by the sample standard deviation $s$:
$$SE(\bar{x}) = \frac{s}{\sqrt{N}}$$
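As a quick sanity check of this formula (a minimal sketch; the population parameters, sample size, and number of samples below are arbitrary choices), we can draw many samples from a known population and compare the empirical standard deviation of the sample means with $\sigma/\sqrt{N}$:
```python
import numpy as np
from scipy import stats

N = 50       # sample size (arbitrary)
M = 20000    # number of samples used to approximate the sampling distribution

dist = stats.norm(177.9, 5.0)                 # hypothetical population
sample_means = dist.rvs([M, N]).mean(axis=1)  # one mean per sample

print("empirical SE of the mean :", np.std(sample_means))
print("CLT formula sigma/sqrt(N):", dist.std() / np.sqrt(N))
```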
### The 68 - 95 - 99.7 rule
The CLT explains the percentages the code block below returns. These are approximately the areas of the PDF of the normal distribution in the ranges $\left(\mu - k\sigma, \mu + k\sigma\right)$ for $k = 1, 2, 3$.
```python
# Importing Libraries
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
%matplotlib inline
```
```python
sigmas = [1,2,3]
percentages = []
for sigma in sigmas:
per = round(stats.norm.cdf(sigma) - stats.norm.cdf(-sigma),3)
print("For", sigma, "sigma the area of the PDF is ~", per)
percentages.append(per)
print('')
for per in percentages:
print("For", per, "% coverage, the interval is", np.round(stats.norm.interval(per), 3))
```
For 1 sigma the area of the PDF is ~ 0.683
For 2 sigma the area of the PDF is ~ 0.954
For 3 sigma the area of the PDF is ~ 0.997

For 0.683 % coverage, the interval is [-1.001  1.001]
For 0.954 % coverage, the interval is [-1.995  1.995]
For 0.997 % coverage, the interval is [-2.968  2.968]
```python
def plotsigma(sig, col):
x = np.linspace(-sig, sig, 100)
y = stats.norm.pdf(x)
text = str(sig) + " sigma" + ("s" if abs(sig) != 1 else "")
plt.fill_between(x, y, y2=0, facecolor = col, label = text, alpha = 0.2)
x = np.linspace(-5, 5, 100)
plt.plot(x, stats.norm.pdf(x))
plotsigma(3, "m")
plotsigma(2, "g")
plotsigma(1, "b")
plt.legend()
plt.show()
```
## Standard error of the mean from one sample
In real-life applications, we have no access to the sampling distribution. Instead, we get __one sample__ of observations and therefore <b>one point estimate per statistic</b> (e.g. mean and standard deviation.)
### Assuming normality
When $\sigma$ is known, then for various sigma levels $k$:
$$\bar{x} \pm k \frac{\sigma}{\sqrt{N}} \qquad \equiv \qquad \bar{x} \pm k \times SE\left(\bar{x}\right)$$
which correspond to confidence levels equal to the area of the standard normal distribution between values $\left(-k, k\right)$:
$$ C = Pr(-k < z < k) \Longrightarrow C = \Phi(k) - \Phi(-k)$$
where $z \sim \mathcal{N}\left(0, 1\right)$ and $\Phi(z)$ is the CDF of standard normal distribution.
### Assuming normality: Student's t approximation
When $\sigma$ is unknown, the uncertainty of $s$ should be accounted for using the <b>Student's t approximation</b>. For large samples ($N > 30$) this is not necessary, as the t-distribution is well approximated by the normal distribution. If we decide to use it, then for the sample mean the following formula holds:
$$\bar{x} \pm t_c\left(\frac{a}{2}, N-1\right) \frac{s}{\sqrt{N}} \qquad \equiv \qquad \bar{x} \pm t_c \times SE\left(\bar{x}\right)$$
where $N$ is the sample size and $t_c$ is a critical value that depends on the requested significance level $a$ or equivalently the confidence level $C = 1-a$, and the degrees of freedom (here $N-1$):
$$Pr(-t_c < t < t_c) = 1 - a = C$$
The critical value is the inverse CDF of the $t$-distribution with $N-1$ d.o.f., evaluated at $1-\frac{a}{2}$ (its value at $\frac{a}{2}$ gives the negative endpoint).
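As a minimal sketch (the sample below is synthetic, drawn from an arbitrary normal population), the normal and Student's t intervals for one sample can be compared directly:
```python
import numpy as np
from scipy import stats

x = stats.norm(177.9, 5.0).rvs(20)     # hypothetical small sample (N = 20)
N = len(x)
xbar, s = np.mean(x), np.std(x, ddof=1)
se = s / np.sqrt(N)                    # estimated standard error of the mean

C = 0.68                               # confidence level
k = stats.norm.ppf((1 + C) / 2)        # normal critical value
tc = stats.t.ppf((1 + C) / 2, N - 1)   # t critical value, N-1 d.o.f.

print("normal CI:", (xbar - k * se, xbar + k * se))
print("t CI     :", (xbar - tc * se, xbar + tc * se))
```
For small $N$ the t interval is noticeably wider, reflecting the extra uncertainty in $s$.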
### A common misconception...
The probability that the true parameter lies inside the confidence interval produced from a sample is <b>not</b> equal to $68\%$: <b>the interval either contains it or not</b>.
Instead, $68\%$ is the probability that a sample from the same distribution, under the same circumstances, produces a confidence interval containing the true mean. E.g. out of 1000 samples we expect that $\approx 680$ of the $1\sigma$ CIs will contain the true mean.
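A coverage simulation makes this concrete (a minimal sketch; the population and sample size are arbitrary choices):
```python
import numpy as np
from scipy import stats

dist = stats.norm(177.9, 5.0)   # hypothetical population with known true mean
true_mean = dist.mean()
N, trials = 50, 1000

samples = dist.rvs([trials, N])
means = samples.mean(axis=1)
ses = samples.std(axis=1, ddof=1) / np.sqrt(N)

# fraction of 1-sigma intervals containing the true mean (expect ~0.68)
covered = np.mean((means - ses <= true_mean) & (true_mean <= means + ses))
print("coverage of 1-sigma CIs:", covered)
```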
# Confidence intervals
<ul>
<li>
A <i><b>confidence interval</b></i> is an estimate of the range of a parameter of a population (in contrast to point estimates.)
</li>
<li>
A <i><b>confidence interval</b></i> is an interval with random endpoints which contains the parameter of interest with a specified probability $C$, called the <i>confidence level</i>.</li>
</ul>
It is closely related to <i>hypothesis testing</i>: confidence level is the complement of significance level: $C = 1 - a$.
## Parametric
If the sampling distribution is either known or assumed (e.g. normal from CLT), then deriving the interval at a confidence level $C$ is straightforward:
<ul>
<li>each endpoint corresponds to a value for the CDF: $p_1 = \frac{1 - C}{2}$ and $p_2 = \frac{1 + C}{2}$</li>
<li>find the percentiles $x_1$, $x_2$: the values for which $F(x_i) = p_i \Longrightarrow x_i = F^{-1}(p_i)$, where $F(x)$ is the CDF of the sampling distribution.</li>
<li>the confidence interval is $(x_1, x_2)$</li>
<li>if $\hat{x}$ is the point estimate of the parameter of interest then we can write down all three values in the format: $\hat{x}_{x_1 - \hat{x}}^{x_2 - \hat{x}}$. Also, we shall <b>always</b> state the confidence level.</li>
</ul>
### Confidence bounds
Similarly, we can get <b>one-sided</b> limits. At a confidence level $C$, the lower/upper confidence bounds from a distribution with CDF $F(x)$ are:
<ul>
<li> upper: $F^{-1}(C)$ corresponding to the interval $\left[F^{-1}(0), \, F^{-1}(C)\right]$ </li>
<li> lower: $F^{-1}(1-C)$ corresponding to the interval $\left[F^{-1}(1-C), \, F^{-1}(1)\right]$ </li>
</ul>
For example, if $F(x)$ is the CDF of the standard normal distribution, then $F(-1) \approx 0.16$ and $F(1) \approx 0.84$. Therefore:
<ul>
<li>$1$ is the upper $84\%$ confidence bound</li>
<li>$-1$ is the lower $84\%$ confidence bound</li>
</ul>
### Example
Let's assume that we did the math and found that the sampling distribution of our parameter is the <i>exponential power distribution</i> with shape parameter $b = 3.8$. Then the confidence intervals at various levels would be asymmetric:
```python
dist = stats.exponpow(3.8)
# dist = stats.norm(0,10)
ci68 = dist.interval(0.68) # using .interval() method
ci80 = [dist.ppf(0.1), dist.ppf(0.9)] # ...or using percentile point function
cb95 = [dist.ppf(0), dist.ppf(0.95)] # ...which is handy for one-sided intervals
# ci95 = dist.interval(0.95) # using .interval() method
x = np.linspace(0, 1.5, 400); y = dist.pdf(x) # total distr
x68 = np.linspace(ci68[0], ci68[1]); y68 = dist.pdf(x68) # 68% conf interval
x80 = np.linspace(ci80[0], ci80[1]); y80 = dist.pdf(x80) # 80% ...
x95 = np.linspace(cb95[0], cb95[1]); y95 = dist.pdf(x95) # 95% ...
plt.plot(x, y, "k-")
plt.fill_between(x95, y95, 0 * y95, alpha = 0.5, label = "95% UCB", facecolor = "r")
plt.fill_between(x80, y80, 0 * y80, alpha = 0.5, label = "80% CI", facecolor = "b")
plt.fill_between(x68, y68, 0 * y68, alpha = 0.0, label = "68% CI", hatch = "////")
plt.legend()
plt.show()
```
## Non-parametric
For the sample mean, the standard error is well defined and performs quite well in most cases. However, we may want to compute the standard error of other parameters for which the <b>sampling distribution is either unknown or difficult to compute</b>.
There are various methods for the <b>non-parametric</b> estimation of the standard error / confidence interval. Here we will see two such methods: <b>bootstrap</b> and <b>jackknife</b>.
### Bootstrap
Bootstrapping is a resampling (with replacement) method. As we saw before, by drawing many samples we can approximate the sampling distribution of the mean, something that is impossible for real data without assuming a distribution.
The bootstrap method is based on randomly constructing $B$ samples from the original one, by sampling with replacement from the latter. The size of the resamples should be equal to the size of the original sample. For example, with the sample $X$ below, we can create $B = 5$ new samples $Y_i$:
$$X = \left[1, 8, 3, 4, 7\right]$$
$$\begin{align}
Y_1 &= \left[8, 3, 3, 7, 1\right] \\
Y_2 &= \left[3, 1, 4, 4, 1\right] \\
Y_3 &= \left[3, 7, 1, 8, 7\right] \\
Y_4 &= \left[7, 7, 4, 3, 1\right] \\
Y_5 &= \left[1, 7, 8, 3, 4\right]
\end{align}$$
Then, we compute the desired sample statistic for each of those samples to form an empirical sampling distribution. The standard deviation of the $B$ sample statistics is the bootstrap estimate of the standard error of the statistic.
#### Example: Standard Error (SE) of the median and skewness
```python
N = 100 # size of sample
M = 10000 # no of samples drawn from the distr.
B = 10000 # no of bootstrap resamples drawn from a single sample of the distr.
# Various distributions to be tested
# dist = stats.norm(0, 1)
dist = stats.uniform(0, 1)
# dist = stats.cauchy()
#dist = stats.dweibull(8.5)
many_samples = dist.rvs([M, N]) # this creates many samples of the same distr.
m_many = np.median(many_samples, axis = 1)
s_many = stats.skew(many_samples, axis = 1)
sample = dist.rvs(N) # one single sample: the only data we would have in practice
boot_samples = np.random.choice(sample, (B, N), replace = True) # bootstrapped samples drawn from that single sample
m_boot = np.median(boot_samples, axis = 1)
s_boot = stats.skew(boot_samples, axis = 1)
m_norm = np.sqrt(np.pi / (2.0 * N)) * dist.std() # this is the calculation if we assume a normal distribution
s_norm = np.sqrt(6 * N * (N - 1) / ((N + 1) * (N - 2) * (N + 3)))
plt.figure(figsize = [12, 2])
plt.subplot(1, 2, 1)
plt.hist(m_boot, 30, histtype = "step", density = True, label = "boot")
plt.hist(m_many, 30, histtype = "step", density = True, label = "many")
plt.title("Median")
plt.legend()
plt.subplot(1, 2, 2)
plt.hist(s_boot, 15, histtype = "step", density = True, label = "boot")
plt.hist(s_many, 15, histtype = "step", density = True, label = "many")
plt.title("Skewness")
plt.legend()
plt.show()
print("SE median (if normal) :", m_norm)
print("SE median (bootstrap) :", np.std(m_boot))
print("SE median (many samples):", np.std(m_many))
print("-----------------------------------------")
print("SE skewness (if normal) :", s_norm)
print("SE skewness (bootstrap) :", np.std(s_boot))
print("SE skewness (many samples):", np.std(s_many))
```
We can see that the means of the many samples drawn from the initial distribution always follow a Gaussian distribution, due to the CLT. This is not the case for the bootstrap distribution, which nevertheless performs quite well. This is why we may use bootstrapping when we don't know, or don't expect, that the distribution is normal.
### Jackknife resampling
This older method inspired the bootstrap, which can be seen as its generalization (the jackknife is a linear approximation of the bootstrap). It estimates the sampling distribution of a parameter on an $N$-sized sample through a collection of $N$ sub-samples, each obtained by removing one element at a time.
E.g. the sample $X$ leads to the <b>Jackknife samples</b> $Y_i$:
$$ X = \left[1, 7, 3\right] $$
$$
\begin{align}
Y_1 &= \left[7, 3\right] \\
Y_2 &= \left[1, 3\right] \\
Y_3 &= \left[1, 7\right]
\end{align}
$$
The <b>Jackknife Replicate</b> $\hat\theta_{\left(i\right)}$ is the value of the estimator of interest $f(x)$ (e.g. mean, median, skewness) for the $i$-th subsample and $\hat\theta_{\left(\cdot\right)}$ is the sample mean of all replicates:
$$
\begin{align}
\hat\theta_{\left(i\right)} &= f\left(Y_i\right) \\
\hat\theta_{\left(\cdot\right)} &= \frac{1}{N}\sum\limits_{i=1}^N {\hat\theta_{\left(i\right)}}
\end{align}
$$
and the <b>Jackknife Standard Error</b> of $\hat\theta$ is computed using the formula:
$$ SE_{jack}(\hat\theta) = \sqrt{\frac{N-1}{N}\sum\limits_{i=1}^N \left[\hat{\theta}\left(Y_i\right) - \hat\theta_{\left(\cdot\right)} \right]^2} = \cdots = \frac{N-1}{\sqrt{N}} s$$
where $s$ is the standard deviation of the replicates.
#### Example: estimation of the standard error of the mean
```python
N = 100
M = 100
# Distributions to be tested
#dist = stats.norm(0, 1)
dist = stats.uniform(0, 1)
#dist = stats.cauchy()
#dist = stats.dweibull(8.5)
def jackknife(x):
return [[x[j] for j in range(len(x)) if j != i] for i in range(len(x))]
# Be careful: the above is a double loop, first over i and then over j
sample = dist.rvs(N)
SE_clt = np.std(sample) / np.sqrt(N)
many_samples = dist.rvs([M, N])
many_means = np.mean(many_samples, axis = 1)
many_medians = np.median(many_samples, axis = 1)
SE_mean_many = np.std(many_means)
SE_median_many = np.std(many_medians)
jack_samples = jackknife(sample)
jack_means = np.mean(jack_samples, axis = 1)
jack_medians = np.median(jack_samples, axis = 1)
SE_mean_jack = np.std(jack_means) * (N - 1.0) / np.sqrt(N)
SE_median_jack = np.std(jack_medians) * (N - 1.0) / np.sqrt(N)
print("[ Standard error of the mean ]")
print(" SEM formula :", SE_clt)
print(" Jackknife :", SE_mean_jack)
print(" Many samples :", SE_mean_many)
print("\n[Standard error of the median ]")
print(" Jackknife :", SE_median_jack)
print(" Many samples :", SE_median_many)
```
[ Standard error of the mean ]
 SEM formula : 0.029713229319835173
 Jackknife : 0.029713229319835111
 Many samples : 0.027021166426376305

[Standard error of the median ]
 Jackknife : 0.12885958726617672
 Many samples : 0.10724818607554371
# Propagation of Uncertainty
Let's very briefly introduce the general cases, for completeness. This will be followed by the specific case used typically by engineers and physical scientists, which is perhaps of most interest to us.
## 1) Linear Combinations
For $\big\{{f_k(x_1,x_2,\dots,x_n)}\big\}$, a set of $m$ functions that are linear combinations of $n$ variables $x_1,x_2,\dots,x_n$ with combination coefficients $A_{k1},A_{k2},\dots,A_{kn}$, $k=1 \dots m$:
$\large{f_k=\sum\limits_{i=1}^{n} A_{ki}x_i}$ or $\large{f=Ax}$
From here, we would formally write out the $\textbf{variance-covariance matrix}$, which deals with the correlation of uncertainties across variables and functions, and contains many $\sigma$'s. Each covariance term $\sigma_{ij}$ may be expressed in terms of a $\textbf{correlation coefficient}$ $\rho_{ij}$ as $\sigma_{ij}=\rho_{ij}\sigma_{i}\sigma_{j}$.
In our most typical case, where variables are uncorrelated, the entire matrix may be reduced to:
$\large{\sigma^{2}_{f} = \sum\limits_{i=1}^{n} a^{2}_{i}\sigma^{2}_{i}}$
This form will be seen in the most likely applicable case for astronomers below.
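A quick numerical check of this reduced form (a minimal sketch; the coefficients and sigmas below are arbitrary, and the variables are drawn independently):
```python
import numpy as np

a = np.array([2.0, -1.5, 0.5])       # arbitrary combination coefficients
sigma = np.array([0.3, 0.7, 1.1])    # arbitrary uncorrelated uncertainties

M = 200000
x = np.random.normal(0.0, sigma, size=(M, 3))   # independent draws of x_i
f = x @ a                                        # f = sum_i a_i x_i

print("Monte Carlo sigma_f       :", np.std(f))
print("sqrt(sum a_i^2 sigma_i^2) :", np.sqrt(np.sum(a**2 * sigma**2)))
```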
## 2) Non-linear Combinations
When $f$ is a non-linear combination of the variables $x$, $f$ must usually be linearized by approximation to a first-order Taylor series expansion:
$\large{f_k=f^{0}_{k}+\sum\limits^{n}_{i} \frac{\partial f_k}{\partial x_i} x_i}$
where $\large{\frac{\partial f_k}{\partial x_i}}$ denotes the partial derivative of $f_k$ with respect to the $i$-th variable.
### Simplification
If we neglect correlations, or assume the variables are independent, we get the commonly used formula for analytical expressions:
$\large{\sigma^{2}_{f}=\big(\frac{\partial f}{\partial x}\big)^{2}\sigma^{2}_{x}+\big(\frac{\partial f}{\partial y}\big)^{2}\sigma^{2}_{y}+\dots}$
where $\sigma_f$ is the standard deviation of the function $f$, with $\sigma_x$ being the standard deviation of the variable $x$ and so on.
This formula is based on the assumption that the gradient of $f$ is locally linear, and is therefore only a good estimate as long as the standard deviations are small compared to the scale over which the partial derivatives vary.
### Example: Mass Ratio
The mass ratio for a binary system may be expressed as:
$\large{q=\frac{K_1}{K_2}=\frac{M_2}{M_1}}$
where K's denote the velocity semiamplitudes (from a Keplerian fit to the radial velocities) and M's represent the individual component masses.
Inserting this into the formula gives:
$\large{\sigma^{2}_{q}=\big(\frac{\partial q}{\partial K_1}\big)^{2}\sigma^{2}_{K_1}+\big(\frac{\partial q}{\partial K_2}\big)^{2}\sigma^{2}_{K_2}}$
$\large{\sigma^{2}_{q}=\big(\frac{1}{K_2}\big)^{2}\sigma^{2}_{K_1}+\big(\frac{K_1}{K_2^2}\big)^{2}\sigma^{2}_{K_2}}$
For a simple application of such a case, let's use the values of velocity semiamplitudes for the early-type B binary HD 42401 from Williams (2009):
$K_1=151.4\pm0.3$ km s$^{-1}$
$K_2=217.9\pm1.0$ km s$^{-1}$
Inserting these into the equations and computing the value, we get:
$q=0.6948\pm0.0038$
## 3) Monte Carlo sampling
An uncertainty $\sigma_x$ expressed as a standard error of the quantity $x$ implies that we can treat the latter as a normally distributed random variable: $X \sim \mathcal{N}\left(x, \sigma_x^2\right)$. By sampling each variable $x_i$ $M$ times and computing $f$, we are experimentally exploring the different outcomes $f$ could give.
### 1) Example
The following code computes the mass ratio for HD 42401 and its uncertainty using both uncertainty propagation and Monte Carlo sampling. For easier comparison, we print 6 digits after the decimal point.
```python
# observed quantities
K1 = 151.4
K2 = 217.9
K1_err = 0.3
K2_err = 1.0
# Error propagation
q = round(K1 / K2, 6)
q_err = round(np.sqrt((K1_err / K2) ** 2.0 + (K2_err * K1 / K2 ** 2.0) ** 2.0), 6)
# Monte Carlo sampling
N = 10000
K1_sample = stats.norm.rvs(K1, K1_err, N)
K2_sample = stats.norm.rvs(K2, K2_err, N)
q_sample = K1_sample / K2_sample
q_mean = round(np.mean(q_sample), 6)
q_std = round(np.std(q_sample), 6)
# q_CI95 = np.percentile(q_sample, [0.025, 0.975])
print("From error propagation formula: q =", q, "+/-", q_err)
print("From monte carlo samlping: q =", q_mean, "+/-", q_std)
plt.hist(q_sample, 30)
plt.title("Sampling outcome")
plt.xlabel("q = K1/K2")
plt.show()
```
### 2) Example
Estimates of the true distance modulus and the radial velocity of NGC 2544 are (unpublished galaxy catalog):
$$
\begin{align}
(m - M)_0 &= 33.2 \pm 0.5 \\
v &= \left(3608 \pm 271\right) \text{km/s}
\end{align}
$$
Applying Hubble's law to this object leads to the following formulæ and values:
$$
\begin{align}
H_0 &= \frac{v}{r} = \frac{v}{10^{0.2 m - 5}} = 82.7 \, \text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}\\
\sigma_{H_0} &= \frac{1}{10^{0.2 m - 5}}\sqrt{\sigma_v^2 + \left(\frac{\ln{10}}{5}v \, \sigma_m \right)^2} = 20.0 \, \text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}
\end{align}
$$
But can we trust the uncertainty propagation formula for distance moduli? Due to the <b>logarithmic nature</b> of the distance modulus, a change of $\Delta\left(m-M\right)_0$ translates into multiplying or dividing the distance by a factor close to $1$. Moreover, the distance is always positive.
Applying the uncertainty propagation formula and the sampling method we get:
```python
m = 33.2
m_err = 0.5
v = 3608
v_err = 271
# Estimate with formula and error propagation
H0 = v / 10.0 ** (0.2 * m - 5.0)
H0_err = np.sqrt(v_err ** 2.0 + (np.log(10.0) / 5.0 * v * m_err) ** 2.0) / 10.0 ** (0.2 * m - 5.0)
print("Error propagation : H0 =", round(H0, 3), "+/-", round(H0_err, 3))
# Estimate with sampling
N = 100000
m_sample = stats.norm.rvs(m, m_err, N)
v_sample = stats.norm.rvs(v, v_err, N)
H0_sample = v_sample / (10.0 ** (0.2 * m_sample - 5.0))
H0_mean = np.mean(H0_sample)
print("Monte Carlo : H0 =", round(H0_mean, 3), "+/-", round(np.std(H0_sample), 3))
# Plot Monte-Carlo
plt.hist(H0_sample, 100, density = True, label = "sampling")
# and plot the formula
x = np.linspace(H0 - 4 * H0_err, H0 + 4 * H0_err, 100)
plt.plot(x, stats.norm.pdf(x, H0, H0_err), "r-", label = "unc. prop.")
plt.legend()
plt.show()
```
|
# The "100 doors" puzzle in R: door n is toggled once for every divisor of n,
# so only perfect squares (which have an odd number of divisors) end up open.
H=100
f=rep(F,H)
which(Reduce(function(d,n) xor(replace(f,seq(n,H,n),T),d), 1:H, f))
|
(* Title: HOL/HOLCF/Lift.thy
Author: Olaf Mueller
*)
section {* Lifting types of class type to flat pcpo's *}
theory Lift
imports Discrete Up
begin
default_sort type
pcpodef 'a lift = "UNIV :: 'a discr u set"
by simp_all
lemmas inst_lift_pcpo = Abs_lift_strict [symmetric]
definition
Def :: "'a \<Rightarrow> 'a lift" where
"Def x = Abs_lift (up\<cdot>(Discr x))"
subsection {* Lift as a datatype *}
lemma lift_induct: "\<lbrakk>P \<bottom>; \<And>x. P (Def x)\<rbrakk> \<Longrightarrow> P y"
apply (induct y)
apply (rule_tac p=y in upE)
apply (simp add: Abs_lift_strict)
apply (case_tac x)
apply (simp add: Def_def)
done
old_rep_datatype "\<bottom>\<Colon>'a lift" Def
by (erule lift_induct) (simp_all add: Def_def Abs_lift_inject inst_lift_pcpo)
text {* @{term bottom} and @{term Def} *}
lemma not_Undef_is_Def: "(x \<noteq> \<bottom>) = (\<exists>y. x = Def y)"
by (cases x) simp_all
lemma lift_definedE: "\<lbrakk>x \<noteq> \<bottom>; \<And>a. x = Def a \<Longrightarrow> R\<rbrakk> \<Longrightarrow> R"
by (cases x) simp_all
text {*
For @{term "x ~= \<bottom>"} in assumptions @{text defined} replaces @{text
x} by @{text "Def a"} in conclusion. *}
method_setup defined = {*
Scan.succeed (fn ctxt => SIMPLE_METHOD'
(etac @{thm lift_definedE} THEN' asm_simp_tac ctxt))
*}
lemma DefE: "Def x = \<bottom> \<Longrightarrow> R"
by simp
lemma DefE2: "\<lbrakk>x = Def s; x = \<bottom>\<rbrakk> \<Longrightarrow> R"
by simp
lemma Def_below_Def: "Def x \<sqsubseteq> Def y \<longleftrightarrow> x = y"
by (simp add: below_lift_def Def_def Abs_lift_inverse)
lemma Def_below_iff [simp]: "Def x \<sqsubseteq> y \<longleftrightarrow> Def x = y"
by (induct y, simp, simp add: Def_below_Def)
subsection {* Lift is flat *}
instance lift :: (type) flat
proof
fix x y :: "'a lift"
assume "x \<sqsubseteq> y" thus "x = \<bottom> \<or> x = y"
by (induct x) auto
qed
subsection {* Continuity of @{const case_lift} *}
lemma case_lift_eq: "case_lift \<bottom> f x = fup\<cdot>(\<Lambda> y. f (undiscr y))\<cdot>(Rep_lift x)"
apply (induct x, unfold lift.case)
apply (simp add: Rep_lift_strict)
apply (simp add: Def_def Abs_lift_inverse)
done
lemma cont2cont_case_lift [simp]:
"\<lbrakk>\<And>y. cont (\<lambda>x. f x y); cont g\<rbrakk> \<Longrightarrow> cont (\<lambda>x. case_lift \<bottom> (f x) (g x))"
unfolding case_lift_eq by (simp add: cont_Rep_lift)
subsection {* Further operations *}
definition
flift1 :: "('a \<Rightarrow> 'b::pcpo) \<Rightarrow> ('a lift \<rightarrow> 'b)" (binder "FLIFT " 10) where
"flift1 = (\<lambda>f. (\<Lambda> x. case_lift \<bottom> f x))"
translations
"\<Lambda>(XCONST Def x). t" => "CONST flift1 (\<lambda>x. t)"
"\<Lambda>(CONST Def x). FLIFT y. t" <= "FLIFT x y. t"
"\<Lambda>(CONST Def x). t" <= "FLIFT x. t"
definition
flift2 :: "('a \<Rightarrow> 'b) \<Rightarrow> ('a lift \<rightarrow> 'b lift)" where
"flift2 f = (FLIFT x. Def (f x))"
lemma flift1_Def [simp]: "flift1 f\<cdot>(Def x) = (f x)"
by (simp add: flift1_def)
lemma flift2_Def [simp]: "flift2 f\<cdot>(Def x) = Def (f x)"
by (simp add: flift2_def)
lemma flift1_strict [simp]: "flift1 f\<cdot>\<bottom> = \<bottom>"
by (simp add: flift1_def)
lemma flift2_strict [simp]: "flift2 f\<cdot>\<bottom> = \<bottom>"
by (simp add: flift2_def)
lemma flift2_defined [simp]: "x \<noteq> \<bottom> \<Longrightarrow> (flift2 f)\<cdot>x \<noteq> \<bottom>"
by (erule lift_definedE, simp)
lemma flift2_bottom_iff [simp]: "(flift2 f\<cdot>x = \<bottom>) = (x = \<bottom>)"
by (cases x, simp_all)
lemma FLIFT_mono:
"(\<And>x. f x \<sqsubseteq> g x) \<Longrightarrow> (FLIFT x. f x) \<sqsubseteq> (FLIFT x. g x)"
by (rule cfun_belowI, case_tac x, simp_all)
lemma cont2cont_flift1 [simp, cont2cont]:
"\<lbrakk>\<And>y. cont (\<lambda>x. f x y)\<rbrakk> \<Longrightarrow> cont (\<lambda>x. FLIFT y. f x y)"
by (simp add: flift1_def cont2cont_LAM)
end
|
% VL_HOMKERMAP Homogeneous kernel map
% V = VL_HOMKERMAP(X, N) computes a 2*N+1 dimensional approximated
% kernel map for the Chi2 kernel. X is an array of data points. Each
% point is expanded into a vector of dimension 2*N+1 and saved to
% the output V. The expanded feature vectors are stacked along the
% first dimension, so that the output array V has the same
% dimensions of the input array X except for the first one, which is
% 2*N+1 times larger.
%
% The function accepts the following options:
%
% Kernel:: KCHI2
% One of KCHI2 (Chi2 kernel), KINTERS (intersection kernel), KJS
% (Jensen-Shannon kernel). The 'Kernel' option name can be omitted,
% i.e. VL_HOMKERMAP(..., 'kernel', 'kchi2') has the same effect of
% VL_HOMKERMAP(..., 'kchi2').
%
% Period:: [automatically tuned]
% Set the period of the kernel spectrum. The approximation is
% based on periodicizing the kernel spectrum. If not specified,
% the period is automatically set based on the heuristic described
% in [2].
%
% Window:: [RECTANGULAR]
% Set the window used to truncate the spectrum. The window
% can be either RECTANGULAR or UNIFORM. See [2] and the API
% documentation for details.
%
% Gamma:: [1]
% Set the homogeneity degree of the kernel. The standard kernels
% are 1-homogeneous, but sometimes smaller values perform better
% in applications. See [2] for details.
%
% Example::
% The following code results in approximately the same
% similarity matrices between points X and Y:
%
% x = rand(10,1) ;
% y = rand(10,100) ;
% psix = vl_homkermap(x, 3) ;
% psiy = vl_homkermap(y, 3) ;
% figure(1) ; clf ;
% ker = vl_alldist(x, y, 'kchi2') ;
% ker_ = psix' * psiy ;
% plot([ker ; ker_]') ;
%
% Note::
% The homogeneous kernels K(X,Y) are normally defined for
% non-negative data only. VL_HOMKERMAP defines them for both
% positive and negative data by using the definition
% SIGN(X)SIGN(Y)K(ABS(X),ABS(Y)) -- note that other extensions are
% possible as well (see [2]).
%
% REFERENCES::
% [1] A. Vedaldi and A. Zisserman
% `Efficient Additive Kernels via Explicit Feature Maps',
% Proc. CVPR, 2010.
%
% [2] A. Vedaldi and A. Zisserman
% `Efficient Additive Kernels via Explicit Feature Maps',
% PAMI, 2011 (submitted).
%
% See also: VL_HELP().
% Authors: Andrea Vedaldi
% Copyright (C) 2007-12 Andrea Vedaldi and Brian Fulkerson.
% All rights reserved.
%
% This file is part of the VLFeat library and is made available under
% the terms of the BSD license (see the COPYING file).
|
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α
inst✝¹ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
inst✝ : DecidableEq β
⊢ sup (∅ ∪ s₂) f = sup ∅ f ⊔ sup s₂ f
[PROOFSTEP]
rw [empty_union, sup_empty, bot_sup_eq]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α
inst✝¹ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a✝ : α
inst✝ : DecidableEq β
a : β
s : Finset β
x✝ : ¬a ∈ s
ih : sup (s ∪ s₂) f = sup s f ⊔ sup s₂ f
⊢ sup (insert a s ∪ s₂) f = sup (insert a s) f ⊔ sup s₂ f
[PROOFSTEP]
rw [insert_union, sup_insert, sup_insert, ih, sup_assoc]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
⊢ sup s (f ⊔ g) = sup s f ⊔ sup s g
[PROOFSTEP]
refine' Finset.cons_induction_on s _ fun b t _ h => _
[GOAL]
case refine'_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
⊢ sup ∅ (f ⊔ g) = sup ∅ f ⊔ sup ∅ g
[PROOFSTEP]
rw [sup_empty, sup_empty, sup_empty, bot_sup_eq]
[GOAL]
case refine'_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
b : β
t : Finset β
x✝ : ¬b ∈ t
h : sup t (f ⊔ g) = sup t f ⊔ sup t g
⊢ sup (cons b t x✝) (f ⊔ g) = sup (cons b t x✝) f ⊔ sup (cons b t x✝) g
[PROOFSTEP]
rw [sup_cons, sup_cons, sup_cons, h]
[GOAL]
case refine'_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
b : β
t : Finset β
x✝ : ¬b ∈ t
h : sup t (f ⊔ g) = sup t f ⊔ sup t g
⊢ (f ⊔ g) b ⊔ (sup t f ⊔ sup t g) = f b ⊔ sup t f ⊔ (g b ⊔ sup t g)
[PROOFSTEP]
exact sup_sup_sup_comm _ _ _ _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g✝ : β → α
a : α
f g : β → α
hs : s₁ = s₂
hfg : ∀ (a : β), a ∈ s₂ → f a = g a
⊢ sup s₁ f = sup s₂ g
[PROOFSTEP]
subst hs
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ : Finset β
f✝ g✝ : β → α
a : α
f g : β → α
hfg : ∀ (a : β), a ∈ s₁ → f a = g a
⊢ sup s₁ f = sup s₁ g
[PROOFSTEP]
exact Finset.fold_congr hfg
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝⁴ : SemilatticeSup α
inst✝³ : OrderBot α
s✝¹ s₁ s₂ : Finset β
f✝ g✝ : β → α
a : α
inst✝² : SemilatticeSup β
inst✝¹ : OrderBot β
inst✝ : SupBotHomClass F α β
f : F
s✝ : Finset ι
g : ι → α
i : ι
s : Finset ι
x✝ : ¬i ∈ s
h : ↑f (sup s g) = sup s (↑f ∘ g)
⊢ ↑f (sup (cons i s x✝) g) = sup (cons i s x✝) (↑f ∘ g)
[PROOFSTEP]
rw [sup_cons, sup_cons, map_sup, h, Function.comp_apply]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a✝ a : α
⊢ sup s f ≤ a ↔ ∀ (b : β), b ∈ s → f b ≤ a
[PROOFSTEP]
apply Iff.trans Multiset.sup_le
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a✝ a : α
⊢ (∀ (b : α), b ∈ Multiset.map f s.val → b ≤ a) ↔ ∀ (b : β), b ∈ s → f b ≤ a
[PROOFSTEP]
simp only [Multiset.mem_map, and_imp, exists_imp]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a✝ a : α
⊢ (∀ (b : α) (x : β), x ∈ s.val → f x = b → b ≤ a) ↔ ∀ (b : β), b ∈ s → f b ≤ a
[PROOFSTEP]
exact ⟨fun k b hb => k _ _ hb rfl, fun k a' b hb h => h ▸ k _ hb⟩
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α
inst✝¹ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a : α
inst✝ : DecidableEq β
s : Finset γ
t : γ → Finset β
c : α
⊢ sup (Finset.biUnion s t) f ≤ c ↔ (sup s fun x => sup (t x) f) ≤ c
[PROOFSTEP]
simp [@forall_swap _ β]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a : α
s : Finset β
⊢ (sup s fun x => ⊥) = ⊥
[PROOFSTEP]
obtain rfl | hs := s.eq_empty_or_nonempty
[GOAL]
case inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
⊢ (sup ∅ fun x => ⊥) = ⊥
[PROOFSTEP]
exact sup_empty
[GOAL]
case inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a : α
s : Finset β
hs : Finset.Nonempty s
⊢ (sup s fun x => ⊥) = ⊥
[PROOFSTEP]
exact sup_const hs _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g : β → α
a : α
s : Finset β
t : Finset γ
f : β → γ → α
⊢ (sup s fun b => sup t (f b)) = sup t fun c => sup s fun b => f b c
[PROOFSTEP]
refine' eq_of_forall_ge_iff fun a => _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g : β → α
a✝ : α
s : Finset β
t : Finset γ
f : β → γ → α
a : α
⊢ (sup s fun b => sup t (f b)) ≤ a ↔ (sup t fun c => sup s fun b => f b c) ≤ a
[PROOFSTEP]
simp_rw [Finset.sup_le_iff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g : β → α
a✝ : α
s : Finset β
t : Finset γ
f : β → γ → α
a : α
⊢ (∀ (b : β), b ∈ s → ∀ (b_1 : γ), b_1 ∈ t → f b b_1 ≤ a) ↔ ∀ (b : γ), b ∈ t → ∀ (b_1 : β), b_1 ∈ s → f b_1 b ≤ a
[PROOFSTEP]
exact ⟨fun h c hc b hb => h b hb c hc, fun h b hb c hc => h c hc b hb⟩
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g : β → α
a : α
s : Finset β
t : Finset γ
f : β × γ → α
⊢ sup (s ×ˢ t) f = sup s fun i => sup t fun i' => f (i, i')
[PROOFSTEP]
simp only [le_antisymm_iff, Finset.sup_le_iff, mem_product, and_imp, Prod.forall]
-- Porting note: was one expression.
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g : β → α
a : α
s : Finset β
t : Finset γ
f : β × γ → α
⊢ (∀ (a : β) (b : γ), a ∈ s → b ∈ t → f (a, b) ≤ sup s fun i => sup t fun i' => f (i, i')) ∧
∀ (b : β), b ∈ s → ∀ (b_1 : γ), b_1 ∈ t → f (b, b_1) ≤ sup (s ×ˢ t) f
[PROOFSTEP]
refine ⟨fun b c hb hc => ?_, fun b hb c hc => ?_⟩
[GOAL]
case refine_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g : β → α
a : α
s : Finset β
t : Finset γ
f : β × γ → α
b : β
c : γ
hb : b ∈ s
hc : c ∈ t
⊢ f (b, c) ≤ sup s fun i => sup t fun i' => f (i, i')
[PROOFSTEP]
refine (le_sup hb).trans' ?_
[GOAL]
case refine_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g : β → α
a : α
s : Finset β
t : Finset γ
f : β × γ → α
b : β
c : γ
hb : b ∈ s
hc : c ∈ t
⊢ f (b, c) ≤ sup t fun i' => f (b, i')
[PROOFSTEP]
exact @le_sup _ _ _ _ _ (fun c => f (b, c)) c hc
[GOAL]
case refine_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g : β → α
a : α
s : Finset β
t : Finset γ
f : β × γ → α
b : β
hb : b ∈ s
c : γ
hc : c ∈ t
⊢ f (b, c) ≤ sup (s ×ˢ t) f
[PROOFSTEP]
exact le_sup <| mem_product.2 ⟨hb, hc⟩
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g : β → α
a : α
s : Finset β
t : Finset γ
f : β × γ → α
⊢ sup (s ×ˢ t) f = sup t fun i' => sup s fun i => f (i, i')
[PROOFSTEP]
rw [sup_product_left, Finset.sup_comm]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α
inst✝¹ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a : α
inst✝ : DecidableEq α
s : Finset α
⊢ sup (erase s ⊥) id = sup s id
[PROOFSTEP]
refine' (sup_mono (s.erase_subset _)).antisymm (Finset.sup_le_iff.2 fun a ha => _)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α
inst✝¹ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a✝ : α
inst✝ : DecidableEq α
s : Finset α
a : α
ha : a ∈ s
⊢ id a ≤ sup (erase s ⊥) id
[PROOFSTEP]
obtain rfl | ha' := eq_or_ne a ⊥
[GOAL]
case inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α
inst✝¹ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a : α
inst✝ : DecidableEq α
s : Finset α
ha : ⊥ ∈ s
⊢ id ⊥ ≤ sup (erase s ⊥) id
[PROOFSTEP]
exact bot_le
[GOAL]
case inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α
inst✝¹ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a✝ : α
inst✝ : DecidableEq α
s : Finset α
a : α
ha : a ∈ s
ha' : a ≠ ⊥
⊢ id a ≤ sup (erase s ⊥) id
[PROOFSTEP]
exact le_sup (mem_erase.2 ⟨ha', ha⟩)
[GOAL]
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α✝
inst✝¹ : OrderBot α✝
s✝ s₁ s₂ : Finset β✝
f✝ g : β✝ → α✝
a✝ : α✝
α : Type u_7
β : Type u_8
inst✝ : GeneralizedBooleanAlgebra α
s : Finset β
f : β → α
a : α
⊢ (sup s fun b => f b \ a) = sup s f \ a
[PROOFSTEP]
refine' Finset.cons_induction_on s _ fun b t _ h => _
[GOAL]
case refine'_1
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α✝
inst✝¹ : OrderBot α✝
s✝ s₁ s₂ : Finset β✝
f✝ g : β✝ → α✝
a✝ : α✝
α : Type u_7
β : Type u_8
inst✝ : GeneralizedBooleanAlgebra α
s : Finset β
f : β → α
a : α
⊢ (sup ∅ fun b => f b \ a) = sup ∅ f \ a
[PROOFSTEP]
rw [sup_empty, sup_empty, bot_sdiff]
[GOAL]
case refine'_2
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α✝
inst✝¹ : OrderBot α✝
s✝ s₁ s₂ : Finset β✝
f✝ g : β✝ → α✝
a✝ : α✝
α : Type u_7
β : Type u_8
inst✝ : GeneralizedBooleanAlgebra α
s : Finset β
f : β → α
a : α
b : β
t : Finset β
x✝ : ¬b ∈ t
h : (sup t fun b => f b \ a) = sup t f \ a
⊢ (sup (cons b t x✝) fun b => f b \ a) = sup (cons b t x✝) f \ a
[PROOFSTEP]
rw [sup_cons, sup_cons, h, sup_sdiff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α
inst✝² : OrderBot α
s✝ s₁ s₂ : Finset β
f✝ g✝ : β → α
a : α
inst✝¹ : SemilatticeSup γ
inst✝ : OrderBot γ
s : Finset β
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
bot : g ⊥ = ⊥
c : β
t : Finset β
hc : ¬c ∈ t
ih : g (sup t f) = sup t (g ∘ f)
⊢ g (sup (cons c t hc) f) = sup (cons c t hc) (g ∘ f)
[PROOFSTEP]
rw [sup_cons, sup_cons, g_sup, ih, Function.comp_apply]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
P : α → Prop
Pbot : P ⊥
Psup : ∀ ⦃x y : α⦄, P x → P y → P (x ⊔ y)
t : Finset β
f : β → { x // P x }
⊢ ↑(sup t f) = sup t fun x => ↑(f x)
[PROOFSTEP]
letI := Subtype.semilatticeSup Psup
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
P : α → Prop
Pbot : P ⊥
Psup : ∀ ⦃x y : α⦄, P x → P y → P (x ⊔ y)
t : Finset β
f : β → { x // P x }
this : SemilatticeSup { x // P x } := Subtype.semilatticeSup Psup
⊢ ↑(sup t f) = sup t fun x => ↑(f x)
[PROOFSTEP]
letI := Subtype.orderBot Pbot
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
P : α → Prop
Pbot : P ⊥
Psup : ∀ ⦃x y : α⦄, P x → P y → P (x ⊔ y)
t : Finset β
f : β → { x // P x }
this✝ : SemilatticeSup { x // P x } := Subtype.semilatticeSup Psup
this : OrderBot { x // P x } := Subtype.orderBot Pbot
⊢ ↑(sup t f) = sup t fun x => ↑(f x)
[PROOFSTEP]
apply comp_sup_eq_sup_comp Subtype.val
[GOAL]
case g_sup
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
P : α → Prop
Pbot : P ⊥
Psup : ∀ ⦃x y : α⦄, P x → P y → P (x ⊔ y)
t : Finset β
f : β → { x // P x }
this✝ : SemilatticeSup { x // P x } := Subtype.semilatticeSup Psup
this : OrderBot { x // P x } := Subtype.orderBot Pbot
⊢ ∀ (x y : { x // P x }), ↑(x ⊔ y) = ↑x ⊔ ↑y
[PROOFSTEP]
intros
[GOAL]
case bot
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
P : α → Prop
Pbot : P ⊥
Psup : ∀ ⦃x y : α⦄, P x → P y → P (x ⊔ y)
t : Finset β
f : β → { x // P x }
this✝ : SemilatticeSup { x // P x } := Subtype.semilatticeSup Psup
this : OrderBot { x // P x } := Subtype.orderBot Pbot
⊢ ↑⊥ = ⊥
[PROOFSTEP]
intros
[GOAL]
case g_sup
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
P : α → Prop
Pbot : P ⊥
Psup : ∀ ⦃x y : α⦄, P x → P y → P (x ⊔ y)
t : Finset β
f : β → { x // P x }
this✝ : SemilatticeSup { x // P x } := Subtype.semilatticeSup Psup
this : OrderBot { x // P x } := Subtype.orderBot Pbot
x✝ y✝ : { x // P x }
⊢ ↑(x✝ ⊔ y✝) = ↑x✝ ⊔ ↑y✝
[PROOFSTEP]
rfl
[GOAL]
case bot
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
P : α → Prop
Pbot : P ⊥
Psup : ∀ ⦃x y : α⦄, P x → P y → P (x ⊔ y)
t : Finset β
f : β → { x // P x }
this✝ : SemilatticeSup { x // P x } := Subtype.semilatticeSup Psup
this : OrderBot { x // P x } := Subtype.orderBot Pbot
⊢ ↑⊥ = ⊥
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α
inst✝¹ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
inst✝ : DecidableEq α
l : List α
⊢ List.foldr (fun x x_1 => x ⊔ x_1) ⊥ l = sup (List.toFinset l) id
[PROOFSTEP]
rw [← coe_fold_r, ← Multiset.fold_dedup_idem, sup_def, ← List.toFinset_coe, toFinset_val, Multiset.map_id]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeSup α
inst✝¹ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
inst✝ : DecidableEq α
l : List α
⊢ Multiset.fold (fun x x_1 => x ⊔ x_1) ⊥ (dedup ↑l) = Multiset.sup (dedup ↑l)
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
p : α → Prop
hb : p ⊥
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
⊢ p (sup s f)
[PROOFSTEP]
induction' s using Finset.cons_induction with c s hc ih
[GOAL]
case empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f g : β → α
a : α
p : α → Prop
hb : p ⊥
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs✝ : ∀ (b : β), b ∈ s → p (f b)
hs : ∀ (b : β), b ∈ ∅ → p (f b)
⊢ p (sup ∅ f)
[PROOFSTEP]
exact hb
[GOAL]
case cons
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a : α
p : α → Prop
hb : p ⊥
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs✝ : ∀ (b : β), b ∈ s✝ → p (f b)
c : β
s : Finset β
hc : ¬c ∈ s
ih : (∀ (b : β), b ∈ s → p (f b)) → p (sup s f)
hs : ∀ (b : β), b ∈ cons c s hc → p (f b)
⊢ p (sup (cons c s hc) f)
[PROOFSTEP]
rw [sup_cons]
[GOAL]
case cons
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a : α
p : α → Prop
hb : p ⊥
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs✝ : ∀ (b : β), b ∈ s✝ → p (f b)
c : β
s : Finset β
hc : ¬c ∈ s
ih : (∀ (b : β), b ∈ s → p (f b)) → p (sup s f)
hs : ∀ (b : β), b ∈ cons c s hc → p (f b)
⊢ p (f c ⊔ sup s f)
[PROOFSTEP]
apply hp
[GOAL]
case cons.a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a : α
p : α → Prop
hb : p ⊥
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs✝ : ∀ (b : β), b ∈ s✝ → p (f b)
c : β
s : Finset β
hc : ¬c ∈ s
ih : (∀ (b : β), b ∈ s → p (f b)) → p (sup s f)
hs : ∀ (b : β), b ∈ cons c s hc → p (f b)
⊢ p (f c)
[PROOFSTEP]
exact hs c (mem_cons.2 (Or.inl rfl))
[GOAL]
case cons.a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s✝ s₁ s₂ : Finset β
f g : β → α
a : α
p : α → Prop
hb : p ⊥
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs✝ : ∀ (b : β), b ∈ s✝ → p (f b)
c : β
s : Finset β
hc : ¬c ∈ s
ih : (∀ (b : β), b ∈ s → p (f b)) → p (sup s f)
hs : ∀ (b : β), b ∈ cons c s hc → p (f b)
⊢ p (sup s f)
[PROOFSTEP]
exact ih fun b h => hs b (mem_cons.2 (Or.inr h))
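-- Usage sketch (not extracted data; assuming `import Mathlib`, and assuming the
-- induction principle just proved is Mathlib's `Finset.sup_induction`): on ℕ,
-- where `⊔` is `max` and `⊥ = 0`, a pointwise bound propagates to the sup.
example (s : Finset ℕ) (f : ℕ → ℕ) (hf : ∀ b ∈ s, f b ≤ 10) : s.sup f ≤ 10 := by
  apply Finset.sup_induction (p := (· ≤ 10))
  · exact bot_le
  · exact fun _ h₁ _ h₂ => sup_le h₁ h₂
  · exact hf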
[GOAL]
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
t : Finset α
⊢ (∀ (x : α), x ∈ t → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup t id ≤ x
[PROOFSTEP]
classical
induction' t using Finset.induction_on with a r _ ih h
· simpa only [forall_prop_of_true, and_true_iff, forall_prop_of_false, bot_le, not_false_iff, sup_empty,
forall_true_iff, not_mem_empty]
· intro h
have incs : (r : Set α) ⊆ ↑(insert a r) := by
rw [Finset.coe_subset]
apply Finset.subset_insert
obtain ⟨x, ⟨hxs, hsx_sup⟩⟩ := ih fun x hx => h x <| incs hx
obtain ⟨y, hys, hay⟩ :=
h a
(Finset.mem_insert_self a r)
-- z ∈ s is above x and y
obtain ⟨z, hzs, ⟨hxz, hyz⟩⟩ := hdir x hxs y hys
use z, hzs
rw [sup_insert, id.def, sup_le_iff]
exact ⟨le_trans hay hyz, le_trans hsx_sup hxz⟩
[GOAL]
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
t : Finset α
⊢ (∀ (x : α), x ∈ t → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup t id ≤ x
[PROOFSTEP]
induction' t using Finset.induction_on with a r _ ih h
[GOAL]
case empty
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
⊢ (∀ (x : α), x ∈ ∅ → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup ∅ id ≤ x
[PROOFSTEP]
simpa only [forall_prop_of_true, and_true_iff, forall_prop_of_false, bot_le, not_false_iff, sup_empty, forall_true_iff,
not_mem_empty]
[GOAL]
case insert
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
⊢ (∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup (insert a r) id ≤ x
[PROOFSTEP]
intro h
[GOAL]
case insert
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
h : ∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y
⊢ ∃ x, x ∈ s ∧ sup (insert a r) id ≤ x
[PROOFSTEP]
have incs : (r : Set α) ⊆ ↑(insert a r) := by
rw [Finset.coe_subset]
apply Finset.subset_insert
[GOAL]
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
h : ∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y
⊢ ↑r ⊆ ↑(insert a r)
[PROOFSTEP]
rw [Finset.coe_subset]
[GOAL]
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
h : ∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y
⊢ r ⊆ insert a r
[PROOFSTEP]
apply Finset.subset_insert
[GOAL]
case insert
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
h : ∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y
incs : ↑r ⊆ ↑(insert a r)
⊢ ∃ x, x ∈ s ∧ sup (insert a r) id ≤ x
[PROOFSTEP]
obtain ⟨x, ⟨hxs, hsx_sup⟩⟩ := ih fun x hx => h x <| incs hx
[GOAL]
case insert.intro.intro
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
h : ∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y
incs : ↑r ⊆ ↑(insert a r)
x : α
hxs : x ∈ s
hsx_sup : sup r id ≤ x
⊢ ∃ x, x ∈ s ∧ sup (insert a r) id ≤ x
[PROOFSTEP]
obtain ⟨y, hys, hay⟩ :=
h a
(Finset.mem_insert_self a r)
-- z ∈ s is above x and y
[GOAL]
case insert.intro.intro.intro.intro
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
h : ∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y
incs : ↑r ⊆ ↑(insert a r)
x : α
hxs : x ∈ s
hsx_sup : sup r id ≤ x
y : α
hys : y ∈ s
hay : a ≤ y
⊢ ∃ x, x ∈ s ∧ sup (insert a r) id ≤ x
[PROOFSTEP]
obtain ⟨z, hzs, ⟨hxz, hyz⟩⟩ := hdir x hxs y hys
[GOAL]
case insert.intro.intro.intro.intro.intro.intro.intro
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
h : ∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y
incs : ↑r ⊆ ↑(insert a r)
x : α
hxs : x ∈ s
hsx_sup : sup r id ≤ x
y : α
hys : y ∈ s
hay : a ≤ y
z : α
hzs : z ∈ s
hxz : x ≤ z
hyz : y ≤ z
⊢ ∃ x, x ∈ s ∧ sup (insert a r) id ≤ x
[PROOFSTEP]
use z, hzs
[GOAL]
case right
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
h : ∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y
incs : ↑r ⊆ ↑(insert a r)
x : α
hxs : x ∈ s
hsx_sup : sup r id ≤ x
y : α
hys : y ∈ s
hay : a ≤ y
z : α
hzs : z ∈ s
hxz : x ≤ z
hyz : y ≤ z
⊢ sup (insert a r) id ≤ z
[PROOFSTEP]
rw [sup_insert, id.def, sup_le_iff]
[GOAL]
case right
F : Type u_1
α✝ : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝³ : SemilatticeSup α✝
inst✝² : OrderBot α✝
s✝ s₁ s₂ : Finset β
f g : β → α✝
a✝¹ : α✝
α : Type u_7
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Set α
hs : Set.Nonempty s
hdir : DirectedOn (fun x x_1 => x ≤ x_1) s
a : α
r : Finset α
a✝ : ¬a ∈ r
ih : (∀ (x : α), x ∈ r → ∃ y, y ∈ s ∧ x ≤ y) → ∃ x, x ∈ s ∧ sup r id ≤ x
h : ∀ (x : α), x ∈ insert a r → ∃ y, y ∈ s ∧ x ≤ y
incs : ↑r ⊆ ↑(insert a r)
x : α
hxs : x ∈ s
hsx_sup : sup r id ≤ x
y : α
hys : y ∈ s
hay : a ≤ y
z : α
hzs : z ∈ s
hxz : x ≤ z
hyz : y ≤ z
⊢ a ≤ z ∧ sup r id ≤ z
[PROOFSTEP]
exact ⟨le_trans hay hyz, le_trans hsx_sup hxz⟩
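-- Usage sketch (not extracted data; assuming `import Mathlib`, and assuming the
-- lemma above is `Finset.sup_le_of_le_directed`): over a directed set, pointwise
-- upper bounds for a finset collapse to a single upper bound for its sup. Here the
-- directed set is all of ℕ, so the conclusion is easy but shows the argument shape.
example (t : Finset ℕ) : ∃ x ∈ (Set.univ : Set ℕ), t.sup id ≤ x := by
  apply Finset.sup_le_of_le_directed
  · exact ⟨0, trivial⟩
  · exact fun x _ y _ => ⟨x ⊔ y, trivial, le_sup_left, le_sup_right⟩
  · exact fun x _ => ⟨x, trivial, le_rfl⟩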
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
f : β → α
S : Finset β
⊢ sup S f = ⊥ ↔ ∀ (s : β), s ∈ S → f s = ⊥
[PROOFSTEP]
classical induction' S using Finset.induction with a S _ hi <;> simp [*]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
f : β → α
S : Finset β
⊢ sup S f = ⊥ ↔ ∀ (s : β), s ∈ S → f s = ⊥
[PROOFSTEP]
induction' S using Finset.induction with a S _ hi
[GOAL]
case empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a : α
f : β → α
⊢ sup ∅ f = ⊥ ↔ ∀ (s : β), s ∈ ∅ → f s = ⊥
[PROOFSTEP]
simp [*]
[GOAL]
case insert
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s s₁ s₂ : Finset β
f✝ g : β → α
a✝¹ : α
f : β → α
a : β
S : Finset β
a✝ : ¬a ∈ S
hi : sup S f = ⊥ ↔ ∀ (s : β), s ∈ S → f s = ⊥
⊢ sup (insert a S) f = ⊥ ↔ ∀ (s : β), s ∈ insert a S → f s = ⊥
[PROOFSTEP]
simp [*]
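-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.sup_eq_bot_iff` is an assumption): a finite sup is ⊥ exactly when every
-- value is, so the forward direction gives a convenient bound-to-zero argument.
example (f : ℕ → ℕ) (S : Finset ℕ) (h : ∀ b ∈ S, f b = ⊥) : S.sup f = ⊥ := by
  rw [Finset.sup_eq_bot_iff]
  exact h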
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : CompleteLattice α
s : Finset α
⊢ sup s id = sSup ↑s
[PROOFSTEP]
simp [sSup_eq_iSup, sup_eq_iSup]
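-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.sup_id_eq_sSup` is an assumption): in a complete lattice such as
-- `Set ℕ`, the finite sup of a finset of elements agrees with `sSup` of its coercion.
example (s : Finset (Set ℕ)) : s.sup id = sSup ↑s := by
  rw [Finset.sup_id_eq_sSup]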
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : CompleteLattice β
s : Finset α
f : α → β
⊢ sup s f = sSup (f '' ↑s)
[PROOFSTEP]
classical rw [← Finset.coe_image, ← sup_id_eq_sSup, sup_image, Function.comp.left_id]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : CompleteLattice β
s : Finset α
f : α → β
⊢ sup s f = sSup (f '' ↑s)
[PROOFSTEP]
rw [← Finset.coe_image, ← sup_id_eq_sSup, sup_image, Function.comp.left_id]
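-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.sup_eq_sSup_image` is an assumption): the same identity with an explicit
-- function, phrased through the image of the coerced finset.
example (s : Finset ℕ) (f : ℕ → Set ℕ) : s.sup f = sSup (f '' ↑s) := by
  rw [Finset.sup_eq_sSup_image]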
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeInf α
inst✝ : OrderTop α
s s₁ s₂ : Finset β
f✝ g✝ : β → α
a : α
f g : β → α
hs : s₁ = s₂
hfg : ∀ (a : β), a ∈ s₂ → f a = g a
⊢ inf s₁ f = inf s₂ g
[PROOFSTEP]
subst hs
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeInf α
inst✝ : OrderTop α
s s₁ : Finset β
f✝ g✝ : β → α
a : α
f g : β → α
hfg : ∀ (a : β), a ∈ s₁ → f a = g a
⊢ inf s₁ f = inf s₁ g
[PROOFSTEP]
exact Finset.fold_congr hfg
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝⁴ : SemilatticeInf α
inst✝³ : OrderTop α
s✝¹ s₁ s₂ : Finset β
f✝ g✝ : β → α
a : α
inst✝² : SemilatticeInf β
inst✝¹ : OrderTop β
inst✝ : InfTopHomClass F α β
f : F
s✝ : Finset ι
g : ι → α
i : ι
s : Finset ι
x✝ : ¬i ∈ s
h : ↑f (inf s g) = inf s (↑f ∘ g)
⊢ ↑f (inf (cons i s x✝) g) = inf (cons i s x✝) (↑f ∘ g)
[PROOFSTEP]
rw [inf_cons, inf_cons, map_inf, h, Function.comp_apply]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeInf α
inst✝¹ : OrderTop α
s s₁ s₂ : Finset β
f g : β → α
a : α
inst✝ : DecidableEq α
l : List α
⊢ List.foldr (fun x x_1 => x ⊓ x_1) ⊤ l = inf (List.toFinset l) id
[PROOFSTEP]
rw [← coe_fold_r, ← Multiset.fold_dedup_idem, inf_def, ← List.toFinset_coe, toFinset_val, Multiset.map_id]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : SemilatticeInf α
inst✝¹ : OrderTop α
s s₁ s₂ : Finset β
f g : β → α
a : α
inst✝ : DecidableEq α
l : List α
⊢ Multiset.fold (fun x x_1 => x ⊓ x_1) ⊤ (dedup ↑l) = Multiset.inf (dedup ↑l)
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : DistribLattice α
inst✝ : OrderBot α
s✝ : Finset ι
t : Finset κ
f✝ : ι → α
g : κ → α
a✝ : α
s : Finset ι
f : ι → α
a : α
⊢ a ⊓ sup s f = sup s fun i => a ⊓ f i
[PROOFSTEP]
induction' s using Finset.cons_induction with i s hi h
[GOAL]
case empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : DistribLattice α
inst✝ : OrderBot α
s : Finset ι
t : Finset κ
f✝ : ι → α
g : κ → α
a✝ : α
f : ι → α
a : α
⊢ a ⊓ sup ∅ f = sup ∅ fun i => a ⊓ f i
[PROOFSTEP]
simp_rw [Finset.sup_empty, inf_bot_eq]
[GOAL]
case cons
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : DistribLattice α
inst✝ : OrderBot α
s✝ : Finset ι
t : Finset κ
f✝ : ι → α
g : κ → α
a✝ : α
f : ι → α
a : α
i : ι
s : Finset ι
hi : ¬i ∈ s
h : a ⊓ sup s f = sup s fun i => a ⊓ f i
⊢ a ⊓ sup (cons i s hi) f = sup (cons i s hi) fun i => a ⊓ f i
[PROOFSTEP]
rw [sup_cons, sup_cons, inf_sup_left, h]
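-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.sup_inf_distrib_left` is an assumption): on ℕ, a distributive lattice
-- with ⊥ = 0, a binary meet distributes over a finite sup.
example (s : Finset ℕ) (f : ℕ → ℕ) (a : ℕ) :
    a ⊓ s.sup f = s.sup fun i => a ⊓ f i := by
  rw [Finset.sup_inf_distrib_left]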
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : DistribLattice α
inst✝ : OrderBot α
s✝ : Finset ι
t : Finset κ
f✝ : ι → α
g : κ → α
a✝ : α
s : Finset ι
f : ι → α
a : α
⊢ sup s f ⊓ a = sup s fun i => f i ⊓ a
[PROOFSTEP]
rw [_root_.inf_comm, s.sup_inf_distrib_left]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : DistribLattice α
inst✝ : OrderBot α
s✝ : Finset ι
t : Finset κ
f✝ : ι → α
g : κ → α
a✝ : α
s : Finset ι
f : ι → α
a : α
⊢ (sup s fun i => a ⊓ f i) = sup s fun i => f i ⊓ a
[PROOFSTEP]
simp_rw [_root_.inf_comm]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : DistribLattice α
inst✝ : OrderBot α
s : Finset ι
t : Finset κ
f : ι → α
g : κ → α
a : α
⊢ _root_.Disjoint a (sup s f) ↔ ∀ ⦃i : ι⦄, i ∈ s → _root_.Disjoint a (f i)
[PROOFSTEP]
simp only [disjoint_iff, sup_inf_distrib_left, Finset.sup_eq_bot_iff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : DistribLattice α
inst✝ : OrderBot α
s : Finset ι
t : Finset κ
f : ι → α
g : κ → α
a : α
⊢ _root_.Disjoint (sup s f) a ↔ ∀ ⦃i : ι⦄, i ∈ s → _root_.Disjoint (f i) a
[PROOFSTEP]
simp only [disjoint_iff, sup_inf_distrib_right, Finset.sup_eq_bot_iff]
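-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.disjoint_sup_right` is an assumption): disjointness from a finite sup of
-- sets reduces to pointwise disjointness, one direction of the iff just proved.
example (a : Set ℕ) (s : Finset ℕ) (f : ℕ → Set ℕ)
    (h : ∀ i ∈ s, Disjoint a (f i)) : Disjoint a (s.sup f) := by
  rw [Finset.disjoint_sup_right]
  exact h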
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : DistribLattice α
inst✝ : OrderBot α
s✝ : Finset ι
t✝ : Finset κ
f✝ : ι → α
g✝ : κ → α
a : α
s : Finset ι
t : Finset κ
f : ι → α
g : κ → α
⊢ sup s f ⊓ sup t g = sup (s ×ˢ t) fun i => f i.fst ⊓ g i.snd
[PROOFSTEP]
simp_rw [Finset.sup_inf_distrib_right, Finset.sup_inf_distrib_left, sup_product_left]
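-- Usage sketch (not extracted data; assuming `import Mathlib`): `sup_inf_sup`, the
-- lemma this step proves (its name also appears in a `rw` in the next proof of this
-- dump), turns a meet of two finite sups into one sup over the product finset.
example (s t : Finset ℕ) (f g : ℕ → ℕ) :
    s.sup f ⊓ t.sup g = (s ×ˢ t).sup fun i => f i.1 ⊓ g i.2 := by
  rw [Finset.sup_inf_sup]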
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
s : Finset ι
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
⊢ (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
[PROOFSTEP]
induction' s using Finset.induction with i s hi ih
[GOAL]
case empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
⊢ (inf ∅ fun i => sup (t i) (f i)) = sup (pi ∅ t) fun g => inf (attach ∅) fun i => f (↑i) (g ↑i (_ : ↑i ∈ ∅))
[PROOFSTEP]
simp
[GOAL]
case insert
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
⊢ (inf (insert i s) fun i => sup (t i) (f i)) =
sup (pi (insert i s) t) fun g => inf (attach (insert i s)) fun i_1 => f (↑i_1) (g ↑i_1 (_ : ↑i_1 ∈ insert i s))
[PROOFSTEP]
rw [inf_insert, ih, attach_insert, sup_inf_sup]
[GOAL]
case insert
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
⊢ (sup (t i ×ˢ pi s t) fun i_1 => f i i_1.fst ⊓ inf (attach s) fun i_2 => f (↑i_2) (Prod.snd i_1 ↑i_2 (_ : ↑i_2 ∈ s))) =
sup (pi (insert i s) t) fun g =>
inf
(insert { val := i, property := (_ : i ∈ insert i s) }
(image (fun x => { val := ↑x, property := (_ : ↑x ∈ insert i s) }) (attach s)))
fun i_1 => f (↑i_1) (g ↑i_1 (_ : ↑i_1 ∈ insert i s))
[PROOFSTEP]
refine' eq_of_forall_ge_iff fun c => _
[GOAL]
case insert
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
⊢ (sup (t i ×ˢ pi s t) fun i_1 => f i i_1.fst ⊓ inf (attach s) fun i_2 => f (↑i_2) (Prod.snd i_1 ↑i_2 (_ : ↑i_2 ∈ s))) ≤
c ↔
(sup (pi (insert i s) t) fun g =>
inf
(insert { val := i, property := (_ : i ∈ insert i s) }
(image (fun x => { val := ↑x, property := (_ : ↑x ∈ insert i s) }) (attach s)))
fun i_1 => f (↑i_1) (g ↑i_1 (_ : ↑i_1 ∈ insert i s))) ≤
c
[PROOFSTEP]
simp only [Finset.sup_le_iff, mem_product, mem_pi, and_imp, Prod.forall, inf_insert, inf_image]
[GOAL]
case insert
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
⊢ (∀ (a : κ i) (b : (a : ι) → a ∈ s → κ a),
a ∈ t i →
(∀ (a : ι) (h : a ∈ s), b a h ∈ t a) → (f i a ⊓ inf (attach s) fun i => f (↑i) (b ↑i (_ : ↑i ∈ s))) ≤ c) ↔
∀ (b : (a : ι) → a ∈ insert i s → κ a),
(∀ (a : ι) (h : a ∈ insert i s), b a h ∈ t a) →
f i
(b ↑{ val := i, property := (_ : i ∈ insert i s) }
(_ : ↑{ val := i, property := (_ : i ∈ insert i s) } ∈ insert i s)) ⊓
inf (attach s)
((fun i_1 => f (↑i_1) (b ↑i_1 (_ : ↑i_1 ∈ insert i s))) ∘ fun x =>
{ val := ↑x, property := (_ : ↑x ∈ insert i s) }) ≤
c
[PROOFSTEP]
refine'
⟨fun h g hg =>
h (g i <| mem_insert_self _ _) (fun j hj => g j <| mem_insert_of_mem hj) (hg _ <| mem_insert_self _ _) fun j hj =>
hg _ <| mem_insert_of_mem hj,
fun h a g ha hg => _⟩
-- TODO: This `have` must be named to prevent it being shadowed by the internal `this` in `simpa`
[GOAL]
case insert
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
h :
∀ (b : (a : ι) → a ∈ insert i s → κ a),
(∀ (a : ι) (h : a ∈ insert i s), b a h ∈ t a) →
f i
(b ↑{ val := i, property := (_ : i ∈ insert i s) }
(_ : ↑{ val := i, property := (_ : i ∈ insert i s) } ∈ insert i s)) ⊓
inf (attach s)
((fun i_1 => f (↑i_1) (b ↑i_1 (_ : ↑i_1 ∈ insert i s))) ∘ fun x =>
{ val := ↑x, property := (_ : ↑x ∈ insert i s) }) ≤
c
a : κ i
g : (a : ι) → a ∈ s → κ a
ha : a ∈ t i
hg : ∀ (a : ι) (h : a ∈ s), g a h ∈ t a
⊢ (f i a ⊓ inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))) ≤ c
[PROOFSTEP]
have aux : ∀ j : { x // x ∈ s }, ↑j ≠ i := fun j : s => ne_of_mem_of_not_mem j.2 hi
[GOAL]
case insert
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
h :
∀ (b : (a : ι) → a ∈ insert i s → κ a),
(∀ (a : ι) (h : a ∈ insert i s), b a h ∈ t a) →
f i
(b ↑{ val := i, property := (_ : i ∈ insert i s) }
(_ : ↑{ val := i, property := (_ : i ∈ insert i s) } ∈ insert i s)) ⊓
inf (attach s)
((fun i_1 => f (↑i_1) (b ↑i_1 (_ : ↑i_1 ∈ insert i s))) ∘ fun x =>
{ val := ↑x, property := (_ : ↑x ∈ insert i s) }) ≤
c
a : κ i
g : (a : ι) → a ∈ s → κ a
ha : a ∈ t i
hg : ∀ (a : ι) (h : a ∈ s), g a h ∈ t a
aux : ∀ (j : { x // x ∈ s }), ↑j ≠ i
⊢ (f i a ⊓ inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))) ≤ c
[PROOFSTEP]
have :=
h (fun j hj => if hji : j = i then cast (congr_arg κ hji.symm) a else g _ <| mem_of_mem_insert_of_ne hj hji)
(fun j hj => ?_)
[GOAL]
case insert.refine_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
h :
∀ (b : (a : ι) → a ∈ insert i s → κ a),
(∀ (a : ι) (h : a ∈ insert i s), b a h ∈ t a) →
f i
(b ↑{ val := i, property := (_ : i ∈ insert i s) }
(_ : ↑{ val := i, property := (_ : i ∈ insert i s) } ∈ insert i s)) ⊓
inf (attach s)
((fun i_1 => f (↑i_1) (b ↑i_1 (_ : ↑i_1 ∈ insert i s))) ∘ fun x =>
{ val := ↑x, property := (_ : ↑x ∈ insert i s) }) ≤
c
a : κ i
g : (a : ι) → a ∈ s → κ a
ha : a ∈ t i
hg : ∀ (a : ι) (h : a ∈ s), g a h ∈ t a
aux : ∀ (j : { x // x ∈ s }), ↑j ≠ i
this :
f i
((fun j hj => if hji : j = i then cast (_ : κ i = κ j) a else g j (_ : j ∈ s))
↑{ val := i, property := (_ : i ∈ insert i s) }
(_ : ↑{ val := i, property := (_ : i ∈ insert i s) } ∈ insert i s)) ⊓
inf (attach s)
((fun i_1 =>
f (↑i_1)
((fun j hj => if hji : j = i then cast (_ : κ i = κ j) a else g j (_ : j ∈ s)) ↑i_1
(_ : ↑i_1 ∈ insert i s))) ∘
fun x => { val := ↑x, property := (_ : ↑x ∈ insert i s) }) ≤
c
⊢ (f i a ⊓ inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))) ≤ c
case insert.refine_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
h :
∀ (b : (a : ι) → a ∈ insert i s → κ a),
(∀ (a : ι) (h : a ∈ insert i s), b a h ∈ t a) →
f i
(b ↑{ val := i, property := (_ : i ∈ insert i s) }
(_ : ↑{ val := i, property := (_ : i ∈ insert i s) } ∈ insert i s)) ⊓
inf (attach s)
((fun i_1 => f (↑i_1) (b ↑i_1 (_ : ↑i_1 ∈ insert i s))) ∘ fun x =>
{ val := ↑x, property := (_ : ↑x ∈ insert i s) }) ≤
c
a : κ i
g : (a : ι) → a ∈ s → κ a
ha : a ∈ t i
hg : ∀ (a : ι) (h : a ∈ s), g a h ∈ t a
aux : ∀ (j : { x // x ∈ s }), ↑j ≠ i
j : ι
hj : j ∈ insert i s
⊢ (fun j hj => if hji : j = i then cast (_ : κ i = κ j) a else g j (_ : j ∈ s)) j hj ∈ t j
[PROOFSTEP]
simpa only [cast_eq, dif_pos, Function.comp, Subtype.coe_mk, dif_neg, aux] using this
[GOAL]
case insert.refine_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
h :
∀ (b : (a : ι) → a ∈ insert i s → κ a),
(∀ (a : ι) (h : a ∈ insert i s), b a h ∈ t a) →
f i
(b ↑{ val := i, property := (_ : i ∈ insert i s) }
(_ : ↑{ val := i, property := (_ : i ∈ insert i s) } ∈ insert i s)) ⊓
inf (attach s)
((fun i_1 => f (↑i_1) (b ↑i_1 (_ : ↑i_1 ∈ insert i s))) ∘ fun x =>
{ val := ↑x, property := (_ : ↑x ∈ insert i s) }) ≤
c
a : κ i
g : (a : ι) → a ∈ s → κ a
ha : a ∈ t i
hg : ∀ (a : ι) (h : a ∈ s), g a h ∈ t a
aux : ∀ (j : { x // x ∈ s }), ↑j ≠ i
j : ι
hj : j ∈ insert i s
⊢ (fun j hj => if hji : j = i then cast (_ : κ i = κ j) a else g j (_ : j ∈ s)) j hj ∈ t j
[PROOFSTEP]
rw [mem_insert] at hj
[GOAL]
case insert.refine_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
h :
∀ (b : (a : ι) → a ∈ insert i s → κ a),
(∀ (a : ι) (h : a ∈ insert i s), b a h ∈ t a) →
f i
(b ↑{ val := i, property := (_ : i ∈ insert i s) }
(_ : ↑{ val := i, property := (_ : i ∈ insert i s) } ∈ insert i s)) ⊓
inf (attach s)
((fun i_1 => f (↑i_1) (b ↑i_1 (_ : ↑i_1 ∈ insert i s))) ∘ fun x =>
{ val := ↑x, property := (_ : ↑x ∈ insert i s) }) ≤
c
a : κ i
g : (a : ι) → a ∈ s → κ a
ha : a ∈ t i
hg : ∀ (a : ι) (h : a ∈ s), g a h ∈ t a
aux : ∀ (j : { x // x ∈ s }), ↑j ≠ i
j : ι
hj✝ : j ∈ insert i s
hj : j = i ∨ j ∈ s
⊢ (fun j hj => if hji : j = i then cast (_ : κ i = κ j) a else g j (_ : j ∈ s)) j hj✝ ∈ t j
[PROOFSTEP]
obtain (rfl | hj) := hj
[GOAL]
case insert.refine_1.inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
s : Finset ι
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
g : (a : ι) → a ∈ s → κ a
hg : ∀ (a : ι) (h : a ∈ s), g a h ∈ t a
j : ι
hi : ¬j ∈ s
h :
∀ (b : (a : ι) → a ∈ insert j s → κ a),
(∀ (a : ι) (h : a ∈ insert j s), b a h ∈ t a) →
f j
(b ↑{ val := j, property := (_ : j ∈ insert j s) }
(_ : ↑{ val := j, property := (_ : j ∈ insert j s) } ∈ insert j s)) ⊓
inf (attach s)
((fun i => f (↑i) (b ↑i (_ : ↑i ∈ insert j s))) ∘ fun x =>
{ val := ↑x, property := (_ : ↑x ∈ insert j s) }) ≤
c
a : κ j
ha : a ∈ t j
aux : ∀ (j_1 : { x // x ∈ s }), ↑j_1 ≠ j
hj : j ∈ insert j s
⊢ (fun j_1 hj => if hji : j_1 = j then cast (_ : κ j = κ j_1) a else g j_1 (_ : j_1 ∈ s)) j hj ∈ t j
[PROOFSTEP]
simpa
[GOAL]
case insert.refine_1.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ✝ : Type u_6
inst✝² : DistribLattice α
inst✝¹ : BoundedOrder α
inst✝ : DecidableEq ι
κ : ι → Type u_7
t : (i : ι) → Finset (κ i)
f : (i : ι) → κ i → α
i : ι
s : Finset ι
hi : ¬i ∈ s
ih : (inf s fun i => sup (t i) (f i)) = sup (pi s t) fun g => inf (attach s) fun i => f (↑i) (g ↑i (_ : ↑i ∈ s))
c : α
h :
∀ (b : (a : ι) → a ∈ insert i s → κ a),
(∀ (a : ι) (h : a ∈ insert i s), b a h ∈ t a) →
f i
(b ↑{ val := i, property := (_ : i ∈ insert i s) }
(_ : ↑{ val := i, property := (_ : i ∈ insert i s) } ∈ insert i s)) ⊓
inf (attach s)
((fun i_1 => f (↑i_1) (b ↑i_1 (_ : ↑i_1 ∈ insert i s))) ∘ fun x =>
{ val := ↑x, property := (_ : ↑x ∈ insert i s) }) ≤
c
a : κ i
g : (a : ι) → a ∈ s → κ a
ha : a ∈ t i
hg : ∀ (a : ι) (h : a ∈ s), g a h ∈ t a
aux : ∀ (j : { x // x ∈ s }), ↑j ≠ i
j : ι
hj✝ : j ∈ insert i s
hj : j ∈ s
⊢ (fun j hj => if hji : j = i then cast (_ : κ i = κ j) a else g j (_ : j ∈ s)) j hj✝ ∈ t j
[PROOFSTEP]
simpa [ne_of_mem_of_not_mem hj hi] using hg _ _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : BooleanAlgebra α
s✝ s : Finset ι
f : ι → α
a : α
⊢ (sup s fun b => a \ f b) = a \ inf s f
[PROOFSTEP]
refine' Finset.cons_induction_on s _ fun b t _ h => _
[GOAL]
case refine'_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : BooleanAlgebra α
s✝ s : Finset ι
f : ι → α
a : α
⊢ (sup ∅ fun b => a \ f b) = a \ inf ∅ f
[PROOFSTEP]
rw [sup_empty, inf_empty, sdiff_top]
[GOAL]
case refine'_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : BooleanAlgebra α
s✝ s : Finset ι
f : ι → α
a : α
b : ι
t : Finset ι
x✝ : ¬b ∈ t
h : (sup t fun b => a \ f b) = a \ inf t f
⊢ (sup (cons b t x✝) fun b => a \ f b) = a \ inf (cons b t x✝) f
[PROOFSTEP]
rw [sup_cons, inf_cons, h, sdiff_inf]
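-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.sup_sdiff_left` is an assumption): in a Boolean algebra such as `Set ℕ`,
-- taking differences from a fixed set turns a finite sup into an inf.
example (s : Finset ℕ) (f : ℕ → Set ℕ) (a : Set ℕ) :
    (s.sup fun b => a \ f b) = a \ s.inf f := by
  rw [Finset.sup_sdiff_left]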
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : BooleanAlgebra α
s : Finset ι
hs : Finset.Nonempty s
f : ι → α
a : α
⊢ (inf s fun b => a \ f b) = a \ sup s f
[PROOFSTEP]
induction' hs using Finset.Nonempty.cons_induction with b b t _ _ h
[GOAL]
case h₀
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : BooleanAlgebra α
s : Finset ι
f : ι → α
a : α
b : ι
⊢ (inf {b} fun b => a \ f b) = a \ sup {b} f
[PROOFSTEP]
rw [sup_singleton, inf_singleton]
[GOAL]
case h₁
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : BooleanAlgebra α
s : Finset ι
f : ι → α
a : α
b : ι
t : Finset ι
h✝ : ¬b ∈ t
hs✝ : Finset.Nonempty t
h : (inf t fun b => a \ f b) = a \ sup t f
⊢ (inf (cons b t h✝) fun b => a \ f b) = a \ sup (cons b t h✝) f
[PROOFSTEP]
rw [sup_cons, inf_cons, h, sdiff_sup]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : BooleanAlgebra α
s : Finset ι
hs : Finset.Nonempty s
f : ι → α
a : α
⊢ (inf s fun b => f b \ a) = inf s f \ a
[PROOFSTEP]
induction' hs using Finset.Nonempty.cons_induction with b b t _ _ h
[GOAL]
case h₀
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : BooleanAlgebra α
s : Finset ι
f : ι → α
a : α
b : ι
⊢ (inf {b} fun b => f b \ a) = inf {b} f \ a
[PROOFSTEP]
rw [inf_singleton, inf_singleton]
[GOAL]
case h₁
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : BooleanAlgebra α
s : Finset ι
f : ι → α
a : α
b : ι
t : Finset ι
h✝ : ¬b ∈ t
hs✝ : Finset.Nonempty t
h : (inf t fun b => f b \ a) = inf t f \ a
⊢ (inf (cons b t h✝) fun b => f b \ a) = inf (cons b t h✝) f \ a
[PROOFSTEP]
rw [inf_cons, inf_cons, h, inf_sdiff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
⊢ a ≤ sup s f ↔ ∃ b, b ∈ s ∧ a ≤ f b
[PROOFSTEP]
apply Iff.intro
[GOAL]
case mp
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
⊢ a ≤ sup s f → ∃ b, b ∈ s ∧ a ≤ f b
[PROOFSTEP]
induction s using cons_induction with
| empty => exact (absurd · (not_le_of_lt ha))
| @cons c t hc ih =>
rw [sup_cons, le_sup_iff]
exact fun
| Or.inl h => ⟨c, mem_cons.2 (Or.inl rfl), h⟩
| Or.inr h =>
let ⟨b, hb, hle⟩ := ih h;
⟨b, mem_cons.2 (Or.inr hb), hle⟩
[GOAL]
case mp
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
⊢ a ≤ sup s f → ∃ b, b ∈ s ∧ a ≤ f b
[PROOFSTEP]
induction s using cons_induction with
| empty => exact (absurd · (not_le_of_lt ha))
| @cons c t hc ih =>
rw [sup_cons, le_sup_iff]
exact fun
| Or.inl h => ⟨c, mem_cons.2 (Or.inl rfl), h⟩
| Or.inr h =>
let ⟨b, hb, hle⟩ := ih h;
⟨b, mem_cons.2 (Or.inr hb), hle⟩
[GOAL]
case mp.empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
⊢ a ≤ sup ∅ f → ∃ b, b ∈ ∅ ∧ a ≤ f b
[PROOFSTEP]
| empty => exact (absurd · (not_le_of_lt ha))
[GOAL]
case mp.empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
⊢ a ≤ sup ∅ f → ∃ b, b ∈ ∅ ∧ a ≤ f b
[PROOFSTEP]
exact (absurd · (not_le_of_lt ha))
[GOAL]
case mp.cons
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
c : ι
t : Finset ι
hc : ¬c ∈ t
ih : a ≤ sup t f → ∃ b, b ∈ t ∧ a ≤ f b
⊢ a ≤ sup (cons c t hc) f → ∃ b, b ∈ cons c t hc ∧ a ≤ f b
[PROOFSTEP]
| @cons c t hc ih =>
rw [sup_cons, le_sup_iff]
exact fun
| Or.inl h => ⟨c, mem_cons.2 (Or.inl rfl), h⟩
| Or.inr h =>
let ⟨b, hb, hle⟩ := ih h;
⟨b, mem_cons.2 (Or.inr hb), hle⟩
[GOAL]
case mp.cons
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
c : ι
t : Finset ι
hc : ¬c ∈ t
ih : a ≤ sup t f → ∃ b, b ∈ t ∧ a ≤ f b
⊢ a ≤ sup (cons c t hc) f → ∃ b, b ∈ cons c t hc ∧ a ≤ f b
[PROOFSTEP]
rw [sup_cons, le_sup_iff]
[GOAL]
case mp.cons
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
c : ι
t : Finset ι
hc : ¬c ∈ t
ih : a ≤ sup t f → ∃ b, b ∈ t ∧ a ≤ f b
⊢ a ≤ f c ∨ a ≤ sup t f → ∃ b, b ∈ cons c t hc ∧ a ≤ f b
[PROOFSTEP]
exact fun
| Or.inl h => ⟨c, mem_cons.2 (Or.inl rfl), h⟩
| Or.inr h =>
let ⟨b, hb, hle⟩ := ih h;
⟨b, mem_cons.2 (Or.inr hb), hle⟩
[GOAL]
case mpr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
⊢ (∃ b, b ∈ s ∧ a ≤ f b) → a ≤ sup s f
[PROOFSTEP]
exact fun ⟨b, hb, hle⟩ => le_trans hle (le_sup hb)
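-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.le_sup_iff` is an assumption): in a linear order with ⊥, a bound above ⊥
-- lies below the sup iff it lies below some value; here with a = 1 over ℕ.
example (s : Finset ℕ) (f : ℕ → ℕ) (h : 1 ≤ s.sup f) : ∃ b ∈ s, 1 ≤ f b := by
  rwa [Finset.le_sup_iff (by simp : (⊥ : ℕ) < 1)] at h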
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
⊢ a < sup s f ↔ ∃ b, b ∈ s ∧ a < f b
[PROOFSTEP]
apply Iff.intro
[GOAL]
case mp
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
⊢ a < sup s f → ∃ b, b ∈ s ∧ a < f b
[PROOFSTEP]
induction s using cons_induction with
| empty => exact (absurd · not_lt_bot)
| @cons c t hc ih =>
rw [sup_cons, lt_sup_iff]
exact fun
| Or.inl h => ⟨c, mem_cons.2 (Or.inl rfl), h⟩
| Or.inr h =>
let ⟨b, hb, hlt⟩ := ih h;
⟨b, mem_cons.2 (Or.inr hb), hlt⟩
[GOAL]
case mp
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
⊢ a < sup s f → ∃ b, b ∈ s ∧ a < f b
[PROOFSTEP]
induction s using cons_induction with
| empty => exact (absurd · not_lt_bot)
| @cons c t hc ih =>
rw [sup_cons, lt_sup_iff]
exact fun
| Or.inl h => ⟨c, mem_cons.2 (Or.inl rfl), h⟩
| Or.inr h =>
let ⟨b, hb, hlt⟩ := ih h;
⟨b, mem_cons.2 (Or.inr hb), hlt⟩
[GOAL]
case mp.empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
⊢ a < sup ∅ f → ∃ b, b ∈ ∅ ∧ a < f b
[PROOFSTEP]
| empty => exact (absurd · not_lt_bot)
[GOAL]
case mp.empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
⊢ a < sup ∅ f → ∃ b, b ∈ ∅ ∧ a < f b
[PROOFSTEP]
exact (absurd · not_lt_bot)
[GOAL]
case mp.cons
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
c : ι
t : Finset ι
hc : ¬c ∈ t
ih : a < sup t f → ∃ b, b ∈ t ∧ a < f b
⊢ a < sup (cons c t hc) f → ∃ b, b ∈ cons c t hc ∧ a < f b
[PROOFSTEP]
| @cons c t hc ih =>
rw [sup_cons, lt_sup_iff]
exact fun
| Or.inl h => ⟨c, mem_cons.2 (Or.inl rfl), h⟩
| Or.inr h =>
let ⟨b, hb, hlt⟩ := ih h;
⟨b, mem_cons.2 (Or.inr hb), hlt⟩
[GOAL]
case mp.cons
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
c : ι
t : Finset ι
hc : ¬c ∈ t
ih : a < sup t f → ∃ b, b ∈ t ∧ a < f b
⊢ a < sup (cons c t hc) f → ∃ b, b ∈ cons c t hc ∧ a < f b
[PROOFSTEP]
rw [sup_cons, lt_sup_iff]
[GOAL]
case mp.cons
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
c : ι
t : Finset ι
hc : ¬c ∈ t
ih : a < sup t f → ∃ b, b ∈ t ∧ a < f b
⊢ a < f c ∨ a < sup t f → ∃ b, b ∈ cons c t hc ∧ a < f b
[PROOFSTEP]
exact fun
| Or.inl h => ⟨c, mem_cons.2 (Or.inl rfl), h⟩
| Or.inr h =>
let ⟨b, hb, hlt⟩ := ih h;
⟨b, mem_cons.2 (Or.inr hb), hlt⟩
[GOAL]
case mpr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
⊢ (∃ b, b ∈ s ∧ a < f b) → a < sup s f
[PROOFSTEP]
exact fun ⟨b, hb, hlt⟩ => lt_of_lt_of_le hlt (le_sup hb)
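-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.lt_sup_iff` is an assumption): the strict version needs no `⊥ < a`
-- hypothesis, since `a < sup s f` already forces a witness.
example (s : Finset ℕ) (f : ℕ → ℕ) (h : ∃ b ∈ s, 0 < f b) : 0 < s.sup f := by
  rwa [Finset.lt_sup_iff]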
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
inst✝ : OrderBot α
s : Finset ι
f : ι → α
a : α
ha : ⊥ < a
c : ι
t : Finset ι
hc : ¬c ∈ t
⊢ ((∀ (b : ι), b ∈ t → f b < a) → sup t f < a) → (∀ (b : ι), b ∈ cons c t hc → f b < a) → sup (cons c t hc) f < a
[PROOFSTEP]
simpa only [sup_cons, sup_lt_iff, mem_cons, forall_eq_or_imp] using And.imp_right
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
⊢ sup s (WithBot.some ∘ f) ≠ ⊥
[PROOFSTEP]
simpa using H
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
⊢ ↑(sup' s H f) = sup s (WithBot.some ∘ f)
[PROOFSTEP]
rw [sup', WithBot.coe_unbot]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
b : β
hb : ¬b ∈ s
h : Finset.Nonempty (cons b s hb)
⊢ sup' (cons b s hb) h f = f b ⊔ sup' s H f
[PROOFSTEP]
rw [← WithBot.coe_eq_coe]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
b : β
hb : ¬b ∈ s
h : Finset.Nonempty (cons b s hb)
⊢ ↑(sup' (cons b s hb) h f) = ↑(f b ⊔ sup' s H f)
[PROOFSTEP]
simp [WithBot.coe_sup]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
inst✝ : DecidableEq β
b : β
h : Finset.Nonempty (insert b s)
⊢ sup' (insert b s) h f = f b ⊔ sup' s H f
[PROOFSTEP]
rw [← WithBot.coe_eq_coe]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
inst✝ : DecidableEq β
b : β
h : Finset.Nonempty (insert b s)
⊢ ↑(sup' (insert b s) h f) = ↑(f b ⊔ sup' s H f)
[PROOFSTEP]
simp [WithBot.coe_sup]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
a : α
hs : ∀ (b : β), b ∈ s → f b ≤ a
⊢ sup' s H f ≤ a
[PROOFSTEP]
rw [← WithBot.coe_le_coe, coe_sup']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
a : α
hs : ∀ (b : β), b ∈ s → f b ≤ a
⊢ sup s (WithBot.some ∘ f) ≤ ↑a
[PROOFSTEP]
exact Finset.sup_le fun b h => WithBot.coe_le_coe.2 <| hs b h
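-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.sup'_le` is an assumption): `sup'` of a nonempty finset is bounded by any
-- pointwise bound, with no `OrderBot` needed.
example (s : Finset ℕ) (H : s.Nonempty) (f : ℕ → ℕ) (hs : ∀ b ∈ s, f b ≤ 5) :
    s.sup' H f ≤ 5 := by
  apply Finset.sup'_le
  exact hs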
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
b : β
h : b ∈ s
⊢ f b ≤ sup' s (_ : ∃ x, x ∈ s) f
[PROOFSTEP]
rw [← WithBot.coe_le_coe, coe_sup']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
b : β
h : b ∈ s
⊢ ↑(f b) ≤ sup s (WithBot.some ∘ f)
[PROOFSTEP]
exact le_sup (f := fun c => WithBot.some (f c)) h
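-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.le_sup'` is an assumption): a membership proof both witnesses the
-- nonemptiness argument of `sup'` and gives the pointwise lower bound.
example (s : Finset ℕ) (f : ℕ → ℕ) (b : ℕ) (h : b ∈ s) :
    f b ≤ s.sup' ⟨b, h⟩ f := by
  apply Finset.le_sup'
  exact h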
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
a : α
⊢ (sup' s H fun x => a) = a
[PROOFSTEP]
apply le_antisymm
[GOAL]
case a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
a : α
⊢ (sup' s H fun x => a) ≤ a
[PROOFSTEP]
apply sup'_le
[GOAL]
case a.hs
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
a : α
⊢ ∀ (b : β), b ∈ s → a ≤ a
[PROOFSTEP]
intros
[GOAL]
case a.hs
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
a : α
b✝ : β
a✝ : b✝ ∈ s
⊢ a ≤ a
[PROOFSTEP]
exact le_rfl
[GOAL]
case a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
a : α
⊢ a ≤ sup' s H fun x => a
[PROOFSTEP]
apply le_sup' (fun _ => a) H.choose_spec
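-- Usage sketch (not extracted data; assuming `import Mathlib`; the lemma name
-- `Finset.sup'_const` is an assumption): unlike `sup`, which returns `⊥ ⊔ a` in
-- spirit, `sup'` of a constant function is exactly that constant.
example (s : Finset ℕ) (H : s.Nonempty) : (s.sup' H fun _ => 7) = 7 := by
  apply Finset.sup'_const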
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H : Finset.Nonempty s✝
f : β → α
inst✝ : DecidableEq β
s : Finset γ
Hs : Finset.Nonempty s
t : γ → Finset β
Ht : ∀ (b : γ), Finset.Nonempty (t b)
c : α
⊢ sup' (Finset.biUnion s t) (_ : Finset.Nonempty (Finset.biUnion s t)) f ≤ c ↔
(sup' s Hs fun b => sup' (t b) (_ : Finset.Nonempty (t b)) f) ≤ c
[PROOFSTEP]
simp [@forall_swap _ β]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
⊢ g (sup' s H f) = sup' s H (g ∘ f)
[PROOFSTEP]
rw [← WithBot.coe_eq_coe, coe_sup']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
⊢ ↑(g (sup' s H f)) = sup s (WithBot.some ∘ g ∘ f)
[PROOFSTEP]
let g' := WithBot.map g
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
⊢ ↑(g (sup' s H f)) = sup s (WithBot.some ∘ g ∘ f)
[PROOFSTEP]
show g' ↑(s.sup' H f) = s.sup fun a => g' ↑(f a)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
⊢ g' ↑(sup' s H f) = sup s fun a => g' ↑(f a)
[PROOFSTEP]
rw [coe_sup']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
⊢ g' (sup s (WithBot.some ∘ f)) = sup s fun a => g' ↑(f a)
[PROOFSTEP]
refine' comp_sup_eq_sup_comp g' _ rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
⊢ ∀ (x y : WithBot α), g' (x ⊔ y) = g' x ⊔ g' y
[PROOFSTEP]
intro f₁ f₂
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₁ f₂ : WithBot α
⊢ g' (f₁ ⊔ f₂) = g' f₁ ⊔ g' f₂
[PROOFSTEP]
cases f₁ using WithBot.recBotCoe with
| bot =>
rw [bot_sup_eq]
exact bot_sup_eq.symm
| coe f₁ =>
cases f₂ using WithBot.recBotCoe with
| bot => rfl
| coe f₂ => exact congr_arg _ (g_sup f₁ f₂)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₁ f₂ : WithBot α
⊢ g' (f₁ ⊔ f₂) = g' f₁ ⊔ g' f₂
[PROOFSTEP]
cases f₁ using WithBot.recBotCoe with
| bot =>
rw [bot_sup_eq]
exact bot_sup_eq.symm
| coe f₁ =>
cases f₂ using WithBot.recBotCoe with
| bot => rfl
| coe f₂ => exact congr_arg _ (g_sup f₁ f₂)
[GOAL]
case bot
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₂ : WithBot α
⊢ g' (⊥ ⊔ f₂) = g' ⊥ ⊔ g' f₂
[PROOFSTEP]
| bot =>
rw [bot_sup_eq]
exact bot_sup_eq.symm
[GOAL]
case bot
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₂ : WithBot α
⊢ g' (⊥ ⊔ f₂) = g' ⊥ ⊔ g' f₂
[PROOFSTEP]
rw [bot_sup_eq]
[GOAL]
case bot
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₂ : WithBot α
⊢ g' f₂ = g' ⊥ ⊔ g' f₂
[PROOFSTEP]
exact bot_sup_eq.symm
[GOAL]
case coe
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₂ : WithBot α
f₁ : α
⊢ g' (↑f₁ ⊔ f₂) = g' ↑f₁ ⊔ g' f₂
[PROOFSTEP]
| coe f₁ =>
cases f₂ using WithBot.recBotCoe with
| bot => rfl
| coe f₂ => exact congr_arg _ (g_sup f₁ f₂)
[GOAL]
case coe
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₂ : WithBot α
f₁ : α
⊢ g' (↑f₁ ⊔ f₂) = g' ↑f₁ ⊔ g' f₂
[PROOFSTEP]
cases f₂ using WithBot.recBotCoe with
| bot => rfl
| coe f₂ => exact congr_arg _ (g_sup f₁ f₂)
[GOAL]
case coe
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₂ : WithBot α
f₁ : α
⊢ g' (↑f₁ ⊔ f₂) = g' ↑f₁ ⊔ g' f₂
[PROOFSTEP]
cases f₂ using WithBot.recBotCoe with
| bot => rfl
| coe f₂ => exact congr_arg _ (g_sup f₁ f₂)
[GOAL]
case coe.bot
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₁ : α
⊢ g' (↑f₁ ⊔ ⊥) = g' ↑f₁ ⊔ g' ⊥
[PROOFSTEP]
| bot => rfl
[GOAL]
case coe.bot
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₁ : α
⊢ g' (↑f₁ ⊔ ⊥) = g' ↑f₁ ⊔ g' ⊥
[PROOFSTEP]
rfl
[GOAL]
case coe.coe
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₁ f₂ : α
⊢ g' (↑f₁ ⊔ ↑f₂) = g' ↑f₁ ⊔ g' ↑f₂
[PROOFSTEP]
| coe f₂ => exact congr_arg _ (g_sup f₁ f₂)
[GOAL]
case coe.coe
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
s✝ : Finset β
H✝ : Finset.Nonempty s✝
f✝ : β → α
inst✝ : SemilatticeSup γ
s : Finset β
H : Finset.Nonempty s
f : β → α
g : α → γ
g_sup : ∀ (x y : α), g (x ⊔ y) = g x ⊔ g y
g' : WithBot α → WithBot γ := WithBot.map g
f₁ f₂ : α
⊢ g' (↑f₁ ⊔ ↑f₂) = g' ↑f₁ ⊔ g' ↑f₂
[PROOFSTEP]
exact congr_arg _ (g_sup f₁ f₂)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
⊢ p (sup' s H f)
[PROOFSTEP]
show @WithBot.recBotCoe α (fun _ => Prop) True p ↑(s.sup' H f)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
⊢ WithBot.recBotCoe True p ↑(sup' s H f)
[PROOFSTEP]
rw [coe_sup']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
⊢ WithBot.recBotCoe True p (sup s (WithBot.some ∘ f))
[PROOFSTEP]
refine' sup_induction trivial _ hs
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
⊢ ∀ (a₁ : WithBot α),
WithBot.recBotCoe True p a₁ → ∀ (a₂ : WithBot α), WithBot.recBotCoe True p a₂ → WithBot.recBotCoe True p (a₁ ⊔ a₂)
[PROOFSTEP]
rintro (_ | a₁) h₁ a₂ h₂
[GOAL]
case none
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
h₁ : WithBot.recBotCoe True p none
a₂ : WithBot α
h₂ : WithBot.recBotCoe True p a₂
⊢ WithBot.recBotCoe True p (none ⊔ a₂)
[PROOFSTEP]
rw [WithBot.none_eq_bot, bot_sup_eq]
[GOAL]
case none
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
h₁ : WithBot.recBotCoe True p none
a₂ : WithBot α
h₂ : WithBot.recBotCoe True p a₂
⊢ WithBot.recBotCoe True p a₂
[PROOFSTEP]
exact h₂
[GOAL]
case some
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
a₁ : α
h₁ : WithBot.recBotCoe True p (some a₁)
a₂ : WithBot α
h₂ : WithBot.recBotCoe True p a₂
⊢ WithBot.recBotCoe True p (some a₁ ⊔ a₂)
[PROOFSTEP]
cases a₂ using WithBot.recBotCoe with
| bot => exact h₁
| coe a₂ => exact hp a₁ h₁ a₂ h₂
[GOAL]
case some.bot
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
a₁ : α
h₁ : WithBot.recBotCoe True p (some a₁)
h₂ : WithBot.recBotCoe True p ⊥
⊢ WithBot.recBotCoe True p (some a₁ ⊔ ⊥)
[PROOFSTEP]
| bot => exact h₁
[GOAL]
case some.bot
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
a₁ : α
h₁ : WithBot.recBotCoe True p (some a₁)
h₂ : WithBot.recBotCoe True p ⊥
⊢ WithBot.recBotCoe True p (some a₁ ⊔ ⊥)
[PROOFSTEP]
exact h₁
[GOAL]
case some.coe
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
a₁ : α
h₁ : WithBot.recBotCoe True p (some a₁)
a₂ : α
h₂ : WithBot.recBotCoe True p ↑a₂
⊢ WithBot.recBotCoe True p (some a₁ ⊔ ↑a₂)
[PROOFSTEP]
| coe a₂ => exact hp a₁ h₁ a₂ h₂
[GOAL]
case some.coe
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f : β → α
p : α → Prop
hp : ∀ (a₁ : α), p a₁ → ∀ (a₂ : α), p a₂ → p (a₁ ⊔ a₂)
hs : ∀ (b : β), b ∈ s → p (f b)
a₁ : α
h₁ : WithBot.recBotCoe True p (some a₁)
a₂ : α
h₂ : WithBot.recBotCoe True p ↑a₂
⊢ WithBot.recBotCoe True p (some a₁ ⊔ ↑a₂)
[PROOFSTEP]
exact hp a₁ h₁ a₂ h₂
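Editor's note: the induction just traced, which reduces `p (s.sup' H f)` to `sup_induction` over `WithBot α` and then discharges the `none`/`some` cases, proves what mathlib calls `Finset.sup'_induction` (the name is an assumption; the statement below is read directly off the initial goal state):

import Mathlib
open Finset

example {α β : Type*} [SemilatticeSup α] (s : Finset β) (H : s.Nonempty) (f : β → α)
    {p : α → Prop} (hp : ∀ a₁, p a₁ → ∀ a₂, p a₂ → p (a₁ ⊔ a₂))
    (hs : ∀ b ∈ s, p (f b)) : p (s.sup' H f) := by
  sorry -- tactic proof as traced above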
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s : Finset β
H : Finset.Nonempty s
f✝ : β → α
t : Finset β
f g : β → α
h₁ : s = t
h₂ : ∀ (x : β), x ∈ s → f x = g x
⊢ sup' s H f = sup' t (_ : Finset.Nonempty t) g
[PROOFSTEP]
subst s
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
f✝ : β → α
t : Finset β
f g : β → α
H : Finset.Nonempty t
h₂ : ∀ (x : β), x ∈ t → f x = g x
⊢ sup' t H f = sup' t (_ : Finset.Nonempty t) g
[PROOFSTEP]
refine' eq_of_forall_ge_iff fun c => _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
f✝ : β → α
t : Finset β
f g : β → α
H : Finset.Nonempty t
h₂ : ∀ (x : β), x ∈ t → f x = g x
c : α
⊢ sup' t H f ≤ c ↔ sup' t (_ : Finset.Nonempty t) g ≤ c
[PROOFSTEP]
simp (config := { contextual := true }) only [sup'_le_iff, h₂]
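Editor's note: the short `subst`/`eq_of_forall_ge_iff`/`simp` proof above is a congruence lemma for `sup'`, likely mathlib's `Finset.sup'_congr` (name assumed). A reconstructed sketch; the transported nonemptiness witness `h₁ ▸ H` stands in for the underscore printed in the trace:

import Mathlib
open Finset

example {α β : Type*} [SemilatticeSup α] {s t : Finset β} (H : s.Nonempty)
    {f g : β → α} (h₁ : s = t) (h₂ : ∀ x ∈ s, f x = g x) :
    s.sup' H f = t.sup' (h₁ ▸ H) g := by
  sorry -- tactic proof as traced above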
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s✝ : Finset β
H : Finset.Nonempty s✝
f✝ : β → α
s : Finset γ
f : γ ↪ β
g : β → α
hs : Finset.Nonempty (map f s)
hs' : optParam (Finset.Nonempty s) (_ : Finset.Nonempty s)
⊢ sup' (map f s) hs g = sup' s hs' (g ∘ ↑f)
[PROOFSTEP]
rw [← WithBot.coe_eq_coe, coe_sup', sup_map, coe_sup']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeSup α
s✝ : Finset β
H : Finset.Nonempty s✝
f✝ : β → α
s : Finset γ
f : γ ↪ β
g : β → α
hs : Finset.Nonempty (map f s)
hs' : optParam (Finset.Nonempty s) (_ : Finset.Nonempty s)
⊢ sup s ((WithBot.some ∘ g) ∘ ↑f) = sup s (WithBot.some ∘ g ∘ ↑f)
[PROOFSTEP]
rfl
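Editor's note: the two steps above (a rewrite through `WithBot.coe_eq_coe`, `coe_sup'`, `sup_map`, then `rfl`) prove that `sup'` commutes with `Finset.map`, likely mathlib's `Finset.sup'_map` (name assumed). A reconstructed sketch; in the trace `hs'` is an `optParam` derived from `hs`, written here as an explicit hypothesis:

import Mathlib
open Finset

example {α β γ : Type*} [SemilatticeSup α] (s : Finset γ) (f : γ ↪ β) (g : β → α)
    (hs : (s.map f).Nonempty) (hs' : s.Nonempty) :
    (s.map f).sup' hs g = s.sup' hs' (g ∘ f) := by
  sorry -- tactic proof as traced above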
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : SemilatticeInf α
s : Finset β
H : Finset.Nonempty s
f : β → α
⊢ inf s (WithTop.some ∘ f) ≠ ⊤
[PROOFSTEP]
simpa using H
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : SemilatticeSup α
inst✝ : OrderBot α
s : Finset β
h : Finset.Nonempty s
f : β → α
⊢ ↑(sup s f) = sup s (WithBot.some ∘ f)
[PROOFSTEP]
simp only [← sup'_eq_sup h, coe_sup' h]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset ι
H : Finset.Nonempty s
f : ι → α
a : α
⊢ a ≤ sup' s H f ↔ ∃ b, b ∈ s ∧ a ≤ f b
[PROOFSTEP]
rw [← WithBot.coe_le_coe, coe_sup', Finset.le_sup_iff (WithBot.bot_lt_coe a)]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset ι
H : Finset.Nonempty s
f : ι → α
a : α
⊢ (∃ b, b ∈ s ∧ ↑a ≤ (WithBot.some ∘ f) b) ↔ ∃ b, b ∈ s ∧ a ≤ f b
[PROOFSTEP]
exact exists_congr (fun _ => and_congr_right' WithBot.coe_le_coe)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset ι
H : Finset.Nonempty s
f : ι → α
a : α
⊢ a < sup' s H f ↔ ∃ b, b ∈ s ∧ a < f b
[PROOFSTEP]
rw [← WithBot.coe_lt_coe, coe_sup', Finset.lt_sup_iff]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset ι
H : Finset.Nonempty s
f : ι → α
a : α
⊢ (∃ b, b ∈ s ∧ ↑a < (WithBot.some ∘ f) b) ↔ ∃ b, b ∈ s ∧ a < f b
[PROOFSTEP]
exact exists_congr (fun _ => and_congr_right' WithBot.coe_lt_coe)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset ι
H : Finset.Nonempty s
f : ι → α
a : α
⊢ sup' s H f < a ↔ ∀ (i : ι), i ∈ s → f i < a
[PROOFSTEP]
rw [← WithBot.coe_lt_coe, coe_sup', Finset.sup_lt_iff (WithBot.bot_lt_coe a)]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset ι
H : Finset.Nonempty s
f : ι → α
a : α
⊢ (∀ (b : ι), b ∈ s → (WithBot.some ∘ f) b < ↑a) ↔ ∀ (i : ι), i ∈ s → f i < a
[PROOFSTEP]
exact ball_congr (fun _ _ => WithBot.coe_lt_coe)
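Editor's note: the three proofs just traced establish a matching trio of comparison lemmas for `sup'` over a linear order, each by transporting through `WithBot`; they are likely `Finset.le_sup'_iff`, `Finset.lt_sup'_iff`, and `Finset.sup'_lt_iff` in mathlib (names assumed). Statements read off the respective initial goals:

import Mathlib
open Finset

example {α ι : Type*} [LinearOrder α] {s : Finset ι} (H : s.Nonempty) (f : ι → α)
    (a : α) : a ≤ s.sup' H f ↔ ∃ b ∈ s, a ≤ f b := by sorry
example {α ι : Type*} [LinearOrder α] {s : Finset ι} (H : s.Nonempty) (f : ι → α)
    (a : α) : a < s.sup' H f ↔ ∃ b ∈ s, a < f b := by sorry
example {α ι : Type*} [LinearOrder α] {s : Finset ι} (H : s.Nonempty) (f : ι → α)
    (a : α) : s.sup' H f < a ↔ ∀ i ∈ s, f i < a := by sorry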
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset ι
H : Finset.Nonempty s
f✝ : ι → α
a : α
f : ι → α
⊢ ∃ i, i ∈ s ∧ sup' s H f = f i
[PROOFSTEP]
refine' H.cons_induction (fun c => _) fun c s hc hs ih => _
[GOAL]
case refine'_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset ι
H : Finset.Nonempty s
f✝ : ι → α
a : α
f : ι → α
c : ι
⊢ ∃ i, i ∈ {c} ∧ sup' {c} (_ : Finset.Nonempty {c}) f = f i
[PROOFSTEP]
exact ⟨c, mem_singleton_self c, rfl⟩
[GOAL]
case refine'_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset ι
H : Finset.Nonempty s✝
f✝ : ι → α
a : α
f : ι → α
c : ι
s : Finset ι
hc : ¬c ∈ s
hs : Finset.Nonempty s
ih : ∃ i, i ∈ s ∧ sup' s hs f = f i
⊢ ∃ i, i ∈ cons c s hc ∧ sup' (cons c s hc) (_ : Finset.Nonempty (cons c s hc)) f = f i
[PROOFSTEP]
rcases ih with ⟨b, hb, h'⟩
[GOAL]
case refine'_2.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset ι
H : Finset.Nonempty s✝
f✝ : ι → α
a : α
f : ι → α
c : ι
s : Finset ι
hc : ¬c ∈ s
hs : Finset.Nonempty s
b : ι
hb : b ∈ s
h' : sup' s hs f = f b
⊢ ∃ i, i ∈ cons c s hc ∧ sup' (cons c s hc) (_ : Finset.Nonempty (cons c s hc)) f = f i
[PROOFSTEP]
rw [sup'_cons hs, h']
[GOAL]
case refine'_2.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset ι
H : Finset.Nonempty s✝
f✝ : ι → α
a : α
f : ι → α
c : ι
s : Finset ι
hc : ¬c ∈ s
hs : Finset.Nonempty s
b : ι
hb : b ∈ s
h' : sup' s hs f = f b
⊢ ∃ i, i ∈ cons c s hc ∧ f c ⊔ f b = f i
[PROOFSTEP]
cases le_total (f b) (f c) with
| inl h => exact ⟨c, mem_cons.2 (Or.inl rfl), sup_eq_left.2 h⟩
| inr h => exact ⟨b, mem_cons.2 (Or.inr hb), sup_eq_right.2 h⟩
[GOAL]
case refine'_2.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset ι
H : Finset.Nonempty s✝
f✝ : ι → α
a : α
f : ι → α
c : ι
s : Finset ι
hc : ¬c ∈ s
hs : Finset.Nonempty s
b : ι
hb : b ∈ s
h' : sup' s hs f = f b
x✝ : f b ≤ f c ∨ f c ≤ f b
⊢ ∃ i, i ∈ cons c s hc ∧ f c ⊔ f b = f i
[PROOFSTEP]
cases le_total (f b) (f c) with
| inl h => exact ⟨c, mem_cons.2 (Or.inl rfl), sup_eq_left.2 h⟩
| inr h => exact ⟨b, mem_cons.2 (Or.inr hb), sup_eq_right.2 h⟩
[GOAL]
case refine'_2.intro.intro.inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset ι
H : Finset.Nonempty s✝
f✝ : ι → α
a : α
f : ι → α
c : ι
s : Finset ι
hc : ¬c ∈ s
hs : Finset.Nonempty s
b : ι
hb : b ∈ s
h' : sup' s hs f = f b
h : f b ≤ f c
⊢ ∃ i, i ∈ cons c s hc ∧ f c ⊔ f b = f i
[PROOFSTEP]
| inl h => exact ⟨c, mem_cons.2 (Or.inl rfl), sup_eq_left.2 h⟩
[GOAL]
case refine'_2.intro.intro.inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset ι
H : Finset.Nonempty s✝
f✝ : ι → α
a : α
f : ι → α
c : ι
s : Finset ι
hc : ¬c ∈ s
hs : Finset.Nonempty s
b : ι
hb : b ∈ s
h' : sup' s hs f = f b
h : f b ≤ f c
⊢ ∃ i, i ∈ cons c s hc ∧ f c ⊔ f b = f i
[PROOFSTEP]
exact ⟨c, mem_cons.2 (Or.inl rfl), sup_eq_left.2 h⟩
[GOAL]
case refine'_2.intro.intro.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset ι
H : Finset.Nonempty s✝
f✝ : ι → α
a : α
f : ι → α
c : ι
s : Finset ι
hc : ¬c ∈ s
hs : Finset.Nonempty s
b : ι
hb : b ∈ s
h' : sup' s hs f = f b
h : f c ≤ f b
⊢ ∃ i, i ∈ cons c s hc ∧ f c ⊔ f b = f i
[PROOFSTEP]
| inr h => exact ⟨b, mem_cons.2 (Or.inr hb), sup_eq_right.2 h⟩
[GOAL]
case refine'_2.intro.intro.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset ι
H : Finset.Nonempty s✝
f✝ : ι → α
a : α
f : ι → α
c : ι
s : Finset ι
hc : ¬c ∈ s
hs : Finset.Nonempty s
b : ι
hb : b ∈ s
h' : sup' s hs f = f b
h : f c ≤ f b
⊢ ∃ i, i ∈ cons c s hc ∧ f c ⊔ f b = f i
[PROOFSTEP]
exact ⟨b, mem_cons.2 (Or.inr hb), sup_eq_right.2 h⟩
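Editor's note: the cons-induction above, with the `le_total (f b) (f c)` case split at the end, shows that `sup'` over a linear order is attained at some element, likely mathlib's `Finset.exists_mem_eq_sup'` (name assumed). Reconstructed sketch:

import Mathlib
open Finset

example {α ι : Type*} [LinearOrder α] (s : Finset ι) (H : s.Nonempty) (f : ι → α) :
    ∃ i ∈ s, s.sup' H f = f i := by
  sorry -- cons-induction on `s`, then `le_total (f b) (f c)`, as traced above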
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
a : α
⊢ Finset.max {a} = ↑a
[PROOFSTEP]
rw [← insert_emptyc_eq]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
a : α
⊢ Finset.max (insert a ∅) = ↑a
[PROOFSTEP]
exact max_insert
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
a : α
h : a ∈ s
⊢ ∃ b, Finset.max s = ↑b
[PROOFSTEP]
obtain ⟨b, h, _⟩ := le_sup (α := WithBot α) h _ rfl
[GOAL]
case intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
a : α
h✝ : a ∈ s
b : α
h : b ∈ sup s some
right✝ : a ≤ b
⊢ ∃ b, Finset.max s = ↑b
[PROOFSTEP]
exact ⟨b, h⟩
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
h : Finset.max s = ⊥
H : Finset.Nonempty s
⊢ s = ∅
[PROOFSTEP]
obtain ⟨a, ha⟩ := max_of_nonempty H
[GOAL]
case intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
h : Finset.max s = ⊥
H : Finset.Nonempty s
a : α
ha : Finset.max s = ↑a
⊢ s = ∅
[PROOFSTEP]
rw [h] at ha
[GOAL]
case intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
h : Finset.max s = ⊥
H : Finset.Nonempty s
a : α
ha : ⊥ = ↑a
⊢ s = ∅
[PROOFSTEP]
cases ha
[GOAL]
[PROOFSTEP]
done
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
⊢ ∀ {a : α}, Finset.max s = ↑a → a ∈ s
[PROOFSTEP]
induction' s using Finset.induction_on with b s _ ih
[GOAL]
case empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
⊢ ∀ {a : α}, Finset.max ∅ = ↑a → a ∈ ∅
[PROOFSTEP]
intro _ H
[GOAL]
case empty
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
a✝ : α
H : Finset.max ∅ = ↑a✝
⊢ a✝ ∈ ∅
[PROOFSTEP]
cases H
[GOAL]
case insert
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
⊢ ∀ {a : α}, Finset.max (insert b s) = ↑a → a ∈ insert b s
[PROOFSTEP]
intro a h
[GOAL]
case insert
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
a : α
h : Finset.max (insert b s) = ↑a
⊢ a ∈ insert b s
[PROOFSTEP]
by_cases p : b = a
[GOAL]
case pos
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
a : α
h : Finset.max (insert b s) = ↑a
p : b = a
⊢ a ∈ insert b s
[PROOFSTEP]
induction p
[GOAL]
case pos.refl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
a : α
h : Finset.max (insert b s) = ↑b
⊢ b ∈ insert b s
[PROOFSTEP]
exact mem_insert_self b s
[GOAL]
case neg
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
a : α
h : Finset.max (insert b s) = ↑a
p : ¬b = a
⊢ a ∈ insert b s
[PROOFSTEP]
cases' max_choice (↑b) s.max with q q
[GOAL]
case neg.inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
a : α
h : Finset.max (insert b s) = ↑a
p : ¬b = a
q : max (↑b) (Finset.max s) = ↑b
⊢ a ∈ insert b s
[PROOFSTEP]
rw [max_insert, q] at h
[GOAL]
case neg.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
a : α
h : Finset.max (insert b s) = ↑a
p : ¬b = a
q : max (↑b) (Finset.max s) = Finset.max s
⊢ a ∈ insert b s
[PROOFSTEP]
rw [max_insert, q] at h
[GOAL]
case neg.inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
a : α
h : ↑b = ↑a
p : ¬b = a
q : max (↑b) (Finset.max s) = ↑b
⊢ a ∈ insert b s
[PROOFSTEP]
cases h
[GOAL]
case neg.inl.refl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
q : max (↑b) (Finset.max s) = ↑b
p : ¬b = b
⊢ b ∈ insert b s
[PROOFSTEP]
cases p rfl
[GOAL]
case neg.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
b : α
s : Finset α
a✝ : ¬b ∈ s
ih : ∀ {a : α}, Finset.max s = ↑a → a ∈ s
a : α
h : Finset.max s = ↑a
p : ¬b = a
q : max (↑b) (Finset.max s) = Finset.max s
⊢ a ∈ insert b s
[PROOFSTEP]
exact mem_insert_of_mem (ih h)
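Editor's note: the insert-induction above, with the case split on `max_choice (↑b) s.max`, proves that a finset contains any value its `max` takes, likely mathlib's `Finset.mem_of_max` (name assumed). Reconstructed sketch:

import Mathlib
open Finset

example {α : Type*} [LinearOrder α] {s : Finset α} {a : α}
    (h : s.max = ↑a) : a ∈ s := by
  sorry -- induction on `s` with a case split on `max_choice`, as traced above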
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
a : α
⊢ Finset.min {a} = ↑a
[PROOFSTEP]
rw [← insert_emptyc_eq]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
a : α
⊢ Finset.min (insert a ∅) = ↑a
[PROOFSTEP]
exact min_insert
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
a : α
h : a ∈ s
⊢ ∃ b, Finset.min s = ↑b
[PROOFSTEP]
obtain ⟨b, h, _⟩ := inf_le (α := WithTop α) h _ rfl
[GOAL]
case intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
a : α
h✝ : a ∈ s
b : α
h : b ∈ inf s some
right✝ : b ≤ a
⊢ ∃ b, Finset.min s = ↑b
[PROOFSTEP]
exact ⟨b, h⟩
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
h : Finset.min s = ⊤
H : Finset.Nonempty s
⊢ s = ∅
[PROOFSTEP]
let ⟨a, ha⟩ := min_of_nonempty H
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
h : Finset.min s = ⊤
H : Finset.Nonempty s
a : α
ha : Finset.min s = ↑a
⊢ s = ∅
[PROOFSTEP]
rw [h] at ha
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
h : Finset.min s = ⊤
H : Finset.Nonempty s
a : α
ha : ⊤ = ↑a
⊢ s = ∅
[PROOFSTEP]
cases ha
[GOAL]
[PROOFSTEP]
done
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
H : Finset.Nonempty s
x : α
⊢ Finset.min s = ↑(min' s H)
[PROOFSTEP]
simp only [Finset.min, min', id_eq, coe_inf']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
H : Finset.Nonempty s
x : α
⊢ inf s WithTop.some = inf s (WithTop.some ∘ fun x => x)
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
H : Finset.Nonempty s
x a : α
⊢ min' {a} (_ : Finset.Nonempty {a}) = a
[PROOFSTEP]
simp [min']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
H : Finset.Nonempty s
x : α
⊢ Finset.max s = ↑(max' s H)
[PROOFSTEP]
simp only [max', Finset.max, id_eq, coe_sup']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
H : Finset.Nonempty s
x : α
⊢ sup s WithBot.some = sup s (WithBot.some ∘ fun x => x)
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
H : Finset.Nonempty s
x a : α
⊢ max' {a} (_ : Finset.Nonempty {a}) = a
[PROOFSTEP]
simp [max']
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
H : Finset.Nonempty s
x : α
h₂ : 1 < card s
⊢ min' s (_ : Finset.Nonempty s) < max' s (_ : Finset.Nonempty s)
[PROOFSTEP]
rcases one_lt_card.1 h₂ with ⟨a, ha, b, hb, hab⟩
[GOAL]
case intro.intro.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset α
H : Finset.Nonempty s
x : α
h₂ : 1 < card s
a : α
ha : a ∈ s
b : α
hb : b ∈ s
hab : a ≠ b
⊢ min' s (_ : Finset.Nonempty s) < max' s (_ : Finset.Nonempty s)
[PROOFSTEP]
exact s.min'_lt_max' ha hb hab
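Editor's note: the short proof above derives `min' < max'` from `1 < card` via `one_lt_card` and `min'_lt_max'`, likely mathlib's `Finset.min'_lt_max'_of_card` (name assumed). In the trace the nonemptiness witnesses are underscores derived from `h₂`; an explicit `H` is used below for readability:

import Mathlib
open Finset

example {α : Type*} [LinearOrder α] (s : Finset α) (H : s.Nonempty)
    (h₂ : 1 < s.card) : s.min' H < s.max' H := by
  sorry -- via `one_lt_card` and `min'_lt_max'`, as traced above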
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
⊢ WithTop.map (↑ofDual) (Finset.min s) = Finset.max (image (↑ofDual) s)
[PROOFSTEP]
rw [max_eq_sup_withBot, sup_image]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
⊢ WithTop.map (↑ofDual) (Finset.min s) = sup s (WithBot.some ∘ ↑ofDual)
[PROOFSTEP]
exact congr_fun Option.map_id _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
⊢ WithBot.map (↑ofDual) (Finset.max s) = Finset.min (image (↑ofDual) s)
[PROOFSTEP]
rw [min_eq_inf_withTop, inf_image]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
⊢ WithBot.map (↑ofDual) (Finset.max s) = inf s (WithTop.some ∘ ↑ofDual)
[PROOFSTEP]
exact congr_fun Option.map_id _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
⊢ WithTop.map (↑toDual) (Finset.min s) = Finset.max (image (↑toDual) s)
[PROOFSTEP]
rw [max_eq_sup_withBot, sup_image]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
⊢ WithTop.map (↑toDual) (Finset.min s) = sup s (WithBot.some ∘ ↑toDual)
[PROOFSTEP]
exact congr_fun Option.map_id _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
⊢ WithBot.map (↑toDual) (Finset.max s) = Finset.min (image (↑toDual) s)
[PROOFSTEP]
rw [min_eq_inf_withTop, inf_image]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
⊢ WithBot.map (↑toDual) (Finset.max s) = inf s (WithTop.some ∘ ↑toDual)
[PROOFSTEP]
exact congr_fun Option.map_id _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
hs : Finset.Nonempty s
⊢ ↑ofDual (min' s hs) = max' (image (↑ofDual) s) (_ : Finset.Nonempty (image (↑ofDual) s))
[PROOFSTEP]
rw [← WithBot.coe_eq_coe]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
hs : Finset.Nonempty s
⊢ ↑(↑ofDual (min' s hs)) = ↑(max' (image (↑ofDual) s) (_ : Finset.Nonempty (image (↑ofDual) s)))
[PROOFSTEP]
simp only [min'_eq_inf', id_eq, ofDual_inf', Function.comp_apply, coe_sup', max'_eq_sup', sup_image]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
hs : Finset.Nonempty s
⊢ sup s (WithBot.some ∘ fun x => ↑ofDual x) = sup s ((WithBot.some ∘ fun x => x) ∘ ↑ofDual)
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
hs : Finset.Nonempty s
⊢ ↑ofDual (max' s hs) = min' (image (↑ofDual) s) (_ : Finset.Nonempty (image (↑ofDual) s))
[PROOFSTEP]
rw [← WithTop.coe_eq_coe]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
hs : Finset.Nonempty s
⊢ ↑(↑ofDual (max' s hs)) = ↑(min' (image (↑ofDual) s) (_ : Finset.Nonempty (image (↑ofDual) s)))
[PROOFSTEP]
simp only [max'_eq_sup', id_eq, ofDual_sup', Function.comp_apply, coe_inf', min'_eq_inf', inf_image]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset αᵒᵈ
hs : Finset.Nonempty s
⊢ inf s (WithTop.some ∘ fun x => ↑ofDual x) = inf s ((WithTop.some ∘ fun x => x) ∘ ↑ofDual)
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
hs : Finset.Nonempty s
⊢ ↑toDual (min' s hs) = max' (image (↑toDual) s) (_ : Finset.Nonempty (image (↑toDual) s))
[PROOFSTEP]
rw [← WithBot.coe_eq_coe]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
hs : Finset.Nonempty s
⊢ ↑(↑toDual (min' s hs)) = ↑(max' (image (↑toDual) s) (_ : Finset.Nonempty (image (↑toDual) s)))
[PROOFSTEP]
simp only [min'_eq_inf', id_eq, toDual_inf', Function.comp_apply, coe_sup', max'_eq_sup', sup_image]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
hs : Finset.Nonempty s
⊢ sup s (WithBot.some ∘ fun x => ↑toDual x) = sup s ((WithBot.some ∘ fun x => x) ∘ ↑toDual)
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
hs : Finset.Nonempty s
⊢ ↑toDual (max' s hs) = min' (image (↑toDual) s) (_ : Finset.Nonempty (image (↑toDual) s))
[PROOFSTEP]
rw [← WithTop.coe_eq_coe]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
hs : Finset.Nonempty s
⊢ ↑(↑toDual (max' s hs)) = ↑(min' (image (↑toDual) s) (_ : Finset.Nonempty (image (↑toDual) s)))
[PROOFSTEP]
simp only [max'_eq_sup', id_eq, toDual_sup', Function.comp_apply, coe_inf', min'_eq_inf', inf_image]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
hs : Finset.Nonempty s
⊢ inf s (WithTop.some ∘ fun x => ↑toDual x) = inf s ((WithTop.some ∘ fun x => x) ∘ ↑toDual)
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H✝ : Finset.Nonempty s✝
x a : α
s : Finset α
H : Finset.Nonempty s
⊢ IsGreatest (↑(insert a s)) (max (max' s H) a)
[PROOFSTEP]
rw [coe_insert, max_comm]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H✝ : Finset.Nonempty s✝
x a : α
s : Finset α
H : Finset.Nonempty s
⊢ IsGreatest (insert a ↑s) (max a (max' s H))
[PROOFSTEP]
exact (isGreatest_max' _ _).insert _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H✝ : Finset.Nonempty s✝
x a : α
s : Finset α
H : Finset.Nonempty s
⊢ IsLeast (↑(insert a s)) (min (min' s H) a)
[PROOFSTEP]
rw [coe_insert, min_comm]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H✝ : Finset.Nonempty s✝
x a : α
s : Finset α
H : Finset.Nonempty s
⊢ IsLeast (insert a ↑s) (min a (min' s H))
[PROOFSTEP]
exact (isLeast_min' _ _).insert _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
inst✝ : LinearOrder β
f : α → β
hf : Monotone f
s : Finset α
h : Finset.Nonempty (image f s)
⊢ max' (image f s) h = f (max' s (_ : Finset.Nonempty s))
[PROOFSTEP]
refine' le_antisymm (max'_le _ _ _ fun y hy => _) (le_max' _ _ (mem_image.mpr ⟨_, max'_mem _ _, rfl⟩))
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
inst✝ : LinearOrder β
f : α → β
hf : Monotone f
s : Finset α
h : Finset.Nonempty (image f s)
y : β
hy : y ∈ image f s
⊢ y ≤ f (max' s (_ : Finset.Nonempty s))
[PROOFSTEP]
obtain ⟨x, hx, rfl⟩ := mem_image.mp hy
[GOAL]
case intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ : α
inst✝ : LinearOrder β
f : α → β
hf : Monotone f
s : Finset α
h : Finset.Nonempty (image f s)
x : α
hx : x ∈ s
hy : f x ∈ image f s
⊢ f x ≤ f (max' s (_ : Finset.Nonempty s))
[PROOFSTEP]
exact hf (le_max' _ _ hx)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
inst✝ : LinearOrder β
f : α → β
hf : Monotone f
s : Finset α
h : Finset.Nonempty (image f s)
⊢ min' (image f s) h = f (min' s (_ : Finset.Nonempty s))
[PROOFSTEP]
refine' le_antisymm (min'_le _ _ (mem_image.mpr ⟨_, min'_mem _ _, rfl⟩)) (le_min' _ _ _ fun y hy => _)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
inst✝ : LinearOrder β
f : α → β
hf : Monotone f
s : Finset α
h : Finset.Nonempty (image f s)
y : β
hy : y ∈ image f s
⊢ f (min' s (_ : Finset.Nonempty s)) ≤ y
[PROOFSTEP]
obtain ⟨x, hx, rfl⟩ := mem_image.mp hy
[GOAL]
case intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ : α
inst✝ : LinearOrder β
f : α → β
hf : Monotone f
s : Finset α
h : Finset.Nonempty (image f s)
x : α
hx : x ∈ s
hy : f x ∈ image f s
⊢ f (min' s (_ : Finset.Nonempty s)) ≤ f x
[PROOFSTEP]
exact hf (min'_le _ _ hx)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
⊢ Finset.max (erase s x) ≠ ↑x
[PROOFSTEP]
by_cases s0 : (s.erase x).Nonempty
[GOAL]
case pos
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
s0 : Finset.Nonempty (erase s x)
⊢ Finset.max (erase s x) ≠ ↑x
[PROOFSTEP]
refine' ne_of_eq_of_ne (coe_max' s0).symm _
[GOAL]
case pos
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
s0 : Finset.Nonempty (erase s x)
⊢ ↑(max' (erase s x) s0) ≠ ↑x
[PROOFSTEP]
exact WithBot.coe_eq_coe.not.mpr (max'_erase_ne_self _)
[GOAL]
case neg
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
s0 : ¬Finset.Nonempty (erase s x)
⊢ Finset.max (erase s x) ≠ ↑x
[PROOFSTEP]
rw [not_nonempty_iff_eq_empty.mp s0, max_empty]
[GOAL]
case neg
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
s0 : ¬Finset.Nonempty (erase s x)
⊢ ⊥ ≠ ↑x
[PROOFSTEP]
exact WithBot.bot_ne_coe
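Editor's note: the case split above, on whether `s.erase x` is nonempty, shows that erasing `x` cannot leave `x` as the maximum, likely mathlib's `Finset.max_erase_ne_self` (name assumed). Reconstructed sketch:

import Mathlib
open Finset

example {α : Type*} [LinearOrder α] {x : α} {s : Finset α} :
    (s.erase x).max ≠ (↑x : WithBot α) := by
  sorry -- case split on `(s.erase x).Nonempty`, as traced above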
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
⊢ Finset.min (erase s x) ≠ ↑x
[PROOFSTEP]
convert @max_erase_ne_self αᵒᵈ _ (toDual x) (s.map toDual.toEmbedding) using 1
[GOAL]
case h.e'_2.h
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
e_1✝ : WithTop α = WithBot αᵒᵈ
⊢ Finset.min (erase s x) = Finset.max (erase (map (Equiv.toEmbedding toDual) s) (↑toDual x))
[PROOFSTEP]
apply congr_arg
[GOAL]
case h.e'_2.h.h
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
e_1✝ : WithTop α = WithBot αᵒᵈ
⊢ erase s x = erase (map (Equiv.toEmbedding toDual) s) (↑toDual x)
[PROOFSTEP]
congr!
[GOAL]
case h.e'_2.h.h.h.e'_3.h
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
e_1✝¹ : WithTop α = WithBot αᵒᵈ
e_1✝ : α = αᵒᵈ
⊢ s = map (Equiv.toEmbedding toDual) s
[PROOFSTEP]
ext
[GOAL]
case h.e'_2.h.h.h.e'_3.h.a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
e_1✝¹ : WithTop α = WithBot αᵒᵈ
e_1✝ : α = αᵒᵈ
a✝ : α
⊢ a✝ ∈ s ↔ a✝ ∈ map (Equiv.toEmbedding toDual) s
[PROOFSTEP]
simp only [mem_map_equiv]
[GOAL]
case h.e'_2.h.h.h.e'_3.h.a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s : Finset α
e_1✝¹ : WithTop α = WithBot αᵒᵈ
e_1✝ : α = αᵒᵈ
a✝ : α
⊢ a✝ ∈ s ↔ ↑toDual.symm a✝ ∈ s
[PROOFSTEP]
exact Iff.rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ x : α
s : Finset α
h : ∃ y, y ∈ s ∧ x < y
y : α
hy : y ∈ s ∧ x < y
⊢ y ∈ s ∧ (fun x x_1 => x < x_1) x y
[PROOFSTEP]
simpa
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ x : α
s : Finset α
h : ∃ y, y ∈ s ∧ x < y
Hne : Finset.Nonempty (filter ((fun x x_1 => x < x_1) x) s)
aux :
min' (filter ((fun x x_1 => x < x_1) x) s) Hne ∈ s ∧
(fun x x_1 => x < x_1) x (min' (filter ((fun x x_1 => x < x_1) x) s) Hne)
⊢ x < min' (filter ((fun x x_1 => x < x_1) x) s) Hne
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ x : α
s : Finset α
h : ∃ y, y ∈ s ∧ x < y
Hne : Finset.Nonempty (filter ((fun x x_1 => x < x_1) x) s)
aux :
min' (filter ((fun x x_1 => x < x_1) x) s) Hne ∈ s ∧
(fun x x_1 => x < x_1) x (min' (filter ((fun x x_1 => x < x_1) x) s) Hne)
z : α
hzs : z ∈ s
hz : x < z
⊢ (fun x x_1 => x < x_1) x z
[PROOFSTEP]
simpa
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → (∀ (z : α), z ∈ s → ¬z ∈ Set.Ioo x y) → ∃ z, z ∈ t ∧ x < z ∧ z < y
⊢ card s ≤ card t + 1
[PROOFSTEP]
replace h : ∀ (x) (_ : x ∈ s) (y) (_ : y ∈ s), x < y → ∃ z ∈ t, x < z ∧ z < y
[GOAL]
case h
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → (∀ (z : α), z ∈ s → ¬z ∈ Set.Ioo x y) → ∃ z, z ∈ t ∧ x < z ∧ z < y
⊢ ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → ∃ z, z ∈ t ∧ x < z ∧ z < y
[PROOFSTEP]
intro x hx y hy hxy
[GOAL]
case h
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → (∀ (z : α), z ∈ s → ¬z ∈ Set.Ioo x y) → ∃ z, z ∈ t ∧ x < z ∧ z < y
x : α
hx : x ∈ s
y : α
hy : y ∈ s
hxy : x < y
⊢ ∃ z, z ∈ t ∧ x < z ∧ z < y
[PROOFSTEP]
rcases exists_next_right ⟨y, hy, hxy⟩ with ⟨a, has, hxa, ha⟩
[GOAL]
case h.intro.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → (∀ (z : α), z ∈ s → ¬z ∈ Set.Ioo x y) → ∃ z, z ∈ t ∧ x < z ∧ z < y
x : α
hx : x ∈ s
y : α
hy : y ∈ s
hxy : x < y
a : α
has : a ∈ s
hxa : x < a
ha : ∀ (z : α), z ∈ s → x < z → a ≤ z
⊢ ∃ z, z ∈ t ∧ x < z ∧ z < y
[PROOFSTEP]
rcases h x hx a has hxa fun z hzs hz => hz.2.not_le <| ha _ hzs hz.1 with ⟨b, hbt, hxb, hba⟩
[GOAL]
case h.intro.intro.intro.intro.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → (∀ (z : α), z ∈ s → ¬z ∈ Set.Ioo x y) → ∃ z, z ∈ t ∧ x < z ∧ z < y
x : α
hx : x ∈ s
y : α
hy : y ∈ s
hxy : x < y
a : α
has : a ∈ s
hxa : x < a
ha : ∀ (z : α), z ∈ s → x < z → a ≤ z
b : α
hbt : b ∈ t
hxb : x < b
hba : b < a
⊢ ∃ z, z ∈ t ∧ x < z ∧ z < y
[PROOFSTEP]
exact ⟨b, hbt, hxb, hba.trans_le <| ha _ hy hxy⟩
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → ∃ z, z ∈ t ∧ x < z ∧ z < y
⊢ card s ≤ card t + 1
[PROOFSTEP]
set f : α → WithTop α := fun x => (t.filter fun y => x < y).min
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → ∃ z, z ∈ t ∧ x < z ∧ z < y
f : α → WithTop α := fun x => Finset.min (filter (fun y => x < y) t)
⊢ card s ≤ card t + 1
[PROOFSTEP]
have f_mono : StrictMonoOn f s := by
intro x hx y hy hxy
rcases h x hx y hy hxy with ⟨a, hat, hxa, hay⟩
calc
f x ≤ a := min_le (mem_filter.2 ⟨hat, by simpa⟩)
_ < f y :=
(Finset.lt_inf_iff <| WithTop.coe_lt_top a).2 fun b hb =>
WithTop.coe_lt_coe.2 <| hay.trans (by simpa using (mem_filter.1 hb).2)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → ∃ z, z ∈ t ∧ x < z ∧ z < y
f : α → WithTop α := fun x => Finset.min (filter (fun y => x < y) t)
⊢ StrictMonoOn f ↑s
[PROOFSTEP]
intro x hx y hy hxy
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → ∃ z, z ∈ t ∧ x < z ∧ z < y
f : α → WithTop α := fun x => Finset.min (filter (fun y => x < y) t)
x : α
hx : x ∈ ↑s
y : α
hy : y ∈ ↑s
hxy : x < y
⊢ f x < f y
[PROOFSTEP]
rcases h x hx y hy hxy with ⟨a, hat, hxa, hay⟩
[GOAL]
case intro.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → ∃ z, z ∈ t ∧ x < z ∧ z < y
f : α → WithTop α := fun x => Finset.min (filter (fun y => x < y) t)
x : α
hx : x ∈ ↑s
y : α
hy : y ∈ ↑s
hxy : x < y
a : α
hat : a ∈ t
hxa : x < a
hay : a < y
⊢ f x < f y
[PROOFSTEP]
calc
f x ≤ a := min_le (mem_filter.2 ⟨hat, by simpa⟩)
_ < f y :=
(Finset.lt_inf_iff <| WithTop.coe_lt_top a).2 fun b hb =>
WithTop.coe_lt_coe.2 <| hay.trans (by simpa using (mem_filter.1 hb).2)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → ∃ z, z ∈ t ∧ x < z ∧ z < y
f : α → WithTop α := fun x => Finset.min (filter (fun y => x < y) t)
x : α
hx : x ∈ ↑s
y : α
hy : y ∈ ↑s
hxy : x < y
a : α
hat : a ∈ t
hxa : x < a
hay : a < y
⊢ x < a
[PROOFSTEP]
simpa
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x✝ : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → ∃ z, z ∈ t ∧ x < z ∧ z < y
f : α → WithTop α := fun x => Finset.min (filter (fun y => x < y) t)
x : α
hx : x ∈ ↑s
y : α
hy : y ∈ ↑s
hxy : x < y
a : α
hat : a ∈ t
hxa : x < a
hay : a < y
b : α
hb : b ∈ filter (fun y_1 => y < y_1) t
⊢ y < b
[PROOFSTEP]
simpa using (mem_filter.1 hb).2
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
s t : Finset α
h : ∀ (x : α), x ∈ s → ∀ (y : α), y ∈ s → x < y → ∃ z, z ∈ t ∧ x < z ∧ z < y
f : α → WithTop α := fun x => Finset.min (filter (fun y => x < y) t)
f_mono : StrictMonoOn f ↑s
⊢ card s ≤ card t + 1
[PROOFSTEP]
calc
s.card = (s.image f).card := (card_image_of_injOn f_mono.injOn).symm
_ ≤ (insert ⊤ (t.image (↑)) : Finset (WithTop α)).card :=
(card_mono <|
image_subset_iff.2 fun x _ =>
insert_subset_insert _ (image_subset_image <| filter_subset _ _) (min_mem_insert_top_image_coe _))
_ ≤ t.card + 1 := (card_insert_le _ _).trans (add_le_add_right card_image_le _)
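Editor's note: the calc block above completes an interleaving bound: a strictly monotone map `f : α → WithTop α` sends `s` injectively into `insert ⊤ (t.image (↑))`, giving `card s ≤ card t + 1`. This appears to be mathlib's `Finset.card_le_of_interleaved` (name assumed; statement read off the initial goal):

import Mathlib
open Finset

example {α : Type*} [LinearOrder α] {s t : Finset α}
    (h : ∀ x ∈ s, ∀ y ∈ s, x < y → (∀ z ∈ s, z ∉ Set.Ioo x y) → ∃ z ∈ t, x < z ∧ z < y) :
    s.card ≤ t.card + 1 := by
  sorry -- strictly monotone `f` into `insert ⊤ (t.image (↑))`, as traced above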
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
inst✝ : DecidableEq α
p : Finset α → Prop
s : Finset α
h0 : p ∅
step : ∀ (a : α) (s : Finset α), (∀ (x : α), x ∈ s → x < a) → p s → p (insert a s)
⊢ p s
[PROOFSTEP]
induction' s using Finset.strongInductionOn with s ihs
[GOAL]
case a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
inst✝ : DecidableEq α
p : Finset α → Prop
h0 : p ∅
step : ∀ (a : α) (s : Finset α), (∀ (x : α), x ∈ s → x < a) → p s → p (insert a s)
s : Finset α
ihs : ∀ (t : Finset α), t ⊂ s → p t
⊢ p s
[PROOFSTEP]
rcases s.eq_empty_or_nonempty with (rfl | hne)
[GOAL]
case a.inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s : Finset α
H : Finset.Nonempty s
x : α
inst✝ : DecidableEq α
p : Finset α → Prop
h0 : p ∅
step : ∀ (a : α) (s : Finset α), (∀ (x : α), x ∈ s → x < a) → p s → p (insert a s)
ihs : ∀ (t : Finset α), t ⊂ ∅ → p t
⊢ p ∅
[PROOFSTEP]
exact h0
[GOAL]
case a.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H : Finset.Nonempty s✝
x : α
inst✝ : DecidableEq α
p : Finset α → Prop
h0 : p ∅
step : ∀ (a : α) (s : Finset α), (∀ (x : α), x ∈ s → x < a) → p s → p (insert a s)
s : Finset α
ihs : ∀ (t : Finset α), t ⊂ s → p t
hne : Finset.Nonempty s
⊢ p s
[PROOFSTEP]
have H : s.max' hne ∈ s := max'_mem s hne
[GOAL]
case a.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H✝ : Finset.Nonempty s✝
x : α
inst✝ : DecidableEq α
p : Finset α → Prop
h0 : p ∅
step : ∀ (a : α) (s : Finset α), (∀ (x : α), x ∈ s → x < a) → p s → p (insert a s)
s : Finset α
ihs : ∀ (t : Finset α), t ⊂ s → p t
hne : Finset.Nonempty s
H : max' s hne ∈ s
⊢ p s
[PROOFSTEP]
rw [← insert_erase H]
[GOAL]
case a.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : LinearOrder α
s✝ : Finset α
H✝ : Finset.Nonempty s✝
x : α
inst✝ : DecidableEq α
p : Finset α → Prop
h0 : p ∅
step : ∀ (a : α) (s : Finset α), (∀ (x : α), x ∈ s → x < a) → p s → p (insert a s)
s : Finset α
ihs : ∀ (t : Finset α), t ⊂ s → p t
hne : Finset.Nonempty s
H : max' s hne ∈ s
⊢ p (insert (max' s hne) (erase s (max' s hne)))
[PROOFSTEP]
exact step _ _ (fun x => s.lt_max'_of_mem_erase_max' hne) (ihs _ <| erase_ssubset H)
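Editor's note: the strong induction above, which repeatedly peels off `s.max'`, is an induction principle ordered by the maximum element, likely mathlib's `Finset.induction_on_max` (name assumed). Reconstructed sketch:

import Mathlib
open Finset

example {α : Type*} [LinearOrder α] [DecidableEq α] {p : Finset α → Prop}
    (s : Finset α) (h0 : p ∅)
    (step : ∀ a s, (∀ x ∈ s, x < a) → p s → p (insert a s)) : p s := by
  sorry -- strong induction, peeling off `s.max'`, as traced above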
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
s : Finset ι
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
⊢ p s
[PROOFSTEP]
induction' s using Finset.strongInductionOn with s ihs
[GOAL]
case a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
⊢ p s
[PROOFSTEP]
rcases (s.image f).eq_empty_or_nonempty with (hne | hne)
[GOAL]
case a.inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
hne : image f s = ∅
⊢ p s
[PROOFSTEP]
simp only [image_eq_empty] at hne
[GOAL]
case a.inl
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
hne : s = ∅
⊢ p s
[PROOFSTEP]
simp only [hne, h0]
[GOAL]
case a.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
hne : Finset.Nonempty (image f s)
⊢ p s
[PROOFSTEP]
have H : (s.image f).max' hne ∈ s.image f := max'_mem (s.image f) hne
[GOAL]
case a.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
hne : Finset.Nonempty (image f s)
H : max' (image f s) hne ∈ image f s
⊢ p s
[PROOFSTEP]
simp only [mem_image, exists_prop] at H
[GOAL]
case a.inr
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
hne : Finset.Nonempty (image f s)
H : ∃ a, a ∈ s ∧ f a = max' (image f s) hne
⊢ p s
[PROOFSTEP]
rcases H with ⟨a, has, hfa⟩
[GOAL]
case a.inr.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
hne : Finset.Nonempty (image f s)
a : ι
has : a ∈ s
hfa : f a = max' (image f s) hne
⊢ p s
[PROOFSTEP]
rw [← insert_erase has]
[GOAL]
case a.inr.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
hne : Finset.Nonempty (image f s)
a : ι
has : a ∈ s
hfa : f a = max' (image f s) hne
⊢ p (insert a (erase s a))
[PROOFSTEP]
refine' step _ _ (not_mem_erase a s) (fun x hx => _) (ihs _ <| erase_ssubset has)
[GOAL]
case a.inr.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
hne : Finset.Nonempty (image f s)
a : ι
has : a ∈ s
hfa : f a = max' (image f s) hne
x : ι
hx : x ∈ erase s a
⊢ f x ≤ f a
[PROOFSTEP]
rw [hfa]
[GOAL]
case a.inr.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝² : LinearOrder α
inst✝¹ : LinearOrder β
inst✝ : DecidableEq ι
f : ι → α
p : Finset ι → Prop
h0 : p ∅
step : ∀ (a : ι) (s : Finset ι), ¬a ∈ s → (∀ (x : ι), x ∈ s → f x ≤ f a) → p s → p (insert a s)
s : Finset ι
ihs : ∀ (t : Finset ι), t ⊂ s → p t
hne : Finset.Nonempty (image f s)
a : ι
has : a ∈ s
hfa : f a = max' (image f s) hne
x : ι
hx : x ∈ erase s a
⊢ f x ≤ max' (image f s) hne
[PROOFSTEP]
exact le_max' _ _ (mem_image_of_mem _ <| mem_of_mem_erase hx)
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset β
f : β → α
h : Finset.Nonempty s
⊢ ∃ x, x ∈ s ∧ ∀ (x' : β), x' ∈ s → f x' ≤ f x
[PROOFSTEP]
cases' max_of_nonempty (h.image f) with y hy
[GOAL]
case intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset β
f : β → α
h : Finset.Nonempty s
y : α
hy : Finset.max (image f s) = ↑y
⊢ ∃ x, x ∈ s ∧ ∀ (x' : β), x' ∈ s → f x' ≤ f x
[PROOFSTEP]
rcases mem_image.mp (mem_of_max hy) with ⟨x, hx, rfl⟩
[GOAL]
case intro.intro.intro
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
s : Finset β
f : β → α
h : Finset.Nonempty s
x : β
hx : x ∈ s
hy : Finset.max (image f s) = ↑(f x)
⊢ ∃ x, x ∈ s ∧ ∀ (x' : β), x' ∈ s → f x' ≤ f x
[PROOFSTEP]
exact ⟨x, hx, fun x' hx' => le_max_of_eq (mem_image_of_mem f hx') hy⟩
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
i : α
s : Finset α
hs : Finset.Nonempty s
⊢ IsGLB (↑s) i ↔ IsLeast (↑s) i
[PROOFSTEP]
refine' ⟨fun his => _, IsLeast.isGLB⟩
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
i : α
s : Finset α
hs : Finset.Nonempty s
his : IsGLB (↑s) i
⊢ IsLeast (↑s) i
[PROOFSTEP]
suffices i = min' s hs by
rw [this]
exact isLeast_min' s hs
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
i : α
s : Finset α
hs : Finset.Nonempty s
his : IsGLB (↑s) i
this : i = min' s hs
⊢ IsLeast (↑s) i
[PROOFSTEP]
rw [this]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
i : α
s : Finset α
hs : Finset.Nonempty s
his : IsGLB (↑s) i
this : i = min' s hs
⊢ IsLeast (↑s) (min' s hs)
[PROOFSTEP]
exact isLeast_min' s hs
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
i : α
s : Finset α
hs : Finset.Nonempty s
his : IsGLB (↑s) i
⊢ i = min' s hs
[PROOFSTEP]
rw [IsGLB, IsGreatest, mem_lowerBounds, mem_upperBounds] at his
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
i : α
s : Finset α
hs : Finset.Nonempty s
his : (∀ (x : α), x ∈ ↑s → i ≤ x) ∧ ∀ (x : α), x ∈ lowerBounds ↑s → x ≤ i
⊢ i = min' s hs
[PROOFSTEP]
exact le_antisymm (his.1 (Finset.min' s hs) (Finset.min'_mem s hs)) (his.2 _ (Finset.min'_le s))
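Editor's note: the proof above identifies a greatest lower bound of a nonempty finset with its least element, by showing `i = s.min' hs` via antisymmetry; this is likely mathlib's `Finset.isGLB_iff_isLeast` (name assumed). Reconstructed sketch:

import Mathlib
open Finset

example {α : Type*} [LinearOrder α] (i : α) (s : Finset α) (hs : s.Nonempty) :
    IsGLB (↑s : Set α) i ↔ IsLeast (↑s : Set α) i := by
  sorry -- `i = s.min' hs` by antisymmetry, as traced above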
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
i : α
s : Finset α
his : IsGLB (↑s) i
hs : Finset.Nonempty s
⊢ i ∈ s
[PROOFSTEP]
rw [← mem_coe]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : LinearOrder α
i : α
s : Finset α
his : IsGLB (↑s) i
hs : Finset.Nonempty s
⊢ i ∈ ↑s
[PROOFSTEP]
exact ((isGLB_iff_isLeast i s hs).mp his).1
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : DecidableEq β
s : Finset α
f : α → Multiset β
b : β
⊢ count b (Finset.sup s f) = Finset.sup s fun a => count b (f a)
[PROOFSTEP]
letI := Classical.decEq α
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : DecidableEq β
s : Finset α
f : α → Multiset β
b : β
this : DecidableEq α := Classical.decEq α
⊢ count b (Finset.sup s f) = Finset.sup s fun a => count b (f a)
[PROOFSTEP]
refine' s.induction _ _
[GOAL]
case refine'_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : DecidableEq β
s : Finset α
f : α → Multiset β
b : β
this : DecidableEq α := Classical.decEq α
⊢ count b (Finset.sup ∅ f) = Finset.sup ∅ fun a => count b (f a)
[PROOFSTEP]
exact count_zero _
[GOAL]
case refine'_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : DecidableEq β
s : Finset α
f : α → Multiset β
b : β
this : DecidableEq α := Classical.decEq α
⊢ ∀ ⦃a : α⦄ {s : Finset α},
¬a ∈ s →
(count b (Finset.sup s f) = Finset.sup s fun a => count b (f a)) →
count b (Finset.sup (insert a s) f) = Finset.sup (insert a s) fun a => count b (f a)
[PROOFSTEP]
intro i s _ ih
[GOAL]
case refine'_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : DecidableEq β
s✝ : Finset α
f : α → Multiset β
b : β
this : DecidableEq α := Classical.decEq α
i : α
s : Finset α
a✝ : ¬i ∈ s
ih : count b (Finset.sup s f) = Finset.sup s fun a => count b (f a)
⊢ count b (Finset.sup (insert i s) f) = Finset.sup (insert i s) fun a => count b (f a)
[PROOFSTEP]
rw [Finset.sup_insert, sup_eq_union, count_union, Finset.sup_insert, ih]
[GOAL]
case refine'_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : DecidableEq β
s✝ : Finset α
f : α → Multiset β
b : β
this : DecidableEq α := Classical.decEq α
i : α
s : Finset α
a✝ : ¬i ∈ s
ih : count b (Finset.sup s f) = Finset.sup s fun a => count b (f a)
⊢ max (count b (f i)) (Finset.sup s fun a => count b (f a)) = count b (f i) ⊔ Finset.sup s fun a => count b (f a)
[PROOFSTEP]
rfl
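Editor's note: the `Finset.induction` above, using `count_union` at the insert step, shows that multiplicity commutes with a finite sup of multisets; this appears to be mathlib's `Multiset.count_finset_sup` (name assumed; statement read off the initial goal):

import Mathlib
open Finset Multiset

example {α β : Type*} [DecidableEq β] (s : Finset α) (f : α → Multiset β) (b : β) :
    count b (s.sup f) = s.sup fun a => count b (f a) := by
  sorry -- `Finset.induction` on `s` with `count_union`, as traced above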
[GOAL]
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
s : Finset α
f : α → Multiset β
x : β
⊢ x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
[PROOFSTEP]
classical
induction' s using Finset.induction_on with a s has hxs
· simp
· rw [Finset.sup_insert, Multiset.sup_eq_union, Multiset.mem_union]
constructor
· intro hxi
cases' hxi with hf hf
· refine' ⟨a, _, hf⟩
simp only [true_or_iff, eq_self_iff_true, Finset.mem_insert]
· rcases hxs.mp hf with ⟨v, hv, hfv⟩
refine' ⟨v, _, hfv⟩
simp only [hv, or_true_iff, Finset.mem_insert]
· rintro ⟨v, hv, hfv⟩
rw [Finset.mem_insert] at hv
rcases hv with (rfl | hv)
· exact Or.inl hfv
· refine' Or.inr (hxs.mpr ⟨v, hv, hfv⟩)
[GOAL]
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
s : Finset α
f : α → Multiset β
x : β
⊢ x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
[PROOFSTEP]
induction' s using Finset.induction_on with a s has hxs
[GOAL]
case empty
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
⊢ x ∈ Finset.sup ∅ f ↔ ∃ v, v ∈ ∅ ∧ x ∈ f v
[PROOFSTEP]
simp
[GOAL]
case insert
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
⊢ x ∈ Finset.sup (insert a s) f ↔ ∃ v, v ∈ insert a s ∧ x ∈ f v
[PROOFSTEP]
rw [Finset.sup_insert, Multiset.sup_eq_union, Multiset.mem_union]
[GOAL]
case insert
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
⊢ x ∈ f a ∨ x ∈ Finset.sup s f ↔ ∃ v, v ∈ insert a s ∧ x ∈ f v
[PROOFSTEP]
constructor
[GOAL]
case insert.mp
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
⊢ x ∈ f a ∨ x ∈ Finset.sup s f → ∃ v, v ∈ insert a s ∧ x ∈ f v
[PROOFSTEP]
intro hxi
[GOAL]
case insert.mp
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
hxi : x ∈ f a ∨ x ∈ Finset.sup s f
⊢ ∃ v, v ∈ insert a s ∧ x ∈ f v
[PROOFSTEP]
cases' hxi with hf hf
[GOAL]
case insert.mp.inl
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
hf : x ∈ f a
⊢ ∃ v, v ∈ insert a s ∧ x ∈ f v
[PROOFSTEP]
refine' ⟨a, _, hf⟩
[GOAL]
case insert.mp.inl
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
hf : x ∈ f a
⊢ a ∈ insert a s
[PROOFSTEP]
simp only [true_or_iff, eq_self_iff_true, Finset.mem_insert]
[GOAL]
case insert.mp.inr
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
hf : x ∈ Finset.sup s f
⊢ ∃ v, v ∈ insert a s ∧ x ∈ f v
[PROOFSTEP]
rcases hxs.mp hf with ⟨v, hv, hfv⟩
[GOAL]
case insert.mp.inr.intro.intro
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
hf : x ∈ Finset.sup s f
v : α
hv : v ∈ s
hfv : x ∈ f v
⊢ ∃ v, v ∈ insert a s ∧ x ∈ f v
[PROOFSTEP]
refine' ⟨v, _, hfv⟩
[GOAL]
case insert.mp.inr.intro.intro
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
hf : x ∈ Finset.sup s f
v : α
hv : v ∈ s
hfv : x ∈ f v
⊢ v ∈ insert a s
[PROOFSTEP]
simp only [hv, or_true_iff, Finset.mem_insert]
[GOAL]
case insert.mpr
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
⊢ (∃ v, v ∈ insert a s ∧ x ∈ f v) → x ∈ f a ∨ x ∈ Finset.sup s f
[PROOFSTEP]
rintro ⟨v, hv, hfv⟩
[GOAL]
case insert.mpr.intro.intro
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
v : α
hv : v ∈ insert a s
hfv : x ∈ f v
⊢ x ∈ f a ∨ x ∈ Finset.sup s f
[PROOFSTEP]
rw [Finset.mem_insert] at hv
[GOAL]
case insert.mpr.intro.intro
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
v : α
hv : v = a ∨ v ∈ s
hfv : x ∈ f v
⊢ x ∈ f a ∨ x ∈ Finset.sup s f
[PROOFSTEP]
rcases hv with (rfl | hv)
[GOAL]
case insert.mpr.intro.intro.inl
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
s : Finset α
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
v : α
hfv : x ∈ f v
has : ¬v ∈ s
⊢ x ∈ f v ∨ x ∈ Finset.sup s f
[PROOFSTEP]
exact Or.inl hfv
[GOAL]
case insert.mpr.intro.intro.inr
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
f : α → Multiset β
x : β
a : α
s : Finset α
has : ¬a ∈ s
hxs : x ∈ Finset.sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
v : α
hfv : x ∈ f v
hv : v ∈ s
⊢ x ∈ f a ∨ x ∈ Finset.sup s f
[PROOFSTEP]
refine' Or.inr (hxs.mpr ⟨v, hv, hfv⟩)
[GOAL]
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
s : Finset α
f : α → Finset β
x : β
⊢ x ∈ sup s f ↔ ∃ v, v ∈ s ∧ x ∈ f v
[PROOFSTEP]
change _ ↔ ∃ v ∈ s, x ∈ (f v).val
[GOAL]
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
s : Finset α
f : α → Finset β
x : β
⊢ x ∈ sup s f ↔ ∃ v, v ∈ s ∧ x ∈ (f v).val
[PROOFSTEP]
rw [← Multiset.mem_sup, ← Multiset.mem_toFinset, sup_toFinset]
[GOAL]
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
s : Finset α
f : α → Finset β
x : β
⊢ x ∈ sup s f ↔ x ∈ sup s fun x => Multiset.toFinset (f x).val
[PROOFSTEP]
simp_rw [val_toFinset]
[GOAL]
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
s : Finset α
t : α → Finset β
⊢ sup s t = Finset.biUnion s t
[PROOFSTEP]
ext
[GOAL]
case a
F : Type u_1
α✝ : Type u_2
β✝ : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
α : Type u_7
β : Type u_8
inst✝ : DecidableEq β
s : Finset α
t : α → Finset β
a✝ : β
⊢ a✝ ∈ sup s t ↔ a✝ ∈ Finset.biUnion s t
[PROOFSTEP]
rw [mem_sup, mem_biUnion]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : DecidableEq α
s : Finset β
f : β → α
⊢ (sup s fun b => {f b}) = image f s
[PROOFSTEP]
ext a
[GOAL]
case a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : DecidableEq α
s : Finset β
f : β → α
a : α
⊢ (a ∈ sup s fun b => {f b}) ↔ a ∈ image f s
[PROOFSTEP]
rw [mem_sup, mem_image]
[GOAL]
case a
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : DecidableEq α
s : Finset β
f : β → α
a : α
⊢ (∃ v, v ∈ s ∧ a ∈ {f v}) ↔ ∃ a_1, a_1 ∈ s ∧ f a_1 = a
[PROOFSTEP]
simp only [mem_singleton, eq_comm]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
ι' : Sort u_7
inst✝ : CompleteLattice α
s : ι → α
⊢ ⨆ (i : ι), s i = ⨆ (t : Finset ι) (i : ι) (_ : i ∈ t), s i
[PROOFSTEP]
classical
refine le_antisymm ?_ ?_
· exact iSup_le fun b => le_iSup_of_le { b } <| le_iSup_of_le b <| le_iSup_of_le (by simp) <| le_rfl
· exact iSup_le fun t => iSup_le fun b => iSup_le fun _ => le_iSup _ _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
ι' : Sort u_7
inst✝ : CompleteLattice α
s : ι → α
⊢ ⨆ (i : ι), s i = ⨆ (t : Finset ι) (i : ι) (_ : i ∈ t), s i
[PROOFSTEP]
refine le_antisymm ?_ ?_
[GOAL]
case refine_1
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
ι' : Sort u_7
inst✝ : CompleteLattice α
s : ι → α
⊢ ⨆ (i : ι), s i ≤ ⨆ (t : Finset ι) (i : ι) (_ : i ∈ t), s i
[PROOFSTEP]
exact iSup_le fun b => le_iSup_of_le { b } <| le_iSup_of_le b <| le_iSup_of_le (by simp) <| le_rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
ι' : Sort u_7
inst✝ : CompleteLattice α
s : ι → α
b : ι
⊢ b ∈ {b}
[PROOFSTEP]
simp
[GOAL]
case refine_2
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
ι' : Sort u_7
inst✝ : CompleteLattice α
s : ι → α
⊢ ⨆ (t : Finset ι) (i : ι) (_ : i ∈ t), s i ≤ ⨆ (i : ι), s i
[PROOFSTEP]
exact iSup_le fun t => iSup_le fun b => iSup_le fun _ => le_iSup _ _
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
ι' : Sort u_7
inst✝ : CompleteLattice α
s : ι' → α
⊢ ⨆ (i : ι'), s i = ⨆ (t : Finset (PLift ι')) (i : PLift ι') (_ : i ∈ t), s i.down
[PROOFSTEP]
rw [← iSup_eq_iSup_finset, ← Equiv.plift.surjective.iSup_comp]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
ι' : Sort u_7
inst✝ : CompleteLattice α
s : ι' → α
⊢ ⨆ (x : PLift ι'), s (↑Equiv.plift x) = ⨆ (i : PLift ι'), s i.down
[PROOFSTEP]
rfl
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : CompleteLattice β
a : α
s : α → β
⊢ ⨆ (x : α) (_ : x ∈ {a}), s x = s a
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : CompleteLattice β
a : α
s : α → β
⊢ ⨅ (x : α) (_ : x ∈ {a}), s x = s a
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝ : CompleteLattice β
o : Option α
f : α → β
⊢ ⨆ (x : α) (_ : x ∈ Option.toFinset o), f x = ⨆ (x : α) (_ : x ∈ o), f x
[PROOFSTEP]
simp
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
f : α → β
s t : Finset α
⊢ ⨆ (x : α) (_ : x ∈ s ∪ t), f x = (⨆ (x : α) (_ : x ∈ s), f x) ⊔ ⨆ (x : α) (_ : x ∈ t), f x
[PROOFSTEP]
simp [iSup_or, iSup_sup_eq]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
a : α
s : Finset α
t : α → β
⊢ ⨆ (x : α) (_ : x ∈ insert a s), t x = t a ⊔ ⨆ (x : α) (_ : x ∈ s), t x
[PROOFSTEP]
rw [insert_eq]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
a : α
s : Finset α
t : α → β
⊢ ⨆ (x : α) (_ : x ∈ {a} ∪ s), t x = t a ⊔ ⨆ (x : α) (_ : x ∈ s), t x
[PROOFSTEP]
simp only [iSup_union, Finset.iSup_singleton]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
f : γ → α
g : α → β
s : Finset γ
⊢ ⨆ (x : α) (_ : x ∈ image f s), g x = ⨆ (y : γ) (_ : y ∈ s), g (f y)
[PROOFSTEP]
rw [← iSup_coe, coe_image, iSup_image, iSup_coe]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
f : γ → α
g : α → β
s : Finset γ
⊢ ⨅ (x : α) (_ : x ∈ image f s), g x = ⨅ (y : γ) (_ : y ∈ s), g (f y)
[PROOFSTEP]
rw [← iInf_coe, coe_image, iInf_image, iInf_coe]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
x : α
t : Finset α
f : α → β
s : β
hx : ¬x ∈ t
⊢ ⨆ (i : α) (_ : i ∈ insert x t), update f x s i = s ⊔ ⨆ (i : α) (_ : i ∈ t), f i
[PROOFSTEP]
simp only [Finset.iSup_insert, update_same]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
x : α
t : Finset α
f : α → β
s : β
hx : ¬x ∈ t
⊢ s ⊔ ⨆ (x_1 : α) (_ : x_1 ∈ t), update f x s x_1 = s ⊔ ⨆ (i : α) (_ : i ∈ t), f i
[PROOFSTEP]
rcongr (i hi)
[GOAL]
case e_a.e_s.h.e_s.h
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
x : α
t : Finset α
f : α → β
s : β
hx : ¬x ∈ t
i : α
hi : i ∈ t
⊢ update f x s i = f i
[PROOFSTEP]
apply update_noteq
[GOAL]
case e_a.e_s.h.e_s.h.h
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
x : α
t : Finset α
f : α → β
s : β
hx : ¬x ∈ t
i : α
hi : i ∈ t
⊢ i ≠ x
[PROOFSTEP]
rintro rfl
[GOAL]
case e_a.e_s.h.e_s.h.h
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
t : Finset α
f : α → β
s : β
i : α
hi : i ∈ t
hx : ¬i ∈ t
⊢ False
[PROOFSTEP]
exact hx hi
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
inst✝¹ : CompleteLattice β
inst✝ : DecidableEq α
s : Finset γ
t : γ → Finset α
f : α → β
⊢ ⨆ (y : α) (_ : y ∈ Finset.biUnion s t), f y = ⨆ (x : γ) (_ : x ∈ s) (y : α) (_ : y ∈ t x), f y
[PROOFSTEP]
simp [@iSup_comm _ α, iSup_and]
[GOAL]
F : Type u_1
α : Type u_2
β : Type u_3
γ : Type u_4
ι : Type u_5
κ : Type u_6
s : Finset α
f : α → Set β
x : α
h : x ∈ s
⊢ f x ≤ ⨆ (_ : x ∈ s), f x
[PROOFSTEP]
simp only [h, iSup_pos, le_refl]
|
\section{\lego\ Design}
\label{sec:lego:design}
Based on the \splitkernel\ architecture,
we built {\em \lego}, the first OS designed for hardware resource disaggregation.
\lego\ is a research prototype that demonstrates the feasibility of the \splitkernel\ design,
but it is not the only way to build a \splitkernel.
\lego' design targets three types of hardware components:
processor, memory, and storage,
and we call them {\em \pcomponent, \mcomponent}, and {\em \scomponent}.
This section first introduces the abstraction \lego\ exposes to users
and then describes the hardware architecture of components \lego\ runs on.
Next, we explain the design of \lego' process, memory, and storage \microos{}s.
Finally, we discuss \lego' global resource management and failure handling mechanisms.
Overall, \lego\ achieves the following design goals:
\begin{itemize}
\item Clean separation of process, memory, and storage functionalities.
\item Monitors run at hardware components and fit device constraints.
\item Comparable performance to monolithic Linux servers.
\item Efficient resource management and memory failure handling, both in space and in performance. % and performance-efficient memory replication scheme.
\item Easy-to-use, backward compatible user interface.
\item Supports common Linux system call interfaces.
\end{itemize}
\subsection{Abstraction and Usage Model}
\lego\ exposes a distributed set of {\em virtual nodes}, or {\em \vnode{}s}, to users.
From users' point of view, a \vnode\ is like a virtual machine.
Multiple users can run in a \vnode\ and each user can run multiple processes.
Each \vnode\ has a unique ID, a unique virtual IP address, %({\em \vip}),
and its own storage mount point. % ({\em \vmount}).
\lego\ protects and isolates the resources given to each \vnode\ from others.
Internally, one \vnode\ can run on multiple \pcomponent{}s, multiple \mcomponent{}s,
and multiple \scomponent{}s.
At the same time, each hardware component can host resources for more than one \vnode.
The internal execution status is transparent to \lego\ users;
they do not know which physical components their applications run on.
With \splitkernel's design principle of components not being coherent,
\lego\ does not support writable shared memory across processors. %execute application threads that need to have shared write access to common memory.
\lego\ assumes that threads within the same process access shared memory
and threads belonging to different processes do not share writable memory,
and \lego\ makes scheduling decisions based on this assumption (\S\ref{sec:lego:proc-scheduling}).
Applications that use shared writable memory across processes (\eg, with MAP\_SHARED)
will need to be adapted to use message passing across processes.
We made this decision because writable shared memory across processes is rare
(we have not seen a single instance in the datacenter applications we studied),
and supporting it makes both hardware and software more complex
(in fact, we have implemented this support but later decided not to include it because of its complexity).
One of the initial decisions we made when building \lego\ is to support the Linux system call interface
and unmodified Linux ABI,
because doing so can greatly ease the adoption of \lego.
Distributed applications that run on Linux can seamlessly run on a \lego\ cluster
by running on a set of \vnode{}s. % and using their virtual IP addresses to communicate.
\subsection{Hardware Architecture}
\label{sec:lego:hardware}
\input{lego/fig-hw-arch}
\lego\ \pcomponent, \mcomponent, and \scomponent\ are independent devices,
each with its own hardware controller and network interface (for \pcomponent, the hardware controller is the processor itself).
Our current hardware model uses CPU in \pcomponent,
DRAM in \mcomponent, and SSD or HDD in \scomponent.
We leave exploring other hardware devices for future work.
To demonstrate the feasibility of hardware resource disaggregation,
we propose a \pcomponent{} and an \mcomponent\ architecture designed
within today's network, processor, and memory performance and hardware constraints
(Figure~\ref{fig-lego-hw-arch}).
\textit{\uline{Separating process and memory functionalities.}}
\lego\ moves all hardware memory functionalities to \mcomponent{}s
(e.g., page tables, TLBs) and leaves {\em only} caches at the \pcomponent{} side.
With a clean separation of process and memory hardware units,
the allocation and management of memory can be completely transparent to \pcomponent{}s.
Each \mcomponent{} can choose its own memory allocation technique
and virtual to physical memory address mappings (\eg, segmentation).
\textit{\uline{Processor virtual caches.}}
After moving all memory functionalities to \mcomponent{}s,
\pcomponent{}s will only see virtual addresses and have to use virtual memory addresses to access their caches.
Because of this, \lego\ organizes all levels of \pcomponent{} caches as {\em virtual caches}~\cite{Goodman-ASPLOS87,Wang-ISCA89},
\ie, virtually-indexed and virtually-tagged caches.
A virtual cache has two potential problems, commonly known as synonyms and homonyms~\cite{CacheMemory82}.
The synonym problem happens when a physical address maps to multiple virtual addresses (and thus multiple virtual cache lines)
as a result of memory sharing across processes;
an update to one virtual cache line is then not reflected in the other lines that share the data.
Since \lego\ does not allow writable inter-process memory sharing,
it will not have the synonym problem.
The homonym problem happens when two address spaces use the same virtual address for their own different data.
Similar to previous solutions~\cite{OVC}, we solve homonyms by storing an address space ID (ASID) with each cache line,
and differentiate a virtual address in different address spaces using ASIDs.
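To make this concrete, the following C sketch (our own illustration, not \lego' actual hardware layout) shows an ASID-tagged virtual cache line; a hit requires both the virtual tag and the ASID to match, so two address spaces using the same virtual address never alias:
\begin{verbatim}
#include <stdint.h>
#include <stdbool.h>

struct vcache_line {
    uint64_t vtag;     /* virtual-address tag */
    uint16_t asid;     /* address space ID, disambiguates homonyms */
    bool     valid;
    uint8_t  data[64];
};

static bool vcache_hit(const struct vcache_line *l,
                       uint64_t vtag, uint16_t asid)
{
    /* both the tag and the ASID must match for a hit */
    return l->valid && l->vtag == vtag && l->asid == asid;
}
\end{verbatim}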
\textit{\uline{Separating memory for performance and for capacity.}}
Previous studies~\cite{Gao16-OSDI,GU17-NSDI} and our own show that today's network speed
cannot meet application performance requirements if all memory accesses are across the network.
Fortunately, many modern datacenter applications exhibit strong memory access temporal locality.
For example, we found 90\% of memory accesses in PowerGraph~\cite{Gonzalez12-OSDI} go to just 0.06\% of total memory
and 95\% go to 3.1\% of memory
(22\% and 36\% for TensorFlow~\cite{TensorFlow} respectively,
5.1\% and 6.6\% for Phoenix~\cite{Ranger07-HPCA}).
%PG 90% 0.0063G 95% 0.301G 100% 9.68G
%TF 90% 0.608G 95% 0.968G 100% 2.7G
With good memory-access locality, we propose to %separate hardware memory into two categories and organize them differently:
leave a small amount of memory (\eg, 4\GB) at each \pcomponent{}
and move most memory across the network (\eg, a few TBs per \mcomponent{}).
\pcomponent{}s' local memory can be regular DRAM
or the on-die HBM~\cite{HBM-JEDEC,Knights-Landing},
and \mcomponent{}s use DRAM or NVM.
Different from previous proposals~\cite{Lim09-disaggregate},
we propose to organize \pcomponent{}s' DRAM/HBM as cache rather than main memory
for a clean separation of process and memory functionalities.
We place this cache under the current processor Last-Level Cache (LLC)
and call it an extended cache, or {\em \excache}.
\excache\ serves as another layer in the memory hierarchy between LLC and memory across the network.
With this design, \excache\ can serve hot memory accesses fast, while \mcomponent{}s can provide the capacity applications desire.
\excache\ is a virtual, inclusive cache,
and we use a combination of hardware and software to manage \excache.
Each \excache\ line has a (virtual-address) tag and two access permission bits (one for read/write and one for valid).
These bits are set by software when a line is inserted into \excache\ and checked by hardware at access time.
For best hit performance, the hit path of \excache\ is handled purely by hardware
--- the hardware cache controller maps a virtual address to an \excache\ set,
fetches and compares tags in the set, and on a hit, fetches the hit \excache\ line.
Handling misses of \excache\ is more complex than with traditional CPU caches,
and thus we use \lego\ to handle the miss path of \excache\ (see \S\ref{sec:lego:excachemgmt}).
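As an illustration (the constants and identifiers below are ours, not \lego' implementation), the hit path can be sketched in C as a plain set lookup computed directly from the virtual address:
\begin{verbatim}
#include <stdint.h>
#include <stdbool.h>

#define EXC_LINE 64
#define EXC_SETS 1024
#define EXC_WAYS 8

struct exc_line {
    uint64_t vtag;
    bool     valid;   /* permission bit: line present  */
    bool     rw;      /* permission bit: write allowed */
    uint8_t  data[EXC_LINE];
};

static struct exc_line excache[EXC_SETS][EXC_WAYS];

static struct exc_line *exc_lookup(uint64_t vaddr, bool is_write)
{
    uint64_t lineno = vaddr / EXC_LINE;
    uint64_t set    = lineno % EXC_SETS;  /* simple calculation, no TLB */
    uint64_t vtag   = lineno / EXC_SETS;
    for (int w = 0; w < EXC_WAYS; w++) {
        struct exc_line *l = &excache[set][w];
        if (l->valid && l->vtag == vtag && (!is_write || l->rw))
            return l;  /* hit: served purely by hardware */
    }
    return NULL;       /* miss: trap to the process monitor */
}
\end{verbatim}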
Finally, we use a small amount of DRAM/HBM at \pcomponent{} for \lego' own kernel data usages,
accessed directly with physical memory addresses and managed by \lego.
\lego\ ensures that all its own data fits in this space to avoid going to \mcomponent{}s.
With our design, \pcomponent{}s do not need any address mappings:
\lego\ accesses all \pcomponent{}-side DRAM/HBM using physical memory addresses
and does simple calculations to locate the \excache\ set for a memory access.
Another benefit of not handling address mapping at \pcomponent{}s and moving TLBs to \mcomponent{}s
is that \pcomponent{}s do not need to access TLB or suffer from TLB misses,
potentially making \pcomponent{} cache accesses faster~\cite{Kaxiras-ISCA13}.
%We use software~\cite{softvm-HPCA97,Tsai-ISCA17} (\lego) to manage \excache\ and the kernel physical memory,
%although they can all be implemented in hardware too.
\subsection{Process Management}
The \lego\ {\em process \microos{}} runs in the kernel space of a \pcomponent\
and manages the \pcomponent's CPU cores and \excache.
\pcomponent{}s run user programs in the user space.
\subsubsection{Process Management and Scheduling}
\label{sec:lego:proc-scheduling}
At every \pcomponent, \lego\ uses a simple local thread scheduling model
that targets datacenter applications
(we will discuss global scheduling in \S~\ref{sec:lego:grm}).
\lego\ dedicates a small number of cores to kernel background threads
(currently two to four)
and uses the rest of the cores for application threads.
When a new process starts, \lego\ uses a global policy to choose a \pcomponent{} for it (\S~\ref{sec:lego:grm}).
Afterwards, \lego\ schedules new threads the process spawns on the same \pcomponent{}
by choosing the cores that host the fewest threads.
After assigning a thread to a core,
we let it run to the end with no scheduling or kernel preemption under common scenarios.
For example, we do not use any network interrupts
and let threads busy wait on the completion of outstanding network requests,
since a network request in \lego\ is fast
(\eg, fetching an \excache\ line from an \mcomponent\ takes around 6.5\mus).
\lego\ improves the overall processor utilization in a disaggregated cluster,
since it can freely schedule processes on any \pcomponent{}s without considering memory allocation.
Thus, we do not push for perfect core utilization when scheduling individual threads
and instead aim to minimize scheduling and context switch performance overheads.
Only when a \pcomponent{} has to schedule
more threads than its cores will
\lego\ start preempting threads on a core.
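A minimal C sketch of this placement policy (the identifiers are ours): new threads go to the core currently hosting the fewest threads and then run without preemption in the common case.
\begin{verbatim}
#define NR_APP_CORES 30  /* hypothetical: cores left for applications */

static int threads_on_core[NR_APP_CORES];

static int pick_core(void)
{
    int best = 0;
    for (int c = 1; c < NR_APP_CORES; c++)
        if (threads_on_core[c] < threads_on_core[best])
            best = c;
    threads_on_core[best]++;  /* the thread stays here until it exits */
    return best;
}
\end{verbatim}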
\subsubsection{\excache\ Management}
\label{sec:lego:excachemgmt}
\lego\ process \microos\ configures and manages \excache.
During the \pcomponent{}'s boot time, \lego\ configures the set associativity of \excache\
and its cache replacement policy.
While \excache\ hits are handled completely in hardware,
\lego\ handles misses in software.
When an \excache\ miss happens,
the process \microos\ fetches the corresponding line from an \mcomponent\ and inserts it into \excache.
If the \excache\ set is full, the process \microos\ first evicts a line in the set.
It throws away the evicted line if it is clean
and writes it back to an \mcomponent{} if it is dirty.
\lego\ currently supports two eviction policies: FIFO and LRU.
For each \excache\ set, \lego\ maintains a FIFO queue (or an approximate LRU list)
and chooses \excache\ lines to evict based on the corresponding policy (see \S\ref{sec:lego:procimpl} for details).
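The miss path can be sketched in C as follows; the network calls are placeholder stubs, not \lego' actual network interface, and only the FIFO policy is shown:
\begin{verbatim}
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define WAYS 8
struct line { uint64_t vaddr; bool valid, dirty; uint8_t data[64]; };
struct set  { struct line way[WAYS]; int fifo_head; };

/* placeholder RPC stubs to the owner mComponent */
static void net_writeback(uint64_t va, const uint8_t *buf)
{ (void)va; (void)buf; }
static void net_fetch(uint64_t va, uint8_t *buf)
{ (void)va; memset(buf, 0, 64); }

static void exc_miss(struct set *s, uint64_t vaddr)
{
    struct line *victim = &s->way[s->fifo_head];  /* FIFO eviction */
    s->fifo_head = (s->fifo_head + 1) % WAYS;
    if (victim->valid && victim->dirty)           /* dirty: write back */
        net_writeback(victim->vaddr, victim->data);
    net_fetch(vaddr, victim->data);               /* fill the new line */
    victim->vaddr = vaddr;
    victim->valid = true;
    victim->dirty = false;
}
\end{verbatim}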
\subsubsection{Supporting Linux Syscall Interface}
One of our early decisions is to support Linux ABIs for backward compatibility
and easy adoption of \lego.
A challenge in supporting the Linux system call interface is that
many Linux syscalls are associated with {\em states},
information about different Linux subsystems that is stored with each process
and can be accessed by user programs across syscalls.
For example, Linux records the states of a running process' open files, socket connections, and several other entities,
and it associates these states with file descriptors ({\em fd}s) that are exposed to users.
In contrast, \lego\ aims at the clean separation of OS functionalities.
With \lego' stateless design principle, each component only stores information about its own resource
and each request across components contains all the information that the destination component needs to handle the request.
To solve this discrepancy between the Linux syscall interface and \lego' design,
we add a layer on top of \lego' core process \microos\ at each \pcomponent\ to store Linux states
and translate these states and the Linux syscall interface to \lego' internal interface.
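For example, a file descriptor in this layer can be nothing more than \pcomponent{}-local state that is expanded into a self-describing request on every call; the C sketch below is hypothetical, not \lego' implementation:
\begin{verbatim}
#include <stdio.h>
#include <stddef.h>

#define MAX_FD 256
struct fd_state { char path[256]; long off; int in_use; };
static struct fd_state fdtab[MAX_FD];  /* lives only at the pComponent */

static int lego_open(const char *fullpath)
{
    for (int fd = 0; fd < MAX_FD; fd++)
        if (!fdtab[fd].in_use) {
            fdtab[fd].in_use = 1;
            fdtab[fd].off = 0;
            snprintf(fdtab[fd].path, sizeof fdtab[fd].path,
                     "%s", fullpath);
            return fd;
        }
    return -1;
}

/* a read() becomes a stateless (path, offset, size) request */
struct io_req { const char *path; long off; size_t len; };
static struct io_req lego_read_req(int fd, size_t len)
{
    struct io_req r = { fdtab[fd].path, fdtab[fd].off, len };
    fdtab[fd].off += len;  /* the cursor stays at the pComponent */
    return r;
}
\end{verbatim}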
\subsection{Memory Management}
We use \mcomponent{}s for three types of data:
anonymous memory (\ie, heaps, stacks),
memory-mapped files, and storage buffer caches.
The \lego\ {\em memory \microos{}}
manages both the virtual and physical memory address spaces,
their allocation, deallocation, and memory address mappings.
It also performs the actual memory read and write.
No user processes run on \mcomponent{}s;
\mcomponent{}s run completely in kernel mode
(the same is true for \scomponent{}s).
\lego\ lets a process address space span multiple \mcomponent{}s
to achieve efficient memory space utilization and high parallelism.
Each application process uses one or more \mcomponent{}s to host its data
and has a {\em home \mcomponent},
the \mcomponent\ that initially loads the process
and accepts and oversees all system calls related to virtual memory space management
(\eg, \brk, \mmap, \munmap, and \mremap).
\lego\ uses a global memory resource manager ({\em \gmm}) to assign a home \mcomponent{} to each new process at its creation time.
A home \mcomponent\ can also host process data.
\subsubsection{Memory Space Management}
\textit{\uline{Virtual memory space management.}}
We propose a two-level approach to manage distributed virtual memory spaces,
where the home \mcomponent\ of a process makes coarse-grained, high-level virtual memory allocation decisions
and other \mcomponent{}s perform fine-grained virtual memory allocation.
This approach minimizes network communication during both normal memory accesses and virtual memory operations,
while ensuring good load balancing and memory utilization.
Figure~\ref{fig-dist-vma} demonstrates the data structures used. % in virtual memory space management.
At the higher level, we split each virtual memory address space into coarse-grained, fixed-size {\em virtual regions},
or {\em \vregion{}s} (\eg, of 1\GB).
Each \vregion\ that contains allocated virtual memory addresses (an active \vregion) is {\em owned} by an \mcomponent{}.
The owner of a \vregion\ handles all memory accesses and virtual memory requests within the \vregion.
\input{lego/fig-dist-vma}
The lower level stores user process virtual memory area ({\em vma}) information,
such as virtual address ranges and permissions, in {\em vma trees}.
The owner of an active \vregion\ stores a vma tree for the \vregion,
with each node in the tree being one vma.
A user-perceived virtual memory range can be split across multiple \mcomponent{}s,
but only one \mcomponent{} owns a \vregion.
\vregion\ owners perform the actual virtual memory allocation and vma tree set up.
A home \mcomponent{} can also be the owner of \vregion{}s,
but the home \mcomponent{} does not maintain any information about memory that belongs to \vregion{}s owned by other \mcomponent{}s.
It only keeps the information of which \mcomponent{} owns a \vregion\ (in a {\em \vregion\ array})
and how much free virtual memory space is left in each \vregion.
These metadata can be easily reconstructed if a home \mcomponent{} fails.
When an application process wants to allocate a virtual memory space,
the \pcomponent{} forwards the allocation request
to its home \mcomponent{} (\circled{1} in Figure~\ref{fig-dist-vma}).
The home \mcomponent{} uses its stored information of available virtual memory space in \vregion{}s
to find one or more \vregion{}s that best fit the requested amount of virtual memory space.
If no active \vregion\ can fit the allocation request, the home \mcomponent{} makes a new \vregion\ active and
contacts the \gmm\ (\circled{2} and \circled{3}) to find a candidate \mcomponent{} to own the new \vregion.
\gmm\ makes this decision based on available physical memory space and access load on different \mcomponent{}s (\S~\ref{sec:lego:grm}).
If the candidate \mcomponent\ is not the home \mcomponent{}, the home \mcomponent{} next forwards the request to that \mcomponent\ (\circled{4}),
which then performs local virtual memory area allocation and sets up the proper vma tree.
Afterwards, the \pcomponent{} directly sends memory access requests to the owner of the \vregion\ that the memory access falls into
(\eg, \circled{a} and \circled{c} in Figure~\ref{fig-dist-vma}).
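The home \mcomponent{}'s allocation step can be sketched in C as a first-fit search over \vregion{}s, falling back to the \gmm\ when a new \vregion\ must be activated (all names and the first-fit policy are our illustration):
\begin{verbatim}
#include <stdint.h>

#define NR_VREGIONS 512                /* e.g., 512GB of 1GB vRegions */
struct vregion { uint16_t owner; uint64_t free_bytes; int active; };
static struct vregion vr[NR_VREGIONS];

/* placeholder GMM RPC: returns the least-loaded mComponent */
static uint16_t gmm_pick_owner(void) { return 0; }

static int alloc_vregion(uint64_t len)
{
    for (int i = 0; i < NR_VREGIONS; i++)   /* first fit in active ones */
        if (vr[i].active && vr[i].free_bytes >= len) {
            vr[i].free_bytes -= len;
            return i;                       /* forward to vr[i].owner */
        }
    for (int i = 0; i < NR_VREGIONS; i++)   /* activate a new vRegion */
        if (!vr[i].active) {
            vr[i].active     = 1;
            vr[i].owner      = gmm_pick_owner();
            vr[i].free_bytes = (1ull << 30) - len;
            return i;
        }
    return -1;                              /* out of virtual space */
}
\end{verbatim}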
\lego' mechanism of distributed virtual memory management is efficient and it cleanly separates memory operations from \pcomponent{}s.
\pcomponent{}s hand over all memory-related system call requests to \mcomponent{}s
and only cache a copy of the \vregion\ array for fast memory accesses.
To fill a cache miss or to flush a dirty cache line,
a \pcomponent{} looks up the cached \vregion\ array to find its owner \mcomponent{} and sends the request to it.
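On the \pcomponent\ side, the cached \vregion\ array makes this routing a single shift and a table lookup, as the following illustrative C sketch shows:
\begin{verbatim}
#include <stdint.h>

#define VREGION_SHIFT 30               /* 1GB vRegions */
#define NR_VREGIONS   512

static uint16_t vregion_owner[NR_VREGIONS];  /* cached at pComponent */

static uint16_t route_request(uint64_t vaddr)
{
    uint64_t idx = (vaddr >> VREGION_SHIFT) % NR_VREGIONS;
    return vregion_owner[idx];  /* mComponent to send fetch/flush to */
}
\end{verbatim}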
\textit{\uline{Physical memory space management.}}
Each \mcomponent\ manages the physical memory allocation for data that falls into the
\vregion\ that it owns.
Each \mcomponent{} can choose its own way of allocating physical memory
and its own mechanism of virtual-to-physical memory address mapping.
\subsubsection{Optimization on Memory Accesses}
\label{sec:lego:zerofill}
With our strawman memory management design,
all \excache\ misses will go to \mcomponent{}s.
We soon found that a large performance overhead in running real applications
is caused by filling empty \excache, \ie, {\em cold misses}.
To reduce the performance overhead of cold misses, we propose a technique
to avoid accessing \mcomponent\ on first memory accesses.
The basic idea is simple: since the initial content of anonymous memory
(non-file-backed memory) is zero, %undefined and can be any data,
\lego\ can directly allocate a cache line with empty content
in \excache\ for the first access to
anonymous memory instead of going to \mcomponent\
(we call such cache lines {\em p-local lines}).
When an application creates a new anonymous memory region, the process \microos\ records its address range and permission.
The application's first access to this region will be an \excache\ miss and it will trap to \lego.
\lego\ process \microos\ then allocates an \excache\ line, fills it with zeros,
and sets its R/W bit according to the recorded memory region's permission.
Before this p-local line is evicted, it only lives in the \excache.
No \mcomponent{}s are aware of it or will allocate physical memory or a virtual-to-physical memory mapping for it.
When a p-local cache line becomes dirty and needs to be flushed,
the process \microos\ sends it to its owner \mcomponent, which then
allocates physical memory space and establishes a virtual-to-physical memory mapping.
Essentially, \lego\ {\em delays physical memory allocation until write time}.
Notice that it is safe to only maintain p-local lines at a \pcomponent{} \excache\
without any other \pcomponent{}s knowing about them,
since \pcomponent{}s in \lego\ do not share any memory
and other \pcomponent{}s will not access this data.
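A C sketch of the first-touch handler under these assumptions (the line layout and names are ours):
\begin{verbatim}
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

struct exline {
    uint64_t vaddr;
    bool valid, rw, plocal, dirty;
    uint8_t data[64];
};
struct anon_region { uint64_t start, end; bool writable; };

static void first_touch(struct exline *l,
                        const struct anon_region *r, uint64_t va)
{
    memset(l->data, 0, sizeof l->data);  /* anonymous memory reads as zero */
    l->vaddr  = va;
    l->rw     = r->writable;  /* permission from the recorded region */
    l->plocal = true;         /* no mComponent knows this line yet */
    l->dirty  = false;        /* physical allocation deferred to eviction */
    l->valid  = true;
}
\end{verbatim}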
\subsection{Storage Management}
\lego\ supports a hierarchical file interface that is backward compatible with POSIX
through its \vnode\ abstraction.
Users can store their directories and files under their \vnode{}s' mount points
and perform normal read, write, and other accesses to them.
\lego\ implements core storage functionalities at \scomponent{}s.
To cleanly separate storage functionalities, \lego\ uses a stateless storage server design,
where each I/O request to the storage server contains all the information needed to
fulfill this request, \eg, full path name, absolute file offset,
similar to the server design in NFS v2~\cite{Sandberg-NFS-85}.
While \lego\ supports a hierarchical file interface,
internally the \lego\ storage \microos\ treats (full) directory and file paths just as unique names of files
and places all files of a \vnode\ under one internal directory at the \scomponent{}.
To locate a file, \lego\ storage \microos\ maintains a simple hash table with the full paths of files (and directories) as keys.
From our observation, most datacenter applications only have a few hundred files or less.
Thus, a simple hash table for a whole \vnode\ is sufficient to achieve good lookup performance.
Using a non-hierarchical file system implementation largely reduces the complexity of \lego' file system,
making it possible for a storage \microos\ to fit in storage device controllers that have limited processing power~\cite{Willow}.
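A minimal C sketch of such a flat, full-path-keyed table (the hash function and chaining layout are our choices, not necessarily \lego' implementation):
\begin{verbatim}
#include <stdint.h>
#include <string.h>

#define NBUCKETS 1024
struct fentry { char *path; uint64_t blockno; struct fentry *next; };
static struct fentry *buckets[NBUCKETS];

static uint64_t fnv1a(const char *s)  /* FNV-1a string hash */
{
    uint64_t h = 1469598103934665603ull;
    for (; *s; s++) { h ^= (unsigned char)*s; h *= 1099511628211ull; }
    return h;
}

static struct fentry *file_lookup(const char *fullpath)
{
    struct fentry *e = buckets[fnv1a(fullpath) % NBUCKETS];
    for (; e; e = e->next)
        if (strcmp(e->path, fullpath) == 0)
            return e;
    return NULL;
}
\end{verbatim}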
\lego\ places the storage buffer cache at \mcomponent{}s
rather than at \scomponent{}s, because \scomponent{}s can only host a limited amount of internal memory.
\lego\ memory \microos\ manages the storage buffer cache by simply performing insertion, lookup, and deletion of buffer cache entries.
For simplicity and to avoid coherence traffic, we currently place the buffer cache of one file
under one \mcomponent{}.
When receiving a file read system call, the \lego\ process \microos\ first uses its extended Linux state layer to
look up the full path name, then passes it with the requested offset and size to the \mcomponent\ that holds the file's buffer cache.
This \mcomponent\ will look up the buffer cache and return the data to the \pcomponent\ on a hit.
On a miss, the \mcomponent\ will forward the request to the \scomponent\ that stores the file,
which will fetch the data from the storage device and return it to the \mcomponent.
The \mcomponent\ will then insert it into the buffer cache and return it to the \pcomponent.
Write and fsync requests work in a similar fashion.
\subsection{Global Resource Management}
\label{sec:lego:grm}
\lego\ uses a two-level resource management mechanism.
At the higher level, \lego\ uses three global resource managers for process, memory, and storage resources,
{\em \gpm, \gmm}, and {\em \gsm}.
These global managers perform coarse-grained global resource allocation and load balancing,
and they can run on one normal Linux machine.
Global managers only maintain approximate resource usage and load information.
They update their information either when they make allocation decisions
or by periodically asking \microos{}s in the cluster.
At the lower level, each \microos\ can employ its own policies and mechanisms to manage its local resources.
For example, process \microos{}s allocate new threads locally
and only ask \gpm\ when they need to create a new process.
\gpm\ chooses the \pcomponent{} that has the least amount of threads based on its maintained approximate information.
Memory \microos{}s allocate virtual and physical memory space on their own.
Only the home \mcomponent{} asks \gmm\ when it needs to allocate a new \vregion.
\gmm\ maintains approximate physical memory space usage and memory access load by periodically asking \mcomponent{}s
and chooses the \mcomponent{} with the least load among all the ones that have at least one \vregion's worth of free physical memory.
\lego\ decouples the allocation of different resources and
can freely allocate each type of resource from a pool of components.
Doing so largely improves resource packing compared to a monolithic server cluster
that packs all types of resources a job requires within one physical machine.
Also note that \lego\ allocates hardware resources only {\em on demand},
\ie, when applications actually create threads or access physical memory.
This on-demand allocation strategy further improves \lego' resource packing efficiency
and allows more aggressive over-subscription in a cluster.
\subsection{Reliability and Failure Handling}
\label{sec:lego:failure}
After disaggregation, \pcomponent{}s, \mcomponent{}s, and \scomponent{}s can all fail independently.
Our goal is to build a reliable disaggregated cluster that has the same or lower application failure rate
than a monolithic cluster.
As a first (and important) step towards achieving this goal, %building a reliable disaggregated cluster,
we focus on providing memory reliability by handling \mcomponent\ failure in the current version of \lego\ because of three observations.
First, when distributing an application's memory to multiple \mcomponent{}s,
the probability of memory failure increases and not handling \mcomponent\ failure will cause applications to fail more often
on a disaggregated cluster than on monolithic servers.
Second, since most modern datacenter applications
already provide reliability to their distributed storage data %(usually through some form of redundancy)
and the current version of \lego\ does not split a file across \scomponent{}s,
we leave providing storage reliability to applications.
Finally, since \lego\ does not split a process across \pcomponent{}s,
the chance of a running application process being affected by the failure of a \pcomponent\ is similar to
one affected by the failure of a processor in a monolithic server.
Thus, we currently do not deal with \pcomponent\ failure and leave it for future work.
A naive approach to handle memory failure is to perform a full replication of memory content over two or more \mcomponent{}s.
This method would require at least 2\x\ memory space,
making the monetary and energy cost of providing reliability prohibitively high (the same reason why RAMCloud~\cite{Ongaro11-RamCloud} does not replicate in memory).
Instead, since losing in-memory data does not affect user persistent data,
we propose a space- and performance-efficient approach that provides in-memory data reliability in a best-effort manner.
We use one primary \mcomponent, one secondary \mcomponent, and a backup file in \scomponent\ for each vma.
An \mcomponent{} can serve as the primary for some vma and the secondary for others.
The primary stores all memory data and metadata.
\lego\ maintains a small append-only log at the secondary \mcomponent{}
and also replicates the vma tree there.
When a \pcomponent{} flushes a dirty \excache\ line,
\lego\ sends the data to both primary and secondary in parallel (step \circled{a} and \circled{b} in Figure~\ref{fig-dist-vma})
and waits for both to reply (\circled{c} and \circled{d}).
In the background, the secondary \mcomponent\ flushes the backup log to a \scomponent{},
which writes it to an append-only file.
If the flushing of a backup log to \scomponent\ is slow and the log is full,
we will skip replicating application memory.
If the primary fails during this time, \lego\ simply reports an error to the application.
Otherwise when a primary \mcomponent\ fails, we can recover memory content
by replaying the backup logs on \scomponent\ and in the secondary \mcomponent.
When a secondary \mcomponent\ fails, we do not reconstruct anything
and start replicating to a new backup log on another \mcomponent{}.
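A C sketch of the replicated flush path (the asynchronous RPC stubs are placeholders for \lego' network layer):
\begin{verbatim}
#include <stdint.h>
#include <stdbool.h>

struct flush_req { uint64_t vaddr; const uint8_t *data; };

/* placeholder async RPC: returns a wait token */
static int  rpc_send_async(int node, const struct flush_req *r)
{ (void)node; (void)r; return 0; }
static bool rpc_wait(int token) { (void)token; return true; }

static bool flush_line(int primary, int secondary,
                       const struct flush_req *r)
{
    int t1 = rpc_send_async(primary,   r);  /* data + metadata       */
    int t2 = rpc_send_async(secondary, r);  /* append to backup log  */
    return rpc_wait(t1) && rpc_wait(t2);    /* ack from both needed  */
}
\end{verbatim}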
|
If $f$ and $g$ are analytic on a set $S$, then $f \cdot g$ is analytic on $S$. |
// SPDX-License-Identifier: Apache-2.0
//
// Copyright 2022, Craig Kolb
// Licensed under the Apache License, Version 2.0
//
#define plugInName "FITS"
#define plugInCopyrightYear "2021"
#define plugInDescription "FITS file format plug-in."
#define vendorName "Astimatist"
#define plugInAETEComment "FITS file format module"
#define plugInSuiteID 'sdK4'
#define plugInClassID 'simP'
#define plugInEventID typeNull // must be this
#include "PIDefines.h"
#include "PIResourceDefines.h"
#if __PIMac__
#include "PIGeneral.r"
#include "PIUtilities.r"
#elif defined(__PIWin__)
#define Rez
#include "PIGeneral.h"
#include "PIUtilities.r"
#endif
#include "PITerminology.h"
#include "PIActions.h"
resource 'PiPL' (ResourceID, plugInName " PiPL", purgeable)
{
{
Kind { ImageFormat },
Name { plugInName },
Version { (latestFormatVersion << 16) | latestFormatSubVersion },
Component { ComponentNumber, plugInName },
#if Macintosh
#if defined(__arm64__)
CodeMacARM64 { "PluginMain" },
#endif
#if defined(__x86_64__)
CodeMacIntel64 { "PluginMain" },
#endif
#elif MSWindows
CodeEntryPointWin64 { "PluginMain" },
#endif
SupportsPOSIXIO {},
HasTerminology { plugInClassID,
plugInEventID,
ResourceID,
vendorName " " plugInName },
SupportedModes
{
doesSupportBitmap, doesSupportGrayScale,
noIndexedColor, doesSupportRGBColor,
doesSupportCMYKColor, doesSupportHSLColor,
doesSupportHSBColor, noMultichannel,
noDuotone, doesSupportLABColor
},
EnableInfo { "true" },
PlugInMaxSize { 2147483647, 2147483647 },
FormatMaxSize { { 32767, 32767 } },
FormatMaxChannels { { 1, 24, 24, 24, 24, 24,
24, 24, 24, 24, 24, 24 } },
FmtFileType { 'FITS', '8BIM' },
//ReadTypes { { '8B1F', ' ' } },
FilteredTypes { { '8B1F', ' ' } },
ReadExtensions { { 'FITS', 'FIT ' } },
WriteExtensions { { 'FITS', 'FIT ' } },
FilteredExtensions { { 'FITS', 'FIT ' } },
FormatFlags { fmtSavesImageResources,
fmtCanRead,
fmtCanWrite,
fmtCanWriteIfRead,
fmtCanWriteTransparency,
fmtCanCreateThumbnail },
FormatICCFlags { iccCannotEmbedGray,
iccCanEmbedIndexed,
iccCannotEmbedRGB,
iccCanEmbedCMYK }
}
};
|
module Data.Functor.Representable
import Data.Vect
import Data.Vect.Extra
%default total
public export
interface Functor f => Representable f rep | f where
index : f a -> rep -> a
tabulate : (rep -> a) -> f a
public export
[Compose] Representable f rep => Representable g rep' => Representable (f . g) (rep, rep') using Functor.Compose where
index x (i, j) = (x `index` i) `index` j
tabulate = map tabulate . tabulate . curry
public export
{n : Nat} -> Representable (Vect n) (Fin n) where
index = flip index
tabulate f = mapWithIndex (\i, () => f i) $ replicate n ()
|
#' diag
#'
#' Create a diagonal matrix from a vector, or alternatively, get the vector of
#' diagonal entries from a matrix.
#'
#' @param x
#' fmlmat vector or matrix.
#' @param nrow,ncol
#' The dimension of the resulting matrix if specified.
#' @param names
#' Ignored.
#'
#' @aliases diag,fmlmat-method
#'
#' @name diag
#' @rdname diag
#' @export
setMethod("diag", signature(x="fmlmat"),
function(x, nrow, ncol, names=TRUE)
{
# get vector from matrix
if (missing(nrow) && missing(ncol))
{
if (!is_mat(DATA(x)))
stop("'x' is an array, but not one-dimensional.")
ret = skeleton_vec(x)
DATA(x)$diag(ret)
}
# create matrix
else
{
if (!is_vec(DATA(x)))
stop("'nrow' or 'ncol' cannot be specified when 'x' is a matrix")
if (missing(ncol))
ncol = nrow
if (missing(nrow))
nrow = ncol
ret = skeleton_mat(x)
ret$resize(nrow, ncol)
ret$fill_diag(DATA(x))
}
wrapfml(ret)
}
)
|
function C = DiagCorr(a,b)
% DiagCorr Column-wise Pearson correlation between matching columns of a and b.
% e.g. dim of A: nFrames*nCells, like in Matlab 'corr' function, but only
% the diagonal of corr(a,b) is computed: C(i) = corr(a(:,i), b(:,i)).
An=bsxfun(@minus,a,mean(a,1));              % center each column
Bn=bsxfun(@minus,b,mean(b,1));
An=bsxfun(@times,An,1./sqrt(sum(An.^2,1))); % scale columns to unit norm
Bn=bsxfun(@times,Bn,1./sqrt(sum(Bn.^2,1)));
C=sum(An.*Bn,1);                            % column-wise dot products
end |
The Amoyo Performing Arts Foundation is a registered non-profit organisation for children based in Hout Bay, Cape Town, South Africa.
Our MISSION is to uplift the communities of Imizamo Yethu and Hangberg, one child at a time, through an after-school and holiday programme offering high-quality dance, drama, music and performance classes.
Our VISION is that each Amoyo child will continue into tertiary education after school, equipped not only with performing arts skills but also with the life skills, self-esteem, self-confidence and self-discipline to conquer life successfully.
Our PHILOSOPHY is one of gratitude. Amoyo means “spirit of appreciation” (appreciating everything and everyone) and our unique approach to upskilling and empowering the children of Hout Bay is already having a huge impact, not only on the children but on their families and the broader community, too.
Amoyo is so much more than just a performing arts training programme. Our classes are a platform to engage with our youth, to show them we care about them and about the choices they are making, and that we are there to support and help them develop into successful, employable adults. The Amoyo journey aligns skill development with the self-respect, respect for others, integrity, emotional intelligence and communication skills needed to survive in the fast-paced professional world outside of their immediate environment of poverty, neglect, and criminal and gang activity.
It’s amazing what you’re doing at Amoyo. I look forward to visiting when I’m back in Cape Town soon.
At AMOYO, we greatly appreciate any donation, big or small, enabling us to continue to help enrich and support the lives of these children.
Amoyo is an after school and weekend performing arts training platform (dance, acting, singing) working with over 150 impoverished children from the townships in Hout Bay.
Mbuzeli is going to be our new fundraiser and has extensive experience in the NGO sector, specialising in resource development and organizational sustainability.
He has gained and applied almost two decades of know-how in various settings in the non-profit space. Having studied Administration and Human Resources Management at the University of Fort Hare, he enrolled at the Damelin Correspondence College pursuing a Certificate in General Management.
As a family man, he brings to Amoyo love for and commitment towards youth development. |
[STATEMENT]
lemma set_of_real_interval_subset: "set_of (real_interval x) \<subseteq> set_of (real_interval y)"
if "set_of x \<subseteq> set_of y"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. set_of (real_interval x) \<subseteq> set_of (real_interval y)
[PROOF STEP]
using that
[PROOF STATE]
proof (prove)
using this:
set_of x \<subseteq> set_of y
goal (1 subgoal):
1. set_of (real_interval x) \<subseteq> set_of (real_interval y)
[PROOF STEP]
by (auto simp: set_of_eq) |
[STATEMENT]
lemma leftmost_lhs[simp]: "lhs (leftmostCmd c) = lhs c"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. lhs (leftmostCmd c) = lhs c
[PROOF STEP]
by (induction c) auto |
using LombScargle
using Measurements
using Base.Test
ntimes = 1001
# Observation times
t = linspace(0.01, 10pi, ntimes)
# Randomize times
t += step(t)*rand(ntimes)
t = collect(t)
# The signal
s = sinpi(t) + cospi(2t) + rand(ntimes)
# Frequency grid
nfreqs = 10000
freqs = linspace(0.01, 3, nfreqs)
# Randomize frequency grid
freqs += step(freqs)*rand(nfreqs)
freqs = collect(freqs)
# Use "freqpower" just to call that function and increase code coverage.
# "autofrequency" function is tested below.
@test_approx_eq_eps freqpower(lombscargle(t, s, frequencies=freqs, fit_mean=false))[2] freqpower(lombscargle(t, s, frequencies=freqs, fit_mean=true))[2] 5e-3
# Simple signal, without any randomness
t = collect(linspace(0.01, 10pi, ntimes))
s = sin(t)
pgram1 = lombscargle(t, s, fit_mean=false)
pgram2 = lombscargle(t, s, fit_mean=true)
@test_approx_eq_eps power(pgram1) power(pgram2) 2e-7
pgram3 = lombscargle(t, s, center_data=false, fit_mean=false)
pgram4 = lombscargle(t, s, center_data=false, fit_mean=true)
@test_approx_eq_eps power(pgram3) power(pgram4) 3e-7
# Test the values in order to prevent wrong results in both algorithms
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, fit_mean=true)) [0.029886871262324886,0.0005456198989410226,1.912507742056023e-5, 4.54258409531214e-6, 1.0238342782430832e-5]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, fit_mean=false)) [0.02988686776042212, 0.0005456197937194695,1.9125076826683257e-5,4.542583863304549e-6,1.0238340733199874e-5]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, center_data=false, fit_mean=true)) [0.029886871262325004,0.0005456198989536703,1.9125077421448458e-5,4.5425840956285145e-6,1.023834278337881e-5]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, center_data=false, fit_mean=false)) [0.029886868328967767,0.0005456198924872134,1.9125084251687147e-5,4.542588504467314e-6,1.0238354525870936e-5]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, normalization="model")) [0.030807614469885718,0.0005459177625354441,1.9125443196143085e-5,4.54260473047638e-6,1.0238447607164715e-5]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, normalization="log")) [0.030342586720560734,0.0005457688036440774,1.9125260307148152e-5,4.542594412890309e-6,1.0238395194654036e-5]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, normalization="psd")) [7.474096700871138,0.1364484040771917,0.004782791641128195,0.0011360075968541799,0.002560400630125523]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, normalization="Scargle")) [0.029886871262324904,0.0005456198989410194,1.912507742056126e-5,4.54258409531238e-6,1.0238342782428552e-5]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, normalization="HorneBaliunas")) [14.943435631162451,0.2728099494705097,0.009562538710280628,0.00227129204765619,0.005119171391214276]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, normalization="Cumming")) [15.372999620472974,0.2806521440709115,0.009837423440787873,0.0023365826071340815,0.005266327088140394]
@test_throws ErrorException lombscargle(t, s, frequencies=0.2:0.2:1, normalization="foo")
# Test signal with uncertainties
err = collect(linspace(0.5, 1.5, ntimes))
@test_approx_eq power(lombscargle(t, s, err, frequencies=0.1:0.1:1, fit_mean=true)) [0.06659683848818687,0.09361438921056377,0.006815919926284516,0.0016347568319229223,0.0005385706045724484,0.00021180745624003642,0.00010539881897690343,7.01610752020905e-5,5.519295593372065e-5,4.339157565349008e-5]
@test_approx_eq power(lombscargle(t, s, err, frequencies=0.1:0.1:1, fit_mean=false)) [0.0692080444168825,0.09360343748833044,0.006634919855866448,0.0015362074096358814,0.000485825178683968,0.000181798596773626,8.543735242380012e-5,5.380000205539795e-5,4.010727072660524e-5,2.97840883747593e-5]
@test_approx_eq power(lombscargle(t, s, err, frequencies=0.1:0.1:1, fit_mean=false, center_data=false)) [0.06920814049261209,0.09360344864985352,0.006634919960009565,0.0015362072871144769,0.0004858250831632676,0.00018179850370583626,8.543727416919218e-5,5.379994730581837e-5,4.0107232867763e-5,2.9784059487535237e-5]
@test power(lombscargle(t, s, err)) ==
power(lombscargle(t, measurement(s, err)))
# Test fast method
t = linspace(0.01, 10pi, ntimes)
s = sin(t)
pgram5 = lombscargle(t, s, maximum_frequency=30, fast=true)
pgram6 = lombscargle(t, s, maximum_frequency=30, fast=false)
@test_approx_eq_eps power(pgram5) power(pgram6) 2e-6
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, fast=true, fit_mean=true)) [0.029886871262324963,0.0005453105325981758,1.9499330722168046e-5,2.0859593514888897e-6,1.0129019249708592e-5]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, fast=true, fit_mean=false)) [0.029886867760422008,0.0005453104197620392,1.9499329579010375e-5,2.085948496002562e-6,1.0128073536975395e-5]
@test_approx_eq power(lombscargle(t, s, frequencies=0.2:0.2:1, fast=true, center_data=false, fit_mean=false)) [0.029886868328967718,0.0005453105220405559,1.949931928224576e-5,2.0859802347505357e-6,1.0127777365273726e-5]
@test_approx_eq power(lombscargle(t, s, err, frequencies=0.2:0.2:1, fast=true, fit_mean=true)) [0.09230959166317655,0.0015654929813132702,0.00019405185108843607,6.0898671943944786e-5,6.0604505038256276e-5]
@test_approx_eq power(lombscargle(t, s, err, frequencies=0.2:0.2:1, fast=true, fit_mean=false)) [0.09219168665786258,0.0015654342453078724,0.00019403694017215876,6.088944186950046e-5,6.05771360378885e-5]
@test_approx_eq power(lombscargle(t, s, err, frequencies=0.2:0.2:1, fast=true, center_data=false, fit_mean=false)) [0.09360344864985332,0.0015354489715019735,0.0001784388515190763,4.744247354697125e-5,3.240223498703448e-5]
@test power(lombscargle(t, s, err)) ==
power(lombscargle(t, measurement(s, err)))
##################################################
### Testing utilities
# Test findmaxfreq and findmaxpower
@test_approx_eq findmaxfreq(pgram1) [31.997145470342]
@test_approx_eq findmaxfreq(pgram1, 0.965) [0.15602150741832602,31.685102455505348,31.997145470342,63.52622641842902,63.838269433265665]
t = linspace(0.01, 10pi, 1001)
s = sinpi(2t) + cospi(4t)
p = lombscargle(t, s, maximum_frequency=4)
@test_approx_eq findmaxfreq(p, [0.9, 1.1]) 1.0029954048320957
@test_approx_eq findmaxfreq(p, [1.9, 2.1]) 2.002806697267899
@test_approx_eq findmaxpower(pgram1) 0.9695017551608017
# Test autofrequency function
@test_approx_eq LombScargle.autofrequency(t) 0.003184112396292367:0.006368224792584734:79.6824127172165
@test_approx_eq LombScargle.autofrequency(t, minimum_frequency=0) 0.0:0.006368224792584734:79.6792286048202
@test_approx_eq LombScargle.autofrequency(t, maximum_frequency=10) 0.003184112396292367:0.006368224792584734:9.99492881196174
# Test probabilities and FAP
t = collect(linspace(0.01, 10pi, 101))
s = sin(t)
for norm in ("standard", "Scargle", "HorneBaliunas", "Cumming")
P = lombscargle(t, s, normalization = norm)
for z_0 in (0.1, 0.5, 0.9)
@test_approx_eq prob(P, probinv(P, z_0)) z_0
@test_approx_eq fap(P, fapinv(P, z_0)) z_0
end
end
P = lombscargle(t, s, normalization = "log")
@test_throws ErrorException prob(P, 0.5)
@test_throws ErrorException probinv(P, 0.5)
# Test model function
@test_approx_eq s LombScargle.model(t, s, 1/2pi, center_data=false, fit_mean=false)
s = sinpi(t) + pi*cospi(t) + e
@test_approx_eq s LombScargle.model(t, measurement(s, ones(s)), 0.5)
# Test extirpolation function
x = linspace(0, 10)
y = sin(x)
@test_approx_eq LombScargle.extirpolate(x, y) [0.39537718210649553,3.979484140636793,4.833090108345013,0.506805556164743,-3.828112427525919,-4.748341359084166,-1.3022050566901917,3.3367666084342256,5.070478111668922,1.291245296032218,-0.8264466821981216,0.0,0.0]
@test_approx_eq LombScargle.extirpolate(x, y, 11) LombScargle.extirpolate(x, y)[1:11]
# Test trig_sum
C, S = LombScargle.trig_sum(x, y, 1, 10)
@test_approx_eq S [0.0,0.3753570125888358,0.08163980192703546,-0.10139634351774979,-0.4334223744905633,-2.7843373311769777,0.32405810159838055,0.05729663600471602,-0.13191736591325876,-0.5295781583202946]
@test_approx_eq C [8.708141477890015,-0.5402668064176129,-0.37460815054027985,-0.3793457539084364,-0.5972351546196192,14.612204307982497,-0.5020253140297526,-0.37724493022381034,-0.394096831764578,-0.6828241623474718]
|
-- {-# OPTIONS -v tc.term.exlam:100 -v extendedlambda:100 -v int2abs.reifyterm:100 -v tc.with:100 -v tc.mod.apply:100 #-}
module Issue778b (Param : Set) where
open import Issue778M Param
data D : (Nat → Nat) → Set where
d : D pred → D pred
test : (f : Nat → Nat) → D f → Nat
test .pred (d x) = bla
where bla : Nat
bla with x
... | (d y) = test pred y
|
!--------------------------------------------------------------------------------------------------
! MODULE: cgto_mod
!> @author Marin Sapunar, Ruđer Bošković Institute
!> @date March, 2019
!
!> @brief Contains the cgto_subshell type and accompanying subroutines.
!--------------------------------------------------------------------------------------------------
module cgto_mod
use global_defs
use ang_mom_defs, only : amp
implicit none
private
public :: cgto_subshell
public :: cgto_one_el
public :: gto_norms
public :: cgto_norm
public :: cngto_norm
!----------------------------------------------------------------------------------------------
! TYPE: cgto_subshell
!
!> @brief Contracted Gaussian Type Orbital
!----------------------------------------------------------------------------------------------
type cgto_subshell
integer :: l !< Angular momentum.
character(len=1) :: typ !< Type of subshell.
integer :: n_prim = 0 !< Number of primitive Gaussians.
real(dp), allocatable :: z(:) !< Zeta coefficients. Dimensions: n_prim.
real(dp), allocatable :: b(:) !< Beta coefficients. Dimensions: n_prim.
real(dp), allocatable :: cb(:, :) !< Beta coefficients normalized for each cartesian function.
!! Dimensions: n_prim x n_cart(l)
logical :: init_cb = .false.
logical :: inorm_b = .false.
logical :: norm_gto = .false. !< Normalized primitive GTOs.
contains
procedure :: norm_b => cngto_norm_b
procedure :: gen_cb => cgto_gen_cb
end type cgto_subshell
contains
!----------------------------------------------------------------------------------------------
! SUBROUTINE: cgto_gen_cb
!> @brief Normalize contracted Cartesian GTO.
!> @details
!! Multiply b coefficients by the norms of their respective primitive GTOs (if norm_gto) and by
!! the norm of the full expansion for each combination of {lx, ly, lz} with lx+ly+lz=l.
!----------------------------------------------------------------------------------------------
subroutine cgto_gen_cb(self, norm_ccgto, norm_gto)
class(cgto_subshell) :: self
logical, intent(in) :: norm_ccgto
logical, intent(in) :: norm_gto
integer :: i, n_cart
integer, allocatable :: lc(:, :)
real(dp) :: norm
n_cart = amp%n_cart(self%l)
if (allocated(self%cb)) deallocate(self%cb)
allocate(self%cb(self%n_prim, n_cart))
allocate(lc(1:3,1:amp%n_cart(self%l)), source=amp%cart(self%l))
do i = 1, n_cart
if (norm_gto) then
self%cb(:, i) = self%b * gto_norms(self%z, lc(1, i), lc(2, i), lc(3, i))
self%norm_gto = .true.
else
self%cb(:, i) = self%b
end if
if (norm_ccgto) then
norm = cgto_norm(self%z, self%cb(:, i), lc(1, i), lc(2, i), lc(3, i))
self%cb(:, i) = self%cb(:, i) * norm
end if
end do
self%init_cb = .true.
end subroutine cgto_gen_cb
!----------------------------------------------------------------------------------------------
! FUNCTION: cngto_norm_b
!
!> @brief Normalize subshell coefficients assuming primitive Gaussians will also be normalized.
!> @details
!! See documentation of cngto_norm function.
!> @note
!! This subroutine is redundant if norm_cb is used to normalize the cb coefficients,
!! but it is the standard normalization for the b coefficients prior to expanding the subshell.
!----------------------------------------------------------------------------------------------
subroutine cngto_norm_b(self)
class(cgto_subshell) :: self
self%b = self%b * cngto_norm(self%z, self%b, self%l)
self%inorm_b = .true.
end subroutine cngto_norm_b
!----------------------------------------------------------------------------------------------
! FUNCTION: cgto_norm
!> @brief Norm of a contracted GTOs.
!----------------------------------------------------------------------------------------------
function cgto_norm(z, b, lx, ly, lz) result(nsum)
real(dp), intent(in) :: z(:)
real(dp), intent(in) :: b(:)
integer, intent(in) :: lx
integer, intent(in) :: ly
integer, intent(in) :: lz
real(dp):: nsum
real(dp) :: pow, ggg
integer :: i, j, n_prim
n_prim = size(b)
ggg = gamma(f1o2+lx)*gamma(f1o2+ly)*gamma(f1o2+lz)
pow = - num3/num2 - lx - ly - lz
nsum = num0
do i = 1, n_prim
do j = 1, n_prim
nsum = nsum + b(i) * b(j) * (z(i)+z(j))**pow
end do
end do
nsum = 1 / sqrt(nsum * ggg)
end function cgto_norm
!----------------------------------------------------------------------------------------------
! FUNCTION: cngto_norm
!
!> @brief Norm of a contracted GTO assuming primitive Gaussians will also be normalized.
!> @details
!! The cGTO is normalized and it is assumed that the b coefficients used here were not
!! already multiplied by the norms of the primitive functions. This norm depends only on the
!! total angular momentum and can be used once to normalize the coefficients of the subshell.
!! The norms of the individual primitive (Cartesian) Gaussians also depend on the components
!! of the angular momentum and can only be added once the subshell is expanded.
!----------------------------------------------------------------------------------------------
function cngto_norm(z, b, l) result(nsum)
real(dp), intent(in) :: z(:)
real(dp), intent(in) :: b(:)
integer, intent(in) :: l
real(dp):: nsum
integer:: i, j, n_prim
real(dp):: pow
n_prim = size(b)
nsum = num0
pow = f3o4 + l / num2
do i = 1, n_prim
do j = 1, n_prim
nsum = nsum + b(i) * b(j) * (2*z(i))**pow * (2*z(j))**pow / (z(i)+z(j))**(num2*pow)
end do
end do
nsum = 1 / sqrt(nsum)
end function cngto_norm
!----------------------------------------------------------------------------------------------
! FUNCTION: gto_norms
!> @brief Return norms of primitive cartesian gaussians in a contracted GTO.
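!> @details
!! For a primitive Cartesian Gaussian x^lx * y^ly * z^lz * exp(-z*r^2),
!!   N(z) = (2*z)**(3/4 + (lx+ly+lz)/2) / sqrt(Gamma(lx+1/2)*Gamma(ly+1/2)*Gamma(lz+1/2)),
!! evaluated below for the whole vector of exponents z at once.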
!----------------------------------------------------------------------------------------------
function gto_norms(z, lx, ly, lz) result(norm)
real(dp), intent(in) :: z(:)
integer, intent(in) :: lx
integer, intent(in) :: ly
integer, intent(in) :: lz
real(dp) :: norm(size(z))
real(dp) :: pow, ggg
pow = f3o4 + (lx + ly + lz) / num2
ggg = gamma(f1o2+lx)*gamma(f1o2+ly)*gamma(f1o2+lz)
norm = (2*z)**pow / sqrt(ggg)
end function gto_norms
!----------------------------------------------------------------------------------------------
! FUNCTION: cgto_one_el
!> @brief Calculate the one-electron operator matrix between two cGTO subshells.
!----------------------------------------------------------------------------------------------
function cgto_one_el(cg1, cg2, q1, qc, q2, lc) result(op)
    type(cgto_subshell), intent(inout) :: cg1
    type(cgto_subshell), intent(inout) :: cg2
real(dp), intent(in) :: q1(3)
real(dp), intent(in) :: q2(3)
integer, intent(in) :: lc(3)
real(dp), allocatable :: op(:, :)
real(dp) :: ssum, r2, z1, z2
real(dp), parameter :: pi_3o2 = pi**(num3/num2)
    real(dp), parameter :: tinydp = 1.0e-14_dp
    integer :: i, j, m, n
    real(dp), intent(in) :: qc(3)
    real(dp) :: os_int(3)
integer, allocatable :: l1(:, :), l2(:, :)
if (.not. cg1%init_cb) call cg1%gen_cb(.true., .true.)
if (.not. cg2%init_cb) call cg2%gen_cb(.true., .true.)
r2 = sum((q1-q2)**2)
allocate(l1(1:3,1:amp%n_cart(cg1%l)), source=amp%cart(cg1%l))
allocate(l2(1:3,1:amp%n_cart(cg2%l)), source=amp%cart(cg2%l))
allocate(op(amp%n_cart(cg1%l), amp%n_cart(cg2%l)), source=num0)
do i = 1, amp%n_cart(cg1%l)
do j = 1, amp%n_cart(cg2%l)
ssum = 0.0_dp
do m = 1, cg1%n_prim
z1 = cg1%z(m)
do n = 1, cg2%n_prim
z2 = cg2%z(n)
os_int = osrec_multi(l1(:, i), lc, l2(:, j), q1, qc, q2, z1, z2)
ssum = ssum + cg1%cb(m, i) * cg2%cb(n, j) * product(os_int) * pi_3o2 * &
& (z1+z2) ** (-num3/num2) * exp(-(z1*z2) * r2 / (z1+z2))
end do
end do
if (abs(ssum) > tinydp) op(i, j) = ssum
end do
end do
end function cgto_one_el
!----------------------------------------------------------------------------------------------
! FUNCTION: osrec_multi
!> @brief Calls the osrec function for each Cartesian dimension.
!----------------------------------------------------------------------------------------------
function osrec_multi(l1, lc, l2, q1, qc, q2, z1, z2) result(os_int)
integer, intent(in) :: l1(:)
integer, intent(in) :: lc(:)
integer, intent(in) :: l2(:)
real(dp), intent(in) :: q1(:)
real(dp), intent(in) :: qc(:)
real(dp), intent(in) :: q2(:)
real(dp), intent(in) :: z1
real(dp), intent(in) :: z2
real(dp) :: os_int(size(l1))
real(dp) :: qp, inv2p, inq(3)
integer :: i, inl(3)
inv2p = num1 / num2 / (z1 + z2)
do i = 1, size(l1)
inl = [l1(i), l2(i), lc(i)]
inq = [q1(i), q2(i), qc(i)]
qp = (z1*q1(i) + z2*q2(i)) / (z1+z2)
os_int(i) = osrec(inl, inq, qp, inv2p)
end do
end function osrec_multi
!----------------------------------------------------------------------------------------------
! FUNCTION: OSRec
!
!> @brief Obara-Saika recurrence relations for calculating integrals.
!> @details
!! The Obara-Saika recurrence relations for calculating the overlap and multipole integrals of
!! Cartesian Gaussian functions of arbitrary quantum number:
!! G(a,q,q0,lq) = (q-q0)^lq * exp(-a * (q-q0)^2)
!! Integral: S(lq1,lqb,lq3) = < G(a,q,q0a,lqa) | (q-q03)^lq3 | G(b,q,q0b,lqb) >
!! Recurrence relations:
!! S(i+1,j,k) = (qp-q0a) * S(i,j,k) + (i*S(i-1,j,k) + j*S(i,j-1,k) + k*S(i,j,k-1))/2(a+b)
!! S(i,j+1,k) = (qp-q0b) * S(i,j,k) + (i*S(i-1,j,k) + j*S(i,j-1,k) + k*S(i,j,k-1))/2(a+b)
!! S(i,j,k+1) = (qp-q03) * S(i,j,k) + (i*S(i-1,j,k) + j*S(i,j-1,k) + k*S(i,j,k-1))/2(a+b)
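!!  For example, with lqa=1, lqb=0, lq3=0 these give S(1,0,0) = (qp-q0a)*S(0,0,0),
!!  i.e. the p|s overlap along one axis is (qp-q0a) times the s|s overlap.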
!
!> @param inl - The 3 quantum numbers: lqa, lqb and lq3.
!> @param inq - Coordinates of the centres of the functions: q0a, q0b, q03
!> @param qp     - Position of the Gaussian overlap distribution: qp = (a*q0a + b*q0b)/(a+b)
!> @param inv2p - 1/(2(a+b))
!> @param sout - Result of applying the recurrence relations up to S(lqa,lqb,lq3).
!! For the value of the integral this should be multiplied by S(0,0,0).
!----------------------------------------------------------------------------------------------
function osrec(inl, inq, qp, inv2p) result(sout)
integer,intent(in) :: inl(3)
real(dp),intent(in) :: inq(3)
real(dp),intent(in) :: qp
real(dp),intent(in) :: inv2p
real(dp) :: sout
integer :: i, j, k, l1, l2, l3
integer :: ord(3)
real(dp) :: q1, q2, q3, qp1, qp2, qp3
real(dp), allocatable :: s(:,:,:)
if (inl(1)>=inl(2) .and. inl(1)>=inl(3)) then
if (inl(2)>=inl(3)) then
ord=[1,2,3]
else
ord=[1,3,2]
end if
else if (inl(2)>=inl(3)) then
if (inl(1)>=inl(3)) then
ord=[2,1,3]
else
ord=[2,3,1]
end if
else
if (inl(1)>=inl(2)) then
ord=[3,1,2]
else
ord=[3,2,1]
end if
end if
l1=inl(ord(1))
l2=inl(ord(2))
l3=inl(ord(3))
q1=inq(ord(1))
q2=inq(ord(2))
q3=inq(ord(3))
allocate(s(0:l1,0:l2,0:l3))
s = 0.0_dp
qp1 = qp - q1
qp2 = qp - q2
qp3 = qp - q3
s(0,0,0) = 1.0_dp
if (l1>0) s(1,0,0) = qp1
if (l2>0) then
s(0,1,0) = qp2
s(1,1,0) = qp1 * qp2 + inv2p
end if
do i=2,l1
s(i,0,0) = qp1*s(i-1,0,0) + inv2p*(i-1)*s(i-2,0,0)
end do
if (l2>0) then
do i=2,l1
s(i,1,0) = qp2*s(i,0,0) + inv2p * i *s(i-1,0,0)
end do
end if
do j=2,l2
        s(0,j,0) = qp2*s(0,j-1,0) + inv2p*(j-1)*s(0,j-2,0)
        s(1,j,0) = qp1*s(0,j,0)   + inv2p* j *s(0,j-1,0)
end do
do j=2,l2
do i=2,l1
s(i,j,0) = qp2*s(i,j-1,0) + inv2p * (i*s(i-1,j-1,0) + (j-1)*s(i,j-2,0))
end do
end do
do j=l2-l3+1,l2
do i=l3-l2+j,l1
s(i,j,1) = qp3*s(i,j,0) + inv2p * (i*s(i-1,j,0) + j*s(i,j-1,0))
end do
end do
do k=2,l3
do j=l2-l3+k,l2
do i=l3-l2+j-k+1,l1
s(i,j,k)=qp3*s(i,j,k-1) + inv2p * (i*s(i-1,j,k-1) + j*s(i,j-1,k-1) + (k-1)*s(i,j,k-2))
end do
end do
end do
sout = s(l1,l2,l3)
deallocate(s)
end function osrec
end module cgto_mod
|
[STATEMENT]
lemma R_g_evoll:
fixes \<phi> :: "('a::preorder) \<Rightarrow> 'b \<Rightarrow> 'b"
shows "P = (\<lambda>s. \<forall>t\<in>U s. (\<forall>\<tau>\<in>down (U s) t. G (\<phi> \<tau> s)) \<longrightarrow> R (\<phi> t s)) \<Longrightarrow>
(EVOL \<phi> G U) ; Ref \<lceil>R\<rceil> \<lceil>Q\<rceil> \<le> Ref \<lceil>P\<rceil> \<lceil>Q\<rceil>"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. P = (\<lambda>s. \<forall>t\<in>U s. (\<forall>\<tau>\<in>down (U s) t. G (\<phi> \<tau> s)) \<longrightarrow> R (\<phi> t s)) \<Longrightarrow> \<lparr>EVOL \<phi> G U\<rparr>Ref \<lceil>R\<rceil> \<lceil>Q\<rceil>\<lparr>Ref \<lceil>P\<rceil> \<lceil>Q\<rceil>\<rparr>
[PROOF STEP]
apply(rule_tac R=R in R_seq_law)
[PROOF STATE]
proof (prove)
goal (2 subgoals):
1. P = (\<lambda>s. \<forall>t\<in>U s. (\<forall>\<tau>\<in>down (U s) t. G (\<phi> \<tau> s)) \<longrightarrow> R (\<phi> t s)) \<Longrightarrow> EVOL \<phi> G U \<le> Ref \<lceil>P\<rceil> \<lceil>R\<rceil>
2. P = (\<lambda>s. \<forall>t\<in>U s. (\<forall>\<tau>\<in>down (U s) t. G (\<phi> \<tau> s)) \<longrightarrow> R (\<phi> t s)) \<Longrightarrow> Ref \<lceil>R\<rceil> \<lceil>Q\<rceil> \<le> Ref \<lceil>R\<rceil> \<lceil>Q\<rceil>
[PROOF STEP]
by (rule_tac R_g_evol_law, simp_all) |
#----- SOME FUNCTIONS -----#
# Remove unnecessary data received from the Spotify API, and
# select which columns are useful to us.
remove_junk <- function(data) {
select(data,Genre=playlist_name,
PlaylistLength=playlist_num_tracks,
Artist=artist_name,
Album=album_name,
Track=track_name,
Date=track_added_at,
Popularity=track_popularity,
Danceability=danceability,
Energy=energy,
Key=key,
Loudness=loudness,
Mode=mode,
Speechiness=speechiness,
Acousticness=acousticness,
Instrumentalness=instrumentalness,
Liveness=liveness,
Valence=valence,
Tempo=tempo,
Duration=duration_ms,
TimeSignature=time_signature,
KeyMode=key_mode)
}
get_analysis <- function(id) {
# return(get_tidy_audio_analysis(id) %>%
# select(segments) %>%
# unnest(segments) %>%
# select(start, duration, pitches))
return(get_tidy_audio_analysis(id) %>%
compmus_align(bars, segments) %>%
select(bars) %>% unnest(bars) %>%
mutate(
pitches =
map(segments,
compmus_summarise, pitches,
method = 'rms', norm = 'euclidean')) %>%
mutate(
timbre =
map(segments,
compmus_summarise, timbre,
method = 'mean')))
}
plot_chroma <- function(data) {
return(data %>%
mutate(pitches = map(pitches, compmus_normalise, 'chebyshev')) %>%
compmus_gather_chroma %>%
ggplot(
aes(
x = start + duration / 2,
width = duration,
y = pitch_class,
fill = value)) +
geom_tile() +
labs(x = 'Time (s)', y = NULL, fill = 'Magnitude') +
theme_minimal())
}
plot_cesptro <- function(data) {
return(data %>%
compmus_gather_timbre %>%
ggplot(
aes(
x = start + duration / 2,
width = duration,
y = basis,
fill = value)) +
geom_tile() +
labs(x = 'Time (s)', y = NULL, fill = 'Magnitude') +
scale_fill_viridis_c(option = 'E') +
theme_classic())
}
plot_ssm <- function(data) {
return(data %>%
compmus_self_similarity(timbre, 'cosine') %>%
ggplot(
aes(
x = xstart + xduration / 2,
width = xduration,
y = ystart + yduration / 2,
height = yduration,
fill = d)) +
geom_tile() +
coord_fixed() +
scale_fill_viridis_c(option = 'E', guide = 'none') +
theme_classic() +
labs(x = '', y = ''))
}
# -------- Load Data --------- #
if (!file.exists("data/ff.rds") {
future_funk <- remove_junk(get_playlist_audio_features("bkd0b33gypt1ixtyg44x4y2ui","4a0xb2zui3hIPll7CMgeSu"))
writeRDS(future_funk, "data/ff.rds")
} else {
future_funk <- readRDS("data/ff.rds")
}
if (!file.exists("data/kfb.rds") {
kawaii_future_bass <- remove_junk( get_playlist_audio_features("bkd0b33gypt1ixtyg44x4y2ui","75OfhBfc4tnQ8MFdiPiMcx"))
writeRDS(kawaii_future_bass, "data/kfb.rds")
} else {
kawaii_future_bass <- readRDS("data/kfb.rds")
}
if (!file.exists("data/fp.rds") {
futurepop <- remove_junk(get_playlist_audio_features("bkd0b33gypt1ixtyg44x4y2ui","1TBQdi8VdYsvruWv1W5HjB"))
writeRDS(futurepop, "data/fp.rds")
} else {
futurepop <- readRDS("data/fp.rds")
}
if (!file.exists("data/fa.rds") {
future_ambient <- remove_junk(get_playlist_audio_features("bkd0b33gypt1ixtyg44x4y2ui","2dZ7eWcGRtuyseKY5QNZoP"))
writeRDS(future_ambient, "data/fa.rds")
} else {
future_ambient <- readRDS("data/fa.rds")
}
if (!file.exists("data/fg.rds") {
future_garage <- remove_junk(get_playlist_audio_features("bkd0b33gypt1ixtyg44x4y2ui","2IgZ50kclGP2tNVx7mu9vL"))
writeRDS(future_garage, "data/fg.rds")
} else {
future_garage <- readRDS("data/fg.rds")
}
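# The five cache blocks above all share the same shape; a small helper would
# remove the repetition. A minimal sketch (same user id as above; the helper
# name is ours):
load_cached <- function(path, playlist_id) {
  if (!file.exists(path)) {
    data <- remove_junk(get_playlist_audio_features("bkd0b33gypt1ixtyg44x4y2ui", playlist_id))
    saveRDS(data, path)
    data
  } else {
    readRDS(path)
  }
}
# e.g. future_funk <- load_cached("data/ff.rds", "4a0xb2zui3hIPll7CMgeSu")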
|
subroutine cline(hdt ,j ,nmmaxj ,kmax ,icx , &
& icy ,icxy ,mf ,nf ,dxf , &
& dyf ,xf ,yf ,u1 ,v1 , &
& xcor ,ycor ,guu ,gvv ,guv , &
& gvu ,kcu ,kcv ,kcs ,kfu , &
& kfv ,windxt ,windyt ,windft , &
& s1 ,dpu ,dpv ,thick ,dpdro , &
& kfumin ,kfumax ,kfvmin ,kfvmax , &
& dzu1 ,dzv1 ,zk ,gdp )
!----- GPL ---------------------------------------------------------------------
!
! Copyright (C) Stichting Deltares, 2011-2016.
!
! This program is free software: you can redistribute it and/or modify
! it under the terms of the GNU General Public License as published by
! the Free Software Foundation version 3.
!
! This program is distributed in the hope that it will be useful,
! but WITHOUT ANY WARRANTY; without even the implied warranty of
! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
! GNU General Public License for more details.
!
! You should have received a copy of the GNU General Public License
! along with this program. If not, see <http://www.gnu.org/licenses/>.
!
! contact: [email protected]
! Stichting Deltares
! P.O. Box 177
! 2600 MH Delft, The Netherlands
!
! All indications and logos of, and references to, "Delft3D" and "Deltares"
! are registered trademarks of Stichting Deltares, and remain the property of
! Stichting Deltares. All rights reserved.
!
!-------------------------------------------------------------------------------
! $Id: cline.f90 5717 2016-01-12 11:35:24Z mourits $
! $HeadURL: https://svn.oss.deltares.nl/repos/delft3d/tags/6686/src/engines_gpl/flow2d3d/packages/kernel/src/compute/cline.f90 $
!!--description-----------------------------------------------------------------
!
! Function: - calculates the position of one drogue if release time < NST,
!             stop time > NST, and the drogue is on an active point
! Method used:
!
!!--pseudo code and references--------------------------------------------------
! NONE
!!--declarations----------------------------------------------------------------
use precision
!
use globaldata
!
implicit none
!
type(globdat),target :: gdp
!
! The following list of pointer parameters is used to point inside the gdp structure
!
integer , pointer :: ifirst
integer , pointer :: ndim
integer , pointer :: md
integer , pointer :: loop
integer, dimension(:) , pointer :: inc
real(fp), dimension(:,:) , pointer :: ud
real(fp), dimension(:,:) , pointer :: xd
real(fp), dimension(:) , pointer :: rdep
integer , pointer :: lundia
logical , pointer :: wind
logical , pointer :: temp
logical , pointer :: drogue
logical , pointer :: zmodel
!
! Global variables
!
integer , intent(in) :: icx !! Increment in the X-dir., if ICX= NMAX
!! then computation proceeds in the X-
!! dir. If icx=1 then computation pro-
!! ceeds in the Y-dir.
integer , intent(in) :: icxy !! Max (ICX ,ICY )
integer , intent(in) :: icy !! Increment in the Y-dir. (see ICX)
integer :: j !! Begin pointer for arrays which have
!! been transformed into 1D arrays.
!! Due to the shift in the 2nd (M-)
!! index, J = -2*NMAX + 1
integer , intent(in) :: kmax ! Description and declaration in esm_alloc_int.f90
integer :: mf !! M-coordinate at start of calculation
!! and after calculation (-><- Cline)
integer :: nf !! N coordinate at start of calculation
!! and after calculation (-><- Cline)
integer :: nmmaxj ! Description and declaration in dimens.igs
integer, dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: kcs ! Description and declaration in esm_alloc_int.f90
integer, dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: kcu ! Description and declaration in esm_alloc_int.f90
integer, dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: kcv ! Description and declaration in esm_alloc_int.f90
integer, dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: kfu ! Description and declaration in esm_alloc_int.f90
integer, dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: kfv ! Description and declaration in esm_alloc_int.f90
integer, dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: kfumax ! Description and declaration in esm_alloc_int.f90
integer, dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: kfumin ! Description and declaration in esm_alloc_int.f90
integer, dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: kfvmax ! Description and declaration in esm_alloc_int.f90
integer, dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: kfvmin ! Description and declaration in esm_alloc_int.f90
real(fp) :: dxf !! Delta X at start of calculation
!! and after calculation (-><- Cline)
real(fp) :: dyf !! Delta Y at start of calculation
!! and after calculation (-><- Cline)
real(fp) , intent(in) :: hdt ! Description and declaration in esm_alloc_real.f90
real(fp) , intent(in) :: windft ! Description and declaration in trisol.igs
real(fp) , intent(in) :: windxt ! Description and declaration in trisol.igs
real(fp) , intent(in) :: windyt ! Description and declaration in trisol.igs
real(fp) , intent(out) :: xf !! X coordinate at start of calculation
!! and after calculation (-><- Cline)
    real(fp)      , intent(out)   :: yf !!  Y coordinate at start of calculation
!! and after calculation (-><- Cline)
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: guu ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: guv ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: gvu ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: gvv ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: xcor ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: ycor ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub, kmax), intent(in) :: u1 ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub, kmax), intent(in) :: v1 ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: s1 ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: dpu ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub) , intent(in) :: dpv ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub, kmax), intent(in) :: dzu1 ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(gdp%d%nmlb:gdp%d%nmub, kmax), intent(in) :: dzv1 ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(kmax) , intent(in) :: thick ! Description and declaration in esm_alloc_real.f90
real(fp), dimension(0:kmax) , intent(in) :: zk
real(fp) , intent(in) :: dpdro ! drodep for one drogue
!
! Local variables
!
integer :: ddb
    integer  :: i      ! Help var.
integer :: istat
integer :: m ! M-coordinate starts at MF
integer :: n ! N coordinate starts at NF
integer :: ndm ! N-1,M array index
integer :: ndmd ! N-1,M-1 array index
integer :: num ! N+1,M array index
integer :: nm ! N,M array index
integer :: nmd ! N,M-1 array index
integer :: nmu ! N,M+1 array index
integer :: kc
integer :: kfmin
integer :: kfmax
integer :: kfstep
real(fp) :: dtn ! Maximum time of drogues in 1 cell
real(fp) :: dxdeta ! X distance between (n,m) and (n-1,m)
real(fp) :: dxdksi ! X distance between (n,m) and (n,m-1)
real(fp) :: dydeta ! Y distance between (n,m) and (n-1,m)
real(fp) :: dydksi ! Y distance between (n,m) and (n,m-1)
real(fp) :: guvndm
real(fp) :: guvnm
real(fp) :: gvunm
real(fp) :: gvunmd
real(fp) :: te ! Time to pass cell
real(fp) :: wcu ! Windcorrection on u-velocity
real(fp) :: wcv ! Windcorrection on v-velocity
    real(fp) :: xr     ! Help var. for XD(1,1)
    real(fp) :: yr     ! Help var. for XD(2,1)
real(fp) :: wlev
real(fp) :: uu1
real(fp) :: uu2
real(fp) :: vv1
real(fp) :: vv2
real(fp) :: interp
!
!! executable statements -------------------------------------------------------
!
ifirst => gdp%gdcline%ifirst
ndim => gdp%gdcline%ndim
md => gdp%gdcline%md
loop => gdp%gdcline%loop
inc => gdp%gdcline%inc
ud => gdp%gdcline%ud
xd => gdp%gdcline%xd
rdep => gdp%gdcline%rdep
lundia => gdp%gdinout%lundia
wind => gdp%gdprocs%wind
temp => gdp%gdprocs%temp
drogue => gdp%gdprocs%drogue
zmodel => gdp%gdprocs%zmodel
!
if (ifirst == 1) then
ifirst = 0
allocate (gdp%gdcline%inc (ndim) , stat = istat)
if (istat==0) allocate (gdp%gdcline%ud (ndim, 2 ), stat = istat)
if (istat==0) allocate (gdp%gdcline%xd (ndim, md), stat = istat)
if (istat==0) allocate (gdp%gdcline%rdep(0:kmax+1), stat = istat)
if (istat/=0) then
call prterr(lundia, 'U021', 'Cline: memory alloc error')
call d3stop(1, gdp)
endif
!
! update the local pointers
!
inc => gdp%gdcline%inc
ud => gdp%gdcline%ud
xd => gdp%gdcline%xd
rdep => gdp%gdcline%rdep
endif
!
    !     initialisation of m, n, xd and inc
!
ddb = gdp%d%ddbound
m = mf
n = nf
xd(1, md) = dxf
xd(2, md) = dyf
inc(1) = 0
inc(2) = 0
uu1 = 0.0_fp
uu2 = 0.0_fp
vv1 = 0.0_fp
vv2 = 0.0_fp
!
! time in seconds (velocities in m/s)
!
dtn = hdt
!
! walk through model with drogue as long as DTN > 0 and
! inc(1) / inc(2) not equal 0 (not first time i=1)
!
do i = 1, loop
!
if (dtn < 0.0) then
exit
endif
if (i>1 .and. inc(1)==0 .and. inc(2)==0) then
exit
endif
!
! redefine m,n if drogue hit cell boundary during time step
! (see sline)
!
m = m + inc(1)
n = n + inc(2)
!
! initialize nm, nmd, ndm and ndmd
! the first time kcs (nm) = 1 (see per3), so n,m will be inside
! 2 <= m <= mmax-1 and 2 <= n <= nmax-1 the first time.
! Substraction or addition by 1, can never cause an array out of
    !     Subtraction or addition of 1 can never cause an array index out of
    !     bounds
nm = (n + ddb)*icy + (m + ddb)*icx - icxy
nmd = nm - icx
nmu = nm + icx
ndm = nm - icy
num = nm + icy
ndmd = ndm - icx
!
! test for active point (kcs (nm) = 1)
! if kcs (nm) = 0 => permanent dry point
! if kcs (nm) = 2 => open boundary point (for left and lower
! boundaries xcor (ndmd) might not exist)
!
if (kcs(nm) /= 1) then
exit
endif
!
! test for existing gvu and guv by testing if
! mask arrays kcu and kcv in these points are not equal 0
!
if (kcu(nm)==0 .and. kcu(nmd)==0 .and. kcv(nm)==0 .and. kcv(ndm)==0) then
exit
endif
!
! reset gvu and guv depending on kcu and kcv values.
    !     if kcu = 0 then by definition kfu = 0
!
gvunm = real(kcu(nm) ,fp)*gvu(nm) + real(1 - kcu(nm) ,fp)
gvunmd = real(kcu(nmd),fp)*gvu(nmd) + real(1 - kcu(nmd),fp)
guvnm = real(kcv(nm) ,fp)*guv(nm) + real(1 - kcv(nm) ,fp)
guvndm = real(kcv(ndm),fp)*guv(ndm) + real(1 - kcv(ndm),fp)
!
! jump out of 'loop' if head bites tail
!
if (i>1 .and. m==mf .and. n==nf) then
exit
endif
!
! Compute velocities at the four cell boundaries
! at a certain depth to be used by the drogue
! transport routines
!
if (zmodel) then
kfstep = -1
else
kfmin = 1
kfmax = kmax
kfstep = 1
endif
!
kc = min(1,kcs(nmd))
wlev = (kcs(nm)*s1(nm) + kc*s1(nmd))/(kcs(nm) + kc)
call layerdep(rdep , thick, kmax, nmd , dpu , wlev, &
& zmodel, zk , dzu1, kfumin, kfumax, gdp )
if (zmodel) then
kfmin = kfumax(nmd)
kfmax = kfumin(nmd)
endif
if (kfu(nmd)/=0) then
uu1 = interp (rdep, u1(nmd,:), kfmin, kfmax, kfstep, kmax, dpdro)
endif
!
kc = min(1,kcs(nmu))
wlev = (kcs(nm)*s1(nm) + kc*s1(nmu))/(kcs(nm) + kc)
call layerdep(rdep , thick, kmax, nm , dpu , wlev, &
& zmodel, zk , dzu1, kfumin, kfumax, gdp )
if (zmodel) then
kfmin = kfumax(nm)
kfmax = kfumin(nm)
endif
if (kfu(nm)/=0) then
uu2 = interp (rdep, u1(nm ,:), kfmin, kfmax, kfstep, kmax, dpdro)
endif
!
kc = min(1,kcs(ndm))
wlev = (kcs(nm)*s1(nm) + kc*s1(ndm))/(kcs(nm) + kc)
call layerdep(rdep , thick, kmax, ndm , dpv , wlev, &
& zmodel, zk , dzv1, kfvmin, kfvmax, gdp )
if (zmodel) then
kfmin = kfvmax(ndm)
kfmax = kfvmin(ndm)
endif
if (kfv(ndm)/=0) then
vv1 = interp (rdep, v1(ndm,:), kfmin, kfmax, kfstep, kmax, dpdro)
endif
!
kc = min(1,kcs(num))
wlev = (kcs(nm)*s1(nm) + kc*s1(num))/(kcs(nm) + kc)
call layerdep(rdep , thick, kmax, nm , dpv , wlev, &
& zmodel, zk , dzv1, kfvmin, kfvmax, gdp )
if (zmodel) then
kfmin = kfvmax(nm)
kfmax = kfvmin(nm)
endif
if (kfv(nm)/=0) then
vv2 = interp (rdep, v1(nm ,:), kfmin, kfmax, kfstep, kmax, dpdro)
endif
!
! ud(2,2)
! --
! initialisation velocities ud(1,1) | | ud(1,2)
! between 2 cell boundaries: --
! scaled with cell-length ud(2,1)
!
    !     take into account temporary dry points (mask kfu, kfv)
!
if (abs(windft)<1.E-6) then
ud(1, 1) = kfu(nmd)*uu1/gvunmd
ud(1, 2) = kfu(nm) *uu2/gvunm
ud(2, 1) = kfv(ndm)*vv1/guvndm
ud(2, 2) = kfv(nm) *vv2/guvnm
else
dxdeta = xcor(nm) - xcor(ndm)
dydeta = ycor(nm) - ycor(ndm)
dxdksi = xcor(nm) - xcor(nmd)
dydksi = ycor(nm) - ycor(nmd)
wcu = (windxt*dydeta - windyt*dxdeta)/guu(nm)
wcv = (windyt*dxdksi - windxt*dydksi)/gvv(nm)
ud(1, 1) = kfu(nmd)*(uu1 + wcu)/gvunmd
ud(1, 2) = kfu(nm) *(uu2 + wcu)/gvunm
ud(2, 1) = kfv(ndm)*(vv1 + wcv)/guvndm
ud(2, 2) = kfv(nm) *(vv2 + wcv)/guvnm
endif
!
! initialisation coordinates
!
if (inc(1) == 0) then
xd(1, 1) = xd(1, md)
elseif (inc(1) == 1) then
xd(1, 1) = 0.0
else
xd(1, 1) = 1.0
endif
!
if (inc(2) == 0) then
xd(2, 1) = xd(2, md)
elseif (inc(2) == 1) then
xd(2, 1) = 0.0
else
xd(2, 1) = 1.0
endif
!
! calculate coordinates drogue movement for one cell
    !        update time spent in one cell
!
call sline(dtn ,md ,ndim ,te ,xd , &
& ud ,inc )
dtn = dtn - te
!
! calculate and write streamline coordinates
!
xr = xd(1, md)
yr = xd(2, md)
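    !
    !        xf,yf follow from bilinear interpolation of the four cell corner
    !        coordinates, weighted by the normalized in-cell position (xr,yr)
    !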
xf = (1.0 - xr)*(1.0 - yr)*xcor(ndmd) + xr*(1.0 - yr)*xcor(ndm) + (1.0 - xr) &
& *yr*xcor(nmd) + xr*yr*xcor(nm)
yf = (1.0 - xr)*(1.0 - yr)*ycor(ndmd) + xr*(1.0 - yr)*ycor(ndm) + (1.0 - xr) &
& *yr*ycor(nmd) + xr*yr*ycor(nm)
enddo
!
! exit 'loop'
!
!
! redefine mf,nf and dxf,dyf
!
mf = m
nf = n
dxf = xd(1, md)
dyf = xd(2, md)
end subroutine cline
|
\documentclass[MS]{spherex}
\input{meta}
\spherexHandle{SSDC-MS-000}
\modulename{Test Module}
\version{1.0}
\docDate{2021-01-01} % override the vcs date
\pipelevel{L2}
\diagramindex{13}
\difficulty{Low}
\spherexlead[[email protected]]{Galileo Galilei}
\ipaclead[[email protected]]{Vera C. Rubin}
\approved{2021-01-01}{Edwin Hubble}
\begin{document}
\maketitle
\section{Introduction}
Lorem ipsum. \cite{SPHEREx_SPIE}
Module name: \themodulename
Pipeline level: \thepipelevel
Diagram index: \thediagramindex
Difficulty: \thedifficulty
Version: \theversion
Handle: \thehandle
\bibliography{spherex}
\end{document}
|
Formal statement is: lemma strict_mono_id: "strict_mono id" Informal statement is: The identity function is strictly monotonic. |
State Before: ι : Type ?u.6487579
𝕜 : Type ?u.6487582
E : Type ?u.6487585
F : Type ?u.6487588
A : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedRing A
f✝ g✝ : ℝ → E
a b : ℝ
μ : MeasureTheory.Measure ℝ
f g : ℝ → A
hf : IntervalIntegrable f μ a b
hg : ContinuousOn g [[a, b]]
⊢ IntervalIntegrable (fun x => g x * f x) μ a b State After: ι : Type ?u.6487579
𝕜 : Type ?u.6487582
E : Type ?u.6487585
F : Type ?u.6487588
A : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedRing A
f✝ g✝ : ℝ → E
a b : ℝ
μ : MeasureTheory.Measure ℝ
f g : ℝ → A
hf : IntegrableOn f (Ι a b)
hg : ContinuousOn g [[a, b]]
⊢ IntegrableOn (fun x => g x * f x) (Ι a b) Tactic: rw [intervalIntegrable_iff] at hf ⊢ State Before: ι : Type ?u.6487579
𝕜 : Type ?u.6487582
E : Type ?u.6487585
F : Type ?u.6487588
A : Type u_1
inst✝¹ : NormedAddCommGroup E
inst✝ : NormedRing A
f✝ g✝ : ℝ → E
a b : ℝ
μ : MeasureTheory.Measure ℝ
f g : ℝ → A
hf : IntegrableOn f (Ι a b)
hg : ContinuousOn g [[a, b]]
⊢ IntegrableOn (fun x => g x * f x) (Ι a b) State After: no goals Tactic: exact hf.continuousOn_mul_of_subset hg isCompact_uIcc measurableSet_Ioc Ioc_subset_Icc_self |
! SpiralMaker software (General Node Model)
! Computational model to simulate embryonic cleavage.
! Copyright (C) 2014 Miquel Marin-Riera, Miguel Brun-Usan, Roland Zimm, Tommi Välikangas & Isaac Salazar-Ciudad
! This program is free software: you can redistribute it and/or modify
! it under the terms of the GNU General Public License as published by
! the Free Software Foundation, either version 3 of the License, or
! any later version.
! This program is distributed in the hope that it will be useful,
! but WITHOUT ANY WARRANTY; without even the implied warranty of
! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
! GNU General Public License for more details.
! You should have received a copy of the GNU General Public License
! along with this program. If not, see <http://www.gnu.org/licenses/>.
module pola !>>>>>>>MIQUEL MADE MODULE 3-4-13
use general
use neighboring
use io
use aleas
use genetic
contains
subroutine pol_physic(celd,vx,vy,vz) !this subroutine applies a physical criterion to determine the cell polarisation for division to take place
!(namely, the cell will polarize along its longest axis)
!a linear regression is calculated using all the cell's nodes and the resulting vector will be normal to the plane of division
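!(method: an orthogonal-distance line fit through the cell's nodes; the
! direction of the best-fit line is the cell's longest axis and is returned,
! normalized, in vx,vy,vz)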
integer,intent(in)::celd
integer::num,tipi
real*8::unum
integer::i,j,k,ii,jj,kk
real*8 ::d
real*8::a,b,c,xm,ym,zm,sxx,sxy,syy,sxz,szz,syz,thet,cost,sint
real*8::k11,k22,k12,k10,k01,k00
real*8::c2,c1,c0
real*8::dm,rho,phi,p,q,r
real*8::ka,kb,ku,kv,kw
real*8,intent(out)::vx,vy,vz
real*8,allocatable::px(:),py(:),pz(:)
tipi=node(cels(celd)%node(1))%tipus
  if(tipi<3)then !if epithelial, we only calculate the regression for the top layer (we ignore cell depth, because it causes artifacts if we take all the epithelial nodes)
    num=cels(celd)%nunodes
    allocate(px(num),py(num),pz(num))
    ii=0
    do i=1,num !we load only the top-layer nodes (assumed here: the nodes sharing the tipus of node 1)
      j=cels(celd)%node(i)
      if(node(j)%tipus==tipi)then
        ii=ii+1
        px(ii)=node(j)%x ; py(ii)=node(j)%y ; pz(ii)=node(j)%z
      end if
    end do
    num=ii
else
num=cels(celd)%nunodes
unum=1d0/num
allocate(px(num),py(num),pz(num))
! print*,"num",num
do i=1,num !we load the cell's nodes
j=cels(celd)%node(i)
px(i)=node(j)%x
py(i)=node(j)%y
pz(i)=node(j)%z
! print*,"px",px(i),"py",py(i),"pz",pz(i)
end do
end if
unum=1d0/num
! print*,"num",num
xm=0;ym=0;zm=0 !averages
do i=1,num
xm=xm+px(i)
ym=ym+py(i)
zm=zm+pz(i)
end do
xm=xm*unum
ym=ym*unum
zm=zm*unum
sxx=0;sxy=0;syy=0;sxz=0;szz=0;syz=0 !covariances
do i=1,num
sxx=sxx+px(i)**2
sxy=sxy+px(i)*py(i)
syy=syy+py(i)**2
sxz=sxz+px(i)*pz(i)
szz=szz+pz(i)**2
syz=syz+py(i)*pz(i)
end do
sxx=sxx*unum-(xm**2)
sxy=sxy*unum-xm*ym
syy=syy*unum-(ym**2)
sxz=sxz*unum-xm*zm
szz=szz*unum-(zm**2)
syz=syz*unum-ym*zm
    if(sxx<=epsilod) sxx=1d-5 !if one of the covariances is 0 (which means the surface is perfectly flat on one coordinate plane)
    if(sxy<=epsilod) sxy=1d-5 !the algorithm crashes, so we do the trick of assigning a very small value
if(syy<=epsilod) syy=1d-5
if(sxz<=epsilod) sxz=1d-5
if(syz<=epsilod) syz=1d-5
if(szz<=epsilod) szz=1d-5
    !from here on: black magic, a.k.a. solving the cubic characteristic equation (Cardano's method) and keeping the smallest eigenvalue
thet=0.5d0*atan(2*sxy/(sxx-Syy))
cost=cos(thet)
sint=sin(thet)
! print*,"thet",thet,"cost",cost,"sint",sint
k11=(syy+szz)*cost**2+(sxx+szz)*sint**2-2*sxy*cost*sint
k22=(syy+szz)*sint**2+(sxx+szz)*cost**2+2*sxy*cost*sint
k12=-sxy*(cost**2-sint**2)+(sxx-syy)*cost*sint
k10=sxz*cost+syz*sint
k01=-sxz*sint+syz*cost
k00=sxx+syy
c2=-k00-k11-k22
c1=k00*k11+k00*k22+k11*k22-k01**2-k10**2
c0=k01**2*k11+k10**2*k22-k00*k11*k22
p=c1-(c2**2)/3d0
q=2*(c2**3)/27d0-(c1*c2)/3d0+c0
r=(q**2)/4d0+(p**3)/27d0
! print*,"p",p,"q",q,"r",r
if(r>0)then
a=-c2/3d0
b=(-0.5d0*q+sqrt(r))
c=(-0.5d0*q-sqrt(r))
! print*,"A",a,"B",b,"C",c
if(b<0)then
b=-(-b)**(1d0/3d0)
end if
if(c<0)then
c=-(-c)**(1d0/3d0)
end if
! print*,"A",a,"B",b,"C",c
dm=a+b+c
else
rho=sqrt(-p**3/27d0)
phi=acos(-q/(2*rho))
a=-c2/3d0+2*(rho**(1d0/3d0))*cos(phi/3d0)
b=-c2/3d0+2*(rho**(1d0/3d0))*cos((phi+2*pi)/3d0)
c=-c2/3d0+2*(rho**(1d0/3d0))*cos((phi+4*pi)/3d0)
dm=min(a,b,c)
end if
ka=-k10*cost/(k11-dm)+k01*sint/(k22-dm)
kb=-k10*sint/(k11-dm)-k01*cost/(k22-dm)
! print*,"ka",ka,"kb",kb
ku=((1+kb**2)*xm-ka*kb*ym+ka*zm)/(1+ka**2+kb**2)
kv=(-ka*kb*xm+(1+ka**2)*ym+kb*zm)/(1+ka**2+kb**2)
kw=(ka*xm+kb*ym+(ka**2+kb**2)*zm)/(1+ka**2+kb**2)
! print*,"vectortip",ku,kv,kw
! print*,"centerpoint",xm,ym,zm
vx=xm-ku;vy=ym-kv;vz=zm-kw
d=1d0/sqrt(vx**2+vy**2+vz**2) ! >>> Is 21-6-14
vx=vx*d ; vy=vy*d ; vz=vz*d ! >>> Is 21-6-14
end subroutine pol_physic
!!!!!!!!!
!********************************************************************
subroutine polarigen(kceld,kgen)
integer:: kceld,nnod,ggr,ccen,kgen
real*8::a,b,c,d,e,ax,ay,az,bx,by,bz,cx,cy,cz,iwx,iwy,iwz,alfa,s
nnod=cels(kceld)%nunodes
iwy=1d10 ; cx=0d0 ; cy=0d0 ; cz=0d0
a=cels(kceld)%cex ; b=cels(kceld)%cey ; c=cels(kceld)%cez
    do i=1,nnod                             ! approximate the gene concentration (gen) at the centroid by its closest node
j=cels(kceld)%node(i)
d=sqrt((node(j)%x-a)**2+(node(j)%y-b)**2+(node(j)%z-c)**2)
if(d.le.iwy)then;iwy=d;ccen=j;endif
end do
alfa=0.0d0 ! concentration in the central node
kk=kgen!whonpag(nparam_per_node+8,k)
if (gex(ccen,kk)>0.0d0) then
alfa=alfa+gex(ccen,kk)!*gen(kk)%wa(nparam_per_node+8) ! wa in units of probability such that it makes things to go from 0 to 1
end if
iwx=0d0 ; iwy=0d0 ; iwz=0d0 ! vector of the gradient within a cell
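    ! the gradient is estimated as sum_j u_j*(s_j - alfa), where u_j is the
    ! unit vector from the centroid to node j, s_j the expression of the gene
    ! at node j, and alfa the expression at the node closest to the centroid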
do i=1,nnod
j=cels(kceld)%node(i)
d=sqrt((node(j)%x-a)**2+(node(j)%y-b)**2+(node(j)%z-c)**2)
if (d<epsilod) cycle
d=1d0/d ! module of radial vectors to get unitary vectors
s=0.0d0
kk=kgen!whonpag(nparam_per_node+8,k)
if (gex(j,kk)>0.0d0) then
s=s+gex(j,kk)!*gen(kk)%wa(nparam_per_node+8)
end if
      iwx=iwx+((node(j)%x-a)*d)*(s-alfa)   ! and ignore shape/size effects
iwy=iwy+((node(j)%y-b)*d)*(s-alfa)
iwz=iwz+((node(j)%z-c)*d)*(s-alfa)
end do
    if((iwx.eq.0).and.(iwy.eq.0).and.(iwz.eq.0))then ! if the gene has uniform expression, the vector is random ! >>>Miguel1-7-14
call random_number(a) ! >>>Miguel1-7-14
k=int(a*nvaloq)+1 ! >>>Miguel1-7-14
cels(kceld)%polx=particions_esfera(k,1) ; cels(kceld)%poly=particions_esfera(k,2)
cels(kceld)%polz=particions_esfera(k,3)
else ! >>>Miguel1-7-14
a=iwx**2+iwy**2+iwz**2
if(a==0)then
        cels(kceld)%polx=0d0 ; cels(kceld)%poly=0d0 ; cels(kceld)%polz=0d0 ! unitary resultant vector (gradient polarization)
else
d=1d0/sqrt(a)
        cels(kceld)%polx=iwx*d ; cels(kceld)%poly=iwy*d ; cels(kceld)%polz=iwz*d ! unitary resultant vector (gradient polarization)
end if
if((iwx.eq.0d0).and.(iwy.eq.0d0).and.(iwz.eq.0d0))then ! miguel27-11-13
cels(kceld)%polx=0d0 ; cels(kceld)%poly=0d0 ; cels(kceld)%polz=0d0
endif ! miguel27-11-13
endif ! >>>Miguel1-7-14
end subroutine
!*******************************************************************
subroutine pol_special(celd,actin) ! the same as before, but using only nodes with some amount of certain gene(s)  !>>>Miguel 28-1-15
integer,intent(in)::celd
integer::num,tipi,actin
real*8::unum
integer::i,j,k,ii,jj,kk
real*8 ::d
real*8::a,b,c,xm,ym,zm,sxx,sxy,syy,sxz,szz,syz,thet,cost,sint
real*8::k11,k22,k12,k10,k01,k00
real*8::c2,c1,c0
real*8::dm,rho,phi,p,q,r
real*8::ka,kb,ku,kv,kw
real*8::vx,vy,vz
real*8,allocatable::px(:),py(:),pz(:)
integer,allocatable :: which(:)
allocate(which(cels(celd)%nunodes)) ; which=0
num=0
do i=1,cels(celd)%nunodes
j=cels(celd)%node(i)
if(gex(j,actin).gt.1d1)then
num=num+1 ; which(num)=j
!if((ncels.gt.1).and.(gex(j,4).lt.1d1))then
!num=num-1 ; end if
end if
end do
!write(*,*)'celd',celd,'nunodes',cels(celd)%nunodes,'num',num
allocate(px(num),py(num),pz(num))
do i=1,num !we load the cell's nodes
j=which(i)
px(i)=node(j)%x
py(i)=node(j)%y
pz(i)=node(j)%z
end do
unum=1d0/num
xm=0;ym=0;zm=0 !averages
do i=1,num
xm=xm+px(i) ; ym=ym+py(i) ; zm=zm+pz(i)
end do
xm=xm*unum ; ym=ym*unum ; zm=zm*unum
sxx=0;sxy=0;syy=0;sxz=0;szz=0;syz=0 !covariances
do i=1,num
sxx=sxx+px(i)**2 ; sxy=sxy+px(i)*py(i)
syy=syy+py(i)**2 ; sxz=sxz+px(i)*pz(i)
szz=szz+pz(i)**2 ; syz=syz+py(i)*pz(i)
end do
sxx=sxx*unum-(xm**2) ; sxy=sxy*unum-xm*ym
syy=syy*unum-(ym**2) ; sxz=sxz*unum-xm*zm
szz=szz*unum-(zm**2) ; syz=syz*unum-ym*zm
    if(sxx<=epsilod) sxx=1d-5 !if one of the covariances is 0 (which means the surface is perfectly flat on one coordinate plane)
    if(sxy<=epsilod) sxy=1d-5 !the algorithm crashes, so we do the trick of assigning a very small value
if(syy<=epsilod) syy=1d-5
if(sxz<=epsilod) sxz=1d-5
if(syz<=epsilod) syz=1d-5
if(szz<=epsilod) szz=1d-5
    !from here on: black magic, a.k.a. solving the cubic characteristic equation (Cardano's method) and keeping the smallest eigenvalue
thet=0.5d0*atan(2*sxy/(sxx-Syy))
cost=cos(thet) ; sint=sin(thet)
k11=(syy+szz)*cost**2+(sxx+szz)*sint**2-2*sxy*cost*sint
k22=(syy+szz)*sint**2+(sxx+szz)*cost**2+2*sxy*cost*sint
k12=-sxy*(cost**2-sint**2)+(sxx-syy)*cost*sint
k10=sxz*cost+syz*sint ; k01=-sxz*sint+syz*cost ; k00=sxx+syy
c2=-k00-k11-k22 ; c1=k00*k11+k00*k22+k11*k22-k01**2-k10**2
c0=k01**2*k11+k10**2*k22-k00*k11*k22
p=c1-(c2**2)/3d0 ; q=2*(c2**3)/27d0-(c1*c2)/3d0+c0
r=(q**2)/4d0+(p**3)/27d0
if(r>0)then
a=-c2/3d0 ; b=(-0.5d0*q+sqrt(r)) ; c=(-0.5d0*q-sqrt(r))
if(b<0)then ; b=-(-b)**(1d0/3d0) ; end if
if(c<0)then ; c=-(-c)**(1d0/3d0) ; end if
dm=a+b+c
else
rho=sqrt(-p**3/27d0) ; phi=acos(-q/(2*rho))
a=-c2/3d0+2*(rho**(1d0/3d0))*cos(phi/3d0)
b=-c2/3d0+2*(rho**(1d0/3d0))*cos((phi+2*pi)/3d0)
c=-c2/3d0+2*(rho**(1d0/3d0))*cos((phi+4*pi)/3d0)
dm=min(a,b,c)
end if
ka=-k10*cost/(k11-dm)+k01*sint/(k22-dm)
kb=-k10*sint/(k11-dm)-k01*cost/(k22-dm)
ku=((1+kb**2)*xm-ka*kb*ym+ka*zm)/(1+ka**2+kb**2)
kv=(-ka*kb*xm+(1+ka**2)*ym+kb*zm)/(1+ka**2+kb**2)
kw=(ka*xm+kb*ym+(ka**2+kb**2)*zm)/(1+ka**2+kb**2)
vx=xm-ku ; vy=ym-kv ; vz=zm-kw
d=1d0/sqrt(vx**2+vy**2+vz**2) ! >>> Is 21-6-14
cels(celd)%spolx=vx*d ; cels(celd)%spoly=vy*d ; cels(celd)%spolz=vz*d ! >>> Is 21-6-14
deallocate(which)
end subroutine pol_special
!!!!!!!!!
end module pola
|
corollary rel_boundary_retract_of_punctured_affine_hull: fixes S :: "'a::euclidean_space set" assumes "compact S" "convex S" "a \<in> rel_interior S" shows "(S - rel_interior S) retract_of (affine hull S - {a})" |
{-# OPTIONS --cubical --no-import-sorts --safe #-}
module Cubical.HITs.Wedge where
open import Cubical.HITs.Wedge.Base public
|
# forwardable.rb
#   $Release Version: 1.1 $
#   $Revision: 25189 $
#   Original version by Tosh
=begin
= Forwardable
A Module to define delegations for selected methods to a class.
== Usage
Using through extending the class.
class Foo
extend Forwardable
def_delegators("@out", "printf", "print")
def_delegators(:@in, :gets)
def_delegator(:@contents, :[], "content_at")
end
f = Foo.new
f.printf ...
f.gets
f.content_at(1)
== Methods
--- Forwardable#def_instance_delegators(accessor, *methods)
adding the delegations for each method of ((|methods|)) to
((|accessor|)).
--- Forwardable#def_instance_delegator(accessor, method, ali = method)
adding the delegation for ((|method|)) to ((|accessor|)). When
you give optional argument ((|ali|)), ((|ali|)) is used as the
name of the delegation method, instead of ((|method|)).
--- Forwardable#def_delegators(accessor, *methods)
the alias of ((|Forwardable#def_instance_delegators|)).
--- Forwardable#def_delegator(accessor, method, ali = method)
the alias of ((|Forwardable#def_instance_delegator|)).
= SingleForwardable
a Module to define delegations for selected methods to an object.
== Usage
Using through extending the object.
g = Goo.new
g.extend SingleForwardable
g.def_delegator("@out", :puts)
g.puts ...
== Methods
--- SingleForwardable#def_singleton_delegators(accessor, *methods)
adding the delegations for each method of ((|methods|)) to
((|accessor|)).
--- SingleForwardable#def_singleton_delegator(accessor, method, ali = method)
adding the delegation for ((|method|)) to ((|accessor|)). When
you give optional argument ((|ali|)), ((|ali|)) is used as the
name of the delegation method, instead of ((|method|)).
--- SingleForwardable#def_delegators(accessor, *methods)
the alias of ((|SingleForwardable#def_singleton_delegators|)).
--- SingleForwardable#def_delegator(accessor, method, ali = method)
the alias of ((|SingleForwardable#def_singleton_delegator|)).
=end
|
//
// Copyright (C) 2011-2019 Ben Key
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
//
#include <cstdio>
#include <cstdlib>
#include <algorithm>
#include <locale>
#include <iterator>
#include <string>
#include <vector>
#include <boost/filesystem/operations.hpp>
#include <boost/filesystem/path.hpp>
#include <boost/locale.hpp>
#include <boost/tokenizer.hpp>
#if (BOOST_VERSION >= 106100 && BOOST_VERSION < 106400)
# include <boost/dll/runtime_symbol_info.hpp>
#endif
#if (BOOST_VERSION >= 106400)
# include <boost/process.hpp>
#endif
#if (BOOST_OS_CYGWIN || BOOST_OS_WINDOWS)
# include <windows.h>
#endif
#include <util/executable_path/include/boost/detail/executable_path_internals.hpp>
namespace boost {
namespace detail {
// NOLINTNEXTLINE(clang-diagnostic-unused-const-variable)
const size_t buffer_size = 8192;
const size_t default_converter_max_length = 6;
std::string os_pathsep()
{
#if (BOOST_OS_WINDOWS)
return ";";
#else
return ":";
#endif
}
std::wstring wos_pathsep()
{
#if (BOOST_OS_WINDOWS)
return L";";
#else
return L":";
#endif
}
std::string os_sep()
{
#if (BOOST_OS_WINDOWS)
return "\\";
#else
return "/";
#endif
}
std::wstring wos_sep()
{
#if (BOOST_OS_WINDOWS)
return L"\\";
#else
return L"/";
#endif
}
bool IsUTF8(const std::locale& loc)
{
std::string locName = loc.name();
return (!locName.empty() && std::string::npos != locName.find("UTF-8"));
}
std::string to_string(const std::wstring& s, const std::locale& loc)
{
using char_vector = std::vector<char>;
using converter_type = std::codecvt<wchar_t, char, std::mbstate_t>;
using wchar_facet = std::ctype<wchar_t>;
std::string return_value;
if (s.empty())
{
return "";
}
if (IsUTF8(loc))
{
return_value = boost::locale::conv::utf_to_utf<char>(s);
if (!return_value.empty())
{
return return_value;
}
}
const wchar_t* from = s.c_str();
size_t len = s.length();
size_t converterMaxLength = default_converter_max_length;
size_t bufferSize = ((len + default_converter_max_length) * converterMaxLength);
if (std::has_facet<converter_type>(loc))
{
const auto& converter = std::use_facet<converter_type>(loc);
if (!converter.always_noconv())
{
converterMaxLength = converter.max_length();
if (converterMaxLength != default_converter_max_length)
{
bufferSize = ((len + default_converter_max_length) * converterMaxLength);
}
      std::mbstate_t state = std::mbstate_t();  // start from a valid, zero-initialized conversion state
const wchar_t* from_next = nullptr;
char_vector to(bufferSize, 0);
char* toPtr = to.data();
char* to_next = nullptr;
const converter_type::result result = converter.out(
state, from, from + len, from_next, toPtr, toPtr + bufferSize, to_next);
if ((converter_type::ok == result || converter_type::noconv == result) && 0 != toPtr[0])
{
return_value.assign(toPtr, to_next);
}
}
}
if (return_value.empty() && std::has_facet<wchar_facet>(loc))
{
char_vector to(bufferSize, 0);
auto toPtr = to.data();
const auto& facet = std::use_facet<wchar_facet>(loc);
if (facet.narrow(from, from + len, '?', toPtr) != nullptr)
{
return_value = toPtr;
}
}
return return_value;
}
std::wstring to_wstring(const std::string& s, const std::locale& loc)
{
using wchar_vector = std::vector<wchar_t>;
using wchar_facet = std::ctype<wchar_t>;
std::wstring return_value;
if (s.empty())
{
return L"";
}
if (IsUTF8(loc))
{
return_value = boost::locale::conv::utf_to_utf<wchar_t>(s);
if (!return_value.empty())
{
return return_value;
}
}
if (std::has_facet<wchar_facet>(loc))
{
std::string::size_type bufferSize = s.size() + 2;
wchar_vector to(bufferSize, 0);
wchar_t* toPtr = to.data();
const auto& facet = std::use_facet<wchar_facet>(loc);
if (facet.widen(s.c_str(), s.c_str() + s.size(), toPtr) != nullptr)
{
return_value = toPtr;
}
}
return return_value;
}
std::string GetEnv(const std::string& varName)
{
if (varName.empty())
{
return "";
}
#if (BOOST_OS_BSD || BOOST_OS_CYGWIN || BOOST_OS_LINUX || BOOST_OS_MACOS || BOOST_OS_SOLARIS)
char* value = std::getenv(varName.c_str());
if (value == nullptr)
{
return "";
}
return value;
#elif (BOOST_OS_WINDOWS)
using char_vector = std::vector<char>;
using size_type = std::vector<char>::size_type;
char_vector value(buffer_size, 0);
size_type size = value.size();
bool haveValue = false;
bool shouldContinue = true;
do
{
DWORD result = GetEnvironmentVariableA(varName.c_str(), value.data(), size);
if (result == 0)
{
shouldContinue = false;
}
else if (result < size)
{
haveValue = true;
shouldContinue = false;
}
else
{
size *= 2;
value.resize(size);
}
} while (shouldContinue);
std::string ret;
if (haveValue)
{
ret = value.data();
}
return ret;
#else
return "";
#endif
}
std::wstring GetEnv(const std::wstring& varName)
{
if (varName.empty())
{
return L"";
}
#if (BOOST_OS_BSD || BOOST_OS_CYGWIN || BOOST_OS_LINUX || BOOST_OS_MACOS || BOOST_OS_SOLARIS)
std::locale loc;
std::string sVarName = to_string(varName, loc);
char* value = std::getenv(sVarName.c_str());
if (value == nullptr)
{
return L"";
}
std::wstring ret = to_wstring(value, loc);
return ret;
#elif (BOOST_OS_WINDOWS)
using char_vector = std::vector<wchar_t>;
using size_type = std::vector<wchar_t>::size_type;
char_vector value(buffer_size, 0);
size_type size = value.size();
bool haveValue = false;
bool shouldContinue = true;
do
{
DWORD result = GetEnvironmentVariableW(varName.c_str(), value.data(), size);
if (result == 0)
{
shouldContinue = false;
}
else if (result < size)
{
haveValue = true;
shouldContinue = false;
}
else
{
size *= 2;
value.resize(size);
}
} while (shouldContinue);
std::wstring ret;
if (haveValue)
{
ret = value.data();
}
return ret;
#else
return L"";
#endif
}
bool GetDirectoryListFromDelimitedString(const std::string& str, std::vector<std::string>& dirs)
{
using char_separator_type = boost::char_separator<char>;
using tokenizer_type = boost::tokenizer<
boost::char_separator<char>, std::string::const_iterator, std::string>;
dirs.clear();
if (str.empty())
{
return false;
}
char_separator_type pathSep(os_pathsep().c_str());
tokenizer_type strTok(str, pathSep);
typename tokenizer_type::iterator strIt;
typename tokenizer_type::iterator strEndIt = strTok.end();
for (strIt = strTok.begin(); strIt != strEndIt; ++strIt)
{
dirs.push_back(*strIt);
}
return (!dirs.empty());
}
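// Usage sketch: on a POSIX system,
//   std::vector<std::string> dirs;
//   GetDirectoryListFromDelimitedString("/usr/bin:/bin", dirs);
// fills dirs with {"/usr/bin", "/bin"}; on Windows the delimiter is ';' instead.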
bool GetDirectoryListFromDelimitedString(const std::wstring& str, std::vector<std::wstring>& dirs)
{
using char_separator_type = boost::char_separator<wchar_t>;
using tokenizer_type = boost::tokenizer<
boost::char_separator<wchar_t>, std::wstring::const_iterator, std::wstring>;
dirs.clear();
if (str.empty())
{
return false;
}
char_separator_type pathSep(wos_pathsep().c_str());
tokenizer_type strTok(str, pathSep);
typename tokenizer_type::iterator strIt;
typename tokenizer_type::iterator strEndIt = strTok.end();
for (strIt = strTok.begin(); strIt != strEndIt; ++strIt)
{
dirs.push_back(*strIt);
}
return (!dirs.empty());
}
std::string search_path(const std::string& file)
{
if (file.empty())
{
return "";
}
std::string ret;
#if (BOOST_VERSION >= 106400)
{
namespace bp = boost::process;
boost::filesystem::path p = bp::search_path(file);
ret = p.make_preferred().string();
}
#endif
if (!ret.empty())
{
return ret;
}
// Drat! I have to do it the hard way.
std::string pathEnvVar = GetEnv("PATH");
if (pathEnvVar.empty())
{
return "";
}
std::vector<std::string> pathDirs;
bool getDirList = GetDirectoryListFromDelimitedString(pathEnvVar, pathDirs);
if (!getDirList)
{
return "";
}
auto it = pathDirs.cbegin();
auto itEnd = pathDirs.cend();
for (; it != itEnd; ++it)
{
boost::filesystem::path p(*it);
p /= file;
boost::system::error_code ec;
if (boost::filesystem::exists(p, ec) && boost::filesystem::is_regular_file(p, ec))
{
return p.make_preferred().string();
}
}
return "";
}
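// Usage sketch ("gcc" is just an illustrative name): search_path("gcc") returns
// the preferred-format path of the first matching regular file found on PATH,
// or an empty string when nothing matches.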
std::wstring search_path(const std::wstring& file)
{
if (file.empty())
{
return L"";
}
std::wstring ret;
#if (BOOST_VERSION >= 106400)
{
namespace bp = boost::process;
boost::filesystem::path p = bp::search_path(file);
ret = p.make_preferred().wstring();
}
#endif
if (!ret.empty())
{
return ret;
}
// Drat! I have to do it the hard way.
std::wstring pathEnvVar = GetEnv(L"PATH");
if (pathEnvVar.empty())
{
return L"";
}
std::vector<std::wstring> pathDirs;
bool getDirList = GetDirectoryListFromDelimitedString(pathEnvVar, pathDirs);
if (!getDirList)
{
return L"";
}
auto it = pathDirs.cbegin();
auto itEnd = pathDirs.cend();
for (; it != itEnd; ++it)
{
boost::filesystem::path p(*it);
p /= file;
if (boost::filesystem::exists(p) && boost::filesystem::is_regular_file(p))
{
return p.make_preferred().wstring();
}
}
return L"";
}
std::string executable_path_fallback(const char* argv0)
{
if (argv0 == nullptr || argv0[0] == 0)
{
#if (BOOST_VERSION >= 106100 && BOOST_VERSION < 106400)
boost::system::error_code ec;
auto p = boost::dll::program_location(ec);
if (ec.value() == boost::system::errc::success)
{
return p.make_preferred().string();
}
return "";
#else
return "";
#endif
}
if (strstr(argv0, os_sep().c_str()) != nullptr)
{
boost::system::error_code ec;
boost::filesystem::path p(
boost::filesystem::canonical(argv0, boost::filesystem::current_path(), ec));
if (ec.value() == boost::system::errc::success)
{
return p.make_preferred().string();
}
}
std::string ret = search_path(argv0);
if (!ret.empty())
{
return ret;
}
boost::system::error_code ec;
boost::filesystem::path p(
boost::filesystem::canonical(argv0, boost::filesystem::current_path(), ec));
if (ec.value() == boost::system::errc::success)
{
ret = p.make_preferred().string();
}
return ret;
}
std::wstring executable_path_fallback(const wchar_t* argv0)
{
if (argv0 == nullptr || argv0[0] == 0)
{
#if (BOOST_VERSION >= 106100 && BOOST_VERSION < 106400)
boost::system::error_code ec;
auto p = boost::dll::program_location(ec);
if (ec.value() == boost::system::errc::success)
{
return p.make_preferred().wstring();
}
return L"";
#else
return L"";
#endif
}
if (wcsstr(argv0, wos_sep().c_str()) != nullptr)
{
boost::system::error_code ec;
boost::filesystem::path p(
boost::filesystem::canonical(argv0, boost::filesystem::current_path(), ec));
if (ec.value() == boost::system::errc::success)
{
return p.make_preferred().wstring();
}
}
std::wstring ret = search_path(argv0);
if (!ret.empty())
{
return ret;
}
boost::system::error_code ec;
boost::filesystem::path p(
boost::filesystem::canonical(argv0, boost::filesystem::current_path(), ec));
if (ec.value() == boost::system::errc::success)
{
ret = p.make_preferred().wstring();
}
return ret;
}
} // namespace detail
} // namespace boost
|
(* Author: Tobias Nipkow *)
header "Regular expressions"
theory Regular_Exp
imports Regular_Set
begin
datatype (atoms: 'a) rexp =
is_Zero: Zero |
is_One: One |
Atom 'a |
Plus "('a rexp)" "('a rexp)" |
Times "('a rexp)" "('a rexp)" |
Star "('a rexp)"
primrec lang :: "'a rexp => 'a lang" where
"lang Zero = {}" |
"lang One = {[]}" |
"lang (Atom a) = {[a]}" |
"lang (Plus r s) = (lang r) Un (lang s)" |
"lang (Times r s) = conc (lang r) (lang s)" |
"lang (Star r) = star(lang r)"
primrec nullable :: "'a rexp \<Rightarrow> bool" where
"nullable Zero = False" |
"nullable One = True" |
"nullable (Atom c) = False" |
"nullable (Plus r1 r2) = (nullable r1 \<or> nullable r2)" |
"nullable (Times r1 r2) = (nullable r1 \<and> nullable r2)" |
"nullable (Star r) = True"
lemma nullable_iff: "nullable r \<longleftrightarrow> [] \<in> lang r"
by (induct r) (auto simp add: conc_def split: if_splits)
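text{* For example, @{term "nullable (Times One (Star Zero))"} holds because both
factors accept the empty word, in line with the fact that the empty word lies in
the concatenation of their languages. *}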
text{* Composition on rhs usually complicates matters: *}
lemma map_map_rexp:
"map_rexp f (map_rexp g r) = map_rexp (\<lambda>r. f (g r)) r"
unfolding rexp.map_comp o_def ..
lemma map_rexp_ident[simp]: "map_rexp (\<lambda>x. x) = (\<lambda>r. r)"
unfolding id_def[symmetric] fun_eq_iff rexp.map_id id_apply by (intro allI refl)
lemma atoms_lang: "w : lang r \<Longrightarrow> set w \<subseteq> atoms r"
proof(induction r arbitrary: w)
case Times thus ?case by fastforce
next
case Star thus ?case by (fastforce simp add: star_conv_concat)
qed auto
lemma lang_eq_ext: "(lang r = lang s) =
(\<forall>w \<in> lists(atoms r \<union> atoms s). w \<in> lang r \<longleftrightarrow> w \<in> lang s)"
by (auto simp: atoms_lang[unfolded subset_iff])
subsection {* Term ordering *}
instantiation rexp :: (order) "{order}"
begin
fun le_rexp :: "('a::order) rexp \<Rightarrow> ('a::order) rexp \<Rightarrow> bool"
where
"le_rexp Zero _ = True"
| "le_rexp _ Zero = False"
| "le_rexp One _ = True"
| "le_rexp _ One = False"
| "le_rexp (Atom a) (Atom b) = (a <= b)"
| "le_rexp (Atom _) _ = True"
| "le_rexp _ (Atom _) = False"
| "le_rexp (Star r) (Star s) = le_rexp r s"
| "le_rexp (Star _) _ = True"
| "le_rexp _ (Star _) = False"
| "le_rexp (Plus r r') (Plus s s') =
(if r = s then le_rexp r' s' else le_rexp r s)"
| "le_rexp (Plus _ _) _ = True"
| "le_rexp _ (Plus _ _) = False"
| "le_rexp (Times r r') (Times s s') =
(if r = s then le_rexp r' s' else le_rexp r s)"
(* The class instance stuff is by Dmitriy Traytel *)
definition less_eq_rexp where "r \<le> s \<equiv> le_rexp r s"
definition less_rexp where "r < s \<equiv> le_rexp r s \<and> r \<noteq> s"
lemma le_rexp_Zero: "le_rexp r Zero \<Longrightarrow> r = Zero"
by (induction r) auto
lemma le_rexp_refl: "le_rexp r r"
by (induction r) auto
lemma le_rexp_antisym: "\<lbrakk>le_rexp r s; le_rexp s r\<rbrakk> \<Longrightarrow> r = s"
by (induction r s rule: le_rexp.induct) (auto dest: le_rexp_Zero)
lemma le_rexp_trans: "\<lbrakk>le_rexp r s; le_rexp s t\<rbrakk> \<Longrightarrow> le_rexp r t"
proof (induction r s arbitrary: t rule: le_rexp.induct)
fix v t assume "le_rexp (Atom v) t" thus "le_rexp One t" by (cases t) auto
next
fix s1 s2 t assume "le_rexp (Plus s1 s2) t" thus "le_rexp One t" by (cases t) auto
next
fix s1 s2 t assume "le_rexp (Times s1 s2) t" thus "le_rexp One t" by (cases t) auto
next
fix s t assume "le_rexp (Star s) t" thus "le_rexp One t" by (cases t) auto
next
fix v u t assume "le_rexp (Atom v) (Atom u)" "le_rexp (Atom u) t"
thus "le_rexp (Atom v) t" by (cases t) auto
next
fix v s1 s2 t assume "le_rexp (Plus s1 s2) t" thus "le_rexp (Atom v) t" by (cases t) auto
next
fix v s1 s2 t assume "le_rexp (Times s1 s2) t" thus "le_rexp (Atom v) t" by (cases t) auto
next
fix v s t assume "le_rexp (Star s) t" thus "le_rexp (Atom v) t" by (cases t) auto
next
fix r s t
assume IH: "\<And>t. le_rexp r s \<Longrightarrow> le_rexp s t \<Longrightarrow> le_rexp r t"
and "le_rexp (Star r) (Star s)" "le_rexp (Star s) t"
thus "le_rexp (Star r) t" by (cases t) auto
next
fix r s1 s2 t assume "le_rexp (Plus s1 s2) t" thus "le_rexp (Star r) t" by (cases t) auto
next
fix r s1 s2 t assume "le_rexp (Times s1 s2) t" thus "le_rexp (Star r) t" by (cases t) auto
next
fix r1 r2 s1 s2 t
assume "\<And>t. r1 = s1 \<Longrightarrow> le_rexp r2 s2 \<Longrightarrow> le_rexp s2 t \<Longrightarrow> le_rexp r2 t"
"\<And>t. r1 \<noteq> s1 \<Longrightarrow> le_rexp r1 s1 \<Longrightarrow> le_rexp s1 t \<Longrightarrow> le_rexp r1 t"
"le_rexp (Plus r1 r2) (Plus s1 s2)" "le_rexp (Plus s1 s2) t"
thus "le_rexp (Plus r1 r2) t" by (cases t) (auto split: split_if_asm intro: le_rexp_antisym)
next
fix r1 r2 s1 s2 t assume "le_rexp (Times s1 s2) t" thus "le_rexp (Plus r1 r2) t" by (cases t) auto
next
fix r1 r2 s1 s2 t
assume "\<And>t. r1 = s1 \<Longrightarrow> le_rexp r2 s2 \<Longrightarrow> le_rexp s2 t \<Longrightarrow> le_rexp r2 t"
"\<And>t. r1 \<noteq> s1 \<Longrightarrow> le_rexp r1 s1 \<Longrightarrow> le_rexp s1 t \<Longrightarrow> le_rexp r1 t"
"le_rexp (Times r1 r2) (Times s1 s2)" "le_rexp (Times s1 s2) t"
thus "le_rexp (Times r1 r2) t" by (cases t) (auto split: split_if_asm intro: le_rexp_antisym)
qed auto
instance proof
qed (auto simp add: less_eq_rexp_def less_rexp_def
intro: le_rexp_refl le_rexp_antisym le_rexp_trans)
end
instantiation rexp :: (linorder) "{linorder}"
begin
lemma le_rexp_total: "le_rexp (r :: 'a :: linorder rexp) s \<or> le_rexp s r"
by (induction r s rule: le_rexp.induct) auto
instance proof
qed (unfold less_eq_rexp_def less_rexp_def, rule le_rexp_total)
end
end
|
From MatchingLogic Require Export Logic
Theories.Definedness
DerivedOperators
Theories.Sorts
Helpers.FOL_helpers.
Import MatchingLogic.Syntax.Notations MatchingLogic.DerivedOperators.Notations.
From Coq Require Import ssreflect ssrfun ssrbool.
Require Export extralibrary
Coq.Program.Wf
Lia
FunctionalExtensionality
Logic.PropExtensionality
Program.Equality.
From stdpp Require Import countable.
Require Export Vector PeanoNat String Arith.Lt.
Ltac separate :=
match goal with
| [H : is_true (andb _ _) |- _] => apply andb_true_iff in H as [?E1 ?E2]
| |- is_true (andb _ _) => apply andb_true_iff
end.
Definition vec {A : Set} := @t A.
Lemma Forall_map (T : Set) n (l : vec n) : forall (P : T -> Prop) (f : T -> T),
(forall x, P x -> P (f x))
->
Forall P l -> Forall P (map f l).
Proof.
induction l; intros P f H; constructor;
inversion H0; subst. auto.
apply IHl; auto. simpl_existT. subst. auto.
Qed.
Theorem fold_left_map (T Q : Set) n (v : vec n) :
forall (App : Q -> T -> Q) (start : Q) f,
fold_left App start (map f v) = @fold_left T Q (fun Acc x => App Acc (f x)) start _ v.
Proof.
induction v; intros App start f; simpl; auto.
Qed.
Lemma map_Forall (T : Set) n (l : vec n) : forall (P : T -> Prop) (f : T -> T),
(forall x, P (f x) -> P x)
->
Forall P (map f l) -> Forall P l.
Proof.
induction l; intros P f H; constructor;
inversion H0; subst. auto.
eapply IHl; eauto. simpl_existT. now subst.
Qed.
Lemma Forall_map_ext (T : Set) n (l : vec n) : forall (P : T -> Prop) (f : T -> T),
(forall x, In x l -> P x -> P (f x))
->
Forall P l -> Forall P (map f l).
Proof.
induction l; intros P f H; constructor;
inversion H0; subst. auto. apply H. constructor. auto.
apply IHl; auto. intros x H1 H2. apply H. constructor 2. auto. auto. simpl_existT. now subst.
Qed.
Lemma map_Forall_ext (T : Set) n (l : vec n) : forall (P : T -> Prop) (f : T -> T),
(forall x, In x l -> P (f x) -> P x)
->
Forall P (map f l) -> Forall P l.
Proof.
induction l; intros P f H; constructor;
inversion H0; subst. auto. apply H. constructor; auto. auto.
eapply IHl; auto. intros x H1 H2. apply H. constructor 2. auto. exact H2. simpl_existT.
now subst.
Qed.
Lemma Forall_impl_ext (A : Set) (P Q : A → Prop) n (l : vec n) :
(∀ a : A, In a l -> P a → Q a) → Forall P l → Forall Q l.
Proof.
induction l; intros H H0; constructor; inversion H0; subst.
apply H. constructor; auto. auto.
apply IHl; auto. intros a H1 H2. apply H; auto. constructor 2. auto.
simpl_existT. now subst.
Qed.
Global Hint Constructors Forall : core.
Class funcs_signature :=
{ funs : Set; funs_eqdec : EqDecision funs; ar_funs : funs -> nat }.
Coercion funs : funcs_signature >-> Sortclass.
Class preds_signature :=
{ preds : Set; preds_eqdec : EqDecision preds; ar_preds : preds -> nat }.
Class FOL_variables :=
{
vars : Set;
var_eqdec : EqDecision vars;
var_countable : Countable vars;
var_infinite : Infinite vars;
}.
Coercion preds : preds_signature >-> Sortclass.
Section fix_signature.
Context {Σ_vars : FOL_variables}.
Context {Σ_funcs : funcs_signature}.
Definition var_fresh (l : list vars) : vars := @infinite_fresh _ var_infinite l.
Unset Elimination Schemes.
Inductive term : Set :=
| bvar : nat -> term
| fvar : vars -> term
| func : forall (f : funs), @vec term (ar_funs f) -> term.
Set Elimination Schemes.
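(* The term syntax is locally nameless: `bvar` carries a de Bruijn index for
   bound occurrences, while `fvar` carries a named free variable. Substitution
   is split accordingly into bsubst_* (indices) and fsubst_* (names) below. *)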
Fixpoint fsubst_term (t0 t : term) (n : vars) : term :=
match t0 with
| fvar t' => if var_eqdec t' n then t else t0
| bvar _ => t0
| func f v => func f (map (fun x => fsubst_term x t n) v)
end.
Fixpoint bsubst_term (t0 t : term) (n : nat) : term :=
match t0 with
| bvar t' => match compare_nat t' n with
| Nat_less _ _ _ => bvar t'
| Nat_equal _ _ _ => t
| Nat_greater _ _ _ => bvar t' (* (pred t') ? According to Leroy *)
end
| fvar _ => t0
| func f v => func f (map (fun x => bsubst_term x t n) v)
end.
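(* Illustrative reading (not a formal test): opening the binder at index n
   replaces exactly that index, e.g. bsubst_term (bvar 0) t 0 = t, while
   smaller indices are left untouched: bsubst_term (bvar 0) t 1 = bvar 0. *)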
Context {Σ_preds : preds_signature}.
Inductive form : Type :=
| fal : form
| atom : forall (P : preds), @vec term (ar_preds P) -> form
| impl : form -> form -> form
| exs : form -> form.
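(* Minimal connective set: falsum, atomic predicates, implication and the
   existential quantifier; the remaining classical connectives are derivable
   (cf. the double-negation axiom P3F in the Hilbert system below). *)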
Fixpoint fsubst_form (phi : form) (t : term) (n: vars) : form :=
match phi with
| fal => fal
| atom P v => atom P (map (fun x => fsubst_term x t n) v)
| impl phi1 phi2 => impl (fsubst_form phi1 t n) (fsubst_form phi2 t n)
| exs phi => exs (fsubst_form phi t n)
end.
Fixpoint bsubst_form (phi : form) (t : term) (n: nat) : form :=
match phi with
| fal => fal
| atom P v => atom P (map (fun x => bsubst_term x t n) v)
| impl phi1 phi2 => impl (bsubst_form phi1 t n) (bsubst_form phi2 t n)
| exs phi => exs (bsubst_form phi t (S n))
end.
Inductive ForallT {A : Set} (P : A -> Type) : forall {n}, vec n -> Type :=
| ForallT_nil : ForallT P (nil)
| ForallT_cons : forall n (x : A) (l : vec n), P x -> ForallT P l -> ForallT P (@cons A x n l).
Inductive vec_in {A : Set} (a : A) : forall {n}, vec n -> Type :=
| vec_inB {n} (v : vec n) : vec_in a (@cons _ a n v)
| vec_inS a' {n} (v : vec n) : vec_in a v -> vec_in a (@cons _ a' n v).
Hint Constructors vec_in : core.
Lemma term_rect' (p : term -> Set) :
(forall x, p (fvar x)) ->
(forall x, p (bvar x)) ->
(forall F v, (ForallT p v) -> p (func F v)) -> forall (t : term), p t.
Proof.
intros f1 f2 f3. fix strong_term_ind' 1. destruct t as [n|n|F v].
- apply f2.
- apply f1.
- apply f3. induction v.
+ econstructor.
+ econstructor. now eapply strong_term_ind'. eauto.
Qed.
Lemma term_rect (p : term -> Set) :
(forall x, p (fvar x)) -> (forall x, p (bvar x)) -> (forall F v, (forall t, vec_in t v -> p t) -> p (func F v)) -> forall (t : term), p t.
Proof.
intros f1 f2 f3. eapply term_rect'.
- apply f1.
- apply f2.
- intros F v H. apply f3. intros t. induction 1; inversion H; subst; eauto.
apply Eqdep_dec.inj_pair2_eq_dec in H3; subst; eauto. decide equality.
Qed.
Lemma term_ind (p : term -> Prop) :
(forall x, p (fvar x)) -> (forall x, p (bvar x)) -> (forall F v (IH : forall t, In t v -> p t), p (func F v)) -> forall (t : term), p t.
Proof.
intros f1 f2 f3. eapply term_rect'.
- apply f1.
- apply f2.
- intros F v H. apply f3. intros t. induction 1; inversion H; subst; eauto.
apply Eqdep_dec.inj_pair2_eq_dec in H3; subst; eauto. decide equality.
Qed.
Fixpoint wf_term (t : term) (n : nat) : bool :=
match t with
| bvar x => x <? n
| fvar x => true
| func f x => fold_right (fun t Acc => andb Acc (wf_term t n)) x true
end.
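(* Illustrative sanity checks (not part of the original development): a free
   variable is well formed at any level, while a bound index must be strictly
   below the level. *)
Goal forall x : vars, wf_term (fvar x) 0 = true.
Proof. intros x. reflexivity. Qed.
Goal wf_term (bvar 0) 1 = true.
Proof. reflexivity. Qed.
Goal wf_term (bvar 1) 1 = false.
Proof. reflexivity. Qed.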
Fixpoint wf_form (F : form) (n : nat) : bool :=
match F with
| fal => true
| atom P x => fold_right (fun t Acc => andb Acc (wf_term t n)) x true
| impl x x0 => wf_form x n && wf_form x0 n
| exs x => wf_form x (S n)
end.
Theorem wf_increase_term :
forall t n, wf_term t n -> forall n', n' >= n -> wf_term t n'.
Proof.
induction t; intros n H n' H0; auto.
* simpl in *. apply Nat.leb_le in H. apply Nat.leb_le. lia.
* simpl in *. induction v; auto; simpl.
simpl in H. do 2 separate.
erewrite IH. split; auto. apply IHv; auto.
- intros t H n1 H1 n'0 H2. eapply IH. now constructor. exact H1. auto.
- constructor.
- exact E2.
- auto.
Qed.
Theorem wf_increase :
forall φ n, wf_form φ n -> forall n', n' >= n -> wf_form φ n'.
Proof.
induction φ; intros n H n' H0; auto.
* simpl in *. induction v; auto; simpl.
simpl in H. do 2 separate.
erewrite wf_increase_term. split; auto. apply IHv; auto.
- eapply wf_increase_term. exact E2. auto.
- auto.
* simpl in *. apply andb_true_iff in H as [E1 E2].
erewrite IHφ1, IHφ2; [ auto | | | |]; eassumption.
* simpl in *. erewrite IHφ. 2: exact H. auto. lia.
Qed.
Theorem wf_term_subst :
forall b t n, wf_term b (S n) -> wf_term t n ->
wf_term (bsubst_term b t n) n.
Proof.
induction b; intros t n H H0; inversion H; subst.
* constructor.
* simpl. break_match_goal.
- simpl. now apply Nat.ltb_lt.
- auto.
- simpl. apply Nat.ltb_lt in H2. lia.
* simpl in *; induction v; simpl in *; auto.
do 2 separate. rewrite IH; auto. constructor. split; auto.
apply IHv; auto. intros t0 H. apply IH. now constructor 2.
Qed.
Theorem wf_form_subst :
forall φ t n, wf_form φ (S n) -> wf_term t n ->
wf_form (bsubst_form φ t n) n.
Proof.
induction φ; intros t n H H0; simpl; auto.
* simpl in *; induction v; simpl in *; auto. do 2 separate.
rewrite wf_term_subst; auto. split; auto.
apply IHv; auto.
* simpl in H. separate. rewrite IHφ1; auto. rewrite IHφ2; auto.
* simpl in H. subst. apply IHφ; auto. eapply wf_increase_term. exact H0.
lia.
Qed.
End fix_signature.
Section semantics.
Context {Σ_vars : FOL_variables}.
Context {Σ_funcs : funcs_signature}.
Context {Σ_preds : preds_signature}.
Variable domain : Set.
Class interp := B_I
{
i_f : forall f : funs, @vec domain (ar_funs f) -> domain ;
i_P : forall P : preds, @vec domain (ar_preds P) -> bool ; (* for decidability *)
}.
Context {I : interp }.
Definition env := vars -> domain. (* for free vars *)
Variable failure : domain. (* for wrong evaluation!!! *)
Fixpoint mmap {A B : Type} (f : A -> option B) {n : nat} (v : t A n) :
option (t B n) :=
match v in (t _ n0) return (option (t B n0)) with
| nil => Some nil
| @cons _ a n0 v' => match f a with
| None => None
| Some x => match mmap f v' with
| None => None
| Some xs => Some (cons x xs)
end
end
end.
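(* Illustrative reading: mmap threads the option monad through a vector; it
   returns Some only when f succeeds on every component (in particular
   mmap Some v = Some v), and a single None aborts the whole map. *)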
Fixpoint eval (rho : env) (t : term) : domain :=
match t with
| fvar s => rho s
| bvar s => failure (* for wf terms, this is never used *)
| func f v => i_f f (Vector.map (eval rho) v)
end.
Definition update_env (ρ : env) (x : vars) (d : domain) : env :=
fun v => if var_eqdec v x then d else ρ v.
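(* Sanity check (illustrative, not part of the original development):
   reading back the updated variable returns the new value. *)
Goal forall (rho : env) (x : vars) (d : domain), update_env rho x d x = d.
Proof.
  intros rho x d. unfold update_env.
  destruct (var_eqdec x x); [reflexivity | congruence].
Qed.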
Import List.
Import ListNotations.
Fixpoint term_vars (t : term) : list vars :=
match t with
| bvar x => []
| fvar x => [x]
| func f x => Vector.fold_right (fun t Acc => term_vars t ++ Acc) x []
end.
Fixpoint form_vars (f : form) : list vars :=
match f with
| fal => []
| atom P x => Vector.fold_right (fun t Acc => term_vars t ++ Acc) x []
| impl x x0 => form_vars x ++ form_vars x0
| exs x => form_vars x
end.
Fixpoint form_size (f : form) : nat :=
match f with
| fal => 0
| atom P x => 0
| impl x x0 => S (form_size x + form_size x0)
| exs x => S (form_size x)
end.
Theorem subst_var_size :
forall f x y, form_size f = form_size (bsubst_form f (fvar x) y).
Proof.
induction f; intros x y; auto; simpl.
- now rewrite -> (IHf1 x y), -> (IHf2 x y).
- now rewrite -> (IHf x (S y)).
Qed.
Program Fixpoint sat (rho : env) (phi : form) {measure (form_size phi)} : Prop :=
match phi with
| atom P v => i_P P (Vector.map (eval rho) v) = true
| fal => False
| impl phi psi => sat rho phi -> sat rho psi
| exs phi => let x := var_fresh (form_vars phi) in
exists d : domain, sat (update_env rho x d) (bsubst_form phi (fvar x) 0)
end.
Next Obligation. intros. subst. simpl; lia. Defined.
Next Obligation. intros. subst. simpl; lia. Defined.
Next Obligation. intros. subst. simpl. rewrite <- subst_var_size. lia. Defined.
Next Obligation. Tactics.program_simpl. Defined.
Proposition sat_atom : forall ρ P v, sat ρ (atom P v) =
(i_P P (Vector.map (eval ρ) v) = true).
Proof. reflexivity. Qed.
Proposition sat_fal : forall ρ, sat ρ fal = False.
Proof. reflexivity. Qed.
Proposition sat_impl : forall ρ φ₁ φ₂, sat ρ (impl φ₁ φ₂) =
(sat ρ φ₁ -> sat ρ φ₂).
Proof.
intros ρ φ₁ φ₂. unfold sat, sat_func.
rewrite fix_sub_eq.
Tactics.program_simpl. unfold projT1, projT2.
destruct X; auto with f_equal.
{ f_equal. apply propositional_extensionality.
epose proof (H _). epose proof (H _).
apply ZifyClasses.eq_iff in H0. apply ZifyClasses.eq_iff in H1. split; intros.
- eapply H0 in H2. exact H2. apply H1. auto.
- eapply H0 in H2. exact H2. apply H1. auto.
}
{ f_equal. apply functional_extensionality; auto. }
{ f_equal. }
Qed.
Proposition sat_exs : forall ρ φ, sat ρ (exs φ) =
(let x := var_fresh (form_vars φ) in
exists d : domain, sat (update_env ρ x d) (bsubst_form φ (fvar x) 0)).
Proof.
intros ρ φ. unfold sat, sat_func.
rewrite fix_sub_eq.
Tactics.program_simpl. unfold projT1, projT2.
destruct X; auto with f_equal.
{ f_equal. apply propositional_extensionality.
epose proof (H _). epose proof (H _).
apply ZifyClasses.eq_iff in H0. apply ZifyClasses.eq_iff in H1. split; intros.
- eapply H0 in H2. exact H2. apply H1. auto.
- eapply H0 in H2. exact H2. apply H1. auto.
}
{ f_equal. apply functional_extensionality; auto. }
{ f_equal. }
Qed.
Notation "rho ⊨ phi" := (sat rho phi) (at level 20).
Theorem sat_dec : forall sz φ, form_size φ <= sz -> forall ρ, {ρ ⊨ φ} + {~ ρ ⊨ φ}.
Proof.
induction sz; intros φ H; destruct φ; simpl in H; try lia; intros ρ.
* right. auto.
* rewrite sat_atom. apply bool_dec.
* right. auto.
* rewrite sat_atom. apply bool_dec.
* destruct (IHsz φ1 ltac:(lia) ρ), (IHsz φ2 ltac:(lia) ρ).
1, 3-4: left; rewrite sat_impl; intros; auto.
congruence.
right. rewrite sat_impl. intros. auto.
* rewrite sat_exs. simpl.
epose proof (IHsz (bsubst_form φ (fvar (var_fresh (form_vars φ))) 0) _).
admit. (* TODO: not trivial, maybe using size-based induction *)
Abort.
End semantics.
Section proof_system.
Context {Σ_vars : FOL_variables}.
Context {Σ_funcs : funcs_signature}.
Context {Σ_preds : preds_signature}.
Fixpoint quantify_term (t : term) (v : vars) (n : nat) : term :=
match t with
| bvar x => bvar x
| fvar x => if var_eqdec v x then bvar n else fvar x
| func f x => func f (Vector.map (fun t => quantify_term t v n) x)
end.
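(* Sanity check (illustrative): quantifying a free variable turns it into
   the supplied bound index, while other free variables are left alone. *)
Goal forall (x : vars) (n : nat), quantify_term (fvar x) x n = bvar n.
Proof.
  intros x n. simpl.
  destruct (var_eqdec x x); [reflexivity | congruence].
Qed.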
Fixpoint quantify_form (φ : form) (v : vars) (n : nat) : form :=
match φ with
| fal => fal
| atom P x => atom P (Vector.map (fun t => quantify_term t v n) x)
| impl x x0 => impl (quantify_form x v n) (quantify_form x0 v n)
| exs x => exs (quantify_form x v (S n))
end.
Reserved Notation "Γ ⊢_FOL form" (at level 50).
Inductive Hilbert_proof_sys (Γ : (list form)): form -> Set :=
| AX (φ : form) : wf_form φ 0 -> List.In φ Γ -> Γ ⊢_FOL φ
| P1F (φ ψ : form) : wf_form φ 0 -> wf_form ψ 0 -> Γ ⊢_FOL impl φ (impl ψ φ)
| P2F (φ ψ ξ : form) : wf_form φ 0 -> wf_form ψ 0 -> wf_form ξ 0 ->
Γ ⊢_FOL impl (impl φ (impl ψ ξ)) (impl (impl φ ψ) (impl φ ξ))
| P3F (φ : form) : wf_form φ 0 ->
Γ ⊢_FOL impl (impl (impl φ fal) fal) φ
| MPF (φ1 φ2 : form) : wf_form φ1 0 -> wf_form (impl φ1 φ2) 0 ->
Γ ⊢_FOL φ1 -> Γ ⊢_FOL impl φ1 φ2 -> Γ ⊢_FOL φ2
| Q5F (φ : form) (t : term) :
wf_form (exs φ) 0 -> wf_term t 0 ->
Γ ⊢_FOL impl (bsubst_form φ t 0) (exs φ)
| Q6F (φ ψ : form)(x : vars) :
wf_form φ 0 -> wf_form ψ 0 -> Γ ⊢_FOL impl φ ψ ->
~List.In x (form_vars ψ)
->
Γ ⊢_FOL impl (exs (quantify_form φ x 0)) ψ
where "Γ ⊢_FOL form" := (Hilbert_proof_sys Γ form).
End proof_system.
Section soundness_completeness.
Context {Σ_vars : FOL_variables}.
Context {Σ_funcs : funcs_signature}.
Context {Σ_preds : preds_signature}.
Notation "rho ⊨_FOL phi" := (sat _ _ _ rho phi) (at level 50).
Notation "Γ ⊢_FOL form" := (Hilbert_proof_sys Γ form) (at level 50).
Definition valid A phi :=
forall (D : Set) (fail : D) (I : interp D) (rho : vars -> D),(forall Phi, List.In Phi A -> sat D fail rho Phi)
-> sat D fail rho phi.
Theorem soundness :
forall φ Γ, Γ ⊢_FOL φ -> valid Γ φ.
Proof.
intros φ Γ H. induction H; subst; unfold valid; intros.
* now apply H.
* do 2 rewrite sat_impl. intros. auto.
* repeat rewrite sat_impl. intros. apply H0; auto.
* repeat rewrite sat_impl. intros.
admit.
* unfold valid in *.
apply IHHilbert_proof_sys1 in H1 as IH1.
apply IHHilbert_proof_sys2 in H1 as IH2. rewrite sat_impl in IH2. now apply IH2.
* rewrite -> sat_impl, -> sat_exs. intros.
exists (eval D fail rho t). clear H.
generalize dependent φ. induction φ; intros; auto.
- admit.
- admit.
- admit.
(* TODO... *)
* rewrite -> sat_impl, -> sat_exs. intros. unfold valid in *.
apply IHHilbert_proof_sys in H0. rewrite sat_impl in H0. apply H0.
destruct H1. simpl in H1.
remember (var_fresh (form_vars (quantify_form φ x 0))) as FF.
admit. (* TODO... *)
Admitted.
Theorem completeness :
forall φ Γ, valid Γ φ -> Γ ⊢_FOL φ. Admitted.
End soundness_completeness.
Section FOL_ML_correspondence.
Context {Σ_vars : FOL_variables}.
Context {Σ_funcs : funcs_signature}.
Context {Σ_preds : preds_signature}.
Inductive Symbols : Set :=
| sym_fun (name : funs)
| sym_pred (name : preds)
| sym_import_definedness (d : Definedness.Symbols).
Lemma Symbols_dec : forall (s1 s2 : Symbols), {s1 = s2} + {s1 <> s2}.
Proof.
repeat decide equality.
apply Σ_funcs.
apply Σ_preds.
Qed.
Instance FOLVars : MLVariables :=
{|
Syntax.evar := vars;
Syntax.svar := vars;
evar_eqdec := var_eqdec;
svar_eqdec := var_eqdec;
evar_countable := var_countable;
svar_countable := var_countable;
evar_infinite := var_infinite;
svar_infinite := var_infinite;
|}.
Instance sig : Signature :=
{|
variables := FOLVars;
symbols := Symbols;
sym_eq := Symbols_dec
|}.
Instance definedness_syntax : Definedness.Syntax :=
{|
Definedness.inj := sym_import_definedness;
|}.
Fixpoint convert_term (t : term) : Pattern :=
match t with
| bvar x => patt_bound_evar x
| fvar x => patt_free_evar x
| func f x => fold_left (fun Acc t => patt_app Acc (convert_term t))
(patt_sym (sym_fun f)) x
end.
Fixpoint convert_form (f : form) : Pattern :=
match f with
| fal => patt_bott
| atom P x => fold_left (fun Acc t => patt_app Acc (convert_term t))
(patt_sym (sym_pred P)) x
| impl x x0 => patt_imp (convert_form x) (convert_form x0)
| exs x => patt_exists (convert_form x)
end.
Inductive AxName :=
| AxDefinedness (name : Definedness.AxiomName)
| AxFun (f : funs)
| AxPred (p : preds).
Fixpoint add_forall_prefix (n : nat) (base : Pattern) {struct n} : Pattern :=
match n with
| 0 => base
| S n' => patt_forall (add_forall_prefix n' base)
end.
Fixpoint make_list1 (n : nat) : list nat :=
match n with
| 0 => []
| S n' => n :: make_list1 n'
end.
Fixpoint make_list0 (n : nat) : list nat :=
match n with
| 0 => []
| S n' => n' :: make_list0 n'
end.
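(* Illustrative sanity checks (not in the original development):
   add_forall_prefix n wraps a pattern in n universal quantifiers;
   make_list1 counts down from n to 1 and make_list0 from n-1 to 0,
   matching the de Bruijn indices introduced by that prefix. *)
Goal make_list1 3 = [3; 2; 1].
Proof. reflexivity. Qed.
Goal make_list0 3 = [2; 1; 0].
Proof. reflexivity. Qed.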
Definition axiom (name : AxName) : Pattern :=
match name with
| AxDefinedness name' => Definedness.axiom name'
| AxFun f => add_forall_prefix (ar_funs f) (patt_exists (patt_equal
(List.fold_left
(fun Acc (x : nat) => patt_app Acc (patt_bound_evar x))
(make_list1 (ar_funs f)) (patt_sym (sym_fun f)))
(patt_bound_evar 0)))
| AxPred p => let φ := (List.fold_left
(fun Acc (x : nat) => patt_app Acc (patt_bound_evar x))
(make_list0 (ar_preds p)) (patt_sym (sym_pred p))) in
add_forall_prefix (ar_preds p)
(patt_or (patt_equal φ patt_top) (patt_equal φ patt_bott))
end.
Definition named_axioms : NamedAxioms := {| NAName := AxName; NAAxiom := axiom; |}.
Definition base_FOL_theory : Theory := theory_of_NamedAxioms named_axioms.
Definition from_FOL_theory (Γ : list form) : Theory :=
List.fold_right (fun x Acc => {[ convert_form x ]} ∪ Acc) base_FOL_theory Γ.
Notation "Γ ⊢_FOL form" := (Hilbert_proof_sys Γ form) (at level 50).
Notation "Γ ⊢_ML form" := (ML_proof_system Γ form) (at level 50).
Theorem closed_term_FOL_ML : forall t n m,
wf_term t n -> well_formed_closed_aux (convert_term t) n m.
Proof.
induction t; intros n m H; auto.
* simpl in *.
remember (@patt_sym sig (sym_fun F)) as start.
assert (is_true (well_formed_closed_aux start n m)). { rewrite Heqstart. auto. }
clear Heqstart. generalize dependent start. induction v.
- simpl. auto.
- intros start H0. simpl. simpl in H. separate.
apply IHv; auto.
intros t H1 n1 m0 H2. apply IH. now constructor 2. auto.
simpl. rewrite -> H0, -> IH; auto. constructor.
Qed.
Theorem closed_form_FOL_ML : forall φ n m,
wf_form φ n -> well_formed_closed_aux (convert_form φ) n m.
Proof.
induction φ; intros n m H; simpl; auto.
* simpl in *.
remember (@patt_sym sig (sym_pred P)) as start.
assert (is_true (well_formed_closed_aux start n m)). { rewrite Heqstart. auto. }
clear Heqstart. generalize dependent start. induction v.
- simpl. auto.
- simpl in *. separate. intros start H.
apply IHv; auto. simpl. rewrite -> closed_term_FOL_ML, -> H; auto.
* simpl in *. separate. subst. rewrite -> IHφ1, -> IHφ2; auto.
Qed.
Theorem positive_term_FOL_ML : forall t,
well_formed_positive (convert_term t).
Proof.
induction t; auto.
* simpl. remember (@patt_sym sig (sym_fun F)) as start.
assert (is_true (well_formed_positive start)) by now rewrite Heqstart.
clear Heqstart. generalize dependent start.
induction v; intros start H; auto.
simpl. apply IHv.
- intros. apply IH; auto. now constructor 2.
- simpl. rewrite -> H, -> IH; auto. constructor.
Qed.
Theorem positive_form_FOL_ML : forall φ,
well_formed_positive (convert_form φ).
Proof.
induction φ; auto.
* simpl. remember (@patt_sym sig (sym_pred P)) as start.
assert (is_true (well_formed_positive start)) by now rewrite Heqstart.
clear Heqstart. generalize dependent start. induction v; intros start H; auto.
simpl. apply IHv.
simpl. rewrite -> H, -> positive_term_FOL_ML; auto.
* simpl. rewrite -> IHφ1, -> IHφ2; auto.
Qed.
Corollary wf_term_FOL_ML : forall t,
wf_term t 0 -> well_formed (convert_term t).
Proof.
intros t H. unfold well_formed. separate. split.
* apply positive_term_FOL_ML.
* now apply closed_term_FOL_ML.
Qed.
Corollary wf_form_FOL_ML : forall φ,
wf_form φ 0 -> well_formed (convert_form φ).
Proof.
intros φ H. unfold well_formed. separate. split.
* apply positive_form_FOL_ML.
* now apply closed_form_FOL_ML.
Qed.
Theorem in_FOL_theory : forall Γ x,
List.In x Γ -> convert_form x ∈ from_FOL_theory Γ.
Proof.
induction Γ; intros x H.
* inversion H.
* simpl. inversion H; subst.
- apply sets.elem_of_union_l. now apply sets.elem_of_singleton_2.
- apply IHΓ in H0.
now apply sets.elem_of_union_r.
Qed.
Hint Resolve wf_form_FOL_ML : core.
Hint Resolve wf_term_FOL_ML : core.
Lemma pointwise_fold : forall n0 (v : @vec term n0) start (F : Pattern -> Pattern),
(forall (p1 p2 : Pattern), F (patt_app p1 p2) = patt_app (F p1) (F p2)) ->
F (fold_left (λ (Acc : Pattern) (t : term), (Acc $ convert_term t)%ml)
start v) =
(fold_left (λ (Acc : Pattern) (t : term), (Acc $ F (convert_term t))%ml)
(F start) v).
Proof.
induction v; intros start F H.
* simpl. auto.
* simpl. rewrite -> IHv, -> H. auto. apply H.
Qed.
Corollary evar_quantify_fold : forall n0 (v : @vec term n0) start x n,
evar_quantify x n (fold_left (λ (Acc : Pattern) (t : term), (Acc $ convert_term t)%ml)
start v) =
(fold_left (λ (Acc : Pattern) (t : term), (Acc $ evar_quantify x n (convert_term t))%ml)
(evar_quantify x n start) v).
Proof.
intros. apply pointwise_fold. intros. auto.
Qed.
(** This is boilerplate, analogous to [evar_quantify_fold] above *)
Corollary bevar_subst_fold : forall n0 (v : @vec term n0) start x n,
bevar_subst (fold_left (λ (Acc : Pattern) (t : term), (Acc $ convert_term t)%ml)
start v) x n =
(fold_left (λ (Acc : Pattern) (t : term), (Acc $ bevar_subst (convert_term t) x n)%ml)
(bevar_subst start x n) v).
Proof.
induction v; intros.
* simpl. auto.
* simpl. rewrite IHv. auto.
Qed.
Theorem quantify_term_correspondence :
forall t n x, convert_term (quantify_term t x n) =
evar_quantify x n (convert_term t).
Proof.
induction t; intros n x'; auto; cbn.
* now destruct (var_eqdec x' x).
* remember (@patt_sym sig (sym_fun F)) as start.
rewrite fold_left_map.
assert (start = evar_quantify x' n start) by now rewrite Heqstart.
clear Heqstart.
generalize dependent start.
induction v; intros; simpl; auto.
rewrite IHv.
- intros. apply IH. now constructor 2.
- simpl. rewrite <- H, IH, double_evar_quantify; auto. constructor.
- do 2 rewrite evar_quantify_fold.
simpl. rewrite -> IH, -> double_evar_quantify. auto. simpl.
constructor.
Qed.
Theorem quantify_form_correspondence :
forall φ n x, convert_form (quantify_form φ x n) =
evar_quantify x n (convert_form φ).
Proof.
induction φ; intros n x'; auto; cbn.
* remember (@patt_sym sig (sym_pred P)) as start.
rewrite fold_left_map.
assert (start = evar_quantify x' n start) by now rewrite Heqstart.
clear Heqstart.
generalize dependent start.
induction v; intros; simpl; auto.
rewrite IHv.
- simpl. rewrite <- H, quantify_term_correspondence, double_evar_quantify; auto.
- do 2 rewrite evar_quantify_fold.
simpl. rewrite -> quantify_term_correspondence, -> double_evar_quantify. auto.
* now rewrite -> IHφ1, -> IHφ2.
* now rewrite IHφ.
Qed.
Theorem term_vars_free_vars_notin :
forall t x, ~List.In x (term_vars t) -> x ∉ (free_evars (convert_term t)).
Proof.
induction t; intros x' H.
* simpl. intros H0. apply H. simpl. apply sets.elem_of_singleton_1 in H0. auto.
* intro. simpl in H0. inversion H0.
* simpl in H. simpl.
remember (@patt_sym sig (sym_fun F)) as start.
assert (x' ∉ free_evars start) by now rewrite Heqstart.
clear Heqstart. generalize dependent start.
induction v; intros.
- auto.
- simpl. epose proof (IHv _ _ (start $ convert_term h)%ml _). clear IHv.
apply H1.
Unshelve.
intros. apply IH. now constructor 2. auto.
simpl in H. now apply notin_app_r in H.
simpl in H. apply notin_app_l in H. apply IH in H.
simpl. intro. apply elem_of_union in H1; inversion H1; contradiction.
constructor.
Qed.
Theorem form_vars_free_vars_notin :
forall φ x, ~List.In x (form_vars φ) -> x ∉ (free_evars (convert_form φ)).
Proof.
induction φ; intros x' H; auto.
* intro. inversion H0.
* simpl in H. simpl.
remember (@patt_sym sig (sym_pred P)) as start.
assert (x' ∉ free_evars start) by now rewrite Heqstart.
clear Heqstart. generalize dependent start.
induction v; intros.
- auto.
- simpl. epose proof (IHv _ (start $ convert_term h)%ml _). clear IHv.
apply H1.
Unshelve.
simpl in H. now apply notin_app_r in H.
simpl in H. apply notin_app_l in H. apply term_vars_free_vars_notin in H.
simpl. intro. apply elem_of_union in H1; inversion H1; contradiction.
* simpl in *. apply notin_app_r in H as H'. apply notin_app_l in H.
apply IHφ1 in H. apply IHφ2 in H'.
apply sets.not_elem_of_union. auto.
Qed.
Theorem bevar_subst_corr_term :
forall b t n, wf_term t n ->
convert_term (bsubst_term b t n) =
bevar_subst (convert_term b) (convert_term t) n.
Proof.
induction b; intros t n H; auto.
* simpl. now break_match_goal.
* simpl. remember (@patt_sym sig (sym_fun F)) as start.
rewrite fold_left_map.
assert (start = bevar_subst start (convert_term t) n) by now rewrite Heqstart.
clear Heqstart.
generalize dependent start.
induction v; intros; simpl; auto.
rewrite IHv.
- intros. apply IH. constructor 2; auto. auto.
- simpl. erewrite <- H0, IH, double_bevar_subst; auto.
apply closed_term_FOL_ML. auto. constructor.
- do 2 rewrite bevar_subst_fold.
simpl. erewrite IH, double_bevar_subst; auto.
apply closed_term_FOL_ML. auto. constructor.
Unshelve. all: exact 0.
Qed.
Theorem bevar_subst_corr_form :
forall φ t n, wf_term t n ->
convert_form (bsubst_form φ t n) =
bevar_subst (convert_form φ) (convert_term t) n.
Proof.
induction φ; intros t n H; auto.
* simpl.
remember (@patt_sym sig (sym_pred P)) as start.
rewrite fold_left_map.
assert (start = bevar_subst start (convert_term t) n) by now rewrite Heqstart.
clear Heqstart. revert H.
generalize dependent start.
induction v; intros; simpl; auto.
rewrite IHv.
- intros. rewrite bevar_subst_corr_term; auto.
simpl. erewrite double_bevar_subst. rewrite <- H0. auto.
apply closed_term_FOL_ML. auto.
- auto.
- do 2 rewrite bevar_subst_fold.
simpl. erewrite bevar_subst_corr_term, double_bevar_subst. auto.
apply closed_term_FOL_ML; auto. auto.
* simpl. now rewrite -> IHφ1, -> IHφ2.
* simpl. rewrite IHφ. auto. eapply wf_increase_term. apply H. lia. auto.
Unshelve. all: exact 0.
Qed.
Theorem ax_in :
forall Γ F, axiom F ∈ from_FOL_theory Γ.
Proof.
induction Γ; intros F.
* simpl. unfold base_FOL_theory. econstructor.
reflexivity.
* simpl. apply sets.elem_of_union_r. apply IHΓ.
Qed.
Lemma add_forall_prefix_subst : forall n φ ψ m,
bevar_subst (add_forall_prefix n φ) ψ m = add_forall_prefix n (bevar_subst φ ψ (m + n)).
Proof.
induction n; intros.
* cbn. auto.
* simpl. rewrite -> IHn, -> Nat.add_succ_comm. auto.
Qed.
Lemma subst_make_list : forall n m ψ start, m > n ->
bevar_subst
(List.fold_left
(λ (Acc : Pattern) (x : nat),
(Acc $ patt_bound_evar x)%ml)
(make_list1 n) start)
ψ m =
(List.fold_left
(λ (Acc : Pattern) (x : nat),
(Acc $ patt_bound_evar x)%ml)
(make_list1 n) (bevar_subst start ψ m)).
Proof.
induction n; intros; cbn; auto.
rewrite IHn. lia. cbn. break_match_goal.
* congruence.
* lia.
* auto.
Qed.
Lemma term_mu_free :
forall t, mu_free (convert_term t).
Proof.
induction t; auto.
simpl. remember (@patt_sym sig (sym_fun F)) as start.
assert (is_true (mu_free start)) by (rewrite Heqstart; constructor). clear Heqstart.
generalize dependent start.
induction v; simpl.
* intros. auto.
* intros. eapply IHv.
intros. apply IH. constructor 2; auto.
simpl. rewrite H. auto. apply IH. constructor.
Qed.
Lemma form_mu_free:
forall φ, mu_free (convert_form φ).
Proof.
induction φ; auto.
* simpl. remember (@patt_sym sig (sym_pred P)) as start.
assert (is_true (mu_free start)) by (rewrite Heqstart; constructor). clear Heqstart.
generalize dependent start. induction v; simpl; auto.
intros start H. eapply IHv.
simpl. now rewrite -> H, -> term_mu_free.
* simpl. now rewrite -> IHφ1, -> IHφ2.
Qed.
Lemma well_formed_closed_prefix φ : forall n k m,
is_true (well_formed_closed_aux (add_forall_prefix n φ) k m) <->
is_true (well_formed_closed_aux φ (n + k) m).
Proof.
induction n; simpl; auto; intros.
do 2 rewrite andb_true_r.
rewrite -> IHn, -> NPeano.Nat.add_succ_r. auto.
Qed.
Lemma well_formed_positive_prefix φ : forall n,
is_true (well_formed_positive (add_forall_prefix n φ)) <->
is_true (well_formed_positive φ).
Proof.
induction n; simpl; auto.
do 2 rewrite andb_true_r. auto.
Qed.
Lemma well_formed_closed_list n : forall start m k, m > n ->
is_true (well_formed_closed_aux start m k) ->
is_true (well_formed_closed_aux
(List.fold_left
(λ (Acc : Pattern) (x : nat), (Acc $ patt_bound_evar x)%ml)
(make_list1 n) start )
m k).
Proof.
induction n; intros start m k H H0; simpl; auto.
apply (IHn). lia. simpl. rewrite H0. simpl. apply NPeano.Nat.ltb_lt. lia.
Qed.
Lemma well_formed_positive_list n : forall start,
is_true (well_formed_positive start) ->
is_true (well_formed_positive
(List.fold_left
(λ (Acc : Pattern) (x : nat), (Acc $ patt_bound_evar x)%ml)
(make_list1 n) start)).
Proof.
induction n; intros; simpl; auto.
apply (IHn). simpl. rewrite H. auto.
Qed.
Lemma well_formed_closed_list0 n : forall start m k, m >= n ->
is_true (well_formed_closed_aux start m k) ->
is_true (well_formed_closed_aux
(List.fold_left
(λ (Acc : Pattern) (x : nat), (Acc $ patt_bound_evar x)%ml)
(make_list0 n) start)
m k).
Proof.
induction n; intros start m k H H0; simpl; auto.
apply (IHn). lia. simpl. rewrite H0. simpl. apply NPeano.Nat.ltb_lt. lia.
Qed.
Lemma well_formed_positive_list0 n : forall start,
is_true (well_formed_positive start) ->
is_true (well_formed_positive
(List.fold_left
(λ (Acc : Pattern) (x : nat), (Acc $ patt_bound_evar x)%ml)
(make_list0 n) start)).
Proof.
induction n; intros start H; simpl; auto.
apply (IHn). simpl. rewrite H. auto.
Qed.
Theorem ax_wf :
forall F, is_true (well_formed (axiom F)).
Proof.
unfold axiom. intros F.
break_match_goal.
* unfold Definedness.axiom. destruct name. simpl. constructor.
* unfold well_formed, well_formed_closed. apply andb_true_intro. split.
- apply well_formed_positive_prefix. simpl. rewrite well_formed_positive_list. auto.
auto.
- apply well_formed_closed_prefix. simpl. rewrite well_formed_closed_list.
simpl. auto. lia. all: now simpl.
* unfold well_formed, well_formed_closed. apply andb_true_intro. split.
- apply well_formed_positive_prefix. simpl. rewrite well_formed_positive_list0. auto.
auto.
- apply well_formed_closed_prefix. simpl. rewrite well_formed_closed_list0.
simpl. auto. lia. all: now simpl.
Qed.
Proposition term_functionality :
forall t Γ, wf_term t 0 ->
from_FOL_theory Γ ⊢_ML patt_exists (patt_equal (convert_term t) (patt_bound_evar 0)).
Proof.
induction t using term_rect; intros.
* simpl.
pose proof (Ex_quan (from_FOL_theory Γ ) (patt_equal (patt_free_evar x) (patt_bound_evar 0)) x).
simpl in H0. eapply Modus_ponens. 4: exact H0.
all: auto.
epose proof (@patt_equal_refl _ _ (patt_free_evar x) (from_FOL_theory Γ) _).
exact H1.
* simpl in H. apply Nat.ltb_lt in H. lia.
* assert (from_FOL_theory Γ ⊢_ML axiom (AxFun F)). {
apply hypothesis. apply ax_wf. apply ax_in.
} simpl in H1, H0.
simpl. remember (@patt_sym sig (sym_fun F)) as start.
assert (forall n ψ, bevar_subst start ψ n = start) as HIND.
{ intros. rewrite Heqstart. auto. }
assert (is_true (mu_free start)) as HMUF. { rewrite Heqstart. constructor. }
assert (is_true (well_formed start)) as WFS. { rewrite Heqstart. auto. }
clear Heqstart. generalize dependent start.
revert H0. induction v; intros.
- cbn. simpl in H1. exact H1.
- cbn in *. eapply (IHv _ _ (start $ convert_term h)%ml).
clear IHv. separate.
specialize (H h ltac:(constructor) Γ E2).
remember (add_forall_prefix n
(ex ,
patt_equal
(List.fold_left
(λ (Acc : Pattern) (x : nat), (Acc $ patt_bound_evar x)%ml)
(make_list1 n) (start $ patt_bound_evar (S n))%ml)
BoundVarSugar.b0)) as A.
pose proof (@forall_functional_subst _ _ A (convert_term h) (from_FOL_theory Γ)).
assert (mu_free A). {
rewrite HeqA. clear H HIND H1 HeqA E1 E2 H0 h v Γ A F WFS.
generalize dependent start. induction n; simpl.
* intros. repeat constructor. all: rewrite HMUF; auto.
* intros. simpl. rewrite IHn; auto. simpl. now rewrite HMUF.
}
assert (well_formed (all , A)) as WfA.
{
rewrite HeqA. clear H E1 E2 H2 HIND H1 H0 h v Γ HeqA A F HMUF.
unfold well_formed, well_formed_closed.
apply eq_sym, andb_true_eq in WFS. destruct WFS.
apply andb_true_intro. split.
* clear H0. apply well_formed_positive_all, well_formed_positive_prefix.
simpl. generalize dependent start. induction n; simpl; intros.
- rewrite <- H. auto.
- rewrite IHn; auto. simpl. rewrite <- H. auto.
* clear H. apply well_formed_closed_all, well_formed_closed_prefix.
simpl. replace (0 <? S (n + 1)) with true.
2: apply eq_sym, NPeano.Nat.ltb_lt; lia.
repeat rewrite andb_true_r. rewrite well_formed_closed_list; auto.
lia. simpl. apply andb_true_intro.
split.
- eapply wfc_aux_extend. apply eq_sym, H0. all: lia.
- apply NPeano.Nat.ltb_lt. lia.
}
assert (from_FOL_theory Γ ⊢_ML (all , A and ex , patt_equal (convert_term h) BoundVarSugar.b0 )). {
apply conj_intro_meta; auto.
unfold well_formed. simpl. rewrite positive_term_FOL_ML.
unfold well_formed_closed. simpl. apply wf_increase_term with (n' := 1) in E2. 2: lia.
eapply closed_term_FOL_ML in E2.
rewrite E2. auto.
}
apply Modus_ponens in H0; auto.
2: {
unfold well_formed in *. simpl. rewrite positive_term_FOL_ML.
unfold well_formed_closed in *. simpl. apply wf_increase_term with (n' := 1) in E2. 2: lia.
eapply closed_term_FOL_ML in E2.
rewrite E2. separate.
simpl in E0, E3. do 4 rewrite andb_true_r in E0, E3.
rewrite -> E0, -> E3. auto.
}
2: {
unfold well_formed in *. simpl. rewrite positive_term_FOL_ML.
unfold well_formed_closed in *. simpl.
eapply wf_increase_term with (n' := 1) in E2 as H4'. 2: lia.
eapply closed_term_FOL_ML in H4'.
separate. simpl in E0, E3. do 4 rewrite andb_true_r in E0, E3.
rewrite -> E0, -> E3.
rewrite -> bevar_subst_positive, -> bevar_subst_closed, -> closed_term_FOL_ML; auto.
eapply wf_increase_term. eassumption. lia.
now apply closed_term_FOL_ML.
now apply positive_term_FOL_ML.
}
simpl in H0.
rewrite -> HeqA, -> add_forall_prefix_subst in H0.
simpl Nat.add in H0.
replace (bevar_subst
(ex ,
@patt_equal sig definedness_syntax
(List.fold_left
(λ (Acc : Pattern) (x : nat),
(Acc $ patt_bound_evar x)%ml)
(make_list1 n) (start $ patt_bound_evar (S n))%ml)
BoundVarSugar.b0) (convert_term h) n) with
((ex ,
@patt_equal sig definedness_syntax
(bevar_subst (List.fold_left
(λ (Acc : Pattern) (x : nat),
(Acc $ patt_bound_evar x)%ml)
(make_list1 n) (start $ patt_bound_evar (S n))%ml) (convert_term h) (S n))
BoundVarSugar.b0)) in H by auto.
simpl in H0.
rewrite subst_make_list in H0. lia.
simpl in H0. rewrite HIND in H0. break_match_hyp.
+ lia.
+ exact H0.
+ lia.
(** asserted hypotheses *)
+ apply wf_ex_to_wf_body. unfold well_formed, well_formed_closed in *.
do 2 separate. simpl in E0, E3.
do 4 rewrite andb_true_r in E0, E3. simpl. now rewrite -> E0, -> E3.
+ intros. simpl. rewrite HIND. erewrite well_formed_bevar_subst.
auto.
2: { apply closed_term_FOL_ML. inversion H0. separate. eassumption. }
lia.
+ simpl. now rewrite -> HMUF, -> term_mu_free.
+ unfold well_formed, well_formed_closed in *.
simpl. apply eq_sym, andb_true_eq in WFS. destruct WFS.
rewrite <- H2, <- H3.
simpl. rewrite -> positive_term_FOL_ML, -> closed_term_FOL_ML; auto.
separate. auto.
Unshelve.
4-7: exact 0.
** auto.
** intros. apply H; auto. constructor 2; auto.
** now separate.
Qed.
Theorem arrow_1 : forall (φ : form) (Γ : list form),
Γ ⊢_FOL φ
->
from_FOL_theory Γ ⊢_ML convert_form φ.
Proof.
intros φ Γ IH. induction IH; intros.
* apply hypothesis. now apply wf_form_FOL_ML. now apply in_FOL_theory.
* simpl. apply P1; auto.
* apply P2; auto.
* apply P3; auto.
* eapply Modus_ponens. 3: exact IHIH1. 3: exact IHIH2. all: auto.
simpl in i0. separate. auto.
* simpl.
epose proof (term_functionality t Γ i0).
pose proof (@exists_functional_subst _ _ (convert_form φ) (convert_term t) (from_FOL_theory Γ)).
simpl in H0. rewrite bevar_subst_corr_form; auto.
eapply and_impl_patt2. 4: exact H. 4: apply H0.
all: unfold well_formed, well_formed_closed; simpl.
all: try rewrite -> closed_term_FOL_ML, -> positive_term_FOL_ML; auto.
apply wf_increase_term with (n := 0); auto.
2: try rewrite -> closed_form_FOL_ML, -> positive_form_FOL_ML; auto.
rewrite -> bevar_subst_well_formedness, -> well_formed_positive_bevar_subst.
auto.
1, 6: apply form_mu_free.
apply positive_form_FOL_ML.
apply positive_term_FOL_ML.
apply closed_form_FOL_ML; auto.
apply closed_term_FOL_ML; auto.
apply wf_ex_to_wf_body. apply wf_form_FOL_ML in i. exact i.
* simpl. rewrite quantify_form_correspondence. eapply Ex_gen; auto.
apply form_vars_free_vars_notin. auto.
Qed.
End FOL_ML_correspondence.
Section tests.
Inductive PA_funcs : Set :=
Zero : PA_funcs
| Succ : PA_funcs
| Plus : PA_funcs
| Mult : PA_funcs.
Theorem pa_funs_eqdec : EqDecision PA_funcs.
Proof.
unfold EqDecision, Decision; intros. decide equality.
Qed.
Definition PA_funcs_ar (f : PA_funcs ) :=
match f with
| Zero => 0
| Succ => 1
| Plus => 2
| Mult => 2
end.
Inductive PA_preds : Set :=
Eq : PA_preds.
Theorem pa_preds_eqdec : EqDecision PA_preds.
Proof.
unfold EqDecision, Decision; intros. decide equality.
Qed.
Definition PA_preds_ar (P : PA_preds) :=
match P with
| Eq => 2
end.
Instance PA_funcs_signature : funcs_signature :=
{| funs := PA_funcs ; funs_eqdec := pa_funs_eqdec; ar_funs := PA_funcs_ar |}.
Instance PA_preds_signature : preds_signature :=
{| preds := PA_preds ; preds_eqdec := pa_preds_eqdec; ar_preds := PA_preds_ar |}.
Context {Σ_vars : FOL_variables}.
Instance FOLVars2 : MLVariables :=
{|
Syntax.evar := vars;
evar_eqdec := var_eqdec;
svar_eqdec := var_eqdec;
evar_countable := var_countable;
svar_countable := var_countable;
Syntax.svar := vars;
evar_infinite := var_infinite;
svar_infinite := var_infinite;
|}.
Instance sig2 : Signature :=
{|
variables := FOLVars;
symbols := Symbols;
sym_eq := Symbols_dec
|}.
Instance definedness_syntax2 : Definedness.Syntax :=
{|
Definedness.inj := sym_import_definedness;
|}.
Goal axiom (AxFun Mult) = patt_forall (patt_forall (patt_exists (patt_equal
(patt_app (patt_app (patt_sym (sym_fun Mult)) (patt_bound_evar 2)) (patt_bound_evar 1))
(patt_bound_evar 0)))).
Proof.
simpl. reflexivity.
Qed.
Goal axiom (AxPred Eq) = patt_forall (patt_forall (patt_or (patt_equal
(patt_app (patt_app (patt_sym (sym_pred Eq)) (patt_bound_evar 1)) (patt_bound_evar 0)) patt_top)
(patt_equal
(patt_app (patt_app (patt_sym (sym_pred Eq)) (patt_bound_evar 1)) (patt_bound_evar 0)) patt_bott))
).
Proof.
simpl. reflexivity.
Qed.
Goal convert_term (func Plus (cons (func Zero nil) (cons (func Succ (cons (func Zero nil) nil)) nil))) =
patt_app (patt_app (patt_sym (sym_fun Plus)) (patt_sym (sym_fun Zero))) (patt_app (patt_sym (sym_fun Succ)) (patt_sym (sym_fun Zero))).
Proof.
simpl. reflexivity.
Qed.
End tests. |
Load LFindLoad.
From lfind Require Import LFind.
From QuickChick Require Import QuickChick.
From adtind Require Import goal33.
Derive Show for natural.
Derive Arbitrary for natural.
Instance Dec_Eq_natural : Dec_Eq natural.
Proof. dec_eq. Qed.
Lemma conj19eqsynthconj1 : forall (lv0 : natural) (lv1 : natural) (lv2 : natural), (@eq natural (plus lv0 (mult lv1 lv2)) (plus Zero (plus (mult lv1 lv2) lv0))).
Admitted.
QuickChick conj19eqsynthconj1.
|
(* Title: HOL/SET_Protocol/Merchant_Registration.thy
Author: Giampaolo Bella
Author: Fabio Massacci
Author: Lawrence C Paulson
*)
section\<open>The SET Merchant Registration Protocol\<close>
theory Merchant_Registration
imports Public_SET
begin
text\<open>Compared with Cardholder Registration, \<open>KeyCryptKey\<close> is not
needed: no session key encrypts another. Instead we
prove the "key compromise" theorems for sets KK that contain no private
encryption keys (@{term "priEK C"}).\<close>
inductive_set
set_mr :: "event list set"
where
Nil: \<comment>\<open>Initial trace is empty\<close>
"[] \<in> set_mr"
| Fake: \<comment>\<open>The spy MAY say anything he CAN say.\<close>
"[| evsf \<in> set_mr; X \<in> synth (analz (knows Spy evsf)) |]
==> Says Spy B X # evsf \<in> set_mr"
| Reception: \<comment>\<open>If A sends a message X to B, then B might receive it\<close>
"[| evsr \<in> set_mr; Says A B X \<in> set evsr |]
==> Gets B X # evsr \<in> set_mr"
| SET_MR1: \<comment>\<open>RegFormReq: M requires a registration form to a CA\<close>
"[| evs1 \<in> set_mr; M = Merchant k; Nonce NM1 \<notin> used evs1 |]
==> Says M (CA i) \<lbrace>Agent M, Nonce NM1\<rbrace> # evs1 \<in> set_mr"
| SET_MR2: \<comment>\<open>RegFormRes: CA replies with the registration form and the
certificates for her keys\<close>
"[| evs2 \<in> set_mr; Nonce NCA \<notin> used evs2;
Gets (CA i) \<lbrace>Agent M, Nonce NM1\<rbrace> \<in> set evs2 |]
==> Says (CA i) M \<lbrace>sign (priSK (CA i)) \<lbrace>Agent M, Nonce NM1, Nonce NCA\<rbrace>,
cert (CA i) (pubEK (CA i)) onlyEnc (priSK RCA),
cert (CA i) (pubSK (CA i)) onlySig (priSK RCA) \<rbrace>
# evs2 \<in> set_mr"
| SET_MR3:
\<comment>\<open>CertReq: M submits the key pair to be certified. The Notes
event allows KM1 to be lost if M is compromised. Piero remarks
that the agent mentioned inside the signature is not verified to
correspond to M. As in CR, each Merchant has fixed key pairs. M
is only optionally required to send NCA back, so M doesn't do so
in the model\<close>
"[| evs3 \<in> set_mr; M = Merchant k; Nonce NM2 \<notin> used evs3;
Key KM1 \<notin> used evs3; KM1 \<in> symKeys;
Gets M \<lbrace>sign (invKey SKi) \<lbrace>Agent X, Nonce NM1, Nonce NCA\<rbrace>,
cert (CA i) EKi onlyEnc (priSK RCA),
cert (CA i) SKi onlySig (priSK RCA) \<rbrace>
\<in> set evs3;
Says M (CA i) \<lbrace>Agent M, Nonce NM1\<rbrace> \<in> set evs3 |]
==> Says M (CA i)
\<lbrace>Crypt KM1 (sign (priSK M) \<lbrace>Agent M, Nonce NM2,
Key (pubSK M), Key (pubEK M)\<rbrace>),
Crypt EKi (Key KM1)\<rbrace>
# Notes M \<lbrace>Key KM1, Agent (CA i)\<rbrace>
# evs3 \<in> set_mr"
| SET_MR4:
\<comment>\<open>CertRes: CA issues the certificates for merSK and merEK,
while checking never to have certified them even
separately. NOTE: In Cardholder Registration the
corresponding rule (6) doesn't use the "sign" primitive. "The
CertRes shall be signed but not encrypted if the EE is a Merchant
or Payment Gateway."-- Programmer's Guide, page 191.\<close>
"[| evs4 \<in> set_mr; M = Merchant k;
merSK \<notin> symKeys; merEK \<notin> symKeys;
Notes (CA i) (Key merSK) \<notin> set evs4;
Notes (CA i) (Key merEK) \<notin> set evs4;
Gets (CA i) \<lbrace>Crypt KM1 (sign (invKey merSK)
\<lbrace>Agent M, Nonce NM2, Key merSK, Key merEK\<rbrace>),
Crypt (pubEK (CA i)) (Key KM1) \<rbrace>
\<in> set evs4 |]
==> Says (CA i) M \<lbrace>sign (priSK(CA i)) \<lbrace>Agent M, Nonce NM2, Agent(CA i)\<rbrace>,
cert M merSK onlySig (priSK (CA i)),
cert M merEK onlyEnc (priSK (CA i)),
cert (CA i) (pubSK (CA i)) onlySig (priSK RCA)\<rbrace>
# Notes (CA i) (Key merSK)
# Notes (CA i) (Key merEK)
# evs4 \<in> set_mr"
text\<open>Note that possibility proofs are missing.\<close>
declare Says_imp_knows_Spy [THEN parts.Inj, dest]
declare parts.Body [dest]
declare analz_into_parts [dest]
declare Fake_parts_insert_in_Un [dest]
text\<open>General facts about message reception\<close>
lemma Gets_imp_Says:
"[| Gets B X \<in> set evs; evs \<in> set_mr |] ==> \<exists>A. Says A B X \<in> set evs"
apply (erule rev_mp)
apply (erule set_mr.induct, auto)
done
lemma Gets_imp_knows_Spy:
"[| Gets B X \<in> set evs; evs \<in> set_mr |] ==> X \<in> knows Spy evs"
by (blast dest!: Gets_imp_Says Says_imp_knows_Spy)
declare Gets_imp_knows_Spy [THEN parts.Inj, dest]
subsubsection\<open>Proofs on keys\<close>
text\<open>Spy never sees an agent's private keys! (unless it's bad at start)\<close>
lemma Spy_see_private_Key [simp]:
"evs \<in> set_mr
==> (Key(invKey (publicKey b A)) \<in> parts(knows Spy evs)) = (A \<in> bad)"
apply (erule set_mr.induct)
apply (auto dest!: Gets_imp_knows_Spy [THEN parts.Inj])
done
lemma Spy_analz_private_Key [simp]:
"evs \<in> set_mr ==>
(Key(invKey (publicKey b A)) \<in> analz(knows Spy evs)) = (A \<in> bad)"
by auto
declare Spy_see_private_Key [THEN [2] rev_iffD1, dest!]
declare Spy_analz_private_Key [THEN [2] rev_iffD1, dest!]
(*This is to state that the signed keys received in step 4
are into parts - rather than installing sign_def each time.
Needed in Spy_see_priSK_RCA, Spy_see_priEK and in Spy_see_priSK
Goal "[|Gets C \<lbrace>Crypt KM1
(sign K \<lbrace>Agent M, Nonce NM2, Key merSK, Key merEK\<rbrace>), X\<rbrace>
\<in> set evs; evs \<in> set_mr |]
==> Key merSK \<in> parts (knows Spy evs) \<and>
Key merEK \<in> parts (knows Spy evs)"
by (fast_tac (claset() addss (simpset())) 1);
qed "signed_keys_in_parts";
???*)
text\<open>Proofs on certificates -
they hold, as in CR, because RCA's keys are secure\<close>
lemma Crypt_valid_pubEK:
"[| Crypt (priSK RCA) \<lbrace>Agent (CA i), Key EKi, onlyEnc\<rbrace>
\<in> parts (knows Spy evs);
evs \<in> set_mr |] ==> EKi = pubEK (CA i)"
apply (erule rev_mp)
apply (erule set_mr.induct, auto)
done
lemma certificate_valid_pubEK:
"[| cert (CA i) EKi onlyEnc (priSK RCA) \<in> parts (knows Spy evs);
evs \<in> set_mr |]
==> EKi = pubEK (CA i)"
apply (unfold cert_def signCert_def)
apply (blast dest!: Crypt_valid_pubEK)
done
lemma Crypt_valid_pubSK:
"[| Crypt (priSK RCA) \<lbrace>Agent (CA i), Key SKi, onlySig\<rbrace>
\<in> parts (knows Spy evs);
evs \<in> set_mr |] ==> SKi = pubSK (CA i)"
apply (erule rev_mp)
apply (erule set_mr.induct, auto)
done
lemma certificate_valid_pubSK:
"[| cert (CA i) SKi onlySig (priSK RCA) \<in> parts (knows Spy evs);
evs \<in> set_mr |] ==> SKi = pubSK (CA i)"
apply (unfold cert_def signCert_def)
apply (blast dest!: Crypt_valid_pubSK)
done
lemma Gets_certificate_valid:
"[| Gets A \<lbrace> X, cert (CA i) EKi onlyEnc (priSK RCA),
cert (CA i) SKi onlySig (priSK RCA)\<rbrace> \<in> set evs;
evs \<in> set_mr |]
==> EKi = pubEK (CA i) & SKi = pubSK (CA i)"
by (blast dest: certificate_valid_pubEK certificate_valid_pubSK)
text\<open>Nobody can have used non-existent keys!\<close>
lemma new_keys_not_used [rule_format,simp]:
"evs \<in> set_mr
==> Key K \<notin> used evs --> K \<in> symKeys -->
K \<notin> keysFor (parts (knows Spy evs))"
apply (erule set_mr.induct, simp_all)
apply (force dest!: usedI keysFor_parts_insert) \<comment>\<open>Fake\<close>
apply force \<comment>\<open>Message 2\<close>
apply (blast dest: Gets_certificate_valid) \<comment>\<open>Message 3\<close>
apply force \<comment>\<open>Message 4\<close>
done
subsubsection\<open>New Versions: As Above, but Generalized with the Kk Argument\<close>
lemma gen_new_keys_not_used [rule_format]:
"evs \<in> set_mr
==> Key K \<notin> used evs --> K \<in> symKeys -->
K \<notin> keysFor (parts (Key`KK Un knows Spy evs))"
by auto
lemma gen_new_keys_not_analzd:
"[|Key K \<notin> used evs; K \<in> symKeys; evs \<in> set_mr |]
==> K \<notin> keysFor (analz (Key`KK Un knows Spy evs))"
by (blast intro: keysFor_mono [THEN [2] rev_subsetD]
dest: gen_new_keys_not_used)
lemma analz_Key_image_insert_eq:
"[|Key K \<notin> used evs; K \<in> symKeys; evs \<in> set_mr |]
==> analz (Key ` (insert K KK) \<union> knows Spy evs) =
insert (Key K) (analz (Key ` KK \<union> knows Spy evs))"
by (simp add: gen_new_keys_not_analzd)
lemma Crypt_parts_imp_used:
"[|Crypt K X \<in> parts (knows Spy evs);
K \<in> symKeys; evs \<in> set_mr |] ==> Key K \<in> used evs"
apply (rule ccontr)
apply (force dest: new_keys_not_used Crypt_imp_invKey_keysFor)
done
lemma Crypt_analz_imp_used:
"[|Crypt K X \<in> analz (knows Spy evs);
K \<in> symKeys; evs \<in> set_mr |] ==> Key K \<in> used evs"
by (blast intro: Crypt_parts_imp_used)
text\<open>Rewriting rule for private encryption keys. Analogous rewriting rules
for other keys aren't needed.\<close>
lemma parts_image_priEK:
"[|Key (priEK (CA i)) \<in> parts (Key`KK Un (knows Spy evs));
evs \<in> set_mr|] ==> priEK (CA i) \<in> KK | CA i \<in> bad"
by auto
text\<open>Trivial proof, because (priEK (CA i)) never appears even in (parts evs).\<close>
lemma analz_image_priEK:
"evs \<in> set_mr ==>
(Key (priEK (CA i)) \<in> analz (Key`KK Un (knows Spy evs))) =
(priEK (CA i) \<in> KK | CA i \<in> bad)"
by (blast dest!: parts_image_priEK intro: analz_mono [THEN [2] rev_subsetD])
subsection\<open>Secrecy of Session Keys\<close>
text\<open>This holds because if (priEK (CA i)) appears in any traffic then it must
be known to the Spy, by \<open>Spy_see_private_Key\<close>\<close>
lemma merK_neq_priEK:
"[|Key merK \<notin> analz (knows Spy evs);
Key merK \<in> parts (knows Spy evs);
evs \<in> set_mr|] ==> merK \<noteq> priEK C"
by blast
text\<open>Lemma for message 4: either merK is compromised (when we don't care)
or else merK hasn't been used to encrypt K.\<close>
lemma msg4_priEK_disj:
"[|Gets B \<lbrace>Crypt KM1
(sign K \<lbrace>Agent M, Nonce NM2, Key merSK, Key merEK\<rbrace>),
Y\<rbrace> \<in> set evs;
evs \<in> set_mr|]
==> (Key merSK \<in> analz (knows Spy evs) | merSK \<notin> range(\<lambda>C. priEK C))
& (Key merEK \<in> analz (knows Spy evs) | merEK \<notin> range(\<lambda>C. priEK C))"
apply (unfold sign_def)
apply (blast dest: merK_neq_priEK)
done
lemma Key_analz_image_Key_lemma:
"P --> (Key K \<in> analz (Key`KK Un H)) --> (K\<in>KK | Key K \<in> analz H)
==>
P --> (Key K \<in> analz (Key`KK Un H)) = (K\<in>KK | Key K \<in> analz H)"
by (blast intro: analz_mono [THEN [2] rev_subsetD])
lemma symKey_compromise:
"evs \<in> set_mr ==>
(\<forall>SK KK. SK \<in> symKeys \<longrightarrow> (\<forall>K \<in> KK. K \<notin> range(\<lambda>C. priEK C)) -->
(Key SK \<in> analz (Key`KK Un (knows Spy evs))) =
(SK \<in> KK | Key SK \<in> analz (knows Spy evs)))"
apply (erule set_mr.induct)
apply (safe del: impI intro!: Key_analz_image_Key_lemma [THEN impI])
apply (drule_tac [7] msg4_priEK_disj)
apply (frule_tac [6] Gets_certificate_valid)
apply (safe del: impI)
apply (simp_all del: image_insert image_Un imp_disjL
add: analz_image_keys_simps abbrev_simps analz_knows_absorb
analz_knows_absorb2 analz_Key_image_insert_eq notin_image_iff
Spy_analz_private_Key analz_image_priEK)
\<comment>\<open>5 seconds on a 1.6GHz machine\<close>
apply spy_analz \<comment>\<open>Fake\<close>
apply auto \<comment>\<open>Message 3\<close>
done
lemma symKey_secrecy [rule_format]:
"[|CA i \<notin> bad; K \<in> symKeys; evs \<in> set_mr|]
==> \<forall>X m. Says (Merchant m) (CA i) X \<in> set evs -->
Key K \<in> parts{X} -->
Merchant m \<notin> bad -->
Key K \<notin> analz (knows Spy evs)"
apply (erule set_mr.induct)
apply (drule_tac [7] msg4_priEK_disj)
apply (frule_tac [6] Gets_certificate_valid)
apply (safe del: impI)
apply (simp_all del: image_insert image_Un imp_disjL
add: analz_image_keys_simps abbrev_simps analz_knows_absorb
analz_knows_absorb2 analz_Key_image_insert_eq
symKey_compromise notin_image_iff Spy_analz_private_Key
analz_image_priEK)
apply spy_analz \<comment>\<open>Fake\<close>
apply force \<comment>\<open>Message 1\<close>
apply (auto intro: analz_into_parts [THEN usedI] in_parts_Says_imp_used) \<comment>\<open>Message 3\<close>
done
subsection\<open>Unicity\<close>
lemma msg4_Says_imp_Notes:
"[|Says (CA i) M \<lbrace>sign (priSK (CA i)) \<lbrace>Agent M, Nonce NM2, Agent (CA i)\<rbrace>,
cert M merSK onlySig (priSK (CA i)),
cert M merEK onlyEnc (priSK (CA i)),
cert (CA i) (pubSK (CA i)) onlySig (priSK RCA)\<rbrace> \<in> set evs;
evs \<in> set_mr |]
==> Notes (CA i) (Key merSK) \<in> set evs
& Notes (CA i) (Key merEK) \<in> set evs"
apply (erule rev_mp)
apply (erule set_mr.induct)
apply (simp_all (no_asm_simp))
done
text\<open>Unicity of merSK wrt a given CA:
merSK uniquely identifies the other components, including merEK\<close>
lemma merSK_unicity:
"[|Says (CA i) M \<lbrace>sign (priSK(CA i)) \<lbrace>Agent M, Nonce NM2, Agent (CA i)\<rbrace>,
cert M merSK onlySig (priSK (CA i)),
cert M merEK onlyEnc (priSK (CA i)),
cert (CA i) (pubSK (CA i)) onlySig (priSK RCA)\<rbrace> \<in> set evs;
Says (CA i) M' \<lbrace>sign (priSK(CA i)) \<lbrace>Agent M', Nonce NM2', Agent (CA i)\<rbrace>,
cert M' merSK onlySig (priSK (CA i)),
cert M' merEK' onlyEnc (priSK (CA i)),
cert (CA i) (pubSK(CA i)) onlySig (priSK RCA)\<rbrace> \<in> set evs;
evs \<in> set_mr |] ==> M=M' & NM2=NM2' & merEK=merEK'"
apply (erule rev_mp)
apply (erule rev_mp)
apply (erule set_mr.induct)
apply (simp_all (no_asm_simp))
apply (blast dest!: msg4_Says_imp_Notes)
done
text\<open>Unicity of merEK wrt a given CA:
merEK uniquely identifies the other components, including merSK\<close>
lemma merEK_unicity:
"[|Says (CA i) M \<lbrace>sign (priSK(CA i)) \<lbrace>Agent M, Nonce NM2, Agent (CA i)\<rbrace>,
cert M merSK onlySig (priSK (CA i)),
cert M merEK onlyEnc (priSK (CA i)),
cert (CA i) (pubSK (CA i)) onlySig (priSK RCA)\<rbrace> \<in> set evs;
Says (CA i) M' \<lbrace>sign (priSK(CA i)) \<lbrace>Agent M', Nonce NM2', Agent (CA i)\<rbrace>,
cert M' merSK' onlySig (priSK (CA i)),
cert M' merEK onlyEnc (priSK (CA i)),
cert (CA i) (pubSK(CA i)) onlySig (priSK RCA)\<rbrace> \<in> set evs;
evs \<in> set_mr |]
==> M=M' & NM2=NM2' & merSK=merSK'"
apply (erule rev_mp)
apply (erule rev_mp)
apply (erule set_mr.induct)
apply (simp_all (no_asm_simp))
apply (blast dest!: msg4_Says_imp_Notes)
done
text\<open>- No interest in the secrecy of nonces: they appear to be used
only for freshness.
- No interest in the secrecy of merSK or merEK, as in CR.
- There's no equivalent of the PAN.\<close>
subsection\<open>Primary Goals of Merchant Registration\<close>
subsubsection\<open>The merchant's certificates really were created by the CA,
provided the CA is uncompromised\<close>
text\<open>The assumption @{term "CA i \<noteq> RCA"} is required: step 2 uses
certificates of the same form.\<close>
lemma certificate_merSK_valid_lemma [intro]:
"[|Crypt (priSK (CA i)) \<lbrace>Agent M, Key merSK, onlySig\<rbrace>
\<in> parts (knows Spy evs);
CA i \<notin> bad; CA i \<noteq> RCA; evs \<in> set_mr|]
==> \<exists>X Y Z. Says (CA i) M
\<lbrace>X, cert M merSK onlySig (priSK (CA i)), Y, Z\<rbrace> \<in> set evs"
apply (erule rev_mp)
apply (erule set_mr.induct)
apply (simp_all (no_asm_simp))
apply auto
done
lemma certificate_merSK_valid:
"[| cert M merSK onlySig (priSK (CA i)) \<in> parts (knows Spy evs);
CA i \<notin> bad; CA i \<noteq> RCA; evs \<in> set_mr|]
==> \<exists>X Y Z. Says (CA i) M
\<lbrace>X, cert M merSK onlySig (priSK (CA i)), Y, Z\<rbrace> \<in> set evs"
by auto
lemma certificate_merEK_valid_lemma [intro]:
"[|Crypt (priSK (CA i)) \<lbrace>Agent M, Key merEK, onlyEnc\<rbrace>
\<in> parts (knows Spy evs);
CA i \<notin> bad; CA i \<noteq> RCA; evs \<in> set_mr|]
==> \<exists>X Y Z. Says (CA i) M
\<lbrace>X, Y, cert M merEK onlyEnc (priSK (CA i)), Z\<rbrace> \<in> set evs"
apply (erule rev_mp)
apply (erule set_mr.induct)
apply (simp_all (no_asm_simp))
apply auto
done
lemma certificate_merEK_valid:
"[| cert M merEK onlyEnc (priSK (CA i)) \<in> parts (knows Spy evs);
CA i \<notin> bad; CA i \<noteq> RCA; evs \<in> set_mr|]
==> \<exists>X Y Z. Says (CA i) M
\<lbrace>X, Y, cert M merEK onlyEnc (priSK (CA i)), Z\<rbrace> \<in> set evs"
by auto
text\<open>The two certificates - for merSK and for merEK - cannot be proved to
have originated together\<close>
end
|
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hfg : Exact f g
hgh : Exact g h
hf'g' : Exact f' g'
hα : Epi α
hβ : Mono β
hδ : Mono δ
c : Pseudoelement C
hc : pseudoApply γ c = 0
⊢ pseudoApply δ (pseudoApply h c) = pseudoApply h' (pseudoApply γ c)
[PROOFSTEP]
rw [← Pseudoelement.comp_apply, ← comm₃, Pseudoelement.comp_apply]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hfg : Exact f g
hgh : Exact g h
hf'g' : Exact f' g'
hα : Epi α
hβ : Mono β
hδ : Mono δ
c : Pseudoelement C
hc : pseudoApply γ c = 0
⊢ pseudoApply h' (pseudoApply γ c) = pseudoApply h' 0
[PROOFSTEP]
rw [hc]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hfg : Exact f g
hgh : Exact g h
hf'g' : Exact f' g'
hα : Epi α
hβ : Mono β
hδ : Mono δ
c : Pseudoelement C
hc : pseudoApply γ c = 0
this : pseudoApply h c = 0
b : Pseudoelement B
hb : pseudoApply g b = c
⊢ pseudoApply g' (pseudoApply β b) = pseudoApply γ (pseudoApply g b)
[PROOFSTEP]
rw [← Pseudoelement.comp_apply, comm₂, Pseudoelement.comp_apply]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hfg : Exact f g
hgh : Exact g h
hf'g' : Exact f' g'
hα : Epi α
hβ : Mono β
hδ : Mono δ
c : Pseudoelement C
hc : pseudoApply γ c = 0
this : pseudoApply h c = 0
b : Pseudoelement B
hb : pseudoApply g b = c
⊢ pseudoApply γ (pseudoApply g b) = pseudoApply γ c
[PROOFSTEP]
rw [hb]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hfg : Exact f g
hgh : Exact g h
hf'g' : Exact f' g'
hα : Epi α
hβ : Mono β
hδ : Mono δ
c : Pseudoelement C
hc : pseudoApply γ c = 0
this✝ : pseudoApply h c = 0
b : Pseudoelement B
hb : pseudoApply g b = c
this : pseudoApply g' (pseudoApply β b) = 0
a' : Pseudoelement A'
ha' : pseudoApply f' a' = pseudoApply β b
a : Pseudoelement A
ha : pseudoApply α a = a'
⊢ pseudoApply β (pseudoApply f a) = pseudoApply f' (pseudoApply α a)
[PROOFSTEP]
rw [← Pseudoelement.comp_apply, ← comm₁, Pseudoelement.comp_apply]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hfg : Exact f g
hgh : Exact g h
hf'g' : Exact f' g'
hα : Epi α
hβ : Mono β
hδ : Mono δ
c : Pseudoelement C
hc : pseudoApply γ c = 0
this✝ : pseudoApply h c = 0
b : Pseudoelement B
hb : pseudoApply g b = c
this : pseudoApply g' (pseudoApply β b) = 0
a' : Pseudoelement A'
ha' : pseudoApply f' a' = pseudoApply β b
a : Pseudoelement A
ha : pseudoApply α a = a'
⊢ pseudoApply f' (pseudoApply α a) = pseudoApply f' a'
[PROOFSTEP]
rw [ha]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hfg : Exact f g
hgh : Exact g h
hf'g' : Exact f' g'
hα : Epi α
hβ : Mono β
hδ : Mono δ
c : Pseudoelement C
hc : pseudoApply γ c = 0
this✝¹ : pseudoApply h c = 0
b : Pseudoelement B
hb : pseudoApply g b = c
this✝ : pseudoApply g' (pseudoApply β b) = 0
a' : Pseudoelement A'
ha' : pseudoApply f' a' = pseudoApply β b
a : Pseudoelement A
ha : pseudoApply α a = a'
this : pseudoApply f a = b
⊢ pseudoApply g b = pseudoApply g (pseudoApply f a)
[PROOFSTEP]
rw [this]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
⊢ r = 0
[PROOFSTEP]
have hf'r : f' ≫ r = 0 :=
Limits.zero_of_epi_comp α <|
calc
α ≫ f' ≫ r = f ≫ β ≫ r := by rw [reassoc_of% comm₁]
_ = f ≫ 0 := by rw [hβr]
_ = 0 := HasZeroMorphisms.comp_zero _ _
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
⊢ α ≫ f' ≫ r = f ≫ β ≫ r
[PROOFSTEP]
rw [reassoc_of% comm₁]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
⊢ f ≫ β ≫ r = f ≫ 0
[PROOFSTEP]
rw [hβr]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
⊢ r = 0
[PROOFSTEP]
let y : R ⟶ pushout r g' := pushout.inl
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
⊢ r = 0
[PROOFSTEP]
let z : C' ⟶ pushout r g' := pushout.inr
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
⊢ r = 0
[PROOFSTEP]
have : Mono (cokernel.desc f' g' hf'g'.w) := mono_cokernel_desc_of_exact _ _ hf'g'
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
⊢ r = 0
[PROOFSTEP]
have : Mono y :=
mono_inl_of_factor_thru_epi_mono_factorization r g' (cokernel.π f') (cokernel.desc f' g' hf'g'.w) (by simp)
(cokernel.desc f' r hf'r) (by simp) _ (colimit.isColimit _)
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
⊢ cokernel.π f' ≫ cokernel.desc f' g' (_ : f' ≫ g' = 0) = g'
[PROOFSTEP]
simp
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
⊢ cokernel.π f' ≫ cokernel.desc f' r hf'r = r
[PROOFSTEP]
simp
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this : Mono y
⊢ r = 0
[PROOFSTEP]
have hz : g ≫ γ ≫ z = 0 :=
calc
g ≫ γ ≫ z = β ≫ g' ≫ z := by rw [← reassoc_of% comm₂]
_ = β ≫ r ≫ y := by rw [← pushout.condition]
_ = 0 ≫ y := by rw [reassoc_of% hβr]
_ = 0 := HasZeroMorphisms.zero_comp _ _
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this : Mono y
⊢ g ≫ γ ≫ z = β ≫ g' ≫ z
[PROOFSTEP]
rw [← reassoc_of% comm₂]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this : Mono y
⊢ β ≫ g' ≫ z = β ≫ r ≫ y
[PROOFSTEP]
rw [← pushout.condition]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this : Mono y
⊢ β ≫ r ≫ y = 0 ≫ y
[PROOFSTEP]
rw [reassoc_of% hβr]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this : Mono y
hz : g ≫ γ ≫ z = 0
⊢ r = 0
[PROOFSTEP]
let v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
⊢ r = 0
[PROOFSTEP]
let w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
⊢ r = 0
[PROOFSTEP]
have : Mono (cokernel.desc g h hgh.w) := mono_cokernel_desc_of_exact _ _ hgh
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝¹ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this : Mono (cokernel.desc g h (_ : g ≫ h = 0))
⊢ r = 0
[PROOFSTEP]
have : Mono v :=
mono_inl_of_factor_thru_epi_mono_factorization _ _ (cokernel.π g) (cokernel.desc g h hgh.w ≫ δ) (by simp)
(cokernel.desc _ _ hz) (by simp) _ (colimit.isColimit _)
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝¹ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this : Mono (cokernel.desc g h (_ : g ≫ h = 0))
⊢ cokernel.π g ≫ cokernel.desc g h (_ : g ≫ h = 0) ≫ δ = h ≫ δ
[PROOFSTEP]
simp
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝¹ : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this : Mono (cokernel.desc g h (_ : g ≫ h = 0))
⊢ cokernel.π g ≫ cokernel.desc g (γ ≫ z) hz = γ ≫ z
[PROOFSTEP]
simp
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝² : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝¹ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this✝ : Mono (cokernel.desc g h (_ : g ≫ h = 0))
this : Mono v
⊢ r = 0
[PROOFSTEP]
have hzv : z ≫ v = h' ≫ w :=
(cancel_epi γ).1 <|
calc
γ ≫ z ≫ v = h ≫ δ ≫ w := by rw [← Category.assoc, pushout.condition, Category.assoc]
_ = γ ≫ h' ≫ w := by rw [reassoc_of% comm₃]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝² : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝¹ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this✝ : Mono (cokernel.desc g h (_ : g ≫ h = 0))
this : Mono v
⊢ γ ≫ z ≫ v = h ≫ δ ≫ w
[PROOFSTEP]
rw [← Category.assoc, pushout.condition, Category.assoc]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝² : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝¹ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this✝ : Mono (cokernel.desc g h (_ : g ≫ h = 0))
this : Mono v
⊢ h ≫ δ ≫ w = γ ≫ h' ≫ w
[PROOFSTEP]
rw [reassoc_of% comm₃]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝² : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝¹ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this✝ : Mono (cokernel.desc g h (_ : g ≫ h = 0))
this : Mono v
hzv : z ≫ v = h' ≫ w
⊢ r = 0
[PROOFSTEP]
suffices (r ≫ y) ≫ v = 0 from zero_of_comp_mono _ (zero_of_comp_mono _ this)
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝² : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝¹ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this✝ : Mono (cokernel.desc g h (_ : g ≫ h = 0))
this : Mono v
hzv : z ≫ v = h' ≫ w
⊢ (r ≫ y) ≫ v = 0
[PROOFSTEP]
calc
(r ≫ y) ≫ v = g' ≫ z ≫ v := by rw [pushout.condition, Category.assoc]
_ = g' ≫ h' ≫ w := by rw [hzv]
_ = 0 ≫ w := (hg'h'.w_assoc _)
_ = 0 := HasZeroMorphisms.zero_comp _ _
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝² : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝¹ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this✝ : Mono (cokernel.desc g h (_ : g ≫ h = 0))
this : Mono v
hzv : z ≫ v = h' ≫ w
⊢ (r ≫ y) ≫ v = g' ≫ z ≫ v
[PROOFSTEP]
rw [pushout.condition, Category.assoc]
[GOAL]
V : Type u
inst✝¹ : Category.{v, u} V
inst✝ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
hgh : Exact g h
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hα : Epi α
hγ : Epi γ
hδ : Mono δ
R : V
r : B' ⟶ R
hβr : β ≫ r = 0
hf'r : f' ≫ r = 0
y : R ⟶ pushout r g' := pushout.inl
z : C' ⟶ pushout r g' := pushout.inr
this✝² : Mono (cokernel.desc f' g' (_ : f' ≫ g' = 0))
this✝¹ : Mono y
hz : g ≫ γ ≫ z = 0
v : pushout r g' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inl
w : D' ⟶ pushout (γ ≫ z) (h ≫ δ) := pushout.inr
this✝ : Mono (cokernel.desc g h (_ : g ≫ h = 0))
this : Mono v
hzv : z ≫ v = h' ≫ w
⊢ g' ≫ z ≫ v = g' ≫ h' ≫ w
[PROOFSTEP]
rw [hzv]
[GOAL]
V : Type u
inst✝⁵ : Category.{v, u} V
inst✝⁴ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
E E' : V
i : D ⟶ E
i' : D' ⟶ E'
ε : E ⟶ E'
comm₄ : δ ≫ i' = i ≫ ε
hfg : Exact f g
hgh : Exact g h
hhi : Exact h i
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hh'i' : Exact h' i'
inst✝³ : Epi α
inst✝² : IsIso β
inst✝¹ : IsIso δ
inst✝ : Mono ε
⊢ Mono γ
[PROOFSTEP]
apply mono_of_epi_of_mono_of_mono comm₁ comm₂ comm₃ hfg hgh hf'g'
[GOAL]
case hα
V : Type u
inst✝⁵ : Category.{v, u} V
inst✝⁴ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
E E' : V
i : D ⟶ E
i' : D' ⟶ E'
ε : E ⟶ E'
comm₄ : δ ≫ i' = i ≫ ε
hfg : Exact f g
hgh : Exact g h
hhi : Exact h i
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hh'i' : Exact h' i'
inst✝³ : Epi α
inst✝² : IsIso β
inst✝¹ : IsIso δ
inst✝ : Mono ε
⊢ Epi α
[PROOFSTEP]
infer_instance
[GOAL]
case hβ
V : Type u
inst✝⁵ : Category.{v, u} V
inst✝⁴ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
E E' : V
i : D ⟶ E
i' : D' ⟶ E'
ε : E ⟶ E'
comm₄ : δ ≫ i' = i ≫ ε
hfg : Exact f g
hgh : Exact g h
hhi : Exact h i
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hh'i' : Exact h' i'
inst✝³ : Epi α
inst✝² : IsIso β
inst✝¹ : IsIso δ
inst✝ : Mono ε
⊢ Mono β
[PROOFSTEP]
infer_instance
[GOAL]
case hδ
V : Type u
inst✝⁵ : Category.{v, u} V
inst✝⁴ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
E E' : V
i : D ⟶ E
i' : D' ⟶ E'
ε : E ⟶ E'
comm₄ : δ ≫ i' = i ≫ ε
hfg : Exact f g
hgh : Exact g h
hhi : Exact h i
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hh'i' : Exact h' i'
inst✝³ : Epi α
inst✝² : IsIso β
inst✝¹ : IsIso δ
inst✝ : Mono ε
⊢ Mono δ
[PROOFSTEP]
infer_instance
[GOAL]
V : Type u
inst✝⁵ : Category.{v, u} V
inst✝⁴ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
E E' : V
i : D ⟶ E
i' : D' ⟶ E'
ε : E ⟶ E'
comm₄ : δ ≫ i' = i ≫ ε
hfg : Exact f g
hgh : Exact g h
hhi : Exact h i
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hh'i' : Exact h' i'
inst✝³ : Epi α
inst✝² : IsIso β
inst✝¹ : IsIso δ
inst✝ : Mono ε
this : Mono γ
⊢ Epi γ
[PROOFSTEP]
apply epi_of_epi_of_epi_of_mono comm₂ comm₃ comm₄ hhi hg'h' hh'i'
[GOAL]
case hα
V : Type u
inst✝⁵ : Category.{v, u} V
inst✝⁴ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
E E' : V
i : D ⟶ E
i' : D' ⟶ E'
ε : E ⟶ E'
comm₄ : δ ≫ i' = i ≫ ε
hfg : Exact f g
hgh : Exact g h
hhi : Exact h i
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hh'i' : Exact h' i'
inst✝³ : Epi α
inst✝² : IsIso β
inst✝¹ : IsIso δ
inst✝ : Mono ε
this : Mono γ
⊢ Epi β
[PROOFSTEP]
infer_instance
[GOAL]
case hγ
V : Type u
inst✝⁵ : Category.{v, u} V
inst✝⁴ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
E E' : V
i : D ⟶ E
i' : D' ⟶ E'
ε : E ⟶ E'
comm₄ : δ ≫ i' = i ≫ ε
hfg : Exact f g
hgh : Exact g h
hhi : Exact h i
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hh'i' : Exact h' i'
inst✝³ : Epi α
inst✝² : IsIso β
inst✝¹ : IsIso δ
inst✝ : Mono ε
this : Mono γ
⊢ Epi δ
[PROOFSTEP]
infer_instance
[GOAL]
case hδ
V : Type u
inst✝⁵ : Category.{v, u} V
inst✝⁴ : Abelian V
A B C D A' B' C' D' : V
f : A ⟶ B
g : B ⟶ C
h : C ⟶ D
f' : A' ⟶ B'
g' : B' ⟶ C'
h' : C' ⟶ D'
α : A ⟶ A'
β : B ⟶ B'
γ : C ⟶ C'
δ : D ⟶ D'
comm₁ : α ≫ f' = f ≫ β
comm₂ : β ≫ g' = g ≫ γ
comm₃ : γ ≫ h' = h ≫ δ
E E' : V
i : D ⟶ E
i' : D' ⟶ E'
ε : E ⟶ E'
comm₄ : δ ≫ i' = i ≫ ε
hfg : Exact f g
hgh : Exact g h
hhi : Exact h i
hf'g' : Exact f' g'
hg'h' : Exact g' h'
hh'i' : Exact h' i'
inst✝³ : Epi α
inst✝² : IsIso β
inst✝¹ : IsIso δ
inst✝ : Mono ε
this : Mono γ
⊢ Mono ε
[PROOFSTEP]
infer_instance
|
If $f$ is differentiable at $x$, then $\Re(f)$ is differentiable at $x$ and $\Re(f)'(x) = \Re(f'(x))$. |
(*
Copyright (C) 2017 M.A.L. Marques
2020 Susi Lehtola
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*)
(* type: gga_exc *)
(* prefix:
gga_c_p86_params *params;
assert(p->params != NULL);
params = (gga_c_p86_params * )(p->params);
*)
$define lda_c_pz_params
$include "lda_c_pz.mpl"
(* Equation (4) *)
p86_DD := z -> sqrt(opz_pow_n(z,5/3) + opz_pow_n(-z,5/3))/sqrt(2):
(* Equation (6) *)
p86_CC := rs ->
+ params_a_aa
+ (params_a_bb + params_a_malpha*rs + params_a_mbeta*rs^2)/(1 + params_a_mgamma*rs + params_a_mdelta*rs^2 + 1.0e4*params_a_mbeta*rs^3):
p86_CCinf := params_a_aa + params_a_bb:
(* Equation (9) *)
p86_x1 := (rs, xt) -> xt/sqrt(rs/RS_FACTOR):
p86_mPhi := (rs, xt) -> params_a_ftilde*(p86_CCinf/p86_CC(rs))*p86_x1(rs, xt):
(* Equation (8) *)
p86_H := (rs, z, xt) -> p86_x1(rs, xt)^2*exp(-p86_mPhi(rs, xt))*p86_CC(rs)/p86_DD(z):
f_p86 := (rs, z, xt, xs0, xs1) ->
f_pz(rs, z) + p86_H(rs, z, xt):
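(* total P86 correlation: the PZ81 LDA part (f_pz) plus the gradient correction H of Equation (8) *)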
f := (rs, z, xt, xs0, xs1) ->
f_p86(rs, z, xt, xs0, xs1):
|
(********************************************************************)
(* *)
(* The Why3 Verification Platform / The Why3 Development Team *)
(* Copyright 2010-2018 -- Inria - CNRS - Paris-Sud University *)
(* *)
(* This software is distributed under the terms of the GNU Lesser *)
(* General Public License version 2.1, with the special exception *)
(* on linking described in file LICENSE. *)
(* *)
(********************************************************************)
(* This file is generated by Why3's Coq-realize driver *)
(* Beware! Only edit allowed sections below *)
Require Import BuiltIn.
Require BuiltIn.
Require Import floating_point.GenFloat.
(* Why3 goal *)
Definition single : Type.
exact (t 24 128).
Defined.
Global Instance single_WhyType : WhyType single.
Proof.
apply t_WhyType.
Qed.
|
import tactic
import tactic.monotonicity
import tactic.norm_num
import category.basic
import category.nursery
import data.equiv.nursery
import data.serial.medium
universes u v w
abbreviation put_m' := medium.put_m'.{u} unsigned
abbreviation put_m := medium.put_m'.{u} unsigned punit
abbreviation get_m := medium.get_m.{u} unsigned
def serial_inverse {α : Type u} (encode : α → put_m) (decode : get_m α) : Prop :=
∀ w, decode -<< encode w = pure w
class serial (α : Type u) :=
(encode : α → put_m)
(decode : get_m α)
(correctness : ∀ w, decode -<< encode w = pure w)
class serial1 (f : Type u → Type v) :=
(encode : Π {α}, (α → put_m) → f α → put_m)
(decode : Π {α}, get_m α → get_m (f α))
(correctness : ∀ {α} put get, serial_inverse.{u} put get →
∀ (w : f α), decode get -<< encode put w = pure w)
instance serial.serial1 {f α} [serial1 f] [serial α] : serial (f α) :=
{ encode := λ x, serial1.encode serial.encode x,
decode := serial1.decode f (serial.decode α),
correctness := serial1.correctness _ _ serial.correctness }
class serial2 (f : Type u → Type v → Type w) :=
(encode : Π {α β}, (α → put_m.{u}) → (β → put_m.{v}) → f α β → put_m.{w})
(decode : Π {α β}, get_m α → get_m β → get_m (f α β))
(correctness : ∀ {α β} putα getα putβ getβ,
serial_inverse putα getα →
serial_inverse putβ getβ →
∀ (w : f α β), decode getα getβ -<< encode putα putβ w = pure w)
instance serial.serial2 {f α β} [serial2 f] [serial α] [serial β] : serial (f α β) :=
{ encode := λ x, serial2.encode serial.encode serial.encode x,
decode := serial2.decode f (serial.decode _) (serial.decode _),
correctness := serial2.correctness _ _ _ _ serial.correctness serial.correctness }
instance serial1.serial2 {f α} [serial2 f] [serial α] : serial1 (f α) :=
{ encode := λ β put x, serial2.encode serial.encode put x,
decode := λ β get, serial2.decode f (serial.decode _) get,
correctness := λ β get put, serial2.correctness _ _ _ _ serial.correctness }
export serial (encode decode)
namespace serial
open function
export medium (hiding put_m get_m put_m')
variables {α β σ γ : Type u} {ω : Type}
def serialize [serial α] (x : α) : list unsigned := (encode x).eval
def deserialize (α : Type u) [serial α] (bytes : list unsigned) : option α := (decode α).eval bytes
lemma deserialize_serialize [serial α] (x : α) :
deserialize _ (serialize x) = some x :=
by simp [deserialize,serialize,eval_eval,serial.correctness]; refl
lemma encode_decode_bind [serial α]
(f : α → get_m β) (f' : punit → put_m) (w : α) :
(decode α >>= f) -<< (encode w >>= f') = f w -<< f' punit.star :=
by { rw [read_write_mono]; rw serial.correctness; refl }
lemma encode_decode_bind' [serial α]
(f : α → get_m β) (w : α) :
(decode α >>= f) -<< (encode w) = f w -<< pure punit.star :=
by { rw [read_write_mono_left]; rw serial.correctness; refl }
lemma encode_decode_pure
(w w' : α) (u : punit) :
(pure w : get_m α) -<< (pure u) = pure w' ↔ w = w' :=
by split; intro h; cases h; refl
open ulift
protected def ulift.encode [serial α] (w : ulift.{v} α) : put_m :=
(liftable1.up _ equiv.punit_equiv_punit (encode (down w) : medium.put_m' unsigned _) : medium.put_m' unsigned _)
protected def ulift.decode [serial α] : get_m (ulift α) :=
get_m.up ulift.up (decode α)
instance [serial α] : serial (ulift.{v u} α) :=
{ encode := ulift.encode
, decode := ulift.decode
, correctness :=
by { introv, simp [ulift.encode,ulift.decode],
rw up_read_write' _ equiv.ulift.symm,
rw [serial.correctness], cases w, refl,
intro, refl } }
instance unsigned.serial : serial unsigned :=
{ encode := λ w, put_m'.write w put_m'.pure
, decode := get_m.read get_m.pure
, correctness := by introv; refl }
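-- Concrete round trip: for any `w : unsigned`,
-- `deserialize unsigned (serialize w) = some w`, by `deserialize_serialize`.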
-- protected def write_word (w : unsigned) : put_m :=
-- encode (up.{u} w)
@[simp] lemma loop_read_write_word {α β γ : Type u}
(w : unsigned) (x : α) (f : α → unsigned → get_m (β ⊕ α)) (g : β → get_m γ)
(rest : punit → put_m) :
get_m.loop f g x -<< (write_word w >>= rest) =
(f x w >>= get_m.loop.rest f g) -<< rest punit.star := rfl
@[simp] lemma loop_read_write_word' {α β γ : Type u}
(w : unsigned) (x : α) (f : α → unsigned → get_m (β ⊕ α)) (g : β → get_m γ) :
get_m.loop f g x -<< (write_word w) =
(f x w >>= get_m.loop.rest f g) -<< pure punit.star := rfl
-- protected def read_word : get_m.{u} (ulift unsigned) :=
-- decode _
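-- `select_tag'` scans an association list of tagged parsers; `select_tag`
-- first reads one tag word from the stream and dispatches to the matching
-- parser, failing when no tag matches.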
def select_tag' (tag : unsigned) : list (unsigned × get_m α) → get_m α
| [] := get_m.fail
| ((w,x) :: xs) := if w = tag then x else select_tag' xs
def select_tag (xs : list (unsigned × get_m α)) : get_m α :=
do w ← read_word,
select_tag' (down w) xs
@[simp]
lemma read_write_tag_hit {w w' : unsigned} {x : get_m α}
{xs : list (unsigned × get_m α)} {y : put_m}
(h : w = w') :
select_tag ( (w,x) :: xs ) -<< (write_word w' >> y) = x -<< y :=
by subst w'; simp [select_tag,(>>),encode_decode_bind,select_tag']
lemma read_write_tag_hit' {w w' : unsigned} {x : get_m α}
{xs : list (unsigned × get_m α)}
(h : w = w') :
select_tag ( (w,x) :: xs ) -<< (write_word w') = x -<< pure punit.star :=
by subst w'; simp [select_tag,(>>),encode_decode_bind',select_tag']
@[simp]
lemma read_write_tag_miss {w w' : unsigned} {x : get_m α}
{xs : list (unsigned × get_m α)} {y : put_m}
(h : w ≠ w') :
select_tag ( (w,x) :: xs ) -<< (write_word w' >> y) = select_tag xs -<< (write_word w' >> y) :=
by simp [select_tag,(>>),encode_decode_bind,select_tag',*]
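-- `recursive_parser n f` unfolds the parser transformer `f` at most `n`
-- times, bottoming out in `get_m.fail`: a fuel-indexed fixed point.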
def recursive_parser {α} : ℕ → (get_m α → get_m α) → get_m α
| 0 _ := get_m.fail
| (nat.succ n) rec_fn := rec_fn $ recursive_parser n rec_fn
lemma recursive_parser_unfold {α} (n : ℕ) (f : get_m α → get_m α) (h : 1 ≤ n) :
recursive_parser n f = f (recursive_parser (n-1) f) :=
by cases n; [ cases h, refl ]
attribute [simp] serial.correctness
end serial
structure serializer (α : Type u) (β : Type u) :=
(encoder : α → put_m.{u})
(decoder : get_m β)
def serial.mk_serializer' (α) [serial α] : serializer α α :=
{ encoder := encode,
decoder := decode α }
namespace serializer
def valid_serializer {α} (x : serializer α α) :=
serial_inverse
(serializer.encoder x)
(serializer.decoder x)
lemma serializer.eq {α β} (x y : serializer α β)
(h : x.encoder = y.encoder)
(h' : x.decoder = y.decoder) :
x = y :=
by cases x; cases y; congr; assumption
namespace serializer.seq
variables {α : Type u} {i j : Type u}
variables (x : serializer α (i → j))
variables (y : serializer α i)
def encoder := λ (k : α), (x.encoder k >> y.encoder k : put_m' _)
def decoder := x.decoder <*> y.decoder
end serializer.seq
instance {α : Type u} : applicative (serializer.{u} α) :=
{ pure := λ i x, { encoder := λ _, (return punit.star : put_m' _), decoder := pure x }
, seq := λ i j x y,
{ encoder := serializer.seq.encoder x y
, decoder := serializer.seq.decoder x y } }
section lawful_applicative
variables {α β : Type u} {σ : Type u}
@[simp]
lemma decoder_pure (x : β) :
(pure x : serializer σ β).decoder = pure x := rfl
@[simp]
lemma decoder_map (f : α → β) (x : serializer σ α) :
(f <$> x).decoder = f <$> x.decoder := rfl
@[simp]
lemma decoder_seq (f : serializer σ (α → β)) (x : serializer σ α) :
(f <*> x).decoder = f.decoder <*> x.decoder := rfl
@[simp]
lemma encoder_pure (x : β) (w : σ) :
(pure x : serializer σ β).encoder w = (pure punit.star : put_m' _) := rfl
@[simp]
lemma encoder_map (f : α → β) (w : σ) (x : serializer σ α) :
(f <$> x : serializer σ β).encoder w = x.encoder w := rfl
@[simp]
lemma encoder_seq (f : serializer σ (α → β)) (x : serializer σ α) (w : σ) :
(f <*> x : serializer σ β).encoder w = (f.encoder w >> x.encoder w : put_m' _) := rfl
end lawful_applicative
instance {α} : is_lawful_functor (serializer.{u} α) :=
by refine { .. }; intros; apply serializer.eq; try { ext }; simp [map_map]
instance {α} : is_lawful_applicative (serializer.{u} α) :=
by{ constructor; intros; apply serializer.eq; try { ext };
simp [(>>),pure_seq_eq_map,seq_assoc,bind_assoc], }
protected def up {β} (ser : serializer β β) : serializer (ulift.{u v} β) (ulift.{u v} β) :=
{ encoder := pliftable.up' _ ∘ ser.encoder ∘ ulift.down,
decoder := medium.get_m.up ulift.up ser.decoder }
def ser_field_with {α β} (ser : serializer β β) (f : α → β) : serializer α β :=
{ encoder := ser.encoder ∘ f,
decoder := ser.decoder }
@[simp]
def ser_field_with' {α β} (ser : serializer β β) (f : α → β) : serializer.{max u v} α (ulift.{v} β) :=
ser_field_with ser.up (ulift.up ∘ f)
@[simp]
def ser_field {α β} [serial β] (f : α → β) : serializer α β :=
ser_field_with (serial.mk_serializer' β) f
@[simp]
lemma valid_mk_serializer (α) [serial α] :
valid_serializer (serial.mk_serializer' α) :=
serial.correctness
variables {α β σ γ : Type u} {ω : Type}
def there_and_back_again
(y : serializer γ α) (w : γ) : option α :=
y.decoder -<< y.encoder w
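-- `there_and_back_again` runs a serializer's decoder on the output of its
-- own encoder; a serializer is valid exactly when this round trip is
-- `pure w` for every `w` (see `valid_serializer_of_there_and_back_again`).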
open medium (hiding put_m put_m' get_m)
lemma there_and_back_again_seq {ser : serializer α α}
{x : serializer γ (α → β)} {f : α → β} {y : γ → α} {w : γ} {w' : β}
(h' : there_and_back_again x w = pure f)
(h : w' = f (y w))
(h₀ : valid_serializer ser) :
there_and_back_again (x <*> ser_field_with ser y) w = pure w' :=
by { simp [there_and_back_again,(>>),seq_eq_bind_map] at *,
rw [read_write_mono h',map_read_write],
rw [ser_field_with,h₀], subst w', refl }
lemma there_and_back_again_map {ser : serializer α α}
{f : α → β} {y : γ → α} {w : γ}
(h₀ : valid_serializer ser) :
there_and_back_again (f <$> ser_field_with ser y) w = pure (f $ y w) :=
by rw [← pure_seq_eq_map,there_and_back_again_seq]; refl <|> assumption
lemma there_and_back_again_pure (x : β) (w : γ) :
there_and_back_again (pure x) w =
pure x := rfl
lemma valid_serializer_of_there_and_back_again
{α : Type*} (y : serializer α α) :
valid_serializer y ↔
∀ (w : α), there_and_back_again y w = pure w :=
by { simp [valid_serializer,serial_inverse],
repeat { rw forall_congr, intro }, refl }
@[simp]
lemma valid_serializer_up (x: serializer α α) :
valid_serializer (serializer.up.{v} x) ↔ valid_serializer x :=
by { cases x, simp [valid_serializer,serializer.up,serial_inverse,equiv.forall_iff_forall equiv.ulift],
apply forall_congr, intro, dsimp [equiv.ulift,pliftable.up'],
rw up_read_write' _ equiv.ulift.symm, split; intro h,
{ replace h := congr_arg (liftable1.down.{u} option (equiv.symm equiv.ulift)) h,
simp [liftable1.down_up] at h, simp [h], refl },
{ simp [h], refl },
{ intro, refl, } }
open ulift
def ser_field' {α β} [serial β] (f : α → β) : serializer.{max u v} α (ulift.{v} β) :=
ser_field (up ∘ f)
def put₀ {α} (x : α) : put_m.{u} := (pure punit.star : put_m' _)
def get₀ {α} : get_m α := get_m.fail
def of_encoder {α} (x : α → put_m) : serializer α α :=
⟨ x, get₀ ⟩
def of_decoder {α} (x : get_m α) : serializer α α :=
⟨ put₀, x ⟩
section applicative
@[simp]
lemma encoder_ser_field (f : β → α) (x : serializer α α) (w : β) :
(ser_field_with x f).encoder w = x.encoder (f w) := rfl
@[simp]
lemma encoder_up (x : serializer α α) (w : ulift α) :
(serializer.up x).encoder w = pliftable.up' _ (x.encoder $ w.down) := rfl
@[simp]
lemma encoder_of_encoder (x : α → put_m) (w : α) :
(of_encoder x).encoder w = x w := rfl
@[simp]
lemma decoder_ser_field (f : β → α) (x : serializer α α) :
(ser_field_with x f).decoder = x.decoder := rfl
@[simp]
lemma decoder_up (x : serializer α α) :
(serializer.up x).decoder = (x.decoder).up ulift.up := rfl
@[simp]
lemma decoder_of_decoder (x : get_m α) :
(of_decoder x).decoder = x := rfl
end applicative
end serializer
namespace serial
open serializer
def of_serializer {α} (s : serializer α α)
(h : ∀ w, there_and_back_again s w = pure w) : serial α :=
{ encode := s.encoder
, decode := s.decoder
, correctness := @h }
def of_serializer₁ {f : Type u → Type v}
(s : Π α, serializer α α → serializer (f α) (f α))
(h : ∀ α ser, valid_serializer ser →
∀ w, there_and_back_again (s α ser) w = pure w)
(h₀ : ∀ {α} ser w, (s α (of_encoder (encoder ser))).encoder w = (s α ser).encoder w)
(h₁ : ∀ {α} ser, (s α (of_decoder (decoder ser))).decoder = (s α ser).decoder) : serial1 f :=
{ encode := λ α put, (s α (of_encoder put)).encoder
, decode := λ α get, (s α (of_decoder get)).decoder
, correctness := by { introv hh, simp [h₀ ⟨put, get⟩,h₁ ⟨put,get⟩], apply h; assumption } }
def of_serializer₂ {f : Type u → Type v → Type w}
(s : Π α β, serializer α α →
serializer β β →
serializer (f α β) (f α β))
(h : ∀ α β serα serβ, valid_serializer serα → valid_serializer serβ →
∀ w, there_and_back_again (s α β serα serβ) w = pure w)
(h₀ : ∀ {α β} serα serβ w, (s α β (of_encoder (encoder serα)) (of_encoder (encoder serβ))).encoder w = (s α β serα serβ).encoder w)
(h₁ : ∀ {α β} serα serβ, (s α β (of_decoder (decoder serα)) (of_decoder (decoder serβ))).decoder = (s α β serα serβ).decoder) : serial2 f :=
{ encode := λ α β putα putβ, (s α β (of_encoder putα) (of_encoder putβ)).encoder
, decode := λ α β getα getβ, (s α β (of_decoder getα) (of_decoder getβ)).decoder
, correctness := by { introv hα hβ, simp [h₀ ⟨putα,getα⟩ ⟨putβ,getβ⟩,h₁ ⟨putα,getα⟩ ⟨putβ,getβ⟩],
apply h; assumption } }
end serial
namespace tactic
open interactive
open interactive.types
open lean.parser
meta def interactive.mk_serializer (p : parse texpr) : tactic unit :=
do g ← mk_mvar,
refine ``(serial.of_serializer %%p %%g) <|>
refine ``(serial.of_serializer₁ (λ α ser, %%p) %%g _ _) <|>
refine ``(serial.of_serializer₂ (λ α β ser_α ser_β, %%p) %%g _ _),
gs ← get_goals,
set_goals [g],
vs ← intros,
cases vs.ilast,
iterate $
applyc ``serializer.there_and_back_again_map <|>
applyc ``serializer.there_and_back_again_pure <|>
applyc ``serializer.there_and_back_again_seq,
gs' ← get_goals,
set_goals (gs ++ gs'),
repeat $
intros >>
`[simp *] <|>
reflexivity
end tactic
|
(*
Copyright (C) 2017 M.A.L. Marques
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
*)
(* type: gga_exc *)
$include "gga_c_tca.mpl"
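(* sinc function: sin(x)/x with the removable singularity at x = 0 handled explicitly *)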
msinc := x -> my_piecewise3(x = 0, 1, sin(x)/x):
revtca_aa := Pi*(9*Pi/4)^(1/3):
revtca_fD := (rs, z, s) -> 1 - z^4*(1 - msinc(revtca_aa*s/rs)^2):
f := (rs, z, xt, xs0, xs1) ->
f_tcs(rs, z, xt)*revtca_fD(rs, z, X2S*2^(1/3)*xt):
|
Formal statement is: lemma eventually_at_ball: "d > 0 \<Longrightarrow> eventually (\<lambda>t. t \<in> ball z d \<and> t \<in> A) (at z within A)" Informal statement is: If $d > 0$, then, with respect to the filter at $z$ within $A$, points eventually lie both in the ball of radius $d$ around $z$ and in $A$.
if normpath(@__DIR__) ∉ LOAD_PATH
pushfirst!(LOAD_PATH, normpath(@__DIR__, "../.."))
pushfirst!(LOAD_PATH, normpath(@__DIR__))
end
using DECAES
using Statistics
using LaTeXStrings
using CairoMakie
set_theme!(theme_minimal(); resolution = (800,600), font = "CMU Serif")
const AbstractTensor{N} = AbstractArray{T,N} where {T}
gridsize(n) = (imax = floor(Int, sqrt(n)); jmax = ceil(Int, n / imax); (imax, jmax))
function global_reduction(reducer, img::AbstractTensor{4}, mask = (img[:,:,:,1] .> 0))
[reducer(img[mask,echo]) for echo in axes(img, 4)]
end
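# e.g. `global_reduction(mean, img)` yields the per-echo mean signal over the mask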
function plot_echoes(img::AbstractTensor{4}; echoes)
@assert size(img, 3) == 1
fig = Figure()
nplots = length(echoes)
limits = extrema(vec(img))
for (i,I) in enumerate(CartesianIndices(gridsize(nplots)))
i > nplots && break
ax = Axis(fig[I[1], I[2]]; aspect = AxisAspect(1))
hidedecorations!(ax)
heatmap!(img[:,:,1,echoes[i]]; limits = limits)
text!("$(echoes[i])"; color = :white, position = (1.5, 1.5), align = (:left, :baseline))
end
return fig
end
function plot_band!(ax, ydata::AbstractTensor{4}, xdata = 1:size(ydata, 4); meankwargs, bandkwargs)
μ = global_reduction(mean, ydata)
σ = global_reduction(std, ydata)
(meankwargs !== nothing) && scatterlines!(ax, xdata, μ; meankwargs...)
(bandkwargs !== nothing) && band!(ax, xdata, μ .- σ, μ .+ σ; bandkwargs...)
end
function plot_global_signal(img::AbstractTensor{4};
meankwargs = (label = L"\mu", color = :red, markersize = 5, linewidth = 1),
bandkwargs = (label = L"\mu \pm \sigma", color = (:red, 0.5)),
)
fig = Figure()
ax = Axis(fig[1,1])
plot_band!(ax, img; meankwargs = meankwargs, bandkwargs = bandkwargs)
axislegend(ax)
return fig
end
function compare_refcon_correction(
img::AbstractTensor{4},
mask::AbstractTensor{3},
t2maps_norefcon::AbstractDict,
t2dist_norefcon::AbstractTensor{4},
t2maps_refcon::AbstractDict,
t2dist_refcon::AbstractTensor{4},
)
DECAES.@unpack t2times, echotimes = t2maps_norefcon
t2times, echotimes = 1000 .* t2times, 1000 .* echotimes
globalmean(x) = vec(mean(x[mask, :]; dims = 1))
fig = Figure()
ax = Axis(fig[1,1]; ylabel = "mean signal [a.u.]", xlabel = "echo time [ms]")
lines!(ax, echotimes, globalmean(img); label = "data", markersize = 5, linewidth = 3, color = :red)
lines!(ax, echotimes, globalmean(t2maps_norefcon["decaycurve"]); label = "no correction", markersize = 5, linewidth = 3, color = :blue)
lines!(ax, echotimes, globalmean(t2maps_refcon["decaycurve"]); label = "w/ correction", markersize = 5, linewidth = 3, color = :green)
hideydecorations!(ax, label = false)
axislegend(ax)
ax = Axis(fig[1,2]; ylabel = "mean difference [a.u.]", xlabel = "echo time [ms]")
hlines!(ax, [0.0]; label = "zero", linewidth = 3, linestyle = :dot, color = :red)
lines!(ax, echotimes, globalmean(img .- t2maps_norefcon["decaycurve"]); label = "no correction", markersize = 5, linewidth = 3, color = :blue)
lines!(ax, echotimes, globalmean(img .- t2maps_refcon["decaycurve"]); label = "w/ correction", markersize = 5, linewidth = 3, color = :green)
hideydecorations!(ax, label = false)
axislegend(ax)
ax = Axis(fig[2,1]; xlabel = "fit error [a.u.]", ylabel = "relative count")
hist!(ax, t2maps_norefcon["resnorm"][mask]; label = "no correction", color = (:blue, 0.5), scale_to = 0.9, offset = 1, bins = 50)
hist!(ax, t2maps_refcon["resnorm"][mask]; label = "w/ correction", color = (:green, 0.5), scale_to = 0.9, offset = 0, bins = 50)
hideydecorations!(ax, label = false)
axislegend(ax)
ax = Axis(fig[2,2]; xlabel = "T2 times [ms]", ylabel = "mean T2 dist. [a.u.]", xscale = log10)
band!(ax, t2times, zero(t2times), globalmean(t2dist_norefcon); label = "no correction", linewidth = 3, color = (:blue, 0.5))
band!(ax, t2times, zero(t2times), globalmean(t2dist_refcon); label = "w/ correction", linewidth = 3, color = (:green, 0.5))
scatter!(ax, t2times, globalmean(t2dist_norefcon); markersize = 5, color = :blue)
scatter!(ax, t2times, globalmean(t2dist_refcon); markersize = 5, color = :green)
hideydecorations!(ax, label = false)
axislegend(ax)
return fig
end
|
import tactic
variables (x y : ℕ)
open nat
-- Q3 def
def is_pred (x y : ℕ) := x.succ = y
theorem Q3a : ¬ ∃ x : ℕ, is_pred x 0 :=
begin
sorry
end
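-- One possible approach (a sketch, not necessarily the intended solution):
-- `is_pred x 0` unfolds to `x.succ = 0`, so `rintro ⟨x, hx⟩` followed by
-- `exact nat.succ_ne_zero x hx` should close the goal.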
theorem Q3b : y ≠ 0 → ∃! x, is_pred x y :=
begin
sorry
end
def aux : 0 < y → ∃ x, is_pred x y :=
begin
sorry
end
-- definition of pred' : "choose some d (via classical choice) such that succ(d) = n"
noncomputable def pred' : ℕ+ → ℕ := λ nhn, classical.some (aux nhn nhn.2)
theorem pred'_def : ∀ np : ℕ+, is_pred (pred' np) np :=
λ nhn, classical.some_spec (aux nhn nhn.2)
def succ' : ℕ → ℕ+ :=
λ n, ⟨n.succ, zero_lt_succ n⟩
noncomputable definition Q3c : ℕ+ ≃ ℕ :=
{ to_fun := pred',
inv_fun := succ',
left_inv := begin
sorry
end,
right_inv := begin
sorry
end
}
|
#ifndef QCQP_B_H
#define QCQP_B_H
#include "Matrix.h"
#include "ALM_L_Katyusha.h"
#include <string>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <stdio.h> /* printf */
#include <time.h>
#include <fstream>
#include <algorithm>
#include <iomanip>
#include <ctime>
#include <math.h>
/*This class solves problems of the form
min 1/2 x^\top Q_0 x + b_0^\top x
s.t. 1/2 x^\top Q_i x + b_i^\top x - 1 \leq 0, \enspace \forall i=1,\dots,m
     -b \leq x \leq b
using IPALM with loopless Katyusha as the inner solver.
*/
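/* A minimal usage sketch (file name and parameter values are hypothetical,
   chosen only to illustrate the call signatures defined below):

     QCQP_b<int, double> prob("qcqp_data.txt", 1.0);   // box bound b = 1
     std::vector<double> x0(prob.get_n(), 0.0);        // primal start
     std::vector<double> y0(prob.get_m(), 0.0);        // dual multipliers
     prob.ALM_QCQP_solver(1.0, 1.0, 0.5, 0.5, x0, y0,
                          1, 20, 10, 10, "log1.txt", "log2.txt", 3600.0);
*/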
template<typename L, typename D>
class QCQP_b: public ALM_L_Katyusha<L, D>
{
private:
vector<D> Lambda; // maximal eigenvalues of Q0, Q1,...Qm.
vector<D> norm_c; //||[b0;b1;...;bm] ||_2
vector<D> norm_1_c; //||[b0;b1;...;bm] ||_1
D b; // absolute bound on each variable
D nb_q_c; //number of quadratic constraints
D nb_v; //number of variables
Matrix<L,D> data_Q; // Q=[Q0;Q1;...;Qm]; b=[b0;b1;...;bm].
protected:
public:
QCQP_b(const char* Matrix_file,D val_b)
:ALM_L_Katyusha<L,D>(), data_Q(Matrix_file)
{
b= val_b;
nb_v=data_Q.nfeatures;
nb_q_c=data_Q.nsamples/data_Q.nfeatures- 1;
this->mu_g=0;
}
inline D get_n(){
return data_Q.nfeatures;
}
inline D get_m(){
return data_Q.nsamples/data_Q.nfeatures- 1;
}
void set_dimension(){
this->d= nb_v;
this->m= nb_q_c;
}
void set_is_feasibility_constraint(){
for(L i=0;i<nb_q_c;i++)
this->is_feasiblity_constraint[i]=1;
}
inline D value_of_P(vector<D> & x){
D res=0;
for(L i=0;i<this->d;i++)
{
if (fabs(x[i])> b){
cout<< "error: the variable x_"<< i<<"="<<x[i]<<". It should be in [-"<<b<<", "<<b<<" ]"<< endl;
system("pause");
return std::numeric_limits<double>::max();
}
}
return res;
}
inline void prox_of_P(D alpha,vector<D> & x, vector<D> & y){
for(L i=0;i<this->d;i++)
{
D x2=x[i];
if (x2>= b){
y[i]=b;
}
else if(x2<=-b){
y[i]=-b;
}
else{
y[i]=x2;
}
}
}
D prox_of_h_star_j(D x,D beta, L j){
if(x>=0) return x;
else return 0;
}
D value_of_h_star_j(D x,L j){
if(x>=0) return 0;
else {
cout<< "Error! x="<<x<<" should be nonnegative."<<endl;
system("pause");
return std::numeric_limits<double>::max();
}
}
void compute_gradient_f0(vector<D> & x, vector<D>& g){
for(L j=0;j<nb_v;j++){
g[j]=0;
for(L k = data_Q.ptr[j]; k < data_Q.ptr[j + 1];k++){
L kj=data_Q.row_idx[k];
g[j]+=x[kj]*data_Q.A[k];
}
g[j]+=data_Q.b[j];
}
}
D compute_gradient_and_value_f_i(vector<D> & x ,vector<D> & g, L i){
D fi=0;
for(L j=0;j<nb_v;j++){
g[j]=0;
L j2=i*nb_v+j;
for(L k = data_Q.ptr[j2]; k < data_Q.ptr[j2 + 1];k++){
L kj=data_Q.row_idx[k];
g[j]+=x[kj]*data_Q.A[k];
}
D bj=data_Q.b[j2];
fi+=x[j]*g[j]/2+x[j]*bj;
g[j]+=bj;
}
return fi-1;
}
D value_of_f0(vector<D> & x){
return my_value_of_f_i( x, 0);
}
D value_of_f_i(vector<D> & x, L i){
return my_value_of_f_i( x, i)-1;
}
D my_value_of_f_i(vector<D> & x, L i){
D fi=0;
for(L j=0;j<nb_v;j++){
D Qjx=0;
L j2=i*nb_v+j;
for(L k = data_Q.ptr[j2]; k < data_Q.ptr[j2 + 1];k++){
L kj=data_Q.row_idx[k];
Qjx+=x[kj]*data_Q.A[k];
}
fi+=x[j]*Qjx/2+x[j]*data_Q.b[j2];
}
return fi;
}
D value_of_h_j(D x, L i){
cout<< "Error! The "<<i<<"th function should be a feasibility constraint"<< endl;
system("pause");
return std::numeric_limits<double>::max();
}
D distance_to_domain_of_h_j(D x, L i){
if(x<=0)
return 0;
else
return x;
}
D compute_Qi_x(vector<D> & x ,vector<D> & g, L i){
D normg=0;
for(L j=0;j<nb_v;j++){
g[j]=0;
L j2=i*nb_v+j;
for(L k = data_Q.ptr[j2]; k < data_Q.ptr[j2 + 1];k++){
L kj=data_Q.row_idx[k];
g[j]+=x[kj]*data_Q.A[k];
}
normg+=g[j]*g[j];
}
return sqrt(normg);
}
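// Estimate the largest eigenvalue Lambda[i] of each Q_i with 20 steps of
// power iteration, and record ||b_i||_2 and ||b_i||_1; these quantities
// feed the smoothness constants assembled in set_Li_Lf() below.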
void compute_lambda(){
D maxv=0;
D minv=std::numeric_limits<double>::max();
Lambda.resize(nb_q_c+1);
norm_c.resize(nb_q_c+1,0);
norm_1_c.resize(nb_q_c+1,0);
std::vector<D> xk(nb_v,1);
std::vector<D> yk(nb_v);
D normk;
D tmp;
for (L i= 0; i<nb_q_c+1 ; i++){
for(L kk=0;kk<20;kk++){
normk= compute_Qi_x(xk , yk ,i);
for (L j=0;j<nb_v;j++)
xk[j]=yk[j]/normk;
}
D res=0;
normk= compute_Qi_x(xk, yk ,i);
for (L j=0;j<nb_v;j++)
res+=xk[j]*yk[j];
Lambda[i]= res;
if (res> maxv){
maxv= res;
}
if (res< minv){
minv= res;
}
D res2= 0;
D res3= 0;
for (L j=0; j< nb_v; j++){
res2+= data_Q.b[i*nb_v+j]*data_Q.b[i*nb_v+j];
res3+= fabs(data_Q.b[i*nb_v+j]);
}
norm_c[i]= sqrt(res2);
norm_1_c[i]=res3;
}
cout<< "max Lambda= "<< maxv<< " min Lambda= "<< minv<< endl;
}
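// Assemble the per-constraint constants Li and the aggregate Lf. Over the
// box |x_j| <= b, M_p_j bounds |f_i|, M_dp_j bounds ||grad f_i||, and
// L_dp_j is the gradient Lipschitz constant (interpretation inferred from
// the bounds above; not stated explicitly in the original source).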
inline void set_Li_Lf(){
this->Li.clear();
this->Li.resize(this->nsamples,0);
this->Li[0]=this->nsamples*Lambda[0];
this->Lf=Lambda[0];
D maxLi=this->Li[0];
D minLi=this->Li[0];
for (L i=0; i< nb_q_c; i++){
D M_p_j= nb_v*Lambda[i+ 1]*b*b+ b*norm_1_c[i+1]+ 1;
D M_dp_j= sqrt(nb_v)*Lambda[i+ 1]*b+ norm_c[i+1];
D L_dp_j= Lambda[i+1];
D d_s_j= M_p_j+ this->beta_s*this->lambda_s[i];
D tmp=L_dp_j*d_s_j/this->beta_s+ M_dp_j*M_dp_j/this->beta_s;
this->Li[i+1]= this->nsamples*tmp;
this->Lf+=tmp;
if (this->Li[i+1]> maxLi){
maxLi= this->Li[i+ 1];
}
if (this->Li[i+ 1]< minLi){
minLi= this->Li[i+ 1];
}
}
this->sumLi=this->Lf*this->nsamples;
cout<<" max of Li: "<<maxLi<<" min of Li: "<<minLi<<" sumLi="<<this->sumLi<<" Lf= "<<this->Lf<<endl;
}
void ALM_QCQP_solver(D beta_0, D epsilon_0, D eta, D rho,vector<D> & x0,vector<D> & y0,L val_tau, L max_nb_outer, L p_N_1, L p_N_2,string filename1, string filename2, D time){
compute_lambda();
this->ALM_solve_with_L_Katyusha(beta_0, epsilon_0, eta, rho,x0,y0, val_tau, max_nb_outer, p_N_1, p_N_2, filename1, filename2, time);
}
};
#endif
|
[GOAL]
T : Type u
inst✝ : TopologicalSpace T
X Y : Opens T
S : Sieve X
f : Y ⟶ X
hf : S ∈ (fun X S => ∀ (x : T), x ∈ X → ∃ U f, S.arrows f ∧ x ∈ U) X
y : T
hy : y ∈ Y
⊢ ∃ U f_1, (Sieve.pullback f S).arrows f_1 ∧ y ∈ U
[PROOFSTEP]
rcases hf y (f.le hy) with ⟨U, g, hg, hU⟩
[GOAL]
case intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X Y : Opens T
S : Sieve X
f : Y ⟶ X
hf : S ∈ (fun X S => ∀ (x : T), x ∈ X → ∃ U f, S.arrows f ∧ x ∈ U) X
y : T
hy : y ∈ Y
U : Opens T
g : U ⟶ X
hg : S.arrows g
hU : y ∈ U
⊢ ∃ U f_1, (Sieve.pullback f S).arrows f_1 ∧ y ∈ U
[PROOFSTEP]
refine' ⟨U ⊓ Y, homOfLE inf_le_right, _, hU, hy⟩
[GOAL]
case intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X Y : Opens T
S : Sieve X
f : Y ⟶ X
hf : S ∈ (fun X S => ∀ (x : T), x ∈ X → ∃ U f, S.arrows f ∧ x ∈ U) X
y : T
hy : y ∈ Y
U : Opens T
g : U ⟶ X
hg : S.arrows g
hU : y ∈ U
⊢ (Sieve.pullback f S).arrows (homOfLE (_ : U ⊓ Y ≤ Y))
[PROOFSTEP]
apply S.downward_closed hg (homOfLE inf_le_left)
[GOAL]
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
S : Sieve X
hS : S ∈ (fun X S => ∀ (x : T), x ∈ X → ∃ U f, S.arrows f ∧ x ∈ U) X
R : Sieve X
hR :
∀ ⦃Y : Opens T⦄ ⦃f : Y ⟶ X⦄,
S.arrows f → Sieve.pullback f R ∈ (fun X S => ∀ (x : T), x ∈ X → ∃ U f, S.arrows f ∧ x ∈ U) Y
x : T
hx : x ∈ X
⊢ ∃ U f, R.arrows f ∧ x ∈ U
[PROOFSTEP]
rcases hS x hx with ⟨U, f, hf, hU⟩
[GOAL]
case intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
S : Sieve X
hS : S ∈ (fun X S => ∀ (x : T), x ∈ X → ∃ U f, S.arrows f ∧ x ∈ U) X
R : Sieve X
hR :
∀ ⦃Y : Opens T⦄ ⦃f : Y ⟶ X⦄,
S.arrows f → Sieve.pullback f R ∈ (fun X S => ∀ (x : T), x ∈ X → ∃ U f, S.arrows f ∧ x ∈ U) Y
x : T
hx : x ∈ X
U : Opens T
f : U ⟶ X
hf : S.arrows f
hU : x ∈ U
⊢ ∃ U f, R.arrows f ∧ x ∈ U
[PROOFSTEP]
rcases hR hf _ hU with ⟨V, g, hg, hV⟩
[GOAL]
case intro.intro.intro.intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
S : Sieve X
hS : S ∈ (fun X S => ∀ (x : T), x ∈ X → ∃ U f, S.arrows f ∧ x ∈ U) X
R : Sieve X
hR :
∀ ⦃Y : Opens T⦄ ⦃f : Y ⟶ X⦄,
S.arrows f → Sieve.pullback f R ∈ (fun X S => ∀ (x : T), x ∈ X → ∃ U f, S.arrows f ∧ x ∈ U) Y
x : T
hx : x ∈ X
U : Opens T
f : U ⟶ X
hf : S.arrows f
hU : x ∈ U
V : Opens T
g : V ⟶ U
hg : (Sieve.pullback f R).arrows g
hV : x ∈ V
⊢ ∃ U f, R.arrows f ∧ x ∈ U
[PROOFSTEP]
exact ⟨_, g ≫ f, hg, hV⟩
[GOAL]
T : Type u
inst✝ : TopologicalSpace T
X Y : Opens T
f : Y ⟶ X
S : Presieve X
hS : S ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) X
x : T
hx : x ∈ Y
⊢ ∃ U f_1, Presieve.pullbackArrows f S f_1 ∧ x ∈ U
[PROOFSTEP]
rcases hS _ (f.le hx) with ⟨U, g, hg, hU⟩
[GOAL]
case intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X Y : Opens T
f : Y ⟶ X
S : Presieve X
hS : S ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) X
x : T
hx : x ∈ Y
U : Opens T
g : U ⟶ X
hg : S g
hU : x ∈ U
⊢ ∃ U f_1, Presieve.pullbackArrows f S f_1 ∧ x ∈ U
[PROOFSTEP]
refine' ⟨_, _, Presieve.pullbackArrows.mk _ _ hg, _⟩
[GOAL]
case intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X Y : Opens T
f : Y ⟶ X
S : Presieve X
hS : S ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) X
x : T
hx : x ∈ Y
U : Opens T
g : U ⟶ X
hg : S g
hU : x ∈ U
⊢ x ∈ pullback g f
[PROOFSTEP]
have : U ⊓ Y ≤ pullback g f
[GOAL]
case this
T : Type u
inst✝ : TopologicalSpace T
X Y : Opens T
f : Y ⟶ X
S : Presieve X
hS : S ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) X
x : T
hx : x ∈ Y
U : Opens T
g : U ⟶ X
hg : S g
hU : x ∈ U
⊢ U ⊓ Y ≤ pullback g f
case intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X Y : Opens T
f : Y ⟶ X
S : Presieve X
hS : S ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) X
x : T
hx : x ∈ Y
U : Opens T
g : U ⟶ X
hg : S g
hU : x ∈ U
this : U ⊓ Y ≤ pullback g f
⊢ x ∈ pullback g f
[PROOFSTEP]
refine' leOfHom (pullback.lift (homOfLE inf_le_left) (homOfLE inf_le_right) rfl)
[GOAL]
case intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X Y : Opens T
f : Y ⟶ X
S : Presieve X
hS : S ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) X
x : T
hx : x ∈ Y
U : Opens T
g : U ⟶ X
hg : S g
hU : x ∈ U
this : U ⊓ Y ≤ pullback g f
⊢ x ∈ pullback g f
[PROOFSTEP]
apply this ⟨hU, hx⟩
[GOAL]
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
S : Presieve X
Ti : ⦃Y : Opens T⦄ → (f : Y ⟶ X) → S f → Presieve Y
hS : S ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) X
hTi : ∀ ⦃Y : Opens T⦄ (f : Y ⟶ X) (H : S f), Ti f H ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) Y
x : T
hx : x ∈ X
⊢ ∃ U f, Presieve.bind S Ti f ∧ x ∈ U
[PROOFSTEP]
rcases hS x hx with ⟨U, f, hf, hU⟩
[GOAL]
case intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
S : Presieve X
Ti : ⦃Y : Opens T⦄ → (f : Y ⟶ X) → S f → Presieve Y
hS : S ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) X
hTi : ∀ ⦃Y : Opens T⦄ (f : Y ⟶ X) (H : S f), Ti f H ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) Y
x : T
hx : x ∈ X
U : Opens T
f : U ⟶ X
hf : S f
hU : x ∈ U
⊢ ∃ U f, Presieve.bind S Ti f ∧ x ∈ U
[PROOFSTEP]
rcases hTi f hf x hU with ⟨V, g, hg, hV⟩
[GOAL]
case intro.intro.intro.intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
S : Presieve X
Ti : ⦃Y : Opens T⦄ → (f : Y ⟶ X) → S f → Presieve Y
hS : S ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) X
hTi : ∀ ⦃Y : Opens T⦄ (f : Y ⟶ X) (H : S f), Ti f H ∈ (fun X R => ∀ (x : T), x ∈ X → ∃ U f, R f ∧ x ∈ U) Y
x : T
hx : x ∈ X
U : Opens T
f : U ⟶ X
hf : S f
hU : x ∈ U
V : Opens T
g : V ⟶ U
hg : Ti f hf g
hV : x ∈ V
⊢ ∃ U f, Presieve.bind S Ti f ∧ x ∈ U
[PROOFSTEP]
exact ⟨_, _, ⟨_, g, f, hf, hg, rfl⟩, hV⟩
[GOAL]
T : Type u
inst✝ : TopologicalSpace T
⊢ Pretopology.ofGrothendieck (Opens T) (grothendieckTopology T) = pretopology T
[PROOFSTEP]
apply le_antisymm
[GOAL]
case a
T : Type u
inst✝ : TopologicalSpace T
⊢ Pretopology.ofGrothendieck (Opens T) (grothendieckTopology T) ≤ pretopology T
[PROOFSTEP]
intro X R hR x hx
[GOAL]
case a
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
R : Presieve X
hR : R ∈ Pretopology.coverings (Pretopology.ofGrothendieck (Opens T) (grothendieckTopology T)) X
x : T
hx : x ∈ X
⊢ ∃ U f, R f ∧ x ∈ U
[PROOFSTEP]
rcases hR x hx with ⟨U, f, ⟨V, g₁, g₂, hg₂, _⟩, hU⟩
[GOAL]
case a.intro.intro.intro.intro.intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
R : Presieve X
hR : R ∈ Pretopology.coverings (Pretopology.ofGrothendieck (Opens T) (grothendieckTopology T)) X
x : T
hx : x ∈ X
U : Opens T
f : U ⟶ X
hU : x ∈ U
V : Opens T
g₁ : U ⟶ V
g₂ : V ⟶ X
hg₂ : R g₂
right✝ : g₁ ≫ g₂ = f
⊢ ∃ U f, R f ∧ x ∈ U
[PROOFSTEP]
exact ⟨V, g₂, hg₂, g₁.le hU⟩
[GOAL]
case a
T : Type u
inst✝ : TopologicalSpace T
⊢ pretopology T ≤ Pretopology.ofGrothendieck (Opens T) (grothendieckTopology T)
[PROOFSTEP]
intro X R hR x hx
[GOAL]
case a
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
R : Presieve X
hR : R ∈ Pretopology.coverings (pretopology T) X
x : T
hx : x ∈ X
⊢ ∃ U f, (Sieve.generate R).arrows f ∧ x ∈ U
[PROOFSTEP]
rcases hR x hx with ⟨U, f, hf, hU⟩
[GOAL]
case a.intro.intro.intro
T : Type u
inst✝ : TopologicalSpace T
X : Opens T
R : Presieve X
hR : R ∈ Pretopology.coverings (pretopology T) X
x : T
hx : x ∈ X
U : Opens T
f : U ⟶ X
hf : R f
hU : x ∈ U
⊢ ∃ U f, (Sieve.generate R).arrows f ∧ x ∈ U
[PROOFSTEP]
exact ⟨U, f, Sieve.le_generate R U hf, hU⟩
[GOAL]
T : Type u
inst✝ : TopologicalSpace T
⊢ Pretopology.toGrothendieck (Opens T) (pretopology T) = grothendieckTopology T
[PROOFSTEP]
rw [← pretopology_ofGrothendieck]
[GOAL]
T : Type u
inst✝ : TopologicalSpace T
⊢ Pretopology.toGrothendieck (Opens T) (Pretopology.ofGrothendieck (Opens T) (grothendieckTopology T)) =
grothendieckTopology T
[PROOFSTEP]
apply (Pretopology.gi (Opens T)).l_u_eq
|
-- https://personal.cis.strath.ac.uk/conor.mcbride/PolyTest.pdf
module Sandbox.PolyTest where
open import Data.Nat
open import Data.Nat.Properties.Extra
open import Relation.Binary.PropositionalEquality
_^_ : ℕ → ℕ → ℕ
x ^ zero = suc zero
x ^ suc y = x * (x ^ y)
-- data Poly : ℕ → Set where
-- κ : ℕ → Poly 0
-- ι : Poly 1
-- _⊕_ : ∀ {l m} → Poly l → Poly m → Poly (l ⊓ m)
-- _⊗_ : ∀ {l m} → Poly l → Poly m → Poly (l * m)
--
-- ⟦_⟧_ : ∀ {n} → Poly n → ℕ → ℕ
-- ⟦ κ c ⟧ x = c
-- ⟦ ι ⟧ x = x
-- ⟦ p ⊕ q ⟧ x = ⟦ p ⟧ x + ⟦ q ⟧ x
-- ⟦ p ⊗ q ⟧ x = ⟦ p ⟧ x * ⟦ q ⟧ x
--
-- Δ : ∀ {n} → Poly (suc n) → Poly n
-- Δ p = {! p !}
-- -- I'm not sure if there should be a case for the constructor _⊕_,
-- -- because I get stuck when trying to solve the following unification
-- -- problems (inferred index ≟ expected index):
-- -- l ⊓ m ≟ suc n
-- -- when checking that the expression ? has type Poly .n
-- data Poly : ℕ → Set where
-- κ : ℕ → Poly 0
-- ι : Poly 1
-- plus : ∀ {l m n} → Poly l → Poly m → (l ⊓ m) ≡ n → Poly n
-- times : ∀ {l m n} → Poly l → Poly m → (l * m) ≡ n → Poly n
--
-- ⟦_⟧_ : ∀ {n} → Poly n → ℕ → ℕ
-- ⟦ κ c ⟧ x = c
-- ⟦ ι ⟧ x = x
-- ⟦ plus p q eq ⟧ x = ⟦ p ⟧ x + ⟦ q ⟧ x
-- ⟦ times p q eq ⟧ x = ⟦ p ⟧ x * ⟦ q ⟧ x
-- Δ : ∀ {n} → Poly (suc n) → Poly n
-- Δ ι = κ 1
-- Δ (plus (κ x) q ())
-- Δ (plus ι (κ x) ())
-- Δ (plus ι ι eq) = plus (Δ ι) (Δ ι) (cancel-suc eq)
-- Δ (plus ι (plus p₂ q₂ eq₂) eq) = plus (Δ ι) (Δ {! (plus p₂ q₂ eq₂) !}) {! !}
-- Δ (plus ι (times p₂ q₂ eq₂) eq) = {! !}
-- -- Δ (plus ι q eq) = plus (Δ ι) (Δ {! !}) {! !}
-- Δ (plus (plus p q' eq') q eq) = {! !}
-- Δ (plus (times p q' eq') q eq) = {! !}
-- Δ (times p q eq) = {! !}
-- data Poly : ℕ → Set where
-- κ : ∀ {n} → ℕ → Poly n
-- ι : ∀ {n} → Poly (suc n)
-- plus : ∀ {n} → Poly n → Poly n → Poly n
-- times : ∀ {l m n} → Poly l → Poly m → (l * m) ≡ n → Poly n
--
-- -- ∆(p ⊗ r) = ∆p ⊗ (r · (1+)) ⊕ p ⊗ ∆r
-- Δ : ∀ {n} → Poly (suc n) → Poly n
-- Δ (κ x) = κ 0
-- Δ ι = κ 1
-- Δ (plus p q) = plus (Δ p) (Δ q)
-- Δ (times p q eq) = times (Δ {! p !}) {! !} {! !}
data Add : ℕ → ℕ → ℕ → Set where
0+n : ∀ {n} → Add 0 n n
n+0 : ∀ {n} → Add n 0 n
m+n : ∀ {l m n} → Add l m n → Add (suc l) (suc m) (suc (suc n))
data Poly : ℕ → Set where
κ : ∀ {n} → ℕ → Poly n
ι : ∀ {n} → Poly (suc n)
↑ : ∀ {n} → Poly n → Poly n
_⊕_ : ∀ {n} → Poly n → Poly n → Poly n
times : ∀ {l m n} → Poly l → Poly m → Add l m n → Poly n
infixl 5 _⊕_
⟦_⟧_ : ∀ {n} → Poly n → ℕ → ℕ
⟦ κ c ⟧ x = c
⟦ ι ⟧ x = x
⟦ ↑ p ⟧ x = ⟦ p ⟧ suc x
⟦ p ⊕ q ⟧ x = ⟦ p ⟧ x + ⟦ q ⟧ x
⟦ times p q add ⟧ x = ⟦ p ⟧ x * ⟦ q ⟧ x
sucl : ∀ {l m n} → Add l m n → Add (suc l) m (suc n)
sucl (0+n {zero}) = n+0
sucl (0+n {suc n}) = m+n 0+n
sucl (n+0 {zero}) = n+0
sucl (n+0 {suc n}) = n+0
sucl (m+n add) = m+n (sucl add)
sucr : ∀ {l m n} → Add l m n → Add l (suc m) (suc n)
sucr 0+n = 0+n
sucr (n+0 {zero}) = 0+n
sucr (n+0 {suc n}) = m+n n+0
sucr (m+n add) = m+n (sucr add)
-- -- ∆(p ⊗ q) = (∆p ⊗ (q · (1+))) ⊕ (p ⊗ ∆q)
Δ : ∀ {n} → Poly (suc n) → Poly n
Δ (κ c) = κ 0
Δ ι = κ 1
Δ (↑ p) = ↑ (Δ p)
Δ (p ⊕ q) = Δ p ⊕ Δ q
Δ (times p q 0+n) = times (κ (⟦ p ⟧ 0)) (Δ q) 0+n
Δ (times p q n+0) = times (Δ p) (κ (⟦ q ⟧ 0)) n+0
Δ (times p q (m+n add)) = times (Δ p) (↑ q) (sucr add) ⊕ times p (Δ q) (sucl add)
add : ∀ l m → Add l m (l + m)
add zero m = 0+n
add (suc l) m = sucl (add l m)
_⊗_ : ∀ {l m} → Poly l → Poly m → Poly (l + m)
_⊗_ {l} {m} p q = times p q (add l m)
infixr 6 _⊗_
ι₁ : Poly 1
ι₁ = ι
κ₀ : ℕ → Poly 0
κ₀ c = κ c
_⊛_ : ∀ {m} → Poly m → (n : ℕ) → Poly (n * m)
_⊛_ p zero = κ 1
-- One possible completion of the hole: suc n * m reduces to m + n * m,
-- so iterate the product with times, using add m (n * m) : Add m (n * m) (m + n * m).
_⊛_ {m} p (suc n) = times p (p ⊛ n) (add m (n * m))
|
State Before: α : Type u_1
β : Type u_2
γ : Type ?u.47735
δ : Type ?u.47738
e : LocalEquiv α β
e' : LocalEquiv β γ
⊢ (LocalEquiv.trans (LocalEquiv.refl α) e).source = e.source State After: no goals Tactic: simp [trans_source, preimage_id] |
State Before: n : ℕ
F : TypeVec n → Type u_1
inst✝ : MvFunctor F
q : MvQPF F
α β γ : TypeVec n
f : α ⟹ β
g : β ⟹ γ
x : F α
⊢ (g ⊚ f) <$$> x = g <$$> f <$$> x State After: n : ℕ
F : TypeVec n → Type u_1
inst✝ : MvFunctor F
q : MvQPF F
α β γ : TypeVec n
f : α ⟹ β
g : β ⟹ γ
x : F α
⊢ (g ⊚ f) <$$> abs (repr x) = g <$$> f <$$> abs (repr x) Tactic: rw [← abs_repr x] State Before: n : ℕ
F : TypeVec n → Type u_1
inst✝ : MvFunctor F
q : MvQPF F
α β γ : TypeVec n
f : α ⟹ β
g : β ⟹ γ
x : F α
⊢ (g ⊚ f) <$$> abs (repr x) = g <$$> f <$$> abs (repr x) State After: case mk
n : ℕ
F : TypeVec n → Type u_1
inst✝ : MvFunctor F
q : MvQPF F
α β γ : TypeVec n
f✝ : α ⟹ β
g : β ⟹ γ
x : F α
a : (P F).A
f : MvPFunctor.B (P F) a ⟹ α
⊢ (g ⊚ f✝) <$$> abs { fst := a, snd := f } = g <$$> f✝ <$$> abs { fst := a, snd := f } Tactic: cases' repr x with a f State Before: case mk
n : ℕ
F : TypeVec n → Type u_1
inst✝ : MvFunctor F
q : MvQPF F
α β γ : TypeVec n
f✝ : α ⟹ β
g : β ⟹ γ
x : F α
a : (P F).A
f : MvPFunctor.B (P F) a ⟹ α
⊢ (g ⊚ f✝) <$$> abs { fst := a, snd := f } = g <$$> f✝ <$$> abs { fst := a, snd := f } State After: case mk
n : ℕ
F : TypeVec n → Type u_1
inst✝ : MvFunctor F
q : MvQPF F
α β γ : TypeVec n
f✝ : α ⟹ β
g : β ⟹ γ
x : F α
a : (P F).A
f : MvPFunctor.B (P F) a ⟹ α
⊢ abs ((g ⊚ f✝) <$$> { fst := a, snd := f }) = abs (g <$$> f✝ <$$> { fst := a, snd := f }) Tactic: rw [← abs_map, ← abs_map, ← abs_map] State Before: case mk
n : ℕ
F : TypeVec n → Type u_1
inst✝ : MvFunctor F
q : MvQPF F
α β γ : TypeVec n
f✝ : α ⟹ β
g : β ⟹ γ
x : F α
a : (P F).A
f : MvPFunctor.B (P F) a ⟹ α
⊢ abs ((g ⊚ f✝) <$$> { fst := a, snd := f }) = abs (g <$$> f✝ <$$> { fst := a, snd := f }) State After: no goals Tactic: rfl |
lemma prime_elem_dvd_power_iff: "prime_elem p \<Longrightarrow> n > 0 \<Longrightarrow> p dvd x ^ n \<longleftrightarrow> p dvd x" |
\vsssub
\subsubsection{~$S_{uo}$: Unresolved Obstacles Source Term} \label{sec:UOST}
\vsssub
\opthead{UOST}{\ws}{L. Mentaschi}
Unresolved bathymetric and coastal features, such as cliffs, shoals and small islands,
are a major source of local error in spectral wave models.
Their dissipating effects can be accumulated over long distances, and
neglecting them can compromise the simulation skill on large portions of the domain
\citep{tol:Waves01a,tol:WaF02,art:Tuomi2014,art:Mentaschi2015a}.
In \ws\
two approaches are available for subscale modelling of
the dissipation due to unresolved obstacles:
a) a propagation-based approach established for regular grids, described in section
\ref{sub:num_obst} \citep{tol:OMOD03a, tol:OMOD08a}; and b) the
Unresolved Obstacles Source Term \citep[UOST,][]{art:Mentaschi2018a,art:Mentaschi2015b}
described here.
In addition to supporting virtually any type of mesh,
UOST takes into account the directional/spatial layout
of the unresolved features, which can improve the model skill
with respect to the established approach \citep{art:HMM00,art:Mentaschi2018b}.
UOST relies on the hypothesis that any mesh can be considered as a set of polygons,
called cells, and that the model estimates the average value of the unknowns
in each cell.
Given a cell (let us call it A, figure \ref{fig:UOST}ab) UOST estimates, for each spectral component,
the effect of a) the unresolved features located in A (Local Dissipation, LD);
b) the unresolved features located upstream of A, and projecting their shadow on A
(Shadow Effect, SE). For the estimation of SE,
an upstream polygon A\textsc{\char13} is defined for each
cell/spectral component, as the intersection between the joint cells neighboring A
(cells B, C and D in figure \ref{fig:UOST}ab), and the flux upstream of A.
For each cell or upstream polygon, and for each spectral component,
two different transparency coefficients are estimated.
1) The overall transparency coefficient $\alpha$; and
2) a layout-dependent transparency $\beta$, defined as the average transparency
of cell sections starting from the cell upstream side.
The source term can be expressed as:
\begin{equation}
S_{uo} = S_{ld} + S_{se}\ ,
\end{equation}
\begin{equation}
S_{ld} = - \psi_{ld}(\mathbf{k}) \frac{1 - \beta_l(\mathbf{k})}{\beta_l(\mathbf{k})} \frac{c_g(\mathbf{k})}{\Delta L} N \ ,
\end{equation}
\begin{equation}
S_{se} = - \psi_{se}(\mathbf{k}) \bigg[ \frac{\beta_u(\mathbf{k})}{\alpha_u(\mathbf{k})} - 1 \bigg] \frac{c_g(\mathbf{k})}{\Delta L} N \ ,
\end{equation}
where $S_{ld}$ and $S_{se}$ are the local dissipation and the shadow effect,
$N$ is the spectral density,
$\mathbf{k}$ is the wave vector,
$c_g$ is the group velocity, $\Delta L$ is the path length of the spectral component
in the cell, and the $\psi$ factors model the reduction of the dissipation in presence of
local wave growth.
The subscripts l and u of $\alpha$ and $\beta$ indicate that these coefficients
can be referred, respectively, to the cell and to the upstream polygon.
For a more detailed explanation on the theoretical framework of UOST,
the reader is referred to
\citep{art:Mentaschi2015b, art:Mentaschi2018a}.
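Two limiting cases follow directly from the expressions above and provide a useful sanity check:
for a cell free of unresolved obstacles ($\beta_l = 1$) the factor $(1-\beta_l)/\beta_l$ vanishes and
$S_{ld} = 0$, while the shadow effect vanishes whenever the upstream transparencies coincide
($\beta_u = \alpha_u$ implies $S_{se} = 0$).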
\begin{figure} \begin{center}
\epsfig{file=./eqs/UOST.eps,angle=0,width=4in}
\caption{
a: a square cell (A) and its upstream polygon
(A\textsc{\char13}, delimited by blue line, in light blue color) for a spectral
component propagating with group velocity $c_g$.
The joint BCD polygon represents the neighborhood polygon.
b: same as a, but for a triangular mesh (the hexagons approximate the median dual cells).
c: Computation of $\alpha$ and $\beta$ for a square cell, $N_s=4$,
and a spectral component propagating along the x axis.
d: Like c, but for a hexagonal cell and for a tilted spectral component.
In panel d the gray squares represent unresolved obstacles.
}
\label{fig:UOST} \botline
\end{center}
\end{figure}
\textbf{Automatic generation of mesh parameters.}
An open-source python package (alphaBetaLab, https://github.com/menta78/alphaBetaLab)
was developed
for the automatic estimation, from real-world bathymetry, of the upstream polygons,
of the transparency coefficients
$\alpha_l$, $\beta_l$, $\alpha_u$, $\beta_u$,
and of the other parameters needed by UOST.
alphaBetaLab considers the cells as free polygons, and estimates the transparency coefficients
from the cross section of the unresolved obstacles versus the incident spectral component
(figure \ref{fig:UOST}cd).
This implies that it can be applied to any type of mesh,
including unstructured triangular and SMC meshes
(as of August 2018 only regular and triangular meshes are handled,
but support for SMC meshes
will soon be added).
We note that while UOST is able to modulate the energy dissipation
with the spectral frequency, alphaBetaLab currently considers only the direction.
For more details on the algorithms implemented in alphaBetaLab,
the user is referred to \cite{art:Mentaschi2018a}.
\cite{art:Mentaschi2018c} provides the documentation of the software and of its architecture,
along with usage guidance and illustrative examples.
\textbf{Time step settings.}
In \ws\ the source terms are applied at the end of each global time step.
Therefore, to work properly on a given cell,
UOST needs a global time step lower than
or equal to the critical CFL time step of the cell,
i.e., the amount of time needed by the fastest spectral component to entirely cross the cell.
Otherwise, part of the energy will leak through the cell without being blocked \citep{art:Mentaschi2018a}.
In unstructured grids with cells of very different sizes, the application of UOST to
all the cells, including the smallest ones, may require an exceedingly small global time step
that would affect the economy of the model.
To avoid this problem the user can set alphaBetaLab in order to
neglect cells smaller than a user-defined threshold, and then set in \ws\
a global time step equal to the critical CFL time step associated with that threshold
\citep{art:Mentaschi2018c}.
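In formula form (our shorthand, not part of the model namelist), the requirement on the global
time step $\Delta t_g$ can be sketched as
\begin{equation}
\Delta t_g \le \min_{\mathrm{cells}} \frac{\Delta L}{\max_{\mathbf{k}} c_g(\mathbf{k})} \ ,
\end{equation}
where the minimum runs over the cells to which UOST is applied: excluding cells below the size
threshold removes the smallest $\Delta L$ from the minimum and thus relaxes the constraint.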
\begin{table} \begin{center}
\footnotesize
\begin{tabular}{|p{4cm}|p{2.5cm}|p{5.5cm}|} \hline \hline
Namelist parameter & Description & default value \\
\hline
UOSTFILELOCAL & Local $\alpha$/$\beta$ input file path & obstructions\_local.\textit{gridname}.in \\ \hline
UOSTFILESHADOW & Shadow $\alpha$/$\beta$ input file path & obstructions\_shadow.\textit{gridname}.in \\ \hline
UOSTFACTORLOCAL & Calibration factor for local transparencies & 1 \\ \hline
UOSTFACTORSHADOW & Calibration factor for shadow transparencies & 1 \\
\hline
\end{tabular} \end{center}
\caption{UOST parameters, their description and default values. } \label{tab:UOST}
\botline
\end{table}
|
In this section we detail our reference scenario for MRS coordination, as well
as the formalization of the problem we address and the solutions we propose.
\section{Problem description}
Our reference scenario is based on a warehouse that stores items of various types.
Such items must be composed together to satisfy orders that arrive based on customers’ demand.
The items of various types are stored in a particular section of the building (the \textit{loading bay})
and must be transported to a set of \textit{unloading bays}, where such items are then
packed together by human operators. The set of items to be transported and where they should
go depends on the orders.
In our domain a set of robots is responsible for transporting items from
the loading bay to the unloading bays, and the system's goal is to maximize the
throughput of the orders, i.e., to maximize the number of orders completed per
unit of time. Robots involved in transportation tasks move around
the warehouse and are likely to interfere when they move in close proximity,
and this can become a major source of inefficiency (e.g., robots must slow down
and they might even collide, causing serious delays in the system).
Hence, a crucial aspect in maintaining highly efficient and safe operations is minimizing the
possible spatial interference between robots.
The main goal of our system is to compose simple tasks into a subset of complex tasks,
in order to improve productivity and to minimize the robots' travel time and path length.
The merging of tasks can be managed with a range of different techniques;
Section \ref{solution} describes the strategies used to create subsets of tasks.
After the composition step we obtain a variety of task subsets. Finally, to select the best
subset we propose a heuristic. This heuristic function normalizes the overall path of a subset of tasks
by the number of items that each robot can carry.
The purpose is thus to maximize the number of items transported in the same journey.
\section{Problem formalization}
In this section we formalize the MRS coordination problem described above as a task allocation problem
where the robots must be allocated to transportation tasks.
In our formalization we have a finite set of tasks. A robot executes a task when one is available; otherwise it returns to its start position.
Due to how the system is built, a robot must arrive at the vertex preceding the loading bay in order to request a task.
In more detail, our model considers a set of items of different types $E = \{ e_1,...,e_N\}$,
stored in a specific loading bay ($L$). The warehouse must serve a set of orders
$O=\{o_1,...,o_M\}$. Orders are processed in one or more of the unloading bays ($U_i$).
Each order is defined by a vector of demand for each item type (the number of required
items to close the order). Hence, $o_j = < d_{1,j},...,d_{N,j}>$, where $d_{i,j}$ is the
demand for order $j$ of items of type $i$. When an order is finished a new one arrives,
until the set of tasks is exhausted.
The orders induce a set of $N \times M$ transportation tasks $T = \{t_{i,j}\}$, with
$t_{i,j} = < d_{i,j}, P_{i,j}>$, where $t_{i,j}$ defines the task of transporting
$d_{i,j}$ items of type $i$ for order $o_j$ (hence to unloading bay $U_j$).
Each task has a destination bay; for centralized coordination, $t_{i,j}$ also carries a set of edges
$P_{i,j}$ that respects the strategy used.
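For instance, with $N = 2$ item types and $M = 3$ orders, the warehouse induces up to
$N \times M = 6$ transportation tasks; task $t_{2,3}$ transports $d_{2,3}$ items of type 2
to unloading bay $U_3$, along the set of edges $P_{2,3}$.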
We have a set of robots $R = \{r_1,...,r_K \}$ that can execute transportation tasks, where
each robot has a defined load capacity for each item type $C_k = <c_{1,k},...,c_{N,k}>$,
hence $c_{i,k}$ is the load capacity of robot $k$ for items of type $i$.
In our logistic scenario we consider homogeneous robots, which have the same radius
and the same capacity, since in logistic environments robots are often
all identical.
% formalize the heuristic
Since the main focus of this thesis is composing subsets of tasks, we formalize what
a subset of tasks is, what we mean by merging the paths of a subset of tasks, and
the heuristic function used to compare subsets of tasks and simple tasks.
Specifically, a set of tasks $\mathcal{T}$ intrinsically defines a set of orders $O$,
and each order is served by a subset of $\mathcal{T}$. A subset of tasks $S$ is denoted by $S\subseteq\mathcal{T}$.
Given $S = \{T_1,\cdots,T_k\}$, we combine the paths $P$ of its elements to form a single path $\pi$,
i.e., a vector of vertices $\pi = \{v_1,\cdots,v_i\}$ that performs the travel serving the order.
% figure of the union of two paths
\begin{figure}[hbt]
\begin{multicols}{2}
\includegraphics[width=\linewidth]{img/p1p2_cut.png}
\includegraphics[width=\linewidth]{img/p3_cut.png}
\end{multicols}
\caption{Comparison of the union of two paths $P$ and $P'$ to form a single path $\pi = P \cup P'$.}
\end{figure}
% function that computes the path distance
The function $p(\cdot)$ is used to join the paths; this function returns the vector of vertices $\pi$.
To calculate the cost of a path we use a function $f(\cdot)$ that returns the path distance.
Since we maximize the total demand ($d_S$), we compute it as the sum of the individual demands of the elements in the subset.
More formally:
\[d_S = demand(T_1) + \cdots + demand(T_k)\]
where $demand(\cdot)$ returns the demand of the given task.
Another function used in our system is $dst(\cdot)$, which returns the Euclidean distance between loading and unloading bays.
Finally we define our heuristic function $v(\cdot)$, which
can be evaluated on any task $T$ or subset $S$:
\[ v(T) = \frac{f(P)}{demand(T)}\]
This heuristic function normalizes the cost of the path (the path distance)
by the demand (the number of items) of the order.
To compute the best partition of tasks, the heuristic is based on the concept of loss $L$,
which can be defined for any pair of subsets $S_i$, $S_j$ as:
\[L(S_i,S_j) = v(S_i \cup S_j) - v(S_i) - v(S_j)\]
where $v(S)$ is the value of the characteristic function $v(\cdot)$ for subset $S$.
In other words, this loss function captures the value of synergies between subsets
and is computed in constant time.
The function $v$ for a subset $S$ is defined as:
\[ v(S) = \frac{f(\pi)}{d_S}\]
where $f(\pi)$ is the path distance from the loading bay $L$ through the unloading bays $U_j$
and back to the initial vertex, and $d_S$ is the total demand of the tasks.
We want to minimize the loss, i.e., we only merge pairs for which
\[L(S_i,S_j) < 0 \]
holds: if the loss $L$ is less than 0, we allocate the merged pair and delete the elements that
formed the subset.
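As a purely illustrative example (with invented numbers), consider two tasks $T_1$, $T_2$ with
path distances $f(P_1) = 10$ and $f(P_2) = 8$ and demands $demand(T_1) = demand(T_2) = 2$, so that
$v(T_1) = 10/2 = 5$ and $v(T_2) = 8/2 = 4$. If their merged path has distance $f(\pi) = 14$, the
total demand is $d_S = 4$ and $v(S) = 14/4 = 3.5$, hence
\[L(T_1,T_2) = 3.5 - 5 - 4 = -5.5 < 0\]
and the merge is accepted.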
|
Finn Hudson (Cory Monteith) is exhausted by his extra-curricular activities, so Terri gives him pseudoephedrine tablets, which Finn shares with the rest of the males in the glee club. The effects of the tablets enhance their performance, and they give an energetic mash-up of "It's My Life" and "Confessions Part II". When Kurt Hummel (Chris Colfer) tells the girls the secret behind the boys' performance, they, too, request the tablets from Terri, and give a high-spirited mash-up of "Halo" and "Walking On Sunshine". Finn and Rachel Berry (Lea Michele) feel guilty for cheating, however, and agree to nullify the competition. When Principal Figgins (Iqbal Theba) learns what has happened, he fires Terri and, angry with Will, appoints Sue as co-director of the glee club.
|
/-
Copyright (c) 2019 Zhouhang Zhou. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Zhouhang Zhou, Yury Kudryashov, Heather Macbeth
-/
import measure_theory.function.l1_space
/-!
# Density of simple functions
Show that each Borel measurable function can be approximated pointwise, and each `Lᵖ` Borel
measurable function can be approximated in `Lᵖ` norm, by a sequence of simple functions.
## Main definitions
* `measure_theory.simple_func.nearest_pt (e : ℕ → α) (N : ℕ) : α →ₛ ℕ`: the `simple_func` sending
each `x : α` to the point `e k` which is the nearest to `x` among `e 0`, ..., `e N`.
* `measure_theory.simple_func.approx_on (f : β → α) (hf : measurable f) (s : set α) (y₀ : α)
(h₀ : y₀ ∈ s) [separable_space s] (n : ℕ) : β →ₛ α` : a simple function that takes values in `s`
and approximates `f`.
* `measure_theory.Lp.simple_func`, the type of `Lp` simple functions
* `coe_to_Lp`, the embedding of `Lp.simple_func E p μ` into `Lp E p μ`
## Main results
* `tendsto_approx_on` (pointwise convergence): If `f x ∈ s`, then the sequence of simple
approximations `measure_theory.simple_func.approx_on f hf s y₀ h₀ n`, evaluated at `x`,
tends to `f x` as `n` tends to `∞`.
* `tendsto_approx_on_univ_Lp` (Lᵖ convergence): If `E` is a `normed_group` and `f` is measurable
and `mem_ℒp` (for `p < ∞`), then the simple functions `simple_func.approx_on f hf s 0 h₀ n` may
be considered as elements of `Lp E p μ`, and they tend in Lᵖ to `f`.
* `Lp.simple_func.dense_embedding`: the embedding `coe_to_Lp` of the `Lp` simple functions into
`Lp` is dense.
* `Lp.simple_func.induction`, `Lp.induction`, `mem_ℒp.induction`, `integrable.induction`: to prove
a predicate for all elements of one of these classes of functions, it suffices to check that it
behaves correctly on simple functions.
## TODO
For `E` finite-dimensional, simple functions `α →ₛ E` are dense in L^∞ -- prove this.
## Notations
* `α →ₛ β` (local notation): the type of simple functions `α → β`.
* `α →₁ₛ[μ] E`: the type of `L1` simple functions `α → β`.
-/
open set function filter topological_space ennreal emetric finset
open_locale classical topological_space ennreal measure_theory big_operators
variables {α β ι E F 𝕜 : Type*}
noncomputable theory
namespace measure_theory
local infixr ` →ₛ `:25 := simple_func
namespace simple_func
/-! ### Pointwise approximation by simple functions -/
section pointwise
variables [measurable_space α] [emetric_space α] [opens_measurable_space α]
/-- `nearest_pt_ind e N x` is the index `k` such that `e k` is the nearest point to `x` among the
points `e 0`, ..., `e N`. If more than one point are at the same distance from `x`, then
`nearest_pt_ind e N x` returns the least of their indexes. -/
noncomputable def nearest_pt_ind (e : ℕ → α) : ℕ → α →ₛ ℕ
| 0 := const α 0
| (N + 1) := piecewise (⋂ k ≤ N, {x | edist (e (N + 1)) x < edist (e k) x})
(measurable_set.Inter $ λ k, measurable_set.Inter_Prop $ λ hk,
measurable_set_lt measurable_edist_right measurable_edist_right)
(const α $ N + 1) (nearest_pt_ind N)
/-- `nearest_pt e N x` is the nearest point to `x` among the points `e 0`, ..., `e N`. If more than
one point are at the same distance from `x`, then `nearest_pt e N x` returns the point with the
least possible index. -/
noncomputable def nearest_pt (e : ℕ → α) (N : ℕ) : α →ₛ α :=
(nearest_pt_ind e N).map e
@[simp] lemma nearest_pt_ind_zero (e : ℕ → α) : nearest_pt_ind e 0 = const α 0 := rfl
@[simp] lemma nearest_pt_zero (e : ℕ → α) : nearest_pt e 0 = const α (e 0) := rfl
lemma nearest_pt_ind_succ (e : ℕ → α) (N : ℕ) (x : α) :
nearest_pt_ind e (N + 1) x =
if ∀ k ≤ N, edist (e (N + 1)) x < edist (e k) x
then N + 1 else nearest_pt_ind e N x :=
by { simp only [nearest_pt_ind, coe_piecewise, set.piecewise], congr, simp }
lemma nearest_pt_ind_le (e : ℕ → α) (N : ℕ) (x : α) : nearest_pt_ind e N x ≤ N :=
begin
induction N with N ihN, { simp },
simp only [nearest_pt_ind_succ],
split_ifs,
exacts [le_rfl, ihN.trans N.le_succ]
end
lemma edist_nearest_pt_le (e : ℕ → α) (x : α) {k N : ℕ} (hk : k ≤ N) :
edist (nearest_pt e N x) x ≤ edist (e k) x :=
begin
induction N with N ihN generalizing k,
{ simp [nonpos_iff_eq_zero.1 hk, le_refl] },
{ simp only [nearest_pt, nearest_pt_ind_succ, map_apply],
split_ifs,
{ rcases hk.eq_or_lt with rfl|hk,
exacts [le_rfl, (h k (nat.lt_succ_iff.1 hk)).le] },
{ push_neg at h,
rcases h with ⟨l, hlN, hxl⟩,
rcases hk.eq_or_lt with rfl|hk,
exacts [(ihN hlN).trans hxl, ihN (nat.lt_succ_iff.1 hk)] } }
end
lemma tendsto_nearest_pt {e : ℕ → α} {x : α} (hx : x ∈ closure (range e)) :
tendsto (λ N, nearest_pt e N x) at_top (𝓝 x) :=
begin
refine (at_top_basis.tendsto_iff nhds_basis_eball).2 (λ ε hε, _),
rcases emetric.mem_closure_iff.1 hx ε hε with ⟨_, ⟨N, rfl⟩, hN⟩,
rw [edist_comm] at hN,
exact ⟨N, trivial, λ n hn, (edist_nearest_pt_le e x hn).trans_lt hN⟩
end
variables [measurable_space β] {f : β → α}
/-- Approximate a measurable function by a sequence of simple functions `F n` such that
`F n x ∈ s`. -/
noncomputable def approx_on (f : β → α) (hf : measurable f) (s : set α) (y₀ : α) (h₀ : y₀ ∈ s)
[separable_space s] (n : ℕ) :
β →ₛ α :=
by haveI : nonempty s := ⟨⟨y₀, h₀⟩⟩;
exact comp (nearest_pt (λ k, nat.cases_on k y₀ (coe ∘ dense_seq s) : ℕ → α) n) f hf
@[simp] lemma approx_on_zero {f : β → α} (hf : measurable f) {s : set α} {y₀ : α} (h₀ : y₀ ∈ s)
[separable_space s] (x : β) :
approx_on f hf s y₀ h₀ 0 x = y₀ :=
rfl
lemma approx_on_mem {f : β → α} (hf : measurable f) {s : set α} {y₀ : α} (h₀ : y₀ ∈ s)
[separable_space s] (n : ℕ) (x : β) :
approx_on f hf s y₀ h₀ n x ∈ s :=
begin
haveI : nonempty s := ⟨⟨y₀, h₀⟩⟩,
suffices : ∀ n, (nat.cases_on n y₀ (coe ∘ dense_seq s) : α) ∈ s, { apply this },
rintro (_|n),
exacts [h₀, subtype.mem _]
end
@[simp] lemma approx_on_comp {γ : Type*} [measurable_space γ] {f : β → α} (hf : measurable f)
{g : γ → β} (hg : measurable g) {s : set α} {y₀ : α} (h₀ : y₀ ∈ s) [separable_space s] (n : ℕ) :
approx_on (f ∘ g) (hf.comp hg) s y₀ h₀ n = (approx_on f hf s y₀ h₀ n).comp g hg :=
rfl
lemma tendsto_approx_on {f : β → α} (hf : measurable f) {s : set α} {y₀ : α} (h₀ : y₀ ∈ s)
[separable_space s] {x : β} (hx : f x ∈ closure s) :
tendsto (λ n, approx_on f hf s y₀ h₀ n x) at_top (𝓝 $ f x) :=
begin
haveI : nonempty s := ⟨⟨y₀, h₀⟩⟩,
rw [← @subtype.range_coe _ s, ← image_univ, ← (dense_range_dense_seq s).closure_eq] at hx,
simp only [approx_on, coe_comp],
refine tendsto_nearest_pt (closure_minimal _ is_closed_closure hx),
simp only [nat.range_cases_on, closure_union, range_comp coe],
exact subset.trans (image_closure_subset_closure_image continuous_subtype_coe)
(subset_union_right _ _)
end
lemma edist_approx_on_mono {f : β → α} (hf : measurable f) {s : set α} {y₀ : α} (h₀ : y₀ ∈ s)
[separable_space s] (x : β) {m n : ℕ} (h : m ≤ n) :
edist (approx_on f hf s y₀ h₀ n x) (f x) ≤ edist (approx_on f hf s y₀ h₀ m x) (f x) :=
begin
dsimp only [approx_on, coe_comp, (∘)],
exact edist_nearest_pt_le _ _ ((nearest_pt_ind_le _ _ _).trans h)
end
lemma edist_approx_on_le {f : β → α} (hf : measurable f) {s : set α} {y₀ : α} (h₀ : y₀ ∈ s)
[separable_space s] (x : β) (n : ℕ) :
edist (approx_on f hf s y₀ h₀ n x) (f x) ≤ edist y₀ (f x) :=
edist_approx_on_mono hf h₀ x (zero_le n)
lemma edist_approx_on_y0_le {f : β → α} (hf : measurable f) {s : set α} {y₀ : α} (h₀ : y₀ ∈ s)
[separable_space s] (x : β) (n : ℕ) :
edist y₀ (approx_on f hf s y₀ h₀ n x) ≤ edist y₀ (f x) + edist y₀ (f x) :=
calc edist y₀ (approx_on f hf s y₀ h₀ n x) ≤
edist y₀ (f x) + edist (approx_on f hf s y₀ h₀ n x) (f x) : edist_triangle_right _ _ _
... ≤ edist y₀ (f x) + edist y₀ (f x) : add_le_add_left (edist_approx_on_le hf h₀ x n) _
end pointwise
/-! ### Lp approximation by simple functions -/
section Lp
variables [measurable_space β]
variables [measurable_space E] [normed_group E] {q : ℝ} {p : ℝ≥0∞}
lemma nnnorm_approx_on_le [opens_measurable_space E] {f : β → E} (hf : measurable f)
{s : set E} {y₀ : E} (h₀ : y₀ ∈ s) [separable_space s] (x : β) (n : ℕ) :
∥approx_on f hf s y₀ h₀ n x - f x∥₊ ≤ ∥f x - y₀∥₊ :=
begin
have := edist_approx_on_le hf h₀ x n,
rw edist_comm y₀ at this,
simp only [edist_nndist, nndist_eq_nnnorm] at this,
exact_mod_cast this
end
lemma norm_approx_on_y₀_le [opens_measurable_space E] {f : β → E} (hf : measurable f)
{s : set E} {y₀ : E} (h₀ : y₀ ∈ s) [separable_space s] (x : β) (n : ℕ) :
∥approx_on f hf s y₀ h₀ n x - y₀∥ ≤ ∥f x - y₀∥ + ∥f x - y₀∥ :=
begin
have := edist_approx_on_y0_le hf h₀ x n,
repeat { rw [edist_comm y₀, edist_eq_coe_nnnorm_sub] at this },
exact_mod_cast this,
end
lemma norm_approx_on_zero_le [opens_measurable_space E] {f : β → E} (hf : measurable f)
{s : set E} (h₀ : (0 : E) ∈ s) [separable_space s] (x : β) (n : ℕ) :
∥approx_on f hf s 0 h₀ n x∥ ≤ ∥f x∥ + ∥f x∥ :=
begin
have := edist_approx_on_y0_le hf h₀ x n,
simp [edist_comm (0 : E), edist_eq_coe_nnnorm] at this,
exact_mod_cast this,
end
lemma tendsto_approx_on_Lp_snorm [opens_measurable_space E]
{f : β → E} (hf : measurable f) {s : set E} {y₀ : E} (h₀ : y₀ ∈ s) [separable_space s]
(hp_ne_top : p ≠ ∞) {μ : measure β} (hμ : ∀ᵐ x ∂μ, f x ∈ closure s)
(hi : snorm (λ x, f x - y₀) p μ < ∞) :
tendsto (λ n, snorm (approx_on f hf s y₀ h₀ n - f) p μ) at_top (𝓝 0) :=
begin
by_cases hp_zero : p = 0,
{ simpa only [hp_zero, snorm_exponent_zero] using tendsto_const_nhds },
have hp : 0 < p.to_real := to_real_pos_iff.mpr ⟨bot_lt_iff_ne_bot.mpr hp_zero, hp_ne_top⟩,
suffices : tendsto (λ n, ∫⁻ x, ∥approx_on f hf s y₀ h₀ n x - f x∥₊ ^ p.to_real ∂μ) at_top (𝓝 0),
{ simp only [snorm_eq_lintegral_rpow_nnnorm hp_zero hp_ne_top],
convert continuous_rpow_const.continuous_at.tendsto.comp this;
simp [_root_.inv_pos.mpr hp] },
-- We simply check the conditions of the Dominated Convergence Theorem:
-- (1) The function "`p`-th power of distance between `f` and the approximation" is measurable
have hF_meas : ∀ n, measurable (λ x, (∥approx_on f hf s y₀ h₀ n x - f x∥₊ : ℝ≥0∞) ^ p.to_real),
{ simpa only [← edist_eq_coe_nnnorm_sub] using
λ n, (approx_on f hf s y₀ h₀ n).measurable_bind (λ y x, (edist y (f x)) ^ p.to_real)
(λ y, (measurable_edist_right.comp hf).pow_const p.to_real) },
-- (2) The functions "`p`-th power of distance between `f` and the approximation" are uniformly
-- bounded, at any given point, by `λ x, ∥f x - y₀∥ ^ p.to_real`
have h_bound : ∀ n, (λ x, (∥approx_on f hf s y₀ h₀ n x - f x∥₊ : ℝ≥0∞) ^ p.to_real)
≤ᵐ[μ] (λ x, ∥f x - y₀∥₊ ^ p.to_real),
{ exact λ n, eventually_of_forall
(λ x, rpow_le_rpow (coe_mono (nnnorm_approx_on_le hf h₀ x n)) to_real_nonneg) },
-- (3) The bounding function `λ x, ∥f x - y₀∥ ^ p.to_real` has finite integral
have h_fin : ∫⁻ (a : β), ∥f a - y₀∥₊ ^ p.to_real ∂μ ≠ ⊤,
from (lintegral_rpow_nnnorm_lt_top_of_snorm_lt_top hp_zero hp_ne_top hi).ne,
-- (4) The functions "`p`-th power of distance between `f` and the approximation" tend pointwise
-- to zero
have h_lim : ∀ᵐ (a : β) ∂μ,
tendsto (λ n, (∥approx_on f hf s y₀ h₀ n a - f a∥₊ : ℝ≥0∞) ^ p.to_real) at_top (𝓝 0),
{ filter_upwards [hμ],
intros a ha,
have : tendsto (λ n, (approx_on f hf s y₀ h₀ n) a - f a) at_top (𝓝 (f a - f a)),
{ exact (tendsto_approx_on hf h₀ ha).sub tendsto_const_nhds },
convert continuous_rpow_const.continuous_at.tendsto.comp (tendsto_coe.mpr this.nnnorm),
simp [zero_rpow_of_pos hp] },
-- Then we apply the Dominated Convergence Theorem
simpa using tendsto_lintegral_of_dominated_convergence _ hF_meas h_bound h_fin h_lim,
end
lemma mem_ℒp_approx_on [borel_space E]
{f : β → E} {μ : measure β} (fmeas : measurable f) (hf : mem_ℒp f p μ) {s : set E} {y₀ : E}
(h₀ : y₀ ∈ s) [separable_space s] (hi₀ : mem_ℒp (λ x, y₀) p μ) (n : ℕ) :
mem_ℒp (approx_on f fmeas s y₀ h₀ n) p μ :=
begin
refine ⟨(approx_on f fmeas s y₀ h₀ n).ae_measurable, _⟩,
suffices : snorm (λ x, approx_on f fmeas s y₀ h₀ n x - y₀) p μ < ⊤,
{ have : mem_ℒp (λ x, approx_on f fmeas s y₀ h₀ n x - y₀) p μ :=
⟨(approx_on f fmeas s y₀ h₀ n - const β y₀).ae_measurable, this⟩,
convert snorm_add_lt_top this hi₀,
ext x,
simp },
-- We don't necessarily have `mem_ℒp (λ x, f x - y₀) p μ`, because the `ae_measurable` part
-- requires `ae_measurable.add`, which requires second-countability
have hf' : mem_ℒp (λ x, ∥f x - y₀∥) p μ,
{ have h_meas : measurable (λ x, ∥f x - y₀∥),
{ simp only [← dist_eq_norm],
exact (continuous_id.dist continuous_const).measurable.comp fmeas },
refine ⟨h_meas.ae_measurable, _⟩,
rw snorm_norm,
convert snorm_add_lt_top hf hi₀.neg,
ext x,
simp [sub_eq_add_neg] },
have : ∀ᵐ x ∂μ, ∥approx_on f fmeas s y₀ h₀ n x - y₀∥ ≤ ∥(∥f x - y₀∥ + ∥f x - y₀∥)∥,
{ refine eventually_of_forall _,
intros x,
convert norm_approx_on_y₀_le fmeas h₀ x n,
rw [real.norm_eq_abs, abs_of_nonneg],
exact add_nonneg (norm_nonneg _) (norm_nonneg _) },
calc snorm (λ x, approx_on f fmeas s y₀ h₀ n x - y₀) p μ
≤ snorm (λ x, ∥f x - y₀∥ + ∥f x - y₀∥) p μ : snorm_mono_ae this
... < ⊤ : snorm_add_lt_top hf' hf',
end
lemma tendsto_approx_on_univ_Lp_snorm [opens_measurable_space E] [second_countable_topology E]
{f : β → E} (hp_ne_top : p ≠ ∞) {μ : measure β} (fmeas : measurable f) (hf : snorm f p μ < ∞) :
tendsto (λ n, snorm (approx_on f fmeas univ 0 trivial n - f) p μ) at_top (𝓝 0) :=
tendsto_approx_on_Lp_snorm fmeas trivial hp_ne_top (by simp) (by simpa using hf)
lemma mem_ℒp_approx_on_univ [borel_space E] [second_countable_topology E]
{f : β → E} {μ : measure β} (fmeas : measurable f) (hf : mem_ℒp f p μ) (n : ℕ) :
mem_ℒp (approx_on f fmeas univ 0 trivial n) p μ :=
mem_ℒp_approx_on fmeas hf (mem_univ _) zero_mem_ℒp n
lemma tendsto_approx_on_univ_Lp [borel_space E] [second_countable_topology E]
{f : β → E} [hp : fact (1 ≤ p)] (hp_ne_top : p ≠ ∞) {μ : measure β} (fmeas : measurable f)
(hf : mem_ℒp f p μ) :
tendsto (λ n, (mem_ℒp_approx_on_univ fmeas hf n).to_Lp (approx_on f fmeas univ 0 trivial n))
at_top (𝓝 (hf.to_Lp f)) :=
by simp [Lp.tendsto_Lp_iff_tendsto_ℒp'', tendsto_approx_on_univ_Lp_snorm hp_ne_top fmeas hf.2]
end Lp
/-! ### L1 approximation by simple functions -/
section integrable
variables [measurable_space β]
variables [measurable_space E] [normed_group E]
lemma tendsto_approx_on_L1_nnnorm [opens_measurable_space E]
{f : β → E} (hf : measurable f) {s : set E} {y₀ : E} (h₀ : y₀ ∈ s) [separable_space s]
{μ : measure β} (hμ : ∀ᵐ x ∂μ, f x ∈ closure s) (hi : has_finite_integral (λ x, f x - y₀) μ) :
tendsto (λ n, ∫⁻ x, ∥approx_on f hf s y₀ h₀ n x - f x∥₊ ∂μ) at_top (𝓝 0) :=
by simpa [snorm_one_eq_lintegral_nnnorm] using tendsto_approx_on_Lp_snorm hf h₀ one_ne_top hμ
(by simpa [snorm_one_eq_lintegral_nnnorm] using hi)
lemma integrable_approx_on [borel_space E]
{f : β → E} {μ : measure β} (fmeas : measurable f) (hf : integrable f μ)
{s : set E} {y₀ : E} (h₀ : y₀ ∈ s)
[separable_space s] (hi₀ : integrable (λ x, y₀) μ) (n : ℕ) :
integrable (approx_on f fmeas s y₀ h₀ n) μ :=
begin
rw ← mem_ℒp_one_iff_integrable at hf hi₀ ⊢,
exact mem_ℒp_approx_on fmeas hf h₀ hi₀ n,
end
lemma tendsto_approx_on_univ_L1_nnnorm [opens_measurable_space E] [second_countable_topology E]
{f : β → E} {μ : measure β} (fmeas : measurable f) (hf : integrable f μ) :
tendsto (λ n, ∫⁻ x, ∥approx_on f fmeas univ 0 trivial n x - f x∥₊ ∂μ) at_top (𝓝 0) :=
tendsto_approx_on_L1_nnnorm fmeas trivial (by simp) (by simpa using hf.2)
lemma integrable_approx_on_univ [borel_space E] [second_countable_topology E]
{f : β → E} {μ : measure β} (fmeas : measurable f) (hf : integrable f μ) (n : ℕ) :
integrable (approx_on f fmeas univ 0 trivial n) μ :=
integrable_approx_on fmeas hf _ (integrable_zero _ _ _) n
end integrable
section simple_func_properties
variables [measurable_space α]
variables [normed_group E] [measurable_space E] [normed_group F]
variables {μ : measure α} {p : ℝ≥0∞}
/-!
### Properties of simple functions in `Lp` spaces
A simple function `f : α →ₛ E` into a normed group `E` verifies, for a measure `μ`:
- `mem_ℒp f 0 μ` and `mem_ℒp f ∞ μ`, since `f` is a.e.-measurable and bounded,
- for `0 < p < ∞`,
`mem_ℒp f p μ ↔ integrable f μ ↔ f.fin_meas_supp μ ↔ ∀ y ≠ 0, μ (f ⁻¹' {y}) < ∞`.
-/
lemma exists_forall_norm_le (f : α →ₛ F) : ∃ C, ∀ x, ∥f x∥ ≤ C :=
exists_forall_le (f.map (λ x, ∥x∥))
lemma mem_ℒp_zero (f : α →ₛ E) (μ : measure α) : mem_ℒp f 0 μ :=
mem_ℒp_zero_iff_ae_measurable.mpr f.ae_measurable
lemma mem_ℒp_top (f : α →ₛ E) (μ : measure α) : mem_ℒp f ∞ μ :=
let ⟨C, hfC⟩ := f.exists_forall_norm_le in
mem_ℒp_top_of_bound f.ae_measurable C $ eventually_of_forall hfC
protected lemma snorm'_eq {p : ℝ} (f : α →ₛ F) (μ : measure α) :
snorm' f p μ = (∑ y in f.range, (nnnorm y : ℝ≥0∞) ^ p * μ (f ⁻¹' {y})) ^ (1/p) :=
have h_map : (λ a, (nnnorm (f a) : ℝ≥0∞) ^ p) = f.map (λ a : F, (nnnorm a : ℝ≥0∞) ^ p), by simp,
by rw [snorm', h_map, lintegral_eq_lintegral, map_lintegral]
lemma measure_preimage_lt_top_of_mem_ℒp (hp_pos : 0 < p) (hp_ne_top : p ≠ ∞) (f : α →ₛ E)
(hf : mem_ℒp f p μ) (y : E) (hy_ne : y ≠ 0) :
μ (f ⁻¹' {y}) < ∞ :=
begin
have hp_pos_real : 0 < p.to_real, from ennreal.to_real_pos_iff.mpr ⟨hp_pos, hp_ne_top⟩,
have hf_snorm := mem_ℒp.snorm_lt_top hf,
rw [snorm_eq_snorm' hp_pos.ne.symm hp_ne_top, f.snorm'_eq,
← @ennreal.lt_rpow_one_div_iff _ _ (1 / p.to_real) (by simp [hp_pos_real]),
@ennreal.top_rpow_of_pos (1 / (1 / p.to_real)) (by simp [hp_pos_real]),
ennreal.sum_lt_top_iff] at hf_snorm,
by_cases hyf : y ∈ f.range,
swap,
{ suffices h_empty : f ⁻¹' {y} = ∅,
by { rw [h_empty, measure_empty], exact ennreal.coe_lt_top, },
ext1 x,
rw [set.mem_preimage, set.mem_singleton_iff, mem_empty_eq, iff_false],
refine λ hxy, hyf _,
rw [mem_range, set.mem_range],
exact ⟨x, hxy⟩, },
specialize hf_snorm y hyf,
rw ennreal.mul_lt_top_iff at hf_snorm,
cases hf_snorm,
{ exact hf_snorm.2, },
cases hf_snorm,
{ refine absurd _ hy_ne,
simpa [hp_pos_real] using hf_snorm, },
{ simp [hf_snorm], },
end
lemma mem_ℒp_of_finite_measure_preimage (p : ℝ≥0∞) {f : α →ₛ E} (hf : ∀ y ≠ 0, μ (f ⁻¹' {y}) < ∞) :
mem_ℒp f p μ :=
begin
by_cases hp0 : p = 0,
{ rw [hp0, mem_ℒp_zero_iff_ae_measurable], exact f.ae_measurable, },
by_cases hp_top : p = ∞,
{ rw hp_top, exact mem_ℒp_top f μ, },
refine ⟨f.ae_measurable, _⟩,
rw [snorm_eq_snorm' hp0 hp_top, f.snorm'_eq],
refine ennreal.rpow_lt_top_of_nonneg (by simp) (ennreal.sum_lt_top_iff.mpr (λ y hy, _)).ne,
by_cases hy0 : y = 0,
{ simp [hy0, ennreal.to_real_pos_iff.mpr ⟨lt_of_le_of_ne (zero_le _) (ne.symm hp0), hp_top⟩], },
{ refine ennreal.mul_lt_top _ (hf y hy0).ne,
exact (ennreal.rpow_lt_top_of_nonneg ennreal.to_real_nonneg ennreal.coe_ne_top).ne },
end
lemma mem_ℒp_iff {f : α →ₛ E} (hp_pos : 0 < p) (hp_ne_top : p ≠ ∞) :
mem_ℒp f p μ ↔ ∀ y ≠ 0, μ (f ⁻¹' {y}) < ∞ :=
⟨λ h, measure_preimage_lt_top_of_mem_ℒp hp_pos hp_ne_top f h,
λ h, mem_ℒp_of_finite_measure_preimage p h⟩
lemma integrable_iff {f : α →ₛ E} : integrable f μ ↔ ∀ y ≠ 0, μ (f ⁻¹' {y}) < ∞ :=
mem_ℒp_one_iff_integrable.symm.trans $ mem_ℒp_iff ennreal.zero_lt_one ennreal.coe_ne_top
lemma mem_ℒp_iff_integrable {f : α →ₛ E} (hp_pos : 0 < p) (hp_ne_top : p ≠ ∞) :
mem_ℒp f p μ ↔ integrable f μ :=
(mem_ℒp_iff hp_pos hp_ne_top).trans integrable_iff.symm
lemma mem_ℒp_iff_fin_meas_supp {f : α →ₛ E} (hp_pos : 0 < p) (hp_ne_top : p ≠ ∞) :
mem_ℒp f p μ ↔ f.fin_meas_supp μ :=
(mem_ℒp_iff hp_pos hp_ne_top).trans fin_meas_supp_iff.symm
lemma integrable_iff_fin_meas_supp {f : α →ₛ E} : integrable f μ ↔ f.fin_meas_supp μ :=
integrable_iff.trans fin_meas_supp_iff.symm
lemma fin_meas_supp.integrable {f : α →ₛ E} (h : f.fin_meas_supp μ) : integrable f μ :=
integrable_iff_fin_meas_supp.2 h
lemma integrable_pair [measurable_space F] {f : α →ₛ E} {g : α →ₛ F} :
integrable f μ → integrable g μ → integrable (pair f g) μ :=
by simpa only [integrable_iff_fin_meas_supp] using fin_meas_supp.pair
lemma mem_ℒp_of_is_finite_measure (f : α →ₛ E) (p : ℝ≥0∞) (μ : measure α) [is_finite_measure μ] :
mem_ℒp f p μ :=
let ⟨C, hfC⟩ := f.exists_forall_norm_le in
mem_ℒp.of_bound f.ae_measurable C $ eventually_of_forall hfC
lemma integrable_of_is_finite_measure [is_finite_measure μ] (f : α →ₛ E) : integrable f μ :=
mem_ℒp_one_iff_integrable.mp (f.mem_ℒp_of_is_finite_measure 1 μ)
lemma measure_preimage_lt_top_of_integrable (f : α →ₛ E) (hf : integrable f μ) {x : E}
(hx : x ≠ 0) :
μ (f ⁻¹' {x}) < ∞ :=
integrable_iff.mp hf x hx
lemma measure_support_lt_top [has_zero β] (f : α →ₛ β) (hf : ∀ y ≠ 0, μ (f ⁻¹' {y}) < ∞) :
μ (support f) < ∞ :=
begin
rw support_eq,
refine (measure_bUnion_finset_le _ _).trans_lt (ennreal.sum_lt_top_iff.mpr (λ y hy, _)),
rw finset.mem_filter at hy,
exact hf y hy.2,
end
lemma measure_support_lt_top_of_mem_ℒp (f : α →ₛ E) (hf : mem_ℒp f p μ) (hp_ne_zero : p ≠ 0)
(hp_ne_top : p ≠ ∞) :
μ (support f) < ∞ :=
f.measure_support_lt_top ((mem_ℒp_iff (pos_iff_ne_zero.mpr hp_ne_zero) hp_ne_top).mp hf)
lemma measure_support_lt_top_of_integrable (f : α →ₛ E) (hf : integrable f μ) :
μ (support f) < ∞ :=
f.measure_support_lt_top (integrable_iff.mp hf)
lemma measure_lt_top_of_mem_ℒp_indicator (hp_pos : 0 < p) (hp_ne_top : p ≠ ∞) {c : E} (hc : c ≠ 0)
{s : set α} (hs : measurable_set s)
(hcs : mem_ℒp ((const α c).piecewise s hs (const α 0)) p μ) :
μ s < ⊤ :=
begin
have : function.support (const α c) = set.univ := function.support_const hc,
simpa only [mem_ℒp_iff_fin_meas_supp hp_pos hp_ne_top, fin_meas_supp_iff_support,
support_indicator, set.inter_univ, this] using hcs
end
end simple_func_properties
end simple_func
/-! Construction of the space of `Lp` simple functions, and its dense embedding into `Lp`. -/
namespace Lp
open ae_eq_fun
variables
[measurable_space α]
[normed_group E] [second_countable_topology E] [measurable_space E] [borel_space E]
[normed_group F] [second_countable_topology F] [measurable_space F] [borel_space F]
(p : ℝ≥0∞) (μ : measure α)
variables (E)
/-- `Lp.simple_func` is a subspace of Lp consisting of equivalence classes of an integrable simple
function. -/
def simple_func : add_subgroup (Lp E p μ) :=
{ carrier := {f : Lp E p μ | ∃ (s : α →ₛ E), (ae_eq_fun.mk s s.ae_measurable : α →ₘ[μ] E) = f},
zero_mem' := ⟨0, rfl⟩,
add_mem' := λ f g ⟨s, hs⟩ ⟨t, ht⟩, ⟨s + t,
by simp only [←hs, ←ht, mk_add_mk, add_subgroup.coe_add, mk_eq_mk, simple_func.coe_add]⟩,
neg_mem' := λ f ⟨s, hs⟩, ⟨-s,
by simp only [←hs, neg_mk, simple_func.coe_neg, mk_eq_mk, add_subgroup.coe_neg]⟩ }
variables {E p μ}
namespace simple_func
section instances
/-! Simple functions in Lp space form a `normed_space`. -/
@[norm_cast] lemma coe_coe (f : Lp.simple_func E p μ) : ⇑(f : Lp E p μ) = f := rfl
protected lemma eq' {f g : Lp.simple_func E p μ} : (f : α →ₘ[μ] E) = (g : α →ₘ[μ] E) → f = g :=
subtype.eq ∘ subtype.eq
/-! Implementation note: If `Lp.simple_func E p μ` were defined as a `𝕜`-submodule of `Lp E p μ`,
then the next few lemmas, putting a normed `𝕜`-group structure on `Lp.simple_func E p μ`, would be
unnecessary. But instead, `Lp.simple_func E p μ` is defined as an `add_subgroup` of `Lp E p μ`,
which does not permit this (but has the advantage of working when `E` itself is a normed group,
i.e. has no scalar action). -/
variables [normed_field 𝕜] [normed_space 𝕜 E] [measurable_space 𝕜] [opens_measurable_space 𝕜]
/-- If `E` is a normed space, `Lp.simple_func E p μ` is a `has_scalar`. Not declared as an
instance as it is (as of writing) used only in the construction of the Bochner integral. -/
protected def has_scalar : has_scalar 𝕜 (Lp.simple_func E p μ) := ⟨λk f, ⟨k • f,
begin
rcases f with ⟨f, ⟨s, hs⟩⟩,
use k • s,
apply eq.trans (smul_mk k s s.ae_measurable).symm _,
rw hs,
refl,
end ⟩⟩
local attribute [instance] simple_func.has_scalar
@[simp, norm_cast] lemma coe_smul (c : 𝕜) (f : Lp.simple_func E p μ) :
((c • f : Lp.simple_func E p μ) : Lp E p μ) = c • (f : Lp E p μ) := rfl
/-- If `E` is a normed space, `Lp.simple_func E p μ` is a module. Not declared as an
instance as it is (as of writing) used only in the construction of the Bochner integral. -/
protected def module : module 𝕜 (Lp.simple_func E p μ) :=
{ one_smul := λf, by { ext1, exact one_smul _ _ },
mul_smul := λx y f, by { ext1, exact mul_smul _ _ _ },
smul_add := λx f g, by { ext1, exact smul_add _ _ _ },
smul_zero := λx, by { ext1, exact smul_zero _ },
add_smul := λx y f, by { ext1, exact add_smul _ _ _ },
zero_smul := λf, by { ext1, exact zero_smul _ _ } }
local attribute [instance] simple_func.module
/-- If `E` is a normed space, `Lp.simple_func E p μ` is a normed space. Not declared as an
instance as it is (as of writing) used only in the construction of the Bochner integral. -/
protected def normed_space [fact (1 ≤ p)] : normed_space 𝕜 (Lp.simple_func E p μ) :=
⟨ λc f, by { rw [coe_norm_subgroup, coe_norm_subgroup, coe_smul, norm_smul] } ⟩
end instances
local attribute [instance] simple_func.module simple_func.normed_space
section to_Lp
/-- Construct the equivalence class `[f]` of a simple function `f` satisfying `mem_ℒp`. -/
@[reducible] def to_Lp (f : α →ₛ E) (hf : mem_ℒp f p μ) : (Lp.simple_func E p μ) :=
⟨hf.to_Lp f, ⟨f, rfl⟩⟩
lemma to_Lp_eq_to_Lp (f : α →ₛ E) (hf : mem_ℒp f p μ) :
(to_Lp f hf : Lp E p μ) = hf.to_Lp f := rfl
lemma to_Lp_eq_mk (f : α →ₛ E) (hf : mem_ℒp f p μ) :
(to_Lp f hf : α →ₘ[μ] E) = ae_eq_fun.mk f f.ae_measurable := rfl
lemma to_Lp_zero : to_Lp (0 : α →ₛ E) zero_mem_ℒp = (0 : Lp.simple_func E p μ) := rfl
lemma to_Lp_add (f g : α →ₛ E) (hf : mem_ℒp f p μ) (hg : mem_ℒp g p μ) :
to_Lp (f + g) (hf.add hg) = to_Lp f hf + to_Lp g hg := rfl
lemma to_Lp_neg (f : α →ₛ E) (hf : mem_ℒp f p μ) :
to_Lp (-f) hf.neg = -to_Lp f hf := rfl
lemma to_Lp_sub (f g : α →ₛ E) (hf : mem_ℒp f p μ) (hg : mem_ℒp g p μ) :
to_Lp (f - g) (hf.sub hg) = to_Lp f hf - to_Lp g hg :=
by { simp only [sub_eq_add_neg, ← to_Lp_neg, ← to_Lp_add], refl }
variables [normed_field 𝕜] [normed_space 𝕜 E] [measurable_space 𝕜] [opens_measurable_space 𝕜]
lemma to_Lp_smul (f : α →ₛ E) (hf : mem_ℒp f p μ) (c : 𝕜) :
to_Lp (c • f) (hf.const_smul c) = c • to_Lp f hf := rfl
lemma norm_to_Lp [fact (1 ≤ p)] (f : α →ₛ E) (hf : mem_ℒp f p μ) :
∥to_Lp f hf∥ = ennreal.to_real (snorm f p μ) :=
norm_to_Lp f hf
end to_Lp
section to_simple_func
/-- Find a representative of a `Lp.simple_func`. -/
def to_simple_func (f : Lp.simple_func E p μ) : α →ₛ E := classical.some f.2
/-- `(to_simple_func f)` is measurable. -/
@[measurability]
protected lemma measurable (f : Lp.simple_func E p μ) : measurable (to_simple_func f) :=
(to_simple_func f).measurable
@[measurability]
protected lemma ae_measurable (f : Lp.simple_func E p μ) : ae_measurable (to_simple_func f) μ :=
(simple_func.measurable f).ae_measurable
lemma to_simple_func_eq_to_fun (f : Lp.simple_func E p μ) : to_simple_func f =ᵐ[μ] f :=
show ⇑(to_simple_func f) =ᵐ[μ] ⇑(f : α →ₘ[μ] E), by
begin
convert (ae_eq_fun.coe_fn_mk (to_simple_func f) (simple_func.ae_measurable f)).symm using 2,
exact (classical.some_spec f.2).symm,
end
/-- `to_simple_func f` satisfies the predicate `mem_ℒp`. -/
protected lemma mem_ℒp (f : Lp.simple_func E p μ) : mem_ℒp (to_simple_func f) p μ :=
mem_ℒp.ae_eq (to_simple_func_eq_to_fun f).symm $ mem_Lp_iff_mem_ℒp.mp (f : Lp E p μ).2
lemma to_Lp_to_simple_func (f : Lp.simple_func E p μ) :
to_Lp (to_simple_func f) (simple_func.mem_ℒp f) = f :=
simple_func.eq' (classical.some_spec f.2)
lemma to_simple_func_to_Lp (f : α →ₛ E) (hfi : mem_ℒp f p μ) :
to_simple_func (to_Lp f hfi) =ᵐ[μ] f :=
by { rw ← mk_eq_mk, exact classical.some_spec (to_Lp f hfi).2 }
variables (E μ)
lemma zero_to_simple_func : to_simple_func (0 : Lp.simple_func E p μ) =ᵐ[μ] 0 :=
begin
filter_upwards [to_simple_func_eq_to_fun (0 : Lp.simple_func E p μ), Lp.coe_fn_zero E 1 μ],
assume a h₁ h₂,
rwa h₁,
end
variables {E μ}
lemma add_to_simple_func (f g : Lp.simple_func E p μ) :
to_simple_func (f + g) =ᵐ[μ] to_simple_func f + to_simple_func g :=
begin
filter_upwards [to_simple_func_eq_to_fun (f + g), to_simple_func_eq_to_fun f,
to_simple_func_eq_to_fun g, Lp.coe_fn_add (f : Lp E p μ) g],
assume a,
simp only [← coe_coe, add_subgroup.coe_add, pi.add_apply],
iterate 4 { assume h, rw h }
end
lemma neg_to_simple_func (f : Lp.simple_func E p μ) :
to_simple_func (-f) =ᵐ[μ] - to_simple_func f :=
begin
filter_upwards [to_simple_func_eq_to_fun (-f), to_simple_func_eq_to_fun f,
Lp.coe_fn_neg (f : Lp E p μ)],
assume a,
simp only [pi.neg_apply, add_subgroup.coe_neg, ← coe_coe],
repeat { assume h, rw h }
end
lemma sub_to_simple_func (f g : Lp.simple_func E p μ) :
to_simple_func (f - g) =ᵐ[μ] to_simple_func f - to_simple_func g :=
begin
filter_upwards [to_simple_func_eq_to_fun (f - g), to_simple_func_eq_to_fun f,
to_simple_func_eq_to_fun g, Lp.coe_fn_sub (f : Lp E p μ) g],
assume a,
simp only [add_subgroup.coe_sub, pi.sub_apply, ← coe_coe],
repeat { assume h, rw h }
end
variables [normed_field 𝕜] [normed_space 𝕜 E] [measurable_space 𝕜] [opens_measurable_space 𝕜]
lemma smul_to_simple_func (k : 𝕜) (f : Lp.simple_func E p μ) :
to_simple_func (k • f) =ᵐ[μ] k • to_simple_func f :=
begin
filter_upwards [to_simple_func_eq_to_fun (k • f), to_simple_func_eq_to_fun f,
Lp.coe_fn_smul k (f : Lp E p μ)],
assume a,
simp only [pi.smul_apply, coe_smul, ← coe_coe],
repeat { assume h, rw h }
end
lemma norm_to_simple_func [fact (1 ≤ p)] (f : Lp.simple_func E p μ) :
∥f∥ = ennreal.to_real (snorm (to_simple_func f) p μ) :=
by simpa [to_Lp_to_simple_func] using norm_to_Lp (to_simple_func f) (simple_func.mem_ℒp f)
end to_simple_func
section induction
variables (p)
/-- The characteristic function of a finite-measure measurable set `s`, as an `Lp` simple function.
-/
def indicator_const {s : set α} (hs : measurable_set s) (hμs : μ s ≠ ∞) (c : E) :
Lp.simple_func E p μ :=
to_Lp ((simple_func.const _ c).piecewise s hs (simple_func.const _ 0))
(mem_ℒp_indicator_const p hs c (or.inr hμs))
variables {p}
@[simp] lemma coe_indicator_const {s : set α} (hs : measurable_set s) (hμs : μ s ≠ ∞) (c : E) :
(↑(indicator_const p hs hμs c) : Lp E p μ) = indicator_const_Lp p hs hμs c :=
rfl
lemma to_simple_func_indicator_const {s : set α} (hs : measurable_set s) (hμs : μ s ≠ ∞) (c : E) :
to_simple_func (indicator_const p hs hμs c)
=ᵐ[μ] (simple_func.const _ c).piecewise s hs (simple_func.const _ 0) :=
Lp.simple_func.to_simple_func_to_Lp _ _
/-- To prove something for an arbitrary `Lp` simple function, with `0 < p < ∞`, it suffices to show
that the property holds for (multiples of) characteristic functions of finite-measure measurable
sets and is closed under addition (of functions with disjoint support). -/
@[elab_as_eliminator]
protected lemma induction (hp_pos : 0 < p) (hp_ne_top : p ≠ ∞) {P : Lp.simple_func E p μ → Prop}
(h_ind : ∀ (c : E) {s : set α} (hs : measurable_set s) (hμs : μ s < ∞),
P (Lp.simple_func.indicator_const p hs hμs.ne c))
(h_add : ∀ ⦃f g : α →ₛ E⦄, ∀ hf : mem_ℒp f p μ, ∀ hg : mem_ℒp g p μ,
disjoint (support f) (support g) → P (Lp.simple_func.to_Lp f hf)
→ P (Lp.simple_func.to_Lp g hg) → P (Lp.simple_func.to_Lp f hf + Lp.simple_func.to_Lp g hg))
(f : Lp.simple_func E p μ) : P f :=
begin
suffices : ∀ f : α →ₛ E, ∀ hf : mem_ℒp f p μ, P (to_Lp f hf),
{ rw ← to_Lp_to_simple_func f,
apply this }, clear f,
refine simple_func.induction _ _,
{ intros c s hs hf,
by_cases hc : c = 0,
{ convert h_ind 0 measurable_set.empty (by simp) using 1,
ext1,
simp [hc] },
exact h_ind c hs (simple_func.measure_lt_top_of_mem_ℒp_indicator hp_pos hp_ne_top hc hs hf) },
{ intros f g hfg hf hg hfg',
obtain ⟨hf', hg'⟩ : mem_ℒp f p μ ∧ mem_ℒp g p μ,
{ exact (mem_ℒp_add_of_disjoint hfg f.measurable g.measurable).mp hfg' },
exact h_add hf' hg' hfg (hf hf') (hg hg') },
end
end induction
section coe_to_Lp
variables [fact (1 ≤ p)]
protected lemma uniform_continuous :
uniform_continuous (coe : (Lp.simple_func E p μ) → (Lp E p μ)) :=
uniform_continuous_comap
protected lemma uniform_embedding :
uniform_embedding (coe : (Lp.simple_func E p μ) → (Lp E p μ)) :=
uniform_embedding_comap subtype.val_injective
protected lemma uniform_inducing : uniform_inducing (coe : (Lp.simple_func E p μ) → (Lp E p μ)) :=
simple_func.uniform_embedding.to_uniform_inducing
protected lemma dense_embedding (hp_ne_top : p ≠ ∞) :
dense_embedding (coe : (Lp.simple_func E p μ) → (Lp E p μ)) :=
begin
apply simple_func.uniform_embedding.dense_embedding,
assume f,
rw mem_closure_iff_seq_limit,
have hfi' : mem_ℒp f p μ := Lp.mem_ℒp f,
refine ⟨λ n, ↑(to_Lp (simple_func.approx_on f (Lp.measurable f) univ 0 trivial n)
(simple_func.mem_ℒp_approx_on_univ (Lp.measurable f) hfi' n)), λ n, mem_range_self _, _⟩,
convert simple_func.tendsto_approx_on_univ_Lp hp_ne_top (Lp.measurable f) hfi',
rw to_Lp_coe_fn f (Lp.mem_ℒp f)
end
protected lemma dense_inducing (hp_ne_top : p ≠ ∞) :
dense_inducing (coe : (Lp.simple_func E p μ) → (Lp E p μ)) :=
(simple_func.dense_embedding hp_ne_top).to_dense_inducing
protected lemma dense_range (hp_ne_top : p ≠ ∞) :
dense_range (coe : (Lp.simple_func E p μ) → (Lp E p μ)) :=
(simple_func.dense_inducing hp_ne_top).dense
variables [normed_field 𝕜] [normed_space 𝕜 E] [measurable_space 𝕜] [opens_measurable_space 𝕜]
variables (α E 𝕜)
/-- The embedding of Lp simple functions into Lp functions, as a continuous linear map. -/
def coe_to_Lp : (Lp.simple_func E p μ) →L[𝕜] (Lp E p μ) :=
{ map_smul' := λk f, rfl,
cont := Lp.simple_func.uniform_continuous.continuous,
.. add_subgroup.subtype (Lp.simple_func E p μ) }
variables {α E 𝕜}
end coe_to_Lp
end simple_func
end Lp
variables [measurable_space α] [normed_group E] [measurable_space E] [borel_space E]
[second_countable_topology E] {f : α → E} {p : ℝ≥0∞} {μ : measure α}
/-- To prove something for an arbitrary `Lp` function in a second countable Borel normed group, it
suffices to show that
* the property holds for (multiples of) characteristic functions;
* is closed under addition;
* the set of functions in `Lp` for which the property holds is closed.
-/
@[elab_as_eliminator]
lemma Lp.induction [_i : fact (1 ≤ p)] (hp_ne_top : p ≠ ∞) (P : Lp E p μ → Prop)
(h_ind : ∀ (c : E) {s : set α} (hs : measurable_set s) (hμs : μ s < ∞),
P (Lp.simple_func.indicator_const p hs hμs.ne c))
(h_add : ∀ ⦃f g⦄, ∀ hf : mem_ℒp f p μ, ∀ hg : mem_ℒp g p μ, disjoint (support f) (support g) →
P (hf.to_Lp f) → P (hg.to_Lp g) → P ((hf.to_Lp f) + (hg.to_Lp g)))
(h_closed : is_closed {f : Lp E p μ | P f}) :
∀ f : Lp E p μ, P f :=
begin
refine λ f, (Lp.simple_func.dense_range hp_ne_top).induction_on f h_closed _,
refine Lp.simple_func.induction (lt_of_lt_of_le ennreal.zero_lt_one _i.elim) hp_ne_top _ _,
{ exact λ c s, h_ind c },
{ exact λ f g hf hg, h_add hf hg },
end
/-- To prove something for an arbitrary `mem_ℒp` function in a second countable
Borel normed group, it suffices to show that
* the property holds for (multiples of) characteristic functions;
* is closed under addition;
* the set of functions in the `Lᵖ` space for which the property holds is closed.
* the property is closed under the almost-everywhere equal relation.
It is possible to make the hypotheses in the induction steps a bit stronger, and such conditions
can be added once we need them (for example in `h_add` it is only necessary to consider the sum of
a simple function with a multiple of a characteristic function and that the intersection
of their images is a subset of `{0}`).
-/
@[elab_as_eliminator]
lemma mem_ℒp.induction [_i : fact (1 ≤ p)] (hp_ne_top : p ≠ ∞) (P : (α → E) → Prop)
(h_ind : ∀ (c : E) ⦃s⦄, measurable_set s → μ s < ∞ → P (s.indicator (λ _, c)))
(h_add : ∀ ⦃f g : α → E⦄, disjoint (support f) (support g) → mem_ℒp f p μ → mem_ℒp g p μ →
P f → P g → P (f + g))
(h_closed : is_closed {f : Lp E p μ | P f} )
(h_ae : ∀ ⦃f g⦄, f =ᵐ[μ] g → mem_ℒp f p μ → P f → P g) :
∀ ⦃f : α → E⦄ (hf : mem_ℒp f p μ), P f :=
begin
have : ∀ (f : simple_func α E), mem_ℒp f p μ → P f,
{ refine simple_func.induction _ _,
{ intros c s hs h,
by_cases hc : c = 0,
{ subst hc, convert h_ind 0 measurable_set.empty (by simp) using 1, ext, simp [const] },
have hp_pos : 0 < p := lt_of_lt_of_le ennreal.zero_lt_one _i.elim,
exact h_ind c hs (simple_func.measure_lt_top_of_mem_ℒp_indicator hp_pos hp_ne_top hc hs h) },
{ intros f g hfg hf hg int_fg,
rw [simple_func.coe_add, mem_ℒp_add_of_disjoint hfg f.measurable g.measurable] at int_fg,
refine h_add hfg int_fg.1 int_fg.2 (hf int_fg.1) (hg int_fg.2) } },
have : ∀ (f : Lp.simple_func E p μ), P f,
{ intro f,
exact h_ae (Lp.simple_func.to_simple_func_eq_to_fun f) (Lp.simple_func.mem_ℒp f)
(this (Lp.simple_func.to_simple_func f) (Lp.simple_func.mem_ℒp f)) },
have : ∀ (f : Lp E p μ), P f :=
λ f, (Lp.simple_func.dense_range hp_ne_top).induction_on f h_closed this,
exact λ f hf, h_ae hf.coe_fn_to_Lp (Lp.mem_ℒp _) (this (hf.to_Lp f)),
end
section integrable
local attribute [instance] fact_one_le_one_ennreal
notation α ` →₁ₛ[`:25 μ `] ` E := @measure_theory.Lp.simple_func α E _ _ _ _ _ 1 μ
lemma L1.simple_func.to_Lp_one_eq_to_L1 (f : α →ₛ E) (hf : integrable f μ) :
(Lp.simple_func.to_Lp f (mem_ℒp_one_iff_integrable.2 hf) : α →₁[μ] E) = hf.to_L1 f :=
rfl
protected lemma L1.simple_func.integrable (f : α →₁ₛ[μ] E) :
integrable (Lp.simple_func.to_simple_func f) μ :=
by { rw ← mem_ℒp_one_iff_integrable, exact (Lp.simple_func.mem_ℒp f) }
/-- To prove something for an arbitrary integrable function in a second countable
Borel normed group, it suffices to show that
* the property holds for (multiples of) characteristic functions;
* is closed under addition;
* the set of functions in the `L¹` space for which the property holds is closed.
* the property is closed under the almost-everywhere equal relation.
It is possible to make the hypotheses in the induction steps a bit stronger, and such conditions
can be added once we need them (for example in `h_add` it is only necessary to consider the sum of
a simple function with a multiple of a characteristic function and that the intersection
of their images is a subset of `{0}`).
-/
@[elab_as_eliminator]
lemma integrable.induction (P : (α → E) → Prop)
(h_ind : ∀ (c : E) ⦃s⦄, measurable_set s → μ s < ∞ → P (s.indicator (λ _, c)))
(h_add : ∀ ⦃f g : α → E⦄, disjoint (support f) (support g) → integrable f μ → integrable g μ →
P f → P g → P (f + g))
(h_closed : is_closed {f : α →₁[μ] E | P f} )
(h_ae : ∀ ⦃f g⦄, f =ᵐ[μ] g → integrable f μ → P f → P g) :
∀ ⦃f : α → E⦄ (hf : integrable f μ), P f :=
begin
simp only [← mem_ℒp_one_iff_integrable] at *,
exact mem_ℒp.induction one_ne_top P h_ind h_add h_closed h_ae
end
end integrable
end measure_theory
|
{-# OPTIONS --safe #-}
module Cubical.Categories.Abelian.Instances.Terminal where
open import Cubical.Foundations.Prelude
open import Cubical.Categories.Abelian.Base
open import Cubical.Categories.Additive.Instances.Terminal
open import Cubical.Data.Unit
private
variable
ℓ : Level
private
open PreAbCategory
open PreAbCategoryStr
terminalPreAbCategory : PreAbCategory ℓ ℓ
PreAbCategory.additive terminalPreAbCategory = terminalAdditiveCategory
Kernel.k (hasKernels (preAbStr terminalPreAbCategory) f) = tt*
Kernel.ker (hasKernels (preAbStr terminalPreAbCategory) f) = refl
IsKernel.ker⋆f (Kernel.isKer (hasKernels (preAbStr terminalPreAbCategory) f)) = refl
IsKernel.univ (Kernel.isKer (hasKernels (preAbStr terminalPreAbCategory) f)) = λ _ _ _ → (refl , refl) , λ _ → refl
Cokernel.c (hasCokernels (preAbStr terminalPreAbCategory) f) = tt*
Cokernel.coker (hasCokernels (preAbStr terminalPreAbCategory) f) = refl
IsCokernel.f⋆coker (Cokernel.isCoker (hasCokernels (preAbStr terminalPreAbCategory) f)) = refl
IsCokernel.univ (Cokernel.isCoker (hasCokernels (preAbStr terminalPreAbCategory) f)) = λ _ _ _ → (refl , refl) , λ _ → refl
open AbelianCategory
open AbelianCategoryStr
terminalAbelianCategory : AbelianCategory ℓ ℓ
preAb terminalAbelianCategory = terminalPreAbCategory
fst (monNormal (abelianStr terminalAbelianCategory) m _) = tt*
fst (snd (monNormal (abelianStr terminalAbelianCategory) m _)) = m
IsKernel.ker⋆f (snd (snd (monNormal (abelianStr terminalAbelianCategory) m _))) = refl
IsKernel.univ (snd (snd (monNormal (abelianStr terminalAbelianCategory) m _))) = λ _ _ _ → (refl , refl) , (λ _ → refl)
fst (epiNormal (abelianStr terminalAbelianCategory) e _) = tt*
fst (snd (epiNormal (abelianStr terminalAbelianCategory) e _)) = e
IsCokernel.f⋆coker (snd (snd (epiNormal (abelianStr terminalAbelianCategory) e _))) = refl
IsCokernel.univ (snd (snd (epiNormal (abelianStr terminalAbelianCategory) e _))) = λ _ _ _ → (refl , refl) , (λ _ → refl)
|
/*
* Website:
* https://github.com/wo3kie/dojo
*
* Author:
* Lukasz Czerwinski
*
* Compilation:
* g++ --std=c++11 bind.cpp -o bind
*/
#include <boost/bind.hpp>
#include <boost/bind/apply.hpp>
#include <boost/bind/protect.hpp>
#include <algorithm>
#include <iostream>
#include <vector>
/*
* boost::bind apply
*/
int i1( int i ){ return i; }
int i2( int i ){ return 2*i; }
int i3( int i ){ return 3*i; }
void apply()
{
std::vector< int(*)(int) > fs{ { &i1, &i2, &i3 } };
/*
* This does not work
*
* std::for_each(
* fs.begin(),
* fs.end(),
* boost::bind< int >( _1, 5 ) );
*
* /usr/local/include/boost/bind/bind.hpp:243:16: error: type 'boost::arg<1>' does not provide a call operator
* return unwrapper<F>::unwrap(f, 0)(a[base_type::a1_]);
* ^~~~~~~~~~~~~~~~~~~~~~~~~~
*/
std::for_each(
fs.begin(),
fs.end(),
boost::bind( boost::apply< int >(), _1, 5 ) );
}
/*
* boost::bind protect
*/
struct Fi : public std::unary_function< int, int >
{
int operator()( int i ) const {
return i;
}
};
struct Ff
{
template< typename F >
int operator()( F f ) const {
return f( 5 );
}
typedef int result_type;
};
void protect()
{
/*
* This does not work
*
* boost::bind( Ff(), boost::bind( Fi(), _1 ) )( 5 );
*
* bind.cpp:55:16: error: called object type 'int' is not a function or function pointer
* return f( 5 );
* ^
*/
    // protect keeps the inner bind from being evaluated by the outer
    // bind; Ff receives the bind object itself and invokes it with 5
    boost::bind( Ff(), boost::protect( boost::bind( Fi(), _1 ) ) )( 5 );
}
/*
* main
*/
int main()
{
apply();
protect();
}
|
// Copyright (C) Vladimir Prus 2003.
// Distributed under the Boost Software License, Version 1.0. (See
// accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
//
// See http://www.boost.org/libs/graph/vector_property_map.html for
// documentation.
//
#ifndef BOOST_PROPERTY_MAP_VECTOR_PROPERTY_MAP_HPP
#define BOOST_PROPERTY_MAP_VECTOR_PROPERTY_MAP_HPP
#include <boost/property_map/property_map.hpp>
#include <boost/smart_ptr/shared_ptr.hpp>
#include <iterator>
#include <vector>
namespace boost {
template<typename T, typename IndexMap = identity_property_map>
class vector_property_map
: public boost::put_get_helper<
typename std::iterator_traits<
typename std::vector<T>::iterator >::reference,
vector_property_map<T, IndexMap> >
{
public:
typedef typename property_traits<IndexMap>::key_type key_type;
typedef T value_type;
typedef typename std::iterator_traits<
typename std::vector<T>::iterator >::reference reference;
typedef boost::lvalue_property_map_tag category;
vector_property_map(const IndexMap& index = IndexMap())
: store(new std::vector<T>()), index(index)
{}
vector_property_map(unsigned initial_size,
const IndexMap& index = IndexMap())
: store(new std::vector<T>(initial_size)), index(index)
{}
typename std::vector<T>::iterator storage_begin()
{
return store->begin();
}
typename std::vector<T>::iterator storage_end()
{
return store->end();
}
typename std::vector<T>::const_iterator storage_begin() const
{
return store->begin();
}
typename std::vector<T>::const_iterator storage_end() const
{
return store->end();
}
IndexMap& get_index_map() { return index; }
const IndexMap& get_index_map() const { return index; }
public:
// Copy ctor absent, default semantics is OK.
// Assignment operator absent, default semantics is OK.
// CONSIDER: not sure that assignment to 'index' is correct.
reference operator[](const key_type& v) const {
typename property_traits<IndexMap>::value_type i = get(index, v);
if (static_cast<unsigned>(i) >= store->size()) {
store->resize(i + 1, T());
}
return (*store)[i];
}
private:
// Conceptually, we have a vector of infinite size. For practical
// purposes, we start with an empty vector and grow it as needed.
// Note that we cannot store pointer to vector here -- we cannot
// store pointer to data, because if copy of property map resizes
// the vector, the pointer to data will be invalidated.
// I wonder if class 'pmap_ref' is simply needed.
shared_ptr< std::vector<T> > store;
IndexMap index;
};
template<typename T, typename IndexMap>
vector_property_map<T, IndexMap>
make_vector_property_map(IndexMap index)
{
return vector_property_map<T, IndexMap>(index);
}
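// A hypothetical usage sketch (not part of the original header): storage
// grows on demand, so reads/writes through operator[] are always valid.
// put/get come from the put_get_helper base above.
//
//   boost::vector_property_map<double> pm; // identity_property_map index
//   put(pm, 7, 3.14);                      // resizes storage to size 8
//   double x = get(pm, 7);                 // x == 3.14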
}
#ifdef BOOST_GRAPH_USE_MPI
// Hide include from dependency trackers; the right fix
// is not to have it at all, but who knows what'll break
#define BOOST_VPMAP_HEADER_NAME <boost/property_map/parallel/vector_property_map.hpp>
#include BOOST_VPMAP_HEADER_NAME
#undef BOOST_VPMAP_HEADER_NAME
#endif
#endif
|
Formal statement is: lemma compact_scaling: fixes s :: "'a::real_normed_vector set" assumes "compact s" shows "compact ((\<lambda>x. c *\<^sub>R x) ` s)" Informal statement is: If $s$ is a compact set, then the set of all scalar multiples of elements of $s$ is compact. |
Formal statement is: lemma abs_triangle_half_r: fixes y :: "'a::linordered_field" shows "abs (y - x1) < e / 2 \<Longrightarrow> abs (y - x2) < e / 2 \<Longrightarrow> abs (x1 - x2) < e" Informal statement is: If $|y - x_1| < e/2$ and $|y - x_2| < e/2$, then $|x_1 - x_2| < e$. |
/-
Copyright (c) 2021 Yury Kudryashov. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Yury Kudryashov
-/
import analysis.box_integral.box.subbox_induction
import analysis.box_integral.partition.tagged
/-!
# Induction on subboxes
In this file we prove (see
`box_integral.tagged_partition.exists_is_Henstock_is_subordinate_homothetic`) that for every box `I`
in `ℝⁿ` and a function `r : ℝⁿ → ℝ` positive on `I` there exists a tagged partition `π` of `I` such
that
* `π` is a Henstock partition;
* `π` is subordinate to `r`;
* each box in `π` is homothetic to `I` with coefficient of the form `1 / 2 ^ n`.
Later we will use this lemma to prove that the Henstock filter is nontrivial, hence the Henstock
integral is well-defined.
## Tags
partition, tagged partition, Henstock integral
-/
namespace box_integral
open set metric
open_locale classical topological_space
noncomputable theory
variables {ι : Type*} [fintype ι] {I J : box ι}
namespace prepartition
/-- Split a box in `ℝⁿ` into `2 ^ n` boxes by hyperplanes passing through its center. -/
def split_center (I : box ι) : prepartition I :=
{ boxes := finset.univ.map (box.split_center_box_emb I),
le_of_mem' := by simp [I.split_center_box_le],
pairwise_disjoint :=
begin
rw [finset.coe_map, finset.coe_univ, image_univ],
rintro _ ⟨s, rfl⟩ _ ⟨t, rfl⟩ Hne,
exact I.disjoint_split_center_box (mt (congr_arg _) Hne)
end }
@[simp] lemma mem_split_center : J ∈ split_center I ↔ ∃ s, I.split_center_box s = J :=
by simp [split_center]
lemma is_partition_split_center (I : box ι) : is_partition (split_center I) :=
λ x hx, by simp [hx]
lemma upper_sub_lower_of_mem_split_center (h : J ∈ split_center I) (i : ι) :
J.upper i - J.lower i = (I.upper i - I.lower i) / 2 :=
let ⟨s, hs⟩ := mem_split_center.1 h in hs ▸ I.upper_sub_lower_split_center_box s i
end prepartition
namespace box
open prepartition tagged_prepartition
/-- Let `p` be a predicate on `box ι`, let `I` be a box. Suppose that the following two properties
hold true.
* Consider a smaller box `J ≤ I`. The hyperplanes passing through the center of `J` split it into
`2 ^ n` boxes. If `p` holds true on each of these boxes, then it is true on `J`.
* For each `z` in the closed box `I.Icc` there exists a neighborhood `U` of `z` within `I.Icc` such
that for every box `J ≤ I` such that `z ∈ J.Icc ⊆ U`, if `J` is homothetic to `I` with a
coefficient of the form `1 / 2 ^ m`, then `p` is true on `J`.
Then `p I` is true. See also `box_integral.box.subbox_induction_on'` for a version using
`box_integral.box.split_center_box` instead of `box_integral.prepartition.split_center`. -/
@[elab_as_eliminator]
lemma subbox_induction_on {p : box ι → Prop} (I : box ι)
(H_ind : ∀ J ≤ I, (∀ J' ∈ split_center J, p J') → p J)
(H_nhds : ∀ z ∈ I.Icc, ∃ (U ∈ 𝓝[I.Icc] z), ∀ (J ≤ I) (m : ℕ), z ∈ J.Icc → J.Icc ⊆ U →
(∀ i, J.upper i - J.lower i = (I.upper i - I.lower i) / 2 ^ m) → p J) :
p I :=
begin
refine subbox_induction_on' I (λ J hle hs, H_ind J hle $ λ J' h', _) H_nhds,
rcases mem_split_center.1 h' with ⟨s, rfl⟩,
exact hs s
end
/-- Given a box `I` in `ℝⁿ` and a function `r : ℝⁿ → (0, ∞)`, there exists a tagged partition `π` of
`I` such that
* `π` is a Henstock partition;
* `π` is subordinate to `r`;
* each box in `π` is homothetic to `I` with coefficient of the form `1 / 2 ^ m`.
This lemma implies that the Henstock filter is nontrivial, hence the Henstock integral is
well-defined. -/
lemma exists_tagged_partition_is_Henstock_is_subordinate_homothetic (I : box ι)
(r : (ι → ℝ) → Ioi (0 : ℝ)) :
∃ π : tagged_prepartition I, π.is_partition ∧ π.is_Henstock ∧ π.is_subordinate r ∧
(∀ J ∈ π, ∃ m : ℕ, ∀ i, (J : _).upper i - J.lower i = (I.upper i - I.lower i) / 2 ^ m) ∧
π.distortion = I.distortion :=
begin
refine subbox_induction_on I (λ J hle hJ, _) (λ z hz, _),
{ choose! πi hP hHen hr Hn Hd using hJ, choose! n hn using Hn,
have hP : ((split_center J).bUnion_tagged πi).is_partition,
from (is_partition_split_center _).bUnion_tagged hP,
have hsub : ∀ (J' ∈ (split_center J).bUnion_tagged πi), ∃ n : ℕ, ∀ i,
(J' : _).upper i - J'.lower i = (J.upper i - J.lower i) / 2 ^ n,
{ intros J' hJ',
rcases (split_center J).mem_bUnion_tagged.1 hJ' with ⟨J₁, h₁, h₂⟩,
refine ⟨n J₁ J' + 1, λ i, _⟩,
simp only [hn J₁ h₁ J' h₂, upper_sub_lower_of_mem_split_center h₁, pow_succ,
div_div_eq_div_mul] },
refine ⟨_, hP, is_Henstock_bUnion_tagged.2 hHen, is_subordinate_bUnion_tagged.2 hr, hsub, _⟩,
refine tagged_prepartition.distortion_of_const _ hP.nonempty_boxes (λ J' h', _),
rcases hsub J' h' with ⟨n, hn⟩,
exact box.distortion_eq_of_sub_eq_div hn },
{ refine ⟨I.Icc ∩ closed_ball z (r z),
inter_mem_nhds_within _ (closed_ball_mem_nhds _ (r z).coe_prop), _⟩,
intros J Hle n Hmem HIcc Hsub,
rw set.subset_inter_iff at HIcc,
refine ⟨single _ _ le_rfl _ Hmem, is_partition_single _, is_Henstock_single _,
(is_subordinate_single _ _).2 HIcc.2, _, distortion_single _ _⟩,
simp only [tagged_prepartition.mem_single, forall_eq],
refine ⟨0, λ i, _⟩, simp }
end
end box
namespace prepartition
open tagged_prepartition finset function
/-- Given a box `I` in `ℝⁿ`, a function `r : ℝⁿ → (0, ∞)`, and a prepartition `π` of `I`, there
exists a tagged prepartition `π'` of `I` such that
* each box of `π'` is included in some box of `π`;
* `π'` is a Henstock partition;
* `π'` is subordinate to `r`;
* `π'` covers exactly the same part of `I` as `π`;
* the distortion of `π'` is equal to the distortion of `π`.
-/
lemma exists_tagged_le_is_Henstock_is_subordinate_Union_eq {I : box ι} (r : (ι → ℝ) → Ioi (0 : ℝ))
(π : prepartition I) :
∃ π' : tagged_prepartition I, π'.to_prepartition ≤ π ∧
π'.is_Henstock ∧ π'.is_subordinate r ∧ π'.distortion = π.distortion ∧
π'.Union = π.Union :=
begin
have := λ J, box.exists_tagged_partition_is_Henstock_is_subordinate_homothetic J r,
choose! πi πip πiH πir hsub πid, clear hsub,
refine ⟨π.bUnion_tagged πi, bUnion_le _ _, is_Henstock_bUnion_tagged.2 (λ J _, πiH J),
is_subordinate_bUnion_tagged.2 (λ J _, πir J), _, π.Union_bUnion_partition (λ J _, πip J)⟩,
rw [distortion_bUnion_tagged],
exact sup_congr rfl (λ J _, πid J)
end
/-- Given a prepartition `π` of a box `I` and a function `r : ℝⁿ → (0, ∞)`, `π.to_subordinate r`
is a tagged partition `π'` such that
* each box of `π'` is included in some box of `π`;
* `π'` is a Henstock partition;
* `π'` is subordinate to `r`;
* `π'` covers exactly the same part of `I` as `π`;
* the distortion of `π'` is equal to the distortion of `π`.
-/
def to_subordinate (π : prepartition I) (r : (ι → ℝ) → Ioi (0 : ℝ)) : tagged_prepartition I :=
(π.exists_tagged_le_is_Henstock_is_subordinate_Union_eq r).some
lemma is_Henstock_to_subordinate (π : prepartition I) (r : (ι → ℝ) → Ioi (0 : ℝ)) :
(π.to_subordinate r).is_Henstock :=
(π.exists_tagged_le_is_Henstock_is_subordinate_Union_eq r).some_spec.2.1
lemma is_subordinate_to_subordinate (π : prepartition I) (r : (ι → ℝ) → Ioi (0 : ℝ)) :
(π.to_subordinate r).is_subordinate r :=
(π.exists_tagged_le_is_Henstock_is_subordinate_Union_eq r).some_spec.2.2.1
@[simp] lemma distortion_to_subordinate (π : prepartition I) (r : (ι → ℝ) → Ioi (0 : ℝ)) :
(π.to_subordinate r).distortion = π.distortion :=
(π.exists_tagged_le_is_Henstock_is_subordinate_Union_eq r).some_spec.2.2.2.1
@[simp] lemma Union_to_subordinate (π : prepartition I) (r : (ι → ℝ) → Ioi (0 : ℝ)) :
(π.to_subordinate r).Union = π.Union :=
(π.exists_tagged_le_is_Henstock_is_subordinate_Union_eq r).some_spec.2.2.2.2
end prepartition
namespace tagged_prepartition
/-- Given a tagged prepartition `π₁`, a prepartition `π₂` that covers exactly `I \ π₁.Union`, and
a function `r : ℝⁿ → (0, ∞)`, returns the union of `π₁` and `π₂.to_subordinate r`. This partition
`π` has the following properties:
* `π` is a partition, i.e. it covers the whole `I`;
* `π₁.boxes ⊆ π.boxes`;
* `π.tag J = π₁.tag J` whenever `J ∈ π₁`;
* `π` is Henstock outside of `π₁`: `π.tag J ∈ J.Icc` whenever `J ∈ π`, `J ∉ π₁`;
* `π` is subordinate to `r` outside of `π₁`;
* the distortion of `π` is equal to the maximum of the distortions of `π₁` and `π₂`.
-/
def union_compl_to_subordinate (π₁ : tagged_prepartition I) (π₂ : prepartition I)
(hU : π₂.Union = I \ π₁.Union) (r : (ι → ℝ) → Ioi (0 : ℝ)) :
tagged_prepartition I :=
π₁.disj_union (π₂.to_subordinate r)
(((π₂.Union_to_subordinate r).trans hU).symm ▸ disjoint_diff)
lemma is_partition_union_compl_to_subordinate (π₁ : tagged_prepartition I) (π₂ : prepartition I)
(hU : π₂.Union = I \ π₁.Union) (r : (ι → ℝ) → Ioi (0 : ℝ)) :
is_partition (π₁.union_compl_to_subordinate π₂ hU r) :=
prepartition.is_partition_disj_union_of_eq_diff ((π₂.Union_to_subordinate r).trans hU)
@[simp] lemma union_compl_to_subordinate_boxes (π₁ : tagged_prepartition I) (π₂ : prepartition I)
(hU : π₂.Union = I \ π₁.Union) (r : (ι → ℝ) → Ioi (0 : ℝ)) :
(π₁.union_compl_to_subordinate π₂ hU r).boxes = π₁.boxes ∪ (π₂.to_subordinate r).boxes :=
rfl
@[simp] lemma Union_union_compl_to_subordinate_boxes (π₁ : tagged_prepartition I)
(π₂ : prepartition I) (hU : π₂.Union = I \ π₁.Union) (r : (ι → ℝ) → Ioi (0 : ℝ)) :
(π₁.union_compl_to_subordinate π₂ hU r).Union = I :=
(is_partition_union_compl_to_subordinate _ _ _ _).Union_eq
@[simp] lemma distortion_union_compl_to_subordinate (π₁ : tagged_prepartition I)
(π₂ : prepartition I) (hU : π₂.Union = I \ π₁.Union) (r : (ι → ℝ) → Ioi (0 : ℝ)) :
(π₁.union_compl_to_subordinate π₂ hU r).distortion = max π₁.distortion π₂.distortion :=
by simp [union_compl_to_subordinate]
end tagged_prepartition
end box_integral
|
{-# OPTIONS --cubical --no-import-sorts --safe #-}
module Cubical.Data.DiffInt.Properties where
open import Cubical.Foundations.Prelude
open import Cubical.Foundations.Univalence
open import Cubical.Data.DiffInt.Base
open import Cubical.Data.Nat as ℕ using (suc; zero; isSetℕ; discreteℕ; ℕ) renaming (_+_ to _+ⁿ_; _·_ to _·ⁿ_)
open import Cubical.Data.Sigma
open import Cubical.Data.Bool
open import Cubical.Data.Int as Int using (Int; sucInt)
open import Cubical.Foundations.Path
open import Cubical.Foundations.Isomorphism
open import Cubical.Relation.Binary.Base
open import Cubical.Relation.Nullary
open import Cubical.HITs.SetQuotients
open BinaryRelation
relIsEquiv : isEquivRel rel
relIsEquiv = equivRel {A = ℕ × ℕ} relIsRefl relIsSym relIsTrans
where
open import Cubical.Data.Nat
relIsRefl : isRefl rel
relIsRefl (a0 , a1) = refl
relIsSym : isSym rel
relIsSym (a0 , a1) (b0 , b1) p = sym p
relIsTrans : isTrans rel
relIsTrans (a0 , a1) (b0 , b1) (c0 , c1) p0 p1 =
inj-m+ {m = (b0 + b1)} ((b0 + b1) + (a0 + c1) ≡⟨ +-assoc (b0 + b1) a0 c1 ⟩
((b0 + b1) + a0) + c1 ≡[ i ]⟨ +-comm b0 b1 i + a0 + c1 ⟩
((b1 + b0) + a0) + c1 ≡[ i ]⟨ +-comm (b1 + b0) a0 i + c1 ⟩
(a0 + (b1 + b0)) + c1 ≡[ i ]⟨ +-assoc a0 b1 b0 i + c1 ⟩
(a0 + b1) + b0 + c1 ≡⟨ sym (+-assoc (a0 + b1) b0 c1) ⟩
(a0 + b1) + (b0 + c1) ≡⟨ cong (λ p → p . fst + p .snd) (transport ΣPath≡PathΣ (p0 , p1))⟩
(b0 + a1) + (c0 + b1) ≡⟨ sym (+-assoc b0 a1 (c0 + b1))⟩
b0 + (a1 + (c0 + b1)) ≡[ i ]⟨ b0 + (a1 + +-comm c0 b1 i) ⟩
b0 + (a1 + (b1 + c0)) ≡[ i ]⟨ b0 + +-comm a1 (b1 + c0) i ⟩
b0 + ((b1 + c0) + a1) ≡[ i ]⟨ b0 + +-assoc b1 c0 a1 (~ i) ⟩
b0 + (b1 + (c0 + a1)) ≡⟨ +-assoc b0 b1 (c0 + a1)⟩
(b0 + b1) + (c0 + a1) ∎ )
relIsProp : BinaryRelation.isPropValued rel
relIsProp a b x y = isSetℕ _ _ _ _
discreteℤ : Discrete ℤ
discreteℤ = discreteSetQuotients (discreteΣ discreteℕ λ _ → discreteℕ) relIsProp relIsEquiv (λ _ _ → discreteℕ _ _)
isSetℤ : isSet ℤ
isSetℤ = Discrete→isSet discreteℤ
sucℤ' : ℕ × ℕ -> ℤ
sucℤ' (a⁺ , a⁻) = [ suc a⁺ , a⁻ ]
sucℤ'-respects-rel : (a b : ℕ × ℕ) → rel a b → sucℤ' a ≡ sucℤ' b
sucℤ'-respects-rel a@(a⁺ , a⁻) b@(b⁺ , b⁻) a~b = eq/ (suc a⁺ , a⁻) (suc b⁺ , b⁻) λ i → suc (a~b i)
sucℤ : ℤ -> ℤ
sucℤ = elim {R = rel} {B = λ _ → ℤ} (λ _ → isSetℤ) sucℤ' sucℤ'-respects-rel
predℤ' : ℕ × ℕ -> ℤ
predℤ' (a⁺ , a⁻) = [ a⁺ , suc a⁻ ]
⟦_⟧ : Int -> ℤ
⟦_⟧ (Int.pos n) = [ n , 0 ]
⟦_⟧ (Int.negsuc n) = [ 0 , suc n ]
fwd = ⟦_⟧
bwd' : ℕ × ℕ -> Int
bwd' (zero , a⁻) = Int.neg a⁻
bwd' (suc a⁺ , a⁻) = sucInt (bwd' (a⁺ , a⁻))
rel-suc : ∀ a⁺ a⁻ → rel (a⁺ , a⁻) (suc a⁺ , suc a⁻)
rel-suc a⁺ a⁻ = ℕ.+-suc a⁺ a⁻
bwd'-suc : ∀ a⁺ a⁻ → bwd' (a⁺ , a⁻) ≡ bwd' (suc a⁺ , suc a⁻)
bwd'-suc zero zero = refl
bwd'-suc zero (suc a⁻) = refl
bwd'-suc (suc a⁺) a⁻ i = sucInt (bwd'-suc a⁺ a⁻ i)
bwd'+ : ∀ m n → bwd' (m , m +ⁿ n) ≡ bwd' (0 , n)
bwd'+ zero n = refl
bwd'+ (suc m) n = sym (bwd'-suc m (m +ⁿ n)) ∙ bwd'+ m n
bwd'-respects-rel : (a b : ℕ × ℕ) → rel a b → bwd' a ≡ bwd' b
bwd'-respects-rel (zero , a⁻) ( b⁺ , b⁻) a~b = sym (bwd'+ b⁺ a⁻) ∙ (λ i → bwd' (b⁺ , a~b (~ i)))
bwd'-respects-rel (suc a⁺ , a⁻) (zero , b⁻) a~b = (λ i → bwd' (suc a⁺ , a~b (~ i))) ∙ sym (bwd'-suc a⁺ (a⁺ +ⁿ b⁻)) ∙ bwd'+ a⁺ b⁻
bwd'-respects-rel (suc a⁺ , a⁻) (suc b⁺ , b⁻) a~b i = sucInt (bwd'-respects-rel (a⁺ , a⁻) (b⁺ , b⁻) (ℕ.inj-m+ {1} {a⁺ +ⁿ b⁻} {b⁺ +ⁿ a⁻} a~b) i)
bwd : ℤ -> Int
bwd = elim {R = rel} {B = λ _ → Int} (λ _ → Int.isSetInt) bwd' bwd'-respects-rel
bwd-fwd : ∀ (x : Int) -> bwd (fwd x) ≡ x
bwd-fwd (Int.pos zero) = refl
bwd-fwd (Int.pos (suc n)) i = sucInt (bwd-fwd (Int.pos n) i)
bwd-fwd (Int.negsuc n) = refl
suc-⟦⟧ : ∀ x → sucℤ ⟦ x ⟧ ≡ ⟦ sucInt x ⟧
suc-⟦⟧ (Int.pos n) = refl
suc-⟦⟧ (Int.negsuc zero) = eq/ {R = rel} (1 , 1) (0 , 0) refl
suc-⟦⟧ (Int.negsuc (suc n)) = eq/ {R = rel} (1 , 2 +ⁿ n) (0 , 1 +ⁿ n) refl
fwd-bwd' : (a : ℕ × ℕ) → fwd (bwd [ a ]) ≡ [ a ]
fwd-bwd' a@(zero , zero) = refl
fwd-bwd' a@(zero , suc a⁻) = refl
fwd-bwd' a@(suc a⁺ , a⁻) = sym (suc-⟦⟧ (bwd [ a⁺ , a⁻ ])) ∙ (λ i → sucℤ (fwd-bwd' (a⁺ , a⁻) i))
fwd-bwd : ∀ (z : ℤ) -> fwd (bwd z) ≡ z
fwd-bwd = elimProp {R = rel} (λ _ → isSetℤ _ _) fwd-bwd'
Int≡ℤ : Int ≡ ℤ
Int≡ℤ = isoToPath (iso fwd bwd fwd-bwd bwd-fwd)
infix 8 -_
infixl 7 _·_
infixl 6 _+_
_+'_ : (a b : ℕ × ℕ) → ℤ
(a⁺ , a⁻) +' (b⁺ , b⁻) = [ a⁺ +ⁿ b⁺ , a⁻ +ⁿ b⁻ ]
private
commˡⁿ : ∀ a b c → a +ⁿ b +ⁿ c ≡ a +ⁿ c +ⁿ b
commˡⁿ a b c = sym (ℕ.+-assoc a b c) ∙ (λ i → a +ⁿ ℕ.+-comm b c i) ∙ ℕ.+-assoc a c b
lem0 : ∀ a b c d → (a +ⁿ b) +ⁿ (c +ⁿ d) ≡ (a +ⁿ c) +ⁿ (b +ⁿ d)
lem0 a b c d = ℕ.+-assoc (a +ⁿ b) c d ∙ (λ i → commˡⁿ a b c i +ⁿ d) ∙ sym (ℕ.+-assoc (a +ⁿ c) b d)
+ⁿ-creates-rel-≡ : ∀ a⁺ a⁻ x → _≡_ {A = ℤ} [ a⁺ , a⁻ ] [ a⁺ +ⁿ x , a⁻ +ⁿ x ]
+ⁿ-creates-rel-≡ a⁺ a⁻ x = eq/ (a⁺ , a⁻) (a⁺ +ⁿ x , a⁻ +ⁿ x) ((λ i → a⁺ +ⁿ ℕ.+-comm a⁻ x i) ∙ ℕ.+-assoc a⁺ x a⁻)
+-respects-relʳ : (a b c : ℕ × ℕ) → rel a b → (a +' c) ≡ (b +' c)
+-respects-relʳ a@(a⁺ , a⁻) b@(b⁺ , b⁻) c@(c⁺ , c⁻) p = eq/ {R = rel} (a⁺ +ⁿ c⁺ , a⁻ +ⁿ c⁻) (b⁺ +ⁿ c⁺ , b⁻ +ⁿ c⁻) (
(a⁺ +ⁿ c⁺) +ⁿ (b⁻ +ⁿ c⁻) ≡⟨ lem0 a⁺ c⁺ b⁻ c⁻ ⟩
(a⁺ +ⁿ b⁻) +ⁿ (c⁺ +ⁿ c⁻) ≡[ i ]⟨ p i +ⁿ (c⁺ +ⁿ c⁻) ⟩
(b⁺ +ⁿ a⁻) +ⁿ (c⁺ +ⁿ c⁻) ≡⟨ sym (lem0 b⁺ c⁺ a⁻ c⁻) ⟩
(b⁺ +ⁿ c⁺) +ⁿ (a⁻ +ⁿ c⁻) ∎)
+-respects-relˡ : (a b c : ℕ × ℕ) → rel b c → (a +' b) ≡ (a +' c)
+-respects-relˡ a@(a⁺ , a⁻) b@(b⁺ , b⁻) c@(c⁺ , c⁻) p = eq/ {R = rel} (a⁺ +ⁿ b⁺ , a⁻ +ⁿ b⁻) (a⁺ +ⁿ c⁺ , a⁻ +ⁿ c⁻) (
(a⁺ +ⁿ b⁺) +ⁿ (a⁻ +ⁿ c⁻) ≡⟨ lem0 a⁺ b⁺ a⁻ c⁻ ⟩
(a⁺ +ⁿ a⁻) +ⁿ (b⁺ +ⁿ c⁻) ≡[ i ]⟨ (a⁺ +ⁿ a⁻) +ⁿ p i ⟩
(a⁺ +ⁿ a⁻) +ⁿ (c⁺ +ⁿ b⁻) ≡⟨ sym (lem0 a⁺ c⁺ a⁻ b⁻) ⟩
(a⁺ +ⁿ c⁺) +ⁿ (a⁻ +ⁿ b⁻) ∎)
_+''_ : ℤ → ℤ → ℤ
_+''_ = rec2 {R = rel} {B = ℤ} φ _+'_ +-respects-relʳ +-respects-relˡ
where abstract φ = isSetℤ
-- normalization of isSetℤ explodes. Therefore we wrap this with expanded cases
_+_ : ℤ → ℤ → ℤ
x@([ _ ]) + y@([ _ ]) = x +'' y
x@([ _ ]) + y@(eq/ _ _ _ _) = x +'' y
x@(eq/ _ _ _ _) + y@([ _ ]) = x +'' y
x@(eq/ _ _ _ _) + y@(eq/ _ _ _ _) = x +'' y
x@(eq/ _ _ _ _) + y@(squash/ a b p q i j) = isSetℤ _ _ (cong (x +_) p) (cong (x +_) q) i j
x@([ _ ]) + y@(squash/ a b p q i j) = isSetℤ _ _ (cong (x +_) p) (cong (x +_) q) i j
x@(squash/ a b p q i j) + y = isSetℤ _ _ (cong (_+ y) p) (cong (_+ y) q) i j
-'_ : ℕ × ℕ → ℤ
-' (a⁺ , a⁻) = [ a⁻ , a⁺ ]
neg-respects-rel'-≡ : (a b : ℕ × ℕ) → rel a b → (-' a) ≡ (-' b)
neg-respects-rel'-≡ a@(a⁺ , a⁻) b@(b⁺ , b⁻) p = eq/ {R = rel} (a⁻ , a⁺) (b⁻ , b⁺) (ℕ.+-comm a⁻ b⁺ ∙ sym p ∙ ℕ.+-comm a⁺ b⁻)
-_ : ℤ → ℤ
-_ = rec {R = rel} {B = ℤ} isSetℤ -'_ neg-respects-rel'-≡
_·'_ : (a b : ℕ × ℕ) → ℤ
(a⁺ , a⁻) ·' (b⁺ , b⁻) = [ a⁺ ·ⁿ b⁺ +ⁿ a⁻ ·ⁿ b⁻ , a⁺ ·ⁿ b⁻ +ⁿ a⁻ ·ⁿ b⁺ ]
private
lem1 : ∀ a b c d → (a +ⁿ b) +ⁿ (c +ⁿ d) ≡ (a +ⁿ d) +ⁿ (b +ⁿ c)
lem1 a b c d = (λ i → (a +ⁿ b) +ⁿ ℕ.+-comm c d i) ∙ ℕ.+-assoc (a +ⁿ b) d c ∙ (λ i → commˡⁿ a b d i +ⁿ c) ∙ sym (ℕ.+-assoc (a +ⁿ d) b c)
·-respects-relʳ : (a b c : ℕ × ℕ) → rel a b → (a ·' c) ≡ (b ·' c)
·-respects-relʳ a@(a⁺ , a⁻) b@(b⁺ , b⁻) c@(c⁺ , c⁻) p = eq/ {R = rel} (a⁺ ·ⁿ c⁺ +ⁿ a⁻ ·ⁿ c⁻ , a⁺ ·ⁿ c⁻ +ⁿ a⁻ ·ⁿ c⁺) (b⁺ ·ⁿ c⁺ +ⁿ b⁻ ·ⁿ c⁻ , b⁺ ·ⁿ c⁻ +ⁿ b⁻ ·ⁿ c⁺) (
(a⁺ ·ⁿ c⁺ +ⁿ a⁻ ·ⁿ c⁻) +ⁿ (b⁺ ·ⁿ c⁻ +ⁿ b⁻ ·ⁿ c⁺) ≡⟨ lem1 (a⁺ ·ⁿ c⁺) (a⁻ ·ⁿ c⁻) (b⁺ ·ⁿ c⁻) (b⁻ ·ⁿ c⁺) ⟩
(a⁺ ·ⁿ c⁺ +ⁿ b⁻ ·ⁿ c⁺) +ⁿ (a⁻ ·ⁿ c⁻ +ⁿ b⁺ ·ⁿ c⁻) ≡[ i ]⟨ ℕ.·-distribʳ a⁺ b⁻ c⁺ i +ⁿ ℕ.·-distribʳ a⁻ b⁺ c⁻ i ⟩
((a⁺ +ⁿ b⁻) ·ⁿ c⁺) +ⁿ ((a⁻ +ⁿ b⁺) ·ⁿ c⁻) ≡[ i ]⟨ p i ·ⁿ c⁺ +ⁿ (ℕ.+-comm a⁻ b⁺ ∙ sym p ∙ ℕ.+-comm a⁺ b⁻) i ·ⁿ c⁻ ⟩
((b⁺ +ⁿ a⁻) ·ⁿ c⁺) +ⁿ ((b⁻ +ⁿ a⁺) ·ⁿ c⁻) ≡[ i ]⟨ ℕ.·-distribʳ b⁺ a⁻ c⁺ (~ i) +ⁿ ℕ.·-distribʳ b⁻ a⁺ c⁻ (~ i) ⟩
(b⁺ ·ⁿ c⁺ +ⁿ a⁻ ·ⁿ c⁺) +ⁿ (b⁻ ·ⁿ c⁻ +ⁿ a⁺ ·ⁿ c⁻) ≡⟨ sym (lem1 (b⁺ ·ⁿ c⁺) (b⁻ ·ⁿ c⁻) (a⁺ ·ⁿ c⁻) (a⁻ ·ⁿ c⁺)) ⟩
(b⁺ ·ⁿ c⁺ +ⁿ b⁻ ·ⁿ c⁻) +ⁿ (a⁺ ·ⁿ c⁻ +ⁿ a⁻ ·ⁿ c⁺) ∎)
·-respects-relˡ : (a b c : ℕ × ℕ) → rel b c → (a ·' b) ≡ (a ·' c)
·-respects-relˡ a@(a⁺ , a⁻) b@(b⁺ , b⁻) c@(c⁺ , c⁻) p = eq/ {R = rel} (a⁺ ·ⁿ b⁺ +ⁿ a⁻ ·ⁿ b⁻ , a⁺ ·ⁿ b⁻ +ⁿ a⁻ ·ⁿ b⁺) (a⁺ ·ⁿ c⁺ +ⁿ a⁻ ·ⁿ c⁻ , a⁺ ·ⁿ c⁻ +ⁿ a⁻ ·ⁿ c⁺) (
(a⁺ ·ⁿ b⁺ +ⁿ a⁻ ·ⁿ b⁻) +ⁿ (a⁺ ·ⁿ c⁻ +ⁿ a⁻ ·ⁿ c⁺) ≡⟨ lem0 (a⁺ ·ⁿ b⁺) (a⁻ ·ⁿ b⁻) (a⁺ ·ⁿ c⁻) (a⁻ ·ⁿ c⁺) ⟩
(a⁺ ·ⁿ b⁺ +ⁿ a⁺ ·ⁿ c⁻) +ⁿ (a⁻ ·ⁿ b⁻ +ⁿ a⁻ ·ⁿ c⁺) ≡[ i ]⟨ ℕ.·-distribˡ a⁺ b⁺ c⁻ i +ⁿ ℕ.·-distribˡ a⁻ b⁻ c⁺ i ⟩
(a⁺ ·ⁿ (b⁺ +ⁿ c⁻)) +ⁿ (a⁻ ·ⁿ (b⁻ +ⁿ c⁺)) ≡[ i ]⟨ a⁺ ·ⁿ p i +ⁿ a⁻ ·ⁿ (ℕ.+-comm b⁻ c⁺ ∙ sym p ∙ ℕ.+-comm b⁺ c⁻) i ⟩
(a⁺ ·ⁿ (c⁺ +ⁿ b⁻)) +ⁿ (a⁻ ·ⁿ (c⁻ +ⁿ b⁺)) ≡[ i ]⟨ ℕ.·-distribˡ a⁺ c⁺ b⁻ (~ i) +ⁿ ℕ.·-distribˡ a⁻ c⁻ b⁺ (~ i) ⟩
(a⁺ ·ⁿ c⁺ +ⁿ a⁺ ·ⁿ b⁻) +ⁿ (a⁻ ·ⁿ c⁻ +ⁿ a⁻ ·ⁿ b⁺) ≡⟨ sym (lem0 (a⁺ ·ⁿ c⁺) (a⁻ ·ⁿ c⁻) (a⁺ ·ⁿ b⁻) (a⁻ ·ⁿ b⁺)) ⟩
(a⁺ ·ⁿ c⁺ +ⁿ a⁻ ·ⁿ c⁻) +ⁿ (a⁺ ·ⁿ b⁻ +ⁿ a⁻ ·ⁿ b⁺) ∎)
_·''_ : ℤ → ℤ → ℤ
_·''_ = rec2 {R = rel} {B = ℤ} isSetℤ _·'_ ·-respects-relʳ ·-respects-relˡ
-- normalization of isSetℤ explodes. Therefore we wrap this with expanded cases
_·_ : ℤ → ℤ → ℤ
x@([ _ ]) · y@([ _ ]) = x ·'' y
x@([ _ ]) · y@(eq/ _ _ _ _) = x ·'' y
x@(eq/ _ _ _ _) · y@([ _ ]) = x ·'' y
x@(eq/ _ _ _ _) · y@(eq/ _ _ _ _) = x ·'' y
x@(eq/ _ _ _ _) · y@(squash/ a b p q i j) = isSetℤ _ _ (cong (x ·_) p) (cong (x ·_) q) i j
x@([ _ ]) · y@(squash/ a b p q i j) = isSetℤ _ _ (cong (x ·_) p) (cong (x ·_) q) i j
x@(squash/ a b p q i j) · y = isSetℤ _ _ (cong (_· y) p) (cong (_· y) q) i j
+-identityʳ : (x : ℤ) → x + 0 ≡ x
+-identityʳ = elimProp {R = rel} (λ _ → isSetℤ _ _)
λ{ (a⁺ , a⁻) i → [ ℕ.+-comm a⁺ 0 i , ℕ.+-comm a⁻ 0 i ] }
+-comm : (x y : ℤ) → x + y ≡ y + x
+-comm = elimProp2 {R = rel} (λ _ _ → isSetℤ _ _)
λ{ (a⁺ , a⁻) (b⁺ , b⁻) i → [ ℕ.+-comm a⁺ b⁺ i , ℕ.+-comm a⁻ b⁻ i ] }
+-inverseʳ : (x : ℤ) → x + (- x) ≡ 0
+-inverseʳ = elimProp {R = rel} (λ _ → isSetℤ _ _)
λ{ (a⁺ , a⁻) → eq/ {R = rel} (a⁺ +ⁿ a⁻ , a⁻ +ⁿ a⁺) (0 , 0) (ℕ.+-zero (a⁺ +ⁿ a⁻) ∙ ℕ.+-comm a⁺ a⁻) }
+-assoc : (x y z : ℤ) → x + (y + z) ≡ x + y + z
+-assoc = elimProp3 {R = rel} (λ _ _ _ → isSetℤ _ _)
λ{ (a⁺ , a⁻) (b⁺ , b⁻) (c⁺ , c⁻) i → [ ℕ.+-assoc a⁺ b⁺ c⁺ i , ℕ.+-assoc a⁻ b⁻ c⁻ i ] }
·-identityʳ : (x : ℤ) → x · 1 ≡ x
·-identityʳ = elimProp {R = rel} (λ _ → isSetℤ _ _) γ
where
γ : (a : ℕ × ℕ) → _
γ (a⁺ , a⁻) i = [ p i , q i ]
where
p : a⁺ ·ⁿ 1 +ⁿ a⁻ ·ⁿ 0 ≡ a⁺
p i = ℕ.+-comm (ℕ.·-identityʳ a⁺ i) (ℕ.0≡m·0 a⁻ (~ i)) i
q : a⁺ ·ⁿ 0 +ⁿ a⁻ ·ⁿ 1 ≡ a⁻
q i = ℕ.0≡m·0 a⁺ (~ i) +ⁿ ℕ.·-identityʳ a⁻ i
·-comm : (x y : ℤ) → x · y ≡ y · x
·-comm = elimProp2 {R = rel} (λ _ _ → isSetℤ _ _)
λ{ (a⁺ , a⁻) (b⁺ , b⁻) i → [ ℕ.·-comm a⁺ b⁺ i +ⁿ ℕ.·-comm a⁻ b⁻ i , ℕ.+-comm (ℕ.·-comm a⁺ b⁻ i) (ℕ.·-comm a⁻ b⁺ i) i ] }
·-distribˡ : (x y z : ℤ) → x · (y + z) ≡ x · y + x · z
·-distribˡ = elimProp3 {R = rel} (λ _ _ _ → isSetℤ _ _)
λ{ (a⁺ , a⁻) (b⁺ , b⁻) (c⁺ , c⁻) →
[ a⁺ ·ⁿ (b⁺ +ⁿ c⁺) +ⁿ a⁻ ·ⁿ (b⁻ +ⁿ c⁻)
, a⁺ ·ⁿ (b⁻ +ⁿ c⁻) +ⁿ a⁻ ·ⁿ (b⁺ +ⁿ c⁺)
] ≡[ i ]⟨ [ ℕ.·-distribˡ a⁺ b⁺ c⁺ (~ i) +ⁿ ℕ.·-distribˡ a⁻ b⁻ c⁻ (~ i) , ℕ.·-distribˡ a⁺ b⁻ c⁻ (~ i) +ⁿ ℕ.·-distribˡ a⁻ b⁺ c⁺ (~ i) ] ⟩
[ (a⁺ ·ⁿ b⁺ +ⁿ a⁺ ·ⁿ c⁺) +ⁿ (a⁻ ·ⁿ b⁻ +ⁿ a⁻ ·ⁿ c⁻)
, (a⁺ ·ⁿ b⁻ +ⁿ a⁺ ·ⁿ c⁻) +ⁿ (a⁻ ·ⁿ b⁺ +ⁿ a⁻ ·ⁿ c⁺)
] ≡[ i ]⟨ [ lem0 (a⁺ ·ⁿ b⁺) (a⁻ ·ⁿ b⁻) (a⁺ ·ⁿ c⁺) (a⁻ ·ⁿ c⁻) (~ i), lem0 (a⁺ ·ⁿ b⁻) (a⁺ ·ⁿ c⁻) (a⁻ ·ⁿ b⁺) (a⁻ ·ⁿ c⁺) i ] ⟩
[ a⁺ ·ⁿ b⁺ +ⁿ a⁻ ·ⁿ b⁻ +ⁿ (a⁺ ·ⁿ c⁺ +ⁿ a⁻ ·ⁿ c⁻)
, a⁺ ·ⁿ b⁻ +ⁿ a⁻ ·ⁿ b⁺ +ⁿ (a⁺ ·ⁿ c⁻ +ⁿ a⁻ ·ⁿ c⁺)
] ∎
}
·-assoc : (x y z : ℤ) → x · (y · z) ≡ x · y · z
·-assoc = elimProp3 {R = rel} (λ _ _ _ → isSetℤ _ _)
λ{ (a⁺ , a⁻) (b⁺ , b⁻) (c⁺ , c⁻) →
[ a⁺ ·ⁿ (b⁺ ·ⁿ c⁺ +ⁿ b⁻ ·ⁿ c⁻) +ⁿ a⁻ ·ⁿ (b⁺ ·ⁿ c⁻ +ⁿ b⁻ ·ⁿ c⁺)
, a⁺ ·ⁿ (b⁺ ·ⁿ c⁻ +ⁿ b⁻ ·ⁿ c⁺) +ⁿ a⁻ ·ⁿ (b⁺ ·ⁿ c⁺ +ⁿ b⁻ ·ⁿ c⁻)
] ≡[ i ]⟨ [ ℕ.·-distribˡ a⁺ (b⁺ ·ⁿ c⁺) (b⁻ ·ⁿ c⁻) (~ i) +ⁿ ℕ.·-distribˡ a⁻ (b⁺ ·ⁿ c⁻) (b⁻ ·ⁿ c⁺) (~ i)
, ℕ.·-distribˡ a⁺ (b⁺ ·ⁿ c⁻) (b⁻ ·ⁿ c⁺) (~ i) +ⁿ ℕ.·-distribˡ a⁻ (b⁺ ·ⁿ c⁺) (b⁻ ·ⁿ c⁻) (~ i) ] ⟩
[ (a⁺ ·ⁿ (b⁺ ·ⁿ c⁺) +ⁿ a⁺ ·ⁿ (b⁻ ·ⁿ c⁻)) +ⁿ (a⁻ ·ⁿ (b⁺ ·ⁿ c⁻) +ⁿ a⁻ ·ⁿ (b⁻ ·ⁿ c⁺))
, (a⁺ ·ⁿ (b⁺ ·ⁿ c⁻) +ⁿ a⁺ ·ⁿ (b⁻ ·ⁿ c⁺)) +ⁿ (a⁻ ·ⁿ (b⁺ ·ⁿ c⁺) +ⁿ a⁻ ·ⁿ (b⁻ ·ⁿ c⁻))
] ≡[ i ]⟨ [ (ℕ.·-assoc a⁺ b⁺ c⁺ i +ⁿ ℕ.·-assoc a⁺ b⁻ c⁻ i) +ⁿ (ℕ.·-assoc a⁻ b⁺ c⁻ i +ⁿ ℕ.·-assoc a⁻ b⁻ c⁺ i)
, (ℕ.·-assoc a⁺ b⁺ c⁻ i +ⁿ ℕ.·-assoc a⁺ b⁻ c⁺ i) +ⁿ (ℕ.·-assoc a⁻ b⁺ c⁺ i +ⁿ ℕ.·-assoc a⁻ b⁻ c⁻ i) ] ⟩
[ (a⁺ ·ⁿ b⁺ ·ⁿ c⁺ +ⁿ a⁺ ·ⁿ b⁻ ·ⁿ c⁻) +ⁿ (a⁻ ·ⁿ b⁺ ·ⁿ c⁻ +ⁿ a⁻ ·ⁿ b⁻ ·ⁿ c⁺)
, (a⁺ ·ⁿ b⁺ ·ⁿ c⁻ +ⁿ a⁺ ·ⁿ b⁻ ·ⁿ c⁺) +ⁿ (a⁻ ·ⁿ b⁺ ·ⁿ c⁺ +ⁿ a⁻ ·ⁿ b⁻ ·ⁿ c⁻)
] ≡[ i ]⟨ [ lem1 (a⁺ ·ⁿ b⁺ ·ⁿ c⁺) (a⁺ ·ⁿ b⁻ ·ⁿ c⁻) (a⁻ ·ⁿ b⁺ ·ⁿ c⁻) (a⁻ ·ⁿ b⁻ ·ⁿ c⁺) i
, lem1 (a⁺ ·ⁿ b⁺ ·ⁿ c⁻) (a⁺ ·ⁿ b⁻ ·ⁿ c⁺) (a⁻ ·ⁿ b⁺ ·ⁿ c⁺) (a⁻ ·ⁿ b⁻ ·ⁿ c⁻) i ] ⟩
[ (a⁺ ·ⁿ b⁺ ·ⁿ c⁺ +ⁿ a⁻ ·ⁿ b⁻ ·ⁿ c⁺) +ⁿ (a⁺ ·ⁿ b⁻ ·ⁿ c⁻ +ⁿ a⁻ ·ⁿ b⁺ ·ⁿ c⁻)
, (a⁺ ·ⁿ b⁺ ·ⁿ c⁻ +ⁿ a⁻ ·ⁿ b⁻ ·ⁿ c⁻) +ⁿ (a⁺ ·ⁿ b⁻ ·ⁿ c⁺ +ⁿ a⁻ ·ⁿ b⁺ ·ⁿ c⁺)
] ≡[ i ]⟨ [ ℕ.·-distribʳ (a⁺ ·ⁿ b⁺) (a⁻ ·ⁿ b⁻) c⁺ i +ⁿ ℕ.·-distribʳ (a⁺ ·ⁿ b⁻) (a⁻ ·ⁿ b⁺) c⁻ i
, ℕ.·-distribʳ (a⁺ ·ⁿ b⁺) (a⁻ ·ⁿ b⁻) c⁻ i +ⁿ ℕ.·-distribʳ (a⁺ ·ⁿ b⁻) (a⁻ ·ⁿ b⁺) c⁺ i ] ⟩
[ (a⁺ ·ⁿ b⁺ +ⁿ a⁻ ·ⁿ b⁻) ·ⁿ c⁺ +ⁿ (a⁺ ·ⁿ b⁻ +ⁿ a⁻ ·ⁿ b⁺) ·ⁿ c⁻
, (a⁺ ·ⁿ b⁺ +ⁿ a⁻ ·ⁿ b⁻) ·ⁿ c⁻ +ⁿ (a⁺ ·ⁿ b⁻ +ⁿ a⁻ ·ⁿ b⁺) ·ⁿ c⁺
] ∎
}
private
_ : Dec→Bool (discreteℤ [ (3 , 5) ] [ (4 , 6) ]) ≡ true
_ = refl
_ : Dec→Bool (discreteℤ [ (3 , 5) ] [ (4 , 7) ]) ≡ false
_ = refl
|
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{dtk-logos}
\parindent 0pt
\parskip 1.5ex
%\renewcommand{\baselinestretch}{1.33}
\usepackage{graphicx}
\usepackage{amsfonts,amssymb}
\graphicspath{{Images/}}
\title{Proposal on Parallel Betweenness Centrality Algorithm (ParBC)}
\author{Rui Qiu (rq2170), Hao Zhou (hz2754)}
\date{Nov 2021}
%%%%%%
\usepackage[font=default,skip=0pt]{caption}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{multirow}
\usepackage{color}
\usepackage{amsmath, amssymb, amsthm}
\usepackage{mathrsfs}
\usepackage{geometry}
\usepackage{extarrows}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{cite}
\usepackage{enumitem}
\usepackage{titlesec}
\usepackage{hyperref}
\usepackage{array}
%%%%%%%%%%%%
\begin{document}
\maketitle
\section{Introduction}
\paragraph{
With the popularity of personal mobile devices, social applications have rapidly growing user populations. Among these users there is a huge amount of interaction, which forms a vast social network worthy of further investigation. One task is to identify how central or important a single node (user) is. Many metrics and algorithms have been proposed to evaluate this, including the one our project focuses on, Betweenness Centrality, which was formulated by Freeman \cite{freeman1977set} as shown below.
}
\[Betweenness(k) = \sum_{i \neq k \neq j}(\frac{\sigma_{i,j}(k)}{\sigma_{i,j}}) \]
\paragraph{
where $\sigma_{i,j}(k)$ is the number of shortest paths between $i$ and $j$ that pass through node $k$, and $\sigma_{i,j}$ is the total number of shortest paths between $i$ and $j$. This reflects the significance, or centrality, of a node in a network. For example, a social network contains many groups; a node with high betweenness centrality might sit at the center of the whole network, at the center of some large group, or act as a gateway (bridge) between two major groups.
}
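A minimal sequential Python sketch (a Brandes-style accumulation for unweighted graphs, assuming an adjacency-dict representation; the parallel version is the subject of this project) may make the formula concrete. As a tiny worked example, in the path graph $a - b - c$ the only shortest path between $a$ and $c$ passes through $b$, so $Betweenness(b) = 1$.
\begin{verbatim}
from collections import deque

def betweenness(adj):
    # adj: dict mapping node -> iterable of neighbours (unweighted)
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        # back-propagate pair dependencies along reverse BFS order
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each unordered pair is visited from both endpoints
    return {v: b / 2 for v, b in bc.items()}

print(betweenness({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}))
# -> {'a': 0.0, 'b': 1.0, 'c': 0.0}
\end{verbatim}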
\paragraph{
In reality, however, such networks are likely to be huge, which makes it challenging to apply the Betweenness Centrality algorithm to real-world data. This project therefore aims to provide a parallel implementation of the Betweenness Centrality algorithm so that centrality analysis of real-world social networks can be performed efficiently.
}
\section{Project Scope and Data}
\paragraph{
The project focuses on undirected graphs. More specifically, it targets the networks listed in SNAP \cite{snapnets}. We will first try our implementation on a GitHub network \cite{rozemberczki2019multiscale} with 37,700 nodes and 289,003 edges, where each node represents a GitHub developer with at least 10 repositories and an edge is established between two developers who mutually follow each other. Although this network is unweighted, we will implement a more general algorithm so that our program can handle both weighted and unweighted networks. If time permits, we will test our project on even larger networks in this social network data collection.
}
\section{Implementation Plan}
\paragraph{
The algorithm is divided into two steps: the first computes the pair-wise shortest paths, and the second counts how many times each node occurs on those shortest paths. Both steps can be implemented in a parallel fashion.
}
\paragraph{
To calculate the pair-wise shortest paths, it is natural to turn to the Floyd–Warshall algorithm \cite{floyd1962algorithm}. However, it has two major limitations. First, it does not lend itself to a parallel implementation, since the shortest path from $i$ to $j$ passing through $\{1..k+1\}$ depends on the result for the shortest path from $i$ to $j$ passing through $\{1..k\}$; this forces the threads to synchronize and wait in this loop. Second, its complexity is $O(n^3)$, where $n$ is the number of nodes, and since real social networks are edge-sparse, the algorithm cannot take advantage of this feature. Therefore, we turn to Dijkstra's algorithm \cite{dijkstra1959note}, which is better suited to edge-sparse graphs and can compute shortest paths from one source independently of all other sources, as sketched after this paragraph. In addition, there are other shortest path algorithms that we might try to implement in a parallel fashion, namely Johnson's algorithm \cite{johnson1977efficient} and Brandes' algorithm \cite{brandes2001faster}.
}
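As a rough illustration of why Dijkstra's algorithm parallelizes naturally, the hypothetical Python sketch below farms each single-source problem out to a pool of worker processes. It assumes an adjacency dict mapping each node to (neighbour, weight) pairs; the framework and language of our actual implementation may differ.
\begin{verbatim}
import heapq
from multiprocessing import Pool

def dijkstra(args):
    adj, s = args  # adj: node -> list of (neighbour, weight) pairs
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float('inf')):
            continue  # stale heap entry
        for w, wt in adj[v]:
            nd = d + wt
            if nd < dist.get(w, float('inf')):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return s, dist

def all_pairs_shortest_paths(adj, workers=8):
    # every source is independent, so the map is embarrassingly
    # parallel; adj is pickled once per task in this naive sketch
    # (on Windows, call under an `if __name__ == '__main__':` guard)
    with Pool(workers) as pool:
        return dict(pool.map(dijkstra, [(adj, s) for s in adj]))
\end{verbatim}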
\paragraph{
To count how many times a node occurs on those shortest paths, we can use a map-and-reduce approach and perform the calculation in a parallel manner, with the node id as the key and the count as the value. This makes good use of parallelism when computing the final betweenness centrality for all nodes in a network; a sketch of the reduce step follows this paragraph.
}
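The reduce step can be as simple as key-wise summation of the partial counts emitted by each worker; a minimal sketch (assuming each worker returns a plain dict of per-node contributions) follows.
\begin{verbatim}
from collections import Counter

def reduce_centrality(partials):
    # partials: iterable of {node_id: partial betweenness contribution}
    total = Counter()
    for part in partials:
        total.update(part)  # Counter.update sums values key-wise
    return dict(total)
\end{verbatim}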
\section{Conclusion}
\paragraph{
The project aims to provide a parallel implementation of the Betweenness Centrality algorithm that is efficient for large real-world social networks. It will perform both the shortest path and the betweenness calculations in a parallel fashion, and will also exploit the fact that most real-world social networks are edge-sparse.
}
\newpage
\bibliographystyle{IEEEtran}
\bibliography{IEEEabrv,mylib}
\end{document} |
// Copyright 2018-2020 Erik Kouters (Falcons)
// SPDX-License-Identifier: Apache-2.0
/*
* observerRtDB.hpp
*
* Created on: Oct 10, 2018
* Author: Erik Kouters
*/
#ifndef OBSERVERRTDB_HPP_
#define OBSERVERRTDB_HPP_
#include <stdio.h>
#include "observer.hpp"
#include <boost/thread/thread.hpp>
#include "boost/thread/mutex.hpp"
#include "cDiagnostics.hpp"
#include "FalconsRtDB2.hpp"
class observerRtDB: public observer
{
public:
observerRtDB(const uint robotID, const bool cameraCorrectlyMounted, const float minimumLockTime, const float frequency);
virtual ~observerRtDB();
virtual void update_own_position(std::vector<robotLocationType> robotLocations, double timestampOffset);
virtual void update_own_ball_position(std::vector<ballPositionType> ballLocations, double timestampOffset);
virtual void update_own_obstacle_position(std::vector<obstaclePositionType> obstacleLocations, double timestampOffset);
virtual void update_own_ball_possession(const bool hasPossession);
virtual void update_multi_cam_statistics(multiCamStatistics const &multiCamStats);
RtDB2 *_rtdb;
int _myRobotId;
boost::mutex mtx;
float _frequency;
// data may be written asynchronously using the update* function
// there is one thread which writes into RTDB hence triggering worldModel and the rest of the software
T_VIS_BALL_POSSESSION _visionBallPossession = false;
T_LOCALIZATION_CANDIDATES _robotLocCandidates;
T_OBSTACLE_CANDIDATES _obstacleCandidates;
T_BALL_CANDIDATES _ballCandidates;
T_MULTI_CAM_STATISTICS _multiCamStats;
boost::thread _heartBeatThread; // here will be the effective heartBeat
private:
bool isSamePosition(const float x, const float y, const float theta);
void initializeRtDB();
bool heartBeatTick();
void heartBeatLoop();
int localizationUniqueObjectIDIdx;
int ballUniqueObjectIDIdx;
int obstacleUniqueObjectIDIdx;
const int MAX_UNIQUEOBJID = 10000;
};
#endif
|
import h5py
import os
import numpy as np
# Concatenate the channel-0 I/Q datasets from several acquisition files
# into a single resizable HDF5 file, then add a sample-index timestamp.
path = ('C:/Users/kid/Documents/QTLab2122/SingleIRsource/data/raw/edge_acq/good/')
path_save = ('C:/Users/kid/Documents/QTLab2122/SingleIRsource/data/raw/')
names = [ 'acq_170522_205935.h5', 'acq_170522_210232.h5', 'acq_170522_210601.h5', 'acq_170522_210815.h5', 'acq_170522_210919.h5']
f = h5py.File(path_save + 'final_250522_222222.h5', 'a')
for i in range(len(names)):
I1, Q1, I2, Q2 = [], [], [], []
with h5py.File(path + names[i], 'r') as hdf:
I1 = np.array(hdf['i_signal_ch0'])
Q1 = np.array(hdf['q_signal_ch0'])
#I2 = np.array(hdf['i_signal_ch1'])
#Q2 = np.array(hdf['q_signal_ch1'])
if i == 0:
# Create the dataset at first
f.create_dataset('i_signal_ch0', data=I1, compression="gzip", chunks=True, maxshape=(None,None))
f.create_dataset('q_signal_ch0', data=Q1, compression="gzip", chunks=True, maxshape=(None,None))
#f.create_dataset('i_signal_ch1', data=I2, compression="gzip", chunks=True, maxshape=(None,None))
#f.create_dataset('q_signal_ch1', data=Q2, compression="gzip", chunks=True, maxshape=(None,None))
else:
# Append new data to it
f['i_signal_ch0'].resize((f['i_signal_ch0'].shape[0] + I1.shape[0]), axis=0)
f['i_signal_ch0'][-I1.shape[0]:] = I1
f['q_signal_ch0'].resize((f['q_signal_ch0'].shape[0] + Q1.shape[0]), axis=0)
f['q_signal_ch0'][-Q1.shape[0]:] = Q1
#f['i_signal_ch1'].resize((f['i_signal_ch1'].shape[0] + I2.shape[0]), axis=0)
#f['i_signal_ch1'][-I2.shape[0]:] = I2
#f['q_signal_ch1'].resize((f['q_signal_ch1'].shape[0] + Q2.shape[0]), axis=0)
#f['q_signal_ch1'][-Q2.shape[0]:] = Q2
print("I am on iteration {} and 'data' chunk has shape:{}".format(i,f['i_signal_ch0'].shape))
l = f['i_signal_ch0'].shape[0]
t = np.linspace(0, l-1, l)
f.create_dataset('timestamp_ch0', data=t, compression="gzip", chunks=True, maxshape=(None,))
#f.create_dataset('timestamp_ch1', data=t, compression="gzip", chunks=True, maxshape=(None,))
f.close() |
r=0.95
https://sandbox.dams.library.ucdavis.edu/fcrepo/rest/collection/sherry-lehmann/catalogs/d7302s/media/images/d7302s-016/svc:tesseract/full/full/0.95/default.jpg Accept:application/hocr+xml
|
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Neuroimage-like layout
\documentclass[1p,11pt]{elsarticle}
%\documentclass[5p]{elsarticle}
% For kindle
%\documentclass[1p,12pt]{elsarticle}
%\usepackage{geometry}
%\geometry{a6paper,hmargin={.2cm,.2cm},vmargin={1cm,1cm}}
% End kindle
\usepackage{geometry}
\geometry{letterpaper,hmargin={1in,1in},vmargin={1in,1in}}
\usepackage{graphicx}
\usepackage{amsmath,amsfonts,amssymb}
\usepackage{bm}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{url}
\usepackage[breaklinks=true,letterpaper=true,colorlinks,bookmarks=false]{hyperref}
\usepackage[table]{xcolor}
% For review: line numbers
\usepackage[pagewise]{lineno}
\biboptions{sort}
\definecolor{deep_blue}{rgb}{0,.2,.5}
\definecolor{dark_blue}{rgb}{0,.15,.5}
\hypersetup{pdftex, % needed for pdflatex
breaklinks=true, % so long urls are correctly broken across lines
colorlinks=true,
linkcolor=dark_blue,
citecolor=deep_blue,
}
% Float parameters, for more full pages.
\renewcommand{\topfraction}{0.9} % max fraction of floats at top
\renewcommand{\bottomfraction}{0.8} % max fraction of floats at bottom
\renewcommand{\textfraction}{0.07} % allow minimal text w. figs
% Parameters for FLOAT pages (not text pages):
\renewcommand{\floatpagefraction}{0.6} % require fuller float pages
% % N.B.: floatpagefraction MUST be less than topfraction !!
\def\B#1{\mathbf{#1}}
%\def\B#1{\bm{#1}}
\def\trans{^\mathsf{T}}
% A compact fraction
\def\slantfrac#1#2{\kern.1em^{#1}\kern-.1em/\kern-.1em_{#2}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% For the final version: to output PDF figures from latex
\newif\iffinal
\finaltrue
%\finalfalse
\iffinal\else
\usepackage[tightpage,active]{preview}
\fi
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Modification tracking
\usepackage{xcolor}
\usepackage[normalem]{ulem}
\colorlet{markercolor}{purple!50!black}
\newcommand{\ADDED}[1]{\textcolor{markercolor}{\uline{#1}}}
\newcommand{\DELETED}[1]{\textcolor{red}{\sout{#1}}}
% For highlighting changes for reviewer
\usepackage{MnSymbol}
\def\marker{%
\vadjust{{%
\llap{\smash{%
\color{purple}%
\scalebox{1.8}{$\filledmedtriangleright$}}\;}%
}}\hspace*{-.1ex}%
}%
\def\hl#1{\textcolor{markercolor}{%
\protect\marker%
#1%
}}%
% Show the old version
%\renewcommand{\ADDED}[1]{}
%\renewcommand{\DELETED}[1]{#1}
% Show the new version
%\renewcommand{\ADDED}[1]{#1}
%\renewcommand{\DELETED}[1]{}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
% Show line numbers
%\linenumbers
%\linenumbersep 3pt\relax
%\renewcommand\linenumberfont{\normalfont\tiny\sffamily\color{black!50}}
\title{Harnessing cloud computing for high capacity analysis of neuroimaging data}
\author[cdb]{Daniel J. Clark}
\author[mass]{Christian Haselgrove}
\author[mass]{David N. Kennedy}
\author[loni]{Zhizhong Liu}
\author[cdb,cbin]{Michael Milham}
\author[loni]{Petros Petrosyan}
\author[loni]{Carinna M. Torgerson}
\author[loni]{John D. Van Horn}
\author[cdb,cnl]{R. Cameron Craddock\corref{corresponding}}
\cortext[corresponding]{Corresponding author}
\address[cdb]{Center for the Developing Brain, Child Mind Institute, New York, New York, USA}
\address[mass]{Division of Informatics, Department of Psychiatry,
University of Massachusetts Medical School, Worcester, MA, USA}
\address[loni]{The Institute for Neuroimaging and Informatics (INI) and
Laboratory of Neuro Imaging (LONI), Keck School of Medicine of USC,
University of Southern California, Los Angeles, CA, USA}
\address[cbin]{Center for Biomedical Imaging and Neuromodulation,
Nathan S. Kline Institute for Psychiatric Research, Orangeburg, New York, USA}
\address[cnl]{Computational Neuroimaging Laboratory,
Center for Biomedical Imaging and Neuromodulation,
Nathan S. Kline Institute for Psychiatric Research, Orangeburg, New York, USA}
\begin{abstract}
Functional connectomes capture brain interactions via synchronized
fluctuations in the functional magnetic resonance imaging signal. If
measured during rest, they map the intrinsic functional architecture of
the brain. With task-driven experiments they represent integration
mechanisms between specialized brain areas. Analyzing their variability
across subjects and conditions can reveal
markers of brain pathologies and mechanisms underlying cognition.
Methods of estimating functional connectomes from the imaging signal
have undergone rapid developments and the literature is full of diverse
strategies for comparing them. This review aims to clarify links across
functional-connectivity methods as well as to expose different steps
to perform a group study of functional connectomes.
\end{abstract}
\begin{keyword}
Functional connectivity, connectome, group study, effective
connectivity, fMRI, resting-state
\end{keyword}
\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\sloppy % Fed up with messed-up line breaks
\section{Introduction}
Text from CPAC grant:
Conventional neuroimaging tools alone have insufficient capacity for
performing big data analyses in real-world contexts. Using current best
practices, processing a single fMRI dataset on conventional computer
workstations can take almost 5 hours. Using tools that do not permit
automated processing, 1000 datasets will take ~5,000 hours of time, or
2.5 man-years, to prepare for processing. With sufficient programming
skills, an analyst can construct an automated pipeline to process the
data unattended in 5,000 hours or nearly 7 months. If the analyst is
more sophisticated and can parallelize the processing using multi-core
or cluster computer systems, and has access to adequate resources, this
processing can be reduced to anywhere between a few hours and 2 months.
C-PAC makes these levels of processing speedups accessible to all
neuropsychiatric researchers, without requiring the advanced programming
background necessary to implement them. Additionally, C-PAC offers other
improvements over conventional tools, such as the ability to restart a
pipeline with different parameters and only re-compute the affected
stages. All of these innovations make it possible for researchers to
perform large-scale data analyses in reasonable amounts of time. The
lack of access to sufficient computational systems needlessly bars many
researchers, who would otherwise be able to make substantial
contributions to the field, from performing ‘big data’ analyses. The
cost of purchasing and maintaining high performance computing systems is
a significant barrier for many researchers. Since most neuroimaging
researchers only periodically need computational resources, but require
fairly large resources when they do, they can benefit substantially from
pay-as-you-go computational services such as the Amazon Web Services
(AWS) Elastic Compute Cloud (EC2).
To meet this demand, C-PAC has been
ported to EC2 under a contract from the National Database for Autism Research
(NDAR; see letter of support from NDAR manager Dan Hall). The proposed
work will extend this capability to a Software-as-a-Service (SaaS) model
that makes the entire C-PAC ecosystem available over the Internet using
an execution dashboard (thin client) running in a web browser. This
approach simplifies the cloud-computing model so that it is readily
accessible to the large audience of researchers who lack computational
resources and knowledge to provision a system in the cloud. Pushing the
envelope for what is possible in connectome analyses requires taking
advantage of new computing technologies. Exploiting the lack of data
dependencies between datasets, pipelining systems have been able to
achieve substantial reductions in processing times by running several
pipelines in parallel on multicore and cluster computer systems. An
advantage of this approach is that it can use pre-existing software
tools that are designed to work with conventional computer processors
(CPUs). A disadvantage is that it is costly to scale, as an incremental
increase in performance will require a similar increase in hardware.
Enabling even larger scale computation, such as higher resolution graph
analyses, and non-parametric statistics that require thousands of
iterations, will require taking advantage of new computing technologies
such as Graphical Processing Units (GPU). When matched by computational
power, GPUs are significantly cheaper than conventional CPUs in both
cost and energy consumption [21,22]. Although there have been a number of
reports lauding the ability of GPU implementations of neuroimaging
algorithms to achieve 195x speedups for nonlinear registration [23], 33x
speedups for permutation testing [24], 6x speedups for surface
extraction [25], 100x speedups for diffusion tractography [26], and 250x
speedups for computing functional connectivity [27], the use of GPUs in
this field has yet to become widespread. This is due to the lack of
off-the-shelf tools that support GPU architectures, and the specialist
knowledge required to develop software for these architectures.
As a
part of the proposed improvements to C-PAC, high scale graph theoretic
methods and non-parametric statistical group-level analyses will be
implemented using GPU architectures. This advance in C-PAC will make the
sophisticated analyses enabled by the higher throughput of GPUs
accessible to the wider community of neuroimaging researchers. By
implementing C-PAC as a Software-as-a-Service in Amazon’s EC2, it
creates the first-of-its-kind, pay-as-you-go solution for performing big
data science on neuroimaging data. Other neuroimaging centric Amazon
Machine Images exist, allowing users with advanced computational
background to process large datasets on clusters provisioned in the
cloud. C-PAC will be the first to do it using thin clients that simplify
all of the technical implementation details inherent in this process. To
a user, it will look no different than using a web application. The
Cloud Computing Model: To make large-scale processing available to
researchers without access to computational resources, C-PAC has been
ported to the Amazon Web Services (AWS) Elastic Compute Cloud (EC2)
under a contract from NDAR (see letter from NDAR manager Dan Hall).
Using a freely available preconfigured C-PAC Amazon Machine Image (AMI,
a virtual machine), users can quickly provision a computing system and
preprocess data without having to deal with software installation or
hardware configuration. Distributing a standardized software ecosystem
avoids errors related to incompatible or unsupported software versions
and permits pipelines to be exactly reproduced down to the fine details.
Data can either be transferred into and out of EC2 using conventional
network transfer mechanisms (e.g., scp) or C-PAC can be configured to
access data from the AWS Simple Storage Service (S3). The C-PAC AMI
supports the StarCluster cluster computing toolkit to allow users to
easily create computing clusters in EC2 that can be dynamically rescaled
based on the users needs. The C-PAC team has used this AMI to perform
cortical thickness extractions on nearly 3,060 structural scans from the
NDAR database (doi:10.15154/1165646), and recently preprocessed all
1,102 datasets from ABIDE in approximately 63 hours. Until now the
enthusiasm for performing neuroimaging analysis in EC2 has been tempered
by the lack of clear models that illustrate the cost and performance
tradeoffs. Based on lessons learned from the aforementioned
preprocessing efforts, a comparison of processing data on local
equipment versus in the cloud is illustrated in Figure 5. For this
model, each dataset includes a structural MRI and one resting state
fMRI, is processed using 4 pipeline configurations, takes 4.5 hours to
process on 8 processors and 15 GB of RAM, and produces 4.5 GB of
outputs. The local workstation is assumed to be a Dell Precision 7910
with dual Intel Xeon E5-2630 2.4 GHz 8-core processors (32 total virtual
CPUs when hyper threading), 64GB of RAM, two 400GB solid state drives
(SSD) for local storage, and a 1100 Watt power supply, that costs \$8,642
(from dell.com on 1/31/2015). The cost of electricity is estimated based
on 90\% usage of the workstation power supply at the US average
commercial rate for November 2014 of \$0.1055 per kilowatt hour
(\url{http://www.eia.gov}, accessed 1/31/2015). The costs associated with
software and hardware maintenance was conservatively estimated to
require 5\% of a research technician’s effort (\$50,000 a year salary +
25\% fringe) for a year. For the cloud computation model, c3.8xlarge EC2
instances are the most comparable to the specified workstation and
include 32 virtual CPUs, 60 GB of RAM, and two 320 GB SSDs and cost
\$1.60 per hour for on-demand instances and average \$0.26 per hour for
spot instances in the US east zone (aws.amazon.com, accessed on
1/31/2015). Spot instances offer a method to pay reduced rates for
“spare” cycles that are not currently being used at on-demand prices and
their price fluctuates based on demand. These instances are reserved at
a “bid” price, and if the cost of the instance goes above that price,
the spot instance is terminated, making them less reliable for
computation. This simulation allows up to 20 instances to be used for
the computation. Additional costs include persistent storage (\$0.10 per
GB per month) to hold the inputs and outputs of the processing and the
cost of transferring processed data out of the cloud, at a rate of \$0.09
per GB. Importantly, the cloud computation model does not require
software maintenance costs since the C-PAC development team maintains
the AMIs. Figure 5 illustrates that the results of this simulation
strongly favor cloud computing. The stepwise nature of the AWS line
reflects that since up to 80 datasets (20 nodes, 4 datasets each) can be
processed in parallel, the processing time and hence the price increases
for every 80th dataset that is added. When using the more expensive and
robust on-demand instances up to 5,000 datasets can be processed
for less than the cost of owning and maintaining a workstation
(Fig. 5A). The cost drops substantially when using spot
instances, even for a more conservative model that uses
twice the average spot instance price in its
formulation. Using spot instances is cheaper than
maintenance and electricity costs alone for processing
up to 4,000 datasets. Another advantage of cloud
computing is that additional nodes can be added with no
additional overhead costs, resulting in much faster
computation for the same bottom line (Fig. 5B), whereas
adding twenty nodes to the local cluster would increase
the capital costs 20 fold. The cost of processing 1000
subjects is almost 16 times cheaper for EC2 spot
instances than for local computation (Fig. 5C). Thus
illustrating that cloud computing is a very
cost-effective alternative for neuroimaging data
processing when these infrastructures are not available
and can provide a simpler and more scalable solution
even when they are.
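To make the comparison above concrete, the following back-of-the-envelope Python sketch reproduces the stepwise EC2 cost model using the prices quoted above (20 nodes at 4 datasets each, 4.5 hours and 4.5 GB of output per dataset, \$0.09/GB egress, \$0.10/GB-month storage). It is an illustrative simplification, not the exact simulation behind Figure 5.
\begin{verbatim}
import math

def ec2_cost(n_datasets, hourly=1.60, nodes=20, per_node=4,
             hours_per_dataset=4.5, out_gb=4.5,
             egress_per_gb=0.09, storage_gb_month=0.10):
    # up to nodes*per_node datasets run at once -> stepwise runtime
    waves = math.ceil(n_datasets / (nodes * per_node))
    runtime_h = waves * hours_per_dataset
    compute = runtime_h * nodes * hourly       # all nodes billed per wave
    egress = n_datasets * out_gb * egress_per_gb
    storage = n_datasets * out_gb * storage_gb_month  # ~one month held
    return compute + egress + storage

print(ec2_cost(1000))               # on-demand estimate
print(ec2_cost(1000, hourly=0.26))  # average spot-price estimate
\end{verbatim}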

AIM 3. EXTEND C-PAC TO LEVERAGE CLOUD COMPUTING AND GRAPHICAL PROCESSING
UNIT (GPU) TECHNOLOGY TO FURTHER OPTIMIZE COMPUTATIONAL EFFICIENCY.

Implementing C-PAC as a Software-as-a-Service in the
cloud: Although C-PAC has been ported into the Amazon
EC2, it is far from the turnkey solution that the C-PAC
team will build in releases 3 through 7 of the
development timeline. Software-as-a-service is a
software distribution model in which applications are
hosted by a service provider and accessed over the
Internet. It can provide a user access to a considerable
amount of computational resources, without any need to
deal with software and hardware maintenance. Figure 8
illustrates the basic concept for the implementation of
C-PAC in the cloud. Rather than installing the entire
C-PAC ecosystem locally, the user will log into the
Amazon cloud and start the C-PAC AMI on a medium size
on-demand instance. The C-PAC AMI will be equipped with
a web server that is running the C-PAC dashboard. By
connecting to the C-PAC dashboard, the user will be able
to configure pipelines and data bundles, initiate
pipeline execution and monitor the pipeline’s progress.
When the user starts a pipeline, the server will
provision a computing cluster in the cloud based on the
configuration provided by the user, and will start
executing the pipeline. When processing has completed,
the computing cluster will be terminated, but the master
node will remain running until terminated by the user.
Our goal is that the cost of process a single data
bundle should not exceed \$0.75 for spot instances and
\$2.50 for on-demand instances. Development of the C-PAC
SaaS infrastructure will occur in three phases. The
first phase, to be completed in release 2,
infrastructure will be developed for building,
maintaining, and testing C-PAC AMIs. The C-PAC dashboard
will be constructed in releases 3 through 6. The
dashboard will be developed in the Django Python web
framework (\url{https://www.djangoproject.com/}). Although
specifically developed for the cloud, the dashboard will
also be usable locally to submit and monitor jobs on
cluster systems. The third component, developed in
releases 5 and 6, consists of quality assessment tools
that will enable to user to view and rate pipeline
outputs using a web browser. From this information, the
user will be able to create subjects list for
group-level analysis. Although the development of the
dashboard and the quality assessment are listed as
different features, they will be tightly integrated.
\section{The Amazon Web Services Elastic Compute Cloud (EC2)}
4000 characters including spaces total Introduction (1576) The National
Database for Autism Research (NDAR, \url{http://ndar.nih.gov}) and other
NIH/NIMH data repositories are amassing and sharing thousands of
neuroimaging datasets. With the availability of this deluge of data and
the development of the NDAR infrastructure for its organization and
storage, the bottleneck for applying discovery science to psychiatric
neuroimaging has shifted to the computational challenges associated with
data processing and analysis. Maximizing the potential of these data
requires automated pipelines that can leverage high-performance
computing (HPC) architectures to achieve high throughput computation
without compromising on the quality of the results. A disadvantage of
this approach is that it requires access to HPC systems that are not
always available, particularly at smaller research institutions, or in
developing countries. Cloud computing resources such as Amazon Web
Services (AWS) Elastic Compute Cloud (EC2) offer a “pay as you go”
model that might be an economical alternative to the large capital costs
and maintenance burden of dedicated HPC infrastructures. Realizing this
need, the developers of the Laboratory of Neuro Imaging (LONI) Pipeline,
the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC)
Computational Environment (CE) and the Configurable Pipeline for the
Analysis of Connectomes (C-PAC) have implemented pipelines in EC2 that
interface with NDAR. Each pipeline was used to perform a benchmark
analysis of 2,000 structural images from the NDAR database to establish
the feasibility of this approach.
\subsection{EC2 Instances}
\subsection{Storage}
\subsection{Data Transfer}
\section{Processing Neuroimaging Data in the Cloud}
Methods. Each of the three pipelines was installed into Amazon
Machine Images (AMIs) and customized to perform structural preprocessing
on NDAR data. The LONI Pipeline (\url{http://pipeline.loni.usc.edu}) was
enhanced to permit direct access to NDAR collections for workflow-based
data processing (Torgerson, in press). Workflows can be created from a
combination of commonly available neuroimaging processing tools
represented as Pipeline Modules. With respect to the benchmark analysis,
specifically developed Pipeline Modules captured the results from
FreeSurfer and FSL FirstAll, updated NDAR with the results and
returned them to the NDAR Amazon Cloud storage server. C-PAC
(\url{http://fcp-indi.github.io}) is a configurable pipeline for performing
comprehensive functional connectivity analyses that was extended to
include the Advanced Normalization Tools (ANTs) cortical thickness
methodology (Tustison, 2014) and to interface it with NDAR
(\url{https://github.com/FCP-INDI/ndar-dev}). Results of this workflow include
3D volumes of cortical thickness and regional measures derived from the
Desikan-Killiany-Tourville atlas
(\url{http://mindboggle.info/faq/labels.html}). NITRC-CE
(\url{http://www.nitrc.org/projects/nitrc_es/}) is an AMI that is
pre-installed with popular neuroimaging tools. A series of scripts were
developed for NITRC-CE to interact with NDAR, calculate a series of
quality assessment measures on the data, perform structural imaging
analysis using FreeSurfer and FSL FirstAll, and to write the
results back to NDAR (\url{https://github.com/chaselgrove/ndar}).
Results. Speeds obtained for processing structural data in EC2 were
consistent with those obtained for local multi-core processors. For
example, using an EC2 instance with 4 processors and 15 GB of RAM
(m3.xlarge), the C-PAC pipeline was able to complete the ANTS cortical
thickness pipeline in 8.5 hours per subject, in comparison to 9 hours on
a local workstation with 12 processors and 64 GB of RAM. EC2 processing
cost \$1.94 per image for on demand instances and an estimated \$0.26 per
image when using spot instances. Conclusions. Analyzing data using
cloud computing is an affordable solution, with low hardware and
software maintenance burdens; this can be beneficial for smaller
laboratories and when data is already in the cloud. Further reductions
in cost can be obtained using lower-cost spot instances, which
fluctuate in price and may get shut down if demand gets too high.
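For scale, a back-of-the-envelope estimate for the 2,000-image benchmark
follows directly from the per-image figures above (assuming every image is
processed at the quoted rates):
\[
2000 \times \$1.94 = \$3880 \ \text{(on-demand)}, \qquad
2000 \times \$0.26 = \$520 \ \text{(spot)}.
\]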
\subsection{Cloud computing cost and performance}
\subsection{Other Considerations}
\paragraph{Models for minimizing data transfer}
\paragraph{Optimizing allocation of resources}
\paragraph{Security and privacy}
Kleinschmidt on ongoing activity and Bertrand Thirion on statistical
data processing. RCC would like to acknowledge support by a NARSAD Young
Investigator Grant from the Brain \& Behavior Research Foundation. The
authors would like to thank the anonymous reviewers for their
suggestions, which improved the manuscript.
\section{Conclusion}
{
%\clearpage
\section*{References} \small
%\bibliographystyle{elsarticle-num-names}
\bibliographystyle{model1b-num-names} \bibliography{Clark2015_AWS} }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\end{document}
|
# Dimensionality Reduction
So far we have seen how important features are for defining an algorithm that can carry out its task by learning from data. The problem is that we may unfortunately find ourselves with too many features and too few data points (too many columns and too few rows), or we may be told that at most a given number of features can be used for our predictions. In such cases we can use **algorithms that help us reduce the dimensionality of our dataset by keeping only the relevant components, even without knowing which model we will use; moreover, these datasets can come from many different contexts, such as text, images and much more**.
## Singular Value Decomposition (SVD)
The __[singular value decomposition](https://it.wikipedia.org/wiki/Decomposizione_ai_valori_singolari)__ relies on geometric notions to factorize the starting matrix into simpler matrices that provide information on the properties of every component we are considering. Mathematically:
\begin{equation}
\Large M_{n \times m} = U_{n \times n} D_{n \times m} V^{\dagger}_{m \times m}
\end{equation}
where $ M_{n \times m}$ is our starting matrix with n rows and m columns, $U_{n \times n}$ is a unitary (orthogonal) matrix, $D_{n \times m}$ is an $n \times m$ diagonal matrix of singular values, and $V^{\dagger}_{m \times m}$ is the conjugate transpose of a unitary (orthogonal) matrix.<br>
In practice what interests us is the matrix $D$, since the values on its diagonal represent the variance of each single component. What is this useful for? To understand it, let's look at an example.
```python
import numpy as np
# our starting matrix
M = np.array([[1, 5, 6 ], [3, 4, 19], [2, 7, 24]])
U, D, V = np.linalg.svd(M)
# np.linalg.svd returns D as a 1D array of the singular values: since the matrix
# is diagonal, numpy saves memory this way and the values behave the same in
# computations; np.diag rebuilds the full diagonal matrix for display
print(f'Matrix U:\n {U}\n Matrix D :\n {np.diag(D)}\n Matrix V :\n {V}')
```
Matrix U:
[[-0.22097491 -0.91114121 0.34783874]
[-0.60034273 0.4081545 0.68774887]
[-0.76860828 -0.05684721 -0.63718891]]
Matrix D :
[[32.61541883 0. 0. ]
[ 0. 3.45287401 0. ]
[ 0. 0. 1.14547607]]
Matrix V :
[[-0.1091269 -0.27246326 -0.95595768]
[ 0.05781499 -0.96181284 0.26753223]
[ 0.99234507 0.02607372 -0.12071212]]
Now suppose we want to keep only the two most important components to reconstruct the dataset. To do so, let's see what happens when we remove one value from the diagonal. Remember, we want $M \approx UDV^{\dagger}$; to avoid dimensionality problems in the subsequent dot product, when a singular value is dropped the corresponding column is removed from the matrix $U$ and the corresponding row from the matrix $V^{\dagger}$.
```python
# when we drop a value of D, U loses the matching column and V the matching row
# @ means dot product (matrix multiplication) in numpy
# using all the components reproduces the original M
print(f'original matrix obtained with all features:\n {U @ np.diag(D) @ V}')
# remove the first column of U and the first row of V
print(f'matrix obtained eliminating the first element in diagonal {D[0]}:\n'
      f'{U[:, 1:] @ np.diag(D[1:]) @ V[1:, :]}')
# remove the middle column of U and the middle row of V
print(f'matrix obtained eliminating the second element in diagonal {D[1]}:\n'
      f'{U[:, [0,2]] @ np.diag(D[[0,2]]) @ V[[0,2], :]}')
# remove the last column of U and the last row of V
print(f'matrix obtained eliminating the third element in diagonal {D[2]}:\n'
      f'{U[:, [0,1]] @ np.diag(D[:2]) @ V[[0,1], :]}')
```
original matrix obtained with all features:
[[ 1. 5. 6.]
[ 3. 4. 19.]
[ 2. 7. 24.]]
matrix obtained eliminating the first element in diagonal 32.6154188335189:
[[ 0.36635519 0.47395901 3.114092 ]
[-0.10824657 -1.14170015 -1.13647509]
[-0.02077816 0.75549322 -0.00762631]]
matrix obtained eliminating the second element in diagonal 3.4528740055280096:
[[ 1.18188917 1.97408316 6.84167133]
[ 2.91852099 5.35548866 18.6229652 ]
[ 2.01134829 6.81120935 24.0525129 ]]
matrix obtained eliminating the third element in diagonal 1.1454760652624991:
[[ 0.60460909 4.98961116 6.04809665]
[ 2.21823068 3.97945913 19.09509699]
[ 2.72429743 7.0190308 23.91189408]]
As we can see, if we remove the smallest value from the diagonal matrix, the reconstructed matrix stays very close to the original; in this case the decomposition is called __[TruncatedSVD](https://langvillea.people.cofc.edu/DISSECTION-LAB/Emmie%27sLSI-SVDModule/p5module.html)__.<br>
From this we understand that when we want the most relevant components it is enough to select the largest diagonal values until we have the desired number of them.<br>
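The same idea is available off the shelf: here is a minimal sketch with scikit-learn's `TruncatedSVD` (the choice of 2 components is arbitrary).
```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

M = np.array([[1, 5, 6], [3, 4, 19], [2, 7, 24]])
# keep only the 2 components with the largest singular values
svd = TruncatedSVD(n_components=2)
M_reduced = svd.fit_transform(M)  # shape (3, 2): data projected onto 2 components
print(M_reduced)
print(svd.singular_values_)       # the two largest singular values of M
```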
## PCA
PCA (Principal Component Analysis) exploits precisely this SVD algorithm: it lets us map our problem by projecting it onto a smaller space, under the condition of preserving as much as possible the norm of our vectors, by exploiting the variance of each associated feature obtained through the diagonal matrix. Note that the new features are now called **principal components**; if you have doubts, look __[here](https://medium.com/analytics-vidhya/what-is-principal-component-analysis-cf880cf95a0c)__.<br>
Since __[scikit](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html)__ already implements this function, we will use it.
```python
import time
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import plot_confusion_matrix, classification_report
#classification data
diabetes = pd.read_csv('../data/diabetes2.csv')
X_diabetes, y_diabetes = diabetes.drop('Outcome', axis = 1).values, diabetes.Outcome.values
target_names = ["Not Diabetes", "Diabetes"]
print(f'Original data: {X_diabetes.shape[0]} samples, {X_diabetes.shape[1]} features')
# keep the 5 most important components
diabetes_pca = PCA(n_components=5)
# fit the data
diabetes_pca.fit(X_diabetes)
# transform the data
X_pca = diabetes_pca.transform(X_diabetes)
print(f'Reduced data: {X_pca.shape[0]} samples, {X_pca.shape[1]} features')
print(f'Reduced data PCA output: \n {X_pca}')
print("PCA : ")
print(f'- components: \n{diabetes_pca.components_} \n'
f'- explained variance: \n {diabetes_pca.explained_variance_} \n'
f'- explained variance ratio: \n {diabetes_pca.explained_variance_ratio_} \n'
f'- singular values: \n {diabetes_pca.singular_values_}\n'
f'- noise variance values: {diabetes_pca.noise_variance_}' )
print('-'*80)
# prepare the data
print("The reduced data has been split into 80% training, 20% testing")
X_pca_train, X_pca_test, y_diabetes_train, y_diabetes_test = train_test_split(
X_pca, y_diabetes, random_state=0, test_size = 0.2)
print("Let's train a Gradient Boosting Classifier on the reduced data:")
tree = GradientBoostingClassifier()
start = time.time()
tree.fit(X_pca_train, y_diabetes_train)
end = time.time()
print(f"Time taken to train Gradient Boosting Classifier on reduced data: {end - start}s ")
plot_confusion_matrix(tree, X_pca_test, y_diabetes_test, display_labels=target_names)
plt.title("Confusion matrix of classification")
plt.show()
print(classification_report(y_diabetes_test, tree.predict(X_pca_test), target_names= target_names))
```
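A common way to choose the number of components (a minimal sketch, assuming `X_diabetes` is loaded as above) is to look at the cumulative explained variance ratio:
```python
import numpy as np
from sklearn.decomposition import PCA

pca_full = PCA().fit(X_diabetes)  # keep all the components
cumvar = np.cumsum(pca_full.explained_variance_ratio_)
# smallest number of components that explains at least 95% of the variance
n_components = int(np.searchsorted(cumvar, 0.95) + 1)
print(cumvar, n_components)
```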
## Non-negative matrix factorization (NMF or NNMF)
PCA offers one way to reduce the dimensionality of the dataset, but there is a problem: the reconstruction of the matrix can yield negative values, and negative values are in general hard to interpret and analyze. For this reason the goal of __[NMF](https://scikit-learn.org/stable/modules/decomposition.html#non-negative-matrix-factorization-nmf-or-nnmf)__ is to factorize the matrix while requiring that all entries of the factor matrices be non-negative. Since this still leaves many possible factorizations, the condition usually imposed is that the distance between the decomposition and the original matrix be **as small as possible according to the Frobenius distance, a particular** __[matrix norm](https://it.wikipedia.org/wiki/Norma_matriciale)__; **regularization terms are also introduced, or other metrics are used, to ensure a flexible and non-divergent result; to learn more, look** __[here](https://scikit-learn.org/stable/modules/decomposition.html#nmf-with-a-beta-divergence)__.
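For reference, the standard formulation with the Frobenius norm can be written as follows, where $X$ is the data matrix and $W$, $H$ are the two non-negative factors:
\begin{equation}
\Large \min_{W \geq 0,\, H \geq 0} \frac{1}{2} \lVert X - WH \rVert_F^2
\end{equation}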
Let's now use the model on the diabetes dataset.
```python
from sklearn.decomposition import NMF
nmf = NMF(n_components=5, verbose = 0, max_iter=500, init= 'nndsvda' )
nmf.fit(X_diabetes)
# transform the data
X_NMF = nmf.transform(X_diabetes)
print(f'Reduced data: {X_NMF.shape[0]} samples, {X_NMF.shape[1]} features')
print("NMF : ")
print(f'- components: \n{nmf.components_} \n'
f'- regularization: {nmf.regularization} \n'
f'- reconstruction error: {nmf.reconstruction_err_}\n'
f'- iterations: {nmf.n_iter_}')
print('-'*80)
# prepare the data
print("The reduced data has been split into 80% training, 20% testing")
X_NMF_train, X_NMF_test, y_diabetes_train, y_diabetes_test = train_test_split(
X_NMF, y_diabetes, random_state=0, test_size = 0.2)
print("Let's train a Gradient Boosting Classifier on the reduced data:")
tree = GradientBoostingClassifier()
start = time.time()
tree.fit(X_NMF_train, y_diabetes_train)
end = time.time()
print(f"Time taken to train Gradient Boosting Classifier on reduced data: {end - start}s ")
plot_confusion_matrix(tree, X_NMF_test, y_diabetes_test, display_labels=target_names)
plt.title("Confusion matrix of classification")
plt.show()
print(classification_report(y_diabetes_test, tree.predict(X_NMF_test), target_names= target_names))
```
## Latent Dirichlet Allocation (LDA)
__[LDA](https://scikit-learn.org/stable/modules/decomposition.html#latent-dirichlet-allocation-lda)__ is a dimensionality reduction algorithm that is __[probabilistic and generative](https://ichi.pro/it/modelli-grafici-probabilistici-generativi-vs-discriminativi-40857457895478)__. The difference from discriminative models is that here we try to determine a probability distribution through which we can compute the probability associated with an event. Translated into mathematics, discriminative models determine $P(Y|X)$, while generative ones determine $P(Y,X)$; this also makes it possible to generate new values with an associated probability, and in general such models are not limited to mere classification. For details, watch this __[video on GANs](https://www.youtube.com/watch?v=8L11aMN5KY8)__, which are generative models.<br>
***Be careful though: these models are less precise, since they assume the data are i.i.d., a condition that discriminative models do not necessarily require!***<br>
Coming back to LDA, this algorithm tries to understand the underlying structure of the data by reading only a part of it; in doing so it divides the structure into categories and keeps only the most relevant ones in order to recreate the complete structure. **This algorithm supports "online" learning, i.e. every single new data point can be used to train the model and adapt it instantly to possible changes, as sketched below; if instead you want to retrain the model only once a certain number of data points has accumulated, you can use "batch" learning.**
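A minimal sketch of the online mode, assuming the `X_diabetes` matrix from the sections above (the split into 10 mini-batches is arbitrary):
```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

lda_online = LatentDirichletAllocation(n_components=5, learning_method='online')
# feed the (non-negative) data in mini-batches, as if it arrived over time
for chunk in np.array_split(X_diabetes, 10):
    lda_online.partial_fit(chunk)
print(lda_online.transform(X_diabetes).shape)
```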
```python
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components=5, n_jobs=-1)
lda.fit(X_diabetes)
# transform the data
X_lda = lda.transform(X_diabetes)
print(f'Reduced data: {X_lda.shape[0]} samples, {X_lda.shape[1]} features')
print("LDA : ")
print(f'- components: \n{lda.components_} \n'
f'- bound_: {lda.bound_} \n'
f'- exp dirichlet components:\n {lda.exp_dirichlet_component_}\n'
f'- iterations: {lda.n_iter_}')
print('-'*80)
# prepare the data
print("The reduced data has been split into 80% training, 20% testing")
X_lda_train, X_lda_test, y_diabetes_train, y_diabetes_test = train_test_split(
X_lda, y_diabetes, random_state=0, test_size = 0.2)
print("Let's train a Gradient Boosting Classifier on the reduced data:")
tree = GradientBoostingClassifier()
start = time.time()
tree.fit(X_lda_train, y_diabetes_train)
end = time.time()
print(f"Time taken to train Gradient Boosting Classifier on reduced data: {end - start}s ")
plot_confusion_matrix(tree, X_lda_test, y_diabetes_test, display_labels=target_names)
plt.title("Confusion matrix of classification")
plt.show()
print(classification_report(y_diabetes_test, tree.predict(X_lda_test), target_names= target_names))
```
In this notebook we have therefore seen how to use some techniques to reduce the dimensionality of our dataset, under the condition that models trained on it can still perform as well as possible. Many other techniques exist; to learn more, consult the __[scikit guide on dimensionality reduction](https://scikit-learn.org/stable/modules/decomposition.html#decompositions)__.
***
CONGRATULATIONS, YOU HAVE FINISHED THE LESSON ON PCA, LDA AND NMF. SEE YOU SOON!
|
If $f$ is holomorphic on the punctured disk $D(z,r)$, then the integral of $f$ over the circle of radius $r$ is equal to the integral of $f$ over the circle of radius $r'$ for any $0 < r' < r$. |
[STATEMENT]
lemma list_conj: "Ifm bs (list_conj ps) = (\<forall>p\<in> set ps. Ifm bs p)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. Ifm bs (list_conj ps) = (\<forall>p\<in>set ps. Ifm bs p)
[PROOF STEP]
by (induct ps, auto simp add: list_conj_def) |
import os
import pandas as pd
from tqdm import tqdm
import random
import argparse
import numpy as np
import scipy.stats
from math import sqrt
from sklearn.metrics import mean_absolute_error as MAE
from sklearn.metrics import mean_squared_error as MSE
parser = argparse.ArgumentParser()
parser.add_argument("--num_user", type=int, default=134, help="number of users to sample in each replication")
parser.add_argument("--iter", type=int, default=1000, help="number of iterations")
SEED = 1984
random.seed(SEED)
# get index of key in given array
def getIndex(array, key):
return int(np.argwhere(array==key))
def getGIndex(array, key):
index = getIndex(array, key)
return int(np.floor(index/10))
def getGIndexFromRow(array, row):
    key = row['audio_sample']
return getGIndex(array, key)
if __name__ == '__main__':
args = parser.parse_args()
NUM_USER = args.num_user
BOOTSTRAP_ITER = args.iter
# load vcc2018_mos
df = pd.read_csv('./vcc2018_mos.csv', encoding='utf-8')
# df.iloc[df.loc[df['audio_sample'].str.contains('NAT')].index, df.columns.get_loc('MOS')] = 5
# remove NAT audio
df = df[df['audio_sample'].str.contains('NAT')==False]
mos_df = df[['audio_sample', 'MOS']].groupby(['audio_sample']).mean()
sys_df = df[['system_ID', 'MOS']].groupby(['system_ID']).mean()
user_list = df['user_ID'].unique()
print('bootstrap estimation for intrinsic MOS of VCC2018 submitted audio')
    print('number of users: {}, number of iterations: {}'.format(NUM_USER, BOOTSTRAP_ITER))
MSEs = []
MAEs = []
RMSEs = []
LCCs = []
SRCCs = []
tenMSEs = []
tenMAEs = []
tenRMSEs = []
tenLCCs = []
tenSRCCs = []
sysMSEs = []
sysMAEs = []
sysRMSEs = []
sysLCCs = []
sysSRCCs = []
    # start bootstrapping
for b in tqdm(range(BOOTSTRAP_ITER)):
# get random sampled users
random_user = random.sample(list(user_list), NUM_USER)
# get sub df
sub_df = df[df['user_ID'].isin(random_user)]
        # for 10-utterance groups
# get unique audio list
u_audio_list = sub_df.audio_sample.unique()
# get unique_df audio from df
u_df = df[df['audio_sample'].isin(u_audio_list)]
# clustering
random.shuffle(u_audio_list)
group_df = pd.DataFrame(data={'audio_sample': u_audio_list})
group_df['audio_group'] = np.floor(group_df.index/10)
# merge group_df into sub_df and u_df
sub_df = pd.merge(sub_df, group_df, how='left', on=['audio_sample'])
u_df = pd.merge(u_df, group_df, how='left', on=['audio_sample'])
g_mos_df = u_df[['audio_group', 'MOS']].groupby(['audio_group']).mean()
# calculate mean
sub_mos = sub_df[['audio_sample', 'MOS']].groupby(['audio_sample']).mean()
sub_tenmos = sub_df[['audio_group', 'MOS']].groupby(['audio_group']).mean()
sub_sys = sub_df[['system_ID', 'MOS']].groupby(['system_ID']).mean()
# merge selected df with whole df
merge_mos = pd.merge(sub_mos, mos_df, how='inner', on='audio_sample')
merge_tenmos = pd.merge(sub_tenmos, g_mos_df, how='inner', on='audio_group')
merge_sys = pd.merge(sub_sys, sys_df, how='inner', on='system_ID')
# get two mos list
mos1 = merge_mos.iloc[:,0].values
mos2 = merge_mos.iloc[:,1].values
# get two mos list
tenmos1 = merge_tenmos.iloc[:,0].values
tenmos2 = merge_tenmos.iloc[:,1].values
sys1 = merge_sys.iloc[:,0].values
sys2 = merge_sys.iloc[:,1].values
# calculate statistics for utterance, MSE, RMSE, MAE, rho, rho_s
mse = MSE(mos1, mos2)
rmse = sqrt(mse)
mae = MAE(mos1, mos2)
lcc = scipy.stats.pearsonr(mos1, mos2)[0]
srcc = scipy.stats.spearmanr(mos1, mos2)[0]
# add to list
MSEs.append(mse)
RMSEs.append(rmse)
MAEs.append(mae)
LCCs.append(lcc)
SRCCs.append(srcc)
# calculate statistics for 10utterance, MSE, RMSE, MAE, rho, rho_s
tenmse = MSE(tenmos1, tenmos2)
tenrmse = sqrt(tenmse)
tenmae = MAE(tenmos1, tenmos2)
tenlcc = scipy.stats.pearsonr(tenmos1, tenmos2)[0]
tensrcc = scipy.stats.spearmanr(tenmos1, tenmos2)[0]
# add to list
tenMSEs.append(tenmse)
tenRMSEs.append(tenrmse)
tenMAEs.append(tenmae)
tenLCCs.append(tenlcc)
tenSRCCs.append(tensrcc)
# system level correlation
# calculate statistics, MSE, RMSE, MAE, rho, rho_s
smse = MSE(sys1, sys2)
srmse = sqrt(smse)
smae = MAE(sys1, sys2)
slcc = scipy.stats.pearsonr(sys1, sys2)[0]
ssrcc = scipy.stats.spearmanr(sys1, sys2)[0]
# add to list
sysMSEs.append(smse)
sysRMSEs.append(srmse)
sysMAEs.append(smae)
sysLCCs.append(slcc)
sysSRCCs.append(ssrcc)
MSEs = np.array(MSEs)
RMSEs = np.array(RMSEs)
MAEs = np.array(MAEs)
LCCs = np.array(LCCs)
SRCCs = np.array(SRCCs)
tenMSEs = np.array(tenMSEs)
tenRMSEs = np.array(tenRMSEs)
tenMAEs = np.array(tenMAEs)
tenLCCs = np.array(tenLCCs)
tenSRCCs = np.array(tenSRCCs)
sysMSEs = np.array(sysMSEs)
sysRMSEs = np.array(sysRMSEs)
sysMAEs = np.array(sysMAEs)
sysLCCs = np.array(sysLCCs)
sysSRCCs = np.array(sysSRCCs)
print('===========================================')
print('============== utterance level ============')
print('===========================================')
print('\n\t\tMEAN\tSD\tMIN\tMAX')
print('\tMSE\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(MSEs.mean(), MSEs.std(), MSEs.min(), MSEs.max()))
print('\tRMSE\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(RMSEs.mean(), RMSEs.std(), RMSEs.min(), RMSEs.max()))
print('\tMAE\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(MAEs.mean(), MAEs.std(), MAEs.min(), MAEs.max()))
print('\tLCC\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(LCCs.mean(), LCCs.std(), LCCs.min(), LCCs.max()))
print('\tSRCC\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(SRCCs.mean(), SRCCs.std(), SRCCs.min(), SRCCs.max()))
print('')
print('===========================================')
print('============== 10 utterance level =========')
print('===========================================')
print('\n\t\tMEAN\tSD\tMIN\tMAX')
print('\tMSE\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(tenMSEs.mean(), tenMSEs.std(), tenMSEs.min(), tenMSEs.max()))
print('\tRMSE\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(tenRMSEs.mean(), tenRMSEs.std(), tenRMSEs.min(), tenRMSEs.max()))
print('\tMAE\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(tenMAEs.mean(), tenMAEs.std(), tenMAEs.min(), tenMAEs.max()))
print('\tLCC\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(tenLCCs.mean(), tenLCCs.std(), tenLCCs.min(), tenLCCs.max()))
print('\tSRCC\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(tenSRCCs.mean(), tenSRCCs.std(), tenSRCCs.min(), tenSRCCs.max()))
print('')
print('===========================================')
print('============== system level ===============')
print('===========================================')
print('\n\t\tMEAN\tSD\tMIN\tMAX')
print('\tMSE\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(sysMSEs.mean(), sysMSEs.std(), sysMSEs.min(), sysMSEs.max()))
print('\tRMSE\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(sysRMSEs.mean(), sysRMSEs.std(), sysRMSEs.min(), sysRMSEs.max()))
print('\tMAE\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(sysMAEs.mean(), sysMAEs.std(), sysMAEs.min(), sysMAEs.max()))
print('\tLCC\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(sysLCCs.mean(), sysLCCs.std(), sysLCCs.min(), sysLCCs.max()))
print('\tSRCC\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}'.format(sysSRCCs.mean(), sysSRCCs.std(), sysSRCCs.min(), sysSRCCs.max()))
|
State Before: α : Type u_1
l : RBNode α
v : α
r : RBNode α
⊢ toList (balance2 l v r) = toList l ++ v :: toList r State After: α : Type u_1
l : RBNode α
v : α
r : RBNode α
⊢ toList
(match l, v, r with
| a, x, node red (node red b y c) z d => node red (node black a x b) y (node black c z d)
| a, x, node red b y (node red c z d) => node red (node black a x b) y (node black c z d)
| a, x, b => node black a x b) =
toList l ++ v :: toList r Tactic: unfold balance2 State Before: α : Type u_1
l : RBNode α
v : α
r : RBNode α
⊢ toList
(match l, v, r with
| a, x, node red (node red b y c) z d => node red (node black a x b) y (node black c z d)
| a, x, node red b y (node red c z d) => node red (node black a x b) y (node black c z d)
| a, x, b => node black a x b) =
toList l ++ v :: toList r State After: no goals Tactic: split <;> simp |
[STATEMENT]
lemma swap_registers_left:
assumes "compatible R S"
shows "R a *\<^sub>u S b *\<^sub>u c = S b *\<^sub>u R a *\<^sub>u c"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. R a *\<^sub>u S b *\<^sub>u c = S b *\<^sub>u R a *\<^sub>u c
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
compatible R S
goal (1 subgoal):
1. R a *\<^sub>u S b *\<^sub>u c = S b *\<^sub>u R a *\<^sub>u c
[PROOF STEP]
unfolding compatible_def
[PROOF STATE]
proof (prove)
using this:
register R \<and> register S \<and> (\<forall>a b. R a *\<^sub>u S b = S b *\<^sub>u R a)
goal (1 subgoal):
1. R a *\<^sub>u S b *\<^sub>u c = S b *\<^sub>u R a *\<^sub>u c
[PROOF STEP]
by metis |
from __future__ import division
from builtins import zip, range
from future.utils import with_metaclass
import numpy as np
import abc
import scipy.stats as stats
import scipy.special as special
from scipy.special import logsumexp
try:
from ..util.cstats import sample_markov
except ImportError:
from ..util.stats import sample_markov
from ..util.general import top_eigenvector, cumsum
from .hmm_states import HMMStatesPython, HMMStatesEigen, _SeparateTransMixin
from .hsmm_states import HSMMStatesEigen
# TODO these classes are currently backed by HMM message passing, but they can
# be made much more time and memory efficient. i have the code to do it in some
# other branches, but dense matrix multiplies are actually competitive.
class _HSMMStatesIntegerNegativeBinomialBase(
with_metaclass(abc.ABCMeta, HSMMStatesEigen, HMMStatesEigen)
):
@property
def rs(self):
return np.array([d.r for d in self.dur_distns])
@property
def ps(self):
return np.array([d.p for d in self.dur_distns])
### HMM embedding parameters
@abc.abstractproperty
def hmm_trans_matrix(self):
pass
@property
def hmm_aBl(self):
if self._hmm_aBl is None:
self._hmm_aBl = self.aBl.repeat(self.rs, axis=1)
return self._hmm_aBl
@property
def hmm_pi_0(self):
if not self.left_censoring:
rs = self.rs
starts = np.concatenate(((0,), rs.cumsum()[:-1]))
pi_0 = np.zeros(rs.sum())
pi_0[starts] = self.pi_0
return pi_0
else:
return top_eigenvector(self.hmm_trans_matrix)
def clear_caches(self):
super(_HSMMStatesIntegerNegativeBinomialBase, self).clear_caches()
self._hmm_aBl = None
def _map_states(self):
themap = np.arange(self.num_states).repeat(self.rs).astype("int32")
self.stateseq = themap[self.stateseq]
def generate_states(self):
self.stateseq = sample_markov(
T=self.T, trans_matrix=self.hmm_trans_matrix, init_state_distn=self.hmm_pi_0
)
self._map_states()
def Viterbi_hmm(self):
from hmm_messages_interface import viterbi
self.stateseq = viterbi(
self.hmm_trans_matrix,
self.hmm_aBl,
self.hmm_pi_0,
np.empty(self.hmm_aBl.shape[0], dtype="int32"),
)
self._map_states()
def resample_hmm(self):
alphan, self._normalizer = HMMStatesEigen._messages_forwards_normalized(
self.hmm_trans_matrix, self.hmm_pi_0, self.hmm_aBl
)
self.stateseq = HMMStatesEigen._sample_backwards_normalized(
alphan, self.hmm_trans_matrix.T.copy()
)
self._map_states()
self.alphan = alphan # TODO remove
def resample_hsmm(self):
betal, betastarl = HSMMStatesEigen.messages_backwards(self)
        HSMMStatesEigen.sample_forwards(self, betal, betastarl)
def resample(self):
self.resample_hmm()
def Viterbi(self):
self.Viterbi_hmm()
def hmm_messages_forwards_log(self):
return HMMStatesEigen._messages_forwards_log(
self.hmm_trans_matrix, self.hmm_pi_0, self.hmm_aBl
)
class HSMMStatesIntegerNegativeBinomial(_HSMMStatesIntegerNegativeBinomialBase):
@property
def hmm_trans_matrix(self):
return self.hmm_bwd_trans_matrix
@property
def hmm_bwd_trans_matrix(self):
rs, ps = self.rs, self.ps
starts, ends = cumsum(rs, strict=True), cumsum(rs, strict=False)
trans_matrix = np.zeros((ends[-1], ends[-1]))
enters = self.bwd_enter_rows
for (i, j), Aij in np.ndenumerate(self.trans_matrix):
block = trans_matrix[starts[i] : ends[i], starts[j] : ends[j]]
block[-1, :] = Aij * (1 - ps[i]) * enters[j]
if i == j:
block[...] += np.diag(np.repeat(ps[i], rs[i])) + np.diag(
np.repeat(1 - ps[i], rs[i] - 1), k=1
)
assert np.allclose(trans_matrix.sum(1), 1) or self.trans_matrix.shape == (1, 1)
return trans_matrix
@property
def bwd_enter_rows(self):
return [
stats.binom.pmf(np.arange(r)[::-1], r - 1, p)
for r, p in zip(self.rs, self.ps)
]
@property
def hmm_fwd_trans_matrix(self):
rs, ps = self.rs, self.ps
starts, ends = cumsum(rs, strict=True), cumsum(rs, strict=False)
trans_matrix = np.zeros((ends[-1], ends[-1]))
exits = self.fwd_exit_cols
for (i, j), Aij in np.ndenumerate(self.trans_matrix):
block = trans_matrix[starts[i] : ends[i], starts[j] : ends[j]]
block[:, 0] = Aij * exits[i] * (1 - ps[i])
if i == j:
block[...] += np.diag(np.repeat(ps[i], rs[i])) + np.diag(
np.repeat(1 - ps[i], rs[i] - 1) * (1 - exits[i][:-1]), k=1
)
assert np.allclose(trans_matrix.sum(1), 1)
assert (0 <= trans_matrix).all() and (trans_matrix <= 1.0).all()
return trans_matrix
@property
def fwd_exit_cols(self):
return [(1 - p) ** (np.arange(r)[::-1]) for r, p in zip(self.rs, self.ps)]
def messages_backwards2(self):
# this method is just for numerical testing
# returns HSMM messages using HMM embedding. the way of the future!
Al = np.log(self.trans_matrix)
T, num_states = self.T, self.num_states
betal = np.zeros((T, num_states))
betastarl = np.zeros((T, num_states))
starts = cumsum(self.rs, strict=True)
ends = cumsum(self.rs, strict=False)
foo = np.zeros((num_states, ends[-1]))
for idx, row in enumerate(self.bwd_enter_rows):
foo[idx, starts[idx] : ends[idx]] = row
bar = np.zeros_like(self.hmm_bwd_trans_matrix)
for start, end in zip(starts, ends):
bar[start:end, start:end] = self.hmm_bwd_trans_matrix[start:end, start:end]
pmess = np.zeros(ends[-1])
# betal[-1] is 0
for t in range(T - 1, -1, -1):
pmess += self.hmm_aBl[t]
betastarl[t] = logsumexp(np.log(foo) + pmess, axis=1)
betal[t - 1] = logsumexp(Al + betastarl[t], axis=1)
pmess = logsumexp(np.log(bar) + pmess, axis=1)
pmess[ends - 1] = np.logaddexp(
pmess[ends - 1], betal[t - 1] + np.log(1 - self.ps)
)
betal[-1] = 0.0
return betal, betastarl
### NEW
def meanfieldupdate(self):
return self.meanfieldupdate_sampling()
# return self.meanfieldupdate_Estep()
def meanfieldupdate_sampling(self):
from ..util.general import count_transitions
num_r_samples = (
self.model.mf_num_samples if hasattr(self.model, "mf_num_samples") else 10
)
self.expected_states = np.zeros((self.T, self.num_states))
self.expected_transcounts = np.zeros((self.num_states, self.num_states))
self.expected_durations = np.zeros((self.num_states, self.T))
eye = np.eye(self.num_states) / num_r_samples
for i in range(num_r_samples):
self.model._resample_from_mf()
self.clear_caches()
self.resample()
self.expected_states += eye[self.stateseq]
self.expected_transcounts += (
count_transitions(self.stateseq_norep, minlength=self.num_states)
/ num_r_samples
)
for state in range(self.num_states):
self.expected_durations[state] += (
np.bincount(
self.durations_censored[self.stateseq_norep == state],
minlength=self.T,
)[: self.T].astype(np.double)
/ num_r_samples
)
def meanfieldupdate_Estep(self):
# TODO bug in here? it's not as good as sampling
num_r_samples = (
self.model.mf_num_samples if hasattr(self.model, "mf_num_samples") else 10
)
num_stateseq_samples_per_r = (
self.model.mf_num_stateseq_samples_per_r
if hasattr(self.model, "mf_num_stateseq_samples_per_r")
else 1
)
self.expected_states = np.zeros((self.T, self.num_states))
self.expected_transcounts = np.zeros((self.num_states, self.num_states))
self.expected_durations = np.zeros((self.num_states, self.T))
mf_aBl = self.mf_aBl
for i in range(num_r_samples):
for d in self.dur_distns:
d._resample_r_from_mf()
self.clear_caches()
trans = self.mf_bwd_trans_matrix # TODO check this
init = self.hmm_mf_bwd_pi_0
aBl = mf_aBl.repeat(self.rs, axis=1)
hmm_alphal, hmm_betal = HMMStatesEigen._messages_log(self, trans, init, aBl)
# collect stateseq and transitions statistics from messages
(
hmm_expected_states,
hmm_expected_transcounts,
normalizer,
) = HMMStatesPython._expected_statistics_from_messages(
trans, aBl, hmm_alphal, hmm_betal
)
expected_states, expected_transcounts, _ = self._hmm_stats_to_hsmm_stats(
hmm_expected_states, hmm_expected_transcounts, normalizer
)
self.expected_states += expected_states / num_r_samples
self.expected_transcounts += expected_transcounts / num_r_samples
# collect duration statistics by sampling from messages
for j in range(num_stateseq_samples_per_r):
self._resample_from_mf(trans, init, aBl, hmm_alphal, hmm_betal)
for state in range(self.num_states):
self.expected_durations[state] += np.bincount(
self.durations_censored[self.stateseq_norep == state],
minlength=self.T,
)[: self.T].astype(np.double) / (
num_r_samples * num_stateseq_samples_per_r
)
def _hmm_stats_to_hsmm_stats(
self, hmm_expected_states, hmm_expected_transcounts, normalizer
):
rs = self.rs
starts = np.concatenate(((0,), np.cumsum(rs[:-1])))
dotter = np.zeros((rs.sum(), len(rs)))
for idx, (start, length) in enumerate(zip(starts, rs)):
dotter[start : start + length, idx] = 1.0
expected_states = hmm_expected_states.dot(dotter)
expected_transcounts = dotter.T.dot(hmm_expected_transcounts).dot(dotter)
expected_transcounts.flat[:: expected_transcounts.shape[0] + 1] = 0
return expected_states, expected_transcounts, normalizer
def _resample_from_mf(self, trans, init, aBl, hmm_alphal, hmm_betal):
self.stateseq = HMMStatesEigen._sample_forwards_log(hmm_betal, trans, init, aBl)
self._map_states()
@property
def hmm_mf_bwd_pi_0(self):
rs = self.rs
starts = np.concatenate(((0,), rs.cumsum()[:-1]))
mf_pi_0 = np.zeros(rs.sum())
mf_pi_0[starts] = self.mf_pi_0
return mf_pi_0
@property
def mf_bwd_trans_matrix(self):
rs = self.rs
starts, ends = cumsum(rs, strict=True), cumsum(rs, strict=False)
trans_matrix = np.zeros((ends[-1], ends[-1]))
Elnps, Eln1mps = zip(
*[
d._fixedr_distns[d.ridx]._mf_expected_statistics()
for d in self.dur_distns
]
)
Eps, E1mps = np.exp(Elnps), np.exp(Eln1mps) # NOTE: actually exp(E[ln(p)]) etc
enters = self.mf_bwd_enter_rows(rs, Eps, E1mps)
for (i, j), Aij in np.ndenumerate(self.mf_trans_matrix):
block = trans_matrix[starts[i] : ends[i], starts[j] : ends[j]]
            block[-1, :] = Aij * E1mps[i] * enters[j]
            if i == j:
                block[...] += np.diag(np.repeat(Eps[i], rs[i])) + np.diag(
                    np.repeat(E1mps[i], rs[i] - 1), k=1
)
assert np.all(trans_matrix >= 0)
return trans_matrix
    def mf_bwd_enter_rows(self, rs, Eps, E1mps):
return [
self._mf_binom(np.arange(r)[::-1], r - 1, Ep, E1mp)
for r, Ep, E1mp in zip(rs, Eps, E1mps)
]
@staticmethod
def _mf_binom(k, n, p1, p2):
return np.exp(
special.gammaln(n + 1)
- special.gammaln(k + 1)
- special.gammaln(n - k + 1)
+ k * p1
+ (n - k) * p2
)
class HSMMStatesIntegerNegativeBinomialVariant(_HSMMStatesIntegerNegativeBinomialBase):
@property
def hmm_trans_matrix(self):
return self.hmm_bwd_trans_matrix
@property
def hmm_bwd_trans_matrix(self):
rs, ps = self.rs, self.ps
starts, ends = cumsum(rs, strict=True), cumsum(rs, strict=False)
trans_matrix = np.zeros((rs.sum(), rs.sum()))
for (i, j), Aij in np.ndenumerate(self.trans_matrix):
block = trans_matrix[starts[i] : ends[i], starts[j] : ends[j]]
block[-1, 0] = Aij * (1 - ps[i])
if i == j:
block[...] += np.diag(np.repeat(ps[i], rs[i])) + np.diag(
np.repeat(1 - ps[i], rs[i] - 1), k=1
)
assert np.allclose(trans_matrix.sum(1), 1)
return trans_matrix
class HSMMStatesIntegerNegativeBinomialSeparateTrans(
_SeparateTransMixin, HSMMStatesIntegerNegativeBinomial
):
pass
class HSMMStatesDelayedIntegerNegativeBinomial(HSMMStatesIntegerNegativeBinomial):
@property
def hmm_trans_matrix(self):
# return self.hmm_trans_matrix_orig
return self.hmm_trans_matrix_2
@property
def hmm_trans_matrix_orig(self):
rs, ps, delays = self.rs, self.ps, self.delays
starts, ends = (
cumsum(rs + delays, strict=True),
cumsum(rs + delays, strict=False),
)
trans_matrix = np.zeros((ends[-1], ends[-1]))
enters = self.bwd_enter_rows
for (i, j), Aij in np.ndenumerate(self.trans_matrix):
block = trans_matrix[starts[i] : ends[i], starts[j] : ends[j]]
if delays[i] == 0:
block[-1, : rs[j]] = Aij * enters[j] * (1 - ps[i])
else:
block[-1, : rs[j]] = Aij * enters[j]
if i == j:
block[: rs[i], : rs[i]] += np.diag(np.repeat(ps[i], rs[i])) + np.diag(
np.repeat(1 - ps[i], rs[i] - 1), k=1
)
if delays[i] > 0:
block[rs[i] - 1, rs[i]] = 1 - ps[i]
block[rs[i] :, rs[i] :] = np.eye(delays[i], k=1)
assert np.allclose(trans_matrix.sum(1), 1.0)
return trans_matrix
@property
def hmm_trans_matrix_1(self):
rs, ps, delays = self.rs, self.ps, self.delays
starts, ends = (
cumsum(rs + delays, strict=True),
cumsum(rs + delays, strict=False),
)
trans_matrix = np.zeros((ends[-1], ends[-1]))
enters = self.bwd_enter_rows
for (i, j), Aij in np.ndenumerate(self.trans_matrix):
block = trans_matrix[starts[i] : ends[i], starts[j] : ends[j]]
block[-1, : rs[j]] = Aij * enters[j] * (1 - ps[i])
if i == j:
block[-rs[i] :, -rs[i] :] += np.diag(np.repeat(ps[i], rs[i])) + np.diag(
np.repeat(1 - ps[i], rs[i] - 1), k=1
)
if delays[i] > 0:
block[: delays[i] :, : delays[i]] = np.eye(delays[i], k=1)
block[delays[i] - 1, delays[i]] = 1
assert np.allclose(trans_matrix.sum(1), 1.0)
return trans_matrix
@property
def hmm_trans_matrix_2(self):
rs, ps, delays = self.rs, self.ps, self.delays
starts, ends = (
cumsum(rs + delays, strict=True),
cumsum(rs + delays, strict=False),
)
trans_matrix = np.zeros((ends[-1], ends[-1]))
enters = self.bwd_enter_rows
for (i, j), Aij in np.ndenumerate(self.trans_matrix):
block = trans_matrix[starts[i] : ends[i], starts[j] : ends[j]]
block[-1, 0] = Aij * (1 - ps[i])
if i == j:
block[-rs[i] :, -rs[i] :] += np.diag(np.repeat(ps[i], rs[i])) + np.diag(
np.repeat(1 - ps[i], rs[i] - 1), k=1
)
if delays[i] > 0:
block[: delays[i] :, : delays[i]] = np.eye(delays[i], k=1)
block[delays[i] - 1, -rs[i] :] = enters[i]
assert np.allclose(trans_matrix.sum(1), 1.0)
return trans_matrix
@property
def hmm_aBl(self):
if self._hmm_aBl is None:
self._hmm_aBl = self.aBl.repeat(self.rs + self.delays, axis=1)
return self._hmm_aBl
@property
def hmm_pi_0(self):
if self.left_censoring:
raise NotImplementedError
else:
rs, delays = self.rs, self.delays
starts = np.concatenate(((0,), (rs + delays).cumsum()[:-1]))
pi_0 = np.zeros((rs + delays).sum())
pi_0[starts] = self.pi_0
return pi_0
@property
def delays(self):
return np.array([d.delay for d in self.dur_distns])
def _map_states(self):
themap = (
np.arange(self.num_states).repeat(self.rs + self.delays).astype("int32")
)
self.stateseq = themap[self.stateseq]
class HSMMStatesTruncatedIntegerNegativeBinomial(
HSMMStatesDelayedIntegerNegativeBinomial
):
@property
def bwd_enter_rows(self):
As = [
np.diag(np.repeat(p, r)) + np.diag(np.repeat(1 - p, r - 1), k=1)
for r, p in zip(self.rs, self.ps)
]
enters = [
stats.binom.pmf(np.arange(r)[::-1], r - 1, p)
for A, r, p in zip(As, self.rs, self.ps)
]
# norms = [sum(v.dot(np.linalg.matrix_power(A,d))[-1]*(1-p) for d in range(delay))
# for A,v,p,delay in zip(As,enters,self.ps,self.delays)]
# enters = [v.dot(np.linalg.matrix_power(A,self.delays[state])) / (1.-norm)
enters = [
v.dot(np.linalg.matrix_power(A, self.delays[state]))
for state, (A, v) in enumerate(zip(As, enters))
]
return [
v / v.sum() for v in enters
] # this should just be for numerical purposes
class HSMMStatesDelayedIntegerNegativeBinomialSeparateTrans(
_SeparateTransMixin, HSMMStatesDelayedIntegerNegativeBinomial
):
pass
class HSMMStatesTruncatedIntegerNegativeBinomialSeparateTrans(
_SeparateTransMixin, HSMMStatesTruncatedIntegerNegativeBinomial
):
pass
|
module Main
import System
main : IO ()
main = do
threadID <- fork $ do
sleep 1
putStrLn "Hello"
threadWait threadID
putStrLn "Goodbye"
|
{-# LANGUAGE DeriveDataTypeable, DeriveGeneric, OverloadedStrings #-}
-- |
-- Module : Statistics.Distribution.DiscreteUniform
-- Copyright : (c) 2016 André Szabolcs Szelp
-- License : BSD3
--
-- Maintainer : [email protected]
-- Stability : experimental
-- Portability : portable
--
-- The discrete uniform distribution. There are two parametrizations of
-- this distribution. First is the probability distribution on an
-- inclusive interval {1, ..., n}. This is parametrized with n only,
-- where p_1, ..., p_n = 1/n. ('discreteUniform').
--
-- The second parametrization is the uniform distribution on {a, ..., b} with
-- probabilities p_a, ..., p_b = 1/(a-b+1). This is parametrized with
-- /a/ and /b/. ('discreteUniformAB')
module Statistics.Distribution.DiscreteUniform
(
DiscreteUniform
-- * Constructors
, discreteUniform
, discreteUniformAB
-- * Accessors
, rangeFrom
, rangeTo
) where
import Control.Applicative (empty)
import Data.Aeson (FromJSON(..), ToJSON, Value(..), (.:))
import Data.Binary (Binary(..))
import Data.Data (Data, Typeable)
import System.Random.Stateful (uniformRM)
import GHC.Generics (Generic)
import qualified Statistics.Distribution as D
import Statistics.Internal
-- | The discrete uniform distribution.
data DiscreteUniform = U {
rangeFrom :: {-# UNPACK #-} !Int
-- ^ /a/, the lower bound of the support {a, ..., b}
, rangeTo :: {-# UNPACK #-} !Int
-- ^ /b/, the upper bound of the support {a, ..., b}
} deriving (Eq, Typeable, Data, Generic)
instance Show DiscreteUniform where
showsPrec i (U a b) = defaultShow2 "discreteUniformAB" a b i
instance Read DiscreteUniform where
readPrec = defaultReadPrecM2 "discreteUniformAB" (\a b -> Just (discreteUniformAB a b))
instance ToJSON DiscreteUniform
instance FromJSON DiscreteUniform where
parseJSON (Object v) = do
a <- v .: "uniformA"
b <- v .: "uniformB"
return $ discreteUniformAB a b
parseJSON _ = empty
instance Binary DiscreteUniform where
put (U a b) = put a >> put b
get = discreteUniformAB <$> get <*> get
instance D.Distribution DiscreteUniform where
cumulative (U a b) x
| x < fromIntegral a = 0
| x > fromIntegral b = 1
| otherwise = fromIntegral (floor x - a + 1) / fromIntegral (b - a + 1)
instance D.DiscreteDistr DiscreteUniform where
probability (U a b) k
| k >= a && k <= b = 1 / fromIntegral (b - a + 1)
| otherwise = 0
instance D.Mean DiscreteUniform where
mean (U a b) = fromIntegral (a+b)/2
instance D.Variance DiscreteUniform where
variance (U a b) = (fromIntegral (b - a + 1)^(2::Int) - 1) / 12
instance D.MaybeMean DiscreteUniform where
maybeMean = Just . D.mean
instance D.MaybeVariance DiscreteUniform where
maybeStdDev = Just . D.stdDev
maybeVariance = Just . D.variance
instance D.Entropy DiscreteUniform where
entropy (U a b) = log $ fromIntegral $ b - a + 1
instance D.MaybeEntropy DiscreteUniform where
maybeEntropy = Just . D.entropy
instance D.ContGen DiscreteUniform where
genContVar d = fmap fromIntegral . D.genDiscreteVar d
instance D.DiscreteGen DiscreteUniform where
genDiscreteVar (U a b) = uniformRM (a,b)
-- | Construct discrete uniform distribution on support {1, ..., n}.
-- Range /n/ must be >0.
discreteUniform :: Int -- ^ Range
-> DiscreteUniform
discreteUniform n
| n < 1 = error $ msg ++ "range must be > 0. Got " ++ show n
| otherwise = U 1 n
where msg = "Statistics.Distribution.DiscreteUniform.discreteUniform: "
-- | Construct discrete uniform distribution on support {a, ..., b}.
discreteUniformAB :: Int -- ^ Lower boundary (inclusive)
-> Int -- ^ Upper boundary (inclusive)
-> DiscreteUniform
discreteUniformAB a b
| b < a = U b a
| otherwise = U a b
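-- A sketch of the expected behaviour (the value follows from the
-- definitions above; 'D.probability' refers to the qualified import):
--
-- >>> D.probability (discreteUniformAB 3 6) 4
-- 0.25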
|
import numpy as np
from numpy.testing import assert_array_equal, assert_allclose
from scipy.sparse import csr_matrix
from ef.config.components import BoundaryConditionsConf, SpatialMeshConf
from ef.config.components import Box
from ef.field.solvers.field_solver import FieldSolver
from ef.inner_region import InnerRegion
class TestFieldSolver:
def test_eval_field_from_potential(self):
mesh = SpatialMeshConf((1.5, 2, 1), (0.5, 1, 1)).make(BoundaryConditionsConf())
mesh.potential = np.stack([np.array([[0., 0, 0],
[1, 2, 3],
[4, 3, 2],
[4, 4, 4]]), np.zeros((4, 3))], -1)
FieldSolver.eval_fields_from_potential(mesh)
expected = np.array([[[[-2, 0, 0], [0, 0, 0]], [[-4, 0, 0], [0, 0, 0]], [[-6, 0, 0], [0, 0, 0]]],
[[[-4, -1, 1], [0, 0, 1]], [[-3, -1, 2], [0, 0, 2]], [[-2, -1, 3], [0, 0, 3]]],
[[[-3, 1, 4], [0, 0, 4]], [[-2, 1, 3], [0, 0, 3]], [[-1, 1, 2], [0, 0, 2]]],
[[[0, 0, 4], [0, 0, 4]], [[-2, 0, 4], [0, 0, 4]], [[-4, 0, 4], [0, 0, 4]]]])
assert_array_equal(mesh.electric_field, expected)
def test_global_index(self):
double_index = list(FieldSolver.double_index(np.array((9, 10, 6))))
for i in range(7):
for j in range(8):
for k in range(4):
n = i + j * 7 + k * 7 * 8
assert double_index[n] == (n, i + 1, j + 1, k + 1)
assert list(FieldSolver.double_index(np.array((4, 5, 3)))) == [(0, 1, 1, 1),
(1, 2, 1, 1),
(2, 1, 2, 1),
(3, 2, 2, 1),
(4, 1, 3, 1),
(5, 2, 3, 1)]
def test_init_rhs(self):
mesh = SpatialMeshConf((4, 3, 3)).make(BoundaryConditionsConf())
solver = FieldSolver(mesh, [])
solver.init_rhs_vector_in_full_domain(mesh)
assert_array_equal(solver.rhs, np.zeros(3 * 2 * 2))
mesh = SpatialMeshConf((4, 3, 3)).make(BoundaryConditionsConf(-2))
solver = FieldSolver(mesh, [])
solver.init_rhs_vector_in_full_domain(mesh)
assert_array_equal(solver.rhs, [6, 4, 6, 6, 4, 6, 6, 4, 6, 6, 4, 6]) # what
mesh = SpatialMeshConf((4, 4, 5)).make(BoundaryConditionsConf(-2))
solver = FieldSolver(mesh, [])
solver.init_rhs_vector_in_full_domain(mesh)
assert_array_equal(solver.rhs, [6, 4, 6, 4, 2, 4, 6, 4, 6,
4, 2, 4, 2, 0, 2, 4, 2, 4,
4, 2, 4, 2, 0, 2, 4, 2, 4,
6, 4, 6, 4, 2, 4, 6, 4, 6]) # what
mesh = SpatialMeshConf((8, 12, 5), (2, 3, 1)).make(BoundaryConditionsConf(-1))
solver = FieldSolver(mesh, [])
solver.init_rhs_vector_in_full_domain(mesh)
assert_array_equal(solver.rhs, [49, 40, 49, 45, 36, 45, 49, 40, 49,
13, 4, 13, 9, 0, 9, 13, 4, 13,
13, 4, 13, 9, 0, 9, 13, 4, 13,
49, 40, 49, 45, 36, 45, 49, 40, 49])
mesh = SpatialMeshConf((4, 6, 9), (1, 2, 3)).make(BoundaryConditionsConf())
solver = FieldSolver(mesh, [])
mesh.charge_density = np.array([[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 1, 2, 0], [0, -1, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 3, 4, 0], [0, 0, -1, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 5, 6, 0], [0, -1, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]])
solver.init_rhs_vector_in_full_domain(mesh)
assert_allclose(solver.rhs, -np.array([1, 3, 5, -1, 0, -1, 2, 4, 6, 0, -1, 0]) * np.pi * 4 * 36)
mesh = SpatialMeshConf((4, 6, 9), (1, 2, 3)).make(BoundaryConditionsConf())
solver = FieldSolver(mesh, [])
region = InnerRegion('test', Box((1, 2, 3), (1, 2, 3)), 3)
solver.init_rhs_vector(mesh, [region])
assert_array_equal(solver.rhs, [3, 3, 0, 3, 3, 0, 3, 3, 0, 3, 3, 0])
def test_zero_nondiag_inside_objects(self):
mesh = SpatialMeshConf((4, 6, 9), (1, 2, 3)).make(BoundaryConditionsConf())
solver = FieldSolver(mesh, [])
region = InnerRegion('test', Box((1, 2, 3), (1, 2, 3)), 3)
a = csr_matrix(np.full((12, 12), 2))
assert_array_equal(a.toarray(), [[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]])
result = solver.zero_nondiag_for_nodes_inside_objects(a, mesh, [region])
assert_array_equal(result.toarray(), [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]])
# TODO: check algorithm if on-diagonal zeros should turn into ones
a = csr_matrix(np.array([[4, 0, 3, 0, 0, 0, 0, 2, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 2, 0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 6, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]))
result = solver.zero_nondiag_for_nodes_inside_objects(a, mesh, [region])
assert_array_equal(result.toarray(), [[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 2, 0, 0, 3, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
def test_d2dx2(self):
a = FieldSolver.construct_d2dx2_in_3d(3, 2, 2).toarray()
assert_array_equal(a, [[-2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, -2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, -2, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, -2, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, -2, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, -2, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, -2, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1, -2, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, -2, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, -2, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -2, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, -2]])
a = FieldSolver.construct_d2dx2_in_3d(3, 2, 1).toarray()
assert_array_equal(a, [[-2, 1, 0, 0, 0, 0],
[1, -2, 1, 0, 0, 0],
[0, 1, -2, 0, 0, 0],
[0, 0, 0, -2, 1, 0],
[0, 0, 0, 1, -2, 1],
[0, 0, 0, 0, 1, -2]])
def test_d2dy2(self):
a = FieldSolver.construct_d2dy2_in_3d(3, 2, 2).toarray()
assert_array_equal(a, [[-2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, -2, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, -2, 0, 0, 1, 0, 0, 0, 0, 0, 0],
[1, 0, 0, -2, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, -2, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, -2, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, -2, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, -2, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 0, 0, -2, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 1, 0, 0, -2, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, -2, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, -2]])
a = FieldSolver.construct_d2dy2_in_3d(3, 2, 1).toarray()
assert_array_equal(a, [[-2, 0, 0, 1, 0, 0],
[0, -2, 0, 0, 1, 0],
[0, 0, -2, 0, 0, 1],
[1, 0, 0, -2, 0, 0],
[0, 1, 0, 0, -2, 0],
[0, 0, 1, 0, 0, -2]])
def test_d2dz2(self):
a = FieldSolver.construct_d2dz2_in_3d(3, 2, 2).toarray()
assert_array_equal(a, [[-2, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, -2, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
[0, 0, -2, 0, 0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, -2, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, -2, 0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, -2, 0, 0, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0, -2, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, -2, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0, 0, -2, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0, 0, 0, -2, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, -2, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, -2]])
a = FieldSolver.construct_d2dz2_in_3d(3, 2, 1).toarray()
assert_array_equal(a, [[-2, 0, 0, 0, 0, 0],
[0, -2, 0, 0, 0, 0],
[0, 0, -2, 0, 0, 0],
[0, 0, 0, -2, 0, 0],
[0, 0, 0, 0, -2, 0],
[0, 0, 0, 0, 0, -2]])
def test_construct_equation_matrix(self):
mesh = SpatialMeshConf((4, 6, 9), (1, 2, 3)).make(BoundaryConditionsConf())
solver = FieldSolver(mesh, [])
solver.construct_equation_matrix(mesh, [])
d = -2 * (2 * 2 * 3 * 3 + 3 * 3 + 2 * 2)
x = 2 * 2 * 3 * 3
y = 3 * 3
z = 2 * 2
assert_array_equal(solver.A.toarray(), [[d, x, 0, y, 0, 0, z, 0, 0, 0, 0, 0],
[x, d, x, 0, y, 0, 0, z, 0, 0, 0, 0],
[0, x, d, 0, 0, y, 0, 0, z, 0, 0, 0],
[y, 0, 0, d, x, 0, 0, 0, 0, z, 0, 0],
[0, y, 0, x, d, x, 0, 0, 0, 0, z, 0],
[0, 0, y, 0, x, d, 0, 0, 0, 0, 0, z],
[z, 0, 0, 0, 0, 0, d, x, 0, y, 0, 0],
[0, z, 0, 0, 0, 0, x, d, x, 0, y, 0],
[0, 0, z, 0, 0, 0, 0, x, d, 0, 0, y],
[0, 0, 0, z, 0, 0, y, 0, 0, d, x, 0],
[0, 0, 0, 0, z, 0, 0, y, 0, x, d, x],
[0, 0, 0, 0, 0, z, 0, 0, y, 0, x, d]])
def test_transfer_solution_to_spat_mesh(self):
mesh = SpatialMeshConf((4, 6, 9), (1, 2, 3)).make(BoundaryConditionsConf())
solver = FieldSolver(mesh, [])
solver.phi_vec = np.array(range(1, 3 * 2 * 2 + 1))
solver.transfer_solution_to_spat_mesh(mesh)
assert_array_equal(mesh.potential[1:-1, 1:-1, 1:-1], [[[1, 7], [4, 10]],
[[2, 8], [5, 11]],
[[3, 9], [6, 12]]])
assert_array_equal(mesh.potential, [
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 1, 7, 0], [0, 4, 10, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 2, 8, 0], [0, 5, 11, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 3, 9, 0], [0, 6, 12, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]])
|
#include <petsc.h>
#include "um.h"
PetscErrorCode UMInitialize(UM *mesh) {
mesh->N = 0;
mesh->K = 0;
mesh->P = 0;
mesh->loc = NULL;
mesh->e = NULL;
mesh->bf = NULL;
mesh->ns = NULL;
return 0;
}
PetscErrorCode UMDestroy(UM *mesh) {
PetscErrorCode ierr;
ierr = VecDestroy(&(mesh->loc)); CHKERRQ(ierr);
ierr = ISDestroy(&(mesh->e)); CHKERRQ(ierr);
ierr = ISDestroy(&(mesh->bf)); CHKERRQ(ierr);
ierr = ISDestroy(&(mesh->ns)); CHKERRQ(ierr);
return 0;
}
PetscErrorCode UMViewASCII(UM *mesh, PetscViewer viewer) {
PetscErrorCode ierr;
PetscInt n, k;
const Node *aloc;
const PetscInt *ae, *abf, *ans;
ierr = PetscViewerASCIIPushSynchronized(viewer); CHKERRQ(ierr);
if (mesh->loc && (mesh->N > 0)) {
ierr = PetscViewerASCIISynchronizedPrintf(viewer,"%d nodes at (x,y) coordinates:\n",mesh->N); CHKERRQ(ierr);
ierr = VecGetArrayRead(mesh->loc,(const PetscReal **)&aloc); CHKERRQ(ierr);
for (n = 0; n < mesh->N; n++) {
ierr = PetscViewerASCIISynchronizedPrintf(viewer," %3d : (%g,%g)\n",
n,aloc[n].x,aloc[n].y); CHKERRQ(ierr);
}
ierr = VecRestoreArrayRead(mesh->loc,(const PetscReal **)&aloc); CHKERRQ(ierr);
} else {
ierr = PetscViewerASCIISynchronizedPrintf(viewer,"node coordinates empty or unallocated\n"); CHKERRQ(ierr);
}
if (mesh->e && (mesh->K > 0)) {
ierr = PetscViewerASCIISynchronizedPrintf(viewer,"%d elements:\n",mesh->K); CHKERRQ(ierr);
ierr = ISGetIndices(mesh->e,&ae); CHKERRQ(ierr);
for (k = 0; k < mesh->K; k++) {
ierr = PetscPrintf(PETSC_COMM_WORLD," %3d : %3d %3d %3d\n",
k,ae[3*k+0],ae[3*k+1],ae[3*k+2]); CHKERRQ(ierr);
}
ierr = ISRestoreIndices(mesh->e,&ae); CHKERRQ(ierr);
} else {
ierr = PetscViewerASCIISynchronizedPrintf(viewer,"element index triples empty or unallocated\n"); CHKERRQ(ierr);
}
if (mesh->bf && (mesh->N > 0)) {
ierr = PetscViewerASCIISynchronizedPrintf(viewer,"%d boundary flags at nodes (0 = interior, 1 = boundary, 2 = Dirichlet):\n",mesh->N); CHKERRQ(ierr);
ierr = ISGetIndices(mesh->bf,&abf); CHKERRQ(ierr);
for (n = 0; n < mesh->N; n++) {
ierr = PetscViewerASCIISynchronizedPrintf(viewer," %3d : %1d\n",
n,abf[n]); CHKERRQ(ierr);
}
ierr = ISRestoreIndices(mesh->bf,&abf); CHKERRQ(ierr);
} else {
ierr = PetscViewerASCIISynchronizedPrintf(viewer,"boundary flags empty or unallocated\n"); CHKERRQ(ierr);
}
if (mesh->ns && (mesh->P > 0)) {
ierr = PetscViewerASCIISynchronizedPrintf(viewer,"%d Neumann boundary segments:\n",mesh->P); CHKERRQ(ierr);
ierr = ISGetIndices(mesh->ns,&ans); CHKERRQ(ierr);
for (n = 0; n < mesh->P; n++) {
ierr = PetscViewerASCIISynchronizedPrintf(viewer," %3d : %3d %3d\n",
n,ans[2*n+0],ans[2*n+1]); CHKERRQ(ierr);
}
ierr = ISRestoreIndices(mesh->ns,&ans); CHKERRQ(ierr);
} else {
ierr = PetscViewerASCIISynchronizedPrintf(viewer,"Neumann boundary segments empty or unallocated\n"); CHKERRQ(ierr);
}
ierr = PetscViewerASCIIPopSynchronized(viewer); CHKERRQ(ierr);
return 0;
}
PetscErrorCode UMViewSolutionBinary(UM *mesh, char *filename, Vec u) {
PetscErrorCode ierr;
PetscInt Nu;
PetscViewer viewer;
ierr = VecGetSize(u,&Nu); CHKERRQ(ierr);
if (Nu != mesh->N) {
SETERRQ2(PETSC_COMM_SELF,1,
"incompatible sizes of u (=%d) and number of nodes (=%d)\n",Nu,mesh->N);
}
ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,filename,FILE_MODE_WRITE,&viewer); CHKERRQ(ierr);
ierr = VecView(u,viewer); CHKERRQ(ierr);
ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
return 0;
}
PetscErrorCode UMReadNodes(UM *mesh, char *filename) {
PetscErrorCode ierr;
PetscInt twoN;
PetscViewer viewer;
if (mesh->N > 0) {
SETERRQ(PETSC_COMM_SELF,1,"nodes already created?\n");
}
ierr = VecCreate(PETSC_COMM_WORLD,&mesh->loc); CHKERRQ(ierr);
ierr = VecSetFromOptions(mesh->loc); CHKERRQ(ierr);
ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,filename,FILE_MODE_READ,&viewer); CHKERRQ(ierr);
ierr = VecLoad(mesh->loc,viewer); CHKERRQ(ierr);
ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
ierr = VecGetSize(mesh->loc,&twoN); CHKERRQ(ierr);
if (twoN % 2 != 0) {
SETERRQ1(PETSC_COMM_SELF,2,"node locations loaded from %s are not N pairs\n",filename);
}
mesh->N = twoN / 2;
return 0;
}
PetscErrorCode UMCheckElements(UM *mesh) {
PetscErrorCode ierr;
const PetscInt *ae;
PetscInt k, m;
if ((mesh->K == 0) || (mesh->e == NULL)) {
SETERRQ(PETSC_COMM_SELF,1,
"number of elements unknown; call UMReadElements() first\n");
}
if (mesh->N == 0) {
SETERRQ(PETSC_COMM_SELF,2,
"node size unknown so element check impossible; call UMReadNodes() first\n");
}
ierr = ISGetIndices(mesh->e,&ae); CHKERRQ(ierr);
for (k = 0; k < mesh->K; k++) {
for (m = 0; m < 3; m++) {
if ((ae[3*k+m] < 0) || (ae[3*k+m] >= mesh->N)) {
SETERRQ3(PETSC_COMM_SELF,3,
"index e[%d]=%d invalid: not between 0 and N-1=%d\n",
3*k+m,ae[3*k+m],mesh->N-1);
}
}
// FIXME: could add check for distinct indices
}
ierr = ISRestoreIndices(mesh->e,&ae); CHKERRQ(ierr);
return 0;
}
PetscErrorCode UMCheckBoundaryData(UM *mesh) {
PetscErrorCode ierr;
const PetscInt *ans, *abf;
PetscInt n, m;
if (mesh->N == 0) {
SETERRQ(PETSC_COMM_SELF,2,
"node size unknown so boundary flag check impossible; call UMReadNodes() first\n");
}
if (mesh->bf == NULL) {
SETERRQ(PETSC_COMM_SELF,1,
"boundary flags at nodes not allocated; call UMReadNodes() first\n");
}
if ((mesh->P > 0) && (mesh->ns == NULL)) {
SETERRQ(PETSC_COMM_SELF,3,
"inconsistent data for Neumann boundary segments\n");
}
ierr = ISGetIndices(mesh->bf,&abf); CHKERRQ(ierr);
for (n = 0; n < mesh->N; n++) {
switch (abf[n]) {
case 0 :
case 1 :
case 2 :
break;
default :
SETERRQ2(PETSC_COMM_SELF,5,
"boundary flag bf[%d]=%d invalid: not in {0,1,2}\n",
n,abf[n]);
}
}
ierr = ISRestoreIndices(mesh->bf,&abf); CHKERRQ(ierr);
if (mesh->P > 0) {
ierr = ISGetIndices(mesh->ns,&ans); CHKERRQ(ierr);
for (n = 0; n < mesh->P; n++) {
for (m = 0; m < 2; m++) {
if ((ans[2*n+m] < 0) || (ans[2*n+m] >= mesh->N)) {
SETERRQ3(PETSC_COMM_SELF,6,
"index ns[%d]=%d invalid: not between 0 and N-1=%d\n",
2*n+m,ans[2*n+m],mesh->N-1);
}
}
}
ierr = ISRestoreIndices(mesh->ns,&ans); CHKERRQ(ierr);
}
return 0;
}
PetscErrorCode UMReadISs(UM *mesh, char *filename) {
PetscErrorCode ierr;
PetscViewer viewer;
PetscInt n_bf;
if ((!mesh->loc) || (mesh->N == 0)) {
SETERRQ(PETSC_COMM_SELF,2,
"node coordinates not created ... do that first ... stopping\n");
}
if ((mesh->K > 0) || (mesh->P > 0) || (mesh->e != NULL) || (mesh->bf != NULL) || (mesh->ns != NULL)) {
SETERRQ(PETSC_COMM_SELF,1,
"elements, boundary flags, Neumann boundary segments already created? ... stopping\n");
}
ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,filename,FILE_MODE_READ,&viewer); CHKERRQ(ierr);
// create and load e
ierr = ISCreate(PETSC_COMM_WORLD,&(mesh->e)); CHKERRQ(ierr);
ierr = ISLoad(mesh->e,viewer); CHKERRQ(ierr);
ierr = ISGetSize(mesh->e,&(mesh->K)); CHKERRQ(ierr);
if (mesh->K % 3 != 0) {
SETERRQ1(PETSC_COMM_SELF,3,
"IS e loaded from %s is wrong size for list of element triples\n",filename);
}
mesh->K /= 3;
// create and load bf
ierr = ISCreate(PETSC_COMM_WORLD,&(mesh->bf)); CHKERRQ(ierr);
ierr = ISLoad(mesh->bf,viewer); CHKERRQ(ierr);
ierr = ISGetSize(mesh->bf,&n_bf); CHKERRQ(ierr);
if (n_bf != mesh->N) {
SETERRQ1(PETSC_COMM_SELF,4,
"IS bf loaded from %s is wrong size for list of boundary flags\n",filename);
}
// FIXME seems there is no way to tell if file is empty at this point
// create and load ns last ... may *start with a negative value* in which case set P = 0
const PetscInt *ans;
ierr = ISCreate(PETSC_COMM_WORLD,&(mesh->ns)); CHKERRQ(ierr);
ierr = ISLoad(mesh->ns,viewer); CHKERRQ(ierr);
ierr = ISGetIndices(mesh->ns,&ans); CHKERRQ(ierr);
if (ans[0] < 0) {
ISDestroy(&(mesh->ns));
mesh->ns = NULL;
mesh->P = 0;
} else {
ierr = ISGetSize(mesh->ns,&(mesh->P)); CHKERRQ(ierr);
if (mesh->P % 2 != 0) {
SETERRQ1(PETSC_COMM_SELF,4,
"IS s loaded from %s is wrong size for list of Neumann boundary segment pairs\n",filename);
}
mesh->P /= 2;
}
ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);
// check that mesh is complete now
ierr = UMCheckElements(mesh); CHKERRQ(ierr);
ierr = UMCheckBoundaryData(mesh); CHKERRQ(ierr);
return 0;
}
PetscErrorCode UMStats(UM *mesh, PetscReal *maxh, PetscReal *meanh,
PetscReal *maxa, PetscReal *meana) {
PetscErrorCode ierr;
const PetscInt *ae;
const Node *aloc;
PetscInt k;
PetscReal x[3], y[3], ax, ay, bx, by, cx, cy, h, a,
Maxh = 0.0, Maxa = 0.0, Sumh = 0.0, Suma = 0.0;
if ((mesh->K == 0) || (mesh->e == NULL)) {
SETERRQ(PETSC_COMM_SELF,1,
"number of elements unknown; call UMReadElements() first\n");
}
if (mesh->N == 0) {
SETERRQ(PETSC_COMM_SELF,2,
"node size unknown so element check impossible; call UMReadNodes() first\n");
}
ierr = UMGetNodeCoordArrayRead(mesh,&aloc); CHKERRQ(ierr);
ierr = ISGetIndices(mesh->e,&ae); CHKERRQ(ierr);
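// for each triangle: h is the length of its longest edge and a is its
// area, computed as half the magnitude of the cross product of two edges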
for (k = 0; k < mesh->K; k++) {
x[0] = aloc[ae[3*k]].x;
y[0] = aloc[ae[3*k]].y;
x[1] = aloc[ae[3*k+1]].x;
y[1] = aloc[ae[3*k+1]].y;
x[2] = aloc[ae[3*k+2]].x;
y[2] = aloc[ae[3*k+2]].y;
ax = x[1] - x[0];
ay = y[1] - y[0];
bx = x[2] - x[0];
by = y[2] - y[0];
cx = x[1] - x[2];
cy = y[1] - y[2];
h = PetscMax(ax*ax+ay*ay, PetscMax(bx*bx+by*by, cx*cx+cy*cy));
h = sqrt(h);
a = 0.5 * PetscAbs(ax*by-ay*bx);
Maxh = PetscMax(Maxh,h);
Sumh += h;
Maxa = PetscMax(Maxa,a);
Suma += a;
}
ierr = ISRestoreIndices(mesh->e,&ae); CHKERRQ(ierr);
ierr = UMRestoreNodeCoordArrayRead(mesh,&aloc); CHKERRQ(ierr);
if (maxh) *maxh = Maxh;
if (maxa) *maxa = Maxa;
if (meanh) *meanh = Sumh / mesh->K;
if (meana) *meana = Suma / mesh->K;
return 0;
}
PetscErrorCode UMGetNodeCoordArrayRead(UM *mesh, const Node **xy) {
PetscErrorCode ierr;
if ((!mesh->loc) || (mesh->N == 0)) {
SETERRQ(PETSC_COMM_SELF,1,"node coordinates not created ... stopping\n");
}
ierr = VecGetArrayRead(mesh->loc,(const PetscReal **)xy); CHKERRQ(ierr);
return 0;
}
PetscErrorCode UMRestoreNodeCoordArrayRead(UM *mesh, const Node **xy) {
PetscErrorCode ierr;
if ((!mesh->loc) || (mesh->N == 0)) {
SETERRQ(PETSC_COMM_SELF,1,"node coordinates not created ... stopping\n");
}
ierr = VecRestoreArrayRead(mesh->loc,(const PetscReal **)xy); CHKERRQ(ierr);
return 0;
}
|
/-
Copyright (c) 2019 Simon Hudon. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Simon Hudon
-/
import tactic.monotonicity
import tactic.norm_num
import algebra.order.ring.defs
import measure_theory.measure.lebesgue
import measure_theory.function.locally_integrable
import data.list.defs
open list tactic tactic.interactive set
example
(h : 3 + 6 ≤ 4 + 5)
: 1 + 3 + 2 + 6 ≤ 4 + 2 + 1 + 5 :=
begin
ac_mono,
end
example
(h : 3 ≤ (4 : ℤ))
(h' : 5 ≤ (6 : ℤ))
: (1 + 3 + 2) - 6 ≤ (4 + 2 + 1 : ℤ) - 5 :=
begin
ac_mono,
mono,
end
example
(h : 3 ≤ (4 : ℤ))
(h' : 5 ≤ (6 : ℤ))
: (1 + 3 + 2) - 6 ≤ (4 + 2 + 1 : ℤ) - 5 :=
begin
transitivity (1 + 3 + 2 - 5 : ℤ),
{ ac_mono },
{ ac_mono },
end
example (x y z k : ℕ)
(h : 3 ≤ (4 : ℕ))
(h' : z ≤ y)
: (k + 3 + x) - y ≤ (k + 4 + x) - z :=
begin
mono, norm_num
end
example (x y z k : ℤ)
(h : 3 ≤ (4 : ℤ))
(h' : z ≤ y)
: (k + 3 + x) - y ≤ (k + 4 + x) - z :=
begin
mono, norm_num
end
example (x y z a b : ℕ)
(h : a ≤ (b : ℕ))
(h' : z ≤ y)
: (1 + a + x) - y ≤ (1 + b + x) - z :=
begin
transitivity (1 + a + x - z),
{ mono, },
{ mono, mono, mono },
end
example (x y z a b : ℤ)
(h : a ≤ (b : ℤ))
(h' : z ≤ y)
: (1 + a + x) - y ≤ (1 + b + x) - z :=
begin
transitivity (1 + a + x - z),
{ mono, },
{ mono, mono, mono },
end
example (x y z : ℤ)
(h' : z ≤ y)
: (1 + 3 + x) - y ≤ (1 + 4 + x) - z :=
begin
transitivity (1 + 3 + x - z),
{ mono },
{ mono, mono, norm_num },
end
example (x y z : ℤ)
(h : 3 ≤ (4 : ℤ))
(h' : z ≤ y)
: (1 + 3 + x) - y ≤ (1 + 4 + x) - z :=
begin
ac_mono, mono*
end
@[simp]
def list.le' {α : Type*} [has_le α] : list α → list α → Prop
| (x::xs) (y::ys) := x ≤ y ∧ list.le' xs ys
| [] [] := true
| _ _ := false
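-- `list.le'` is the pointwise order: elements are compared in lock-step,
-- and the two lists must have equal length.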
@[simp]
instance list_has_le {α : Type*} [has_le α] : has_le (list α) :=
⟨ list.le' ⟩
lemma list.le_refl {α : Type*} [preorder α] {xs : list α}
: xs ≤ xs :=
begin
induction xs with x xs,
{ trivial },
{ simp [has_le.le,list.le],
split, exact le_rfl, apply xs_ih }
end
-- @[trans]
lemma list.le_trans {α : Type*} [preorder α]
{xs zs : list α} (ys : list α)
(h : xs ≤ ys)
(h' : ys ≤ zs)
: xs ≤ zs :=
begin
revert ys zs,
induction xs with x xs
; intros ys zs h h'
; cases ys with y ys
; cases zs with z zs
; try { cases h ; cases h' ; done },
{ apply list.le_refl },
{ simp [has_le.le,list.le],
split,
apply le_trans h.left h'.left,
apply xs_ih _ h.right h'.right, }
end
@[mono]
lemma list_le_mono_left {α : Type*} [preorder α] {xs ys zs : list α}
(h : xs ≤ ys)
: xs ++ zs ≤ ys ++ zs :=
begin
revert ys,
induction xs with x xs ; intros ys h,
{ cases ys, apply list.le_refl, cases h },
{ cases ys with y ys, cases h, simp [has_le.le,list.le] at *,
revert h, apply and.imp_right,
apply xs_ih }
end
@[mono]
lemma list_le_mono_right {α : Type*} [preorder α] {xs ys zs : list α}
(h : xs ≤ ys)
: zs ++ xs ≤ zs ++ ys :=
begin
revert ys zs,
induction xs with x xs ; intros ys zs h,
{ cases ys, { simp, apply list.le_refl }, cases h },
{ cases ys with y ys, cases h, simp [has_le.le,list.le] at *,
suffices : list.le' ((zs ++ [x]) ++ xs) ((zs ++ [y]) ++ ys),
{ refine cast _ this, simp, },
apply list.le_trans (zs ++ [y] ++ xs),
{ apply list_le_mono_left,
induction zs with z zs,
{ simp [has_le.le,list.le], apply h.left },
{ simp [has_le.le,list.le], split, exact le_rfl,
apply zs_ih, } },
{ apply xs_ih h.right, } }
end
lemma bar_bar'
(h : [] ++ [3] ++ [2] ≤ [1] ++ [5] ++ [4])
: [] ++ [3] ++ [2] ++ [2] ≤ [1] ++ [5] ++ ([4] ++ [2]) :=
begin
ac_mono,
end
lemma bar_bar''
(h : [3] ++ [2] ++ [2] ≤ [5] ++ [4] ++ [])
: [1] ++ ([3] ++ [2]) ++ [2] ≤ [1] ++ [5] ++ ([4] ++ []) :=
begin
ac_mono,
end
lemma bar_bar
(h : [3] ++ [2] ≤ [5] ++ [4])
: [1] ++ [3] ++ [2] ++ [2] ≤ [1] ++ [5] ++ ([4] ++ [2]) :=
begin
ac_mono,
end
def P (x : ℕ) := 7 ≤ x
def Q (x : ℕ) := x ≤ 7
@[mono]
lemma P_mono {x y : ℕ}
(h : x ≤ y)
: P x → P y :=
by { intro h', apply le_trans h' h }
@[mono]
lemma Q_mono {x y : ℕ}
(h : y ≤ x)
: Q x → Q y :=
by apply le_trans h
example (x y z : ℕ)
(h : x ≤ y)
: P (x + z) → P (z + y) :=
begin
ac_mono,
ac_mono,
end
example (x y z : ℕ)
(h : y ≤ x)
: Q (x + z) → Q (z + y) :=
begin
ac_mono,
ac_mono,
end
example (x y z k m n : ℤ)
(h₀ : z ≤ 0)
(h₁ : y ≤ x)
: (m + x + n) * z + k ≤ z * (y + n + m) + k :=
begin
ac_mono,
ac_mono,
ac_mono,
end
example (x y z k m n : ℕ)
(h₀ : z ≥ 0)
(h₁ : x ≤ y)
: (m + x + n) * z + k ≤ z * (y + n + m) + k :=
begin
ac_mono,
ac_mono,
ac_mono,
end
example (x y z k m n : ℕ)
(h₀ : z ≥ 0)
(h₁ : x ≤ y)
: (m + x + n) * z + k ≤ z * (y + n + m) + k :=
begin
ac_mono,
-- ⊢ (m + x + n) * z ≤ z * (y + n + m)
ac_mono,
-- ⊢ m + x + n ≤ y + n + m
ac_mono,
end
example (x y z k m n : ℕ)
(h₀ : z ≥ 0)
(h₁ : x ≤ y)
: (m + x + n) * z + k ≤ z * (y + n + m) + k :=
by { ac_mono* := h₁ }
example (x y z k m n : ℕ)
(h₀ : z ≥ 0)
(h₁ : m + x + n ≤ y + n + m)
: (m + x + n) * z + k ≤ z * (y + n + m) + k :=
by { ac_mono* := h₁ }
example (x y z k m n : ℕ)
(h₀ : z ≥ 0)
(h₁ : n + x + m ≤ y + n + m)
: (m + x + n) * z + k ≤ z * (y + n + m) + k :=
begin
ac_mono* : m + x + n ≤ y + n + m,
transitivity ; [ skip , apply h₁ ],
apply le_of_eq,
ac_refl,
end
example (x y z k m n : ℤ)
(h₁ : x ≤ y)
: true :=
begin
have : (m + x + n) * z + k ≤ z * (y + n + m) + k,
{ ac_mono,
success_if_fail { ac_mono },
admit },
trivial
end
example (x y z k m n : ℕ)
(h₁ : x ≤ y)
: true :=
begin
have : (m + x + n) * z + k ≤ z * (y + n + m) + k,
{ ac_mono*,
change 0 ≤ z, apply nat.zero_le, },
trivial
end
example (x y z k m n : ℕ)
(h₁ : x ≤ y)
: true :=
begin
have : (m + x + n) * z + k ≤ z * (y + n + m) + k,
{ ac_mono,
change (m + x + n) * z ≤ z * (y + n + m),
admit },
trivial,
end
example (x y z k m n i j : ℕ)
(h₁ : x + i = y + j)
: (m + x + n + i) * z + k = z * (j + n + m + y) + k :=
begin
ac_mono^3,
cc
end
example (x y z k m n i j : ℕ)
(h₁ : x + i = y + j)
: z * (x + i + n + m) + k = z * (y + j + n + m) + k :=
begin
congr,
simp [h₁],
end
example (x y z k m n i j : ℕ)
(h₁ : x + i = y + j)
: (m + x + n + i) * z + k = z * (j + n + m + y) + k :=
begin
ac_mono*,
cc,
end
example (x y : ℕ)
(h : x ≤ y)
: true :=
begin
(do v ← mk_mvar,
p ← to_expr ```(%%v + x ≤ y + %%v),
assert `h' p),
ac_mono := h,
trivial,
exact 1,
end
example {x y z : ℕ} : true :=
begin
have : y + x ≤ y + z,
{ mono,
guard_target' x ≤ z,
admit },
trivial
end
example {x y z : ℕ} : true :=
begin
suffices : x + y ≤ z + y, trivial,
mono,
guard_target' x ≤ z,
admit,
end
example {x y z w : ℕ} : true :=
begin
have : x + y ≤ z + w,
{ mono,
guard_target' x ≤ z, admit,
guard_target' y ≤ w, admit },
trivial
end
example {x y z w : ℕ} : true :=
begin
have : x * y ≤ z * w,
{ mono with [0 ≤ z,0 ≤ y],
{ guard_target 0 ≤ z, admit },
{ guard_target 0 ≤ y, admit },
guard_target' x ≤ z, admit,
guard_target' y ≤ w, admit },
trivial
end
example {x y z w : Prop} : true :=
begin
have : x ∧ y → z ∧ w,
{ mono,
guard_target' x → z, admit,
guard_target' y → w, admit },
trivial
end
example {x y z w : Prop} : true :=
begin
have : x ∨ y → z ∨ w,
{ mono,
guard_target' x → z, admit,
guard_target' y → w, admit },
trivial
end
example {x y z w : ℤ} : true :=
begin
suffices : x + y < w + z, trivial,
have : x < w, admit,
have : y ≤ z, admit,
mono right,
end
example {x y z w : ℤ} : true :=
begin
suffices : x * y < w * z, trivial,
have : x < w, admit,
have : y ≤ z, admit,
mono right,
{ guard_target' 0 < y, admit },
{ guard_target' 0 ≤ w, admit },
end
open tactic
example (x y : ℕ)
(h : x ≤ y)
: true :=
begin
(do v ← mk_mvar,
p ← to_expr ```(%%v + x ≤ y + %%v),
assert `h' p),
ac_mono := h,
trivial,
exact 3
end
example {α} [linear_order α]
(a b c d e : α) :
max a b ≤ e → b ≤ e :=
by { mono, apply le_max_right }
example (a b c d e : Prop)
(h : d → a) (h' : c → e) :
(a ∧ b → c) ∨ d → (d ∧ b → e) ∨ a :=
begin
mono,
mono,
mono,
end
example : ∫ x in Icc 0 1, real.exp x ≤ ∫ x in Icc 0 1, real.exp (x+1) :=
begin
mono,
{ exact real.continuous_exp.locally_integrable.integrable_on_is_compact is_compact_Icc },
{ exact (real.continuous_exp.comp $ continuous_add_right 1)
.locally_integrable.integrable_on_is_compact is_compact_Icc },
intro x,
dsimp only,
mono,
linarith
end
|
In the early evening of 15 June, Webb began his next attempt by passing over the lower obstructions in the Wilmington River and spent the rest of the night coaling. He moved forward the next evening to a concealed position within easy reach of the monitors for an attack early the following morning. Webb planned to sink one of the monitors with his spar torpedo and then deal with the other one with his guns. The gunboat Isondiga and the tugboat Resolute were to accompany him to tow one or both of the monitors back to Savannah.
|
\documentclass{article}
\usepackage[landscape,centering]{geometry}
\geometry{
paperwidth=3in,
paperheight=5in,
margin=0.5cm
}
\begin{document}\sloppy
\section{Title}
\begin{itemize}
\item Thank the AV tech if present. How is audio? Field of view? What is
my range of motion?
\item Welcome to Openwest 2017.
\item Introduction. Medici Ventures, blockchain, Java/Scala.
\item Explain how this talk came to be. Differing opinions. Poorly written
and organized or no tests.
\item Jonathan Swift: You cannot reason people out of positions they didn't
reason themselves into. From ``Letter to a Young Gentleman.''
\item Opinionated Screed talk series... Agile, REST, Design Patterns...
\item Focus is on unit testing.
\item Survey: How many developers? How many love writing tests?
\end{itemize}
\newpage
\section{Opinionated}
\begin{itemize}
\item Disagreements are opportunities for learning.
\item If I trigger you, it's not personal. It should have the effect of
reasoning yourself into or out of your current opinion.
\item Intended as humour.
\end{itemize}
\newpage
\section{Screed}
\begin{itemize}
\item Screed is a cool word.
\item Whether screed or not depends on who you are...
\item Wife asked ``what does a unit test do...''
\item Your mileage may vary.
\end{itemize}
\newpage
\section{I Hate UT}
\begin{itemize}
\item Developers seem to have a rocky relationship with unit tests.
\item Like cleaning up your toys after playing all afternoon, doing the
dishes, etc...
\item Often an afterthought. Don't leave time for writing tests.
\item Unit tests are often stale, which is another reason for devs to avoid
them.
\end{itemize}
\newpage
\section{UT Are Not IT}
\begin{itemize}
\item Two distinct kinds of tests: Unit tests and integration tests.
\item Unit tests test our own units of code, hopefully isolated.
\item Integration tests, also often called ``system'' or ``functional''
tests, are full-stack tests winding their way through all of our code
paths.
\item Integration tests are often fragile.
\end{itemize}
\newpage
\section{Not UT If...}
\begin{itemize}
\item Test is not a unit test if...
\item It talks to a database.
\item It communicates across the network.
\item It touches the filesystem.
\item It can't run at the same time as your other unit tests.
\item You must do special things to your environment.
\item Michael Feathers.
\end{itemize}
\newpage
\section{Unit Tests Should Be FAST!}
\begin{itemize}
\item Waiting 30 minutes for tests to run is 30 minutes of your life you'll
never get back.
\item It means you will avoid running tests.
\end{itemize}
\newpage
\section{Unit Tests Should Be Segregated}
\begin{itemize}
\item Unit tests and integration tests should be segregated.
\item They serve different purposes. Mingling them makes it impossible to
run them separately.
\item Integration tests require external dependencies that are difficult to
implement in a developer environment.
\item During development, devs should be running unit tests often. Segregation
of unit and integration tests reduces friction.
\end{itemize}
\newpage
\section{Unit Tests Should Assert One Thing}
\begin{itemize}
\item Unit tests should be simple.
\item Multiple assertions make it difficult to understand what failed.
\item The exception is when multiple assertions are used to confirm results
that are related.
\end{itemize}
\newpage
\section{Unit Tests Should Be Run Often}
\begin{itemize}
\item Before modifying code.
\item After code change.
\item At night before you leave.
\item In the morning when you arrive.
\item Just because...
\end{itemize}
\newpage
\section{Tips}
\begin{itemize}
\item Adopting some best practices can make unit tests more useful.
\end{itemize}
\newpage
\section{Tips: Naming}
\begin{itemize}
\item Two hardest things in computer science...
\item Unit test names should act as a specification.
\item Describe what is expected.
\item Helps locate the problem on failed test.
\end{itemize}
\newpage
\section{Tips: Comments}
\begin{itemize}
\item Commenting your tests improves readability.
\item Helps document your units.
\end{itemize}
\newpage
\section{Tips: Isolate}
\begin{itemize}
\item Mocks, stubs, fakes, test doubles.
\item Loose coupling helps with this.
\end{itemize}
\newpage
\section{Tips: Overspecification}
\begin{itemize}
\item Leads to tests coupling strongly to production code and limiting
ability to refactor.
\item Sign of this is testing implementation details instead of the overall
behavior.
\item Overusing mocks is an indicator of overspecification.
\item Prefer stubs over mocks as much as possible.
\item Test should focus on testing the results of a function instead of the
internal actions. Not always possible. Refer to design.
\end{itemize}
\newpage
\section{Tips: Brevity}
\begin{itemize}
\item Tests should be short and to the point.
\item Long, elaborate tests indicate overspecification or code that is in
need of refactoring.
\item Brevity helps us maintain focus on tests as documentation.
\end{itemize}
\newpage
\section{Tips: Setup}
\begin{itemize}
\item Setup common dependencies for each suite of tests.
\item Teardown resets dependencies.
\item Prevents fragile unit tests by breaking dependencies between tests.
\end{itemize}
\newpage
\section{Tips: Arrange Act Assert}
\begin{itemize}
\item Perform any test specific setup that needs to be done.
\item Call the unit.
\item Assert to confirm the single result, even if multiple asserts are
needed for this purpose.
\end{itemize}
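A minimal sketch of the pattern, assuming JUnit~5 and a hypothetical
\texttt{Calculator} class; the test name also follows the naming tip.
\begin{verbatim}
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculatorTest {
  @Test
  void addReturnsSumOfItsArguments() {
    Calculator calc = new Calculator(); // Arrange
    int sum = calc.add(2, 3);           // Act
    assertEquals(5, sum);               // Assert
  }
}
\end{verbatim}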
\newpage
\section{Tips: DAMP vs DRY}
\begin{itemize}
\item Don't Repeat Yourself.
\item Descriptive And Meaningful Phrases.
\item Each test is an isolated chapter in your test book. If removing a
piece of logic hurts the readability, then don't remove it.
\item From a maintenance perspective, don't centralize a piece of logic that
can't be changed on its own.
\end{itemize}
\newpage
\section{Tips: Perfection}
\begin{itemize}
\item Remember, you're trying to run a business!
\item Perfection takes a lot of time. Don't waste time.
\item The test can be refactored if it fails when it shouldn't.
\item Insufficient isolation can be fixed later if necessary.
\end{itemize}
\newpage
\section{Tips: Code Coverage}
\begin{itemize}
\item Some devs don't like code coverage.
\item I get a kick out of seeing my reports improve.
\item Ideas for tests often come from the coverage report.
\item How do you know what you're really testing?
\item Code coverage reports help reason about design and guide refactor.
\item Spot trends in testing coverage over time.
\item Test quality != code coverage.
\end{itemize}
\newpage
\section{Tips: Test Driven Development}
\begin{itemize}
\item Does not mean Test Deficit Disorder.
\item Test last: Risks, design problems and bugs discovered late. Can
descend into frustrating spiral of debugging and rework.
\item Test last leads to heavy refactoring burdens.
\item You might not leave time for adequate test design, leading to being
rushed. May miss important cases.
\item Test first? I haven't even figured out what to name stuff yet...
\item Test first forces you to specify what you are building in advance
of building it.
\item Risk of rework. Mini waterfall. Discoveries during development that
make you want to iterate.
\item Test driven development. Designing tests and code together.
\item Allow the design of the tests and the code to emerge as you develop.
\item Poorly designed code is hard to test.
\end{itemize}
\newpage
\section{Tips: Limitations}
\begin{itemize}
\item Unit tests will not prove correctness. GIGO.
\item Unit tests won't find integration errors.
\item Unit tests will not test non-functional requirements like performance
and security.
\end{itemize}
\newpage
\section{Purpose: Why Do We Do It?}
\begin{itemize}
\item Your own personal development process.
\item Developer responsibility. Not an optional extra or the job of a
separate test team.
\item Avoid embarrassing bugs in your code that others find.
\item To help your team not break your code.
\item Writing code is how you tell your colleagues how you feel about them.
\item Psychopathic developer who knows where you live.
\end{itemize}
\newpage
\section{Purpose: What To Build}
\begin{itemize}
\item Unit tests cannot be written without a good understanding of what to
build. Forces clarification of the business requirements.
\end{itemize}
\newpage
\section{Purpose: Better Design}
\begin{itemize}
\item Anecdote about PDF Form Filler being untestable.
\item Well designed code is testable. If your code is hard to test, it is
not well designed.
\item Decompose the problem into units that are independently testable.
(loose coupling)
\item Unit testing forces thinking of the interface separately from the
implementation.
\item Just the discipline of writing unit tests helps with better design.
\end{itemize}
\newpage
\section{Purpose: Documentation}
\begin{itemize}
\item Unit tests document our units. Comments help with this.
\item Unit tests should be highly readable.
\item Think of them as an executable specification. Tests document the
behavior of the code in a form you can execute.
\item Demonstrates how a unit is intended to be used, how to construct
it, call the method, what kind of arguments, what kind of results.
\end{itemize}
\newpage
\section{Purpose: Fear-Free Refactor}
\begin{itemize}
\item Adding new functionality or just refactoring units is easier. Unit
tests let us know quickly when something is broken.
\end{itemize}
\newpage
\section{Purpose: Regression Protection}
\begin{itemize}
\item Something that worked before and now doesn't.
\item Changes to unknown dependencies can cause failures.
\item Test failure messages should be very clear and readable so it is
easy to find any failures.
\end{itemize}
\newpage
\section{I Love Unit Tests}
\begin{itemize}
\item Unit tests are 100\% organic, non-GMO, and gluten-free.
\item Unit tests are your friend. They make your life easier.
\item Unit tests are challenging and fun!
\item Unit tests make you a better developer.
\item Unit tests earn the admiration of your colleagues.
\item I love unit tests. So should you!
\end{itemize}
\end{document}
|
From iris.proofmode Require Import tactics monpred.
From iris.base_logic.lib Require Import invariants.
Unset Mangle Names.
Section tests.
Context {I : biIndex} {PROP : bi}.
Local Notation monPred := (monPred I PROP).
Local Notation monPredI := (monPredI I PROP).
Implicit Types P Q R : monPred.
Implicit Types 𝓟 𝓠 𝓡 : PROP.
Implicit Types i j : I.
Lemma test0 P : P -∗ P.
Proof. iIntros "$". Qed.
Lemma test_iStartProof_1 P : P -∗ P.
Proof. iStartProof. iStartProof. iIntros "$". Qed.
Lemma test_iStartProof_2 P : P -∗ P.
Proof. iStartProof monPred. iStartProof monPredI. iIntros "$". Qed.
Lemma test_iStartProof_3 P : P -∗ P.
Proof. iStartProof monPredI. iStartProof monPredI. iIntros "$". Qed.
Lemma test_iStartProof_4 P : P -∗ P.
Proof. iStartProof monPredI. iStartProof monPred. iIntros "$". Qed.
Lemma test_iStartProof_5 P : P -∗ P.
Proof. iStartProof PROP. iIntros (i) "$". Qed.
Lemma test_iStartProof_6 P : P ⊣⊢ P.
Proof. iStartProof PROP. iIntros (i). iSplit; iIntros "$". Qed.
Lemma test_iStartProof_7 `{!BiInternalEq PROP} P : ⊢@{monPredI} P ≡ P.
Proof. iStartProof PROP. done. Qed.
Lemma test_intowand_1 P Q : (P -∗ Q) -∗ P -∗ Q.
Proof.
iStartProof PROP. iIntros (i) "HW". Show.
iIntros (j ->) "HP". Show. by iApply "HW".
Qed.
Lemma test_intowand_2 P Q : (P -∗ Q) -∗ P -∗ Q.
Proof.
iStartProof PROP. iIntros (i) "HW". iIntros (j ->) "HP".
iSpecialize ("HW" with "[HP //]"). done.
Qed.
Lemma test_intowand_3 P Q : (P -∗ Q) -∗ P -∗ Q.
Proof.
iStartProof PROP. iIntros (i) "HW". iIntros (j ->) "HP".
iSpecialize ("HW" with "HP"). done.
Qed.
Lemma test_intowand_4 P Q : (P -∗ Q) -∗ ▷ P -∗ ▷ Q.
Proof.
iStartProof PROP. iIntros (i) "HW". iIntros (j ->) "HP". by iApply "HW".
Qed.
Lemma test_intowand_5 P Q : (P -∗ Q) -∗ ▷ P -∗ ▷ Q.
Proof.
iStartProof PROP. iIntros (i) "HW". iIntros (j ->) "HP".
iSpecialize ("HW" with "HP"). done.
Qed.
Lemma test_apply_in_elim (P : monPredI) (i : I) : monPred_in i -∗ ⎡ P i ⎤ → P.
Proof. iIntros. by iApply monPred_in_elim. Qed.
Lemma test_iStartProof_forall_1 (Φ : nat → monPredI) : ∀ n, Φ n -∗ Φ n.
Proof.
iStartProof PROP. iIntros (n i) "$".
Qed.
Lemma test_iStartProof_forall_2 (Φ : nat → monPredI) : ∀ n, Φ n -∗ Φ n.
Proof.
iStartProof. iIntros (n) "$".
Qed.
Lemma test_embed_wand (P Q : PROP) : (⎡P⎤ -∗ ⎡Q⎤) -∗ ⎡P -∗ Q⎤ : monPred.
Proof.
iIntros "H HP". by iApply "H".
Qed.
Lemma test_objectively P Q : <obj> emp -∗ <obj> P -∗ <obj> Q -∗ <obj> (P ∗ Q).
Proof. iIntros "#? HP HQ". iModIntro. by iSplitL "HP". Qed.
Lemma test_objectively_absorbing P Q R `{!Absorbing P} :
<obj> emp -∗ <obj> P -∗ <obj> Q -∗ R -∗ <obj> (P ∗ Q).
Proof. iIntros "#? HP HQ HR". iModIntro. by iSplitL "HP". Qed.
Lemma test_objectively_affine P Q R `{!Affine R} :
<obj> emp -∗ <obj> P -∗ <obj> Q -∗ R -∗ <obj> (P ∗ Q).
Proof. iIntros "#? HP HQ HR". iModIntro. by iSplitL "HP". Qed.
Lemma test_iModIntro_embed P `{!Affine Q} 𝓟 𝓠 :
□ P -∗ Q -∗ ⎡𝓟⎤ -∗ ⎡𝓠⎤ -∗ ⎡ 𝓟 ∗ 𝓠 ⎤.
Proof. iIntros "#H1 _ H2 H3". iModIntro. iFrame. Qed.
Lemma test_iModIntro_embed_objective P `{!Objective Q} 𝓟 𝓠 :
□ P -∗ Q -∗ ⎡𝓟⎤ -∗ ⎡𝓠⎤ -∗ ⎡ ∀ i, 𝓟 ∗ 𝓠 ∗ Q i ⎤.
Proof. iIntros "#H1 H2 H3 H4". iModIntro. Show. iFrame. Qed.
Lemma test_iModIntro_embed_nested P 𝓟 𝓠 :
□ P -∗ ⎡◇ 𝓟⎤ -∗ ⎡◇ 𝓠⎤ -∗ ⎡ ◇ (𝓟 ∗ 𝓠) ⎤.
Proof. iIntros "#H1 H2 H3". iModIntro ⎡ _ ⎤%I. by iSplitL "H2". Qed.
Lemma test_into_wand_embed 𝓟 𝓠 :
(𝓟 -∗ ◇ 𝓠) →
⎡𝓟⎤ ⊢@{monPredI} ◇ ⎡𝓠⎤.
Proof.
iIntros (HPQ) "HP".
iMod (HPQ with "[-]") as "$"; last by auto.
iAssumption.
Qed.
Context (FU : BiFUpd PROP).
Lemma test_apply_fupd_intro_mask_subseteq E1 E2 P :
E2 ⊆ E1 → P -∗ |={E1,E2}=> |={E2,E1}=> P.
Proof. iIntros. by iApply @fupd_mask_intro_subseteq. Qed.
Lemma test_apply_fupd_mask_subseteq E1 E2 P :
E2 ⊆ E1 → P -∗ |={E1,E2}=> |={E2,E1}=> P.
Proof. iIntros. iFrame. by iApply @fupd_mask_subseteq. Qed.
Lemma test_iFrame_embed_persistent (P : PROP) (Q: monPred) :
Q ∗ □ ⎡P⎤ ⊢ Q ∗ ⎡P ∗ P⎤.
Proof.
iIntros "[$ #HP]". iFrame "HP".
Qed.
Lemma test_iNext_Bi P :
▷ P ⊢@{monPredI} ▷ P.
Proof. iIntros "H". by iNext. Qed.
(** Test monPred_at framing *)
Lemma test_iFrame_monPred_at_wand (P Q : monPred) i :
P i -∗ (Q -∗ P) i.
Proof. iIntros "$". Show. Abort.
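(* [monPred_id] wraps a monPred in the [MonPred] constructor; the next
lemma checks that framing can see through such a wrapper. *)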
Program Definition monPred_id (R : monPred) : monPred :=
MonPred (λ V, R V) _.
Next Obligation. intros ? ???. eauto. Qed.
Lemma test_iFrame_monPred_id (Q R : monPred) i :
Q i ∗ R i -∗ (Q ∗ monPred_id R) i.
Proof.
iIntros "(HQ & HR)". iFrame "HR". iAssumption.
Qed.
Lemma test_iFrame_rel P i j ij :
IsBiIndexRel i ij → IsBiIndexRel j ij →
P i -∗ P j -∗ P ij ∗ P ij.
Proof. iIntros (??) "HPi HPj". iFrame. Qed.
Lemma test_iFrame_later_rel `{!BiAffine PROP} P i j :
IsBiIndexRel i j →
▷ (P i) -∗ (▷ P) j.
Proof. iIntros (?) "?". iFrame. Qed.
Lemma test_iFrame_laterN n P i :
▷^n (P i) -∗ (▷^n P) i.
Proof. iIntros "?". iFrame. Qed.
Lemma test_iFrame_quantifiers P i :
P i -∗ (∀ _:(), ∃ _:(), P) i.
Proof. iIntros "?". iFrame. Show. iIntros ([]). iExists (). iEmpIntro. Qed.
Lemma test_iFrame_embed (P : PROP) i :
P -∗ (embed (B:=monPredI) P) i.
Proof. iIntros "$". Qed.
(* Make sure search doesn't diverge on an evar *)
Lemma test_iFrame_monPred_at_evar (P : monPred) i j :
P i -∗ ∃ Q, (Q j).
Proof. iIntros "HP". iExists _. Fail iFrame "HP". Abort.
End tests.
Section tests_iprop.
Context {I : biIndex} `{!invG Σ}.
Local Notation monPred := (monPred I (iPropI Σ)).
Implicit Types P Q R : monPred.
Implicit Types 𝓟 𝓠 𝓡 : iProp Σ.
Lemma test_iInv_0 N 𝓟 :
embed (B:=monPred) (inv N (<pers> 𝓟)) ={⊤}=∗ ⎡▷ 𝓟⎤.
Proof.
iIntros "#H".
iInv N as "#H2". Show.
iModIntro. iSplit=>//. iModIntro. iModIntro; auto.
Qed.
Lemma test_iInv_0_with_close N 𝓟 :
embed (B:=monPred) (inv N (<pers> 𝓟)) ={⊤}=∗ ⎡▷ 𝓟⎤.
Proof.
iIntros "#H".
iInv N as "#H2" "Hclose". Show.
iMod ("Hclose" with "H2").
iModIntro. iModIntro. by iNext.
Qed.
Lemma test_iPoseProof `{inG Σ A} P γ (x y : A) :
x ~~> y → P ∗ ⎡own γ x⎤ ==∗ ⎡own γ y⎤.
Proof.
iIntros (?) "[_ Hγ]".
iPoseProof (own_update with "Hγ") as "H"; first done.
by iMod "H".
Qed.
End tests_iprop.
|
{-# OPTIONS --without-K --rewriting #-}
open import lib.Basics
module lib.types.Sigma where
-- pointed [Σ]
⊙Σ : ∀ {i j} (X : Ptd i) → (de⊙ X → Ptd j) → Ptd (lmax i j)
⊙Σ ⊙[ A , a₀ ] Y = ⊙[ Σ A (de⊙ ∘ Y) , (a₀ , pt (Y a₀)) ]
-- Cartesian product
_×_ : ∀ {i j} (A : Type i) (B : Type j) → Type (lmax i j)
A × B = Σ A (λ _ → B)
_⊙×_ : ∀ {i j} → Ptd i → Ptd j → Ptd (lmax i j)
X ⊙× Y = ⊙Σ X (λ _ → Y)
infixr 80 _×_ _⊙×_
-- XXX Do we really need two versions of [⊙fst]?
⊙fstᵈ : ∀ {i j} {X : Ptd i} (Y : de⊙ X → Ptd j) → ⊙Σ X Y ⊙→ X
⊙fstᵈ Y = fst , idp
⊙fst : ∀ {i j} {X : Ptd i} {Y : Ptd j} → X ⊙× Y ⊙→ X
⊙fst = ⊙fstᵈ _
⊙snd : ∀ {i j} {X : Ptd i} {Y : Ptd j} → X ⊙× Y ⊙→ Y
⊙snd = (snd , idp)
fanout : ∀ {i j k} {A : Type i} {B : Type j} {C : Type k}
→ (A → B) → (A → C) → (A → B × C)
fanout f g x = f x , g x
⊙fanout-pt : ∀ {i j} {A : Type i} {B : Type j}
{a₀ a₁ : A} (p : a₀ == a₁) {b₀ b₁ : B} (q : b₀ == b₁)
→ (a₀ , b₀) == (a₁ , b₁) :> A × B
⊙fanout-pt = pair×=
⊙fanout : ∀ {i j k} {X : Ptd i} {Y : Ptd j} {Z : Ptd k}
→ X ⊙→ Y → X ⊙→ Z → X ⊙→ Y ⊙× Z
⊙fanout (f , fpt) (g , gpt) = fanout f g , ⊙fanout-pt fpt gpt
diag : ∀ {i} {A : Type i} → (A → A × A)
diag a = a , a
⊙diag : ∀ {i} {X : Ptd i} → X ⊙→ X ⊙× X
⊙diag = ((λ x → (x , x)) , idp)
⊙×-inl : ∀ {i j} (X : Ptd i) (Y : Ptd j) → X ⊙→ X ⊙× Y
⊙×-inl X Y = (λ x → x , pt Y) , idp
⊙×-inr : ∀ {i j} (X : Ptd i) (Y : Ptd j) → Y ⊙→ X ⊙× Y
⊙×-inr X Y = (λ y → pt X , y) , idp
⊙fst-fanout : ∀ {i j k} {X : Ptd i} {Y : Ptd j} {Z : Ptd k}
(f : X ⊙→ Y) (g : X ⊙→ Z)
→ ⊙fst ⊙∘ ⊙fanout f g == f
⊙fst-fanout (f , idp) (g , idp) = idp
⊙snd-fanout : ∀ {i j k} {X : Ptd i} {Y : Ptd j} {Z : Ptd k}
(f : X ⊙→ Y) (g : X ⊙→ Z)
→ ⊙snd ⊙∘ ⊙fanout f g == g
⊙snd-fanout (f , idp) (g , idp) = idp
⊙fanout-pre∘ : ∀ {i j k l} {X : Ptd i} {Y : Ptd j} {Z : Ptd k} {W : Ptd l}
(f : Y ⊙→ Z) (g : Y ⊙→ W) (h : X ⊙→ Y)
→ ⊙fanout f g ⊙∘ h == ⊙fanout (f ⊙∘ h) (g ⊙∘ h)
⊙fanout-pre∘ (f , idp) (g , idp) (h , idp) = idp
module _ {i j} {A : Type i} {B : A → Type j} where
pair : (a : A) (b : B a) → Σ A B
pair a b = (a , b)
-- pair= has already been defined
fst= : {ab a'b' : Σ A B} (p : ab == a'b') → (fst ab == fst a'b')
fst= = ap fst
snd= : {ab a'b' : Σ A B} (p : ab == a'b')
→ (snd ab == snd a'b' [ B ↓ fst= p ])
snd= {._} {_} idp = idp
fst=-β : {a a' : A} (p : a == a')
{b : B a} {b' : B a'} (q : b == b' [ B ↓ p ])
→ fst= (pair= p q) == p
fst=-β idp idp = idp
snd=-β : {a a' : A} (p : a == a')
{b : B a} {b' : B a'} (q : b == b' [ B ↓ p ])
→ snd= (pair= p q) == q [ (λ v → b == b' [ B ↓ v ]) ↓ fst=-β p q ]
snd=-β idp idp = idp
pair=-η : {ab a'b' : Σ A B} (p : ab == a'b')
→ p == pair= (fst= p) (snd= p)
pair=-η {._} {_} idp = idp
pair== : {a a' : A} {p p' : a == a'} (α : p == p')
{b : B a} {b' : B a'} {q : b == b' [ B ↓ p ]} {q' : b == b' [ B ↓ p' ]}
(β : q == q' [ (λ u → b == b' [ B ↓ u ]) ↓ α ])
→ pair= p q == pair= p' q'
pair== idp idp = idp
module _ {i j} {A : Type i} {B : Type j} where
fst×= : {ab a'b' : A × B} (p : ab == a'b') → (fst ab == fst a'b')
fst×= = ap fst
snd×= : {ab a'b' : A × B} (p : ab == a'b')
→ (snd ab == snd a'b')
snd×= = ap snd
fst×=-β : {a a' : A} (p : a == a')
{b b' : B} (q : b == b')
→ fst×= (pair×= p q) == p
fst×=-β idp idp = idp
snd×=-β : {a a' : A} (p : a == a')
{b b' : B} (q : b == b')
→ snd×= (pair×= p q) == q
snd×=-β idp idp = idp
pair×=-η : {ab a'b' : A × B} (p : ab == a'b')
→ p == pair×= (fst×= p) (snd×= p)
pair×=-η {._} {_} idp = idp
module _ {i j} {A : Type i} {B : A → Type j} where
=Σ : (x y : Σ A B) → Type (lmax i j)
=Σ (a , b) (a' , b') = Σ (a == a') (λ p → b == b' [ B ↓ p ])
=Σ-econv : (x y : Σ A B) → (=Σ x y) ≃ (x == y)
=Σ-econv x y =
equiv (λ pq → pair= (fst pq) (snd pq)) (λ p → fst= p , snd= p)
(λ p → ! (pair=-η p))
(λ pq → pair= (fst=-β (fst pq) (snd pq)) (snd=-β (fst pq) (snd pq)))
=Σ-conv : (x y : Σ A B) → (=Σ x y) == (x == y)
=Σ-conv x y = ua (=Σ-econv x y)
Σ= : ∀ {i j} {A A' : Type i} (p : A == A') {B : A → Type j} {B' : A' → Type j}
(q : B == B' [ (λ X → (X → Type j)) ↓ p ]) → Σ A B == Σ A' B'
Σ= idp idp = idp
instance
Σ-level : ∀ {i j} {n : ℕ₋₂} {A : Type i} {P : A → Type j}
→ has-level n A → ((x : A) → has-level n (P x))
→ has-level n (Σ A P)
Σ-level {n = ⟨-2⟩} p q = has-level-in ((contr-center p , (contr-center (q (contr-center p)))) , lemma)
where abstract lemma = λ y → pair= (contr-path p _) (from-transp! _ _ (contr-path (q _) _))
Σ-level {n = S n} p q = has-level-in lemma where
abstract
lemma = λ x y → equiv-preserves-level (=Σ-econv x y)
{{Σ-level (has-level-apply p _ _) (λ _ →
equiv-preserves-level ((to-transp-equiv _ _)⁻¹) {{has-level-apply (q _) _ _}})}}
×-level : ∀ {i j} {n : ℕ₋₂} {A : Type i} {B : Type j}
→ (has-level n A → has-level n B → has-level n (A × B))
×-level pA pB = Σ-level pA (λ x → pB)
-- Equivalences in a Σ-type
Σ-fmap-l : ∀ {i j k} {A : Type i} {B : Type j} (P : B → Type k)
→ (f : A → B) → (Σ A (P ∘ f) → Σ B P)
Σ-fmap-l P f (a , r) = (f a , r)
×-fmap-l : ∀ {i₀ i₁ j} {A₀ : Type i₀} {A₁ : Type i₁} (B : Type j)
→ (f : A₀ → A₁) → (A₀ × B → A₁ × B)
×-fmap-l B = Σ-fmap-l (λ _ → B)
Σ-isemap-l : ∀ {i j k} {A : Type i} {B : Type j} (P : B → Type k) {h : A → B}
→ is-equiv h → is-equiv (Σ-fmap-l P h)
Σ-isemap-l {A = A} {B = B} P {h} e = is-eq _ g f-g g-f
where f = Σ-fmap-l P h
g : Σ B P → Σ A (P ∘ h)
g (b , s) = (is-equiv.g e b , transport P (! (is-equiv.f-g e b)) s)
f-g : ∀ y → f (g y) == y
f-g (b , s) = pair= (is-equiv.f-g e b) (transp-↓ P (is-equiv.f-g e b) s)
g-f : ∀ x → g (f x) == x
g-f (a , r) =
pair= (is-equiv.g-f e a)
(transport (λ q → transport P (! q) r == r [ P ∘ h ↓ is-equiv.g-f e a ])
(is-equiv.adj e a)
(transp-ap-↓ P h (is-equiv.g-f e a) r))
×-isemap-l : ∀ {i₀ i₁ j} {A₀ : Type i₀} {A₁ : Type i₁} (B : Type j) {h : A₀ → A₁}
→ is-equiv h → is-equiv (×-fmap-l B h)
×-isemap-l B = Σ-isemap-l (λ _ → B)
Σ-emap-l : ∀ {i j k} {A : Type i} {B : Type j} (P : B → Type k)
→ (e : A ≃ B) → (Σ A (P ∘ –> e) ≃ Σ B P)
Σ-emap-l P (f , e) = _ , Σ-isemap-l P e
×-emap-l : ∀ {i₀ i₁ j} {A₀ : Type i₀} {A₁ : Type i₁} (B : Type j)
→ (e : A₀ ≃ A₁) → (A₀ × B ≃ A₁ × B)
×-emap-l B = Σ-emap-l (λ _ → B)
Σ-fmap-r : ∀ {i j k} {A : Type i} {B : A → Type j} {C : A → Type k}
→ (∀ x → B x → C x) → (Σ A B → Σ A C)
Σ-fmap-r h (a , b) = (a , h a b)
×-fmap-r : ∀ {i j₀ j₁} (A : Type i) {B₀ : Type j₀} {B₁ : Type j₁}
→ (h : B₀ → B₁) → (A × B₀ → A × B₁)
×-fmap-r A h = Σ-fmap-r (λ _ → h)
Σ-isemap-r : ∀ {i j k} {A : Type i} {B : A → Type j} {C : A → Type k}
{h : ∀ x → B x → C x} → (∀ x → is-equiv (h x)) → is-equiv (Σ-fmap-r h)
Σ-isemap-r {A = A} {B = B} {C = C} {h} k = is-eq _ g f-g g-f
where f = Σ-fmap-r h
g : Σ A C → Σ A B
g (a , c) = (a , is-equiv.g (k a) c)
f-g : ∀ p → f (g p) == p
f-g (a , c) = pair= idp (is-equiv.f-g (k a) c)
g-f : ∀ p → g (f p) == p
g-f (a , b) = pair= idp (is-equiv.g-f (k a) b)
×-isemap-r : ∀ {i j₀ j₁} (A : Type i) {B₀ : Type j₀} {B₁ : Type j₁}
→ {h : B₀ → B₁} → is-equiv h → is-equiv (×-fmap-r A h)
×-isemap-r A e = Σ-isemap-r (λ _ → e)
Σ-emap-r : ∀ {i j k} {A : Type i} {B : A → Type j} {C : A → Type k}
→ (∀ x → B x ≃ C x) → (Σ A B ≃ Σ A C)
Σ-emap-r k = _ , Σ-isemap-r (λ x → snd (k x))
×-emap-r : ∀ {i j₀ j₁} (A : Type i) {B₀ : Type j₀} {B₁ : Type j₁}
→ (e : B₀ ≃ B₁) → (A × B₀ ≃ A × B₁)
×-emap-r A e = Σ-emap-r (λ _ → e)
hfiber-Σ-fmap-r : ∀ {i j k} {A : Type i} {B : A → Type j} {C : A → Type k}
→ (h : ∀ x → B x → C x) → {a : A} → (c : C a)
→ hfiber (Σ-fmap-r h) (a , c) ≃ hfiber (h a) c
hfiber-Σ-fmap-r h {a} c = equiv to from to-from from-to
where to : hfiber (Σ-fmap-r h) (a , c) → hfiber (h a) c
to ((_ , b) , idp) = b , idp
from : hfiber (h a) c → hfiber (Σ-fmap-r h) (a , c)
from (b , idp) = (a , b) , idp
abstract
to-from : (x : hfiber (h a) c) → to (from x) == x
to-from (b , idp) = idp
from-to : (x : hfiber (Σ-fmap-r h) (a , c)) → from (to x) == x
from-to ((_ , b) , idp) = idp
{-
-- 2016/08/20 favonia: no one is using the following two functions.
-- Two ways of simultaneously applying equivalences in each component.
module _ {i₀ i₁ j₀ j₁} {A₀ : Type i₀} {A₁ : Type i₁}
{B₀ : A₀ → Type j₀} {B₁ : A₁ → Type j₁} where
Σ-emap : (u : A₀ ≃ A₁) (v : ∀ a → B₀ (<– u a) ≃ B₁ a) → Σ A₀ B₀ ≃ Σ A₁ B₁
Σ-emap u v = Σ A₀ B₀ ≃⟨ Σ-emap-l _ (u ⁻¹) ⁻¹ ⟩
Σ A₁ (B₀ ∘ <– u) ≃⟨ Σ-emap-r v ⟩
Σ A₁ B₁ ≃∎
Σ-emap' : (u : A₀ ≃ A₁) (v : ∀ a → B₀ a ≃ B₁ (–> u a)) → Σ A₀ B₀ ≃ Σ A₁ B₁
Σ-emap' u v = Σ A₀ B₀ ≃⟨ Σ-emap-r v ⟩
Σ A₀ (B₁ ∘ –> u) ≃⟨ Σ-emap-l _ u ⟩
Σ A₁ B₁ ≃∎
-}
×-fmap : ∀ {i₀ i₁ j₀ j₁} {A₀ : Type i₀} {A₁ : Type i₁} {B₀ : Type j₀} {B₁ : Type j₁}
→ (h : A₀ → A₁) (k : B₀ → B₁) → (A₀ × B₀ → A₁ × B₁)
×-fmap u v = ×-fmap-r _ v ∘ ×-fmap-l _ u
⊙×-fmap : ∀ {i i' j j'} {X : Ptd i} {X' : Ptd i'} {Y : Ptd j} {Y' : Ptd j'}
→ X ⊙→ X' → Y ⊙→ Y' → X ⊙× Y ⊙→ X' ⊙× Y'
⊙×-fmap (f , fpt) (g , gpt) = ×-fmap f g , pair×= fpt gpt
×-isemap : ∀ {i₀ i₁ j₀ j₁} {A₀ : Type i₀} {A₁ : Type i₁} {B₀ : Type j₀} {B₁ : Type j₁}
{h : A₀ → A₁} {k : B₀ → B₁} → is-equiv h → is-equiv k → is-equiv (×-fmap h k)
×-isemap eh ek = ×-isemap-r _ ek ∘ise ×-isemap-l _ eh
×-emap : ∀ {i₀ i₁ j₀ j₁} {A₀ : Type i₀} {A₁ : Type i₁} {B₀ : Type j₀} {B₁ : Type j₁}
→ (u : A₀ ≃ A₁) (v : B₀ ≃ B₁) → (A₀ × B₀ ≃ A₁ × B₁)
×-emap u v = ×-emap-r _ v ∘e ×-emap-l _ u
⊙×-emap : ∀ {i i' j j'} {X : Ptd i} {X' : Ptd i'} {Y : Ptd j} {Y' : Ptd j'}
→ X ⊙≃ X' → Y ⊙≃ Y' → X ⊙× Y ⊙≃ X' ⊙× Y'
⊙×-emap (F , F-ise) (G , G-ise) = ⊙×-fmap F G , ×-isemap F-ise G-ise
-- Implementation of [_∙'_] on Σ
Σ-∙' : ∀ {i j} {A : Type i} {B : A → Type j}
{x y z : A} {p : x == y} {p' : y == z}
{u : B x} {v : B y} {w : B z}
(q : u == v [ B ↓ p ]) (r : v == w [ B ↓ p' ])
→ (pair= p q ∙' pair= p' r) == pair= (p ∙' p') (q ∙'ᵈ r)
Σ-∙' {p = idp} {p' = idp} q idp = idp
-- Implementation of [_∙_] on Σ
Σ-∙ : ∀ {i j} {A : Type i} {B : A → Type j}
{x y z : A} {p : x == y} {p' : y == z}
{u : B x} {v : B y} {w : B z}
(q : u == v [ B ↓ p ]) (r : v == w [ B ↓ p' ])
→ (pair= p q ∙ pair= p' r) == pair= (p ∙ p') (q ∙ᵈ r)
Σ-∙ {p = idp} {p' = idp} idp r = idp
-- Implementation of [!] on Σ
Σ-! : ∀ {i j} {A : Type i} {B : A → Type j}
{x y : A} {p : x == y}
{u : B x} {v : B y}
(q : u == v [ B ↓ p ])
→ ! (pair= p q) == pair= (! p) (!ᵈ q)
Σ-! {p = idp} idp = idp
-- Implementation of [_∙'_] on ×
×-∙' : ∀ {i j} {A : Type i} {B : Type j}
{x y z : A} (p : x == y) (p' : y == z)
{u v w : B} (q : u == v) (q' : v == w)
→ (pair×= p q ∙' pair×= p' q') == pair×= (p ∙' p') (q ∙' q')
×-∙' idp idp q idp = idp
-- Implementation of [_∙_] on ×
×-∙ : ∀ {i j} {A : Type i} {B : Type j}
{x y z : A} (p : x == y) (p' : y == z)
{u v w : B} (q : u == v) (q' : v == w)
→ (pair×= p q ∙ pair×= p' q') == pair×= (p ∙ p') (q ∙ q')
×-∙ idp idp idp r = idp
-- Implementation of [!] on ×
×-! : ∀ {i j} {A : Type i} {B : Type j}
{x y : A} (p : x == y) {u v : B} (q : u == v)
→ ! (pair×= p q) == pair×= (! p) (! q)
×-! idp idp = idp
-- Special case of [ap-,]
ap-cst,id : ∀ {i j} {A : Type i} (B : A → Type j)
{a : A} {x y : B a} (p : x == y)
→ ap (λ x → _,_ {B = B} a x) p == pair= idp p
ap-cst,id B idp = idp
-- hfiber fst == B
module _ {i j} {A : Type i} {B : A → Type j} where
private
to : ∀ a → hfiber (fst :> (Σ A B → A)) a → B a
to a ((.a , b) , idp) = b
from : ∀ (a : A) → B a → hfiber (fst :> (Σ A B → A)) a
from a b = (a , b) , idp
to-from : ∀ (a : A) (b : B a) → to a (from a b) == b
to-from a b = idp
from-to : ∀ a b′ → from a (to a b′) == b′
from-to a ((.a , b) , idp) = idp
hfiber-fst : ∀ a → hfiber (fst :> (Σ A B → A)) a ≃ B a
hfiber-fst a = to a , is-eq (to a) (from a) (to-from a) (from-to a)
{- Dependent paths in a Σ-type -}
module _ {i j k} {A : Type i} {B : A → Type j} {C : (a : A) → B a → Type k}
where
↓-Σ-in : {x x' : A} {p : x == x'} {r : B x} {r' : B x'}
{s : C x r} {s' : C x' r'}
(q : r == r' [ B ↓ p ])
→ s == s' [ uncurry C ↓ pair= p q ]
→ (r , s) == (r' , s') [ (λ x → Σ (B x) (C x)) ↓ p ]
↓-Σ-in {p = idp} idp t = pair= idp t
↓-Σ-fst : {x x' : A} {p : x == x'} {r : B x} {r' : B x'}
{s : C x r} {s' : C x' r'}
→ (r , s) == (r' , s') [ (λ x → Σ (B x) (C x)) ↓ p ]
→ r == r' [ B ↓ p ]
↓-Σ-fst {p = idp} u = fst= u
↓-Σ-snd : {x x' : A} {p : x == x'} {r : B x} {r' : B x'}
{s : C x r} {s' : C x' r'}
→ (u : (r , s) == (r' , s') [ (λ x → Σ (B x) (C x)) ↓ p ])
→ s == s' [ uncurry C ↓ pair= p (↓-Σ-fst u) ]
↓-Σ-snd {p = idp} idp = idp
↓-Σ-β-fst : {x x' : A} {p : x == x'} {r : B x} {r' : B x'}
{s : C x r} {s' : C x' r'}
(q : r == r' [ B ↓ p ])
(t : s == s' [ uncurry C ↓ pair= p q ])
→ ↓-Σ-fst (↓-Σ-in q t) == q
↓-Σ-β-fst {p = idp} idp idp = idp
↓-Σ-β-snd : {x x' : A} {p : x == x'} {r : B x} {r' : B x'}
{s : C x r} {s' : C x' r'}
(q : r == r' [ B ↓ p ])
(t : s == s' [ uncurry C ↓ pair= p q ])
→ ↓-Σ-snd (↓-Σ-in q t) == t
[ (λ q' → s == s' [ uncurry C ↓ pair= p q' ]) ↓ ↓-Σ-β-fst q t ]
↓-Σ-β-snd {p = idp} idp idp = idp
↓-Σ-η : {x x' : A} {p : x == x'} {r : B x} {r' : B x'}
{s : C x r} {s' : C x' r'}
(u : (r , s) == (r' , s') [ (λ x → Σ (B x) (C x)) ↓ p ])
→ ↓-Σ-in (↓-Σ-fst u) (↓-Σ-snd u) == u
↓-Σ-η {p = idp} idp = idp
{- Dependent paths in a ×-type -}
module _ {i j k} {A : Type i} {B : A → Type j} {C : A → Type k}
where
↓-×-in : {x x' : A} {p : x == x'} {r : B x} {r' : B x'}
{s : C x} {s' : C x'}
→ r == r' [ B ↓ p ]
→ s == s' [ C ↓ p ]
→ (r , s) == (r' , s') [ (λ x → B x × C x) ↓ p ]
↓-×-in {p = idp} q t = pair×= q t
{- Dependent paths over a ×-type -}
module _ {i j k} {A : Type i} {B : Type j} (C : A → B → Type k)
where
↓-over-×-in : {x x' : A} {p : x == x'} {y y' : B} {q : y == y'}
{u : C x y} {v : C x' y} {w : C x' y'}
→ u == v [ (λ a → C a y) ↓ p ]
→ v == w [ (λ b → C x' b) ↓ q ]
→ u == w [ uncurry C ↓ pair×= p q ]
↓-over-×-in {p = idp} {q = idp} idp idp = idp
↓-over-×-in' : {x x' : A} {p : x == x'} {y y' : B} {q : y == y'}
{u : C x y} {v : C x y'} {w : C x' y'}
→ u == v [ (λ b → C x b) ↓ q ]
→ v == w [ (λ a → C a y') ↓ p ]
→ u == w [ uncurry C ↓ pair×= p q ]
↓-over-×-in' {p = idp} {q = idp} idp idp = idp
module _ where
-- An orphan lemma.
↓-cst×app-in : ∀ {i j k} {A : Type i}
{B : Type j} {C : A → B → Type k}
{a₁ a₂ : A} (p : a₁ == a₂)
{b₁ b₂ : B} (q : b₁ == b₂)
{c₁ : C a₁ b₁}{c₂ : C a₂ b₂}
→ c₁ == c₂ [ uncurry C ↓ pair×= p q ]
→ (b₁ , c₁) == (b₂ , c₂) [ (λ x → Σ B (C x)) ↓ p ]
↓-cst×app-in idp idp idp = idp
{- pair= and pair×= where one argument is reflexivity -}
pair=-idp-l : ∀ {i j} {A : Type i} {B : A → Type j} (a : A) {b₁ b₂ : B a}
(q : b₁ == b₂) → pair= {B = B} idp q == ap (λ y → (a , y)) q
pair=-idp-l _ idp = idp
pair×=-idp-l : ∀ {i j} {A : Type i} {B : Type j} (a : A) {b₁ b₂ : B}
(q : b₁ == b₂) → pair×= idp q == ap (λ y → (a , y)) q
pair×=-idp-l _ idp = idp
pair×=-idp-r : ∀ {i j} {A : Type i} {B : Type j} {a₁ a₂ : A} (p : a₁ == a₂)
(b : B) → pair×= p idp == ap (λ x → (x , b)) p
pair×=-idp-r idp _ = idp
pair×=-split-l : ∀ {i j} {A : Type i} {B : Type j} {a₁ a₂ : A} (p : a₁ == a₂)
{b₁ b₂ : B} (q : b₁ == b₂)
→ pair×= p q == ap (λ a → (a , b₁)) p ∙ ap (λ b → (a₂ , b)) q
pair×=-split-l idp idp = idp
pair×=-split-r : ∀ {i j} {A : Type i} {B : Type j} {a₁ a₂ : A} (p : a₁ == a₂)
{b₁ b₂ : B} (q : b₁ == b₂)
→ pair×= p q == ap (λ b → (a₁ , b)) q ∙ ap (λ a → (a , b₂)) p
pair×=-split-r idp idp = idp
-- Commutativity of products and derivatives.
module _ {i j} {A : Type i} {B : Type j} where
×-comm : Σ A (λ _ → B) ≃ Σ B (λ _ → A)
×-comm = equiv (λ {(a , b) → (b , a)}) (λ {(b , a) → (a , b)}) (λ _ → idp) (λ _ → idp)
module _ {i j k} {A : Type i} {B : A → Type j} {C : A → Type k} where
Σ₂-×-comm : Σ (Σ A B) (λ ab → C (fst ab)) ≃ Σ (Σ A C) (λ ac → B (fst ac))
Σ₂-×-comm = Σ-assoc ⁻¹ ∘e Σ-emap-r (λ a → ×-comm) ∘e Σ-assoc
module _ {i j k} {A : Type i} {B : Type j} {C : A → B → Type k} where
Σ₁-×-comm : Σ A (λ a → Σ B (λ b → C a b)) ≃ Σ B (λ b → Σ A (λ a → C a b))
Σ₁-×-comm = Σ-assoc ∘e Σ-emap-l _ ×-comm ∘e Σ-assoc ⁻¹
|
-- -------------------------------------------------------------- [ Parser.idr ]
-- Module : UML.Class.Parser
-- Description :
-- Copyright : (c) Jan de Muijnck-Hughes
-- License : see LICENSE
-- --------------------------------------------------------------------- [ EOH ]
module UML.Class.Parser
import Lightyear
import Lightyear.Strings
import UML.Class.Model
import UML.Types
import UML.Utils.Parsing
%access private
-- -------------------------------------------------------------------- [ Misc ]
modifier : Parser Modifier
modifier = do token "static"
pure Static
<|> do token "abstract"
pure Abstract
<?> "Modifier"
visibility : Parser Visibility
visibility = do token "-"
pure Private
<|> do token "+"
pure Public
<|> do token "~"
pure Package
<|> do token "#"
pure Protected
<?> "Visibility"
-- ---------------------------------------------------------------- [ Relation ]
description : Parser String
description = do
colon
x <- literallyBetween '\"'
pure x
relationType : Parser RelationTy
relationType = do token "<|--"
pure Specialisation
<|> do token "o--"
pure Composition
<|> do token "*--"
pure Aggregation
<|> do token "<--"
pure Realisation
<|> do token "-->"
pure Association
<?> "Relation Type"
relation : Parser $ ClassModel RELA
relation = do
f <- ident <* space
relTy <- relationType
t <- ident <* space
desc <- opt description
pure $ Relation relTy f t desc
-- ------------------------------------------------------------------ [ Params ]
param : Parser $ ClassModel PARAM
param = do
id <- ident <* space
colon
ty <- ident <* space
pure $ Param id ty
<?> "Param"
params : Parser $ List $ ClassModel PARAM
params = sepBy1 param comma <?> "Params"
-- ----------------------------------------------------------------- [ Methods ]
method : Parser $ ClassModel METHOD
method = do
mod <- opt $ braces modifier
vis <- visibility
id <- ident
ps <- parens $ opt params
colon
ty <- ident
pure $ Method id ty mod vis $ fromMaybe Nil ps
<?> "Method"
-- -------------------------------------------------------------- [ Attributes ]
attribute : Parser $ ClassModel ATTR
attribute = do
mod <- opt $ braces modifier
vis <- visibility
id <- ident <* space
colon
ty <- ident
pure $ Attribute id ty mod vis
<?> "Attribute"
-- ------------------------------------------------------------------- [ Class ]
classty : Parser ClassTy
classty = do token "Abstract"
token "Class"
return ClassAbstract
<|> do token "Class"
return ClassStandard
<|> do token "Interface"
return ClassInterface
<?> "Class Type"
classDecl : Parser (ClassTy, String)
classDecl = do
t <- classty
i <- ident
space
pure (t,i)
<?> "Class Delcaration"
emptyClass : Parser $ ClassModel CLASS
emptyClass = do
(ty,id) <- classDecl
let c = Clazz id ty Nil Nil
pure c
<?> "Empty Class"
element : Parser (Maybe $ ClassModel ATTR, Maybe $ ClassModel METHOD)
element = do a <- attribute
eol
pure (Just a, Nothing)
<|> do m <- method
eol
pure (Nothing, Just m)
<?> "Element"
classBody : Parser (List $ ClassModel ATTR, List $ ClassModel METHOD)
classBody = do
es <- some element
let (as', ms') = unzip es
pure (catMaybes as', catMaybes ms')
bodyClass : Parser $ ClassModel CLASS
bodyClass = do
(ty, id) <- classDecl
(as, ms) <- braces classBody
let c = Clazz id ty as ms
pure c
clazz : Parser $ ClassModel CLASS
clazz = bodyClass <|> emptyClass <?> "Class"
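-- Hypothetical input accepted by this parser, for illustration only:
--
--     Class Dog {
--         - name : String
--         + getName() : String
--     }
--     Dog --> Owner : "owned by"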
-- ----------------------------------------------------------- [ Class Diagram ]
public
classModel : Parser UML
classModel = do
cs <- some (clazz <* space)
rs <- some (relation <* space)
pure $ Class $ MkClassModel cs rs
<?> "Class Diagram"
-- --------------------------------------------------------------------- [ EOF ]
|
SUBROUTINE gammas
!-------------------------------------------------------------
!
! Compute gamma elements from known analytic form of gamma
!
!-------------------------------------------------------------
USE solvar
IMPLICIT NONE
REAL*8 :: alf
! Set the scalar alpha
alf = 1.0d0/(-0.125d0-0.25d0*ez-0.25d0*ey-0.25d0*ex)
! Now need 16 formulas to make 16 gamma elements
! gaa
gaa = alf*(-0.125d0)
! gax, gaxz, gayz
gaxy = alf*(-0.25d0)*ez
gaxz = alf*(-0.25d0)*ey
gayz = alf*(-0.25d0)*ex
! gxya, gxza, gyza
gxya = alf*(-0.25d0)
gxza = gxya
gyza = gxya
! gxyxy, gxyxz, gxyyz, gxzxy, gxzxz, gxzyz, gyzxy, gyzxz, gyzyz
gxyxy = alf*(0.5d0*(0.25d0+0.5d0*(ey+ex))-0.25d0*ez)
gxyxz = 2.0d0*gaxz
gxyyz = 2.0d0*gayz
gxzxy = 2.0d0*gaxy
gxzxz = alf*(0.5d0*(0.25d0+0.5d0*(ez+ex))-0.25d0*ey)
gxzyz = 2.0d0*gayz
gyzxy = 2.0d0*gaxy
gyzxz = 2.0d0*gaxz
gyzyz = alf*(0.5d0*(0.25d0+0.5d0*(ez+ey))-0.25d0*ex)
RETURN
END SUBROUTINE gammas
|
% Copyright 2019 by Renée Ahrens, Olof Frahm, Jens Kluttig, Matthias Schulz, Stephan Schuster
% Copyright 2019 by Till Tantau
% Copyright 2019 by Jannis Pohlmann
%
% This file may be distributed and/or modified
%
% 1. under the LaTeX Project Public License and/or
% 2. under the GNU Free Documentation License.
%
% See the file doc/generic/pgf/licenses/LICENSE for more details.
\section{The Algorithm Layer}
\label{section-gd-algorithm-layer}
\noindent{\emph{by Till Tantau}}
\ifluatex
\else
This section of the manual can only be typeset using Lua\TeX.
\expandafter\endinput
\fi
\subsection{Overview}
The present section is addressed at readers interested in implementing new
graph drawing algorithms for the graph drawing system. Obviously, in order to
do so, you need to have an algorithm in mind and also some programming skills;
but fortunately only in the Lua programming language: Even though the graph
drawing system was originally developed as an extension of \tikzname, it has
been restructured so that the ``algorithm layer'' where you define algorithms
is scrupulously separated from \tikzname. In particular, an algorithm declared
and implemented on this layer can be used in with every ``display layers'', see
Section~\ref{section-gd-display-layer}, without change. Nevertheless, in the
following we will use the \tikzname\ display layer and syntax in our examples.
Normally, new graph drawing algorithms can and must be implemented in the Lua
programming language, which is a small, easy-to-learn (and quite beautiful)
language integrated into current versions of \TeX. However, as explained in
Section~\ref{section-algorithms-in-c}, you can also implement algorithms in C
or C++ (and, possibly, in the future also in other languages), but this comes
at a great cost concerning portability. In the present section, I assume that
you are only interested in writing an algorithm using Lua.
In the following, after a small ``hello world'' example of graph drawing and a
discussion of technical details like how to name files so that \TeX\ will find
them, we have a look at the main parts of the algorithm layer:
%
\begin{itemize}
\item Section~\ref{section-gd-namespaces} gives an overview of the
available namespaces and also of naming conventions used in the graph
drawing system.
\item Section~\ref{section-gd-gd-scope} explores what graph drawing scopes
``look like on the algorithm layer''. As the graph of a graph drawing
scope is being parsed on the display layer, a lot of information is
gathered: The nodes and edges of the graph are identified and the
object-oriented model is built, but other information is also
collected. For instance, a sequence of \emph{events} is created during
the parsing process. As another example, numerous kinds of
\emph{collections} may be identified by the parser. The parsed graph
together with the event sequence and the collections are all gathered
in a single table, called the \emph{scope table} of the current graph
drawing scope. Algorithms can access this table to retrieve information
that goes beyond the ``pure'' graph model.
One entry in this table is of particular importance: The
\emph{syntactic digraph.} While most graph drawing algorithms are not
really interested in the ``details'' of how a graph was specified, for
some algorithms it makes a big difference whether you write |a -> b| or
|b <- a| in your specification of the graph. These algorithms can
access the ``fine details'' of how the input graph was specified
through the syntactic digraph; all other algorithms can access their
|digraph| or |ugraph| fields and do not have to worry about the
difference between |a -> b| and |b <- a|.
\item Section~\ref{section-gd-models} explains the object-oriented model of
graphs used throughout the graph drawing system. Graph drawing
algorithms do not get the ``raw'' specification used by the user to
specify a graph (like |{a -> {b,c}}| in the |graph| syntax). Instead,
what a graph drawing algorithm sees is ``just'' a graph object that
provides methods for accessing the vertices and arcs.
\item Section~\ref{section-gd-transformations} explains how the information
in the graph drawing scope is processed. One might expect that we
simply run the algorithm selected by the user; however, things are more
involved in practice. When the layout of a graph needs to be computed,
only very few algorithms will actually be able to compute positions for
the nodes of \emph{every} graph. For instance, most algorithms
implicitly assume that the input graph is connected; algorithms for
computing layouts for trees assume that the input is, well, a tree; and
so on. For this reason, graph drawing algorithms will not actually need
the original input graph as their input, but some \emph{transformed}
version of it. Indeed, \emph{all} graph drawing algorithms are treated
as graph transformations by the graph drawing engine.
This section explains how transformations are chosen and which
transformations are applied by default.
\item Section~\ref{section-gd-interface-to-algorithms} documents the
interface-to-algorithm class. This interface encapsulates all that an
algorithm ``sees'' of the graph drawing system (apart from the classes
in |model| and |lib|).
\item Section~\ref{section-gd-examples} provides a number of complete
examples that show how graph drawing algorithms can actually be
implemented.
\item Section~\ref{section-gd-libs} documents the different library
functions that come with the graph drawing engine. For instance, there
are library functions for computing the (path) distance of nodes in a
graph, a parameter that is needed by some algorithms.
\end{itemize}
\subsection{Getting Started}
In this section, a ``hello world'' example of a graph drawing algorithm is
given, followed by an overview of the organization of the whole engine.
\subsubsection{The Hello World of Graph Drawing}
Let us start our tour of the algorithm layer with a ``hello world'' version of
graph drawing: An algorithm that simply places all nodes of a graph in a circle
of a fixed radius. Naturally, this is not a particularly impressive or
intelligent graph drawing algorithm; but neither is the classical ``hello
world''$\dots$\ Here is a minimal version of the needed code (this is not the
typical way of formulating the code, but it is the shortest; we will have a
look at the more standard and verbose way in a moment):
%
\begin{codeexample}[code only, tikz syntax=false]
pgf.gd.interface.InterfaceToAlgorithms.declare {
key = "very simple demo layout",
algorithm = {
run =
function (self)
local alpha = (2 * math.pi) / #self.ugraph.vertices
for i,vertex in ipairs(self.ugraph.vertices) do
vertex.pos.x = math.cos(i * alpha) * 25
vertex.pos.y = math.sin(i * alpha) * 25
end
end
}
}
\end{codeexample}
\directlua{
pgf.gd.interface.InterfaceToAlgorithms.declare {
key = "very simple demo layout",
algorithm = {
run =
function (self)
local alpha = (2 * math.pi) / \luaescapestring{#}self.ugraph.vertices
for i,vertex in ipairs(self.ugraph.vertices) do
vertex.pos.x = math.cos(i * alpha) * 25
vertex.pos.y = math.sin(i * alpha) * 25
end
end
}
}
}
This code \emph{declares} a new algorithm (|very simple demo layout|) and
includes an implementation of the algorithm (through the |run| field of the
|algorithm| field). When the |run| method is called, the |self| parameter will
contain the to-be-drawn graph in its |ugraph| field. It is now the job of the
code to modify the positions of the vertices in this graph (in the example,
this is done by assigning values to |vertex.pos.x| and |vertex.pos.y|).
In order to actually \emph{use} the algorithm, the above code first needs to be
executed somehow. For \tikzname, one can just call |\directlua| on it, or put
it in a file and then use |\directlua| plus |require| (a better alternative),
or put it in a file like |simpledemo.lua| and use |\usegdlibrary{simpledemo}|
(undoubtedly the ``best'' way). For another display layer, like a graphical
editor, the code could also be executed through the use of |require|.
Executing the code ``just'' declares the algorithm; this is what the |declare|
function does. Inside some internal tables, the algorithm layer will store the
fact that a |very simple demo layout| is now available. The algorithm layer
will also communicate with the display layer through the binding layer to
advertise this fact to the ``user''. In the case of \tikzname, this means that
the option key |very simple demo layout| becomes available at this point and we
can use it like this:
%
\begin{codeexample}[]
\tikz [very simple demo layout]
\graph { f -> c -> e -> a -> {b -> {c, d, f}, e -> b}};
\end{codeexample}
It turns out that our little algorithm is already more powerful than one might
expect. Consider the following example:
%
\begin{codeexample}[]
\tikz [very simple demo layout, componentwise]
\graph {
1 -> 2 ->[orient=right] 3 -> 1;
a -- b --[orient=45] c -- d -- a;
};
\end{codeexample}
Note that, in our algorithm, we ``just'' put all nodes on a circle around the
origin. Nevertheless, the graph gets decomposed into two connected components,
the components are rotated so that the edge from node |2| to node |3| goes from
left to right and the edge from |b| to |c| goes up at an angle of $45^\circ$,
and the components are placed next to each other so that some spacing is
achieved.
The ``magic'' that achieves all this behind the scenes is called ``graph
transformations''. They will heavily pre- and postprocess the input and output
of graph drawing algorithms to achieve the above results.
Naturally, some algorithms may not wish their inputs and/or outputs to be
``tampered'' with. An algorithm can easily configure which transformations
should be applied by passing appropriate options to |declare|.
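For instance, an algorithm that wishes to always be handed connected graphs
can say so when it is declared. The following is only a sketch; the exact set
of precondition and postcondition options is documented together with
|declare| in Section~\ref{section-gd-interface-to-algorithms}:
%
\begin{codeexample}[code only, tikz syntax=false]
pgf.gd.interface.InterfaceToAlgorithms.declare {
  key = "very simple demo layout",
  preconditions = { connected = true }, -- hand us one component at a time
  algorithm = { ... }
}
\end{codeexample}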
\subsubsection{Declaring an Algorithm}
Let us now have a look at how one would ``really'' implement the example
algorithm. First of all, we place our algorithm in a separate file called, say,
|ExampleLayout.lua|. This way, by putting it in a separate file, all display
layers can easily install the algorithm at runtime by saying
|require "ExampleLayout"|.
Next, the |declare| function is needed quite often, so it makes sense to create
a short local name for it:
%
\begin{codeexample}[code only, tikz syntax=false]
-- This is the file ExampleLayout.lua
local declare = require "pgf.gd.interface.InterfaceToAlgorithms".declare
\end{codeexample}
The |declare| function is the work-horse of the algorithm layer. It takes a
table that contains at least a |key| field, which must be a unique string, and
some other fields that specify in more detail what kind of key is declared.
Once declared through a call of |declare|, the ``key'' can be used on the
display layer.
For declaring an algorithm, the table passed to |declare| must contain a field
|algorithm|. This field, in turn, must (normally) be set to a table that will
become the algorithm class. In the above example, our algorithm was so simple
that we could place the whole definition of the class inside the call of
|declare|, but normally the class is defined in more detail after the call to
|declare|:
%
\begin{codeexample}[code only, tikz syntax=false]
local ExampleClass = {} -- A local variable holding the class table
declare {
key = "very simple demo layout",
algorithm = ExampleClass
}
function ExampleClass:run ()
local alpha = (2 * math.pi) / #self.ugraph.vertices
...
end
\end{codeexample}
The effect of the |declare| will be that the table stored in |ExampleClass| is
setup to form a class in the sense of object-oriented programming. In
particular, a static |new| function is installed.
Now, whenever the user uses the key |very simple demo layout| on a graph, at
some point the graph drawing engine will create a new instance of the
|ExampleClass| using |new| and will then call the |run| method of this class.
The class can have any number of other methods, but |new| and |run| are the
only ones directly called by the graph drawing system.
\subsubsection{The Run Method}
The |run| method of an algorithm class lies at the heart of any graph drawing
algorithm. This method will be called whenever a graph needs to be laid out.
Upon this call, the |self| object will have some important fields set (a
small sketch using these fields follows the list):
%
\begin{itemize}
\item |ugraph| This stands for ``undirected graph'' and is the
``undirected'' version of the to-be-laid out graph. In this graph,
whenever there is an arc between $u$ and $v$, there is also an arc
between $v$ and $u$. It is obtained by considering the syntactic
digraph and then ``forgetting'' about the actual direction of the
edges.
When you have set certain |preconditions| in your algorithm class, like
|connected=true|, the |ugraph| will satisfy these conditions. In
particular, the |ugraph| typically will not be the underlying
undirected graph of the complete syntactic digraph, but rather of some
part of it. The use of (sub)layouts will also modify the syntactic
digraph in fancy ways.
Refer to this graph whenever your algorithm is ``most comfortable''
with an undirected graph, as is the case for instance for most
force-based algorithms.
\item |digraph| This stands for ``directed graph'' and is the
``semantically directed'' version of the to-be-laid out graph.
Basically, what happens is that reverse edges in the syntactic digraph
(an edge like |b <- a|) will yield an |Arc| from |a| to |b| in the
|digraph| while they yield a |b| to |a| arc and edge in the syntactic
digraph. Also, undirected edges like |a -- b| are replaced by directed
edges in both directions between the vertices.
\item |scope| The graph drawing scope.
\item |layout| The layout object for this graph. This is a collection of
kind |layout|.
\end{itemize}
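To make these fields a bit more concrete, here is a small sketch of a |run|
method that touches some of them (it merely stacks the vertices of the
|ugraph| vertically and counts the arcs of the |digraph|):
%
\begin{codeexample}[code only, tikz syntax=false]
function ExampleClass:run ()
  -- self.ugraph: the (possibly preprocessed) undirected input graph
  for i,vertex in ipairs(self.ugraph.vertices) do
    vertex.pos.x = 0
    vertex.pos.y = (i-1) * 25
  end
  -- self.digraph: same vertices, but semantically directed arcs
  local number_of_arcs = #self.digraph.arcs
end
\end{codeexample}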
\subsubsection{Loading Algorithms on Demand}
In order to use the |very simple demo layout| on the display layer, |declare|
must have been called for this key. However, we just saw that the |declare|
function takes the actual class table as parameter and, thus, whenever an
algorithm is declared, it is also completely loaded and compiled at this point.
This is not always desirable. A user may wish to include a number of libraries
in order to declare a large number of potentially useful algorithms, but will
not actually use all of them. Indeed, at least for large, complex algorithms,
it is preferable that the algorithm's code is loaded only when the algorithm is
used for the first time.
Such a ``loading of algorithms on demand'' is supported through the option of
setting the |algorithm| field in a |declare| to a string. This string must now
be the file name of a Lua file that contains the code of the actual algorithm.
When the key is actually used for the first time, this file will be loaded. It
must return a table that will be plugged into the |algorithm| field; so
subsequent usages of the key will not load the file again.
The net effect of all this is that you can place implementations of algorithms
in files separate from interface files that just contain the |declare| commands
for these algorithms. You will typically do this only for rather large
algorithms.
For our example, the code would look like this:
%
\begin{codeexample}[code only, tikz syntax=false]
-- File ExampleLayout.lua
local declare = require "pgf.gd.interface.InterfaceToAlgorithms".declare
declare {
key = "very simple demo layout",
algorithm = "ExampleLayoutImplementation"
}
\end{codeexample}
\begin{codeexample}[code only, tikz syntax=false]
-- File ExampleLayoutImplementation.lua
local ExampleClass = {}
function ExampleClass:run ()
local alpha = (2 * math.pi) / #self.ugraph.vertices
...
end
return ExampleClass
\end{codeexample}
\subsubsection{Declaring Options}
Let us now make our example algorithm a bit more ``configurable''. For this, we
use |declare| once more, but instead of the |algorithm| field, we use a |type|
field. This tells the display layer that the key is not used to select an
algorithm, but to configure ``something'' about the graph or about nodes or
edges.
In our example, we may wish to configure the radius of the graph. So, we
introduce a |radius| key (actually, this key already exists, so we would not
need to declare it, but let us do so anyway for example purposes):
%
\begin{codeexample}[code only, tikz syntax=false]
declare {
key = "radius",
type = "length",
initial = "25pt"
}
\end{codeexample}
This tells the display layer that there is now an option called |radius|, that
users can set it to some ``length'', and that if it is not set at all, then
25pt should be used.
To access what the user has specified for this key, an algorithm can access the
|options| field of a graph, vertex, or arc at the key's name:
%
\begin{codeexample}[code only, tikz syntax=false]
vertex.pos.x = math.cos(i * alpha) * vertex.options.radius
vertex.pos.y = math.sin(i * alpha) * vertex.options.radius
\end{codeexample}
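Putting the pieces together, the |run| method of our example algorithm becomes
configurable as follows (a sketch that merely combines the code fragments from
above):
%
\begin{codeexample}[code only, tikz syntax=false]
function ExampleClass:run ()
  local alpha = (2 * math.pi) / #self.ugraph.vertices
  for i,vertex in ipairs(self.ugraph.vertices) do
    -- the radius may now differ from vertex to vertex:
    vertex.pos.x = math.cos(i * alpha) * vertex.options.radius
    vertex.pos.y = math.sin(i * alpha) * vertex.options.radius
  end
end
\end{codeexample}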
\subsubsection{Adding Inline Documentation}
You should always document the keys you |declare|. For this, the |declare|
function allows you to add three fields to its argument table:
%
\begin{itemize}
\item |summary| This should be a string that succinctly summarizes the
effect this key has. The idea is that this text will be shown as a
``tooltip'' in a graphical editor or will be printed out by a command
line tool when a user requests help about the key. You can profit from
using Lua's |[[| and |]]| syntax for specifying multi-line strings.
Also, when the file containing the key is parsed for this manual, this
text will be shown.
\item |documentation| When present, this field contains a more extensive
documentation of the key. It will also be shown in this manual, but
typically not as a tool tip.
\item |examples| This should either be a single string or an array of
strings. Each string should be an example demonstrating how the key is
used in \tikzname. They will all be included in the manual, each
surrounded by a |codeexample| environment.
\end{itemize}
Let us augment our |radius| key with some documentation. The three dashes
before the |declare| are only needed when the declaration is part of this
manual and they will trigger an inclusion of the key in the manual.
%
\begin{codeexample}[code only, tikz syntax=false]
---
declare {
key = "radius",
type = "length",
initial = "25pt",
summary = [[
Specifies the radius of a circle on which the nodes are placed when
the |very simple demo layout| is used. Each vertex can have a
different radius.
]],
examples = [[
\tikz \graph [very simple demo layout, radius=2cm] {
a -- b -- c -- d -- e;
};
]]
}
\end{codeexample}
As a courtesy, all of the strings given in the documentation can start and end
with quotation marks, which will be removed. (This helps syntax highlighting
with editors that do not recognize the |[[| to |]]| syntax.) Also, the
indentation of the strings is removed (we compute the minimum number of leading
spaces on any line and remove this many spaces from all lines).
\subsubsection{Adding External Documentation}
\label{section-gd-documentation-in}
As an alternative to inlining documentation, you can also store the
documentation of keys in a separate file that is loaded only when the
documentation is actually accessed. Since this happens only rarely (for
instance, not at all, when \tikzname\ is run, except for this manual), this
will save time and space. Also, for C code, it is impractical to store
multi-line documentation strings directly in the C file.
In order to store documentation externally, instead of the |summary|,
|documentation|, and |examples| keys, you provide the key |documentation_in|.
The |documentation_in| key must be set to a string that is input using
|require|.
In detail, when someone tries to access the |summary|, |documentation|, or
|examples| field of a key and these fields are not (yet) defined, the system
checks whether the |documentation_in| key is set. If so, we apply |require| to
the string stored in this field. The file loaded in this way can now setup the
missing fields of the current key and, typically, also of all other keys
defined in the same file as the current key. For this purpose, it is advisable
to use the |pgf.gd.doc| class:
\includeluadocumentationof{pgf.gd.doc}
As a longer example, consider the following declarations:
%
\begin{codeexample}[code only, tikz syntax=false]
---
declare {
key = "very simple demo layout",
algorithm = ExampleClass,
documentation_in = "documentation_file"
}
---
declare {
key = "radius",
type = "length",
initial = "25pt",
documentation_in = "documentation_file"
}
\end{codeexample}
The file |documentation_file.lua| would look like this:
%
\begin{codeexample}[code only, tikz syntax=false]
-- File documentation_file.lua
local key = require 'pgf.gd.doc'.key
local documentation = require 'pgf.gd.doc'.documentation
local summary = require 'pgf.gd.doc'.summary
local example = require 'pgf.gd.doc'.example
key "very simple demo layout"
documentation "This layout is a very simple layout that, ..."
key "radius"
summary "Specifies the radius of a circle on which the nodes are placed."
documentation
[[
This key can be used together with |very simple demo layout|. An
important feature is that...
]]
example
[[
\tikz \graph [very simple demo layout, radius=2cm]
{ a -- b -- c -- d -- e; };
]]
\end{codeexample}
\subsection{Namespaces and File Names}
\label{section-gd-namespaces}
\subsubsection{Namespaces}
All parts of the |graphdrawing| library reside in the Lua ``namespace''
|pgf.gd|, which is itself a ``sub-namespace'' of |pgf|. For your own
algorithms, you are free to place them in whatever namespace you like; only for
the official distribution of \pgfname\ everything has been put into the correct
namespace.
Let us now have a more detailed look at these namespaces. A namespace is just a
Lua table, and sub-namespaces are just subtables of namespace tables. Following
the Java convention, namespaces are in lowercase letters. The following
namespaces are part of the core of the graph drawing engine:
%
\begin{itemize}
\item |pgf| This namespace is the main namespace of \pgfname. Other parts
of \pgfname\ and \tikzname\ that also employ Lua should put an entry
into this table. Since, currently, only the graph drawing engine
adheres to this rule, this namespace is declared inside the graph
drawing directory, but this will change.
The |pgf| table is the \emph{only} entry into the global table of Lua
generated by the graph drawing engine (or, \pgfname, for that matter).
If you intend to extend the graph drawing engine, do not even
\emph{think} of polluting the global namespace. You will be fined.
\item |pgf.gd| This namespace is the main namespace of the graph drawing
engine, including the object-oriented models of graphs and the layout
pipeline. Algorithms that are part of the distribution are also inside
this namespace, but if you write your own algorithms you do not need
place them inside this namespace. (Indeed, you probably should not
before they are made part of the official distribution.)
\item |pgf.gd.interface| This namespace handles, on the one hand, the
communication between the algorithm layer and the binding layer and, on
the other hand, the communication between the display layer (\tikzname)
and the binding layer.
\item |pgf.gd.binding| So-called ``bindings'' between display layers and
the graph drawing system reside in this namespace.
\item |pgf.gd.lib| Numerous useful classes that ``make an algorithm's
life easier'' are collected in this namespace. Examples are a class for
decomposing a graph into connected components or a class for computing
the ideal distance between two sibling nodes in a tree, taking all
sorts of rotations and separation parameters into account.
\item |pgf.gd.model| This namespace contains all Lua classes that are part
of the object-oriented model of graphs employed throughout the graph
drawing engine. For readers familiar with the model--view--controller
pattern: This is the namespace containing the model-part of this
pattern.
\item |pgf.gd.control| This namespace contains the ``control logic'' of the
graph drawing system. It will transform graphs according to rules,
disassemble layouts and sublayouts and will call the appropriate
algorithms. For readers still familiar with the model--view--controller
pattern: This is the namespace containing the control-part of this
pattern.
\item |pgf.gd.trees| This namespace contains classes that are useful for
dealing with graphs that are trees. In particular, it contains a class
for computing a spanning tree of an arbitrary connected graph; an
operation that is an important preprocessing step for many algorithms.
In addition to providing ``utility functions for trees'', the namespace
\emph{also} includes actual algorithms for computing graph layouts like
|pgf.gd.trees.ReingoldTilford1981|. It may seem to be a bit of an
``impurity'' that a namespace mixes utility classes and ``real''
algorithms, but experience has shown that it is better to keep things
together in this way.
Concluding the analogy to the model--view--controller pattern, a graph
drawing algorithm is, in a loose sense, the ``view'' part of the
pattern.
\item |pgf.gd.layered| This namespace provides classes and functions for
``layered'' layouts; the Sugiyama layout method being the most
well-known one. Again, the namespace contains both algorithms to be
used by a user and utility functions.
\item |pgf.gd.force| Collects force-based algorithms and, again, also
utility functions and classes.
\item |pgf.gd.examples| Contains some example algorithms. They are
\emph{not} intended to be used directly, rather they should serve as
inspirations for readers wishing to implement their own algorithms.
\end{itemize}
There are further namespaces that also reside in the |pgf.gd| namespace; these
namespaces are used to organize different graph drawing algorithms into
categories.
In Lua, similarly to Java, when a class |SomeClass| is part of, say, the
namespace |pgf.gd.example|, it is customary to put the class's code in a file
|SomeClass.lua| and then put this class in a directory |example|, that is a
subdirectory of a directory |gd|, which is in turn a subdirectory of a
directory |pgf|. When you write \texttt{require "pgf.gd.example.SomeClass"} the
so-called \emph{loader} will turn this into a request for the file
\texttt{pgf/gd/example/SomeClass.lua} (for Unix systems).
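Schematically, the correspondence between namespace names and the file system
looks as follows:
%
\begin{codeexample}[code only, tikz syntax=false]
-- require "pgf.gd.example.SomeClass" makes the loader look for
--
--   pgf/
--     gd/
--       example/
--         SomeClass.lua
\end{codeexample}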
\subsubsection{Defining and Using Namespaces and Classes}
There are a number of rules concerning the structure and naming of namespaces
as well as the naming of files. Let us start with the rules for naming
namespaces, classes, and functions. They follow the ``Java convention'':
%
\begin{enumerate}
\item A namespace is a short lowercase |word|.
\item A function in a namespace is in
|lowercase_with_underscores_between_words|.
\item A class name is in |CamelCaseWithAnUppercaseFirstLetter|.
\item A class method name is in |camelCaseWithALowercaseFirstLetter|.
\end{enumerate}
From Lua's point of view, every namespace and every class is just a table.
However, since these tables will be loaded using Lua's |require| function, each
namespace and each class must be placed inside a separate file (unless you
modify the |package.loaded| table, but, then, you know what you are doing
anyway). Inside such a file, you should first declare a local variable whose
name is the name of the namespace or class that you intend to define and then
assign a (possibly empty) table to this variable:
%
\begin{codeexample}[code only, tikz syntax=false]
-- File pgf.gd.example.SomeClass.lua:
local SomeClass = {}
\end{codeexample}
%
Next, you should add your class to the encompassing namespace. This is achieved
as follows:
%
\begin{codeexample}[code only, tikz syntax=false]
require("pgf.gd.example").SomeClass = SomeClass
\end{codeexample}
%
The reason this works is that the |require| will return the table that is the
namespace |pgf.gd.example|. So, inside this namespace, the |SomeClass| field
will be filled with the table stored in the local variable of the same name --
which happens to be the table representing the class.
At the end of the file, you must write
%
\begin{codeexample}[code only, tikz syntax=false]
return SomeClass
\end{codeexample}
%
This ensures that the table that is defined in this file gets stored by Lua in
the right places. Note that you need not and should not use Lua's |module|
command. The reason is that this command has disappeared in newer versions of
Lua and that it is not really needed.
Users of your class can import and use your class by writing:
%
\begin{codeexample}[code only, tikz syntax=false]
...
local SomeClass = require "pgf.gd.example.SomeClass"
...
\end{codeexample}
\subsection{The Graph Drawing Scope}
\label{section-gd-gd-scope}
\includeluadocumentationof{pgf.gd.interface.Scope}
\subsection{The Model Classes}
\label{section-gd-models}
All that a graph drawing algorithm will ``see'' of the graph specified by the
user is a ``graph object''. Such an object is an object-oriented model of the
user's graph that no longer encodes the specific way in which the user
specified the graph; it only encodes which nodes and edges are present. For
instance, the \tikzname\ graph specification
%
\begin{codeexample}[code only]
graph { a -- {b, c} }
\end{codeexample}
%
\noindent and the graph specification
%
\begin{codeexample}[code only]
node (a) { a }
child { node (b) {b} }
child { node (c) {c} }
\end{codeexample}
%
will generate exactly the same graph object.
\begin{luanamespace}{pgf.gd.}{model}
This namespace contains the classes modeling graphs, nodes, and edges.
Also, the |Coordinate| class is found here, since coordinates are also part
of the modeling.
\end{luanamespace}
\subsubsection{Directed Graphs (Digraphs)}
Inside the graph drawing engine, the only model of a graph that is available
treats graphs as
%
\begin{enumerate}
\item directed (all edges have a designated head and a designated tail) and
\item simple (there can be at most one edge between any pair of nodes).
\end{enumerate}
%
These two properties may appear to be somewhat at odds with what users can
specify as graphs and with what some graph drawing algorithms might expect as
input. For instance, suppose a user writes
%
\begin{codeexample}[code only]
graph { a -- b --[red] c, b --[green, bend right] c }
\end{codeexample}
%
In this case, it seems that the input graph for a graph drawing algorithm
should actually be an \emph{undirected} graph in which there are
\emph{multiple} edges (namely $2$) between |b| and~|c|. Nevertheless, the graph
drawing engine will turn the user's input into a directed simple graph in ways
described later. You do not need to worry that information gets lost during
this process: The \emph{syntactic digraph,} which is available to graph drawing
algorithms on request, stores all the information about which edges are present
in the original input graph.
The main reasons for only considering directed, simple graphs are speed and
simplicity: The implementation of these graphs has been optimized so that all
operations on these graphs have a guaranteed running time that is small in
practice.
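As a hedged sketch of how an algorithm gets at these two views of the input
(the field names are the ones documented in the remainder of this section and
with the graph drawing scope):
%
\begin{codeexample}[code only, tikz syntax=false]
-- Most algorithms only ever look at the transformed, simple graphs:
local g = self.digraph    -- or self.ugraph
-- Algorithms that care how edges were actually written down consult
-- the syntactic digraph stored in the graph drawing scope:
local sg = self.scope.syntactic_digraph
\end{codeexample}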
\includeluadocumentationof{pgf.gd.model.Digraph}
\subsubsection{Vertices}
\includeluadocumentationof{pgf.gd.model.Vertex}
\subsubsection{Arcs}
\label{section-gd-arc-model}
\includeluadocumentationof{pgf.gd.model.Arc}
\subsubsection{Edges}
\includeluadocumentationof{pgf.gd.model.Edge}
\subsubsection{Collections}
\includeluadocumentationof{pgf.gd.model.Collection}
\subsubsection{Coordinates, Paths, and Transformations}
\includeluadocumentationof{pgf.gd.model.Coordinate}
\includeluadocumentationof{pgf.gd.model.Path}
\includeluadocumentationof{pgf.gd.lib.Transform}
\subsubsection{Options and Data Storages for Vertices, Arcs, and Digraphs}
Many objects in the graph drawing system have an |options| table attached to
them. These tables will contain the different kinds of options specified by the
user for the object. For efficiency reasons, many objects may share the same
options table (since, more often than not, almost all objects have exactly the
same |options| table). For this reason, you cannot store anything in an options
table; indeed, you should never attempt to write anything into an options
table. Instead, you should use a |Storage|.
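Conceptually, a |Storage| lets an algorithm file away private data for an
object without touching the shared |options| table. The following lines only
convey the idea; the actual interface is documented below:
%
\begin{codeexample}[code only, tikz syntax=false]
-- Conceptual sketch only; see the Storage class documentation below.
local my_data = setmetatable({}, {__mode = "k"}) -- weak keys

-- Wrong: writing into the (shared) options table:
-- vertex.options.depth = 3

-- Instead, conceptually: keep the data in a separate table keyed by the object:
my_data[vertex] = { depth = 3 }
\end{codeexample}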
\includeluadocumentationof{pgf.gd.lib.Storage}
\subsubsection{Events}
\includeluadocumentationof{pgf.gd.lib.Event}
\subsection{Graph Transformations}
\label{section-gd-transformations}
\subsubsection{The Layout Pipeline}
\includeluadocumentationof{pgf.gd.control.LayoutPipeline}
\subsubsection{Hints For Edge Routing}
\includeluadocumentationof{pgf.gd.routing.Hints}
\subsection{The Interface To Algorithms}
\label{section-gd-interface-to-algorithms}
\includeluadocumentationof{pgf.gd.interface.InterfaceToAlgorithms}
\subsection{Examples of Implementations of Graph Drawing Algorithms}
\label{section-gd-examples}
\includeluadocumentationof{pgf.gd.examples.library}
\includeluadocumentationof{pgf.gd.examples.SimpleDemo}
\includeluadocumentationof{pgf.gd.examples.SimpleEdgeDemo}
\includeluadocumentationof{pgf.gd.examples.SimpleHuffman}
\subsection{Support Libraries}
\label{section-gd-libs}
The present section lists a number of general-purpose libraries that are used
by different algorithms.
\subsubsection{Basic Functions}
\includeluadocumentationof{pgf}
\includeluadocumentationof{pgf.gd.lib}
\subsubsection{Lookup Tables}
\includeluadocumentationof{pgf.gd.lib.LookupTable}
\subsubsection{Computing Distances in Graphs}
\emph{Still needs to be ported to digraph classes!}
%\includeluadocumentationof{pgf.gd.lib.PathLengths}
\subsubsection{Priority Queues}
\includeluadocumentationof{pgf.gd.lib.PriorityQueue}
|
[STATEMENT]
lemma simple_integral_cong_AE_mult_indicator:
assumes sf: "simple_function M f" and eq: "AE x in M. x \<in> S" and "S \<in> sets M"
shows "integral\<^sup>S M f = (\<integral>\<^sup>Sx. f x * indicator S x \<partial>M)"
[PROOF STATE]
proof (prove)
goal (1 subgoal):
1. integral\<^sup>S M f = \<integral>\<^sup>S x. f x * indicator S x \<partial>M
[PROOF STEP]
using assms
[PROOF STATE]
proof (prove)
using this:
simple_function M f
AE x in M. x \<in> S
S \<in> sets M
goal (1 subgoal):
1. integral\<^sup>S M f = \<integral>\<^sup>S x. f x * indicator S x \<partial>M
[PROOF STEP]
by (intro simple_integral_cong_AE) auto |
DGEEV Example Program Results
Eigenvalue( 1) = 7.9948E-01
Eigenvector( 1)
-6.5509E-01
-5.2363E-01
5.3622E-01
-9.5607E-02
Eigenvalue( 2) = (-9.9412E-02, 4.0079E-01)
Eigenvector( 2)
(-1.9330E-01, 2.5463E-01)
( 2.5186E-01,-5.2240E-01)
( 9.7182E-02,-3.0838E-01)
( 6.7595E-01, 0.0000E+00)
Eigenvalue( 3) = (-9.9412E-02,-4.0079E-01)
Eigenvector( 3)
(-1.9330E-01,-2.5463E-01)
( 2.5186E-01, 5.2240E-01)
( 9.7182E-02, 3.0838E-01)
( 6.7595E-01,-0.0000E+00)
Eigenvalue( 4) = -1.0066E-01
Eigenvector( 4)
1.2533E-01
3.3202E-01
5.9384E-01
7.2209E-01
|
/-
Copyright (c) 2020 Scott Morrison. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Scott Morrison
-/
import data.equiv.mul_add
/-!
# `ulift` instances for groups and monoids
This file defines instances for group, monoid, semigroup and related structures on `ulift` types.
(Recall `ulift α` is just a "copy" of a type `α` in a higher universe.)
We use `tactic.pi_instance_derive_field`, even though it wasn't intended for this purpose,
which seems to work fine.
We also provide `mul_equiv.ulift : ulift R ≃* R` (and its additive analogue).
-/
universes u v
variables {α : Type u} {x y : ulift.{v} α}
namespace ulift
@[to_additive] instance has_one [has_one α] : has_one (ulift α) := ⟨⟨1⟩⟩
@[simp, to_additive] lemma one_down [has_one α] : (1 : ulift α).down = 1 := rfl
@[to_additive] instance has_mul [has_mul α] : has_mul (ulift α) := ⟨λ f g, ⟨f.down * g.down⟩⟩
@[simp, to_additive] lemma mul_down [has_mul α] : (x * y).down = x.down * y.down := rfl
@[to_additive] instance has_div [has_div α] : has_div (ulift α) := ⟨λ f g, ⟨f.down / g.down⟩⟩
@[simp, to_additive] lemma div_down [has_div α] : (x / y).down = x.down / y.down := rfl
@[to_additive] instance has_inv [has_inv α] : has_inv (ulift α) := ⟨λ f, ⟨f.down⁻¹⟩⟩
@[simp, to_additive] lemma inv_down [has_inv α] : x⁻¹.down = (x.down)⁻¹ := rfl
/--
The multiplicative equivalence between `ulift α` and `α`.
-/
@[to_additive "The additive equivalence between `ulift α` and `α`."]
def _root_.mul_equiv.ulift [has_mul α] : ulift α ≃* α :=
{ map_mul' := λ x y, rfl,
.. equiv.ulift }
@[to_additive]
instance semigroup [semigroup α] : semigroup (ulift α) :=
mul_equiv.ulift.injective.semigroup _ $ λ x y, rfl
@[to_additive]
instance comm_semigroup [comm_semigroup α] : comm_semigroup (ulift α) :=
equiv.ulift.injective.comm_semigroup _ $ λ x y, rfl
@[to_additive]
instance mul_one_class [mul_one_class α] : mul_one_class (ulift α) :=
equiv.ulift.injective.mul_one_class _ rfl $ λ x y, rfl
@[to_additive has_vadd]
instance has_scalar {β : Type*} [has_scalar α β] : has_scalar α (ulift β) :=
⟨λ n x, up (n • x.down)⟩
@[to_additive has_scalar, to_additive_reorder 1]
instance has_pow {β : Type*} [has_pow α β] : has_pow (ulift α) β :=
⟨λ x n, up (x.down ^ n)⟩
@[to_additive]
instance monoid [monoid α] : monoid (ulift α) :=
equiv.ulift.injective.monoid_pow _ rfl (λ _ _, rfl) (λ _ _, rfl)
@[to_additive]
instance comm_monoid [comm_monoid α] : comm_monoid (ulift α) :=
{ .. ulift.monoid, .. ulift.comm_semigroup }
@[to_additive]
instance div_inv_monoid [div_inv_monoid α] : div_inv_monoid (ulift α) :=
equiv.ulift.injective.div_inv_monoid_pow _ rfl (λ _ _, rfl) (λ _, rfl)
(λ _ _, rfl) (λ _ _, rfl) (λ _ _, rfl)
@[to_additive]
instance group [group α] : group (ulift α) :=
equiv.ulift.injective.group_pow _ rfl (λ _ _, rfl) (λ _, rfl)
(λ _ _, rfl) (λ _ _, rfl) (λ _ _, rfl)
@[to_additive]
instance comm_group [comm_group α] : comm_group (ulift α) :=
{ .. ulift.group, .. ulift.comm_semigroup }
@[to_additive add_left_cancel_semigroup]
instance left_cancel_semigroup [left_cancel_semigroup α] :
left_cancel_semigroup (ulift α) :=
equiv.ulift.injective.left_cancel_semigroup _ (λ _ _, rfl)
@[to_additive add_right_cancel_semigroup]
instance right_cancel_semigroup [right_cancel_semigroup α] :
right_cancel_semigroup (ulift α) :=
equiv.ulift.injective.right_cancel_semigroup _ (λ _ _, rfl)
@[to_additive add_left_cancel_monoid]
instance left_cancel_monoid [left_cancel_monoid α] :
left_cancel_monoid (ulift α) :=
{ .. ulift.monoid, .. ulift.left_cancel_semigroup }
@[to_additive add_right_cancel_monoid]
instance right_cancel_monoid [right_cancel_monoid α] :
right_cancel_monoid (ulift α) :=
{ .. ulift.monoid, .. ulift.right_cancel_semigroup }
@[to_additive add_cancel_monoid]
instance cancel_monoid [cancel_monoid α] :
cancel_monoid (ulift α) :=
{ .. ulift.left_cancel_monoid, .. ulift.right_cancel_semigroup }
@[to_additive add_cancel_comm_monoid]
instance cancel_comm_monoid [cancel_comm_monoid α] :
cancel_comm_monoid (ulift α) :=
{ .. ulift.cancel_monoid, .. ulift.comm_semigroup }
-- TODO we don't do `ordered_cancel_comm_monoid` or `ordered_comm_group`
-- We'd need to add instances for `ulift` in `order.basic`.
end ulift
|
/-
Copyright (c) 2019 Johannes Hölzl. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Johannes Hölzl, Patrick Massot, Casper Putz, Anne Baanen
-/
import data.matrix.basic
import linear_algebra.finite_dimensional
/-!
# The finite-dimensional space of matrices
This file shows that `m` by `n` matrices form a finite-dimensional space,
and proves the `finrank` of that space is equal to `card m * card n`.
## Main definitions
* `matrix.finite_dimensional`: matrices form a finite dimensional vector space over a field `K`
* `matrix.finrank_matrix`: the `finrank` of `matrix m n R` is `card m * card n`
## Tags
matrix, finite dimensional, findim, finrank
-/
universes u v
namespace matrix
section finite_dimensional
variables {m n : Type*} [fintype m] [fintype n]
variables {R : Type v} [field R]
instance : finite_dimensional R (matrix m n R) :=
linear_equiv.finite_dimensional (linear_equiv.curry R m n)
/--
The dimension of the space of finite dimensional matrices
is the product of the number of rows and columns.
-/
@[simp] lemma finrank_matrix :
finite_dimensional.finrank R (matrix m n R) = fintype.card m * fintype.card n :=
by rw [@linear_equiv.finrank_eq R (matrix m n R) _ _ _ _ _ _ (linear_equiv.curry R m n).symm,
finite_dimensional.finrank_fintype_fun_eq_card, fintype.card_prod]
end finite_dimensional
end matrix
|
/*******************************************************************************
Copyright (C) 2013 SequoiaDB Software Inc.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License, version 3,
as published by the Free Software Foundation.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/license/>.
*******************************************************************************/
#ifndef OSSQUEUE_HPP__
#define OSSQUEUE_HPP__
#include <queue>
#include <boost/thread.hpp>
#include <boost/thread/thread_time.hpp>
#include "core.hpp"
// A thread-safe FIFO queue: a std::queue protected by a mutex, with a
// condition variable so that consumers can wait for data to arrive
template<typename Data>
class ossQueue
{
private :
std::queue<Data> _queue ;
mutable boost::mutex _mutex ; // mutable so const members (e.g. empty) can lock
boost::condition_variable _cond ;
public :
unsigned int size ()
{
boost::mutex::scoped_lock lock ( _mutex ) ;
return (unsigned int)_queue.size () ;
}
void push ( Data const &data )
{
boost::mutex::scoped_lock lock ( _mutex ) ;
_queue.push ( data ) ;
lock.unlock () ;
_cond.notify_one () ;
}
bool empty () const
{
boost::mutex::scoped_lock lock ( _mutex ) ;
return _queue.empty () ;
}
bool try_pop ( Data &value )
{
boost::mutex::scoped_lock lock ( _mutex ) ;
if ( _queue.empty () )
return false ;
value = _queue.front () ;
_queue.pop () ;
return true ;
}
void wait_and_pop ( Data &value )
{
boost::mutex::scoped_lock lock ( _mutex ) ;
while ( _queue.empty () )
{
_cond.wait ( lock ) ;
}
value = _queue.front () ;
_queue.pop () ;
}
bool timed_wait_and_pop ( Data &value, long long millsec )
{
boost::system_time const timeout = boost::get_system_time () +
boost::posix_time::milliseconds(millsec) ;
boost::mutex::scoped_lock lock ( _mutex ) ;
// if timed_wait return false, that means we failed by timeout
while ( _queue.empty () )
{
if ( !_cond.timed_wait ( lock, timeout ) )
{
return false ;
}
}
value = _queue.front () ;
_queue.pop () ;
return true ;
}
} ;
#endif
|
% Version 1.000
%
% Code provided by Ruslan Salakhutdinov and Geoff Hinton
%
% Permission is granted for anyone to copy, use, modify, or distribute this
% program and accompanying programs and documents for any purpose, provided
% this copyright notice is retained and prominently displayed, along with
% a note saying that the original programs are available from our
% web page.
% The programs and documents are distributed without any warranty, express or
% implied. As the programs were written for research purposes only, they have
% not been tested to the degree that would be advisable in any important
% application. All use of these programs is entirely at the user's own risk.
function [f, df] = CG_MNIST(VV,Dim,XX);
l1 = Dim(1);
l2 = Dim(2);
l3 = Dim(3);
l4= Dim(4);
l5= Dim(5);
l6= Dim(6);
l7= Dim(7);
l8= Dim(8);
l9= Dim(9);
N = size(XX,1);
% Unpack (de-convert) the parameter vector VV into the individual weight matrices.
w1 = reshape(VV(1:(l1+1)*l2),l1+1,l2);
xxx = (l1+1)*l2;
w2 = reshape(VV(xxx+1:xxx+(l2+1)*l3),l2+1,l3);
xxx = xxx+(l2+1)*l3;
w3 = reshape(VV(xxx+1:xxx+(l3+1)*l4),l3+1,l4);
xxx = xxx+(l3+1)*l4;
w4 = reshape(VV(xxx+1:xxx+(l4+1)*l5),l4+1,l5);
xxx = xxx+(l4+1)*l5;
w5 = reshape(VV(xxx+1:xxx+(l5+1)*l6),l5+1,l6);
xxx = xxx+(l5+1)*l6;
w6 = reshape(VV(xxx+1:xxx+(l6+1)*l7),l6+1,l7);
xxx = xxx+(l6+1)*l7;
w7 = reshape(VV(xxx+1:xxx+(l7+1)*l8),l7+1,l8);
xxx = xxx+(l7+1)*l8;
w8 = reshape(VV(xxx+1:xxx+(l8+1)*l9),l8+1,l9);
% Forward pass through the autoencoder (w4 maps into the linear code layer)
XX = [XX ones(N,1)];
w1probs = 1./(1 + exp(-XX*w1)); w1probs = [w1probs ones(N,1)];
w2probs = 1./(1 + exp(-w1probs*w2)); w2probs = [w2probs ones(N,1)];
w3probs = 1./(1 + exp(-w2probs*w3)); w3probs = [w3probs ones(N,1)];
w4probs = w3probs*w4; w4probs = [w4probs ones(N,1)];
w5probs = 1./(1 + exp(-w4probs*w5)); w5probs = [w5probs ones(N,1)];
w6probs = 1./(1 + exp(-w5probs*w6)); w6probs = [w6probs ones(N,1)];
w7probs = 1./(1 + exp(-w6probs*w7)); w7probs = [w7probs ones(N,1)];
XXout = 1./(1 + exp(-w7probs*w8));
f = -1/N*sum(sum( XX(:,1:end-1).*log(XXout) + (1-XX(:,1:end-1)).*log(1-XXout)));
% Backpropagate: cross-entropy error derivative, then chain rule layer by layer
IO = 1/N*(XXout-XX(:,1:end-1));
Ix8=IO;
dw8 = w7probs'*Ix8;
Ix7 = (Ix8*w8').*w7probs.*(1-w7probs);
Ix7 = Ix7(:,1:end-1);
dw7 = w6probs'*Ix7;
Ix6 = (Ix7*w7').*w6probs.*(1-w6probs);
Ix6 = Ix6(:,1:end-1);
dw6 = w5probs'*Ix6;
Ix5 = (Ix6*w6').*w5probs.*(1-w5probs);
Ix5 = Ix5(:,1:end-1);
dw5 = w4probs'*Ix5;
Ix4 = (Ix5*w5');
Ix4 = Ix4(:,1:end-1);
dw4 = w3probs'*Ix4;
Ix3 = (Ix4*w4').*w3probs.*(1-w3probs);
Ix3 = Ix3(:,1:end-1);
dw3 = w2probs'*Ix3;
Ix2 = (Ix3*w3').*w2probs.*(1-w2probs);
Ix2 = Ix2(:,1:end-1);
dw2 = w1probs'*Ix2;
Ix1 = (Ix2*w2').*w1probs.*(1-w1probs);
Ix1 = Ix1(:,1:end-1);
dw1 = XX'*Ix1;
df = [dw1(:)' dw2(:)' dw3(:)' dw4(:)' dw5(:)' dw6(:)' dw7(:)' dw8(:)' ]';
|
!******************************************************************************
!*
!* VARIOUS APPLICATIONS OF FFT
!*
!******************************************************************************
!==========================================================================
! 1) AUTHOR: F. Galliano
!
! 2) HISTORY:
! - Created 07/2013
!
! 3) DESCRIPTION: implements FFTs and various related functions.
!==========================================================================
MODULE FFT_specials
USE utilities, ONLY:
USE constants, ONLY:
USE arrays, ONLY:
IMPLICIT NONE
PRIVATE
PUBLIC :: realft, correl
CONTAINS
!==========================================================================
! CALL FOURROW(data,isign)
!
! Replaces each row (constant first index) of data(1:M,1:N) by its discrete
! Fourier transform (transform on second index), if isign is input as 1; or
! replaces each row of data by N times its inverse discrete Fourier
! transform, if isign is input as −1. N must be an integer power of 2.
! Parallelism is M-fold on the first index of data.
!==========================================================================
SUBROUTINE fourrow (data,isign)
USE utilities, ONLY: DP, CDP, swap, strike
USE constants, ONLY: pi
IMPLICIT NONE
COMPLEX(CDP), DIMENSION(:,:), INTENT(INOUT) :: data
INTEGER, INTENT(IN) :: isign
INTEGER :: N, i, istep, j, m, Mmax, N2
REAL(DP) :: theta
COMPLEX(CDP), DIMENSION(SIZE(data,1)) :: temp
COMPLEX(CDP) :: w,wp
COMPLEX(CDP) :: ws
!-------------------------------------------------------------------------
N = SIZE(data,2)
IF (IAND(N,N-1) /= 0) CALL STRIKE("FOURROW","N must be a power of 2")
N2 = N / 2
j = N2
DO i=1,N-2
IF (j > i) CALL SWAP(data(:,j+1),data(:,i+1))
m = N2
DO
IF (m < 2 .OR. j < m) EXIT
j = j - m
m = m / 2
END DO
j = j + m
END DO
Mmax = 1
DO
IF (N <= Mmax) EXIT
istep = 2 * Mmax
theta = pi / (isign*Mmax)
wp = CMPLX(-2._DP*SIN(0.5_DP*theta)**2,SIN(theta),KIND=CDP)
w = CMPLX(1._DP,0._DP,KIND=CDP)
DO m=1,Mmax
ws = w
DO i=m,n,istep
j = i + Mmax
temp = ws * data(:,j)
data(:,j) = data(:,i) - temp
data(:,i) = data(:,i) + temp
END DO
w = w * wp + w
END DO
Mmax = istep
END DO
!-------------------------------------------------------------------------
END SUBROUTINE fourrow
!==========================================================================
! CALL FOUR1(data,isign)
!
! Replaces a complex array data by its discrete Fourier transform, if
! isign is input as 1; or replaces data by its inverse discrete Fourier
! transform times the size of data, if isign is input as −1. The size of
! data must be an integer power of 2. Parallelism is achieved by internally
! reshaping the input array to two dimensions. (Use this version if fourrow
! is faster than fourcol on your machine.)
!==========================================================================
SUBROUTINE four1 (data,isign)
USE utilities, ONLY: DP, CDP, strike, arth
USE constants, ONLY: twopi
IMPLICIT NONE
COMPLEX(CDP), DIMENSION(:), INTENT(INOUT) :: data
INTEGER, INTENT(IN) :: isign
COMPLEX(CDP), DIMENSION(:,:), ALLOCATABLE :: dat, temp
COMPLEX(CDP), DIMENSION(:), ALLOCATABLE :: w, wp
REAL(DP), DIMENSION(:), ALLOCATABLE :: theta
INTEGER :: N, m1, m2, j
!-------------------------------------------------------------------------
N = SIZE(data)
IF (IAND(N,N-1) /=0) CALL STRIKE("FOUR1","N must be a power of 2")
m1 = 2**CEILING( 0.5_DP * LOG(REAL(N,DP)) / 0.693147_DP ) ! 0.693147 ~ LOG(2), so m1 ~ SQRT(N)
m2 = n / m1
ALLOCATE (dat(m1,m2),theta(m1),w(m1),wp(m1),temp(m2,m1))
dat = RESHAPE(data,SHAPE(dat))
CALL FOURROW(dat,isign)
theta = ARTH(0,isign,m1) * TWOPI / N
wp = CMPLX(-2._DP*SIN(0.5_DP*theta)**2,SIN(theta),KIND=CDP)
w = CMPLX(1._DP,0._DP,KIND=CDP)
DO j=2,m2
w = w * wp + w
dat(:,j) = dat(:,j) * w
END DO
temp = TRANSPOSE(dat)
CALL FOURROW(temp,isign)
data = RESHAPE(temp,SHAPE(data))
DEALLOCATE (dat,w,wp,theta,temp)
!-------------------------------------------------------------------------
END SUBROUTINE four1
!==========================================================================
! CALL REALFT(data,isign,zdata)
!
! When isign = 1, calculates the Fourier transform of a set of N
! real-valued data points, input in the array data. If the optional argument
! zdata is not present, the data are replaced by the positive frequency half
! of its complex Fourier transform. The real-valued first and last components
! of the complex transform are returned as elements data(1) and data(2),
! respectively. If the complex array zdata of length N/2 is present, data is
! unchanged and the transform is returned in zdata. N must be a power of 2.
! If isign = -1, this routine calculates the inverse transform of a complex
! data array if it is the transform of real data. (Result in this case must
! be multiplied by 2/N.) The data can be supplied either in data, with zdata
! absent, or in zdata.
!==========================================================================
SUBROUTINE REALFT(data,isign,zdata)
USE utilities, ONLY: DP, CDP, strike, zroots_unity
IMPLICIT NONE
REAL(DP), DIMENSION(:), INTENT(INOUT) :: data
INTEGER, INTENT(IN) :: isign
COMPLEX(CDP), DIMENSION(:), OPTIONAL, TARGET :: zdata
INTEGER :: N, Ndum, Nh, Nq
COMPLEX(CDP), DIMENSION(SIZE(data)/4) :: w
COMPLEX(CDP), DIMENSION(SIZE(data)/4-1) :: h1, h2
COMPLEX(CDP), DIMENSION(:), POINTER :: cdata
COMPLEX(CDP) :: z
REAL(DP) :: c1 = 0.5_DP, c2
!-------------------------------------------------------------------------
N = SIZE(data)
IF (IAND(N,N-1) /= 0) CALL STRIKE("REALFT","N must be a power of 2")
Nh = N / 2
Nq = N / 4
IF (PRESENT(zdata)) THEN
Ndum = N / 2
IF (SIZE(zdata) /= N/2) CALL STRIKE("REALFT","Wrong size of input")
cdata => zdata
IF (isign == 1) cdata = CMPLX(data(1:N-1:2),data(2:N:2),KIND=CDP)
ELSE
ALLOCATE (cdata(N/2))
cdata = CMPLX(data(1:N-1:2),data(2:N:2),KIND=CDP)
END IF
IF (isign == 1) THEN
c2 = - 0.5_DP
CALL four1(cdata,+1)
ELSE
c2 = 0.5_DP
END IF
w = ZROOTS_UNITY(SIGN(N,isign),N/4)
w = CMPLX(-AIMAG(w),REAL(w),KIND=CDP)
h1 = c1 * (cdata(2:Nq)+CONJG(cdata(Nh:Nq+2:-1)))
h2 = c2 * (cdata(2:Nq)-CONJG(cdata(Nh:Nq+2:-1)))
cdata(2:Nq) = h1 + w(2:Nq) * h2
cdata(Nh:Nq+2:-1) = CONJG(h1-w(2:Nq)*h2)
z = cdata(1)
IF (isign == 1) THEN
cdata(1) = CMPLX(REAL(z)+AIMAG(z),REAL(z)-AIMAG(z),KIND=CDP)
ELSE
cdata(1) = CMPLX(c1*(REAL(z)+AIMAG(z)),c1*(REAL(z)-AIMAG(z)),KIND=CDP)
CALL FOUR1(cdata,-1)
END IF
IF (PRESENT(zdata)) THEN
IF (isign /= 1) THEN
data(1:N-1:2) = REAL(cdata)
data(2:N:2) = AIMAG(cdata)
END IF
ELSE
data(1:N-1:2) = REAL(cdata)
data(2:N:2) = AIMAG(cdata)
DEALLOCATE(cdata)
END IF
!-------------------------------------------------------------------------
END SUBROUTINE realft
!==========================================================================
! f[N] = CORREL(data1[N],data2[N])
!
! Computes the correlation of two arrays for a lag: f(N) to f(N/2+1) are
! negative lags, f(1) is lag 0, and f(2) to f(N/2) are positive lags.
!==========================================================================
FUNCTION correl (data1,data2)
USE utilities, ONLY: DP, CDP, strike
IMPLICIT NONE
REAL(DP), DIMENSION(:), INTENT(INOUT) :: data1, data2
REAL(DP), DIMENSION(SIZE(data1)) :: correl
COMPLEX(CDP), DIMENSION(SIZE(data1)/2) :: cdat1, cdat2
INTEGER :: No2, N
!-------------------------------------------------------------------------
! Preliminaries
N = SIZE(data1)
IF (SIZE(data2) /= N) CALL STRIKE("CORREL","Wrong size of input")
IF (IAND(N,N-1) /= 0) CALL STRIKE("CORREL","N must be a power of 2")
No2 = N / 2
! Transform both data vectors
CALL REALFT(data1,1,cdat1)
CALL REALFT(data2,1,cdat2)
! Multiply to find FFT of their correlation
cdat1(1) = CMPLX( REAL(cdat1(1)) * REAL(cdat2(1)) / No2, &
AIMAG(cdat1(1)) * AIMAG(cdat2(1)) / No2, KIND=CDP )
cdat1(2:) = cdat1(2:) * CONJG(cdat2(2:)) / No2
! Inverse transform gives correlation
CALL REALFT(correl,-1,cdat1)
!-------------------------------------------------------------------------
END FUNCTION correl
END MODULE FFT_specials
|
#-----------------------------------------------------------------------------------
#XOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOX
#-----------------------------------------------------------------------------------
# This script gives the amino acid position (first transcript) of SNPs in exons
# It also creates input files for polydNdS and a summary file for processing the results.
# This script was written to carry out a particular analysis; it may not be applied to another case without changes
# A header of each file is put under every file used
# Script written by Sofiane mezmouk (Ross-Ibarra laboratory)
#-----------------------------------------------------------------------------------
#XOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOXOX
#-----------------------------------------------------------------------------------
#Choose a chromosome to work with (1 to 10)
Chr <- 10
#--------------
library(stringr)
library(gdata)
is.odd <- function(x) x %% 2 != 0
#--------------
#-----------------------------
# A function that converts a nucleotide sequence to an amino acid sequence
transcript <- function(x){
  xtm <- as.vector(str_sub(paste(as.vector(x), collapse="", sep=""), debut, fin))
  as.vector(codon[tapply(xtm, rep(1:length(xtm)), function(x){match(x, codon[,1])}), 2])
}
#-----------------------------
#--------------
gffall <- read.table("ZmB73_5b_FGS.gff")
# gffall format
#9 ensembl chromosome 1 156750706 . . . ID=9;Name=chromosome:AGPv2:9:1:156750706:1
#9 ensembl gene 66347 68582 . - . ID=GRMZM2G354611;Name=GRMZM2G354611;biotype=protein_coding
#9 ensembl mRNA 66347 68582 . - . ID=GRMZM2G354611_T01;Parent=GRMZM2G354611;Name=GRMZM2G354611_T01;biotype=protein_coding
#9 ensembl intron 68433 68561 . - . Parent=GRMZM2G354611_T01;Name=intron.1
#9 ensembl intron 67142 67886 . - . Parent=GRMZM2G354611_T01;Name=intron.2
codon <- read.table("Codons.txt", header=T, sep="\t")
#codon format
#Codon AA_1 AA_3 AA_Full AntiCodon
#TCA S Ser Serine TGA
#TCG S Ser Serine CGA
#TCC S Ser Serine GGA
#TCT S Ser Serine AGA
genelist <- read.table("GeneProtNames", header=F, sep="\t")
#genelist format
#AC147602.5_FG004 AC147602.5_FGT004 AC147602.5_FGP004
#AC148152.3_FG001 AC148152.3_FGT001 AC148152.3_FGP001
#AC148152.3_FG005 AC148152.3_FGT005 AC148152.3_FGP005
#AC148152.3_FG006 AC148152.3_FGT006 AC148152.3_FGP006
#AC148152.3_FG008 AC148152.3_FGT008 AC148152.3_FGP008
transc <- as.vector(read.table("ListeProtFirstTranscrit", header=F, sep="\t")[,1])
#transc format
#AC147602.5_FGP004
#AC148152.3_FGP001
#AC148152.3_FGP005
#AC148152.3_FGP006
#AC148152.3_FGP008
genelist <- genelist[as.vector(genelist[,3]%in%transc),]; rm(transc)
geneposi <- read.table("GenePositions", header=T, sep="\t")
#geneposi format
#Genes Chr Start End
#GRMZM2G059865 1 4854 9652
#GRMZM5G888250 1 9882 10387
#GRMZM2G093344 1 109519 111769
#GRMZM2G093399 1 136307 138929
geneposi <- geneposi[geneposi[,2]==Chr,]
genelist <- genelist[as.vector(genelist[,1]) %in% as.vector(geneposi[,1]),]
#---------------
geno <- read.table(paste("282_20120110_scv10mF8maf002_mgs_E1pLD5kpUn_imp95_1024_chr",Chr,".hmp.txt", sep=""), header=T, sep="\t")
# geno format
#rs alleles chrom pos strand assembly center protLSID assayLSID panelLSID QCcode 33-16 38-11 ...
#S1_2111 C/T 1 2111 + NA NA NA NA NA NA C C ...
#S1_10097 C/G 1 10097 + NA NA NA NA NA NA C C ...
#S1_10390 G/A 1 10390 + NA NA NA NA NA NA G G ...
geno <- as.matrix(geno[,-c(1:2,4:10)])
geno[is.element(geno, c("M","K","S","W","Y","R","V","H","D","B","H"))] <- "N"
#---------------
# Result file to keep track between the real positions and the positions in the sequence used for polydNdS estimates
RespolydNdS <- matrix(NA,1,8, dimnames=list(NULL, c("gene","SNP","Chr","Position","SeqPosition","Sens","LengthCDS","NbSeq")))
# Result file with the amino acid polymorphisms corresponding to the
resaa <- matrix(NA,1,(6+ncol(geno)),dimnames=list(NULL,c("gene","transcript","AAposition","SNP1","SNP2","SNP3","B73ref",dimnames(geno)[[2]][-1])))
problemes <- vector()
#---------------
#---------------
#Loop over gene
for(i in 1:nrow(geneposi)){
if(nrow(geno[as.numeric(as.vector(geno[,1]))%in%c(geneposi[i,3]:geneposi[i,4]),,drop=F])>0){ # if there are SNPs in the gene
gff <- gffall[grep(geneposi[i,1],gffall[,9]),]
posgene <- as.vector(c(geneposi[i,3]:geneposi[i,4]))
posgene <- posgene[order(posgene)]
SENStransc <- as.vector(gff[grep("gene",gff[,3]),7])
posi <- gffall[grep(as.vector(genelist[match(geneposi[i,1],genelist[,1]),2]),gffall[,9]),]
posi <- posi[grep("CDS",posi[,3]),,drop=F]
CDS <- c(posi[1,4]:posi[1,5])
if (nrow(posi)>1)
{
for (j in 2:nrow(posi))
{
CDS <- c(CDS,c(posi[j,4]:posi[j,5]))
}
rm(j)
}
CDS <- CDS[order(CDS)]
rm(posi)
#----------------
if(nrow(geno[as.numeric(as.vector(geno[,1]))%in%CDS,,drop=F])>0){
geneseq <- readLines(paste("gene",geneposi[i,1],".fasta",sep=""))
# geneseq format for geneAC147602.5_FG004.fasta
#>AC147602.5_FG004 seq=gene; coord=3:178846540..178848005:-1
#ATGGAGATCGTCGCCACGCGCTCCCCGGCTTGCTGCGCCGCCGTGTCCTTCTCCCAGTCG
#TACAGGCCCAAGGTACGTACGGCACCTTCATATCTCGTGACTACTGTACGTAAGCGGAAA
#GTAGCAGCAGCTCGTCGCGCACACGTGCAGAAGCCTTAAGTTTGCTGATGATGTTGATGA
geneseq <- paste(geneseq[-1],collapse="", sep="")
geneseq <- strsplit(geneseq,split=character(1),fixed=T)[[1]]
tprot <- readLines(paste("tprot_",genelist[as.vector(genelist[,1])==as.vector(geneposi[i,1]),3],".fasta",sep=""))
#tprot format for tprot_AC147602.5_FGP004.fasta
#>AC147602.5_FGP004 seq=translation; coord=3:178846540..178848005:-1; parent_transcript=AC147602.5_FGT004; parent_gene=AC147602.5_FG004
#MEIVATRSPACCAAVSFSQSYRPKASRPPTTFYGESVRVNTARPLSARRQSKAASRAALS
#ARCEIGDSLEEFLTKATPDKNLIRLLICMGEAMRTIAFKVRTASCGGTACVNSFGDEQLA
#VDMLANKLLFEALEYSHVCKYACSEEVPELQDMGGPVEGS
tprot <- paste(tprot[-1],collapse="",sep="")
tprot <- strsplit(tprot, split = "", fixed = T, perl = FALSE, useBytes = FALSE)[[1]]
# Create the nucleotide sequence of every genotype
if(SENStransc=="-"){
sequ <- matrix(rep(geneseq,ncol(geno)), length(geneseq),ncol(geno), dimnames=list(rev(posgene),c("B73ref",dimnames(geno)[[2]][-1])))
}else
{
sequ <- matrix(rep(geneseq,ncol(geno)), length(geneseq),ncol(geno), dimnames=list(posgene,c("B73ref",dimnames(geno)[[2]][-1])))
}
rm(geneseq)
sequ <- sequ[as.numeric(dimnames(sequ)[[1]])%in%CDS,,drop=F]
tmp <- geno[as.numeric(as.vector(geno[,1]))%in%CDS,, drop=F]
dimnames(tmp)[[1]] <- as.numeric(as.vector(tmp[,1])); tmp <- tmp[,-1,drop=F]
if(SENStransc=="-")
{
tmp2 <- tmp[,,drop=F]
tmp[tmp2=="A"] <- "T";tmp[tmp2=="T"] <- "A";tmp[tmp2=="C"] <- "G";tmp[tmp2=="G"] <- "C"
tmp[tmp2=="M"] <- "K";tmp[tmp2=="K"] <- "M";tmp[tmp2=="Y"] <- "R";tmp[tmp2=="R"] <- "Y"
rm(tmp2)
}
for(j in 1:nrow(tmp))
{
bof <- tmp[j,tmp[j,]!="N",drop=F]
sequ[match(dimnames(bof)[[1]],dimnames(sequ)[[1]]),match(dimnames(bof)[[2]],dimnames(sequ)[[2]])] <- bof
rm(bof)
}
rm(j)
#-X-X-X-X-X-X-X-X-X-X-X-X-X-X-X-
# write an input file for polydNdS
bofseq <- apply(sequ, 2, function(x){paste(as.vector(x),collapse="",sep="")})
bofseq <-unique(bofseq)
bof <- vector()
bof[is.odd(1:(length(bofseq)*2))] <- paste("> sequenceNumber",c(1:length(bofseq)),sep="")
bof[!is.odd(1:(length(bofseq)*2))] <- bofseq
writeLines(bof,paste("seq_",as.vector(geneposi[i,1]),".fasta", sep="")); rm(bof)
#---------
bof <- cbind(as.vector(geneposi[i,1]),paste("S",Chr,"_",as.numeric(as.vector(dimnames(tmp)[[1]])),sep=""),Chr,as.numeric(as.vector(dimnames(tmp)[[1]])),
match(dimnames(tmp)[[1]],dimnames(sequ)[[1]]),SENStransc,nrow(sequ),length(bofseq))
dimnames(bof)[[2]] <- c("gene","SNP","Chr","Position","SeqPosition","Sens","LengthCDS","NbSeq")
RespolydNdS <- rbind(RespolydNdS, bof); rm(bof,bofseq)
#-X-X-X-X-X-X-X-X-X-X-X-X-X-X-X-
# nucleotide to aa
debut <- seq(1,nrow(sequ),3)
fin <- pmin(debut+2,nrow(sequ))
AA <- matrix(apply(sequ,2,transcript),ncol=ncol(geno), byrow=F)
AA <- cbind(c(1:nrow(AA)),dimnames(sequ)[[1]][debut],dimnames(sequ)[[1]][(debut+1)],dimnames(sequ)[[1]][fin],AA)
# Warn if the translated aa sequence differs from the reference aa sequence loaded for B73
if(sum(as.numeric(as.vector(AA[,5])[1:length(tprot)]!=tprot),na.rm=T)!=0){
problemes[length(problemes)+1] <- as.vector(geneposi[i,1])
#print("!!!problem"); print(as.vector(as.matrix(genelistmp[ii,])))
}
AA <- AA[(as.numeric(AA[,2])%in%as.numeric(as.vector(geno[,1])))|(as.numeric(AA[,3])%in%as.numeric(as.vector(geno[,1])))|(as.numeric(AA[,4])%in%as.numeric(as.vector(geno[,1]))),,drop=F]
if (nrow(AA)>0){
AA <- cbind(as.vector(geneposi[i,1]),as.vector(genelist[as.vector(genelist[,1])==as.vector(geneposi[i,1]),3]),AA)
dimnames(AA) <- list(NULL,c("gene","transcript","AAposition","SNP1","SNP2","SNP3",dimnames(sequ)[[2]]))
resaa <- rbind(resaa,AA)
}
rm(AA,debut,fin,tprot,sequ)
}
rm(gff,SENStransc,CDS)
}
}
resaa <- resaa[-1,]
RespolydNdS <- RespolydNdS[-1,]
if(length(problemes)>0){write.table(problemes,paste("Problemes_Chr",Chr,sep=""), sep="\t", row.names=F, quote=F, col.names=F)}
write.table(RespolydNdS, paste("SummaryPolydNdS.Chr",Chr,sep=""), sep="\t", quote=F, row.names=F)
write.table(resaa,paste("NucToAA_Chr",Chr,".txt",sep=""), sep="\t", row.names=F, quote=F)
|
Formal statement is: lemma convex_on_alt: fixes C :: "'a::real_vector set" shows "convex_on C f \<longleftrightarrow> (\<forall>x \<in> C. \<forall> y \<in> C. \<forall> \<mu> :: real. \<mu> \<ge> 0 \<and> \<mu> \<le> 1 \<longrightarrow> f (\<mu> *\<^sub>R x + (1 - \<mu>) *\<^sub>R y) \<le> \<mu> * f x + (1 - \<mu>) * f y)" Informal statement is: A function $f$ is convex on a set $C$ if and only if for all $x, y \in C$ and $\mu \in [0, 1]$, we have $f(\mu x + (1 - \mu) y) \leq \mu f(x) + (1 - \mu) f(y)$. |
{-# OPTIONS --cubical --safe #-}
module Relation.Nullary.Decidable where
open import Relation.Nullary.Decidable.Base public
|
# example of extracting and resizing xrays into a new dataset
from os import listdir
from numpy import asarray
from numpy import savez_compressed
from PIL import Image
import numpy as np
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
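# class names corresponding to the integer labels assigned below (0=NORMAL, 1=VIRAL, 2=BACTERIAL)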
varNames = ["NORMAL","VIRAL","BACTERIAL"]
# load an image as an rgb numpy array
def load_image(filename):
    # load image from file
    image = Image.open(filename)
    # convert to RGB, if needed
    image = image.convert('RGB')
    # convert to array
    pixels = asarray(image)
    return pixels
# extract the xray from a loaded image and resize
def extract_xray(pixels, required_size=(80, 80)):
    # resize pixels to the model size
    image = Image.fromarray(pixels)
    image = image.resize(required_size)
    xray_array = asarray(image)
    return xray_array
# load images and extract xrays for all images in a directory
def load_xrays(directory, n_xrays):
    # expects a two-level layout, e.g. chest_xray/<split>/<class>/<image>
    xrays = list()
    ids = list()
    labels = list()
    for dirname in listdir(directory):
        directory1 = directory + dirname
        print("directory1: ", directory1)
        for dirname1 in listdir(directory1):
            directory2 = directory1 + "/" + dirname1
            print("directory2: ", directory2)
            # enumerate files
            for idx, filename in enumerate(listdir(directory2)):
                # infer the label from the filename
                if "virus" in filename:
                    label = 1
                elif "bacteria" in filename:
                    label = 2
                else:
                    label = 0
                # load the image
                pixels = load_image(directory2 + "/" + filename)
                # get xray
                xray = extract_xray(pixels)
                # store
                xrays.append(xray)
                ids.append(idx)
                labels.append(label)
                # progress report every 100 images
                if len(xrays) % 100 == 0:
                    print(len(xrays), xray.shape)
                # stop once we have enough; return here so the outer
                # directory loops are exited as well
                if len(xrays) >= n_xrays:
                    return asarray(xrays), asarray(labels)
    return asarray(xrays), asarray(labels)
# directory that contains all images
directory = 'xray/chest_xray/'
# load and extract all xrays
n_xrays = 50000
# n_xrays = 100
all_xrays, all_labels = load_xrays(directory, n_xrays)
print('Loaded xrays: ', all_xrays.shape)
print('Loaded labels: ', all_labels.shape)
qSave = True
if qSave:
    savez_compressed('xray/img_align_xray.npz', all_xrays)
    savez_compressed('xray/labels_align_xray.npz', all_labels)
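# To reload later (a sketch; assumes the .npz files above were written;
# savez_compressed stores a single positional array under the key 'arr_0'):
# all_xrays = np.load('xray/img_align_xray.npz')['arr_0']
# all_labels = np.load('xray/labels_align_xray.npz')['arr_0']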
|