Dataset: AI4M
# Exercise 6, answers ## Problem 1 ```python from pyomo.environ import * model = ConcreteModel() #Three variables model.x = Var([1,2,3]) #Objective function including powers and logarithm model.OBJ = Objective(expr = log(model.x[1]**2+1)+model.x[2]**4 +model.x[1]*model.x[3]) #Objective function model.constr = Constraint(expr = model.x[1]**3-model.x[2]**2>=1) model.box1 = Constraint(expr = model.x[1]>=0) model.box2 = Constraint(expr = model.x[3]>=0) from pyomo.opt import SolverFactory #Import interfaces to solvers opt = SolverFactory("ipopt") #Use ipopt res = opt.solve(model, tee=True) #Solve the problem and print the output print "Optimal solutions is " model.x.display() print "Objective value at the optimal solution is " model.OBJ.display() ``` ****************************************************************************** This program contains Ipopt, a library for large-scale nonlinear optimization. Ipopt is released as open source code under the Eclipse Public License (EPL). For more information visit http://projects.coin-or.org/Ipopt ****************************************************************************** This is Ipopt version 3.12, running with linear solver mumps. NOTE: Other linear solvers might be more efficient (see Ipopt documentation). Number of nonzeros in equality constraint Jacobian...: 0 Number of nonzeros in inequality constraint Jacobian.: 4 Number of nonzeros in Lagrangian Hessian.............: 3 Total number of variables............................: 3 variables with only lower bounds: 0 variables with lower and upper bounds: 0 variables with only upper bounds: 0 Total number of equality constraints.................: 0 Total number of inequality constraints...............: 3 inequality constraints with only lower bounds: 3 inequality constraints with lower and upper bounds: 0 inequality constraints with only upper bounds: 0 iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls 0 0.0000000e+00 1.00e+00 5.00e-01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0 1 2.2129049e-06 1.00e+00 1.09e+02 -1.0 1.01e+00 - 1.00e+00 9.80e-03h 1 2 2.4328193e-06 1.00e+00 1.55e+05 -1.0 1.00e+00 - 1.39e-01 9.90e-05h 1 3 1.9533370e-02 9.97e-01 9.82e+04 -1.0 1.60e+05 - 6.81e-08 9.04e-07h 1 4 8.5283692e-01 0.00e+00 2.91e+06 -1.0 1.57e+01 - 1.18e-02 6.25e-02f 5 5 7.4508982e-01 0.00e+00 1.19e+07 -1.0 1.19e-01 8.0 3.17e-04 1.00e+00f 1 6 7.3522284e-01 0.00e+00 6.57e+05 -1.0 9.61e-03 7.5 7.73e-01 1.00e+00h 1 7 7.3514688e-01 0.00e+00 9.59e+02 -1.0 7.35e-05 7.0 1.00e+00 1.00e+00h 1 8 7.3514746e-01 0.00e+00 3.58e+00 -1.0 9.67e-07 6.6 1.00e+00 1.00e+00h 1 9 7.6476859e-01 0.00e+00 3.06e-02 -1.0 2.25e-02 - 1.00e+00 1.00e+00f 1 iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls 10 6.9848511e-01 0.00e+00 1.98e-03 -2.5 5.37e-02 - 1.00e+00 1.00e+00f 1 11 6.9344736e-01 0.00e+00 1.59e-05 -3.8 4.78e-03 - 1.00e+00 1.00e+00h 1 12 6.9315086e-01 0.00e+00 5.53e-08 -5.7 4.18e-04 - 1.00e+00 1.00e+00h 1 13 6.9314717e-01 0.00e+00 8.49e-12 -8.6 5.40e-06 - 1.00e+00 1.00e+00h 1 Number of Iterations....: 13 (scaled) (unscaled) Objective...............: 6.9314717223847255e-01 6.9314717223847255e-01 Dual infeasibility......: 8.4893203577962595e-12 8.4893203577962595e-12 Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00 Complementarity.........: 2.5092981987187852e-09 2.5092981987187852e-09 Overall NLP error.......: 2.5092981987187852e-09 2.5092981987187852e-09 Number of objective function evaluations = 20 Number of objective gradient evaluations = 14 Number of equality constraint 
evaluations = 0 Number of inequality constraint evaluations = 20 Number of equality constraint Jacobian evaluations = 0 Number of inequality constraint Jacobian evaluations = 14 Number of Lagrangian Hessian evaluations = 13 Total CPU secs in IPOPT (w/o function evaluations) = 0.004 Total CPU secs in NLP function evaluations = 0.000 EXIT: Optimal Solution Found. Ipopt 3.12: Optimal Solution Found Optimal solutions is x : Size=3, Index=x_index, Domain=Reals Key : Lower : Value : Upper : Fixed : Stale 1 : None : 0.999999999169 : None : False : False 2 : None : 0.0 : None : False : False 3 : None : -7.49070198136e-09 : None : False : False Objective value at the optimal solution is OBJ : Size=1, Index=None, Active=True Key : Active : Value None : True : 0.693147172238 ## Problem 2 The set Pareto optimal solutions is $\{(t,1-t):t\in[0,1]\}$. Let us denote set of Pareto optimal solutions by $PS$ and show that $PS=\{(t,1-t):t\in[0,1]\}$. $PS\supset\{(t,1-t):t\in[0,1]\}$: Let's assume that there exists $t\in[0,1]$, which is not Pareto optimal. Then there exists $x=(x_1,x_2)\in\mathbb R^2$ and $t\in[0,1]$ such that $$ \left\{ \begin{align} \|x-(1,0)\|^2<\|(t,1-t)-(1,0) \|^2,\text{ and}\\ \|x-(0,1)\|^2\leq\|(t,1-t)-(0,1) \|^2 \end{align} \right. $$ or $$ \left\{ \begin{align} \|x-(1,0)\|^2\leq\|(t,1-t)-(1,0) \|^2,\text{ and}\\ \|x-(0,1)\|^2<\|(t,1-t)-(0,1)\|^2. \end{align} \right. $$ But in both cases $$ \sqrt{2} = \|(1,0)-(0,1)\|\\ \leq \|(1,0)-x\|+\|x-(0,1)\|\\ < \|(t,1-t)-(1,0) \|+\|(t,1-t)-(0,1) \|\\ = \|(1,0)-(0,1)\| =\sqrt{2}. $$ because the point $(t,1-t)$ is on the straight line from $(1,0)$ to $(0,1)$. Thus, neither one of the requirements of non-Pareto optimality can hold. Thus, the point is Pareto optimal. $PS\subset\{(t,1-t):t\in[0,1]\}$: Let's assume a Pareto optimal solution $x$. This follows from the triangle inequality. ## Problem 3 Ideal: To solve $$ \min \|x-(1,0)\|^2\\ \text{s.t. }x\in \mathbb R^2. $$ The solution of this problem is naturally $x = (1,0)$ and the minimum is $0$. Minimizing the second objective give $x=(0,1)$ and the minimum is again $0$. Thus, the ideal is $(0,0)$. Now, the problem has just two objectives and thus, we get the components of the nadir by optimizing $$ \min f_1(x)\\ \text{s.t. }f_2(x)\leq z^{ideal}_2 $$ and $$ \min f_2(x)\\ \text{s.t. }f_1(x)\leq z^{ideal}_1. $$ The solution of this problem is Pareto optimal because of the epsilon constraint method and also because the other one of the objectives is at the minimum and the other one cannot be grown with growing the other. Thus, the components of the nadir are at least the optimal values of the above optimization problems. On the other hand, the components of the nadir have to be at most the optimal values of the above optimization problems, because if this was not the case, then the solution would not be Pareto optimal. By solving these optimization problems, we get nadir (2,2). ## Problem 4 ```python def prob(x): return [(x[0]-1)**2+x[1]**2,x[0]**2+(x[1]-1)**2] ``` Let's do this using Pyomo: ```python from pyomo.environ import * from pyomo.opt import SolverFactory #Import interfaces to solvers def weighting_method_pyomo(f,w): points = [] for wi in w: model = ConcreteModel() model.x = Var([0,1]) #weighted sum model.obj = Objective(expr = wi[0]*f(model.x)[0]+wi[1]*f(model.x)[1]) opt = SolverFactory("ipopt") #Use ipopt #Combination of expression and function res=opt.solve(model) #Solve the problem points.append([model.x[0].value,model.x[1].value]) #We should check for optimality... 
return points ``` ```python import numpy as np w = np.random.random((500,2)) #500 random weights repr = weighting_method_pyomo(prob,w) ``` **Plot the solutions in the objective space** ```python import matplotlib.pyplot as plt f_repr_ws = [prob(repri) for repri in repr] fig = plt.figure() plt.scatter([z[0] for z in f_repr_ws],[z[1] for z in f_repr_ws]) plt.show() ``` **Plot the solutions in the decision space** ```python import matplotlib.pyplot as plt fig = plt.figure() plt.scatter([x[0] for x in repr],[x[1] for x in repr]) plt.show() ``` **What do we notice?** In this problem, the weighting method works. This is because the objective functions are convex. Working here means that the method produces an even representation of the whole Pareto optimal set.
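The nadir computation described in Problem 3 can also be checked numerically. The following is a minimal sketch (not part of the original answer) that applies the epsilon-constraint idea with Pyomo and ipopt, which the exercise already uses; the helper name `nadir_component` and the functions `f1`, `f2` are illustrative only.

```python
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, value
from pyomo.opt import SolverFactory

def f1(x):
    return (x[0] - 1)**2 + x[1]**2

def f2(x):
    return x[0]**2 + (x[1] - 1)**2

def nadir_component(obj, other, eps):
    """Minimize `obj` subject to `other(x) <= eps` (epsilon-constraint)."""
    model = ConcreteModel()
    model.x = Var([0, 1])
    model.obj = Objective(expr=obj(model.x))
    model.eps = Constraint(expr=other(model.x) <= eps)
    SolverFactory("ipopt").solve(model)
    return value(model.obj)

# The ideal is (0, 0); constraining each objective to its ideal value in turn
# gives the nadir components. A tiny positive eps helps the solver, since
# other(x) <= 0 is effectively an equality constraint here.
nadir = (nadir_component(f1, f2, 1e-8), nadir_component(f2, f1, 1e-8))
print(nadir)  # approximately (2, 2)
```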
library(devtools) # remove.packages("ptdalgorithms") devtools::install_github("TobiasRoikjer/PtDAlgorithms") library(ptdalgorithms) Rcpp::sourceCpp("./isolation_migration.cpp") im_expectation <- function(n1, n2, m1, m2, split_t) { cat(n1, " ", n2, " ", m1, " ", m2, " ", split_t, " ") # build im graph im_g <- construct_im_graph(n1,n2,m1,m2) cat(vertices_length(im_g), " ") im_expected_visits <- accumulated_visiting_time(im_g, split_t) # create ancestral graph a_g <- construct_ancestral_graph(n1,n2) cat(vertices_length(a_g), " ") # find probabilities of starting at each state in ancestral graph start_prob <- start_prob_from_im(a_g, im_g, im_expected_visits) # compute expectations for each graph im_expectation <- matrix(nrow=n1+1,ncol=n2+1) a_expectation <- matrix(nrow=n1+1,ncol=n2+1) for (i in 0:n1) { for (j in 0:n2) { im_expectation[i+1,j+1] <- sum(im_expected_visits * rewards_at(im_g, i,j,n1,n2)) a_expectation[i+1, j+1] <- sum(start_prob * expected_waiting_time(a_g, rewards_at(a_g, i,j,n1,n2))) } } } for (i in 3:10) { then <- proc.time() im_expectation(i, i, 1, 1, 2) now <- proc.time() cat((now - then)[3], "\n") flush.console() }
theorem Casorati_Weierstrass: assumes "open M" "z \<in> M" "f holomorphic_on (M - {z})" and "\<And>l. \<not> (f \<longlongrightarrow> l) (at z)" "\<And>l. \<not> ((inverse \<circ> f) \<longlongrightarrow> l) (at z)" shows "closure(f ` (M - {z})) = UNIV"
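In standard notation, a reading of the Isabelle statement above (this paraphrase is added here and is not part of the source): if $f$ is holomorphic on $M \setminus \{z\}$ and neither $f$ nor $1/f$ has a limit at $z$ (so $z$ is an essential singularity), then the image of the punctured domain is dense in the plane, $$ \overline{f\bigl(M \setminus \{z\}\bigr)} = \mathbb{C}, $$ which is the Casorati-Weierstrass theorem.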
The Socialist Federal Republic of Yugoslavia was composed of six constituent republics: Bosnia-Herzegovina, Croatia, Macedonia, Montenegro, Serbia, and Slovenia. In 1991, Croatia and Slovenia seceded from Yugoslavia. Bosnia-Herzegovina — a republic with a mixed population consisting of Bosniaks, Serbs, and Croats — followed suit in March 1992 in a highly controversial referendum, creating tension in the ethnic communities. Bosnian Serb militias, whose strategic goal was to secede from Bosnia and Herzegovina and unite with Serbia, encircled Sarajevo with a siege force of 18,000 stationed in the surrounding hills, from which they assaulted the city with weapons that included artillery, mortars, tanks, anti-aircraft guns, heavy machine-guns, rocket launchers, and aircraft bombs. From 2 May 1992 until the end of the war in 1996, the city was blockaded. The Army of the Republic of Bosnia and Herzegovina, numbering roughly 40,000 inside the besieged city, was poorly equipped and unable to break the siege. Meanwhile, throughout the country, thousands of predominantly Bosniak civilians were driven from their homes in a process of ethnic cleansing. In Sarajevo, women and children attempting to buy food were frequently terrorized by Bosnian Serb sniper fire.
State Before: n' : Type u_1 inst✝² : DecidableEq n' inst✝¹ : Fintype n' R : Type u_2 inst✝ : CommRing R A : M ha : IsUnit (det A) z1 z2 : ℤ ⊢ A ^ (z1 - z2) = A ^ z1 / A ^ z2 State After: no goals Tactic: rw [sub_eq_add_neg, zpow_add ha, zpow_neg ha, div_eq_mul_inv]
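Written out, the rewrite chain behind the tactic above is (a paraphrase, assuming `zpow_add` and `zpow_neg` require `IsUnit (det A)` so that negative integer powers are defined): $$ A^{z_1 - z_2} = A^{z_1 + (-z_2)} = A^{z_1} A^{-z_2} = A^{z_1}\,(A^{z_2})^{-1} = A^{z_1} / A^{z_2}. $$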
model_tavg <- function (tmin = 0.0, tmax = 0.0){ #'- Name: Tavg -Version: 1.0, -Time step: 1 #'- Description: #' * Title: Mean temperature calculation #' * Author: STICS #' * Reference: doi:http://dx.doi.org/10.1016/j.agrformet.2014.05.002 #' * Institution: INRA #' * Abstract: It simulates the depth of snow cover and recalculate weather data #'- inputs: #' * name: tmin #' ** description : current minimum air temperature #' ** inputtype : variable #' ** variablecategory : auxiliary #' ** datatype : DOUBLE #' ** default : 0.0 #' ** min : 0.0 #' ** max : 500.0 #' ** unit : degC #' ** uri : #' * name: tmax #' ** description : current maximum air temperature #' ** inputtype : variable #' ** variablecategory : auxiliary #' ** datatype : DOUBLE #' ** default : 0.0 #' ** min : 0.0 #' ** max : 100.0 #' ** unit : degC #' ** uri : #'- outputs: #' * name: tavg #' ** description : mean temperature #' ** variablecategory : auxiliary #' ** datatype : DOUBLE #' ** min : 0.0 #' ** max : 500.0 #' ** unit : degC #' ** uri : tavg <- (tmin + tmax) / 2 return (list('tavg' = tavg)) }
SUBROUTINE GG_WSPD ( filtyp, dattim, kwninc, kcolrs, + numc, iskip, interv, itmclr, iret ) C************************************************************************ C* GG_WSPD * C* * C* This subroutine sets up the times and attributes for plotting * C* altimeter-derived windspeed data. * C* * C* GG_WSPD ( FILTYP, DATTIM, KWNINC, KCOLRS, NUMC, ISKIP, INTERV, * C* ITMCLR, IRET ) * C* * C* Input parameters: * C* FILTYP CHAR* File type; e.g., 'WSPDA' * C* DATTIM CHAR* Ending time for WINDSPEED * C* KWNINC(*) INTEGER WINDSPEED increments * C* KCOLRS(*) INTEGER Color for each WINDSPEED increm * C* NUMC INTEGER Number of WINDSPEED intervals * C* ISKIP INTEGER Skip value * C* INTERV INTEGER Time stamp interval * C* ITMCLR INTEGER Time stamp color * C* * C* Output parameters: * C* IRET INTEGER Return code * C* * C** * C* Log: * C* G. McFadden/IMSG 11/13 Modeled after gg_wave * C************************************************************************ INCLUDE 'GEMPRM.PRM' C* CHARACTER*(*) filtyp, dattim INTEGER kwninc (*), kcolrs (*) C* CHARACTER path*25, templ*(MXTMPL), cdttm*20, + dattm2*20, tfile*128, + flstrt*160 CHARACTER*(MXFLSZ) filnam, files(MXNMFL), fnull CHARACTER cstmin*(20), cval*(20), stime*(20) INTEGER icolrs(LLCLEV), itarr(5), jtarr(5) REAL shainc(LLCLEV) LOGICAL done C----------------------------------------------------------------------- IF ( filtyp .ne. "WSPDA" .and. filtyp .ne. "WSPD2" .and. + filtyp .ne. "WSPDC" ) THEN print *, "filtyp '", filtyp, "' unknown!" iret = -1 return END IF iret = 0 C numclr = numc IF ( interv .le. 0 ) interv = 30 CALL ST_LCUC ( filtyp, filtyp, ier ) IF ( numclr .gt. 0 ) THEN DO ii = 1, numclr shainc ( ii ) = kwninc ( ii ) icolrs ( ii ) = kcolrs ( ii ) END DO ELSE C C* No speeds or colors were specified. Set defaults. C numclr = 1 shainc ( 1 ) = 200. icolrs ( 1 ) = 3 END IF shainc (numclr+1) = shainc (numclr) icolrs (numclr+1) = 0 nexp = MXNMFL iorder = 1 C C* Check for the last file requested by the user. C CALL ST_LCUC ( dattim, dattim, ier ) itype = 1 IF ( dattim .eq. 'LAST' ) THEN CALL CSS_GTIM ( itype, dattm2, ier ) ELSE IF ( dattim .eq. 'ALL' ) THEN RETURN ELSE CALL CSS_GTIM ( itype, cdttm, ier ) CALL TI_STAN ( dattim, cdttm, dattm2, ier ) IF ( ier .ne. 0 ) THEN CALL ER_WMSG ( 'TI', ier, dattim, ierr ) iret = ier return ENDIF END IF C C* Compute stime, the start time of the range by subtracting C* minutes in SAT_WIND_START from the frame time. C CALL ST_NULL ( 'SAT_WIND_START', cstmin, lens, ier ) cval = ' ' CALL CTB_PFSTR ( cstmin, cval, ier1 ) CALL ST_NUMB ( cval, mins, ier2 ) IF ( (ier1 .ne. 0) .or. (ier2 .ne. 0) .or. (mins .lt. 0) ) THEN mins = 6 * 60 END IF CALL TI_CTOI ( dattm2, itarr, ier ) CALL TI_SUBM ( itarr, mins, jtarr, ier ) CALL TI_ITOC ( jtarr, stime, ier ) C C* Compute the new end time of the range by adding C* minutes in SAT_WIND_END to the frame time. C CALL ST_NULL ( 'SAT_WIND_END', cstmin, lens, ier ) cval = ' ' CALL CTB_PFSTR ( cstmin, cval, ier1 ) CALL ST_NUMB ( cval, mins, ier2 ) IF ( (ier1 .ne. 0) .or. (ier2 .ne. 0) .or. (mins .lt. 0) ) THEN mins = 0 END IF CALL TI_ADDM ( itarr, mins, jtarr, ier ) CALL TI_ITOC ( jtarr, dattm2, ier ) C C* Set the color attributes. C CALL GQCOLR ( jcolr, ier ) CALL GSCOLR(itmclr, ier) C C* Set the text attributes for the time stamps. C CALL GQTEXT ( itxfn, itxhw, sztext, itxwid, ibrdr, + irrotn, ijust, ier ) CALL GSTEXT ( 21, 2, 1.0, 1, 111, 1, 1, ier ) C C* Scan the directory for all of the data files. 
C CALL ST_NULL ( filtyp, fnull, nf, ier ) path = ' ' templ = ' ' CALL CTB_DTGET ( fnull, path, templ, ic, is, if, ir, ii, ion, + ihb, mnb, iha, mna, mstrct, idtmch, ier ) CALL ST_RNUL ( path, path, lens, ier ) CALL ST_RNUL ( templ, templ, lens, ier ) CALL ST_LSTR ( path, lenp, ier ) CALL FL_SCND ( path, templ, iorder, nexp, files, nfile, ier ) C C* Display altimeter windspeed data. C IF ( ier .eq. 0 ) THEN C C* This data source exists. Make the file name for the C* last file requested. C CALL FL_MNAM ( dattm2, templ, filnam, ier ) C C* Find the earliest file to start searching. C CALL FL_MNAM ( stime, templ, flstrt, ier ) C C* Decode each file until the end time is reached. C done = .false. ifl = 1 DO WHILE ( ( ifl .le. nfile ) .and. ( .not. done ) ) IF ( files(ifl) .gt. filnam ) THEN done = .true. ELSE C C* Plot this file''s windspeed data. C IF ( files(ifl) .ge. flstrt ) THEN tfile = path(:lenp) // '/' // files(ifl) CALL ST_NULL ( tfile, tfile, lens, ier ) CALL GG_RWSP ( fnull, tfile, shainc, + icolrs, numclr, iskip, + interv, itmclr, ier ) END IF END IF ifl = ifl + 1 END DO END IF C C* Draw the color bar. C CALL GG_CBAR ( '1/H/UR/.95;.95/.50;.01/1|1/21//111///hw', + numclr, shainc, icolrs, ier ) C C* Reset the saved attributes. C CALL GSCOLR ( jcolr, ier ) CALL GSTEXT ( itxfn, itxhw, sztext, itxwid, ibrdr, + irrotn, ijust, ier ) C RETURN END
/* block/gsl_block_ushort.h * * Copyright (C) 1996, 1997, 1998, 1999, 2000 Gerard Jungman, Brian Gough * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or (at * your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ #ifndef __GSL_BLOCK_USHORT_H__ #define __GSL_BLOCK_USHORT_H__ #include <stdlib.h> #include <gsl/gsl_errno.h> #undef __BEGIN_DECLS #undef __END_DECLS #ifdef __cplusplus # define __BEGIN_DECLS extern "C" { # define __END_DECLS } #else # define __BEGIN_DECLS /* empty */ # define __END_DECLS /* empty */ #endif __BEGIN_DECLS struct gsl_block_ushort_struct { size_t size; unsigned short *data; }; typedef struct gsl_block_ushort_struct gsl_block_ushort; gsl_block_ushort *gsl_block_ushort_alloc (const size_t n); gsl_block_ushort *gsl_block_ushort_calloc (const size_t n); void gsl_block_ushort_free (gsl_block_ushort * b); int gsl_block_ushort_fread (FILE * stream, gsl_block_ushort * b); int gsl_block_ushort_fwrite (FILE * stream, const gsl_block_ushort * b); int gsl_block_ushort_fscanf (FILE * stream, gsl_block_ushort * b); int gsl_block_ushort_fprintf (FILE * stream, const gsl_block_ushort * b, const char *format); int gsl_block_ushort_raw_fread (FILE * stream, unsigned short * b, const size_t n, const size_t stride); int gsl_block_ushort_raw_fwrite (FILE * stream, const unsigned short * b, const size_t n, const size_t stride); int gsl_block_ushort_raw_fscanf (FILE * stream, unsigned short * b, const size_t n, const size_t stride); int gsl_block_ushort_raw_fprintf (FILE * stream, const unsigned short * b, const size_t n, const size_t stride, const char *format); size_t gsl_block_ushort_size (const gsl_block_ushort * b); unsigned short * gsl_block_ushort_data (const gsl_block_ushort * b); __END_DECLS #endif /* __GSL_BLOCK_USHORT_H__ */
At the same time, his mother Loretta was diagnosed with skin cancer. Following her death early in his freshman year, Cullen contemplated returning to his Ontario home, but was convinced by his father to continue with both school and hockey. He used the game to cope with the loss and dedicated every game he played to his mother's memory. Cullen felt that the inspiration he drew from his mother's battle allowed him to become a better player.
If $f$ is a linear injective map from a normed vector space to a Euclidean space, then there exists a constant $B > 0$ such that $B \|x\| \leq \|f(x)\|$ for all $x$.
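A symbolic restatement (added here as a paraphrase, not part of the original): for an injective linear map $f : V \to \mathbb{R}^n$ from a normed vector space $V$, $$ \exists B > 0 \;\forall x \in V : \; B\,\lVert x \rVert \le \lVert f(x) \rVert. $$ One way to see this: injectivity into $\mathbb{R}^n$ forces $V$ to be finite-dimensional, so the inverse of $f$ on its image is a linear map between finite-dimensional spaces and hence bounded, which yields the constant $B$.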
read "../IdentifiabilityODE.mpl"; sys := [ diff(s(t), t) = -b1*b0*s(t)*x1(t)*i(t) - b0*s(t)*i(t) - s(t)*mu + mu + g*r(t), diff(x2(t), t) = M*x1(t), diff(i(t), t) = -nu*i(t) + b1*b0*s(t)*x1(t)*i(t) + b0*s(t)*i(t) - mu*i(t), diff(r(t), t) = nu*i(t) - mu*r(t) - g*r(t), diff(x1(t), t) = -M*x2(t), y1(t) = i(t), y2(t) = r(t) ]; CodeTools[CPUTime](IdentifiabilityODE(sys, GetParameters(sys)));
% PLEASE DO NOT MODIFY THIS FILE! It was generated by raskman version: 1.1.0 \subsubsection{glite-job-submit} \label{glite-job-submit} \medskip \textbf{glite-job-submit} \smallskip \medskip \textbf{SYNOPSIS} \smallskip \textbf{glite-job-submit [options] $<$jdl\_file$>$} {\begin{verbatim} options: --version --help --config, -c <configfile> --debug --logfile <filepath> --noint --input, -i <filepath> --output, -o <filepath> --resource, -r <ceid> --nodes-resource <ceid> --nolisten --nogui --nomsg --chkpt <filepath> --lrms <lrmstype> --valid, -v <hh:mm> --config-vo <configfile> --vo <voname> \end{verbatim} \medskip \textbf{DESCRIPTION} \smallskip glite-job-submit is the command for submitting jobs to the DataGrid and hence allows the user to run a job at one or several remote resources. glite-job-submit requires as input a job description file in which job characteristics and requirements are expressed by means of Condor class-ad-like expressions. While it does not matter the order of the other arguments, the job description file has to be the last argument of this command. \medskip \textbf{OPTIONS} \smallskip \textbf{--version} displays UI version. \textbf{--help} displays command usage \textbf{--config}, \textbf{-c} <configfile> if the command is launched with this option, the configuration file pointed by configfile is used. This option is meaningless when used together with "--vo" option \textbf{--debug} When this option is specified, debugging information is displayed on the standard output and written into the log file, whose location is eventually printed on screen. The default UI logfile location is: glite-wms-job-<command\_name>\_<uid>\_<pid>\_<time>.log located under the /var/tmp directory please notice that this path can be overriden with the '--logfile' option \textbf{--logfile} <filepath> when this option is specified, all information is written into the specified file pointed by filepath. This option will override the default location of the logfile: glite-wms-job-<command\_name>\_<uid>\_<pid>\_<time>.log located under the /var/tmp directory \textbf{--noint} if this option is specified, every interactive question to the user is skipped and the operation is continued (when possible) \textbf{--input}, \textbf{-i} <filepath> if this option is specified, the user will be asked to choose a CEId from a list of CEs contained in the filepath. Once a CEId has been selected the command behaves as explained for the resource option. If this option is used together with the --int one and the input file contains more than one CEId, then the first CEId in the list is taken into account for submitting the job. \textbf{--output}, \textbf{-o} <filepath> writes the generated jobId assigned to the submitted job in the file specified by filepath,which can be either a simple name or an absolute path (on the submitting machine). In the former case the file is created in the current working directory. \textbf{--resource}, \textbf{-r} <ceid> This command is available only for jobs. if this option is specified, the job-ad sent to the NS contains a line of the type "SubmitTo = <ceid>" and the job is submitted by the WMS to the resource identified by <ceid> without going through the match-making process. \textbf{--nodes-resource} <ceid> This command is available only for dags. 
if this option is specified, the job-ad sent to the NS contains a line of the type "SubmitTo = <ceid>" and the dag is submitted by the WMS to the resource identified by <ceid> without going through the match-making process for each of its nodes. \textbf{--nolisten} This option can be used only for interactive jobs. It makes the command forward the job standard streams coming from the WN to named pipes on the client machine whose names are returned to the user together with the OS id of the listener process. This allows the user to interact with the job through her/his own tools. It is important to note that when this option is specified, the command has no more control over the launched listener process that has hence to be killed by the user (through the returned process id) once the job is finished. \textbf{--nogui} This option can be used only for interactive jobs. As the command for such jobs opens an X window, the user should make sure that an X server is running on the local machine and if she/he is connected to the UI node from a remote machine (e.g. with ssh) enable secure X11 tunneling. If this is not possible, the user can specify the --nogui option that makes the command provide a simple standard non-graphical interaction with the running job. \textbf{--nomsg} this option makes the command print on the standard output only the jobId generated for the job when submission was successful.The location of the log file containing massages and diagnostics is printed otherwise. \textbf{--chkpt} <filepath> This option can be used only for checkpointable jobs. The state specified as input is a checkpoint state generated by a previously submitted job. This option makes the submitted job start running from the checkpoint state given in input and not from the very beginning. The initial checkpoint states to be used with this option can be retrieved by means of the glite-job-get-chkpt command. \textbf{--lrms} <lrmstype> This option is only for MPICH jobs and must be used together with either --resource or --input option; it specifies the type of the lrms of the resource the user is submitting to. When the batch system type of the specified CE resource given is not known, the lrms must be provided while submitting. For non-MPICH jobs this option will be ignored. \textbf{--valid}, \textbf{-v} <hh:mm> A job for which no compatible CEs have been found during the matchmaking phase is hold in the WMS Task Queue for a certain time so that it can be subjected again to matchmaking from time to time until a compatible CE is found. The JDL ExpiryTime attribute is an integer representing the date and time (in seconds since epoch)until the job request has to be considered valid by the WMS. This option allows to specify the validity in hours and minutes from submission time of the submitted JDL. When this option is used the command sets the value for the ExpiryTime attribute converting appropriately the relative timestamp provided as input. It overrides, if present,the current value. If the specified value exceeds one day from job submission then it is not taken into account by the WMS. \textbf{--config-vo} <configfile> if the command is launched with this option, the VO-specific configuration file pointed by configfile is used. This option is meaningless when used together with "--vo" option \textbf{--vo} <voname> this option allows the user to specify the name of the Virtual Organisation she/he is currently working for. 
If the user proxy contains VOMS extensions then the VO specified through this option is overridden by the default VO contained in the proxy (i.e. this option is only useful when working with non-VOMS proxies). This option is meaningless when used together with "--config-vo" option \medskip \textbf{EXAMPLES} \smallskip Upon successful submissions, this command returns to the identifier (JobId) assigned to the job - saves the returned JobId in a file: glite-job-submit --output jobid.out ./job.jdl - forces the submission to the resource specified with the -r option: glite-job-submit -r lxb1111.glite.it:2119/blah-lsf-jra1\_low ./job.jdl - forces the submission of the DAG (the parent and all child nodes) to the resource specified with the --nodes-resources option: glite-job-submit --nodes-resources lxb1111.glite.it:2119/blah-lsf-jra1\_low ./dag.jdl \medskip \textbf{ENVIRONMENT} \smallskip GLITE\_WMSUI\_CONFIG\_VAR: This variable may be set to specify the path location of the custom default attribute configuration GLITE\_WMSUI\_CONFIG\_VO: This variable may be set to specify the path location of the VO-specific configuration file GLITE\_WMS\_LOCATION: This variable must be set when the Glite WMS installation is not located in the default paths: either /opt/glite or /usr/local GLITE\_LOCATION: This variable must be set when the Glite installation is not located in the default paths: either /opt/glite or /usr/local GLOBUS\_LOCATION: This variable must be set when the Globus installation is not located in the default path /opt/globus. It is taken into account only by submission and get-output commands GLOBUS\_TCP\_PORT\_RANGE="<val min> <val max>" This variable must be set to define a range of ports to be used for inbound connections in the interactivity context. It is taken into account only by submission of interactive jobs and attach commands X509\_CERT\_DIR: This variable may be set to override the default location of the trusted certificates directory, which is normally /etc/grid-security/certificates. X509\_USER\_PROXY: This variable may be set to override the default location of the user proxy credentials, which is normally /tmp/x509up\_u<uid>. \medskip \textbf{FILES} \smallskip One of the following paths must exist (seeked with the specified order): - \$GLITE\_WMS\_LOCATION/etc/ - \$GLITE\_LOCATION/etc/ - /opt/glite/etc/ - /usr/local/etc/ - /etc/ and contain the following UI configuration files: glite\_wmsui\_cmd\_var.conf, glite\_wmsui\_cmd\_err.conf, glite\_wmsui\_cmd\_help.conf, <voName>/glite\_wmsui.conf - glite\_wmsui\_cmd\_var.conf will contain custom configuration default values A different configuration file may be specified either by using the --config option or by setting the GLITE\_WMSUI\_CONFIG\_VAR environment variable here follows a possible example: [ RetryCount = 3 ; ErrorStorage= "/tmp" ; OutputStorage="/tmp"; ListenerStorage = "/tmp" ; LoggingTimeout = 30 ; LoggingSyncTimeout = 30 ; NSLoggerLevel = 0; DefaultStatusLevel = 1 ; DefaultLogInfoLevel = 1; ] - glite\_wmsui\_cmd\_err.conf will contain UI exception mapping between error codes and error messages (no relocation possible) - glite\_wmsui\_cmd\_help.conf will contain UI long-help information (no relocation possible) - <voName>/glite\_wmsui.conf will contain User VO-specific attributes. 
A different configuration file may be specified either by using the --config-vo option or by setting the GLITE\_WMSUI\_CONFIG\_VO environment variable here follows a possible example: [ LBAddresses = { "tigerman.cnaf.infn.it:9000" }; VirtualOrganisation = "egee"; NSAddresses = { "tigerman.cnaf.infn.it:7772" } ] Besides those files, a valid proxy must be found inside the following path: /tmp/x509up\_u<uid> ( use the X509\_USER\_PROXY environment variable to override the default location JDL file) \medskip \textbf{AUTHORS} \smallskip Alessandro Maraschini ([email protected])
[GOAL] F : Sort u_1 α : Sort u_2 β : α → Sort u_3 i : FunLike F α β f g : F h : f = g ⊢ ↑f = ↑g [PROOFSTEP] cases h [GOAL] case refl F : Sort u_1 α : Sort u_2 β : α → Sort u_3 i : FunLike F α β f : F ⊢ ↑f = ↑f [PROOFSTEP] rfl
[GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a : k ⊢ slope f a a = 0 [PROOFSTEP] rw [slope, sub_self, inv_zero, zero_smul] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b : k ⊢ (b - a) • slope f a b = f b -ᵥ f a [PROOFSTEP] rcases eq_or_ne a b with (rfl | hne) [GOAL] case inl k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a : k ⊢ (a - a) • slope f a a = f a -ᵥ f a [PROOFSTEP] rw [sub_self, zero_smul, vsub_self] [GOAL] case inr k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b : k hne : a ≠ b ⊢ (b - a) • slope f a b = f b -ᵥ f a [PROOFSTEP] rw [slope, smul_inv_smul₀ (sub_ne_zero.2 hne.symm)] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b : k ⊢ (b - a) • slope f a b +ᵥ f a = f b [PROOFSTEP] rw [sub_smul_slope, vsub_vadd] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → E c : PE ⊢ (slope fun x => f x +ᵥ c) = slope f [PROOFSTEP] ext a b [GOAL] case h.h k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → E c : PE a b : k ⊢ slope (fun x => f x +ᵥ c) a b = slope f a b [PROOFSTEP] simp only [slope, vadd_vsub_vadd_cancel_right, vsub_eq_sub] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → E a b : k h : a ≠ b ⊢ slope (fun x => (x - a) • f x) a b = f b [PROOFSTEP] simp [slope, inv_smul_smul₀ (sub_ne_zero.2 h.symm)] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b : k h : slope f a b = 0 ⊢ f a = f b [PROOFSTEP] rw [← sub_smul_slope_vadd f a b, h, smul_zero, zero_vadd] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝⁶ : Field k inst✝⁵ : AddCommGroup E inst✝⁴ : Module k E inst✝³ : AddTorsor E PE F : Type u_4 PF : Type u_5 inst✝² : AddCommGroup F inst✝¹ : Module k F inst✝ : AddTorsor F PF f : PE →ᵃ[k] PF g : k → PE a b : k ⊢ slope (↑f ∘ g) a b = ↑f.linear (slope g a b) [PROOFSTEP] simp only [slope, (· ∘ ·), f.linear.map_smul, f.linearMap_vsub] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b : k ⊢ slope f a b = slope f b a [PROOFSTEP] rw [slope, slope, ← neg_vsub_eq_vsub_rev, smul_neg, ← neg_smul, neg_inv, neg_sub] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b c : k ⊢ ((b - a) / (c - a)) • slope f a b + ((c - b) / (c - a)) • slope f b c = slope f a c [PROOFSTEP] by_cases hab : a = b [GOAL] case pos k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b c : k hab : a = b ⊢ ((b - a) / (c - a)) • slope f a b + ((c - b) / (c - a)) • slope f b c = slope f a c [PROOFSTEP] subst hab [GOAL] case pos k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE 
f : k → PE a c : k ⊢ ((a - a) / (c - a)) • slope f a a + ((c - a) / (c - a)) • slope f a c = slope f a c [PROOFSTEP] rw [sub_self, zero_div, zero_smul, zero_add] [GOAL] case pos k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a c : k ⊢ ((c - a) / (c - a)) • slope f a c = slope f a c [PROOFSTEP] by_cases hac : a = c [GOAL] case pos k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a c : k hac : a = c ⊢ ((c - a) / (c - a)) • slope f a c = slope f a c [PROOFSTEP] simp [hac] [GOAL] case neg k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a c : k hac : ¬a = c ⊢ ((c - a) / (c - a)) • slope f a c = slope f a c [PROOFSTEP] rw [div_self (sub_ne_zero.2 <| Ne.symm hac), one_smul] [GOAL] case neg k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b c : k hab : ¬a = b ⊢ ((b - a) / (c - a)) • slope f a b + ((c - b) / (c - a)) • slope f b c = slope f a c [PROOFSTEP] by_cases hbc : b = c [GOAL] case pos k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b c : k hab : ¬a = b hbc : b = c ⊢ ((b - a) / (c - a)) • slope f a b + ((c - b) / (c - a)) • slope f b c = slope f a c [PROOFSTEP] subst hbc [GOAL] case pos k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b : k hab : ¬a = b ⊢ ((b - a) / (b - a)) • slope f a b + ((b - b) / (b - a)) • slope f b b = slope f a b [PROOFSTEP] simp [sub_ne_zero.2 (Ne.symm hab)] [GOAL] case neg k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b c : k hab : ¬a = b hbc : ¬b = c ⊢ ((b - a) / (c - a)) • slope f a b + ((c - b) / (c - a)) • slope f b c = slope f a c [PROOFSTEP] rw [add_comm] [GOAL] case neg k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b c : k hab : ¬a = b hbc : ¬b = c ⊢ ((c - b) / (c - a)) • slope f b c + ((b - a) / (c - a)) • slope f a b = slope f a c [PROOFSTEP] simp_rw [slope, div_eq_inv_mul, mul_smul, ← smul_add, smul_inv_smul₀ (sub_ne_zero.2 <| Ne.symm hab), smul_inv_smul₀ (sub_ne_zero.2 <| Ne.symm hbc), vsub_add_vsub_cancel] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b c : k h : a ≠ c ⊢ ↑(lineMap (slope f a b) (slope f b c)) ((c - b) / (c - a)) = slope f a c [PROOFSTEP] field_simp [sub_ne_zero.2 h.symm, ← sub_div_sub_smul_slope_add_sub_div_sub_smul_slope f a b c, lineMap_apply_module] [GOAL] k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b r : k ⊢ ↑(lineMap (slope f (↑(lineMap a b) r) b) (slope f a (↑(lineMap a b) r))) r = slope f a b [PROOFSTEP] obtain rfl | hab : a = b ∨ a ≠ b := Classical.em _ [GOAL] case inl k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a r : k ⊢ ↑(lineMap (slope f (↑(lineMap a a) r) a) (slope f a (↑(lineMap a a) r))) r = slope f a a [PROOFSTEP] simp [GOAL] case inr k : Type u_1 E : 
Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b r : k hab : a ≠ b ⊢ ↑(lineMap (slope f (↑(lineMap a b) r) b) (slope f a (↑(lineMap a b) r))) r = slope f a b [PROOFSTEP] rw [slope_comm _ a, slope_comm _ a, slope_comm _ _ b] [GOAL] case inr k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b r : k hab : a ≠ b ⊢ ↑(lineMap (slope f b (↑(lineMap a b) r)) (slope f (↑(lineMap a b) r) a)) r = slope f b a [PROOFSTEP] convert lineMap_slope_slope_sub_div_sub f b (lineMap a b r) a hab.symm using 2 [GOAL] case h.e'_2.h.e'_6 k : Type u_1 E : Type u_2 PE : Type u_3 inst✝³ : Field k inst✝² : AddCommGroup E inst✝¹ : Module k E inst✝ : AddTorsor E PE f : k → PE a b r : k hab : a ≠ b ⊢ r = (a - ↑(lineMap a b) r) / (a - b) [PROOFSTEP] rw [lineMap_apply_ring, eq_div_iff (sub_ne_zero.2 hab), sub_mul, one_mul, mul_sub, ← sub_sub, sub_sub_cancel]
The skin Xantary was added on 12 April 2019; the skin has a size of 64x64 pixels. This skin is used by 1 player. Below is a list of players who use this skin. How do you set the Xantary skin? Which players use the Xantary skin?
Wing area: 48.80 m2 (525.30 ft2)
(* * Copyright 2014, General Dynamics C4 Systems * * SPDX-License-Identifier: GPL-2.0-only *) theory ArchInterrupt_AI imports "../Interrupt_AI" begin context Arch begin global_naming ARM primrec arch_irq_control_inv_valid_real :: "arch_irq_control_invocation \<Rightarrow> 'a::state_ext state \<Rightarrow> bool" where "arch_irq_control_inv_valid_real (ArchIRQControlIssue irq dest_slot src_slot trigger) = (cte_wp_at ((=) cap.NullCap) dest_slot and cte_wp_at ((=) cap.IRQControlCap) src_slot and ex_cte_cap_wp_to is_cnode_cap dest_slot and real_cte_at dest_slot and K (irq \<le> maxIRQ))" defs arch_irq_control_inv_valid_def: "arch_irq_control_inv_valid \<equiv> arch_irq_control_inv_valid_real" named_theorems Interrupt_AI_asms lemma (* decode_irq_control_invocation_inv *)[Interrupt_AI_asms]: "\<lbrace>P\<rbrace> decode_irq_control_invocation label args slot caps \<lbrace>\<lambda>rv. P\<rbrace>" apply (simp add: decode_irq_control_invocation_def Let_def arch_check_irq_def arch_decode_irq_control_invocation_def whenE_def, safe) apply (wp | simp)+ done lemma decode_irq_control_valid [Interrupt_AI_asms]: "\<lbrace>\<lambda>s. invs s \<and> (\<forall>cap \<in> set caps. s \<turnstile> cap) \<and> (\<forall>cap \<in> set caps. is_cnode_cap cap \<longrightarrow> (\<forall>r \<in> cte_refs cap (interrupt_irq_node s). ex_cte_cap_wp_to is_cnode_cap r s)) \<and> cte_wp_at ((=) cap.IRQControlCap) slot s\<rbrace> decode_irq_control_invocation label args slot caps \<lbrace>irq_control_inv_valid\<rbrace>,-" apply (simp add: decode_irq_control_invocation_def Let_def split_def whenE_def arch_check_irq_def arch_decode_irq_control_invocation_def split del: if_split cong: if_cong) apply (wpsimp wp: ensure_empty_stronger simp: cte_wp_at_eq_simp arch_irq_control_inv_valid_def | wp (once) hoare_drop_imps)+ apply (clarsimp simp: linorder_not_less word_le_nat_alt unat_ucast maxIRQ_def) apply (cases caps ; fastforce simp: cte_wp_at_eq_simp) done lemma get_irq_slot_different_ARCH[Interrupt_AI_asms]: "\<lbrace>\<lambda>s. valid_global_refs s \<and> ex_cte_cap_wp_to is_cnode_cap ptr s\<rbrace> get_irq_slot irq \<lbrace>\<lambda>rv s. rv \<noteq> ptr\<rbrace>" apply (simp add: get_irq_slot_def) apply wp apply (clarsimp simp: valid_global_refs_def valid_refs_def ex_cte_cap_wp_to_def) apply (elim allE, erule notE, erule cte_wp_at_weakenE) apply (clarsimp simp: global_refs_def is_cap_simps cap_range_def) done lemma is_derived_use_interrupt_ARCH[Interrupt_AI_asms]: "(is_ntfn_cap cap \<and> interrupt_derived cap cap') \<longrightarrow> (is_derived m p cap cap')" apply (clarsimp simp: is_cap_simps) apply (clarsimp simp: interrupt_derived_def is_derived_def) apply (clarsimp simp: cap_master_cap_def split: cap.split_asm) apply (simp add: is_cap_simps is_pt_cap_def vs_cap_ref_def) done lemma maskInterrupt_invs_ARCH[Interrupt_AI_asms]: "\<lbrace>invs and (\<lambda>s. \<not>b \<longrightarrow> interrupt_states s irq \<noteq> IRQInactive)\<rbrace> do_machine_op (maskInterrupt b irq) \<lbrace>\<lambda>rv. 
invs\<rbrace>" apply (simp add: do_machine_op_def split_def maskInterrupt_def) apply wp apply (clarsimp simp: in_monad invs_def valid_state_def all_invs_but_valid_irq_states_for_def valid_irq_states_but_def valid_irq_masks_but_def valid_machine_state_def cur_tcb_def valid_irq_states_def valid_irq_masks_def) done lemma no_cap_to_obj_with_diff_IRQHandler_ARCH[Interrupt_AI_asms]: "no_cap_to_obj_with_diff_ref (IRQHandlerCap irq) S = \<top>" by (rule ext, simp add: no_cap_to_obj_with_diff_ref_def cte_wp_at_caps_of_state obj_ref_none_no_asid) lemma (* set_irq_state_valid_cap *)[Interrupt_AI_asms]: "\<lbrace>valid_cap cap\<rbrace> set_irq_state IRQSignal irq \<lbrace>\<lambda>rv. valid_cap cap\<rbrace>" apply (clarsimp simp: set_irq_state_def) apply (wp do_machine_op_valid_cap) apply (auto simp: valid_cap_def valid_untyped_def split: cap.splits option.splits arch_cap.splits split del: if_split) done crunch valid_global_refs[Interrupt_AI_asms]: set_irq_state "valid_global_refs" lemma invoke_irq_handler_invs'[Interrupt_AI_asms]: assumes dmo_ex_inv[wp]: "\<And>f. \<lbrace>invs and ex_inv\<rbrace> do_machine_op f \<lbrace>\<lambda>rv::unit. ex_inv\<rbrace>" assumes cap_insert_ex_inv[wp]: "\<And>cap src dest. \<lbrace>ex_inv and invs and K (src \<noteq> dest)\<rbrace> cap_insert cap src dest \<lbrace>\<lambda>_.ex_inv\<rbrace>" assumes cap_delete_one_ex_inv[wp]: "\<And>cap. \<lbrace>ex_inv and invs\<rbrace> cap_delete_one cap \<lbrace>\<lambda>_.ex_inv\<rbrace>" shows "\<lbrace>invs and ex_inv and irq_handler_inv_valid i\<rbrace> invoke_irq_handler i \<lbrace>\<lambda>rv s. invs s \<and> ex_inv s\<rbrace>" proof - have cap_insert_invs_ex_invs[wp]: "\<And>cap src dest. \<lbrace>ex_inv and (invs and cte_wp_at (\<lambda>c. c = NullCap) dest and valid_cap cap and tcb_cap_valid cap dest and ex_cte_cap_wp_to (appropriate_cte_cap cap) dest and (\<lambda>s. \<forall>r\<in>obj_refs cap. \<forall>p'. dest \<noteq> p' \<and> cte_wp_at (\<lambda>cap'. r \<in> obj_refs cap') p' s \<longrightarrow> cte_wp_at (Not \<circ> is_zombie) p' s \<and> \<not> is_zombie cap) and (\<lambda>s. cte_wp_at (is_derived (cdt s) src cap) src s) and (\<lambda>s. cte_wp_at (\<lambda>cap'. \<forall>irq\<in>cap_irqs cap - cap_irqs cap'. irq_issued irq s) src s) and (\<lambda>s. \<forall>t R. cap = ReplyCap t False R \<longrightarrow> st_tcb_at awaiting_reply t s \<and> \<not> has_reply_cap t s) and K (\<not> is_master_reply_cap cap))\<rbrace> cap_insert cap src dest \<lbrace>\<lambda>rv s. invs s \<and> ex_inv s\<rbrace>" apply wp apply (auto simp: cte_wp_at_caps_of_state) done show ?thesis apply (cases i, simp_all) apply (wp maskInterrupt_invs_ARCH) apply simp apply (rename_tac irq cap prod) apply (rule hoare_pre) apply (wp valid_cap_typ [OF cap_delete_one_typ_at]) apply (strengthen real_cte_tcb_valid) apply (wp real_cte_at_typ_valid [OF cap_delete_one_typ_at]) apply (rule_tac Q="\<lambda>rv s. 
is_ntfn_cap cap \<and> invs s \<and> cte_wp_at (is_derived (cdt s) prod cap) prod s" in hoare_post_imp) apply (clarsimp simp: is_cap_simps is_derived_def cte_wp_at_caps_of_state) apply (simp split: if_split_asm) apply (simp add: cap_master_cap_def split: cap.split_asm) apply (drule cte_wp_valid_cap [OF caps_of_state_cteD] | clarsimp)+ apply (clarsimp simp: cap_master_cap_simps valid_cap_def obj_at_def is_ntfn is_tcb is_cap_table split: option.split_asm dest!:cap_master_cap_eqDs) apply (wp cap_delete_one_still_derived) apply simp apply (wp get_irq_slot_ex_cte get_irq_slot_different_ARCH hoare_drop_imps) apply (clarsimp simp: valid_state_def invs_def appropriate_cte_cap_def is_cap_simps) apply (erule cte_wp_at_weakenE, simp add: is_derived_use_interrupt_ARCH) apply (wp| simp add: )+ done qed lemma (* invoke_irq_control_invs *) [Interrupt_AI_asms]: "\<lbrace>invs and irq_control_inv_valid i\<rbrace> invoke_irq_control i \<lbrace>\<lambda>rv. invs\<rbrace>" apply (cases i, simp_all) apply (wp cap_insert_simple_invs | simp add: IRQHandler_valid is_cap_simps no_cap_to_obj_with_diff_IRQHandler_ARCH | strengthen real_cte_tcb_valid)+ apply (clarsimp simp: cte_wp_at_caps_of_state is_simple_cap_def is_cap_simps is_pt_cap_def safe_parent_for_def ex_cte_cap_to_cnode_always_appropriate_strg) apply (case_tac x2) apply (simp add: arch_irq_control_inv_valid_def) apply (wp cap_insert_simple_invs | simp add: IRQHandler_valid is_cap_simps no_cap_to_obj_with_diff_IRQHandler_ARCH | strengthen real_cte_tcb_valid)+ apply (clarsimp simp: cte_wp_at_caps_of_state is_simple_cap_def is_cap_simps is_pt_cap_def safe_parent_for_def ex_cte_cap_to_cnode_always_appropriate_strg) done crunch device_state_inv[wp]: resetTimer "\<lambda>ms. P (device_state ms)" lemma resetTimer_invs_ARCH[Interrupt_AI_asms]: "\<lbrace>invs\<rbrace> do_machine_op resetTimer \<lbrace>\<lambda>_. invs\<rbrace>" apply (wp dmo_invs) apply safe apply (drule_tac Q="%_ b. underlying_memory b p = underlying_memory m p" in use_valid) apply (simp add: resetTimer_def machine_op_lift_def machine_rest_lift_def split_def) apply wp apply (clarsimp+)[2] apply(erule use_valid, wp no_irq_resetTimer no_irq, assumption) done lemma empty_fail_ackInterrupt_ARCH[Interrupt_AI_asms]: "empty_fail (ackInterrupt irq)" by (wp | simp add: ackInterrupt_def)+ lemma empty_fail_maskInterrupt_ARCH[Interrupt_AI_asms]: "empty_fail (maskInterrupt f irq)" by (wp | simp add: maskInterrupt_def)+ lemma (* handle_interrupt_invs *) [Interrupt_AI_asms]: "\<lbrace>invs\<rbrace> handle_interrupt irq \<lbrace>\<lambda>_. invs\<rbrace>" apply (simp add: handle_interrupt_def ) apply (rule conjI; rule impI) apply (simp add: do_machine_op_bind empty_fail_ackInterrupt_ARCH empty_fail_maskInterrupt_ARCH) apply (wp dmo_maskInterrupt_invs maskInterrupt_invs_ARCH dmo_ackInterrupt send_signal_interrupt_states | wpc | simp)+ apply (wp get_cap_wp send_signal_interrupt_states ) apply (rule_tac Q="\<lambda>rv. invs and (\<lambda>s. st = interrupt_states s irq)" in hoare_post_imp) apply (clarsimp simp: ex_nonz_cap_to_def invs_valid_objs) apply (intro allI exI, erule cte_wp_at_weakenE) apply (clarsimp simp: is_cap_simps) apply (wp hoare_drop_imps resetTimer_invs_ARCH | simp add: get_irq_state_def handle_reserved_irq_def)+ done lemma sts_arch_irq_control_inv_valid[wp, Interrupt_AI_asms]: "\<lbrace>arch_irq_control_inv_valid i\<rbrace> set_thread_state t st \<lbrace>\<lambda>rv. 
arch_irq_control_inv_valid i\<rbrace>" apply (simp add: arch_irq_control_inv_valid_def) apply (cases i) apply (clarsimp) apply (wp ex_cte_cap_to_pres | simp add: cap_table_at_typ)+ done end interpretation Interrupt_AI?: Interrupt_AI proof goal_cases interpret Arch . case 1 show ?case by (intro_locales; (unfold_locales, simp_all add: Interrupt_AI_asms)?) qed end
(* bounded stack using List as its implementation *) theory BoundedStack imports Main begin section {* implementation by list *} record 'a stack = stk :: "'a list" maxsize :: nat locale bstack_list begin fun push :: "'a \<Rightarrow> 'a stack \<Rightarrow> 'a stack" where "push v s = (if length (stk s) < maxsize s then s\<lparr>stk := v # stk s\<rparr> else s)" fun pop :: "'a stack \<Rightarrow> ('a \<times> 'a stack)" where "pop s = (case stk s of x # xs \<Rightarrow> (x, s\<lparr>stk := xs\<rparr>))" definition top :: "'a stack \<Rightarrow> 'a" where "top s \<equiv> hd (stk s)" definition "emp s \<equiv> stk s = []" definition "full s \<equiv> length (stk s) = maxsize s" definition "notfull s \<equiv> length (stk s) < maxsize s" definition "valid s \<equiv> notfull s \<or> full s" lemma "notfull s \<Longrightarrow> top (push v s) = v" by(simp add:top_def notfull_def) lemma "notfull s \<Longrightarrow> pop (push x s) = (x, s)" by (simp add:notfull_def) lemma "full s \<Longrightarrow> push x s = s" by (simp add:full_def) lemma "\<not> emp s \<Longrightarrow> top s = fst (pop s)" by (simp add: emp_def list.case_eq_if top_def) lemma "\<lbrakk>\<not> emp s; (v, s0) = pop s; valid s \<rbrakk> \<Longrightarrow> push v s0 = s" apply(simp add: emp_def valid_def full_def notfull_def) apply(case_tac "stk s") apply simp by auto end section {* implementation by type *} typedef (overloaded) 'a bstack = "{xs :: ('a list \<times> nat). length (fst xs) \<le> snd xs}" morphisms alist_of Abs_bstack proof - have "([],0) \<in> {xs. length (fst xs) \<le> snd xs}" by simp thus ?thesis by blast qed thm alist_of_inverse thm alist_of thm Abs_bstack_inverse thm Abs_bstack_inject locale bstack_type begin definition capacity :: "'a bstack \<Rightarrow> nat" where "capacity s \<equiv> snd (alist_of s)" definition size :: "'a bstack \<Rightarrow> nat" where "size s \<equiv> length (fst (alist_of s))" definition isfull :: "'a bstack \<Rightarrow> bool" where "isfull s \<equiv> size s = capacity s" definition isempty :: "'a bstack \<Rightarrow> bool" where "isempty s \<equiv> fst (alist_of s) = []" lemma Abs_bstack: "length (fst xs) \<le> snd xs \<Longrightarrow> alist_of (Abs_bstack xs) = xs" apply (rule Abs_bstack_inverse) by simp lemma [code abstype]: "Abs_bstack (alist_of bs) = bs" by (simp add:alist_of_inverse) lemma "xs = alist_of bs \<Longrightarrow> length (fst xs) \<le> snd xs" using alist_of by auto lemma bstack_valid: "size s \<le> capacity s" apply(simp add:capacity_def size_def) using alist_of by blast definition newstack :: "nat \<Rightarrow> 'a bstack" where "newstack n \<equiv> Abs_bstack ([],n)" definition push :: "'a \<Rightarrow> 'a bstack \<Rightarrow> 'a bstack" where "push v s \<equiv> (if \<not>isfull s then Abs_bstack (v # fst (alist_of s), snd (alist_of s)) else s)" definition pop :: "'a bstack \<Rightarrow> ('a option \<times> 'a bstack)" where "pop s \<equiv> (if \<not> isempty s then (Some (hd (fst (alist_of s))), Abs_bstack (tl (fst (alist_of s)), snd (alist_of s))) else (None, s))" definition top :: "'a bstack \<Rightarrow> 'a option" where "top s \<equiv> (if \<not> isempty s then (Some (hd (fst (alist_of s)))) else None)" lemma "\<not> isfull s \<Longrightarrow> top (push v s) = Some v" apply(simp add:top_def isfull_def capacity_def isempty_def size_def push_def) by (metis Abs_bstack bstack_valid capacity_def fst_conv length_Cons less_Suc_eq list.distinct(1) list.sel(1) not_less size_def snd_conv) lemma "\<not> isfull s \<Longrightarrow> pop (push x s) = (Some x, s)" apply(simp 
add:pop_def isfull_def capacity_def isempty_def size_def push_def) by (metis (no_types, lifting) Abs_bstack alist_of alist_of_inverse fst_conv length_Cons less_Suc_eq list.distinct(1) list.sel(1) list.sel(3) mem_Collect_eq not_less prod_eq_iff snd_conv) lemma "isfull s \<Longrightarrow> push x s = s" by (simp add:isfull_def size_def capacity_def push_def) lemma "\<not> isempty s \<Longrightarrow> top s = fst (pop s)" by (simp add: isempty_def top_def pop_def) lemma "\<lbrakk>\<not> isempty s; (v, s0) = pop s \<rbrakk> \<Longrightarrow> push (the v) s0 = s" apply(simp add: pop_def isempty_def isfull_def size_def capacity_def push_def) by (metis alist_of_inverse Abs_bstack bstack_valid capacity_def size_def fst_conv length_Cons less_Suc_eq list.exhaust_sel not_less prod.exhaust_sel snd_conv) end end
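The bounded-stack contract formalised above can be mirrored by a small executable sketch. The Python below is illustrative only (it is not part of the theory and all names are assumptions); it encodes the same behaviour the lemmas state: push is a no-op on a full stack, and pop after push on a non-full stack returns the pushed element and restores the original stack.

```python
from typing import Generic, List, Optional, TypeVar

T = TypeVar("T")

class BStack(Generic[T]):
    """Bounded stack: a list plus a fixed capacity, as in the record/typedef above."""

    def __init__(self, maxsize: int) -> None:
        self.stk: List[T] = []
        self.maxsize = maxsize

    def is_full(self) -> bool:
        return len(self.stk) == self.maxsize

    def is_empty(self) -> bool:
        return not self.stk

    def push(self, v: T) -> None:
        # mirrors "full s ==> push x s = s": pushing onto a full stack does nothing
        if not self.is_full():
            self.stk.append(v)

    def pop(self) -> Optional[T]:
        # mirrors the 'a option version: None on an empty stack
        return self.stk.pop() if self.stk else None

    def top(self) -> Optional[T]:
        return self.stk[-1] if self.stk else None

# quick check of the push/pop round-trip on a non-full stack
s: BStack[int] = BStack(maxsize=2)
s.push(1)
assert s.top() == 1
assert s.pop() == 1 and s.is_empty()
```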
-- exercises in "Type-Driven Development with Idris" -- chapter 12 import Control.Monad.State import Tree -- check that all functions are total %default total -- -- section 12.1 -- update : (stateType -> stateType) -> State stateType () update f = get >>= put . f -- or: -- update f = do s <- get -- put $ f s increase : Nat -> State Nat () increase x = update (+x) countEmpty : Tree a -> State Nat () countEmpty Empty = increase 1 countEmpty (Node left val right) = do countEmpty left countEmpty right incrementFst : State (Nat, Nat) () incrementFst = update (\(x,y) => (x + 1, y)) incrementSnd : State (Nat, Nat) () incrementSnd = update (\(x,y) => (x, y + 1)) countEmptyNode : Tree a -> State (Nat, Nat) () countEmptyNode Empty = incrementFst countEmptyNode (Node left val right) = do countEmptyNode left countEmptyNode right incrementSnd testTree : Tree String testTree = Node (Node (Node Empty "Jim" Empty) "Fred" (Node Empty "Sheila" Empty)) "Alice" (Node Empty "Bob" (Node Empty "Eve" Empty)) -- -- section 12.3 -- see Quiz.idr (with added things to update difficulty) -- see SocialNews.idr --
[STATEMENT] lemma le_oLimitI: "x \<le> f n \<Longrightarrow> x \<le> oLimit f" [PROOF STATE] proof (prove) goal (1 subgoal): 1. x \<le> f n \<Longrightarrow> x \<le> oLimit f [PROOF STEP] by (erule order_trans, rule le_oLimit)
import tactic.trunc_cases import tactic.interactive import tactic.congr import data.quot example (t : trunc ℕ) : punit := begin trunc_cases t, exact (), -- no more goals, because `trunc_cases` used the correct `trunc.rec_on_subsingleton` recursor end example (t : trunc ℕ) : ℕ := begin trunc_cases t, guard_hyp t : ℕ, -- verify that the new hypothesis is still called `t`. exact 0, -- verify that we don't even need to use `simp`, -- because `trunc_cases` has already removed the `eq.rec`. refl, end example {α : Type} [subsingleton α] (I : trunc (has_zero α)) : α := begin trunc_cases I, exact 0, end /-- A mock typeclass, set up so that it's possible to extract data from `trunc (has_unit α)`. -/ class has_unit (α : Type) [has_one α] := (unit : α) (unit_eq_one : unit = 1) def u {α : Type} [has_one α] [has_unit α] : α := has_unit.unit attribute [simp] has_unit.unit_eq_one example {α : Type} [has_one α] (I : trunc (has_unit α)) : α := begin trunc_cases I, exact u, -- Verify that the typeclass is immediately available -- Verify that there's no `eq.rec` in the goal. (do tgt ← tactic.target, eq_rec ← tactic.mk_const `eq.rec, guard $ ¬ eq_rec.occurs tgt), simp [u], end universes v w z /-- Transport through a product is given by individually transporting each component. -/ -- It's a pity that this is no good as a `simp` lemma. -- (It seems the unification problem with `λ a, W a × Z a` is too hard.) -- (One could write a tactic to syntactically analyse `eq.rec` expressions -- and simplify more of them!) lemma eq_rec_prod {α : Sort v} (W : α → Type w) (Z : α → Type z) {a b : α} (p : W a × Z a) (h : a = b) : @eq.rec α a (λ a, W a × Z a) p b h = (@eq.rec α a W p.1 b h, @eq.rec α a Z p.2 b h) := begin cases h, simp only [prod.mk.eta], end -- This time, we make a goal that (quite artificially) depends on the `trunc`. example {α : Type} [has_one α] (I : trunc (has_unit α)) : α × plift (I = I) := begin -- This time `trunc_cases` has no choice but to use `trunc.rec_on`. trunc_cases I, { exact ⟨u, plift.up rfl⟩, }, { -- And so we get an `eq.rec` in the invariance goal. -- Since `simp` can't handle it because of the unification problem, -- for now we have to handle it by hand. convert eq_rec_prod (λ I, α) (λ I, plift (I = I)) _ _, { simp [u], }, { ext, } } end
If $i$ and $j$ are natural numbers less than or equal to $n$, then $enum(i) = enum(j)$ if and only if $i = j$.
[STATEMENT] theorem Atm_BisT: assumes "compatAtm atm" shows "Atm atm \<approx>T Atm atm" [PROOF STATE] proof (prove) goal (1 subgoal): 1. Atm atm \<approx>T Atm atm [PROOF STEP] by (metis assms siso0_Atm siso0_Sbis)
Suppose $S$ is a locally compact set, $C$ is a compact component of $S$, and $U$ is an open set containing $C$. Then there exists a compact open set $K$ such that $C \subseteq K \subseteq U$.
# -*- coding: utf-8 -*- """ DepthDependentTaylorNonLinearDiffuser Component @author: R Glade @author: K Barnhart @author: G Tucker """ import numpy as np from landlab import INACTIVE_LINK, Component class DepthDependentTaylorDiffuser(Component): """ This component implements a depth-dependent Taylor series diffusion rule, combining concepts of Ganti et al. (2012) and Johnstone and Hilley (2014). Hillslope sediment flux uses a Taylor Series expansion of the Andrews- Bucknam formulation of nonlinear hillslope flux derived following following Ganti et al., 2012 with a depth dependent component inspired Johnstone and Hilley (2014). The flux :math:`q_s` is given as: .. math:: q_s = DSH^* ( 1 + (S/S_c)^2 + (S/Sc_)^4 + .. + (S/S_c)^2(n-1) ) (1.0 - exp( H / H^*) where :math:`D` is is the diffusivity, :math:`S` is the slope, :math:`S_c` is the critical slope, :math:`n` is the number of terms, :math:`H` is the soil depth on links, and :math:`H^*` is the soil transport decay depth. The default behavior uses two terms to produce a slope dependence as described by Equation 6 of Ganti et al., (2012). This component will ignore soil thickness located at non-core nodes. Parameters ---------- grid: ModelGrid Landlab ModelGrid object linear_diffusivity: float, optional. Hillslope diffusivity, m**2/yr Default = 1.0 slope_crit: float, optional Critical gradient parameter, m/m Default = 1.0 soil_transport_decay_depth: float, optional characteristic transport soil depth, m Default = 1.0 nterms: int, optional. default = 2 number of terms in the Taylor expansion. Two terms (default) gives the behavior described in Ganti et al. (2012). Examples -------- First lets make a simple example with flat topography. >>> import numpy as np >>> from landlab import RasterModelGrid >>> from landlab.components import ExponentialWeatherer >>> from landlab.components import DepthDependentTaylorDiffuser >>> mg = RasterModelGrid((5, 5)) >>> soilTh = mg.add_zeros('node', 'soil__depth') >>> z = mg.add_zeros('node', 'topographic__elevation') >>> BRz = mg.add_zeros('node', 'bedrock__elevation') >>> expweath = ExponentialWeatherer(mg) >>> DDdiff = DepthDependentTaylorDiffuser(mg) >>> expweath.calc_soil_prod_rate() >>> np.allclose(mg.at_node['soil_production__rate'][mg.core_nodes], 1.) True >>> DDdiff.soilflux(2.) >>> np.allclose(mg.at_node['topographic__elevation'][mg.core_nodes], 0.) True >>> np.allclose(mg.at_node['bedrock__elevation'][mg.core_nodes], -2.) True >>> np.allclose(mg.at_node['soil__depth'][mg.core_nodes], 2.) True Now a more complicated example with a slope. >>> mg = RasterModelGrid((3, 5)) >>> soilTh = mg.add_zeros('node', 'soil__depth') >>> z = mg.add_zeros('node', 'topographic__elevation') >>> BRz = mg.add_zeros('node', 'bedrock__elevation') >>> z += mg.node_x.copy() >>> BRz += mg.node_x / 2. >>> soilTh[:] = z - BRz >>> expweath = ExponentialWeatherer(mg) >>> DDdiff = DepthDependentTaylorDiffuser(mg) >>> expweath.calc_soil_prod_rate() >>> np.allclose( ... mg.at_node['soil_production__rate'][mg.core_nodes], ... np.array([ 0.60653066, 0.36787944, 0.22313016])) True >>> DDdiff.soilflux(0.1) >>> np.allclose( ... mg.at_node['topographic__elevation'][mg.core_nodes], ... np.array([ 1.04773024, 2.02894986, 3.01755898])) True >>> np.allclose(mg.at_node['bedrock__elevation'][mg.core_nodes], ... 
np.array([ 0.43934693, 0.96321206, 1.47768698])) True >>> np.allclose(mg.at_node['soil__depth'], z - BRz) True The DepthDependentTaylorDiffuser makes and moves soil at a rate proportional to slope, this means that there is a characteristic time scale for soil \ transport and an associated stability criteria for the timestep. The maximum characteristic time scale, Demax, is given as a function of the hillslope diffustivity, D, the maximum slope, Smax, and the critical slope Sc. Demax = D ( 1 + ( Smax / Sc )**2 ( Smax / Sc )**4 + .. + ( Smax / Sc )**( 2 * ( n - 1 )) ) The maximum stable time step is given by dtmax = courant_factor * dx * dx / Demax Where the courant factor is a user defined scale (default is 0.2) The DepthDependentTaylorDiffuser has a boolean flag that permits a user to be warned if timesteps are too large for the slopes in the model grid (if_unstable = 'warn') and a boolean flag that turns on dynamic timestepping (dynamic_dt = False). >>> DDdiff.soilflux(2., if_unstable='warn') Topographic slopes are high enough such that the Courant condition is exceeded AND you have not selected dynamic timestepping with dynamic_dt=True. This may lead to infinite and/or nan values for slope, elevation, and soil depth. Consider using a smaller time step or dynamic timestepping. The Courant condition recommends a timestep of 0.0953407607307 or smaller. Alternatively you can specify if_unstable='raise', and a Runtime Error will be raised if this condition is not met. Next, lets do an example with dynamic timestepping. >>> mg = RasterModelGrid((3, 5)) >>> soilTh = mg.add_zeros('node', 'soil__depth') >>> z = mg.add_zeros('node', 'topographic__elevation') >>> BRz = mg.add_zeros('node', 'bedrock__elevation') We'll use a steep slope and very little soil. >>> z += mg.node_x.copy()**2 >>> BRz = z.copy() - 1.0 >>> soilTh[:] = z - BRz >>> expweath = ExponentialWeatherer(mg) >>> DDdiff = DepthDependentTaylorDiffuser(mg) >>> expweath.calc_soil_prod_rate() Lets try to move the soil with a large timestep. Without dynamic time steps, this gives a warning that we've exceeded the dynamic timestep size and should use a smaller timestep. We could either use the smaller timestep, or specify that we want to use a dynamic timestep. >>> DDdiff.soilflux(10, if_unstable='warn', dynamic_dt=False) Topographic slopes are high enough such that the Courant condition is exceeded AND you have not selected dynamic timestepping with dynamic_dt=True. This may lead to infinite and/or nan values for slope, elevation, and soil depth. Consider using a smaller time step or dynamic timestepping. The Courant condition recommends a timestep of 0.004 or smaller. Now, we'll re-build the grid and do the same example with dynamic timesteps. >>> mg = RasterModelGrid((3, 5)) >>> soilTh = mg.add_zeros('node', 'soil__depth') >>> z = mg.add_zeros('node', 'topographic__elevation') >>> BRz = mg.add_zeros('node', 'bedrock__elevation') >>> z += mg.node_x.copy()**2 >>> BRz = z.copy() - 1.0 >>> soilTh[:] = z - BRz >>> expweath = ExponentialWeatherer(mg) >>> DDdiff = DepthDependentTaylorDiffuser(mg) >>> expweath.calc_soil_prod_rate() >>> DDdiff.soilflux(10, if_unstable='warn', dynamic_dt=True) >>> np.any(np.isnan(z)) False Now, we'll test that changing the transport decay depth behaves as expected. 
>>> mg = RasterModelGrid((3, 5)) >>> soilTh = mg.add_zeros('node', 'soil__depth') >>> z = mg.add_zeros('node', 'topographic__elevation') >>> BRz = mg.add_zeros('node', 'bedrock__elevation') >>> z += mg.node_x.copy()**0.5 >>> BRz = z.copy() - 1.0 >>> soilTh[:] = z - BRz >>> DDdiff = DepthDependentTaylorDiffuser(mg, soil_transport_decay_depth = 0.1) >>> DDdiff.soilflux(1) >>> soil_decay_depth_point1 = mg.at_node['topographic__elevation'][mg.core_nodes] >>> z[:] = 0 >>> z += mg.node_x.copy()**0.5 >>> BRz = z.copy() - 1.0 >>> soilTh[:] = z - BRz >>> DDdiff = DepthDependentTaylorDiffuser(mg, soil_transport_decay_depth = 1.0) >>> DDdiff.soilflux(1) >>> soil_decay_depth_1 = mg.at_node['topographic__elevation'][mg.core_nodes] >>> np.greater(soil_decay_depth_1[1], soil_decay_depth_point1[1]) False """ _name = "DepthDependentTaylorDiffuser" _input_var_names = ( "topographic__elevation", "soil__depth", "soil_production__rate", ) _output_var_names = ( "soil__flux", "topographic__slope", "topographic__elevation", "bedrock__elevation", "soil__depth", ) _var_units = { "topographic__elevation": "m", "topographic__slope": "m/m", "soil__depth": "m", "soil__flux": "m^2/yr", "soil_production__rate": "m/yr", "bedrock__elevation": "m", } _var_mapping = { "topographic__elevation": "node", "topographic__slope": "link", "soil__depth": "node", "soil__flux": "link", "soil_production__rate": "node", "bedrock__elevation": "node", } _var_doc = { "topographic__elevation": "elevation of the ground surface", "topographic__slope": "gradient of the ground surface", "soil__depth": "depth of soil/weather bedrock", "soil__flux": "flux of soil in direction of link", "soil_production__rate": "rate of soil production at nodes", "bedrock__elevation": "elevation of the bedrock surface", } def __init__( self, grid, linear_diffusivity=1.0, slope_crit=1.0, soil_transport_decay_depth=1.0, nterms=2, ): """Initialize the DepthDependentTaylorDiffuser. Parameters ---------- grid: ModelGrid Landlab ModelGrid object linear_diffusivity: float, optional. Hillslope diffusivity, m**2/yr Default = 1.0 slope_crit: float, optional Critical gradient parameter, m/m Default = 1.0 soil_transport_decay_depth: float, optional characteristic transport soil depth, m Default = 1.0 nterms: int, optional. default = 2 number of terms in the Taylor expansion. Two terms (default) gives the behavior described in Ganti et al. (2012). 
""" # Store grid and parameters self._grid = grid self.K = linear_diffusivity self.soil_transport_decay_depth = soil_transport_decay_depth self.slope_crit = slope_crit self.nterms = nterms # create fields # elevation if "topographic__elevation" in self.grid.at_node: self.elev = self.grid.at_node["topographic__elevation"] else: self.elev = self.grid.add_zeros("node", "topographic__elevation") # slope if "topographic__slope" in self.grid.at_link: self.slope = self.grid.at_link["topographic__slope"] else: self.slope = self.grid.add_zeros("link", "topographic__slope") # soil depth if "soil__depth" in self.grid.at_node: self.depth = self.grid.at_node["soil__depth"] else: self.depth = self.grid.add_zeros("node", "soil__depth") # soil flux if "soil__flux" in self.grid.at_link: self.flux = self.grid.at_link["soil__flux"] else: self.flux = self.grid.add_zeros("link", "soil__flux") # weathering rate if "soil_production__rate" in self.grid.at_node: self.soil_prod_rate = self.grid.at_node["soil_production__rate"] else: self.soil_prod_rate = self.grid.add_zeros("node", "soil_production__rate") # bedrock elevation if "bedrock__elevation" in self.grid.at_node: self.bedrock = self.grid.at_node["bedrock__elevation"] else: self.bedrock = self.grid.add_zeros("node", "bedrock__elevation") def soilflux(self, dt, dynamic_dt=False, if_unstable="pass", courant_factor=0.2): """Calculate soil flux for a time period 'dt'. Parameters ---------- dt: float (time) The imposed timestep. dynamic_dt : boolean (optional, default is False) Keyword argument to turn on or off dynamic time-stepping if_unstable : string (optional, default is "pass") Keyword argument to determine how potential instability due to slopes that are too high is handled. Options are "pass", "warn", and "raise". courant_factor : float (optional, default = 0.2) Factor to identify stable time-step duration when using dynamic timestepping. """ # establish time left as all of dt time_left = dt # begin while loop for time left while time_left > 0.0: # calculate soil__depth self.grid.at_node["soil__depth"][:] = ( self.grid.at_node["topographic__elevation"] - self.grid.at_node["bedrock__elevation"] ) # Calculate soil depth at links. self.H_link = self.grid.map_value_at_max_node_to_link( "topographic__elevation", "soil__depth" ) # Calculate gradients self.slope = self.grid.calc_grad_at_link(self.elev) self.slope[self.grid.status_at_link == INACTIVE_LINK] = 0.0 # Test for time stepping courant condition # Test for time stepping courant condition courant_slope_term = 0.0 courant_s_over_scrit = self.slope.max() / self.slope_crit for i in range(0, 2 * self.nterms, 2): courant_slope_term += courant_s_over_scrit ** i if np.any(np.isinf(courant_slope_term)): message = ( "Soil flux term is infinite in Courant condition " "calculation. This is likely due to " "using too many terms in the Taylor expansion." ) raise RuntimeError(message) # Calculate De Max De_max = self.K * (courant_slope_term) # Calculate longest stable timestep self.dt_max = courant_factor * (self.grid.dx ** 2) / De_max # Test for the Courant condition and print warning if user intended # for it to be printed. if (self.dt_max < dt) and (not dynamic_dt) and (if_unstable != "pass"): message = ( "Topographic slopes are high enough such that the " "Courant condition is exceeded AND you have not " "selected dynamic timestepping with dynamic_dt=True. " "This may lead to infinite and/or nan values for " "slope, elevation, and soil depth. Consider using a " "smaller time step or dynamic timestepping. 
The " "Courant condition recommends a timestep of " "" + str(self.dt_max) + " or smaller." ) if if_unstable == "raise": raise RuntimeError(message) if if_unstable == "warn": print(message) # if dynamic dt is selected, use it, otherwise, use the entire time if dynamic_dt: self.sub_dt = np.min([dt, self.dt_max]) time_left -= self.sub_dt else: self.sub_dt = dt time_left = 0 # update sed flux, topography, soil, and bedrock based on the # current self.sub_dt self._update_flux_topography_soil_and_bedrock() def _update_flux_topography_soil_and_bedrock(self): """Calculate soil flux and update topography. """ # Calculate flux slope_term = 0.0 s_over_scrit = self.slope / self.slope_crit for i in range(0, 2 * self.nterms, 2): slope_term += s_over_scrit ** i if np.any(np.isinf(slope_term)): message = ( "Soil flux term is infinite. This is likely due to " "using too many terms in the Taylor expansion." ) raise RuntimeError(message) self.flux[:] = -( (self.K * self.slope * self.soil_transport_decay_depth) * (slope_term) * (1.0 - np.exp(-self.H_link / self.soil_transport_decay_depth)) ) # Calculate flux divergence dqdx = self.grid.calc_flux_div_at_node(self.flux) # Calculate change in soil depth dhdt = self.soil_prod_rate - dqdx # Calculate soil depth at nodes self.depth[self.grid.core_nodes] += dhdt[self.grid.core_nodes] * self.sub_dt # prevent negative soil thickness self.depth[self.depth < 0.0] = 0.0 # Calculate bedrock elevation self.bedrock[self.grid.core_nodes] -= ( self.soil_prod_rate[self.grid.core_nodes] * self.sub_dt ) # Update topography self.elev[self.grid.core_nodes] = ( self.depth[self.grid.core_nodes] + self.bedrock[self.grid.core_nodes] ) def run_one_step(self, dt, **kwds): """ Parameters ---------- dt: float (time) The imposed timestep. """ self.soilflux(dt, **kwds)
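As an illustration of the Courant rule described in the docstring and implemented in soilflux above, here is a hedged standalone sketch (plain NumPy, no landlab import; the helper name and default arguments are ours) of the stable-timestep computation De_max = D * (1 + (S/Sc)**2 + ... + (S/Sc)**(2*(nterms-1))) followed by dt_max = courant_factor * dx**2 / De_max.

import numpy as np

def stable_dt(slope, D=1.0, Sc=1.0, nterms=2, dx=1.0, courant_factor=0.2):
    """Largest stable timestep for an array of link slopes (illustrative helper)."""
    s_over_sc = np.max(np.abs(slope)) / Sc
    # sum of the Taylor terms (S/Sc)**i for i = 0, 2, ..., 2*(nterms - 1)
    De_max = D * sum(s_over_sc ** i for i in range(0, 2 * nterms, 2))
    return courant_factor * dx * dx / De_max

# the steepest slope in the dynamic-timestep docstring example is 7, giving 0.004
print(stable_dt(np.array([1.0, 3.0, 5.0, 7.0])))  # 0.004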
Require Import Crypto.Arithmetic.PrimeFieldTheorems. Require Import Crypto.Specific.solinas64_2e512m569_11limbs.Synthesis. (* TODO : change this to field once field isomorphism happens *) Definition freeze : { freeze : feBW_tight -> feBW_limbwidths | forall a, phiBW_limbwidths (freeze a) = phiBW_tight a }. Proof. Set Ltac Profiling. Time synthesize_freeze (). Show Ltac Profile. Time Defined. Print Assumptions freeze.
!==================================================== SUBROUTINE splper(n,h,y,bi,ci,di) ! Makes a cubic spline of periodic function y(x) ! ! Input: n number of values in y ! h step size in x (aequidistant) ! y(n) y-values ! Output: bi(n),ci(n),di(n) Spline parameters USE neo_precision IMPLICIT NONE INTEGER, INTENT(in) :: n REAL(kind=dp), INTENT(in) :: h REAL(kind=dp), DIMENSION(n), INTENT(in) :: y REAL(kind=dp), DIMENSION(n), INTENT(out) :: bi, ci, di REAL(kind=dp) :: psi, ss REAL(kind=dp), DIMENSION(:), ALLOCATABLE :: bmx, yl REAL(kind=dp), DIMENSION(:), ALLOCATABLE :: amx1, amx2, amx3 INTEGER :: nmx, n1, n2, i, i1 ALLOCATE ( bmx(n), yl(n), amx1(n), amx2(n), amx3(n) ) bmx(1) = 1.e30_dp nmx=n-1 n1=nmx-1 n2=nmx-2 psi=3.0_dp/h/h CALL spfper(n,amx1,amx2,amx3) bmx(nmx) = (y(nmx+1)-2*y(nmx)+y(nmx-1))*psi bmx(1) =(y(2)-y(1)-y(nmx+1)+y(nmx))*psi DO i = 3,nmx bmx(i-1) = (y(i)-2*y(i-1)+y(i-2))*psi END DO yl(1) = bmx(1)/amx1(1) DO i = 2,n1 i1 = i-1 yl(i) = (bmx(i)-yl(i1)*amx2(i1))/amx1(i) END DO ss = 0 DO i = 1,n1 ss = ss+yl(i)*amx3(i) END DO yl(nmx) = (bmx(nmx)-ss)/amx1(nmx) bmx(nmx) = yl(nmx)/amx1(nmx) bmx(n1) = (yl(n1)-amx2(n1)*bmx(nmx))/amx1(n1) DO i = n2,1,-1 bmx(i) = (yl(i)-amx3(i)*bmx(nmx)-amx2(i)*bmx(i+1))/amx1(i) END DO DO i = 1,nmx ci(i) = bmx(i) END DO DO i = 1,n1 bi(i) = (y(i+1)-y(i))/h-h*(ci(i+1)+2*ci(i))/3 di(i) = (ci(i+1)-ci(i))/h/3 END DO bi(nmx) = (y(n)-y(n-1))/h-h*(ci(1)+2*ci(nmx))/3 di(nmx) = (ci(1)-ci(nmx))/h/3 ! ! Fix of problems at upper periodicity boundary ! bi(n) = bi(1) ci(n) = ci(1) di(n) = di(1) DEALLOCATE ( bmx, yl, amx1, amx2, amx3 ) RETURN END SUBROUTINE splper
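For readers who want to cross-check splper against an existing library, here is a hedged Python sketch using SciPy's periodic cubic spline, which solves the same interpolation problem on an equidistant grid (an analogue for comparison, not a translation of the Fortran routine).

import numpy as np
from scipy.interpolate import CubicSpline

n = 33
h = 2 * np.pi / (n - 1)                  # equidistant step covering one full period
x = h * np.arange(n)
y = np.sin(x)
y[-1] = y[0]                             # periodic data must match exactly at both ends
spl = CubicSpline(x, y, bc_type="periodic")
print(np.allclose(spl(1.0), np.sin(1.0), atol=1e-4))  # True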
lemma eventually_at_right_to_0: "eventually P (at_right a) \<longleftrightarrow> eventually (\<lambda>x. P (x + a)) (at_right 0)" for a :: real
Formal statement is: lemma incseq_def: "incseq X \<longleftrightarrow> (\<forall>m. \<forall>n\<ge>m. X n \<ge> X m)" Informal statement is: A sequence $X$ is increasing if and only if for all $m$ and $n \geq m$, we have $X_n \geq X_m$.
% !TEX root = ../main_lecture_notes.tex \chapter{Efficiency of blockchain systems}\label{chap:efficiency} \section{A queueing model with bulk service}\label{sec:queue} Blockchain users send transactions to the network of validators according to some rate $\lambda$. These transactions enter a queue of pending transactions. The validators select a subset of $b$ transactions to be recorded in the next block. The block is built by a leader elected via a consensus protocol. The block is then communicated to the other validators and the $b$ transactions exit the queue. We assume that building a block takes some exponentially distributed time with mean $\mu$. What we just described is exactly a single server with bulk service queueing system, described for instance in \citet{Bailey1954} and \citet{Chaudhry1981} with exponential arrival times, that processes $k$ items at a time, with an exponential service time. This a $M/M^b/1$ queue in Kendall's notation summarized in \cref{fig:blockchain_queue}. \begin{figure}[!ht] \begin{center} \begin{tikzpicture}[-, >=stealth', auto, semithick, node distance=1cm] \tikzstyle{phantom block}=[rectangle, fill=white,draw=white, thick,text=black,scale=2] \tikzstyle{block}=[rectangle, fill=white,draw=black,thick,text=black,scale=4] \tikzstyle{Intensity}=[circle, fill=white,draw=blue,very thick, text=black,scale=1.2] \tikzstyle{transaction pending}=[circle, fill=white,draw=blue,very thick, text=black,scale=1] \tikzstyle{transaction considered}=[circle, fill=blue, text=black, scale=1] \node[Intensity] (1){$\lambda$}; \node[phantom block] (2)[right of=1] {}; \node[transaction pending] (3)[right of=2] {}; \node[transaction pending] (4)[above of=3] {}; \node[transaction pending] (5)[above of=4] {}; \node[transaction pending] (6)[above of=5] {}; \node[transaction pending] (7)[below of=3] {}; \node[transaction pending] (8)[below of=7] {}; \path (1) edge[->,bend left] node{} (4) edge[->, bend right] node{} (7); \node[Intensity] (14)[above of=6]{$\mu$}; % \path % (1) edge[->,bend left] node{$\text{Exp}(\lambda)$} (4) % edge[->, bend right] node{} (7); \node[transaction considered] (3)[right of=2] {}; \node[transaction considered] (4)[above of=3] {}; \node[transaction considered] (5)[above of=4] {}; \node[transaction considered] (6)[above of=5] {}; \node[phantom block] (10)[right of=3] {}; \node[block] (11)[right of=10] {}; \path (6) edge[->] node{} (11) (5) edge[->] node{} (11) (4) edge[->] node{} (11) (3) edge[->] node{} (11) ; \node[Intensity] (12)[above of=11] {$\mu$}; \node[phantom block] (13)[above of=12] {}; \path (11) edge[-] node{} (12); \path (12) edge[->] node{} (13); \end{tikzpicture} \end{center} \caption{Blockchain queue} \label{fig:blockchain_queue} \end{figure} One specificity of this queue is that the server is always busy. Our goal is to assess the efficiency which is characterized by \begin{itemize} \item Throughputs: Number of transaction being processed per time unit \item Latency: Average transaction confirmation time \end{itemize} This can be done by studying the distribution of the number of pending transaction in the queue over the long run. A stationary state can only be reached if \begin{equation}\label{eq:stationarity_cond} \mu \cdot b > \lambda. \end{equation} Denote by $N^q$ the length of the queue upon stationarity, The following result holds. 
\begin{theo} Assume that \eqref{eq:stationarity_cond} holds; then $N^q$ is geometrically distributed: $$ \mathbb{P}(N^q = n) = (1-p)\cdot p^n,\text{ } n\geq0, $$ where $p = 1/z^\ast$ and $z^\ast$ is the only root of $$ -\frac{\lambda}{\mu}z^{b+1}+z^b\left(\frac{\lambda}{\mu}+1\right) - 1, $$ such that $|z^\ast|>1$. \end{theo} \begin{proof} Let $N^q_t$ be the number of transactions in the queue at time $t\geq0$ and $X_t$ the time elapsed since the last block was found. Further define \[ P_{n}(x,t)\text{d}x =\mathbb{P}[N_t^q = n, X_t \in(x, x + \text{d}x)]. \] If $\lambda < \mu\cdot b$ holds, then the process admits a limiting distribution given by \[ \underset{t\rightarrow\infty}{\lim}P_{n}(x,t) = P_{n}(x). \] Adding the variable $X_t$ is a known trick going back to \citet{Cox1955}; it allows us to make the process $(N_t^q)_{t\geq0}$ Markovian. We aim at finding the distribution of the queue length upon stationarity \begin{equation}\label{eq:alpha_n} \mathbb{P}(N^q=n):=\alpha_n =\int_{0}^\infty P_{n}(x)\text{d}x, \text{ }n\geq0. \end{equation} Consider the possible transitions over a small time lapse $h$ during which no block is being generated. Over this time interval, either \begin{itemize} \item no transaction arrives, or \item one transaction arrives. \end{itemize} We have for $n\geq1$ \[ P_{n}(x+h) = e^{-\mu h}\left[e^{-\lambda h}P_{n}(x)+\lambda h e^{-\lambda h}P_{n-1}(x)\right]. \] Differentiating with respect to $h$ and letting $h\rightarrow0$ leads to \begin{equation}\label{eq:diff_eq_n_geq_1} P_{n}'(x) = -(\lambda+\mu)P_{n}(x)+\lambda P_{n-1}(x),\text{ }n \geq1. \end{equation} Similarly for $n = 0$, we have \begin{equation}\label{eq:diff_eq_n_eq_0} P_{0}'(x) = -(\lambda+\mu)P_{0}(x). \end{equation} We denote by $$ \xi(x)\text{d}x =\mathbb{P}(x\leq X< x+\text{d}x|X\geq x)= \mu\text{d}x, $$ the hazard function of the block arrival time (constant as it is exponentially distributed). The system of differential equations \eqref{eq:diff_eq_n_geq_1}, \eqref{eq:diff_eq_n_eq_0} admits boundary conditions at $x = 0$ with \begin{equation}\label{eq:boundary_cond_1} \begin{cases} P_{n}(0) = \int_0^{+\infty} P_{n+b}(x)\xi(x)\text{d}x = \mu\alpha_{n+b},&n \geq1,\\ P_{0}(0) = \mu\sum_{n=0}^{b}\alpha_n,&n = 0,\ldots,b\\ \end{cases} \end{equation} Define the probability generating function of $N^q$ at some elapsed service time $x\geq 0$ as $$ G(z;x) = \sum_{n=0}^\infty P_{n}(x)z^n. $$ By differentiating with respect to $x$, we get (using \eqref{eq:diff_eq_n_geq_1} and \eqref{eq:diff_eq_n_eq_0}) $$ \frac{\partial}{\partial x}G(z;x) = -\left[\lambda(1-z)+\mu\right]G(z;x), $$ and therefore $$ G(z;x) = G(z;0)\exp\left\{-\left[\lambda(1-z)+\mu\right]x\right\}. $$ We get the probability generating function of $N^q$ by integrating over $x$ as \begin{equation}\label{eq:G_z_solve_ODE} G(z) = \frac{G(z;0)}{\lambda(1-z)+\mu}.
\end{equation} Using the boundary conditions \eqref{eq:boundary_cond_1}, we write \begin{eqnarray} G(z;0) &= &\sum_{n = 0}^\infty P_{n}(0)z^n \nonumber\\ &= &P_{0}(0)+\sum_{n=1}^{+\infty}P_{n}(0)z^n\nonumber\\ &=& \mu\sum_{n = 0}^{b}\alpha_n + \mu\sum_{n=1}^{+\infty}\alpha_{n+b} z^n\nonumber\\ &=& \mu\sum_{n = 0}^{b}\alpha_n + \mu z^{-b}\left[G(z)-\sum_{n = 0}^{b}\alpha_n z^n\right]\label{eq:G_z_0} \end{eqnarray} Replacing the left hand side of \eqref{eq:G_z_0} by \eqref{eq:G_z_solve_ODE}, multiplying both sides by $z^b$ and rearranging yields \begin{equation}\label{eq:G_z_as_rational_function} \frac{G(z)}{M(z)}[z^b - M(z)] =\sum_{n=0}^{b-1}\alpha_n(z^b - z^n), \end{equation} where $M(z) = \mu/(\lambda(1-z)+\mu)$. Using Rouch\'e's theorem, we find that both sides of the equation share $b$ zeros inside the circle $\mathcal{C} = \{z\in\mathbb{C}\text{ ; }|z| <1+\epsilon\}$ for some $\epsilon>0$. \begin{lemma} Let $\mathcal{C}\subset \mathbb{C}$ and let $f$ and $g$ be two holomorphic functions on $\mathcal{C}$. Let $\partial\mathcal{C}$ be the boundary contour of $\mathcal{C}$. If $$ |f(z)-g(z)|<|g(z)|\text{, }\forall z\in\partial\mathcal{C}, $$ then $Z_f-P_f = Z_g-P_g$, where $Z_f$, $P_f$, $Z_g$, and $P_g$ are the numbers of zeros and poles of $f$ and $g$ inside $\mathcal{C}$, respectively. \end{lemma} We have $\partial\mathcal{C} =\{z\in\mathbb{C}\text{ ; }|z| =1+\epsilon\}$. The left hand side can be rewritten as $$ G(z)\left[-\frac{\lambda}{\mu}z^{b+1} + \left(1 + \frac{\lambda}{\mu}\right)z^b -1\right]. $$ Define $f(z) = -\frac{\lambda}{\mu}z^{b+1} + \left(1 + \frac{\lambda}{\mu}\right)z^b -1$ and $g(z)=\left(1 + \frac{\lambda}{\mu}\right)z^b$. We have $$ |f(z) - g(z)| = \left|-\frac{\lambda}{\mu}z^{b+1}-1\right|\leq \frac{\lambda}{\mu}(1+\epsilon)^{b+1}+1< \left(1 + \frac{\lambda}{\mu}\right)(1+\epsilon)^b= |g(z)|, $$ for $\epsilon>0$ small enough; the strict inequality uses the stationarity condition \eqref{eq:stationarity_cond}. Regarding the right hand side, define $f(z) = \sum_{n=0}^{b-1}\alpha_n(z^b - z^n)$ and $g(z) =\sum_{n=0}^{b-1}\alpha_nz^b $. We have $$ |f(z) - g(z)| = \left|\sum_{n=0}^{b-1}\alpha_n z^n\right| \leq \sum_{n=0}^{b-1}\alpha_n (1+\epsilon)^n<(1+\epsilon)^b\sum_{n=0}^{b-1}\alpha_n = |g(z)|. $$ We deduce from Rouch\'e's theorem that both sides share the same $b$ roots inside $\mathcal{C}$. Note that one of them is $1$, and we denote by $z_k$, $k = 1,\ldots, b-1$, the remaining $b-1$ roots. Given the polynomial form of the right hand side of \eqref{eq:G_z_as_rational_function}, the fundamental theorem of algebra indicates that its number of zeros is exactly $b$. Given the left hand side $$ G(z)\left[-\frac{\lambda}{\mu}z^{b+1} + \left(1 + \frac{\lambda}{\mu}\right)z^b -1\right], $$ whose bracketed polynomial has degree $b+1$, we deduce that there is exactly one zero outside $\mathcal{C}$; one can further show that it is a real number $z^\ast$. Dividing both sides of \eqref{eq:G_z_as_rational_function} by $(z-1)\prod_{k =1}^{b-1}(z-z_k)$ and using $G(1)=1$ yields $$ G(z) = \frac{1-z^\ast}{z-z^{\ast}}. $$ $N^q$ is then a geometric random variable with parameter $p = \frac{1}{z^\ast}.$ \end{proof} The result above can be found in \citet{Bailey1954}. The application to blockchain, under more general assumptions on the block discovery time, is given in \citet{Kawase2017}. \section{Latency and throughputs computation}\label{sec:latency_throughputs} The practical computation of latency and throughput then follows from a standard result in queueing theory known as Little's law, see \citet{Little1961}.
\begin{theo} Consider a stationary queueing system and denote by \begin{itemize} \item $1/\lambda$ the mean inter-arrival time of units, \item $L$ the mean number of units in the system, \item $W$ the mean time spent by units in the system. \end{itemize} We have $$ L = \lambda \cdot W. $$ \end{theo} \begin{itemize} \item Latency is the mean confirmation time of a transaction $$ \text{Latency} = W + \frac{1}{\mu} = \frac{\mathbb{E}(N^q)}{\lambda} + \frac{1}{\mu} = \frac{p}{(1-p)\lambda} + \frac{1}{\mu}. $$ \item Throughput is the number of transactions confirmed per time unit $$ \text{Throughput} = \mu\mathbb{E}(N^q\mathbb{I}_{N^q\leq b}+b\mathbb{I}_{N^q> b}) = \mu\left(\sum_{n = 0}^{b}n(1-p)p^n + b\,p^{b+1}\right). $$ \end{itemize} Avenues for future research include \begin{itemize} \item the inclusion of priority considerations, accounting for transaction fees, see \citet{Kawase2020}, \item refining the hypotheses of the queueing system to better fit the different consensus protocols, see \citet{Li2018} and \citet{Li2019}. \end{itemize} \newpage
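To make the formulas of this chapter concrete, here is a hedged numerical sketch (the parameters lambda, mu and b are illustrative values chosen by us, not taken from the notes): it locates the root $z^\ast$ with $|z^\ast|>1$, then evaluates the latency and throughput expressions above.

import numpy as np

lam, mu, b = 5.0, 1.0, 8             # arrival rate, block rate, block size
assert mu * b > lam                  # stationarity condition

# coefficients of -(lam/mu) z^(b+1) + (1 + lam/mu) z^b - 1, highest degree first
coeffs = np.zeros(b + 2)
coeffs[0] = -lam / mu
coeffs[1] = 1 + lam / mu
coeffs[-1] = -1.0
roots = np.roots(coeffs)
z_star = max(r.real for r in roots if abs(r) > 1 + 1e-9 and abs(r.imag) < 1e-9)
p = 1 / z_star

mean_queue = p / (1 - p)                                         # E(N^q) for a geometric law
latency = mean_queue / lam + 1 / mu                              # Little's law plus one block time
n = np.arange(b + 1)
throughput = mu * (np.sum(n * (1 - p) * p ** n) + b * p ** (b + 1))
print(z_star, latency, throughput)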
Formal statement is: lemma dist_triangle_lt: "dist x z + dist y z < e \<Longrightarrow> dist x y < e" Informal statement is: If the distance between $x$ and $z$ plus the distance between $y$ and $z$ is less than $e$, then the distance between $x$ and $y$ is less than $e$.
function [dat] = read_yokogawa_data_new(filename, hdr, begsample, endsample, chanindx) % READ_YOKAGAWA_DATA_NEW reads continuous, epoched or averaged MEG data % that has been generated by the Yokogawa MEG system and software % and allows that data to be used in combination with FieldTrip. % % Use as % [dat] = read_yokogawa_data_new(filename, hdr, begsample, endsample, chanindx) % % This is a wrapper function around the function % getYkgwData % % See also READ_YOKOGAWA_HEADER_NEW, READ_YOKOGAWA_EVENT % Copyright (C) 2005, Robert Oostenveld and 2010, Tilmann Sander-Thoemmes % % This file is part of FieldTrip, see http://www.fieldtriptoolbox.org % for the documentation and details. % % FieldTrip is free software: you can redistribute it and/or modify % it under the terms of the GNU General Public License as published by % the Free Software Foundation, either version 3 of the License, or % (at your option) any later version. % % FieldTrip is distributed in the hope that it will be useful, % but WITHOUT ANY WARRANTY; without even the implied warranty of % MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the % GNU General Public License for more details. % % You should have received a copy of the GNU General Public License % along with FieldTrip. If not, see <http://www.gnu.org/licenses/>. % % $Id$ if ~ft_hastoolbox('yokogawa_meg_reader') ft_error('cannot determine whether Yokogawa toolbox is present'); end % hdr = read_yokogawa_header(filename); hdr = hdr.orig; % use the original Yokogawa header, not the FieldTrip header % default is to select all channels if nargin<5 chanindx = 1:hdr.channel_count; end handles = definehandles; switch hdr.acq_type case handles.AcqTypeEvokedAve % dat is returned as double start_sample = begsample - 1; % samples start at 0 sample_length = endsample - begsample + 1; epoch_count = 1; start_epoch = 0; dat = getYkgwData(filename, start_sample, sample_length); case handles.AcqTypeContinuousRaw % dat is returned as double start_sample = begsample - 1; % samples start at 0 sample_length = endsample - begsample + 1; epoch_count = 1; start_epoch = 0; dat = getYkgwData(filename, start_sample, sample_length); case handles.AcqTypeEvokedRaw % dat is returned as double begtrial = ceil(begsample/hdr.sample_count); endtrial = ceil(endsample/hdr.sample_count); if begtrial<1 ft_error('cannot read before the begin of the file'); elseif endtrial>hdr.actual_epoch_count ft_error('cannot read beyond the end of the file'); end epoch_count = endtrial-begtrial+1; start_epoch = begtrial-1; % read all the necessary trials that contain the desired samples dat = getYkgwData(filename, start_epoch, epoch_count); if size(dat,2)~=epoch_count*hdr.sample_count ft_error('could not read all epochs'); end rawbegsample = begsample - (begtrial-1)*hdr.sample_count; rawendsample = endsample - (begtrial-1)*hdr.sample_count; sample_length = rawendsample - rawbegsample + 1; % select the desired samples from the complete trials dat = dat(:,rawbegsample:rawendsample); otherwise ft_error('unknown data type'); end if size(dat,1)~=hdr.channel_count ft_error('could not read all channels'); elseif size(dat,2)~=(endsample-begsample+1) ft_error('could not read all samples'); end % select only the desired channels dat = dat(chanindx,:); %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % this defines some usefull constants %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% function handles = definehandles handles.output = []; handles.sqd_load_flag = 
false; handles.mri_load_flag = false; handles.NullChannel = 0; handles.MagnetoMeter = 1; handles.AxialGradioMeter = 2; handles.PlannerGradioMeter = 3; handles.RefferenceChannelMark = hex2dec('0100'); handles.RefferenceMagnetoMeter = bitor( handles.RefferenceChannelMark, handles.MagnetoMeter ); handles.RefferenceAxialGradioMeter = bitor( handles.RefferenceChannelMark, handles.AxialGradioMeter ); handles.RefferencePlannerGradioMeter = bitor( handles.RefferenceChannelMark, handles.PlannerGradioMeter ); handles.TriggerChannel = -1; handles.EegChannel = -2; handles.EcgChannel = -3; handles.EtcChannel = -4; handles.NonMegChannelNameLength = 32; handles.DefaultMagnetometerSize = (4.0/1000.0); % Square of 4.0mm in length handles.DefaultAxialGradioMeterSize = (15.5/1000.0); % Circle of 15.5mm in diameter handles.DefaultPlannerGradioMeterSize = (12.0/1000.0); % Square of 12.0mm in length handles.AcqTypeContinuousRaw = 1; handles.AcqTypeEvokedAve = 2; handles.AcqTypeEvokedRaw = 3; handles.sqd = []; handles.sqd.selected_start = []; handles.sqd.selected_end = []; handles.sqd.axialgradiometer_ch_no = []; handles.sqd.axialgradiometer_ch_info = []; handles.sqd.axialgradiometer_data = []; handles.sqd.plannergradiometer_ch_no = []; handles.sqd.plannergradiometer_ch_info = []; handles.sqd.plannergradiometer_data = []; handles.sqd.eegchannel_ch_no = []; handles.sqd.eegchannel_data = []; handles.sqd.nullchannel_ch_no = []; handles.sqd.nullchannel_data = []; handles.sqd.selected_time = []; handles.sqd.sample_rate = []; handles.sqd.sample_count = []; handles.sqd.pretrigger_length = []; handles.sqd.matching_info = []; handles.sqd.source_info = []; handles.sqd.mri_info = []; handles.mri = [];
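The AcqTypeEvokedRaw branch above maps a continuous sample range onto whole epochs before trimming the result; here is a hedged Python sketch of just that index bookkeeping (an illustration with a function name of our own, reading no Yokogawa data).

import math

def epoch_window(begsample, endsample, sample_count):
    """Map a 1-based sample range onto 0-based epochs plus within-block columns."""
    begtrial = math.ceil(begsample / sample_count)
    endtrial = math.ceil(endsample / sample_count)
    start_epoch = begtrial - 1
    epoch_count = endtrial - begtrial + 1
    rawbegsample = begsample - (begtrial - 1) * sample_count
    rawendsample = endsample - (begtrial - 1) * sample_count
    return start_epoch, epoch_count, rawbegsample, rawendsample

# samples 251..520 with 100-sample epochs: read epochs 3..6, then keep columns 51..320
print(epoch_window(251, 520, 100))   # (2, 4, 51, 320)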
/- Copyright (c) 2020 Chris Hughes. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Chris Hughes -/ import algebra.hom.aut import logic.function.basic import group_theory.subgroup.basic /-! # Semidirect product This file defines semidirect products of groups, and the canonical maps in and out of the semidirect product. The semidirect product of `N` and `G` given a hom `φ` from `G` to the automorphism group of `N` is the product of sets with the group `⟨n₁, g₁⟩ * ⟨n₂, g₂⟩ = ⟨n₁ * φ g₁ n₂, g₁ * g₂⟩` ## Key definitions There are two homs into the semidirect product `inl : N →* N ⋊[φ] G` and `inr : G →* N ⋊[φ] G`, and `lift` can be used to define maps `N ⋊[φ] G →* H` out of the semidirect product given maps `f₁ : N →* H` and `f₂ : G →* H` that satisfy the condition `∀ n g, f₁ (φ g n) = f₂ g * f₁ n * f₂ g⁻¹` ## Notation This file introduces the global notation `N ⋊[φ] G` for `semidirect_product N G φ` ## Tags group, semidirect product -/ variables (N : Type*) (G : Type*) {H : Type*} [group N] [group G] [group H] /-- The semidirect product of groups `N` and `G`, given a map `φ` from `G` to the automorphism group of `N`. It the product of sets with the group operation `⟨n₁, g₁⟩ * ⟨n₂, g₂⟩ = ⟨n₁ * φ g₁ n₂, g₁ * g₂⟩` -/ @[ext, derive decidable_eq] structure semidirect_product (φ : G →* mul_aut N) := (left : N) (right : G) attribute [pp_using_anonymous_constructor] semidirect_product notation N` ⋊[`:35 φ:35`] `:0 G :35 := semidirect_product N G φ namespace semidirect_product variables {N G} {φ : G →* mul_aut N} private def one_aux : N ⋊[φ] G := ⟨1, 1⟩ private def mul_aux (a b : N ⋊[φ] G) : N ⋊[φ] G := ⟨a.1 * φ a.2 b.1, a.right * b.right⟩ private def inv_aux (a : N ⋊[φ] G) : N ⋊[φ] G := let i := a.2⁻¹ in ⟨φ i a.1⁻¹, i⟩ private lemma mul_assoc_aux (a b c : N ⋊[φ] G) : mul_aux (mul_aux a b) c = mul_aux a (mul_aux b c) := by simp [mul_aux, mul_assoc, mul_equiv.map_mul] private lemma mul_one_aux (a : N ⋊[φ] G) : mul_aux a one_aux = a := by cases a; simp [mul_aux, one_aux] private lemma one_mul_aux (a : N ⋊[φ] G) : mul_aux one_aux a = a := by cases a; simp [mul_aux, one_aux] private lemma mul_left_inv_aux (a : N ⋊[φ] G) : mul_aux (inv_aux a) a = one_aux := by simp only [mul_aux, inv_aux, one_aux, ← mul_equiv.map_mul, mul_left_inv]; simp instance : group (N ⋊[φ] G) := { one := one_aux, inv := inv_aux, mul := mul_aux, mul_assoc := mul_assoc_aux, one_mul := one_mul_aux, mul_one := mul_one_aux, mul_left_inv := mul_left_inv_aux } instance : inhabited (N ⋊[φ] G) := ⟨1⟩ @[simp] lemma one_left : (1 : N ⋊[φ] G).left = 1 := rfl @[simp] lemma one_right : (1 : N ⋊[φ] G).right = 1 := rfl @[simp] lemma inv_left (a : N ⋊[φ] G) : (a⁻¹).left = φ a.right⁻¹ a.left⁻¹ := rfl @[simp] lemma inv_right (a : N ⋊[φ] G) : (a⁻¹).right = a.right⁻¹ := rfl @[simp] lemma mul_left (a b : N ⋊[φ] G) : (a * b).left = a.left * φ a.right b.left := rfl @[simp] lemma mul_right (a b : N ⋊[φ] G) : (a * b).right = a.right * b.right := rfl /-- The canonical map `N →* N ⋊[φ] G` sending `n` to `⟨n, 1⟩` -/ def inl : N →* N ⋊[φ] G := { to_fun := λ n, ⟨n, 1⟩, map_one' := rfl, map_mul' := by intros; ext; simp } @[simp] lemma left_inl (n : N) : (inl n : N ⋊[φ] G).left = n := rfl @[simp] lemma right_inl (n : N) : (inl n : N ⋊[φ] G).right = 1 := rfl lemma inl_injective : function.injective (inl : N → N ⋊[φ] G) := function.injective_iff_has_left_inverse.2 ⟨left, left_inl⟩ @[simp] lemma inl_inj {n₁ n₂ : N} : (inl n₁ : N ⋊[φ] G) = inl n₂ ↔ n₁ = n₂ := inl_injective.eq_iff /-- The canonical map `G →* N ⋊[φ] G` 
sending `g` to `⟨1, g⟩` -/ def inr : G →* N ⋊[φ] G := { to_fun := λ g, ⟨1, g⟩, map_one' := rfl, map_mul' := by intros; ext; simp } @[simp] lemma left_inr (g : G) : (inr g : N ⋊[φ] G).left = 1 := rfl @[simp] lemma right_inr (g : G) : (inr g : N ⋊[φ] G).right = g := rfl lemma inr_injective : function.injective (inr : G → N ⋊[φ] G) := function.injective_iff_has_left_inverse.2 ⟨right, right_inr⟩ @[simp] lemma inr_inj {g₁ g₂ : G} : (inr g₁ : N ⋊[φ] G) = inr g₂ ↔ g₁ = g₂ := inr_injective.eq_iff lemma inl_aut (g : G) (n : N) : (inl (φ g n) : N ⋊[φ] G) = inr g * inl n * inr g⁻¹ := by ext; simp lemma inl_aut_inv (g : G) (n : N) : (inl ((φ g)⁻¹ n) : N ⋊[φ] G) = inr g⁻¹ * inl n * inr g := by rw [← monoid_hom.map_inv, inl_aut, inv_inv] @[simp] @[simp] lemma inl_left_mul_inr_right (x : N ⋊[φ] G) : inl x.left * inr x.right = x := by ext; simp /-- The canonical projection map `N ⋊[φ] G →* G`, as a group hom. -/ def right_hom : N ⋊[φ] G →* G := { to_fun := semidirect_product.right, map_one' := rfl, map_mul' := λ _ _, rfl } @[simp] lemma right_hom_eq_right : (right_hom : N ⋊[φ] G → G) = right := rfl @[simp] lemma right_hom_comp_inl : (right_hom : N ⋊[φ] G →* G).comp inl = 1 := by ext; simp [right_hom] @[simp] lemma right_hom_comp_inr : (right_hom : N ⋊[φ] G →* G).comp inr = monoid_hom.id _ := by ext; simp [right_hom] @[simp] lemma right_hom_inl (n : N) : right_hom (inl n : N ⋊[φ] G) = 1 := by simp [right_hom] @[simp] lemma right_hom_inr (g : G) : right_hom (inr g : N ⋊[φ] G) = g := by simp [right_hom] lemma right_hom_surjective : function.surjective (right_hom : N ⋊[φ] G → G) := function.surjective_iff_has_right_inverse.2 ⟨inr, right_hom_inr⟩ lemma range_inl_eq_ker_right_hom : (inl : N →* N ⋊[φ] G).range = right_hom.ker := le_antisymm (λ _, by simp [monoid_hom.mem_ker, eq_comm] {contextual := tt}) (λ x hx, ⟨x.left, by ext; simp [*, monoid_hom.mem_ker] at *⟩) section lift variables (f₁ : N →* H) (f₂ : G →* H) (h : ∀ g, f₁.comp (φ g).to_monoid_hom = (mul_aut.conj (f₂ g)).to_monoid_hom.comp f₁) /-- Define a group hom `N ⋊[φ] G →* H`, by defining maps `N →* H` and `G →* H` -/ def lift (f₁ : N →* H) (f₂ : G →* H) (h : ∀ g, f₁.comp (φ g).to_monoid_hom = (mul_aut.conj (f₂ g)).to_monoid_hom.comp f₁) : N ⋊[φ] G →* H := { to_fun := λ a, f₁ a.1 * f₂ a.2, map_one' := by simp, map_mul' := λ a b, begin have := λ n g, monoid_hom.ext_iff.1 (h n) g, simp only [mul_aut.conj_apply, monoid_hom.comp_apply, mul_equiv.coe_to_monoid_hom] at this, simp [this, mul_assoc] end } @[simp] lemma lift_inl (n : N) : lift f₁ f₂ h (inl n) = f₁ n := by simp [lift] @[simp] lemma lift_comp_inl : (lift f₁ f₂ h).comp inl = f₁ := by ext; simp @[simp] lemma lift_inr (g : G) : lift f₁ f₂ h (inr g) = f₂ g := by simp [lift] @[simp] lemma lift_comp_inr : (lift f₁ f₂ h).comp inr = f₂ := by ext; simp lemma lift_unique (F : N ⋊[φ] G →* H) : F = lift (F.comp inl) (F.comp inr) (λ _, by ext; simp [inl_aut]) := begin ext, simp only [lift, monoid_hom.comp_apply, monoid_hom.coe_mk], rw [← F.map_mul, inl_left_mul_inr_right], end /-- Two maps out of the semidirect product are equal if they're equal after composition with both `inl` and `inr` -/ lemma hom_ext {f g : (N ⋊[φ] G) →* H} (hl : f.comp inl = g.comp inl) (hr : f.comp inr = g.comp inr) : f = g := by { rw [lift_unique f, lift_unique g], simp only * } end lift section map variables {N₁ : Type*} {G₁ : Type*} [group N₁] [group G₁] {φ₁ : G₁ →* mul_aut N₁} /-- Define a map from `N ⋊[φ] G` to `N₁ ⋊[φ₁] G₁` given maps `N →* N₁` and `G →* G₁` that satisfy a commutativity condition `∀ n g, f₁ (φ g n) = φ₁ (f₂ g) 
(f₁ n)`. -/ def map (f₁ : N →* N₁) (f₂ : G →* G₁) (h : ∀ g : G, f₁.comp (φ g).to_monoid_hom = (φ₁ (f₂ g)).to_monoid_hom.comp f₁) : N ⋊[φ] G →* N₁ ⋊[φ₁] G₁ := { to_fun := λ x, ⟨f₁ x.1, f₂ x.2⟩, map_one' := by simp, map_mul' := λ x y, begin replace h := monoid_hom.ext_iff.1 (h x.right) y.left, ext; simp * at *, end } variables (f₁ : N →* N₁) (f₂ : G →* G₁) (h : ∀ g : G, f₁.comp (φ g).to_monoid_hom = (φ₁ (f₂ g)).to_monoid_hom.comp f₁) @[simp] lemma map_left (g : N ⋊[φ] G) : (map f₁ f₂ h g).left = f₁ g.left := rfl @[simp] lemma map_right (g : N ⋊[φ] G) : (map f₁ f₂ h g).right = f₂ g.right := rfl @[simp] lemma right_hom_comp_map : right_hom.comp (map f₁ f₂ h) = f₂.comp right_hom := rfl @[simp] lemma map_inl (n : N) : map f₁ f₂ h (inl n) = inl (f₁ n) := by simp [map] @[simp] lemma map_comp_inl : (map f₁ f₂ h).comp inl = inl.comp f₁ := by ext; simp @[simp] lemma map_inr (g : G) : map f₁ f₂ h (inr g) = inr (f₂ g) := by simp [map] @[simp] lemma map_comp_inr : (map f₁ f₂ h).comp inr = inr.comp f₂ := by ext; simp [map] end map end semidirect_product
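To see the group law ⟨n₁, g₁⟩ * ⟨n₂, g₂⟩ = ⟨n₁ * φ g₁ n₂, g₁ * g₂⟩ in action outside Lean, here is a hedged Python sketch that instantiates it for N = Z/5 and G = Z/2 with φ acting by negation (the dihedral group of order 10) and checks associativity by brute force; the encoding is ours and is not the mathlib API.

from itertools import product

N_MOD, G_MOD = 5, 2

def phi(g, n):
    """Action of G = Z/2 on N = Z/5: the nontrivial element negates."""
    return n % N_MOD if g % G_MOD == 0 else (-n) % N_MOD

def mul(a, b):
    (n1, g1), (n2, g2) = a, b
    return ((n1 + phi(g1, n2)) % N_MOD, (g1 + g2) % G_MOD)

elems = list(product(range(N_MOD), range(G_MOD)))
print(all(mul(mul(a, b), c) == mul(a, mul(b, c))
          for a in elems for b in elems for c in elems))   # True: the law is associative
print(mul((1, 1), (1, 1)))                                 # (0, 0): elements with g = 1 square to the identity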
""" - Author: Bongsang Kim - homepage: https://bongsang.github.io - Linkedin: https://www.linkedin.com/in/bongsang """ import os from skimage import io, transform import torch from torch.utils.data import Dataset import pandas as pd import numpy as np import cv2 class LandmarksDataset(Dataset): """Face Landmarks dataset.""" def __init__(self, csv_file, root_dir, transform=None): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.landmarks_frame = pd.read_csv(csv_file) self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.landmarks_frame) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() image_name = self.landmarks_frame.iloc[idx, 0] image_path = os.path.join(self.root_dir, image_name) image = cv2.imread(image_path) image_data = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # if image has an alpha color channel, get rid of it if image.shape[2] == 4: image = image[:, :, 0:3] landmarks = self.landmarks_frame.iloc[idx, 1:] landmarks = np.array([landmarks]) landmarks = landmarks.astype('float').reshape(-1, 2) sample = {'image': image_data, 'landmarks': landmarks, 'name': image_name} print(f'LandmarksDataset shape, file={image_name}, image={image_data.shape}, landmarks={landmarks.shape}') if self.transform: sample = self.transform(sample) return sample
If $p$ is a nonzero polynomial, then the last coefficient of $p$ is equal to the leading coefficient of $p$.
glamorous coloring page bird large size of free printable star wars coloring pages page captivating angry birds glamorous coloring page bird cage. here we are reach glamorous coloring page bird large size of free printable star wars coloring pages page captivating angry birds glamorous coloring page bird cage for an scheme that engage in child activities. coloring is beautiful and in a cheerful mood, also as well as proficient training activities in that regard. it has always been a joy to discover things was start with that so simply from the more complicated letters or tricks. the most recognizable thing for instance the cartoon character, or in the movie, may also be an interesting tom but not included in either. here we are in coloring pages gallery of pics can also come in the form of animals or flowers. there is much more such as plants or insects. they are that so interesting and offer different levels of difficulty. including giving a form was has never been encountered, is a form of learning also to children. for more difficult levels is part of the teenager or more. right here in theme glamorous coloring page bird is one where you can choose and you try to finish. start with a base color or a higher level by playing gradations or color discs. everything becoming possible when you begin to effort it now. good luck hope this be in a cheerful mood for you or maybe you can show it to friends or teacher or also parents. coloring page tweety bird pages of a rose color skulls and roses realistic hummingbird,coloring page bird feeder angry birds go colouring of cage printable pages free,coloring page angry birds flag state glamorous sheet hummingbird tweety bird,coloring page birdhouse tweety bird book pages kids color glamorous of baby,glamorous coloring page dinosaurs pictures and facts pages angry birds 2 rio tweety bird,hummingbird coloring page swallow tail free pages birds hawk with angry rio,glamorous coloring page bird space bomb in angry birds star wars pages of baby tweety swallow tail hummingbird,coloring pages angry birds transformers page best of images epic bird cage,bird coloring pages stunning free page birdhouse angry birds star wars pigs,coloring page birdhouse images of hummingbird pages swallow tail mickey and.
This banking software innovator and paperless financial institution has a unique company culture, something we helped bring to life in two of their gorgeous Wilmington offices. Among our work: detail-rich desks and conference tables, shelving units and reception desks. All that, plus a walnut conference table, board of directors table made from a 1000-year old Cypress slab, and multiple pieces of Skylar Morgan furniture (Bide stools, The Heavy bench, Doc Table, Y Table, Branch Coat Trees) make this one of our favorite--and largest--installations to date.
/- Copyright (c) 2020 Yury Kudryashov. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Yury Kudryashov, Johannes Hölzl, Mario Carneiro, Patrick Massot -/ import order.filter.basic import data.set.countable import data.pprod /-! # Filter bases A filter basis `B : filter_basis α` on a type `α` is a nonempty collection of sets of `α` such that the intersection of two elements of this collection contains some element of the collection. Compared to filters, filter bases do not require that any set containing an element of `B` belongs to `B`. A filter basis `B` can be used to construct `B.filter : filter α` such that a set belongs to `B.filter` if and only if it contains an element of `B`. Given an indexing type `ι`, a predicate `p : ι → Prop`, and a map `s : ι → set α`, the proposition `h : filter.is_basis p s` makes sure the range of `s` bounded by `p` (ie. `s '' set_of p`) defines a filter basis `h.filter_basis`. If one already has a filter `l` on `α`, `filter.has_basis l p s` (where `p : ι → Prop` and `s : ι → set α` as above) means that a set belongs to `l` if and only if it contains some `s i` with `p i`. It implies `h : filter.is_basis p s`, and `l = h.filter_basis.filter`. The point of this definition is that checking statements involving elements of `l` often reduces to checking them on the basis elements. We define a function `has_basis.index (h : filter.has_basis l p s) (t) (ht : t ∈ l)` that returns some index `i` such that `p i` and `s i ⊆ t`. This function can be useful to avoid manual destruction of `h.mem_iff.mpr ht` using `cases` or `let`. This file also introduces more restricted classes of bases, involving monotonicity or countability. In particular, for `l : filter α`, `l.is_countably_generated` means there is a countable set of sets which generates `s`. This is reformulated in term of bases, and consequences are derived. ## Main statements * `has_basis.mem_iff`, `has_basis.mem_of_superset`, `has_basis.mem_of_mem` : restate `t ∈ f` in terms of a basis; * `basis_sets` : all sets of a filter form a basis; * `has_basis.inf`, `has_basis.inf_principal`, `has_basis.prod`, `has_basis.prod_self`, `has_basis.map`, `has_basis.comap` : combinators to construct filters of `l ⊓ l'`, `l ⊓ 𝓟 t`, `l ×ᶠ l'`, `l ×ᶠ l`, `l.map f`, `l.comap f` respectively; * `has_basis.le_iff`, `has_basis.ge_iff`, has_basis.le_basis_iff` : restate `l ≤ l'` in terms of bases. * `has_basis.tendsto_right_iff`, `has_basis.tendsto_left_iff`, `has_basis.tendsto_iff` : restate `tendsto f l l'` in terms of bases. * `is_countably_generated_iff_exists_antitone_basis` : proves a filter is countably generated if and only if it admits a basis parametrized by a decreasing sequence of sets indexed by `ℕ`. * `tendsto_iff_seq_tendsto ` : an abstract version of "sequentially continuous implies continuous". ## Implementation notes As with `Union`/`bUnion`/`sUnion`, there are three different approaches to filter bases: * `has_basis l s`, `s : set (set α)`; * `has_basis l s`, `s : ι → set α`; * `has_basis l p s`, `p : ι → Prop`, `s : ι → set α`. We use the latter one because, e.g., `𝓝 x` in an `emetric_space` or in a `metric_space` has a basis of this form. The other two can be emulated using `s = id` or `p = λ _, true`. With this approach sometimes one needs to `simp` the statement provided by the `has_basis` machinery, e.g., `simp only [exists_prop, true_and]` or `simp only [forall_const]` can help with the case `p = λ _, true`. 
-/ open set filter open_locale filter classical section sort variables {α β γ : Type*} {ι ι' : Sort*} /-- A filter basis `B` on a type `α` is a nonempty collection of sets of `α` such that the intersection of two elements of this collection contains some element of the collection. -/ structure filter_basis (α : Type*) := (sets : set (set α)) (nonempty : sets.nonempty) (inter_sets {x y} : x ∈ sets → y ∈ sets → ∃ z ∈ sets, z ⊆ x ∩ y) instance filter_basis.nonempty_sets (B : filter_basis α) : nonempty B.sets := B.nonempty.to_subtype /-- If `B` is a filter basis on `α`, and `U` a subset of `α` then we can write `U ∈ B` as on paper. -/ @[reducible] instance {α : Type*}: has_mem (set α) (filter_basis α) := ⟨λ U B, U ∈ B.sets⟩ -- For illustration purposes, the filter basis defining (at_top : filter ℕ) instance : inhabited (filter_basis ℕ) := ⟨{ sets := range Ici, nonempty := ⟨Ici 0, mem_range_self 0⟩, inter_sets := begin rintros _ _ ⟨n, rfl⟩ ⟨m, rfl⟩, refine ⟨Ici (max n m), mem_range_self _, _⟩, rintros p p_in, split ; rw mem_Ici at *, exact le_of_max_le_left p_in, exact le_of_max_le_right p_in, end }⟩ /-- `is_basis p s` means the image of `s` bounded by `p` is a filter basis. -/ protected structure filter.is_basis (p : ι → Prop) (s : ι → set α) : Prop := (nonempty : ∃ i, p i) (inter : ∀ {i j}, p i → p j → ∃ k, p k ∧ s k ⊆ s i ∩ s j) namespace filter namespace is_basis /-- Constructs a filter basis from an indexed family of sets satisfying `is_basis`. -/ protected def filter_basis {p : ι → Prop} {s : ι → set α} (h : is_basis p s) : filter_basis α := { sets := {t | ∃ i, p i ∧ s i = t}, nonempty := let ⟨i, hi⟩ := h.nonempty in ⟨s i, ⟨i, hi, rfl⟩⟩, inter_sets := by { rintros _ _ ⟨i, hi, rfl⟩ ⟨j, hj, rfl⟩, rcases h.inter hi hj with ⟨k, hk, hk'⟩, exact ⟨_, ⟨k, hk, rfl⟩, hk'⟩ } } variables {p : ι → Prop} {s : ι → set α} (h : is_basis p s) lemma mem_filter_basis_iff {U : set α} : U ∈ h.filter_basis ↔ ∃ i, p i ∧ s i = U := iff.rfl end is_basis end filter namespace filter_basis /-- The filter associated to a filter basis. -/ protected def filter (B : filter_basis α) : filter α := { sets := {s | ∃ t ∈ B, t ⊆ s}, univ_sets := let ⟨s, s_in⟩ := B.nonempty in ⟨s, s_in, s.subset_univ⟩, sets_of_superset := λ x y ⟨s, s_in, h⟩ hxy, ⟨s, s_in, set.subset.trans h hxy⟩, inter_sets := λ x y ⟨s, s_in, hs⟩ ⟨t, t_in, ht⟩, let ⟨u, u_in, u_sub⟩ := B.inter_sets s_in t_in in ⟨u, u_in, set.subset.trans u_sub $ set.inter_subset_inter hs ht⟩ } lemma mem_filter_iff (B : filter_basis α) {U : set α} : U ∈ B.filter ↔ ∃ s ∈ B, s ⊆ U := iff.rfl lemma mem_filter_of_mem (B : filter_basis α) {U : set α} : U ∈ B → U ∈ B.filter:= λ U_in, ⟨U, U_in, subset.refl _⟩ lemma eq_infi_principal (B : filter_basis α) : B.filter = ⨅ s : B.sets, 𝓟 s := begin have : directed (≥) (λ (s : B.sets), 𝓟 (s : set α)), { rintros ⟨U, U_in⟩ ⟨V, V_in⟩, rcases B.inter_sets U_in V_in with ⟨W, W_in, W_sub⟩, use [W, W_in], finish }, ext U, simp [mem_filter_iff, mem_infi_of_directed this] end protected lemma generate (B : filter_basis α) : generate B.sets = B.filter := begin apply le_antisymm, { intros U U_in, rcases B.mem_filter_iff.mp U_in with ⟨V, V_in, h⟩, exact generate_sets.superset (generate_sets.basic V_in) h }, { rw sets_iff_generate, apply mem_filter_of_mem } end end filter_basis namespace filter namespace is_basis variables {p : ι → Prop} {s : ι → set α} /-- Constructs a filter from an indexed family of sets satisfying `is_basis`. 
-/ protected def filter (h : is_basis p s) : filter α := h.filter_basis.filter protected lemma mem_filter_iff (h : is_basis p s) {U : set α} : U ∈ h.filter ↔ ∃ i, p i ∧ s i ⊆ U := begin erw [h.filter_basis.mem_filter_iff], simp only [mem_filter_basis_iff h, exists_prop], split, { rintros ⟨_, ⟨i, pi, rfl⟩, h⟩, tauto }, { tauto } end lemma filter_eq_generate (h : is_basis p s) : h.filter = generate {U | ∃ i, p i ∧ s i = U} := by erw h.filter_basis.generate ; refl end is_basis /-- We say that a filter `l` has a basis `s : ι → set α` bounded by `p : ι → Prop`, if `t ∈ l` if and only if `t` includes `s i` for some `i` such that `p i`. -/ protected structure has_basis (l : filter α) (p : ι → Prop) (s : ι → set α) : Prop := (mem_iff' : ∀ (t : set α), t ∈ l ↔ ∃ i (hi : p i), s i ⊆ t) section same_type variables {l l' : filter α} {p : ι → Prop} {s : ι → set α} {t : set α} {i : ι} {p' : ι' → Prop} {s' : ι' → set α} {i' : ι'} lemma has_basis_generate (s : set (set α)) : (generate s).has_basis (λ t, finite t ∧ t ⊆ s) (λ t, ⋂₀ t) := ⟨begin intro U, rw mem_generate_iff, apply exists_congr, tauto end⟩ /-- The smallest filter basis containing a given collection of sets. -/ def filter_basis.of_sets (s : set (set α)) : filter_basis α := { sets := sInter '' { t | finite t ∧ t ⊆ s}, nonempty := ⟨univ, ∅, ⟨⟨finite_empty, empty_subset s⟩, sInter_empty⟩⟩, inter_sets := begin rintros _ _ ⟨a, ⟨fina, suba⟩, rfl⟩ ⟨b, ⟨finb, subb⟩, rfl⟩, exact ⟨⋂₀ (a ∪ b), mem_image_of_mem _ ⟨fina.union finb, union_subset suba subb⟩, by rw sInter_union⟩, end } /-- Definition of `has_basis` unfolded with implicit set argument. -/ lemma has_basis.mem_iff (hl : l.has_basis p s) : t ∈ l ↔ ∃ i (hi : p i), s i ⊆ t := hl.mem_iff' t lemma has_basis.eq_of_same_basis (hl : l.has_basis p s) (hl' : l'.has_basis p s) : l = l' := begin ext t, rw [hl.mem_iff, hl'.mem_iff] end lemma has_basis_iff : l.has_basis p s ↔ ∀ t, t ∈ l ↔ ∃ i (hi : p i), s i ⊆ t := ⟨λ ⟨h⟩, h, λ h, ⟨h⟩⟩ lemma has_basis.ex_mem (h : l.has_basis p s) : ∃ i, p i := let ⟨i, pi, h⟩ := h.mem_iff.mp univ_mem in ⟨i, pi⟩ protected lemma has_basis.nonempty (h : l.has_basis p s) : nonempty ι := nonempty_of_exists h.ex_mem protected lemma is_basis.has_basis (h : is_basis p s) : has_basis h.filter p s := ⟨λ t, by simp only [h.mem_filter_iff, exists_prop]⟩ lemma has_basis.mem_of_superset (hl : l.has_basis p s) (hi : p i) (ht : s i ⊆ t) : t ∈ l := (hl.mem_iff).2 ⟨i, hi, ht⟩ lemma has_basis.mem_of_mem (hl : l.has_basis p s) (hi : p i) : s i ∈ l := hl.mem_of_superset hi $ subset.refl _ /-- Index of a basis set such that `s i ⊆ t` as an element of `subtype p`. 
-/ noncomputable def has_basis.index (h : l.has_basis p s) (t : set α) (ht : t ∈ l) : {i : ι // p i} := ⟨(h.mem_iff.1 ht).some, (h.mem_iff.1 ht).some_spec.fst⟩ lemma has_basis.property_index (h : l.has_basis p s) (ht : t ∈ l) : p (h.index t ht) := (h.index t ht).2 lemma has_basis.set_index_mem (h : l.has_basis p s) (ht : t ∈ l) : s (h.index t ht) ∈ l := h.mem_of_mem $ h.property_index _ lemma has_basis.set_index_subset (h : l.has_basis p s) (ht : t ∈ l) : s (h.index t ht) ⊆ t := (h.mem_iff.1 ht).some_spec.snd lemma has_basis.is_basis (h : l.has_basis p s) : is_basis p s := { nonempty := let ⟨i, hi, H⟩ := h.mem_iff.mp univ_mem in ⟨i, hi⟩, inter := λ i j hi hj, by simpa [h.mem_iff] using l.inter_sets (h.mem_of_mem hi) (h.mem_of_mem hj) } lemma has_basis.filter_eq (h : l.has_basis p s) : h.is_basis.filter = l := by { ext U, simp [h.mem_iff, is_basis.mem_filter_iff] } lemma has_basis.eq_generate (h : l.has_basis p s) : l = generate { U | ∃ i, p i ∧ s i = U } := by rw [← h.is_basis.filter_eq_generate, h.filter_eq] lemma generate_eq_generate_inter (s : set (set α)) : generate s = generate (sInter '' { t | finite t ∧ t ⊆ s}) := by erw [(filter_basis.of_sets s).generate, ← (has_basis_generate s).filter_eq] ; refl lemma of_sets_filter_eq_generate (s : set (set α)) : (filter_basis.of_sets s).filter = generate s := by rw [← (filter_basis.of_sets s).generate, generate_eq_generate_inter s] ; refl protected lemma _root_.filter_basis.has_basis {α : Type*} (B : filter_basis α) : has_basis (B.filter) (λ s : set α, s ∈ B) id := ⟨λ t, B.mem_filter_iff⟩ lemma has_basis.to_has_basis' (hl : l.has_basis p s) (h : ∀ i, p i → ∃ i', p' i' ∧ s' i' ⊆ s i) (h' : ∀ i', p' i' → s' i' ∈ l) : l.has_basis p' s' := begin refine ⟨λ t, ⟨λ ht, _, λ ⟨i', hi', ht⟩, mem_of_superset (h' i' hi') ht⟩⟩, rcases hl.mem_iff.1 ht with ⟨i, hi, ht⟩, rcases h i hi with ⟨i', hi', hs's⟩, exact ⟨i', hi', subset.trans hs's ht⟩ end lemma has_basis.to_has_basis (hl : l.has_basis p s) (h : ∀ i, p i → ∃ i', p' i' ∧ s' i' ⊆ s i) (h' : ∀ i', p' i' → ∃ i, p i ∧ s i ⊆ s' i') : l.has_basis p' s' := hl.to_has_basis' h $ λ i' hi', let ⟨i, hi, hss'⟩ := h' i' hi' in hl.mem_iff.2 ⟨i, hi, hss'⟩ lemma has_basis.to_subset (hl : l.has_basis p s) {t : ι → set α} (h : ∀ i, p i → t i ⊆ s i) (ht : ∀ i, p i → t i ∈ l) : l.has_basis p t := hl.to_has_basis' (λ i hi, ⟨i, hi, h i hi⟩) ht lemma has_basis.eventually_iff (hl : l.has_basis p s) {q : α → Prop} : (∀ᶠ x in l, q x) ↔ ∃ i, p i ∧ ∀ ⦃x⦄, x ∈ s i → q x := by simpa using hl.mem_iff lemma has_basis.frequently_iff (hl : l.has_basis p s) {q : α → Prop} : (∃ᶠ x in l, q x) ↔ ∀ i, p i → ∃ x ∈ s i, q x := by simp [filter.frequently, hl.eventually_iff] lemma has_basis.exists_iff (hl : l.has_basis p s) {P : set α → Prop} (mono : ∀ ⦃s t⦄, s ⊆ t → P t → P s) : (∃ s ∈ l, P s) ↔ ∃ (i) (hi : p i), P (s i) := ⟨λ ⟨s, hs, hP⟩, let ⟨i, hi, his⟩ := hl.mem_iff.1 hs in ⟨i, hi, mono his hP⟩, λ ⟨i, hi, hP⟩, ⟨s i, hl.mem_of_mem hi, hP⟩⟩ lemma has_basis.forall_iff (hl : l.has_basis p s) {P : set α → Prop} (mono : ∀ ⦃s t⦄, s ⊆ t → P s → P t) : (∀ s ∈ l, P s) ↔ ∀ i, p i → P (s i) := ⟨λ H i hi, H (s i) $ hl.mem_of_mem hi, λ H s hs, let ⟨i, hi, his⟩ := hl.mem_iff.1 hs in mono his (H i hi)⟩ lemma has_basis.ne_bot_iff (hl : l.has_basis p s) : ne_bot l ↔ (∀ {i}, p i → (s i).nonempty) := forall_mem_nonempty_iff_ne_bot.symm.trans $ hl.forall_iff $ λ _ _, nonempty.mono lemma has_basis.eq_bot_iff (hl : l.has_basis p s) : l = ⊥ ↔ ∃ i, p i ∧ s i = ∅ := not_iff_not.1 $ ne_bot_iff.symm.trans $ hl.ne_bot_iff.trans $ by simp only [not_exists, not_and, ← 
ne_empty_iff_nonempty] lemma basis_sets (l : filter α) : l.has_basis (λ s : set α, s ∈ l) id := ⟨λ t, exists_mem_subset_iff.symm⟩ lemma has_basis_self {l : filter α} {P : set α → Prop} : has_basis l (λ s, s ∈ l ∧ P s) id ↔ ∀ t ∈ l, ∃ r ∈ l, P r ∧ r ⊆ t := begin simp only [has_basis_iff, exists_prop, id, and_assoc], exact forall_congr (λ s, ⟨λ h, h.1, λ h, ⟨h, λ ⟨t, hl, hP, hts⟩, mem_of_superset hl hts⟩⟩) end /-- If `{s i | p i}` is a basis of a filter `l` and each `s i` includes `s j` such that `p j ∧ q j`, then `{s j | p j ∧ q j}` is a basis of `l`. -/ lemma has_basis.restrict (h : l.has_basis p s) {q : ι → Prop} (hq : ∀ i, p i → ∃ j, p j ∧ q j ∧ s j ⊆ s i) : l.has_basis (λ i, p i ∧ q i) s := begin refine ⟨λ t, ⟨λ ht, _, λ ⟨i, hpi, hti⟩, h.mem_iff.2 ⟨i, hpi.1, hti⟩⟩⟩, rcases h.mem_iff.1 ht with ⟨i, hpi, hti⟩, rcases hq i hpi with ⟨j, hpj, hqj, hji⟩, exact ⟨j, ⟨hpj, hqj⟩, subset.trans hji hti⟩ end /-- If `{s i | p i}` is a basis of a filter `l` and `V ∈ l`, then `{s i | p i ∧ s i ⊆ V}` is a basis of `l`. -/ lemma has_basis.restrict_subset (h : l.has_basis p s) {V : set α} (hV : V ∈ l) : l.has_basis (λ i, p i ∧ s i ⊆ V) s := h.restrict $ λ i hi, (h.mem_iff.1 (inter_mem hV (h.mem_of_mem hi))).imp $ λ j hj, ⟨hj.fst, subset_inter_iff.1 hj.snd⟩ lemma has_basis.has_basis_self_subset {p : set α → Prop} (h : l.has_basis (λ s, s ∈ l ∧ p s) id) {V : set α} (hV : V ∈ l) : l.has_basis (λ s, s ∈ l ∧ p s ∧ s ⊆ V) id := by simpa only [and_assoc] using h.restrict_subset hV theorem has_basis.ge_iff (hl' : l'.has_basis p' s') : l ≤ l' ↔ ∀ i', p' i' → s' i' ∈ l := ⟨λ h i' hi', h $ hl'.mem_of_mem hi', λ h s hs, let ⟨i', hi', hs⟩ := hl'.mem_iff.1 hs in mem_of_superset (h _ hi') hs⟩ theorem has_basis.le_iff (hl : l.has_basis p s) : l ≤ l' ↔ ∀ t ∈ l', ∃ i (hi : p i), s i ⊆ t := by simp only [le_def, hl.mem_iff] theorem has_basis.le_basis_iff (hl : l.has_basis p s) (hl' : l'.has_basis p' s') : l ≤ l' ↔ ∀ i', p' i' → ∃ i (hi : p i), s i ⊆ s' i' := by simp only [hl'.ge_iff, hl.mem_iff] lemma has_basis.ext (hl : l.has_basis p s) (hl' : l'.has_basis p' s') (h : ∀ i, p i → ∃ i', p' i' ∧ s' i' ⊆ s i) (h' : ∀ i', p' i' → ∃ i, p i ∧ s i ⊆ s' i') : l = l' := begin apply le_antisymm, { rw hl.le_basis_iff hl', simpa using h' }, { rw hl'.le_basis_iff hl, simpa using h }, end lemma has_basis.inf' (hl : l.has_basis p s) (hl' : l'.has_basis p' s') : (l ⊓ l').has_basis (λ i : pprod ι ι', p i.1 ∧ p' i.2) (λ i, s i.1 ∩ s' i.2) := ⟨begin intro t, split, { simp only [mem_inf_iff, exists_prop, hl.mem_iff, hl'.mem_iff], rintros ⟨t, ⟨i, hi, ht⟩, t', ⟨i', hi', ht'⟩, rfl⟩, use [⟨i, i'⟩, ⟨hi, hi'⟩, inter_subset_inter ht ht'] }, { rintros ⟨⟨i, i'⟩, ⟨hi, hi'⟩, H⟩, exact mem_inf_of_inter (hl.mem_of_mem hi) (hl'.mem_of_mem hi') H } end⟩ lemma has_basis.inf {ι ι' : Type*} {p : ι → Prop} {s : ι → set α} {p' : ι' → Prop} {s' : ι' → set α} (hl : l.has_basis p s) (hl' : l'.has_basis p' s') : (l ⊓ l').has_basis (λ i : ι × ι', p i.1 ∧ p' i.2) (λ i, s i.1 ∩ s' i.2) := (hl.inf' hl').to_has_basis (λ i hi, ⟨⟨i.1, i.2⟩, hi, subset.rfl⟩) (λ i hi, ⟨⟨i.1, i.2⟩, hi, subset.rfl⟩) lemma has_basis_principal (t : set α) : (𝓟 t).has_basis (λ i : unit, true) (λ i, t) := ⟨λ U, by simp⟩ lemma has_basis_pure (x : α) : (pure x : filter α).has_basis (λ i : unit, true) (λ i, {x}) := by simp only [← principal_singleton, has_basis_principal] lemma has_basis.sup' (hl : l.has_basis p s) (hl' : l'.has_basis p' s') : (l ⊔ l').has_basis (λ i : pprod ι ι', p i.1 ∧ p' i.2) (λ i, s i.1 ∪ s' i.2) := ⟨begin intros t, simp only [mem_sup, hl.mem_iff, hl'.mem_iff, pprod.exists, 
union_subset_iff, exists_prop, and_assoc, exists_and_distrib_left], simp only [← and_assoc, exists_and_distrib_right, and_comm] end⟩ lemma has_basis.sup {ι ι' : Type*} {p : ι → Prop} {s : ι → set α} {p' : ι' → Prop} {s' : ι' → set α} (hl : l.has_basis p s) (hl' : l'.has_basis p' s') : (l ⊔ l').has_basis (λ i : ι × ι', p i.1 ∧ p' i.2) (λ i, s i.1 ∪ s' i.2) := (hl.sup' hl').to_has_basis (λ i hi, ⟨⟨i.1, i.2⟩, hi, subset.rfl⟩) (λ i hi, ⟨⟨i.1, i.2⟩, hi, subset.rfl⟩) lemma has_basis_supr {ι : Sort*} {ι' : ι → Type*} {l : ι → filter α} {p : Π i, ι' i → Prop} {s : Π i, ι' i → set α} (hl : ∀ i, (l i).has_basis (p i) (s i)) : (⨆ i, l i).has_basis (λ f : Π i, ι' i, ∀ i, p i (f i)) (λ f : Π i, ι' i, ⋃ i, s i (f i)) := has_basis_iff.mpr $ λ t, by simp only [has_basis_iff, (hl _).mem_iff, classical.skolem, forall_and_distrib, Union_subset_iff, mem_supr] lemma has_basis.sup_principal (hl : l.has_basis p s) (t : set α) : (l ⊔ 𝓟 t).has_basis p (λ i, s i ∪ t) := ⟨λ u, by simp only [(hl.sup' (has_basis_principal t)).mem_iff, pprod.exists, exists_prop, and_true, unique.exists_iff]⟩ lemma has_basis.sup_pure (hl : l.has_basis p s) (x : α) : (l ⊔ pure x).has_basis p (λ i, s i ∪ {x}) := by simp only [← principal_singleton, hl.sup_principal] lemma has_basis.inf_principal (hl : l.has_basis p s) (s' : set α) : (l ⊓ 𝓟 s').has_basis p (λ i, s i ∩ s') := ⟨λ t, by simp only [mem_inf_principal, hl.mem_iff, subset_def, mem_set_of_eq, mem_inter_iff, and_imp]⟩ lemma has_basis.inf_basis_ne_bot_iff (hl : l.has_basis p s) (hl' : l'.has_basis p' s') : ne_bot (l ⊓ l') ↔ ∀ ⦃i⦄ (hi : p i) ⦃i'⦄ (hi' : p' i'), (s i ∩ s' i').nonempty := (hl.inf' hl').ne_bot_iff.trans $ by simp [@forall_swap _ ι'] lemma has_basis.inf_ne_bot_iff (hl : l.has_basis p s) : ne_bot (l ⊓ l') ↔ ∀ ⦃i⦄ (hi : p i) ⦃s'⦄ (hs' : s' ∈ l'), (s i ∩ s').nonempty := hl.inf_basis_ne_bot_iff l'.basis_sets lemma has_basis.inf_principal_ne_bot_iff (hl : l.has_basis p s) {t : set α} : ne_bot (l ⊓ 𝓟 t) ↔ ∀ ⦃i⦄ (hi : p i), (s i ∩ t).nonempty := (hl.inf_principal t).ne_bot_iff lemma inf_ne_bot_iff : ne_bot (l ⊓ l') ↔ ∀ ⦃s : set α⦄ (hs : s ∈ l) ⦃s'⦄ (hs' : s' ∈ l'), (s ∩ s').nonempty := l.basis_sets.inf_ne_bot_iff lemma inf_principal_ne_bot_iff {s : set α} : ne_bot (l ⊓ 𝓟 s) ↔ ∀ U ∈ l, (U ∩ s).nonempty := l.basis_sets.inf_principal_ne_bot_iff lemma inf_eq_bot_iff {f g : filter α} : f ⊓ g = ⊥ ↔ ∃ (U ∈ f) (V ∈ g), U ∩ V = ∅ := not_iff_not.1 $ ne_bot_iff.symm.trans $ inf_ne_bot_iff.trans $ by simp [← ne_empty_iff_nonempty] protected lemma disjoint_iff {f g : filter α} : disjoint f g ↔ ∃ (U ∈ f) (V ∈ g), U ∩ V = ∅ := disjoint_iff.trans inf_eq_bot_iff lemma mem_iff_inf_principal_compl {f : filter α} {s : set α} : s ∈ f ↔ f ⊓ 𝓟 sᶜ = ⊥ := begin refine not_iff_not.1 ((inf_principal_ne_bot_iff.trans _).symm.trans ne_bot_iff), exact ⟨λ h hs, by simpa [empty_not_nonempty] using h s hs, λ hs t ht, inter_compl_nonempty_iff.2 $ λ hts, hs $ mem_of_superset ht hts⟩, end lemma not_mem_iff_inf_principal_compl {f : filter α} {s : set α} : s ∉ f ↔ ne_bot (f ⊓ 𝓟 sᶜ) := (not_congr mem_iff_inf_principal_compl).trans ne_bot_iff.symm lemma mem_iff_disjoint_principal_compl {f : filter α} {s : set α} : s ∈ f ↔ disjoint f (𝓟 sᶜ) := mem_iff_inf_principal_compl.trans disjoint_iff.symm lemma le_iff_forall_disjoint_principal_compl {f g : filter α} : f ≤ g ↔ ∀ V ∈ g, disjoint f (𝓟 Vᶜ) := forall_congr $ λ _, forall_congr $ λ _, mem_iff_disjoint_principal_compl lemma le_iff_forall_inf_principal_compl {f g : filter α} : f ≤ g ↔ ∀ V ∈ g, f ⊓ 𝓟 Vᶜ = ⊥ := forall_congr $ λ _, forall_congr $ λ _, 
mem_iff_inf_principal_compl lemma inf_ne_bot_iff_frequently_left {f g : filter α} : ne_bot (f ⊓ g) ↔ ∀ {p : α → Prop}, (∀ᶠ x in f, p x) → ∃ᶠ x in g, p x := by simpa only [inf_ne_bot_iff, frequently_iff, exists_prop, and_comm] lemma inf_ne_bot_iff_frequently_right {f g : filter α} : ne_bot (f ⊓ g) ↔ ∀ {p : α → Prop}, (∀ᶠ x in g, p x) → ∃ᶠ x in f, p x := by { rw inf_comm, exact inf_ne_bot_iff_frequently_left } lemma has_basis.eq_binfi (h : l.has_basis p s) : l = ⨅ i (_ : p i), 𝓟 (s i) := eq_binfi_of_mem_iff_exists_mem $ λ t, by simp only [h.mem_iff, mem_principal] lemma has_basis.eq_infi (h : l.has_basis (λ _, true) s) : l = ⨅ i, 𝓟 (s i) := by simpa only [infi_true] using h.eq_binfi lemma has_basis_infi_principal {s : ι → set α} (h : directed (≥) s) [nonempty ι] : (⨅ i, 𝓟 (s i)).has_basis (λ _, true) s := ⟨begin refine λ t, (mem_infi_of_directed (h.mono_comp _ _) t).trans $ by simp only [exists_prop, true_and, mem_principal], exact λ _ _, principal_mono.2 end⟩ /-- If `s : ι → set α` is an indexed family of sets, then finite intersections of `s i` form a basis of `⨅ i, 𝓟 (s i)`. -/ lemma has_basis_infi_principal_finite {ι : Type*} (s : ι → set α) : (⨅ i, 𝓟 (s i)).has_basis (λ t : set ι, finite t) (λ t, ⋂ i ∈ t, s i) := begin refine ⟨λ U, (mem_infi_finite _).trans _⟩, simp only [infi_principal_finset, mem_Union, mem_principal, exists_prop, exists_finite_iff_finset, finset.set_bInter_coe] end lemma has_basis_binfi_principal {s : β → set α} {S : set β} (h : directed_on (s ⁻¹'o (≥)) S) (ne : S.nonempty) : (⨅ i ∈ S, 𝓟 (s i)).has_basis (λ i, i ∈ S) s := ⟨begin refine λ t, (mem_binfi_of_directed _ ne).trans $ by simp only [mem_principal], rw [directed_on_iff_directed, ← directed_comp, (∘)] at h ⊢, apply h.mono_comp _ _, exact λ _ _, principal_mono.2 end⟩ lemma has_basis_binfi_principal' {ι : Type*} {p : ι → Prop} {s : ι → set α} (h : ∀ i, p i → ∀ j, p j → ∃ k (h : p k), s k ⊆ s i ∧ s k ⊆ s j) (ne : ∃ i, p i) : (⨅ i (h : p i), 𝓟 (s i)).has_basis p s := filter.has_basis_binfi_principal h ne lemma has_basis.map (f : α → β) (hl : l.has_basis p s) : (l.map f).has_basis p (λ i, f '' (s i)) := ⟨λ t, by simp only [mem_map, image_subset_iff, hl.mem_iff, preimage]⟩ lemma has_basis.comap (f : β → α) (hl : l.has_basis p s) : (l.comap f).has_basis p (λ i, f ⁻¹' (s i)) := ⟨begin intro t, simp only [mem_comap, exists_prop, hl.mem_iff], split, { rintros ⟨t', ⟨i, hi, ht'⟩, H⟩, exact ⟨i, hi, subset.trans (preimage_mono ht') H⟩ }, { rintros ⟨i, hi, H⟩, exact ⟨s i, ⟨i, hi, subset.refl _⟩, H⟩ } end⟩ lemma comap_has_basis (f : α → β) (l : filter β) : has_basis (comap f l) (λ s : set β, s ∈ l) (λ s, f ⁻¹' s) := ⟨λ t, mem_comap⟩ lemma has_basis.prod_self (hl : l.has_basis p s) : (l ×ᶠ l).has_basis p (λ i, (s i).prod (s i)) := ⟨begin intro t, apply mem_prod_iff.trans, split, { rintros ⟨t₁, ht₁, t₂, ht₂, H⟩, rcases hl.mem_iff.1 (inter_mem ht₁ ht₂) with ⟨i, hi, ht⟩, exact ⟨i, hi, λ p ⟨hp₁, hp₂⟩, H ⟨(ht hp₁).1, (ht hp₂).2⟩⟩ }, { rintros ⟨i, hi, H⟩, exact ⟨s i, hl.mem_of_mem hi, s i, hl.mem_of_mem hi, H⟩ } end⟩ lemma mem_prod_self_iff {s} : s ∈ l ×ᶠ l ↔ ∃ t ∈ l, set.prod t t ⊆ s := l.basis_sets.prod_self.mem_iff lemma has_basis.sInter_sets (h : has_basis l p s) : ⋂₀ l.sets = ⋂ i (hi : p i), s i := begin ext x, suffices : (∀ t ∈ l, x ∈ t) ↔ ∀ i, p i → x ∈ s i, by simpa only [mem_Inter, mem_set_of_eq, mem_sInter], simp_rw h.mem_iff, split, { intros h i hi, exact h (s i) ⟨i, hi, subset.refl _⟩ }, { rintros h _ ⟨i, hi, sub⟩, exact sub (h i hi) }, end variables {ι'' : Type*} [preorder ι''] (l) (s'' : ι'' → set α) /-- 
`is_antitone_basis s` means the image of `s` is a filter basis such that `s` is decreasing. -/ @[protect_proj] structure is_antitone_basis extends is_basis (λ _, true) s'' : Prop := (antitone : antitone s'') /-- We say that a filter `l` has an antitone basis `s : ι → set α`, if `t ∈ l` if and only if `t` includes `s i` for some `i`, and `s` is decreasing. -/ @[protect_proj] structure has_antitone_basis (l : filter α) (s : ι'' → set α) extends has_basis l (λ _, true) s : Prop := (antitone : antitone s) end same_type section two_types variables {la : filter α} {pa : ι → Prop} {sa : ι → set α} {lb : filter β} {pb : ι' → Prop} {sb : ι' → set β} {f : α → β} lemma has_basis.tendsto_left_iff (hla : la.has_basis pa sa) : tendsto f la lb ↔ ∀ t ∈ lb, ∃ i (hi : pa i), maps_to f (sa i) t := by { simp only [tendsto, (hla.map f).le_iff, image_subset_iff], refl } lemma has_basis.tendsto_right_iff (hlb : lb.has_basis pb sb) : tendsto f la lb ↔ ∀ i (hi : pb i), ∀ᶠ x in la, f x ∈ sb i := by simpa only [tendsto, hlb.ge_iff, mem_map, filter.eventually] lemma has_basis.tendsto_iff (hla : la.has_basis pa sa) (hlb : lb.has_basis pb sb) : tendsto f la lb ↔ ∀ ib (hib : pb ib), ∃ ia (hia : pa ia), ∀ x ∈ sa ia, f x ∈ sb ib := by simp [hlb.tendsto_right_iff, hla.eventually_iff] lemma tendsto.basis_left (H : tendsto f la lb) (hla : la.has_basis pa sa) : ∀ t ∈ lb, ∃ i (hi : pa i), maps_to f (sa i) t := hla.tendsto_left_iff.1 H lemma tendsto.basis_right (H : tendsto f la lb) (hlb : lb.has_basis pb sb) : ∀ i (hi : pb i), ∀ᶠ x in la, f x ∈ sb i := hlb.tendsto_right_iff.1 H lemma tendsto.basis_both (H : tendsto f la lb) (hla : la.has_basis pa sa) (hlb : lb.has_basis pb sb) : ∀ ib (hib : pb ib), ∃ ia (hia : pa ia), ∀ x ∈ sa ia, f x ∈ sb ib := (hla.tendsto_iff hlb).1 H lemma has_basis.prod'' (hla : la.has_basis pa sa) (hlb : lb.has_basis pb sb) : (la ×ᶠ lb).has_basis (λ i : pprod ι ι', pa i.1 ∧ pb i.2) (λ i, (sa i.1).prod (sb i.2)) := (hla.comap prod.fst).inf' (hlb.comap prod.snd) lemma has_basis.prod {ι ι' : Type*} {pa : ι → Prop} {sa : ι → set α} {pb : ι' → Prop} {sb : ι' → set β} (hla : la.has_basis pa sa) (hlb : lb.has_basis pb sb) : (la ×ᶠ lb).has_basis (λ i : ι × ι', pa i.1 ∧ pb i.2) (λ i, (sa i.1).prod (sb i.2)) := (hla.comap prod.fst).inf (hlb.comap prod.snd) lemma has_basis.prod' {la : filter α} {lb : filter β} {ι : Type*} {p : ι → Prop} {sa : ι → set α} {sb : ι → set β} (hla : la.has_basis p sa) (hlb : lb.has_basis p sb) (h_dir : ∀ {i j}, p i → p j → ∃ k, p k ∧ sa k ⊆ sa i ∧ sb k ⊆ sb j) : (la ×ᶠ lb).has_basis p (λ i, (sa i).prod (sb i)) := begin simp only [has_basis_iff, (hla.prod hlb).mem_iff], refine λ t, ⟨_, _⟩, { rintros ⟨⟨i, j⟩, ⟨hi, hj⟩, hsub : (sa i).prod (sb j) ⊆ t⟩, rcases h_dir hi hj with ⟨k, hk, ki, kj⟩, exact ⟨k, hk, (set.prod_mono ki kj).trans hsub⟩ }, { rintro ⟨i, hi, h⟩, exact ⟨⟨i, i⟩, ⟨hi, hi⟩, h⟩ }, end end two_types end filter end sort namespace filter variables {α β γ ι ι' : Type*} /-- `is_countably_generated f` means `f = generate s` for some countable `s`. -/ class is_countably_generated (f : filter α) : Prop := (out [] : ∃ s : set (set α), countable s ∧ f = generate s) /-- `is_countable_basis p s` means the image of `s` bounded by `p` is a countable filter basis. 
-/ structure is_countable_basis (p : ι → Prop) (s : ι → set α) extends is_basis p s : Prop := (countable : countable $ set_of p) /-- We say that a filter `l` has a countable basis `s : ι → set α` bounded by `p : ι → Prop`, if `t ∈ l` if and only if `t` includes `s i` for some `i` such that `p i`, and the set defined by `p` is countable. -/ structure has_countable_basis (l : filter α) (p : ι → Prop) (s : ι → set α) extends has_basis l p s : Prop := (countable : countable $ set_of p) /-- A countable filter basis `B` on a type `α` is a nonempty countable collection of sets of `α` such that the intersection of two elements of this collection contains some element of the collection. -/ structure countable_filter_basis (α : Type*) extends filter_basis α := (countable : countable sets) -- For illustration purposes, the countable filter basis defining (at_top : filter ℕ) instance nat.inhabited_countable_filter_basis : inhabited (countable_filter_basis ℕ) := ⟨{ countable := countable_range (λ n, Ici n), ..(default $ filter_basis ℕ),}⟩ lemma has_countable_basis.is_countably_generated {f : filter α} {p : ι → Prop} {s : ι → set α} (h : f.has_countable_basis p s) : f.is_countably_generated := ⟨⟨{t | ∃ i, p i ∧ s i = t}, h.countable.image s, h.to_has_basis.eq_generate⟩⟩ lemma antitone_seq_of_seq (s : ℕ → set α) : ∃ t : ℕ → set α, antitone t ∧ (⨅ i, 𝓟 $ s i) = ⨅ i, 𝓟 (t i) := begin use λ n, ⋂ m ≤ n, s m, split, { exact λ i j hij, bInter_mono' (Iic_subset_Iic.2 hij) (λ n hn, subset.refl _) }, apply le_antisymm; rw le_infi_iff; intro i, { rw le_principal_iff, refine (bInter_mem (finite_le_nat _)).2 (λ j hji, _), rw ← le_principal_iff, apply infi_le_of_le j _, apply le_refl _ }, { apply infi_le_of_le i _, rw principal_mono, intro a, simp, intro h, apply h, refl }, end lemma countable_binfi_eq_infi_seq [complete_lattice α] {B : set ι} (Bcbl : countable B) (Bne : B.nonempty) (f : ι → α) : ∃ (x : ℕ → ι), (⨅ t ∈ B, f t) = ⨅ i, f (x i) := begin rw countable_iff_exists_surjective_to_subtype Bne at Bcbl, rcases Bcbl with ⟨g, gsurj⟩, rw infi_subtype', use (λ n, g n), apply le_antisymm; rw le_infi_iff, { intro i, apply infi_le_of_le (g i) _, apply le_refl _ }, { intros a, rcases gsurj a with ⟨i, rfl⟩, apply infi_le } end lemma countable_binfi_eq_infi_seq' [complete_lattice α] {B : set ι} (Bcbl : countable B) (f : ι → α) {i₀ : ι} (h : f i₀ = ⊤) : ∃ (x : ℕ → ι), (⨅ t ∈ B, f t) = ⨅ i, f (x i) := begin cases B.eq_empty_or_nonempty with hB Bnonempty, { rw [hB, infi_emptyset], use λ n, i₀, simp [h] }, { exact countable_binfi_eq_infi_seq Bcbl Bnonempty f } end lemma countable_binfi_principal_eq_seq_infi {B : set (set α)} (Bcbl : countable B) : ∃ (x : ℕ → set α), (⨅ t ∈ B, 𝓟 t) = ⨅ i, 𝓟 (x i) := countable_binfi_eq_infi_seq' Bcbl 𝓟 principal_univ section is_countably_generated /-- If `f` is countably generated and `f.has_basis p s`, then `f` admits a decreasing basis enumerated by natural numbers such that all sets have the form `s i`. 
More precisely, there is a sequence `i n` such that `p (i n)` for all `n` and `s (i n)` is a decreasing sequence of sets which forms a basis of `f`-/ lemma has_basis.exists_antitone_subbasis {f : filter α} [h : f.is_countably_generated] {p : ι → Prop} {s : ι → set α} (hs : f.has_basis p s) : ∃ x : ℕ → ι, (∀ i, p (x i)) ∧ f.has_antitone_basis (λ i, s (x i)) := begin obtain ⟨x', hx'⟩ : ∃ x : ℕ → set α, f = ⨅ i, 𝓟 (x i), { unfreezingI { rcases h with ⟨s, hsc, rfl⟩ }, rw generate_eq_binfi, exact countable_binfi_principal_eq_seq_infi hsc }, have : ∀ i, x' i ∈ f := λ i, hx'.symm ▸ (infi_le (λ i, 𝓟 (x' i)) i) (mem_principal_self _), let x : ℕ → {i : ι // p i} := λ n, nat.rec_on n (hs.index _ $ this 0) (λ n xn, (hs.index _ $ inter_mem (this $ n + 1) (hs.mem_of_mem xn.coe_prop))), have x_mono : antitone (λ i, s (x i)), { refine antitone_nat_of_succ_le (λ i, _), exact (hs.set_index_subset _).trans (inter_subset_right _ _) }, have x_subset : ∀ i, s (x i) ⊆ x' i, { rintro (_|i), exacts [hs.set_index_subset _, subset.trans (hs.set_index_subset _) (inter_subset_left _ _)] }, refine ⟨λ i, x i, λ i, (x i).2, _⟩, have : (⨅ i, 𝓟 (s (x i))).has_antitone_basis (λ i, s (x i)) := ⟨has_basis_infi_principal (directed_of_sup x_mono), x_mono⟩, convert this, exact le_antisymm (le_infi $ λ i, le_principal_iff.2 $ by cases i; apply hs.set_index_mem) (hx'.symm ▸ le_infi (λ i, le_principal_iff.2 $ this.to_has_basis.mem_iff.2 ⟨i, trivial, x_subset i⟩)) end /-- A countably generated filter admits a basis formed by an antitone sequence of sets. -/ lemma exists_antitone_basis (f : filter α) [f.is_countably_generated] : ∃ x : ℕ → set α, f.has_antitone_basis x := let ⟨x, hxf, hx⟩ := f.basis_sets.exists_antitone_subbasis in ⟨x, hx⟩ lemma exists_antitone_seq (f : filter α) [f.is_countably_generated] : ∃ x : ℕ → set α, antitone x ∧ ∀ {s}, (s ∈ f ↔ ∃ i, x i ⊆ s) := let ⟨x, hx⟩ := f.exists_antitone_basis in ⟨x, hx.antitone, λ s, by simp [hx.to_has_basis.mem_iff]⟩ instance inf.is_countably_generated (f g : filter α) [is_countably_generated f] [is_countably_generated g] : is_countably_generated (f ⊓ g) := begin rcases f.exists_antitone_basis with ⟨s, hs⟩, rcases g.exists_antitone_basis with ⟨t, ht⟩, exact has_countable_basis.is_countably_generated ⟨hs.to_has_basis.inf ht.to_has_basis, set.countable_encodable _⟩ end instance comap.is_countably_generated (l : filter β) [l.is_countably_generated] (f : α → β) : (comap f l).is_countably_generated := let ⟨x, hxl⟩ := l.exists_antitone_basis in has_countable_basis.is_countably_generated ⟨hxl.to_has_basis.comap _, countable_encodable _⟩ instance sup.is_countably_generated (f g : filter α) [is_countably_generated f] [is_countably_generated g] : is_countably_generated (f ⊔ g) := begin rcases f.exists_antitone_basis with ⟨s, hs⟩, rcases g.exists_antitone_basis with ⟨t, ht⟩, exact has_countable_basis.is_countably_generated ⟨hs.to_has_basis.sup ht.to_has_basis, set.countable_encodable _⟩ end end is_countably_generated @[instance] lemma is_countably_generated_seq [encodable β] (x : β → set α) : is_countably_generated (⨅ i, 𝓟 $ x i) := begin use [range x, countable_range x], rw [generate_eq_binfi, infi_range] end lemma is_countably_generated_of_seq {f : filter α} (h : ∃ x : ℕ → set α, f = ⨅ i, 𝓟 $ x i) : f.is_countably_generated := let ⟨x, h⟩ := h in by rw h ; apply is_countably_generated_seq lemma is_countably_generated_binfi_principal {B : set $ set α} (h : countable B) : is_countably_generated (⨅ (s ∈ B), 𝓟 s) := is_countably_generated_of_seq (countable_binfi_principal_eq_seq_infi h) lemma 
is_countably_generated_iff_exists_antitone_basis {f : filter α} : is_countably_generated f ↔ ∃ x : ℕ → set α, f.has_antitone_basis x := begin split, { introI h, exact f.exists_antitone_basis }, { rintros ⟨x, h⟩, rw h.to_has_basis.eq_infi, exact is_countably_generated_seq x }, end @[instance] lemma is_countably_generated_principal (s : set α) : is_countably_generated (𝓟 s) := is_countably_generated_of_seq ⟨λ _, s, infi_const.symm⟩ @[instance] lemma is_countably_generated_bot : is_countably_generated (⊥ : filter α) := @principal_empty α ▸ is_countably_generated_principal _ @[instance] lemma is_countably_generated_top : is_countably_generated (⊤ : filter α) := @principal_univ α ▸ is_countably_generated_principal _ end filter
State Before: M : Type u_3 A : Type ?u.225218 B : Type ?u.225221 inst✝² : Monoid M N : Type u_1 F : Type u_2 inst✝¹ : Monoid N inst✝ : MonoidHomClass F M N f : F m : M ⊢ map f (powers m) = powers (↑f m) State After: no goals Tactic: simp only [powers_eq_closure, map_mclosure f, Set.image_singleton]
%report.tex % the glue for everything else %\includeonly{simulation} %\documentstyle[a4]{report} \documentstyle{report} \renewcommand{\baselinestretch}{1.5} \begin{document} \parindent 0pt \setlength{\parskip}{3ex} \include{title} \include{abstract} \tableofcontents \chapter{Introduction} \input{intro} \input{risc} \input{urisc} \newpage \input{formal} \include{architecture} \include{construction} \include{host} \include{execution} \include{control} \include{alu} \include{memory} \include{specification} \include{performance} \include{conclusions} \bibliographystyle{alpha} \bibliography{report} \appendix \include{credits} \include{components} \include{building} \include{mon} \include{epld} \include{simulation} \chapter{Circuit Diagrams} \end{document}
import Data.List (transpose) import Data.Complex type Matrix a = [[a]] main :: IO () main = mapM_ (\a -> do putStrLn "\nMatrix:" mapM_ print a putStrLn "Conjugate Transpose:" mapM_ print (conjTranspose a) putStrLn $ "Hermitian? " ++ show (isHermitianMatrix a) putStrLn $ "Normal? " ++ show (isNormalMatrix a) putStrLn $ "Unitary? " ++ show (isUnitaryMatrix a)) ([[[3, 2:+1], [2:+(-1), 1 ]], [[1, 1, 0], [0, 1, 1], [1, 0, 1]], [[sqrt 2/2:+0, sqrt 2/2:+0, 0 ], [0:+sqrt 2/2, 0:+ (-sqrt 2/2), 0 ], [0, 0, 0:+1]]] :: [Matrix (Complex Double)]) isHermitianMatrix, isNormalMatrix, isUnitaryMatrix :: RealFloat a => Matrix (Complex a) -> Bool isHermitianMatrix a = a `approxEqualMatrix` conjTranspose a isNormalMatrix a = (a `mmul` conjTranspose a) `approxEqualMatrix` (conjTranspose a `mmul` a) isUnitaryMatrix a = (a `mmul` conjTranspose a) `approxEqualMatrix` ident (length a) approxEqualMatrix :: (Fractional a, Ord a) => Matrix (Complex a) -> Matrix (Complex a) -> Bool approxEqualMatrix a b = length a == length b && length (head a) == length (head b) && and (zipWith approxEqualComplex (concat a) (concat b)) where approxEqualComplex (rx :+ ix) (ry :+ iy) = abs (rx - ry) < eps && abs (ix - iy) < eps eps = 1e-14 mmul :: Num a => Matrix a -> Matrix a -> Matrix a mmul a b = [[sum (zipWith (*) row column) | column <- transpose b] | row <- a] ident :: Num a => Int -> Matrix a ident size = [[fromIntegral $ div a b * div b a | a <- [1..size]] | b <- [1..size]] conjTranspose :: Num a => Matrix (Complex a) -> Matrix (Complex a) conjTranspose = map (map conjugate) . transpose
```python
import numpy as np
import math as m
import matplotlib.pyplot as plt
import sympy

def cel_to_kel(D):
    K = D + 273.15
    return K

def kel_to_cel(K):
    D = K - 273.15
    return D
```

# Useful constants

```python
sigma = 5.67*10**(-8)
```

# Exercise 3.1 - Page 69

### Assumptions:

1. Steady-state conditions.
2. One-dimensional heat transfer by conduction through the skin/fat and insulation layers.
3. Negligible contact resistance.
4. Uniform thermal conductivities.
5. Radiation exchange between the surface of the coat and the surroundings may be treated as exchange between a small surface and a large enclosure at the air temperature.
6. Liquid water is opaque to thermal radiation.
7. Solar radiation is negligible.
8. The body is completely immersed in water in part 2.

```python
# Data
T_i = cel_to_kel(35)
T_inf = T_viz = cel_to_kel(10)   # air/water and surroundings at 10 ºC, converted to kelvin
k_pg = 0.3
L_pg = 3e-3
epsilon = 0.95
k_iso = 0.014
h1 = 2
h2 = 200
hr = 5.9
q = 100
A = 1.8

# Analysis
R_tot = (T_i-T_inf)/q
print('R_tot = {:.6f} K/W'.format(R_tot))

# For air
L_iso = k_iso*(A*R_tot-L_pg/k_pg-1/(h1+hr))
print('For air: L_iso = {:.6e} m'.format(L_iso))

# For water
L_iso = k_iso*(A*R_tot-L_pg/k_pg-1/(h2))
print('For water: L_iso = {:.6e} m'.format(L_iso))

T_p = T_i - q*L_pg/(k_pg*A)
print('Skin temperature: T_p = {:.6f} K = {:.6f} ºC'.format(T_p,kel_to_cel(T_p)))
```

    R_tot = 0.250000 K/W
    For air: L_iso = 4.387848e-03 m
    For water: L_iso = 6.090000e-03 m
    Skin temperature: T_p = 307.594444 K = 34.444444 ºC

# Exercise 3.2 - Page 71

### Assumptions:

1. Steady-state conditions.
2. One-dimensional conduction (negligible heat transfer through the sides of the assembly).
3. Negligible thermal resistance in the chip (isothermal chip).
4. Constant properties.
5. Negligible radiation exchange with the surroundings.

### Energy balance:

$ \Large \dot{E}_{st}=\dot{E}_{in}+\dot{E}_{g}-\dot{E}_{out} \therefore 0=\dot{E}_{g}-\dot{E}_{out} \therefore \dot{E}_{g}=\dot{E}_{out} \therefore {q_c}''={q_1}''+{q_2}'' \therefore \\ \Large {q_c}'' = \frac{T_c-T_\infty}{1/h} + \frac{T_c-T_\infty}{{R}''_{t,c} + (L/k_{al}) + (1/h)}$

```python
L = 8e-3
W = 10e-3
T_inf = cel_to_kel(25)
h = 100
flux_q_c = 10**4
k_al = 239
R_tc = 0.9e-4

T_c = T_inf + flux_q_c*(h+1/(R_tc+(L/k_al)+(1/h)))**(-1)
print('Chip temperature: {:.6f} K = {:.6f} ºC'.format(T_c,kel_to_cel(T_c)))
```

    Chip temperature: 348.456788 K = 75.306788 ºC

# Exercise 3.3 - Page 73

### Assumptions:

1. Steady-state conditions.
2. One-dimensional heat transfer.
3. Constant properties.
4. Negligible contact resistances.
5. Negligible temperature differences within the silicon layer.
```python
k_v = 1.4
L_v = 3e-3
L_a = 0.1e-3
k_a = 145
L_s = 0.1e-3
L_n = 2e-3
a = 0.553
b = 0.001
G_s = 700
ref_v = 7/100
abs_v = 10/100
abs_s = 83/100
epsilon_v = 0.9
C = 1
W = 0.1
h = 35
T_inf = T_viz = cel_to_kel(20)

K = L_a/k_a + L_v/k_v

P, Tsi, Tvtop = sympy.symbols("P Tsi Tvtop", real=True)

eq1 = sympy.Eq(P/(0.83 * G_s * C * W) + b * Tsi, a)
eq2 = sympy.Eq(Tsi*(1/K-0.83*G_s*b)-Tvtop*(1/K), 0.83*(1-a))
eq3 = sympy.Eq(h*(Tvtop-T_inf)+epsilon_v*sigma*(Tvtop**4-T_viz**4)-0.83*G_s*b*Tsi, 0.83*G_s*(1-a)+0.1*G_s)

sympy.solve([eq1, eq2, eq3])
```

    [{P: 14.3466715857189, Tsi: 306.069335874030, Tvtop: 305.687361634752},
     {P: 88.2376344520889, Tsi: -965.720042204628, Tvtop: -964.518129075350}]

```python
import sympy

x, y = sympy.symbols("x y", real=True)

eq1 = sympy.Eq((6.3205 - x)**2 + (-0.0347 - y)**2, 1.4869**2)
eq2 = sympy.Eq((8.3769 - x)**2 + (-0.6242 - y)**2, 0.8459**2)

sympy.solve([eq1, eq2])
```

    [{x: 7.56236430542394, y: -0.852406950510969},
     {x: 7.80697412192104, y: 0.000885037011715321}]

Solve the system of equations x0 + 2 * x1 = 1 and 3 * x0 + 5 * x1 = 2 (a `numpy.linalg.solve` sketch is given at the end of this notebook, after the solution-tables list):

# Exercise 3.4 - Page 74
> Not understood

### Assumptions:

1. Steady-state conditions.
2. One-dimensional heat transfer.
3. The heated and sensing islands are isothermal.
4. Negligible radiation exchange between the surfaces and the surroundings.
5. Negligible convective losses.
6. Negligible ohmic heating in the platinum lines.
7. Constant properties.
8. Negligible contact resistance between the nanotube and the islands.

# Exercise 3.5 - Page 77

### Assumptions:

1. Steady-state conditions.
2. One-dimensional conduction in the x direction.
3. No heat generation inside the cone.
4. Constant properties.

```python
a = 0.25
x1 = 50e-3
x2 = 250e-3
T1 = 400
T2 = 600
k = 3.46

qx = np.pi*a**2*k*(T1 - T2)/(4*((1/x1)-(1/x2)))

x = np.linspace(x1,x2,1000)
T = T1-(4*qx/(np.pi*a**2*k))*((1/x1)-(1/x))

print('The heat transfer rate is {:.4f} W'.format(qx))
```

    The heat transfer rate is -2.1230 W

```python
fig = plt.figure(figsize=[16,9])
ax = fig.subplots(1)

# Plot
ax.set_ylabel('$T(K)$',fontsize=16)
ax.set_xlabel('$x$(m)',fontsize=16)
ax.set_title('Temperature (T) as a function of distance (x)',fontsize=20)
ax.plot(x, T,'b', linewidth=4)
ax.grid()
plt.show()
```

# Exercise 3.6 - Page 80
> Conceptual

### Assumptions:

1. Steady-state conditions.
2. One-dimensional heat transfer in the radial (cylindrical) direction.
3. Negligible thermal resistance in the tube wall.
4. Constant properties of the insulation.
5. Negligible radiation exchange between the outer surface of the insulation and the surroundings.

# Exercise 3.7 - Page 84

### Assumptions:

1. Steady-state conditions.
2. One-dimensional conduction in the x direction.
3. Negligible contact resistance between the walls.
4. Adiabatic inner surface of A.
5. Constant properties of materials A and B.
```python
# Data
dot_q = 1.5e6
ka = 75
La = 50e-3
kb = 150
Lb = 20e-3
T_inf = cel_to_kel(30)
h = 1e3

T2 = T_inf + dot_q*La/h
print('Temperature T2 is {:.4f} K = {:.4f} C'.format(T2,kel_to_cel(T2)))

flux_q = h*(T2-T_inf)
T1 = T_inf + (Lb/kb+1/h)*flux_q
print('Temperature T1 is {:.4f} K = {:.4f} C'.format(T1,kel_to_cel(T1)))

T0 = dot_q*La**2/(2*ka) + T1
print('Temperature T0 is {:.4f} K = {:.4f} C'.format(T0,kel_to_cel(T0)))
```

    Temperature T2 is 378.1500 K = 105.0000 C
    Temperature T1 is 388.1500 K = 115.0000 C
    Temperature T0 is 413.1500 K = 140.0000 C

# Exercise 3.8 - Page 86
> Conceptual

### Assumptions:

1. Steady-state conditions.
2. One-dimensional radial conduction.
3. Constant properties.
4. Uniform volumetric heat generation.
5. Adiabatic outer surface.

# Exercise 3.9 - Page 92

### Assumptions:

1. Steady-state conditions.
2. Uniform temperature over the rod cross-section.
3. Constant properties.
4. Negligible radiation exchange with the surroundings.
5. Uniform heat transfer coefficient.
6. Infinitely long rod.

```python
D = 5e-3
Tb = 100
Tinf = 25
h = 100
k_cu = 398    # copper
k_al = 180    # aluminum
k_aco = 14    # steel (aço)

P = np.pi * D
Atr = np.pi * D**2 / 4

theta_b = Tb - Tinf

# Fin parameter m = sqrt(h*P/(k*Atr)) = sqrt(4*h/(k*D)); infinite fin: theta/theta_b = exp(-m*x)
m_cu = (4*h/(k_cu*D))**0.5
m_al = (4*h/(k_al*D))**0.5
m_aco = (4*h/(k_aco*D))**0.5

x = np.linspace(0,300e-3,1000)

Tcu = Tinf + theta_b * np.exp(-m_cu*x)
Tal = Tinf + theta_b * np.exp(-m_al*x)
Taco = Tinf + theta_b * np.exp(-m_aco*x)

fig = plt.figure(figsize=[16,9])
ax = fig.subplots(1)

# Plot
ax.set_ylabel('$T(K)$',fontsize=16)
ax.set_xlabel('$x$(m)',fontsize=16)
ax.set_title('Temperature (T) as a function of distance (x)',fontsize=20)
ax.plot(x, Tcu,'b', linewidth=4)
ax.plot(x, Tal,'r', linewidth=4)
ax.plot(x, Taco,'g', linewidth=4)
ax.grid()
plt.show()
```

# Exercise 3.10 - Page 98

### Assumptions:

1. Steady-state conditions.
2. Uniform temperature across the fin thickness.
3. Constant properties.
4. Negligible radiation exchange with the surroundings.
5. Uniform convection coefficient over the outer surface (with or without fins).

```python
H = 100e-3
D = 50e-3
r1 = D/2
q_t = 2e3
Tinf = 300
h = 75
n = 10
t = 4e-3
L = 20e-3
r2 = r1 + L
k = 186
```

The data cell above stops before any calculation; a sketch of the fin-array computation is given after the tables list below.

# Solution tables

> ### TABLE 3.3 One-dimensional, steady-state solutions of the heat equation with no generation
> ### TABLE 3.4 Temperature distributions and heat loss rates for fins of uniform cross section
> ### TABLE 3.5 Efficiency of common fin profiles
> ### TABLE C.1 One-dimensional, steady-state solutions of the heat equation for plane, cylindrical and spherical walls with uniform generation and asymmetric surface conditions
> ### TABLE C.2 Alternative surface conditions and energy balances for one-dimensional, steady-state solutions of the heat equation in plane, cylindrical and spherical walls with uniform generation
> ### TABLE C.3 One-dimensional, steady-state solutions of the heat equation with uniform generation in a plane wall with one adiabatic surface, in a solid cylinder and in a solid sphere
> ### TABLE C.4 Alternative surface conditions and energy balances for one-dimensional, steady-state solutions of the heat equation with uniform generation in a plane wall with one adiabatic surface, in a solid cylinder and in a solid sphere

```python

```
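Exercise 3.10 above only lists the data for the finned cylinder. Below is a minimal sketch, not part of the original notebook, of how the calculation could continue: it assumes the rectangular-profile annular-fin efficiency correlation of Table 3.5 (written with modified Bessel functions, so `scipy` is an added dependency) and the usual overall-surface-efficiency relation for a fin array, and it solves for the base temperature that delivers the prescribed `q_t`. The names `eta_f`, `eta_o`, `A_f`, `A_b` are mine; the result should be checked against the fin-efficiency chart before being relied on.

```python
# Hedged sketch (my addition): annular-fin array of Exercise 3.10.
import numpy as np
from scipy.special import i0, i1, k0, k1

# Data (mirrors the cell above)
H, D = 100e-3, 50e-3
r1 = D / 2
q_t = 2e3                     # required heat rate, W
Tinf, h = 300.0, 75.0
n, t, L = 10, 4e-3, 20e-3     # number of fins, thickness, length
r2 = r1 + L
k = 186.0

# Corrected fin dimensions and fin parameter
r2c = r2 + t / 2
A_f = 2 * np.pi * (r2c**2 - r1**2)          # surface area of one annular fin
m = np.sqrt(2 * h / (k * t))

# Efficiency of a single annular fin of rectangular profile (Table 3.5)
C2 = (2 * r1 / m) / (r2c**2 - r1**2)
eta_f = C2 * (k1(m * r1) * i1(m * r2c) - i1(m * r1) * k1(m * r2c)) / (
        i0(m * r1) * k1(m * r2c) + k0(m * r1) * i1(m * r2c))

# Overall surface efficiency of the array and required base temperature
A_b = 2 * np.pi * r1 * (H - n * t)          # exposed (unfinned) base surface
A_t = n * A_f + A_b
eta_o = 1 - (n * A_f / A_t) * (1 - eta_f)
theta_b = q_t / (eta_o * h * A_t)           # from q_t = eta_o * h * A_t * theta_b
T_b = Tinf + theta_b
print('eta_f = {:.3f}, eta_o = {:.3f}, T_b = {:.1f} K'.format(eta_f, eta_o, T_b))
```

Using the Bessel-function form of the efficiency instead of reading the chart keeps the cell self-contained and easy to re-run when the fin geometry or the coefficient `h` changes.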
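The stray prompt earlier in the notebook ("Solve the system of equations x0 + 2 * x1 = 1 and 3 * x0 + 5 * x1 = 2"), which appears to be quoted from the NumPy documentation of `numpy.linalg.solve`, has no accompanying cell; a minimal sketch:

```python
# Sketch for the dangling prompt: solve x0 + 2*x1 = 1, 3*x0 + 5*x1 = 2.
import numpy as np

a = np.array([[1, 2], [3, 5]])
b = np.array([1, 2])
x = np.linalg.solve(a, b)
print(x)                       # expected [-1.  1.]
print(np.allclose(a @ x, b))   # sanity check: True
```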
(* Copyright (C) 2017 M.A.L. Marques 2019 Susi Lehtola This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. *) (* type: mgga_exc *) (* prefix: mgga_x_gdme_params *params; assert(p->params != NULL); params = (mgga_x_gdme_params * )(p->params); *) gdme_at := (params_a_AA + 3/5*params_a_BB)*2^(1/3)/(X_FACTOR_C*(3*Pi^2)^(2/3)): gdme_bt := params_a_BB/(X_FACTOR_C*2^(1/3)*(3*Pi^2)^(4/3)): gdme_f := (x, u, t) -> gdme_at + gdme_bt*((params_a_a^2 - params_a_a + 1/2)*u - 2*t): f := (rs, z, xt, xs0, xs1, u0, u1, t0, t1) -> mgga_exchange(gdme_f, rs, z, xs0, xs1, u0, u1, t0, t1):
A simple framework for developing a conceptual beneficiation flow sheet. Chinese-made iron and copper ore flotation machines are used in such beneficiation flow sheets, including processes that recover vanadium-bearing slag from titaniferous material. Ore dressing means the production process in which the extracted ore goes through several stages of size reduction and separation; the equipment of a mineral processing plant covers feeders, crushers, ball mills and classifiers, and ball mills come in two kinds, grate type and overflow type, according to the discharge arrangement. Grinding and milling costs are a major factor in beneficiation plant economics. A beneficiation process is designed for each specific ore: even for the same type of ore, different mining plants may use different processes, and recent advances in low-grade iron ore beneficiation show plants installed in the same region being planned with different flow sheets, with weakly magnetic iron ores requiring their own beneficiation routes. As a leading global manufacturer of crushing, grinding and mining equipment, we offer advanced, reasonable solutions for any size-reduction requirement, including quarry, aggregate and different kinds of minerals; SF-type copper ore froth flotation cells, such as the newly designed SF20 flotation machine from Shicheng Jinchuan, are popular in India and elsewhere in Asia. Yantai Orient Metallurgical Design and Research Institute Co., Ltd, founded in 2002, holds a metallurgical design and research qualification authorized by the Ministry of Housing and Urban-Rural Development and works mainly on mine design as the core technical strength of the Jinpeng Group. On the smelting side, molten sulfide electrolysis is similar to the Hall-Héroult process, which uses electrolysis to produce aluminum, but it operates at a higher temperature to enable production of liquid copper; the Allanore lab's new molten sulfide electrolysis method better handles trace metals and other impurities that come with the copper, allowing separation of multiple elements at high purity from the same production process.
## Design Matrix: * Imputations * Design Matrix: $$ \begin{align} t\ | \ y_t \xrightarrow{f} v_t, | \ \ x^1_t, \ x^2_t, \ \ldots \ x^k_t \xrightarrow{g} \ g_1x^1_t, \ g_2x^2_t, \ldots, g_kx^k_t \end{align} $$ $$ \text{DM}_t = \text{DM}_t(y_t, f, f^{-1}, \{x^i_t\}_{i=1,k}, \{g^i_t\}_{i=1,k}) $$ ### Exploratory analysis * Autocorrelation: * $ \text{ACF}(x_t) $ * $ \text{PACF}(x_t) $ * Scatter Plots: * $y_t$ next to $v_t$ and $x_t^{k}$ next to $g_kx^k_t$ * $y_t \ \text{vs.} \ x^j_{t} $ for $j=1,k$ with LOWESS for dependency shape analysis * $x_t \ \text{vs.} \ x_{t-h}$ with LOWESS for autocorrelation analysis * $y_t \ \text{vs.} \ x^k_{t-h}$ for given $k$ with LOWESS for lagged-leading relationship ## Calibrator: $$ \mathbb{C} ( {\mathcal{H}yperParams}) \rightarrow \mathbb{C} $$ ## Model: $$ \text{M} = \mathbb{M}(\text{C}, \ \text{DM} ) \to \{\hat{\theta}_l\}_{l=1,m} $$ ### Model Specification $$ \{\hat{\theta}_l\} \xrightarrow[\text{GridSearch}]{I(\theta): \ AIC, \ AICc, \ BIC } \{\hat{\theta^*_l}\} $$ ### Residuals Diagnostics: $$ \hat{\varepsilon}_t = v_{{t}}-\hat{v}_t; \ \hat{\varepsilon}_t^{std}; \ \hat{\varepsilon}_t^{stu} $$ $$ \text{RD} = \mathcal{RD}(\hat{\varepsilon}_t, \hat{\varepsilon}_t^{std}, \hat{\varepsilon}_t^{stu} ) $$ ### Model Selection: #### Cross Validation $$ \text{CV} = \mathbb{CV}(\text{M}, \text{Partitioning}, \text{ Performance Metric}) $$ $$ \mathbb{C} \xrightarrow{\varepsilon_{CV}} \mathbb{C}^* $$ ## Forecast: $$ \text{C}^*(DM_{t+1}=g_i({x^i_{t+1}})_{i=1,k} \ | \ \{\hat{\theta^*}_l\}_{l=1,m}) = \hat{v}_{t+1} \xrightarrow{f^{-1} , \ y_t} \hat{y}_{t+1} $$ ```python ```
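The pipeline above (design matrix → calibrator → model → cross-validation → forecast) is stated abstractly, and the final code cell is empty. Below is a minimal, hypothetical sketch of the calibration/CV/forecast loop for a univariate series: a plain least-squares AR(p) fit stands in for the generic model $\mathbb{M}$, the AR order `p` plays the role of the hyperparameter grid-searched by the calibrator, and rolling-origin splits stand in for the "Partitioning" argument of $\mathbb{CV}$. All names (`fit_ar`, `forecast_ar`, `rolling_cv_mse`) and the toy series are illustrative, not part of the original notes.

```python
# Hypothetical sketch of the Calibrator -> Model -> CV -> Forecast loop above.
import numpy as np

def fit_ar(v, p):
    """Least-squares AR(p) fit; returns coefficients theta_hat (intercept first)."""
    X = np.column_stack([v[p - j - 1:len(v) - j - 1] for j in range(p)])  # lagged regressors
    X = np.column_stack([np.ones(len(X)), X])                             # intercept column
    y = v[p:]
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return theta

def forecast_ar(v, theta):
    """One-step-ahead forecast v_hat_{t+1} from the last p observations."""
    p = len(theta) - 1
    return theta[0] + theta[1:] @ v[-1:-p - 1:-1]

def rolling_cv_mse(v, p, n_folds=5, min_train=30):
    """Rolling-origin CV: expand the training window, score one-step-ahead errors."""
    errs = []
    for cut in np.linspace(min_train, len(v) - 1, n_folds, dtype=int):
        theta = fit_ar(v[:cut], p)
        errs.append((v[cut] - forecast_ar(v[:cut], theta)) ** 2)
    return float(np.mean(errs))

# Toy transformed series v_t (placeholder for f(y_t))
rng = np.random.default_rng(0)
v = np.cumsum(rng.normal(size=200)) * 0.1 + np.sin(np.arange(200) / 10)

# Calibrator: grid-search the hyperparameter p by CV error, refit, then forecast.
grid = range(1, 6)
p_star = min(grid, key=lambda p: rolling_cv_mse(v, p))
theta_star = fit_ar(v, p_star)
print('p* =', p_star, ' v_hat_{t+1} =', forecast_ar(v, theta_star))
```

The same skeleton extends to the full design matrix by appending the transformed exogenous columns $g_k x^k_t$ to the lag matrix inside `fit_ar`, and the inverse transform $f^{-1}$ is applied to the forecast at the very end.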
/- Copyright (c) 2015 Microsoft Corporation. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Leonardo de Moura, Mario Carneiro ! This file was ported from Lean 3 source module group_theory.perm.via_embedding ! leanprover-community/mathlib commit c3291da49cfa65f0d43b094750541c0731edc932 ! Please do not edit these lines, except to modify the commit id ! if you have ported upstream changes. -/ import Mathbin.GroupTheory.Perm.Basic import Mathbin.Logic.Equiv.Set /-! # `equiv.perm.via_embedding`, a noncomputable analogue of `equiv.perm.via_fintype_embedding`. > THIS FILE IS SYNCHRONIZED WITH MATHLIB4. > Any changes to this file require a corresponding PR to mathlib4. -/ variable {α β : Type _} namespace Equiv namespace Perm variable (e : Perm α) (ι : α ↪ β) open Classical #print Equiv.Perm.viaEmbedding /- /-- Noncomputable version of `equiv.perm.via_fintype_embedding` that does not assume `fintype` -/ noncomputable def viaEmbedding : Perm β := extendDomain e (ofInjective ι.1 ι.2) #align equiv.perm.via_embedding Equiv.Perm.viaEmbedding -/ #print Equiv.Perm.viaEmbedding_apply /- theorem viaEmbedding_apply (x : α) : e.viaEmbedding ι (ι x) = ι (e x) := extendDomain_apply_image e (ofInjective ι.1 ι.2) x #align equiv.perm.via_embedding_apply Equiv.Perm.viaEmbedding_apply -/ #print Equiv.Perm.viaEmbedding_apply_of_not_mem /- theorem viaEmbedding_apply_of_not_mem (x : β) (hx : x ∉ Set.range ι) : e.viaEmbedding ι x = x := extendDomain_apply_not_subtype e (ofInjective ι.1 ι.2) hx #align equiv.perm.via_embedding_apply_of_not_mem Equiv.Perm.viaEmbedding_apply_of_not_mem -/ #print Equiv.Perm.viaEmbeddingHom /- /-- `via_embedding` as a group homomorphism -/ noncomputable def viaEmbeddingHom : Perm α →* Perm β := extendDomainHom (ofInjective ι.1 ι.2) #align equiv.perm.via_embedding_hom Equiv.Perm.viaEmbeddingHom -/ /- warning: equiv.perm.via_embedding_hom_apply -> Equiv.Perm.viaEmbeddingHom_apply is a dubious translation: lean 3 declaration is forall {α : Type.{u1}} {β : Type.{u2}} (e : Equiv.Perm.{succ u1} α) (ι : Function.Embedding.{succ u1, succ u2} α β), Eq.{succ u2} (Equiv.Perm.{succ u2} β) (coeFn.{max (succ u2) (succ u1), max (succ u1) (succ u2)} (MonoidHom.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) (fun (_x : MonoidHom.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) => (Equiv.Perm.{succ u1} α) -> (Equiv.Perm.{succ u2} β)) (MonoidHom.hasCoeToFun.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} 
(Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) (Equiv.Perm.viaEmbeddingHom.{u1, u2} α β ι) e) (Equiv.Perm.viaEmbedding.{u1, u2} α β e ι) but is expected to have type forall {α : Type.{u1}} {β : Type.{u2}} (e : Equiv.Perm.{succ u1} α) (ι : Function.Embedding.{succ u1, succ u2} α β), Eq.{succ u2} ((fun ([email protected]._hyg.2391 : Equiv.Perm.{succ u1} α) => Equiv.Perm.{succ u2} β) e) (FunLike.coe.{max (succ u1) (succ u2), succ u1, succ u2} (MonoidHom.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) (Equiv.Perm.{succ u1} α) (fun (_x : Equiv.Perm.{succ u1} α) => (fun ([email protected]._hyg.2391 : Equiv.Perm.{succ u1} α) => Equiv.Perm.{succ u2} β) _x) (MulHomClass.toFunLike.{max u1 u2, u1, u2} (MonoidHom.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (MulOneClass.toMul.{u1} (Equiv.Perm.{succ u1} α) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α))))) (MulOneClass.toMul.{u2} (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) (MonoidHomClass.toMulHomClass.{max u1 u2, u1, u2} (MonoidHom.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β)))) (MonoidHom.monoidHomClass.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))))) 
(Equiv.Perm.viaEmbeddingHom.{u1, u2} α β ι) e) (Equiv.Perm.viaEmbedding.{u1, u2} α β e ι) Case conversion may be inaccurate. Consider using '#align equiv.perm.via_embedding_hom_apply Equiv.Perm.viaEmbeddingHom_applyₓ'. -/ theorem viaEmbeddingHom_apply : viaEmbeddingHom ι e = viaEmbedding e ι := rfl #align equiv.perm.via_embedding_hom_apply Equiv.Perm.viaEmbeddingHom_apply /- warning: equiv.perm.via_embedding_hom_injective -> Equiv.Perm.viaEmbeddingHom_injective is a dubious translation: lean 3 declaration is forall {α : Type.{u1}} {β : Type.{u2}} (ι : Function.Embedding.{succ u1, succ u2} α β), Function.Injective.{succ u1, succ u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (coeFn.{max (succ u2) (succ u1), max (succ u1) (succ u2)} (MonoidHom.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) (fun (_x : MonoidHom.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) => (Equiv.Perm.{succ u1} α) -> (Equiv.Perm.{succ u2} β)) (MonoidHom.hasCoeToFun.{u1, u2} (Equiv.Perm.{succ u1} α) (Equiv.Perm.{succ u2} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} α) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} α) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} α) (Equiv.Perm.permGroup.{u1} α)))) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} β) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} β) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} β) (Equiv.Perm.permGroup.{u2} β))))) (Equiv.Perm.viaEmbeddingHom.{u1, u2} α β ι)) but is expected to have type forall {α : Type.{u2}} {β : Type.{u1}} (ι : Function.Embedding.{succ u2, succ u1} α β), Function.Injective.{succ u2, succ u1} (Equiv.Perm.{succ u2} α) (Equiv.Perm.{succ u1} β) (FunLike.coe.{max (succ u2) (succ u1), succ u2, succ u1} (MonoidHom.{u2, u1} (Equiv.Perm.{succ u2} α) (Equiv.Perm.{succ u1} β) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} α) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} α) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} α) (Equiv.Perm.permGroup.{u2} α)))) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} β) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} β) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} β) (Equiv.Perm.permGroup.{u1} β))))) (Equiv.Perm.{succ u2} α) (fun (_x : Equiv.Perm.{succ u2} α) => (fun ([email protected]._hyg.2391 : Equiv.Perm.{succ u2} α) => Equiv.Perm.{succ u1} β) _x) (MulHomClass.toFunLike.{max u2 u1, u2, u1} (MonoidHom.{u2, u1} (Equiv.Perm.{succ u2} α) (Equiv.Perm.{succ u1} β) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} α) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} α) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} α) (Equiv.Perm.permGroup.{u2} α)))) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} β) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} β) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} β) 
(Equiv.Perm.permGroup.{u1} β))))) (Equiv.Perm.{succ u2} α) (Equiv.Perm.{succ u1} β) (MulOneClass.toMul.{u2} (Equiv.Perm.{succ u2} α) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} α) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} α) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} α) (Equiv.Perm.permGroup.{u2} α))))) (MulOneClass.toMul.{u1} (Equiv.Perm.{succ u1} β) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} β) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} β) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} β) (Equiv.Perm.permGroup.{u1} β))))) (MonoidHomClass.toMulHomClass.{max u2 u1, u2, u1} (MonoidHom.{u2, u1} (Equiv.Perm.{succ u2} α) (Equiv.Perm.{succ u1} β) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} α) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} α) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} α) (Equiv.Perm.permGroup.{u2} α)))) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} β) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} β) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} β) (Equiv.Perm.permGroup.{u1} β))))) (Equiv.Perm.{succ u2} α) (Equiv.Perm.{succ u1} β) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} α) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} α) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} α) (Equiv.Perm.permGroup.{u2} α)))) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} β) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} β) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} β) (Equiv.Perm.permGroup.{u1} β)))) (MonoidHom.monoidHomClass.{u2, u1} (Equiv.Perm.{succ u2} α) (Equiv.Perm.{succ u1} β) (Monoid.toMulOneClass.{u2} (Equiv.Perm.{succ u2} α) (DivInvMonoid.toMonoid.{u2} (Equiv.Perm.{succ u2} α) (Group.toDivInvMonoid.{u2} (Equiv.Perm.{succ u2} α) (Equiv.Perm.permGroup.{u2} α)))) (Monoid.toMulOneClass.{u1} (Equiv.Perm.{succ u1} β) (DivInvMonoid.toMonoid.{u1} (Equiv.Perm.{succ u1} β) (Group.toDivInvMonoid.{u1} (Equiv.Perm.{succ u1} β) (Equiv.Perm.permGroup.{u1} β))))))) (Equiv.Perm.viaEmbeddingHom.{u2, u1} α β ι)) Case conversion may be inaccurate. Consider using '#align equiv.perm.via_embedding_hom_injective Equiv.Perm.viaEmbeddingHom_injectiveₓ'. -/ theorem viaEmbeddingHom_injective : Function.Injective (viaEmbeddingHom ι) := extendDomainHom_injective (ofInjective ι.1 ι.2) #align equiv.perm.via_embedding_hom_injective Equiv.Perm.viaEmbeddingHom_injective end Perm end Equiv
function f = dot(F, G) %DOT Vector dot product. % DOT(F, G) returns the dot product of the SPHEREFUN objects F and G. DOT(F, % G) is the same as F'*G. % % See also SPHEREFUNV/CROSS. % Copyright 2017 by The University of Oxford and The Chebfun Developers. % See http://www.chebfun.org/ for Chebfun information. if ( isempty(F) || isempty(G) ) f = spherefun(); return end Fc = F.components; Gc = G.components; f = Fc{1}.*Gc{1} + Fc{2}.*Gc{2} + Fc{3}.*Gc{3}; end
lemma reduced_labelling_zero: "j < n \<Longrightarrow> x j = 0 \<Longrightarrow> reduced n x \<noteq> j"
Require Import VST.floyd.base. Require Import VST.floyd.val_lemmas. Require Import VST.floyd.typecheck_lemmas. Definition const_only_isUnOpResultType {CS: compspecs} op typeof_a valueof_a ty : bool := match op with | Cop.Onotbool => match typeof_a with | Tint _ _ _ | Tlong _ _ | Tfloat _ _ => is_int_type ty | Tpointer _ _ => if Archi.ptr64 then match valueof_a with | Vlong v => andb (negb (eqb_type (typeof_a) int_or_ptr_type)) (andb (is_int_type ty) (Z.eqb 0 (Int64.unsigned v))) | _ => false end else match valueof_a with | Vint v => andb (negb (eqb_type typeof_a int_or_ptr_type)) (andb (is_int_type ty) (Z.eqb 0 (Int.unsigned v))) | _ => false end | _ => false end | Cop.Onotint => match Cop.classify_notint (typeof_a) with | Cop.notint_default => false | Cop.notint_case_i _ => (is_int32_type ty) | Cop.notint_case_l _ => (is_long_type ty) end | Cop.Oneg => match Cop.classify_neg (typeof_a) with | Cop.neg_case_i sg => andb (is_int32_type ty) match (typeof_a) with | Tint _ Signed _ => match valueof_a with | Vint v => negb (Z.eqb (Int.signed v) Int.min_signed) | _ => false end | Tlong Signed _ => match valueof_a with | Vlong v => negb (Z.eqb (Int64.signed v) Int64.min_signed) | _ => false end | _ => true end | Cop.neg_case_f => is_float_type ty | Cop.neg_case_s => is_single_type ty | _ => false end | Cop.Oabsfloat =>match Cop.classify_neg (typeof_a) with | Cop.neg_case_i sg => is_float_type ty | Cop.neg_case_l _ => is_float_type ty | Cop.neg_case_f => is_float_type ty | Cop.neg_case_s => is_float_type ty | _ => false end end. (* TODO: binarithType would better be bool type *) Definition const_only_isBinOpResultType {CS: compspecs} op typeof_a1 valueof_a1 typeof_a2 valueof_a2 ty : bool := match op with | Cop.Oadd => match Cop.classify_add (typeof_a1) (typeof_a2) with | Cop.add_case_pi t _ | Cop.add_case_pl t => andb (andb (andb (match valueof_a1 with Vptr _ _ => true | _ => false end) (complete_type cenv_cs t)) (negb (eqb_type (typeof_a1) int_or_ptr_type))) (is_pointer_type ty) | Cop.add_case_ip _ t | Cop.add_case_lp t => andb (andb (andb (match valueof_a2 with Vptr _ _ => true | _ => false end) (complete_type cenv_cs t)) (negb (eqb_type (typeof_a2) int_or_ptr_type))) (is_pointer_type ty) | Cop.add_default => false (* andb (binarithType (typeof a1) (typeof a2) ty deferr reterr) (tc_nobinover Z.add a1 a2) *) end | _ => false (* TODO *) end. Definition const_only_isCastResultType {CS: compspecs} (t1 t2: type) (valueof_a: val) : bool := false. 
(* TODO *) Fixpoint const_only_eval_expr {cs: compspecs} (e: Clight.expr): option val := match e with | Econst_int i (Tint I32 _ _) => Some (Vint i) | Econst_int _ _ => None | Econst_long i ty => None (*Some (Vlong i) *) | Econst_float f (Tfloat F64 _) => Some (Vfloat f) | Econst_float _ _ => None | Econst_single f (Tfloat F32 _) => Some (Vsingle f) | Econst_single _ _ => None | Etempvar id ty => None | Evar _ _ => None | Eaddrof a ty => None | Eunop op a ty => match const_only_eval_expr a with | Some v => if const_only_isUnOpResultType op (typeof a) v ty then Some (eval_unop op (typeof a) v) else None | None => None end | Ebinop op a1 a2 ty => match (const_only_eval_expr a1), (const_only_eval_expr a2) with | Some v1, Some v2 => if const_only_isBinOpResultType op (typeof a1) v1 (typeof a2) v2 ty then Some (eval_binop op (typeof a1) (typeof a2) v1 v2) else None | _, _ => None end | Ecast a ty => match const_only_eval_expr a with | Some v => if const_only_isCastResultType (typeof a) ty v then Some (eval_cast (typeof a) ty v) else None | None => None end | Ederef a ty => None | Efield a i ty => None | Esizeof t t0 => if andb (complete_type cenv_cs t) (eqb_type t0 size_t) then Some (Vptrofs (Ptrofs.repr (sizeof t))) else None | Ealignof t t0 => if andb (complete_type cenv_cs t) (eqb_type t0 size_t) then Some (Vptrofs (Ptrofs.repr (alignof t))) else None end. Lemma const_only_isUnOpResultType_spec: forall {cs: compspecs} rho u e t P, const_only_isUnOpResultType u (typeof e) (eval_expr e rho) t = true -> P |-- denote_tc_assert (isUnOpResultType u e t) rho. Proof. intros. unfold isUnOpResultType. unfold const_only_isUnOpResultType in H. destruct u. + destruct (typeof e); try solve [inv H | rewrite H; exact (@prop_right mpred _ True _ I)]. rewrite !denote_tc_assert_andp. match goal with | |- context [denote_tc_assert (tc_test_eq ?a ?b)] => change (denote_tc_assert (tc_test_eq a b)) with (expr2.denote_tc_assert (tc_test_eq a b)) end. rewrite binop_lemmas2.denote_tc_assert_test_eq'. simpl expr2.denote_tc_assert. unfold_lift. simpl. unfold tc_int_or_ptr_type. destruct Archi.ptr64 eqn:HH. - destruct (eval_expr e rho); try solve [inv H]. rewrite !andb_true_iff in H. destruct H as [? [? ?]]. rewrite H, H0. rewrite Z.eqb_eq in H1. apply andp_right; [exact (@prop_right mpred _ True _ I) |]. apply andp_right; [exact (@prop_right mpred _ True _ I) |]. simpl. rewrite HH. change (P |-- (!! (i = Int64.zero)) && (!! (Int64.zero = Int64.zero)))%logic. apply andp_right; apply prop_right; auto. rewrite <- (Int64.repr_unsigned i), <- H1. auto. - destruct (eval_expr e rho); try solve [inv H]. rewrite !andb_true_iff in H. destruct H as [? [? ?]]. rewrite H, H0. rewrite Z.eqb_eq in H1. apply andp_right; [exact (@prop_right mpred _ True _ I) |]. apply andp_right; [exact (@prop_right mpred _ True _ I) |]. simpl. rewrite HH. change (P |-- (!! (i = Int.zero)) && (!! (Int.zero = Int.zero)))%logic. apply andp_right; apply prop_right; auto. rewrite <- (Int.repr_unsigned i), <- H1. auto. + destruct (Cop.classify_notint (typeof e)); try solve [inv H | rewrite H; exact (@prop_right mpred _ True _ I)]. + destruct (Cop.classify_neg (typeof e)); try solve [inv H | rewrite H; exact (@prop_right mpred _ True _ I)]. rewrite !andb_true_iff in H. destruct H. rewrite H; simpl. destruct (typeof e) as [| ? [|] | [|] | | | | | |]; try solve [exact (@prop_right mpred _ True _ I)]. - simpl. unfold_lift. unfold denote_tc_nosignedover. destruct (eval_expr e rho); try solve [inv H0]. rewrite negb_true_iff in H0. rewrite Z.eqb_neq in H0. 
apply prop_right. change (Int.signed Int.zero) with 0. rep_omega. - simpl. unfold_lift. unfold denote_tc_nosignedover. destruct (eval_expr e rho); try solve [inv H0]. rewrite negb_true_iff in H0. rewrite Z.eqb_neq in H0. apply prop_right. change (Int64.signed Int64.zero) with 0. rep_omega. + destruct (Cop.classify_neg (typeof e)); try solve [inv H | rewrite H; exact (@prop_right mpred _ True _ I)]. Qed. Lemma const_only_isBinOpResultType_spec: forall {cs: compspecs} rho b e1 e2 t P, const_only_isBinOpResultType b (typeof e1) (eval_expr e1 rho) (typeof e2) (eval_expr e2 rho) t = true -> P |-- denote_tc_assert (isBinOpResultType b e1 e2 t) rho. Proof. intros. unfold isBinOpResultType. unfold const_only_isBinOpResultType in H. destruct b. + destruct (Cop.classify_add (typeof e1) (typeof e2)). - rewrite !denote_tc_assert_andp; simpl. unfold_lift. unfold tc_int_or_ptr_type, denote_tc_isptr. destruct (eval_expr e1 rho); inv H. rewrite !andb_true_iff in H1. destruct H1 as [[? ?] ?]. rewrite H, H0, H1. simpl. repeat apply andp_right; apply prop_right; auto. - rewrite !denote_tc_assert_andp; simpl. unfold_lift. unfold tc_int_or_ptr_type, denote_tc_isptr. destruct (eval_expr e1 rho); inv H. rewrite !andb_true_iff in H1. destruct H1 as [[? ?] ?]. rewrite H, H0, H1. simpl. repeat apply andp_right; apply prop_right; auto. - rewrite !denote_tc_assert_andp; simpl. unfold_lift. unfold tc_int_or_ptr_type, denote_tc_isptr. destruct (eval_expr e2 rho); inv H. rewrite !andb_true_iff in H1. destruct H1 as [[? ?] ?]. rewrite H, H0, H1. simpl. repeat apply andp_right; apply prop_right; auto. - rewrite !denote_tc_assert_andp; simpl. unfold_lift. unfold tc_int_or_ptr_type, denote_tc_isptr. destruct (eval_expr e2 rho); inv H. rewrite !andb_true_iff in H1. destruct H1 as [[? ?] ?]. rewrite H, H0, H1. simpl. repeat apply andp_right; apply prop_right; auto. - inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. + inv H. Qed. Lemma const_only_isCastResultType_spec: forall {cs: compspecs} rho e t P, const_only_isCastResultType (typeof e) t (eval_expr e rho) = true -> P |-- denote_tc_assert (isCastResultType (typeof e) t e) rho. Proof. intros. inv H. Qed. Lemma const_only_eval_expr_eq: forall {cs: compspecs} rho e v, const_only_eval_expr e = Some v -> eval_expr e rho = v. Proof. intros. revert v H; induction e; try solve [intros; inv H; auto]. + intros. simpl in *. destruct t as [| [| | |] | | | | | | |]; inv H. auto. + intros. simpl in *. destruct t as [| | | [|] | | | | |]; inv H. auto. + intros. simpl in *. destruct t as [| | | [|] | | | | |]; inv H. auto. + intros. simpl in *. unfold option_map in H. destruct (const_only_eval_expr e); inv H. destruct (const_only_isUnOpResultType u (typeof e) v0 t); inv H1. specialize (IHe _ eq_refl). unfold_lift. rewrite IHe; auto. + intros. simpl in *. unfold option_map in H. destruct (const_only_eval_expr e1); inv H. destruct (const_only_eval_expr e2); inv H1. destruct (const_only_isBinOpResultType b (typeof e1) v0 (typeof e2) v1 t); inv H0. specialize (IHe1 _ eq_refl). specialize (IHe2 _ eq_refl). unfold_lift. rewrite IHe1, IHe2; auto. + intros. simpl in *. unfold option_map in H. destruct (const_only_eval_expr e); inv H. (* specialize (IHe _ eq_refl). unfold_lift. rewrite IHe; auto.*) + intros. simpl in *. destruct (complete_type cenv_cs t && eqb_type t0 size_t); inv H. auto. + intros. simpl in *. destruct (complete_type cenv_cs t && eqb_type t0 size_t); inv H. auto. Qed. 
Lemma const_only_eval_expr_tc: forall {cs: compspecs} Delta e v P, const_only_eval_expr e = Some v -> P |-- tc_expr Delta e. Proof. intros. intro rho. revert v H; induction e; try solve [intros; inv H]. + intros. inv H. destruct t as [| [| | |] | | | | | | |]; inv H1. exact (@prop_right mpred _ True _ I). + intros. inv H. destruct t as [| | | [|] | | | | |]; inv H1. exact (@prop_right mpred _ True _ I). + intros. inv H. destruct t as [| | | [|] | | | | |]; inv H1. exact (@prop_right mpred _ True _ I). + intros. unfold tc_expr in *. simpl in *. unfold option_map in H. destruct (const_only_eval_expr e) eqn:HH; inv H. specialize (IHe _ eq_refl). unfold_lift. rewrite denote_tc_assert_andp; simpl; apply andp_right; auto. apply const_only_isUnOpResultType_spec. apply (const_only_eval_expr_eq rho) in HH. rewrite HH. destruct (const_only_isUnOpResultType u (typeof e) v0 t); inv H1; auto. + intros. unfold tc_expr in *. simpl in *. unfold option_map in H. destruct (const_only_eval_expr e1) eqn:HH1; inv H. destruct (const_only_eval_expr e2) eqn:HH2; inv H1. specialize (IHe1 _ eq_refl). specialize (IHe2 _ eq_refl). unfold_lift. rewrite !denote_tc_assert_andp; simpl; repeat apply andp_right; auto. apply const_only_isBinOpResultType_spec. apply (const_only_eval_expr_eq rho) in HH1. apply (const_only_eval_expr_eq rho) in HH2. rewrite HH1, HH2. destruct (const_only_isBinOpResultType b (typeof e1) v0 (typeof e2) v1 t); inv H0; auto. + intros. unfold tc_expr in *. simpl in *. unfold option_map in H. destruct (const_only_eval_expr e) eqn:HH; inv H. (* specialize (IHe _ eq_refl). unfold_lift. rewrite denote_tc_assert_andp; simpl; apply andp_right; auto. apply const_only_isUnOpResultType_spec. apply (const_only_eval_expr_eq rho) in HH. *) + intros. inv H. unfold tc_expr. simpl typecheck_expr. simpl. destruct (complete_type cenv_cs t && eqb_type t0 size_t) eqn:HH; inv H1. rewrite andb_true_iff in HH. unfold tuint in HH; destruct HH. rewrite H, H0. exact (@prop_right mpred _ True _ I). + intros. inv H. unfold tc_expr. simpl typecheck_expr. simpl. destruct (complete_type cenv_cs t && eqb_type t0 size_t) eqn:HH; inv H1. rewrite andb_true_iff in HH. unfold tuint in HH; destruct HH. rewrite H, H0. exact (@prop_right mpred _ True _ I). Qed.
" Crazy in Love " is a song from American singer Beyoncé 's debut solo album Dangerously in Love ( 2003 ) . Beyoncé wrote the song with Rich Harrison , Jay Z , and Eugene Record , and produced it with Harrison . " Crazy in Love " is an R & B and pop love song that incorporates elements of hip hop , soul , and 1970s @-@ style funk music . Its lyrics describe a romantic obsession that causes the protagonist to act out of character . Jay Z contributes a rapped verse to the song and is credited as a featured performer . The French horn @-@ based hook samples " Are You My Woman ( Tell Me So ) " , a 1970 song by the Chi @-@ Lites .
module SQLLoader using Octo.Repo: ExecuteResult # db_connect function db_connect(; kwargs...) end # db_disconnect function db_disconnect() end # query function query(sql::String) end function query(prepared::String, vals::Vector) end # execute function execute(sql::String)::ExecuteResult ExecuteResult() end function execute(prepared::String, vals::Vector)::ExecuteResult ExecuteResult() end function execute(prepared::String, nts::Vector{<:NamedTuple})::ExecuteResult ExecuteResult() end end # module Octo.Backends.SQLLoader
function generateOpenfieldPB(fieldSizeX, fieldSizeY, beamletSize); xP = -(fieldSizeX-beamletSize)/2 : beamletSize : (fieldSizeX-beamletSize)/2; yP = -(fieldSizeY-beamletSize)/2 : beamletSize : (fieldSizeY-beamletSize)/2; [xPosV, yPosV] = meshgrid(xP, yP); xPosV = xPosV(:); yPosV = yPosV(:); w_field = ones(size(xPosV)); beamlet_delta_x = beamletSize*ones(size(xPosV)); beamlet_delta_y = beamlet_delta_x; filename = ['PB_', num2str(fieldSizeX), 'x', num2str(fieldSizeY), '_PBsize', num2str(beamletSize), 'cm.mat'] disp ('save PB info to above filename.') save (filename, 'xPosV', 'yPosV', 'beamlet_delta_x', 'beamlet_delta_y', 'w_field'); % Display generated PB information. figure;hAxis2 = axes;hold on; %axis(hAxis2, 'manual'); % w_colors = floor((w_field ./ max(w_field))*255)+1; %set(gcf, 'doublebuffer', 'on'); for i=1:length(xPosV) patch([xPosV(i) - beamlet_delta_x(i)/2 xPosV(i) - beamlet_delta_x(i)/2 xPosV(i) + beamlet_delta_x(i)/2 xPosV(i) + beamlet_delta_x(i)/2 xPosV(i) - beamlet_delta_x(i)/2], [yPosV(i) - beamlet_delta_y(i)/2 yPosV(i) + beamlet_delta_y(i)/2 yPosV(i) + beamlet_delta_y(i)/2 yPosV(i) - beamlet_delta_y(i)/2 yPosV(i) - beamlet_delta_y(i)/2], w_field(i)); end % axis([hAxis1 hAxis2], 'ij'); % kids = get(hAxis2, 'children'); % set(kids, 'edgecolor', 'none'); % cMap = colormap('jet'); % set(hAxis2, 'color', cMap(1,:)); end
State Before: M : Type ?u.2777832 N : Type ?u.2777835 G : Type ?u.2777838 R : Type u_1 S : Type ?u.2777844 F : Type ?u.2777847 inst✝⁴ : CommMonoid M inst✝³ : CommMonoid N inst✝² : DivisionCommMonoid G k l : ℕ inst✝¹ : CommRing R ζ✝ : Rˣ h : IsPrimitiveRoot ζ✝ k inst✝ : IsDomain R ζ : R hζ : IsPrimitiveRoot ζ k hk : 1 < k ⊢ ∑ i in range k, ζ ^ i = 0 State After: M : Type ?u.2777832 N : Type ?u.2777835 G : Type ?u.2777838 R : Type u_1 S : Type ?u.2777844 F : Type ?u.2777847 inst✝⁴ : CommMonoid M inst✝³ : CommMonoid N inst✝² : DivisionCommMonoid G k l : ℕ inst✝¹ : CommRing R ζ✝ : Rˣ h : IsPrimitiveRoot ζ✝ k inst✝ : IsDomain R ζ : R hζ : IsPrimitiveRoot ζ k hk : 1 < k ⊢ (1 - ζ) * ∑ i in range k, ζ ^ i = 0 Tactic: refine' eq_zero_of_ne_zero_of_mul_left_eq_zero (sub_ne_zero_of_ne (hζ.ne_one hk).symm) _ State Before: M : Type ?u.2777832 N : Type ?u.2777835 G : Type ?u.2777838 R : Type u_1 S : Type ?u.2777844 F : Type ?u.2777847 inst✝⁴ : CommMonoid M inst✝³ : CommMonoid N inst✝² : DivisionCommMonoid G k l : ℕ inst✝¹ : CommRing R ζ✝ : Rˣ h : IsPrimitiveRoot ζ✝ k inst✝ : IsDomain R ζ : R hζ : IsPrimitiveRoot ζ k hk : 1 < k ⊢ (1 - ζ) * ∑ i in range k, ζ ^ i = 0 State After: no goals Tactic: rw [mul_neg_geom_sum, hζ.pow_eq_one, sub_self]
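For readers following the tactic trace above, the informal content is the classical geometric-sum argument (a gloss on the script, not part of it): since ζ is a primitive k-th root of unity with 1 < k, we have ζ ≠ 1, so 1 - ζ is nonzero in the integral domain R, and

$$ (1 - \zeta)\sum_{i=0}^{k-1} \zeta^{i} \;=\; 1 - \zeta^{k} \;=\; 1 - 1 \;=\; 0, $$

which forces the sum itself to vanish. This is exactly what the `mul_neg_geom_sum`, `hζ.pow_eq_one`, and `sub_self` rewrites establish after `eq_zero_of_ne_zero_of_mul_left_eq_zero` reduces the goal to the product form.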
##### Copyright 2020 The OpenFermion Developers ```python #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Introduction to OpenFermion <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.example.org/openfermion/tutorials/intro_to_openfermion">View on QuantumLib</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/tutorials/intro_to_openfermion.ipynb">Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/OpenFermion/blob/master/docs/tutorials/intro_to_openfermion.ipynb">View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/tutorials/intro_to_openfermion.ipynb">Download notebook</a> </td> </table> Note: The examples below must be run sequentially within a section. ## Setup Install the OpenFermion package: ```python try: import openfermion except ImportError: !pip install git+https://github.com/quantumlib/OpenFermion.git@master#egg=openfermion ``` ## Initializing the FermionOperator data structure Fermionic systems are often treated in second quantization where arbitrary operators can be expressed using the fermionic creation and annihilation operators, $a^\dagger_k$ and $a_k$. The fermionic ladder operators play a similar role to their qubit ladder operator counterparts, $\sigma^+_k$ and $\sigma^-_k$ but are distinguished by the canonical fermionic anticommutation relations, $\{a^\dagger_i, a^\dagger_j\} = \{a_i, a_j\} = 0$ and $\{a_i, a_j^\dagger\} = \delta_{ij}$. Any weighted sums of products of these operators are represented with the FermionOperator data structure in OpenFermion. The following are examples of valid FermionOperators: $$ \begin{align} & a_1 \nonumber \\ & 1.7 a^\dagger_3 \nonumber \\ &-1.7 \, a^\dagger_3 a_1 \nonumber \\ &(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 \nonumber \\ &(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1 \nonumber \end{align} $$ The FermionOperator class is contained in $\textrm{ops/_fermion_operator.py}$. In order to support fast addition of FermionOperator instances, the class is implemented as hash table (python dictionary). The keys of the dictionary encode the strings of ladder operators and values of the dictionary store the coefficients. The strings of ladder operators are encoded as a tuple of 2-tuples which we refer to as the "terms tuple". Each ladder operator is represented by a 2-tuple. The first element of the 2-tuple is an int indicating the tensor factor on which the ladder operator acts. The second element of the 2-tuple is Boole: 1 represents raising and 0 represents lowering. For instance, $a^\dagger_8$ is represented in a 2-tuple as $(8, 1)$. Note that indices start at 0 and the identity operator is an empty list. 
Below we give some examples of operators and their terms tuple: $$ \begin{align} I & \mapsto () \nonumber \\ a_1 & \mapsto ((1, 0),) \nonumber \\ a^\dagger_3 & \mapsto ((3, 1),) \nonumber \\ a^\dagger_3 a_1 & \mapsto ((3, 1), (1, 0)) \nonumber \\ a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto ((4, 1), (3, 1), (9, 0), (1, 0)) \nonumber \end{align} $$ Note that when initializing a single ladder operator one should be careful to add the comma after the inner pair. This is because in python ((1, 2)) = (1, 2) whereas ((1, 2),) = ((1, 2),). The "terms tuple" is usually convenient when one wishes to initialize a term as part of a coded routine. However, the terms tuple is not particularly intuitive. Accordingly, OpenFermion also supports another user-friendly, string notation below. This representation is rendered when calling "print" on a FermionOperator. $$ \begin{align} I & \mapsto \textrm{""} \nonumber \\ a_1 & \mapsto \textrm{"1"} \nonumber \\ a^\dagger_3 & \mapsto \textrm{"3^"} \nonumber \\ a^\dagger_3 a_1 & \mapsto \textrm{"3^}\;\textrm{1"} \nonumber \\ a^\dagger_4 a^\dagger_3 a_9 a_1 & \mapsto \textrm{"4^}\;\textrm{3^}\;\textrm{9}\;\textrm{1"} \nonumber \end{align} $$ Let's initialize our first term! We do it two different ways below. ```python from openfermion.ops import FermionOperator my_term = FermionOperator(((3, 1), (1, 0))) print(my_term) my_term = FermionOperator('3^ 1') print(my_term) ``` The preferred way to specify the coefficient in openfermion is to provide an optional coefficient argument. If not provided, the coefficient defaults to 1. In the code below, the first method is preferred. The multiplication in the second method actually creates a copy of the term, which introduces some additional cost. All inplace operands (such as +=) modify classes whereas binary operands such as + create copies. Important caveats are that the empty tuple FermionOperator(()) and the empty string FermionOperator('') initializes identity. The empty initializer FermionOperator() initializes the zero operator. ```python good_way_to_initialize = FermionOperator('3^ 1', -1.7) print(good_way_to_initialize) bad_way_to_initialize = -1.7 * FermionOperator('3^ 1') print(bad_way_to_initialize) identity = FermionOperator('') print(identity) zero_operator = FermionOperator() print(zero_operator) ``` Note that FermionOperator has only one attribute: .terms. This attribute is the dictionary which stores the term tuples. ```python my_operator = FermionOperator('4^ 1^ 3 9', 1. + 2.j) print(my_operator) print(my_operator.terms) ``` ## Manipulating the FermionOperator data structure So far we have explained how to initialize a single FermionOperator such as $-1.7 \, a^\dagger_3 a_1$. However, in general we will want to represent sums of these operators such as $(1 + 2i) \, a^\dagger_4 a^\dagger_3 a_9 a_1 - 1.7 \, a^\dagger_3 a_1$. To do this, just add together two FermionOperators! We demonstrate below. ```python from openfermion.ops import FermionOperator term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j) term_2 = FermionOperator('3^ 1', -1.7) my_operator = term_1 + term_2 print(my_operator) my_operator = FermionOperator('4^ 3^ 9 1', 1. + 2.j) term_2 = FermionOperator('3^ 1', -1.7) my_operator += term_2 print('') print(my_operator) ``` The print function prints each term in the operator on a different line. Note that the line my_operator = term_1 + term_2 creates a new object, which involves a copy of term_1 and term_2. The second block of code uses the inplace method +=, which is more efficient. 
This is especially important when trying to construct a very large FermionOperator. FermionOperators also support a wide range of builtins including, str(), repr(), ==, !=, *=, *, /, /=, +, +=, -, -=, - and **. Note that since FermionOperators involve floats, == and != check for (in)equality up to numerical precision. We demonstrate some of these methods below. ```python term_1 = FermionOperator('4^ 3^ 9 1', 1. + 2.j) term_2 = FermionOperator('3^ 1', -1.7) my_operator = term_1 - 33. * term_2 print(my_operator) my_operator *= 3.17 * (term_2 + term_1) ** 2 print('') print(my_operator) print('') print(term_2 ** 3) print('') print(term_1 == 2.*term_1 - term_1) print(term_1 == my_operator) ``` Additionally, there are a variety of methods that act on the FermionOperator data structure. We demonstrate a small subset of those methods here. ```python from openfermion.utils import commutator, count_qubits, hermitian_conjugated from openfermion.transforms import normal_ordered # Get the Hermitian conjugate of a FermionOperator, count its qubit, check if it is normal-ordered. term_1 = FermionOperator('4^ 3 3^', 1. + 2.j) print(hermitian_conjugated(term_1)) print(term_1.is_normal_ordered()) print(count_qubits(term_1)) # Normal order the term. term_2 = normal_ordered(term_1) print('') print(term_2) print(term_2.is_normal_ordered()) # Compute a commutator of the terms. print('') print(commutator(term_1, term_2)) ``` ## The QubitOperator data structure The QubitOperator data structure is another essential part of openfermion. As the name suggests, QubitOperator is used to store qubit operators in almost exactly the same way that FermionOperator is used to store fermion operators. For instance $X_0 Z_3 Y_4$ is a QubitOperator. The internal representation of this as a terms tuple would be $((0, \textrm{"X"}), (3, \textrm{"Z"}), (4, \textrm{"Y"}))$. Note that one important difference between QubitOperator and FermionOperator is that the terms in QubitOperator are always sorted in order of tensor factor. In some cases, this enables faster manipulation. We initialize some QubitOperators below. ```python from openfermion.ops import QubitOperator my_first_qubit_operator = QubitOperator('X1 Y2 Z3') print(my_first_qubit_operator) print(my_first_qubit_operator.terms) operator_2 = QubitOperator('X3 Z4', 3.17) operator_2 -= 77. * my_first_qubit_operator print('') print(operator_2) ``` ## Jordan-Wigner and Bravyi-Kitaev openfermion provides functions for mapping FermionOperators to QubitOperators. ```python from openfermion.ops import FermionOperator from openfermion.transforms import jordan_wigner, bravyi_kitaev from openfermion.utils import hermitian_conjugated from openfermion.linalg import eigenspectrum # Initialize an operator. fermion_operator = FermionOperator('2^ 0', 3.17) fermion_operator += hermitian_conjugated(fermion_operator) print(fermion_operator) # Transform to qubits under the Jordan-Wigner transformation and print its spectrum. jw_operator = jordan_wigner(fermion_operator) print('') print(jw_operator) jw_spectrum = eigenspectrum(jw_operator) print(jw_spectrum) # Transform to qubits under the Bravyi-Kitaev transformation and print its spectrum. bk_operator = bravyi_kitaev(fermion_operator) print('') print(bk_operator) bk_spectrum = eigenspectrum(bk_operator) print(bk_spectrum) ``` We see that despite the different representation, these operators are iso-spectral. We can also apply the Jordan-Wigner transform in reverse to map arbitrary QubitOperators to FermionOperators. 
Note that we also demonstrate the .compress() method (a method on both FermionOperators and QubitOperators) which removes zero entries. ```python from openfermion.transforms import reverse_jordan_wigner # Initialize QubitOperator. my_operator = QubitOperator('X0 Y1 Z2', 88.) my_operator += QubitOperator('Z1 Z4', 3.17) print(my_operator) # Map QubitOperator to a FermionOperator. mapped_operator = reverse_jordan_wigner(my_operator) print('') print(mapped_operator) # Map the operator back to qubits and make sure it is the same. back_to_normal = jordan_wigner(mapped_operator) back_to_normal.compress() print('') print(back_to_normal) ``` ## Sparse matrices and the Hubbard model Often, one would like to obtain a sparse matrix representation of an operator which can be analyzed numerically. There is code in both openfermion.transforms and openfermion.utils which facilitates this. The function get_sparse_operator converts either a FermionOperator, a QubitOperator or other more advanced classes such as InteractionOperator to a scipy.sparse.csc matrix. There are numerous functions in openfermion.utils which one can call on the sparse operators such as "get_gap", "get_hartree_fock_state", "get_ground_state", etc. We show this off by computing the ground state energy of the Hubbard model. To do that, we use code from the openfermion.hamiltonians module which constructs lattice models of fermions such as Hubbard models. ```python from openfermion.hamiltonians import fermi_hubbard from openfermion.linalg import get_sparse_operator, get_ground_state from openfermion.transforms import jordan_wigner # Set model. x_dimension = 2 y_dimension = 2 tunneling = 2. coulomb = 1. magnetic_field = 0.5 chemical_potential = 0.25 periodic = 1 spinless = 1 # Get fermion operator. hubbard_model = fermi_hubbard( x_dimension, y_dimension, tunneling, coulomb, chemical_potential, magnetic_field, periodic, spinless) print(hubbard_model) # Get qubit operator under Jordan-Wigner. jw_hamiltonian = jordan_wigner(hubbard_model) jw_hamiltonian.compress() print('') print(jw_hamiltonian) # Get scipy.sparse.csc representation. sparse_operator = get_sparse_operator(hubbard_model) print('') print(sparse_operator) print('\nEnergy of the model is {} in units of T and J.'.format( get_ground_state(sparse_operator)[0])) ``` ## Hamiltonians in the plane wave basis A user can write plugins to openfermion which allow for the use of, e.g., third-party electronic structure package to compute molecular orbitals, Hamiltonians, energies, reduced density matrices, coupled cluster amplitudes, etc using Gaussian basis sets. We may provide scripts which interface between such packages and openfermion in future but do not discuss them in this tutorial. When using simpler basis sets such as plane waves, these packages are not needed. openfermion comes with code which computes Hamiltonians in the plane wave basis. Note that when using plane waves, one is working with the periodized Coulomb operator, best suited for condensed phase calculations such as studying the electronic structure of a solid. To obtain these Hamiltonians one must choose to study the system without a spin degree of freedom (spinless), one must the specify dimension in which the calculation is performed (n_dimensions, usually 3), one must specify how many plane waves are in each dimension (grid_length) and one must specify the length scale of the plane wave harmonics in each dimension (length_scale) and also the locations and charges of the nuclei. 
One can generate these models with plane_wave_hamiltonian() found in openfermion.hamiltonians. For simplicity, below we compute the Hamiltonian in the case of zero external charge (corresponding to the uniform electron gas, aka jellium). We also demonstrate that one can transform the plane wave Hamiltonian using a Fourier transform without effecting the spectrum of the operator. ```python from openfermion.hamiltonians import jellium_model from openfermion.utils import Grid from openfermion.linalg import eigenspectrum from openfermion.transforms import jordan_wigner, fourier_transform # Let's look at a very small model of jellium in 1D. grid = Grid(dimensions=1, length=3, scale=1.0) spinless = True # Get the momentum Hamiltonian. momentum_hamiltonian = jellium_model(grid, spinless) momentum_qubit_operator = jordan_wigner(momentum_hamiltonian) momentum_qubit_operator.compress() print(momentum_qubit_operator) # Fourier transform the Hamiltonian to the position basis. position_hamiltonian = fourier_transform(momentum_hamiltonian, grid, spinless) position_qubit_operator = jordan_wigner(position_hamiltonian) position_qubit_operator.compress() print('') print (position_qubit_operator) # Check the spectra to make sure these representations are iso-spectral. spectral_difference = eigenspectrum(momentum_qubit_operator) - eigenspectrum(position_qubit_operator) print('') print(spectral_difference) ``` ## Basics of MolecularData class Data from electronic structure calculations can be saved in an OpenFermion data structure called MolecularData, which makes it easy to access within our library. Often, one would like to analyze a chemical series or look at many different Hamiltonians and sometimes the electronic structure calculations are either expensive to compute or difficult to converge (e.g. one needs to mess around with different types of SCF routines to make things converge). Accordingly, we anticipate that users will want some way to automatically database the results of their electronic structure calculations so that important data (such as the SCF integrals) can be looked up on-the-fly if the user has computed them in the past. OpenFermion supports a data provenance strategy which saves key results of the electronic structure calculation (including pointers to files containing large amounts of data, such as the molecular integrals) in an HDF5 container. The MolecularData class stores information about molecules. One initializes a MolecularData object by specifying parameters of a molecule such as its geometry, basis, multiplicity, charge and an optional string describing it. One can also initialize MolecularData simply by providing a string giving a filename where a previous MolecularData object was saved in an HDF5 container. One can save a MolecularData instance by calling the class's .save() method. This automatically saves the instance in a data folder specified during OpenFermion installation. The name of the file is generated automatically from the instance attributes and optionally provided description. Alternatively, a filename can also be provided as an optional input if one wishes to manually name the file. When electronic structure calculations are run, the data files for the molecule can be automatically updated. If one wishes to later use that data they either initialize MolecularData with the instance filename or initialize the instance and then later call the .load() method. Basis functions are provided to initialization using a string such as "6-31g". 
Geometries can be specified using a simple txt input file (see geometry_from_file function in molecular_data.py) or can be passed using a simple python list format demonstrated below. Atoms are specified using a string for their atomic symbol. Distances should be provided in angstrom. Below we initialize a simple instance of MolecularData without performing any electronic structure calculations. ```python from openfermion.chem import MolecularData # Set parameters to make a simple molecule. diatomic_bond_length = .7414 geometry = [('H', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))] basis = 'sto-3g' multiplicity = 1 charge = 0 description = str(diatomic_bond_length) # Make molecule and print out a few interesting facts about it. molecule = MolecularData(geometry, basis, multiplicity, charge, description) print('Molecule has automatically generated name {}'.format( molecule.name)) print('Information about this molecule would be saved at:\n{}\n'.format( molecule.filename)) print('This molecule has {} atoms and {} electrons.'.format( molecule.n_atoms, molecule.n_electrons)) for atom, atomic_number in zip(molecule.atoms, molecule.protons): print('Contains {} atom, which has {} protons.'.format( atom, atomic_number)) ``` If we had previously computed this molecule using an electronic structure package, we can call molecule.load() to populate all sorts of interesting fields in the data structure. Though we make no assumptions about what electronic structure packages users might install, we assume that the calculations are saved in OpenFermion's MolecularData objects. Currently plugins are available for [Psi4](http://psicode.org/) [(OpenFermion-Psi4)](http://github.com/quantumlib/OpenFermion-Psi4) and [PySCF](https://github.com/sunqm/pyscf) [(OpenFermion-PySCF)](http://github.com/quantumlib/OpenFermion-PySCF), and there may be more in the future. For the purposes of this example, we will load data that ships with OpenFermion to make a plot of the energy surface of hydrogen. Note that helper functions to initialize some interesting chemical benchmarks are found in openfermion.utils. ```python # Set molecule parameters. basis = 'sto-3g' multiplicity = 1 bond_length_interval = 0.1 n_points = 25 # Generate molecule at different bond lengths. hf_energies = [] fci_energies = [] bond_lengths = [] for point in range(3, n_points + 1): bond_length = bond_length_interval * point bond_lengths += [bond_length] description = str(round(bond_length,2)) print(description) geometry = [('H', (0., 0., 0.)), ('H', (0., 0., bond_length))] molecule = MolecularData( geometry, basis, multiplicity, description=description) # Load data. molecule.load() # Print out some results of calculation. print('\nAt bond length of {} angstrom, molecular hydrogen has:'.format( bond_length)) print('Hartree-Fock energy of {} Hartree.'.format(molecule.hf_energy)) print('MP2 energy of {} Hartree.'.format(molecule.mp2_energy)) print('FCI energy of {} Hartree.'.format(molecule.fci_energy)) print('Nuclear repulsion energy between protons is {} Hartree.'.format( molecule.nuclear_repulsion)) for orbital in range(molecule.n_orbitals): print('Spatial orbital {} has energy of {} Hartree.'.format( orbital, molecule.orbital_energies[orbital])) hf_energies += [molecule.hf_energy] fci_energies += [molecule.fci_energy] # Plot. 
import matplotlib.pyplot as plt %matplotlib inline plt.figure(0) plt.plot(bond_lengths, fci_energies, 'x-') plt.plot(bond_lengths, hf_energies, 'o-') plt.ylabel('Energy in Hartree') plt.xlabel('Bond length in angstrom') plt.show() ``` The geometry data needed to generate MolecularData can also be retreived from the PubChem online database by inputting the molecule's name. ```python from openfermion.chem import geometry_from_pubchem methane_geometry = geometry_from_pubchem('methane') print(methane_geometry) ``` ## InteractionOperator and InteractionRDM for efficient numerical representations Fermion Hamiltonians can be expressed as $H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ where $h_0$ is a constant shift due to the nuclear repulsion and $h_{pq}$ and $h_{pqrs}$ are the famous molecular integrals. Since fermions interact pairwise, their energy is thus a unique function of the one-particle and two-particle reduced density matrices which are expressed in second quantization as $\rho_{pq} = \left \langle p \mid a^\dagger_p a_q \mid q \right \rangle$ and $\rho_{pqrs} = \left \langle pq \mid a^\dagger_p a^\dagger_q a_r a_s \mid rs \right \rangle$, respectively. Because the RDMs and molecular Hamiltonians are both compactly represented and manipulated as 2- and 4- index tensors, we can represent them in a particularly efficient form using similar data structures. The InteractionOperator data structure can be initialized for a Hamiltonian by passing the constant $h_0$ (or 0), as well as numpy arrays representing $h_{pq}$ (or $\rho_{pq}$) and $h_{pqrs}$ (or $\rho_{pqrs}$). Importantly, InteractionOperators can also be obtained by calling MolecularData.get_molecular_hamiltonian() or by calling the function get_interaction_operator() (found in openfermion.transforms) on a FermionOperator. The InteractionRDM data structure is similar but represents RDMs. For instance, one can get a molecular RDM by calling MolecularData.get_molecular_rdm(). When generating Hamiltonians from the MolecularData class, one can choose to restrict the system to an active space. These classes inherit from the same base class, PolynomialTensor. This data structure overloads the slice operator [] so that one can get or set the key attributes of the InteractionOperator: $\textrm{.constant}$, $\textrm{.one_body_coefficients}$ and $\textrm{.two_body_coefficients}$ . For instance, InteractionOperator[(p, 1), (q, 1), (r, 0), (s, 0)] would return $h_{pqrs}$ and InteractionRDM would return $\rho_{pqrs}$. Importantly, the class supports fast basis transformations using the method PolynomialTensor.rotate_basis(rotation_matrix). But perhaps most importantly, one can map the InteractionOperator to any of the other data structures we've described here. Below, we load MolecularData from a saved calculation of LiH. We then obtain an InteractionOperator representation of this system in an active space. We then map that operator to qubits. We then demonstrate that one can rotate the orbital basis of the InteractionOperator using random angles to obtain a totally different operator that is still iso-spectral. ```python from openfermion.chem import MolecularData from openfermion.transforms import get_fermion_operator, jordan_wigner from openfermion.linalg import get_ground_state, get_sparse_operator import numpy import scipy import scipy.linalg # Load saved file for LiH. 
diatomic_bond_length = 1.45 geometry = [('Li', (0., 0., 0.)), ('H', (0., 0., diatomic_bond_length))] basis = 'sto-3g' multiplicity = 1 # Set Hamiltonian parameters. active_space_start = 1 active_space_stop = 3 # Generate and populate instance of MolecularData. molecule = MolecularData(geometry, basis, multiplicity, description="1.45") molecule.load() # Get the Hamiltonian in an active space. molecular_hamiltonian = molecule.get_molecular_hamiltonian( occupied_indices=range(active_space_start), active_indices=range(active_space_start, active_space_stop)) # Map operator to fermions and qubits. fermion_hamiltonian = get_fermion_operator(molecular_hamiltonian) qubit_hamiltonian = jordan_wigner(fermion_hamiltonian) qubit_hamiltonian.compress() print('The Jordan-Wigner Hamiltonian in canonical basis follows:\n{}'.format(qubit_hamiltonian)) # Get sparse operator and ground state energy. sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian) energy, state = get_ground_state(sparse_hamiltonian) print('Ground state energy before rotation is {} Hartree.\n'.format(energy)) # Randomly rotate. n_orbitals = molecular_hamiltonian.n_qubits // 2 n_variables = int(n_orbitals * (n_orbitals - 1) / 2) numpy.random.seed(1) random_angles = numpy.pi * (1. - 2. * numpy.random.rand(n_variables)) kappa = numpy.zeros((n_orbitals, n_orbitals)) index = 0 for p in range(n_orbitals): for q in range(p + 1, n_orbitals): kappa[p, q] = random_angles[index] kappa[q, p] = -numpy.conjugate(random_angles[index]) index += 1 # Build the unitary rotation matrix. difference_matrix = kappa + kappa.transpose() rotation_matrix = scipy.linalg.expm(kappa) # Apply the unitary. molecular_hamiltonian.rotate_basis(rotation_matrix) # Get qubit Hamiltonian in rotated basis. qubit_hamiltonian = jordan_wigner(molecular_hamiltonian) qubit_hamiltonian.compress() print('The Jordan-Wigner Hamiltonian in rotated basis follows:\n{}'.format(qubit_hamiltonian)) # Get sparse Hamiltonian and energy in rotated basis. sparse_hamiltonian = get_sparse_operator(qubit_hamiltonian) energy, state = get_ground_state(sparse_hamiltonian) print('Ground state energy after rotation is {} Hartree.'.format(energy)) ``` ## Quadratic Hamiltonians and Slater determinants The general electronic structure Hamiltonian $H = h_0 + \sum_{pq} h_{pq}\, a^\dagger_p a_q + \frac{1}{2} \sum_{pqrs} h_{pqrs} \, a^\dagger_p a^\dagger_q a_r a_s$ contains terms that act on up to 4 sites, or is quartic in the fermionic creation and annihilation operators. However, in many situations we may fruitfully approximate these Hamiltonians by replacing these quartic terms with terms that act on at most 2 fermionic sites, or quadratic terms, as in mean-field approximation theory. These Hamiltonians have a number of special properties one can exploit for efficient simulation and manipulation of the Hamiltonian, thus warranting a special data structure. We refer to Hamiltonians which only contain terms that are quadratic in the fermionic creation and annihilation operators as quadratic Hamiltonians, and include the general case of non-particle conserving terms as in a general Bogoliubov transformation. Eigenstates of quadratic Hamiltonians can be prepared efficiently on both a quantum and classical computer, making them amenable to initial guesses for many more challenging problems. 
A general quadratic Hamiltonian takes the form $$H = \sum_{p, q} (M_{pq} - \mu \delta_{pq}) a^\dagger_p a_q + \frac{1}{2} \sum_{p, q} (\Delta_{pq} a^\dagger_p a^\dagger_q + \Delta_{pq}^* a_q a_p) + \text{constant},$$ where $M$ is a Hermitian matrix, $\Delta$ is an antisymmetric matrix, $\delta_{pq}$ is the Kronecker delta symbol, and $\mu$ is a chemical potential term which we keep separate from $M$ so that we can use it to adjust the expectation of the total number of particles. In OpenFermion, quadratic Hamiltonians are conveniently represented and manipulated using the QuadraticHamiltonian class, which stores $M$, $\Delta$, $\mu$ and the constant. It is specialized to exploit the properties unique to quadratic Hamiltonians. Like InteractionOperator and InteractionRDM, it inherits from the PolynomialTensor class. The BCS mean-field model of superconductivity is a quadratic Hamiltonian. The following code constructs an instance of this model as a FermionOperator, converts it to a QuadraticHamiltonian, and then computes its ground energy: ```python from openfermion.hamiltonians import mean_field_dwave from openfermion.transforms import get_quadratic_hamiltonian # Set model. x_dimension = 2 y_dimension = 2 tunneling = 2. sc_gap = 1. periodic = True # Get FermionOperator. mean_field_model = mean_field_dwave( x_dimension, y_dimension, tunneling, sc_gap, periodic=periodic) # Convert to QuadraticHamiltonian quadratic_hamiltonian = get_quadratic_hamiltonian(mean_field_model) # Compute the ground energy ground_energy = quadratic_hamiltonian.ground_energy() print(ground_energy) ``` Any quadratic Hamiltonian may be rewritten in the form $$H = \sum_p \varepsilon_p b^\dagger_p b_p + \text{constant},$$ where the $b_p$ are new annihilation operators that satisfy the fermionic anticommutation relations, and which are linear combinations of the old creation and annihilation operators. This form of $H$ makes it easy to deduce its eigenvalues; they are sums of subsets of the $\varepsilon_p$, which we call the orbital energies of $H$. The following code computes the orbital energies and the constant: ```python orbital_energies, constant = quadratic_hamiltonian.orbital_energies() print(orbital_energies) print() print(constant) ``` Eigenstates of quadratic hamiltonians are also known as fermionic Gaussian states, and they can be prepared efficiently on a quantum computer. One can use OpenFermion to obtain circuits for preparing these states. The following code obtains the description of a circuit which prepares the ground state (operations that can be performed in parallel are grouped together), along with a description of the starting state to which the circuit should be applied: ```python from openfermion.circuits import gaussian_state_preparation_circuit circuit_description, start_orbitals = gaussian_state_preparation_circuit(quadratic_hamiltonian) for parallel_ops in circuit_description: print(parallel_ops) print('') print(start_orbitals) ``` In the circuit description, each elementary operation is either a tuple of the form $(i, j, \theta, \varphi)$, indicating the operation $\exp[i \varphi a_j^\dagger a_j]\exp[\theta (a_i^\dagger a_j - a_j^\dagger a_i)]$, which is a Givens rotation of modes $i$ and $j$, or the string 'pht', indicating the particle-hole transformation on the last fermionic mode, which is the operator $\mathcal{B}$ such that $\mathcal{B} a_N \mathcal{B}^\dagger = a_N^\dagger$ and leaves the rest of the ladder operators unchanged. 
Operations that can be performed in parallel are grouped together. In the special case that a quadratic Hamiltonian conserves particle number ($\Delta = 0$), its eigenstates take the form $$\lvert \Psi_S \rangle = b^\dagger_{1}\cdots b^\dagger_{N_f}\lvert \text{vac} \rangle,\qquad b^\dagger_{p} = \sum_{k=1}^N Q_{pq}a^\dagger_q,$$ where $Q$ is an $N_f \times N$ matrix with orthonormal rows. These states are also known as Slater determinants. OpenFermion also provides functionality to obtain circuits for preparing Slater determinants starting with the matrix $Q$ as the input.
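To make that last remark concrete, here is a small sketch (not part of the original tutorial): it assumes the helper `slater_determinant_preparation_circuit` is exposed under `openfermion.circuits`, as in recent OpenFermion releases, and simply prints the Givens-rotation description for a random $Q$ with orthonormal rows.

```python
import numpy
from openfermion.circuits import slater_determinant_preparation_circuit

# Build a Q with 2 orthonormal rows over 4 modes (N_f = 2 occupied orbitals, N = 4).
numpy.random.seed(0)
orthogonal, _ = numpy.linalg.qr(numpy.random.randn(4, 4))
Q = orthogonal[:2, :]  # rows of an orthogonal matrix are orthonormal

# The output uses the same convention as above: each inner tuple (i, j, theta, phi)
# is a Givens rotation of modes i and j; parallelizable operations are grouped.
circuit_description = slater_determinant_preparation_circuit(Q)
for parallel_ops in circuit_description:
    print(parallel_ops)
```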
""" LeakyReLU{T, N}(α::T) <: Bijector{N} Defines the invertible mapping x ↦ x if x ≥ 0 else αx where α > 0. """ struct LeakyReLU{T, N} <: Bijector{N} α::T end LeakyReLU(α::T; dim::Val{N} = Val(0)) where {T<:Real, N} = LeakyReLU{T, N}(α) LeakyReLU(α::T; dim::Val{N} = Val(D)) where {D, T<:AbstractArray{<:Real, D}, N} = LeakyReLU{T, N}(α) up1(b::LeakyReLU{T, N}) where {T, N} = LeakyReLU{T, N + 1}(b.α) # (N=0) Univariate case function (b::LeakyReLU{<:Any, 0})(x::Real) mask = x < zero(x) return mask * b.α * x + !mask * x end (b::LeakyReLU{<:Any, 0})(x::AbstractVector{<:Real}) = map(b, x) function Base.inv(b::LeakyReLU{<:Any,N}) where N invα = inv.(b.α) return LeakyReLU{typeof(invα),N}(invα) end function logabsdetjac(b::LeakyReLU{<:Any, 0}, x::Real) mask = x < zero(x) J = mask * b.α + (1 - mask) * one(x) return log(abs(J)) end logabsdetjac(b::LeakyReLU{<:Real, 0}, x::AbstractVector{<:Real}) = map(x -> logabsdetjac(b, x), x) # We implement `forward` by hand since we can re-use the computation of # the Jacobian of the transformation. This will lead to faster sampling # when using `rand` on a `TransformedDistribution` making use of `LeakyReLU`. function forward(b::LeakyReLU{<:Any, 0}, x::Real) mask = x < zero(x) J = mask * b.α + !mask * one(x) return (rv=J * x, logabsdetjac=log(abs(J))) end # Batched version function forward(b::LeakyReLU{<:Any, 0}, x::AbstractVector) J = let T = eltype(x), z = zero(T), o = one(T) @. (x < z) * b.α + (x > z) * o end return (rv=J .* x, logabsdetjac=log.(abs.(J))) end # (N=1) Multivariate case function (b::LeakyReLU{<:Any, 1})(x::AbstractVecOrMat) return let z = zero(eltype(x)) @. (x < z) * b.α * x + (x > z) * x end end function logabsdetjac(b::LeakyReLU{<:Any, 1}, x::AbstractVecOrMat) # Is really diagonal of jacobian J = let T = eltype(x), z = zero(T), o = one(T) @. (x < z) * b.α + (x > z) * o end if x isa AbstractVector return sum(log.(abs.(J))) elseif x isa AbstractMatrix return vec(sum(log.(abs.(J)); dims = 1)) # sum along column end end # We implement `forward` by hand since we can re-use the computation of # the Jacobian of the transformation. This will lead to faster sampling # when using `rand` on a `TransformedDistribution` making use of `LeakyReLU`. function forward(b::LeakyReLU{<:Any, 1}, x::AbstractVecOrMat) # Is really diagonal of jacobian J = let T = eltype(x), z = zero(T), o = one(T) @. (x < z) * b.α + (x > z) * o end if x isa AbstractVector logjac = sum(log.(abs.(J))) elseif x isa AbstractMatrix logjac = vec(sum(log.(abs.(J)); dims = 1)) # sum along column end y = J .* x return (rv=y, logabsdetjac=logjac) end
lemma emeasure_eq_AE: assumes iff: "AE x in M. x \<in> A \<longleftrightarrow> x \<in> B" assumes A: "A \<in> sets M" and B: "B \<in> sets M" shows "emeasure M A = emeasure M B"
Produce is a book of short fiction and poetry by accomplished UC Davis undergraduate and graduate students. A reinvention of the UCD undergraduate literary magazine known as Seele (pronounced zayluh), Produce was grown in association with the UC Davis English Club. To purchase, contact the above email or one of the editors. The cost is $10.
2007
Released June 2007
Contents: 148 pages; 19 undergraduate poems, 17 undergraduate pieces of fiction, 14 graduate poems and pieces of fiction
Editors: Crystal Cheney, Tyler Fyotek, Brian Ang, Vanessa Uhlig, Elise Kane, Jacob Israel Chilton, Bo Hee Kim, Carmen Lau, Preston Hatfield, J. Richard Roche, Monica Storss
Contributors:
Undergraduate Poetry: Michelle TangJackson, Brian Ang, Naushad Ulhaq, Vanessa Uhlig, Nathan Test, Susan Calvillo, Jacob Roche, Elise Kane, Arnold Kemp, Alicia Raby, Jacob Israel Chilton, Henry 7 Reneau Jr., Kristen Judd, Collin Brennan, Tyler Fyotek, Toni Chisamore
Undergraduate Fiction: Carmen Lau, Michelle TangJackson, Rachel Slotnick, Ryan Willingham, Preston Hatfield, Long Lim, James Xiao, Bo Hee Kim, Jessica Ng, Kira McManus, Dahlia GrossmanHeinze, Sam Bivins, J. Richard Roche, Hailey Yeager
Graduate: Monica Storss, Gabrielle Myers, Patricia Killelea, Jeanine Peters, Crystal Anderson, Emily Norwood, Masin Persina, Crystal Cheney
2006
The book was created with no departmental or university funding, released independently through Moonstreet Press and printed in Massachusetts. Prior to its official release, three of the eight editors (Kaelan Smith, Elise Kane, and Crystal Cheney) were guests on Dr. Andy Jones's esteemed KDVS 90.3 FM radio show, "Dr. Andy's Poetry and Technology Hour," where they explained the book editing process and the financial challenges of publishing. They also read some of their own work from the book as a sample for listeners. Produce debuted May 18, 2006 at a release party held in the Art Building lobby. Sacramento band Johanna provided entertainment.
Contents: 31 poems, 20 pieces of fiction
Editors: Kaelan Smith, Tristen Chang, Crystal Cheney, Emily Connors, Elise Kane, James Xiao, Marie Burcham, Sam Spieller
Contributors:
Poetry: S.A. Spieller, Michelle Jackson, Tristen Chang, Patricia Anne Killelea, Elise Kane, Arjuna Neuman, Marie Burcham, Alycia Raby, Arnold Kemp, James Xiao, Nathan Smith, Kaelan Smith, and Michael Giardina.
Fiction: Kaelan Smith, Crystal Cheney, Emily Connor, Tristen Chang, Kate James, Luke Maulding, Melissa Chordas, Marie Burcham, Michael Giardina, Diana Chan, S. A. Spieller, James Xiao.
If $f$ is continuously differentiable, then it is locally Lipschitz.
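A sketch of the standard argument behind this statement (an informal gloss, not the formal proof): around any point, pick a closed ball $B$ on which the continuous derivative is bounded, say $M = \sup_{z \in B} \lVert Df(z) \rVert < \infty$; the mean value inequality on the convex set $B$ then gives

$$ \lVert f(x) - f(y) \rVert \;\le\; M \,\lVert x - y \rVert \qquad \text{for all } x, y \in B, $$

so $f$ is Lipschitz on a neighbourhood of every point, i.e. locally Lipschitz.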
If a sequence of holomorphic functions $\{f_n\}$ converges uniformly to a holomorphic function $g$ on a connected open set $S$, and if each $f_n$ is injective on $S$, then $g$ is injective on $S$.
[STATEMENT] lemma rtranclp_conjD: "(\<lambda>x1 x2. r1 x1 x2 \<and> r2 x1 x2)\<^sup>*\<^sup>* x1 x2 \<Longrightarrow> r1\<^sup>*\<^sup>* x1 x2 \<and> r2\<^sup>*\<^sup>* x1 x2" [PROOF STATE] proof (prove) goal (1 subgoal): 1. (\<lambda>x1 x2. r1 x1 x2 \<and> r2 x1 x2)\<^sup>*\<^sup>* x1 x2 \<Longrightarrow> r1\<^sup>*\<^sup>* x1 x2 \<and> r2\<^sup>*\<^sup>* x1 x2 [PROOF STEP] by (metis (no_types, lifting) rtrancl_mono_proof)
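An informal reading of why this holds (a sketch; the Isabelle script above discharges it via monotonicity of the closure operator): the relation $\lambda x\,y.\; r_1\,x\,y \wedge r_2\,x\,y$ is contained in both $r_1$ and $r_2$, and reflexive-transitive closure is monotone, so

$$ (r_1 \cap r_2)^{*} \;\subseteq\; r_1^{*} \qquad\text{and}\qquad (r_1 \cap r_2)^{*} \;\subseteq\; r_2^{*}, $$

which yields both conjuncts of the conclusion at once.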
# -*- coding: utf-8 -*- # TODO: ADD COPYRIGHT TAG import logging import utool as ut import numpy as np from wbia.algo.graph.state import POSTV, NEGTV (print, rrr, profile) = ut.inject2(__name__) logger = logging.getLogger('wbia') def sight_resight_prob(N_range, nvisit1, nvisit2, resight): """ https://en.wikipedia.org/wiki/Talk:Mark_and_recapture#Statistical_treatment http://stackoverflow.com/questions/31439875/infinite-summation-in-python/31442749 """ k, K, n = resight, nvisit1, nvisit2 from scipy.special import comb N_range = np.array(N_range) def integers(start, blk_size=10000, pos=True, neg=False): x = np.arange(start, start + blk_size) while True: if pos: yield x if neg: yield -x - 1 x += blk_size def converge_inf_sum(func, x_strm, eps=1e-5, axis=0): # Can still be very slow total = np.sum(func(x_strm.next()), axis=axis) # for x_blk in ut.ProgIter(x_strm, lbl='converging'): for x_blk in x_strm: diff = np.sum(func(x_blk), axis=axis) total += diff # error = abs(np.linalg.norm(diff)) # logger.info('error = %r' % (error,)) if np.sqrt(diff.ravel().dot(diff.ravel())) <= eps: # Converged break return total numers = comb(N_range - K, n - k) / comb(N_range, n) @ut.memoize def func(N_): return comb(N_ - K, n - k) / comb(N_, n) denoms = [] for N in ut.ProgIter(N_range, lbl='denoms'): x_strm = integers(start=(N + n - k), blk_size=100) denom = converge_inf_sum(func, x_strm, eps=1e-3) denoms.append(denom) # denom = sum([func(N_) for N_ in range(N_start, N_start * 2)]) probs = numers / np.array(denoms) return probs def dans_splits(ibs): """ python -m wbia dans_splits --show Example: >>> # DISABLE_DOCTEST GGR >>> from wbia.other.dbinfo import * # NOQA >>> import wbia >>> dbdir = '/media/danger/GGR/GGR-IBEIS' >>> dbdir = dbdir if ut.checkpath(dbdir) else ut.truepath('~/lev/media/danger/GGR/GGR-IBEIS') >>> ibs = wbia.opendb(dbdir=dbdir, allow_newdir=False) >>> import wbia.guitool as gt >>> gt.ensure_qtapp() >>> win = dans_splits(ibs) >>> ut.quit_if_noshow() >>> import wbia.plottool as pt >>> gt.qtapp_loop(qwin=win) """ # pair = 9262, 932 dans_aids = [ 26548, 2190, 9418, 29965, 14738, 26600, 3039, 2742, 8249, 20154, 8572, 4504, 34941, 4040, 7436, 31866, 28291, 16009, 7378, 14453, 2590, 2738, 22442, 26483, 21640, 19003, 13630, 25395, 20015, 14948, 21429, 19740, 7908, 23583, 14301, 26912, 30613, 19719, 21887, 8838, 16184, 9181, 8649, 8276, 14678, 21950, 4925, 13766, 12673, 8417, 2018, 22434, 21149, 14884, 5596, 8276, 14650, 1355, 21725, 21889, 26376, 2867, 6906, 4890, 21524, 6690, 14738, 1823, 35525, 9045, 31723, 2406, 5298, 15627, 31933, 19535, 9137, 21002, 2448, 32454, 12615, 31755, 20015, 24573, 32001, 23637, 3192, 3197, 8702, 1240, 5596, 33473, 23874, 9558, 9245, 23570, 33075, 23721, 24012, 33405, 23791, 19498, 33149, 9558, 4971, 34183, 24853, 9321, 23691, 9723, 9236, 9723, 21078, 32300, 8700, 15334, 6050, 23277, 31164, 14103, 21231, 8007, 10388, 33387, 4319, 26880, 8007, 31164, 32300, 32140, ] is_hyrbid = [ # NOQA 7123, 7166, 7157, 7158, ] needs_mask = [26836, 29742] # NOQA justfine = [19862] # NOQA annots = ibs.annots(dans_aids) unique_nids = ut.unique(annots.nids) grouped_aids = ibs.get_name_aids(unique_nids) annot_groups = ibs._annot_groups(grouped_aids) split_props = {'splitcase', 'photobomb'} needs_tag = [ len(split_props.intersection(ut.flatten(tags))) == 0 for tags in annot_groups.match_tags ] num_needs_tag = sum(needs_tag) num_had_split = len(needs_tag) - num_needs_tag logger.info('num_had_split = %r' % (num_had_split,)) logger.info('num_needs_tag = %r' % (num_needs_tag,)) # all_annot_groups = 
ibs._annot_groups(ibs.group_annots_by_name(ibs.get_valid_aids())[0]) # all_has_split = [len(split_props.intersection(ut.flatten(tags))) > 0 for tags in all_annot_groups.match_tags] # num_nondan = sum(all_has_split) - num_had_split # logger.info('num_nondan = %r' % (num_nondan,)) from wbia.algo.graph import graph_iden from wbia.viz import viz_graph2 import wbia.guitool as gt import wbia.plottool as pt pt.qt4ensure() gt.ensure_qtapp() aids_list = ut.compress(grouped_aids, needs_tag) aids_list = [a for a in aids_list if len(a) > 1] logger.info('len(aids_list) = %r' % (len(aids_list),)) for aids in aids_list: infr = graph_iden.AnnotInference(ibs, aids) infr.initialize_graph() win = viz_graph2.AnnotGraphWidget( infr=infr, use_image=False, init_mode='rereview' ) win.populate_edge_model() win.show() return win assert False def fix_splits_interaction(ibs): """ python -m wbia fix_splits_interaction --show Example: >>> # DISABLE_DOCTEST GGR >>> from wbia.other.dbinfo import * # NOQA >>> import wbia >>> dbdir = '/media/danger/GGR/GGR-IBEIS' >>> dbdir = dbdir if ut.checkpath(dbdir) else ut.truepath('~/lev/media/danger/GGR/GGR-IBEIS') >>> ibs = wbia.opendb(dbdir=dbdir, allow_newdir=False) >>> import wbia.guitool as gt >>> gt.ensure_qtapp() >>> win = fix_splits_interaction(ibs) >>> ut.quit_if_noshow() >>> import wbia.plottool as pt >>> gt.qtapp_loop(qwin=win) """ split_props = {'splitcase', 'photobomb'} all_annot_groups = ibs._annot_groups( ibs.group_annots_by_name(ibs.get_valid_aids())[0] ) all_has_split = [ len(split_props.intersection(ut.flatten(tags))) > 0 for tags in all_annot_groups.match_tags ] tosplit_annots = ut.compress(all_annot_groups.annots_list, all_has_split) tosplit_annots = ut.take(tosplit_annots, ut.argsort(ut.lmap(len, tosplit_annots)))[ ::-1 ] if ut.get_argflag('--reverse'): tosplit_annots = tosplit_annots[::-1] logger.info('len(tosplit_annots) = %r' % (len(tosplit_annots),)) aids_list = [a.aids for a in tosplit_annots] from wbia.algo.graph import graph_iden from wbia.viz import viz_graph2 import wbia.guitool as gt import wbia.plottool as pt pt.qt4ensure() gt.ensure_qtapp() for aids in ut.InteractiveIter(aids_list): infr = graph_iden.AnnotInference(ibs, aids) infr.initialize_graph() win = viz_graph2.AnnotGraphWidget( infr=infr, use_image=False, init_mode='rereview' ) win.populate_edge_model() win.show() return win # assert False def split_analysis(ibs): """ CommandLine: python -m wbia.other.dbinfo split_analysis --show python -m wbia split_analysis --show python -m wbia split_analysis --show --good Ignore: # mount sshfs -o idmap=user lev:/ ~/lev # unmount fusermount -u ~/lev Example: >>> # DISABLE_DOCTEST GGR >>> from wbia.other.dbinfo import * # NOQA >>> import wbia >>> dbdir = '/media/danger/GGR/GGR-IBEIS' >>> dbdir = dbdir if ut.checkpath(dbdir) else ut.truepath('~/lev/media/danger/GGR/GGR-IBEIS') >>> ibs = wbia.opendb(dbdir=dbdir, allow_newdir=False) >>> import wbia.guitool as gt >>> gt.ensure_qtapp() >>> win = split_analysis(ibs) >>> ut.quit_if_noshow() >>> import wbia.plottool as pt >>> gt.qtapp_loop(qwin=win) >>> #ut.show_if_requested() """ # nid_list = ibs.get_valid_nids(filter_empty=True) import datetime day1 = datetime.date(2016, 1, 30) day2 = datetime.date(2016, 1, 31) filter_kw = { 'multiple': None, # 'view': ['right'], # 'minqual': 'good', 'is_known': True, 'min_pername': 1, } aids1 = ibs.filter_annots_general( filter_kw=ut.dict_union( filter_kw, { 'min_unixtime': ut.datetime_to_posixtime(ut.date_to_datetime(day1, 0.0)), 'max_unixtime': 
ut.datetime_to_posixtime(ut.date_to_datetime(day1, 1.0)), }, ) ) aids2 = ibs.filter_annots_general( filter_kw=ut.dict_union( filter_kw, { 'min_unixtime': ut.datetime_to_posixtime(ut.date_to_datetime(day2, 0.0)), 'max_unixtime': ut.datetime_to_posixtime(ut.date_to_datetime(day2, 1.0)), }, ) ) all_aids = aids1 + aids2 all_annots = ibs.annots(all_aids) logger.info('%d annots on day 1' % (len(aids1))) logger.info('%d annots on day 2' % (len(aids2))) logger.info('%d annots overall' % (len(all_annots))) logger.info('%d names overall' % (len(ut.unique(all_annots.nids)))) nid_list, annots_list = all_annots.group(all_annots.nids) REVIEWED_EDGES = True if REVIEWED_EDGES: aids_list = [annots.aids for annots in annots_list] # aid_pairs = [annots.get_am_aidpairs() for annots in annots_list] # Slower aid_pairs = ibs.get_unflat_am_aidpairs(aids_list) # Faster else: # ALL EDGES aid_pairs = [annots.get_aidpairs() for annots in annots_list] speeds_list = ibs.unflat_map(ibs.get_annotpair_speeds, aid_pairs) import vtool as vt max_speeds = np.array([vt.safe_max(s, nans=False) for s in speeds_list]) nan_idx = np.where(np.isnan(max_speeds))[0] inf_idx = np.where(np.isinf(max_speeds))[0] bad_idx = sorted(ut.unique(ut.flatten([inf_idx, nan_idx]))) ok_idx = ut.index_complement(bad_idx, len(max_speeds)) logger.info('#nan_idx = %r' % (len(nan_idx),)) logger.info('#inf_idx = %r' % (len(inf_idx),)) logger.info('#ok_idx = %r' % (len(ok_idx),)) ok_speeds = max_speeds[ok_idx] ok_nids = ut.take(nid_list, ok_idx) ok_annots = ut.take(annots_list, ok_idx) sortx = np.argsort(ok_speeds)[::-1] sorted_speeds = np.array(ut.take(ok_speeds, sortx)) sorted_annots = np.array(ut.take(ok_annots, sortx)) sorted_nids = np.array(ut.take(ok_nids, sortx)) # NOQA sorted_speeds = np.clip(sorted_speeds, 0, 100) # idx = vt.find_elbow_point(sorted_speeds) # EXCESSIVE_SPEED = sorted_speeds[idx] # http://www.infoplease.com/ipa/A0004737.html # http://www.speedofanimals.com/animals/zebra # ZEBRA_SPEED_MAX = 64 # km/h # ZEBRA_SPEED_RUN = 50 # km/h ZEBRA_SPEED_SLOW_RUN = 20 # km/h # ZEBRA_SPEED_FAST_WALK = 10 # km/h # ZEBRA_SPEED_WALK = 7 # km/h MAX_SPEED = ZEBRA_SPEED_SLOW_RUN # MAX_SPEED = ZEBRA_SPEED_WALK # MAX_SPEED = EXCESSIVE_SPEED flags = sorted_speeds > MAX_SPEED flagged_ok_annots = ut.compress(sorted_annots, flags) inf_annots = ut.take(annots_list, inf_idx) flagged_annots = inf_annots + flagged_ok_annots logger.info('MAX_SPEED = %r km/h' % (MAX_SPEED,)) logger.info('%d annots with infinite speed' % (len(inf_annots),)) logger.info('%d annots with large speed' % (len(flagged_ok_annots),)) logger.info('Marking all pairs of annots above the threshold as non-matching') from wbia.algo.graph import graph_iden import networkx as nx progkw = dict(freq=1, bs=True, est_window=len(flagged_annots)) bad_edges_list = [] good_edges_list = [] for annots in ut.ProgIter(flagged_annots, lbl='flag speeding names', **progkw): edge_to_speeds = annots.get_speeds() bad_edges = [edge for edge, speed in edge_to_speeds.items() if speed > MAX_SPEED] good_edges = [ edge for edge, speed in edge_to_speeds.items() if speed <= MAX_SPEED ] bad_edges_list.append(bad_edges) good_edges_list.append(good_edges) all_bad_edges = ut.flatten(bad_edges_list) good_edges_list = ut.flatten(good_edges_list) logger.info('num_bad_edges = %r' % (len(ut.flatten(bad_edges_list)),)) logger.info('num_bad_edges = %r' % (len(ut.flatten(good_edges_list)),)) if 1: from wbia.viz import viz_graph2 import wbia.guitool as gt gt.ensure_qtapp() if ut.get_argflag('--good'): logger.info('Looking at GOOD (no 
speed problems) edges') aid_pairs = good_edges_list else: logger.info('Looking at BAD (speed problems) edges') aid_pairs = all_bad_edges aids = sorted(list(set(ut.flatten(aid_pairs)))) infr = graph_iden.AnnotInference(ibs, aids, verbose=False) infr.initialize_graph() # Use random scores to randomize sort order rng = np.random.RandomState(0) scores = (-rng.rand(len(aid_pairs)) * 10).tolist() infr.graph.add_edges_from(aid_pairs) if True: edge_sample_size = 250 pop_nids = ut.unique(ibs.get_annot_nids(ut.unique(ut.flatten(aid_pairs)))) sorted_pairs = ut.sortedby(aid_pairs, scores)[::-1][0:edge_sample_size] sorted_nids = ibs.get_annot_nids(ut.take_column(sorted_pairs, 0)) sample_size = len(ut.unique(sorted_nids)) am_rowids = ibs.get_annotmatch_rowid_from_undirected_superkey( *zip(*sorted_pairs) ) flags = ut.not_list(ut.flag_None_items(am_rowids)) # am_rowids = ut.compress(am_rowids, flags) positive_tags = ['SplitCase', 'Photobomb'] flags_list = [ ut.replace_nones(ibs.get_annotmatch_prop(tag, am_rowids), 0) for tag in positive_tags ] logger.info( 'edge_case_hist: ' + ut.repr3( [ '%s %s' % (txt, sum(flags_)) for flags_, txt in zip(flags_list, positive_tags) ] ) ) is_positive = ut.or_lists(*flags_list) num_positive = sum( ut.lmap(any, ut.group_items(is_positive, sorted_nids).values()) ) pop = len(pop_nids) logger.info( 'A positive is any edge flagged as a %s' % (ut.conj_phrase(positive_tags, 'or'),) ) logger.info('--- Sampling wrt edges ---') logger.info('edge_sample_size = %r' % (edge_sample_size,)) logger.info('edge_population_size = %r' % (len(aid_pairs),)) logger.info('num_positive_edges = %r' % (sum(is_positive))) logger.info('--- Sampling wrt names ---') logger.info('name_population_size = %r' % (pop,)) vt.calc_error_bars_from_sample( sample_size, num_positive, pop, conf_level=0.95 ) nx.set_edge_attributes( infr.graph, name='score', values=dict(zip(aid_pairs, scores)) ) win = viz_graph2.AnnotGraphWidget(infr=infr, use_image=False, init_mode=None) win.populate_edge_model() win.show() return win # Make review interface for only bad edges infr_list = [] iter_ = list(zip(flagged_annots, bad_edges_list)) for annots, bad_edges in ut.ProgIter(iter_, lbl='creating inference', **progkw): aids = annots.aids nids = [1] * len(aids) infr = graph_iden.AnnotInference(ibs, aids, nids, verbose=False) infr.initialize_graph() infr.reset_feedback() infr_list.append(infr) # Check which ones are user defined as incorrect # num_positive = 0 # for infr in infr_list: # flag = np.any(infr.get_feedback_probs()[0] == 0) # num_positive += flag # logger.info('num_positive = %r' % (num_positive,)) # pop = len(infr_list) # logger.info('pop = %r' % (pop,)) iter_ = list(zip(infr_list, bad_edges_list)) for infr, bad_edges in ut.ProgIter(iter_, lbl='adding speed edges', **progkw): flipped_edges = [] for aid1, aid2 in bad_edges: if infr.graph.has_edge(aid1, aid2): flipped_edges.append((aid1, aid2)) infr.add_feedback((aid1, aid2), NEGTV) nx.set_edge_attributes(infr.graph, name='_speed_split', values='orig') nx.set_edge_attributes( infr.graph, name='_speed_split', values={edge: 'new' for edge in bad_edges} ) nx.set_edge_attributes( infr.graph, name='_speed_split', values={edge: 'flip' for edge in flipped_edges}, ) # for infr in ut.ProgIter(infr_list, lbl='flagging speeding edges', **progkw): # annots = ibs.annots(infr.aids) # edge_to_speeds = annots.get_speeds() # bad_edges = [edge for edge, speed in edge_to_speeds.items() if speed > MAX_SPEED] def inference_stats(infr_list_): relabel_stats = [] for infr in infr_list_: num_ccs, 
num_inconsistent = infr.relabel_using_reviews() state_hist = ut.dict_hist( nx.get_edge_attributes(infr.graph, 'decision').values() ) if POSTV not in state_hist: state_hist[POSTV] = 0 hist = ut.dict_hist( nx.get_edge_attributes(infr.graph, '_speed_split').values() ) subgraphs = infr.positive_connected_compoments() subgraph_sizes = [len(g) for g in subgraphs] info = ut.odict( [ ('num_nonmatch_edges', state_hist[NEGTV]), ('num_match_edges', state_hist[POSTV]), ( 'frac_nonmatch_edges', state_hist[NEGTV] / (state_hist[POSTV] + state_hist[NEGTV]), ), ('num_inconsistent', num_inconsistent), ('num_ccs', num_ccs), ('edges_flipped', hist.get('flip', 0)), ('edges_unchanged', hist.get('orig', 0)), ('bad_unreviewed_edges', hist.get('new', 0)), ('orig_size', len(infr.graph)), ('new_sizes', subgraph_sizes), ] ) relabel_stats.append(info) return relabel_stats relabel_stats = inference_stats(infr_list) logger.info('\nAll Split Info:') lines = [] for key in relabel_stats[0].keys(): data = ut.take_column(relabel_stats, key) if key == 'new_sizes': data = ut.flatten(data) lines.append( 'stats(%s) = %s' % (key, ut.repr2(ut.get_stats(data, use_median=True), precision=2)) ) logger.info('\n'.join(ut.align_lines(lines, '='))) num_incon_list = np.array(ut.take_column(relabel_stats, 'num_inconsistent')) can_split_flags = num_incon_list == 0 logger.info( 'Can trivially split %d / %d' % (sum(can_split_flags), len(can_split_flags)) ) splittable_infrs = ut.compress(infr_list, can_split_flags) relabel_stats = inference_stats(splittable_infrs) logger.info('\nTrival Split Info:') lines = [] for key in relabel_stats[0].keys(): if key in ['num_inconsistent']: continue data = ut.take_column(relabel_stats, key) if key == 'new_sizes': data = ut.flatten(data) lines.append( 'stats(%s) = %s' % (key, ut.repr2(ut.get_stats(data, use_median=True), precision=2)) ) logger.info('\n'.join(ut.align_lines(lines, '='))) num_match_edges = np.array(ut.take_column(relabel_stats, 'num_match_edges')) num_nonmatch_edges = np.array(ut.take_column(relabel_stats, 'num_nonmatch_edges')) flags1 = np.logical_and(num_match_edges > num_nonmatch_edges, num_nonmatch_edges < 3) reasonable_infr = ut.compress(splittable_infrs, flags1) new_sizes_list = ut.take_column(relabel_stats, 'new_sizes') flags2 = [ len(sizes) == 2 and sum(sizes) > 4 and (min(sizes) / max(sizes)) > 0.3 for sizes in new_sizes_list ] reasonable_infr = ut.compress(splittable_infrs, flags2) logger.info('#reasonable_infr = %r' % (len(reasonable_infr),)) for infr in ut.InteractiveIter(reasonable_infr): annots = ibs.annots(infr.aids) edge_to_speeds = annots.get_speeds() logger.info('max_speed = %r' % (max(edge_to_speeds.values()),)) infr.initialize_visual_node_attrs() infr.show_graph(use_image=True, only_reviewed=True) rest = ~np.logical_or(flags1, flags2) nonreasonable_infr = ut.compress(splittable_infrs, rest) rng = np.random.RandomState(0) random_idx = ut.random_indexes(len(nonreasonable_infr) - 1, 15, rng=rng) random_infr = ut.take(nonreasonable_infr, random_idx) for infr in ut.InteractiveIter(random_infr): annots = ibs.annots(infr.aids) edge_to_speeds = annots.get_speeds() logger.info('max_speed = %r' % (max(edge_to_speeds.values()),)) infr.initialize_visual_node_attrs() infr.show_graph(use_image=True, only_reviewed=True) # import scipy.stats as st # conf_interval = .95 # st.norm.cdf(conf_interval) # view-source:http://www.surveysystem.com/sscalc.htm # zval = 1.96 # 95 percent confidence # zValC = 3.8416 # # zValC = 6.6564 # import statsmodels.stats.api as sms # es = 
sms.proportion_effectsize(0.5, 0.75) # sms.NormalIndPower().solve_power(es, power=0.9, alpha=0.05, ratio=1) pop = 279 num_positive = 3 sample_size = 15 conf_level = 0.95 # conf_level = .99 vt.calc_error_bars_from_sample(sample_size, num_positive, pop, conf_level) logger.info('---') vt.calc_error_bars_from_sample(sample_size + 38, num_positive, pop, conf_level) logger.info('---') vt.calc_error_bars_from_sample(sample_size + 38 / 3, num_positive, pop, conf_level) logger.info('---') vt.calc_error_bars_from_sample(15 + 38, num_positive=3, pop=675, conf_level=0.95) vt.calc_error_bars_from_sample(15, num_positive=3, pop=675, conf_level=0.95) pop = 279 # err_frac = .05 # 5% err_frac = 0.10 # 10% conf_level = 0.95 vt.calc_sample_from_error_bars(err_frac, pop, conf_level) pop = 675 vt.calc_sample_from_error_bars(err_frac, pop, conf_level) vt.calc_sample_from_error_bars(0.05, pop, conf_level=0.95, prior=0.1) vt.calc_sample_from_error_bars(0.05, pop, conf_level=0.68, prior=0.2) vt.calc_sample_from_error_bars(0.10, pop, conf_level=0.68) vt.calc_error_bars_from_sample(100, num_positive=5, pop=675, conf_level=0.95) vt.calc_error_bars_from_sample(100, num_positive=5, pop=675, conf_level=0.68) # flagged_nids = [a.nids[0] for a in flagged_annots] # all_nids = ibs.get_valid_nids() # remain_nids = ut.setdiff(all_nids, flagged_nids) # nAids_list = np.array(ut.lmap(len, ibs.get_name_aids(all_nids))) # nAids_list = np.array(ut.lmap(len, ibs.get_name_aids(remain_nids))) # #graph = infr.graph # g2 = infr.graph.copy() # [ut.nx_delete_edge_attr(g2, a) for a in infr.visual_edge_attrs] # g2.edge def estimate_ggr_count(ibs): """ Example: >>> # DISABLE_DOCTEST GGR >>> from wbia.other.dbinfo import * # NOQA >>> import wbia >>> dbdir = ut.truepath('~/lev/media/danger/GGR/GGR-IBEIS') >>> ibs = wbia.opendb(dbdir='/home/joncrall/lev/media/danger/GGR/GGR-IBEIS') """ import datetime day1 = datetime.date(2016, 1, 30) day2 = datetime.date(2016, 1, 31) filter_kw = { 'multiple': None, 'minqual': 'good', 'is_known': True, 'min_pername': 1, 'view': ['right'], } logger.info('\nOnly Single-Animal-In-Annotation:') filter_kw['multiple'] = False estimate_twoday_count(ibs, day1, day2, filter_kw) logger.info('\nOnly Multi-Animal-In-Annotation:') filter_kw['multiple'] = True estimate_twoday_count(ibs, day1, day2, filter_kw) logger.info('\nUsing Both:') filter_kw['multiple'] = None return estimate_twoday_count(ibs, day1, day2, filter_kw) def estimate_twoday_count(ibs, day1, day2, filter_kw): # gid_list = ibs.get_valid_gids() all_images = ibs.images() dates = [dt.date() for dt in all_images.datetime] date_to_images = all_images.group_items(dates) date_to_images = ut.sort_dict(date_to_images) # date_hist = ut.map_dict_vals(len, date2_gids) # logger.info('date_hist = %s' % (ut.repr2(date_hist, nl=2),)) verbose = 0 visit_dates = [day1, day2] visit_info_list_ = [] for day in visit_dates: images = date_to_images[day] aids = ut.flatten(images.aids) aids = ibs.filter_annots_general(aids, filter_kw=filter_kw, verbose=verbose) nids = ibs.get_annot_name_rowids(aids) grouped_aids = ut.group_items(aids, nids) unique_nids = ut.unique(list(grouped_aids.keys())) if False: aids_list = ut.take(grouped_aids, unique_nids) for aids in aids_list: if len(aids) > 30: break timedeltas_list = ibs.get_unflat_annots_timedelta_list(aids_list) # Do the five second rule marked_thresh = 5 flags = [] for nid, timedeltas in zip(unique_nids, timedeltas_list): flags.append(timedeltas.max() > marked_thresh) logger.info('Unmarking %d names' % (len(flags) - sum(flags))) 
unique_nids = ut.compress(unique_nids, flags) grouped_aids = ut.dict_subset(grouped_aids, unique_nids) unique_aids = ut.flatten(list(grouped_aids.values())) info = { 'unique_nids': unique_nids, 'grouped_aids': grouped_aids, 'unique_aids': unique_aids, } visit_info_list_.append(info) # Estimate statistics from wbia.other import dbinfo aids_day1, aids_day2 = ut.take_column(visit_info_list_, 'unique_aids') nids_day1, nids_day2 = ut.take_column(visit_info_list_, 'unique_nids') resight_nids = ut.isect(nids_day1, nids_day2) nsight1 = len(nids_day1) nsight2 = len(nids_day2) resight = len(resight_nids) lp_index, lp_error = dbinfo.sight_resight_count(nsight1, nsight2, resight) if False: from wbia.other import dbinfo logger.info('DAY 1 STATS:') _ = dbinfo.get_dbinfo(ibs, aid_list=aids_day1) # NOQA logger.info('DAY 2 STATS:') _ = dbinfo.get_dbinfo(ibs, aid_list=aids_day2) # NOQA logger.info('COMBINED STATS:') _ = dbinfo.get_dbinfo(ibs, aid_list=aids_day1 + aids_day2) # NOQA logger.info('%d annots on day 1' % (len(aids_day1))) logger.info('%d annots on day 2' % (len(aids_day2))) logger.info('%d names on day 1' % (nsight1,)) logger.info('%d names on day 2' % (nsight2,)) logger.info('resight = %r' % (resight,)) logger.info('lp_index = %r ± %r' % (lp_index, lp_error)) return nsight1, nsight2, resight, lp_index, lp_error def draw_twoday_count(ibs, visit_info_list_): import copy visit_info_list = copy.deepcopy(visit_info_list_) aids_day1, aids_day2 = ut.take_column(visit_info_list_, 'aids') nids_day1, nids_day2 = ut.take_column(visit_info_list_, 'unique_nids') resight_nids = ut.isect(nids_day1, nids_day2) if False: # HACK REMOVE DATA TO MAKE THIS FASTER num = 20 for info in visit_info_list: non_resight_nids = list(set(info['unique_nids']) - set(resight_nids)) sample_nids2 = non_resight_nids[0:num] + resight_nids[:num] info['grouped_aids'] = ut.dict_subset(info['grouped_aids'], sample_nids2) info['unique_nids'] = sample_nids2 # Build a graph of matches if False: debug = False for info in visit_info_list: edges = [] grouped_aids = info['grouped_aids'] aids_list = list(grouped_aids.values()) ams_list = ibs.get_annotmatch_rowids_in_cliques(aids_list) aids1_list = ibs.unflat_map(ibs.get_annotmatch_aid1, ams_list) aids2_list = ibs.unflat_map(ibs.get_annotmatch_aid2, ams_list) for ams, aids, aids1, aids2 in zip( ams_list, aids_list, aids1_list, aids2_list ): edge_nodes = set(aids1 + aids2) # #if len(edge_nodes) != len(set(aids)): # #logger.info('--') # #logger.info('aids = %r' % (aids,)) # #logger.info('edge_nodes = %r' % (edge_nodes,)) bad_aids = edge_nodes - set(aids) if len(bad_aids) > 0: logger.info('bad_aids = %r' % (bad_aids,)) unlinked_aids = set(aids) - edge_nodes mst_links = list(ut.itertwo(list(unlinked_aids) + list(edge_nodes)[:1])) bad_aids.add(None) user_links = [ (u, v) for (u, v) in zip(aids1, aids2) if u not in bad_aids and v not in bad_aids ] new_edges = mst_links + user_links new_edges = [ (int(u), int(v)) for u, v in new_edges if u not in bad_aids and v not in bad_aids ] edges += new_edges info['edges'] = edges # Add edges between days grouped_aids1, grouped_aids2 = ut.take_column(visit_info_list, 'grouped_aids') nids_day1, nids_day2 = ut.take_column(visit_info_list, 'unique_nids') resight_nids = ut.isect(nids_day1, nids_day2) resight_aids1 = ut.take(grouped_aids1, resight_nids) resight_aids2 = ut.take(grouped_aids2, resight_nids) # resight_aids3 = [list(aids1) + list(aids2) for aids1, aids2 in zip(resight_aids1, resight_aids2)] ams_list = ibs.get_annotmatch_rowids_between_groups(resight_aids1, 
resight_aids2) aids1_list = ibs.unflat_map(ibs.get_annotmatch_aid1, ams_list) aids2_list = ibs.unflat_map(ibs.get_annotmatch_aid2, ams_list) between_edges = [] for ams, aids1, aids2, rawaids1, rawaids2 in zip( ams_list, aids1_list, aids2_list, resight_aids1, resight_aids2 ): link_aids = aids1 + aids2 rawaids3 = rawaids1 + rawaids2 badaids = ut.setdiff(link_aids, rawaids3) assert not badaids user_links = [ (int(u), int(v)) for (u, v) in zip(aids1, aids2) if u is not None and v is not None ] # HACK THIS OFF user_links = [] if len(user_links) == 0: # Hack in an edge between_edges += [(rawaids1[0], rawaids2[0])] else: between_edges += user_links assert np.all( 0 == np.diff( np.array(ibs.unflat_map(ibs.get_annot_nids, between_edges)), axis=1 ) ) import wbia.plottool as pt import networkx as nx # pt.qt4ensure() # len(list(nx.connected_components(graph1))) # logger.info(ut.graph_info(graph1)) # Layout graph layoutkw = dict( prog='neato', draw_implicit=False, splines='line', # splines='curved', # splines='spline', # sep=10 / 72, # prog='dot', rankdir='TB', ) def translate_graph_to_origin(graph): x, y, w, h = ut.get_graph_bounding_box(graph) ut.translate_graph(graph, (-x, -y)) def stack_graphs(graph_list, vert=False, pad=None): graph_list_ = [g.copy() for g in graph_list] for g in graph_list_: translate_graph_to_origin(g) bbox_list = [ut.get_graph_bounding_box(g) for g in graph_list_] if vert: dim1 = 3 dim2 = 2 else: dim1 = 2 dim2 = 3 dim1_list = np.array([bbox[dim1] for bbox in bbox_list]) dim2_list = np.array([bbox[dim2] for bbox in bbox_list]) if pad is None: pad = np.mean(dim1_list) / 2 offset1_list = ut.cumsum([0] + [d + pad for d in dim1_list[:-1]]) max_dim2 = max(dim2_list) offset2_list = [(max_dim2 - d2) / 2 for d2 in dim2_list] if vert: t_xy_list = [(d2, d1) for d1, d2 in zip(offset1_list, offset2_list)] else: t_xy_list = [(d1, d2) for d1, d2 in zip(offset1_list, offset2_list)] for g, t_xy in zip(graph_list_, t_xy_list): ut.translate_graph(g, t_xy) nx.set_node_attributes(g, name='pin', values='true') new_graph = nx.compose_all(graph_list_) # pt.show_nx(new_graph, layout='custom', node_labels=False, as_directed=False) # NOQA return new_graph # Construct graph for count, info in enumerate(visit_info_list): graph = nx.Graph() edges = [ (int(u), int(v)) for u, v in info['edges'] if u is not None and v is not None ] graph.add_edges_from(edges, attr_dict={'zorder': 10}) nx.set_node_attributes(graph, name='zorder', values=20) # Layout in neato _ = pt.nx_agraph_layout(graph, inplace=True, **layoutkw) # NOQA # Extract components and then flatten in nid ordering ccs = list(nx.connected_components(graph)) root_aids = [] cc_graphs = [] for cc_nodes in ccs: cc = graph.subgraph(cc_nodes) try: root_aids.append(list(ut.nx_source_nodes(cc.to_directed()))[0]) except nx.NetworkXUnfeasible: root_aids.append(list(cc.nodes())[0]) cc_graphs.append(cc) root_nids = ibs.get_annot_nids(root_aids) nid2_graph = dict(zip(root_nids, cc_graphs)) resight_nids_ = set(resight_nids).intersection(set(root_nids)) noresight_nids_ = set(root_nids) - resight_nids_ n_graph_list = ut.take(nid2_graph, sorted(noresight_nids_)) r_graph_list = ut.take(nid2_graph, sorted(resight_nids_)) if len(n_graph_list) > 0: n_graph = nx.compose_all(n_graph_list) _ = pt.nx_agraph_layout(n_graph, inplace=True, **layoutkw) # NOQA n_graphs = [n_graph] else: n_graphs = [] r_graphs = [stack_graphs(chunk) for chunk in ut.ichunks(r_graph_list, 100)] if count == 0: new_graph = stack_graphs(n_graphs + r_graphs, vert=True) else: new_graph = 
stack_graphs(r_graphs[::-1] + n_graphs, vert=True) # pt.show_nx(new_graph, layout='custom', node_labels=False, as_directed=False) # NOQA info['graph'] = new_graph graph1_, graph2_ = ut.take_column(visit_info_list, 'graph') if False: pt.show_nx(graph1_, layout='custom', node_labels=False, as_directed=False) pt.show_nx(graph2_, layout='custom', node_labels=False, as_directed=False) graph_list = [graph1_, graph2_] twoday_graph = stack_graphs(graph_list, vert=True, pad=None) nx.set_node_attributes(twoday_graph, name='pin', values='true') if debug: ut.nx_delete_None_edge_attr(twoday_graph) ut.nx_delete_None_node_attr(twoday_graph) logger.info( 'twoday_graph(pre) info' + ut.repr3(ut.graph_info(twoday_graph), nl=2) ) # Hack, no idea why there are nodes that dont exist here between_edges_ = [ edge for edge in between_edges if twoday_graph.has_node(edge[0]) and twoday_graph.has_node(edge[1]) ] twoday_graph.add_edges_from(between_edges_, attr_dict={'alpha': 0.2, 'zorder': 0}) ut.nx_ensure_agraph_color(twoday_graph) layoutkw['splines'] = 'line' layoutkw['prog'] = 'neato' agraph = pt.nx_agraph_layout( twoday_graph, inplace=True, return_agraph=True, **layoutkw )[ -1 ] # NOQA if False: fpath = ut.truepath('~/ggr_graph.png') agraph.draw(fpath) ut.startfile(fpath) if debug: logger.info('twoday_graph(post) info' + ut.repr3(ut.graph_info(twoday_graph))) pt.show_nx(twoday_graph, layout='custom', node_labels=False, as_directed=False) def cheetah_stats(ibs): filters = [ dict(view=['right', 'frontright', 'backright'], minqual='good'), dict(view=['right', 'frontright', 'backright']), ] for filtkw in filters: annots = ibs.annots(ibs.filter_annots_general(**filtkw)) unique_nids, grouped_annots = annots.group(annots.nids) annots_per_name = ut.lmap(len, grouped_annots) annots_per_name_freq = ut.dict_hist(annots_per_name) def bin_mapper(num): if num < 5: return (num, num + 1) else: for bin, mod in [(20, 5), (50, 10)]: if num < bin: low = (num // mod) * mod high = low + mod return (low, high) if num >= bin: return (bin, None) else: assert False, str(num) hist = ut.ddict(lambda: 0) for num in annots_per_name: hist[bin_mapper(num)] += 1 hist = ut.sort_dict(hist) logger.info('------------') logger.info('filters = %s' % ut.repr4(filtkw)) logger.info('num_annots = %r' % (len(annots))) logger.info('num_names = %r' % (len(unique_nids))) logger.info('annots_per_name_freq = %s' % (ut.repr4(annots_per_name_freq))) logger.info('annots_per_name_freq (ranges) = %s' % (ut.repr4(hist))) assert sum(hist.values()) == len(unique_nids) def print_feature_info(testres): """ draws keypoint statistics for each test configuration Args: testres (wbia.expt.test_result.TestResult): test result Ignore: import wbia.plottool as pt pt.qt4ensure() testres.draw_rank_cmc() Example: >>> # DISABLE_DOCTEST >>> from wbia.other.dbinfo import * # NOQA >>> import wbia >>> ibs, testres = wbia.testdata_expts(defaultdb='PZ_MTEST', a='timectrl', t='invar:AI=False') >>> (tex_nKpts, tex_kpts_stats, tex_scale_stats) = feature_info(ibs) >>> result = ('(tex_nKpts, tex_kpts_stats, tex_scale_stats) = %s' % (ut.repr2((tex_nKpts, tex_kpts_stats, tex_scale_stats)),)) >>> print(result) >>> ut.quit_if_noshow() >>> import wbia.plottool as pt >>> ut.show_if_requested() """ import vtool as vt # ibs = testres.ibs def print_feat_stats(kpts, vecs): assert len(vecs) == len(kpts), 'disagreement' logger.info('keypoints and vecs agree') flat_kpts = np.vstack(kpts) num_kpts = list(map(len, kpts)) kpt_scale = vt.get_scales(flat_kpts) num_kpts_stats = ut.get_stats(num_kpts) 
scale_kpts_stats = ut.get_stats(kpt_scale) logger.info( 'Number of ' + prefix + ' keypoints: ' + ut.repr3(num_kpts_stats, nl=0, precision=2) ) logger.info( 'Scale of ' + prefix + ' keypoints: ' + ut.repr3(scale_kpts_stats, nl=0, precision=2) ) for cfgx in range(testres.nConfig): logger.info('------------------') ut.colorprint(testres.cfgx2_lbl[cfgx], 'yellow') qreq_ = testres.cfgx2_qreq_[cfgx] depc = qreq_.ibs.depc_annot tablename = 'feat' prefix_list = ['query', 'data'] config_pair = [qreq_.query_config2_, qreq_.data_config2_] aids_pair = [qreq_.qaids, qreq_.daids] for prefix, aids, config in zip(prefix_list, aids_pair, config_pair): config_ = depc._ensure_config(tablename, config) ut.colorprint(prefix + ' Config: ' + str(config_), 'blue') # Get keypoints and SIFT descriptors for this config kpts = depc.get(tablename, aids, 'kpts', config=config_) vecs = depc.get(tablename, aids, 'vecs', config=config_) # Check various stats of these pairs print_feat_stats(kpts, vecs) # kpts = np.vstack(cx2_kpts) # logger.info('[dbinfo] --- LaTeX --- ') # # _printopts = np.get_printoptions() # # np.set_printoptions(precision=3) # scales = np.array(sorted(scales)) # tex_scale_stats = util_latex.latex_get_stats(r'kpt scale', scales) # tex_nKpts = util_latex.latex_scalar(r'\# kpts', len(kpts)) # tex_kpts_stats = util_latex.latex_get_stats(r'\# kpts/chip', cx2_nFeats) # logger.info(tex_nKpts) # logger.info(tex_kpts_stats) # logger.info(tex_scale_stats) # # np.set_printoptions(**_printopts) # logger.info('[dbinfo] ---/LaTeX --- ') # return (tex_nKpts, tex_kpts_stats, tex_scale_stats) def tst_name_consistency(ibs): """ Example: >>> # FIXME failing-test (22-Jul-2020) PZ_Master0 doesn't exist >>> # xdoctest: +SKIP >>> import wbia >>> ibs = wbia.opendb(db='PZ_Master0') >>> #ibs = wbia.opendb(db='GZ_ALL') """ from wbia.other import ibsfuncs import utool as ut max_ = -1 # max_ = 10 valid_aids = ibs.get_valid_aids()[0:max_] valid_nids = ibs.get_valid_nids()[0:max_] ax2_nid = ibs.get_annot_name_rowids(valid_aids) nx2_aids = ibs.get_name_aids(valid_nids) logger.info('len(valid_aids) = %r' % (len(valid_aids),)) logger.info('len(valid_nids) = %r' % (len(valid_nids),)) logger.info('len(ax2_nid) = %r' % (len(ax2_nid),)) logger.info('len(nx2_aids) = %r' % (len(nx2_aids),)) # annots are grouped by names, so mapping aid back to nid should # result in each list having the same value _nids_list = ibsfuncs.unflat_map(ibs.get_annot_name_rowids, nx2_aids) logger.info(_nids_list[-20:]) logger.info(nx2_aids[-20:]) assert all(map(ut.allsame, _nids_list))
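The census helpers above (estimate_twoday_count and the hard-coded checks near the end of split_analysis) reduce the two survey days to three numbers — names seen on day 1, names seen on day 2, and names resighted on both days — and pass them to dbinfo.sight_resight_count to obtain a Lincoln-Petersen style estimate lp_index with an error term lp_error. The exact formula that function uses is not shown in this file, so the sketch below is an illustrative stand-in using the Chapman-corrected form of the two-sample mark-recapture estimator; the function name and the variance formula here are assumptions for illustration, not wbia code.

```python
import math

def lincoln_petersen(nsight1, nsight2, resight):
    """Illustrative two-sample mark-recapture estimate (not wbia's implementation).

    nsight1 -- number of distinct names seen on day 1
    nsight2 -- number of distinct names seen on day 2
    resight -- number of names seen on both days
    Returns (population_estimate, approximate_standard_error).
    """
    # Chapman's bias-corrected form of the Lincoln-Petersen estimator
    n_hat = (nsight1 + 1) * (nsight2 + 1) / (resight + 1) - 1
    # Large-sample variance of the Chapman estimator
    var = ((nsight1 + 1) * (nsight2 + 1) * (nsight1 - resight) * (nsight2 - resight)
           / ((resight + 1) ** 2 * (resight + 2)))
    return n_hat, math.sqrt(var)

# Example with made-up numbers: 150 names on day 1, 130 on day 2, 60 resighted
print(lincoln_petersen(150, 130, 60))
```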
import tactic import data.set.basic -- This is just Coursework 2 but cleaned up substantially! /-- An ideal of R consists of a nonempty subset of R which is closed under addition, additive inverses, and multiplication by elements of R. -/ @[nolint has_inhabited_instance] structure myideal (R : Type) [comm_ring R] := (iset : set R) (not_empty : iset.nonempty) (r_mul_mem' {x r}: x ∈ iset → r * x ∈ iset) (add_mem' {x y} : x ∈ iset → y ∈ iset → (x + y) ∈ iset) (neg_mem' {x} : x ∈ iset → -x ∈ iset) attribute [ext] myideal namespace myideal variables {R : Type} [comm_ring R] (I : myideal R) instance : has_mem R (myideal R) := { mem := λ x i , x ∈ i.iset} instance : has_coe (myideal R) (set R) := {coe := λ x, x.iset} instance : has_subset (myideal R) := { subset := λ x y, x.iset ⊆ y.iset } theorem add_mem {x y : R}: x ∈ I → y ∈ I → x + y ∈ I := by apply add_mem' theorem neg_mem {x : R} : x ∈ I → -x ∈ I := by apply neg_mem' theorem r_mul_mem {x r : R} : x ∈ I → r * x ∈ I := by apply r_mul_mem' end myideal variables {R : Type} [comm_ring R] /-- An integral domain has no zero divisors. -/ class integral_domain (R: Type) extends comm_ring R := (hzd: ∀ (x y : R), x * y = 0 → x = 0 ∨ y = 0) /-- A principal ideal is of the form aR for some a ∈ R -/ def principal_ideal (x : R) : myideal R := { iset := { i : R | ∃(v:R), i = x * v}, not_empty := begin rw set.nonempty_def, use x, use 1, rw mul_one, end, r_mul_mem' := begin intro i, intro j, intro h, cases h, use (j * h_w), rw h_h, ring, end, add_mem' := begin intros i j hi hj, cases hi, cases hj, use hi_w + hj_w, rw mul_add, rw hi_h, rw hj_h, end, neg_mem' := begin intros i hi, cases hi, use -hi_w, rw hi_h, ring, end } /-- An integral domain is a PID iff every ideal is principal. -/ class pid (R: Type) extends integral_domain R := (hpid : ∀(I : myideal R), ∃ (x : R), I = principal_ideal x) --1 ∈ I → I = R lemma one_mem_ideal_R {I : myideal R} : (1:R) ∈ I → coe I = {x : R | true} := begin intro h, ext, split, intro, triv, intro h2, rw ←mul_one x, apply myideal.r_mul_mem, exact h, end lemma zero_mem_ideal {I : myideal R} : (0:R) ∈ I := begin have h := myideal.not_empty I, rw set.nonempty at h, cases h with x hx, have h2 : x + (-x) = 0, ring, rw ←h2, apply myideal.add_mem I, exact hx, apply myideal.neg_mem I, exact hx, end /-- The sum of two ideals is also an ideal. 
-/ def sum_ideal (I J : myideal R) : myideal R := { iset := {r : R | ∃ (i ∈ I) (j ∈ J), r = i + j}, not_empty := begin have h1 := myideal.not_empty I, have h2 := myideal.not_empty J, rw set.nonempty at h1 h2 ⊢, cases h1, cases h2, use h1_w + h2_w, use h1_w, split, exact h1_h, use h2_w, split, exact h2_h, refl, end, r_mul_mem' := begin intros x r h, cases h with i h2, cases h2 with hi h2, cases h2 with j h2, cases h2 with hj h2, rw h2, rw mul_add, use r * i, split, apply myideal.r_mul_mem, exact hi, use r * j, split, apply myideal.r_mul_mem, exact hj, refl, end, add_mem' := begin intros x y hxm hym, cases hxm with xi hxi, cases hxi with hxi h2, cases h2 with xj hxj, cases hxj with hxj hx, cases hym with yi hyi, cases hyi with hyi h2, cases h2 with yj hyj, cases hyj with hyj hy, use xi + yi, split, apply myideal.add_mem, exact hxi, exact hyi, use xj + yj, split, apply myideal.add_mem, exact hxj, exact hyj, rw hy, rw hx, ring, end, neg_mem' := begin intros x h, cases h with i hi, cases hi with hi h2, cases h2 with j h2, cases h2 with hj h2, use -i, split, apply myideal.neg_mem, exact hi, use -j, split, apply myideal.neg_mem, exact hj, rw h2, ring, end } namespace myring /-- An element r of R is irreducible iff it is not a unit and for all factorisations x * y = r, x or y is a unit. -/ def irreducible (r : R) : Prop := ¬is_unit r ∧ ∀(x y : R), x * y = r → is_unit x ∨ is_unit y /-- Two elements a b are associates iff a = b * c for some unit c. -/ def associates (a b : R) : Prop := ∃ (c : R), is_unit c ∧ b = a * c notation a ` ~ ` b := associates a b end myring open myring lemma r_prod_unit_r_unit (r a : R) (hpu : is_unit (r * a)) : is_unit r := begin have h2:= is_unit.exists_right_inv hpu, cases h2, rw is_unit_iff_exists_inv', use a * h2_w, rw mul_comm, rw ← mul_assoc, exact h2_h, end lemma unit_mul_irr_is_irr (r a : R) (hirr :irreducible r) (hu: is_unit a) : irreducible (a * r) := begin split, { by_contra, cases hirr, apply hirr_left, apply r_prod_unit_r_unit r a, rw mul_comm, exact h }, { intros x y h, have h2 : ∃ (b : R), r = (b * x) * y, rw is_unit_iff_exists_inv at hu, cases hu with c h3, use c, rw mul_assoc, rw h, ring_nf, rw mul_assoc, rw h3, rw mul_one, cases hirr, cases h2 with b hb, specialize hirr_right (b * x) y, rw eq_comm at hb, have h3 := hirr_right hb, cases h3, left, rw mul_comm at h3, apply r_prod_unit_r_unit x b, exact h3, right, exact h3, } end /-- For some a and b, a divides b iff there is a c ∈ R such that a * c = b -/ def divisible (a b: R) : Prop := ∃ (c : R), b = a * c notation a ` \ ` b := divisible a b lemma assoc_sym (a b : R) : a ~ b → b ~ a:= begin intro h, cases h with u h2, cases h2 with hunit h3, rw is_unit_iff_exists_inv at hunit, cases hunit with uinv huinv, use uinv, split, rw is_unit_iff_exists_inv, use u, rw mul_comm, exact huinv, rw h3, rw mul_assoc, rw huinv, rw mul_one, end variables (S : Type) [integral_domain S] lemma symm_divisible_associates_int_domain (a b : S) : a \ b → b \ a → a ~ b:= begin intros h1 h2, cases h1 with x hx, cases h2 with y hy, by_cases a ≠ 0, { rw hx at hy, apply_fun λ x, x + (-a) at hy, rw add_neg_self at hy, rw ←mul_one (-a) at hy, rw neg_mul_comm a 1 at hy, rw mul_assoc at hy, rw ←mul_add at hy, have hint := _inst_2.hzd, specialize hint a (x * y + (-1)), have h2 : a = 0 ∨ x * y + -1 = 0, exact hint (eq.symm hy), cases h2, exfalso, apply h, exact h2, apply_fun λ x, x + 1 at h2, rw zero_add at h2, rw add_assoc at h2, rw neg_add_self at h2, rw add_zero at h2, use x, split, rw is_unit_iff_exists_inv, use y, exact h2, exact hx, }, { 
use 1, split, exact is_unit_one, rw not_ne_iff at h, rw h at hx, rw hx, rw h, rw zero_mul, rw zero_mul, } end lemma generators_associate_if_ideals_eq (a b : S) : principal_ideal a = principal_ideal b → a ~ b := begin intro h, have h1 : a ∈ principal_ideal a, use 1, rw mul_one, have h2 : b ∈ principal_ideal b, use 1, rw mul_one, rw h at h1, rw ←h at h2, apply symm_divisible_associates_int_domain, exact h2, exact h1, end /-- In a dividing sequence, each term is divisible by the next term. -/ def dividing_sequence (f : ℕ → R) : Prop := ∀ (n : ℕ), f (n + 1) \ f n /-- A unique factorisation domain (UFD) is an integral domain such that: All infinite dividing sequences 'stabilise': past some n ∈ ℕ, all terms are associate.#check All irreducible elements are prime. -/ class ufd (R : Type) extends integral_domain R := (hseq : ∀(f : ℕ → R), dividing_sequence f → ∃ (m : ℕ), ∀(q : ℕ), m ≤ q → f q ~ f (q + 1)) (hpi : ∀ (p: R), irreducible p →∀ (a b: R), p \ (a*b) → p \ a ∨ p \ b ) /-- In an ascending ideal chain, each ideal is contained in the next one. -/ def asc_ideal_chain (i : ℕ → myideal R) : Prop := ∀ (n : ℕ), i n ⊆ i (n + 1) lemma asc_ideal_chain_add (i : ℕ → myideal R) : asc_ideal_chain i → ∀(n : ℕ), ∀ (m : ℕ), i n ⊆ i (m+n) := begin intros h n m, induction m, rw zero_add, refl, specialize h (m_n + n), rw nat.succ_eq_add_one, change (i n).iset ⊆ (i (m_n + n)).iset at m_ih, change (i (m_n + n)).iset ⊆ (i (m_n + n + 1)).iset at h, change (i n).iset ⊆ (i (m_n + 1 + n)).iset, apply set.subset.trans, exact m_ih, nth_rewrite_rhs 1 add_comm, rw add_assoc, nth_rewrite_rhs 0 add_comm, exact h, end lemma asc_ideal_chain_ind (i : ℕ → myideal R) : asc_ideal_chain i ↔ ∀(n : ℕ), ∀ (m : ℕ), n ≤ m → i n ⊆ i m := begin split, { intros h n m h2, have h3 : ∃ (r: ℕ), n + r = m, use m - n, linarith, cases h3 with s hs, rw ←hs, rw add_comm, apply asc_ideal_chain_add, exact h, }, { intros h n, specialize h n (n+1), apply h, norm_num, } end theorem pid_is_noetherian (P : Type) [pid P] (i :ℕ → myideal P) (hinc : asc_ideal_chain i) : ∃(r : ℕ), ∀(s : ℕ ), r ≤ s → i s = i (s + 1) := begin let S := set.Union (λ (x : ℕ), myideal.iset (i x)), let si := myideal.mk S, let sii : myideal P, apply si, { rw set.nonempty, let i0 := i 0, have hne := myideal.not_empty i0, rw set.nonempty at hne, cases hne with x0 hx0, use x0, rw set.mem_Union, use 0, exact hx0, }, { intros x r h, rw set.mem_Union at h ⊢, cases h, use h_w, apply myideal.r_mul_mem', exact h_h, },{ intros x y h1 h2, rw set.mem_Union at h1 h2 ⊢, cases h1 with i1 hi1, cases h2 with i2 hi2, by_cases i1 ≤ i2, { have h3 := ((asc_ideal_chain_ind i).mp) hinc, specialize h3 i1 i2, have h4 := h3 h, use i2, apply myideal.add_mem', apply set.mem_of_subset_of_mem h4, exact hi1, exact hi2, }, { have hbt : i2 ≤ i1, rw le_iff_lt_or_eq, left, push_neg at h, exact h, have h3 := ((asc_ideal_chain_ind i).mp) hinc, specialize h3 i2 i1, have h4 := h3 hbt, use i1, apply myideal.add_mem', exact hi1, apply set.mem_of_subset_of_mem h4, exact hi2, } },{ intros x h, rw set.mem_Union at h ⊢, cases h with b hb, use b, apply myideal.neg_mem', exact hb, }, have hpid := pid.hpid sii, cases hpid with a ha, have hasi : a ∈ sii, rw ha, change ∃(v:P), a = a * v, use 1, rw mul_one, change a ∈ sii.iset at hasi, rw set.mem_Union at hasi, cases hasi with q hq, use q, intro s, intro hsq, have hisq : (i q).iset = S, { ext, split, intro h, rw set.mem_Union, use q, exact h, intro h, change x ∈ sii.iset at h, rw ha at h, cases h, rw mul_comm at h_h, rw h_h, apply myideal.r_mul_mem', exact hq, }, apply myideal.ext, 
apply set.subset.antisymm, specialize hinc s, exact hinc, apply @set.subset.trans _ (i (s + 1)).iset (i q).iset, rw hisq, exact set.subset_Union (λ (x : ℕ), myideal.iset (i x)) (s + 1), rw asc_ideal_chain_ind at hinc, specialize hinc q s, apply hinc, exact hsq, end theorem pid_irreducible_is_prime (P : Type) [pid P] (p : P) (hirr : irreducible p) : ∀ (a b : P), p \ (a * b) → p \ a ∨ p \ b := begin intros a b h, let I := sum_ideal (principal_ideal a) (principal_ideal p), have hpid := pid.hpid I, cases hpid with d hd, have hpi: p ∈ I, use 0, split, exact zero_mem_ideal, use p, split, use 1, rw mul_one, rw zero_add, rw hd at hpi, cases hpi with r hdr, cases hirr, specialize hirr_right d r, have hut := hirr_right (eq_comm.mpr hdr), cases hut, { right, have hone : (1:P) ∈ I, rw is_unit_iff_exists_inv at hut, rw hd, cases hut with di hdi, use di, rw hdi, cases hone with u h2, cases h2 with hu h2, cases h2 with v h2, cases h2 with hv h2, cases hu with s hs, cases hv with t ht, rw hs at h2, rw ht at h2, apply_fun λ x, b*x at h2, rw mul_one at h2, rw mul_add at h2, rw h2, rw ← mul_assoc, rw mul_comm b a, cases h with q hq, rw hq, use q * s + b * t, ring, },{ have hai : a ∈ I, use a, split, use 1, rw mul_one, use 0, split, exact zero_mem_ideal, rw add_zero, rw hd at hai, cases hai with e he, rw is_unit_iff_exists_inv at hut, cases hut with ri hri, apply_fun λ x, x * ri at hdr, rw mul_assoc at hdr, rw hri at hdr, rw mul_one at hdr, rw ←hdr at he, left, use (ri * e), rw ← mul_assoc, exact he, } end namespace pid @[priority 100] instance (P : Type) [pid P]: ufd P:= { hseq := begin intro f, intro h, let i := λ(x : ℕ), principal_ideal (f x), have hinc : asc_ideal_chain i, intro n, change principal_ideal (f n) ⊆ principal_ideal (f (n+1)), specialize h n, cases h with y hy, rw hy, intros x h2, cases h2 with z hz, use y * z, rw ← mul_assoc, exact hz, have hnoet := pid_is_noetherian P, specialize hnoet i, have hstab := hnoet hinc, cases hstab with m hm, use m, intro q, specialize hm q, intro hmq, have hiqs := hm hmq, apply generators_associate_if_ideals_eq, exact hiqs, end, hpi := pid_irreducible_is_prime P } end pid #lint
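The key step in pid_is_noetherian above is that the union S of an ascending chain of ideals is again an ideal, hence principal in a PID, and its generator already lies in one ideal of the chain. A brief restatement of that argument in ordinary notation (only summarizing the proof, nothing new):

```latex
% Ascending chain argument behind pid_is_noetherian
\[
  I_0 \subseteq I_1 \subseteq I_2 \subseteq \cdots, \qquad
  S = \bigcup_{n \in \mathbb{N}} I_n .
\]
% S is an ideal (closure under addition uses the chain condition to push both
% summands into the larger of the two ideals), so in a PID S = (a) for some a,
% and a \in I_q for some index q.  For every s \ge q:
\[
  I_q \subseteq I_s \subseteq S = (a) \subseteq I_q
  \quad\Longrightarrow\quad
  I_s = I_{s+1} = I_q .
\]
```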
library(tidyverse) library(janitor) library(here) path <- "rawdata/" filenames <- dir(path = "rawdata", pattern = "*.txt", full.names = TRUE) get_country_name <- function(x){ read_lines(x, n_max = 1) %>% str_extract(".+?,") %>% str_remove(",") } shorten_name <- function(x){ str_replace_all(x, " -- ", " ") %>% str_replace("The United States of America", "USA") %>% snakecase::to_any_case() } countries <- tibble(country = map_chr(filenames, get_country_name), ccode = map_chr(country, shorten_name), path = filenames) mortality <- countries %>% mutate(data = map(path, ~ read_table(., skip = 2, na = "."))) %>% unnest(cols = c(data)) %>% clean_names() %>% mutate(age = as.integer(recode(age, "110+" = "110"))) %>% select(-path) %>% group_by(country, ccode) %>% nest() save(mortality, file = "data/mortality.rda", compress = "xz") britain <- read_table(paste0(path, "GBRTENW.Mx_1x1.txt"), skip = 2, na = ".") %>% clean_names() britain$age <- as.integer(recode(britain$age, "110+" = "110")) ## britain <- britain %>% mutate(ratio = male / female, ## deciles = cut(ratio, breaks = quantile(ratio, probs = seq(0, 1, 0.1), na.rm = TRUE)), ## pct_diff = ((male - female) / (male + female))*100, ## bin_ratio = ntile(ratio, 100)) save(britain, file = "data/britain.rda", compress = "xz") france <- read_table(paste0(path, "FRATNP.Mx_1x1.txt"), skip = 2, na = ".") %>% clean_names() france$age <- as.integer(recode(france$age, "110+" = "110")) save(france, file = "data/france.rda", compress = "xz") ## france <- france %>% mutate(ratio = male / female, ## deciles = cut(ratio, breaks = quantile(ratio, probs = seq(0, 1, 0.1), na.rm = TRUE)), ## pct_diff = ((male - female) / (male + female))*100, ## bin_ratio = ntile(ratio, 100)) ireland <- read_table(paste0(path, "IRL.Mx_1x1.txt"), skip = 2, na = ".") %>% clean_names() ireland$age <- as.integer(recode(ireland$age, "110+" = "110")) save(ireland, file = "data/ireland.rda", compress = "xz") tmp <- nzl %>% mutate(ratio = male / female, deciles = cut(ratio, breaks = quantile(ratio, probs = seq(0, 1, 0.1), na.rm = TRUE)), pct_diff = ((male - female) / (male + female))*100, bin_ratio = ntile(ratio, 100)) tmp <- sweden %>% mutate(ratio = male / female, deciles = cut(ratio, breaks = quantile(ratio, probs = seq(0, 1, 0.1), na.rm = TRUE)), pct_diff = ((male - female) / (male + female))*100, bin_ratio = ntile(ratio, 100)) library(colorspace) tmp %>% filter(age < 101) %>% ggplot(aes(x = year, y = age, fill = deciles)) + geom_raster() + scale_fill_discrete_diverging("Green-Orange") + scale_x_continuous(breaks = seq(1750, 2015, by = 15)) + ylim(c(0, 100)) + guides(fill = guide_legend(nrow = 1, title.position = "top", label.position = "bottom")) + labs(x = "Year", y = "Age", fill = "M/F Mortality Ratio") + theme_minimal() + theme(legend.position = "top", legend.title = element_text(size = 8)) nzl <- read_table(paste0(path, "NZL_NM.Mx_1x1.txt"), skip = 2, na = ".") %>% clean_names() nzl$age <- as.integer(recode(nzl$age, "110+" = "110")) ## nzl <- nzl %>% mutate(ratio = male / female, ## deciles = cut(ratio, breaks = quantile(ratio, probs = seq(0, 1, 0.1), na.rm = TRUE)), ## pct_diff = ((male - female) / (male + female))*100, ## bin_ratio = ntile(ratio, 100)) save(nzl, file = "data/nzl.rda", compress = "xz") japan <- read_table(paste0(path, "JPN.Mx_1x1.txt"), skip = 2, na = ".") %>% clean_names() japan$age <- as.integer(recode(japan$age, "110+" = "110")) save(japan, file = "data/japan.rda", compress = "xz") sweden <- read_table(paste0(path, "SWE.Mx_1x1.txt"), skip = 2, na = ".") %>% 
clean_names() sweden$age <- as.integer(recode(sweden$age, "110+" = "110")) save(sweden, file = "data/sweden.rda", compress = "xz") okboomer <- read_csv("rawdata/boom_births.csv") save(okboomer, file = "data/okboomer.rda", compress = "xz") start_date <- "1938-01-01" end_date <- "1991-12-01" by_unit = "year" ###-------------------------------------------------- ### Time series ###-------------------------------------------------- ## break_vec <- seq(from=as.Date(start_date), to=as.Date(end_date), by = "month") ## break_vec <- break_vec[seq(25, length(break_vec), 60)] ## title_txt <- "Monthly Birth Rates, 1938-1991" ## subtitle_txt <- "Average births per million people per day."
section "Soundness" theory Soundness imports Completeness begin lemma permutation_validS: "fs <~~> gs --> (validS fs = validS gs)" apply(simp add: validS_def) apply(simp add: evalS_def) apply(simp add: perm_set_eq) done lemma modelAssigns_vblcase: "phi \<in> modelAssigns M \<Longrightarrow> x \<in> objects M \<Longrightarrow> vblcase x phi \<in> modelAssigns M" apply (simp add: modelAssigns_def, rule) apply(erule_tac rangeE) apply(case_tac xaa rule: vbl_casesE, auto) done lemma tmp: "(!x : A. P x | Q) ==> (! x : A. P x) | Q " by blast lemma soundnessFAll: "!!Gamma. [| u ~: freeVarsFL (FAll Pos A # Gamma); validS (instanceF u A # Gamma) |] ==> validS (FAll Pos A # Gamma)" apply (simp add: validS_def, rule) apply (drule_tac x=M in spec, rule) apply(simp add: evalF_instance) apply (rule tmp, rule) apply(drule_tac x="% y. if y = u then x else phi y" in bspec) apply(simp add: modelAssigns_def) apply force apply(erule disjE) apply (rule disjI1, simp) apply(subgoal_tac "evalF M (vblcase x (\<lambda>y. if y = u then x else phi y)) A = evalF M (vblcase x phi) A") apply force apply(rule evalF_equiv) apply(rule equalOn_vblcaseI) apply(rule,rule) apply(simp add: freeVarsFL_cons) apply (rule equalOnI, force) apply(rule disjI2) apply(subgoal_tac "evalS M (\<lambda>y. if y = u then x else phi y) Gamma = evalS M phi Gamma") apply force apply(rule evalS_equiv) apply(rule equalOnI) apply(force simp: freeVarsFL_cons) done lemma soundnessFEx: "validS (instanceF x A # Gamma) ==> validS (FAll Neg A # Gamma)" apply(simp add: validS_def) apply (simp add: evalF_instance, rule, rule) apply(drule_tac x=M in spec) apply (drule_tac x=phi in bspec, assumption) apply(erule disjE) apply(rule disjI1) apply (rule_tac x="phi x" in bexI, assumption) apply(force dest: modelAssignsD subsetD) apply (rule disjI2, assumption) done lemma soundnessFCut: "[| validS (C # Gamma); validS (FNot C # Delta) |] ==> validS (Gamma @ Delta)" (* apply(force simp: validS_def evalS_append evalS_cons evalF_FNot)*) apply (simp add: validS_def, rule, rule) apply(drule_tac x=M in spec) apply(drule_tac x=M in spec) apply(drule_tac x=phi in bspec) apply assumption apply(drule_tac x=phi in bspec) apply assumption apply (simp add: evalS_append evalF_FNot, blast) done lemma completeness: "fs : deductions (PC) = validS fs" apply rule apply(rule soundness) apply assumption apply(subgoal_tac "fs : deductions CutFreePC") apply(rule subsetD) prefer 2 apply assumption apply(rule mono_deductions) apply(simp add: PC_def CutFreePC_def) apply blast apply(rule adequacy) by assumption end
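Reading validS fs as "in every model M and for every assignment phi drawn from modelAssigns M, at least one formula of fs evaluates to true", the lemma soundnessFCut above is the familiar semantic cut rule. Restated in ordinary notation (a paraphrase of the lemma under that reading, not an addition to the theory):

```latex
% soundnessFCut, read semantically (sequents as disjunctions)
\[
  \models C \lor \textstyle\bigvee \Gamma
  \quad\text{and}\quad
  \models \lnot C \lor \textstyle\bigvee \Delta
  \quad\Longrightarrow\quad
  \models \textstyle\bigvee \Gamma \lor \textstyle\bigvee \Delta .
\]
% Fix M and phi.  If C holds under (M, phi), the second premise forces some
% formula of Delta to hold; if C fails, the first premise forces some formula
% of Gamma to hold.  Either way some formula of Gamma @ Delta is satisfied.
```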
module ADMMModule import Base: iterate using ..ObjectAgentModule using ..ConvAnalysisModule using Printf export admm mutable struct ADMMIterable{T<:Real} λ::T; ρ::T; agents::Array{<:ObjectAgent, 1}; method::Symbol; # either :freedir or :fixdir maxiter::Int; elapsed::T; function ADMMIterable(λ::T, ρ::T, agents::Array{<:ObjectAgent, 1}, method::Symbol, maxiter::Int) where T<:Real @assert (method == :fixdir || method == :freedir) ":fixdir and :freedir are the only currently supported methods" new{T}(λ, ρ, agents, method, maxiter, 0) end end function iterate(it::ADMMIterable{<:Real}, iteration::Int=0) if iteration >= it.maxiter return nothing end it.elapsed += @elapsed begin Threads.@threads for k = 1:length(it.agents) if it.method == :freedir updatex!(it.agents[k], it.λ); else updatex_dirFixed!(it.agents[k], it.λ); end end for agent in it.agents broadcastx!(agent); end for agent in it.agents updateu!(agent, it.ρ); end end return (iteration, iteration+1); end function admm(agents::Array{<:ObjectAgent, 1}; λ::T=1.0, ρ::T=1/λ, method::Symbol = :fixdir, log::Bool = false, verbose::Bool = false, maxiter::Int = 25, stoping_criteria = x -> false, ) where T<:Real if log history = ConvAnalysis_Data(); end admm_it = ADMMIterable(λ, ρ, agents, method, maxiter); for (iteration, item) = enumerate(admm_it) if log push!(history, agents); end if verbose verbose && @printf("%3d\n", iteration) end if iteration > 1 && stoping_criteria(history) break; end end log ? (admm_it.elapsed, history) : (admm_it.elapsed, nothing) end end
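Each ADMM iteration above runs a per-agent primal update (updatex!, taking the parameter λ), an exchange of the updated variables (broadcastx!), and a scaled dual update (updateu!, taking ρ); the default ρ = 1/λ in the keyword arguments suggests λ plays the role of a reciprocal penalty. For orientation only, the canonical scaled-form ADMM iteration that this three-step pattern follows is sketched below; the concrete objectives, constraints, and what broadcastx! actually exchanges are defined in ObjectAgentModule and are not read from this file.

```latex
% Canonical two-block ADMM in scaled form (reference sketch only)
\begin{aligned}
  x^{t+1} &:= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert A x + B z^{t} - c + u^{t} \rVert_2^2 \\
  z^{t+1} &:= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert A x^{t+1} + B z - c + u^{t} \rVert_2^2 \\
  u^{t+1} &:= u^{t} + A x^{t+1} + B z^{t+1} - c
\end{aligned}
```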
[STATEMENT] lemma a_omega: "ad (\<Omega> x y) = ad y + |x\<rangle> y" [PROOF STATE] proof (prove) goal (1 subgoal): 1. ad (\<Omega> x y) = ad y + |x\<rangle> y [PROOF STEP] by (simp add: Omega_def local.a_6 local.ds.fd_def)
The Strand plays host to a yearly Mardi Gras festival, Galveston Island Jazz & Blues Festival and a Victorian-themed Christmas festival called Dickens on the Strand (honoring the works of novelist Charles Dickens, especially A Christmas Carol) in early December. Galveston is home to several historic ships: the tall ship Elissa (the official Tall Ship of Texas) at the Texas Seaport Museum and USS Cavalla and USS Stewart, both berthed at Seawolf Park on nearby Pelican Island. Galveston is ranked the number one cruise port on the Gulf Coast and fourth in the United States.
<a href="https://colab.research.google.com/github/Fotrichis/LinearAlgebra_2ndSem/blob/main/Assignment_9_(Galano).ipynb" target="_parent"></a> # Lab 2 - Plotting Vector using NumPy and MatPlotLib In this laboratory we will be discussing the basics of numerical and scientific programming by working with Vectors using NumPy and MatPlotLib. ### Objectives At the end of this activity you will be able to: 1. Be familiar with the libraries in Python for numerical and scientific programming. 2. Visualize vectors through Python programming. 3. Perform simple vector operations through code. ## Discussion ### NumPy NumPy or Numerical Python, is mainly used for matrix and vector operations. It is capable of declaring computing and representing matrices. Most Python scientific programming libraries uses NumPy as the basic code. ###Scalars A single number. Variables representing scalars are typically written in lower case. Scalars can be whole numbers or decimals. \begin{align} a = 12 \qquad b = 6.5842342 \end{align} They can be positive, negative, 0 or any other real number. \begin{align} c = -8.5412\mathrm{e}{+23} \qquad d = \pi \end{align} ###Vectors A vector of dimension *n* is an **ordered** collection of *n* elements, which are called **components**. Vector notation variables are commonly written as a bold-faced lowercase letters or italicized non-bold-faced lowercase characters with an arrow (→) above the letters: Written: $\vec{v}$ Examples: \begin{align} \vec{a} = \begin{bmatrix} -10\\ 5 \end{bmatrix} \qquad \vec{b} = \begin{bmatrix} 8\\ 21\\ -3 \end{bmatrix} \qquad \vec{c} = \begin{bmatrix} 4.5 \end{bmatrix} \qquad \vec{d} = \begin{bmatrix} a\\ b\\ \frac{4}{8} \end{bmatrix} \end{align} #### Representing Vectors Now that you know how to represent vectors using their component and matrix form we can now hard-code them in Python. Let's say that you have the vectors: $$ A = 8\hat{x} + 11\hat{y} -4\hat{z}\\ B = 15\hat{x} - 7\hat{y} + 10\hat{z}\\ C = 9ax + 10ay - 11az \\ D = 14\hat{i} - 3\hat{j} + 9\hat{k}$$ In which it's matrix equivalent is: $$ A = \begin{bmatrix} 8 \\ 11\\ -4\end{bmatrix} , B = \begin{bmatrix} 15 \\ -7\\ 10\end{bmatrix} , C = \begin{bmatrix} 9 \\ 10 \\ -11 \end{bmatrix}, D = \begin{bmatrix} 14 \\ -3 \\ 9\end{bmatrix} $$ $$ A = \begin{bmatrix} 8 & 11&-4\end{bmatrix} , B = \begin{bmatrix} 15 & -7 & 10\end{bmatrix} , C = \begin{bmatrix} 9 & 10 & -11\end{bmatrix} , D = \begin{bmatrix} 14 & -3 & 9\end{bmatrix} $$ We can then start doing numpy code with this by: ```python import numpy as np ## 'np' here is short-hand name of the library (numpy) or a nickname. ``` ```python A = np.array([8, 11, -4]) B = np.array([15, -7, 10]) C = np.array([ [9], [10], [-11] ]) D = np.array ([[14], [-3], [9]]) print('Vector A is ', A) print('Vector B is ', B) print('Vector C is ', C) print('Vector D is ', D) ``` Vector A is [ 8 11 -4] Vector B is [15 -7 10] Vector C is [[ 9] [ 10] [-11]] Vector D is [[14] [-3] [ 9]] #### Describing vectors in NumPy NumPy is at the core of many types of scientific computing and is the main tool that we will use to perform Linear Algebra operations with Python. NumPy arrays can be used to represent vectors but cannot be used to differentiate between row and column vectors. \begin{align} \text{column vector} = \begin{bmatrix}1 \\ 2 \\ 3\end{bmatrix} \end{align} \begin{align} \text{row vector} = \begin{bmatrix} a & b & c\end{bmatrix} \end{align} Describing vectors is very important if we want to perform basic to advanced operations with them. 
The fundamental ways in describing vectors are knowing their shape, size and dimensions. ```python ### Checking shapes ### Shapes tells us how many elements are there on each row and column G = np.array([[15, 11, 2, 9, -0.5, 3, 10]]) G.shape ``` (1, 7) ```python ### Checking size ### Array/Vector sizes tells us many total number of elements are there in the vector G.size ``` 7 ```python ### Checking dimensions ### The dimensions or rank of a vector tells us how many dimensions are there for the vector. G.ndim ``` 2 Great! Now let's try to explore in performing operations with these vectors. #### Addition The addition rule is simple, the we just need to add the elements of the matrices according to their index. So in this case if we add vector $A$ and vector $B$ we will have a resulting vector: $$R = 23\hat{x}+4\hat{y}+6\hat{z} \\ \\or \\ \\ R = \begin{bmatrix} 23 \\ 4\\6\end{bmatrix} $$ So let's try to do that in NumPy in several number of ways: ```python R = np.add(A, B) ## this is the functional method using the numpy library P = np.add(C, D) print ("sum of A & B:") print (R) print ("sum of C & D:") print (P) ``` sum of A & B: [23 4 6] sum of C & D: [[23] [ 7] [-2]] ```python R = A + B ## this is the explicit method, since Python does a value-reference so it can ## know that these variables would need to do array operations. R ``` array([23, 4, 6]) ```python pos1 = np.array([0,0,0]) pos2 = np.array([0,1,3]) pos3 = np.array([1,5,-2]) pos4 = np.array([5,-3,3]) sumR = pos1 + pos2 + pos3 + pos4 productR = np.multiply(pos3, pos4) R = pos3 / pos4 R print("sum:") print(sumR) print("product:") print(productR) print("quotient:") print(R) ``` sum: [6 3 4] product: [ 5 -15 -6] quotient: [ 0.2 -1.66666667 -0.66666667] ##### Try for yourself! Try to implement subtraction, multiplication, and division with vectors $A$ and $B$! $$ A = \begin{bmatrix} 18 & 4&7\\ 6&10&-1\\ -9&3&-11\end{bmatrix} B = \begin{bmatrix} 7&-11&7 \\ -3&8&-13\\ 6&1&10\end{bmatrix} $$ ```python A = np.array([ [18,4,7], [6,10,-1], [-9,3,-11] ]) B = np.array([ [7,-11,7], [-3,8,-13], [6,1,10] ]) diff = np.subtract(A,B) prod = np.multiply(A,B) quot = A/B print("Difference:") print(diff) print("Product:") print(prod) print("Quotient:") print(quot) ``` Difference: [[ 11 15 0] [ 9 2 12] [-15 2 -21]] Product: [[ 126 -44 49] [ -18 80 13] [ -54 3 -110]] Quotient: [[ 2.57142857 -0.36363636 1. ] [-2. 1.25 0.07692308] [-1.5 3. -1.1 ]] ### Scaling Scaling or scalar multiplication takes a scalar value and performs multiplication with a vector. Let's take the example below: $$S = 5 \cdot A$$ We can do this in numpy through: ```python S = np.multiply(5,A) S ``` array([20, 15]) Try to implement scaling with two vectors. $$ C = \begin{bmatrix} 11 & 6\\ 8&10\\ -4&7\end{bmatrix} D = \begin{bmatrix} 7&-10 \\ -3&3\\ 5&-1\end{bmatrix} $$ $$P = 8 \ (C+D)$$ ```python C = np.array([ [11,6], [8,10], [-4,7] ]) D = np.array([ [7,-10], [-3,3], [5,-1] ]) P = np.multiply(8,C+D) P ``` array([[144, -32], [ 40, 104], [ 8, 48]]) ### MatPlotLib MatPlotLib or MATLab Plotting library is Python's take on MATLabs plotting feature. MatPlotLib can be used vastly from graping values to visualizing several dimensions of data. #### Visualizing Data It's not enough just solving these vectors so might need to visualize them. So we'll use MatPlotLib for that. We'll need to import it first. 
```python import matplotlib.pyplot as plt import matplotlib %matplotlib inline ``` ```python A = [8, -6] B = [3, -3] plt.scatter(A[0], A[1], label='A', c='green') #A[0] is Ax , A[1] is Ay , c=color plt.scatter(B[0], B[1], label='B', c='magenta') plt.grid() plt.legend() plt.show() ``` ```python A = np.array([1, -1]) B = np.array([1, 5]) R = A + B Magnitude = np.sqrt(np.sum(R**2)) plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude)) plt.xlim(-5, 5) #limit of x plt.ylim(-5, 5) #limit of y plt.quiver(0, 0, A[0], A[1], angles='xy', scale_units='xy', scale=1, color='red') plt.quiver(A[0], A[1], B[0], B[1], angles='xy', scale_units='xy', scale=1, color='green') plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='black') plt.grid() plt.show() print("Resultant:") print(R) print("Magnitude:") print(Magnitude) Slope = R[1]/R[0] print("Slope:") print(Slope) Angle = (np.arctan(Slope))*(180/np.pi) print("Angle:") print(Angle) ``` ```python n = A.shape[0] plt.xlim(-10, 10) plt.ylim(-10, 10) plt.quiver(0,0, A[0], A[1], angles='xy', scale_units='xy',scale=1) plt.quiver(A[0],A[1], B[0], B[1], angles='xy', scale_units='xy',scale=1) plt.quiver(0,0, R[0], R[1], angles='xy', scale_units='xy',scale=1) plt.show() ``` Try plotting three vectors and show the resultant vector as a result. Use the head-to-tail method. In this task, we import matplotlib to set the limits of the x and y axes, draw the grid, and draw the quivers used to plot single or multiple vectors in Google Colaboratory. To begin the program, we input values into the arrays that serve as the starting points of the vectors. Quivers connect two points to indicate a scaled unit and its direction (the head-to-tail method). This is controlled by setting a point to the given x and y values of an array; for example, A[0] is the x component of vector A. The vectors should be drawn in different colors to avoid confusion at the joining points. Lastly, the resultant (the sum of the vectors), the magnitude (the square root of the sum of the squared components of the resultant), the slope (the y component of the resultant divided by its x component), and the angle (the arctangent of the slope multiplied by 180/pi to convert radians to degrees) can also be observed in the given figure.
## 2 Vectors ```python C = np.array([-5, -3]) D = np.array([-4, 6]) R = C + D Magnitude = np.sqrt(np.sum(R**2)) plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude)) plt.xlim(-10, 10) #limit of x plt.ylim(-10, 10) #limit of y plt.quiver(0, 0, C[0], C[1], angles='xy', scale_units='xy', scale=1, color='blue') plt.quiver(C[0], C[1], D[0], D[1], angles='xy', scale_units='xy', scale=1, color='orange') plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='brown') plt.grid() plt.show() print("Resultant:") print(R) print("Magnitude:") print(Magnitude) Slope = R[1]/R[0] print("Slope:") print(Slope) Angle = (np.arctan(Slope))*(180/np.pi) print("Angle:") print(Angle) ``` ```python E = np.array([8, 2]) F = np.array([1, 9]) R = E + F Magnitude = np.sqrt(np.sum(R**2)) plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude)) plt.xlim(-15, 15) #limit of x plt.ylim(-15, 15) #limit of y plt.quiver(0, 0, E[0], E[1], angles='xy', scale_units='xy', scale=1, color='red') plt.quiver(E[0], E[1], F[0], F[1], angles='xy', scale_units='xy', scale=1, color='blue') plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='green') plt.grid() plt.show() print("Resultant:") print(R) print("Magnitude:") print(Magnitude) Slope = R[1]/R[0] print("Slope:") print(Slope) Angle = (np.arctan(Slope))*(180/np.pi) print("Angle:") print(Angle) ``` ## 3 Vectors ```python A = np.array([-1, 9.1]) B = np.array([-3, 2]) C = np.array([-4, -11]) R = A + B + C Magnitude = np.sqrt(np.sum(R**2)) plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude)) plt.xlim(-15, 16) #limit of x plt.ylim(-15, 16) #limit of y plt.quiver(0, 0, A[0], A[1], angles='xy', scale_units='xy', scale=1, color='black') plt.quiver(A[0], A[1], B[0], B[1], angles='xy', scale_units='xy', scale=1, color='purple') plt.quiver(-4, 11, C[0], C[1], angles='xy', scale_units='xy', scale=1, color='brown') plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='red') plt.grid() plt.show() print("Resultant:") print(R) print("Magnitude:") print(Magnitude) Slope = R[1]/R[0] print("Slope:") print(Slope) Angle = (np.arctan(Slope))*(180/np.pi) print("Angle:") print(Angle) ```
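One optional refinement to the angle computation used throughout this lab (not part of the original assignment): np.arctan(R[1]/R[0]) only returns values between -90° and 90°, so resultants pointing into the second or third quadrant come back in the wrong quadrant, and it fails when R[0] is zero. np.arctan2 avoids both problems. A minimal sketch, reusing the three vectors from the last cell:

```python
import numpy as np

# Same three vectors as in the three-vector cell above
A = np.array([-1, 9.1])
B = np.array([-3, 2])
C = np.array([-4, -11])
R = A + B + C                                # resultant, here [-8, 0.1]

Magnitude = np.linalg.norm(R)                # same value as np.sqrt(np.sum(R**2))
Angle = np.degrees(np.arctan2(R[1], R[0]))   # quadrant-aware angle in degrees

print("Resultant:", R)
print("Magnitude:", Magnitude)
print("Angle:", Angle)   # about 179.3 degrees; np.arctan(R[1]/R[0]) would give about -0.7
```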
(** * Category theory *) (** While [Basics.CategoryOps] gives the _signatures_ of categorical operations, this module describes their properties. *) (* begin hide *) From Coq Require Import Setoid Morphisms. From ITree.Basics Require Import CategoryOps CategoryFunctor. Import Carrier. Import CatNotations. Local Open Scope cat. Set Warnings "-future-coercion-class-field". (* end hide *) (** ** Categories *) Section CatLaws. Context {obj : Type} (C : Hom obj). Context {Eq2C : Eq2 C} {IdC : Id_ C} {CatC : Cat C}. (** [cat] must have units and be associative. *) Class CatIdL : Prop := cat_id_l : forall a b (f : C a b), id_ _ >>> f ⩯ f. Class CatIdR : Prop := cat_id_r : forall a b (f : C a b), f >>> id_ _ ⩯ f. Class CatAssoc : Prop := cat_assoc : forall a b c d (f : C a b) (g : C b c) (h : C c d), (f >>> g) >>> h ⩯ f >>> (g >>> h). Class Category : Prop := { category_cat_id_l :> CatIdL; category_cat_id_r :> CatIdR; category_cat_assoc :> CatAssoc; category_proper_cat :> forall a b c, @Proper (C a b -> C b c -> C a c) (eq2 ==> eq2 ==> eq2) cat; }. (** *** Initial object *) (** There is only one morphism between the initial object and any other object. *) Class InitialObject (i : obj) {Initial_i : Initial C i} : Prop := initial_object : forall a (f : C i a), f ⩯ empty. (** *** Terminal object *) (** There is a unique morphism from any other object and the terminal object. *) Class TerminalObject (t : obj) {Terminal_t : Terminal C t} : Prop := terminal_object : forall a (f : C a t), f ⩯ one. End CatLaws. Arguments cat_id_l {obj C Eq2C IdC CatC CatIdL} [a b] f. Arguments cat_id_r {obj C Eq2C IdC CatC CatIdR} [a b] f. Arguments cat_assoc {obj C Eq2C CatC CatAssoc} [a b c d] f g. Arguments category_proper_cat {obj C Eq2C IdC CatC Category} [a b c]. Arguments initial_object {obj C Eq2C i Initial_i InitialObject} [a] f. Arguments terminal_object {obj C Eq2C t Terminal_t TerminalObject} [a] f. (** Synonym of [initial_object]. *) Notation unique_initial := initial_object. (** Synonym of [terminal_object]. *) Notation unique_terminal := terminal_object. (** ** Mono-, Epi-, Iso- morphisms *) (** _Semi-isomorphisms_ are morphisms which compose to the identity. If [f >>> f' = id_ _], we also say that [f] is a _section_, or _split monomorphism_ and [f'] is a _retraction_, or _split epimorphism_. _Isomorphisms_ are those that compose _both ways_ to the identity. *) (** The most common example is in the category of functions: sections are injective functions, retractions are surjective functions. A minor detail to mention regarding that example is that traditional function composition is denoted backwards: [(f >>> f') x = (f' ∘ f) x = f' (f x)]. *) Section SemiIso. Context {obj : Type} (C : Hom obj). Context {Eq2C : Eq2 C} {IdC : Id_ C} {CatC : Cat C}. (** An instance [SemiIso C f f'] means that [f] is a section of [f'] in the category [C]. *) Class SemiIso {a b : obj} (f : C a b) (f' : C b a) : Prop := semi_iso : f >>> f' ⩯ id_ _. (** The class of isomorphisms *) Class Iso {a b : obj} (f : C a b) (f' : C b a) : Prop := { iso_mono : SemiIso f f'; iso_epi : SemiIso f' f; }. End SemiIso. #[global] Existing Instance iso_mono. #[global] Existing Instance iso_epi. Arguments semi_iso {obj C Eq2C IdC CatC a b} f f' {SemiIso}. (** ** Opposite *) Section OppositeCat. Context {obj : Type} (C : Hom obj). Context {Eq2C : Eq2 C} {IdC : Id_ C} {CatC : Cat C}. (* All these opposite instances are prone to trigger loops in instance resolution, notably if the category C for which an instance is looked after is a meta-variable. 
It also loops easily with the [Fun] since it can interpret it as being [Op] too easily. I therefore don't declare them [Global] and use [Local Existing Instance] where needed. I don't know if there's a better way to go. *) Instance Eq2_Op : Eq2 (op C) := fun a b => eq2 (C := C). Instance Id_Op : Id_ (op C) := id_ (C := C). Instance Cat_Op : Cat (op C) := fun a b c f g => cat (C := C) g f. End OppositeCat. (** ** Dagger *) Section DaggerLaws. Context {obj : Type} (C : Hom obj). Context {Eq2C : Eq2 C} {IdC : Id_ C} {CatC : Cat C}. Context {DagC : Dagger C}. Instance Dagger_Op : Dagger (op C) := fun a b f => dagger (C := C) f. Class DaggerInvolution : Prop := dagger_invol : forall a b (f: C a b), dagger (dagger f) ⩯ f. Local Existing Instance Eq2_Op. Local Existing Instance Id_Op. Local Existing Instance Cat_Op. Class DaggerLaws : Prop := { dagger_involution :> DaggerInvolution ; dagger_functor :> Functor (op C) C id (@dagger obj C _) }. End DaggerLaws. (** ** Bifunctors *) Section BifunctorLaws. Context {obj : Type} (C : Hom obj). Context {Eq2_C : Eq2 C} {Id_C : Id_ C} {Cat_C : Cat C}. Context (bif : binop obj). Context {Bimap_bif : Bimap C bif}. (** Vertical composition ([bimap]) must be compatible with horizontal composition ([cat]). *) Class BimapId : Prop := bimap_id : forall a b, bimap (id_ a) (id_ b) ⩯ id_ (bif a b). Class BimapCat : Prop := bimap_cat : forall a1 a2 b1 b2 c1 c2 (f1 : C a1 b1) (g1 : C b1 c1) (f2 : C a2 b2) (g2 : C b2 c2), bimap f1 f2 >>> bimap g1 g2 ⩯ bimap (f1 >>> g1) (f2 >>> g2). Class Bifunctor : Prop := { bifunctor_bimap_id :> BimapId; bifunctor_bimap_cat :> BimapCat; bifunctor_proper_bimap :> forall a b c d, @Proper (C a c -> C b d -> C _ _) (eq2 ==> eq2 ==> eq2) bimap; }. End BifunctorLaws. Arguments bimap_id {obj C Eq2_C Id_C bif Bimap_bif BimapId} a b. Arguments bimap_cat {obj C Eq2_C Cat_C bif Bimap_bif BimapCat} [_ _ _ _ _ _] f1 g1 f2 g2. (** ** Coproducts *) (** These laws capture the essence of sums. *) Section CoproductLaws. Context {obj : Type} (C : Hom obj). Context {Eq2_C : Eq2 C} {Id_C : Id_ C} {Cat_C : Cat C}. Context (bif : binop obj). Context {Case_C : Case C bif} {Inl_C : Inl C bif} {Inr_C : Inr C bif}. Class CaseInl : Prop := case_inl : forall a b c (f : C a c) (g : C b c), inl_ >>> case_ f g ⩯ f. Class CaseInr : Prop := case_inr : forall a b c (f : C a c) (g : C b c), inr_ >>> case_ f g ⩯ g. (** Uniqueness of coproducts *) Class CaseUniversal : Prop := case_universal : forall a b c (f : C a c) (g : C b c) (fg : C (bif a b) c), (inl_ >>> fg ⩯ f) -> (inr_ >>> fg ⩯ g) -> fg ⩯ case_ f g. Class Coproduct : Prop := { coproduct_case_inl :> CaseInl; coproduct_case_inr :> CaseInr; coproduct_case_universal :> CaseUniversal; coproduct_proper_case :> forall a b c, @Proper (C a c -> C b c -> C _ c) (eq2 ==> eq2 ==> eq2) case_ }. End CoproductLaws. Arguments case_inl {obj C Eq2_C Cat_C bif Case_C Inl_C CaseInl} [a b c] f g. Arguments case_inr {obj C Eq2_C Cat_C bif Case_C Inr_C CaseInr} [a b c] f g. Arguments case_universal {obj C _ _ bif _ _ _ _} [a b c] f g fg. (** More intuitive names. *) Notation inl_case := case_inl. Notation inr_case := case_inr. (** ** Products *) (** These laws capture the essence of products. *) Section ProductLaws. Context {obj : Type} (C : Hom obj). Context {Eq2_C : Eq2 C} {Id_C : Id_ C} {Cat_C : Cat C}. Context (bif : binop obj). Context {Pair_C : Pair C bif} {Fst_C : Fst C bif} {Snd_C : Snd C bif}. Class PairFst : Prop := pair_fst : forall a b c (f : C a b) (g : C a c), pair_ f g >>> fst_ ⩯ f. 
Class PairSnd : Prop := pair_snd : forall a b c (f : C a b) (g : C a c), pair_ f g >>> snd_ ⩯ g. (** Uniqueness of products *) Class PairUniversal : Prop := pair_universal : forall a b c (f : C c a) (g : C c b) (fg : C c (bif a b)), (fg >>> fst_ ⩯ f) -> (fg >>> snd_ ⩯ g) -> fg ⩯ pair_ f g. Class Product : Prop := { product_pair_fst :> PairFst; product_pair_snd :> PairSnd; product_pair_universal :> PairUniversal; product_proper_pair :> forall a b c, @Proper (C c a -> C c b -> C c _) (eq2 ==> eq2 ==> eq2) pair_ }. End ProductLaws. Arguments pair_fst {obj C Eq2_C Cat_C bif Pair_C Fst_C PairFst} [a b c] f g. Arguments pair_snd {obj C Eq2_C Cat_C bif Pair_C Snd_C PairSnd} [a b c] f g. Arguments pair_universal {obj C _ _ bif _ _ _ _} [a b c] f g fg. (** Cartesian Closure *) Section CartesianClosureLaws. Context {obj : Type} (C : Hom obj). Context {Eq2_C : Eq2 C} {Id_C : Id_ C} {Cat_C : Cat C}. Context (ONE : obj) {Term_C : Terminal C ONE}. Context (PROD : binop obj). Context {Pair_C : Pair C PROD} {Fst_C : Fst C PROD} {Snd_C : Snd C PROD}. Context (EXP : binop obj). Context {Apply_C : Apply C PROD EXP} {Curry_C : Curry C PROD EXP}. Existing Instance Bimap_Product. Class CurryApply : Prop := curry_apply : forall a b c (f : C (PROD c a) b), f ⩯ ((@bimap obj C PROD _ _ _ _ _ (curry_ f) (id_ a)) >>> apply_). Class CartesianClosed : Prop := { cartesian_terminal :> TerminalObject _ ONE; cartesian_product :> Product _ PROD; cartesian_curry_apply :> CurryApply; cartesian_proper_curry_ :> forall a b c, @Proper (C (PROD c a) b -> C c (EXP a b)) (eq2 ==> eq2) curry_ }. End CartesianClosureLaws. Arguments curry_apply {obj C Eq2_C Id_C Cat_C PROD Pair_C Fst_C Snd_C EXP Apply_C Curry_C} _ [a b c] f. (** ** Monoidal categories *) Section MonoidalLaws. Context {obj : Type} (C : Hom obj). Context {Eq2_C : Eq2 C} {Id_C : Id_ C} {Cat_C : Cat C}. Context (bif : binop obj). Context {Bimap_bif : Bimap C bif}. Context {AssocR_bif : AssocR C bif}. Context {AssocL_bif : AssocL C bif}. (** *** Associators and unitors are isomorphisms *) (** [assoc_r] and [assoc_l] are mutual inverses. *) Notation AssocIso := (forall a b c, Iso C (assoc_r_ a b c) assoc_l) (only parsing). (* TODO: should that be a notation? *) (** [assoc_r] is a split monomorphism, i.e., a left-inverse, [assoc_l] is its retraction. *) Corollary assoc_r_mono {AssocIso_C : AssocIso} : forall a b c, assoc_r >>> assoc_l ⩯ id_ (bif (bif a b) c). Proof. intros; apply semi_iso, AssocIso_C. Qed. (** [assoc_r] is a split epimorphism, i.e., a right-inverse, [assoc_l] is its section. *) Corollary assoc_l_mono {AssocIso_C : AssocIso} : forall a b c, assoc_l >>> assoc_r ⩯ id_ (bif a (bif b c)). Proof. intros; apply semi_iso, AssocIso_C. Qed. Context (i : obj). Context {UnitL_bif : UnitL C bif i}. Context {UnitL'_bif : UnitL' C bif i}. Context {UnitR_bif : UnitR C bif i}. Context {UnitR'_bif : UnitR' C bif i}. (** *** Naturality *) Class UnitLNatural : Prop := natural_unit_l : forall a b (f : C a b), bimap (id_ i) f >>> unit_l_ i b ⩯ unit_l_ i a >>> f. Class UnitL'Natural : Prop := natural_unit_l' : forall a b (f : C a b), unit_l'_ i a >>> bimap (id_ i) f ⩯ f >>> unit_l'_ i b. (** [unit_l] and [unit_l'] are mutual inverses. *) Notation UnitLIso := (forall a, Iso C (unit_l_ i a) unit_l') (only parsing). (** [unit_l] is a split monomorphism, i.e., a left-inverse, [unit_l'] is its retraction. *) Corollary unit_l_mono {UnitLIso_C : UnitLIso} : forall a, unit_l >>> unit_l' ⩯ id_ (bif i a). Proof. intros; apply semi_iso, UnitLIso_C. Qed. 
(** [unit_l] is a split epimorphism, [unit_l'] is its section. *) Corollary unit_l_epi {UnitLIso_C : UnitLIso} : forall a, unit_l' >>> unit_l ⩯ id_ a. Proof. intros; apply semi_iso, UnitLIso_C. Qed. (** [unit_r] and [unit_r'] are mutual inverses. *) Notation UnitRIso := (forall a, Iso C (unit_r_ i a) unit_r') (only parsing). Corollary unit_r_mono {UnitRIso_C : UnitRIso} : forall a, unit_r >>> unit_r' ⩯ id_ (bif a i). Proof. intros; apply semi_iso, UnitRIso_C. Qed. Corollary unit_r_epi {UnitRIso_C : UnitRIso} : forall a, unit_r' >>> unit_r ⩯ id_ a. Proof. intros; apply semi_iso, UnitRIso_C. Qed. (** *** Coherence laws *) (** The Triangle Diagram *) Class AssocRUnit : Prop := assoc_r_unit : forall a b, assoc_r >>> bimap (id_ a) unit_l ⩯ bimap unit_r (id_ b). (** The Pentagon Diagram *) Class AssocRAssocR : Prop := assoc_r_assoc_r : forall a b c d, bimap (@assoc_r _ _ _ _ a b c) (id_ d) >>> assoc_r >>> bimap (id_ _) assoc_r ⩯ assoc_r >>> assoc_r. Class Monoidal : Prop := { monoidal_bifunctor :> Bifunctor C bif; monoidal_assoc_iso :> AssocIso; monoidal_unit_l_iso :> UnitLIso; monoidal_unit_r_iso :> UnitRIso; monoidal_unit_l_natural :> UnitLNatural; monoidal_unit_l'_natural :> UnitL'Natural; monoidal_assoc_r_unit :> AssocRUnit; monoidal_assoc_r_assoc_r :> AssocRAssocR; }. (** The [assoc_l] variants can be derived by symmetry, because [assoc_l] is the inverse of [assoc_r]. *) Class AssocLUnit : Prop := assoc_l_unit : forall a b, assoc_l >>> bimap unit_r (id_ b) ⩯ bimap (id_ a) unit_l. Class AssocLAssocL : Prop := assoc_l_assoc_l : forall a b c d, bimap (id_ a) (@assoc_l _ _ _ _ b c d) >>> assoc_l >>> bimap assoc_l (id_ _) ⩯ assoc_l >>> assoc_l. End MonoidalLaws. Arguments assoc_r_mono {obj C Eq2_C Id_C Cat_C bif AssocR_bif AssocL_bif AssocIso_C} a b c. Arguments assoc_l_mono {obj C Eq2_C Id_C Cat_C bif AssocR_bif AssocL_bif AssocIso_C} a b c. Arguments unit_l_mono {obj C Eq2_C Id_C Cat_C bif i UnitL_bif UnitL'_bif UnitLIso_C} a. Arguments unit_l_epi {obj C Eq2_C Id_C Cat_C bif i UnitL_bif UnitL'_bif UnitLIso_C} a. Arguments unit_r_mono {obj C Eq2_C Id_C Cat_C bif i UnitR_bif UnitR'_bif UnitRIso_C} a. Arguments unit_r_epi {obj C Eq2_C Id_C Cat_C bif i UnitR_bif UnitR'_bif UnitRIso_C} a. Arguments assoc_r_unit {obj C Eq2_C Id_C Cat_C bif Bimap_bif AssocR_bif i UnitL_bif UnitR_bif AssocRUnit} a b. Arguments assoc_r_assoc_r {obj C Eq2_C Id_C Cat_C bif Bimap_bif AssocR_bif AssocRAssocR} a b c d. Arguments assoc_l_unit {obj C Eq2_C Id_C Cat_C bif Bimap_bif AssocL_bif i UnitL_bif UnitR_bif AssocLUnit} a b. Arguments assoc_l_assoc_l {obj C Eq2_C Id_C Cat_C bif Bimap_bif AssocL_bif AssocLAssocL} a b c d. (** ** Symmetric monoidal categories *) Section SymmetricLaws. Context {obj : Type} (C : Hom obj). Context {Eq2_C : Eq2 C} {Id_C : Id_ C} {Cat_C : Cat C}. Context (bif : binop obj). Context {Bimap_bif : Bimap C bif}. Context {Swap_bif : Swap C bif}. (** [swap] is an involution *) Notation SwapInvolutive := (forall a b, SemiIso C (swap_ a b) swap) (only parsing). Corollary swap_involutive {SwapInvolutive_C : SwapInvolutive} : forall a b, swap >>> swap ⩯ id_ (bif a b). Proof. intros; apply semi_iso, SwapInvolutive_C. Qed. Context (i : obj). Context {UnitL_i : UnitL C bif i}. Context {UnitL'_i : UnitL' C bif i}. Context {UnitR_i : UnitR C bif i}. Context {UnitR'_i : UnitR' C bif i}. (** Coherence between [swap] and unitors. *) Class SwapUnitL : Prop := swap_unit_l : forall a, swap >>> unit_l ⩯ unit_r_ _ a. Context {AssocR_bif : AssocR C bif}. Context {AssocL_bif : AssocL C bif}. 
(** Coherence between [swap] and associators. *) Class SwapAssocR : Prop := swap_assoc_r : forall a b c, @assoc_r _ _ _ _ a _ _ >>> swap >>> assoc_r ⩯ bimap swap (id_ c) >>> assoc_r >>> bimap (id_ b) swap. (** Symmetric monoidal category. *) Class SymMonoidal : Prop := { symmetric_monoidal : Monoidal C bif i; symmetric_swap_involutive : SwapInvolutive; symmetric_swap_unit_l : SwapUnitL; symmetric_swap_assoc_r : SwapAssocR; }. (* The name [Symmetric] is taken by [Setoid]. *) Class SwapAssocL : Prop := swap_assoc_l : forall a b c, @assoc_l _ _ _ _ _ _ c >>> swap >>> assoc_l ⩯ bimap (id_ a) swap >>> assoc_l >>> bimap swap (id_ b). End SymmetricLaws. Arguments swap_involutive {obj C Eq2_C Id_C Cat_C bif Swap_bif SwapInvolutive_C} a b. Arguments swap_unit_l {obj C Eq2_C Cat_C bif Swap_bif i UnitL_i UnitR_i SwapUnitL} a. Arguments swap_assoc_r {obj C Eq2_C Id_C Cat_C bif Bimap_bif Swap_bif AssocR_bif SwapAssocR} a b c. Arguments swap_assoc_l {obj C Eq2_C Id_C Cat_C bif Bimap_bif Swap_bif AssocL_bif SwapAssocL} a b c. Section IterationLaws. Context {obj : Type} (C : Hom obj). Context {Eq2_C : Eq2 C} {Id_C : Id_ C} {Cat_C : Cat C}. Context (bif : binop obj). Context {Case_C : Case C bif}. Context {Inl_C : Inl C bif}. Context {Inr_C : Inr C bif}. Context {Iter_C : Iter C bif}. (** The loop operation satisfies a fixed point equation. *) Class IterUnfold : Prop := iter_unfold : forall a b (f : C a (bif a b)), iter f ⩯ f >>> case_ (iter f) (id_ b). (** Naturality in the output (in [b], with [C a (bif a b) -> C a b]). Also known as "parameter identity". *) Class IterNatural : Prop := iter_natural : forall a b c (f : C a (bif a b)) (g : C b c), iter f >>> g ⩯ iter (f >>> bimap (id_ _) g). (** Dinaturality in the accumulator (in [a], with [C a (bif a b) -> C a b]). Also known as "composition identity". *) Class IterDinatural : Prop := iter_dinatural : forall a b c (f : C a (bif b c)) (g : C b (bif a c)), iter (f >>> case_ g inr_) ⩯ f >>> case_ (iter (g >>> case_ f inr_)) (id_ _). (** TODO: provable from the others + uniformity? *) (** Flatten nested loops. Also known as "double dagger identity". *) Class IterCodiagonal : Prop := iter_codiagonal : forall a b (f : C a (bif a (bif a b))), iter (iter f) ⩯ iter (f >>> case_ inl_ (id_ _)). (* TODO: also define uniformity, requires a "purity" assumption. *) Class Iterative : Prop := { iterative_unfold :> IterUnfold ; iterative_natural :> IterNatural ; iterative_dinatural :> IterDinatural ; iterative_codiagonal :> IterCodiagonal ; iterative_proper_iter :> forall a b, @Proper (C a (bif a b) -> C a b) (eq2 ==> eq2) iter }. (** Also called Bekic identity *) Definition IterPairing : Prop := forall a b c (f : C a (bif (bif a b) c)) (g : C b (bif (bif a b) c)), let h : C b (bif b c) := g >>> assoc_r >>> case_ (iter (f >>> assoc_r)) (id_ _) in iter (case_ f g) ⩯ iter (case_ (iter (f >>> assoc_r) >>> case_ (iter h) (id_ _) >>> inr_) (h >>> bimap inr_ (id_ _))). End IterationLaws. Arguments iter_unfold {obj C Eq2_C Id_C Cat_C bif Case_C Iter_C IterUnfold} [a b] f. Arguments iter_natural {obj C Eq2_C Id_C Cat_C bif Case_C Inl_C Inr_C Iter_C IterNatural} [a b c] f. Arguments iter_dinatural {obj C Eq2_C Id_C Cat_C bif Case_C Inr_C Iter_C IterDinatural} [a b c] f. Arguments iter_codiagonal {obj C Eq2_C Id_C Cat_C bif Case_C Inl_C Iter_C IterCodiagonal} [a b] f. Arguments iterative_proper_iter {obj C Eq2_C Id_C Cat_C bif Case_C Inl_C Inr_C Iter_C Iterative}.
invReshape : Vector ((x1 * m)::x2::xs) t -> Vector (x1::(m * x2)::xs) t
invReshape v {x1 = Z} = []
invReshape v {x1 = S r} = map redDim $ incDim v
(* Copyright (C) 2017 M.A.L. Marques This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. *) (* type: work_gga_x *) mkappa := 0.4604: mmu := 0.354546875: f0 := s -> 1 + mkappa*(1 - mkappa/(mkappa + mmu*s^2)): f := x -> f0(X2S_2D*x):
{- Byzantine Fault Tolerant Consensus Verification in Agda, version 0.9. Copyright (c) 2020, 2021, Oracle and/or its affiliates. Licensed under the Universal Permissive License v 1.0 as shown at https://opensource.oracle.com/licenses/upl -} open import LibraBFT.Prelude open import Level using (0ℓ) -- This module incldes various Agda lemmas that are independent of the project's domain module LibraBFT.Lemmas where cong₃ : ∀{a b c d}{A : Set a}{B : Set b}{C : Set c}{D : Set d} → (f : A → B → C → D) → ∀{x y u v m n} → x ≡ y → u ≡ v → m ≡ n → f x u m ≡ f y v n cong₃ f refl refl refl = refl ≡-pi : ∀{a}{A : Set a}{x y : A}(p q : x ≡ y) → p ≡ q ≡-pi refl refl = refl Unit-pi : {u1 u2 : Unit} → u1 ≡ u2 Unit-pi {unit} {unit} = refl ++-inj : ∀{a}{A : Set a}{m n o p : List A} → length m ≡ length n → m ++ o ≡ n ++ p → m ≡ n × o ≡ p ++-inj {m = []} {x ∷ n} () hip ++-inj {m = x ∷ m} {[]} () hip ++-inj {m = []} {[]} lhip hip = refl , hip ++-inj {m = m ∷ ms} {n ∷ ns} lhip hip with ++-inj {m = ms} {ns} (suc-injective lhip) (proj₂ (∷-injective hip)) ...| (mn , op) rewrite proj₁ (∷-injective hip) = cong (n ∷_) mn , op ++-abs : ∀{a}{A : Set a}{n : List A}(m : List A) → 1 ≤ length m → [] ≡ m ++ n → ⊥ ++-abs [] () ++-abs (x ∷ m) imp () data All-vec {ℓ} {A : Set ℓ} (P : A → Set ℓ) : ∀ {n} → Vec {ℓ} A n → Set (Level.suc ℓ) where [] : All-vec P [] _∷_ : ∀ {x n} {xs : Vec A n} (px : P x) (pxs : All-vec P xs) → All-vec P (x ∷ xs) ≤-unstep : ∀{m n} → suc m ≤ n → m ≤ n ≤-unstep (s≤s ss) = ≤-step ss ≡⇒≤ : ∀{m n} → m ≡ n → m ≤ n ≡⇒≤ refl = ≤-refl ∈-cong : ∀{a b}{A : Set a}{B : Set b}{x : A}{l : List A} → (f : A → B) → x ∈ l → f x ∈ List-map f l ∈-cong f (here px) = here (cong f px) ∈-cong f (there hyp) = there (∈-cong f hyp) All-self : ∀{a}{A : Set a}{xs : List A} → All (_∈ xs) xs All-self = All-tabulate (λ x → x) All-reduce⁺ : ∀{a b}{A : Set a}{B : Set b}{Q : A → Set}{P : B → Set} → { xs : List A } → (f : ∀{x} → Q x → B) → (∀{x} → (prf : Q x) → P (f prf)) → (all : All Q xs) → All P (All-reduce f all) All-reduce⁺ f hyp [] = [] All-reduce⁺ f hyp (ax ∷ axs) = (hyp ax) ∷ All-reduce⁺ f hyp axs All-reduce⁻ : ∀{a b}{A : Set a}{B : Set b} {Q : A → Set} → { xs : List A } → ∀ {vdq} → (f : ∀{x} → Q x → B) → (all : All Q xs) → vdq ∈ All-reduce f all → ∃[ v ] ∃[ v∈xs ] (vdq ≡ f {v} v∈xs) All-reduce⁻ {Q = Q} {(h ∷ _)} {vdq} f (px ∷ pxs) (here refl) = h , px , refl All-reduce⁻ {Q = Q} {(_ ∷ t)} {vdq} f (px ∷ pxs) (there vdq∈) = All-reduce⁻ {xs = t} f pxs vdq∈ List-index : ∀ {A : Set} → (_≟A_ : (a₁ a₂ : A) → Dec (a₁ ≡ a₂)) → A → (l : List A) → Maybe (Fin (length l)) List-index _≟A_ x l with break (_≟A x) l ...| not≡ , _ with length not≡ <? length l ...| no _ = nothing ...| yes found = just ( fromℕ< {length not≡} {length l} found) nats : ℕ → List ℕ nats 0 = [] nats (suc n) = (nats n) ++ (n ∷ []) _ : nats 4 ≡ 0 ∷ 1 ∷ 2 ∷ 3 ∷ [] _ = refl _ : Maybe-map toℕ (List-index _≟_ 2 (nats 4)) ≡ just 2 _ = refl _ : Maybe-map toℕ (List-index _≟_ 4 (nats 4)) ≡ nothing _ = refl allDistinct : ∀ {A : Set} → List A → Set allDistinct l = ∀ (i j : Σ ℕ (_< length l)) → proj₁ i ≡ proj₁ j ⊎ List-lookup l (fromℕ< (proj₂ i)) ≢ List-lookup l (fromℕ< (proj₂ j)) postulate -- TODO-1: currently unused; prove it, if needed allDistinct? : ∀ {A : Set} → {≟A : (a₁ a₂ : A) → Dec (a₁ ≡ a₂)} → (l : List A) → Dec (allDistinct l) -- Extends an arbitrary relation to work on the head of -- the supplied list, if any. 
data OnHead {A : Set}(P : A → A → Set) (x : A) : List A → Set where [] : OnHead P x [] on-∷ : ∀{y ys} → P x y → OnHead P x (y ∷ ys) -- Establishes that a list is sorted according to the supplied -- relation. data IsSorted {A : Set}(_<_ : A → A → Set) : List A → Set where [] : IsSorted _<_ [] _∷_ : ∀{x xs} → OnHead _<_ x xs → IsSorted _<_ xs → IsSorted _<_ (x ∷ xs) OnHead-prop : ∀{A}(P : A → A → Set)(x : A)(l : List A) → Irrelevant P → isPropositional (OnHead P x l) OnHead-prop P x [] hyp [] [] = refl OnHead-prop P x (x₁ ∷ l) hyp (on-∷ x₂) (on-∷ x₃) = cong on-∷ (hyp x₂ x₃) IsSorted-prop : ∀{A}(_<_ : A → A → Set)(l : List A) → Irrelevant _<_ → isPropositional (IsSorted _<_ l) IsSorted-prop _<_ [] hyp [] [] = refl IsSorted-prop _<_ (x ∷ l) hyp (x₁ ∷ a) (x₂ ∷ b) = cong₂ _∷_ (OnHead-prop _<_ x l hyp x₁ x₂) (IsSorted-prop _<_ l hyp a b) IsSorted-map⁻ : {A : Set}{_≤_ : A → A → Set} → {B : Set}(f : B → A)(l : List B) → IsSorted (λ x y → f x ≤ f y) l → IsSorted _≤_ (List-map f l) IsSorted-map⁻ f .[] [] = [] IsSorted-map⁻ f .(_ ∷ []) (x ∷ []) = [] ∷ [] IsSorted-map⁻ f .(_ ∷ _ ∷ _) (on-∷ x ∷ (x₁ ∷ is)) = (on-∷ x) ∷ IsSorted-map⁻ f _ (x₁ ∷ is) transOnHead : ∀ {A} {l : List A} {y x : A} {_<_ : A → A → Set} → Transitive _<_ → OnHead _<_ y l → x < y → OnHead _<_ x l transOnHead _ [] _ = [] transOnHead trans (on-∷ y<f) x<y = on-∷ (trans x<y y<f) ++-OnHead : ∀ {A} {xs ys : List A} {y : A} {_<_ : A → A → Set} → OnHead _<_ y xs → OnHead _<_ y ys → OnHead _<_ y (xs ++ ys) ++-OnHead [] y<y₁ = y<y₁ ++-OnHead (on-∷ y<x) _ = on-∷ y<x h∉t : ∀ {A} {t : List A} {h : A} {_<_ : A → A → Set} → Irreflexive _<_ _≡_ → Transitive _<_ → IsSorted _<_ (h ∷ t) → h ∉ t h∉t irfl trans (on-∷ h< ∷ sxs) (here refl) = ⊥-elim (irfl h< refl) h∉t irfl trans (on-∷ h< ∷ (x₁< ∷ sxs)) (there h∈t) = h∉t irfl trans ((transOnHead trans x₁< h<) ∷ sxs) h∈t ≤-head : ∀ {A} {l : List A} {x y : A} {_<_ : A → A → Set} {_≤_ : A → A → Set} → Reflexive _≤_ → Trans _<_ _≤_ _≤_ → y ∈ (x ∷ l) → IsSorted _<_ (x ∷ l) → _≤_ x y ≤-head ref≤ trans (here refl) _ = ref≤ ≤-head ref≤ trans (there y∈) (on-∷ x<x₁ ∷ sl) = trans x<x₁ (≤-head ref≤ trans y∈ sl) -- TODO-1 : Better name and/or replace with library property Any-sym : ∀ {a b}{A : Set a}{B : Set b}{tgt : B}{l : List A}{f : A → B} → Any (λ x → tgt ≡ f x) l → Any (λ x → f x ≡ tgt) l Any-sym (here x) = here (sym x) Any-sym (there x) = there (Any-sym x) Any-lookup-correct : ∀ {a b}{A : Set a}{B : Set b}{tgt : B}{l : List A}{f : A → B} → (p : Any (λ x → f x ≡ tgt) l) → Any-lookup p ∈ l Any-lookup-correct (here px) = here refl Any-lookup-correct (there p) = there (Any-lookup-correct p) Any-lookup-correctP : ∀ {a}{A : Set a}{l : List A}{P : A → Set} → (p : Any P l) → Any-lookup p ∈ l Any-lookup-correctP (here px) = here refl Any-lookup-correctP (there p) = there (Any-lookup-correctP p) Any-witness : ∀ {a b} {A : Set a} {l : List A} {P : A → Set b} → (p : Any P l) → P (Any-lookup p) Any-witness (here px) = px Any-witness (there x) = Any-witness x -- TODO-1: there is probably a library property for this. 
∈⇒Any : ∀ {A : Set}{x : A} → {xs : List A} → x ∈ xs → Any (_≡ x) xs ∈⇒Any {x = x} (here refl) = here refl ∈⇒Any {x = x} {h ∷ t} (there xxxx) = there (∈⇒Any {xs = t} xxxx) false≢true : false ≢ true false≢true () witness : {A : Set}{P : A → Set}{x : A}{xs : List A} → x ∈ xs → All P xs → P x witness x y = All-lookup y x maybe-⊥ : ∀{a}{A : Set a}{x : A}{y : Maybe A} → y ≡ just x → y ≡ nothing → ⊥ maybe-⊥ () refl Maybe-map-cool : ∀ {S S₁ : Set} {f : S → S₁} {x : Maybe S} {z} → Maybe-map f x ≡ just z → x ≢ nothing Maybe-map-cool {x = nothing} () Maybe-map-cool {x = just y} prf = λ x → ⊥-elim (maybe-⊥ (sym x) refl) Maybe-map-cool-1 : ∀ {S S₁ : Set} {f : S → S₁} {x : Maybe S} {z} → Maybe-map f x ≡ just z → Σ S (λ x' → f x' ≡ z) Maybe-map-cool-1 {x = nothing} () Maybe-map-cool-1 {x = just y} {z = z} refl = y , refl Maybe-map-cool-2 : ∀ {S S₁ : Set} {f : S → S₁} {x : S} {z} → f x ≡ z → Maybe-map f (just x) ≡ just z Maybe-map-cool-2 {S}{S₁}{f}{x}{z} prf rewrite prf = refl T⇒true : ∀ {a : Bool} → T a → a ≡ true T⇒true {true} _ = refl isJust : ∀ {A : Set}{aMB : Maybe A}{a : A} → aMB ≡ just a → Is-just aMB isJust refl = just tt to-witness-isJust-≡ : ∀ {A : Set}{aMB : Maybe A}{a prf} → to-witness (isJust {aMB = aMB} {a} prf) ≡ a to-witness-isJust-≡ {aMB = just a'} {a} {prf} with to-witness-lemma (isJust {aMB = just a'} {a} prf) refl ...| xxx = just-injective (trans (sym xxx) prf) deMorgan : ∀ {A B : Set} → (¬ A) ⊎ (¬ B) → ¬ (A × B) deMorgan (inj₁ ¬a) = λ a×b → ¬a (proj₁ a×b) deMorgan (inj₂ ¬b) = λ a×b → ¬b (proj₂ a×b) ¬subst : ∀ {ℓ₁ ℓ₂} {A : Set ℓ₁} {P : A → Set ℓ₂} → {x y : A} → ¬ (P x) → y ≡ x → ¬ (P y) ¬subst px refl = px ∸-suc-≤ : ∀ (x w : ℕ) → suc x ∸ w ≤ suc (x ∸ w) ∸-suc-≤ x zero = ≤-refl ∸-suc-≤ zero (suc w) rewrite 0∸n≡0 w = z≤n ∸-suc-≤ (suc x) (suc w) = ∸-suc-≤ x w m∸n≤o⇒m∸o≤n : ∀ (x z w : ℕ) → x ∸ z ≤ w → x ∸ w ≤ z m∸n≤o⇒m∸o≤n x zero w p≤ rewrite m≤n⇒m∸n≡0 p≤ = z≤n m∸n≤o⇒m∸o≤n zero (suc z) w p≤ rewrite 0∸n≡0 w = z≤n m∸n≤o⇒m∸o≤n (suc x) (suc z) w p≤ = ≤-trans (∸-suc-≤ x w) (s≤s (m∸n≤o⇒m∸o≤n x z w p≤)) tail-⊆ : ∀ {A : Set} {x} {xs ys : List A} → (x ∷ xs) ⊆List ys → xs ⊆List ys tail-⊆ xxs⊆ys x∈xs = xxs⊆ys (there x∈xs) allDistinctTail : ∀ {A : Set} {x} {xs : List A} → allDistinct (x ∷ xs) → allDistinct xs allDistinctTail allDist (i , i<l) (j , j<l) with allDist (suc i , s≤s i<l) (suc j , s≤s j<l) ...| inj₁ 1+i≡1+j = inj₁ (cong pred 1+i≡1+j) ...| inj₂ lookup≢ = inj₂ lookup≢ ∈-Any-Index-elim : ∀ {A : Set} {x y} {ys : List A} (x∈ys : x ∈ ys) → x ≢ y → y ∈ ys → y ∈ ys ─ Any-index x∈ys ∈-Any-Index-elim (here refl) x≢y (here refl) = ⊥-elim (x≢y refl) ∈-Any-Index-elim (here refl) _ (there y∈ys) = y∈ys ∈-Any-Index-elim (there _) _ (here refl) = here refl ∈-Any-Index-elim (there x∈ys) x≢y (there y∈ys) = there (∈-Any-Index-elim x∈ys x≢y y∈ys) ∉∧⊆List⇒∉ : ∀ {A : Set} {x} {xs ys : List A} → x ∉ xs → ys ⊆List xs → x ∉ ys ∉∧⊆List⇒∉ x∉xs ys∈xs x∈ys = ⊥-elim (x∉xs (ys∈xs x∈ys)) allDistinctʳʳ : ∀ {A : Set} {x x₁ : A} {xs : List A} → allDistinct (x ∷ x₁ ∷ xs) → allDistinct (x ∷ xs) allDistinctʳʳ _ (zero , _) (zero , _) = inj₁ refl allDistinctʳʳ allDist (zero , i<l) (suc j , j<l) with allDist (0 , s≤s z≤n) (suc (suc j) , s≤s j<l) ...| inj₂ x≢lookup = inj₂ λ x≡lkpxs → ⊥-elim (x≢lookup x≡lkpxs) allDistinctʳʳ allDist (suc i , i<l) (zero , j<l) with allDist (suc (suc i) , s≤s i<l) (0 , s≤s z≤n) ...| inj₂ x≢lookup = inj₂ λ x≡lkpxs → ⊥-elim (x≢lookup x≡lkpxs) allDistinctʳʳ allDist (suc i , i<l) (suc j , j<l) with allDist (2 + i , (s≤s i<l)) (2 + j , s≤s j<l) ...| inj₁ si≡sj = inj₁ (cong pred si≡sj) ...| inj₂ lookup≡ = 
inj₂ lookup≡ allDistinct⇒∉ : ∀ {A : Set} {x} {xs : List A} → allDistinct (x ∷ xs) → x ∉ xs allDistinct⇒∉ allDist (here x≡x₁) with allDist (0 , s≤s z≤n) (1 , s≤s (s≤s z≤n)) ... | inj₂ x≢x₁ = ⊥-elim (x≢x₁ x≡x₁) allDistinct⇒∉ allDist (there x∈xs) = allDistinct⇒∉ (allDistinctʳʳ allDist) x∈xs sumListMap : ∀ {A : Set} {x} {xs : List A} (f : A → ℕ) → (x∈xs : x ∈ xs) → f-sum f xs ≡ f x + f-sum f (xs ─ Any-index x∈xs) sumListMap _ (here refl) = refl sumListMap {_} {x} {x₁ ∷ xs} f (there x∈xs) rewrite sumListMap f x∈xs | sym (+-assoc (f x) (f x₁) (f-sum f (xs ─ Any-index x∈xs))) | +-comm (f x) (f x₁) | +-assoc (f x₁) (f x) (f-sum f (xs ─ Any-index x∈xs)) = refl lookup⇒Any : ∀ {A : Set} {xs : List A} {P : A → Set} (i : Fin (length xs)) → P (List-lookup xs i) → Any P xs lookup⇒Any {_} {_ ∷ _} zero px = here px lookup⇒Any {_} {_ ∷ _} (suc i) px = there (lookup⇒Any i px) x∉→AllDistinct : ∀ {A : Set} {x} {xs : List A} → allDistinct xs → x ∉ xs → allDistinct (x ∷ xs) x∉→AllDistinct _ _ (0 , _) (0 , _) = inj₁ refl x∉→AllDistinct _ x∉xs (0 , _) (suc j , j<l) = inj₂ (λ x≡lkp → x∉xs (lookup⇒Any (fromℕ< (≤-pred j<l)) x≡lkp)) x∉→AllDistinct _ x∉xs (suc i , i<l) (0 , _) = inj₂ (λ x≡lkp → x∉xs (lookup⇒Any (fromℕ< (≤-pred i<l)) (sym x≡lkp))) x∉→AllDistinct allDist x∉xs (suc i , i<l) (suc j , j<l) with allDist (i , (≤-pred i<l)) (j , (≤-pred j<l)) ...| inj₁ i≡j = inj₁ (cong suc i≡j) ...| inj₂ lkup≢ = inj₂ lkup≢ module DecLemmas {A : Set} (_≟D_ : Decidable {A = A} (_≡_)) where _∈?_ : ∀ (x : A) → (xs : List A) → Dec (Any (x ≡_) xs) x ∈? xs = Any-any (x ≟D_) xs y∉xs⇒Allxs≢y : ∀ {xs : List A} {x y} → y ∉ (x ∷ xs) → x ≢ y × y ∉ xs y∉xs⇒Allxs≢y {xs} {x} {y} y∉ with y ∈? xs ...| yes y∈xs = ⊥-elim (y∉ (there y∈xs)) ...| no y∉xs with x ≟D y ...| yes x≡y = ⊥-elim (y∉ (here (sym x≡y))) ...| no x≢y = x≢y , y∉xs ⊆List-Elim : ∀ {x} {xs ys : List A} (x∈ys : x ∈ ys) → x ∉ xs → xs ⊆List ys → xs ⊆List ys ─ Any-index x∈ys ⊆List-Elim (here refl) x∉xs xs∈ys x₂∈xs with xs∈ys x₂∈xs ...| here refl = ⊥-elim (x∉xs x₂∈xs) ...| there x∈xs = x∈xs ⊆List-Elim (there x∈ys) x∉xs xs∈ys x₂∈xxs with x₂∈xxs ...| there x₂∈xs = ⊆List-Elim (there x∈ys) (proj₂ (y∉xs⇒Allxs≢y x∉xs)) (tail-⊆ xs∈ys) x₂∈xs ...| here refl with xs∈ys x₂∈xxs ...| here refl = here refl ...| there x₂∈ys = there (∈-Any-Index-elim x∈ys (≢-sym (proj₁ (y∉xs⇒Allxs≢y x∉xs))) x₂∈ys) sum-⊆-≤ : ∀ {ys} (xs : List A) (f : A → ℕ) → allDistinct xs → xs ⊆List ys → f-sum f xs ≤ f-sum f ys sum-⊆-≤ [] _ _ _ = z≤n sum-⊆-≤ (x ∷ xs) f dxs xs⊆ys rewrite sumListMap f (xs⊆ys (here refl)) = let x∉xs = allDistinct⇒∉ dxs xs⊆ysT = tail-⊆ xs⊆ys xs⊆ys-x = ⊆List-Elim (xs⊆ys (here refl)) x∉xs xs⊆ysT disTail = allDistinctTail dxs in +-monoʳ-≤ (f x) (sum-⊆-≤ xs f disTail xs⊆ys-x) intersect : List A → List A → List A intersect xs [] = [] intersect xs (y ∷ ys) with y ∈? xs ...| yes _ = y ∷ intersect xs ys ...| no _ = intersect xs ys union : List A → List A → List A union xs [] = xs union xs (y ∷ ys) with y ∈? xs ...| yes _ = union xs ys ...| no _ = y ∷ union xs ys ∈-intersect : ∀ (xs ys : List A) {α} → α ∈ intersect xs ys → α ∈ xs × α ∈ ys ∈-intersect xs (y ∷ ys) α∈int with y ∈? 
xs | α∈int ...| no y∉xs | α∈ = ×-map₂ there (∈-intersect xs ys α∈) ...| yes y∈xs | here refl = y∈xs , here refl ...| yes y∈xs | there α∈ = ×-map₂ there (∈-intersect xs ys α∈) x∉⇒x∉intersect : ∀ {x} {xs ys : List A} → x ∉ xs ⊎ x ∉ ys → x ∉ intersect xs ys x∉⇒x∉intersect {x} {xs} {ys} x∉ x∈int = contraposition (∈-intersect xs ys) (deMorgan x∉) x∈int intersectDistinct : ∀ (xs ys : List A) → allDistinct xs → allDistinct ys → allDistinct (intersect xs ys) intersectDistinct xs (y ∷ ys) dxs dys with y ∈? xs ...| yes y∈xs = let distTail = allDistinctTail dys intDTail = intersectDistinct xs ys dxs distTail y∉intTail = x∉⇒x∉intersect (inj₂ (allDistinct⇒∉ dys)) in x∉→AllDistinct intDTail y∉intTail ...| no y∉xs = intersectDistinct xs ys dxs (allDistinctTail dys) x∉⇒x∉union : ∀ {x} {xs ys : List A} → x ∉ xs × x ∉ ys → x ∉ union xs ys x∉⇒x∉union {_} {_} {[]} (x∉xs , _) x∈∪ = ⊥-elim (x∉xs x∈∪) x∉⇒x∉union {x} {xs} {y ∷ ys} (x∉xs , x∉ys) x∈union with y ∈? xs | x∈union ...| yes y∈xs | x∈∪ = ⊥-elim (x∉⇒x∉union (x∉xs , (proj₂ (y∉xs⇒Allxs≢y x∉ys))) x∈∪) ...| no y∉xs | here refl = ⊥-elim (proj₁ (y∉xs⇒Allxs≢y x∉ys) refl) ...| no y∉xs | there x∈∪ = ⊥-elim (x∉⇒x∉union (x∉xs , (proj₂ (y∉xs⇒Allxs≢y x∉ys))) x∈∪) unionDistinct : ∀ (xs ys : List A) → allDistinct xs → allDistinct ys → allDistinct (union xs ys) unionDistinct xs [] dxs dys = dxs unionDistinct xs (y ∷ ys) dxs dys with y ∈? xs ...| yes y∈xs = unionDistinct xs ys dxs (allDistinctTail dys) ...| no y∉xs = let distTail = allDistinctTail dys uniDTail = unionDistinct xs ys dxs distTail y∉intTail = x∉⇒x∉union (y∉xs , allDistinct⇒∉ dys) in x∉→AllDistinct uniDTail y∉intTail sumIntersect≤ : ∀ (xs ys : List A) (f : A → ℕ) → f-sum f (intersect xs ys) ≤ f-sum f (xs ++ ys) sumIntersect≤ _ [] _ = z≤n sumIntersect≤ xs (y ∷ ys) f with y ∈? xs ...| yes y∈xs rewrite map-++-commute f xs (y ∷ ys) | sum-++-commute (List-map f xs) (List-map f (y ∷ ys)) | sym (+-assoc (f-sum f xs) (f y) (f-sum f ys)) | +-comm (f-sum f xs) (f y) | +-assoc (f y) (f-sum f xs) (f-sum f ys) | sym (sum-++-commute (List-map f xs) (List-map f ys)) | sym (map-++-commute f xs ys) = +-monoʳ-≤ (f y) (sumIntersect≤ xs ys f) ...| no y∉xs rewrite map-++-commute f xs (y ∷ ys) | sum-++-commute (List-map f xs) (List-map f (y ∷ ys)) | +-comm (f y) (f-sum f ys) | sym (+-assoc (f-sum f xs) (f-sum f ys) (f y)) | sym (sum-++-commute (List-map f xs) (List-map f ys)) | sym (map-++-commute f xs ys) = ≤-stepsʳ (f y) (sumIntersect≤ xs ys f)
# This file is a part of LegendTextIO.jl, licensed under the MIT License (MIT). module LegendTextIO using DelimitedFiles using ArraysOfArrays using BufferedStreams using CSV using LegendDataTypes using RadiationDetectorSignals using StaticArrays using Unitful using RadiationDetectorSignals: group_by_evtno include("util.jl") include("geant4_csv.jl") ## .root.hits files import Base, Tables using Mmap: mmap using Parsers export DarioHitsFile include("dario_hits.jl") end # module
\documentclass[12pt]{report} \usepackage{url} \usepackage[utf8]{inputenc} % This defines the font-encoding you prefer to use \usepackage[pdftex]{graphicx} \usepackage[bindingoffset=1cm,centering,includeheadfoot,margin=2cm]{geometry} \usepackage[ citestyle=numeric-comp, backend=biber, bibencoding=inputenc ]{biblatex} \addbibresource{refs.bib} \usepackage{setspace} \linespread{1.5} \setcounter{tocdepth}{2} \usepackage[colorlinks=true, pdfstartview=FitV, linkcolor=blue, citecolor=blue, urlcolor=blue]{hyperref} \setlength{\parindent}{0pt} % No indentation between paragraphs \setlength{\parskip}{10pt} % Space between paragraphs % Tables \usepackage{ltxtable} \usepackage{booktabs} % Needed for code listings \usepackage{listings} \usepackage{color} % Subfigure \usepackage{subcaption} \usepackage{floatpag} % to move floatpagenr to topright % Fußnote \usepackage[hang]{footmisc} \setlength{\footnotemargin}{-0.8em} \usepackage{csquotes} \usepackage{afterpage} % needed for empty page after front \usepackage[all]{nowidow} % prevents overhanging paragraphs \usepackage{acronym} % allow acronyms %%=================================== % Custom definitions % Signal color \definecolor{signalColor}{RGB}{164, 63, 114} \newcommand\signal[1]{\textbf{\textcolor{signalColor}{#1}}} % List with less space between items \newenvironment{cList}{ \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} }{\end{itemize}} % Enumeration with less space between items \newenvironment{cEnum}{ \begin{enumerate} \setlength{\itemsep}{0pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} }{\end{enumerate}} % Space LoL \let\Chapter\chapter \def\chapter{\addtocontents{lol}{\protect\addvspace{10pt}}\Chapter} % Rename listings and toc \renewcommand{\contentsname}{Table of Contents} \renewcommand{\lstlistlistingname}{List of Listings} \begin{document} %%======================================== % Frontmatter \include{frontmatter/front} % This is the titlepage \setcounter{page}{0} \pagenumbering{Roman} \include{frontmatter/declaration} % \include{frontmatter/acknowledgment} \include{frontmatter/summary} \tableofcontents \clearpage \phantomsection \addcontentsline{toc}{section}{\listfigurename} \listoffigures \clearpage \phantomsection \addcontentsline{toc}{section}{\listtablename} \listoftables \clearpage \phantomsection \addcontentsline{toc}{section}{\lstlistlistingname} \lstlistoflistings %%========================================= % Mainmatter \cleardoublepage \setcounter{page}{0} \pagenumbering{arabic} \include{mainmatter/01_introduction.tex} \include{mainmatter/02_background.tex} \include{mainmatter/03_systemdesign.tex} \include{mainmatter/04_evaluation.tex} \include{mainmatter/05_relatedwork.tex} \include{mainmatter/06_conclusion.tex} % Include more chapters as required. %%========================================= %Backmatter \appendix \include{backmatter/00_lessImportantText} \include{backmatter/01_acronyms} \include{backmatter/02_lexicon} \include{backmatter/03_listings} % Include more appendices as required. \cleardoublepage \addcontentsline{toc}{chapter}{Bibliography} \defbibheading{notonline}{\chapter*{Bibliography}} \printbibliography[heading=notonline] %%============================================= \end{document}
Not too sure, afaik the auto and manual V4 wagons had the same engine specs. But maybe the downpipe is different because of the auto trans? I have a completely stock JDM V4 WRX auto wagon that I picked up cheap for DD duties. From what I've read it's got a piddly TD04, 390cc injectors and of course the 4-speed slush box. I'm wondering if a 2.5" or 3" decat downpipe mated to the rest of the stock exhaust, or possibly an aftermarket centre section, would be a worthwhile upgrade? Or is that opening up a can of worms with boost creep and hitting boost cut on cold nights under high load? Thanks.
/- Copyright (c) 2020 Mario Carneiro. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Mario Carneiro, Floris van Doorn, Yury Kudryashov -/ import topology.algebra.order.monotone_continuity import topology.instances.nnreal import tactic.norm_cast /-! # Square root of a real number In this file we define * `nnreal.sqrt` to be the square root of a nonnegative real number. * `real.sqrt` to be the square root of a real number, defined to be zero on negative numbers. Then we prove some basic properties of these functions. ## Implementation notes We define `nnreal.sqrt` as the noncomputable inverse to the function `x ↦ x * x`. We use general theory of inverses of strictly monotone functions to prove that `nnreal.sqrt x` exists. As a side effect, `nnreal.sqrt` is a bundled `order_iso`, so for `nnreal` numbers we get continuity as well as theorems like `sqrt x ≤ y ↔ x * x ≤ y` for free. Then we define `real.sqrt x` to be `nnreal.sqrt (real.to_nnreal x)`. We also define a Cauchy sequence `real.sqrt_aux (f : cau_seq ℚ abs)` which converges to `sqrt (mk f)` but do not prove (yet) that this sequence actually converges to `sqrt (mk f)`. ## Tags square root -/ open set filter open_locale filter nnreal topological_space namespace nnreal variables {x y : ℝ≥0} /-- Square root of a nonnegative real number. -/ @[pp_nodot] noncomputable def sqrt : ℝ≥0 ≃o ℝ≥0 := order_iso.symm $ strict_mono.order_iso_of_surjective (λ x, x * x) (λ x y h, mul_self_lt_mul_self x.2 h) $ (continuous_id.mul continuous_id).surjective tendsto_mul_self_at_top $ by simp [order_bot.at_bot_eq] lemma sqrt_le_sqrt_iff : sqrt x ≤ sqrt y ↔ x ≤ y := sqrt.le_iff_le lemma sqrt_lt_sqrt_iff : sqrt x < sqrt y ↔ x < y := sqrt.lt_iff_lt lemma sqrt_eq_iff_sq_eq : sqrt x = y ↔ y * y = x := sqrt.to_equiv.apply_eq_iff_eq_symm_apply.trans eq_comm lemma sqrt_le_iff : sqrt x ≤ y ↔ x ≤ y * y := sqrt.to_galois_connection _ _ lemma le_sqrt_iff : x ≤ sqrt y ↔ x * x ≤ y := (sqrt.symm.to_galois_connection _ _).symm @[simp] lemma sqrt_eq_zero : sqrt x = 0 ↔ x = 0 := sqrt_eq_iff_sq_eq.trans $ by rw [eq_comm, zero_mul] @[simp] lemma sqrt_zero : sqrt 0 = 0 := sqrt_eq_zero.2 rfl @[simp] lemma sqrt_one : sqrt 1 = 1 := sqrt_eq_iff_sq_eq.2 $ mul_one 1 @[simp] lemma mul_self_sqrt (x : ℝ≥0) : sqrt x * sqrt x = x := sqrt.symm_apply_apply x @[simp] lemma sqrt_mul_self (x : ℝ≥0) : sqrt (x * x) = x := sqrt.apply_symm_apply x @[simp] lemma sq_sqrt (x : ℝ≥0) : (sqrt x)^2 = x := by rw [sq, mul_self_sqrt x] @[simp] lemma sqrt_sq (x : ℝ≥0) : sqrt (x^2) = x := by rw [sq, sqrt_mul_self x] lemma sqrt_mul (x y : ℝ≥0) : sqrt (x * y) = sqrt x * sqrt y := by rw [sqrt_eq_iff_sq_eq, mul_mul_mul_comm, mul_self_sqrt, mul_self_sqrt] /-- `nnreal.sqrt` as a `monoid_with_zero_hom`. -/ noncomputable def sqrt_hom : ℝ≥0 →*₀ ℝ≥0 := ⟨sqrt, sqrt_zero, sqrt_one, sqrt_mul⟩ lemma sqrt_inv (x : ℝ≥0) : sqrt (x⁻¹) = (sqrt x)⁻¹ := sqrt_hom.map_inv x lemma sqrt_div (x y : ℝ≥0) : sqrt (x / y) = sqrt x / sqrt y := sqrt_hom.map_div x y lemma continuous_sqrt : continuous sqrt := sqrt.continuous end nnreal namespace real /-- An auxiliary sequence of rational numbers that converges to `real.sqrt (mk f)`. Currently this sequence is not used in `mathlib`. 
-/ def sqrt_aux (f : cau_seq ℚ abs) : ℕ → ℚ | 0 := rat.mk_nat (f 0).num.to_nat.sqrt (f 0).denom.sqrt | (n + 1) := let s := sqrt_aux n in max 0 $ (s + f (n+1) / s) / 2 theorem sqrt_aux_nonneg (f : cau_seq ℚ abs) : ∀ i : ℕ, 0 ≤ sqrt_aux f i | 0 := by rw [sqrt_aux, rat.mk_nat_eq, rat.mk_eq_div]; apply div_nonneg; exact int.cast_nonneg.2 (int.of_nat_nonneg _) | (n + 1) := le_max_left _ _ /- TODO(Mario): finish the proof theorem sqrt_aux_converges (f : cau_seq ℚ abs) : ∃ h x, 0 ≤ x ∧ x * x = max 0 (mk f) ∧ mk ⟨sqrt_aux f, h⟩ = x := begin rcases sqrt_exists (le_max_left 0 (mk f)) with ⟨x, x0, hx⟩, suffices : ∃ h, mk ⟨sqrt_aux f, h⟩ = x, { exact this.imp (λ h e, ⟨x, x0, hx, e⟩) }, apply of_near, suffices : ∃ δ > 0, ∀ i, abs (↑(sqrt_aux f i) - x) < δ / 2 ^ i, { rcases this with ⟨δ, δ0, hδ⟩, intros } end -/ /-- The square root of a real number. This returns 0 for negative inputs. -/ @[pp_nodot] noncomputable def sqrt (x : ℝ) : ℝ := nnreal.sqrt (real.to_nnreal x) /-quotient.lift_on x (λ f, mk ⟨sqrt_aux f, (sqrt_aux_converges f).fst⟩) (λ f g e, begin rcases sqrt_aux_converges f with ⟨hf, x, x0, xf, xs⟩, rcases sqrt_aux_converges g with ⟨hg, y, y0, yg, ys⟩, refine xs.trans (eq.trans _ ys.symm), rw [← @mul_self_inj_of_nonneg ℝ _ x y x0 y0, xf, yg], congr' 1, exact quotient.sound e end)-/ variables {x y : ℝ} @[simp, norm_cast] lemma coe_sqrt {x : ℝ≥0} : (nnreal.sqrt x : ℝ) = real.sqrt x := by rw [real.sqrt, real.to_nnreal_coe] @[continuity] lemma continuous_sqrt : continuous sqrt := nnreal.continuous_coe.comp $ nnreal.sqrt.continuous.comp continuous_real_to_nnreal theorem sqrt_eq_zero_of_nonpos (h : x ≤ 0) : sqrt x = 0 := by simp [sqrt, real.to_nnreal_eq_zero.2 h] theorem sqrt_nonneg (x : ℝ) : 0 ≤ sqrt x := nnreal.coe_nonneg _ @[simp] theorem mul_self_sqrt (h : 0 ≤ x) : sqrt x * sqrt x = x := by rw [sqrt, ← nnreal.coe_mul, nnreal.mul_self_sqrt, real.coe_to_nnreal _ h] @[simp] theorem sqrt_mul_self (h : 0 ≤ x) : sqrt (x * x) = x := (mul_self_inj_of_nonneg (sqrt_nonneg _) h).1 (mul_self_sqrt (mul_self_nonneg _)) theorem sqrt_eq_cases : sqrt x = y ↔ y * y = x ∧ 0 ≤ y ∨ x < 0 ∧ y = 0 := begin split, { rintro rfl, cases le_or_lt 0 x with hle hlt, { exact or.inl ⟨mul_self_sqrt hle, sqrt_nonneg x⟩ }, { exact or.inr ⟨hlt, sqrt_eq_zero_of_nonpos hlt.le⟩ } }, { rintro (⟨rfl, hy⟩|⟨hx, rfl⟩), exacts [sqrt_mul_self hy, sqrt_eq_zero_of_nonpos hx.le] } end theorem sqrt_eq_iff_mul_self_eq (hx : 0 ≤ x) (hy : 0 ≤ y) : sqrt x = y ↔ y * y = x := ⟨λ h, by rw [← h, mul_self_sqrt hx], λ h, by rw [← h, sqrt_mul_self hy]⟩ theorem sqrt_eq_iff_mul_self_eq_of_pos (h : 0 < y) : sqrt x = y ↔ y * y = x := by simp [sqrt_eq_cases, h.ne', h.le] @[simp] lemma sqrt_eq_one : sqrt x = 1 ↔ x = 1 := calc sqrt x = 1 ↔ 1 * 1 = x : sqrt_eq_iff_mul_self_eq_of_pos zero_lt_one ... 
↔ x = 1 : by rw [eq_comm, mul_one] @[simp] theorem sq_sqrt (h : 0 ≤ x) : (sqrt x)^2 = x := by rw [sq, mul_self_sqrt h] @[simp] theorem sqrt_sq (h : 0 ≤ x) : sqrt (x ^ 2) = x := by rw [sq, sqrt_mul_self h] theorem sqrt_eq_iff_sq_eq (hx : 0 ≤ x) (hy : 0 ≤ y) : sqrt x = y ↔ y ^ 2 = x := by rw [sq, sqrt_eq_iff_mul_self_eq hx hy] theorem sqrt_mul_self_eq_abs (x : ℝ) : sqrt (x * x) = |x| := by rw [← abs_mul_abs_self x, sqrt_mul_self (abs_nonneg _)] theorem sqrt_sq_eq_abs (x : ℝ) : sqrt (x ^ 2) = |x| := by rw [sq, sqrt_mul_self_eq_abs] @[simp] theorem sqrt_zero : sqrt 0 = 0 := by simp [sqrt] @[simp] theorem sqrt_one : sqrt 1 = 1 := by simp [sqrt] @[simp] theorem sqrt_le_sqrt_iff (hy : 0 ≤ y) : sqrt x ≤ sqrt y ↔ x ≤ y := by rw [sqrt, sqrt, nnreal.coe_le_coe, nnreal.sqrt_le_sqrt_iff, real.to_nnreal_le_to_nnreal_iff hy] @[simp] theorem sqrt_lt_sqrt_iff (hx : 0 ≤ x) : sqrt x < sqrt y ↔ x < y := lt_iff_lt_of_le_iff_le (sqrt_le_sqrt_iff hx) theorem sqrt_lt_sqrt_iff_of_pos (hy : 0 < y) : sqrt x < sqrt y ↔ x < y := by rw [sqrt, sqrt, nnreal.coe_lt_coe, nnreal.sqrt_lt_sqrt_iff, to_nnreal_lt_to_nnreal_iff hy] theorem sqrt_le_sqrt (h : x ≤ y) : sqrt x ≤ sqrt y := by { rw [sqrt, sqrt, nnreal.coe_le_coe, nnreal.sqrt_le_sqrt_iff], exact to_nnreal_le_to_nnreal h } theorem sqrt_lt_sqrt (hx : 0 ≤ x) (h : x < y) : sqrt x < sqrt y := (sqrt_lt_sqrt_iff hx).2 h theorem sqrt_le_iff : sqrt x ≤ y ↔ 0 ≤ y ∧ x ≤ y ^ 2 := begin rw [← and_iff_right_of_imp (λ h, (sqrt_nonneg x).trans h), and.congr_right_iff], exact sqrt_le_left end /- note: if you want to conclude `x ≤ sqrt y`, then use `le_sqrt_of_sq_le`. if you have `x > 0`, consider using `le_sqrt'` -/ theorem le_sqrt (hx : 0 ≤ x) (hy : 0 ≤ y) : x ≤ sqrt y ↔ x ^ 2 ≤ y := by rw [mul_self_le_mul_self_iff hx (sqrt_nonneg _), sq, mul_self_sqrt hy] theorem le_sqrt' (hx : 0 < x) : x ≤ sqrt y ↔ x ^ 2 ≤ y := by { rw [sqrt, ← nnreal.coe_mk x hx.le, nnreal.coe_le_coe, nnreal.le_sqrt_iff, real.le_to_nnreal_iff_coe_le', sq, nnreal.coe_mul], exact mul_pos hx hx } theorem abs_le_sqrt (h : x^2 ≤ y) : |x| ≤ sqrt y := by rw ← sqrt_sq_eq_abs; exact sqrt_le_sqrt h theorem sq_le (h : 0 ≤ y) : x^2 ≤ y ↔ -sqrt y ≤ x ∧ x ≤ sqrt y := begin split, { simpa only [abs_le] using abs_le_sqrt }, { rw [← abs_le, ← sq_abs], exact (le_sqrt (abs_nonneg x) h).mp }, end theorem neg_sqrt_le_of_sq_le (h : x^2 ≤ y) : -sqrt y ≤ x := ((sq_le ((sq_nonneg x).trans h)).mp h).1 theorem le_sqrt_of_sq_le (h : x^2 ≤ y) : x ≤ sqrt y := ((sq_le ((sq_nonneg x).trans h)).mp h).2 @[simp] theorem sqrt_inj (hx : 0 ≤ x) (hy : 0 ≤ y) : sqrt x = sqrt y ↔ x = y := by simp [le_antisymm_iff, hx, hy] @[simp] theorem sqrt_eq_zero (h : 0 ≤ x) : sqrt x = 0 ↔ x = 0 := by simpa using sqrt_inj h le_rfl theorem sqrt_eq_zero' : sqrt x = 0 ↔ x ≤ 0 := by rw [sqrt, nnreal.coe_eq_zero, nnreal.sqrt_eq_zero, real.to_nnreal_eq_zero] theorem sqrt_ne_zero (h : 0 ≤ x) : sqrt x ≠ 0 ↔ x ≠ 0 := by rw [not_iff_not, sqrt_eq_zero h] theorem sqrt_ne_zero' : sqrt x ≠ 0 ↔ 0 < x := by rw [← not_le, not_iff_not, sqrt_eq_zero'] @[simp] theorem sqrt_pos : 0 < sqrt x ↔ 0 < x := lt_iff_lt_of_le_iff_le (iff.trans (by simp [le_antisymm_iff, sqrt_nonneg]) sqrt_eq_zero') @[simp] theorem sqrt_mul (hx : 0 ≤ x) (y : ℝ) : sqrt (x * y) = sqrt x * sqrt y := by simp_rw [sqrt, ← nnreal.coe_mul, nnreal.coe_eq, real.to_nnreal_mul hx, nnreal.sqrt_mul] @[simp] theorem sqrt_mul' (x) {y : ℝ} (hy : 0 ≤ y) : sqrt (x * y) = sqrt x * sqrt y := by rw [mul_comm, sqrt_mul hy, mul_comm] @[simp] theorem sqrt_inv (x : ℝ) : sqrt x⁻¹ = (sqrt x)⁻¹ := by rw [sqrt, real.to_nnreal_inv, 
nnreal.sqrt_inv, nnreal.coe_inv, sqrt] @[simp] theorem sqrt_div (hx : 0 ≤ x) (y : ℝ) : sqrt (x / y) = sqrt x / sqrt y := by rw [division_def, sqrt_mul hx, sqrt_inv, division_def] @[simp] theorem div_sqrt : x / sqrt x = sqrt x := begin cases le_or_lt x 0, { rw [sqrt_eq_zero'.mpr h, div_zero] }, { rw [div_eq_iff (sqrt_ne_zero'.mpr h), mul_self_sqrt h.le] }, end theorem sqrt_div_self' : sqrt x / x = 1 / sqrt x := by rw [←div_sqrt, one_div_div, div_sqrt] theorem sqrt_div_self : sqrt x / x = (sqrt x)⁻¹ := by rw [sqrt_div_self', one_div] theorem lt_sqrt (hx : 0 ≤ x) (hy : 0 ≤ y) : x < sqrt y ↔ x ^ 2 < y := by rw [mul_self_lt_mul_self_iff hx (sqrt_nonneg y), sq, mul_self_sqrt hy] theorem sq_lt : x^2 < y ↔ -sqrt y < x ∧ x < sqrt y := begin split, { simpa only [← sqrt_lt_sqrt_iff (sq_nonneg x), sqrt_sq_eq_abs] using abs_lt.mp }, { rw [← abs_lt, ← sq_abs], exact λ h, (lt_sqrt (abs_nonneg x) (sqrt_pos.mp (lt_of_le_of_lt (abs_nonneg x) h)).le).mp h }, end theorem neg_sqrt_lt_of_sq_lt (h : x^2 < y) : -sqrt y < x := (sq_lt.mp h).1 theorem lt_sqrt_of_sq_lt (h : x^2 < y) : x < sqrt y := (sq_lt.mp h).2 /-- The natural square root is at most the real square root -/ lemma nat_sqrt_le_real_sqrt {a : ℕ} : ↑(nat.sqrt a) ≤ real.sqrt ↑a := begin rw real.le_sqrt (nat.cast_nonneg _) (nat.cast_nonneg _), norm_cast, exact nat.sqrt_le' a, end /-- The real square root is at most the natural square root plus one -/ lemma real_sqrt_le_nat_sqrt_succ {a : ℕ} : real.sqrt ↑a ≤ nat.sqrt a + 1 := begin rw real.sqrt_le_iff, split, { norm_cast, simp, }, { norm_cast, exact le_of_lt (nat.lt_succ_sqrt' a), }, end instance : star_ordered_ring ℝ := { nonneg_iff := λ r, by { refine ⟨λ hr, ⟨sqrt r, show r = sqrt r * sqrt r, by rw [←sqrt_mul hr, sqrt_mul_self hr]⟩, _⟩, rintros ⟨s, rfl⟩, exact mul_self_nonneg s }, ..real.ordered_add_comm_group } end real open real variables {α : Type*} lemma filter.tendsto.sqrt {f : α → ℝ} {l : filter α} {x : ℝ} (h : tendsto f l (𝓝 x)) : tendsto (λ x, sqrt (f x)) l (𝓝 (sqrt x)) := (continuous_sqrt.tendsto _).comp h variables [topological_space α] {f : α → ℝ} {s : set α} {x : α} lemma continuous_within_at.sqrt (h : continuous_within_at f s x) : continuous_within_at (λ x, sqrt (f x)) s x := h.sqrt lemma continuous_at.sqrt (h : continuous_at f x) : continuous_at (λ x, sqrt (f x)) x := h.sqrt lemma continuous_on.sqrt (h : continuous_on f s) : continuous_on (λ x, sqrt (f x)) s := λ x hx, (h x hx).sqrt @[continuity] lemma continuous.sqrt (h : continuous f) : continuous (λ x, sqrt (f x)) := continuous_sqrt.comp h
The sun is shining and birds are chirping; There's a cool breeze coming in, and you've got no particular place to be. It's the perfect time to relax on the deck of your dreams. Don't waste another minute of warm weather wishing you had a comfortable outdoor space. At RCK Construction, LLC, we've been building decks in Mountain Grove, MO and the surrounding Rogersville area for over twenty years. Whether you have a fully realized project in mind or want an expert's opinion, you can trust our deck builders to get the job done right. Our decks can take on the weather, and they're sturdy enough to host your next family reunion. We'll make sure your deck is built to last through the years. We use a variety of materials, including treated wood, cedar and composite. At RCK Construction, we're passionate about deck building, and we'll work with you every step of the way. Call us today for your free custom quote.
r=0.48 https://sandbox.dams.library.ucdavis.edu/fcrepo/rest/collection/sherry-lehmann/catalogs/d7459g/media/images/d7459g-029/svc:tesseract/full/full/0.48/default.jpg Accept:application/hocr+xml
If $S$ is an open set, then the path component of $x$ in $S$ is the same as the connected component of $x$ in $S$.
From Test Require Import tactic. Section FOFProblem. Variable Universe : Set. Variable UniverseElement : Universe. Variable wd_ : Universe -> Universe -> Prop. Variable col_ : Universe -> Universe -> Universe -> Prop. Variable col_swap1_1 : (forall A B C : Universe, (col_ A B C -> col_ B A C)). Variable col_swap2_2 : (forall A B C : Universe, (col_ A B C -> col_ B C A)). Variable col_triv_3 : (forall A B : Universe, col_ A B B). Variable wd_swap_4 : (forall A B : Universe, (wd_ A B -> wd_ B A)). Variable col_trans_5 : (forall P Q A B C : Universe, ((wd_ P Q /\ (col_ P Q A /\ (col_ P Q B /\ col_ P Q C))) -> col_ A B C)). Theorem pipo_6 : (forall O E Eprime AX AY BX BY CX CY AXMBX AYMBY BXMCX BYMCY XProd BXMAX BYMAY CXMAX CYMAY CXMBX CYMBY AXMCX AYMCY L1 L2 L3 : Universe, ((wd_ O AXMBX /\ (wd_ O AYMBY /\ (wd_ O BXMCX /\ (wd_ O BYMCY /\ (wd_ O E /\ (wd_ E Eprime /\ (wd_ O Eprime /\ (col_ O E AX /\ (col_ O E AY /\ (col_ O E BX /\ (col_ O E BY /\ (col_ O E CX /\ (col_ O E CY /\ (col_ O E AXMBX /\ (col_ O E AYMBY /\ (col_ O E BXMCX /\ (col_ O E BYMCY /\ (col_ O E BXMAX /\ (col_ O E BYMAY /\ (col_ O E CXMAX /\ (col_ O E CYMAY /\ (col_ O E CXMBX /\ (col_ O E CYMBY /\ (col_ O E AXMCX /\ (col_ O E AYMCY /\ (col_ O E XProd /\ (col_ O E L1 /\ (col_ O E L2 /\ col_ O E L3)))))))))))))))))))))))))))) -> col_ AX BX CX)). Proof. time tac. Qed. End FOFProblem.
------------------------------------------------------------------------------ -- Common definitions ------------------------------------------------------------------------------ {-# OPTIONS --exact-split #-} {-# OPTIONS --no-sized-types #-} {-# OPTIONS --no-universe-polymorphism #-} {-# OPTIONS --without-K #-} module Common.DefinitionsI where open import Common.FOL.FOL using ( ¬_ ; D ) open import Common.FOL.Relation.Binary.PropositionalEquality using ( _≡_ ) -- We add 3 to the fixities of the Agda standard library 0.8.1 (see -- Relation/Binary/Core.agda). infix 7 _≢_ ------------------------------------------------------------------------------ -- Inequality. _≢_ : D → D → Set x ≢ y = ¬ x ≡ y
Formal statement is: lemma sets_vimage_algebra2: "f \<in> X \<rightarrow> space M \<Longrightarrow> sets (vimage_algebra X f M) = {f -` A \<inter> X | A. A \<in> sets M}" Informal statement is: If $f$ is a function from $X$ to the measurable space $M$, then the sets of the vimage algebra of $f$ are the preimages of the sets of $M$ under $f$.
module TestLib import Postgres import System import Data.String import Data.String.Extra import Data.List import Data.List1 export databaseUrl : HasIO io => io (Maybe String) databaseUrl = getEnv "TEST_DATABASE_URL" ||| Strip the database name off the end of the database URL ||| and append the test database. testDatabaseUrl : String -> String testDatabaseUrl url = let splitUrl = split (== '/') url allButDatabase = init splitUrl in join "/" $ (allButDatabase `snoc` "pg_idris_test") public export record Config where constructor MkConfig databaseUrl : String export getTestConfig : HasIO io => io (Either String Config) getTestConfig = do Just url <- databaseUrl | Nothing => pure $ Left "Missing TEST_DATABASE_URL environment variable needed for testing." pure $ Right $ MkConfig url export dbSetup : Database () Open (const Open) dbSetup = do liftIO' $ putStrLn "Setting database up" res1 <- exec $ perform "drop database if exists pg_idris_test" liftIO' . putStrLn $ show res1 res2 <- exec $ perform "create database pg_idris_test" liftIO' . putStrLn $ show res2 liftIO' $ putStrLn "test database created" export withTestDB : HasIO io => {default False setup : Bool} -> (run : Database () Open (const Open)) -> io Bool withTestDB {setup} run = do Right config <- getTestConfig | Left err => do putStrLn err pure False let databaseUrl : String = if setup then config.databaseUrl else testDatabaseUrl config.databaseUrl Right () <- withDB databaseUrl run | Left err => do putStrLn err pure False pure True
subroutine delseg(delsgs,ndel,nadj,madj,npd,x,y,ntot,nerror) # Output the endpoints of the line segments joining the # vertices of the Delaunay triangles. # Called by master. implicit double precision(a-h,o-z) logical value dimension nadj(-3:ntot,0:madj), x(-3:ntot), y(-3:ntot) dimension delsgs(6,ndel) # For each distinct pair of points i and j, if they are adjacent # then put their endpoints into the output array. npd = ntot-4 kseg = 0 do i = 2,npd { do j = 1,i-1 { call adjchk(i,j,value,nadj,madj,ntot,nerror) if(nerror>0){ return } if(value) { kseg = kseg+1 if(kseg > ndel) { nerror = 14 return } delsgs(1,kseg) = x(i) delsgs(2,kseg) = y(i) delsgs(3,kseg) = x(j) delsgs(4,kseg) = y(j) delsgs(5,kseg) = i delsgs(6,kseg) = j } } } ndel = kseg return end
#Author: Matt Williams #Version: 06/24/2022 #Grid Search Cross Validation is done so we can get a better understanding #of the hyperparameter settings we need in order to optimize the algorithms for #our dataset. from numpy import ndarray from sklearn.experimental import enable_halving_search_cv from sklearn.model_selection import HalvingGridSearchCV from sklearn.naive_bayes import ComplementNB from sklearn.preprocessing import minmax_scale from get_article_vectors import get_training_info from save_load_json import save_json from utils import make_cv_result_path, K_FOLDS, CV_BEST_DICT_KEY def run_grid_cv(classifier, param_grid, vec_model_name, c_name, n_jobs = 3): '''Given a classifier instance, its associated param grid, the name of the vector model we are to use the training data from, and the classifier name. Run Grid Search Cross validation and save the results to a json file. ''' training_data, training_labels = get_training_info(vec_model_name) grid_search_cv = HalvingGridSearchCV(classifier, param_grid, n_jobs=n_jobs, verbose=2, cv = K_FOLDS, refit=False, scoring='accuracy', min_resources=100) if type(classifier) is ComplementNB: training_data = minmax_scale(training_data, feature_range=(0,1)) grid_search_cv.fit(training_data, training_labels) file_path = make_cv_result_path(c_name, vec_model_name) cv_results = grid_search_cv.cv_results_ for key in cv_results.keys(): if isinstance(cv_results[key], ndarray): cv_results[key] = cv_results[key].tolist() cv_results[CV_BEST_DICT_KEY] = grid_search_cv.best_params_ cv_results['Word Vector Model'] = vec_model_name cv_results['best score'] = grid_search_cv.best_score_ save_json(cv_results, file_path)
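A short driver script shows how this function is intended to be called. This is a sketch only: the module name `grid_search_cv`, the vector-model name `"doc2vec"`, and the parameter grid are assumptions for illustration, not values taken from the original project.

```python
from sklearn.naive_bayes import ComplementNB
from grid_search_cv import run_grid_cv  # hypothetical module name for the file above

if __name__ == "__main__":
    # alpha and norm are the two main ComplementNB hyperparameters
    param_grid = {
        "alpha": [0.1, 0.5, 1.0, 2.0],
        "norm": [True, False],
    }
    run_grid_cv(ComplementNB(), param_grid,
                vec_model_name="doc2vec", c_name="ComplementNB", n_jobs=3)
```

The resulting JSON file then holds the full `cv_results_` table plus the best parameter combination stored under the `CV_BEST_DICT_KEY` entry, so later scripts can refit the classifier without re-running the search.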
State Before: G : Type u_1 inst✝² : Group G A : Type ?u.4059 inst✝¹ : AddGroup A N : Type ?u.4065 inst✝ : Group N g : G ⊢ zpowers g = closure {g} State After: case h G : Type u_1 inst✝² : Group G A : Type ?u.4059 inst✝¹ : AddGroup A N : Type ?u.4065 inst✝ : Group N g x✝ : G ⊢ x✝ ∈ zpowers g ↔ x✝ ∈ closure {g} Tactic: ext State Before: case h G : Type u_1 inst✝² : Group G A : Type ?u.4059 inst✝¹ : AddGroup A N : Type ?u.4065 inst✝ : Group N g x✝ : G ⊢ x✝ ∈ zpowers g ↔ x✝ ∈ closure {g} State After: no goals Tactic: exact mem_closure_singleton.symm
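The proof-state dump above is the trace of a two-step tactic proof; reading the goal and the two tactics off it gives roughly the following script. This is a reconstruction, not the original source: the `import Mathlib`, the `open Subgroup`, and the anonymous `example` form are choices made here.

```lean
import Mathlib

open Subgroup

-- `ext` reduces the subgroup equality to a membership equivalence,
-- which is exactly `mem_closure_singleton.symm`, as in the trace above.
example {G : Type*} [Group G] (g : G) : zpowers g = closure ({g} : Set G) := by
  ext
  exact mem_closure_singleton.symm
```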
#ifndef SHIFT_PROTO_ALIAS_HPP #define SHIFT_PROTO_ALIAS_HPP #include <cstddef> #include <cstdint> #include <list> #include <stdexcept> #include <type_traits> #include <unordered_map> #include <utility> #include <vector> #include <memory> #include <string> #include <utility> #include <shift/core/boost_disable_warnings.hpp> #include <boost/lexical_cast.hpp> #include <boost/variant/get.hpp> #include <boost/variant/recursive_wrapper.hpp> #include <boost/variant/variant.hpp> #include <shift/core/boost_restore_warnings.hpp> #include "shift/proto/types.hpp" #include "shift/proto/node.hpp" #include "shift/proto/type_reference.hpp" namespace shift::proto { struct alias final : public node { alias() = default; alias(const alias&) = default; alias(alias&&) = default; ~alias() override; alias& operator=(const alias&) = default; alias& operator=(alias&&) = default; /// @see node::generate_uids. void generate_uids() override; /// Returns the actual type referenced, which is different from /// type_reference when chaining aliases. const type_reference& actual_type_reference() const; type_reference reference; }; } #endif
[STATEMENT] lemma delete_idem: "delete k (delete k al) = delete k al" [PROOF STATE] proof (prove) goal (1 subgoal): 1. delete k (delete k al) = delete k al [PROOF STEP] by (simp add: delete_eq)
Formal statement is: lemma measure_frontier: "bounded S \<Longrightarrow> measure lebesgue (frontier S) = measure lebesgue (closure S) - measure lebesgue (interior S)" Informal statement is: If $S$ is a bounded set, then the measure of the boundary of $S$ is equal to the measure of the closure of $S$ minus the measure of the interior of $S$.
[Open in Colab](https://colab.research.google.com/github/DataScienceUB/DeepLearningMaster2019/blob/master/14.%20Uncertainty_and_Probabilistic_Layers.ipynb)

# Deep Learning, uncertainty and probabilistic models.

> **Certainty** is perfect knowledge that has total security from error, or the mental state of being without doubt. (*Wikipedia*)

> **Uncertainty** has been called *an unintelligible expression without a straightforward description*. It describes a situation involving insecurity and/or unknown information. It applies to predictions of future events, to physical measurements that are already made, or to the unknown. (*Wikipedia*)

A common understanding of the term **uncertainty** in scientific works is that of a measure that reflects the amount of dispersion of a random variable. In other words, it is a scalar measure of how "random" a random variable is. But this is a reduction of the term.

> There is **no single formula for uncertainty** because there are many different ways to measure dispersion: standard deviation, variance, and entropy are all appropriate measures. However, it's important to keep in mind that a single scalar number cannot paint a full picture of "randomness", as that would require communicating the entire random variable itself! *Eric Jang*.

When working with predictive systems, measuring uncertainty is **important for dealing with the risk associated with decisions**. If we can measure risk, we can define policies related to making use of predictions.

Regarding uncertainty, we can be in several **states**:

+ Complete certainty. This is the case of macroscopic mechanics, where given an input there is no uncertainty about the output.
+ Uncertainty with risk. This is the case of a game with an uncertain output, but where we exactly know the probability distribution over outcomes.
+ Fully reducible uncertainty. This is the case of a game where, by gathering sufficient data and/or using the right model, we can get complete certainty.
+ Partially reducible uncertainty. This is the case of games where we have incomplete knowledge about the probability distribution over outcomes. By gathering sufficient data we can learn the probability distribution over outcomes.
+ Etc.

Uncertainty in predictive systems $y = f_w(x)$ can be **caused** by several factors (see next). Different types of uncertainty must be measured in different ways.

## Epistemic uncertainty

Epistemic uncertainty corresponds to the uncertainty originated by the lack of knowledge about the model. It reflects to what extent our model is able to describe the distribution that generated the data. In this case we can talk of two different types of uncertainty, caused by whether the model has been **trained with enough data** to learn the full distribution of the data, or by whether the **expressiveness of the model is able to capture the complexity of the distribution**.

When using an expressive enough model, this type of uncertainty *can be reduced* by providing more samples during the training phase.

## Aleatoric uncertainty

In this case the uncertainty measured belongs to the data. This uncertainty is inherent to the data and *can't be reduced* by adding more data to the training process. Its cause can be the lack of information in $x$ to determine $y$ or the lack of precision when measuring $x$ or $y$.

**Aleatoric uncertainty propagates from the inputs to the model predictions**. Consider a simple model $y=5x$, which takes in normally-distributed input $x \sim N(0,1)$.
In this case, $y \sim N(0,5)$, so the aleatoric uncertainty of the predictive distribution can be described by $\sigma=5$. There are two types of aleatoric uncertainty: + **Homoscedastic**: Uncertainty remains constant for all the data. For example: $y \sim N(0,5)$. + **Heteroscedastic**: Uncertainty depends on the input. For example: $y \sim N(0,f(x))$. ## Out of Distribution (OoD) Determining whether inputs are "valid" is a serious problem for deploying ML in the wild, and is known as the Out of Distribution (OoD) problem. There are two ways to handle OoD inputs for a machine learning model: 1) Model the bad inputs and detect them before we even put them through the model 2) Let the "weirdness" of model predictions imply to us that the input was probably malformed. Modeling bad inputs is a difficult task. Training a discriminator is not completely robust because it can give arbitrary predictions for an input that lies in neither distribution. Instead of a discriminator, we could build a density model of the in-distribution data, such as a kernel density estimator. The second approach to OoD detection involves using the predictive (epistemic) uncertainty of the task model to tell us when inputs are OoD. Ideally, malformed inputs to a model ought to generate a "weird" predictive distribution $p(y|x)$. For instance, it has been shown that the maximum softmax probability (the confidence assigned to the predicted class) for OoD inputs tends to be lower than for in-distribution inputs. ## Example Uncertainty can be understood from a **simple machine learning model** that attempts to predict daily rainfall from a sequence of barometer readings. Aleatoric uncertainty is irreducible randomness that arises from the data collection process. Epistemic uncertainty reflects confidence that our model is making the correct predictions. Finally, out-of-distribution errors arise when the model sees an input that differs from its training data (e.g. temperature of the sun, other anomalies). *Eric Jang*. ## Calibration Just because a model is able to output a "confidence interval" for a prediction doesn't mean that it actually reflects the probabilities of outcomes in reality! Suppose our rainfall model tells us that there will be $N(4,1)$ inches of rain today. If our model is **calibrated**, then if we were to repeat this experiment over and over again under identical conditions (possibly re-training the model each time), we really would observe empirical rainfall to be distributed exactly $N(4,1)$. Machine Learning models mostly optimize for test accuracy or some fitness function. Researchers are not performing model selection by deploying the model in repeated identical experiments and measuring calibration error, so, unsurprisingly, these **models tend to be poorly calibrated**. ## Uncertainty and Neural Networks A trained neural network $f$ can be viewed as the instantiation of a specific probabilistic model $p(y|x,D)$. For classification, $y$ is a set of classes and $p(y|x,D)$ is a **categorical distribution**. For regression, $y$ is a continuous variable and $p(y|x,D)$ can be modelled with a **Gaussian (or Laplacian) distribution**.
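As a small illustration (this sketch is not part of the original notebook; it only uses standard Keras layers, and the layer sizes are arbitrary), these two choices of output distribution simply correspond to different output heads of a network:

```
# Illustrative sketch: output heads that parametrize p(y|x).
import tensorflow as tf

n_features, n_classes = 10, 5

# Classification: a softmax head parametrizes a categorical distribution over classes.
clf = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(n_features,)),
    tf.keras.layers.Dense(n_classes, activation='softmax'),   # p(y = d | x)
])

# Regression: two linear heads parametrize a Gaussian N(mu(x), sigma(x)^2).
inputs = tf.keras.Input(shape=(n_features,))
hidden = tf.keras.layers.Dense(32, activation='relu')(inputs)
mu = tf.keras.layers.Dense(1)(hidden)        # predicted mean
log_var = tf.keras.layers.Dense(1)(hidden)   # predicted log-variance (unconstrained)
reg = tf.keras.Model(inputs, [mu, log_var])
```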
Our training goal can be to **find the most probable network instance** (represented by parameters $w$) that generated the outputs: $$ p(y | x, D) = \int_w p(y | x, w)p(w|D)dw $$ We can observe that the distribution of the output depends on two terms: one that depends on the application of the **model to the input data**, and a second one that is measuring **how the model may vary depending on the training data**. From this definition we can say that the first term is modeling the **aleatoric uncertainty**, as it is measuring how the output is distributed when the input is $x$, and the second term is modeling the **epistemic uncertainty**, as it is measuring the uncertainty induced by the parameters of the model. ## Epistemic Uncertainty In the case of the epistemic uncertainty, let's assume that we want to train a model with parameters $w$ that produces an output $y = f_w(x)$. To capture epistemic uncertainty in a neural network we will adopt a Bayesian point of view and we will put a prior distribution over its weights, for example a Gaussian prior distribution: $w \sim \mathcal{N}(0, I)$. Such a model is referred to as a **Bayesian neural network**. Bayesian neural networks replace the deterministic network's weight parameters with distributions over these parameters. In the case of regression, we suppose the model likelihood is $p(y | x,w) = \mathcal{N}(f_w(x), \sigma^2)$. For classification tasks, we assume a softmax likelihood: $$ p(y = d | x, w) = \frac{\exp(f^d_w(x))}{\sum_{d'}\exp(f^{d'}_w(x))} $$ Given a dataset $D = (X,Y)$ we then look for the posterior distribution over the space of parameters by invoking Bayes' theorem: $$ p(w|X,Y) = \frac{p(Y|X,w) p(w)}{p(Y|X)} $$ The true posterior $p(w|X, Y)$ cannot usually be evaluated analytically, but we can use indirect methods such as variational inference or MC-dropout. This distribution captures the most probable function parameters given our observed data. With it we can predict an output for a new input point $x$ by integrating: $$ p(y|x,X,Y) = \int p(y|x,w)p(w|X,Y)dw $$ This integral can be computed by a Monte Carlo method: $$ \mathop{\mathbb{E}}(y|x, X,Y) \approx \frac{1}{M}\sum_{j=1}^{M} f_{w_{j}}(x) $$ Finally, we can calculate the corresponding variance: $$ \mathop{\mathbb{Var}}(y) \approx \frac{1}{M}\sum_{j=1}^{M} f_{w_{j}}(x)^2 - \mathop{\mathbb{E}}(y|x, X,Y)^2 $$ which can be used as a measure of the **epistemic uncertainty**. ## Aleatoric Uncertainty For obtaining the heteroscedastic uncertainty, we follow the same approach as in the epistemic case: we consider that we have a **fixed model** $f_w(x)$ and we want to observe the variability of the term $p(y|f_w(x))$. In **regression tasks**, we can assume that $y \sim \mathcal{N}(f_{w}(x), \sigma(x)^2)$, where $f_{w}(x)$ is the predicted value for $x$ and $\sigma(x)$ is the unknown standard deviation of this value. How do we estimate this value? In the same way we use a deep learning model to estimate $f_w(x)$, we can use a deep learning model to estimate the standard deviation $\sigma_w(x)$, and then $y \sim \mathcal{N}(f_{w}(x), \sigma_w(x)^2)$. Applying this approximation to the log-likelihood adds an additional term to the loss that depends on $\sigma_w(x)$: $$ \mathcal L = - \frac{1}{N} \sum_{i=1}^N \log p(y_i|f_w(x_i)) = \frac{1}{N} \sum_{i=1}^N \left[ \frac{(y_i - f_w(x_i))^2}{2 \sigma_w(x_i)^2} + \frac{1}{2} \log \sigma_w(x_i)^2 \right] + \mathrm{const} $$ We can easily optimize a model that outputs $(f_{w}(x), \sigma_w(x))$ by using the reparametrization trick in the last layer of the network.
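Before moving to classification, here is a minimal sketch (not part of the original notebook) of how the two Monte Carlo estimates above could be computed in practice, assuming the posterior samples $w_j$ are approximated with MC-dropout, i.e. a network containing dropout layers that are kept active at prediction time; the function name and the number of samples are illustrative:

```
import numpy as np

def mc_predict(model, x, n_samples=50):
    # Each stochastic forward pass (training=True keeps dropout active)
    # corresponds to one sample f_{w_j}(x) from the approximate posterior.
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    mean = preds.mean(axis=0)                     # (1/M) sum_j f_{w_j}(x)
    var = (preds ** 2).mean(axis=0) - mean ** 2   # (1/M) sum_j f_{w_j}(x)^2 - E[y]^2
    return mean, var
```

The returned `var` is the Monte Carlo estimate of the epistemic uncertainty defined above.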
In **classification tasks**, this approximation is not as straightforward as in regression, as for classification there is already an uncertainty component due to the distribution of the probabilities when applying the softmax to the logits. In this scenario, we can apply the same assumption but to the logits space instead of the output directly: $$\begin{split} \mathrm{logits} \sim \mathcal{N}(f_{w}(x), \mathrm{diag}(\sigma_w(x)^2)) \\ p = \mathrm{softmax}(\mathrm{logits}) \\ y \sim \mathrm{Categorical}(p) \end{split} $$ Here we can apply the reparametrization trick for computing the logits, $u$: $$ \begin{split} u \sim \mathcal{N}(f_{w}(x), \mathrm{diag}(\sigma_w(x)^2)) \\ u = f_{w}(x) + \sqrt{\mathrm{diag}(\sigma_w(x)^2)} \, \epsilon = f_{w}(x) + \mathrm{diag}(\sigma_w(x)) \, \epsilon, \quad \epsilon\sim\mathcal{N}(0,I) \end{split} $$ And then apply Monte Carlo sampling to obtain the expectation of the probability: $$ \mathop{\mathbb{E}}[p] = \frac{1}{M}\sum^{M}_{m=1}\mathrm{softmax}(u^{m}) $$ When applied to a crossentropy loss we have that: $$ \begin{align} \ell(W) & = -\frac{1}{C}\sum^{C}_{c=1}y_{i,c}\log(p_{i,c}) \\ & = -\frac{1}{C}\sum^{C}_{c=1}y_{i,c}\log\left(\frac{1}{M}\sum^{M}_{m=1}\mathrm{softmax}(u^{m}_{i})_c\right)\\ & = -\frac{1}{C}\sum^{C}_{c=1}y_{i,c}\log\left(\frac{1}{M}\sum^{M}_{m=1}\frac{\exp(u^{m}_{i,c})}{\sum^{C}_{c'=1}\exp(u^{m}_{i,c'})}\right) \\ & = -\frac{1}{C}\sum^{C}_{c=1}y_{i,c}\left(\log\sum^{M}_{m=1}\exp\left(u^{m}_{i,c} - \log\sum^{C}_{c'=1}\exp(u^{m}_{i,c'})\right)-\log M\right) \\ \end{align} $$ where $C$ is the number of classes and $M$ is the number of Monte Carlo samples taken. Once trained, the sigmas can be used to obtain a measure of the aleatoric uncertainty associated with the input, by calculating the mean variance of the softmax probabilities across the Monte Carlo samples: $$ \mathbb{U} = \frac{1}{C}\sum^{C}_{c=1}\mathrm{Var}[p_{c}] = \frac{1}{C}\sum^{C}_{c=1}\mathrm{Var}_{m \in \{1,\dots,M\}}\left(\mathrm{softmax}(u^{m})_c\right) $$ ## Total uncertainty Epistemic and aleatoric uncertainty can be combined in a model. Then, the predictive uncertainty for $y$ can be approximated by: $$ Var(y) \approx \frac{1}{T} \sum_{t=1}^{T} y_t^2 - \left(\frac{1}{T} \sum_{t=1}^{T} y_t\right)^2 + \frac{1}{T} \sum_{t=1}^{T} \sigma_t(x)^2 $$ with $\{y_t,\sigma_t(x)\}$ a set of $T$ sampled outputs: $y_t, \sigma_t (x) = f_{w_t} (x)$ for random weights $w_t \sim p(w)$. # Predicting probability distributions #### Regression with a pre-defined homoscedastic noise model A network can now be trained with a Gaussian negative log likelihood function (`neg_log_likelihood`) as the loss function, assuming a **fixed standard deviation** (`noise`). This is equivalent to considering the following loss function: $$ LogLoss = \sum_i \left[ \frac{(y_i - f(x_i))^2}{2 \sigma^2}+\frac{1}{2} \log \sigma^2 \right] $$ where the model predicts a mean $f(x_i)$.
``` import tensorflow import numpy as np import matplotlib.pyplot as plt %matplotlib inline def f(x, sigma): epsilon = np.random.randn(*x.shape) * sigma return 10 * np.sin(2 * np.pi * (x)) + epsilon train_size = 320 noise = 1.0 plt.figure(figsize=(8,4)) X = np.linspace(-0.8, 0.8, train_size).reshape(-1, 1) y = f(X, sigma=noise) y_true = f(X, sigma=0.0) plt.scatter(X, y, marker='+', label='Training data') plt.plot(X, y_true, label='Truth', color='r') plt.title('Noisy training data and ground truth') plt.legend(); print(X[0],y[0],y_true[0]) ``` ``` from tensorflow.keras import backend as K from tensorflow.keras import activations, initializers from tensorflow.keras.layers import Input, Dense from tensorflow.keras.models import Model from tensorflow.keras import callbacks, optimizers import tensorflow as tf import tensorflow_probability as tfp x_in = Input(shape=(1,)) x = Dense(20, activation='relu')(x_in) x = Dense(20, activation='relu')(x) x = Dense(1)(x) model = Model(x_in, x) model.summary() ``` WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see: * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md * https://github.com/tensorflow/addons If you depend on functionality not listed there, please file an issue. WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) (None, 1) 0 _________________________________________________________________ dense (Dense) (None, 20) 40 _________________________________________________________________ dense_1 (Dense) (None, 20) 420 _________________________________________________________________ dense_2 (Dense) (None, 1) 21 ================================================================= Total params: 481 Trainable params: 481 Non-trainable params: 0 _________________________________________________________________ ``` def neg_log_likelihood(y_true, y_pred, sigma=1.0): dist = tfp.distributions.Normal(loc=y_pred, scale=sigma) return K.sum(-dist.log_prob(y_true)) model.compile(loss=neg_log_likelihood, optimizer=optimizers.Adam(lr=0.01), metrics=['mse']) model.fit(X, y, batch_size=10, epochs=150, verbose=0); ``` ``` from tqdm import tqdm_notebook X_test = np.linspace(-0.8, 0.8, 1000).reshape(-1, 1) y_pred_list = [] y_pred = model.predict(X_test) y_mean = np.mean(y_pred, axis=1) y_sigma = noise plt.figure(figsize=(8,4)) plt.plot(X_test, y_mean, 'r-', label='Predictive mean'); plt.scatter(X, y, marker='+', label='Training data') plt.fill_between(X_test.ravel(), y_mean + 2 * y_sigma, y_mean - 2 * y_sigma, alpha=0.2, label='Uncertainty (Confidence Interval)') plt.title('Prediction') plt.legend(); ``` #### Regression with an heteroscedastic noise model This is equivalent to consider the following loss function: $$ LogLoss = \sum_i \frac{(y_i - f(x_i))^2}{2 \sigma^2(x_i)}+\frac{1}{2} \log \sigma^2(x_i) $$ ``` # https://engineering.taboola.com/predicting-probability-distributions/ import numpy as np import pandas as pd import seaborn as sns import tensorflow as tf import matplotlib.pyplot as plt from sklearn.utils import shuffle def f(x): return x**2-6*x+9 
def data_generator(x,sigma_0,samples): return np.random.normal(f(x),sigma_0*x,samples) sigma_0 = 0.1 x_vals = np.arange(1,5.2,0.2) x_arr = np.array([]) y_arr = np.array([]) samples = 50 for x in x_vals: x_arr = np.append(x_arr, np.full(samples,x)) y_arr = np.append(y_arr, data_generator(x,sigma_0,samples)) x_arr, y_arr = shuffle(x_arr, y_arr) x_test = np.arange(1.1,5.1,0.2) fig, ax = plt.subplots(figsize=(10,10)) plt.grid(True) plt.xlabel('x') plt.ylabel('g(x)') ax.scatter(x_arr,y_arr,label='sampled data') ax.plot(x_vals,list(map(f,x_vals)),c='m',label='f(x)') ax.legend(loc='upper center',fontsize='large',shadow=True) plt.show() ``` ``` epochs = 500 batch_size = 50 learning_rate = 0.0003 display_step = 50 batch_num = int(len(x_arr) / batch_size) tf.reset_default_graph() x = tf.placeholder(name='x',shape=(None,1),dtype=tf.float32) y = tf.placeholder(name='y',shape=(None,1),dtype=tf.float32) layer = x for _ in range(3): layer = tf.layers.dense(inputs=layer, units=12, activation=tf.nn.tanh) output = tf.layers.dense(inputs=layer, units=1) # cot function cost = tf.reduce_mean(tf.losses.mean_squared_error(labels=y,predictions=output)) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) x_batches = np.array_split(x_arr, batch_num) y_batches = np.array_split(y_arr, batch_num) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for epoch in range(epochs): avg_cost = 0.0 x_batches, y_batches = shuffle(x_batches, y_batches) for i in range(batch_num): x_batch = np.expand_dims(x_batches[i],axis=1) y_batch = np.expand_dims(y_batches[i],axis=1) _, c = sess.run([optimizer,cost], feed_dict={x:x_batch, y:y_batch}) avg_cost += c/batch_num if epoch % display_step == 0: print('Epoch {0} | cost = {1:.4f}'.format(epoch,avg_cost)) y_pred = sess.run(output,feed_dict={x:np.expand_dims(x_test,axis=1)}) print('Final cost: {0:.4f}'.format(avg_cost)) fig, ax = plt.subplots(figsize=(10,10)) plt.grid(True) plt.xlabel('x') plt.ylabel('y') ax.scatter(x_arr,y_arr,c='b',label='sampled data') ax.scatter(x_test,y_pred,c='r',label='predicted values') ax.plot(x_vals,list(map(f,x_vals)),c='m',label='f(x)') ax.legend(loc='upper center',fontsize='large',shadow=True) plt.show() ``` ``` #new cost function def mdn_cost(mu, sigma, y): dist = tf.distributions.Normal(loc=mu, scale=sigma) return tf.reduce_mean(-dist.log_prob(y)) epochs = 500 batch_size = 50 learning_rate = 0.0003 display_step = 50 batch_num = int(len(x_arr) / batch_size) tf.reset_default_graph() x = tf.placeholder(name='x',shape=(None,1),dtype=tf.float32) y = tf.placeholder(name='y',shape=(None,1),dtype=tf.float32) layer = x for _ in range(3): layer = tf.layers.dense(inputs=layer, units=12, activation=tf.nn.tanh) # now we have two different outputs mu = tf.layers.dense(inputs=layer, units=1) sigma = tf.layers.dense(inputs=layer, units=1, activation=lambda x: tf.nn.elu(x) + 1) cost = mdn_cost(mu, sigma, y) optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) x_batches = np.array_split(x_arr, batch_num) y_batches = np.array_split(y_arr, batch_num) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for epoch in range(epochs): avg_cost = 0.0 x_batches, y_batches = shuffle(x_batches, y_batches) for i in range(batch_num): x_batch = np.expand_dims(x_batches[i],axis=1) y_batch = np.expand_dims(y_batches[i],axis=1) _, c = sess.run([optimizer,cost], feed_dict={x:x_batch, y:y_batch}) avg_cost += c/batch_num if epoch % display_step == 0: print('Epoch {0} | cost = {1:.4f}'.format(epoch,avg_cost)) mu_pred, sigma_pred = 
sess.run([mu,sigma],feed_dict={x:np.expand_dims(x_test,axis=1)}) print('Final cost: {0:.4f}'.format(avg_cost)) fig, ax = plt.subplots(figsize=(10,10)) plt.grid(True) plt.xlabel('x') plt.ylabel('y') ax.errorbar(x_test,mu_pred,yerr=np.absolute(sigma_pred),c='r',ls='None',marker='.',ms=10,label='predicted distributions') ax.scatter(x_arr,y_arr,c='b',alpha=0.05,label='sampled data') ax.errorbar(x_vals,list(map(f,x_vals)),yerr=list(map(lambda x: sigma_0*x,x_vals)),c='b',lw=2,ls='None',marker='.',ms=10,label='true distributions') ax.plot(x_vals,list(map(f,x_vals)),c='m',label='f(x)') ax.legend(loc='upper center',fontsize='large',shadow=True) plt.show() ``` ## Epistemic and Total Uncertainty ##### Copyright 2019 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # TFP Probabilistic Layers: Regression <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb">Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb">View source on GitHub</a> </td> </table> In this example we show how to fit regression models using TFP's "probabilistic layers." ### Dependencies & Prerequisites ``` #@title Install { display-mode: "form" } TF_Installation = 'TF2 Nightly (GPU)' #@param ['TF2 Nightly (GPU)', 'TF2 Stable (GPU)', 'TF1 Nightly (GPU)', 'TF1 Stable (GPU)','System'] if TF_Installation == 'TF2 Nightly (GPU)': !pip install -q --upgrade tf-nightly-gpu-2.0-preview print('Installation of `tf-nightly-gpu-2.0-preview` complete.') elif TF_Installation == 'TF2 Stable (GPU)': !pip install -q --upgrade tensorflow-gpu==2.0.0-alpha0 print('Installation of `tensorflow-gpu==2.0.0-alpha0` complete.') elif TF_Installation == 'TF1 Nightly (GPU)': !pip install -q --upgrade tf-nightly-gpu print('Installation of `tf-nightly-gpu` complete.') elif TF_Installation == 'TF1 Stable (GPU)': !pip install -q --upgrade tensorflow-gpu print('Installation of `tensorflow-gpu` complete.') elif TF_Installation == 'System': pass else: raise ValueError('Selection Error: Please select a valid ' 'installation option.') ```  |████████████████████████████████| 349.2MB 52kB/s  |████████████████████████████████| 3.1MB 38.7MB/s  |████████████████████████████████| 61kB 26.3MB/s  |████████████████████████████████| 430kB 59.4MB/s [?25h Building wheel for wrapt (setup.py) ... [?25l[?25hdone ERROR: thinc 6.12.1 has requirement wrapt<1.11.0,>=1.10.0, but you'll have wrapt 1.11.1 which is incompatible. Installation of `tf-nightly-gpu-2.0-preview` complete. 
``` #@title Install { display-mode: "form" } TFP_Installation = "Nightly" #@param ["Nightly", "Stable", "System"] if TFP_Installation == "Nightly": !pip install -q tfp-nightly print("Installation of `tfp-nightly` complete.") elif TFP_Installation == "Stable": !pip install -q --upgrade tensorflow-probability print("Installation of `tensorflow-probability` complete.") elif TFP_Installation == "System": pass else: raise ValueError("Selection Error: Please select a valid " "installation option.") ```  |████████████████████████████████| 983kB 3.4MB/s [?25hInstallation of `tfp-nightly` complete. ``` #@title Import { display-mode: "form" } from __future__ import absolute_import from __future__ import division from __future__ import print_function from pprint import pprint import matplotlib.pyplot as plt import numpy as np import seaborn as sns import tensorflow as tf from tensorflow.python import tf2 if not tf2.enabled(): import tensorflow.compat.v2 as tf tf.enable_v2_behavior() assert tf2.enabled() import tensorflow_probability as tfp sns.reset_defaults() #sns.set_style('whitegrid') #sns.set_context('talk') sns.set_context(context='talk',font_scale=0.7) %matplotlib inline tfd = tfp.distributions ``` ### Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU". The following snippet will verify that we have access to a GPU. ``` if tf.test.gpu_device_name() != '/device:GPU:0': print('WARNING: GPU device not found.') else: print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name())) ``` SUCCESS: Found GPU: /device:GPU:0 Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) ## Motivation Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e., ``` negloglik = lambda y, rv_y: -rv_y.log_prob(y) ``` Well not only is it possible, but this colab shows how! (In context of linear regression problems.) ``` #@title Synthesize dataset. w0 = 0.125 b0 = 5. x_range = [-20, 60] def load_dataset(n=150, n_tst=150): np.random.seed(43) def s(x): g = (x - x_range[0]) / (x_range[1] - x_range[0]) return 3 * (0.25 + g**2.) x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0] eps = np.random.randn(n) * s(x) y = (w0 * x * (1. + np.sin(x)) + b0) + eps x = x[..., np.newaxis] x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32) x_tst = x_tst[..., np.newaxis] return y, x, x_tst y, x, x_tst = load_dataset() ``` ### Case 1: No Uncertainty ``` # Build model. model = tf.keras.Sequential([ tf.keras.layers.Dense(1), tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)), ]) # Do inference. model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik) model.fit(x, y, epochs=1000, verbose=False); # Profit. [print(np.squeeze(w.numpy())) for w in model.weights]; yhat = model(x_tst) assert isinstance(yhat, tfd.Distribution) ``` 0.13275796 5.1289654 ``` #@title Figure 1: No uncertainty. 
w = np.squeeze(model.layers[-2].kernel.numpy()) b = np.squeeze(model.layers[-2].bias.numpy()) plt.figure(figsize=[6, 1.5]) # inches #plt.figure(figsize=[8, 5]) # inches plt.plot(x, y, 'b.', label='observed'); plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4); plt.ylim(-0.,17); plt.yticks(np.linspace(0, 15, 4)[1:]); plt.xticks(np.linspace(*x_range, num=9)); ax=plt.gca(); ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data', 0)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) #ax.spines['left'].set_smart_bounds(True) #ax.spines['bottom'].set_smart_bounds(True) plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5)) plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300) ``` ### Case 2: Aleatoric Uncertainty ``` # Build model. model = tf.keras.Sequential([ tf.keras.layers.Dense(1 + 1), tfp.layers.DistributionLambda( lambda t: tfd.Normal(loc=t[..., :1], scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))), ]) # Do inference. model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik) model.fit(x, y, epochs=1000, verbose=False); # Profit. [print(np.squeeze(w.numpy())) for w in model.weights]; yhat = model(x_tst) assert isinstance(yhat, tfd.Distribution) ``` [0.13226233 0.41329 ] [5.1153 1.2915019] ``` #@title Figure 2: Aleatoric Uncertainty plt.figure(figsize=[6, 1.5]) # inches plt.plot(x, y, 'b.', label='observed'); m = yhat.mean() s = yhat.stddev() plt.plot(x_tst, m, 'r', linewidth=4, label='mean'); plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev'); plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev'); plt.ylim(-0.,17); plt.yticks(np.linspace(0, 15, 4)[1:]); plt.xticks(np.linspace(*x_range, num=9)); ax=plt.gca(); ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data', 0)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) #ax.spines['left'].set_smart_bounds(True) #ax.spines['bottom'].set_smart_bounds(True) plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5)) plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300) ``` ### Case 3: Epistemic Uncertainty ``` # Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`. def posterior_mean_field(kernel_size, bias_size=0, dtype=None): n = kernel_size + bias_size c = np.log(np.expm1(1.)) return tf.keras.Sequential([ tfp.layers.VariableLayer(2 * n, dtype=dtype), tfp.layers.DistributionLambda(lambda t: tfd.Independent( tfd.Normal(loc=t[..., :n], scale=1e-5 + tf.nn.softplus(c + t[..., n:])), reinterpreted_batch_ndims=1)), ]) ``` ``` # Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`. def prior_trainable(kernel_size, bias_size=0, dtype=None): n = kernel_size + bias_size return tf.keras.Sequential([ tfp.layers.VariableLayer(n, dtype=dtype), tfp.layers.DistributionLambda(lambda t: tfd.Independent( tfd.Normal(loc=t, scale=1), reinterpreted_batch_ndims=1)), ]) ``` ``` # Build model. model = tf.keras.Sequential([ tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable), tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)), ]) # Do inference. model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik) model.fit(x, y, epochs=1000, verbose=False); # Profit. 
[print(np.squeeze(w.numpy())) for w in model.weights]; yhat = model(x_tst) assert isinstance(yhat, tfd.Distribution) ``` [ 0.1374202 5.1056857 -3.7132006 -0.5256554] [0.1592448 5.12206 ] ``` #@title Figure 3: Epistemic Uncertainty plt.figure(figsize=[6, 1.5]) # inches plt.clf(); plt.plot(x, y, 'b.', label='observed'); yhats = [model(x_tst) for _ in range(100)] avgm = np.zeros_like(x_tst[..., 0]) for i, yhat in enumerate(yhats): m = np.squeeze(yhat.mean()) s = np.squeeze(yhat.stddev()) if i < 25: plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5) avgm += m plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4) plt.ylim(-0.,17); plt.yticks(np.linspace(0, 15, 4)[1:]); plt.xticks(np.linspace(*x_range, num=9)); ax=plt.gca(); ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data', 0)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) #ax.spines['left'].set_smart_bounds(True) #ax.spines['bottom'].set_smart_bounds(True) plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5)) plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300) ``` ### Case 4: Aleatoric & Epistemic Uncertainty ``` # Build model. model = tf.keras.Sequential([ tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable), tfp.layers.DistributionLambda( lambda t: tfd.Normal(loc=t[..., :1], scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))), ]) # Do inference. model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik) model.fit(x, y, epochs=1000, verbose=False); # Profit. [print(np.squeeze(w.numpy())) for w in model.weights]; yhat = model(x_tst) assert isinstance(yhat, tfd.Distribution) ``` [ 0.13091448 2.8183267 5.1185093 2.954348 -3.344546 -0.684905 -0.5135757 0.05770317] [0.13267924 2.7312424 5.191225 2.9794762 ] ``` #@title Figure 4: Both Aleatoric & Epistemic Uncertainty plt.figure(figsize=[6, 1.5]) # inches plt.plot(x, y, 'b.', label='observed'); yhats = [model(x_tst) for _ in range(100)] avgm = np.zeros_like(x_tst[..., 0]) for i, yhat in enumerate(yhats): m = np.squeeze(yhat.mean()) s = np.squeeze(yhat.stddev()) if i < 15: plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.) plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None); plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None); avgm += m plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4) plt.ylim(-0.,17); plt.yticks(np.linspace(0, 15, 4)[1:]); plt.xticks(np.linspace(*x_range, num=9)); ax=plt.gca(); ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data', 0)) ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) #ax.spines['left'].set_smart_bounds(True) #ax.spines['bottom'].set_smart_bounds(True) plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5)) plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300) ```
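As a complementary sketch (not part of the original notebook), the ensemble sampled in Case 4 can also be used to decompose the predictive variance into its epistemic and aleatoric parts, following the total uncertainty formula given earlier; `model` and `x_tst` are the objects defined above and the number of samples is arbitrary:

```
# Decompose the predictive variance of the Case 4 model.
samples = [model(x_tst) for _ in range(100)]   # each call samples new weights
means = np.stack([np.squeeze(s.mean().numpy()) for s in samples])
stds = np.stack([np.squeeze(s.stddev().numpy()) for s in samples])

epistemic_var = means.var(axis=0)           # variance of the ensemble means
aleatoric_var = (stds ** 2).mean(axis=0)    # average predicted noise variance
total_var = epistemic_var + aleatoric_var   # approximates Var(y)
```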
Formal statement is: lemma asymp_equiv_eventually_zeros: fixes f g :: "'a \<Rightarrow> 'b :: real_normed_field" assumes "f \<sim>[F] g" shows "eventually (\<lambda>x. f x = 0 \<longleftrightarrow> g x = 0) F" Informal statement is: If $f$ and $g$ are asymptotically equivalent, then eventually $f(x) = 0$ if and only if $g(x) = 0$.
[STATEMENT] lemma dvd_imp_gcd_dvd_gcd: "b dvd c \<Longrightarrow> gcd a b dvd gcd a c" [PROOF STATE] proof (prove) goal (1 subgoal): 1. b dvd c \<Longrightarrow> gcd a b dvd gcd a c [PROOF STEP] by (meson gcd_dvd1 gcd_dvd2 gcd_greatest dvd_trans)
function M = median_filter(im) % MEDIAN_FILTER simpler wrapper for calling medfilt2 on each channel % % M = median_filter(im) % % Input: % im w by h by c image % Output: % M median filtered image in each channel % % This could probably be a one-liner using num2cell and cellfun M = zeros(size(im)); for c = 1 : size(im,3) M(:,:,c) = medfilt2(im(:,:,c)); end end
/* * Copyright (c) 1997-1999 Massachusetts Institute of Technology * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * */ /* This file was automatically generated --- DO NOT EDIT */ /* Generated on Sun Nov 7 20:43:55 EST 1999 */ #include <fftw-int.h> #include <fftw.h> /* Generated by: ./genfft -magic-alignment-check -magic-twiddle-load-all -magic-variables 4 -magic-loopi -real2hc 12 */ /* * This function contains 38 FP additions, 8 FP multiplications, * (or, 34 additions, 4 multiplications, 4 fused multiply/add), * 18 stack variables, and 24 memory accesses */ static const fftw_real K866025403 = FFTW_KONST(+0.866025403784438646763723170752936183471402627); static const fftw_real K500000000 = FFTW_KONST(+0.500000000000000000000000000000000000000000000); /* * Generator Id's : * $Id: exprdag.ml,v 1.41 1999/05/26 15:44:14 fftw Exp $ * $Id: fft.ml,v 1.43 1999/05/17 19:44:18 fftw Exp $ * $Id: to_c.ml,v 1.25 1999/10/26 21:41:32 stevenj Exp $ */ void fftw_real2hc_12(const fftw_real *input, fftw_real *real_output, fftw_real *imag_output, int istride, int real_ostride, int imag_ostride) { fftw_real tmp5; fftw_real tmp25; fftw_real tmp11; fftw_real tmp23; fftw_real tmp30; fftw_real tmp35; fftw_real tmp10; fftw_real tmp26; fftw_real tmp12; fftw_real tmp18; fftw_real tmp29; fftw_real tmp34; fftw_real tmp31; fftw_real tmp32; ASSERT_ALIGNED_DOUBLE; { fftw_real tmp1; fftw_real tmp2; fftw_real tmp3; fftw_real tmp4; ASSERT_ALIGNED_DOUBLE; tmp1 = input[0]; tmp2 = input[4 * istride]; tmp3 = input[8 * istride]; tmp4 = tmp2 + tmp3; tmp5 = tmp1 + tmp4; tmp25 = tmp1 - (K500000000 * tmp4); tmp11 = tmp3 - tmp2; } { fftw_real tmp19; fftw_real tmp20; fftw_real tmp21; fftw_real tmp22; ASSERT_ALIGNED_DOUBLE; tmp19 = input[9 * istride]; tmp20 = input[istride]; tmp21 = input[5 * istride]; tmp22 = tmp20 + tmp21; tmp23 = tmp19 - (K500000000 * tmp22); tmp30 = tmp19 + tmp22; tmp35 = tmp21 - tmp20; } { fftw_real tmp6; fftw_real tmp7; fftw_real tmp8; fftw_real tmp9; ASSERT_ALIGNED_DOUBLE; tmp6 = input[6 * istride]; tmp7 = input[10 * istride]; tmp8 = input[2 * istride]; tmp9 = tmp7 + tmp8; tmp10 = tmp6 + tmp9; tmp26 = tmp6 - (K500000000 * tmp9); tmp12 = tmp8 - tmp7; } { fftw_real tmp14; fftw_real tmp15; fftw_real tmp16; fftw_real tmp17; ASSERT_ALIGNED_DOUBLE; tmp14 = input[3 * istride]; tmp15 = input[7 * istride]; tmp16 = input[11 * istride]; tmp17 = tmp15 + tmp16; tmp18 = tmp14 - (K500000000 * tmp17); tmp29 = tmp14 + tmp17; tmp34 = tmp16 - tmp15; } real_output[3 * real_ostride] = tmp5 - tmp10; imag_output[3 * imag_ostride] = tmp29 - tmp30; tmp31 = tmp5 + tmp10; tmp32 = tmp29 + tmp30; real_output[6 * real_ostride] = tmp31 - tmp32; real_output[0] = tmp31 + tmp32; { fftw_real tmp37; fftw_real tmp38; fftw_real tmp33; fftw_real tmp36; ASSERT_ALIGNED_DOUBLE; tmp37 = tmp34 + tmp35; tmp38 = tmp11 + tmp12; imag_output[2 * imag_ostride] = K866025403 * (tmp37 - tmp38); imag_output[4 * imag_ostride] = 
K866025403 * (tmp38 + tmp37); tmp33 = tmp25 - tmp26; tmp36 = K866025403 * (tmp34 - tmp35); real_output[5 * real_ostride] = tmp33 - tmp36; real_output[real_ostride] = tmp33 + tmp36; } { fftw_real tmp27; fftw_real tmp28; fftw_real tmp13; fftw_real tmp24; ASSERT_ALIGNED_DOUBLE; tmp27 = tmp25 + tmp26; tmp28 = tmp18 + tmp23; real_output[2 * real_ostride] = tmp27 - tmp28; real_output[4 * real_ostride] = tmp27 + tmp28; tmp13 = K866025403 * (tmp11 - tmp12); tmp24 = tmp18 - tmp23; imag_output[imag_ostride] = tmp13 - tmp24; imag_output[5 * imag_ostride] = -(tmp13 + tmp24); } } fftw_codelet_desc fftw_real2hc_12_desc = { "fftw_real2hc_12", (void (*)()) fftw_real2hc_12, 12, FFTW_FORWARD, FFTW_REAL2HC, 266, 0, (const int *) 0, };
Formal statement is: lemma lipschitz_on_concat_max: fixes a b c::real assumes f: "L-lipschitz_on {a .. b} f" assumes g: "M-lipschitz_on {b .. c} g" assumes fg: "f b = g b" shows "(max L M)-lipschitz_on {a .. c} (\<lambda>x. if x \<le> b then f x else g x)" Informal statement is: If $f$ is L-Lipschitz on $[a,b]$ and $g$ is M-Lipschitz on $[b,c]$, and $f(b) = g(b)$, then the function $h$ defined by $h(x) = f(x)$ if $x \leq b$ and $h(x) = g(x)$ if $x > b$ is $(\max\{L,M\})$-Lipschitz on $[a,c]$.
import math from criterion_core.utils.reshape_data import tile2d from criterion_core.utils import path from criterion_core.utils import data_generator import numpy as np import json import os from . import io_tools import cv2 def create_atlas(facet_gen): images = [None] * len(facet_gen) for ii in range(len(facet_gen)): images[ii], y = facet_gen[ii] images = np.squeeze(np.concatenate(images)*255).astype(np.uint8) atlas_size = int(math.ceil(math.sqrt(len(images)))) atlas = tile2d(images, (atlas_size, atlas_size)) return atlas def build_facets(facet_gen, output_path, facet_key='', **atlas_param): facet_dive = 'facets{}.json'.format(facet_key) facet_sprite = 'spriteatlas{}.jpeg'.format(facet_key) io_tools.write_file(path.join(output_path, facet_dive), json.dumps(list(facet_gen.samples))) atlas = create_atlas(facet_gen) atlas = cv2.cvtColor(atlas, cv2.COLOR_BGR2RGB) jpeg_created, buffer = cv2.imencode(".jpeg", atlas) assert jpeg_created io_tools.write_file(path.join(output_path, facet_sprite), bytes(buffer)) return facet_dive, facet_sprite if __name__ == '__main__': from criterion_core import load_image_datasets from criterion_core.utils import sampletools output_path = r'C:\data\novo\jobdirs\image_classification35' datasets = [ dict(bucket=r'C:\data\novo\data\22df6831-5de9-4545-af17-cdfe2e8b2049.datasets.criterion.ai', id='22df6831-5de9-4545-af17-cdfe2e8b2049', name='test') ] dssamples = load_image_datasets(datasets) samples = list(sampletools.flatten_dataset(dssamples)) build_facets(samples, output_path, thumbnail_size=(64, 64), color_mode="greyscale")
Require Import Nat Arith. Inductive Nat : Type := succ : Nat -> Nat | zero : Nat. Inductive Lst : Type := nil : Lst | cons : Nat -> Lst -> Lst. Inductive Tree : Type := node : Nat -> Tree -> Tree -> Tree | leaf : Tree. Inductive Pair : Type := mkpair : Nat -> Nat -> Pair with ZLst : Type := zcons : Pair -> ZLst -> ZLst | znil : ZLst. Fixpoint append (append_arg0 : Lst) (append_arg1 : Lst) : Lst := match append_arg0, append_arg1 with | nil, x => x | cons x y, z => cons x (append y z) end. Fixpoint mem (mem_arg0 : Nat) (mem_arg1 : Lst) : Prop := match mem_arg0, mem_arg1 with | x, nil => False | x, cons y z => x = y \/ mem x z end. Theorem theorem0 : forall (x : Nat) (y : Lst) (z : Lst), mem x z -> mem x (append y z). Proof. intros. induction y. - auto. - simpl. auto. Qed.
# "Next Generation Reservoir Computing" Daniel J. Gauthier, Erik Bollt, Aaron Griffith & Wendson A. S. Barbosa *Nature Communications*, vol. 12, no. 1, p. 5564, Sep. 2021, doi: 10.1038/s41467-021-25801-2. ```python import matplotlib.pyplot as plt import numpy as np from reservoirpy.datasets import lorenz, doublescroll from reservoirpy.observables import nrmse from reservoirpy.nodes import Ridge, NVAR %matplotlib inline from IPython.core.display import HTML HTML(""" <style> .output_png { display: table-cell; text-align: center; vertical-align: middle; } </style> """) ``` <style> .output_png { display: table-cell; text-align: center; vertical-align: middle; } </style> ## Abstract Reservoir computing is a best-in-class machine learning algorithm for processing information generated by dynamical systems using observed time-series data. Importantly, it requires very small training data sets, uses linear optimization, and thus requires minimal computing resources. However, the algorithm uses randomly sampled matrices to define the underlying recurrent neural network and has a multitude of metaparameters that must be optimized. Recent results demonstrate the equivalence of reservoir computing to nonlinear vector autoregression, which requires no random matrices, fewer metaparameters, and provides interpretable results. Here, we demonstrate that nonlinear vector autoregression excels at reservoir computing benchmark tasks and requires even shorter training data sets and training time, heralding the next generation of reservoir computing. ## Implementation using *ReservoirPy* This notebook is provided as a demo of the `NVAR` node in ReservoirPy, implementing the method described in the paper *Next Generation Reservoir Computing* by Gauthier *et al.* The nonlinear vector autoregressive (NVAR) machine implements the following equations. The state $\mathbb{O}_{total}$ of the NVAR first contains a serie of linear features $\mathbb{O}_{lin}$ made of input data concatenated with delayed inputs: $$ \mathbb{O}_{lin}[t] = \mathbf{X}[t] \oplus \mathbf{X}[t - s] \oplus \mathbf{X}[t - 2s] \oplus \dots \oplus \mathbf{X}[t - (k-1)s] $$ where $\mathbf{X}[t]$ are the inputs at time $t$, $k$ is the delay and $s$ is the strides (only one input every $s$ inputs within the delayed inputs is used). The operator $\oplus$ denotes the concatenation. In addition to these linear features, nonlinear representations $\mathbb{O}_{nonlin}^n$ of the inputs are contructed using all unique monomials of order $n$ of these inputs: $$ \mathbb{O}_{nonlin}^n[t] = \mathbb{O}_{lin}[t] \otimes \mathbb{O}_{lin}[t] \overbrace{\otimes \dots \otimes}^{n-1~\mathrm{times}} \mathbb{O}_{lin}[t] $$ where $\otimes$ is the operator denoting an outer product followed by the selection of all unique monomials generated by this outer product. Under the hood, this product is computed by ReservoirPy by finding all unique combinations of input features and multiplying each combination of terms. Finally, all representations are gathered to form the final feature vector $\mathbb{O}_{total}$: $$ \mathbb{O}_{total} = \mathbb{O}_{lin}[t] \oplus \mathbb{O}_{nonlin}^n[t] $$ Tikhonov regression is used to compute the readout weights using this feature vector and some target values, in an offline way (we will simply use ReservoirPy's `Ridge` node of for this). **Fig. 1** A traditional (Reservoir Computing machine) is implicit in an NG-RC (Next Generation Reservoir Computing machine). 
*(top)* A traditional RC processes time-series data associated with a strange attractor (blue, middle left) using an artificial recurrent neural network. The forecasted strange attractor (red, middle right) is a linear weight of the reservoir states. *(bottom)* The NG-RC performs a forecast using a linear weight of time-delay states (two times shown here) of the time series data and nonlinear functionals of this data (quadratic functional shown here). Figure and legend from Gauthier *et al.* (2021). ## 1. NVAR for Lorenz strange attractor forecasting The Lorenz attractor is defined by three coupled differential equations: $$ \begin{split} \dot{x} &= 10(y-x)\\ \dot{y} &= x(28-z) - y\\ \dot{z} &= xy - \frac{8z}{3} \end{split} $$ ```python # time step duration (in time unit) dt = 0.025 # training time (in time unit) train_time = 10. # testing time (idem) test_time = 120. # warmup time (idem): should always be > k * s warm_time = 5. # discretization train_steps = round(train_time / dt) test_steps = round(test_time / dt) warm_steps = round(warm_time / dt) ``` ```python x0 = [17.67715816276679, 12.931379185960404, 43.91404334248268] n_timesteps = train_steps + test_steps + warm_steps X = lorenz(n_timesteps, x0=x0, h=dt, method="RK23") ``` ```python N = train_steps + warm_steps + test_steps fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot(111, projection='3d') ax.set_title("Lorenz attractor (1963)") ax.set_xlabel("$x$") ax.set_ylabel("$y$") ax.set_zlabel("$z$") ax.grid(False) for i in range(N-1): ax.plot(X[i:i+2, 0], X[i:i+2, 1], X[i:i+2, 2], color=plt.cm.magma(255*i//N), lw=1.0) plt.show() ``` ```python ``` We define a NVAR with delay $k=2$, strides $s=1$ and order $n=2$. The final feature vector $\mathbb{O}_{total}$ will hence be: $$ \begin{align} \mathbb{O}_{lin}[t] &= \begin{bmatrix} x_t\\y_t\\z_t\\x_{t-1}\\y_{t-1}\\z_{t-1} \end{bmatrix} & \mathbb{O}_{nonlin}[t] &= \begin{bmatrix} x_t^2\\x_t y_t\\x_t z_t\\x_t x_{t-1}\\x_t y_{t-1}\\x_t z_{t-1}\\y_t^2\\y_t z_t\\y_t x_{t-1}\\y_t y_{t-1}\\y_t z_{t-1}\\z_t^2\\z_t x_{t-1}\\z_t y_{t-1}\\z_t z_{t-1}\\x_{t-1}^2\\x_{t-1} y_{t-1}\\x_{t-1} z_{t-1}\\y_{t-1}^2\\y_{t-1} z_{t-1}\\z_{t-1}^2 \end{bmatrix} \end{align} $$ $$ \mathbb{O}_{total}[t] = \mathbb{O}_{lin}[t] \oplus \mathbb{O}_{nonlin}[t] $$ The NVAR is connected to a readout layer with offline learning using regularized linear regression. The regularization parameter is set to $2.5\times10^{-6}$. The model is first trained to infer the variation $\mathbf{X}[t+1] - \mathbf{X}[t]$ knowing the value of $\mathbf{X}[t]$. This training step enables the model to learn an internal representation of the local dynamics of the attractor. ```python nvar = NVAR(delay=2, order=2, strides=1) readout = Ridge(3, ridge=2.5e-6) model = nvar >> readout ``` We first warm up the NVAR. The warmup time can be as short as only $k \times s$ steps: the NVAR has relevant features as soon as all the delayed signals are non zero, i.e. as soon as at least $k \times s$ steps have been stored in the linear feature vector. ```python _ = nvar.run(X[:warm_steps-1]) ``` Running NVAR-0: 100%|██████████████████████| 199/199 [00:00<00:00, 16676.99it/s] Then, train the model to perform one-step-ahead prediction.
```python Xi = X[warm_steps-1:train_steps+warm_steps-1] dXi = X[warm_steps:train_steps+warm_steps] - X[warm_steps-1:train_steps+warm_steps-1] model = model.fit(Xi, dXi) ``` Running Model-0: 0%| | 0/1 [00:00<?, ?it/s] Running SubModel-acbe96c7-f28c-476f-8e23-d40f78914be8: 400it [00:00, 9133.79it/s][A Running Model-0: 100%|████████████████████████████| 1/1 [00:00<00:00, 21.79it/s] Fitting node Ridge-0... ```python lin = ["$x_t$", "$y_t$", "$z_t$", "$x_{t-1}$", "$y_{t-1}$", "$z_{t-1}$"] nonlin = [] for idx in nvar._monomial_idx: idx = idx.astype(int) if idx[0] == idx[1]: c = lin[idx[0]][:-1] + "^2$" else: c = " ".join((lin[idx[0]][:-1], lin[idx[1]][1:])) nonlin.append(c) coefs = ["$c$"] + lin + nonlin ``` In the plot below are displayed the linear coefficients learned by the model. ```python fig = plt.figure(figsize=(10, 10)) Wout = np.r_[readout.bias, readout.Wout] x_Wout, y_Wout, z_Wout = Wout[:, 0], Wout[:, 1], Wout[:, 2] ax = fig.add_subplot(131) ax.set_xlim(-0.2, 0.2) ax.grid(axis="y") ax.set_xlabel("$[W_{out}]_x$") ax.set_ylabel("Features") ax.set_yticks(np.arange(len(coefs))) ax.set_yticklabels(coefs[::-1]) ax.barh(np.arange(x_Wout.size), x_Wout.ravel()[::-1]) ax1 = fig.add_subplot(132) ax1.set_xlim(-0.2, 0.2) ax1.grid(axis="y") ax1.set_yticks(np.arange(len(coefs))) ax1.set_xlabel("$[W_{out}]_y$") ax1.barh(np.arange(y_Wout.size), y_Wout.ravel()[::-1]) ax2 = fig.add_subplot(133) ax2.set_xlim(-0.2, 0.2) ax2.grid(axis="y") ax2.set_yticks(np.arange(len(coefs))) ax2.set_xlabel("$[W_{out}]_z$") ax2.barh(np.arange(z_Wout.size), z_Wout.ravel()[::-1]) plt.show() ``` ```python nvar.run(X[warm_steps+train_steps-2:warm_steps+train_steps]) u = X[warm_steps+train_steps] res = np.zeros((test_steps, readout.output_dim)) for i in range(test_steps): u = u + model(u) res[i, :] = u ``` Running NVAR-0: 100%|███████████████████████████| 2/2 [00:00<00:00, 2889.63it/s] ```python N = test_steps Y = X[warm_steps+train_steps:] fig = plt.figure(figsize=(15, 10)) ax = fig.add_subplot(121, projection='3d') ax.set_title("Generated attractor") ax.set_xlabel("$x$") ax.set_ylabel("$y$") ax.set_zlabel("$z$") ax.grid(False) for i in range(N-1): ax.plot(res[i:i+2, 0], res[i:i+2, 1], res[i:i+2, 2], color=plt.cm.magma(255*i//N), lw=1.0) ax2 = fig.add_subplot(122, projection='3d') ax2.set_title("Real attractor") ax2.grid(False) for i in range(N-1): ax2.plot(Y[i:i+2, 0], Y[i:i+2, 1], Y[i:i+2, 2], color=plt.cm.magma(255*i//N), lw=1.0) ``` ## 2. NVAR for double scroll strange attractor forecasting ```python dt = 0.25 train_time = 100. test_time = 800. warm_time = 1. 
train_steps = round(train_time / dt) test_steps = round(test_time / dt) warm_steps = round(warm_time / dt) ``` ```python x0 = [0.37926545, 0.058339, -0.08167691] n_timesteps = train_steps + test_steps + warm_steps X = doublescroll(n_timesteps, x0=x0, h=dt, method="RK23") ``` ```python N = train_steps + warm_steps + test_steps fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot(111, projection='3d') ax.set_title("Double scroll attractor (1998)") ax.set_xlabel("x") ax.set_ylabel("y") ax.set_zlabel("z") ax.grid(False) for i in range(N-1): ax.plot(X[i:i+2, 0], X[i:i+2, 1], X[i:i+2, 2], color=plt.cm.cividis(255*i//N), lw=1.0) plt.show() ``` ```python nvar = NVAR(delay=2, order=3, strides=1) readout = Ridge(3, ridge=2.5e-6, input_bias=False) model = nvar >> readout ``` ```python _ = nvar.run(X[:warm_steps-1]) ``` ```python Xi = X[warm_steps-1:train_steps+warm_steps-1] dXi = X[warm_steps:train_steps+warm_steps] - X[warm_steps-1:train_steps+warm_steps-1] model = model.fit(Xi, dXi) ``` ```python nvar.run(X[warm_steps+train_steps-2:warm_steps+train_steps]) u = X[warm_steps+train_steps] res = np.zeros((test_steps, readout.output_dim)) for i in range(test_steps): u = u + model(u) res[i, :] = u ``` ```python N = test_steps Y = X[warm_steps+train_steps:] fig = plt.figure(figsize=(15, 10)) ax = fig.add_subplot(121, projection='3d') ax.set_title("Generated attractor") ax.set_xlabel("$x$") ax.set_ylabel("$y$") ax.set_zlabel("$z$") ax.grid(False) for i in range(N-1): ax.plot(res[i:i+2, 0], res[i:i+2, 1], res[i:i+2, 2], color=plt.cm.cividis(255*i//N), lw=1.0) ax2 = fig.add_subplot(122, projection='3d') ax2.set_title("Real attractor") ax2.grid(False) for i in range(N-1): ax2.plot(Y[i:i+2, 0], Y[i:i+2, 1], Y[i:i+2, 2], color=plt.cm.cividis(255*i//N), lw=1.0) ``` ## 3. NVAR for inferring the Lorenz $z$ component ```python # time step duration (in time unit) dt = 0.05 # training time (in time unit) train_time = 20. # testing time (idem) test_time = 45. # warmup time (idem): should always be > k * s warm_time = 5. # discretization train_steps = round(train_time / dt) test_steps = round(test_time / dt) warm_steps = round(warm_time / dt) ``` ```python x0 = [17.67715816276679, 12.931379185960404, 43.91404334248268] n_timesteps = train_steps + test_steps + warm_steps X = lorenz(n_timesteps, x0=x0, h=dt, method="RK23") ``` ```python nvar = NVAR(delay=4, order=2, strides=5) readout = Ridge(1, ridge=0.05) model = nvar >> readout ``` We first warm up the NVAR. The warmup time can be as short as only $k \times s$ steps: the NVAR has relevant features as soon as all the delayed signals are non zero, i.e. as soon as at least $k \times s$ steps have been stored in the linear feature vector. ```python _ = nvar.run(X[:warm_steps-1, :2]) ``` Then, train the model to infer the $z$ component of the Lorenz system from its $x$ and $y$ components.
```python xy = X[warm_steps-1:train_steps+warm_steps-1, :2] z = X[warm_steps:train_steps+warm_steps, 2][:, np.newaxis] model = model.fit(xy, z) ``` ```python _ = nvar.run(X[train_steps:warm_steps+train_steps, :2]) xy_test = X[warm_steps+train_steps:-1, :2] res = model.run(xy_test) ``` ```python fig = plt.figure(figsize=(10, 5)) ax = fig.add_subplot(111) ax.plot(res, label="Inferred") ax.plot(X[warm_steps+train_steps+1:, 2], label="Truth", linestyle="--") ax.plot(abs(res[:, 0] - X[warm_steps+train_steps+1:, 2]), label="Absolute deviation") ax.set_ylabel("$z$") ax.set_xlabel("time") ax.set_title("Lorenz attractor $z$ component inferred value") ax.set_xticks(np.linspace(0, 900, 5)) ax.set_xticklabels(np.linspace(0, 900, 5) * dt + train_time + warm_time) plt.legend() plt.show() ``` ```python ```
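As a possible last step (not part of the original notebook), the quality of the inferred $z$ component can be quantified with the `nrmse` metric imported at the beginning of the notebook, assuming its default normalization:

```python
# Hypothetical evaluation cell: NRMSE between the true and inferred z component.
z_true = X[warm_steps+train_steps+1:, 2][:, np.newaxis]
print("NRMSE on the inferred z component:", nrmse(z_true, res))
```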