\documentclass[12pt,a4paper]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage{microtype}
\usepackage{mathtools}
\usepackage{amssymb,amsmath}
% Increase the spacing between the lines of multi-line equations,
% scaling with the text size
\setlength{\jot}{3ex}
\usepackage[makeroom]{cancel}
\usepackage{siunitx}
\usepackage{relsize}
\usepackage{tikz}
\usetikzlibrary{automata,positioning,calc,shapes,arrows}
\usepackage{multirow}
\usepackage{caption}
\captionsetup[figure]{font=small,labelfont=small,margin={40pt,40pt}}
\usepackage[english]{babel}
\usepackage{float}
\usepackage[left=2.5cm,right=2.5cm,bottom=2cm,top=2cm,includeheadfoot]{geometry}
\usepackage[shortlabels]{enumitem}
\usepackage{environ}
\NewEnviron{MatrixAdjust*}{
\begin{equation*}
\scalebox{0.975}{$\BODY$}
\end{equation*}
}
\linespread{1.2}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\newcommand*{\transp}{\mathsf{p}}
\title{Automata \& Queueing Systems}
\author{Francesco Casciola, Nicola Landolfi}
% To make the table of contents clickable. Remove when printing.
\usepackage[hyperfootnotes=false]{hyperref}
\hypersetup{ pdfprintscaling=None, colorlinks, citecolor=black, filecolor=black, linkcolor=black, urlcolor=black }
\begin{document}
\maketitle
\tableofcontents
\newpage
\section{Introduction and State Automata} \label{sec:Int}
A system with time-driven dynamics is a kind of system in which, even though events might occur, the state does not remain constant after their occurrences but varies as time goes on.
\bigskip
\noindent A system with event-driven dynamics is a kind of system whose state varies only upon the occurrence of certain events and is constant in the time between one event and the next. This produces a piecewise constant function of time.
\paragraph{Discrete Event System} Dynamical system with discrete states and event-driven dynamics.
\paragraph{State Automaton} It's a model through which Discrete Event Systems can be represented and it's identified as a 5-tuple $(\mathcal{E},\mathcal{X},\Gamma,f,x_0)$ where:
\begin{itemize}
\item $\mathcal{E}$ is a discrete set of events.
\item $\mathcal{X}$ is a discrete set of states.
\item $\Gamma$ is a function $\mathcal{X}\rightarrow 2^\mathcal{E}$, where $2^\mathcal{E}$ is the `power set' of the set $\mathcal{E}$, that is, the set of all the possible subsets of $\mathcal{E}$:
$$ \text{e.g. : }\mathcal{E}=\{a,b\} \Rightarrow 2^\mathcal{E}=\{\emptyset, \{a\}, \{b\}, \{a,b\}\} \text{\hspace{1 cm};\hspace{1 cm}} dim(2^\mathcal{E}) = 2^{dim(\mathcal{E})} $$
$\Gamma(x)$ represents the set of events that are possible in the state $x$.
\item $f$ is a function $\mathcal{X}\times\mathcal{E}\rightarrow\mathcal{X}$ which defines the state transitions, such that $x' = f(x,e)$ is the next state when event $e\in\Gamma(x)$ occurs in the current state $x\in\mathcal{X}$.
\item $x_0\in\mathcal{X}$ is the initial state.
\end{itemize}
\begin{figure}[H] \begin{center} \begin{tikzpicture} \tikzset{edge/.style = {->,> = angle 60, thick}} % draw the rectangular shape with vertical lines
\node[rectangle,draw,text height=1cm,text depth=1cm,inner ysep=0pt] at (0,0) (S) {State Automaton}; \node[rectangle,text height=1cm,text depth=1cm,inner ysep=0pt] at (-3.65,0) (I) {$e_1, \, e_2, \, \dots, \, e_n$}; \node[rectangle,text height=1cm,text depth=1cm,inner ysep=0pt] at (4,0) (O) {$x_0, \, x_1, \, x_2, \, \dots, \, x_n$}; \path[->] (I) edge node[auto] {} (S) (S) edge node[auto] {} (O) ; \end{tikzpicture} \end{center} \caption{State automaton block diagram.} \label{fig:stateAut} \end{figure}
Please notice the difference between the model and the actual system: models introduce a certain degree of approximation with respect to the real system.
\paragraph{The concept of feasibility} When thinking about a real system there are many events that are possible and many others that aren't. When modelling a system, the events that are extremely improbable (to the point that they can be considered impossible) must be excluded; moreover, there are state-related events that are actually impossible. For instance, a machine which is not working cannot complete a job. A sequence of events $(e_1,e_2,\dots,e_n)$ is \textbf{feasible} (could occur in reality) only if all the events of the sequence occur in states in which they are possible. In other words, the following conditions must hold:
\begin{align*} e_k &\in \Gamma (x_{k-1}), \quad k = 1,2,\dots,n \\ x_k &= f(x_{k-1},e_k) \end{align*}
Infeasible sequences aren't always obvious, which means that to detect some of them it's either necessary to have a deep knowledge of the system or to run a huge amount of model simulations (assuming that the model is realistic enough); a small feasibility-checking sketch is given at the end of this section.
\paragraph{State Automaton with outputs} It's a model through which Discrete Event Systems can be represented and it's identified as a 7-tuple $(\mathcal{E},\mathcal{X},\Gamma,f,x_0,\mathcal{Y},g)$ where:
\begin{itemize}
\item $(\mathcal{E},\mathcal{X},\Gamma,f,x_0)$ is a state automaton.
\item $\mathcal{Y}$ is a discrete set of outputs.
\item $g$ is a function $\mathcal{X}\rightarrow\mathcal{Y}$, such that $y=g(x)$ where $y\in\mathcal{Y}$ is the output corresponding to state $x\in\mathcal{X}$.
\end{itemize}
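\bigskip
\noindent As promised, here is a minimal Python sketch of these definitions (illustrative only: the two-state machine, its event names and the \texttt{is\_feasible} helper are toy choices of ours, not part of the formal model). It encodes a 5-tuple $(\mathcal{E},\mathcal{X},\Gamma,f,x_0)$ with plain dictionaries and checks the feasibility conditions $e_k\in\Gamma(x_{k-1})$, $x_k=f(x_{k-1},e_k)$ on a given sequence of events.
\begin{verbatim}
# A machine that can be Idle or Busy, encoded as a 5-tuple.
E = {"start", "finish"}                                      # event set
X = {"Idle", "Busy"}                                         # state set
Gamma = {"Idle": {"start"}, "Busy": {"finish"}}              # feasible events
f = {("Idle", "start"): "Busy", ("Busy", "finish"): "Idle"}  # transitions
x0 = "Idle"                                                  # initial state

def is_feasible(events, Gamma, f, x0):
    """Check e_k in Gamma(x_{k-1}) and apply x_k = f(x_{k-1}, e_k)."""
    x = x0
    for e in events:
        if e not in Gamma[x]:
            return False, x          # infeasible: e is not possible in x
        x = f[(x, e)]
    return True, x

print(is_feasible(["start", "finish", "start"], Gamma, f, x0))  # (True, 'Busy')
print(is_feasible(["start", "start"], Gamma, f, x0))            # (False, 'Busy')
\end{verbatim}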
\newpage
\section{Timed Automata} \label{sec:TA}
\paragraph{Concept of time in DES} In a real system, when an event occurs, it's also possible to know the time instant at which this happens. When trying to introduce the concept of time in a State Automaton model, it's important to remember:
\begin{itemize}
\item Time cannot be given as an input to the state automaton, since the inputs are independent of the system, while the time instants at which events occur depend on the system itself.
\end{itemize}
This can be demonstrated by considering a machine that executes jobs on some elements under two different disciplines (with the arrival times given as input): First-In-First-Out (FIFO) and Round-Robin (RR). The FIFO discipline is self-explanatory. The RR is based on the concept of `time slice', which is the maximum time the machine dedicates to a certain element that needs processing before switching to the next one. In the case in which the time needed to complete the job on the first element is higher than the time slice, the partially processed element is put back in the queue in last position.
Both disciplines allow the machine to complete the job on all the elements but, even if the arrival times are the same, the times at which the elements are accepted in the system and those at which the processing of each single element terminates differ between the two disciplines. This means that the time instants at which the events occur cannot be given as input to the system, since they depend on it.
\paragraph{Timed Automaton} A solution to the problem presented in the previous paragraph is to use as inputs, instead of the time instants, the durations of processes, at the end of which the events occur. This way, when the system enters a state $x$ in which a given event $e$ is possible, in the model it's possible to start a process of a set duration. This process represents the \textit{event's lifetime} and, when the lifetime depletes, the event occurs. Finally, the time instant at which $e$ occurs is obtained as the sum of the time instant at which the system enters the state $x$ and the lifetime of the event $e$. This allows us to define the \textbf{timed automaton} as a $6$-tuple $(\mathcal{E},\mathcal{X},\Gamma,f,x_0,V)$, where:
\begin{itemize}
\item $(\mathcal{E},\mathcal{X},\Gamma,f,x_0)$ is a state automaton.
\item $V$ is the \textit{clock structure}, which is an array of `clock sequences' of the various events:
$$ V=\{V_e : e\in\mathcal{E}\}, \quad V_e=\{v_{e,1}, v_{e,2}, v_{e,3},\dots\} $$
\end{itemize}
\noindent Where $V_e$ is the clock sequence of event $e$ and $v_{e,i}$ is the lifetime of the $i$-th occurrence of event $e$. Please note that lifetimes must be $v_{e,i} \ge 0$ for the model to be representative of a real system (since event lifetimes cannot be negative).
\begin{figure}[H] \begin{flushright} \begin{tikzpicture} \tikzset{edge/.style = {->,> = angle 60, thick}} % draw the rectangular shape with vertical lines
\node[rectangle,draw,text height=1cm,text depth=1cm,inner ysep=0pt] at (-1,0) (S) {Timed Automaton}; \node[rectangle,text height=1cm,text depth=1cm,inner ysep=0pt] at (-4.5,0) (I) {$V$}; \node[rectangle,text height=1cm,text depth=1cm,inner ysep=0pt] at (4.5,0) (O) { $ \begin{matrix} &x_0, & x_1, & x_2, & \dots, & x_n \\ &e_1, & e_2, & e_3, & \dots, & e_n \\ &t_0, & t_1, & t_2, & \dots, & t_n \\ \end{matrix} $}; \node[rectangle,text height=1cm,text depth=1cm,inner ysep=0pt] at (7.3,0) (d){}; %invisible node to center the picture
\path[->] (I) edge node[auto] {} (S) (S) edge node[auto] {} (O) ; \end{tikzpicture} \end{flushright} \caption{Timed automaton block diagram.} \label{fig:timedAut} \end{figure}
\paragraph{Residual Times} According to the definition of a Timed Automaton, when entering a state in which an event is possible, a process with a certain lifetime starts. Let's consider the situation in which two lifetimes, related to two events $e_1$ and $e_2$ with $v_{e_1}<v_{e_2}$, start when the system enters the state $x_{k-1}$. The event $e_1$ will occur first and the system will enter state $x_k$. If event $e_2$ is still possible in state $x_k$ then, instead of starting a new process with a new total lifetime, the `residual lifetime' $y_{e_2}=v_{e_2}-v_{e_1}$ is employed. If event $e_2$ is not possible, there are two options: either dropping the current lifetime, in order to start a new process when the event becomes possible again, or keeping the residual lifetime, in order to reuse it when the system enters a state in which $e_2$ is possible again. The choice between these options, in the model, depends on the behaviour of the system.
\paragraph{Notation for Timed Automata} The `score' of an event $e$ at time $t$, denoted $n_e (t)$, is the number of lifetimes of the event $e$ used in the interval $[t_0,t]$ (where $t_0$ is the initial time of the system). From now on the following notation will be adopted:
\begin{itemize}
\item With respect to event occurrences:
\begin{itemize}
\item $k$ is the event index $(k=1,2,3,\dots)$.
\item $e_k$ is the $k$-th event.
\item $x_k$ is the state reached after $e_k$ occurs.
\item $t_k$ is the time when $e_k$ occurs.
\item $n_{e,k}$ is the score of the event $e$ after the $k$-th event.
\item $y_{e,k}$ is the residual lifetime of $e$ after the $k$-th event.
\end{itemize}
\item With respect to time:
\begin{itemize}
\item $t$ is the continuous time.
\item $n_{e}(t)$ is the score of the event $e$ at time $t$.
\item $x(t)$ is the state of the system at time $t$.
\end{itemize}
\end{itemize}
\paragraph{General algorithm for event timing} The algorithm works only when the following assumptions hold:
\begin{enumerate}
\item When an event $e$ doesn't occur and it's not possible in the next state, its residual lifetime is ignored and, the next time event $e$ becomes possible, a new total lifetime is taken from the corresponding clock sequence.
\item When event $e$ occurs, the next time it becomes possible a new total lifetime is taken from the corresponding clock sequence.
\item If the event $e$ doesn't occur and it's still possible in the next state, then its residual lifetime is used.
\end{enumerate}
\noindent Under these assumptions, the algorithm is composed of the following steps:
\begin{enumerate}
\setcounter{enumi}{-1}
\item \textbf{Initialization:} for all the events $e\in\mathcal{E}$, if $e\in\Gamma(x_0)$ we consider $y_{e,0}=v_{e,1}$ and $n_{e,0}=1$. If $e\notin\Gamma(x_0)$, $y_{e,0}$ is undefined and $n_{e,0}=0$.
\item \textbf{Selection of the next event:} the next event is the one with the smallest residual lifetime, denoted $y_{k-1}^*$.
$$ e_k = \argmin\limits_{\mathlarger{e\in\Gamma(x_{k-1})}} (y_{e,k-1}) = \text{arg} (y_{k-1}^*) $$
\item \textbf{Determination of the time instant of the next event} $$ t_k=t_{k-1}+y_{k-1}^* $$
\item \textbf{State update} $$ x_{k}=f(x_{k-1},e_{k}) $$
\item \textbf{Score update:} for all the events $e\in\mathcal{E}$. $$ n_{e,k}= \begin{cases} n_{e,k-1}+1 & \text{if a new total lifetime is used (assumptions 1, 2)} \\ n_{e,k-1} & \text{if the residual lifetime is used (assumption 3)} \end{cases} $$
\item \textbf{Update of the residual lifetimes:} for all the events $e\in\mathcal{E}$. $$ y_{e,k}= \begin{cases} v_{e,n_{e,k}} & \text{if } \left[\left(e\notin\Gamma(x_{k-1})\wedge e\in\Gamma(x_{k})\right)\vee\left(e=e_{k} \wedge e\in \Gamma(x_{k})\right)\right] \\ y_{e,k-1}-y_{k-1}^{*} & \text{if } \left[e\in\Gamma(x_{k-1})\wedge e\neq e_{k} \wedge e\in\Gamma(x_{k}) \right] \\ \end{cases} $$
\item \textbf{Increase by one the value of variable $k$ and go to step 1.}
\end{enumerate}
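\bigskip
\noindent As an illustration only (the automaton, the clock sequences and the function name \texttt{simulate} are toy choices of ours, not part of the formal definition), the following Python sketch transcribes the algorithm above, reusing the dictionary encoding of the earlier example.
\begin{verbatim}
# Event-timing simulation under assumptions 1-3 above.
Gamma = {"Idle": {"start"}, "Busy": {"finish"}}
f = {("Idle", "start"): "Busy", ("Busy", "finish"): "Idle"}
x0 = "Idle"
V = {"start": [1.0, 2.0, 1.5], "finish": [0.5, 0.7, 0.9]}  # clock sequences

def simulate(Gamma, f, x0, V, n_steps):
    score = {e: 0 for e in V}                 # scores n_{e,0}
    y = {}                                    # residual lifetimes y_{e,k}
    for e in V:                               # step 0: initialisation
        if e in Gamma[x0]:
            score[e], y[e] = 1, V[e][0]
    x, t, history = x0, 0.0, [(0.0, None, x0)]
    for _ in range(n_steps):
        ek = min(Gamma[x], key=lambda e: y[e])   # step 1: smallest residual
        y_star = y[ek]
        t += y_star                              # step 2: time of next event
        x_new = f[(x, ek)]                       # step 3: state update
        for e in V:                              # steps 4-5: scores/residuals
            if e in Gamma[x_new] and (e == ek or e not in Gamma[x]):
                score[e] += 1                    # a new total lifetime is used
                y[e] = V[e][score[e] - 1]
            elif e in Gamma[x_new] and e in Gamma[x] and e != ek:
                y[e] -= y_star                   # the residual lifetime is used
            elif e in y:
                del y[e]                         # lifetime dropped (assumption 1)
        x = x_new
        history.append((t, ek, x))
    return history

for entry in simulate(Gamma, f, x0, V, 4):
    print(entry)   # (t_k, e_k, x_k): (1.0,'start','Busy'), (1.5,'finish','Idle'), ...
\end{verbatim}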
\newpage
\section{Stochastic Timed Automata} \label{sec:STA}
\paragraph{Concept of ubiquitous uncertainty} In real systems there might be a degree of uncertainty which must be accounted for in the system's models. Suppose there are two machines which operate in parallel: if they are both available, how can one determine which one will start working when a piece arrives? Here's a list of the elements of the Timed Automaton which are subject to uncertainty:
\begin{itemize}
\item $f$: The example just described is a case of uncertainty in the state transition function.
\item $x_0$: Let's consider a shop: if it opens at a given time $t_0$ and there are some people waiting for it to open, how long is the queue (state) at $t_0$?
\item $V$: The processing times can vary depending on the request; it's not always possible to know them in advance.
\end{itemize}
The need to introduce elements of uncertainty in the model leads to the definition of the next kind of state automaton.
\paragraph{Stochastic Timed Automaton} It's a model through which Discrete Event Systems can be represented, when the elements of uncertainty must be taken into account, and it's identified as a $6$-tuple $(\mathcal{E},\mathcal{X},\Gamma,\transp,\transp_0,F)$ where:
\begin{itemize}
\item $(\mathcal{E},\mathcal{X},\Gamma)$ are the same as for the timed automaton.
\item $\transp$ is a set of transition probabilities from one state to another. It replaces $f$ and it's defined as follows:
$$ \transp(x' \mid x,e) = P\left(X_{k+1} = x' \mid X_k = x, \, E_{k+1} = e\right), \qquad \forall e \in \Gamma(x) , \quad \forall x,x' \in \mathcal{X} $$
This set of probabilities generalises the deterministic case: in fact, if $x' = f(x,e)$ is the next state of the deterministic case, the latter is recovered when
\begin{equation*} \transp(\tilde{x} \mid x, e) = \begin{cases} 1 &\text{if } \tilde{x} = x' \\ 0 &\text{otherwise} \end{cases} \end{equation*}
\item $\transp_0$ is the probability mass function of the initial state:
\begin{equation*} \transp_0(x) = P(X_0 = x), \quad \forall x\in \mathcal{X} \end{equation*}
\item $F$ is the stochastic clock structure and $F_e$ is the cumulative distribution function of the lifetimes of the generic event $e\in\mathcal{E}$:
\begin{equation*} F = \{F_e : e \in \mathcal{E}\}, \quad F_e(t) = P(V_{e,i} \leq t) \end{equation*}
Throughout these notes, \textbf{only stochastic clock structures satisfying three assumptions} will be considered:
\begin{enumerate}[(a)]
\item The random variables $V_{e,i}$ are independent;
\item The lifetimes $V_{e,i}$ of the \textit{same event} are identically distributed;
\item Lifetimes of \textit{different} events are independent: \begin{equation*} V_{e,i} \perp V_{e',i}, \quad e \neq e' \end{equation*}
\end{enumerate}
\end{itemize}
\paragraph{Exponential Distribution} When computing probabilities (like the probability of reaching a given state within a certain amount of time), since the lifetimes are random variables, it's normally possible to know only the distributions of the total lifetimes, but not those of the residual lifetimes. An exception is the exponential distribution, which has some helpful properties that will be examined shortly. The exponential distribution $X \sim \text{Exp}(1/\lambda)$ (where $1/\lambda$ is the mean and $\lambda$ is the rate) is defined as follows:
$$ \begin{matrix} F_X(t)=P\left(X\leq t\right)= \begin{cases} 1-e^{-\lambda t} & \textrm{if } t\geq 0 \\ 0 & \textrm{otherwise} \\ \end{cases} &&& f_X(t)=\dfrac{dF_X(t)}{dt}= \begin{cases} \lambda e^{-\lambda t} & \textrm{if } t\geq 0 \\ 0 & \textrm{otherwise} \\ \end{cases} \end{matrix} $$
Where $F_X(t)$ is the CDF and $f_X(t)$ is the pdf. The aforementioned properties are the following:
\begin{itemize}
\item \textbf{Memoryless property}: If the time $X$ between the occurrences of a given event is modelled through an exponential distribution and at time $t$ the event hasn't occurred yet, the probability that the event occurs within an additional time $s$ does not depend on $t$.
In formulae:
$$ P(X>t+s \mid X>t)= P(X>s) $$
\emph{Proof:}
\begin{equation*} \begin{split} &P(X>t+s \mid X>t) = \frac{P(X>t+s \textrm{ , } X>t)}{P(X>t)} = \frac{P(X>t+s)}{P(X>t)}= \\ =&\, \frac{1-P(X\leq t+s)}{1-P(X\leq t)} =\frac{1-F_X(t+s)}{1-F_X(t)} = \frac{e^{-\lambda (t+s)}}{e^{-\lambda t}} = e^{-\lambda s} = \\ =&\, 1 - P(X \leq s) = P(X>s) \end{split} \end{equation*}
\begin{flushright} $\blacksquare$ \end{flushright}
\item \textbf{Extended memoryless property}: The memoryless property can be stated in a more general way by considering, instead of the deterministic time $t$, a generic random variable $Y$, with support in $\left[0,\infty \right)$ and independent of $X$:
$$ P(X>Y+s \mid X>Y)=P(X>s) $$
\emph{Proof:}
\begin{equation*} \begin{split} P(X>Y+s \mid X>Y) = \frac{P(X>Y+s \textrm{ , } X>Y)}{P(X>Y)} = \frac{P(X>Y+s)}{P(X>Y)} \end{split} \end{equation*}
Let's compute the numerator: $P(X>Y+s)$ is equal to the highlighted area $A$ in Figure \ref{fig:extmemlessarea} where the value of $X$ is greater than $Y+s$.
\begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{IMG/CommArea.eps} \caption{The highlighted area $A$ is the one for which $X-Y>s$} \label{fig:extmemlessarea} \end{center} \end{figure}
This consideration allows us to proceed as follows:
\begin{equation*} \begin{aligned} &P(X>Y+s) = P(X,Y\in A) = \int_{0}^{+\infty}dy\int_{y+s}^{+\infty}f_Y(y)\lambda e^{-\lambda x}dx= \\ =&\int_{0}^{+\infty}f_Y(y)dy\int_{y+s}^{+\infty}\lambda e^{-\lambda x}dx =\int_{0}^{+\infty}f_Y(y) \left[-e^{-\lambda x}\right]^{+\infty}_{y+s}dy = \\ =&\int_{0}^{+\infty}f_Y(y)e^{-\lambda y}e^{-\lambda s}dy = e^{-\lambda s} \int_{0}^{+\infty}f_Y(y)e^{-\lambda y}dy \end{aligned} \end{equation*}
Since the $s$ in the lower limit of integration in the innermost integral only produces a factor $e^{-\lambda s}$, which can be brought outside the integral, setting $s=0$ in the computation above shows that
$$ \int_{0}^{+\infty}f_Y(y)e^{-\lambda y}dy = P(X>Y) $$
Now it is easy to compute $P(X>Y+s \mid X>Y)$:
$$ \frac{P(X>Y+s)}{P(X>Y)} = \frac{e^{-\lambda s}\int_{0}^{+\infty}f_Y(y)e^{-\lambda y}dy } {\int_{0}^{+\infty}f_Y(y)e^{-\lambda y}dy } = e^{-\lambda s} = 1-P(X\leq s) = P(X>s) $$
\begin{flushright} $\blacksquare$ \end{flushright}
\item \textbf{Superposition property}: Let's consider the case where it's required to know, among $n$ independent events with exponentially distributed lifetimes $\left( X_i\sim Exp\left(\dfrac{1}{\lambda_i}\right)\right)$, which one occurs first. The random variable that must be taken into account is:
$$ X=\min_{i=1,2,\dots,n}{\{X_i\}} $$
The random variable $X$ is exponentially distributed with rate
$$ \lambda '=\sum_{i=1}^{n}\lambda_i $$
\emph{Proof:}
$$ P(X\leq t) = 1-P(X>t) = 1-P(\min_{i=1,\dots,n}{X_i}>t) $$
From this last result, considering both the independence of the distributions and the fact that if the value $\min_i{\{X_i\}}$ is greater than $t$ then all the $X_i$ are, it's possible to proceed with the computation as follows:
\begin{align*} 1-P(\min_{i=1,\dots,n}{X_i}>t) &= 1-\prod_{i=1}^{n}{P(X_i>t)} = 1-\prod_{i=1}^{n}{\left[1-P(X_i\leq t)\right]} = \\ &= 1-\prod_{i=1}^{n}{e^{-\lambda_{i}t}} = 1-e^{-\sum_{i}{\lambda_i t}} = 1-e^{-\lambda' t} \end{align*}
\begin{flushright} $\blacksquare$ \end{flushright}
\end{itemize}
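\noindent As a quick numerical sanity check of these two properties (a sketch only: Python with NumPy is assumed to be available, and the rates and sample sizes are made up):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, t, s, N = 2.0, 0.4, 0.3, 1_000_000

# Memoryless: P(X > t+s | X > t) should match P(X > s) = e^{-lambda s}.
X = rng.exponential(1 / lam, N)      # NumPy parametrises by the mean 1/lambda
cond = X[X > t]
print((cond > t + s).mean(), np.exp(-lam * s))

# Superposition: the min of independent exponentials is Exp with rate sum(lam_i).
rates = np.array([1.0, 2.0, 3.0])
mins = np.min(rng.exponential(1 / rates, size=(N, 3)), axis=1)
print(mins.mean(), 1 / rates.sum())  # empirical mean vs 1/lambda'
\end{verbatim}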
In addition to these properties, a `useful computation' (for the next topics) will now be carried out. Given two exponentially distributed random variables $X\sim Exp\left(\dfrac{1}{\lambda}\right),Y\sim Exp\left(\dfrac{1}{\mu}\right)$, let's compute the probability $P(X \leq Y+s)$:
\begin{equation*} \begin{aligned} P(X \leq Y+s) &= \int_{0}^{+\infty}dy\int_{0}^{y+s}\lambda e^{-\lambda x}\mu e^{-\mu y}dx = \int_{0}^{+\infty}\mu e^{-\mu y}dy\int_{0}^{y+s}\lambda e^{-\lambda x}dx = \\ &=\int_{0}^{+\infty}\mu e^{-\mu y}\left[ -e^{-\lambda x}\right]_0^{y+s}dy = \int_{0}^{+\infty}\mu e^{-\mu y}\left[ 1-e^{-\lambda(y+s)}\right]dy = \\ &= \int_{0}^{+\infty}\mu e^{-\mu y}dy - e^{-\lambda s}\int_{0}^{+\infty}\mu e^{-\mu y} e^{-\lambda y}dy\\ &= \left[-e^{-\mu y} + \frac{\mu e^{-\lambda s}}{\lambda + \mu}e^{-(\lambda+\mu)y}\right]_{0}^{+\infty} = 1-\dfrac{\mu e^{-\lambda s}}{\lambda + \mu} \end{aligned} \end{equation*}
Setting $s=0$ yields a closed-form formula for computing $P(X\leq Y)$ (skipping the entire chain of integration), which corresponds to the probability of lifetime $X$ being less than lifetime $Y$:
\begin{equation} \label{eq:xlessy} P(X\leq Y)= \frac{\lambda}{\lambda + \mu} \end{equation}
In the next chapter this last result will be used a lot, so it's important to keep it in mind.
\begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{IMG/CommArea2.eps} \caption{The highlighted area is the one for which $X-Y\leq s$} \label{Picture 2} \end{center} \end{figure}
\newpage
\section{Stochastic Timed Automata With Poisson Clock Structure} \label{sec:STAWPCS}
\paragraph{Poisson Counting Processes} It's a process which counts the occurrences of an event which is always possible. In particular, the `interarrival times' $T_i$ between any two occurrences of the same event are \textit{i.i.d.} and their distribution is:
$$ T_i\sim Exp\left(\frac{1}{\lambda}\right), \quad \lambda > 0 $$
\noindent A Poisson counting process is defined as a discrete random variable $N_e(t,t+s)$ (it is a random variable because the interarrival times are, indeed, random variables) representing the number of occurrences of the event $e$ over the interval $\left( t,t+s\right]$. Given that the interarrival times are exponentially distributed, the probability mass function (pmf) of the Poisson counting process is:
\begin{equation*} \begin{aligned} &P\left(N_e(t,t+s)=n\right), \quad n = 0,1,2,\dots \\ &P\left(N_e(t,t+s)=n\right) = \frac{(\lambda s)^n}{n!}e^{-\lambda s} \end{aligned} \end{equation*}
Quite evidently, the pmf depends only on $s$ (due to the memoryless property), so:
\begin{equation} \label{eq:poissonCountingDef} P\left(N_e(s)=n\right) = \frac{(\lambda s)^n}{n!}e^{-\lambda s} \end{equation}
Finally, it is possible to compute the probability that at least $n$ occurrences of the event $e$ fall in an interval of width $s$:
$$ P\left(N_e(s) \geq n\right) = 1 - \sum_{m=0}^{n-1}P\left(N_e(s) = m \right) = 1 - \sum_{m=0}^{n-1} \frac{(\lambda s)^m}{m!}e^{-\lambda s} $$
The first equality arises from the fact that the complement of $\{n,n+1,n+2,\dots\}$ is $\{0,1,\dots,n-1\}$. This is equivalent to comparing $s$ to the sum of $n$ interarrival times $T_i$, coming from the same exponential distribution. Thus:
$$ P(T_1 + \dots + T_n \leq s) = 1 - \sum_{m=0}^{n-1} \frac{(\lambda s)^m}{m!}e^{-\lambda s} $$
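\noindent As a sanity check of Equation \ref{eq:poissonCountingDef} (a sketch only: Python with NumPy assumed, rate and window made up), one can count exponential interarrivals falling in a window of width $s$ and compare the empirical frequencies with the Poisson pmf:
\begin{verbatim}
import math
import numpy as np

rng = np.random.default_rng(1)
lam, s, N = 3.0, 2.0, 100_000
counts = np.empty(N, dtype=int)
for i in range(N):
    t, n = rng.exponential(1 / lam), 0
    while t <= s:                    # count arrivals falling in (0, s]
        n += 1
        t += rng.exponential(1 / lam)
    counts[i] = n

for n in range(4):
    empirical = (counts == n).mean()
    exact = (lam * s) ** n / math.factorial(n) * math.exp(-lam * s)
    print(n, round(empirical, 4), round(exact, 4))
\end{verbatim}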
\newpage
\paragraph{Stochastic Timed Automaton With Poisson Clock Structure} It's a stochastic timed automaton $(\mathcal{E},\mathcal{X},\Gamma,\transp,\transp_0,F)$ where $F = \{F_e : e \in \mathcal{E}\}$, with \textit{i.i.d.} events' lifetimes and:
$$ F_e(t)=1-e^{-\lambda_e t}, \quad t\geq 0, \quad \lambda_e>0 $$
It's important to notice that there is no constraint about whether the events are always possible or not; this means that the `Poisson clock structure' doesn't refer to a Poisson counting process but to the fact that all the events have exponentially distributed lifetimes. \emph{Moreover, for stochastic timed automata with Poisson clock structure, the residual lifetimes of the events follow the same distribution as the corresponding total lifetimes}. This last property can be proved through induction; the actual demonstration is left as an exercise for the reader.
\bigskip
\noindent In stochastic timed automata with Poisson clock structure it's quite straightforward (compared to other models) to compute useful probabilities via closed-form formulae, like:
\begin{enumerate}
\item $P(E_{k+1}=e \mid X_k=x)$ which is the probability that the next event $E_{k+1}$ (uppercase, since it's a random variable) will be $e$, given that the current state is $x$:
$$ P(E_{k+1}=e \mid X_k=x)=P\left(Y_{e,k} < \min_{\left[ \begin{matrix} e'\in \Gamma(x) \\ e'\neq e \end{matrix} \right]} \left\lbrace Y_{e',k}\right\rbrace \right) $$
Where the $Y$ random variables are the residual lifetimes of the feasible events in state $x$. Since the residual lifetimes follow exponential distributions, it's possible to use Equation \ref{eq:xlessy} and the superposition property to proceed with the computation:
\begin{equation} \label{eq:inxnextstateise} P\left(Y_{e,k} < \min_{\left[ \begin{matrix} e'\in \Gamma(x) \\ e'\neq e \end{matrix} \right]} \left\lbrace Y_{e',k}\right\rbrace \right) =\frac{\lambda_e}{\lambda_e+(\Lambda(x)-\lambda_e)}=\frac{\lambda_e}{\Lambda(x)} \end{equation}
\noindent Where $\Lambda(x)-\lambda_e$ is the rate of $\min_{e'\neq e}\left\lbrace Y_{e',k}\right\rbrace$ (by the superposition property), since $\Lambda(x)$ is the sum of the rates of all the feasible events:
$$ \Lambda(x) = \mathlarger{\sum_{e \in \Gamma(x)}\lambda_e} $$
\newpage
\item $P(X_{k+1}=x' \mid X_k=x)$ which is the probability that the next state is $x'$, given that the current state is $x$:
\begin{align*} &P(X_{k+1}=x' \mid X_k=x) \qquad\underset{\textrm{rule}}{\overset{\textrm{total probability}}{=}} \\ &=\sum_{e\in \Gamma(x)}{ \left[ P\left( X_{k+1}=x' \mid X_k=x\textrm{, }E_{k+1}=e \right)\cdot P\left( E_{k+1}=e \mid X_k=x \right) \right] }= \\ &= \sum_{e\in \Gamma(x)}{ \left[ P\left( X_{k+1}=x' \mid X_k=x\textrm{, }E_{k+1}=e \right)\cdot \frac{\lambda_e}{\Lambda(x)} \right]}\triangleq \\ &\underset{\textrm{for a leaner notation}}{\triangleq} \qquad \sum_{e\in \Gamma(x)}{ \left[ \transp\left(x' \mid x,e \right)\cdot \frac{\lambda_e}{\Lambda(x)} \right]} \end{align*}
\end{enumerate}
These two results are quite good, but there are still more things that can be done.
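\noindent Before that, a quick Monte-Carlo check of Equation \ref{eq:inxnextstateise} (a sketch only: the event names and rates are made up, and NumPy is assumed): draw the residual lifetimes of the feasible events and see which one is the smallest.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
rates = {"a": 1.0, "b": 2.0, "c": 3.0}   # feasible events in x and their rates
N = 500_000
samples = {e: rng.exponential(1 / r, N) for e, r in rates.items()}
winners = np.argmin(np.vstack(list(samples.values())), axis=0)
Lambda = sum(rates.values())
for i, (e, r) in enumerate(rates.items()):
    print(e, (winners == i).mean(), r / Lambda)  # empirical vs lambda_e/Lambda(x)
\end{verbatim}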
By defining $\mathcal{E}=\left\lbrace 1,2,\dots,m \right\rbrace$ and $\mathcal{X}=\left\lbrace 1,2,\dots,n \right\rbrace$ it's possible to define the following matrices and vector:
\begin{MatrixAdjust*} P_E= \left[ \begin{matrix} P\left(E_{k+1}=1 \mid X_k=1\right)&& P\left(E_{k+1}=2 \mid X_k=1\right)&& \dots&& P\left(E_{k+1}=m \mid X_k=1\right)\\\\ P\left(E_{k+1}=1 \mid X_k=2\right)&& P\left(E_{k+1}=2 \mid X_k=2\right)&& \dots&& P\left(E_{k+1}=m \mid X_k=2\right)\\\\ \vdots&& \vdots&& \ddots&& \vdots\\\\ P\left(E_{k+1}=1 \mid X_k=n\right)&& P\left(E_{k+1}=2 \mid X_k=n\right)&& \dots&& P\left(E_{k+1}=m \mid X_k=n\right) \end{matrix} \right] \nonumber \end{MatrixAdjust*}
\bigskip
\begin{MatrixAdjust*} P_X= \left[ \begin{matrix} P\left(X_{k+1}=1 \mid X_k=1\right)&& P\left(X_{k+1}=2 \mid X_k=1\right)&& \dots&& P\left(X_{k+1}=n \mid X_k=1\right)\\\\ P\left(X_{k+1}=1 \mid X_k=2\right)&& P\left(X_{k+1}=2 \mid X_k=2\right)&& \dots&& P\left(X_{k+1}=n \mid X_k=2\right)\\\\ \vdots&& \vdots&& \ddots&& \vdots\\\\ P\left(X_{k+1}=1 \mid X_k=n\right)&& P\left(X_{k+1}=2 \mid X_k=n\right)&& \dots&& P\left(X_{k+1}=n \mid X_k=n\right) \end{matrix} \right] \nonumber \end{MatrixAdjust*}
\bigskip
\begin{equation*} \Pi_X(k)= \left[ \begin{matrix} P(X_k=1)& P(X_k=2)& \cdots& P(X_k=n)& \end{matrix} \right] \end{equation*}\label{eq:stateprobs}
\noindent Thus, by the total probability rule, the probability that the $(k+1)$-th state is $j$ can be written as:
\begin{equation*} P(X_{k+1} = j) = \sum_{i \in \mathcal{X}} \underbrace{P(X_{k+1} = j \mid X_k = i)}_{ (i,j) \text{-th entry of } P_X} \underbrace{P(X_k = i)}_{i \text{-th entry of } \Pi_X(k)} \end{equation*}
$P(X_{k+1} = j)$ will be the $j$-th entry of the vector $\Pi_X(k+1)$ and of the product $\Pi_X(k) \cdot P_X$. So, assuming that $\Pi_X(0)$ is known:
$$ \begin{matrix} \Pi_X(0)&=&\left[ \begin{matrix} \transp_0(1),& \transp_0(2),& \dots,& \transp_0(n) \end{matrix} \right]\\\\ \Pi_X(1)&=&\Pi_X(0) \cdot P_X\\\\ \Pi_X(2)&=&\Pi_X(1) \cdot P_X&=&\Pi_X(0) \cdot P_X^2\\\\ \Pi_X(3)&=&\Pi_X(2) \cdot P_X&=&\Pi_X(0) \cdot P_X^3\\ &&\vdots\\ \Pi_X(k+1)&=&\Pi_X(k) \cdot P_X&=&\Pi_X(0)P_X^{k+1} \end{matrix} $$
Likewise, it's possible to obtain the same result for the events:
$$ P\left(E_{k+1}=j\right)=\sum_{i\in\mathcal{X}}{P\left(E_{k+1}=j \mid X_k=i\right)P\left(X_k=i\right)} $$
$$ \Pi_E(k)= \left[ \begin{matrix} P\left(E_k=1\right),& P\left(E_k=2\right),& \dots,& P\left(E_k=m\right) \end{matrix} \right] $$
$$ \Pi_E(k+1)=\Pi_X(k)P_E=\Pi_X(0)P_X^kP_E $$
With this result, it's finally possible to say that \emph{when using stochastic timed automata with Poisson clock structure, it's enough to know the matrices $P_X$ and $P_E$ and the vector $\Pi_X(0)$ to find all the future state and event probabilities}.
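\noindent As a minimal illustration of this propagation (the matrices below are made-up toy values, and NumPy is assumed to be available):
\begin{verbatim}
import numpy as np

P_X = np.array([[0.1, 0.9],
                [0.6, 0.4]])   # state transition probabilities (n x n)
P_E = np.array([[0.3, 0.7],
                [0.5, 0.5]])   # next-event probabilities (n x m)
Pi0 = np.array([1.0, 0.0])     # initial state distribution Pi_X(0)

Pi = Pi0.copy()
for k in range(1, 4):
    Pi = Pi @ P_X              # Pi_X(k) = Pi_X(k-1) P_X
    print(k, Pi.round(4), (Pi @ P_E).round(4))   # Pi_X(k), Pi_E(k+1)
\end{verbatim}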
\paragraph{Distribution of State Holding Times} The state holding time $V(x)$ is a continuous random variable characterising the time spent by the system in state $x$. Notice that, while the system is in state $x$, events which do not trigger a state transition might occur: this implies that $V(x)$ keeps increasing until the system leaves state $x$.
Let's compute the CDF of the state holding time, for a stochastic timed automaton with Poisson clock structure, and show that it is exponentially distributed with rate:
$$ \sum_{\mathlarger{e\in\Gamma(x)}}\lambda_e \left[ 1-\transp\left(x \mid x,e\right) \right] $$
\emph{Proof:} The CDF of $V(x)$ is:
$$ P(V(x)\leq t) = 1-P(V(x)> t) $$
Consider $P(V(x) > t)$ only:
\begin{equation*} \begin{aligned} &P(V(x)> t) = P\left(\textit{no state transitions over the }\left(0,t\right] \textit{ interval} \mid X(0)=x\right)= \\ &=P\left(\underset{e\in\Gamma(x)}\bigcap\textit{no state transition triggered by event e over }\left( 0,t\right] \mid X(0)=x\right)=\footnotemark \\ &=\prod_{e\in\Gamma(x)} P\left(\textit{no state transition triggered by event e over }\left( 0,t\right] \mid X(0)=x\right)=\footnotemark \\ &=\prod_{e\in\Gamma(x)} P\left( \begin{aligned} &\underset{n=0}{\overset{+\infty}\bigcup} \textit{event e occurs exactly n times over } \left(0,t\right] \\ &\textit{and never triggers a state transition} \mid X(0)=x \end{aligned} \right) =\footnotemark \\ &=\prod_{e\in\Gamma(x)}\sum_{n=0}^{+\infty} P\left( \begin{aligned} &\textit{event e occurs exactly n times over } \left( 0,t\right]\\ &\textit{and never triggers a state transition} \mid X(0)=x \end{aligned} \right)= \end{aligned}
% -2 because there are 3 footnotes
\addtocounter{footnote}{-2}
\footnotetext{Thanks to the independence of the events' lifetimes in a Poisson clock structure}
\stepcounter{footnote}\footnotetext{It may occur more than once}
\stepcounter{footnote}\footnotetext{Union of disjoint events, so their probabilities can be summed}
\end{equation*}
\noindent In this last probability the occurrences of an \textit{i.i.d.} set of exponentially distributed random variables are counted, which is equivalent to a Poisson counting process. Therefore, it's possible to substitute Equation \ref{eq:poissonCountingDef} and multiply it by the probability that the system remains in $x$, given event $e$, exactly $n$ times:
\begin{equation*} \begin{aligned} P(V(x) > t) & =\prod_{e\in\Gamma(x)}\sum_{n=0}^{+\infty} \frac{(\lambda_e t)^n}{n!} \cdot e^{-\lambda_e t} \cdot \transp( x \mid x,e)^n = \\ &= \prod_{e\in\Gamma(x)}e^{-\lambda_e t}\sum_{n=0}^{+\infty} \frac{[(\lambda_e t)\cdot \transp (x \mid x,e)]^n}{n!} \end{aligned} \end{equation*}
Now the last sum can be rewritten using $e^x=\sum_{n=0}^{+\infty}x^n/n!$ (Maclaurin series expansion):
\begin{equation*} \begin{aligned} &\prod_{e\in\Gamma(x)}\exp(-\lambda_e t) \cdot \exp((\lambda_e t) \cdot \transp(x \mid x,e)) = \\ =&\prod_{e\in\Gamma(x)}\exp(-\lambda_e [1 - \transp(x \mid x,e)] t) \end{aligned} \end{equation*}
And finally:
\begin{equation*} P(V(x) > t) = \exp\left(- \sum_{e\in\Gamma(x)} \lambda_e \cdot [1 - \transp(x \mid x,e)] \cdot t\right) \end{equation*}
Switching to the CDF of interest is straightforward (namely, the complement of $P(V(x) > t)$), thus:
\begin{equation*} P\left(V(x)\leq t\right)=1- \exp\left(- \sum_{e\in\Gamma(x)} \lambda_e \left[1 - \transp(x \mid x,e)\right] t\right) \end{equation*}
\begin{flushright} $\blacksquare$ \end{flushright}
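\noindent To see the result in action, the following Monte-Carlo sketch (Python with NumPy assumed; the rates and self-loop probabilities are made up) draws occurrences of the feasible events, exploiting the memoryless property to re-draw lifetimes after every occurrence, and compares the mean holding time with $1/\sum_{e}\lambda_e[1-\transp(x \mid x,e)]$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
lam = {"a": 1.0, "b": 2.0}       # rates of the feasible events in x
p_stay = {"a": 0.5, "b": 0.25}   # p(x | x, e): event occurs, state stays x
events = list(lam)
total = sum(lam.values())
probs = [lam[e] / total for e in events]

def holding_time():
    t = 0.0
    while True:
        t += rng.exponential(1 / total)           # superposition: next event
        e = events[rng.choice(len(events), p=probs)]
        if rng.random() > p_stay[e]:              # this occurrence leaves x
            return t

times = np.array([holding_time() for _ in range(50_000)])
rate = sum(l * (1 - p_stay[e]) for e, l in lam.items())
print(times.mean(), 1 / rate)    # ~ equal if V(x) is Exp with that rate
\end{verbatim}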
\section{Markov Chains} \label{sec:MC}
\paragraph{Stochastic Processes} An example of a stochastic process was already proposed when talking about the `Poisson counting process'; now a slightly more formal definition will be given:
\begin{itemize}
\item A stochastic process $\{X(t)\}$ is a collection of random variables indexed by a time index $t \in T$.
\end{itemize}
where the index set $T$ can be either discrete or continuous.
Stochastic processes are classified depending on the nature of $T$:
\begin{itemize}
\item \textbf{if $T$ is discrete} (either finite or not) the process is a \emph{`discrete time stochastic process'} and it's also called a \emph{`chain'}.
\item \textbf{if $T$ is continuous} the process is a \emph{`continuous time stochastic process'}.
\end{itemize}
To characterise stochastic processes it's necessary to provide the joint distributions of all the possible $n$-tuples of the random variables which compose the process. Since this is really hard to do, in these notes stochastic processes will be used only when independence between the random variables holds, therefore with stochastic timed automata with Poisson clock structure. Finally, the independence concept in stochastic processes is defined as follows:
\bigskip
\noindent Given random variables $X(t_1), X(t_2),\dots, X(t_n)$ with $t_1<t_2<\dots<t_n \in T$ and $n\in\mathbb{N}^+$, the process is said to be \textit{`independent'} if all the $n$-tuples are independent. So, let $x(t)$ be the realisation of the random variable $X(t)$: if the equality
$$ P\left(X\left(t+s\right)=\tilde{x} \mid X\left(\tau\right)= x\left(\tau\right),\hspace{5pt}\forall\tau\leq t\right)= P\left(X\left(t+s\right)=\tilde{x} \right) $$
holds, the process is independent. This means that the history of the past evolution of the system is irrelevant for predicting the future evolution ($t+s$ is called the \textit{prediction horizon}).
\paragraph{Continuous Time Homogeneous Markov Chains (CTHMC)} are a subset of the stochastic processes called `Markov processes', for which the requirement is weaker than the independence just defined. If the condition:
$$ P\left(X(t+s)=\tilde{x} \mid X(\tau)= x(\tau),\;\forall\tau\leq t\right)= P\left(X(t+s)=\tilde{x} \mid X(t)=x(t)\right) $$
holds, the process satisfies the \textit{`Markov property'}: the next realisation $\tilde{x}$ depends only on the current one, $x(t)$, and not on the earlier history.
\newline A Continuous Time Homogeneous Markov Chain is a stochastic process with the following properties:
\begin{itemize}
\item $T=\mathbb{R}^+=\{t\in\mathbb{R}:\;t\geq 0\}$ (\textbf{Continuous Time}).
\item $X(t)\in \mathcal{X}=\left\lbrace 1,2,\dots\right\rbrace$ (\textbf{Chain}).
\item \textbf{Markov property}.
\item \textbf{Homogeneity}: the transition function depends only on the prediction horizon
$$ P\left(X(t+s)=j \mid X(t)=i\right) = P\left(X(t'+s)=j \mid X(t')=i\right),\;\forall t, t' $$
Thus, it can be rewritten as:
$$ \transp_{i,j}(s) = P\left(X(t+s)=j \mid X(t)=i\right) $$
\end{itemize}
\paragraph{Chapman-Kolmogorov Equation} Please notice that $x(t)\in \mathcal{X}$ implies that the realisations of the process' random variables are the states of the system modelled as a CTHMC. So, let's try to compute the probability that, given a current state $i$, after a time $s$ the state will be $j$:
\begin{equation*} \transp_{i,j}(s) = P\left(X(t+s)=j \mid X(t)=i\right) \end{equation*}
\begin{figure}[h] \begin{center} \includegraphics[width=0.35\textwidth]{IMG/CTHMC1.eps} \caption{At time $t$ the current state is $i$.
At $t+s$ the next state is $j$, while at $t+u$ the state is $r \in \mathcal{X}$, which can be $i$, $j$ or any other state.} \label{fig:chapmanTimeDiagram} \end{center} \end{figure}
\noindent In order to execute the computation, let's consider the case in Figure \ref{fig:chapmanTimeDiagram}:
\begin{equation*} \begin{aligned} \transp_{i,j}(s)&=P\left(X(t+s)=j \mid X(t)=i\right)=\footnotemark \\ &=\sum_{r\in\mathcal{X}}P\left(X(t+s) = j \mid X(t+u) = r, \, X(t) = i\right) \cdot P\left(X(t+u) = r \mid X(t) = i\right) = \footnotemark \\ &=\sum_{r\in\mathcal{X}}P\left(X(t+s) = j \mid X(t+u) = r\right) \cdot P\left(X(t+u) = r \mid X(t) = i\right) = \\ &=\sum_{r\in\mathcal{X}}\transp_{r,j}(s-u)\cdot\transp_{i,r}(u) \end{aligned}
% -1 because there are 2 footnotes
\addtocounter{footnote}{-1}
\footnotetext{By applying the total probability rule}
\stepcounter{footnote}\footnotetext{By applying the Markov property}
\end{equation*}
The equality:
\begin{equation} \label{eq:chapmanKDef} \transp_{i,j}(s) = \sum_{r\in \mathcal{X}}\transp_{i,r}(u)\cdot \transp_{r,j}(s-u) \end{equation}
is known as the `Chapman-Kolmogorov equation'. As seen for the stochastic timed automata with Poisson clock structure, also here it's possible to define a matrix for the transition probabilities $\transp_{i,j}(s)$. Let's call this matrix $H(s)$:
$$ H(s)=\left[ \begin{matrix} \transp_{1,1}(s)&\transp_{1,2}(s)&\cdots \\ \transp_{2,1}(s)&\transp_{2,2}(s)&\cdots \\ \vdots&\vdots&\ddots \end{matrix} \right] $$
This matrix has some important properties:
\begin{enumerate}
\item The sum of the elements along any row of $H(s)$ is 1 (being the sum of the probabilities related to all the possible cases starting from the state the row refers to).
\item As a result of the previous property, if $s=0$ then the probability that the state does not change is 1, therefore $\transp_{i,i}(0) = 1$ and $\transp_{i,j}(0) = 0, \; j \neq i$:
$$ H(0)=I $$
\item Since $\transp_{i,j}(s)$ is the generic element of matrix $H(s)$, from Equation \ref{eq:chapmanKDef} one notices that $H(s)$ equals the matrix product of $H(u)$ by $H(s-u)$ (matrix form of the Chapman-Kolmogorov equation):
$$ H(s)=H(u) \cdot H(s-u) $$
\end{enumerate}
From the last property it's possible to define the derivative of $H(s)$:
$$ \frac{dH(s)}{ds}=H(s) \cdot Q, \quad Q\triangleq\lim_{ds\rightarrow 0}\frac{H(ds)-I}{ds} $$
\emph{Proof}: We would like to obtain the difference quotient. Let's consider an infinitesimal increment $ds$:
\begin{equation*} \begin{aligned} &H(s+ds)=H(s)H(ds) \footnotemark \\ &\frac{H(s+ds)-H(s)}{ds}=\frac{H(s)H(ds)-H(s)}{ds} \end{aligned} \footnotetext{By subtracting $H(s)$ from both sides and dividing by $ds$} \end{equation*}
Taking the limit for $ds\rightarrow 0$ yields the difference quotient on the left-hand side:
\begin{equation*} \begin{aligned} \lim_{ds\rightarrow 0}\frac{H(s+ds)-H(s)}{ds} = \lim_{ds\rightarrow 0}\frac{H(s)H(ds)-H(s)}{ds} \end{aligned} \end{equation*}
Thus:
$$ \frac{dH(s)}{ds}=H(s) \cdot \lim_{ds\rightarrow 0}{\frac{H(ds)-I}{ds}} $$
The right-hand side limit has an indeterminate form $0/0$, since $H(0)=I$:
$$ \frac{(I-I)}{ds} \xrightarrow{ds\rightarrow 0} \frac{0}{0} $$
Let's \textit{assume that the limit exists} and let's call it $Q$. Since all the elements involved in the limit are matrices, $Q$ will also be a matrix (as a side note, matrix $Q$ can be estimated in the field by performing measurements on the system).
Finally, taking into account property $\#2$, it's possible to define the following Cauchy problem:
$$ \begin{cases} \dfrac{dH(s)}{ds}=H(s) \cdot Q\\ H(0)=I \end{cases} $$
Whose solution is:
$$ H(s)=e^{Qs}, \quad e^{Qs}=\sum_{n=0}^{+\infty}{\frac{(Q \cdot s)^n}{n!}} $$
Where $e^{Qs}$ is the matrix exponential. Now we need to validate our initial assumption on the existence of $Q$. Starting from the initial definition of matrix $Q$, substitute the matrix exponential:
\begin{equation*} \begin{aligned} &\lim_{ds\rightarrow 0}\frac{H(ds)-I}{ds} = \lim_{ds\rightarrow 0}\frac{e^{Qds}-I}{ds}\overset{\text{Taylor}}{\underset{1^{st}\text{ order}}{=}} \\ &\lim_{ds\rightarrow 0}\frac{(I+Qds+o(ds))-I}{ds}=\lim_{ds\rightarrow 0}\frac{Q\cancel{ds}}{\cancel{ds}}+\cancelto{o(1)}{\frac{o(ds)}{ds}} = Q \end{aligned} \end{equation*}
Due to the last equality, it is clear that the result is consistent with the initial assumption.
\begin{flushright} $\blacksquare$ \end{flushright}
\newpage
\subsection{The Q Matrix}
From the previous proof it's hard to obtain any direct information about $Q$, so this section will be dedicated to the properties of the $Q$ matrix. To start with, this matrix is called the `\emph{Transition Rate Matrix}':
$$ Q=\left[ \begin{matrix} q_{1,1} & q_{1,2} & \cdots \cr q_{2,1} & q_{2,2} & \cdots \cr \vdots & \vdots & \ddots \end{matrix} \right] $$
where the generic element $q_{i,j}$ is called a transition rate (and has dimension \si{\per\second}). Since the need to define $Q$ came from the computation of the derivative of the matrix $H$, the properties of $H$ can be used to derive those of $Q$. Let's start from the property of $H$ about the sum along any row being always $1$. Let $\textrm{\underline{\textbf{1}}}$ denote the vector $(1,1,1,\dots,1)^T$:
$$ H(s)\cdot \textrm{\underline{\textbf{1}}} = \textrm{\underline{\textbf{1}}} \hspace{15pt} \overset{\textrm{taking the derivative}}{\underset{\textrm{of both sides}}{\Rightarrow}} \hspace{15pt} \underbrace{\frac{dH(s)}{ds}}_{H(s)Q}\,\textrm{\underline{\textbf{1}}} \hspace{5pt}=\hspace{5pt} \frac{d\textrm{\underline{\textbf{1}}}}{ds} \hspace{5pt}=\hspace{5pt} O \hspace{15pt}\Rightarrow $$
$$ \Rightarrow\hspace{5pt} H(s)Q\cdot \textrm{\underline{\textbf{1}}}\hspace{5pt}=\hspace{5pt}O \hspace{15pt} \overset{s\rightarrow 0}{\underset{H(0)\hspace{2pt}=\hspace{2pt}I}{\Rightarrow}} \hspace{15pt} Q\cdot \textrm{\underline{\textbf{1}}}\hspace{5pt}=\hspace{5pt}O $$
\bigskip
\noindent Where $O$ is the zero vector $\left(0,0,0,\dots,0\right)^T$. From this final result it's clear that:
\bigskip\noindent \underline{\emph{The sum along every row of $Q$ is always $0$.}} $\quad(\star )$
\bigskip\noindent This property has an important implication: \emph{one of the eigenvalues of $Q$ is always $0$}. Indeed, $Q\cdot \textrm{\underline{\textbf{1}}}=O$ means that the sum of all the columns of $Q$ is the zero vector, so the columns are linearly dependent and $Q$ does not have full rank; since the number of zero eigenvalues is equal to the dimension of the kernel\footnote{$dim(ker(Q))=dim(Q)-rank(Q)$}, one of the eigenvalues will always be $0$.
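\noindent A numerical illustration of $H(s)=e^{Qs}$ and of the properties just derived (a sketch only: the $3\times 3$ matrix $Q$ is a toy example, and SciPy's matrix exponential is assumed to be available):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  2.0, -4.0]])   # rows sum to 0, off-diagonal >= 0

s, u = 0.7, 0.3
H = expm(Q * s)                      # H(s) = e^{Qs}
print(H.sum(axis=1))                 # each row of H(s) sums to 1
print(np.allclose(H, expm(Q * u) @ expm(Q * (s - u))))  # Chapman-Kolmogorov
print(np.linalg.eigvals(Q))          # one eigenvalue is always 0
\end{verbatim}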
So, keeping in mind the property $(\star )$, let's proceed with the definition of the generic coefficient $q_{i,j}$. At the moment, only the case in which $i\neq j$ will be considered.
Let's start with the first order Taylor representation of $H(ds)$, with $ds\rightarrow 0$:
$$ H(ds) \hspace{5pt} =\hspace{5pt}e^{Qds} \quad \overset{\textrm{Taylor}}{\underset{1^{st}\textrm{ order}}{=}} \quad I+Qds+o(ds) $$
\bigskip\noindent So, reducing this last equation to the coefficients of the matrices involved, the result is (always under the condition $ds\rightarrow 0$):
$$ \transp_{i,j}(ds)=q_{i,j}ds+o(ds) $$
Looking at the equation it's clear that \underline{\emph{all the $q_{i,j}$ with $i\neq j$ are non-negative}}, since $\transp_{i,j}(ds) \geq 0$ (being a probability), $ds\geq 0$ (as it represents time, which cannot be negative) and $o(ds)$ goes to zero with $ds\rightarrow 0$.
\newpage
\noindent The non-negativity of $q_{i,j}$ and the property $(\star )$ impose that:
\bigskip\noindent \underline{\emph{All the coefficients $q_{i,i}$ of the main diagonal of $Q$ must be non-positive}}, and in particular:
$$ q_{i,i} \hspace{5pt}=\hspace{5pt}- \sum_{j\neq i}{q_{i,j}} $$
\paragraph{Physical Representation Of The Transition Rates} On a physical level the elements of the $Q$ matrix have two different meanings, depending on whether they are on the main diagonal or not:
\begin{itemize}
\item The elements on the main diagonal define the rate of the distribution of the state holding times (which are exponentially distributed) of the system and, in particular:
$$ E\left[V(i)\right]=\frac{1}{-q_{i,i}} $$
\item All the other elements are used, together with the ones from the main diagonal, to compute the overall state transition probability, independent of time, from the generic state $i$ to the generic state $j\neq i$ with the formula:
$$ p_{i,j}=-\frac{q_{i,j}}{q_{i,i}} $$
\end{itemize}
Let's start by computing the CDF of the state holding time of the generic state $i$, in order to provide a proof for the statement about the elements on the main diagonal:
$$ F_i(t)\hspace{5pt}\triangleq\hspace{5pt} P\left( V(i)\leq t \right) $$
And let's consider a small increment $dt$ of $t$, such that at most one state transition occurs in the time interval $(t,t+dt]$:
\begin{align*} &F_i(t+dt)-F_i(t)= P\left( t < V(i) \leq t+dt\right)= P\left(V(i) \leq t+dt \mid V(i)>t \right)\cdot \underbrace{P\left(V(i)>t\right)}_{1-F_i(t)} \\ &\overset{\textrm{in particular}}{\Rightarrow} \quad P\left(V(i) \leq t+dt \mid V(i)>t \right)= \sum_{j\neq i}{\transp_{i,j}(dt)}\quad \overset{\textrm{Taylor}}{\underset{1^{st}\textrm{ order}}{=}}\quad \sum_{j\neq i}{\left(q_{i,j}dt\right)}+o(dt) \Rightarrow\\ &\Rightarrow F_i(t+dt)-F_i(t) = \left[ \sum_{j\neq i}{\left(q_{i,j}dt\right)}+o(dt) \right] \left[ 1-F_i(t) \right] \end{align*}
Dividing both members of the last equation by $dt$ and computing the limit for $dt\rightarrow 0$:
\begin{align*} &\lim_{dt\rightarrow 0}{\frac{F_i(t+dt)-F_i(t)}{dt}} =\frac{d\left(F_i(t)\right)}{dt} &=&\quad\lim_{dt\rightarrow 0}\dfrac{ \left[ \sum_{j\neq i}{\left(q_{i,j}dt\right)}+o(dt) \right] \left[ 1-F_i(t) \right]}{dt}= \\ =&\lim_{dt\rightarrow 0}{ \left[ \sum_{j\neq i}{\frac{q_{i,j}dt}{dt}}+\frac{o(dt)}{dt} \right] \left[ 1-F_i(t) \right]} &=&\quad\sum_{j\neq i}{q_{i,j}} \left[ 1-F_i(t) \right]=-q_{i,i} \left[ 1-F_i(t) \right] \end{align*}
\newpage
\noindent In order to find $F_i(t)$, let's solve the following Cauchy problem:
$$ \begin{cases} \dfrac{d(F_i(t))}{dt}=-q_{i,i} \cdot [1-F_i(t)]\\ F_i(0)=0 \end{cases} $$
Where $F_i(0)$ is the initial value of the generic state holding time $V(i)$.
In order to compute the solution, let's apply the following substitution:
$$ G_i(t)\hspace{5pt}\triangleq\hspace{5pt} 1-F_i(t) \quad \overset{\textrm{substituting }G_i(t) \textrm{ in}} {\underset{\textrm{both the equations}}{\Rightarrow}} \quad \begin{cases} \dfrac{d(G_i(t))}{dt}=-\dfrac{d(F_i(t))}{dt}=q_{i,i}\cdot G_i(t)\\ G_i(0)=1 \end{cases} $$
The solution to this Cauchy problem is:
$$ G_i(t)=\exp(q_{i,i} \cdot t) \Rightarrow F_i(t)=1-G_i(t)=1-\exp(q_{i,i} \cdot t) $$
This means that all the state holding times in a CTHMC are exponentially distributed and, moreover, that the rate of $F_i(t)$ is $-q_{i,i}$ (a non-negative quantity, since $q_{i,i}\leq 0$). Computing the expected value of the distribution returns the first statement.
\begin{flushright} $\blacksquare$ \end{flushright}
\bigskip\noindent Now the proof of the statement about the elements outside the main diagonal will be provided. Given two generic states $i$ and $j$, with $i\neq j$, and a generic time interval $(t, t+dt]$, let's compute the following probability:
\begin{equation} \label{eq:qijtransition} P\big(\textit{a transition from i to j occurs in the interval } (t, t+dt] \mid X_k=i\big)= \end{equation}
\begin{align*} =P\left( V(i)>t\right) \transp_{i,j}(dt) &= \left[1-P\left( V(i)\leq t\right)\right] \underbrace{\transp_{i,j}(dt)}_{ \overset{\textrm{Taylor }1^{\textrm{st}}\textrm{ order}} {\Rightarrow q_{i,j}dt+o(dt)}} \overset{dt\rightarrow 0}{=} \left[1-\left(1-e^{q_{i,i} t}\right)\right]q_{i,j}dt \\\Rightarrow& P\left( V(i)>t\right) \transp_{i,j}(dt) = \exp(q_{i,i} t)q_{i,j}dt \end{align*}
\noindent Let's now introduce another probability, which is the overall transition probability from $i$ to $j$, without taking into account the time:
$$ p_{i,j}=P(X_{k+1}=j \mid X_k=i) $$
$p_{i,j}$ can be obtained as the sum of probabilities like \ref{eq:qijtransition}, computed for all the intervals of the kind $(t_n,t_n+dt_n]$, with $t_0=0$, $dt_n \rightarrow 0$, $n\in \mathbb{N}$:
$$ p_{i,j} \hspace{5pt} = \hspace{5pt} \int_0^{+\infty}{P\left( \textrm{a transition from }i\textrm{ to }j \textrm{ occurs in the interval } \left( t, t+dt\right] \mid X_k=i \right)} \hspace{5pt} = $$
$$ = \hspace{5pt} \int_0^{+\infty}{ e^{q_{i,i} t}q_{i,j}dt } \hspace{5pt} = \hspace{5pt} q_{i,j}\left[ \frac{e^{q_{i,i}t}}{q_{i,i}}\right]_0^{+\infty} \hspace{5pt} = \hspace{5pt} -\frac{q_{i,j}}{q_{i,i}} $$
\begin{flushright} $\blacksquare$ \end{flushright}
\newpage
\subsection{Steady State Analysis}
Let's consider the `State Probability Vector' $\Pi(t)$, similar to the vector $\Pi_X(k)$ introduced in Section \ref{sec:STAWPCS}:
$$ \Pi(t)\triangleq \left[ \begin{matrix} &\Pi_1(t),& \Pi_2(t),& \Pi_3(t),& \dots& \end{matrix} \right] $$
The `State Probability' of the generic $j$-th state at time $t$ is obtained as:
\begin{equation*} \begin{aligned} \Pi_j(t) & = P(X(t) = j) = \\ & = \sum_{i\in\mathcal{X}}\underbrace{P(X(t)=j \mid X(0) = i)}_{\transp_{i,j}(t)} \cdot \underbrace{P(X(0) = i)}_{\Pi_i(0)} = \\ & = \Pi(0) \cdot H_j(t) \end{aligned} \end{equation*}
Where $H_j(t)$ is the $j$-th column of the matrix $H(t)$.
This means that, given a time instant $t$, it's possible to compute all the state probabilities in one go by using:
$$ \Pi(t)\hspace{5pt}=\hspace{5pt} \Pi(0)\cdot H(t)\hspace{5pt}=\hspace{5pt} \Pi(0)\cdot e^{Qt} $$
The state probabilities can be divided, depending on the behaviour (with respect to time) of the system, into two categories: `Transient State' and `Steady State' probabilities:
\begin{itemize}
\item \textbf{Transient State}: In order to analyse the behaviour of the system's state probabilities during the transient state, the derivative of $\Pi(t)$ must be computed:
$$ \frac{d\Pi(t)}{dt} \hspace{5pt}=\hspace{5pt} \Pi(0)\frac{dH(t)}{dt} \hspace{5pt}=\hspace{5pt} \Pi(0)H(t)Q \hspace{5pt}=\hspace{5pt} \Pi(t)Q $$
\noindent Which means that, to study the behaviour of the system in the transient state, it's enough to find the solution to the following Cauchy problem:
$$ \begin{matrix} \begin{cases} \dfrac{d\Pi(t)}{dt}=\Pi(t)Q\\ \Pi(0)=\Pi_0 \\ \end{cases} \end{matrix} $$
\noindent Where $\Pi_0$ is the initial state probability vector of the system.
\item \textbf{Steady State}: A system reaches steady state when all the state probabilities converge asymptotically to constant values. So, in order to study steady state probabilities, it's necessary to focus on the following limit:
$$ \lim_{t\rightarrow \infty}{\Pi_i(t)} $$
\end{itemize}
\newpage
\paragraph{Classification of States} In this paragraph some properties of the states, useful for the steady state analysis, will be explained:
\begin{itemize}
\item \textbf{Reachability}: A generic state $j$ is reachable from state $i$ if
$$ \exists \, s : \transp_{i,j}(s)>0. $$
This concept can also be stated informally: \textit{there must exist a directed path from state $i$ to state $j$}.
\item \textbf{Closure}: A subset $S\subseteq\mathcal{X}$ is \textit{closed} if
$$ \transp_{i,j}(s)=0,\hspace{10pt} \forall s, \hspace{10pt} \forall i\in S,\hspace{10pt} j\in\mathcal{X} \setminus S. $$
Informally, it's possible to say that from the subset $S$ it's not possible to reach the states of the subset $\mathcal{X}\setminus S$:
\begin{figure}[H] \begin{center} $$ \begin{tikzpicture} [ thick, align=center, every state/.style={draw=black!60, fill=black!5} ] \tikzset{state/.style = {shape=ellipse,draw, node distance=2cm}} \tikzset{edge/.style = {->,> = angle 60, thick}} % nodes
\node[state] (a) at (0,0) {$1$}; \node[state,draw=none] (s) [left = 0.5cm of a] {}; \node[state] (b) [below =2cm of a] {$2$}; \node[state] (c) [right =2cm of a] {$3$}; \node[state] (d) [below =2cm of c] {$4$}; % arcs
\path[->] (s) edge node[auto] {} (a) (a) edge node[auto] {} (b) (b) edge[bend left] node[auto] {} (a) (a) edge node[auto] {} (c) (c) edge[bend left] node[auto] {} (d) (d) edge node[auto] {} (c) (b) edge node[auto] {} (c) ; \end{tikzpicture} $$ \caption{\small The subset $\{3,4\}$ is an example of a closed set, since once the system enters state $3$ it's not possible for it to return to state $1$ or $2$ any more.} \end{center} \end{figure}
\item \textbf{Irreducibility}: A closed subset $S\subseteq\mathcal{X}$ is called irreducible if every state of $S$ is reachable from every other state of $S$.
\item \textbf{Recurrence}: Let's define the random variable $T_{i,i}$ as the time the system takes to return to state $i$. Let's also define the probability $\rho_i(t) \triangleq P\left( T_{i,i}<t\right) $. Considering the limit for $t\rightarrow\infty$ of $\rho_i(t)$, it's possible to know whether the system will ever return to state $i$.
$$ \rho_i\triangleq\lim_{t\rightarrow\infty}{\rho_i(t)} \begin{cases} =1 & \textrm{then the system will return (for }t\rightarrow\infty\textrm{) to the state }i\textrm{, which is said to be \emph{recurrent}}\\ <1 & \textrm{then the state }i\textrm{ is \emph{transient}}\\ \end{cases} $$
\noindent Starting from the assumption that the state $i$ is recurrent, it's possible to make a further classification. First of all, it's important to remember that $\rho_i(t)$ is the CDF of $T_{i,i}$, so it's also possible to define the PDF $f_i(t)$ as the derivative of $\rho_i(t)$.
\noindent Now it's possible to compute the expected value $M_i$ of $T_{i,i}$, as follows:
$$ M_i\triangleq E\left[T_{i,i}\right]=\int_{0}^{+\infty}{t\cdot f_i(t)dt} \begin{cases} =\infty & \textrm{then the integral doesn't converge}\\ &\textrm{and the state }i\textrm{ is said `null recurrent';}\\ <\infty & \textrm{then the integral converges and the}\\ & \textrm{state }i\textrm{ is said `positive recurrent'.}\\ \end{cases} $$
In real applications the behaviours of null recurrent and transient states are the same, so the positive recurrent states are the only ones to which the system actually returns.
\end{itemize}
From these definitions it's possible to state two theorems and an important corollary:
\begin{enumerate}
\item If $i$ is a positive recurrent state and $j$ is reachable from $i$, then $j$ is positive recurrent.
\item If $S$ is a closed, irreducible and finite subset of $\mathcal{X}$, then all the states in $S$ are positive recurrent.
\item [2.1.] \underline{\emph{An irreducible and finite Markov Chain has only positive recurrent states.}}
\end{enumerate}
\paragraph{Steady State Analysis} Let's now define the stationary probability vector:
$$ \Pi\triangleq \left[ \begin{matrix} &\Pi_1,& \Pi_2,& \Pi_3,& \dots& \end{matrix} \right] \hspace{10pt},\hspace{10pt} \Pi_i\triangleq \lim_{t\rightarrow\infty}{\Pi_i(t)} $$
The very definition of the stationary probability vector brings some problems that need to be solved before actually working with it, such as:
\begin{itemize}
\item Existence of the limit.
\item Conditions for the independence from the initial state $\Pi_0$.
\item Consistency of the probability vector, as the limit might exist but the probabilities might not sum up to $1$.
\end{itemize}
A theorem (whose proof is not provided in these notes) states that for Continuous Time Homogeneous Markov Chains these problems, under precise conditions, can all be easily solved.
\newpage
\noindent \textbf{Theorem}:
\noindent For a CTHMC which is irreducible and with all positive recurrent states, the limits
$$ \Pi_i= \lim_{t\rightarrow\infty}{\Pi_i(t)} $$
exist, with $\Pi_i>0,\hspace{4pt}\forall i\in\mathcal{X}$, and they are all independent of $\Pi_0$. Moreover, the vector $\Pi$ can be computed by solving the system of linear equations:
$$ \begin{matrix} \begin{cases} \Pi Q=0\\ \underset{i\in\mathcal{X}}{\sum{}}{\Pi_i=1} \end{cases} \end{matrix} $$
\noindent From this last theorem, keeping in mind also corollary $\#2.1$ of the previous paragraph, the following corollary can be obtained:
\bigskip
\noindent \textbf{Corollary}:
\noindent The previous theorem holds for irreducible and finite CTHMC.
\bigskip
\noindent In both the theorem and the corollary the vector $\Pi$ can be found by solving the following system:
$$ \begin{matrix} \begin{cases} \Pi Q=0 & n \textrm{ equations}\\ \underset{i\in\mathcal{X}}{\sum{}}{\Pi_i=1} & 1 \textrm{ equation} \end{cases} \end{matrix} $$
Where $n$ is the number of states (the dimension of $Q$). This means that there is a redundant equation among those of $\Pi Q=0$, due to the fact that $Q$ doesn't have full rank, since it has an eigenvalue $\lambda =0$. So, the constraint $\sum_{i\in\mathcal{X}}{\Pi_i=1}$ ensures both the consistency of the probability vector and the existence of a unique solution. As a final clarification about the equations in the system, the set of equations given by $\Pi Q=0$ comes from the equation already derived for the transient state:
$$ \frac{d\Pi(t)}{dt}=\Pi(t)Q $$
Since for $t\rightarrow\infty$ the state probabilities are expected to converge to constant values, their derivatives are expected to converge to zero, so:
$$ \frac{d\Pi(t)}{dt}=\Pi(t)Q \hspace{10pt} \overset{t\rightarrow\infty}{\Rightarrow}\hspace{10pt} \Pi Q=0 $$
\newpage
\paragraph{Equivalent CTHMC} A Stochastic Timed Automaton with Poisson Clock Structure $(\mathcal{E},\mathcal{X},\Gamma,\transp,\transp_0,F)$ is `stochastically equivalent' to a CTHMC $(\mathcal{X},Q,\Pi_0)$ which has:
\begin{itemize}
\item the same distributions for the state holding times:
$$ V_i \sim Exp\left(\frac{1}{-q_{i,i}}\right) $$
\item the same state transition probabilities:
$$ p_{i,j}=\frac{q_{i,j}}{-q_{i,i}} $$
\end{itemize}
So, in order to find the equivalent Markov Chain for a Stochastic Timed Automaton with Poisson Clock Structure it's enough to:
\begin{itemize}
\item Compute the expected value of the state holding time for every state, and compute the $q_{i,i}$ as:
$$ q_{i,i}=-\frac{1}{E[V(i)]}=-\sum_{e\in\Gamma(i)}{\lambda_e\left[1-\transp(i \mid i,e)\right]} $$
\item Compute the transition probabilities $p_{i,j}$ in order to find the corresponding $q_{i,j}$ as:
$$ q_{i,j}=-q_{i,i}p_{i,j}=\sum_{e\in\Gamma(i)}{\lambda_e \transp(j \mid i,e)} $$
\end{itemize}
Where $\lambda_e$ is the rate of the exponential distribution of the lifetimes of event $e$ (all the events' lifetime distributions being exponential).
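\bigskip
\noindent Putting the last few results together, here is a minimal sketch of the steady state computation (the $3\times 3$ matrix $Q$ is a toy example; Python with NumPy and SciPy is assumed). The redundant equation in $\Pi Q = 0$ is replaced by the normalisation constraint:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  2.0, -4.0]])   # toy transition rate matrix
n = Q.shape[0]

# Pi Q = 0 is equivalent to Q^T Pi^T = 0; replace one (redundant) equation
# with sum_i Pi_i = 1 to get a uniquely solvable system.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
Pi = np.linalg.solve(A, b)
print(Pi)                            # stationary probability vector

# Consistency check: the transient solution Pi(t) = Pi(0) e^{Qt} converges to Pi.
Pi0 = np.array([1.0, 0.0, 0.0])
print(Pi0 @ expm(Q * 50.0))
\end{verbatim}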
A general representation of a queueing system is shown in Figure \ref{fig:QueueingSys}:
\begin{figure}[H]
\centering
\scalebox{0.9}{
\begin{tikzpicture}
% draw the rectangular shape with vertical lines
\node[rectangle split, rectangle split parts=4, draw, rectangle split horizontal,text height=1cm,text depth=0.5cm,inner ysep=0pt] (Q) {Queue};
% nodes
\node[circle,minimum size=1pt] (h0) [right =30pt of Q] {};
\node[draw,circle,minimum size=1.5cm] (s1) [label=above:{Server 1}] [above right =3cm of h0]{$S1$};
\node[draw,circle,minimum size=1.5cm] (s2) [label=above:{Server 2}] [below right =3cm of h0]{$S2$};
\node[circle,minimum size=1pt] (h1) [below right =3cm of s1] {};
% the arrows and labels
\draw[-] (Q.east) -- +(30pt,0) node[right] {};
\draw[->] (h0.west) -- +(2.20cm,2.75cm) node[left] {};
\draw[->] (h0.west) -- +(2.20cm,-2.75cm) node[left] {};
\draw[-] (s2.east) -- +(2.20cm,2.80cm) node[left] {};
\draw[-] (s1.east) -- +(2.20cm,-2.80cm) node[left] {};
\draw[->] (h1.east) -- +(30pt,0) node[left] {};
\end{tikzpicture}}
\caption{Generic queueing system representation.}
\label{fig:QueueingSys}
\end{figure}
In order to define a queueing system, some parameters must be specified:
\begin{itemize}
\item \textbf{Structural Parameters}:
\begin{itemize}
\item Number of servers.
\item Capacity of the queue.
\end{itemize}
\item \textbf{Operating Policies}:
\begin{itemize}
\item Number of accepted customers and the kind of service they need (type).
\item Scheduling Policy of the queue (FIFO, Round-Robin, ...).
\item Conditions to accept new customers in the system.
\end{itemize}
\item \textbf{Distributions of interarrival and service times}.
\end{itemize}
The choices made in the definition phase of a queueing system will affect its behaviour, especially the `\textit{effective production rate}'. When producing any kind of product (or providing any kind of service), this is done expecting a certain demand for that product (or service). The queueing system must be designed in order to meet the demand; in particular the effective production rate $\mu_{eff}$ must be greater than or equal to the `\textit{demand rate}' $d$:
$$
\mu_{eff}\geq d
$$
\newpage
\paragraph{Kendall's Notation}
It describes queueing systems where there exists only one kind of customer, the scheduling policy is FIFO and a customer is rejected when the queue is full. The notation is composed of the following set of parameters:
\begin{center}
\textbf{A / B / m / K / n / D}
\end{center}
\noindent Where:
\begin{itemize}
\item \textbf{A}: is the distribution of the interarrival times.
\item \textbf{B}: is the distribution of the service times.
\end{itemize}
The possible values for $A$ and $B$ are:
$$
\begin{matrix}
M & : & \textrm{Exponential distribution (\underline{Memoryless})} \cr
U & : & \textrm{Uniform distribution} \cr
G & : & \textrm{Generic distribution} \cr
D & : & \textrm{Deterministic distribution}
\end{matrix}
$$
\begin{itemize}
\item \textbf{m}: is the number of servers.
\item \textbf{K}: is the capacity of the system (therefore the capacity of the queue is $K-m$).
\item \textbf{n}: is the size of the population from which the customers come.
\item \textbf{D}: is the scheduling policy.
\end{itemize}
When one or more parameters are not specified it means that the model of the queueing system operates without taking into account the information deriving from them. In fact, in these notes, only systems that can be represented through the notation $A/B/m/K$ are treated.
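\bigskip
\noindent For instance, $M/M/1$ denotes a system with exponentially distributed interarrival and service times and a single server (with no capacity limit), while $M/D/2/10$ denotes exponential interarrival times, deterministic service times, $2$ servers and a total capacity of $10$ customers (hence a queue of capacity $8$).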
\paragraph{Queueing systems in steady state}
Before proceeding with the behaviour of queueing systems in steady state, it's necessary to provide some definitions. Let's consider the generic $k$-th customer entering a queueing system. The time spent by the customer in the system doesn't necessarily correspond to the time the system took to process the customer's request, due to the presence of the queue. So, denoting by $Z_k$ and $W_k$ respectively the `service time' and the `waiting time' of the $k$-th customer in the system, the overall time spent in the system is called `system time' and it's defined as:
$$
S_k \triangleq W_k+Z_k
$$
Generally, the distribution of the system time changes according to the number $k$ of customers accepted in the system up to that moment, but if, after a certain number of accepted customers, the system enters a steady state, then the distribution of the system times will be the same for all the customers.
\newpage
\noindent \textbf{System time in steady state: } If there exists a random variable $S$ such that:
\begin{equation*}
P(S \leq t) = \lim_{k \rightarrow \infty}P(S_k \leq t), \quad \forall t
\end{equation*}
Then the random variable $S$ describes the system time of a generic customer while the system is in steady state.
\bigskip
\noindent \textbf{Average number of customers in steady state: } If there exists a random variable $X$ such that:
\begin{equation*}
P(X = i) = \lim_{t\rightarrow \infty} P(X(t) = i), \quad \forall i
\end{equation*}
Then the random variable $X$ describes the number of customers in the system when this is in steady state. Moreover, always in steady state, since the number of customers in the system doesn't depend on time any more, then also its expected value $E[X(t)]$ doesn't:
\begin{equation*}
E[X(t)] = \sum_{i} i \cdot P(X(t) = i) \; \overset{t\rightarrow \infty}{=} \; \sum_{i} i \cdot P(X = i)
\end{equation*}
From the rightmost result it follows that $E\left[X(t)\right]=E\left[X\right]$. In particular, through these definitions it's possible to find a necessary condition for the system to be in steady state.
\bigskip\noindent \textbf{Effective rates and necessary condition for steady state: } Denoting by $\mu_{eff}$ the effective production rate and by $\lambda_{eff}$ the effective arrival rate (rate of arrivals accepted in the system), at steady state the following condition must hold:
$$
\mu_{eff}=\lambda_{eff}
$$
This condition holds also for queueing networks, which are queueing systems composed of other queueing sub-systems. So, considering a system consisting of two stations in series, each of them composed of a machine preceded by a buffer, if the whole system reaches steady state the following conditions will hold:
$$
\mu_{eff,1}=\lambda_{eff,1} \quad , \quad \mu_{eff,2}=\lambda_{eff,2}
$$
When studying the steady state of a queueing system, an important parameter to take into account is the `utilization' $U$, which is the fraction of time (over all the system activity time) in which the machine is actively working. If $U=1$ the machine is always working; of course, if $U=1$ holds for an observation time $t\rightarrow \infty$, then the machine never stops working.
\bigskip\noindent The utilization value is also used to obtain another parameter called \emph{throughput}, which is defined as the mean number of requests served during a time unit:
$$
throughput\hspace{5pt}=\hspace{5pt}U\cdot m\cdot \mu_{eff}
$$
In particular, on a single server system ($m = 1$) with $U=1$ the throughput is equal to $\mu_{eff}$.
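\bigskip
\noindent As a quick numerical illustration of the last formula (with made-up numbers), consider a single server system ($m=1$) with $\mu_{eff}=10$ requests per hour and utilization $U=0.8$: the resulting throughput is $0.8\cdot 1\cdot 10 = 8$ requests per hour.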
\newpage
\paragraph{Little's Law}
Let's consider a queueing network at steady state and a curve $\Sigma$, closed around any fraction of the network (the whole network is also an acceptable fraction). Let's consider only the portion of network within $\Sigma$ and let's define:
\begin{itemize}
\item $\lambda_{\Sigma}$ as the arrival rate for the arrivals which enter the curve $\Sigma$ (if $\Sigma$ surrounds the whole network $\lambda_{\Sigma}=\lambda_{eff}$).
\item $E[X_{\Sigma}]$ as the expected value of the number of customers within $\Sigma$ (if $\Sigma$ surrounds the whole network $E[X_{\Sigma}]=E[X]$).
\item $E[S_{\Sigma}]$ as the expected value of the time spent by a customer in $\Sigma$ (if $\Sigma$ surrounds the whole network $E[S_{\Sigma}]=E[S]$).
\end{itemize}
Little's Law states that:
\begin{equation*}
E[X_{\Sigma}]=\lambda_{\Sigma}\cdot E[S_{\Sigma}]
\end{equation*}
\bigskip
\noindent This law comes in particularly handy since $E[S_{\Sigma}]$ is often hard to compute directly, while the other two parameters are usually easy to obtain.
\paragraph{PASTA Property}
It's another important property of queueing systems (PASTA stands for `Poisson Arrivals See Time Averages'). Let's define:
\begin{itemize}
\item $A(t)$ as the occurrence of an arrival in the system at the time $t$.
\item $\alpha_n(t)$ as the posterior probability that the state at time $t$ is $n$, knowing that at time $t$ an arrival occurs:
$$
\alpha_n(t) \hspace{5pt}=\hspace{5pt} P(X(t)=n \hspace{4pt} | \hspace{4pt} A(t))
$$
\item $\Pi_n(t)$ as the prior probability that the state at time $t$ is $n$:
$$
\Pi_n(t) \hspace{5pt}=\hspace{5pt} P(X(t)=n)
$$
\end{itemize}
Generally, $\alpha_n(t)\neq \Pi_n(t)$, but the PASTA property states that:
\bigskip\noindent \emph{If the arrivals are generated by a Poisson process and the lifetimes for arrivals and service terminations are independent then:}
$$
\alpha_n(t) = \Pi_n(t) \quad,\quad \forall t,n
$$
Which means that it's possible to obtain, under these conditions, the posterior probability simply by computing the prior. Moreover, since the result of the PASTA property holds $\forall t$, it can also be used when the system is in steady state.
\newpage
\paragraph{Ergodicity}
It's a property of stochastic processes $X_h(t)$ in which the ensemble average (average over $h$) and the time average are the same. In particular, when the system is in steady state, its stochastic behaviour won't depend on time any more and therefore it can be considered ergodic. So, the only condition for ergodicity is that the system must be able to reach steady state.
\bigskip
\noindent The reason why ergodicity is such an interesting property is that it makes it possible to compute, when the system is in steady state, the fraction of time spent in a certain state as the prior probability of being in that state, and vice versa.
\end{document}
If $f$ is continuous on every triangle in $S$ and the integral of $f$ around every triangle in $S$ is zero, then $f$ is analytic on $S$.
/*
 * InnerMapTask.cpp
 *
 *  Created on: 25 Jul 2018
 *      Author: scsjd
 */

#include <fstream>
#include <sstream>
#include <iterator>
#include <chrono>
#include <string>
#include <vector>

#include <NTL/ZZ.h>
#include <NTL/ZZ_p.h>
#include <NTL/vec_ZZ.h>
#include <jsoncpp/json/json.h>

#include "InnerMapTask.h"
#include "HE1CiphertextCPU.h"

InnerMapTask::InnerMapTask(int numPerLine, const char* parametersPath, const char* inputPath){
    this->inputPath = inputPath;
    this->numPerLine = numPerLine;
    parseParameters(parametersPath);
    totalSumTime=0;
    numberAdditions=0;
    totalProductTime=0;
    numberMultiplications=0;
}

InnerMapTask::~InnerMapTask(){
}

void InnerMapTask::parseParameters(const char* parametersPath){
    std::ifstream ifs(parametersPath);
    if (ifs.is_open()){
        std::string json;
        getline(ifs,json);
        Json::Value root; // will contain the root value after parsing
        Json::Reader reader;
        bool parsingSuccessful = reader.parse(json,root);
        if (parsingSuccessful){
            modulus = NTL::conv<NTL::ZZ>(root["modulus"].asCString());
        }
    }
}

NTL::ZZ InnerMapTask::run(){
    std::ifstream ifs(inputPath);
    if (!ifs.is_open()){
        throw std::ios_base::failure("Could not open input file.");
    }
    std::string line;
    HE1CiphertextCPU::setModulus(modulus);
    NTL::ZZ zero; // a default-constructed ZZ is 0
    HE1CiphertextCPU sum(zero);
    while(getline(ifs,line)){
        NTL::ZZ one;
        set(one); // NTL procedural interface: set(x) assigns x = 1
        HE1CiphertextCPU prod(one);
        std::istringstream iss(line);
        std::vector<std::string> words((std::istream_iterator<std::string>(iss)),std::istream_iterator<std::string>());
        // Multiply together all ciphertexts on the line, timing each product.
        for(std::size_t i = 0; i < words.size(); i++){
            NTL::ZZ z = NTL::conv<NTL::ZZ>(words[i].c_str());
            HE1CiphertextCPU tmp(z);
            auto start = std::chrono::high_resolution_clock::now();
            prod*=tmp;
            auto finish = std::chrono::high_resolution_clock::now();
            totalProductTime += std::chrono::duration_cast<std::chrono::nanoseconds>(finish-start).count();
            numberMultiplications++;
        }
        // Accumulate the per-line product into the running sum, timing each addition.
        auto start = std::chrono::high_resolution_clock::now();
        sum+=prod;
        auto finish = std::chrono::high_resolution_clock::now();
        totalSumTime += std::chrono::duration_cast<std::chrono::nanoseconds>(finish-start).count();
        numberAdditions++;
    }
    ifs.close();
    return NTL::rep(sum.get_ciphertext());
}
# Load relevant libraries
library(RedditExtractoR)
library(stringr)

# Note: the files are fetched from the raw.githubusercontent.com mirror of the
# repository so that download.file retrieves the CSV content rather than
# GitHub's HTML page for the file.
if (file.exists("glassine_urls.csv")) {
        glassine_urls <- read.csv("glassine_urls.csv", stringsAsFactors = FALSE)
} else {
        download.file("https://raw.githubusercontent.com/areeves87/glassine-area-codes/master/glassine_urls.csv",
                      "glassine_urls.csv", mode = "wb")
        glassine_urls <- read.csv("glassine_urls.csv", stringsAsFactors = FALSE)
}

# ##Uncomment to re-perform reddit scrape
# glassine_urls <- reddit_urls(subreddit = "glassine", page_threshold = 40)

if (file.exists("area_codes_by_state.csv")) {
        area_codes_by_state <- read.csv("area_codes_by_state.csv", stringsAsFactors = FALSE)
} else {
        download.file("https://raw.githubusercontent.com/areeves87/glassine-area-codes/master/area_codes_by_state.csv",
                      "area_codes_by_state.csv", mode = "wb")
        area_codes_by_state <- read.csv("area_codes_by_state.csv", stringsAsFactors = FALSE)
}

# grab three-digit strings from thread titles; these are candidate area codes
area_codes <- str_extract(glassine_urls$title, "[0-9]{3}")

# look up the state associated with each candidate area code
area_code_row <- match(area_codes, area_codes_by_state$Area.code)
state_codes <- area_codes_by_state[area_code_row, 3]

# #uncomment to create an "area code | state code" mentions table
# df.counts <- cbind(area_codes, state_codes)
# write.csv(df.counts, "counts.csv", row.names = FALSE)
{- This second-order term syntax was created from the following second-order syntax description: syntax Monad | M type T : 1-ary term ret : α -> T α bind : T α α.(T β) -> T β | _>>=_ r10 theory (LU) a : α b : α.(T β) |> bind (ret(a), x. b[x]) = b[a] (RU) t : T α |> bind (t, x. ret(x)) = t (AS) t : T α b : α.(T β) c : β.(T γ) |> bind (bind (t, x.b[x]), y.c[y]) = bind (t, x. bind (b[x], y.c[y])) -} module Monad.Syntax where open import SOAS.Common open import SOAS.Context open import SOAS.Variable open import SOAS.Families.Core open import SOAS.Construction.Structure open import SOAS.ContextMaps.Inductive open import SOAS.Metatheory.Syntax open import Monad.Signature private variable Γ Δ Π : Ctx α β : MT 𝔛 : Familyₛ -- Inductive term declaration module M:Terms (𝔛 : Familyₛ) where data M : Familyₛ where var : ℐ ⇾̣ M mvar : 𝔛 α Π → Sub M Π Γ → M α Γ ret : M α Γ → M (T α) Γ _>>=_ : M (T α) Γ → M (T β) (α ∙ Γ) → M (T β) Γ infixr 10 _>>=_ open import SOAS.Metatheory.MetaAlgebra ⅀F 𝔛 Mᵃ : MetaAlg M Mᵃ = record { 𝑎𝑙𝑔 = λ where (retₒ ⋮ a) → ret a (bindₒ ⋮ a , b) → _>>=_ a b ; 𝑣𝑎𝑟 = var ; 𝑚𝑣𝑎𝑟 = λ 𝔪 mε → mvar 𝔪 (tabulate mε) } module Mᵃ = MetaAlg Mᵃ module _ {𝒜 : Familyₛ}(𝒜ᵃ : MetaAlg 𝒜) where open MetaAlg 𝒜ᵃ 𝕤𝕖𝕞 : M ⇾̣ 𝒜 𝕊 : Sub M Π Γ → Π ~[ 𝒜 ]↝ Γ 𝕊 (t ◂ σ) new = 𝕤𝕖𝕞 t 𝕊 (t ◂ σ) (old v) = 𝕊 σ v 𝕤𝕖𝕞 (mvar 𝔪 mε) = 𝑚𝑣𝑎𝑟 𝔪 (𝕊 mε) 𝕤𝕖𝕞 (var v) = 𝑣𝑎𝑟 v 𝕤𝕖𝕞 (ret a) = 𝑎𝑙𝑔 (retₒ ⋮ 𝕤𝕖𝕞 a) 𝕤𝕖𝕞 (_>>=_ a b) = 𝑎𝑙𝑔 (bindₒ ⋮ 𝕤𝕖𝕞 a , 𝕤𝕖𝕞 b) 𝕤𝕖𝕞ᵃ⇒ : MetaAlg⇒ Mᵃ 𝒜ᵃ 𝕤𝕖𝕞 𝕤𝕖𝕞ᵃ⇒ = record { ⟨𝑎𝑙𝑔⟩ = λ{ {t = t} → ⟨𝑎𝑙𝑔⟩ t } ; ⟨𝑣𝑎𝑟⟩ = refl ; ⟨𝑚𝑣𝑎𝑟⟩ = λ{ {𝔪 = 𝔪}{mε} → cong (𝑚𝑣𝑎𝑟 𝔪) (dext (𝕊-tab mε)) } } where open ≡-Reasoning ⟨𝑎𝑙𝑔⟩ : (t : ⅀ M α Γ) → 𝕤𝕖𝕞 (Mᵃ.𝑎𝑙𝑔 t) ≡ 𝑎𝑙𝑔 (⅀₁ 𝕤𝕖𝕞 t) ⟨𝑎𝑙𝑔⟩ (retₒ ⋮ _) = refl ⟨𝑎𝑙𝑔⟩ (bindₒ ⋮ _) = refl 𝕊-tab : (mε : Π ~[ M ]↝ Γ)(v : ℐ α Π) → 𝕊 (tabulate mε) v ≡ 𝕤𝕖𝕞 (mε v) 𝕊-tab mε new = refl 𝕊-tab mε (old v) = 𝕊-tab (mε ∘ old) v module _ (g : M ⇾̣ 𝒜)(gᵃ⇒ : MetaAlg⇒ Mᵃ 𝒜ᵃ g) where open MetaAlg⇒ gᵃ⇒ 𝕤𝕖𝕞! : (t : M α Γ) → 𝕤𝕖𝕞 t ≡ g t 𝕊-ix : (mε : Sub M Π Γ)(v : ℐ α Π) → 𝕊 mε v ≡ g (index mε v) 𝕊-ix (x ◂ mε) new = 𝕤𝕖𝕞! x 𝕊-ix (x ◂ mε) (old v) = 𝕊-ix mε v 𝕤𝕖𝕞! (mvar 𝔪 mε) rewrite cong (𝑚𝑣𝑎𝑟 𝔪) (dext (𝕊-ix mε)) = trans (sym ⟨𝑚𝑣𝑎𝑟⟩) (cong (g ∘ mvar 𝔪) (tab∘ix≈id mε)) 𝕤𝕖𝕞! (var v) = sym ⟨𝑣𝑎𝑟⟩ 𝕤𝕖𝕞! (ret a) rewrite 𝕤𝕖𝕞! a = sym ⟨𝑎𝑙𝑔⟩ 𝕤𝕖𝕞! (_>>=_ a b) rewrite 𝕤𝕖𝕞! a | 𝕤𝕖𝕞! b = sym ⟨𝑎𝑙𝑔⟩ -- Syntax instance for the signature M:Syn : Syntax M:Syn = record { ⅀F = ⅀F ; ⅀:CS = ⅀:CompatStr ; mvarᵢ = M:Terms.mvar ; 𝕋:Init = λ 𝔛 → let open M:Terms 𝔛 in record { ⊥ = M ⋉ Mᵃ ; ⊥-is-initial = record { ! = λ{ {𝒜 ⋉ 𝒜ᵃ} → 𝕤𝕖𝕞 𝒜ᵃ ⋉ 𝕤𝕖𝕞ᵃ⇒ 𝒜ᵃ } ; !-unique = λ{ {𝒜 ⋉ 𝒜ᵃ} (f ⋉ fᵃ⇒) {x = t} → 𝕤𝕖𝕞! 𝒜ᵃ f fᵃ⇒ t } } } } -- Instantiation of the syntax and metatheory open Syntax M:Syn public open M:Terms public open import SOAS.Families.Build public open import SOAS.Syntax.Shorthands Mᵃ public open import SOAS.Metatheory M:Syn public
read("est_pi.mpl"): printf("\nDoing calculation using loop:\n\n"): est_pi(n): printf("\nDoing calculation using sum:\n\n"): native_est_pi(n):
State Before: R : Type u_1 inst✝ : CommSemiring R f : ArithmeticFunction R hf : IsMultiplicative f k : ℕ ⊢ IsMultiplicative (ArithmeticFunction.ppow f k) State After: case zero R : Type u_1 inst✝ : CommSemiring R f : ArithmeticFunction R hf : IsMultiplicative f ⊢ IsMultiplicative (ArithmeticFunction.ppow f Nat.zero) case succ R : Type u_1 inst✝ : CommSemiring R f : ArithmeticFunction R hf : IsMultiplicative f k : ℕ hi : IsMultiplicative (ArithmeticFunction.ppow f k) ⊢ IsMultiplicative (ArithmeticFunction.ppow f (succ k)) Tactic: induction' k with k hi State Before: case zero R : Type u_1 inst✝ : CommSemiring R f : ArithmeticFunction R hf : IsMultiplicative f ⊢ IsMultiplicative (ArithmeticFunction.ppow f Nat.zero) State After: no goals Tactic: exact isMultiplicative_zeta.nat_cast State Before: case succ R : Type u_1 inst✝ : CommSemiring R f : ArithmeticFunction R hf : IsMultiplicative f k : ℕ hi : IsMultiplicative (ArithmeticFunction.ppow f k) ⊢ IsMultiplicative (ArithmeticFunction.ppow f (succ k)) State After: case succ R : Type u_1 inst✝ : CommSemiring R f : ArithmeticFunction R hf : IsMultiplicative f k : ℕ hi : IsMultiplicative (ArithmeticFunction.ppow f k) ⊢ IsMultiplicative (ArithmeticFunction.pmul f (ArithmeticFunction.ppow f k)) Tactic: rw [ppow_succ] State Before: case succ R : Type u_1 inst✝ : CommSemiring R f : ArithmeticFunction R hf : IsMultiplicative f k : ℕ hi : IsMultiplicative (ArithmeticFunction.ppow f k) ⊢ IsMultiplicative (ArithmeticFunction.pmul f (ArithmeticFunction.ppow f k)) State After: no goals Tactic: apply hf.pmul hi
Require Import compcert.lib.Coqlib compcert.lib.Zbits. Require Import compcert.common.AST compcert.lib.Integers. Require Import compcert.common.Values compcert.common.Memory. Require Import compcert.common.Globalenvs compcert.common.Events. Inductive int_or_ptr: val -> mem -> val -> Prop := | int_or_ptr_int: forall n m, Int.and n Int.one = Int.one -> int_or_ptr (Vint n) m Vfalse | int_or_ptr_ptr: forall b ofs m, Mem.range_perm m b (Ptrofs.unsigned ofs) (Ptrofs.unsigned ofs + 2) Cur Nonempty -> Ptrofs.and ofs Ptrofs.one = Ptrofs.zero -> int_or_ptr (Vptr b ofs) m Vtrue. Remark ptrofs_2_aligned: forall ofs, Ptrofs.and ofs Ptrofs.one = Ptrofs.zero <-> (2 | Ptrofs.unsigned ofs). Proof. assert (A: forall n, (2 | n) <-> Z.Even n). { intros; split; intros [k E]; rewrite E; exists k; ring. } assert (B: forall n, Z.Even n <-> Z.testbit n 0 = false). { intros. rewrite Z.bit0_odd, <- Z.negb_even, <- Z.even_spec, negb_false_iff. tauto. } intros; split; intros. - apply A. apply B. change false with (Ptrofs.testbit Ptrofs.zero 0). rewrite <- H. rewrite Ptrofs.bits_and, Ptrofs.bits_one. simpl. rewrite andb_true_r; auto. generalize Ptrofs.wordsize_pos; omega. - apply A in H. apply B in H. Ptrofs.bit_solve. rewrite Ptrofs.bits_one. destruct (zeq i 0); simpl. rewrite andb_true_r. subst i. exact H. apply andb_false_r. Qed. Lemma int_or_ptr_inject: forall v m res f v' m', int_or_ptr v m res -> Val.inject f v v' -> Mem.inject f m m' -> exists res', int_or_ptr v' m' res' /\ Val.inject f res res'. Proof. destruct 1; intros VI MI; inv VI. - (* integer *) exists Vfalse; split. + constructor; auto. + constructor. - (* pointer *) assert (AL: (2 | delta)). { change 2 with (align_chunk Mint16unsigned). clear H0. eauto using Mem.mi_align, Mem.mi_inj, Mem.range_perm_max. } assert (OF: Ptrofs.unsigned (Ptrofs.add ofs (Ptrofs.repr delta)) = Ptrofs.unsigned ofs + delta). { eapply Mem.address_inject; eauto. apply H. omega. } exists Vtrue; split. + constructor. * rewrite OF. replace (Ptrofs.unsigned ofs + delta + 2) with ((Ptrofs.unsigned ofs + 2) + delta) by omega. eapply Mem.range_perm_inject; eauto. * apply ptrofs_2_aligned in H0. apply ptrofs_2_aligned. rewrite OF. apply Z.divide_add_r; auto. + constructor. Qed.
theory Update_Root_Ref imports Read_Write_Ref begin lemma update_root_non_det_det_refine: "\<lbrakk> update_TTBR0 r (s::'a non_det_tlb_state_scheme) = ((), s') ; update_TTBR0 r (t::'b det_tlb_state_scheme) = ((), t'); tlb_rel_det (typ_non_det_tlb s) (typ_det_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_det (typ_non_det_tlb s') (typ_det_tlb t')" apply (clarsimp simp: update_TTBR0_non_det_tlb_state_ext_def update_TTBR0_det_tlb_state_ext_def) apply (clarsimp simp: tlb_rel_det_def saturated_def) by (cases s, cases t , clarsimp simp: state.defs ) lemma update_root_det_sat_refine: "\<lbrakk> update_TTBR0 r (s::'a det_tlb_state_scheme) = ((), s') ; update_TTBR0 r (t::'b sat_tlb_state_scheme) = ((), t'); tlb_rel_sat (typ_det_tlb s) (typ_sat_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_sat (typ_det_tlb s') (typ_sat_tlb t')" apply (clarsimp simp: update_TTBR0_det_tlb_state_ext_def update_TTBR0_sat_tlb_state_ext_def) apply (clarsimp simp: tlb_rel_sat_def saturated_def) apply (cases s, cases t , clarsimp simp: state.defs) by blast lemma update_root_sat_incon_refine: "\<lbrakk> update_TTBR0 r (s::'a sat_tlb_state_scheme) = ((), s') ; update_TTBR0 r (t::'b set_tlb_state_scheme) = ((), t'); tlb_rel_abs (typ_sat_tlb s) (typ_set_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_abs (typ_sat_tlb s') (typ_set_tlb t')" apply (clarsimp simp: update_TTBR0_sat_tlb_state_ext_def update_TTBR0_set_tlb_state_ext_def tlb_rel_abs_def) apply (subgoal_tac " TTBR0 t = TTBR0 s \<and> MEM t = MEM s") prefer 2 apply (clarsimp simp: typ_sat_tlb_def state.defs) apply (rule conjI) apply (clarsimp simp: typ_sat_tlb_def state.defs) apply (clarsimp simp: incon_addrs_def) apply (rule conjI) apply (clarsimp simp: inconsistent_vaddrs_def incoherrent_vaddrs_def) apply (drule union_incon_cases) apply (erule disjE, clarsimp simp: lookup_range_pt_walk_not_incon) apply (erule disjE, clarsimp) apply (clarsimp simp: incon_comp_def ptable_comp_def Let_def) apply (erule disjE) apply blast apply (erule disjE) apply (subgoal_tac "the (pt_walk () (MEM s) r xc) = the (pt_walk () (MEM s) (TTBR0 s) x)") apply (case_tac "\<not>is_fault (pt_walk () (MEM s) (TTBR0 s) x)") apply clarsimp using saturatd_lookup_hit_no_fault apply fastforce apply (subgoal_tac "the (pt_walk () (MEM s) r xc) = the (pt_walk () (MEM s) r x)") apply (force simp: is_fault_def) apply (frule lookup_range_fault_pt_walk) apply (drule_tac x = x in bspec; clarsimp simp: lookup_hit_entry_range) apply (subgoal_tac "is_fault (pt_walk () (MEM s) (TTBR0 s) x)") apply blast apply (erule disjE) apply (force simp: is_fault_def) apply clarsimp apply (subgoal_tac "the (pt_walk () (MEM s) r xc) = the (pt_walk () (MEM s) r x)") prefer 2 using lookup_hit_entry_range_asid_tags va_entry_set_pt_palk_same apply blast apply clarsimp apply (subgoal_tac "\<not> is_fault (pt_walk () (MEM s) (TTBR0 s) x)") prefer 2 apply force apply (subgoal_tac " xa = the (pt_walk () (MEM s) r x)") apply clarsimp apply (subgoal_tac "lookup' (sat_tlb s \<union> the ` {e \<in> range (pt_walk () (MEM s) (TTBR0 s)). 
\<not> is_fault e}) x = Hit xa") prefer 2 apply (clarsimp simp: saturated_def) apply (simp add: sup_absorb1) apply (drule lookup_hit_union_cases') apply (erule disjE) apply clarsimp using saturatd_lookup_hit_no_fault apply fastforce apply (erule disjE) apply clarsimp apply clarsimp using saturatd_lookup_hit_no_fault apply fastforce using lookup_range_pt_walk_not_incon apply blast apply (rule conjI) apply clarsimp apply (clarsimp simp: incoherrent_vaddrs_def inconsistent_vaddrs_def) apply (simp only: incon_comp_def ptable_comp_def) apply clarsimp apply (drule lookup_hit_union_cases') apply (clarsimp simp: lookup_miss_is_fault_intro subset_eq) by (clarsimp simp: saturated_def) lemma flush_tlb_non_det_det_refine: "\<lbrakk> flush FlushTLB (s::'a non_det_tlb_state_scheme) = ((), s') ; flush FlushTLB (t::'b det_tlb_state_scheme) = ((), t'); tlb_rel_det (typ_non_det_tlb s) (typ_det_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_det (typ_non_det_tlb s') (typ_det_tlb t')" apply (clarsimp simp: flush_non_det_tlb_state_ext_def flush_det_tlb_state_ext_def) apply (clarsimp simp: tlb_rel_det_def) by (cases s, cases t , clarsimp simp: state.defs) lemma flush_varange_non_det_det_refine: "\<lbrakk>flush (Flushvarange vset) (s::'a non_det_tlb_state_scheme) = ((), s') ; flush (Flushvarange vset) (t::'b det_tlb_state_scheme) = ((), t'); tlb_rel_det (typ_non_det_tlb s) (typ_det_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_det (typ_non_det_tlb s') (typ_det_tlb t')" apply (clarsimp simp: flush_non_det_tlb_state_ext_def flush_det_tlb_state_ext_def flush_tlb_vset_def) apply (clarsimp simp: tlb_rel_det_def) apply (cases s, cases t , clarsimp simp: state.defs) by blast lemma flush_non_det_det_refine: "\<lbrakk> flush f (s::'a non_det_tlb_state_scheme) = ((), s') ; flush f (t::'b det_tlb_state_scheme) = ((), t'); tlb_rel_det (typ_non_det_tlb s) (typ_det_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_det (typ_non_det_tlb s') (typ_det_tlb t')" by (cases f; clarsimp simp: flush_tlb_non_det_det_refine flush_varange_non_det_det_refine ) lemma flush_tlb_non_det_sat_refine: "\<lbrakk> flush FlushTLB (s::'a det_tlb_state_scheme) = ((), s') ; flush FlushTLB (t::'b sat_tlb_state_scheme) = ((), t'); tlb_rel_sat (typ_det_tlb s) (typ_sat_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_sat (typ_det_tlb s') (typ_sat_tlb t')" apply (clarsimp simp: flush_det_tlb_state_ext_def flush_sat_tlb_state_ext_def) apply (clarsimp simp: tlb_rel_sat_def saturated_def) apply (cases s, cases t , clarsimp simp: state.defs) done lemma flush_varange_non_det_sat_refine: "\<lbrakk> flush (Flushvarange vset) (s::'a det_tlb_state_scheme) = ((), s') ; flush (Flushvarange vset) (t::'b sat_tlb_state_scheme) = ((), t'); tlb_rel_sat (typ_det_tlb s) (typ_sat_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_sat (typ_det_tlb s') (typ_sat_tlb t')" apply (clarsimp simp: flush_det_tlb_state_ext_def flush_sat_tlb_state_ext_def flush_tlb_vset_def) apply (clarsimp simp: tlb_rel_sat_def saturated_def) apply (cases s, cases t , clarsimp simp: state.defs ) by blast lemma flush_det_sat_refine: "\<lbrakk> flush f (s::'a det_tlb_state_scheme) = ((), s') ; flush f (t::'b sat_tlb_state_scheme) = ((), t'); tlb_rel_sat (typ_det_tlb s) (typ_sat_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_sat (typ_det_tlb s') (typ_sat_tlb t')" by (cases f; clarsimp simp: flush_tlb_non_det_sat_refine flush_varange_non_det_sat_refine ) lemma flush_tlb_sat_incon_refine: "\<lbrakk>flush FlushTLB (s::'a sat_tlb_state_scheme) = ((), s') ; flush FlushTLB (t::'b set_tlb_state_scheme) = ((), t'); tlb_rel_abs (typ_sat_tlb s) 
(typ_set_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_abs (typ_sat_tlb s') (typ_set_tlb t')" apply (clarsimp simp: flush_sat_tlb_state_ext_def flush_set_tlb_state_ext_def tlb_rel_abs_def) apply (rule conjI) apply (cases s, cases t, clarsimp simp: state.defs) apply (rule conjI) apply (clarsimp simp: incon_addrs_def) apply (rule conjI) apply (clarsimp simp: inconsistent_vaddrs_def lookup_range_pt_walk_not_incon) apply (clarsimp simp: incoherrent_vaddrs_def) apply (clarsimp simp: lookup_miss_is_fault_intro) by (clarsimp simp: saturated_def) lemma flush_varange_sat_incon_refine: "\<lbrakk>flush (Flushvarange vset) (s::'a sat_tlb_state_scheme) = ((), s') ; flush (Flushvarange vset) (t::'b set_tlb_state_scheme) = ((), t'); tlb_rel_abs (typ_sat_tlb s) (typ_set_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_abs (typ_sat_tlb s') (typ_set_tlb t')" apply (clarsimp simp: flush_sat_tlb_state_ext_def flush_set_tlb_state_ext_def tlb_rel_abs_def flush_tlb_vset_def) apply (rule conjI) apply (cases s, cases t, clarsimp simp: state.defs) apply (subgoal_tac "(sat_tlb s) = (sat_tlb s) \<union> the ` {e \<in> range (pt_walk () (MEM s) (TTBR0 s)). \<not> is_fault e} ") prefer 2 apply (drule sat_state_tlb, clarsimp simp: state.defs) apply (rule conjI) apply (clarsimp simp: incon_addrs_def) apply (rule conjI) apply (clarsimp simp: inconsistent_vaddrs_def) apply (drule union_incon_cases) apply (erule disjE) apply (clarsimp simp: lookup_range_pt_walk_not_incon) apply (erule disjE) apply clarsimp apply (rule conjI) prefer 2 using lookup_hit_entry_range apply force apply (drule lookup_hit_incon_minus, erule disjE) prefer 2 apply blast apply (subgoal_tac "the (pt_walk () (MEM s) (TTBR0 s) xc) = the (pt_walk () (MEM s) (TTBR0 s) x)") prefer 2 apply (frule lookup_range_fault_pt_walk) apply (drule_tac x = x and A = "range_of (the (pt_walk () (MEM s) (TTBR0 s) xc))" in bspec; simp add : lookup_hit_entry_range) apply (clarsimp simp: incoherrent_vaddrs_def) apply (frule_tac b = x and x = xa in saturatd_lookup_hit_no_fault, simp) prefer 2 apply force apply (smt lookup_hit_entry_range_asid_tags va_entry_set_pt_palk_same') apply (erule disjE) apply clarsimp apply (simp add: lookup_range_pt_walk_not_incon) apply (erule disjE) prefer 2 apply (erule disjE, clarsimp) using lookup_range_pt_walk_not_incon apply force apply (rule conjI) prefer 2 subgoal proof - fix x :: vaddr assume "lookup' ((sat_tlb s) - (\<Union>v\<in>vset. {e \<in> (sat_tlb s). v \<in> range_of e})) x = Incon \<and> lookup' (the ` {e \<in> range (pt_walk () (MEM s) (TTBR0 s)). \<not> is_fault e}) x = Miss" then have "\<exists>T. lookup' (T - (\<Union>a\<in>vset. {t::unit tlb_entry \<in> T. a \<in> range_of t})) x \<noteq> Miss" by (metis lookup_type.simps(3)) then show "x \<notin> vset" using lookup_not_miss_varange by blast qed using lookup_incon_minus mem_Collect_eq apply blast apply clarsimp apply (rule conjI) prefer 2 subgoal proof - fix x :: vaddr and xb :: vaddr assume "lookup' ((sat_tlb s) - (\<Union>v\<in>vset. {e \<in> (sat_tlb s). v \<in> range_of e})) x = Incon" then have "\<exists>T . lookup' (T - (\<Union>a\<in>vset. {t::unit tlb_entry \<in> T. a \<in> range_of t})) x \<noteq> Miss" by (metis (full_types) lookup_type.simps(3)) then show "x \<notin> vset" using lookup_not_miss_varange by blast qed using lookup_incon_minus apply force apply (clarsimp simp: incoherrent_vaddrs_def) apply (subgoal_tac "lookup' ((sat_tlb s) - (\<Union>v\<in>vset. {e \<in> (sat_tlb s). 
v \<in> range_of e})) x = Hit xa") prefer 2 apply (simp add: lookup_miss_is_fault_intro lookup_miss_union_equal) apply (drule lookup_hit_incon_minus, erule disjE) prefer 2 apply (clarsimp simp: inconsistent_vaddrs_def) apply (rule conjI) apply force subgoal proof - fix x :: vaddr and xa :: "unit tlb_entry" assume a1: "is_fault (pt_walk () (MEM s) (TTBR0 s) x)" assume "lookup' ((sat_tlb s) - (\<Union>v\<in>vset. {e \<in> (sat_tlb s). v \<in> range_of e}) \<union> the ` {e \<in> range (pt_walk () (MEM s) (TTBR0 s)). \<not> is_fault e}) x = Hit xa" then have "lookup' ((sat_tlb s) - (\<Union>a\<in>vset. {t \<in> (sat_tlb s). a \<in> range_of t})) x = Hit xa" using a1 by (simp add: lookup_miss_is_fault_intro lookup_miss_union_equal) then have "\<exists> T. lookup' (T - (\<Union>a\<in>vset. {t::unit tlb_entry \<in> T. a \<in> range_of t})) x \<noteq> Miss" by (metis lookup_type.simps(5)) then show "x \<notin> vset" using lookup_not_miss_varange by blast qed apply (rule conjI) apply blast subgoal proof - fix x :: vaddr and xa :: "unit tlb_entry" assume a1: "is_fault (pt_walk () (MEM s) (TTBR0 s) x)" assume "lookup' ((sat_tlb s) - (\<Union>v\<in>vset. {e \<in> (sat_tlb s). v \<in> range_of e}) \<union> the ` {e \<in> range (pt_walk () (MEM s) (TTBR0 s)). \<not> is_fault e}) x = Hit xa" then have "lookup' ((sat_tlb s) - (\<Union>a\<in>vset. {t::unit tlb_entry \<in> (sat_tlb s). a \<in> range_of t})) x = Hit xa" using a1 by (simp add: lookup_miss_is_fault_intro lookup_miss_union_equal) then show "x \<notin> vset" by (metis (mono_tags) lookup_not_miss_varange lookup_type.simps(5)) qed by (clarsimp simp: saturated_def) lemma flush_sat_incon_refine: "\<lbrakk> flush f (s::'a sat_tlb_state_scheme) = ((), s') ; flush f (t::'b set_tlb_state_scheme) = ((), t'); tlb_rel_abs (typ_sat_tlb s) (typ_set_tlb t) \<rbrakk> \<Longrightarrow> tlb_rel_abs (typ_sat_tlb s') (typ_set_tlb t')" by (cases f; clarsimp simp: flush_tlb_sat_incon_refine flush_varange_sat_incon_refine ) end
function res = transformLine3d(line, trans) %TRANSFORMLINE3D Transform a 3D line with a 3D affine transform. % % LINE2 = transformLine3d(LINE1, TRANS) % % Example % P1 = [10 20 30]; % P2 = [30 40 50]; % L = createLine3d(P1, P2); % T = createRotationOx(P1, pi/6); % L2 = transformLine3d(L, T); % figure; hold on; % axis([0 100 0 100 0 100]); view(3); % drawPoint3d([P1;P2]); % drawLine3d(L, 'b'); % drawLine3d(L2, 'm'); % % See also: % lines3d, transforms3d, transformPoint3d, transformVector3d % % ------ % Author: David Legland % e-mail: [email protected] % Created: 2008-11-25, using Matlab 7.7.0.471 (R2008b) % Copyright 2008 INRA - BIA PV Nantes - MIAJ Jouy-en-Josas. res = [... transformPoint3d(line(:, 1:3), trans) ... % transform origin point transformVector3d(line(:,4:6), trans)]; % transform direction vect.
We see college admissions exams such as the SAT as a critical tool – not a barrier – for our students, parents and teachers, as well as college admissions officers. Teachers and counselors in Patterson Joint Unified are provided with SAT data, which can be used to inform them how best to support students in reaching their academic goals.

When our students take the SAT, they can be connected to college application fee waivers, Khan Academy’s official free, personalized test practice and extensive scholarship opportunities. The way to ensure authentic equity in college admissions is to give all students the support and tools they need to pursue their college and career ambitions. When college admissions tests are offered during the school day, instead of Saturday mornings when some students would be working, more students are able to achieve their fullest potential.

When school districts are working to level the field, that includes making sure all students have the support and accommodations they need. English learners taking the SAT are always provided appropriate accommodations, including access to testing instructions in their primary language, bilingual glossaries and, more recently, extended testing time. All students with exceptional needs are given appropriate accommodations on the SAT as identified by their personal education team.

Leaders in the college admissions community recognize college admissions tests are an important part of a holistic admission process – one that considers test scores as one factor among many that can show a student’s true potential for success. But college admission is just one component of these exams’ utility. College admissions exams provide highly useful data for teachers and administrators, not just admissions officers. College admissions tests are used nationwide as a complement to grades, in order to reliably predict college and career success and student achievement across socio-economic status, race and ethnicity without bias or potential inflation.

Patterson is diverse, inclusive and proud. Increasing access to important college admissions tests like the SAT – at no cost to the students – means Patterson Joint Unified students from every walk of life can seek their college and career dreams. This is an approach many districts across California are taking as we endeavor to provide students the tools they need to succeed.

Philip M. Alfano, Ed.D., is superintendent of Patterson Joint Unified School District.
import Random # TODO: remove Random.CloseOpen01{BigFloat} parameters # TODO: remove setprecision # possibly fixed by https://github.com/JuliaLang/julia/pull/38169 function Random.Sampler( RNG::Type{<:Random.AbstractRNG}, st::Random.SamplerType{Acb}, n::Random.Repetition, ) return Random.SamplerSimple( Random.SamplerType{Acb}(), Random.SamplerBigFloat{Random.CloseOpen01{BigFloat}}(precision(Acb)), ) end function Random.Sampler( ::Type{<:Random.AbstractRNG}, ::Random.CloseOpen01{T}, ::Random.Repetition, ) where {T<:Union{Arf,Arb}} return Random.SamplerSimple( Random.SamplerType{T}(), Random.SamplerBigFloat{Random.CloseOpen01{BigFloat}}(precision(T)), ) end function Random.Sampler( RNG::Type{<:Random.AbstractRNG}, x::TOrRef, n::Random.Repetition, ) where {TOrRef<:Union{Arf,ArfRef,Arb,ArbRef,Acb,AcbRef}} T = _nonreftype(TOrRef) return Random.SamplerSimple( Random.SamplerType{T}(), Random.SamplerBigFloat{Random.CloseOpen01{BigFloat}}(precision(x)), ) end Random.rand(rng::Random.AbstractRNG, sp::Random.SamplerSimple{Random.SamplerType{Arf}}) = setprecision(BigFloat, sp.data.prec) do Arf(rand(rng, sp.data), prec = sp.data.prec) end Random.rand(rng::Random.AbstractRNG, sp::Random.SamplerSimple{Random.SamplerType{Arb}}) = setprecision(BigFloat, sp.data.prec) do Arb(rand(rng, sp.data), prec = sp.data.prec) end Random.rand(rng::Random.AbstractRNG, sp::Random.SamplerSimple{Random.SamplerType{Acb}}) = setprecision(BigFloat, sp.data.prec) do Acb(rand(rng, sp.data), rand(rng, sp.data), prec = sp.data.prec) end
If $s$ is a closed set and $c$ is a component of $s$, then $c$ is closed.
== Damage and dilapidation ==
module Web.URI.Port where open import Web.URI.Port.Primitive public using ( Port? ; :80 ; ε )
import numpy
import time
import random as rn

N = 10240 * 50

# Build a reference NumPy array (not timed; kept for comparison purposes).
a = numpy.array(range(N))

# Time how long random.sample takes to draw all N elements from range(N),
# i.e. to produce a full random permutation.
start = time.time()
R_chosen = rn.sample(range(N), N)
end = time.time()
print(end - start)
module TicTacToe import public TicTacToe.GameState import public TicTacToe.Player import public TicTacToe.Simple
# Item 04

A *fix-point-iteration review*. See Numerical Analysis, 2nd edition by Timothy Sauer.

* Definition 1: The real number $r$ is a fix-point of the function $g(x)$ if $g(r)=r$.
* Algorithm 1: Fixed-Point-Iteration: Let $x_0$ be the initial guess. Compute $x_{i+1} = g(x_i)$, for $i=0,1,2,3,\dots$. Notice that this fixed-point-iteration may or may not converge to $r$.
* Definition 2: Let $e_i=|r-x_i|$ be the error at iteration $i$. If $0 < \lim_{i \rightarrow \infty} \frac{e_{i+1}}{e_i} = S < 1$, the fixed-point-iteration $x_{i+1} = g(x_i)$ is said to obey linear convergence with rate $S$.
* Theorem 1: Assume that $g(x)$ is continuously differentiable and that $S=|g'(r)|<1$. Then the fixed-point-iteration $x_{i+1}=g(x_i)$ converges at least linearly with rate $S$ to the fixed point $r$ for initial guesses sufficiently close to $r$.

---

* **Question 1**: Prove that a continuously differentiable function $g(x)$ satisfying $|g'(x)|<1$ on a closed interval cannot have two fixed points on that interval.

__Proof__: By contradiction, let's assume that $g(x)$ has two fixed points, $x_1$ and $x_2$, with $x_1 \neq x_2$ on the interval $[a,b]$, and that $|g'(x)|< 1 \, \forall (x \in [a,b])$. Then we have that:
$$
g(x_1) = x_1 \qquad g(x_2) = x_2 \,.
$$
Given these two points, by the [mean value theorem](https://en.wikipedia.org/wiki/Mean_value_theorem) there must exist a point $c \in [x_1,x_2]$ such that:
$$
g'(c) = \frac{g(x_1)-g(x_2)}{x_1-x_2} = \frac{x_1-x_2}{x_1-x_2} = 1
$$
As $c \in [x_1,x_2] \Rightarrow c \in [a,b]$, we have a contradiction, since we said that $|g'(x)|<1 \, \forall (x \in [a,b])$.

---

* **Question 2**: Given that $f(x)$ has a root near $x_0$, derive three different fix-point-iterations that may converge to $f(r)=0$ and state the restrictions on $f(x)$ needed, if any. Assume that $f(x)$ has as many derivatives as you may need.

(1) We have the trivial options (as $x \rightarrow r$):
\begin{align}
0 &= f(x) \\
x &= \underbrace{\pm f(x) + x}_{g(x)}
\end{align}
it requires that $|g'(r)| = |\pm f'(r) + 1| < 1$; we may choose the sign of $\pm$ accordingly.

(2) We expand the Taylor series around $x_0$:
\begin{align}
f(x) &= f(x_0) + f'(x_0)(x-x_0) + \dots \\
0 &= f(x_0) + f'(x_0)(x-x_0) \qquad \qquad \text{as $x\rightarrow r$} \\
f'(x_0)x &= -f(x_0) + f'(x_0) x_0 \\
x &= -\frac{f(x_0)}{f'(x_0)} + x_0 \\
x &= \underbrace{x_0 - \frac{f(x_0)}{f'(x_0)}}_{g(x_0)}
\end{align}
Which corresponds to Newton's method. Its only restriction is that $f'(x)\neq 0$ near the root.

(3) We expand the Taylor series around $x_0$ up to second order:
\begin{align}
f(x) &= f(x_0) + f'(x_0)(x-x_0) + \frac{1}{2}f''(x_0)(x-x_0)^2 \\
0 &= f(x_0) + f'(x_0)(x-x_0) + \frac{1}{2}f''(x_0)(x-x_0)^2 &\text{as $x\rightarrow r$} \\
0 &= f'(x_0) + f''(x_0)x - f''(x_0)x_0 &\text{after $\frac{d(\cdot)}{dx}$} \\
f''(x_0)x &= f''(x_0)x_0 - f'(x_0) \\
x &= \underbrace{x_0 - \frac{f'(x_0)}{f''(x_0)}}_{g(x_0)}
\end{align}
Which is the [Newton's method in optimization](https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization). Note that its fixed points are the stationary points of $f$, so it targets $f(r)=0$ only when the root is also a critical point (a multiple root). It requires $f''(r)$ to exist and be nonzero, and if we impose $|g'(r)| <1$ that results in:
$$
|f'(r) f'''(r)| < (f''(r))^2
$$

---

* **Question 3**: Derive an unsuccessful and a successful fix-point-iteration for finding the root of $x^3+x=1$ near $x_0=1$.

Our unsuccessful fix-point-iteration would be:
\begin{align}
x^3+x &= 1 \\
x &= 1-x^3 \\
\text{we make } g(x) &= 1-x^3
\end{align}
it won't converge since $g'(x) = -3x^2$, so $|g'(x)|>1$ in a neighbourhood of the root $r\approx 0.6823$ (in particular $|g'(r)| = 3r^2 \approx 1.4$).
Our successful one:
\begin{align}
x^3+x &= 1 \\
x^3 &= 1-x \\
x &= \sqrt[3]{1-x} \\
\text{we make } g(x) &= \sqrt[3]{1-x}
\end{align}
it will converge since $g'(x) = -\frac{1}{3}\left(1-x\right)^{-\frac{2}{3}}$ and, at the root $r \approx 0.6823$ (where $1-r=r^3$), $|g'(r)| = \frac{1}{3r^2} \approx 0.72 < 1$.

---

* Algorithm 2: Newton's method: Let $x_0$ be the initial guess. Compute $x_{i+1} = \hat{g}(x_i)$, for $i=0,1,2,3,\dots$, where $\hat{g}=x-(f'(x))^{-1}f(x)$ and $(f'(x))^{-1}$ is the inverse of $f'(x)$, i.e. $(f'(x))^{-1} = 1/f'(x)$.
* Definition 3: Let $e_i = |r-x_i|$ be the error at iteration $i$. If $\lim_{i \rightarrow \infty} \frac{e_{i+1}}{e_i^2} = M < \infty$, the method is said to be quadratically convergent.

---

* **Question 4**: Prove that Newton's method is quadratically convergent as long as $f'(r) \neq 0$.

**Proof**: We have that $g(x) = x - \frac{f(x)}{f'(x)}$. Expanding a Taylor series around $r$:
\begin{align}
g(x_i) &= g(r) + g'(r)(x_i-r) + \frac{1}{2}g''(r)(x_i-r)^2 + \dots \\
x_{i+1} &= r + g'(r)(x_i-r) + \frac{1}{2}g''(r)(x_i-r)^2 + \dots
\end{align}
We can see that:
\begin{align}
g'(x) &= 1 - \frac{f'(x)f'(x)-f(x)f''(x)}{(f'(x))^2} \\
g'(r) &= 1 - \frac{f'(r)f'(r)-f(r)f''(r)}{(f'(r))^2} \\
&= 1 - \frac{f'(r)f'(r)}{(f'(r))^2} = 1-1 = 0
\end{align}
So, returning to the previous equation:
\begin{align}
g(x_i) &= g(r) + g'(r)(x_i-r) + \frac{1}{2}g''(r)(x_i-r)^2 + \dots \\
x_{i+1} &= r + \frac{1}{2}g''(r)(x_i-r)^2 + \dots \\
x_{i+1}-r &= \frac{1}{2}g''(r)(x_i-r)^2 + \dots \\
\frac{x_{i+1}-r}{(x_i-r)^2} &= \frac{1}{2}g''(r) + \dots \\
\frac{e_{i+1}}{e_i^2} &= \frac{1}{2}g''(r) + \dots
\end{align}
And the method has quadratic convergence at rate $M = \frac{1}{2}\left|g''(r)\right|$.

---

* **Question 5**: Explain when Newton's method shows linear convergence and also explain how it can be fixed.

When $f'(r)=0$ we have that:
\begin{align}
g'(x) &= 1 - \frac{f'(x)f'(x)-f(x)f''(x)}{(f'(x))^2} \\
&= 1-\frac{(f'(x))^2}{(f'(x))^2}+\frac{f(x)f''(x)}{(f'(x))^2} \\
&= 1-1+\frac{f(x)f''(x)}{(f'(x))^2} \\
&= \frac{f(x)f''(x)}{(f'(x))^2}
\end{align}
Using L'Hôpital's rule (because $f(r)=0$ too):
\begin{align}
g'(r) &= \lim_{x \rightarrow r}{\frac{f(x)f''(x)}{(f'(x))^2}} \\
&= \lim_{x \rightarrow r}{\frac{f'(x)f''(x)+f(x)f'''(x)}{2f'(x)f''(x)}} \\
&= \frac{1}{2}+\lim_{x \rightarrow r}{\frac{f(x)f'''(x)}{2f'(x)f''(x)}} \\
&= \frac{1}{2}+\lim_{x \rightarrow r}{\frac{f'(x)f'''(x)+f(x)f^{(4)}(x)}{2f''(x)f''(x)+2f'(x)f'''(x)}} \\
&= \frac{1}{2}+0 = \frac{1}{2} \qquad \text{as long as $f''(r)\neq 0$}
\end{align}
we see that the method has linear convergence because $|g'(r)|=\frac{1}{2} \neq 0$.

If we make our $g(x)= x - \alpha \frac{f(x)}{f'(x)}$, we will see that the derivative will be:
\begin{align}
g'(x) &= 1 - \alpha\frac{f'(x)f'(x)-f(x)f''(x)}{(f'(x))^2} \\
g'(r) &= 1 - \alpha \left(1-\frac{1}{2} \right)
\end{align}
and we just have to make $\alpha = 2$ so that $g'(r)=0$. In general, we have to make $\alpha$ equal to the multiplicity of the root.
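---

A quick numerical sketch of the three behaviours above (the snippet and its names, e.g. `g_bad`, are just illustrative). We start from $x_0=0.7$ rather than $x_0=1$, since Theorem 1 only guarantees convergence for initial guesses sufficiently close to $r\approx 0.6823$ (and $x_0=1$ happens to land on the $2$-cycle $\{0,1\}$ for both non-Newton maps):

```python
import numpy as np

f = lambda x: x**3 + x - 1            # the root is r ≈ 0.6823278
g_bad    = lambda x: 1 - x**3          # |g'(r)| ≈ 1.40 > 1: no convergence
g_good   = lambda x: np.cbrt(1 - x)    # |g'(r)| ≈ 0.72 < 1: linear convergence
g_newton = lambda x: x - f(x) / (3 * x**2 + 1)  # Newton: quadratic convergence

x0, n_iter = 0.7, 10
for name, g in [("bad", g_bad), ("good", g_good), ("newton", g_newton)]:
    x = x0
    for _ in range(n_iter):
        x = g(x)
    print(f"{name:6s} x_{n_iter} = {x: .10f}   |f(x)| = {abs(f(x)):.2e}")
```

`np.cbrt` is used instead of `(1 - x)**(1/3)` so the cube root stays real (and well-defined) even if an iterate overshoots past $x=1$.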
Load LFindLoad. From lfind Require Import LFind. From QuickChick Require Import QuickChick. From adtind Require Import goal4. Derive Show for natural. Derive Arbitrary for natural. Instance Dec_Eq_natural : Dec_Eq natural. Proof. dec_eq. Qed. Derive Show for lst. Derive Arbitrary for lst. Instance Dec_Eq_lst : Dec_Eq lst. Proof. dec_eq. Qed. Lemma conj14synthconj5 : forall (lv0 : lst) (lv1 : natural), (@eq natural (len (append lv0 (Cons lv1 lv0))) (Succ (len (append lv0 lv0)))). Admitted. QuickChick conj14synthconj5.
[STATEMENT] lemma PO_obs_consistent [iff]: "obs_consistent R0n0i med0n0i a0n a0i" [PROOF STATE] proof (prove) goal (1 subgoal): 1. obs_consistent R0n0i med0n0i a0n a0i [PROOF STEP] by (auto simp add: obs_consistent_def R0n0i_def med0n0i_def a0i_def a0n_def)
function r = msmoothboxrnd(a,b,sigma,n)
%MSMOOTHBOXRND Random arrays from the multivariate smooth-box distribution.
%   R = MSMOOTHBOXRND(A,B,SIGMA) returns an N-by-D matrix R of random
%   vectors chosen from the multivariate smooth-box distribution
%   with pivots A and B and scale SIGMA. A, B and SIGMA are N-by-D matrices,
%   and MSMOOTHBOXRND generates each row of R using the corresponding row
%   of A, B and SIGMA.
%
%   R = MSMOOTHBOXRND(A,B,SIGMA,N) returns a N-by-D matrix R of random
%   vectors chosen from the multivariate smooth-box distribution
%   with pivots A and B and scale SIGMA.
%
%   See also MSMOOTHBOXPDF.

% Luigi Acerbi 2022

[Na,Da] = size(a);
[Nb,Db] = size(b);
[Nsigma,Dsigma] = size(sigma);

if any(sigma(:) <= 0)
    error('msmoothboxrnd:NonPositiveSigma', ...
        'All elements of SIGMA should be positive.');
end

if nargin < 4 || isempty(n)
    n = max([Na,Nb,Nsigma]);
else
    if (Na ~= 1 && Na ~= n) || (Nb ~= 1 && Nb ~= n) || ...
            (Nsigma ~= 1 && Nsigma ~= n)
        error('msmoothboxrnd:SizeError', ...
            'A, B, SIGMA should be 1-by-D or N-by-D arrays.');
    end
end

if Na ~= Nb || Da ~= Db || Na ~= Nsigma || Da ~= Dsigma
    error('msmoothboxrnd:SizeError', ...
        'A, B, SIGMA should be arrays of the same size.');
end

D = Da;

if size(a,1) == 1; a = repmat(a,[n,1]); end
if size(b,1) == 1; b = repmat(b,[n,1]); end
if size(sigma,1) == 1; sigma = repmat(sigma,[n,1]); end

r = zeros(n,D);

nf = 1 + 1/sqrt(2*pi)./sigma.*(b - a);

% Sample one dimension at a time
for d = 1:D
    % Draw component (left/right tails or plateau)
    u = nf(:,d) .* rand(n,1);

    % Left Gaussian tails
    idx = u < 0.5;
    if any(idx)
        z1 = abs(randn(sum(idx),1).*sigma(idx,d));
        r(idx,d) = a(idx,d) - z1;   % index column d explicitly
    end

    % Right Gaussian tails
    idx = (u >= 0.5 & u < 1);
    if any(idx)
        z1 = abs(randn(sum(idx),1).*sigma(idx,d));
        r(idx,d) = b(idx,d) + z1;   % index column d explicitly
    end

    % Plateau
    idx = u >= 1;
    if any(idx)
        r(idx,d) = a(idx,d) + (b(idx,d) - a(idx,d)).*rand(sum(idx),1);
    end
end
-- Andreas, 2014-01-24, Issue 1411 -- First split might not succeed in the unifier, -- so try later splits also. -- {-# OPTIONS -v tc.lhs:10 #-} open import Common.Prelude open import Common.Equality data Fin : Nat → Set where fzero : (n : Nat) → Fin (suc n) fsuc : (n : Nat) → (i : Fin n) → Fin (suc n) data _≅_ {A : Set} (a : A) : {B : Set} (b : B) → Set where refl : a ≅ a works : ∀ n m (i : Fin n) (j : Fin m) → n ≡ m → fsuc n i ≅ fsuc m j → i ≅ j works n .n i .i refl refl = refl fails : ∀ n m (i : Fin n) (j : Fin m) → fsuc n i ≅ fsuc m j → n ≡ m → i ≅ j fails n .n i .i refl refl = refl -- Refuse to solve heterogeneous constraint i : Fin n =?= j : Fin m -- when checking that the pattern refl has type fsuc n i ≅ fsuc m j -- Should work now.
#pragma once #include <type_traits> #include <cuda/runtime_api.hpp> #include <gsl-lite/gsl-lite.hpp> namespace thrustshift { namespace kernel { template <typename MapT, typename SrcT, typename DstT> __global__ void gather(gsl_lite::span<const MapT> map, gsl_lite::span<const SrcT> src, gsl_lite::span<DstT> dst) { const auto gtid = threadIdx.x + blockIdx.x * blockDim.x; if (gtid < src.size()) { dst[gtid] = src[map[gtid]]; } } } // namespace kernel namespace async { template <class MapRange, class SrcRange, class DstRange> void gather(cuda::stream_t& stream, MapRange&& map, SrcRange&& src, DstRange&& dst) { gsl_Expects(src.size() == dst.size()); gsl_Expects(src.size() == map.size()); gsl_Expects(src.data() != dst.data()); if (src.empty()) { return; } using map_index_type = typename std::remove_reference<MapRange>::type::value_type; using src_value_type = typename std::remove_reference<SrcRange>::type::value_type; using dst_value_type = typename std::remove_reference<DstRange>::type::value_type; constexpr cuda::grid::block_dimension_t block_dim = 128; const cuda::grid::dimension_t grid_dim = (src.size() + block_dim - 1) / block_dim; auto c = cuda::make_launch_config(grid_dim, block_dim); auto k = kernel::gather<map_index_type, src_value_type, dst_value_type>; cuda::enqueue_launch(k, stream, c, map, src, dst); } } // namespace async } // namespace thrustshift
\section{Status and Problems}
\label{sec:statusAndProblems}

\subsection{Status of the implementation}
\label{sec:status}

At the time of writing this technote, March 2019, only the \textbf{lsst\_distrib} repository is released and distributed.

\subsubsection{Packaging}
\label{sec:statusPkgs}

The following technologies are already used in DM for packaging:
\begin{itemize}
\item {\bf Eups}: for the majority of DM packages. It is also used for all third party libraries that require customization, or that are not available in the conda channels. The LSST DM binary Eups repository is hosted at \url{https://eups.lsst.codes/}.
\item {\bf Sonatype Nexus}: for Java-based software packages.
\item {\bf Conda}: for publicly available packages that can be used without any customization.
\item {\bf Pip/PyPI}: for publicly available Python packages.
\end{itemize}

\subsubsection{Distribution}
\label{sec:statusDistrib}

The main tool used for distribution to operations is \textbf{Docker}. The Science Pipelines distribution to the science community is mainly provided using the script \textit{newinstall.sh}. This script permits retrieving a specific Science Pipelines build or release available in the Eups repository, and setting up the environment for its build and execution. In \url{pipelines.lsst.io} instructions are provided on how to deploy the Science Pipelines using \textit{newinstall.sh} or \textbf{Docker}.

\subsubsection{SW Products Identification}
\label{sec:statusIdentification}

No software products as described in the product tree in \citeds{LDM-294} have official releases so far. Only the Science Pipelines distribution, identified by the \textit{lsst\_distrib} Git meta-package, has regular official releases. All explicit and implicit dependencies of \textit{lsst\_distrib} whose team is \textbf{Data Management} or \textbf{DM Externals} are considered part of the distribution.

The following Git teams are relevant for the identification of the Science Pipelines distribution:
\begin{itemize}
\item the \textbf{Data Management} team identifies all DM developed software included in the distribution.
\item the \textbf{DM Externals} team identifies third-party libraries required by \textit{lsst\_distrib}. These are software packages developed outside DM, that are not available in public conda channels, or that are updated often and therefore can't be included in the conda environment definition (\ref{sec:statusEnvs}). See draft \citeds{DMTN-110} for current problems and possible solutions regarding conda environments.
\end{itemize}

A third team, \textbf{DM Auxilliaries}, identifies auxiliary packages to be tagged when a release or build of the distribution is done; they are not part of it and therefore are not distributed with it.

\subsubsection{Environment}
\label{sec:statusEnvs}

The conda environment used for building the Science Pipelines is defined in the Git repository \textit{scipipe\_conda\_env}.

\subsubsection{Other Tools}
\label{sec:statusTools}

The build tools used to build Science Pipelines software are available in the \textit{lsstsw} and \textit{lsst\_build} Git repositories. The Jenkins scripts are also an important part of the tooling, facilitating a large number of actions, like testing a ticket branch, automating releases, or providing periodic builds (weekly/daily). The tools picture is completed by \textit{codekit}, which permits interaction with multiple Git repositories.
Just as an example, it is used to create the same tag on all the Git repositories composing a defined product automatically, instead of creating it manually.

\subsection{Open Problems}
\label{sec:openProblems}

In this section, a list of problems derived from the current development approach is enumerated. While these problems remain unresolved, the only release process suitable for DM is to release the entire codebase each time. Their resolution may lead to a different development approach, and therefore the proposed release procedure may need to be reviewed accordingly.

\subsubsection{SW Product Composition}
\label{sec:problemId}

A Git repository may be included in more than one software product. This makes it impossible to apply the release procedure consistently to an SW product that is not the Science Pipelines. The reason is explained in the following example.

Product A is composed of 100 Git repositories. Product B is composed of 80 Git repositories. 70 Git repositories are shared by both products. When release 1.0 is done on product A, all 100 repositories will be tagged, and corresponding Eups packages are created. This will include the 70 shared repositories. One week later, product B is ready for the release 1.0, and some of the 70 shared repositories have been updated. For them, the 1.0 tag required for the product B release will not be the same as the 1.0 tag required for the product A release. This implies that, for some repositories, release 1.0 of product A is different from release 1.0 of product B.

\textbf{Requirement}: there should be a 1 to 1 correspondence between software products and Git repositories. If this is not possible, each Git repository shall be included in only one software product (one SW product to many Git repositories).

Note also that each Git repository used for building, unit testing and packaging the software products shall not be included in the SW product itself, but shall be versioned separately and be part of the environment definition. See section \ref{sec:swprod}.

\subsubsection{Code Fragmentation}
\label{sec:problemCode}

The high number of repositories causes problems in that it:
\begin{itemize}
\item increases the build time: each Git repository needs to be built each time. This implies that extra activities need to be done, such as cloning, checkout, package creation and pushing to the \url{eups.lsst.codes} repository. None of these activities is time consuming in itself, and adding just one repository may not represent a problem. However, if the number of repositories included in a software product grows without control, the overall build time will become a problem.
\item increases the release time: all Git repositories need to be built and released each time. As for the build time, adding one repository to the software product will not represent a problem in terms of time. However, if the number of repositories grows without control, the overall time required to do a release will become a problem, especially for those software products that require patches to be available very quickly.
\item increases the failure probability: the tooling may be affected by network glitches or similar technical issues. These may lead to the failure of the build/release process in a non-deterministic way.
\end{itemize}

Moving 3rd party libraries to the conda environment will mitigate these problems. However, this requires proper management of the conda environment. See draft \citeds{DMTN-110}.

\textbf{Requirement}: the number of Git repositories shall be kept low.
The DM-CCB shall approve each new repository that is introduced, since this has an impact on the maintainability of the system. All third party libraries shall also not be part of the stack, unless they require source code changes.

\subsubsection{Binaries Persistence}
\label{sec:problemPersistence}

EUPS builds a Git repository and installs the binary packages locally. If a Git repository is not affected by any changes, once installed locally by EUPS, it will not be rebuilt.

Continuous integration tools instead make available, in the remote repository at \url{https://eups.lsst.codes/}, the binary packages for macOS and Linux platforms. However, the build tools in lsstsw are not able to resolve the binary packages from that remote repository. Therefore, each time the Science Pipelines is built from scratch, all Git repositories are rebuilt. A Git repository should be rebuilt only if its source code has changed, or if one of its dependencies has changed (with semantic versioning, only for breaking changes that increase the major version).

\textbf{Requirement}: it shall be possible to resolve a package's binaries from \url{https://eups.lsst.codes/} when available, instead of building them from the source code.

This problem does not block the release process, but its resolution permits a better optimization of the builds.
Require Export Iron.Language.SimpleData.ExpBase. (* Get the data constructor of an alternative. *) Fixpoint dcOfAlt (aa: alt) : datacon := match aa with | AAlt dc _ _ => dc end. Hint Unfold dcOfAlt. (* Get the alternative body that matches a given constructor. *) Fixpoint getAlt (dc: datacon) (alts: list alt) {struct alts} : option alt := match alts with | nil => None | AAlt dc' tsArgs x :: alts' => if datacon_beq dc dc' then Some (AAlt dc' tsArgs x) else getAlt dc alts' end. (* If we get a single alternative from a list, then that alternative was in the list. *) Lemma getAlt_in : forall dc alt alts , getAlt dc alts = Some alt -> In alt alts. Proof. intros. induction alts. false. destruct a as [dc' tsArgs x]. simpl in H. breaka (datacon_beq dc dc'). inverts H. apply datacon_beq_eq in HeqX; auto. Qed. (* Given a data constructor, if one of the alternatives in a list matches that data constructor then we can get the other information about the alternative from the list. *) Lemma getAlt_exists : forall d alts , In d (map dcOfAlt alts) -> (exists tsArgs x, getAlt d alts = Some (AAlt d tsArgs x)). Proof. intros. induction alts. simpl in H. false. simpl in H. inverts H. destruct a. simpl. breaka (datacon_beq d d). exists l. exists e. auto. apply datacon_beq_false in HeqX. false. lets D: IHalts H0. destruct a. simpl. breaka (datacon_beq d d0). apply datacon_beq_eq in HeqX. subst. auto. exists l. exists e. auto. Qed.
{-# OPTIONS --copatterns #-} open import Common.Nat open import Common.IO open import Common.Unit record Test : Set where field a b c : Nat f : Test -> Nat f r = a + b + c where open Test r open Test r1 : Test a r1 = 100 b r1 = 120 c r1 = 140 r2 : Test c r2 = 400 a r2 = 200 b r2 = 300 g : Nat g = f r1 + a m + b m + c m where m = r2 main : IO Unit main = printNat g -- Expected Output: 1260
# Helper data for CIE observer functions include("cie_data.jl") # readonly static 3x3 matrix struct Mat3x3{T} <: AbstractMatrix{T} e::NTuple{9, T} end Mat3x3(mat::Matrix{T}) where {T} = Mat3x3{T}(Tuple(mat)) Base.IndexStyle(::Type{<:Mat3x3}) = IndexLinear() Base.size(::Mat3x3) = (3, 3) Base.getindex(M::Mat3x3{T}, i::Int) where {T} = M.e[i] macro mul3x3(C, M, c1, c2, c3) esc(quote F = typeof(0.5f0 * $c1) === Float32 ? Float32 : eltype($M) if $c1 isa N0f8 && $c2 isa N0f8 && $c3 isa N0f8 s = 0xffffff c1x = Int32(reinterpret($c1)) * 0x10101 c2x = Int32(reinterpret($c2)) * 0x10101 c3x = Int32(reinterpret($c3)) * 0x10101 else s = true c1x, c2x, c3x = $c1, $c2, $c3 end @inbounds ret = $C( muladd(F($M[1,1] / s), c1x, muladd(F($M[1,2] / s), c2x, F($M[1,3] / s) * c3x)), muladd(F($M[2,1] / s), c1x, muladd(F($M[2,2] / s), c2x, F($M[2,3] / s) * c3x)), muladd(F($M[3,1] / s), c1x, muladd(F($M[3,2] / s), c2x, F($M[3,3] / s) * c3x))) ret end) end # for optimization div60(x) = x / 60 _div60(x::T) where T = muladd(x, T(1/960), x * T(0x1p-6)) if reduce(max, _div60.((90.0f0,))) == 1.5f0 div60(x::T) where T <: Union{Float32, Float64} = _div60(x) else # force two-step multiplication div60(x::T) where T <: Union{Float32, Float64} = x * T(0x1p-6) + x * T(1/960) end # mod6 supports the input `x` in [-2^28, 2^29] mod6(::Type{T}, x::Int32) where T = unsafe_trunc(T, x - 6 * ((widemul(x, 0x2aaaaaaa) + Int64(0x20000000)) >> 0x20)) # Approximation of the reciprocal of the cube root, x^(-1/3). # assuming that x > 0.003, the conditional branches are omitted. @inline function rcbrt(x::Float64) ix = reinterpret(UInt64, x) e0 = (ix >> 0x34) % UInt32 ed = e0 ÷ 0x3 er = e0 - ed * 0x3 a = 0x000b_f2d7 - 0x0005_5718 * er e = (UInt32(1363) - ed) << 0x14 | a t1 = reinterpret(Float64, UInt64(e) << 0x20) h1 = muladd(t1^2, -x * t1, 1.0) t2 = muladd(@evalpoly(h1, 1/3, 2/9, 14/81), h1 * t1, t1) h2 = muladd(t2^2, -x * t2, 1.0) t3 = muladd(muladd(2/9, h2, 1/3), h2 * t2, t2) reinterpret(Float64, reinterpret(UInt64, t3) & 0xffff_ffff_8000_0000) end @inline function rcbrt(x::Float32) ix = reinterpret(UInt32, x) e0 = ix >> 0x17 + 0x2 ed = e0 ÷ 0x3 er = e0 - ed * 0x3 a = 0x005f_9cbe - 0x002a_bd7d * er t1 = reinterpret(Float32, (UInt32(169) - ed) << 0x17 | a) h1 = muladd(t1^2, -x * t1, 1.0f0) t2 = muladd(muladd(2/9f0, h1, 1/3f0), h1 * t1, t1) h2 = muladd(t2^2, -x * t2, 1.0f0) t3 = muladd(1/3f0, h2 * t2, t2) reinterpret(Float32, reinterpret(UInt32, t3) & 0xffff_f000) end cbrt01(x) = cbrt(x) @inline function cbrt01(x::Float64) r = rcbrt(x) # x^(-1/3) h = muladd(r^2, -x * r, 1.0) e = muladd(2/9, h, 1/3) * h * r muladd(r, x * r, x * e * (r + r + e)) # x * x^(-2/3) end @inline function cbrt01(x::Float32) r = Float64(rcbrt(x)) # x^(-1/3) h = muladd(r^2, -Float64(x) * r, 1.0) e = muladd(muladd(14/81, h, 2/9), h, 1/3) * h Float32(1 / muladd(r, e, r)) end pow3_4(x) = (y = @fastmath(sqrt(x)); y*@fastmath(sqrt(y))) # x^(3/4) # `pow5_12` is called from `srgb_compand`. 
pow5_12(x) = pow3_4(x) / cbrt(x) # 5/12 == 1/2 + 1/4 - 1/3 == 3/4 - 1/3 @inline function pow5_12(x::Float64) p3_4 = pow3_4(x) # x^(-1/6) if x < 0.02 t0 = @evalpoly(x, 3.1366722556806232, -221.51395962221136, 19788.889459114234, -905934.6541469148, 1.5928561711645417e7) elseif x < 0.12 t0 = @evalpoly(x, 2.3135905865468644, -26.43664640894651, 385.0146581045545, -2890.0920682466267, 8366.343115590817) elseif x < 1.2 t0 = @evalpoly(x, 1.7047813285940905, -3.1261253501167308, 7.498744828350077, -10.100319516746419, 6.820601476522508, -1.7978894213531524) else return p3_4 / cbrt01(x) end # x^(-1/3) t1 = t0 * t0 h1 = muladd(t1^2, -x * t1, 1.0) t2 = muladd(h1, 1/3 * t1, t1) h2 = muladd(t2^2, -x * t2, 1.0) t2h = @evalpoly(h2, 1/3, 2/9, 14/81) * h2 * t2 # Taylor series of (1-h)^(-1/3) # x^(3/4) * x^(-1/3) muladd(p3_4, t2, p3_4 * t2h) end @inline function pow5_12(x::Float32) # x^(-1/3) rc = rcbrt(x) rcx = -rc * x rch = muladd(muladd(rc, x, rcx), -rc^2, muladd(rc^2, rcx, 1.0f0)) # 1 - x * rc^3 rce = muladd(2/9f0, rch, 1/3f0) * rch * rc # x^(3/4) p3_4_f64 = pow3_4(Float64(x)) p3_4r = reinterpret(Float64, reinterpret(UInt64, p3_4_f64) & 0xffffffff_e0000000) p3_4 = Float32(p3_4r) p3_4e = Float32(p3_4_f64 - p3_4r) # x^(3/4) * x^(-1/3) muladd(p3_4, rc, muladd(p3_4, rce, p3_4e * rc)) end # `pow12_5` is called from `invert_srgb_compand`. pow12_5(x) = pow12_5(Float64(x)) pow12_5(x::BigFloat) = x^big"2.4" @inline function pow12_5(x::Float64) # x^0.4 t1 = @evalpoly(@fastmath(min(x, 1.75)), 0.24295462640373672, 1.7489099720303518, -1.9919942887850166, 1.3197188815160004, -0.3257258790067756) t2 = muladd(2/5, muladd(x / t1^2, @fastmath(sqrt(t1)), -t1), t1) # Newton's method t3 = muladd(2/5, muladd(x / t2^2, @fastmath(sqrt(t2)), -t2), t2) t4 = muladd(2/5, muladd(x / t3^2, @fastmath(sqrt(t3)), -t3), t3) # x^0.4 * x^2 rx = reinterpret(Float64, reinterpret(UInt64, x) & 0xffffffff_f8000000) # hi e = x - rx # lo muladd(t4, rx^2, t4 * (rx + rx + e) * e) end pow7(x) = (y = x*x*x; y*y*x) # TODO: migrate to ColorTypes.jl @inline function atan360_kernel(t::Float64) @evalpoly(t^2, 0.8952465548919112, -0.2984155182972285, 0.1790493109637673, -0.12789236387151418, 0.09947179554077099, -0.08138502549008089, 0.06884985860325084, -0.059532134551367105, 0.05164982834653973, -0.042514874197367984, 0.028600080170923112, -0.011001809246782802) end @inline function atan360_kernel(t::Float32) @evalpoly(t^2, 0.89524657f0, -0.2984143f0, 0.17899014f0, -0.12683357f0, 0.090689056f0, -0.045563098f0) end # a variant of `atand` returning the angle in the range of [0, 360] atan360(y, x) = (a = atand(y, x); signbit(a) ? oftype(a, a + 360) : a) @inline function atan360(y::T, x::T) where T <: Union{Float32, Float64} (isnan(x) | isnan(y)) && return T(NaN) ax, ay = abs(x), abs(y) n, m = @fastmath minmax(ax, ay) if m == T(Inf) d0 = n == T(Inf) ? T(45) : T(0) else m = m == T(0) ? T(0.5) : m ta = (n + n) > m ? T(0.5) : T(0) # 1-step CORDIC # ro=(n + n) > m ? T(atand(0.5) / 64) : T(0) ro = @fastmath max(T(0), ta - T(0.5 - 0.4150789246418436)) n1 = n - ta * m m1 = m + ta * n t = n1 / m1 # in [0, 0.5] p = atan360_kernel(t) d0 = muladd(t, p, ro) * T(64) end b1 = T( 90) + flipsign(T( -90), x) b2 = T(180) + flipsign(T(-180), y) d1 = ay > ax ? T(90) - d0 : d0 d2 = b1 + flipsign(d1, x) # signbit(x) ? T(180) - d1 : d1 dd = b2 + flipsign(d2, y) # signbit(y) ? 
T(360) - d2 : d2 return dd end # override only the `Lab` and `Luv` versions just for now @inline ColorTypes.hue(c::Lab) = atan360(c.b, c.a) @inline ColorTypes.hue(c::Luv) = atan360(c.v, c.u) @inline function sin_kernel(x::Float64) x * @evalpoly(x^2, 1.117010721276371, -0.23228479064016105, 0.014491237085286733, -0.00043049771889962576, 7.460244157055791e-6, -8.462038405688494e-8, 6.767827147797153e-10, -3.987482394639226e-12) end @inline function sin_kernel(x::Float32) y = @evalpoly(x^2, 0.11701072f0, -0.23228478f0, 0.014491233f0, -0.0004304645f0, 7.368049f-6) muladd(x, y, x) end @inline function cos_kernel(x::Float64) @evalpoly(x^2, 1.0, -0.6238564757231793, 0.06486615038362423, -0.002697811198135598, 6.010882091788964e-5, -8.333171603045294e-7, 7.876495580576226e-9, -5.351642293798961e-11) end @inline function cos_kernel(x::Float32) @evalpoly(x^2, 1.0f0, -0.6238565f0,0.06486606f0,-0.002697325f0, 5.904168f-5) end sincos360(x) = sincos(deg2rad(x)) @inline function sincos360(x::T) where T <: Union{Float32, Float64} isfinite(x) || return T(NaN), T(NaN) t = x - round(x * T(1/360)) * 360 # [-180, 180] a0 = @fastmath abs(t) # [0, 180] a1 = a0 <= 90 ? a0 : 180 - a0 # [0, 90] a2 = a1 <= 45 ? a1 : 90 - a1 # [0, 45] ax = a2 * T(1/64) sn, cs = sin_kernel(ax), cos_kernel(ax) sn1, cs1 = a1 === a2 ? (sn, cs) : (cs, sn) return @fastmath flipsign(sn1, t), flipsign(cs1, 90 - a0) end """ x, y = polar_to_cartesian(r, theta) Convert a polar coordinate of radius `r` and angle `theta` (in degrees) to a Cartesian coordinate. """ @inline function polar_to_cartesian(r, theta) y, x = r .* sincos360(theta) return x, y end # Linear interpolation in [a, b] where x is in [0,1], # or coerced to be if not. function lerp(x, a, b) a + (b - a) * max(min(x, one(x)), zero(x)) end clamp01(v::T) where {T<:Fractional} = ifelse(v < zero(T), zero(T), ifelse(v > oneunit(T), oneunit(T), v)) clamp01(v::T) where {T<:Union{Bool,N0f8,N0f16,N0f32,N0f64}} = v """ HexNotation{C, A, N} This is a private type for specifying the style of hex notations. It is not recommended to use this type and its derived types in user scripts or other packages, since they may change in the future without notice. # Arguments - `C`: a base colorant type. - `A`: a symbol (`:upper` or `:lower`) to specify the letter casing. - `N`: a total number of digits. """ abstract type HexNotation{C, A, N} end abstract type HexAuto <: HexNotation{Colorant,:upper,0} end abstract type HexShort{A} <: HexNotation{Colorant,A,0} end """ hex(c::Colorant) hex(c::Colorant, style::Symbol) Convert a color to a hexadecimal string, optionally specifying its style. # Arguments - `c`: a target color. - `style`: a symbol to specify the hexadecimal notation. Specifying the uppercase symbols means the return values are in uppercase. The following symbols are available: - `:AUTO`: notation automatically selected according to the type of `c` - `:RRGGBB`/`:rrggbb`: 6-digit opaque notation - `:AARRGGBB`/`:aarrggbb`: 8-digit notation with alpha at the head - `:RRGGBBAA`/`:rrggbbaa`: 8-digit notation with alpha at the tail - `:RGB`/`:rgb`/`:ARGB`/`:argb`/`:RGBA`/`:rgba`: 3-digit or 4-digit notation - `:S`/`:s`: short notation if available # Examples ```jldoctest; setup = :(using Colors) julia> hex(RGB(1,0.5,0)) "FF8000" julia> hex(ARGB(1,0.5,0,0.25)) "40FF8000" julia> hex(HSV(30,1.0,1.0), :AARRGGBB) "FFFF8000" julia> hex(ARGB(1,0.533,0,0.267), :rrggbbaa) "ff880044" julia> hex(ARGB(1,0.533,0,0.267), :rgba) "f804" julia> hex(ARGB(1,0.533,0,0.267), :S) "4F80" ``` !!!
compat "Colors v0.12" `style` requires at least Colors v0.12. """ hex(c::Colorant) = _hex(HexAuto, c) # there is no need to search the dictionary hex(c::Colorant, style::Symbol) = _hex(get(_hex_styles, style, HexAuto), c) const _hex_styles = Dict{Symbol, Type}( :AUTO => HexAuto, :S => HexShort{:upper}, :s => HexShort{:lower}, :RGB => HexNotation{RGB,:upper,3}, :rgb => HexNotation{RGB,:lower,3}, :ARGB => HexNotation{ARGB,:upper,4}, :argb => HexNotation{ARGB,:lower,4}, :RGBA => HexNotation{RGBA,:upper,4}, :rgba => HexNotation{RGBA,:lower,4}, :RRGGBB => HexNotation{RGB,:upper,6}, :rrggbb => HexNotation{RGB,:lower,6}, :AARRGGBB => HexNotation{ARGB,:upper,8}, :aarrggbb => HexNotation{ARGB,:lower,8}, :RRGGBBAA => HexNotation{RGBA,:upper,8}, :rrggbbaa => HexNotation{RGBA,:lower,8}, ) @inline function _hexstring(::Type{T}, u::U, itr) where {C, T <: HexNotation{C,:upper}, U <: Unsigned} s = UInt8(8sizeof(u) - 4) @inbounds String([b"0123456789ABCDEF"[((u << i) >> s) + 1] for i in itr]) end @inline function _hexstring(::Type{T}, u::U, itr) where {C, T <: HexNotation{C,:lower}, U <: Unsigned} s = UInt8(8sizeof(u) - 4) @inbounds String([b"0123456789abcdef"[((u << i) >> s) + 1] for i in itr]) end _to_uint32(c::Colorant) = reinterpret(UInt32, ARGB32(c)) _to_uint32(c::TransparentColor) = reinterpret(UInt32, alphacolor(RGB24(c), clamp01(alpha(c)))) _to_uint32(c::C) where C <: Union{AbstractRGB, TransparentRGB} = reinterpret(UInt32, ARGB32(correct_gamut(c))) _to_uint32(c::C) where C <: Union{AbstractGray, TransparentGray} = reinterpret(UInt32, AGray32(clamp01(gray(c)), clamp01(alpha(c)))) _hex(t::Type, c::Colorant) = _hex(t, _to_uint32(c)) _hex(::Type{HexAuto}, c::Color) = _hex(HexNotation{RGB,:upper,6}, c) _hex(::Type{HexAuto}, c::AlphaColor) = _hex(HexNotation{ARGB,:upper,8}, c) _hex(::Type{HexAuto}, c::ColorAlpha) = _hex(HexNotation{RGBA,:upper,8}, c) function _hex(::Type{HexShort{A}}, c::Colorant) where A u = _to_uint32(c) s = u == (u & 0x0F0F0F0F) * 0x11 c isa AlphaColor && return _hex(HexNotation{ARGB, A, s ? 4 : 8}, u) c isa ColorAlpha && return _hex(HexNotation{RGBA, A, s ? 4 : 8}, u) _hex(HexNotation{RGB, A, s ? 3 : 6}, u) end # for 3-digit or 4-digit notations function _hex(t::Type{T}, u::UInt32) where {C <:Union{RGB, ARGB, RGBA}, A, T <: HexNotation{C,A}} # To double the number of digits, we multiply each element by 17 (= 0x11). # Thus, we divide each element by 17 here, to halve the number of digits. u64 = UInt64(u) # TODO: use SIMD `move` with zero extension (e.g. vpmovzxbw) unpacked = ((u64 & 0xFF00FF00)<<24) | (u64 & 0x00FF00FF) # 0x00AA00GG00RR00BB # `all(x -> round(x / 17) == (x * 15 + 135) >> 8, 0:255) == true` q = muladd(unpacked, 0xF, 0x0087_0087_0087_0087) # 0x0Aaa0Ggg0Rrr0Bbb t <: HexNotation{ARGB} && return _hexstring(t, q, (0x04, 0x24, 0x14, 0x34)) t <: HexNotation{RGBA} && return _hexstring(t, q, (0x24, 0x14, 0x34, 0x04)) _hexstring(t, q, (0x24, 0x14, 0x34)) end # for 6-digit or 8-digit notations _hex(t::Type{HexNotation{ RGB,A,6}}, u::UInt32) where {A} = _hexstring(t, u, 0x8:0x4:0x1C) _hex(t::Type{HexNotation{ARGB,A,8}}, u::UInt32) where {A} = _hexstring(t, u, 0x0:0x4:0x1C) _hex(t::Type{HexNotation{RGBA,A,8}}, u::UInt32) where {A} = _hexstring(t, u, (0x8, 0xC, 0x10, 0x14, 0x18, 0x1C, 0x0, 0x4)) """ normalize_hue(h::Real) normalize_hue(c::Colorant) Returns a normalized (wrapped-around) hue angle, or a color with the normalized hue, in degrees, in [0, 360]. The normalization is essentially equivalent to `mod(h, 360)`, but is faster at the expense of some accuracy. 
""" @fastmath normalize_hue(h::Real) = max(muladd(floor(h / 360), -360, h), zero(h)) @fastmath normalize_hue(h::Float16) = Float16(normalize_hue(Float32(h))) normalize_hue(c::C) where {C <: Union{HSV, HSL, HSI}} = C(normalize_hue(c.h), c.s, comp3(c)) normalize_hue(c::C) where {Cb <: Union{HSV, HSL, HSI}, C <: Union{AlphaColor{Cb}, ColorAlpha{Cb}}} = C(normalize_hue(c.h), c.s, comp3(c), alpha(c)) normalize_hue(c::C) where C <: Union{LCHab, LCHuv} = C(c.l, c.c, normalize_hue(c.h)) normalize_hue(c::C) where {Cb <: Union{LCHab, LCHuv}, C <: Union{AlphaColor{Cb}, ColorAlpha{Cb}}} = C(c.l, c.c, normalize_hue(c.h), c.alpha) """ mean_hue(h1::Real, h2::Real) mean_hue(a::C, b::C) where {C <: Colorant} Compute the mean of two hue angles in degrees. If the inputs are HSV-like or Lab-like color objects, this will also return a hue, not a color. If one of the colors is achromatic, i.e. has zero saturation or chroma, the hue of the other color is returned instead of the mean. When the hue difference is exactly 180 degrees, which of the two mean hues is returned depends on the implementation. In other words, it may vary by the color type and version. """ function mean_hue(h1::T, h2::T) where {T <: Real} @fastmath hmin, hmax = minmax(h1, h2) d = 180 - normalize_hue(hmin - hmax - 180) F = typeof(zero(T) / 2) mh = muladd(F(0.5), d, hmin) return mh < 0 ? mh + 360 : mh end @inline function mean_hue(a::C, b::C) where {Cb <: Union{Lab, Luv}, C <: Union{Cb, AlphaColor{Cb}, ColorAlpha{Cb}}} a1, b1, a2, b2 = comp2(a), comp3(a), comp2(b), comp3(b) c1, c2 = chroma(a), chroma(b) k1 = c1 == zero(c1) ? oneunit(c1) : c1 k2 = c2 == zero(c2) ? oneunit(c2) : c2 dp = muladd(a1, a2, b1 * b2) # dot product dt = b1 * a2 - a1 * b2 if dp < zero(dp) # 90 deg rotations a1, b1 = @fastmath flipsign( b1, dt), flipsign(-a1, dt) a2, b2 = @fastmath flipsign(-b2, dt), flipsign( a2, dt) if dt == zero(dt) && a1 > zero(a1) k1, k2 = -k1, -k2 end end ma = muladd(k2, a1, k1 * a2) mb = muladd(k2, b1, k1 * b2) hue(Cb(zero(ma), ma, mb)) end function mean_hue(a::C, b::C) where {Cb <: Union{LCHab, LCHuv}, C <: Union{Cb, AlphaColor{Cb}, ColorAlpha{Cb}}} mean_hue(a.c == 0 ? b.h : a.h, b.c == 0 ? a.h : b.h) end function mean_hue(a::C, b::C) where {Cb <: Union{HSV, HSL, HSI}, C <: Union{Cb, AlphaColor{Cb}, ColorAlpha{Cb}}} mean_hue(a.s == 0 ? b.h : a.h, b.s == 0 ? a.h : b.h) end mean_hue(a, b) = mean_hue(promote(a, b)...) _delta_h_th(T) = zero(T) _delta_h_th(::Type{Float32}) = 0.1f0 _delta_h_th(::Type{Float64}) = 6.5e-3 function delta_h(a::C, b::C) where {Cb <: Union{Lab, Luv}, C <: Union{Cb, AlphaColor{Cb}, ColorAlpha{Cb}}} a1, b1, a2, b2 = comp2(a), comp3(a), comp2(b), comp3(b) c1, c2 = chroma(a), chroma(b) dp = muladd(a1, a2, b1 * b2) # dot product dt = b1 * a2 - a1 * b2 if _delta_h_th(typeof(dp)) * dp <= abs(dt) if dp isa Float32 || dp < 0 dh = @fastmath sqrt(2 * muladd(c1, c2, -dp)) else da, db, dc = a1 - a2, b1 - b2, c1 - c2 dh = @fastmath sqrt(max(muladd(dc, -dc, muladd(da, da, db^2)), 0)) end return copysign(dh, dt) else dtf = muladd(b1, a2, -a1 * b2) tn = dtf / dp # tan(Δh) = Δh + (1/3)*Δh^3 + (2/15)*Δh^5 + ... # 2sin(Δh/2) = Δh - (1/24)*Δh^3 + (1/1920)*Δh^5 - ... 
# ≈ tan(Δh) - (3/8)*tan(Δh)^3 + (31/128)*tan(Δh)^5 sn = tn * @evalpoly(tn^2, 1, oftype(tn, -3/8), oftype(tn, 31/128)) # 2sin(Δh/2) return @fastmath sqrt(c1 * c2) * sn end end function delta_h(a::C, b::C) where {Cb <: Union{LCHab, LCHuv}, C <: Union{Cb, AlphaColor{Cb}, ColorAlpha{Cb}}} dh0 = hue(a) - hue(b) sh = muladd(dh0, oftype(dh0, 1 / 360), oftype(dh0, 0.5)) d = @fastmath sh - floor(sh) - oftype(sh, 0.5) 2 * sqrt(chroma(a) * chroma(b)) * sinpi(d) end delta_h(a, b) = delta_h(promote(a, b)...) @inline function delta_c(a::C, b::C) where {Cb <: Union{Lab{Float32}, Luv{Float32}}, C <: Union{Cb, AlphaColor{Cb}, ColorAlpha{Cb}}} n1, m1 = @fastmath minmax(comp2(a)^2, comp3(a)^2) n2, m2 = @fastmath minmax(comp2(b)^2, comp3(b)^2) ((m1 - m2) + (n1 - n2)) / @fastmath(max(chroma(a) + chroma(b), floatmin(Float32))) end delta_c(a, b) = chroma(a) - chroma(b) """ weighted_color_mean(w1, c1, c2) Returns the color `w1*c1 + (1-w1)*c2` that is the weighted mean of `c1` and `c2`, where `c1` has a weight 0 ≤ `w1` ≤ 1. """ weighted_color_mean(w1::Real, c1::Colorant, c2::Colorant) = _weighted_color_mean(w1, c1, c2) function weighted_color_mean(w1::Real, c1::C, c2::C) where {Cb <: Union{HSV, HSL, HSI, LCHab, LCHuv}, C <: Union{Cb, AlphaColor{Cb}, ColorAlpha{Cb}}} normalize_hue(_weighted_color_mean(w1, c1, c2)) end function _weighted_color_mean(w1::Real, c1::Colorant{T1}, c2::Colorant{T2}) where {T1,T2} @fastmath min(w1, oneunit(w1) - w1) >= zero(w1) || throw(DomainError(w1, "`w1` must be in [0, 1]")) w2 = oneunit(w1) - w1 mapc((x, y) -> convert(promote_type(T1, T2), muladd(w1, x, w2 * y)), c1, c2) end function _weighted_color_mean(w1::Integer, c1::C, c2::C) where C <: Colorant (w1 & 0b1) === w1 || throw(DomainError(w1, "`w1` must be in [0, 1]")) w1 == zero(w1) ? c2 : c1 end """ weighted_color_mean(weights, colors) Returns the weighted mean of the given collection `colors` with `weights`. This is semantically equivalent to the calculation of `sum(weights .* colors)`. # Examples ```jldoctest; setup = :(using Colors, FixedPointNumbers) julia> rgbs = (RGB(1, 0, 0), RGB(0, 1, 0), RGB(0, 0, 1)); julia> weighted_color_mean([0.2, 0.2, 0.6], rgbs) RGB{N0f8}(0.2,0.2,0.6) julia> weighted_color_mean(0.5:-0.25:0.0, RGB{Float64}.(rgbs)) RGB{Float64}(0.5,0.25,0.0) ``` !!! compat "Colors v0.13" `weighted_color_mean` with collection or iterator inputs requires Colors v0.13 or later. !!! note In a cylindrical color space such as HSV, a weighted mean of more than three colors is generally not meaningful. """ function weighted_color_mean(weights, colors) if length(weights) != length(colors) throw(DimensionMismatch("`weights` and `colors` must be the same length.")) end w1, c1 = first(weights), first(colors) acc = mapc(_ -> zero(promote_type(typeof(w1), Float32)), c1) for (w, c) in zip(weights, colors) acc = mapc((a, v) -> a + w * v, acc, c) end convert(typeof(c1), acc) end """ range(start::T; stop::T, length=100) where T<:Colorant range(start::T, stop::T; length=100) where T<:Colorant Generates N (=`length`) > 2 colors in a linearly interpolated ramp from `start` to `stop`, inclusive, returning an `Array` of colors. !!! compat "Julia 1.1" `stop` as a positional argument requires at least Julia 1.1. """ function range(start::T; stop::T, length::Integer=100) where T<:Colorant return T[weighted_color_mean(w1, start, stop) for w1 in range(1.0,stop=0.0,length=length)] end if VERSION >= v"1.1" range(start::T, stop::T; kwargs...) where T<:Colorant = range(start; stop=stop, kwargs...)
end if VERSION < v"1.0.0-" import Base: linspace Base.@deprecate linspace(start::Colorant, stop::Colorant, n::Integer=100) range(start, stop=stop, length=n) end # Double quadratic Bezier curve function bezier(t::T, p0::T, p2::T, q0::T, q1::T, q2::T) where T<:Real B(t,a,b,c)=a*(1.0-t)^2 + 2.0b*(1.0-t)*t + c*t^2 if t <= 0.5 return B(2.0t, p0, q0, q1) else #t > 0.5 return B(2.0(t-0.5), q1, q2, p2) end end # Inverse double quadratic Bezier curve function inv_bezier(t::T, p0::T, p2::T, q0::T, q1::T, q2::T) where T<:Real invB(t,a,b,c)=(a-b+sqrt(b^2-a*c+(a-2.0b+c)*t))/(a-2.0b+c) if t < q1 return 0.5*invB(t,p0,q0,q1) else #t >= q1 return 0.5*invB(t,q1,q2,p2)+0.5 end end
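# Illustrative sanity check (ours, not part of the library source): on each
# half of the double quadratic curve, `inv_bezier` recovers the parameter
# given to `bezier`. The control values below are arbitrary.
let p0 = 0.0, p2 = 1.0, q0 = 0.1, q1 = 0.5, q2 = 0.9
    y = bezier(0.25, p0, p2, q0, q1, q2)      # first half (t <= 0.5); y == 0.175
    @assert inv_bezier(y, p0, p2, q0, q1, q2) ≈ 0.25
end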
using ErrorAnalysis using Base.Test # write your own tests here @testset "Generic" begin x = 3.1234; y = 3.021; A = [0.531844 0.717453; 0.552965 0.421109]; B = [0.273785 0.212329; 0.175248 0.598923]; @test !iswithinerrorbars(x,y,0.1,true) @test iswithinerrorbars(x,y,0.11,true) == true @test !iswithinerrorbars(A,B,fill(0.01,size(A)...), true) @test iswithinerrorbars(A,B,fill(0.6,size(A)...), true) end
State Before: R : Type u S : Type v k : Type y A : Type z a b : R n : ℕ inst✝² : CommRing R inst✝¹ : IsDomain R inst✝ : NormalizationMonoid R p : R[X] hp : Monic p ⊢ ↑normalize p = p State After: no goals Tactic: simp only [Polynomial.coe_normUnit, normalize_apply, hp.leadingCoeff, normUnit_one, Units.val_one, Polynomial.C.map_one, mul_one]
Load LFindLoad. From lfind Require Import LFind. From QuickChick Require Import QuickChick. From adtind Require Import goal33. Derive Show for natural. Derive Arbitrary for natural. Instance Dec_Eq_natural : Dec_Eq natural. Proof. dec_eq. Qed. Lemma conj17eqsynthconj6 : forall (lv0 : natural) (lv1 : natural), (@eq natural (plus lv0 lv1) (plus lv1 lv0)). Admitted. QuickChick conj17eqsynthconj6.
State Before: α : Type u_3 E : Type ?u.107083 F : Type ?u.107086 F' : Type ?u.107089 G : Type ?u.107092 𝕜 : Type u_1 p : ℝ≥0∞ inst✝⁹ : NormedAddCommGroup E inst✝⁸ : NormedSpace ℝ E inst✝⁷ : NormedAddCommGroup F inst✝⁶ : NormedSpace ℝ F inst✝⁵ : NormedAddCommGroup F' inst✝⁴ : NormedSpace ℝ F' inst✝³ : NormedAddCommGroup G m : MeasurableSpace α μ : Measure α β : Type u_2 inst✝² : SeminormedAddCommGroup β T T' : Set α → β C C' : ℝ inst✝¹ : NormedField 𝕜 inst✝ : NormedSpace 𝕜 β hT : DominatedFinMeasAdditive μ T C c : 𝕜 ⊢ DominatedFinMeasAdditive μ (fun s => c • T s) (‖c‖ * C) State After: α : Type u_3 E : Type ?u.107083 F : Type ?u.107086 F' : Type ?u.107089 G : Type ?u.107092 𝕜 : Type u_1 p : ℝ≥0∞ inst✝⁹ : NormedAddCommGroup E inst✝⁸ : NormedSpace ℝ E inst✝⁷ : NormedAddCommGroup F inst✝⁶ : NormedSpace ℝ F inst✝⁵ : NormedAddCommGroup F' inst✝⁴ : NormedSpace ℝ F' inst✝³ : NormedAddCommGroup G m : MeasurableSpace α μ : Measure α β : Type u_2 inst✝² : SeminormedAddCommGroup β T T' : Set α → β C C' : ℝ inst✝¹ : NormedField 𝕜 inst✝ : NormedSpace 𝕜 β hT : DominatedFinMeasAdditive μ T C c : 𝕜 s : Set α hs : MeasurableSet s hμs : ↑↑μ s < ⊤ ⊢ ‖(fun s => c • T s) s‖ ≤ ‖c‖ * C * ENNReal.toReal (↑↑μ s) Tactic: refine' ⟨hT.1.smul c, fun s hs hμs => _⟩ State Before: α : Type u_3 E : Type ?u.107083 F : Type ?u.107086 F' : Type ?u.107089 G : Type ?u.107092 𝕜 : Type u_1 p : ℝ≥0∞ inst✝⁹ : NormedAddCommGroup E inst✝⁸ : NormedSpace ℝ E inst✝⁷ : NormedAddCommGroup F inst✝⁶ : NormedSpace ℝ F inst✝⁵ : NormedAddCommGroup F' inst✝⁴ : NormedSpace ℝ F' inst✝³ : NormedAddCommGroup G m : MeasurableSpace α μ : Measure α β : Type u_2 inst✝² : SeminormedAddCommGroup β T T' : Set α → β C C' : ℝ inst✝¹ : NormedField 𝕜 inst✝ : NormedSpace 𝕜 β hT : DominatedFinMeasAdditive μ T C c : 𝕜 s : Set α hs : MeasurableSet s hμs : ↑↑μ s < ⊤ ⊢ ‖(fun s => c • T s) s‖ ≤ ‖c‖ * C * ENNReal.toReal (↑↑μ s) State After: α : Type u_3 E : Type ?u.107083 F : Type ?u.107086 F' : Type ?u.107089 G : Type ?u.107092 𝕜 : Type u_1 p : ℝ≥0∞ inst✝⁹ : NormedAddCommGroup E inst✝⁸ : NormedSpace ℝ E inst✝⁷ : NormedAddCommGroup F inst✝⁶ : NormedSpace ℝ F inst✝⁵ : NormedAddCommGroup F' inst✝⁴ : NormedSpace ℝ F' inst✝³ : NormedAddCommGroup G m : MeasurableSpace α μ : Measure α β : Type u_2 inst✝² : SeminormedAddCommGroup β T T' : Set α → β C C' : ℝ inst✝¹ : NormedField 𝕜 inst✝ : NormedSpace 𝕜 β hT : DominatedFinMeasAdditive μ T C c : 𝕜 s : Set α hs : MeasurableSet s hμs : ↑↑μ s < ⊤ ⊢ ‖c • T s‖ ≤ ‖c‖ * C * ENNReal.toReal (↑↑μ s) Tactic: dsimp only State Before: α : Type u_3 E : Type ?u.107083 F : Type ?u.107086 F' : Type ?u.107089 G : Type ?u.107092 𝕜 : Type u_1 p : ℝ≥0∞ inst✝⁹ : NormedAddCommGroup E inst✝⁸ : NormedSpace ℝ E inst✝⁷ : NormedAddCommGroup F inst✝⁶ : NormedSpace ℝ F inst✝⁵ : NormedAddCommGroup F' inst✝⁴ : NormedSpace ℝ F' inst✝³ : NormedAddCommGroup G m : MeasurableSpace α μ : Measure α β : Type u_2 inst✝² : SeminormedAddCommGroup β T T' : Set α → β C C' : ℝ inst✝¹ : NormedField 𝕜 inst✝ : NormedSpace 𝕜 β hT : DominatedFinMeasAdditive μ T C c : 𝕜 s : Set α hs : MeasurableSet s hμs : ↑↑μ s < ⊤ ⊢ ‖c • T s‖ ≤ ‖c‖ * C * ENNReal.toReal (↑↑μ s) State After: α : Type u_3 E : Type ?u.107083 F : Type ?u.107086 F' : Type ?u.107089 G : Type ?u.107092 𝕜 : Type u_1 p : ℝ≥0∞ inst✝⁹ : NormedAddCommGroup E inst✝⁸ : NormedSpace ℝ E inst✝⁷ : NormedAddCommGroup F inst✝⁶ : NormedSpace ℝ F inst✝⁵ : NormedAddCommGroup F' inst✝⁴ : NormedSpace ℝ F' inst✝³ : NormedAddCommGroup G m : MeasurableSpace α μ : Measure α β : Type u_2 inst✝² : SeminormedAddCommGroup β T T' : Set α → β C 
C' : ℝ inst✝¹ : NormedField 𝕜 inst✝ : NormedSpace 𝕜 β hT : DominatedFinMeasAdditive μ T C c : 𝕜 s : Set α hs : MeasurableSet s hμs : ↑↑μ s < ⊤ ⊢ ‖c‖ * ‖T s‖ ≤ ‖c‖ * (C * ENNReal.toReal (↑↑μ s)) Tactic: rw [norm_smul, mul_assoc] State Before: α : Type u_3 E : Type ?u.107083 F : Type ?u.107086 F' : Type ?u.107089 G : Type ?u.107092 𝕜 : Type u_1 p : ℝ≥0∞ inst✝⁹ : NormedAddCommGroup E inst✝⁸ : NormedSpace ℝ E inst✝⁷ : NormedAddCommGroup F inst✝⁶ : NormedSpace ℝ F inst✝⁵ : NormedAddCommGroup F' inst✝⁴ : NormedSpace ℝ F' inst✝³ : NormedAddCommGroup G m : MeasurableSpace α μ : Measure α β : Type u_2 inst✝² : SeminormedAddCommGroup β T T' : Set α → β C C' : ℝ inst✝¹ : NormedField 𝕜 inst✝ : NormedSpace 𝕜 β hT : DominatedFinMeasAdditive μ T C c : 𝕜 s : Set α hs : MeasurableSet s hμs : ↑↑μ s < ⊤ ⊢ ‖c‖ * ‖T s‖ ≤ ‖c‖ * (C * ENNReal.toReal (↑↑μ s)) State After: no goals Tactic: exact mul_le_mul le_rfl (hT.2 s hs hμs) (norm_nonneg _) (norm_nonneg _)
State Before: α : Type u_1 M : Type u N : Type v G : Type w H : Type x A : Type y B : Type z R : Type u₁ S : Type u₂ inst✝ : DivisionMonoid α a✝ b a : α ⊢ a⁻¹ ^ 0 = (a ^ 0)⁻¹ State After: no goals Tactic: rw [pow_zero, pow_zero, inv_one] State Before: α : Type u_1 M : Type u N : Type v G : Type w H : Type x A : Type y B : Type z R : Type u₁ S : Type u₂ inst✝ : DivisionMonoid α a✝ b a : α n : ℕ ⊢ a⁻¹ ^ (n + 1) = (a ^ (n + 1))⁻¹ State After: no goals Tactic: rw [pow_succ', pow_succ, inv_pow _ n, mul_inv_rev]
module Effect.Exception import Effects import System import Control.IOExcept data Exception : Type -> Effect where Raise : a -> { () } Exception a b instance Handler (Exception a) Maybe where handle _ (Raise e) k = Nothing instance Show a => Handler (Exception a) IO where handle _ (Raise e) k = do print e believe_me (exit 1) instance Handler (Exception a) (IOExcept a) where handle _ (Raise e) k = ioM (return (Left e)) instance Handler (Exception a) (Either a) where handle _ (Raise e) k = Left e EXCEPTION : Type -> EFFECT EXCEPTION t = MkEff () (Exception t) raise : a -> { [EXCEPTION a ] } Eff m b raise err = Raise err -- TODO: Catching exceptions mid program? -- probably need to invoke a new interpreter -- possibly add a 'handle' to the Eff language so that an alternative -- handler can be introduced mid interpretation?
Kurzanov and colleagues in 2003 designated six teeth from Siberia as Allosaurus sp. (meaning the authors found the specimens to be most like those of Allosaurus, but did not or could not assign a species). Also, reports of Allosaurus in Shanxi, China go back to at least 1982.
[STATEMENT] lemma (in PO) dualPO: "dual cl \<in> PartialOrder" [PROOF STATE] proof (prove) goal (1 subgoal): 1. dual cl \<in> PartialOrder [PROOF STEP] using cl_po [PROOF STATE] proof (prove) using this: cl \<in> PartialOrder goal (1 subgoal): 1. dual cl \<in> PartialOrder [PROOF STEP] by (simp add: PartialOrder_def dual_def)
A bespoke range of farmhouse furniture by Blue Tree Margaret River, all hand-made with natural timbers. Each product has its own personality and is perfectly imperfect, using all natural materials. We also stock other brands such as PAPAYA, FRENCH COUNTRY and SWING. Jeremy has been restoring antiques and building furniture for over 25 years. His love of working with timber and creating bespoke pieces for clients has left him with a wide array of styles and abilities. Now, finally, we are showcasing his work and products, bringing them to you well priced, all one-offs and unique: barn doors, dungeon doors, tapas and chopping boards, reclaimed timber furniture, stools and light fittings. Products can be purchased online and delivered for FREE locally, or you can order anything directly with Jeremy to suit your own needs. With Jeremy's work once featured in galleries and renowned throughout the Southwest, we are now bringing his skill to you, along with other stunning brands, to offer an affordable farmhouse style that hits the brief in the current trend.
function sphere_voronoi_test ( ) %*****************************************************************************80 % %% SPHERE_VORONOI_TEST tests the SPHERE_VORONOI library. % % Licensing: % % This code is distributed under the GNU LGPL license. % % Modified: % % 30 April 2010 % % Author: % % John Burkardt % timestamp ( ); fprintf ( 1, '\n' ); fprintf ( 1, 'SPHERE_VORONOI_TEST\n' ); fprintf ( 1, ' MATLAB version:\n' ); fprintf ( 1, ' Test the SPHERE_VORONOI library.\n' ); sphere_voronoi_test01 ( ); sphere_voronoi_test02 ( ); sphere_voronoi_test03 ( ); % % Terminate. % fprintf ( 1, '\n' ); fprintf ( 1, 'SPHERE_VORONOI_TEST:\n' ); fprintf ( 1, ' Normal end of execution.\n' ); fprintf ( 1, '\n' ); timestamp ( ); return end
theorem Arzela_Ascoli: fixes \<F> :: "[nat,'a::euclidean_space] \<Rightarrow> 'b::{real_normed_vector,heine_borel}" assumes "compact S" and M: "\<And>n x. x \<in> S \<Longrightarrow> norm(\<F> n x) \<le> M" and equicont: "\<And>x e. \<lbrakk>x \<in> S; 0 < e\<rbrakk> \<Longrightarrow> \<exists>d. 0 < d \<and> (\<forall>n y. y \<in> S \<and> norm(x - y) < d \<longrightarrow> norm(\<F> n x - \<F> n y) < e)" obtains g k where "continuous_on S g" "strict_mono (k :: nat \<Rightarrow> nat)" "\<And>e. 0 < e \<Longrightarrow> \<exists>N. \<forall>n x. n \<ge> N \<and> x \<in> S \<longrightarrow> norm(\<F>(k n) x - g x) < e"
State Before: C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f ⊢ IsZero X ↔ f = 0 State After: C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f ⊢ 𝟙 X = 0 ↔ f = 0 Tactic: rw [iff_id_eq_zero] State Before: C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f ⊢ 𝟙 X = 0 ↔ f = 0 State After: case mp C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f ⊢ 𝟙 X = 0 → f = 0 case mpr C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f ⊢ f = 0 → 𝟙 X = 0 Tactic: constructor State Before: case mp C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f ⊢ 𝟙 X = 0 → f = 0 State After: case mp C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f h : 𝟙 X = 0 ⊢ f = 0 Tactic: intro h State Before: case mp C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f h : 𝟙 X = 0 ⊢ f = 0 State After: no goals Tactic: rw [← Category.id_comp f, h, zero_comp] State Before: case mpr C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f ⊢ f = 0 → 𝟙 X = 0 State After: case mpr C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f h : f = 0 ⊢ 𝟙 X = 0 Tactic: intro h State Before: case mpr C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f h : f = 0 ⊢ 𝟙 X = 0 State After: case mpr C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f h : f = 0 ⊢ f ≫ retraction f = 0 Tactic: rw [← IsSplitMono.id f] State Before: case mpr C : Type u inst✝³ : Category C D : Type u' inst✝² : Category D inst✝¹ : HasZeroMorphisms C X Y : C f : X ⟶ Y inst✝ : IsSplitMono f h : f = 0 ⊢ f ≫ retraction f = 0 State After: no goals Tactic: simp [h]
\subsection{Bundle projection} This is the projection sending any point on any of the fibres to the underlying base manifold.
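In symbols (a standard formulation, added for clarity; the names $E$, $M$ and $F_x$ are not fixed in the original text): writing $E$ for the total space and $M$ for the base manifold, the bundle projection is the map \[ \pi : E \rightarrow M, \qquad \pi(p) = x \quad \text{for every point } p \text{ in the fibre over } x, \] so each fibre is recovered as a preimage, $F_x = \pi^{-1}(x)$.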
#ifndef PRECONDITIONED_CONJUGATE_GRADIENT_H #define PRECONDITIONED_CONJUGATE_GRADIENT_H #include <Eigen/Dense> #include "mtao/solvers/linear/linear.hpp" #include "mtao/solvers/cholesky/ldlt.hpp" #include <iostream> namespace mtao::solvers::linear { template <typename MatrixType, typename VectorType, typename Preconditioner> struct PCGSolver; template <typename MatrixType, typename VectorType, typename Preconditioner> struct solver_traits<PCGSolver< MatrixType, VectorType, Preconditioner> > { using Scalar = typename VectorType::Scalar; using Matrix = MatrixType; using Vector = VectorType; }; template <typename MatrixType, typename VectorType, typename Preconditioner> struct PCGSolver: public IterativeLinearSolver<PCGSolver< MatrixType, VectorType, Preconditioner> > { typedef MatrixType Matrix; typedef VectorType Vector; typedef typename Vector::Scalar Scalar; using Base = IterativeLinearSolver<PCGSolver< MatrixType, VectorType, Preconditioner> >; using Base::A ; using Base::b ; using Base::x ; using Base::Base; void compute() { precond = Preconditioner(A()); r = b()-A()*x(); precond->solve(r,z); p = z; Ap = A()*p; rdz = r.dot(z); } Scalar error() { return r.template lpNorm<Eigen::Infinity>(); } void step() { alpha = (rdz)/(p.dot(Ap)); x()+=alpha * p; r-=alpha * Ap; precond->solve(r,z); beta=1/rdz; rdz = r.dot(z); beta*=rdz; p=z+beta*p; Ap=A()*p; } private: Vector r; Vector z; Vector p; Vector Ap; Scalar rdz; Scalar alpha, beta; std::optional<Preconditioner> precond; }; template <typename Preconditioner, typename Matrix, typename Vector> void PCGSolve(const Matrix & A, const Vector & b, Vector & x) { auto residual = (b-A*x).template lpNorm<Eigen::Infinity>(); auto solver = PCGSolver<Matrix,Vector, Preconditioner>(5*A.rows(), 1e-5*residual); //auto solver = IterativeLinearSolver<PreconditionedConjugateGradientCapsule<Matrix,Vector, Preconditioner> >(A.rows(), 1e-5); solver.solve(A,b,x); x = solver.x(); } template <typename Matrix, typename Vector> void DenseCholeskyPCGSolve(const Matrix & A, const Vector & b, Vector & x) { PCGSolve<cholesky::DenseLDLT_MIC0<std::decay_t<decltype(A)>> >(A,b,x); } template <typename Matrix, typename Vector> void SparseCholeskyPCGSolve(const Matrix & A, const Vector & b, Vector & x) { PCGSolve<cholesky::SparseLDLT_MIC0<Matrix,Vector> >(A,b,x); } template <typename Matrix, typename Vector> void CholeskyPCGSolve(const Matrix & A, const Vector & b, Vector & x) { PCGSolve<cholesky::LDLT_MIC0<Matrix,Vector> >(A,b,x); } } #endif
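// Usage sketch (illustrative only; assumes Eigen dense types and a symmetric
// positive definite system, which PCG requires):
//
//   Eigen::MatrixXd A = /* SPD matrix */;
//   Eigen::VectorXd b = /* right-hand side */;
//   Eigen::VectorXd x = Eigen::VectorXd::Zero(b.size());
//   mtao::solvers::linear::DenseCholeskyPCGSolve(A, b, x);
//   // x now approximates the solution of A x = b, using an incomplete
//   // LDLT (MIC0) factorization as the preconditioner.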
# Numerical Recipes Workshop 3

For the week of 7-11 October, 2019

This notebook will provide a practical example of root finding for a nonlinear equation.

## The temperature of interstellar dust grains

Understanding the nature of interstellar dust grains is vital to many areas of astronomy, from star formation to measuring the cosmic microwave background (CMB). Many observed properties of interstellar dust are derived from knowing its temperature. In general, dust is well mixed with the gas in the interstellar medium (ISM), but the two are rarely at the same temperature. The timescales for dust-related processes are very short, so the dust temperature can be calculated by assuming it is in thermal equilibrium at all times. Then, one only needs to balance the various heating and cooling processes, i.e., to find the root of the energy loss equation:

$
\begin{align}
\large
\frac{de}{dt} = \Gamma(T_{dust}) - \Lambda(T_{dust}),
\end{align}
$

where $\Gamma$ and $\Lambda$ are the dust heating and cooling rates, respectively. Including the relevant heating and cooling processes, this becomes

$
\begin{align}
\large
\frac{de}{dt} = 4 \sigma T_{CMB}^{4} \kappa_{gr} + \Gamma_{isrf} + \Lambda_{gas/grain}(T_{dust}, T_{gas}, n_{H}) - 4 \sigma T_{dust}^{4} \kappa_{gr},
\end{align}
$

where $\sigma$ is the Stefan-Boltzmann constant, $T_{CMB}$ is the temperature of the CMB, $\kappa_{gr}$ is the dust opacity, $\Gamma_{isrf}$ is the heating from the interstellar radiation field, and $\Lambda_{gas/grain}$ is the rate of heat exchange via collisions between the gas and dust. The first term represents heating from the CMB, the second is heating from nearby stars, the third term transfers heat from the hotter to the cooler matter, and the final term is the cooling of the dust by thermal radiation. The opacity of the dust can be approximated by the piece-wise power-law:

$
\begin{align}
\large
\kappa_{gr}(T_{dust}) \propto \left\{ \begin{array}{ll} T_{dust}^{2} & , T_{dust} < 200 K,\\ \textrm{constant} & , 200\ K < T_{dust} < 1500\ K,\\ T_{dust}^{-12} & , T_{dust} > 1500\ K. \end{array} \right.
\end{align}
$

The gas/grain heat transfer rate is given by:

$
\begin{align}
\large
\Lambda_{gas/grain} = 7.2\times10^{-8} n_{H} \left(\frac{T_{gas}}{1000 K}\right)^{\frac{1}{2}} (1 - 0.8 e^{-75/T_{gas}}) (T_{gas} - T_{dust})\ [erg/s/g],
\end{align}
$

where $n_{H}$ is the number density of the gas.

## Calculating dust temperatures with root finding

The above equations have been coded below with the full heat balance equation implemented as the `gamma_grain` function. Do `help(gamma_grain)` to see how it can be called. Assuming a constant gas temperature, $T_{gas}$, and gas density, $n_{H}$, calculate the dust temperature, $T_{dust}$, using bisection, the secant method, and the Scipy implementation of [Brent's method](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html#scipy.optimize.brentq). Implement your own bisection and secant methods and count the number of steps to reach a solution.

```python
import numpy as np

mh = 1.673735e-24 # g

# Stefan-Boltzmann constant
sigma_b = 5.670373e-5 # erg cm^−2 s^−1 K^−4

def gas_grain(Tgas):
    """
    Return gas/grain heat transfer rate coefficient.
    """
    grain_coef = 1.2e-31 * 1.0e3**-0.5 / mh
    gasgra = grain_coef * Tgas**0.5 * \
        (1.0 - (0.8 * np.exp(-75.0 / Tgas)))
    return gasgra

def kappa_grain(Tdust):
    """
    Return grain mean opacity.
    """
    kgr1 = 4.0e-4
    kgr200 = 16.0
    T_subl = 1500.
    Tdust = np.asarray(Tdust)
    kgr = np.zeros(Tdust.size)
    f1 = Tdust < 200
    if f1.any():
        kgr[f1] = kgr1 * Tdust[f1]**2
    kgr[(Tdust >= 200) & (Tdust < T_subl)] = kgr200
    f2 = Tdust >= T_subl
    if f2.any():
        kgr[f2] = kgr200 * (Tdust[f2] / T_subl)**-12
    return kgr

def gamma_isrf():
    """
    Interstellar radiation field heating rate coefficient.
    """
    return 4.154682e-22 / mh

def gamma_grain(Tdust, Tgas, nh, isrf=1.7, z=0):
    """
    Return the grain heating rate.

    Parameters
    ----------
    Tdust : float
        dust temperature in K
    Tgas : float
        gas temperature in K
    nh : float
        Hydrogen number density in cm^-3
    isrf : float, optional
        interstellar radiation field strength in Habing units
        default: 1.7 (typical for local interstellar medium)
    z : float, optional
        current redshift, used to set the temperature of the
        Cosmic Microwave Background.
        default: 0
    """
    TCMB = 2.73 * (1 + z)
    my_isrf = isrf * gamma_isrf()
    return my_isrf + \
        4 * sigma_b * kappa_grain(Tdust) * (TCMB**4 - Tdust**4) + \
        (gas_grain(Tgas) * nh * (Tgas - Tdust))
```

```python
### Tgas and nH values
Tgas = 100 # K
nH = 1e3 # cm^-3
```

### Bisection

See if you can implement the bisection method to calculate $T_{dust}$ for a relative tolerance of $10^{-4}$, where the relative tolerance is given by:

$
\begin{align}
rtol = \left|\frac{val_{new} - val_{old}}{val_{old}}\right|.
\end{align}
$

A sensible initial bound is $[T_{CMB}, T_{gas}]$, where $T_{CMB} = 2.73 K$ in the local Universe.

```python
def bisection(low, high, tol):
    # Note: this iterates until the bracket width |high - low| falls
    # below `tol`, i.e. an absolute tolerance on the bracket.
    while (np.abs(high - low)) >= tol:
        midpoint = (high + low) / 2.0
        above = gamma_grain(high, Tgas, nH) * gamma_grain(midpoint, Tgas, nH)
        below = gamma_grain(midpoint, Tgas, nH) * gamma_grain(low, Tgas, nH)
        if above < 0:
            low = midpoint
        elif below < 0:
            high = midpoint
    return midpoint

answer = bisection(2.73, Tgas, 1e-4)
print(answer)
```

    40.85661255836487

### Secant Method

See if you can implement the secant method for the same tolerance and initial guesses.

```python
def secant(high, low, tol):
    # Secant update: x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})),
    # iterated until successive iterates agree to within `tol`.
    # gamma_grain returns a length-1 array for scalar input, so cast to float.
    x0, x1 = low, high
    f0 = float(gamma_grain(x0, Tgas, nH))
    while np.abs(x1 - x0) >= tol:
        f1 = float(gamma_grain(x1, Tgas, nH))
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0 = f1
    return x1
```

### Brent's Method

Use [Brent's method](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html#scipy.optimize.brentq) to calculate $T_{dust}$. After that, try calculating $T_{dust}$ for a range of $n_{H}$ from $1\ cm^{-3}$ to $10^{13} cm^{-3}$ and plotting $T_{dust}$ vs. $n_{H}$.

```python
import scipy.optimize as opt
```

```python
# Solve for Tdust at the single (Tgas, nH) point defined above.
# Cast to float: gamma_grain returns a length-1 array for scalar Tdust.
Tdust_point = opt.brentq(lambda Td: float(gamma_grain(Td, Tgas, nH)),
                         2.73, Tgas, rtol=1e-4)
```

```python
# Try a range of nH values.
nH = np.logspace(0, 13, 100)
```

```python
# Solve for Tdust at each density; the root stays bracketed by [2.73, Tgas].
Tdust = np.array([opt.brentq(lambda Td: float(gamma_grain(Td, Tgas, n)),
                             2.73, Tgas, rtol=1e-4)
                  for n in nH])
```

```python
from matplotlib import pyplot as plt
%matplotlib inline
```

```python
plt.rcParams['figure.figsize'] = (10, 6)
plt.rcParams['font.size'] = 14
```

```python
# plot Tdust vs. nH
plt.semilogx(nH, Tdust)
plt.xlabel('$n_{H}$ [$cm^{-3}$]')
plt.ylabel('$T_{dust}$ [K]')
```
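As a quick sanity check on efficiency (an addition to the worksheet, not part of the original exercises), we can count how many times a solver evaluates `gamma_grain` by wrapping it in a counting closure; Brent's method typically needs far fewer evaluations than bisection for the same tolerance.

```python
# Added sketch: count gamma_grain evaluations used by Brent's method.
def counted(f):
    def wrapper(x):
        wrapper.calls += 1
        return f(x)
    wrapper.calls = 0
    return wrapper

g = counted(lambda Td: float(gamma_grain(Td, 100, 1e3)))
opt.brentq(g, 2.73, 100, rtol=1e-4)
print("gamma_grain evaluations (Brent):", g.calls)
```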
State Before: α : Type u_1 β : Type u_2 γ : Type ?u.114268 f : α → β t : α ⊢ map f ∘ cons t = cons (f t) ∘ map f State After: case h α : Type u_1 β : Type u_2 γ : Type ?u.114268 f : α → β t : α x✝ : Multiset α ⊢ (map f ∘ cons t) x✝ = (cons (f t) ∘ map f) x✝ Tactic: ext State Before: case h α : Type u_1 β : Type u_2 γ : Type ?u.114268 f : α → β t : α x✝ : Multiset α ⊢ (map f ∘ cons t) x✝ = (cons (f t) ∘ map f) x✝ State After: no goals Tactic: simp
Require Import Circuit. Import Circuit_Tactics. Require Import List. Import ListNotations. Open Scope list_scope. Require Import Monad. Require Import PeanoNat. Infix "<?" := (Nat.ltb). Existing Instance event_eq_dec. Existing Instance latch_eq_dec. Require Import Coq.Logic.FunctionalExtensionality. Require Import Coq.Program.Equality. Require Import Omega. (** * Definition of fall-decoupled marked graph *) Section FallDecoupled. Context (even odd : Set). Context `{Heven : eq_dec even} `{Hodd : eq_dec odd}. Variable c : circuit even odd. Variable init_st : state (latch even odd). Inductive fd_place : event (latch even odd) bool -> event (latch even odd) bool -> Set := | latch_fall l : fd_place (Rise l) (Fall l) | latch_rise l : fd_place (Fall l) (Rise l) (* E+ → O+ for (E,O) *) (* O+ → E+ for (O,E) *) | neighbor_rise_rise l l' : neighbor c l l' -> fd_place (Rise l) (Rise l') (* O- → E+ for (E,O) *) (* E- → O+ for (O,E) *) | neighbor_fall_rise l l' : neighbor c l l' -> fd_place (Fall l') (Rise l) . Definition fall_decoupled : marked_graph (event (latch even odd) bool) := {| place := fd_place ; init_marking := fun t1 t2 p => match p with | neighbor_rise_rise (Odd _) (Even _) _ => 1 | latch_rise (Even _) => 1 | latch_fall (Odd _) => 1 | _ => 0 end |}. (** * Specialized is_enabled predicate *) Inductive is_enabled_FD : event (latch even odd) bool -> marking fall_decoupled -> Prop := | fall_enabled_FD l (m : marking fall_decoupled) : 0 < m _ _ (latch_fall l) -> is_enabled_FD (Fall l) m | rise_enabled_RD l (m : marking fall_decoupled) : 0 < m _ _ (latch_rise l) -> (forall l0 (pf : neighbor c l0 l), 0 < m _ _ (neighbor_rise_rise l0 l pf)) -> (forall l' (pf : neighbor c l l'), 0 < m _ _ (neighbor_fall_rise l l' pf)) -> is_enabled_FD (Rise l) m. Lemma FD_is_enabled_equiv : forall e m, is_enabled_FD e m -> is_enabled fall_decoupled e m. Proof. destruct e as [[O | E] [ | ]]; intros m; inversion 1; subst; intros e0 p; simpl in p; dependent destruction p; auto. Qed. Lemma is_enabled_FD_equiv : forall e m, is_enabled fall_decoupled e m -> is_enabled_FD e m. Proof. intros e m Henabled. unfold is_enabled in *. destruct e as [[O | E] [ | ]]; constructor; intros; apply Henabled; eexists; try (econstructor; eauto; fail). Qed. Ltac get_enabled_constraints := try match goal with | [ H : is_enabled fall_decoupled _ _ |- _ ] => apply is_enabled_FD_equiv in H; inversion H; subst end; specialize_enabled_constraints. (** * Helper lemmas *) Section loop_lemmas. Variable t : trace (latch even odd) bool. Variable m : marking fall_decoupled. Hypothesis fd_t_m : [fall_decoupled]⊢ t ↓ m. Lemma fd_loop : forall l, m _ _ (latch_fall l) + m _ _ (latch_rise l) = 1. Proof. intros. solve_loop. destruct l; auto. (* induction fd_t_m; intros [O | E]; try reflexivity. + specialize (IHm0 m0 (Odd O)). subst; unfold fire; repeat compare_next; get_enabled_constraints; try omega. + specialize (IHm0 m0 (Even E)). subst; unfold fire; repeat compare_next; get_enabled_constraints; try omega. *) Qed. Lemma fd_loop_neighbor : forall l l' (pf : neighbor c l l'), m _ _ (latch_fall l') + m _ _ (neighbor_fall_rise _ _ pf) + m _ _ (neighbor_rise_rise _ _ pf) = 1. Proof. intros. solve_loop. destruct pf; auto. (* induction fd_t_m; intros [O | E] [O' | E'] pf; try reflexivity; find_event_contradiction; subst; unfold fire; repeat (compare_next; find_event_contradiction); get_enabled_constraints; simpl in *; try omega. *) Qed. End loop_lemmas. Section fd_lemmas. Variable t : trace (latch even odd) bool. Variable m : marking fall_decoupled. 
Hypothesis fd_t_m : [fall_decoupled]⊢ t ↓ m. Lemma marking_fall : forall l, m _ _ (latch_fall l) = match transparent t l with | Opaque => 0 | Transparent => 1 end. Proof. induction fd_t_m; intros l; auto. * destruct l; auto. * simpl. set (loop := fd_loop t' m m0 l); subst. unfold fire. repeat compare_next. { get_enabled_constraints. omega. } { get_enabled_constraints. omega. } { rewrite IHm0; auto. } Qed. Lemma marking_rise : forall l, m _ _ (latch_rise l) = match transparent t l with | Opaque => 1 | Transparent => 0 end. Proof. induction fd_t_m; intros l; auto. * destruct l; auto. * simpl. set (loop := fd_loop t' m m0 l). subst. unfold fire. repeat compare_next. { get_enabled_constraints. omega. } { get_enabled_constraints. omega. } { rewrite IHm0; auto. } Qed. Lemma odd_num_events : forall O, ( m _ _ (latch_rise (Odd O)) > 0 -> num_events (Fall (Odd O)) t = 1+num_events (Rise (Odd O)) t) /\ ( m _ _ (latch_fall (Odd O)) > 0 -> num_events (Fall (Odd O)) t = num_events (Rise (Odd O)) t). Proof. induction fd_t_m; intros O; split; intros Hrise; auto. { contradict Hrise. simpl. inversion 1. } { simpl in *. set (loop := fd_loop t' m m0 (Odd O)). specialize (IHm0 m0 O). destruct IHm0 as [IH1 IH2]. subst; unfold fire in Hrise. repeat compare_next; reduce_eqb. { rewrite IH2; auto. } { contradict Hrise. omega. } { apply IH1; auto. } } { simpl in *. set (loop := fd_loop t' m m0 (Odd O)). specialize (IHm0 m0 O). destruct IHm0 as [IH1 IH2]. subst; unfold fire in Hrise. repeat compare_next; reduce_eqb. { contradict Hrise. omega. } { rewrite IH1; auto. } { apply IH2; auto. } } Qed. Lemma even_num_events : forall E, ( m _ _ (latch_rise (Even E)) > 0 -> num_events (Rise (Even E)) t = num_events (Fall (Even E)) t) /\ ( m _ _ (latch_fall (Even E)) > 0 -> num_events (Rise (Even E)) t = 1+num_events (Fall (Even E)) t). Proof. induction fd_t_m; intros E; split; intros Hrise; auto. { contradict Hrise. simpl. inversion 1. } { simpl in *. set (loop := fd_loop t' m m0 (Even E)). specialize (IHm0 m0 E). destruct IHm0 as [IH1 IH2]. subst; unfold fire in Hrise. repeat compare_next; reduce_eqb. { contradict Hrise. omega. } { rewrite IH2; auto. } { rewrite IH1; auto. } } { simpl in *. set (loop := fd_loop t' m m0 (Even E)). specialize (IHm0 m0 E). destruct IHm0 as [IH1 IH2]. subst; unfold fire in Hrise. repeat compare_next; reduce_eqb. { rewrite IH1; auto. } { contradict Hrise. omega. } { apply IH2; auto. } } Qed. Lemma opaque_num_events : forall l, transparent t l = Opaque -> num_events (Fall l) t = match l with | Odd _ => 1+num_events (Rise l) t | Even _ => num_events (Rise l) t end. Proof. intros [O | E] Hopaque. * destruct (odd_num_events O) as [H _]. rewrite H; auto. rewrite marking_rise. rewrite Hopaque. auto. * destruct (even_num_events E) as [H _]. rewrite H; auto. rewrite marking_rise. rewrite Hopaque. auto. Qed. Lemma transparent_num_events : forall l, transparent t l = Transparent -> num_events (Rise l) t = match l with | Odd _ => num_events (Fall l) t | Even _ => 1+num_events (Fall l) t end. Proof. intros [O | E] Htransparent. * destruct (odd_num_events O) as [_ H]. rewrite H; auto. rewrite marking_fall. rewrite Htransparent. auto. * destruct (even_num_events E) as [_ H]. rewrite H; auto. rewrite marking_fall. rewrite Htransparent. auto. Qed. Section even_odd. Variable E : even. Variable O : odd. Hypothesis Hin : neighbor c (Even E) (Odd O). 
Lemma even_odd_num_events : (m _ _ (latch_fall (Odd O)) > 0 -> num_events (Rise (Even E)) t = num_events (Rise (Odd O)) t) /\ (m _ _ (neighbor_fall_rise _ _ Hin) > 0 -> num_events (Rise (Even E)) t = num_events (Rise (Odd O)) t) /\ (m _ _ (neighbor_rise_rise _ _ Hin) > 0 -> num_events (Rise (Even E)) t = 1+num_events (Rise (Odd O)) t). Proof. induction fd_t_m; try set (Hloop := fd_loop_neighbor t' m m0 _ _ Hin); try destruct (IHm0 m0) as [IH1 [IH2 IH3]]; repeat split; intros Hgt; simpl in *; auto; find_contradiction; subst; unfold fire in Hgt; try (repeat compare_next; auto; get_enabled_constraints; contradict Hgt; omega). Qed. End even_odd. Section odd_even. Variable O : odd. Variable E : even. Hypothesis Hin : neighbor c (Odd O) (Even E). Lemma odd_even_num_events : (m _ _ (latch_fall (Even E)) > 0 -> num_events (Rise (Even E)) t = 1+num_events (Rise (Odd O)) t) /\ (m _ _ (neighbor_fall_rise _ _ Hin) > 0 -> num_events (Rise (Even E)) t = 1+ num_events (Rise (Odd O)) t) /\ (m _ _ (neighbor_rise_rise _ _ Hin) > 0 -> num_events (Rise (Even E)) t = num_events (Rise (Odd O)) t). Proof. induction fd_t_m; try set (Hloop := fd_loop_neighbor t' m m0 _ _ Hin); try destruct (IHm0 m0) as [IH1 [IH2 IH3]]; repeat split; intros Hgt; simpl in *; auto; find_contradiction; subst; unfold fire in Hgt; try (repeat compare_next; auto; get_enabled_constraints; contradict Hgt; omega). Qed. End odd_even. Lemma transparent_neighbor_num_events : forall l, transparent t l = Transparent -> forall l' (pf : neighbor c l' l), num_events (Rise l) t = match l with | Even _ => 1+num_events (Rise l') t | Odd _ => num_events (Rise l') t end. Proof. intros [O | E] Htransparent l' pf; inversion pf; subst. * destruct (even_odd_num_events _ _ pf) as [H _]. rewrite H; auto. rewrite marking_fall. rewrite Htransparent. auto. * destruct (odd_even_num_events _ _ pf) as [H _]. rewrite H; auto. rewrite marking_fall. rewrite Htransparent. auto. Qed. End fd_lemmas. (** * Flow equivalence proof *) (** Induction invariant *) Lemma fall_decoupled_strong : forall l t o v, ⟨ c , init_st ⟩⊢ t ↓ l ↦{ o } v -> forall m, [fall_decoupled]⊢ t ↓ m -> forall n, n = match l with | Odd _ => 1+num_events (Rise l) t | Even _ => num_events (Rise l) t end -> v = sync_eval c init_st n l. Proof. intros l t O v Hrel. induction Hrel; intros m Hm n Hn. * (* Because l is opaque in the initial marking, l must be even. *) inversion Hm; subst. auto. * (* l is transparent *) rewrite H2. assert (n > 0). { (* n > 0 *) subst. destruct l as [O | E]; try omega. erewrite transparent_num_events; eauto. omega. } erewrite sync_eval_gt; eauto. intros l' Hl'. erewrite H1; eauto. transitivity (sync_eval c init_st (num_events (Rise l) t) l'). 2:{ inversion Hl'; f_equal; subst; omega. } f_equal. erewrite transparent_neighbor_num_events with (l := l); eauto. inversion Hl'; auto. * inversion Hm; subst. simpl in *. compare_next. erewrite IHHrel; eauto. * inversion Hm. assert (n > 0). { subst. destruct l as [O | E]; try omega. set (Hopaque := opaque_num_events _ _ Hm (Even E)). simpl in Hopaque. compare_next. specialize (Hopaque eq_refl). simpl. compare_next. rewrite <- Hopaque; omega. } erewrite sync_eval_gt; eauto. intros l' Hl'. inversion Hm; subst. erewrite H1; eauto. assert (transparent t' l = Transparent). { get_enabled_constraints. rewrite marking_fall with (t := t') in *; auto. destruct (transparent t' l); auto; find_contradiction. } simpl. transitivity (sync_eval c init_st (num_events (Rise l) t') l'). 2:{ destruct l; compare_next; simpl; f_equal; omega. } f_equal. 
erewrite transparent_neighbor_num_events with (l := l) (l' := l'); eauto. inversion Hl'; auto. Qed. (** Main theorem statement *) Theorem fall_decoupled_flow_equivalence : flow_equivalence fall_decoupled c init_st. Proof. intros l t v [m Hm] Heval. erewrite (fall_decoupled_strong l t Opaque v Heval m Hm); eauto. erewrite opaque_num_events; eauto. { eapply async_b; eauto. } Qed. End FallDecoupled.
function [X_poly] = polyFeatures(X, p) %POLYFEATURES Maps X (1D vector) into the p-th power % [X_poly] = POLYFEATURES(X, p) takes a data matrix X (size m x 1) and % maps each example into its polynomial features where % X_poly(i, :) = [X(i) X(i).^2 X(i).^3 ... X(i).^p]; % % You need to return the following variables correctly. X_poly = zeros(numel(X), p); % ====================== YOUR CODE HERE ====================== % Instructions: Given a vector X, return a matrix X_poly where the p-th % column of X contains the values of X to the p-th power. % % for i=1:p X_poly(:,i)=X .^ i; end % ========================================================================= end
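% Example usage (illustrative, run from a script or the command window):
%
%   X = [1; 2; 3];
%   X_poly = polyFeatures(X, 3)
%   % returns [1 1 1; 2 4 8; 3 9 27]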
State Before: 𝕜 : Type u_2 inst✝¹⁶ : NontriviallyNormedField 𝕜 D : Type uD inst✝¹⁵ : NormedAddCommGroup D inst✝¹⁴ : NormedSpace 𝕜 D E : Type uE inst✝¹³ : NormedAddCommGroup E inst✝¹² : NormedSpace 𝕜 E F : Type uF inst✝¹¹ : NormedAddCommGroup F inst✝¹⁰ : NormedSpace 𝕜 F G : Type uG inst✝⁹ : NormedAddCommGroup G inst✝⁸ : NormedSpace 𝕜 G X : Type ?u.3124593 inst✝⁷ : NormedAddCommGroup X inst✝⁶ : NormedSpace 𝕜 X s s₁ t u : Set E f f₁ : E → F g : F → G x✝ x₀ : E c : F b : E × F → G m✝ n : ℕ∞ p : E → FormalMultilinearSeries 𝕜 E F 𝕜' : Type u_1 inst✝⁵ : NontriviallyNormedField 𝕜' inst✝⁴ : NormedAlgebra 𝕜 𝕜' inst✝³ : NormedSpace 𝕜' E inst✝² : IsScalarTower 𝕜 𝕜' E inst✝¹ : NormedSpace 𝕜' F inst✝ : IsScalarTower 𝕜 𝕜' F p' : E → FormalMultilinearSeries 𝕜' E F h : HasFTaylorSeriesUpToOn n f p' s m : ℕ hm : ↑m < n x : E hx : x ∈ s ⊢ HasFDerivWithinAt (fun x => FormalMultilinearSeries.restrictScalars 𝕜 (p' x) m) (ContinuousMultilinearMap.curryLeft (FormalMultilinearSeries.restrictScalars 𝕜 (p' x) (Nat.succ m))) s x State After: no goals Tactic: simpa only using (ContinuousMultilinearMap.restrictScalarsLinear 𝕜).hasFDerivAt.comp_hasFDerivWithinAt x <| (h.fderivWithin m hm x hx).restrictScalars 𝕜
Formal statement is: lemma lebesgue_sets_translation: fixes f :: "'a \<Rightarrow> 'a::euclidean_space" assumes S: "S \<in> sets lebesgue" shows "((\<lambda>x. a + x) ` S) \<in> sets lebesgue" Informal statement is: If $S$ is a Lebesgue measurable set, then the set $a + S$ is also Lebesgue measurable.
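One standard way to see this (a sketch, not the Isabelle proof): writing $\tau_a(x) = a + x$,

$$a + S = \tau_a(S) = \{\, a + x \mid x \in S \,\}, \qquad \lambda(\tau_a(E)) = \lambda(E) \text{ for every measurable } E,$$

so $\tau_a$ is a homeomorphism that maps null sets to null sets; since a Lebesgue set is a Borel set modulo a null set, $\tau_a$ preserves Lebesgue measurability.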
lemma shows sets_distr[simp, measurable_cong]: "sets (distr M N f) = sets N" and space_distr[simp]: "space (distr M N f) = space N"
If $f$ is a Lebesgue-integrable function and $f$ is nonnegative almost everywhere, then for any $c > 0$, $f$ is $c$-small in $L^1$-norm compared to $|g|$.
[STATEMENT] lemma filter_insort_triv: "\<not> P x \<Longrightarrow> filter P (insort_key f x xs) = filter P xs" [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<not> P x \<Longrightarrow> filter P (insort_key f x xs) = filter P xs [PROOF STEP] by (induct xs) simp_all
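A concrete instance (an illustration, with $P = \mathrm{even}$, $f = \mathrm{id}$, $x = 3$, $xs = [1, 2, 4]$):

filter even (insort 3 [1, 2, 4]) = filter even [1, 2, 3, 4] = [2, 4] = filter even [1, 2, 4]

Since the inserted element fails $P$, it is filtered straight back out, so both sides agree; the induction in the proof just pushes this observation through each position where insort_key may place $x$.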
macro "foo!" x:term:max : term => `($x - 1) theorem ex1 : foo! 10 = 9 := rfl macro (priority := high) "foo! " x:term:max : term => `($x + 1) theorem ex2 : foo! 10 = 11 := rfl macro (priority := low) "foo! " x:term:max : term => `($x * 2) theorem ex3 : foo! 10 = 11 := rfl macro (priority := high+1) "foo! " x:term:max : term => `($x * 2) theorem ex4 : foo! 10 = 20 := rfl macro (priority := high+4-2) "foo! " x:term:max : term => `($x * 3) theorem ex5 : foo! 10 = 30 := rfl
[STATEMENT] lemma path_from_to_last': "v \<leadsto>(xs @ x # xs')\<leadsto> w \<Longrightarrow> w \<notin> set xs" [PROOF STATE] proof (prove) goal (1 subgoal): 1. v \<leadsto>(xs @ x # xs')\<leadsto> w \<Longrightarrow> w \<notin> set xs [PROOF STEP] by (metis path_from_toE bex_empty last_appendR last_in_set list.set(1) list.simps(3) path_disjoint)
Gaeltacht or Irish-speaking areas are still seeing a decline in the language. The main Gaeltacht areas are down the west of the country, in Donegal, Mayo, Galway and Kerry, with smaller Gaeltacht areas near Dungarvan in Waterford, Navan in Meath, and the Shaw's Road in Belfast. The Irish language is a compulsory subject in the state education system in the Republic, and the Gaelscoil movement has seen many Irish-medium schools established in both jurisdictions.
Formal statement is: lemma closed_Compl [continuous_intros, intro]: "open S \<Longrightarrow> closed (- S)" Informal statement is: The complement of an open set is closed.
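As a one-line justification (matching the informal statement): closedness is defined through the complement, so

$$\mathrm{closed}(-S) \iff \mathrm{open}(-(-S)) \iff \mathrm{open}(S).$$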
#ifndef AVECADO_FACTORY_HPP #define AVECADO_FACTORY_HPP #include <boost/property_tree/ptree.hpp> #include <map> #include <memory> #include <stdexcept> #include <string> #include <utility> namespace pt = boost::property_tree; namespace avecado { namespace post_process { /** * Generic factory class for creating objects based on type name and configuration. */ template <typename T> class factory { public: typedef std::shared_ptr<T> ptr_t; typedef ptr_t (*factory_func_t)(pt::ptree const& config); typedef std::map<std::string, factory_func_t> func_map_t; factory() : m_factory_functions() {} factory & register_type(std::string const& type, factory_func_t func) { m_factory_functions.insert(std::make_pair(type, func)); return (*this); } ptr_t create(std::string const& type, pt::ptree const& config) const { typename func_map_t::const_iterator f_itr = m_factory_functions.find(type); if (f_itr == m_factory_functions.end()) { throw std::runtime_error("Unrecognized type: " + type); } factory_func_t func = f_itr->second; ptr_t ptr = func(config); return ptr; } private: func_map_t m_factory_functions; }; } // namespace post_process } // namespace avecado #endif // AVECADO_FACTORY_HPP
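A minimal usage sketch of this factory (the post_processor interface, the logger type, create_logger, and the "prefix" config key are illustrative assumptions, not part of avecado):

#include <iostream>
#include "factory.hpp" // the header above; the path is an assumption

struct post_processor {
  virtual ~post_processor() {}
  virtual void run() = 0;
};

struct logger : post_processor {
  explicit logger(std::string const& prefix) : m_prefix(prefix) {}
  void run() { std::cout << m_prefix << ": running\n"; }
  std::string m_prefix;
};

// A factory_func_t for "logger": reads the prefix out of the property tree.
std::shared_ptr<post_processor> create_logger(pt::ptree const& config) {
  return std::make_shared<logger>(config.get<std::string>("prefix", "log"));
}

int main() {
  avecado::post_process::factory<post_processor> f;
  f.register_type("logger", &create_logger); // register_type returns the factory, so calls chain
  pt::ptree config;
  config.put("prefix", "tiles");
  f.create("logger", config)->run(); // prints "tiles: running"
  return 0;
}

An unknown type name falls through to the std::runtime_error branch of create, which is why the header now pulls in <stdexcept> explicitly.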
(* *********************************************************************) (* *) (* The Compcert verified compiler *) (* *) (* Xavier Leroy, INRIA Paris-Rocquencourt *) (* *) (* Copyright Institut National de Recherche en Informatique et en *) (* Automatique. All rights reserved. This file is distributed *) (* under the terms of the INRIA Non-Commercial License Agreement. *) (* *) (* *********************************************************************) (** Correctness proof for expression simplification. *) Require Import FunInd. Require Import Coqlib Maps Errors Integers. Require Import AST Linking. Require Import Values Memory Events Globalenvs Smallstep. Require Import Ctypes Cop Csyntax Csem Cstrategy Clight. Require Import SimplExpr SimplExprspec. Require Import sflib. (** ** Relational specification of the translation. *) Definition match_prog (p: Csyntax.program) (tp: Clight.program) := match_program (fun ctx f tf => tr_fundef f tf) eq p tp /\ prog_types tp = prog_types p. Lemma transf_program_match: forall p tp, transl_program p = OK tp -> match_prog p tp. Proof. unfold transl_program; intros. monadInv H. split; auto. unfold program_of_program; simpl. destruct x; simpl. eapply match_transform_partial_program_contextual. eexact EQ. intros. apply transl_fundef_spec; auto. Qed. (** ** Semantic preservation *) Section PRESERVATION. Variable prog: Csyntax.program. Variable tprog: Clight.program. Hypothesis TRANSL: match_prog prog tprog. Section CORELEMMA. Variable se tse: Senv.t. Hypothesis (MATCH_SENV: Senv.equiv se tse). Variable ge : Csem.genv. Variable tge : Clight.genv. Hypothesis SECOMPATSRC: senv_genv_compat se ge. Hypothesis SECOMPATTGT: senv_genv_compat tse tge. (** Invariance properties. *) Hypothesis (MATCH_CGENV: Genv.match_genvs (match_globdef (fun (ctx : AST.program Csyntax.fundef type) f tf => tr_fundef f tf) eq prog) ge tge /\ genv_cenv tge = Csem.genv_cenv ge). Lemma comp_env_preserved: Clight.genv_cenv tge = Csem.genv_cenv ge. Proof. apply MATCH_CGENV. Qed. Lemma symbols_preserved: forall (s: ident), Genv.find_symbol tge s = Genv.find_symbol ge s. Proof (Genv.find_symbol_match_genv (proj1 MATCH_CGENV)). Lemma senv_preserved: Senv.equiv ge tge. Proof (Genv.senv_match_genv (proj1 MATCH_CGENV)). Lemma function_ptr_translated: forall b f, Genv.find_funct_ptr ge b = Some f -> exists tf, Genv.find_funct_ptr tge b = Some tf /\ tr_fundef f tf. Proof. intros. edestruct (Genv.find_funct_ptr_match_genv (proj1 MATCH_CGENV)) as (ctx & tf & A & B & C); eauto. Qed. Lemma functions_translated: forall v f, Genv.find_funct ge v = Some f -> exists tf, Genv.find_funct tge v = Some tf /\ tr_fundef f tf. Proof. intros. edestruct (Genv.find_funct_match_genv (proj1 MATCH_CGENV)) as (ctx & tf & A & B & C); eauto. Qed. Lemma type_of_fundef_preserved: forall f tf, tr_fundef f tf -> type_of_fundef tf = Csyntax.type_of_fundef f. Proof. intros. inv H. inv H0; simpl. unfold type_of_function, Csyntax.type_of_function. congruence. auto. Qed. Lemma function_return_preserved: forall f tf, tr_function f tf -> fn_return tf = Csyntax.fn_return f. Proof. intros. inv H; auto. Qed. (** Properties of smart constructors. *) Lemma eval_Ederef': forall ge e le m a t l ofs, eval_expr ge e le m a (Vptr l ofs) -> eval_lvalue ge e le m (Ederef' a t) l ofs. Proof. intros. unfold Ederef'; destruct a; auto using eval_Ederef. destruct (type_eq t (typeof a)); auto using eval_Ederef. inv H. - auto. - inv H0. Qed. Lemma typeof_Ederef': forall a t, typeof (Ederef' a t) = t. Proof. unfold Ederef'; intros; destruct a; auto. 
destruct (type_eq t (typeof a)); auto. Qed. Lemma eval_Eaddrof': forall ge e le m a t l ofs, eval_lvalue ge e le m a l ofs -> eval_expr ge e le m (Eaddrof' a t) (Vptr l ofs). Proof. intros. unfold Eaddrof'; destruct a; auto using eval_Eaddrof. destruct (type_eq t (typeof a)); auto using eval_Eaddrof. inv H; auto. Qed. Lemma typeof_Eaddrof': forall a t, typeof (Eaddrof' a t) = t. Proof. unfold Eaddrof'; intros; destruct a; auto. destruct (type_eq t (typeof a)); auto. Qed. (** Translation of simple expressions. *) Lemma tr_simple_nil: (forall le dst r sl a tmps, tr_expr le dst r sl a tmps -> dst = For_val \/ dst = For_effects -> simple r = true -> sl = nil) /\(forall le rl sl al tmps, tr_exprlist le rl sl al tmps -> simplelist rl = true -> sl = nil). Proof. assert (A: forall dst a, dst = For_val \/ dst = For_effects -> final dst a = nil). intros. destruct H; subst dst; auto. apply tr_expr_exprlist; intros; simpl in *; try discriminate; auto. rewrite H0; auto. simpl; auto. rewrite H0; auto. simpl; auto. destruct H1; congruence. destruct (andb_prop _ _ H6). inv H1. rewrite H0; eauto. simpl; auto. unfold chunk_for_volatile_type in H9. destruct (type_is_volatile (Csyntax.typeof e1)); simpl in H8; congruence. rewrite H0; auto. simpl; auto. rewrite H0; auto. simpl; auto. destruct (andb_prop _ _ H7). rewrite H0; auto. rewrite H2; auto. simpl; auto. rewrite H0; auto. simpl; auto. destruct (andb_prop _ _ H6). rewrite H0; auto. Qed. Lemma tr_simple_expr_nil: forall le dst r sl a tmps, tr_expr le dst r sl a tmps -> dst = For_val \/ dst = For_effects -> simple r = true -> sl = nil. Proof (proj1 tr_simple_nil). Lemma tr_simple_exprlist_nil: forall le rl sl al tmps, tr_exprlist le rl sl al tmps -> simplelist rl = true -> sl = nil. Proof (proj2 tr_simple_nil). (** Translation of [deref_loc] and [assign_loc] operations. *) Remark deref_loc_translated: forall ty m b ofs t v, Csem.deref_loc se ty m b ofs t v -> match chunk_for_volatile_type ty with | None => t = E0 /\ Clight.deref_loc ty m b ofs v | Some chunk => volatile_load tse chunk m b ofs t v end. Proof. intros. unfold chunk_for_volatile_type. inv H. (* By_value, not volatile *) rewrite H1. split; auto. eapply deref_loc_value; eauto. (* By_value, volatile *) rewrite H0; rewrite H1. eapply volatile_load_preserved with (ge1 := se); auto. (* By reference *) rewrite H0. destruct (type_is_volatile ty); split; auto; eapply deref_loc_reference; eauto. (* By copy *) rewrite H0. destruct (type_is_volatile ty); split; auto; eapply deref_loc_copy; eauto. Qed. Remark assign_loc_translated: forall ty m b ofs v t m', Csem.assign_loc se ge ty m b ofs v t m' -> match chunk_for_volatile_type ty with | None => t = E0 /\ Clight.assign_loc tge ty m b ofs v m' | Some chunk => volatile_store tse chunk m b ofs v t m' end. Proof. intros. unfold chunk_for_volatile_type. inv H. (* By_value, not volatile *) rewrite H1. split; auto. eapply assign_loc_value; eauto. (* By_value, volatile *) rewrite H0; rewrite H1. eapply volatile_store_preserved with (ge1 := se); auto. (* By copy *) rewrite H0. rewrite <- comp_env_preserved in *. destruct (type_is_volatile ty); split; auto; eapply assign_loc_copy; eauto. Qed. 
(** Evaluation of simple expressions and of their translation *) Lemma tr_simple: forall e m, (forall r v, eval_simple_rvalue se ge e m r v -> forall le dst sl a tmps, tr_expr le dst r sl a tmps -> match dst with | For_val => sl = nil /\ Csyntax.typeof r = typeof a /\ eval_expr tge e le m a v | For_effects => sl = nil | For_set sd => exists b, sl = do_set sd b /\ Csyntax.typeof r = typeof b /\ eval_expr tge e le m b v end) /\ (forall l b ofs, eval_simple_lvalue se ge e m l b ofs -> forall le sl a tmps, tr_expr le For_val l sl a tmps -> sl = nil /\ Csyntax.typeof l = typeof a /\ eval_lvalue tge e le m a b ofs). Proof. Opaque makeif. intros e m. apply (eval_simple_rvalue_lvalue_ind se ge e m); intros until tmps; intros TR; inv TR. (* value *) auto. auto. exists a0; auto. (* rvalof *) inv H7; try congruence. exploit H0; eauto. intros [A [B C]]. subst sl1; simpl. assert (eval_expr tge e le m a v). eapply eval_Elvalue. eauto. rewrite <- B. exploit deref_loc_translated; eauto. unfold chunk_for_volatile_type; rewrite H2. tauto. destruct dst; auto. econstructor. split. simpl; eauto. auto. (* addrof *) exploit H0; eauto. intros [A [B C]]. subst sl1; simpl. assert (eval_expr tge e le m (Eaddrof' a1 ty) (Vptr b ofs)) by (apply eval_Eaddrof'; auto). assert (typeof (Eaddrof' a1 ty) = ty) by (apply typeof_Eaddrof'). destruct dst; auto. simpl; econstructor; eauto. (* unop *) exploit H0; eauto. intros [A [B C]]. subst sl1; simpl. assert (eval_expr tge e le m (Eunop op a1 ty) v). econstructor; eauto. congruence. destruct dst; auto. simpl; econstructor; eauto. (* binop *) exploit H0; eauto. intros [A [B C]]. exploit H2; eauto. intros [D [E F]]. subst sl1 sl2; simpl. assert (eval_expr tge e le m (Ebinop op a1 a2 ty) v). econstructor; eauto. rewrite comp_env_preserved; congruence. destruct dst; auto. simpl; econstructor; eauto. (* cast *) exploit H0; eauto. intros [A [B C]]. subst sl1; simpl. assert (eval_expr tge e le m (Ecast a1 ty) v). econstructor; eauto. congruence. destruct dst; auto. simpl; econstructor; eauto. (* sizeof *) rewrite <- comp_env_preserved. destruct dst. split; auto. split; auto. constructor. auto. exists (Esizeof ty1 ty). split. auto. split. auto. constructor. (* alignof *) rewrite <- comp_env_preserved. destruct dst. split; auto. split; auto. constructor. auto. exists (Ealignof ty1 ty). split. auto. split. auto. constructor. (* var local *) split; auto. split; auto. apply eval_Evar_local; auto. (* var global *) split; auto. split; auto. apply eval_Evar_global; auto. rewrite symbols_preserved; auto. (* deref *) exploit H0; eauto. intros [A [B C]]. subst sl1. split; auto. split. rewrite typeof_Ederef'; auto. apply eval_Ederef'; auto. (* field struct *) rewrite <- comp_env_preserved in *. exploit H0; eauto. intros [A [B C]]. subst sl1. split; auto. split; auto. rewrite B in H1. eapply eval_Efield_struct; eauto. (* field union *) rewrite <- comp_env_preserved in *. exploit H0; eauto. intros [A [B C]]. subst sl1. split; auto. split; auto. rewrite B in H1. eapply eval_Efield_union; eauto. Qed. Lemma tr_simple_rvalue: forall e m r v, eval_simple_rvalue se ge e m r v -> forall le dst sl a tmps, tr_expr le dst r sl a tmps -> match dst with | For_val => sl = nil /\ Csyntax.typeof r = typeof a /\ eval_expr tge e le m a v | For_effects => sl = nil | For_set sd => exists b, sl = do_set sd b /\ Csyntax.typeof r = typeof b /\ eval_expr tge e le m b v end. Proof. intros e m. exact (proj1 (tr_simple e m)). Qed. 
Lemma tr_simple_lvalue: forall e m l b ofs, eval_simple_lvalue se ge e m l b ofs -> forall le sl a tmps, tr_expr le For_val l sl a tmps -> sl = nil /\ Csyntax.typeof l = typeof a /\ eval_lvalue tge e le m a b ofs. Proof. intros e m. exact (proj2 (tr_simple e m)). Qed. Lemma tr_simple_exprlist: forall le rl sl al tmps, tr_exprlist le rl sl al tmps -> forall e m tyl vl, eval_simple_list se ge e m rl tyl vl -> sl = nil /\ eval_exprlist tge e le m al tyl vl. Proof. induction 1; intros. inv H. split. auto. constructor. inv H4. exploit tr_simple_rvalue; eauto. intros [A [B C]]. exploit IHtr_exprlist; eauto. intros [D E]. split. subst; auto. econstructor; eauto. congruence. Qed. (** Commutation between the translation of expressions and left contexts. *) Lemma typeof_context: forall k1 k2 C, leftcontext k1 k2 C -> forall e1 e2, Csyntax.typeof e1 = Csyntax.typeof e2 -> Csyntax.typeof (C e1) = Csyntax.typeof (C e2). Proof. induction 1; intros; auto. Qed. Scheme leftcontext_ind2 := Minimality for leftcontext Sort Prop with leftcontextlist_ind2 := Minimality for leftcontextlist Sort Prop. Combined Scheme leftcontext_leftcontextlist_ind from leftcontext_ind2, leftcontextlist_ind2. Lemma tr_expr_leftcontext_rec: ( forall from to C, leftcontext from to C -> forall le e dst sl a tmps, tr_expr le dst (C e) sl a tmps -> exists dst', exists sl1, exists sl2, exists a', exists tmp', tr_expr le dst' e sl1 a' tmp' /\ sl = sl1 ++ sl2 /\ incl tmp' tmps /\ (forall le' e' sl3, tr_expr le' dst' e' sl3 a' tmp' -> (forall id, ~In id tmp' -> le'!id = le!id) -> Csyntax.typeof e' = Csyntax.typeof e -> tr_expr le' dst (C e') (sl3 ++ sl2) a tmps) ) /\ ( forall from C, leftcontextlist from C -> forall le e sl a tmps, tr_exprlist le (C e) sl a tmps -> exists dst', exists sl1, exists sl2, exists a', exists tmp', tr_expr le dst' e sl1 a' tmp' /\ sl = sl1 ++ sl2 /\ incl tmp' tmps /\ (forall le' e' sl3, tr_expr le' dst' e' sl3 a' tmp' -> (forall id, ~In id tmp' -> le'!id = le!id) -> Csyntax.typeof e' = Csyntax.typeof e -> tr_exprlist le' (C e') (sl3 ++ sl2) a tmps) ). Proof. Ltac TR := econstructor; econstructor; econstructor; econstructor; econstructor; split; [eauto | split; [idtac | split]]. Ltac NOTIN := match goal with | [ H1: In ?x ?l, H2: list_disjoint ?l _ |- ~In ?x _ ] => red; intro; elim (H2 x x); auto; fail | [ H1: In ?x ?l, H2: list_disjoint _ ?l |- ~In ?x _ ] => red; intro; elim (H2 x x); auto; fail end. Ltac UNCHANGED := match goal with | [ H: (forall (id: ident), ~In id _ -> ?le' ! id = ?le ! id) |- (forall (id: ident), In id _ -> ?le' ! id = ?le ! id) ] => intros; apply H; NOTIN end. (*generalize compat_dest_change; intro CDC.*) apply leftcontext_leftcontextlist_ind; intros. (* base *) TR. rewrite <- app_nil_end; auto. red; auto. intros. rewrite <- app_nil_end; auto. (* deref *) inv H1. exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1; rewrite app_ass; eauto. auto. intros. rewrite <- app_ass. econstructor; eauto. (* field *) inv H1. exploit H0. eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1; rewrite app_ass; eauto. auto. intros. rewrite <- app_ass. econstructor; eauto. (* rvalof *) inv H1. exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1; rewrite app_ass; eauto. red; eauto. intros. rewrite <- app_ass; econstructor; eauto. exploit typeof_context; eauto. congruence. (* addrof *) inv H1. exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1; rewrite app_ass; eauto. auto. 
intros. rewrite <- app_ass. econstructor; eauto. (* unop *) inv H1. exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1; rewrite app_ass; eauto. auto. intros. rewrite <- app_ass. econstructor; eauto. (* binop left *) inv H1. exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor; eauto. eapply tr_expr_invariant; eauto. UNCHANGED. (* binop right *) inv H2. assert (sl1 = nil) by (eapply tr_simple_expr_nil; eauto). subst sl1; simpl. exploit H1; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl2. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. change (sl3 ++ sl2') with (nil ++ sl3 ++ sl2'). rewrite app_ass. econstructor; eauto. eapply tr_expr_invariant; eauto. UNCHANGED. (* cast *) inv H1. exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1; rewrite app_ass; eauto. auto. intros. rewrite <- app_ass. econstructor; eauto. (* seqand *) inv H1. (* for val *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. (* for effects *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. (* for set *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. (* seqor *) inv H1. (* for val *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. (* for effects *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. (* for set *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. (* condition *) inv H1. (* for val *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. auto. auto. (* for effects *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. eapply tr_condition_effects. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. auto. (* for set *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. eapply tr_condition_set. 
apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. auto. auto. (* assign left *) inv H1. (* for effects *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. (* for val *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. auto. auto. auto. eapply typeof_context; eauto. auto. (* assign right *) inv H2. (* for effects *) assert (sl1 = nil) by (eapply tr_simple_expr_nil; eauto). subst sl1; simpl. exploit H1; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl2. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. change (sl3 ++ sl2') with (nil ++ (sl3 ++ sl2')). rewrite app_ass. econstructor. eapply tr_expr_invariant; eauto. UNCHANGED. apply S; auto. auto. auto. auto. (* for val *) assert (sl1 = nil) by (eapply tr_simple_expr_nil; eauto). subst sl1; simpl. exploit H1; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl2. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. change (sl3 ++ sl2') with (nil ++ (sl3 ++ sl2')). rewrite app_ass. econstructor. eapply tr_expr_invariant; eauto. UNCHANGED. apply S; auto. auto. auto. auto. auto. auto. auto. auto. eapply typeof_context; eauto. (* assignop left *) inv H1. (* for effects *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. symmetry; eapply typeof_context; eauto. eauto. auto. auto. auto. auto. auto. auto. (* for val *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. eauto. auto. auto. auto. auto. auto. auto. auto. auto. auto. auto. eapply typeof_context; eauto. (* assignop right *) inv H2. (* for effects *) assert (sl1 = nil) by (eapply tr_simple_expr_nil; eauto). subst sl1; simpl. exploit H1; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl2. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. change (sl0 ++ sl2') with (nil ++ sl0 ++ sl2'). rewrite app_ass. econstructor. eapply tr_expr_invariant; eauto. UNCHANGED. apply S; auto. auto. eauto. auto. auto. auto. auto. auto. auto. (* for val *) assert (sl1 = nil) by (eapply tr_simple_expr_nil; eauto). subst sl1; simpl. exploit H1; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl2. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. change (sl0 ++ sl2') with (nil ++ sl0 ++ sl2'). rewrite app_ass. econstructor. eapply tr_expr_invariant; eauto. UNCHANGED. apply S; auto. eauto. auto. auto. auto. auto. auto. auto. auto. auto. auto. auto. auto. (* postincr *) inv H1. (* for effects *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q; rewrite app_ass; eauto. red; auto. intros. rewrite <- app_ass. econstructor; eauto. symmetry; eapply typeof_context; eauto. (* for val *) exploit H0; eauto. 
intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q; rewrite app_ass; eauto. red; auto. intros. rewrite <- app_ass. econstructor; eauto. eapply typeof_context; eauto. (* call left *) inv H1. (* for effects *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q; rewrite app_ass; eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_exprlist_invariant; eauto. UNCHANGED. auto. auto. auto. (* for val *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q; rewrite app_ass; eauto. red; auto. intros. rewrite <- app_ass. econstructor. auto. apply S; auto. eapply tr_exprlist_invariant; eauto. UNCHANGED. auto. auto. auto. auto. (* call right *) inv H2. (* for effects *) assert (sl1 = nil) by (eapply tr_simple_expr_nil; eauto). subst sl1; simpl. exploit H1; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q; rewrite app_ass; eauto. (*destruct dst'; constructor||contradiction.*) red; auto. intros. rewrite <- app_ass. change (sl3++sl2') with (nil ++ sl3 ++ sl2'). rewrite app_ass. econstructor. eapply tr_expr_invariant; eauto. UNCHANGED. apply S; auto. auto. auto. auto. (* for val *) assert (sl1 = nil) by (eapply tr_simple_expr_nil; eauto). subst sl1; simpl. exploit H1; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q; rewrite app_ass; eauto. (*destruct dst'; constructor||contradiction.*) red; auto. intros. rewrite <- app_ass. change (sl3++sl2') with (nil ++ sl3 ++ sl2'). rewrite app_ass. econstructor. auto. eapply tr_expr_invariant; eauto. UNCHANGED. apply S; auto. auto. auto. auto. auto. (* builtin *) inv H1. (* for effects *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q; rewrite app_ass; eauto. red; auto. intros. rewrite <- app_ass. change (sl3++sl2') with (nil ++ sl3 ++ sl2'). rewrite app_ass. econstructor. apply S; auto. auto. (* for val *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q; rewrite app_ass; eauto. red; auto. intros. rewrite <- app_ass. change (sl3++sl2') with (nil ++ sl3 ++ sl2'). rewrite app_ass. econstructor. auto. apply S; auto. auto. auto. (* comma *) inv H1. exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q; rewrite app_ass; eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_expr_invariant; eauto. UNCHANGED. auto. auto. auto. (* paren *) inv H1. (* for val *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. eauto. red; auto. intros. econstructor; eauto. (* for effects *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. eauto. auto. intros. econstructor; eauto. (* for set *) exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. rewrite Q. eauto. auto. intros. econstructor; eauto. (* cons left *) inv H1. exploit H0; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl1. rewrite app_ass. eauto. red; auto. intros. rewrite <- app_ass. econstructor. apply S; auto. eapply tr_exprlist_invariant; eauto. UNCHANGED. auto. auto. auto. (* cons right *) inv H2. assert (sl1 = nil) by (eapply tr_simple_expr_nil; eauto). subst sl1; simpl. exploit H1; eauto. intros [dst' [sl1' [sl2' [a' [tmp' [P [Q [R S]]]]]]]]. TR. subst sl2. eauto. red; auto. intros. change sl3 with (nil ++ sl3). rewrite app_ass. econstructor. eapply tr_expr_invariant; eauto. 
UNCHANGED. apply S; auto. auto. auto. auto. Qed. Theorem tr_expr_leftcontext: forall C le r dst sl a tmps, leftcontext RV RV C -> tr_expr le dst (C r) sl a tmps -> exists dst', exists sl1, exists sl2, exists a', exists tmp', tr_expr le dst' r sl1 a' tmp' /\ sl = sl1 ++ sl2 /\ incl tmp' tmps /\ (forall le' r' sl3, tr_expr le' dst' r' sl3 a' tmp' -> (forall id, ~In id tmp' -> le'!id = le!id) -> Csyntax.typeof r' = Csyntax.typeof r -> tr_expr le' dst (C r') (sl3 ++ sl2) a tmps). Proof. intros. eapply (proj1 tr_expr_leftcontext_rec); eauto. Qed. Theorem tr_top_leftcontext: forall e le m dst rtop sl a tmps, tr_top tge e le m dst rtop sl a tmps -> forall r C, rtop = C r -> leftcontext RV RV C -> exists dst', exists sl1, exists sl2, exists a', exists tmp', tr_top tge e le m dst' r sl1 a' tmp' /\ sl = sl1 ++ sl2 /\ incl tmp' tmps /\ (forall le' m' r' sl3, tr_expr le' dst' r' sl3 a' tmp' -> (forall id, ~In id tmp' -> le'!id = le!id) -> Csyntax.typeof r' = Csyntax.typeof r -> tr_top tge e le' m' dst (C r') (sl3 ++ sl2) a tmps). Proof. induction 1; intros. (* val for val *) inv H2; inv H1. exists For_val; econstructor; econstructor; econstructor; econstructor. split. apply tr_top_val_val; eauto. split. instantiate (1 := nil); auto. split. apply incl_refl. intros. rewrite <- app_nil_end. constructor; auto. (* base *) subst r. exploit tr_expr_leftcontext; eauto. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. exists dst'; exists sl1; exists sl2; exists a'; exists tmp'. split. apply tr_top_base; auto. split. auto. split. auto. intros. apply tr_top_base. apply S; auto. Qed. (** Semantics of smart constructors *) Remark sem_cast_deterministic: forall v ty ty' m1 v1 m2 v2, sem_cast v ty ty' m1 = Some v1 -> sem_cast v ty ty' m2 = Some v2 -> v1 = v2. Proof. unfold sem_cast; intros. destruct (classify_cast ty ty'); try congruence. - destruct v; try congruence. destruct Archi.ptr64; try discriminate. destruct (Mem.weak_valid_pointer m1 b (Ptrofs.unsigned i)); inv H. destruct (Mem.weak_valid_pointer m2 b (Ptrofs.unsigned i)); inv H0. auto. - destruct v; try congruence. destruct (negb Archi.ptr64); try discriminate. destruct (Mem.weak_valid_pointer m1 b (Ptrofs.unsigned i)); inv H. destruct (Mem.weak_valid_pointer m2 b (Ptrofs.unsigned i)); inv H0. auto. Qed. Lemma eval_simpl_expr_sound: forall e le m a v, eval_expr tge e le m a v -> match eval_simpl_expr a with Some v' => v' = v | None => True end. Proof. induction 1; simpl; auto. destruct (eval_simpl_expr a); auto. subst. destruct (sem_cast v1 (typeof a) ty Mem.empty) as [v'|] eqn:C; auto. eapply sem_cast_deterministic; eauto. inv H; simpl; auto. Qed. Lemma static_bool_val_sound: forall v t m b, bool_val v t Mem.empty = Some b -> bool_val v t m = Some b. Proof. assert (A: forall b ofs, Mem.weak_valid_pointer Mem.empty b ofs = false). { unfold Mem.weak_valid_pointer, Mem.valid_pointer, proj_sumbool; intros. rewrite ! pred_dec_false by (apply Mem.perm_empty). auto. } intros until b; unfold bool_val. destruct (classify_bool t); destruct v; destruct Archi.ptr64 eqn:SF; auto. - rewrite A; congruence. - simpl; rewrite A; congruence. Qed. Lemma step_makeif: forall f a s1 s2 k e le m v1 b, eval_expr tge e le m a v1 -> bool_val v1 (typeof a) m = Some b -> star step1 tse tge (State f (makeif a s1 s2) k e le m) E0 (State f (if b then s1 else s2) k e le m). Proof. intros. functional induction (makeif a s1 s2). - exploit eval_simpl_expr_sound; eauto. rewrite e0. intro EQ; subst v. assert (bool_val v1 (typeof a) m = Some true) by (apply static_bool_val_sound; auto). 
replace b with true by congruence. constructor. - exploit eval_simpl_expr_sound; eauto. rewrite e0. intro EQ; subst v. assert (bool_val v1 (typeof a) m = Some false) by (apply static_bool_val_sound; auto). replace b with false by congruence. constructor. - apply star_one. eapply step_ifthenelse; eauto. - apply star_one. eapply step_ifthenelse; eauto. Qed. Lemma step_make_set: forall id a ty m b ofs t v e le f k, Csem.deref_loc se ty m b ofs t v -> eval_lvalue tge e le m a b ofs -> typeof a = ty -> step1 tse tge (State f (make_set id a) k e le m) t (State f Sskip k e (PTree.set id v le) m). Proof. intros. exploit deref_loc_translated; eauto. rewrite <- H1. unfold make_set. destruct (chunk_for_volatile_type (typeof a)) as [chunk|]. (* volatile case *) intros. change (PTree.set id v le) with (set_opttemp (Some id) v le). econstructor. econstructor. constructor. eauto. simpl. unfold sem_cast. simpl. eauto. constructor. simpl. econstructor; eauto. (* nonvolatile case *) intros [A B]. subst t. constructor. eapply eval_Elvalue; eauto. Qed. Lemma step_make_assign: forall a1 a2 ty m b ofs t v m' v2 e le f k, Csem.assign_loc se ge ty m b ofs v t m' -> eval_lvalue tge e le m a1 b ofs -> eval_expr tge e le m a2 v2 -> sem_cast v2 (typeof a2) ty m = Some v -> typeof a1 = ty -> step1 tse tge (State f (make_assign a1 a2) k e le m) t (State f Sskip k e le m'). Proof. intros. exploit assign_loc_translated; eauto. rewrite <- H3. unfold make_assign. destruct (chunk_for_volatile_type (typeof a1)) as [chunk|]. (* volatile case *) intros. change le with (set_opttemp None Vundef le) at 2. econstructor. econstructor. constructor. eauto. simpl. unfold sem_cast. simpl. eauto. econstructor; eauto. rewrite H3; eauto. constructor. simpl. econstructor; eauto. (* nonvolatile case *) intros [A B]. subst t. econstructor; eauto. congruence. Qed. Fixpoint Kseqlist (sl: list statement) (k: cont) := match sl with | nil => k | s :: l => Kseq s (Kseqlist l k) end. Remark Kseqlist_app: forall sl1 sl2 k, Kseqlist (sl1 ++ sl2) k = Kseqlist sl1 (Kseqlist sl2 k). Proof. induction sl1; simpl; congruence. Qed. Lemma push_seq: forall f sl k e le m, star step1 tse tge (State f (makeseq sl) k e le m) E0 (State f Sskip (Kseqlist sl k) e le m). Proof. intros. unfold makeseq. generalize Sskip. revert sl k. induction sl; simpl; intros. apply star_refl. eapply star_right. apply IHsl. constructor. traceEq. Qed. Lemma step_tr_rvalof: forall ty m b ofs t v e le a sl a' tmp f k, Csem.deref_loc se ty m b ofs t v -> eval_lvalue tge e le m a b ofs -> tr_rvalof ty a sl a' tmp -> typeof a = ty -> exists le', star step1 tse tge (State f Sskip (Kseqlist sl k) e le m) t (State f Sskip k e le' m) /\ eval_expr tge e le' m a' v /\ typeof a' = typeof a /\ forall x, ~In x tmp -> le'!x = le!x. Proof. intros. inv H1. (* not volatile *) exploit deref_loc_translated; eauto. unfold chunk_for_volatile_type; rewrite H3. intros [A B]. subst t. exists le; split. apply star_refl. split. eapply eval_Elvalue; eauto. auto. (* volatile *) intros. exists (PTree.set t0 v le); split. simpl. eapply star_two. econstructor. eapply step_make_set; eauto. traceEq. split. constructor. apply PTree.gss. split. auto. intros. apply PTree.gso. congruence. Qed. 
(** Matching between continuations *) Inductive match_cont : Csem.cont -> cont -> Prop := | match_Kstop: match_cont Csem.Kstop Kstop | match_Kseq: forall s k ts tk, tr_stmt s ts -> match_cont k tk -> match_cont (Csem.Kseq s k) (Kseq ts tk) | match_Kwhile2: forall r s k s' ts tk, tr_if r Sskip Sbreak s' -> tr_stmt s ts -> match_cont k tk -> match_cont (Csem.Kwhile2 r s k) (Kloop1 (Ssequence s' ts) Sskip tk) | match_Kdowhile1: forall r s k s' ts tk, tr_if r Sskip Sbreak s' -> tr_stmt s ts -> match_cont k tk -> match_cont (Csem.Kdowhile1 r s k) (Kloop1 ts s' tk) | match_Kfor3: forall r s3 s k ts3 s' ts tk, tr_if r Sskip Sbreak s' -> tr_stmt s3 ts3 -> tr_stmt s ts -> match_cont k tk -> match_cont (Csem.Kfor3 r s3 s k) (Kloop1 (Ssequence s' ts) ts3 tk) | match_Kfor4: forall r s3 s k ts3 s' ts tk, tr_if r Sskip Sbreak s' -> tr_stmt s3 ts3 -> tr_stmt s ts -> match_cont k tk -> match_cont (Csem.Kfor4 r s3 s k) (Kloop2 (Ssequence s' ts) ts3 tk) | match_Kswitch2: forall k tk, match_cont k tk -> match_cont (Csem.Kswitch2 k) (Kswitch tk) | match_Kcall: forall f e C ty k optid tf le sl tk a dest tmps, tr_function f tf -> leftcontext RV RV C -> (forall v m, tr_top tge e (set_opttemp optid v le) m dest (C (Csyntax.Eval v ty)) sl a tmps) -> match_cont_exp dest a k tk -> match_cont (Csem.Kcall f e C ty k) (Kcall optid tf e le (Kseqlist sl tk)) (* | match_Kcall_some: forall f e C ty k dst tf le sl tk a dest tmps, transl_function f = Errors.OK tf -> leftcontext RV RV C -> (forall v m, tr_top tge e (PTree.set dst v le) m dest (C (C.Eval v ty)) sl a tmps) -> match_cont_exp dest a k tk -> match_cont (Csem.Kcall f e C ty k) (Kcall (Some dst) tf e le (Kseqlist sl tk)) *) with match_cont_exp : destination -> expr -> Csem.cont -> cont -> Prop := | match_Kdo: forall k a tk, match_cont k tk -> match_cont_exp For_effects a (Csem.Kdo k) tk | match_Kifthenelse_empty: forall a k tk, match_cont k tk -> match_cont_exp For_val a (Csem.Kifthenelse Csyntax.Sskip Csyntax.Sskip k) (Kseq Sskip tk) | match_Kifthenelse_1: forall a s1 s2 k ts1 ts2 tk, tr_stmt s1 ts1 -> tr_stmt s2 ts2 -> match_cont k tk -> match_cont_exp For_val a (Csem.Kifthenelse s1 s2 k) (Kseq (Sifthenelse a ts1 ts2) tk) | match_Kwhile1: forall r s k s' a ts tk, tr_if r Sskip Sbreak s' -> tr_stmt s ts -> match_cont k tk -> match_cont_exp For_val a (Csem.Kwhile1 r s k) (Kseq (makeif a Sskip Sbreak) (Kseq ts (Kloop1 (Ssequence s' ts) Sskip tk))) | match_Kdowhile2: forall r s k s' a ts tk, tr_if r Sskip Sbreak s' -> tr_stmt s ts -> match_cont k tk -> match_cont_exp For_val a (Csem.Kdowhile2 r s k) (Kseq (makeif a Sskip Sbreak) (Kloop2 ts s' tk)) | match_Kfor2: forall r s3 s k s' a ts3 ts tk, tr_if r Sskip Sbreak s' -> tr_stmt s3 ts3 -> tr_stmt s ts -> match_cont k tk -> match_cont_exp For_val a (Csem.Kfor2 r s3 s k) (Kseq (makeif a Sskip Sbreak) (Kseq ts (Kloop1 (Ssequence s' ts) ts3 tk))) | match_Kswitch1: forall ls k a tls tk, tr_lblstmts ls tls -> match_cont k tk -> match_cont_exp For_val a (Csem.Kswitch1 ls k) (Kseq (Sswitch a tls) tk) | match_Kreturn: forall k a tk, match_cont k tk -> match_cont_exp For_val a (Csem.Kreturn k) (Kseq (Sreturn (Some a)) tk). Lemma match_cont_call: forall k tk, match_cont k tk -> match_cont (Csem.call_cont k) (call_cont tk). Proof. induction 1; simpl; auto. constructor. econstructor; eauto. Qed. 
(** Matching between states *) Inductive match_states: Csem.state -> state -> Prop := | match_exprstates: forall f r k e m tf sl tk le dest a tmps, tr_function f tf -> tr_top tge e le m dest r sl a tmps -> match_cont_exp dest a k tk -> match_states (Csem.ExprState f r k e m) (State tf Sskip (Kseqlist sl tk) e le m) | match_regularstates: forall f s k e m tf ts tk le, tr_function f tf -> tr_stmt s ts -> match_cont k tk -> match_states (Csem.State f s k e m) (State tf ts tk e le m) | match_callstates: forall fptr ty args k m tk, DUMMY_PROP -> match_cont k tk -> match_states (Csem.Callstate fptr ty args k m) (Callstate fptr ty args tk m) | match_returnstates: forall res k m tk, match_cont k tk -> match_states (Csem.Returnstate res k m) (Returnstate res tk m) | match_stuckstate: forall S, match_states Csem.Stuckstate S. (** Additional results on translation of statements *) Lemma tr_select_switch: forall n ls tls, tr_lblstmts ls tls -> tr_lblstmts (Csem.select_switch n ls) (select_switch n tls). Proof. assert (DFL: forall ls tls, tr_lblstmts ls tls -> tr_lblstmts (Csem.select_switch_default ls) (select_switch_default tls)). { induction 1; simpl. constructor. destruct c; auto. constructor; auto. } assert (CASE: forall n ls tls, tr_lblstmts ls tls -> match Csem.select_switch_case n ls with | None => select_switch_case n tls = None | Some ls' => exists tls', select_switch_case n tls = Some tls' /\ tr_lblstmts ls' tls' end). { induction 1; simpl; intros. auto. destruct c; auto. destruct (zeq z n); auto. econstructor; split; eauto. constructor; auto. } intros. unfold Csem.select_switch, select_switch. specialize (CASE n ls tls H). destruct (Csem.select_switch_case n ls) as [ls'|]. destruct CASE as [tls' [P Q]]. rewrite P. auto. rewrite CASE. apply DFL; auto. Qed. Lemma tr_seq_of_labeled_statement: forall ls tls, tr_lblstmts ls tls -> tr_stmt (Csem.seq_of_labeled_statement ls) (seq_of_labeled_statement tls). Proof. induction 1; simpl; constructor; auto. Qed. (** Commutation between translation and the "find label" operation. *) Section FIND_LABEL. Variable lbl: label. Definition nolabel (s: statement) : Prop := forall k, find_label lbl s k = None. Fixpoint nolabel_list (sl: list statement) : Prop := match sl with | nil => True | s1 :: sl' => nolabel s1 /\ nolabel_list sl' end. Lemma nolabel_list_app: forall sl2 sl1, nolabel_list sl1 -> nolabel_list sl2 -> nolabel_list (sl1 ++ sl2). Proof. induction sl1; simpl; intros. auto. tauto. Qed. Lemma makeseq_nolabel: forall sl, nolabel_list sl -> nolabel (makeseq sl). Proof. assert (forall sl s, nolabel s -> nolabel_list sl -> nolabel (makeseq_rec s sl)). induction sl; simpl; intros. auto. destruct H0. apply IHsl; auto. red. intros; simpl. rewrite H. apply H0. intros. unfold makeseq. apply H; auto. red. auto. Qed. Lemma makeif_nolabel: forall a s1 s2, nolabel s1 -> nolabel s2 -> nolabel (makeif a s1 s2). Proof. intros. functional induction (makeif a s1 s2); auto. red; simpl; intros. rewrite H; auto. red; simpl; intros. rewrite H; auto. Qed. Lemma make_set_nolabel: forall t a, nolabel (make_set t a). Proof. unfold make_set; intros; red; intros. destruct (chunk_for_volatile_type (typeof a)); auto. Qed. Lemma make_assign_nolabel: forall l r, nolabel (make_assign l r). Proof. unfold make_assign; intros; red; intros. destruct (chunk_for_volatile_type (typeof l)); auto. Qed. Lemma tr_rvalof_nolabel: forall ty a sl a' tmp, tr_rvalof ty a sl a' tmp -> nolabel_list sl. Proof. destruct 1; simpl; intuition. apply make_set_nolabel. Qed. 
Lemma nolabel_do_set: forall sd a, nolabel_list (do_set sd a). Proof. induction sd; intros; simpl; split; auto; red; auto. Qed. Lemma nolabel_final: forall dst a, nolabel_list (final dst a). Proof. destruct dst; simpl; intros. auto. auto. apply nolabel_do_set. Qed. Ltac NoLabelTac := match goal with | [ |- nolabel_list nil ] => exact I | [ |- nolabel_list (final _ _) ] => apply nolabel_final (*; NoLabelTac*) | [ |- nolabel_list (_ :: _) ] => simpl; split; NoLabelTac | [ |- nolabel_list (_ ++ _) ] => apply nolabel_list_app; NoLabelTac | [ H: _ -> nolabel_list ?x |- nolabel_list ?x ] => apply H; NoLabelTac | [ |- nolabel (makeseq _) ] => apply makeseq_nolabel; NoLabelTac | [ |- nolabel (makeif _ _ _) ] => apply makeif_nolabel; NoLabelTac | [ |- nolabel (make_set _ _) ] => apply make_set_nolabel | [ |- nolabel (make_assign _ _) ] => apply make_assign_nolabel | [ |- nolabel _ ] => red; intros; simpl; auto | [ |- _ /\ _ ] => split; NoLabelTac | _ => auto end. Lemma tr_find_label_expr: (forall le dst r sl a tmps, tr_expr le dst r sl a tmps -> nolabel_list sl) /\(forall le rl sl al tmps, tr_exprlist le rl sl al tmps -> nolabel_list sl). Proof. apply tr_expr_exprlist; intros; NoLabelTac. apply nolabel_do_set. eapply tr_rvalof_nolabel; eauto. apply nolabel_do_set. apply nolabel_do_set. eapply tr_rvalof_nolabel; eauto. eapply tr_rvalof_nolabel; eauto. eapply tr_rvalof_nolabel; eauto. Qed. Lemma tr_find_label_top: forall e le m dst r sl a tmps, tr_top tge e le m dst r sl a tmps -> nolabel_list sl. Proof. induction 1; intros; NoLabelTac. eapply (proj1 tr_find_label_expr); eauto. Qed. Lemma tr_find_label_expression: forall r s a, tr_expression r s a -> forall k, find_label lbl s k = None. Proof. intros. inv H. assert (nolabel (makeseq sl)). apply makeseq_nolabel. eapply tr_find_label_top with (e := empty_env) (le := PTree.empty val) (m := Mem.empty). eauto. apply H. Qed. Lemma tr_find_label_expr_stmt: forall r s, tr_expr_stmt r s -> forall k, find_label lbl s k = None. Proof. intros. inv H. assert (nolabel (makeseq sl)). apply makeseq_nolabel. eapply tr_find_label_top with (e := empty_env) (le := PTree.empty val) (m := Mem.empty). eauto. apply H. Qed. Lemma tr_find_label_if: forall r s, tr_if r Sskip Sbreak s -> forall k, find_label lbl s k = None. Proof. intros. inv H. assert (nolabel (makeseq (sl ++ makeif a Sskip Sbreak :: nil))). apply makeseq_nolabel. apply nolabel_list_app. eapply tr_find_label_top with (e := empty_env) (le := PTree.empty val) (m := Mem.empty). eauto. simpl; split; auto. apply makeif_nolabel. red; simpl; auto. red; simpl; auto. apply H. Qed. Lemma tr_find_label: forall s k ts tk (TR: tr_stmt s ts) (MC: match_cont k tk), match Csem.find_label lbl s k with | None => find_label lbl ts tk = None | Some (s', k') => exists ts', exists tk', find_label lbl ts tk = Some (ts', tk') /\ tr_stmt s' ts' /\ match_cont k' tk' end with tr_find_label_ls: forall s k ts tk (TR: tr_lblstmts s ts) (MC: match_cont k tk), match Csem.find_label_ls lbl s k with | None => find_label_ls lbl ts tk = None | Some (s', k') => exists ts', exists tk', find_label_ls lbl ts tk = Some (ts', tk') /\ tr_stmt s' ts' /\ match_cont k' tk' end. Proof. induction s; intros; inversion TR; subst; clear TR; simpl. auto. eapply tr_find_label_expr_stmt; eauto. (* seq *) exploit (IHs1 (Csem.Kseq s2 k)); eauto. constructor; eauto. destruct (Csem.find_label lbl s1 (Csem.Kseq s2 k)) as [[s' k'] | ]. intros [ts' [tk' [A [B C]]]]. rewrite A. exists ts'; exists tk'; auto. intro EQ. rewrite EQ. eapply IHs2; eauto. 
(* if empty *) rename s' into sr. rewrite (tr_find_label_expression _ _ _ H3). auto. (* if not empty *) rename s' into sr. rewrite (tr_find_label_expression _ _ _ H2). exploit (IHs1 k); eauto. destruct (Csem.find_label lbl s1 k) as [[s' k'] | ]. intros [ts' [tk' [A [B C]]]]. rewrite A. exists ts'; exists tk'; intuition. intro EQ. rewrite EQ. eapply IHs2; eauto. (* while *) rename s' into sr. rewrite (tr_find_label_if _ _ H1); auto. exploit (IHs (Kwhile2 e s k)); eauto. econstructor; eauto. destruct (Csem.find_label lbl s (Kwhile2 e s k)) as [[s' k'] | ]. intros [ts' [tk' [A [B C]]]]. rewrite A. exists ts'; exists tk'; intuition. intro EQ. rewrite EQ. auto. (* dowhile *) rename s' into sr. rewrite (tr_find_label_if _ _ H1); auto. exploit (IHs (Kdowhile1 e s k)); eauto. econstructor; eauto. destruct (Csem.find_label lbl s (Kdowhile1 e s k)) as [[s' k'] | ]. intros [ts' [tk' [A [B C]]]]. rewrite A. exists ts'; exists tk'; intuition. intro EQ. rewrite EQ. auto. (* for skip *) rename s' into sr. rewrite (tr_find_label_if _ _ H4); auto. exploit (IHs3 (Csem.Kfor3 e s2 s3 k)); eauto. econstructor; eauto. destruct (Csem.find_label lbl s3 (Csem.Kfor3 e s2 s3 k)) as [[s' k'] | ]. intros [ts' [tk' [A [B C]]]]. rewrite A. exists ts'; exists tk'; intuition. intro EQ. rewrite EQ. exploit (IHs2 (Csem.Kfor4 e s2 s3 k)); eauto. econstructor; eauto. (* for not skip *) rename s' into sr. rewrite (tr_find_label_if _ _ H3); auto. exploit (IHs1 (Csem.Kseq (Csyntax.Sfor Csyntax.Sskip e s2 s3) k)); eauto. econstructor; eauto. econstructor; eauto. destruct (Csem.find_label lbl s1 (Csem.Kseq (Csyntax.Sfor Csyntax.Sskip e s2 s3) k)) as [[s' k'] | ]. intros [ts' [tk' [A [B C]]]]. rewrite A. exists ts'; exists tk'; intuition. intro EQ; rewrite EQ. exploit (IHs3 (Csem.Kfor3 e s2 s3 k)); eauto. econstructor; eauto. destruct (Csem.find_label lbl s3 (Csem.Kfor3 e s2 s3 k)) as [[s'' k''] | ]. intros [ts' [tk' [A [B C]]]]. rewrite A. exists ts'; exists tk'; intuition. intro EQ'. rewrite EQ'. exploit (IHs2 (Csem.Kfor4 e s2 s3 k)); eauto. econstructor; eauto. (* break, continue, return 0 *) auto. auto. auto. (* return 1 *) rewrite (tr_find_label_expression _ _ _ H0). auto. (* switch *) rewrite (tr_find_label_expression _ _ _ H1). apply tr_find_label_ls. auto. constructor; auto. (* labeled stmt *) destruct (ident_eq lbl l). exists ts0; exists tk; auto. apply IHs; auto. (* goto *) auto. induction s; intros; inversion TR; subst; clear TR; simpl. (* nil *) auto. (* case *) exploit (tr_find_label s (Csem.Kseq (Csem.seq_of_labeled_statement s0) k)); eauto. econstructor; eauto. apply tr_seq_of_labeled_statement; eauto. destruct (Csem.find_label lbl s (Csem.Kseq (Csem.seq_of_labeled_statement s0) k)) as [[s' k'] | ]. intros [ts' [tk' [A [B C]]]]. rewrite A. exists ts'; exists tk'; auto. intro EQ. rewrite EQ. eapply IHs; eauto. Qed. End FIND_LABEL. (** Anti-stuttering measure *) (** There are some stuttering steps in the translation: - The execution of [Sdo a] where [a] is side-effect free, which is three transitions in the source: << Sdo a, k ---> a, Kdo k ---> rval v, Kdo k ---> Sskip, k >> but the translation, which is [Sskip], makes no transitions. - The reduction [Ecomma (Eval v) r2 --> r2]. - The reduction [Eparen (Eval v) --> Eval v] in a [For_effects] context. The following measure decreases for these stuttering steps. 
*) Fixpoint esize (a: Csyntax.expr) : nat := match a with | Csyntax.Eloc _ _ _ => 1%nat | Csyntax.Evar _ _ => 1%nat | Csyntax.Ederef r1 _ => S(esize r1) | Csyntax.Efield l1 _ _ => S(esize l1) | Csyntax.Eval _ _ => O | Csyntax.Evalof l1 _ => S(esize l1) | Csyntax.Eaddrof l1 _ => S(esize l1) | Csyntax.Eunop _ r1 _ => S(esize r1) | Csyntax.Ebinop _ r1 r2 _ => S(esize r1 + esize r2)%nat | Csyntax.Ecast r1 _ => S(esize r1) | Csyntax.Eseqand r1 _ _ => S(esize r1) | Csyntax.Eseqor r1 _ _ => S(esize r1) | Csyntax.Econdition r1 _ _ _ => S(esize r1) | Csyntax.Esizeof _ _ => 1%nat | Csyntax.Ealignof _ _ => 1%nat | Csyntax.Eassign l1 r2 _ => S(esize l1 + esize r2)%nat | Csyntax.Eassignop _ l1 r2 _ _ => S(esize l1 + esize r2)%nat | Csyntax.Epostincr _ l1 _ => S(esize l1) | Csyntax.Ecomma r1 r2 _ => S(esize r1 + esize r2)%nat | Csyntax.Ecall r1 rl2 _ => S(esize r1 + esizelist rl2)%nat | Csyntax.Ebuiltin ef _ rl _ => S(esizelist rl)%nat | Csyntax.Eparen r1 _ _ => S(esize r1) end with esizelist (el: Csyntax.exprlist) : nat := match el with | Csyntax.Enil => O | Csyntax.Econs r1 rl2 => (esize r1 + esizelist rl2)%nat end. Definition measure (st: Csem.state) : nat := match st with | Csem.ExprState _ r _ _ _ => (esize r + 1)%nat | Csem.State _ Csyntax.Sskip _ _ _ => 0%nat | Csem.State _ (Csyntax.Sdo r) _ _ _ => (esize r + 2)%nat | Csem.State _ (Csyntax.Sifthenelse r _ _) _ _ _ => (esize r + 2)%nat | _ => 0%nat end. Lemma leftcontext_size: forall from to C, leftcontext from to C -> forall e1 e2, (esize e1 < esize e2)%nat -> (esize (C e1) < esize (C e2))%nat with leftcontextlist_size: forall from C, leftcontextlist from C -> forall e1 e2, (esize e1 < esize e2)%nat -> (esizelist (C e1) < esizelist (C e2))%nat. Proof. induction 1; intros; simpl; auto with arith. exploit leftcontextlist_size; eauto. auto with arith. exploit leftcontextlist_size; eauto. auto with arith. induction 1; intros; simpl; auto with arith. exploit leftcontext_size; eauto. auto with arith. Qed. (** Forward simulation for expressions. *) Lemma tr_val_gen: forall le dst v ty a tmp, typeof a = ty -> (forall tge e le' m, (forall id, In id tmp -> le'!id = le!id) -> eval_expr tge e le' m a v) -> tr_expr le dst (Csyntax.Eval v ty) (final dst a) a tmp. Proof. intros. destruct dst; simpl; econstructor; auto. Qed. Lemma estep_simulation: forall S1 t S2, Cstrategy.estep se ge S1 t S2 -> forall S1' (MS: match_states S1 S1'), exists S2', (plus step1 tse tge S1' t S2' \/ (star step1 tse tge S1' t S2' /\ measure S2 < measure S1)%nat) /\ match_states S2 S2'. Proof. induction 1; intros; inv MS. (* expr *) assert (tr_expr le dest r sl a tmps). inv H9. contradiction. auto. exploit tr_simple_rvalue; eauto. destruct dest. (* for val *) intros [SL1 [TY1 EV1]]. subst sl. econstructor; split. right; split. apply star_refl. destruct r; simpl; (contradiction || omega). econstructor; eauto. instantiate (1 := tmps). apply tr_top_val_val; auto. (* for effects *) intros SL1. subst sl. econstructor; split. right; split. apply star_refl. destruct r; simpl; (contradiction || omega). econstructor; eauto. instantiate (1 := tmps). apply tr_top_base. constructor. (* for set *) inv H10. (* rval volatile *) exploit tr_top_leftcontext; eauto. clear H11. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H2. inv H7; try congruence. exploit tr_simple_lvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl. econstructor; split. left. eapply plus_two. constructor. eapply step_make_set; eauto. traceEq. econstructor; eauto. 
change (final dst' (Etempvar t0 (Csyntax.typeof l)) ++ sl2) with (nil ++ (final dst' (Etempvar t0 (Csyntax.typeof l)) ++ sl2)). apply S. apply tr_val_gen. auto. intros. constructor. rewrite H5; auto. apply PTree.gss. intros. apply PTree.gso. red; intros; subst; elim H5; auto. auto. (* seqand true *) exploit tr_top_leftcontext; eauto. clear H9. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H2. (* for val *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := true) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. apply tr_paren_val with (a1 := a2); auto. apply tr_expr_monotone with tmp2; eauto. auto. auto. (* for effects *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := true) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. apply tr_paren_effects with (a1 := a2); auto. apply tr_expr_monotone with tmp2; eauto. auto. auto. (* for set *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := true) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. apply tr_paren_set with (a1 := a2) (t := sd_temp sd); auto. apply tr_expr_monotone with tmp2; eauto. auto. auto. (* seqand false *) exploit tr_top_leftcontext; eauto. clear H9. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H2. (* for val *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := false) (v1 := v); auto. congruence. apply star_one. constructor. constructor. reflexivity. reflexivity. eapply match_exprstates; eauto. change sl2 with (nil ++ sl2). apply S. econstructor; eauto. intros. constructor. rewrite H2. apply PTree.gss. auto. intros. apply PTree.gso. congruence. auto. (* for effects *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. apply step_makeif with (b := false) (v1 := v); auto. congruence. reflexivity. eapply match_exprstates; eauto. change sl2 with (nil ++ sl2). apply S. econstructor; eauto. auto. auto. (* for set *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := false) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. econstructor; eauto. intros. constructor. auto. auto. (* seqor true *) exploit tr_top_leftcontext; eauto. clear H9. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H2. (* for val *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := true) (v1 := v); auto. congruence. apply star_one. constructor. constructor. reflexivity. 
reflexivity. eapply match_exprstates; eauto. change sl2 with (nil ++ sl2). apply S. econstructor; eauto. intros. constructor. rewrite H2. apply PTree.gss. auto. intros. apply PTree.gso. congruence. auto. (* for effects *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. apply step_makeif with (b := true) (v1 := v); auto. congruence. reflexivity. eapply match_exprstates; eauto. change sl2 with (nil ++ sl2). apply S. econstructor; eauto. auto. auto. (* for set *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := true) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. econstructor; eauto. intros. constructor. auto. auto. (* seqand false *) exploit tr_top_leftcontext; eauto. clear H9. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H2. (* for val *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := false) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. apply tr_paren_val with (a1 := a2); auto. apply tr_expr_monotone with tmp2; eauto. auto. auto. (* for effects *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := false) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. apply tr_paren_effects with (a1 := a2); auto. apply tr_expr_monotone with tmp2; eauto. auto. auto. (* for set *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := false) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. apply tr_paren_set with (a1 := a2) (t := sd_temp sd); auto. apply tr_expr_monotone with tmp2; eauto. auto. auto. (* condition *) exploit tr_top_leftcontext; eauto. clear H9. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H2. (* for value *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. destruct b. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := true) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. econstructor; eauto. apply tr_expr_monotone with tmp2; eauto. auto. auto. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := false) (v1 := v); auto. congruence. apply push_seq. reflexivity. reflexivity. rewrite <- Kseqlist_app. eapply match_exprstates; eauto. apply S. econstructor; eauto. apply tr_expr_monotone with tmp3; eauto. auto. auto. (* for effects *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. destruct b. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := true) (v1 := v); auto. congruence. 
apply push_seq. reflexivity. traceEq. rewrite <- Kseqlist_app. econstructor. eauto. apply S. econstructor; eauto. apply tr_expr_monotone with tmp2; eauto. econstructor; eauto. auto. auto. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := false) (v1 := v); auto. congruence. apply push_seq. reflexivity. traceEq. rewrite <- Kseqlist_app. econstructor. eauto. apply S. econstructor; eauto. apply tr_expr_monotone with tmp3; eauto. econstructor; eauto. auto. auto. (* for set *) exploit tr_simple_rvalue; eauto. intros [SL [TY EV]]. subst sl0; simpl Kseqlist. destruct b. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := true) (v1 := v); auto. congruence. apply push_seq. reflexivity. traceEq. rewrite <- Kseqlist_app. econstructor. eauto. apply S. econstructor; eauto. apply tr_expr_monotone with tmp2; eauto. econstructor; eauto. auto. auto. econstructor; split. left. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (b := false) (v1 := v); auto. congruence. apply push_seq. reflexivity. traceEq. rewrite <- Kseqlist_app. econstructor. eauto. apply S. econstructor; eauto. apply tr_expr_monotone with tmp3; eauto. econstructor; eauto. auto. auto. (* assign *) exploit tr_top_leftcontext; eauto. clear H12. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H4. (* for effects *) exploit tr_simple_rvalue; eauto. intros [SL2 [TY2 EV2]]. exploit tr_simple_lvalue; eauto. intros [SL1 [TY1 EV1]]. subst; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. apply star_one. eapply step_make_assign; eauto. rewrite <- TY2; eauto. traceEq. econstructor. auto. change sl2 with (nil ++ sl2). apply S. constructor. auto. auto. auto. (* for value *) exploit tr_simple_rvalue; eauto. intros [SL2 [TY2 EV2]]. exploit tr_simple_lvalue. eauto. eapply tr_expr_invariant with (le' := PTree.set t0 v' le). eauto. intros. apply PTree.gso. intuition congruence. intros [SL1 [TY1 EV1]]. subst; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. eapply star_left. constructor. econstructor. eauto. rewrite <- TY2; eauto. eapply star_left. constructor. apply star_one. eapply step_make_assign; eauto. constructor. apply PTree.gss. simpl. eapply cast_idempotent; eauto. reflexivity. reflexivity. traceEq. econstructor. auto. apply S. apply tr_val_gen. auto. intros. constructor. rewrite H4; auto. apply PTree.gss. intros. apply PTree.gso. intuition congruence. auto. auto. (* assignop *) exploit tr_top_leftcontext; eauto. clear H15. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H6. (* for effects *) exploit tr_simple_lvalue; eauto. intros [SL1 [TY1 EV1]]. exploit step_tr_rvalof; eauto. intros [le' [EXEC [EV3 [TY3 INV]]]]. exploit tr_simple_lvalue. eauto. eapply tr_expr_invariant with (le := le) (le' := le'). eauto. intros. apply INV. NOTIN. intros [? [? EV1']]. exploit tr_simple_rvalue. eauto. eapply tr_expr_invariant with (le := le) (le' := le'). eauto. intros. apply INV. NOTIN. simpl. intros [SL2 [TY2 EV2]]. subst; simpl Kseqlist. econstructor; split. left. eapply star_plus_trans. rewrite app_ass. rewrite Kseqlist_app. eexact EXEC. eapply plus_two. simpl. econstructor. eapply step_make_assign; eauto. econstructor. eexact EV3. eexact EV2. rewrite TY3; rewrite <- TY1; rewrite <- TY2; rewrite comp_env_preserved; auto. reflexivity. traceEq. econstructor. auto. change sl2 with (nil ++ sl2). apply S. constructor. auto. auto. auto. 
(* for value *) exploit tr_simple_lvalue; eauto. intros [SL1 [TY1 EV1]]. exploit step_tr_rvalof; eauto. intros [le' [EXEC [EV3 [TY3 INV]]]]. exploit tr_simple_lvalue. eauto. eapply tr_expr_invariant with (le := le) (le' := le'). eauto. intros. apply INV. NOTIN. intros [? [? EV1']]. exploit tr_simple_rvalue. eauto. eapply tr_expr_invariant with (le := le) (le' := le'). eauto. intros. apply INV. NOTIN. simpl. intros [SL2 [TY2 EV2]]. exploit tr_simple_lvalue. eauto. eapply tr_expr_invariant with (le := le) (le' := PTree.set t v4 le'). eauto. intros. rewrite PTree.gso. apply INV. NOTIN. intuition congruence. intros [? [? EV1'']]. subst; simpl Kseqlist. econstructor; split. left. rewrite app_ass. rewrite Kseqlist_app. eapply star_plus_trans. eexact EXEC. simpl. eapply plus_four. econstructor. econstructor. econstructor. econstructor. eexact EV3. eexact EV2. rewrite TY3; rewrite <- TY1; rewrite <- TY2; rewrite comp_env_preserved; eauto. eassumption. econstructor. eapply step_make_assign; eauto. constructor. apply PTree.gss. simpl. eapply cast_idempotent; eauto. reflexivity. traceEq. econstructor. auto. apply S. apply tr_val_gen. auto. intros. constructor. rewrite H10; auto. apply PTree.gss. intros. rewrite PTree.gso. apply INV. red; intros; elim H10; auto. intuition congruence. auto. auto. (* assignop stuck *) exploit tr_top_leftcontext; eauto. clear H12. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H4. (* for effects *) exploit tr_simple_lvalue; eauto. intros [SL1 [TY1 EV1]]. exploit tr_simple_rvalue; eauto. intros [SL2 [TY2 EV2]]. exploit step_tr_rvalof; eauto. intros [le' [EXEC [EV3 [TY3 INV]]]]. subst; simpl Kseqlist. econstructor; split. right; split. rewrite app_ass. rewrite Kseqlist_app. eexact EXEC. simpl. omega. constructor. (* for value *) exploit tr_simple_lvalue; eauto. intros [SL1 [TY1 EV1]]. exploit tr_simple_rvalue; eauto. intros [SL2 [TY2 EV2]]. exploit step_tr_rvalof; eauto. intros [le' [EXEC [EV3 [TY3 INV]]]]. subst; simpl Kseqlist. econstructor; split. right; split. rewrite app_ass. rewrite Kseqlist_app. eexact EXEC. simpl. omega. constructor. (* postincr *) exploit tr_top_leftcontext; eauto. clear H14. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H5. (* for effects *) exploit tr_simple_lvalue; eauto. intros [SL1 [TY1 EV1]]. exploit step_tr_rvalof; eauto. intros [le' [EXEC [EV3 [TY3 INV]]]]. exploit tr_simple_lvalue. eauto. eapply tr_expr_invariant with (le := le) (le' := le'). eauto. intros. apply INV. NOTIN. intros [? [? EV1']]. subst; simpl Kseqlist. econstructor; split. left. rewrite app_ass; rewrite Kseqlist_app. eapply star_plus_trans. eexact EXEC. eapply plus_two. simpl. constructor. eapply step_make_assign; eauto. unfold transl_incrdecr. destruct id; simpl in H2. econstructor. eauto. constructor. rewrite TY3; rewrite <- TY1; rewrite comp_env_preserved. simpl; eauto. econstructor. eauto. constructor. rewrite TY3; rewrite <- TY1; rewrite comp_env_preserved. simpl; eauto. destruct id; auto. reflexivity. traceEq. econstructor. auto. change sl2 with (nil ++ sl2). apply S. constructor. auto. auto. auto. (* for value *) exploit tr_simple_lvalue; eauto. intros [SL1 [TY1 EV1]]. exploit tr_simple_lvalue. eauto. eapply tr_expr_invariant with (le' := PTree.set t v1 le). eauto. intros. apply PTree.gso. intuition congruence. intros [SL2 [TY2 EV2]]. subst; simpl Kseqlist. econstructor; split. left. eapply plus_four. constructor. eapply step_make_set; eauto. constructor. eapply step_make_assign; eauto. unfold transl_incrdecr. 
destruct id; simpl in H2. econstructor. constructor. apply PTree.gss. constructor. rewrite comp_env_preserved; simpl; eauto. econstructor. constructor. apply PTree.gss. constructor. rewrite comp_env_preserved; simpl; eauto. destruct id; auto. traceEq. econstructor. auto. apply S. apply tr_val_gen. auto. intros. econstructor; eauto. rewrite H5; auto. apply PTree.gss. intros. apply PTree.gso. intuition congruence. auto. auto. (* postincr stuck *) exploit tr_top_leftcontext; eauto. clear H11. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H3. (* for effects *) exploit tr_simple_lvalue; eauto. intros [SL1 [TY1 EV1]]. exploit step_tr_rvalof; eauto. intros [le' [EXEC [EV3 [TY3 INV]]]]. subst. simpl Kseqlist. econstructor; split. right; split. rewrite app_ass; rewrite Kseqlist_app. eexact EXEC. simpl; omega. constructor. (* for value *) exploit tr_simple_lvalue; eauto. intros [SL1 [TY1 EV1]]. subst. simpl Kseqlist. econstructor; split. left. eapply plus_two. constructor. eapply step_make_set; eauto. traceEq. constructor. (* comma *) exploit tr_top_leftcontext; eauto. clear H9. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H1. exploit tr_simple_rvalue; eauto. simpl; intro SL1. subst sl0; simpl Kseqlist. econstructor; split. right; split. apply star_refl. simpl. apply plus_lt_compat_r. apply (leftcontext_size _ _ _ H). simpl. omega. econstructor; eauto. apply S. eapply tr_expr_monotone; eauto. auto. auto. (* paren *) exploit tr_top_leftcontext; eauto. clear H9. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H2. (* for value *) exploit tr_simple_rvalue; eauto. intros [b [SL1 [TY1 EV1]]]. subst sl1; simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. apply star_one. econstructor. econstructor; eauto. rewrite <- TY1; eauto. traceEq. econstructor; eauto. change sl2 with (final For_val (Etempvar t (Csyntax.typeof r)) ++ sl2). apply S. constructor. auto. intros. constructor. rewrite H2; auto. apply PTree.gss. intros. apply PTree.gso. intuition congruence. auto. (* for effects *) econstructor; split. right; split. apply star_refl. simpl. apply plus_lt_compat_r. apply (leftcontext_size _ _ _ H). simpl. omega. econstructor; eauto. exploit tr_simple_rvalue; eauto. simpl. intros A. subst sl1. apply S. constructor; auto. auto. auto. (* for set *) exploit tr_simple_rvalue; eauto. simpl. intros [b [SL1 [TY1 EV1]]]. subst sl1. simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. apply star_one. econstructor. econstructor; eauto. rewrite <- TY1; eauto. traceEq. econstructor; eauto. apply S. constructor; auto. intros. constructor. rewrite H2. apply PTree.gss. auto. intros. apply PTree.gso. congruence. auto. (* call *) exploit tr_top_leftcontext; eauto. clear H12. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H5. (* for effects *) exploit tr_simple_rvalue; eauto. intros [SL1 [TY1 EV1]]. exploit tr_simple_exprlist; eauto. intros [SL2 EV2]. subst. simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. apply star_one. econstructor; eauto. rewrite <- TY1; eauto. traceEq. constructor; auto. econstructor; eauto. intros. change sl2 with (nil ++ sl2). apply S. constructor. auto. auto. (* for value *) exploit tr_simple_rvalue; eauto. intros [SL1 [TY1 EV1]]. exploit tr_simple_exprlist; eauto. intros [SL2 EV2]. subst. simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. apply star_one. econstructor; eauto. rewrite <- TY1; eauto. traceEq. constructor; auto. econstructor; eauto. 
intros. apply S. destruct dst'; constructor. auto. intros. constructor. rewrite H5; auto. apply PTree.gss. auto. intros. constructor. rewrite H5; auto. apply PTree.gss. intros. apply PTree.gso. intuition congruence. auto. (* builtin *) exploit tr_top_leftcontext; eauto. clear H9. intros [dst' [sl1 [sl2 [a' [tmp' [P [Q [R S]]]]]]]]. inv P. inv H2. (* for effects *) exploit tr_simple_exprlist; eauto. intros [SL EV]. subst. simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. apply star_one. econstructor; eauto. eapply external_call_symbols_preserved; eauto. traceEq. econstructor; eauto. change sl2 with (nil ++ sl2). apply S. constructor. simpl; auto. auto. (* for value *) exploit tr_simple_exprlist; eauto. intros [SL EV]. subst. simpl Kseqlist. econstructor; split. left. eapply plus_left. constructor. apply star_one. econstructor; eauto. eapply external_call_symbols_preserved; eauto. traceEq. econstructor; eauto. change sl2 with (nil ++ sl2). apply S. apply tr_val_gen. auto. intros. constructor. rewrite H2; auto. simpl. apply PTree.gss. intros; simpl. apply PTree.gso. intuition congruence. auto. Qed. (** Forward simulation for statements. *) Lemma tr_top_val_for_val_inv: forall e le m v ty sl a tmps, tr_top tge e le m For_val (Csyntax.Eval v ty) sl a tmps -> sl = nil /\ typeof a = ty /\ eval_expr tge e le m a v. Proof. intros. inv H. auto. inv H0. auto. Qed. Lemma alloc_variables_preserved: forall e m params e' m', Csem.alloc_variables ge e m params e' m' -> alloc_variables tge e m params e' m'. Proof. induction 1; econstructor; eauto. rewrite comp_env_preserved; auto. Qed. Lemma bind_parameters_preserved: forall e m params args m', Csem.bind_parameters se ge e m params args m' -> bind_parameters tge e m params args m'. Proof. induction 1; econstructor; eauto. inv H0. - eapply assign_loc_value; eauto. - inv H4. eapply assign_loc_value; eauto. - rewrite <- comp_env_preserved in *. eapply assign_loc_copy; eauto. Qed. Lemma blocks_of_env_preserved: forall e, blocks_of_env tge e = Csem.blocks_of_env ge e. Proof. intros; unfold blocks_of_env, Csem.blocks_of_env. unfold block_of_binding, Csem.block_of_binding. rewrite comp_env_preserved. auto. Qed. Lemma sstep_simulation: forall S1 t S2, Csem.sstep se ge S1 t S2 -> forall S1' (MS: match_states S1 S1'), exists S2', (plus step1 tse tge S1' t S2' \/ (star step1 tse tge S1' t S2' /\ measure S2 < measure S1)%nat) /\ match_states S2 S2'. Proof. induction 1; intros; inv MS. (* do 1 *) inv H6. inv H0. econstructor; split. right; split. apply push_seq. simpl. omega. econstructor; eauto. constructor. auto. (* do 2 *) inv H7. inv H6. inv H. econstructor; split. right; split. apply star_refl. simpl. omega. econstructor; eauto. constructor. (* seq *) inv H6. econstructor; split. left. apply plus_one. constructor. econstructor; eauto. constructor; auto. (* skip seq *) inv H6; inv H7. econstructor; split. left. apply plus_one; constructor. econstructor; eauto. (* continue seq *) inv H6; inv H7. econstructor; split. left. apply plus_one; constructor. econstructor; eauto. constructor. (* break seq *) inv H6; inv H7. econstructor; split. left. apply plus_one; constructor. econstructor; eauto. constructor. (* ifthenelse *) inv H6. (* ifthenelse empty *) inv H3. econstructor; split. left. eapply plus_left. constructor. apply push_seq. econstructor; eauto. econstructor; eauto. econstructor; eauto. (* ifthenelse non empty *) inv H2. econstructor; split. left. eapply plus_left. constructor. apply push_seq. traceEq. econstructor; eauto. 
econstructor; eauto. (* ifthenelse *) inv H8. (* ifthenelse empty *) exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split; simpl. right. destruct b; econstructor; eauto. eapply star_left. apply step_skip_seq. econstructor. traceEq. eapply star_left. apply step_skip_seq. econstructor. traceEq. destruct b; econstructor; eauto. econstructor; eauto. econstructor; eauto. (* ifthenelse non empty *) exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split. left. eapply plus_two. constructor. apply step_ifthenelse with (v1 := v) (b := b); auto. traceEq. destruct b; econstructor; eauto. (* while *) inv H6. inv H1. econstructor; split. left. eapply plus_left. constructor. eapply star_left. constructor. apply push_seq. reflexivity. traceEq. rewrite Kseqlist_app. econstructor; eauto. simpl. econstructor; eauto. econstructor; eauto. (* while false *) inv H8. exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split. left. simpl. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (v1 := v) (b := false); auto. eapply star_two. constructor. apply step_break_loop1. reflexivity. reflexivity. traceEq. constructor; auto. constructor. (* while true *) inv H8. exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split. left. simpl. eapply plus_left. constructor. eapply star_right. apply step_makeif with (v1 := v) (b := true); auto. constructor. reflexivity. traceEq. constructor; auto. constructor; auto. (* skip-or-continue while *) assert (ts = Sskip \/ ts = Scontinue). destruct H; subst s0; inv H7; auto. inv H8. econstructor; split. left. eapply plus_two. apply step_skip_or_continue_loop1; auto. apply step_skip_loop2. traceEq. constructor; auto. constructor; auto. (* break while *) inv H6. inv H7. econstructor; split. left. apply plus_one. apply step_break_loop1. constructor; auto. constructor. (* dowhile *) inv H6. econstructor; split. left. apply plus_one. apply step_loop. constructor; auto. constructor; auto. (* skip_or_continue dowhile *) assert (ts = Sskip \/ ts = Scontinue). destruct H; subst s0; inv H7; auto. inv H8. inv H4. econstructor; split. left. eapply plus_left. apply step_skip_or_continue_loop1. auto. apply push_seq. traceEq. rewrite Kseqlist_app. econstructor; eauto. simpl. econstructor; auto. econstructor; eauto. (* dowhile false *) inv H8. exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split. left. simpl. eapply plus_left. constructor. eapply star_right. apply step_makeif with (v1 := v) (b := false); auto. constructor. reflexivity. traceEq. constructor; auto. constructor. (* dowhile true *) inv H8. exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split. left. simpl. eapply plus_left. constructor. eapply star_right. apply step_makeif with (v1 := v) (b := true); auto. constructor. reflexivity. traceEq. constructor; auto. constructor; auto. (* break dowhile *) inv H6. inv H7. econstructor; split. left. apply plus_one. apply step_break_loop1. constructor; auto. constructor. (* for start *) inv H7. congruence. econstructor; split. left; apply plus_one. constructor. econstructor; eauto. constructor; auto. econstructor; eauto. (* for *) inv H6; try congruence. inv H2. econstructor; split. left. eapply plus_left. apply step_loop. eapply star_left. constructor. apply push_seq. reflexivity. traceEq. rewrite Kseqlist_app. econstructor; eauto. simpl. constructor; auto. econstructor; eauto. (* for false *) inv H8. 
exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split. left. simpl. eapply plus_left. constructor. eapply star_trans. apply step_makeif with (v1 := v) (b := false); auto. eapply star_two. constructor. apply step_break_loop1. reflexivity. reflexivity. traceEq. constructor; auto. constructor. (* for true *) inv H8. exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split. left. simpl. eapply plus_left. constructor. eapply star_right. apply step_makeif with (v1 := v) (b := true); auto. constructor. reflexivity. traceEq. constructor; auto. constructor; auto. (* skip_or_continue for3 *) assert (ts = Sskip \/ ts = Scontinue). destruct H; subst x; inv H7; auto. inv H8. econstructor; split. left. apply plus_one. apply step_skip_or_continue_loop1. auto. econstructor; eauto. econstructor; auto. (* break for3 *) inv H6. inv H7. econstructor; split. left. apply plus_one. apply step_break_loop1. econstructor; eauto. constructor. (* skip for4 *) inv H6. inv H7. econstructor; split. left. apply plus_one. constructor. econstructor; eauto. constructor; auto. (* return none *) inv H7. econstructor; split. left. apply plus_one. econstructor; eauto. rewrite blocks_of_env_preserved; eauto. constructor. apply match_cont_call; auto. (* return some 1 *) inv H6. inv H0. econstructor; split. left; eapply plus_left. constructor. apply push_seq. traceEq. econstructor; eauto. constructor. auto. (* return some 2 *) inv H9. exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split. left. eapply plus_two. constructor. econstructor. eauto. erewrite function_return_preserved; eauto. rewrite blocks_of_env_preserved; eauto. eauto. traceEq. constructor. apply match_cont_call; auto. (* skip return *) inv H8. assert (is_call_cont tk). inv H9; simpl in *; auto. econstructor; split. left. apply plus_one. apply step_skip_call; eauto. rewrite blocks_of_env_preserved; eauto. constructor. auto. (* switch *) inv H6. inv H1. econstructor; split. left; eapply plus_left. constructor. apply push_seq. traceEq. econstructor; eauto. constructor; auto. (* expr switch *) inv H8. exploit tr_top_val_for_val_inv; eauto. intros [A [B C]]. subst. econstructor; split. left; eapply plus_two. constructor. econstructor; eauto. traceEq. econstructor; eauto. apply tr_seq_of_labeled_statement. apply tr_select_switch. auto. constructor; auto. (* skip-or-break switch *) assert (ts = Sskip \/ ts = Sbreak). destruct H; subst x; inv H7; auto. inv H8. econstructor; split. left; apply plus_one. apply step_skip_break_switch. auto. constructor; auto. constructor. (* continue switch *) inv H6. inv H7. econstructor; split. left; apply plus_one. apply step_continue_switch. constructor; auto. constructor. (* label *) inv H6. econstructor; split. left; apply plus_one. constructor. constructor; auto. (* goto *) inv H7. inversion H6; subst. exploit tr_find_label. eauto. apply match_cont_call. eauto. instantiate (1 := lbl). rewrite H. intros [ts' [tk' [P [Q R]]]]. econstructor; split. left. apply plus_one. econstructor; eauto. econstructor; eauto. (* internal function *) exploit functions_translated; eauto. intros [tfd [J K]]. inv K. inversion H3; subst. econstructor; split. left; apply plus_one. eapply step_internal_function; eauto. erewrite type_of_fundef_preserved; eauto. econstructor. econstructor; eauto. econstructor. rewrite H6; rewrite H7; auto. rewrite H6; rewrite H7. eapply alloc_variables_preserved; eauto. rewrite H6. eapply bind_parameters_preserved; eauto. eauto. 
constructor; auto. (* external function *) exploit functions_translated; eauto. intros [tfd [J K]]. inv K. econstructor; split. left; apply plus_one. econstructor; eauto. eapply external_call_symbols_preserved; eauto. constructor; auto. (* return *) inv H3. econstructor; split. left; apply plus_one. constructor. econstructor; eauto. Qed. (** Semantic preservation *) Theorem simulation: forall S1 t S2, Cstrategy.step se ge S1 t S2 -> forall S1' (MS: match_states S1 S1'), exists S2', (plus step1 tse tge S1' t S2' \/ (star step1 tse tge S1' t S2' /\ measure S2 < measure S1)%nat) /\ match_states S2 S2'. Proof. intros S1 t S2 STEP. destruct STEP. apply estep_simulation; auto. apply sstep_simulation; auto. Qed. End CORELEMMA. Section WHOLE. Let ge := Csem.globalenv prog. Let tge := globalenv tprog. Let MATCH_CGENV: Genv.match_genvs (match_globdef (fun (ctx : AST.program Csyntax.fundef type) f tf => tr_fundef f tf) eq prog) ge tge /\ genv_cenv tge = Csem.genv_cenv ge. Proof. intros. constructor. - apply Genv.globalenvs_match. apply TRANSL. - unfold tge, ge. destruct prog, tprog; simpl. destruct TRANSL as [_ EQ]. simpl in EQ. congruence. Qed. Lemma transl_initial_states: forall S, Csem.initial_state prog S -> exists S', Clight.initial_state tprog S' /\ match_states tge S S'. Proof. intros. inv H. econstructor; split. econstructor; eauto. eapply (Genv.init_mem_match (proj1 TRANSL)); eauto. replace (prog_main tprog) with (prog_main prog). assert (Genv.globalenv tprog = tge.(genv_genv)) by auto. rewrite H. erewrite symbols_preserved; eauto. destruct TRANSL. destruct H as (A & B & C). simpl in B. auto. constructor. auto. constructor. Qed. Lemma transl_final_states: forall S S' r, match_states tge S S' -> Csem.final_state S r -> Clight.final_state S' r. Proof. intros. inv H0. inv H. inv H4. constructor. Qed. Theorem transl_program_correct: forward_simulation (Cstrategy.semantics prog) (Clight.semantics1 tprog). Proof. eapply forward_simulation_star_wf with (order := ltof _ measure). eapply senv_preserved; auto. eexact transl_initial_states. eexact transl_final_states. apply well_founded_ltof. apply simulation; auto. eapply senv_preserved; auto. Qed. End WHOLE. End PRESERVATION. (** ** Commutation with linking *) Instance TransfSimplExprLink : TransfLink match_prog. Proof. red; intros. eapply Ctypes.link_match_program; eauto. - intros. Local Transparent Linker_fundef. simpl in *; unfold link_fundef in *. inv H3; inv H4; try discriminate. destruct ef; inv H2. exists (Internal tf); split; auto. constructor; auto. destruct ef; inv H2. exists (Internal tf); split; auto. constructor; auto. destruct (external_function_eq ef ef0 && typelist_eq targs targs0 && type_eq tres tres0 && calling_convention_eq cconv cconv0); inv H2. exists (External ef targs tres cconv); split; auto. constructor. Qed.
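For orientation, the property established by `simulation` above (and packaged by `transl_program_correct` via `forward_simulation_star_wf`) can be restated in standard simulation-diagram notation; this summary is ours, not part of the Coq development:

\[
\forall\, S_1 \xrightarrow{\,t\,} S_2,\ \forall\, S_1' \sim S_1,\ \exists\, S_2' \sim S_2 :\quad
S_1' \xrightarrow{\,t\,}{}^{+} S_2'
\ \ \lor\ \
\bigl( S_1' \xrightarrow{\,t\,}{}^{*} S_2' \ \wedge\ \mathrm{measure}(S_2) < \mathrm{measure}(S_1) \bigr),
\]

where the well-founded decrease of `measure` in the star case is what rules out infinite stuttering of the target program.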
{-# OPTIONS --cubical --no-import-sorts --safe #-} module Cubical.ZCohomology.Groups.Wedge where open import Cubical.ZCohomology.Base open import Cubical.ZCohomology.Properties open import Cubical.ZCohomology.MayerVietorisUnreduced open import Cubical.Foundations.HLevels open import Cubical.Foundations.Prelude open import Cubical.Foundations.Pointed open import Cubical.HITs.Wedge open import Cubical.HITs.SetTruncation renaming (elim to sElim ; elim2 to sElim2) open import Cubical.HITs.PropositionalTruncation renaming (rec to pRec ; ∣_∣ to ∣_∣₁) open import Cubical.Data.Nat open import Cubical.Data.Prod open import Cubical.Data.Unit open import Cubical.Algebra.Group open import Cubical.ZCohomology.Groups.Unit open import Cubical.ZCohomology.Groups.Sn open import Cubical.HITs.Pushout --- This module contains a proof that Hⁿ(A ⋁ B) ≅ Hⁿ(A) × Hⁿ(B), n ≥ 1 module _ {ℓ ℓ'} (A : Pointed ℓ) (B : Pointed ℓ') where module I = MV (typ A) (typ B) Unit (λ _ → pt A) (λ _ → pt B) Hⁿ-⋁ : (n : ℕ) → GroupEquiv (coHomGr (suc n) (A ⋁ B)) (×coHomGr (suc n) (typ A) (typ B)) Hⁿ-⋁ zero = BijectionIsoToGroupEquiv (bij-iso (grouphom (GroupHom.fun (I.i 1)) (sElim2 (λ _ _ → isOfHLevelPath 2 (isOfHLevelΣ 2 setTruncIsSet λ _ → setTruncIsSet) _ _) λ a b → GroupHom.isHom (I.i 1) ∣ a ∣₂ ∣ b ∣₂)) (sElim (λ _ → isOfHLevelΠ 2 λ _ → isOfHLevelPath 2 setTruncIsSet _ _) λ f inker → helper ∣ f ∣₂ (I.Ker-i⊂Im-d 0 ∣ f ∣₂ inker)) (sigmaElim (λ _ → isOfHLevelSuc 1 propTruncIsProp) λ f g → I.Ker-Δ⊂Im-i 1 (∣ f ∣₂ , g) (isOfHLevelSuc 0 (isContrHⁿ-Unit 0) _ _))) where surj-helper : (x : coHom 0 Unit) → isInIm _ _ (I.Δ 0) x surj-helper = sElim (λ _ → isOfHLevelSuc 1 propTruncIsProp) λ f → ∣ (∣ (λ _ → f tt) ∣₂ , 0ₕ) , cong ∣_∣₂ (funExt (λ _ → cong ((f tt) +ₖ_) -0ₖ ∙ rUnitₖ (f tt))) ∣₁ helper : (x : coHom 1 (A ⋁ B)) → isInIm _ _ (I.d 0) x → x ≡ 0ₕ helper x inim = pRec (setTruncIsSet _ _) (λ p → sym (snd p) ∙ MV.Im-Δ⊂Ker-d _ _ Unit (λ _ → pt A) (λ _ → pt B) 0 (fst p) (surj-helper (fst p))) inim Hⁿ-⋁ (suc n) = vSES→GroupEquiv _ _ (ses (isOfHLevelSuc 0 (isContrHⁿ-Unit n)) (isOfHLevelSuc 0 (isContrHⁿ-Unit (suc n))) (I.d (suc n)) (I.Δ (suc (suc n))) (I.i (suc (suc n))) (I.Ker-i⊂Im-d (suc n)) (I.Ker-Δ⊂Im-i (suc (suc n)))) open import Cubical.Foundations.Isomorphism wedgeConnected : ((x : typ A) → ∥ pt A ≡ x ∥) → ((x : typ B) → ∥ pt B ≡ x ∥) → (x : A ⋁ B) → ∥ (inl (pt A)) ≡ x ∥ wedgeConnected conA conB = PushoutToProp (λ _ → propTruncIsProp) (λ a → pRec propTruncIsProp (λ p → ∣ cong inl p ∣₁) (conA a)) λ b → pRec propTruncIsProp (λ p → ∣ push tt ∙ cong inr p ∣₁) (conB b)
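Schematically, the proof of `Hⁿ-⋁` above instantiates Mayer–Vietoris with the one-point space as the intersection; the relevant exact segment (a summary in standard notation, not part of the module) is

\[
H^{n}(\mathrm{Unit}) \xrightarrow{\;d\;} H^{n+1}(A \vee B) \xrightarrow{\;i\;} H^{n+1}(A)\times H^{n+1}(B) \xrightarrow{\;\Delta\;} H^{n+1}(\mathrm{Unit}).
\]

For n ≥ 1 both end groups are contractible by `isContrHⁿ-Unit`, so `i` is an isomorphism; the `zero` case needs the separate bijection argument above because H⁰(Unit) is nontrivial (it is ℤ).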
module Main where

import qualified Numeric.LinearAlgebra as LA
import qualified MachineLearning.Types as T
import qualified MachineLearning as ML
import qualified MachineLearning.Classification.Binary as CB
import qualified MachineLearning.Classification.OneVsAll as OVA
import qualified MachineLearning.TerminalProgress as TP
import qualified MachineLearning.Optimization as Opt

import MachineLearning.MultiSvmClassifier
import MachineLearning.SoftmaxClassifier

featuresMapParameter = 2

-- processFeatures :: T.Matrix -> T.Matrix
processFeatures muSigma = ML.addBiasDimension . ML.featureNormalization muSigma . ML.mapFeatures featuresMapParameter

calcAccuracy :: (Model a) => a -> T.Matrix -> T.Vector -> T.Vector -> Double
calcAccuracy m x y theta = OVA.calcAccuracy y yPredicted
  where yPredicted = hypothesis m x theta

main = do
  putStrLn "\n=== Optical Recognition of Handwritten Digits Data Set ===\n"

  -- Step 1. Data loading.
  -- Step 1.1. Training data loading.
  (x, y) <- pure ML.splitToXY <*> LA.loadMatrix "digits_classification/optdigits.tra"

  -- Step 1.2. Testing data loading.
  (xTest, yTest) <- pure ML.splitToXY <*> LA.loadMatrix "digits_classification/optdigits.tes"

  -- Step 2. Outputs and features preprocessing.
  let numLabels = 10
      svm = MultiClass $ MultiSvm 1 numLabels
      softmax = MultiClass $ Softmax numLabels
      muSigma = ML.meanStddev (ML.mapFeatures featuresMapParameter x)
      x1 = processFeatures muSigma x
      xTest1 = processFeatures muSigma xTest
      initialTheta = LA.konst 0.1 (numLabels*(LA.cols x1))
  print $ LA.size x1
  print $ LA.size $ x1 LA.? [1..10]

  -- Step 3. Learning.
  putStrLn "Learning Multi SVM model..."
  (thetaSvm, optPathSvm) <- TP.learnWithProgressBar (Opt.minimize (Opt.BFGS2 0.01 0.7) svm 0.0001 5 (CB.L2 2) x1 y) initialTheta 20

  putStrLn "\nLearning MGD Softmax model..."
  --(thetaSm, optPathSm) <- TP.learnWithProgressBar (Opt.minimize (Opt.BFGS2 0.01 0.7) softmax 0.000001 5 1 x1 y) initialTheta 20
  -- Note: this previously passed the `svm` model here although the accuracies
  -- below are computed with the softmax hypothesis; train the softmax model.
  (thetaSm, optPathSm) <- TP.learnWithProgressBar (Opt.minimize (Opt.MinibatchGradientDescent 0 1024 0.01) softmax 0.000001 200 (CB.L2 1) x1 y) initialTheta 20

  putStrLn "\nLearning GD Softmax model..."
  (thetaSmGD, optPathSmGD) <- TP.learnWithProgressBar (Opt.minimize (Opt.GradientDescent 0.01) softmax 0.000001 50 (CB.L2 1) x1 y) initialTheta 20

  -- Step 4. Prediction and checking accuracy.
  let accuracyTrainSvm = calcAccuracy svm x1 y thetaSvm
      accuracyTestSvm = calcAccuracy svm xTest1 yTest thetaSvm

  -- Step 5. Printing results.
  putStrLn $ "\nSVM: Number of iterations to learn: " ++ show (LA.rows optPathSvm)
  putStrLn $ "SVM: Accuracy on train set (%): " ++ show (accuracyTrainSvm*100)
  putStrLn $ "SVM: Accuracy on test set (%): " ++ show (accuracyTestSvm*100)

  let accuracyTrainSm = calcAccuracy softmax x1 y thetaSm
      accuracyTestSm = calcAccuracy softmax xTest1 yTest thetaSm

  putStrLn $ "\nSoftmax: Number of iterations to learn: " ++ show (LA.rows optPathSm)
  putStrLn $ "Softmax: Accuracy on train set (%): " ++ show (accuracyTrainSm*100)
  putStrLn $ "Softmax: Accuracy on test set (%): " ++ show (accuracyTestSm*100)

  let accuracyTrainSmGD = calcAccuracy softmax x1 y thetaSmGD
      accuracyTestSmGD = calcAccuracy softmax xTest1 yTest thetaSmGD

  putStrLn $ "\nGD Softmax: Number of iterations to learn: " ++ show (LA.rows optPathSmGD)
  putStrLn $ "GD Softmax: Accuracy on train set (%): " ++ show (accuracyTrainSmGD*100)
  putStrLn $ "GD Softmax: Accuracy on test set (%): " ++ show (accuracyTestSmGD*100)
{-# OPTIONS --without-K --safe #-} open import Categories.Category module Categories.Category.Construction.Spans {o ℓ e} (𝒞 : Category o ℓ e) where open import Level open import Categories.Category.Diagram.Span 𝒞 open import Categories.Morphism.Reasoning 𝒞 open Category 𝒞 open HomReasoning open Equiv open Span open Span⇒ Spans : Obj → Obj → Category (o ⊔ ℓ) (o ⊔ ℓ ⊔ e) e Spans X Y = record { Obj = Span X Y ; _⇒_ = Span⇒ ; _≈_ = λ f g → arr f ≈ arr g ; id = id-span⇒ ; _∘_ = _∘ₛ_ ; assoc = assoc ; sym-assoc = sym-assoc ; identityˡ = identityˡ ; identityʳ = identityʳ ; identity² = identity² ; equiv = record { refl = refl ; sym = sym ; trans = trans } ; ∘-resp-≈ = ∘-resp-≈ }
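For reference (our gloss, not part of the module): an object of `Spans X Y` is a span

\[
X \xleftarrow{\;f\;} S \xrightarrow{\;g\;} Y,
\]

and a morphism from it to a span \(X \xleftarrow{f'} S' \xrightarrow{g'} Y\) is an arrow \(h : S \to S'\) between the apexes with \(f' \circ h = f\) and \(g' \circ h = g\). Composition, identities, and the equivalence `_≈_` are all inherited from this apex arrow, which is why every field of the record above is discharged by the corresponding datum of 𝒞.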
function [N,rho,P]=atmosParam4GasTemp(gasTable,T,rhow)
%%ATMOSPARAM4GASTEMP Get basic parameters for atmospheric refractivity,
%             density, and pressure given a table of constituent gasses in
%             the atmosphere, the temperature and the absolute humidity.
%             The model is best suited for altitudes below 90km, as
%             ionized constituents, such as anomalous oxygen, are not
%             used.
%
%INPUTS: gasTable An NX2 cell array where gasTable{i,1} is a string
%                 describing the ith constituent atmospheric element and
%                 gasTable{i,2} is the number density of the element in
%                 particles per cubic meter. For a list of constituent
%                 elements that can be handled, see the documentation for
%                 the Constants.gasProp method. Unknown constituent
%                 elements that are passed will be ignored.
%               T The temperature in degrees Kelvin.
%            rhow An optional parameter specifying the mass density of
%                 water vapor at the point in question in units of
%                 kilograms per cubic meter. If omitted, the air is assumed
%                 to be dry (rhow=0). The total density of the air is
%                 assumed to be the sum of the dry air density and rhow.
%                 Alternatively, this parameter can be omitted and 'H2O'
%                 can be given as one of the constituent elements in
%                 gasTable.
%
%OUTPUTS: N The refractivity of the atmosphere. In this model, N is always
%           real. N=10^6*(n-1) where n is the index of refraction. This is
%           generally valid for frequencies from L-band (1GHz) to 10 GHz
%           (the middle of X-band).
%       rho The atmospheric density at the point in question in units of
%           kilograms per cubic meter.
%         P The atmospheric pressure at the point in question in units of
%           Newtons per square meter (Pascals). It assumes that the gasses
%           can be treated as ideal gasses.
%
%The refractivity is found from the dry and wet air densities using the
%formula of [1], which should be valid for frequencies between 1GHz and
%10GHz. It ignores the lossiness of the atmosphere.
%
%REFERENCES:
%[1] J. M. Aparicio and S. Laroche, "An evaluation of the expression of
%    the atmospheric refractivity for GPS signals," Journal of Geophysical
%    Research, vol. 116, no. D11, 16 Jun. 2011.
%
%September 2015 David F. Crouse, Naval Research Laboratory, Washington D.C.
%(UNCLASSIFIED) DISTRIBUTION STATEMENT A. Approved for public release.

if(nargin<3)
    rhow=0;
end

numGasses=size(gasTable,1);

numberDensities=[gasTable{:,2}];
totalGasNumberDensity=sum(numberDensities);

%rhod will be the total mass density of dry air at the point in kg/m^3.
rhod=0;
for curGas=1:numGasses
    AMU=Constants.gasProp(gasTable{curGas,1});
    %If an unknown gas is given, then ignore it.
    if(~isempty(AMU))
        %The number density has units of particles per cubic meter.
        %Multiplied by the atomic mass of the gas, we have atomic mass
        %units (AMU) per cubic meter. Multiplied by the value of the atomic
        %mass unit in kilograms, we get kilograms per cubic meter.
        rhod=rhod+Constants.atomicMassUnit*numberDensities(curGas)*AMU;
    else
        %This is so that the number density of the unknown constituent is
        %also ignored when computing the pressure, so that the results are
        %consistent with the computation of the density.
        numberDensities(curGas)=0;
    end
end

rho=rhod+rhow;

%To use the ideal gas law to find the air pressure, the number of water
%molecules per cubic meter of the atmosphere is needed. This is obtained
%using the molar mass of water (H2O) and Avogadro's constant
Av=Constants.AvogadroConstant;
%The molar mass of water (H2O) in atomic mass units (grams per mole).
HAMU=Constants.elementAMU(1);
OAMU=Constants.elementAMU(8);
MMWater=HAMU*2+OAMU;

%The number of water molecules per cubic meter: the mass density divided
%by the mass of one molecule, which is MMWater times the atomic mass unit
%in kilograms. This inverts the same relation used to compute rhod above.
NH2O=rhow/(Constants.atomicMassUnit*MMWater);

%The total number density of the gasses in the atmosphere. That is, the
%number of atoms per cubic meter.
NTotal=totalGasNumberDensity+NH2O;

kB=Constants.BoltzmannConstant;
P=NTotal*kB*T;%T is the temperature in Kelvin.

tau=273.15/T-1;
N0=(222.682+0.069*tau)*rhod+(6701.605+6385.886*tau)*rhow;
N=N0*(1+10^(-6)*N0/6);
end

%LICENSE:
%
%The source code is in the public domain and not licensed or under
%copyright. The information and software may be used freely by the public.
%As required by 17 U.S.C. 403, third parties producing copyrighted works
%consisting predominantly of the material produced by U.S. government
%agencies must provide notice with such work(s) identifying the U.S.
%Government material incorporated and stating that such material is not
%subject to copyright protection.
%
%Derived works shall not identify themselves in a manner that implies an
%endorsement by or an affiliation with the Naval Research Laboratory.
%
%RECIPIENT BEARS ALL RISK RELATING TO QUALITY AND PERFORMANCE OF THE
%SOFTWARE AND ANY RELATED MATERIALS, AND AGREES TO INDEMNIFY THE NAVAL
%RESEARCH LABORATORY FOR ALL THIRD-PARTY CLAIMS RESULTING FROM THE ACTIONS
%OF RECIPIENT IN THE USE OF THE SOFTWARE.
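Restated in the notation of [1], the final lines above compute

\[
\tau = \frac{273.15}{T} - 1,\qquad
N_0 = (222.682 + 0.069\,\tau)\,\rho_d + (6701.605 + 6385.886\,\tau)\,\rho_w,\qquad
N = N_0\left(1 + 10^{-6}\,\frac{N_0}{6}\right),
\]

with the pressure obtained from the ideal gas law \(P = N_{\mathrm{total}}\, k_B\, T\), where \(N_{\mathrm{total}}\) is the total number density of molecules.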
[STATEMENT] lemma confluence_to_local_confluence: "y\<^sup>\<star> \<cdot> x\<^sup>\<star> \<le> x\<^sup>\<star> \<cdot> y\<^sup>\<star> \<Longrightarrow> y \<cdot> x \<le> x\<^sup>\<star> \<cdot> y\<^sup>\<star>" [PROOF STATE] proof (prove) goal (1 subgoal): 1. y\<^sup>\<star> \<cdot> x\<^sup>\<star> \<le> x\<^sup>\<star> \<cdot> y\<^sup>\<star> \<Longrightarrow> y \<cdot> x \<le> x\<^sup>\<star> \<cdot> y\<^sup>\<star> [PROOF STEP] by (meson mult_isol_var order_trans star_ext)
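Unfolded, the one-line proof is the chain (in Kleene-algebra notation)

\[
y \cdot x \;\le\; y^{\star} \cdot x^{\star} \;\le\; x^{\star} \cdot y^{\star},
\]

where the first step uses \(x \le x^{\star}\) and \(y \le y^{\star}\) (`star_ext`) together with isotonicity of multiplication (`mult_isol_var`), the second is the confluence hypothesis, and `order_trans` glues the two inequalities.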
-- | Common infrastructure for CVode/ARKode {-# OPTIONS_GHC -Wno-name-shadowing #-} module Numeric.Sundials.Common where import Foreign.C.Types import Foreign.Ptr import Foreign.C.String import Numeric.Sundials.Foreign as T import qualified Data.Vector as V import qualified Data.Vector.Storable as VS import qualified Data.Vector.Storable.Mutable as VSM import Numeric.LinearAlgebra.HMatrix as H hiding (Vector) import GHC.Prim import Control.DeepSeq import Katip import Data.Aeson import qualified Data.Text as T import qualified Data.Text.Encoding as T import qualified Data.ByteString as BS import qualified Data.Map.Strict as Map import Control.Monad.Reader import GHC.Generics (Generic) import Foreign.ForeignPtr import Language.C.Types as CT import Language.C.Inline.Context import qualified Language.Haskell.TH as TH -- | A collection of variables that we allocate on the Haskell side and -- pass into the C code to be filled. data CVars vec = CVars { c_diagnostics :: vec SunIndexType -- ^ Mutable vector to which we write diagnostic data while -- solving. Its size corresponds to the number of fields in -- 'SundialsDiagnostics'. , c_root_info :: vec CInt -- ^ Just a temporary vector (of the size equal to the number of event -- specs) that we use to get root info. Isn't used for output. , c_event_index :: vec CInt -- ^ For each event occurrence, this indicates which of the events -- occurred. Size: max_num_events. , c_event_time :: vec CDouble -- ^ For each event occurrence, this indicates the time of the -- occurrence. Size: max_num_events. , c_n_events :: vec CInt -- ^ Vector of size 1 that gives the total number of events occurred. , c_n_rows :: vec CInt -- ^ The total number of rows in the output matrix. , c_output_mat :: vec CDouble -- ^ The output matrix stored in the row-major order. -- Dimensions: (1 + dim) * (2 * max_events + nTs). , c_actual_event_direction :: vec CInt -- ^ Vector of size max_num_events that gives the direction of the -- occurred event. , c_local_error :: vec CDouble -- ^ Vector containing local error estimates. Size: the dimensionality -- of the system. , c_var_weight :: vec CDouble -- ^ Vector containing variable weights (derived from the tolerances). -- Size: the dimensionality of the system. , c_local_error_set :: vec CInt -- The flag (size 1) indicating whether c_local_error is filled with meaningful -- values. 
*Should be initialized with 0.* } allocateCVars :: OdeProblem -> IO (CVars (VS.MVector RealWorld)) allocateCVars OdeProblem{..} = do let dim = VS.length odeInitCond c_diagnostics <- VSM.new 11 c_root_info <- VSM.new $ V.length odeEventDirections c_event_index <- VSM.new odeMaxEvents c_event_time <- VSM.new odeMaxEvents c_actual_event_direction <- VSM.new odeMaxEvents c_n_events <- VSM.new 1 c_n_rows <- VSM.new 1 c_local_error <- VSM.new dim c_var_weight <- VSM.new dim c_local_error_set <- VSM.new 1 VSM.write c_local_error_set 0 0 c_output_mat <- VSM.new $ (1 + dim) * (2 * odeMaxEvents + VS.length odeSolTimes) return CVars {..} -- NB: the mutable CVars must not be used after this freezeCVars :: CVars (VS.MVector RealWorld) -> IO (CVars VS.Vector) freezeCVars CVars{..} = do c_diagnostics <- VS.unsafeFreeze c_diagnostics c_root_info <- VS.unsafeFreeze c_root_info c_event_index <- VS.unsafeFreeze c_event_index c_event_time <- VS.unsafeFreeze c_event_time c_actual_event_direction <- VS.unsafeFreeze c_actual_event_direction c_n_events <- VS.unsafeFreeze c_n_events c_n_rows <- VS.unsafeFreeze c_n_rows c_output_mat <- VS.unsafeFreeze c_output_mat c_local_error <- VS.unsafeFreeze c_local_error c_var_weight <- VS.unsafeFreeze c_var_weight c_local_error_set <- VS.unsafeFreeze c_local_error_set return CVars {..} -- | Similar to 'CVars', except these are immutable values that are -- accessed (read-only) by the C code and specify the system to be solved. data CConsts = CConsts { c_dim :: SunIndexType -- ^ the dimensionality (number of variables/equations) , c_method :: CInt -- ^ the ODE method (specific to the solver) , c_n_sol_times :: CInt , c_sol_time :: VS.Vector CDouble , c_init_cond :: VS.Vector CDouble , c_rhs :: FunPtr OdeRhsCType , c_rhs_userdata :: Ptr UserData , c_rtol :: CDouble , c_atol :: VS.Vector CDouble , c_n_event_specs :: CInt , c_event_fn :: FunPtr EventConditionCType , c_apply_event :: CInt -- number of triggered events -> Ptr CInt -- event indices -> CDouble -- time -> Ptr T.SunVector -- y -> Ptr T.SunVector -- new y -> Ptr CInt -- (out) stop the solver? -> Ptr CInt -- (out) record the event? -> IO CInt , c_jac_set :: CInt , c_jac :: FunPtr OdeJacobianCType , c_sparse_jac :: CInt -- ^ If 0, use a dense matrix. -- If non-0, use a sparse matrix with that number of non-zero -- elements. , c_requested_event_direction :: VS.Vector CInt , c_next_time_event :: IO CDouble , c_max_events :: CInt , c_minstep :: CDouble , c_fixedstep :: CDouble , c_max_n_steps :: SunIndexType , c_max_err_test_fails :: CInt , c_init_step_size_set :: CInt , c_init_step_size :: CDouble } data MethodType = Explicit | Implicit deriving (Show, Eq) class IsMethod method where methodToInt :: method -> CInt methodType :: method -> MethodType matrixToSunMatrix :: Matrix Double -> T.SunMatrix matrixToSunMatrix m = T.SunMatrix { T.rows = nr, T.cols = nc, T.vals = vs } where nr = fromIntegral $ H.rows m nc = fromIntegral $ H.cols m vs = coerce . VS.concat $ toColumns m -- Contrary to the documentation, it appears that CVodeGetRootInfo -- may use both 1 and -1 to indicate a root, depending on the -- direction of the sign change. See near the end of cvRootfind. intToDirection :: Integral d => d -> Maybe CrossingDirection intToDirection d = case d of 1 -> Just Upwards -1 -> Just Downwards _ -> Nothing -- | Almost inverse of 'intToDirection'. Map 'Upwards' to 1, 'Downwards' to -- -1, and 'AnyDirection' to 0. 
directionToInt :: Integral d => CrossingDirection -> d directionToInt d = case d of Upwards -> 1 Downwards -> -1 AnyDirection -> 0 foreign import ccall "wrapper" mkOdeRhsC :: OdeRhsCType -> IO (FunPtr OdeRhsCType) foreign import ccall "wrapper" mkOdeJacobianC :: OdeJacobianCType -> IO (FunPtr OdeJacobianCType) foreign import ccall "wrapper" mkEventConditionsC :: EventConditionCType -> IO (FunPtr EventConditionCType) assembleSolverResult :: OdeProblem -> CInt -> CVars VS.Vector -> IO (Either ErrorDiagnostics SundialsSolution) assembleSolverResult OdeProblem{..} ret CVars{..} = do let dim = VS.length odeInitCond n_rows = fromIntegral . VS.head $ c_n_rows output_mat = coerce . reshape (dim + 1) . subVector 0 ((dim + 1) * n_rows) $ c_output_mat (local_errors, var_weights) = if c_local_error_set VS.! 0 == 0 then (mempty, mempty) else coerce (c_local_error, c_var_weight) diagnostics = SundialsDiagnostics (fromIntegral $ c_diagnostics VS.!0) (fromIntegral $ c_diagnostics VS.!1) (fromIntegral $ c_diagnostics VS.!2) (fromIntegral $ c_diagnostics VS.!3) (fromIntegral $ c_diagnostics VS.!4) (fromIntegral $ c_diagnostics VS.!5) (fromIntegral $ c_diagnostics VS.!6) (fromIntegral $ c_diagnostics VS.!7) (fromIntegral $ c_diagnostics VS.!8) (fromIntegral $ c_diagnostics VS.!9) (toEnum . fromIntegral $ c_diagnostics VS.! 10) return $ if ret == T.cV_SUCCESS then Right $ SundialsSolution { actualTimeGrid = extractTimeGrid output_mat , solutionMatrix = dropTimeGrid output_mat , diagnostics } else Left ErrorDiagnostics { partialResults = output_mat , errorCode = fromIntegral ret , errorEstimates = local_errors , varWeights = var_weights } where -- The time grid is the first column of the result matrix extractTimeGrid :: Matrix Double -> VS.Vector Double extractTimeGrid = head . toColumns dropTimeGrid :: Matrix Double -> Matrix Double dropTimeGrid = fromColumns . tail . toColumns -- | An auxiliary function to construct a storable vector from a C pointer -- and length. -- -- There doesn't seem to be a safe version of 'VS.unsafeFromForeignPtr0', -- nor a way to clone an immutable vector, so we emulate it via an -- intermediate mutable vector. vecFromPtr :: VS.Storable a => Ptr a -> Int -> IO (VS.Vector a) vecFromPtr ptr n = do fptr <- newForeignPtr_ ptr let mv = VSM.unsafeFromForeignPtr0 fptr n VS.freeze mv -- this does the copying and makes the whole thing safe ---------------------------------------------------------------------- -- Logging ---------------------------------------------------------------------- -- | The Katip payload for logging Sundials errors data SundialsErrorContext = SundialsErrorContext { sundialsErrorCode :: !Int , sundialsErrorModule :: !T.Text , sundialsErrorFunction :: !T.Text } deriving Generic instance ToJSON SundialsErrorContext instance ToObject SundialsErrorContext instance LogItem SundialsErrorContext where payloadKeys _ _ = AllKeys type ReportErrorFn = ( CInt -- error code -> CString -- module name -> CString -- function name -> CString -- the message -> Ptr () -- user data (ignored) -> IO () ) cstringToText :: CString -> IO T.Text cstringToText = fmap T.decodeUtf8 . 
BS.packCString

reportErrorWithKatip :: LogEnv -> ReportErrorFn
reportErrorWithKatip log_env err_code c_mod_name c_func_name c_msg _userdata = do
  -- See Note [CV_TOO_CLOSE]
  if err_code == T.cV_TOO_CLOSE then pure () else do
    mod_name <- cstringToText c_mod_name
    func_name <- cstringToText c_func_name
    msg <- cstringToText c_msg
    let
      severity :: Severity
      severity =
        if err_code <= 0
          then ErrorS
          else InfoS
      errCtx :: SundialsErrorContext
      errCtx = SundialsErrorContext
        { sundialsErrorCode = fromIntegral err_code
        , sundialsErrorModule = mod_name
        , sundialsErrorFunction = func_name
        }
    flip runReaderT log_env . unKatipT $ do
      logF errCtx "sundials" severity (logStr msg)

debugMsgWithKatip :: LogEnv -> CString -> IO ()
debugMsgWithKatip log_env cstr = do
  text <- cstringToText cstr
  flip runReaderT log_env . unKatipT $ do
    logF () "hmatrix-sundials" DebugS (logStr text)

-- From the former Types module

data EventHandlerResult = EventHandlerResult
  { eventStopSolver :: !Bool
    -- ^ should we stop the solver after handling this event?
  , eventRecord :: !Bool
    -- ^ should we record the state before and after the event in the ODE
    -- solution?
  , eventNewState :: !(VS.Vector Double)
    -- ^ the new state after the event has been applied
  }

type EventHandler
  =  Double -- ^ time
  -> VS.Vector Double -- ^ values of the variables
  -> VS.Vector Int
    -- ^ Vector of triggered event indices.
    -- If the vector is empty, this is a time-based event.
  -> IO EventHandlerResult

data OdeProblem = OdeProblem
  { odeEventConditions :: EventConditions
    -- ^ The event conditions
  , odeEventDirections :: V.Vector CrossingDirection
    -- ^ The requested directions of 0 crossing for each event. Also, the
    -- length of this vector tells us the number of events (even when
    -- 'odeEventConditions' is represented by a single C function).
  , odeMaxEvents :: !Int
    -- ^ The maximal number of events that may occur. This is needed to
    -- allocate enough space to store the events. If more events occur, an
    -- error is returned.
  , odeEventHandler :: EventHandler -- ^ The event handler.
  , odeTimeBasedEvents :: TimeEventSpec
  , odeRhs :: OdeRhs
    -- ^ The right-hand side of the system: either a Haskell function or
    -- a pointer to a compiled function.
  , odeJacobian :: Maybe OdeJacobian
    -- ^ The optional Jacobian (the arguments are the time and the state
    -- vector).
  , odeInitCond :: VS.Vector Double
    -- ^ The initial conditions of the problem.
  , odeSolTimes :: VS.Vector Double
    -- ^ The requested solution times. The actual solution times may be
    -- larger if any events occurred.
  , odeTolerances :: Tolerances
    -- ^ How much error is tolerated in each variable.
  }

data Tolerances = Tolerances
  { relTolerance :: !CDouble
  , absTolerances :: Either CDouble (VS.Vector CDouble)
    -- ^ If 'Left', then the same tolerance is used for all variables.
    --
    -- If 'Right', the vector should contain one tolerance per variable.
  } deriving (Show, Eq, Ord)

-- | The type of the C ODE RHS function.
type OdeRhsCType = CDouble -> Ptr SunVector -> Ptr SunVector -> Ptr UserData -> IO CInt

data UserData

-- | The right-hand side of an ODE system.
--
-- Can be either a Haskell function or a pointer to a C function.
data OdeRhs
  = OdeRhsHaskell (CDouble -> VS.Vector CDouble -> IO (VS.Vector CDouble))
  | OdeRhsC (FunPtr OdeRhsCType) (Ptr UserData)

-- | A version of 'OdeRhsHaskell' that accepts a pure function
odeRhsPure
  :: (CDouble -> VS.Vector CDouble -> VS.Vector CDouble)
  -> OdeRhs
odeRhsPure f = OdeRhsHaskell $ \t y -> return $ f t y

type OdeJacobianCType
  =  SunRealType   -- ^ @realtype t@
  -> Ptr SunVector -- ^ @N_Vector y@
  -> Ptr SunVector -- ^ @N_Vector fy@
  -> Ptr SunMatrix -- ^ @SUNMatrix Jac@
  -> Ptr UserData  -- ^ @void *user_data@
  -> Ptr SunVector -- ^ @N_Vector tmp1@
  -> Ptr SunVector -- ^ @N_Vector tmp2@
  -> Ptr SunVector -- ^ @N_Vector tmp3@
  -> IO CInt       -- ^ return value (0 if successful, >0 for a recoverable error, <0 for an unrecoverable error)

-- | The Jacobian of the right-hand side of an ODE system.
--
-- Can be either a Haskell function or a pointer to a C function.
data OdeJacobian
  = OdeJacobianHaskell (Double -> VS.Vector Double -> Matrix Double)
  | OdeJacobianC (FunPtr OdeJacobianCType)

data JacobianRepr
  = SparseJacobian !SparsePattern -- ^ sparse Jacobian with the given sparse pattern
  | DenseJacobian
  deriving (Show)

type EventConditionCType
  =  SunRealType     -- ^ @realtype t@
  -> Ptr SunVector   -- ^ @N_Vector y@
  -> Ptr SunRealType -- ^ @realtype *gout@
  -> Ptr UserData    -- ^ @void *user_data@
  -> IO CInt

data EventConditions
  = EventConditionsHaskell (Double -> VS.Vector Double -> VS.Vector Double)
  | EventConditionsC (FunPtr EventConditionCType)

-- | A way to construct 'EventConditionsHaskell' when there is no shared
-- computation among different functions
eventConditionsPure :: V.Vector (Double -> VS.Vector Double -> Double) -> EventConditions
eventConditionsPure conds = EventConditionsHaskell $ \t y ->
  V.convert $ V.map (\cond -> cond t y) conds

data SundialsDiagnostics = SundialsDiagnostics
  { odeGetNumSteps :: Int
  , odeGetNumStepAttempts :: Int
  , odeGetNumRhsEvals_fe :: Int
  , odeGetNumRhsEvals_fi :: Int
  , odeGetNumLinSolvSetups :: Int
  , odeGetNumErrTestFails :: Int
  , odeGetNumNonlinSolvIters :: Int
  , odeGetNumNonlinSolvConvFails :: Int
  , dlsGetNumJacEvals :: Int
  , dlsGetNumRhsEvals :: Int
  , odeMaxEventsReached :: Bool
  } deriving (Eq, Show, Generic, NFData)

instance Semigroup SundialsDiagnostics where
   (<>) (SundialsDiagnostics numSteps_1 numStepAttempts_1 numRhsEvals_fe_1 numRhsEvals_fi_1
           numLinSolvSetups_1 numErrTestFails_1 numNonlinSolvIters_1 numNonlinSolvConvFails_1
           numJacEvals_1 numRhsEvals_1 reachedMaxEvents_1)
        (SundialsDiagnostics numSteps_2 numStepAttempts_2 numRhsEvals_fe_2 numRhsEvals_fi_2
           numLinSolvSetups_2 numErrTestFails_2 numNonlinSolvIters_2 numNonlinSolvConvFails_2
           numJacEvals_2 numRhsEvals_2 reachedMaxEvents_2)
      = SundialsDiagnostics
          (numSteps_2 + numSteps_1)
          (numStepAttempts_2 + numStepAttempts_1)
          (numRhsEvals_fe_2 + numRhsEvals_fe_1)
          (numRhsEvals_fi_2 + numRhsEvals_fi_1)
          (numLinSolvSetups_2 + numLinSolvSetups_1)
          (numErrTestFails_2 + numErrTestFails_1)
          (numNonlinSolvIters_2 + numNonlinSolvIters_1)
          (numNonlinSolvConvFails_2 + numNonlinSolvConvFails_1)
          (numJacEvals_2 + numJacEvals_1)
          (numRhsEvals_2 + numRhsEvals_1)
          (reachedMaxEvents_1 || reachedMaxEvents_2)

instance Monoid SundialsDiagnostics where
  mempty = SundialsDiagnostics 0 0 0 0 0 0 0 0 0 0 False

data SundialsSolution = SundialsSolution
  { actualTimeGrid :: VS.Vector Double -- ^ actual time grid returned by the solver (with duplicated event times)
  , solutionMatrix :: Matrix Double -- ^ matrix of solutions: each column is an unknown
  , diagnostics :: SundialsDiagnostics -- ^ usual Sundials diagnostics
  }

data ErrorDiagnostics =
ErrorDiagnostics { errorCode :: !Int -- ^ The numeric error code. Mostly useless at this point, since it is -- set to 1 under most error conditions. See 'solveOdeC'. , errorEstimates :: !(VS.Vector Double) -- ^ The local error estimates as returned by @CVodeGetEstLocalErrors@. -- Either an empty vector, or has the same dimensionality as the state -- space. , varWeights :: !(VS.Vector Double) -- ^ The weights with which errors are combined, equal to @1 / (atol_i + y_i * rtol)@. -- Either an empty vector, or has the same dimensionality as the state -- space. , partialResults :: !(Matrix Double) -- ^ Partial solution of the ODE system, up until the moment when -- solving failed. Contains the time as its first column. } deriving Show -- | The direction in which a function should cross the x axis data CrossingDirection = Upwards | Downwards | AnyDirection deriving (Generic, Eq, Show, NFData) -- | A time-based event, implemented as an action that returns the time of -- the next time-based event. -- -- If there's an additional condition attached to a time-based event, it -- should be verified in the event handler. -- -- The action is supposed to be stateful, and the state of the action -- should be updated by the event handler so that after a given time-based -- event is handled, the action starts returning the time of the next -- unhandled time-based event. -- -- If there is no next time-based event, the action should return +Inf. newtype TimeEventSpec = TimeEventSpec (IO Double) sunTypesTable :: Map.Map TypeSpecifier TH.TypeQ sunTypesTable = Map.fromList [ (TypeName "sunindextype", [t| SunIndexType |] ) , (TypeName "realtype", [t| SunRealType |] ) , (TypeName "N_Vector", [t| Ptr SunVector |] ) , (TypeName "SUNMatrix", [t| Ptr SunMatrix |] ) , (TypeName "UserData", [t| UserData |] ) ] -- | Allows to map between Haskell and C types sunCtx :: Context sunCtx = mempty {ctxTypesTable = sunTypesTable}
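As a usage illustration for the types above (a minimal sketch of ours, not part of the module): an OdeProblem for the scalar decay equation y' = -y with no events. It relies only on the record fields and helpers defined in this module; the name decayProblem is hypothetical.

import qualified Data.Vector as V
import qualified Data.Vector.Storable as VS

-- Scalar exponential decay y' = -y on [0,1], with the event machinery
-- filled with inert values (empty vectors, a +Inf next time-based event).
decayProblem :: OdeProblem
decayProblem = OdeProblem
  { odeRhs = odeRhsPure $ \_t y -> VS.map negate y -- right-hand side: -y
  , odeJacobian = Nothing                          -- let the solver approximate it
  , odeInitCond = VS.fromList [1.0]                -- y(0) = 1
  , odeSolTimes = VS.fromList [0, 0.1 .. 1]        -- requested output times
  , odeTolerances = Tolerances
      { relTolerance = 1.0e-6
      , absTolerances = Left 1.0e-8                -- same atol for every variable
      }
  , odeEventConditions = eventConditionsPure V.empty -- no state-based events
  , odeEventDirections = V.empty
  , odeMaxEvents = 0
  , odeEventHandler = \_t y _indices ->
      pure EventHandlerResult
        { eventStopSolver = False
        , eventRecord = False
        , eventNewState = y                        -- never called: no events
        }
  , odeTimeBasedEvents = TimeEventSpec (pure (1 / 0)) -- +Inf: no time-based events
  }

The inert values follow the documented conventions above: the length of odeEventDirections determines the number of events, and a TimeEventSpec returning +Inf means no time-based event is pending.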
/-
Copyright (c) 2019 Scott Morrison. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Scott Morrison, Markus Himmel
-/
import category_theory.limits.shapes.equalizers
import category_theory.limits.shapes.pullbacks
import category_theory.limits.shapes.strong_epi

/-!
# Categorical images

We define the categorical image of `f` as a factorisation `f = e ≫ m` through a monomorphism `m`,
so that `m` factors through the `m'` in any other such factorisation.

## Main definitions

* A `mono_factorisation` is a factorisation `f = e ≫ m`, where `m` is a monomorphism
* `is_image F` means that a given mono factorisation `F` has the universal property of the image.
* `has_image f` means that there is some image factorization for the morphism `f : X ⟶ Y`.
  * In this case, `image f` is some image object (selected with choice),
    `image.ι f : image f ⟶ Y` is the monomorphism `m` of the factorisation and
    `factor_thru_image f : X ⟶ image f` is the morphism `e`.
* `has_images C` means that every morphism in `C` has an image.
* Let `f : X ⟶ Y` and `g : P ⟶ Q` be morphisms in `C`, which we will represent as objects of
  the arrow category `arrow C`. Then `sq : f ⟶ g` is a commutative square in `C`. If `f` and
  `g` have images, then `has_image_map sq` represents the fact that there is a morphism
  `i : image f ⟶ image g` making the diagram

  X ----→ image f ----→ Y
  |         |           |
  |         |           |
  ↓         ↓           ↓
  P ----→ image g ----→ Q

  commute, where the top row is the image factorisation of `f`, the bottom row is the image
  factorisation of `g`, and the outer rectangle is the commutative square `sq`.
* If a category `has_images`, then `has_image_maps` means that every commutative square admits
  an image map.
* If a category `has_images`, then `has_strong_epi_images` means that the morphism to the image
  is always a strong epimorphism.

## Main statements

* When `C` has equalizers, the morphism `e` appearing in an image factorisation is an
  epimorphism.
* When `C` has strong epi images, then these images admit image maps.

## Future work

* TODO: coimages, and abelian categories.
* TODO: connect this with existing work in the group theory and ring theory libraries.

-/

noncomputable theory

universes v u

open category_theory
open category_theory.limits.walking_parallel_pair

namespace category_theory.limits

variables {C : Type u} [category.{v} C]
variables {X Y : C} (f : X ⟶ Y)

/-- A factorisation of a morphism `f = e ≫ m`, with `m` monic. -/
structure mono_factorisation (f : X ⟶ Y) :=
(I : C)
(m : I ⟶ Y)
[m_mono : mono m]
(e : X ⟶ I)
(fac' : e ≫ m = f . obviously)

restate_axiom mono_factorisation.fac'
attribute [simp, reassoc] mono_factorisation.fac
attribute [instance] mono_factorisation.m_mono

namespace mono_factorisation

/-- The obvious factorisation of a monomorphism through itself. -/
def self [mono f] : mono_factorisation f :=
{ I := X,
  m := f,
  e := 𝟙 X }

-- I'm not sure we really need this, but the linter says that an inhabited instance
-- ought to exist...
instance [mono f] : inhabited (mono_factorisation f) := ⟨self f⟩

variables {f}

/-- The morphism `m` in a factorisation `f = e ≫ m` through a monomorphism is uniquely
determined.
-/ @[ext] lemma ext {F F' : mono_factorisation f} (hI : F.I = F'.I) (hm : F.m = (eq_to_hom hI) ≫ F'.m) : F = F' := begin cases F, cases F', cases hI, simp at hm, dsimp at F_fac' F'_fac', congr, { assumption }, { resetI, apply (cancel_mono F_m).1, rw [F_fac', hm, F'_fac'], } end /-- Any mono factorisation of `f` gives a mono factorisation of `f ≫ g` when `g` is a mono. -/ @[simps] def comp_mono (F : mono_factorisation f) {Y' : C} (g : Y ⟶ Y') [mono g] : mono_factorisation (f ≫ g) := { I := F.I, m := F.m ≫ g, m_mono := mono_comp _ _, e := F.e, } /-- A mono factorisation of `f ≫ g`, where `g` is an isomorphism, gives a mono factorisation of `f`. -/ @[simps] def of_comp_iso {Y' : C} {g : Y ⟶ Y'} [is_iso g] (F : mono_factorisation (f ≫ g)) : mono_factorisation f := { I := F.I, m := F.m ≫ (inv g), m_mono := mono_comp _ _, e := F.e, } /-- Any mono factorisation of `f` gives a mono factorisation of `g ≫ f`. -/ @[simps] def iso_comp (F : mono_factorisation f) {X' : C} (g : X' ⟶ X) : mono_factorisation (g ≫ f) := { I := F.I, m := F.m, e := g ≫ F.e, } /-- A mono factorisation of `g ≫ f`, where `g` is an isomorphism, gives a mono factorisation of `f`. -/ @[simps] def of_iso_comp {X' : C} (g : X' ⟶ X) [is_iso g] (F : mono_factorisation (g ≫ f)) : mono_factorisation f := { I := F.I, m := F.m, e := inv g ≫ F.e, } /-- If `f` and `g` are isomorphic arrows, then a mono factorisation of `f` gives a mono factorisation of `g` -/ @[simps] def of_arrow_iso {f g : arrow C} (F : mono_factorisation f.hom) (sq : f ⟶ g) [is_iso sq] : mono_factorisation g.hom := { I := F.I, m := F.m ≫ sq.right, e := inv sq.left ≫ F.e, m_mono := mono_comp _ _, fac' := by simp only [fac_assoc, arrow.w, is_iso.inv_comp_eq, category.assoc] } end mono_factorisation variable {f} /-- Data exhibiting that a given factorisation through a mono is initial. -/ structure is_image (F : mono_factorisation f) := (lift : Π (F' : mono_factorisation f), F.I ⟶ F'.I) (lift_fac' : Π (F' : mono_factorisation f), lift F' ≫ F'.m = F.m . obviously) restate_axiom is_image.lift_fac' attribute [simp, reassoc] is_image.lift_fac namespace is_image @[simp, reassoc] lemma fac_lift {F : mono_factorisation f} (hF : is_image F) (F' : mono_factorisation f) : F.e ≫ hF.lift F' = F'.e := (cancel_mono F'.m).1 $ by simp variable (f) /-- The trivial factorisation of a monomorphism satisfies the universal property. -/ @[simps] def self [mono f] : is_image (mono_factorisation.self f) := { lift := λ F', F'.e } instance [mono f] : inhabited (is_image (mono_factorisation.self f)) := ⟨self f⟩ variable {f} /-- Two factorisations through monomorphisms satisfying the universal property must factor through isomorphic objects. -/ -- TODO this is another good candidate for a future `unique_up_to_canonical_iso`. 
@[simps] def iso_ext {F F' : mono_factorisation f} (hF : is_image F) (hF' : is_image F') : F.I ≅ F'.I := { hom := hF.lift F', inv := hF'.lift F, hom_inv_id' := (cancel_mono F.m).1 (by simp), inv_hom_id' := (cancel_mono F'.m).1 (by simp) } variables {F F' : mono_factorisation f} (hF : is_image F) (hF' : is_image F') lemma iso_ext_hom_m : (iso_ext hF hF').hom ≫ F'.m = F.m := by simp lemma iso_ext_inv_m : (iso_ext hF hF').inv ≫ F.m = F'.m := by simp lemma e_iso_ext_hom : F.e ≫ (iso_ext hF hF').hom = F'.e := by simp lemma e_iso_ext_inv : F'.e ≫ (iso_ext hF hF').inv = F.e := by simp /-- If `f` and `g` are isomorphic arrows, then a mono factorisation of `f` that is an image gives a mono factorisation of `g` that is an image -/ @[simps] def of_arrow_iso {f g : arrow C} {F : mono_factorisation f.hom} (hF : is_image F) (sq : f ⟶ g) [is_iso sq] : is_image (F.of_arrow_iso sq) := { lift := λ F', hF.lift (F'.of_arrow_iso (inv sq)), lift_fac' := λ F', by simpa only [mono_factorisation.of_arrow_iso_m, arrow.inv_right, ← category.assoc, is_iso.comp_inv_eq] using hF.lift_fac (F'.of_arrow_iso (inv sq)) } end is_image variable (f) /-- Data exhibiting that a morphism `f` has an image. -/ structure image_factorisation (f : X ⟶ Y) := (F : mono_factorisation f) (is_image : is_image F) namespace image_factorisation instance [mono f] : inhabited (image_factorisation f) := ⟨⟨_, is_image.self f⟩⟩ /-- If `f` and `g` are isomorphic arrows, then an image factorisation of `f` gives an image factorisation of `g` -/ @[simps] def of_arrow_iso {f g : arrow C} (F : image_factorisation f.hom) (sq : f ⟶ g) [is_iso sq] : image_factorisation g.hom := { F := F.F.of_arrow_iso sq, is_image := F.is_image.of_arrow_iso sq } end image_factorisation /-- `has_image f` means that there exists an image factorisation of `f`. -/ class has_image (f : X ⟶ Y) : Prop := mk' :: (exists_image : nonempty (image_factorisation f)) lemma has_image.mk {f : X ⟶ Y} (F : image_factorisation f) : has_image f := ⟨nonempty.intro F⟩ lemma has_image.of_arrow_iso {f g : arrow C} [h : has_image f.hom] (sq : f ⟶ g) [is_iso sq] : has_image g.hom := ⟨⟨h.exists_image.some.of_arrow_iso sq⟩⟩ section variable [has_image f] /-- Some factorisation of `f` through a monomorphism (selected with choice). -/ def image.mono_factorisation : mono_factorisation f := (classical.choice (has_image.exists_image)).F /-- The witness of the universal property for the chosen factorisation of `f` through a monomorphism. -/ def image.is_image : is_image (image.mono_factorisation f) := (classical.choice (has_image.exists_image)).is_image /-- The categorical image of a morphism. -/ def image : C := (image.mono_factorisation f).I /-- The inclusion of the image of a morphism into the target. -/ def image.ι : image f ⟶ Y := (image.mono_factorisation f).m @[simp] lemma image.as_ι : (image.mono_factorisation f).m = image.ι f := rfl instance : mono (image.ι f) := (image.mono_factorisation f).m_mono /-- The map from the source to the image of a morphism. -/ def factor_thru_image : X ⟶ image f := (image.mono_factorisation f).e /-- Rewrite in terms of the `factor_thru_image` interface. -/ @[simp] lemma as_factor_thru_image : (image.mono_factorisation f).e = factor_thru_image f := rfl @[simp, reassoc] lemma image.fac : factor_thru_image f ≫ image.ι f = f := (image.mono_factorisation f).fac' variable {f} /-- Any other factorisation of the morphism `f` through a monomorphism receives a map from the image. 
-/ def image.lift (F' : mono_factorisation f) : image f ⟶ F'.I := (image.is_image f).lift F' @[simp, reassoc] lemma image.lift_fac (F' : mono_factorisation f) : image.lift F' ≫ F'.m = image.ι f := (image.is_image f).lift_fac' F' @[simp, reassoc] lemma image.fac_lift (F' : mono_factorisation f) : factor_thru_image f ≫ image.lift F' = F'.e := (image.is_image f).fac_lift F' @[simp, reassoc] lemma is_image.lift_ι {F : mono_factorisation f} (hF : is_image F) : hF.lift (image.mono_factorisation f) ≫ image.ι f = F.m := hF.lift_fac _ -- TODO we could put a category structure on `mono_factorisation f`, -- with the morphisms being `g : I ⟶ I'` commuting with the `m`s -- (they then automatically commute with the `e`s) -- and show that an `image_of f` gives an initial object there -- (uniqueness of the lift comes for free). instance image.lift_mono (F' : mono_factorisation f) : mono (image.lift F') := by { apply mono_of_mono _ F'.m, simpa using mono_factorisation.m_mono _ } lemma has_image.uniq (F' : mono_factorisation f) (l : image f ⟶ F'.I) (w : l ≫ F'.m = image.ι f) : l = image.lift F' := (cancel_mono F'.m).1 (by simp [w]) /-- If `has_image g`, then `has_image (f ≫ g)` when `f` is an isomorphism. -/ instance {X Y Z : C} (f : X ⟶ Y) [is_iso f] (g : Y ⟶ Z) [has_image g] : has_image (f ≫ g) := { exists_image := ⟨ { F := { I := image g, m := image.ι g, e := f ≫ factor_thru_image g, }, is_image := { lift := λ F', image.lift { I := F'.I, m := F'.m, e := inv f ≫ F'.e, }, }, }⟩ } end section variables (C) /-- `has_images` asserts that every morphism has an image. -/ class has_images : Prop := (has_image : Π {X Y : C} (f : X ⟶ Y), has_image f) attribute [instance, priority 100] has_images.has_image end section variables (f) [has_image f] /-- The image of a monomorphism is isomorphic to the source. -/ def image_mono_iso_source [mono f] : image f ≅ X := is_image.iso_ext (image.is_image f) (is_image.self f) @[simp, reassoc] lemma image_mono_iso_source_inv_ι [mono f] : (image_mono_iso_source f).inv ≫ image.ι f = f := by simp [image_mono_iso_source] @[simp, reassoc] lemma image_mono_iso_source_hom_self [mono f] : (image_mono_iso_source f).hom ≫ f = image.ι f := begin conv { to_lhs, congr, skip, rw ←image_mono_iso_source_inv_ι f, }, rw [←category.assoc, iso.hom_inv_id, category.id_comp], end -- This is the proof that `factor_thru_image f` is an epimorphism -- from https://en.wikipedia.org/wiki/Image_%28category_theory%29, which is in turn taken from: -- Mitchell, Barry (1965), Theory of categories, MR 0202787, p.12, Proposition 10.1 @[ext] lemma image.ext {W : C} {g h : image f ⟶ W} [has_limit (parallel_pair g h)] (w : factor_thru_image f ≫ g = factor_thru_image f ≫ h) : g = h := begin let q := equalizer.ι g h, let e' := equalizer.lift _ w, let F' : mono_factorisation f := { I := equalizer g h, m := q ≫ image.ι f, m_mono := by apply mono_comp, e := e' }, let v := image.lift F', have t₀ : v ≫ q ≫ image.ι f = image.ι f := image.lift_fac F', have t : v ≫ q = 𝟙 (image f) := (cancel_mono_id (image.ι f)).1 (by { convert t₀ using 1, rw category.assoc }), -- The proof from wikipedia next proves `q ≫ v = 𝟙 _`, -- and concludes that `equalizer g h ≅ image f`, -- but this isn't necessary. calc g = 𝟙 (image f) ≫ g : by rw [category.id_comp] ... = v ≫ q ≫ g : by rw [←t, category.assoc] ... = v ≫ q ≫ h : by rw [equalizer.condition g h] ... = 𝟙 (image f) ≫ h : by rw [←category.assoc, t] ... 
= h : by rw [category.id_comp] end instance [Π {Z : C} (g h : image f ⟶ Z), has_limit (parallel_pair g h)] : epi (factor_thru_image f) := ⟨λ Z g h w, image.ext f w⟩ lemma epi_image_of_epi {X Y : C} (f : X ⟶ Y) [has_image f] [E : epi f] : epi (image.ι f) := begin rw ←image.fac f at E, resetI, exact epi_of_epi (factor_thru_image f) (image.ι f), end lemma epi_of_epi_image {X Y : C} (f : X ⟶ Y) [has_image f] [epi (image.ι f)] [epi (factor_thru_image f)] : epi f := by { rw [←image.fac f], apply epi_comp, } end section variables {f} {f' : X ⟶ Y} [has_image f] [has_image f'] /-- An equation between morphisms gives a comparison map between the images (which momentarily we prove is an iso). -/ def image.eq_to_hom (h : f = f') : image f ⟶ image f' := image.lift { I := image f', m := image.ι f', e := factor_thru_image f', }. instance (h : f = f') : is_iso (image.eq_to_hom h) := ⟨⟨image.eq_to_hom h.symm, ⟨(cancel_mono (image.ι f)).1 (by simp [image.eq_to_hom]), (cancel_mono (image.ι f')).1 (by simp [image.eq_to_hom])⟩⟩⟩ /-- An equation between morphisms gives an isomorphism between the images. -/ def image.eq_to_iso (h : f = f') : image f ≅ image f' := as_iso (image.eq_to_hom h) /-- As long as the category has equalizers, the image inclusion maps commute with `image.eq_to_iso`. -/ lemma image.eq_fac [has_equalizers C] (h : f = f') : image.ι f = (image.eq_to_iso h).hom ≫ image.ι f' := by { ext, simp [image.eq_to_iso, image.eq_to_hom], } end section variables {Z : C} (g : Y ⟶ Z) /-- The comparison map `image (f ≫ g) ⟶ image g`. -/ def image.pre_comp [has_image g] [has_image (f ≫ g)] : image (f ≫ g) ⟶ image g := image.lift { I := image g, m := image.ι g, e := f ≫ factor_thru_image g } @[simp, reassoc] lemma image.pre_comp_ι [has_image g] [has_image (f ≫ g)] : image.pre_comp f g ≫ image.ι g = image.ι (f ≫ g) := by simp [image.pre_comp] @[simp, reassoc] lemma image.factor_thru_image_pre_comp [has_image g] [has_image (f ≫ g)] : factor_thru_image (f ≫ g) ≫ image.pre_comp f g = f ≫ factor_thru_image g := by simp [image.pre_comp] /-- `image.pre_comp f g` is a monomorphism. -/ instance image.pre_comp_mono [has_image g] [has_image (f ≫ g)] : mono (image.pre_comp f g) := begin apply mono_of_mono _ (image.ι g), simp only [image.pre_comp_ι], apply_instance, end /-- The two step comparison map `image (f ≫ (g ≫ h)) ⟶ image (g ≫ h) ⟶ image h` agrees with the one step comparison map `image (f ≫ (g ≫ h)) ≅ image ((f ≫ g) ≫ h) ⟶ image h`. -/ lemma image.pre_comp_comp {W : C} (h : Z ⟶ W) [has_image (g ≫ h)] [has_image (f ≫ g ≫ h)] [has_image h] [has_image ((f ≫ g) ≫ h)] : image.pre_comp f (g ≫ h) ≫ image.pre_comp g h = image.eq_to_hom (category.assoc f g h).symm ≫ (image.pre_comp (f ≫ g) h) := begin apply (cancel_mono (image.ι h)).1, simp [image.pre_comp, image.eq_to_hom], end variables [has_equalizers C] /-- `image.pre_comp f g` is an epimorphism when `f` is an epimorphism (we need `C` to have equalizers to prove this). -/ instance image.pre_comp_epi_of_epi [has_image g] [has_image (f ≫ g)] [epi f] : epi (image.pre_comp f g) := begin apply epi_of_epi_fac (image.factor_thru_image_pre_comp _ _), exact epi_comp _ _ end instance has_image_iso_comp [is_iso f] [has_image g] : has_image (f ≫ g) := has_image.mk { F := (image.mono_factorisation g).iso_comp f, is_image := { lift := λ F', image.lift (F'.of_iso_comp f) }, } /-- `image.pre_comp f g` is an isomorphism when `f` is an isomorphism (we need `C` to have equalizers to prove this). 
-/ instance image.is_iso_precomp_iso (f : X ⟶ Y) [is_iso f] [has_image g] : is_iso (image.pre_comp f g) := ⟨⟨image.lift { I := image (f ≫ g), m := image.ι (f ≫ g), e := inv f ≫ factor_thru_image (f ≫ g) }, ⟨by { ext, simp [image.pre_comp], }, by { ext, simp [image.pre_comp], }⟩⟩⟩ -- Note that in general we don't have the other comparison map you might expect -- `image f ⟶ image (f ≫ g)`. instance has_image_comp_iso [has_image f] [is_iso g] : has_image (f ≫ g) := has_image.mk { F := (image.mono_factorisation f).comp_mono g, is_image := { lift := λ F', image.lift F'.of_comp_iso }, } /-- Postcomposing by an isomorphism induces an isomorphism on the image. -/ def image.comp_iso [has_image f] [is_iso g] : image f ≅ image (f ≫ g) := { hom := image.lift (image.mono_factorisation (f ≫ g)).of_comp_iso, inv := image.lift ((image.mono_factorisation f).comp_mono g) } @[simp, reassoc] lemma image.comp_iso_hom_comp_image_ι [has_image f] [is_iso g] : (image.comp_iso f g).hom ≫ image.ι (f ≫ g) = image.ι f ≫ g := by { ext, simp [image.comp_iso] } @[simp, reassoc] lemma image.comp_iso_inv_comp_image_ι [has_image f] [is_iso g] : (image.comp_iso f g).inv ≫ image.ι f = image.ι (f ≫ g) ≫ inv g := by { ext, simp [image.comp_iso] } end end category_theory.limits namespace category_theory.limits variables {C : Type u} [category.{v} C] section instance {X Y : C} (f : X ⟶ Y) [has_image f] : has_image (arrow.mk f).hom := show has_image f, by apply_instance end section has_image_map /-- An image map is a morphism `image f → image g` fitting into a commutative square and satisfying the obvious commutativity conditions. -/ structure image_map {f g : arrow C} [has_image f.hom] [has_image g.hom] (sq : f ⟶ g) := (map : image f.hom ⟶ image g.hom) (map_ι' : map ≫ image.ι g.hom = image.ι f.hom ≫ sq.right . obviously) instance inhabited_image_map {f : arrow C} [has_image f.hom] : inhabited (image_map (𝟙 f)) := ⟨⟨𝟙 _, by tidy⟩⟩ restate_axiom image_map.map_ι' attribute [simp, reassoc] image_map.map_ι @[simp, reassoc] lemma image_map.factor_map {f g : arrow C} [has_image f.hom] [has_image g.hom] (sq : f ⟶ g) (m : image_map sq) : factor_thru_image f.hom ≫ m.map = sq.left ≫ factor_thru_image g.hom := (cancel_mono (image.ι g.hom)).1 $ by simp /-- To give an image map for a commutative square with `f` at the top and `g` at the bottom, it suffices to give a map between any mono factorisation of `f` and any image factorisation of `g`. -/ def image_map.transport {f g : arrow C} [has_image f.hom] [has_image g.hom] (sq : f ⟶ g) (F : mono_factorisation f.hom) {F' : mono_factorisation g.hom} (hF' : is_image F') {map : F.I ⟶ F'.I} (map_ι : map ≫ F'.m = F.m ≫ sq.right) : image_map sq := { map := image.lift F ≫ map ≫ hF'.lift (image.mono_factorisation g.hom), map_ι' := by simp [map_ι] } /-- `has_image_map sq` means that there is an `image_map` for the square `sq`. -/ class has_image_map {f g : arrow C} [has_image f.hom] [has_image g.hom] (sq : f ⟶ g) : Prop := mk' :: (has_image_map : nonempty (image_map sq)) lemma has_image_map.mk {f g : arrow C} [has_image f.hom] [has_image g.hom] {sq : f ⟶ g} (m : image_map sq) : has_image_map sq := ⟨nonempty.intro m⟩ lemma has_image_map.transport {f g : arrow C} [has_image f.hom] [has_image g.hom] (sq : f ⟶ g) (F : mono_factorisation f.hom) {F' : mono_factorisation g.hom} (hF' : is_image F') (map : F.I ⟶ F'.I) (map_ι : map ≫ F'.m = F.m ≫ sq.right) : has_image_map sq := has_image_map.mk $ image_map.transport sq F hF' map_ι /-- Obtain an `image_map` from a `has_image_map` instance. 
-/ def has_image_map.image_map {f g : arrow C} [has_image f.hom] [has_image g.hom] (sq : f ⟶ g) [has_image_map sq] : image_map sq := classical.choice $ @has_image_map.has_image_map _ _ _ _ _ _ sq _ @[priority 100] -- see Note [lower instance priority] instance has_image_map_of_is_iso {f g : arrow C} [has_image f.hom] [has_image g.hom] (sq : f ⟶ g) [is_iso sq] : has_image_map sq := has_image_map.mk { map := image.lift ((image.mono_factorisation g.hom).of_arrow_iso (inv sq)), map_ι' := begin erw [← cancel_mono (inv sq).right, category.assoc, ← mono_factorisation.of_arrow_iso_m, image.lift_fac, category.assoc, ← comma.comp_right, is_iso.hom_inv_id, comma.id_right, category.comp_id], end } instance has_image_map.comp {f g h : arrow C} [has_image f.hom] [has_image g.hom] [has_image h.hom] (sq1 : f ⟶ g) (sq2 : g ⟶ h) [has_image_map sq1] [has_image_map sq2] : has_image_map (sq1 ≫ sq2) := has_image_map.mk { map := (has_image_map.image_map sq1).map ≫ (has_image_map.image_map sq2).map, map_ι' := by simp only [image_map.map_ι, image_map.map_ι_assoc, comma.comp_right, category.assoc] } variables {f g : arrow C} [has_image f.hom] [has_image g.hom] (sq : f ⟶ g) section local attribute [ext] image_map instance : subsingleton (image_map sq) := subsingleton.intro $ λ a b, image_map.ext a b $ (cancel_mono (image.ι g.hom)).1 $ by simp only [image_map.map_ι] end variable [has_image_map sq] /-- The map on images induced by a commutative square. -/ abbreviation image.map : image f.hom ⟶ image g.hom := (has_image_map.image_map sq).map lemma image.factor_map : factor_thru_image f.hom ≫ image.map sq = sq.left ≫ factor_thru_image g.hom := by simp lemma image.map_ι : image.map sq ≫ image.ι g.hom = image.ι f.hom ≫ sq.right := by simp lemma image.map_hom_mk'_ι {X Y P Q : C} {k : X ⟶ Y} [has_image k] {l : P ⟶ Q} [has_image l] {m : X ⟶ P} {n : Y ⟶ Q} (w : m ≫ l = k ≫ n) [has_image_map (arrow.hom_mk' w)] : image.map (arrow.hom_mk' w) ≫ image.ι l = image.ι k ≫ n := image.map_ι _ section variables {h : arrow C} [has_image h.hom] (sq' : g ⟶ h) variables [has_image_map sq'] /-- Image maps for composable commutative squares induce an image map in the composite square. -/ def image_map_comp : image_map (sq ≫ sq') := { map := image.map sq ≫ image.map sq' } @[simp] lemma image.map_comp [has_image_map (sq ≫ sq')] : image.map (sq ≫ sq') = image.map sq ≫ image.map sq' := show (has_image_map.image_map (sq ≫ sq')).map = (image_map_comp sq sq').map, by congr end section variables (f) /-- The identity `image f ⟶ image f` fits into the commutative square represented by the identity morphism `𝟙 f` in the arrow category. -/ def image_map_id : image_map (𝟙 f) := { map := 𝟙 (image f.hom) } @[simp] lemma image.map_id [has_image_map (𝟙 f)] : image.map (𝟙 f) = 𝟙 (image f.hom) := show (has_image_map.image_map (𝟙 f)).map = (image_map_id f).map, by congr end end has_image_map section variables (C) [has_images C] /-- If a category `has_image_maps`, then all commutative squares induce morphisms on images. -/ class has_image_maps := (has_image_map : Π {f g : arrow C} (st : f ⟶ g), has_image_map st) attribute [instance, priority 100] has_image_maps.has_image_map end section has_image_maps variables [has_images C] [has_image_maps C] /-- The functor from the arrow category of `C` to `C` itself that maps a morphism to its image and a commutative square to the induced morphism on images. 
-/ @[simps] def im : arrow C ⥤ C := { obj := λ f, image f.hom, map := λ _ _ st, image.map st } end has_image_maps section strong_epi_mono_factorisation /-- A strong epi-mono factorisation is a decomposition `f = e ≫ m` with `e` a strong epimorphism and `m` a monomorphism. -/ structure strong_epi_mono_factorisation {X Y : C} (f : X ⟶ Y) extends mono_factorisation f := [e_strong_epi : strong_epi e] attribute [instance] strong_epi_mono_factorisation.e_strong_epi /-- Satisfying the inhabited linter -/ instance strong_epi_mono_factorisation_inhabited {X Y : C} (f : X ⟶ Y) [strong_epi f] : inhabited (strong_epi_mono_factorisation f) := ⟨⟨⟨Y, 𝟙 Y, f, by simp⟩⟩⟩ /-- A mono factorisation coming from a strong epi-mono factorisation always has the universal property of the image. -/ def strong_epi_mono_factorisation.to_mono_is_image {X Y : C} {f : X ⟶ Y} (F : strong_epi_mono_factorisation f) : is_image F.to_mono_factorisation := { lift := λ G, arrow.lift $ arrow.hom_mk' $ show G.e ≫ G.m = F.e ≫ F.m, by rw [F.to_mono_factorisation.fac, G.fac] } variable (C) /-- A category has strong epi-mono factorisations if every morphism admits a strong epi-mono factorisation. -/ class has_strong_epi_mono_factorisations : Prop := mk' :: (has_fac : Π {X Y : C} (f : X ⟶ Y), nonempty (strong_epi_mono_factorisation f)) variable {C} lemma has_strong_epi_mono_factorisations.mk (d : Π {X Y : C} (f : X ⟶ Y), strong_epi_mono_factorisation f) : has_strong_epi_mono_factorisations C := ⟨λ X Y f, nonempty.intro $ d f⟩ @[priority 100] instance has_images_of_has_strong_epi_mono_factorisations [has_strong_epi_mono_factorisations C] : has_images C := { has_image := λ X Y f, let F' := classical.choice (has_strong_epi_mono_factorisations.has_fac f) in has_image.mk { F := F'.to_mono_factorisation, is_image := F'.to_mono_is_image } } end strong_epi_mono_factorisation section has_strong_epi_images variables (C) [has_images C] /-- A category has strong epi images if it has all images and `factor_thru_image f` is a strong epimorphism for all `f`. -/ class has_strong_epi_images : Prop := (strong_factor_thru_image : Π {X Y : C} (f : X ⟶ Y), strong_epi (factor_thru_image f)) attribute [instance] has_strong_epi_images.strong_factor_thru_image end has_strong_epi_images section has_strong_epi_images /-- If there is a single strong epi-mono factorisation of `f`, then every image factorisation is a strong epi-mono factorisation. -/ lemma strong_epi_of_strong_epi_mono_factorisation {X Y : C} {f : X ⟶ Y} (F : strong_epi_mono_factorisation f) {F' : mono_factorisation f} (hF' : is_image F') : strong_epi F'.e := by { rw ←is_image.e_iso_ext_hom F.to_mono_is_image hF', apply strong_epi_comp } lemma strong_epi_factor_thru_image_of_strong_epi_mono_factorisation {X Y : C} {f : X ⟶ Y} [has_image f] (F : strong_epi_mono_factorisation f) : strong_epi (factor_thru_image f) := strong_epi_of_strong_epi_mono_factorisation F $ image.is_image f /-- If we constructed our images from strong epi-mono factorisations, then these images are strong epi images. -/ @[priority 100] instance has_strong_epi_images_of_has_strong_epi_mono_factorisations [has_strong_epi_mono_factorisations C] : has_strong_epi_images C := { strong_factor_thru_image := λ X Y f, strong_epi_factor_thru_image_of_strong_epi_mono_factorisation $ classical.choice $ has_strong_epi_mono_factorisations.has_fac f } end has_strong_epi_images section has_strong_epi_images variables [has_images C] /-- A category with strong epi images has image maps. 
-/ @[priority 100] instance has_image_maps_of_has_strong_epi_images [has_strong_epi_images C] : has_image_maps C := { has_image_map := λ f g st, has_image_map.mk { map := arrow.lift $ arrow.hom_mk' $ show (st.left ≫ factor_thru_image g.hom) ≫ image.ι g.hom = factor_thru_image f.hom ≫ (image.ι f.hom ≫ st.right), by simp } } /-- If a category has images, equalizers and pullbacks, then images are automatically strong epi images. -/ @[priority 100] instance has_strong_epi_images_of_has_pullbacks_of_has_equalizers [has_pullbacks C] [has_equalizers C] : has_strong_epi_images C := { strong_factor_thru_image := λ X Y f, { epi := by apply_instance, has_lift := λ A B x y h h_mono w, arrow.has_lift.mk { lift := image.lift { I := pullback h y, m := pullback.snd ≫ image.ι f, m_mono := by exactI mono_comp _ _, e := pullback.lift _ _ w } ≫ pullback.fst } } } end has_strong_epi_images variables [has_strong_epi_mono_factorisations.{v} C] variables {X Y : C} {f : X ⟶ Y} /-- If `C` has strong epi mono factorisations, then the image is unique up to isomorphism, in that if `f` factors as a strong epi followed by a mono, this factorisation is essentially the image factorisation. -/ def image.iso_strong_epi_mono {I' : C} (e : X ⟶ I') (m : I' ⟶ Y) (comm : e ≫ m = f) [strong_epi e] [mono m] : I' ≅ image f := is_image.iso_ext {strong_epi_mono_factorisation . I := I', m := m, e := e}.to_mono_is_image $ image.is_image f @[simp] lemma image.iso_strong_epi_mono_hom_comp_ι {I' : C} (e : X ⟶ I') (m : I' ⟶ Y) (comm : e ≫ m = f) [strong_epi e] [mono m] : (image.iso_strong_epi_mono e m comm).hom ≫ image.ι f = m := is_image.lift_fac _ _ @[simp] lemma image.iso_strong_epi_mono_inv_comp_mono {I' : C} (e : X ⟶ I') (m : I' ⟶ Y) (comm : e ≫ m = f) [strong_epi e] [mono m] : (image.iso_strong_epi_mono e m comm).inv ≫ m = image.ι f := image.lift_fac _ end category_theory.limits
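A minimal usage sketch of the interface defined above (nothing beyond what the file itself provides): `image.pre_comp` is the comparison map `image (f ≫ g) ⟶ image g`, and the simp lemma `image.pre_comp_ι` says it commutes with the image inclusions, so the following `example` is one line.

-- assuming the images file above is in scope
open category_theory category_theory.limits

universes v u

example {C : Type u} [category.{v} C] {X Y Z : C} (f : X ⟶ Y) (g : Y ⟶ Z)
  [has_image g] [has_image (f ≫ g)] :
  image.pre_comp f g ≫ image.ι g = image.ι (f ≫ g) :=
image.pre_comp_ι f g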
# Copyright (c) 2018-2021, Carnegie Mellon University # See LICENSE for details Import(paradigms.vector, platforms.sse); # Tests SIMD functionality TestSIMD := function() if SSE_4x32f.active then doSimdDft(5, SSE_4x32f); doSimdDft(5, SSE_4x32f, rec(oddSizes:=false, svct:=false)); doSimdDft([2..20], SSE_4x32f); fi; end;
module Effect.Socket import Effects import Network.Socket -- A simple UDP library ||| The internal UDP effect structure data UDPClient : Effect where MakeSocket : UDPClient (Either SocketError ()) () (\res => case res of Left err => () Right s => Socket) Send : SocketAddress -> Int -> String -> UDPClient (Either SocketError ByteLength) Socket (const Socket) Close : UDPClient () Socket (const ()) instance Handler UDPClient IO where handle () MakeSocket k = do s <- socket AF_INET Datagram 0 case s of Left err => k (Left err) () Right sock => do bind sock Nothing 0 k (Right ()) sock handle s (Send address port msg) k = do res <- (sendTo s address port msg) k res s handle s Close k = do close s k () () ||| A UDP socket effect. ||| @ resource either `()` or `Socket`, depending on whether the library is initialised UDP : (resource : Type) -> EFFECT UDP res = MkEff res UDPClient ||| Initialise a socket to send UDP packets mksocket : { [UDP ()] ==> [UDP (case result of {Left _ => () ; Right _ => Socket})] } Eff (Either SocketError ()) mksocket = call MakeSocket ||| Send a message to a given address and port send : SocketAddress -> Int -> String -> { [UDP Socket] } Eff (Either SocketError ByteLength) send addr port buf = call (Send addr port buf) ||| Clean up the socket close : { [UDP Socket] ==> [UDP ()] } Eff () close = call Close
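A minimal usage sketch of the effect above (hypothetical address, port and payload; a failure just abandons the attempt): the `case` on `!mksocket` is what discharges the resource-dependent output type, so `send` and `close` are only reachable once the resource really is a `Socket`.

-- assumes: import Effects, Effect.Socket, Network.Socket
hello : SocketAddress -> Int -> String -> { [UDP ()] } Eff ()
hello addr port msg =
  case !mksocket of
       Left err => pure ()             -- creation failed; resource is still ()
       Right () => do send addr port msg
                      close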
% emacs, this is -*-Matlab-*- mode
% The main script. Performs the self-calibration.
% The point coordinates are expected to be known,
% see the directory FindingPoints
%
% $Author: svoboda $
% $Revision: 2.7 $
% $Id: gocal.m,v 2.7 2005/05/24 09:15:11 svoboda Exp $
% $State: Exp $

clear variables globals

Octave = exist('OCTAVE_VERSION', 'builtin') ~= 0;

% add necessary paths
addpath ('../CommonCfgAndIO')
addpath ('../RadialDistortions')
addpath ('./CoreFunctions')
addpath ('./OutputFunctions')
addpath ('./BlueCLocal')
addpath ('./LocalAlignments')
addpath ('../CalTechCal')
addpath ('../RansacM'); % ./Ransac for mex functions (it is significantly faster for noisy data)

% get the configuration
config = read_configuration();

disp('Multi-Camera Self-Calibration, Tomas Svoboda et al., 07/2003')
disp('************************************************************')
disp(sprintf('Experiment name: %s',expname))

%%%
% how many cameras are allowed to be filled in
% if 0, only points visible in all cameras will be used for selfcal
% the higher, the more points -> more robust against Gaussian noise;
% however, the higher the probability of wrong filling
% (the iterative search for outliers then takes accordingly longer;
% typically no more than 5-7 iterations are needed)
% this number should correspond to the total number of cameras
NUM_CAMS_FILL = config.cal.NUM_CAMS_FILL;

%%%
% tolerance for inliers. The higher the uncorrected radial distortion,
% the higher the value. For BlueC cameras set to 2; for the ViRoom
% plastic cams, set to 4 (see FINDINL)
INL_TOL = config.cal.INL_TOL;

%%%
% Use Bundle Adjustment to refine the final (after removing outliers) results
% It is often not needed at all
DO_BA = config.cal.DO_BA;

UNDO_RADIAL = config.cal.UNDO_RADIAL; % undo radial distortion, parameters are expected to be available
BA_RADIAL   = config.cal.BA_RADIAL;   % bundle adjustment also finds non-linear parameters

SAVE_STEPHI = 1; % save calibration parameters in Stephi's Carve/BlueC compatible form
SAVE_PGUHA  = 1; % save calib pars in Prithwijit's compatible form

USED_MULTIPROC = 0; % was multiprocessing used?
% if yes, then multiple IdMat.dat and points.dat have to be loaded;
% setting it to 1 forces reading the multiprocessor data instead of the
% monoprocessor data, see IM2POINTS, IM2PMULTIPROC.PL

%%%
% Data structures
% lin.* corrected values which obey the linear model
% in.*  inliers, detected by a chained application of RANSAC

if strfind(expname,'oscar')
  % add a projector idx to the cameras; they are handled the same
  config.files.cams2use = [config.files.idxcams,config.files.idxproj];
end

selfcal.par2estimate = config.cal.nonlinpar;
selfcal.iterate = 1;
selfcal.count = 0;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Load initial distortion parameters
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
nl_params_all_cams = [];
if BA_RADIAL
  for i=1:CAMS,
    if UNDO_RADIAL
      [K,kc] = ...
          readradfile(sprintf(config.files.rad,config.cal.cams2use(i)));
    else
      % no radial distortion
      K = [ 1 0 config.cal.Res(i,1)/2; ...
            0 1 config.cal.Res(i,2)/2;
            0 0 1];
      kc = [0,0,0,0];
    end
    cam_pvec = rad2pvec(K,kc);          % convert all NL params to a row vector
    nl_params_all_cams(i,:) = cam_pvec; % append the row vector to the matrix
  end
end

if ~isempty(nl_params_all_cams)
  % This error is here to prevent a long run just to be confronted
  % with it in bundle_PX_proj. Remove this test when support is
  % implemented.
  error('Not implemented: radial distortion in bundle adjustment');
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Main global cycle begins
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
while selfcal.iterate && selfcal.count < config.cal.GLOBAL_ITER_MAX,

% read the input data
loaded = loaddata(config);
linear = loaded; % initialize the linear structure

CAMS   = size(config.cal.cams2use,2);
FRAMES = size(loaded.IdMat,2);

if CAMS < 3 || FRAMES < 20
  error('gocal: Not enough cameras or images -> Problem in loading data?')
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% correct the required number of cameras to be filled
if CAMS-NUM_CAMS_FILL < 3
  NUM_CAMS_FILL = CAMS-3;
end

config.cal.Res= loaded.Res;
config.cal.pp = reshape([loaded.Res./2,zeros(size(loaded.Res(:,1)))]',CAMS*3,1);
config.cal.pp = [loaded.Res./2,zeros(size(loaded.Res(:,1)))];

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% See the README for how to compute the data
% needed for undoing the radial distortion
if UNDO_RADIAL
  for i=1:CAMS,
    [K,kc] = readradfile(sprintf(config.files.rad,config.cal.cams2use(i)));
    xn = undoradial(loaded.Ws(i*3-2:i*3,:),K,[kc,0]);
    linear.Ws(i*3-2:i*3,:) = xn;
  end
  linear.Ws = linear.Ws - repmat(reshape(config.cal.pp',CAMS*3,1), 1, FRAMES);
elseif config.cal.UNDO_HEIKK,
  for i=1:CAMS,
    heikkpar = load(sprintf(config.files.heikkrad,config.cal.cams2use(i)),'-ASCII');
    xn = undoheikk(heikkpar(1:4),heikkpar(5:end),loaded.Ws(i*3-2:i*3-1,:)');
    linear.Ws(i*3-2:i*3-1,:) = xn';
  end
  linear.Ws = linear.Ws - repmat(reshape(config.cal.pp',CAMS*3,1), 1, FRAMES);
else
  linear.Ws = loaded.Ws - repmat(reshape(config.cal.pp',CAMS*3,1), 1, FRAMES);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Detection of outliers
% RANSAC is applied pairwise
disp(' ')
disp(sprintf('********** After %d iteration *******************************************',selfcal.count))
% disp('****************************************************************************')
disp(sprintf('RANSAC validation step running with tolerance threshold: %2.2f ...',INL_TOL));

inliers.IdMat = findinl(linear.Ws,linear.IdMat,INL_TOL);

addpath ('./MartinecPajdla');
setpaths; % set paths for M&P algorithms

% remove zero-columns or columns with just 1 point,
% and create the packed representation
% (it is still a bit tricky: the minimum number of cameras is
% specified here; maybe some automatic method would be useful)
packed.idx   = find(sum(inliers.IdMat)>=size(inliers.IdMat,1)-NUM_CAMS_FILL);
packed.IdMat = inliers.IdMat(:,packed.idx);
packed.Ws    = linear.Ws(:,packed.idx);

if size(packed.Ws,2)<20
  error(sprintf('Only %d points survived RANSAC validation and packing: probably not enough points for reliable selfcalibration',size(packed.Ws,2)));
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% fill cam(i) structures
for i=1:CAMS,
  cam(i).camId = config.cal.cams2use(i);
  cam(i).idlin = find(linear.IdMat(i,:));   % loaded structure
  cam(i).idin  = find(inliers.IdMat(i,:));  % survived initial pairwise validation
  cam(i).xdist = loaded.Ws(3*i-2:3*i,cam(i).idlin); % original distorted coordinates
  cam(i).xgt   = linear.Ws(3*i-2:3*i,cam(i).idlin);
  cam(i).xgtin = linear.Ws(3*i-2:3*i,cam(i).idin);
  % convert the ground truth coordinates by using the known principal point
  cam(i).xgt   = cam(i).xgt   + repmat(config.cal.pp(i,:)', 1, size(cam(i).xgt,2));
  cam(i).xgtin = cam(i).xgtin + repmat(config.cal.pp(i,:)', 1, size(cam(i).xgtin,2));
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% options for the Martinec-Pajdla filling procedure options.verbose = 0; % options.strategy = 1; % force a particular central camera options.no_BA = 1; options.iter = 5; options.detection_accuracy = 2; options.consistent_number = 9; options.consistent_number_min = 6; options.samples = 1000; %1000; %10000; options.sequence = 0; options.create_nullspace.trial_coef = 10; %20; %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% start of the *compute and remove outliers cycle* outliers = 1; inliers.idx = packed.idx; inliers.reprerr = [9e9]; while outliers disp(sprintf('%d points/frames have survived validations so far',size(inliers.idx,2))) disp('Filling of missing points is running ...') [P,X, u1,u2, info] = fill_mm_bundle(linear.Ws(:,inliers.idx),config.cal.pp(:,1:2)',nl_params_all_cams,options); % Rmat = P*X; Lambda = Rmat(3:3:end,:); % [Pe,Xe,Ce,Re] = euclidize(Rmat,Lambda,P,X,config); disp('************************************************************') % % compute reprojection errors cam = reprerror(cam,Pe,Xe,FRAMES,inliers); % % detect outliers in cameras [outliers,inliers] = findoutl(cam,inliers,INL_TOL,NUM_CAMS_FILL); % disp(sprintf('Number of detected outliers: %3d',outliers)) disp('About cameras (Id, 2D reprojection error, #inliers):') dispcamstats(cam,inliers); % [[cam(:).camId]',[cam(:).std2Derr]',[cam(:).mean2Derr]', sum(inliers.IdMat')'] disp('***************************************************************') % % do BA after removing very bad outliers or if the process starts to diverge % and only if required config.cal.START_BA inliers.reprerr = [inliers.reprerr, mean([cam(:).mean2Derr])]; if inliers.reprerr(end)<5*INL_TOL || inliers.reprerr(end-1)<inliers.reprerr(end), try, options.no_BA = ~config.cal.START_BA; catch, options.no_BA = 1; end % 1 0 end end %%% end of the cycle %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% Do the final refinement through the BA if required and if not %%% performed during the iteration steps if (DO_BA && ~config.cal.START_BA) || (DO_BA && config.cal.START_BA && size(inliers.reprerr,2)<3) disp('**************************************************************') disp('Refinement by using Bundle Adjustment') options.no_BA = 0; [P,X, u1,u2, info] = fill_mm_bundle(linear.Ws(:,inliers.idx),config.cal.pp(:,1:2)',nl_params_all_cams,options); Rmat = P*X; Lambda = Rmat(3:3:end,:); [in.Pe,in.Xe,in.Ce,in.Re] = euclidize(Rmat,Lambda,P,X,config); cam = reprerror(cam,in.Pe,in.Xe,FRAMES,inliers); % [outliers,inliers] = findoutl(cam,inliers,INL_TOL,NUM_CAMS_FILL); else in.Pe = Pe; in.Xe = Xe; in.Ce = Ce; in.Re = Re; end %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% if ~Octave, % plot reconstructed cameras and points drawscene(Xe,Ce,Re,3,'cloud','reconstructed points/camera setup'); drawscene(in.Xe,in.Ce,in.Re,4,'cloud','reconstructed points/camera setup only inliers are used',config.cal.cams2use); % plot measured and reprojected 2D points for i=1:CAMS in.xe = in.Pe(((3*i)-2):(3*i),:)*in.Xe; cam(i).inxe = in.xe./repmat(in.xe(3,:),3,1); figure(i+10) clf plot(cam(i).xdist(1,:),cam(i).xdist(2,:),'go'); hold on, grid on plot(cam(i).xgt(1,:),cam(i).xgt(2,:),'ro'); plot(cam(i).xgtin(1,:),cam(i).xgtin(2,:),'bo'); % plot(cam(i).xe(1,:),cam(i).xe(2,:),'r+') plot(cam(i).inxe(1,:),cam(i).inxe(2,:),'k+','MarkerSize',7) 
%plot(xe(1,:),xe(2,:),'r+','linewidth',3,'MarkerSize',10)
title(sprintf('o: measured (green=distorted, blue=inlier, red=discarded). +: reprojected. (camera: %d)',config.cal.cams2use(i)));
for j=1:size(cam(i).visandrec,2); % plot the reprojection errors
  line([cam(i).xgt(1,cam(i).visandrec(j)), ...
        cam(i).inxe(1,cam(i).recandvis(j))], ...
       [cam(i).xgt(2,cam(i).visandrec(j)), ...
        cam(i).inxe(2,cam(i).recandvis(j))],'Color','g');
end
% draw the image border
line([0 0 0 2*config.cal.pp(i,1) 2*config.cal.pp(i,1) 2*config.cal.pp(i,1) 2*config.cal.pp(i,1) 0],[0 2*config.cal.pp(i,2) 2*config.cal.pp(i,2) 2*config.cal.pp(i,2) 2*config.cal.pp(i,2) 0 0 0],'Color','k','LineWidth',2,'LineStyle','--')
axis('equal')
drawnow
eval(['print -depsc ', config.paths.data, sprintf('%s%d.reprojection.eps',config.files.basename,cam(i).camId)])
end
end

%%%
% SAVE camera matrices
P = in.Pe;
X = in.Xe;
R = in.Re;
C = in.Ce;
if Octave
  % all Octave data in ASCII format
  save(config.files.Pmats,'P');
  save(config.files.Xe,'X');
  save(config.files.Re,'R');
  save(config.files.Ce,'C');
else
  save(config.files.Pmats,'P','-ASCII');
  save(config.files.Xe,'X','-ASCII');
  save(config.files.Re,'R','-ASCII');
  save(config.files.Ce,'C','-ASCII');
end

% save normal data
if SAVE_STEPHI || SAVE_PGUHA
  [in.Cst,in.Rot] = savecalpar(in.Pe,config);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% local routines for the BlueC installations
% The main functionality of these functions is that they
% align the coordinate frame from the self-calibration
% with the pre-defined world frame;
% it is assumed the necessary information is available
if strfind(expname,'BlueCRZ')
  [align] = bluecrz(in,config);
end
if strfind(expname,'Hoengg')
  [align] = bluechoengg(in,config);
end
if strfind(expname,'Erlangen')
  [align] = erlangen(in,config);
  % [align] = planarmove(in,config);
end
if strfind(expname,'G9')
  [align] = g9(in,config);
end
if strfind(expname,'flydra')
  [align] = flydra(in,config);
end
if strfind(expname,'mamarama')
  [align] = mamarama(in,config);
end
if strfind(expname,'humdra')
  [align] = humdra(in,config);
end
if config.cal.ALIGN_EXISTING
  align_existing_camera_centers(in,config);
end
% planar alignment if knowledge is available
% [align,cam] = planarmove(in,cam,config);
% try, [align,cam] = planarcams(in,cam,config,config.cal.planarcams); end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Evaluate reprojection error
%%%
cam = evalreprerror(cam,config);

%%%%
% Save the 2D-3D correspondences for further processing
for i=1:CAMS,
  xe = cam(i).xdist(1:3,cam(i).visandrec); % save the original distorted coordinates
  % save the reconstructed points (aligned if available)
  try, Xe = align.X(1:4,cam(i).recandvis); catch, Xe = in.Xe(1:4,cam(i).recandvis); end;
  % Xe = in.Xe(1:4,cam(i).recandvis);
  corresp = [Xe',xe'];
  if Octave
    % all Octave data in ASCII format
    save(sprintf(config.files.points4cal,config.cal.cams2use(i)),'corresp');
  else
    save(sprintf(config.files.points4cal,config.cal.cams2use(i)),'corresp','-ASCII');
  end
end

%%%
% TO-DO:
% - find a suitable end condition for the global iteration.
%   This threshold may vary depending on local conditions
%
% - how to check for a meaningful number of iterations
%   (typically only a few iterations are needed)
%
% - the precision of the non-linear estimation should somehow be taken into account

selfcal.count = selfcal.count+1;

if max([cam.mean2Derr])>config.cal.GLOBAL_ITER_THR && config.cal.DO_GLOBAL_ITER && selfcal.count < config.cal.GLOBAL_ITER_MAX
  % if the maximal reprojection error is still bigger
  % than acceptable, estimate the radial distortion and
  % iterate further
  cd ../CalTechCal
  selfcalib = goradf(config,selfcal.par2estimate,INL_TOL);
  cd ../MultiCamSelfCal
  selfcal.iterate = 1;
  UNDO_RADIAL = 1;
  if ~selfcalib.goradproblem;
    % if all non-linear parameters were estimated reliably
    % we can reduce the tolerance threshold
    INL_TOL = max([(2/3)*INL_TOL,config.cal.GLOBAL_ITER_THR]);
    % add the second radial distortion parameter
    if config.cal.NL_UPDATE(4), selfcal.par2estimate(4) = 1; end
    % estimate also the principal point
    if selfcal.count > 1 && config.cal.NL_UPDATE(2), selfcal.par2estimate(2) = 1; end
    % estimate also the tangential distortion
    if selfcal.count > 3 && all(config.cal.NL_UPDATE(5:6)), selfcal.par2estimate(5:6) = 1; end
  else
    INL_TOL = min([3/2*INL_TOL,config.cal.INL_TOL]);
  end
else
  % end the iteration;
  % the last computed parameters will be taken as valid
  selfcal.iterate = 0;
end

end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% End of the main global cycle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
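A self-contained illustration (synthetic, hypothetical data; not part of gocal.m) of the inlier rule that INL_TOL controls throughout the script: a correspondence survives when the Euclidean distance between its measured and reprojected 2D coordinates stays below the threshold.

% Synthetic sketch of the INL_TOL inlier test
INL_TOL = 2;                                       % tolerance in pixels
Pe_i    = [eye(3), zeros(3,1)];                    % a hypothetical 3x4 camera
Xe      = [randn(2,50); 2+rand(1,50); ones(1,50)]; % hypothetical points in front of the camera
xe      = Pe_i*Xe; xe = xe./repmat(xe(3,:),3,1);   % reproject and normalise
xm      = xe + [randn(2,50); zeros(1,50)];         % hypothetical noisy measurements
err2D   = sqrt(sum((xm(1:2,:)-xe(1:2,:)).^2,1));   % per-point 2D reprojection error
inliers = find(err2D < INL_TOL);                   % the points that would survive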
-- Solver for Category {-# OPTIONS --without-K --safe #-} open import Categories.Category module Experiment.Categories.AnotherSolver.Category {o ℓ e} (𝒞 : Category o ℓ e) where open import Level open import Relation.Binary using (Rel) import Function.Base as Fun open import Categories.Functor renaming (id to idF) open import Categories.NaturalTransformation renaming (id to idN) import Categories.Morphism.Reasoning as MR import Categories.Category.Equivalence as Eq open Category 𝒞 open HomReasoning open MR 𝒞 private variable A B C D E : Obj infixr 9 _:∘_ data Expr : Rel Obj (o ⊔ ℓ) where :id : Expr A A _:∘_ : Expr B C → Expr A B → Expr A C ∥_∥ : A ⇒ B → Expr A B data NExpr : Rel Obj (o ⊔ ℓ) where :id : NExpr A A _:∘_ : B ⇒ C → NExpr A B → NExpr A C -- Semantics ⟦_⟧ : Expr A B → A ⇒ B ⟦ :id ⟧ = id ⟦ e₁ :∘ e₂ ⟧ = ⟦ e₁ ⟧ ∘ ⟦ e₂ ⟧ ⟦ ∥ f ∥ ⟧ = f ⟦_⟧N : NExpr A B → A ⇒ B ⟦ :id ⟧N = id ⟦ f :∘ e ⟧N = f ∘ ⟦ e ⟧N _∘N_ : NExpr B C → NExpr A B → NExpr A C :id ∘N e₂ = e₂ (f :∘ e₁) ∘N e₂ = f :∘ (e₁ ∘N e₂) linear : NExpr A B → Expr A B linear :id = :id linear (f :∘ e) = ∥ f ∥ :∘ linear e normalise : Expr A B → NExpr A B normalise :id = :id normalise (e₁ :∘ e₂) = normalise e₁ ∘N normalise e₂ normalise ∥ f ∥ = f :∘ :id ∘N-homo : (e₁ : NExpr B C) (e₂ : NExpr A B) → ⟦ e₁ ∘N e₂ ⟧N ≈ ⟦ e₁ ⟧N ∘ ⟦ e₂ ⟧N ∘N-homo :id e₂ = sym identityˡ ∘N-homo (f :∘ e₁) e₂ = pushʳ (∘N-homo e₁ e₂) NExpr-assoc : {f : NExpr A B} {g : NExpr B C} {h : NExpr C D} → ⟦ (h ∘N g) ∘N f ⟧N ≈ ⟦ h ∘N (g ∘N f) ⟧N NExpr-assoc {f = f} {g} {:id} = refl NExpr-assoc {f = f} {g} {x :∘ h} = ∘-resp-≈ʳ (NExpr-assoc {f = f} {g} {h}) NExpr-identityʳ : {f : NExpr A B} → ⟦ f ∘N :id ⟧N ≈ ⟦ f ⟧N NExpr-identityʳ {f = :id} = refl NExpr-identityʳ {f = x :∘ f} = ∘-resp-≈ʳ (NExpr-identityʳ {f = f}) normalise-correct : (e : Expr A B) → ⟦ normalise e ⟧N ≈ ⟦ e ⟧ normalise-correct :id = refl normalise-correct (e₁ :∘ e₂) = begin ⟦ normalise e₁ ∘N normalise e₂ ⟧N ≈⟨ ∘N-homo (normalise e₁) (normalise e₂) ⟩ ⟦ normalise e₁ ⟧N ∘ ⟦ normalise e₂ ⟧N ≈⟨ normalise-correct e₁ ⟩∘⟨ normalise-correct e₂ ⟩ ⟦ e₁ ⟧ ∘ ⟦ e₂ ⟧ ∎ normalise-correct ∥ f ∥ = identityʳ normalise-cong : (e₁ e₂ : Expr A B) → ⟦ e₁ ⟧ ≈ ⟦ e₂ ⟧ → ⟦ normalise e₁ ⟧N ≈ ⟦ normalise e₂ ⟧N normalise-cong e₁ e₂ eq = trans (normalise-correct e₁) (trans eq (sym (normalise-correct e₂))) normalise-inj : (e₁ e₂ : Expr A B) → ⟦ normalise e₁ ⟧N ≈ ⟦ normalise e₂ ⟧N → ⟦ e₁ ⟧ ≈ ⟦ e₂ ⟧ normalise-inj e₁ e₂ eq = trans (sym (normalise-correct e₁)) (trans eq (normalise-correct e₂)) linear-correct : (e : NExpr A B) → ⟦ linear e ⟧ ≈ ⟦ e ⟧N linear-correct :id = refl linear-correct (f :∘ e) = ∘-resp-≈ʳ (linear-correct e) Expr-category : Category o (o ⊔ ℓ) e Expr-category = categoryHelper record { Obj = Obj ; _⇒_ = Expr ; _≈_ = λ e₁ e₂ → ⟦ e₁ ⟧ ≈ ⟦ e₂ ⟧ ; id = :id ; _∘_ = _:∘_ ; assoc = assoc ; identityˡ = identityˡ ; identityʳ = identityʳ ; equiv = record { refl = refl ; sym = sym ; trans = trans } ; ∘-resp-≈ = ∘-resp-≈ } NExpr-category : Category o (o ⊔ ℓ) e NExpr-category = categoryHelper record { Obj = Obj ; _⇒_ = NExpr ; _≈_ = λ e₁ e₂ → ⟦ e₁ ⟧N ≈ ⟦ e₂ ⟧N ; id = :id ; _∘_ = _∘N_ ; assoc = λ {_} {_} {_} {_} {f = f} {g} {h} → NExpr-assoc {f = f} {g} {h} ; identityˡ = refl ; identityʳ = λ {_} {_} {f = f} → NExpr-identityʳ {f = f} ; equiv = record { refl = refl ; sym = sym ; trans = trans } ; ∘-resp-≈ = λ {_} {_} {_} {f} {h} {g} {i} f≈h g≈i → trans (∘N-homo f g) (trans (∘-resp-≈ f≈h g≈i) (sym (∘N-homo h i))) } ⟦⟧-functor : Functor Expr-category 𝒞 ⟦⟧-functor = record { F₀ = Fun.id ; F₁ = ⟦_⟧ ; identity = refl ; homomorphism = refl ; F-resp-≈ = 
Fun.id } ⟦⟧N-functor : Functor NExpr-category 𝒞 ⟦⟧N-functor = record { F₀ = Fun.id ; F₁ = ⟦_⟧N ; identity = refl ; homomorphism = λ {_} {_} {_} {f} {g} → ∘N-homo g f ; F-resp-≈ = Fun.id } normalise-functor : Functor Expr-category NExpr-category normalise-functor = record { F₀ = Fun.id ; F₁ = normalise ; identity = refl ; homomorphism = λ {_} {_} {_} {f = f} {g} → refl ; F-resp-≈ = λ {_} {_} {f} {g} f≈g → normalise-cong f g f≈g } linear-functor : Functor NExpr-category Expr-category linear-functor = record { F₀ = Fun.id ; F₁ = linear ; identity = refl ; homomorphism = λ {_} {_} {_} {f} {g} → begin ⟦ linear (g ∘N f) ⟧ ≈⟨ linear-correct (g ∘N f) ⟩ ⟦ g ∘N f ⟧N ≈⟨ ∘N-homo g f ⟩ ⟦ g ⟧N ∘ ⟦ f ⟧N ≈⟨ sym (linear-correct g) ⟩∘⟨ sym (linear-correct f) ⟩ ⟦ linear g ⟧ ∘ ⟦ linear f ⟧ ∎ ; F-resp-≈ = λ {_} {_} {f} {g} eq → begin ⟦ linear f ⟧ ≈⟨ linear-correct f ⟩ ⟦ f ⟧N ≈⟨ eq ⟩ ⟦ g ⟧N ≈⟨ sym (linear-correct g) ⟩ ⟦ linear g ⟧ ∎ } normalise-functor-Faithful : Faithful normalise-functor normalise-functor-Faithful = normalise-inj linear-functor-Faithful : Faithful linear-functor linear-functor-Faithful = λ f g x → trans (sym (linear-correct f)) (trans x (linear-correct g)) ⟦⟧n-functor : Functor Expr-category 𝒞 ⟦⟧n-functor = ⟦⟧N-functor ∘F normalise-functor ⟦⟧l-functor : Functor NExpr-category 𝒞 ⟦⟧l-functor = ⟦⟧-functor ∘F linear-functor normalise-natural : NaturalTransformation ⟦⟧n-functor ⟦⟧-functor normalise-natural = ntHelper record { η = λ X → id ; commute = λ e → begin id ∘ ⟦ normalise e ⟧N ≈⟨ identityˡ ⟩ ⟦ normalise e ⟧N ≈⟨ normalise-correct e ⟩ ⟦ e ⟧ ≈⟨ sym identityʳ ⟩ ⟦ e ⟧ ∘ id ∎ } linear-natural : NaturalTransformation ⟦⟧l-functor ⟦⟧N-functor linear-natural = ntHelper record { η = λ X → id ; commute = λ e → trans identityˡ (trans (linear-correct e) (sym identityʳ)) } embedExpr : Functor 𝒞 Expr-category embedExpr = record { F₀ = Fun.id ; F₁ = ∥_∥ ; identity = refl ; homomorphism = refl ; F-resp-≈ = Fun.id } embedNExpr : Functor 𝒞 NExpr-category embedNExpr = record { F₀ = Fun.id ; F₁ = λ e → e :∘ :id ; identity = identity² ; homomorphism = λ {_} {_} {_} {f} {g} → assoc ; F-resp-≈ = λ f≈g → trans identityʳ (trans f≈g (sym identityʳ)) } {- solve : (e₁ e₂ : Expr A B) → ⟦ e₁ ⟧N ≈ ⟦ e₂ ⟧N → ⟦ e₁ ⟧ ≈ ⟦ e₂ ⟧ solve e₁ e₂ eq = begin ⟦ e₁ ⟧ ≈˘⟨ ⟦e⟧N≈⟦e⟧ e₁ ⟩ ⟦ e₁ ⟧N ≈⟨ eq ⟩ ⟦ e₂ ⟧N ≈⟨ ⟦e⟧N≈⟦e⟧ e₂ ⟩ ⟦ e₂ ⟧ ∎ ∥-∥ : ∀ {f : A ⇒ B} → Expr A B ∥-∥ {f = f} = ∥ f ∥ -}
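A small usage sketch of the solver above (hypothetical morphisms f, g, h): both reassociations normalise to the same spine h :∘ (g :∘ (f :∘ :id)), so normalise-inj reduces the goal to reflexivity.

reassoc-example : ∀ {f : A ⇒ B} {g : B ⇒ C} {h : C ⇒ D}
                → ⟦ (∥ h ∥ :∘ ∥ g ∥) :∘ ∥ f ∥ ⟧ ≈ ⟦ ∥ h ∥ :∘ (∥ g ∥ :∘ ∥ f ∥) ⟧
reassoc-example {f = f} {g = g} {h = h} =
  normalise-inj ((∥ h ∥ :∘ ∥ g ∥) :∘ ∥ f ∥) (∥ h ∥ :∘ (∥ g ∥ :∘ ∥ f ∥)) refl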
-- Module for all functions related to running the network.
module Execution (execute, feedforward) where

import InputOutput
import Types
import Numeric.LinearAlgebra.HMatrix

type Image  = [Double]
type Sample = (Int, Image)

-- Because execute returns IO, it can interact by calling other IO actions,
-- such as reading the network's weights, biases and input, for example.
execute :: IO String -- a mere skeleton of the execution function
execute = do
  network <- readIn
  let image = [1.0 .. 784.0]
      executedNetwork = feedforward image network
  definitiveAnswer executedNetwork

-- Receives the image and the network, runs the computations, and returns
-- the new Data with the hidden- and output-layer activation and zeta
-- values updated.
feedforward :: Image -> Data -> Data
feedforward image network =
  let zH = add ((wHidden network) #> (fromList image)) (bHidden network)
      aH = fromList $ sigV (toList zH)
      zO = add ((wOutput network) #> aH) (bOutput network)
      aO = fromList $ sigV (toList zO)
  in Data (wHidden network) (bHidden network) aH zH
          (wOutput network) (bOutput network) aO zO
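The element-wise activation sigV is imported from elsewhere in the project; a plausible definition, assuming the standard logistic sigmoid (this is a guess at the helper, not the project's actual code):

-- Hypothetical reconstruction of the sigV helper used by feedforward:
-- the logistic sigmoid mapped over a plain list of pre-activations.
sig :: Double -> Double
sig x = 1 / (1 + exp (-x))

sigV :: [Double] -> [Double]
sigV = map sig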
Load LFindLoad. From lfind Require Import LFind. From QuickChick Require Import QuickChick. From adtind Require Import goal33. Derive Show for natural. Derive Arbitrary for natural. Instance Dec_Eq_natural : Dec_Eq natural. Proof. dec_eq. Qed. Lemma conj4eqsynthconj4 : forall (lv0 : natural), (@eq natural (Succ lv0) (plus (Succ Zero) lv0)). Admitted. QuickChick conj4eqsynthconj4.
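The conjecture above is left Admitted for QuickChick to test, but it also has a one-line proof if plus recurses on its first argument (an assumption about goal33's definition, hence the hedge):

(* Sketch: assuming plus Zero y = y and plus (Succ x) y = Succ (plus x y),
   simpl reduces plus (Succ Zero) lv0 to Succ lv0 and reflexivity closes it. *)
Lemma conj4eqsynthconj4_proved : forall (lv0 : natural),
  (@eq natural (Succ lv0) (plus (Succ Zero) lv0)).
Proof. intros lv0. simpl. reflexivity. Qed.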
[STATEMENT] lemma codim_map: assumes "domain.chamber C" "y \<subseteq> C" shows "card (f`C - f`y) = card (C-y)" [PROOF STATE] proof (prove) goal (1 subgoal): 1. card (f ` C - f ` y) = card (C - y) [PROOF STEP] using assms dim_map domain.chamberD_simplex domain.faces[of C y] domain.finite_simplex card_Diff_subset[of "f`y" "f`C"] card_map card_Diff_subset[of y C] [PROOF STATE] proof (prove) using this: domain.chamber C y \<subseteq> C domain.chamber ?C \<Longrightarrow> card (f ` ?C) = card ?C domain.chamber ?x \<Longrightarrow> ?x \<in> X \<lbrakk>C \<in> X; y \<subseteq> C\<rbrakk> \<Longrightarrow> y \<in> X ?x \<in> X \<Longrightarrow> finite ?x \<lbrakk>finite (f ` y); f ` y \<subseteq> f ` C\<rbrakk> \<Longrightarrow> card (f ` C - f ` y) = card (f ` C) - card (f ` y) ?x \<in> X \<Longrightarrow> card (f ` ?x) = card ?x \<lbrakk>finite y; y \<subseteq> C\<rbrakk> \<Longrightarrow> card (C - y) = card C - card y goal (1 subgoal): 1. card (f ` C - f ` y) = card (C - y) [PROOF STEP] by auto
import category_theory.isomorphism
import category_theory.limits.shapes.binary_products
import category_theory.limits.shapes.reflexive
import category_theory.limits.opposites
import category_theory.closed.cartesian
import category_theory.adjunction.basic
import category_theory.functor.epi_mono
import category_theory.monad.basic
import category_theory.monad.monadicity
import subobject_classifier
import adjunction
import image

open category_theory category_theory.category category_theory.limits
  category_theory.cartesian_closed classifier opposite

/-!
# Topos

Basic definitions of a topos, from Sheaves IV.
-/

universes v u

noncomputable theory

variables (C : Type u) [category.{v} C]

class topos :=
[lim : has_finite_limits.{v} C]
[sub : has_subobject_classifier.{v} C]
[cc : cartesian_closed.{v} C]

attribute [instance] topos.lim topos.sub topos.cc

variables [topos C]

-- It doesn't seem to be inferred automatically
instance : has_finite_colimits Cᵒᵖ := has_finite_colimits_opposite

variables {C} (c : C)

def sub_to_hom : subobject c → (c ⟶ Ω C) := λ s, classifier_of s.arrow
def hom_to_sub : (c ⟶ Ω C) → subobject c := λ σ, canonical_sub σ

lemma sub_equiv_hom : subobject c ≃ (c ⟶ Ω C) :=
{ to_fun := sub_to_hom c,
  inv_fun := hom_to_sub c,
  left_inv := λ S, sub_eq_canonical_sub_of_classifier S,
  right_inv := λ σ, classifier.uniquely _ _
    (classifying_pullback.mk _ (is_pullback_canonical_arrow _)) }

def δ := classifier_of (diag c)

def singleton_map := curry (δ c)

variables (C) {a b : C}

def P : Cᵒᵖ ⥤ C :=
{ obj := λ c, (exp c.unop).obj (Ω C),
  map := λ c d f, (pre f.unop).app (Ω C),
  map_id' := λ c, by {rw [unop_id, pre_id, nat_trans.id_app]},
  map_comp' := λ _ _ _ f g, by {rw [unop_comp, pre_map, nat_trans.comp_app]} }

def P_op : C ⥤ Cᵒᵖ := functor.right_op (P C)

variable {C}

def in_map : c ⨯ (P C).obj (op c) ⟶ Ω C := (exp.ev c).app (Ω C)

variable {c}

lemma in_map_natural (g : a ⟶ (P C).obj (op b)) :
  uncurry g = limits.prod.map (𝟙 _) g ≫ in_map b :=
uncurry_eq g

lemma in_map_dinatural (h : b ⟶ c) :
  limits.prod.map h (𝟙 _) ≫ in_map c = limits.prod.map (𝟙 _) ((P C).map h.op) ≫ in_map b :=
begin
  erw [←in_map_natural, uncurry_pre],
  refl
end

variables {C c} {d : C} {f : c ⟶ d}

namespace delta

open category_theory.limits.prod

def pullback_cone_map_diag (f : c ⟶ d) : pullback_cone (map f (𝟙 _)) (diag d) :=
pullback_cone.mk (lift (𝟙 _) f) f (by simp only [lift_map, comp_lift, id_comp, comp_id])

lemma cone_map_diag_fst (s : pullback_cone (map f (𝟙 _)) (diag d)) :
  s.fst ≫ fst ≫ f = s.snd :=
by { rw [←map_fst f (𝟙 _), ←assoc, s.condition, assoc, lift_fst, comp_id] }

lemma cone_map_diag_snd (s : pullback_cone (map f (𝟙 _)) (diag d)) :
  s.fst ≫ snd = s.snd :=
by { rw [←comp_id (s.fst ≫ snd), assoc, ←map_snd f (𝟙 _), ←assoc, s.condition,
         assoc, lift_snd, comp_id] }

lemma cone_map_diag_fst_snd (s : pullback_cone (map f (𝟙 _)) (diag d)) :
  s.fst ≫ fst ≫ f = s.fst ≫ snd :=
eq.trans (cone_map_diag_fst s) (cone_map_diag_snd s).symm

def is_limit_pullback_cone_map_diag : is_limit (pullback_cone_map_diag f) :=
begin
  apply pullback_cone.is_limit.mk (pullback_cone_map_diag f).condition (λ s, s.fst ≫ fst);
  intro s,
  { simp only [assoc],
    dunfold pullback_cone_map_diag,
    rw pullback_cone.mk_fst,
    rw [comp_lift],
    nth_rewrite 1 ←comp_id s.fst,
    simp only [comp_id],
    apply hom_ext,
    rw [assoc, lift_fst, ←assoc, comp_id (s.fst ≫ fst)],
    rw [assoc, lift_snd],
    apply cone_map_diag_fst_snd },
  { simp only [assoc],
    erw pullback_cone.mk_snd,
    apply cone_map_diag_fst, },
  { intros l fst' snd',
    simp only,
    rw ←eq_whisker fst' fst,
    erw [assoc, lift_fst, comp_id]
} end variable (f) def big_square_cone : pullback_cone (map f (𝟙 d) ≫ δ d) (truth C) := pullback_cone.mk (lift (𝟙 c) f) (f ≫ terminal.from d) (by { erw [←assoc, (pullback_cone_map_diag f).condition, assoc, classifier.comm (diag d), ←assoc, terminal.comp_from] }) lemma is_limit_big_square_cone : is_limit (big_square_cone f) := begin apply big_square_is_pullback f (terminal.from _) (map f (𝟙 _)) (δ _) (lift (𝟙 _) f) (diag _) (truth C), apply classifier.is_pb, apply is_limit_pullback_cone_map_diag, end lemma big_square_classifying : classifying (truth C) (lift (𝟙 c) f) (map f (𝟙 d) ≫ δ d) := { comm := by { rw ←terminal.comp_from f, erw (big_square_cone f).condition, refl }, is_pb := begin let g := is_limit_big_square_cone f, unfold big_square_cone at g, simp [terminal.comp_from f] at g, assumption, end } lemma classifies : classifier_of (lift (𝟙 _) f) = map f (𝟙 _) ≫ δ d := classifier.uniquely _ _ (big_square_classifying f) variable (g : c ⟶ d) lemma cancel_classifier: (classifier_of (lift (𝟙 _) f) = classifier_of (lift (𝟙 _) g)) ↔ f = g := begin split; intro heq, { have k := (pullback_cone.is_limit.lift' (classifier.is_pb (lift (𝟙 _) f)) ((lift (𝟙 _) g)) (terminal.from _) (by rw [heq, classifier.comm])).prop.left, have eq_id := eq_whisker k fst, erw [assoc, lift_fst, lift_fst, comp_id] at eq_id, rw [eq_id, id_comp] at k, convert eq_whisker k snd, erw lift_snd, rw lift_snd }, { rw heq } end end delta -- We show that a topos is balanced namespace balanced lemma iso_of_is_limit_fork_id {f : c ⟶ d} {s : fork f f} (is_lim : is_limit s) : is_iso s.ι := begin apply is_iso.mk, use is_lim.lift (fork.of_ι (𝟙 c) (by simp)), split, { apply fork.is_limit.hom_ext is_lim, rw [assoc, fork.is_limit.lift_ι, fork.ι_of_ι, id_comp], apply comp_id }, { apply fork.is_limit.lift_ι } end lemma is_limit_of_is_limit_fork_eq {f g : c ⟶ d} {s : fork f g} (is_lim : is_limit s) (heq : f = g) : is_limit (fork.of_ι s.ι (by rw s.condition) : fork f f) := begin refine fork.is_limit.mk _ (λ t : fork f f, (fork.is_limit.lift' is_lim t.ι (by rw ←heq)).val) _ _; intro t, { apply fork.is_limit.lift_ι }, { intros r ht, apply fork.is_limit.hom_ext is_lim, erw fork.is_limit.lift_ι, apply ht } end lemma iso_of_is_limit_fork_eq {f g : c ⟶ d} {s : fork f g} (is_lim : is_limit s) (heq : f = g) : is_iso s.ι := iso_of_is_limit_fork_id (is_limit_of_is_limit_fork_eq is_lim heq) variable (C) instance topos_balanced : balanced C := { is_iso_of_mono_of_epi := λ c d f m e, begin resetI, apply iso_of_is_limit_fork_eq (image.monic_is_limit_fork f), rw ←cancel_epi f, exact (image.monic_to_canonical_fork f).condition end } end balanced
In addition to Waldo & Tulip The Clowns, the Puppetime Players offer a variety of special party characters for holidays and promotional events. We also offer walk-arounds and meet-and-greet services. From Santa Claus and the Easter Bunny to Mr. Bear, any of our special party characters will bring magic to your holiday or themed party. They are all available for meet-and-greets, and Santa comes with holiday magic, balloon sculptures, stories about the North Pole, and presents! In addition to special party characters, we offer informal entertainment with our walk-arounds. These are great for groups of children and adults. We also feature improvisations, sight gags and jokes, pocket magic, and balloon sculptures the whole family will love. Walk-arounds are great for picnics, fairs, fundraisers, carnivals, and other get-togethers. Our clowns happily promote establishments and products with lively, energetic entertainment services. We’ll do our usual shtick or adapt the show to suit your specific needs or interests. This includes walk-arounds, meet-and-greets, photo-ops, and giveaways at trade fairs, festivals, grand openings, and more. Contact us to inquire about our amazing special party characters or book us for your next event.
[STATEMENT] lemma (in PolynRg) polyn_addTr1:"pol_coeff S (n, f) \<Longrightarrow> \<forall>g. pol_coeff S (n + m, g) \<longrightarrow> (polyn_expr R X n (n, f) \<plusminus> (polyn_expr R X (n + m) ((n + m), g)) = polyn_expr R X (n + m) (add_cf S (n, f) ((n + m), g)))" [PROOF STATE] proof (prove) goal (1 subgoal): 1. pol_coeff S (n, f) \<Longrightarrow> \<forall>g. pol_coeff S (n + m, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (n + m) (n + m, g) = polyn_expr R X (n + m) (add_cf S (n, f) (n + m, g)) [PROOF STEP] apply (cut_tac subring, frule subring_Ring) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S\<rbrakk> \<Longrightarrow> \<forall>g. pol_coeff S (n + m, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (n + m) (n + m, g) = polyn_expr R X (n + m) (add_cf S (n, f) (n + m, g)) [PROOF STEP] apply (induct_tac m) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S\<rbrakk> \<Longrightarrow> \<forall>g. pol_coeff S (n + 0, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (n + 0) (n + 0, g) = polyn_expr R X (n + 0) (add_cf S (n, f) (n + 0, g)) 2. \<And>na. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; \<forall>g. pol_coeff S (n + na, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (n + na) (n + na, g) = polyn_expr R X (n + na) (add_cf S (n, f) (n + na, g))\<rbrakk> \<Longrightarrow> \<forall>g. pol_coeff S (n + Suc na, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (n + Suc na) (n + Suc na, g) = polyn_expr R X (n + Suc na) (add_cf S (n, f) (n + Suc na, g)) [PROOF STEP] apply (rule allI, rule impI, simp) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (n, g)\<rbrakk> \<Longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X n (n, g) = polyn_expr R X n (add_cf S (n, f) (n, g)) 2. \<And>na. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; \<forall>g. pol_coeff S (n + na, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (n + na) (n + na, g) = polyn_expr R X (n + na) (add_cf S (n, f) (n + na, g))\<rbrakk> \<Longrightarrow> \<forall>g. pol_coeff S (n + Suc na, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (n + Suc na) (n + Suc na, g) = polyn_expr R X (n + Suc na) (add_cf S (n, f) (n + Suc na, g)) [PROOF STEP] apply (simp add:polyn_add_n1) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; \<forall>g. pol_coeff S (n + na, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (n + na) (n + na, g) = polyn_expr R X (n + na) (add_cf S (n, f) (n + na, g))\<rbrakk> \<Longrightarrow> \<forall>g. pol_coeff S (n + Suc na, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (n + Suc na) (n + Suc na, g) = polyn_expr R X (n + Suc na) (add_cf S (n, f) (n + Suc na, g)) [PROOF STEP] apply (simp add:add.commute[of n]) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; \<forall>g. pol_coeff S (na + n, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g))\<rbrakk> \<Longrightarrow> \<forall>g. 
pol_coeff S (Suc (na + n), g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (Suc (na + n)) (Suc (na + n), g) = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (rule allI, rule impI) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; \<forall>g. pol_coeff S (na + n, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); pol_coeff S (Suc (na + n), g)\<rbrakk> \<Longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (Suc (na + n)) (Suc (na + n), g) = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (frule_tac n = "na + n" and f = g in pol_coeff_pre) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; \<forall>g. pol_coeff S (na + n, g) \<longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g)\<rbrakk> \<Longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (Suc (na + n)) (Suc (na + n), g) = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (drule_tac a = g in forall_spec, assumption) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g))\<rbrakk> \<Longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (Suc (na + n)) (Suc (na + n), g) = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (cut_tac n = "na + n" and c = "(Suc (na + n), g)" in polyn_Suc, simp, simp del:npow_suc, thin_tac "polyn_expr R X (Suc (na + n)) (Suc (na + n), g) = polyn_expr R X (na + n) (Suc (na + n), g) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R (Suc (na + n))\<^esup>") [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g))\<rbrakk> \<Longrightarrow> polyn_expr R X n (n, f) \<plusminus> (polyn_expr R X (na + n) (Suc (na + n), g) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup>) = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (frule_tac c = "(n, f)" and k = n in polyn_mem, simp, frule_tac c = "(Suc (na + n), g)" and k = "na + n" in polyn_mem, simp, frule_tac c = "(Suc (na + n), g)" in monomial_mem) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (Suc (na + n), g) \<in> carrier R; \<forall>j\<le>fst (Suc (na + n), g). 
snd (Suc (na + n), g) j \<cdot>\<^sub>r X^\<^bsup>R j\<^esup> \<in> carrier R\<rbrakk> \<Longrightarrow> polyn_expr R X n (n, f) \<plusminus> (polyn_expr R X (na + n) (Suc (na + n), g) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup>) = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (drule_tac a = "Suc (na + n)" in forall_spec, simp del:npow_suc, cut_tac ring_is_ag, subst aGroup.ag_pOp_assoc[THEN sym], assumption+, simp del:npow_suc) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (Suc (na + n), g) \<in> carrier R; snd (Suc (na + n), g) (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R\<rbrakk> \<Longrightarrow> polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (Suc (na + n), g) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (simp del:npow_suc add:polyn_expr_restrict) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (frule_tac c = "(n, f)" and d = "(Suc (na + n), g)" in add_cf_pol_coeff, assumption+, frule_tac c = "(n, f)" and d = "(na + n, g)" in add_cf_pol_coeff, assumption+) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g))\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (frule_tac c = "add_cf S (n, f) (Suc (na + n), g)" and n = "na + n" and m = "Suc (na + n)" in polyn_n_m, simp, subst add_cf_len, assumption+, simp) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. 
\<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); polyn_expr R X (Suc (na + n)) (Suc (na + n), snd (add_cf S (n, f) (Suc (na + n), g))) = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> \<Sigma>\<^sub>f R (\<lambda>j. snd (add_cf S (n, f) (Suc (na + n), g)) j \<cdot>\<^sub>r X^\<^bsup>R j\<^esup>) Suc (na + n) Suc (na + n)\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (cut_tac k = "Suc (na + n)" and f = "add_cf S (n, f) (Suc (na + n), g)" in polyn_expr_split) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); polyn_expr R X (Suc (na + n)) (Suc (na + n), snd (add_cf S (n, f) (Suc (na + n), g))) = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> \<Sigma>\<^sub>f R (\<lambda>j. snd (add_cf S (n, f) (Suc (na + n), g)) j \<cdot>\<^sub>r X^\<^bsup>R j\<^esup>) Suc (na + n) Suc (na + n); polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) = polyn_expr R X (Suc (na + n)) (fst (add_cf S (n, f) (Suc (na + n), g)), snd (add_cf S (n, f) (Suc (na + n), g)))\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) [PROOF STEP] apply (frule_tac c = "(n, f)" and d = "(Suc (na + n), g)" in add_cf_len, assumption+, simp del: npow_suc add: max.absorb1 max.absorb2) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); polyn_expr R X (Suc (na + n)) (Suc (na + n), snd (add_cf S (n, f) (Suc (na + n), g))) = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> \<Sigma>\<^sub>f R (\<lambda>j. 
snd (add_cf S (n, f) (Suc (na + n), g)) j \<cdot>\<^sub>r X^\<^bsup>R j\<^esup>) Suc (na + n) Suc (na + n); polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> \<Sigma>\<^sub>f R (\<lambda>j. snd (add_cf S (n, f) (Suc (na + n), g)) j \<cdot>\<^sub>r X^\<^bsup>R j\<^esup>) Suc (na + n) Suc (na + n); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n)\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> \<Sigma>\<^sub>f R (\<lambda>j. snd (add_cf S (n, f) (Suc (na + n), g)) j \<cdot>\<^sub>r X^\<^bsup>R j\<^esup>) Suc (na + n) Suc (na + n) [PROOF STEP] apply (thin_tac "polyn_expr R X (Suc (na + n)) (Suc (na + n), snd (add_cf S (n, f) (Suc (na + n), g))) = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> \<Sigma>\<^sub>f R (\<lambda>j. snd (add_cf S (n, f) (Suc (na + n), g)) j \<cdot>\<^sub>r X^\<^bsup>R j\<^esup>) (Suc (na + n)) (Suc (na + n))", thin_tac "polyn_expr R X (Suc (na + n)) (add_cf S (n, f) (Suc (na + n), g)) = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> \<Sigma>\<^sub>f R (\<lambda>j. snd (add_cf S (n, f) (Suc (na + n), g)) j \<cdot>\<^sub>r X^\<^bsup>R j\<^esup>) (Suc (na + n)) (Suc (na + n))") [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n)\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> \<Sigma>\<^sub>f R (\<lambda>j. snd (add_cf S (n, f) (Suc (na + n), g)) j \<cdot>\<^sub>r X^\<^bsup>R j\<^esup>) Suc (na + n) Suc (na + n) [PROOF STEP] apply (simp del:npow_suc add:fSum_def cmp_def slide_def) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. 
\<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n)\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> snd (add_cf S (n, f) (Suc (na + n), g)) (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> [PROOF STEP] apply (cut_tac d = "(Suc (na + n), g)" in add_cf_val_hi[of "(n, f)"], simp, simp del:npow_suc, thin_tac "snd (add_cf S (n, f) (Suc (na + n), g)) (Suc (na + n)) = g (Suc (na + n))") [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n)\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> [PROOF STEP] apply (frule_tac c = "add_cf S (n, f) (Suc (na + n), g)" and k = "na + n" in polyn_mem, simp, frule_tac c = "add_cf S (n, f) (na + n, g)" and k = "na + n" in polyn_mem, simp ) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r (X^\<^bsup>R (na + n)\<^esup> \<cdot>\<^sub>r X) \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R\<rbrakk> \<Longrightarrow> na + n \<le> fst (add_cf S (n, f) (na + n, g)) 2. \<And>na g. 
\<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<in> carrier R\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> [PROOF STEP] apply (subst add_cf_len, assumption+, simp del:npow_suc) [PROOF STATE] proof (prove) goal (1 subgoal): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<in> carrier R\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> [PROOF STEP] apply (frule_tac a = "polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g))" and b = "polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g))" and c = "g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R (Suc (na + n))\<^esup>" in aGroup.ag_pOp_add_r[of R], assumption+) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<in> carrier R\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) = polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) 2. \<And>na g. 
\<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup>\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> [PROOF STEP] apply (rule_tac c = "add_cf S (n, f) (na + n, g)" and d = "add_cf S (n, f) (Suc (na + n), g)" and k = "na + n" in polyn_exprs_eq, assumption+, simp, subst add_cf_len, assumption+) [PROOF STATE] proof (prove) goal (3 subgoals): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r (X^\<^bsup>R (na + n)\<^esup> \<cdot>\<^sub>r X) \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<in> carrier R\<rbrakk> \<Longrightarrow> na + n \<le> max (fst (n, f)) (fst (na + n, g)) 2. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<in> carrier R\<rbrakk> \<Longrightarrow> \<forall>j\<le>na + n. snd (add_cf S (n, f) (na + n, g)) j = snd (add_cf S (n, f) (Suc (na + n), g)) j 3. \<And>na g. 
\<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup>\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> [PROOF STEP] apply (simp) [PROOF STATE] proof (prove) goal (2 subgoals): 1. \<And>na g. \<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<in> carrier R\<rbrakk> \<Longrightarrow> \<forall>j\<le>na + n. snd (add_cf S (n, f) (na + n, g)) j = snd (add_cf S (n, f) (Suc (na + n), g)) j 2. \<And>na g. 
\<lbrakk>pol_coeff S (n, f); Subring R S; Ring S; pol_coeff S (Suc (na + n), g); pol_coeff S (na + n, g); polyn_expr R X n (n, f) \<plusminus> polyn_expr R X (na + n) (na + n, g) = polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)); polyn_expr R X n (n, f) \<in> carrier R; polyn_expr R X (na + n) (na + n, g) \<in> carrier R; g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> \<in> carrier R; aGroup R; pol_coeff S (add_cf S (n, f) (Suc (na + n), g)); pol_coeff S (add_cf S (n, f) (na + n, g)); fst (add_cf S (n, f) (Suc (na + n), g)) = Suc (na + n); polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<in> carrier R; polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (add_cf S (n, f) (Suc (na + n), g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup>\<rbrakk> \<Longrightarrow> polyn_expr R X (na + n) (add_cf S (n, f) (na + n, g)) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> = polyn_expr R X (na + n) (na + n, snd (add_cf S (n, f) (Suc (na + n), g))) \<plusminus> g (Suc (na + n)) \<cdot>\<^sub>r X^\<^bsup>R Suc (na + n)\<^esup> [PROOF STEP] apply (rule allI, rule impI, (subst add_cf_def)+, simp, frule_tac m = na and g = g in polyn_expr_restrict1[of n f], assumption, simp del:npow_suc) [PROOF STATE] proof (prove) goal: No subgoals! [PROOF STEP] done
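Unfolding the definitions, the lemma just proved is the expected coefficientwise addition law for polynomial expressions (an informal gloss, reading `add_cf` as componentwise addition that keeps the tail of the longer coefficient list):

$$ \sum_{i=0}^{n} f_i X^i + \sum_{i=0}^{n+m} g_i X^i = \sum_{i=0}^{n+m} h_i X^i, \qquad h_i = \begin{cases} f_i + g_i & i \le n, \\ g_i & n < i \le n+m, \end{cases} $$

established by induction on $m$: the base case is `polyn_add_n1`, and the step peels off the top monomial of the longer polynomial via `polyn_Suc` before reassociating with `ag_pOp_assoc`.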
/- Fill in the sorry’s below, to prove the barber paradox.

open classical   -- not needed, but you can use it

-- This is an exercise from Chapter 4. Use it as an axiom here.
axiom not_iff_not_self (P : Prop) : ¬ (P ↔ ¬ P)

example (Q : Prop) : ¬ (Q ↔ ¬ Q) := not_iff_not_self Q

section
  variable Person : Type
  variable shaves : Person → Person → Prop
  variable barber : Person
  variable h : ∀ x, shaves barber x ↔ ¬ shaves x x

  -- Show the following:
  example : false := sorry
end
-/

open classical   -- not needed, but you can use it

-- This is an exercise from Chapter 4. Use it as an axiom here.
axiom not_iff_not_self (P : Prop) : ¬ (P ↔ ¬ P)

example (Q : Prop) : ¬ (Q ↔ ¬ Q) := not_iff_not_self Q

-- style 1, prove using excluded middle from classical
section
  variable Person : Type
  variable shaves : Person → Person → Prop
  variable barber : Person
  variable h : ∀ x, shaves barber x ↔ ¬ shaves x x

  example : false :=
    have hBarberInverse : shaves barber barber ↔ ¬ shaves barber barber, from h barber,
    or.elim (em (shaves barber barber))
      (λ hBarberShavesSelf : shaves barber barber,
        have hBarberDoesntShaveSelf : ¬ shaves barber barber,
          from iff.mp hBarberInverse hBarberShavesSelf,
        show false, from hBarberDoesntShaveSelf hBarberShavesSelf)
      (λ hBarberDoesntShaveSelf : ¬ shaves barber barber,
        have hBarberShavesSelf : shaves barber barber,
          from iff.mpr hBarberInverse hBarberDoesntShaveSelf,
        show false, from hBarberDoesntShaveSelf hBarberShavesSelf)
end

-- style 2, prove using the not_iff_not_self axiom
section
  variable Person : Type
  variable shaves : Person → Person → Prop
  variable barber : Person
  variable h : ∀ x, shaves barber x ↔ ¬ shaves x x

  example : false :=
    have hBarberInverse : shaves barber barber ↔ ¬ shaves barber barber, from h barber,
    have hNotBarberInverse : ¬ (shaves barber barber ↔ ¬ shaves barber barber),
      from not_iff_not_self (shaves barber barber),
    show false, from hNotBarberInverse hBarberInverse
end
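Both styles hinge on the same one-line contradiction; as a plain logical sketch matching the Lean terms above (with $S := \text{shaves barber barber}$):

$$ h\ \text{barber} \;:\; S \leftrightarrow \lnot S. $$

If $S$ holds, the forward direction yields $\lnot S$ and hence $\bot$; so $\lnot S$ holds, but then the backward direction yields $S$, and $\bot$ again. Style 1 performs this case split explicitly with `em`; style 2 delegates it to the axiom $\lnot (P \leftrightarrow \lnot P)$.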
(* Title: HOL/Algebra/Subrings.thy Authors: Martin Baillon and Paulo Emílio de Vilhena *) theory Subrings imports Ring RingHom QuotRing begin section \<open>Subrings\<close> subsection \<open>Definitions\<close> locale subring = subgroup H "add_monoid R" + submonoid H R for H and R (structure) locale subcring = subring + assumes sub_m_comm: "\<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes> h2 = h2 \<otimes> h1" locale subdomain = subcring + assumes sub_one_not_zero [simp]: "\<one> \<noteq> \<zero>" assumes subintegral: "\<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes> h2 = \<zero> \<Longrightarrow> h1 = \<zero> \<or> h2 = \<zero>" locale subfield = subdomain K R for K and R (structure) + assumes subfield_Units: "Units (R \<lparr> carrier := K \<rparr>) = K - { \<zero> }" subsection \<open>Basic Properties\<close> subsubsection \<open>Subrings\<close> lemma (in ring) subringI: assumes "H \<subseteq> carrier R" and "\<one> \<in> H" and "\<And>h. h \<in> H \<Longrightarrow> \<ominus> h \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes> h2 \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<oplus> h2 \<in> H" shows "subring H R" using add.subgroupI[OF assms(1) _ assms(3, 5)] assms(2) submonoid.intro[OF assms(1, 4, 2)] unfolding subring_def by auto lemma subringE: assumes "subring H R" shows "H \<subseteq> carrier R" and "\<zero>\<^bsub>R\<^esub> \<in> H" and "\<one>\<^bsub>R\<^esub> \<in> H" and "H \<noteq> {}" and "\<And>h. h \<in> H \<Longrightarrow> \<ominus>\<^bsub>R\<^esub> h \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes>\<^bsub>R\<^esub> h2 \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<oplus>\<^bsub>R\<^esub> h2 \<in> H" using subring.axioms[OF assms] unfolding submonoid_def subgroup_def a_inv_def by auto lemma (in ring) carrier_is_subring: "subring (carrier R) R" by (simp add: subringI) lemma (in ring) subring_inter: assumes "subring I R" and "subring J R" shows "subring (I \<inter> J) R" using subringE[OF assms(1)] subringE[OF assms(2)] subringI[of "I \<inter> J"] by auto lemma (in ring) subring_Inter: assumes "\<And>I. I \<in> S \<Longrightarrow> subring I R" and "S \<noteq> {}" shows "subring (\<Inter>S) R" proof (rule subringI, auto simp add: assms subringE[of _ R]) fix x assume "\<forall>I \<in> S. 
x \<in> I" thus "x \<in> carrier R" using assms subringE(1)[of _ R] by blast qed (* NEW ======================= *) lemma (in ring) subring_is_ring: assumes "subring H R" shows "ring (R \<lparr> carrier := H \<rparr>)" proof - interpret group "add_monoid (R \<lparr> carrier := H \<rparr>)" + monoid "R \<lparr> carrier := H \<rparr>" using subgroup.subgroup_is_group[OF subring.axioms(1) add.is_group] assms submonoid.submonoid_is_monoid[OF subring.axioms(2) monoid_axioms] by auto show ?thesis using subringE(1)[OF assms] by (unfold_locales, simp_all add: subringE(1)[OF assms] add.m_comm subset_eq l_distr r_distr) qed lemma (in ring) ring_incl_imp_subring: assumes "H \<subseteq> carrier R" and "ring (R \<lparr> carrier := H \<rparr>)" shows "subring H R" using group.group_incl_imp_subgroup[OF add.group_axioms, of H] assms(1) monoid.monoid_incl_imp_submonoid[OF monoid_axioms assms(1)] ring.axioms(1, 2)[OF assms(2)] abelian_group.a_group[of "R \<lparr> carrier := H \<rparr>"] unfolding subring_def by auto (* PROOF ====================== *) lemma (in ring) subring_iff: assumes "H \<subseteq> carrier R" shows "subring H R \<longleftrightarrow> ring (R \<lparr> carrier := H \<rparr>)" using subring_is_ring ring_incl_imp_subring[OF assms] by auto subsubsection \<open>Subcrings\<close> lemma (in ring) subcringI: assumes "subring H R" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes> h2 = h2 \<otimes> h1" shows "subcring H R" unfolding subcring_def subcring_axioms_def using assms by simp+ lemma (in cring) subcringI': assumes "subring H R" shows "subcring H R" using subcringI[OF assms] subringE(1)[OF assms] m_comm by auto lemma subcringE: assumes "subcring H R" shows "H \<subseteq> carrier R" and "\<zero>\<^bsub>R\<^esub> \<in> H" and "\<one>\<^bsub>R\<^esub> \<in> H" and "H \<noteq> {}" and "\<And>h. h \<in> H \<Longrightarrow> \<ominus>\<^bsub>R\<^esub> h \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes>\<^bsub>R\<^esub> h2 \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<oplus>\<^bsub>R\<^esub> h2 \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes>\<^bsub>R\<^esub> h2 = h2 \<otimes>\<^bsub>R\<^esub> h1" using subringE[OF subcring.axioms(1)[OF assms]] subcring.sub_m_comm[OF assms] by simp+ lemma (in cring) carrier_is_subcring: "subcring (carrier R) R" by (simp add: subcringI' carrier_is_subring) lemma (in ring) subcring_inter: assumes "subcring I R" and "subcring J R" shows "subcring (I \<inter> J) R" using subcringE[OF assms(1)] subcringE[OF assms(2)] subcringI[of "I \<inter> J"] subringI[of "I \<inter> J"] by auto lemma (in ring) subcring_Inter: assumes "\<And>I. 
I \<in> S \<Longrightarrow> subcring I R" and "S \<noteq> {}" shows "subcring (\<Inter>S) R" proof (rule subcringI) show "subring (\<Inter>S) R" using subcring.axioms(1)[of _ R] subring_Inter[of S] assms by auto next fix h1 h2 assume h1: "h1 \<in> \<Inter>S" and h2: "h2 \<in> \<Inter>S" obtain S' where S': "S' \<in> S" using assms(2) by blast hence "h1 \<in> S'" "h2 \<in> S'" using h1 h2 by blast+ thus "h1 \<otimes> h2 = h2 \<otimes> h1" using subcring.sub_m_comm[OF assms(1)[OF S']] by simp qed lemma (in ring) subcring_iff: assumes "H \<subseteq> carrier R" shows "subcring H R \<longleftrightarrow> cring (R \<lparr> carrier := H \<rparr>)" proof assume A: "subcring H R" hence ring: "ring (R \<lparr> carrier := H \<rparr>)" using subring_iff[OF assms] subcring.axioms(1)[OF A] by simp moreover have "comm_monoid (R \<lparr> carrier := H \<rparr>)" using monoid.monoid_comm_monoidI[OF ring.is_monoid[OF ring]] subcring.sub_m_comm[OF A] by auto ultimately show "cring (R \<lparr> carrier := H \<rparr>)" using cring_def by blast next assume A: "cring (R \<lparr> carrier := H \<rparr>)" hence "subring H R" using cring.axioms(1) subring_iff[OF assms] by simp moreover have "comm_monoid (R \<lparr> carrier := H \<rparr>)" using A unfolding cring_def by simp hence"\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes> h2 = h2 \<otimes> h1" using comm_monoid.m_comm[of "R \<lparr> carrier := H \<rparr>"] by auto ultimately show "subcring H R" unfolding subcring_def subcring_axioms_def by auto qed subsubsection \<open>Subdomains\<close> lemma (in ring) subdomainI: assumes "subcring H R" and "\<one> \<noteq> \<zero>" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes> h2 = \<zero> \<Longrightarrow> h1 = \<zero> \<or> h2 = \<zero>" shows "subdomain H R" unfolding subdomain_def subdomain_axioms_def using assms by simp+ lemma (in domain) subdomainI': assumes "subring H R" shows "subdomain H R" proof (rule subdomainI[OF subcringI[OF assms]], simp_all) show "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes> h2 = h2 \<otimes> h1" using m_comm subringE(1)[OF assms] by auto show "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H; h1 \<otimes> h2 = \<zero> \<rbrakk> \<Longrightarrow> (h1 = \<zero>) \<or> (h2 = \<zero>)" using integral subringE(1)[OF assms] by auto qed lemma subdomainE: assumes "subdomain H R" shows "H \<subseteq> carrier R" and "\<zero>\<^bsub>R\<^esub> \<in> H" and "\<one>\<^bsub>R\<^esub> \<in> H" and "H \<noteq> {}" and "\<And>h. h \<in> H \<Longrightarrow> \<ominus>\<^bsub>R\<^esub> h \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes>\<^bsub>R\<^esub> h2 \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<oplus>\<^bsub>R\<^esub> h2 \<in> H" and "\<And>h1 h2. \<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes>\<^bsub>R\<^esub> h2 = h2 \<otimes>\<^bsub>R\<^esub> h1" and "\<And>h1 h2. 
\<lbrakk> h1 \<in> H; h2 \<in> H \<rbrakk> \<Longrightarrow> h1 \<otimes>\<^bsub>R\<^esub> h2 = \<zero>\<^bsub>R\<^esub> \<Longrightarrow> h1 = \<zero>\<^bsub>R\<^esub> \<or> h2 = \<zero>\<^bsub>R\<^esub>" and "\<one>\<^bsub>R\<^esub> \<noteq> \<zero>\<^bsub>R\<^esub>" using subcringE[OF subdomain.axioms(1)[OF assms]] assms unfolding subdomain_def subdomain_axioms_def by auto lemma (in ring) subdomain_iff: assumes "H \<subseteq> carrier R" shows "subdomain H R \<longleftrightarrow> domain (R \<lparr> carrier := H \<rparr>)" proof assume A: "subdomain H R" hence cring: "cring (R \<lparr> carrier := H \<rparr>)" using subcring_iff[OF assms] subdomain.axioms(1)[OF A] by simp thus "domain (R \<lparr> carrier := H \<rparr>)" using domain.intro[OF cring] subdomain.subintegral[OF A] subdomain.sub_one_not_zero[OF A] unfolding domain_axioms_def by auto next assume A: "domain (R \<lparr> carrier := H \<rparr>)" hence subcring: "subcring H R" using subcring_iff[OF assms] unfolding domain_def by simp thus "subdomain H R" using subdomain.intro[OF subcring] domain.integral[OF A] domain.one_not_zero[OF A] unfolding subdomain_axioms_def by auto qed lemma (in domain) subring_is_domain: assumes "subring H R" shows "domain (R \<lparr> carrier := H \<rparr>)" using subdomainI'[OF assms] unfolding subdomain_iff[OF subringE(1)[OF assms]] . (* NEW ====================== *) lemma (in ring) subdomain_is_domain: assumes "subdomain H R" shows "domain (R \<lparr> carrier := H \<rparr>)" using assms unfolding subdomain_iff[OF subdomainE(1)[OF assms]] . subsubsection \<open>Subfields\<close> lemma (in ring) subfieldI: assumes "subcring K R" and "Units (R \<lparr> carrier := K \<rparr>) = K - { \<zero> }" shows "subfield K R" proof (rule subfield.intro) show "subfield_axioms K R" using assms(2) unfolding subfield_axioms_def . show "subdomain K R" proof (rule subdomainI[OF assms(1)], auto) have subM: "submonoid K R" using subring.axioms(2)[OF subcring.axioms(1)[OF assms(1)]] . show contr: "\<one> = \<zero> \<Longrightarrow> False" proof - assume one_eq_zero: "\<one> = \<zero>" have "\<one> \<in> K" and "\<one> \<otimes> \<one> = \<one>" using submonoid.one_closed[OF subM] by simp+ hence "\<one> \<in> Units (R \<lparr> carrier := K \<rparr>)" unfolding Units_def by (simp, blast) hence "\<one> \<noteq> \<zero>" using assms(2) by simp thus False using one_eq_zero by simp qed fix k1 k2 assume k1: "k1 \<in> K" and k2: "k2 \<in> K" "k2 \<noteq> \<zero>" and k12: "k1 \<otimes> k2 = \<zero>" obtain k2' where k2': "k2' \<in> K" "k2' \<otimes> k2 = \<one>" "k2 \<otimes> k2' = \<one>" using assms(2) k2 unfolding Units_def by auto have "\<zero> = (k1 \<otimes> k2) \<otimes> k2'" using k12 k2'(1) submonoid.mem_carrier[OF subM] by fastforce also have "... = k1" using k1 k2(1) k2'(1,3) submonoid.mem_carrier[OF subM] by (simp add: m_assoc) finally have "\<zero> = k1" . thus "k1 = \<zero>" by simp qed qed lemma (in field) subfieldI': assumes "subring K R" and "\<And>k. 
k \<in> K - { \<zero> } \<Longrightarrow> inv k \<in> K" shows "subfield K R" proof (rule subfieldI) show "subcring K R" using subcringI[OF assms(1)] m_comm subringE(1)[OF assms(1)] by auto show "Units (R \<lparr> carrier := K \<rparr>) = K - { \<zero> }" proof show "K - { \<zero> } \<subseteq> Units (R \<lparr> carrier := K \<rparr>)" proof fix k assume k: "k \<in> K - { \<zero> }" hence inv_k: "inv k \<in> K" using assms(2) by simp moreover have "k \<in> carrier R - { \<zero> }" using subringE(1)[OF assms(1)] k by auto ultimately have "k \<otimes> inv k = \<one>" "inv k \<otimes> k = \<one>" by (simp add: field_Units)+ thus "k \<in> Units (R \<lparr> carrier := K \<rparr>)" unfolding Units_def using k inv_k by auto qed next show "Units (R \<lparr> carrier := K \<rparr>) \<subseteq> K - { \<zero> }" proof fix k assume k: "k \<in> Units (R \<lparr> carrier := K \<rparr>)" then obtain k' where k': "k' \<in> K" "k \<otimes> k' = \<one>" unfolding Units_def by auto hence "k \<in> carrier R" and "k' \<in> carrier R" using k subringE(1)[OF assms(1)] unfolding Units_def by auto hence "\<zero> = \<one>" if "k = \<zero>" using that k'(2) by auto thus "k \<in> K - { \<zero> }" using k unfolding Units_def by auto qed qed qed lemma (in field) carrier_is_subfield: "subfield (carrier R) R" by (auto intro: subfieldI[OF carrier_is_subcring] simp add: field_Units) lemma subfieldE: assumes "subfield K R" shows "subring K R" and "subcring K R" and "K \<subseteq> carrier R" and "\<And>k1 k2. \<lbrakk> k1 \<in> K; k2 \<in> K \<rbrakk> \<Longrightarrow> k1 \<otimes>\<^bsub>R\<^esub> k2 = k2 \<otimes>\<^bsub>R\<^esub> k1" and "\<And>k1 k2. \<lbrakk> k1 \<in> K; k2 \<in> K \<rbrakk> \<Longrightarrow> k1 \<otimes>\<^bsub>R\<^esub> k2 = \<zero>\<^bsub>R\<^esub> \<Longrightarrow> k1 = \<zero>\<^bsub>R\<^esub> \<or> k2 = \<zero>\<^bsub>R\<^esub>" and "\<one>\<^bsub>R\<^esub> \<noteq> \<zero>\<^bsub>R\<^esub>" using subdomain.axioms(1)[OF subfield.axioms(1)[OF assms]] subcring_def subdomainE(1, 8, 9, 10)[OF subfield.axioms(1)[OF assms]] by auto lemma (in ring) subfield_m_inv: assumes "subfield K R" and "k \<in> K - { \<zero> }" shows "inv k \<in> K - { \<zero> }" and "k \<otimes> inv k = \<one>" and "inv k \<otimes> k = \<one>" proof - have K: "subring K R" "submonoid K R" using subfieldE(1)[OF assms(1)] subring.axioms(2) by auto have monoid: "monoid (R \<lparr> carrier := K \<rparr>)" using submonoid.submonoid_is_monoid[OF subring.axioms(2)[OF K(1)] is_monoid] . 
have "monoid R" by (simp add: monoid_axioms) hence k: "k \<in> Units (R \<lparr> carrier := K \<rparr>)" using subfield.subfield_Units[OF assms(1)] assms(2) by blast hence unit_of_R: "k \<in> Units R" using assms(2) subringE(1)[OF subfieldE(1)[OF assms(1)]] unfolding Units_def by auto have "inv\<^bsub>(R \<lparr> carrier := K \<rparr>)\<^esub> k \<in> Units (R \<lparr> carrier := K \<rparr>)" by (simp add: k monoid monoid.Units_inv_Units) hence "inv\<^bsub>(R \<lparr> carrier := K \<rparr>)\<^esub> k \<in> K - { \<zero> }" using subfield.subfield_Units[OF assms(1)] by blast thus "inv k \<in> K - { \<zero> }" and "k \<otimes> inv k = \<one>" and "inv k \<otimes> k = \<one>" using Units_l_inv[OF unit_of_R] Units_r_inv[OF unit_of_R] using monoid.m_inv_monoid_consistent[OF monoid_axioms k K(2)] by auto qed lemma (in ring) subfield_m_inv_simprule: assumes "subfield K R" shows "\<lbrakk> k \<in> K - { \<zero> }; a \<in> carrier R \<rbrakk> \<Longrightarrow> k \<otimes> a \<in> K \<Longrightarrow> a \<in> K" proof - note subring_props = subringE[OF subfieldE(1)[OF assms]] assume A: "k \<in> K - { \<zero> }" "a \<in> carrier R" "k \<otimes> a \<in> K" then obtain k' where k': "k' \<in> K" "k \<otimes> a = k'" by blast have inv_k: "inv k \<in> K" "inv k \<otimes> k = \<one>" using subfield_m_inv[OF assms A(1)] by auto hence "inv k \<otimes> (k \<otimes> a) \<in> K" using k' A(3) subring_props(6) by auto thus "a \<in> K" using m_assoc[of "inv k" k a] A(2) inv_k subring_props(1) by (metis (no_types, hide_lams) A(1) Diff_iff l_one subsetCE) qed lemma (in ring) subfield_iff: shows "\<lbrakk> field (R \<lparr> carrier := K \<rparr>); K \<subseteq> carrier R \<rbrakk> \<Longrightarrow> subfield K R" and "subfield K R \<Longrightarrow> field (R \<lparr> carrier := K \<rparr>)" proof- assume A: "field (R \<lparr> carrier := K \<rparr>)" "K \<subseteq> carrier R" have "\<And>k1 k2. \<lbrakk> k1 \<in> K; k2 \<in> K \<rbrakk> \<Longrightarrow> k1 \<otimes> k2 = k2 \<otimes> k1" using comm_monoid.m_comm[OF cring.axioms(2)[OF fieldE(1)[OF A(1)]]] by simp moreover have "subring K R" using ring_incl_imp_subring[OF A(2) cring.axioms(1)[OF fieldE(1)[OF A(1)]]] . 
ultimately have "subcring K R" using subcringI by simp thus "subfield K R" using field.field_Units[OF A(1)] subfieldI by auto next assume A: "subfield K R" have cring: "cring (R \<lparr> carrier := K \<rparr>)" using subcring_iff[OF subringE(1)[OF subfieldE(1)[OF A]]] subfieldE(2)[OF A] by simp thus "field (R \<lparr> carrier := K \<rparr>)" using cring.cring_fieldI[OF cring] subfield.subfield_Units[OF A] by simp qed subsection \<open>Subring Homomorphisms\<close> (* PROOF ====================== *) lemma (in ring) hom_imp_img_subring: assumes "h \<in> ring_hom R S" and "subring K R" shows "ring (S \<lparr> carrier := h ` K, zero := h \<zero> \<rparr>)" proof - have "ring (R \<lparr> carrier := K \<rparr>)" using subring_is_ring[OF assms(2)] by simp moreover have "h \<in> ring_hom (R \<lparr> carrier := K \<rparr>) S" using assms subringE(1)[OF assms (2)] unfolding ring_hom_def apply simp apply blast done ultimately show ?thesis using ring.ring_hom_imp_img_ring[of "R \<lparr> carrier := K \<rparr>" h S] by simp qed lemma (in ring_hom_ring) img_is_subring: assumes "subring K R" shows "subring (h ` K) S" proof - have "ring (S \<lparr> carrier := h ` K \<rparr>)" using R.hom_imp_img_subring[OF homh assms] hom_zero hom_one by simp moreover have "h ` K \<subseteq> carrier S" using ring_hom_memE(1)[OF homh] subringE(1)[OF assms] by auto ultimately show ?thesis using ring_incl_imp_subring by simp qed (* PROOF ====================== *) lemma (in ring_hom_ring) img_is_subfield: assumes "subfield K R" and "\<one>\<^bsub>S\<^esub> \<noteq> \<zero>\<^bsub>S\<^esub>" shows "inj_on h K" and "subfield (h ` K) S" proof - have K: "K \<subseteq> carrier R" "subring K R" "subring (h ` K) S" using subfieldE(1)[OF assms(1)] subringE(1) img_is_subring by auto have field: "field (R \<lparr> carrier := K \<rparr>)" and ring: "ring (R \<lparr> carrier := K \<rparr>)" "ring (S \<lparr> carrier := h ` K \<rparr>)" using R.subfield_iff assms(1) R.subring_is_ring[OF K(2)] S.subring_is_ring[OF K(3)] by auto hence h: "h \<in> ring_hom (R \<lparr> carrier := K \<rparr>) (S \<lparr> carrier := h ` K \<rparr>)" unfolding ring_hom_def apply auto using ring_hom_memE[OF homh] K by (meson contra_subsetD)+ hence ring_hom: "ring_hom_ring (R \<lparr> carrier := K \<rparr>) (S \<lparr> carrier := h ` K \<rparr>) h" using ring_axioms ring ring_hom_ringI2 by blast have "h ` K \<noteq> { \<zero>\<^bsub>S\<^esub> }" using subfieldE(1, 5)[OF assms(1)] subringE(3) assms(2) by (metis hom_one image_eqI singletonD) thus "inj_on h K" using ring_hom_ring.non_trivial_field_hom_imp_inj[OF ring_hom field] by auto hence "h \<in> ring_iso (R \<lparr> carrier := K \<rparr>) (S \<lparr> carrier := h ` K \<rparr>)" using h unfolding ring_iso_def bij_betw_def by auto hence "field (S \<lparr> carrier := h ` K \<rparr>)" using field.ring_iso_imp_img_field[OF field, of h "S \<lparr> carrier := h ` K \<rparr>"] by auto thus "subfield (h ` K) S" using S.subfield_iff[of "h ` K"] K(1) ring_hom_memE(1)[OF homh] by blast qed (* NEW ========================================================================== *) lemma (in ring_hom_ring) induced_ring_hom: assumes "subring K R" shows "ring_hom_ring (R \<lparr> carrier := K \<rparr>) S h" proof - have "h \<in> ring_hom (R \<lparr> carrier := K \<rparr>) S" using homh subringE(1)[OF assms] unfolding ring_hom_def by (auto, meson hom_mult hom_add subsetCE)+ thus ?thesis using R.subring_is_ring[OF assms] ring_axioms unfolding ring_hom_ring_def ring_hom_ring_axioms_def by auto qed (* NEW 
========================================================================== *) lemma (in ring_hom_ring) inj_on_subgroup_iff_trivial_ker: assumes "subring K R" shows "inj_on h K \<longleftrightarrow> a_kernel (R \<lparr> carrier := K \<rparr>) S h = { \<zero> }" using ring_hom_ring.inj_iff_trivial_ker[OF induced_ring_hom[OF assms]] by simp lemma (in ring_hom_ring) inv_ring_hom: assumes "inj_on h K" and "subring K R" shows "ring_hom_ring (S \<lparr> carrier := h ` K \<rparr>) R (inv_into K h)" proof (intro ring_hom_ringI[OF _ R.ring_axioms], auto) show "ring (S \<lparr> carrier := h ` K \<rparr>)" using subring_is_ring[OF img_is_subring[OF assms(2)]] . next show "inv_into K h \<one>\<^bsub>S\<^esub> = \<one>\<^bsub>R\<^esub>" using assms(1) subringE(3)[OF assms(2)] hom_one by (simp add: inv_into_f_eq) next fix k1 k2 assume k1: "k1 \<in> K" and k2: "k2 \<in> K" with \<open>k1 \<in> K\<close> show "inv_into K h (h k1) \<in> carrier R" using assms(1) subringE(1)[OF assms(2)] by (simp add: subset_iff) from \<open>k1 \<in> K\<close> and \<open>k2 \<in> K\<close> have "h k1 \<oplus>\<^bsub>S\<^esub> h k2 = h (k1 \<oplus>\<^bsub>R\<^esub> k2)" and "k1 \<oplus>\<^bsub>R\<^esub> k2 \<in> K" and "h k1 \<otimes>\<^bsub>S\<^esub> h k2 = h (k1 \<otimes>\<^bsub>R\<^esub> k2)" and "k1 \<otimes>\<^bsub>R\<^esub> k2 \<in> K" using subringE(1,6,7)[OF assms(2)] by (simp add: subset_iff)+ thus "inv_into K h (h k1 \<oplus>\<^bsub>S\<^esub> h k2) = inv_into K h (h k1) \<oplus>\<^bsub>R\<^esub> inv_into K h (h k2)" and "inv_into K h (h k1 \<otimes>\<^bsub>S\<^esub> h k2) = inv_into K h (h k1) \<otimes>\<^bsub>R\<^esub> inv_into K h (h k2)" using assms(1) k1 k2 by simp+ qed end
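As a reading aid for the theory above, the `subringI`/`subringE` pair characterizes subrings by the usual closure conditions (an informal restatement, not new material):

$$ \text{subring } H\; R \iff H \subseteq \mathrm{carrier}\,R,\;\; \mathbf{1} \in H,\;\; \forall h \in H.\ {-}h \in H,\;\; \forall h_1\, h_2 \in H.\ h_1 \otimes h_2 \in H \wedge h_1 \oplus h_2 \in H, $$

and `subring_iff` upgrades this to: for $H \subseteq \mathrm{carrier}\,R$, $H$ is a subring iff the restricted structure `R ⦇carrier := H⦈` is itself a ring. The subcring, subdomain, and subfield subsections follow the same introduction/elimination/iff pattern, adding commutativity, integrality, and invertibility in turn.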
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.decomposition import PCA

data = pd.read_excel(r"Concrete_Data.xls")

req_col_names = ["Cement", "BlastFurnaceSlag", "FlyAsh", "Water", "Superplasticizer",
                 "CoarseAggregate", "FineAggregate", "Age", "CC_Strength"]
curr_col_names = list(data.columns)

mapper = {}
for i, name in enumerate(curr_col_names):
    mapper[name] = req_col_names[i]
data = data.rename(columns=mapper)

print(data.head())
print(data.isna().sum())
print(data.describe())

# Pairwise relations (expensive; uncomment to inspect)
# sns.pairplot(data)
# plt.show()

# Pearson correlation heatmap of the PCA-transformed features.
# Note: principal components are uncorrelated by construction, so the
# off-diagonal coefficients here are expected to be ~0; apply
# np.corrcoef(X.T) to the raw features to see the real correlations.
X = data.values[:, :-1]
pca = PCA(n_components=X.shape[1] - 2)
newX = pca.fit_transform(X)
corr = np.corrcoef(newX.T)

plt.figure(figsize=(9, 7))
sns.heatmap(corr, annot=True, cmap='Oranges')
b, t = plt.ylim()            # workaround for the matplotlib 3.1 heatmap cropping bug
plt.ylim(b + 0.5, t - 0.5)
plt.title("Feature Correlation Heatmap")
plt.show()

# Observations from Strength vs (Cement, Age, Water)
ax = sns.histplot(data.CC_Strength, kde=True)  # distplot is deprecated in recent seaborn
ax.set_title("Compressive Strength Distribution")

fig, ax = plt.subplots(figsize=(10, 7))
sns.scatterplot(y="CC_Strength", x="Cement", hue="Water", size="Age",
                data=data, ax=ax, sizes=(50, 300))
ax.set_title("CC Strength vs (Cement, Age, Water)")
ax.legend(loc="upper left", bbox_to_anchor=(1, 1))
plt.show()

# Observations from CC Strength vs (Fine aggregate, Super Plasticizer, FlyAsh)
fig, ax = plt.subplots(figsize=(10, 7))
sns.scatterplot(y="CC_Strength", x="FineAggregate", hue="FlyAsh", size="Superplasticizer",
                data=data, ax=ax, sizes=(50, 300))
ax.set_title("CC Strength vs (Fine aggregate, Super Plasticizer, FlyAsh)")
ax.legend(loc="upper left", bbox_to_anchor=(1, 1))
plt.show()

# Observations from CC Strength vs (Fine aggregate, Super Plasticizer, Water)
fig, ax = plt.subplots(figsize=(10, 7))
sns.scatterplot(y="CC_Strength", x="FineAggregate", hue="Water", size="Superplasticizer",
                data=data, ax=ax, sizes=(50, 300))
ax.set_title("CC Strength vs (Fine aggregate, Super Plasticizer, Water)")
ax.legend(loc="upper left", bbox_to_anchor=(1, 1))
plt.show()

# reference: https://github.com/pranaymodukuru/Concrete-compressive-strength/blob/master/ConcreteCompressiveStrengthPrediction.ipynb
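The referenced notebook continues from this exploratory analysis to fitting regression models. Below is a minimal sketch of that next step, assuming scikit-learn is available; the split, scaler, and model choices are illustrative, not taken from the original script.

# Baseline regression on the renamed frame `data` above (illustrative sketch).
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

features = data.drop(columns=["CC_Strength"])
target = data["CC_Strength"]

# Hold out 20% of the mixes for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=42)

# Standardize features; harmless for OLS, needed for regularized models.
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)

model = LinearRegression().fit(X_train_s, y_train)
pred = model.predict(X_test_s)
print(f"R^2:  {r2_score(y_test, pred):.3f}")
print(f"RMSE: {mean_squared_error(y_test, pred) ** 0.5:.3f}")

Swapping `LinearRegression` for `Ridge`, `Lasso`, or a tree ensemble is a one-line change against the same split, which is the kind of comparison the referenced notebook carries out.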
Formal statement is: lemma powser_0_nonzero: fixes a :: "nat \<Rightarrow> 'a::{real_normed_field,banach}" assumes r: "0 < r" and sm: "\<And>x. norm (x-\<xi>) < r \<Longrightarrow> (\<lambda>n. a n * (x-\<xi>) ^ n) sums (f x)" and [simp]: "f \<xi> = 0" and m0: "a m \<noteq> 0" and "m>0" obtains s where "0 < s" and "\<And>z. z \<in> cball \<xi> s - {\<xi>} \<Longrightarrow> f z \<noteq> 0" Informal statement is: If the power series $\sum_n a_n (x-\xi)^n$ converges to $f(x)$ for $|x - \xi| < r$ with $r > 0$, $f(\xi) = 0$, and $a_m \neq 0$ for some $m > 0$, then there exists $s > 0$ such that $f(z) \neq 0$ for all $z$ with $0 < |z - \xi| \le s$.
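One way to see why such an $s$ exists (a sketch of the standard argument, not the Isabelle proof): let $m_0 > 0$ be the least index with $a_{m_0} \neq 0$, which exists because $a_m \neq 0$ while $a_0 = f(\xi) = 0$. Then

$$ f(z) = \sum_{n \ge m_0} a_n (z-\xi)^n = (z-\xi)^{m_0}\, g(z), \qquad g(\xi) = a_{m_0} \neq 0, $$

with $g$ continuous on the ball of convergence, so $g$ is nonzero on some closed ball $\overline{B}(\xi, s)$ with $s > 0$, and therefore $f(z) \neq 0$ whenever $0 < |z - \xi| \le s$.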
/- Copyright (c) 2019 Scott Morrison. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Scott Morrison, Bhavik Mehta -/ import Mathlib.PrePort import Mathlib.Lean3Lib.init.default import Mathlib.category_theory.monad.basic import Mathlib.category_theory.adjunction.basic import Mathlib.category_theory.reflects_isomorphisms import Mathlib.PostPort universes v₁ u₁ l namespace Mathlib /-! # Eilenberg-Moore (co)algebras for a (co)monad This file defines Eilenberg-Moore (co)algebras for a (co)monad, and provides the category instance for them. Further it defines the adjoint pair of free and forgetful functors, respectively from and to the original category, as well as the adjoint pair of forgetful and cofree functors, respectively from and to the original category. ## References * [Riehl, *Category theory in context*, Section 5.2.4][riehl2017] -/ namespace category_theory namespace monad /-- An Eilenberg-Moore algebra for a monad `T`. cf Definition 5.2.3 in [Riehl][riehl2017]. -/ structure algebra {C : Type u₁} [category C] (T : C ⥤ C) [monad T] where A : C a : functor.obj T A ⟶ A unit' : autoParam (nat_trans.app η_ A ≫ a = 𝟙) (Lean.Syntax.ident Lean.SourceInfo.none (String.toSubstring "Mathlib.obviously") (Lean.Name.mkStr (Lean.Name.mkStr Lean.Name.anonymous "Mathlib") "obviously") []) assoc' : autoParam (nat_trans.app μ_ A ≫ a = functor.map T a ≫ a) (Lean.Syntax.ident Lean.SourceInfo.none (String.toSubstring "Mathlib.obviously") (Lean.Name.mkStr (Lean.Name.mkStr Lean.Name.anonymous "Mathlib") "obviously") []) theorem algebra.unit {C : Type u₁} [category C] {T : C ⥤ C} [monad T] (c : algebra T) : nat_trans.app η_ (algebra.A c) ≫ algebra.a c = 𝟙 := sorry theorem algebra.assoc {C : Type u₁} [category C] {T : C ⥤ C} [monad T] (c : algebra T) : nat_trans.app μ_ (algebra.A c) ≫ algebra.a c = functor.map T (algebra.a c) ≫ algebra.a c := sorry theorem algebra.assoc_assoc {C : Type u₁} [category C] {T : C ⥤ C} [monad T] (c : algebra T) {X' : C} (f' : algebra.A c ⟶ X') : nat_trans.app μ_ (algebra.A c) ≫ algebra.a c ≫ f' = functor.map T (algebra.a c) ≫ algebra.a c ≫ f' := sorry namespace algebra /-- A morphism of Eilenberg–Moore algebras for the monad `T`. -/ structure hom {C : Type u₁} [category C] {T : C ⥤ C} [monad T] (A : algebra T) (B : algebra T) where f : A A ⟶ A B h' : autoParam (functor.map T f ≫ a B = a A ≫ f) (Lean.Syntax.ident Lean.SourceInfo.none (String.toSubstring "Mathlib.obviously") (Lean.Name.mkStr (Lean.Name.mkStr Lean.Name.anonymous "Mathlib") "obviously") []) @[simp] theorem hom.h {C : Type u₁} [category C] {T : C ⥤ C} [monad T] {A : algebra T} {B : algebra T} (c : hom A B) : functor.map T (hom.f c) ≫ a B = a A ≫ hom.f c := sorry @[simp] theorem hom.h_assoc {C : Type u₁} [category C] {T : C ⥤ C} [monad T] {A : algebra T} {B : algebra T} (c : hom A B) {X' : C} (f' : A B ⟶ X') : functor.map T (hom.f c) ≫ a B ≫ f' = a A ≫ hom.f c ≫ f' := sorry namespace hom /-- The identity homomorphism for an Eilenberg–Moore algebra. -/ def id {C : Type u₁} [category C] {T : C ⥤ C} [monad T] (A : algebra T) : hom A A := mk 𝟙 protected instance inhabited {C : Type u₁} [category C] {T : C ⥤ C} [monad T] (A : algebra T) : Inhabited (hom A A) := { default := mk 𝟙 } /-- Composition of Eilenberg–Moore algebra homomorphisms. 
-/ def comp {C : Type u₁} [category C] {T : C ⥤ C} [monad T] {P : algebra T} {Q : algebra T} {R : algebra T} (f : hom P Q) (g : hom Q R) : hom P R := mk (f f ≫ f g) end hom protected instance category_theory.category_struct {C : Type u₁} [category C] {T : C ⥤ C} [monad T] : category_struct (algebra T) := category_struct.mk hom.id hom.comp @[simp] theorem comp_eq_comp {C : Type u₁} [category C] {T : C ⥤ C} [monad T] {A : algebra T} {A' : algebra T} {A'' : algebra T} (f : A ⟶ A') (g : A' ⟶ A'') : hom.comp f g = f ≫ g := rfl @[simp] theorem id_eq_id {C : Type u₁} [category C] {T : C ⥤ C} [monad T] (A : algebra T) : hom.id A = 𝟙 := rfl @[simp] theorem id_f {C : Type u₁} [category C] {T : C ⥤ C} [monad T] (A : algebra T) : hom.f 𝟙 = 𝟙 := rfl @[simp] theorem comp_f {C : Type u₁} [category C] {T : C ⥤ C} [monad T] {A : algebra T} {A' : algebra T} {A'' : algebra T} (f : A ⟶ A') (g : A' ⟶ A'') : hom.f (f ≫ g) = hom.f f ≫ hom.f g := rfl /-- The category of Eilenberg-Moore algebras for a monad. cf Definition 5.2.4 in [Riehl][riehl2017]. -/ protected instance EilenbergMoore {C : Type u₁} [category C] {T : C ⥤ C} [monad T] : category (algebra T) := category.mk /-- To construct an isomorphism of algebras, it suffices to give an isomorphism of the carriers which commutes with the structure morphisms. -/ @[simp] theorem iso_mk_hom_f {C : Type u₁} [category C] {T : C ⥤ C} [monad T] {A : algebra T} {B : algebra T} (h : A A ≅ A B) (w : functor.map T (iso.hom h) ≫ a B = a A ≫ iso.hom h) : hom.f (iso.hom (iso_mk h w)) = iso.hom h := Eq.refl (hom.f (iso.hom (iso_mk h w))) end algebra /-- The forgetful functor from the Eilenberg-Moore category, forgetting the algebraic structure. -/ @[simp] theorem forget_map {C : Type u₁} [category C] (T : C ⥤ C) [monad T] (A : algebra T) (B : algebra T) (f : A ⟶ B) : functor.map (forget T) f = algebra.hom.f f := Eq.refl (functor.map (forget T) f) /-- The free functor from the Eilenberg-Moore category, constructing an algebra for any object. -/ @[simp] theorem free_obj_A {C : Type u₁} [category C] (T : C ⥤ C) [monad T] (X : C) : algebra.A (functor.obj (free T) X) = functor.obj T X := Eq.refl (algebra.A (functor.obj (free T) X)) protected instance algebra.inhabited {C : Type u₁} [category C] (T : C ⥤ C) [monad T] [Inhabited C] : Inhabited (algebra T) := { default := functor.obj (free T) Inhabited.default } /-- The adjunction between the free and forgetful constructions for Eilenberg-Moore algebras for a monad. cf Lemma 5.2.8 of [Riehl][riehl2017]. -/ -- The other two `simps` projection lemmas can be derived from these two, so `simp_nf` complains if -- those are added too @[simp] theorem adj_counit {C : Type u₁} [category C] (T : C ⥤ C) [monad T] : adjunction.counit (adj T) = nat_trans.mk fun (Y : algebra T) => equiv.inv_fun (adjunction.core_hom_equiv.hom_equiv (adjunction.core_hom_equiv.mk fun (X : C) (Y : algebra T) => equiv.mk (fun (f : functor.obj (free T) X ⟶ Y) => nat_trans.app η_ X ≫ algebra.hom.f f) (fun (f : X ⟶ functor.obj (forget T) Y) => algebra.hom.mk (functor.map T f ≫ algebra.a Y)) (adj._proof_2 T X Y) (adj._proof_3 T X Y)) (functor.obj (forget T) Y) (functor.obj 𝟭 Y)) 𝟙 := Eq.refl (adjunction.counit (adj T)) /-- Given an algebra morphism whose carrier part is an isomorphism, we get an algebra isomorphism. 
-/ def algebra_iso_of_iso {C : Type u₁} [category C] (T : C ⥤ C) [monad T] {A : algebra T} {B : algebra T} (f : A ⟶ B) [is_iso (algebra.hom.f f)] : is_iso f := is_iso.mk (algebra.hom.mk (inv (algebra.hom.f f))) protected instance forget_reflects_iso {C : Type u₁} [category C] (T : C ⥤ C) [monad T] : reflects_isomorphisms (forget T) := reflects_isomorphisms.mk fun (A B : algebra T) => algebra_iso_of_iso T protected instance forget_faithful {C : Type u₁} [category C] (T : C ⥤ C) [monad T] : faithful (forget T) := faithful.mk end monad namespace comonad /-- An Eilenberg-Moore coalgebra for a comonad `T`. -/ structure coalgebra {C : Type u₁} [category C] (G : C ⥤ C) [comonad G] where A : C a : A ⟶ functor.obj G A counit' : autoParam (a ≫ nat_trans.app ε_ A = 𝟙) (Lean.Syntax.ident Lean.SourceInfo.none (String.toSubstring "Mathlib.obviously") (Lean.Name.mkStr (Lean.Name.mkStr Lean.Name.anonymous "Mathlib") "obviously") []) coassoc' : autoParam (a ≫ nat_trans.app δ_ A = a ≫ functor.map G a) (Lean.Syntax.ident Lean.SourceInfo.none (String.toSubstring "Mathlib.obviously") (Lean.Name.mkStr (Lean.Name.mkStr Lean.Name.anonymous "Mathlib") "obviously") []) theorem coalgebra.counit {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] (c : coalgebra G) : coalgebra.a c ≫ nat_trans.app ε_ (coalgebra.A c) = 𝟙 := sorry theorem coalgebra.coassoc {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] (c : coalgebra G) : coalgebra.a c ≫ nat_trans.app δ_ (coalgebra.A c) = coalgebra.a c ≫ functor.map G (coalgebra.a c) := sorry theorem coalgebra.counit_assoc {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] (c : coalgebra G) {X' : C} (f' : coalgebra.A c ⟶ X') : coalgebra.a c ≫ nat_trans.app ε_ (coalgebra.A c) ≫ f' = f' := sorry namespace coalgebra /-- A morphism of Eilenberg-Moore coalgebras for the comonad `G`. -/ structure hom {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] (A : coalgebra G) (B : coalgebra G) where f : A A ⟶ A B h' : autoParam (a A ≫ functor.map G f = f ≫ a B) (Lean.Syntax.ident Lean.SourceInfo.none (String.toSubstring "Mathlib.obviously") (Lean.Name.mkStr (Lean.Name.mkStr Lean.Name.anonymous "Mathlib") "obviously") []) @[simp] theorem hom.h {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] {A : coalgebra G} {B : coalgebra G} (c : hom A B) : a A ≫ functor.map G (hom.f c) = hom.f c ≫ a B := sorry @[simp] theorem hom.h_assoc {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] {A : coalgebra G} {B : coalgebra G} (c : hom A B) {X' : C} (f' : functor.obj G (A B) ⟶ X') : a A ≫ functor.map G (hom.f c) ≫ f' = hom.f c ≫ a B ≫ f' := sorry namespace hom /-- The identity homomorphism for an Eilenberg–Moore coalgebra. -/ def id {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] (A : coalgebra G) : hom A A := mk 𝟙 /-- Composition of Eilenberg–Moore coalgebra homomorphisms. -/ def comp {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] {P : coalgebra G} {Q : coalgebra G} {R : coalgebra G} (f : hom P Q) (g : hom Q R) : hom P R := mk (f f ≫ f g) end hom /-- The category of Eilenberg-Moore coalgebras for a comonad. 
-/ protected instance category_theory.category_struct {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] : category_struct (coalgebra G) := category_struct.mk hom.id hom.comp @[simp] theorem comp_eq_comp {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] {A : coalgebra G} {A' : coalgebra G} {A'' : coalgebra G} (f : A ⟶ A') (g : A' ⟶ A'') : hom.comp f g = f ≫ g := rfl @[simp] theorem id_eq_id {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] (A : coalgebra G) : hom.id A = 𝟙 := rfl @[simp] theorem id_f {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] (A : coalgebra G) : hom.f 𝟙 = 𝟙 := rfl @[simp] theorem comp_f {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] {A : coalgebra G} {A' : coalgebra G} {A'' : coalgebra G} (f : A ⟶ A') (g : A' ⟶ A'') : hom.f (f ≫ g) = hom.f f ≫ hom.f g := rfl /-- The category of Eilenberg-Moore coalgebras for a comonad. -/ protected instance EilenbergMoore {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] : category (coalgebra G) := category.mk /-- To construct an isomorphism of coalgebras, it suffices to give an isomorphism of the carriers which commutes with the structure morphisms. -/ @[simp] theorem iso_mk_hom_f {C : Type u₁} [category C] {G : C ⥤ C} [comonad G] {A : coalgebra G} {B : coalgebra G} (h : A A ≅ A B) (w : a A ≫ functor.map G (iso.hom h) = iso.hom h ≫ a B) : hom.f (iso.hom (iso_mk h w)) = iso.hom h := Eq.refl (hom.f (iso.hom (iso_mk h w))) end coalgebra /-- The forgetful functor from the Eilenberg-Moore category, forgetting the coalgebraic structure. -/ def forget {C : Type u₁} [category C] (G : C ⥤ C) [comonad G] : coalgebra G ⥤ C := functor.mk (fun (A : coalgebra G) => coalgebra.A A) fun (A B : coalgebra G) (f : A ⟶ B) => coalgebra.hom.f f /-- Given a coalgebra morphism whose carrier part is an isomorphism, we get a coalgebra isomorphism. -/ def coalgebra_iso_of_iso {C : Type u₁} [category C] (G : C ⥤ C) [comonad G] {A : coalgebra G} {B : coalgebra G} (f : A ⟶ B) [is_iso (coalgebra.hom.f f)] : is_iso f := is_iso.mk (coalgebra.hom.mk (inv (coalgebra.hom.f f))) protected instance forget_reflects_iso {C : Type u₁} [category C] (G : C ⥤ C) [comonad G] : reflects_isomorphisms (forget G) := reflects_isomorphisms.mk fun (A B : coalgebra G) => coalgebra_iso_of_iso G /-- The cofree functor from the Eilenberg-Moore category, constructing a coalgebra for any object. -/ @[simp] theorem cofree_map_f {C : Type u₁} [category C] (G : C ⥤ C) [comonad G] (X : C) (Y : C) (f : X ⟶ Y) : coalgebra.hom.f (functor.map (cofree G) f) = functor.map G f := Eq.refl (coalgebra.hom.f (functor.map (cofree G) f)) /-- The adjunction between the cofree and forgetful constructions for Eilenberg-Moore coalgebras for a comonad. -/ -- The other two `simps` projection lemmas can be derived from these two, so `simp_nf` complains if -- those are added too @[simp] theorem adj_counit {C : Type u₁} [category C] (G : C ⥤ C) [comonad G] : adjunction.counit (adj G) = nat_trans.mk (nat_trans.app ε_) := sorry protected instance forget_faithful {C : Type u₁} [category C] (G : C ⥤ C) [comonad G] : faithful (forget G) := faithful.mk end Mathlib
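A concrete instance may help in parsing the definitions above (an illustration of ours, not part of the ported file, using Lean-style names): for the option monad on the category of types, the unit law forces an algebra structure map `a : option A ⟶ A` to satisfy `a (some x) = x`, so `a` is determined by the single value `a none : A`, and the associativity law then holds automatically. Eilenberg-Moore algebras of this monad are therefore exactly pointed types, algebra morphisms are the point-preserving maps, and the free algebra on `X` is `option X` with distinguished point `none`, matching the free/forgetful adjunction `adj` above.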
He scored nine times in 40 league games in his first season, including two in a 2–1 home win over relegated Southport on 28 April 2007 in order to seal a play-off place. Eleven days later, in the second leg of the play-off semi-final away to Oxford United, he scored a goal which took the match to extra time and eventually a penalty shootout, which his side won. In the final on 20 May at Wembley Stadium, he came on as a 36th-minute substitute for goalscorer Lee Phillips in a 1–2 loss to Morecambe.
#ifndef __RNG_H__ #define __RNG_H__ #include <gsl/gsl_rng.h> #include <ctime> #include<iostream> #include <cstring> #include <fstream> #define ABS(x) ((x) >= 0 ? (x) : -(x)) #define min(a,b) ((a) <= (b) ? (a) : (b)) #define max(a,b) ((a) >= (b) ? (a) : (b)) #define OVERFLO 1e100 #define UNDERFLO 1e-100 namespace weakarg { extern gsl_rng * rng; unsigned long makerng(bool fast=false); unsigned long seedrng(unsigned long seed=0); int saverng(std::string fname); int loadrng(std::string fname); double RandomReal(double low, double high); int RandomInteger(int low, int high); double rnd(); double RGamma(double n,double lambda); void RDirichlet(const double * a, const int k, double * b); long RPoisson(double mu); double RExpon(double av); double RNormal(double mu,double sd) ; double fsign( double num, double sign ); double sexpo(void); double snorm(); double genexp(double av); long ignpoi(double mean); long ignuin(int low, int high); double genunf(double low, double high); long Binomial(int n, double p); long Binomial1(int n, double p); double BinoProb(int n, double p,int i); void LogRDirichlet (const double *a, const int k, double *b,double *c); } // end namespace weakarg #endif // __RNG_H__
Formal statement is: lemma limitin_eventually: "\<lbrakk>l \<in> topspace X; eventually (\<lambda>x. f x = l) F\<rbrakk> \<Longrightarrow> limitin X f l F" Informal statement is: If $l$ is a point of the topological space $X$ and $f(x) = l$ eventually along the filter $F$, then $f$ tends to $l$ along $F$ in $X$.
State Before: α : Type ?u.80889 p : PosNum ⊢ bit0 (succ (_root_.bit0 p)) = bit0 (bit1 p) State After: α : Type ?u.80889 p : PosNum ⊢ bit0 (succ (bit0 p)) = bit0 (bit1 p) Tactic: rw [bit0_of_bit0 p] State Before: α : Type ?u.80889 p : PosNum ⊢ bit0 (succ (bit0 p)) = bit0 (bit1 p) State After: no goals Tactic: rfl
Load LFindLoad. From lfind Require Import LFind. From QuickChick Require Import QuickChick. From adtind Require Import goal33. Derive Show for natural. Derive Arbitrary for natural. Instance Dec_Eq_natural : Dec_Eq natural. Proof. dec_eq. Qed. Lemma conj25synthconj2 : forall (lv0 : natural) (lv1 : natural) (lv2 : natural), (@eq natural (plus (plus lv0 lv1) (plus (Succ lv2) Zero)) (plus (mult lv2 (Succ lv1)) (Succ lv1))). Admitted. QuickChick conj25synthconj2.
{-# OPTIONS --safe #-} module STLC.Syntax where open import Data.Nat using (ℕ; suc) open import Data.Fin using (Fin) renaming (zero to fzero; suc to fsuc) open import Relation.Binary.PropositionalEquality using (_≡_; refl) open import Relation.Nullary using (Dec; yes; no) private variable n : ℕ data Type : Set where _⇒_ : Type → Type → Type _×'_ : Type → Type → Type unit : Type data Term : ℕ → Set where abs : Type → Term (suc n) → Term n app : Term n → Term n → Term n var : Fin n → Term n ⋆ : Term n pair : Term n → Term n → Term n projₗ : Term n → Term n projᵣ : Term n → Term n data Ctx : ℕ → Set where ● : Ctx 0 _,-_ : Ctx n → Type → Ctx (suc n) find : Ctx n → Fin n → Type find (Γ ,- A) fzero = A find (Γ ,- A) (fsuc n) = find Γ n _=?=_ : (x : Type) → (y : Type) → Dec (x ≡ y) unit =?= unit = yes refl (a ⇒ b) =?= (c ⇒ d) with a =?= c | b =?= d ... | yes refl | yes refl = yes refl ... | yes refl | no ne = no λ { refl → ne refl } ... | no ne | _ = no λ { refl → ne refl } (a ×' b) =?= (c ×' d) with a =?= c | b =?= d ... | yes refl | yes refl = yes refl ... | yes refl | no ne = no λ { refl → ne refl } ... | no ne | _ = no λ { refl → ne refl } unit =?= (_ ×' _) = no λ () unit =?= (_ ⇒ _) = no λ () (_ ×' _) =?= unit = no λ () (_ ×' _) =?= (_ ⇒ _) = no λ () (_ ⇒ _) =?= unit = no λ () (_ ⇒ _) =?= (_ ×' _) = no λ ()
/- Copyright (c) 2021 Nicolò Cavalleri. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Nicolò Cavalleri -/ import topology.homeomorph /-! # Topological space structure on the opposite monoid and on the units group > THIS FILE IS SYNCHRONIZED WITH MATHLIB4. > Any changes to this file require a corresponding PR to mathlib4. In this file we define `topological_space` structure on `Mᵐᵒᵖ`, `Mᵃᵒᵖ`, `Mˣ`, and `add_units M`. This file does not import definitions of a topological monoid and/or a continuous multiplicative action, so we postpone the proofs of `has_continuous_mul Mᵐᵒᵖ` etc till we have these definitions. ## Tags topological space, opposite monoid, units -/ variables {M X : Type*} open filter open_locale topology namespace mul_opposite /-- Put the same topological space structure on the opposite monoid as on the original space. -/ @[to_additive "Put the same topological space structure on the opposite monoid as on the original space."] instance [topological_space M] : topological_space Mᵐᵒᵖ := topological_space.induced (unop : Mᵐᵒᵖ → M) ‹_› variables [topological_space M] @[continuity, to_additive] lemma continuous_unop : continuous (unop : Mᵐᵒᵖ → M) := continuous_induced_dom @[continuity, to_additive] lemma continuous_op : continuous (op : M → Mᵐᵒᵖ) := continuous_induced_rng.2 continuous_id /-- `mul_opposite.op` as a homeomorphism. -/ @[to_additive "`add_opposite.op` as a homeomorphism.", simps] def op_homeomorph : M ≃ₜ Mᵐᵒᵖ := { to_equiv := op_equiv, continuous_to_fun := continuous_op, continuous_inv_fun := continuous_unop } @[to_additive] instance [t2_space M] : t2_space Mᵐᵒᵖ := op_homeomorph.symm.embedding.t2_space @[simp, to_additive] lemma map_op_nhds (x : M) : map (op : M → Mᵐᵒᵖ) (𝓝 x) = 𝓝 (op x) := op_homeomorph.map_nhds_eq x @[simp, to_additive] lemma map_unop_nhds (x : Mᵐᵒᵖ) : map (unop : Mᵐᵒᵖ → M) (𝓝 x) = 𝓝 (unop x) := op_homeomorph.symm.map_nhds_eq x @[simp, to_additive] lemma comap_op_nhds (x : Mᵐᵒᵖ) : comap (op : M → Mᵐᵒᵖ) (𝓝 x) = 𝓝 (unop x) := op_homeomorph.comap_nhds_eq x @[simp, to_additive] lemma comap_unop_nhds (x : M) : comap (unop : Mᵐᵒᵖ → M) (𝓝 x) = 𝓝 (op x) := op_homeomorph.symm.comap_nhds_eq x end mul_opposite namespace units open mul_opposite variables [topological_space M] [monoid M] [topological_space X] /-- The units of a monoid are equipped with a topology, via the embedding into `M × M`. -/ @[to_additive "The additive units of a monoid are equipped with a topology, via the embedding into `M × M`."] instance : topological_space Mˣ := prod.topological_space.induced (embed_product M) @[to_additive] lemma inducing_embed_product : inducing (embed_product M) := ⟨rfl⟩ @[to_additive] lemma embedding_embed_product : embedding (embed_product M) := ⟨inducing_embed_product, embed_product_injective M⟩ @[to_additive] lemma topology_eq_inf : units.topological_space = topological_space.induced (coe : Mˣ → M) ‹_› ⊓ topological_space.induced (λ u, ↑u⁻¹ : Mˣ → M) ‹_› := by simp only [inducing_embed_product.1, prod.topological_space, induced_inf, mul_opposite.topological_space, induced_compose]; refl /-- An auxiliary lemma that can be used to prove that coercion `Mˣ → M` is a topological embedding. Use `units.coe_embedding₀`, `units.coe_embedding`, or `to_units_homeomorph` instead. -/ @[to_additive "An auxiliary lemma that can be used to prove that coercion `add_units M → M` is a topological embedding.
Use `add_units.coe_embedding` or `to_add_units_homeomorph` instead."] lemma embedding_coe_mk {M : Type*} [division_monoid M] [topological_space M] (h : continuous_on has_inv.inv {x : M | is_unit x}) : embedding (coe : Mˣ → M) := begin refine ⟨⟨_⟩, ext⟩, rw [topology_eq_inf, inf_eq_left, ← continuous_iff_le_induced, continuous_iff_continuous_at], intros u s hs, simp only [coe_inv, nhds_induced, filter.mem_map] at hs ⊢, exact ⟨_, mem_inf_principal.1 (h u u.is_unit hs), λ u' hu', hu' u'.is_unit⟩ end @[to_additive] lemma continuous_embed_product : continuous (embed_product M) := continuous_induced_dom @[to_additive] lemma continuous_coe : continuous (coe : Mˣ → M) := (@continuous_embed_product M _ _).fst @[to_additive] protected lemma continuous_iff {f : X → Mˣ} : continuous f ↔ continuous (coe ∘ f : X → M) ∧ continuous (λ x, ↑(f x)⁻¹ : X → M) := by simp only [inducing_embed_product.continuous_iff, embed_product_apply, (∘), continuous_prod_mk, op_homeomorph.symm.inducing.continuous_iff, op_homeomorph_symm_apply, unop_op] @[to_additive] lemma continuous_coe_inv : continuous (λ u, ↑u⁻¹ : Mˣ → M) := (units.continuous_iff.1 continuous_id).2 end units
# Load a train/test split stored as whitespace-delimited text files
# (e.g. the UCR archive layout: <dname>_TRAIN.txt / <dname>_TEST.txt),
# where the first column of each file holds the class label.
# (The train_label/test_label parameters are currently unused.)
library(data.table)  # provides fread()

import_data <- function(dname, filedir = '', train_label = TRUE, test_label = TRUE) {
  trainfile <- sprintf("%s%s/%s_TRAIN.txt", filedir, dname, dname)
  testfile  <- sprintf("%s%s/%s_TEST.txt", filedir, dname, dname)
  traindata <- as.matrix(fread(trainfile))
  testdata  <- as.matrix(fread(testfile))
  # First column is the class label; the remaining columns are the observations.
  class_train <- as.factor(traindata[, 1])
  class_test  <- as.factor(testdata[, 1])
  traindata <- traindata[, -1]
  testdata  <- testdata[, -1]
  return(list("train_data" = traindata, "train_labels" = class_train,
              "test_data" = testdata, "test_labels" = class_test))
}
# Example call (hypothetical dataset name and directory):
# d <- import_data("GunPoint", filedir = "UCR/")
[STATEMENT] lemma hyperb_ineq [mono_intros]: "Gromov_product_at (e::'a) x z \<ge> min (Gromov_product_at e x y) (Gromov_product_at e y z) - deltaG(TYPE('a))" [PROOF STATE] proof (prove) goal (1 subgoal): 1. min (Gromov_product_at e x y) (Gromov_product_at e y z) - deltaG TYPE('a) \<le> Gromov_product_at e x z [PROOF STEP] using hyperb_quad_ineq[of e y x z] [PROOF STATE] proof (prove) using this: dist e y + dist x z \<le> max (dist e x + dist y z) (dist e z + dist y x) + 2 * deltaG TYPE('a) goal (1 subgoal): 1. min (Gromov_product_at e x y) (Gromov_product_at e y z) - deltaG TYPE('a) \<le> Gromov_product_at e x z [PROOF STEP] unfolding Gromov_product_at_def min_def max_def [PROOF STATE] proof (prove) using this: dist e y + dist x z \<le> (if dist e x + dist y z \<le> dist e z + dist y x then dist e z + dist y x else dist e x + dist y z) + 2 * deltaG TYPE('a) goal (1 subgoal): 1. (if (dist e x + dist e y - dist x y) / 2 \<le> (dist e y + dist e z - dist y z) / 2 then (dist e x + dist e y - dist x y) / 2 else (dist e y + dist e z - dist y z) / 2) - deltaG TYPE('a) \<le> (dist e x + dist e z - dist x z) / 2 [PROOF STEP] by (auto simp add: divide_simps algebra_simps metric_space_class.dist_commute)
\newcommand{\RN}[1]{% \textup{\uppercase\expandafter{\romannumeral#1}}% } \Lecture{Jayalal Sarma}{Oct 19, 2020}{17}{Generating Functions (continued)}{Lalithaditya}{$\alpha$}{JS} \section{Quick Recap of Previous Two Lectures} \begin{itemize} \item We represented sequences of non-negative integers as formal power series. \item We saw operations on power series and their corresponding combinatorial meanings. \item We used generating functions in the following examples: \begin{enumerate} \item Distributing $n$ votes to $k$ candidates such that every candidate gets at least one vote. \item Counting the number of non-negative solutions of the equation $a+b+c=n$. \item Deriving the expression for the Catalan numbers. \end{enumerate} \end{itemize} \section{Recurrence Relations} Three notions about recurrence relations are discussed in this lecture: linear recurrence relations, the degree of a recurrence relation, and homogeneous recurrence relations.\\ \ Before getting into examples, let us discuss these notions. \begin{itemize} \item \textbf{Linear Recurrence Relation:}\\ \\A linear recurrence relation is an equation that defines the $n\textsuperscript{th}$ term of a sequence as a linear combination of the $k$ previous terms: $$a_n = c_1.a_{n-1}~+~c_2.a_{n-2}~+~c_3.a_{n-3}~+~\dots~+~ c_k.a_{n-k}$$ $$ =\sum_{i=1}^{k} c_i.a_{n-i}$$ where the $c_i$'s are constants independent of $n$,\\ $c_1,c_2,c_3,\dots,c_k \in \mathbb{R} $ and $c_k \neq 0$. \item \textbf{Degree of a Recurrence Relation:}\\ \\A recurrence relation has degree $d$ if $a_n$ depends on terms going back at most $d$ steps, i.e. $a_{n-d}$ is the earliest term appearing on the right-hand side. \item \textbf{Homogeneous Recurrence Relation:}\\ \\ A recurrence relation in which every term on the right-hand side of the equation has the same degree. \item \textbf{Some examples of recurrence relations:} \begin{enumerate} \item $a_n = 5.a_{n-1}~+~a_{n-2}.a_{n-3}$ : neither linear nor homogeneous. \item $a_n = a_{n-1}.a_{n-2}~+~a_{n-3}.a_{n-4}$ : not linear but homogeneous, of degree 4. \item $a_n = 5.a_{n-2}~+~10^n$ : linear but not homogeneous, of degree 2. \end{enumerate} \end{itemize} \section{Using Generating Functions to Solve Recurrence Relations} In this section we will look at how to solve recurrence relations using generating functions. \\ \\ \textbf{Example 1:}\\ In the previous lectures we calculated the number of binary strings of length $n$ that have an even number of $0$'s; it turned out to be $2^{n-1}$.\\ Similarly, calculate the number of decimal strings of length $n$ that contain an even number of $0$'s.\\ \\ \textbf{Solution:}\\ Let $a_n$ be the number of decimal strings of length $n$ that satisfy the given condition.\\ By convention, for $n=0$ the number of such strings is 1.\\ For $n=1$ the number of such strings is 9.\\ $\implies a_0=1$ and $a_1=9$.\\ \\ \textbf{Forming the recurrence relation:} Take an $n$-length decimal string and let $d_n$ be its last digit. There are two cases: $d_n=0$ and $d_n \neq 0$.\\ \underline{\textbf{Case-\RN{1}:}} \\If the last digit is 0, then the remaining string must have an odd number of zeroes, so the number of such strings is $(10^{n-1} - a_{n-1})$.\\ \\ \underline{\textbf{Case-\RN{2}:}}\\ If the last digit is not zero, then the remaining string must have an even number of zeroes, and there are $a_{n-1}$ such strings of length $n-1$. The last digit can vary over $1,2,3, \dots,9$.
Therefore, the number of such strings is $(9a_{n-1})$.\\ \\ The resulting recurrence relation for $a_n$ is\\ $$\implies a_n~=~(10^{n-1}-a_{n-1}+9a_{n-1})$$ $$\implies a_n~=~(10^{n-1}+8a_{n-1})$$ The generating function for this problem is \begin{equation} G(x) = \sum_{n \geq 0} a_n.x^n \end{equation} $$ G(x)~=~a_0~+~\sum_{n \geq 1}a_n.x^n $$ $$ G(x)~=~a_0~+~\sum_{n \geq 1}(10^{n-1}+8a_{n-1}).x^n$$ $$ G(x)~=~1~+~\sum_{n \geq 1} 8.a_{n-1}.x^n~+~\sum_{n \geq 1}10^{n-1}.x^n $$ $$ G(x)~=~1~+~8.x ~\sum_{n \geq 1} a_{n-1}.x^{n-1}~+~x~ \sum_{n \geq 1}10^{n-1}.x^{n-1} $$ Let $n-1~=~h$. Then $$ G(x)~=~1~+~8.x ~\sum_{h \geq 0} a_{h}.x^{h}~+~x~ \sum_{h \geq 0}10^{h}.x^{h} $$ After renaming the variable, we have $$ G(x)~=~1~+~8.x ~\sum_{n \geq 0} a_{n}.x^{n}~+~x~ \sum_{n \geq 0}10^{n}.x^{n} $$ From equation (17.72) we can see that $G(x) = \sum_{n \geq 0} a_n.x^n$, so $$ G(x)~=~1~+~8.x.G(x)~+~x~ \sum_{n \geq 0}10^{n}.x^{n} $$ $$ G(x)~=~1~+~8.x.G(x)~+~x~ \sum_{n \geq 0}{(10.x)}^{n} $$ From the sum of an infinite geometric series we have $$\sum_{n \geq 0}{(10.x)}^{n} = \frac{1}{1-10.x}$$ $$ G(x)~=~1~+~8.x.G(x)~+~x.\left(\frac{1}{1-10.x}\right) $$ After rearranging the terms, we finally obtain $G(x)$ as $$G(x)~=~\frac{\left(1-9.x\right)}{(1-8.x).(1-10.x)}$$ Using partial fractions, let us split the above into two fractions: $$\frac{\left(1-9.x\right)}{(1-8.x).(1-10.x)} ~=~ \frac{A}{(1-8.x)}~+~\frac{B}{(1-10.x)}$$ $$=~\frac{A+B-(10.A.x)-(8.B.x)}{(1-8.x).(1-10.x)}$$ $$\implies A+B=1~and~10.A+8.B=9$$ Solving for $A$ and $B$, we get $A=\frac{1}{2}$ and $B=\frac{1}{2}$: \begin{equation} G(x)~=~\frac{\frac{1}{2}}{(1-8.x)}~+~\frac{\frac{1}{2}}{(1-10.x)} \end{equation} Our aim was to find the number $a_n$, which is nothing but the coefficient of $x^n$ in $G(x)$: $$\implies Coefficient~of~x^n~in~G(x)~=~ \left(\frac{1}{2}.8^n\right) + \left(\frac{1}{2}.10^n\right)$$ $$\hspace{1ex}=~\frac{8^n+10^n}{2}$$ $\therefore$ The number of decimal strings with an even number of zeroes is $\left(\frac{8^n+10^n}{2}\right)$. \\ \\ \textbf{Example 2:}\\ In this example we do not use a recurrence relation; instead we prove a combinatorial identity using generating functions. \\ \\ For $n \geq k$, prove that \begin{equation} \sum_{m=k}^{n}{m \choose k}~=~{n+1 \choose k+1} \end{equation} \textbf{Solution:}\\ For a fixed $k$, let $$a_n~=~\sum_{m=k}^{n}{m \choose k} $$ The generating function for this problem is \begin{equation} S(x)~=~ \sum_{n \geq k}a_n.x^n \end{equation} Observe that the summation starts from $n = k$; it could equally start from $n = 0$, since for $n < k$ the sum defining $a_n$ is empty and $a_n = 0$. \\ \\ Let us introduce an indicator function $\sigma$, $$ \sigma = \left\{ \begin{array}{ll} 1 & if~k\leq m\leq n \\ 0 & otherwise \\ \end{array} \right. $$ From equation (17.74), $$ S(x)~=~ \sum_{n \geq k}a_n.x^n$$ $$ S(x)~=~ \sum_{n \geq k}~\sum_{m=k}^{n}{m \choose k}.x^n$$ $$ S(x)~=~ \sum_{n \geq k}\left(~\sum_{m \geq k}{m \choose k}.x^n\left(\sigma \right) \right)$$ After rearranging the summations, $$ S(x)~=~ \left(~\sum_{m \geq k}\sum_{n \geq k}{m \choose k}.x^n\left(\sigma \right) \right)$$ Both indices already satisfy $m, n \geq k$, and by definition $\sigma = 1$ exactly when $m \leq n$, so for each fixed $m$ the inner sum runs over $n \geq m$: $$\implies~S(x)~=~ \left(~\sum_{m \geq k}\sum_{n \geq m}{m \choose k}.x^n \right)$$ Since ${m \choose k}$ is independent of $n$, $$S(x)~=~ \left(~\sum_{m \geq k}{m \choose k}. \sum_{n \geq m}x^n \right)$$
$$S(x)~=~\sum_{m \geq k}{m \choose k}.\left(x^m \sum_{n \geq m}x^{n-m} \right) $$ The second summation is the sum of an infinite geometric series: $$S(x)~=~\sum_{m \geq k}{m \choose k}.\left( \frac{x^m}{1-x} \right)$$ \begin{equation} S(x)~=~\frac{x^k}{1-x} \left(\sum_{m \geq k} {m \choose k}.x^{m-k} \right) \end{equation} We know that $$\frac{1}{1-x} = 1+x+x^2+\dots$$ and also $${\left(\frac{1}{1-x}\right)}^{k+1} = {(1+x+x^2+\dots)^{k+1}} $$ $${(1+x+x^2+\dots)^{k+1}} = (1+x+x^2+\dots).(1+x+x^2+\dots).\dots $$ Let $d_1,d_2,d_3,\dots,d_{k+1}$ be the degrees of the $x$ terms picked from the factors of the product.\\ Our aim is to get the coefficient of $x^{m-k}$ in the above product; this is equivalent to the question \\ \\ \emph{In how many ways can we pick $d_1,d_2,\dots,d_{k+1}$ such that} $$\sum_{i=1}^{k+1}d_i = (m-k)$$\\ This is an instance of multichoosing. As discussed in the previous lectures, the number of solutions to this question is $${{k+1+m-k-1} \choose {m-k}} = {m \choose m-k}$$ and also $${m \choose m-k} = {m \choose k} $$ So in equation (17.76) we can replace $\sum_{m \geq k}{m \choose k}.x^{m-k}$ with ${\left(\frac{1}{1-x}\right)}^{k+1}$: $$S(x)~=~\frac{x^k}{1-x}{\left(\frac{1}{1-x} \right)}^{k+1} $$ \begin{equation} S(x)~=~\frac{x^k}{{\left(1-x \right)}^{k+2}} \end{equation} Our aim is to find the number $a_n$, which is nothing but the coefficient of $x^n$ in the generating function $S(x)$.\\ \\ We observe that $$Coefficient~of~x^n~in~\frac{x^k}{{\left(1-x \right)}^{k+2}}~~=~~Coefficient~of~x^{n-k}~in~{\left(\frac{1}{1-x}\right)}^{k+2}$$ As shown earlier in this example, the coefficient of $x^{n-k}$ on the right-hand side counts the ways in which the degrees of the $x$ terms picked from the $(k+2)$ factors of $\frac{1}{1-x}$ can add up to $n-k$. \\ As proved in earlier lectures, this count equals $$a_n = {{k+2+n-k-1} \choose {n-k}}$$ $${{k+2+n-k-1} \choose {n-k}} = {n+1 \choose n-k} $$ $${n+1 \choose n-k} = {n+1 \choose k+1}$$ $$\boxed{\therefore a_n = {n+1 \choose k+1}}$$ \Lecture{Jayalal Sarma}{Oct 20, 2020}{18}{Two Variable Generating Functions}{Lalithaditya and Pragnya}{$\alpha$}{JS} So far we have discussed generating functions in one variable. In this lecture we discuss generating functions in two variables; such generating functions are known as \textbf{Bivariate Generating Functions}.\\ \\ The general form of a bivariate generating function is $$G(x,y) = \sum_{n,k \geq 0} a_{n,k}.x^n.y^k$$ These generating functions are useful when dealing with combinatorial problems involving two indices.\\ Let us try some examples to get an idea of how to use generating functions in two variables. \section{Examples based on Bivariate Generating Functions} \textbf{Example 1:}\\ Prove the binomial theorem in a single variable using two-variable generating functions.\\ Binomial theorem in a single variable: $${\left(1+x\right)}^{n} = \sum_{k=0}^{n}{n \choose k}.x^k$$ \textbf{Solution:}\\ We know that the number of ways of choosing a $k$-sized subset from an $n$-sized set is ${n \choose k}$; denote this number by $b_{n,k}$. \\ For $n=0$ we have $b_{0,k}~=~0$ for every $k \geq 1$, and for $k=0$ we have $b_{n,0}~=~1$ for every $n$.\\ \\ As discussed in previous lectures, a $k$-sized subset of an $n$-sized set either contains a fixed element (choose the remaining $k-1$ elements from the other $n-1$) or does not contain it (choose all $k$ elements from the other $n-1$).
\\ \textbf{Recurrence relation:} $$b_{n,k} = b_{n-1,k-1} + b_{n-1,k} $$ The generating function for this problem is \begin{equation} B(x,y) = \sum_{n,k \geq 0}b_{n,k}.\left(x^n.y^k \right) \end{equation} $$B(x,y) = \sum_{n \geq 0,k=0}b_{n,0}x^n + \sum_{n=0,k \geq 1}b_{0,k}.y^k + \sum_{n,k \geq 1}b_{n,k}.\left(x^n.y^k \right) $$ Since $b_{0,k} = 0$ for $k \geq 1$ and $b_{n,0}=1$, $$B(x,y) = \sum_{n \geq 0,k=0}1. \left( x^n \right) + \sum_{n=0,k \geq 1}0. \left( y^k \right) + \sum_{n,k \geq 1}b_{n,k}.\left(x^n.y^k \right) $$ $$B(x,y) = \sum_{n \geq 0,k=0} \left( x^n \right) + \sum_{n,k \geq 1}b_{n,k}.\left(x^n.y^k \right)$$ Using the recurrence relation, $$B(x,y) = \sum_{n \geq 0,k=0} \left( x^n \right) + \sum_{n,k \geq 1}b_{n-1,k-1}.\left(x^n.y^k \right) + \sum_{n,k \geq 1}b_{n-1,k}.\left(x^n.y^k \right)$$ We know that $$\sum_{n \geq 0}x^n = \frac{1}{1-x} $$ so $$B(x,y) = \frac{1}{1-x} + \sum_{n,k \geq 1}b_{n-1,k-1}.\left(x^n.y^k \right) + \sum_{n,k \geq 1}b_{n-1,k}.\left(x^n.y^k \right)$$ $$B(x,y) = \frac{1}{1-x} + \left(x.y\right)\sum_{n,k \geq 1}b_{n-1,k-1}.\left(x^{n-1}.y^{k-1} \right) + x.\sum_{n,k \geq 1}b_{n-1,k}.\left(x^{n-1}.y^k \right)$$ Let $(n-1)=h~and~(k-1)=p$: $$B(x,y) = \frac{1}{1-x} + \left(x.y\right)\sum_{h,p \geq 0}b_{h,p}.\left(x^{h}.y^{p} \right) + x.\sum_{n,k \geq 1}b_{n-1,k}.\left(x^{n-1}.y^k \right)$$ After renaming the variables, $$B(x,y) = \frac{1}{1-x} + \left(x.y\right)\sum_{n,k \geq 0}b_{n,k}.\left(x^{n}.y^{k} \right) + x.\sum_{n,k \geq 1}b_{n-1,k}.\left(x^{n-1}.y^k \right)$$ $$B(x,y) = \frac{1}{1-x} + \left(x.y\right)\sum_{n,k \geq 0}b_{n,k}.\left(x^{n}.y^{k} \right) + x.\left(\sum_{n,k \geq 0}b_{n,k}.\left(x^{n}.y^k \right) - \sum_{n \geq 0,k=0}x^n \right)$$ From equation (18.78), $$B(x,y) = \frac{1}{1-x} + \left(x.y\right).B(x,y) + x.\left(B(x,y) - \frac{1}{1-x} \right)$$ After rearranging the terms, $$B(x,y) = 1+x.\left(y+1\right).B(x,y)$$ $$B(x,y) = \frac{1}{1-x.\left(y+1\right)}$$ $\therefore$ The generating function $B(x,y)$ is \begin{equation} \boxed{B(x,y) = \frac{1}{1-x.\left(y+1\right)}} \end{equation} The coefficient of $x^n$ on the left-hand side of the above equation equals the coefficient of $x^n$ on the right-hand side. The coefficient of $x^n$ on the left-hand side is $\sum_{k \geq 0}b_{n,k}.y^k$ (by equation (18.78)). \\ \textbf{Note:} The coefficient of $x^n$ in $\left(\frac{1}{1-ax}\right)$ is $a^n$.\\ $\implies$ The coefficient of $x^n$ on the right-hand side is ${\left(1+y \right)}^{n}$.\\ Hence $$\sum_{k \geq 0}b_{n,k}.y^k = {\left(1+y \right)}^{n} $$ After renaming the variable, $${\left(1+x \right)}^{n} = \sum_{k \geq 0}b_{n,k}.x^k $$ At the beginning of the proof we observed that $b_{n,k} = {n \choose k}$, so \begin{equation} \boxed{\therefore {\left(1+x \right)}^{n} = \sum_{k \geq 0}{n \choose k}.x^k} \end{equation} which completes the proof. \\ \\ \textbf{Example 2 (Delannoy Numbers):}\\ Consider an $n \times m$ grid. The Delannoy number $d_{n,m}$ counts the number of lattice paths from the bottom-left corner $(0,0)$ to the point $(n,m)$, where each step of a path is one of three moves: an upward edge (U), a rightward edge (R), or an upward-forward diagonal (F). Find the Delannoy number.
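(As a small sanity check of the setup: $d_{1,1} = 3$, since the only paths from $(0,0)$ to $(1,1)$ are R then U, U then R, and the single diagonal F. This case is handy for testing the recurrence derived in the solution below.)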
\\ \\ \textbf{Solution:}\\ Let $d_{n,m}$ be the number of Delannoy paths from $(0,0)$ to $(n,m)$ using the above moves only. \\ \\ For example, when $n=3$ and $m=3$ the number of Delannoy paths is 63. \begin{figure}[H] \centerline{\includegraphics[width=0.5\textwidth,height=0.5\textwidth]{images/DelannoyNumbers.png}} \end{figure} \textbf{\underline{Aim}:} To find $d_{n,m}$. \\ \textbf{Recurrence Relation:}\\ \\ Let us find a recurrence relation for $d_{n,m}$. \\ The point $(n,m)$ can be reached in three ways: from $(n-1,m)$, from $(n-1,m-1)$ and from $(n,m-1)$. \\ Hence the recurrence relation for $d_{n,m}$ is \begin{equation} \boxed{d_{n,m} = d_{n,m-1} + d_{n-1,m} + d_{n-1,m-1}} \end{equation} \textbf{Generating Function:}\\ \\ The generating function for this problem is \begin{equation} \boxed{D(x,y) = \sum_{n,m \geq 0}d_{n,m}.x^n.y^m} \end{equation} We observe that $d_{n,0}=d_{0,m}=1$. $$D(x,y) = \sum_{n \geq 0,m=0}d_{n,0}.x^n + \sum_{n=0,m \geq 1}d_{0,m}.y^m + \sum_{n \geq 1,m \geq 1}d_{n,m}.x^n.y^m$$ $$D(x,y) = \sum_{n \geq 0,m=0}1.x^n + \sum_{n=0,m \geq 1}1.y^m + \sum_{n \geq 1,m \geq 1}d_{n,m}.x^n.y^m$$ We know that $$\sum_{n \geq 0}x^n = \frac{1}{1-x}$$ so $$D(x,y) = \frac{1}{1-x} + \sum_{n=0,m \geq 1}y^m + \sum_{n \geq 1,m \geq 1}d_{n,m}.x^n.y^m$$ $$D(x,y) = \frac{1}{1-x} + y.\sum_{n=0,m \geq 1}y^{m-1} + \sum_{n \geq 1,m \geq 1}d_{n,m}.x^n.y^m$$ Let $(m-1)=h$: $$D(x,y) = \frac{1}{1-x} + y.\sum_{n=0,h \geq 0}y^h + \sum_{n \geq 1,m \geq 1}d_{n,m}.x^n.y^m$$ After renaming the variables, $$D(x,y) = \frac{1}{1-x} + y.\sum_{n=0,m \geq 0}y^m + \sum_{n \geq 1,m \geq 1}d_{n,m}.x^n.y^m$$ $$D(x,y) = \frac{1}{1-x} + y.\left(\frac{1}{1-y}\right) + \sum_{n \geq 1,m \geq 1}d_{n,m}.x^n.y^m$$ Using the recurrence relation from equation (18.81), $$D(x,y) = \frac{1}{1-x} + \frac{y}{1-y} + \sum_{n \geq 1,m \geq 1}(d_{n,m-1} + d_{n-1,m} + d_{n-1,m-1}).x^n.y^m$$ $$D(x,y) = \frac{1}{1-x} + \frac{y}{1-y} + \sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^n.y^m + \sum_{n \geq 1,m \geq 1}d_{n-1,m}.x^n.y^m + \sum_{n \geq 1,m \geq 1}d_{n-1,m-1}.x^n.y^m$$ $$D(x,y) = \frac{1}{1-x} + \frac{y}{1-y} + x.y.\sum_{n \geq 1,m \geq 1}d_{n-1,m-1}.x^{n-1}.y^{m-1} + \sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^n.y^m + \sum_{n \geq 1,m \geq 1}d_{n-1,m}.x^n.y^m$$ Let $(n-1)=h~and~(m-1)=p$: $$D(x,y) = \frac{1}{1-x} + \frac{y}{1-y} + x.y.\sum_{h \geq 0,p \geq 0}d_{h,p}.x^{h}.y^{p} + \sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^n.y^m + \sum_{n \geq 1,m \geq 1}d_{n-1,m}.x^n.y^m$$ After renaming the variables, $$D(x,y) = \frac{1}{1-x} + \frac{y}{1-y} + x.y.\sum_{n \geq 0,m \geq 0}d_{n,m}.x^{n}.y^{m} + \sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^{n}.y^{m} + \sum_{n \geq 1,m \geq 1}d_{n-1,m}.x^{n}.y^{m}$$ From equation (18.82), \begin{equation} D(x,y) = \frac{1}{1-x} + \frac{y}{1-y} + x.y.D(x,y) + \sum_{n \geq 1,m \geq 1}d_{n-1,m}.x^{n}.y^{m} +\sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^{n}.y^{m} \end{equation} Consider the fourth term in the above equation: $$\sum_{n \geq 1,m \geq 1}d_{n-1,m}.x^{n}.y^{m} = x.\sum_{n \geq 1,m \geq 1}d_{n-1,m}.x^{n-1}.y^{m}$$ Let $(n-1)=h$: $$x.\sum_{n \geq 1,m \geq 1}d_{n-1,m}.x^{n-1}.y^{m} = x.\sum_{h \geq 0,m \geq 1}d_{h,m}.x^h.y^m$$ After renaming the variables, $$x.\sum_{h \geq 0,m \geq 1}d_{h,m}.x^h.y^m = x.\sum_{n \geq 0,m \geq 1}d_{n,m}.x^n.y^m$$ $$x.\sum_{n \geq 0,m \geq 1}d_{n,m}.x^n.y^m = x.\left(\sum_{n \geq 0,m \geq 0}d_{n,m}.x^{n}.y^{m} - \sum_{n \geq 0,m = 0}d_{n,0}.x^{n} \right)$$ $$x.\left(\sum_{n \geq 0,m \geq 0}d_{n,m}.x^{n}.y^{m} - \sum_{n \geq 0,m = 0}d_{n,0}.x^{n} \right) = x.\left(D(x,y)-\frac{1}{1-x} \right) $$ \\ \\ Substituting this value in equation
(18.83), we get $$D(x,y) = \frac{1}{1-x} + \frac{y}{1-y} + x.y.D(x,y) + x.\left(D(x,y)-\frac{1}{1-x} \right) +\sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^{n}.y^{m}$$ After rearranging the terms, \begin{equation} D(x,y) = 1 + \frac{y}{1-y} + x.y.D(x,y) + x.D(x,y) + \sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^{n}.y^{m} \end{equation} Consider the last term of the above equation: $$\sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^{n}.y^{m} = y.\sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^{n}.y^{m-1} $$ Let $p=(m-1)$; then $$y.\sum_{n \geq 1,m \geq 1}d_{n,m-1}.x^{n}.y^{m-1} = y.\sum_{n \geq 1,p \geq 0}d_{n,p}.x^{n}.y^{p}$$ After renaming the variables, $$y.\sum_{n \geq 1,m \geq 0}d_{n,m}.x^{n}.y^{m} = y.\left(\sum_{n \geq 0,m \geq 0}d_{n,m}.x^n.y^m - \sum_{n=0,m \geq 0}d_{0,m}.y^{m} \right) $$ $$y.\left(\sum_{n \geq 0,m \geq 0}d_{n,m}.x^n.y^m - \sum_{n=0,m \geq 0}d_{0,m}.y^{m} \right) = y.\left(D(x,y) - \frac{1}{1-y} \right) $$ Substituting this value in equation (18.84), $$D(x,y) = 1 + \frac{y}{1-y} + x.y.D(x,y) + x.D(x,y) + y.\left(D(x,y) - \frac{1}{1-y} \right)$$ $$D(x,y) = 1 + x.y.D(x,y) + x.D(x,y) + y.D(x,y) $$ After rearranging the terms, $$D(x,y) = \frac{1}{1-x-y-xy} $$ $$D(x,y) = \left(\frac{1}{1-y}\right).\left(\frac{1}{1-\left(\frac{1+y}{1-y}\right).x}\right) $$ We know that $$\frac{1}{1-a.x} = \sum_{n \geq 0}a^n.x^n $$ so $$D(x,y) = \left(\frac{1}{1-y}\right).\left(\sum_{n \geq 0}{\left(\frac{1+y}{1-y} \right)}^n.x^n \right) $$ The generating function is \begin{equation} \boxed{D(x,y) = \left(\sum_{n \geq 0}{\frac{{(1+y)}^n}{{(1-y)}^{n+1}}}.x^n \right)} \end{equation} The required number $d_{n,m}$ is\\ $$d_{n,m}~=~Coefficient~of~x^n.y^m~in~D(x,y)$$ $$d_{n,m}~=~Coefficient~of~y^m~in~{\frac{{(1+y)}^n}{{(1-y)}^{n+1}}}$$ $$d_{n,m}~=~Coefficient~of~y^m~in~ {(1+y)}^n.\left(\frac{1}{1-y}.\frac{1}{1-y}. \dots (n+1)~times\right)$$ We know that $$\frac{1}{1-y} = 1+y+y^2+ \dots$$ $$d_{n,m}~=~Coefficient~of~y^m~in~{(1+y)}^n.\left((1+y+y^2+\dots).(1+y+y^2+\dots). \dots (n+1)~times\right)$$ Pick a number $k \geq 0$ such that the factor $y^k$ comes from ${(1+y)^n}$ and the remaining factor $y^{m-k}$ comes from the $(n+1)$-term product.\\ The coefficient of $y^k$ in ${(1+y)^n}$ is ${n \choose k}$.\\ Let $c_1,c_2,\dots,c_{n+1}$ be the degrees of the $y$ terms picked from the $(n+1)$-term product.\\ Finding the coefficient of $y^{m-k}$ in the $(n+1)$-term product is equivalent to counting the number of ways of picking the $c_i$'s such that $c_1+c_2+\dots+c_{n+1}=m-k$.\\ The number of such pickings is ${{n+1+m-k-1} \choose {m-k}}$ = ${{n+m-k} \choose {m-k}}$ = ${{n+m-k} \choose {n}}$.\\ Therefore the required number $d_{n,m}$ is $$d_{n,m} = \sum_{k \geq 0}{n \choose k}.{{n+m-k} \choose n}$$ Hence \begin{equation} \boxed{Delannoy~number~d_{n,m} = \sum_{k \geq 0}{n \choose k}.{{n+m-k} \choose n}} \end{equation} \Lecture{Jayalal Sarma}{Oct 21, 2020}{19}{Generating Functions (continued)}{Pragnya}{$\alpha$}{JS} \section{Introduction} In this section we'll see some examples of ordinary generating functions and get introduced to exponential generating functions. \subsection{Example 3 : } To show that two combinatorial quantities are equal it suffices to show that they have the same generating function. Consider the following: $$ B_n(m) = \{(x_1, x_2, \dots x_n) ~|~ \forall i ~ x_i \in \Z , \sum |x_i| \leq m \}$$ Let $b_{n,m} = |B_n(m)|$. Some properties of $b_{n,m}$ : \begin{enumerate} \item $b_{n, m} = \sum_{k=0}^n {n \choose k}{m \choose k} 2^k$. \item $b_{n, m} = b_{m, n}$.
This can also be proved using a bijection. \item $b_{n, m} = d_{m, n}$. \end{enumerate} We'll prove property $3$ by showing that both sides have the same generating function. \begin{align*} B(x,y) &= \sum_{n,m \geq 0} b_{n, m} x^n y^m\\ &= \sum_{n,m \geq 0} (\sum_{k=0}^n {n \choose k}{m \choose k} 2^k) x^n y^m \\ &= \sum_{n,m,k \geq 0}{n \choose k}{m \choose k} 2^k x^n y^m \\ &= \sum_{k \geq 0} 2^k \sum_{n,m \geq 0} {n \choose k}{m \choose k} x^n y^m \\ &= \sum_{k \geq 0} 2^k (\sum_{n \geq 0} {n \choose k} x^n)(\sum_{m \geq 0} {m \choose k} y^m) \\ B(x, y)&= \sum_{k \geq 0} 2^k (x^k \sum_{n \geq 0} {n \choose k} x^{n-k})(y^k \sum_{m \geq 0} {m \choose k} y^{m-k}) \end{align*} Consider $\frac{1}{(1-x)^{k+1}}$ : $$\frac{1}{(1-x)^{k+1}} = \frac{1}{(1-x)}.\frac{1}{(1-x)}. \dots \frac{1}{(1-x)} ~(k+1 ~times)$$ The coefficient of $x^{n-k}$ in $\frac{1}{(1-x)^{k+1}}$ equals the number of solutions of $a_1 + a_2 + \dots + a_{k+1} = n-k$, which is $ = {(n-k) + (k+1) -1 \choose n-k} = {n \choose k}$. Hence $$ \sum_{n \geq 0} {n \choose k} x^{n-k} = \frac{1}{(1-x)^{k+1}} $$ Similarly $$ \sum_{m \geq 0} {m \choose k} y^{m-k} = \frac{1}{(1-y)^{k+1}} $$ Substituting these in the above derived $B(x, y)$ - \begin{align*} B(x, y) &= \sum_{k \geq 0} 2^k x^k y^k \frac{1}{(1-x)^{k+1}} \frac{1}{(1-y)^{k+1}} \\ &= \sum_{k \geq 0} (2xy)^k \frac{1}{(1-x)^{k+1}} \frac{1}{(1-y)^{k+1}} \\ &= \frac{1}{(1-x)(1-y)} \sum_{k \geq 0} \frac{(2xy)^k}{(1-x)^k(1-y)^k} \\ &= \frac{1}{(1-x)(1-y)} \sum_{k \geq 0} (\frac{(2xy)}{(1-x)(1-y)}) ^k \\ &= \frac{1}{(1-x)(1-y)} \frac{1}{1- \frac{2xy}{(1-x)(1-y)}} \\ &= \frac{1}{(1-x)(1-y) - 2xy} \\ B(x, y) &= \frac{1}{1 - x - y - xy} = D(x, y) \end{align*} Since $b_{n, m}$ and $d_{n, m}$ have the same generating function, $b_{n, m} = d_{n, m}$. Hence $b_{n, m}$ also satisfies the recurrence relation of $d_{n, m}$: $$ b_{n, m} = b_{n-1, m} + b_{n, m-1} + b_{n-1, m-1}$$ \subsection{Example 4 : Stirling numbers of the second kind} As discussed in previous lectures, the number of ways to partition the set $\{1, 2, 3 \dots n \}$ into $k$ non-empty parts is called the Stirling number of the second kind, denoted $S_{n,k}$. Its recurrence relation is given by - $$S_{n, k} = S_{n-1, k-1} + k S_{n-1, k} $$ LHS : the number of ways to partition the set $\{1, 2, 3 \dots n \}$ into $k$ non-empty parts $ = S_{n, k}$ \\ RHS : \begin{enumerate} \item If element $1$ occurs in a singleton set, the number of ways to partition the remaining $n-1$ elements into $k-1$ sets is $S_{n-1, k-1}$. \item If element $1$ doesn't occur in a singleton set, then we can partition the remaining $n-1$ elements into $k$ sets and add element $1$ to one of these $k$ sets, giving $k S_{n-1, k}$ ways. \end{enumerate} We can also see that $S_{0,0} = 1 ,~ S_{n, 0} = 0 ,~ S_{0, k} = 0$ (for $n, k \geq 1$).
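As a quick numerical sanity check (ours, not part of the original notes), the recurrence and base cases for $S_{n,k}$ can be compared against brute-force counting for small $n$; the snippet below counts surjections onto $k$ labelled blocks and divides by the $k!$ orderings of the blocks:
\begin{verbatim}
# Check S(n,k) = S(n-1,k-1) + k*S(n-1,k) against brute-force counting.
from itertools import product
from math import factorial

def stirling_rec(n, k):
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling_rec(n - 1, k - 1) + k * stirling_rec(n - 1, k)

def stirling_brute(n, k):
    # Assign each of the n elements a block label 0..k-1, keep only
    # assignments that use every label, then divide by k! orderings.
    hits = sum(1 for f in product(range(k), repeat=n) if len(set(f)) == k)
    return hits // factorial(k)

for n in range(7):
    for k in range(n + 1):
        assert stirling_rec(n, k) == stirling_brute(n, k)
print("recurrence agrees with brute force for all n <= 6")
\end{verbatim}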
%$S_{x, y} = {\sum_{n, k \geq 0} ^ {\infty} S_{n,k} x^n y^k}$ \\ %\begin{equation} \begin{align*} S(x, y) &= {\sum_{n, k \geq 0} ^ {\infty} S_{n,k} x^n y^k} \\ &= S_{0,0}~x^0y^0 + {\sum_{n = 0, k \geq 1}^ {\infty} S_{0,k}~x^0y^k} + {\sum_{n \geq 1, k = 0}^ {\infty} S_{n,0}~x^n y^0} + {\sum_{n \geq 1, k \geq 1}^ {\infty} S_{n,k}~x^n y^k} \\ &= 1 + \sum_{n \geq 1 , k \geq 1} ^ {\infty} S_{n,k} ~ x^n y^k \\ &= 1 + \sum_{n \geq 1 , k \geq 1} S_{n-1, k-1}~ x^n y^k + \sum_{n \geq 1, k \geq 1} k S_{n-1, k}~ x^n y^k \\ &= 1 + xy \sum_{n \geq 1, k \geq 1} S_{n-1, k-1} x^{n-1} y^{k-1} + x\sum_{n \geq 1, k \geq 1} k S_{n-1, k}~ x^{n-1} y^k \\ &= 1 + xy~ S(x, y) + x \sum_{n \geq 0, k \geq 1} k S_{n, k}~ x^{n} y^k \\ &= 1 + xy~ S(x, y) + xy~\frac{\partial}{\partial y} S(x,y) \end{align*} Note : $\frac{\partial}{\partial y} S(x, y) = \sum_{n \geq 0, k \geq 1} k S_{n,k}~x^n y^{k-1}$, so $x \sum_{n \geq 0, k \geq 1} k S_{n,k}~x^n y^{k} = xy~\frac{\partial}{\partial y} S(x, y)$. Consider the $y^k$ coefficients ($k \geq 1$) on both sides : \begin{equation} \begin{split} LHS &= \sum_{n \geq 0} S_{n, k} x^n \\ RHS &= x\sum_{n \geq 0} S_{n, k-1} x^n + xk \sum_{n \geq 0} S_{n, k}x^n \end{split} \end{equation} Equating LHS and RHS : \begin{align*} \sum_{n \geq 0} S_{n, k}~ x^n &= x\sum_{n \geq 0} S_{n, k-1} ~x^n + xk \sum_{n \geq 0} S_{n, k}~x^n \\ \sum_{n \geq 0} S_{n, k} ~x^n &= \frac{x}{1-xk} \sum_{n \geq 0} S_{n, k-1}~ x^n \\ &= \frac{x}{1-xk} \frac{x}{1-x(k-1)} \sum_{n \geq 0} S_{n, k-2}~ x^n \\ &= \frac{x}{1-xk} \frac{x}{1-x(k-1)} \dots \frac{x}{1-x(k-(k-1))} \sum_{n \geq 0} S_{n, 0}~ x^n \\ &= \frac{x^k}{(1-x)(1-2x)\dots (1-kx)} \times 1 ~( Note : S_{0,0} = 1, S_{n, 0} = 0 ~for~ n \geq 1) \\ \sum_{n \geq 0} S_{n, k}~ x^n &= x^k \times (\frac{A_1}{1-x} + \frac{A_2}{1-2x} + \dots + \frac{A_k}{1-kx}) \end{align*} Solving for $A_1, A_2, \dots A_k $ we get $A_r = (-1)^{k-r} \frac{r^{k-1}}{(r-1)! (k-r)!}$. \\ $S_{n,k}$ is the coefficient of $x^n$ in the RHS, i.e., \begin{align*} S_{n,k} &= coeff~ of ~x^n ~in~ x^k \times (\frac{A_1}{1-x} + \frac{A_2}{1-2x} + \dots + \frac{A_k}{1-kx}) \\ &= coeff ~ of ~ x^{n-k} ~ in ~ \sum_{r=1}^k \frac{A_r}{1-rx} \end{align*} The coefficient of $x^{p}$ in $\frac{1}{1-rx}$ is $r^p$. Hence \begin{align*} S_{n,k} &= \sum_{r=1}^k A_r r^{n-k} \\ &= \sum_{r=1}^k (-1)^{k-r} \frac{r^{k-1}}{(r-1)! (k-r)!} ~ r^{n-k}\\ S_{n,k} &= \sum_{r=1}^k (-1)^{k-r} \frac{r^{n-1}}{(r-1)! (k-r)!} = \sum_{r=1}^k (-1)^{k-r} \frac{r^{n}}{r! (k-r)!} \end{align*} This is the closed-form expression for the Stirling numbers of the second kind. %\end{equation} %$$ = S_{0,0}~x^0y^0 + {\sum_{n = 0, k \geq 1}^ {\infty} S_{0,k}~x^0y^k} + {\sum_{n \geq 1, k = 0}^ {\infty} S_{n,0}~x^ny^0} + {\sum_{n \geq 1, k \geq 1}^ {\infty} S_{n,k}~x^ny^k} $$ \\ %$&= 1 + \sum_{\substack{n \geq 1 \\ k \geq 1}} ^ {\infty} S_{n,k} ~ x^n y^k$ \section{Exponential generating functions} In ordinary generating functions we associate a sequence ${(a_n)}_{n \geq 0}$ with $G(x) = \sum_{n \geq 0} a_n x^n$. In $G(x)$ we chose the basis $\{ 1, x, x^2, x^3 \dots\}$ for the set of all polynomials in one variable. There are many other bases for the set of polynomials, like $\{1, x, x(x-1), x(x-1)(x-2), \dots\}$; we chose $\{ 1, x, x^2, x^3 \dots\}$ because it has a combinatorial meaning. Other such meaningful systems are $\{\frac{x^n}{n!}\}_{n \in \N}$, $\{e^{-x} \frac{x^n}{n!}\}_{n \in \N}$ and $\{\frac{1}{n^x}\}_{n \in \N}$. In this lecture we'll explore exponential generating functions, which use the basis $\{\frac{x^n}{n!}\}_{n \in \N}$ . \\ So ${(a_n)}_{n \geq 0}$ is associated with $E(x) = \sum_{n \geq 0} a_n \frac{x^n}{n!}$ .
Let's see a few examples - \begin{align*} (1, 1, 1, \dots ) &\xrightarrow[generating function]{exponential} \sum_{n \geq 0} \frac{x^n}{n!} = e^x \\ &\xrightarrow[generating function]{ordinary} \sum_{n \geq 0} x^n = \frac{1}{1-x} \\ (0!, 1!, 2!, \dots) &\xrightarrow[generating function]{exponential} \sum_{n \geq 0} n! \frac{x^n}{n!} = \sum_{n \geq 0} x^n = \frac{1}{1-x} \end{align*} $\frac{1}{1-x}$ is the ordinary generating function (ogf) of $(1, 1, 1, \dots )$ and the exponential generating function (egf) of $(0!, 1!, 2!, \dots )$.\\ \\ \textbf{\Large {Operations on EGFs}} \begin{enumerate} \item{\textbf{Addition :} } It's similar to the ogf case. \begin{align*} \{a_n\}_{n \geq 0} &\xrightarrow[]{egf} E(x) \\ \{b_n\}_{n \geq 0} &\xrightarrow[]{egf} F(x) \\ \{a_n + b_n \}_{n \geq 0} &\xrightarrow[]{egf} E(x) + F(x) \\ \end{align*} \item {\textbf{Shifting :} } For ogfs, multiplying by $x$ shifts the sequence by prepending a zero, as seen in earlier lectures. For egfs the corresponding shift operation is differentiation, which shifts the sequence one step towards lower indices: \begin{gather*} \{a_0, a_1, a_2 \dots \} \xrightarrow[]{egf} ~~ E(x) \\ \{a_1, a_2, a_3 \dots \} \xrightarrow[]{egf} \frac{d}{dx} E(x) \\ \frac{d}{dx} E(x) = \sum_{n \geq 1} a_n~ \frac{n. x^{n-1}}{n!} = \sum_{n \geq 1} a_n~ \frac{x^{n-1}}{(n-1)!} = \sum_{n \geq 0} a_{n+1}~ \frac{x^{n}}{n!} \end{gather*} (Note that, unlike for ogfs, multiplying an egf by $x$ does not simply prepend a zero: $xE(x) = \sum_{n \geq 1} n\,a_{n-1} \frac{x^n}{n!}$ is the egf of $\{n\,a_{n-1}\}_{n \geq 0}$.) \item {\textbf{Multiplication :} } EGFs are used when the sequence counts labelled structures such as permutations, derangements and partitions. Let $(a_n)_{n \geq 0}$, $(b_n)_{n \geq 0}$ count arrangements of type $A$ and type $B$ respectively using $n$ labelled objects. Suppose we want to count type $C$ arrangements, obtained by a unique split of the $n$ objects into two sets, arranging the first set according to type $A$ and the second set according to type $B$. \\ The number of arrangements of type $C$ of size $n$ is $c_n = \sum_{k=0}^n {n \choose k} a_k b_{n-k}$.\\ Now let's see how the multiplication of $A(x)$ (egf of $A$) and $B(x)$ (egf of $B$) is useful: \begin{align*} A(x).B(x) &= (\sum_{n=0}^{\infty} a_n \frac{x^n}{n!})(\sum_{n=0}^{\infty} b_n \frac{x^n}{n!}) \\ &= \sum_{n=0}^{\infty}(\sum_{k=0}^{n} \frac{a_k}{k!}.\frac{b_{n-k}}{(n-k)!}) x^n \\ &= \sum_{n=0}^{\infty}(\sum_{k=0}^{n} \frac{n!}{k!(n-k)!} a_k b_{n-k}) \frac{x^n}{n!} \\ &= \sum_{n=0}^{\infty}(\sum_{k=0}^{n} {n \choose k} a_k b_{n-k}) \frac{x^n}{n!} \\ &= \sum_{n=0}^{\infty} c_n \frac{x^n}{n!} \\ A(x).B(x) &= C(x) \end{align*} Now let's see a few examples of egfs. \subsection{Derangements } Recall that we've discussed derangements under PIE and under recurrence relations. Now let's derive the formula using egfs. Let $D_n$ represent the set of derangements of $n$ objects and let $d_n = |D_n|$. We can see that $d_0 = 1 ,~ d_1 = 0,~ d_2 = 1$.
Recall the recurrence relation : $$ d_{n+2} = (n+1)(d_{n+1} + d_n)$$ \begin{align*} D(x) &= \sum_{n=0}^{\infty} d_n \frac{x^n}{n!} \\ D'(x) &= \sum_{n=0}^{\infty} d_{n+1} \frac{x^n}{n!} ~(by ~the~shifting ~operation)\\ &= \sum_{n=1}^{\infty} n(d_n + d_{n-1}) \frac{x^n}{n!} ~(the~n=0~term~vanishes~since~d_1 = 0)\\ &= \sum_{n=1}^{\infty} nd_n \frac{x^n}{n!} + \sum_{n=1}^{\infty} nd_{n-1} \frac{x^n}{n!} \\ &= x \sum_{n=1}^{\infty} d_n \frac{x^{n-1}}{(n-1)!} + x \sum_{n=1}^{\infty} d_{n-1} \frac{x^{n-1}}{(n-1)!} \\ &= x \sum_{n=0}^{\infty} d_{n+1} \frac{x^n}{n!} + x \sum_{n=0}^{\infty} d_n \frac{x^n}{n!} \\ D'(x) &= xD'(x) + xD(x) \\ (1-x)D'(x) &= xD(x) \\ \frac{D'(x)}{D(x)} &= \frac{x}{1-x} = \frac{1}{1-x} - 1 \end{align*} Integrating both sides, $$\ln{D(x)} = -\ln{(1-x)} - x + c$$ Since $D(0) = d_0 = 1 \implies c=0$. $$\ln{D(x)} = -\ln{(1-x)} - x$$ $$D(x) = \frac{e^{-x}}{1-x} = \sum_{n=0}^{\infty} d_n \frac{x^n}{n!}$$ To get $d_n$ we need the coefficient of $\frac{x^n}{n!}$ in the LHS. $$e^{-x} \frac{1}{1-x} = (\sum_{n=0}^{\infty} (-1)^n \frac{x^n}{n!})(\sum_{n=0}^{\infty} n! \frac{x^n}{n!})$$ The coefficient of $\frac{x^n}{n!}$, by the multiplication property, is $\sum_{k=0}^{n} {n \choose k} a_k b_{n-k}$, so \begin{align*} d_n = \sum_{k=0}^{n} {n \choose k} (-1)^{n-k} k! = \sum_{k=0}^{n} (-1)^{n-k} k! {n \choose k} \end{align*} $d_n$ is the count of derangements of $n$ objects. \subsection{Bell Numbers} Let $S_{n,k}$ represent the number of ways of partitioning \{$1, 2 ,3 \dots n $\} into $k$ non-empty blocks and $B_n$ the number of ways of partitioning \{$1, 2 ,3 \dots n $\} ($B_0 = 1$). By definition, $$B_n = \sum_{k=0}^n S_{n,k}$$ Equivalent interpretation : Consider a number whose prime factorization is square-free, i.e., $k \in \N$ such that $k = p_1p_2 \dots p_n$ where $\{p_1, p_2, \dots p_n\}$ are distinct primes. The number of ways of writing $k$ as a product of natural numbers $\geq 2$ equals the number of ways of partitioning $\{p_1, p_2, \dots p_n\}$, which is $B_n$. \\ Recurrence Relation : $$ B_n = \sum_{k=0}^{n-1} {{n-1} \choose k} B_k $$ % LHS : Number ways of partitioning \{$1, 2 ,3 \dots n $\}. \\ % RHS : \begin{parts} % \item If $1$ occurs in singleton set. No.of such partitions = $B_{n-1}$ % \item If $1$ occurs in set with two elements. No.of ways element can be chosen $ = n-1$.No.of such partitions = ${{n-1} \choose 1}B_{n-2}$. % \item $1$ occurs in set with $k+1$ elements. No.of ways elements in this set can be chosen $ = {n-1 \choose k}$. No.of such partitions = ${{n-1} \choose k}B_{n-(k+1)}$. % \end{parts} % Hence total $= \sum_{k=0}^{n-1} {{n-1} \choose k} B_{n-k-1} = \sum_{k=0}^{n-1} {{n-1} \choose n-k-1} B_{n-k-1} = \sum_{n-k-1=0}^{n-1} {{n-1} \choose k} B_{k} = \sum_{k=0}^{n-1} {{n-1} \choose k} B_{k} $ \\ Let's derive a closed-form expression for $B_n$ : \begin{align*} B(x) &= \sum_{n=0}^{\infty} B_n \frac{x^n}{n!} \\ B'(x) &= \sum_{n=0}^{\infty} B_{n+1} \frac{x^n}{n!} ~(by ~ the~shifting ~rule) \\ &= \sum_{n=0}^{\infty} (\sum_{k=0}^{n} {n \choose k} B_k) \frac{x^n}{n!} \\ &= \sum_{n=0}^{\infty} (\sum_{k=0}^{n} {n \choose k} \cdot 1 \cdot B_k) \frac{x^n}{n!} \\ &= (\sum_{n=0}^{\infty} B_n \frac{x^n}{n!})(\sum_{n=0}^{\infty} 1 \frac{x^n}{n!}) ~(by ~ the~multiplication ~rule) \\ B'(x) &= B(x) . e^x \\ \frac{B'(x)}{B(x)} &= e^x \end{align*} Integrating both sides, $$\ln{B(x)} = e^x + c$$ Since $B(0) = B_0 = 1 \implies c = -1$.
Hence, %$$ B(x) = e^{e^x - 1} = \frac{e^{e^x}}{e}$$ \begin{align*} B(x) &= e^{e^x - 1} \\ &= \frac{e^{e^x}}{e} \\ &= \frac{1}{e} (\sum_{k=0}^{\infty} \frac{(e^x)^k}{k!}) \\ &= \frac{1}{e} (\sum_{k=0}^{\infty} \frac{e^{kx}}{k!}) \\ &= \frac{1}{e} (~\sum_{k=0}^{\infty} \frac{1}{k!}~ (\sum_{n=0}^{\infty} \frac{(kx)^n}{n!})~) \\ &= \frac{1}{e} (~\sum_{n=0}^{\infty} \frac{x^n}{n!}~ (\sum_{k=0}^{\infty} \frac{k^n}{k!})~)\\ B(x)&= \sum_{n=0}^{\infty} \frac{1}{e} (\sum_{k=0}^{\infty} \frac{k^n}{k!}) \frac{x^n}{n!} \\ B_n &= \frac{1}{e} (\sum_{k=0}^{\infty} \frac{k^n}{k!}) \end{align*} We have derived a closed-form expression for the Bell numbers. The above expression for $B_n$ is also known as Dobinski's formula. \end{enumerate}
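As a closing numerical check (ours, not part of the original notes), the snippet below compares the Bell numbers computed from the recurrence $B_n = \sum_{k=0}^{n-1} {{n-1} \choose k} B_k$ with a truncation of Dobinski's series $B_n = \frac{1}{e} \sum_{k \geq 0} \frac{k^n}{k!}$:
\begin{verbatim}
# Bell numbers: recurrence vs. truncated Dobinski series.
from math import comb, factorial, e

def bell_rec(n):
    B = [1]                                   # B_0 = 1
    for m in range(1, n + 1):
        B.append(sum(comb(m - 1, k) * B[k] for k in range(m)))
    return B[n]

def bell_dobinski(n, terms=60):
    # The series converges very fast; 60 terms is ample for small n.
    return sum(k ** n / factorial(k) for k in range(terms)) / e

for n in range(8):
    assert abs(bell_rec(n) - bell_dobinski(n)) < 1e-6
# First few Bell numbers: 1, 1, 2, 5, 15, 52, 203, 877
\end{verbatim}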
John Clare, the nineteenth-century English poet based in Northamptonshire, wrote "The Landrail", a semi-comic piece which is primarily about the difficulty of seeing corn crakes – as opposed to hearing them. In the fourth verse he exclaims: "'Tis like a fancy everywhere / A sort of living doubt". Clare wrote about corn crakes in his prose works too, and his writings help to clarify the distribution of this rail when it was far more widespread than now.
module Intro sm : List Nat -> Nat sm [] = 0 sm (x :: xs) = x + (sm xs) fct : Nat -> Nat fct Z = 1 fct (S k) = (S k) * (fct k) fbp : Nat -> (Nat, Nat) fbp Z = (1, 1) fbp (S k) = (snd (fbp k), fst (fbp k) + snd (fbp k)) fib : Nat -> Nat fib n = fst (fbp n) add : Nat -> Nat -> Nat add Z j = j add (S k) j = S (add k j) mul : Nat -> Nat -> Nat mul Z j = Z mul (S k) j = add j (mul k j) sub : (n: Nat) -> (m : Nat) -> (LTE m n) -> Nat sub n Z LTEZero = n sub (S right) (S left) (LTESucc x) = sub right left x
/* ----------------------------------------------------------------------------- * Copyright 2021 Jonathan Haigh * SPDX-License-Identifier: MIT * ---------------------------------------------------------------------------*/ #ifndef SQ_INCLUDE_GUARD_core_errors_h_ #define SQ_INCLUDE_GUARD_core_errors_h_ #include "core/Primitive.fwd.h" #include "core/Token.fwd.h" #include "core/typeutil.h" #include <cstddef> #include <filesystem> #include <fmt/format.h> #include <fmt/ostream.h> #include <gsl/gsl> #include <stdexcept> #include <string_view> #include <system_error> namespace sq { /** * Base class for errors thrown by the SQ code. */ class Exception : public std::runtime_error { public: using std::runtime_error::runtime_error; }; /** * Indicates that a required parameter of a field is missing. */ class ArgumentMissingError : public Exception { public: /** * @param arg_name the name of the missing argument. * @param arg_type the type of the missing argument. */ ArgumentMissingError(std::string_view arg_name, std::string_view arg_type); }; /** * Indicates that a given parameter is of incorrect type. */ class ArgumentTypeError : public Exception { public: /** * @param received the value of the given parameter. * @param type_expected the name of the type expected for the parameter. */ ArgumentTypeError(const Primitive &received, std::string_view type_expected); }; /** * Indicates a programming error in SQ. * * InternalError should never actually be thrown - they are used in places that * the programmer believes are dead code, but where the C++ language still * requires e.g. a return statement. */ class InternalError : public Exception { public: using Exception::Exception; }; class InvalidConversionError : public Exception { public: using Exception::Exception; InvalidConversionError(std::string_view from, std::string_view to); }; /** * Indicates attempted access of a non-existent field. */ class InvalidFieldError : public Exception { public: /** * @param sq_type the SQ type of the parent of the missing field. * @param field the name of the field that was requested. * * E.g. if the query is "a.b" and "b" is not a field of "a" then sq_type * should be the type of "a" and field should be "b". */ InvalidFieldError(std::string_view sq_type, std::string_view field); }; /** * Indicates incorrect grammar in a query. */ class ParseError : public Exception { public: using Exception::Exception; /** * Create a ParseError for when an unexpected token is found. * * @param token the unexpected token. * @param expecting the set of tokens that would have been valid in place * of the unexpected token. */ ParseError(const Token &token, const TokenKindSet &expecting); }; /** * Indicates a failure to interpret part of the input query as a token. */ class LexError : public ParseError { public: /** * @param pos position in the input query (in characters) at which the lex * error occurred. * @param query the full input query. */ LexError(gsl::index pos, std::string_view query); }; /** * Indicates that a scalar operation has been requested on a non-scalar type. */ class NotAScalarError : public Exception { public: using Exception::Exception; }; /** * Indicates that an array operation has been requested on a non-array type. */ class NotAnArrayError : public Exception { public: using Exception::Exception; }; /** * Indicates that a requested feature has not been implemented.
*/ class NotImplementedError : public Exception { public: using Exception::Exception; }; /** * Indicates that a request to access an element outside of an allowed range. */ class OutOfRangeError : public Exception { public: using Exception::Exception; /** * @param token the token in the query where the access was requested. * @param message details about the requested access. */ OutOfRangeError(const Token &token, std::string_view message); }; /** * Indicates that a pullup type field access has been requested for a field * access with siblings. */ class PullupWithSiblingsError : public Exception { using Exception::Exception; }; /** * Indicates an error was received from a system library. */ class SystemError : public Exception { public: using Exception::Exception; /** * Create a SystemError object. * * @param operation the operation that failed. * @param code the system error code associated with the error. */ SystemError(std::string_view operation, std::error_code code); /** * Get the system error code associated with this error. */ SQ_ND std::error_code code() const; private: std::error_code code_; }; /** * Indicates that the udev library returned an error. */ class UdevError : public SystemError { using SystemError::SystemError; }; /** * Indicates that a filesystem error occurred. */ class FilesystemError : public SystemError { public: using SystemError::SystemError; /** * Create a FilesystemError object. * * @param operation the operation that failed. * @param path the path for which the operation failed. * @param code the system error code associated with the error. */ FilesystemError(std::string_view operation, const std::filesystem::path &path, std::error_code code); }; class NarrowingError : public OutOfRangeError { public: using OutOfRangeError::OutOfRangeError; NarrowingError(auto &&target, auto &&value, auto &&kind_of_value_format, auto &&...format_args); NarrowingError(auto &&target, auto &&value); }; } // namespace sq #include "core/errors.inl.h" #endif // SQ_INCLUDE_GUARD_core_errors_h_
The complex conjugate of $z$ is $1$ if and only if $z$ is $1$.
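A possible formalization of this statement in Lean 4 with mathlib (a sketch only; the exact simp lemma and import names may differ between mathlib versions):

import Mathlib.Data.Complex.Basic

-- The complex conjugate of z is 1 if and only if z is 1.
example (z : ℂ) : (starRingEnd ℂ) z = 1 ↔ z = 1 := by
  constructor
  · intro h
    -- Conjugate both sides and use that conjugation is an involution
    -- fixing 1 (conj_conj and map_one are simp lemmas).
    have h' := congrArg (starRingEnd ℂ) h
    simpa using h'
  · rintro rfl
    simp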
When Nesbitt was 11 years old, the family moved to Coleraine, County Londonderry, where May worked for the Housing Executive. He completed his primary education at <unk> primary school, then moved on to Coleraine Academical Institution (CAI). In 1978, when he was 13, his parents took him to audition for the Riverside Theatre's Christmas production of Oliver!. Nesbitt sang "Bohemian Rhapsody" at the audition and won the part of the Artful Dodger, whom he played in his acting debut. He continued to act and sing with the Riverside until he was 16, and appeared at festivals and as an extra in Play For Today: The Cry (Christopher Menaul, 1984). He got his Equity card when the professional actor playing Jiminy Cricket in Pinocchio broke his ankle two days before the performance, and Nesbitt stepped in to take his place. Acting had not initially appealed to him, but he "felt a light go on" after he saw The Winslow Boy (Anthony Asquith, 1948). When he was 15, he got his first paid job as a bingo caller at Barry's Amusements in Portrush. He was paid £1 per hour for the summer job and would also, on occasions, work as the brake man on the big dipper.
York mostly occupied the bottom half of the table before the turn of the year, and dropped as low as 23rd in September 2013. During February 2014 the team broke into the top half of the table and with one match left were in sixth place. York's defensive record was the third best in League Two with 41 goals conceded, bettered only by Southend (39) and Chesterfield (40). Davies made the highest number of appearances over the season, appearing in 47 of York's 52 matches. Fletcher was York's top scorer in the league and in all competitions, with 10 league goals and 13 in total. He was the only player to reach double figures, and was followed by Jarvis with nine goals.
lemma linear_scale_real: fixes r::real shows "linear f \<Longrightarrow> f (r * b) = r * f b"
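A Lean 4 / mathlib analogue of this Isabelle lemma, stated for a bundled real-linear map (a sketch; import paths and lemma names may vary across versions, and an unbundled `IsLinearMap` phrasing would be closer to the Isabelle `linear` assumption):

import Mathlib.Algebra.Module.LinearMap.Basic

-- A real-linear map pulls real scalars out: f (r * b) = r * f b.
example (f : ℝ →ₗ[ℝ] ℝ) (r b : ℝ) : f (r * b) = r * f b := by
  -- map_smul gives f (r • b) = r • f b; smul_eq_mul turns • into *.
  simpa [smul_eq_mul] using f.map_smul r b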
/-
Copyright (c) 2022 Joël Riou. All rights reserved.
Released under Apache 2.0 license as described in the file LICENSE.
Authors: Joël Riou
-/

import for_mathlib.algebraic_topology.homotopical_algebra.model_category
import category_theory.abelian.basic
import category_theory.preadditive.projective
import algebra.homology.homological_complex
import algebra.homology.quasi_iso
import for_mathlib.category_theory.limits.kernel_functor
import for_mathlib.algebra.homology.twist_cocycle
import tactic.linarith

noncomputable theory

open category_theory category_theory.limits category_theory.category
open algebraic_topology cochain_complex.hom_complex
open_locale zero_object

variables (C : Type*) [category C] [abelian C]

namespace cochain_complex

@[derive category]
def Cminus := full_subcategory (λ (K : cochain_complex C ℤ), K.is_bounded_above)

namespace Cminus

variable {C}

@[simps]
def mk (K : cochain_complex C ℤ) (hK : K.is_bounded_above) : Cminus C := ⟨K, hK⟩

def homology_functor (i : ℤ) : Cminus C ⥤ C :=
induced_functor _ ⋙ homology_functor _ _ i

def eval (i : ℤ) : Cminus C ⥤ C :=
induced_functor _ ⋙ homological_complex.eval _ _ i

namespace projective_structure

variable (C)

def arrow_classes : category_with_fib_cof_weq (Cminus C) :=
{ weq := λ X Y w, quasi_iso w,
  fib := λ X Y w, ∀ n, epi (w.f n),
  cof := λ X Y w, ∀ n, mono (w.f n) ∧ (projective (cokernel (w.f n))), }

variable {C}

def CM2 : (arrow_classes C).CM2 :=
{ of_comp := λ X Y Z f g (hf : quasi_iso f) (hg : quasi_iso g),
  begin
    haveI := hf,
    haveI := hg,
    exact quasi_iso_comp f g,
  end,
  of_comp_left := λ X Y Z f g (hf : quasi_iso f) (hfg : quasi_iso (f ≫ g)),
  begin
    haveI := hf,
    haveI := hfg,
    convert quasi_iso_of_comp_left f g,
  end,
  of_comp_right := λ X Y Z f g (hg : quasi_iso g) (hfg : quasi_iso (f ≫ g)),
  begin
    haveI := hg,
    haveI := hfg,
    convert quasi_iso_of_comp_right f g,
  end, }

def CM3 : (arrow_classes C).CM3 :=
{ weq := λ X₁ X₂ Y₁ Y₂ f g hfg hg, ⟨λ n,
  begin
    have hfg' := is_retract.imp_of_functor (homology_functor n).map_arrow
      (arrow.mk f) (arrow.mk g) hfg,
    apply morphism_property.is_stable_by_retract.for_isomorphisms _ _ hfg',
    apply hg.1,
  end⟩,
  cof := λ X₁ X₂ Y₁ Y₂ f g hfg hg n,
  begin
    split,
    { exact morphism_property.is_stable_by_retract.for_monomorphisms _ _
        (is_retract.imp_of_functor (eval n).map_arrow _ _ hfg) (hg n).1, },
    { exact projective.of_retract
        (is_retract.imp_of_functor ((eval n).map_arrow ⋙ limits.cokernel_functor C) _ _ hfg)
        (hg n).2, },
  end,
  fib := λ X₁ X₂ Y₁ Y₂ f g hfg hg n,
    morphism_property.is_stable_by_retract.for_epimorphisms _ _
      (is_retract.imp_of_functor (eval n).map_arrow _ _ hfg) (hg n), }

def CM4 : (arrow_classes C).CM4 := sorry

variable [enough_projectives C]

namespace CM5a

def P (L : Cminus C) (q : ℤ) : C :=
begin
  by_cases is_zero (L.1.X q),
  { exact 0, },
  { exact projective.over (L.1.X q), },
end

instance (L : Cminus C) (q : ℤ) : projective (P L q) :=
begin
  dsimp [P],
  split_ifs,
  { apply projective.zero_projective, },
  { apply projective.projective_over, },
end

lemma P_eq (L : Cminus C) (q : ℤ) (hq : ¬(is_zero (L.1.X q))) :
  P L q = projective.over (L.1.X q) :=
begin
  dsimp [P],
  split_ifs,
  { exfalso, exact hq h, },
  { refl, },
end

lemma P_eq_zero (L : Cminus C) (q : ℤ) (hq : is_zero (L.1.X q)) :
  P L q = 0 :=
begin
  dsimp [P],
  split_ifs,
  { refl, },
  { exfalso, exact h hq, },
end

lemma P_is_initial (L : Cminus C) (q : ℤ) (hq : is_zero (L.1.X q)) :
  is_initial (P L q) :=
begin
  rw P_eq_zero L q hq,
  apply is_zero.is_initial,
  apply is_zero_zero,
end

def is_zero.unique_up_to_iso {X Y : C} (hX : is_zero X) (hY : is_zero Y) : X ≅ Y :=
{ hom := 0,
  inv := 0,
  hom_inv_id' := by { rw is_zero.iff_id_eq_zero at hX, rw [hX, comp_zero], },
  inv_hom_id' := by { rw is_zero.iff_id_eq_zero at hY, rw [hY, comp_zero], }, }

def P_π (L : Cminus C) (q : ℤ) : P L q ⟶ L.1.X q :=
begin
  by_cases is_zero (L.1.X q),
  { have e : 0 ≅ L.1.X q := is_zero.unique_up_to_iso (is_zero_zero C) h, swap,
    exact eq_to_hom (P_eq_zero L q h) ≫ e.hom, },
  { exact eq_to_hom (P_eq L q h) ≫ projective.π (L.1.X q), },
end

lemma P_π_eq_to_hom (L : Cminus C) (q₁ q₂ : ℤ) (hq : q₁ = q₂) :
  P_π L q₁ = eq_to_hom (by rw hq) ≫ P_π L q₂ ≫ eq_to_hom (by rw hq) :=
by { subst hq, simp only [eq_to_hom_refl, comp_id, id_comp], }

@[simps]
def KP (L : Cminus C) : Cminus C :=
Cminus.mk
{ X := λ q, P L (q-1),
  d := λ i j, 0,
  shape' := λ i j hij, rfl,
  d_comp_d' := λ i j k hij hjk, comp_zero, }
begin
  cases L.2 with r hr,
  use r+1,
  intros i hi,
  dsimp,
  rw P_eq_zero L,
  { apply is_zero_zero, },
  { apply hr, linarith, },
end

instance (L : Cminus C) (q : ℤ) : epi (P_π L q) :=
by { dsimp only [P_π], split_ifs; apply epi_comp, }

def twistP (L : Cminus C) : Cminus C :=
⟨twist (cocycle.of_hom (𝟙 (KP L).1)),
  twist.is_bounded_above _ (KP L).2 (KP L).2⟩

def π (L : Cminus C) : twistP L ⟶ L :=
begin
  refine twist.desc (cocycle.of_hom (𝟙 (KP L).1)) (cochain.mk _) _ (neg_add_self 1) _,
  { exact (λ p q hpq, P_π L _ ≫ eq_to_hom (by {congr' 1, linarith})), },
  { exact
    { f := λ i, P_π L (i-1) ≫ L.1.d (i-1) i,
      comm' := λ i j hij,
      begin
        change i+1=j at hij,
        dsimp [KP],
        simp only [assoc, homological_complex.d_comp_d, comp_zero, zero_comp],
      end, }, },
  { ext,
    dsimp [KP],
    simp only [δ_v (-1) 0 rfl _ p p (add_zero p).symm (p-1) (p+1) rfl rfl,
      add_zero, zero_comp, cochain.mk_v, eq_to_hom_refl, comp_id, smul_zero,
      cochain.id_comp, cochain.of_hom_v], },
end

example : 2+2=4 := rfl

instance (L : Cminus C) (q : ℤ) : epi ((π L).f q) :=
begin
  haveI : epi (biprod.inl ≫ (π L).f q),
  { have eq : biprod.inl ≫ (π L).f q =
      eq_to_hom (by { dsimp, congr, linarith }) ≫ P_π L q,
    { dsimp [π, twist.desc_cochain, twist.fst, twist.snd, cochain.mk, cochain.v,
        cochain.of_hom, cochain.of_homs, cochain.comp],
      simp only [id_comp, assoc, add_zero, preadditive.comp_add,
        biprod.inl_fst_assoc, biprod.inl_snd_assoc, zero_comp,
        P_π_eq_to_hom L (q+(0 - -1)-1) q (by linarith),
        eq_to_hom_trans, eq_to_hom_refl, eq_to_hom_trans_assoc, comp_id], },
    rw eq,
    apply epi_comp, },
  exact epi_of_epi biprod.inl ((π L).f q),
end

instance : preadditive (Cminus C) := sorry
instance : has_binary_biproducts (Cminus C) := sorry

end CM5a

lemma CM5a : (arrow_classes C).CM5a := λ X Z f,
begin
  let Y := CM5a.twistP Z,
  let i : X ⟶ X ⊞ Y := biprod.inl,
  let p : X ⊞ Y ⟶ Z := biprod.desc f (CM5a.π Z),
  let j : Y ⟶ X ⊞ Y := biprod.inr,
  have hip : i ≫ p = f := biprod.inl_desc _ _,
  refine ⟨X ⊞ Y, i, _, p, _, hip⟩,
  { sorry, },
  { intro,
    have hjp : j ≫ p = CM5a.π Z := biprod.inr_desc _ _,
    have hjp' : j.f n ≫ p.f n = (CM5a.π Z).f n,
    { rw [← hjp, ← homological_complex.comp_f], refl, },
    haveI : epi (j.f n ≫ p.f n), { rw hjp', apply_instance, },
    exact epi_of_epi (j.f n) (p.f n), },
end

def CM5 : (arrow_classes C).CM5 := ⟨CM5a, sorry⟩

variable (C)

@[simps]
def projective_structure : model_category (Cminus C) :=
{ to_category_with_fib_cof_weq := arrow_classes C,
  CM1axiom := sorry,
  CM2axiom := CM2,
  CM3axiom := CM3,
  CM4axiom := CM4,
  CM5axiom := CM5, }

instance : model_category (Cminus C) := projective_structure C

end projective_structure

end Cminus

end cochain_complex
open import Prelude hiding (lift; id)

module Implicits.Syntax.LNMetaType where

open import Implicits.Syntax.Type
open import Data.Nat as Nat

mutual
  data MetaSType (m : ℕ) : Set where
    tvar : ℕ → MetaSType m
    mvar : Fin m → MetaSType m
    _→'_ : (a b : MetaType m) → MetaSType m
    tc : ℕ → MetaSType m

  data MetaType (m : ℕ) : Set where
    _⇒_ : (a b : MetaType m) → MetaType m
    ∀' : MetaType m → MetaType m
    simpl : MetaSType m → MetaType m

mutual
  open-meta : ∀ {m} → ℕ → MetaType m → MetaType (suc m)
  open-meta k (a ⇒ b) = open-meta k a ⇒ open-meta k b
  open-meta k (∀' a) = ∀' (open-meta (suc k) a)
  open-meta k (simpl x) = simpl (open-st k x)
    where
    open-st : ∀ {m} → ℕ → MetaSType m → MetaSType (suc m)
    open-st k (tvar x) with Nat.compare x k
    open-st .(suc (x N+ k)) (tvar x) | less .x k = tvar x
    open-st k (tvar .k) | equal .k = mvar zero
    open-st k (tvar .(suc (k N+ x))) | greater .k x = tvar (k N+ x)
    open-st k (mvar x) = mvar (suc x)
    open-st k (a →' b) = open-meta k a →' open-meta k b
    open-st k (tc x) = tc x

mutual
  data TClosedS {m} (n : ℕ) : MetaSType m → Set where
    tvar : ∀ {x} → (x N< n) → TClosedS n (tvar x)
    mvar : ∀ {x} → TClosedS n (mvar x)
    _→'_ : ∀ {a b} → TClosed n a → TClosed n b → TClosedS n (a →' b)
    tc : ∀ {c} → TClosedS n (tc c)

  data TClosed {m} (n : ℕ) : MetaType m → Set where
    _⇒_ : ∀ {a b} → TClosed n a → TClosed n b → TClosed n (a ⇒ b)
    ∀' : ∀ {a} → TClosed (suc n) a → TClosed n (∀' a)
    simpl : ∀ {τ} → TClosedS n τ → TClosed n (simpl τ)

to-meta : ∀ {ν} → Type ν → MetaType zero
to-meta (simpl (tc x)) = simpl (tc x)
to-meta (simpl (tvar n)) = simpl (tvar (toℕ n))
to-meta (simpl (a →' b)) = simpl (to-meta a →' to-meta b)
to-meta (a ⇒ b) = to-meta a ⇒ to-meta b
to-meta (∀' a) = ∀' (to-meta a)

from-meta : ∀ {ν} {a : MetaType zero} → TClosed ν a → Type ν
from-meta (a ⇒ b) = from-meta a ⇒ from-meta b
from-meta (∀' a) = ∀' (from-meta a)
from-meta (simpl (tvar x)) = simpl (tvar (fromℕ≤ x))
from-meta (simpl (mvar {()}))
from-meta (simpl (a →' b)) = simpl (from-meta a →' from-meta b)
from-meta (simpl (tc {c})) = simpl (tc c)
\chapter{The monojet analysis}
\label{chapter:MonojetAnalysis}

The monojet analysis is described in detail in this chapter. The data and the Monte Carlo simulated samples used for the analysis are presented, together with the definition of the different physics objects and the event selection criteria. The statistical treatment of the data and the estimation of the different Standard Model (SM) background processes are discussed. The observations are then compared to the SM predictions in the different signal regions.

\section{Data sample}
\label{sec:DataSample}

The data sample considered in the analysis presented in this Thesis was collected with the ATLAS detector in proton-proton collisions at a center of mass energy of $\unit[8]{TeV}$ between April 4, 2012 and December 6, 2012. A total integrated luminosity of $\unit[20.3 \pm 0.6]{fb^{-1}}$ was recorded after requiring tracking detectors, calorimeters, muon chambers and magnets to be fully operational during the data taking.

Events are selected using the lowest unprescaled $\met$ trigger logic, called \texttt{EF\_xe80\_tclcw}, which selects events with $\met$ above $\unit[80]{GeV}$, as computed at the final stage of the three-level trigger system of ATLAS discussed in Section~\ref{subsec:TriggerSystem}. The details of the implementation of the $\met$ trigger can be found in Ref.~\cite{Casadei:2011via}.

\section{Object definition}
\label{sec:ObjectDefinition}

Jets and $\met$ are used to define the signal selections, whereas leptons are used both to veto the electroweak backgrounds and to define the different control samples.

\subsection{Jets}
\label{subsec:JetDefinition}

Jets are reconstructed from energy deposits in the calorimeters using the $\akt$ jet algorithm with the jet radius parameter $R=0.4$ (see Section~\ref{sec:JetReco}). The transverse momentum of the jets is corrected for detector effects with the LCW calibration. Jets with corrected $\pt>\unit[20]{GeV}$ and $|\eta|<2.8$ are considered in the analysis. In order to remove jets originating from pileup collisions, central jets ($|\eta|<2.4$) with $\pt < \unit[50]{GeV}$ are required to have a jet vertex fraction (JVF) above 0.5.

\subsection{Electrons}
\label{subsec:ElectronDefinition}

Electrons are required to have $\pt>\unit[20]{GeV}$ and $|\eta|<2.47$, and need to fulfill the \emph{medium} shower shape and track selection criteria (see Section~\ref{sec:ElectronReco}). The same $\pt$ threshold is used to veto electrons in the signal selections and to select them in the control samples (see Section~\ref{sec:EventSelection}), which minimizes the impact of the reconstruction, identification and efficiency systematic uncertainties. This $\unit[20]{GeV}$ threshold, combined with the definition of the electron control sample, brings the background of jets misidentified as electrons to negligible levels, and therefore no isolation is required.

Overlaps between identified electrons and jets in the final state are resolved: jets are discarded if their separation $\Delta R$ from an identified electron is less than 0.2, and electrons separated by $\Delta R$ between 0.2 and 0.4 from any remaining jet are removed.

\subsection{Muons}
\label{subsec:MuonDefinition}

Muons are reconstructed by combining information from the muon spectrometer and inner tracking detectors (see Section~\ref{sec:MuonReco}).
The muon candidates for the analysis presented are required to have $\pt>\unit[10]{GeV}$, $|\eta|<2.4$, and $\Delta R > 0.4$ with respect to any jet candidate with $\pt > \unit[30]{GeV}$. The use of this $\pt$ threshold increases the precision in selecting real muons from $W$ boson decays, and avoids a bias in the muon selection due to the presence of low-$\pt$ jets with large pileup contributions. Finally, an isolation condition is applied to the muons, which requires the sum of the $\pt$ of the tracks not associated with the muon, in a cone of radius $\Delta R = 0.2$ around the muon direction, to be less than $\unit[1.8]{GeV}$.

\subsection{Missing transverse energy}
\label{subsec:MetDefinition}

The missing transverse energy is described in detail in Section~\ref{subsec:ETmissReco}. It is reconstructed using all energy deposits in the calorimeter up to a pseudorapidity $|\eta|<4.9$, and without including information from identified muons in the final state.

\section{Event selection}
\label{sec:EventSelection}

The different signal regions defined in this analysis share a common set of preselection criteria, which suppresses the large contribution of SM processes with leptons in the final state as well as non-collision backgrounds:
\begin{itemize}
\item Events are required to have a reconstructed primary vertex consistent with the beam spot envelope, with at least five isolated tracks with $\pt>\unit[400]{MeV}$. If two or more vertices are consistent with these requirements, the one with the largest $\sum{\pt^2}$ is chosen as the primary vertex. This requirement removes beam-related backgrounds and cosmic rays.
\item Events are initially required to have $\met>\unit[150]{GeV}$ in order to ensure that the trigger is fully efficient.
\item At least one jet with $\pt>\unit[150]{GeV}$ and $|\eta|<2.8$ is required in the final state, in order to select monojet-like configurations.
\item Different quality cuts are applied to remove events recorded during a LAr noise burst or during a failure in the electronics of any subsystem. Events that were not correctly processed are also vetoed from the selection.
\item Events containing any jet with $\pt>\unit[20]{GeV}$ and $|\eta|<4.5$ with charged fraction\footnote{The charged fraction is defined as $f_{\text{ch}} = \sum{\pt^{\text{track,jet}}}/\pt^{\text{jet}}$, where $\sum{\pt^{\text{track,jet}}}$ is the scalar sum of the transverse momenta of the tracks associated to the primary vertex within a cone of radius 0.4 around the jet axis, and $\pt^{\text{jet}}$ is the transverse momentum as determined from the calorimetric measurements.}, electromagnetic fraction\footnote{The electromagnetic fraction is defined as $f_{\text{em}} = E_{\text{LAr}}/(E_{\text{LAr}} + E_{\text{TileCal}})$, where $E_{\text{LAr}}$ is the energy deposited in the electromagnetic calorimeter and $E_{\text{TileCal}}$ is the energy deposited in the hadronic calorimeter.} or sampling fraction\footnote{$f_{\text{max}}$ denotes the maximum fraction of the jet energy collected by a single calorimeter layer.} inconsistent with the requirement that they originate from a $\pp$ collision ($f_{\text{ch}}<0.02$, $f_{\text{em}}<0.1$ and $f_{\text{max}}>0.8$, respectively), are vetoed.
\item Events with one or more reconstructed isolated muons with $\pt>\unit[10]{GeV}$ or electrons with $\pt>\unit[20]{GeV}$ are vetoed.
\end{itemize}
A maximum of three jets with $\pt>\unit[30]{GeV}$ and $|\eta|<2.8$ in the event is allowed.
An additional requirement on the azimuthal separation $\Delta\phi(\text{jet}, \met) > 0.4$ between the missing transverse momentum direction and that of each of the selected jets is imposed. The latter suppresses the multijet background contribution, where the large $\met$ originates from a jet energy mismeasurement.

Three separate signal regions (denoted by M1, M2 and M3) are defined with increasing lower thresholds on the leading jet $\pt$ and $\met$. The definition of these signal regions comes as a result of an optimization based on the stop pair production model with $\stoptocharm$, performed across the stop-neutralino mass plane with increasing $\stopone$ and $\ninoone$ masses. For the M1 selection, the events are required to have $\met>\unit[220]{GeV}$ and leading jet $\pt>\unit[280]{GeV}$. The M2 (M3) selection requires $\met>\unit[340]{GeV}$ ($\met>\unit[450]{GeV}$) and leading jet $\pt>\unit[340]{GeV}$ ($\pt>\unit[450]{GeV}$). Three extra generic signal regions (M4, M5 and M6) are defined to increase the sensitivity to a broad variety of models leading to final states with larger $\met$. Signal region M4 requires the events to have leading jet $\pt>\unit[450]{GeV}$ and $\met>\unit[340]{GeV}$, while region M5 (M6) requires leading jet $\pt>\unit[550]{GeV}$ and $\met>\unit[550]{GeV}$ (leading jet $\pt>\unit[600]{GeV}$ and $\met>\unit[600]{GeV}$). Table~\ref{tab:SignalRegionCuts} summarizes the six signal region selections.

\begin{table}[!ht]
\renewcommand{\baselinestretch}{1}
\begin{center}
\begin{small}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc}\hline\hline
\multicolumn{7}{c}{\small{\textbf{Selection criteria}}} \\\hline
%%% --- PRESELECTION
\multicolumn{7}{c}{{\small{Preselection}} } \\\hline
\multicolumn{7}{l}{Primary vertex}\\
\multicolumn{7}{l}{$\met > 150$~GeV }\\
\multicolumn{7}{l}{At least one jet with $\pt >150$~GeV and $|\eta|< 2.8$}\\
\multicolumn{7}{l}{Jet quality requirements} \\
\multicolumn{7}{l}{Lepton vetoes}\\ \hline
%%% --- MONOJET
\multicolumn{7}{c}{\small{Monojet-like selection}}\\\hline
\multicolumn{7}{l}{At most a total of three jets with $\pt > 30$~GeV and $|\eta|<2.8$}\\
\multicolumn{7}{l}{$\Delta\phi(\text{jet}, \met) > 0.4$} \\\hline
Signal region & M1 & M2 & M3 & M4 & M5 & M6 \\
Minimum leading jet $\pt$ [GeV] & 280 & 340 & 450 & 450 & 550 & 600 \\
Minimum $\met$ [GeV] & 220 & 340 & 450 & 340 & 550 & 600 \\
\hline\hline
\end{tabular*}
\end{small}
\end{center}
\caption{Event selection criteria applied for the signal regions M1 to M6.}
\label{tab:SignalRegionCuts}
\end{table}
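For illustration only, the monojet-like selection of Table~\ref{tab:SignalRegionCuts} amounts to a set of rectangular cuts. The following sketch (in Python, on a hypothetical event record; it is not part of the analysis software) encodes the jet counting, the $\Delta\phi(\text{jet},\met)$ requirement and the M1--M6 thresholds:
\begin{verbatim}
import math

# Leading-jet pT and MET thresholds in GeV for signal regions M1-M6.
THRESHOLDS = {
    "M1": (280, 220), "M2": (340, 340), "M3": (450, 450),
    "M4": (450, 340), "M5": (550, 550), "M6": (600, 600),
}

def passes_region(event, region):
    """Monojet-like selection on a hypothetical event: a dict with
    'met' (GeV), 'met_phi' (rad) and 'jets', a list of (pt, eta, phi)
    tuples, assumed to already pass the preselection."""
    jets = [j for j in event["jets"] if j[0] > 30 and abs(j[1]) < 2.8]
    if not jets or len(jets) > 3:        # at most three selected jets
        return False
    # Delta-phi(jet, MET) > 0.4 for every selected jet.
    for _, _, phi in jets:
        dphi = abs(math.remainder(phi - event["met_phi"], 2 * math.pi))
        if dphi < 0.4:
            return False
    pt_cut, met_cut = THRESHOLDS[region]
    return max(j[0] for j in jets) > pt_cut and event["met"] > met_cut
\end{verbatim}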
\section{Monte Carlo simulated samples}
\label{sec:MCSamples}

The analysis uses MC samples to estimate each Standard Model process. The MC events are passed through a detailed simulation of the detector based on {\sc Geant4}~\cite{Agostinelli:2002hh}. Different in-time and out-of-time pileup conditions as a function of the instantaneous luminosity are also taken into account by overlaying simulated minimum-bias events generated with \pythia-8 onto the hard scattering process and re-weighting them with the distribution of the observed mean number of interactions per bunch crossing. In the following, details are given for the SM background MC simulated samples.

\subsection{$W$+jets and $Z$+jets}
\label{subsec:WZjetsMCSimulation}

A set of simulated $W$+jets and $Z$+jets events is generated using \sherpa{}, including LO matrix elements for up to 5 partons in the final state and using massive $b/c$-quarks, CT10 parton distribution functions\footnote{Next-to-leading order (NLO) PDFs from the CTEQ/TEA group.} and its own model of hadronization. Similar samples have been generated with the \alpgen{} generator to study the modeling uncertainties. The MC samples are initially normalized to next-to-next-to-leading-order (NNLO) cross sections in perturbative QCD (pQCD) with the DYNNLO~\cite{Catani:2007vq} program using MSTW2008 NNLO PDF sets.

\subsection{Top}
\label{subsec:TopSimulation}

The production of top quark pairs (\ttbar) is simulated using the \powheg{} MC generator. A top quark mass of $\unit[172.5]{GeV}$, the CTEQ6L1 parton distribution functions and the Perugia 2011C tune~\cite{Skands:2010ak} have been used for the generation and the underlying event simulation. The $\ttbar$ samples are normalized to the cross section computed at NNLO+NNLL (next-to-next-to-leading-logarithm) pQCD accuracy, as determined by \texttt{Top++2.0}. Similar \alpgen{} and \mcnlo{} samples are used to assess the $\ttbar$ modeling uncertainties. Single top samples are generated with \powheg{} for the $s$- and $Wt$-channels, while \acer{}~\cite{Kersevan:2002dd} is used for the $t$-channel. An approximate NLO+NNLL pQCD prediction is used for the $Wt$ process. Samples generated with the \mcnlo{} generator are then used to estimate the systematic uncertainties.

\subsection{Diboson}
\label{subsec:DibosonSimulation}

Diboson samples ($WW$, $WZ$ and $ZZ$ production) are generated with \sherpa{}, using massive $c$/$b$-quarks, with CT10 PDFs, and are normalized to NLO predictions. Similar samples generated with \herwig{} are used to compute the modeling uncertainties.

\section{Background estimation}
\label{sec:BkgEstimation}

The expected SM background is dominated by $\znn$ (irreducible), $\wln$ and $\ttbar$ production, and includes small contributions from $\zgammall$, single top, diboson and multijet processes.

The $W/Z$+jets backgrounds are estimated using MC event samples normalized using data in control regions. The simulated $W/Z$+jets events are re-weighted to data as a function of the generated $\pt$ of the vector boson, which is found to improve the agreement between data and simulation. The weights are extracted from the comparison of the reconstructed boson $\pt$ distribution in data and {\sherpa} MC simulation in a $W$+jets control sample where the jet and $\met$ preselection requirements from Table~\ref{tab:SignalRegionCuts} have been applied. As detailed in Appendix~\ref{app:BosonPtReweight}, these weights are defined in several bins of the boson $\pt$ and applied to the truth boson $\pt$ distribution of the simulated samples. Due to the limited number of data events at large boson $\pt$, an inclusive last bin with boson $\pt > \unit[400]{GeV}$ is used. The uncertainties of the re-weighting procedure are taken into account in the final result.

The top-quark background contribution is very small and is determined using MC simulated samples. The simulated $\ttbar$ events are re-weighted based on a measurement in data (described in Ref.~\cite{Aad:2012hg}) which indicates that the differential cross section as a function of the $\pt$ of the $\ttbar$ system is softer in data than predicted by the MC simulation.
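The boson $\pt$ re-weighting described above is, in essence, a binned histogram ratio. The following sketch (NumPy, with hypothetical bin edges and inputs; the actual binning is given in Appendix~\ref{app:BosonPtReweight}) illustrates how such per-event weights could be derived and applied, including the inclusive last bin:
\begin{verbatim}
import numpy as np

# Hypothetical bin edges in boson pT [GeV]; the last bin is inclusive.
edges = np.array([0., 50., 100., 150., 200., 300., 400.])

def reweighting_factors(data_pt, mc_pt, mc_weights=None):
    """Per-bin data/MC ratio of reconstructed boson pT in the
    control sample; overflow is clipped into the last bin."""
    data_h, _ = np.histogram(np.clip(data_pt, 0, edges[-1]), bins=edges)
    mc_h, _ = np.histogram(np.clip(mc_pt, 0, edges[-1]), bins=edges,
                           weights=mc_weights)
    # Protect against empty MC bins.
    return np.where(mc_h > 0, data_h / np.maximum(mc_h, 1e-9), 1.0)

def apply_weights(truth_pt, factors):
    """Look up each simulated event's weight from its truth boson pT."""
    idx = np.digitize(np.clip(truth_pt, 0, edges[-1]), edges) - 1
    return factors[np.clip(idx, 0, len(factors) - 1)]
\end{verbatim}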
The diboson background contribution is also very small and fully determined using MC simulated samples.

The multijet background with large $\met$ originates mainly from the misreconstruction of the energy of a jet in the calorimeter, and to a lesser extent from the presence of neutrinos in the decays of heavy-flavor hadrons. In this analysis the multijet background is estimated from data using the {\em jet smearing method}, which is described in detail in Appendix~\ref{app:JetSmearingMethod}. The jet smearing method relies on the assumption that the $\met$ of multijet events is dominated by fluctuations in the jet response in the detector that can be measured in the data. The contribution of multijet processes is then normalized in regions defined with exactly the same requirements as the signal regions (Table~\ref{tab:SignalRegionCuts}), but with the cut on the angular separation between the transverse momentum of the jets and the missing transverse energy inverted ($\Delta\phi(\text{jet}, \met) < 0.4$).

The cleanup cuts applied to the data sample in Section~\ref{sec:EventSelection} are expected to maintain the non-collision contributions at the percent level. The shape of the timing distribution for non-collision background events is reconstructed from a control data sample with relaxed jet cleanup cuts, and then extrapolated to the signal regions. This extrapolation led to no events in the control samples after cuts, which indicates that the level of non-collision background is negligible in the analysis.

\subsection{Definition of the $W/Z$+jets control regions}
\label{subsec:ControlRegionsDefinition}

Control regions in data are defined for each signal selection, orthogonal to it, with identified electrons or muons in the final state and with the same requirements on the jet $\pt$, subleading jet vetoes and $\met$. They are used to determine the $W/Z$+jets electroweak background contributions from data.

A $\wmn$+jets control sample is defined using events with a muon with $\pt>\unit[10]{GeV}$ and $W$ transverse mass, $\mt$, in the range $\unit[30]{GeV} < \mt < \unit[100]{GeV}$ to further enhance the $\wmn$+jets process. The transverse mass is defined by the lepton ($\ell$) and neutrino ($\nu$) transverse momenta and their $\phi$-directions as:
\begin{equation}
\mt = \sqrt{2\pt^{\ell}\pt^{\nu}\left(1-\cos(\phi^{\ell}-\phi^{\nu})\right)},
\label{eq:TransverseMassDef}
\end{equation}
\noindent where the ($x, y$) components of the neutrino momentum are taken to be the same as the corresponding $\ptmiss$ components.

Similarly, a $Z/\gamma^{\ast}(\rightarrow \mu^{+}\mu^{-})$+jets control sample is defined using events with exactly two muons with invariant mass in the range $\unit[66]{GeV}<m_{\mu\mu}<\unit[116]{GeV}$, i.e.\ around the peak of the $Z$ boson resonance. Finally, a $\wen$+jets-dominated control sample is also defined for each signal selection with an electron candidate with $\pt>\unit[20]{GeV}$.

Figure~\ref{fig:Plot_M1_CR_beforeFit} shows the $\met$ and the leading jet $\pt$ distributions for the three control regions described above for the selection cuts M1.
\begin{figure}[!ht]
\begin{center}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_met.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_pt1.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_met.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_pt1.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_met.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_pt1.eps}
}
\end{center}
\caption[$\met$ and leading jet $\pt$ distributions in the three control regions for the selection cuts of region M1 compared to the background predictions.]{$\met$ and leading jet $\pt$ distributions in the three control regions for the selection cuts of region M1 compared to the background predictions. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions.}
\label{fig:Plot_M1_CR_beforeFit}
\end{figure}

Monte Carlo-based normalization factors, determined from the \sherpa{} simulation and including the boson $\pt$ re-weighting explained above, are defined for each of the signal selections to estimate the different electroweak background contributions in the signal regions. As an illustrative example, the contribution from the dominant $\znn$ background process to a given signal region, $N^{Z(\rightarrow\nu\bar{\nu})}_{\text{signal}}$, would be determined using the $\wmn$+jets control sample in data, according to:
\begin{equation}
N^{Z(\rightarrow\nu\bar{\nu})}_{\text{signal}} = N^{\text{MC}(Z(\rightarrow\nu\bar{\nu}))}_{\text{signal}} \times \frac{\left(N^{\text{data}}_{W(\rightarrow\mu\nu),\text{control}} - N^{\text{non-}W}_{W(\rightarrow\mu\nu),\text{control}} \right)}{N^{\text{MC}}_{W(\rightarrow\mu\nu),\text{control}}},
\label{eq:scaleFactorZnunu}
\end{equation}
\noindent where $N^{\text{MC}(Z(\rightarrow\nu\bar{\nu}))}_{\text{signal}}$ is the background predicted by the MC simulation in the signal region, and $N^{\text{data}}_{W(\rightarrow\mu\nu),\text{control}}$, $N^{\text{MC}}_{W(\rightarrow\mu\nu),\text{control}}$, and $N^{\text{non-}W}_{W(\rightarrow\mu\nu),\text{control}}$ denote, in the control region, the number of $\wmn$+jets candidates in data and MC simulation, and the non-$\wmn$ background contribution, respectively. The latter term refers mainly to top-quark and diboson processes, but also includes contributions from other $W/Z$+jets processes. The normalization factor for this particular example (i.e.\ the last factor in the previous expression) is defined as the ratio of the number of observed $\wmn$+jets events over the total number of $\wmn$+jets simulated events, both in the control region.
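As a purely illustrative example of this bookkeeping (the numbers below are hypothetical, not the analysis yields): with $N^{\text{MC}(Z(\rightarrow\nu\bar{\nu}))}_{\text{signal}} = 1000$ events predicted in the signal region, $1200$ events observed in the $\wmn$ control region, a non-$W$ contamination of $150$ events and $N^{\text{MC}}_{W(\rightarrow\mu\nu),\text{control}} = 1100$, Equation~\ref{eq:scaleFactorZnunu} would give
\begin{equation*}
N^{Z(\rightarrow\nu\bar{\nu})}_{\text{signal}} = 1000 \times \frac{1200 - 150}{1100} \simeq 955,
\end{equation*}
i.e.\ a normalization factor of $1050/1100 \simeq 0.95$ applied to the MC prediction.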
\section{Fit of the background processes to the data}
\label{sec:Fit}

The use of control samples to constrain the dominant background contributions from $\znn$ and $W$+jets reduces significantly the otherwise relatively large theoretical and experimental systematic uncertainties, of the order of 20\%--30\%, associated with purely MC-based background predictions in the signal regions.

For each selection, in order both to normalize and constrain the corresponding background estimates in the different signal regions, and to determine the final uncertainty in the total background, the likelihood shown in Equation~\ref{eq:PdfFit},
\begin{equation}
L(\vec{\mu}, \vec{\alpha}) = \prod_{c \in \text{regions}}{\frac{[\nu_c(\vec{\mu}, \vec{\alpha})]^{n_c}}{n_c!}e^{-\nu_c(\vec{\mu}, \vec{\alpha})}} \prod_{p\in\text{params}}{P_p(\alpha_p)},
\label{eq:PdfFit_copy}
\end{equation}
\noindent is simultaneously fitted to the $W(\rightarrow \mu\nu)$+jets, $\zmm$+jets and $W(\rightarrow e\nu)$+jets control samples, taking into account the cross contamination between the different background sources in the control samples.

\subsection{Normalization factors}
\label{subsec:FreeParameters}

The likelihood includes unconstrained normalization factors that can adjust the relative contributions of the main processes ($\vec{\mu}$ in Equation~\ref{eq:PdfFit_copy}). In particular, three normalization factors are considered, determined from the $\wmn$+jets, $\zmm$+jets and $\wen$+jets control regions, denoted as \texttt{mu\_Wmn}, \texttt{mu\_Zmm} and \texttt{mu\_Ele}. The \texttt{mu\_Wmn} factor is used to constrain the normalization of the $\wmn$+jets and the $\znn$ processes. The \texttt{mu\_Zmm} factor sets the normalization of the $\zmm$+jets process. Finally, the \texttt{mu\_Ele} factor determines the normalization of the $\wen$+jets, $\wtn$, $\zee$+jets and $\ztt$+jets processes. Table~\ref{tab:scaleFactorsSummary} shows a summary of the normalization factors used to normalize each background process.

\begin{table}[!ht]
\begin{center}
\begin{small}
\begin{tabular}{lc}
\hline\hline
{\bf Process} & {\bf Normalization factor} \\ \hline
$\wen$+jets & \texttt{mu\_Ele} \\
$\wmn$+jets & \texttt{mu\_Wmn} \\
$\wtn$ & \texttt{mu\_Ele} \\
$\znn$ & \texttt{mu\_Wmn} \\
$\zee$+jets & \texttt{mu\_Ele} \\
$\zmm$+jets & \texttt{mu\_Zmm} \\
$\ztt$+jets & \texttt{mu\_Ele} \\
Top & -- \\
Dibosons & -- \\
Multijet & -- \\
\hline\hline
\end{tabular}
\end{small}
\end{center}
\caption{Summary of the normalization factors used to normalize the different background processes in the signal region.}
\label{tab:scaleFactorsSummary}
\end{table}

The choice of the normalization factor \texttt{mu\_Wmn} instead of \texttt{mu\_Zmm} to estimate the $\znn$ contribution is motivated by the statistical power of the $\wmn$ control sample in data, which is about seven times larger than that of the $\zmm$+jets control sample. Appendix~\ref{app:ClosureTestZnunu} provides Monte Carlo studies, both at particle and at detector level, that confirm the validity of the use of \texttt{mu\_Wmn} to normalize the $\znn$ process.

\subsection{Systematic uncertainties}
\label{subsec:MonojetSystematicUncertainties}

The likelihood from Eq.~\ref{eq:PdfFit_copy} also includes nuisance parameters, $\vec{\alpha}$, which parametrize, for each systematic uncertainty, the variation of the process contributions in fractions of a standard deviation with respect to their nominal prediction. These nuisance parameters are normally distributed, with mean 0, indicating that they are centered at the value corresponding to the nominal prediction, and standard deviation 1, in units of potential systematic variations. In the global fit, each nuisance parameter is initialized at these values, and the fit is then allowed to profile the different systematic uncertainties in order to find the configuration that maximizes the likelihood.
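As a schematic illustration of this construction, the following sketch builds the likelihood of Eq.~\ref{eq:PdfFit_copy} for a single control/signal region pair, with one free normalization factor and one Gaussian-constrained nuisance parameter (all names and numbers are hypothetical; the real fit includes all regions, processes and nuisance parameters):
\begin{verbatim}
import numpy as np
from scipy.stats import poisson, norm
from scipy.optimize import minimize

# Hypothetical inputs: nominal MC yields and a relative systematic effect.
nominal = {"CR": 900.0, "SR": 450.0}   # e.g. W(->mu nu) in CR, Z(->nu nu) in SR
rel_syst = {"CR": 0.07, "SR": 0.07}    # e.g. a 7% jet-energy-scale variation
observed = {"CR": 940, "SR": 470}

def nll(params):
    """Negative log-likelihood: Poisson terms times a unit-Gaussian
    constraint P_p(alpha_p) on the nuisance parameter."""
    mu, alpha = params                 # normalization factor, nuisance
    val = -norm.logpdf(alpha)
    for region in ("CR", "SR"):
        nu = mu * nominal[region] * (1.0 + alpha * rel_syst[region])
        val -= poisson.logpmf(observed[region], max(nu, 1e-9))
    return val

fit = minimize(nll, x0=[1.0, 0.0], method="Nelder-Mead")
mu_hat, alpha_hat = fit.x
print(f"mu = {mu_hat:.3f}, alpha = {alpha_hat:.2f} sigma")
\end{verbatim}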
Values of the nuisance parameters largely differing from 0 would indicate a large mismodeling, and that the fit is trying to accommodate the data with an anomalously large variation of the systematic uncertainties.

The systematic uncertainties considered in this analysis are summarized and related to their corresponding nuisance parameters in Table~\ref{tab:monosyslist}. A description of each systematic source is detailed below. For each uncertainty, the impact on the total background yield before the fit\footnote{That is, the effect on the background contribution corresponding to $\alpha_\text{syst} = \pm 1$.} is also discussed. These values are included as inputs in the global analysis fit. The systematic uncertainties are assumed to be correlated across the different background processes, and across control and signal regions, unless stated otherwise.

%--- \ref{tab:monosyslist}
\input{MonojetAnalysis/Tables/SystematicUncertaintiesDefinitionTable.tex}

\paragraph{Jet Energy Scale:} The uncertainty on the absolute jet energy scale (JES) is one of the main uncertainties. In the analysis it is parametrized by a single nuisance parameter, although it is the result of combining 18 systematic sources\footnote{The performance of the fit has also been checked when 18 nuisance parameters are considered, and has led to identical results.} from the different steps of the jet energy scale calibration. The effect of this uncertainty on the total MC prediction, before it is profiled in the global fit, is approximately 7\% in both the signal and control regions.

\paragraph{Jet Energy Resolution:} The effect of the jet energy resolution (JER) on the total background yield is measured in each of the signal regions, and found to be less than 1\%.

\paragraph{Jet Vertex Fraction:} The effect of a possible mismodeling of the JVF distribution is investigated by studying the impact on the background yields when the requirement is varied from 0.5 to 0.47 and 0.53. An effect below 1\% on the total background yield is found in all the signal regions.

\paragraph{Pile up:} The MC generated events need to be re-weighted in order to correctly describe the pileup conditions in the collisions. These weights are extracted from the comparison of the number of interactions per bunch crossing distribution in both data and MC simulation. Variations of these weights lead to negligible effects on the total background prediction.

\paragraph{$\met$ cell-out:} The resolution and scale uncertainties of the CellOut term of the $\met$ are also considered, and each of them is parametrized by a single nuisance parameter. The effect of these uncertainties on the total background in the different signal regions is less than 1\%.

\paragraph{Leptons:} The uncertainty on the electron identification varies the total background yield in the signal regions by less than 1\%, and is parametrized by a single nuisance parameter. The effect of the electron energy resolution, also parametrized by one nuisance parameter, leads to a negligible effect. The uncertainty on the electron energy scale accounts for: the variations coming from the $Z$~scale uncertainty; the modeling of the interaction of the electrons with the calorimeter; the presampler scale uncertainty; and the scale uncertainty for low-$\pt$ electrons. This is included in the fit via four separate nuisance parameters, which altogether introduce less than a 0.5\% variation in the total background yield.
The uncertainty on the muon identification translates into a 1\% variation of the total background in all the signal regions, and is parametrized by one nuisance parameter. The uncertainty on the muon energy resolution accounts for the resolution effects coming from the Inner Detector and the Muon Spectrometer. This uncertainty, parametrized with two different nuisance parameters, has a negligible effect on the total background contribution. Finally, the uncertainty on the muon energy scale affects the total background prediction in the signal regions by approximately 0.5\%, and is introduced in the global fit via one nuisance parameter.

\paragraph{Theoretical uncertainties on the $W/Z$+jets processes:} Uncertainties on the factorization, renormalization and parton-shower matching scales and on the PDFs of the $W/Z$+jets processes are each parametrized by a different nuisance parameter. Combined, they produce a variation between 20\% and 25\% in the total background yields in the different signal selections. An additional nuisance parameter is devoted to parametrizing the uncertainty of the re-weighting of the boson $\pt$, and affects the total background prediction by about 2\%. Finally, systematic uncertainties accounting for the validity of the use of the $\wmn$+jets process to extract the normalization for $\znn$, and for higher-order electroweak corrections affecting the $W$+jets and the $Z$+jets processes differently, are also considered~\cite{Denner:2009gj,Denner:2011vu,Denner:2012ts}. These two effects are parametrized together by a single nuisance parameter, and modify the total background yield between 2\% and 4\% in the different signal regions. More details on the estimation of this uncertainty can be found in Appendix~\ref{app:ClosureTestZnunu}.

\paragraph{Theoretical uncertainties on the top-quark-related processes:} Uncertainties on the absolute $\ttbar$ and single top cross sections; uncertainties on the MC generators and the modeling of parton showers employed; variations in the set of parameters that govern the parton showers and the amount of initial- and final-state soft gluon radiation; and uncertainties due to the choice of renormalization and factorization scales and PDFs are considered. The effect of these systematic uncertainties on the total background prediction varies between 1.0\% and 1.6\% for the different signal selections, and is represented by 13 different nuisance parameters in the fit.

\paragraph{Theoretical uncertainties on the diboson processes:} These uncertainties are estimated in a similar way as for the top-quark-related processes, and translate into an effect on the total background between 0.7\% and 2.3\%. In the fit, these uncertainties are parametrized by 4 nuisance parameters.

\paragraph{Multijet uncertainty:} The systematic uncertainty on the multijet background is computed by comparing the predictions when using different response functions. A 100\% variation in the multijet prediction is observed, leading to a 1\% uncertainty on the total background for the M1 selection.

\paragraph{Luminosity:} The uncertainty on the determination of the total integrated luminosity introduces a 2.8\% variation in the total background yield. This systematic uncertainty is parametrized with a single nuisance parameter in the fit.

\paragraph{Statistical uncertainty in the MC simulations:} In order to avoid fluctuations in the global fit, the statistical uncertainties on the Monte Carlo simulations are only considered if they are larger than 5\%.
This limitation has a negligible impact on the results, but contributes to a more robust performance of the fit.

\paragraph{Trigger efficiency:} All the systematic effects related to the trigger efficiency have a negligible impact on the analysis.

\section{Estimation of the background contributions}
\label{sec:ControlRegions}

The data and background predictions for the M1 to M6 selections in the $\wen$+jets, $\wmn$+jets and $\zmm$+jets control regions are presented in Tables~\ref{tab:ControlRegion_CRele}, \ref{tab:ControlRegion_CRwmn} and \ref{tab:ControlRegion_CRzmm}, respectively. For each of the kinematic selections, the MC-predicted yields before and after the global fit are shown.

The normalization factors for the background processes in the different selections are extracted from these tables, and are shown in Table~\ref{tab:scaleFactors}. The uncertainties on the normalization factors include both the statistical and systematic components. The fitted values of the nuisance parameters, as well as the correlations among the normalization factors and the nuisance parameters in the global fit, are presented in Appendix~\ref{app:FitResults} for all the analysis selections.

The normalizations are compatible with 1 within uncertainties in all the selections, except in M5 and M6. In these regions, the boson $\pt$ distributions cannot be effectively corrected, since a single weight is used for those events with boson $\pt>\unit[400]{GeV}$. Therefore, the boson $\pt$ re-weighting does not modify the shape of this distribution, but only introduces a variation in the normalization of the $W/Z$+jets samples, which needs to be compensated by the normalization factors from the fit.

\input{MonojetAnalysis/Tables/YieldsTable_Stop_CRele.tex} %--- \ref{tab:ControlRegion_CRele}
\input{MonojetAnalysis/Tables/YieldsTable_Stop_CRwmn.tex} %--- \ref{tab:ControlRegion_CRwmn}
\input{MonojetAnalysis/Tables/YieldsTable_Stop_CRzmm.tex} %--- \ref{tab:ControlRegion_CRzmm}

\begin{table}[tb]
\begin{center}
\begin{small}
\renewcommand{\baselinestretch}{1.2}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}cccc}\hline \hline
\multicolumn{4}{c}{\bf Normalization factors} \\ \hline
{\bf Selection} & \texttt{mu\_Ele} & \texttt{mu\_Wmn} & \texttt{mu\_Zmm} \\ \hline
M1 & $0.99 \pm 0.21$ & $0.94 \pm 0.20$ & $0.98 \pm 0.20$ \\
M2 & $0.98 \pm 0.22$ & $0.94 \pm 0.19$ & $1.05 \pm 0.21$ \\
M3 & $1.01 \pm 0.24$ & $0.91 \pm 0.19$ & $0.99 \pm 0.22$ \\
M4 & $0.98 \pm 0.22$ & $0.96 \pm 0.21$ & $1.08 \pm 0.24$ \\
M5 & $0.95 \pm 0.25$ & $0.81 \pm 0.19$ & $0.69 \pm 0.20$ \\
M6 & $0.79 \pm 0.24$ & $0.76 \pm 0.18$ & $0.68 \pm 0.23$ \\
\hline
\end{tabular*}
\end{small}
\end{center}
\renewcommand{\baselinestretch}{1}
\caption[Results on the normalization factors for the different monojet selections.]{Results on the normalization factors (including statistical and systematic uncertainties) for the different monojet selections.}
\label{tab:scaleFactors}
\end{table}

The main kinematic distributions of the reconstructed leptons for the selection M1 are shown in Figures~\ref{fig:Plot_M1_CRwmn_Leptonkinematics}, \ref{fig:Plot_M1_CRele_Leptonkinematics} and \ref{fig:Plot_M1_CRzmm_Leptonkinematics}. Figures~\ref{fig:Plot_M1_CRwmn_Jetkinematics}, \ref{fig:Plot_M1_CRele_Jetkinematics} and \ref{fig:Plot_M1_CRzmm_Jetkinematics} show the measured jet and $\met$ distributions for the $\wmn$+jets, $\wen$+jets and $\zmm$+jets control regions, respectively.
\begin{figure}[!ht]
\begin{center}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_m_pt_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_m_eta_fitted.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_m_phi_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_m_MT_fitted.eps}
}
\end{center}
\caption[Kinematic distributions of the identified muons in the $\wmn$+jets control region for the selection cuts of region M1, after the normalization factors extracted from the fit have been applied.]{The measured kinematic distributions of the identified muons in the $\wmn$+jets control region for the selection cuts of region M1 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions.}
\label{fig:Plot_M1_CRwmn_Leptonkinematics}
\end{figure}

\begin{figure}[!ht]
\begin{center}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_e_pt_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_e_eta_fitted.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_e_phi_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_e_MT_fitted.eps}
}
\end{center}
\caption[Kinematic distributions of the identified electrons in the $\wen$+jets control region for the selection cuts of region M1, after the normalization factors extracted from the fit have been applied.]{The measured kinematic distributions of the identified electrons in the $\wen$+jets control region for the selection cuts of region M1 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions.}
\label{fig:Plot_M1_CRele_Leptonkinematics}
\end{figure}

\begin{figure}[!ht]
\begin{center}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_m_pt_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_m2_pt_fitted.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_m_Zpt_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_m_M_fitted.eps}
}
\end{center}
\caption[Kinematic distributions of the identified muons in the $Z/\gamma^{\ast}(\rightarrow \mu^{+}\mu^{-})$+jets control region for the selection cuts of region M1, after the normalization factors extracted from the fit have been applied.]{The measured kinematic distributions of the identified muons in the $\zmm$+jets control region for the selection cuts of region M1 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions.}
\label{fig:Plot_M1_CRzmm_Leptonkinematics}
\end{figure}

\begin{figure}[!ht]
\begin{center}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_met_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_met_phi_fitted.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_pt1_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_eta1_fitted.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_pt2_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRwmn_metpt1_fitted.eps}
}
\end{center}
\caption[Kinematic distributions of the reconstructed jets and $\met$ in the $\wmn$ control region for the selection cuts of region M1, after the normalization factors extracted from the fit have been applied.]{The measured kinematic distributions of the reconstructed jets and $\met$ in the $\wmn$ control region for the selection cuts of region M1 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions.}
\label{fig:Plot_M1_CRwmn_Jetkinematics}
\end{figure}

\begin{figure}[!ht]
\begin{center}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_met_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_met_phi_fitted.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_pt1_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_eta1_fitted.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_pt2_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRele_metpt1_fitted.eps}
}
\end{center}
\caption[Kinematic distributions of the reconstructed jets and $\met$ in the $\wen$+jets control region for the selection cuts of region M1, after the normalization factors extracted from the fit have been applied.]{The measured kinematic distributions of the reconstructed jets and $\met$ in the $\wen$+jets control region for the selection cuts of region M1 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions.}
\label{fig:Plot_M1_CRele_Jetkinematics}
\end{figure}

\begin{figure}[!ht]
\begin{center}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_met_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_met_phi_fitted.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_pt1_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_eta1_fitted.eps}
}
\mbox{
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_pt2_fitted.eps}
\includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_CRzmm_metpt1_fitted.eps}
}
\end{center}
\caption[Kinematic distributions of the reconstructed jets and $\met$ in the $\zmm$ control region for the selection cuts of region M1, after the normalization factors extracted from the fit have been applied.]{The measured kinematic distributions of the reconstructed jets and $\met$ in the $\zmm$ control region for the selection cuts of region M1 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions.}
\label{fig:Plot_M1_CRzmm_Jetkinematics}
\end{figure}

All the distributions show a reasonable agreement between data and MC in the control regions, thus pointing to a good modeling of the main SM background processes.

\clearpage

\section{Results}
\label{sec:ResultsSR}

The agreement between the data and the MC simulations for the different distributions in the control regions of each selection, shown in the previous section, ensures the good modeling and control of the prediction of the main electroweak background processes in the signal regions. As already mentioned, the global fit of the likelihood to the data in the different control regions will translate into a reduction of the systematic effects. Figures~\ref{fig:SystematicUncertaintiesSR} and \ref{fig:SystematicUncertaintiesSR2} summarize the systematic uncertainties for the signal regions M1 to M6 after the global fit.

Absolute jet and $\met$ energy scale and resolution systematic effects translate into an uncertainty on the total background that varies between 1.1\% and 1.4\% for M1-M4 and between 2.1\% and 2.4\% for the M5 and M6 selections. Uncertainties related to jet quality requirements, the pileup description and corrections to the jet $\pt$ and $\met$ introduce a $0.2\%$ to $0.4\%$ uncertainty on the background predictions. Uncertainties on the simulated lepton identification and reconstruction efficiencies, energy/momentum scale and resolution translate into a $0.9\%$ to $1.2\%$ uncertainty for the different signal regions.

\begin{figure}[!ht]
\begin{center}
\mbox{
\includegraphics[width=0.995\textwidth]{MonojetAnalysis/Figures/totalSystematicPlot_SR_Stop_A6.eps}
}
\mbox{
\includegraphics[width=0.995\textwidth]{MonojetAnalysis/Figures/totalSystematicPlot_SR_Stop_A3.eps}
}
\mbox{
\includegraphics[width=0.995\textwidth]{MonojetAnalysis/Figures/totalSystematicPlot_SR_Stop_A4.eps}
}
\end{center}
\caption[Breakdown of the sources of systematic uncertainties on background estimates in the M1 to M3 signal regions.]{Breakdown of the sources of systematic uncertainties on background estimates in the M1 to M3 signal regions.
The first bin (in red) refers to the percentage of total systematic uncertainty with respect to the total background prediction. The individual uncertainties can be correlated, and therefore they do not necessarily add up quadratically to the total background uncertainty.}
\label{fig:SystematicUncertaintiesSR}
\end{figure}

\begin{figure}[!ht]
\begin{center}
\mbox{
\includegraphics[width=0.995\textwidth]{MonojetAnalysis/Figures/totalSystematicPlot_SR_Stop_A8.eps}
}
\mbox{
\includegraphics[width=0.995\textwidth]{MonojetAnalysis/Figures/totalSystematicPlot_SR_Stop_A9.eps}
}
\mbox{
\includegraphics[width=0.995\textwidth]{MonojetAnalysis/Figures/totalSystematicPlot_SR_Stop_A10.eps}
}
\end{center}
\caption[Breakdown of the sources of systematic uncertainties on background estimates in the M4 to M6 signal regions.]{Breakdown of the sources of systematic uncertainties on background estimates in the M4 to M6 signal regions. The first bin (in red) refers to the percentage of total systematic uncertainty with respect to the total background prediction. The individual uncertainties can be correlated, and therefore they do not necessarily add up quadratically to the total background uncertainty.}
\label{fig:SystematicUncertaintiesSR2}
\end{figure}

Variations on the renormalization/factorization and parton-shower matching scales and PDFs in the \sherpa{} $W/Z$+jets background samples translate into a $0.4\%$ to $1\%$ uncertainty in the total background, while the effect of the boson $\pt$ re-weighting procedure for the simulated $W$ and $Z$ $\pt$ distributions introduces less than a $0.2\%$ effect on the total background estimates. The model uncertainties related to potential differences between $W$+jets and $Z$+jets final states, affecting the normalization of the main irreducible background, $\znn$, are found to vary between about $2\%$ for M1 and $3\%$ for M2 to M6.

Theoretical uncertainties on the predicted background yields for top-quark-related processes are found to introduce an uncertainty on the total background between $1.0\%$ and $1.6\%$. Uncertainties on the diboson background affect the total background by between $0.7\%$ and $1.3\%$ for M1-M4, $1.7\%$ for M5 and $2.3\%$ for the M6 selection. The uncertainty on the multijet estimation leads to a $1\%$ uncertainty on the total background in M1, while it is negligible for the other selections.

Finally, the statistical uncertainties in the control regions in both data and MC are included in the analysis via the uncertainties quoted in the \texttt{mu\_Ele}, \texttt{mu\_Wmn} and \texttt{mu\_Zmm} normalization factors. They lead to an additional uncertainty on the final background estimate that varies between $1.2\%$ and $1.4\%$ for the M1-M4 selections, but is of the order of $4\%$ for M5 and M6. The total uncertainty on the SM predictions varies between 2.9\% and 9.8\% in the different signal regions, and is summarized in Table~\ref{tab:SystematicSummary}.

\begin{table}[tb]
\begin{center}
\begin{small}
\begin{tabular}{cc}
\hline
\textbf{Selection} & \textbf{Total systematic uncertainty} \\ \hline
M1 & 2.9\% \\
M2 & 3.2\% \\
M3 & 4.6\% \\
M4 & 4.6\% \\
M5 & 7.4\% \\
M6 & 9.8\% \\ \hline
\end{tabular}
\end{small}
\end{center}
\renewcommand{\baselinestretch}{1}
\caption{Summary of the total systematic uncertainties on the SM predictions for the selections M1 to M6.}
\label{tab:SystematicSummary}
\end{table}

The background composition of the control and signal regions for each selection, after the global fit, is shown in Figure~\ref{fig:RegionsComposition}.
The background composition of the control and signal regions for each selection, after the global fit, is shown in Figure~\ref{fig:RegionsComposition}. The first three bins for each selection refer to the $\wmn$+jets, $\wen$+jets and $\zmm$+jets control samples, respectively (Tables~\ref{tab:ControlRegion_CRele} to \ref{tab:ControlRegion_CRzmm}). The control samples are dominated by the background process from which they take their name. By construction, the agreement between the data and the MC simulation in the control samples is perfect, since these regions are used to constrain the backgrounds. The fourth bin refers to the signal region, where the irreducible $\znn$ process dominates, accounting for more than 50\% of the total background. The relative contribution of this process increases across the different selections as the leading jet $\pt$ and $\met$ requirements tighten. The second most important process in the signal regions is the $\wtn$+jets process, due to hadronically decaying $\tau$-leptons. Further contributions come from the $\wen$+jets and $\wmn$+jets processes, which pass the signal region requirements when the leptons are not reconstructed or are misreconstructed as jets.
\begin{figure}[!ht] \begin{center}
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/regionsComposition_Stop_M1.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/regionsComposition_Stop_M2.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/regionsComposition_Stop_M3.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/regionsComposition_Stop_M4.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/regionsComposition_Stop_M5.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/regionsComposition_Stop_M6.eps} }
\end{center}
\caption[Background composition of the different control and signal regions for each of the kinematic selections after fits have been performed.]{Background composition of the different control and signal regions for each of the kinematic selections after fits have been performed. The error bands in the ratios include the total statistical and systematic uncertainties on the total background expectation. From top to bottom, left to right: M1-M6 selections.} \label{fig:RegionsComposition} \end{figure}
Table~\ref{tab:SignalRegions} shows the data and the expected background predictions for the signal regions M1 to M6. Good agreement between the observed data and the predictions is found for selections M1 to M5. The M6 selection shows a slight excess of events with respect to the background estimation. The compatibility between the data and the simulation under the background-only hypothesis can be tested by computing the observed $p$-value, $p_b$, as described in detail in Chapter~\ref{chapter:StatisticalModel}. Table~\ref{tab:pValuesBackgroundOnly} shows the $p$-values for the different signal selections. The $p$-values for the regions M1 to M5 point to a good agreement between the data and the MC simulation, as previously discussed. In the signal region M6, the data and the MC simulation agree within $2\sigma$. This is studied in detail in Appendix~\ref{app:scaleFactorEvolution}, and is finally attributed to a statistical fluctuation in both the data and the MC events.
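For reference, and leaving the detailed construction of the test statistic to Chapter~\ref{chapter:StatisticalModel}, the observed $p$-value and the corresponding Gaussian significance can be written schematically as
\[
p_b = \int_{q_{\mathrm{obs}}}^{\infty} f(q \mid \text{background-only})\,\mathrm{d}q\,,
\qquad
Z = \Phi^{-1}(1-p_b)\,,
\]
where $f(q \mid \text{background-only})$ is the distribution of the test statistic $q$ under the background-only hypothesis and $\Phi^{-1}$ is the inverse of the standard Gaussian cumulative distribution. The value $p_b = 0.04$ obtained for M6 thus corresponds to $Z \approx 1.75$, consistent with the $2\sigma$-level agreement quoted above.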
\input{MonojetAnalysis/Tables/YieldsTable_Stop_SR.tex} %--- \ref{tab:SignalRegions}
\begin{table}[tb] \begin{center}
\begin{tabular}{ccc}
\hline\hline
{\bf Signal channel} & {\bf Method} & $\mathbf{p_b}$ \\
\hline
\multirow{2}{*}{M1} & asymp & $0.51$ \\
 & toy & $0.51$ \\
\multirow{2}{*}{M2} & asymp & $0.52$ \\
 & toy & $0.52$ \\
\multirow{2}{*}{M3} & asymp & $0.51$ \\
 & toy & $0.51$ \\
\multirow{2}{*}{M4} & asymp & $0.48$ \\
 & toy & $0.48$ \\
\multirow{2}{*}{M5} & asymp & $0.21$ \\
 & toy & $0.21$ \\
\multirow{2}{*}{M6} & asymp & $0.04$ \\
 & toy & $0.04$ \\
\hline\hline
\end{tabular}
\end{center}
\caption{$p$-values under the background-only hypothesis for the regions M1-M6, derived from pseudo-experiments (toy) and from the asymptotic approximation (asymp).} \label{tab:pValuesBackgroundOnly} \end{table}
\clearpage
Figures~\ref{fig:Plot_M1_SR_met} and~\ref{fig:Plot_M1_SR_pt1} show the $\met$ and the leading jet $\pt$ distributions in the signal regions M1 to M6, respectively. Values of the $\met$ and leading jet $\pt$ up to $\unit[1.5]{TeV}$ are explored\footnote{No events are found with larger values of $\met$ or leading jet $\pt$.}. Figure~\ref{fig:Plot_M1_SR_eta1} shows the pseudorapidity distribution of the leading jet in all the signal regions, and the distributions of the ratio between the $\met$ and the leading jet $\pt$ are shown in Figure~\ref{fig:Plot_M1_SR_metpt1}. For illustration purposes, two different SUSY scenarios are included, for stop pair production in the $\stoptocharm$ decay channel with a stop mass of $\unit[200]{GeV}$ and neutralino masses of $\unit[125]{GeV}$ and $\unit[195]{GeV}$. The Standard Model predictions in all these distributions agree with the data, both in normalization and shape. The predictions in the signal region M6, despite the global $2\sigma$-level shift in the normalization, also show good agreement in the shape, which points to a statistical fluctuation, as mentioned above. Other kinematic distributions of the reconstructed jets in the selections M1 to M6 are collected in Appendix~\ref{app:SRkimenaticDistr}.
\begin{figure}[!ht] \begin{center}
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_SR_met_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A3_SR_met_fitted.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A4_SR_met_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A8_SR_met_fitted.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A9_SR_met_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A10_SR_met_fitted.eps} }
\end{center}
\caption[Distributions of the reconstructed $\met$ in the signal regions for the selection cuts of regions M1 to M6, after the normalization factors extracted from the fit have been applied.]{The measured distributions of the reconstructed $\met$ in the signal regions for the selection cuts of regions M1 to M6 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions.
For illustration purposes, the distributions of two different SUSY scenarios for stop pair production are included.} \label{fig:Plot_M1_SR_met} \end{figure}
\begin{figure}[!ht] \begin{center}
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_SR_pt1_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A3_SR_pt1_fitted.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A4_SR_pt1_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A8_SR_pt1_fitted.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A9_SR_pt1_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A10_SR_pt1_fitted.eps} }
\end{center}
\caption[Distributions of the reconstructed $\pt$ of the leading jet in the signal regions for the selection cuts of regions M1 to M6, after the normalization factors extracted from the fit have been applied.]{The measured distributions of the reconstructed $\pt$ of the leading jet in the signal regions for the selection cuts of regions M1 to M6 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions. For illustration purposes, the distributions of two different SUSY scenarios for stop pair production are included.} \label{fig:Plot_M1_SR_pt1} \end{figure}
\begin{figure}[!ht] \begin{center}
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_SR_eta1_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A3_SR_eta1_fitted.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A4_SR_eta1_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A8_SR_eta1_fitted.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A9_SR_eta1_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A10_SR_eta1_fitted.eps} }
\end{center}
\caption[Distributions of the reconstructed $\eta$ of the leading jet in the signal regions for the selection cuts of regions M1 to M6, after the normalization factors extracted from the fit have been applied.]{The measured distributions of the reconstructed $\eta$ of the leading jet in the signal regions for the selection cuts of regions M1 to M6 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions.
For illustration purposes, the distributions of two different SUSY scenarios for stop pair production are included.} \label{fig:Plot_M1_SR_eta1} \end{figure}
\begin{figure}[!ht] \begin{center}
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A6_SR_metpt1_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A3_SR_metpt1_fitted.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A4_SR_metpt1_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A8_SR_metpt1_fitted.eps} }
\mbox{ \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A9_SR_metpt1_fitted.eps} \includegraphics[width=0.495\textwidth]{MonojetAnalysis/Figures/plot_Stop_A10_SR_metpt1_fitted.eps} }
\end{center}
\caption[Distributions of the ratio between the reconstructed $\met$ and the leading jet $\pt$ in the signal regions for the selection cuts of regions M1 to M6, after the normalization factors extracted from the fit have been applied.]{The measured distributions of the ratio between the reconstructed $\met$ and the leading jet $\pt$ in the signal regions for the selection cuts of regions M1 to M6 compared to the background predictions. The latter include the global normalization factors extracted from the fit. The error bands in the ratios include the statistical and experimental uncertainties on the background predictions. For illustration purposes, the distributions of two different SUSY scenarios for stop pair production are included.} \label{fig:Plot_M1_SR_metpt1} \end{figure}